V8 CPU and Memory Profiling
You can also open and explore profiles and snapshots captured in Google Chrome DevTools for your client-side code.
Why is profiling important
A carefully designed algorithm can make your code faster and manage your memory consumption better, even more efficiently than the virtual machine can. Profiling is the way to look inside the execution of your code and prove your assumptions about your design decisions.
Preparing for V8 CPU and memory heap profiling
V8 CPU profiling is provided through the PyCharm built-in functionality, so you do not need to install any additional software.
V8 memory heap profiling is provided through the globally installed v8-profiler package.
To install v8-profiler globally
Open the built-in PyCharm Terminal (Alt+F12) and type npm install -g v8-profiler at the command prompt.
To identify the processes that consume most of your CPU, you can use two methods: sampling and tracing.
- When the sampling method is applied, you periodically record stack traces of your application. The periods between records are measured in conventional units referred to as ticks.
This method does not guarantee very good accuracy or precision for the following reason: snapshots are taken at random moments, so any function may happen to be recorded in a snapshot. However, sampling can give us a rough picture of where most of the time is spent.
- When the tracing method is used, we actively record tracing information ourselves, directly in the code. This gives exact measurements of how much time each method took and also lets you count how many times the traced method was called. The disadvantage of this method is that it introduces greater result distortion than sampling.
Result Distortion. Both sampling and tracing introduce delays into execution and therefore influence the profiling results. With sampling, delays can be estimated as a fixed amount of time for each sampling event and do not introduce greater distortion than the sampling method itself (that is, the delay is much shorter than the sampling interval). With tracing, the profiling delay depends on the code and on the places where tracing measurements are made. For instance, if a traced method is called many times inside other traced methods, all the inner delays accumulate for the outer method. In that case, it may be difficult to separate the execution time from the tracing distortion.
Usually the sampling and tracing methods are used together. We start with sampling to get an idea of which parts of the code take the most time, and then instrument the code with tracing calls to zero in on the issues.
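As a hedged illustration of the tracing approach, the sketch below wraps a function so that each call records its own duration and increments a call counter. The names traceStats and traced are invented for this example; they are not part of any PyCharm or V8 API.

```javascript
// Minimal manual tracing: wrap a function so every call records its
// duration and increments a call counter. All names here are illustrative.
const traceStats = {};

function traced(name, fn) {
  return function (...args) {
    const start = process.hrtime.bigint();
    try {
      return fn.apply(this, args);
    } finally {
      // The bookkeeping below is itself a source of tracing distortion.
      const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
      const entry = traceStats[name] || (traceStats[name] = { calls: 0, totalMs: 0 });
      entry.calls += 1;
      entry.totalMs += elapsedMs;
    }
  };
}

const busy = traced('busy', function (n) {
  let sum = 0;
  for (let i = 0; i < n; i++) sum += Math.sqrt(i);
  return sum;
});

busy(100000);
busy(100000);
console.log(traceStats.busy); // e.g. { calls: 2, totalMs: ... }
```

Note how the timing code inside the wrapper runs on every call: nested traced functions would accumulate these delays in their outer callers, which is exactly the distortion effect described above.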
Measurements are made not only for the work of your code, but also for the activities performed by the engine itself, such as compilation, calls of system libraries, optimization, and garbage collection. The following time metrics are reported both for the execution of functions and for these engine activities:
Total: the number of ticks (the time) during which a function was executed or an activity was performed.
Total%: the ratio of a function/activity execution time to the entire time when measurements were made.
Self: the pure execution time of a function/activity itself, without the time spent on executing functions called by it.
Self%: the ratio of the pure execution time of a function/activity to the entire time when the measurements were made.
Of Parent: the ratio of the pure execution time of a function to the execution time of the function that called it (Parent).
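The relationship between these metrics can be checked with a small hypothetical example; all the tick counts below are invented purely for illustration.

```javascript
// Hypothetical profile: a session of 1000 ticks in which parent()
// ran for 400 ticks total and spent 150 of them in its only callee child().
const sessionTicks = 1000;
const parentTotal = 400;
const childTotal = 150;

const parentSelf = parentTotal - childTotal;                 // Self = 250
const parentTotalPct = (parentTotal / sessionTicks) * 100;   // Total% = 40
const parentSelfPct = (parentSelf / sessionTicks) * 100;     // Self% = 25
const childOfParent = (childTotal / parentTotal) * 100;      // Of Parent = 37.5

console.log(parentSelf, parentTotalPct, parentSelfPct, childOfParent);
```

In other words, Self is always Total minus the time spent in callees, and Of Parent relates a callee's time to its caller's rather than to the whole session.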
Configuring CPU profiling
To invoke V8 CPU profiling on application start, you need to specify additional settings in the Node.js run configuration that will be used to launch the application.
On the main menu, open the run/debug configuration settings. To create a new configuration, click the Add button on the toolbar and select Node.js from the pop-up list.
From the list, choose the Node.js run configuration to activate CPU Profiling in or create a new configuration as described in Running and Debugging Node.js.
- Switch to the V8 Profiling pane and specify the following:
Select the Record CPU profiling info checkbox.
In the Log folder field, specify the folder to store recorded logs in. Profiling data is stored in V8 log files.
Collecting CPU profiling information
Select the run configuration from the list on the main toolbar, and then choose Run on the main menu or click the Run button on the toolbar.
When the scenario that you need to profile is executed, stop the process by clicking the Stop toolbar button.
The V8 log file is processed by V8 scripts to calculate averaged call traces, and PyCharm opens the V8 Profiling Tool Window.
Analyzing CPU profiling information
Analyzing the profiling logs is available only after the process stops, because stopping and restarting profiling during the execution of an application is currently not supported.
The collected profiling data is displayed in the V8 Profiling Tool Window which PyCharm opens automatically when you stop your application. If the window is already opened and shows the profiling data for another session, a new tab is added. Tabs that were opened automatically are named after the run configurations that control execution of the applications and collecting the profiling data.
If you want to open and analyze some previously saved profiling data, choose the corresponding command on the main menu and select the relevant V8 log file isolate-<session number>. PyCharm creates a separate tab with the name of the log file.
Exploring call trees
Based on the collected profiling data, PyCharm builds three call trees and displays each of them in a separate pane. Having several call trees makes it possible to analyze the application execution from different points of view: on the one hand, which calls were time-consuming ("heavy"), and on the other hand, "who called whom".
The Top Calls pane shows a list of performed activities sorted in descending order by the Self metrics. For each activity, PyCharm displays its Total, Total%, and Self% metrics. For each function call, PyCharm displays the name of the file, the line, and the column where the function is defined. The diagram in the Overview pane shows the distribution of self time for calls with the Self% metrics above 1%.
- The Bottom-up pane also shows the performed activities sorted in descending order by the Self metrics. Unlike the Top Calls pane, the Bottom-up pane shows only the activities with the Total% metrics above 2% and the functions that called them. This is helpful if you encounter a heavy function and want to find out where it was called from.
For each activity PyCharm displays its execution time in ticks and the Of Parent metrics. For each function call, PyCharm displays the name of the file, the line, and the column where the function is defined.
- The Top-down pane shows the entire call hierarchy with the functions that are execution entry points at the top. For each activity PyCharm displays its Total, Total%, Self, and Self% metrics. For each function call, PyCharm displays the name of the file, the line, and the column where the function is defined. Some of the functions may have been optimized by V8, see Optimizing for V8 for details.
The functions that have been optimized are marked with an asterisk (*) before the function name.
The functions that possibly require optimization but have not been optimized yet are marked with a tilde (~) before the function name. Though optimization may be delayed by the engine or skipped if the code is short-running, a tilde (~) points at a place where the code can be rewritten to achieve better performance.
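As a hedged example of the kind of rewrite a tilde may suggest: in older (Crankshaft-era) versions of V8, a function that leaked its arguments object was typically declined for optimization, while an equivalent rest-parameter version remained optimizable. The function names below are invented for illustration, and the exact optimization behavior depends on the V8 version.

```javascript
// Leaking the `arguments` object historically prevented older V8 versions
// from optimizing a function; such a function could show up with a tilde (~).
function slowSum() {
  const leaked = arguments; // storing `arguments` blocked some optimizations
  let sum = 0;
  for (let i = 0; i < leaked.length; i++) sum += leaked[i];
  return sum;
}

// The rest-parameter rewrite computes the same result and stays optimizable.
function fastSum(...nums) {
  let sum = 0;
  for (const n of nums) sum += n;
  return sum;
}

console.log(slowSum(1, 2, 3), fastSum(1, 2, 3)); // 6 6
```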
To navigate to the source code of a function, select the function in question in the tree and click the corresponding button on the toolbar, or choose Jump to Source on the context menu of the selection. The file with the source code of the selected function opens in the editor with the cursor positioned at the function.
When a tab for a profiling session is opened, by default the nodes with heaviest calls are expanded. While exploring the trees, you may like to fold some nodes or expand other ones. To restore the original tree presentation, click the Expand Heavy Traces button on the toolbar.
- To have PyCharm display only the calls that indeed cause performance problems, filter out light calls:
Click the Filter button on the toolbar.
Using the slider, specify the minimum Total% or Parent% value for a call to be displayed and click Done.
To expand or collapse all the nodes in the active pane, click the Expand All or Collapse All button on the toolbar respectively.
To expand or collapse a node, select it and choose Expand Node or Collapse Node on the context menu of the selection.
- Save and compare calls and lines:
To save a line with a function and its metrics, select the function and choose Copy on the context menu of the selection. This may be helpful if you want to compare the measurements for a function from two sessions, for example, after you make some improvements to the code.
To save only the function name and the name of the file where the function is defined, select the function and choose Copy Call on the context menu of the selection.
To compare an item with the contents of the Clipboard, select the item in question and choose Compare With Clipboard on the context menu of the selection. Compare the items in the Difference Viewer that opens.
To save the call tree in the current pane to a text file, click the corresponding button on the toolbar and specify the target file in the dialog box that opens.
Analyzing the Flame chart
Use the multicolor chart in the Flame Chart tab to find where the application paused and explore the calls that provoked these pauses. The chart consists of four areas:
The upper area shows a timeline with two sliders to limit the beginning and the end of a fragment to investigate.
The bottom area shows a stack of calls in the form of a multicolor chart. Each function is assigned a random color when it is called for the first time, and every subsequent call of this function within the current session is shown in the same color.
The middle area shows a summary of calls from the Garbage Collector, the engine, the external calls, and the execution itself. The colors reserved for the Garbage Collector, the engine, the external calls, and the execution are listed at the top of the area.
The right-hand pane lists the calls within a selected fragment; for each call, the list shows its duration, the name of the called function, and the file where the function is defined.
Selecting a Fragment in the Timeline
To explore the processes within a certain period of time, you need to select the fragment in question. You can do it in two ways:
- Use the sliders.
- Click the window between the two sliders and drag it to the required fragment.
In either case, the multicolor chart below shows the stack of calls within the selected fragment.
To enlarge the chart, click the selected fragment and then click the Zoom button on the toolbar. PyCharm opens a new tab and shows the selected fragment enlarged to fit the tab width so you can examine the fragment in more detail.
Synchronization in the Flame chart
The bottom and the right-hand areas are synchronized: as you drag the slider in the bottom area through the timeline, the focus in the right-hand pane moves to the call that was performed at each moment.
Moreover, if you click a call in the bottom area, the slider moves to it automatically and the focus in the right-hand pane switches to the corresponding function, scrolling the list if necessary. And vice versa, if you click an item in the list, PyCharm selects the corresponding call in the bottom area and drags the slider to it automatically.
PyCharm supports navigation from the right-hand area to the source code of called functions, to the other panes of the tool window, and to areas in the flame chart with specific metrics.
To jump to the source code of a called function, select the call in question and choose Jump to Source on the context menu of the selection.
- To switch to another pane, select the call in question, choose Navigate To on the context menu of the selection and then choose the destination:
- Navigate in Top Calls
- Navigate in Bottom-up
- Navigate in Top-down
- To have the flame chart zoomed at the fragments with specific metrics of a call, select the call in question, choose Navigate To on the context menu of the selection, and then choose the metrics:
- Navigate to Longest Time
- Navigate to Typical Time
- Navigate to Longest Self Time
- Navigate to Typical Self Time
You can also navigate to the stack trace of a call to view and analyze exceptions. To do that, select the call in question and choose Show As Stack Trace. PyCharm opens the stack trace in a separate tab. To return to the Flame Chart pane, click the V8 CPU Profiling tool window button in the bottom tool window.
V8 memory heap profiling helps you detect typical memory problems, such as:
- Using global objects to store collections of data with complicated freeing policies.
- Errors in the use of closures: closures keep references to outer objects.
- Too frequent memory allocation.
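A hedged sketch of the closure pitfall mentioned above: sibling closures created in the same scope share that scope, so a large value captured by one closure can stay reachable through another closure that never uses it. The names are illustrative, and the exact retention behavior depends on the engine version.

```javascript
// Two closures created in the same scope. Because `log` captures `payload`,
// the shared scope object may keep `payload` alive even if only `handler`
// (which never touches it) is retained. Behavior is engine-dependent.
function makeHandlers() {
  const payload = new Array(100000).fill('*'); // large temporary data
  const log = () => payload.length;            // references payload
  const handler = () => 42;                    // does not reference payload
  return { log, handler };
}

const kept = [];
kept.push(makeHandlers().handler); // payload may survive via the shared scope

console.log(kept[0]()); // 42
```

In a heap snapshot, such a leak shows up as a large object retained through a closure's scope; the Biggest Objects tab described below is a good place to spot it.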
Configuring memory profiling
To allow taking memory snapshots, you need to specify additional settings in the Node.js run configuration that will be used to launch the application.
On the main menu, open the run/debug configuration settings. Then choose the Node.js run configuration to activate memory profiling in, or create a new configuration as described in Running and Debugging Node.js.
Switch to the V8 Profiling pane and select the Allow taking heap snapshots checkbox.
To take memory snapshots of an application running in a Docker container, select the Auto configure checkbox and add v8-profiler to your package.json file, then switch to the V8 Profiling tab and specify the path.
Collecting memory profiling information
Select the run configuration from the list on the main toolbar, and then choose Run on the main menu or click the Run button on the toolbar.
At any time during the application execution, click the Take Heap Snapshot button on the toolbar of the Run tool window.
In the dialog box that opens, choose the folder to store the taken snapshot in and specify the name to save the snapshot file with. To start analyzing the snapshot immediately, select the Open snapshot checkbox.
Click OK to save the snapshot.
Analyzing memory profiling information
The collected profiling data is displayed in the V8 Heap Tool Window, which opens when you take a snapshot and choose to open it. If the window is already opened and shows the profiling data for another session, a new tab is added. Tabs that were opened automatically are named after the run configurations that control execution of the applications and collecting the profiling data.
If you want to open and analyze some previously saved memory profiling data, choose the corresponding command on the main menu and select the relevant .snapshot file. PyCharm creates a separate tab with the name of the selected file.
The tool window has three tabs that present the collected information from different points of view.
- The Containment tab shows the objects in your application grouped under several top-level entries: DOMWindow objects, Native browser objects, and GC Roots, which are roots the Garbage Collector actually uses. See Containment View for details.
For each object, the tab shows its distance from the GC root (that is, the shortest simple path of nodes between the object and the GC root), the shallow size of the object, and the retained size of the object. Besides the absolute values of the object's size, PyCharm shows the percentage of memory the object occupies.
The Biggest Objects tab shows the most memory-consuming objects sorted by their retained sizes. In this tab, you can spot memory leaks provoked by accumulating data in some global object.
The Summary tab shows the objects in your application grouped by their types. The tab shows the number of objects of each type, their size, and the percentage of memory that they occupy. This information may be a clue to the memory state.
Each tab has a Details pane, which shows the path to the currently selected object from GC roots and the list of object’s retainers, that is, the objects that keep links to the selected object. Every heap snapshot has many “back” references and loops, so there are always many retainers for each object.
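The difference between shallow and retained size can be illustrated with a tiny, hypothetical object graph: the wrapper object is small by itself (its shallow size), but because it holds the only reference to a large array, freeing it would also free the array, so the array's memory counts toward the wrapper's retained size. The names below are invented for illustration.

```javascript
// The object returned by build() is small itself (small shallow size),
// but it is the only path from a root to the large array, so the array's
// memory belongs to its retained size: dropping the wrapper would make
// the array collectable too.
function build() {
  const big = new Array(100000).fill(0); // large array
  return { big };                        // small wrapper, large retained size
}

let holder = build();
const retainedLength = holder.big.length; // 100000

holder = null; // both the wrapper and the array become collectable
```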
Navigating through a snapshot
To help differentiate objects and move from one to another without losing the context, mark objects with text labels. To set a label on an object, select the object of interest and click the corresponding button on the toolbar or choose Mark on the context menu of the selection. Then type the label to mark the object with in the dialog box that opens.
- To navigate to the function or variable that corresponds to an object, select the object of interest and click the corresponding button on the toolbar or choose Edit Source on the context menu of the selection. If the button and the menu option are disabled, PyCharm has not found a function or a variable that corresponds to the selected object.
If several functions or variables are found, they are shown in a pop-up suggestion list.
To jump from an object in the Biggest Objects or Summary tab, or from the Occurrences view, to the same object in the Containment tab, select the object in question and click the corresponding button on the toolbar or choose Navigate in Main Tree on the context menu of the selection. This helps you investigate the object from the containment point of view and concentrate on the links between objects.
- To search through a snapshot:
In the Containment tab, click the search button on the toolbar.
- In the V8 Heap Search Dialog that opens, specify the search pattern and the scope to search in. The available scopes are:
Everywhere: select this checkbox to search in all the scopes. When this checkbox is selected, all the other search types are disabled.
Link Names: select this checkbox to search among the names of links between objects. In the V8 Heap Tool Window, link names are marked with a dedicated icon.
Class Names: select this checkbox to search among functions-constructors.
Text Strings: select this checkbox to perform a textual search in the contents of the objects.
- Snapshot Object IDs: select this checkbox to search among the unique identifiers of objects. V8 assigns such a unique identifier to each object when the object is created and preserves it until the object is destroyed. This means that you can find and compare the same objects in several snapshots taken within the same session. In the V8 Heap Tool Window, object IDs are marked with a dedicated icon.
Marks: select this checkbox to search among the labels you set to objects manually using the toolbar of the Containment tab.
The search results are displayed in the Details pane, in a separate Occurrences of '<search pattern>' view. To have the search results shown grouped by the search scopes you specified, press the Group by Type toggle button on the toolbar.
When you open the dialog box next time, it will show the settings from the previous search.