Profiling Node.js apps

At Atlassian, we develop add-ons to enable more powerful workflows within our products. Using Atlassian Connect, we can write those add-ons in any language, and we often write them using Node.js. But once in a while we run into a performance problem with an add-on. I had to learn how to profile a Node.js app so we could live happier lives with less waiting.

If you poke at node via node --help, it doesn't appear that there are a lot of options. Since Node.js is just a front end for V8, the rest are hidden away behind node --v8-options. I wouldn't jump to typing out that command in your terminal just yet: there are a couple hundred options you could pass to the V8 engine. A little searching turns up that V8 supports --prof.
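
One quick way to narrow that list down is to grep it (this is just a convenience, not anything the docs prescribe); the flag we care about turns up as:

  # list every option whose name or description mentions "prof";
  # the one we want is --prof itself
  node --v8-options | grep prof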


  --prof (Log statistical profiling information (implies --log-code).)

If you run your Node.js application with the --prof flag (e.g. node --prof app.js), node will generate a -v8.log file of stack samples, e.g. isolate-0x101804600-v8.log. I believe Node.js added this in v0.12.x, so if you're stuck on an older version you might be out of luck. Peeking in the log, it appears to be a CSV file of execution details. The profiler generates data at a regular sampling interval, taking a "tick" by peeking at the stack to see where your program is spending time. The raw data is a bit hard to use, but there's a nice profiling visualization tool in the V8 repository that collates it for you.
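
In practice the workflow looks something like this (app.js is just a placeholder for your entry point, and the --prof-process step only exists on newer versions of Node, so treat it as an optional shortcut rather than part of the original recipe):

  # run the app under the sampling profiler, then exercise it and shut it down
  node --prof app.js

  # the tick log lands in the working directory
  ls isolate-*-v8.log

  # newer Node versions can also summarize the log locally,
  # if you'd rather not use the browser-based tool
  node --prof-process isolate-0x101804600-v8.log > processed.txt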

This profiling visualization tool processes the file in your browser and provides a plot and ordered call information. Select the file your node instance generated and hit "Start". The profiler output is a block of text which might start off like this (path edited so it fits on the screen):


[JavaScript]:

  ticks  total  nonlib   name
   103    1.4%    6.6%  Stub: LoadICTrampolineStub
    96    1.3%    6.2%  Stub: InstanceofStub
    75    1.0%    4.8%  RegExp: ^(\/?|)([\s\S]*?)((?:\.{1,2}|[^\/]+?|)(\.[^.\/]*|))(?:[\/]*)$
    72    1.0%    4.6%  Stub: CompareICStub {12}
    67    0.9%    4.3%  LazyCompile: *InnerArrayIndexOf native array.js:1031:27
    46    0.6%    3.0%  Stub: CEntryStub
    45    0.6%    2.9%  Stub: CallICStub(args(1), METHOD,
    41    0.5%    2.6%  Stub: LoadICTrampolineStub {1}
    31    0.4%    2.0%  Builtin: ArgumentsAdaptorTrampoline
    19    0.3%    1.2%  Stub: LoadICStub {1}
    17    0.2%    1.1%  Stub: StringAddStub_CheckNone_NotTenured
    17    0.2%    1.1%  Stub: KeyedLoadICTrampolineStub
    16    0.2%    1.0%  Stub: RegExpExecStub
    14    0.2%    0.9%  Stub: FastNewClosureStub {1}
    14    0.2%    0.9%  LazyCompile: *TreeWalker._visit .../node_modules/uglify-js/lib/ast.js:931:21

This shows a few things of interest at the top of the profiler's usage list. UglifyJS is the first bit of userland code to show up. But UglifyJS is a one-time startup cost and doesn't help us find a problem with normal requests, so you can change the range of ticks (effectively time, though each tick should align with the next iteration of the event loop) in the page to exclude app startup and its one-time costs.
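
Trimming the tick range helps, but it also helps to make sure the profile is dominated by the traffic you actually care about. A rough sketch, where the port and endpoint are hypothetical placeholders rather than anything from our add-on:

  # start the app under the profiler in the background
  node --prof app.js &

  # drive repeated requests at the suspect endpoint so request handling,
  # not one-time startup work, dominates the samples
  for i in $(seq 1 500); do
    curl -s http://localhost:3000/slow-endpoint > /dev/null
  done

  # stop the app so the log is complete, then load it into the tool
  kill $!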

There is a plot generated with this tool, but I'll admit I haven't figured out how to interpret that data. If someone knows of some good instructions for interpreting it, feel free to share them in the comments.

So between node --prof and the visualization tool, you should be able to see bottlenecks in your application! Using the --prof flag isn't perfect, though; if you need more detail, you'll need to instrument your code instead of sampling it to get better results.
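
As a minimal sketch of what instrumenting by hand can look like (the function names are illustrative placeholders, not code from our add-on), you can wrap a suspect piece of work with high-resolution timers:

  // wrap the work you suspect is slow with process.hrtime() timers
  function renderReport(data) {
    var start = process.hrtime();       // high-resolution start time
    var html = buildHtml(data);         // placeholder for the slow work
    var diff = process.hrtime(start);   // elapsed [seconds, nanoseconds]
    console.log('renderReport took %d ms', diff[0] * 1e3 + diff[1] / 1e6);
    return html;
  }

That tells you exactly how long a specific call took, at the cost of touching the code, whereas --prof samples everything for free but only gives you statistical hints.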