How to generate a Node.js flame graph on CentOS?

I'd like to generate a flame graph for my node.js app. Unfortunately, my dev box runs OS X (which doesn't support utrace helpers, per the linked article) and my production box runs CentOS (which doesn't even have dtrace).
I've found some indication that something like SystemTap might be a dtrace alternative, but I've been unable to cobble together a working way to generate the stacks.out file to feed into stackvis.
Does anybody know of a decent tutorial on how to get this up and running? I'd prefer it on CentOS (so I can examine my production app) but OSX would also be sufficient.

Judging by recent Google searches, people are unhappy with SystemTap on CentOS, but here is an article, http://dtrace.org/blogs/brendan/2012/03/17/linux-kernel-performance-flame-graphs/, referenced from Brendan Gregg's FlameGraph project on GitHub, https://github.com/brendangregg/FlameGraph
I would say move toward the real solution of getting dtrace installed, rather than relying on the workaround.

On Linux, the perf_events profiler can be used to sample stack traces, and it has JIT symbol support. For node.js, you need to be running version 0.11.13 or higher with the V8 option --perf-basic-prof. That option creates a /tmp/perf-PID.map file for symbol translation, which perf uses. Once you have perf profiling stack traces with JavaScript symbols, you can create flame graphs by running stackcollapse-perf.pl (from the FlameGraph repo) on the output of "perf script".
I wrote up the full instructions here: http://www.brendangregg.com/blog/2014-09-17/node-flame-graphs-on-linux.html
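The pipeline from those instructions can be sketched as a short script. The app entry point (app.js), the 30-second sampling window, and the FlameGraph scripts sitting in the current directory are all assumptions; adjust to your setup and run it on the CentOS box, typically as root.

```shell
# Sketch: sample a node.js app with perf and render a flame graph.
# Assumes node >= 0.11.13 and Brendan Gregg's FlameGraph repo checked out here.
profile_node() {
  node --perf-basic-prof app.js &              # writes /tmp/perf-<pid>.map for JS symbols
  pid=$!
  perf record -F 99 -p "$pid" -g -- sleep 30   # sample at 99 Hz with call graphs for 30 s
  perf script > out.perf                       # dump the recorded stacks as text
  ./stackcollapse-perf.pl out.perf > out.folded  # fold into one line per unique stack
  ./flamegraph.pl out.folded > flame.svg         # render the interactive SVG
}
# Call profile_node on a machine where perf is installed, then open flame.svg
# in a browser.
```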

Related

C++ entry points not showing up in node.js profiling output

When running node --prof <command>, then later node --prof-process on macOS, my profiling output no longer shows any of the C++ entry points, leaving a lot of unaccounted-for gaps in my profiling data. Around the same time, the profiling trees started showing just the node binary where it hadn't appeared before, so it's as if the profiler were no longer able to "dive into" the internals of node.
I think this started when I was trying to improve dtrace permissions with csrutil, but I've since restored everything to factory settings and it still happens.
What causes C++ entry points to not show up in traces? Is there a way to fix the issue?
Update:
Just tried turning off SIP entirely with csrutil disable (which is a bad thing to do), and the problem persists, so maybe SIP is a red herring here.
The amazing wizards in the node.js GitHub issues figured this out.
In short, I learned that two commands are used by the profiler on macOS: c++filt and nm. When I tried reporting which versions of those commands I had installed, I got this message back for nm:
» nm --version
Agreeing to the Xcode/iOS license requires admin privileges, please run “sudo xcodebuild -license” and then retry this command.
Apparently the requirement to accept the license was added, perhaps after an upgrade, and it was blocking the profiler from looking up and demangling C++ symbols. After I accepted the license, the profiler started working normally again.
Hopefully this helps others running across the same scenario.
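A quick sanity check for the two helper tools the profiler shells out to, in case someone hits the same wall (the exact wording of the failure varies; the xcodebuild hint applies to macOS only):

```shell
# Check that the tools node's tick processor shells out to actually run.
check_tool() {
  if "$1" --version >/dev/null 2>&1; then
    echo "$1: ok"
  else
    echo "$1: failing - on macOS, try 'sudo xcodebuild -license' and retry"
  fi
}
check_tool nm
check_tool c++filt
```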

Accessing the WebKit API from Haskell via something other than WebKitGTK

I'm trying to understand whether there is any other way to access the WebKit API directly from a Haskell (ghc-7.10.2 currently) program without having to go through something like webkitgtk3, which is a Haskell wrapper around WebKitGTK.
It appears WebKitGTK does not expose the full WebKit API, for example these are offered by WebKit:
HTMLCanvasElement # developer.apple.com
CanvasRenderingContext2DClass # developer.apple.com
but not by WebKitGTK:
WebKitDOMHTMLCanvasElement # webkitgtk.org
Is there any way to access the WebKit API of a running WebKit.app or Safari.app especially on OS X but also Linux and Windows?
P.S. Some background to this: I'm developing an application using GHCJS, but since GHCJS is so much slower (and doesn't integrate with all Emacs IDE features, I think), I also want to be able to compile the same (or almost the same) code base using GHC. So I got familiar with webkitgtk, and even spent several days getting webkitgtk-2.4.9 to build on OS X via Homebrew (webkitgtk3 currently only builds against 2.4.9), only to find out that the nice, fully featured Canvas API which WebKit exposes is not available through WebKitGTK at all. Hence the search for alternatives. It is also for this reason that I'm adding the ghcjs tag; other users of GHCJS will very likely find this post interesting.

When using someone else's application code, do I need to run CMake to get the project structure for my operating system?

I am getting into a position where I have to use other people's code for projects, for example OpenTLD. I want to change some of the code to give it more functionality and use it in a different way. What I have found is that many people have packaged their files in such a way that you are supposed to use
cmake
and then
make
and sometimes after that
make install
I don't want to install the software on my system. What I am looking to do is get these people's code to a point where I can add to it in Eclipse, or even just using Nano, and then compile it.
At what point is the code in a workable/usable state? Can I use it after running cmake, or do I need to also run make? Is my thinking correct that it would be better to edit the code after running cmake, as opposed to before? My finished code won't need to be cross-platform; it will only run on Linux. Is it easier to learn cmake and edit the code before running cmake, as opposed to not learning cmake and editing it afterwards, if that is even possible?
Your question is a little open-ended.
Looking at the OpenTLD project, there is a binary and a library available for use. If you are interested in using the binary from your code, you need to download the executables (Linux executables are not posted). If you are planning to use the library, you have two options: either use the pre-built library, or build it during your own build process. You would include the header files in your custom application and link against the library.
If you add more details, probably others can pitch in with new answers or refine the older ones.
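For what it's worth, the usual CMake edit-build cycle looks like the sketch below (an out-of-source build; the "build" directory name is just a convention). The key point for the question: cmake only generates the build system, make does the actual compiling, and make install is optional.

```shell
# One-time setup plus the edit/rebuild loop for a typical CMake project.
build_once() {
  mkdir -p build && cd build || return 1
  cmake ..        # configure: generates Makefiles; needed once up front
  make            # compile: this is the step you repeat after editing sources
  # make install  # optional: skip it and run the binaries from ./build instead
}
# After the first build: edit sources anywhere, then just re-run "make" in
# ./build; cmake-generated Makefiles re-invoke cmake automatically if
# CMakeLists.txt itself changed.
```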

Tools to help manage sets of multiple versions of executables on Linux?

We are in a networked Linux environment and what I'm looking for is a FOSS or generic system level method for managing what versions of executables and libraries get used, per session. The executables would preferably be installed on the network. The executables will be in-house tools and installs of commercial packages like Houdini, Maya and Nuke.
The need for this is that we'd prefer to have multiple versions of the software installed and available for the artists but there needs to be an easy way to select which version to use. As an added benefit, I'd like to be able to track the version of software used to generate a given output as metadata. I've worked at studios that did this successfully but I was not 100% up to speed on how it was achieved. Every executable in a given set was assigned a single uber version for the set. That way, the "approved packages" of the studio tools were all collapsed into a single package of tools that were known to work together.
Due to the way they install, some programs make setting this up easy (it's as simple as adding their install directories to $PATH). Other programs don't make it quite so easy. I'm particularly worried about how to handle the libraries a program might install. What's needed is a generic access method I can use to wrap everything into a clean front end.
Does anyone know of such a system available in the wild or am I going to have to implement it from scratch? Google hasn't been very helpful in finding a solution.
Thanks!
Check out the "modules" system at http://modules.sourceforge.net/; it's quite widely used in HPC.
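At its core, a modulefile just rewrites environment variables per session. A stripped-down illustration of the idea follows; the /opt/<pkg>/<ver> layout is an assumption, and real Environment Modules additionally records what each load changed so versions can be unloaded or swapped cleanly.

```shell
# Minimal version switcher in the spirit of Environment Modules:
# "loading" a package version prepends its directories to the search paths.
use_version() {
  pkg=$1 ver=$2
  export PATH="/opt/$pkg/$ver/bin:$PATH"
  export LD_LIBRARY_PATH="/opt/$pkg/$ver/lib:${LD_LIBRARY_PATH:-}"
}
use_version houdini 19.5
echo "${PATH%%:*}"   # -> /opt/houdini/19.5/bin
```

Capturing the loaded versions also gives you the provenance metadata mentioned above: record the output of the session's loads alongside each render.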
There is eselect. I have only used it on Funtoo (an offspring of Gentoo), but it seems to do what you need. It is also written entirely in Bash, so it should be quite possible to port it to other distros.

Is there a library and header to use to access EWMH/NetWM functions?

I need to get information on the current windows and virtual desktops, similar to that provided by the command-line app wmctrl. Is there some (C/C++) API header and library files that I can use?
If it must be in C/C++, I think libxcb-wm is the most prominent one: very mature, still actively developed, and from Freedesktop, the same organization that created the EWMH spec.
On Debian/Ubuntu you have the binary packages libxcb-ewmh2 (run-time library) and libxcb-ewmh-dev (development headers), both from source package xcb-util-wm:
sudo apt install libxcb-ewmh-dev # also pulls libxcb-ewmh2, as usual
There is also official documentation and a tutorial from X.org.
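Once the -dev package is in place, pkg-config knows the compile and link flags. A build line for a hypothetical client my_pager.c would look like this (the hard-coded fallback flags are an assumption for systems where the .pc file is missing):

```shell
# Emit the gcc command line for a program using libxcb-ewmh.
compile_line() {
  pkg-config --cflags --libs xcb-ewmh 2>/dev/null || echo "-lxcb-ewmh -lxcb"
}
echo "gcc my_pager.c $(compile_line) -o my_pager"
```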
Download the source code of wmctrl and study it. If you are making free software under the same or a GPLv2-compatible license, you could take some code from it.
There is only one source file, main.c, and it seems to make ordinary Xlib calls, notably XGetWindowProperty calls wrapped in get_property.
I'm very surprised you asked the question here. With free software, it is so much simpler and quicker to download the source code and study it.
