How to collect a memory dump from a Meteor/Node.js application? - node.js

I'd like to learn to analyze my Meteor and Node services' performance and memory usage better than just logging various things to the console. I've read a couple of articles about memory management in Node and taken some baby steps toward analyzing memory dumps with the Chrome developer tools.
The question is, how do I get those memory dumps from my apps in the first place?
This memory and performance analysis concerns the server-side service. As far as I know, the memory dumps obtained from the Chrome browser are client-side dumps.

It seems that this node package
https://github.com/bnoordhuis/node-heapdump
can be used to collect heap dumps on the server side. I still need to figure out how to use it properly, and then it will be time to analyze those dumps.
At the moment I am just writing a single heap dump every time I start my app, but a more sophisticated approach to writing them is probably needed to actually get anywhere.
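For reference, a minimal sketch of how node-heapdump is typically wired up, based on its README - the /tmp output path is just an example:

    // Writes V8 heap snapshots that can be opened in the Chrome DevTools memory profiler.
    var heapdump = require('heapdump');

    // 1) Programmatic trigger, e.g. from a debug-only code path:
    heapdump.writeSnapshot('/tmp/' + Date.now() + '.heapsnapshot', function (err, filename) {
      if (err) console.error('heap snapshot failed:', err);
      else console.log('heap snapshot written to', filename);
    });

    // 2) On UNIX-like systems the module can also write a snapshot when the
    //    process receives SIGUSR2, so a dump can be triggered from the shell:
    //      kill -USR2 <pid>

The resulting .heapsnapshot files can then be loaded into Chrome DevTools and compared against each other.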

Related

How can I debug a memory leak for a loopback application on Heroku?

We have quite a nasty memory leak going on in a loopback (node.js) app, but it does not seem to happen locally, only on Heroku.
Memory usage increases steadily even without any requests, and I fired 10,000 requests at it locally without seeing a similar pattern.
I currently have no good ideas for how to debug this further.
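For what it's worth, a rough sketch of the kind of local load test mentioned above - the target URL, port, and request count are placeholders, not values from the original app:

    // Fire a batch of requests at a locally running instance and log memory
    // usage along the way, to see whether RSS / heap keep climbing.
    var http = require('http');

    var TARGET = 'http://localhost:3000/'; // placeholder URL and port
    var TOTAL = 10000;
    var done = 0;

    (function fireOne() {
      http.get(TARGET, function (res) {
        res.resume(); // drain the response so the socket can be reused
        done++;
        if (done % 1000 === 0) console.log(done, process.memoryUsage());
        if (done < TOTAL) fireOne();
      }).on('error', function (err) {
        console.error('request failed:', err.message);
      });
    })();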
It turns out disabling New Relic fixed the issue. We had the log level set to debug to figure out another issue, and suddenly all hell broke loose. They do indeed have a notice in their documentation about this.
I believe there is a blog post from StrongLoop dealing with memory leak profiling here. It goes over installing heapdump and how to use the Chrome dev tools to collect heap snapshots on the client side using the browser's built-in JavaScript console. Analysis of the heap can be done within that same console as well.
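One pattern that fits this situation (a sketch only; the 200 MB threshold, interval, and file names are made up, not taken from the blog post) is to write a heap snapshot automatically once heap usage crosses a threshold, so two snapshots from different points of the leak can be diffed later:

    var heapdump = require('heapdump');

    // Hypothetical threshold - tune it to the app's normal footprint.
    var thresholdBytes = 200 * 1024 * 1024;
    var snapshotsWritten = 0;

    setInterval(function () {
      var heapUsed = process.memoryUsage().heapUsed;
      if (heapUsed > thresholdBytes && snapshotsWritten < 2) {
        snapshotsWritten++;
        heapdump.writeSnapshot('leak-' + snapshotsWritten + '-' + Date.now() + '.heapsnapshot');
        thresholdBytes += 100 * 1024 * 1024; // wait for further growth before the next one
      }
    }, 60 * 1000).unref(); // don't keep the process alive just for this timer

Two snapshots taken some distance apart can then be compared in Chrome DevTools to see which objects accumulate between them.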

What is consuming memory in my Node JS application?

Background
I have a relatively simple node js application (essentially just expressjs + mongoose). It is currently running in production on an Ubuntu Server and serves about 20,000 page views per day.
Initially the application was running on a machine with 512 MB memory. Upon noticing that the server would essentially crash every so often I suspected that the application might be running out of memory, which was the case.
I have since moved the application to a server with 1 GB of memory. I have been monitoring the application and within a few minutes the application tends to reach about 200-250 MB of memory usage. Over longer periods of time (say 10+ hours) it seems that the amount keeps growing very slowly (I'm still investigating that).
I have since been trying to figure out what is consuming the memory. I have been going through my code and have not found any obvious memory leaks (for example, unclosed DB connections and the like).
Tests
I have implemented a handy heapdump function using node-heapdump, and I have now enabled --expose-gc to be able to trigger garbage collection manually. From time to time I try triggering a manual GC to see what happens to the memory usage, but it seems to have no effect whatsoever.
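For context, roughly what such a setup can look like - the debug routes, port, and Express wiring here are illustrative assumptions, not the original code:

    // Start with:  node --expose-gc app.js
    var express = require('express');
    var heapdump = require('heapdump');

    var app = express();

    // Debug-only route: force a GC (available only when --expose-gc was passed)
    // and report memory usage before and after.
    app.get('/gc', function (req, res) {
      var before = process.memoryUsage();
      if (global.gc) global.gc();
      var after = process.memoryUsage();
      res.json({ before: before, after: after });
    });

    // Debug-only route: write a heap snapshot to disk for later analysis.
    app.get('/heapdump', function (req, res) {
      heapdump.writeSnapshot(function (err, filename) {
        if (err) return res.status(500).send(String(err));
        res.send('wrote ' + filename);
      });
    });

    app.listen(3000);

If a manual GC does not shrink heapUsed, the retained objects are still reachable (or the memory in question is not on the V8 heap at all), which matches the behaviour described above.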
I have also tried analysing heapdumps from time to time - but I'm not sure if what I'm seeing is normal or not. I do find it slightly suspicious that there is one entry with 93% of the retained size - but it just points to "builtins" (not really sure what that signifies).
Upon inspecting the entry with the second-highest retained size (Buffer), I can see that it links back to the same "builtins" via a setTimeout function in some native code. I suspect it is cache- or HTTPS-related (_cache, slabBuffer, tls).
Questions
Does this look normal for a Node JS application?
Is anyone able to draw any sort of conclusion from this?
What exactly is "builtins" (does it refer to built-in JS types)?

IIS CPU is at 95% usage with very few users - on production

I have a web site and I am using IIS as my web server. I noticed that on the production server, the CPU reaches 95% usage pretty fast with very few users. I don't see this behaviour on my development server. I am using Visual Studio to develop and IIS as my local web server as well.
How much more traffic do you have on production compared to the development server? How do their parameters compare? Before starting a deep analysis of the application itself, I would identify all the infrastructure and environmental differences. Sometimes such problems happen because of some other software, like antivirus software running in the background...
Nevertheless, because it sounds more like an application problem, I would first check Event Viewer for errors. Then I would start by monitoring a few performance counters to correlate the % Processor Time counter with Current Connections, Available Memory, # of Exceps Thrown / sec, % Time in GC and so on. This kind of behavior usually has one of the following causes:
excessive looping due to some logic error, like calling the same service again and again, trying to load or parse a malformed file, etc. This can be analyzed with dump analysis (see below).
high CPU usage due to the garbage collector - when memory usage is extensive (or there is even a memory leak), the GC may start to consume more and more CPU fighting the memory shortage. You will see this in the memory-related performance counters.
a considerable number of exceptions thrown (for example due to environmental problems like network unavailability or differences in production data) can also consume a lot of CPU. Event Viewer and the exception-related performance counters should be an indicator here (as exceptions can be handled silently by your application).
To analyze your application further, I suggest taking a full memory dump during high CPU usage. You can do that with the Debug Diag tool. Please refer to this IIS troubleshooting guide for details.

What are some effective strategies to track down native memory leaks in a node.js process?

I've been trying to track down a very slow, but persistent, native memory leak in a node.js app, and I've run out of strategies.
The process has what appears to be a level heap, but as the hours and days roll on, the RSS of the node.js process slowly grows. The process is a job handler that runs the same type of job for different parameters, over and over. The growth of the RSS of the process takes the same shape as the line plotting the cumulative number of jobs run, so each job run is somehow leaking a bit of memory.
Since the heap is more or less constant, the standard heap inspection tools don't seem to be much help.
Here's an example of what the memory consumption looks like: RSS climbing steadily while the heap stays level.
Currently running on node 0.8.7. Each job does a number of database reads/writes, communicates with a Redis instance, and makes some web requests using mikeal/request.
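Since the V8 heap looks level while RSS keeps growing, one cheap thing that helps is logging both around each job run - a sketch, with the hook points left as hypothetical comments:

    // Log V8 heap usage next to the process RSS; a widening gap between the two
    // points at native (off-heap) memory, e.g. Buffers or a leaky native addon.
    function logMemory(label) {
      var m = process.memoryUsage();
      console.log(label,
        'rss=' + (m.rss / 1048576).toFixed(1) + 'MB',
        'heapTotal=' + (m.heapTotal / 1048576).toFixed(1) + 'MB',
        'heapUsed=' + (m.heapUsed / 1048576).toFixed(1) + 'MB');
    }

    // Hypothetical hook points around each job:
    //   logMemory('before job ' + jobId);
    //   ... run the job ...
    //   logMemory('after job ' + jobId);
    setInterval(function () { logMemory('periodic'); }, 60 * 1000);

If heapUsed stays flat while rss climbs, the leak is outside the V8 heap and heap snapshots will not show it, which matches the pattern described above.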
Have you updated to the newest release?
I know everyone says that :) - I just felt like I should join the bandwagon of updating my version of node.js on my production servers every two weeks whenever I think I have an issue. Sounds like a great idea, doesn't it?
So I have been wondering the same thing. I have several node.js projects that I have been managing for a few months now (and that I also wrote last year). It seems that, very slowly, the V8 engine, or my node application, just eats memory and never frees it (it's slow enough that I only have to restart them every now and then).
That is very stressful, especially considering that it should either free up the RSS memory or eventually plateau.
If you are interested in tracking objects being leaked inside the runtime (and by that I mean JavaScript objects, functions, etc.), Mozilla has a very thorough blog post about tracking down memory leaks, with a few links to projects that can be used to do this.
For whatever reason they don't have this one on the list, though (it seems simple enough; I'm trying it out now on my own projects to see if it works, as I tend not to get any of the V8-based ones to compile correctly): heapdump, and here is a link to a how-to guide.
In my own experience the V8 engine seems to allocate memory and hold onto it, just in case it needs that exact same memory chunk later. My brother, who has been using Node.js heavily for about three years, has seen the same thing.
Also, just for completeness (I know you already have), if anyone would like to verify that they are not leaking memory inside of V8, an engineer from Joyent has a pretty decent write-up of how to track down V8 memory leaks.

Troubleshooting a Hanging Java Web App

I have a web application that hangs under high loads. I'm not going to go into the specifics of the code because I really just want some troubleshooting advice and tooling recommendations.
It's a web app, so each request gets a thread. Under a high-load test, the app begins to consume all of the CPU while becoming unresponsive. I suspect that the request threads are hanging in the new code that we are testing. Given the CPU consumption, I'm assuming this must be on my app's side. My understanding, which could be wrong, is that total CPU consumption indicates my first troubleshooting efforts should go into looking at the code that's consuming those cycles.
What are some tools and/or methods for inspecting which threads are hanging and on what lines of code? Again, I can easily force the app into the problematic behavior.
I've found and been trying out VisualVM. It seems like the perfect tool. Still open to suggestions, though. I looked at Eclipse TPTP and it seems to be reaching end of life, as well as requiring a more heavyweight deployment.
You can insert logging messages when a thread starts and when it finishes. Then you start the application and inspect the output while working through the code.
Another approach is to look for memory leaks. If you are sure you don't have one, you can increase the memory available to your JVM.
@chad: do you have a database in the whole picture? You may want to start by looking at what is happening on the DB side - you could well look into DB locks, current sessions, etc.
