I am trying to analyse my application with Google Cloud Stackdriver's Profiler. Can anyone tell me how I can optimize my code using it?
Also, I cannot see any of my function names; I don't know what this _tickCallback is or which part of the code it is executing.
Please help me, anyone.
When looking at node.js heap profiles, I find that it's really helpful to know whether the code is busy and allocating memory (i.e. is a web server actually under load?).
This is because the heap profiler takes a snapshot of everything that is in the heap at the time of profile collection, including allocations which are no longer in use but have not yet been garbage collected.
Garbage collection doesn't happen very often if the code isn't allocating memory. So, when the code isn't very busy, the heap profile will show a lot of memory allocations which are in the heap but no longer really in use.
Looking at your flame graph, I would guess that your code isn't very busy (or isn't allocating very much memory), so memory allocations from the profiler itself dominate the heap profiles. If your code is a web server and you profiled it while it had no load, it may help to generate load for it while profiling.
To answer the secondary question: _tickCallback is a Node.js internal function used to run scheduled callbacks, for example anything registered with setTimeout. Anything scheduled on a timer will have _tickCallback at the bottom of its stack.
Elsewhere in the picture, you can see some green and cyan 'end' functions on the stack. I suspect those are the places where you call response.end in your Express request handlers.
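As a hypothetical illustration (the function name onTick below is made up), an allocation made inside a timer callback would show up in the profile with _tickCallback at the bottom of its stack:

// Hypothetical sketch: an allocation inside a timer callback.
// In a profile, the frame for onTick would sit above Node's
// internal _tickCallback frame.
function onTick() {
  const buf = Buffer.alloc(1024); // allocation attributed to this stack
  // ... use buf ...
}
setTimeout(onTick, 1000);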
Related
I would like to understand the GC process a little better in Node.js/V8.
Could you provide some information for the following questions:
When GC is triggered, does it block the Node.js event loop?
Does the GC run in its own process, or is it just a sub-task of the event loop?
When spawning Node.js processes via PM2 (cluster mode), does each instance really have its own process, or is the GC shared between the instances?
For logging purposes I am using Grafana with appmetrics-statsd (https://github.com/RuntimeTools/appmetrics-statsd). Can someone explain the differences between / more details about these gauges:
gc.size: the size of the JavaScript heap in bytes.
gc.used: the amount of memory used on the JavaScript heap in bytes.
Are there any scenarios where GC is not freeing memory (gc.used) in relation with stress tests?
The questions are related to an issue that I am currently facing. The used heap memory keeps rising and is never released (a classic memory leak). The problem is that it only appears when we have a lot of requests.
I played around with max-old-space-size to avoid PM2 restarts, but it looks like the GC stops freeing memory and the whole application becomes really slow...
Any ideas?
OK, some questions I have already figured out:
gc.size = total_heap_size (https://nodejs.org/api/v8.html -> getHeapStatistics),
gc.used = used_heap_size
It looks normal that once gc.size hits a plateau it never goes down again; see:
Memory usage doesn't decrease in node.js? What's going on?
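To see the underlying numbers directly, you can read them from Node's built-in v8 module; a minimal sketch, assuming the gauge mapping above is correct:

// Minimal sketch: print the heap statistics the gauges above map to.
const v8 = require('v8');

const stats = v8.getHeapStatistics();
console.log('total_heap_size:', stats.total_heap_size); // -> gc.size
console.log('used_heap_size:', stats.used_heap_size);   // -> gc.used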
Why is garbage collection expensive? The V8 JavaScript engine employs a stop-the-world garbage collector mechanism. In practice, it means that the program stops execution while garbage collection is in progress.
https://blog.risingstack.com/finding-a-memory-leak-in-node-js/
If my Node.js process's memory has reached 1.5 GB, and I then apply no load and the system stays idle for 30 minutes, the garbage collector still does not free the memory.
It's impossible to say anything specific, like "why the GC is not collecting the garbage" as you ask in the comments, when you say nothing about what your program is doing or what the code looks like.
I can only point you to a good explanation of how the GC works in Node and explain how to run the GC manually to see if that helps. When you run node with the --expose-gc flag, you can use:
global.gc();
in your code. You can try to run that code in a timeout, on a regular interval, or at any other specific moment and see if that frees your memory. If it does, that would mean that the GC was indeed not running and that was the problem. If it doesn't free your memory, that could mean that the problem is not the GC failing to run, but rather the GC not being able to free anything.
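A minimal sketch of that experiment, assuming the process is started as node --expose-gc app.js:

// Run with: node --expose-gc app.js
// Forces a collection every 30 seconds and logs heap usage,
// so you can see whether a manual GC actually frees memory.
setInterval(() => {
  const before = process.memoryUsage().heapUsed;
  global.gc();
  const after = process.memoryUsage().heapUsed;
  console.log('heapUsed: ' + before + ' -> ' + after + ' bytes');
}, 30000);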
Memory not being freed after a manual GC invocation would mean that you have a memory leak, or that you genuinely use so much memory that it cannot be freed by the GC.
If the memory is freed after running the GC manually, it could mean that the GC is not running by itself, perhaps because you are doing a very long, blocking operation that doesn't give the event loop any chance to run. For example, a long-running for or while loop could have such an effect.
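A hypothetical illustration of such a blocking operation:

// Hypothetical sketch: a long synchronous loop starves the event loop.
setTimeout(() => console.log('timer fired'), 100); // can't run until the loop ends

const end = Date.now() + 10000;
while (Date.now() < end) {
  // busy-wait for 10 seconds; nothing else runs on this thread meanwhile
}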
Not knowing anything about your code or what it does, it's not possible to give you any more specific solution to your problem.
If you want to know how and when the GC in Node works, then there is some good documentation online.
There is a nice article by StrongLoop about how the GC works:
Node.js Performance Tip of the Week: Managing Garbage Collection
Also this article by Daniel Khan is worth reading:
Understanding Garbage Collection and hunting Memory Leaks in Node.js
It's possible that the GC is running but can't free any memory because you have a memory leak. Without seeing the actual code, or even an explanation of what it does, it's really impossible to say more.
I am considering using D for my ongoing graphics engine. The one thing that puts me off is the GC.
I am still a young programmer, I probably have a lot of misconceptions about GCs, and I hope you can clarify some concerns.
I am aiming for low latency, and timing in general is crucial. From what I know, GCs are pretty unpredictable; for example, my application could render a frame every 16.6 ms, and when the GC kicks in a frame could jump to any number, like 30 ms, because collection is not deterministic. Right?
I read that you can turn off the GC in D, but then you can't use the majority of D's standard library, and the GC is still not completely off. Is this true?
Do you think it makes sense to use D in a timing-critical application?
Short answer: it requires a lot of customization and can be really difficult if you are not an experienced D developer.
List of issues:
Memory management itself is not that big a problem. In real-time applications you never, ever want to allocate memory in the main loop. Having pre-allocated memory pools for all main data is pretty much the de facto standard way to write such applications. In that sense, D is no different: you can still call C's malloc directly to get heap memory for your pools, and that memory won't be managed by the GC; it won't even know about it.
However, certain language features and large parts of Phobos do use the GC automagically. For example, you can't really concatenate slices without some form of automatically managed allocation. And Phobos simply hasn't had a strong policy about this for quite a long time.
A few language-triggered allocations won't be a problem on their own, as most memory is managed via pools anyway. However, there is a killer issue for real-time software in stock D: the default D garbage collector is stop-the-world. Even if there is almost no garbage, your whole program will hit a latency spike when a collection cycle runs, as all threads get blocked.
What can be done:
1) Use GC.disable(); to switch off collection cycles. This solves the stop-the-world issue, but now your program may start to leak memory in some cases, as GC-based allocations still work.
2) Dump hidden GC allocations. There was a pull request for a -vgc switch which I can't find right now, but in its absence you can compile your own druntime version that prints a backtrace on each gc_malloc() call. You may want to run this as part of an automated test suite.
3) Avoid Phobos entirely and use something like https://bitbucket.org/timosi/minlibd as an alternative.
Doing all this should be enough to meet the soft real-time requirements typical of game dev, but as you can see, it is not simple at all and requires stepping outside the stock D distribution.
Future alternative:
Once Leandro Lucarella ports his concurrent garbage collector to D2 (which is planned, but not scheduled), the situation will become much simpler. A small amount of GC-managed memory plus a concurrent implementation will make it possible to meet soft real-time requirements even without disabling the GC. Even Phobos can be used once it is stripped of its most annoying allocations. But I don't think that will happen any time soon.
But what about hard real-time?
You had better not even try. But that is yet another story to tell.
If you do not like the GC, disable it.
Here is how:
import core.memory;

void main(string[] args) {
    GC.disable(); // switch off automatic collection cycles
    // your code here
}
Naturally, you will then have to do the memory management yourself. It is doable, and there are several articles about it. It has been discussed here too; I just do not remember the thread.
dlang.org also has useful information about this. This article, http://dlang.org/memory.html , touches on real-time programming and you should read it.
Yet another good article: http://3d.benjamin-thaut.de/?p=20 .
I'm using Chrome (the dev version on my Mac).
I was looking at the timeline for my page load and saw that there is a 150 ms delay due to some garbage collection taking place while the page loads.
It's the yellow line.
I was curious if there's any way to stop it or delay it so the page loads faster?
Against the grain of some of the comments, this isn't a C++ issue.
Garbage collection happens when V8 (the JavaScript engine in Chrome) detects that it should start freeing memory used by objects that are no longer needed by the code. You can visit the V8 page for more information about what the garbage collector does.
There might be lots of reasons why your code is garbage collecting early, and in that case we would need to see your code. Do you have a lot of variables that go out of scope at page load?
Don't create so much garbage: Look at where your JavaScript program allocates memory during load and see if you can eliminate the garbage collection by reusing data structures or delaying that work until after the page has loaded. This lets you 'delay' garbage collection.
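A minimal sketch of both ideas (the function names here are hypothetical):

// 1) Delay allocation-heavy work until after the page has loaded.
window.addEventListener('load', () => {
  buildExpensiveCaches(); // hypothetical allocation-heavy setup
});

// 2) Reuse a data structure in a hot path instead of reallocating it.
const scratch = new Float32Array(1024); // allocated once
function processChunk(data) {
  scratch.set(data); // fills the same buffer, creating no new garbage
  // ... work on scratch ...
}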
Is it due to basic misunderstandings of how memory is dynamically allocated and deallocated on the programmer's part? Is it due to complacency?
No. It's due to the sheer amount of accounting it takes to keep track of every memory allocation. Who is responsible for allocating the memory? Who is responsible for freeing it? Are you using the same API to allocate and free the memory? Are you catching every possible program flow and cleaning up in every situation (for example, cleaning up after you catch an error or exception)? The list goes on...
In a decent sized project, one can lose track of allocated resources.
Sometimes a function is written expecting an uninitialized data structure as input that it will then initialize. Someone passes in a data structure that is already initialized, and thus the previously allocated memory is leaked.
Memory leaks are caused by basic misunderstandings in the same sense that every bug is. And I would be shocked to find that anyone writes bug-free code the first time, every time. Memory leaks just happen to be the kind of bug that rarely causes a crash or explicitly wrong behavior (other than using too much memory, of course), so unless memory leaks are explicitly tested for, a developer will likely never know they are present. Given that changes to a codebase always add bugs, and memory leaks are virtually invisible, memory leaks accumulate as a program ages and grows in size.
Even in languages which have automatic memory management, memory can be leaked because of cyclical references, depending on the garbage collection algorithm used.
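Cycles defeat reference-counting collectors; a tracing collector handles cycles, but a reference you simply forget about still keeps memory alive. A minimal JavaScript sketch of that latter kind of leak (the cache here is hypothetical):

// Hypothetical sketch: a module-level cache that is never pruned.
// The GC cannot free the entries because the Map still references them,
// so heap usage grows for as long as the process runs.
const cache = new Map();

function handleRequest(id, payload) {
  cache.set(id, payload); // entries are added but never removed
}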
I think it is due to the pressure of working in a job with deadlines and upper management pushing to get the project out the door. So you can imagine that, even with testing, QA, and peer code reviews, in such pressurized environments memory leaks can slip through the net.
Since your question did not mention a language: today there is automatic memory management that takes care of the memory accounting/tracking to ensure no memory leaks occur, think Java/.NET, though a few can still slip through the net. Historically it would have been the likes of C/C++, which use the malloc/new functions, where leaks are invariably harder to check for due to the sheer volume of memory being allocated.
Then again, tracking down those leaks can be hard, which throws another curveball at this answer: is it that it works on the dev's machine and the leak doesn't show up, but in production the memory starts leaking like hell? Is it the configuration, the hardware, or the software setup? Or worse, does the leak appear only in some random situation unique to the production environment? Was it time or cost constraints that allowed the memory leaks to occur, or are memory-profiling tools cost-prohibitive, or is there a lack of funding to help the dev team track down the leaks?
All in all, each and every member of the dev team has a responsibility to ensure the code works and to know the rules of memory management (for example, for every malloc there should be a free, and for every new there should be a delete), but no blame should fall on the dev team alone, nor should fingers be pointed at management for 'piling the pressure on the dev team' either.
At the end of the day, it would be a false economy to rely on just the dev team and place 'complacency' on their shoulders.
Hope this helps,
Best regards,
Tom.
Bugs.
Even without bugs, it can be impossible to know in advance which function should deallocate memory. It's easy enough if the code structure is essentially functional (the main function calls sub-functions, which process data and then return a result), but it isn't trivial if several threads (or several different objects) share a piece of memory. Smart pointers can be used (in C++), but otherwise it's more or less impossible.
Leaks aren't the worst kind of bug. Their effect is generally just a cumulative degradation in performance (until you run out of memory), so they simply aren't as high a priority.
Lack of structured scopes and clear ownership of allocated memory.