Electron memory usage profiling

I have some memory problems with my electron app.
On startup the memory usage is about 120 MB. The JS heap stays constant at 32 MB. Without performing any actions in the browser window, the memory usage of the renderer in the task manager goes up by about 1 MB every second. After increasing by 20 MB it seems to go down by around 16 MB again (probably GC), but leaving the window open for several minutes results in 300 MB memory usage. So there is a memory leak somewhere.
Since the JS heap size never changes, I assume the leak is inside the Node process; am I correct on that part?
How can I analyze the memory usage in the Electron/Node process? (The Chrome profiler does not seem to help in this case.)
Related to https://spectrum.chat/electron/general/debugging-high-memory-usage-in-electron~80057ff2-a51c-427f-b6e1-c297d47baf5b and https://www.electronjs.org/docs/tutorial/performance
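One starting point for the profiling question itself: Electron's main process can report memory for every process in the app via app.getAppMetrics(). A minimal sketch, assuming a recent Electron version (verify the exact memory fields against your version's docs):

// Main process: log the working-set size of each Electron process
// (browser, renderer, GPU) every 10 seconds.
const { app } = require('electron');

app.whenReady().then(() => {
  setInterval(() => {
    for (const metric of app.getAppMetrics()) {
      // metric.memory.workingSetSize is reported in kilobytes
      const mb = Math.round(metric.memory.workingSetSize / 1024);
      console.log(`${metric.type} (pid ${metric.pid}): ${mb} MB`);
    }
  }, 10000);
});

Logging this over time tells you which process (main, renderer, GPU) is actually growing, which the task manager alone does not.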

I have the same problem: my app starts with 200 MB of used memory and in less than 20 minutes it uses more than 450 MB just doing nothing, only showing some images. The same happened with a Raspberry Pi 3B+: memory usage grows until the Pi dies.
What I found is that if you have the DevTools window open (I assume you do, for debugging purposes), it just eats all the memory.
Once I closed the DevTools window, my app used 100 MB (stable) on a Windows system and 300 MB (stable) on my Raspberry Pi.
I've also read somewhere that rendering on the GPU uses a lot of memory, so I used
app.disableHardwareAcceleration();
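For reference, that call has to happen in the main process before the app's ready event fires; a minimal sketch (the window options and file name are placeholders):

const { app, BrowserWindow } = require('electron');

// Must be called before the 'ready' event is emitted
app.disableHardwareAcceleration();

app.whenReady().then(() => {
  const win = new BrowserWindow({ width: 800, height: 600 });
  win.loadFile('index.html');
});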

Related

When does Node garbage collect?

I have a NodeJS server running on a small VM with 256 MB of RAM, and I notice the memory usage keeps growing as the server receives new requests. I read that an issue in small environments is that Node doesn't know about the memory constraints and therefore doesn't try to garbage collect until much later (so, for instance, maybe it would only want to start garbage collecting once it reaches 512 MB of used RAM). Is that really the case?
I also tried using various flags such as --max-old-space-size but didn't see much change, so I'm not sure whether I have an actual memory leak or Node just doesn't GC as soon as possible.
This might not be a complete answer, but it comes from experience and might provide some pointers. A memory leak in NodeJS is one of the most challenging bugs a developer can face.
But before we talk about memory leaks, to answer your question: unless you explicitly configure --max-old-space-size, default memory limits take over. Since certain phases of garbage collection in Node are expensive (and sometimes blocking) steps, it will delay some of the expensive GC cycles (e.g. mark-sweep collection) depending on how much memory is available to it. I have seen that on a machine with 16 GB of memory it would easily let memory climb to 800 MB before significant garbage collections happened. But I am sure that doesn't make ~800 MB any special limit; it really depends on how much available memory there is and what kind of application you are running. E.g. it is entirely possible that complex computations, caches (e.g. big DB connection pools) or buggy logging libraries will themselves always consume a lot of memory.
If you are monitoring your NodeJS process's memory footprint, then sometime after the server starts up, everything begins to warm up (Express loads all the modules and creates some startup objects, caches warm up, and all of your high-memory-consuming modules become active). It might appear as if there is a memory leak because the memory keeps climbing, sometimes as high as ~1 GB. Then you would see that it stabilizes (this limit used to be lower in Node versions before v8).
But sometimes there are actual memory leaks (which can be hard to spot if there is no specific pattern to them).
In your case, 256 MB seems to meet just the minimum RAM requirements for NodeJS and might not really be enough. Before you start worrying about a memory leak, you might want to bump it up to 1.5 GB and then monitor everything; a quick way to watch the numbers is sketched below.
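If you want to watch this behaviour yourself, here is a small sketch using Node's documented process.memoryUsage(); run it with and without the flag and compare how high heapUsed climbs between collections (the file name is just an example):

// Run with: node --max-old-space-size=200 monitor.js
// heapUsed saw-tooths as GC cycles run; heapTotal is what V8 has
// reserved; rss is the whole process as the OS sees it.
const mb = (n) => (n / 1024 / 1024).toFixed(1) + ' MB';

setInterval(() => {
  const { rss, heapTotal, heapUsed, external } = process.memoryUsage();
  console.log(`rss=${mb(rss)} heapTotal=${mb(heapTotal)} heapUsed=${mb(heapUsed)} external=${mb(external)}`);
}, 5000);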
Some good resources on NodeJS's memory model and memory leaks:
Node.js Under the Hood
Memory Leaks in NodeJS
Can garbage collection happen while the main thread is busy?
Understanding and Debugging Memory Leaks in Your Node.js Applications
Some debugging tools to help spot memory leaks:
Node inspector | Chrome
llnode
gcore
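A quick sanity check before reaching for those tools: force a full collection and see whether the heap actually shrinks. With the --expose-gc flag, V8 exposes global.gc() for exactly this kind of experiment (a sketch, not something to leave in production; the file name is an example):

// Run with: node --expose-gc check-leak.js
const usedMb = () => process.memoryUsage().heapUsed / 1024 / 1024;

console.log(`before gc: ${usedMb().toFixed(1)} MB`);
global.gc(); // force a full, blocking collection
console.log(`after gc:  ${usedMb().toFixed(1)} MB`);
// If heapUsed stays high even after a forced collection, something
// is still holding references: a genuine leak, not lazy GC.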

node.js heap memory and used heap size [pm2]

I am currently running node.js using pm2.
And recently, I was able to check "custom metrics" using the pm2 monit command.
Here, information such as heap size, used heap size, and active requests is shown.
I don't know how the heap size is determined. I checked pm2 running on different servers:
each was set to 95 MiB / 55 MiB, and accordingly the used heap size was different.
Also, is heap usage closer to 100% better?
While searching Stack Overflow for related information, I saw the following article:
What does Heap Usage mean in PM2
Also, what does "active requests" mean? It is continuously zero.
Thank you!
[Edit]
env: Ubuntu 18.04 [EC2 t3.micro]
node version: v10.15
[Additional]
server memory: 1 GB [40-50% used]
cpu: vCPU (2) [1-2% used]
The heap is the RAM used by the program you're asking PM2 to manage and monitor. Heap space, in Javascript and similar language runtimes, is allocated when your program creates objects and released upon garbage collection. Your runtime asks your OS for more heap space whenever it needs it: when active allocations exceed the free space. So your heap size will probably grow as your program starts up. That's normal.
Most programs allocate and release lots of objects as they do their work, so you should not try to optimize the % usage of your heap. When your program is running at a steady state (that is, after it has started up), you'll find the % utilization creeping up until garbage collection happens, and then dropping back. For example, a Node.js/Express web server allocates req and res objects for each incoming request, then uses them, then drops them so the garbage collector can reclaim their RAM.
If your allocated heap size keeps growing, over minutes or hours, you probably have a memory leak. That is a programming bug: a problem you should do your best to solve. You should look up how to track that down for your application language. Other than that, don't worry too much about heap usage.
Active requests count work being done via various asynchronous objects like file writers and TCP connections. Unless your program is very busy, it stays near zero.
Keep an eye on loop delay if your program does computations. If it creeps up, some computation function is hogging the JavaScript thread; a rough way to measure that is sketched below.
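A minimal loop-delay probe (my sketch, not part of pm2; it just measures how late a recurring timer fires):

// Schedule a timer every 100 ms and measure how late it actually fires.
// Sustained growth means some function is blocking the event loop.
const INTERVAL_MS = 100;
let last = Date.now();

setInterval(() => {
  const now = Date.now();
  const delay = now - last - INTERVAL_MS; // lateness beyond the interval
  if (delay > 50) console.warn(`event loop delayed by ${delay} ms`);
  last = now;
}, INTERVAL_MS);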

Node.JS process uses 1.4 GB of memory, but heapdump is only 300 MB

I'm running a Socket.IO server with Node.JS, which normally uses about 400 MB of memory because there's a lot of data being cached to send to clients. However, after a couple of hours it suddenly starts growing to 1.4 GB of usage over about 40 minutes. Someone told me to use heapdump to find out whether there is a memory leak.
The problem is that the heap dump turned out to be only 317 MB, and nothing in it looks out of the ordinary, so I'm stuck with debugging. I've also run it with nodetime, which says that the V8 heap usage is around 400 MB, but the total V8 heap size is 1.4 GB.
How do I find out where the remaining 1 GB comes from?
Maybe node-memwatch could help you?
https://github.com/lloyd/node-memwatch
From its Readme:
node-memwatch is here to help you detect and find memory leaks in Node.JS code. It provides:
A leak event, emitted when it appears your code is leaking memory.
A stats event, emitted occasionally, giving you data describing your heap usage and trends over time.
A HeapDiff class that lets you compare the state of your heap between two points in time, telling you what has been allocated, and what has been released.
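A minimal HeapDiff sketch based on that README (the exact API may vary between node-memwatch and its maintained forks):

const memwatch = require('memwatch');

memwatch.on('leak', (info) => {
  console.error('possible leak:', info);
});

const hd = new memwatch.HeapDiff(); // snapshot 1
// ... exercise the code path you suspect of leaking ...
const diff = hd.end();              // snapshot 2, diffed against 1
console.log(JSON.stringify(diff, null, 2));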

Tracking memory leaks in node.js - v8 profiler vs htop

Recently we've discovered that our node.js app most probably has a memory leak (the memory consumption shown in htop keeps growing). We've managed to isolate a small amount of code into a separate script which still causes the leak and are now trying to hunt it down. However, we're having some trouble analysing and understanding the test results gathered with htop and this V8 profiler: http://github.com/c4milo/node-webkit-agent
Right after the script starts, htop shows the following memory consumption:
http://imageshack.us/a/img844/3151/onqk.png
Then the app runs for 5 minutes while I take heap snapshots every 30 seconds. After 5 minutes the results are as follows:
Heap snapshot sizes:
http://imageshack.us/a/img843/1046/3f7x.png
And the results from htop after 5 minutes:
http://imageshack.us/a/img33/5339/2nb.png
So if I'm reading this right, the V8 profiler shows no serious memory leak, but htop shows that memory consumption grew from 12 MB to 56 MB! Can anyone tell me where this difference comes from? And why, even at the beginning of the test, does htop show 12 MB versus the 4 MB shown by the profiler?
htop author here. You're reading the htop numbers right. I don't know about the V8 profiler, but on the issue of "12 MB vs 4 MB" at the start, it's most likely that V8 is accounting only for your JS data, while htop accounts for the entire resident memory usage of the process, including the C libraries used by V8 itself, etc.
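You can see the same gap from inside Node: process.memoryUsage().rss is roughly the resident figure htop reports, while heapUsed/heapTotal is what V8-level profilers see. A quick sketch:

const { rss, heapTotal, heapUsed } = process.memoryUsage();
const mb = (n) => Math.round(n / 1024 / 1024);

console.log(`rss (what htop sees): ${mb(rss)} MB`);
console.log(`V8 heap:              ${mb(heapUsed)} / ${mb(heapTotal)} MB`);
// rss also covers V8 itself, native libraries, Buffers and other
// off-heap allocations, so it is always larger than the JS heap.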

How to tell where the memory went in Linux

I have a long-running process that I suspect has a memory leak. I use top to monitor the memory levels of each process, and nothing uses more than 15% of the total RAM. The machine has 4 GB of RAM and the process starts with well over 3 GB free. The process itself does very heavy, custom calculations on several MB of data, and it takes a single core at 100%.
As time goes on, memory disappears, but top does not blame my long-running process. Instead, the "cached" and "buffers" memory increases and the "free" memory drops as low as 2 MB. The process eventually finishes its job and exits without issue, but the memory never comes back. Should I be concerned, or is this "normal"? Are there other tools besides top that can provide a deeper understanding?
Thanks.
That's normal. Your process is operating on files which are getting cached in memory. If there is "memory pressure" (demand from other programs), that cache memory will be relinquished. The first time I wrote an X widget to show how much memory was "free", it took me a while to get used to the idea that free memory is doing you no good: best to have it all in use doing some kind of caching until it's needed elsewhere!
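If you want to check this yourself, on reasonably recent kernels /proc/meminfo has a MemAvailable line that already folds reclaimable cache back in. A small Node sketch (Linux-only; treat the parsing as illustrative):

// Compare "free" with "available" memory: MemAvailable counts
// page cache the kernel can reclaim on demand.
const fs = require('fs');

const meminfo = fs.readFileSync('/proc/meminfo', 'utf8');
const grabMb = (key) => {
  const m = meminfo.match(new RegExp('^' + key + ':\\s+(\\d+) kB', 'm'));
  return m ? Math.round(Number(m[1]) / 1024) : null;
};

console.log(`MemFree:      ${grabMb('MemFree')} MB`);
console.log(`MemAvailable: ${grabMb('MemAvailable')} MB (free + reclaimable cache)`);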
