Node.js killed due to out of memory

I'm running Node.js on a server with only 512 MB of RAM. The problem is that when I run a script, it gets killed for running out of memory.
By default the Node.js memory limit is 512 MB anyway, so I think using --max-old-space-size is useless.
Here is the relevant content of /var/log/syslog:
Oct 7 09:24:42 ubuntu-user kernel: [72604.230204] Out of memory: Kill process 6422 (node) score 774 or sacrifice child
Oct 7 09:24:42 ubuntu-user kernel: [72604.230351] Killed process 6422 (node) total-vm:1575132kB, anon-rss:396268kB, file-rss:0kB
Is there a way to avoid the out-of-memory kill without upgrading the RAM (e.g. by using persistent storage as additional RAM)?
Update:
It's a scraper that uses the node modules request and cheerio. When it runs, it opens hundreds or thousands of webpages (though not in parallel).
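The core of it looks roughly like this (a simplified sketch; the real code extracts and stores data for each page):

var request = require('request');
var cheerio = require('cheerio');

// Fetch pages strictly one at a time, so only one response body
// is held in memory at any moment.
function scrape(urls, i, done) {
  if (i >= urls.length) return done();
  request(urls[i], function (err, res, body) {
    if (!err) {
      var $ = cheerio.load(body); // parse the HTML
      // ...extract what we need and write it out...
    }
    scrape(urls, i + 1, done); // next page only after this one finishes
  });
}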

If you're giving Node access to every last megabyte of the available 512 and it's still not enough, then there are two ways forward:
Reduce the memory requirements of your program. This may or may not be possible. If you want help with this, you should post another question detailing your functionality and memory usage.
Get more memory for your server. 512 MB is not much, especially if you're running other services (such as databases or message queues) that require in-memory storage.
There is a third possibility: using swap space (disk storage that acts as a memory backup). This will have a strong impact on performance, but if you still want it, search for how to set it up on your operating system; there are plenty of articles on the topic. This is OS configuration, not Node's.
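For reference, a typical recipe on Ubuntu looks like this (the 1 GB size is just an example; check your distribution's documentation):

sudo fallocate -l 1G /swapfile   # reserve 1 GB of disk for swap
sudo chmod 600 /swapfile         # restrict access to root
sudo mkswap /swapfile            # format it as swap space
sudo swapon /swapfile            # enable it immediately
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab   # survive reboots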

Old question, but maybe this answer will help people. Using --max-old-space-size is not useless.
Before Node.js 12, the default heap size depended on the platform (32- or 64-bit). According to the documentation, on 64-bit machines the default (for the old generation alone) was 1400 MB, far beyond your 512 MB.
Since Node.js 12, the default heap size takes the available system RAM into account; however, Node's heap isn't the only thing in memory, especially if your server isn't dedicated to it. Setting --max-old-space-size puts an explicit limit on the old-generation heap, and as your application approaches it, the garbage collector is triggered and tries to free memory.
I've written a post about how I've observed this: https://loadteststories.com/nodejs-kubernetes-an-oom-serial-killer-story/
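You can verify the limit that's actually in effect from inside the process, e.g. with the built-in v8 module (a quick sketch):

// Run with: node --max-old-space-size=400 check-heap.js
var v8 = require('v8');

// heap_size_limit is the total V8 heap ceiling: the old-space flag
// plus room for the other heap spaces, so it reads a bit above 400.
var limitMb = v8.getHeapStatistics().heap_size_limit / 1024 / 1024;
console.log('Effective heap size limit: ' + limitMb.toFixed(0) + ' MB');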

Related

How to allow pm2 to use all of the available system memory

I have multiple micro-services written in Node and running on pm2. Whenever there is high traffic on any of these micro-services, its memory doesn't exceed 800 MB even though the system has more than 10 GB of memory free; instead the system becomes slow. I have used only the command below, with no additional settings, to start the services.
pm2 start app.js --name='app_name'
I have gone through the docs for pm2, but they only mention limiting memory usage with max-memory-restart. Is there a way I can make sure my micro-services use all the available system memory?
Whenever there is high traffic on any of these micro-services, the memory doesn't exceed 800 MB even though the system has more than 10 GB of memory free; instead the system becomes slow.
You need to look at CPU metrics too, not just memory. More likely than not, those services aren't starved for memory (if they were, the system would start swapping to disk) but are simply saturating your server's CPUs.
Profiling your services wouldn't hurt either, to find any bottlenecks or stalls that occur under high load.
Is there a way I can make sure my micro-services use all the available system memory?
Yes, there is: use more memory in those services. There's no pm2-imposed limit unless you've configured one.
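V8 does apply its own default heap ceiling per process, though; if you want to raise it, pm2 can pass flags through to Node, along these lines (the 4096 MB figure is just an example):

pm2 start app.js --name='app_name' --node-args='--max-old-space-size=4096'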

Bad allocation error when application reaches around 1.5GB of system memory usage

First, some background: I'm building a 32-bit application but running it on 64-bit Windows.
The application loads a bunch of files for graphical rendering and is multithreaded.
The problem is that I get bad-allocation errors when the application reaches around 1.5 GB of memory. This boundary varies widely, from 1.5 GB to 1.8 GB, and never seems to approach the 2 GB per-process memory boundary I would expect.
In my testing, the application seems to be able to allocate more memory if I remove one of the threads.
Is there a reason I am unable to allocate up to the full 2 GB?
Note: GPU memory usage is around 400 MB, and even if I turn off the rendering the issue is still there.
Thanks in advance for any help!

Running Ubuntu with nothing installed uses 500 out of 512 MB; which process should I kill?

I'm running Ubuntu 14.04 on a DigitalOcean server that gives me 512 MB of RAM. Surprisingly, when trying to run Activator for a Play app, I came to realize that almost all the memory was already used. Using the 'htop' command I get this output. Which process should I kill? (I am using two SSH connections: one to monitor and the other to do stuff.)
I could also assign swap memory, but that would affect performance. I thought 512 MB should be more than enough to run a Play server. I mean, seriously, we put a man on the moon with far less.
Linux makes as much use of memory as it can, but that doesn't mean the memory isn't available for your applications. It uses memory to cache certain things (such as files) and for buffers.
In your screenshot you'll see the memory usage bar is made of different coloured sections:
Green is memory in use
Blue is buffer
Yellow is cache
So generally, any application you run that requires more memory will get it from the memory currently being used to cache data.
Having swap space is generally a good idea. It won't affect performance unless the kernel starts swapping heavily, and even then it's generally better than the alternative, which is your applications crashing with an out-of-memory error.
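You can see this breakdown yourself with free; on newer versions the 'available' column is the figure that matters for your applications, since it counts cache the kernel can reclaim:

free -h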

Does this mean memory leaks in node.js?

I am using node.js 0.10.18 (on Amazon EC2) as the HTTP server for our iOS game.
I use process.memoryUsage() to print memory usage.
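Roughly like this (a simplified sketch; the real logger also prepends a timestamp):

// Log memory statistics once a minute, converted to megabytes.
function toMb(bytes) {
  return (bytes / 1024 / 1024).toFixed(2) + ' MB';
}

setInterval(function () {
  var m = process.memoryUsage();
  console.log('Process: heapTotal ' + toMb(m.heapTotal) +
              ' heapUsed ' + toMb(m.heapUsed) +
              ' rss ' + toMb(m.rss));
}, 60 * 1000);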
I find that the memory usage of our node processes is abnormal.
After running for two days:
node on machine1:
2014-10-13T02:35:04.782Z - vital: Process: heapTotal 119.70 MB heapUsed 84.62 MB rss 441.57 MB
node on machine2:
2014-10-13T02:36:01.057Z - vital: Process: heapTotal 744.72 MB heapUsed 108.19 MB rss 1045.53 MB
The results are:
Both heapUsed values are very small; they have nothing to do with how long the node process has been running.
The heapTotal on machine2 is much larger than heapUsed, and it never shrinks until I restart the process. The heapTotal on machine1, however, seems normal.
machine1 is an Amazon EC2 m3.xlarge; machine2 is an Amazon EC2 m3.medium. From Amazon CloudWatch I know that machine2's performance is insufficient; sometimes its CPU usage goes to 100%. So does the abnormal heapTotal have something to do with insufficient hardware? The 100% CPU usage is not caused by our node process: using the node-usage module I see that our process's CPU usage never goes above 50%. I think the CPU time is being stolen by neighboring virtual machines (CPU time is shared between tenants on Amazon EC2).
I understand the off-heap memory usage to be roughly (rss - heapTotal), which covers Buffers and other allocations outside the V8 heap. This usage increases gradually on both machines; both are above 300 MB after running for two days.
My questions are:
Why is heapTotal not released even though heapUsed is very small? Is it a problem with node itself, or a bug in my own code? Is the only way to fix it to upgrade the hardware?
Why is the off-heap usage increasing gradually? Does it mean there are memory leaks? Is it a problem with node itself, or a bug in my own code? Or should I just ignore it?
Thanks!
From this post, https://www.joyent.com/blog/walmart-node-js-memory-leak, I found that there was a memory leak bug in Node versions <= 0.10.21, fixed in version 0.10.22.
I upgraded node to the latest version, 0.10.32, and also upgraded the hardware of machine2.
Neither memory problem has appeared again. The memory use on both machines now increases by only a few MB each day, which I think is normal, because my node process caches some player data.
So maybe the two problems had the same cause, and it has been fixed.

Optimize Node.js memory consumption

I'm writing a simple CMS in Node.js, Express and MongoDB. I'm planning to run a separate Node.js process for every site. The problem is that after startup each process takes about 90 MB of RAM, which is too much for me (eight sites would take all the server's RAM). The memory is allocated after the first connection to the site; subsequent connections don't affect it.
Is there a guideline or a list of best practices for optimizing this memory usage? I'm trying to track where the memory is allocated with process.memoryUsage() or a similar function, but it's not simple to do.
It's not a problem of memory leaks or anything similar, because the memory usage doesn't grow after the first connection; so the optimization could probably come from loading fewer modules or doing something differently...
The links below may help you to understand and detect memory leaks (if they do exist):
Debugging memory leaks in node.js
Detecting Memory Leaks in Node.js Applications
Tracking Down Memory Leaks in Node.js
These SO questions may also be useful:
How to monitor the memory usage of Node.js?
Node.js Memory Leak Hunting
Here is a quick fix: a node.js lib that will restart any node process once it reaches a certain memory size.
https://github.com/DoryZi/memory_limiter
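The idea behind it is simple enough to sketch: poll your own memory use and exit once it crosses a threshold, so a supervisor (pm2, forever, systemd, ...) restarts the process. This is a generic illustration, not memory_limiter's actual API:

// Exit when resident memory exceeds a limit; the process manager restarts us.
var LIMIT_BYTES = 400 * 1024 * 1024; // 400 MB, illustrative

setInterval(function () {
  if (process.memoryUsage().rss > LIMIT_BYTES) {
    console.error('Memory limit exceeded; exiting so the supervisor restarts us');
    process.exit(1);
  }
}, 30 * 1000); // check every 30 seconds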
Set the --max_old_space_size CLI flag to control the maximum heap size. There's a post that describes Running a node.js app in a low-memory environment.
tl;dr: Try setting this value, in megabytes, to about 80% of the maximum memory footprint you want node to stay under. For example, to run app.js and keep it under 500 MB of RAM used:
node --max_old_space_size=400 app.js
This setting is also described in the Node.js CLI documentation.
