I am working in an embedded environment, where resources are quite limited. We are trying to use node.js, which works well, but typically consumes about 60 megabytes of virtual memory (real memory used is about 5 megabytes.) Given our constraints, this is too much virtual memory; we can only afford to allow node.js to use about 30 megabytes of VM, at most.
There are several command-line options for node.js, such as "--max_old_space_size",
"--max_executable_size", and "--max_new_space_size", but after experimentation, I find that these all control real memory usage, not maximum virtual memory size.
If it matters, I am working on an Ubuntu Linux variant on an ARM architecture.
Is there any option or setting that will allow one to set the maximum amount of virtual memory that a node.js process is allowed to use?
You can use softlimit to execute node with a limited memory size. Alternatively, you can use Linux's setrlimit directly, though I'm not sure how to call it from Node.js; see this SO question.
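As a minimal sketch of the setrlimit idea driven from Node itself (file names and the 30 MB figure are placeholders for this question's numbers): spawn the real application through a shell that applies ulimit -v, which sets RLIMIT_AS (in kilobytes) before exec'ing node, so the limit is inherited by the child process.

// launcher.js - hedged sketch: cap the virtual memory of a child node process.
const { spawn } = require('child_process');

const VM_LIMIT_KB = 30 * 1024; // ~30 MB of virtual address space

// `ulimit -v` sets RLIMIT_AS for the shell; `exec` replaces the shell with
// node, so app.js inherits the limit and allocations beyond it will fail.
const child = spawn('sh', ['-c', `ulimit -v ${VM_LIMIT_KB} && exec node app.js`], {
  stdio: 'inherit',
});

child.on('exit', (code, signal) => {
  console.log(`node exited: code=${code} signal=${signal}`);
});

Note that V8 itself may refuse to start under a cap that tight, and the same one-liner works just as well from a plain shell script if you don't want a Node launcher.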
How to get current memory usage using nodejs?
I have a backend application. When the operating system's RAM usage is greater than 7 GB, I want to decline user requests.
The main tools built into Node.js, without going to external programs, are these:
process.memoryUsage()
process.memoryUsage.rss()
You probably want the second one, because the resident set size (RSS) is closest to the total OS RAM allocated to the process.
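As a rough sketch of the "decline requests above 7 GB" idea from the question (the threshold and port are placeholders), you can combine those calls with the os module, which reports system-wide memory:

const http = require('http');
const os = require('os');

const LIMIT_BYTES = 7 * 1024 ** 3; // 7 GB threshold from the question

http.createServer((req, res) => {
  const usedByOs = os.totalmem() - os.freemem(); // system-wide RAM in use

  if (usedByOs > LIMIT_BYTES) {
    res.writeHead(503, { 'Retry-After': '10' }); // shed load while memory is tight
    return res.end('Server busy, try again later\n');
  }

  res.writeHead(200);
  res.end(`rss=${process.memoryUsage.rss()} usedByOs=${usedByOs}\n`);
}).listen(3000);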
I have multiple micro-services written in Node and running on pm2. Whenever there is high traffic on any of these micro-services, the memory doesn't exceed 800 MB even though the system has more than 10 GB of memory free. Instead, the system becomes slow. I have used only the command below, with no additional settings, to start the services.
pm2 start app.js --name='app_name'
I have gone through the docs for pm2, but they only mention limiting the memory usage using max-memory-restart. Is there a way I can make sure my micro-services use all the available system memory?
Whenever there is high traffic on any of these micro-services, the memory doesn't exceed 800 MB even though the system has more than 10 GB of memory free. Instead, the system becomes slow.
You need to look at CPU metrics too, not just memory. More likely than not, those services aren't starved for memory (if they were, they would start swapping out to disk); they're just saturating your server's CPUs.
Profiling your services wouldn't hurt either, to find any possible bottlenecks or stalls that occur during high load.
Is there a way I can make sure my micro-services use all the available system memory?
Yes, there is: use more memory in those services. There's no intrinsic limit unless you've configured one.
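If you want to be explicit about it, a pm2 ecosystem file is one place to raise V8's old-space ceiling and spread load over all cores at the same time; a minimal sketch (the app name and the 4096 MB value are placeholders):

// ecosystem.config.js
module.exports = {
  apps: [
    {
      name: 'app_name',
      script: 'app.js',
      exec_mode: 'cluster',   // one worker per CPU core
      instances: 'max',
      node_args: '--max-old-space-size=4096', // per-worker V8 old-generation cap, in MB
    },
  ],
};

Start it with pm2 start ecosystem.config.js and watch pm2 monit to see whether the workers are CPU-bound rather than memory-bound.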
I'm running Node.js on a server with only 512 MB of RAM. The problem is that when I run a script, it gets killed due to running out of memory.
By default the Node.js memory limit is 512MB. So I think using --max-old-space-size is useless.
Here is the relevant content of /var/log/syslog:
Oct 7 09:24:42 ubuntu-user kernel: [72604.230204] Out of memory: Kill process 6422 (node) score 774 or sacrifice child
Oct 7 09:24:42 ubuntu-user kernel: [72604.230351] Killed process 6422 (node) total-vm:1575132kB, anon-rss:396268kB, file-rss:0kB
Is there a way to get rid of the out-of-memory error without upgrading the memory (like using persistent storage as additional RAM)?
Update:
It's a scraper which uses the node modules request and cheerio. When it runs, it opens hundreds or thousands of webpages (but not in parallel).
If you're giving Node access to every last megabyte of the available 512, and it's still not enough, then there are two ways forward:
Reduce the memory requirements of your program. This may or may not be possible. If you want help with this, you should post another question detailing your functionality and memory usage (there is a rough sketch of this idea after this list).
Get more memory for your server. 512 MB is not much, especially if you're running other services (such as databases or message queues) which require in-memory storage.
There is a third possibility of using swap space (disk storage that acts as a memory backup), but this will have a strong impact on performance. If you still want it, Google how to set this up for your operating system; there are plenty of articles on the topic. This is OS configuration, not Node's.
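To make option 1 concrete for a request/cheerio scraper like the one in the question, here is a minimal sketch (the file names and the title selector are made up) that visits pages strictly one at a time and streams each result to disk instead of accumulating everything in memory:

// scrape.js
const fs = require('fs');
const request = require('request');   // the modules the question already uses
const cheerio = require('cheerio');

const urls = fs.readFileSync('urls.txt', 'utf8').split('\n').filter(Boolean);
const out = fs.createWriteStream('results.ndjson', { flags: 'a' });

function scrapeNext(i) {
  if (i >= urls.length) return out.end();

  request(urls[i], (err, res, body) => {
    if (!err && res.statusCode === 200) {
      const $ = cheerio.load(body);
      // Write one small record per page; body and $ become garbage as soon
      // as this callback returns, so memory stays roughly flat.
      out.write(JSON.stringify({ url: urls[i], title: $('title').text() }) + '\n');
    }
    scrapeNext(i + 1); // strictly sequential, as described in the update
  });
}

scrapeNext(0);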
Old question, but maybe this answer will help people. Using --max-old-space-size is not useless.
Before Node.js 12, the default heap size depended only on the architecture (32-bit or 64-bit). According to the documentation, on 64-bit machines the default for the old generation alone was 1400 MB, far above your 512 MB.
Since Node.js 12, the default heap size takes the available system RAM into account; however, Node.js' heap isn't the only thing in memory, especially if your server isn't dedicated to it. So setting --max-old-space-size lets you put a limit on the old-generation heap, and as your application approaches it, the garbage collector is triggered and tries to free memory.
I've written a post about how I've observed this: https://loadteststories.com/nodejs-kubernetes-an-oom-serial-killer-story/
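A quick way to confirm that the flag took effect is to ask V8 itself for its configured ceiling; a small sketch (the 400 MB value is just an example suited to a 512 MB box):

// check-heap-limit.js  (run as: node --max-old-space-size=400 check-heap-limit.js)
const v8 = require('v8');

const limitMb = v8.getHeapStatistics().heap_size_limit / 1024 / 1024;
console.log(`V8 heap size limit: ${limitMb.toFixed(0)} MB`);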
I've been watching a weird phenomenon in programming for quite some time, since overcommit is enabled by default on Linux systems.
It seems to me that pretty much every high-level application (e.g. an application written in a high-level programming language like Java, Python or C#, including some desktop applications written in C++ that use large libraries such as Qt) uses an insane amount of virtual memory. For example, it's normal for a web browser to allocate 20 GB of RAM while using only 300 MB of it. Or for a desktop environment, a MySQL server, and pretty much every Java or Mono application, to allocate tens of gigabytes of RAM.
Why is that happening? What is the point? Is there any benefit in this?
I noticed that when I disable overcommit on Linux, a desktop system that actually runs a lot of these applications becomes unusable; it doesn't even boot up properly.
Languages that run their code inside virtual machines (like Java (*), C# or Python) usually assign large amounts of (virtual) memory right at startup. Part of this is necessary for the virtual machine itself, part is pre-allocated to parcel out to the application inside the VM.
With languages executing under direct OS control (like C or C++), this is not necessary. You can write applications that dynamically use just the amount of memory they actually require. However, some applications / frameworks are still designed in such a way that they request a large chunk of memory from the operating system once, and then manage that memory themselves, in the hope of being more efficient about it than the OS.
There are problems with this:
It is not necessarily faster. Most operating systems are already quite smart about how they manage their memory. Rule #1 of optimization: measure, optimize, measure.
Not all operating systems do have virtual memory. There are some quite capable ones out there that cannot run applications that are so "careless" in assuming that you can allocate lots & lots of "not real" memory without problems.
You already found out that if you turn your OS from "generous" to "strict", these memory hogs fall flat on their noses. ;-)
(*) Java, for example, cannot expand its VM once it is started. You have to give the maximum size of the VM as a parameter (-Xmxn). Thinking "better safe than sorry" leads to severe overallocations by certain people / applications.
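You can watch this "reserved but not resident" behaviour from Node itself on a Linux box with overcommit enabled; a small sketch (the 512 MB figure is arbitrary, and the exact numbers depend on the allocator and kernel settings):

const mb = (bytes) => (bytes / 1024 / 1024).toFixed(1) + ' MB';

console.log('before alloc    rss =', mb(process.memoryUsage().rss));

// Ask for 512 MB without touching it: virtual size grows, resident size barely moves.
const buf = Buffer.allocUnsafe(512 * 1024 * 1024);
console.log('after alloc     rss =', mb(process.memoryUsage().rss));

// Now write every page: the kernel has to back them with real RAM.
buf.fill(0);
console.log('after touching  rss =', mb(process.memoryUsage().rss));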
These applications usually have their own method of memory management, which is optimized for their own usage and is more efficient than the default memory management provided by the system. So they allocate a huge memory block up front, to skip or minimize the effect of the memory management provided by the system or libc.
I have always run Erlang applications on powerful servers. However, sometimes you cannot avoid memory errors like the following, especially when there are many users:
Crash dump was written to: erl_crash.dump
eheap_alloc: Cannot allocate 467078560 bytes of memory (of type "heap").
What makes it more annoying is that you have a server with 20 GB of RAM and, say, 8 cores. The amount of memory which Erlang says it could not allocate (and which is why it crashed) is also disturbing, because it is very little memory compared to what the server has in stock.

My question today (I hope it is not closed) is: what operating system configuration can be done (consider Red Hat, Solaris, Ubuntu or Linux in general) to make it offer the Erlang VM more memory when it needs it? If one is to run an Erlang application on such capable servers, what memory considerations (outside Erlang) should be made regarding the underlying operating system?

Problem background: Erlang consumes main memory, especially when processes number in the thousands. I am running a web service using the Yaws web server. On the same node, I have Mnesia running with about 3 ram_copies tables. It is a notification system, part of a larger web application running on an intranet. Users access this system via JSONP from the main application, which runs off a different web server and different hardware. Each user connection queries Mnesia directly for any data it needs. However, as users increase I always get the crash dump. I have tweaked the application itself as much as possible: cleaned up the code to standard, used binaries rather than strings, and avoided single points of contention like gen_servers between Yaws processes and Mnesia, so that each connection hits Mnesia directly. The server is very capable, with lots of RAM and disk space. However, my node crashes when it needs a little more memory, which is why I need a way of getting the operating system to give Erlang more memory. The operating system is Red Hat Enterprise Linux 6.
It is probably because you are running in 32-bit mode, where only approximately 4 GB of RAM is addressable. Try switching to the 64-bit version of Erlang and try again.
Several server tutorials I have read say that if the service runs as a non-root user, you may have to edit /etc/security/limits.conf to allow that user to access more memory than it is typically allowed. The example below lets the user fooservice use 2 GB:
fooservice hard memlock 2097152