I am new to the pm2 concept. I am facing a problem where my CPU usage increases, memory reaches up to 100%, and my server goes down, resulting in the website crashing. Can anyone please advise me on this? Do I need to change the configuration of my production (live) server, such as increasing memory? My code is also necessary and sufficient. I am an EC2 user.
The system requirements will mostly depend on your application, which you have told us nothing about. If CPU reaches 100%, then you likely have some tight loop that burns cycles synchronously and actively adds delays, or something like that. 100% memory usage can mean a memory leak, and in that case no amount of RAM will be sufficient, because leaking memory will eventually use up all of your RAM, no matter how large it is.
You need to profile your application with real usage patterns, on a system where the app works, and only then will you know how many resources it needs. This is true for every kind of application.
Additionally, if you notice that resource usage grows over time, it may be a sign of a resource leak: leaking memory, spawning processes that never exit but keep using CPU and RAM, etc.
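For example, a minimal sketch of how you could log memory usage over time to spot such a trend (the one-minute interval and the log format are just illustrative):
// log heap and RSS every minute so a steady upward trend is visible in the logs
setInterval(() => {
  const { rss, heapUsed, heapTotal } = process.memoryUsage();
  console.log(
    `rss=${(rss / 1024 / 1024).toFixed(1)}MB ` +
    `heapUsed=${(heapUsed / 1024 / 1024).toFixed(1)}MB ` +
    `heapTotal=${(heapTotal / 1024 / 1024).toFixed(1)}MB`
  );
}, 60 * 1000);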
First of all, I would suggest you follow these guidelines for a production environment:
1) Disable morgan if you enabled it for the dev environment (see the sketch after this list).
2) Use nginx or pm2 for load balancing.
Or you can easily handle load balancing by using this command:
pm2 start server.js -i 10
3) Handle uncaught exceptions, i.e.:
process.on("uncaughtException", function (err) {
  // do error handling
});
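A minimal sketch of point 1, assuming an Express app that registers the morgan logger; gating it on NODE_ENV is a common convention, not a pm2 requirement:
const express = require("express");
const morgan = require("morgan");

const app = express();

// only enable request logging outside production to avoid the per-request overhead
if (process.env.NODE_ENV !== "production") {
  app.use(morgan("dev"));
}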
I'm building a Node.js + Express web application using pm2 cluster mode as a load balancer. This turned out to be a big performance improvement, as my application now spawns an instance of itself for each of my CPU cores.
To take the most advantage of it, I'm using a custom start script in which I added pm2's max_memory_restart option, so if one of the instances exceeds 400 MB of memory usage it restarts itself (see the sketch below). Seeing that behavior in action, I couldn't help asking myself whether it is safe to use this option. Although it's nice to have an auto-restart kick in when memory grows past a certain point, I thought of two possible downsides:
If one of my endpoints has memory-intensive usage, that instance could restart itself in the middle of processing, giving the user an error.
If my server has, let's say, 2 GB of RAM and 8 CPU cores, then the max_memory_restart option should be at most 256 MB if I'm running pm2 in cluster mode, as it applies to each instance. Isn't there a risk in giving a fairly low max_memory_restart value here? Theoretically the instances would be restarting frequently in this case.
Given these scenarios, is it safe/adequate to use pm2's max_memory_restart option?
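A sketch of the kind of start script described above; the file name ecosystem.config.js and the app/script names are placeholders, while instances, exec_mode and max_memory_restart are standard pm2 config fields:
// ecosystem.config.js -- started with: pm2 start ecosystem.config.js
module.exports = {
  apps: [
    {
      name: "web-app",            // placeholder name
      script: "./server.js",      // placeholder entry point
      instances: "max",           // one instance per CPU core
      exec_mode: "cluster",       // pm2 cluster mode load balancing
      max_memory_restart: "400M"  // restart an instance that exceeds ~400 MB
    }
  ]
};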
I have multiple micro-services written in Node and running on pm2. Whenever there is high traffic on any of these micro-services, the memory doesn't exceed 800 MB even though the system has more than 10 GB of memory free. Instead, the system becomes slow. I have used only the command below, with no additional settings, to start the services.
pm2 start app.js --name='app_name'
I have gone through the docs for pm2, but they only mention limiting memory usage using max-memory-restart. Is there a way I can make sure my micro-services use all the available system memory?
Whenever there is high traffic on any of these micro-services, the memory doesn't exceed 800 MB even though the system has more than 10 GB of memory free. Instead, the system becomes slow.
You need to look at CPU metrics too, not just memory. More likely than not, those services aren't starved for memory (if they were, the system would start swapping to disk); they are simply keeping your server's CPUs busy.
Profiling your services wouldn't hurt either, to find any possible bottlenecks or stalls that occur during high load.
Is there a way I can make sure my micro-services use all the available system memory?
Yes, there is: use more memory in those services. There's no intrinsic limit unless you've configured one.
I am running a node app on a Digital Ocean cloud server, and the app merely services API requests. All client-side assets are served by a CDN, and the DB is accessed remotely, rather than stored on the server instance itself.
I have the choice of a greater number of vCPUs or RAM. I have no idea what that means in any way, so any feedback is a great help.
A single node.js server runs your Javascript on only one CPU, so having more CPUs doesn't make your Javascript run any faster unless you cluster your app and run multiple node.js processes that share the load, or unless there are other processes on the same server that your server makes use of.
Having more RAM (memory) will only improve things if you actually need more RAM. That depends entirely upon the memory usage profile of your app and how much RAM you already have available. You would probably already know if you were running out of RAM, because you would either see a drastic slow-down when the OS starts page swapping, or your process would crash when it runs out of memory.
So, in order to know which would benefit you more, you really need more data on how your existing app is performing (whether it ever bogs down with CPU-intensive operations, and how much RAM it uses compared to how much you have available). It is quite possible that neither will actually matter to you; it depends entirely upon the usage profile of your server process.
If you have no more data than this and have to make a choice, choose the vCPUs, because there are some circumstances where they might help you (and they give you the option to move to clustering in the future if needed), whereas adding more RAM when you aren't even using what you already have won't help you at all.
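For reference, a minimal sketch of what clustering looks like with Node's built-in cluster module; pm2's cluster mode gives you the same effect without writing this boilerplate yourself:
const cluster = require("cluster");
const os = require("os");
const http = require("http");

if (cluster.isMaster) {
  // fork one worker per CPU core so every core can serve requests
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
  cluster.on("exit", (worker) => {
    console.log(`worker ${worker.process.pid} died, starting a new one`);
    cluster.fork();
  });
} else {
  // each worker runs its own server; incoming connections are distributed across them
  http.createServer((req, res) => {
    res.end(`handled by pid ${process.pid}\n`);
  }).listen(3000);
}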
I am facing an issue with my application servers. Assume there are two nodes behind the load balancer.
Suddenly one of those nodes becomes unhealthy.
When I logged into that instance, there were no logs coming from pm2.
Then I checked its CPU and it was very high.
Please guide me on how I can fix this issue, or on any way to debug it.
Check out flame graphs to see where your Node app is CPU bound.
You can also use the new debugging system in Node 6.3 (--inspect) to debug with the full power of Chrome DevTools.
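For example (these are standard Node CLI flags, not anything pm2-specific; the log file names are whatever Node generates):
node --prof app.js                                  # writes V8 profiling ticks to an isolate-*.log file
node --prof-process isolate-0x*.log > profile.txt   # turn the ticks into a readable summary of where CPU time went
node --inspect app.js                               # then connect Chrome DevTools via chrome://inspect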
PM2 has some limited protection for runaway issues like this via the max-memory-restart option. Typically, high CPU will also correlate with high memory usage and this option can be used to restart your app when it begins consuming large amounts of memory (which in your case may or may not be the correct moment but it should help).
--max-memory-restart <memory> specify max memory amount used to autorestart (in octet or use syntax like 100M)
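For example, to restart app.js whenever it passes roughly 300 MB (the file name and threshold here are placeholders):
pm2 start app.js --max-memory-restart 300M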
I'm writing a simple CMS in Node.js, Express and MongoDB. I'm planning to run a separate Node.js process for every site. The problem is that after startup each process takes about 90 MB of RAM, which for me is too big (eight sites take all the server's RAM). This memory is allocated after the first connection to the site, and further connections don't affect the memory usage.
Is there a guideline or a list of "best practices" to optimize this memory usage? I'm trying to track where the memory is allocated with process.memoryUsage() or a similar function, but it's not simple to do.
It is not a problem of memory leaks or anything similar, because the memory usage doesn't grow after the first connection, so the optimization could probably be in loading fewer modules or doing something differently...
The links below may help you to understand and detect memory leaks (if they do exist):
Debugging memory leaks in node.js
Detecting Memory Leaks in Node.js Applications
Tracking Down Memory Leaks in Node.js
These SO questions may also be useful:
How to monitor the memory usage of Node.js?
Node.js Memory Leak Hunting
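In newer Node versions you can also dump a heap snapshot from inside the process and compare two snapshots in Chrome DevTools; a minimal sketch, where triggering on SIGUSR2 is just one convenient convention:
const v8 = require("v8");

// send `kill -USR2 <pid>` to the process to dump a heap snapshot;
// the resulting .heapsnapshot file can be opened in Chrome DevTools > Memory
process.on("SIGUSR2", () => {
  const file = v8.writeHeapSnapshot();
  console.log(`heap snapshot written to ${file}`);
});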
Here is a quick fix: a node.js lib that will restart any node process once it reaches a certain memory size.
https://github.com/DoryZi/memory_limiter
Set the --max_old_space_size CLI flag to control the maximum heap size. There's a post that describes Running a node.js app in a low-memory environment.
tl;dr: Try setting this value, in megabytes, to about 80% of the maximum memory footprint you want node to stay under. E.g. to run app.js and keep it under 500 MB of RAM used:
node --max_old_space_size=400 app.js
This setting is also described in the Node.js CLI documentation.
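If you want to confirm which heap limit actually took effect, the built-in v8 module exposes it; a minimal check (printing in megabytes is just for readability):
const v8 = require("v8");

// heap_size_limit reflects --max_old_space_size plus V8's other reserved heap spaces
const limitMb = v8.getHeapStatistics().heap_size_limit / 1024 / 1024;
console.log(`V8 heap size limit: ${limitMb.toFixed(0)} MB`);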