Unable to Run Multiple Node Child Processes without Choking on DigitalOcean - node.js

I've been struggling to run multiple instances of Puppeteer on DigitalOcean for quite some time with little luck. I'm able to run ~5 concurrently using tools like puppeteer-cluster, but for some reason the whole thing just chokes with little helpful messaging. So, I switched to spawning ~5 child processes without any additional library -- just Puppeteer itself. Same issue. Chokes with no helpful errors.
I'm able to run all of these jobs just fine locally, but after I deploy, I hit these walls. So, my hunch is that it's a resource/performance issue, but I can't say for sure.
I'm running a droplet with 1 GB of RAM and 3 vCPUs on DigitalOcean.
Basically, I'm just looking for ways to start troubleshooting something like this. Is there a way I can know for sure that I'm hitting resource walls? I've tried pm2 and the DO dashboard graphs, but I feel like those leave a lot of information out, or else I'm missing something else altogether.
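For reference, the child-process version is essentially shaped like the sketch below (simplified; worker.js here is a stand-in for the script that launches one Puppeteer browser and runs one job):
// parent.js (simplified sketch) -- fork a handful of Puppeteer workers and wait for them
const { fork } = require('child_process');

const jobs = ['job-1', 'job-2', 'job-3', 'job-4', 'job-5']; // placeholder job ids

for (const job of jobs) {
  const child = fork('worker.js', [job]); // worker.js launches its own Puppeteer browser
  child.on('exit', (code) => {
    if (code !== 0) console.error(`worker for ${job} exited with code ${code}`);
  });
}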

Author of puppeteer-cluster here. You are right, 1 GB of memory is likely not enough for running 5 browser windows (or tabs) in addition to your operating system and maybe even other background tasks.
Here is a list of resources you should check:
Memory: Use a tool like htop to check your memory usage while your application is running.
CPU: Again, you can use htop for that; 3 vCPUs should be more than enough for 5 windows.
Disk space: Use a tool like df to check if there is enough space on the disk. I know of multiple cases in which there was not enough space on the disk (like some old kernels filling the disk), and Chrome needs at least some space to run.
Network throughput: Rarely the problem, but sometimes the network just does not have the bandwidth to support many open browsers. Use a tool like nload to check the network throughput.
To use htop or nload, you start your script in the background (node script.js &) or use a terminal multiplexer (like tmux). Resource problems should then be easy to spot.
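If memory does turn out to be the bottleneck, the cheapest fix is usually to lower the concurrency rather than add RAM. A rough sketch with puppeteer-cluster (the URLs and Chrome flags here are only examples):
const { Cluster } = require('puppeteer-cluster');

(async () => {
  const cluster = await Cluster.launch({
    concurrency: Cluster.CONCURRENCY_CONTEXT, // one browser, isolated contexts per job
    maxConcurrency: 2,                        // 2 is far safer than 5 on a 1 GB droplet
    puppeteerOptions: {
      args: ['--no-sandbox', '--disable-dev-shm-usage'], // common flags for small VMs
    },
  });

  await cluster.task(async ({ page, data: url }) => {
    await page.goto(url);
    // ... scrape or screenshot here
  });

  ['https://example.com/a', 'https://example.com/b'].forEach((url) => cluster.queue(url));

  await cluster.idle();
  await cluster.close();
})();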

Most probably you're running out of memory; 5 Puppeteer processes are a lot for a 1 GB VM.
You can run
grep -i 'killed process' /var/log/messages
to confirm that the OOM killer terminated your processes. (On Debian/Ubuntu systems the kernel usually logs to /var/log/syslog rather than /var/log/messages, so check there or in the output of dmesg.)
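You can also watch the headroom from inside the Node process while the jobs run, instead of only reconstructing the kill from the logs afterwards; a minimal sketch using Node's built-in os module:
// memwatch.js -- minimal sketch: log system-wide and per-process memory every 5 seconds
const os = require('os');

setInterval(() => {
  const freeMB = Math.round(os.freemem() / 1024 / 1024);
  const totalMB = Math.round(os.totalmem() / 1024 / 1024);
  const heapMB = Math.round(process.memoryUsage().heapUsed / 1024 / 1024);
  console.log(`system: ${freeMB}/${totalMB} MB free, this process heap: ${heapMB} MB`);
}, 5000);
Note that the heap figure only covers the Node process itself, not the Chrome child processes, so the system-wide free memory is the number to watch.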

Related

Is there a non-CPU-intensive way to gather system usage data in Node.js?

Context: I'm writing a monitoring/management app for a VPS running Linux.
Reasons: I need to be able to quickly identify overloaded threads, high RAM usage, and badly behaving tasks.
Problems and current stage: Right now my code works well. I'm using the systeminformation npm module to gather system information like CPU usage, memory usage, disk status and the task list, put it into an object, and send it to all connected clients on a socket.io server. The problem is that this approach literally brings the host machine to its knees (both server and client are running locally, because I'm still working on them): my CPU usage jumps from 6% to 80% in an instant, which is ridiculous. I want this to update at least once a second, but if possible, 60 times a second. The point is, I need a different way of retrieving the usage data for the CPU (ideally per thread as well), memory, disks and the list of tasks. I know this question is not very specific, but I believe more people than just me run into this, namely that Node.js just kills the machine (irony). Looking forward to any help!
I tried different approaches before, but they either lowered the usage only slightly or increased it because more modules had to be loaded. This leads me to the conclusion that I just need a better module to handle this stuff.
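For reference, the kind of lightweight sampling I'm after would look something like this: computing per-core CPU usage from the counters os.cpus() already exposes, with no extra modules (a rough sketch):
const os = require('os');

// Sample the per-core time counters once a second and compute usage from the deltas.
let prev = os.cpus();
setInterval(() => {
  const curr = os.cpus();
  const usage = curr.map((cpu, i) => {
    const p = prev[i].times;
    const c = cpu.times;
    const idle = c.idle - p.idle;
    const total = (c.user - p.user) + (c.nice - p.nice) + (c.sys - p.sys) + (c.irq - p.irq) + idle;
    return total > 0 ? (100 * (1 - idle / total)).toFixed(1) : '0.0';
  });
  prev = curr;
  console.log('per-core CPU %:', usage.join(' '));
}, 1000);
Memory is similarly cheap via os.freemem() and os.totalmem(); the expensive part is usually the per-process task list, which could be sampled far less often than the CPU counters.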

Debug high CPU usage in Azure WebApp (Linux)

I have set up an Azure WebApp (Linux) to run WordPress and another handmade PHP app on it. All works fine, but I get this weird CPU usage graph (see below).
Both apps are PHP7.0 containers.
SSHing into the two containers and using top, I see no unusual CPU-hogging processes.
When I reset both apps the CPU goes back to normal and then starts to raise slowly as shown below.
The number of HTTP requests to the apps has no relation to the CPU usage at all.
I tried to use apache2ctl to see if there are any pending requests, but that does not seem possible inside a Docker container.
Anybody got an idea how to track down the cause of this?
This is the top output. The instance has 2 cores. Lots of idle time but still over 100% load and none of the processes use the CPU ...
After working with MS Support on the issue, it seems to have boiled down to the WordPress theme being too slow or inefficient. Each request took very long and hogged CPU resources. All subsequent requests started queuing up, thus increasing the CPU load.
Why that would not show up as %CPU in top was not explained to me.
They proposed using a different theme or scaling up to a multi-core instance.
I am unsatisfied with that solution and will monitor further and try to find the real culprit.
I had almost exactly the same CPU percentage graph as you did, although with a Node.js app instead of PHP. Disabling Diagnostic Logs > Docker Container Logging seems to have solved the problem for me.
I do not need those logs because I am logging to Application Insights.
In your case you might need those logs, though. I have no solution for that, but I am guessing that heavier log rotation or reducing the size of the logs by other means might help.

Unable to locate the memory hog on an OpenVZ container

I have a very odd issue on one of my OpenVZ containers. The memory usage reported by top, htop, free and the OpenVZ tools seems to be ~4GB out of the allocated 10GB.
When I list the processes by memory usage or use the ps_mem.py script, I only get ~800MB of memory usage. Similarly, when I browse the process list in htop, I am unable to pinpoint the memory-hogging offender.
There is definitely a process leaking RAM in my container, but even when it hits critical levels and I stop everything in the container (except for ssh, init and shells), I cannot reclaim the RAM. Only restarting the container helps; otherwise the OOM killer eventually starts kicking in inside the container.
I was under the assumption that a leaky process releases all its RAM when killed, and that you can observe its misbehavior via top or similar tools.
If anyone has ever experienced behavior like this, I would be grateful for any hints. The container is running icinga2 (which I suspect of leaking RAM), although most of the time the monitoring process sits idle, as it manages to execute all its scheduled checks in a more than timely manner - so I'd expect the RAM usage to drop at those times. It doesn't, though.
I had a similar issue in the past, and in the end it was solved by the hosting company where I had my OpenVZ container. I think the best approach would be to open a support ticket with your host, explain the problem to them and ask them to investigate. Maybe they use an outdated kernel version or made changes on the server that affect your OpenVZ container.

Node.js Clusters with Additional Processes

We use clustering with our Express apps on multi-CPU boxes. It works well; we get the maximum use out of AWS Linux servers.
We inherited an app we are fixing up. It's unusual in that it has two processes. It has an Express API portion to take incoming requests. But the process that acts on those requests can run for several minutes, so it was built as a separate background process, Node calling Python and Maya.
Originally the two were tightly coupled, with the Python script called by the request to upload the data. But this of course was suboptimal, as it would leave the client waiting for a response for the time it took to run, so it was rewritten as a background process that runs in a loop, checking for new uploads, and processing them sequentially.
So my question is this: if we have this separate Node process running in the background, and we run clustering, which starts up a process for each CPU, how is that going to work? Are we not going to get two Node processes competing for the same CPU? We were getting a bit of weird behaviour and crashing yesterday, without a lot of error messages (god I love Node), so it's a bit concerning. I'm assuming Linux will just swap the processes in and out as they are being used, but I wonder if it will be problematic, and I also wonder about someone getting their web session swapped out for several minutes while the longer-running process runs.
The smart thing to do would be to rewrite this to run on two different servers, but the files that Maya uses/creates are on the server's file system, and we were not given the budget to rebuild it the way we should. So we're stuck with this architecture for now.
Any thoughts on possible problems and how to avoid them would be appreciated.
From an overall architecture perspective, spawning one Node.js process per core is a great way to go. You have a lot of interdependencies, though: the Node.js processes are calling Maya, which may use multiple threads (keep that in mind).
The part that concerns me is your random crashes and your "process that runs in a loop". If that process is just checking the file system, you probably have a race condition where the Node.js processes are competing to work on the same input/output files.
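If that is the case, one cheap mitigation is to have each process claim a file with an atomic rename before it starts working on it, so only one process ever wins; a sketch (the directory layout is made up):
const fs = require('fs');
const path = require('path');

// Try to claim an uploaded file by renaming it into a per-process work directory.
// rename() is atomic within a filesystem, so only one process can win the race.
function tryClaim(uploadDir, workDir, fileName) {
  const src = path.join(uploadDir, fileName);
  const dst = path.join(workDir, `${process.pid}-${fileName}`);
  try {
    fs.renameSync(src, dst);
    return dst;            // we own the file now
  } catch (err) {
    if (err.code === 'ENOENT') return null; // another process claimed it first
    throw err;
  }
}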
In theory, one Node.js process per core will work great and should help you utilize all of your CPU capacity. Linux always schedules processes in and out, so that is not an issue. You could start multiple Node.js processes per core and still not have an issue.
One last note: be sure to keep an eye on your memory usage. Several Linux distributions on EC2 do not have a swap file enabled by default, and running out of memory can be another silent app killer, so it is best to add a swap file in case you run into memory issues.

Running Ubuntu with nothing installed uses 500 out of 512MB -- which process should I kill?

Running Ubuntu Linux 14.04 on a DigitalOcean server which gives me 512MB of RAM. Surprisingly, when trying to run Activator for a Play app, I came to realize that almost all the memory was used. Using the htop command I get this output. Which process should I kill? (I am using 2 SSH connections, one to monitor and the other one to do stuff.)
I could also assign swap memory, but that would affect performance. I thought 512MB should be more than enough to run a Play server. I mean, seriously, we put a man on the moon with reaaaaly much less.
Linux makes as much use of memory as it can, but that doesn't mean that it's not available for your applications. It will use memory to cache certain things (such as files) and memory for buffers.
In your screenshot you'll see the memory usage bar is made of different coloured sections:
Green is memory in use
Blue is buffer
Yellow is cache
So generally any applications you run that require more memory will allocate it out of the memory used to cache data.
Having swap space is generally a good idea - it won't affect performance unless the kernel starts swapping heavily, but that's generally better than the alternative which is your applications will crash with an out-of-memory error.
