Too many mongod processes - node.js

We have a test server with 3 different node.js apps running on it. Each application uses the same MongoDB test database, an instance of which also runs on the same server. So at any given moment we have at most 3 open connections to the mongodb server.
The issue is that after each code deployment (which is basically: kill the currently running process, update the code, start a new process) I see a new process (actually a thread of a single process) on the server, shown in htop as /usr/bin/mongod --config /etc/mongodb.conf. As a result, every once in a while we have to restart the test server, because those unused threads accumulate and the mongod process ends up taking all the RAM.
I am not sure why this is happening and I am looking for a way to fix it.
My assumption is that if we simply kill the node.js process, the connection (and therefore the thread related to that connection) somehow stays alive, so instead of killing the nodejs process we should shut it down gracefully, closing the DB connection first.

htop is also showing individual threads; your mongod isn't started multiple times, which wouldn't even be possible with the same config because the port would already be in use.
Use top or ps aux | grep mongod and you should see just one process.
You can also configure htop not to show those threads: press F2 > Display options > Hide userland threads.
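Separately, the asker's hunch about graceful shutdown is worth sketching. A minimal, hedged version: register a handler for termination signals that closes connections before exiting. The names here are illustrative assumptions; `closeFns` would contain something like `() => mongoClient.close()` from the official `mongodb` driver.

```javascript
// Hedged sketch: run cleanup callbacks (e.g. closing the MongoDB
// connection) when the process receives SIGTERM or SIGINT, so the
// server-side connection thread is released promptly.
function registerGracefulShutdown(closeFns, exit = (code) => process.exit(code)) {
  const shutdown = async () => {
    for (const close of closeFns) {
      try {
        await close(); // release DB connections, HTTP servers, etc.
      } catch (err) {
        console.error('cleanup failed:', err);
      }
    }
    exit(0);
  };
  process.on('SIGTERM', shutdown);
  process.on('SIGINT', shutdown);
  return shutdown; // returned so it can also be invoked directly
}
```

A deployment script would then send SIGTERM (plain `kill <pid>`) rather than SIGKILL (`kill -9`), giving the handler a chance to run.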

Related

Problems with orphan ibus-daemon processes after xrdp time-out in CentOS

I have deployed CentOS 7 with GNOME for a use case that requires multiple users to log into the desktop environment.
I have xrdp set up to time out after 3 days of inactivity and end the session. This mostly works, but I often have lingering sessions when I run loginctl. After a quick ps, the only processes left behind that I can tie back to those sessions (based on dates) are ibus-daemon processes. This makes it hard to track the number of active sessions or enforce limits with these ghost processes hanging around.
Nov03 ? 00:00:00 ibus-daemon --xim --panel disable
I have read there is a way to add an argument to the daemon to enable a timeout, but I have been unable to figure out what is spinning up the ibus-daemon (its parent process is 1) or where the startup string is. There is no systemd unit or init file, so some other process must be calling these.
Any suggestions where I can track down the startup config or mitigate this would be appreciated!
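As a stopgap while hunting for the startup string, orphaned ibus-daemon processes (reparented to PID 1 once the session dies) can be listed with a ps/awk filter. A hedged sketch; confirm the processes are stale before piping the PIDs to kill:

```shell
# List PIDs of ibus-daemon processes whose parent is PID 1 (orphans).
# Once confirmed stale, the output can be fed to `kill`.
ps -eo pid,ppid,comm | awk '$2 == 1 && $3 == "ibus-daemon" { print $1 }'
```

For the startup string itself, one place worth checking on a GNOME desktop is the XDG autostart directory (e.g. grep for ibus under /etc/xdg/autostart), since entries there are launched by the session manager rather than by systemd or init.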

Postgres processes suddenly stop after high CPU?

I'm running a Postgres DB with a Node.js web application on an Ubuntu droplet with 16 vCPUs, and I've found strange behavior in the Postgres processes during times of high load. It seems that the Postgres processes stop completely after a period of near-100% CPU load, causing my API to freeze. Why is this?
Attached below are 3 screenshots, approximately 1 minute apart from each other, taken with top on the command line.
Starting web app through PM2 — https://i.stack.imgur.com/Vs6WW.png
After a while — https://i.stack.imgur.com/RTc9G.png
Finally — https://i.stack.imgur.com/DVY10.png
I'm seeing this behavior only when my server is handling 10,000+ requests per 10 minutes. Is this intended? What's going on here, and is it possible to keep these processes from "stopping" and never respawning until I restart my Node.js app?
UPDATE: The Postgres log file shows a lot of unexpected EOF on client connection with an open transaction. Is this caused by CPU overload or by errors within the transaction?
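The unexpected EOF lines mean clients (here, the Node.js side) dropped their TCP connections while a transaction was still open, typically because the app process was restarted or timed out under load, rather than being caused by CPU usage directly. One hedged server-side mitigation is to cap how long a backend may sit idle inside an open transaction, via postgresql.conf:

```
# postgresql.conf -- hedged suggestion: terminate backends that sit
# idle inside an open transaction for more than 60 s.
# 0 (the default) disables the cap. Available in PostgreSQL 9.6+.
idle_in_transaction_session_timeout = 60000   # milliseconds
```

This does not fix the client-side cause, but it stops abandoned transactions from holding locks and connections until the server is restarted.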

Two unexpected node.js processes are running. Can I kill them?

I don't expect any node.js processes to be running on my Windows computer, yet there are two node.js processes that I can see in Task Manager.
I'm not intentionally running any node.js right now, but the two processes are there.
I had previously run a node.js process via the pm2 module, so maybe that has something to do with it.
Is it OK to kill the two processes manually from Task Manager? Or does either process serve some purpose other than executing the program I wrote, meaning I should keep one or both of them?

Node.js Active handles rise suddenly

I have a Parse Server which is a Node.js + express wrapper for a mobile app (about 100 simultaneous users every day), hosted on DigitalOcean. The app server communicates with MongoDB, which is hosted on another droplet of DigitalOcean. I'm using pm2 as a process manager and its monitoring tool, which is web-based. On the same process, we operate LiveQuery, a WebSocket server made by the Parse community as well.
The thing is, I've been having some performance issues with the server. Everything works smoothly, until the Active handles rise up uncontrollably! (see the image below) It's like after one point the server says "I'm done! Now I rest!"
Usually the active handles stay between 30 and 70. The moment I restart the process with pm2 restart, everything goes back to normal!
I've been having this issue for quite some time now and I haven’t been able to figure out what’s causing it! Any help will be greatly appreciated!
EDIT: I did a stress test where I created 200 LiveQuery sockets for 1 user, instead of the 2 that a user normally has, and there was a spike of 300 active handles for about 5 seconds! The moment the sockets were all created, everything went back to normal!
I usually use restarts based on memory usage:
pm2 start filename.js --max-memory-restart 160M --exp-backoff-restart-delay=100
pm2 also has a built-in cron restart and an autostart script setup in case the server ever reboots; see https://pm2.keymetrics.io/docs/usage/restart-strategies/
It would be good if pm2 provided restart options based on active connections or heap memory.
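To see what the pm2 graph is actually counting, the handle count can be sampled in-process. Note that process._getActiveHandles() is an undocumented internal Node.js API that may change between versions, so treat this as a debugging sketch only:

```javascript
// Debugging sketch: report the number of open libuv handles (sockets,
// timers, servers) so spikes can be correlated with application logs.
// process._getActiveHandles() is internal/undocumented in Node.js.
function activeHandleCount() {
  return process._getActiveHandles().length;
}

function startHandleMonitor(intervalMs = 30000) {
  const timer = setInterval(() => {
    console.log(`active handles: ${activeHandleCount()}`);
  }, intervalMs);
  timer.unref(); // don't keep the process alive just for the monitor
  return timer;
}
```

Logging the count alongside request logs would show whether the runaway handles are the LiveQuery sockets or something else (e.g. unclosed MongoDB cursors).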

How to prevent pm2 from restarting application on error during startup

Context
I've added configuration validation to some of the modules that compose my Node.js application. When they are starting, each one checks whether it is properly configured and has access to the resources it needs (e.g. can write to a directory). If it detects that something is wrong, it sends a SIGINT to itself (process.pid) so the application is gracefully shut down (I close the HTTP server, close any connections to Redis, and so on). I want the operator to realize there is a configuration and/or environment problem and fix it before starting the application.
I use pm2 to start/stop/reload the application and I like the fact that pm2 will automatically restart it in case it crashes later on, but I don't want it to restart my application in the scenario above, because the root cause won't be eliminated by simply restarting the app; pm2 will just keep restarting it up to max_restarts (which defaults to 10).
Question
How can I prevent pm2 from keeping restarting my application when it is aborted during startup?
I know pm2 has the --wait-ready option, but given that we are talking about multiple modules with asynchronous startup logic, I find it very hard to determine where/when to call process.send('ready').
Possible solution
I'm considering making all my modules emit an internal "ready" event and wiring the whole thing together by chaining those events so I can finally send the "ready" signal to pm2, but I would like to ask first whether that would be over-engineering.
Thanks,
Roger
