High process count on cPanel hosted node.js app, and increasing - node.js

I have a single Node.js app running on my shared web host. cPanel shows 67/100 processes and 7 entry processes.
The thing is, the site currently doesn't do anything except let users view it.
The number of processes when I first deployed the app a week ago was only 11/100, but it keeps rising gradually for no apparent reason.
I was wondering if my code has any issue that could be causing this. It is fairly simple, but there may be something I do not see.
My entire project is hosted on github at https://github.com/ravindukrs/HackX-Jr-Web
===================
What I tried
I stopped the app from cPanel, but the number of processes didn't go down. It did slightly reduce the CPU usage, though.
Note
CPU usage remains 0/100 even when the app is running.
I am not a great developer, so the code may not be optimized, but I was just wondering if I am creating any processes that never end.
The site is currently hosted at https://hackxjr.lk
Thank you in advance.
Update: the count is still going up.

Here is my experience with this same problem and how I resolved it.
I set up a simple Node.js app on my Namecheap host, and about a day later my whole domain was unavailable. I checked cPanel and noticed 200/200 processes were running. So I contacted support and they said this:
As I have checked, there are a lot of stuck NodeJS processes. I will take the necessary actions from my end and set up a cron job for you that will remove the stuck processes automatically, so you won't face such an issue again. Please give me 7-10 minutes.
Here is the cron job they set up:
ps faux | grep lsnode | grep -v 'grep' > $HOME/tmp_data; IFS=$'\n'; day=$(date +%d); for i in $(cat $HOME/tmp_data); do for b in $i; do echo $i | awk -F[^0-9]* '{print $2" "$9}' | awk -v day1=$(date +%d) '{if($2+2<day1)print$1}' | xargs kill -9 && echo "NodeJS process killed"; done; done >/dev/null 2>&1
I have not had an issue since.
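For reference, that one-liner parses the day-of-month out of ps faux and kills lsnode processes started more than two days ago. A sketch of the same idea (my rewrite, not the support command) that uses the elapsed-seconds field instead of parsing columns, and so also survives month boundaries:

```shell
#!/bin/sh
# Kill lsnode processes (LiteSpeed's Node.js wrapper) older than two days.
# Uses ps's etimes (elapsed seconds) instead of parsing the START column.
MAX_AGE=172800   # two days in seconds
ps -eo pid=,etimes=,comm= | while read -r pid etime comm; do
    case "$comm" in
        *lsnode*)
            if [ "$etime" -gt "$MAX_AGE" ]; then
                kill -9 "$pid" && echo "NodeJS process $pid killed"
            fi
            ;;
    esac
done
```
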

I also had this problem on Namecheap. Strange that it is always them…
Support told me it had to do with their CageFS and that it can only be fixed/reset via support.
Edit:
Support gave me a new cron job to run:
kill -9 $(ps faux | grep node | grep -v grep | awk {'print $2'})
For me, this one works better than the command from Gerardo.

You can stop unused processes by running this command:
/usr/bin/pkill -9 lsnode

I have encountered the exact same issue you are describing. My hosting provider for Node.js apps and PHP sites is Namecheap too; strange that their name keeps popping up in this thread.
This is what Namecheap support said:
According to our check, the issue was caused by the numerous stuck processes generated by the Node.js app. We have removed them and the websites are now up. In case the issue reappears, we would recommend contacting a web developer to optimize your app and/or setting up the following cron job to kill the processes:
/usr/bin/pkill -9 lsnode >/dev/null 2>&1
If you are using cPanel, this article might help you to set up a cron job: https://www.namecheap.com/support/knowledgebase/article.aspx/9453/29/how-to-run-scripts-via-cron-jobs/
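If you go the cron route with that command, the crontab entry might look like this (the every-six-hours schedule is just an example):

```shell
0 */6 * * * /usr/bin/pkill -9 lsnode > /dev/null 2>&1
```
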

Related

Why do we use "/etc/init.d/process start"

Why do we use /etc/init.d/httpd start in the below program? Why can't we use service httpd start? For me it's showing as unrecognized service. (I have installed httpd already.)
#!/bin/bash
# "> 1" because the grep itself also appears in the ps output
if (( $(ps -ef | grep httpd | wc -l) > 1 ))
then
    echo "httpd is running!!!"
else
    /etc/init.d/httpd start
fi
:-) vishal, I don't mean to frustrate you, but it's difficult to answer your question without a lot of assumptions.
Some considerations against using /etc/init.d/httpd start:
it hard-codes the location
it assumes that httpd is stored in the file /etc/init.d/httpd and not, say, apache2 or nginx or something else
even the ps -ef test assumes the process name is httpd, and sometimes it's not
Some considerations against using service httpd start:
there may also be good reasons for not using service httpd start in this script, because it may have side effects; for example:
service may not have httpd registered as a service
if you use service you may end up relaunching other dependent services, which you may not want to do
service may bury errors during startup that you would want to see
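Given those caveats, here is a sketch (my variant, not the original script) that sidesteps the ps|grep self-match by using pidof, and tries the service wrapper before falling back to the hard-coded init script. "httpd" is still an assumed name; substitute apache2, nginx, etc. as needed:

```shell
#!/bin/sh
# pidof matches the exact program name, so the check doesn't count
# its own grep the way "ps -ef | grep httpd" does.
if pidof httpd > /dev/null 2>&1
then
    echo "httpd is running!!!"
else
    # prefer the service wrapper when present, fall back to the init script
    service httpd start 2>/dev/null || /etc/init.d/httpd start
fi
```
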

Stop a running dotnet core website running on kestrel

When deploying a new version of an existing .NET Core website, how do I first safely stop the old running Kestrel app?
Here is an example of what I would like to write (pseudo deployment script):
dotnet stop mysite/mysite.dll <---- this line here
mv mysite/ mysite.bak/
cp newly-published-mysite/ mysite/
dotnet run mysite/mysite.dll
killall dotnet seems a little unsafe. How would it work if I were hosting two small sites on one box?
According to this discussion, there is currently no safe way to stop Kestrel. You need to find the PID by the name of your dll and kill it:
kill $(ps aux | grep '[M]ySite.dll' | awk '{print $2}')
(The [M] bracket trick keeps the grep process itself out of the match.)
In the case of a process tree, you need to manually grep all child PIDs and call kill for each one, as is done in the Microsoft.Extensions.Internal.ProcessExtensions.KillTree method (correct link from the discussion).
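In shell, that grep-the-children idea can be sketched as a small recursive function (kill_tree is a made-up name; pgrep -P lists the children of a given PID):

```shell
#!/bin/sh
# Recursively kill a process and all of its descendants, children first.
# The subshell body gives each recursive call its own argument scope.
kill_tree() (
    for child in $(pgrep -P "$1"); do
        kill_tree "$child"
    done
    kill -9 "$1" 2>/dev/null
)

# e.g.: kill_tree "$(pgrep -f MySite.dll)"
```
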
I have decided to use supervisor to monitor and manage the process. Here is an excellent article on getting it set up.
It allows simple control over specific dotnet apps like this:
supervisorctl stop MyWebsiteName
supervisorctl start MyWebsiteName
And it has one huge advantage: it can try to restart the process if it falls over, or when the system reboots... for whatever reason.

'top -p PID -n1' not working as cronjob

With Totem playing a video, running top -p $(pidof totem) -n1 from the terminal works perfectly. Extending it slightly to top -p $(pidof totem) -n1 > ~/test.txt and running it as a once-per-minute cron job, test.txt is written to, but it is blank. No errors are produced, or at least nothing is listed in the log files, and cron doesn't send any mail alerts indicating a problem either. I have no problem getting cron to run other scripts.
The same thing occurs with all 3 Linux distros I have - Mint 17.2, Deepin 2014, and Ubuntu 10.04.
I have found posts from several people who have encountered the same problem, but unfortunately none of the suggestions they received (below) work for me:
adding the following:
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
export DISPLAY=:0.0
or any of the following:
export TERM=xterm
top -bn1 | head
TERM=dumb top -n1 | head
TERM=xterm top -n1 | head
At this stage, I'm beginning to think that it's top itself that's buggy. After spending many hours working on a Bash script that's based around top -p, it's pretty demoralising to discover that once done, I now can't get it to run as a cronjob.
Can anyone offer any suggestions as to what the problem might be?
Edit: I'm a mite embarrassed to admit that top -p PID -bn1 | head does work after all! I must have mistyped something when I tried it the first time around. Never mind, I hope it's of help to someone in the future.
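The key detail is that under cron there is no terminal attached, so top must run in batch mode (-b). A crontab entry along these lines should work (totem and the output path are taken from the question; if totem isn't running, pidof prints nothing and top's complaint lands in the file via 2>&1):

```shell
* * * * * top -b -n1 -p $(pidof totem) > $HOME/test.txt 2>&1
```
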

How to log the Ram and CPU usage for Linux Processes

How would I track the CPU and Ram usage for a process that may run, stop, and then re-run with a different PID?
I am looking to track this information for all processes on a Linux server, but the problem is that when a process stops and restarts it will have a different PID, and I am not sure how to identify it as the same process.
What you're looking for here is called "process accounting".
http://tldp.org/HOWTO/Process-Accounting/
If you know the command of the process, just pipe it to a grep like this:
ps ux | grep yourcommandgoeshere
You can set up a crontab to record the output of commands like:
top -b -n1 | grep
ps ux | grep
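For example, a small logging script that matches by command name rather than PID survives restarts (mysqld and the log path are placeholders; adjust for your process):

```shell
#!/bin/sh
# Append a timestamped CPU/RAM snapshot for every process whose command
# name matches NAME. Matching by name means a restarted process (new PID)
# still ends up in the same log.
NAME="mysqld"
LOG="$HOME/usage-$NAME.log"
ps -C "$NAME" -o pid=,%cpu=,%mem=,etime= | while read -r line; do
    echo "$(date '+%F %T') $line" >> "$LOG"
done
```

Run it from cron once a minute and you get a crude time series you can grep or plot later.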
Alternatively, you can use the SeaLion service. By simply installing the agent and configuring it in a few simple steps, you can see the output of the executed commands online.
Hope it helps...

How to make sure an application keeps running on Linux

I'm trying to ensure a script remains running on a development server. It collates stats and provides a web service, so it's supposed to persist, yet a few times a day it dies off for unknown reasons. When we notice, we just launch it again, but it's a pain in the rear and some users don't have permission (or the know-how) to launch it.
The programmer in me wants to spend a few hours getting to the bottom of the problem but the busy person in me thinks there must be an easy way to detect if an app is not running, and launch it again.
I know I could cron-script ps through grep:
ps -A | grep appname
But again, that's another hour of my life wasted on doing something that must already exist... Is there not a pre-made app that I can pass an executable (optionally with arguments) and that will keep a process running indefinitely?
In case it makes any difference, it's Ubuntu.
I have used a simple script with cron to make sure that the program is running. If it is not, then it will start it up. This may not be the perfect solution you are looking for, but it is simple and works rather well.
#!/bin/bash
# make-run.sh
# Make sure a process is always running.
export DISPLAY=:0 # needed if you are running a simple GUI app
process=YourProcessName
makerun="/usr/bin/program"
if ps ax | grep -v grep | grep "$process" > /dev/null
then
    exit
else
    $makerun &
fi
exit
Then add a cron job every minute, or every 5 minutes.
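For example, the five-minute variant of that crontab entry might be (the script path is an assumption):

```shell
*/5 * * * * /bin/bash /path/to/make-run.sh > /dev/null 2>&1
```
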
Monit is perfect for this :)
You can write simple config files which tell monit to watch e.g. a TCP port, a PID file etc
monit will run a command you specify when the process it is monitoring is unavailable/using too much memory/is pegging the CPU for too long/etc. It will also pop out an email alert telling you what happened and whether it could do anything about it.
We use it to keep a load of our websites running while giving us early warning when something's going wrong.
-- Your faithful employee, Monit
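For illustration, a minimal monit config block might look like this (the process name, pidfile, port, and memory limit are all assumptions for the sketch):

```
check process myapp with pidfile /var/run/myapp.pid
    start program = "/etc/init.d/myapp start"
    stop program  = "/etc/init.d/myapp stop"
    if failed port 8080 protocol http then restart
    if totalmem > 200 MB for 5 cycles then restart
```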
Notice: Upstart is in maintenance mode and was abandoned by Ubuntu, which now uses systemd. Check the systemd manual for details on how to write a service definition.
Since you're using Ubuntu, you may be interested in Upstart, which replaced the traditional SysV init. One key feature is that it can restart a service if it dies unexpectedly. Fedora moved to Upstart, and Debian has it in experimental, so it may be worth looking into.
This may be overkill for this situation though, as a cron script will take 2 minutes to implement.
#!/bin/bash
if [[ ! `pidof -s yourapp` ]]; then
    invoke-rc.d yourapp start
fi
If you are using a systemd-based distro such as Fedora and recent Ubuntu releases, you can use systemd's "Restart" capability for services. It can be setup as a system service or as a user service if it needs to be managed by, and run as, a particular user, which is more likely the case in OP's particular situation.
The Restart option takes one of no, on-success, on-failure, on-abnormal, on-watchdog, on-abort, or always.
To run it as a user, simply place a file like the following into ~/.config/systemd/user/something.service:
[Unit]
Description=Something
[Service]
ExecStart=/path/to/something
Restart=on-failure
[Install]
WantedBy=graphical.target
then:
systemctl --user daemon-reload
systemctl --user [status|start|stop|restart] something
No root privilege / modification of system files needed, no cron jobs needed, nothing to install, flexible as hell (see all the related service options in the documentation).
See also https://wiki.archlinux.org/index.php/Systemd/User for more information about using the per-user systemd instance.
From cron I have used: killall -0 programname || /etc/init.d/programname start. kill will return an error if the process doesn't exist; if it does exist, it delivers a null signal, which the kernel ignores and doesn't bother passing on.
This idiom is simple to remember (IMHO). Generally I use this while I'm still trying to discover why the service itself is failing. IMHO a program shouldn't just disappear unexpectedly :)
Put your command in a loop, so that when it exits it runs again: while true; do ./myapp; done
I couldn't get Chris Wendt's solution to work for some reason, and it was hard to debug. This one is pretty much the same but easier to debug, and it excludes bash from the pattern matching. To debug, just run: bash /root/makerun-mysql.sh. In the following example with mysql-server, just replace the values of the process and makerun variables for your process.
Create a bash script like this (nano /root/makerun-mysql.sh):
#!/bin/bash
process="mysql"
makerun="/etc/init.d/mysql restart"
if ps ax | grep -v grep | grep -v bash | grep --quiet "$process"
then
    printf "Process '%s' is running.\n" "$process"
    exit
else
    printf "Starting process '%s' with command '%s'.\n" "$process" "$makerun"
    $makerun
fi
exit
Make sure it's executable by setting proper file permissions (e.g. chmod 700 /root/makerun-mysql.sh).
Then add this to your crontab (crontab -e):
# Keep processes running every 5 minutes
*/5 * * * * bash /root/makerun-mysql.sh
The supervise tool from daemontools would be my preference - but then everything Dan J Bernstein writes is my preference :)
http://cr.yp.to/daemontools/supervise.html
You have to create a particular directory structure for your application startup script, but it's very simple to use.
First of all, how do you start this app? Does it fork itself to the background? Is it started with nohup ... & etc.? If it's the latter, check nohup.out for why it died; if it's the former, build in logging.
As for your main question: you could cron it, or run another process in the background (not the best choice) and use pidof in a bash script, easy enough:
if ! pidof -s app > /dev/null; then
    nohup app &
fi
You could make it a service launched from inittab (although some Linuxes have moved on to something newer in /etc/event.d). These built in systems make sure your service keeps running without writing your own scripts or installing something new.
It's a job for a DMD (daemon monitoring daemon). There are a few around, but I usually just write a script that checks whether the daemon is running, starts it if not, and put it in cron to run every minute.
Check out 'nanny' referenced in Chapter 9 (p197 or thereabouts) of "Unix Hater's Handbook" (one of several sources for the book in PDF).
A nice, simple way to do this is as follows:
Write your server to die if it can't listen on the port it expects
Set a cronjob to try to launch your server every minute
If it isn't running it'll start, and if it is running it won't. In any case, your server will always be up.
I think a better solution is to test the functionality, too. For example, if you had to monitor Apache, it is not enough to test that "apache" processes exist on the system.
If you want to test whether Apache is actually OK, try to download a simple web page and check that your unique code is in the output.
If not, kill Apache with -9 and then restart it. And send a mail to root (a forwarded mail address that reaches the admins of the company/server/project).
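That check-the-output approach might be sketched like this (the URL, the marker string, and the restart commands are all assumptions for illustration):

```shell
#!/bin/sh
# Functional check: fetch a known page and look for a unique marker.
# A running-but-wedged apache fails this even though its processes exist.
if curl -fsS --max-time 10 http://localhost/check.html | grep -q 'UNIQUE-CODE'
then
    exit 0
fi
# page missing or marker absent: hard-kill and restart, then notify root
pkill -9 httpd
service httpd start
echo "apache restarted on $(hostname) at $(date)" | mail -s "apache restarted" root
```
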
It's even simpler:
#!/bin/bash
export DISPLAY=:0
process=processname
makerun="/usr/bin/processname"
if ! pgrep "$process" > /dev/null
then
    $makerun &
fi
You do have to remember, though, to make sure that processname is unique.
One can install a minutely monitoring cron job like this:
crontab -l > crontab; echo '* * * * * export DISPLAY=":0.0" && for app in "eiskaltdcpp-qt" "transmission-gtk" "nicotine"; do ps aux | grep -v grep | grep -q "$app" || "$app" & done' >> crontab; crontab crontab
The disadvantage is that each app name you enter has to appear in the ps aux | grep "appname" output and, at the same time, be launchable under that same name: "appname" &
You can also use the pm2 library.
If it's a Node app, install pm2 globally via npm:
sudo npm install pm2 -g
Then you can run a service or script:
sudo pm2 start [service_name]
or a Node app:
pm2 start index.js
