I have a node.js server app which is being started twice for some reason. I have a cronjob that runs every minute, checking for a node main.js process and if not found, starting it. The cron looks like this:
* * * * * ~/startmain.sh >> startmain.log 2>&1
And the startmain.sh file looks like this:
if ps -ef | grep -v grep | grep "node main.js" > /dev/null
then
echo "`date` Server is running."
else
echo "`date` Server is not running! Starting..."
sudo node main.js > main.log
fi
The log file storing the output of startmain.sh shows this:
Fri Aug 8 19:22:00 UTC 2014 Server is running.
Fri Aug 8 19:23:00 UTC 2014 Server is running.
Fri Aug 8 19:24:00 UTC 2014 Server is not running! Starting...
Fri Aug 8 19:25:00 UTC 2014 Server is running.
Fri Aug 8 19:26:00 UTC 2014 Server is running.
Fri Aug 8 19:27:00 UTC 2014 Server is running.
That is what I expect, but when I look at processes, it seems that two are running: one under sudo and one without. Check out the top two processes:
$ ps -ef | grep node
root 99240 99232 0 19:24:01 ? 0:01 node main.js
root 99232 5664 0 19:24:01 ? 0:00 sudo node main.js
admin 2777 87580 0 19:37:41 pts/1 0:00 grep node
Indeed, when I look at the application logs, I see startup entries happening in duplicate. To kill these processes, I have to use sudo, even for the process that does not start with sudo. When I kill one of these, the other one dies too.
Any idea why I am kicking off two processes?
First, you are starting your node main.js application with sudo in the script startmain.sh. According to the sudo man page:
When sudo runs a command, it calls fork(2), sets up the execution environment as described above, and calls the execve system call in the child process. The main sudo process waits until the command has completed, then passes the command's exit status to the security policy's close method and exits.
So, in your case the process named sudo node main.js is the sudo command itself and the process node main.js is the node.js app. You can easily verify this: run ps auxfw and you will see that the sudo node main.js process is the parent of node main.js.
Another way to verify this is to run lsof -p [process id] and look at the txt entries: for the sudo node main.js process it points to /usr/bin/sudo, while for the node main.js process it points to your node binary.
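For instance, something along these lines (reusing the PIDs from the ps listing above) would confirm it:
$ ps -o pid,ppid,user,args -p 99232,99240    # 99240's PPID should be 99232, the sudo wrapper
$ sudo lsof -p 99232 | grep txt              # program text should be /usr/bin/sudo
$ sudo lsof -p 99240 | grep txt              # program text should be your node binary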
The bottom line is that you should not worry that your node.js app starts twice.
Related
I installed froxlor a while back and uninstalled it again because it didn't fit my needs. The server I'm running is a Debian web server. After inspecting the system log file using
grep CRON /var/log/syslog
I noticed that there are still some froxlor things going on.
Most noticeable are log entries like:
Jun 25 10:55:01 v220200220072109810 CRON[5633]: (root) CMD (/usr/bin/nice -n 5 /usr/bin/php -q /var/www/froxlor/scripts/froxlor_master_cronjob.php --tasks 1> /dev/null)
Jun 25 11:00:01 v220200220072109810 CRON[5727]: (root) CMD (/usr/bin/nice -n 5 /usr/bin/php -q /var/www/froxlor/scripts/froxlor_master_cronjob.php --tasks 1> /dev/null)
However, when inspecting the crontab for the root user, I don't have any active crontab entries. Any ideas on how to fix this issue?
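One rough place to look (the exact file name below is an assumption): root-level entries logged like that don't have to come from root's crontab; they can also live in /etc/crontab or in files under /etc/cron.d, which is where froxlor typically drops its job.
$ grep -ril froxlor /etc/crontab /etc/cron.d/    # list system-wide cron files that still mention froxlor
$ sudo rm /etc/cron.d/froxlor                    # remove the leftover file if one turns up (name is a guess)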
I am having an issue getting the .sh file to run. I can run it via sudo ./starter.sh when inside my app directory, but relying on it to run at reboot isn't working.
I am using an Ubuntu 12.04 VM on Windows 7. I have my files on Windows shared with the VM, so I access my files via /mnt/hgfs/nodejs-test
I am running a node.js server with Nginx via my local VM.
As of right now, I can go to http://node.dev and it will properly load up my server.js located in nodejs-test (/mnt/hgfs/nodejs-test) and output hello world to the screen.
So running the site isn't a problem, but getting forever (forever.js installed globally) to kick in on reboot isn't working. I suspect it simply can't execute my .sh file.
Here is my starter.sh
#!/bin/sh
if [ $(ps aux | grep $USER | grep node | grep -v grep | wc -l | tr -s "\n") -eq 0 ]
then
export PATH=/usr/local/bin:$PATH
forever start --sourceDir /mnt/hgfs/nodejs-test/server.js >> /mnt/hgfs/nodejs-test/serverlog.txt 2>&1
fi
Now I have tried sudo crontab -e (and added the entry to that file) as well as just crontab -e and did the same thing. Upon reboot... nothing.
@reboot /mnt/hgfs/nodejs-test/starter.sh
I tried editing that cronjob to this
@reboot /var/www/nodejs-test/starter.sh
because I created a symlink in /var/www/nodejs-test to /mnt/hgfs/nodejs-test
Where can I check to see if an error fires on reboot, or is it possible my reboot cron isn't running at all? I know running the starter.sh DOES work though.
EDIT: The /mnt/hgfs/nodejs-test directory is owned by root (which might be a Windows thing, given the files exist on my Windows 7 OS). My Ubuntu user is "bkohlmeier", which I created when installing the VM.
EDIT #2
Nov 10 13:05:01 ubuntu cron[799]: (CRON) INFO (pidfile fd = 3)
Nov 10 13:05:01 ubuntu cron[875]: (CRON) STARTUP (fork ok)
Nov 10 13:05:01 ubuntu cron[875]: (CRON) INFO (Running @reboot jobs)
Nov 10 13:05:02 ubuntu CRON[887]: (bkohlmeier) CMD (/mnt/hgfs/nodejs-test/starter.sh)
Nov 10 13:05:02 ubuntu CRON[888]: (root) CMD (/var/www/nodejs-test/starter.sh >/dev/null 2>&1)
Nov 10 13:05:02 ubuntu CRON[877]: (CRON) info (No MTA installed, discarding output)
Nov 10 13:05:02 ubuntu CRON[878]: (CRON) info (No MTA installed, discarding output)
Ok, I found a solution, though whether it is the "right" solution I don't know. Because my system shares files between host and guest (Windows 7 and the Ubuntu VM), /mnt/hgfs has to get mounted first, and I think the reboot happens quickly enough that the cron job fires before the mount is ready.
So I added this via crontab -e
@reboot /bin/sleep 15; /var/www/nodejs-test/starter.sh
and it worked like a charm.
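A slightly more robust sketch of the same idea (the wrapper path and timeout below are assumptions): poll until the hgfs share is actually visible instead of sleeping a fixed 15 seconds, and point the @reboot entry at the wrapper.
#!/bin/sh
# hypothetical wrapper, e.g. /var/www/nodejs-test/wait-and-start.sh
# crontab entry would become: @reboot /var/www/nodejs-test/wait-and-start.sh
tries=0
until [ -d /mnt/hgfs/nodejs-test ] || [ "$tries" -ge 30 ]; do
    sleep 2                  # wait up to ~60 seconds for the share to appear
    tries=$((tries + 1))
done
/var/www/nodejs-test/starter.sh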
I want to have multiple httpd services running on a CentOS box, so that if I'm developing a mod_perl script and need to restart one of them, the others can run independently. I had this setup on Windows and am migrating.
Naturally this means separate PID files. I configure mine using the PidFile directive in httpd.conf, and point the init.d script to the same place. It creates the file okay, but does not populate it with all PIDs:
$ sudo killall httpd ; sudo service httpd-dev restart
Stopping httpd: cat: /var/run/httpd/httpd-dev.pid: No such file or directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
Starting httpd: [ OK ]
$ sudo cat /var/run/httpd/httpd-dev.pid
18279
$ ps -A | grep httpd
18279 ? 00:00:00 httpd
18282 ? 00:00:00 httpd
18283 ? 00:00:00 httpd
18284 ? 00:00:00 httpd
18285 ? 00:00:00 httpd
18286 ? 00:00:00 httpd
18287 ? 00:00:00 httpd
18288 ? 00:00:00 httpd
18289 ? 00:00:00 httpd
...why might this be? It makes it hard to kill just my dev httpd processes later, when there will be other httpds running. I can't just use 'killall' forever...
$ httpd -v
Server version: Apache/2.2.24 (Unix)
I should note that CentOS 6.4 minimal didn't come with killproc installed, so I changed my init.d to use
kill -9 `cat ${pidfile}`
instead. I guess killproc would search out the child PIDs? So I have to install Python just to get killproc, just to use init scripts for httpd?
There are two things here:
Your single Apache instance might have several PIDs associated with it, depending on the MPM selected. However, this should not affect you, since you only need to signal the PID written into the PID file; that parent process will take down the rest of the Apache instance.
If you try to run several Apache instances side by side, you'll have to specify a different PID file for each one. Then you can decide which instances you want to kill: just use the PID file of each instance you selected. Giving the same PID file to several instances and expecting each to record its own PID in it will not work.
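A minimal sketch of that second point (config paths and names here are assumptions; each instance also needs its own Listen port, logs, and so on):
# each instance gets its own config, and each config its own PidFile:
#   /etc/httpd/conf/httpd-dev.conf   ->  PidFile /var/run/httpd/httpd-dev.pid
#   /etc/httpd/conf/httpd-main.conf  ->  PidFile /var/run/httpd/httpd-main.pid
$ sudo httpd -f /etc/httpd/conf/httpd-dev.conf            # start only the dev instance
$ sudo httpd -f /etc/httpd/conf/httpd-dev.conf -k stop    # stop only the dev instance
$ sudo kill $(cat /var/run/httpd/httpd-dev.pid)           # equivalent: signal the parent PID from its own file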
I'm trying to use a cron job to call a healthcheck script I've written, which checks the status of a web app (an API) I've also written (a URL call alone doesn't exercise the full functionality, hence the custom healthcheck). The healthcheck app has several endpoints which are called from a shell script (see below), and that script restarts the bigger web app we are checking. Naturally, I'm having trouble.
How it works:
1) cron job runs every 60s
2) healthcheck script is run by cron job
3) healthcheck script checks the URL; if it returns a non-200 response, it stops and starts a service
What works:
1) I can run the script (healthcheck.sh) as the ec2-user
2) I can run the script as root
3) The cron job calls the script and it runs, but it doesn't stop/start the service (I can see this by watching /tmp/crontest.txt and ps aux).
It seems like a permissions issue or some very basic Linux thing that I'm not aware of.
The log when I run as root or ec2-user (/tmp/crontest.txt):
Fri Nov 23 00:28:54 UTC 2012
healthcheck.sh: api not running, restarting service!
api start/running, process 1939 <--- it restarts the service properly!
The log when the cron job runs:
Fri Nov 23 00:27:01 UTC 2012
healthcheck.sh: api not running, restarting service! <--- no restart
Cron file (in /etc/cron.d):
# Call the healthcheck every 60s
* * * * * root /srv/checkout/healthcheck/healthcheck.sh >> /tmp/crontest.txt
Upstart script (/etc/init/healthcheck.conf) - this is for the healthcheck app, which provides the endpoints we call from the shell script healthcheck.sh:
#/etc/init/healthcheck.conf
description "healthcheck"
author "me"
env USER=ec2-user
start on started network
stop on stopping network
script
# We run our process as a non-root user
# Upstart user guide, 11.43.2 (http://upstart.ubuntu.com/cookbook/#run-a-job-as-a-different-user)
exec su -s /bin/sh -c "NODE_ENV=production /usr/local/bin/node /srv/checkout/healthcheck/app.js" $USER
end script
Shell script permissions:
-rwxr-xr-x 1 ec2-user ec2-user 529 Nov 23 00:16 /srv/checkout/healthcheck/healthcheck.sh
Shell script (healthcheck.sh):
#!/bin/bash
API_URL="http://localhost:4567/api"
echo `date`
status_code=`curl -s -o /dev/null -I -w "%{http_code}" $API_URL`
if [ 200 -ne $status_code ]; then
echo "healthcheck.sh: api not running, restarting service!"
stop api
start api
fi
Add the path to the start/stop commands to your script:
#!/bin/bash
PATH=$PATH:/sbin/
or use the full path to the start and stop commands:
/sbin/stop api
You can check the path to them using whereis:
$ whereis start
/sbin/start
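Putting that together, a minimal sketch of healthcheck.sh with the PATH fix applied (assuming upstart's start/stop live in /sbin, as the whereis output shows):
#!/bin/bash
# cron runs with a minimal PATH, so make sure /sbin (where start/stop live) is on it
PATH=$PATH:/sbin
API_URL="http://localhost:4567/api"
echo `date`
status_code=`curl -s -o /dev/null -I -w "%{http_code}" $API_URL`
if [ "$status_code" != "200" ]; then    # string compare also survives an empty response
    echo "healthcheck.sh: api not running, restarting service!"
    stop api
    start api
fi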
Answer found in another question!
Basically, cron jobs run in a limited environment, so when the script calls 'start [service]', the start command itself is not found!
Modifying the script to look like so makes it work:
#!/bin/bash
PATH="/bin:/sbin:/usr/bin:/usr/sbin:/opt/usr/bin:/opt/usr/sbin:/usr/local/bin:/usr/local/sbin"
...
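If you want to see exactly what environment your cron jobs actually get (and confirm why start wasn't on the PATH), a throwaway entry like this helps; the output file name is arbitrary:
# temporary crontab line; dumps cron's environment once a minute
* * * * * env > /tmp/cron-env.txt
# then compare it with your login shell's environment
$ diff <(sort /tmp/cron-env.txt) <(env | sort)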
I have started an app with
forever start app.js
After that I typed,
forever list
and it shows that
The "sys" module is now called "util". It should have a similar interface.
info: No forever processes running
But I checked my processes with
ps aux | grep node
and it shows that
root 1184 0.1 1.5 642916 9672 ? Ss 05:37 0:00 node /usr/local/bin/forever start app.js
root 1185 0.1 2.1 641408 13200 ? Sl 05:37 0:00 node /var/www/app.js
ubuntu 1217 0.0 0.1 7928 1060 pts/0 S+ 05:41 0:00 grep --color=auto node
I cannot control the process, since I cannot list it with "forever list".
How can I make forever aware of its running processes so that I can control them?
forever list should be invoked as the same user that owns the processes.
Generally that is the root user (in the case of Ubuntu upstart, unless specified otherwise), so you can switch to root using sudo su and then try forever list.
PS: I moved to pm2 recently, which has a lot more features than forever.
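For example (forever keeps its per-user state under $HOME/.forever, so you have to list as the owning user; the ps check below is just one way to find that user):
ps -o user= -C node    # shows which user owns the node/forever processes (root, in this case)
sudo su -
forever list           # run as root, this now lists the processes started by root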
I had the same problem today.
In my case: I'm using NVM and forgot that it doesn't set/modify the global node path, so I had to set it manually:
export NODE_PATH="/root/.nvm/v0.6.0/bin/node"
If you exec forever start app.js within init.d, you should later type sudo HOME=/home/pi/devel/web-app -u root forever list to get the correct list.
A proper fix for this would be great.
Encountered this one as well.
I believe this issue was logged here.
What I can recommend for now is to find the process that's using your node port, e.g. 3000, like so:
sudo lsof -t -i:3000
That command will show the process id.
Then kill the process by performing:
kill PID
Then switch to root and list again:
sudo su
forever list
This will output the correct list (the processes started by the root user).