Using Cron to make Forever.js Reboot-Proof - node.js

I'm trying to launch Forever.js on system restart using a bash script (named starter.sh) that checks whether my app is running:
#!/bin/sh
if [ $(ps -e -o uid,cmd | grep $UID | grep node | grep -v grep | wc -l | tr -s "\n") -eq 0 ]
then
export PATH=/usr/local/bin:$PATH
forever start --sourceDir ~/var/www/mysite app.js >> ~/var/www/mysite/log.txt 2>&1
fi
Then I appended the following line to crontab:
@reboot ~/var/www/mysite/starter.sh
but after restarting the system (sudo reboot) Forever.js doesn't start.
In the log file I receive the following messages:
/root/var/www/mysite/starter.sh: 6:
/root/var/www/mysite/starter.sh: forever: not found
Any idea?
P.S.
If I call Forever from the command line (forever start --sourceDir ~/var/www/mysite app.js), everything works properly.

I would look into something like upstart to start/stop your node scripts on reboot. This post goes into a lot of detail about doing exactly what you're after, and you can possibly simplify the setup a bit for your needs:
https://www.exratione.com/2013/02/nodejs-and-forever-as-a-service-simple-upstart-and-init-scripts-for-ubuntu/
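For reference, a minimal Upstart job might look like this sketch (the job name, paths, and log file here are hypothetical, so adjust them to your install):
# /etc/init/mysite.conf (hypothetical)
description "mysite node app"
start on runlevel [2345]
stop on runlevel [016]
respawn
exec /usr/local/bin/node /var/www/mysite/app.js >> /var/log/mysite.log 2>&1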
But if you're not running ubuntu or similar, each environment has its own start/stop services type of thing. On Mac OS X you can use launchd instead. launchd has a lot of features, but hopefully this post can guide you in the right direction:
http://paul.annesley.cc/2012/09/mac-os-x-launchd-is-cool/

The missing piece:
n=$(which node);n=${n%/bin/node}; chmod -R 755 $n/bin/*; sudo cp -r $n/{bin,lib,share} /usr/local
This series of commands puts forever into /usr/local/bin/.
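If you'd rather not copy binaries around, an alternative sketch is to put the real node/forever location on PATH inside starter.sh; the nvm-style directory below is hypothetical, so substitute the output of which forever on your system:
#!/bin/sh
# Hypothetical install location; check yours with: which forever
export PATH="$HOME/.nvm/versions/node/v0.10.26/bin:/usr/local/bin:$PATH"
forever start --sourceDir ~/var/www/mysite app.js >> ~/var/www/mysite/log.txt 2>&1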

Related

Why doesn't tcpdump run in background?

I logged in to a virtual machine via SSH and tried to run a script in the background; the script is shown below:
#!/bin/bash
APP_NAME=`basename $0`
CFG_FILE=$1
. $CFG_FILE #just some variables
CMD=$2
PID_FILE="$PIDS_DIR/$APP_NAME.pid"
CUR_LOG_DIR=$LOGS_RUNNING
echo $$ > $PID_FILE
#Main script code
#This script shall be called using the following syntax
# $ nohup script_name output_dir &
TIMESTAMP=`date +"%Y%m%d%H%M%S"`
CAP_INTERFACE="eth0"
/usr/sbin/tcpdump -nei $CAP_INTERFACE -s 65535 -w file_result
rm $PID_FILE
The result should be tcpdump running in the background, writing the captured packets to file_result.
The script is called with:
nohup $SCRIPT_NAME $CFG_FILE start &
And it is stopped by calling the STOP_SCRIPT:
##STOP_SCRIPT
PID_FILE="$PIDS_DIR/$APP_NAME.pid"
if [ -f $PID_FILE ]
then
PID=`cat $PID_FILE`
# send SIGTERM to kill all children of $PID
pkill -TERM -P $PID
fi
When I check file_result after running the stop script, it is empty.
What is happening? How can I solve it?
I found this link: https://it.toolbox.com/question/launching-tcpdump-processes-in-background-using-ssh-060614
The author seems to have faced a similar issue. They discuss race conditions, but I didn't completely understand it.
I'm not sure what you're trying to accomplish by having the startup script itself continue to run, but here's an approach that I think does what you're after: start tcpdump and have it continue to run, immune to hangups, via nohup. I've simplified things a bit for illustrative purposes; feel free to add any variables back as you see fit, such as the nohup.out output directory, TIMESTAMP, etc.
Script #1: tcpdump_start.sh
#!/bin/sh
rm -f nohup.out
nohup /usr/sbin/tcpdump -ni eth0 -s 65535 -w file_result.pcap &
# Write tcpdump's PID to a file
echo $! > /var/run/tcpdump.pid
Script #2: tcpdump_stop.sh
#!/bin/sh
if [ -f /var/run/tcpdump.pid ]
then
kill `cat /var/run/tcpdump.pid`
echo tcpdump `cat /var/run/tcpdump.pid` killed.
rm -f /var/run/tcpdump.pid
else
echo tcpdump not running.
fi
To start tcpdump, just run tcpdump_start.sh.
To stop the tcpdump instance started with tcpdump_start.sh, just run tcpdump_stop.sh.
The captured packets will be written to the file_result.pcap file, and yes, it's a pcap file, not a text file, so it helps to name it with the proper file extension. The tcpdump statistics will be written to the nohup.out file when tcpdump is terminated.
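As a quick sanity check (a sketch), tcpdump can read the capture back itself; -r reads packets from a savefile:
# Print the first few captured packets in human-readable form
tcpdump -nr file_result.pcap | head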
I too faced problems when running tcpdump over an SSH session.
In my case, I was running
sudo nohup tcpdump -w {pcap_dump_file} {filter} > /dev/null 2>&1 &
Running this command as a background process over a Paramiko SSH session was the problem.
To get around this, I used the screen utility on Linux.
screen is an easy-to-use tool for keeping long-running processes alive, like a service.
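A minimal sketch of that approach (the session name and output path are hypothetical):
# Start tcpdump inside a detached screen session named "capture";
# reattach later with: screen -r capture
# (assumes sudo won't prompt for a password, e.g. a root shell or NOPASSWD)
screen -dmS capture sudo tcpdump -w /tmp/dump.pcap port 443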
This might be an old post, but it is still relevant. I couldn't understand why no file was being created, only to realise that the file might not be created until a certain amount of data has been captured.
https://github.com/the-tcpdump-group/tcpdump/issues/485
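If that buffering is what's biting you, tcpdump's -U (packet-buffered) option flushes each packet to the savefile as it is captured, at some cost in efficiency; a sketch based on the command above:
# -U writes each packet to file_result.pcap as soon as it arrives,
# so the file grows immediately instead of waiting for a full buffer
/usr/sbin/tcpdump -U -ni eth0 -s 65535 -w file_result.pcap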

how to daemonize a script

I am trying to use daemon on Ubuntu, but I am not sure how to use it even after reading the man page.
I have the following testing script foo.sh
#!/bin/bash
while true; do
echo 'hi' >> ~/hihihi
sleep 10
done
Then I tried this command but nothing happened:
daemon --name="foo" -b ~/daemon.out -l ~/daemon.err -v -- foo.sh
The file hihihi was not updated, and I found this in the errlog:
20161221 12:12:36 foo: client (pid 176193) exited with 1 status
How could I use the daemon command properly?
AFAIK, most daemon or daemonize programs change the current directory to root as part of the daemonization process. That means you must give the full path of the command:
daemon --name="foo" -b ~/daemon.out -l ~/daemon.err -v -- /path/to/foo.sh
If that still does not work, you can try specifying a shell explicitly:
daemon --name="foo" -b ~/daemon.out -l ~/daemon.err -v -- /bin/bash -c /path/to/foo.sh
It is not necessary to use the daemon command in bash. You can daemonize your script manually. For example:
#!/bin/bash
# First, detach from the terminal: redirect stdin, stdout and stderr to /dev/null
exec </dev/null >/dev/null 2>&1
# Fork and go to background: the subshell runs as a child process
(
while true; do
    echo 'hi' >> ~/hihihi
    sleep 10
done
) &
# The parent process finishes here, but the child keeps working
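For what it's worth, a setsid-based one-liner is a common alternative sketch that also detaches the script from the controlling terminal (the path is hypothetical):
# Run foo.sh in its own session with all standard streams detached
setsid /path/to/foo.sh </dev/null >/dev/null 2>&1 &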

Parallelism in bash script

I have a script that starts up some virtual machines. After the deployment I want to install a few things on the VMs. Because these installations can take up to 6 minutes per VM, it would be much more efficient to execute them in parallel. In Java I would probably use threads, but in a bash script I don't know how. My first approach was something like this:
function install {
plink -ssh -i /var/lib/one/Downloads/id_rsa_ubuntu_putty..ppk root@$1 wget https://www.dropbox.com/s/xdhnx/install.sh
plink -ssh -i /var/lib/one/Downloads/id_rsa_ubuntu_putty..ppk root@$1 chmod 4500 install.sh
plink -ssh -i /var/lib/one/Downloads/id_rsa_ubuntu_putty..ppk root@$1 ./install.sh
echo $1 triggered
}
echo -------------------------------------------------
echo All VMs successfully deployed
for i in "${IParray[@]}"
do
install $i &
done
wait
I created a function and tried to launch the function calls from the for loop with "&", which should create subprocesses, but somehow this is not working properly. Can anybody help me out?
Maybe use GNU Parallel like this:
#!/bin/bash
IParray=(192.168.0.1 192.168.0.2)
function install {
echo $1
# plink...
}
# Make install() visible to GNU Parallel
export -f install
# Run a bunch of installs in parallel
parallel install ::: "${IParray[@]}"
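If running every install at once is too much for your machine or network, GNU Parallel's -j option caps the number of concurrent jobs; a sketch:
# Run at most 4 installs simultaneously
parallel -j 4 install ::: "${IParray[@]}"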

My crond job doesn't work as expected, why?

I created a shell script to check a Tomcat instance's status. If the instance is not started, start it:
if [ `ps -ef | grep 'travelco' | grep -v grep | wc -l` -eq 0 ];then
sudo /home/q/tools/bin/restart_tomcat.sh /home/www/travelco/
else
echo 'travelco started'
fi
Then I tested the script and it worked well. But after I added it as a crond job, this script didn't work as expected.
I used crontab -e, and added
*/1 * * * * /home/yuliang.jin/travelcoCheck.sh
After that, even though I can see the script being executed in the crontab log (sudo tail -f /var/log/cron), the Tomcat instance was not started. Why?
There's a sudo in your script, but are you sure your current user has permission to execute /home/q/tools/bin/restart_tomcat.sh without password authentication?
You should add the script to /etc/sudoers to allow your current user to execute it without a password, or you can just sudo crontab -e to run the script as root (and don't forget to delete sudo from your script if you do so).
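For the first option, a sudoers rule along these lines would work (a sketch; always edit it with visudo, and adjust the username to yours):
# /etc/sudoers.d/restart-tomcat (hypothetical file)
yuliang.jin ALL=(root) NOPASSWD: /home/q/tools/bin/restart_tomcat.sh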
If there is any other option, don't use sudo in a cron job.
travelcoCheck.sh itself will be matched by the grep 'travelco' and is not filtered out by the grep -v grep, so wc -l will always be at least 1 and restart_tomcat.sh will never run.
(As a side note: whether or not your ps-parsing stack gets caught by ps is something of a dark art and is full of corner cases and race conditions and generally difficult to get to work right. Stuff like this is why dbus was invented.)
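If you want to keep the ps-based check anyway, one sketch is to grep for something the check script's own command line cannot contain, such as the deployment path with its trailing slash (this assumes the running Tomcat's command line includes that path):
# 'travelco/' matches the Tomcat process but not travelcoCheck.sh itself
if [ `ps -ef | grep 'travelco/' | grep -v grep | wc -l` -eq 0 ];then
sudo /home/q/tools/bin/restart_tomcat.sh /home/www/travelco/
else
echo 'travelco started'
fi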

Server Startup Script Problems

I am trying to set up a Minecraft server. However, the basic startup scripts provided do not fit my needs. I want a script that will:
Start a new screen running the jarfile and (pretty much) only the jarfile (so I can ^C it if needed without killing other things like screen or my gzip commands)
Gzip any logs that weren't gzipped automatically by the jarfile (for if/when I ^C'ed the server, or if it crashed)
Run a command with sudo to set the process in the first argument to a high priority (/usr/bin/oom-priority)
Run an http-server on the resource-pack directory in a different screen and send ^C to it when the server closes
I have these three scripts. I run startserver to start the server.
startserver:
#!/bin/bash
set -m
cd /home/minecraftuser/server/
echo
screen -dm -S http-server http-server ./resource-pack
screen -dm -S my-mc-server startserver_command
(sleep 1; startserver_after) &
screen -S my-mc-server
startserver_command:
#!/bin/bash
set -m
cd /home/minecraftuser/server/
echo
java -Xmx768M -Xms768M -jar ./craftbukkit.jar $@ &
env MC_PID=$! > /dev/null
(sleep 0.5; sudo /usr/bin/oom-priority $MC_PID) &
fg 1
echo
read -n 1 -p 'Press any key to continue...'
and startserver_after:
#!/bin/bash
cd /home/minecraftuser/server/
wait $MC_PID
find /home/minecraftuser/server/logs -type f -name "*.log" -print | while read file; do gzip $file &
done
screen -S http-server -p 0 -X stuff \^c\\r
Edit: When I run startserver, I get a command prompt and then a bunch of gzip errors about files already existing (I am expecting these errors, but when I run startserver I'm supposed to get the java program). Somehow I am in a screen, because when I press ^A d I am brought to a new prompt.
Once I am out of the screen, screen -ls returns two instances of my-mc-server. One is a blank command prompt, the other is the server running successfully.
Edit 2: I changed startserver_command to remove the asterisk from env MC_PID=$! & (not needed there) and added it to (sleep 1; startserver_after) (makes it faster), and redirected the env line to /dev/null (removes the entire environment listing at the beginning of the output). This still didn't fix the whole problem.
Instead of starting each screen session from the scripts, you can just use a custom .screenrc to specify some startup windows (and to run commands/scripts):
#$HOME/mc-server.screenrc
screen -t http-server 0 'startserver'
screen -t my-mc-server 1 'startserver_command'
screen -t gzip-logs 2 'startserver_after'
Then simply start screen (specifying the config file to use, if it's not the default ~/.screenrc):
screen -dm -c mc-server.screenrc
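One optional tweak, as a sketch: a zombie line in the same .screenrc keeps each window open after its command exits, so you can read any error output before the window disappears (k then kills the window, r respawns its command):
#$HOME/mc-server.screenrc (optional addition)
zombie kr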
