Can I use the startup service below? No errors show when I run these commands, but the server script below does not run!
ln /lib/systemd/aquarium.service aquarium.service
systemctl daemon-reload
systemctl enable aquarium.service
systemctl start aquarium.service
Thanks.
aquarium.service:
[Unit]
Description=Start aquarium server
[Service]
WorkingDirectory=/home/root/python/code/aquarium/
ExecStart=/bin/bash server.* start
KillMode=process
[Install]
WantedBy=multi-user.target
Here is the server.sh script:
#!/bin/bash

PID=""

function get_pid {
    PID=$(pidof python ./udpthread.py)
}

function stop {
    get_pid
    if [ -z "$PID" ]; then
        echo "server is not running."
        exit 1
    else
        echo -n "Stopping server.."
        kill -9 $PID
        sleep 1
        echo ".. Done."
    fi
}

function start {
    get_pid
    if [ -z "$PID" ]; then
        echo "Starting server.."
        ./udpthread.py &
        get_pid
        echo "Done. PID=$PID"
    else
        echo "server is already running, PID=$PID"
    fi
}

function restart {
    echo "Restarting server.."
    get_pid
    if [ -z "$PID" ]; then
        start
    else
        stop
        sleep 5
        start
    fi
}

function status {
    get_pid
    if [ -z "$PID" ]; then
        echo "Server is not running."
        exit 1
    else
        echo "Server is running, PID=$PID"
    fi
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        restart
        ;;
    status)
        status
        ;;
    *)
        echo "Usage: $0 {start|stop|restart|status}"
esac
Try using "Type=forking" and the complete filename; systemd does not expand shell globs in ExecStart, so server.* is passed literally.
[Unit]
Description=Start aquarium server
[Service]
WorkingDirectory=/home/root/python/code/aquarium/
Type=forking
ExecStart=/bin/bash server.sh start
KillMode=process
[Install]
WantedBy=multi-user.target
If it does not work, post the output of this command:
# journalctl -u aquarium.service
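For reference, a typical way to install a locally written unit is to copy it into /etc/systemd/system (the usual directory for admin-provided units) rather than hard-linking it from /lib/systemd; a minimal sketch:
# copy the unit into place, then reload and enable it
cp aquarium.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable --now aquarium.service
systemctl status aquarium.service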
Related
This init script is for a node service that sometimes crashes. When it does, the started file /var/run/openrc/started/mStream is left behind, which prevents OpenRC from starting the app again because it thinks it is already running.
The first start works fine, but after a crash the symlink has to be removed manually.
The name of the init script for the service is mStream:
#!/sbin/openrc-run

pidfile=/var/run/node.pid
procname=/usr/bin/node
command_background=true

start() {
    cd /home/alpine/setup/mStream
    ebegin "Starting ${SVCNAME}"
    start-stop-daemon --start --pidfile ${pidfile} --exec node --background cli-boot-wrapper.js
    eend $?
}

stop() {
    ebegin "Stopping ${SVCNAME}"
    killall node
    eend $?
}

restart() {
    stop
    wait
    start
}

status() {
    netstat -naptu | grep -q ":80.*LISTEN.*node" && {
        echo "Running"
    } || {
        echo "Stopped"
        exit 1
    }
}
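One way to avoid stale state after a crash is to let OpenRC supervise the process itself instead of defining custom start/stop functions. A minimal sketch, assuming a reasonably recent OpenRC that ships supervise-daemon, with the same paths as above:
#!/sbin/openrc-run
# Sketch only: supervise-daemon tracks the child itself and can respawn it,
# so a crash does not leave a stale "started" marker behind.
supervisor=supervise-daemon
directory=/home/alpine/setup/mStream
command=/usr/bin/node
command_args="cli-boot-wrapper.js"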
I have a runit service I use to run a rails app using unicorn.
Its restart command uses a signal (USR2) to handle a zero-downtime restart. Basically, it waits until the new process is ready before the old ones die.
This causes a very long (40 second) restart, during which service myservice restart doesn't return.
While I can give runit a longer timeout (which I already do), I want to make this restart a fire-and-forget action, so it returns instantly (or right after the USR2 signal is fired, without waiting for it to complete).
The entire logic is taken from multiple blog posts about zero-downtime rails deployments with unicorn restarts:
https://gist.github.com/czarneckid/4639793
https://gist.github.com/JeanMertz/8996796
https://nulogy.com/who-we-are/company-blog/articles/zero-downtime-deployments-with-chef-nginx-and-unicorn/
This is the runit script (generated by chef):
#!/bin/bash
#
# This file is managed by Chef, using the <%= node.name %> cookbook.
# Editing this file by hand is highly discouraged!
#
exec 2>&1
#
# Since unicorn creates a new pid on restart/reload, it needs a little extra
# love to manage with runit. Instead of managing unicorn directly, we simply
# trap signal calls to the service and redirect them to unicorn directly.
#
RUNIT_PID=$$
APPLICATION_NAME=<%= @options[:application_name] %>
APPLICATION_PATH=<%= File.join(@options[:path], 'current') %>
BUNDLE_CMD="<%= @options[:bundle_command] ? "#{@options[:bundle_command]} exec" : '' %>"
UNICORN_CMD=<%= @options[:unicorn_command] ? @options[:unicorn_command] : 'unicorn' %>
UNICORN_CONF=<%= @options[:unicorn_config_path] ? @options[:unicorn_config_path] : File.join(@options[:path], 'current', 'config', 'unicorn.rb') %>
RAILS_ENV=<%= @options[:rails_env] %>
CUR_PID_FILE=<%= @options['pid'] ? @options['pid'] : File.join(@options[:path], 'current', 'shared', 'pids', "#{@options[:application_name]}.pid") %>
ENV_PATH=<%= @options[:env_dir] %>
OLD_PID_FILE=$CUR_PID_FILE.oldbin
echo "Runit service restarted (PID: $RUNIT_PID)"
function is_unicorn_alive {
    set +e
    if [ -n "$1" ] && kill -0 "$1" >/dev/null 2>&1; then
        echo "yes"
    fi
    set -e
}
if [ -e $OLD_PID_FILE ]; then
    OLD_PID=$(cat $OLD_PID_FILE)
    echo "Old master detected (PID: $OLD_PID), waiting for it to quit"
    while [ -n "$(is_unicorn_alive $OLD_PID)" ]; do
        sleep 5
    done
fi

if [ -e $CUR_PID_FILE ]; then
    CUR_PID=$(cat $CUR_PID_FILE)
    if [ -n "$(is_unicorn_alive $CUR_PID)" ]; then
        echo "Detected running Unicorn instance (PID: $CUR_PID)"
        RUNNING=true
    fi
fi

function start {
    unset ACTION
    if [ $RUNNING ]; then
        restart
    else
        echo 'Starting new unicorn instance'
        cd $APPLICATION_PATH
        exec chpst -e $ENV_PATH $BUNDLE_CMD $UNICORN_CMD -c $UNICORN_CONF -E $RAILS_ENV
        sleep 3
        CUR_PID=$(cat $CUR_PID_FILE)
    fi
}

function stop {
    unset ACTION
    echo 'Initializing graceful shutdown'
    kill -QUIT $CUR_PID
    while [ -n "$(is_unicorn_alive $CUR_PID)" ]; do
        echo '.'
        sleep 2
    done
    echo 'Unicorn stopped, exiting Runit process'
    kill -9 $RUNIT_PID
}

function restart {
    unset ACTION
    echo "Restart request captured, swapping old master (PID: $CUR_PID) for new master with USR2"
    kill -USR2 $CUR_PID
    sleep 2
    echo 'Restarting Runit service to capture new master PID'
    exit
}

function alarm {
    unset ACTION
    echo 'Unicorn process interrupted'
}

trap 'ACTION=stop' STOP TERM KILL
trap 'ACTION=restart' QUIT USR2 INT
trap 'ACTION=alarm' ALRM

[ $RUNNING ] || ACTION=start

if [ $ACTION ]; then
    echo "Performing \"$ACTION\" action and going into sleep mode until new signal captured"
elif [ $RUNNING ]; then
    echo "Going into sleep mode until new signal captured"
fi

if [ $ACTION ] || [ $RUNNING ]; then
    while true; do
        [ "$ACTION" == 'start' ] && start
        [ "$ACTION" == 'stop' ] && stop
        [ "$ACTION" == 'restart' ] && restart
        [ "$ACTION" == 'alarm' ] && alarm
        sleep 2
    done
fi
This is a super weird way to use runit. Move your reload logic to the control/h script and use sv hup (or, since it doesn't seem to be anything more than sending USR2, sv 2). The main run script shouldn't be involved.
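As a rough sketch of that suggestion (the pid-file path below is a placeholder standing in for the question's CUR_PID_FILE), a control/h script in the service directory is run by runsv on sv hup instead of runsv sending HUP itself:
#!/bin/sh
# <service-dir>/control/h -- executed on `sv hup <service>`.
# Exiting 0 tells runsv the signal was handled, so no HUP is sent.
kill -USR2 "$(cat /path/to/app/shared/pids/app.pid)"
exit 0
Alternatively, sv 2 <service> sends USR2 directly, with no control script at all.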
The following bash script doesn't work because the expect command always returns 0, regardless of which exit code the remote script /tmp/my.sh returns.
Any idea how to make it work? Thanks.
#!/bin/bash
user=root
passwd=123456abcd
host=10.58.33.21
expect -c "
spawn ssh -o StrictHostKeyChecking=no -l $user $host bash -x /tmp/my.sh
expect {
\"assword:\" {send \"$passwd\r\"}
eof {exit $?}
}
"
case "$?" in
0) echo "Password successfully changed on $host by $user" ;;
1) echo "Failure, password unchanged" ;;
2) echo "Failure, new and old passwords are too similar" ;;
3) echo "Failure, password must be longer" ;;
*) echo "Password failed to change on $host" ;;
esac
Edited at 10:23 AM 11/27/2013
Thanks for the comments. Let me emphasize the problem once again:
The main script is supposed to run on Linux server A silently, and in the course of that it invokes another script, my.sh, on server B unattended. The question is how to get the exit code of my.sh.
That's why I cannot use the ssh-key approach in my case, which requires at least a one-time configuration.
#!/usr/bin/expect

set user root
set passwd 123456abcd
set host 10.58.33.21
set result_code 255

# exp_internal 1 to see internal processing
exp_internal 0

spawn ssh -o StrictHostKeyChecking=no -l $user $host bash -x /tmp/my.sh && echo aaa0bbb || echo aaa$?bbb

expect {
    "assword:" {send "$passwd\r"; exp_continue}
    -re "aaa(.*)bbb" {set result_code $expect_out(1,string)}
    eof {}
    timeout {set result_code -1}
}

switch $result_code {
    0  { puts "Password successfully changed on $host by $user" }
    1  { puts "Failure, password unchanged" }
    2  { puts "Failure, new and old passwords are too similar" }
    3  { puts "Failure, password must be longer" }
    -1 { puts "Failure, timeout" }
    default { puts "Password failed to change on $host" }
}

exit $result_code
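Saved as, say, change_passwd.exp (the file name here is hypothetical), the script propagates the remote exit code to its caller:
chmod +x change_passwd.exp
./change_passwd.exp
echo "my.sh exited with: $?"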
Error:
Starting nginx: nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] still could not bind()
Script:
cat /etc/init.d/nginx
#!/bin/sh
#
# nginx - this script starts and stops the nginx daemon
#
# chkconfig:   - 85 15
# description: Nginx is an HTTP(S) server, HTTP(S) reverse \
#              proxy and IMAP/POP3 proxy server
# processname: nginx
# config:      /etc/nginx/nginx.conf
# config:      /etc/sysconfig/nginx
# pidfile:     /var/run/nginx.pid

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

nginx="/usr/local/nginx/sbin/nginx"
prog=$(basename $nginx)

sysconfig="/etc/sysconfig/$prog"
lockfile="/var/lock/subsys/nginx"
pidfile="/var/run/${prog}.pid"

NGINX_CONF_FILE="/usr/local/nginx/conf/nginx.conf"

[ -f $sysconfig ] && . $sysconfig

start() {
    [ -x $nginx ] || exit 5
    [ -f $NGINX_CONF_FILE ] || exit 6
    echo -n $"Starting $prog: "
    daemon $nginx -c $NGINX_CONF_FILE
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $prog: "
    killproc -p $pidfile $prog
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {
    configtest_q || return 6
    stop
    start
}

reload() {
    configtest_q || return 6
    echo -n $"Reloading $prog: "
    killproc -p $pidfile $prog -HUP
    echo
}

configtest() {
    $nginx -t -c $NGINX_CONF_FILE
}

configtest_q() {
    $nginx -t -q -c $NGINX_CONF_FILE
}

rh_status() {
    status $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

# Upgrade the binary with no downtime.
upgrade() {
    local oldbin_pidfile="${pidfile}.oldbin"
    configtest_q || return 6
    echo -n $"Upgrading $prog: "
    killproc -p $pidfile $prog -USR2
    retval=$?
    sleep 1
    if [[ -f ${oldbin_pidfile} && -f ${pidfile} ]]; then
        killproc -p $oldbin_pidfile $prog -QUIT
        success $"$prog online upgrade"
        echo
        return 0
    else
        failure $"$prog online upgrade"
        echo
        return 1
    fi
}

# Tell nginx to reopen logs
reopen_logs() {
    configtest_q || return 6
    echo -n $"Reopening $prog logs: "
    killproc -p $pidfile $prog -USR1
    retval=$?
    echo
    return $retval
}

case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart|configtest|reopen_logs)
        $1
        ;;
    force-reload|upgrade)
        rh_status_q || exit 7
        upgrade
        ;;
    reload)
        rh_status_q || exit 7
        $1
        ;;
    status|status_q)
        rh_$1
        ;;
    condrestart|try-restart)
        rh_status_q || exit 7
        restart
        ;;
    *)
        echo $"Usage: $0 {start|stop|reload|configtest|status|force-reload|upgrade|restart|reopen_logs}"
        exit 2
esac
What do I need to do to fix this?
Right now I am using killall -9 nginx in order to restart the service.
As root, run netstat -nap | grep 80 to check which process is still listening on port 80.
Also, you may want to try using the -s option to send signals to nginx instead: http://wiki.nginx.org/CommandLine
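For reference, these are the signals nginx accepts through -s (per the command-line documentation linked above):
nginx -s stop    # fast shutdown
nginx -s quit    # graceful shutdown
nginx -s reload  # re-read the configuration, start new workers, retire old ones
nginx -s reopen  # reopen log files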
This works as expected if I run it from the command line (node index.js), but when I execute this Node.js (v0.10.4) script as a daemon from an init.d script, the stdout return value in the exec callback is not set. How do I fix this?
Node.js script:
var exec = require('child_process').exec;

setInterval(function()
{
    exec('get_switch_state', function(err, stdout, stderr)
    {
        if(stdout == "on")
        {
            // Do something.
        }
    });
}, 5000);
init.d script:
#!/bin/bash

NODE=/development/nvm/v0.10.4/bin/node
SERVER_JS_FILE=/home/blahname/app/index.js
USER=root
OUT=/home/pi/nodejs.log

case "$1" in
    start)
        echo "starting node: $NODE $SERVER_JS_FILE"
        sudo -u $USER $NODE $SERVER_JS_FILE > $OUT 2>$OUT &
        ;;
    stop)
        killall $NODE
        ;;
    *)
        echo "usage: $0 (start|stop)"
esac
exit 0
I ended up not using the Node.js child_process exec. I modified the init.d script above (/etc/init.d/node-app.sh) as follows:
#!/bin/bash

NODE=/home/pi/development/nvm/v0.10.4/bin/node
SERVER_JS_FILE=/home/pi/development/mysql_test/index.js
USER=pi
OUT=/home/pi/development/mysql_test/nodejs.log

case "$1" in
    start)
        echo "starting node: $NODE $SERVER_JS_FILE"
        sudo -u $USER TZ='PST' $NODE $SERVER_JS_FILE > $OUT 2>$OUT &
        ;;
    stop)
        killall $NODE
        ;;
    *)
        echo "usage: $0 (start|stop)"
esac
exit 0
This script launches the Node.js app "index.js" at boot, and everything works as expected.
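If the child_process.exec approach is ever revisited, one thing worth checking is that init.d scripts run with a minimal environment, so a command that exec() resolves by name (like get_switch_state) may not be on the PATH at boot. A hedged sketch of the start case with an explicit PATH (/usr/local/bin is only a guess for where the helper lives):
start)
    echo "starting node: $NODE $SERVER_JS_FILE"
    # Prepend the directory holding get_switch_state; adjust as needed.
    sudo -u $USER PATH="/usr/local/bin:$PATH" TZ='PST' $NODE $SERVER_JS_FILE > $OUT 2>$OUT &
    ;;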