Cannot start a script in monit - linux

I have a script which has to run another script in the background:
#!/bin/bash
( cd /var/lib/docker/volumes/hostcontrol-pipe/_data/ && ./run-pipe.sh ) &
First it changes directory and then runs the script; run-pipe.sh creates named pipes in its directory.
And I have a monit config file to monitor this script and to restart it if it's not running:
check program check-pipe with path /bin/bash -c "echo 'ping' > /var/lib/docker/volumes/hostcontrol-pipe/_data/host-pipe" with timeout 1 seconds
if status != 0 then
restart
start program = "/var/lib/docker/volumes/hostcontrol-pipe/_data/monit.sh"
The first line checks that the script is running by writing to its pipe, and that part works.
The "start program" line doesn't work: the script doesn't run and is absent from the "ps ax" output. But I see this in "sudo monit -vI":
'check-pipe' start: '/var/lib/docker/volumes/hostcontrol-pipe/_data/monit.sh'
'check-pipe' started
'check-pipe' program started
So why can't monit run the script? I tried different variants but couldn't get it to run. I can run it without changing the directory (cd), but the cd is necessary.

It turned out monit could not run the script in the background because its output wasn't redirected anywhere. After adding an output file it started to work:
start program = "/bin/bash -c 'cd /var/lib/docker/volumes/hostcontrol-pipe/_data && ./run-pipe.sh > 1.log &'"
Alternatively, /dev/null can be used as the output target.
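For reference, a minimal sketch of how the whole check block could look with the redirection in place. The paths are the ones from the question; the stop program line is my assumption (monit's restart normally runs stop and then start), so adjust it to however run-pipe.sh should actually be stopped:
check program check-pipe with path /bin/bash -c "echo 'ping' > /var/lib/docker/volumes/hostcontrol-pipe/_data/host-pipe" with timeout 1 seconds
    if status != 0 then restart
    start program = "/bin/bash -c 'cd /var/lib/docker/volumes/hostcontrol-pipe/_data && ./run-pipe.sh > /dev/null 2>&1 &'"
    stop program = "/usr/bin/pkill -f run-pipe.sh"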

Related

ssh does not return even after execution

The following ssh command does not return to the terminal. It hangs even though the execution has completed; the hang occurs after the echo hi command.
ssh user@testserver "echo hello;source .profile;source .bash_profile;/apps/myapp/deploytools/ciInstallAndRun.sh; echo hi"
Output
hello
<output from remote script>
hi
ciInstallAndRun.sh
echo 'starting'
cd /apps/myapp/current
./tctl kill
cd /apps/myapp
mv myapp_v1.0 "myapp_v1.0_`date '+%Y%m%d%H%M'`"
unzip -o /apps/myapp/myappdist-bin.zip
java -classpath .:/apps/myapp/deploytools/cleanup.jar se.telenor.project.cleanup.Cleanup /apps/myapp myapp_v1.0_ 3
cd /apps/myapp/myapp_v1.0
echo 'Done with deploy'
chmod -R 775 *
echo 'Done'
./tctl start test
Source OS: Redhat
Dest OS: Solaris 10 8/07
Any idea how to fix this?
Your installation script has spawned a child process.
Add a ps -f or ptree $$ command before echo hi. You'll see a child process or multiple child processes spawned by your install script.
To stop the SSH command from hanging, you need to detach such child process(es) from your terminal's input/output. You can redirect your script's output to a file - both stdout and stderr with > /some/output/file 2>&1 - and also redirect its input with < /dev/null.
Or you can use the nohup command.
You haven't provided an MCVE, as others have noted, but this is likely the problem command in your install script, since your question implies that you see the expected output from your install script:
./tctl start test
You probably would do better to replace it with something like:
./tctl start test </dev/null >/some/log/file/path.log 2>&1
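If you prefer the nohup route mentioned above, an equivalent sketch (same assumed log path) would be:
nohup ./tctl start test </dev/null >/some/log/file/path.log 2>&1 &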

Cannot return to shell session after script

I cannot get a script to return to bash.
The script is kicked off via the following Docker directives:
ENTRYPOINT ["/bin/bash", "-c"]
CMD ["set -e && /config/startup/init.sh"]
The init script looks like this:
#!/bin/bash
if [ -d /etc/postfix/init.d ]; then
for f in /etc/postfix/init.d/*.sh; do
[ -f "$f" ] && . "$f"
done
fi
echo "[x] Starting supervisord ..."
/usr/bin/supervisord -c /etc/supervisord.conf
bash
And this is the command I use to kick off the image into a container:
docker run -it --env-file ENV_LOCAL mailrelay
The init script runs as expected (I see output from the scripts within the /etc/postfix/init.d/ directory, and supervisord kicks off Postfix).
The problem is getting the script to return to the parent process (bash) instead of needing to start a new one. After it hits supervisord, the session just sits there, requiring a Ctrl+C to get back to a bash prompt.
If I leave off the call to bash at the end of the init.sh script, Ctrl+D exits the script AND the container, returning me to the host OS (osx). If I replace the bash call with exit, it returns to the host OS as well.
Is supervisord behaving the way it's supposed to, by running in the foreground this way? I'd like to be able to easily get back into the container shell session to check to see if things are running. Am I left with needing to Ctrl+D (into the secondary bash session) in order to do this?
UPDATE
Marc B
take out the bash line, so you don't start a new shell. and if
supervisord doesn't go into the background automatically, you could
try running it with & to force it into the background, or maybe
there's an extra cli option to force it to go into daemon mode
I've tried removing the last call to bash, but as I've mentioned it just sits there still, and Ctrl+D takes me to the host OS (exits the container).
I just tried /usr/bin/supervisord -c /etc/supervisord.conf & (and left off the call to bash at the end) and it just immediately returns to host OS, exiting the container. I assume because the container had nothing left to "do", and so stopped.
#!/bin/bash
if [ -d /etc/postfix/init.d ]; then
for f in /etc/postfix/init.d/*.sh; do
[ -f "$f" ] && . "$f"
done
fi
echo "[x] Starting supervisord ..."
/usr/bin/supervisord -c /etc/supervisord.conf
bash # You are spawning a new bash shell here. Remove this statement
At the end you're stuck in a child bash shell :(
Now if you're not returning to the parent shell, the last command that you have run is the culprit.
/usr/bin/supervisord -c /etc/supervisord.conf
You can force the command to run in the background:
/usr/bin/supervisord -c /etc/supervisord.conf &   # the & tells it to run in the background
A workaround for keeping the container open is mentioned here
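One common pattern (a sketch of that idea, not necessarily the exact linked workaround) is to keep supervisord in the foreground as the container's main process and open a shell separately when you need one; the nodaemon setting and the CMD line below are assumptions about how this image could be wired up:
# /etc/supervisord.conf (assumed excerpt): keep supervisord in the foreground
[supervisord]
nodaemon=true

# Dockerfile: run supervisord directly as the main process
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]

# When you need a shell inside the running container:
docker exec -it <container-id> bash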

Cron Job Killing and Restarting Python Script

I set up a cron job on a linux server to kill and restart a python script (run.py) every other day. I set the job to run as root, but I find that sometimes it doesn't kill the process properly (and ends up running two scripts in a row).
Is there a better way to do this?
My cron job parameters:
0 8 * * 1,4,7 cd /home/myUser && ./start.sh
start.sh:
#!/bin/bash
echo "Running..."
sudo pkill -f run.py
sudo python run.py &
I guess run.py runs as python, not as run.py, so you won't find anything with pkill run.py.
You should echo the PID of the process to a file and use that value to kill the previous process if it's still running. Just add echo $! >/path/to/pid.file as the last line in your start.sh script to do so.
Read more:
https://serverfault.com/questions/205498/how-to-get-pid-of-just-started-process
How to read a file into a variable in shell?
http://www.cyberciti.biz/faq/kill-process-in-linux-or-terminate-a-process-in-unix-or-linux-systems/
Example to get you started:
#!/bin/bash
echo "Running..."
sudo pkill -F /path/to/pid.pid
sudo python /path/to/run.py &
echo $! > /path/to/pid.pid
Another alternative is to run the python script under upstart, if you are on a system that supports it. Then you can just do sudo /sbin/start job_name at the beginning and sudo /sbin/stop job_name when needed; this way upstart manages the PIDs for you.
Python upstart script
Upstart python script
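As a rough sketch of that upstart approach (the job name is hypothetical; the script path reuses /home/myUser from the question):
# /etc/init/run-py.conf
description "run.py worker"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec /usr/bin/python /home/myUser/run.py
The cron job could then call sudo /sbin/stop run-py; sudo /sbin/start run-py instead of using pkill.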

Linux Debian Run commands at boot in init script

I'm new to Linux (obviously) and I need to run some commands whenever my Linux server boots without typing them into the console manually.
I have this file called overpass.conf that runs on boot perfectly:
description 'Overpass API dispatcher daemon'
env DB_DIR=/var/www/osm/db/
env EXEC_DIR=/var/www/osm/
start on (local-filesystems and net-device-up)
stop on runlevel [!2345]
pre-start script
rm $DB_DIR/osm3s* || true
rm /dev/shm/osm3s* || true
end script
exec $EXEC_DIR/bin/dispatcher --osm-base --db-dir=$DB_DIR
However, I want to also run the following:
cp -pR "/root/osm-3s_v0.7.4/rules" "/var/www/osm/db/"
nohup /var/www/osm/bin/dispatcher --areas --db-dir="/var/www/osm/db/" &
chmod 666 "/var/www/osm/db/osm3s_v0.7.4_areas"
nohup /var/www/osm/bin/rules_loop.sh "/var/www/osm/db/" &
I have tried adding them to the bottom of the file, prefixing the execution commands with exec, and even removing the quotes, then testing with start overpass, but it throws errors whenever I add any commands to the original ones.
How can I go about executing those 4 commands after the original ones? I'm a noob in distress. Thanks!
Edit
I solved it with these commands:
vi /etc/init.d/mystartup.sh
-Add commands to the script
chmod +x /etc/init.d/mystartup.sh
update-rc.d mystartup.sh defaults 99
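For reference, a sketch of what /etc/init.d/mystartup.sh could contain, reusing the four commands from the question (the LSB header is standard boilerplate I've assumed, not something from the original post):
#!/bin/sh
### BEGIN INIT INFO
# Provides:          mystartup
# Required-Start:    $local_fs $network
# Required-Stop:
# Default-Start:     2 3 4 5
# Default-Stop:
# Short-Description: Extra Overpass API startup commands
### END INIT INFO

cp -pR "/root/osm-3s_v0.7.4/rules" "/var/www/osm/db/"
nohup /var/www/osm/bin/dispatcher --areas --db-dir="/var/www/osm/db/" &
# note: the dispatcher may need a moment to create osm3s_v0.7.4_areas before the chmod succeeds
chmod 666 "/var/www/osm/db/osm3s_v0.7.4_areas"
nohup /var/www/osm/bin/rules_loop.sh "/var/www/osm/db/" &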
There's also /etc/rc.local that is executed at the end of the boot process.

Upstart error terminated with status 1

I have an Ubuntu 10.04 server and tried to create an upstart script:
description "node-workerListener"
author "me"
start on startup
stop on shutdown
script
# We found $HOME is needed. Without it, we ran into problems
export HOME="/var/www"
exec sudo -u www-data /usr/local/bin/node /var/www/vhost/node/test/workerListener.js 2>&1 >> /var/log/node/helloworld.log
end script
This should start a node script, which works if I start it manually on the command line. But when I try "start node-workerListener" I get the message "node-workerListener start/running, process 1323", yet the script doesn't actually run.
In /var/log/syslog: "...init: node-workerListener main process (1317) terminated with status 1"
What can I do?
You can also use forever https://github.com/nodejitsu/forever to run the node process.
Here is a detailed article: http://blog.nodejitsu.com/keep-a-nodejs-server-up-with-forever
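A minimal sketch of using forever, assuming it is installed globally (npm install -g forever) and reusing the script path from the question:
# start the script and have forever restart it if it crashes
forever start /var/www/vhost/node/test/workerListener.js

# see what forever is currently managing
forever list

# stop it again
forever stop /var/www/vhost/node/test/workerListener.js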
