I have a script forever.py which I want to run in the background all the time (even after I close the terminal connected to the VM).
I used nohup python3 forever.py & and it worked, but the problem is that after some days it crashes (I guess due to running out of memory) and I need to restart it manually.
To solve this, I did as suggested here, created a bash.sh file containing:
#!/bin/bash
until python3 forever.py; do
    echo "'forever.py' crashed with exit code $?. Restarting..." >> stderr.txt
    sleep 1
done
and in the terminal, ran the command:
nohup bash bash.sh &
Currently it's running well, and I hope it restarts when the program crashes.
My question is: how do I stop the execution of this?
I tried pkill nohup but it doesn't work!
I'd suggest looking into the pkill command; for example, to match the python process by its full command line:
pkill -9 -f "forever.py"
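Note that bash.sh will simply restart forever.py once it exits, so you also need to stop the wrapper loop itself. A possible sequence (a sketch, assuming the loop was started with nohup bash bash.sh & as above):
pkill -f "bash bash.sh"   # stop the restart loop first so it cannot respawn forever.py
pkill -f "forever.py"     # then stop the running python process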
I'm trying to run a script, say script.py that does not take any input from the terminal in the background using nohup by
nohup python3 script.py &
This command worked perfectly the last time I used it (and the script kept running for days!), but some error interrupted the process and it stopped. Eventually, nohup.out contained no error. (Is it supposed to?)
Now, I'm trying to run the script again, but am failing to do so. I use the same command, and then open the running processes using top but I get:
[2]- Stopped nohup python3 script.py
[3]+ Stopped nohup python3 script.py
I am failing to understand why this is happening now. Any help is appreciated!
P.S: The script runs perfectly without nohup.
I was using
nohup ./program_name &
to run my program. program_name prints out some values and the status of the running process, including what percentage of the work has finished, but since I'm running it with nohup I can't see how close the program is to finishing. Is there any way I can still get that information?
Just open nohup.out to see the output. You probably want
tail -f nohup.out
to stream the output.
Perhaps adjust your nohup command line to capture all output to a file:
nohup ./program_name > /tmp/programName.log 2>&1 &
Then, you can monitor programName.log using tail:
tail -f /tmp/programName.log
Run the command below in the terminal where the program is running.
The jobs command lists the jobs that you are running in the background and in the foreground:
jobs -l
[6]+ 6069 Running nohup perl test1.pl &
[6]+ 6069 Done nohup perl test1.pl
I am trying to run the following command as part of a bash script which is supposed to open an ssh channel, run the program on the remote machine, save its output to a file for 10 seconds, kill the process that was writing to the file, and then give control back to the bash script.
#!/bin/bash
ssh hostname '/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null; sshpid=!$; sleep 10; kill -9 $sshpid 2>/dev/null &'
Unfortunately, what it seems to be doing is starting the program nodes-listener remotely, but it never gets any further and it doesn't give control back to the bash script. So the only way to stop the execution is to press Ctrl+C.
Killing ssh doesn't help (or rather can't be done from the script), since control is not with the bash script: it waits for the command within the ssh session to complete, which of course never happens, as the remote process has to be killed to stop.
Here's the command line that you're running on the remote system:
/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null
sshpid=!$
sleep 10
kill -9 $sshpid 2>/dev/null &
You should change it to this:
/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null & <-- Ampersand goes here
sshpid=$!
sleep 10
kill -9 $sshpid 2>/dev/null
You want to start nodes-listener and then kill it after ten seconds. To do this, you need to start nodes-listener as a background process, so that the shell executing this command line can move on to the next command after starting it. The & in your command line is in the wrong place and would apply only to the kill command; you need to apply it to the nodes-listener command.
I'll also note that your sshpid=!$ line was incorrect. You want sshpid=$!. $! is the process ID of the last command started in the background.
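As a quick illustration of $! (a minimal sketch, unrelated to nodes-listener):
sleep 30 &                     # start any command in the background
echo "background PID is $!"    # $! expands to the PID of that background job
kill "$!"                      # and can be used to signal it later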
You need to place the ampersand after the first command, then put the remaining commands onto the next line:
ssh hostname -- '/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null &
sshpid=$!; sleep 10; kill $sshpid 2>/dev/null'
By the way, ssh returns after all commands have been executed, which means it will close the allocated pty as well. If there are still background jobs running in that shell session, they will be killed by SIGHUP. This means you can probably omit the explicit kill command (depending on whether nodes-listener handles SIGHUP and SIGTERM differently). Given that, you could simplify the code to the following:
ssh hostname -- sh -c '/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null &
sleep 10'
I have resolved this by pushing the shell script to the remote machine and executing it there. It is actually less tidy and relies on space being available on the remote computer.
Since my remote machine is a small physical device, the issue of the space usage is important (even for the tiny amount of space required in this case).
/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null &
sshpid=$!
sleep 20
sync
# killing nodes-listener process and giving control back to the base bash
killall -9 nodes-listener 2>/dev/null && echo "nodes-listener is killed"
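For example, the script above can be pushed to the remote machine and run there like this (the file name is just a placeholder):
scp nodes-capture.sh hostname:/tmp/nodes-capture.sh
ssh hostname 'sh /tmp/nodes-capture.sh'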
Well, I'm basically trying to make a bash script run a node script forever. I made the following bash script:
#!/bin/bash
while true ; do
    cd /myscope/
    unlink nohup.out
    node myscript.js
    sleep 6
done & echo $! > pid
I'm expecting that when it runs, it starts node with the given script, and that whenever node exits, it sleeps for 6 seconds and then starts node again. I'm also expecting it to run in the background and write its pid (the bash pid) to a file called "pid".
Everything explained above apparently works as expected, but I was also expecting that when the pid of the bash script is killed, the node script would stop running. I don't know why that made sense in my mind, but in practice it doesn't work: the bash script is indeed killed, but the node script keeps running, and that is freaking me out.
I've tested it in the terminal by not sending the bash script to the background and pressing Ctrl+C; both scripts get killed.
I'm obviously misunderstanding something about the way background processes work. For god's sake, can anybody help me?
There are lots of tools that let you do what you're trying to do; just two off the top of my head:
https://github.com/nodejitsu/forever - A simple CLI tool for ensuring that a given script runs continuously (i.e. forever)
https://github.com/remy/nodemon - Monitor for any changes in your node.js application and automatically restart the server - perfect for development
Maybe the second one is not what you're looking for, but it's still worth a look.
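For instance, with the forever CLI the restart loop and the pid file become unnecessary (a sketch; the path is taken from your script, and the exact options are worth checking in the tool's docs):
npm install -g forever                # install the CLI once
forever start /myscope/myscript.js    # run the script and restart it whenever it exits
forever list                          # show the processes forever is managing
forever stop /myscope/myscript.js     # stop it again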
If you can't or don't want to use those, then the problem is that when you kill the parent process, the child is still there, so you should kill that too:
pkill -TERM -P $PID
where $PID is the parent PID.
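For example, using the pid file your wrapper writes (a sketch, assuming the file still holds the wrapper's PID):
PID=$(cat pid)            # PID of the background while-loop
pkill -TERM -P "$PID"     # terminate its children (the node process)
kill -TERM "$PID"         # then terminate the loop itself before it can respawn node
Killing the children first works here because the loop sleeps for 6 seconds before restarting node, which leaves plenty of time to kill the wrapper as well; an orphaned sleep may linger for a few seconds, which is harmless.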
I am trying to run a program automatically within a bash script after killing the LXDE session. My script consists of:
#!/bin/sh
pkill lxsession;
sh /home/pi/RetroPie/EmulationStation/emulationstation
I tried this as well:
#!/bin/sh
nohup & pkill lxsession &
writevt /dev/tty1 'emulationstation'
My aim is to log out of the LXDE session and run EmulationStation on my Raspberry Pi with a bash script. I'm using pkill lxsession; to bypass lxsession's logout confirmation dialog.
As it stands, this script just gets me to the command line from a working LXDE desktop. Thanks for reading.
Doesn't EmulationStation need some sort of X server running in the background for it to work?
If not, then try the following:
#!/bin/sh
pkill lxsession;
sleep 5
su -c 'sh /home/pi/RetroPie/EmulationStation/emulationstation'
exit
It could also be that when you log out of your LXDE session, emulationstation doesn't have a user shell to open it in, hence the "su -c".
I'm not sure if it's going to work, but I hope you solve it. :)