Run a script every 30 minutes with bash - linux

I want to run the script every 30 minutes with cron, but I have a problem with my code.
Every 30 minutes I have to kill the old script and run it again. I have something like this, but it is not working:
cd /var/www/scripts
pkill -f bot
now="$(date +%Y%m%d%H%M%S)"
screen -S bot
node mybot.js >> logi/logi_$now.txt

You shouldn't use screen to run things in the background in a script. Use an ampersand (&) to background a process, and nohup so it won't be killed when the cron script exits. Also, remember the subprocess PID in a file.
Something like this:
kill -- "$(cat mybot.pid)"
now="$(date +%Y%m%d%H%M%S)"
nohup node mybot.js >> "logi/logi_$now.txt" &
echo $! > mybot.pid

Use crontab to schedule it:
crontab -e
Add a line like this to run your command every 30 minutes:
*/30 * * * * /path/to/your/command
Save and exit; cron picks up the change automatically.
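Putting this together with the nohup approach above, here is a minimal sketch of a wrapper script cron could call every 30 minutes; restart-bot.sh is a hypothetical name, and the paths are taken from the question:
#!/bin/bash
# restart-bot.sh - kill the old bot and start a fresh one; meant to be run from cron
cd /var/www/scripts || exit 1
# stop the previous instance, if a PID was recorded
[ -f mybot.pid ] && kill -- "$(cat mybot.pid)" 2>/dev/null
now="$(date +%Y%m%d%H%M%S)"
nohup node mybot.js >> "logi/logi_$now.txt" 2>&1 &
echo $! > mybot.pid
The matching crontab entry would be:
*/30 * * * * /var/www/scripts/restart-bot.sh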

The line
node mybot.js >> logi/logi_$now.txt
is never reached, as screen -S <session name> will start a screen session (and therefore a new shell) and connect to it. The rest of the script would only execute once that 'inner' session terminates.
screen is meant more for interactive use; calling it in a script like this is rather strange. I guess you want node mybot.js >> logi/logi_$now.txt running in the background, so that your script can terminate while node keeps running. See Redirecting stdout & stderr from background process and Node.js as a background service for options on how to do that.
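If you do want to stick with screen, it can start a detached session that runs the command without opening an interactive shell. A minimal sketch, assuming the same files and directories as the question:
screen -dmS bot bash -c 'node mybot.js >> "logi/logi_$(date +%Y%m%d%H%M%S).txt" 2>&1'
The -dmS flags start the session already detached, so the script continues immediately; you can attach to it later with screen -r bot.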


Start lots of background jobs but keep their logs separated

I have little experience with shell commands in Unix.
So far, I have checked Stack Overflow and know how to run simple shell scripts in order by:
using echo
echo $(sh dosomthing1.sh)
echo $(sh dosomthing2.sh)
directly using sh xxx and wait
sh dosomthing1.sh
wait
sh dosomthing2.sh
using &&
sh dosomthing1.sh && sh dosomthing2.sh
But none of these ways seems to solve my problem...
Here is my problem:
I have a basic shell script that does a Maven compile and then uses "nohup xxx &" to start a Java application in the background. The script is shown below:
#get the input env parameter
env=$1
#goto application root directory
cd /applicationDir
#to compile
mvn install -Dmaven.test.skip=true
#to start with parameter env
nohup java -jar -Dspring.profiles.active=$env myApplication.jar &
#to tail the log
tail -20f myApplication.log
I have too many different applications with the same kind of startup script, and it is hard to start them one by one. I need to start them all with one command.
All the shell scripts are expected to be processed one by one, in order. If any of them fails, skip it and run the next one.
And when I tried to write a script like this:
sh start1.sh
wait
echo "application 1 was start up"
sh start2.sh
wait
echo "application 2 was start up"
...
sh startxxx.sh
wait
echo "application xxx was start up"
Though all the child shell scripts ran in order as I expected, and the output made it look as if everything was working, in fact only the last application was started; all the previous "nohup xxxx &" processes were shut down.
I have also tried writing it like this:
sh start1.sh &
sh start2.sh &
...
sh startxxx.sh &
Although the result was what I wanted (all the applications started fine), the scripts run in parallel, so the console output is interleaved and unreadable. It gets a good result, but not in a graceful way.
I have no idea how to solve this problem...
Please help me with this, thank you very much!
When you have a script with commands, you can do chmod +x start.sh. Now the script can be started with ./start.sh. You avoid an additional sh process, and with ls -l you can see which scripts are executable.
In your scripts you have tail -f. This will be very confusing for a background process. Start the scripts in the background and view the logging from the console. I do hope that each script is using a different myApplication.jar and myApplication.log.
When the logging in the logfile is duplicated on stdout (your command-line window), you can throw that stdout logging away:
./start1.sh > /dev/null 2>&1 &
./start2.sh > /dev/null 2>&1 &
./startxxx.sh > /dev/null 2>&1 &
The processes will be killed if you log out before the scripts have terminated. This can be avoided with nohup:
nohup ./start1.sh > /dev/null 2>&1 &
nohup ./start2.sh > /dev/null 2>&1 &
nohup ./startxxx.sh > /dev/null 2>&1 &
Edit:
The OP wants to start the programs in a fixed order.
Starting the scripts exactly one after another, in order, should be possible by calling them in the right order (perhaps with an additional sleep 1).
When you need to wait until program 1 has finished some init work, you need to check for that. Use one script that calls all the scripts and add some control statements, like:
nohup java something &
while ! grep -q "Started" myApplication.log; do
    sleep 1
done
When the Java program has an error, the while loop will wait forever, so replace it with some maximum retry count:
for ((retry=0; retry<100; retry++)); do
    grep -q "Started" myApplication.log && break
    sleep 1
done
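For illustration, a minimal sketch of one controlling script built from these pieces. The application names and the /applications/... directories are hypothetical, and each application is assumed to write "Started" to its own myApplication.log:
#!/bin/bash
env=$1
for app in app1 app2 app3; do                # hypothetical application names
    cd "/applications/$app" || continue      # skip an app whose directory is missing
    nohup java -jar -Dspring.profiles.active="$env" myApplication.jar > /dev/null 2>&1 &
    # wait (at most ~100 seconds) until the log says the app is up, then move on
    for ((retry=0; retry<100; retry++)); do
        grep -q "Started" myApplication.log 2>/dev/null && break
        sleep 1
    done
    echo "$app was started up"
done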
https://man7.org/linux/man-pages/man8/cron.8.html
This might help you. Cron is a task scheduler, which you can use to run programs on a schedule. If the man page is difficult to understand, look for tutorials on it; I'm sure some exist.

Crontab not running second command

I have the following two lines in crontab. I expect the first line to start my python script 30 seconds after boot, and the second line to kill and restart the script every two minutes.
@reboot (/bin/sleep 30; /usr/bin/python3 -u /home/pi/Desktop/file.py > /home/pi/Desktop/logfile 2>&1)
*/2 * * * * (kill $(pgrep -f 'python3 -u /home/pi/Desktop/file.py'); /usr/bin/python3 -u /home/pi/Desktop/file.py > /home/pi/Desktop/logfile 2>&1)
The script does run correctly upon boot, and the script is killed two minutes later, but it is not restarted by the second line. I don't believe it is a syntax error, because if I copy the second line directly into the terminal (without the */2 * * * *), it properly kills and restarts the script. Why does this line work in the terminal, but not in crontab?
Thanks in advance
I'm not sure why, but it seems that crontab will not execute any other commands on the same line after the kill $() command. (A plausible explanation: pgrep -f also matches the shell that cron spawned to run the line itself, since the pattern appears in its command line, so the kill takes down the very shell that was about to run the restart.)
I discovered this by placing printf commands writing to a log file before and after the kill command, but only the one before kill ended up in the log. I removed the kill command but left pgrep in its place, which resulted in the first printf text, the PID number, and the second printf text all appearing in the log.
My workaround was just to place the two commands in a shell script and have crontab run that script. It seems to work just fine.
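For illustration, a sketch of that workaround; restart.sh is a hypothetical name and location:
#!/bin/bash
# restart.sh - kill the running script (if any), then start a fresh copy
pkill -f 'python3 -u /home/pi/Desktop/file.py'
/usr/bin/python3 -u /home/pi/Desktop/file.py > /home/pi/Desktop/logfile 2>&1
with the crontab entry:
*/2 * * * * /home/pi/restart.sh
This avoids the problem because the script's own command line (/bin/bash /home/pi/restart.sh) does not contain the pattern, so pkill cannot kill the shell doing the restarting.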

How to keep a bash script running in the background

I wrote a simple bash script:
while :
do
    sleep 2
    # my code
done
Now I want this bash script to be running all the time.
bash mybash.sh > /dev/null &
When I run the above command, my script works fine, but when I close my terminal I think it gets killed, because it stops making the files it creates while running.
Run the script with bash script.sh in a terminal, press Ctrl+Z, and then use the bg command to put the script in the background (follow that with disown if it should survive closing the terminal).
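The sequence looks roughly like this (the job number will vary):
$ bash mybash.sh > /dev/null
^Z
[1]+  Stopped                 bash mybash.sh > /dev/null
$ bg
[1]+ bash mybash.sh > /dev/null &
$ disown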
#!/bin/bash
while true; do
    # your code
    sleep 5
done
Write a bash script like that and put it in cron; once you have checked that it has started, comment out the cron entry and the script will keep running in the background.
Instead of sleep 5 you can use however many seconds you want.
To check on your process, use the command below to get the details:
ps -ef | grep script_file_name
If you find more than one instance of the script running, keep one and kill the rest.
Hope this resolves your issue!
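Alternatively, you can skip cron entirely and detach the script from the terminal with nohup; a minimal sketch, using the script name from the question:
nohup bash mybash.sh > /dev/null 2>&1 &
nohup makes the script immune to the SIGHUP sent when the terminal closes, which is what was killing it before.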

Kill ssh or/and remote process from bash script

I am trying to run the following command as part of a bash script. It is supposed to open an ssh channel, run the program on the remote machine, save the output to a file for 10 seconds, kill the process that was writing to the file, and then give control back to the bash script.
#!/bin/bash
ssh hostname '/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null; sshpid=!$; sleep 10; kill -9 $sshpid 2>/dev/null &'
Unfortunately, what it seems to be doing is starting the program nodes-listener remotely, but it never gets any further and it doesn't give control back to the bash script. So the only way to stop the execution is Ctrl+C.
Killing ssh doesn't help (or rather can't be done from the script), since control is not with the bash script: it waits for the command within the ssh session to complete, which of course never happens, as the process has to be killed to stop.
Here's the command line that you're running on the remote system:
/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null
sshpid=!$
sleep 10
kill -9 $sshpid 2>/dev/null &
You should change it to this:
/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null & <-- Ampersand goes here
sshpid=$!
sleep 10
kill -9 $sshpid 2>/dev/null
You want to start nodes-listener and then kill it after ten seconds. To do this, you need to start nodes-listener as a background process, so that the shell executing this command line can move on to the next command after starting nodes-listener. The & in your command line is in the wrong place: it would apply only to the kill command. You need to apply it to the nodes-listener command.
I'll also note that your sshpid=!$ line was incorrect. You want sshpid=$!. $! is the process ID of the last command started in the background.
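Put back into a single ssh invocation, the corrected version of the command from the question would look like this:
ssh hostname '/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null & sshpid=$!; sleep 10; kill -9 $sshpid 2>/dev/null'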
You need to place the ampersand after the first command, then put the remaining commands onto the next line:
ssh hostname -- '/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null &
sshpid=$!; sleep 10; kill $sshpid 2>/dev/null'
Btw, ssh returns after all commands have been executed, which means it will close the allocated pty as well. If there are still background jobs running in that shell session, they will be killed by SIGHUP. This means you can probably omit the explicit kill command (depending on whether nodes-listener handles SIGHUP and SIGTERM differently). With that, you could simplify the code to the following:
ssh hostname -- sh -c '/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null &
sleep 10'
I have resolved this by pushing the shell script to the remote machine and executing it there. It is admittedly less tidy and relies on space being available on the remote computer.
Since my remote machine is a small physical device, space usage matters (even for the tiny amount required in this case). The script on the remote machine:
/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null &
sshpid=$!
sleep 20
sync
# killing nodes-listener process and giving control back to the base bash
killall -9 nodes-listener 2>/dev/null && echo "nodes-listener is killed"

Running shell script command after executing an application

I have written a shell script to execute a series of commands. One of the commands in the shell script is to launch an application. However, I do not know how to continue running the shell script after I have launched the application.
For example:
...
cp somedir/somefile .
./application
rm -rf somefile
Once I have launched the application with "./application", I am no longer able to run the "rm -rf somefile" command, but I really need to remove the file from the directory.
Does anyone have any ideas how to complete the "rm -rf" command after launching the application?
Thanks
As pointed out by others, you can background the application (see man bash, 'job control', e.g.).
Also, you can use the wait builtin to explicitly await the background jobs later:
./application &
echo doing some more work
wait # wait for background jobs to complete
echo application has finished
You should really read the man pages and bash help for more details, as always:
http://unixhelp.ed.ac.uk/CGI/man-cgi?sh
http://www.gnu.org/s/bash/manual/bash.html#Job-Control-Builtins
Start the application in the background; this way the shell does not wait for it to terminate and will execute the subsequent commands right after starting the application:
./application &
In the meantime, you can check the background jobs using the jobs command and wait on them via wait and their job ID. For example:
$ sleep 100 &
[1] 2098
$ jobs
[1]+ Running sleep 100 &
$ wait %1
Put the started process in the background:
./application &
You need to start the command in the background using '&', and maybe even nohup:
nohup ./application > log.out 2>&1 &
