Linux infinite loop with background

I have a little script called "CheekyScript.sh" that looks something like this:
#!/bin/bash
nohup mvn run_something_pretty_long
This works fine: it starts a long process in the background that keeps running after the session has expired and the user has logged out.
What I want to achieve is simple: introduce a little infinite loop, so that this process is run over and over again, but only AFTER the previous nohup run has completed. Of course I still want this entire bash script, and the nohup within it, to keep running long after the session has expired and I'm logged out.
I was thinking of something like this:
#!/bin/bash
while true
do
nohup mvn run_something_pretty_long
sleep 60
done
Obviously, what this does is start the nohup process every 60 seconds. The desired behaviour is to wait for the nohup to finish, wait a minute, and then start the loop again.
I was wondering: what is the best-practice solution for something like this?
Thank you very much in advance.

Use crontab.
Add an entry like this:
* * * * * /path/to/something
In the something script:
#!/bin/bash
LOCKFILE=/var/lock/mvn.lock
[ -f "$LOCKFILE" ] && exit 0
# Upon exit, remove the lockfile.
trap "{ rm -f $LOCKFILE ; exit 255; }" EXIT
touch "$LOCKFILE"
mvn run_something_pretty_long
exit 0
This tries to run the script once a minute, and mostly fails because the lockfile exists; but if the previous run has finished, the lockfile is gone and it starts again.
By default, cron emails all output to the user that owns the job.
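On systems that ship util-linux, flock(1) is a more robust alternative, since it avoids the small race between the [ -f ] test and the touch; a minimal sketch, assuming the same lock path:
#!/bin/bash
# Take an exclusive, non-blocking lock; if another run still holds it,
# flock exits immediately instead of queueing behind the running job.
flock -n /var/lock/mvn.lock -c "mvn run_something_pretty_long"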

You want to run your long-running script either once or repeatedly, and you want to run both of these using nohup. Since you already have one script that handles the first (run-once) case, make two copies of your "CheekyScript.sh": the first runs once, and the second you edit to run repeatedly (and can optionally check for a done condition).
This one runs once,
#!/bin/bash
#CheekyScriptOnce.sh
nohup mvn run_something_pretty_long
This one runs repeatedly,
#!/bin/bash
#CheekyRepeat.sh
thing="mvn run_something_pretty_long"
delay=60
nohup bash -c "while true; do $thing; sleep $delay; done" &
But you want some way to signal done. A control file can handle that,
#!/bin/bash
#CheekyRepeatConditional.sh
thing="mvn run_something_pretty_long"
delay=60
if [ ! -d etc ] ; then mkdir etc; fi
touch etc/Cheeky.run
nohup bash -c "while [ -f etc/Cheeky.run ]; do $thing; sleep $delay; done" &
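To signal done, remove the control file; the current mvn run finishes and the loop then exits on its own:
rm etc/Cheeky.run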

Related

How do I setup two curl commands to execute at different times forever?

For example, I want to run one command every 10 seconds and the other command every 5 minutes. I can only get the first one to log properly to a text file. Below is the shell script I am working on:
echo "script Running. Press CTRL-C to stop the process..."
while sleep 10;
do
curl -s -I --http2 https://www.ubuntu.com/ >> new.txt
echo "------------1st command--------------------" >> logs.txt;
done
||
while sleep 300;
do
curl -s -I --http2 https://www.google.com/
echo "-----------------------2nd command---------------------------" >> logs.txt;
done
I would advise you to go with @Marvin Crone's answer, but researching cron jobs and background processes doesn't seem like the kind of hassle I would go through for this little script. Instead, try putting both loops into separate scripts, like so:
script1.sh
echo "job 1 Running. Type fg 1 and press CTRL-C to stop the process..."
while sleep 10;
do
echo $(curl -s -I --http2 https://www.ubuntu.com/) >> logs.txt;
done
script2.sh
echo "job 2 Running. Type fg 2 and press CTRL-C to stop the process..."
while sleep 300;
do
echo $(curl -s -I --http2 https://www.google.com/) >> logs.txt;
done
Add executable permissions:
chmod +x script1.sh
chmod +x script2.sh
and, last but not least, run them:
./script1.sh & ./script2.sh &
This creates two separate jobs in the background that you can bring to the foreground by typing:
fg 1    (or fg 2)
and stop them with CTRL-C, or suspend them with CTRL-Z and send them back to the background with bg.
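Note that jobs started this way still die when you log out; if they should survive the session (as in the first question above), start them under nohup instead, for example:
nohup ./script1.sh >/dev/null 2>&1 &
nohup ./script2.sh >/dev/null 2>&1 &
The curl output still lands in logs.txt; only the startup message is discarded.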
I think what is happening is that you start the first loop, and the first loop needs to complete before the second loop will start. But the first loop is designed to be infinite.
I suggest you put each curl loop in a separate script file.
Then you can run each script separately, in the background.
I offer two suggestions for you to investigate for your solution.
One, research the use of crontab and set up a cron job to run the scripts.
Two, research the use of nohup as a means of running the scripts.
I strongly suggest you also research how to monitor the jobs and how to terminate them if anything goes wrong. You are setting up infinite loops; a simple Ctrl-C will not terminate jobs running in the background. You are treading in areas that can get out of control, so you need to know what you are doing.
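For example, from the shell that started them, the background jobs can be listed and terminated like this:
jobs -l     # list background jobs together with their PIDs
kill %1     # terminate job 1
kill %2     # terminate job 2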

Parallel run and wait for processes from subshell

Hi all! I'm trying to make something like a parallel tool for the shell, simply because the functionality of parallel is not enough for my task. The reason is that I need to run different versions of a compiler.
Imagine that I need to compile 12 programs with different compilers, but I can run only 4 of them simultaneously (otherwise the PC runs out of memory and crashes :). I also want to be able to observe what's going on with each compile, therefore I execute every compile in a new window.
Just to make it easier, here I'll replace the compiler I run with a small script that waits and returns its process id, sleep.sh:
#!/bin/bash
sleep 30
echo $$
So the main script should look like parallel_run.sh :
#!/bin/bash
for i in {0..11}; do
    xfce4-terminal -H -e "./sleep.sh" &
    pids[$i]=$!
    pstree -p $pids
    if (( $i % 4 == 0 ))
    then
        for pid in ${pids[*]}; do
            wait $pid
        done
    fi
done
The problem is that with $! I get the pid of xfce4-terminal and not of the program it executes. So if I look at the pstree of the 1st iteration, I can see output from the main script:
xfce4-terminal(31666)----{xfce4-terminal}(31668)
|--{xfce4-terminal}(31669)
and sleep.sh says that it had pid = 30876 at that time. Thus wait doesn't work at all in this case.
Q: How do I get the right PID of the compiler that runs in the subshell?
Or maybe there is another way to solve a task like this?
It seems there is no way to trace the PID from parent to child if you invoke the process in a new xfce4-terminal, as the terminal process dies right after it has executed the given command. So I came to a solution which is not perfect, but acceptable in my situation: I run the compiler processes in the background and redirect their output to .log files. Then I run tail on these logfiles in the terminal windows, and when the compilers from the current batch are done I kill all tails belonging to the current $USER, then run the next batch.
#!/bin/bash
for i in {1..8}; do
    ./sleep.sh > ./process_$i.log &
    prcid=$!
    xfce4-terminal -e "tail -f ./process_$i.log" &
    pids[$i]=$prcid
    if (( $i % 4 == 0 ))
    then
        for pid in ${pids[*]}; do
            wait $pid
        done
        killall -u $USER tail
    fi
done
Hopefully there will be no other tails running at that time :)
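If killing every tail owned by $USER is too blunt (there might be unrelated ones running), a narrower variation, assuming procps pkill is available, is to match only the tails started for these log files:
# kill only the tails that follow our process_*.log files
pkill -u "$USER" -f "tail -f ./process_"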

How can I make a ksh script terminate itself if any issues?

I have written a few ksh scripts, about 6 of them.
They are written to handle huge data files, something like 207 MB big. While running, a script sometimes gets stuck and does not end, and human intervention is required.
In the production environment I want them to run automatically, and a script should be able to end itself if there are any issues, without the need for human intervention.
If there are issues with a file, the script should end and start executing the next file.
How can I make it terminate itself if it gets stuck?
I assume that the only way you see the issues is that the script takes too long. In that case, a simple script that kills the process after a timeout should be sufficient:
#!/bin/bash
# Killerscript
PID=$1
TIME=$2
typeset -i i
i=0
while [ $i -lt $TIME ] ; do
    if ps -p $PID > /dev/null ; then
        i=$i+1
        sleep 1
    else
        exit 0
    fi
done
kill $PID
Your workflow would then be something like:
#!/bin/bash
process_1 &
killerscript $! 60
process_2 &
killerscript $! 30
...
If you have other ways to detect issues in your processes, you can easily add them to the loop in your killerscript.
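If GNU coreutils is available, timeout(1) achieves the same without a hand-rolled killer script; a minimal sketch:
timeout 60 process_1          # SIGTERM after 60 seconds
timeout -k 10 30 process_2    # SIGTERM after 30s, SIGKILL 10s later if it ignores that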

How to properly sigint a bash script that is run from another bash script?

I have two scripts, in which one is calling the other, and needs to kill it after some time. A very basic, working example is given below.
main_script.sh:
#!/bin/bash
cd "${0%/*}" #make current working directory the folder of this script
./record.sh &
PID=$!
# perform some other commands
sleep 5
kill -s SIGINT $PID
#wait $PID
echo "Finished"
record.sh:
#!/bin/bash
cd "${0%/*}" #make current working directory the folder of this script
RECORD_PIDS=1
printf "WallTimeStart: %f\n\n" $(date +%s.%N) >> test.txt
top -b -p $RECORD_PIDS -d 1.00 >> test.txt
printf "WallTimeEnd: %f\n\n" $(date +%s.%N) >> test.txt
Now, if I run main_script.sh, it does not cleanly shut down record.sh when it finishes: the top command keeps running in the background (test.txt grows until you kill the top process manually), even though main_script is finished and the record script was sent a SIGINT.
If I ctrl+c the main_script.sh, everything shuts down properly. If I run record.sh on its own and ctrl+c it, everything shuts down properly as well.
If I uncomment wait, the script will hang and I will need to ctrl+z it.
I have already tried all kinds of things, including using trap to launch a cleanup script when receiving a SIGINT, EXIT, and/or SIGTERM, but nothing worked. I also tried bringing record.sh back to the foreground using fg, but that did not help either. I have been searching for nearly a day now, with no luck unfortunately. I have made an ugly workaround which uses pidof to find the top process and kill it manually from main_script.sh, and then I have to write the "WallTimeEnd" statement to the file from main_script.sh manually as well. Not very satisfactory to me...
Looking forward to any tips!
Cheers,
Koen
Your issue is that the SIGINT is delivered to bash rather than to top. One option would be to use a new session and send the signal to the process group instead, like:
#!/bin/bash
cd "${0%/*}" #make current working directory the folder of this script
setsid ./record.sh &
PID=$!
# perform some other commands
sleep 5
kill -s SIGINT -- -$PID
wait $PID
echo "Finished"
This starts the sub-script in a new process group and the -pid tells kill to signal every process in that group, which will include top.
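You can check the grouping while main_script.sh is in its sleep; with setsid, record.sh and top share a PGID that differs from the parent script's:
# inspect PIDs and process-group IDs (output columns may vary by ps build)
ps -e -o pid,pgid,comm | grep -E 'record|top'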

defer pipe process to background after text match

So I have a bash command to start a server, and it outputs some lines before getting to the point where it outputs something like "Server started, Press Control+C to exit". How do I pipe this output so that, when this line occurs, I put this process in the background and continue with another script/function (i.e. to do stuff that needs to wait until the server starts, such as run tests)?
I want to end up with 3 functions
start_server
run_tests
stop_server
I've got something along the lines of:
function read_server_output {
while read data; do
printf "$data"
if [[ $data == "Server started, Press Control+C to exit" ]]; then
# do something here to put server process in the background
# so I can run another function
fi
done
}
function start_server {
# start the server and pipe its output to another function to check its running
start-server-command | read_server_output
}
function run_tests {
# do some stuff
}
function stop_server {
# stop the server
}
# run the bash script code
start_server
run_tests
stop_server
A possibly related question: SH/BASH - Scan a log file until some text occurs, then exit. How?
Thanks in advance I'm pretty new to this.
First, a note on terminology...
"Background" and "foreground" are controlling-terminal concepts, i.e., they have to do with what happens when you type ctrl+C, ctrl+Z, etc. (which process gets the signal), whether a process can read from the terminal device (a "background" process gets a SIGTTIN that by default causes it to stop), and so on.
It seems clear that this has little to do with what you want to achieve. Instead, you have an ill-behaved program (or suite of programs) that needs some special coddling: when the server is first started, it needs some hand-holding up to some point, after which it's OK. The hand-holding can stop once it outputs some text string (see your related question for that, or the technique below).
There's a big potential problem here: a lot of programs, when their output is redirected to a pipe or file, produce no output until they have printed a "block" worth of output, or are exiting. If this is the case, a simple:
start-server-command | cat
won't print the line you're looking for (so that's a quick way to tell whether you will have to work around this issue as well). If so, you'll need something like expect, which is an entirely different way to achieve what you want.
Assuming that's not a problem, though, let's try an entirely-in-shell approach.
What you need is to run the start-server-command and save the process-ID so that you can (eventually) send it a SIGINT signal (as ctrl+C would if the process were "in the foreground", but you're doing this from a script, not from a controlling terminal, so there's no key the script can press). Fortunately sh has a syntax just for this.
First let's make a temporary file:
#! /bin/sh
# myscript - script to run server, check for startup, then run tests
TMPFILE=$(mktemp -t myscript) || exit 1 # create /tmp/myscript.<unique>
trap "rm -f $TMPFILE" 0 1 2 3 15 # arrange to clean up when done
Now start the server and save its PID:
start-server-command > $TMPFILE & # start server, save output in file
SERVER_PID=$! # and save its PID so we can end it
trap "kill -INT $SERVER_PID; rm -f $TMPFILE" 0 1 2 3 15 # adjust cleanup
Now you'll want to scan through $TMPFILE until the desired output appears, as in the other question. Because this requires a certain amount of polling you should insert a delay. It's also probably wise to check whether the server has failed and terminated without ever getting to the "started" point.
while ! grep '^Server started, Press Control+C to exit$' $TMPFILE >/dev/null; do
# message has not yet appeared, is server still starting?
if kill -0 $SERVER_PID 2>/dev/null; then
# server is running; let's wait a bit and try grepping again
sleep 1 # or other delay interval
else
echo "ERROR: server terminated without starting properly" 1>&2
exit 1
fi
done
(Here kill -0 is used to test whether the process still exists; if not, it has exited. The "cleanup" kill -INT will produce an error message, but that's probably OK. If not, either redirect that kill command's error-output, or adjust the cleanup or do it manually, as seen below.)
At this point, the server is running and you can do your tests. When you want it to exit as if the user hit ctrl+C, send it a SIGINT with kill -INT.
Since there's a kill -INT in the trap set for when the script exits (0) as well as when it's terminated by SIGHUP (1), SIGINT (2), SIGQUIT (3), and SIGTERM (15)—that's the:
trap "do some stuff" 0 1 2 3 15
part—you can simply let your script exit at this point, unless you want to specifically wait for the server to exit too. If you want that, perhaps:
kill -INT $SERVER_PID; rm -f $TMPFILE # do the pre-arranged cleanup now
trap - 0 1 2 3 15 # don't need it arranged anymore
wait $SERVER_PID # wait for server to finish exit
would be appropriate.
(Obviously none of the above is tested, but that's the general framework.)
Probably the easiest thing to do is to start it in the background and block on reading its output. Something like:
{ start-server-command & } | {
while read -r line; do
echo "$line"
echo "$line" | grep -q 'Server started' && break
done
cat &
}
echo script continues here after server outputs 'Server started' message
But this is a pretty ugly hack. It would be better if the server could be modified to perform a more specific action which the script could wait for.
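To map this back onto the three functions the question asks for, a rough sketch using the temp-file polling approach from the previous answer (server.log, the readiness string, and the test body are illustrative placeholders):
#!/bin/bash
start_server() {
    start-server-command > server.log 2>&1 &
    SERVER_PID=$!
    # poll the log until the readiness message appears
    until grep -q 'Server started' server.log; do sleep 1; done
}
run_tests() {
    echo "running tests..."    # placeholder for the real test commands
}
stop_server() {
    kill -INT $SERVER_PID      # same signal Ctrl+C would deliver
    wait $SERVER_PID
}
start_server
run_tests
stop_server
The same output-buffering caveat from the previous answer applies: if the server only flushes its "Server started" line to a terminal, the grep will never match.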
