bash cron flock screen - linux

I am using cron to run a bash script periodically, and trying to use flock to prevent this script and the processes it creates from being run multiple times.
The entry in crontab to run it every minute is:
*/1 * * * * flock -n /tmp/mylockfile /home/user/myscript.sh arg1 arg2
The problem is that myscript.sh spawns multiple screen sessions in detached mode; it contains:
for i in {1..3}
do
    screen -d -m ./mysubscript.sh arg3
done
Running screen with "-d -m" as above starts screen in "detached" mode as a forked process, but these processes do not inherit the lock from flock, so every minute 3 new screen processes running mysubscript.sh show up.
If I use "-D -m" instead then only one screen process runs all the time until mysubscript.sh finishes, not three.
What I need is for flock to only run myscript.sh if none of the screen processes running mysubscript.sh are still running.
How can this be achieved? Are there flags in screen or flock that can help to achieve this?
EDIT: If I change the line inside the for loop to run mysubscript.sh as a background process with:
./mysubscript.sh arg3 &
The locking behavior is exactly as I want, except that I do not have the separate screens anymore.

Depending on your exact needs you can either run your subscript iterations consecutively in a single session or create an individual screen session for each.
Multiple Screen Method
screens.sh
#!/bin/bash
if ! screen -ls | grep -q "screenSession"
then
    for i in {1..3}
    do
        screen -S screenSession_${i} -d -m sh mysubscript.sh arg3 &
    done
    { echo "waking up after 3 seconds"; sleep 3; } &
    wait # wait for process to finish
    exit # exit screen
else
    echo "Screen Currently Running"
fi
Sessions
62646.screenSession_1 (Detached)
62647.screenSession_2 (Detached)
62648.screenSession_3 (Detached)
This method will set up a screen for each iteration of your loop and execute the subscript. If cron happens to try to run the script while there are still active sockets, it will exit until the next cron run.
Single Screen Caller Method
screencaller.sh
#!/bin/bash
if ! screen -ls | grep -q "screenSession"
then
    screen -S screenSession -d -m sh screen.sh
else
    echo "Screen Currently Running"
fi
screen.sh
#!/bin/bash
for i in {1..3}
do
    sh mysubscript.sh arg3
done
{ echo "waking up after 3 seconds"; sleep 3; } &
wait # wait for process to finish
exit # exit screen
Session
59916.screenSession (Detached)
This method uses a separate caller script and then simply runs your subscript iterations one after the other in the same screen session.
Either method would then be executed like so (e.g. screens.sh or screencaller.sh):
*/1 * * * * flock -n /tmp/mylockfile /home/user/screens.sh arg1 arg2
Or if you wanted to run it manually from the CLI, just do:
$ bash screens.sh
If you wanted to enter the session you would just call screen -r screenSession. A couple of other useful commands are screen -ls and ps aux | grep screen, which show the screens and processes that you currently have running.
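If you would rather keep the structure of the original myscript.sh from the question, another option (a minimal sketch, not part of the method above; the mysub_ session prefix is an assumption) is to have the script wait until its own screen sessions disappear, so that the flock'd wrapper, and therefore the lock, lives as long as the sessions do:
#!/bin/bash
# hypothetical myscript.sh variant: hold the flock lock until all sessions finish
for i in {1..3}
do
    screen -S "mysub_${i}" -d -m ./mysubscript.sh arg3
done
# poll until no session with the assumed prefix remains
while screen -ls | grep -q "mysub_"
do
    sleep 5
done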

Related

Keep attached screen session alive and waiting for more commands to enter from detached screens

I had this idea of a script that I was trying to make come to light. I want one main process that is running and keeping track of subprocesses running in the background. I decided to go with screen for my implementation, and I can run main.sh in attached mode, with the subcommands all running detached. I can also send text to main.sh once each sub-process has hit its point. However, I want the main script to keep running until all sub-processes finish (accomplished), but I also want it to keep updating what it prints out to the user. I currently have the main process doing:
while screen -list | grep -q process-1 || screen -list | grep -q process-2 || screen -list | grep -q process-3; do
sleep 1
done
which works perfectly as the main loop, but any data I send to that process just prints out like text, not like a command. Is there a way I can keep a screen session alive and receiving more commands/variables?
I currently plan on sending data from the sub-process like screen -S main -X stuff "PROCESS_1=FINISHED", and the main will keep trying to grab the variable PROCESS_1 to get its status.
I can't just get the result of the sub-process either, as for at least one of the commands I plan on it running continuously, but I want to let the main process know when it has hit a certain point. I was also tinkering with the idea of using file descriptors, but I had issues getting that working in detached mode.
Is this possible and I just haven't found the right command? Do I need to somehow use another screen as a data layer so that the main layer just prints out to the user?
For completeness, here is the current setup:
start.sh
#!/bin/bash
screen -S process-1 -dm ./process-1.sh
screen -S process-2 -dm ./process-2.sh
screen -S main -m ./main.sh
main.sh
#!/bin/bash
while screen -list | grep -q process-1 || screen -list | grep -q process-2; do
sleep 1
done
process-1.sh
#!/bin/bash
count=0
while [[ count -ne 5 ]]; do
((count+=1))
sleep 5
done
screen -S main -X stuff "PROCESS_1=FINISHED"
process-2.sh
#!/bin/bash
count=0
while [[ count -ne 3 ]]; do
((count+=1))
sleep 5
done
screen -S main -X stuff "PROCESS_2=FINISHED"
Thanks for any advice you can provide.
EDIT:
Something along these lines is the end goal: like Vue.js, a constant screen that updates whenever changes are made (or, in this case, whenever processes end).
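A minimal sketch of the file-descriptor idea mentioned above: instead of typing text into the attached session with screen -X stuff, the sub-processes could write status lines to a named pipe (FIFO) that main.sh polls alongside the session check. The FIFO path and the process- session prefix are assumptions:
#!/bin/bash
# hypothetical main.sh variant: read status lines from a FIFO
status_pipe=/tmp/main_status.fifo
[[ -p "$status_pipe" ]] || mkfifo "$status_pipe"
exec 3<> "$status_pipe"          # open read-write so neither end blocks

while screen -list | grep -q process-
do
    # wake up at least once per second so the display can be refreshed
    if read -r -t 1 line <&3; then
        echo "update: $line"     # e.g. PROCESS_1=FINISHED
    fi
done
The sub-processes would then replace screen -S main -X stuff "PROCESS_1=FINISHED" with echo "PROCESS_1=FINISHED" > /tmp/main_status.fifo.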

How do I setup two curl commands to execute at different times forever?

For example, I want to run one command every 10 seconds and the other command every 5 minutes. I can only get the first one to log properly to a text file. Below is the shell script I am working on:
echo "script Running. Press CTRL-C to stop the process..."
while sleep 10;
do
curl -s -I --http2 https://www.ubuntu.com/ >> new.txt
echo "------------1st command--------------------" >> logs.txt;
done
||
while sleep 300;
do
curl -s -I --http2 https://www.google.com/
echo "-----------------------2nd command---------------------------" >> logs.txt;
done
I would advise you to go with @Marvin Crone's answer, but researching cron jobs and background processes doesn't seem like the kind of hassle I would go through for this little script. Instead, try putting both loops into separate scripts, like so:
script1.sh
echo "job 1 Running. Type fg 1 and press CTRL-C to stop the process..."
while sleep 10;
do
echo $(curl -s -I --http2 https://www.ubuntu.com/) >> logs.txt;
done
script2.sh
echo "job 2 Running. Type fg 2 and press CTRL-C to stop the process..."
while sleep 300;
do
echo $(curl -s -I --http2 https://www.google.com/) >> logs.txt;
done
adding executable permissions
chmod +x script1.sh
chmod +x script2.sh
and last but not least running them:
./script1.sh & ./script2.sh &
This creates two separate jobs in the background that you can bring to the foreground by typing:
fg (1 or 2)
and stop them with CTRL-C, or suspend them with CTRL-Z and send them back to the background with bg.
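For reference, a typical interaction at the prompt looks roughly like this (the jobs output is illustrative):
$ jobs
[1]-  Running                 ./script1.sh &
[2]+  Running                 ./script2.sh &
$ fg %2      # bring job 2 to the foreground; CTRL-C stops it, CTRL-Z suspends it again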
I think what is happening is that you start the first loop. Your first loop needs to complete before the second loop will start. But, the first loop is designed to be infinite.
I suggest you put each curl loop in a separate batch file.
Then, you can run each batch file separately, in the background.
I offer two suggestions for you to investigate for your solution.
One, research the use of crontab and set up a cron job to run the batch files.
Two, research the use of nohup as a means of running the batch files.
I strongly suggest you also research the means of monitoring the jobs and knowing how to terminate the jobs if anything goes wrong. You are setting up infinite loops. A simple Control C will not terminate jobs running in the background. You are treading in areas that can get out of control. You need to know what you are doing.
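If you do go the nohup route, a minimal sketch (script names taken from the answer above; the PID files are an assumption) looks like this:
nohup ./script1.sh >/dev/null 2>&1 &
echo $! > script1.pid            # remember the PID so the job can be stopped later
nohup ./script2.sh >/dev/null 2>&1 &
echo $! > script2.pid

# monitoring and termination:
ps aux | grep 'script[12]\.sh'
kill "$(cat script1.pid)" "$(cat script2.pid)"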

Parallel run and wait for processes from subshell

Hi all. I'm trying to make something like the parallel tool for the shell, simply because the functionality of parallel is not enough for my task. The reason is that I need to run different versions of a compiler.
Imagine that I need to compile 12 programs with different compilers, but I can run only 4 of them simultaneously (otherwise the PC runs out of memory and crashes :). I also want to be able to observe what's going on with each compile, so I execute every compile in a new window.
Just to make it easier, here I'll replace the compiler with a small script, sleep.sh, that waits and then prints its process id:
#!/bin/bash
sleep 30
echo $$
So the main script, parallel_run.sh, looks like this:
#!/bin/bash
for i in {0..11}; do
    xfce4-terminal -H -e "./sleep.sh" &
    pids[$i]=$!
    pstree -p $pids
    if (( $i % 4 == 0 ))
    then
        for pid in ${pids[*]}; do
            wait $pid
        done
    fi
done
The problem is that with $! I get the PID of xfce4-terminal and not the program it executes. So if I look at the pstree output of the 1st iteration I can see the output from the main script:
xfce4-terminal(31666)----{xfce4-terminal}(31668)
|--{xfce4-terminal}(31669)
and sleep.sh says that it had pid = 30876 at that time. Thus wait doesn't work at all in this case.
Q: How do I get the right PID of the compiler that runs in the subshell?
Or maybe there is another way to solve a task like this?
It seems like there is no way to trace the PID from parent to child if you invoke the process in a new xfce4-terminal, as the terminal process dies right after it has executed the given command. So I came to a solution which is not perfect, but acceptable in my situation: I run the compiler processes in the background and redirect their output to .log files. Then I run tail on these logfiles in the terminals, and when the compilers from the current batch are done I kill all tails belonging to the current $USER and run the next batch.
#!/bin/bash
for i in {1..8}; do
    ./sleep.sh > ./process_$i.log &
    prcid=$!
    xfce4-terminal -e "tail -f ./process_$i.log" &
    pids[$i]=$prcid
    if (( $i % 4 == 0 ))
    then
        for pid in ${pids[*]}; do
            wait $pid
        done
        killall -u $USER tail
    fi
done
Hopefully there will be no other tails running at that time :)
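If the fixed batches of four are too coarse, a variant of the same idea (a sketch assuming bash 4.3+ for wait -n) starts a new compile as soon as any running one finishes; the viewer terminals are disowned so that only the compile jobs are counted and waited for:
#!/bin/bash
max_jobs=4
for i in {1..12}; do
    ./sleep.sh > ./process_$i.log &                        # the real work
    xfce4-terminal -e "tail -f ./process_$i.log" & disown  # viewer only, not a tracked job
    # block while the number of running compile jobs is at the limit
    while (( $(jobs -rp | wc -l) >= max_jobs )); do
        wait -n                                            # returns when any job exits
    done
done
wait                             # wait for the last compiles
killall -u $USER tail 2>/dev/null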

Don't show the output of kill command in a Linux bash script [duplicate]

How can you suppress the Terminated message that comes up after you kill a
process in a bash script?
I tried set +bm, but that doesn't work.
I know another solution involves calling exec 2> /dev/null, but is that
reliable? How do I reset it back so that I can continue to see stderr?
In order to silence the message, you must be redirecting stderr at the time the message is generated. Because the kill command sends a signal and doesn't wait for the target process to respond, redirecting stderr of the kill command does you no good. The bash builtin wait was made specifically for this purpose.
Here is a very simple example that kills the most recent background command. (Learn more about $! here.)
kill $!
wait $! 2>/dev/null
Because both kill and wait accept multiple pids, you can also do batch kills. Here is an example that kills all background processes (of the current process/script of course).
kill $(jobs -rp)
wait $(jobs -rp) 2>/dev/null
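Put together as a self-contained script, the pattern looks like this (sleep stands in for the real background job):
#!/bin/bash
sleep 30 &
pid=$!
kill "$pid"
wait "$pid" 2>/dev/null          # absorbs the status so no Terminated message appears
echo "killed $pid quietly"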
I was led here from bash: silently kill background function process.
The short answer is that you can't. Bash always prints the status of foreground jobs. The monitoring flag only applies for background jobs, and only for interactive shells, not scripts.
see notify_of_job_status() in jobs.c.
As you say, you can redirect standard error to /dev/null, but then you miss any other error messages. You can make it temporary by doing the redirection in a subshell which runs the script. This leaves the original environment alone:
(script 2> /dev/null)
which will lose all error messages, but just from that script, not from anything else run in that shell.
You can save and restore standard error by redirecting a new file descriptor to point there:
exec 3>&2 # 3 is now a copy of 2
exec 2> /dev/null # 2 now points to /dev/null
script # run script with redirected stderr
exec 2>&3 # restore stderr to saved
exec 3>&- # close saved version
But I wouldn't recommend this -- the only upside over the first approach is that it saves a sub-shell invocation, while being more complicated and possibly even altering the behavior of the script if the script itself alters file descriptors.
EDIT:
For a more appropriate answer, check the answer given by Mark Edgar.
Solution: use SIGINT (works only in non-interactive shells)
Demo:
cat > silent.sh <<"EOF"
sleep 100 &
kill -INT $!
sleep 1
EOF
sh silent.sh
http://thread.gmane.org/gmane.comp.shells.bash.bugs/15798
Maybe detach the process from the current shell process by calling disown?
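For example (a sketch of the disown suggestion, with sleep standing in for the real process):
sleep 30 &
disown          # remove the job from the shell's job table
kill $!         # no status line is printed, since the shell no longer tracks it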
The Terminated message is logged by the default signal handler of bash 3.x and 4.x. Just trap the TERM signal at the very start of the child process:
#!/bin/sh
## assume script name is test.sh
foo() {
    trap 'exit 0' TERM ## here is the key
    while true; do sleep 1; done
}
echo before child
ps aux | grep 'test\.s[h]\|slee[p]'
foo &
pid=$!
sleep 1 # wait trap is done
echo before kill
ps aux | grep 'test\.s[h]\|slee[p]'
kill $pid ## no need to redirect stdin/stderr
sleep 1 # wait kill is done
echo after kill
ps aux | grep 'test\.s[h]\|slee[p]'
Is this what we are all looking for?
Not wanted:
$ sleep 3 &
[1] 234
<pressing enter a few times....>
$
$
[1]+ Done sleep 3
$
Wanted:
$ (set +m; sleep 3 &)
<again, pressing enter several times....>
$
$
$
$
$
As you can see, no job end message. Works for me in bash scripts as well, also for killed background processes.
'set +m' disables job control (see 'help set') for the current shell. So if you enter your command in a subshell (as done here with the parentheses) you will not influence the job control settings of the current shell. The only disadvantage is that you need to get the pid of your background process back to the current shell if you want to check whether it has terminated, or to evaluate its return code.
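One way to get that pid back out of the subshell (an illustrative sketch; note that the background command's output must be redirected so the command substitution does not hang waiting for it):
pid=$( set +m; sleep 30 >/dev/null 2>&1 & echo $! )
kill "$pid"     # the process is not a job of the current shell, so nothing is printed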
This also works for killall (for those who prefer it):
killall -s SIGINT (yourprogram)
suppresses the message... I was running mpg123 in background mode.
It could only silently be killed by sending a ctrl-c (SIGINT) instead of a SIGTERM (default).
disown did exactly the right thing for me -- the exec 3>&2 is risky for a lot of reasons -- set +bm didn't seem to work inside a script, only at the command prompt
I had success with adding 'jobs 2>&1 >/dev/null' to the script; not certain if it will help anyone else's script, but here is a sample:
while true; do echo $RANDOM; done | while read line
do
echo Random is $line the last jobid is $(jobs -lp)
jobs 2>&1 >/dev/null
sleep 3
done
Another way to disable job notifications is to place your command to be backgrounded in a sh -c 'cmd &' construct.
#!/bin/bash
# ...
pid="`sh -c 'sleep 30 & echo ${!}' | head -1`"
kill "$pid"
# ...
# or put several cmds in sh -c '...' construct
sh -c '
sleep 30 &
pid="${!}"
sleep 5
kill "${pid}"
'
I found that putting the kill command in a function and then backgrounding the function suppresses the termination output
function killCmd() {
    kill $1
}
killCmd $somePID &
Simple:
{ kill $!; } 2>/dev/null
Advantage? You can use any signal, e.g.:
{ kill -9 $PID; } 2>/dev/null

Bash trap not killing children, causes unexpected ctrl-c behavior

edit
For future readers. The root of this problem really came down to running the function in an interactive shell vs. putting it in a separate script.
Also, there are many things that could be improved in the code I originally posted. Please see comments for things that could/should have been done better.
/edit
I have a bash function intended to rerun a process in the background when files in a directory change (think like Grunt, but for general purposes). The script functions as desired while running:
The subprocess is correctly started (including any children)
On file change, the sub is killed (including children) and started again
However, on exit (ctrl-c) none of the processes are killed. Additionally, pressing ctrl-c a second time will kill the current terminal session. I'm assuming this is a problem with my trap, but have been unable to identify a reason for the issue.
Here is the code of rerun.sh
#!/bin/bash
# rerun.sh
_kill_children() {
    isTop=$1
    curPid=$2
    # Get pids of children
    children=`ps -o pid --no-headers --ppid ${curPid}`
    for child in $children
    do
        # Call this function to get grandchildren as well
        _kill_children 0 $child
    done
    # Parent calls this with 1, all others with 0, so only children are killed
    if [[ $isTop -eq 0 ]]; then
        kill -9 $curPid 2> /dev/null
    fi
}
rerun() {
    trap " _kill_children 1 $$; exit 0" SIGINT SIGTERM
    FORMAT=$(echo -e "\033[1;33m%w%f\033[0m written")
    # Command that should be repeatedly run is passed as args
    args=$@
    $args &
    # When a file changes in the directory, rerun the process
    while inotifywait -qre close_write --format "$FORMAT" .
    do
        # Kill current bg proc and its children
        _kill_children 1 $$
        $args & # Rerun the proc
    done
}
#This is sourced in my bash profile so I can run it any time
To test this, create a pair of executable files parent.sh and child.sh as follows:
#!/bin/bash
#parent.sh
./child.sh
#!/bin/bash
#child.sh
sleep 86400
Then source the rerun.sh file and run rerun ./parent.sh. In another terminal window I watch "ps -ef | grep pts/4" to see all processes for the rerun (in this example on pts/4). Touching a file in the directory triggers a restart of parent.sh and children. [ctrl-c] exits, but leaves the pids running. [ctrl-c] again kills bash and all other processes on pts/4.
Desired behavior: on [ctrl-c], kill children and exit to shell normally. Help?
--
Code sources:
Inotify idea from: https://exyr.org/2011/inotify-run/
Kill children from: http://riccomini.name/posts/linux/2012-09-25-kill-subprocesses-linux-bash/
This isn't a good practice to follow in the first place. Track your children explicitly:
children=( )
foo & children+=( "$!" )
...then you can kill or wait for them explicitly, referring to "${children[@]}" for the list. If you want to get grandchildren as well, this is a good use for fuser -k and a lockfile:
lockfile_name="$(mktemp /tmp/lockfile.XXXXXX)" # change appropriately
trap 'rm -f "$lockfile_name"' 0
exec 3>"$lockfile_name" # open lockfile on FD 3
kill_children() {
    # close our own handle on the lockfile
    exec 3>&-
    # kill everything that still has it open (our children and their children)
    fuser -k "$lockfile_name" >/dev/null
    # ...then open it again.
    exec 3>"$lockfile_name"
}
rerun() {
    trap 'kill_children; exit 0' SIGINT SIGTERM
    printf -v format '%b' "\033[1;33m%w%f\033[0m written"
    "$@" &
    # When a file changes in the directory, rerun the process
    while inotifywait -qre close_write --format "$format" .; do
        kill_children
        "$@" &
    done
}
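Assuming the same parent.sh/child.sh test files as in the question, usage stays the same; the difference is that every descendant inherits FD 3 on the lockfile, so fuser -k reaches the grandchildren too:
source rerun.sh          # or keep it sourced from your bash profile as before
rerun ./parent.sh        # ctrl-c now kills parent.sh and child.sh, then returns to the shell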
