Kill background process when another process ends in Linux

I have a little question and I hope someone can help me because I can not find a proper solution.
I want to resolve a hostname; while waiting for the result, I'd like to print a notification if it takes more than 30 seconds with shell script commands, preferably built-ins or ubiquitous system commands.
I have a background process that sleeps and then prints a message. While it sleeps, the script runs ping, but I can't figure out how to kill the background process once the ping finishes, so the message still prints even when the ping ends before the 30-second limit. This matters because it is part of a bigger script that takes some time to run.
Here's the code that I've been using:
((sleep 30; echo "Querying the DNS server takes more than 30 seconds.") & ping -q -c 1 localhost >/dev/null)
I would greatly appreciate any help. Other solutions are welcome too; I just want to tell the user that DNS resolution is too slow and that this will affect further execution. I have tried ping -w and -W, but those don't measure the resolution time. I have tried to trap the result of the ping. I have tried to kill all processes with the same PGID, but that kills the console as well. I am not the best with scripts, which is probably why this is taking me so long. Thank you in advance.

I hope this approach helps you. I think everything is pretty much portable, except for "bc" maybe. I can give you a "bc-less" version if you need it. Good luck!
#!/bin/bash
timeout=10   ## This is how long to wait before complaining
printed=1    ## this is how many times you want the message displayed (for instance, you might want a message EVERY X seconds)
starttime="$( date +%F ) $( date +%T.%3N )"
################### HERE GOES YOUR BACKGROUND PROCESS
sleep 30 &
#######################################################
processId=$!   ## And here we get the process Id
#######################################################
while [ ! -z "$( ps -ef | grep $processId | grep -v grep )" ]
do
    endtime="$( date +%F ) $( date +%T.%3N )"
    timeelapsed=$( echo " $(date -d "$endtime" "+%s" ) - $(date -d "$starttime" "+%s" ) " | bc )
    if [[ ($timeelapsed -gt $timeout) && ($printed -ne 0) ]]
    then
        echo "This is taking more than $timeout seconds"
        printed=$(( printed - 1 ))
        starttime="$( date +%F ) $( date +%T.%3N )"
    fi
    sleep 1   ## avoid busy-looping while we wait
done
### Do something once everything finished
echo "The background process ended!!"

Related

Shutdown computer when all instances of a given program have finished

I use the following script to check whether wget has finished downloading. To check for this, I'm looking for its PID, and when it is not found, the computer shuts down. This works fine for a single instance of wget; however, I'd like the script to watch all currently running wget processes.
#!/bin/bash
while kill -0 $(pidof wget) 2> /dev/null; do
    for i in '-' '/' '|' '\'
    do
        echo -ne "\b$i"
        sleep 0.1
    done
done
poweroff
EDIT: It would be great if the script checked that at least one instance of wget is running, and only then waited for wget to finish and shut down the computer.
In addition to the other answers, you can satisfy your check for at least one wget pid by initially reading the result of pidof wget into an array, for example:
pids=($(pidof wget))
if (( ${#pids[@]} > 0 )); then
    # do your loop
fi
This also brings up a way to routinely monitor the remaining pids as each wget operation completes, for example,
npids=${#pids[@]}                      ## save the original number of pids
while (( ${#pids[@]} > 0 )); do        ## while pids remain
    for ((i = 0; i < npids; i++)); do  ## loop, checking remaining pids
        kill -0 "${pids[i]}" 2>/dev/null || unset 'pids[i]'   ## drop pids that have exited
    done
    ## do your sleep and spin
done
poweroff
There are probably many more ways to do it. This is just one that came to mind.
I don't think kill is the right idea; maybe something along these lines:
while [ 1 ]
do
    live_wgets=0
    for pid in `ps -ef | grep '[w]get' | awk '{print $2}'`   # the [w] keeps grep from matching itself
    do
        live_wgets=$((live_wgets+1))
    done
    if test $live_wgets -eq 0; then   # no wget left, shut down
        sudo poweroff   # or whatever suits
    fi
    sleep 5   # wait for some time
done
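If pgrep is available, the same polling idea shrinks to a few lines; this is only a sketch of that alternative, not a tested drop-in:
while pgrep -x wget >/dev/null   # exact-name match, so nothing matches itself
do
    sleep 5                      # poll every 5 seconds
done
sudo poweroff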
You can adapt your script in the following way:
#!/bin/bash
spin[0]="-"
spin[1]="\\"
spin[2]="|"
spin[3]="/"
DOWNLOAD=`ps -ef | grep wget | grep -v grep`
while [ -n "$DOWNLOAD" ]; do
for i in "${spin[#]}"
do
DOWNLOAD=`ps -ef | grep wget | grep -v grep`
echo -ne "\b$i"
sleep 0.1
done
done
sudo poweroff
However, I would recommend using cron instead of an active-waiting approach, or even using wait:
How to wait in bash for several subprocesses to finish and return exit code !=0 when any subprocess ends with code !=0?
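For completeness, a minimal sketch of the wait approach, assuming the script itself launches the downloads (the URLs here are hypothetical):
#!/bin/bash
wget -q "http://example.com/file1" &   # hypothetical downloads started by this script
wget -q "http://example.com/file2" &
wait                                   # blocks until every background job has exited
poweroff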

Clear my script logs every 10 seconds

I have a script named run.sh.
This is my script's code:
#!/usr/bin/env bash
install() {
    sudo apt-get update
    sudo apt-get upgrade
}
if [ "$1" = "install" ]; then
    install
else
    if [ ! -f ./tg/tgcli ]; then
        echo "tg not found"
        echo "Run $0 install"
        exit 1
    fi
    #sudo service redis-server restart
    #./tg/tgcli -s ./bot/bot.lua -l 1 -E "$@"
    ./tg/tgcli -s ./bot/bot.lua "$@"
fi
When I run this script, it gives me output like this every second:
[09:54] 2014 Hello
[09:55] 2014 Hi
[09:57] 2014 How Are you ?
and many more lines like this (thousands per hour!), and my server slows down within 5 hours.
I checked the print commands in bot.lua, but there is no way to remove them.
Can you add some code to clear my script's logs every 10 seconds?
Thanks a lot.
My script's output isn't saved anywhere; it is only shown in the terminal.
I want something like the clear command on the Linux terminal that clears my script's logs every 5 or 10 minutes.
After the script has been running for 5 days I can (sometimes can't) log in to my server; the server gets very slow and I have to wait 3 to 5 minutes to log in, and amazingly, after I log in the server gets fast again!
I also forgot to say that I use byobu/screen to run my scripts, and I think screen is slowing my server down.
I don't think that something as simple as this would cause your server to slow down, but you can add a check to your script to calculate the size or line count of your log file every time it runs.
This function assumes you are redirecting your output to a log file. Set the variables to whatever makes the most sense.
log_check() {
    line_count=$(wc -l "$log_file" | awk '{print $1}')
    size_check=$(du -ax "$log_file" | awk '{print $1}')   ## size in 1K blocks, as reported by du
    max_file_size="1500"
    max_file_length="1000"
    if [[ $line_count -ge $max_file_length || $size_check -ge $max_file_size ]]; then
        echo "" > "$log_file"   ## empty the log file
    fi
}
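For example, one way to wire it up, assuming run.sh's output is redirected into $log_file (the file name and interval here are just placeholders):
log_file="./run.log"               # placeholder path
./run.sh > "$log_file" 2>&1 &      # send the bot's output to the log file
while kill -0 $! 2>/dev/null; do   # while the bot is still running
    log_check                      # truncate the log when it grows too large
    sleep 10
done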
I would also recommend using [[ ]] over [ ] since this is a bash script; as long as you don't need it to be POSIX compliant and only plan on using it with bash, [[ ]] is generally better than [ ].
EDIT:
Since you are logging output to the terminal and not a file you can literally use the clear command in your script.
Try this out and see how the functionality works
for i in {1..20}; do
    echo $i
    if (( i == 10 )); then
        clear
    fi
done
I'm assuming your code has a loop somewhere; if not, it will be a bit more complex to clear the terminal session. I'm not really sure which part of your code is actually printing anything to stdout, but I'm guessing it's this piece here:
./tg/tgcli -s ./bot/bot.lua "$@"
You could try something like this, which will background your initial process and then run clear every 60 seconds to clear the terminal window. Is there any reason you're not writing the output to a log file? That alone could solve some of your issues as well.
#!/bin/bash
./tg/tgcli -s ./bot/bot.lua "$@" &
pid="$!"
check_pid() {
    ps -ef | grep "$pid" | grep -v 'grep' &>/dev/null
}
cnt=1
until ! check_pid; do
    if (( cnt == 6 )); then
        clear
        cnt=1
    fi
    sleep 10
    ((cnt++))
done
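If writing to a file is acceptable, the same loop can simply truncate a log instead of clearing the screen; a sketch only, with a hypothetical bot.log:
#!/bin/bash
./tg/tgcli -s ./bot/bot.lua "$@" > bot.log 2>&1 &   # log to a file instead of the terminal
pid=$!
while kill -0 "$pid" 2>/dev/null; do
    sleep 600        # every 10 minutes
    : > bot.log      # empty the file in place (truncate -s 0 bot.log also works)
done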

How to get watch to run a bash script with quotes

I'm trying to have a lightweight memory profiler for the matlab jobs that are run on my machine. There is either one or zero matlab job instance, but its process id changes frequently (since it is actually called by another script).
So here is the bash script that I put together to log memory usage:
#!/bin/bash
pid=`ps aux | grep '[M]ATLAB' | awk '{print $2}'`
if [[ -n $pid ]]
then
    \grep VmSize /proc/$pid/status
else
    echo "no pid"
fi
when I run this script in bash like this:
./script.sh
it works fine, giving me the following result:
VmSize: 1289004 kB
which is exactly what I want.
Now, I want to run this periodically. So I run it with watch, like this:
watch ./script.sh
But in this case I only receive:
no pid
Please note that I know the matlab job is still running, because I can see it with the same pid in top, and besides, I know each matlab job takes several hours to finish.
I'm pretty sure that something is wrong with the quotes I have when setting pid. I just can't figure out how to fix it. Anyone knows what I'm doing wrong?
PS.
In the man page of watch, it says that commands are executed by sh -c. I did run my script like sh -c ./script and it works just fine, but watch doesn't.
Why don't you use a loop with sleep command instead?
For example:
#!/bin/bash
while [ "1" ]
do
    pid=`ps aux | grep '[M]ATLAB' | awk '{print $2}'`   # re-read the pid each pass, since it changes frequently
    if [[ -n $pid ]]
    then
        \grep VmSize /proc/$pid/status
    else
        echo "no pid"
    fi
    sleep 10
done
Here the script sleeps (waits) for 10 seconds. You can set the interval you need by changing the sleep command; for example, to make the script sleep for an hour, use sleep 1h.
To exit the script, press Ctrl-C.
This
pid=`ps aux | grep '[M]ATLAB' | awk '{print $2}'`
could be changed to:
pid=$(pidof MATLAB)
I have no idea why it's not working in watch but you could use a cron job and make the script log to a file like so:
#!/bin/bash
pid=$(pidof MATLAB) # Just to follow previously given advice :)
if [[ -n $pid ]]
then
    echo "$(date): $(\grep VmSize /proc/$pid/status)" >> logfile
else
    echo "$(date): no pid" >> logfile
fi
You'd of course have to create logfile with touch.
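A matching crontab entry could look like the line below (the path is hypothetical, and one minute is the finest granularity cron offers):
* * * * * /home/user/matlab_mem.sh   # append a VmSize sample to logfile every minute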
You might try just running the ps command in watch. I have had issues in the past with watch chopping lines when they get too long.
It can be fixed by making the terminal you are running the command from wider, or by changing the COLUMNS variable like this (you may need to adjust the 160 to your liking):
export COLUMNS=160;
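Another way around the width problem is to avoid parsing ps output at all; here is a sketch of the same script using pgrep (this assumes the process is really named MATLAB; otherwise pgrep -f MATLAB matches against the full command line):
#!/bin/bash
pid=$(pgrep -x MATLAB | head -n 1)   # prints only the PID, so terminal width never matters
if [[ -n $pid ]]
then
    grep VmSize "/proc/$pid/status"
else
    echo "no pid"
fi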

Recording messages received on a port with SOCAT

I have a server with an open port which receives between 50 and 1000 messages per second. By message I mean that a single line of text is sent.
Essentially we want to record these messages in a file which will be processed every hour (or x minutes).
I have created a bash script (see below) which runs in the background, and it works, except that when I kill the socat process (so I can take the file for processing and it can start a new file) we get part of a message, and I am sure we are also losing messages during the split second that socat is down.
DELAY="3600"
while true
do
NEXT_STOP=`date +%s --date "$DELAY second"`
(
while [ "$(date +%s)" -lt "$NEXT_STOP" ]
do
killall socat
socat -u TCP-LISTEN:6116,reuseaddr,keepalive,rcvbuf=131071,reuseaddr OPEN:/var/ais/out_v4.txt,creat,append
done
) & sleep $DELAY ; killall socat
mv /var/ais/out_v4.txt "/var/ais/_socat_received/"$(date +"%Y-%m-%d-%T")"__out_v4.txt"
done
Is there a way to:
Get socat to rotate its output file without killing the process
Or can we purge the content of the file whilst SOCAT is writing to it. e.g. cut the first 10000 lines into another file, so the output file remains a manageable size?
Many thanks in advance
For anyone interested, the final solution looks like the following; the key difference from Nicholas's solution below is that I needed to grep for the PID of the socat process rather than rely on $?:
#!/bin/bash
DELAY=600
SOCAT_PID=$(/bin/ps -eo pid,args | grep "socat -u TCP-LISTEN:12456" | grep -v grep | awk '{ print $1 }')
while kill -0 "$SOCAT_PID" 2>/dev/null
do
    touch /var/ais/out.txt
    NEXT_STOP=`date +%s --date "$DELAY second"`
    while kill -0 "$SOCAT_PID" 2>/dev/null && [ "$(date +%s)" -lt "$NEXT_STOP" ]
    do
        head -q - >> /var/ais/out.txt
    done
    mv /var/ais/out.txt "/var/ais/_socat_received/"$(date +"%Y-%m-%d-%T")"__out.txt"
done
In addition, the start script wraps socat in an infinite while loop, so that when the client disconnects we restart socat and wait for the next connection attempt:
while true
do
    socat -u TCP-LISTEN:12456,keepalive,reuseaddr,rcvbuf=131071 STDOUT | /var/ais/socat_write.sh
done
Instead of reinventing the wheel, you could use rotatelogs or multilog, both of which read log messages on std input and write them to log files with very flexible rotation config.
Even one step higher, the functionality you described is very similar to what rsyslogd and the like do.
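For example, piping socat straight into rotatelogs would produce hourly files without ever restarting the listener; a sketch only, with a hypothetical output pattern (rotatelogs ships with Apache httpd):
socat -u TCP-LISTEN:6116,reuseaddr,keepalive,rcvbuf=131071 STDOUT \
    | rotatelogs /var/ais/out_v4_%Y-%m-%d-%H.txt 3600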
Not so obvious! socat doesn't have any options for changing what it does with the connection halfway through. That means you'll have to be a little bit sneaky. Use socat with the output as STDOUT, and pipe to this script:
#!/bin/bash
DELAY="${DELAY:-3600}"   ## rotation interval in seconds (taken from the original script; set it to suit)
rv=0
while [ $rv -lt 1 ]
do
    NEXT_STOP=`date +%s --date "$DELAY second"`
    while [ "$(date +%s)" -lt "$NEXT_STOP" ] && [ $rv -lt 1 ]
    do
        head -q - >> /var/ais/out_v4.txt
        rv=$?
    done
    mv /var/ais/out_v4.txt "/var/ais/_socat_received/"$(date +"%Y-%m-%d-%T")"__out_v4.txt"
done
Totally untested, but looks reasonable?
socat -u TCP-LISTEN:6116,reuseaddr,keepalive,rcvbuf=131071,reuseaddr STDOUT | ./thescript.sh

Start a command, count lines of output after 10 seconds, then either restart it or let it run

I have an interesting situation I am trying to script. I have a program that outputs 26,000 lines after 10 seconds when it starts successfully. Otherwise I have to kill it and start it again. I tried doing something like this:
test $(./long_program | wc -l) -eq 26000 && echo "Started successfully"
but that only works if the program finishes running. Is there a clever way to watch the output stream of a command and make decisions accordingly? I'm at a loss, not quite sure even how to start searching for this. Thanks!
What about
./long_program > mylogfile &
pid=$!
sleep 10
# then test mylogfile's length and kill $pid if needed, e.g.:
if [ "$(wc -l < mylogfile)" -lt 26000 ]; then
    kill "$pid"
fi
count=0
until [ "$count" -ge 26000 ]; do
    killall longrun 2>/dev/null   # killall matches the process name, not the ./ path
    # start in background
    ./longrun > output.$$ &
    sleep 10
    count=$(wc -l < output.$$)
done
echo "done"
# disown so it continues after the current login quits
disown -h
