How to check if a file's size is not incrementing, and if not, kill the $$ of the script - Linux

I am trying to figure out a way to monitor the files I am dumping from my script. If there is no growth seen in the child files, then kill my script.
I am doing this to free up resources when they are not needed. Here is what I have come up with, but I think my approach is going to add burden to the CPU. Can anyone please suggest a more efficient way of doing this?
The script below is supposed to poll every 15 seconds and collect two size samples of the same file; if the two samples are the same, it exits.
checkUsage() {
    while true; do
        sleep 15
        fileSize=$(stat -c%s "$1")
        sleep 10
        fileSizeNew=$(stat -c%s "$1")
        if [ "$fileSize" == "$fileSizeNew" ]; then
            echo -e "[Error]: No activity noted on this window for 10 sec. Exiting..."
            kill -9 $$
        fi
    done
}
And I am planning to call it as follows (in the background):
checkUsage /var/log/messages &
I could also accept a solution that shows how to monitor a tail command and exit if nothing is being printed by tail. NOT SURE WHY PEOPLE ARE CONFUSED: the end goal of this question is to check whether some file has been edited in the last 15 seconds, and if not, exit or throw some error.
I have achieved this with the script above, but I don't know if this is the smartest way of achieving it. I am asking this question to hear views from others on whether there is an alternative or better way of doing it.

I would base the check on the file modification time instead of the size, so something like this (untested code):
checkUsage() {
    while true; do
        # Test if file mtime is more than 'second arg' seconds old, default to 10 seconds
        if [ $(( $(date +"%s") - $(stat -c%Y /var/log/messages) )) -gt ${2-10} ]; then
            echo -e "[Error]: No activity noted on this window for ${2-10} sec. Exiting..."
            return 1
        fi
        # Sleep 'first arg' seconds, 15 seconds by default
        sleep ${1-15}
    done
}
The idea is to compare the file mtime with the current time; if the difference is greater than the second argument (in seconds), print the message and return.
And then I would call it later like this (or with no args to use the defaults):
checkUsage 20 10 || exit 1
which exits the script with code 1 when the function returns from its otherwise infinite loop (it keeps looping as long as the file is being modified).
Edit: reading myself again, the target file could be a parameter too, to allow better reuse of the function; left as an exercise for the reader.
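For what it's worth, a minimal sketch of that parameterized variant (untested, like the above; the argument order is my own choice):

checkUsage() {
    # $1: file to watch, $2: poll interval (default 15s), $3: max idle age (default 10s)
    while true; do
        if [ $(( $(date +"%s") - $(stat -c%Y "$1") )) -gt "${3-10}" ]; then
            echo "[Error]: No activity noted on $1 for ${3-10} sec. Exiting..."
            return 1
        fi
        sleep "${2-15}"
    done
}

checkUsage /var/log/messages 20 10 || exit 1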

If on Linux, on a local file system (ext4, Btrfs, ...), not a network file system, then you could consider the inotify(7) facilities: something can be triggered when a file or directory changes or is accessed.
In particular, you might set up an incron job through an incrontab(5) file; maybe it could communicate with some other job ...
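For illustration, a hypothetical incrontab(5) entry (the handler script path is an assumption) that runs a command each time /var/log/messages is written to:

# format: <path> <event mask> <command>; $@ expands to the watched path
/var/log/messages IN_MODIFY /usr/local/bin/on-change.sh $@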
PS. I am not sure I understand what you really want to do...

I suppose an external program is modifying /var/log/messages.
If this is the case, below is my script (with minor changes to yours):
#!/bin/bash
# Bash script to monitor changes to a file
checkUsage() # Note that U is in caps
{
    while true
    do
        sleep 15
        fileSize=$(stat -c%s "$1")
        sleep 10
        fileSizeNew=$(stat -c%s "$1")
        if [ "$fileSize" == "$fileSizeNew" ]
        then
            echo -e "[Notice : ] no changes noted in $1 : gracefully exiting"
            exit # previously this was kill -9 $$
                 # changing this to exit ends the program gracefully.
                 # use kill -9 to kill a process which is not under your control;
                 # kill -9 sends the SIGKILL signal.
        fi
    done
}
checkUsage "$1" # I have added this to your script
# End of the script
Save the script as checkusage and run it like:
./checkusage /var/log/messages &
Edit:
Since you're looking for better solutions, I would suggest inotifywait; thanks to the other answerer for the suggestion.
Below would be my code:
while inotifywait -t 10 -q -e modify "$1" >/dev/null
do
    sleep 15 # as you said, the polling happens every 15 seconds
             # (note: the watch is re-established each iteration, so changes
             # made during the sleep itself are not seen)
done
echo "Script exited gracefully : $1 has not been changed"
Below are the details from the inotifywait man page:

-t <seconds>, --timeout <seconds>
    Exit if an appropriate event has not occurred within <seconds>
    seconds. If <seconds> is zero (the default), wait indefinitely
    for an event.

-e <event>, --event <event>
    Listen for specific event(s) only. The events which can be
    listened for are listed in the EVENTS section. This option can
    be specified more than once. If omitted, all events are
    listened for.

-q, --quiet
    If specified once, the program will be less verbose.
    Specifically, it will not state when it has completed
    establishing all inotify watches.

modify (Event)
    A watched file or a file within a watched directory was
    written to.
Notes
You might have to install inotify-tools first to make use of the inotifywait command. Check the inotify-tools page on GitHub.

Related

Linux command execution rate limiting

I have a Linux command that can be called by another application multiple times (in quick succession) with different parameters. The problem is that if the command gets executed in too quick a succession, the function that it performs will not work properly.
What I'm looking for is some simple way to ensure that each call to the command is properly delayed/spaced (by a couple of milliseconds) from the others.
The order of execution does not matter in this case, and I have no control over how the application makes the calls.
Edit: The command being called is used to transmit an RF signal on a Raspberry Pi. As such, command execution must be exclusive (no concurrency), with an additional delay between executions to prevent the receivers from misreading the signals.
For anyone with the same problem, this worked for me: https://unix.stackexchange.com/questions/408934/how-to-serialize-command-execution-on-linux
CMD="<some command> && sleep <some delay in seconds>"
flock /tmp/some_lockfile $CMD
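To make that concrete, a hedged sketch of a wrapper script (the command name rf_send, the lock path, and the delay are assumptions, not from the question):

#!/bin/bash
# transmit.sh - serialize RF transmissions and space them out.
# flock blocks until the lock is free, so concurrent callers queue up;
# the sleep inside the locked section enforces the gap between transmissions.
flock /tmp/rf_transmit.lock -c "rf_send '$1' && sleep 0.05"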
For simple concurrency control, which will limit concurrent execution to N instances, consider the following while loop (modify as needed).
Note that the script must be invoked as /path/to/script.sh so that pgrep can find the other instances by name. Starting it with 'bash /path/to/script.sh' will require changes!
#!/bin/bash
echo "START $$"
# Process name, used by pgrep to count running instances
ME=${0##*/}
# Max number of instances
N=5
# Sleep while there are more than N instances
while [[ "$(pgrep -c -x "$ME")" -gt "$N" ]] ; do echo Waiting ... ; sleep 1 ; done
# Execute the job (placeholder: sleep for the duration passed as an argument)
sleep "$@"
echo "Done $$"

Delaying not preventing Bash function from simultaneous execution

I need to prevent simultaneous calls to the highCpuFunction function. I have tried to create a blocking mechanism, but it is not working. How can I do this?
nameOftheScript="$(basename $0)"
pidOftheScript="$$"

highCpuFunction()
{
    # Function with code causing high CPU usage. Like tar, zip, etc.
    while [ -f /tmp/"$nameOftheScript"* ];
    do
        sleep 5;
    done
    touch /tmp/"$nameOftheScript"_"$pidOftheScript"
    echo "$(date +%s) I am a bad function; you do not want to call me simultaneously..."
    # Real high CPU usage code for reaching the database and
    # parsing logs. It takes the heck out of the CPU.
    rm -rf /tmp/"$nameOftheScript"_"$pidOftheScript" 2>/dev/null
}

while true
do
    sleep 2
    highCpuFunction
done
# The rest of the code...
# The rest of the code...
In short, I want highCpuFunction to run with a gap of at least 5 seconds, regardless of the instance/user/terminal. I need to allow other users to run this function, but in proper sequence and with a gap of at least 5 seconds.
Use the flock tool. Consider this code (let's call it 'onlyoneofme.sh'):
#!/bin/sh
exec 9>/var/lock/myexclusivelock
flock 9
echo start
sleep 10
echo stop
It will open the file /var/lock/myexclusivelock as descriptor 9 and then try to lock it exclusively. Only one instance of the script will be allowed past the flock 9 command. The rest of them will wait for the other script to finish (so that the descriptor is closed and the lock freed). After this, the next script will acquire the lock and execute, and so on.
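If you would rather have the extra instances give up instead of queueing, flock also has a non-blocking mode (-n) that fails immediately when the lock is already taken:

exec 9>/var/lock/myexclusivelock
flock -n 9 || { echo "another instance is already running" >&2; exit 1; }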
In the following solution, the # rest of the script part can be executed by only one process at a time. The test-and-set is atomic and there is no race condition, whereas with a separate test -f file followed by touch file, two processes could both touch the file.
try_acquire_lock() {
    local lock_file=$1
    # Noclobber option to fail if the file already exists,
    # in a sub-shell to avoid modifying current shell options
    ( set -o noclobber; : >"$lock_file")
}

# All instances must share the same lock file, so don't put the PID in its name
lock_file=/tmp/"$nameOftheScript".lock
while ! try_acquire_lock "$lock_file";
do
    echo "failed to acquire lock, sleeping 5sec.."
    sleep 5;
done
# Trap to remove the file when the process exits
# (set only once the lock is held, so we never delete another process's lock)
trap 'rm "$lock_file"' EXIT

# The rest of the script
It's not optimal, because the wait is done in a loop with sleep. To improve on it, one can use inter-process communication (a FIFO), or operating system notifications or signals, for example:
# Block current shell process
kill -STOP $BASHPID
# Unblock blocked shell process (where <pid> is the id of the blocked process)
kill -CONT <pid>

How can I make a ksh script terminate itself if there are any issues?

I have written a few ksh scripts, about 6 scripts.
These are written to handle huge data files, something like 207 MB big. While running, a script sometimes gets stuck and does not end.
Human intervention is then required.
In the production environment, I want it to run automatically, and it should be able to end automatically if there are any issues, without the need for human intervention.
If there are issues with a file, the script should end and start executing the next file.
How can I make it terminate itself if it gets stuck?
I assume that the only way you see the issues is that the script takes too long. In that case, a simple script that kills the process after a timeout should be sufficient:
#!/bin/bash
# Killerscript
PID=$1
TIME=$2

typeset -i i
i=0
while [ $i -lt $TIME ] ; do
    if ps "$PID" > /dev/null ; then
        i=$i+1
        sleep 1
    else
        exit 0
    fi
done

kill "$PID"
Your workflow would then be something like:
#!/bin/bash
process_1 &
killerscript $! 60
process_2 &
killerscript $! 30
...
If you have other ways to detect issues in your processes, you can easily add them to the loop in your killerscript.
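As an aside, if GNU coreutils is available on your system, the timeout(1) utility already provides this kill-after-N-seconds behavior, so for sequential jobs the killerscript can be replaced with, e.g.:

timeout 60 process_1
timeout 30 process_2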

How to kill a process on no output for some period of time

I've written a program that is supposed to run for a long time, and it outputs its progress to stdout. However, under some circumstances it begins to hang, and the easiest thing to do is to restart it.
My question is: Is there a way to do something that would kill the process only if it had no output for a specific number of seconds?
I have started thinking about it, and the only thing that comes to mind is something like this:
./application > output.log &
tail -f output.log
then create a script which would look at the date and time of the last modification of output.log and restart the whole thing.
But it looks very tedious, and I would hate to go through all that if there is an existing command for it.
As far as I know, there isn't a standard utility to do this, but a good start for a one-liner would be:
timeout=10; if [ -z "$(find output.log -newermt @$(( $(date +%s) - ${timeout} )))" ]; then killall -TERM application; fi
At least, this will avoid the tedious part of coding a more complex script.
Some hints:
Using the find utility to compare the last modification date of the output.log file against a time reference.
The time reference is returned by the date utility as the current time in seconds (+%s) since the EPOCH (1970-01-01 UTC); the @ prefix tells find's -newermt to parse it as an epoch timestamp.
Using bash arithmetic expansion, $(( )), to subtract the $timeout value (10 seconds in the example).
If no output is returned by the find above, the file wasn't changed for more than 10 seconds. This makes the if condition true, and the killall command is executed.
You can also set an alias for that, using:
alias kill_application='timeout=10; if [ -z "$(find output.log -newermt @$(( $(date +%s) - ${timeout} )))" ]; then killall -TERM application; fi'
And then use it whenever you want by just issuing the kill_application command.
If you want to automatically restart the application without human intervention, you can install a crontab entry that runs every minute or so and issues the application restart command after the killall. (You may also want to change -TERM to -KILL, in case the application becomes unresponsive to catchable signals.)
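A hedged sketch of that crontab approach (the paths, the application name, and the restart command are assumptions):

* * * * * /usr/local/bin/watchdog_application.sh

where watchdog_application.sh is something like:

#!/bin/bash
# watchdog_application.sh - kill and restart 'application' when output.log is stale
timeout=10
if [ -z "$(find /path/to/output.log -newermt @$(( $(date +%s) - timeout )))" ]; then
    killall -KILL application
    /path/to/application > /path/to/output.log &
fi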
inotifywait could help here: it waits efficiently for changes to files. Its exit status can be checked to identify whether the event (modify) occurred within the specified interval of time.
$ inotifywait -e modify -t 10 output.log
Setting up watches.
Watches established.
$ echo $?
2
Some related info from the man page:

OPTIONS
    -e <event>, --event <event>
        Listen for specific event(s) only.
    -t <seconds>, --timeout <seconds>
        Exit if an appropriate event has not occurred within <seconds> seconds.

EXIT STATUS
    2   The -t option was used and an event did not occur in the specified
        interval of time.

EVENTS
    modify
        A watched file or a file within a watched directory was written to.
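Tying this back to the question, a hedged sketch of a restart loop built on that exit status (the application name and log path are taken from the question; the 10-second threshold is an example):

#!/bin/bash
# Restart ./application whenever its output goes quiet for 10 seconds
while true; do
    ./application > output.log &
    app_pid=$!
    # Inner loop keeps running as long as modify events arrive within 10s;
    # on timeout, inotifywait exits with status 2 and the loop ends
    while inotifywait -q -t 10 -e modify output.log >/dev/null; do
        :
    done
    echo "no output for 10 seconds, restarting" >&2
    kill "$app_pid"
    wait "$app_pid" 2>/dev/null
done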

defer pipe process to background after text match

So I have a bash command that starts a server, and it outputs some lines before getting to the point where it prints something like "Server started, Press Control+C to exit". How do I pipe this output so that when this line occurs I can put the server process in the background and continue with another script/function (i.e. to do stuff that needs to wait until the server starts, such as running tests)?
I want to end up with 3 functions:
start_server
run_tests
stop_server
I've got something along the lines of:
function read_server_output {
    while read data; do
        printf '%s\n' "$data"
        if [[ $data == "Server started, Press Control+C to exit" ]]; then
            : # do something here to put server process in the background
              # so I can run another function
        fi
    done
}

function start_server {
    # start the server and pipe its output to another function to check it's running
    start-server-command | read_server_output
}

function run_tests {
    # do some stuff
    :
}

function stop_server {
    # stop the server
    :
}

# run the bash script code
start_server
run_tests
stop_server
A possibly related question: SH/BASH - Scan a log file until some text occurs, then exit. How?
Thanks in advance, I'm pretty new to this.
First, a note on terminology...
"Background" and "foreground" are controlling-terminal concepts, i.e., they have to do with what happens when you type ctrl+C, ctrl+Z, etc. (which process gets the signal), whether a process can read from the terminal device (a "background" process gets a SIGTTIN that by default causes it to stop), and so on.
It seems clear that this has little to do with what you want to achieve. Instead, you have an ill-behaved program (or suite of programs) that needs some special coddling: when the server is first started, it needs some hand-holding up to some point, after which it's OK. The hand-holding can stop once it outputs some text string (see your related question for that, or the technique below).
There's a big potential problem here: a lot of programs, when their output is redirected to a pipe or file, produce no output until they have printed a "block" worth of output, or are exiting. If this is the case, a simple:
start-server-command | cat
won't print the line you're looking for (so that's a quick way to tell whether you will have to work around this issue as well). If so, you'll need something like expect, which is an entirely different way to achieve what you want.
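Before reaching for expect, it may be worth trying coreutils stdbuf, which can force line-buffered output for many (though not all) programs:

stdbuf -oL start-server-command | cat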
Assuming that's not a problem, though, let's try an entirely-in-shell approach.
What you need is to run the start-server-command and save the process-ID so that you can (eventually) send it a SIGINT signal (as ctrl+C would if the process were "in the foreground", but you're doing this from a script, not from a controlling terminal, so there's no key the script can press). Fortunately sh has a syntax just for this.
First let's make a temporary file:
#! /bin/sh
# myscript - script to run server, check for startup, then run tests
TMPFILE=$(mktemp -t myscript.XXXXXX) || exit 1 # create /tmp/myscript.<unique>
trap "rm -f $TMPFILE" 0 1 2 3 15 # arrange to clean up when done
Now start the server and save its PID:
start-server-command > $TMPFILE & # start server, save output in file
SERVER_PID=$! # and save its PID so we can end it
trap "kill -INT $SERVER_PID; rm -f $TMPFILE" 0 1 2 3 15 # adjust cleanup
Now you'll want to scan through $TMPFILE until the desired output appears, as in the other question. Because this requires a certain amount of polling you should insert a delay. It's also probably wise to check whether the server has failed and terminated without ever getting to the "started" point.
while ! grep '^Server started, Press Control+C to exit$' "$TMPFILE" >/dev/null; do
    # message has not yet appeared; is the server still starting?
    if kill -0 "$SERVER_PID" 2>/dev/null; then
        # server is running; let's wait a bit and try grepping again
        sleep 1 # or other delay interval
    else
        echo "ERROR: server terminated without starting properly" 1>&2
        exit 1
    fi
done
(Here kill -0 is used to test whether the process still exists; if not, it has exited. The "cleanup" kill -INT will produce an error message, but that's probably OK. If not, either redirect that kill command's error-output, or adjust the cleanup or do it manually, as seen below.)
At this point, the server is running and you can do your tests. When you want it to exit as if the user hit ctrl+C, send it a SIGINT with kill -INT.
Since there's a kill -INT in the trap set for when the script exits (0) as well as when it's terminated by SIGHUP (1), SIGINT (2), SIGQUIT (3), and SIGTERM (15)—that's the:
trap "do some stuff" 0 1 2 3 15
part—you can simply let your script exit at this point, unless you want to specifically wait for the server to exit too. If you want that, perhaps:
kill -INT $SERVER_PID; rm -f $TMPFILE # do the pre-arranged cleanup now
trap - 0 1 2 3 15 # don't need it arranged anymore
wait $SERVER_PID # wait for server to finish exit
would be appropriate.
(Obviously none of the above is tested, but that's the general framework.)
Probably the easiest thing to do is to start it in the background and block on reading its output. Something like:
{ start-server-command & } | {
    while read -r line; do
        echo "$line"
        echo "$line" | grep -q 'Server started' && break
    done
    cat &
}
echo script continues here after server outputs 'Server started' message
But this is a pretty ugly hack. It would be better if the server could be modified to perform a more specific action which the script could wait for.
