While true fetching inputs from list in Bash - linux

I'm looking for a while-loop alternative that retrieves its inputs from a file. I failed to combine this with a while true condition: in that case I'm unable to feed the variables from a file (e.g. with cat filename).
My requirement is to check every 5 minutes whether 100 servers are up or not via a ping test, and to log the results to a custom location (let's say /tmp/output/out-`date +%F`). The same script should also delete those logs after 7 days, and no log file should exceed 1MB in size.
while read IP
do
ping -c1 $IP &> /dev/null && echo $IP is up || echo $IP is down
sleep 2
done < IP
Note: when I run this script as an infinite loop, I'm unable to read the input variables from the file:
while true
do
...
done < filename
Ideas appreciated :)

Multiple tasks run in parallel:
#!/bin/bash
t=5m # time interval
p=() # pid list
_pingAndLog(){ # $1 is the server ip list
local ip
while :; do
while read -r ip || [[ -n $ip ]]; do
if ping -c1 $ip >/dev/null 2>&1; then
echo "`date +%H:%M`: $ip is up"
else
echo "`date +%H:%M`: $ip is down"
fi >>pingtest-`date +%F`.log
done <"$1" # $1 = ip list
sleep "$t"
done
}
_killOldLog(){
while :; do
# use "-mtime +7" to find old files, 7 days
find . -type f -mtime +7 -name 'pingtest-*\.log' -delete
sleep 24h
done
}
_cleanUp(){
echo kill ${p[@]}
kill ${p[@]}
}
for s in *\.list; do # for each file ip list
[[ "$s" = '*.list' ]] && break # no file found, then quit
_pingAndLog "$s" & p+=($!) # run in background, remember pid
done
_killOldLog & p+=($!)
trap _cleanUp 0 2 3 # 0-exit; 2-interrupt, 3-quit
wait # wait for background jobs; Ctrl-C to exit
Note:
Because the logfiles are separated by date, there may be no need to check their sizes; just delete those that are old.
while sleep "$t"; do .. done is also OK to construct the endless loop you required; see the sketch below.
I've modified this script so that it can ping multiple lists of IPs in parallel.
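For illustration, a minimal sketch of that alternative loop shape, reusing $t from the script above; note that the first round only starts after one interval has passed:
t=5m
# the sleep in the loop condition drives the endless loop
while sleep "$t"; do
    echo "`date +%H:%M`: starting next ping round" # ping/log work goes here
done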

Related

script that will monitor changes in at least 2 files/directories and execute a third script only when both are modified

I have two sensors that each create an entry in a text file when triggered. Now I need something to monitor these two files (I can also put them in 2 directories if that helps in any way) and trigger a third script only when changes occur to both of the aforementioned files/directories. I need real-time (or near real-time) response between the events and the notification. I have found tools like inotifywait, fswatch, entr and some others, but all of these trigger on any single change.
At the moment I'm trying this but it does not work properly:
#!/bin/bash
while inotifywait -e modify /home/user/triggerdir/ ;
do
if [ "inotifywait -e modify /home/user/triggerdir2/" ];
then
echo Alert | mail -n -s "test-notify SCRIPT HUZAAAA" user@gmail.com
else
# Don't do anything unless we've found one of those
:
fi
done
I have looked for similar issues/solutions on the web, the closest would be this but it has no working answer.
Since you're having trouble with that, you might consider a simplistic approach.
Rather than a loop, I'd put your script in the crontab. Run it every day, every hour, every minute, whatever you need. If you need it more often than that, you could loop, but make sure you sleep at least a second to be nice to the CPU.
If a minute or more between event and notification is OK, this should be all you need:
#!/bin/bash
key=/some/safe/path/.hidden_filename
[[ -e "$key" ]] || touch "$key" # make sure it exists
if [[ file1 -nt "$key" && file2 -nt "$key" ]]; then
mail -n -s "test-notify SCRIPT HUZAAAA" user@gmail.com <<< "Alert!"
touch "$key"
fi
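For example, a crontab entry (hypothetical script path) that runs the check every minute:
* * * * * /some/safe/path/check_both.sh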
I have hacked something together which does work as I need it to, though the coding is terrible (it probably shouldn't even be called that).
3 scripts involved:
script 1:
#!/bin/bash
count=0
while :
do
{ inotifywait -e modify /home/user/triggerdir/ && let count="$count + 1"; } || exit 1
if [ "$count" -eq "2" ]; then
echo Alert | mail -n -s "Huzzah" user@gmail.com
/home/user/trigger2.sh &&
killall trigger.sh inotifywait
fi
done
script 2:
#!/bin/bash
count=0
while :
do
{ inotifywait -e modify /home/user/triggerdir/ && let count="$count + 1"; } || exit 1
if [ "$count" -eq "2" ]; then
echo Alert | mail -n -s "Huzzah" user@gmail.com
/home/user/trigger.sh &&
killall trigger2.sh #Do something.
# count=-250
fi
done
As the two scripts spawn bash/inotifywait processes, I run a cronjob once every 24 hours to kill those, using this script 3:
#!/bin/bash
killall trigger2.sh trigger.sh inotifywait bash
any help to improve is welcome, thanks :)

How to timeout a tail pipeline properly on shell

I am implementing a monitor_log function which tails the most recent line of a running log and checks for a required string in a while loop. The timeout logic should be: once the tail has been running for over 300 seconds, the tail and the while-loop pipeline must be closed.
The big issue I found is that on some servers the running log does NOT keep growing, which means tail -n 1 -f "running.log" also produces no output for the while loop to consume, so the timeout check if [[ $(($SECONDS - start_timer)) -gt 300 ]] is never reached.
E.g. I set 300 seconds as the timeout, but if running.log stops producing new lines before the 300 seconds are up and then stays silent for 30 minutes, tail produces no new output for 30 minutes, so the timeout check in the while loop is not hit for 30 minutes; even after 300 seconds it keeps tailing and does not break out. And if no new line ever arrives in running.log, the timeout check is never hit at all.
function monitor_log() {
if [[ -f "running.log" ]]; then
# Timer start
start_timer=$SECONDS
# Tail the running log last line and keep check required string
tail -n 1 -f "running.log" | while read tail_line
do
if [[ $(($SECONDS - start_timer)) -gt 300 ]]; then
break;
fi
if [[ "$tail_line" == "required string" ]]; then
capture_flag=1
fi
if [[ $capture_flag -eq 1 ]]; then
break;
fi
done
fi
}
Could you help figure out the proper way to time out the tail and the while loop after 300 seconds? Thank you.
Two options worth considering for inactivity timeout. Usually, option #1 works better.
Option 1: Use timeout (read -t timeout).
It will cap the 'read' time. See the information from the bash man page below. The timeout will cause the read to fail, breaking the while loop.
In the code above, replace
tail -n 1 -f "running.log" | while read tail_line
with
tail -n 1 -f "running.log" | while read -t 300 tail_line
Option 2: TMOUT envvar
It's possible to get the same effect by setting the TMOUT environment variable.
From bash man - 'read' command:
-t timeout
Cause read to time out and return failure if a complete line of input (or a specified number of characters) is not read within timeout seconds. timeout may be a decimal number with a fractional portion following the decimal point. This option is only effective if read is reading input from a terminal, pipe, or other special file; it has no effect when reading from regular files. If read times out, read saves any partial input read into the specified variable name. If timeout is 0, read returns immediately, without trying to read any data. The exit status is 0 if input is available on the specified file descriptor, non-zero otherwise. The exit status is greater than 128 if the timeout is exceeded.
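For completeness, a minimal sketch of option 2: setting TMOUT makes it the default timeout for read, so this is assumed to behave like read -t 300, with the same caveats:
TMOUT=300 # default read timeout, equivalent to read -t 300
tail -n 1 -f "running.log" | while read tail_line; do
    [[ "$tail_line" == "required string" ]] && break
done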
Based on dash-o's answer I tested option 1. The -t option for the read command works fine only when the while read loop runs in the main shell and the tail in a subshell. In my question the tail is in the main shell and the while read loop consumes its output in a subshell; in that arrangement, even with -t set on read, the script does not stop when the time is used up. Refer to
Monitoring a file until a string is found, Bash tail -f with while-read and pipe hangs and How to [constantly] read the last line of a file?
The working code based on dash-o's solution is below:
function monitor_log() {
if [[ -f "running.log" ]]; then
# Tail the running log last line and keep check required string
while read -t 300 tail_line
do
if [[ "$tail_line" == "required string" ]]; then
capture_flag=1
fi
if [[ $capture_flag -eq 1 ]]; then
break;
fi
done < <(tail -n 1 -f "running.log")
# Silently kill the remaining tail process with SIGPIPE; pkill -f matches
# the full command line, avoiding a fragile ps | grep | cut chain
pkill -13 -f 'tail -n 1 -f running.log'
fi
}
But note: as tested, when this function terminates on timeout it would otherwise leave the tail process alive (the PID can be observed with ps -ef on the console), which is why the leftover tail has to be killed separately, as above.
I also tested another solution: keep the tail and the while read loop in their original positions, so tail stays in the main shell and the while read loop stays in the subshell after the | pipeline; the only change is adding GNU's timeout command before the tail command. It works perfectly, and no tail process is left behind after the timeout terminates it:
function monitor_log() {
if [[ -f "running.log" ]]; then
# Tail the running log last line and keep check required string
timeout 300 tail -n 1 -f "running.log" | while read tail_line
do
if [[ "$tail_line" == "required string" ]]; then
capture_flag=1
fi
if [[ $capture_flag -eq 1 ]]; then
break;
fi
done
fi
}
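A quick way to confirm that no stray tail survives the function (using pgrep -a from procps, assumed available):
monitor_log
pgrep -af 'tail -n 1 -f running.log' && echo "tail still running" || echo "no tail left behind"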

Linux shell (sh) CLI test if ping successful

How do I wire up a Linux shell (sh) script to test with ping whether a host is reachable?
I guess there could be a solution that uses grep, but maybe ping provides such an option by itself?
I am more interested in whitelisting a successful ping operation that reached the host than in checking why it failed; I don't care about the reason ping did not reach a host.
I would like to limit the number of ping attempts and the maximum time to reach the host, so the script does not wait too long for ping trying to reach a host.
dt=$(date +%d)
cksize=50
echo "Start $(date)"
while IFS= read -r sn
do
echo "*************************************************"
echo "Begin checking NODES client: $sn"
if ping -c 1 "$sn" -i 5 > /dev/null
then
echo "$sn node up"
else
echo "$sn node down"
fi
done < server_list
With GNU parallel you can test all hosts concurrently, each with its own timeout:
parallel -j0 --timeout 15 'ping -c 5 -i 0.2 {} >/dev/null 2>&1 && echo {} up || echo {} down' ::: freenetproject.org debian.org no-such.domain {1..254}.2.3.4
Alternatively, you can do it like this; it will ping all hosts in parallel:
#!/bin/bash
for server in 'google.com' 'github.com' 'fakeserver.com'
do
# -c 1: a single probe; -W 2: wait at most 2s for the reply (Linux iputils flags)
{ ping -c 1 -W 2 "$server" &>/dev/null && echo "$server is UP" || echo "$server is DOWN" ; } &
done
wait
Regards!
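Since the question asks about plain sh, here is a POSIX-compatible sketch of the same check, with no bashisms (the -W reply timeout again assumes Linux iputils ping):
#!/bin/sh
for server in google.com github.com fakeserver.com; do
    if ping -c 1 -W 2 "$server" >/dev/null 2>&1; then
        echo "$server is UP"
    else
        echo "$server is DOWN"
    fi
done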

How to add threading to the bash script?

#!/bin/bash
cat input.txt | while read ips
do
cmd="$(snmpwalk -v2c -c abc#123 $ips sysUpTimeInstance)"
echo "$ips ---> $cmd"
echo "$ips $cmd" >> out_uptime.txt
done
How can I add threading to this bash script? I have around 80,000 inputs and it takes a lot of time.
Simple method. Assuming the order of the output is unimportant, and that snmpwalk's output is of no interest if it should fail, put a && at the end of each of the commands to be backgrounded, except the last command, which should have a & at the end:
#!/bin/bash
while read ips
do
cmd="$(nice snmpwalk -v2c -c abc#123 $ips sysUpTimeInstance)" &&
echo "$ips ---> $cmd" &&
echo "$ips $cmd" >> out_uptime.txt &
done < input.txt
Less simple. If snmpwalk can fail, and that output is also needed, lose the && and surround the code with curly braces, {}, followed by &. To redirect the appended output to include standard error, use &>>:
#!/bin/bash
while read ips
do {
cmd="$(nice snmpwalk -v2c -c abc#123 $ips sysUpTimeInstance)"
echo "$ips ---> $cmd"
echo "$ips $cmd" &>> out_uptime.txt
} &
done < input.txt
The braces can contain more complex if ... then ... else ... fi statements, all of which would be backgrounded.
For those who don't have a complex snmpwalk command to test, here's a similar loop, which prints one through five but sleeps for random durations between echo commands:
for f in {1..5}; do
RANDOM=$f &&
sleep $((RANDOM/6000)) &&
echo $f &
done 2> /dev/null | cat
Output will be the same every time (remove the RANDOM=$f && for varying output), and requires three seconds to run:
2
4
1
3
5
Compare that to code without the &&s and &:
for f in {1..5}; do
RANDOM=$f
sleep $((RANDOM/6000))
echo $f
done 2> /dev/null | cat
When run, the code requires seven seconds, with this output:
1
2
3
4
5
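If tens of thousands of unthrottled background jobs are a concern, a bounded sketch is possible with GNU xargs and its -P option (availability assumed); it runs at most 50 snmpwalks at a time:
# -a: read arguments from input.txt; -I{}: one input line per command; -P50: 50 jobs in parallel
xargs -a input.txt -I{} -P50 \
    sh -c 'echo "{} $(nice snmpwalk -v2c -c abc#123 {} sysUpTimeInstance)"' \
    >> out_uptime.txt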
You can send tasks to the background with &. If you intend to wait for all of them to finish, you can use the wait command:
process_to_background &
echo Processing ...
wait
echo Done
You can get the pid of the given task started in the background if you want to wait for one (or few) specific tasks.
important_process_to_background &
important_pid=$!
for i in {1..10}; do
less_important_process_to_background $i &
done
wait $important_pid
echo Important task finished
wait
echo All tasks finished
One note though: the background processes can mess up the output as they run asynchronously. You might want to use a named pipe to collect their output; see the sketch below.
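A minimal sketch of that idea, reusing the snmpwalk loop from the question (file names assumed): a single reader drains the pipe, and each job writes one whole line, so lines are never interleaved mid-line (single writes up to PIPE_BUF are atomic):
#!/bin/bash
fifo=$(mktemp -u) && mkfifo "$fifo"  # fresh path for the named pipe
cat "$fifo" >> out_uptime.txt &      # single reader serializes all writers
exec 3>"$fifo"                       # keep the pipe open while jobs start
while read -r ips; do
    echo "$ips $(snmpwalk -v2c -c abc#123 "$ips" sysUpTimeInstance)" >&3 &
done < input.txt
exec 3>&-  # close the parent's copy; the jobs still hold theirs
wait       # returns once all jobs and then the reader finish
rm -f "$fifo"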

Bash sizeout script

I very much like the style in which bash handles commands.
I am looking for a native solution that wraps a bash command, tests the size of a result file, and exits in case the file becomes too big.
I am thinking about a command like
sizeout $fileName $maxSize otherBashCommand
It would be useful in a backup script like:
sizeout $fileName $maxSize timeout 600s ionice nice sudo rear mkbackup
To make it one step more complicated, I would call it over ssh:
ssh $remoteuser@$remoteServer sizeout $fileName $maxSize timeout 600s ionice nice sudo rear mkbackup
What kind of design pattern should I use for this?
Solution
I have modified Socowi's code a little
#! /bin/bash
# shell script to stop encapsulated script in the case of
# checked file reaching file size limit
# usage
# sizeout.sh filename filesize[Bytes] encapsulated_command arguments
fileName=$1 # file we are checking
maxSize=$2 # max. file size (in bytes) to stop the pid
shift 2
echo "fileName: $fileName"
echo "maxSize: $maxSize"
function limitReached() {
if [[ ! -f $fileName ]]; then
return 1 # file doesn't exist, return with false
fi
actSize=$(stat --format %s "$fileName")
if [[ $actSize -lt $maxSize ]]; then
return 1 # filesize under maxsize, return with false
fi
return 0
}
# run command as a background job
"$@" &
pid=$!
# monitor file size while job is running
while kill -0 $pid; do
limitReached && kill $pid
sleep 1
done 2> /dev/null
wait $pid # return with the exit code of the $pid
I added wait $pid at the end, which returns the exit code of the background process instead of the script's own exit code.
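Usage matches the header comment; for example (hypothetical backup file and command):
./sizeout.sh /tmp/backup.tar.gz 1048576 tar -czf /tmp/backup.tar.gz /home/user
echo $? # exit code of tar, or 143 (128+SIGTERM) if killed for exceeding 1 MiB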
Monitor the File Size Every n Time Units
I don't know whether there is a design pattern for your problem, but you could write the sizeout script as follows:
#! /bin/bash
filename="$1"
maxsize="$2" # max. file size (in bytes)
shift 2
limitReached() {
[[ -e "$filename" ]] &&
(( "$(stat --printf="%s" "$filename")" >= maxsize ))
}
limitReached && exit 0
# run command as a background job
"$#" &
pid="$!"
# monitor file size while job is running
while kill -0 "$pid"; do
limitReached && kill "$pid"
sleep 0.2
done 2> /dev/null
This script checks the file size every 200ms and kills your command if the file size exceeds the maximum. Since we only check every 200ms, the file may end up with (yourWriteSpeed Bytes/s × 0.2s) more than the specified maximum size.
The following points can be improved:
Validate parameters.
Set a trap to kill the background job in every case, for instance when pressing Ctrl+C (see the sketch below).
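A minimal sketch of such a trap, assuming pid holds the background job's PID as in the script above:
# kill the background job on exit, interrupt, or termination
trap 'kill "$pid" 2>/dev/null' EXIT INT TERM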
Monitor File Changes
The script from above is not very efficient, since we check the file size every 200ms, even if the file does not change at all. inotifywait allows you to wait until the file changes. See this answer for more information.
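A rough sketch of that variant (assuming inotify-tools is installed), re-checking the size only when the file is actually written to:
# wait (up to 5s) for a modification, then re-check size and job status
while kill -0 "$pid" 2>/dev/null; do
    inotifywait -qq -t 5 -e modify "$filename"
    limitReached && kill "$pid"
done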
A Word on SSH
You just need to copy the sizeout script over to your remote server, then you can use it like on your local machine:
ssh $remoteuser@$remoteServer path/to/sizeout filename maxSize ... mkbackup
