I am trying to make a script that outputs a status bar for my window manager. It outputs the normal stuff like the time, date, weather, etc.
One of the strings it outputs is the number of pending updates for the system (Arch Linux). As the "API" I pull the update count from has a maximum number of requests per day, I had to add a delay to the updates() function (which outputs the number of updates) so that the maximum is not exceeded.
Adding this delay is where the problem starts.
Somehow the updates_aur variable is not kept in memory and cannot be accessed unless the delay I implemented is removed. (MORE EXPLANATION IN CODE BELOW)
I would like the delay to work so that updates are checked not on every iteration but once every 60 iterations.
I tried exporting updates_aur and updates_arch as environment variables so that they would persist in memory, but as the script creates a subshell they are not accessible to be updated/retrieved.
updates() {
    if [ "$internet" = true ]; then
        if ! updates_arch=$(checkupdates 2> /dev/null | wc -l); then
            updates_arch=0
        fi

        if (( counter % 60 == 0 )); then # this is done to add a delay and not saturate AUR requests
            if ! updates_aur=$(yay -Qum --devel 2> /dev/null | wc -l); then
                updates_aur=0
            fi
        fi

        updates=$((updates_arch + updates_aur))

        if [ "$updates" -gt 0 ]; then
            echo " Updates: $updates"
        else
            echo " Updates: 0"
        fi
        echo $delim
    fi
}
This is then called in a while loop (the while loop also increments the counter by 1)
Full code: https://github.com/Baitinq/dwm/blob/master/scripts/dwm-status
I expected the updates_aur variable to be updated and stored whenever counter % 60 == 0, but the actual result is that the variable can only be accessed when counter % 60 == 0, as if it weren't being stored and updated at all, but created anew on each iteration in which counter % 60 == 0.
Since you're running your functions in a subshell, for example (from the status function):
echo $(updates)
their variables' values (updates_aur included) are lost when the subshell exits. There's no reason to echo a function that already contains an echo. Just call your function directly:
updates
This occurs in several places with other functions; the command substitution isn't necessary in those spots either.
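A minimal sketch of the fix, assuming a loop structure like the one in the linked script (the sleep interval and the bare while loop are placeholders):
#!/usr/bin/env bash
# Sketch: because updates runs in the current shell (no command
# substitution), updates_aur assigned when counter % 60 == 0 is still
# visible on the other 59 iterations.
counter=0
internet=true
while true; do
    updates                    # defined as in the question; prints the count
    counter=$((counter + 1))
    sleep 1                    # placeholder refresh interval
done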
Related
Scenario:
I have a shell script running on embedded Linux. The script starts an application that needs the state of a variable to be on.
Code:
So I do it like this:
#!/bin/sh

start_my_app=false
wait_for_something=true

while $wait_for_something; do
    wait_for_something=$(cat /some/path/file)
    if [ "$wait_for_something" = "false" ]
    then
        echo Waiting...
    elif [ "$wait_for_something" = "true" ]
    then
        echo The wait has ended
        wait_for_something=false
        start_my_app=true
    fi
done

if [ "$start_my_app" = "true" ]
then
    /usr/bin/MyApp
fi
# End of the script
/some/path/file contains false and is changed to true after a few seconds by another script in a different component. When that happens, wait_for_something in my script becomes true and /usr/bin/MyApp is started.
The problem and hence the question:
But I want to do it in a better way.
I don't want to wait indefinitely in a while loop for the content of /some/path/file to become true at some point.
I want to wait for the content of /some/path/file to become true for only 5 seconds. If /some/path/file does not contain true within 5 seconds, I want to give up, setting start_my_app to false.
How can I achieve this functionality in a shell script on linux?
PS:
My whole script is run in the background by another script
Use the SECONDS variable as a timer.
SECONDS=0
while (( SECONDS < 5 )) && IFS= read -r value < /some/path/file; do
    if [[ $value = true ]]; then
        exec /usr/bin/MyApp
    fi
done
If you never read true from the file, your script will exit after 5 seconds. Otherwise, the script replaces the current shell with MyApp, effectively exiting the while loop.
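If you'd rather keep the start_my_app flag from the question instead of replacing the shell, a sketch along the same lines (the sleep is an addition, to avoid re-reading the file in a tight loop):
start_my_app=false
SECONDS=0
while (( SECONDS < 5 )); do
    IFS= read -r value < /some/path/file
    if [ "$value" = true ]; then
        start_my_app=true
        break
    fi
    sleep 1   # don't hammer the file; 5 reads at most
done

if [ "$start_my_app" = "true" ]; then
    /usr/bin/MyApp
fi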
I have a DB error log file, and it grows continuously.
Now I want to set up error monitoring on that file, run every 5 minutes.
The problem is that I don't want to scan the whole file every 5 minutes (when the monitoring cron job executes), because it may grow very big in the future, and scanning the whole (big) file every 5 minutes would consume considerably more resources.
So I just want to scan only the lines that were inserted/written to the log during the last 5-minute interval.
Each error recorded in the log has a timestamp prepended to it, like below:
180418 23:45:00 [ERROR] mysql got signal 11.
So I want to search for the pattern [ERROR] only in the lines added during the last 5 minutes (not the whole file) and place the output in another file.
Please help me here.
Feel free to ask if you need more clarification on my question.
I'm using RHEL 7 and I'm trying to implement the above monitoring with a bash shell script.
Serializing the Byte Offset
This picks up where the last instance left off: if you run it every 5 minutes, it scans 5 minutes of data.
Note that this implementation knowingly can scan data added during an invocation's run twice. This is a little sloppy, but it's much safer to scan overlapping data twice than to never read it at all, a risk you run if you rely on cron to run your program on schedule (likewise, a sleep can overrun the requested time if the system is busy).
#!/usr/bin/env bash
file=$1; shift                     # first input: filename
grep_opts=( "$@" )                 # remaining inputs: grep options
dir=$(dirname -- "$file")          # extract directory name to use for offset storage
basename=${file##*/}               # pick up file name w/o directory
size_file="$dir/.$basename.size"   # generate filename to use to store offset

if [[ -s $size_file ]]; then       # ...if we already have a file with an offset...
    old_size=$(<"$size_file")      # ...read it from that file
else
    old_size=0                     # ...otherwise start at the front.
fi

new_size=$(stat --format=%s -- "$file") || exit  # figure out current size

if (( new_size < old_size )); then
    old_size=0                     # file was truncated, so we can't trust old_size
elif (( new_size == old_size )); then
    exit 0                         # no new contents, so no point in trying to search
fi

# read starting at old_size and grep only that content
dd iflag=skip_bytes skip="$old_size" if="$file" | grep "${grep_opts[@]}"
pipe_status=( "${PIPESTATUS[@]}" ) # capture both statuses before the next command resets them
grep_retval=${pipe_status[1]}

# if the read failed, don't store an updated offset
(( pipe_status[0] != 0 )) && exit 1

# create a new tempfile to store offset in
tempfile=$(mktemp -- "${size_file}.XXXXXX") || exit

# write to that temporary file...
printf '%s\n' "$new_size" > "$tempfile" || { rm -f "$tempfile"; exit 1; }

# ...and if that write succeeded, rename it over the old offset file.
mv -- "$tempfile" "$size_file" || exit

exit "$grep_retval"
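For instance, assuming the script above were saved as /usr/local/bin/scan-new-log-lines (a hypothetical name and path) and marked executable, a crontab entry along these lines would append only the newly logged errors every 5 minutes:
# hypothetical log and output paths; adjust to your setup
*/5 * * * * /usr/local/bin/scan-new-log-lines /var/log/mysqld.log -F '[ERROR]' >> /var/log/mysqld-errors.log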
Alternate Mode: Bisect For The Timestamp
Note that this can miss content if you're relying on, say, cron to invoke your code every 5 minutes on-the-dot; storing byte offsets can thus be more accurate.
Using the bsearch tool by Ole Tange:
#!/usr/bin/env bash
file=$1; shift
start_date=$(date -d 'now - 5 minutes' '+%y%m%d %H:%M:%S')
byte_offset=$(bsearch --byte-offset "$file" "$start_date")
dd iflag=skip_bytes skip="$byte_offset" if="$file" | grep "$@"
Another approach could be something like this:
DB_FILE="FULL_PATH_TO_YOUR_DB_FILE"
SIZE_FILE="SOME_PATH_OF_YOUR_CHOICE/last_size_db_file"
current_db_size=$(du -b "$DB_FILE" | cut -f 1)

if [[ ! -a "$SIZE_FILE" ]] ; then
    tail --bytes "$current_db_size" "$DB_FILE" > "SOME_PATH_OF_YOUR_CHOICE/log-file_$(date +%Y-%m-%d_%H-%M-%S)"
else
    if [[ $(cat "$SIZE_FILE") -gt $current_db_size ]] ; then
        previously_read_bytes=0    # file shrank (truncated/rotated), so start over
    else
        previously_read_bytes=$(cat "$SIZE_FILE")
    fi
    new_bytes=$((current_db_size - previously_read_bytes))
    tail --bytes "$new_bytes" "$DB_FILE" > "SOME_PATH_OF_YOUR_CHOICE/log-file_$(date +%Y-%m-%d_%H-%M-%S)"
fi

printf '%s' "$current_db_size" > "$SIZE_FILE"
This writes all bytes of DB_FILE not previously written to SOME_PATH_OF_YOUR_CHOICE/log-file_$(date +%Y-%m-%d_%H-%M-%S).
Note that $(date +%Y-%m-%d_%H-%M-%S) will be the full current date at the time the log file is created.
You can make this a script and use cron to execute it every five minutes; something like this:
*/5 * * * * PATH_TO_YOUR_SCRIPT
Here is my approach:
First, read the whole log once so far.
Once you reach the end, collect and read new lines for a timespan (9 seconds in my example, for faster testing, while my dummy server appends to the logfile every 3 seconds).
After the timespan, echo the cache, clear the cache (an array arr), then loop and sleep for some time so that this process doesn't consume all the CPU time.
First, my dummy logfile writer:
#!/bin/bash
#
# dummy logfile writer
#
while true
do
    s=$(( $(date +%s) % 3600 ))
    echo $s server msg
    sleep 3
done >> seconds.log
Started via ./seconds-out.sh &.
Now the more complicated part:
#!/bin/bash
#
# consume a logfile as written so far. Then, collect every new line
# and show it in an interval of $interval
#
interval=9 # 9 seconds
#
printf -v secnow '%(%s)T' -1
start=$(( secnow % (3600*24*365) ))
declare -a arr
init=0
while true
do
    read -r line
    printf -v secnow '%(%s)T' -1
    now=$(( secnow % (3600*24*365) ))
    # consume every line created in the past
    if (( ! init ))
    then
        # assume reading a line takes no longer than a second (rounded to whole seconds)
        while (( ${#line} > 0 && (now - start) < 2 ))
        do
            read -r line
            start=$now
            echo -n "." # for debugging purposes; remove
            printf -v secnow '%(%s)T' -1
            now=$(( secnow % (3600*24*365) ))
        done
        init=1
        echo "init=$init" # for debugging purposes; remove
    # collect new lines, display them every $interval seconds
    else
        if (( ${#line} > 0 ))
        then
            echo -n "-" # for debugging purposes; remove
            arr+=("read: $line \n")
        fi
        if (( (now - start) > interval ))
        then
            echo -e "${arr[@]}"
            arr=()
            start=$now
        fi
    fi
    sleep .1
done < seconds.log
Output, with the logfile generator writing every 3 seconds and having run for some time before read-seconds.sh was started, with debugging output activated:
./read-seconds.sh
.......................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................init=1
---read: 1688 server msg
read: 1691 server msg
read: 1694 server msg
---read: 1697 server msg
read: 1700 server msg
read: 1703 server msg
----read: 1706 server msg
read: 1709 server msg
read: 1712 server msg
read: 1715 server msg
^C
Every dot represents a logfile line from the past, therefore skipped.
Every dash represents a logfile line collected.
Below is a simplified scheme of the script I am writing. The program must take parameters in different ways, so it is finely divided into several functions.
The problem is that the chainloading of the return value from the deeper functions breaks on the trap, where the result is supposed to be checked to show a message.
#! /usr/bin/env bash

check_a_param() {
    [ "$1" = return_ok ] && return 0 || return 3
}

check_params() {
    # This trap should catch negative results from the functions
    # performing actual checks, like check_a_param() below.
    return_trap() {
        local retval=$?
        [ $retval -ne 0 ] && echo 'Bad, bad… Dropping to manual setup.'
        return $retval
    }
    # check_params can be called from different functions, not only
    # setup(). But the other functions don’t care about the return value
    # of check_params().
    [ "${FUNCNAME[1]}" = setup ] \
        && trap "return_trap; got_retval=$?; trap - RETURN; return $got_retval;" RETURN
    check_a_param 'return_bad' || return $?
    # …
    # Here we check other parameters in the same way.
    # …
    echo 'Provided parameters are valid.'
    return 0 # To be sure.
}

ask_for_params() {
    echo 'User sets params manually step by step.'
}

setup() {
    [ "$1" = force_manual ] && local MANUAL=t
    # If the gathered parameters do not pass check_params(),
    # the script shall resort to asking the user to enter them.
    [ ! -v MANUAL ] && {
        check_params \
            && echo "check_params() returned with 0. Not running manual setup." \
            || false
    } || ask_for_params
    # do_the_job
}

setup "$@" # Either empty or ‘force_manual’.
How it should work:
↗ 3 → 3→ trap →3 ↗ || ask_for_params ↘
check_a_param >>> check_params >>> [ ! -v MANUAL ] ↓
↘ 0 → 0→ trap →0 ↘ && ____________ do_the_job
The idea is that if a check fails, its return code forces check_params() to return too, which in its turn would trigger the || ask_for_params branch in setup(). But the trap returns 0:
↗ 3 → 3→ trap →0
check_a_param >>> check_params >>> [ ! -v MANUAL ] &&… >>> do_the_job
↘ 0 → 0→ trap →0
If you try to run the script as is, you should see
Bad, bad… Dropping to manual setup.
check_params() returned with 0. Not running manual setup.
Which means that the bad result triggered the trap(!), but the mother function that set it didn't pass the result along.
In attempt to set a hack I’ve tried
to set retval as a global variable with declare -g retval=$? in return_trap() and use its value in the line setting the trap. The variable is set ([ -v retval ] returns successfully), but… has no value. Funny.
okay, let's put retval=Eeh into check_params(), outside return_trap(), and just set it to $? as a usual parameter. Nope: retval inside the function doesn't set the value of the global variable; it stays 'Eeh'. No, there's no local directive; it should be treated as global by default. If you put test=1 in check_params() and test=3 in check_a_param() and then print it with echo $test at the end of setup(), you should see 3. At least I do. declare -g doesn't make any difference here, as expected.
maybe it's the scope of the function? No, that's not it either. Moving return_trap() out, along with declare -g retval=Eeh, doesn't make any difference.
when the modern software means fall, it's time to resort to good old writing to a file. Let's print retval to /tmp/t with retval=$?; echo $retval >/tmp/t in return_trap() and read it back with
trap "return_trap; trap - RETURN; return $(</tmp/t)" RETURN
Now we can finally see that the last return directive, which reads the number from the file, actually returns 3. But check_params() still returns 0!
++ trap - RETURN
++ return 3
+ retval2=0
+ echo 'check_params() returned with 0. Not running manual setup.'
check_params() returned with 0. Not running manual setup.
If the argument to the trap command is strictly a function name, it returns the original result. The original one, not what return_trap() returns: I've tried incrementing the result and still got 3.
You may also ask, 'Why would you need to unset the trap so much?' It's to avoid another bug, which causes the trap to fire every time, even when check_params() is called from another function. Traps on RETURN are local things; they aren't inherited by other functions unless the debug or trace flags are explicitly set on them, but it looks like functions keep the traps set on them between runs, or bash keeps the traps for them. This trap should only be set when check_params() is called from one specific function, but if the trap is not unset, it keeps firing every time check_a_param() returns a value greater than zero, independently of what's in FUNCNAME[1].
Here I give up, because the only way out I see now is to implement a check on the calling function before each || return $? in check_params(). But it's so ugly it hurts my eyes.
I may only add that $? in the line setting the trap will always expand to 0. So if you, for example, declare a local variable retval in return_trap() and put in code like this to check it:
trap "return_trap; [ -v retval ]; echo $?; trap - RETURN; return $retval" RETURN
it will print 0 regardless of whether retval is actually set or not, but if you use
trap "return_trap; [ -v retval ] && echo set || echo unset; trap - RETURN; return $retval" RETURN
it will print ‘unset’.
GNU bash, version 4.3.39(1)-release (x86_64-pc-linux-gnu)
Funny enough,
trap "return_trap; trap - RETURN" RETURN
simply works.
[ ! -v MANUAL ] && {
    check_params; retval2=$?
    [ $retval2 -eq 0 ] \
        && echo "check_params() returned with 0. Not running manual setup." \
        || false
} || ask_for_params
And here’s the trace.
+ check_a_param return_bad
+ '[' return_bad = return_ok ']'
+ return 3
+ return 3
++ return_trap
++ local retval=3
++ echo 3
++ '[' 3 -ne 0 ']'
++ echo 'Bad, bad… Dropping to manual setup.'
Bad, bad… Dropping to manual setup.
++ return 3
++ trap - RETURN
+ retval2=3
+ '[' 3 -eq 0 ']'
+ false
+ ask_for_params
+ echo 'User sets params manually step by step.'
User sets params manually step by step.
So the answer is simple: do not try to overwrite the result in the line passed to the trap command. Bash handles everything for you.
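A minimal demonstration of that conclusion (a made-up function, just to show the mechanics):
# The RETURN trap only inspects $? and unsets itself; bash then
# restores the function's own return status for the caller.
f() {
    trap 'echo "trap saw status $?"; trap - RETURN' RETURN
    return 3
}
f
echo "caller saw status $?"   # trap prints "trap saw status 3"; this prints 3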
I am using Konsole on kubuntu 14.04.
I want to take arguments to this shell script and pass them to a command. The code is basically an infinite loop, and I want one of the arguments to the inner command to be increased once every 3 iterations of the loop. Ignoring the actual details, here's the gist of my code:
#!/bin/bash
ct=0
begin=$1
while :
do
    echo "give: $begin as argument to the command"
    #actual command
    ct=$((ct+1))
    if [ $ct%3==0 ]; then
        begin=$(($begin+1))
    fi
done
I am expecting the begin variable to be increased every 3 iterations, but it is increasing on every iteration of the loop. What am I doing wrong?
You want to test with
if [ $(expr $ct % 3) = 0 ]; then ...
because this
[ $ct%3==0 ]
tests whether the string $ct%3==0, after parameter substitution, is non-empty. A good way to understand this is to read the manual for test and look at the semantics when it is given 1, 2, 3 or more arguments. In your original script it only sees one argument; in mine it sees three. Whitespace is very important in the shell. :-)
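To see the one-argument behaviour concretely (ct=4 is an arbitrary example value):
ct=4
[ $ct%3==0 ]; echo $?               # 0 (true): one non-empty string, "4%3==0"
[ "$(expr $ct % 3)" = 0 ]; echo $?  # 1 (false): a real three-argument comparison, 1 = 0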
In bash you can use ((...)) throughout and refactor your script like this:
#!/bin/bash
ct=0
begin="$1"
while :
do
    echo "give: $begin as argument to the command"
    #actual command
    (( ct++ % 3 == 0 )) && (( begin++ ))
done
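One subtlety worth noting: with the post-increment, the bump happens on the first pass and then on every third pass after that, whereas the original placement bumped on the third pass first. A quick demo of the pattern:
ct=0
for i in 1 2 3 4 5 6 7; do
    (( ct++ % 3 == 0 )) && echo "bump on iteration $i"
done
# prints: bump on iteration 1, 4 and 7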
I know how to check the status of the previously executed command using $?, and we can set that status using the exit command. But loops in bash always return status 0. Is there any way I can break the loop with some status?
#!/bin/bash
while true
do
    if [ -f "/test" ] ; then
        break ### Here I would like to exit with some status
    fi
done
echo $? ## Here I want to check the status.
The status of the loop is the status of the last command that executes. You can use break to break out of the loop, but if the break is successful, then the status of the loop will be 0. However, you can use a subshell and exit instead of breaking. In other words:
for i in foo bar; do echo $i; false; break; done; echo $? # The loop succeeds
( for i in foo bar; do echo $i; false; exit; done ); echo $? # The loop fails
You could also put the loop in a function and return a value from it, e.g.:
in() { local c="$1"; shift; for i; do test "$i" = "$c" && return 0; done; return 1; }
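Usage of that helper might look like this (the first argument is the needle, the rest the haystack):
in foo bar baz foo; echo $?   # 0: "foo" occurs among the remaining arguments
in qux bar baz;     echo $?   # 1: "qux" does not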
Something like this?
while true; do
    case $RANDOM in *0) exit 27 ;; esac
done
Or like this?
rc=0
for file in *; do
    grep fnord "$file" || rc=$?
done
exit $rc
The real question is to decide whether the exit code of the loop should be success or failure if one iteration fails. There are scenarios where one makes more sense than the other, and others where it's not at all clear-cut.
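For contrast, a fail-fast sketch under the opposite policy, stopping at the first failing iteration and propagating its status (a bare exit exits with the status of the last command, here grep's):
for file in *; do
    grep fnord "$file" || exit
done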
The bash manual says:
while list-1; do list-2; done
until list-1; do list-2; done
[..]The exit status of the while and until commands is the exit status
of the last command executed in list-2, or zero if none was executed.[..]
The last command that is executed inside the loop is break. And the exit value of break is 0 (see: help break).
This is why your program keeps exiting with 0.
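You can see this directly:
while true; do false; break; done
echo $?   # 0: break was the last command executed in the loop, and it succeeded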
The break builtin in bash does allow you to accomplish what you're after: break with a negative value, and the status returned in $? will be 1:
while true
do
    if [ -f "./test" ] ; then
        break -1
    fi
done
echo $? ## You'll get 1 here.
Note, this is documented in the help for the break builtin:
help break
break: break [n]
    Exit for, while, or until loops.

    Exit a FOR, WHILE or UNTIL loop.  If N is specified, break N enclosing
    loops.

    Exit Status:
    The exit status is 0 unless N is not greater than or equal to 1.
You can break out of n enclosing loops, or pass a negative value to break with a non-zero return status, i.e. 1.
I agree with @hagello about adding a sleep; one option is to restructure the loop:
#!/bin/bash
timeout=120
waittime=0
sleepinterval=3
until [[ -f "./test" || $waittime -eq $timeout ]]
do
    sleep "$sleepinterval"
    waittime=$((waittime + sleepinterval))
    echo "waittime is $waittime"
done
if [ $waittime -lt $sleepinterval ]; then
    echo "file already exists"
elif [ $waittime -lt $timeout ]; then
    echo "waited between $((waittime - sleepinterval)) and $waittime seconds for this to finish..."
else
    echo "operation timed out..."
fi
I think what you should be asking is: how can I wait until a file or a directory (/test) gets created by another process?
What you are doing up to now is polling at full power: your loop will use up to 100% of the processing power of one core. The keyword is "polling", which is ethically wrong by the standards of computer scientists.
There are two remedies:
insert a sleep statement in your loop; advantage: very simple; disadvantage: the delay is an arbitrary trade-off between CPU load and responsiveness ("arbitrary" is ethically wrong, too).
use a notification mechanism like inotify (see: man inotify); advantage: no CPU load, great responsiveness, no delays, no arbitrary constants in your code; disadvantage: inotify is a kernel API, so you need some code to access it: inotify-tools or some C/Perl/Python code. Have a look at inotify and bash! A sketch follows below.
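A minimal sketch with inotifywait from the inotify-tools package (assuming, as in the question, that the file appears in the root directory, and reusing the 5-second timeout from the earlier answer):
#!/bin/bash
# Block until something is created in / (or the 5-second timeout hits),
# then re-check for the file we actually care about.
# -q quiets chatter, -t sets the timeout, -e names the event to wait for.
while [ ! -f /test ]; do
    inotifywait -q -t 5 -e create / || break   # non-zero status: timeout or error
done

if [ -f /test ]; then
    echo "/test exists"
else
    echo "gave up waiting for /test"
fi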
I would like to submit an alternative solution which is simpler and, I think, more elegant:
(while true
do
    if [ -f "test" ] ; then
        break
    fi
done
exit 1
)
echo "Exit status is: $?"
Of course, if this is part of a script, you could use return 1 instead of exit 1.
Git 2.27 (Q2 2020) offers a good illustration of the exit status in a loop, here within the context of aborting a failing test early (e.g. by exiting a loop), which is to say "return 1".
See commit 7cc112d (27 Mar 2020) by Junio C Hamano (gitster).
(Merged by Junio C Hamano -- gitster -- in commit b07c721, 28 Apr 2020)
t/README: suggest how to leave test early with failure
Helped-by: Jeff King
Over time, we added support to our test framework to make it easy to leave a test early with failure, but this was not clearly documented in t/README to help developers writing new tests.
The documentation now includes:
Be careful when you loop
You may need to verify multiple things in a loop, but the following does not work correctly:
test_expect_success 'test three things' '
    for i in one two three
    do
        test_something "$i"
    done &&
    test_something_else
'
Because the status of the loop itself is the exit status of the test_something in the last round, the loop does not fail when "test_something" for "one" or "two" fails.
This is not what you want.
Instead, you can break out of the loop immediately when you see a failure.
Because all test_expect_* snippets are executed inside a function, "return 1" can be used to fail the test immediately upon a failure:
test_expect_success 'test three things' '
    for i in one two three
    do
        test_something "$i" || return 1
    done &&
    test_something_else
'
Note that we still &&-chain the loop to propagate failures from earlier commands.
Use an artificial exit code 🙂
Before breaking the loop, set a variable, then check that variable as the status of the loop, like this:
while true; do
    if [ -f "/test" ] ; then
        broken=1 && break
    fi
done
echo $broken   # check the status with [[ -n $broken ]]