How to use one environment variable when calling a bash script repeatedly - linux

I have a task to monitor the system against a quota: if the monitored result exceeds the quota, a warning email is sent. This monitor program is called once every half hour, and once a warning email has been sent, the next run should not send the same email again if the monitored state is still the same as last time.
To do this, I would like to use an environment variable to store the state of the last monitored result, so that the next run can check it and avoid sending a duplicate email. One of my ideas is to add or update an export line in .bashrc, but to activate the updated export I would have to start a new bash, which seems unnecessary.
So I would like to ask: is there any way to update the environment variable so that every time the monitor Bash script is called, it gets the freshly updated value?

This is a self-contained solution using a heredoc. At first glance it may seem an elaborate and imperfect solution, but it has its uses: it is resilient, it works well when deploying across more than one machine, it requires no special monitoring of or permissions on external files, and most importantly, there are no unwanted surprises with the environment.
This example uses bash, but it will work with sh if the $thisfile variable is set manually or by some other means.
This example assumes that 20 is already in the script file as mymonitorval, and uses argument $1 as a proof of concept. You would obviously change newvalue="$1" to whatever calculates the quota:
Example usage:
#bash $>./script 10
Value from previous run was 20
Value from this test was 10
Updating value ...
#bash $>./script 10
not doing anything ... result is the same as last time
#bash $>
Script:
#!/bin/bash
thisdir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" ; thisfile="${thisdir}/$(basename "${BASH_SOURCE[0]}")"
read -d '' currentvalue <<'EOF'
mymonitorval=20
EOF
eval "$currentvalue"
function changeval () {
    # Rewrite the stored value inside this script file in place
    sed -E -i "s/^(mymonitorval=).*/\1$1/" "$thisfile"
}
newvalue="$1"
if [[ "$newvalue" != "$mymonitorval" ]]; then
    echo "Value from previous run was $mymonitorval"
    echo "Value from this test was $1"
    echo "Updating value ..."
    changeval "$newvalue"
else
    echo "not doing anything ... result is the same as last time"
fi
Explanation:
thisfile= can be set manually to the script's location. This example uses the automated solution from here: https://stackoverflow.com/a/246128
read -d...EOF is the heredoc which is saved into variable $currentvalue
eval "$currentvalue" in this case is the equivalent of typing mymonitorval=20 into a terminal
function changeval...} updates the contents of the heredoc in place (it changes the physical .sh script file)
newvalue="$1" is there for testing purposes; $newvalue would normally be determined by whatever part of your script calculates the quota
The if... block performs one of two alternate sets of actions depending on whether $newvalue is the same as it was last time or not.

Store the environment variable in a separate file and then source that file from the monitor script.
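A minimal sketch of that approach (the file name, variable name, and the placeholder check are illustrative, not from the original answer):
#!/bin/bash
# monitor.sh - keep state between runs in a small file that is sourced each time
statefile="$HOME/.monitor_state"

# Load the previous state if present; last_state stays empty on the first run
[ -f "$statefile" ] && source "$statefile"

current_state="over_quota"    # placeholder: set by whatever performs the real check

if [ "$current_state" != "$last_state" ]; then
    echo "state changed: send the warning email here"
    printf 'last_state=%q\n' "$current_state" > "$statefile"    # persist for the next run
else
    echo "state unchanged: no email sent"
fi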


how to extend a command without changing the usage

I have a global npm package, provided by a third party, that generates a report and sends it to a server.
in_report generate -date 20221211
And I want to let a group of users check whether the report has already been generated, in order to prevent duplication. Therefore, I want to run a sh script before executing the in_report command.
sh check.sh && in_report generate -date 20221211
But the problem is that I don't want to change the command they use to generate the report. I can apply a patch on their PCs (I am able to change the env path, etc.).
Is it possible to make running in_report generate -date 20221211 actually run sh check.sh && in_report generate -date 20221211?
If this "in_report" is only used for this exact purpose, you can create an alias by putting the following line at the end of the ".bashrc" or ".bash_aliases" file used by the people who will need to run in_report:
alias in_report='sh check.sh && in_report'
See https://doc.ubuntu-fr.org/alias for details.
If in_report is to be used in other ways too, this is not the solution. In that case, you may want to call it directly inside check.sh when a certain set of conditions on the parameters is matched. To do that:
alias in_report='sh check.sh'
The content of check.sh :
#!/bin/sh
# Plain [ ] and case are used instead of [[ ]] so this still works when invoked as "sh check.sh"
looks_like_date=false
case "$3" in 20*) looks_like_date=true ;; esac # Assuming that all your dates must be in the 21st century
if [ "$#" -eq 3 ] && [ "$1" = "generate" ] && [ "$2" = "-date" ] && "$looks_like_date"
then
    if [ some test to check that the report has not been generated yet ]
    then
        /full/path/to/the/actual/in_report "$@" # WARNING : be sure that nobody will move the actual in_report to another path
    else
        echo "This report already exists"
    fi
else
    /full/path/to/the/actual/in_report "$@"
fi
This is certainly not ideal, but it should work. By far the easiest and most reliable solution, if applicable, would be to skip the aliasing entirely and tell those who will use in_report to run your check.sh instead (with the same parameters they would give in_report); then you can call in_report directly instead of /full/path/to/the/actual/in_report.
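A minimal sketch of check.sh in that simpler setup (the duplicate-report test is still a placeholder):
#!/bin/sh
# check.sh - users run this with the same arguments they would normally give in_report
if [ some test to check that the report has not been generated yet ]
then
    in_report "$@"
else
    echo "This report already exists"
fi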
Sorry if this was not very clear. In that case, feel free to ask.
On most modern Linux distros the easiest would be to place a shell script that defines a function in /etc/profile.d, e.g. /etc/profile.d/my_report.sh, with a content of
function in_report() { sh check.sh && /path/to/in_report "$@"; }
That way it gets automatically placed in people's environments when they log in.
The /path/to is important so the function doesn't call itself recursively.
A cursory glance through the documentation for the Mac suggests that you may want to edit /etc/bashrc or /etc/zshrc respectively.

"read" command not executing in "while read line" loop [duplicate]

This question already has answers here: Read user input inside a loop (6 answers). Closed 5 years ago.
First post here! I really need help on this one. I looked the issue up on Google, but I can't manage to find a useful answer. So here's the problem.
I'm having fun coding something like a framework in bash. Everyone can create their own module and add it to the framework. BUT. To know what arguments the script requires, I created an "args.conf" file that must be in every module, and it looks roughly like this:
LHOST;true;The IP the remote payload will connect to.
LPORT;true;The port the remote payload will connect to.
The first column is the argument name, the second defines if it's required or not, the third is the description. Anyway, long story short, the framework is supposed to read the args.conf file line by line to ask the user a value for every argument. Here's the piece of code:
info "Reading module $name argument list..."
while read line; do
    echo $line > line.tmp
    arg=`cut -d ";" -f 1 line.tmp`
    requ=`cut -d ";" -f 2 line.tmp`
    if [ $requ = "true" ]; then
        echo "[This argument is required]"
    else
        echo "[This argument isn't required, leave a blank space if you don't wan't to use it]"
    fi
    read -p " $arg=" answer
    echo $answer >> arglist.tmp
done < modules/$name/args.conf
tr '\n' ' ' < arglist.tmp > argline.tmp
argline=`cat argline.tmp`
info "Launching module $name..."
cd modules/$name
$interpreter $file $argline
cd ../..
rm arglist.tmp
rm argline.tmp
rm line.tmp
succes "Module $name execution completed."
As you can see, it's supposed to ask the user for a value for every argument... But:
1) The read command doesn't seem to execute. It just gets skipped, and the argument gets no value.
2) Despite the fact that the args.conf file contains 3 lines, the loop seems to execute just a single time. All I see on the screen is "[This argument is required]" once, and the module just launches (and crashes, because it doesn't have the required arguments...).
I really don't know what to do here... I hope someone here has an answer ^^'.
Thanks in advance!
(and sorry for any mistakes, I'm French)
Alpha.
As @that other guy pointed out in a comment, the problem is that all of the read commands in the loop are reading from the args.conf file, not from the user. The way I'd handle this is by redirecting the conf file over a different file descriptor than stdin (fd #0); I like to use fd #3 for this:
while read -u3 line; do
...
done 3< modules/$name/args.conf
(Note: if your shell's read command doesn't understand the -u option, use read line <&3 instead.)
There are a number of other things in this script I'd recommend against:
Variable references without double-quotes around them, e.g. echo $line instead of echo "$line", and < modules/$name/args.conf instead of < "modules/$name/args.conf". Unquoted variable references get split into words (if they contain whitespace) and any wildcards that happen to match filenames will get replaced by a list of matching files. This can cause really weird and intermittent bugs. Unfortunately, your use of $argline depends on word splitting to separate multiple arguments; if you're using bash (not a generic POSIX shell) you can use arrays instead; I'll get to that.
You're using relative file paths everywhere, and cding in the script. This tends to be fragile and confusing, since file paths are different at different places in the script, and any relative paths passed in by the user will become invalid the first time the script cds somewhere else. Worse, you aren't checking for errors when you cd, so if any cd fails for any reason, the entire rest of the script will run in the wrong place and fail bizarrely. You'd be far better off figuring out where your module system's root directory is (as an absolute path), then referencing everything from it (e.g. < "$module_root/modules/$name/args.conf"), as sketched below.
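One common way to pin down that root (a sketch, assuming the framework's main script lives in the root directory itself):
# Resolve the directory containing this script as an absolute path
module_root="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" || exit 1

# ...then reference files relative to it, regardless of the current directory
conf_file="$module_root/modules/$name/args.conf"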
Actually, you're not checking for errors anywhere. It's generally a good idea, when writing any sort of program, to try to think of what can go wrong and how your program should respond (and also to expect that things you didn't think of will also go wrong). Some people like to use set -e to make their scripts exit if any simple command fails, but this doesn't always do what you'd expect. I prefer to explicitly test the exit status of the commands in my script, with something like:
command1 || {
echo 'command1 failed!' >&2
exit 1
}
if command2; then
echo 'command2 succeeded!' >&2
else
echo 'command2 failed!' >&2
exit 1
fi
You're creating temp files in the current directory, which risks random conflicts (with other runs of the script at the same time, any files that happen to have names you're using, etc). It's better to create a temp directory at the beginning, then store everything in it (again, by absolute path):
module_tmp="$(mktemp -dt module-system)" || {
echo "Error creating temp directory" >&2
exit 1
}
...
echo "$answer" >> "$module_tmp/arglist.tmp"
(BTW, note that I'm using $() instead of backticks. They're easier to read, and don't have some subtle syntactic oddities that backticks have. I recommend switching.)
Speaking of which, you're overusing temp files; a lot of what you're doing can be done just fine with shell variables and built-in shell features. For example, rather than reading lines from the config file, storing them in a temp file, and using cut to split them into fields, you can simply echo to cut:
arg="$(echo "$line" | cut -d ";" -f 1)"
...or better yet, use read's built-in ability to split fields based on whatever IFS is set to:
while IFS=";" read -u3 arg requ description; do
(Note that since the assignment to IFS is a prefix to the read command, it only affects that one command; changing IFS globally can have weird effects, and should be avoided whenever possible.)
Similarly, storing the argument list in a file, converting newlines to spaces into another file, then reading that file... you can skip any or all of these steps. If you're using bash, store the arg list in an array:
arglist=()
while ...
arglist+=("$answer") # or ("$arg=$answer")? Not sure of your syntax.
done ...
"$module_root/modules/$name/$interpreter" "$file" "${arglist[#]}"
(That messy syntax, with the double-quotes, curly braces, square brackets, and at-sign, is the generally correct way to expand an array in bash).
If you can't count on bash extensions like arrays, you can at least do it the old messy way with a plain variable:
arglist=""
while ...
arglist="$arglist $answer" # or "$arglist $arg=$answer"? Not sure of your syntax.
done ...
"$module_root/modules/$name/$interpreter" "$file" $arglist
... but this runs the risk of arguments being word-split and/or expanded to lists of files.
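Putting several of these suggestions together, a rough sketch of the corrected loop might look like this (the paths and the final command line are assumptions based on the original script):
#!/bin/bash
# Sketch only: combines the fd-3 redirect, IFS-based field splitting, an
# absolute root directory, and a bash array for the collected arguments.
module_root="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" || exit 1

arglist=()
while IFS=";" read -u3 arg requ description; do
    if [ "$requ" = "true" ]; then
        echo "[This argument is required]"
    else
        echo "[This argument isn't required, leave a blank space if you don't want to use it]"
    fi
    read -p " $arg=" answer    # reads from the terminal, since fd 3 carries the conf file
    arglist+=("$answer")
done 3< "$module_root/modules/$name/args.conf"

# Adjust to match how your interpreter and module file are actually invoked
"$interpreter" "$module_root/modules/$name/$file" "${arglist[@]}"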

Accessing the value returned by a shell script in the parent script

I am trying to access a string returned by a shell script which was called from a parent shell script. Something like this:
ex.sh:
echo "Hemanth"
ex2.sh:
sh ex.sh
if [ $? == "Hemanth" ]; then
echo "Hurray!!"
else
echo "Sorry Bro!"
fi
Is there a way to do this? Any help would be appreciated.
Thank you.
Use command substitution syntax in ex2.sh:
valueFromOtherScript="$(sh ex.sh)"
printf "%s\n" "$valueFromOtherScript"
echo by default outputs a newline character after the string passed; if you don't want it in the above variable, use printf instead:
printf "Hemanth"
in the first script. Also worth adding: $? will only hold the exit code of the last executed command. Its value is interpreted as 0 for a successful run and non-zero for failure. It will NEVER hold a string value the way you tried to use it.
A Bash script does not really "return" a string. What you want to do is capture the output of a script (or external program, or function, they all act the same in this respect).
Command substitution is a common way to capture output.
captured_output="$(sh ex.sh)"
This initializes variable captured_output with the string containing all that is output by ex.sh. Well, not exactly all. Any script (or command, or function) actually has two output channels, usually called "standard out" (file descriptor number 1) and "standard error" (file descriptor number 2). When executing from a terminal, both typically end up on the screen. But they can be handled separately if needed.
For instance, if you want to capture really all output (including error messages), you would add a "redirection" after your command that tells the shell you want standard error to go to the same place as standard out.
captured_output="$(sh ex.sh 2>&1)"
If you omit that redirection, and the script outputs something on standard error, then this will still show on screen, and will not be captured.
Another way to capture output is to send it to a file, and then read that file back into a variable, like this:
sh ex.sh > output_file.log
captured_output="$(<output_file.log)"
A script (or external program, or function) does have something called a return code, which is an integer. By convention, a value of 0 means "success", and any other value indicates abnormal execution (but not necessarily failure) : the meaning of that return code is not standardized, it is ultimately specific to each script, program or function.
This return code is available in the $? special shell variable immediately after the execution terminates.
sh ex.sh
return_code=$?
echo "Return code is $return_code"
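Putting this together for the original ex2.sh, a minimal sketch that compares the captured output rather than $? would be:
#!/bin/bash
# ex2.sh - capture the output of ex.sh and compare the string, not the exit code
captured_output="$(sh ex.sh)"
if [ "$captured_output" = "Hemanth" ]; then
    echo "Hurray!!"
else
    echo "Sorry Bro!"
fi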

Bash output happening after prompt, not before, meaning I have to manually press enter

I am having a problem getting bash to do exactly what I want. It's not a major issue, but it is annoying.
1.) I have a third-party program I run that produces some output on stderr. Some of it is useful, some of it is stuff I regularly don't care about, and I don't want that dumped to the screen; however, I do want the useful parts of stderr dumped to the screen. I figured the best way to achieve this was to pass stderr to a function, then use conditions in that function to either show the stderr or not.
2.) This works fine. However, while the solution I have implemented dumps out my errors at the right time, it then returns a bash prompt, and when I summarise the status of the errors at the end of the function, the echo there prints the text after the prompt, meaning that I have to press enter to get back to a clean prompt. It will become clear with the example below.
My error stream generator:
./TestErrorStream.sh
#!/bin/bash
echo "test1" >&2
My function to process this:
./Function.sh
#!/bin/bash
function ProcessErrors()
{
while read data;
do
echo Line was:"$data"
done
sleep 5 # This is used simply to simulate the processing work I'm doing on the errors.
echo "Completed"
}
I source the Function.sh file to make ProcessErrors() available, then I run:
2> >(ProcessErrors) ./TestErrorStream.sh
I expect (and want) to get:
user@user-desktop:~/path$ 2> >(ProcessErrors) ./TestErrorStream.sh
Line was:test1
Completed
user@user-desktop:~/path$
However what I really get is:
user@user-desktop:~/path$ 2> >(ProcessErrors) ./TestErrorStream.sh
Line was:test1
user@user-desktop:~/path$ Completed
And no clean prompt. Of course the prompt is there, but "Completed" is being printed after the prompt; I want it printed before, and then a clean prompt to appear.
NOTE: This is a minimum working example, and it's contrived. While other solutions to my error stream problem are welcome I also want to understand how to make bash run this script the way I want it to.
Thanks for your help
Joey
Your problem is that the while loop stays attached to stdin until the program exits.
stdin is released at the end of TestErrorStream.sh, so your prompt comes back almost immediately, while the function still has work left to process.
I suggest you wrap the command inside a script so you can control how long to wait before your prompt comes back (I suggest 1 second more than the time the function is expected to need to process its remaining lines).
I successfully managed to do this as follows:
./Functions.sh
#!/bin/bash
function ProcessErrors()
{
while read data;
do
echo Line was:"$data"
done
sleep 5 # simulate required time to process end of function (after TestErrorStream.sh is over and stdin is released)
echo "Completed"
}
./TestErrorStream.sh
#!/bin/bash
echo "first"
echo "firsterr" >&2
sleep 20 # any number here
./WrapTestErrorStream.sh
#!/bin/bash
source ./Functions.sh
2> >(ProcessErrors) ./TestErrorStream.sh
sleep 6 # <= this one is important
With the above you'll get a nice "Completed" before your prompt after 26 seconds of processing. (Works fine with or without the additional "time" command)
user@host:~/path$ time ./WrapTestErrorStream.sh
first
Line was:firsterr
Completed
real 0m26.014s
user 0m0.000s
sys 0m0.000s
user@host:~/path$
Note: the process substitution ">(ProcessErrors)" is a subprocess of the script "./TestErrorStream.sh", so when the script ends, the subprocess is no longer tied to it nor to the wrapper. That's why we need that final "sleep 6".
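A different approach, shown below, avoids the fixed sleep by opening the process substitution once on a dedicated file descriptor and explicitly waiting for the background process to finish: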
#!/bin/bash
function ProcessErrors {
while read data; do
echo Line was:"$data"
done
sleep 5
echo "Completed"
}
# Open subprocess
exec 60> >(ProcessErrors)
P=$!
# Do the work
2>&60 ./TestErrorStream.sh
# Close connection or else subprocess would keep on reading
exec 60>&-
# Wait for process to exit (wait "$P" doesn't work). There are many ways
# to do this too like checking `/proc`. I prefer the `kill` method as
# it's more explicit. We'd never know if /proc updates itself quickly
# among all systems. And using an external tool is also a big NO.
while kill -s 0 "$P" &>/dev/null; do
sleep 1s
done
Off topic side-note: I'd love to see how posturing bash veterans/authors try to own this. Or perhaps they already did way way back from seeing this.

Unable to run BASH script in current environment multiple times

I have a bash script that I use to move between source and bin directories from wherever I currently am (I call this script 'teleport'). Since it is basically just a glorified 'cd' command, I have to run it in the current shell (i.e. . ./teleport.sh). I've set up an alias in my .bashrc file so that 'teleport' expands to '. teleport.sh'.
The first time I run it, it works fine. But if I run it again after it has run once, it doesn't do anything. It works again if I close my terminal and open a new one, but only the first time. My intuition is that something is going on internally in BASH that I'm not familiar with, so I thought I would run it by the gurus here to see if I can get an answer.
The script is:
numargs=$#
function printUsage
{
    echo -e "Usage: $0 [-o | -s] <PROJECT>\n"
    echo -e "\tMagically teleports you into the main source directory of a project.\n"
    echo -e "\t PROJECT: The current project you wish to teleport into."
    echo -e "\t -o: Teleport into the objdir.\n"
    echo -e "\t -s: Teleport into the source dir.\n"
}
if [ $numargs -lt 2 ]
then
    printUsage
fi
function teleportToObj
{
    OBJDIR=${HOME}/Source/${PROJECT}/obj
    cd ${OBJDIR}
}
function teleportToSrc
{
    cd ${HOME}/Source/${PROJECT}/src
}
while getopts "o:s:" opt
do
    case $opt in
        o)
            PROJECT=$OPTARG
            teleportToObj
            ;;
        s)
            PROJECT=$OPTARG
            teleportToSrc
            ;;
    esac
done
My usage of it is something like:
sjohnson@corellia:~$ cd /usr/local/src
sjohnson@corellia:/usr/local/src$ . ./teleport -s some-proj
sjohnson@corellia:~/Source/some-proj/src$ teleport -o some-proj
sjohnson@corellia:~/Source/some-proj/src$
<... START NEW TERMINAL ...>
sjohnson@corellia:~$ . ./teleport -o some-proj
sjohnson@corellia:~/Source/some-proj/obj$
The problem is that getopts necessarily keeps a little bit of state so that it can be called in a loop, and you're not clearing that state. Each time it's called, it processes one more argument, and it increments the shell's OPTIND variable so it'll know which argument to process the next time it's called. When it's done with all the arguments, it returns 1 (false) every time it's invoked, which makes the while exit.
The first time you source your script, it works as expected. The second (and third, fourth...) time, getopts does nothing but return false.
Add one line to reset the state before you start looping:
unset OPTIND # clear state so getopts will start over
while getopts "o:s:" opt
do
# ...
done
(I assume there's a typo in your transcript, since it shows you invoking the script -- not sourcing it -- on the second try, but that's not the real problem here.)
The problem is that the first time you call it, you are sourcing the script (that's what ". ./teleport" does), which runs the script in the current shell, thus preserving the cd. The second time you call it, it isn't sourced, so you create a subshell, cd to the appropriate directory, and then exit the subshell, putting you right back where you called the script from!
The way to make this work is simply to make teleportToSrc and teleportToObj aliases or functions in the current shell (i.e. outside a script)
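A minimal sketch of that approach, assuming the same ~/Source/<PROJECT>/{src,obj} layout as the original script, is a function defined directly in .bashrc:
# In ~/.bashrc - a function runs in the current shell every time it is called,
# so the cd sticks and there is no sourcing or getopts state to worry about
teleport() {
    local mode="$1" project="$2"
    case "$mode" in
        -s) cd "${HOME}/Source/${project}/src" ;;
        -o) cd "${HOME}/Source/${project}/obj" ;;
        *)  echo "Usage: teleport [-o | -s] <PROJECT>" >&2 ; return 1 ;;
    esac
}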
