Increment Number (variable) in bash script - linux

I need to increment a number inside a variable in a bash script.
But after the script is done, the variable should be exported with the new number and available the next time the script runs.
IN MY SHELL
set x=0
SCRIPT
" If something is true.. do"
export x=$(($x+1))   # increment variable and save it for next time
if [ $x -eq 3 ];then
echo test
fi
exit

You cannot persist a variable in memory between two processes; the value needs to be stored somewhere and read on the next startup. The simplest way to do this is with a file. (The fish shell, which supports "universal" variables, uses a separate process that always runs to communicate with new shells as they start and exit. But even this "master" process needs to use a file to save the values when it exits.)
# Ensure that the value of x is written to the file
# no matter *how* the script exits (short of kill -9, anyway)
x_file=/some/special/file/somewhere
trap 'printf "%s\n" "$x" > "$x_file"' EXIT
x=$(cat "$x_file") # bash can read the whole file with x=$(< "$x_file")
# For a simple number, you only need to run one line
# read x < "$x_file"
x=$((x+1))
if [ "$x" -eq 3 ]; then
echo test
fi
exit

Exporting a variable works in one direction only. The exported variable will have the correct value for all child processes of your shell, but when a child exits, the changed value is lost to the parent process. The parent process will only ever see the initial value of the variable.
Which is a good thing: if changes propagated back, every child process could alter the value of an exported variable and potentially mess things up for the other child processes.
You could do one of two things:
Have the script save the value to a file before exiting, and read it back from the file when starting.
Use source your-script.bash or . your-script.bash. This way, your shell will not create a child process, and the variable gets changed in the same process (see the sketch below).
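A minimal sketch of the second option, assuming the counter script is named counter.sh (the name is illustrative, not from the question):
# counter.sh -- meant to be sourced, not executed:  . ./counter.sh
x=$((x+1))                 # changes x in the calling shell, since no child process is created
if [ "$x" -eq 3 ]; then
    echo test
fi
# note: do not call "exit" here -- exiting a sourced script would close the calling shell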

Related

Accessing the value returned by a shell script in the parent script

I am trying to access a string returned by a shell script which was called from a parent shell script. Something like this:
ex.sh:
echo "Hemanth"
ex2.sh:
sh ex.sh
if [ $? == "Hemanth" ]; then
echo "Hurray!!"
else
echo "Sorry Bro!"
fi
Is there a way to do this? Any help would be appreciated.
Thank you.
Use command substitution syntax in ex2.sh:
valueFromOtherScript="$(sh ex.sh)"
printf "%s\n" "$valueFromOtherScript"
echo by default outputs a newline character after the string passed to it; if you don't need that newline in the above variable, use printf instead, as in
printf "Hemanth"
in the first script. It is also worth adding that $? holds only the exit code of the last executed command. Its value is interpreted as 0 for a successful run and non-zero for failure. It will never hold a string value the way you tried to use it.
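Putting this together, a minimal sketch of a corrected ex2.sh (assuming ex.sh prints only the name):
#!/bin/sh
# ex2.sh -- compare the captured output, not the exit code
name="$(sh ex.sh)"
if [ "$name" = "Hemanth" ]; then
    echo "Hurray!!"
else
    echo "Sorry Bro!"
fi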
A Bash script does not really "return" a string. What you want to do is capture the output of a script (or external program, or function, they all act the same in this respect).
Command substitution is a common way to capture output.
captured_output="$(sh ex.sh)"
This initializes variable captured_output with the string containing all that is output by ex.sh. Well, not exactly all. Any script (or command, or function) actually has two output channels, usually called "standard out" (file descriptor number 1) and "standard error" (file descriptor number 2). When executing from a terminal, both typically end up on the screen. But they can be handled separately if needed.
For instance, if you want to capture really all output (including error messages), you would add a "redirection" after your command that tells the shell you want standard error to go to the same place as standard out.
captured_output="$(sh ex.sh 2>&1)"
If you omit that redirection, and the script outputs something on standard error, then this will still show on screen, and will not be captured.
Another way to capture output is sending it to a file, and then read back that file to a variable, like this :
sh ex.sh > output_file.log
captured_output="$(<output_file.log)"
A script (or external program, or function) does have something called a return code, which is an integer. By convention, a value of 0 means "success", and any other value indicates abnormal execution (but not necessarily failure) : the meaning of that return code is not standardized, it is ultimately specific to each script, program or function.
This return code is available in the $? special shell variable immediately after the execution terminates.
sh ex.sh
return_code=$?
echo "Return code is $return_code"

How to use one environment variable when calling a bash script repeatedly

I have a task to monitor the system against a quota: if the monitored result is over the quota, send a warning email. The monitor program is called once every half hour; after one warning email has been sent, if the monitored state is still the same the next time, there is no need to send the same warning email again.
In order to do this, I would like to use an environment variable to store the state of the last monitored result, so that the next time it can be checked and a duplicate email will not be sent. One of my solutions is to add or update an export line in .bashrc, but in order to activate the updated export line, I have to run bash again, which might be unnecessary.
So I would like to ask: is there any way to update the environment variable so that every time the monitoring Bash script is called, it gets the freshly updated value?
This is a self-contained solution using a heredoc. At first glance it may seem elaborate and imperfect, but it has its uses: it is resilient, it works well when deploying across more than one machine, it requires no special monitoring of or permissions on external files, and most importantly, there are no unwanted surprises with the environment.
This example uses bash, but it will work with sh if the $thisfile variable is set manually or in some other way.
This example assumes that 20 is already in the script file as mymonitorval, and uses argument $1 as a proof of concept. You would obviously change newvalue="$1" to whatever calculates the quota:
Example usage:
#bash $>./script 10
Value from previous run was 20
Value from this test was 10
Updating value ...
#bash $>./script 10
not doing anything ... result is the same as last time
#bash $>
Script:
#!/bin/bash
thisfile="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" ; thisfile="${DIR}${0}"
read -d '' currentvalue <<'EOF'
mymonitorval=20
EOF
eval "$currentvalue"
function changeval () {
sed -E -i "s/^(mymonitorval=).*/\1${1}/" "$thisfile"
}
newvalue="$1"
if [[ "$newvalue" != "$mymonitorval" ]]; then
echo "Value from previous run was $mymonitorval"
echo "Value from this test was "$1""
echo "Updating value ..."
changeval "$newvalue"
else
echo "not doing anything ... result is the same as last time"
fi
Explanation:
thisfile can be set manually to the script's location. This example uses the automated solution from https://stackoverflow.com/a/246128
read -d '' ... EOF is the heredoc, which is saved into the variable $currentvalue
eval "$currentvalue" in this case is the equivalent of typing mymonitorval=20 into a terminal
function changeval ... } updates the contents of the heredoc in place (it changes the physical .sh script file)
newvalue="$1" is there for the purpose of testing; $newvalue would be determined by whatever part of your script calculates the quota
The if ... block performs one of two sets of actions, depending on whether $newvalue is the same as it was last time
Store the environment variable in a separate file and then source that file.
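A minimal sketch of that approach (the state-file path, variable name, and quota check are all illustrative, not part of the question):
#!/bin/bash
# monitor.sh -- hypothetical wrapper around the real check
state_file="$HOME/.monitor_state"

# load the state saved by the previous run, if any
[ -f "$state_file" ] && . "$state_file"

current_state="over_quota"        # placeholder for the real monitored result
if [ "$current_state" != "${last_state:-}" ]; then
    echo "state changed, sending warning email"    # placeholder for the real mail command
fi

# persist the state for the next run (simple single-word values assumed)
printf 'last_state=%s\n' "$current_state" > "$state_file"
This avoids touching .bashrc entirely; the script reads and rewrites its own small state file on every run.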

setting global variable in bash

I have a function that I expect to hang sometimes. So I am setting a global variable and then reading it; if it doesn't come up after a few seconds, I give up. Below is not the complete code, but it's not working, because I am not getting $START with the value 5.
START=0
ineer()
{
sleep 5
START=5
echo "done $START" ==> I am seeing here it return 5
return $START
}
echo "Starting"
ineer &
while true
do
if [ $START -eq 0 ]
then
echo "Not null $START" ==> But $START here is always 0
else
echo "else $START"
break;
fi
sleep 1;
done
You run the ineer function call in the background, which means START will be assigned in a subshell started by the current shell. In that subshell, the value of START will be 5.
However, in your current shell, the one that echoes the START value, it is still 0, since the update of START happens only in the subshell.
Each time you start a shell in the background, it is just like forking a new process: the new process gets a copy of the current shell's environment, including all variable values, and is completely isolated from your current shell.
Since the subshell is forked as a new process, there is no way for it to directly update the parent shell's START value. Alternative approaches include passing a signal when the subshell that runs the ineer function exits.
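One common alternative, not shown in the question, is to have the background job write its result to a temporary file that the parent polls; a minimal sketch (the temp-file handling and timings are illustrative):
#!/bin/bash
START=0
result_file=$(mktemp)

ineer() {
    sleep 5
    echo 5 > "$result_file"   # the subshell cannot change the parent's START,
                              # but it can write to a file the parent reads
}

echo "Starting"
ineer &

tries=0
while [ ! -s "$result_file" ] && [ "$tries" -lt 10 ]; do
    sleep 1                   # poll for up to ~10 seconds, then give up
    tries=$((tries+1))
done

[ -s "$result_file" ] && START=$(cat "$result_file")
rm -f "$result_file"
echo "START is now $START"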
A common error: export
export only makes the variable name available to any subshells forked from the current shell. However, once the subshell has been forked, it holds a new copy of the variable and its value, and any later change to the exported variable in the parent shell will not affect the subshell.
See the following code for details.
#!/bin/bash
export START=0
ineer()
{
sleep 3
export START=5
echo "done $START" # ==> I am seeing here it return 5
sleep 1
echo "new value $START"
return $START
}
echo "Starting"
ineer &
while true
do
if [ $START -eq 0 ]
then
echo "Not null $START" # ==> But $START here is always 0
export START=10
echo "update value to $START"
sleep 3
else
echo "else $START"
break;
fi
sleep 1;
done
The problem is that ineer & runs the function in a subshell, which is its own scope for variables. Changes made in a subshell will not apply to the parent shell. I recommend looking into kill and signal catching.
Save the PID of ineer & with:
pid=$!
and use kill -0 $pid (that is a zero!) to detect whether your process is still alive.
But it is better to redesign ineer to use a lock file; that is a safer check!
UPDATE: From the KILL(2) man page:
#include <sys/types.h>
#include <signal.h>
int kill(pid_t pid, int sig);
If sig is 0, then no signal is sent, but error checking is still
performed; this can be used to check for the existence
of a process ID or process group ID.
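A minimal sketch of that check (the polling interval is illustrative):
ineer &
pid=$!                            # PID of the background job

while kill -0 "$pid" 2>/dev/null; do
    echo "ineer is still running ..."
    sleep 1                       # kill -0 sends no signal; it only tests that the PID exists
done
echo "ineer has exited"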
The answer is: in this case you can use export.
This instruction allows all subprocesses to use this variable.
So when you call the ineer function, it will fork a process that copies the entire environment, including the START variable taken from the parent process.
You have to change the first line from:
START=0
to:
export START=0
You may also want to read this thread: Defining a variable with or without export

Propagate values in shell script

I have a simple script that uses CAPDEPTH as a counter variable and calls some tests for each value.
#!/bin/bash
# SCRIPT ONE
CAPDEPTH=1
...
while [ $CAPDEPTH -lt 11 ];
do
echo "cap depth - $CAPDEPTH"
make test-all-basics
let CAPDEPTH=CAPDEPTH+1
done
And the line
eval "make test-all-basics"
will make multiple calls to another shell script, which I also want to depend on the value of CAPDEPTH. Here are a couple of lines from that script.
# SCRIPT TWO
...
R_binary="${R_HOME}/bin/exec${R_ARCH}/R"
capture_arg="--tracedir $(CAPDEPTH)"
...
My question is how to get the value of CAPDEPTH propagated from SCRIPT ONE to SCRIPT TWO. Is that even possible?
I've tried exporting the variable CAPDEPTH in both scripts, but it does not seem to work.
#!/bin/bash
# SCRIPT ONE
CAPDEPTH=1
...
while [ $CAPDEPTH -lt 11 ]; do
echo "cap depth - $CAPDEPTH"
export CAPDEPTH # Makes it so that any child process inherits the variable CAPDEPTH and anything it contains.
make test-all-basics
let CAPDEPTH++ # Increments CAPDEPTH by +1
done
# SCRIPT TWO
...
R_binary="${R_HOME}/bin/exec${R_ARCH}/R"
capture_arg="--tracedir $CAPDEPTH"
...
Inside script one, use:
export CAPDEPTH=....
let CAPDEPTH=...
echo $CAPDEPTH
and inside script two, do:
source script1
and then,
echo $CAPDEPTH

Unable to run BASH script in current environment multiple times

I have a bash script that I use to move into source and bin directories from anywhere I currently am (I call this script 'teleport'). Since it is basically just a glorified 'cd' command, I have to run it in the current shell (i.e. . ./teleport.sh). I've set up an alias in my .bashrc file so that 'teleport' expands to '. teleport.sh'.
The first time I run it, it works fine. But then, if I run it again after it has run once, it doesn't do anything. It works again if I close my terminal and then open a new one, but only the first time. My intuition is that there is something internally going on with BASH that I'm not familiar with, so I thought I would run it through the gurus here to see if I can get an answer.
The script is:
numargs=$#
function printUsage
{
echo -e "Usage: $0 [-o | -s] <PROJECT>\n"
echo -e "\tMagically teleports you into the main source directory of a project.\n"
echo -e "\t PROJECT: The current project you wish to teleport into."
echo -e "\t -o: Teleport into the objdir.\n"
echo -e "\t -s: Teleport into the source dir.\n"
}
if [ $numargs -lt 2 ]
then
printUsage
fi
function teleportToObj
{
OBJDIR=${HOME}/Source/${PROJECT}/obj
cd ${OBJDIR}
}
function teleportToSrc
{
cd ${HOME}/Source/${PROJECT}/src
}
while getopts "o:s:" opt
do
case $opt in
o)
PROJECT=$OPTARG
teleportToObj
;;
s)
PROJECT=$OPTARG
teleportToSrc
;;
esac
done
My usage of it is something like:
sjohnson@corellia:~$ cd /usr/local/src
sjohnson@corellia:/usr/local/src$ . ./teleport -s some-proj
sjohnson@corellia:~/Source/some-proj/src$ teleport -o some-proj
sjohnson@corellia:~/Source/some-proj/src$
<... START NEW TERMINAL ...>
sjohnson@corellia:~$ . ./teleport -o some-proj
sjohnson@corellia:~/Source/some-proj/obj$
The problem is that getopts necessarily keeps a little bit of state so that it can be called in a loop, and you're not clearing that state. Each time it's called, it processes one more argument, and it increments the shell's OPTIND variable so it'll know which argument to process the next time it's called. When it's done with all the arguments, it returns 1 (false) every time it's invoked, which makes the while exit.
The first time you source your script, it works as expected. The second (and third, fourth...) time, getopts does nothing but return false.
Add one line to reset the state before you start looping:
unset OPTIND # clear state so getopts will start over
while getopts "o:s:" opt
do
# ...
done
(I assume there's a typo in your transcript, since it shows you invoking the script -- not sourcing it -- on the second try, but that's not the real problem here.)
The problem is that the first time you call it, you are sourcing the script (that's what ". ./teleport" does), which runs the script in the current shell, thus preserving the cd. The second time you call it, it isn't sourced, so you create a subshell, cd to the appropriate directory there, and then exit the subshell, putting you right back where you called the script from!
The way to make this work is simply to make teleportToSrc and teleportToObj aliases or functions in the current shell (i.e. outside a script).
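For example, a minimal sketch of such a function for your .bashrc (directory layout taken from the question, option handling simplified):
# in ~/.bashrc -- runs in the current shell, so the cd persists
teleport() {
    local mode=$1 project=$2
    case $mode in
        -o) cd "${HOME}/Source/${project}/obj" ;;
        -s) cd "${HOME}/Source/${project}/src" ;;
        *)  echo "Usage: teleport [-o | -s] <PROJECT>" ;;
    esac
}
After sourcing .bashrc once, teleport -s some-proj and teleport -o some-proj can be run any number of times.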

Resources