setting global variable in bash - linux

I have a function that I expect may hang sometimes. So I set a global variable in it and then read it from the caller; if it doesn't come up after a few seconds, I give up. Below is not the complete code, but it isn't working: I never see $START take the value 5.
START=0

ineer()
{
    sleep 5
    START=5
    echo "done $START"   # ==> I am seeing here it returns 5
    return $START
}

echo "Starting"
ineer &

while true
do
    if [ $START -eq 0 ]
    then
        echo "Not null $START"   # ==> But $START here is always 0
    else
        echo "else $START"
        break
    fi
    sleep 1
done

You run the ineer function call in the background, which means START will be assigned in a subshell started by the current shell. In that subshell, the value of START will be 5.
However, in your current shell, which echoes START, it is still 0, since the update to START only happens in the subshell.
Each time you start a shell in the background, it is just like forking a new process, which makes a copy of the current shell's whole environment, including variable values, and the new process is completely isolated from your current shell.
Since the subshell has been forked as a new process, there is no way for it to directly update the parent shell's START value. Alternative approaches include passing a signal when the subshell that runs the ineer function exits.
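For example, a minimal sketch of that signal approach (the done_flag name and the SIGUSR1 handler are only illustrative, not from the original post):

#!/bin/bash
# Sketch only: the background job notifies the parent with a signal instead of
# trying to set the parent's variable directly.
done_flag=0
trap 'done_flag=1' SIGUSR1   # the handler runs in the parent, so it can update the flag

ineer()
{
    sleep 5
    kill -SIGUSR1 "$$"       # $$ still expands to the parent shell's PID here
}

echo "Starting"
ineer &

while [ "$done_flag" -eq 0 ]
do
    echo "waiting for ineer"
    sleep 1
done
echo "ineer finished"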
Common errors:
export
export only makes the variable name available to subshells forked from the current shell. However, once the subshell has been forked, it has its own copy of the variable and its value, and any changes to the exported variable in the shell will not affect the subshell.
See the following code for details.
#!/bin/bash
export START=0

ineer()
{
    sleep 3
    export START=5
    echo "done $START"   # ==> I am seeing here it return 5
    sleep 1
    echo "new value $START"
    return $START
}

echo "Starting"
ineer &

while true
do
    if [ $START -eq 0 ]
    then
        echo "Not null $START"   # ==> But $START here is always 0
        export START=10
        echo "update value to $START"
        sleep 3
    else
        echo "else $START"
        break
    fi
    sleep 1
done

The problem is that ineer & runs the function in a subshell, which has its own variable scope. Changes made in a subshell do not apply to the parent shell. I recommend looking into kill and signal catching.

Save the pid of ineer & with:
pid=$!
and use kill -0 $pid (that is a zero!) to detect whether your process is still alive.
But it is better to redesign ineer to use a lock file; that is a safer check.
UPDATE: From the KILL(2) man page:
#include <sys/types.h>
#include <signal.h>
int kill(pid_t pid, int sig);
If sig is 0, then no signal is sent, but error checking is still
performed; this can be used to check for the existence
of a process ID or process group ID.
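So the polling loop from the question could watch the process instead of the variable; a minimal sketch (not from the original answer):

echo "Starting"
ineer &
pid=$!                           # PID of the background job

while kill -0 "$pid" 2>/dev/null
do
    echo "ineer (pid $pid) is still running"
    sleep 1
done
echo "ineer has finished"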

The answer is: in this case you can use export.
This instruction allows all subprocesses to use this variable.
So when you call the ineer function, it will fork a process that copies the entire environment, including the START variable taken from the parent process.
You have to change the first line from:
START=0
to:
export START=0
You may also want to read this thread: Defining a variable with or without export

Related

How to unset environment variable from background in linux?

I want to delete an environment variable from a background process which sleeps a little. I set the "asd" variable with the value "foo":
export asd=foo
After that I want to delete it from a background process. I tried this, but it doesn't work:
(sleep 3;unset asd;) &
When 3 seconds have elapsed, the export command still shows the previous setting. What am I doing wrong?
My goal is for the "asd" variable to be removed after 3 seconds.
You can set a trap in the parent process and unset the variable inside the trap handler, while having the background process deliver a signal after the specified time.
asd=foo
trap 'unset asd' SIGUSR1
p=$BASHPID
( sleep 1; kill -SIGUSR1 $p ) &
echo $asd # will print foo
sleep 2
echo $asd # will print empty line
Note that it will not unset a variable "exactly" after the specified time, but when the handler for the signal gets executed.
I guess alternatively I could imagine patching bash and writing a bash builtin command that would create a thread that unsets the variable after the specified time. Note that setenv is not thread safe, so such a setup would have to be synchronized with other bash code.
What do I wrong?
You did unset the variable, but in a subshell. A subshell's environment doesn't affect the parent shell.
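A quick illustration (hypothetical snippet, not from the question):

asd=foo
( sleep 3; unset asd ) &   # the unset happens only in the background subshell
wait
echo "$asd"                # still prints "foo" in the parent shell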

bash how to kill parent process, or exit from parent process from a function in bash module script

I have a few scripts:
functions.sh - has many functions defined including work() and abort()
it looks like this:
#!/bin/bash
abort()
{
    message=$1
    echo Error: $message ..Aborting >log
    exit 7
}

work()
{
    cp ./testfile ./test1   # ./testfile doesn't exist, so non-zero status here
    if [ $? -eq 0 ]; then
        echo "variable_value"
    else
        abort "Can not copy"
    fi
}
parent.sh - parent script is the main script, it looks like this:
#!/bin/sh
. ./functions.sh
value=$(work)
echo "why is this still getting printed"
Basically I have many functions in the functions.sh file, and I am sourcing that file in parent.sh to make all the functions available. Parent can call any function, and any function in functions.sh can call abort, at which point execution of parent.sh should stop; but that's not happening, and parent.sh runs on to the next step. Is there a way to get around this problem?
I realise that it's happening because of the assignment step value=$(work). But is there a way to abort execution right at the abort function call in this case?
I'm not convinced that this behavior should be described as a "problem". As you noted, when you invoke work in a subshell, aborting from work just aborts the subshell. This is correct, expected behavior. Fortunately, the value returned by work in the subshell is available to the parent, so you can simply respond to it and write:
value=$(work) || exit
If work returns a non-zero value, then the script will exit with that same non-zero value.
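If you prefer to keep the message and status explicit, the same idea can be written out long-hand (a sketch, not part of the original answer):

value=$(work)
status=$?
if [ "$status" -ne 0 ]; then
    echo "work aborted with status $status" >&2
    exit "$status"
fi
echo "work returned: $value"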

Dual use bash script - source but also exec subshell? Dynamic return/exit?

My current setup starts with a function that is ostensibly in .bashrc (.bash_it/custom/funcs.bash to be precise)
#!/usr/bin/env bash
function proset() {
    . proset-core "$@";
}
proset-core does some decrypting of secrets and exports those secrets to the session, hence the need for the . instead of just running it as a script/subshell.
If something goes wrong in proset-core, I use return instead of exit since I don't want the SSH connection to be dropped.
if [ "${APP_JSON}" = "null" ] ; then
echo -e "\n${redtext}App named $NAME not found in ${APPCONF}. Aborting.${resettext}\n";
return;
fi
This makes sense in the context of the exported proset function, but precludes usage as a script since return isn't valid except from within a function.
Is there a way to detect how it's being called and return one or the other as appropriate?
Just try to return, and exit if it fails.
_retval=$?
return 2>/dev/null || exit "$_retval"
The only case where your code will still be continuing after the return was invoked at top-level (outside of a function) is if you were executed rather than sourced, and should that happen, exiting is the Right Thing.
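As a sketch, assuming the check sits at the top level of proset-core rather than inside a function (return has to be executed at file scope for this to work):

# Inside proset-core, at the point that currently just says "return;"
if [ "${APP_JSON}" = "null" ] ; then
    echo -e "\n${redtext}App named $NAME not found in ${APPCONF}. Aborting.${resettext}\n"
    _retval=1
    # return succeeds when the file is sourced; otherwise it fails and we exit
    return "$_retval" 2>/dev/null || exit "$_retval"
fi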
Make the builtin variable $SHLVL part of the "$@" args, as the last arg. Then at the test point:
if [ "${@: -1}" -lt $SHLVL ]; then
    # SHLVL arg is less than current SHLVL
    # we are in a subshell
    exit
else
    return
fi
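For that comparison to be possible, the wrapper would have to append $SHLVL when it sources the script; a rough sketch:

# Sketch: pass the caller's SHLVL as an extra, final argument.
function proset() {
    . proset-core "$@" "$SHLVL"
}

When proset-core is sourced this way, SHLVL is unchanged and the test returns; when it is executed as a script, SHLVL has been incremented and the test exits.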
Ended up using
calledBy="$(ps -o comm= $PPID)";
if [ "x${calledBy}" = "xsshd" ]; then
    return 1;
else
    exit 1;
fi
since it didn't require passing anything extra. If anything might make this problematic, please comment. Not too worried about being bash-specific or portable.
Credit: get the name of the caller script in bash script

Increment Number (variable) in bash script

I need to increment a number inside a variable in a bash script.
But after the script is done, the variable should be exported with the new number and be available the next time the script runs.
IN MY SHELL
set x=0
SCRIPT
# If something is true.. do:
export x=$(($x+1))   # increment variable and save it for next time
if [ $x -eq 3 ]; then
    echo test
fi
exit
You cannot persist a variable in memory between two processes; the value needs to be stored somewhere and read on the next startup. The simplest way to do this is with a file. (The fish shell, which supports "universal" variables, uses a separate process that always runs to communicate with new shells as they start and exit. But even this "master" process needs to use a file to save the values when it exits.)
# Ensure that the value of x is written to the file
# no matter *how* the script exits (short of kill -9, anyway)
x_file=/some/special/file/somewhere
trap 'printf "%s\n" "$x" > "$x_file"' EXIT

x=$(cat "$x_file")   # bash can read the whole file with x=$(< "$x_file")
# For a simple number, you only need to run one line:
# read x < "$x_file"

x=$((x+1))
if [ "$x" -eq 3 ]; then
    echo test
fi
exit
Exporting a variable works one way only. The exported variable will have the correct value in all child processes of your shell, but when a child exits, the changed value is lost to the parent process; the parent only ever sees the value it set itself.
Which is a good thing: if changes propagated back, any child process could change the value of an exported variable and potentially mess things up for the other child processes.
You could do one of two things:

1. Have the script save the value to a file before exiting, and read it from the file when starting.
2. Use source your-script.bash or . your-script.bash. This way, your shell will not create a child process, and the variable gets changed in the same process (see the sketch below).
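A minimal sketch of the second option (counter.sh is a hypothetical file name):

# counter.sh - meant to be sourced, not executed, so it must not call exit
x=$((x+1))
if [ "$x" -eq 3 ]; then
    echo test
fi

Then, in your shell, run x=0 once and . ./counter.sh repeatedly; x keeps its value between runs because no child process is created, and the third source prints test.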

Re-run bash script if another instance was invoked

I have a bash script that may be invoked multiple times simultaneously. To protect the state information (saved in a /tmp file) that the script accesses, I am using file locking like this:
do_something()
{
    ...
}

# Check if there are any other instances of the script; if so, exit
exec 8>$LOCK
if ! flock -n -x 8; then
    exit 1
fi

# script does something...
do_something
Now any other instance that is invoked while this script is running exits. Instead, if there were n simultaneous invocations, I want the script to run only one extra time, not n times, something like this:
do_something()
{
    ...
}

# Check if there are any other instances of the script; if so, exit
exec 8>$LOCK
if ! flock -n -x 8; then
    exit 1
fi

# script does something...
do_something

# check if another instance was invoked; if so, re-run do_something again
if [ condition ]; then
    do_something
fi
How can I go about doing this? Touching a file inside the flock before quitting and having that file as the condition for the second if doesn't seem to work.
Have one flag (lock file) to signal that something needs doing, and always set it. Have a separate flag that is unset by the execution part.
REQUEST_FILE=/tmp/please_do_something
LOCK_FILE=/tmp/doing_something

# request running
touch $REQUEST_FILE

# lock and run
if ln -s /proc/$$ $LOCK_FILE 2>/dev/null ; then
    while [ -e $REQUEST_FILE ]; do
        do_something
        rm $REQUEST_FILE
    done
    rm $LOCK_FILE
fi
If you want to ensure that "do_something" is run exactly once for each time the whole script is run, then you need to create some kind of a queue. The overall structure is similar.
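A rough sketch of such a queue (file names are illustrative, and the queue handling is simplified rather than fully race-free):

QUEUE_FILE=/tmp/do_something.queue
LOCK_FILE=/tmp/doing_something

# enqueue this invocation (one line per request)
echo "$$" >> "$QUEUE_FILE"

# only one runner at a time
if ln -s /proc/$$ "$LOCK_FILE" 2>/dev/null; then
    # serve queued requests one by one until the queue is empty
    while read -r request < "$QUEUE_FILE"; do
        sed -i '1d' "$QUEUE_FILE"   # drop the request we are about to serve
        do_something
    done
    rm "$LOCK_FILE"
fi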
They're not everyone's favourite, but I've always been a fan of symbolic links for making lock files, since creating them is atomic. For example:
lockfile=/var/run/`basename $0`.lock
if ! ln -s "pid=$$ when=`date '+%s'` status=$something" "$lockfile"; then
    echo "Can't set lock." >&2
    exit 1
fi
By encoding useful information directly into the link target, you eliminate the race condition introduced by writing to files.
That said, the link that Dennis posted provides much more useful information that you should probably try to understand before writing much more of your script. My example above is sort of related to BashFAQ/045 which suggests doing a similar thing with mkdir.
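The mkdir variant from BashFAQ/045 looks roughly like this:

lockdir=/tmp/myscript.lock         # mkdir is atomic: it fails if the directory already exists
if mkdir "$lockdir" 2>/dev/null; then
    trap 'rmdir "$lockdir"' EXIT   # release the lock when the script exits
    do_something
else
    echo "Another instance holds the lock - aborting." >&2
    exit 1
fi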
If I understand your question correctly, what you want can be achieved (slightly unreliably) by using two lock files. If setting the first lock fails, we try the second lock. If setting the second lock also fails, we exit. The race exists if the first lock is deleted after we check it but before we check the second, existing lock. If this level of risk is acceptable to you, that's great.
This is untested; but it looks reasonable to me.
#!/usr/local/bin/bash

lockbase="/tmp/test.lock"

setlock() {
    if ln -s "pid=$$" "$lockbase".$1 2>/dev/null; then
        trap "rm \"$lockbase\".$1" 0 1 2 5 15
    else
        return 1
    fi
}

if setlock 1 || setlock 2; then
    echo "I'm in!"
    do_something_amazing
else
    echo "No lock - aborting."
fi
Please see Process Management.
