How to unset environment variable from background in linux? - linux

I want to delete an environment variable from a background process that sleeps for a little while. I set the "asd" variable to the value "foo":
export asd=foo
After that I want to delete it from a background process. I tried this, but it doesn't work:
(sleep 3;unset asd;) &
When the 3 seconds have elapsed, the "export" command still shows the previous setting. What am I doing wrong?
My goal is to have the "asd" variable removed after 3 seconds.

How to unset environment variable from background in linux?
You can set a trap in the parent process and unset the variable inside the trap handler, while having the background process deliver a signal after the specified time.
asd=foo
trap 'unset asd' SIGUSR1
p=$BASHPID
( sleep 1; kill -SIGUSR1 $p ) &
echo $asd # will print foo
sleep 2
echo $asd # will print empty line
Note that it will not unset the variable "exactly" after the specified time, but when the handler for the signal gets executed.
Alternatively, I could imagine patching bash and writing a bash builtin command that would create a thread which, after the specified time, would unset the variable. Note that setenv is not thread safe, so such a setup would have to be synchronized with the rest of the bash code.
What am I doing wrong?
You unset the variable in a subshell. A subshell's environment doesn't affect the parent shell.
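A quick demonstration of that scoping, reusing the asker's variable:
export asd=foo
( unset asd; echo "inside the subshell: asd='$asd'" ) # prints an empty value
echo "in the parent shell: asd='$asd'" # still prints foo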

Related

Recover after "kill 0"

I have a script that invokes kill 0. I want to invoke that script from another script, and have the outer script continue to execute. (kill 0 sends a signal, defaulting to SIGTERM, to every process in the process group of the calling process; see man 2 kill.)
kill0.sh:
#!/bin/sh
kill 0
caller.sh:
#!/bin/sh
echo BEFORE
./kill0.sh
echo AFTER
The current behavior is:
$ ./caller.sh
BEFORE
Terminated
$
How can I modify caller.sh so it prints AFTER after invoking kill0.sh?
Modifying kill0.sh is not an option. Assume that kill0.sh might read from stdin and write to stdout and/or stderr before invoking kill 0, and I don't want to interfere with that. I still want the kill 0 command to kill the kill0.sh process itself; I just don't want it to kill the caller as well.
I'm using Ubuntu 16.10 x86_64, and /bin/sh is a symlink to dash. That shouldn't matter, and I prefer answers that don't depend on that.
This is of course a simplified version of a larger set of scripts, so I'm at some risk of having an XY problem, but I think that a solution to the problem as stated here should let me solve the actual problem. (I have a wrapper script that invokes a specified command, capturing and displaying its output, with some other bells and whistles.)
One solution
You need to trap the signal in the parent, but enable it in the child. So a script like run-kill0.sh could be:
#!/bin/sh
echo BEFORE
trap '' TERM
(trap 15; exec ./kill0.sh)
echo AFTER
The first trap disables the TERM signal. The second trap in the sub-shell re-enables the signal (using the signal number instead of the name — see below) before running the kill0.sh script. Using exec is a minor optimization — you can omit it and it will work the same.
Digression on obscure syntactic details
Why 15 instead of TERM in the sub-shell? Because when I tested it with TERM instead of 15, I got:
$ sh -x run-kill0.sh
+ echo BEFORE
BEFORE
+ trap '' TERM
+ trap TERM
trap: usage: trap [-lp] [arg signal_spec ...]
+ echo AFTER
AFTER
$
When I used 15 in place of TERM (twice), I got:
$ sh -x run-kill0.sh
+ echo BEFORE
BEFORE
+ trap '' 15
+ trap 15
+ exec ./kill0.sh
Terminated: 15
+ echo AFTER
AFTER
$
Using TERM in place of the first 15 would also work.
Bash documentation on trap
Studying the Bash manual for trap shows:
trap [-lp] [arg] [sigspec …]
The commands in arg are to be read and executed when the shell receives signal sigspec. If arg is absent (and there is a single sigspec) or equal to ‘-’, each specified signal’s disposition is reset to the value it had when the shell was started.
A second solution
The second sentence is the key: trap - TERM should (and empirically does) work.
#!/bin/sh
echo BEFORE
trap '' TERM
(trap - TERM; exec ./kill0.sh)
echo AFTER
Running that yields:
$ sh -x run-kill0.sh
+ echo BEFORE
BEFORE
+ trap '' TERM
+ trap - TERM
+ exec ./kill0.sh
Terminated: 15
+ echo AFTER
AFTER
$
I've just re-remembered why I use numbers and not names (but my excuse is that the shell — it wasn't Bash in those days — didn't recognize signal names when I learned it).
POSIX documentation for trap
However, in Bash's defense, the POSIX spec for trap says:
If the first operand is an unsigned decimal integer, the shell shall treat all operands as conditions, and shall reset each condition to the default value. Otherwise, if there are operands, the first is treated as an action and the remaining as conditions.
If action is '-', the shell shall reset each condition to the default value. If action is null ( "" ), the shell shall ignore each specified condition if it arises.
This is clearer than the Bash documentation, IMO. It states why trap 15 works. There's also a minor glitch in the presentation. The synopsis says (on one line):
trap n [condition...]trap [action condition...]
It should say (on two lines):
trap n [condition...]
trap [action condition...]
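Since the first operand in trap 15 is a bare integer, the shell treats all operands as conditions to reset. A tiny sketch of the integer form, run as a script rather than interactively (since it terminates the process):
#!/bin/sh
trap '' TERM # ignore SIGTERM
kill -TERM $$ # nothing happens; the signal is currently ignored
echo "still alive"
trap 15 # POSIX integer form: reset signal 15 (SIGTERM) to its default
kill -TERM $$ # now the shell terminates
echo "never printed"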

Increment Number (variable) in bash script

I need to increment a number inside a variable in a bash script.
After the script is done, the variable should be exported with the new number and be available the next time the script runs.
IN MY SHELL
set x=0
SCRIPT
" If something is true.. do"
export x=$(($x+1)) //increment variable and save it for next time
if [ $x -eq 3 ];then
echo test
fi
exit
You cannot persist a variable in memory between two processes; the value needs to be stored somewhere and read on the next startup. The simplest way to do this is with a file. (The fish shell, which supports "universal" variables, uses a separate process that always runs to communicate with new shells as they start and exit. But even this "master" process needs to use a file to save the values when it exits.)
# Ensure that the value of x is written to the file
# no matter *how* the script exits (short of kill -9, anyway)
x_file=/some/special/file/somewhere
trap 'printf "%s\n" "$x" > "$x_file"' EXIT
x=$(cat "$x_file") # bash can read the whole file with x=$(< "$x_file")
# For a simple number, you only need to run one line
# read x < "$x_file"
x=$((x+1))
if [ "$x" -eq 3 ]; then
echo test
fi
exit
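A hedged usage sketch, assuming the script above is saved as counter.sh (a hypothetical name) and that the file path is writable; seed the file once, since the first cat would otherwise fail:
echo 0 > /some/special/file/somewhere # seed the counter once
./counter.sh # x becomes 1
./counter.sh # x becomes 2
./counter.sh # x reaches 3 and prints "test"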
Exporting a variable is one-way only. The exported variable will have the correct value in all child processes of your shell, but when a child exits, any changed value is lost to the parent process; the parent only ever sees the initial value.
Which is a good thing: all child processes can potentially change the value of an exported variable, and if the change were bi-directional, they could mess things up for the other child processes.
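A quick demonstration of the one-way behavior:
export x=1
bash -c 'x=99; echo "child sees x=$x"' # the child changed only its own copy
echo "parent still sees x=$x" # prints: parent still sees x=1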
You could do one of two things:
Have the script save the value to a file before exiting, and read it from the file when starting.
Use source your-script.bash or . your-script.bash. This way, your shell will not create a child process, and the variable gets changed in the same process (a sketch follows below).
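A minimal sketch of the second option, assuming the script is saved as your-script.bash (note that the exit from the original script must be dropped, or sourcing it would close your shell):
# your-script.bash -- increments x in the *calling* shell when sourced
x=$((x+1))
if [ "$x" -eq 3 ]; then
echo test
fi
Then, in your interactive shell:
x=0
. ./your-script.bash # x is now 1
. ./your-script.bash # x is now 2
. ./your-script.bash # x reaches 3 and prints "test"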

Terminate a process started by a bash script with CTRL-C

I am having an issue with terminating the execution of a process inside a bash script.
Basically my script does the following actions:
Issue some starting commands
Start a program who waits for CTRL+C to stop
Do some post-processing on data retrieved by the program
My problem is that when I hit CTRL+C, the whole script terminates, not just the "inner" program.
I have seen scripts around that do this, which is why I think it's possible.
You can set up a signal handler using trap:
trap 'myFunction arg1 arg2 ...' SIGINT;
I suggest keeping your script abortable overall, which you can do by using a simple boolean:
#!/bin/bash
# define signal handler and its variable
allowAbort=true;
myInterruptHandler()
{
if $allowAbort; then
exit 1;
fi;
}
# register signal handler
trap myInterruptHandler SIGINT;
# some commands...
# before calling the inner program,
# disable the abortability of the script
allowAbort=false;
# now call your program
./my-inner-program
# and now make the script abortable again
allowAbort=true;
# some more commands...
In order to reduce the likelihood of messing up with allowAbort, or just to keep it a bit cleaner, you can define a wrapper function to do the job for you:
#!/bin/bash
# define signal handler and its variable
allowAbort=true;
myInterruptHandler()
{
if $allowAbort; then
exit 1;
fi;
}
# register signal handler
trap myInterruptHandler SIGINT;
# wrapper
wrapInterruptable()
{
# disable the abortability of the script
allowAbort=false;
# run the passed arguments 1:1
"$#";
# save the returned value
local ret=$?;
# make the script abortable again
allowAbort=true;
# and return
return "$ret";
}
# call your program
wrapInterruptable ./my-inner-program

setting global variable in bash

I have a function which I expect to hang sometimes. So I set a global variable and then read it; if it hasn't come up after a few seconds, I give up. Below is not the complete code, but it's not working: I never see $START with the value 5.
START=0
ineer()
{
sleep 5
START=5
echo "done $START" ==> I am seeing here it return 5
return $START
}
echo "Starting"
ineer &
while true
do
if [ $START -eq 0 ]
then
echo "Not null $START" ==> But $START here is always 0
else
echo "else $START"
break;
fi
sleep 1;
done
You run the ineer function call in the background, which means START will be assigned in a subshell started by the current shell. In that subshell, the START value will be 5.
However, in your current shell, which echoes the START value, it is still 0, since the update of START happens only in the subshell.
Each time you start a shell in the background, it is just like forking a new process: the child gets a copy of the whole current shell environment, including variable values, and is completely isolated from your current shell.
Since the subshell is forked as a new process, there is no way for it to directly update the parent shell's START value. One alternative is to have the subshell that runs the ineer function signal the parent when it is done, as sketched below.
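A minimal sketch of that signal-based approach, reusing the asker's names (SIGUSR1 is an arbitrary choice):
#!/bin/bash
START=0
trap 'START=5' SIGUSR1 # the parent itself updates START when signaled
ineer()
{
sleep 5
kill -SIGUSR1 $$ # $$ expands to the parent shell's PID, even in the background job
}
ineer &
while [ "$START" -eq 0 ]
do
sleep 1 # the trap runs as soon as the current sleep returns
done
echo "else $START" # prints: else 5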
Common errors:
export
export only makes the variable name available to subshells forked from the current shell. However, once a subshell has been forked, it holds its own copy of the variable and its value; any change to the exported variable in one shell does not affect the other.
See the following code for details.
#!/bin/bash
export START=0
ineer()
{
sleep 3
export START=5
echo "done $START" # ==> I am seeing here it return 5
sleep 1
echo "new value $START"
return $START
}
echo "Starting"
ineer &
while true
do
if [ $START -eq 0 ]
then
echo "Not null $START" # ==> But $START here is always 0
export START=10
echo "update value to $START"
sleep 3
else
echo "else $START"
break;
fi
sleep 1;
done
The problem is that ineer & runs the function in a subshell, which is its own scope for variables. Changes made in a subshell will not apply to the parent shell. I recommend looking into kill and signal catching.
Save the pid of ineer & with:
pid=$!
and use kill -0 $pid (that is a zero!) to detect whether your process is still alive.
But it is better to redesign ineer to use a lock file; that is a safer check! (A polling sketch follows the man page excerpt below.)
UPDATE From KILL(2) man page:
#include <sys/types.h>
#include <signal.h>
int kill(pid_t pid, int sig);
If sig is 0, then no signal is sent, but error checking is still
performed; this can be used to check for the existence
of a process ID or process group ID.
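A minimal sketch of that liveness check, reusing the asker's ineer function:
ineer &
pid=$! # PID of the background subshell
while kill -0 "$pid" 2>/dev/null
do
echo "ineer is still running..."
sleep 1
done
echo "ineer has exited"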
The answer is: in this case you can use export.
This instruction allows all subprocesses to use this variable.
So when you call the ineer function, it will fork a process that copies the entire environment, including the START variable taken from the parent process.
You have to change the first line from:
START=0
to:
export START=0
You may also want to read this thread: Defining a variable with or without export

Unix: What is the difference between source and export?

I am writing a shell script to read a file which has key=value pairs and set those variables as environment variables. But I have a doubt: if I do source file.txt, will that set the variables defined in that file as environment variables, or should I read the file line by line and set them using the export command?
Is the source command in this case different from export?
When you source the file, the assignments will be set but the variables are not exported unless the allexport option has been set. If you want all the variables to be exported, it is much simpler to use allexport and source the file than it is to read the file and use export explicitly. In other words, you should do:
set -a
. file.txt
(I prefer . because it is more portable than source, but source works just fine in bash.)
Note that exporting a variable does not make it an environment variable. It just makes it an environment variable in any subshell.
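A short sketch, assuming file.txt contains plain key=value lines such as FOO=hello and BAR=world (hypothetical names):
set -a # enable allexport: every assignment is marked for export
. ./file.txt # FOO and BAR are now exported
set +a # restore normal behavior
env | grep -E '^(FOO|BAR)=' # both appear in the environment of child processes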
source (.) vs export (and also some file lock [flock] stuff at the end):
In short:
source some_script.sh, or the POSIX-compliant equivalent, . some_script.sh, brings variables in from other scripts, while
export my_var="something" pushes variables out to other scripts/processes which are called/started from the current script/process.
Using source some_script.sh or . some_script.sh in a Linux shell script is kind of like using import some_module in Python, or #include <some_header_file.h> in C or C++. It brings variables in from the script being sourced.
Using export some_var="something" is kind of like setting that variable locally, so it is available for the rest of the current script or process, and then also passing it in to any and all sub-scripts or processes you may call from this point onward.
More details:
So, this:
# export `some_var` so that it is set and available in the current script/process,
# as well as in all sub-scripts or processes which are called from the
# current script/process
export some_var="something"
# call other scripts/processes, passing in `some_var` to them automatically
# since it was just exported above!
script1.sh # this script now gets direct access to `some_var`
script2.sh # as does this one
script3.sh # and this one
is as though you had done this:
# set this variable for the current script/process only
some_var="something"
# call other scripts/processes, passing in `some_var` to them **manually**
# so they can use it too
some_var="something" script1.sh # manually pass in `some_var` to this script
some_var="something" script2.sh # manually pass in `some_var` to this script
some_var="something" script3.sh # manually pass in `some_var` to this script
except that the first version above, where we called export some_var="something", actually passes the variable on recursively to sub-processes. If we call script1.sh from inside our current script/process, then script1.sh gets the exported variables from our current script; and if script1.sh calls script5.sh, and script5.sh calls script10.sh, then both of those scripts will receive the exported variables automatically as well. This is in contrast to the manual case above, where only the scripts called explicitly with manually-set variables receive them, so sub-scripts will NOT automatically get any variables from their calling scripts!
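A compact sketch of that recursive propagation (the script names are hypothetical):
a.sh:
#!/bin/bash
export some_var="something"
./b.sh
b.sh (exports nothing itself, just calls one level deeper):
#!/bin/bash
./c.sh
c.sh:
#!/bin/bash
echo "c.sh sees: $some_var" # prints: c.sh sees: something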
How to "un-export" a variable:
Note that once you've exported a variable, calling unset on it will "unexport it", like this:
# set and export `some_var` so that sub-processes will receive it
export some_var="something"
script1.sh # this script automatically receives `some_var`
# unset and un-export `some_var` so that sub-processes will no longer receive it
unset some_var
script1.sh # this script does NOT automatically receive `some_var`
In summary:
source or . imports.
export exports.
unset unexports.
Example:
Create this script:
source_and_export.sh:
#!/bin/bash
echo "var1 = $var1"
var2="world"
Then mark it executable:
chmod +x source_and_export.sh
Now here is me running some commands at the terminal to test the source (.) and export commands with this script. Type in the command you see after the lines beginning with $ (not including the comments). The other lines are the output. Run the commands sequentially, one command at a time:
$ echo "$var1" # var1 contains nothing locally
$ var1="hello" # set var1 to something in the current process only
$ ./source_and_export.sh # call a sub-process
var1 = # the sub-process can't see what I just set var1 to
$ export var1 # **export** var1 so sub-processes will receive it
$ ./source_and_export.sh # call a sub-process
var1 = hello # now the sub-process sees what I previously set var1 to
$ echo "$var1 $var2" # but I can't see var2 from the subprocess/subscript
hello
$ . ./source_and_export.sh # **source** the sub-script to _import_ its var2 into the current process
var1 = hello
$ echo "$var1 $var2" # now I CAN see what the subprocess set var2 to because I **sourced it!**
hello world # BOTH var1 from the current process and var2 from the sub-process print in the current process!
$ unset var1 # unexport (`unset`) var1
$ echo "$var1" # var1 is now NOT set in the current process
$ ./source_and_export.sh # and the sub-process doesn't receive it either
var1 =
$ var1="hey" # set var1 again in the current process
$ . ./source_and_export.sh # if I **source** the script, it runs in the current process, so it CAN see var1 from the current process!
var1 = hey # notice it prints
$ ./source_and_export.sh # but if I run the script as a sub-process, it can NOT see var1 now because it was `unset` (unexported)
var1 = # above and has NOT been `export`ed again since then!
$
Using files as global variables between processes
Sometimes, when writing scripts to launch programs and things especially, I have come across cases where export doesn't seem to work right. In these cases, sometimes one must resort to using files themselves as global variables to pass information from one program to another. Here is how that can be done. In this example, the existence of the file "~/temp/.do_something" functions as an inter-process boolean variable:
# In program A, if the file "~/temp/.do_something" does NOT exist,
# then create it
mkdir -p ~/temp
if [ ! -f ~/temp/.do_something ]; then
touch ~/temp/.do_something # create the file
fi
# In program B, check to see if the file exists, and act accordingly
mkdir -p ~/temp
DO_SOMETHING="false"
if [ -f ~/temp/.do_something ]; then
DO_SOMETHING="true"
fi
if [ "$DO_SOMETHING" == "true" ] && [ "$SOME_OTHER_VAR" == "whatever" ]; then
# remove this global file "variable" so we don't act on it again
# until "program A" is called again and re-creates the file
rm ~/temp/.do_something
do_something
else
do_something_else
fi
Simply checking for the existence of a file, as shown above, works great for globally passing around boolean conditions between programs and processes. However, if you need to pass around more complicated variables, such as strings or numbers, you may need to do this by writing these values into the file. In such cases, you should use the file lock function, flock, to properly ensure inter-process synchronization. It is a type of process-safe (i.e., "inter-process") mutex primitive. You can read about it here:
The flock shell command: https://man7.org/linux/man-pages/man1/flock.1.html. See also man flock or man 1 flock.
The flock Linux C library function: https://man7.org/linux/man-pages/man2/flock.2.html. See also man 2 flock. You must #include <sys/file.h> in your C file to use this function.
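A minimal flock(1) sketch for passing a number between processes under a lock (the file paths here are hypothetical):
counter_file=~/temp/.counter
lock_file=~/temp/.counter.lock
mkdir -p ~/temp
# Writer: increment the value while holding an exclusive lock on fd 9
(
flock -x 9 # block until the lock is ours
n=$(cat "$counter_file" 2>/dev/null || echo 0)
echo $((n + 1)) > "$counter_file"
) 9>"$lock_file"
# Reader: take a shared lock while reading
(
flock -s 9
cat "$counter_file"
) 9>"$lock_file"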
References:
https://askubuntu.com/questions/862236/source-vs-export-vs-export-ld-library-path/862256#862256
My own experimentation and testing
I'll be adding the above example to my project on GitHub here, under the bash folder: https://github.com/ElectricRCAircraftGuy/eRCaGuy_hello_world
