I want to load some environment variables from a file before running a node script, so that the script has access to them. However, I don't want the environment variables to be set in my shell after the script is done executing.
I can load the environment variables like this:
export $(cat app-env-vars.txt | xargs) && node my-script.js
However, after the command is run, all of the environment variables are now set in my shell.
I'm asking this question to answer it, since I figured out a solution but couldn't find an answer on SO.
If you wrap the command in parentheses, the exports will be scoped to within those parens and won't pollute the global shell namespace:
(export $(cat app-env-vars.txt | xargs) && node my-script.js)
Echoing one of the environment variables from the app-env-vars.txt file after executing the command will show it as empty.
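For example (SOME_APP_VAR stands in for whichever variable your file actually defines):
$ (export $(cat app-env-vars.txt | xargs) && node my-script.js)
$ echo "${SOME_APP_VAR-unset}"
unset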
This is what the env command is for:
env - run a program in a modified environment
You can try something like:
env $(cat app-env-vars.txt) node my-script.js
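As a quick sketch of the scoping, assuming FOO isn't already set in your shell:
$ env FOO=bar node -e 'console.log(process.env.FOO)'
bar
$ echo "${FOO-unset}"
unset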
This (and any unquoted $(...) expansion) is subject to word splitting and glob expansion, both of which can easily cause problems with something like environment variables.
A safer approach is to use arrays, like so:
my_vars=(
FOO=bar
"BAZ=hello world"
...
)
env "${my_vars[#]}" node my-script.js
You can populate an array from a file if needed. Note you can also use -i with env to only pass the environment variables you set explicitly.
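For example, a minimal sketch of filling the array from a file, assuming one VAR=value entry per line:
mapfile -t my_vars < app-env-vars.txt    # mapfile requires Bash 4+
env "${my_vars[@]}" node my-script.js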
If you trust the .txt file's contents, and it contains valid Bash syntax, you should source it (and probably rename it with a .sh/.bash extension). Then you can use a subshell, as you posted in your answer, to prevent the sourced state from leaking into the parent shell:
( source app-env-vars.txt && node my-script.js )
If your file just contains variables like
FOO='x y z'
BAR='bar'
...
you can try
eval $(< app-env-vars.txt) node my-script.js
I have two files, set.env and start.sh, that should run in any shell.
They actually do, except under the Korn shell (ksh), where environment variable setting does not behave the way I would expect. Look at this example.
File set.env:
#!/bin/bash
export MY_VAR="home" || setenv MY_VAR "home"
File start.sh:
#!/bin/bash
command . ./set.env || source set.env
echo "$MY_VAR"
When I run start.sh, I can see the variable printed.
But if I try to echo it in the terminal under ksh, it turns out not to be defined:
ksh$ start.sh
home
ksh$ echo $MY_VAR
ksh$
I would expect to see $MY_VAR in my session... any ideas?
(Running under Red Hat.)
When you run start.sh, you're executing it as a subcommand, not sourcing it. Consequently, changes it makes to environment variables are scoped to that process and its children; once the process exits, the environment variables it sets die with it.
To portably source the script, executing it in your current shell and thus setting environment variables within that shell, run:
# this works on any POSIX shell, including ksh (and bastardizations such as mksh)
. start.sh
...or, less portably:
# this is a bashism
source start.sh
BTW, as a practice, command . ./set.env is... odd. command prevents execution of shell functions, but any environment where a function named . is defined is arguably a buggy environment. Consider . start.sh alone.
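To illustrate with the files above, sourcing start.sh keeps MY_VAR in the interactive ksh session:
ksh$ . ./start.sh
home
ksh$ echo $MY_VAR
home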
I'm trying to write a shell script that, when run, will set some environment variables that will stay set in the caller's shell.
setenv FOO foo
in csh/tcsh, or
export FOO=foo
in sh/bash only set it during the script's execution.
I already know that
source myscript
will run the commands of the script rather than launching a new shell, and that can result in setting the "caller's" environment.
But here's the rub:
I want this script to be callable from either bash or csh. In other words, I want users of either shell to be able to run my script and have their shell's environment changed. So 'source' won't work for me, since a user running csh can't source a bash script, and a user running bash can't source a csh script.
Is there any reasonable solution that doesn't involve having to write and maintain TWO versions of the script?
Use the "dot space script" calling syntax. For example, here's how to do it using the full path to a script:
. /path/to/set_env_vars.sh
And here's how to do it if you're in the same directory as the script:
. set_env_vars.sh
These execute the script under the current shell instead of loading another one (which is what would happen if you ran ./set_env_vars.sh). Because it runs in the same shell, the environment variables you set will be available when it exits.
This is the same thing as calling source set_env_vars.sh, but it's shorter to type and might work in some places where source doesn't.
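For example, assuming a hypothetical set_env_vars.sh containing a single export:
$ cat set_env_vars.sh
export MY_SETTING=enabled
$ . ./set_env_vars.sh
$ echo $MY_SETTING
enabled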
Your shell process has a copy of the parent's environment and no access to the parent process's environment whatsoever. When your shell process terminates, any changes you've made to its environment are lost. Sourcing a script file is the most commonly used method for configuring a shell environment; you may just want to bite the bullet and maintain one for each of the two flavors of shell.
You're not going to be able to modify the caller's shell because it's in a different process context. When child processes inherit your shell's variables, they're
inheriting copies themselves.
One thing you can do is to write a script that emits the correct commands for tcsh or sh based on how it's invoked. If your script is "setit", then do:
ln -s setit setit-sh
and
ln -s setit setit-csh
Now either directly or in an alias, you do this from sh
eval `setit-sh`
or this from csh
eval `setit-csh`
setit uses $0 to determine its output style.
This is reminiscent of how people used to get the TERM environment variable set.
The advantage here is that setit is just written in whichever shell you like as in:
#!/bin/bash
arg0=$0
arg0=${arg0##*/}    # keep only the name the script was invoked as
for nv in \
    NAME1=VALUE1 \
    NAME2=VALUE2
do
    if [ "x$arg0" = xsetit-sh ]; then
        echo 'export '$nv' ;'
    elif [ "x$arg0" = xsetit-csh ]; then
        echo 'setenv '${nv%%=*}' '${nv##*=}' ;'
    fi
done
with the symbolic links given above, and the eval of the backquoted expression, this has the desired result.
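For illustration, here is roughly what each link emits, with NAME1/NAME2 being the placeholders from the script above:
$ ./setit-sh
export NAME1=VALUE1 ;
export NAME2=VALUE2 ;
$ ./setit-csh
setenv NAME1 VALUE1 ;
setenv NAME2 VALUE2 ;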
To simplify invocation for csh, tcsh, or similar shells:
alias dosetit 'eval `setit-csh`'
or for sh, bash, and the like:
alias dosetit='eval `setit-sh`'
One nice thing about this is that you only have to maintain the list in one place.
In theory you could even stick the list in a file and put cat nvpairfilename between "in" and "do".
This is pretty much how login shell terminal settings used to be done: a script would output statements to be executed in the login shell. An alias would generally be used to make invocation simple, as in "tset vt100". As mentioned in another answer, there is also similar functionality in the INN UseNet news server.
In my .bash_profile I have:
# No Proxy
function noproxy
{
    /usr/local/sbin/noproxy    # turn off proxy server
    unset http_proxy HTTP_PROXY https_proxy HTTPS_PROXY
}
# Proxy
function setproxy
{
    sh /usr/local/sbin/proxyon    # turn on proxy server
    http_proxy=http://127.0.0.1:8118/
    HTTP_PROXY=$http_proxy
    https_proxy=$http_proxy
    HTTPS_PROXY=$https_proxy
    export http_proxy https_proxy HTTP_PROXY HTTPS_PROXY
}
So when I want to disable the proxy, the functions run in the login shell and set the variables as expected.
It's "kind of" possible through using gdb and setenv(3), although I have a hard time recommending actually doing this. (Additionally, i.e. the most recent ubuntu won't actually let you do this without telling the kernel to be more permissive about ptrace, and the same may go for other distros as well).
$ cat setfoo
#! /bin/bash
gdb /proc/${PPID}/exe ${PPID} <<END >/dev/null
call setenv("foo", "bar", 0)
END
$ echo $foo
$ ./setfoo
$ echo $foo
bar
This works — it isn't what I'd use, but it 'works'. Let's create a script teredo to set the environment variable TEREDO_WORMS:
#!/bin/ksh
export TEREDO_WORMS=ukelele
exec $SHELL -i
It is interpreted by the Korn shell, exports the environment variable, and then replaces itself with a new interactive shell.
Before running this script, we have SHELL set in the environment to the C shell, and the environment variable TEREDO_WORMS is not set:
% env | grep SHELL
SHELL=/bin/csh
% env | grep TEREDO
%
When the script is run, you are in a new shell, another interactive C shell, but the environment variable is set:
% teredo
% env | grep TEREDO
TEREDO_WORMS=ukelele
%
When you exit from this shell, the original shell takes over:
% exit
% env | grep TEREDO
%
The environment variable is not set in the original shell's environment. If you use exec teredo to run the command, then the original interactive shell is replaced by the Korn shell that sets the environment, and then that in turn is replaced by a new interactive C shell:
% exec teredo
% env | grep TEREDO
TEREDO_WORMS=ukelele
%
If you type exit (or Control-D), then your shell exits, probably logging you out of that window, or taking you back to the previous level of shell from where the experiments started.
The same mechanism works for Bash or Korn shell. You may find that the prompt after the exit commands appears in funny places.
Note the discussion in the comments. This is not a solution I would recommend, but it does achieve the stated purpose of a single script to set the environment that works with all shells (that accept the -i option to make an interactive shell). You could also add "$@" after the option to relay any other arguments, which might then make the shell usable as a general 'set environment and execute command' tool. You might want to omit the -i if there are other arguments, leading to:
#!/bin/ksh
export TEREDO_WORMS=ukelele
exec $SHELL "${#-'-i'}"
The "${#-'-i'}" bit means 'if the argument list contains at least one argument, use the original argument list; otherwise, substitute -i for the non-existent arguments'.
You should use modules; see http://modules.sourceforge.net/
EDIT: The modules package has not been updated since 2012 but still works ok for the basics. All the new features, bells and whistles happen in lmod these days (which I like more): https://www.tacc.utexas.edu/research-development/tacc-projects/lmod
Another workaround that I don't see mentioned is to write the variable value to a file.
I ran into a very similar issue where I wanted to be able to run the last test I had set (instead of all my tests). My first plan was to write one command that would set the environment variable TESTCASE, and then have another command that would use it to run the test. Needless to say, I had exactly the same issue as you did.
But then I came up with this simple hack:
First command (testset):
#!/bin/bash
if [ $# -eq 1 ]
then
    echo "$1" > ~/.TESTCASE
    echo "TESTCASE has been set to: $1"
else
    echo "Come again?"
fi
Second command (testrun):
#!/bin/bash
TESTCASE=$(cat ~/.TESTCASE)
drush test-run "$TESTCASE"
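Usage then looks like this (the test name is just an example):
$ testset MyTestCase
TESTCASE has been set to: MyTestCase
$ testrun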
You can instruct the child process to print its environment variables (by calling "env"), then loop over the printed environment variables in the parent process and call "export" on those variables.
The following code is based on Capturing output of find . -print0 into a bash array
If the parent shell is the bash, you can use
while IFS= read -r -d $'\0' line; do
export "$line"
done < <(bash -s <<< 'export VARNAME=something; env -0')
echo $VARNAME
If the parent shell is dash, then read does not provide the -d flag and the code gets more complicated:
TMPDIR=$(mktemp -d)
mkfifo "$TMPDIR/fifo"
# a background bash writes its environment as shell-quoted lines into the fifo
(bash -s << "EOF"
export VARNAME=something
while IFS= read -r -d $'\0' line; do
echo $(printf '%q' "$line")
done < <(env -0)
EOF
) > "$TMPDIR/fifo" &
while read -r line; do export "$(eval echo $line)"; done < "$TMPDIR/fifo"
rm -r "$TMPDIR"
echo $VARNAME
Under OS X bash you can do the following:
Create the bash script file to unset the variable
#!/bin/bash
unset http_proxy
Make the file executable
sudo chmod 744 unsetvar
Create alias
alias unsetvar='source /your/path/to/the/script/unsetvar'
It should be ready to use as long as you have the folder containing your script file in your PATH.
It's not what I would call outstanding, but this also works if you need to call the script from the shell anyway. It's not a good solution, but for a single static environment variable, it works well enough.
1.) Create a script with a condition that exits either 0 (successful) or 1 (not successful):
if [[ $foo == "True" ]]; then
    exit 0
else
    exit 1
fi
2.) Create an alias that is dependent on the exit code:
alias setmyvar='./myscript.sh && export MyVariable'
You call the alias, which calls the script, which evaluates the condition, which is required to exit zero via the '&&' in order to set the environment variable in the parent shell.
This is flotsam, but it can be useful in a pinch.
You can invoke another Bash with a different bash_profile.
Also, you can create a special bash_profile for use in a multi-bash_profile environment.
Remember that you can use functions inside of bash_profile, and those functions will be available globally.
For example, user() { export USER_NAME="$1"; } can set a variable at runtime: user olegchir && env | grep olegchir
Another option is to use "Environment Modules" (http://modules.sourceforge.net/). This unfortunately introduces a third language into the mix: you define the environment in Tcl, but there are a few handy commands for typical modifications (prepend vs. append vs. set). You will also need to have Environment Modules installed. You can then use module load XXX to name the environment you want. The module command is basically a fancy alias for the eval mechanism described above by Thomas Kammeyer. The main advantage here is that you can maintain the environment in one language and rely on Environment Modules to translate it to sh, ksh, bash, csh, tcsh, zsh, python (?!?!!), etc.
I created a solution using pipes, eval, and a signal:
parent() {
    if [ -z "$G_EVAL_FD" ]; then
        die 1 "Run parent_setup in the parent process first"
    fi
    # die and ppid are helpers assumed to exist: die prints an error and exits;
    # ppid prints the parent PID of the process that invokes it
    if [ $(ppid) = "$$" ]; then
        "$@"
    else
        kill -SIGUSR1 $$
        echo "$@" >&$G_EVAL_FD
    fi
}
parent_setup() {
    G_EVAL_FD=99
    tempfile=$(mktemp -u)
    mkfifo "$tempfile"
    eval "exec $G_EVAL_FD<>'$tempfile'"
    rm -f "$tempfile"
    trap "read CMD <&$G_EVAL_FD; eval \"\$CMD\"" USR1
}
parent_setup #on parent shell context
( A=1 ); echo $A # prints nothing
( parent A=1 ); echo $A # prints 1
It might work with any command.
I don't see any answer documenting how to work around this problem with cooperating processes. A common pattern with things like ssh-agent is to have the child process print an expression which the parent can eval.
bash$ eval $(ssh-agent)
For example, ssh-agent has options to select Csh or Bourne-compatible output syntax.
bash$ ssh-agent
SSH2_AUTH_SOCK=/tmp/ssh-era/ssh2-10690-agent; export SSH2_AUTH_SOCK;
SSH2_AGENT_PID=10691; export SSH2_AGENT_PID;
echo Agent pid 10691;
(This causes the agent to start running, but doesn't allow you to actually use it, unless you now copy-paste this output to your shell prompt.) Compare:
bash$ ssh-agent -c
setenv SSH2_AUTH_SOCK /tmp/ssh-era/ssh2-10751-agent;
setenv SSH2_AGENT_PID 10752;
echo Agent pid 10752;
(As you can see, csh and tcsh use setenv to set variables.)
Your own program can do this, too.
bash$ foo=$(makefoo)
Your makefoo script would simply calculate and print the value, and let the caller do whatever they want with it -- assigning it to a variable is a common use case, but probably not something you want to hard-code into the tool which produces the value.
Technically, that is correct -- only 'eval' doesn't fork another shell. However, from the point of view of the application you're trying to run in the modified environment, the difference is nil: the child inherits the environment of its parent, so the (modified) environment is conveyed to all descending processes.
Ipso facto, the changed environment variable 'sticks' -- as long as you are running under the parent program/shell.
If it is absolutely necessary for the environment variable to remain after the parent (Perl or shell) has exited, it is necessary for the parent shell to do the heavy lifting. One method I've seen in the documentation is for the current script to spawn an executable file with the necessary 'export' language, and then trick the parent shell into executing it -- always being cognizant of the fact that you need to preface the command with 'source' if you're trying to leave a non-volatile version of the modified environment behind. A Kluge at best.
The second method is to modify the script that initiates the shell environment (.bashrc or whatever) to contain the modified parameter. This can be dangerous -- if you hose up the initialization script it may make your shell unavailable the next time it tries to launch. There are plenty of tools for modifying the current shell; by affixing the necessary tweaks to the 'launcher' you effectively push those changes forward as well.
Generally not a good idea; if you only need the environment changes for a particular application suite, you'll have to go back and return the shell launch script to its pristine state (using vi or whatever) afterwards.
In short, there are no good (and easy) methods. Presumably this was made difficult to ensure the security of the system was not irrevocably compromised.
The short answer is no, you cannot alter the environment of the parent process, but it seems like what you want is an environment with custom environment variables and the shell that the user has chosen.
So why not simply something like
#!/usr/bin/env bash
FOO=foo $SHELL
Then when you are done with the environment, just exit.
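Assuming you save the script above as, say, myenv.sh, a session would look like:
$ ./myenv.sh
$ echo $FOO
foo
$ exit       # back in the original, unmodified shell
$ echo ${FOO-unset}
unset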
You could always use aliases
alias your_env='source ~/scripts/your_env.sh'
I did this many years ago. If I remember correctly, I included an alias in each of .bashrc and .cshrc, with parameters, aliasing the respective forms of setting the environment to a common form.
Then the script that you will source in any of the two shells has a command with that last form, that is suitable aliased in each shell.
If I find the concrete aliases, I will post them.
Other than writing conditionals depending on what $SHELL/$TERM is set to, no. What's wrong with using Perl? It's pretty ubiquitous (I can't think of a single UNIX variant that doesn't have it), and it'll spare you the trouble.
What is export for?
What is the difference between:
export name=value
and
name=value
export makes the variable available to sub-processes.
That is,
export name=value
means that the variable name is available to any process you run from that shell process. If you want a process to make use of this variable, use export, and run the process from that shell.
name=value
means the variable scope is restricted to the shell, and is not available to any other process. You would use this for (say) loop variables, temporary variables etc.
It's important to note that exporting a variable doesn't make it available to parent processes. That is, specifying and exporting a variable in a spawned process doesn't make it available in the process that launched it.
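A quick demonstration of that last point:
$ bash -c 'export CHILD_VAR=1'
$ echo ${CHILD_VAR-unset}
unset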
To illustrate what the other answers are saying:
$ foo="Hello, World"
$ echo $foo
Hello, World
$ bar="Goodbye"
$ export foo
$ bash
bash-3.2$ echo $foo
Hello, World
bash-3.2$ echo $bar
bash-3.2$
It has been said that it's not necessary to export in bash when spawning subshells, while others said the exact opposite. It is important to note the difference between subshells (those that are created by (), ``, $() or loops) and subprocesses (processes that are invoked by name, for example a literal bash appearing in your script).
Subshells will have access to all variables from the parent, regardless of their exported state.
Subprocesses will only see the exported variables.
What is common in these two constructs is that neither can pass variables back to the parent shell.
$ noexport=noexport; export export=export; (echo subshell: $noexport $export; subshell=subshell); bash -c 'echo subprocess: $noexport $export; subprocess=subprocess'; echo parent: $subshell $subprocess
subshell: noexport export
subprocess: export
parent:
There is one more source of confusion: some think that 'forked' subprocesses are the ones that don't see non-exported variables. Usually fork()s are immediately followed by exec()s, and that's why it would seem that the fork() is the thing to look for, while in fact it's the exec(). You can run commands without fork()ing first with the exec command, and processes started by this method will also have no access to unexported variables:
$ noexport=noexport; export export=export; exec bash -c 'echo execd process: $noexport $export; execd=execd'; echo parent: $execd
execd process: export
Note that we don't see the parent: line this time, because we have replaced the parent shell with the exec command, so there's nothing left to execute that command.
This answer is wrong but retained for historical purposes. See 2nd edit below.
Others have answered that export makes the variable available to subshells, and that is correct but merely a side effect. When you export a variable, it puts that variable in the environment of the current shell (i.e. the shell calls putenv(3) or setenv(3)).
The environment of a process is inherited across exec, making the variable visible in subshells.
Edit (with 5 years' perspective):
This is a silly answer. The purpose of 'export' is to make variables "be in the environment of subsequently executed commands", whether those commands be subshells or subprocesses. A naive implementation would be to simply put the variable in the environment of the shell, but this would make it impossible to implement export -p.
2nd Edit (with another 5 years in passing).
This answer is just bizarre. Perhaps I had some reason at one point to claim that bash puts the exported variable into its own environment, but those reasons were not given here and are now lost to history. See Exporting a function local variable to the environment.
export NAME=value for settings and variables that have meaning to a subprocess.
NAME=value for temporary or loop variables private to the current shell process.
In more detail, export marks the variable name to be copied into the environment of subprocesses (and their subprocesses) when they are created. No name or value is ever copied back from the subprocess.
A common error is to place a space around the equal sign:
$ export FOO = "bar"
bash: export: `=': not a valid identifier
Only the exported variable (B) is seen by the subprocess:
$ A="Alice"; export B="Bob"; echo "echo A is \$A. B is \$B" | bash
A is . B is Bob
Changes in the subprocess do not change the main shell:
$ export B="Bob"; echo 'B="Banana"' | bash; echo $B
Bob
Variables marked for export have values copied when the subprocess is created:
$ export B="Bob"; echo '(sleep 30; echo "Subprocess 1 has B=$B")' | bash &
[1] 3306
$ B="Banana"; echo '(sleep 30; echo "Subprocess 2 has B=$B")' | bash
Subprocess 1 has B=Bob
Subprocess 2 has B=Banana
[1]+ Done echo '(sleep 30; echo "Subprocess 1 has B=$B")' | bash
Only exported variables become part of the environment (man environ):
$ ALICE="Alice"; export BOB="Bob"; env | grep "ALICE\|BOB"
BOB=Bob
So, now it should be as clear as is the summer's sun! Thanks to Brian Agnew, alexp, and William Pursell.
It should be noted that you can export a variable and later change the value. The variable's changed value will be available to child processes. Once export has been set for a variable, you must do export -n <var> to remove the property.
$ K=1
$ export K
$ K=2
$ bash -c 'echo ${K-unset}'
2
$ export -n K
$ bash -c 'echo ${K-unset}'
unset
export will make the variable available to all shells forked from the current shell.
As you might already know, UNIX allows processes to have a set of environment variables, which are key/value pairs, both key and value being strings.
The operating system is responsible for keeping these pairs for each process separately.
A program can access its environment variables through this UNIX API:
char *getenv(const char *name);
int setenv(const char *name, const char *value, int override);
int unsetenv(const char *name);
Processes also inherit environment variables from their parent processes. The operating system is responsible for creating a copy of all "envars" at the moment the child process is created.
Bash, among other shells, is capable of setting its environment variables on user request. This is what export exists for.
export is a Bash command to set an environment variable for Bash. All variables set with this command will be inherited by all processes that this Bash creates.
More on Environment in Bash
Another kind of variable in Bash is the internal variable. Since Bash is not just an interactive shell but in fact a script interpreter, like any other interpreter (e.g. Python) it is capable of keeping its own set of variables. It should be mentioned that Bash (unlike Python) supports only string variables.
Notation for defining Bash variables is name=value. These variables stay inside Bash and have nothing to do with environment variables kept by operating system.
More on Shell Parameters (including variables)
Also worth noting that, according to Bash reference manual:
The environment for any simple command or function may be augmented
temporarily by prefixing it with parameter assignments, as described
in Shell Parameters. These assignment statements affect only the
environment seen by that command.
To sum things up:
export is used to set an environment variable in the operating system. This variable will be available to all child processes created by the current Bash process from then on.
The Bash variable notation (name=value) is used to set local variables available only to the current Bash process.
The Bash variable notation prefixing another command creates an environment variable only for the scope of that command, as sketched below.
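A quick sketch of that last form, assuming GREETING isn't already set:
$ GREETING=hello bash -c 'echo $GREETING'
hello
$ echo ${GREETING-unset}
unset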
The accepted answer implies this, but I'd like to make explicit the connection to shell builtins:
As mentioned already, export will make a variable available to both the shell and children. If export is not used, the variable will only be available in the shell, and only shell builtins can access it.
That is,
tango=3
env | grep tango # prints nothing, since env is a child process
set | grep tango # prints tango=3 - "type set" shows `set` is a shell builtin
Two of the creators of UNIX, Brian Kernighan and Rob Pike, explain this in their book "The UNIX Programming Environment". Google for the title and you'll easily find a pdf version.
They address shell variables in section 3.6, and focus on the use of the export command at the end of that section:
When you want to make the value of a variable accessible in sub-shells, the shell's export command should be used. (You might think about why there is no way to export the value of a variable from a sub-shell to its parent).
Here's yet another example:
VARTEST="value of VARTEST"
#export VARTEST="value of VARTEST"
sudo env | grep -i vartest
sudo echo ${SUDO_USER} ${SUDO_UID}:${SUDO_GID} "${VARTEST}"
sudo bash -c 'echo ${SUDO_USER} ${SUDO_UID}:${SUDO_GID} "${VARTEST}"'
Only by using export VARTEST is the value of VARTEST available in sudo bash -c '...'!
For further examples see:
http://mywiki.wooledge.org/SubShell
bash-hackers.org/wiki/doku.php/scripting/processtree
Just to show the difference between an exported variable being in the environment (env) and a non-exported variable not being in the environment:
If I do this:
$ MYNAME=Fred
$ export OURNAME=Jim
then only $OURNAME appears in the env. The variable $MYNAME is not in the env.
$ env | grep NAME
OURNAME=Jim
but the variable $MYNAME does exist in the shell
$ echo $MYNAME
Fred
By default, variables created within a script are only available to the current shell; child processes will not have access to values that have been set or modified. Allowing child processes to see the values requires use of the export command.
As yet another corollary to the existing answers here, let's rephrase the problem statement.
The answer to "should I export" is identical to the answer to the question "Does your subsequent code run a command which implicitly accesses this variable?"
For a properly documented standard utility, the answer to this can be found in the ENVIRONMENT section of the utility's man page. So, for example, the git manual page mentions that GIT_PAGER controls which utility is used to browse multi-page output from git. Hence,
# XXX FIXME: buggy
branch="main"
GIT_PAGER="less"
git log -n 25 --oneline "$branch"
git log "$branch"
will not work correctly, because you did not export GIT_PAGER. (Of course, if your system already declared the variable as exported somewhere else, the bug is not reproducible.)
We are explicitly referring to the variable $branch, and the git program code doesn't refer to a system variable branch anywhere (as also suggested by the fact that its name is written in lower case; but many beginners erroneously use upper case for their private variables, too! See Correct Bash and shell script variable capitalization for a discussion), so there is no reason to export branch.
The correct code would look like
branch="main"
export GIT_PAGER="less"
git log -n 25 --oneline "$branch"
git log -p "$branch"
(or equivalently, you can explicitly prefix each invocation of git with the temporary assignment
branch="main"
GIT_PAGER="less" git log -n 25 --oneline "$branch"
GIT_PAGER="less" git log -p "$branch"
In case it's not obvious, the shell script syntax
var=value command arguments
temporarily sets var to value for the duration of the execution of
command arguments
and exports it to the command subprocess, and then afterwards, reverts it back to the previous value, which could be undefined, or defined with a different - possibly empty - value, and unexported if that's what it was before.)
For internal, ad-hoc or otherwise poorly documented tools, you simply have to know whether they silently inspect their environment. This is rarely important in practice, outside of a few specific use cases, such as passing a password or authentication token or other secret information to a process running in some sort of container or isolated environment.
If you really need to know, and have access to the source code, look for code which uses the getenv library function (or on Windows, with my condolences, variations like getenv_s, w_getenv, etc). For some scripting languages (such as Perl or Ruby), look for ENV. For Python, look for os.environ (but notice also that e.g. from os import environ as foo means that foo is now an alias for os.environ). In Node, look for process.env. For C and related languages, look for envp (but this is just a convention for what to call the optional third argument to main, after argc and argv; the language lets you call them anything you like). For shell scripts (as briefly mentioned above), perhaps look for variables with uppercase or occasionally mixed-case names, or usage of the utility env. Many informal scripts have undocumented but discoverable assignments usually near the beginning of the script; in particular, look for the ?= default assignment parameter expansion.
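As a rough first pass over a source tree, a grep along these lines will surface most such accesses (the patterns are illustrative, not exhaustive):
grep -rn 'getenv\|os\.environ\|process\.env\|ENV\[' .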
For a brief demo, here is a shell script which invokes a Python script which looks for $NICKNAME, and falls back to a default value if it's unset.
#!/bin/sh
NICKNAME="Handsome Guy"
demo () {
python3 <<\____
from os import environ as env
print("Hello, %s" % env.get("NICKNAME", "Anonymous Coward"))
____
}
demo
# prints "Hello, Anonymous Coward"
# Fix: forgot export
export NICKNAME
demo
# prints "Hello, Handsome Guy"
As another tangential remark, let me reiterate that you only ever need to export a variable once. Many beginners cargo-cult code like
# XXX FIXME: redundant exports
export PATH="$HOME/bin:$PATH"
export PATH="/opt/acme/bin:$PATH"
but typically, your operating system has already declared PATH as exported, so this is better written
PATH="$HOME/bin:$PATH"
PATH="/opt/acme/bin:$PATH"
or perhaps refactored to something like
for p in "$HOME/bin" "/opt/acme/bin"
do
case :$PATH: in
*:"$p":*) ;;
*) PATH="$p:$PATH";;
esac
done
# Avoid polluting the variable namespace of your interactive shell
unset p
which avoids adding duplicate entries to your PATH.
Although not explicitly mentioned in the discussion, it is NOT necessary to use export when spawning a subshell from inside bash since all the variables are copied into the child process.
I need to set a system environment variable from a Bash script that would be available outside of the current scope. So you would normally export environment variables like this:
export MY_VAR=/opt/my_var
But I need the environment variable to be available at a system level though. Is this possible?
Not really - once you're running in a subprocess you can't affect your parent.
There are two possibilities:
Source the script rather than run it (using source or .):
source {script}
Have the script output the export commands, and eval that:
eval `bash {script}`
Or:
eval "$(bash script.sh)"
This is the only way I know to do what you want:
In foo.sh, you have:
#!/bin/bash
echo MYVAR=abc123
And when you want to get the value of the variable, you have to do the following:
$ eval "$(foo.sh)" # assuming foo.sh is in your $PATH
$ echo $MYVAR #==> abc123
Depending on what you want to do, and how you want to do it, Douglas Leeder's suggestion about using source could be used, but it will source the whole file, functions and all. Using eval, only the stuff that gets echoed will be evaluated.
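To make the contrast concrete, a sketch (the helper name is made up): if foo.sh contains
#!/bin/bash
setup_helpers() { echo "helpers loaded"; }    # runs only if called; its definition never reaches stdout
echo MYVAR=abc123
then eval "$(foo.sh)" sets only MYVAR, while source foo.sh would also define setup_helpers in your shell.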
Set the variable in file /etc/profile (create the file if needed). That will essentially make the variable available to every Bash process.
When I am working under the root account and wish, for example, to open an X executable as a normal user who is running X, I need to set the DISPLAY environment variable with:
env -i DISPLAY=:0 prog_that_need_xwindows arg1 arg2
You may want to use source instead of running the executable directly:
# Executable : exec.sh
export var="test"
invar="inside variable"
source exec.sh
echo $var # test
echo $invar # inside variable
This will run the file, but in the same shell as the parent shell.
Possible downside in some rare cases: all variables, whether explicitly exported or not, end up set in the parent shell. If some variables are required to be unset, unset those explicitly. Similarly, handle imported variables.
# Executable : exec.sh
export var="test"
invar="inside variable"
# --- at the end of the script, unset what should not leak into the caller ---
unset invar