Bash export variables but only for current command - node.js

I want to load some environment variables from a file before running a node script, so that the script has access to them. However, I don't want the environment variables to be set in my shell after the script is done executing.
I can load the environment variables like this:
export $(cat app-env-vars.txt | xargs) && node my-script.js
However, after the command is run, all of the environment variables are now set in my shell.
I'm asking this question to answer it, since I figured out a solution but couldn't find an answer on SO.

If you wrap the command in parentheses, the exports will be scoped to the subshell those parentheses create and won't pollute your shell's environment:
(export $(cat app-env-vars.txt | xargs) && node my-script.js)
Echoing one of the environment variables from app-env-vars.txt after executing the command will show it as empty.
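For example, if app-env-vars.txt contained a line such as MY_SECRET=abc (a hypothetical variable name), you could verify that the parent shell is untouched:
(export $(cat app-env-vars.txt | xargs) && node my-script.js)
echo "${MY_SECRET-unset}"   # prints "unset": the export stayed inside the subshell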

This is what the env command is for:
env - run a program in a modified environment
You can try something like:
env $(cat app-env-vars.txt) node my-script.js
This (and any unquoted $(...) expansion) is subject to word splitting and glob expansion, both of which can easily cause problems with something like environment variables.
A safer approach is to use arrays, like so:
my_vars=(
    FOO=bar
    "BAZ=hello world"
    ...
)
env "${my_vars[#]}" node my-script.js
You can populate an array from a file if needed. Note you can also use -i with env to only pass the environment variables you set explicitly.
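A minimal sketch of both ideas, assuming bash 4+ (for mapfile) and that app-env-vars.txt holds one VAR=value assignment per line with no blank lines:
# read one assignment per line into an array
mapfile -t my_vars < app-env-vars.txt
# -i starts from an empty environment; pass PATH explicitly so node can still be found
env -i PATH="$PATH" "${my_vars[@]}" node my-script.js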
If you trust the .txt file's contents, and it contains valid Bash syntax, you should source it (and probably rename it with a .sh/.bash extension). Then you can use a subshell, as you posted in your answer, to prevent the sourced state from leaking into the parent shell:
( source app-env-vars.txt && node my-script.js )

If your file just contains variables like
FOO='x y z'
BAR='bar'
...
you can try
eval $(< app-env-vars.txt) node my-script.js

Related

--environment argument given to ember is ignored if export EMBER_ENV is run

When I run this in a bash file, the argument environment is not received by the ember app:
#!/bin/bash
# create nginx.conf
echo "Create nginx.conf from nginx.conf.erb"
export `cat ./.env`
erb ./config/nginx.conf.erb > ./config/nginx.conf
./node_modules/ember-cli/bin/ember serve --environment=acceptance
I think it has something to do with the export function. When I put the ember serve command before the export it works.
The .env file looks like this
EMBER_ENV=development
Running bash 3.2 on Mac OS 10.10 (Yosemite)
Edit: I changed the question because it didn't have all the relevant code
In this case, you're giving ember two conflicting arguments: You're passing EMBER_ENV=development through the environment, and --environment=acceptance through the command line. The former tells it to use the environment named development, and the latter tells it to use the environment named acceptance -- but it can't do both at the same time.
Which of those two conflicting settings ember will honor is something you'd need to check its documentation for. Of course, the better approach is simply to fix the conflict.
I'd suggest doing the following:
./node_modules/ember-cli/bin/ember serve "--environment=${EMBER_ENV:-acceptance}"
...if you want to honor the EMBER_ENV in your file rather than the one on the command line (but fall back to acceptance when the file doesn't specify an EMBER_ENV). If you run the script with bash -x, you'll explicitly see it passing an --environment= value appropriate to what's given in the .env file.
If you always want to use the environment acceptance, on the other hand, override or remove the environment after loading it from your file:
export `cat ./.env`
# if the file contained `EMBER_ENV`, unset it so our command-line argument is honored
unset EMBER_ENV
./node_modules/ember-cli/bin/ember serve --environment=acceptance
All that said --
export `cat ./.env`
is actually a quite buggy way to do things (though it won't break if the only thing you're setting is EMBER_ENV, and the only value it has is a single word in all ASCII with no whitespace or special characters). If you trust your .env file to be written by a non-malicious user in valid shell syntax, you'd have fewer bugs with:
set -a # automatically export all variables
source .env # run .env as a shell script within the current interpreter
If you don't trust your .env to be written as a non-malicious script in valid shell syntax, then perhaps something more like:
while IFS='=' read -r k v; do
  [[ $k ]] || continue                 # skip empty lines
  printf -v "$k" %s "$v" || continue   # set the shell variable named in $k
  export "$k"                          # export it to the environment
done < .env

Export variables defined in another file

I have a script that contains a couple of variables that need to be set as environment variables.
The list of variables changes constantly and modifying it on my end is not an option. Any idea how I would go about doing it?
sample file foo.sh
FOO="FOOFOO"
BAR="BARBAR"
There is a difference between a variable and an environment variable. If you execute . foo.sh and foo.sh contains the line FOO=value, then the variable FOO will be assigned in the current process. It is not an environment variable. To become an environment variable (and thus be available to sub-shells), it must be exported. However, shells provide an option that promotes every variable assignment to an environment variable, so if you simply do:
set -a
. foo.sh
set +a
then all variable assignments in foo.sh will be made environment variables in the current process. Note that this is not strictly true: in bash, exporting a variable makes it an environment variable in the current shell, but in other shells (dash, for example) exporting the variable does not make it an environment variable in the current shell. (It does cause it to be set in the environment of subshells, however.) However, in the context of the shell, it does not really matter whether a variable is an environment variable of the current process. If it is exported (and therefore set in the environment of any sub-processes), a variable that is not in the environment is functionally equivalent to an environment variable.
Are you looking for the . command (also spelled source in bash)?
source foo.sh
Since you need to export them to the environment as well, you may need to do a little extra work:
while read -r line; do
  export "$line"
done < foo.sh
If you are looking to export only variables with a common prefix, e.g.:
BAZ_FOO="FOOFOO"
BAZ_BAR="BARBAR"
You could simply do this:
. foo.sh
variable_list=( $(set -o posix; set | grep "^BAZ_" | cut -d= -f1) )
for variable in "${variable_list[@]}"
do
  export "$variable"
done
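As a possibly simpler alternative (a sketch, assuming bash and the same BAZ_ prefix), bash's ${!prefix@} expansion gives the names of all variables beginning with a prefix, so the loop can collapse to:
. foo.sh
export "${!BAZ_@}"   # expands to the names of all variables beginning with BAZ_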

Shell variable expansion - indirection while calling a utility with env

I was trying to figure out env (i.e. calling a utility with a new environment).
Just as an example, my environment variable KDEDIRS is /usr in my current environment, and let's say I type:
env -i KDEDIRS=/home/newkdedir env
This outputs KDEDIRS=/home/newkdedir as expected (i.e. calling the second env with the new environment).
Now I want to call, say, the echo utility the same way:
env -i KDEDIRS=/home/newkdedir echo ${KDEDIRS}
This is obviously not going to work because the shell expands KDEDIRS before it gets to echo, so the output is /usr (i.e. the value in the current environment).
Then I try indirection and type
env -i KDEDIRS=/home/newkdedir echo ${!KDEDIRS}
This outputs nothing.
I might be a little bit confused about this, but how can I make the shell expand that KDEDIRS variable according to the newly created environment for echo?
Expansion happens as part of constructing the env command line, which also sets the variable.
No expansion is done during the execution of that command. So you must run another shell to perform the expansion as part of that command, e.g.
env -i KDEDIRS=/home/newkdedir /bin/sh -c 'echo $KDEDIRS'
KDEDIRS=/home/newkdedir eval 'echo $KDEDIRS'
Indirection has nothing to do with it.
Usually, you use env to give an environment to a command that is spawned off (e.g. like you did in your first snippet). Printing the variable back (and with a shell builtin at that) might be possible through some perverse escaping and subshell tricks, but it's not a very common use case (at least not in my experience).

Defining a variable with or without export

What is export for?
What is the difference between:
export name=value
and
name=value
export makes the variable available to sub-processes.
That is,
export name=value
means that the variable name is available to any process you run from that shell process. If you want a process to make use of this variable, use export, and run the process from that shell.
name=value
means the variable scope is restricted to the shell, and is not available to any other process. You would use this for (say) loop variables, temporary variables etc.
It's important to note that exporting a variable doesn't make it available to parent processes. That is, specifying and exporting a variable in a spawned process doesn't make it available in the process that launched it.
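A quick sketch of that last point (CHILD_ONLY is a hypothetical variable name):
$ bash -c 'export CHILD_ONLY=1'   # set and export inside a child process
$ echo "${CHILD_ONLY-unset}"      # the parent shell never sees it
unset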
To illustrate what the other answers are saying:
$ foo="Hello, World"
$ echo $foo
Hello, World
$ bar="Goodbye"
$ export foo
$ bash
bash-3.2$ echo $foo
Hello, World
bash-3.2$ echo $bar
bash-3.2$
It has been said that it's not necessary to export in bash when spawning subshells, while others said the exact opposite. It is important to note the difference between subshells (those that are created by (), ``, $() or pipelines) and subprocesses (processes that are invoked by name, for example a literal bash appearing in your script).
Subshells will have access to all variables from the parent, regardless of their exported state.
Subprocesses will only see the exported variables.
What is common in these two constructs is that neither can pass variables back to the parent shell.
$ noexport=noexport; export export=export; (echo subshell: $noexport $export; subshell=subshell); bash -c 'echo subprocess: $noexport $export; subprocess=subprocess'; echo parent: $subshell $subprocess
subshell: noexport export
subprocess: export
parent:
There is one more source of confusion: some think that 'forked' subprocesses are the ones that don't see non-exported variables. Usually fork()s are immediately followed by exec()s, and that's why it would seem that the fork() is the thing to look for, while in fact it's the exec(). You can run commands without fork()ing first with the exec command, and processes started by this method will also have no access to unexported variables:
$ noexport=noexport; export export=export; exec bash -c 'echo execd process: $noexport $export; execd=execd'; echo parent: $execd
execd process: export
Note that we don't see the parent: line this time, because exec replaced the parent shell with the new process, so there is nothing left to run that final echo.
This answer is wrong but retained for historical purposes. See 2nd edit below.
Others have answered that export makes the variable available to subshells, and that is correct but merely a side effect. When you export a variable, it puts that variable in the environment of the current shell (ie the shell calls putenv(3) or setenv(3)).
The environment of a process is inherited across exec, making the variable visible in subshells.
Edit (with 5 year's perspective):
This is a silly answer. The purpose of 'export' is to make variables "be in the environment of subsequently executed commands", whether those commands be subshells or subprocesses. A naive implementation would be to simply put the variable in the environment of the shell, but this would make it impossible to implement export -p.
2nd Edit (with another 5 years in passing).
This answer is just bizarre. Perhaps I had some reason at one point to claim that bash puts the exported variable into its own environment, but those reasons were not given here and are now lost to history. See Exporting a function local variable to the environment.
export NAME=value for settings and variables that have meaning to a subprocess.
NAME=value for temporary or loop variables private to the current shell process.
In more detail, export marks the variable name to be copied into the environment of subprocesses (and their subprocesses) upon creation. No name or value is ever copied back from the subprocess.
A common error is to place a space around the equal sign:
$ export FOO = "bar"
bash: export: `=': not a valid identifier
Only the exported variable (B) is seen by the subprocess:
$ A="Alice"; export B="Bob"; echo "echo A is \$A. B is \$B" | bash
A is . B is Bob
Changes in the subprocess do not change the main shell:
$ export B="Bob"; echo 'B="Banana"' | bash; echo $B
Bob
Variables marked for export have values copied when the subprocess is created:
$ export B="Bob"; echo '(sleep 30; echo "Subprocess 1 has B=$B")' | bash &
[1] 3306
$ B="Banana"; echo '(sleep 30; echo "Subprocess 2 has B=$B")' | bash
Subprocess 1 has B=Bob
Subprocess 2 has B=Banana
[1]+ Done echo '(sleep 30; echo "Subprocess 1 has B=$B")' | bash
Only exported variables become part of the environment (man environ):
$ ALICE="Alice"; export BOB="Bob"; env | grep "ALICE\|BOB"
BOB=Bob
So, now it should be as clear as is the summer's sun! Thanks to Brian Agnew, alexp, and William Pursell.
It should be noted that you can export a variable and later change the value. The variable's changed value will be available to child processes. Once export has been set for a variable you must do export -n <var> to remove the property.
$ K=1
$ export K
$ K=2
$ bash -c 'echo ${K-unset}'
2
$ export -n K
$ bash -c 'echo ${K-unset}'
unset
export will make the variable available to all shells forked from the current shell.
As you might already know, UNIX allows processes to have a set of environment variables, which are key/value pairs, both key and value being strings.
The operating system is responsible for keeping these pairs for each process separately.
A program can access its environment variables through this UNIX API:
char *getenv(const char *name);
int setenv(const char *name, const char *value, int override);
int unsetenv(const char *name);
Processes also inherit environment variables from their parent processes. The operating system is responsible for creating a copy of all "envars" at the moment the child process is created.
Bash, among other shells, is capable of setting its environment variables on user request. This is what export exists for.
export is a Bash command to set an environment variable for Bash. All variables set with this command are inherited by all processes that this Bash creates.
More on Environment in Bash
Another kind of variable in Bash is the internal variable. Since Bash is not just an interactive shell but in fact a script interpreter, like any other interpreter (e.g. Python) it is capable of keeping its own set of variables. It should be mentioned that Bash (unlike Python) supports only string variables.
The notation for defining Bash variables is name=value. These variables stay inside Bash and have nothing to do with the environment variables kept by the operating system.
More on Shell Parameters (including variables)
Also worth noting that, according to Bash reference manual:
The environment for any simple command or function may be augmented temporarily by prefixing it with parameter assignments, as described in Shell Parameters. These assignment statements affect only the environment seen by that command.
To sum things up:
export is used to set an environment variable in the operating system. This variable will be available to all child processes created by the current Bash process from then on.
Bash variable notation (name=value) is used to set local variables available only to the current Bash process.
Prefixing another command with a variable assignment creates an environment variable only for the scope of that command.
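A small sketch of that last point (FOO is a hypothetical variable name):
$ FOO=bar bash -c 'echo "child sees: $FOO"'
child sees: bar
$ echo "parent sees: ${FOO-unset}"
parent sees: unset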
The accepted answer implies this, but I'd like to make explicit the connection to shell builtins:
As mentioned already, export will make a variable available to both the shell and children. If export is not used, the variable will only be available in the shell, and only shell builtins can access it.
That is,
tango=3
env | grep tango # prints nothing, since env is a child process
set | grep tango # prints tango=3 - "type set" shows `set` is a shell builtin
Two of the creators of UNIX, Brian Kernighan and Rob Pike, explain this in their book "The UNIX Programming Environment". Google for the title and you'll easily find a pdf version.
They address shell variables in section 3.6, and focus on the use of the export command at the end of that section:
When you want to make the value of a variable accessible in sub-shells, the shell's export command should be used. (You might think about why there is no way to export the value of a variable from a sub-shell to its parent).
Here's yet another example:
VARTEST="value of VARTEST"
#export VARTEST="value of VARTEST"
sudo env | grep -i vartest
sudo echo ${SUDO_USER} ${SUDO_UID}:${SUDO_GID} "${VARTEST}"
sudo bash -c 'echo ${SUDO_USER} ${SUDO_UID}:${SUDO_GID} "${VARTEST}"'
Only by using export VARTEST is the value of VARTEST available in sudo bash -c '...'!
For further examples see:
http://mywiki.wooledge.org/SubShell
bash-hackers.org/wiki/doku.php/scripting/processtree
Just to show the difference between an exported variable being in the environment (env) and a non-exported variable not being in the environment:
If I do this:
$ MYNAME=Fred
$ export OURNAME=Jim
then only $OURNAME appears in the env. The variable $MYNAME is not in the env.
$ env | grep NAME
OURNAME=Jim
but the variable $MYNAME does exist in the shell
$ echo $MYNAME
Fred
By default, variables created within a script are only available to the current shell; child processes will not have access to values that have been set or modified. Allowing child processes to see the values requires use of the export command.
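For example (a sketch; child.sh and myvar are hypothetical names):
$ myvar="local only"    # not exported
$ cat child.sh
#!/bin/bash
echo "child sees: ${myvar-nothing}"
$ bash child.sh
child sees: nothing
$ export myvar
$ bash child.sh
child sees: local only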
As yet another corollary to the existing answers here, let's rephrase the problem statement.
The answer to "should I export" is identical to the answer to the question "Does your subsequent code run a command which implicitly accesses this variable?"
For a properly documented standard utility, the answer to this can be found in the ENVIRONMENT section of the utility's man page. So, for example, the git manual page mentions that GIT_PAGER controls which utility is used to browse multi-page output from git. Hence,
# XXX FIXME: buggy
branch="main"
GIT_PAGER="less"
git log -n 25 --oneline "$branch"
git log "$branch"
will not work correctly, because you did not export GIT_PAGER. (Of course, if your system already declared the variable as exported somewhere else, the bug is not reproducible.)
We are explicitly referring to the variable $branch, and the git program code doesn't refer to a system variable branch anywhere (as also suggested by the fact that its name is written in lower case; but many beginners erroneously use upper case for their private variables, too! See Correct Bash and shell script variable capitalization for a discussion), so there is no reason to export branch.
The correct code would look like
branch="main"
export GIT_PAGER="less"
git log -n 25 --oneline "$branch"
git log -p "$branch"
(or equivalently, you can explicitly prefix each invocation of git with the temporary assignment
branch="main"
GIT_PAGER="less" git log -n 25 --oneline "$branch"
GIT_PAGER="less" git log -p "$branch"
In case it's not obvious, the shell script syntax
var=value command arguments
temporarily sets var to value for the duration of the execution of
command arguments
and exports it to the command subprocess, and then afterwards, reverts it back to the previous value, which could be undefined, or defined with a different - possibly empty - value, and unexported if that's what it was before.)
For internal, ad-hoc or otherwise poorly documented tools, you simply have to know whether they silently inspect their environment. This is rarely important in practice, outside of a few specific use cases, such as passing a password or authentication token or other secret information to a process running in some sort of container or isolated environment.
If you really need to know, and have access to the source code, look for code which uses the getenv library function (or on Windows, with my condolences, variations like getenv_s, w_getenv, etc). For some scripting languages (such as Perl or Ruby), look for ENV. For Python, look for os.environ (but notice also that e.g. from os import environ as foo means that foo is now an alias for os.environ). In Node, look for process.env. For C and related languages, look for envp (but this is just a convention for what to call the optional third argument to main, after argc and argv; the language lets you call them anything you like). For shell scripts (as briefly mentioned above), perhaps look for variables with uppercase or occasionally mixed-case names, or usage of the utility env. Many informal scripts have undocumented but discoverable assignments, usually near the beginning of the script; in particular, look for the ?= default assignment parameter expansion.
For a brief demo, here is a shell script which invokes a Python script which looks for $NICKNAME, and falls back to a default value if it's unset.
#!/bin/sh
NICKNAME="Handsome Guy"
demo () {
python3 <<\____
from os import environ as env
print("Hello, %s" % env.get("NICKNAME", "Anonymous Coward"))
____
}
demo
# prints "Hello, Anonymous Coward"
# Fix: forgot export
export NICKNAME
demo
# prints "Hello, Handsome Guy"
As another tangential remark, let me reiterate that you only ever need to export a variable once. Many beginners cargo-cult code like
# XXX FIXME: redundant exports
export PATH="$HOME/bin:$PATH"
export PATH="/opt/acme/bin:$PATH"
but typically, your operating system has already declared PATH as exported, so this is better written
PATH="$HOME/bin:$PATH"
PATH="/opt/acme/bin:$PATH"
or perhaps refactored to something like
for p in "$HOME/bin" "/opt/acme/bin"
do
  case :$PATH: in
    *:"$p":*) ;;
    *) PATH="$p:$PATH";;
  esac
done
# Avoid polluting the variable namespace of your interactive shell
unset p
which avoids adding duplicate entries to your PATH.
Although not explicitly mentioned in the discussion, it is NOT necessary to use export when spawning a subshell from inside bash since all the variables are copied into the child process.
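A quick illustration of that (notexported is a hypothetical variable name):
$ notexported="still visible"
$ ( echo "$notexported" )   # a ( ) subshell sees the unexported variable
still visible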

How to export a variable in Bash

I need to set a system environment variable from a Bash script that would be available outside of the current scope. So you would normally export environment variables like this:
export MY_VAR=/opt/my_var
But I need the environment variable to be available at a system level though. Is this possible?
Not really - once you're running in a subprocess you can't affect your parent.
There are two possibilities:
Source the script rather than run it (see source .):
source {script}
Have the script output the export commands, and eval that:
eval `bash {script}`
Or:
eval "$(bash script.sh)"
This is the only way I know to do what you want:
In foo.sh, you have:
#!/bin/bash
echo MYVAR=abc123
And when you want to get the value of the variable, you have to do the following:
$ eval "$(foo.sh)" # assuming foo.sh is in your $PATH
$ echo $MYVAR #==> abc123
Depending on what you want to do, and how you want to do it, Douglas Leeder's suggestion about using source could be used, but it will source the whole file, functions and all. Using eval, only the stuff that gets echoed will be evaluated.
Set the variable in the file /etc/profile (create the file if needed). That will essentially make the variable available to every Bash login shell and the processes started from it.
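For example (a sketch, reusing the MY_VAR name from the question; editing /etc/profile requires root):
# appended to /etc/profile
export MY_VAR=/opt/my_var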
When I am working under the root account and wish, for example, to open an X executable under a normal user's running X session,
I need to set the DISPLAY environment variable with...
env -i DISPLAY=:0 prog_that_need_xwindows arg1 arg2
You may want to use source instead of running the executable directly:
# Executable : exec.sh
export var="test"
invar="inside variable"
source exec.sh
echo $var # test
echo $invar # inside variable
This will run the file but in same shell as the parent shell.
Possible downside in some rare cases: all variables set in the file end up in the calling shell, regardless of whether they are explicitly exported. If some variables are required to be unset, unset those explicitly. Similarly, handle imported variables.
# Executable : exec.sh
export var="test"
invar="inside variable"
# --- #
unset invar
