Inlining the variable works (SOMETHING=hello node -e "console.log(process.env.SOMETHING)"), but I want node to read environment variables from the spawning shell.
The following code will print hello (note that echo can read the environment)
SOMETHING=hello
echo $SOMETHING
However, the following code prints undefined:
SOMETHING=hello
node -e "console.log(process.env.SOMETHING)"
Why can't node read the shell environment? Can I make it read that somehow?
Run like this:
export SOMETHING=hello
node -e "console.log(process.env.SOMETHING)"
OR
SOMETHING=test node -e "console.log(process.env.SOMETHING)"
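The underlying reason: a plain assignment creates a shell variable that child processes don't inherit; export (or the inline prefix) puts it into the environment node receives. The same behavior can be reproduced with any child process, not just node; a quick sketch using sh -c in place of node:

```shell
SOMETHING=hello                                 # plain assignment: shell variable only
first=$(sh -c 'echo "${SOMETHING:-unset}"')     # a child process can't see it
export SOMETHING                                # mark it for inheritance
second=$(sh -c 'echo "$SOMETHING"')             # now the child sees it
echo "$first / $second"                         # → unset / hello
```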
Is it possible to set a docker ENV variable to the result of a command?
Like:
ENV MY_VAR whoami
I want MY_VAR to get the value "root", or whatever whoami returns.
As an addition to DarkSideF's answer:
You should be aware that each line/command in a Dockerfile is run in a separate container.
You can do something like this:
RUN export bleah=$(hostname -f);echo $bleah;
Both commands here run in a single container, so the exported variable is visible to the echo.
At this time, a command result can be used with RUN export, but it cannot be assigned to an ENV variable.
Known issue: https://github.com/docker/docker/issues/29110
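The same scoping can be sketched outside Docker: each RUN line behaves like its own short-lived shell, so an export in one does not survive into the next (the variable name bleah is kept from the example above):

```shell
sh -c 'export bleah=value'                         # like one RUN line: the shell exits, the export dies with it
next=$(sh -c 'echo "${bleah:-unset}"')             # like the next RUN line: a fresh shell
same=$(sh -c 'export bleah=value; echo "$bleah"')  # both in a single RUN line: works
echo "$next / $same"                               # → unset / value
```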
I had the same issue and found a way to set an environment variable to the result of a command by using RUN in the Dockerfile.
For example, I need to set SECRET_KEY_BASE for a Rails app just once, instead of generating a new value on every run as this would:
docker run -e SECRET_KEY_BASE="$(openssl rand -hex 64)"
Instead, I write this line in the Dockerfile:
RUN bash -l -c 'echo export SECRET_KEY_BASE="$(openssl rand -hex 64)" >> /etc/bash.bashrc'
and my env variable is available to root, even after a bash login.
Alternatively:
RUN /bin/bash -l -c 'echo export SECRET_KEY_BASE="$(openssl rand -hex 64)" > /etc/profile.d/docker_init.sh'
then the variable is available in CMD and ENTRYPOINT commands.
Docker caches this as a layer and re-runs it only if you change the lines before it.
You can also try other ways to set an environment variable.
This answer is a response to @DarkSideF.
The method he proposes is the following, in the Dockerfile:
RUN bash -l -c 'echo export SECRET_KEY_BASE="$(openssl rand -hex 64)" >> /etc/bash.bashrc'
(adding an export to /etc/bash.bashrc)
This works, but the environment variable will only be available to /bin/bash processes. If you run your application directly, for example a Node.js application, /etc/bash.bashrc is ignored entirely and your application won't have a single clue what SECRET_KEY_BASE is when it accesses process.env.SECRET_KEY_BASE.
That is why the ENV keyword is what everyone tries to use with a dynamic command: every time you run your container or use an exec command, Docker reads the ENV values and injects them into the process being run (similar to -e).
One solution is to use a wrapper (credit to @duglin in this GitHub issue).
Have a wrapper file (e.g. envwrapper) in your project root containing:
#!/bin/bash
export SECRET_KEY_BASE="$(openssl rand -hex 64)"
export ANOTHER_ENV="hello world"
exec "$@"
and then in your Dockerfile :
...
COPY . .
RUN mv envwrapper /bin/.
RUN chmod 755 /bin/envwrapper
CMD envwrapper myapp
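A minimal, runnable sketch of the same wrapper idea (the /tmp path and the GREETING variable are illustrative; "$@" is used rather than $* so quoted arguments survive):

```shell
# build a tiny wrapper that exports a variable, then hands off to its arguments
cat > /tmp/envwrapper <<'EOF'
#!/bin/bash
export GREETING="hello from wrapper"
exec "$@"
EOF
chmod 755 /tmp/envwrapper

# the wrapped command sees the exported variable
out=$(/tmp/envwrapper sh -c 'echo "$GREETING"')
echo "$out"   # → hello from wrapper
```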
If you run commands using sh, as seems to be the default in Docker, you can do something like this:
RUN echo "export VAR=$(command)" >> /envfile
RUN . /envfile; echo $VAR
This way, you build an env file by redirecting output to the env file of your choice. It's more explicit than having to define profiles and so on.
Then as the file will be available to other layers, it will be possible to source it and use the variables being exported. The way you create the env file isn't important.
Then when you're done you could remove the file to make it unavailable to the running container.
The . is how the env file is loaded.
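A self-contained sketch of that flow (the /tmp/envfile name and the computed value are illustrative):

```shell
echo "export VAR=$(echo computed-at-build)" > /tmp/envfile   # capture a command result into the env file
. /tmp/envfile                                               # a later step sources it
echo "$VAR"                                                  # → computed-at-build
```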
As an addition to @DarkSideF's answer, if you want to reuse the result of a previous command in your Dockerfile during the build process, you can use the following workaround:
run a command, store the result in a file
use command substitution to get the previous result from that file into another command
For example:
RUN echo "bla" > ./result
RUN echo $(cat ./result)
For something cleaner, you can also use the following gist, which provides a small CLI called envstore.py:
RUN envstore.py set MY_VAR bla
RUN echo $(envstore.py get MY_VAR)
Or you can use python-dotenv library which has a similar CLI.
Not sure if this is what you were looking for, but this pattern works for injecting ENV vars or ARGs into your .Dockerfile build.
In your my_build.sh:
echo getting version of osbase image to build from
OSBASE=$(grep "osbase_version" .version | sed 's/^.*: //')
echo building docker
docker build \
  -f PATH_TO_MY.Dockerfile \
  --build-arg ARTIFACT_TAG=$OSBASE \
  -t my_artifact_home_url/bucketname:$TAG .
For getting an ARG in your .Dockerfile, the snippet might look like this (note that an ARG used in a FROM line must be declared before the first FROM):
ARG ARTIFACT_TAG
FROM my_artifact_home_url/bucketname:${ARTIFACT_TAG}
Alternatively, for getting an ENV in your .Dockerfile, the snippet might look like this:
FROM someimage:latest
ARG ARTIFACT_TAG
ENV ARTIFACT_TAG=${ARTIFACT_TAG}
The idea is that you run the shell script, which builds from the .Dockerfile with the args passed in as options on the build.
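The version-extraction step on its own can be sanity-checked like this (the contents of the .version file are a guess at the expected format):

```shell
# a sample .version file in the assumed "key: value" format
printf 'osbase_version: 1.2.3\n' > /tmp/.version

# same grep | sed pipeline as in my_build.sh
OSBASE=$(grep "osbase_version" /tmp/.version | sed 's/^.*: //')
echo "$OSBASE"   # → 1.2.3
```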
I'm trying to export environment variables to the calling shell/terminal/cmd in a platform agnostic way using node.
Is this possible?
Of course I can set variables using
FOO="bar"
echo $FOO
The following doesn't work but essentially I am hoping to do the same thing from node:
node -e "process.env.FOO = 'bar'"
echo $FOO
Or perhaps using spawn or similar?
I want to load some environment variables from a file before running a node script, so that the script has access to them. However, I don't want the environment variables to be set in my shell after the script is done executing.
I can load the environment variables like this:
export $(cat app-env-vars.txt | xargs) && node my-script.js
However, after the command is run, all of the environment variables are now set in my shell.
I'm asking this question to answer it, since I figured out a solution but couldn't find an answer on SO.
If you wrap the command in parentheses, the exports will be scoped to within those parens and won't pollute the global shell namespace:
(export $(cat app-env-vars.txt | xargs) && node my-script.js)
Echoing one of the environment variables from the app-env-vars.txt file after executing the command will show it as empty.
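The scoping effect of the parentheses is easy to verify with a throwaway variable:

```shell
# inside the subshell the export is visible...
inside=$( (export DEMO_VAR=scoped; echo "$DEMO_VAR") )
# ...but it never reaches the parent shell
echo "inside: $inside / outside: ${DEMO_VAR:-unset}"   # → inside: scoped / outside: unset
```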
This is what the env command is for:
env - run a program in a modified environment
You can try something like:
env $(cat app-env-vars.txt) node my-script.js
This (and any unquoted $(...) expansion) is subject to word splitting and glob expansion, both of which can easily cause problems with something like environment variables.
A safer approach is to use arrays, like so:
my_vars=(
FOO=bar
"BAZ=hello world"
...
)
env "${my_vars[@]}" node my-script.js
You can populate an array from a file if needed. Note you can also use -i with env to only pass the environment variables you set explicitly.
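A quick sketch of the array form (requires bash; FOO and BAZ are placeholder variables):

```shell
my_vars=(
  FOO=bar
  "BAZ=hello world"   # the space survives because the array element is quoted
)
out=$(env "${my_vars[@]}" sh -c 'echo "$FOO/$BAZ"')
echo "$out"   # → bar/hello world
```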
If you trust the .txt file's contents, and it contains valid Bash syntax, you should source it (and probably rename it to a .sh/.bash extension). Then you can use a subshell, as you posted in your answer, to prevent the sourced state from leaking into the parent shell:
( source app-env-vars.txt && node my-script.js )
If your file just contains variables like
FOO='x y z'
BAR='bar'
...
you can try
eval $(< app-env-vars.txt) node my-script.js
I have a command I've installed via npm (db-migrate). I want to run it from the command line as part of automating database migration. The config file for the script can reference environment variables. I'm already setting database credentials as environment variables in another file. So rather than set them twice, I told the migration config to use the environment variables. The problem is, how do I get the environment variables from the file before running the migration script? Also, how can I run the migration script directly from the npm bin?
I found a nice general solution to this problem, so I'm posting the question and the answer for at least the benefit of my future self.
This can be done using a few tools:
Read the environment variables from the file and set them before running the script. To review, it's simple to set an environment variable before running a command:
PORT=3000 node index.js
But we want to read the variables from a file. This can be done using export and xargs:
export $(cat app.env | xargs)
We want to run the script directly from npm's bin. The path to the bin folder can be obtained using npm bin. So we just need to add that to the path before running the command:
PATH=$(npm bin):$PATH
Now put them together:
export $(cat app.env | xargs) && PATH=$(npm bin):$PATH db-migrate up
This reads the environment variables, sets them, adds the npm bin to the path, and then runs the migration script.
By the way, the content of app.env would look something like this:
PORT=3000
DB_NAME=dev
DB_USER=dev_user
DB_PASS=dev_pass
UPDATE:
There are a few caveats with this method. The first is that it will pollute your current shell with the environment variables. In other words, after you run the export...xargs bit, you can run something like echo $DB_PASS and your password will show up. To prevent this, wrap the command in parens:
(export $(cat app.env | xargs) && PATH=$(npm bin):$PATH db-migrate up)
The parens cause the command to be executed in a subshell environment. The environment variables will not bubble up to your current shell.
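The two pieces fit together in a runnable sketch (using a temp file in place of app.env, and a trivial echo in place of db-migrate):

```shell
printf 'PORT=3000\nDB_NAME=dev\n' > /tmp/app.env

# export the file's variables inside a subshell and run the command there
inside=$( (export $(cat /tmp/app.env | xargs) && echo "$PORT-$DB_NAME") )

# after the subshell exits, the current shell is untouched
echo "inside: $inside / after: ${PORT:-unset}"   # → inside: 3000-dev / after: unset
```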
The second caveat is that this will only work if your environment variables don't have spaces in them. If you want spaces, I found an OK solution based on this gist comment. Create a file named something like load-env.sh:
# loads/exports env variables from a file
# taken from: https://gist.github.com/judy2k/7656bfe3b322d669ef75364a46327836#gistcomment-3239799
function loadEnv() {
    local envFile=$1
    local isComment='^[[:space:]]*#'
    local isBlank='^[[:space:]]*$'
    while IFS= read -r line; do
        [[ $line =~ $isComment ]] && continue
        [[ $line =~ $isBlank ]] && continue
        key=$(echo "$line" | cut -d '=' -f 1)
        value=$(echo "$line" | cut -d '=' -f 2-)
        eval "export ${key}=\"$(echo \${value})\""
    done < "$envFile"
}
Then run your command like this:
(source scripts/load-env.sh && loadEnv app.env && PATH=$(npm bin):$PATH db-migrate up)
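To see the space handling work end to end, the function can be exercised directly (bash required; the file name and GREETING variable are illustrative, and the function body repeats the load-env.sh logic above so this block is self-contained):

```shell
# an env file with a comment, a blank line, and a value containing a space
printf '# comment line\n\nGREETING=hello world\n' > /tmp/spaced.env

loadEnv() {   # same logic as load-env.sh
  local envFile=$1
  local isComment='^[[:space:]]*#'
  local isBlank='^[[:space:]]*$'
  while IFS= read -r line; do
    [[ $line =~ $isComment ]] && continue
    [[ $line =~ $isBlank ]] && continue
    key=$(echo "$line" | cut -d '=' -f 1)
    value=$(echo "$line" | cut -d '=' -f 2-)
    eval "export ${key}=\"$(echo \${value})\""
  done < "$envFile"
}

loadEnv /tmp/spaced.env
echo "$GREETING"   # → hello world
```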
Jolly good evening. I've used environment variables before in Node.js applications, but I feel I haven't understood the underlying concept yet.
It's (in this case) not Node that gives me the ability to set environment variables, right? Is it Linux? Does this concept persist through the whole OS? Do environment variables have a scope? Can I use them everywhere? Is the pattern always the same? Are they written into the running application, or do some applications (like Node) have the ability to actively access them from the inside?
Would love to grasp the basic concept.
Environment variables are a feature provided by the operating system (e.g. Linux).
You can set them in the terminal or in shell scripts using:
name=value
Or in Node using:
process.env.name = value;
You can access them in shell using:
echo $name
Or in Node using:
console.log(process.env.name);
The scope of an environment variable is the process in which it is defined and the sub-processes that that process executes.
For example, write a Node program called envtest.js:
console.log('Node program:', process.env.test);
process.env.test = 'new value';
console.log('Node program:', process.env.test);
And a shell script called envtest1.sh:
test=value
echo "Shell script: $test"
node envtest.js
echo "Shell script: $test"
Running sh envtest1.sh will print:
Shell script: value
Node program: undefined
Node program: new value
Shell script: value
As you can see, the Node program doesn't get the value because it wasn't exported. It can set the value and use the new value, but the change is not visible in the shell script.
Now write a different shell script, envtest2.sh:
test=value
export test
echo "Shell script: $test"
node envtest.js
echo "Shell script: $test"
This time running sh envtest2.sh will print:
Shell script: value
Node program: value
Node program: new value
Shell script: value
It means that the Node program got the value because it was exported this time. It can still change it and use the new value, but it works on its own copy; the variable is not changed in the original shell script that called the Node program.
Instead of:
test=value
export test
You can write:
export test=value
as a shorthand.
For a more complicated example, write envtest3.sh:
export test=value
echo "Shell script: $test"
node envtest.js
echo "Shell script: $test"
test=value2 node envtest.js
echo "Shell script: $test"
This time it will print:
Shell script: value
Node program: value
Node program: new value
Shell script: value
Node program: value2
Node program: new value
Shell script: value
This shows that running test=value2 node envtest.js sets the value of the test variable to value2, but only for that invocation of the Node program; the value in the rest of the shell script is still value, as it was before.
Those are the three kinds of scope of environment variables. Normally, a variable in a shell script is not exported and the programs that you run can't see it. When it is exported, the programs that you run can see it and can modify it, but they work on their own copy and it isn't changed in the shell script.
When you run a name=value command, the environment variable is set just for that command, and the old value remains in the rest of the script.
Those are the basics of environment variables and how you can use them in Node.
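All three scopes can be condensed into one runnable sketch (variable names are illustrative):

```shell
unexported=1                              # shell-local: children can't see it
export exported=2                         # exported: children inherit a copy
a=$(sh -c 'echo "${unexported:-unset}"')  # child doesn't see the unexported one
b=$(sh -c 'echo "$exported"')             # child sees the exported one
peronce=3 sh -c 'true'                    # per-command: set only for that one child
c=${peronce:-unset}                       # gone again in the current shell
echo "$a $b $c"                           # → unset 2 unset
```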