I have a command I've installed via npm (db-migrate). I want to run it from the command line as part of automating database migration. The config file for the script can reference environment variables. I'm already setting database credentials as environment variables in another file. So rather than set them twice, I told the migration config to use the environment variables. The problem is, how do I get the environment variables from the file before running the migration script? Also, how can I run the migration script directly from the npm bin?
I found a nice general solution to this problem, so I'm posting the question and the answer for at least the benefit of my future self.
This can be done using a few tools:
Read the environment variables from the file and set them before running the script. To review, it's simple to set an environment variable before running a command:
PORT=3000 node index.js
But we want to read the variables from a file. This can be done using export and xargs:
export $(cat app.env | xargs)
We want to run the script directly from npm's bin. The path to the bin folder can be obtained using npm bin. So we just need to add that to the path before running the command:
PATH=$(npm bin):$PATH
Now put them together:
export $(cat app.env | xargs) && PATH=$(npm bin):$PATH db-migrate up
This reads the environment variables, sets them, adds the npm bin to the path, and then runs the migration script.
By the way, the content of app.env would look something like this:
PORT=3000
DB_NAME=dev
DB_USER=dev_user
DB_PASS=dev_pass
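As a quick self-contained check of the export...xargs pattern (using hypothetical values, and plain echo standing in for the migration script):

```shell
# Create a sample env file (hypothetical values, for illustration)
cat > app.env <<'EOF'
PORT=3000
DB_NAME=dev
EOF

# Load the variables, then any subsequent command can read them
export $(cat app.env | xargs)
echo "$PORT/$DB_NAME"   # → 3000/dev
```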
UPDATE:
There are a few caveats with this method. The first is that it will pollute your current shell with the environment variables. In other words, after you run the export...xargs bit, you can run something like echo $DB_PASS and your password will show up. To prevent this, wrap the command in parens:
(export $(cat app.env | xargs) && PATH=$(npm bin):$PATH db-migrate up)
The parens cause the command to be executed in a subshell environment. The environment variables will not bubble up to your current shell.
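You can see the scoping at work with a throwaway variable (hypothetical name, just for demonstration):

```shell
# Variables exported inside (...) disappear when the subshell exits
(export DEMO_VAR=secret && echo "inside: $DEMO_VAR")   # → inside: secret
echo "outside: ${DEMO_VAR:-unset}"                     # → outside: unset
```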
The second caveat is that this will only work if your environment variables don't have spaces in them. If you want spaces, I found an OK solution based on this gist comment. Create a file named something like load-env.sh:
# loads/exports env variables from a file
# taken from: https://gist.github.com/judy2k/7656bfe3b322d669ef75364a46327836#gistcomment-3239799
function loadEnv() {
  local envFile=$1
  local isComment='^[[:space:]]*#'
  local isBlank='^[[:space:]]*$'
  while IFS= read -r line; do
    [[ $line =~ $isComment ]] && continue
    [[ $line =~ $isBlank ]] && continue
    key=$(echo "$line" | cut -d '=' -f 1)
    value=$(echo "$line" | cut -d '=' -f 2-)
    eval "export ${key}=\"$(echo \${value})\""
  done < "$envFile"
}
Then run your command like this:
(source scripts/load-env.sh && loadEnv app.env && PATH=$(npm bin):$PATH db-migrate up)
Related
Is it possible to set a docker ENV variable to the result of a command?
Like:
ENV MY_VAR whoami
i want MY_VAR to get the value "root" or whatever whoami returns
As an addition to DarkSideF's answer:
You should be aware that each line/command in a Dockerfile is run in a separate container.
You can do something like this:
RUN export bleah=$(hostname -f);echo $bleah;
This is run in a single container.
At this time, a command result can be used with RUN export, but cannot be assigned to an ENV variable.
Known issue: https://github.com/docker/docker/issues/29110
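The effect is easy to reproduce outside Docker: each RUN behaves like a fresh shell invocation, so exports don't carry over between them. A sketch (two sh -c calls standing in for two RUN lines):

```shell
# Analogous to two separate RUN lines: each sh -c is a fresh process
sh -c 'export bleah=$(hostname); echo "set to: $bleah"'
sh -c 'echo "next shell sees: ${bleah:-nothing}"'   # → next shell sees: nothing
```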
I had the same issue and found a way to set an environment variable to the result of a command, using RUN in the Dockerfile.
For example, I need to set SECRET_KEY_BASE for a Rails app just once, without it changing on every run, as it would if I ran:
docker run -e SECRET_KEY_BASE="$(openssl rand -hex 64)"
Instead, I write a line like this in the Dockerfile:
RUN bash -l -c 'echo export SECRET_KEY_BASE="$(openssl rand -hex 64)" >> /etc/bash.bashrc'
and my env variable is available from root, even after bash login.
Or maybe:
RUN /bin/bash -l -c 'echo export SECRET_KEY_BASE="$(openssl rand -hex 64)" > /etc/profile.d/docker_init.sh'
then the variable is available in CMD and ENTRYPOINT commands.
Docker caches it as a layer and changes it only if you change the lines before it.
You can also try other ways to set environment variables.
This answer is a response to #DarkSideF.
The method he proposes is the following, in the Dockerfile:
RUN bash -l -c 'echo export SECRET_KEY_BASE="$(openssl rand -hex 64)" >> /etc/bash.bashrc'
(adding an export in /etc/bash.bashrc)
It is good, but the environment variable will only be available to /bin/bash processes. If you try to run your docker application, for example a Node.js application, /etc/bash.bashrc will be completely ignored and your application won't have a single clue what SECRET_KEY_BASE is when it tries to access process.env.SECRET_KEY_BASE.
That is the reason why the ENV keyword is what everyone tries to use with a dynamic command: every time you run your container or use an exec command, Docker checks ENV and pipes every value into the process currently being run (similar to -e).
One solution is to use a wrapper (credit to #duglin in this github issue).
Have a wrapper file (e.g. envwrapper) in your project root containing :
#!/bin/bash
export SECRET_KEY_BASE="$(openssl rand -hex 64)"
export ANOTHER_ENV="hello world"
exec "$@"
and then in your Dockerfile :
...
COPY . .
RUN mv envwrapper /bin/.
RUN chmod 755 /bin/envwrapper
CMD envwrapper myapp
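Outside Docker, the wrapper idea can be sanity-checked like this (hypothetical variable and file names; the wrapper sets the environment, then execs whatever command it was given):

```shell
cd "$(mktemp -d)"

# A minimal wrapper: export the variable, then run the wrapped command
cat > envwrapper <<'EOF'
#!/bin/bash
export DEMO_SECRET="hello-from-wrapper"
exec "$@"
EOF
chmod 755 envwrapper

# The wrapped command inherits the variable
./envwrapper sh -c 'echo "$DEMO_SECRET"'   # → hello-from-wrapper
```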
If you run commands using sh, as seems to be the default in docker, you can do something like this:
RUN echo "export VAR=`command`" >> /envfile
RUN . /envfile; echo $VAR
This way, you build an env file by redirecting output to the env file of your choice. It's more explicit than having to define profiles and so on.
Since the file is available to subsequent layers, it can be sourced and the exported variables used. The way you create the env file isn't important.
When you're done, you can remove the file to make it unavailable to the running container.
The . is how the env file is loaded.
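Locally the same pattern looks like this (hypothetical names; in a Dockerfile each step would be its own RUN line):

```shell
cd "$(mktemp -d)"

# Build the env file by redirecting a command's output into it
echo "export VAR=$(echo computed-value)" >> ./envfile

# Source it in a later step and use the variable
. ./envfile; echo "$VAR"   # → computed-value
```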
As an addition to #DarkSideF's answer, if you want to reuse the result of a previous command in your Dockerfile during the build process, you can use the following workaround:
run a command, store the result in a file
use command substitution to get the previous result from that file into another command
For example :
RUN echo "bla" > ./result
RUN echo $(cat ./result)
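The same two-step pattern is runnable locally (each line standing in for one RUN instruction):

```shell
cd "$(mktemp -d)"

# Step 1: run a command and store its result in a file
echo "bla" > ./result

# Step 2: use command substitution to feed the stored result into another command
echo "previous result was: $(cat ./result)"   # → previous result was: bla
```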
For something cleaner, you can also use the following gist, which provides a small CLI called envstore.py:
RUN envstore.py set MY_VAR bla
RUN echo $(envstore.py get MY_VAR)
Or you can use python-dotenv library which has a similar CLI.
Not sure if this is what you were looking for, but this pattern works for injecting ENV vars or ARGs into your .Dockerfile build.
in your my_build.sh:
echo getting version of osbase image to build from
OSBASE=$(grep "osbase_version" .version | sed 's/^.*: //')
echo building docker
docker build \
  -f PATH_TO_MY.Dockerfile \
  --build-arg ARTIFACT_TAG=$OSBASE \
  -t my_artifact_home_url/bucketname:$TAG .
for getting an ARG in your .Dockerfile the snippet might look like this:
ARG ARTIFACT_TAG
FROM my_artifact_home_url/bucketname:${ARTIFACT_TAG}
alternatively for getting an ENV in your .Dockerfile the snippet might look like this:
FROM someimage:latest
ARG ARTIFACT_TAG
ENV ARTIFACT_TAG=${ARTIFACT_TAG}
The idea is that you run the shell script, which invokes the build with the Dockerfile and the args passed in as options.
Inlining the env works SOMETHING=hello node -e "console.log(process.env.SOMETHING)", but I want node to read the environment variables from the spawning shell.
The following code will print hello (note that echo can read the environment)
SOMETHING=hello
echo $SOMETHING
However the following code prints undefined:
SOMETHING=hello
node -e "console.log(process.env.SOMETHING)"
Why can't node read the shell environment? Can I make it read that somehow?
Run like this:
export SOMETHING=hello
node -e "console.log(process.env.SOMETHING)"
OR
SOMETHING=test node -e "console.log(process.env.SOMETHING)"
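The underlying difference: a plain assignment creates a shell-local variable, while export puts it into the environment that child processes (node included) inherit. You can see this with any child process; here sh stands in for node as a sketch:

```shell
# Plain assignment: shell-local, invisible to child processes
SOMETHING=hello
sh -c 'echo "child sees: ${SOMETHING:-nothing}"'   # → child sees: nothing

# Exported: inherited by child processes
export SOMETHING=hello
sh -c 'echo "child sees: ${SOMETHING:-nothing}"'   # → child sees: hello
```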
I have the following project directory structure:
myproj/
.env
setenv.sh
run.sh
Where run.sh looks like:
#!/bin/sh
sh setenv.sh
echo "$fizz"
Where .env is a properties file of key=value pairs like so:
fizz=buzz
foo=bar
color=red
The setenv.sh script needs to read the key-value pairs out of .env and export/source them, so that run.sh can reference them at runtime and they will evaluate to whatever their values are in .env.
The run.sh script and setenv.sh scripts need to run on Linux and/or Mac (so where uname is 'Linux', 'FreeBSD' or 'Darwin') and I need to be able to run run.sh over and over, each time with different values in .env, and have them take effect on each run.
Currently my setenv.sh looks like:
#!/bin/sh
unamestr=$(uname)
if [ "$unamestr" = 'Linux' ]; then
  export $(grep -v '^#' .env | xargs -d '\n')
elif [ "$unamestr" = 'FreeBSD' ] || [ "$unamestr" = 'Darwin' ]; then
  export $(grep -v '^#' .env | xargs -0)
fi
When I run sh run.sh it echoes the value buzz. But if I change fizz to another value, say, buzz2, and re-run run.sh, it still outputs the fizz value as buzz. What can I do so that the values in .env are always dynamically loaded/exported/sourced on each run of run.sh?
The issue is that when you run sh setenv.sh, it starts a new shell session and runs the script there. So the new session's environment gets configured, not yours.
As #Philippe suggested, you should "source" setenv.sh:
source setenv.sh # this is one way
. setenv.sh # this is another way
echo "$fizz"
The code above runs the content of setenv.sh in the current session, rather than in a separate one.
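A self-contained demonstration of the difference (hypothetical file contents):

```shell
cd "$(mktemp -d)"
echo 'export fizz=buzz' > setenv.sh

sh setenv.sh                        # runs in a child shell; nothing persists
echo "after sh: ${fizz:-empty}"     # → after sh: empty

. ./setenv.sh                       # runs in the current shell
echo "after source: $fizz"          # → after source: buzz
```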
I want to load some environment variables from a file before running a node script, so that the script has access to them. However, I don't want the environment variables to be set in my shell after the script is done executing.
I can load the environment variables like this:
export $(cat app-env-vars.txt | xargs) && node my-script.js
However, after the command is run, all of the environment variables are now set in my shell.
I'm asking this question to answer it, since I figured out a solution but couldn't find an answer on SO.
If you wrap the command in parentheses, the exports will be scoped to within those parens and won't pollute the global shell namespace:
(export $(cat app-env-vars.txt | xargs) && node my-script.js)
Echoing one of the environment variables from the app-env-vars.txt file after executing the command will show it as empty.
This is what the env command is for:
env - run a program in a modified environment
You can try something like:
env $(cat app-env-vars.txt) node my-script.js
This (and any unquoted $(...) expansion) is subject to word splitting and glob expansion, both of which can easily cause problems with something like environment variables.
A safer approach is to use arrays, like so:
my_vars=(
  FOO=bar
  "BAZ=hello world"
  ...
)
env "${my_vars[@]}" node my-script.js
You can populate an array from a file if needed. Note you can also use -i with env to only pass the environment variables you set explicitly.
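For example, one way to populate the array from a file, line by line with spaces preserved (bash-specific, since mapfile requires bash; sh standing in for node here):

```shell
cd "$(mktemp -d)"
cat > app-env-vars.txt <<'EOF'
FOO=bar
BAZ=hello world
EOF

# mapfile reads one assignment per line into the array, no word splitting
mapfile -t my_vars < app-env-vars.txt
env "${my_vars[@]}" sh -c 'echo "$BAZ"'   # → hello world
```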
If you trust the .txt's files contents, and it contains valid Bash syntax, you should source it (and probably rename it to a .sh/.bash extension). Then you can use a subshell, as you posted in your answer, to prevent the sourced state from leaking into the parent shell:
( source app-env-vars.txt && node my-script.js )
If your file just contains variables like
FOO='x y z'
BAR='bar'
...
you can try
eval $(< app-env-vars.txt) node my-script.js
When I run this in a bash file, the argument environment is not received by the ember app:
#!/bin/bash
# create nginx.conf
echo "Create nginx.conf from nginx.conf.erb"
export `cat ./.env`
erb ./config/nginx.conf.erb > ./config/nginx.conf
./node_modules/ember-cli/bin/ember serve --environment=acceptance
I think it has something to do with the export function. When I put the ember serve command before the export it works.
The .env file looks like this
EMBER_ENV=development
Running bash 3.2 on Mac OS 10.10 (Yosemite)
Edit: I changed the question because it didn't have all the relevant code
In this case, you're giving ember two conflicting arguments: You're passing EMBER_ENV=development through the environment, and --environment=acceptance through the command line. The former tells it to use the environment named development, and the latter tells it to use the environment named acceptance -- but it can't do both at the same time.
Knowing which of those two conflicting settings ember honors is something you'd need to check its documentation for. Of course, the better thing is just to fix the conflict.
I'd suggest doing the following:
./node_modules/ember-cli/bin/ember serve "--environment=${EMBER_ENV:-acceptance}"
...if you want to honor the EMBER_ENV in your file rather than the one on the command line (but fall back to acceptance when the file doesn't specify an EMBER_ENV). If you use bash -x, you'll explicitly see the script passing a --environment= appropriate to what's given in the .env file.
If you always want to use the environment acceptance, on the other hand, override or remove the environment after loading it from your file:
export `cat ./.env`
# if the file contained `EMBER_ENV`, unset it so our command-line argument is honored
unset EMBER_ENV
./node_modules/ember-cli/bin/ember serve --environment=acceptance
All that said --
export `cat ./.env`
is actually a quite buggy way to do things (though it won't break if the only thing you're setting is EMBER_ENV, and the only value it has is a single word in all ASCII with no whitespace or special characters). If you trust your .env file to be written by a non-malicious user in valid shell syntax, you'd have fewer bugs with:
set -a # automatically export all variables
source .env # run .env as a shell script within the current interpreter
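A runnable sketch of the set -a approach (hypothetical file contents; set +a turns auto-export back off afterward):

```shell
cd "$(mktemp -d)"
echo 'EMBER_ENV=development' > .env

set -a          # every assignment from here on is exported automatically
source .env
set +a          # stop auto-exporting

# A child process now sees the variable
sh -c 'echo "$EMBER_ENV"'   # → development
```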
If you don't trust your .env to be written as a non-malicious script in valid shell syntax, then perhaps something more like:
while IFS='=' read -r k v; do
  [[ $k ]] || continue                # skip empty lines
  printf -v "$k" %s "$v" || continue  # set the variable given as a shell variable
  export "$k"                         # export those variables to the environment
done < .env