Access $HOME in Dockerfile ENV instruction - linux

I'm trying to set an environment variable dynamically with the ENV instruction, using the $HOME variable (I think it's a system environment variable).
But ENV is not able to access $HOME; it's blank, even though I'm able to echo $HOME.
FROM somebaseimage
....
....
USER 5051
RUN echo $HOME
# prints /home/myuser
ENV MY_JSON_FILEPATH="${HOME}/my_file.json"
RUN echo $MY_JSON_FILEPATH
# prints /my_file.json
I have tried
"${HOME}/my_file.json" and "$HOME/my_file.json"; neither works.
What would be the best way to set such an environment variable?

You should add the line ARG HOME after the FROM ... line and pass the --build-arg build option, or add a build: args entry in your docker-compose file.
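A minimal sketch of that suggestion, assuming the user's home really is /home/myuser as the RUN echo $HOME output shows (image and tag names are placeholders):
FROM somebaseimage
# HOME set by the shell at run time is not visible to ENV at build time, so pass it in explicitly
ARG HOME
USER 5051
ENV MY_JSON_FILEPATH="${HOME}/my_file.json"
Then build with:
docker build --build-arg HOME=/home/myuser -t myimage .
or, in docker-compose.yml (the service name app is a placeholder):
services:
  app:
    build:
      context: .
      args:
        HOME: /home/myuser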

Related

bash - Unable to set environment variable using script

I have a script that gets called in a Dockerfile entrypoint:
ENTRYPOINT ["/bin/sh", "-c", "/var/run/Scripts/entrypoint.sh"]
I need to set an environment variable based on a value in a file. I am using the following command to retrieve the value: RSYSLOG_LISTEN_PORT=$(sed -nE 's/.*port="([^"]+)".*/\1/p' /etc/rsyslog.d/0_base.conf)
Locally this command works; even running it from the directory the entrypoint script is located in results in the env var being set.
However, adding this command after export (export SYSLOG_LISTEN_PORT=$(sed -nE 's/.*port="([^"]+)".*/\1/p' /etc/rsyslog.d/0_base.conf)) in the entrypoint script does not result in the env var being set.
Additionally, trying to use another script and sourcing the script within the entrypoint script also does not work:
#!/bin/bash
. ./rsyslog_listen_port.sh
I am unable to use source, as I get a source: not found error; I have tried a few different ways to use source, but it doesn't seem compatible.
Can anyone help me? I have spent too much time trying to get this to work for what seems like a relatively simple task.
A container only runs one process, and then it exits. A pattern I find useful here is to make the entrypoint script be a wrapper that does whatever first-time setup is useful, then exec the main container process:
#!/bin/sh
# set the environment variable
export SYSLOG_LISTEN_PORT=$(sed -nE 's/.*port="([^"]+)".*/\1/p' /etc/rsyslog.d/0_base.conf)
# then run the main container command
exec "$#"
In your Dockerfile, set the ENTRYPOINT to this script (it must use JSON-array syntax, and it must not have an explicit sh -c wrapper) and CMD to whatever you would have set it to without this wrapper.
ENTRYPOINT ["/var/run/Scripts/entrypoint.sh"]
CMD ["rsyslog"]
Note that this environment variable will be set for the main container process, but not for docker inspect or a docker exec debugging shell. Since the wrapper sets up the environment variable and then runs the main container process, you can replace the command part (only) when you run the container to see this.
docker run --rm your-image env | grep SYSLOG_LISTEN_PORT
(source is a bash-specific extension. POSIX shell . does pretty much the same thing, and I'd always use . in preference to source.)
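If you do want to keep the port lookup in a separate rsyslog_listen_port.sh (as in the question), the same wrapper pattern works with the POSIX . command; a sketch of the entrypoint, assuming the helper lives next to it and contains only the export line:
#!/bin/sh
# /var/run/Scripts/entrypoint.sh
. /var/run/Scripts/rsyslog_listen_port.sh   # the POSIX "." works under /bin/sh where "source" does not
exec "$@"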

Dockerfile set ENV based on npm package version [duplicate]

Is it possible to set a docker ENV variable to the result of a command?
Like:
ENV MY_VAR whoami
I want MY_VAR to get the value "root" or whatever whoami returns.
As an addition to DarkSideF's answer: you should be aware that each line/command in a Dockerfile is run in a separate container.
You can do something like this:
RUN export bleah=$(hostname -f);echo $bleah;
This is run in a single container.
At this time, a command result can be used with RUN export, but cannot be assigned to an ENV variable.
Known issue: https://github.com/docker/docker/issues/29110
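To make the layer boundary concrete, a quick illustrative sketch (the variable name is arbitrary):
# each RUN starts a fresh shell in a fresh container, so the export is lost afterwards:
RUN export bleah=$(hostname -f)
RUN echo "bleah is '$bleah'"                      # prints: bleah is ''
# chaining the commands in a single RUN keeps the variable, but only for that one layer:
RUN export bleah=$(hostname -f) && echo "bleah is '$bleah'"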
I had the same issue and found a way to set an environment variable as the result of a command by using RUN in the Dockerfile.
For example, I need to set SECRET_KEY_BASE for a Rails app just once, without it changing the way it would each time I run:
docker run -e SECRET_KEY_BASE="$(openssl rand -hex 64)"
Instead, I write a line like this in the Dockerfile:
RUN bash -l -c 'echo export SECRET_KEY_BASE="$(openssl rand -hex 64)" >> /etc/bash.bashrc'
and my env variable is available from root, even after bash login.
Or maybe:
RUN /bin/bash -l -c 'echo export SECRET_KEY_BASE="$(openssl rand -hex 64)" > /etc/profile.d/docker_init.sh'
Then the variable is available in CMD and ENTRYPOINT commands.
Docker caches it as a layer and only rebuilds it if you change the lines before it.
You can also try different ways to set an environment variable.
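Note that the /etc/profile.d variant only reaches your process if the container command goes through a login shell; a sketch of what that could look like for the Rails case above, on an image whose /etc/profile sources /etc/profile.d/*.sh (the rails server command is an assumption, not part of the original answer):
RUN /bin/bash -l -c 'echo export SECRET_KEY_BASE="$(openssl rand -hex 64)" > /etc/profile.d/docker_init.sh'
# bash -l reads /etc/profile.d/docker_init.sh, so the app sees SECRET_KEY_BASE
CMD ["/bin/bash", "-l", "-c", "bundle exec rails server -b 0.0.0.0"]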
This answer is a response to @DarkSideF.
The method he is proposing is the following, in the Dockerfile:
RUN bash -l -c 'echo export SECRET_KEY_BASE="$(openssl rand -hex 64)" >> /etc/bash.bashrc'
(adding an export to /etc/bash.bashrc)
It is good, but the environment variable will only be available to /bin/bash processes; if you run your Docker application, for example a Node.js application, /etc/bash.bashrc will be completely ignored and your application won't have a single clue what SECRET_KEY_BASE is when trying to access process.env.SECRET_KEY_BASE.
That is why ENV is the keyword everyone tries to use with a dynamic command: every time you run your container or use an exec command, Docker checks ENV and injects every value into the process being run (similar to -e).
One solution is to use a wrapper (credit to #duglin in this github issue).
Have a wrapper file (e.g. envwrapper) in your project root containing:
#!/bin/bash
export SECRET_KEY_BASE="$(openssl rand -hex 64)"
export ANOTHER_ENV="hello world"
exec "$@"
and then in your Dockerfile:
...
COPY . .
RUN mv envwrapper /bin/.
RUN chmod 755 /bin/envwrapper
CMD envwrapper myapp
If you run commands using sh, as seems to be the default in Docker, you can do something like this:
RUN echo "export VAR=`command`" >> /envfile
RUN . /envfile; echo $VAR
This way, you build an env file by redirecting output to the env file of your choice. It's more explicit than having to define profiles and so on.
Then, as the file is available to later layers, it is possible to source it and use the variables it exports. How you create the env file isn't important.
Then, when you're done, you can remove the file to make it unavailable to the running container.
The . is how the env file is loaded.
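To tie this back to the question's npm use case, a sketch of the same env-file idea reading the version out of package.json (it assumes a Node-based image; the file names /npm_version and /envfile are arbitrary):
COPY package.json ./
# store the raw value first, then build the env file from it
RUN node -p "require('./package.json').version" > /npm_version
RUN echo "export APP_VERSION=$(cat /npm_version)" >> /envfile
RUN . /envfile; echo "building version $APP_VERSION"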
As an addition to @DarkSideF's answer, if you want to reuse the result of a previous command in your Dockerfile during the build process, you can use the following workaround:
run a command, store the result in a file
use command substitution to get the previous result from that file into another command
For example:
RUN echo "bla" > ./result
RUN echo $(cat ./result)
For something cleaner, you can also use the following gist, which provides a small CLI called envstore.py:
RUN envstore.py set MY_VAR bla
RUN echo $(envstore.py get MY_VAR)
Or you can use the python-dotenv library, which has a similar CLI.
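With python-dotenv the two steps might look roughly like this (a sketch; it assumes a Python-based image with the optional CLI extra installed, and the dotenv CLI's exact output can vary between versions):
RUN pip install "python-dotenv[cli]"
RUN dotenv set MY_VAR bla
RUN echo $(dotenv get MY_VAR)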
Not sure if this is what you were looking for, but in order to inject ENV vars or ARGs into your .Dockerfile build, this pattern works.
in your my_build.sh:
echo getting version of osbase image to build from
OSBASE=$(grep "osbase_version" .version | sed 's/^.*: //')
echo building docker
docker build \
  -f PATH_TO_MY.Dockerfile \
  --build-arg ARTIFACT_TAG=$OSBASE \
  -t my_artifact_home_url/bucketname:$TAG .
for getting an ARG into your .Dockerfile (here it is used in the FROM line, so it must be declared before the first FROM), the snippet might look like this:
ARG ARTIFACT_TAG
FROM my_artifact_home_url/bucketname:${ARTIFACT_TAG}
alternatively, for getting an ENV into your .Dockerfile, the snippet might look like this:
FROM someimage:latest
ARG ARTIFACT_TAG
ENV ARTIFACT_TAG=${ARTIFACT_TAG}
The idea is that you run the shell script, and it runs docker build against the .Dockerfile with the args passed in as build options.

Adding DB_HOST environment variable to Ubuntu VM in provision.sh

This is the code I currently have in my provision.sh file for Vagrant to run when setting up my VM. (I do have other code before this to install packages etc.; it is all working, it's just that this environment variable is not being created and set.)
#Add DB_HOST env variable
export DB_HOST=192.168.10.150:27017/posts
echo "DB_HOST=192.168.10.150:27017/posts" >> ~/.bashrc
source ~/.bashrc
Is there something massively wrong with this? Or is there some other method that I have to use?
Thank you to KamilCuk for helping me with this.
He told me to echo into the /etc/environment file and add the DB_HOST env variable there to make it persistent.
I did have some issues doing this, and here is the command I used to make it work:
echo "DB_HOST=[db-ip]" | sudo tee -a /etc/environment
So: echo the variable name and its value, then pipe that into a tee command, which basically just opens a file and allows write access (I think). You need to do it this way because you need root permissions to open and write to the /etc/environment file, but you don't need sudo in order to complete the echo.
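For contrast, here is why the pipe into tee is needed rather than a plain sudo echo with a redirect (using the DB_HOST value from the question):
sudo echo "DB_HOST=192.168.10.150:27017/posts" >> /etc/environment        # fails: the ">>" redirect is done by your unprivileged shell, not by sudo
echo "DB_HOST=192.168.10.150:27017/posts" | sudo tee -a /etc/environment  # works: tee runs as root and opens the file itself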
Another way...
To let a Linux program start with a special set of environment variables, you can use env -i ...
Example of a shell function that provides a special environment for the Lua interpreter...
lua ()
{
( env -i LANG='de_DE.UTF-8' TERM='xterm-256color' LUA_PATH='./lua/?.lua' LUA_CPATH='./lua/?.so' /usr/local/bin/lua "${@}" )
}
...gives me the environment I want.
This example uses the full path to the Lua executable and therefore needs no PATH variable.
The function is placed in the .bashrc (local login) or .profile (remote login) of a normal user's home folder.
Impression...
$ lua
Lua 5.4.3 Copyright (C) 1994-2021 Lua.org, PUC-Rio
> require('dialog')
dialog: 0x56690120 ./lua/dialog.lua
-- Do special german chars work too?
> _G['ÜüÄäÖöß']='It works!'
> print(_G['ÜüÄäÖöß'])
It works!
-- Environment is also used by child(s) (inherit)...
> os.execute('env')
PWD=/home/knoppix
LINES=54
LANG=de_DE.UTF-8
COLUMNS=190
TERM=xterm-256color
SHLVL=0
LUA_CPATH=./lua/?.so
LUA_PATH=./lua/?.lua
_=/usr/bin/env
true exit 0
This works in multi-user environments without becoming the superuser root first, because normal users aren't allowed to write anything in / or /etc/, which belong to root.

Source to global env

I have a script that I need to source. I want to source it from another script into the global environment. Abstract example:
Script 1:
#/script1
PATH="$PATH:/something"
Script 2:
#/script2
source /script1
Then I run bash /script2 and expect to see the updated PATH in the global environment, but I don't.
A more realistic example:
#/somedir/script1
A=$(./someanotherscript)
#/script2
cd /somedir
source script1
So, how can I do this thing?
After running bash script2, you won't see the change to PATH that script1 made. That change was local to the environment of the process running script2. If you want to change PATH in the current environment, from which you run script2, you need to source it as well.
$ source script2
$ echo $PATH
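To see the difference side by side, a quick sketch using the question's /script1 and /script2 (results shown as comments):
bash /script2;   echo "$PATH"   # unchanged: the update only existed inside the child bash process
source /script2; echo "$PATH"   # now ends with :/something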
I believe you are not exporting the variable; see the following:
# script1.sh
PATH="/new:$PATH"
env
In this case env, even within this script, won't have access to the new path; you need to do this instead:
# script1.sh
export PATH="/new:$PATH"
env
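A quick, generic way to see what export changes: an unexported variable is visible only in the current shell, while an exported one is inherited by child processes such as env (the variable name A is just for illustration):
A="not exported";    bash -c 'echo "${A:-unset}"'   # prints: unset
export A="exported"; bash -c 'echo "${A:-unset}"'   # prints: exported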

How to export a variable in Bash

I need to set a system environment variable from a Bash script that would be available outside of the current scope. You would normally export environment variables like this:
export MY_VAR=/opt/my_var
But I need the environment variable to be available at a system level. Is this possible?
Not really - once you're running in a subprocess you can't affect your parent.
There are two possibilities:
Source the script rather than run it (see source .):
source {script}
Have the script output the export commands, and eval that:
eval `bash {script}`
Or:
eval "$(bash script.sh)"
This is the only way I know to do what you want:
In foo.sh, you have:
#!/bin/bash
echo MYVAR=abc123
And when you want to get the value of the variable, you have to do the following:
$ eval "$(foo.sh)" # assuming foo.sh is in your $PATH
$ echo $MYVAR #==> abc123
Depending on what you want to do, and how you want to do it, Douglas Leeder's suggestion about using source could be used, but it will source the whole file, functions and all. Using eval, only the stuff that gets echoed will be evaluated.
Set the variable in the file /etc/profile (create the file if needed). That will essentially make the variable available to every Bash login shell.
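A minimal sketch of that approach (run as root; only newly started login shells will pick it up), using the variable from the question:
echo 'export MY_VAR=/opt/my_var' >> /etc/profile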
When I am working under the root account and wish, for example, to open an X executable as a normal user running X,
I need to set the DISPLAY environment variable with...
env -i DISPLAY=:0 prog_that_need_xwindows arg1 arg2
You may want to use source instead of running the executable directly:
# Executable : exec.sh
export var="test"
invar="inside variable"
source exec.sh
echo $var # test
echo $invar # inside variable
This will run the file, but in the same shell as the parent shell.
Possible downside in some rare cases: all variables, whether explicitly exported or not, end up set in the sourcing shell. If some variables are required to be unset, unset them explicitly. Similarly, handle imported variables.
# Executable : exec.sh
export var="test"
invar="inside variable"
# --- #
unset invar
