How to use an environment variable in a Dockerfile? - linux

I am just experimenting with the following Dockerfile:
FROM ubuntu:latest
RUN apt-get update
RUN echo VERSION_TAG="latest" >> /etc/environment
RUN cat /etc/environment
CMD echo $VERSION_TAG
I built the image (from inside the req directory) using:
docker build -t temp/testing:latest .
Then I ran it using:
docker run temp/testing:latest
Expected output:
latest
Actual output: (blank)
While building the image, the output of cat /etc/environment was:
Step 4/5 : RUN cat /etc/environment
---> Running in 4grdc7b5165a
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"
VERSION_TAG=latest
The variable is present in /etc/environment, yet its value is not printed when the container runs. Any help will be appreciated.
Note: I want to do this without using
https://docs.docker.com/engine/reference/builder/#env

I need to know the reason.
Docker just exec()s the command (ENTRYPOINT + CMD) with nothing in between.
PAM is not loaded, so pam_env.so is not loaded, so /etc/environment is never read.
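Because nothing reads that file automatically, one workaround without ENV (just a sketch, not the recommended approach) is to source the file yourself in the command; /etc/environment happens to be valid shell syntax, so this prints "latest":
FROM ubuntu:latest
RUN echo VERSION_TAG="latest" >> /etc/environment
# Explicitly read the file before echoing; nothing else will read it for you
CMD . /etc/environment && echo $VERSION_TAG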

The intended mechanism for this is ENV, but you can also set extra variables on the command line at docker run with -e:
docker run -it -e VERSION='latest' -e NAME='abcd' ubuntu:latest /bin/bash
Alternatively, you can provide a file with one variable per line:
docker run -it --env-file ./my_env ubuntu:latest /bin/bash
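The file is plain KEY=value lines, one per line; a hypothetical ./my_env could look like this:
VERSION=latest
NAME=abcd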
You can also do a volume mount with -v, which can smuggle further information and entire paths into the container, including information useful to whatever application you run within it. (The intent of Docker containers is normally to run a single application within a known environment.)
However, if you're trying to determine whether the container is running the latest version of its own tag, this will be problematic.
Specifically, the tag may change, but your explicit setting of it won't. For such a case, you should consider something else, such as an outside process which replaces the running container with a new one when the tag changes (so you can assume what's running is always latest), or on some schedule (perhaps every morning).
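As a rough sketch only (the container name here is made up; the image tag is the one from the question), such a refresh could be a small script run from cron:
#!/bin/sh
# refresh.sh: pull the tag and recreate the container so it always runs the newest image
docker pull temp/testing:latest
docker rm -f testing 2>/dev/null || true
docker run -d --name testing temp/testing:latest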

Related

Where are environment variables located in a docker container

Where are the environment variables located on a docker container, more precisely, where is ENV=1 stored from the following:
$ docker run --rm -e ENV=1 -it ubuntu:16.04 bash
root@40e384fc9c1f:/# env
HOSTNAME=40e384fc9c1f
TERM=xterm
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:
ENV=1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
SHLVL=1
HOME=/root
_=/usr/bin/env
All the Linux docs I read pointed to /etc/environment, ~/.bashrc, ~/.bash_profile, and ~/.profile, but when I dumped those files I saw nothing related to the declaration of ENV=1.
Is it perhaps something specific to docker and how it sets up the environment?
If a variable is specified as an option when executing a Docker command, it is stored only in memory, not in a file; the Docker image itself is read-only.
When you run docker run -e VAR=value some-image, the container has a single process, and the environment variable is set in that process's environment. It's not in any particular file. The lowest-level Unix function to run a command is execve(2) and in that system call you just pass the new command's environment as parameters.
Locally, try running
export ANYTHING=value
env | grep ANYTHING
grep ANYTHING ~/.profile
and you'll similarly notice that the environment variable is set; running env(1) as a subprocess sees it; but it doesn't exist in any of the files you mention.
Normally in Docker, none of the files you mention are read at all. It's highly likely that the standard shell isn't GNU bash; if you're using an Alpine-based image, you may not even have a /bin/bash. None of this is a problem, but it means that you should usually ignore shell dotfiles.
Instead, to set an environment variable in an image, use the Dockerfile ENV directive. When you docker build the image, that value will be persisted, and you can see its value if you docker inspect the image, but it's not directly accessible on disk anywhere and it won't be visible in any of the dotfiles you mention.
As an example:
FROM alpine
# Set an environment variable using ENV
ENV VAR_1=from-dockerfile-env
# These won't work at all
RUN echo 'export VAR_2=from-etc-profile' >> /etc/profile
RUN echo 'export VAR_3=from-etc-bashrc' >> /etc/bashrc
# Also has no effect
RUN echo 'export VAR_4=from-source-script' > /source-script.sh
RUN echo 'source /source-script.sh' >> /etc/profile
# _Also_ has no effect
RUN export VAR_5=from-dockerfile-run
# When you run the container, see what is set (only $VAR_1)
CMD env | grep VAR
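If you build and run that image (the tag name below is just for the example), only the ENV-set variable shows up, and docker inspect shows where the value is actually persisted:
docker build -t env-demo .
docker run --rm env-demo
# prints: VAR_1=from-dockerfile-env
docker inspect --format '{{.Config.Env}}' env-demo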

Alias shell command work inside container but not with docker exec "alias"

To simplify test execution for several different containers, I want to create an alias so that I can use the same command for every container.
For example, for a backend container I want to be able to use docker exec -t backend test instead of docker exec -t backend pytest test.
So I added this line to my backend Dockerfile:
RUN echo alias test="pytest /app/test" >> ~/.bashrc
But when I do docker exec -t backend test it doesn't work, whereas it does work if I do docker exec -ti backend bash and then run test.
I saw that this is because aliases in .bashrc only apply to interactive shells.
How can I get around that?
docker exec does not run a shell, so .bashrc is simply never read.
Create an executable on the PATH instead, most likely in /usr/local/bin. Note that test is a very basic shell command, so use a different, unique name.
An alias will only work in interactive shells; if you want the command to work for other programs as well, install a small wrapper script instead (printf rather than echo -e, since echo -e behaves differently under the image's default /bin/sh):
RUN printf '#!/bin/bash\npytest /app/test\n' > /usr/local/bin/mypytest && \
    chmod +x /usr/local/bin/mypytest
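With that wrapper in place, the intended shorthand works without any shell or .bashrc involved:
docker exec -t backend mypytest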

Setting environment variables on docker before exec

I'm running a set of commands in an Ubuntu Docker container, and I need to set a couple of environment variables for my scripts to work.
I have tried several alternatives but none of them seem to solve my problem.
Alternative 1: Using --env or --env-file
On my already running container, I run either:
docker exec -it --env TESTVAR="some_path" ai_pipeline_new_image bash -c "echo $TESTVAR"
docker exec -it --env-file env_vars ai_pipeline_new_image bash -c "echo $TESTVAR"
The content of env_vars:
TESTVAR="some_path"
In both cases the output is empty.
Alternative 2: Using a Dockerfile
I create my image using the following Dockerfile:
FROM ai_pipeline_yh
ENV TESTVAR "A_PATH"
With this alternative the variable is set if I attach to the container (i.e. if I run an interactive shell), but the output is blank if I run docker exec -it ai_pipeline_new_image bash -c "echo $TESTVAR" from the host.
What is the clean way to do this?
EDIT
Turns out that if I check the state of the variables from a shell script, they are set, but not if I check them directly with bash -c "echo $VAR". I would really like to understand why this is so. I provide a minimal example:
Run docker
docker run -it --name ubuntu_env_vars ubuntu
Create a file that echoes a VAR (inside the container)
root@afdc8c494e8a:/# echo "echo \$VAR" > env_check.sh
root@afdc8c494e8a:/# chmod +x env_check.sh
From the host, run:
docker exec -it -e VAR=BLA ubuntu_env_vars bash -c "echo $VAR"
(Blank output)
From the host, run:
docker exec -it -e VAR=BLA ubuntu_env_vars bash -c "/env_check.sh"
output: BLA
Why???????
I revealed my noobness. Answering my own question here:
Both options, --env-file file and -e foo=bar, are fine.
I forgot to escape the $ character when testing. The correct command to test whether the variable exists is therefore:
docker exec -it my_docker bash -c "echo \$MYVAR"
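The difference is which shell expands the variable: with double quotes and no backslash, the host shell expands $MYVAR before docker exec even runs, so the container just echoes an empty string. Single quotes work too, since they defer expansion to the shell inside the container:
docker exec -it my_docker bash -c 'echo $MYVAR'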
A good option is to design your apps to be driven by environment variables at runtime (application start).
Don't bake environment variables in at the docker build stage.
Sometimes the problem is not the environment variables or Docker; the problem is the app that reads the environment variables.
Anyway, you have these options for injecting environment variables into a Docker container at runtime:
-e foo=bar
This is the most basic way:
docker run -it -e foo=bar ubuntu
These variables will be available from the start of your container.
remote variables
If you need to pass several variables, -e will not be the best way. And if you don't want to use .env files or any other kind of local file with variables, you should:
prepare your app to read environment variables
inject the variables in a docker entrypoint bash script, reading them from a remote variables manager
in the entrypoint script, fetch the remote variables and load them with source /foo/bar/variables (a minimal sketch follows)
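A minimal sketch of such an entrypoint, assuming a hypothetical VARIABLES_URL endpoint that returns plain KEY=value lines:
#!/bin/sh
# docker-entrypoint.sh: fetch variables from the remote manager, then run the real command
set -e
curl -fsS "$VARIABLES_URL" -o /tmp/variables  # hypothetical endpoint returning KEY=value lines
set -a                                        # export everything sourced below
. /tmp/variables
set +a
exec "$@"                                     # hand off to the container's CMD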
With this approach you will have a variables manager for all of your containers. Such a variables manager has these features:
login
create applications
create variables by application
create global variables if a variable is required for two or more apps
expose an HTTP endpoint for the client apps to fetch their variables
encryption
etc.
You have these options:
Spring Cloud
ZooKeeper
https://www.vaultproject.io/
https://www.doppler.com/
Configurator (I'm the author)
Let me know if you'd like help using this approach.

What difference does it make whether "docker run -ti ubuntu:latest" is passed "bash"?

I am new to the Linux world and am trying to learn Docker.
I have two examples:
#example 1
$ docker run -ti ubuntu:latest bash
#example 2
$ docker run -ti ubuntu:latest
Example 1 gives me access to a terminal, and example 2 has the same outcome. I understand that adding bash creates a bash session, and if that means being able to run bash commands, I can run echo in both examples, so I do not really see the difference.
What exactly does adding bash to docker run do? Given this context, what is the difference between having and not having the bash argument?
Specifying an explicit command overrides the default command given in the Dockerfile.
If the default CMD in the Dockerfile is already bash, then specifying bash on the command line has no effect.
If you look at the ubuntu Dockerfile on GitHub, you can see that this is the case here:
CMD ["bash"]
Thus, you're just explicitly asserting the command that is already run by default anyhow.
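Passing something other than bash is where the argument actually matters; for example, this overrides the default command and runs ls instead, exiting as soon as it finishes:
docker run --rm ubuntu:latest ls /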

how to "docker run" a shell session on a minimal linux install and immediately tear down the container?

I just started using Docker, and I like it very much, but I have a clunky workflow that I'd like to streamline. When I'm iterating on my Dockerfile script I will often test things out after a build by launching a bash session, running some commands, finding out that such and such package didn't get installed correctly, then going back and tweaking my Dockerfile.
Let's say I have built my image and tagged it as buildfoo; I'd run it like this:
$> docker run -t -i buildfoo
... enter some bash commands.. then ^D to exit
Then I will have a container running that I have to clean up. Usually I just nuke everything like this:
docker rm --force `docker ps -qa`
This works OK for me. However, I'd rather not have to manually remove the container.
Any tips gratefully accepted!
Some additional minor details:
I'm running a minimal CentOS 7 image and using bash as my shell.
Please use the --rm flag of the docker run command: --rm=true or just --rm.
It automatically removes the container when it exits (incompatible with -d). Example:
docker run -i -t --rm=true centos /bin/bash
or
docker run -i -t --rm centos /bin/bash
Even though the above still works, the command below uses Docker's newer syntax:
docker container run -it --rm centos bash
I use the alias dr
alias dr='docker run -it --rm'
That gives you:
dr myimage
ls
...
exit
No more container running.
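If stopped containers have already piled up from earlier runs without --rm, docker container prune clears them all at once (it asks for confirmation first):
docker container prune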
