Conditionally setting ENV vars in Dockerfile based on build ARG - node.js

I'd like to conditionally set some ENV vars in my Dockerfile based on certain build ARGs. For one reason or another, the ENV var doesn't map directly to the actual build ARG, i.e. I can NOT do the following:
ARG MYVAR
ENV MYVAR=$MYVAR
The mapping is complex enough that I can't just do something like "prefix-${MYVAR}" either.
I'm aware I could just use a shell command with export instead (and then have access to if statements), but exporting the environment variable that way won't persist into later build steps or the running container the way I need it to. If the only solution is to repeatedly prefix every RUN/CMD that needs the env var (plus all the logic needed to get there), then I would accept that.
I'm also aware of this answer, Conditional ENV in Dockerfile, which does have one solution where (essentially) a ternary is used to trigger a certain ENV value, but because my situation is more complex, I can't just use that given solution.
Is there a way to write "logic" inside Dockerfiles, while still having access to Docker commands like ENV?
Any help would be really appreciated!
PS.
This is a Dockerfile to build a Node image, so my last few steps look basically like
RUN npm run build
CMD ["npm", "run", "start"]

If I understand your question correctly, you want:
if BUILD_VAR == v1:
    set ENV MYVAR=aValue
else if BUILD_VAR == v2:
    set ENV MYVAR=anotherValue
Solution:
ARG BUILD_VAR=v1
FROM node AS build_v1
ONBUILD ENV MYVAR=aValue
FROM node AS build_v2
ONBUILD ENV MYVAR=anotherValue
FROM build_${BUILD_VAR} AS final
...
RUN npm run build
CMD ["npm", "run", "start"]
The top-level BUILD_VAR (default v1) lets you pass the condition with docker build --build-arg BUILD_VAR=v1 .
ONBUILD is used so the ENV instruction is only executed when the corresponding branch is selected as the base of the final stage.
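For example, selecting the v2 branch and checking the result (the myapp tag is just a placeholder):
docker build --build-arg BUILD_VAR=v2 -t myapp .
docker run --rm myapp sh -c 'echo "$MYVAR"'
# prints "anotherValue"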

I had a similar issue setting a proxy server on a container.
The solution I'm using is an entrypoint script, plus another script for environment variable configuration. Using RUN, you ensure the configuration script runs at build time, while ENTRYPOINT runs when you start the container.
The entrypoint script looks like:
#!/bin/bash
# Run any other commands needed
# Run the main container command
exec "$@"
And in the Dockerfile, configure:
# copy bash scripts
COPY configproxy.sh /root
COPY entrypoint.sh .
RUN ["/bin/bash", "-c", ". /root/configproxy.sh"]
ENTRYPOINT ["./entrypoint.sh"]
CMD ["bash"]
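The configuration script itself could look something like this (a sketch; the proxy address and the profile.d path are assumptions, adapt to your setup):
#!/bin/bash
# configproxy.sh -- persist the proxy settings in a profile script,
# so every later shell in the container picks them up
echo 'export http_proxy=http://proxy.example.com:8080' >> /etc/profile.d/proxy.sh
echo 'export https_proxy=http://proxy.example.com:8080' >> /etc/profile.d/proxy.sh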

How can I include files from outside of Docker's build context using the "ADD" command in the Docker file?
From the Docker documentation:
The path must be inside the context of the build; you cannot ADD ../something/something, because the first step of a docker build is to send the context directory (and subdirectories) to the docker daemon.
I do not want to restructure my whole project just to accommodate Docker in this matter. I want to keep all my Docker files in the same sub-directory.
Also, it appears Docker does not yet (and may not ever) support symlinks: Dockerfile ADD command does not follow symlinks on host #1676.
The only other thing I can think of is to include a pre-build step to copy the files into the Docker build context (and configure my version control to ignore those files). Is there a better workaround than that?
The best way to work around this is to specify the Dockerfile independently of the build context, using -f.
For instance, this command will give the ADD command access to anything in your current directory.
docker build -f docker-files/Dockerfile .
Update: Docker now allows having the Dockerfile outside the build context (fixed in 18.03.0-ce). So you can also do something like
docker build -f ../Dockerfile .
I often find myself using the --build-arg option for this purpose. For example, after putting the following in the Dockerfile:
ARG SSH_KEY
RUN mkdir -p /root/.ssh && echo "$SSH_KEY" > /root/.ssh/id_rsa
You can just do:
docker build -t some-app --build-arg SSH_KEY="$(cat ~/file/outside/build/context/id_rsa)" .
But note the following warning from the Docker documentation:
Warning: It is not recommended to use build-time variables for passing secrets like github keys, user credentials etc. Build-time variable values are visible to any user of the image with the docker history command.
I spent a good amount of time trying to figure out a good pattern and how to better explain what's going on with this feature. I realized that the best way to explain it was as follows:
Dockerfile: will only see files under the context it is built with
Context: a place in "space" where the files you want to share and your Dockerfile will be copied to
So, with that said, here's an example of the Dockerfile that needs to reuse a file called start.sh
Dockerfile
Paths are always resolved against the chosen context directory, which acts as the local reference for the paths you specify.
COPY start.sh /runtime/start.sh
Files
Considering this idea, we can have multiple Dockerfiles, each building a specific thing, but all needing access to start.sh:
./all-services/
   /start.sh
   /service-X/Dockerfile
   /service-Y/Dockerfile
   /service-Z/Dockerfile
./docker-compose.yaml
Considering this structure and the files above, here's the docker-compose.yaml.
In this example, your shared context directory is the all-services directory. Same mental model here: think of all the files under this directory as being moved over to the so-called context.
Similarly, specify the Dockerfile you want for each service using the dockerfile key; its path is resolved relative to that same context.
The directory where your main content is located is the actual context to be set:
version: "3.3"
services:
  service-X:
    build:
      context: ./all-services
      dockerfile: ./service-X/Dockerfile
  service-Y:
    build:
      context: ./all-services
      dockerfile: ./service-Y/Dockerfile
  service-Z:
    build:
      context: ./all-services
      dockerfile: ./service-Z/Dockerfile
all-services is set as the context; the shared file start.sh is sent there, as well as the Dockerfile specified by each dockerfile key.
Each service gets built its own way, sharing the start.sh file!
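For completeness, one of the service Dockerfiles might look like this (a sketch; the base image and CMD are assumptions):
# all-services/service-X/Dockerfile
FROM node:18-alpine
# start.sh resolves against the shared context (./all-services),
# not against the Dockerfile's own directory
COPY start.sh /runtime/start.sh
CMD ["sh", "/runtime/start.sh"]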
On Linux you can mount other directories instead of symlinking them
mount --bind olddir newdir
See https://superuser.com/questions/842642 for more details.
I don't know if something similar is available for other OSes.
I also tried using Samba to share a folder and remount it into the Docker context which worked as well.
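Putting the bind-mount approach together, a minimal sketch (the paths are placeholders; mount and umount require root):
mkdir -p ./external
sudo mount --bind /path/outside/context ./external
docker build -t myimage .
sudo umount ./external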
If you read the discussion in issue 2745, not only may Docker never support symlinks, it may never support adding files outside your context. It seems to be a design philosophy that files going into a docker build should explicitly be part of the context, or come from a URL where they are presumably deployed with a fixed version, so that the build is repeatable with well-known URLs or files shipped with the docker container.
I prefer to build from a version controlled source - ie docker build -t stuff http://my.git.org/repo - otherwise I'm building from some random place with random files.
fundamentally, no.... -- SvenDowideit, Docker Inc
Just my opinion, but I think you should restructure to separate the code and docker repositories. That way the containers can be generic and pull in any version of the code at run time rather than build time.
Alternatively, use docker as your fundamental code deployment artifact; then you put the Dockerfile in the root of the code repository. If you go this route, it probably makes sense to have a parent docker container for more general system-level details and a child container for setup specific to your code.
I believe the simpler workaround would be to change the 'context' itself.
So, for example, instead of giving:
docker build -t hello-demo-app .
which sets the current directory as the context. Let's say you wanted the parent directory as the context; just use:
docker build -t hello-demo-app ..
You can also create a tarball of what the image needs first and use that as your context.
https://docs.docker.com/engine/reference/commandline/build/#/tarball-contexts
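A minimal sketch of the tarball approach (paths and tag are placeholders; the Dockerfile must sit at the root of the tarball):
tar -czf context.tar.gz -C /path/with/files .
docker build -t myimage - < context.tar.gz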
This behavior is determined by the context directory that docker or podman uses to present the files to the build process.
A nice trick here is to change the context dir during the build to the full path of the directory you want to expose to the daemon.
e.g.:
docker build -t imageName:tag -f /path/to/the/Dockerfile /mysrc/path
Using /mysrc/path instead of . (the current directory), you'll be using that directory as the context, so any files under it can be seen by the build process.
This example exposes the entire /mysrc/path tree to the docker daemon.
When using this with docker, the user who triggered the build must have recursive read permissions on every directory and file in the context dir.
This can be useful in cases where you have /home/user/myCoolProject/Dockerfile but want to bring files that aren't in the same directory into the container build context.
Here is an example of building using a context dir, but this time using podman instead of docker.
Let's take as an example a Dockerfile with a COPY or ADD instruction that copies files from a directory outside of your project, like:
FROM myImage:tag
...
...
COPY /opt/externalFile ./
ADD /home/user/AnotherProject/anotherExternalFile ./
...
In order to build this, with the container file located at /home/user/myCoolProject/Dockerfile, just do something like:
cd /home/user/myCoolProject
podman build -t imageName:tag -f Dockerfile /
A known use case for changing the context dir is when using a container as a toolchain for building your source code.
e.g.:
podman build --platform linux/s390x -t myimage:mytag -f ./Dockerfile /tmp/mysrc
or the path can be relative, like:
podman build --platform linux/s390x -t myimage:mytag -f ./Dockerfile ../../
Another example, this time using a global path:
FROM myImage:tag
...
...
COPY externalFile ./
ADD AnotherProject ./
...
Notice that now the full absolute path for the COPY and ADD sources is omitted in the Dockerfile layers.
In this case the context dir must be where the files are: if both externalFile and AnotherProject are in the /opt directory, then the context dir for building must be:
podman build -t imageName:tag -f ./Dockerfile /opt
Note when using COPY or ADD with a context dir in docker:
The docker daemon will try to "stream" all the files visible in the context dir tree to the daemon, which can slow down the build, and requires the user to have recursive read permissions on the context dir.
This can be especially costly when building through the API. With podman, however, the build starts immediately and does not need recursive permissions, because podman neither enumerates the entire context dir nor uses a client/server architecture.
For such cases it can be much more practical to use podman instead of docker when you run into these issues with a different context dir.
Some references:
https://docs.docker.com/engine/reference/commandline/build/
https://docs.podman.io/en/latest/markdown/podman-build.1.html
As described in this GitHub issue, the build actually happens in /tmp/docker-12345, so a relative path like ../relative-add/some-file is relative to /tmp/docker-12345. It would thus search for /tmp/relative-add/some-file, which is also shown in the error message.
It is not allowed to include files from outside the build directory, so this results in the "Forbidden path" message.
Using docker-compose, I accomplished this by creating a service that mounts the volumes I need and committing the image of the container. Then, in the subsequent service, I rely on the previously committed image, which has all of the data stored at the mounted locations. You will then have to copy these files to their ultimate destination, as host-mounted directories do not get committed when running a docker commit command.
You don't have to use docker-compose to accomplish this, but it makes life a bit easier:
# docker-compose.yml
version: '3'
services:
  stage:
    image: alpine
    volumes:
      - /host/machine/path:/tmp/container/path
    command: sh -c "cp -r /tmp/container/path /final/container/path"
  setup:
    image: stage
# setup.sh
# Start "stage" service
docker-compose up stage
# Commit changes to an image named "stage"
docker commit $(docker-compose ps -q stage) stage
# Start setup service off of stage image
docker-compose up setup
Create a wrapper docker build shell script that grabs the file then calls docker build then removes the file.
A simple solution not mentioned anywhere here, from my quick skim:
Have a wrapper script called docker_build.sh
Have it create tarballs and copy large files into the current working directory
Call docker build
Clean up the tarballs, large files, etc.
This solution is good because (1) it doesn't have the security hole from copying in your SSH private key, and (2) it doesn't need sudo bind mounts, which are their own security hole because bind requires root permission.
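A minimal sketch of such a wrapper (file names and image tag are placeholders):
#!/bin/bash
# docker_build.sh -- copy outside files in, build, clean up
set -e
cp ../outside-context/large-file.bin .
docker build -t myapp .
rm -f large-file.bin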
I think a feature was added to buildx earlier this year to do just this.
If you have Dockerfile syntax 1.4+ and buildx 0.8+ you can do something like this:
docker buildx build --build-context othersource=../something/something .
Then in your Dockerfile you can use the --from option to reference the named context:
COPY --from=othersource . /stuff
See this related post https://www.docker.com/blog/dockerfiles-now-support-multiple-build-contexts/
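A minimal sketch of the Dockerfile side, assuming the named context from the command above (some-file is a placeholder):
# syntax=docker/dockerfile:1.4
FROM alpine
# "othersource" is the extra context passed via --build-context
COPY --from=othersource some-file /stuff/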
Workaround with links:
ln path/to/file/outside/context/file_to_copy ./file_to_copy
Then in the Dockerfile, simply:
COPY file_to_copy /path/to/file
Note that ln creates a hard link here (the builder does not follow symlinks), so this only works when both paths are on the same filesystem.
I was personally confused by some answers, so I decided to explain it simply.
You should pass the context you have assumed in the Dockerfile to docker when you want to create the image.
I always select the project root as the context in the Dockerfile.
So, for example, if you use a COPY command like COPY . ., the first dot (.) is the context and the second dot (.) is the container working directory.
Assuming the context is the project root, dot (.), and the code structure is like this:
sample-project/
   docker/
      Dockerfile
If you want to build the image
and your current directory (where you run the docker build command) is /full-path/sample-project/,
you should do this:
docker build -f docker/Dockerfile .
And if your current directory is /full-path/sample-project/docker/,
you should do this:
docker build -f Dockerfile ../
An easy workaround might be to simply mount the volume (using the -v or --mount flag) to the container when you run it and access the files that way.
example:
docker run -v /path/to/file/on/host:/desired/path/to/file/in/container/ image_name
for more see: https://docs.docker.com/storage/volumes/
I had this same issue with a project and some data files that I wasn't able to move inside the repo context for HIPAA reasons. I ended up using two Dockerfiles. One builds the main application without the stuff I needed outside the container and publishes that to an internal repo. Then a second Dockerfile pulls that image, adds the data, and creates a new image which is then deployed and never stored anywhere. Not ideal, but it worked for my purposes of keeping sensitive information out of the repo.
In my case, my Dockerfile is written like a template containing placeholders which I'm replacing with real value using my configuration file.
So I couldn't specify this file directly, but had to pipe it into the docker build like this:
sed "s/%email_address%/$EMAIL_ADDRESS/" ./Dockerfile | docker build -t katzda/bookings:latest -f - .
Because of the pipe, COPY wouldn't normally work: running only docker build - (without the -f flag) provides neither the context nor a separate Dockerfile, which is the caveat. The command above solves it with -f - (explicitly saying the Dockerfile comes from stdin) while still passing . as the context.
How to share typescript code between two Dockerfiles
I had this same problem, but for sharing files between two typescript projects. Some of the other answers didn't work for me because I needed to preserve the relative import paths between the shared code. I solved it by organizing my code like this:
api/
   Dockerfile
   src/
      models/
         index.ts
frontend/
   Dockerfile
   src/
      models/
         index.ts
shared/
   model1.ts
   model2.ts
   index.ts
.dockerignore
Note: After extracting the shared code into that top folder, I avoided needing to update the import paths because I updated api/src/models/index.ts and frontend/src/models/index.ts to re-export from shared (e.g. export * from '../../../shared').
Since the build context is now one directory higher, I had to make a few additional changes:
Update the build command to use the new context:
docker build -f Dockerfile .. (two dots instead of one)
Use a single .dockerignore at the top level to exclude all node_modules. (eg **/node_modules/**)
Prefix the Dockerfile COPY commands with api/ or frontend/
Copy shared (in addition to api/src or frontend/src)
WORKDIR /usr/src/app
COPY api/package*.json ./        <---- prefix with api/
RUN npm ci
COPY api/src api/ts*.json ./     <---- prefix with api/
COPY shared /usr/src/shared      <---- added
RUN npm run build
This was the easiest way I could send everything to docker, while preserving the relative import paths in both projects. The tricky (annoying) part was all the changes/consequences caused by the build context being up one directory.
One quick and dirty way is to set the build context up as many levels as you need - but this can have consequences.
If you're working in a microservices architecture that looks like this:
./Code/Repo1
./Code/Repo2
...
You can set the build context to the parent Code directory and then access everything, but it turns out that with a large number of repositories, this can result in the build taking a long time.
An example situation: another team maintains a database schema in Repo1, and your team's code in Repo2 depends on it. You want to dockerise this dependency with some of your own seed data, without worrying about schema changes or polluting the other team's repository (depending on what the changes are, you may still have to change your seed data scripts, of course).
The second approach is hacky but gets around the issue of long builds:
Create a sh (or ps1) script in ./Code/Repo2 to copy the files you need and invoke the docker commands you want, for example:
#!/bin/bash
rm -rf ./db/schema
mkdir -p ./db/schema
cp -r ../Repo1/db/schema/. ./db/schema/
docker-compose -f docker-compose.yml down
docker container prune -f
docker-compose -f docker-compose.yml up --build
In the docker-compose file, simply set the context as Repo2 root and use the content of the ./db/schema directory in your dockerfile without worrying about the path.
Bear in mind that you will run the risk of accidentally committing this directory to source control, but scripting cleanup actions should be easy enough.

Elastic Beanstalk Environment Variables Missing When In SSH

I'm trying to run some commands on my NodeJS app that need to be run via SSH (Sequelize seeding, for instance); however, when I do so, I notice that the expected env vars are missing.
If I run eb printenv on my local machine I see the expected environment variables that were set in my EB Dashboard
If I SSH in, and run printenv, all of those variables I expect are missing.
So what happens, is when I run my seeds, I get an error:
node_modules/.bin/sequelize db:seed:all
ERROR: connect ECONNREFUSED 127.0.0.1:3306
I noticed that the port was wrong, it should be 5432. I checked to see if my environment variables were set with printenv and they are not there. This leads me to suspect that the proper env variables are not loaded in my ssh session, and NodeJS is falling back to the default values I provided in my config.
I found some solutions online, like running the following to load the env variables:
/opt/python/current/env
But the python directory doesn't exist. All that's inside /opt/ is elasticbeanstalk and aws directories.
I had some success so I could at least see that the env variables exist somewhere on the server by running the following:
sudo /opt/elasticbeanstalk/bin/get-config environment --output YAML
But simply running this command does not fix the problem. All it does is output the expected env variables to the screen. That's something! At least I know they are definitely there! But the env variables are still not there when I run printenv
So how do I fix this problem? Sequelize and NodeJS are clearly not seeing the env variables either, and it's trying to use the fallback default values that are set in my config file.
I know my answer is late, but I had the same problem, and after some attempts with bash scripting I found a way to store the value in your env vars.
You can simply run the following command (MY_VARIABLE is a placeholder for your variable's name):
export MY_VARIABLE=$(/opt/elasticbeanstalk/bin/get-config environment -k MY_VARIABLE)
Now you will be able to easily access this variable:
echo $MY_VARIABLE
Afterward, you can use the env var to do whatever you like. In my case, I use it to decide which version of my code to build, in a file called build-script.sh whose content is as follows:
#!/bin/bash
# get env variable to know which environment this code is running in
export env=$(/opt/elasticbeanstalk/bin/get-config environment -k environment)
# build the code based on the current environment
if [ "$env" = "production" ]
then
  echo "building for production"
  npm --prefix /var/app/current run build-prod
else
  echo "building for non production"
  npm --prefix /var/app/current run build-prod
fi
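If you need more than one variable, you can also load everything get-config returns in one go; a sketch, assuming jq is available on the instance:
# export every EB environment variable into the current shell session
eval "$(sudo /opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries[] | "export \(.key)=\"\(.value)\""')"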
hope this helps anyone facing the same issue 🤟🏻

How a docker container can use another docker container built as an executable and not as a service

Objective:
Run container "A", which is basically a nodejs server. This server should run an executable "E" that is exposed in another running container.
In simplified code, "A" contains this snippet that uses "E":
const spawn = require('child_process').spawn;
const someArgsForE = {
arg1:"some_string",
arg2:123
};
// E is the executable that would be normally run as 'docker run E '{ arg1:"some_string", arg2:123}' ... (ignore the correct escaping)
let childProcess = spawn("E", [JSON.stringify(someArgsForE)]);
childProcess.on('close', (code, signal) => {
//do whatever with the result... maybe write in a volume
});
Ideally "A" could implement some logic so that it is aware of the existence of "E":
if (serviceExists("E")) { ... do whatever ... }
since another executable "E_b" might also exist and be used by the same server "A".
I cannot figure out how to achieve this with docker-compose without wrapping "E" (and possibly "E_b") into other nodejs services, instead of accessing them as executables.
Having Docker inside Docker and then using something like
let childProcess = spawn("docker", ["run", "E", args]);
is not ideal either.
Any clean possible solution ?
This is impossible without giving the service unlimited root-level access over the host. This is not a privilege you usually want to give to processes with network-facing services.
The best approach for what you're describing is to make the "A" image self-contained by just adding the "E" executable to it. Depending on what kind of executable it is, you might be able to install it with a package manager or otherwise make it available:
FROM node
# Some things are installable via APT
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install -y --no-install-recommends \
e-executable
# Or sometimes you have an executable or tarball available locally
ADD f-executable.tar.gz /usr/local
# Routine stuff for a Node app
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
The alternative approach is to bind-mount the host’s Docker socket into the container. As previously mentioned, this gives the service unlimited root-level access over the host. Common pitfalls to this approach include shell injection attacks that allow a caller to docker run -v /:/host ..., filesystem permission problems, and directory mapping issues where the left-hand side of a docker run -v option is always a host path even if it’s being launched from a container. I’d pretty strongly suggest avoiding this path.
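For reference only, if you do accept those risks, the bind-mount itself is a single flag (the image name is a placeholder, and the image would also need the docker CLI installed):
docker run -v /var/run/docker.sock:/var/run/docker.sock my-a-image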

rsync files from inside a docker container?

We are using Docker for the build/deploy of a NodeJS app. We have a test container that is built by Jenkins, and executes our unit tests. The Dockerfile looks like this:
FROM node:boron
# <snip> some misc unimportant config here
# Run the tests
ENTRYPOINT npm test
I would like to modify this step so that we run npm run test:cov, which runs the unit tests + generates a coverage report HTML file. I've modified the Dockerfile to say:
# Run the tests + generate coverage
ENTRYPOINT npm run test:cov
... which works. Yay!
...But now I'm unsure how to rsync the coverage report (generated by the above command inside the container) to a remote server.
In Jenkins, the above config is invoked this way:
docker run --rm -t test
which, again, runs the above test and exits the container.
How can I add some additional steps after the entrypoint command executes, to (for example) rsync some results out to a remote server?
I am not a "node" expert, so bear with me on the details.
First of all, you may consider if you need a separate Dockerfile for running the tests. Ideally, you'd want your image to be built, then tested, without modifying the actual image.
Building a test-image that uses your NodeJS app as a base image (FROM my-nodejs-image) could do the trick, but may not be needed if all you have to do is run a different command / entrypoint on the image.
Secondly: stateful data (the coverage report falls into that category) should not be stored inside the container (i.e., not stored on the container's filesystem). You want your containers to be ephemeral, and anything that should live beyond the container's lifecycle (anything that should be preserved after the container itself is gone) should be stored outside of the container: either in a "volume", or in a bind-mounted directory.
Let me start with the "separate Dockerfile" point. Let's say, your NodeJS application Dockerfile looks like this;
FROM node:boron
COPY package.json /usr/src/app/
RUN npm install && npm cache clean
COPY . /usr/src/app
CMD [ "npm", "start" ]
You build your image, and tag it, for example, with the commit it was built from;
docker build -t myapp:$GIT_COMMIT .
Once the image is built successfully, you want to test it. Probably a quick test to verify it actually "runs". There are many ways to do that; perhaps something like:
docker run \
-d \
--rm \
--network=test-network \
--name test-{$GIT_COMMIT} \
myapp:$GIT_COMMIT
And a container to test it actually does something;
docker run --rm --network=test-network my-test-image curl test-{$GIT_COMMIT}
Once tested (and the temporary container removed), you can run your coverage tests. However, instead of writing the coverage report inside the container, write it to a volume or bind-mount. You can override the command to run in the container with docker run:
mkdir -p /coverage-reports/{$GIT_COMMIT}
docker run \
--rm \
--name test-{$GIT_COMMIT}\
-v /coverage-reports/{$GIT_COMMIT}:/usr/src/app/coverage \
myapp:$GIT_COMMIT npm run test:cov
The commands above:
Create a unique local directory to store the test artifacts (coverage report)
Run the image you built (and tagged myapp:$GIT_COMMIT)
Bind-mount /coverage-reports/{$GIT_COMMIT} into the container at /usr/src/app/coverage
Run the coverage tests (which will write to /usr/src/app/coverage if I'm not mistaken - again, not a Node expert)
Remove the container once it exits
After the container exits, the coverage report is stored in /coverage-reports/{$GIT_COMMIT} on the host. You can use your regular tools to rsync those where you want.
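For example, a sketch of that final step (the remote host and destination path are placeholders):
rsync -az /coverage-reports/${GIT_COMMIT}/ user@reports.example.com:/var/reports/${GIT_COMMIT}/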
As an alternative, you can use a volume plugin to write the results to (e.g.) an s3 bucket, which saves you from having to rsync the results.
Once tests are successful, you can docker tag the image to bump your application's version (e.g. docker tag myapp:$GIT_COMMIT myapp:1.0.12345), docker push it to your registry, and deploy the new version.
Make a script to execute as the entrypoint and put the commands in the script. You pass in args when calling docker run and they get passed to the script.
The docs have an example of the postgres image's script. You can build off that.
Docker Entrypoint Docs

Docker & PM2: String based CMD with environment variables

I'm currently using the shell-form of CMD in Docker for launching my node app:
CMD /usr/src/app/node_modules/.bin/trifid --config $TRIFID_CONFIG
The env var TRIFID_CONFIG is set to a default in the Dockerfile:
ENV TRIFID_CONFIG config.customer.json
This makes it easy to pass another config file for dev-environments for example.
Now I'm trying to switch this to PM2 for production. However, it looks like all the PM2 samples use the exec form, which from what I understand does not evaluate env vars. I tried the shell form with PM2:
CMD pm2-docker /usr/src/app/node_modules/trifid/server.js --config $TRIFID_CONFIG
But it looks like the variable is not evaluated this way; it falls back to the default on execution.
What would be the proper way to handle this with PM2 inside a Docker image?
I had a discussion on GitHub and meanwhile figured it out:
CMD pm2-docker /usr/src/app/node_modules/.bin/trifid -- --config $TRIFID_CONFIG
So the trick is to use -- after the command; the rest will be passed as arguments. With the shell form, env vars do get evaluated properly.
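Putting it together, the relevant Dockerfile lines end up as (a sketch based on the snippets in this thread):
ENV TRIFID_CONFIG config.customer.json
# shell form, so $TRIFID_CONFIG is evaluated at container start;
# everything after -- is passed through to trifid itself
CMD pm2-docker /usr/src/app/node_modules/.bin/trifid -- --config $TRIFID_CONFIG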
