I want to containerize my Nuxt.js application. I could write my own Dockerfile (as mentioned in the Nuxt.js Google Cloud Run docs, for example), but since Cloud Native Buildpacks are here to free us from the need to write those over and over again, I wanted to simply use Paketo.io to build a container from my Nuxt.js app.
I ran
pack build microservice-ui-nuxt-js --path . --builder paketobuildpacks/builder:base
and a container was created successfully. Here's the full log:
$ pack build microservice-ui-nuxt-js --path . --builder paketobuildpacks/builder:base
base: Pulling from paketobuildpacks/builder
Digest: sha256:3e2ee17348bd901e7e0748e0e1ddccdf8a602b624e418927145b5f84ca26f264
Status: Image is up to date for paketobuildpacks/builder:base
base-cnb: Pulling from paketobuildpacks/run
Digest: sha256:b6b1612ab2dfa294514fff2750e8d724287f81e89d5e91209dbdd562ed7f7daf
Status: Image is up to date for paketobuildpacks/run:base-cnb
===> DETECTING
4 of 7 buildpacks participating
paketo-buildpacks/ca-certificates 2.2.0
paketo-buildpacks/node-engine 0.4.0
paketo-buildpacks/npm-install 0.3.0
paketo-buildpacks/npm-start 0.2.0
===> ANALYZING
Previous image with name "microservice-ui-nuxt-js" not found
===> RESTORING
===> BUILDING
Paketo CA Certificates Buildpack 2.2.0
https://github.com/paketo-buildpacks/ca-certificates
Launch Helper: Contributing to layer
Creating /layers/paketo-buildpacks_ca-certificates/helper/exec.d/ca-certificates-helper
Paketo Node Engine Buildpack 0.4.0
Resolving Node Engine version
Candidate version sources (in priority order):
-> ""
<unknown> -> ""
Selected Node Engine version (using ): 14.17.0
Executing build process
Installing Node Engine 14.17.0
Completed in 5.795s
Configuring build environment
NODE_ENV -> "production"
NODE_HOME -> "/layers/paketo-buildpacks_node-engine/node"
NODE_VERBOSE -> "false"
Configuring launch environment
NODE_ENV -> "production"
NODE_HOME -> "/layers/paketo-buildpacks_node-engine/node"
NODE_VERBOSE -> "false"
Writing profile.d/0_memory_available.sh
Calculates available memory based on container limits at launch time.
Made available in the MEMORY_AVAILABLE environment variable.
Paketo NPM Install Buildpack 0.3.0
Resolving installation process
Process inputs:
node_modules -> "Not found"
npm-cache -> "Not found"
package-lock.json -> "Found"
Selected NPM build process: 'npm ci'
Executing build process
Running 'npm ci --unsafe-perm --cache /layers/paketo-buildpacks_npm-install/npm-cache'
Completed in 14.988s
Configuring launch environment
NPM_CONFIG_LOGLEVEL -> "error"
Configuring environment shared by build and launch
PATH -> "$PATH:/layers/paketo-buildpacks_npm-install/modules/node_modules/.bin"
Paketo NPM Start Buildpack 0.2.0
Assigning launch processes
web: nuxt start
===> EXPORTING
Adding layer 'paketo-buildpacks/ca-certificates:helper'
Adding layer 'paketo-buildpacks/node-engine:node'
Adding layer 'paketo-buildpacks/npm-install:modules'
Adding layer 'paketo-buildpacks/npm-install:npm-cache'
Adding 1/1 app layer(s)
Adding layer 'launcher'
Adding layer 'config'
Adding layer 'process-types'
Adding label 'io.buildpacks.lifecycle.metadata'
Adding label 'io.buildpacks.build.metadata'
Adding label 'io.buildpacks.project.metadata'
Setting default process type 'web'
Saving microservice-ui-nuxt-js...
*** Images (5eb36ba20094):
microservice-ui-nuxt-js
Adding cache layer 'paketo-buildpacks/node-engine:node'
Adding cache layer 'paketo-buildpacks/npm-install:modules'
Adding cache layer 'paketo-buildpacks/npm-install:npm-cache'
Successfully built image microservice-ui-nuxt-js
Now running
docker run --rm -i --tty -p 3000:3000 microservice-ui-nuxt-js
I hoped to see my app in the browser at http://localhost:3000. But no luck! My app doesn't seem to be fully running:
Although my console looks good:
What am I missing?
I read about the HOST variable in this post, which is exactly what the whole problem is about! And then I also found this answer, since I now knew what to look for. The Nuxt.js configuration docs state it as well:
By default, the Nuxt.js development server host is localhost which is
only accessible from within the host machine. In order to view your
app on another device you need to modify the host.
And the crucial config is also mentioned:
Host '0.0.0.0' is designated to tell Nuxt.js to resolve a host
address, which is accessible to connections outside of the host
machine (e.g. LAN)
So all we have to do is define a Docker environment variable with --env "HOST=0.0.0.0" and run the Paketo-built container like this:
docker run --rm -i --tty --env "HOST=0.0.0.0" -p 3000:3000 microservice-ui-nuxt-js
Now the browser should also show our app at http://localhost:3000:
You can try it yourself using the GitHub Container Registry published image of the example project:
docker run --rm -i --tty --env "HOST=0.0.0.0" -p 3000:3000 ghcr.io/jonashackt/microservice-ui-nuxt-js:latest
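If you don't want to pass the environment variable on every docker run, the host can also be configured in the Nuxt.js project itself, as the configuration docs quoted above describe. A minimal sketch of what that could look like in nuxt.config.js (the server block is the documented Nuxt.js option; adapt it to your project):
// nuxt.config.js (sketch): let Nuxt.js listen on all interfaces,
// so the container works without setting HOST=0.0.0.0 at runtime
export default {
  server: {
    host: '0.0.0.0', // accessible from outside the container
    port: 3000
  }
}
With that in place, the plain docker run --rm -i --tty -p 3000:3000 microservice-ui-nuxt-js should be enough.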
This should have been routine, but I haven't been able to find a way. I am using Node with Docker for packaging. I have three environments: dev, qa, and prod, as usual. I have three configuration files with numerous variables: dev-config.json, qa-config.json, and prod-config.json. I need Docker to pick up the right file and package it as config.json inside the Docker image. How should I go about this? Thanks.
To build an image with only the correct config file included, you can use --build-arg.
Add
ARG CONFIG_FILE
...
COPY $CONFIG_FILE config.json
in your Dockerfile and then use
docker build --build-arg CONFIG_FILE=prod-config.json .
to build your image
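Put together, a minimal Dockerfile could look like this sketch (the Node base image, the src/ layout, and the start command are assumptions, adjust them to your project):
# Dockerfile (sketch): bake the environment-specific config into the image
FROM node:14-alpine
WORKDIR /app
# decided at build time via --build-arg, defaults to the dev config
ARG CONFIG_FILE=dev-config.json
COPY package*.json ./
RUN npm ci
COPY src/ ./src/
# only the selected environment file is copied into the image, as config.json
COPY $CONFIG_FILE config.json
CMD ["node", "src/index.js"]
Then docker build --build-arg CONFIG_FILE=qa-config.json -t myapp:qa . produces the QA image, and so on for the other environments.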
EDIT
The other possibility is to put all your config files into your image and decide which one to use when you start the container. For instance, you could read the desired config file name from an environment variable (at run time of the container, not to be confused with ARG and --build-arg at build time of the image), which can be set when you start your container.
I.e., somewhere in your Node app:
// read the config file name from the environment variable
// and have a fallback if the environment variable is not defined
const configfilename = process.env.CONFIG_FILE || "config.json";
and when you start your container you can do
docker run --env CONFIG_FILE=prod-config.json YOURIMAGE
to set the environment variable. This way, you will have only one image.
A third possibility would be to not add your configs to the image at all, but load them from an external volume that you mount when you run the container. If you have different volumes for different configs, you can again decide at startup which volume to mount. As you can give your config file the same name on every volume, your app does not need to be aware of any environment variables; you just have to make sure you use the correct path to your config file and that all volumes have the same file structure.
I.e., in your Node app:
const configfile = '/config/config.json';
and then you start your container mounting the correct config directory
docker run -v /host/path/to/prod-config:/config YOURIMAGE
I have a Jenkins pipeline that runs on a Docker agent. When I run ember build I get an error.
Any idea what I should do? I use
image 'node:latest'
and I get this error:
+ ./node_modules/.bin/ember build --env production
WARNING: Node v14.3.0 is not tested against Ember CLI on your platform. We recommend that you use the most-recent "Active LTS" version of Node.js. See https://git.io/v7S5n for details.
Could not start watchman
Visit https://ember-cli.com/user-guide/#watchman for more info.
Building
A system error occurred: uv_os_get_passwd returned ENOENT (no such file or directory)
It turned out all I needed to do was add a Docker volume mapping from /etc/passwd to /etc/passwd, like this:
agent {
docker {
image 'node:12'
args "-v /etc/passwd:/etc/passwd"
reuseNode true
}
}
This issue could be masking another issue with a missing or readonly path when using node-gyp in containers.
The os.userInfo() usage is part of the eaccesFallback which should only be called if a file path cannot be accessed.
Switch on verbose logging (npm_config_loglevel=verbose) to log the path which cannot be accessed and mount/fix that instead.
In my experience, this fixed the underlying issue and avoided mounting /etc/passwd which may not always be possible or could be considered insecure.
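For example, in a Jenkins Docker agent like the one above, the verbose log level could be passed in as an environment variable instead of mounting /etc/passwd (a sketch, assuming your pipeline uses the same agent block):
agent {
    docker {
        image 'node:12'
        // let npm log the path that cannot be accessed
        args "-e npm_config_loglevel=verbose"
        reuseNode true
    }
}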
I specifically saw this in k8s pods when using electron-builder and had to create an empty volume mount for the .electron-gyp folder:
volumeMounts:
- name: electron-cache
mountPath: /.electron-gyp
volumes:
- name: electron-cache
emptyDir: {}
I've been experimenting with Docker recently on building some services to play around with and one thing that keeps nagging me has been putting passwords in a Dockerfile. I'm a developer so storing passwords in source feels like a punch in the face. Should this even be a concern? Are there any good conventions on how to handle passwords in Dockerfiles?
It is definitely a concern. Dockerfiles are commonly checked in to repositories and shared with other people. An alternative is to provide any credentials (usernames, passwords, tokens, anything sensitive) as environment variables at runtime. This is possible via the -e argument (for individual vars on the CLI) or the --env-file argument (for multiple variables in a file) to docker run. Read this for using environment variables with docker-compose.
Using --env-file is definitely a safer option since this protects against the secrets showing up in ps or in logs if one uses set -x.
However, env vars are not particularly secure either. They are visible via docker inspect, and hence they are available to any user that can run docker commands. (Of course, any user that has access to docker on the host also has root anyway.)
My preferred pattern is to use a wrapper script as the ENTRYPOINT or CMD. The wrapper script can first import secrets from an outside location in to the container at run time, then execute the application, providing the secrets. The exact mechanics of this vary based on your run time environment. In AWS, you can use a combination of IAM roles, the Key Management Service, and S3 to store encrypted secrets in an S3 bucket. Something like HashiCorp Vault or credstash is another option.
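A rough sketch of that wrapper pattern (the secrets location, tooling, and paths are all assumptions, not a drop-in script):
#!/bin/sh
# entrypoint.sh (sketch): fetch secrets at container start, then exec the real app
set -e

# e.g. pull a secrets file from S3 / Vault / credstash -- replace with your mechanism
aws s3 cp "s3://my-secrets-bucket/app-secrets.env" /tmp/secrets.env

# export the secrets into the process environment, then remove the file
set -a
. /tmp/secrets.env
set +a
rm /tmp/secrets.env

# hand over PID 1 to the actual application
exec "$@"
In the Dockerfile you would then set something like ENTRYPOINT ["./entrypoint.sh"] and CMD ["node", "server.js"], so the secrets only ever exist inside the running container.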
AFAIK there is no optimal pattern for using sensitive data as part of the build process. In fact, I have an SO question on this topic. You can use docker-squash to remove layers from an image. But there's no native functionality in Docker for this purpose.
You may find shykes comments on config in containers useful.
Our team avoids putting credentials in repositories, so that means they're not allowed in Dockerfiles either. Our best practice within applications is to read creds from environment variables.
We solve for this using docker-compose.
Within docker-compose.yml, you can specify a file that contains the environment variables for the container:
env_file:
- .env
Make sure to add .env to .gitignore, then set the credentials within the .env file like:
SOME_USERNAME=myUser
SOME_PWD_VAR=myPwd
Store the .env file locally or in a secure location where the rest of the team can grab it.
See: https://docs.docker.com/compose/environment-variables/#/the-env-file
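Putting it together, a minimal docker-compose.yml for this setup might look like the following (service name and image are placeholders):
version: '3'
services:
  app:
    image: myapp:latest
    # credentials come from the untracked .env file, not from the Dockerfile
    env_file:
      - .env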
Docker now (version 1.13 or 17.06 and higher) has support for managing secret information. Here's an overview and more detailed documentation
Similar features exist in Kubernetes and DC/OS.
You should never add credentials to a container unless you're OK with broadcasting the creds to whoever can download the image. In particular, doing an ADD creds and later RUN rm creds is not secure, because the creds file remains in the final image in an intermediate filesystem layer. It's easy for anyone with access to the image to extract it.
The typical solution I've seen, when you need creds to check out dependencies and such, is to use one container to build another. I.e., typically you have some build environment in your base container and you need to invoke that to build your app container. So the simple solution is to add your app source and then RUN the build commands. This is insecure if you need creds in that RUN. Instead, put your source into a local directory, run (as in docker run) the container to perform the build step with the local source directory mounted as a volume and the creds either injected or mounted as another volume. Once the build step is complete, you build your final container by simply ADDing the local source directory, which now contains the built artifacts.
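Sketched with the docker CLI (image names, paths, and the build command are placeholders for whatever your build environment actually needs):
# 1. run the build step in a throwaway container, mounting source and creds as volumes
docker run --rm \
  -v "$PWD/src":/workspace \
  -v "$HOME/.ssh":/root/.ssh:ro \
  my-build-env:latest \
  sh -c "cd /workspace && npm ci && npm run build"

# 2. the final image only adds the local source directory with the built artifacts,
#    so no credentials end up in any image layer
docker build -t my-app:latest ./src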
I'm hoping Docker adds some features to simplify all this!
Update: it looks like the method going forward will be to have nested builds. In short, the Dockerfile would describe a first container that is used to build the run-time environment, and then a second nested container build that can assemble all the pieces into the final container. This way the build-time stuff isn't in the second container. Think of a Java app where you need the JDK for building the app but only the JRE for running it. There are a number of proposals being discussed; best to start from https://github.com/docker/docker/issues/7115 and follow some of the links for alternate proposals.
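This eventually landed in Docker as multi-stage builds. A hedged sketch of the Java example above (the image tags and Maven layout are assumptions):
# build stage: needs the full JDK and build tooling
FROM maven:3-openjdk-11 AS build
WORKDIR /build
COPY . .
RUN mvn package

# run stage: only the JRE and the built artifact end up in the final image
FROM openjdk:11-jre-slim
COPY --from=build /build/target/app.jar /app.jar
CMD ["java", "-jar", "/app.jar"]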
An alternative to using environment variables, which can get messy if you have a lot of them, is to use volumes to make a directory on the host accessible in the container.
If you put all your credentials as files in that folder, then the container can read the files and use them as it pleases.
For example:
$ echo "secret" > /root/configs/password.txt
$ docker run -v /root/configs:/cfg ...
In the Docker container:
# echo Password is `cat /cfg/password.txt`
Password is secret
Many programs can read their credentials from a separate file, so this way you can just point the program to one of the files.
run-time only solution
docker-compose also provides a non-swarm mode solution (since v1.11:
Secrets using bind mounts).
The secrets are mounted as files below /run/secrets/ by docker-compose. This solves the problem at run-time (running the container), but not at build-time (building the image), because /run/secrets/ is not mounted at build-time. Furthermore this behavior depends on running the container with docker-compose.
Example:
Dockerfile
FROM alpine
CMD cat /run/secrets/password
docker-compose.yml
version: '3.1'
services:
app:
build: .
secrets:
- password
secrets:
password:
file: password.txt
To build, execute:
docker-compose up -d
Further reading:
mikesir87's blog - Using Docker Secrets during Development
My approach seems to work, but is probably naive. Tell me why it is wrong.
ARGs set during docker build are exposed by the history subcommand, so no go there. However, when running a container, environment variables given in the run command are available to the container, but are not part of the image.
So, in the Dockerfile, do setup that does not involve secret data. Set a CMD of something like /root/finish.sh. In the run command, use environment variables to send secret data into the container. finish.sh then uses the variables essentially to finish build tasks.
To make managing the secret data easier, put it into a file that is loaded by docker run with the --env-file switch. Of course, keep the file secret. .gitignore and such.
For me, finish.sh runs a Python program. It checks to make sure it hasn't run before, then finishes the setup (e.g., copies the database name into Django's settings.py).
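The answer above uses a Python program; a stripped-down shell sketch of the same idea (marker file, settings path, and variable names are just examples) could look like:
#!/bin/sh
# finish.sh (sketch): finish setup at first start using secrets from the environment
set -e

MARKER=/root/.finish_done
if [ ! -f "$MARKER" ]; then
    # e.g. copy the database name passed via --env-file into the app's settings
    sed -i "s/__DB_NAME__/${DB_NAME}/" /app/settings.py
    touch "$MARKER"
fi

exec python /app/manage.py runserver 0.0.0.0:8000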
There is a new docker command for "secrets" management. But that only works for swarm clusters.
docker service create \
  --name my-iis \
  --publish published=8000,target=8000 \
  --secret src=homepage,target="\inetpub\wwwroot\index.html" \
  microsoft/iis:nanoserver
The issue 13490 "Secrets: write-up best practices, do's and don'ts, roadmap" just got a new update in Sept. 2020, from Sebastiaan van Stijn:
Build time secrets are now possible when using buildkit as builder; see the blog post "Build secrets and SSH forwarding in Docker 18.09", Nov. 2018, from Tõnis Tiigi.
The documentation is updated: "Build images with BuildKit"
The RUN --mount option used for secrets will graduate to the default (stable) Dockerfile syntax soon.
That last part is new (Sept. 2020)
New Docker Build secret information
The new --secret flag for docker build allows the user to pass secret information to be used in the Dockerfile for building docker images in a safe way that will not end up stored in the final image.
id is the identifier to pass into the docker build --secret.
This identifier is associated with the RUN --mount identifier to use in the Dockerfile.
Docker does not use the filename of where the secret is kept outside of the Dockerfile, since this may be sensitive information.
dst renames the secret file to a specific file in the Dockerfile RUN command to use.
For example, with a secret piece of information stored in a text file:
$ echo 'WARMACHINEROX' > mysecret.txt
And with a Dockerfile that specifies use of a BuildKit frontend docker/dockerfile:1.0-experimental, the secret can be accessed.
For example:
# syntax = docker/dockerfile:1.0-experimental
FROM alpine
# shows secret from default secret location:
RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret
# shows secret from custom secret location:
RUN --mount=type=secret,id=mysecret,dst=/foobar cat /foobar
This Dockerfile is only to demonstrate that the secret can be accessed. As you can see, the secret is printed in the build output. The final image built will not contain the secret file:
$ docker build --no-cache --progress=plain --secret id=mysecret,src=mysecret.txt .
...
#8 [2/3] RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret
#8 digest: sha256:5d8cbaeb66183993700828632bfbde246cae8feded11aad40e524f54ce7438d6
#8 name: "[2/3] RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret"
#8 started: 2018-08-31 21:03:30.703550864 +0000 UTC
#8 1.081 WARMACHINEROX
#8 completed: 2018-08-31 21:03:32.051053831 +0000 UTC
#8 duration: 1.347502967s
#9 [3/3] RUN --mount=type=secret,id=mysecret,dst=/foobar cat /foobar
#9 digest: sha256:6c7ebda4599ec6acb40358017e51ccb4c5471dc434573b9b7188143757459efa
#9 name: "[3/3] RUN --mount=type=secret,id=mysecret,dst=/foobar cat /foobar"
#9 started: 2018-08-31 21:03:32.052880985 +0000 UTC
#9 1.216 WARMACHINEROX
#9 completed: 2018-08-31 21:03:33.523282118 +0000 UTC
#9 duration: 1.470401133s
...
The 12-Factor app methodology says that any configuration should be stored in environment variables.
Docker Compose can do variable substitution in its configuration, so that can be used to pass passwords from the host to Docker.
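For example, Compose substitutes ${VARIABLE} references in docker-compose.yml from the shell environment of the host (the service and variable names here are placeholders):
# docker-compose.yml (sketch): the password never appears in the file itself
services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
Started with DB_PASSWORD=s3cret docker-compose up -d, the value is passed from the host environment into the container.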
Starting from version 20.10, besides using a secret file, you can also provide secrets directly via an environment variable.
buildkit: secrets: allow providing secrets with env moby/moby#41234 docker/cli#2656 moby/buildkit#1534
Support --secret id=foo,env=MY_ENV as an alternative for storing a secret value to a file.
--secret id=GIT_AUTH_TOKEN will load env if it exists and the file does not.
secret-file:
THIS IS SECRET
Dockerfile:
# syntax = docker/dockerfile:1.3
FROM python:3.8-slim-buster
COPY build-script.sh .
RUN --mount=type=secret,id=mysecret ./build-script.sh
build-script.sh:
cat /run/secrets/mysecret
Execution:
$ export MYSECRET=theverysecretpassword
$ export DOCKER_BUILDKIT=1
$ docker build --progress=plain --secret id=mysecret,env=MYSECRET -t abc:1 . --no-cache
......
#9 [stage-0 3/3] RUN --mount=type=secret,id=mysecret ./build-script.sh
#9 sha256:e32137e3eeb0fe2e4b515862f4cd6df4b73019567ae0f49eb5896a10e3f7c94e
#9 0.931 theverysecretpassword#9 DONE 1.5s
......
With Docker v1.9 and later you can use the ARG instruction to consume arguments passed on the command line at build time. Simply use the --build-arg flag. That way you can avoid keeping explicit passwords (or other sensitive information) in the Dockerfile and pass them on the fly.
Sources: https://docs.docker.com/engine/reference/commandline/build/ and http://docs.docker.com/engine/reference/builder/#arg
Example:
Dockerfile
FROM busybox
ARG user
RUN echo "user is $user"
build image command
docker build --build-arg user=capuccino -t test_arguments -f path/to/dockerfile .
during the build it prints:
$ docker build --build-arg user=capuccino -t test_arguments -f ./test_args.Dockerfile .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM busybox
---> c51f86c28340
Step 2 : ARG user
---> Running in 43a4aa0e421d
---> f0359070fc8f
Removing intermediate container 43a4aa0e421d
Step 3 : RUN echo "user is $user"
---> Running in 4360fb10d46a
**user is capuccino**
---> 1408147c1cb9
Removing intermediate container 4360fb10d46a
Successfully built 1408147c1cb9
Hope it helps! Bye.
Something simple like this will work, I guess, if you are in a bash shell:
read -sp "db_password: " db_password
docker build --build-arg mysql_db_password="$db_password" -t <image_name> .
Simply read it silently and pass it as a build argument when building the Docker image. You need to accept the variable as an ARG in the Dockerfile.
While I totally agree there is no simple solution, there continues to be a single point of failure: either the Dockerfile, etcd, and so on. Apcera has a plan that looks like sidekick-style dual authentication. In other words, two containers cannot talk unless there is an Apcera configuration rule. In their demo the uid/pwd was in the clear and could not be reused until the admin configured the linkage. For this to work, however, it probably means patching Docker or at least the network plugin (if there is such a thing).