Docker CMD with environment variable while running node application - node.js

I need to pass an environment variable to node like below:
RAZZLE_ENV=production node build/server.js
How can I achieve this with the Docker CMD instruction? My current config is like this:
CMD [ 'node', 'build/server.js' ]
I did change it to this:
CMD [ 'RAZZLE_ENV=production node', 'build/server.js' ]
But it does not work as expected, and the container does not even start.
UPDATE: the error is:
Cannot find module /app/RAZZLE_ENV=production node

Dockerfile
# Use ARG so that it can be overridden at build time
ARG ARG_RAZZLE_ENV=development
# Set environment variable based on ARG
ENV RAZZLE_ENV=$ARG_RAZZLE_ENV
CMD [ "node", "build/server.js" ]
Pass ARG during build:
docker build --build-arg ARG_RAZZLE_ENV=production . -t name:tag
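Since values set with ENV persist in the final image and can be overridden at run time, you can also switch environments without rebuilding, reusing the image name from the build command above:
docker run -e RAZZLE_ENV=production name:tag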

Related

Running a node.js script in Docker

I have a Node.js script that, when executed, just runs a process to copy two tables from one database to another. If I run it locally it works as it should, so there is no coding issue.
My problem is, I want to put that Node.js program inside a Docker image and execute the image (and the Node.js script) when I need it. I created the image, but when I run it, it just says that it was executed x amount of time ago, and it doesn't do what the script does.
Can anyone explain what I can do to accomplish this?
Steps are:
I need to pass an optional parameter to npm, like: npm start Initial.
I need to be able to do the same inside the container.
I have a file.sh that do something like this:
#!/bin/bash
if [ $1 = "Initial" ]; then
  npm start $1
else
  npm start
fi
But again, when I run the container with something like this:
docker run [image-name] Initial
It doesn't give me any error, but it is not executing my Node.js script. My Dockerfile is something like this:
...
WORKDIR /usr/src/app
RUN npm install
COPY ./ ./
RUN ["chmod", "+x", "/usr/src/app/file.sh"]
ENTRYPOINT ["/usr/src/app/file.sh"]
You did not share the base image, but one issue might be that the base image is based on Alpine, so the #!/bin/bash shebang will not work because Bash is not installed there. You also need proper variable quoting to avoid an error when the argument is empty.
#!/bin/sh
if [ "${1}" = "Initial" ]; then
npm start "${1}"
else
npm start
fi
Here is a working example that you can try:
git clone https://github.com/Adiii717/docker-npm-argument.git
cd docker-npm-argument;
docker-compose build
docker-compose up
or, to pass arguments:
docker-compose run docker-npm-argument argument1 arguments
To check the same with the docker run command:
docker run --rm docker-npm-argument "Initial"
Output:
Args passed to docker run command are [ initial ]
starting application
> app@0.0.0 start /app
> node app.js "initial"
Node process arguments [ '/usr/local/bin/node', '/app/app.js', 'initial' ]
You need to pass an environment variable to your container. You can use -e:
docker run -e "foo=bar" [image-name]
Then you have access to the foo environment variable in your file.sh.
https://docs.docker.com/engine/reference/run/#env-environment-variables
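For example, a minimal sketch of file.sh branching on such a variable instead of a positional argument (here checking for the same "Initial" value as in the question):
#!/bin/sh
# set via: docker run -e foo=Initial [image-name]
if [ "${foo}" = "Initial" ]; then
  npm start "${foo}"
else
  npm start
fi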

Dockerized React App failed to bind to $PORT on Heroku

I'm trying to deploy a Dockerized React app to Heroku, but keep getting the "R10: Failed to bind to $PORT" error on Heroku.
The Dockerized app runs perfectly fine when I docker run it locally.
My Dockerfile looks like the following:
FROM node:10.15.3
RUN mkdir -p /app
WORKDIR /app
COPY . .
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json /app/package.json
RUN npm install --verbose
RUN npm install serve -g -silent
# start app
RUN npm run build
CMD ["serve", "-l", "tcp://0.0.0.0:${PORT}", "-s", "/app/build"]
I followed the online solution to change the "listening" port of serve to $PORT from Heroku. Now the application is served on Heroku's port according to the logs, but I still get the "Failed to bind to $PORT" error.
Please help!
Variable substitution does not happen in the exec form of CMD; that is why ${PORT} is treated as literal text instead of being expanded. From the Dockerfile CMD documentation:
Unlike the shell form, the exec form does not invoke a command shell. This means that normal shell processing does not happen. For example, CMD [ "echo", "$HOME" ] will not do variable substitution on $HOME. If you want shell processing then either use the shell form or execute a shell directly, for example: CMD [ "sh", "-c", "echo $HOME" ]. When using the exec form and executing a shell directly, as in the case for the shell form, it is the shell that is doing the environment variable expansion, not docker.
Change CMD to
CMD ["sh", "-c", "serve -l tcp://0.0.0.0:${PORT} -s /app/build"]

How to run node server using DEBUG command in docker file?

What is the command to run a node server with DEBUG enabled in Docker? I tried the following commands in my Dockerfile, but no luck:
CMD [ "npm", "DEBUG=* start" ]
CMD [ "DEBUG=*", "npm", "start" ]
I am using the debug npm package for logging.
Could you please help me?
According to the documentation for the debug package, it requires DEBUG to be set as an environment variable, e.g. DEBUG=*,-not_this. You can do that in several ways:
Using the ENV instruction of the Dockerfile:
ENV DEBUG=*
CMD [ "npm", "start" ]
If you want to change the DEBUG variable dynamically, you can set it inside CMD and override it on container start, but then a shell must perform the assignment, because the exec form bypasses shell processing. On a Linux image:
CMD ["sh", "-c", "DEBUG=* npm start"]
For a Windows container it can be:
CMD ["cmd", "/C", "set DEBUG=*&& npm start"]

Passing NODE_ENV to docker to run package.json scripts

This is my dockerfile :
FROM node:6-onbuild
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
ENV PORT 80
EXPOSE ${PORT}
CMD [ "npm","run", "start" ]
and in package.json I do have this :
"scripts": {
"start": "node start.js",
"stagestart": "NODE_ENV=content-staging node start.js"
}
The start script is for production; now I want a way to run the staging script from the Dockerfile. Is there a way to read NODE_ENV inside the Dockerfile, so I can have one Dockerfile which handles both staging and production?
Here are two possible implementations.
FYI: you don't need to mention NODE_ENV in package.json if you already set NODE_ENV at the system level, or set it at build time or run time in Docker.
Here the Dockerfile is the same, but I used the Alpine base image:
FROM node:alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
ENV PORT 3000
ARG DOCKER_ENV
ENV NODE_ENV=${DOCKER_ENV}
RUN if [ "$DOCKER_ENV" = "stag" ] ; then echo your NODE_ENV for stage is $NODE_ENV; \
else echo your NODE_ENV for dev is $NODE_ENV; \
fi
EXPOSE ${PORT}
CMD [ "npm","run", "start" ]
When you build this Dockerfile with this command:
docker build --build-arg DOCKER_ENV=stag -t test-node .
you will see this at the corresponding build layer:
---> Running in a6231eca4d0b your NODE_ENV for stage is stag
When you run this container and check the variable inside it, the output will be:
/usr/src/app # echo $NODE_ENV
stag
Simplest approach: use the same image, but set the environment variable at run time.
Your Dockerfile:
FROM node:alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
ENV PORT 3000
EXPOSE ${PORT}
CMD [ "npm","run", "start" ]
Build and run the image with these commands:
docker build -t test-node .
docker run --name test -e NODE_ENV=content-staging -p 3000:3000 --rm -it test-node ash
When you run this command in the container, you will see:
/usr/src/app # echo $NODE_ENV
content-staging
This is how you can start your Node application with NODE_ENV without setting the environment variable in package.json. If your Node.js configuration is based on NODE_ENV, it will pick up the configuration according to NODE_ENV.
You can use the ENV instruction to make the value available as an environment variable inside the container. Then have an entrypoint script that injects that environment variable (perhaps with something as simple as sed) in place of a placeholder in your package.json file, and then starts your Node application. Obviously this requires a few changes to your Dockerfile with regard to the entrypoint script, etc.
That is how I have achieved such things in the past.
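A minimal sketch of such an entrypoint script, assuming a @NODE_ENV@ placeholder inside package.json (the placeholder and file names are illustrative, not from the answer above):
#!/bin/sh
# entrypoint.sh: substitute the placeholder with the runtime value, then start the app
sed -i "s/@NODE_ENV@/${NODE_ENV}/g" package.json
exec npm run start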

Use docker run command to pass arguments to CMD in Dockerfile

I'm new to Docker and I'm having a hard time setting up the Docker container the way I want. I have a Node.js app that can take two parameters when it starts. For example, I can use
node server.js 0 dev
or
node server.js 1 prod
to switch between production mode and dev mode and determine whether it should turn the cluster on. Now I want to create a Docker image that takes arguments to do the same thing. The only thing I can do so far is adjust the Dockerfile to have the line
CMD [ "node", "server.js", "0", "dev"]
and
docker build -t me/app . to build the image.
Then docker run -p 9000:9000 -d me/app to run the container.
But if I want to switch to prod mode, I need to change the Dockerfile CMD to
CMD [ "node", "server.js", "1", "prod"]
and then kill the old container listening on port 9000 and rebuild the image.
I wish I can have something like
docker run -p 9000:9000 environment=dev cluster=0 -d me/app
to create and run the container with "environment" and "cluster" arguments for the Node.js command, so I don't need to change the Dockerfile and rebuild the image any more. How can I accomplish this?
Make sure your Dockerfile declares an environment variable with ENV:
ENV environment default_env_value
ENV cluster default_cluster_value
The value set with the ENV <key> <value> form can be overridden at run time.
Then you can pass an environment variable with docker run. Note that each variable requires its own -e flag:
docker run -p 9000:9000 -e environment=dev -e cluster=0 -d me/app
Or you can set them through your compose file:
node:
  environment:
    - environment=dev
    - cluster=0
Your Dockerfile CMD can use that environment variable, but, as mentioned in issue 5509, you need to do so in a sh -c form:
CMD ["sh", "-c", "node server.js ${cluster} ${environment}"]
The explanation is that the shell is responsible for expanding environment variables, not Docker. When you use the JSON syntax, you're explicitly requesting that your command bypass the shell and be executed directly.
Same idea with Builder RUN (applies to CMD as well):
Unlike the shell form, the exec form does not invoke a command shell.
This means that normal shell processing does not happen.
For example, RUN [ "echo", "$HOME" ] will not do variable substitution on $HOME. If you want shell processing then either use the shell form or execute a shell directly, for example: RUN [ "sh", "-c", "echo $HOME" ].
When using the exec form and executing a shell directly, as in the case for the shell form, it is the shell that is doing the environment variable expansion, not docker.
Another option is to use ENTRYPOINT to specify that node is the executable to run and CMD to provide the arguments. The docs have an example in Exec form ENTRYPOINT example.
Using this approach, your Dockerfile will look something like
FROM ...
ENTRYPOINT [ "node", "server.js" ]
CMD [ "0", "dev" ]
Running it in dev would use the same command
docker run -p 9000:9000 -d me/app
and running it in prod you would pass the parameters to the run command
docker run -p 9000:9000 -d me/app 1 prod
You may want to omit CMD entirely and always pass in 0 dev or 1 prod as arguments to the run command. That way you don't accidentally start a prod container in dev or a dev container in prod.
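In that variant the Dockerfile keeps only the exec-form entrypoint, and every docker run must supply both arguments explicitly:
ENTRYPOINT [ "node", "server.js" ]
# no CMD here; run with: docker run -p 9000:9000 -d me/app 1 prod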
Option 1) Use ENV variable
Dockerfile
# we need to specify default values
ENV ENVIRONMENT=production
ENV CLUSTER=1
# there is no need to use parameters array
CMD node server.js ${CLUSTER} ${ENVIRONMENT}
Docker run
$ docker run -d -p 9000:9000 -e ENVIRONMENT=dev -e CLUSTER=0 me/app
Option 2) Pass arguments
Dockerfile
# use exec-form ENTRYPOINT instead of CMD and do not specify any arguments,
# so that arguments passed to docker run are appended to the entrypoint
ENTRYPOINT ["node", "server.js"]
Docker run
Pass arguments after docker image name
$ docker run -p 9000:9000 -d me/app 0 dev
The typical way to do this in Docker containers is to pass in environment variables:
docker run -p 9000:9000 -e NODE_ENV=dev -e CLUSTER=0 -d me/app
Going a bit off topic, build arguments exist to allow you to pass in arguments at build time that manifest as environment variables for use in your docker image build process:
$ docker build --build-arg HTTP_PROXY=http://10.20.30.2:1234 .
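Note that a build argument exists only while the image is building unless you copy it into an ENV, and custom arguments must be declared with ARG before use (HTTP_PROXY and a few other proxy variables are predefined by Docker). A minimal sketch with a hypothetical argument name:
FROM node:alpine
# hypothetical custom build argument, visible only during the build
ARG APP_VERSION=0.0.0
RUN echo "building version ${APP_VERSION}"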
Late joining the discussion. Here's a nifty trick you can use to set default command line parameters while also supporting overriding the default arguments with custom ones:
Step #1: In your Dockerfile, invoke your program like so:
ENV DEFAULT_ARGS "--some-default-flag=123 --foo --bar"
CMD ["/bin/bash", "-c", "./my-nifty-executable ${ARGS:-${DEFAULT_ARGS}}"]
Step #2: You can now invoke the docker image like so:
# this will invoke it with DEFAULT_ARGS
docker run mydockerimage
# but this will invoke the docker image with custom arguments
docker run --env ARGS="--alternative-args --and-then-some=123" mydockerimage
You can also adjust this technique to do much more complex argument-evaluation however you see fit. Bash supports many kinds of one-line constructs to help you towards that goal.
Hope this technique helps some folks out there save a few hours of head-scratching.
Not sure if this helps, but I have used it this way and it worked like a charm:
CMD ["node", "--inspect=0.0.0.0:9229", "--max-old-space-size=256", "/home/api/index.js"]
I found this at docker-compose not setting environment variables with flask
docker-compose.yml
version: '2'
services:
  app:
    image: python:2.7
    environment:
      - BAR=FOO
    volumes:
      - ./app.py:/app.py
    command: python app.py
app.py
import os
print(os.environ["BAR"])
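With those two files in place, starting the service should print the injected value (FOO) from inside the container:
docker-compose up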
