CMD with pipe in Dockerfile doesn't forward - node.js

I have this command to start a Node.js webserver like this:
node --inspect=0.0.0.0:9229 --preserve-symlinks /app/api/dist/server.js | pino-pretty
I'm placing it into a Dockerfile as the CMD:
CMD ["node", "--inspect=0.0.0.0:9229", "--preserve-symlinks" ,"/app/api/dist/server.js", "|","pino-pretty"]
The service starts when calling docker run but the | is ignored so no logs are forwarded to pino-pretty.
What am I doing wrong here?
I could put the whole command into a start.sh or use CMD ["npm", "run", "start:prod"] but I want to understand the core problem.

A pipe is a shell construct, i.e. a feature of /bin/sh, /bin/bash, and similar shells. When you define CMD with the JSON/exec syntax, you are explicitly telling Docker to run the command without a shell. Therefore you need to either run the command in a script, call a shell explicitly, or use the string/shell syntax to have Docker execute the command with a shell:
CMD node --inspect=0.0.0.0:9229 --preserve-symlinks /app/api/dist/server.js | pino-pretty
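Equivalently, if you want to keep the exec form, you can call the shell explicitly and pass the whole pipeline as a single string; a sketch of the same command wrapped in sh -c:

```dockerfile
# Exec form, but delegating to a shell so the pipe is interpreted
CMD ["sh", "-c", "node --inspect=0.0.0.0:9229 --preserve-symlinks /app/api/dist/server.js | pino-pretty"]
```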

Related

Dockerized React App failed to bind to $PORT on Heroku

I'm trying to deploy a Dockerized React App to Heroku, but keep getting the
"R10: Failed to bind to $PORT error on Heroku"
.
The dockerized app runs perfectly fine when I docker run it locally.
My docker file looks like the following:
FROM node:10.15.3
RUN mkdir -p /app
WORKDIR /app
COPY . .
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json /app/package.json
RUN npm install --verbose
RUN npm install serve -g -silent
# start app
RUN npm run build
CMD ["serve", "-l", "tcp://0.0.0.0:${PORT}", "-s", "/app/build"]
I followed the online solution to change the "listening" port on serve to $PORT from Heroku. Now the application is served on Heroku's port according to logs, but still, get the
"Failed to bind to $PORT error"
.
Please help!
Variable substitution does not happen in the exec form of CMD, which is why ${PORT} is treated as literal text instead of being replaced with its value.
Unlike the shell form, the exec form does not invoke a command shell. This means that normal shell processing does not happen. For example, CMD [ "echo", "$HOME" ] will not do variable substitution on $HOME. If you want shell processing then either use the shell form or execute a shell directly, for example: CMD [ "sh", "-c", "echo $HOME" ]. When using the exec form and executing a shell directly, as in the case for the shell form, it is the shell that is doing the environment variable expansion, not docker.
(from the Docker documentation on CMD)
Change CMD to
CMD ["sh", "-c", "serve -l tcp://0.0.0.0:${PORT} -s /app/build"]
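The expansion behavior can be checked outside Docker; assuming a POSIX shell, sh -c expands ${PORT} from the environment, while the exec form would pass the string through untouched:

```shell
# sh -c reads PORT from the environment and substitutes it
PORT=5000 sh -c 'echo "serve -l tcp://0.0.0.0:${PORT}"'
# prints: serve -l tcp://0.0.0.0:5000
```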

How to run node server using DEBUG command in docker file?

What is the command for running a node server in docker with DEBUG? I tried the following commands in the dockerfile but had no luck.
CMD [ "npm", "DEBUG=* start" ]
CMD [ "DEBUG=*", "npm", "start" ]
I am using debug npm for logging.
Could you please help me?
According to documentation on npm debug, it requires DEBUG to be an environment variable, like set DEBUG=*,-not_this. In this case you can do it in several ways:
Using ENV command of Dockerfile:
ENV DEBUG=*
Note that start is the npm script, not part of the DEBUG value, so it belongs in CMD ["npm", "start"] rather than in the variable.
If you want to dynamically change the DEBUG variable, you can put it into CMD and override on container start, but in this case you have to follow your OS rules for environment variable definition.
For Windows it can be:
CMD ["cmd.exe", "/c", "set DEBUG=* && npm start"]
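For the usual Linux-based node images, a minimal Dockerfile sketch combining the two pieces (assuming the app is started with npm start) would look like:

```dockerfile
# DEBUG is read by the debug package at runtime;
# it can still be overridden with `docker run -e DEBUG=...`
ENV DEBUG=*
CMD ["npm", "start"]
```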

Npm not found when running docker container from node image

# Dockerfile
FROM node:7-alpine
RUN mkdir -p /src/app
WORKDIR /src/app
COPY package.json /src/app/package.json
RUN npm install
COPY . /src/app
EXPOSE 3000
CMD ['npm', 'start']
I'm trying to complete a katacoda.com exercise for Dockerizing nodejs applications with the Dockerfile above. The build completes but running the image quits immediately and in the docker logs I see:
/bin/sh: [npm,: not found
I tried running the container in interactive mode with docker -it nodeapp /bin/bash which raised the error docker: Error response from daemon: oci runtime error: container_linux.go:262: starting container process caused "exec: \"/bin/bash\": stat /bin/bash: no such file or directory". So I'm not sure what is going on here.
The reason it doesn't work is the single quotes:
CMD ['npm', 'start']
should be
CMD ["npm", "start"]
When you don't use double quotes, the line is not valid JSON, so Docker falls back to treating it as a shell-form command and the shell tries to run the literal token [npm, as a program.
That is why you see the error [npm,: not found
I had the same symptom but the problem was slightly different. Writing here in case google leads others in my situation to this link. For me the issue was forgetting commas in the CMD, so the solution was going from CMD ["npm" "start"] to CMD ["npm", "start"].

Use docker run command to pass arguments to CMD in Dockerfile

I'm new to Docker and I'm having a hard time setting up the docker container the way I want. I have a nodejs app that takes two parameters at startup. For example, I can use
node server.js 0 dev
or
node server.js 1 prod
to switch between production and dev mode and to decide whether to turn the cluster on. Now I want to create a docker image that takes arguments to do the same thing. So far, the only thing I can do is adjust the Dockerfile to have the line
CMD [ "node", "server.js", "0", "dev"]
and
docker build -t me/app . to build the docker.
Then docker run -p 9000:9000 -d me/app to run the docker.
But If I want to switch to prod mode, I need to change the Dockerfile CMD to be
CMD [ "node", "server.js", "1", "prod"] ,
and I need to kill the old one listening on port 9000 and rebuild the image.
I wish I can have something like
docker run -p 9000:9000 environment=dev cluster=0 -d me/app
to create an image and run the nodejs command with "environment" and "cluster" arguments, so I don't need to change the Dockerfile and rebuild the docker any more. How can I accomplish this?
Make sure your Dockerfile declares an environment variable with ENV:
ENV environment default_env_value
ENV cluster default_cluster_value
The ENV <key> <value> form can be replaced inline.
Then you can pass an environment variable with docker run. Note that each variable requires its own -e flag.
docker run -p 9000:9000 -e environment=dev -e cluster=0 -d me/app
Or you can set them through your compose file:
node:
  environment:
    - environment=dev
    - cluster=0
Your Dockerfile CMD can use that environment variable, but, as mentioned in issue 5509, you need to do so in a sh -c form:
CMD ["sh", "-c", "node server.js ${cluster} ${environment}"]
The explanation is that the shell is responsible for expanding environment variables, not Docker. When you use the JSON syntax, you're explicitly requesting that your command bypass the shell and be executed directly.
Same idea with Builder RUN (applies to CMD as well):
Unlike the shell form, the exec form does not invoke a command shell.
This means that normal shell processing does not happen.
For example, RUN [ "echo", "$HOME" ] will not do variable substitution on $HOME. If you want shell processing then either use the shell form or execute a shell directly, for example: RUN [ "sh", "-c", "echo $HOME" ].
When using the exec form and executing a shell directly, as in the case for the shell form, it is the shell that is doing the environment variable expansion, not docker.
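The difference can be simulated directly in a terminal; when no shell sits between the environment and the program, the argument $HOME arrives as a literal string:

```shell
# Without a shell, the argument "$HOME" is passed through literally:
env -i HOME=/home/demo /bin/echo '$HOME'
# prints: $HOME

# With sh -c, the shell expands the variable first:
env -i HOME=/home/demo sh -c 'echo $HOME'
# prints: /home/demo
```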
Another option is to use ENTRYPOINT to specify that node is the executable to run and CMD to provide the arguments. The docs have an example in Exec form ENTRYPOINT example.
Using this approach, your Dockerfile will look something like
FROM ...
ENTRYPOINT [ "node", "server.js" ]
CMD [ "0", "dev" ]
Running it in dev would use the same command
docker run -p 9000:9000 -d me/app
and running it in prod you would pass the parameters to the run command
docker run -p 9000:9000 -d me/app 1 prod
You may want to omit CMD entirely and always pass in 0 dev or 1 prod as arguments to the run command. That way you don't accidentally start a prod container in dev or a dev container in prod.
Option 1) Use ENV variable
Dockerfile
# we need to specify default values
ENV ENVIRONMENT=production
ENV CLUSTER=1
# there is no need to use parameters array
CMD node server.js ${CLUSTER} ${ENVIRONMENT}
Docker run
$ docker run -d -p 9000:9000 -e ENVIRONMENT=dev -e CLUSTER=0 me/app
Option 2) Pass arguments
Dockerfile
# use entrypoint instead of CMD and do not specify any arguments
ENTRYPOINT ["node", "server.js"]
Docker run
Pass arguments after docker image name
$ docker run -p 9000:9000 -d me/app 0 dev
The typical way to do this in Docker containers is to pass in environment variables:
docker run -p 9000:9000 -e NODE_ENV=dev -e CLUSTER=0 -d me/app
Going a bit off topic, build arguments exist to allow you to pass in arguments at build time that manifest as environment variables for use in your docker image build process:
$ docker build --build-arg HTTP_PROXY=http://10.20.30.2:1234 .
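For a build argument to be visible inside the build, it also has to be declared with ARG in the Dockerfile; a minimal sketch:

```dockerfile
# Declare the build argument so --build-arg can populate it
ARG HTTP_PROXY
# During the build, it behaves like an environment variable in RUN steps
RUN echo "building via proxy: ${HTTP_PROXY:-none}"
```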
Late joining the discussion. Here's a nifty trick you can use to set default command line parameters while also supporting overriding the default arguments with custom ones:
Step#1 In your dockerfile invoke your program like so:
ENV DEFAULT_ARGS "--some-default-flag=123 --foo --bar"
CMD ["/bin/bash", "-c", "./my-nifty-executable ${ARGS:-${DEFAULT_ARGS}}"]
Step#2 You can now invoke the docker-image like so:
# this will invoke it with DEFAULT_ARGS
docker run mydockerimage
# but this will invoke the docker image with custom arguments
docker run --env ARGS="--alternative-args --and-then-some=123" mydockerimage
You can also adjust this technique to do much more complex argument-evaluation however you see fit. Bash supports many kinds of one-line constructs to help you towards that goal.
Hope this technique helps some folks out there save a few hours of head-scratching.
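The ${ARGS:-${DEFAULT_ARGS}} construct is plain POSIX parameter expansion (use the default when the variable is unset or empty), so it can be checked outside Docker:

```shell
DEFAULT_ARGS="--some-default-flag=123 --foo --bar"

# No ARGS set: falls back to DEFAULT_ARGS
unset ARGS
echo "${ARGS:-${DEFAULT_ARGS}}"
# prints: --some-default-flag=123 --foo --bar

# ARGS set: overrides the default
ARGS="--alternative-args"
echo "${ARGS:-${DEFAULT_ARGS}}"
# prints: --alternative-args
```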
Not sure if this helps but I have used it this way and it worked like a charm
CMD ["node", "--inspect=0.0.0.0:9229", "--max-old-space-size=256", "/home/api/index.js"]
I found this at docker-compose not setting environment variables with flask
docker-compose.yml
version: '2'
services:
  app:
    image: python:2.7
    environment:
      - BAR=FOO
    volumes:
      - ./app.py:/app.py
    command: python app.py
app.py
import os
print(os.environ["BAR"])

Forever monitor fails to run inside a docker container [duplicate]

I have a problem starting node with forever in a docker container. If I launch it manually it works, but with the same command in the Dockerfile, when I build and start the container, it exits.
The command works in bash:
docker run -it container_name bash forever start -c 'node --harmony' /my/path/app.js
I tried to put the command in the Dockerfile, but the container doesn't start:
CMD forever start -c 'node --harmony' /my/path/app.js
Google Group discussion
Forever start script.js runs in the background. To run forever in the foreground, try forever script.js.
This starts forever in the foreground, which is what Docker needs. Remember a container is "alive" only as long as the process defined in CMD is up and running. Since forever starts as a daemon, the command itself exits and docker will exit also.
CMD forever -c 'node --harmony' /my/path/app.js
Try using the array syntax:
CMD ["forever", "start", "-c", "node --harmony", "/my/path/app.js"]
I'm now trying to use forever in docker. This works:
CMD ["forever", "src/app.js"]
Put this in your Dockerfile:
CMD forever app.js
