Here's the Dockerfile:
FROM nginx:stable-alpine
COPY ./mailservice /var/www/backend
COPY ./dist /usr/share/nginx/html
COPY ./docker/nginx_config/default.conf /etc/nginx/conf.d/default.conf
COPY ./docker/nginx_config/.htpasswd /etc/nginx
RUN chown -R nginx:nginx /usr/share/nginx/html/ \
&& chown -R nginx:nginx /etc/nginx/.htpasswd \
&& apk add --update nodejs nodejs-npm
WORKDIR /var/www/backend
RUN npm run start
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
But my RUN npm run start doesn't work; I have to attach a shell to the container manually and run it myself. What's the correct way to launch npm run start after the container has started?
UPDATE
CMD ["nginx", "-g", "daemon off;"]
ENTRYPOINT ["node", "server.js"]
Would this work?
Best practice says you shouldn't run more than one process per container, unless your application is designed to start multiple processes from a single entrypoint.
But there are workarounds you can use. Take a look at this question: Docker multiple entrypoints
Solved this way:
Dockerfile
FROM nginx:stable-alpine
COPY ./mailservice /var/www/backend
COPY ./dist /usr/share/nginx/html
COPY ./docker/nginx_config/default.conf /etc/nginx/conf.d/default.conf
COPY ./docker/nginx_config/.htpasswd /etc/nginx
RUN chown -R nginx:nginx /usr/share/nginx/html/ \
&& chown -R nginx:nginx /etc/nginx/.htpasswd \
&& apk add --update nodejs nodejs-npm
ADD ./docker/docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod 755 /docker-entrypoint.sh
EXPOSE 80
WORKDIR /
CMD ["/docker-entrypoint.sh"]
docker-entrypoint.sh
#!/usr/bin/env sh
# Start the Node backend in the background, then run nginx in the foreground as the main process
node /var/www/backend/server.js > /var/log/node-server.log 2>&1 &
exec /usr/sbin/nginx -g "daemon off;"
You're confusing build time (basically RUN instructions) with runtime (ENTRYPOINT or CMD), and beyond that you're breaking the "one container, one process" rule, although even that rule is not a sacred one.
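To make the distinction concrete (a minimal sketch, not tied to your project):
# RUN executes while the image is being built; its result is baked into the image layers
RUN npm install
# CMD (or ENTRYPOINT) only records what to execute when a container is started from the image
CMD ["npm", "run", "start"]
So npm run start belongs in CMD or ENTRYPOINT, not in a RUN instruction.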
My suggestion is to use Supervisord with this configuration
[unix_http_server]
file=/tmp/supervisor.sock ; path to your socket file
[supervisord]
logfile=/var/log/supervisord/supervisord.log ; supervisord log file
loglevel=error ; info, debug, warn, trace
pidfile=/var/run/supervisord.pid ; pidfile location
nodaemon=true ; run supervisord in the foreground (required when it is PID 1 in a container)
minfds=1024 ; number of startup file descriptors
minprocs=200 ; number of process descriptors
user=root ; default user
childlogdir=/var/log/supervisord/ ; where child log files will live
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket
[program:npm]
command=npm run --prefix /path/to/app start
stdout_logfile = /dev/stdout
stdout_logfile_maxbytes = 0 ; required when logging to /dev/stdout so supervisord does not rotate it
stderr_logfile = /dev/stderr
stderr_logfile_maxbytes = 0
[program:nginx]
command=nginx -g "daemon off;"
stdout_logfile = /dev/stdout
stdout_logfile_maxbytes = 0
stderr_logfile = /dev/stderr
stderr_logfile_maxbytes = 0
With this configuration you will have logs redirected to standard output, which is good practice compared to writing files inside an ephemeral container, and you will have a PID 1 process responsible for handling child processes and restarting them according to specific rules.
You could try to achieve this with a bash script as well, but it can be tricky.
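To wire this up, a Dockerfile along these lines should work (a sketch: the supervisor package name, the /etc/supervisord.conf path and the copied config file are assumptions, and Alpine package names vary between versions):
FROM nginx:stable-alpine
COPY ./dist /usr/share/nginx/html
COPY ./mailservice /var/www/backend
# Install Node and supervisord on top of the nginx base image
RUN apk add --update nodejs npm supervisor
COPY ./docker/supervisord.conf /etc/supervisord.conf
EXPOSE 80
# supervisord stays in the foreground as PID 1 and manages nginx + node
CMD ["supervisord", "--nodaemon", "-c", "/etc/supervisord.conf"]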
Another, arguably better, solution is to use separate containers on a shared network so that NGINX forwards requests to the Node upstream... without Kubernetes this can be harder to maintain, but it is certainly possible with plain Docker :)
Your current approach is fundamentally wrong by design; it is a clear anti-pattern for containers.
Create a Dockerfile for your app.
Create a separate Dockerfile for nginx.
Use docker-compose to build the stack, or compose it your own way; a minimal sketch follows below.
Always run the app and the proxy in separate containers.
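For example, a minimal docker-compose.yml could look roughly like this (the service names, build context and backend port are assumptions; the backend image gets its own Dockerfile with CMD ["npm", "run", "start"]):
version: "3"
services:
  backend:
    build: ./mailservice
    expose:
      - "3000"
  proxy:
    image: nginx:stable-alpine
    volumes:
      - ./dist:/usr/share/nginx/html:ro
      - ./docker/nginx_config/default.conf:/etc/nginx/conf.d/default.conf:ro
    ports:
      - "80:80"
    depends_on:
      - backend
Inside default.conf the upstream then points at backend:3000 instead of localhost.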
I am trying to run a webserver (right now still locally) out of a docker container. I am currently going step by step to understand the different parts.
Dockerfile:
FROM node:12.2.0-alpine as build
ENV environment development
WORKDIR /app
COPY . /app
RUN cd /app/client && yarn && yarn build
RUN cd /app/server && yarn
EXPOSE 5000
CMD ["sh", "-c","NODE_ENV=${environment}", "node", "server/server.js"]
Explanation:
I have the "sh", "-c" part in the CMD instruction because without it I was getting this error:
docker: Error response from daemon: OCI runtime create failed:
container_linux.go:346: starting container process caused "exec:
\"NODE_ENV=${environment}\": executable file not found in $PATH":
unknown.
Building the container:
Building the container works just fine with:
docker build -t auth_example .
It takes a little while since the build context is (even after excluding all the node_modules) roughly 37MB, but that's okay.
Running the container:
Running the container and the app inside works like a charm if I do:
MyZSH: docker run -it -p 5000:5000 auth_example /bin/sh
/app # NODE_ENV=development node server/server.js
However, when running the container via the CMD command like this:
MyZSH: docker run -p 5000:5000 auth_example
Nothing happens: no errors, nothing at all. The logs are empty, and docker ps -a reveals that the container exited right after starting. I did some googling and tried different combinations of -t, -i, and -d, but that didn't solve it either.
Can anybody shed some light on this or point me into the right direction?
The problem is that you're passing three arguments to sh -c, whereas you'd usually pass one (sh -c "... ... ...").
It's likely you don't need the sh -c invocation at all; use /usr/bin/env to set that environment variable instead (or just pass NODE_ENV in directly instead of environment):
FROM node:12.2.0-alpine as build
ENV environment development
WORKDIR /app
COPY . /app
RUN cd /app/client && yarn && yarn build
RUN cd /app/server && yarn
EXPOSE 5000
CMD /usr/bin/env NODE_ENV=${environment} node server/server.js
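If you do want the shell wrapper, the whole command must be a single string argument to sh -c, for example:
CMD ["sh", "-c", "NODE_ENV=${environment} node server/server.js"]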
I have a problem with my Dockerfile (code below):
FROM node:4.2.6
MAINTAINER kamil
RUN useradd -ms /bin/bash node
RUN mkdir -p /home/node/app && chown -R node:node /home/node/app
WORKDIR /home/node/app
COPY /myFolder .
USER node
COPY --chown=node:node . .
RUN ["chmod", "777", "/home/node/app"]
ENTRYPOINT /home/node/app
CMD ["node myApp.js"]
I'm building the Docker image with
docker build -t my_docker_image .
and it finishes with no errors.
Next I run it with docker run --name my_run_docker_image -d my_docker_image, which also finishes without errors, but when I check the status of my new container with docker ps -l, I see that its status is EXITED.
Hence I try to start it once again with docker start -a my_run_docker_image, but I receive the error:
node MyApp.js: 1: node myApp.js: /home/node/app: Permission denied
I tried running it as the root user and without a specified user, but I hit the same issue every time.
It looks like you may have a problem with your user add command.
Change
RUN useradd -ms /bin/bash/node
to
RUN useradd -ms /bin/bash node
And also
RUN mkdir -p /home/node/app && -R node:node /home/node/app
Needs to change to
RUN mkdir -p /home/node/app && chown -R node:node /home/node/app
The ENTRYPOINT and CMD tell Docker what command to run when you start the container. Since ENTRYPOINT is a bare string, it’s wrapped in a shell, and CMD is ignored. So when you start your container, the main container process is
/bin/sh -c '/home/node/app'
Which fails, because that is a directory.
In this Dockerfile, broadly, I’d suggest two things. The first is to install your application as root but then run it as non-root, as protection against accidentally overwriting the application code. The second is to prefer CMD to ENTRYPOINT in most cases, unless you’re clear on how they interact. You might come up with something more like:
FROM node:4.2.6
MAINTAINER kamil
# Docker creates the directory on first use
WORKDIR /app
COPY myFolder .
# the user's shell should never matter here
RUN useradd node
USER node
CMD ["node", "myApp.js"]
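A nice side effect of using CMD instead of ENTRYPOINT is that it is easy to override at run time without rebuilding, e.g. to get a shell for debugging (image name taken from the question):
docker run --rm -it my_docker_image /bin/sh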
How can I run two different Node.js apps in one Docker image?
Two separate CMD ["node", "app.js"] and CMD ["node", "otherapp.js"] instructions won't work, because only the last CMD directive in a Dockerfile takes effect.
I recommend using pm2 as the entrypoint process to manage all your Node.js applications within the Docker image. The advantage is that pm2 behaves as a proper process manager, which is essential in Docker. Other helpful features are load balancing, restarting applications that consume too much memory or die for whatever reason, and log management.
Here's a Dockerfile I've been using for some time now:
#A lightweight node image
FROM mhart/alpine-node:6.5.0
#PM2 will be used as PID 1 process
RUN npm install -g pm2@1.1.3
# Copy package json files for services
COPY app1/package.json /var/www/app1/package.json
COPY app2/package.json /var/www/app2/package.json
# Set up working dir
WORKDIR /var/www
# Install packages
RUN npm config set loglevel warn \
# To mitigate issues with npm saturating the network interface we limit the number of concurrent connections
&& npm config set maxsockets 5 \
&& npm config set only production \
&& npm config set progress false \
&& cd ./app1 \
&& npm i \
&& cd ../app2 \
&& npm i
# Copy source files
COPY . ./
# Expose ports
EXPOSE 3000
EXPOSE 3001
# Start PM2 as PID 1 process
ENTRYPOINT ["pm2", "--no-daemon", "start"]
# Actual script to start can be overridden from `docker run`
CMD ["process.json"]
The process.json file referenced in the CMD is described in the pm2 process-file documentation.
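For reference, a minimal process.json for this layout might look roughly like this (the app names and entry files are assumptions):
{
  "apps": [
    { "name": "app1", "script": "app.js", "cwd": "/var/www/app1" },
    { "name": "app2", "script": "app.js", "cwd": "/var/www/app2" }
  ]
}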
I don't know specifically why the Node application does not run. Basically I added a Dockerfile to a Node.js app; here is my Dockerfile:
FROM node:0.10-onbuild
RUN mv /usr/src/app /ghost && useradd ghost --home /ghost && \
cd /ghost
ENV NODE_ENV production
VOLUME ["/ghost/content"]
WORKDIR /ghost
EXPOSE 2368
CMD ["bash", "start.bash"]
Where start.bash looks like this:
#!/bin/bash
GHOST="/ghost"
chown -R ghost:ghost /ghost
su ghost << EOF
cd "$GHOST"
NODE_ENV=${NODE_ENV:-production} npm start
EOF
I usually run docker like so:
docker run --name ghost -d -p 80:2368 user/ghost
With that I cannot see what is going on, and I decided to run it like this:
docker run --name ghost -it -p 80:2368 user/ghost
And I got this output:
> ghost#0.5.2 start /ghost
> node index
It seems to be starting, but when I check the status of the container with docker ps -a, it is stopped.
Here is the repo for that, but the start.bash and Dockerfile there are different, because I haven't committed the latest versions, since neither is working:
JoeyHipolito/Ghost
I managed to make it work. There is no error in the start.bash file or in the Dockerfile; I had simply failed to rebuild the image after my changes.
With that said, you can check out the final Dockerfile and start.bash in my repository:
Ghost-blog__Docker (https://github.com/joeyhipolito/ghost)
At the time of writing this answer, you can see it on the feature branch feature/dockerize.
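In other words, after changing the Dockerfile or start.bash, rebuild the image and recreate the container, roughly:
docker build -t user/ghost .
docker rm -f ghost
docker run --name ghost -d -p 80:2368 user/ghost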
I have Nginx installed on a Docker container, and am trying to run it like this:
docker run -i -t -p 80:80 mydockerimage /usr/sbin/nginx
The problem is that, the way Nginx works, the initial process immediately spawns a master Nginx process and some workers, and then quits. Since Docker only watches the PID of the original command, the container then halts.
How do I prevent the container from halting? I need to be able to tell it to bind to the first child process, or stop Nginx's initial process from exiting.
To expand on Charles Duffy's answer, Nginx uses the daemon off directive to run in the foreground. If it's inconvenient to put this in the configuration file, we can specify it directly on the command line. This makes it easy to run in debug mode (foreground) and directly switch to running in production mode (background) by changing command line args.
To run in foreground:
nginx -g 'daemon off;'
To run in background:
nginx
nginx, like all well-behaved programs, can be configured not to self-daemonize.
Use the daemon off configuration directive described in http://wiki.nginx.org/CoreModule.
To expand on John's answer, you can also use the Dockerfile CMD instruction as follows (in case you want it to start by itself without additional args):
CMD ["nginx", "-g", "daemon off;"]
Just FYI, as of today (22 October 2019) the official Nginx Docker images all contain the line:
CMD ["nginx", "-g", "daemon off;"]
e.g. https://github.com/nginxinc/docker-nginx/blob/23a990403d6dbe102bf2c72ab2f6a239e940e3c3/mainline/alpine/Dockerfile#L117
Adding this command to the Dockerfile can disable daemon mode via the configuration file instead:
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
To add to Tomer's and Charles's answers, here is the syntax to run nginx in the foreground in a Docker container using ENTRYPOINT:
ENTRYPOINT nginx -g 'daemon off;'
Not directly related, but to run multiple commands with ENTRYPOINT:
ENTRYPOINT /bin/bash -x /myscripts/myscript.sh && nginx -g 'daemon off;'
Here is an example of a Dockerfile that runs nginx. As mentioned by Charles, it uses the daemon off configuration:
https://github.com/darron/docker-nginx-php5/blob/master/Dockerfile#L17
For everyone who comes here trying to run an nginx image in a Docker container that will run as a service:
as no complete Dockerfile had been posted, here is my whole Dockerfile solving the issue.
It is nice and working. Thanks to all the answers here that helped solve the final nginx issue.
FROM ubuntu:18.04
MAINTAINER stackoverfloguy "stackoverfloguy@foo.com"
RUN apt-get update -y
RUN apt-get install net-tools nginx ufw sudo -y
RUN adduser --disabled-password --gecos '' docker
RUN adduser docker sudo
RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER docker
RUN sudo ufw default allow incoming
RUN sudo rm /etc/nginx/nginx.conf
RUN sudo rm /etc/nginx/sites-available/default
RUN sudo rm /var/www/html/index.nginx-debian.html
VOLUME /var/log
VOLUME /usr/share/nginx/html
VOLUME /etc/nginx
VOLUME /var/run
COPY conf/nginx.conf /etc/nginx/nginx.conf
COPY content/* /var/www/html/
COPY Dockerfile /var/www/html
COPY start.sh /etc/nginx/start.sh
RUN sudo chmod +x /etc/nginx/start.sh
RUN sudo chmod -R 777 /var/www/html
EXPOSE 80
EXPOSE 443
ENTRYPOINT sudo nginx -c /etc/nginx/nginx.conf -g 'daemon off;'
And run it with:
docker run -p 80:80 -p 443:443 -dit
It is also a good idea to use supervisord or runit[1] for service management.
[1] https://github.com/phusion/baseimage-docker
The official notes for the NGINX image on Docker Hub state:
If you add a custom CMD in the Dockerfile, be sure to include -g daemon off; in the CMD in order for nginx to stay in the foreground,
so that Docker can track the process properly (otherwise your
container will stop immediately after starting)!
This makes me think that removing the custom CMD might prevent this issue from occurring in the first place.
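If so, a derived image that simply does not redefine CMD would inherit the official image's CMD ["nginx", "-g", "daemon off;"], e.g. (the content path is an assumption):
FROM nginx:stable-alpine
COPY ./dist /usr/share/nginx/html
# no CMD here, so the base image's foreground CMD still applies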