How to run Nginx within a Docker container without halting? - linux

I have Nginx installed on a Docker container, and am trying to run it like this:
docker run -i -t -p 80:80 mydockerimage /usr/sbin/nginx
The problem is the way Nginx works: the initial process immediately spawns a master Nginx process and some workers, and then quits. Since Docker only watches the PID of the original command, the container then halts.
How do I prevent the container from halting? I need to be able to tell it to bind to the first child process, or stop Nginx's initial process from exiting.

To expand on Charles Duffy's answer: Nginx uses the daemon off directive to run in the foreground. If it's inconvenient to put this in the configuration file, you can specify it directly on the command line. This makes it easy to run in debug mode (foreground) and switch directly to production mode (background) by changing the command-line arguments.
To run in foreground:
nginx -g 'daemon off;'
To run in background:
nginx
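Applied to the docker run command from the question, a foreground invocation would look like this (a sketch reusing the question's image name):
docker run -i -t -p 80:80 mydockerimage nginx -g 'daemon off;'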

nginx, like all well-behaved programs, can be configured not to self-daemonize.
Use the daemon off configuration directive described in http://wiki.nginx.org/CoreModule.
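If you prefer the configuration-file route, the directive sits at the top level of nginx.conf. A minimal sketch (the server details below are placeholder assumptions):
daemon off;
events { worker_connections 1024; }
http {
    server {
        listen 80;
        root /usr/share/nginx/html;
    }
}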

To expand on John's answer, you can also use the Dockerfile CMD instruction as follows (in case you want the container to start nginx without additional arguments):
CMD ["nginx", "-g", "daemon off;"]

Just FYI, as of today (22 October 2019) the official Nginx Docker images all contain the line:
CMD ["nginx", "-g", "daemon off;"]
e.g. https://github.com/nginxinc/docker-nginx/blob/23a990403d6dbe102bf2c72ab2f6a239e940e3c3/mainline/alpine/Dockerfile#L117

Adding this line to your Dockerfile also disables daemon mode, by appending the directive to the configuration file:
RUN echo "daemon off;" >> /etc/nginx/nginx.conf

To add to Tomer's and Charles's answers:
The syntax to run nginx in the foreground in a Docker container using ENTRYPOINT:
ENTRYPOINT nginx -g 'daemon off;'
Not directly related, but to run multiple commands with ENTRYPOINT:
ENTRYPOINT /bin/bash -x /myscripts/myscript.sh && nginx -g 'daemon off;'
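Equivalently, you can move the multiple commands into a wrapper script and use the exec form of ENTRYPOINT, so nginx ends up as PID 1 and receives signals properly (a sketch; the script path mirrors the hypothetical one above):
entrypoint.sh:
#!/bin/sh
# run the setup script first, then replace the shell with nginx in the foreground
/bin/bash -x /myscripts/myscript.sh
exec nginx -g 'daemon off;'
and in the Dockerfile:
COPY entrypoint.sh /myscripts/entrypoint.sh
RUN chmod +x /myscripts/entrypoint.sh
ENTRYPOINT ["/myscripts/entrypoint.sh"]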

Here is an example of a Dockerfile that runs nginx. As mentioned by Charles, it uses the daemon off configuration:
https://github.com/darron/docker-nginx-php5/blob/master/Dockerfile#L17

For all who come here trying to run an nginx image in a Docker container as a service:
Since no complete Dockerfile had been posted, here is my whole Dockerfile solving the issue, nice and working. Thanks to all the answers here for helping solve the final nginx issue.
FROM ubuntu:18.04
MAINTAINER stackoverfloguy "stackoverfloguy@foo.com"
RUN apt-get update -y
RUN apt-get install net-tools nginx ufw sudo -y
RUN adduser --disabled-password --gecos '' docker
RUN adduser docker sudo
RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER docker
RUN sudo ufw default allow incoming
RUN sudo rm /etc/nginx/nginx.conf
RUN sudo rm /etc/nginx/sites-available/default
RUN sudo rm /var/www/html/index.nginx-debian.html
VOLUME /var/log
VOLUME /usr/share/nginx/html
VOLUME /etc/nginx
VOLUME /var/run
COPY conf/nginx.conf /etc/nginx/nginx.conf
COPY content/* /var/www/html/
COPY Dockerfile /var/www/html
COPY start.sh /etc/nginx/start.sh
RUN sudo chmod +x /etc/nginx/start.sh
RUN sudo chmod -R 777 /var/www/html
EXPOSE 80
EXPOSE 443
ENTRYPOINT sudo nginx -c /etc/nginx/nginx.conf -g 'daemon off;'
And run it with (supply your own image name, which was omitted in the original post):
docker run -p 80:80 -p 443:443 -dit <your-image-name>

It is also a good idea to use supervisord or runit[1] for service management.
[1] https://github.com/phusion/baseimage-docker

In the official notes for the official NGINX image on Docker Hub, it states:
If you add a custom CMD in the Dockerfile, be sure to include -g daemon off; in the CMD in order for nginx to stay in the foreground, so that Docker can track the process properly (otherwise your container will stop immediately after starting)!
This makes me think removing the CMD [] might prevent this issue from occurring in the first place.

Related

docker: nginx always stops when CMD is executed in Dockerfile

I'm trying to set up a Docker container which runs nginx and Node.js at the same time.
My Dockerfile looks like this:
FROM nginx:mainline-alpine
RUN apk add --no-cache --repository http://dl-cdn.alpinelinux.org/alpine/v3.11/main/ nodejs=12.14.0-r0
RUN apk add --no-cache bash
RUN apk add --no-cache nano
ADD ./myHelloWorld /myHelloWorld
CMD ["node", "/myHelloWorld/index.js"]
EXPOSE 3000
The base Docker image has the command for starting nginx, but nginx is not running after starting my container. When I remove the CMD line in my Dockerfile which starts Node.js, nginx works as expected.
I tried a lot, and every time I have a CMD in my Dockerfile, nginx is not running.
I read that nginx needs the parameters "-g", "daemon off;", but my base image already starts nginx exactly that way: https://github.com/nginxinc/docker-nginx/blob/master/stable/alpine/Dockerfile
If I add
CMD ["nginx", "-g", "daemon off;"]
at the end of MY Dockerfile (which doesn't make sense, because it's already part of the base image), then nginx is running, but Node.js is not running any more.
Does someone have an idea how to run both nginx and Node.js?
I would be very grateful.
Kind regards,
Stefan
You cannot have multiple CMD instructions. You can write a shell script that starts both nginx and node and run the shell script as part of CMD instruction.
This is not recommended though. As mentioned in the documentation:
It is generally recommended that you separate areas of concern by
using one service per container. That service may fork into multiple
processes.
You should run your nginx and node apps in different containers. You can connect them using shared networks and volumes if required. Check out docker-compose, which makes the job of starting multiple containers easy (see the sketch after the link below). If you still want multiple services in a single container, a better approach would be to use a process manager like supervisord.
See - https://docs.docker.com/config/containers/multi-service_container/
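A minimal docker-compose.yml sketch of that two-container layout (the image versions and service names are assumptions based on the question):
version: "3"
services:
  nginx:
    image: nginx:mainline-alpine
    ports:
      - "80:80"
  node:
    image: node:12-alpine
    volumes:
      - ./myHelloWorld:/myHelloWorld
    command: node /myHelloWorld/index.js
On the default Compose network, the nginx service can then reach the node app by its service name, e.g. http://node:3000.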
A docker image can only have one CMD. If you include multiple CMDs, the last one will take effect (documentation). The Dockerfile for the base image has CMD ["nginx", "-g", "daemon off;"], which means it will start nginx. When you include a CMD in your Dockerfile to run node, you are overwriting the original CMD, and nginx will not run by default.
You can run nginx and node separately using the same image.
docker run MYIMAGE node /myHelloWorld/index.js
docker run MYIMAGE nginx -g "daemon off;"

Can not connect to node app running in Docker container from browser

I am running a Node.js application in a Docker container. The application is hosted on a Bluehost CentOS VPS, to which I connect using SSH. I use the following command to run the app in the container: sudo docker run -p 80:8080 -d skepticalbonobo/dandakou-nodeapp. Then I check that the container is running using sudo docker ps, and sure enough it is. But when I try to access the app from Chrome using the domain name or IP address, I get: "This site can’t be reached".
I have noticed, however, that in the output of sudo docker ps, under COMMAND, I get docker-entrypoint... as opposed to node app.js, and I do not know how to fix it. You can pull the container using docker pull skepticalbonobo/dandakou-nodeapp. Here is the content of my Dockerfile:
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
WORKDIR /home/node/app
COPY package*.json ./
USER node
RUN npm install
COPY . .
USER root
RUN chown -R node:node . .
EXPOSE 8080
CMD [ "node", "app.js" ]
Thank you!
The default port for a Node.js app is typically 3000.
Run the following command and check which port the node app is actually listening on:
sudo docker run -ti skepticalbonobo/dandakou-nodeapp /bin/sh
EXPOSE in a Dockerfile is just for documentation purposes; it does not publish the port by itself.
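Once you know which port the app actually listens on, publish it accordingly; for example, if it turns out to be 3000 (an assumption to verify with the command above):
sudo docker run -p 80:3000 -d skepticalbonobo/dandakou-nodeapp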

How to run command when container is started - Docker

Here's the Dockerfile:
FROM nginx:stable-alpine
COPY ./mailservice /var/www/backend
COPY ./dist /usr/share/nginx/html
COPY ./docker/nginx_config/default.conf /etc/nginx/conf.d/default.conf
COPY ./docker/nginx_config/.htpasswd /etc/nginx
RUN chown -R nginx:nginx /usr/share/nginx/html/ \
&& chown -R nginx:nginx /etc/nginx/.htpasswd \
&& apk add --update nodejs nodejs-npm
WORKDIR /var/www/backend
RUN npm run start
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
But my RUN npm run start doesn't work; I have to manually attach a shell to the container and then run it myself. What's the correct way to launch npm run start after the container is started?
UPDATE
CMD ["nginx", "-g", "daemon off;"]
ENTRYPOINT ["node", "server.js"]
Would this work?
Best practice says that you shouldn't run more than one process per container, unless your application is made in a way that starts multiple processes from a single entrypoint.
But there are some workarounds you can use. Check out this question: Docker multiple entrypoints
Solved this way:
Dockerfile
FROM nginx:stable-alpine
COPY ./mailservice /var/www/backend
COPY ./dist /usr/share/nginx/html
COPY ./docker/nginx_config/default.conf /etc/nginx/conf.d/default.conf
COPY ./docker/nginx_config/.htpasswd /etc/nginx
RUN chown -R nginx:nginx /usr/share/nginx/html/ \
&& chown -R nginx:nginx /etc/nginx/.htpasswd \
&& apk add --update nodejs nodejs-npm
ADD ./docker/docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod 755 /docker-entrypoint.sh
EXPOSE 80
WORKDIR /
CMD ["/docker-entrypoint.sh"]
docker-entrypoint.sh
#!/usr/bin/env sh
# start the node server in the background, logging to a file
node /var/www/backend/server.js > /var/log/node-server.log &
# replace the shell with nginx in the foreground so the container stays up
exec /usr/sbin/nginx -g "daemon off;"
You're confusing build time (basically the RUN instructions) with runtime (ENTRYPOINT or CMD), and beyond that you're breaking the one-container-one-process rule, even if it is not a sacred one.
My suggestion is to use supervisord with this configuration:
[unix_http_server]
file=/tmp/supervisor.sock ; path to your socket file
[supervisord]
logfile=/var/log/supervisord/supervisord.log ; supervisord log file
loglevel=error ; info, debug, warn, trace
pidfile=/var/run/supervisord.pid ; pidfile location
nodaemon=true ; run supervisord in the foreground (required as the container's main process)
minfds=1024 ; number of startup file descriptors
minprocs=200 ; number of process descriptors
user=root ; default user
childlogdir=/var/log/supervisord/ ; where child log files will live
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket
[program:npm]
command=npm run --prefix /path/to/app start
stdout_logfile = /dev/stdout
stdout_logfile_maxbytes = 0
stderr_logfile = /dev/stderr
stderr_logfile_maxbytes = 0
[program:nginx]
command=nginx -g "daemon off;"
stdout_logfile = /dev/stdout
stdout_logfile_maxbytes = 0
stderr_logfile = /dev/stderr
stderr_logfile_maxbytes = 0
With this configuration you will have the logs redirected to standard output, which is good practice (rather than writing files inside the container, which is ephemeral), and you will have a PID 1 responsible for handling the child processes and restarting them according to specific rules.
You could also try to achieve this with a bash script, but it could be tricky.
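To wire this into the image, supervisord itself becomes the container's foreground process (a sketch; the package name and config path are assumptions for an Alpine-based image like the one in the question):
RUN apk add --no-cache supervisor
COPY supervisord.conf /etc/supervisord.conf
CMD ["supervisord", "-n", "-c", "/etc/supervisord.conf"]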
An even better solution would be to use separate containers on a shared network, forwarding NGINX requests to the Node upstream... without Kubernetes it could be harder to maintain, but it's not impossible with Docker alone :)
Your current approach is fundamentally wrong by design; it is a clear anti-pattern of container usage. Instead:
Create a Dockerfile for your app.
Create a Dockerfile for nginx separately.
Use docker-compose to build the stack, or compose it your own way.
Always run the app and the proxy in separate containers (see the sketch below).
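For that layout, the nginx container simply proxies to the app container over the shared network (a sketch of the nginx side; the node-app host name is hypothetical and would match the app's service name in docker-compose):
server {
    listen 80;
    location / {
        proxy_pass http://node-app:3000;
        proxy_set_header Host $host;
    }
}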

Unable to ssh localhost within a running Docker container

I'm building a Docker image for an application which requires SSHing into localhost (i.e. ssh user@localhost).
I'm working on a Ubuntu desktop machine and started with a basic ubuntu:16.04 container.
Following is the content of my Dockerfile:
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y \
openjdk-8-jdk \
ssh && \
groupadd -r custom_group && useradd -r -g custom_group -m user1
USER user1
RUN ssh-keygen -b 2048 -t rsa -f ~/.ssh/id_rsa -q -N "" && \
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Then I build this container using the command:
docker build -t test-container .
And run it using:
docker run -it test-container
The container opens with the following prompt and the keys are generated correctly to enable ssh into localhost:
user1@0531c0f71e0a:/$
user1@0531c0f71e0a:/$ cd ~/.ssh/
user1@0531c0f71e0a:~/.ssh$ ls
authorized_keys id_rsa id_rsa.pub
Then I ssh into localhost and am greeted by the error:
user1@0531c0f71e0a:~$ ssh user1@localhost
ssh: connect to host localhost port 22: Cannot assign requested address
Is there anything I'm doing wrong or any additional network settings that needs to be configured? I just want to ssh into localhost within the running container.
First you need to install the ssh server in the image building script:
RUN sudo apt-get install -y openssh-server
Then you need to start the ssh server:
RUN sudo /etc/init.d/ssh start
or probably even in the last lines of the Dockerfile (you must have one binary instantiated to keep the container running):
USER root
CMD [ "sh", "/etc/init.d/ssh", "start"]
Then, on the host:
# init a container from the image
docker run -d --name my-ssh-container-name-01 \
-v /opt/local/dir:/opt/container/dir my-image-01
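Alternatively, the same daemon off idea applies to SSH: run sshd itself in the foreground with -D so that it keeps the container alive (a sketch; the privilege-separation directory is a common requirement on Ubuntu images):
USER root
RUN mkdir -p /var/run/sshd
CMD ["/usr/sbin/sshd", "-D"]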
As @user2915097 stated in the OP comments, this was because the ssh instance in the container was attempting to connect to the host using IPv6.
Forcing the connection over IPv4 with -4 solved the issue.
$ docker run -it ubuntu ssh -4 user@hostname
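To make IPv4 the default instead of passing -4 every time, the same can be set in the SSH client configuration (a sketch of ~/.ssh/config):
Host localhost
    AddressFamily inet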
For Docker Compose I was able to add the following to my .yml file:
network_mode: "host"
I believe the equivalent in Docker is:
--net=host
Documentation:
https://docs.docker.com/compose/compose-file/compose-file-v3/#network_mode
https://docs.docker.com/network/#network-drivers
host: For standalone containers, remove network isolation between the container and the Docker host, and use the host’s networking directly. See use the host network.
I also faced this error today, here's how to fix it:
This applies if (and only if) you are facing this error inside a running container that isn't in production.
Do this:
docker exec -it -u 0 [your container id here] /bin/bash
Then, once you have entered the container in god mode (as root), run this:
service ssh start
Then you can run your ssh-based commands.
Of course, it is best practice to do this in your Dockerfile before all of this, but there's no need to sweat it if you are not done with your image build process just yet.

Docker cannot run on build when running container with a different user

I don't know the specifics of why the node application does not run. Basically, I added a Dockerfile to a Node.js app, and here is my Dockerfile:
FROM node:0.10-onbuild
RUN mv /usr/src/app /ghost && useradd ghost --home /ghost && \
cd /ghost
ENV NODE_ENV production
VOLUME ["/ghost/content"]
WORKDIR /ghost
EXPOSE 2368
CMD ["bash", "start.bash"]
Where start.bash looks like this:
#!/bin/bash
GHOST="/ghost"
chown -R ghost:ghost /ghost
su ghost << EOF
cd "$GHOST"
NODE_ENV={$NODE_ENV:-production} npm start
EOF
I usually run docker like so:
docker run --name ghost -d -p 80:2368 user/ghost
With that I cannot see what is going on, and I decided to run it like this:
docker run --name ghost -it -p 80:2368 user/ghost
And I got this output:
> ghost#0.5.2 start /ghost
> node index
It seems to be starting, but when I check the status of the container with docker ps -a, it is stopped.
Here is the repo for that, but the start.bash and Dockerfile there are different, because I haven't committed the latest versions, since both are not working:
JoeyHipolito/Ghost
I managed to make it work. There was no error in the start.bash file nor in the Dockerfile; it's just that I had failed to rebuild the image.
With that said, you can check out the final Dockerfile and start.bash file in my repository:
Ghost-blog__Docker (https://github.com/joeyhipolito/ghost)
At the time I write this answer, you can see it in the feature branch, feature/dockerize.
