When I do not expose any ports in my Dockerfile, nor bind any ports when running docker run, I am still able to interact with applications running inside the container. Why?
I am writing my Dockerfile for my Node application. It's pretty simple and looks like this:
FROM node:8
COPY . .
RUN yarn
RUN yarn run build
ARG PORT=80
EXPOSE $PORT
CMD yarn run serve
Using this Dockerfile, I was able to build the image using docker build
$ cd ~/project/dir/
$ docker build . --build-arg PORT=8080
And run it using docker run
$ docker run -p 8080 <image-id>
I then accessed the application running inside the Docker container at an IP address like http://172.17.0.12:8080/, and it worked.
However, when I remove the EXPOSE instruction from the Dockerfile and the -p option from docker run, the application still works! It's as if Docker is automatically binding my ports.
Additional Notes:
It appears that another user has experienced the same issue.
I have tried rebuilding my image using --no-cache after I removed the EXPOSE instruction, but the problem still exists.
Using docker inspect, I see no entries for Config.ExposedPorts.
The EXPOSE instruction in a Dockerfile really doesn't do much by itself; I think it is mostly there so that people reading the Dockerfile know which ports/services run inside the container. However, EXPOSE is useful when you start the container with the capital -P argument (-P, --publish-all: publish all exposed ports to random ports):
docker run -P my_image
But if you are using the lower-case -p, you have to specify the source:destination port. See this thread.
If you don't write EXPOSE in the Dockerfile, it doesn't have any influence on the app inside the container; it only matters for the capital -P argument.
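For example, after starting a container with -P you can check which random host ports were assigned using docker port (the container name and the output here are just illustrative):
$ docker run -d -P --name web my_image
$ docker port web
80/tcp -> 0.0.0.0:32768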
Related
I am working on a Telegram group for premium members, in which I have two services: one monitors everyone who joins, and the other checks whether any member's premium plan has expired so it can kick that user from the channel. I am very, very new to Docker and deployment, so I am confused about how to run two processes simultaneously with one Dockerfile. I have tried it like this.
here is the file structure:
start.sh
#!/bin/bash
cd TelegramChannelMonitor
pm2 start services/kickNonPremium.js --name KICK_NONPREMIUM
pm2 start services/monitorTelegramJoinee.js --name MONITOR_JOINEE
Dockerfile
FROM node:12-alpine
WORKDIR ./TelegramChannelMonitor
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
ENTRYPOINT ["/start.sh"]
What should I do to achieve this?
A Docker container only runs one main process. On the other hand, you can run arbitrarily many containers off of a single image, each with a different command. So the approach I'd take here is to build a single image, essentially as you've shown it, but without the ENTRYPOINT line.
FROM node:12-alpine
# Note that the Dockerfile already puts us in the right directory
WORKDIR /TelegramChannelMonitor
...
# Important: no ENTRYPOINT
# (There can be a default CMD if you think one path is more likely)
Then when you want to run this application, run two containers, and in each, make the main container command run a different script.
docker build -t telegram-channel-monitor .
docker run -d -p 8080:8080 --name kick-non-premium \
telegram-channel-monitor \
node services/kickNonPremium.js
docker run -d -p 8081:8080 --name monitor-joined \
telegram-channel-monitor \
node services/monitorTelegramJoinee.js
You can have a similar setup using Docker Compose. Set all of the containers to build: ., but set a different command: for each.
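A minimal docker-compose.yml along those lines might look like this (service names and host ports are illustrative):
version: "3"
services:
  kick-non-premium:
    build: .
    command: node services/kickNonPremium.js
    ports:
      - "8080:8080"
  monitor-joined:
    build: .
    command: node services/monitorTelegramJoinee.js
    ports:
      - "8081:8080"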
(The reason to avoid ENTRYPOINT here is that the syntax to override the command gets very clumsy: you need --entrypoint node before the image name, but then the rest of the arguments after it. I've also used plain node instead of pm2, since a Docker container provides most of the functionality of a process supervisor; see also what is the point of using pm2 and docker together?.)
Try a pm2 ecosystem file for declaring your apps (i.e. services) and run pm2 in non-background mode, or use pm2-runtime; a sketch follows the links below.
https://pm2.keymetrics.io/docs/usage/application-declaration/
https://pm2.keymetrics.io/docs/usage/docker-pm2-nodejs/
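As a rough sketch, an ecosystem file for your two services could look like this (file name and fields follow the pm2 docs linked above; adjust paths as needed):
// ecosystem.config.js (illustrative)
module.exports = {
  apps: [
    { name: "KICK_NONPREMIUM", script: "services/kickNonPremium.js" },
    { name: "MONITOR_JOINEE", script: "services/monitorTelegramJoinee.js" }
  ]
};
Then make the container's CMD run pm2-runtime ecosystem.config.js so pm2 stays in the foreground.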
I am trying to set up a Docker container which runs nginx and Node.js at the same time.
My Dockerfile looks like this:
FROM nginx:mainline-alpine
RUN apk add --no-cache --repository http://dl-cdn.alpinelinux.org/alpine/v3.11/main/ nodejs=12.14.0-r0
RUN apk add --no-cache bash
RUN apk add --no-cache nano
ADD ./myHelloWorld /myHelloWorld
CMD ["node", "/myHelloWorld/index.js"]
EXPOSE 3000
The base Docker image has the command for starting nginx.
But nginx is not running after starting my container. When I remove the CMD line from my Dockerfile which starts Node.js, nginx works as expected.
I have tried a lot of things, and every time I have a CMD in my Dockerfile, nginx does not run.
I read that nginx needs the parameters "-g", "daemon off;", but my base image already starts nginx exactly that way: https://github.com/nginxinc/docker-nginx/blob/master/stable/alpine/Dockerfile
If I add
CMD ["nginx", "-g", "daemon off;"]
at the end of my Dockerfile (which doesn't make sense, because it's already part of the base image), then nginx runs, but Node.js no longer runs.
Does someone have an idea how to run both nginx and Node.js?
I would be very grateful
Kind Regards
Stefan
You cannot have multiple CMD instructions. You can write a shell script that starts both nginx and node, and run that shell script as the CMD instruction.
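A rough sketch of such a wrapper, using the paths from your Dockerfile (illustrative, not a drop-in solution):
#!/bin/sh
# start the Node app in the background
node /myHelloWorld/index.js &
# keep nginx in the foreground as the container's main process
exec nginx -g "daemon off;"
You would COPY this script into the image, make it executable, and point CMD at it.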
This is not recommended though. As mentioned in the documentation:
It is generally recommended that you separate areas of concern by
using one service per container. That service may fork into multiple
processes.
You should run your nginx and node app in different containers. You can connect them using shared networks and volumes if required. Check out docker-compose, which makes the job of starting multiple containers easy. If you still want multiple services in a single container, a better approach would be to use a process manager like supervisord (a sketch follows the link below).
See - https://docs.docker.com/config/containers/multi-service_container/
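If you do go the supervisord route, a minimal configuration could look roughly like this (program names and paths are illustrative):
[supervisord]
nodaemon=true

[program:node]
command=node /myHelloWorld/index.js

[program:nginx]
command=nginx -g "daemon off;"
You would then need to install supervisord in the image and make CMD run it in the foreground.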
A docker image can only have one CMD. If you include multiple CMDs, the last one will take effect (documentation). The Dockerfile for the base image has CMD ["nginx", "-g", "daemon off;"], which means it will start nginx. When you include a CMD in your Dockerfile to run node, you are overwriting the original CMD, and nginx will not run by default.
You can run nginx and node separately using the same image.
docker run MYIMAGE node /myHelloWorld/index.js
docker run MYIMAGE nginx -g "daemon off;"
I am very new to Docker, so please pardon me if this is a very silly question. Googling hasn't really produced anything I am looking for. I have a very simple Dockerfile which looks like the following:
FROM node:9.6.1
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /usr/src/app/package.json
RUN npm install --silent
COPY . /usr/src/app
RUN npm start
EXPOSE 8000
In the container the app is running on port 8000. Is it possible to access port 8000 without the -p 8000:8000? I just want to be able to do
docker run imageName
and access the app on my browser on localhost:8000
By default, when you create a container, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container’s network, use the --publish or -p flag. This creates a firewall rule which maps a container port to a port on the Docker host.
Read more: Container networking - Published ports
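For completeness, the plain docker run equivalent for your image would be (image name and port taken from your example):
docker run -p 8000:8000 imageName
after which the app is reachable at http://localhost:8000.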
But you can use docker-compose to configure and run your Docker images easily.
First, install docker-compose: Install Docker Compose
Second, create a docker-compose.yml beside the Dockerfile and copy this code into it:
version: '3'
services:
  web:
    build: .
    ports:
      - "8000:8000"
Now you can start your container with this command:
docker-compose up
If you want to run your services in the background, you can pass the -d flag (for "detached" mode) to docker-compose up, and use docker-compose ps to see what is currently running.
Docker Compose Tutorial
Old question but someone might find it useful:
First get the IP of the docker container by running
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
Then connect to it from the browser or using curl, using that IP and the exposed port:
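For example, assuming the inspect command above printed 172.17.0.2 and the app listens on port 8000 inside the container (both values are illustrative):
curl http://172.17.0.2:8000/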
Note that you will not be able to access the container on 0.0.0.0 because the port is not mapped.
End goal: to spin up a Docker container running my Express.js application on port 3000 (as if I am using npm start).
Details:
I am using Windows 10 Enterprise.
This is a very basic, front-end Express.js application.
It runs fine using npm start – no errors.
Dockerfile I am using:
FROM node:8.11.2
WORKDIR /app
COPY package.json .
RUN npm install
COPY src .
CMD node src/index.js
EXPOSE 3000
Steps:
I am able to create an image, using basic docker build command:
docker build -t portfolio-img .
Running the image (I am using this command from a tutorial www.katacoda.com/courses/docker/deploying-first-container):
docker run -d --name portfolio-container -p 3000:3000 portfolio-img
The container is not running. It is created, since I can inspect it, but it has exited after the command. I am guessing I did something wrong with the last command, or I am not giving the correct instructions in the Dockerfile.
If anyone can point me in the right direction, I'd greatly appreciate it.
I have already searched a lot in the Docker documentation and on here.
I've deployed some Docker containers with Go apps. One of them I need to start with this command:
docker run --restart unless-stopped -it myapp /bin/bash
As the next step, I enter the container and edit some config files, then I run
go build main.go
and ./main
After that I press Ctrl+Q and detach from the container.
Everything works perfectly, and all my containers restart fine after the server restarts. But there is one issue: when the myapp container restarts, the Go application doesn't run, even though the container itself is still running. I have to enter it again and run ./main. How can I fix it?
Dockerfile
FROM golang:1.8
WORKDIR /go/src/app
COPY . .
RUN go-wrapper download # "go get -d -v ./..."
RUN go-wrapper install # "go install -v ./..."
RUN ["apt-get","update"]
RUN ["apt-get","install","-y","vim"]
EXPOSE 3000
CMD ["app"]
When you create a container and pass in /bin/bash as the command, that's as far as Docker cares. When the container restarts, it will start up another instance of /bin/bash.
Docker doesn't watch your shell session and see what things you do after it starts the command. If you want to actually run ./main as the command of the container, then you'll need to pass in /go/src/app/main as the command instead of /bin/bash.
Additionally, compiling code is something better done during the image build phase instead of at container runtime.
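A minimal sketch of what that could look like with your Dockerfile, assuming main.go sits at the root of the build context (the paths and output name here are an assumption):
FROM golang:1.8
WORKDIR /go/src/app
COPY . .
RUN go-wrapper download
# compile once at build time instead of inside a running container
RUN go build -o /go/src/app/main main.go
EXPOSE 3000
# run the compiled binary as the container's main process
CMD ["/go/src/app/main"]
With that in place, docker run --restart unless-stopped -d myapp will bring the Go application back up automatically whenever the container restarts.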