I have a Node.js application that I want to run with docker-compose. Inside the container it listens on port 4321, set by an environment variable.
This port is also exposed by my Dockerfile and I specify it like so in my docker-compose.yml:
version: '3.4'
services:
  previewcrawler:
    image: previewcrawler
    build:
      context: .
      dockerfile: ./Dockerfile
    environment:
      NODE_ENV: development
    ports:
      - 4321:4321
      - 9229:9229
    command: ['node', '--inspect=0.0.0.0:9229', 'dist/index.js']
I run the app with a VSCode task, which executes this:
docker run -dt -P --name "previewcrawler-dev" -e "DEBUG=*" -e "NODE_ENV=development" --label "com.microsoft.created-by=visual-studio-code" -p "9229:9229" "previewcrawler:latest" node --inspect-brk=0.0.0.0:9229 .
When I choose to open the application in my browser, it opens on some random port like 49171, which also changes every time I start my container.
How can I make this port stable, so that it is 4321 every time, as I specified in my docker-compose.yml?
docker run -P (with a capital P) tells Docker to pick a host port for anything the Dockerfile EXPOSEs. You have no control over which host port or interfaces the port uses.
docker run -p 4321:4321 (with a lowercase p) lets you explicitly pick which ports get published, and on which host port. It is exactly equivalent to the Compose ports: option.
This is further detailed in the Docker run reference.
(That link is more specifically to a section entitled "expose incoming ports". However, "expose" as a verb means almost nothing in modern Docker. Functionally, it does only two things: if you use docker run -P then all exposed ports get published; and if you don't have a -p or -P option at all, the port will still be listed in the docker ps output. Exposed ports aren't automatically published, and there's not really any reason to use the docker run --expose or Compose expose: options.)
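As a sketch, here is the docker run command from the question with -P dropped and the application port published explicitly (everything else copied from the question):

docker run -dt --name "previewcrawler-dev" \
  -e "DEBUG=*" -e "NODE_ENV=development" \
  --label "com.microsoft.created-by=visual-studio-code" \
  -p "4321:4321" -p "9229:9229" \
  "previewcrawler:latest" node --inspect-brk=0.0.0.0:9229 .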
Apparently I started my app with the wrong command. I now use
docker-compose -f "docker-compose.debug.yml" up -d --build
which works great. The port is also correct then.
Related
I run a service inside a container that binds to 127.0.0.1:8888.
I want to expose this port to the host.
Does docker-compose support this?
I tried the following in docker-compose.yml but it did not work.
expose:
  - "8888"
ports:
  - "8888:8888"
P.S. Binding the service to 0.0.0.0 inside the container is not possible in my case.
UPDATE: Providing a simple example:
docker-compose.yml
version: '3'
services:
  myservice:
    expose:
      - "8888"
    ports:
      - "8888:8888"
    build: .
Dockerfile
FROM centos:7
RUN yum install -y nmap-ncat
CMD ["nc", "-l", "-k", "localhost", "8888"]
Commands:
$> docker-compose up --build
$> # Starting test1_myservice_1 ... done
$> # Attaching to test1_myservice_1
$> nc -v -v localhost 8888
$> # Connection to localhost 8888 port [tcp/*] succeeded!
TEST
$>
After inputting TEST in the console the connection is closed, which means the port is not really exposed, despite the initial success message. The same issue occurs with my real service.
But if I bind to 0.0.0.0 (instead of localhost) inside the container, everything works fine.
Typically the answer is no, and in almost every situation, you should reconfigure your application to listen on 0.0.0.0. Any attempt to avoid changing the app to listen on all interfaces inside the container should be viewed as a hack that is adding technical debt to your project.
To expand on my comment, each container by default runs in its own network namespace. The loopback interface inside a container is separate from the loopback interface on the host and in other containers. So if you listen on 127.0.0.1 inside a container, anything outside of that network namespace cannot access the port. It's not unlike listening on loopback on your VM and trying to connect from another VM to that port, Linux doesn't let you connect.
There are a few workarounds:
You can hack up the iptables rules to forward connections, but I'd personally avoid this. Docker is heavily based on automated changes to the iptables rules, so you risk conflicting with that automation or having your changes broken the next time the container is recreated.
You can set up a proxy inside your container that listens on all interfaces and forwards to the loopback interface. Something like nginx would work.
You can get things in the same network namespace.
That last one can be implemented in two ways. Between containers, you can run a container in the network namespace of another container. This is often done for debugging the network, and is also how pods work in Kubernetes. Here's an example of running a second container:
$ docker run -it --rm --net container:$(docker ps -lq) nicolaka/netshoot /bin/sh
/ # ss -lnt
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 10 127.0.0.1:8888 *:*
LISTEN 0 128 127.0.0.11:41469 *:*
/ # nc -v -v localhost 8888
Connection to localhost 8888 port [tcp/8888] succeeded!
TEST
/ #
Note the --net container:... (I used docker ps -lq to get the last started container id in my lab). This makes the two separate containers run in the same namespace.
If you needed to access this from outside of docker, you can remove the network namespacing, and attach the container directly to the host network. For a one-off container, this can be done with
docker run --net host ...
In compose, this would look like:
version: '3'
services:
  myservice:
    network_mode: "host"
    build: .
You can see the docker compose documentation on this option here. This is not supported in swarm mode, and you do not publish ports in this mode since the container and the host already share the same network namespace.
Side note: expose is not needed for any of this. It is only there for documentation and some automated tooling; otherwise it does not impact container-to-container networking, nor does it impact the ability to publish a specific port.
According to @BMitch's answer above, "it is not possible to externally access this port directly if the container runs with its own network namespace".
Based on this, I think it is worth providing my workaround for the issue:
One way would be to set up an iptables rule inside the container, for port redirection, before running the service. However, this seems to require the iptables modules to be loaded explicitly on the host (according to this). This somewhat breaks portability.
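For reference, a rough, untested sketch of that iptables approach, run inside the container before starting the service (the interface name eth0 is an assumption):

# allow traffic arriving on eth0 to be routed to the loopback address
sysctl -w net.ipv4.conf.eth0.route_localnet=1
# rewrite connections to port 8889 so they reach the loopback-bound service on 8888
iptables -t nat -A PREROUTING -p tcp --dport 8889 -j DNAT --to-destination 127.0.0.1:8888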
My way (using socat): forwarding *:8889 to 127.0.0.1:8888.
Dockerfile
...
RUN yum install -y socat
RUN echo -e '#!/bin/bash\n./localservice &\nsocat TCP4-LISTEN:8889,fork TCP4:127.0.0.1:8888\n' >> service.sh
RUN chmod u+x service.sh
ENTRYPOINT ["./service.sh"]
docker-compose.yml
version: '3'
services:
  laax-pdfjs:
    ports:
      # Switch back to 8888 on host
      - "8888:8889"
    build: .
Check which version of the Compose file format you are using and configure your file based on that version.
Compose files that do not declare a version are considered “version 1”. In those files, all the services are declared at the root of the document.
Reference
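For comparison, a minimal sketch of that legacy version-1 layout, with no version key and the services declared at the root of the document:

myservice:
  image: myimage:latest
  ports:
    - "80:80"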
Here is how I set up my ports:
version: "3"
services:
myservice:
image: myimage:latest
ports:
- "80:80"
We can help you further if you can share the rest of your docker-compose.yaml.
I hope you can help.
I had an old Docker image that was configured to expose port 8082. I am using this image as my base image to create a new container, but I can't seem to get rid of the old networking settings.
Port 8082 is not specified in my new Dockerfile or docker-compose file, but it still shows up. My new port is 8091.
server#omv:~/docker/app$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f023f6a0a792 api_app_image "/entrypoint.sh" 3 minutes ago Up 3 minutes 80/tcp, 8082/tcp, 0.0.0.0:8091->8091/tcp api_app
Here is my docker-compose file.
api_app:
  container_name: api_app
  build:
    context: ./api
    dockerfile: Dockerfile
  ports:
    - "8091:8091"
  volumes:
    - ./api/app:/var/www/html/apiapp
Here is a snippet from my Dockerfile:
FROM bde8c3167970
VOLUME /etc/nginx/conf.d
VOLUME /var/www/html/apiapp
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
EXPOSE 80 8091
Thanks, any help would be appreciated.
There is no Dockerfile option to remove a port that's been set with EXPOSE, and it is always inherited by derived images; you can't remove this value.
However:
In modern Docker simply having a port "exposed" (as distinct from "published") means almost nothing. It shows up in the docker ps output as unmapped, and if you use the docker run -P option to publish all exposed ports, it will be assigned an arbitrary host port, but that's it. There's no harm to having extra ports exposed.
Since each container runs in an isolated network namespace, there's no harm in using the same port in multiple containers. The container port doesn't have to match the host port. If the base image expected to run the application on port 8082, I'd keep doing that in the derived image; in the Compose setup, you can set ports: ['8091:8082'] to pick a different host port.
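As a sketch, assuming the application from the base image really does listen on port 8082, the Compose service from the question would become:

api_app:
  container_name: api_app
  build:
    context: ./api
    dockerfile: Dockerfile
  ports:
    - "8091:8082"
  volumes:
    - ./api/app:/var/www/html/apiapp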
I'm a newbie to Docker so please correct me if anything I'm stating is wrong.
I created a React app and wrote a following Dockerfile in the root repository:
# pull official base image
FROM node:latest
# A directory within the virtualized Docker environment
# Becomes more relevant when using Docker Compose later
WORKDIR /usr/src/app
# Copies package.json and package-lock.json to Docker environment
COPY package*.json ./
# Installs all node packages
RUN npm install
# Copies everything over to Docker environment
COPY . .
# Uses port which is used by the actual application
EXPOSE 8080
# Finally runs the application
CMD [ "npm", "start" ]
My goal is to run the docker image in a way, that I can open the React app in my browser (with localhost).
Since in the Dockerfile I'm exposing port 8080, I thought I could run:
docker run -p 8080:8080 -t <name of the docker image>
But apparently the application is accessible on port 3000 in the container, because when I run:
docker run -p 8080:3000 -t <name of the docker image>
I can access it with localhost:8080.
What's the point of the EXPOSE port in the Dockerfile, when the service running in its container is accessible through a different port?
When containerizing a NodeJS app, do I always have to make sure that process.env.PORT in my app is the same as the EXPOSE in the Dockerfile?
EXPOSE tells Docker which ports the application inside the container is expected to listen on. By itself it doesn't do anything for reaching those ports from outside (container -> host).
The EXPOSE is very handy when using docker run -P -t <name of the docker image> (-P, capital P) to let Docker automatically publish all the exposed ports to random ports on the host (try it out, then run docker ps or docker inspect <containerId> and check the output).
So if your web server (React app) is running on port 3000 (inside the container), you should EXPOSE 3000 (instead of 8080) to properly integrate with the Docker API.
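To see the effect, a quick sketch (the image and container names here are hypothetical):

docker run -d -P --name react-test my-react-app
docker port react-test
# prints something like: 3000/tcp -> 0.0.0.0:49153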
It's kind of weird. It's just documentation, in a sense.
https://docs.docker.com/engine/reference/builder/#:~:text=The%20EXPOSE%20instruction%20informs%20Docker,not%20actually%20publish%20the%20port.
The EXPOSE instruction does not actually publish the port. It
functions as a type of documentation between the person who builds the
image and the person who runs the container, about which ports are
intended to be published. To actually publish the port when running
the container, use the -p flag on docker run to publish and map one or
more ports, or the -P flag to publish all exposed ports and map them
to high-order ports.
do I always have to make sure that process.env.PORT in my app is the
same as the EXPOSE in the Dockerfile?
Yes. You should.
And then you also need to make sure that port actually gets published, when you use the docker run command or in your docker-compose.yml file, or however you plan on running docker.
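For instance, publishing container port 3000 on host port 8080 looks like this (the image and service names are hypothetical):

docker run -p 8080:3000 my-react-app

# or, equivalently, in docker-compose.yml:
services:
  web:
    image: my-react-app
    ports:
      - "8080:3000"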
Actually, a React app runs on the default port 3000, so you must specify ports (and optionally expose) in docker-compose.yml. Here I'm mapping container port 3000 to host port 8081:
frontend:
  container_name: frontend
  build:
    context: ./frontend/app
    dockerfile: ../Dockerfile
  volumes:
    - ./frontend/app:/home/devops/frontend/app
    - /home/devops/frontend/app/node_modules
  ports:
    - "8081:3000"
  expose:
    - 8081
  command: ["npm", "start"]
  restart: always
  stdin_open: true
And run Docker Compose:
$ sudo docker-compose up -d
Then check the running containers to find the published port:
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
83b970baf16d devops_frontend "docker-entrypoint..." 31 seconds ago Up 30 seconds 8081/tcp, 0.0.0.0:8081->3000/tcp frontend
It's resolved. Check your public port:
$ curl 'http://0.0.0.0:8081'
How do I run all my Node.js file in a single container?
app1.js running on port 1001
app2.js running on port 1002
app3.js running on port 1003
app4.js running on port 1004
Dockerfile
FROM node:latest
WORKDIR /rootfolder
COPY package.json ./
RUN npm install
COPY . .
RUN chmod +x /script.sh
RUN /script.sh
script.sh
#!/bin/sh
node ./app1.js
node ./app2.js
node ./app3.js
node ./app4.js
You would almost always run these in separate containers. You're allowed to run multiple containers from the same image; you can override the default command for an image when you start it up, and you can remap the ports an application uses when you start it.
In your Dockerfile, delete the RUN /script.sh line at the end. (That will try to start the servers during the image build, which you don't want.) Now you can build and run containers:
docker build -t myapp .       # build the image
docker network create mynet   # create a Docker network

# Run the first container in the background (-d), on that network,
# with a known name, publishing a port, from this image,
# running this command:
docker run \
  -d \
  --net mynet \
  --name app1 \
  -p 1001:3000 \
  myapp \
  node ./app1.js

docker run \
  -d \
  --net mynet \
  --name app2 \
  -p 1002:3000 \
  myapp \
  node ./app2.js
(I've assumed all of the scripts listen on the default Express port 3000, which is the second port number in the -p options.)
Docker Compose is a useful tool for running multiple containers together and can replicate this functionality. A docker-compose.yml file matching this setup would look like:
version: '3.8'
services:
  app1:
    build: .
    ports:
      - 1001:3000
    command: node ./app1.js
  app2:
    build: .
    ports:
      - 1002:3000
    command: node ./app2.js
Compose will create a Docker network on its own, and take responsibility for naming the images and containers. docker-compose up will start all of the services in parallel.
You need to expose the ports first using:
EXPOSE 1001
...
EXPOSE 1004
in your Dockerfile, and later run the container using the -p parameter, as in -p 1501:1001, so that (for example) port 1501 on the host maps to port 1001 in the container.
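Putting it all together, a rough sketch (the image name is hypothetical):

docker run -d -p 1501:1001 -p 1502:1002 -p 1503:1003 -p 1504:1004 my-node-apps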
ref: https://docs.docker.com/engine/reference/commandline/run/
However, it is suggested to minimize the number of programs run from a single Docker container. So you might like to have a container for each of your js scripts.
Still, nothing stops you from using:
docker exec -it yourDockerMachineName bash
several times, starting one of your node commands in each session.
What you are trying to achieve is considered an anti-pattern.
Conversely, keeping the single-responsibility principle in mind when building up your app stacks gives you better leverage to manage, monitor, and change your apps.
This article from the official documentation explains when you might want to do this.
If you want to manage multiple containers as a whole, having one Dockerfile for each js, combined with a docker-compose file to bring up all the containers at once on different ports might answer your question. Here is a minimal example:
docker-compose.yml
version: '3.7'
services:
  app1:
    image: your-js-app-1-image
    container_name: app-1
    ports:
      - '1001:3000'
  app2:
    image: your-js-app-2-image
    container_name: app-2
    ports:
      - '1002:3000'
Ideally you should run each app in a separate container, if your applications are different. If they are the same and you want to run multiple instances on different ports, then
docker run -p <your_public_tcp_port_number>:3000 <image_name>
or a good docker-compose.yaml would suffice.
Technically you may want to run each different application in its own container, and run multiple instances of the same application, in order to make it easy to version each of your apps as a newer independent image. This allows you to independently stop, deploy and start your apps in the production environment.
I am very new to Docker so please pardon me if this is a very silly question. Googling hasn't really turned up what I am looking for. I have a very simple Dockerfile which looks like the following:
FROM node:9.6.1
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /usr/src/app/package.json
RUN npm install --silent
COPY . /usr/src/app
EXPOSE 8000
CMD ["npm", "start"]
In the container the app is running on port 8000. Is it possible to access port 8000 without the -p 8000:8000? I just want to be able to do
docker run imageName
and access the app on my browser on localhost:8000
By default, when you create a container, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container’s network, use the --publish or -p flag. This creates a firewall rule which maps a container port to a port on the Docker host.
Read more: Container networking - Published ports
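In other words, the port has to be published one way or another. For the Dockerfile above, a minimal sketch:

docker run -p 8000:8000 imageName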
But you can use docker-compose to configure and run your Docker images easily.
First, install docker-compose: Install Docker Compose
Second, create a docker-compose.yml beside the Dockerfile and copy this code into it:
version: '3'
services:
  web:
    build: .
    ports:
      - "8000:8000"
Now you can start your container with this command:
docker-compose up
If you want to run your services in the background, you can pass the -d flag (for "detached" mode) to docker-compose up, and use docker-compose ps to see what is currently running.
Docker Compose Tutorial
Old question but someone might find it useful:
First get the IP of the docker container by running
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
Then connect to it from the browser or using curl, with the exposed IP and port:
Note that you will not be able to access the container on 0.0.0.0 because the port is not mapped.
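For example, assuming the inspect command above printed 172.17.0.2 and the service inside listens on port 8000:

curl http://172.17.0.2:8000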