How to provide hostName into Docker [duplicate] - node.js

I run a service inside a container that binds to 127.0.0.1:8888.
I want to expose this port to the host.
Does docker-compose support this?
I tried the following in docker-compose.yml but it did not work.
expose:
- "8888"
ports:
- "8888:8888"
P.S. Binding the service to 0.0.0.0 inside the container is not possible in my case.
UPDATE: Providing a simple example:
docker-compose.yml
version: '3'
services:
myservice:
expose:
- "8888"
ports:
- "8888:8888"
build: .
Dockerfile
FROM centos:7
RUN yum install -y nmap-ncat
CMD ["nc", "-l", "-k", "localhost", "8888"]
Commands:
$> docker-compose up --build
$> # Starting test1_myservice_1 ... done
$> # Attaching to test1_myservice_1
$> nc -v -v localhost 8888
$> # Connection to localhost 8888 port [tcp/*] succeeded!
TEST
$>
After inputting TEST in the console, the connection is closed, which means the port is not really exposed, despite the initial success message. The same issue occurs with my real service.
But if I bind to 0.0.0.0 (instead of localhost) inside the container, everything works fine.

Typically the answer is no, and in almost every situation, you should reconfigure your application to listen on 0.0.0.0. Any attempt to avoid changing the app to listen on all interfaces inside the container should be viewed as a hack that adds technical debt to your project.
To expand on my comment, each container by default runs in its own network namespace. The loopback interface inside a container is separate from the loopback interface on the host and in other containers. So if you listen on 127.0.0.1 inside a container, anything outside of that network namespace cannot access the port. It's not unlike listening on loopback on your VM and trying to connect from another VM to that port; Linux doesn't let you connect.
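The loopback-vs-all-interfaces distinction itself can be reproduced with a few lines of plain Python (a sketch, independent of Docker):

```python
import socket

# A socket bound to 127.0.0.1 is reachable only through the loopback
# interface; a socket bound to 0.0.0.0 listens on every interface.
# Inside a container, the loopback interface is the container's own,
# not the host's, so a 127.0.0.1 listener is invisible from outside.
lo_only = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
lo_only.bind(("127.0.0.1", 0))      # port 0: let the kernel pick a free port
lo_only.listen()

all_ifaces = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
all_ifaces.bind(("0.0.0.0", 0))
all_ifaces.listen()

print(lo_only.getsockname()[0])     # 127.0.0.1
print(all_ifaces.getsockname()[0])  # 0.0.0.0

lo_only.close()
all_ifaces.close()
```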
There are a few workarounds:
You can hack up the iptables to forward connections, but I'd personally avoid this. Docker is heavily based on automated changes to the iptables rules, so you risk conflicting with that automation or having your rules broken the next time the container is recreated.
You can set up a proxy inside your container that listens on all interfaces and forwards to the loopback interface. Something like nginx would work.
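A minimal sketch of that proxy option, using nginx's stream module (the port numbers are assumptions matching the question's example; the service itself keeps listening on 127.0.0.1:8888):

```nginx
# nginx.conf fragment inside the container: accept TCP on all
# interfaces on 8889 and relay to the loopback-only service on 8888
stream {
    server {
        listen 8889;
        proxy_pass 127.0.0.1:8888;
    }
}
```

You would then publish 8889 with `ports:` in the Compose file instead of 8888.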
You can get things in the same network namespace.
That last one can be implemented in two ways. Between containers, you can run a container in the network namespace of another container. This is often done for debugging the network, and is also how pods work in Kubernetes. Here's an example of running a second container:
$ docker run -it --rm --net container:$(docker ps -lq) nicolaka/netshoot /bin/sh
/ # ss -lnt
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 10 127.0.0.1:8888 *:*
LISTEN 0 128 127.0.0.11:41469 *:*
/ # nc -v -v localhost 8888
Connection to localhost 8888 port [tcp/8888] succeeded!
TEST
/ #
Note the --net container:... (I used docker ps -lq to get the last started container id in my lab). This makes the two separate containers run in the same namespace.
If you needed to access this from outside of docker, you can remove the network namespacing, and attach the container directly to the host network. For a one-off container, this can be done with
docker run --net host ...
In compose, this would look like:
version: '3'
services:
myservice:
network_mode: "host"
build: .
You can see the docker compose documentation on this option here. This is not supported in swarm mode, and you do not publish ports in this mode since you would be trying to publish the port between the same network namespaces.
Side note, expose is not needed for any of this. It is only there for documentation, and some automated tooling, but otherwise does not impact container-to-container networking, nor does it impact the ability to publish a specific port.

According to @BMitch's answer, "it is not possible to externally access this port directly if the container runs with its own network namespace".
Based on this, I think it is worth providing my workaround for the issue:
One way would be to set up an iptables rule inside the container, for port redirection, before running the service. However, this seems to require iptables modules to be loaded explicitly on the host (according to this), which somewhat breaks portability.
My way (using socat): forward *:8889 to 127.0.0.1:8888.
Dockerfile
...
RUN yum install -y socat
RUN echo -e '#!/bin/bash\n./localservice &\nsocat TCP4-LISTEN:8889,fork TCP4:127.0.0.1:8888\n' >> service.sh
RUN chmod u+x service.sh
ENTRYPOINT ["./service.sh"]
docker-compose.yml
version: '3'
services:
laax-pdfjs:
ports:
# Switch back to 8888 on host
- "8888:8889"
build: .

Check your docker compose version and configure it based on the version.
Compose files that do not declare a version are considered “version 1”. In those files, all the services are declared at the root of the document.
Reference
Here is how I set up my ports:
version: "3"
services:
myservice:
image: myimage:latest
ports:
- "80:80"
We can help you further if you can share the rest of your docker-compose.yaml.

Related

What is the equivalent of --add-host=host.docker.internal:host-gateway in a Compose file

Starting from Docker version 20.10 (https://github.com/moby/moby/pull/40007), there is a new special string host-gateway that one can use with the --add-host run flag to allow a direct connection from inside a docker container to the local machine on Linux-based systems. And this is very nice.
But what is the equivalent of --add-host=host.docker.internal:host-gateway in a Compose file?
e.g. in:
$ docker run \
--rm \
--name postgres \
-p "5433:5432" \
-e POSTGRES_PASSWORD=**** \
--add-host=host.docker.internal:host-gateway \
-d postgres:14.1-bullseye
How would the same --add-host flag fit in this Docker Compose equivalent template:
version: '3.9'
services:
postgres:
image: postgres:14.1-bullseye
environment:
POSTGRES_PASSWORD: ****
ports:
- "5433:5432"
It's for sure not network_mode: host at the service level (see the docs).
The actual Docker Compose equivalent is achieved by appending the same string to the extra_hosts parameter (see the docs) as:
version: '3.9'
services:
postgres:
image: postgres:14.1-bullseye
environment:
POSTGRES_PASSWORD: ****
ports:
- "5433:5432"
extra_hosts:
- "host.docker.internal:host-gateway"
You can see it has been successfully mapped to the IP of the docker0 interface, here 172.17.0.1, from inside your container, e.g.:
$ docker-compose up -d
$ docker-compose exec postgres bash
then, from inside the container:
root@5864db7d7fba:/# apt update && apt -y install netcat
root@5864db7d7fba:/# nc -vz host.docker.internal 80
Connection to host.docker.internal (172.17.0.1) 80 port [tcp/http] succeeded!
(assuming port 80 is not closed or constrained to the IP of the docker0 interface by a firewall on the host machine).
More on this can be found here:
https://medium.com/@TimvanBaarsen/how-to-connect-to-the-docker-host-from-inside-a-docker-container-112b4c71bc66
But... beware...
Warning ⚠️
This will normally always match the 172.17.0.1 IP of the docker0 interface on the host machine. Hence, if you spin up a container using a Compose file (so, not by using docker run), chances are high that this container will rely on the network created for the Compose services. And this network will use a random gateway address of the form 172.xxx.0.1, which will almost certainly differ from the 172.17.0.1 default Docker gateway; it can for example be 172.22.0.1.
This can cause you some trouble if, for example, you only explicitly authorized connections from 172.17.0.1 to a port of a local service on the host machine.
Indeed, it will not be possible to reach the port of that service from inside the container, precisely because of this differently assigned gateway address (172.22.0.1).
Therefore, and because you cannot know in advance which gateway address the Compose network will have, I highly recommend that you build a custom network definition in the Compose file, e.g.:
version: '3.9'
networks:
network1:
name: my-network
attachable: true
ipam:
driver: default
config:
- subnet: 172.18.0.0/16
ip_range: 172.18.5.0/24
gateway: 172.18.0.1
services:
postgres:
image: postgres:14.1-bullseye
environment:
POSTGRES_PASSWORD: ****
ports:
- "5433:5432"
networks:
- network1
If needed, I also recommend using some IP range calculator tool, such as http://jodies.de/ipcalc?host=172.18.5.0&mask1=24&mask2= to help yourself in that task, especially when defining ranges using the CIDR notation.
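As an alternative to an online calculator, Python's standard ipaddress module can sanity-check the subnet, ip_range, and gateway values from a network definition like the one above:

```python
import ipaddress

# The values from the Compose network definition above
subnet = ipaddress.ip_network("172.18.0.0/16")
ip_range = ipaddress.ip_network("172.18.5.0/24")
gateway = ipaddress.ip_address("172.18.0.1")

print(subnet.num_addresses)        # 65536 addresses in the /16
print(ip_range.subnet_of(subnet))  # True: the ip_range fits inside the subnet
print(gateway in subnet)           # True: the gateway belongs to the subnet
```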
Finally, spin up your container. And verify that the newly specified Gateway address 172.18.0.1 has been correctly used:
$ docker inspect tmp_postgres_1 -f '{{range .NetworkSettings.Networks}}{{.Gateway}}{{end}}'
172.18.0.1
Attach to it, install netcat and verify:
root@9fe8de220d44:/# nc -vz 172.18.0.1 80
Connection to 172.18.0.1 80 port [tcp/http] succeeded!
(you may also need to adapt your firewall rules accordingly and/or the allowed IPs for your local service, e.g. a database)
Another solution
is to connect to the existing default bridge network using docker network. In order to do so, after having spun up the container, run this command:
$ docker network connect bridge tmp_postgres_1
Now, an inspect should give you two IPs; the one you set up (if any) or the one auto-magically set up by docker during the container creation, and the bridge IP:
$ docker inspect tmp_postgres_1 -f '{{range .NetworkSettings.Networks}}{{.Gateway}}{{end}}'
172.17.0.1 172.18.0.1
Or
you can skip the manual network creation and directly tell, in your Compose service definition, to join the bridge network using the network_mode: flag as follows:
version: '3.9'
services:
postgres:
image: postgres:14.1-bullseye
environment:
POSTGRES_PASSWORD: ****
ports:
- "5433:5432"
# removed networks: and add this:
network_mode: bridge
extra_hosts:
- "host.docker.internal:host-gateway"
Now, whether you used the docker network connect... method or the network_mode: flag in your Compose file, you have normally successfully joined the default bridge network with the gateway 172.17.0.1. This allows you to use that gateway IP to connect to your host, either by typing its numerical value or, if set, the variable host.docker.internal:
root@9fe8de220d44:/# nc -vz 172.18.0.1 80
Connection to 172.18.0.1 80 port [tcp/http] succeeded!
root@9fe8de220d44:/# nc -vz 172.17.0.1 80
Connection to 172.17.0.1 80 port [tcp/http] succeeded!
root@9fe8de220d44:/# nc -vz host.docker.internal 80
Connection to host.docker.internal (172.17.0.1) 80 port [tcp/http] succeeded!
⚠️ But by joining the bridge network, you also make it possible for your container to communicate with all other containers on that network (if they have published ports), and vice versa. So if you need to clearly keep it apart from these other containers, you preferably don't want to do that and should stick with its own custom network!
What if something goes wrong?
In case you messed up your docker network after some trials, you may face an error message such as:
Creating tmp_postgres_1 ... error
ERROR: for tmp_postgres_1 Cannot start service postgres: failed to create endpoint tmp_postgres_1 on network bridge: network 895de42e2a0bdaab5423a6356a079fae55aae41ae268ee887ed214bd6fd88486 does not exist
ERROR: for postgress Cannot start service postgres: failed to create endpoint tmp_postgres_1 on network bridge: network 895de42e2a0bdaab5423a6356a079fae55aae41ae268ee887ed214bd6fd88486 does not exist
ERROR: Encountered errors while bringing up the project.
even though the 895de42e2a0bdaab5423a6356a079fae55aae41ae268ee887ed214bd6fd88486 bridge network does actually exist. You have to clean all that up either by restarting your computer or, in the luckier case, just the docker service with:
$ sudo service docker restart
(a docker network prune -f may not be sufficient).
More in the documentation:
https://docs.docker.com/compose/networking/
https://docs.docker.com/compose/compose-file/compose-file-v3/#networks
https://github.com/compose-spec/compose-spec/blob/master/spec.md#networks-top-level-element
Tested on a host machine having the following specs:
Ubuntu: 18.04.6 LTS
Kernel: 5.4.0-94-generic
Docker: 20.10.12, build e91ed57
Docker Compose: 1.27.4, build 40524192

Can't remove old Docker Networking Settings

I hope you can help.
I had an old docker image that was configured for networking, exposing port 8082. I am using this image as my base image to create a new container, but I can't seem to get rid of the old networking settings.
Port 8082 is not specified in my new Dockerfile or docker-compose file, but it still comes up. My new port is 8091.
server#omv:~/docker/app$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f023f6a0a792 api_app_image "/entrypoint.sh" 3 minutes ago Up 3 minutes 80/tcp, 8082/tcp, 0.0.0.0:8091->8091/tcp api_app
Here is my docker-compose file.
api_app:
container_name: api_app
build:
context: ./api
dockerfile: Dockerfile
ports:
- "8091:8091"
volumes:
- ./api/app:/var/www/html/apiapp
Here is a snip from my Dockerfile
FROM bde8c3167970
VOLUME /etc/nginx/conf.d
VOLUME /var/www/html/apiapp
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
EXPOSE 80 8091
Thanks, any help would be appreciated.
There is no Dockerfile option to remove a port that's been set with EXPOSE, and it is always inherited by derived images; you can't remove this value.
However:
In modern Docker simply having a port "exposed" (as distinct from "published") means almost nothing. It shows up in the docker ps output as unmapped, and if you use the docker run -P option to publish all exposed ports, it will be assigned an arbitrary host port, but that's it. There's no harm to having extra ports exposed.
Since each container runs in an isolated network namespace, there's no harm in using the same port in multiple containers. The container port doesn't have to match the host port. If the base image expected to run the application on port 8082, I'd keep doing that in the derived image; in the Compose setup, you can set ports: ['8091:8082'] to pick a different host port.
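For instance, if the base image's application really does run on 8082, a Compose sketch that keeps the container port and remaps only the host side might look like this (service and build context taken from the question; the container port 8082 is an assumption based on the base image):

```yaml
api_app:
  container_name: api_app
  build:
    context: ./api
    dockerfile: Dockerfile
  ports:
    - "8091:8082"   # host port 8091 -> container port 8082
  volumes:
    - ./api/app:/var/www/html/apiapp
```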

How to stabilize the port used by docker-compose?

I have a Node.js application that I want to run with docker-compose. Inside the container it listens on port 4321, set by an environment variable.
This port is also exposed by my Dockerfile and I specify it like so in my docker-compose.yml:
version: '3.4'
services:
previewcrawler:
image: previewcrawler
build:
context: .
dockerfile: ./Dockerfile
environment:
NODE_ENV: development
ports:
- 4321:4321
- 9229:9229
command: ['node', '--inspect=0.0.0.0:9229', 'dist/index.js']
I run the app with a VSCode task, which executes this:
docker run -dt -P --name "previewcrawler-dev" -e "DEBUG=*" -e "NODE_ENV=development" --label "com.microsoft.created-by=visual-studio-code" -p "9229:9229" "previewcrawler:latest" node --inspect-brk=0.0.0.0:9229 .
When I choose to open the application in my browser, it has some crazy port like 49171, which also changes every time I start my container.
How can I make this port stable, so that it is 4321 every time, as I specified in my docker-compose.yml?
docker run -P (with a capital P) tells Docker to pick a host port for anything the Dockerfile EXPOSEs. You have no control over which host port or interfaces the port uses.
docker run -p 4321:4321 (with a lowercase p) lets you explicitly pick which ports get published, and on which host port. It is exactly equivalent to the Compose ports: option.
This is further detailed in the Docker run reference.
(That link is more specifically to a section entitled "expose incoming ports". However, "expose" as a verb means almost nothing in modern Docker. Functionally, it does only two things: if you use docker run -P then all exposed ports get published; and if you don't have a -p or -P option at all, the port will be listed in the docker ps output anyways. Exposed ports aren't automatically published, and there's not really any reason to use the docker run --expose or Compose expose: options.)
Apparently I started my app with the wrong command. I now use
docker-compose -f "docker-compose.debug.yml" up -d --build
which works great. The port is also correct then.

Alpine Linux docker set hostname

I'm using lwieske/java-8:server-jre-8u121-slim with Alpine Linux
I'd like to set hostname from a text file to be seen globally (for all shells)
/ # env
HOSTNAME=2fa4a43a975c
/ # cat /etc/afile
something
/ # hostname -F /etc/afile
hostname: sethostname: Operation not permitted
Everything is running as a service in swarm.
I want every node to have a unique hostname based on the container id.
You can provide the --hostname flag to docker run as well:
docker run -d --net mynet --ip 162.18.1.1 --hostname mynodename
As a workaround, you can use docker-compose to assign the hostnames for multiple containers.
Here is the example docker-compose.yml:
version: '3'
services:
ubuntu01:
image: ubuntu
hostname: ubuntu01
ubuntu02:
image: ubuntu
hostname: ubuntu02
ubuntu03:
image: ubuntu
hostname: ubuntu03
ubuntu04:
image: ubuntu
hostname: ubuntu04
To make it dynamic, you can generate docker-compose.yml from a script.
Then run with: docker-compose up.
docker service create has a --hostname parameter that allows you to specify the hostname. On a more personal note, if you connect to one of your services, any other service on the same network will be pingable and accessible using the service name, with the added benefit of allowing you multiple replicas without worrying about what those will be named.
Better to be late than never. Found this Q trying to find the same thing myself.
The answer is to give the docker container the SYS_ADMIN capability, and hostname -F will then set the hostname properly.
docker-compose:
cap_add:
- SYS_ADMIN
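In a full Compose file, that fragment sits under the service definition; a minimal sketch (the image and command here are placeholders for illustration):

```yaml
version: '3'
services:
  myservice:
    image: alpine
    cap_add:
      - SYS_ADMIN   # permits sethostname() from inside the container
    command: hostname -F /etc/afile
```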

How to connect nodeJS docker container to mongoDB

I have problems connecting a nodeJS application, which is running as a docker container, to a mongoDB. Let me explain what I have done so far:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3a3732cc1d90 mongo:3.4 "docker-entrypoint..." 3 weeks ago Up 3 weeks 27017/tcp mongo_live
As you can see, there is already a mongo docker container running.
Now I'm running my nodeJS application docker container (which is a build from meteorJS):
$ docker run -it 0b422defbd59 /bin/bash
In this docker container I want to run the application by running:
$ node main.js
Now I'm getting the error
Error: MONGO_URL must be set in environment
I already tried to set MONGO_URL by setting:
ENV MONGO_URL mongodb://mongo_live:27017/
But this doesn't work:
MongoError: failed to connect to server [mongo_live:27017] on first connect
So my question is how to connect to a DB, which is - as far as I understand - 'outside' of the running container. Alternatively, how do I set up a new DB for this container?
There are a couple of ways to do it.
Run your app in the same network as your mongodb:
docker run --net container:mongo_live your_app_docker_image
# then you can use mongodb in your localhost
$ ENV MONGO_URL mongodb://localhost:27017/
You can also link the two containers:
docker run --link mongo_live:mongo_live you_app_image ..
# Now mongodb is accessible via mongo_live
Use the mongodb container's ip address:
docker inspect -f '{{.NetworkSettings.IPAddress}}' mongo_live
# you will get you container ip here
$ docker run -it 0b422defbd59 /bin/bash
# ENV MONGO_URL mongodb://[ip from previous command]:27017/
You can bind your mongodb port to your host and use the host's hostname in your app
You can use docker network and run both apps in the same network
You could pass --add-host mongo_live:<ip of mongo container> to docker run for your application and then use mongo_live for mongodb url
You can also use docker compose to make your life easier ;)
...
When you run containers, each container works in an independent network, so one container can't connect to another point to point.
There are 3 ways to connect containers:
Have a little fuss with low-level docker network magic
Connect containers through localhost. Each container must expose ports on localhost (as your mongo_live does), and you need to add 127.0.0.1 mongo_live to the hosts file on localhost. (This is the simplest way.)
Use docker-compose. It is a convenient tool for running many containers together. (This is the right way.)
Adding mongodb to the application container is not the docker way.
Please use the below snippet for your docker-compose.yml file, replacing the comments with your actual values. It should solve your problem.
version: '2'
services:
db:
build: <image for mongoDB>
ports:
- "27017:27017" # whatever port you are using
environment:
#you can specify mongo db username and stuff here
volumes:
- #load default config for mongodb from here
- "db-data-store:/data/db" # path depends on which image you use
networks:
- network
nodejs:
build: #image for node js
expose:
- # mention port for nodejs
volumes:
- #mount project code on container
networks:
- network
depends_on:
- db
networks:
network:
driver: bridge
Please use the below links for reference:
1) NodeJs Docker
2) MongoDb docker
3) docker-compose tutorial
Best of Luck
