I'm trying to deploy my application using Docker and came across an issue: restarting a named container assigns a different IP to it. Explaining what I am doing will better illustrate the issue:
Postgres runs inside a separate container named "postgres"
$ PG_ID=$(docker run -d --name postgres postgres/image)
My webapp container links to the postgres container
$ APP_ID=$(docker run -d --link postgres:postgres webapp/image)
Linking the postgres container to the webapp container inserts an entry into the webapp container's hosts file with the IP of the postgres container. This lets me point to the postgres DB from my webapp as postgres:5432 (I am using Django, by the way). This all works well, except when postgres crashes for some reason.
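For illustration, the generated hosts entry can be checked from the host; a sketch (the IP matches the inspect output below):
$ docker exec $APP_ID grep postgres /etc/hosts
172.17.0.73	postgres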
Before I manually stop the postgres process to simulate a crash, I verify the IP of the postgres container:
$ docker inspect --format "{{.NetworkSettings.IPAddress}}" $PG_ID
172.17.0.73
Now, to simulate a crash, I stop the postgres container:
$ docker stop $PG_ID
If I now restart postgres using
$ docker start $PG_ID
the IP of the container changes:
$ docker inspect --format "{{.NetworkSettings.IPAddress}}" $PG_ID
172.17.0.74
Therefore the IP which points to the postgres container inside the webapp container is no longer correct. I thought that by naming a container, Docker assigns a name to it with specific configs so that you can reliably link between containers (both network and volumes). If the IP changes, this seems to defeat the purpose.
If I have to restart my webapp process each time postgres restarts, this does not seem any better than just using a single container to run both processes. Then I could use supervisor or something similar to keep both of them running and use localhost to link between the processes.
I am still new to Docker, so am I doing something wrong, or is this a bug in Docker?
2nd UPDATE: maybe you already discovered this, but as a workaround, I plan to map the database service to the host interface (e.g. with -p 5432:5432) and have the webapps connect to the host IP (the IP of the docker0 interface: on my Ubuntu and CentOS machines, the IP is 172.17.42.1). If you restart the postgres container, the container's IP will change, but it will still be accessible at 172.17.42.1:5432. The downside is that you are exposing that port to all the containers and lose the fine-grained mapping that --link gives you.
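A rough sketch of that workaround (the DATABASE_HOST/DATABASE_PORT settings are hypothetical stand-ins for however your app is actually configured):
$ docker run -d --name postgres -p 5432:5432 postgres/image
$ docker run -d -e DATABASE_HOST=172.17.42.1 -e DATABASE_PORT=5432 webapp/image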
--- OLD UPDATES:
CORRECTION: Docker will map 'postgres' to the container's IP in the /etc/hosts file of the webapp container. So, in the webapp container, you can ping 'postgres' and it will resolve to that IP.
1st UPDATE: I've seen that Docker generates and mounts /etc/hosts, /etc/resolv.conf, etc. so that they always have the correct information, but this does not apply when the linked container is restarted. So I had (wrongly) assumed that Docker would update the hosts file.
-- ORIGINAL (wrong) response:
Add --hostname=postgres-db (you can use anything; I'm using something different from 'postgres' to avoid confusion with the container name):
$ docker run --name postgres --hostname postgres-db postgres/image
Docker will map 'postgres-db' to the container's IP (check the contents of /etc/hosts on the webapp container).
This will allow you to run 'ping postgres-db' from the webapp container. If the IP changes, Docker will update /etc/hosts for you.
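A quick way to verify, as a sketch (using the $APP_ID from the question):
$ docker exec -it $APP_ID ping -c 1 postgres-db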
In the Django app, use 'postgres-db' instead of the IP (or whatever you used for --hostname on the PostgreSQL container).
Bye!
Horacio
According to https://docs.docker.com/engine/reference/commandline/run/, it should be possible to assign a static IP for your container -- at the time of container creation -- using the --ip option:
Example:
docker run -itd --ip 172.30.100.104 --name postgres postgres/image
...where 172.30.100.104 is a free IP address on a custom bridge/overlay network.
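Note that --ip only works on a user-defined network, so that network would have to be created first; a sketch with a made-up network name and subnet:
docker network create --subnet 172.30.100.0/24 mynet
docker run -itd --net mynet --ip 172.30.100.104 --name postgres postgres/image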
This should then retain the same IP address even if postgres container crashes/restarts.
This feature was released in Docker Engine v1.10, so if you have a lower version, you have to upgrade first.
As of Docker 1.0 there is a stronger sense of linked containers: you can now use the container instance name as if it were the hostname.
Here is a link
I found a link that better describes your problem. While that question was answered, I wonder whether this ambassador pattern might solve the problem... it assumes that the ambassador is more reliable than the services it links.
Related
I have a docker container running on a local host with private ip 172.17.0.3.
I want this to be publicly accessible over the internet so that anyone in the world can ssh into this docker container. Is this possible, and if so, how? I am trying to create a small public cloud of docker container instances in my local network which people from all over the world can access, and I am sitting behind a NAT, which might cause issues.
Any help will be appreciated.
Forward a port from inside the container to the host, for example:
docker run -it --name mydocker -p 8080:80 docker/tensorflow:latest
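For the SSH case specifically, a minimal sketch (assumes the image actually runs sshd on port 22; the image name is made up):
docker run -d --name mybox -p 2222:22 my/ssh-image
# then forward external port 2222 on your NAT router to this host and connect with:
ssh -p 2222 user@your-public-ip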
I'm trying to set up a very simple service using docker swarm, and I have a problem with exposing ports.
I have two machines; let's name them xxx and yyy. When I do a simple
docker run -d -p 9200:9200 -p 9300:9300 elasticsearch:7.4.0
both of them work correctly: I can go to xxx:9200 to reach an instance of Elasticsearch.
I tried to do the same in swarm mode, so on the xxx machine I did:
docker swarm init --advertise-addr [external IP of xxx machine]
I got the correct token and successfully joined the yyy machine to the swarm.
Then I created new overlay network using
docker network create -d overlay dockerdemo
and created service in this swarm using
docker service create --name swarmelasticsearch --network dockerdemo --replicas 2 -p 9200:9200 -p 9300:9300 elasticsearch:7.4.0
The service is created successfully and both machines have running Elasticsearch containers, but I cannot reach them from outside. When I go to xxx:9200, yyy:9200, or the public IP of xxx on port 9200, nothing happens; I cannot reach my site. Why? Do I need to do anything more? Both of my machines are Azure VMs with Ubuntu + the latest Docker.
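For what it's worth, this is how I check that the tasks really are running on both nodes (a sketch):
docker service ps swarmelasticsearch
docker service logs swarmelasticsearch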
At the moment I'm running a node.js application inside a docker container which needs to connect to camunda, which runs in another container.
I start the containers with the following commands
docker run -d --restart=always --name camunda -p 8000:8080 camunda/camunda-bpm-platform:tomcat-7.4.0
docker run -d --name app -p 3000:3000 app
Both applications are now running. I can access camunda by navigating to my host's IP on port 8000, and running wget http://localhost:8000 -q -O - on the host also returns the camunda page. However, when I log in to my app container with docker exec -it app sh and type wget http://localhost:8000 -q -O -, I cannot reach camunda. Instead I get the following error:
wget: can't connect to remote host (127.0.0.1): Connection refused
When I link my app container to the camunda container with --link camunda:camunda, and type wget http://camunda:8000 -q -O - in my app container, I get the following error:
wget: can't connect to remote host (172.17.0.4): Connection refused
I've seen this option, so I started my app container with --add-host camunda:my_hosts_ip and tried wget again, resulting in:
wget: can't connect to remote host (149.210.227.191): Operation timed out
When running wget http://149.210.227.191:5001 -q -O - on my host machine however, I get a correct response immediately.
Ideally I would like to start my app container without supplying the external IP in any way, and let the app container reach the camunda service via localhost or by linking the camunda container to my app container. What would be the easiest way to achieve this?
Why does it not work?
Containers and the host do not share their local IP stack. Thus, when you are inside a container and run anything localhost:port, the anything command will try to connect to the container's own local IP stack, not the other container's nor the host's.
How to make it work?
Hard way: find out the IP address of the other container and connect to that IP address directly.
Easier and cleaner way: link your containers.
--link=[]
Add link to another container in the form of <name or id>:alias or just <name or id> in which case the alias will match the name
So you'll need to perform, assuming the camunda container is named camunda:
docker run -d --name app -p 3000:3000 --link camunda app
Then, once you docker-exec-ed into the container app you will be able to execute wget http://camunda:8080 -q -O - without error.
Note that while the linked-containers graph cannot loop (e.g., camunda cannot in turn be linked to app, because a container must already be started before you can link to it), you can still do whatever you want/need by working with IP addresses directly.
Note also that you can specify the IP address of a container using the --ip option (though it can only be used in conjunction with --net for user-defined networks).
Original answer below. Note that link has been deprecated and the recommended replacement is network. That is explained in the answer to this question: docker-compose: difference between network and link
--
Use the --link camunda:camunda option for your app container. Then you can access camunda via http://camunda:8080/.... The link option adds an entry to the /etc/hosts file of the app container with the IP address of the camunda container. This also means you have to restart your app container if you restart the camunda container.
Apologies for asking two unrelated questions.
what is the best way of accessing the host machine from a docker container? (I.e., I am trying to access a Kafka instance running on the host from my docker container so that I can publish some messages.)
when I run docker run ... on an image which I've modified and which may have an issue/syntax error, it will naturally not start. Is there a log file anywhere that I could look at to debug the issue? (This question is somewhat related to the first one, since I did what was suggested in another post, but the image is still not starting.)
This is an ongoing discussion on what to use and what not; I don't really know what is best. Using docker run --net="host" is pretty easy but can be dangerous. See From inside of a Docker container, how do I connect to the localhost of the machine?.
Use docker logs containerid or look up the raw data in /var/lib/docker/containers/containerid/ on Ubuntu.
You should have no problem connecting to the host using the local LAN interface IP address. Suppose your host has the IP 192.168.0.1:
docker run --rm -ti ubuntu bash
ping 192.168.0.1
should give you a response.
You can use docker logs to see the standard output of your container.
Is there a way I can reach my docker containers using names instead of ip addresses?
I've heard of pipework, and I've seen some DNS and hostname-type options for docker, but I am still unable to piece everything together.
Thank you for your time.
I'm not sure if this is helpful, but this is what I've done so far:
installed a docker container host using docker-machine and the vmwarevsphere driver
started up all the services with docker-compose
I can reach all of the services from any other machine on the network using IP and port
I've added a DNS alias entry to my private network's DNS server that matches the machine name used by docker-machine. But the machine always picks up a different IP address when it boots and connects to the network.
I'm just lost as to where to tackle this:
network DNS server
docker-machine hostname
docker container hostname
probably some combination of all of them
I'm probably looking for something similar to this question:
How can let docker use my network router to assign dhcp ip to containers easily instead of pipework?
Any general direction will be awesome...thanks again!
Docker 1.10 has a built-in DNS. If your containers are connected to the same user-defined network (create a network with docker network create my-network and run your containers with --net my-network), they can reference each other using the container name. (Docs).
Cool!
One caveat: if you are using Docker Compose, you know that it adds a prefix to your container names, i.e. <project name>_<service name>_#. This makes your container names somewhat harder to control, but it might be OK for your use case. You can override the Docker Compose naming behaviour by manually setting container_name in your compose file, but then you won't be able to scale with compose.
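As an illustration of the naming (the project and service names here are made up):
docker-compose up -d
docker ps --format '{{.Names}}'
# => myproject_web_1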
Create a new bridge network other than docker0, run your containers inside it and you can reference the containers inside that network by their names.
Docker daemon runs an embedded DNS server to provide automatic service discovery for containers connected to user-defined networks. Name resolution requests from the containers are handled first by the embedded DNS server.
Try this:
docker network create <network name>
docker run --net <network name> --name test busybox nc -l -p 7000
docker run --net <network name> busybox ping test
First, we create a new network. Then, we run a busybox container named test listening on port 7000 (just to keep it running). Finally, we ping the test container by its name and it should work.
EDIT 2018-02-17: Docker may eventually remove the links key from docker-compose, therefore they suggest using user-defined networks instead, as stated here => https://docs.docker.com/compose/compose-file/#links
Assuming you want to reach the mysql container from the web container of your docker-compose.yml file, such as:
web:
  build: .
  links:
    - mysqlservice
mysqlservice:
  image: mysql
You'll be pleased to know that Docker Compose already adds a mysqlservice domain name (in the web container's /etc/hosts) which points to the mysql container.
Instead of looking up the mysql container's IP address, you can just use the mysqlservice domain name.
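A quick check from the host, as a sketch (service names as in the snippet above):
docker-compose exec web ping -c 1 mysqlservice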
If you want to add custom domain names, it's also possible with the extra_hosts parameter.
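For reference, extra_hosts is the Compose equivalent of the --add-host flag on docker run; a sketch with a made-up hostname and address:
docker run --add-host somehost:162.242.195.82 webapp/image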
You might want to try out dnsdock. It looks straightforward and easy(!) to set up. Have a look at http://blog.brunopaz.net/easy-discover-your-docker-containers-with-dnsdock/ and https://github.com/tonistiigi/dnsdock.
If you want an out-of-the-box solution, you might want to check out, for example, Kontena. It comes with network overlay technology from Weave, and this technology is used to create virtual private LAN networks between services. Thanks to that, every service/container can be reached via service_name.kontena.local.
I replaced the --net parameter with the --network parameter and it runs as expected:
docker network create <network name>
docker run --network <network name> --name <container name> <other container options>
docker run --network <network name> --name <container name> <other container options>
If you are using Docker Compose, and your docker-compose.yml file has a top-level services: block (you are not using the obsolete "version 1" file format), then Compose does all of the required setup automatically. The names underneath services: can be directly used as host names.
version: '3.8'
services:
  database:          # <-- "database" is a usable hostname
    image: postgres
  application:       # <-- "application" is a usable hostname
    build: .
    environment:
      PGHOST: database   # <-- use the "database" hostname
Networking in Compose in the Docker documentation describes this setup further.
These host names only work for connections between containers in the same Compose file. If you manually declare networks: then the two containers must have some network in common, but the easiest setup is to just not declare networks: at all. These connections will only use the "standard" port (for PostgreSQL, for example, always connect to port 5432); a ports: declaration is not required and is ignored if present.
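For example, reaching the database from the application container could look like this (a sketch; assumes the psql client is installed in the application image):
docker-compose exec application psql -h database -U postgres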