At the moment I'm running a node.js application inside a docker container which needs to connect to camunda, which runs in another container.
I start the containers with the following command
docker run -d --restart=always --name camunda -p 8000:8080 camunda/camunda-bpm-platform:tomcat-7.4.0
docker run -d --name app -p 3000:3000 app
Both applications are now running. I can access camunda by navigating to my host's IP on port 8000, and running wget http://localhost:8000 -q -O - on the host also returns the camunda page. However, when I log in to my app container with docker exec -it app sh and type wget http://localhost:8000 -q -O -, I cannot access camunda. Instead I get the following error:
wget: can't connect to remote host (127.0.0.1): Connection refused
When I link my app container to the camunda container with --link camunda:camunda, and type wget http://camunda:8000 -q -O - in my app container, I get the following error:
wget: can't connect to remote host (172.17.0.4): Connection refused
I've also seen the --add-host option, so I started my app container with --add-host camunda:my_hosts_ip and tried wget again, resulting in:
wget: can't connect to remote host (149.210.227.191): Operation timed out
When running wget http://149.210.227.191:5001 -q -O - on my host machine however, I get a correct response immediately.
Ideally I would like to start my app container without having to supply the external IP in any way, and let the app container reach the camunda service via localhost or by linking the camunda container to my app container. What would be the easiest way to achieve this?
Why does it not work?
Containers and the host do not share their local IP stack. So when you are inside a container and point any command at localhost:port, that command talks to the container's own loopback interface, not to the host and not to any other container.
How to make it work?
Hard way: find out the IP address of the other container and connect to that address directly.
Easier and cleaner way: link your containers.
--link=[]
Add link to another container in the form of <name or id>:alias or just <name or id> in which case the alias will match the name
So you'll need to perform, assuming the camunda container is named camunda:
docker run -d --name app -p 3000:3000 --link camunda app
Then, once you have docker exec'd into the app container, you will be able to run wget http://camunda:8080 -q -O - without error.
Note that the graph of linked containers cannot contain loops: e.g. camunda cannot in turn be linked to app, because a container must already be running before another container can link to it. If you need communication in both directions you will have to fall back to working with IP addresses.
Note also that you can specify the IP address of a container using the --ip option (though it can only be used in conjunction with --net for user-defined networks).
Original answer below. Note that link has been deprecated and the recommended replacement is network. That is explained in the answer to this question: docker-compose: difference between network and link
--
Use the --link camunda:camunda option for your app container. Then you can access camunda via http://camunda:8080/.... The link option adds an entry to the /etc/hosts file of the app container with the IP address of the camunda container. This also means you have to restart your app container if you restart the camunda container.
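For completeness, a rough sketch of the network-based approach with the containers from the question (the network name here is arbitrary):
docker network create camunda-net
docker run -d --restart=always --name camunda --network camunda-net -p 8000:8080 camunda/camunda-bpm-platform:tomcat-7.4.0
docker run -d --name app --network camunda-net -p 3000:3000 app
On a user-defined network the containers can resolve each other by name, so inside the app container wget http://camunda:8080 -q -O - should work without any --link.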
Related
I have the following container:
admin@PC:/$ docker ps -a
returns
CONTAINER ID   IMAGE           COMMAND            CREATED      STATUS      PORTS                               NAMES
9c0adfffff     hg/sample:1.1   "/usr/sbin/init"   8 days ago   Up 7 days   0.0.0.0:80->80/tcp, :::80->80/tcp   agitated_euclid
This container is a Spring Boot webapp that maps the application on 80:80. The problem is how to make the PostgreSQL database used by this application, which runs inside the same Docker container, accessible from:
the host Linux machine that runs Docker with this container, and
any computer that wants to connect to this dockerized PostgreSQL with the pgAdmin interface?
Currently I'm using the sudo docker exec -it 9c0adfffff bash command to get a shell in the container and accessing the database with psql, but that doesn't satisfy my current requirement. (like this)
I also tried docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d -p 5432:5432 postgres from this answer, but that fires up a new container, which is not what I need. I need to access the database of the existing container, whose webapp is currently running on port 80.
Is there a way to bind ports to containers without passing an argument via the run command? I do not like starting my containers with the 'docker run' command, so using the -p argument is not an option for me; I prefer to start my containers with 'docker start containername'. I would like to specify the hostname of the docker server with the port number (http://dockerserver:8081) and have this forwarded to my container's app, which is listening on port 8081. My setup is on Azure but is pretty basic, so the Azure Docker plugin looks like overkill. I read up on the EXPOSE instruction, but it seems you still need 'docker run -p' to get access to the container from the outside. Any suggestions would be very much appreciated.
docker run is just a shortcut for docker create + docker start. Ports need to be published when a container is created, so the -p option is also available on docker create:
docker create -p 80:80 --name web nginx:alpine
docker start web
Port publishing only covers ports, though.
If you want the hostname passed to the container, you'll need to do it with a command option or (more likely) an environment variable - defined with ENV in the Dockerfile and passed with -e in docker create.
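A rough sketch of the environment-variable route (DOCKER_SERVER_HOST, myapp and myimage are placeholder names, not anything from the question): put ENV DOCKER_SERVER_HOST=dockerserver in the Dockerfile as a default, then override it when creating the container:
docker create -p 8081:8081 -e DOCKER_SERVER_HOST=dockerserver --name myapp myimage
docker start myapp
The app then reads DOCKER_SERVER_HOST from its environment to build URLs such as http://dockerserver:8081.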
I'm trying to run a gameserver inside a Docker container on my server, but I'm having trouble connecting to it.
I created my container and started my gameserver (which is using port 7777) inside it.
I'm running the container with this command:
docker run -p 7777:7777 -v /home/gameserver/:/home -c=1024 -m=1024m -d --name my_gameserver game
I published port 7777 with the -p parameter but I can't connect to my gameserver, even though the logs show that it has started.
I think I should bind my IP in some way but I have no idea what to do.
What I found so far is that docker inspect my_gameserver | grep IPAddress returns 172.17.0.24.
The problem came from the fact that I hadn't published the UDP port.
Correct command was:
docker run -p 7777:7777 -p 7777:7777/udp -v /home/gameserver/:/home -d --name my_gameserver game
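To double-check that both mappings are in place you can ask Docker for the published ports; the output should look roughly like this:
docker port my_gameserver
7777/tcp -> 0.0.0.0:7777
7777/udp -> 0.0.0.0:7777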
I am completely stuck on the following.
Trying to set up an Express app in Docker on an Azure VM.
1) VM is all good after using docker-machine create --driver azure ...
2) Build image all good after:
//Dockerfile
FROM iojs:onbuild
ADD package.json package.json
ADD src src
RUN npm install
EXPOSE 8080
CMD ["node", "src/server.js"]
Here's where I'm stuck:
I have tried all of the following plus many more:
• docker run -P (Then adding end points in azure)
• docker run -p 80:8080
• docker run -p 80:2756 (2756, the port created during docker-machine create)
• docker run -p 8080:80
It would help if someone could explain Azure's setup with the VIP vs. the internal port vs. Docker's EXPOSE.
So at the end of all this, every port that I try to hit with Azure's:
AzureVirtualIP:ALL_THE_PORT
I just always get back an ERR_CONNECTION_REFUSED.
The Express app is definitely running, because I can see its console log output.
Any ideas?
Thanks
Starting from the outside and working your way in, debugging:
Outside Azure
<start your container on the Azure VM, then>
$ curl $yourhost:80
On the VM
$ docker run -p 80:8080 -d laslo
882a5e774d7004183ab264237aa5e217972ace19ac2d8dd9e9d02a94b221f236
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
64f4d98b9c75 laslo:latest node src/server.js 5 seconds ago up 5 seconds 0.0.0.0:80->8080 something_funny
$ curl localhost:80
That 0.0.0.0:80->8080 shows you that your port forwarding is in effect. If you run other containers, don't have the right privileges or have other networking problems, Docker might give you a container without forwarding the ports.
If this works but the first test didn't, then you didn't open the ports to your VM correctly. It could be that you need to set up the Azure endpoint, or that you've got a firewall running on the VM.
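If a firewall on the VM turns out to be the problem, a minimal sketch of opening the HTTP port (assuming ufw is the firewall in use; adjust accordingly for iptables or an Azure endpoint/network security rule):
sudo ufw allow 80/tcp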
In the container
$ docker run -p 80:8080 --name=test -d laslo
882a5e774d7004183ab264237aa5e217972ace19ac2d8dd9e9d02a94b221f236
$ docker exec -it test bash
# curl localhost:8080
In this last one, we get inside the container itself. Curl might not be installed, so maybe you have to apt-get install curl first.
If this doesn't work, then your Express server isn't listening on port 8080, and you need to check the setup.
I'm trying to deploy my application using Docker and came across an issue: restarting a named container assigns a different IP to the container. Maybe explaining what I am doing will better explain the issue:
Postgres runs inside a separate container named "postgres"
$ PG_ID=$(docker run --name postgres postgres/image)
My webapp container links to postgres container
$ APP_ID=$(docker run --link postgres:postgres webapp/image)
Linking the postgres container to the webapp container inserts a hosts file entry in the webapp container with the IP of the postgres container. This lets me point to the postgres db within my webapp using postgres:5432 (I am using Django, by the way). This all works well, except if postgres crashes for some reason.
Before I manually stop postgres process to simulate postgres process crashing I verify IP of postgres container:
$ docker inspect --format "{{.NetworkSettings.IPAddress}}" $PG_ID
172.17.0.73
Now to simulate crash I stop postgres container:
$ docker stop $PG_ID
If now I restart postgres by using
$ docker start $PG_ID
the ip of the container changes:
$ docker inspect --format "{{.NetworkSettings.IPAddress}}" $PG_ID
172.17.0.74
Therefore the IP that points to the postgres container inside the webapp container is no longer correct. I thought that by naming a container, Docker gives it a specific configuration so that you can reliably link containers (both network and volumes). If the IP changes, this seems to defeat the purpose.
If I have to restart my webapp process each time postgres restarts, this does not seem any better than just running both processes in a single container. Then I could use supervisor or something similar to keep both of them running and use localhost to link between processes.
I am still new to Docker so am I doing something wrong or is this a bug in docker?
2nd UPDATE: maybe you already discovered this, but as a workaround, I plan to map the service that shares the database to the host interface (e.g. with -p 5432:5432) and connect the webapps to the host IP (the IP of the docker0 interface: in my Ubuntu and CentOS, the IP is 172.17.42.1). If you restart the postgres container, the container's IP will change, but it will still be accessible at 172.17.42.1:5432. The downside is that you are exposing that port to all the containers and lose the fine-grained mapping that --link gives you.
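A rough sketch of that workaround (the DB_HOST/DB_PORT variable names are made up here; use whatever your webapp actually reads):
docker run -d --name postgres -p 5432:5432 postgres/image
docker run -d --name webapp -e DB_HOST=172.17.42.1 -e DB_PORT=5432 webapp/image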
--- OLD UPDATES:
CORRECTION: Docker will map 'postgres' to the container's IP in the /etc/hosts file of the webapp container. So, in the webapp container, you can ping 'postgres' and it will resolve to that IP.
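You can verify this from the webapp container, for example using the $APP_ID from the question:
docker exec -it $APP_ID cat /etc/hosts
and look for a line such as 172.17.0.73 postgres.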
1st UPDATE: I've seen that Docker generates and mounts /etc/hosts, /etc/resolv.conf, etc. to always keep the information current, but this does not apply when the linked container is restarted. So I had assumed (wrongly) that Docker would update the hosts files.
-- ORIGINAL (wrong) response:
Add --hostname=postgres-db (you can use anything; I'm using something different from 'postgres' to avoid confusion with the container name):
$ docker run --name postgres --hostname postgres-db postgres/image
Docker will map 'postgres-db' to the container's IP (check the contents of /etc/hosts on the webapp container).
This will allow you to run 'ping postgres-db' from the webapp container. If the IP changes, Docker will update /etc/hosts for you.
In the Django app, use 'postgres-db' instead of the IP (or whatever you use for --hostname of the container with PostgreSql).
Bye!
Horacio
According to https://docs.docker.com/engine/reference/commandline/run/, it should be possible to assign a static IP for your container -- at the time of container creation -- using the --ip option:
Example:
docker run -itd --ip 172.30.100.104 --name postgres postgres/image
...where 172.30.100.104 is a free IP address on a custom bridge/overlay network.
This should then retain the same IP address even if postgres container crashes/restarts.
Looks like this was released in Docker Engine v 1.10 or greater, therefore if you have a lower version, you have to upgrade first.
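A minimal sketch, assuming you first create a user-defined network whose subnet contains that address (the network name and subnet here are just examples):
docker network create --subnet 172.30.100.0/24 mynet
docker run -itd --net mynet --ip 172.30.100.104 --name postgres postgres/image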
As of Docker 1.0 they implemented a stronger sense of linked containers. Now you can use the container instance name as if it were the host name.
Here is a link
I found a link that better describes your problem. And while that question was answered, I wonder whether the ambassador pattern might solve the problem... this assumes that the ambassador is more reliable than the services it links.
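As a very rough sketch of that idea (the alpine/socat image and its socat entrypoint are assumptions, not something from the question): run a small proxy container linked to postgres, and link the webapp to the proxy instead, so that when postgres restarts you only have to restart the lightweight ambassador:
docker run -d --name pg_ambassador --link postgres:postgres alpine/socat TCP-LISTEN:5432,fork,reuseaddr TCP:postgres:5432
docker run -d --name webapp --link pg_ambassador:postgres webapp/image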