Why can't the Docker ("master") container see workers? - linux

I'm running Docker 1.7 and docker-compose 1.5.2 on Oracle Enterprise Linux (OEL) 6 with a 2.6 kernel.
I have adapted the CitusDB docker-compose.yml file in a few minor ways; the main change is that I have only one 'worker' entry and use docker-compose scale to increase the number of workers. This worked fine on the Docker 1.10/docker-compose 2 setup I was developing on, but that used VirtualBox VMs under OS X with docker-machine.
Now, deploying on a regular native Linux box, I'm back on the older versions above, and the 'master' container cannot connect to the workers.
Port 5432 (the standard PostgreSQL port) is exposed, each worker container has a different IP address, and the workers get entries in /etc/hosts on the master, but the master still cannot connect to the workers.
I saw some messages suggesting firewall issues. The Linux box uses iptables, and when I start up with docker-compose it creates a DOCKER chain in iptables.
I tried disabling iptables, and it did not change the results.
I tried creating alternative networks using the docker-compose net: directive, and it made no difference.
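To illustrate the failure mode, this is roughly the check I run from inside the master (the container and host names here are illustrative, not the exact ones from my compose file):
docker exec -it master bash                            # enter the master container
ping worker_1                                          # resolves via the /etc/hosts entries above
psql -h worker_1 -p 5432 -U postgres -c 'SELECT 1'     # this is the connection that fails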

Related

Docker Container Attached to Network, Network Inspect Shows No Containers

I am simply trying to connect a ROS2 node from my Ubuntu 22.04 VM on my laptop to another ROS2 node on another machine running Ubuntu 18.04. Ideally, I would only have Docker on the second machine (the first machine runs a trivial node that will never change), but I have been trying with a separate container on each.
Here is what I am doing and what I am seeing when I inspect:
(SSH into machine 2 from VM 1.)
A: Start up the network on machine 2.
sudo docker network create -d overlay --attachable my-attachable-ovrlay
B: start up container 1.
sudo docker run -it --rm test1
C: successfully attach container 1 to the network.
sudo docker network connect dwgyau64pvpenxoj2edu4liqu bold_murdock
D: Confirm the container lists the network.
sudo docker inspect -f '{{range $key, $value := .NetworkSettings.Networks}}{{$key}} {{end}}' bold_murdock
prints:
bridge my-attachable-ovrlay
E: Check the network to see the container.
sudo docker network inspect my-attachable-ovrlay
prints (among other things):
"Containers": null,
I am new to Docker AND networking, so I could be missing something huge, but I have tried all of the standard suggestions I found online, including disabling my firewall, opening a ton of ports using ufw allow on both machines, making sure the nodes are active, etc.
I tried joining the network from machine 2, and that works: the container is displayed when using network inspect. But when I do that, machine 1 simply refuses to connect to the network.
F: In this situation it gives an error.
sudo docker network connect dwgyau64pvpenxoj2edu4liqu objective_mendel
prints:
Error response from daemon: attaching to network failed, make sure your network options are correct and check manager logs: context deadline exceeded
Also, before trying any Docker networking, I tried plainly pinging from VM 1 to machine 2, and that works both ways. I also used netcat to open an old-timey chat session on port 1234 (a random port, as per this resource), and that works one way only: I can communicate both ways, but only when machine 1 sends the initial netcat request and machine 2 listens. When machine 2 sends the request and machine 1 listens, nothing happens.
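For concreteness, the netcat test looked roughly like this (the addresses are placeholders):
nc -l 1234               # machine 2 listens
nc <machine-2-ip> 1234   # machine 1 initiates: this direction works
nc -l 1234               # machine 1 (VM) listens
nc <machine-1-ip> 1234   # machine 2 initiates: nothing happens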
I have been struggling to get this to work for 3 weeks now. I know it’s something stupid, I just know it. Any advice would be incredibly appreciated. Please explain like I know nothing about networking, because I just about do.
EDIT: I converted images (still hyperlinked) into code blocks.
If both PCs are on the same LAN, you could skip the whole network configuration entirely and use ROS2 auto-discovery.
E.g.
PC1:
docker run -it --rm --net=host -v /dev/shm:/dev/shm osrf/ros:foxy-desktop
export ROS_DOMAIN_ID=1
ros2 run demo_nodes_py talker
PC2:
docker run -it --rm --net=host -v /dev/shm:/dev/shm osrf/ros:foxy-desktop
export ROS_DOMAIN_ID=1
ros2 run demo_nodes_py listener
If the PCs are not on the same network, I usually use ZeroTier to create a virtual LAN between PC1, PC2, and PC(N), then repeat the above example.
The issue was that the router's clock was set to 1969. When we updated the time by connecting to the internet for 15 seconds and then disconnecting, it started working.

Docker: Linux Worker does not join Overlay network defined by Windows Master. Windows Worker works fine

I have the following 3 nodes:
C:\Users\yadamu>docker node ls
ID                            HOSTNAME     STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
4tpray751pk50bl6o6gtbjfq2     YADAMU-DB5   Ready    Active                          20.10.8
xzx0gxu1m0qo59z6gtr4j2i1p *   yadamu-db3   Ready    Active         Leader           20.10.8
x1yh3l6m6k73gytkxx3ipimq4     yadamu-db4   Ready    Active                          20.10.10
On yadamu-db3 (Manager), a Windows 11 box, I created an overlay network with
docker network create --driver overlay --attachable Y_OVERLAY
I then started a set of containers using docker-compose on YADAMU-DB3. They came up as expected and could talk to each other. I then started a second set of containers on YADAMU-DB5, also a Windows 11 box, using a different docker-compose file; they could also talk to each other and to the containers running on YADAMU-DB3.
I then started a third set of containers using docker-compose on YADAMU-DB4, which is running Oracle Enterprise Linux 8. These containers can talk to each other but are isolated from the containers running on YADAMU-DB3 and YADAMU-DB5.
All three docker-compose files contain the following 'networks' section:
networks:
  YADAMU-NET:
    name: Y_OVERLAY
    attachable: true
However, when I run docker-compose on the Linux box I see
C:\Development\YADAMU\docker\dockerfiles\swarm\YADAMU-DB4>docker-compose up -d
WARNING: The Docker Engine you're using is running in swarm mode.
Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.
To deploy your application across the swarm, use `docker stack deploy`.
Creating network "Y_OVERLAY" with the default driver
Creating volume "YADAMU_01-SHARED" with default driver
and when I list the networks I see
C:\Development\YADAMU\docker\dockerfiles\swarm\YADAMU-DB4>docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
b87206cfac4c   Y_OVERLAY         bridge    local
393beef01a7f   bridge            bridge    local
e19e5f965e8d   docker_gwbridge   bridge    local
1b4cbfa566f0   host              host      local
y58mwnbratkj   ingress           overlay   swarm
32a41d9b3d7c   none              null      local
Whereas when I list the networks on YADAMU-DB5 I see
C:\Users\Mark D Drake>docker network ls
NETWORK ID     NAME        DRIVER    SCOPE
luh9dw47k5a1   Y_OVERLAY   overlay   swarm
y58mwnbratkj   ingress     overlay   swarm
8fd8ef298f47   nat         nat       local
dce21ec8e1ae   none        null      local
So it appears that on the Linux box docker-compose has not resolved Y_OVERLAY as the overlay network defined by the swarm manager.
Any ideas what I'm missing here?
Note: the intent here is not to build a resilient swarm; it's to build a QA environment that can be used for testing interaction between Windows and Linux hosts on limited hardware.
I found a workaround: if I down the containers on the Linux node, promote it to a manager, and then run docker-compose up on the Linux node, the containers join the overlay network.
It doesn't explain why this did not work with the Linux node as a worker, but it means I can move ahead.
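A possible alternative, sketched here but not verified on this setup: declare the network as external in each compose file, so that docker-compose attaches the containers to the pre-created swarm overlay instead of creating a new local bridge with the same name (this syntax needs compose file format 3.5+; older formats use external: {name: ...}):
networks:
  YADAMU-NET:
    external: true
    name: Y_OVERLAY
One caveat: an overlay network is only synced to a worker node once a container on that node actually attaches to it, so the external-network lookup may still fail on a worker, which would be consistent with the promote-to-manager workaround above.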

One Linux machine connects to another's IP: how to use Docker to make them into images and deploy them somewhere else

I have two Linux PCs. MongoDB is on the first PC, whose IP is 192.168.1.33, and a Java application on the other Linux machine connects to the MongoDB at 192.168.1.33.
What I want to do is prepare everything and turn both Linux systems into Docker images, so that in the production environment I can simply restore the images I prepared and everything works, with no complex deployment steps.
But the problem is that the IP of MongoDB will change, and the IP 192.168.1.33 is written in my Java application's configuration file; it will not change automatically. Is there an automated way?
Basics
1. We create a Dockerfile with minimal installation steps.
2. We build a Docker image from the Dockerfile of step 1.
3. We create a container from the step-2 image and expose the important ports as required. (A minimal sketch of steps 2 and 3 follows.)
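For example, using the image name from the article referenced below (the port is illustrative):
docker build -t my_new_mongodb .                               # step 2: build the image from the Dockerfile
docker run -d -p 27017:27017 --name mongodb my_new_mongodb     # step 3: run it, publishing the port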
For your problem:
1. creating-a-docker-image-with-mongodb: this article will help you dockerize MongoDB.
But the problem is that the IP of MongoDB will change, and the IP 192.168.1.33 is written in my Java application's configuration file; it will not change automatically. Is there an automated way?
If you expose the MongoDB port to the Docker host, you can always use
docker-host-IP:<exposed-port>
Reference from the article: sudo docker run -p 27017:27017 -i -t my_new_mongodb
Example: 192.168.1.33 is your Docker host, where the MongoDB container is running with port 27017 exposed. You can point your Java app at 192.168.1.33:27017.
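If you would rather not hard-code even the Docker host's IP in the configuration file, one option (a sketch; MONGODB_URI and my_java_app are hypothetical names) is to inject the address via an environment variable at run time and have the Java code read it with System.getenv("MONGODB_URI"):
docker run -d -e MONGODB_URI=mongodb://192.168.1.33:27017 my_java_app   # pass the DB address in, instead of baking it into the image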
What I want to do is prepare everything and make both Linux systems into Docker images
You cannot convert your VMs directly into Docker images. Instead, follow the steps written in Basics and dockerize both the DB and the application layer.
2. dockerize-your-java-application: refer to this link and dockerize your application based on your requirements.
Steps 1 & 2 will help you build Docker images which you can deploy to multiple servers.

Restarting named container assigns different IP

I'm trying to deploy my application using Docker and came across an issue: restarting a named container assigns a different IP to the container. Maybe explaining what I am doing will better explain the issue:
Postgres runs inside a separate container named "postgres"
$ PG_ID=$(docker run --name postgres postgres/image)
My webapp container links to postgres container
$ APP_ID=$(docker run --link postgres:postgres webapp/image)
Linking the postgres container to the webapp container inserts into the webapp container a hosts-file entry with the IP of the postgres container. This allows me to point at the postgres DB within my webapp using postgres:5432 (I am using Django, btw). This all works well, except if for some reason postgres crashes.
Before I manually stop the postgres process to simulate a crash, I verify the IP of the postgres container:
$ docker inspect --format "{{.NetworkSettings.IPAddress}}" $PG_ID
172.17.0.73
Now, to simulate a crash, I stop the postgres container:
$ docker stop $PG_ID
If I now restart postgres using
$ docker start $PG_ID
the IP of the container changes:
$ docker inspect --format "{{.NetworkSettings.IPAddress}}" $PG_ID
172.17.0.74
Therefore the IP which points to the postgres container inside the webapp container is no longer correct. I thought that by naming a container, Docker assigns it a stable identity so that you can reliably link between containers (both network and volumes). If the IP changes, this seems to defeat the purpose.
If I have to restart my webapp process each time postgres restarts, this does not seem any better than just running both processes in a single container. Then I could use supervisor or something similar to keep both of them running and use localhost to link between processes.
I am still new to Docker so am I doing something wrong or is this a bug in docker?
2nd UPDATE: maybe you already discovered this, but as a workaround, I plan to publish the database service on the host interface (e.g. with -p 5432:5432) and connect the webapps to the host IP (the IP of the docker0 interface: on my Ubuntu and CentOS machines, the IP is 172.17.42.1). If you restart the postgres container, the container's IP will change, but it will still be accessible at 172.17.42.1:5432. The downside is that you are exposing that port to all the containers and lose the fine-grained mapping that --link gives you.
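In command form, that workaround is roughly (image name as used earlier in this question):
docker run -d --name postgres -p 5432:5432 postgres/image   # publish postgres on the host interface
The webapp then connects to the docker0 address (172.17.42.1 in the setups above), which stays the same across postgres container restarts.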
--- OLD UPDATES:
CORRECTION: Docker will map 'postgres' to the container's IP in the /etc/hosts file on the webapp container. So, in the webapp container, you can ping 'postgres' and it will be mapped to the IP.
1st UPDATE: I've seen that Docker generates and mounts /etc/hosts, /etc/resolv.conf, etc. so that they always have the correct information, but this does not apply when the linked container is restarted. So I had assumed (wrongly) that Docker would update the hosts file.
-- ORIGINAL (wrong) response:
Add --hostname=postgres-db (you can use anything; I'm using something different from 'postgres' to avoid confusion with the container name):
$ docker run --name postgres --hostname postgres-db postgres/image
Docker will map 'postgres-db' to the container's IP (check the contents of /etc/hosts on the webapp container).
This will allow you to run 'ping postgres-db' from the webapp container. If the IP changes, Docker will update /etc/hosts for you.
In the Django app, use 'postgres-db' instead of the IP (or whatever you used for the --hostname of the PostgreSQL container).
Bye!
Horacio
According to https://docs.docker.com/engine/reference/commandline/run/, it should be possible to assign a static IP for your container -- at the time of container creation -- using the --ip option:
Example (static IPs are only supported on user-defined networks, so a custom network must be created first; my-net is an illustrative name):
docker network create --subnet 172.30.0.0/16 my-net
docker run -itd --net my-net --ip 172.30.100.104 --name postgres postgres/image
...where 172.30.100.104 is a free IP address on the custom bridge/overlay network.
This should then retain the same IP address even if the postgres container crashes/restarts.
It looks like this was released in Docker Engine v1.10, so if you have a lower version you will have to upgrade first.
As of Docker 1.0 they implemented a stronger sense of linked containers. Now you can use the container instance name as if it were the host name.
Here is a link
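For later Docker versions (1.10+), a sketch of the same name-based idea using a user-defined network (the network name app-net is illustrative):
docker network create app-net
docker run -d --name postgres --net app-net postgres/image
docker run -d --name webapp --net app-net webapp/image
Inside webapp, the hostname postgres now resolves via Docker's embedded DNS rather than a static /etc/hosts entry, so a postgres restart does not leave a stale IP behind.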
I found a link that better describes your problem. And while that question was answered, I wonder whether this ambassador pattern might solve the problem... this assumes that the ambassador is more reliable than the services being linked.

Docker orchestration

I know this is a bit long question but any help would be appreciated.
The short version is simply that I want to have a set of containers communicating with each other on multiple hosts and to be accessible with SSH.
I know there are tools for this but I wasn't able to do it.
The long version is:
There is a piece of software that has multiple components, and these components can be installed on any number of machines. There is a client side and a server side to this software.
Both the client-server communication and the server-side components use UDP ports.
The server uses CentOS, the client uses Microsoft Windows.
I want to create a testing environment that consists of 4 containers, with these components spread across the containers and a client-side machine.
The docker host machine is Ubuntu, the containers are CentOS.
If I install all the components in one container, it works; if they are spread across more than one, it does not. According to the logs it is working, but it is not.
I read that you need to link the containers or use an orchestrator like Maestro to do this, but I wasn't able to get it working so far.
What I want is to be able to start a set of containers which communicate with each other, on one or multiple hosts. I want to be able to access these containers with SSH, so the SSH service should start automatically.
Also, it would be great to use DDNS for the containers, because the names will be used again and again while the IP addresses can change; but this is just the cherry on top.
Some specifications:
The host is a fresh install of Ubuntu 12.04.4 LTS x86_64
Docker is the latest version (lxc-docker 0.10.0). I used the native driver.
The containers are plain CentOS images pulled from the Docker index. I installed some basic stuff on the containers: openssh-server, mc, java-jre.
I changed the docker network to a network that can be reached from the internal network.
iptables rules were cleared because I didn't need them, but I also tried with them in place, with no luck.
The /etc/default/docker file changes:
DOCKER_OPTS="--iptables=false"
or with the exposed API:
DOCKER_OPTS="-H tcp://0.0.0.0:4243 --iptables=false"
The ports the software uses are between 6000 and 9000, but I tried opening all the ports.
An example of run command:
docker run -h <hostname> -i -t --privileged --expose 1-65535/udp <image> /bin/bash
I also tried with exposed API:
docker -H :4243 run -h <hostname> -i -t --privileged --expose 1-65535/udp <image> /bin/bash
I'm not giving up but I would appreciate some help.
You might want to take a look at the in-development Docker Swarm project. It will allow you to treat your set of test machines as a cluster to which you can deploy containers.
You could simply use fig for orchestration and link the containers together, instead of doing all that DDNS and port-forwarding stuff. The fig.yml syntax is pretty straightforward.
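For illustration, a minimal fig.yml along those lines (service and image names are hypothetical; only one of the UDP ports is shown):
server:
  image: my-centos-server   # server components plus sshd
  ports:
    - "2222:22"             # SSH into the container from outside
    - "6000:6000/udp"       # one of the UDP ports the software uses
client:
  image: my-centos-client
  links:
    - server                # 'server' becomes resolvable inside this container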
You can use Weave for the networking part. These tutorials may help:
https://github.com/weaveworks/weave
http://xmodulo.com/networking-between-docker-containers.html
