Apologies for asking two unrelated questions.
What is the best way of accessing the host machine from a Docker container? (I am trying to reach a Kafka instance running on the host from my Docker container, so that I can publish some messages.)
When I run docker run ..... on an image I have modified that may have an issue or syntax error, it naturally does not start. Is there a log file anywhere that I could look at to debug the issue? (This question is somewhat related to the first one, since I did what was suggested in another post, but the container is still not starting.)
This is an ongoing discussion about what to use and what not; I don't really know what is best. Using docker run --net="host" is pretty easy but can be dangerous. See From inside of a Docker container, how do I connect to the localhost of the machine?.
Use docker logs containerid, or look up the raw data in /var/lib/docker/containers/containerid/ on Ubuntu.
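For example, a rough sketch of the usual debugging loop (the container ID is a placeholder):
docker ps -a                 # find the exited container and its exit code
docker logs <container-id>   # show whatever it printed before exiting
docker inspect --format '{{.State.ExitCode}} {{.State.Error}}' <container-id>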
You should have no problem connecting to the host using the local LAN interface's IP address. Suppose your host has the IP 192.168.0.1:
docker run --rm -ti ubuntu bash
ping 192.168.0.1
should give you a response.
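If that works, the same address should work for Kafka. A hedged sketch, assuming the broker on the host listens on the default port 9092 (and advertises an address the container can reach) and that the image ships the Kafka CLI tools; the image name is made up:
docker run --rm -ti my-kafka-tools bash
# then, inside the container:
kafka-console-producer.sh --broker-list 192.168.0.1:9092 --topic test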
You can use docker logs to see the standard output of your container.
I am simply trying to connect a ROS2 node from my Ubuntu 22.04 VM on my laptop to another ROS2 node on another machine running Ubuntu 18.04. Ideally, I would only have Docker on the second machine (the first machine runs a trivial node that will never change), but I have been trying using a separate container on each.
Here is what I am doing and what I am seeing when I inspect:
(ssh into machine 2 from VM 1.)
A: start up network from machine 2.
sudo docker network create -d overlay --attachable my-attachable-ovrlay
B: start up container 1.
sudo docker run -it --rm test1
C: successfully attach container 1 to the network.
sudo docker network connect dwgyau64pvpenxoj2edu4liqu bold_murdock
D: Confirm the container lists network.
sudo docker inspect -f '{{range $key, $value := .NetworkSettings.Networks}}{{$key}} {{end}}' bold_murdock
prints:
bridge my-attachable-ovrlay
E: Check the network to see container.
sudo docker network inspect my-attachable-ovrlay
prints (among other things):
"Containers": null,
I am new to Docker AND networking, so I could be missing something huge, but I have tried all of the standard suggestions I found online, including disabling my firewall, opening a ton of ports using ufw allow on both machines, making sure the nodes are active, and so on.
I tried joining the network from machine 2, and that works: the container is displayed when using network inspect. But when I do that, machine 1 simply refuses to connect to the network.
F: In this situation it gives an error.
sudo docker network connect dwgyau64pvpenxoj2edu4liqu objective_mendel
prints:
Error response from daemon: attaching to network failed, make sure your network options are correct and check manager logs: context deadline exceeded
Also, before trying any Docker networking, I tried plainly pinging from VM 1 to machine 2, and that works both ways. I also tried using netcat to open an old-timey chat window on port 1234 (a random port, as per this resource), and that works one way only: I can communicate both ways, but only when machine 1 sends the initial netcat request and machine 2 listens. When machine 2 sends the request and machine 1 listens, nothing happens.
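For reference, the netcat test looked roughly like this (the exact flags depend on the netcat variant installed):
nc -l 1234                   # listener on one machine
nc <other-machine-ip> 1234   # sender on the other machine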
I have been struggling to get this to work for 3 weeks now. I know it’s something stupid, I just know it. Any advice would be incredibly appreciated. Please explain like I know nothing about networking, because I just about do.
EDIT: I converted images (still hyperlinked) into code blocks.
If both PCs are on the same LAN, you could skip the whole network configuration entirely and use ROS2 auto-discovery.
E.g.
PC1:
docker run -it --rm --net=host -v /dev/shm:/dev/shm osrf/ros:foxy-desktop
export ROS_DOMAIN_ID=1
ros2 run demo_nodes_py talker
PC2:
docker run -it --rm --net=host -v /dev/shm:/dev/shm osrf/ros:foxy-desktop
export ROS_DOMAIN_ID=1
ros2 run demo_nodes_py listener
If the PCs are not on the same network, I usually use ZeroTier to create a virtual LAN between PC1, PC2, and PC(N), then repeat the above example.
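For completeness, the ZeroTier part is roughly this (assuming ZeroTier is installed on both PCs and you have created a network in ZeroTier Central; the network ID is a placeholder):
sudo zerotier-cli join <network-id>
sudo zerotier-cli listnetworks   # shows the virtual IP each PC gets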
The issue was that the router's clock was set to 1969. When we updated the time by connecting to the internet for 15 seconds, then disconnected, it started working.
So I have been experiencing an issue where there are two Docker instances (daemons) that are running at the same time on my Ubuntu machine.
The issue is the following:
I have been using Docker for some time without problems; I have tons of images and volumes there. Then one day, after a restart, when I try to start my project using docker-compose up, I get an error that the port is in use:
Error starting userland proxy: listen tcp 0.0.0.0:8011: bind: address already in use.
The thing is, there is no project apart from mine that is using this port. I checked docker ps and there are no containers up, not even the Portainer instance I use to manage images and containers. It seems there is another Docker daemon, or another version of Docker, running on my machine. It might be that I botched the installation at the beginning and it has now come back to haunt me.
What I tried:
Uninstalled snap version of docker.
Restarted using sudo systemctl restart docker
Reinstalled Docker completely. That worked for a while, but I lost all my containers and images, and after a while it again started showing me a different Docker with different images and no volumes, while the ports for my apps were blocked because the Docker I had been using previously was still up.
Is there a way to list running docker-daemons/engines/instances and choose which one to use in the system?
Are you sure that you have two dockerds? It is very possible, but very uncommon.
With lsof -n -P | grep TCP | grep 8011 or similar, you can list what is already listening on 8011. The "userland proxy" is a Docker thing: if you publish port 8011 of a container, Docker starts a process that listens on 8011 and forwards connections to the container. The likely cause is that something is already running on 8011; it does not matter which dockerd (or other, non-Docker process) is already listening there.
There is no specific command to list the dockerd(s) of your system. You can list the running dockerds with, for example, ps uxa | grep dockerd. After that, lsof -n -p <pid> -P will show where they are listening.
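Put together, the checks look roughly like this (the PID is whatever the ps output shows):
sudo lsof -n -P | grep TCP | grep 8011
ps uxa | grep dockerd
sudo lsof -n -P -p <pid>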
The most likely cause of your problem is that another container is already using your 8011.
It looks like both Docker daemons were listening on /var/run/docker.sock and requests were distributed between them more or less at random, causing the inconsistency.
I'm a freshman focusing on database management systems. I'm using Docker on Windows 10 (newest version), and I followed https://hub.docker.com/r/cloudera/quickstart/ to run a Cloudera quickstart container, just typing this command in PowerShell (after pulling the image):
docker run --hostname=quickstart.cloudera --privileged=true -t -i -p 8888 4239cd2958c6 /usr/bin/docker-quickstart
But the container exits quickly after I run it, I get no log (using docker logs <name>), and no port is assigned by Docker. It also seems that most services (related to the Cloudera quickstart) are not started.
(Screenshots: the list of all containers shows the Cloudera container first, but with no port assigned; normally a port mapping should appear there.)
Since the final purpose is to access the Hue interface through that port, I believe I'm in trouble, and I need someone's help. Thanks a lot.
P.S. Changing it to -p 8888:8888 or another port number makes no difference.
I'm trying to deploy my application using Docker and came across an issue: restarting a named container assigns a different IP to the container. Maybe describing what I am doing will better explain the issue:
Postgres runs inside a separate container named "postgres"
$ PG_ID=$(docker run --name postgres postgres/image)
My webapp container links to postgres container
$ APP_ID=$(docker run --link postgres:postgres webapp/image)
Linking the postgres container to the webapp container inserts into the webapp container a hosts file entry with the IP of the postgres container. This allows me to point to the postgres DB within my webapp using postgres:5432 (I am using Django, by the way). This all works well, except if for some reason postgres crashes.
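For illustration, the entry that --link adds to the webapp container's /etc/hosts looks roughly like this (the IP is whatever the postgres container received when it started):
172.17.0.73    postgres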
Before I manually stop the postgres process to simulate a crash, I verify the IP of the postgres container:
$ docker inspect --format "{{.NetworkSettings.IPAddress}}" $PG_ID
172.17.0.73
Now, to simulate the crash, I stop the postgres container:
$ docker stop $PG_ID
If I now restart postgres using
$ docker start $PG_ID
the IP of the container changes:
$ docker inspect --format "{{.NetworkSettings.IPAddress}}" $PG_ID
172.17.0.74
Therefore, the IP that points to the postgres container inside the webapp container is no longer correct. I thought that by naming a container, Docker assigns it a name with specific configuration so that you can reliably link between containers (both network and volumes). If the IP changes, this seems to defeat the purpose.
If I have to restart my webapp process each time postgres restarts, this does not seem any better than just using a single container to run both processes; then I could use supervisor or something similar to keep both of them running and use localhost to link the processes.
I am still new to Docker so am I doing something wrong or is this a bug in docker?
2nd UPDATE: maybe you already discovered this, but as a workaround, I plan to map the database service to the host interface (e.g. with -p 5432:5432) and connect the webapps to the host IP (the IP of the docker0 interface: on my Ubuntu and CentOS machines it is 172.17.42.1). If you restart the postgres container, the container's IP will change, but it will still be accessible via 172.17.42.1:5432. The downside is that you are exposing that port to all the containers, and you lose the fine-grained mapping that --link gives you.
--- OLD UPDATES:
CORRECTION: Docker will map 'postgres' to the container's IP in the /etc/hosts file of the webapp container. So, in the webapp container, you can ping 'postgres' and it will resolve to that IP.
1st UPDATE: I've seen that Docker generates and mounts /etc/hosts, /etc/resolv.conf, etc. so that they always have the correct information, but this does not apply when the linked container is restarted. So I had assumed (wrongly) that Docker would update the hosts file.
-- ORIGINAL (wrong) response:
Add --hostname=postgres-db (you can use anything; I'm using something different from 'postgres' to avoid confusion with the container name):
$ docker run --name postgres --hostname postgres-db postgres/image
Docker will map 'postgres-db' to the container's IP (check the contents of /etc/hosts on the webapp container).
This will allow you to run 'ping postgres-db' from the webapp container. If the IP changes, Docker will update /etc/hosts for you.
In the Django app, use 'postgres-db' instead of the IP (or whatever you used for --hostname on the container running PostgreSQL).
Bye!
Horacio
According to https://docs.docker.com/engine/reference/commandline/run/, it should be possible to assign a static IP for your container -- at the time of container creation -- using the --ip option:
Example:
docker run -itd --ip 172.30.100.104 --name postgres postgres/image
....where 172.30.100.104 is a free IP address on a custom bridge/overlay network.
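A minimal sketch of that, assuming a user-defined bridge network (the names and addresses are placeholders; --ip only takes effect on a user-defined network):
docker network create --subnet=172.30.100.0/24 mynet
docker run -itd --net mynet --ip 172.30.100.104 --name postgres postgres/image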
This should then retain the same IP address even if postgres container crashes/restarts.
It looks like this was released in Docker Engine v1.10 or greater, so if you have a lower version, you have to upgrade first.
As of Docker 1.0, linked containers got stronger support: you can now use the container instance name as if it were the hostname.
Here is a link
I found a link that better describes your problem. And while that question was answered, I wonder whether the ambassador pattern might solve the problem... this assumes that the ambassador is more reliable than the services it links.
I know this is a bit of a long question, but any help would be appreciated.
The short version is simply that I want to have a set of containers communicating with each other on multiple hosts and to be accessible with SSH.
I know there are tools for this but I wasn't able to do it.
The long version is:
There is a piece of software that has multiple components, and these components can be installed on any number of machines. The software has a client side and a server side.
Both the client-side and the server-side components communicate via UDP ports.
The server uses CentOS, the client uses Microsoft Windows.
I want to create a testing environment that consists of 4 containers and these components would be spread across these containers and a client side machine.
The docker host machine is Ubuntu, the containers are CentOS.
If I install all the components in one container, it works; if they are spread across more than one, it does not. According to the logs it is working, but it is not.
I read that you need to link the containers or use an orchestrator like Maestro to do this, but I haven't been able to get it working so far.
What I want is to be able to start a set of containers that communicate with each other, on one or multiple hosts. I want to be able to access these containers with SSH, so the SSH service should start automatically.
Also, it would be great to use DDNS for the containers, because the names would be used again and again while the IP addresses can change, but this is just the cherry on top.
Some specifications:
The host is a fresh install of Ubuntu 12.04.4 LTS x86_64
Docker is the latest version (lxc-docker 0.10.0). I used the native driver.
The containers are plain CentOS pulled from the Docker index. I installed some basic stuff on the containers: openssh-server, mc, java-jre.
I changed the docker network to a network that can be reached from the internal network.
The iptables rules were cleared because I didn't need them, but I also tried with them in place, with no luck.
The /etc/default/docker file changes:
DOCKER_OPTS="--iptables=false"
or with the exposed API:
DOCKER_OPTS="-H tcp://0.0.0.0:4243 --iptables=false"
The ports the software uses are between 6000 and 9000, but I tried opening all the ports.
An example of run command:
docker run -h <hostname> -i -t --privileged --expose 1-65535/udp <image> /bin/bash
I also tried with exposed API:
docker -H :4243 run -h <hostname> -i -t --privileged --expose 1-65535/udp <image> /bin/bash
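For reference, a sketch of publishing one of the UDP ports with -p (rather than only exposing it; --expose on its own does not publish anything on the host):
docker run -h <hostname> -i -t --privileged -p 6000:6000/udp <image> /bin/bash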
I'm not giving up but I would appreciate some help.
You might want to take a look at the in-development Docker Swarm project. It will allow you to treat your set of test machines as a cluster to which you can deploy containers.
You could simply use fig for orchestration and link the containers together instead of doing all that DDNS and port-forwarding stuff. The fig.yml syntax is pretty straightforward.
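For example, a minimal fig.yml sketch (the service and image names here are just placeholders, not taken from your setup):
web:
  build: .
  links:
    - db
  ports:
    - "8000:8000"
db:
  image: postgres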
You can use Weave for the networking part. You can use these tutorials:
https://github.com/weaveworks/weave
http://xmodulo.com/networking-between-docker-containers.html