I am using Docker Compose and HypriotOS on a Raspberry Pi running a Node container. I would like to receive UDP multicast messages sent to 239.255.255.250:1982 on the local network. My code works on other machines outside of Docker, so I think it's a Docker issue.
I already exposed port 1982/udp in the Dockerfile, and in the docker-compose file I added "1982:1982" and "239.255.255.250:1982:1982/udp" to ports. I still can't receive anything. Am I missing something?
My concrete plan is to receive advertisement messages from a Yeelight. These messages are documented here.
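For context, the listener itself is just the standard Node dgram multicast pattern, roughly like this (a simplified sketch, not my exact code):

import * as dgram from "dgram";

const socket = dgram.createSocket({ type: "udp4", reuseAddr: true });

socket.on("message", (msg, rinfo) => {
  console.log(`advertisement from ${rinfo.address}:${rinfo.port}: ${msg}`);
});

socket.bind(1982, () => {
  // joining the multicast group is what actually makes the packets arrive
  socket.addMembership("239.255.255.250");
});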
Any help would be nice. Thanks.
While you can publish UDP with -p 1982:1982/udp, I don't believe Docker's port forwarding currently supports multicast. You may have better luck if you disable the userland proxy on the daemon (dockerd --userland-proxy=false ...), but that's just a guess.
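If you want to try that without changing how the daemon is launched, the same setting can go into /etc/docker/daemon.json (restart the Docker service afterwards):

{
  "userland-proxy": false
}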
The fast/easy solution, while removing some of the isolation, is to use the host network with docker run --net=host ....
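Since you're using docker-compose, the equivalent there is network_mode: host (note that published ports are ignored in host mode, so the ports entries can be dropped). A rough sketch, with the service name and build details as placeholders:

version: "3"
services:
  yeelight-listener:
    build: .
    network_mode: host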
Related
I have an application (UI) that monitors multicast streams to see if the streams are working correctly.
On top of that, I have a docker container (that runs in "network_mode: host") that also listens to the multicast streams and caches them in a database.
The compose file looks like this:
version: "3"
services:
  my-multicast-container:
    image: my-multicast-container-image:latest
    depends_on:
      - my-database
    network_mode: host
  my-database:
    image: my-database-image:latest
    restart: always
    ports:
      - ... #forwarded ports for the other container
My issue is: when I listen to the multicast streams inside my docker container, the monitoring application that runs on the same host as Docker isn't able to listen to the same streams the container is listening to. From what I've read online, it's not possible to listen to multicast in a docker container without "network_mode: host" because of how containers work, and I can't seem to find a solution to this problem.
How can I receive the multicast packets in my docker container and in my desktop application?
FYI: I don't know if it's important, but I'm using CentOS as the host OS.
The problem was not really about multicast; it was about the user that owns the multicast UDP socket.
By default, the docker container is started as the root user, while the monitoring interface is started as the host's user. As mentioned in the SO_REUSEPORT documentation:
To prevent unwanted processes from hijacking a port that has already been bound by a server using SO_REUSEPORT, all of the servers that later bind to that port must have an effective user ID that matches the effective user ID used to perform the first bind on the socket.
So, to reuse the socket, the solution was to run the docker container as the host's user, like this:
docker run --rm -u $(id -u ${USER}):$(id -g ${USER}) --network=host \
my-multicast-container-image:latest
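If the container is started through compose rather than docker run, roughly the same thing can be expressed with the user key (a sketch; replace 1000:1000 with the host user's actual IDs):

services:
  my-multicast-container:
    image: my-multicast-container-image:latest
    network_mode: host
    # replace 1000:1000 with the output of `id -u` and `id -g` for the host user
    user: "1000:1000"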
Background: I'm trying to have Xdebug connect to my IDE from within a Docker container (my PHP app is running inside a container on my development machine). On my MacBook this works without issue. However, on Linux I discovered that, from within the container, the port I was using (9000) was not visible on the host gateway (using sudo nmap -sT -p- 172.20.0.1, where 172.20.0.1 is my host gateway in Docker).
I was able to fix this issue by opening port 9000 on my development machine (sudo ufw allow 9000/tcp). Once I did this, the container could see port 9000 on the host gateway.
My Question: Is this completely necessary? I don't love the idea of opening up a firewall port just so a docker container, running on my machine, can connect to it. Is there a more secure alternative to this?
From what you've told us, opening the port does sound necessary. If a firewall blocks a port, all traffic over that port is blocked and you won't be able to use the application on the container from the host machine.
What you can do to make this more secure is to open the port only on a specific interface, as described here:
ufw allow in on docker0 to any port 9000 proto tcp
Obviously replace docker0 with the docker interface on your machine. You can find this by looking at the output of ip address show or by following the steps here if the interface name is not obvious.
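One caveat: with a compose project (a gateway like 172.20.0.1 suggests a user-defined network rather than the default bridge), the interface is usually not docker0 but a bridge named after the network ID. Something along these lines can help find it (output details may vary):

# list docker networks and note the ID of the network your compose project created
docker network ls
# user-defined bridges typically show up on the host as br-<first 12 chars of the network id>
ip -brief address show | grep 'br-'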
I am trying to use a script made by someone else to spin up some docker containers on my Linux Mint 18.1 machine. When I first tried to execute the script (which I unfortunately cannot include) I received an error message that contained the following:
listen tcp 0.0.0.0:53: bind: address already in use
When I used netstat to find out what was using the port I discovered that it was dnsmasq. I killed the process (knowing it would break my internet, which it did) and I was able to create the containers. So it appears the only issue is the port conflict.
The guide to the script and other answers mention adding nameserver 127.0.0.1. I did that, but it didn't help. I have also read answers saying that I cannot change the port dnsmasq uses, and that I can't change the port of the docker image either. Is there some way I can run both of these?
Unless the docker container HAS to listen on port 53 on the host, you CAN change it by changing the -p option of the docker run command that launches the container.
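For example, if the script currently publishes DNS with -p 53:53/udp, remapping only the host side leaves dnsmasq untouched (the image name below is just a placeholder):

# host port 5353 forwards to port 53 inside the container, so dnsmasq keeps port 53 on the host
docker run -p 5353:53/udp -p 5353:53/tcp some-dns-image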
So, I'm trying to get Jenkins working inside of Docker as an exercise to gain experience with Docker. I have a small Linux server running Ubuntu 14.04 in my house (a computer I wasn't using for anything else), and I have no issues getting the container to start up and connecting to Jenkins over my local network.
My issue comes in when I try to connect to it from outside my local network. I have port 8080 forwarded to the server with the container, and if I run a port checker it says the port is open. However, when I actually try to go to my-ip:8080, I either get nothing if I started the container with just -p 8080:8080, or "Error: Invalid request or server failed. HTTP_Proxy" if I run it with -p 0.0.0.0:8080:8080.
I wanted to make sure it wasn't Jenkins, so I tried getting a simple hello-world Flask application to work, and had the exact same issue. Any recommendations? Do I need to add anything extra inside Ubuntu to allow outside connections to reach my containers?
EDIT: I'm just using the official Jenkins image from Docker Hub.
If you are running this:
docker run -p 8080:8080 jenkins
Then to connect to Jenkins you will have to connect to (in essence you are doing port forwarding):
http://127.0.0.1:8080 or http://localhost:8080
If you are just running this:
docker run jenkins
You can connect to Jenkins using the container's IP:
http://<containers-ip>:8080
The Dockerfile from which the Jenkins image is built already exposes port 8080.
The Docker site has a great deal of information on container networking.
https://docs.docker.com/articles/networking
"By default Docker containers can make connections to the outside world, but the outside world cannot connect to containers."
You will need to provide special options when invoking docker run in order for containers to accept incoming connections.
Use the -P (--publish-all) flag, or -p for specific ports, for containers to accept incoming connections.
The below should allow you to access it from another network:
docker run -P -p 8080:8080 jenkins
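To double-check that the mapping actually took effect on the host, something like this can help (the container name is a placeholder):

# show the ports published for the running container
docker port <container-name-or-id>
# confirm something is listening on 8080 on all host interfaces (0.0.0.0 or *)
sudo ss -tlnp | grep 8080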
If you can connect to Jenkins over the local network from a machine other than the one Docker is running on, but not from outside your local network, then the problem is not Docker. In this case the problem is that whatever device receives the outside connection (normally your router, modem or ...) does not know to which machine the outside request should be forwarded.
You have to make sure you are forwarding the proper port on your external IP to the proper port on the machine that is running Docker. This can normally be done on your internet modem/router.
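A quick way to narrow down where it breaks is to test the published port from inside the LAN first and then from outside (the addresses are placeholders for your own):

# from another machine on the LAN; this should already respond
curl -I http://<lan-ip-of-docker-host>:8080/
# from outside the network (e.g. a phone on mobile data), against the router's public IP
curl -I http://<your-public-ip>:8080/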
I have been following the tutorials and experimenting with Docker for a couple of days, but I can't find any "real-world" usage example.
How can I communicate with my container from the outside?
All the examples I can find end up with one or more containers; they can share ports with each other, but no one outside the host gets access to their exposed ports.
Isn't the whole point of having containers like this that at least one of them needs to be accessible from the outside?
I have found a tool called pipework (https://github.com/jpetazzo/pipework) which will probably help me with this. But is this the tool everyone testing out Docker for production is using?
Is a "hack" necessary to get the outside to talk to my container?
You can use the -p argument to publish a port of your container on the host machine.
For example:
sudo docker run -p 80:8080 ubuntu bash
will bind port 8080 of your container to port 80 of the host machine.
You can then access your container from the outside using the URL of the host:
http://your.domain -> host:80 -> container:8080
Is that what you wanted to do? Or maybe I missed something.
(The --expose parameter only exposes the port to other containers, not to the host.)
This (https://blog.codecentric.de/en/2014/01/docker-networking-made-simple-3-ways-connect-lxc-containers/) blog post explains the problem and the solution.
Basically, it looks like pipework (https://github.com/jpetazzo/pipework) is the way to expose container ports to the outside as of now... Hope this gets integrated soon.
Update: In this case, iptables was to blame; there was a rule that blocked forwarded traffic. Adding -A FORWARD -i em1 -o docker0 -j ACCEPT solved it.
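For completeness, that rule written out as a full command (em1 was that machine's external interface; substitute your own):

sudo iptables -A FORWARD -i em1 -o docker0 -j ACCEPT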