Docker containers in real life - linux

I have been following the tutorials and experimenting with Docker for a couple of days, but I can't find any "real-world" usage example.
How can I communicate with my container from the outside?
All the examples I can find end up with one or more containers that can share ports with each other, but nobody outside the host gets access to their exposed ports.
Isn't the whole point of having containers like this that at least one of them needs to be accessible from the outside?
I have found a tool called pipework (https://github.com/jpetazzo/pipework) which will probably help me with this. But is this the tool that everyone testing Docker for production is using?
Is a "hack" necessary to get the outside world to talk to my container?

You can use the -p argument to expose a port of your container to the host machine.
For example:
sudo docker run -p 80:8080 ubuntu bash
will bind port 8080 of your container to port 80 of the host machine.
You can then access your container from the outside using the host's URL:
http://your.domain -> host:80 -> container:8080
Is that what you wanted to do? Or maybe I missed something.
(The parameter --expose only exposes a port to other containers, not to the host.)

This (https://blog.codecentric.de/en/2014/01/docker-networking-made-simple-3-ways-connect-lxc-containers/) blog post explains the problem and the solution.
Basically, it looks like pipework (https://github.com/jpetazzo/pipework) is the way to expose container ports to the outside as of now... Hope this gets integrated soon.
Update: In this case, iptables was to blame: there was a rule that blocked forwarded traffic. Adding -A FORWARD -i em1 -o docker0 -j ACCEPT solved it.

How can I get the IP of the docker host on a Linux system from a docker container?

This question appears to have been asked many times, but the answers appear to be outdated or simply don't work.
I'm on a Linux system without an RTC (a Raspberry Pi). My host runs an NTP daemon (ntpd), which checks the time online as soon as the host boots up, assuming it has internet, and sets the system clock.
The code inside my container needs to know if the host's system clock is accurate (has been updated since last boot).
On the host itself, this is very easy to do: use something like ntpdate -q 127.0.0.1. ntpdate connects to 127.0.0.1:123 over UDP and asks the ntpd daemon whether the clock is accurate (i.e. has been updated since last boot). This appears to be more difficult to do from within a container.
If I start up a container, and use docker container inspect NAME to see the container's IP, it shows me this:
"Gateway": "172.19.0.1",
"IPAddress": "172.19.0.6",
If I run ntpdate -q 172.19.0.1 within the container, this works. Unfortunately, 172.19.0.1 isn't a permanent IP for the host. If that subnet is already taken when the container starts up, a different subnet will be used, so hardcoding this IP is a bad idea. What I need is an environment variable that always reflects the proper IP for the host.
Windows and macOS versions of Docker appear to set the host.docker.internal hostname within containers, but Linux doesn't. Some people recommend setting this in the /etc/hosts file of the host, but then you're just hardcoding the IP, which, again, can change.
I run my docker container with a docker-compose.yml file, and apparently, on new versions of docker, you can do this:
extra_hosts:
- "host.docker.internal:host-gateway"
I tried this, and it works. Sort of. Inside my container, host.docker.internal resolves to 172.17.0.1, which is the IP of the docker0 interface on the host. While I can ping host.docker.internal from within the container, neither ntpdate -q host.docker.internal nor ntpdate -q 172.17.0.1 works.
Is there a way to make host.docker.internal resolve to the proper gateway IP of the host from within the container? In my example, 172.19.0.1.
Note: Yes, I can use code within the container to check what the container's gateway is with netstat or similar, but then I need to complicate my code, making it figure out the IP of the NTP server (the docker host). I can probably also pass the docker socket into the container and try to get the docker host's IP through that, but that seems super hacky and an unnecessary security issue.
The best solution I've found is to use the ip command from the iproute2 package, look for the default route, and use its gateway address.
ip route | awk '/default/ {print $3}'
If you want it in an environment variable, you can set it in an entrypoint script with
export HOST_IP_ADDRESS=$(ip route | awk '/default/ {print $3}')
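As a sanity check, here is that same awk filter run against a captured sample of ip route output (the addresses are made up, mirroring the 172.19.0.x example above):

```shell
# Sample `ip route` output as it might appear inside a container
# (hypothetical addresses, matching the example in the question)
route_output='default via 172.19.0.1 dev eth0
172.19.0.0/16 dev eth0 proto kernel scope link src 172.19.0.6'

# The third field of the "default" line is the gateway, i.e. the docker host
HOST_IP_ADDRESS=$(printf '%s\n' "$route_output" | awk '/default/ {print $3}')
echo "$HOST_IP_ADDRESS"   # prints 172.19.0.1
```

Since the default route of a bridged container always points at the bridge gateway, this keeps working even when the subnet changes between runs.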

Receive UDP Multicast in Docker Container

I am using docker compose and HypriotOS on a Raspberry Pi running a Node container. I would like to receive UDP multicast messages sent to 239.255.255.250:1982 on the local network. My code works on other machines outside of Docker, so I think it's a Docker issue.
I already exposed port 1982/udp in the Dockerfile, and in the docker-compose file I added "1982:1982" and "239.255.255.250:1982:1982/udp" to ports. I still can't receive anything. Am I missing something?
My concrete plan is to receive advertisement messages from a Yeelight. These messages are documented here.
Any help would be nice. Thanks.
While you can do UDP with -p 1982:1982/udp, I don't believe Docker's port forwarding currently supports multicast. You may have better luck if you disable the userland proxy on the daemon (dockerd --userland-proxy=false ...), but that's just a guess.
The fast/easy solution, while removing some of the isolation, is to use the host network with docker run --net=host ....
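For reference, a minimal docker-compose sketch of that host-network approach might look like the following (the service and image names are placeholders, not from the question):

```yaml
version: "2"
services:
  yeelight-listener:
    image: my-node-app      # placeholder for your Node image
    network_mode: host      # share the host's network stack so multicast
                            # traffic to 239.255.255.250:1982 reaches the app
```

With network_mode: host the ports section is unnecessary, since the container listens directly on the host's interfaces.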

Follow docker port mapping inside container

I have a cloud PC with a static external IP, e.g. 162.243.100.100.
Inside it I installed Docker with nginx, and mapped port 80 like this:
docker run -it -p 80:80 nginx
I'm able to access the nginx demo page with curl 162.243.100.100 from the host machine.
I'm able to access the nginx demo page with curl localhost from inside said container.
But I want to be able to access the nginx demo page with curl 162.243.100.100 from inside that same nginx container.
This doesn't seem to follow the port mapping and just gives me a timeout error.
I think I need to change something in the network settings, but I don't know what.
The short answer is "don't do that."
The default iptables rules and routing tables from Docker aren't set up to route traffic from a container out to the host and back into the container through the docker-proxy. Considering how much of an anti-pattern this is, I don't expect changing this behavior to be a priority. It's much easier to work with the tool and use Docker networks and the container name when talking from container to container.
Create a network and start your container in that network:
docker network create --subnet 172.16.0.0/16 dockernet
docker run -it -p 80:80 --net=dockernet nginx
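To illustrate the container-name approach, here is a docker-compose sketch (the service names and the second image are examples) where one service reaches the other by name instead of via the host's external IP:

```yaml
version: "2"
services:
  web:
    image: nginx
    ports:
      - "80:80"            # still published for clients outside the host
  app:
    image: my-app          # placeholder for whatever needs to call nginx
    # Inside this compose network, nginx is reachable as http://web/
    # because Docker's embedded DNS resolves service/container names.
```

Containers in the same user-defined network resolve each other by name, so no hardcoded IPs are needed.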

How to access several ports of a Docker container inside the same container?

I am trying to put an application that listens to several ports inside a Docker image.
At the moment, I have one Docker image with an Nginx server for the front-end and a Python app: Nginx runs on port 27019 and the app runs on 5984.
The index.html file points to localhost:5984, but it seems this only works from outside the container (on the localhost of my computer).
The only way I can make it work at the moment is by using the -p option twice in the docker run:
docker run -p 27019:27019 -p 5984:5984 app-test.
Doing so, I publish two ports on my computer. If I don't add -p 5984:5984, it doesn't work.
I plan on using more ports for the application, so I'd like to avoid adding -p xxx:xxx for each new port.
How can I make an application inside the container (in this case the index.html served on 27019) talk to another port inside the same container, without having to publish both of them? Can this be generalized to more than two ports? The final objective is to have a complete application running on a single port of a server/computer, while listening on several ports inside the Docker container(s).
If you want to expose two virtual hosts on the same outgoing port, then you need a proxy, for example https://github.com/jwilder/nginx-proxy .
It's not a good idea to put a lot of applications into one container; normally you should split that up with one container per app, which is the way Docker is meant to be used.
But if you absolutely want to run many apps in one container, you can use a proxy or write a Dockerfile that opens the ports itself.
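As a sketch of the jwilder/nginx-proxy approach, a docker-compose file might look like this (the app images and VIRTUAL_HOST names are placeholders for your services):

```yaml
version: "2"
services:
  proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"                                  # the single published port
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro # lets the proxy watch containers
  frontend:
    image: app-frontend                          # placeholder: the Nginx front-end
    environment:
      - VIRTUAL_HOST=front.example.local
  api:
    image: app-api                               # placeholder: the Python app
    environment:
      - VIRTUAL_HOST=api.example.local
```

The proxy routes each request arriving on port 80 to the matching container based on the Host header, so only one port needs to be published no matter how many services sit behind it.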

Can't get docker to accept request over the internet

So, I'm trying to get Jenkins working inside of Docker as an exercise to get experience using Docker. I have a small Linux server running Ubuntu 14.04 in my house (a computer I wasn't using for anything else), and have no issues getting the container to start up and connecting to Jenkins over my local network.
My issue comes in when I try to connect to it from outside of my local network. I have port 8080 forwarded to the server with the container, and if I run a port checker it says the port is open. However, when I actually try to go to my-ip:8080, I either get nothing if I started the container with just -p 8080:8080, or "Error: Invalid request or server failed. HTTP_Proxy" if I run it with -p 0.0.0.0:8080:8080.
I wanted to make sure it wasn't Jenkins, so I tried getting just a simple hello-world Flask application to work, and had the exact same issue. Any recommendations? Do I need to add anything extra inside Ubuntu to allow outside connections to reach my containers?
EDIT: I'm also just using the official Jenkins image from docker hub.
If you are running this:
docker run -p 8080:8080 jenkins
Then to connect to Jenkins you will have to connect to (in essence you are doing port forwarding):
http://127.0.0.1:8080 or http://localhost:8080
If you are just running this:
docker run jenkins
You can connect to Jenkins using the container's IP:
http://<containers-ip>:8080
The Dockerfile used to build the Jenkins image already exposes port 8080.
The Docker Site has a great amount of information on container networks.
https://docs.docker.com/articles/networking
"By default Docker containers can make connections to the outside world, but the outside world cannot connect to containers."
You will need to provide special options when invoking docker run in order for containers to accept incoming connections.
Use the -P (or --publish-all=true) flag for containers to accept incoming connections.
The below should allow you to access it from another network:
docker run -P -p 8080:8080 jenkins
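If you prefer docker-compose, the same published mapping can be sketched as:

```yaml
version: "2"
services:
  jenkins:
    image: jenkins
    ports:
      - "8080:8080"   # host port 8080 -> container port 8080
```

Either way, the mapping only binds the port on the Docker host; reaching it from the internet additionally requires the router-side port forward described below.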
If you can connect to Jenkins over the local network from a machine different from the one Docker is running on, but not from outside your local network, then the problem is not Docker. In this case the problem is that whatever machine receives the outside connection (normally your router or modem) does not know to which machine the outside request should be forwarded.
You have to make sure you are forwarding the proper port on your external IP to the proper port on the machine which is running Docker. This can normally be done on your internet modem/router.
