How to access several ports of a Docker container inside the same container? - python-3.x

I am trying to put an application that listens on several ports inside a Docker image.
At the moment I have one Docker image with an Nginx server serving the front-end and a Python app: Nginx runs on port 27019 and the app runs on 5984.
The index.html file talks to localhost:5984, but it seems it can only reach it from outside the container (on my computer's localhost).
The only way I can make it work at the moment is by using the -p option twice in the docker run:
docker run -p 27019:27019 -p 5984:5984 app-test.
Doing so, I publish two ports on my computer's localhost. If I don't add -p 5984:5984, it doesn't work.
I plan on using more ports for the application, so I'd like to avoid adding -p xxx:xxx for each new port.
How can I make an application inside the container (in this case the index.html served on 27019) talk to another port inside the same container, without having to publish both of them? Can this be generalized to more than two ports? The final objective is to have a complete application running on a single port on a server/computer, while using several ports inside the Docker container(s).

If you want to expose two virtual hosts on the same outgoing port, then you need a proxy, for example https://github.com/jwilder/nginx-proxy .

It's not a good idea to put a lot of applications into one container; normally you should split them up, one container per app, because that is how Docker is meant to be used.
But if you absolutely want to run many apps in one container, you can use a proxy, or write a Dockerfile that opens the ports itself.
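For the original setup above, one hedged option along those lines is to let the Nginx that already runs inside the container proxy requests to the Python app, so that only a single port has to be published. A minimal sketch, assuming Nginx listens on 27019, the app listens on 5984, and index.html is changed to call the relative path /api/ instead of localhost:5984 (the /api/ prefix and the html root are assumptions, not taken from the question):

server {
    listen 27019;
    # serve the front-end as before
    location / {
        root /usr/share/nginx/html;
    }
    # forward browser calls to the Python app over the container's loopback
    location /api/ {
        proxy_pass http://127.0.0.1:5984/;
    }
}

With something like this in place, docker run -p 27019:27019 app-test is enough: the browser only ever talks to 27019, and Nginx reaches 5984 inside the container.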

Related

Two docker container (nginx and a web app) not working together (linux)

I built both containers using a Dockerfile (one for each). I have the NGINX container pointing (proxy_pass http://localhost:8080) to the port the web app is exposed on (via -p 8080:80). I am able to get it to work when I just install NGINX on the Linux machine, but when I use a dockerized NGINX, I just get the default NGINX index.html. Do I have to build both containers using a Docker-Compose.yml file (as opposed to a Dockerfile) when I want the containers working together? Sorry if I didn't put any code, but at this point I just want to know whether I'm taking the correct approach (using a Dockerfile or Docker-Compose).
The Nginx proxy needs access to the host (!) network for this to work, e.g.:
docker container run ... --net=host ... nginx
Without it, localhost refers to the proxy container itself, which likely has nothing listening on :8080 and certainly not your web app.
Alternatively, if the proxy's container (!) can resolve and reach the host, then processes in the container can refer to host-accessible ports using the host's DNS name or IP.
Docker Compose (conventionally) solves this by putting the containers onto a new virtual network. The difference then would be that, rather than mapping everything onto host ports, each container (called a service) gets a unique name and a container called proxy could refer to a container called web on port 8080 as http://web:8080.
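A minimal docker-compose.yml sketch of that idea (the image directories, the published port 80, and the assumption that the web app listens on 8080 inside its container are all illustrative, not taken from the question):

version: "3"
services:
  web:
    build: ./web      # the web app, assumed to listen on 8080 inside the container
  proxy:
    build: ./nginx    # the Nginx image, configured with proxy_pass http://web:8080;
    ports:
      - "80:80"       # only the proxy is published on the host

Only the proxy needs a published port; the two services reach each other by name on the Compose network.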
You may achieve similar results with Docker only by creating a network and then running containers on it, e.g:
docker network create ${NETWORK}
docker container run ... --net=${NETWORK} --name=proxy ...
...
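Filling in the elided lines as a sketch (the names ${NETWORK}, web, proxy and the image names are placeholders):

docker network create ${NETWORK}
docker container run -d --net=${NETWORK} --name=web my-web-app
docker container run -d --net=${NETWORK} --name=proxy -p 80:80 my-nginx

On a user-defined network like this, Docker's embedded DNS resolves the container name web, so the proxy can point proxy_pass at http://web:8080 (assuming the app listens on 8080 inside its container, as in the compose sketch above) instead of at localhost.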

Exposing node Server running on docker doesn't work

I am running an Angular app on a Node server, and in server.js I have specified app.listen(8084, localhost). When I run npm start in the Docker container and use -p 8084:8084 in docker run, I am not able to get anything, even though curl localhost:8084 inside my container gives the right result.
So I changed it to app.listen(8084) and the -p 8084:8084 mapping started working. I am not sure why?
When you open a socket, you need to bind it to some interface on your system. There are predefined values:
0.0.0.0 - all interfaces; your service will be available from any interface
localhost, 127.0.0.1 - bind locally. That means the service is NOT available from outside -- this is your case.
You can also specify a particular interface's IP address to bind to.
When you start your container, Docker by default attaches it to the default bridge network, so your container is put onto a separate network, and to access it the service inside the container must accept connections coming from outside the container.
You bound your service to localhost inside the container, so no communication is possible from outside the container. localhost for your Node server (inside the container) is not the same as localhost on your machine.
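A minimal sketch of the fix in server.js (assuming an Express-style app, as quoted in the question):

// bind to all interfaces so that Docker's published port mapping can reach the server
app.listen(8084, '0.0.0.0');

Leaving the host argument out entirely, as the asker ended up doing, has the same effect, since Node then listens on all interfaces by default.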

Can't get docker to accept request over the internet

So, I'm trying to get Jenkins working inside of Docker as an exercise to get experience using Docker. I have a small Linux server running Ubuntu 14.04 in my house (a computer I wasn't using for anything else), and have no issues getting the container to start up and connecting to Jenkins over my local network.
My issue comes in when I try to connect to it from outside of my local network. I have port 8080 forwarded to the server with the container, and if I run a port checker it says the port is open. However, when I actually try to go to my-ip:8080, I either get nothing if I started the container just with -p 8080:8080, or "Error: Invalid request or server failed. HTTP_Proxy" if I run it with -p 0.0.0.0:8080:8080.
I wanted to make sure it wasn't Jenkins, so I tried getting just a simple hello-world Flask application to work, and had the exact same issue. Any recommendations? Do I need to add anything extra inside Ubuntu to get it to allow outside connections to reach my containers?
EDIT: I'm also just using the official Jenkins image from docker hub.
If you are running this:
docker run -p 8080:8080 jenkins
Then to connect to Jenkins you will have to connect to (in essence you are doing port forwarding):
http://127.0.0.1:8080 or http://localhost:8080
If you are just running this:
docker run jenkins
You can connect to Jenkins using the container's IP:
http://<containers-ip>:8080
The Dockerfile used to build the Jenkins image already exposes port 8080.
The Docker Site has a great amount of information on container networks.
https://docs.docker.com/articles/networking
"By default Docker containers can make connections to the outside world, but the outside world cannot connect to containers."
You will need to provide special options when invoking docker run in order for containers to accept incoming connections.
Use the -P (--publish-all) flag, or explicit -p mappings, for containers to accept incoming connections.
The below should allow you to access it from another network:
docker run -P -p 8080:8080 jenkins
If you can connect to Jenkins over the local network from a machine other than the one Docker is running on, but not from outside your local network, then the problem is not Docker. In this case the problem is that whatever machine receives the outside connection (normally your router, modem, ...) does not know to which machine the outside request should be forwarded.
You have to make sure you are forwarding the proper port on your external IP to the proper port on the machine running Docker. This can normally be done on your internet modem/router.
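A quick hedged way to see where the chain breaks, run from the Docker host itself (the container ID and LAN IP are placeholders):

# confirm Docker actually published the port on the host
docker port <container-id> 8080
# confirm Jenkins answers on the host's LAN address, not just on localhost
curl -I http://<host-lan-ip>:8080

If both of these work, the remaining problem is the port forwarding on the router/modem described above.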

Creating a container with a variable port

I want to create an image for couchdb, to run multiple couchdb instances. For reasons that are a bit long to explain, I want the couchdb instance in the container to listen on a non-default port, which is also not known at image definition time, since it will be a container parameter.
From the host I would run the container with:
sudo docker run -d -p 10000:30000 --name couchdb -e COUCHDB_PORT=30000 my/couchdb
Which would make the port in the container (30000), where couchdb is listening, reachable through port 10000 on the host. This port should be a per-container parameter.
This way from the container I could do:
curl -X GET localhost:30000
And from the host I could do:
curl -X GET localhost:10000
Both requests would be hitting the same couchdb server.
Setting up the container has been easy (it is just a matter of processing the environment variable to automatically edit the couchdb config file), but now I have reached a blocking problem. At the end of the Dockerfile I have:
# the default couchdb port, which in my case is not
# known at image creation time
EXPOSE 5984
Apparently I need to expose the port where my service is running inside the container, but I do not know that when creating the image. This is a run-time parameter, which will differ from container to container.
How can I expose a port when starting up the container?
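For what it's worth, EXPOSE is informational metadata only; publishing with -p at run time works whether or not the port was declared in the Dockerfile, so the run command from the question should be usable as-is even without a matching EXPOSE line (a sketch reusing the question's own values):

# no EXPOSE needed; -p publishes whatever run-time port the container uses
sudo docker run -d -p 10000:30000 --name couchdb -e COUCHDB_PORT=30000 my/couchdb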

Web service under Docker connection issue

I'm having some troubles running Apache under Docker, and I wanted to ask for some directions. My current setup is the following : I have Docker 0.8 installed on an Ubuntu 12.04 server.
I want to run an Apache server under Docker and bind it to a specific IP on the host, my intention being to run multiple Apache servers under Docker on the same hardware node, each with its own interface.
Now, I've been able to start the Apache server inside Docker and run it as a daemon (-D FOREGROUND, or under supervisord), and I've even been able to bind it to 0.0.0.0:$PORT and access it from the outside. But when I created multiple interfaces on the hardware node, let's say 10.10.10.1 and 10.10.10.2, and tried to bind with -p 10.10.10.1:80:80, I was not able to access 10.10.10.1:80 from the outside.
A little info about the network setup: my eth0 interface has trunking, out of which I create multiple VLANs on which I want to put Docker instances (probably with a bridge on eth0.$VLAN_NO when I want to put more on the same VLAN).
So basically, to reiterate, I have started a Docker container bound with -p 10.10.10.1:80:80, with Apache inside the container on port 80, and I can't access it (although binding on 0.0.0.0:80:80 works).
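One hedged first check on the host is to confirm that Docker actually created the per-IP binding and that something is listening on that address (the container name is a placeholder):

docker port <container-name> 80
# expect 10.10.10.1:80 in the output
sudo netstat -tlnp | grep ':80'

If the binding is there but the address is still unreachable from outside, the problem is more likely in the VLAN/bridge routing than in Docker's port mapping.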
