Creating a container with a variable port - couchdb

I want to create an image for couchdb so I can run multiple couchdb instances. For reasons that would take too long to explain, I want the couchdb instance in the container to listen on a non-default port, one that is not known at image definition time because it will be a per-container parameter.
From the host I would run the container with:
sudo docker run -d -p 10000:30000 --name couchdb -e COUCHDB_PORT=30000 my/couchdb
This would make the port in the container (30000), where couchdb is listening, reachable through port 10000 on the host. The port should be a per-container parameter.
This way from the container I could do:
curl -X GET localhost:30000
And from the host I could do:
curl -X GET localhost:10000
Both requests would be hitting the same couchdb server.
Setting up the container has been easy (it is just a matter of processing the environment variable to automatically edit the couchdb config file), but now I have reached a blocking problem. At the end of the Dockerfile I have:
# the default couchdb port, which in my case is not
# known at image creation time
EXPOSE 5984
Apparently I need to expose the port where my service is running inside the container, but I do not know that when creating the image. This is a run-time parameter, which will differ from container to container.
How can I expose a port when starting up the container?
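For context, the entrypoint processing mentioned above is roughly the following sketch; the config path, the config key, and the CouchDB binary path are placeholders, not the exact script:
#!/bin/sh
# docker-entrypoint.sh: write the requested port into the CouchDB config, then start the server
PORT="${COUCHDB_PORT:-5984}"
sed -i "s/^port *=.*/port = ${PORT}/" /opt/couchdb/etc/local.ini   # config path is an assumption
exec /opt/couchdb/bin/couchdb                                      # binary path is an assumption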

Related

Exposing node Server running on docker doesn't work

I am running an Angular app on a node server, and in server.js I have specified app.listen(8084, localhost). When I run npm start in the docker container and use -p 8084:8084 in docker run, I am not able to get anything, even though running curl localhost:8084 inside my container gives me the right result.
So I changed it to app.listen(8084) and the -p 8084:8084 mapping started working. I am not sure why?
When you open a socket, you need to bind it to some interface on your system. There are predefined values:
0.0.0.0 - all interfaces; your service will be reachable from any interface
localhost, 127.0.0.1 - bind locally. That means the service is NOT available from outside -- this is your case.
You also can specify particular interface IP address to bind to it.
When you start your container, docker by default attaches it to the default bridge network, so your container is put into a separate network; to access it, you need to allow incoming remote connections in the container.
You bind your service to localhost inside the container, so no communication is possible from outside the container. localhost inside the container is not the same as localhost on your host.
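As a quick check (the container name mynode and the image name are placeholders): requests made inside the container succeed either way, while requests through the published port only succeed when the app binds to 0.0.0.0:
docker run -d --name mynode -p 8084:8084 my-node-image
docker exec mynode curl -s http://localhost:8084   # works whether the app binds to 127.0.0.1 or 0.0.0.0
curl -s http://localhost:8084                      # on the host: only answers if the app binds to 0.0.0.0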

What does localhost mean inside a Docker container?

Say, if I use this command inside a docker container.
/opt/lampp/bin/mysql -h localhost -u root -pThePassword
What would the localhost here refer to? The host machine's IP or the docker container's own IP?
From inside a container, localhost always refers to the current container. It never refers to another container, and it never refers to anything else running on your physical system that's not in the same container. It's not usually useful to make outbound connections to localhost or configure localhost as your database host.
From a shell on your host system, localhost could refer to daemons running on your system outside Docker, or to ports you've published with docker run -p options.
From a different system, localhost refers to the system it's called from.
In terms of IP addresses, localhost is always 127.0.0.1; that IP address is special, always means localhost, and behaves the same way as described above.
If you want to make a connection to a container...
...from another container, the best way is to make sure they're on the same Docker network (you started them from the same Docker Compose YAML file; you did a docker network create and then did docker run --net ... on the same network) and use Docker's internal DNS service to refer to them by the container's --name or its name in the Docker Compose YAML file and the port number inside the container. Even if the target has a published port with a docker run -p option or Docker Compose ports: setting, use the second (container-internal) port number.
...from outside Docker space, make sure you started the container with a docker run -p or Docker Compose ports: option, and connect to the host's IP address or DNS name using the first port number from that option.
...from a terminal window or browser on the same physical host, not in a container: in this case, and in this case only, localhost will work consistently.
Except:
If you started a container with --net host, localhost refers to the physical host, and you're in the "terminal window on the same physical host" scenario.
If you've gone out of your way to have multiple servers in the same container, you can use localhost to communicate between them.
If you're running in Kubernetes, and you have multiple containers in the same pod, you can use localhost to communicate between them. Between pods, you should set up a service in front of each pod/deployment, and use DNS names of the form service-name.namespace-name.svc.cluster.local.
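As a concrete shell sketch of the container-to-container case above (the network, image, and container names are placeholders):
docker network create mynet
docker run -d --net mynet --name web -p 8080:80 nginx
# from any other container on mynet, use the container name and the container-internal port (80),
# not the published port (8080)
docker run --rm --net mynet curlimages/curl -s http://web:80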
Definitely, it will be your container, if you are running the command in the container.
/opt/lampp/bin/mysql -h localhost -u root -pThePassword
If you run this command inside the container, it will try to connect to the MySQL server running inside the container.

Follow docker port mapping inside container

I have a cloud PC with a static external IP, e.g. 162.243.100.100.
Inside, I installed docker with nginx and mapped port 80 like this:
docker run -it -p 80:80 nginx
I'm able to access the nginx demo page with curl 162.243.100.100 from the host machine.
I'm able to access the nginx demo page with curl localhost from inside said container.
But I want to be able to access the nginx demo page with curl 162.243.100.100 from inside said nginx container.
This doesn't seem to follow the port mapping, and just gives me a timeout error.
I think I need to do something with the network settings, but I don't know what.
The short answer is "don't do that."
The default iptables rules and routing tables from Docker aren't set up to route traffic from a container out to the host and back into the container through the docker-proxy. Considering how much of an anti-pattern this is, I don't expect changing this behavior to be a priority. It's much easier to work with the tool: use docker networks and the container name when talking from container to container.
Create a network and start your container in that network:
docker network create --subnet 172.16.0.0/16 dockernet
docker run -it -p 80:80 --net=dockernet nginx
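To complete the picture, giving the nginx container a name lets any other container on dockernet reach it through Docker's embedded DNS; the name web and the curlimages/curl helper image here are just examples:
docker run -d --name web -p 80:80 --net=dockernet nginx
docker run --rm --net=dockernet curlimages/curl -s http://web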

How to access several ports of a Docker container inside the same container?

I am trying to put an application that listens to several ports inside a Docker image.
At the moment, I have one docker image with an Nginx server serving the front-end and a Python app: Nginx runs on port 27019 and the app runs on 5984.
The index.html file points to localhost:5984, but it seems like that only works outside the container (on the localhost of my computer).
The only way I can make it work at the moment is by using the -p option twice in the docker run:
docker run -p 27019:27019 -p 5984:5984 app-test
Doing so, I generate two localhost ports on my computer. If I don't put the -p 5984:5984 it doesn't work.
I plan on using more ports for the application, so I'd like to avoid adding -p xxx:xxx for each new port.
How can I make an application inside the container (in this case the index.html served at 27019) listen to another port inside the same container, without having to publish both of them? Can it be generalized to more than two ports? The final objective would be to have a complete application running on a single port on a server/computer, while listening to several ports inside the Docker container(s).
If you want to expose two virtual hosts on the same outgoing port, then you need a proxy, for example https://github.com/jwilder/nginx-proxy.
It's not a good thing to put a lot of applications into one container; normally you should split them up with one container per app, which is the way Docker is meant to be used.
But if you absolutely want to run many apps in one container, you can use a proxy or write a dockerfile that opens the ports itself.
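A rough sketch of the nginx-proxy route, following its README (the VIRTUAL_HOST value and the backend image name are placeholders):
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
# each backend only sets VIRTUAL_HOST; the proxy routes requests on the single published port by Host header
docker run -d -e VIRTUAL_HOST=app.example.local my-app-image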

Send request from one docker container to another

I'm trying to move some existing servers to be housed within docker containers. I have two: an app server and an api server, both developed with node.js. I have them both working within an ubuntu VM and can hit both apps from outside the VM, which is great.
Each server has its own domain. The app server uses the app domain and the api server uses the api domain, clever I know. Locally I added both domains to my hosts file to point to the IP assigned to the ubuntu VM.
The only issue I'm having is that there is a request sent from the app server that needs to be routed to the api server. I tried editing the hosts file of both the app server container (via the Dockerfile) and the ubuntu VM, however the request fails.
Is there a simple way to get that request to not go out and try to resolve the api domain but get it to point to the api container?
A typical solution to this would be to use Docker's --link option to link the containers. That is, if you do:
docker run -d --name api myapi
docker run -d --name app --link api:api myapp
Then within the app container, the hostname api will map to the api container. You will also have a set of environment variables available that describe the exposed ports on the linked container. E.g., if your "api" container exposed port 80, the variables would look like:
API_PORT_80_TCP=tcp://172.17.0.10:80
API_PORT_80_TCP_PORT=80
API_PORT_80_TCP_PROTO=tcp
API_PORT=tcp://172.17.0.10:80
API_NAME=/app/api
API_PORT_80_TCP_ADDR=172.17.0.10
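Inside the app container, the link also adds a hosts entry for api alongside those variables, so either of these reaches the linked container (assuming the api serves HTTP on port 80, as in the variables above):
curl http://api/
curl http://$API_PORT_80_TCP_ADDR:$API_PORT_80_TCP_PORT/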
There are some disadvantages to the link option:
This only works for containers hosted on the same physical host
If you restart the "api" container, you have to restart the "app" container, too.
Both of these particular problems can probably be resolved by the orchestration tool of your choice if you are operating in a multi-host environment.
The linking feature (--link) is a legacy feature.
You should always prefer using Docker network drivers over linking.
Example: running a Redis container with Redis bound to localhost, then running the redis-cli command and connecting to the Redis server over the localhost interface.
$ docker run -d --name redis example/redis --bind 127.0.0.1
$ # use the redis container's network stack to access localhost
$ docker run --rm -it --network container:redis example/redis-cli -h 127.0.0.1
See the docs for details.
https://docs.docker.com/compose/link-env-deprecated/
https://docs.docker.com/engine/reference/run/#network-settings
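A minimal sketch of the user-defined network replacement for the earlier --link example (the image names are the same placeholders as above); Docker's embedded DNS resolves the container names:
docker network create appnet
docker run -d --net appnet --name api myapi
docker run -d --net appnet --name app myapp
# inside the app container, the api container is reachable as http://api:<container port>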
