Follow docker port mapping inside container - web

I have a cloud PC with a static external IP, e.g. 162.243.100.100.
Inside it I installed Docker with nginx, and mapped port 80 like this:
docker run -it -p 80:80 nginx
I'm able to access the nginx demo page with curl 162.243.100.100 from the host machine.
I'm able to access the nginx demo page with curl localhost from inside said container.
But I want to be able to access the nginx demo page with curl 162.243.100.100 from inside said nginx container.
It seems this doesn't follow the port mapping, and just gives me a timeout error.
I think I need to do something with the network settings, but I don't know what.

The short answer is "don't do that."
The default iptables rules and routing tables from Docker aren't set up to route traffic from a container out to the host and back into the same container through the docker-proxy. Considering how much of an anti-pattern this is, I don't expect changing this behavior to be a priority. It's much easier to work with the tool: use Docker networks and the container name when talking from container to container.

Create a network and start your container in that network:
docker network create --subnet 172.16.0.0/16 dockernet
docker run -it -p 80:80 --net=dockernet nginx
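Once it's on that network, other containers can reach nginx by its container name instead of the host IP. A minimal sketch (the name web and the curlimages/curl image are my choices, not from the question):
docker run -d --name web -p 80:80 --net=dockernet nginx
# from a sibling container on the same network, use the name, not 162.243.100.100
docker run --rm --net=dockernet curlimages/curl -s http://web/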

Related

Two docker containers (nginx and a web app) not working together (linux)

I built both containers using a Dockerfile (one for each). I have the NGINX container pointing (proxy_pass http://localhost:8080) to the port that the web app is exposed on (via -p 8080:80). I can get it to work when I just install NGINX on the Linux machine, but when I use a dockerized NGINX, I just get the default NGINX index.html. Do I have to build both containers using a docker-compose.yml file (as opposed to a Dockerfile) when I want the containers working together? Sorry for not including any code, but at this point I just want to know whether I'm taking the correct approach (Dockerfile or Docker Compose).
The Nginx proxy needs access to the host (!) network for this to work, e.g.:
docker container run ... --net=host ... nginx
Without it, localhost refers to the proxy container itself, which likely has nothing listening on :8080 and certainly not your web app.
Alternatively, if the proxy's container (!) can resolve or access the host, then processes in the container can refer to host-accessible ports using the host's DNS name or IP.
Docker Compose (conventionally) solves this by putting the containers onto a new virtual network. The difference then would be that, rather than mapping everything onto host ports, each container (called a service) gets a unique name and a container called proxy could refer to a container called web on port 8080 as http://web:8080.
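As a rough docker-compose.yml sketch of that idea (image names are placeholders):
services:
  proxy:
    image: nginx
    ports:
      - "80:80"
  web:
    image: my-web-app   # placeholder; assumed to listen on 8080 inside the container
The NGINX config would then use proxy_pass http://web:8080; instead of localhost.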
You can achieve similar results with Docker alone by creating a network and then running containers on it, e.g.:
docker network create ${NETWORK}
docker container run ... --net=${NETWORK} --name=proxy ...
...
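Filled in with concrete (made-up) names, that might look like:
docker network create webnet
docker container run -d --net=webnet --name=web my-web-app
docker container run -d --net=webnet --name=proxy -p 80:80 nginx
after which the proxy reaches the app at http://web:8080 (the port the app listens on inside its container).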

How to do the routing for different node microservices using docker

I'm creating different node microservices using Docker. I'm planning to use the same IP address and a different port for every service as a routing solution, but for now each service has its own IP address. How do I make one address common to all the containers?
Is there any tool that will facilitate routing?
If you're routing from outside Docker, just publish the ports on your Docker server's IP:
docker run -d -p 3000:3000 my_image_node_service_1
docker run -d -p 3001:3000 my_image_node_service_2
Now ip_server_docker:3000 gives you access to service_1 (its port 3000), and ip_server_docker:3001 gives you access to service_2 (its port 3000).
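For example, from any machine that can reach the Docker host (ip_server_docker stands in for its address):
curl http://ip_server_docker:3000   # answered by service_1
curl http://ip_server_docker:3001   # answered by service_2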

What does localhost mean inside a Docker container?

Say I use this command inside a Docker container:
/opt/lampp/bin/mysql -h localhost -u root -pThePassword
What would the localhost here refer to? The host machine's IP or the docker container's own IP?
From inside a container, localhost always refers to the current container. It never refers to another container, and it never refers to anything else running on your physical system that's not in the same container. It's not usually useful to make outbound connections to localhost or configure localhost as your database host.
From a shell on your host system, localhost could refer to daemons running on your system outside Docker, or to ports you've published with docker run -p options.
From a different system, localhost refers to the system it's called from.
In terms of IP addresses, localhost is always 127.0.0.1, and that IP address is special and is always localhost and behaves the same way as above.
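A quick way to see this in action, assuming curl is available both on the host and in the image (it isn't in every image):
docker run -d --name web -p 8080:80 nginx
# on the host, localhost:8080 reaches the published port
curl http://localhost:8080/
# inside the container, localhost is the container itself, so port 80 answers
docker exec web curl -s http://localhost:80/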
If you want to make a connection to a container...
...from another container, the best way is to make sure they're on the same Docker network (you started them from the same Docker Compose YAML file, or you did a docker network create and then docker run --net ... on the same network) and use Docker's internal DNS service to refer to them by the container's --name or its name in the Docker Compose YAML file, plus the port number inside the container. Even if the target has a published port (a docker run -p option or a Docker Compose ports: setting), use the second, container-internal port number.
...from outside Docker space, make sure you started the container with a docker run -p or Docker Compose ports: option, and connect to the host's IP address or DNS name using the first port number from that option.
...from a terminal window or browser on the same physical host, not in a container: in this case, and in this case only, localhost will work consistently.
Except:
If you started a container with --net host, localhost refers to the physical host, and you're in the "terminal window on the same physical host" scenario.
If you've gone out of your way to have multiple servers in the same container, you can use localhost to communicate between them.
If you're running in Kubernetes, and you have multiple containers in the same pod, you can use localhost to communicate between them. Between pods, you should set up a service in front of each pod/deployment, and use DNS names of the form service-name.namespace-name.svc.cluster.local.
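To make the container-to-container case concrete with the MySQL setup from the question (a sketch; the network and container names are mine):
docker network create mynet
docker run -d --net mynet --name db -e MYSQL_ROOT_PASSWORD=ThePassword mysql
# from a second container on the same network: use the name "db" and the
# container-internal port (3306), never localhost
docker run --rm -it --net mynet mysql mysql -h db -u root -pThePassword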
Definitely, it will be your container if you are running the command in a container.
/opt/lampp/bin/mysql -h localhost -u root -pThePassword
If you run this command inside a container, it will try to connect to the MySQL server running inside that container.

Can't get Docker to accept requests over the internet

So, I'm trying to get Jenkins working inside of Docker as an exercise to get experience using Docker. I have a small Linux server running Ubuntu 14.04 in my house (a computer I wasn't using for anything else), and have no issues getting the container to start up and connecting to Jenkins over my local network.
My issue comes in when I try to connect to it from outside of my local network. I have port 8080 forwarded to the server with the container, and if I run a port checker it says the port is open. However, when I actually try to go to my-ip:8080, I either get nothing (if I started the container just with -p 8080:8080) or "Error: Invalid request or server failed. HTTP_Proxy" (if I run it with -p 0.0.0.0:8080:8080).
I wanted to make sure it wasn't Jenkins, so I tried getting a simple hello-world Flask application to work, and had the exact same issue. Any recommendations? Do I need to add anything extra inside Ubuntu to allow outside connections to reach my containers?
EDIT: I'm also just using the official Jenkins image from docker hub.
If you are running this:
docker run -p 8080:8080 jenkins
Then to connect to jenkins you will have to connect to (in essence you are doing port forwarding):
http://127.0.0.1:8080 or http://localhost:8080
If you are just running this:
docker run jenkins
You can connect to Jenkins using the container's IP:
http://<containers-ip>:8080
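One way to find that IP (the container name here is hypothetical):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my-jenkins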
The Jenkins image's Dockerfile already exposes port 8080.
The Docker Site has a great amount of information on container networks.
https://docs.docker.com/articles/networking
"By default Docker containers can make connections to the outside world, but the outside world cannot connect to containers."
You will need to provide special options when invoking docker run in order for containers to accept incoming connections.
Use -P (--publish-all) or explicit -p mappings for containers to accept incoming connections.
The below should allow you to access it from another network:
docker run -P -p 8080:8080 jenkins
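You can check what Docker actually published with docker port, and test from the host before testing externally (the container name is hypothetical):
docker port my-jenkins            # e.g. 8080/tcp -> 0.0.0.0:8080
curl -I http://localhost:8080     # should respond on the host first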
If you can connect to Jenkins over the local network from a machine other than the one Docker is running on, but not from outside your local network, then the problem is not Docker. In that case the problem is that whatever machine receives the outside connection (normally your router or modem) does not know to which machine the outside request should be forwarded.
You have to make sure you are forwarding the proper port on your external IP to the proper port on the machine running Docker. This can normally be done on your internet modem/router.

Send request from one docker container to another

I'm trying to move some existing servers to be housed within Docker containers. I have two, an app server and an API server, both developed with node.js. I have them both working within an Ubuntu VM and can hit both apps from outside the VM, which is great.
Each server has its own domain. The app server uses the app domain and the api server uses the api domain, clever I know. Locally I added both domains to my hosts file, pointing to the IP assigned to the Ubuntu VM.
The only issue I'm having is that a request sent from the app server needs to be routed to the api server. I tried editing the hosts file of both the app server container (via the Dockerfile) and the Ubuntu VM, but the request fails.
Is there a simple way to get that request not to go out and try to resolve the api domain, but instead point at the api container?
A typical solution to this would be to use Docker's --link option to link the containers. That is, if you do:
docker run -d --name api myapi
docker run -d --name app --link api:api myapp
Then within the app container, the hostname api will map to the api container. You will also have a set of environment variables available that describe the exposed ports on the linked container. E.g., if your "api" container exposed port 80, the variables would look like:
API_PORT_80_TCP=tcp://172.17.0.10:80
API_PORT_80_TCP_PORT=80
API_PORT_80_TCP_PROTO=tcp
API_PORT=tcp://172.17.0.10:80
API_NAME=/app/api
API_PORT_80_TCP_ADDR=172.17.0.10
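Inside the app container you could then reach the API either via the link alias (which resolves through an /etc/hosts entry that --link adds) or via those variables, e.g.:
curl http://api:80/
curl "http://$API_PORT_80_TCP_ADDR:$API_PORT_80_TCP_PORT/"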
There are some disadvantages to the link option:
This only works for containers hosted on the same physical host.
If you restart the "api" container, you have to restart the "app" container, too.
Both of these particular problems can probably be resolved by the orchestration tool of your choice if you are operating in a multi-host environment.
The linking feature (--link) is a legacy feature.
You should always prefer using Docker network drivers over linking.
Example: running a Redis container with Redis binding to localhost, then running the redis-cli command and connecting to the Redis server over the localhost interface.
$ docker run -d --name redis example/redis --bind 127.0.0.1
$ # use the redis container's network stack to access localhost
$ docker run --rm -it --network container:redis example/redis-cli -h 127.0.0.1
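For the app/api case from the question, the network-driver equivalent of the earlier --link example would be roughly (the network name is my own):
$ docker network create appnet
$ docker run -d --net appnet --name api myapi
$ docker run -d --net appnet --name app myapp   # can now resolve the api container as "api"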
See the docs for details.
https://docs.docker.com/compose/link-env-deprecated/
https://docs.docker.com/engine/reference/run/#network-settings
