Unable to communicate between docker and external service - node.js

I have a Docker container running on localhost:3000.
Also, I have a Node app running on localhost:8081.
Now if I make a POST or GET request from localhost:3000 to localhost:8081, it's not working at all.
But if I run the service as a binary (not in Docker) on localhost:3000, the same API requests work.
How do I communicate between the two when using Docker?

Each container has its own network namespace, so inside a container, localhost means the container itself.
To access the host, you can add the option --add-host=host.docker.internal:host-gateway to the docker run command. Then you can access the host using the hostname host.docker.internal. For example, if you have a REST service on the host, your URL would look something like http://host.docker.internal:8081/my/service/endpoint.
The name host.docker.internal is the common name to use for the host, but you can use any name you like.
You need Docker version 20.10 or later for the host-gateway value to work.
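A minimal sketch of the full command, assuming your image is named my-app (an illustrative name, not from the question) and the Node app listens on 8081 on the host:

docker run --add-host=host.docker.internal:host-gateway -p 3000:3000 my-app

Requests from inside the container to http://host.docker.internal:8081 then reach the Node app on the host.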

Related

Two docker containers (nginx and a web app) not working together (linux)

I built both containers using a Dockerfile (for each). I have the NGINX container pointing (proxy_pass http://localhost:8080) to the port that the web app is exposed on (via -p 8080:80). I am able to get it to work when I just install NGINX on the Linux machine, but when I use a dockerized NGINX, I just get the default NGINX index.html. Do I have to build both containers using a docker-compose.yml file (as opposed to a Dockerfile) when I want the containers working together? Sorry if I didn't put any code, but at this point I just want to know if I'm taking the correct approach (using a Dockerfile or Docker Compose).
The Nginx proxy needs access to the host (!) network for this to work, e.g.:
docker container run ... --net=host ... nginx
Without it, localhost inside the proxy container refers to the container itself, which likely has nothing on :8080 and certainly not your web app.
Alternatively, if the proxy's container (!) can resolve or access the host, then processes in the container can refer to host-accessible ports using the host's DNS name or IP.
Docker Compose (conventionally) solves this by putting the containers onto a new virtual network. The difference then would be that, rather than mapping everything onto host ports, each container (called a service) gets a unique name and a container called proxy could refer to a container called web on port 8080 as http://web:8080.
You may achieve similar results with Docker alone by creating a network and then running the containers on it, e.g.:
docker network create ${NETWORK}
docker container run ... --net=${NETWORK} --name=proxy ...
...
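Filled in, a sketch might look like this (network and image names are illustrative):

docker network create appnet
docker container run -d --net=appnet --name=web my-web-app
docker container run -d --net=appnet --name=proxy -p 80:80 nginx

With both containers on appnet, the proxy config can use proxy_pass http://web:80; Docker's embedded DNS resolves the container name web, and the proxy talks to the web app's container port directly rather than the host-mapped port.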

Calling or communicating with a docker container on one machine from an application that is not in a docker container and is on another machine

I have my docker container running on one machine, say machine A. I have another machine, B, which runs a Flask server. I would like to call/communicate with the docker container on machine A from my Flask server on machine B. I am not running my Flask server inside any docker container. I am actually very new to docker, so I am not sure whether we are able to achieve this or not.
I am not sure what you want to do with your docker container from the Flask server, but I am assuming it is an API or some service running in the docker container that you want to use from the Flask server. You can do so by using the IP of machine A, on which the docker container is running. You will also need to bind your docker container's port to the host machine's (machine A) port, so that whenever you reach the host machine on that specific port, you are actually calling the container's port.
If you want to execute a command in the running container, there are two ways to do so: first, you can SSH to the container; second, you can SSH to the host machine and then use docker exec. But since you are trying to communicate from a Flask server, I think this is not the case.
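As a sketch of that port binding, assuming the containerized API listens on port 5000 and the image is named my-api-image (both illustrative):

docker run -d -p 5000:5000 my-api-image

Machine B can then reach the API at http://<machine-a-ip>:5000.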
You can just directly visit the HTTP service in the container from the other machine.
E.g.
Say the container on machine A was started like this:
docker run -idt -p 9000:80 nginx
Then, in machine B's Flask application, you can just use:
import requests
requests.get("http://your_machine_a_ip:9000")
to get what you need.
Just remember: for the container, you need to publish the HTTP port to the host so the other machine can reach it.

Can't get docker to accept requests over the internet

So, I'm trying to get Jenkins working inside of docker as an exercise to get experience using docker. I have a small linux server running Ubuntu 14.04 in my house (a computer I wasn't using for anything else), and have no issues getting the container to start up and connecting to Jenkins over my local network.
My issue comes in when I try to connect to it from outside of my local network. I have port 8080 forwarded to the server with the container, and if I run a port checker it says the port is open. However, when I actually try to go to my-ip:8080, I either get nothing if I started the container with just -p 8080:8080, or "Error: Invalid request or server failed. HTTP_Proxy" if I run it with -p 0.0.0.0:8080:8080.
I wanted to make sure it wasn't Jenkins, so I tried getting just a simple hello world Flask application to work, and had the exact same issue. Any recommendations? Do I need to add anything extra inside Ubuntu to get it to allow outside connections to reach my containers?
EDIT: I'm also just using the official Jenkins image from Docker Hub.
If you are running this:
docker run -p 8080:8080 jenkins
Then to connect to Jenkins you will have to connect to (in essence you are doing port forwarding):
http://127.0.0.1:8080 or http://localhost:8080
If you are just running this:
docker run jenkins
You can connect to Jenkins using the container's IP:
http://<containers-ip>:8080
The Jenkins image's Dockerfile already exposes port 8080.
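If you need the container's IP for the second case, one way to look it up (standard docker inspect templating; use whatever container name you gave docker run):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-name>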
The Docker Site has a great amount of information on container networks.
https://docs.docker.com/articles/networking
"By default Docker containers can make connections to the outside world, but the outside world cannot connect to containers."
You will need to provide special options when invoking docker run in order for containers to accept incoming connections.
Use -P (equivalently --publish-all=true) or explicit -p mappings for containers to accept incoming connections.
The below should allow you to access it from another network:
docker run -P -p 8080:8080 jenkins
If you can connect to Jenkins over the local network from a machine different from the one Docker is running on, but not from outside your local network, then the problem is not Docker. In this case the problem is that whatever machine receives the outside connection (normally your router or modem) does not know to which machine the outside request should be forwarded.
You have to make sure you are forwarding the proper port on your external IP to the proper port on the machine which is running Docker. This can normally be done on your internet modem/router.

External access to Node.JS app, within Docker container

I have a Node app running within a Docker container, hosted on Elastic Beanstalk (single instance). The container has port 3000 exposed to access the app within Docker, and I can curl 172.17.0.32:3000/test from the host, which returns the expected response.
The problem I have is accessing this port externally using the elastic beanstalk url. i.e
http://XXXXXX-env.elasticbeanstalk.com:3000/test
This times out. Can anyone recommend how to gain access to this port externally?
Thanks.
Check this for reference
http://victorlin.me/posts/2014/11/26/running-docker-with-aws-elastic-beanstalk
See what your docker ps command returns.
The IP you have shared looks like the private IP address of Docker's internal bridge network. You have to map the container's port to the host by supplying -p 3000:3000 to the run command, and finally enable the app in your Elastic Beanstalk console.
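A minimal sketch of that mapping, assuming the image is named my-node-app (illustrative):

docker run -d -p 3000:3000 my-node-app

curl http://localhost:3000/test on the host should then succeed against the host's own address rather than the container's bridge IP.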

Send request from one docker container to another

I'm trying to move some existing servers to be housed within docker containers. I have two: an app server and an api server, both developed with node.js. I have them both working within an Ubuntu VM and can hit both apps from outside the VM, which is great.
Each server has its own domain. The app server uses the app domain and the api server uses the api domain, clever I know. Locally I added both domains to my hosts file to point to the IP assigned to the Ubuntu VM.
The only issue I'm having is that there is a request sent from the app server that needs to be routed to the api server. I tried editing the hosts file of both the app server container (via the Dockerfile) and the Ubuntu VM, however the request fails.
Is there a simple way to get that request to not go out and try to resolve the api domain but get it to point to the api container?
A typical solution to this would be to use Docker's --link option to link the containers. That is, if you do:
docker run -d --name api myapi
docker run -d --name app --link api:api myapp
Then within the app container, the hostname api will map to the api container. You will also have a set of environment variables available that describe the exposed ports on the linked container. E.g., if your "api" container exposed port 80, the variables would look like:
API_PORT_80_TCP=tcp://172.17.0.10:80
API_PORT_80_TCP_PORT=80
API_PORT_80_TCP_PROTO=tcp
API_PORT=tcp://172.17.0.10:80
API_NAME=/app/api
API_PORT_80_TCP_ADDR=172.17.0.10
There are some disadvantages to the link option:
This only works for containers hosted on the same physical host
If you restart the "api" container, you have to restart the "app" container, too.
Both of these particular problems can probably be resolved by the orchestration tool of your choice if you are operating in a multi-host environment.
The linking feature (--link) is a legacy feature.
You should always prefer using Docker network drivers over linking.
Example: running a Redis container with Redis binding to localhost, then running the redis-cli command and connecting to the Redis server over the localhost interface.
$ docker run -d --name redis example/redis --bind 127.0.0.1
$ # use the redis container's network stack to access localhost
$ docker run --rm -it --network container:redis example/redis-cli -h 127.0.0.1
See the docs for details.
https://docs.docker.com/compose/link-env-deprecated/
https://docs.docker.com/engine/reference/run/#network-settings
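For the app/api case above, a sketch of the modern replacement for --link is a user-defined network, on which Docker's embedded DNS resolves container names (image and network names are illustrative):

docker network create mynet
docker run -d --name api --net mynet myapi
docker run -d --name app --net mynet myapp

The app container can then reach the api container at http://api:80 (assuming the api service listens on port 80), with no restart-ordering or environment-variable coupling between the containers.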
