I have a Docker instance running Apache on port 80 and Node.js + Express running on port 3000. I need to make an AJAX request from the Apache-served website to the Node server running on port 3000.
I don't know what the appropriate URL to use is. I tried localhost, but that resolved to the localhost of the client browsing the webpage (i.e. the end user) instead of the localhost of the Docker image.
Thanks in advance for your help!
First you should split your containers - it is good Docker practice to run one process per container.
Then you will need a tool to orchestrate these containers. You can start with docker-compose, which is IMO the simplest one.
It will launch all your containers and manage their network settings for you by default.
So, imagine you have the following docker-compose.yml file for launching your apps:
docker-compose.yml
version: '3'
services:
  apache:
    image: apache
  node:
    image: node # or whatever
With such a simple configuration you will have the host names apache and node available in your network. So from inside your node application, you will see the Apache container under the host name apache.
Just launch it with docker-compose up
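As a quick sanity check of the name resolution, a minimal sketch (assuming both services above are up and curl is available inside the images; port 3000 for the node app comes from the question):
docker-compose exec apache curl http://node:3000/
docker-compose exec node curl http://apache/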
make an AJAX request from the [...] website to the node server
The JavaScript, HTML, and CSS that Apache serves up is all read and interpreted by the browser, which may or may not be running on the same host as the servers. Once you're at the browser level, code has no idea that Docker is involved with any of this.
If you can get away with only sending links without hostnames, like <img src="/assets/foo.png">, that will always work without any configuration. Otherwise you need to use the DNS name or IP address of the host, exactly as you would if you were running the two services directly on the host without Docker.
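For the AJAX request from the question, that means the browser has to use the Docker host's DNS name or IP plus the port published by the node container. A minimal sketch (myhost.example.com, the /api path, and port 3000 are assumptions based on the question's setup):
var xhr = new XMLHttpRequest();
// the browser resolves this name, so it must point at the Docker host, not at a container name
xhr.open('GET', 'http://myhost.example.com:3000/api', true);
xhr.onload = function () { console.log(xhr.responseText); };
xhr.send();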
Related
I have a Docker container running on localhost:3000.
Also, I have a node app running on localhost:8081.
Now if I make a POST or GET request from localhost:3000 to localhost:8081, it doesn't work at all.
However, if I run the service as a binary (not in Docker) on localhost:3000, the same API requests work.
How do I communicate between them when using Docker?
Each container runs in its own network namespace, where localhost means the container itself.
To access the host, you can add the option --add-host=host.docker.internal:host-gateway to the docker run command. Then you can access the host using the host name host.docker.internal. I.e. if you have a REST service on the host, your URL would look something like http://host.docker.internal:8081/my/service/endpoint.
The name host.docker.internal is the common name to use for the host, but you can use any name you like.
You need Docker version 20.10 or later for host-gateway to work.
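A minimal sketch of the whole flow (my-image is a hypothetical image name; the endpoint is the example URL above; requires Docker 20.10+):
docker run --add-host=host.docker.internal:host-gateway my-image
# then, from a shell inside the container, the host's service on 8081 is reachable:
curl http://host.docker.internal:8081/my/service/endpoint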
I built both containers using a Dockerfile (one for each). I have the NGINX container pointing (proxy_pass http://localhost:8080) to the port on which the web app is exposed (via -p 8080:80). I am able to get it to work when I just install NGINX on the Linux machine, but when I use a dockerized NGINX, I just get the default NGINX index.html. Do I have to build both containers using a docker-compose.yml file (as opposed to Dockerfiles) when I want the containers working together? Sorry if I didn't include any code, but at this point I just want to know whether I'm taking the correct approach (using a Dockerfile or Docker Compose).
The Nginx proxy needs access to the host (!) network for this to work, e.g.:
docker container run ... --net=host ... nginx
Without it, localhost refers to the proxy container itself, which likely has nothing listening on :8080 and certainly not your web app.
Alternatively, if the proxy's container (!) can resolve and access the host, then processes in the container can refer to host-accessible ports using the host's DNS name or IP.
Docker Compose (conventionally) solves this by putting the containers onto a new virtual network. The difference then would be that, rather than mapping everything onto host ports, each container (called a service) gets a unique name and a container called proxy could refer to a container called web on port 8080 as http://web:8080.
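A minimal docker-compose.yml sketch of that layout (the image names and the web app's internal port 8080 are assumptions; only the proxy is published to the host):
version: '3'
services:
  proxy:
    image: nginx
    ports:
      - "80:80"          # the only port mapped onto the host
  web:
    image: my-web-app    # hypothetical image, listening on 8080 inside the network
Inside the proxy container's nginx config, proxy_pass http://web:8080; then resolves the web service by name.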
You may achieve similar results with Docker alone by creating a network and then running the containers on it, e.g.:
docker network create ${NETWORK}
docker container run ... --net=${NETWORK} --name=proxy ...
...
I have a situation where node-container can reach mongo-container if it uses mongo-container's name, which Docker translates to its internal IP (which I think is how it works). They are running on the same Docker host.
However, when developing locally (not on the Docker host server), we reach mongo by giving domain-name:port to the node-container. This configuration works fine locally (with or without Docker) and on the server, but on the server only without Docker (i.e. running npm start directly).
The Docker containers at the host are connected to a Docker bridge network.
We would like to use the domain-name:port configuration everywhere ideally, so that we don't have to think about the Docker side of things.
I would like to understand what happens in terms of networking. My rough understanding is that this happens:
WORKING SITUATION AT THE SERVER, SKIPPING DOCKER:
Host[npm start]-->Node[need to check <domain-name:port>]-->DNS server[this is the IP you need]-->Host[Yes my IP + that port leads to mongo-container].
BROKEN SITUATION AT THE SERVER, WITH NODE BEHIND DOCKER:
Host[docker-compose up etc... npm start via node container]-->Node[need to check <domain-name:port>]-->DNS server[Problem here?].
Thanks for any insights.
I am trying to put an application that listens on several ports inside a Docker image.
At the moment, I have one Docker image with an Nginx server serving the front-end and a Python app: Nginx runs on port 27019 and the app runs on 5984.
The index.html file points at localhost:5984, but that seems to resolve only outside the container (to the localhost of my computer).
The only way I can make it work at the moment is by using the -p option twice in the docker run:
docker run -p 27019:27019 -p 5984:5984 app-test.
Doing so, I publish two ports on my computer's localhost. If I don't add -p 5984:5984, it doesn't work.
I plan on using more ports for the application, so I'd like to avoid adding -p xxx:xxx for each new port.
How can I make an application inside the container (in this case the index.html served on 27019) talk to another port inside the same container, without having to publish both of them? Can this be generalized to more than two ports? The final objective is to have a complete application running on a single port on a server/computer, while listening on several ports inside the Docker container(s).
If you want to expose two virtual hosts on the same outgoing port, then you need a proxy, for example https://github.com/jwilder/nginx-proxy .
It's not a good idea to put a lot of applications into one container; normally you should split them up, one container per app - that's the way Docker is meant to be used.
But if you absolutely want to run many apps in one container, you can use a proxy (see the sketch below) or write a Dockerfile that opens the ports itself.
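For the proxy route, a minimal sketch of an nginx server block that keeps everything behind the single published port 27019 (the /api/ prefix and the html root are assumptions; the app port 5984 comes from the question):
server {
    listen 27019;

    location / {
        root /usr/share/nginx/html;          # the front-end / index.html
    }

    location /api/ {
        proxy_pass http://127.0.0.1:5984/;   # the Python app inside the same container
    }
}
With this, index.html calls /api/ paths relative to its own origin, and only -p 27019:27019 has to be published.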
I have 3 separate pieces to my dockerized application:
nodeapp: A node:latest Docker container running an Express.js app that returns a JSON object when accessed at /api. This server is also CORS-enabled according to this site.
nginxserver: An nginx:latest static server that simply hosts an index.html file with a button that makes the XMLHttpRequest to the Node server above.
My host machine
The node:latest has its port exposed to the host via 3000:80.
The nginx:latest has its port exposed to the host via 8080:80.
From the host I am able to access both nodeapp and nginxserver individually: I can make requests and see the JSON object returned from the node server using curl on the command line, and the button (index.html) is visible on the screen when I hit localhost:8080.
However, when I click the button, the call to XMLHttpRequest('GET', 'http://nodeapp/api', true) fails without apparently hitting the nodeapp server (no log entry appears). I'm assuming this is because the host does not understand http://nodeapp/api.
Is there a way to tell Docker to add a container's linking alias to my hosts file while the container is running?
I don't know if what I'm asking for is the proper solution to my problem. It looks as though I'm getting a CORS error back, but I don't think the request is ever hitting my server. Does this have to do with accessing the application from my host machine?
Here is a link to an example repo
Edit: I've noticed that, when using the stack, clicking the button returns a response from my nginx container. I'm confused as to why it routes through that server, since nodeapp is in my hosts file, so it should recognize the correlation there.
Problem:
nodeapp exists on an internal network that is visible to your nginxserver only. You can check this by entering nginxserver:
docker exec -it nginxserver bash
# cat /etc/hosts
Most importantly, your service setup is not correct: nginxserver should act as a reverse proxy in front of nodeapp:
host (client) -> nginxserver -> nodeapp
Quick and dirty solution:
If you really, really want your client (the host) to access the internal application nodeapp directly, then simply change the code below
XMLHttpRequest('GET', 'http://nodeapp/api', true)
To
XMLHttpRequest('GET', 'http://localhost:3000/api', true)
This works because in your docker-compose.yml the nodeapp service's port 80 is published on the host network as 3000, which can be accessed directly.
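The relevant part of such a docker-compose.yml would look roughly like this (a sketch, assuming the port mapping described above):
services:
  nodeapp:
    ports:
      - "3000:80"   # host port 3000 -> container port 80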
Better solution
You need to redesign your service stack to make nginxserver the frontend node; see this sample: http://schempy.com/2015/08/25/docker_nginx_nodejs/
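A minimal sketch of that reverse-proxy idea (the /api location mirrors the question's endpoint; nodeapp listening on port 80 and the html root are assumptions consistent with the setup above):
server {
    listen 80;

    location / {
        root /usr/share/nginx/html;     # serves index.html with the button
    }

    location /api {
        proxy_pass http://nodeapp:80;   # the service name resolves on the Compose network
    }
}
The page then requests /api relative to its own origin, so the browser never has to resolve the container name.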