Why does swapping between the container IP and its alias cause a difference in the AJAX request? - node.js

I have a small sample project located here that illustrates the problem I am seeing when working with an nginx + node + host docker stack.
I have 2 containers:
A node (express) application that simply returns a JSON object. It is CORS-enabled based on this website. It has its port published to the host via 3000:80.
An nginx server that is also CORS-enabled based on this website. It only serves static content (the index.html and main.js files) from the default location (/usr/share/nginx/html). Its port is published via 8080:80.
When running the containers individually from host I can access the node server and see the JSON object being returned. When I access the nginx server, I see my index.html and the javascript code from main.js runs.
Now I have the node app container linked to the nginx server container. From inside the main.js file of the nginx container, I attempt to access the server at http://nodeapp/api, and I am seeing a CORS error:
XMLHttpRequest cannot load http://nodeapp/api. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:8080' is therefore not allowed access.
The strange thing is, the response header indicates it is coming from nginx and not my express application as I would expect. The nginx container is also not logging anything.
Things that worked
If I change the URL for the XMLHttpRequest to the node container's IP (say 172.17.0.2) it works as expected and the response header indicates it is coming from the express server. In my /etc/hosts file there is an entry:
172.17.0.2 nodeapp abc123ContainerId quickserve_nodeapp_run_1
When I curl the node container from an interactive tty container it also works as expected.
If I load the node container and use http-server (a server on the host) it works as expected and the response header indicates it is coming from the express server.

Just in case it has an influence, an old thread (2013) mentioned a CORS option on the docker daemon.
Nowadays (Q4 2015), the docker daemon includes:
--api-cors-header="" Set CORS headers in the remote API
To allow cross-origin requests to the remote API, give values to --api-cors-header when running Docker in daemon mode. Setting * (asterisk) allows all; default or blank means CORS disabled.
$ docker -d -H="192.168.1.9:2375" --api-cors-header="http://foo.bar"
That might be a setting to use in your case.

Related

How to access host nodejs app from docker nuxt app?

I am trying to get my nuxt app working on the production servers. On the local machine, the generated docker image runs well and it can access the nodejs app that runs on localhost. The axios setting 'baseURL: http://127.0.0.1:6008/' seems to work fine; the docker image can access this. On the production servers, I have used docker to set up the nuxt app the same way I tested it on my local machine, yet the docker nuxt app cannot reach the nodejs app on the host server. I can see this must be some kind of network setting issue.
In a vuejs app, I usually set up a ProxyPass in the Apache web conf to match incoming backend queries and rewrite them to a localhost address:
ProxyPass /app/query http://localhost:6008/query
In the nuxt.config file, the axios setting looks like this:
axios: {
  baseURL: 'http://127.0.0.1:6008/',
  browserBaseURL: ''
},
Does docker need additional settings, or should I configure Apache for this communication between my docker container and a node app that is running on the host under Apache/pm2?
localhost or 127.0.0.1 from within the Docker container will not resolve to the server's localhost. Instead you need to specify the name of the nodejs service (if you are using docker-compose) or the nodejs docker container name (if you are just using docker run).
You can also try giving the IP of the server where Docker is running instead of 127.0.0.1.
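As a sketch only (the image names and the env-var plumbing are assumptions, not from the question), a docker-compose setup where the nuxt container reaches the node app by service name instead of 127.0.0.1 could look like:

version: '3'
services:
  nodeapp:
    image: my-node-api   # hypothetical image for the nodejs app
    ports:
      - "6008:6008"
  nuxt:
    image: my-nuxt-app   # hypothetical image for the nuxt app
    environment:
      # inside the compose network the service name resolves; 127.0.0.1 does not
      - API_BASE_URL=http://nodeapp:6008/
    depends_on:
      - nodeapp

The axios baseURL would then point at http://nodeapp:6008/ (e.g. read from API_BASE_URL) instead of the hard-coded http://127.0.0.1:6008/.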

Unable to communicate between docker and external service

I have a docker container running on localhost:3000.
Also, I have a node app running on localhost:8081.
Now if I want to make a POST or GET request from localhost:3000 to localhost:8081, it's not working at all.
But if I run the service as a binary (not as a docker container) on localhost:3000, the same API requests work.
How do I communicate if using docker?
Each container runs on its own bridge network where localhost means the container itself.
To access the host, you can add the option --add-host=host.docker.internal:host-gateway to the docker run command. Then you can access the host using the host name host.docker.internal. I.e. if you have a REST service on the host, your URL would look something like http://host.docker.internal:8081/my/service/endpoint.
The name host.docker.internal is the common name to use for the host, but you can use any name you like.
You need Docker version 20.10 or later for this to work.
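For example (the image name and endpoint are placeholders):

$ docker run --add-host=host.docker.internal:host-gateway my-image

Then, from inside the container:

$ curl http://host.docker.internal:8081/my/service/endpoint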

nginx.conf for NodeJS/React App returning 502 and 405

Trying to set up a staging environment on an Amazon Linux EC2 instance and migrate from Heroku.
My repository has two folders:
Web
API
Our frontend and backend are running on the same port in deployment
In dev, these run on separate ports and all requests from WEB are proxied to API
(for example, WEB runs on port 3000 and API runs on port 3001; a proxy is set up in the package.json file in WEB/, as sketched below)
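For reference, a create-react-app-style proxy entry in WEB/package.json might look like this (a sketch, assuming the dev ports mentioned above):

"proxy": "http://localhost:3001"

With this in place, dev requests from WEB on port 3000 to relative paths like /api are forwarded to API on port 3001.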
Currently the application deployment works like this:
Build Web/ for distribution
Copy build/ to API folder
Deploy to Heroku with web npm start
In prod, we only deploy API folder with the WEB build/
Current nginx.conf looks like this
Commented out all other attempts
Also using PM2 to run the thread like so
$ sudo pm2 start bin/www
The current thread is running like so (per the pm2 log output); the Node process is running on port 3000 on the EC2 instance.
Going to the instance's public IPv4 DNS brings me to the login page, which it is getting from the /build folder, but none of the login methods (or any API calls) are working.
502 response example
I have tried a lot of different configurations. I set up the proxy_pass to port 3000 since that's where the Node process is running.
The only response codes I get are 405 Not Allowed and 502 Bad Gateway.
Please let me know if there is any other information I can provide to find the solution.
It looks like you don't have an upstream block in your configuration and are trying to use proxy_pass to send to a named server and port instead of a defined upstream. There is an example on this page that shows how to define the upstream and then send traffic to it: https://nginx.org/en/docs/http/ngx_http_upstream_module.html
upstream backend {
    server backend1.example.com weight=5;
    server backend2.example.com:8080;
    server unix:/tmp/backend3;

    server backup1.example.com:8080 backup;
    server backup2.example.com:8080 backup;
}

server {
    location / {
        proxy_pass http://backend;
    }
}
Turns out there was an issue with express-session data being stored in Postgres.
This led me to retest the connection strings and I found out that I kept receiving the following error:
connect ECONNREFUSED 127.0.0.1:5432
I did have a .env file holding the env variables, but they were not being read by pm2.
So I added these lines to app.js:
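// load .env explicitly from the app's directory so pm2 picks up the variables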
const path = require("path");
require('dotenv').config({ path: path.join(__dirname, '.env') });
then restarted the app with pm2 with the following command:
$ pm2 restart /bin/www --update-env
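Alternatively (a sketch, not part of the original fix; the app name and variable below are placeholders), pm2 can inject environment variables itself via an ecosystem file, which avoids the dotenv call in app code:

// ecosystem.config.js
module.exports = {
  apps: [{
    name: 'api',          // placeholder app name
    script: './bin/www',
    env: {
      DATABASE_URL: 'postgres://user:pass@localhost:5432/mydb' // placeholder
    }
  }]
};

started with:
$ pm2 start ecosystem.config.js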

Docker request to own server

I have a docker instance running apache on port 80 and node.js+express running on port 3000. I need to make an AJAX request from the apache-served website to the node server running on port 3000.
I don't know what the appropriate URL to use is. I tried localhost, but that resolved to the localhost of the client browsing the webpage (the end user) instead of the localhost of the docker image.
Thanks in advance for your help!
First you should split your containers - it is good practice in Docker to have one process per container.
Then you will need a tool to orchestrate these containers. You can start with docker-compose, which is IMO the simplest one.
It will launch all your containers and manage their network settings for you by default.
So, imagine you have the following docker-compose.yml file for launching your apps:
docker-compose.yml
version: '3'
services:
  apache:
    image: apache
  node:
    image: node # or whatever
With such a simple configuration you will have the host names apache and node in your network. So from inside your node application, you will see the apache container under the host name apache.
Just launch it with docker-compose up
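Once it is up, you can check the name resolution from inside a container, for example (a sketch using the service names above):

$ docker-compose run node node -e "require('dns').lookup('apache', console.log)"

which should print the apache container's network address.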
make an AJAX request from the [...] website to the node server
The JavaScript, HTML, and CSS that Apache serves up is all read and interpreted by the browser, which may or may not be running on the same host as the servers. Once you're at the browser level, code has no idea that Docker is involved with any of this.
If you can get away with only sending links without hostnames, like <img src="/assets/foo.png">, that will always work without any configuration. Otherwise you need to use the DNS name or IP address of the host, exactly as you would if you were running the two services directly on the host without Docker.
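A minimal sketch of what that means for the AJAX call (the host name below is a placeholder for your actual Docker host's DNS name or IP):

// runs in the browser, where Docker-internal names are invisible;
// target the Docker host and the published port instead
var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://my-docker-host.example.com:3000/api', true);
xhr.onload = function () { console.log(xhr.responseText); };
xhr.send();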

How to connect to docker container's link alias from host

I have 3 separate pieces to my dockerized application:
nodeapp: A node:latest docker container running an expressjs app that returns a JSON object when accessed from /api. This server is also CORS-enabled according to this site.
nginxserver: A nginx:latest static server that simply hosts an index.html file that allows the user to click a button which would make the XMLHttpRequest to the node server above.
My host machine
The node:latest has its port exposed to the host via 3000:80.
The nginx:latest has its port exposed to the host via 8080:80.
From host I am able to access both nodeapp and nginxserver individually: I can make requests and see the JSON object returned from the node server using curl from the command line, and the button (index.html) is visible on the screen when I hit localhost:8080.
However, when I click the button, the call to XMLHttpRequest('GET', 'http://nodeapp/api', true) fails, seemingly without hitting the nodeapp server (no log is present). I'm assuming this is because the host does not understand http://nodeapp/api.
Is there a way to tell docker, while a container is running, to add its container-linking alias to my hosts file?
I don't know if my question is the proper solution to my problem. It looks as though I'm getting a CORS error returned, but I don't think the request is ever hitting my server. Does this have to do with accessing the application from my host machine?
Here is a link to an example repo
Edit: I've noticed that when using the stack, clicking the button gets a response from my nginx container. I'm confused as to why it is routed through that server, since nodeapp is in my hosts file, so the correlation should be recognized there.
Problem:
nodeapp exists on an internal network, which is visible to your nginxserver only; you can check this by entering nginxserver:
docker exec -it nginxserver bash
# cat /etc/hosts
Most importantly, your service setup is not correct: nginxserver should act as a reverse proxy in front of nodeapp:
host (client) -> nginxserver -> nodeapp
Dirty Quick Solution:
If you really, really want your client (host) to access the internal application nodeapp, then simply change the code below:
XMLHttpRequest('GET', 'http://nodeapp/api', true)
To
XMLHttpRequest('GET', 'http://localhost:3000/api', true)
Since in your docker-compose.yml the nodeapp service's port 80 is published to the host network as 3000, it can be accessed directly.
Better solution
You need to redesign your service stack to make nginxserver the frontend node; see this sample: http://schempy.com/2015/08/25/docker_nginx_nodejs/ (a sketch follows below).
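A minimal sketch of such a frontend configuration, assuming the nginx container reaches the node container under the alias nodeapp (names are illustrative):

# in the nginx container's server block
location /api {
    # forward API calls to the linked node container;
    # the browser then only ever talks to nginx on localhost:8080
    proxy_pass http://nodeapp:80;
}

With this in place, the page can request the relative URL /api, everything is same-origin, and no CORS headers are needed.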
