Nginx reverse proxy to node.js not working - node.js

I have an Angular and node app with Postgres as the db. I am deploying them to Docker containers on an ec2 instance. The Nginx reverse proxy on ec2 is not routing requests to node; the same setup works on my local machine. The error message I receive is: "XMLHttpRequest cannot load http://localhost:3000/api/user/login due to access control checks". Here is my docker-compose file:
version: '3.3'
networks:
  network1:
services:
  api-service:
    image: backend
    environment:
      PORT: 3000
    volumes:
      - ./volumes/logs:/app/logs
    ports:
      - 3000:3000
    restart: always
    networks:
      - network1
  nginx:
    image: frontend
    environment:
      USE_SSL: "false"
    volumes:
      - ./volumes/logs/nginx:/var/log/nginx/
      - ./volumes/ssl/nginx:/etc/ssl/
    ports:
      - 80:80
      - 443:443
    restart: always
    networks:
      - network1
and my Nginx.conf is as follows:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {} # nginx requires this section even if empty
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    server {
        listen 80;
        location / {
            root /myapp/app;
            try_files $uri $uri/ /index.html;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $http_connection;
            proxy_set_header Host $host;
        }
        location /api/ {
            proxy_pass http://api-service:3000;
        }
    }
}
and the API URL for my backend is (the commented lines are everything I have tried):
export const environment = {
  production: false,
  // apiUrl: 'http://api-service'
  // apiUrl: '/api-service/api'
  // apiUrl: 'http://' + window.location.hostname + ':3000'
  // apiUrl: 'http://localhost:3000/api'
  apiUrl: 'http://localhost:3000'
};
and in my angular service I am using:
const BACKEND_URL = environment.apiUrl + "/api/user/";
and in my backend app.js:
app.use("/api/user/", userRoutes);
What am I doing wrong?

Posting what worked for me in case it helps someone. This is the environment file that worked once I moved my code to the ec2 instance:
export const environment = {
  production: false,
  // apiUrl: 'http://localhost:3000/api'
  apiUrl: '/api'
};
So apiUrl: 'http://localhost:3000/api' works on localhost but not on the server: with the relative '/api' the browser sends requests to the page's own origin, so they reach nginx and get proxied to the node container, instead of the browser trying to reach port 3000 directly.
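A quick sketch of why the relative path behaves differently (the hostnames below are made up for illustration): the URL a browser actually requests depends on the base origin the path resolves against.

```javascript
// An absolute apiUrl pins the request to a fixed origin:
// the browser talks straight to port 3000 and bypasses nginx entirely.
const absolute = new URL('http://localhost:3000/api/user/login');
console.log(absolute.origin); // 'http://localhost:3000'

// A relative apiUrl resolves against whatever origin served the page
// (here a hypothetical ec2 host), so the request goes through nginx.
const relative = new URL('/api/user/login', 'http://my-ec2-host');
console.log(relative.href); // 'http://my-ec2-host/api/user/login'
```

On localhost both origins happen to be reachable, which is why the absolute URL only breaks after deployment.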

Related

net::ERR_FAILED api call to backend from nextjs with nginx docker

I have 3 docker containers running on macOS:
backend - port 5055
frontend (next.js) - port 3000
nginx - port 80
I am getting net::ERR_FAILED for backend API requests when I access the site from the browser (http://localhost:80). I can make a request to the backend (http://localhost:5055) in Postman and it works well.
Sample API request: GET http://backend:5055/api/category/show
What is the reason for this behaviour?
Thanks.
docker-compose.yml
version: '3.9'
services:
  backend:
    image: backend-image
    build:
      context: ./backend
      dockerfile: Dockerfile.prod
    ports:
      - '5055:5055'
  frontend:
    image: frontend-image
    build:
      context: ./frontend
      dockerfile: Dockerfile.prod
    ports:
      - '3000:3000'
    depends_on:
      - backend
  nginx:
    image: nginx-image
    build:
      context: ./nginx
    ports:
      - '80:80'
    depends_on:
      - backend
      - frontend
backend - Dockerfile.prod
FROM node:19.0.1-slim
WORKDIR /app
COPY package.json .
RUN yarn install
COPY . ./
ENV PORT 5055
EXPOSE $PORT
CMD ["npm", "run", "start"]
frontend - Dockerfile.prod
FROM node:19-alpine
WORKDIR /usr/app
RUN npm install --global pm2
COPY ./package*.json ./
RUN npm install
COPY ./ ./
RUN npm run build
EXPOSE 3000
USER node
CMD [ "pm2-runtime", "start", "npm", "--", "start" ]
nginx - Dockerfile
FROM public.ecr.aws/nginx/nginx:stable-alpine
RUN rm /etc/nginx/conf.d/*
COPY ./default.conf /etc/nginx/conf.d/
EXPOSE 80
CMD [ "nginx", "-g", "daemon off;" ]
nginx - default.conf
upstream frontend {
    server frontend:3000;
}
upstream backend {
    server backend:5055;
}
server {
    listen 80 default_server;
    ...
    location /api {
        ...
        proxy_pass http://backend;
        proxy_redirect off;
        ...
    }
    location /_next/static {
        proxy_cache STATIC;
        proxy_pass http://frontend;
    }
    location /static {
        proxy_cache STATIC;
        proxy_ignore_headers Cache-Control;
        proxy_cache_valid 60m;
        proxy_pass http://frontend;
    }
    location / {
        proxy_pass http://frontend;
    }
}
frontend - .env.local
NEXT_PUBLIC_API_BASE_URL=http://backend:5055/api
frontend - httpServices.js
import axios from 'axios'
import Cookies from 'js-cookie'

const instance = axios.create({
  baseURL: `${process.env.NEXT_PUBLIC_API_BASE_URL}`,
  timeout: 500000,
  headers: {
    Accept: 'application/json',
    'Content-Type': 'application/json',
  },
})
...
const responseBody = (response) => response.data
const requests = {
  get: (url, body) => instance.get(url, body).then(responseBody),
  post: (url, body, headers) =>
    instance.post(url, body, headers).then(responseBody),
  put: (url, body) => instance.put(url, body).then(responseBody),
}
export default requests
Edit
nginx logs (docker logs -f nginx 2>/dev/null)
172.20.0.1 - - [14/Nov/2022:17:02:39 +0000] "GET /_next/image?url=%2Fslider%2Fslider-1.jpg&w=1080&q=75 HTTP/1.1" 304 0 "http://localhost/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36" "-"
172.20.0.1 - - [14/Nov/2022:17:02:41 +0000] "GET /service-worker.js HTTP/1.1" 304 0 "http://localhost/service-worker.js" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36" "-"
172.20.0.1 - - [14/Nov/2022:17:02:41 +0000] "GET /fallback-B639VDPLP_r91l2hRR104.js HTTP/1.1" 304 0 "http://localhost/service-worker.js" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36" "-"
172.20.0.1 - - [14/Nov/2022:17:02:41 +0000] "GET /workbox-fbc529db.js HTTP/1.1" 304 0 "http://localhost/service-worker.js" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36" "-"
curl request is working well from nginx container to backend container (curl backend:5055/api/category/show)
Edit 2
const CategoryCarousel = () => {
  ...
  const { data, error } = useAsync(() => CategoryServices.getShowingCategory())
  ...
}

import requests from './httpServices'
const CategoryServices = {
  getShowingCategory() {
    return requests.get('/category/show')
  },
}
Edit 3
When NEXT_PUBLIC_API_BASE_URL=http://localhost:5055/api
Error: connect ECONNREFUSED 127.0.0.1:5055
    at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1283:16) {
  errno: -111,
  code: 'ECONNREFUSED',
  syscall: 'connect',
  address: '127.0.0.1',
  port: 5055,
  config: {
    ...
    baseURL: 'http://localhost:5055/api',
    method: 'get',
    url: '/products/show',
    data: undefined
  },
  ...
  _options: {
    ...
    protocol: 'http:',
    path: '/api/products/show',
    method: 'GET',
    ...
    pathname: '/api/products/show'
  },
  ...
  _currentUrl: 'http://localhost:5055/api/products/show',
  _timeout: null,
  response: undefined,
  isAxiosError: true,
}
Docker configuration is all correct
I've read the question again, and given the logs, your configuration seems to have been correct all along. However, what you're doing in the browser is an access violation.
Docker compose service (host) access
Docker services on the same network can reach each other, so a request from one service to another is possible. What is not possible is for a user outside that network (e.g. your browser) to reach a service by its service name. Here's a simpler explanation:
# docker-compose.yaml
service-a:
  ports:
    - 3000:3000 # exposed to outside
  # ...
service-b:
  ports:
    - 5000:5000 # exposed to outside
  # ...

# ✅ this works! case A
(service-a) $ curl http://service-b:5000
(service-b) $ curl http://service-a:3000
(local machine) $ curl http://localhost:5000
(local machine) $ curl http://localhost:3000

# ❌ this does not work! case B
(local machine) $ curl http://service-a:3000
(local machine) $ curl http://service-b:5000
When I read the question initially, I missed that the frontend code is accessing the unexposed backend service name from the browser. This clearly falls into case B, which will not work.
Solution: with server-side rendering...
A frontend app is merely JavaScript running in your browser; it cannot reach Docker-internal hostnames. Therefore the request URL should be corrected:
# change from http://backend:5055/api
NEXT_PUBLIC_API_BASE_URL=http://localhost:5055/api
But here's a better way to solve it: access the API inside server-side code. Since your frontend is Next.js, you can inject the backend result into the frontend.
export async function getServerSideProps(context) {
  const res = await fetch('http://backend:5055/api/...');
  const data = await res.json();
  return {
    props: { data }, // will be passed to the page component as props
  }
}
Edit
(Edit 3) contains frontend logs when NEXT_PUBLIC_API_BASE_URL is changed to 'localhost' (as you mentioned in the answer). Now the error comes from a different API, i.e. 'localhost:5055/api/products/show', which is inside a getServerSideProps(). Is this happening because some APIs are called from the client side and some from the server side? If that is the case, how should I fix this? Thanks
Here's a more practical example:

// outside getServerSideProps, getStaticProps - browser
// ❌ this will fail 1-a
await fetch('http://backend:5055/...') // 'backend' is the service name
// ✅ this will work 1-b
await fetch('http://localhost:5055/...')

// inside getServerSideProps or getStaticProps - internal network
export async function getServerSideProps() {
  // ✅ this will work 2-a
  await fetch('http://backend:5055/...');
  // ❌ this will fail 2-b
  await fetch('http://localhost:5055/...');
}
In short, the request has to be either 1-b or 2-a.
Is this happening because some APIs are called from the client side and some from the server side? If that is the case, how should I fix this? Thanks
Yes. There are several ways to deal with it.
1. Programmatically differentiating the host

const isServer = typeof window === 'undefined';
const HOST_URL = isServer ? 'http://backend:5055' : 'http://localhost:5055';
fetch(HOST_URL);
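A sketch of how option 1 could plug into the httpServices.js from the question (the helper name apiBaseUrl is made up here; the ports and paths are the question's): on the server there is no `window`, so the compose service name is used; in the browser, the port published on localhost is used.

```javascript
// Pick the right base URL depending on where the code is executing.
// Server side (Next.js SSR): no `window` global -> use the compose service name.
// Client side (browser): `window` exists -> use the host-published port.
function apiBaseUrl() {
  const isServer = typeof window === 'undefined';
  return isServer ? 'http://backend:5055/api' : 'http://localhost:5055/api';
}

console.log(apiBaseUrl()); // under plain Node there is no window, so this prints the server-side URL
```

The axios instance in httpServices.js could then be created with `baseURL: apiBaseUrl()` instead of the fixed env var.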
2. Manually differentiating the host

// server side
export async function getServerSideProps() {
  // this seems unnecessary and confusing at first sight, but it comes in very handy later in terms of security.
  fetch('http://backend:5055');
}
// client side
fetch('http://localhost:5055');
3. Use a separate domain for the backend (modify the hosts file)
This is what I usually resort to when testing services with a domain name in a local environment.
Modifying the hosts file means exampleurl.com will be resolved as localhost by the OS. In this case, production must use a separate domain, the hosts file setup is required, and the service must be exposed to the public. Please refer to this document on modifying hosts files.
# docker-compose.yaml
services:
  backend:
    ports:
      - 5050:5050
  # ...

# hosts file
127.0.0.1 exampleurl.com
# ...

// in this case, development == local development
const IS_DEV = process.env.NODE_ENV === 'development';
// the hosts file does not resolve ports, so the explicit port is necessary for local development.
const BACKEND_HOST = 'http://exampleurl.com';
const BACKEND_URL = IS_DEV ? `${BACKEND_HOST}:5050` : BACKEND_HOST;

// client-side
fetch(BACKEND_URL);
// server-side
export async function getServerSideProps() {
  fetch(BACKEND_URL);
}
There are many clever ways to solve this problem, but there is no "always right" answer. Take your time to think about which method best fits your case.
In nginx.conf you need to specify the backend port too, and also the base path (/api):

location /api {
    ...
    proxy_pass http://backend:5055/api;
}

Cannot Proxy Requests From React Node.js App to Express Backend (Docker)

I'm having trouble understanding how to proxy requests to my Express routes in the backend while accounting for both the local development use case and the Docker containerized use case. What I'm trying to set up is a situation in which I have "proxy" configured as http://localhost:8080 in my local env and http://api:8080 in my container. What I have thus far is createProxyMiddleware configured like so...
const { createProxyMiddleware } = require('http-proxy-middleware');

module.exports = function(app) {
  console.log(process.env.API_URL);
  app.use(
    '/api',
    createProxyMiddleware({
      target: process.env.API_URL,
      changeOrigin: true,
    })
  );
};
And my docker-compose file is configured like so...
version: "3.7"
services:
  client:
    image: webapp-client
    build: ./client
    restart: always
    environment:
      - API_URL=http://api:8080
    volumes:
      - ./client:/client
      - /client/node_modules
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.client.rule=PathPrefix(`/`)"
      - "traefik.http.routers.client.entrypoints=web"
      - "traefik.port=3000"
    depends_on:
      - api
    networks:
      - webappnetwork
  api:
    image: webapp-api
    build: ./api
    restart: always
    ports:
      - "8080:8080"
    volumes:
      - ./api:/api
      - /api/node_modules
    networks:
      - webappnetwork
  traefik:
    image: "traefik:v2.5"
    container_name: "traefik"
    restart: always
    command:
      - "--log.level=DEBUG"
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
    networks:
      - webappnetwork
networks:
  webappnetwork:
    external: true
volumes:
  pg-data:
  pgadmin:
Upon startup, the container logs...
[HPM] Proxy created: / -> http://api:8080
My axios calls look like this...
const <config_name> = {
  method: 'post',
  url: '/<route>',
  headers: {
    'Content-Type': 'application/json'
  },
  data: dataInput
}
As you can see, I set the environment variable and pass it into the createProxyMiddleware method, but for some reason this config doesn't work and gives a 404 when I try to hit a route. Any help with this would be greatly appreciated!

Logging to email in varnish from default.vcl file

varnish:
  image: varnish:6.0
  restart: always
  depends_on:
    - apache
  networks:
    - frontend
    - backend
    - traefik
  volumes:
    - ./docker/varnish:/etc/varnish
  ports:
    - 6081:6081
The above is my varnish service in docker. And here is my default.vcl config:
vcl 4.0;

sub vcl_recv {
  set req.grace = 2m;
  if (req.http.Cookie !~ "(^|;\s*)(city=(.*?))(;|$)") {
    return (pass);
  }
  # Try a cache-lookup
  return (hash);
}
How do I log to a text file when the function returns pass or returns a hash?
I tried the following, but I can't find the log.
import std;

# To 'varnishlog'
std.log("varnish log info:" + req.http.host);
# To syslog
std.syslog(LOG_USER|LOG_ALERT, "There is serious trouble");

Docker Health Check and Node.js Application

I've been trying to add a healthcheck to my container, but no matter what I do, the container never seems to work. To be precise, I have the following structure:
Traefik Proxy
Node.js Application behind that proxy
All the labels for Traefik are included in the docker-compose.yml file.
Whenever I try to add a healthcheck, either in the Dockerfile or in docker-compose.yml, the application is built and listens for connections on port 443; however, when I try to access the address from the browser, it always shows a 404 error (i.e. Traefik is unable to proxy to the container).
Here is simple service configuration:
frontend:
  restart: always
  build:
    context: ./configuration/frontend
    dockerfile: Dockerfile
  environment:
    - application_environment=development
    - FRONTEND_DOMAIN=HOST_HERE
  volumes:
    - ./volumes/frontend:/app:rw
    - ./volumes/backend/.env:/.env:ro
    - ./volumes/backend/resources/lang:/backend-lang:rw
  labels:
    - traefik.enable=true
    - traefik.frontend.rule=Host:HOST_HERE
    - traefik.port=3000
    - traefik.docker.network=traefik_proxy
    - traefik.frontend.redirect.entryPoint=https
    - traefik.frontend.passHostHeader=true
    - traefik.frontend.headers.SSLRedirect=true
    - traefik.frontend.headers.browserXSSFilter=true
    - traefik.frontend.headers.contentTypeNosniff=true
    - traefik.frontend.headers.customFrameOptionsValue=SAMEORIGIN
    - traefik.frontend.headers.STSPreload=true
    - traefik.frontend.headers.STSSeconds=31536000
  healthcheck:
    test: ["CMD", "cd /app && yarn healthcheck"]
    interval: 10s
    timeout: 5s
    start_period: 60s
  networks:
    - traefik_proxy
And here is the healthcheck.js file, which is run by the command yarn healthcheck:
const https = require('https');

const options = {
  host: process.env.FRONTEND_DOMAIN,
  port: 443,
  path: '/',
  method: 'GET',
  timeout: 2000
};

const healthCheck = https.request(options, (response) => {
  console.log(`STATUS: ${response.statusCode}`);
  if (response.statusCode === 200) {
    process.exit(0);
  } else {
    process.exit(1);
  }
});

healthCheck.on('error', function (error) {
  console.error('ERROR', error);
  process.exit(1);
});

healthCheck.end();
When I start the container without the HEALTHCHECK options (in either the Dockerfile or compose file), it works just fine: the page is displayed, and when I manually execute yarn healthcheck it reports STATUS: 200 in the console. However, with the automated healthcheck, Traefik has no access to the container.
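One possible culprit (an assumption on my part, not confirmed in this thread): the exec-form `["CMD", ...]` test runs its first element directly, without a shell, so operators like `&&` are never interpreted and the check command fails, which marks the container unhealthy and makes Traefik stop routing to it. A sketch using `CMD-SHELL` instead:

```yaml
healthcheck:
  # CMD-SHELL passes the string to /bin/sh -c, so `cd` and `&&` work;
  # plain CMD would try to exec a binary literally named "cd /app && yarn healthcheck"
  test: ["CMD-SHELL", "cd /app && yarn healthcheck"]
  interval: 10s
  timeout: 5s
  start_period: 60s
```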

Setting up nginx with multiple IPs

I have my nginx configuration file under /etc/nginx/sites-available/ with two upstreams, say:
upstream test1 {
    server 1.1.1.1:50;
    server 1.1.1.2:50;
}
upstream test2 {
    server 2.2.2.1:60;
    server 2.2.2.2:60;
}
server {
    location / {
        proxy_pass http://test1;
    }
    location / {
        proxy_pass http://test2;
    }
}
Sending a curl request to <PrimaryIP>:80 works, but I want to use <SecondaryIP1>:80 for test1 and <SecondaryIP2>:80 for test2. Is it possible to define this in nginx?
You need two server blocks to accomplish this:
upstream test1 {
    server 1.1.1.1:50;
    server 1.1.1.2:50;
}
upstream test2 {
    server 2.2.2.1:60;
    server 2.2.2.2:60;
}
server {
    listen 80;
    server_name <SecondaryIP1>;
    location / {
        proxy_pass http://test1;
    }
}
server {
    listen 80;
    server_name <SecondaryIP2>;
    location / {
        proxy_pass http://test2;
    }
}
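If matching by server_name proves unreliable (it depends on the Host header the client sends carrying the IP), an alternative sketch using the same placeholder IPs is to bind each server block to its address directly in the listen directive, so selection happens at the socket level:

```nginx
server {
    # bind this virtual server to the secondary address itself
    listen <SecondaryIP1>:80;
    location / {
        proxy_pass http://test1;
    }
}
server {
    listen <SecondaryIP2>:80;
    location / {
        proxy_pass http://test2;
    }
}
```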
