I have this Dockerfile I use to deploy my nodejs app together with nginx:
# Create our image from Node
FROM node:latest as builder
MAINTAINER Cristi Boariu <cristiboariu@gmail.com>
# use changes to package.json to force Docker not to use the cache
# when we change our application's nodejs dependencies:
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /opt/app && cp -a /tmp/node_modules /opt/app/
# From here we load our application's code in, therefore the previous docker
# "layer" thats been cached will be used if possible
WORKDIR /opt/app
ADD . /opt/app
EXPOSE 8080
CMD npm start
### STAGE 2: Setup ###
FROM nginx:1.13.3-alpine
## Copy our default nginx config
RUN mkdir /etc/nginx/ssl
COPY nginx/default.conf /etc/nginx/conf.d/
COPY nginx/star_zuumapp_com.chained.crt /etc/nginx/ssl/
COPY nginx/star_zuumapp_com.key /etc/nginx/ssl/
RUN cd /etc/nginx && chmod -R 600 ssl/
RUN mkdir -p /opt/app
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
and this is my nginx file:
upstream api-server {
    server 127.0.0.1:8080;
}
server {
    listen 80;
    server_name example.com www.example.com;
    access_log /var/log/nginx/dev.log;
    error_log /var/log/nginx/dev.error.log debug;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_pass http://api-server;
        proxy_redirect off;
    }
}
server {
    listen 443 default ssl;
    server_name example.com www.example.com;
    ssl on;
    ssl_certificate /etc/nginx/ssl/star_example_com.chained.crt;
    ssl_certificate_key /etc/nginx/ssl/star_example_com.key;
    server_tokens off;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
    ssl_prefer_server_ciphers on;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://api-server;
        proxy_redirect off;
    }
}
I already spent a few hours debugging this without success.
Basically, I receive:
502 Bad Gateway
when trying to test it locally on:
https://localhost/docs/#
From docker logs:
172.17.0.1 - - [04/May/2018:05:35:36 +0000] "GET /docs/ HTTP/1.1" 502 166 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_4) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/11.1 Safari/605.1.15" "-"
2018/05/04 05:35:36 [error] 7#7: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.17.0.1, server: example.com, request: "GET /docs/ HTTP/1.1", upstream: "http://127.0.0.1:8080/docs/", host: "localhost"
Can somebody help please?
You should configure:
server 172.17.42.1:8080;
where 172.17.42.1 is the gateway IP address of Docker's default bridge network (on newer Docker versions this is typically 172.17.0.1, as shown in your log),
or
server app_ipaddress_container:8080;
in the nginx.conf file.
This is because port 8080 is being listened on by the host, not by the nginx container, so 127.0.0.1:8080 inside the nginx container has nothing listening on it.
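For the second option, a minimal sketch of how the two containers could be wired up with Docker Compose so nginx can reach the Node app by name (the service names and file layout here are hypothetical, not from the question):
# docker-compose.yml (sketch)
version: '3'
services:
  app:
    build: .                # the Node image: package.json, npm install, CMD npm start
    expose:
      - "8080"
  nginx:
    build: ./nginx          # the nginx:1.13.3-alpine image with the config below
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - app
With both services on the same Compose network, Docker's embedded DNS resolves the service name, so the upstream becomes:
upstream api-server {
    server app:8080;    # resolved by Docker's embedded DNS
}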
Related
I am running a NestJS application via PM2 on port 3001 in an AWS EC2 instance.
I configured SSL using certbot / Let's Encrypt and nginx. I want the NestJS application to serve as my API server, hence the *.api.example.com subdomain. I have the client assets (HTML, JavaScript, and CSS) in S3 behind a CloudFront distribution.
The issue I am running into is as follows:
If I navigate to staging.api.example.com in the browser, I receive a 502 Bad Gateway
If I navigate to staging.api.example.com:3001 in the browser I receive a 404
If I navigate to staging.api.example.com:3001/users, which is a valid API route, everything works fine.
I want requests from staging.api.example.com to hit my NestJS server running at http://127.0.0.1:3001 in the EC2 instance via my nginx reverse proxy configuration.
I also cannot figure out why I have to include the port in the URL in order to reach my backend.
In my EC2 instance I had to add a custom rule to allow TCP traffic on port 3001 which doesn't seem right to me. I'm using a VPC, so I'm not sure if that's part of the problem.
IP Version | Type       | Protocol | Port Range
-----------+------------+----------+-----------
IPv4       | Custom TCP | TCP      | 3001
IPv4       | HTTP       | TCP      | 80
IPv4       | HTTPS      | TCP      | 443
Steps I took to install certbot and generate a certificate for staging.api.example.com
sudo yum update -y
sudo amazon-linux-extras install nginx1
sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sudo yum-config-manager --enable epel
sudo yum install certbot python2-certbot-nginx -y
sudo certbot --nginx
How I start the server in my EC2 instance
pm2 start dist/src/main.js --name example
NestJS application configuration
const app = await NestFactory.create(AppModule, {
  cors: true,
  httpsOptions: {
    key: fs.readFileSync('/etc/letsencrypt/live/example.com/privkey.pem'),
    cert: fs.readFileSync('/etc/letsencrypt/live/example.com/cert.pem')
  }
});
await app.listen(3001);
NGINX configuration
server {
    listen 443 ssl; # managed by Certbot
    listen [::]:443 ssl;
    server_name staging.api.example.com;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    location / {
        proxy_connect_timeout 300;
        proxy_read_timeout 300;
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://127.0.0.1:3001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_cache_bypass $http_upgrade;
        proxy_redirect http:// https://;
    }
}
server {
    if ($host = staging.api.example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    listen 80;
    listen [::]:80;
    server_name staging.api.example.com;
    return 404; # managed by Certbot
}
NGINX error log
$ sudo tail -f /var/log/nginx/error.log
2022/11/11 20:17:35 [error] 30033#30033: *1 upstream prematurely closed connection while reading response header from upstream, client: ip, server: staging.api.example.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:3001/", host: "staging.api.example.com"
NGINX access log
$ sudo tail -f /var/log/nginx/access.log
# navigate to staging.api.example.com in browser
[11/Nov/2022:20:22:16 +0000] "GET / HTTP/1.1" 502 559 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36" "-"
# navigate to staging.api.example.com/ in browser
[11/Nov/2022:20:22:22 +0000] "GET / HTTP/1.1" 502 559 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36" "-"
It turns out all I needed to do was change the proxy_pass URL in the location block to use https. The NestJS app was created with httpsOptions, so it speaks HTTPS on port 3001, and nginx was trying to talk plain HTTP to it.
Original NGINX config - does not work
location / {
    ...
    proxy_pass http://127.0.0.1:3001;
    ...
}
Updated NGINX config - works
location / {
    ...
    proxy_pass https://127.0.0.1:3001;
    ...
}
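For reference, the opposite design is also common: terminate TLS at nginx and run the NestJS app over plain HTTP. A minimal sketch of that variant, assuming nothing else needs the app to serve HTTPS directly:
// main.ts (sketch): plain-HTTP bootstrap; nginx terminates TLS instead
const app = await NestFactory.create(AppModule, { cors: true });
// bind to loopback so the app is reachable only through the proxy
await app.listen(3001, '127.0.0.1');
With this variant the original proxy_pass http://127.0.0.1:3001; works unchanged, and the security-group rule exposing port 3001 is no longer needed.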
I have a fairly standard ReactJS frontend app (using port 3000) which is served by a NodeJS backend server (using port 5000). Both apps are Dockerized and I have configured NGINX to proxy requests between the frontend and the server.
Dockerfile for front end (with NGINX "baked in"):
FROM node:lts-alpine as build
WORKDIR /app
COPY ./package.json ./
COPY ./package-lock.json ./
RUN npm install
COPY . .
RUN npm run build
FROM nginx
EXPOSE 3000
EXPOSE 443
EXPOSE 80
COPY ./cert/app.crt /etc/nginx/
COPY ./cert/app.key /etc/nginx/
ENV HTTPS=true
ENV SSL_CRT_FILE=/etc/nginx/app.crt
ENV SSL_KEY_FILE=/etc/nginx/app.key
RUN rm /etc/nginx/conf.d/default.conf
COPY ./default.conf /etc/nginx/nginx.conf
COPY --from=build /app/build/ /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]
Dockerfile for server:
FROM node:lts-alpine as build
WORKDIR /app
EXPOSE 5000
ENV NODE_TLS_REJECT_UNAUTHORIZED=0
ENV DANGEROUSLY_DISABLE_HOST_CHECK=true
ENV NODE_CONFIG_DIR=./config/
COPY ./package.json ./
COPY ./package-lock.json ./
RUN npm install
COPY . .
CMD [ "npm", "start" ]
The docker-compose.yml for this setup is
version: '3.8'
services:
  client:
    container_name: client
    depends_on:
      - server
    stdin_open: true
    environment:
      - CHOKIDAR_USEPOLLING=true
      - HTTPS=true
      - SSL_CRT_FILE=/etc/nginx/app.crt
      - SSL_KEY_FILE=/etc/nginx/app.key
    build:
      dockerfile: Dockerfile
      context: ./client
    expose:
      - "8000"
      - "3000"
    ports:
      - "3000:443"
      - "8000:80"
    volumes:
      - ./client:/app
      - /app/node_modules
      - /etc/nginx
    networks:
      - internal-network
  server:
    container_name: server
    build:
      dockerfile: Dockerfile
      context: "./server"
    expose:
      - "5000"
    ports:
      - "5000:5000"
    volumes:
      - /app/node_modules
      - ./server:/app
    networks:
      - internal-network
networks:
  internal-network:
    driver: bridge
And crucially, the NGINX default.conf is
worker_processes auto;
events {
    worker_connections 1024;
}
pid /var/run/nginx.pid;
http {
    include mime.types;
    upstream loadbalancer {
        server server:5000 weight=3;
    }
    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;
        port_in_redirect off;
        absolute_redirect off;
        return 301 https://$host$request_uri;
    }
    server {
        listen [::]:443 ssl;
        listen 443 ssl;
        server_name example.app* example.co* example.uksouth.azurecontainer.io* localhost*;
        error_page 497 https://$host:$server_port$request_uri;
        error_log /var/log/nginx/client-proxy-error.log;
        access_log /var/log/nginx/client-proxy-access.log;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:AES128:AES256:RC4-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK;
        ssl_prefer_server_ciphers on;
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 24h;
        keepalive_timeout 300;
        add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains';
        ssl_certificate /etc/nginx/app.crt;
        ssl_certificate_key /etc/nginx/app.key;
        root /usr/share/nginx/html;
        index index.html index.htm index.nginx-debian.html;
        location / {
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
            try_files $uri $uri/ /index.html;
        }
        location /tours {
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_pass http://loadbalancer;
        }
    }
}
With this configuration I have two problems:
By running docker-compose up -d, this setup builds and deploys two Docker containers locally. When I use https://localhost:3000/id this works and the data is retrieved and shown in the browser correctly. When I type http://localhost:3000/id it gets redirected to http://localhost:443/id, which does not work. I have attempted to use the NGINX directives port_in_redirect off; and absolute_redirect off; but this has not helped. How can I make sure that the redirect does not change the port number? (This is likely not going to be an issue in production, where the port numbers are not used.)
The bigger problem: the deployment to Azure is done using a docker context and running docker-compose -f ./docker-compose-azure.yml up. This runs and creates two Docker containers and a side-car process. The docker-compose-azure.yml file is
version: '3.8'
services:
  client:
    image: dev.azurecr.io/example-client
    depends_on:
      - server
    stdin_open: true
    environment:
      - CHOKIDAR_USEPOLLING=true
      - HTTPS=true
      - SSL_CRT_FILE=/etc/nginx/app.crt
      - SSL_KEY_FILE=/etc/nginx/app.key
    restart: unless-stopped
    domainname: "example-dev"
    expose:
      - "3000"
    ports:
      - target: 3000
        #published: 3000
        protocol: tcp
        mode: host
    networks:
      - internal-network
  server:
    image: dev.azurecr.io/example-server
    restart: unless-stopped
    ports:
      - "5000:5000"
    networks:
      - internal-network
networks:
  internal-network:
    driver: bridge
If I don't use HTTPS and use a simple reverse proxy instead, the two issues outlined above go away. But with the configuration above, calls to the Azure FQDN/URL fail: HTTPS requests time out with "ERR_CONNECTION_TIMED_OUT", and over HTTP the site cannot be found. What am I doing wrong here?
Thanks for your time.
I think Jan Garaj's answer has touched upon all the important bits. Here is my take, trying to give a targeted answer.
HTTP to HTTPS redirect
Currently the return 301 statement is using the $host variable, which only holds the hostname and not the port information. To capture both, you can use the $http_host variable instead (source).
server {
    listen [::]:80;
    # 307 to preserve POST data
    return 307 https://$http_host$request_uri;
}
Problems with the Azure config
In the Azure config, you have this bit:
ports:
  - target: 3000
    #published: 3000
    protocol: tcp
    mode: host
which identifies 3000 as the internal container port that listens for requests. But remember that you have an NGINX proxy inside the container that only listens on ports 80 and 443 (the server blocks in the Nginx config). This is why you get the ERR_CONNECTION_TIMED_OUT error: requests are sent to port 3000, where nothing is listening.
Since you want an HTTPS deployment, you can set this to 443 and Nginx will take care of the request.
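A sketch of the corrected ports entry (assuming Nginx terminates TLS on 443 inside the client container, matching its server block):
ports:
  - target: 443
    protocol: tcp
    mode: host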
Enabling HTTP redirect on Azure
The final bit is to configure the Azure deployment so that when an HTTP request is made to your URL, it gets redirected to its HTTPS counterpart. We already have the NGINX redirect block for port 80.
BUT it will not help: since we specify the target to be 443 inside the container, an HTTP request will try to hit port 443 and get refused. This article also mentions the same towards the end:
Use your browser to navigate to the public IP address of the container group. The IP address shown in this example is 52.157.22.76, so the URL is https://52.157.22.76. You must use HTTPS to see the running application, because of the Nginx server configuration. Attempts to connect over HTTP fail.
This could be solved if it were possible to add another port, port 80, to the Azure config:
ports:
  - port: 443
    protocol: TCP
  - port: 80
    protocol: TCP
I am not sure if Azure allows this, but if it does then that's the final solution.
I think you need to check/update the Nginx configuration file and also make sure the SSL certificate files are available.
# http block would be
server {
    listen 80 default_server;
    return 301 https://$server_name$request_uri;
}
and in the https server block, you need to update the location blocks:
location /tours {
    proxy_pass http://server:5000;
    proxy_set_header Connection "";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
}
location / {
    try_files $uri $uri/ /index.html;
}
Updated
Your Nginx config file would be
worker_processes auto;
events {
    worker_connections 1024;
}
pid /var/run/nginx.pid;
http {
    include mime.types;
    server {
        listen [::]:443 ssl;
        listen 443 ssl;
        server_name my-redirected-domain.com my-azure-domain.io localhost;
        access_log /var/log/nginx/client-proxy.log;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:AES128:AES256:RC4-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK;
        ssl_prefer_server_ciphers on;
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 24h;
        keepalive_timeout 300;
        add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains';
        ssl_certificate /etc/nginx/viewform.app.crt;
        ssl_certificate_key /etc/nginx/viewform.app.key;
        root /usr/share/nginx/html;
        index index.html index.htm index.nginx-debian.html;
        location / {
            try_files $uri $uri/ /index.html;
        }
        location /tours {
            proxy_pass http://server:5000;
            proxy_set_header Connection "";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
    server {
        listen 80 default_server;
        return 301 https://$server_name$request_uri;
    }
}
Use port 443 everywhere to avoid any confusion with port remapping (that can be an advanced setup):
1.) Define the client container to be running on port 443:
version: '3.8'
services:
  client:
    ...
    ports:
      - port: 443
        protocol: TCP
2.) Define Nginx to be running on port 443 with a proper TLS setup, as you have in your updated nginx.conf.
Deploy and open https://<public IP> (you will very likely need to add a security exception in the browser).
BTW: Azure has a quite good article about Nginx with TLS (though a more advanced setup is used):
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-container-group-ssl
IMHO a better redirect from http to https is:
server {
    listen [::]:80;
    return 301 https://$host$request_uri;
}
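To verify both listeners after deploying, a quick shell check works (keeping the <public IP> placeholder; -k skips certificate verification for a self-signed cert):
curl -v  http://<public IP>/    # should return the 301 redirect to https
curl -vk https://<public IP>/   # should return the app over TLS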
I'm trying to use Nginx for a reverse proxy. I have an Angular application running on a Node server on port 4200. I am using a Docker image for the deployment.
Below is the Dockerfile configuration:
FROM node:12.16.3-alpine as builder
RUN mkdir /app
WORKDIR /app
COPY package*.json ./
RUN npm install && npm install node-sass
COPY . .
RUN npm run build:ssr --prod --output-path=dist
EXPOSE 4200
CMD [ "node", "dist/server.js" ]
FROM nginx:alpine
COPY ./.nginx/nginx.conf /etc/nginx/nginx.conf
RUN rm -rf /usr/share/nginx/html/*
COPY --from=builder /app/dist /usr/share/nginx/html
EXPOSE 80 8080
ENTRYPOINT ["nginx", "-g", "daemon off;"]
I have the Nginx configuration as below:
worker_processes 4;
events { worker_connections 1024; }
http {
    server {
        listen 0.0.0.0:8080;
        listen [::]:8080;
        listen 127.0.0.1;
        server_name localhost;
        default_type application/octet-stream;
        gzip on;
        gzip_comp_level 6;
        gzip_vary on;
        gzip_min_length 1000;
        gzip_proxied any;
        gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
        gzip_buffers 16 8k;
        gunzip on;
        client_max_body_size 256M;
        root /usr/share/nginx/html;
        autoindex on;
        location / {
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
            proxy_pass http://127.0.0.1:4200;
        }
    }
}
When I am running the image in the Docker container using the command
docker run --name application -d -p 8080:8080 app
I am getting the below error:
*5 connect() failed (111: Connection refused) while connecting to upstream, client: xxx.xx.0.1, server: localhost, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:4200/", host: "localhost:8080" xxx.xxx.0.1 - - [14/Mar/2021:12:41:24 +0000] "GET / HTTP/1.1" 502 559 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.82 Safari/537.36"
I am currently trying to run Nginx as a reverse proxy for a small Node application and serve up files for the core of a site.
E.g.
/ Statically served files for root of website
/app/ Node app running on port 3000 with Nginx reverse proxy
server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;
    root /var/www/example.com/html;
    index index.html index.htm;
    # Set path for access_logs
    access_log /var/log/nginx/access.example.com.log combined;
    # Set path for error logs
    error_log /var/log/nginx/error.example.com.log notice;
    # If set to on, Nginx will issue log messages for every operation
    # performed by the rewrite engine at the notice error level
    # Default value off
    rewrite_log on;
    # Settings for main website
    location / {
        try_files $uri $uri/ =404;
    }
    # Settings for Node app service
    location /app/ {
        # Header settings for application behind proxy
        proxy_set_header Host $host;
        # proxy_set_header X-NginX-Proxy true;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        # Proxy pass settings
        proxy_pass http://127.0.0.1:3000/;
        # Proxy redirect settings
        proxy_redirect off;
        # HTTP version settings
        proxy_http_version 1.1;
        # Response buffering from proxied server default 1024m
        proxy_max_temp_file_size 0;
        # Proxy cache bypass define conditions under the response will not be taken from cache
        proxy_cache_bypass $http_upgrade;
    }
}
This appeared to work at first glance, but what I have found over time is that I am being served 502 errors constantly on the Node app route. This applies to both the app itself, as well as static assets included in the app.
I've tried several variations of the above config, but nothing I can find seems to fix the issue. I had read of issues with SELinux, but it is currently not enabled on the server in question.
A few additional bits of information:
Server: Ubuntu 18.04.3
Nginx: nginx/1.17.5
2020/02/09 18:18:07 [error] 8611#8611: *44 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: x, server: example.com, request: "GET /app/assets/images/image.png HTTP/1.1", upstream: "http://127.0.0.1:3000/assets/images/image.png", host: "example.com", referrer: "http://example.com/overlay/"
2020/02/09 18:18:08 [error] 8611#8611: *46 connect() failed (111: Connection refused) while connecting to upstream, client: x, server: example.com, request: "GET /app/ HTTP/1.1", upstream: "http://127.0.0.1:3000/", host: "example.com"
Has anyone encountered similar issues, or knows what it is that I've done wrong?
Thanks in advance!
It may be because of your Node router. It would be better to share the Node code too.
Anyway, try mounting your main router and static route like app.use('/app', mainRouter); and see if that makes any difference.
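A minimal sketch of that mounting, assuming an Express app (the router module and asset paths are hypothetical, since the Node code was not shared); note how the trailing slash in proxy_pass affects the prefix the app sees:
// app.js (sketch)
const express = require('express');
const path = require('path');
const mainRouter = require('./routes/main'); // hypothetical router module

const app = express();

// NOTE: with proxy_pass http://127.0.0.1:3000/; (trailing slash) nginx strips
// the /app/ prefix, so requests arrive here as / and /assets/...; mount at '/'.
// If the trailing slash is removed from proxy_pass, mount at '/app' instead.
app.use('/', mainRouter);
app.use('/assets', express.static(path.join(__dirname, 'assets')));

app.listen(3000, '127.0.0.1');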
As our organization is using SSO for staff, we are getting 502 Bad Gateway when users try to log in with Shibboleth.
Users who have access to more groups get the 502 when they try to log in, but users who have less access are able to log in.
The maximum header size with all the access is 32768.
We tried --max-http-header-size 42768 in the Docker container; however, it was not helpful.
Users with normal access (smaller header size) are able to log in.
Our setup:
VM1 hosts nginx as a reverse proxy. The configuration is below.
VM2 hosts more than one Docker container.
server {
    listen 80;
    server_name **********;
    proxy_buffering off;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
    client_body_timeout 60s;
    client_header_timeout 60s;
    keepalive_timeout 70s;
    send_timeout 60s;
    client_body_buffer_size 32k;
    client_header_buffer_size 32k;
    client_max_body_size 0;
    large_client_header_buffers 4 32k;
    access_log off;
    error_log /data/nginx/logs/****_error.log warn;
    location / {
        proxy_pass http://******:8098;
    }
}
Error log:
2019/09/25 10:25:38 [error] 20070#0: *123 upstream prematurely closed
connection while reading response header from upstream, client: ****,
server: ******, request: "GET /auth/shibboleth?redirect=L2FjY291bnQ=
HTTP/1.1", upstream: "http://******:8098/auth/shibboleth?redirect=L2FjY291bnQ=",
host: "*****", referrer:
"https://******/profile/SAML2/Redirect/SSO?execution=e1s2"
2019/09/25 10:25:50 [error] 20070#0: *125 upstream prematurely closed
connection while reading response header from upstream, client: ****,
server: *****, request: "GET / HTTP/1.1", upstream: "http://****:8098/",
host: "*****"
Docker setup
FROM node:8-alpine as intermediate
RUN apk add --no-cache git openssh alpine-sdk python2
RUN python2 -m ensurepip && \
    rm -r /usr/lib/python*/ensurepip && \
    pip install --upgrade pip setuptools && \
    if [[ ! -e /usr/bin/python ]]; then ln -sf /usr/bin/python2 /usr/bin/python; fi
WORKDIR /usr/src/app
RUN touch config.js && mkdir config
COPY package*.json ./
RUN http_proxy="http://****:3128" https_proxy="http://****:3128" npm install
COPY . .
RUN rm -rf .private
FROM node:8-alpine
WORKDIR /usr/src/app
COPY --from=intermediate /usr/src/app /usr/src/app
EXPOSE 8080
CMD [ "node", "app.js", "-p 8080" ]
This is apparently common. The fix:
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
See e.g.
Kubernetes nginx ingress controller returns 502 but only for AJAX/XmlHttpRequest requests - Stack Overflow
Fixing Nginx "upstream sent too big header" error when running an ingress controller in Kubernetes
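For placement, these directives are valid at the http, server, or location level; a sketch against the server block from the question (only the additions shown, placeholders kept as redacted):
server {
    listen 80;
    server_name **********;
    # enlarged buffers so large SSO/SAML response headers fit
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;
    location / {
        proxy_pass http://******:8098;
    }
}
After editing, nginx -t validates the config and nginx -s reload (or systemctl reload nginx) applies it.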