Our organization uses SSO for staff, and users get a 502 Bad Gateway when they try to log in with Shibboleth.
Users who belong to many groups get the 502 when they try to log in, while users with fewer group memberships can log in without problems.
With all group memberships included, the header size reaches 32768 bytes.
We tried --max-http-header-size 42768 in Docker, however it was not helpful.
Users with normal access (smaller headers) are able to log in.
Our setup:
VM1 hosts nginx as a reverse proxy; the configuration is below.
VM2 hosts several Docker containers.
server {
    listen 80;
    server_name **********;

    proxy_buffering off;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;

    client_body_timeout 60s;
    client_header_timeout 60s;
    keepalive_timeout 70s;
    send_timeout 60s;

    client_body_buffer_size 32k;
    client_header_buffer_size 32k;
    client_max_body_size 0;
    large_client_header_buffers 4 32k;

    access_log off;
    error_log /data/nginx/logs/****_error.log warn;

    location / {
        proxy_pass http://******:8098;
    }
}
Error log:
2019/09/25 10:25:38 [error] 20070#0: *123 upstream prematurely closed
connection while reading response header from upstream, client: ****,
server: ******, request: "GET /auth/shibboleth?redirect=L2FjY291bnQ=
HTTP/1.1", upstream: "http://******:8098/auth/shibboleth?redirect=L2FjY291bnQ=",
host: "*****", referrer:
"https://******/profile/SAML2/Redirect/SSO?execution=e1s2"
2019/09/25 10:25:50 [error] 20070#0: *125 upstream prematurely closed
connection while reading response header from upstream, client: ****,
server: *****, request: "GET / HTTP/1.1", upstream: "http://****:8098/",
host: "*****"
Docker setup
FROM node:8-alpine as intermediate
RUN apk add --no-cache git openssh alpine-sdk python2
RUN python2 -m ensurepip && \
    rm -r /usr/lib/python*/ensurepip && \
    pip install --upgrade pip setuptools && \
    if [[ ! -e /usr/bin/python ]]; then ln -sf /usr/bin/python2 /usr/bin/python; fi
WORKDIR /usr/src/app
RUN touch config.js && mkdir config
COPY package*.json ./
RUN http_proxy="http://****:3128" https_proxy="http://****:3128" npm install
COPY . .
RUN rm -rf .private
FROM node:8-alpine
WORKDIR /usr/src/app
COPY --from=intermediate /usr/src/app /usr/src/app
EXPOSE 8080
CMD [ "node", "app.js", "-p 8080" ]
This is apparently common. The fix:
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
See e.g.
Kubernetes nginx ingress controller returns 502 but only for AJAX/XmlHttpRequest requests - Stack Overflow
Fixing Nginx "upstream sent too big header" error when running an ingress controller in Kubernetes
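Beyond the proxy buffers, the question mentions that `--max-http-header-size` had no effect. One likely reason (an assumption, since only the Dockerfile's CMD is shown): with `CMD [ "node", "app.js", "-p 8080" ]`, any flag appended after the script name is passed to the application, not to the Node runtime. A sketch of the corrected CMD, assuming a Node version where the flag exists (8.15.0 or later in the 8.x line) and an illustrative 64 KB limit:

```dockerfile
# Runtime flags must come before the script path; flags after app.js go to the app.
CMD [ "node", "--max-http-header-size=65536", "app.js", "-p", "8080" ]
```

Note the `-p` and `8080` are also split into separate array elements, which is the conventional exec-form style.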
Related
I'm trying to use Nginx as a reverse proxy. I have an Angular application running on a Node server on port 4200, and I am using a Docker image for deployment.
Below is the Dockerfile configuration:
FROM node:12.16.3-alpine as builder
RUN mkdir /app
WORKDIR /app
COPY package*.json ./
RUN npm install && npm install node-sass
COPY . .
RUN npm run build:ssr --prod --output-path=dist
EXPOSE 4200
CMD [ "node", "dist/server.js" ]
FROM nginx:alpine
COPY ./.nginx/nginx.conf /etc/nginx/nginx.conf
RUN rm -rf /usr/share/nginx/html/*
COPY --from=builder /app/dist /usr/share/nginx/html
EXPOSE 80 8080
ENTRYPOINT ["nginx", "-g", "daemon off;"]
I have the Nginx configuration as below:
worker_processes 4;

events { worker_connections 1024; }

http {
    server {
        listen 0.0.0.0:8080;
        listen [::]:8080;
        listen 127.0.0.1;
        server_name localhost;

        default_type application/octet-stream;

        gzip on;
        gzip_comp_level 6;
        gzip_vary on;
        gzip_min_length 1000;
        gzip_proxied any;
        gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
        gzip_buffers 16 8k;
        gunzip on;

        client_max_body_size 256M;

        root /usr/share/nginx/html;
        autoindex on;

        location / {
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
            proxy_pass http://127.0.0.1:4200;
        }
    }
}
When I run the image in a Docker container using the command
docker run --name application -d -p 8080:8080 app
I get the following error:
*5 connect() failed (111: Connection refused) while connecting to upstream, client: xxx.xx.0.1, server: localhost, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:4200/", host: "localhost:8080" xxx.xxx.0.1 - - [14/Mar/2021:12:41:24 +0000] "GET / HTTP/1.1" 502 559 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.82 Safari/537.36"
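One thing worth checking in the setup above (a hypothesis based on the Dockerfile shown, not a confirmed diagnosis): a multi-stage build produces only the final stage, so the image built here contains nginx and the compiled files but no running Node process. Inside that container, `proxy_pass http://127.0.0.1:4200` points at the container's own loopback, where nothing is listening, which would produce exactly this "connection refused" 502. A sketch of a two-container layout on a shared user-defined network, where the upstream is reached by service name (the name "ssr" is illustrative):

```nginx
# nginx.conf inside the nginx container; "ssr" would be the name of a second
# container built from the node:12.16.3-alpine stage, attached to the same
# user-defined Docker network so its name resolves via Docker's DNS.
location / {
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_pass http://ssr:4200;
}
```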
I haven't been able to get Keycloak and Nginx to work within the same Docker network:
Sequence of events:
https://localhost takes me to the application homepage.
When I click on the login button:
I see the following URL in the browser:
https://localhost/auth/realms/bizmkc/protocol/openid-connect/auth?client_id=bizmapp&redirect_uri=&state=26ce2075-8099-4960-83e8-508e40c585f3&response_mode=fragment&response_type=code&scope=openid&nonce=b57ca43a-ed93-48ab-9c96-591cd6378de9
which gives me a 404.
Nginx logs show the following:
2020/04/13 09:58:38 [error] 7#7: *19 connect() failed (111: Connection refused) while connecting to upstream, client: 10.0.0.2, server: localhost, request: "GET /auth/realms/bizmkc/protocol/openid-connect/auth?client_id=bizmapp&redirect_uri=https%3A%2F%2Flocalhost%2Flogin&state=26ce2075-8099-4960-83e8-508e40c585f3&response_mode=fragment&response_type=code&scope=openid&nonce=b57ca43a-ed93-48ab-9c96-591cd6378de9 HTTP/1.1", upstream: "https://127.0.0.1:9443/auth/realms/bizmkc/protocol/openid-connect/auth?client_id=bizmapp&redirect_uri=https%3A%2F%2Flocalhost%2Flogin&state=26ce2075-8099-4960-83e8-508e40c585f3&response_mode=fragment&response_type=code&scope=openid&nonce=b57ca43a-ed93-48ab-9c96-591cd6378de9", host: "localhost", referrer: "https://localhost/login"
2020/04/13 09:58:38 [error] 7#7: *19 open() "/usr/local/nginx/html/50x.html" failed (2: No such file or directory), client: 10.0.0.2, server: localhost, request: "GET /auth/realms/bizmkc/protocol/openid-connect/auth?client_id=bizmapp&redirect_uri=https%3A%2F%2Flocalhost%2Flogin&state=26ce2075-8099-4960-83e8-508e40c585f3&response_mode=fragment&response_type=code&scope=openid&nonce=b57ca43a-ed93-48ab-9c96-591cd6378de9 HTTP/1.1", upstream: "https://127.0.0.1:9443/auth/realms/bizmkc/protocol/openid-connect/auth?client_id=bizmapp&redirect_uri=https%3A%2F%2Flocalhost%2Flogin&state=26ce2075-8099-4960-83e8-508e40c585f3&response_mode=fragment&response_type=code&scope=openid&nonce=b57ca43a-ed93-48ab-9c96-591cd6378de9", host: "localhost", referrer: "https://localhost/login"
If I run Nginx on its own outside the Docker network, then the browser URL
https://localhost/auth/realms/bizmkc/protocol/openid-connect/auth?client_id=bizmapp&redirect_uri=<redirect_uri>&state=26ce2075-8099-4960-83e8-508e40c585f3&response_mode=fragment&response_type=code&scope=openid&nonce=b57ca43a-ed93-48ab-9c96-591cd6378de9 correctly takes me to the Keycloak realm login page.
I don't know why URL redirection for the ports doesn't work within the Docker network.
My nginx.conf file
# nginx.vh.default.conf -- docker-openresty
#
# This file is installed to:
# `/etc/nginx/conf.d/default.conf`
#
# It tracks the `server` section of the upstream OpenResty's `nginx.conf`.
#
# This config (and any other configs in `etc/nginx/conf.d/`) is loaded by
# default by the `include` directive in `/usr/local/openresty/nginx/conf/nginx.conf`.
#
# See https://github.com/openresty/docker-openresty/blob/master/README.md#nginx-config-files
#
# log only if it's a new user with no cookie. From https://www.nginx.com/blog/sampling-requests-with-nginx-conditional-logging/
map $cookie_SESSION $logme {
    ""      1;
    default 0;
}

server {
    listen 80; # listen for all the HTTP requests
    server_name localhost;
    # return 301 https://localhost;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name localhost; # same server name as port 80 is fine

    ssl_certificate /etc/nginx/ssldir/ssl.crt;
    ssl_certificate_key /etc/nginx/ssldir/ssl.key;

    charset utf-8;

    # log a user only one time. If cookie is null, it's a new user
    access_log /var/log/nginx/access.log combined if=$logme;
    error_log /var/log/nginx/error.log debug;

    # Optional: If the application does not generate a session cookie, we
    # generate our own
    add_header Set-Cookie SESSION=1;

    # MUST USE TRAILING SLASH IN https://localhost:9443/ AND IT WILL NOT ADD BIZAUTH ****important
    # Default Keycloak configuration points to context auth in standalone/configuration/standalone.xml, so use auth
    location /auth {
        proxy_redirect off;
        proxy_pass https://localhost:9443;
        proxy_read_timeout 90;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location / {
        root /usr/local/nginx/html;
        index index.html index.htm;
        # following is needed for angular path-location strategy
        try_files $uri $uri/ /index.html;
    }

    location /mpi {
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_max_temp_file_size 0;
        # client_max_body_size 10m;
        # client_body_buffer_size 128k;
        # proxy_connect_timeout 90;
        # proxy_send_timeout 90;
        # proxy_read_timeout 90;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_pass http://localhost:8080;
    }

    location /npi {
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_max_temp_file_size 0;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_pass http://localhost:8080;
    }

    location /tilla/ {
        proxy_pass https://www.google.com/;
    }

    error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/local/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root /usr/local/openresty/nginx/html;
    #    fastcgi_pass 127.0.0.1:9000;
    #    fastcgi_index index.php;
    #    fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
    #    include fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}

    # On error pages, this will prevent showing version number
    #server_tokens off;
}
keycloak-nginx.yaml
version: '3.7'

networks:
  nginx:
    name: nginx

services:
  nginx:
    image: nginx:1.17.7-alpine
    domainname: localhost
    ports:
      - "80:80"
      - "443:443"
    networks:
      nginx:
    network_mode: host
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d
      - ./nginx/logs:/var/log/nginx
      - ./nginx/html:/usr/local/nginx/html
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
      - ./nginx/ssldir:/etc/nginx/ssldir:ro

  keycloak:
    image: jboss/keycloak:8.0.1
    domainname: localhost
    ports:
      - "9443:8443"
    networks:
      nginx:
    volumes:
      # - ${USERDIR}/keycloak/config.json:/config.json
      - /mnt/disks/vol1/kcthemes:/opt/jboss/keycloak/themes
      #- /mnt/disks/vol1/ssldir:/etc/x509/https
    environment:
      # https://geek-cookbook.funkypenguin.co.nz/recipes/keycloak/setup-oidc-provider/
      - KEYCLOAK_USER=admin
      - KEYCLOAK_PASSWORD=aaaa
      # - KEYCLOAK_IMPORT=/config.json
      - DB_VENDOR=postgres
      - DB_DATABASE=keycloak
      - DB_ADDR=keycloak-db
      - DB_USER=keycloak
      - DB_PASSWORD=myuberpassword
      # This is required to run keycloak behind traefik
      - PROXY_ADDRESS_FORWARDING=true
      - KEYCLOAK_HOSTNAME=localhost
      # Tell Postgres what user/password to create
      - POSTGRES_USER=keycloak
      - POSTGRES_PASSWORD=myuberpassword
      - ROOT_LOGLEVEL=DEBUG
      - KEYCLOAK_LOGLEVEL=DEBUG
    restart: "no"
    depends_on:
      - keycloak-db

  # https://hub.docker.com/_/postgres
  keycloak-db:
    image: postgres:12.1-alpine
    ports:
      - target: 5432
        published: 5432
    networks:
      nginx:
    volumes:
      - ./kc_db:/var/lib/postgresql/data
    environment:
      - DB_VENDOR=postgres
      - DB_DATABASE=keycloak
      - DB_ADDR=keycloak-db
      - DB_USER=keycloak
      - DB_PASSWORD=.
      # This is required to run keycloak behind traefik
      - KEYCLOAK_HOSTNAME=localhost
      # Tell Postgres what user/password to create
      - POSTGRES_USER=keycloak
      - POSTGRES_PASSWORD=myuberpassword
    restart: "no"

  keycloak-db-backup:
    image: postgres
    networks:
      nginx:
    volumes:
      - ${USERDIR}/keycloak/database-dump:/dump
    environment:
      - PGHOST=keycloak-db
      - PGUSER=keycloak
      - PGPASSWORD=myuberpassword
      - BACKUP_NUM_KEEP=7
      - BACKUP_FREQUENCY=1d
    entrypoint: |
      bash -c 'bash -s <<EOF
      trap "break;exit" SIGHUP SIGINT SIGTERM
      sleep 2m
      while /bin/true; do
        pg_dump -Fc > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.psql
        (ls -t /dump/dump*.psql|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.psql)|sort|uniq -u|xargs rm -- {}
        sleep $$BACKUP_FREQUENCY
      done
      EOF'
    restart: "no"
    depends_on:
      - nginx
Command used to run this
docker stack deploy -c keycloak-nginx.yaml kc
docker info
Client:
Debug Mode: false
Server:
Containers: 5
Running: 3
Paused: 0
Stopped: 2
Images: 20
Server Version: 19.03.6
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: active
NodeID: pusagcsjon73mkvjxn2wx9bkz
Is Manager: true
ClusterID: ibxcgupiut3apyhwyn78anycj
Managers: 1
Nodes: 1
Default Address Pool: 10.0.0.0/8
SubnetSize: 24
Data Path Port: 4789
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 10
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Force Rotate: 0
Autolock Managers: false
Root Rotation In Progress: false
Node Address: 192.168.0.145
Manager Addresses:
192.168.0.145:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version:
runc version:
init version:
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.15.0-96-generic
Operating System: Linux Mint 19.1
OSType: linux
Architecture: x86_64
CPUs: 6
Total Memory: 31.28GiB
Name: Yogi-Linux
ID: YTU6:VKGZ:42ED:QJNQ:34RU:IWAU:L5UL:PJP2:2FJG:FYZC:FRUC:6XNB
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
localhost:32000
127.0.0.0/8
Live Restore Enabled: false
localhost inside a container is not the same localhost you see at the OS level, so:
don't force the keycloak service to be "localhost" (domainname, KEYCLOAK_HOSTNAME)
proxy-pass /auth to the keycloak service by name, not to localhost:
proxy_pass https://keycloak:8443;
(inside the Docker network, use the container port 8443 from the 9443:8443 mapping; the published port 9443 only exists on the host)
OR:
run all containers in the OS network namespace (--net=host, though this generally isn't recommended), and then localhost in the container will be the same as your OS localhost.
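For the compose file above, a fuller sketch of the service-name approach (an illustration, not a tested drop-in). Keycloak also wants the usual forwarding headers when `PROXY_ADDRESS_FORWARDING=true` is set:

```nginx
location /auth {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # "keycloak" is the compose service name; 8443 is the container-side
    # port from the "9443:8443" mapping.
    proxy_pass https://keycloak:8443;
}
```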
I am currently trying to run Nginx as a reverse proxy for a small Node application and serve up files for the core of a site.
E.g.
/ Statically served files for root of website
/app/ Node app running on port 3000 with Nginx reverse proxy
server {
    listen 80;
    listen [::]:80;

    server_name example.com www.example.com;

    root /var/www/example.com/html;
    index index.html index.htm;

    # Set path for access logs
    access_log /var/log/nginx/access.example.com.log combined;
    # Set path for error logs
    error_log /var/log/nginx/error.example.com.log notice;

    # If set to on, Nginx will issue log messages for every operation
    # performed by the rewrite engine at the notice error level
    # Default value off
    rewrite_log on;

    # Settings for main website
    location / {
        try_files $uri $uri/ =404;
    }

    # Settings for Node app service
    location /app/ {
        # Header settings for application behind proxy
        proxy_set_header Host $host;
        # proxy_set_header X-NginX-Proxy true;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # Proxy pass settings
        proxy_pass http://127.0.0.1:3000/;

        # Proxy redirect settings
        proxy_redirect off;

        # HTTP version settings
        proxy_http_version 1.1;

        # Response buffering from proxied server, default 1024m
        proxy_max_temp_file_size 0;

        # Proxy cache bypass: defines conditions under which the response will not be taken from cache
        proxy_cache_bypass $http_upgrade;
    }
}
This appeared to work at first glance, but over time I have found that I am constantly served 502 errors on the Node app route. This applies both to the app itself and to static assets included in the app.
I've tried various variations of the above config, but nothing I can find fixes the issue. I had read of issues with SELinux, but it is not currently enabled on the server in question.
A few additional bits of information:
Server: Ubuntu 18.04.3
Nginx: nginx/1.17.5
2020/02/09 18:18:07 [error] 8611#8611: *44 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: x, server: example.com, request: "GET /app/assets/images/image.png HTTP/1.1", upstream: "http://127.0.0.1:3000/assets/images/image.png", host: "example.com", referrer: "http://example.com/overlay/"
2020/02/09 18:18:08 [error] 8611#8611: *46 connect() failed (111: Connection refused) while connecting to upstream, client: x, server: example.com, request: "GET /app/ HTTP/1.1", upstream: "http://127.0.0.1:3000/", host: "example.com"
Has anyone encountered similar issues, or knows what it is that I've done wrong?
Thanks in advance!
It may be caused by your Node router; it would help to share the Node code too.
In any case, try mounting your main router and static route like app.use('/app', mainRouter); and see if it makes any difference.
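It is also worth noting how the trailing slash in the question's `proxy_pass http://127.0.0.1:3000/;` interacts with the mount point: when the proxy_pass URL includes a URI part (even just `/`), nginx replaces the matched `/app/` prefix before forwarding, so the Node app sees `/foo`, not `/app/foo`. A sketch of the two behaviours:

```nginx
# With a trailing slash, the matched location prefix is rewritten:
location /app/ {
    proxy_pass http://127.0.0.1:3000/;   # GET /app/foo  ->  GET /foo
}

# Without it, the full original URI is forwarded unchanged:
location /app/ {
    proxy_pass http://127.0.0.1:3000;    # GET /app/foo  ->  GET /app/foo
}
```

So `app.use('/app', mainRouter)` would only match requests under the second form; with the question's config as written, routes need to be registered without the `/app` prefix.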
I cannot reach my web service, which runs behind Nginx on CentOS 7; a 502 Bad Gateway error occurs whenever I make a request. The service is developed with NodeJS. I have shared my configuration files below for more information.
The code works on my local machine but not on the server.
Server NodeJS version: v10.10.0
Local NodeJS version: v9.3.0
nginx config
upstream node_server {
    server 127.0.0.1:5000 fail_timeout=0;
    server 127.0.0.1:5001 fail_timeout=0;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    index index.html index.htm;
    server_name alpha;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_redirect off;
        proxy_buffering off;
        proxy_pass http://node_server;
    }

    location /public/ {
        root /opt/app;
    }
}
Service file located at /etc/systemd/system/node-app-1.service
[Service]
ExecStart=/usr/bin/node /opt/app/app.js
Restart=always
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=node-app-1
User=root
Group=root
Environment=NODE_ENV=production PORT=5000
[Install]
WantedBy=multi-user.target
Error detail located in error.log file
2018/09/16 03:30:59 [error] 3102#0: *16 connect() failed (111: Connection refused) while connecting to upstream, client: <MYIPADDRESS>, server: alpha, request: "GET / HTTP/1.1", upstream: "<SERVERIPADDRESS>", host: "<SERVERIPADDRESS>"
I tried to run the npm run start command inside my root folder, and it worked fine.
The http and https firewall rules are enabled.
The origin server is up.
I have spent roughly 5-6 hours searching this issue without finding any solution.
I could not find the same question on the platform; if one exists, please let me know and mark this question as a duplicate.
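One observation about the configuration above (a possible contributing factor, not a confirmed fix): the `upstream node_server` block lists two backends, 5000 and 5001, but only one unit file with `PORT=5000` is shown. If nothing listens on 5001, a portion of the proxied requests will be refused even once the first instance is up. A second unit for the other port might look like this (the file name is hypothetical, mirroring the one shown):

```ini
# /etc/systemd/system/node-app-2.service -- second app instance on port 5001
[Service]
ExecStart=/usr/bin/node /opt/app/app.js
Restart=always
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=node-app-2
User=root
Group=root
Environment=NODE_ENV=production PORT=5001

[Install]
WantedBy=multi-user.target
```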
I have this Dockerfile I use to deploy my nodejs app together with nginx:
#Create our image from Node
FROM node:latest as builder
MAINTAINER Cristi Boariu <cristiboariu#gmail.com>
# use changes to package.json to force Docker not to use the cache
# when we change our application's nodejs dependencies:
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /opt/app && cp -a /tmp/node_modules /opt/app/
# From here we load our application's code in, therefore the previous docker
# "layer" that's been cached will be used if possible
WORKDIR /opt/app
ADD . /opt/app
EXPOSE 8080
CMD npm start
### STAGE 2: Setup ###
FROM nginx:1.13.3-alpine
## Copy our default nginx config
RUN mkdir /etc/nginx/ssl
COPY nginx/default.conf /etc/nginx/conf.d/
COPY nginx/star_zuumapp_com.chained.crt /etc/nginx/ssl/
COPY nginx/star_zuumapp_com.key /etc/nginx/ssl/
RUN cd /etc/nginx && chmod -R 600 ssl/
RUN mkdir -p /opt/app
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
and this is my nginx file:
upstream api-server {
    server 127.0.0.1:8080;
}

server {
    listen 80;
    server_name example.com www.example.com;

    access_log /var/log/nginx/dev.log;
    error_log /var/log/nginx/dev.error.log debug;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_pass http://api-server;
        proxy_redirect off;
    }
}

server {
    listen 443 default ssl;
    server_name example.com www.example.com;

    ssl on;
    ssl_certificate /etc/nginx/ssl/star_example_com.chained.crt;
    ssl_certificate_key /etc/nginx/ssl/star_example_com.key;
    server_tokens off;

    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
    ssl_prefer_server_ciphers on;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://api-server;
        proxy_redirect off;
    }
}
I already spent a few hours debugging this without success.
Basically, I receive:
502 Bad Gateway
when trying to test it locally on:
https://localhost/docs/#
From docker logs:
172.17.0.1 - - [04/May/2018:05:35:36 +0000] "GET /docs/ HTTP/1.1" 502 166 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_4) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/11.1 Safari/605.1.15" "-"
2018/05/04 05:35:36 [error] 7#7: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.17.0.1, server: example.com, request: "GET /docs/ HTTP/1.1", upstream: "http://127.0.0.1:8080/docs/", host: "localhost"
Can somebody help please?
You should configure:
server 172.17.42.1:8080;
where 172.17.42.1 is the gateway IP address of the Docker bridge, or:
server app_ipaddress_container:8080;
in the nginx.conf file.
This is because port 8080 is listened on by the host (or by the app's own container), not inside the nginx container.
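The bridge gateway address varies between hosts and Docker versions (172.17.0.1 is the usual default on modern installs), so hard-coding it is fragile. A more portable sketch is to run the Node app as its own container on a user-defined network shared with nginx and reference it by name (the name "api" here is illustrative):

```nginx
upstream api-server {
    # "api" is the app container's name on a shared user-defined Docker
    # network, resolved via Docker's embedded DNS; 8080 is the port the
    # Node app listens on inside that container.
    server api:8080;
}
```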