Docker Network Nginx Keycloak Integration not working properly (Ubuntu 19) - linux

I haven't been able to get Keycloak and Nginx to work within the same Docker network:
Sequence of events:
https://localhost takes me to the application homepage.
When I click on the login button:
I see the following URL in the browser:
https://localhost/auth/realms/bizmkc/protocol/openid-connect/auth?client_id=bizmapp&redirect_uri=&state=26ce2075-8099-4960-83e8-508e40c585f3&response_mode=fragment&response_type=code&scope=openid&nonce=b57ca43a-ed93-48ab-9c96-591cd6378de9
which gives me a 404.
Nginx logs show the following:
2020/04/13 09:58:38 [error] 7#7: *19 connect() failed (111: Connection refused) while connecting to upstream, client: 10.0.0.2, server: localhost, request: "GET /auth/realms/bizmkc/protocol/openid-connect/auth?client_id=bizmapp&redirect_uri=https%3A%2F%2Flocalhost%2Flogin&state=26ce2075-8099-4960-83e8-508e40c585f3&response_mode=fragment&response_type=code&scope=openid&nonce=b57ca43a-ed93-48ab-9c96-591cd6378de9 HTTP/1.1", upstream: "https://127.0.0.1:9443/auth/realms/bizmkc/protocol/openid-connect/auth?client_id=bizmapp&redirect_uri=https%3A%2F%2Flocalhost%2Flogin&state=26ce2075-8099-4960-83e8-508e40c585f3&response_mode=fragment&response_type=code&scope=openid&nonce=b57ca43a-ed93-48ab-9c96-591cd6378de9", host: "localhost", referrer: "https://localhost/login"
2020/04/13 09:58:38 [error] 7#7: *19 open() "/usr/local/nginx/html/50x.html" failed (2: No such file or directory), client: 10.0.0.2, server: localhost, request: "GET /auth/realms/bizmkc/protocol/openid-connect/auth?client_id=bizmapp&redirect_uri=https%3A%2F%2Flocalhost%2Flogin&state=26ce2075-8099-4960-83e8-508e40c585f3&response_mode=fragment&response_type=code&scope=openid&nonce=b57ca43a-ed93-48ab-9c96-591cd6378de9 HTTP/1.1", upstream: "https://127.0.0.1:9443/auth/realms/bizmkc/protocol/openid-connect/auth?client_id=bizmapp&redirect_uri=https%3A%2F%2Flocalhost%2Flogin&state=26ce2075-8099-4960-83e8-508e40c585f3&response_mode=fragment&response_type=code&scope=openid&nonce=b57ca43a-ed93-48ab-9c96-591cd6378de9", host: "localhost", referrer: "https://localhost/login"
If I run Nginx on its own outside the Docker network, then the browser URL
https://localhost/auth/realms/bizmkc/protocol/openid-connect/auth?client_id=bizmapp&redirect_uri=<redirect_uri>&state=26ce2075-8099-4960-83e8-508e40c585f3&response_mode=fragment&response_type=code&scope=openid&nonce=b57ca43a-ed93-48ab-9c96-591cd6378de9 correctly takes me to the Keycloak realm login page.
I don't know why the URL redirection for the ports doesn't work within the Docker network.
My nginx.conf file
# nginx.vh.default.conf -- docker-openresty
#
# This file is installed to:
# `/etc/nginx/conf.d/default.conf`
#
# It tracks the `server` section of the upstream OpenResty's `nginx.conf`.
#
# This config (and any other configs in `etc/nginx/conf.d/`) is loaded by
# default by the `include` directive in `/usr/local/openresty/nginx/conf/nginx.conf`.
#
# See https://github.com/openresty/docker-openresty/blob/master/README.md#nginx-config-files
#
# log only if it's a new user with no cookie. From https://www.nginx.com/blog/sampling-requests-with-nginx-conditional-logging/
map $cookie_SESSION $logme {
    ""      1;
    default 0;
}

server {
    listen 80; # listen for all HTTP requests
    server_name localhost;
    # return 301 https://localhost;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name localhost; # same server name as port 80 is fine

    ssl_certificate /etc/nginx/ssldir/ssl.crt;
    ssl_certificate_key /etc/nginx/ssldir/ssl.key;

    charset utf-8;

    # log a user only one time. If the cookie is null, it's a new user
    access_log /var/log/nginx/access.log combined if=$logme;
    error_log /var/log/nginx/error.log debug;

    # Optional: if the application does not generate a session cookie, we
    # generate our own
    add_header Set-Cookie SESSION=1;

    # MUST USE A TRAILING SLASH IN https://localhost:9443/ AND IT WILL NOT ADD BIZAUTH ****important
    # The default Keycloak configuration points to the context "auth" in standalone/configuration/standalone.xml, so use /auth
    location /auth {
        proxy_redirect off;
        proxy_pass https://localhost:9443;
        proxy_read_timeout 90;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location / {
        root /usr/local/nginx/html;
        index index.html index.htm;
        # the following is needed for the Angular PathLocationStrategy
        try_files $uri $uri/ /index.html;
    }

    location /mpi {
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_max_temp_file_size 0;
        # client_max_body_size 10m;
        # client_body_buffer_size 128k;
        # proxy_connect_timeout 90;
        # proxy_send_timeout 90;
        # proxy_read_timeout 90;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_pass http://localhost:8080;
    }

    location /npi {
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_max_temp_file_size 0;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_pass http://localhost:8080;
    }

    location /tilla/ {
        proxy_pass https://www.google.com/;
    }

    error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/local/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root /usr/local/openresty/nginx/html;
    #    fastcgi_pass 127.0.0.1:9000;
    #    fastcgi_index index.php;
    #    fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
    #    include fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}

    # On error pages, this prevents showing the version number
    #server_tokens off;
}
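A note on the trailing-slash comment above, since proxy_pass semantics trip people up: when proxy_pass carries a URI part (even a lone /), nginx replaces the matched location prefix with that URI; without one, the original request URI is forwarded untouched. A minimal illustration (backend is a placeholder host, not from the config above):

    location /auth {
        # no URI part: /auth/realms/x is forwarded as /auth/realms/x
        proxy_pass https://backend:9443;
    }

    location /auth/ {
        # trailing slash: /auth/realms/x is forwarded as /realms/x
        proxy_pass https://backend:9443/;
    }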
keycloak-nginx.yaml
version: '3.7'
networks:
  nginx:
    name: nginx
services:
  nginx:
    image: nginx:1.17.7-alpine
    domainname: localhost
    ports:
      - "80:80"
      - "443:443"
    networks:
      nginx:
    network_mode: host
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d
      - ./nginx/logs:/var/log/nginx
      - ./nginx/html:/usr/local/nginx/html
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
      - ./nginx/ssldir:/etc/nginx/ssldir:ro
  keycloak:
    image: jboss/keycloak:8.0.1
    domainname: localhost
    ports:
      - "9443:8443"
    networks:
      nginx:
    volumes:
      # - ${USERDIR}/keycloak/config.json:/config.json
      - /mnt/disks/vol1/kcthemes:/opt/jboss/keycloak/themes
      # - /mnt/disks/vol1/ssldir:/etc/x509/https
    environment:
      # https://geek-cookbook.funkypenguin.co.nz/recipes/keycloak/setup-oidc-provider/
      - KEYCLOAK_USER=admin
      - KEYCLOAK_PASSWORD=aaaa
      # - KEYCLOAK_IMPORT=/config.json
      - DB_VENDOR=postgres
      - DB_DATABASE=keycloak
      - DB_ADDR=keycloak-db
      - DB_USER=keycloak
      - DB_PASSWORD=myuberpassword
      # This is required to run keycloak behind traefik
      - PROXY_ADDRESS_FORWARDING=true
      - KEYCLOAK_HOSTNAME=localhost
      # Tell Postgres what user/password to create
      - POSTGRES_USER=keycloak
      - POSTGRES_PASSWORD=myuberpassword
      - ROOT_LOGLEVEL=DEBUG
      - KEYCLOAK_LOGLEVEL=DEBUG
    restart: "no"
    depends_on:
      - keycloak-db
  # https://hub.docker.com/_/postgres
  keycloak-db:
    image: postgres:12.1-alpine
    ports:
      - target: 5432
        published: 5432
    networks:
      nginx:
    volumes:
      - ./kc_db:/var/lib/postgresql/data
    environment:
      - DB_VENDOR=postgres
      - DB_DATABASE=keycloak
      - DB_ADDR=keycloak-db
      - DB_USER=keycloak
      - DB_PASSWORD=.
      # This is required to run keycloak behind traefik
      - KEYCLOAK_HOSTNAME=localhost
      # Tell Postgres what user/password to create
      - POSTGRES_USER=keycloak
      - POSTGRES_PASSWORD=myuberpassword
    restart: "no"
  keycloak-db-backup:
    image: postgres
    networks:
      nginx:
    volumes:
      - ${USERDIR}/keycloak/database-dump:/dump
    environment:
      - PGHOST=keycloak-db
      - PGUSER=keycloak
      - PGPASSWORD=myuberpassword
      - BACKUP_NUM_KEEP=7
      - BACKUP_FREQUENCY=1d
    entrypoint: |
      bash -c 'bash -s <<EOF
      trap "break;exit" SIGHUP SIGINT SIGTERM
      sleep 2m
      while /bin/true; do
        pg_dump -Fc > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.psql
        (ls -t /dump/dump*.psql|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.psql)|sort|uniq -u|xargs rm -- {}
        sleep $$BACKUP_FREQUENCY
      done
      EOF'
    restart: "no"
    depends_on:
      - nginx
Command used to run this
docker stack deploy -c keycloak-nginx.yaml kc
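Before digging into the proxy config, it can help to confirm that the keycloak service name actually resolves inside the running nginx task. A quick check (the name filter assumes the stack name kc from the command above and may need adjusting; nslookup comes from busybox in the alpine image):

    docker exec $(docker ps -q -f name=kc_nginx) nslookup keycloak

If keycloak resolves to a virtual IP on the overlay network, nginx can reach it as keycloak:8443; if it does not, any proxy_pass to localhost simply loops back into the nginx container itself.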
docker info
Client:
 Debug Mode: false

Server:
 Containers: 5
  Running: 3
  Paused: 0
  Stopped: 2
 Images: 20
 Server Version: 19.03.6
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: active
  NodeID: pusagcsjon73mkvjxn2wx9bkz
  Is Manager: true
  ClusterID: ibxcgupiut3apyhwyn78anycj
  Managers: 1
  Nodes: 1
  Default Address Pool: 10.0.0.0/8
  SubnetSize: 24
  Data Path Port: 4789
  Orchestration:
   Task History Retention Limit: 5
  Raft:
   Snapshot Interval: 10000
   Number of Old Snapshots to Retain: 0
   Heartbeat Tick: 1
   Election Tick: 10
  Dispatcher:
   Heartbeat Period: 5 seconds
  CA Configuration:
   Expiry Duration: 3 months
   Force Rotate: 0
  Autolock Managers: false
  Root Rotation In Progress: false
  Node Address: 192.168.0.145
  Manager Addresses:
   192.168.0.145:2377
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version:
 runc version:
 init version:
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 4.15.0-96-generic
 Operating System: Linux Mint 19.1
 OSType: linux
 Architecture: x86_64
 CPUs: 6
 Total Memory: 31.28GiB
 Name: Yogi-Linux
 ID: YTU6:VKGZ:42ED:QJNQ:34RU:IWAU:L5UL:PJP2:2FJG:FYZC:FRUC:6XNB
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  localhost:32000
  127.0.0.0/8
 Live Restore Enabled: false

localhost inside a container is not the same localhost that you see at the OS level, so:
don't force the keycloak service to be "localhost" (domainname, KEYCLOAK_HOSTNAME)
proxy pass /auth to the keycloak service (not to localhost), using the container port 8443 rather than the published host port 9443:
proxy_pass https://keycloak:8443;
OR:
run all containers in the OS network namespace (--net=host, but generally it isn't recommended) and then localhost in the container will be the same as your OS localhost.
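Putting that together, a minimal sketch of the corrected location block for the question's nginx.conf (the service name and container port are taken from the compose file above; the rest of the server block stays as it was):

    location /auth {
        proxy_redirect off;
        # "keycloak" is the compose service name; 8443 is the container port
        # behind the published 9443:8443 mapping
        proxy_pass https://keycloak:8443;
        proxy_read_timeout 90;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

Dropping network_mode: host from the nginx service is also worth trying, since it conflicts with the published ports and keeps nginx off the overlay network where the keycloak name is resolvable.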

Related

Need help proxying React and NodeJS apps with nginx-proxy on a VPS, in Docker containers

What I'm trying to do is deploy a dockerized monorepo project (using NX as the monorepo framework) with a NestJS + React + MySQL + Nginx stack on a VPS. I want the nginx proxy to listen on the host's port 88 (because another stack uses port 80; it's an old stack I don't dare touch). The OS of the VPS is CentOS 7.
I'll spare most of the details of the builds (Dockerfiles), but know that the builds work: it is all working in my local environment (mostly due to the fact that I don't use nginx-proxy for local development), and I know it's either a matter of my Docker configs (I use docker-compose) or the host's networking that comes into play.
Here's a 'bird's eye view' of the stack:
React-frontend container is running a react app (using nx serve react-frontend) on port 4200 in the container, exposing port 4200 to the host
backend-api container is running a nodejs app (using nodejs entrypoint) on port 3333 of the container, exposing the port to the host
a MySQL container running a mysql server running on port 3306 of the container, exposed on port 3307 of the Host
An nginx-proxy using the jwilder/nginx-proxy Docker image (I also tried the nginxproxy/nginx-proxy Docker image) listening on port 88 of the host and redirecting requests to the react-frontend container through a proxy pass (this is the part that I'm failing at).
So here's my "compose-prod.yml" docker-compose file:
version: "3.7"
networks:
corp:
driver: bridge
nginx-proxy:
external:
name: nginx-proxy
volumes:
backend-db-volume:
driver: local
services:
nginx-proxy:
image: jwilder/nginx-proxy # also tried nginxproxy/nginx-proxy image
container_name: nginx-proxy
networks:
- corp
- nginx-proxy
environment:
HTTP_PORT: 88
ports:
- "88:88" # also tried "88:80" but that gives me "connection refused" in the browser
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
backend-db:
image: backend-db
hostname: backend-db
restart: unless-stopped
volumes:
- backend-db-volume:/var/lib/mysql
networks:
- corp
build:
context: ./apps/backend-db
dockerfile: ./Dockerfile
ports:
- 3307:3306
expose:
- 3306
backend-api:
container_name: backend-api
depends_on:
- backend-db
build:
context: ./
cache_from:
- base-image:nx-base
dockerfile: ./apps/backend-api/Dockerfile
args:
NODE_ENV: "production"
BUILD_FLAG: ""
image: backend-api:nx-dev
ports:
- "3333:3333"
environment:
NODE_ENV: "production"
PORT: 3333
[... other env configs ommitted, like DB variables, etc.]
networks:
- corp
restart: on-failure
react-frontend:
container_name: react-frontend
build:
context: ./
dockerfile: ./apps/react-frontend/Dockerfile
args:
NODE_ENV: "production"
BUILD_FLAG: ""
image: react-frontend:nx-dev
environment:
VIRTUAL_HOST: react-frontend # note that my domain is react-frontend.com, obfuscated ofc ... which I also tried using in VIRTUAL_HOST config
VIRTUAL_PORT: 4200
NGINX_PROXY_CONTAINER: nginx-proxy
NODE_ENV: "production"
[...other env configs ommitted]
ports:
- "4200:4200"
expose:
- 4200
networks:
- nginx-proxy
- corp
restart: on-failure
The nginx-proxy container automatically detects containers running with the VIRTUAL_HOST env variable set and generates configs for those from the compose-prod.yml file. Right now, the configuration it generates, which I get using the docker exec nginx-proxy cat /etc/nginx/conf.d/default.conf command, is this:
# nginx-proxy version : 1.0.1-6-gc4ad18f
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
    default $http_x_forwarded_proto;
    ''      $scheme;
}
# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
    default $http_x_forwarded_port;
    ''      $server_port;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
    default upgrade;
    ''      close;
}
# Apply fix for very long server names
server_names_hash_bucket_size 128;
# Default dhparam
ssl_dhparam /etc/nginx/dhparam/dhparam.pem;
# Set appropriate X-Forwarded-Ssl header based on $proxy_x_forwarded_proto
map $proxy_x_forwarded_proto $proxy_x_forwarded_ssl {
    default off;
    https   on;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent" '
                 '"$upstream_addr"';
access_log off;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
ssl_prefer_server_ciphers off;
error_log /dev/stderr;
resolver 127.0.0.11;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
proxy_set_header X-Original-URI $request_uri;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
server {
    server_name _; # This is just an invalid value which will never trigger on a real hostname.
    server_tokens off;
    listen 88;
    access_log /var/log/nginx/access.log vhost;
    return 503;
}
# react-frontend
upstream react-frontend {
    ## Can be connected with "react-frontend_corp" network
    # react-frontend
    server <IP of react-frontend container on Docker network>:4200;
    # Cannot connect to network 'nginx-proxy' of this container
    # Cannot connect to network 'react-frontend_corp' of this container
    ## Can be connected with "nginx-proxy" network
    # react-frontend
    server <IP of react-frontend container on Docker network>:4200;
}
server {
    server_name react-frontend;
    listen 88;
    access_log /var/log/nginx/access.log vhost;
    location / {
        proxy_pass http://react-frontend;
    }
}
When I access "example.com:88" I get a "503 Service Temporarily Unavailable" page in my browser returned from nginx and I see this in nginx's access logs:
nginx-proxy | nginx.1 | example.com xx.yy.zz.ip - - [06/Jul/2022:16:48:12 +0000] "GET / HTTP/1.1" 503 592 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36" "-"
nginx-proxy | nginx.1 | example.com xx.yy.zz.ip - - [06/Jul/2022:16:48:12 +0000] "GET /favicon.ico HTTP/1.1" 503 592 "http://example.com:88/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36" "-"
I omit the Dockerfiles since the nginx-proxy container is not built (it's taken as-is from the image) and all the builds work ... it's the deployment that gives me trouble.
Does anyone have any pointers on what I'm missing, or what I should check? This is for a personal project, and even though I can get around as a devops person, Docker networking/deployment still baffles me sometimes.
EDIT: I'm adding the VPS (host) nginx vhost config here ... maybe I can proxy-pass using this configuration. How would I go about modifying this config so I can proxy-pass requests to the "example" Docker container exposing port 4200 (instead of to a root directory on the VPS)?
# configuration file /etc/nginx/conf.d/users/example.conf:
proxy_cache_path /var/cache/ea-nginx/proxy/example levels=1:2 keys_zone=example:10m inactive=60m;
#### main domain for example ##
server {
    server_name example.com www.example.com mail.example.com;
    listen 80;
    listen [::]:80;
    include conf.d/includes-optional/cloudflare.conf;
    set $CPANEL_APACHE_PROXY_PASS $scheme://apache_backend_${scheme}_51_222_24_216;
    # For includes:
    set $CPANEL_APACHE_PROXY_IP 51.222.24.216;
    set $CPANEL_APACHE_PROXY_SSL_IP 51.222.24.216;
    set $CPANEL_PROXY_CACHE example;
    set $CPANEL_SKIP_PROXY_CACHING 0;
    listen 443 ssl;
    listen [::]:443 ssl;
    ssl_certificate /var/cpanel/ssl/apache_tls/example.com/combined;
    ssl_certificate_key /var/cpanel/ssl/apache_tls/example.com/combined;
    ssl_protocols TLSv1.2 TLSv1.3;
    proxy_ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256;
    proxy_ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256;
    root /home/example/public_html;
    location /cpanelwebcall {
        include conf.d/includes-optional/cpanel-proxy.conf;
        proxy_pass http://127.0.0.1:2082/cpanelwebcall;
    }
    location /Microsoft-Server-ActiveSync {
        include conf.d/includes-optional/cpanel-proxy.conf;
        proxy_pass http://127.0.0.1:2090/Microsoft-Server-ActiveSync;
    }
    location = /favicon.ico {
        allow all;
        log_not_found off;
        access_log off;
        include conf.d/includes-optional/cpanel-proxy.conf;
        proxy_pass $CPANEL_APACHE_PROXY_PASS;
    }
    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
        include conf.d/includes-optional/cpanel-proxy.conf;
        proxy_pass $CPANEL_APACHE_PROXY_PASS;
    }
    location / {
        proxy_cache $CPANEL_PROXY_CACHE;
        proxy_no_cache $CPANEL_SKIP_PROXY_CACHING;
        proxy_cache_bypass $CPANEL_SKIP_PROXY_CACHING;
        proxy_cache_valid 200 301 302 60m;
        proxy_cache_valid 404 1m;
        proxy_cache_use_stale error timeout http_429 http_500 http_502 http_503 http_504;
        proxy_cache_background_update on;
        proxy_cache_revalidate on;
        proxy_cache_min_uses 1;
        proxy_cache_lock on;
        include conf.d/includes-optional/cpanel-proxy.conf;
        proxy_pass $CPANEL_APACHE_PROXY_PASS;
    }
    include conf.d/server-includes/*.conf;
    include conf.d/users/example/*.conf;
    include conf.d/users/example/example.com/*.conf;
}
server {
    listen 80;
    listen [::]:80;
    listen 443 ssl;
    listen [::]:443 ssl;
    ssl_certificate /var/cpanel/ssl/apache_tls/example.com/combined;
    ssl_certificate_key /var/cpanel/ssl/apache_tls/example.com/combined;
    server_name cpanel.example.com cpcalendars.example.com cpcontacts.example.com webdisk.example.com webmail.example.com;
    include conf.d/includes-optional/cloudflare.conf;
    set $CPANEL_APACHE_PROXY_PASS $scheme://apache_backend_${scheme}_51_222_24_216;
    # For includes:
    set $CPANEL_APACHE_PROXY_IP 51.222.24.216;
    set $CPANEL_APACHE_PROXY_SSL_IP 51.222.24.216;
    location /.well-known/cpanel-dcv {
        root /home/example/public_html;
        disable_symlinks if_not_owner;
    }
    location /.well-known/pki-validation {
        root /home/example/public_html;
        disable_symlinks if_not_owner;
    }
    location /.well-known/acme-challenge {
        root /home/example/public_html;
        disable_symlinks if_not_owner;
    }
    location / {
        # Force https for service subdomains
        if ($scheme = http) {
            return 301 https://$host$request_uri;
        }
        # no cache
        proxy_cache off;
        proxy_no_cache 1;
        proxy_cache_bypass 1;
        # pass to Apache
        include conf.d/includes-optional/cpanel-proxy.conf;
        proxy_pass $CPANEL_APACHE_PROXY_PASS;
    }
}
If you want to access your app using example.com or www.example.com, you have to set server_name example.com *.example.com;. You can also reach the Docker containers using DNS while using docker-compose, in your case backend-api:3333 or react-frontend:4200. There are a few corrections in your configs.
server {
    server_name _; # This is just an invalid value which will never trigger on a real hostname.
    server_tokens off;
    listen 88;
    access_log /var/log/nginx/access.log vhost;
    return 503;
}
# react-frontend
upstream frontends {
    server react-frontend:4200;
}
upstream backends {
    server backend-api:3333;
}
server {
    server_name 127.0.0.1 example.com *.example.com;
    listen 88;
    access_log /var/log/nginx/access.log vhost;
    location / {
        proxy_pass http://frontends;
    }
    # if required, only then use it, otherwise remove it [for direct api calls]
    location /api/v1 {
        proxy_pass http://backends;
    }
}
One can add more options or configs according to the requirements. We are seeing the 503 default page because of the server_name _ config in the above snippet. One can configure it if needed, like the below snippet:
server {
    server_name _; # This is just an invalid value which will never trigger on a real hostname.
    server_tokens off;
    listen 88;
    access_log /var/log/nginx/access.log vhost;
    return 403 "..Ops";
}
I'm not sure why two networks are required if you are running everything in one compose file.
If two networks are required, below is an example docker-compose file and nginx.conf.
docker-compose.yaml
version: "3.7"
networks:
corp:
driver: bridge
nginx-proxy:
external:
name: nginx-proxy
services:
nginx-proxy:
container_name: nginx-proxy
image: nginx
ports:
- 3210:80
networks:
- nginx-proxy
- corp
volumes:
- another-nginx.conf:/etc/nginx/conf.d/another-nginx.conf
react-frontend:
container_name: react-frontend
image: httpd
ports:
- 3211:80
networks:
- nginx-proxy
- corp
backend-x:
container_name: backend
image: nginx
ports:
- 3212:80
networks:
- corp
db-x:
container_name: db
image: httpd
ports:
- 3213:80
networks:
- corp
another-nginx.conf
server {
    server_name _; # This is just an invalid value which will never trigger on a real hostname.
    server_tokens off;
    listen 88;
    access_log /var/log/nginx/access.log;
    return 503;
}
# react-frontend
upstream frontends {
    server react-frontend:80;
}
upstream backends {
    server backend:80;
}
server {
    server_name 127.0.0.1 localhost example.com *.example.com;
    listen 80;
    access_log /var/log/nginx/access.log;
    location / {
        proxy_pass http://frontends;
    }
    # if required, only then use it, otherwise remove it [for direct api calls]
    location /api/v1 {
        proxy_pass http://backends;
    }
}
To create the nginx-proxy network if it is not present:
docker network create nginx-proxy
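As a quick smoke test that the proxy can reach the frontend over the shared network at all (a hypothetical check; curlimages/curl is just a convenient throwaway image, and the network/service names are the ones from the compose file above):

    docker run --rm --network nginx-proxy curlimages/curl -s http://react-frontend:4200

If this prints the app's HTML, the 503 is a vhost-matching issue (the Host header the browser sends vs. VIRTUAL_HOST) rather than a networking one.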

Dockerized NGINX Configuration with ReactJS App Running on Azure (Container Instances)

I have a fairly standard ReactJS frontend (using port 3000) app which is served by a NodeJS backend server (using port 5000). Both apps are Dockerized and I have configured NGINX in order to proxy requests from the frontend to and from the server.
Dockerfile for front end (with NGINX "baked in"):
FROM node:lts-alpine as build
WORKDIR /app
COPY ./package.json ./
COPY ./package-lock.json ./
RUN npm install
COPY . .
RUN npm run build
FROM nginx
EXPOSE 3000
EXPOSE 443
EXPOSE 80
COPY ./cert/app.crt /etc/nginx/
COPY ./cert/app.key /etc/nginx/
ENV HTTPS=true
ENV SSL_CRT_FILE=/etc/nginx/app.crt
ENV SSL_KEY_FILE=/etc/nginx/app.key
RUN rm /etc/nginx/conf.d/default.conf
COPY ./default.conf /etc/nginx/nginx.conf
COPY --from=build /app/build/ /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]
Dockerfile for server:
FROM node:lts-alpine as build
WORKDIR /app
EXPOSE 5000
ENV NODE_TLS_REJECT_UNAUTHORIZED=0
ENV DANGEROUSLY_DISABLE_HOST_CHECK=true
ENV NODE_CONFIG_DIR=./config/
COPY ./package.json ./
COPY ./package-lock.json ./
RUN npm install
COPY . .
CMD [ "npm", "start" ]
The docker-compose.yml for this setup is
version: '3.8'
services:
  client:
    container_name: client
    depends_on:
      - server
    stdin_open: true
    environment:
      - CHOKIDAR_USEPOLLING=true
      - HTTPS=true
      - SSL_CRT_FILE=/etc/nginx/app.crt
      - SSL_KEY_FILE=/etc/nginx/app.key
    build:
      dockerfile: Dockerfile
      context: ./client
    expose:
      - "8000"
      - "3000"
    ports:
      - "3000:443"
      - "8000:80"
    volumes:
      - ./client:/app
      - /app/node_modules
      - /etc/nginx
    networks:
      - internal-network
  server:
    container_name: server
    build:
      dockerfile: Dockerfile
      context: "./server"
    expose:
      - "5000"
    ports:
      - "5000:5000"
    volumes:
      - /app/node_modules
      - ./server:/app
    networks:
      - internal-network
networks:
  internal-network:
    driver: bridge
And crucially, the NGINX default.conf is
worker_processes auto;
events {
    worker_connections 1024;
}
pid /var/run/nginx.pid;
http {
    include mime.types;
    upstream loadbalancer {
        server server:5000 weight=3;
    }
    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;
        port_in_redirect off;
        absolute_redirect off;
        return 301 https://$host$request_uri;
    }
    server {
        listen [::]:443 ssl;
        listen 443 ssl;
        server_name example.app* example.co* example.uksouth.azurecontainer.io* localhost*;
        error_page 497 https://$host:$server_port$request_uri;
        error_log /var/log/nginx/client-proxy-error.log;
        access_log /var/log/nginx/client-proxy-access.log;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:AES128:AES256:RC4-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK;
        ssl_prefer_server_ciphers on;
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 24h;
        keepalive_timeout 300;
        add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains';
        ssl_certificate /etc/nginx/app.crt;
        ssl_certificate_key /etc/nginx/app.key;
        root /usr/share/nginx/html;
        index index.html index.htm index.nginx-debian.html;
        location / {
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
            try_files $uri $uri/ /index.html;
        }
        location /tours {
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_pass http://loadbalancer;
        }
    }
}
with this configuration I have two problems:
By running docker-compose up -d, this setup builds and deploys two Docker containers locally. When I use https://localhost:3000/id, it works and the data is retrieved and shown in the browser correctly. When I type http://localhost:3000/id, it gets redirected to http://localhost:443/id, which does not work. I have attempted to use the NGINX directives port_in_redirect off; and absolute_redirect off;, but this has not helped. How can I make sure that the redirect does not edit the port number? (This is likely not going to be an issue in production, where the port numbers are not used.)
The bigger problem: the deployment to Azure is done using a docker context and running docker-compose -f ./docker-compose-azure.yml up. This runs and creates two Docker containers and a side-car process. The docker-compose-azure.yml file is
version: '3.8'
services:
  client:
    image: dev.azurecr.io/example-client
    depends_on:
      - server
    stdin_open: true
    environment:
      - CHOKIDAR_USEPOLLING=true
      - HTTPS=true
      - SSL_CRT_FILE=/etc/nginx/app.crt
      - SSL_KEY_FILE=/etc/nginx/app.key
    restart: unless-stopped
    domainname: "example-dev"
    expose:
      - "3000"
    ports:
      - target: 3000
        #published: 3000
        protocol: tcp
        mode: host
    networks:
      - internal-network
  server:
    image: dev.azurecr.io/example-server
    restart: unless-stopped
    ports:
      - "5000:5000"
    networks:
      - internal-network
networks:
  internal-network:
    driver: bridge
If I don't use HTTPS and a simple reverse proxy, the two issues outlined above go away. But with the configuration above, calls to the Azure FQDN/URL fail: HTTPS requests time out with "ERR_CONNECTION_TIMED_OUT", and over HTTP the site cannot be found. What am I doing wrong here?
Thanks for your time.
I think Jan Garaj's answer has touched upon all the important bits. Here is my take, trying to give a targeted answer.
HTTP to HTTPS redirect
Currently the return 301 statement is using the $host variable, which only holds the hostname and not the port information. To capture both, you can use the $http_host variable instead. (source)
server {
    listen [::]:80;
    # 307 to preserve POST data
    return 307 https://$http_host$request_uri;
}
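A quick way to check the fix (a hypothetical local test against the compose file earlier in the question, where HTTP is published as 8000:80):

    curl -I http://localhost:8000/id

The Location header of the 307 response should now read https://localhost:8000/id, with the port preserved, instead of falling back to port 443.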
Problems with the Azure config
In the Azure config, you have this bit:
ports:
  - target: 3000
    #published: 3000
    protocol: tcp
    mode: host
which identifies 3000 as the internal client port that listens to the requests. But you have to remember that you have an NGINX proxy inside that only listens on ports 80 and 443 (the server blocks in the Nginx config). So this is the reason you get the ERR_CONNECTION_TIMED_OUT error: the requests are sent to port 3000, where nothing is listening.
As you want an HTTPS deployment, you can set this to 443 and Nginx will take care of the request.
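So the corrected mapping, a sketch assuming the same compose file, points Azure at the port Nginx actually listens on:

    ports:
      - target: 443
        protocol: tcp
        mode: host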
Enabling HTTP redirect on Azure
The final bit is to configure the Azure deployment such that when an HTTP request is made to your URL, it gets redirected to the HTTPS counterpart. We already have the NGINX redirect block for port 80.
BUT, it will not help. As we specify the target to be 443 inside the container, the HTTP request will try to hit 443 and get refused. This article also mentions the same towards the end:
Use your browser to navigate to the public IP address of the container group. The IP address shown in this example is 52.157.22.76, so the URL is https://52.157.22.76. You must use HTTPS to see the running application, because of the Nginx server configuration. Attempts to connect over HTTP fail.
This could be solved if it were possible to add another port, port 80, to the Azure config:
ports:
  - port: 443
    protocol: TCP
  - port: 80
    protocol: TCP
I am not sure if Azure allows this, but if it does, then that's the final solution.
I think you need to check/update the Nginx configuration file properly and also make sure the SSL certificate files are available.
# http block would be
server {
    listen 80 default_server;
    return 301 https://$server_name$request_uri;
}
and in the https server block, you need to update the location block:
location /tours {
    proxy_pass http://server:5000;
    proxy_set_header Connection "";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
}
location / {
    try_files $uri $uri/ /index.html;
}
Updated
Your Nginx config file would be
worker_processes auto;
events {
    worker_connections 1024;
}
pid /var/run/nginx.pid;
http {
    include mime.types;
    server {
        listen [::]:443 ssl;
        listen 443 ssl;
        server_name my-redirected-domain.com my-azure-domain.io localhost;
        access_log /var/log/nginx/client-proxy.log;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:AES128:AES256:RC4-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK;
        ssl_prefer_server_ciphers on;
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 24h;
        keepalive_timeout 300;
        add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains';
        ssl_certificate /etc/nginx/viewform.app.crt;
        ssl_certificate_key /etc/nginx/viewform.app.key;
        root /usr/share/nginx/html;
        index index.html index.htm index.nginx-debian.html;
        location / {
            try_files $uri $uri/ /index.html;
        }
        location /tours {
            proxy_pass http://server:5000;
            proxy_set_header Connection "";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
    server {
        listen 80 default_server;
        return 301 https://$server_name$request_uri;
    }
}
Use port 443 everywhere to avoid any confusion with port remapping (that can be an advanced setup):
1.) Define the client container to be running on port 443:
version: '3.8'
services:
  client:
    ...
    ports:
      - port: 443
        protocol: TCP
2.) Define Nginx to be running on port 443 with a proper TLS setup, as you have in your updated nginx.conf.
Deploy and open https://<public IP> (you will very likely need to add a security exception in the browser).
BTW: Azure has quite a good article about Nginx with TLS (though a more advanced setup is used):
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-container-group-ssl
IMHO a better redirect from http to https is:
server {
    listen [::]:80;
    return 301 https://$host$request_uri;
}

Creating a reverse proxy with NGINX. What am I missing?

Help Debugging
I have been trying to create a reverse proxy with NGINX. For now I'm just trying to get it to redirect traffic on my local network. I think I'm close, but I'm stuck. Any advice is appreciated!
The expected behavior is that a request to http://api.dev.tagnoo.com routes traffic to one container, while http://app.dev.tagnoo.com routes traffic to another.
The actual behavior is that I can't access anything despite my containers running and Nginx seeming to be working. I have no idea how to debug this.
Recreating My Pain
I spin up containers with the following commands:
docker-compose pull --include-deps $@
docker-compose up -d --remove-orphans --build $@
My docker-compose.yaml file looks like this
services:
  lb:
    image: nginx:1.19.7-alpine
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./src/nginx.conf:/etc/nginx/nginx.conf
      - ./src/fullchain.pem:/etc/ssl/private/fullchain.pem
      - ./src/privkey.pem:/etc/ssl/private/privkey.pem
    networks:
      default:
        aliases:
          - api.dev.tagnoo.com
          - app.dev.tagnoo.com
          - dev.tagnoo.com
  postgres:
    image: postgres:13.3-alpine
    environment:
      POSTGRES_PASSWORD: postgres
    ports:
      - 5432:5432
    volumes:
      - pg-data:/var/lib/postgresql/data
  api: &api
    image: 410ventures/tagnoo-api:latest
    environment: &api_environment
      TEST_VAR: 'test123'
      VERSION: development
      WATCH: 1
  api-test:
    <<: *api
    command: ["true"]
    environment:
      <<: *api_environment
      TAGNOO_API_URL: http://localhost
      POSTGRES_URL: pg://postgres:postgres@postgres/tagnooTest
      REDIS_KEY_PREFIX: 'api-test:'
  api-app-test:
    <<: *api
    command: ["true"]
    environment:
      <<: *api_environment
      TAGNOO_API_URL: http://api-app-test
      POSTGRES_URL: pg://postgres:postgres@postgres/tagnooAppTest
      WATCH: 1
  app: &app
    image: 410ventures/tagnoo-app:latest
    environment: &app_environment
      VERSION: development
      WATCH: 1
  app-build:
    <<: *app
    command: ["true"]
  app-livereload:
    <<: *app
    command: ["true"]
  app-test:
    <<: *app
    command: ["true"]
    environment:
      <<: *app_environment
      TAGNOO_APP_URL: http://localhost
      TAGNOO_API_URL: http://api-app-test
volumes:
  pg-data:
This works as expected:
[screenshot: container info]
My nginx.conf file looks like this
events {}

http {
    server_tokens off;

    map $http_upgrade $connection_upgrade {
        ''      close;
        default upgrade;
    }

    proxy_http_version 1.1;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header Host $host;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;

    ssl_certificate /etc/ssl/private/fullchain.pem;
    ssl_certificate_key /etc/ssl/private/privkey.pem;

    proxy_read_timeout 24h;

    # proxy_pass directives are passed hosts via a $proxy_pass_host variable to
    # allow nginx to start up before the hosts are actually available. Putting the
    # hosts directly in the proxy_pass directive will fail start up unless all
    # hosts are available. A resolver is required to use variables in proxy_pass
    # directives, so use the docker internal DNS IP here.
    resolver 127.0.0.11;

    server {
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl http2;
        server_name app.dev.tagnoo.com;
        location /livereload {
            set $proxy_pass_host app-livereload:35729;
            proxy_pass http://$proxy_pass_host;
        }
        location / {
            set $proxy_pass_host app;
            proxy_pass http://$proxy_pass_host;
        }
    }

    server {
        listen 443 ssl http2;
        server_name api.dev.tagnoo.com;
        location / {
            set $proxy_pass_host api;
            proxy_pass http://$proxy_pass_host;
        }
    }
}
I have privkey.pem and fullchain.pem files in my root directory. I just self-signed them using openssl, but I assume that should still work on my local network if I ignore the SSL warnings.
What I've tried
I've tried accessing the containers (to no avail) through:
http://api.dev.tagnoo.com
https://api.dev.tagnoo.com/healthz (an endpoint I use to test connection)
https://api.dev.tagnoo.com
localhost:80
0.0.0.0:80
Here are the logs for the responses I get (in no particular order)
[screenshot: docker compose logs for lb]
I haven't set up any DNS records for my domain, tagnoo.com, but I don't think that should matter because this is just a local environment at the moment. But I'm not sure if that's true.
I can't find more information about debugging NGINX at this point.
Summary
I'm mainly concerned that my NGINX config file is not doing what it should. I'm also not sure if my docker-compose file is set up correctly for the reverse-proxy.
My containers are running, but the reverse proxy to them is broken somehow. Requests fail with a 301/302 or with getaddrinfo ENOTFOUND api.dev.tagnoo.com whenever I attempt to access them.
Here are some questions I have on the matter:
Is there anything else I need to do to setup this reverse proxy? Am I missing a step?
Are the fullchain.pem and privkey.pem files the reason NGINX is failing? If so - how do I create these?
Is docker-compose configured correctly?
How can I debug this further?
Any advice/tips would be greatly appreciated!
The issue was that I had not yet set up A and AAAA records to resolve the *.dev subdomain to localhost. I added those and created a valid (not self-signed) certificate for the host machine.
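For a purely local setup, a lighter-weight alternative to public A/AAAA records is mapping the names to loopback in /etc/hosts (a sketch using the hostnames from the question):

    127.0.0.1 dev.tagnoo.com api.dev.tagnoo.com app.dev.tagnoo.com

The self-signed certificate will still produce browser warnings, but the getaddrinfo ENOTFOUND failures disappear once the names resolve.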

Why I receive: 502 Bad Gateway?

I have this Dockerfile I use to deploy my nodejs app together with nginx:
#Create our image from Node
FROM node:latest as builder
MAINTAINER Cristi Boariu <cristiboariu@gmail.com>
# use changes to package.json to force Docker not to use the cache
# when we change our application's nodejs dependencies:
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /opt/app && cp -a /tmp/node_modules /opt/app/
# From here we load our application's code in, therefore the previous docker
# "layer" thats been cached will be used if possible
WORKDIR /opt/app
ADD . /opt/app
EXPOSE 8080
CMD npm start
### STAGE 2: Setup ###
FROM nginx:1.13.3-alpine
## Copy our default nginx config
RUN mkdir /etc/nginx/ssl
COPY nginx/default.conf /etc/nginx/conf.d/
COPY nginx/star_zuumapp_com.chained.crt /etc/nginx/ssl/
COPY nginx/star_zuumapp_com.key /etc/nginx/ssl/
RUN cd /etc/nginx && chmod -R 600 ssl/
RUN mkdir -p /opt/app
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
and this is my nginx file:
upstream api-server {
    server 127.0.0.1:8080;
}
server {
    listen 80;
    server_name example.com www.example.com;
    access_log /var/log/nginx/dev.log;
    error_log /var/log/nginx/dev.error.log debug;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_pass http://api-server;
        proxy_redirect off;
    }
}
server {
    listen 443 default ssl;
    server_name example.com www.example.com;
    ssl on;
    ssl_certificate /etc/nginx/ssl/star_example_com.chained.crt;
    ssl_certificate_key /etc/nginx/ssl/star_example_com.key;
    server_tokens off;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
    ssl_prefer_server_ciphers on;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://api-server;
        proxy_redirect off;
    }
}
I already spent a few hours debugging this without success.
Basically, I receive:
502 Bad Gateway
when trying to test it locally on:
https://localhost/docs/#
From docker logs:
172.17.0.1 - - [04/May/2018:05:35:36 +0000] "GET /docs/ HTTP/1.1" 502 166 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_4) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/11.1 Safari/605.1.15" "-"
2018/05/04 05:35:36 [error] 7#7: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.17.0.1, server: example.com, request: "GET /docs/ HTTP/1.1", upstream: "http://127.0.0.1:8080/docs/", host: "localhost"
Can somebody help please?
You should configure:
server 172.17.42.1:8080;
(172.17.42.1 is the gateway IP address of the Docker bridge)
or
server app_ipaddress_container:8080;
in the nginx.conf file, because port 8080 is listened on by the host, not by the nginx container.
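A more portable variant, assuming both containers are started from one compose file, is to address the app by its service name instead of an IP (here app is a hypothetical service name for the node container):

    upstream api-server {
        # Docker's embedded DNS resolves compose service names
        server app:8080;
    }

This avoids hard-coding 172.17.42.1, which can differ between Docker versions and hosts.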

Using Nginx, node-http-proxy to mask IP addresses

First of all, I'd like to apologize for the long post!
I'm almost close to figuring everything out! What I want to do is to use node-http-proxy to mask a series of dynamic IPs that I get from a MySQL database. I do this by redirecting the subdomains to node-http-proxy and parsing it from there. I was able to do this locally without any problems.
Remotely, it's behind an Nginx web server with HTTPS enabled (I have a wildcard certificate issued through Let's Encrypt, and a Comodo SSL for the domain). I managed to configure it so it passes requests to node-http-proxy without problems. The only problem is that the latter is giving me:
The error is { Error: connect ECONNREFUSED 127.0.0.1:80
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1174:14)
errno: 'ECONNREFUSED',
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 80 }
Whenever I set:
proxy.web(req, res, { target, ws: true })
And I don't know if the problem is the remote address (highly unlikely, since I'm able to connect through a secondary device) or whether I have misconfigured nginx (highly likely). There's also the possibility that it is clashing with Nginx, which is listening on port 80. But I don't know why node-http-proxy would connect through port 80.
Some additional info:
There's a Ruby on Rails app running side-by-side as well.
Node-http-proxy, nginx, and Ruby on Rails each run in their own Docker container. I don't think it's a problem with Docker, since I was able to test this locally without any problems.
Here's my current nginx.conf (I have replaced my domain name with example.co, for security reasons).
The server_name "~^\d+\.example\.co$"; block is where I want to redirect to node-http-proxy, whereas example.co is where the Ruby on Rails application lives.
# https://codepany.com/blog/rails-5-and-docker-puma-nginx/
# This is the port the app is currently exposing.
# Please, check this: https://gist.github.com/bradmontgomery/6487319#gistcomment-1559180
upstream puma_example_docker_app {
    server app:5000;
}
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    # Redirect all HTTP requests to HTTPS with a 301 Moved Permanently response.
    # Enable once you solve wildcard subdomain issue.
    return 301 https://$host$request_uri;
}
server {
    server_name "~^\d+\.example\.co$";
    # listen 80;
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
    # Created by Certbot
    ssl_certificate /etc/letsencrypt/live/example.co/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.co/privkey.pem;
    # include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
    # ssl_certificate /etc/ssl/certs/ssl-bundle.crt;
    # ssl_certificate_key /etc/ssl/private/example.co.key;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;
    # Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits
    # This is generated by ourselves.
    # ssl_dhparam /etc/ssl/certs/dhparam.pem;
    # intermediate configuration. tweak to your needs.
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
    ssl_prefer_server_ciphers on;
    # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
    add_header Strict-Transport-Security max-age=15768000;
    # OCSP Stapling ---
    # fetch OCSP records from URL in ssl_certificate and cache them
    ssl_stapling on;
    ssl_stapling_verify on;
    ## verify chain of trust of OCSP response using Root CA and Intermediate certs
    ssl_trusted_certificate /etc/ssl/certs/trusted.crt;
    location / {
        # https://www.digitalocean.com/community/questions/error-too-many-redirect-on-nginx
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://ipmask_docker_app;
        # limit_req zone=one;
        access_log /var/www/example/log/nginx.access.log;
        error_log /var/www/example/log/nginx.error.log;
    }
}
# SSL configuration was obtained through Mozilla's
# https://mozilla.github.io/server-side-tls/ssl-config-generator/
server {
    server_name localhost example.co www.example.co; #puma_example_docker_app;
    # listen 80;
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
    # Created by Certbot
    # ssl_certificate /etc/letsencrypt/live/example.co/fullchain.pem;
    # ssl_certificate_key /etc/letsencrypt/live/example.co/privkey.pem;
    # include /etc/letsencrypt/options-ssl-nginx.conf;
    # ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
    ssl_certificate /etc/ssl/certs/ssl-bundle.crt;
    ssl_certificate_key /etc/ssl/private/example.co.key;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;
    # Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits
    # This is generated by ourselves.
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    # intermediate configuration. tweak to your needs.
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
    ssl_prefer_server_ciphers on;
    # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
    add_header Strict-Transport-Security max-age=15768000;
    # OCSP Stapling ---
    # fetch OCSP records from URL in ssl_certificate and cache them
    ssl_stapling on;
    ssl_stapling_verify on;
    ## verify chain of trust of OCSP response using Root CA and Intermediate certs
    ssl_trusted_certificate /etc/ssl/certs/trusted.crt;
    # resolver 127.0.0.1;
    # https://support.comodo.com/index.php?/Knowledgebase/Article/View/1091/37/certificate-installation--nginx
    # The above was generated through Mozilla's SSL Config Generator
    # https://mozilla.github.io/server-side-tls/ssl-config-generator/
    # This is important for Rails to accept the headers, otherwise it won't work:
    # AKA. => HTTP_AUTHORIZATION_HEADER Will not work!
    underscores_in_headers on;
    client_max_body_size 4G;
    keepalive_timeout 10;
    error_page 500 502 504 /500.html;
    error_page 503 @503;
    root /var/www/example/public;
    try_files $uri/index.html $uri @puma_example_docker_app;
    # This is a new configuration and needs to be tested.
    # Final slashes are critical
    # https://stackoverflow.com/a/47658830/1057052
    location /kibana/ {
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;
        #rewrite ^/kibanalogs/(.*)$ /$1 break;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://kibana:5601/;
    }
    location @puma_example_docker_app {
        # https://www.digitalocean.com/community/questions/error-too-many-redirect-on-nginx
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://puma_example_docker_app;
        # limit_req zone=one;
        access_log /var/www/example/log/nginx.access.log;
        error_log /var/www/example/log/nginx.error.log;
    }
    location ~ ^/(assets|images|javascripts|stylesheets)/ {
        try_files $uri @rails;
        access_log off;
        gzip_static on;
        # to serve pre-gzipped version
        expires max;
        add_header Cache-Control public;
        add_header Last-Modified "";
        add_header ETag "";
        break;
    }
    location = /50x.html {
        root html;
    }
    location = /404.html {
        root html;
    }
    location @503 {
        error_page 405 = /system/maintenance.html;
        if (-f $document_root/system/maintenance.html) {
            rewrite ^(.*)$ /system/maintenance.html break;
        }
        rewrite ^(.*)$ /503.html break;
    }
    if ($request_method !~ ^(GET|HEAD|PUT|PATCH|POST|DELETE|OPTIONS)$ ){
        return 405;
    }
    if (-f $document_root/system/maintenance.html) {
        return 503;
    }
    location ~ \.(php|html)$ {
        return 405;
    }
}
Current docker-compose file:
# This is a docker compose file that will pull from the private
# repo and will use all the images.
# This will be an equivalent for production.
version: '3.2'
services:
  # No need for the database in production, since it will be connecting to one
  # Use this while you solve Database problems
  app:
    image: myrepo/rails:latest
    restart: always
    environment:
      RAILS_ENV: production
      # What this is going to do is that all the logging is going to be printed into the console.
      # Use this with caution as it can become very verbose and hard to read.
      # This can then be read by using docker-compose logs app.
      RAILS_LOG_TO_STDOUT: 'true'
      # RAILS_SERVE_STATIC_FILES: 'true'
    # The first command, the remove part, what it does is that it eliminates a file that
    # tells rails and puma that an instance is running. This was causing issues,
    # https://github.com/docker/compose/issues/1393
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -e production -p 5000 -b '0.0.0.0'"
    # volumes:
    #   - /var/www/cprint
    ports:
      - "5000:5000"
    expose:
      - "5000"
    networks:
      - elk
    links:
      - logstash
  # Uses Nginx as a web server (Access everything through http://localhost)
  # https://stackoverflow.com/questions/30652299/having-docker-access-external-files
  web:
    image: myrepo/nginx:latest
    depends_on:
      - elasticsearch
      - kibana
      - app
      - ipmask
    restart: always
    volumes:
      # https://stackoverflow.com/a/48800695/1057052
      # - "/etc/ssl/:/etc/ssl/"
      - type: bind
        source: /etc/ssl/certs
        target: /etc/ssl/certs
      - type: bind
        source: /etc/ssl/private/
        target: /etc/ssl/private
      - type: bind
        source: /etc/nginx/.htpasswd
        target: /etc/nginx/.htpasswd
      - type: bind
        source: /etc/letsencrypt/
        target: /etc/letsencrypt/
    ports:
      - "80:80"
      - "443:443"
    networks:
      - elk
      - nginx
    links:
      - elasticsearch
      - kibana
  # Defining the ELK Stack!
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.3
    container_name: elasticsearch
    networks:
      - elk
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - elasticsearch:/usr/share/elasticsearch/data
      # - ./elk/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - 9200:9200
  logstash:
    image: docker.elastic.co/logstash/logstash:6.2.3
    container_name: logstash
    volumes:
      - ./elk/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      # This is the most important part of the configuration
      # This will allow Rails to connect to it.
      # See application.rb for the configuration!
      - ./elk/logstash/pipeline/logstash.conf:/etc/logstash/conf.d/logstash.conf
    command: logstash -f /etc/logstash/conf.d/logstash.conf
    ports:
      - "5228:5228"
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    networks:
      - elk
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana:6.2.3
    volumes:
      - ./elk/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - "5601:5601"
    networks:
      - elk
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
  ipmask:
    image: myrepo/proxy:latest
    command: "npm start"
    restart: always
    environment:
      - "NODE_ENV=production"
    expose:
      - "5050"
    ports:
      - "4430:80"
    links:
      - app
    networks:
      - nginx
# Volumes are the recommended storage mechanism of Docker.
volumes:
  elasticsearch:
    driver: local
  rails:
    driver: local
networks:
  elk:
    driver: bridge
  nginx:
    driver: bridge
Thank you very much!
Waaaaaaitttt. There was no problem with the code!
The problem was that I was trying to pass a plain IP address without prepending http:// to it! By prepending http://, everything is working!!
Example:
I was doing:
proxy.web(req, res, { target: '128.29.41.1', ws: true })
When in fact this was the answer:
proxy.web(req, res, { target: 'http://128.29.41.1', ws: true })
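Since the targets come out of a MySQL database, it may be worth normalizing them once before handing them to the proxy. A hedged sketch (buildTarget and row.ip are hypothetical names, not part of node-http-proxy):

    // Prepend a scheme only when the stored value is a bare IP or hostname
    function buildTarget(addr) {
      return /^https?:\/\//.test(addr) ? addr : 'http://' + addr;
    }

    proxy.web(req, res, { target: buildTarget(row.ip), ws: true });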
