I have an Ingress resource in my Kubernetes cluster that looks like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my_ingress_service_name
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: 70m
spec:
  rules:
  - host: some.host.com
    http:
      paths:
      - path: /api/
        pathType: Prefix
        backend:
          service:
            name: gateway
            port:
              number: 4156
      - path: /
        pathType: Prefix
        backend:
          service:
            name: testpartner
            port:
              number: 8090
  - host: someother.host.com
    http:
      paths:
      - path: /api/
        pathType: Prefix
        backend:
          service:
            name: gateway
            port:
              number: 4156
      - path: /
        pathType: Prefix
        backend:
          service:
            name: testpartner
            port:
              number: 8090
When I navigate to either of the hosts (this is hosted in Azure Kubernetes Service, with IP addresses and an A record pointing to my cluster), I can successfully see my web application. However, when I try to go to an API route such as
some.host.com/api/someApiController
or
someother.host.com/api/someApiController
I get a 404 not found error. Here is what is returned in the logs:
10.240.0.4 - - [13/Jul/2022:10:22:51 +0000] "GET /api/request/partner HTTP/1.1" 404 0 "-" "PostmanRuntime/7.28.4" 401 0.002 [ingress-basic-gateway-4156] [] 10.240.0.65:4156 0 0.004 404 7a315bb0a56147e39ef692000dc0adfb
To be clear, it is intentional that different hosts point to the same Kubernetes service backed by the same pods, so both hosts are served by the same web container; their APIs, however, are served by a separate gateway container, so two containers in total.
If it is helpful, here is the nginx config file for the web container (testpartner):
server {
listen 8090;
sendfile on;
default_type application/octet-stream;
client_max_body_size 50M;
gzip on;
gzip_http_version 1.1;
gzip_disable "MSIE [1-6]\.";
gzip_min_length 1100;
gzip_vary on;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
gzip_comp_level 9;
root /usr/share/nginx/html;
location / {
try_files $uri $uri/ /index.html =404;
}
}
What am I doing wrong? I have also verified that the gateway is in fact listening on the correct port.
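For what it's worth, the log line above suggests the routing itself worked: the request reached the gateway backend (10.240.0.65:4156) and the 404 came back from the gateway rather than from the ingress. If the gateway only serves its routes without the /api prefix, one possible fix is a rewrite that strips the prefix before proxying. This is only a sketch under that assumption; because rewrite-target applies to every path of the Ingress it is set on, the /api rules would go into a separate Ingress object (here called gateway-api, a made-up name) so the / routes to testpartner keep working unchanged:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gateway-api
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    # Strip the /api prefix before forwarding, assuming the gateway serves
    # /someApiController rather than /api/someApiController.
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: some.host.com
    http:
      paths:
      - path: /api(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: gateway
            port:
              number: 4156

If the gateway does expect the /api prefix, the 404 would instead point at a missing route inside the gateway container, which can be checked by port-forwarding the service and calling it directly.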
What I'm trying to do is deploy a dockerized monorepo project (using Nx as the monorepo framework) with a NestJS + React + MySQL + nginx stack on a VPS. I want the nginx proxy to listen on the host's port 88 (another stack already uses port 80; it's an old stack I do not dare touch). The OS of the VPS is CentOS 7.
I'll spare most of the details of the builds (Dockerfiles), but know that they work: everything runs fine in my local environment (mostly because I don't use nginx-proxy for local development), so I know the issue is either my Docker configuration (I use docker-compose) or the host's networking.
Here's a 'bird's eye view' of the stack:
A react-frontend container running a React app (via nx serve react-frontend) on port 4200 in the container, with port 4200 exposed to the host
A backend-api container running a Node.js app (using a Node.js entrypoint) on port 3333 in the container, with that port exposed to the host
A MySQL container running a MySQL server on port 3306 in the container, exposed on port 3307 of the host
An nginx-proxy container using the jwilder/nginx-proxy Docker image (I also tried the nginxproxy/nginx-proxy image), listening on port 88 of the host and proxying requests to the react-frontend container (this is the part that I'm failing at).
So here's my "compose-prod.yml" docker-compose file:
version: "3.7"
networks:
corp:
driver: bridge
nginx-proxy:
external:
name: nginx-proxy
volumes:
backend-db-volume:
driver: local
services:
nginx-proxy:
image: jwilder/nginx-proxy # also tried nginxproxy/nginx-proxy image
container_name: nginx-proxy
networks:
- corp
- nginx-proxy
environment:
HTTP_PORT: 88
ports:
- "88:88" # also tried "88:80" but that gives me "connection refused" in the browser
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
backend-db:
image: backend-db
hostname: backend-db
restart: unless-stopped
volumes:
- backend-db-volume:/var/lib/mysql
networks:
- corp
build:
context: ./apps/backend-db
dockerfile: ./Dockerfile
ports:
- 3307:3306
expose:
- 3306
backend-api:
container_name: backend-api
depends_on:
- backend-db
build:
context: ./
cache_from:
- base-image:nx-base
dockerfile: ./apps/backend-api/Dockerfile
args:
NODE_ENV: "production"
BUILD_FLAG: ""
image: backend-api:nx-dev
ports:
- "3333:3333"
environment:
NODE_ENV: "production"
PORT: 3333
[... other env configs ommitted, like DB variables, etc.]
networks:
- corp
restart: on-failure
react-frontend:
container_name: react-frontend
build:
context: ./
dockerfile: ./apps/react-frontend/Dockerfile
args:
NODE_ENV: "production"
BUILD_FLAG: ""
image: react-frontend:nx-dev
environment:
VIRTUAL_HOST: react-frontend # note that my domain is react-frontend.com, obfuscated ofc ... which I also tried using in VIRTUAL_HOST config
VIRTUAL_PORT: 4200
NGINX_PROXY_CONTAINER: nginx-proxy
NODE_ENV: "production"
[...other env configs ommitted]
ports:
- "4200:4200"
expose:
- 4200
networks:
- nginx-proxy
- corp
restart: on-failure
The nginx-proxy container automatically detects containers running with the VIRTUAL_HOST environment variable set and generates configs for them. Right now, the generated configuration, which I get with the "docker exec nginx-proxy cat /etc/nginx/conf.d/default.conf" command, is this:
# nginx-proxy version : 1.0.1-6-gc4ad18f
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
default $http_x_forwarded_proto;
'' $scheme;
}
# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
default $http_x_forwarded_port;
'' $server_port;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
default upgrade;
'' close;
}
# Apply fix for very long server names
server_names_hash_bucket_size 128;
# Default dhparam
ssl_dhparam /etc/nginx/dhparam/dhparam.pem;
# Set appropriate X-Forwarded-Ssl header based on $proxy_x_forwarded_proto
map $proxy_x_forwarded_proto $proxy_x_forwarded_ssl {
default off;
https on;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" '
'"$upstream_addr"';
access_log off;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
ssl_prefer_server_ciphers off;
error_log /dev/stderr;
resolver 127.0.0.11;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
proxy_set_header X-Original-URI $request_uri;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
server {
server_name _; # This is just an invalid value which will never trigger on a real hostname.
server_tokens off;
listen 88;
access_log /var/log/nginx/access.log vhost;
return 503;
}
# react-frontend
upstream react-frontend {
## Can be connected with "react-frontend_corp" network
# react-frontend
server <IP of react-frontend container on Docker network>:4200;
# Cannot connect to network 'nginx-proxy' of this container
# Cannot connect to network 'react-frontend_corp' of this container
## Can be connected with "nginx-proxy" network
# react-frontend
server <IP of react-frontend container on Docker network>:4200;
}
server {
server_name react-frontend;
listen 88 ;
access_log /var/log/nginx/access.log vhost;
location / {
proxy_pass http://react-frontend;
}
}
When I access "example.com:88" I get a "503 Service Temporarily Unavailable" page in my browser returned from nginx and I see this in nginx's access logs:
nginx-proxy | nginx.1 | example.com xx.yy.zz.ip - - [06/Jul/2022:16:48:12 +0000] "GET / HTTP/1.1" 503 592 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36" "-"
nginx-proxy | nginx.1 | example.com xx.yy.zz.ip - - [06/Jul/2022:16:48:12 +0000] "GET /favicon.ico HTTP/1.1" 503 592 "http://example.com:88/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36" "-"
I'm omitting the Dockerfiles since the nginx-proxy container isn't built (it's used as-is from the image) and all the builds work ... it's the deployment that gives me trouble.
Does anyone have a pointer on what I'm missing, or what I should check? This is a personal project, and even though I can get around as a devops person, Docker networking/deployment still baffles me sometimes.
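One observation before the edit below: in the generated config the only real vhost is server_name react-frontend, while the browser sends Host: example.com, so the request falls through to the catch-all server_name _ block and gets the 503. If that is what's happening, pointing VIRTUAL_HOST at the actual domain (and VIRTUAL_PORT at the container port) should make nginx-proxy emit a matching vhost. A sketch, with example.com standing in for the real domain:

  react-frontend:
    environment:
      # Must match the Host header the browser will actually send.
      VIRTUAL_HOST: example.com,www.example.com
      VIRTUAL_PORT: 4200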
EDIT: I'm adding the VPS (host) nginx vhost config here ... maybe I can proxy-pass using this configuration instead. How would I go about modifying this config so I can proxy requests to the "example" Docker container exposing port 4200 (instead of serving a root directory on the VPS)?
# configuration file /etc/nginx/conf.d/users/example.conf:
proxy_cache_path /var/cache/ea-nginx/proxy/example levels=1:2 keys_zone=example:10m inactive=60m;
#### main domain for example ##
server {
server_name example.com www.example.com mail.example.com;
listen 80;
listen [::]:80;
include conf.d/includes-optional/cloudflare.conf;
set $CPANEL_APACHE_PROXY_PASS $scheme://apache_backend_${scheme}_51_222_24_216;
# For includes:
set $CPANEL_APACHE_PROXY_IP 51.222.24.216;
set $CPANEL_APACHE_PROXY_SSL_IP 51.222.24.216;
set $CPANEL_PROXY_CACHE example;
set $CPANEL_SKIP_PROXY_CACHING 0;
listen 443 ssl;
listen [::]:443 ssl;
ssl_certificate /var/cpanel/ssl/apache_tls/example.com/combined;
ssl_certificate_key /var/cpanel/ssl/apache_tls/example.com/combined;
ssl_protocols TLSv1.2 TLSv1.3;
proxy_ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256;
proxy_ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256;
root /home/example/public_html;
location /cpanelwebcall {
include conf.d/includes-optional/cpanel-proxy.conf;
proxy_pass http://127.0.0.1:2082/cpanelwebcall;
}
location /Microsoft-Server-ActiveSync {
include conf.d/includes-optional/cpanel-proxy.conf;
proxy_pass http://127.0.0.1:2090/Microsoft-Server-ActiveSync;
}
location = /favicon.ico {
allow all;
log_not_found off;
access_log off;
include conf.d/includes-optional/cpanel-proxy.conf;
proxy_pass $CPANEL_APACHE_PROXY_PASS;
}
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
include conf.d/includes-optional/cpanel-proxy.conf;
proxy_pass $CPANEL_APACHE_PROXY_PASS;
}
location / {
proxy_cache $CPANEL_PROXY_CACHE;
proxy_no_cache $CPANEL_SKIP_PROXY_CACHING;
proxy_cache_bypass $CPANEL_SKIP_PROXY_CACHING;
proxy_cache_valid 200 301 302 60m;
proxy_cache_valid 404 1m;
proxy_cache_use_stale error timeout http_429 http_500 http_502 http_503 http_504;
proxy_cache_background_update on;
proxy_cache_revalidate on;
proxy_cache_min_uses 1;
proxy_cache_lock on;
include conf.d/includes-optional/cpanel-proxy.conf;
proxy_pass $CPANEL_APACHE_PROXY_PASS;
}
include conf.d/server-includes/*.conf;
include conf.d/users/example/*.conf;
include conf.d/users/example/example.com/*.conf;
}
server {
listen 80;
listen [::]:80;
listen 443 ssl;
listen [::]:443 ssl;
ssl_certificate /var/cpanel/ssl/apache_tls/example.com/combined;
ssl_certificate_key /var/cpanel/ssl/apache_tls/example.com/combined;
server_name cpanel.example.com cpcalendars.example.com cpcontacts.example.com webdisk.example.com webmail.example.com;
include conf.d/includes-optional/cloudflare.conf;
set $CPANEL_APACHE_PROXY_PASS $scheme://apache_backend_${scheme}_51_222_24_216;
# For includes:
set $CPANEL_APACHE_PROXY_IP 51.222.24.216;
set $CPANEL_APACHE_PROXY_SSL_IP 51.222.24.216;
location /.well-known/cpanel-dcv {
root /home/example/public_html;
disable_symlinks if_not_owner;
}
location /.well-known/pki-validation {
root /home/example/public_html;
disable_symlinks if_not_owner;
}
location /.well-known/acme-challenge {
root /home/example/public_html;
disable_symlinks if_not_owner;
}
location / {
# Force https for service subdomains
if ($scheme = http) {
return 301 https://$host$request_uri;
}
# no cache
proxy_cache off;
proxy_no_cache 1;
proxy_cache_bypass 1;
# pass to Apache
include conf.d/includes-optional/cpanel-proxy.conf;
proxy_pass $CPANEL_APACHE_PROXY_PASS;
}
}
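Regarding the EDIT above: since the react-frontend container publishes port 4200 on the host, the host nginx could proxy straight to that published port instead of to the Apache backend. A sketch of what the location / block in the main server block might become, assuming the 4200:4200 port mapping from the compose file stays as it is (the rest of the cPanel-generated config is left untouched):

location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    # react-frontend publishes 4200 on the host (ports: "4200:4200")
    proxy_pass http://127.0.0.1:4200;
}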
If you want to access your app using example.com or www.example.com, you have to set server_name example.com *.example.com;. When using docker-compose you can also reach the containers by their DNS names, in your case backend-api:3333 or react-frontend:4200. There are a few corrections to make in your configs.
server {
server_name _; # This is just an invalid value which will never trigger on a real hostname.
server_tokens off;
listen 88;
access_log /var/log/nginx/access.log vhost;
return 503;
}
# react-frontend
upstream frontends {
server react-frontend:4200;
server react-frontend:4200;
}
upstream backends {
server backend-api:3333;
server backend-api:3333;
}
server {
server_name 127.0.0.1 example.com *.example.com;
listen 88;
access_log /var/log/nginx/access.log vhost;
location / {
proxy_pass http://frontends;
}
# if required , only then use it otherwise remove it [for direct api calls]
location /api/v1 {
proxy_pass http://backends;
}
}
One can add more options or configs according to the requirements. We are seeing the default 503 page because of the server_name _ block in the snippet above. One can configure that too if needed, like the snippet below.
server {
server_name _; # This is just an invalid value which will never trigger on a real hostname.
server_tokens off;
listen 88;
access_log /var/log/nginx/access.log vhost;
return 403 "..Ops";
}
Not sure why two networks are required if you are running everything in one compose file.
If two networks are required, below is an example docker-compose file and nginx.conf.
docker-compose.yaml
version: "3.7"
networks:
corp:
driver: bridge
nginx-proxy:
external:
name: nginx-proxy
services:
nginx-proxy:
container_name: nginx-proxy
image: nginx
ports:
- 3210:80
networks:
- nginx-proxy
- corp
volumes:
- another-nginx.conf:/etc/nginx/conf.d/another-nginx.conf
react-frontend:
container_name: react-frontend
image: httpd
ports:
- 3211:80
networks:
- nginx-proxy
- corp
backend-x:
container_name: backend
image: nginx
ports:
- 3212:80
networks:
- corp
db-x:
container_name: db
image: httpd
ports:
- 3213:80
networks:
- corp
another-nginx.conf
server {
server_name _; # This is just an invalid value which will never trigger on a real hostname.
server_tokens off;
listen 88;
access_log /var/log/nginx/access.log;
return 503;
}
# react-frontend
upstream frontends {
server react-frontend:80;
}
upstream backends {
server backend:80;
}
server {
server_name 127.0.0.1 localhost example.com *.example.com;
listen 80;
access_log /var/log/nginx/access.log;
location / {
proxy_pass http://frontends;
}
# if required , only then use it otherwise remove it [for direct api calls]
location /api/v1 {
proxy_pass http://backends;
}
}
To create the nginx-proxy network if it is not present:
docker network create nginx-proxy
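One way to double-check that both the proxy and the frontend really ended up on that shared network (the generated config above even prints "Cannot connect to network ..." comments when they have not) is to list the attached containers:

docker network inspect nginx-proxy --format '{{range .Containers}}{{.Name}} {{end}}'

Both nginx-proxy and react-frontend should appear in the output.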
First of all, I'd like to apologize for the long post!
I'm very close to figuring everything out! What I want to do is use node-http-proxy to mask a series of dynamic IPs that I get from a MySQL database. I do this by redirecting the subdomains to node-http-proxy and parsing the subdomain there. I was able to do this locally without any problems.
Remotely, it's behind an nginx web server with HTTPS enabled (I have a wildcard certificate issued through Let's Encrypt, and a Comodo SSL certificate for the domain). I managed to configure it so that requests are passed to node-http-proxy without problems. The only problem is that the latter is giving me an error.
The error is { Error: connect ECONNREFUSED 127.0.0.1:80
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1174:14)
errno: 'ECONNREFUSED',
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 80 }
Whenever I set:
proxy.web(req, res, { target, ws: true })
And I don't know if the problem is the remote address (highly unlikely, since I'm able to connect through a secondary device) or whether I have misconfigured nginx (highly likely). There's also the possibility that it is clashing with nginx, which is listening on port 80. But I don't know why node-http-proxy would connect through port 80 in the first place.
Some additional info:
There's a Ruby on Rails app running side-by-side as well.
node-http-proxy, nginx, and Ruby on Rails each run in their own Docker container. I don't think the problem comes from Docker, since I was able to test this locally without any problems.
Here's my current nginx.conf (I have replaced my domain name with example.co for security reasons).
The server_name "~^\d+\.example\.co$"; is where I want it to redirect to node-http-proxy, whereas example.com is where a Ruby on Rails application lies.
# https://codepany.com/blog/rails-5-and-docker-puma-nginx/
# This is the port the app is currently exposing.
# Please, check this: https://gist.github.com/bradmontgomery/6487319#gistcomment-1559180
upstream puma_example_docker_app {
server app:5000;
}
server {
listen 80 default_server;
listen [::]:80 default_server;
# Redirect all HTTP requests to HTTPS with a 301 Moved Permanently response.
# Enable once you solve wildcard subdomain issue.
return 301 https://$host$request_uri;
}
server {
server_name "~^\d+\.example\.co$";
# listen 80;
listen 443 ssl http2;
listen [::]:443 ssl http2;
# certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
# Created by Certbot
ssl_certificate /etc/letsencrypt/live/example.co/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.co/privkey.pem;
# include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
# certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
# ssl_certificate /etc/ssl/certs/ssl-bundle.crt;
# ssl_certificate_key /etc/ssl/private/example.co.key;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;
# Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits
# This is generated by ourselves.
# ssl_dhparam /etc/ssl/certs/dhparam.pem;
# intermediate configuration. tweak to your needs.
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
ssl_prefer_server_ciphers on;
# HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
add_header Strict-Transport-Security max-age=15768000;
# OCSP Stapling ---
# fetch OCSP records from URL in ssl_certificate and cache them
ssl_stapling on;
ssl_stapling_verify on;
## verify chain of trust of OCSP response using Root CA and Intermediate certs
ssl_trusted_certificate /etc/ssl/certs/trusted.crt;
location / {
# https://www.digitalocean.com/community/questions/error-too-many-redirect-on-nginx
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://ipmask_docker_app;
# limit_req zone=one;
access_log /var/www/example/log/nginx.access.log;
error_log /var/www/example/log/nginx.error.log;
}
}
# SSL configuration was obtained through Mozilla's
# https://mozilla.github.io/server-side-tls/ssl-config-generator/
server {
server_name localhost example.co www.example.co; #puma_example_docker_app;
# listen 80;
listen 443 ssl http2;
listen [::]:443 ssl http2;
# certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
# Created by Certbot
# ssl_certificate /etc/letsencrypt/live/example.co/fullchain.pem;
#ssl_certificate_key /etc/letsencrypt/live/example.co/privkey.pem;
# include /etc/letsencrypt/options-ssl-nginx.conf;
# ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
# certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
ssl_certificate /etc/ssl/certs/ssl-bundle.crt;
ssl_certificate_key /etc/ssl/private/example.co.key;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;
# Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits
# This is generated by ourselves.
ssl_dhparam /etc/ssl/certs/dhparam.pem;
# intermediate configuration. tweak to your needs.
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
ssl_prefer_server_ciphers on;
# HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
add_header Strict-Transport-Security max-age=15768000;
# OCSP Stapling ---
# fetch OCSP records from URL in ssl_certificate and cache them
ssl_stapling on;
ssl_stapling_verify on;
## verify chain of trust of OCSP response using Root CA and Intermediate certs
ssl_trusted_certificate /etc/ssl/certs/trusted.crt;
# resolver 127.0.0.1;
# https://support.comodo.com/index.php?/Knowledgebase/Article/View/1091/37/certificate-installation--nginx
# The above was generated through Mozilla's SSL Config Generator
# https://mozilla.github.io/server-side-tls/ssl-config-generator/
# This is important for Rails to accept the headers, otherwise it won't work:
# AKA. => HTTP_AUTHORIZATION_HEADER Will not work!
underscores_in_headers on;
client_max_body_size 4G;
keepalive_timeout 10;
error_page 500 502 504 /500.html;
error_page 503 @503;
root /var/www/example/public;
try_files $uri/index.html $uri @puma_example_docker_app;
# This is a new configuration and needs to be tested.
# Final slashes are critical
# https://stackoverflow.com/a/47658830/1057052
location /kibana/ {
auth_basic "Restricted";
auth_basic_user_file /etc/nginx/.htpasswd;
#rewrite ^/kibanalogs/(.*)$ /$1 break;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://kibana:5601/;
}
location @puma_example_docker_app {
# https://www.digitalocean.com/community/questions/error-too-many-redirect-on-nginx
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://puma_example_docker_app;
# limit_req zone=one;
access_log /var/www/example/log/nginx.access.log;
error_log /var/www/example/log/nginx.error.log;
}
location ~ ^/(assets|images|javascripts|stylesheets)/ {
try_files $uri @rails;
access_log off;
gzip_static on;
# to serve pre-gzipped version
expires max;
add_header Cache-Control public;
add_header Last-Modified "";
add_header ETag "";
break;
}
location = /50x.html {
root html;
}
location = /404.html {
root html;
}
location @503 {
error_page 405 = /system/maintenance.html;
if (-f $document_root/system/maintenance.html) {
rewrite ^(.*)$ /system/maintenance.html break;
}
rewrite ^(.*)$ /503.html break;
}
if ($request_method !~ ^(GET|HEAD|PUT|PATCH|POST|DELETE|OPTIONS)$ ){
return 405;
}
if (-f $document_root/system/maintenance.html) {
return 503;
}
location ~ \.(php|html)$ {
return 405;
}
}
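A side note on this config: the subdomain server block proxies to http://ipmask_docker_app, but no upstream with that name appears in the snippet (only puma_example_docker_app is defined), so it is presumably defined in a part of the config that isn't shown. If it had to be added, it would look roughly like this; the port is an assumption based on the expose: "5050" entry of the ipmask service in the compose file below:

upstream ipmask_docker_app {
  # 'ipmask' is the compose service name; 5050 assumed from its expose entry
  server ipmask:5050;
}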
Current docker-compose file:
# This is a docker compose file that will pull from the private
# repo and will use all the images.
# This will be an equivalent for production.
version: '3.2'

services:
  # No need for the database in production, since it will be connecting to one
  # Use this while you solve Database problems
  app:
    image: myrepo/rails:latest
    restart: always
    environment:
      RAILS_ENV: production
      # What this is going to do is that all the logging is going to be printed into the console.
      # Use this with caution as it can become very verbose and hard to read.
      # This can then be read by using docker-compose logs app.
      RAILS_LOG_TO_STDOUT: 'true'
      # RAILS_SERVE_STATIC_FILES: 'true'
    # The first command, the remove part, what it does is that it eliminates a file that
    # tells rails and puma that an instance is running. This was causing issues,
    # https://github.com/docker/compose/issues/1393
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -e production -p 5000 -b '0.0.0.0'"
    # volumes:
    #   - /var/www/cprint
    ports:
      - "5000:5000"
    expose:
      - "5000"
    networks:
      - elk
    links:
      - logstash

  # Uses Nginx as a web server (Access everything through http://localhost)
  # https://stackoverflow.com/questions/30652299/having-docker-access-external-files
  #
  web:
    image: myrepo/nginx:latest
    depends_on:
      - elasticsearch
      - kibana
      - app
      - ipmask
    restart: always
    volumes:
      # https://stackoverflow.com/a/48800695/1057052
      # - "/etc/ssl/:/etc/ssl/"
      - type: bind
        source: /etc/ssl/certs
        target: /etc/ssl/certs
      - type: bind
        source: /etc/ssl/private/
        target: /etc/ssl/private
      - type: bind
        source: /etc/nginx/.htpasswd
        target: /etc/nginx/.htpasswd
      - type: bind
        source: /etc/letsencrypt/
        target: /etc/letsencrypt/
    ports:
      - "80:80"
      - "443:443"
    networks:
      - elk
      - nginx
    links:
      - elasticsearch
      - kibana

  # Defining the ELK Stack!
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.3
    container_name: elasticsearch
    networks:
      - elk
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - elasticsearch:/usr/share/elasticsearch/data
      # - ./elk/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - 9200:9200

  logstash:
    image: docker.elastic.co/logstash/logstash:6.2.3
    container_name: logstash
    volumes:
      - ./elk/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      # This is the most important part of the configuration
      # This will allow Rails to connect to it.
      # See application.rb for the configuration!
      - ./elk/logstash/pipeline/logstash.conf:/etc/logstash/conf.d/logstash.conf
    command: logstash -f /etc/logstash/conf.d/logstash.conf
    ports:
      - "5228:5228"
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    networks:
      - elk
    links:
      - elasticsearch
    depends_on:
      - elasticsearch

  kibana:
    image: docker.elastic.co/kibana/kibana:6.2.3
    volumes:
      - ./elk/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - "5601:5601"
    networks:
      - elk
    links:
      - elasticsearch
    depends_on:
      - elasticsearch

  ipmask:
    image: myrepo/proxy:latest
    command: "npm start"
    restart: always
    environment:
      - "NODE_ENV=production"
    expose:
      - "5050"
    ports:
      - "4430:80"
    links:
      - app
    networks:
      - nginx

# # Volumes are the recommended storage mechanism of Docker.
volumes:
  elasticsearch:
    driver: local
  rails:
    driver: local

networks:
  elk:
    driver: bridge
  nginx:
    driver: bridge
Thank you very much!
Waaaaaaitttt. There was no problem with the code!
The problem was that I was passing a bare IP address without prepending http:// to it! With http:// added, everything is working!
Example:
I was doing:
proxy.web(req, res, { target: '128.29.41.1', ws: true })
When in fact this was the answer:
proxy.web(req, res, { target: 'http://128.29.41.1', ws: true })
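In other words, http-proxy wants a full URL (or a parsed URL object) as the target; with a bare IP it cannot extract a hostname and port and ends up dialing the defaults, which would explain the ECONNREFUSED 127.0.0.1:80 above. A small sketch of building the target from the database row (the row fields here are hypothetical):

// row comes from the MySQL lookup for the requested subdomain (hypothetical fields)
const target = `http://${row.ip}:${row.port || 80}`;
proxy.web(req, res, { target, ws: true });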
My Node.js app is deployed on AWS Elastic Beanstalk (EB). I have already configured the HTTPS server and it is working fine. Now I need to redirect every non-HTTPS request to HTTPS with www. as a prefix, like this:
GET example.com => https://www.example.com
I'm using nginx, and my EB environment is a single instance without a load balancer in front of it.
I have created a config file in the .ebextensions folder with this code:
Resources:
  sslSecurityGroupIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: {"Fn::GetAtt" : ["AWSEBSecurityGroup", "GroupId"]}
      IpProtocol: tcp
      ToPort: 443
      FromPort: 443
      CidrIp: 0.0.0.0/0

files:
  /etc/nginx/conf.d/999_nginx.conf:
    mode: "000644"
    owner: root
    group: root
    content: |
      upstream nodejsserver {
        server 127.0.0.1:8081;
        keepalive 256;
      }

      # HTTP server
      server {
        listen 8080;
        server_name localhost;
        return 301 https://$host$request_uri;
      }

      # HTTPS server
      server {
        listen 443;
        server_name localhost;

        ssl on;
        ssl_certificate /etc/pki/tls/certs/server.crt;
        ssl_certificate_key /etc/pki/tls/certs/server.key;
        ssl_session_timeout 5m;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
        ssl_prefer_server_ciphers on;

        location / {
          proxy_pass http://nodejsserver;
          proxy_set_header Connection "";
          proxy_http_version 1.1;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto https;
        }
      }
  /etc/pki/tls/certs/server.crt:
    mode: "000400"
    owner: root
    group: root
    content: |
      -----BEGIN CERTIFICATE-----
      my crt
      -----END CERTIFICATE-----
  /etc/pki/tls/certs/server.key:
    mode: "000400"
    owner: root
    group: root
    content: |
      -----BEGIN RSA PRIVATE KEY-----
      my key
      -----END RSA PRIVATE KEY-----
  /etc/nginx/conf.d/gzip.conf:
    content: |
      gzip on;
      gzip_comp_level 9;
      gzip_http_version 1.0;
      gzip_types text/plain text/css image/png image/gif image/jpeg application/json application/javascript application/x-javascript text/javascript text/xml application/xml application/rss+xml application/atom+xml application/rdf+xml;
      gzip_proxied any;
      gzip_disable "msie6";

commands:
  00_enable_site:
    command: 'rm -f /etc/nginx/sites-enabled/*'
I'm sure AWS is taking my config into account, because the SSL part is working fine. But the HTTP block does not work: there is no redirect.
Maybe my problem is about overriding the original nginx config of EB; do you know how to achieve this?
Can you help me with that please? I've tried a lot of things.
Thank you
OK, found the issue: EB creates a default config file, /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf, which is listening on 8080. So your redirect isn't being picked up, as nginx uses the earlier-defined rule for 8080.
Here's a config file that I use that works. The file it generates will precede the default rule.
https://github.com/jozzhart/beanstalk-single-forced-ssl-nodejs-pm2/blob/master/.ebextensions/https-redirect.config
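For reference, the general shape of such an override is a conf.d file whose name sorts before EB's 00_elastic_beanstalk_proxy.conf. This is only a sketch modelled on the config in the question; the filename and the www handling are assumptions to adapt:

files:
  /etc/nginx/conf.d/000_http_redirect.conf:
    mode: "000644"
    owner: root
    group: root
    content: |
      # "000_" sorts before 00_elastic_beanstalk_proxy.conf, so this server
      # block is defined first for plain HTTP arriving on port 8080.
      server {
        listen 8080;
        server_name example.com;
        return 301 https://www.example.com$request_uri;
      }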
I'm currently creating an API for my website using Node.js and nginx. I've set up reverse proxies for each Node.js app I'll have running (API, main site, other stuff...).
However, when I try my API, every second request takes a very long time and sometimes times out.
NGINX.CONF
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes 24;
error_log /var/log/nginx/error.log;
#error_log /var/log/nginx/error.log notice;
#error_log /var/log/nginx/error.log info;
pid /var/run/nginx.pid;
events {
worker_connections 19000;
multi_accept on;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
#SSL performance tuning
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ECDHE-RSA-AES128-SHA384:AES256-SHA256:RC4:HIGH:!MD5:!aNULL:!eNULL:!NULL:!DH:!EDH:!AESGCM;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 10s;
add_header Strict-Transport-Security "max-age=31536000";
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 10;
gzip on;
gzip_disable "msie6";
gzip_min_length 1000;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain application/xml application/javascript text/css application/x-javascript;
#for mulyiple domains, www.codewolf.red, codewolf.red
server_names_hash_bucket_size 64;
# Load config files from the /etc/nginx/conf.d directory
# The default server is in conf.d/default.conf
include /etc/nginx/conf.d/*.conf;
}
ERROR.LOG
2014/10/27 14:26:46 [error] 6968#8992: *15 WSARecv() failed (10054: FormatMessage() error:(317)) while reading response header from upstream, client: ::1, server: localhost, request: "GET /api/ffd/users HTTP/1.1", upstream: "http://127.0.0.1:3000/ffd/users", host: "localhost"
2014/10/27 14:27:46 [error] 6968#8992: *15 upstream timed out (10060: FormatMessage() error:(317)) while connecting to upstream, client: ::1, server: localhost, request: "GET /api/ffd/users HTTP/1.1", upstream: "http://[::1]:3000/ffd/users", host: "localhost"
2014/10/27 14:39:31 [error] 6968#8992: *20 upstream timed out (10060: FormatMessage() error:(317)) while connecting to upstream, client: ::1, server: localhost, request: "GET /api/ffd/users HTTP/1.1", upstream: "http://[::1]:3000/ffd/users", host: "localhost"
2014/10/27 14:40:09 [notice] 5300#1352: signal process started
Any idea what's wrong?
It's been like this for a while, and it's killing me :(
Please help, it's ruining my development time :/
Adding this because this is what worked for me:
https://forum.nginx.org/read.php?15,239760,239760 seems to indicate that you can proxy_pass to 127.0.0.1 instead of localhost and the request goes through fine. (The error log above shows the upstream alternating between 127.0.0.1:3000 and [::1]:3000; localhost resolves to both addresses, so every second request likely goes to one the Node.js server isn't listening on.)
macbresch
This is a year old, but I wanted to point out that there is a workaround for that issue. As Cruz Fernandez wrote, you can set 127.0.0.1 instead of localhost in the proxy_pass directive. This prevents the 60 s delay on every second request. I'm using Windows 8.1 and nginx 1.9.5.
Cruz Fernandez Wrote:
you can use 127.0.0.1 (instead of localhost on the proxy_pass directive)
location /nodejsServer/ {
proxy_pass http://127.0.0.1:3000/;
}
I was able to fix this by using an upstream block, e.g.:
upstream nodejs_server {
server 192.168.0.67:8080; #ip to nodejs server
}
#server config
server {
location /nodejsServer/ { #http://localhost/nodejsServer/
proxy_pass http://nodejs_server;
}
}
reference: http://nginx.org/en/docs/http/ngx_http_upstream_module.html
I got the following nginx config:
server {
listen 80;
server_name domainName.com;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/rss+xml text/javascript image/svg+xml application/vnd.ms-fontobject application/x-font-ttf font/opentype;
access_log /var/log/nginx/logName.access.log;
location / {
proxy_pass http://127.0.0.1:9000/;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location ~ ^/(min/|images/|ckeditor/|img/|javascripts/|apple-touch-icon-ipad.png|apple-touch-icon-ipad3.png|apple-touch-icon-iphone.png|apple-touch-icon-iphone4.png|generated/|js/|css/|stylesheets/|robots.txt|humans.txt|favicon.ico) {
root /root/Dropbox/nodeApps/nodeJsProject/port/public;
access_log off;
expires max;
}
}
It is a proxy for a Node.js application on port 9000.
Is it possible to change this config so that nginx uses another proxy URL (on port 9001, for example) when it gets a 504 error?
I need this for the case when the Node.js server on port 9000 is down and needs several seconds to restart automatically; during those seconds nginx returns a 504 error for every request. I want nginx to "guess" that the Node.js site on port 9000 is down and use a reserve Node.js site on port 9001.
Use the upstream module.
upstream node {
server 127.0.0.1:9000;
server 127.0.0.1:9001 backup;
}
server {
...
proxy_pass http://node/;
...
}
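One note on this: by default nginx only retries the next (backup) server on connection errors and timeouts; to also fail over when the primary answers with a bad-gateway or gateway-timeout status, proxy_next_upstream can be extended and the timeouts shortened. A sketch along those lines, not tied to the exact config above:

upstream node {
    server 127.0.0.1:9000 max_fails=1 fail_timeout=5s;
    server 127.0.0.1:9001 backup;
}

server {
    ...
    location / {
        proxy_pass http://node/;
        # Also retry on 502/504 responses, not just connection errors and timeouts.
        proxy_next_upstream error timeout http_502 http_504;
        # Fail fast if the primary is unreachable while it restarts.
        proxy_connect_timeout 2s;
    }
    ...
}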