I have been trying for a long time to set up a reverse proxy with nginx that works with both Node.js over SSL on port 3000 and Apache over SSL on port 443. I have tried so many things that my conf files probably have tons of errors. My most recent attempt has this as /etc/apache2/sites-enabled/000-default.conf:
<VirtualHost *:443>
# The ServerName directive sets the request scheme, hostname and port that
# the server uses to identify itself. This is used when creating
# redirection URLs. In the context of virtual hosts, the ServerName
# specifies what hostname must appear in the request's Host: header to
# match this virtual host. For the default virtual host (this file) this
# value is not decisive as it is used as a last resort host regardless.
# However, you must set it for any further virtual host explicitly.
#ServerName www.example.com
ServerAdmin admin@test.com
DocumentRoot /var/www/html/public
SSLEngine on
SSLCertificateFile /var/www/certs/test_com.crt
SSLCertificateKeyFile /var/www/certs/test_com.key
# Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
# error, crit, alert, emerg.
# It is also possible to configure the loglevel for particular
# modules, e.g.
#LogLevel info ssl:warn
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
# For most configuration files from conf-available/, which are
# enabled or disabled at a global level, it is possible to
# include a line for only one particular virtual host. For example the
# following line enables the CGI configuration for this host only
# after it has been globally disabled with "a2disconf".
#Include conf-available/serve-cgi-bin.conf
</VirtualHost>
# vim: syntax=apache ts=4 sw=4 sts=4 sr noet
And my nginx configuration looks like this:
server {
listen 80;
access_log /var/log/nginx/secure.log;
error_log /var/log/nginx/secure.error.log;
server_name test.com;
ssl on;
ssl_certificate /var/www/certs/test_com.crt;
ssl_certificate_key /var/www/certs/test_com.key;
location / {
proxy_pass https://127.0.0.1:3000;
}
location /public {
proxy_pass https://127.0.0.1:443;
root /var/www/html/public; # this is where I keep all the HTML files
}
}
When I run netstat -tulpn I get:
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp6 0 0 :::22 :::* LISTEN -
tcp6 0 0 :::3000 :::* LISTEN -
udp 0 0 0.0.0.0:68 0.0.0.0:*
This is my first time setting up a server like this, so I would really appreciate it if someone could explain how reverse proxies are supposed to work with a Node.js server that uses webpages from an Apache instance over SSL.
I'm not sure how this works, but if I had to guess logically, I would want an nginx instance listening on port 80 that redirects all HTTP traffic to HTTPS. I would want Apache configured on port 443 so that I could browse to my CSS stylesheet at https://test.com/assets/css/stylesheet.css and use that as a path in my Node server.
I found this in my searches but was unable to correctly implement it: https://www.digitalocean.com/community/questions/how-to-setup-ssl-for-nginx-and-apache
I know I'm not explaining it very well, but I hope someone is able to understand what I need and help me out. Thanks!
Yes, this sounds confusing. It is very common to have static assets served directly from nginx and to reverse proxy all other requests to some backend (Node on 3000).
I am very confused by the need for Apache.
If you really need Apache, this is how you can do it.
First, you said you want nginx to listen on port 80 and just redirect all HTTP traffic to HTTPS. Configure your nginx like this:
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
return 301 https://$host$request_uri;
}
That's it for nginx.
Now for Apache you need mod_proxy: Apache will listen on 443, handle TLS termination, and proxy requests to Node listening on 3000.
<VirtualHost *:443>
# your config here...
ProxyPreserveHost On
ProxyPass / http://localhost:3000/
ProxyPassReverse / http://localhost:3000/
</VirtualHost>
https://httpd.apache.org/docs/2.4/mod/mod_proxy.html
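The ProxyPass lines above are the important part; the vhost still needs the usual TLS directives if Apache is terminating SSL, and mod_ssl/mod_proxy must be enabled. A rough, untested sketch, reusing the certificate paths from your existing config (the ServerName and the module commands are assumptions based on a Debian/Ubuntu layout):
# sudo a2enmod ssl proxy proxy_http && sudo systemctl restart apache2
<VirtualHost *:443>
    ServerName test.com
    SSLEngine on
    SSLCertificateFile /var/www/certs/test_com.crt
    SSLCertificateKeyFile /var/www/certs/test_com.key

    # hand every request to the Node app listening on 3000
    ProxyPreserveHost On
    ProxyPass / http://localhost:3000/
    ProxyPassReverse / http://localhost:3000/
</VirtualHost>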
I'm not sure why you want to do it this way. If you can, I'd suggest using only nginx: listen on 80, redirect to 443, handle TLS termination, serve static assets, and reverse proxy to your backend. It's much simpler to set up.
Example (untested nginx config):
# replace 'domain.com' with your domain name
# http server, redirects to https
server {
listen 80;
listen [::]:80;
server_name example.com;
return 301 https://example.com$request_uri;
}
# https server
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name example.com;
# replace this with your cert+key path
ssl_certificate /etc/ssl/private/YOUR_CERT.crt;
ssl_certificate_key /etc/ssl/private/YOUR_SERVER_KEY.key;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:20m;
ssl_session_tickets off;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK';
ssl_stapling on;
ssl_stapling_verify on;
# replace 'node-app' with your app name
access_log /var/log/nginx/node-app.access.log;
error_log /var/log/nginx/node-app.error.log;
location / {
# the path to your static assets
root /var/www/your-app/public;
try_files $uri $uri/ @nodebackend;
}
# all requests to locations not found in static path
# will be forwarded to the node server
location @nodebackend {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://localhost:3000;
proxy_redirect off;
}
}
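After adapting the paths and the domain, it's worth validating the config and reloading before testing in a browser; on a typical systemd-based install that would be something like:
sudo nginx -t                  # check the configuration for syntax errors
sudo systemctl reload nginx    # apply it without dropping existing connections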
I am trying to use NGINX on AWS as a reverse proxy for a Node server. If I go to https://example.com/, my connection is secure and everything is fine. But when I go to http://example.com/, no reroute occurs and my connection is not secure. I am also using pm2 to run the Node server in the background.
I have tried the default server block reroutes that come up when I google the issue, but nothing has worked so far. My guess is that Node is handling requests on port 80, since my website comes up the way it did before I had my site fully set up, but I have no clue how to fix that.
Here are my server blocks in /etc/nginx/nginx.conf:
server {
# if ($host = www.example.com) {
# return 301 https://$host$request_uri;
# } # managed by Certbot
listen 80;
listen [::]:80;
server_name _;
return 301 https://$host$request_uri; # managed by Certbot
}
server {
server_name www.example.com example.com; # managed by Certbot
root /usr/share/nginx/html;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location / {
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
error_page 404 /404.html;
location = /404.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
listen [::]:443 ssl ipv6only=on default_server; # managed by Certbot
listen 443 ssl default_server; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
Would appreciate any suggestions, as this is for a portfolio website and most places won't link directly to HTTPS.
If anyone else has had this issue, I managed to fix the problem. After trying everything under the sun, I remembered that I messed with my iptables when following an online guide to remove the port number from the address. I fixed my issue by wiping my iptables config, and since I was using a proxy I didn't need to reroute the port.
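For anyone else in the same spot, the commands involved would look something like this (assuming the guide added a NAT rule redirecting port 80 to the Node port):
# list the NAT rules; a leftover REDIRECT/DNAT here can hijack port 80 before nginx ever sees the request
sudo iptables -t nat -L -n --line-numbers
# flush the NAT table to remove the leftover redirect
sudo iptables -t nat -F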
I am using the configuration below for nginx as a sidecar in Azure, following this link.
I can't figure out what to change in the configuration to automatically redirect from http://domain to https://domain.
# nginx Configuration File
# https://wiki.nginx.org/Configuration
# Run as a less privileged user for security reasons.
user nginx;
worker_processes auto;
events {
worker_connections 1024;
}
pid /var/run/nginx.pid;
http {
#Redirect to https, using 307 instead of 301 to preserve post data
server {
listen [::]:443 ssl;
listen 443 ssl;
server_name localhost;
# Protect against the BEAST attack by not using SSLv3 at all. If you need to support older browsers (IE6) you may need to add
# SSLv3 to the list of protocols below.
ssl_protocols TLSv1.2;
# Ciphers set to best allow protection from Beast, while providing forwarding secrecy, as defined by Mozilla - https://wiki.mozilla.org/Security/Server_Side_TLS#Nginx
ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:AES128:AES256:RC4-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK;
ssl_prefer_server_ciphers on;
# Optimize TLS/SSL by caching session parameters for 10 minutes. This cuts down on the number of expensive TLS/SSL handshakes.
# The handshake is the most CPU-intensive operation, and by default it is re-negotiated on every new/parallel connection.
# By enabling a cache (of type "shared between all Nginx workers"), we tell the client to re-use the already negotiated state.
# Further optimization can be achieved by raising keepalive_timeout, but that shouldn't be done unless you serve primarily HTTPS.
ssl_session_cache shared:SSL:10m; # a 1mb cache can hold about 4000 sessions, so we can hold 40000 sessions
ssl_session_timeout 24h;
# Use a higher keepalive timeout to reduce the need for repeated handshakes
keepalive_timeout 300; # up from 75 secs default
# remember the certificate for a year and automatically connect to HTTPS
add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains';
ssl_certificate /etc/nginx/ssl.crt;
ssl_certificate_key /etc/nginx/ssl.key;
location / {
proxy_pass http://localhost:80; # TODO: replace port if app listens on port other than 80
proxy_set_header Connection "";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
}
}
}
EDIT:
After the suggestion in the first answer, it still didn't work. With the two blocks added, the config looked like this:
# nginx Configuration File
# https://wiki.nginx.org/Configuration
# Run as a less privileged user for security reasons.
user nginx;
worker_processes auto;
events {
worker_connections 1024;
}
pid /var/run/nginx.pid;
http {
#Redirect to https, using 307 instead of 301 to preserve post data
server {
# catch HTTP requests for all valid HTTP `Host` header values
listen 80;
listen [::]:80;
server_name _; # list all your domain names here
# do redirection to HTTPS
return 301 https://$http_host$request_uri;
}
server {
# default server listening on port 80
# getting here means the HTTP `Host` header is missing or had an incorrect value
listen 80 default_server;
listen [::]:80 default_server;
# close the connection immediately
return 444;
}
server {
listen [::]:443 ssl;
listen 443 ssl;
server_name localhost;
# Protect against the BEAST attack by not using SSLv3 at all. If you need to support older browsers (IE6) you may need to add
# SSLv3 to the list of protocols below.
ssl_protocols TLSv1.2;
# Ciphers set to best allow protection from Beast, while providing forwarding secrecy, as defined by Mozilla - https://wiki.mozilla.org/Security/Server_Side_TLS#Nginx
ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:AES128:AES256:RC4-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK;
ssl_prefer_server_ciphers on;
# Optimize TLS/SSL by caching session parameters for 10 minutes. This cuts down on the number of expensive TLS/SSL handshakes.
# The handshake is the most CPU-intensive operation, and by default it is re-negotiated on every new/parallel connection.
# By enabling a cache (of type "shared between all Nginx workers"), we tell the client to re-use the already negotiated state.
# Further optimization can be achieved by raising keepalive_timeout, but that shouldn't be done unless you serve primarily HTTPS.
ssl_session_cache shared:SSL:10m; # a 1mb cache can hold about 4000 sessions, so we can hold 40000 sessions
ssl_session_timeout 24h;
# Use a higher keepalive timeout to reduce the need for repeated handshakes
keepalive_timeout 300; # up from 75 secs default
# remember the certificate for a year and automatically connect to HTTPS
add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains';
ssl_certificate /etc/nginx/ssl.crt;
ssl_certificate_key /etc/nginx/ssl.key;
location / {
proxy_pass http://localhost:80; # TODO: replace port if app listens on port other than 80
proxy_set_header Connection "";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
}
}
}
I suggest using two additional server blocks:
server {
# catch HTTP requests for all valid HTTP `Host` header values
listen 80;
listen [::]:80;
server_name domain www.domain; # list all your domain names here
# do redirection to HTTPS
return 301 https://$http_host$request_uri;
}
server {
# default server listening on port 80
# getting here means the HTTP `Host` header is missing or had an incorrect value
listen 80 default_server;
listen [::]:80 default_server;
# close the connection immediately
return 444;
}
Check this answer for additional details on this configuration.
Update
Checking the documentation link given by the OP, it looks like the provided example uses an nginx container listening on port 443 that handles the TLS encryption and proxies requests to a "Hello World" example container listening on port 80. To do the HTTP to HTTPS redirection via the nginx container, you can try changing the "Hello World" container's listening port to 8080 and making nginx proxy the incoming requests to that port instead of port 80. Try the following configuration:
nginx.conf
# nginx Configuration File
# https://wiki.nginx.org/Configuration
# Run as a less privileged user for security reasons.
user nginx;
worker_processes auto;
events {
worker_connections 1024;
}
pid /var/run/nginx.pid;
http {
#Redirect to https, using 307 instead of 301 to preserve post data
server {
# catch HTTP requests for all valid HTTP `Host` header values
listen 80;
listen [::]:80;
server_name _; # list all your domain names here
# do redirection to HTTPS
return 307 https://$http_host$request_uri;
}
server {
# default server listening on port 80
# getting here means the HTTP `Host` header is missing or had an incorrect value
listen 80 default_server;
listen [::]:80 default_server;
# close the connection immediately
return 444;
}
server {
listen [::]:443 ssl;
listen 443 ssl;
server_name localhost;
# Protect against the BEAST attack by not using SSLv3 at all. If you need to support older browsers (IE6) you may need to add
# SSLv3 to the list of protocols below.
ssl_protocols TLSv1.2;
# Ciphers set to best allow protection from Beast, while providing forwarding secrecy, as defined by Mozilla - https://wiki.mozilla.org/Security/Server_Side_TLS#Nginx
ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:AES128:AES256:RC4-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK;
ssl_prefer_server_ciphers on;
# Optimize TLS/SSL by caching session parameters for 10 minutes. This cuts down on the number of expensive TLS/SSL handshakes.
# The handshake is the most CPU-intensive operation, and by default it is re-negotiated on every new/parallel connection.
# By enabling a cache (of type "shared between all Nginx workers"), we tell the client to re-use the already negotiated state.
# Further optimization can be achieved by raising keepalive_timeout, but that shouldn't be done unless you serve primarily HTTPS.
ssl_session_cache shared:SSL:10m; # a 1mb cache can hold about 4000 sessions, so we can hold 40000 sessions
ssl_session_timeout 24h;
# Use a higher keepalive timeout to reduce the need for repeated handshakes
keepalive_timeout 300; # up from 75 secs default
# remember the certificate for a year and automatically connect to HTTPS
add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains';
ssl_certificate /etc/nginx/ssl.crt;
ssl_certificate_key /etc/nginx/ssl.key;
location / {
proxy_pass http://localhost:8080;
proxy_set_header Connection "";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
}
}
}
deploy-aci.yaml
api-version: 2019-12-01
location: westus
name: app-with-ssl
properties:
containers:
- name: nginx-with-ssl
properties:
image: nginx
ports:
- port: 80
protocol: TCP
- port: 443
protocol: TCP
resources:
requests:
cpu: 1.0
memoryInGB: 1.5
volumeMounts:
- name: nginx-config
mountPath: /etc/nginx
- name: my-app
properties:
image: mcr.microsoft.com/azuredocs/aci-helloworld
ports:
- port: 8080
protocol: TCP
resources:
requests:
cpu: 1.0
memoryInGB: 1.5
volumes:
- secret:
ssl.crt: <Enter contents of base64-ssl.crt here>
ssl.key: <Enter contents of base64-ssl.key here>
nginx.conf: <Enter contents of base64-nginx.conf here>
name: nginx-config
ipAddress:
ports:
- port: 80
protocol: TCP
- port: 443
protocol: TCP
type: Public
osType: Linux
tags: null
type: Microsoft.ContainerInstance/containerGroups
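To roll out the change, the edited nginx.conf has to be base64-encoded into the YAML again and the container group redeployed; something like the following should work (the resource group name is a placeholder):
# re-encode the edited config before pasting it into the YAML (GNU coreutils)
base64 -w 0 nginx.conf
# redeploy the container group from the YAML file
az container create --resource-group <your-resource-group> --file deploy-aci.yaml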
I have my domain example.com, so when someone hits www.example.com or example.com the request is automatically directed to https://example.com, which works fine. However, when I use the IP address of the Node app, 1.2.3.4, it doesn't route to https://example.com, which is SSL-enabled. If I use the IP address, it shows me the same page but without the padlock icon.
So how do I route a request to https://example.com when someone enters the IP address of the Node app?
My Node.js app is hosted on an AWS EC2 instance, and I have also installed SSL using certbot (Let's Encrypt). This is my nginx file:
events {
worker_connections 4096; ## Default: 1024
}
http {
include conf/mime.types;
include /etc/nginx/proxy.conf;
include /etc/nginx/fastcgi.conf;
index index.html index.htm;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] $status '
'"$request" $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log logs/access.log main;
sendfile on;
tcp_nopush on;
server_names_hash_bucket_size 128; # this seems to be required for some vhosts
# Settings for normal server listening on port 80
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name example.com www.example.com;
root /usr/share/nginx/html;
# location / {
# }
# Redirect non-https traffic to https
if ($scheme != "https") {
return 301 https://$host$request_uri;
}
}
# Settings for a TLS enabled server.
server {
listen 443 ssl http2 default_server;
listen [::]:443 ssl http2 default_server;
server_name example.com www.example.com;
root /usr/share/nginx/html;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
ssl_dhparam "/etc/pki/nginx/dhparams.pem";
location / {
proxy_pass http://127.0.0.1:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
}
SSL will not work for the IP because the certificate is issued for the domain name (hence no padlock icon).
You can listen for the IP on port 80 (the default HTTP port) and redirect to https://example.com:
server {
listen 80;
server_name XXX.XXX.XXX.XXX;
return 301 https://www.example.com$request_uri;
}
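After reloading nginx, you can verify the redirect without a browser; a quick check using the IP placeholder from above:
# request the bare IP and inspect the response headers
curl -I http://XXX.XXX.XXX.XXX/
# expect: HTTP/1.1 301 Moved Permanently
#         Location: https://www.example.com/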
I have two files in sites-available, one for each website running on the machine. Both have identical code with only the domain name and port for the website replaced. Both sites are also symlinked to sites-enabled.
reesmorris.co.uk (which works fine):
# Remove WWW from HTTP
server {
listen 80;
server_name www.reesmorris.co.uk reesmorris.co.uk;
return 301 https://reesmorris.co.uk$request_uri;
}
# Remove WWW from HTTPS
server {
listen 443 ssl;
ssl_certificate /etc/letsencrypt/live/reesmorris.co.uk/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/reesmorris.co.uk/privkey.pem;
server_name www.reesmorris.co.uk;
return 301 https://reesmorris.co.uk$request_uri;
}
# HTTPS request
server {
# Enable HTTP/2
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name reesmorris.co.uk;
ssl_certificate /etc/letsencrypt/live/reesmorris.co.uk/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/reesmorris.co.uk/privkey.pem;
location / {
proxy_pass http://127.0.0.1:3000;
include /etc/nginx/proxy_params;
}
}
francescoiacono.co.uk (has a redirection loop):
# Remove WWW from HTTP
server {
listen 80;
server_name www.francescoiacono.co.uk francescoiacono.co.uk;
return 301 https://francescoiacono.co.uk$request_uri;
}
# Remove WWW from HTTPS
server {
listen 443 ssl;
ssl_certificate /etc/letsencrypt/live/francescoiacono.co.uk/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/francescoiacono.co.uk/privkey.pem;
server_name www.francescoiacono.co.uk;
return 301 https://francescoiacono.co.uk$request_uri;
}
# HTTPS request
server {
# Enable HTTP/2
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name francescoiacono.co.uk;
ssl_certificate /etc/letsencrypt/live/francescoiacono.co.uk/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/francescoiacono.co.uk/privkey.pem;
location / {
proxy_pass http://127.0.0.1:4000;
include /etc/nginx/proxy_params;
}
}
To experiment, I replaced the return value in the first server block of the broken website with a 403, which seems to be shown even when the site is accessed over HTTPS. Additionally, removing the first server block on the broken website altogether causes the website to redirect entirely to the already-working website.
Both websites use CloudFlare for DNS routing. CloudFlare is 'paused' on both websites, which means it only handles the routing; both websites point to the server with identical A and AAAA records.
I'm not too familiar with server blocks, so if anybody has any ideas as to what is happening then it would be greatly appreciated.
It appears as though this issue was resolved by making both websites 'paused' in the CloudFlare DNS. This was done originally, though it may not have had enough time to propagate.
I had not modified the code since this post was created, but I did ensure that both sites were 'paused' on CloudFlare (reesmorris.co.uk was, however francescoiacono.co.uk was not), so it seems to have been a misconfiguration issue with CloudFlare.
I have 2 domains running with nginx on a DigitalOcean Droplet.
Domain1 is a Node app, reverse-proxied to localhost:3000.
Works great!
Domain2 is a static site working great too.
However, whenever I load the server IP (without the port 3000), I always get redirected to domain1 (the Node app).
Domain1 is sort of a private site, whereas domain2 is a public blog.
My question is: what do I have to change so that people get redirected to domain2 whenever they load the IP, in order to protect domain1 from being easily reachable? (The VPS IP is easy to look up.)
Here are the "sites-available" files:
Node app:
server {
listen [::]:80;
listen 80;
server_name www.domain1.com domain1.com;
# and redirect to the https host (declared below)
return 301 https://domain1.com$request_uri;
}
server {
listen 443;
server_name domain1.com www.domain1.com;
ssl on;
# Use certificate and key provided by Let's Encrypt:
ssl_certificate /etc/letsencrypt/live/domain1.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/domain1.com/privkey.pem;
ssl_session_timeout 5m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://localhost:3000/;
proxy_ssl_session_reuse off;
proxy_set_header Host $http_host;
proxy_cache_bypass $http_upgrade;
proxy_redirect off;
}
}
And the static one:
server {
listen [::]:80;
listen 80;
server_name www.domain2.com domain2.com;
root /var/www/html/domain2;
index index.html index.htm;
return 301 https://domain2.com$request_uri;
}
server {
listen [::]:443 ssl;
listen 443 ssl;
root /var/www/html/domain2;
index index.html index.htm;
ssl_certificate /etc/letsencrypt/live/domain2.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/domain2.com/privkey.pem;
}
Any help/hint is appreciated, thank you in advance!
So both your domains are listening on port 80. When your nginx server receives a request, it checks the domain before determining its route... but because there is no domain to check when you just type in the IP address, it will default to the first listed server (which I'm guessing is domain1).
You can circumvent this by declaring a default server, or by switching the order in which they are listed; see the sketch below.
I hope I could be of help. A nice little reference: http://nginx.org/en/docs/http/request_processing.html
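A minimal sketch of the first option, assuming you want bare-IP visitors sent to domain2 (put it in one of the files in sites-available, or in a separate default file):
server {
    # catch requests that do not match any server_name (e.g. the raw IP)
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    # send them to the public blog instead of the private node app
    return 301 https://domain2.com$request_uri;
}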