NGINX SSL Configuration Ignoring API Location Block - node.js

I have recently been trying to use Let's Encrypt's SSL service to upgrade my web application to HTTPS. My application is a React front-end that runs on the same AWS instance as an Express backend/API.
I've been unable to get the HTTPS version of the site working, because it makes requests to the Express server, which has no SSL certificate, effectively preventing the HTTPS version of the site from using the backend.
My workaround has been to configure an NGINX location block that proxies requests to the Node.js server. My NGINX configuration isn't complicated and looks like this:
server {
    listen 443 ssl default_server;
    server_name example.site.com;
    root /home/ubuntu/react/build;

    ssl_certificate /etc/letsencrypt/live/example.site.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.site.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    location /api/ {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
        proxy_pass http://localhost:3050/; # port where the API runs
    }

    location / {
        try_files $uri $uri/ /index.html;
        add_header Cache-Control public;
        expires 1d;
    }
}

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name example.site.com;
    return 301 https://$host$request_uri;
}
This setup worked perfectly when I was running the HTTP version, but it seems to have completely broken down for HTTPS. NGINX seems to be ignoring the /api location block entirely: whenever I navigate directly to an /api endpoint, I'm sent to the location block below it instead (the one that serves the static React application). I've fiddled with all kinds of settings and can't for the life of me figure out why NGINX isn't taking the /api location block into account, or why adding the SSL certificate would change how it reads and matches the blocks.
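One way to check which location block actually handled a request (a debugging sketch, not part of the config above; the X-Debug-Location header name is made up for illustration) is to tag each block's responses with a distinct header and inspect it in the browser's network tab or with curl -I:
location /api/ {
    # "always" attaches the header to error responses too
    add_header X-Debug-Location "api" always;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_redirect off;
    proxy_pass http://localhost:3050/;
}
location / {
    add_header X-Debug-Location "static" always;
    try_files $uri $uri/ /index.html;
}
If /api responses come back tagged "static", the prefix match is failing inside this server block; if neither header appears, the request is being handled by a different server block altogether (for example, another enabled site that also declares default_server on port 443).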

Related

MERN Stack App with NGINX: Timeout when react app tries to connect to server side API

I have a MERN stack app that I am trying to put into production.
I am able to get the client side running using NGINX as a reverse proxy to port 3000.
The issue I am having is getting a response from my server running on port 5000, which is where my API queries the database.
I believe the issue lies in the server block I have set up for my site. Below is an example for my signin endpoint, which is returning a TIMEOUT. I have replaced my URL with example.com.
server {
    root /var/www/example.com/html;
    index index.html index.htm index.nginx-debian.html;
    server_name example.com www.example.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location /users/signin {
        proxy_pass http://localhost:5000/;
        proxy_buffering on;
    }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = www.example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;
    return 404; # managed by Certbot
}
Any help would be appreciated; I believe I just need to expose these endpoints properly.
Thanks!
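One detail of the posted config worth noting (this is documented nginx proxy_pass behavior, though it may not be the cause of the timeout): when proxy_pass includes a URI, the part of the request path matched by the location prefix is replaced by that URI before the request is forwarded. With the trailing slash above, /users/signin reaches the Express app as /, so a route defined as /users/signin on port 5000 would never match:
# With a URI ("/") on proxy_pass, the matched prefix is replaced:
#   POST /users/signin  ->  forwarded to the backend as POST /
location /users/signin {
    proxy_pass http://localhost:5000/;
}

# Without a URI, the request path is passed through unchanged:
#   POST /users/signin  ->  forwarded as POST /users/signin
location /users/signin {
    proxy_pass http://localhost:5000;
}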

Two proxy servers on NGINX are not working simultaneously

I have two Nginx servers acting as reverse proxies for nodejs servers running on ports 5000 and 5001.
The one that is running on port 5000 is for normal form upload
The other one that is running on port 5001 is for uploading images
On the client side, after the user fills out the form (title, description, and image), the image is uploaded to the image server first, and then the imageURL, title, and description are uploaded to the normal web server.
The Problem
When the client fills out the form and clicks upload, either the image upload works and the upload to the normal server fails, or the normal server upload works and the upload to the image server fails.
The error is the following (it can appear for either of them):
Access to XMLHttpRequest at 'https://myserver.com/imagev2api/profile-upload-single' from origin 'https://blogs.vercel.app' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
Note: I've used app.use(cors()) on both servers (image and normal server)
Here are both nginx server configurations:
Image Server
upstream imageserver.com {
    server 127.0.0.1:5001;
    keepalive 600;
}

server {
    server_name imageserver.com;
    error_log /var/www/log/imagserver.com.error;
    access_log /var/www/log/imagserver.com.access;

    location / {
        proxy_pass http://imageserver.com;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        # fastcgi_split_path_info ^(.+\.php)(/.+)$;
    }

    listen 443 ssl http2; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/linoxcloud.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/linoxcloud.com/privkey.pem; # managed by Certbot
    ssl_protocols TLSv1.2 TLSv1.3 SSLv2 SSLv3;
    ssl_session_cache shared:SSL:5m;
    ssl_session_timeout 10m;
    ssl_session_tickets off;
}

server {
    if ($host = imageserver.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name imageserver.com;
}
Normal Server
upstream normalserver.com {
    server 127.0.0.1:5000;
    keepalive 600;
}

server {
    server_name normalserver.com;
    error_log /var/www/log/normalserver.com.error;
    access_log /var/www/log/normalserver.com.access;

    location / {
        proxy_pass http://normalserver.com;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    listen 443 ssl http2; # managed by Certbot
    ssl_certificate ...; # managed by Certbot
    ssl_certificate_key ...; # managed by Certbot
    ssl_protocols TLSv1.2 TLSv1.3 SSLv2 SSLv3;
    ssl_session_cache shared:SSL:5m;
    ssl_session_timeout 10m;
    ssl_session_tickets off;
}

server {
    if ($host = normalserver.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name normalserver.com;
}
I've been trying to overcome this problem for some time now, trying literally everything.
Reference: Two NGINX servers one passing CORS issue (but this doesn't provide any insight into what the problem and solution are)
Any possible fixes, please?
You have to combine these reverse proxies in one configuration file. There was already a similar thread here: https://serverfault.com/questions/242679/how-to-run-multiple-nginx-instances-on-different-port
Hope it helps.
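If you do fold them into one file, a minimal sketch might look like this (certificates and proxy headers omitted for brevity; the upstream names here are illustrative, not taken from the question):
upstream image_backend  { server 127.0.0.1:5001; }
upstream normal_backend { server 127.0.0.1:5000; }

server {
    listen 443 ssl http2;
    server_name imageserver.com;
    # ssl_certificate / ssl_certificate_key go here
    location / { proxy_pass http://image_backend; }
}

server {
    listen 443 ssl http2;
    server_name normalserver.com;
    # ssl_certificate / ssl_certificate_key go here
    location / { proxy_pass http://normal_backend; }
}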
The problem in my case is that I'm running my Node.js instances/servers using pm2, and they are not working simultaneously.
Similar issue: https://github.com/Unitech/pm2/issues/4352
To elaborate on what happened: if two requests are made simultaneously, one pm2 process executes successfully, but the server crashes/restarts right after that execution, which makes the other server respond with a 502 Bad Gateway error (unreachable, as though the server were not running).
For now, I'm running one server on pm2 and the other one on forever.
Note: this issue has nothing to do with Nginx (it can handle any number of websites with different domain names on a single port 80).
This problem started happening quite recently, so maybe it's a pm2 bug.
In simple words, when the two requests hit the individual pm2 processes, one executes, and then the pm2 processes restart, dropping the second request.
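A side note on why a crashed upstream shows up in the browser as a CORS failure rather than a 502: the 502 Bad Gateway response is generated by nginx itself, so it carries none of the Access-Control-Allow-Origin headers that the Express cors() middleware would normally add, and the browser reports the missing header instead of the underlying status. If you want nginx to attach the header even to its own error responses, add_header with the always flag does that (a hedged sketch only; if the upstream also sets the header you will end up with duplicates, which browsers reject as well):
location / {
    proxy_pass http://normalserver.com;
    # "always" attaches the header to 4xx/5xx responses too, including
    # nginx-generated 502s, so the browser shows the real error
    add_header Access-Control-Allow-Origin "https://blogs.vercel.app" always;
}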

How to redirect HTTP to HTTPS while using an NGINX reverse proxy?

I am trying to use NGINX on AWS as a reverse proxy in front of a Node server. If I go to https://example.com/, my connection is secure and everything is fine. But when I go to http://example.com/, no redirect occurs and my connection is not secure. I am also using pm2 to run the Node server in the background.
I have tried the default server block redirects that come up when I google the issue, but nothing has worked so far. My guess is that Node is handling requests on port 80, since my website comes up the way it did before I had the site fully set up. But I have no clue how to fix that.
Here are my server blocks in /etc/nginx/nginx.conf:
server {
    # if ($host = www.example.com) {
    #     return 301 https://$host$request_uri;
    # } # managed by Certbot

    listen 80;
    listen [::]:80;
    server_name _;
    return 301 https://$host$request_uri; # managed by Certbot
}

server {
    server_name www.example.com example.com; # managed by Certbot
    root /usr/share/nginx/html;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    error_page 404 /404.html;
    location = /404.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }

    listen [::]:443 ssl ipv6only=on default_server; # managed by Certbot
    listen 443 ssl default_server; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
Would appreciate any suggestions, as this is for a portfolio website and most places won't link directly to HTTPS.
In case anyone else has this issue, I managed to fix the problem. After trying everything under the sun, I remembered that I had messed with my iptables rules while following an online guide for removing the port number from the address. I fixed the issue by wiping my iptables config; since NGINX was already proxying, I didn't need to reroute the port at all.

Cannot listen to https on port 5050 in NGINX

I have a nodejs app that functions as a webserver listening on port 5050.
I've created certificates and configured NGINX, which works for normal HTTPS calls to the standard port (https://x.x/).
If I make a plain http://x.x:5050 call to port 5050 it also works, but with an https://x.x:5050/conf call I get: "This site can’t provide a secure connection".
Below the NGINX config file:
(The names of the website are changed)
server {
    root /var/www/x.x/html;
    index index.html index.htm index.nginx-debian.html;
    server_name x.x www.x.x;

    location / {
        try_files $uri $uri/ =404;
    }

    location /conf {
        proxy_pass http://localhost:5050;
        try_files $uri $uri/ =404;
    }

    location /wh {
        proxy_pass http://localhost:5050;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/x.x/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/x.x/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
What am I doing wrong here?
You configured nginx to serve as a reverse proxy, forwarding incoming requests from https://example.com/whatever to http://localhost:5050/whatever. You said you did that correctly and it works. Good. (Getting that working is a notorious pain in the neck.)
You did not configure nginx to listen on port 5050. Nor should you; that's the port it uses to pass along requests to your nodejs program. You cannot forward requests from port 5050 to port 5050. If you try to have nodejs and nginx both listen on port 5050, one of them will get an EADDRINUSE error when you start your servers.
Your nodejs program listens for http requests, not https requests, on port 5050. You can't easily make it listen for both http and https on the same port. Your nodejs program, when behind nginx, should not contain any https server, only http. (You're making nginx do the hard crypto work to handle https, and letting nodejs handle your requests.)
Nor do you want your nodejs program to listen directly for http-only requests from outside your server. Because cybercreeps.
If you can block access to port 5050 from anywhere except localhost, you can declare victory on your task of configuring your server. You can do this by using
server.listen({
  host: 'localhost',
  port: 5050, ...
});
in your nodejs program. Or you can configure your server's firewall to block incoming requests on any ports except https (and ssh, so you can manage it). Digital Ocean has a useful tutorial on this point.

wss connection to node ws server via nginx, ERR_CERT_AUTHORITY_INVALID

I have a websocket server running behind nginx.
Connecting via http works OK (ws://mywebsite.com) with the following nginx config:
http {
    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;
        root /usr/share/nginx/html;

        location /myWebSocket {
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_pass http://localhost:5000;
        }
    }
}
I'd like to connect to this via https (wss://mywebsite.com) and have tried this:
http {
    server {
        listen 443 ssl http2;
        listen [::]:443 ssl http2;
        server_name mywebsite.com; # managed by Certbot
        root /usr/share/nginx/html;

        ssl_certificate /etc/mycert.pem;
        ssl_certificate_key /etc/mykey.pem;
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout 10m;
        ssl_ciphers PROFILE=SYSTEM;
        ssl_prefer_server_ciphers on;

        location /myWebSocket {
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_pass http://localhost:5000;
        }
    }
}
However, I get ERR_CERT_AUTHORITY_INVALID on the browser side (e.g. with wss://myWebsite/myWebSocket).
My understanding is that WSS is a connection to a standard WebSocket over HTTPS/TLS; is this correct?
Is my approach above correct?
Is there something obvious I'm doing wrong?
To confirm: the cert was installed with certbot, and plain HTTPS (e.g. https://example.com) works.
Note also that there is another server block listening on 443 which does not define the /myWebSocket location (I think it is the default nginx SSL block that was superseded by the Certbot block).
Edit: removing that first SSL server block changes the browser-side WebSocket connection error to ERR_CERT_COMMON_NAME_INVALID.
Another edit: I was connecting using the server's IP address rather than the DNS name; switching to the DNS name removed the error. :-)
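One more detail worth noting, separate from the certificate question: nginx's documented WebSocket proxying setup also sets proxy_http_version 1.1 alongside the Upgrade and Connection headers, since the proxied connection otherwise defaults to HTTP/1.0. A sketch of the location block with that added:
location /myWebSocket {
    proxy_http_version 1.1;                 # WebSocket upgrade requires HTTP/1.1
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_pass http://localhost:5000;
}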

Resources