Two proxy servers on NGINX are not working simultaneously - node.js

I have two Nginx servers acting as reverse proxies for Node.js servers running on ports 5000 and 5001.
The one running on port 5000 handles normal form uploads.
The other one, running on port 5001, handles image uploads.
On the client side, after the user fills out the form (title, description, and image), the image is uploaded to the image server first, and then the image URL, title, and description are sent to the normal web server.
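Roughly, the client-side flow looks like this minimal sketch (the second endpoint, the field names, and the response shape are assumptions, not taken from the actual app):
// Minimal sketch of the described flow; the second endpoint, field names and
// the response shape are assumptions.
async function submitPost(form) {
  // 1) Upload the image to the image server first.
  const imageData = new FormData();
  imageData.append('image', form.image.files[0]);
  const imageRes = await fetch('https://myserver.com/imagev2api/profile-upload-single', {
    method: 'POST',
    body: imageData,
  });
  const { imageURL } = await imageRes.json(); // assumed response shape

  // 2) Then send title, description and the returned URL to the normal server.
  await fetch('https://myserver.com/api/posts', { // assumed endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      title: form.title.value,
      description: form.description.value,
      imageURL,
    }),
  });
}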
The Problem
When the client fills out the form and clicks upload, only one of the two requests succeeds: if the image upload works, the upload to the normal server fails, and if the normal server upload works, the upload to the image server fails.
The error is the following (it can occur for either of them):
Access to XMLHttpRequest at 'https://myserver.com/imagev2api/profile-upload-single' from origin 'https://blogs.vercel.app' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
Note: I've used app.use(cors()) on both servers (image and normal server)
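For reference, a slightly stricter version of that setup pins the allowed origin explicitly; a minimal sketch using the cors middleware (origin and port taken from the question):
// Minimal sketch: same cors middleware, but with the allowed origin pinned
// explicitly (origin and port are taken from the question).
const express = require('express');
const cors = require('cors');

const app = express();
app.use(cors({
  origin: 'https://blogs.vercel.app',
  methods: ['GET', 'POST'],
}));

// ... routes ...

app.listen(5001, '127.0.0.1'); // 5001 for the image server, 5000 for the normal one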
Here are both nginx server configurations:
Image Server
upstream imageserver.com {
    server 127.0.0.1:5001;
    keepalive 600;
}

server {
    server_name imageserver.com;

    error_log /var/www/log/imageserver.com.error;
    access_log /var/www/log/imageserver.com.access;

    location / {
        proxy_pass http://imageserver.com;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        # fastcgi_split_path_info ^(.+\.php)(/.+)$;
    }

    listen 443 ssl http2; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/linoxcloud.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/linoxcloud.com/privkey.pem; # managed by Certbot
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_session_cache shared:SSL:5m;
    ssl_session_timeout 10m;
    ssl_session_tickets off;
}

server {
    if ($host = imageserver.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name imageserver.com;
}
Normal Server
upstream normalserver.com {
    server 127.0.0.1:5000;
    keepalive 600;
}

server {
    server_name normalserver.com;

    error_log /var/www/log/normalserver.com.error;
    access_log /var/www/log/normalserver.com.access;

    location / {
        proxy_pass http://normalserver.com;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    listen 443 ssl http2; # managed by Certbot
    ssl_certificate ...; # managed by Certbot
    ssl_certificate_key ...; # managed by Certbot
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_session_cache shared:SSL:5m;
    ssl_session_timeout 10m;
    ssl_session_tickets off;
}

server {
    if ($host = normalserver.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name normalserver.com;
}
I've been trying to overcome this problem for some time and have tried just about everything.
Reference: Two NGINX servers one passing CORS issue (but that question doesn't provide any insight into what the problem or solution is).
Any possible fixes, please?

You have to combine these reverse proxies in one configuration file. There was already a similar thread here: https://serverfault.com/questions/242679/how-to-run-multiple-nginx-instances-on-different-port
Hope it helps.

The problem in my case is that I'm running my Node.js instances/servers with pm2, and they are not working simultaneously.
Similar issue: https://github.com/Unitech/pm2/issues/4352
To elaborate on what happened: when two requests are made simultaneously, one pm2 process executes successfully, but the server crashes/restarts right after that execution, which makes the other server throw a 502 Bad Gateway error (unreachable, as though the server is not running). The browser reports this as a CORS failure because nginx's 502 error page carries no Access-Control-Allow-Origin header.
For now, I'm running one server with pm2 and the other one with forever.
Note: this issue has nothing to do with Nginx (it can handle any number of websites with different domain names on a single port 80).
This started happening quite recently, so it may be a pm2 bug.
In simple terms, when the two requests hit their individual pm2 processes, one executes, the pm2 processes restart, and the second request is dropped.
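For reference, the two servers were declared as separate pm2 apps, roughly like this ecosystem-file sketch (the app names and script paths are assumptions):
// ecosystem.config.js -- app names and script paths are hypothetical.
module.exports = {
  apps: [
    {
      name: 'normal-server',
      script: './normal-server/index.js', // listens on 127.0.0.1:5000
    },
    {
      name: 'image-server',
      script: './image-server/index.js', // listens on 127.0.0.1:5001
    },
  ],
};
It is started with pm2 start ecosystem.config.js.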

Related

linux nginx dotnet core app signalr not working over ssl

This was working when testing out the app. When I switched the DNS over to the server and then added the SSL cert, SignalR stopped working (my chat). I presume it's to do with the proxy now redirecting to port 443. The rest of the website works, just not its chat functionality.
Firefox can’t establish a connection to the server at wss://www.my-website.com/chatHub?id=qDsSrV-APYXpnyk_EfsrXw. signalr.min.js:16:110126
Uncaught (in promise) Error: Server returned handshake error: Handshake was canceled.
and the config in nginx:
server {
    server_name www.my-website.com;

    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/www.my-website.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/www.my-website.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
}

server {
    if ($host = www.my-website.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name www.my-website.com;
    return 404; # managed by Certbot
}
Any help on getting signalR working again would be greatly appreciated, thanks.
So, it turns out that when Certbot edited the config, it added an extra unnecessary }, and that's all that was breaking it. The config was broken and was serving a cached state, so I was viewing the website via https:// but was trying to make a WebSocket connection on port 80, which was failing because it was insecure.
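A minimal client-side sketch that avoids hard-coding the scheme, assuming the standard @microsoft/signalr browser client: a relative hub URL lets the connection follow the page's scheme, so https pages negotiate wss.
// Relative URL: it resolves against the current page, so an https page
// automatically negotiates wss for the WebSocket transport.
const connection = new signalR.HubConnectionBuilder()
  .withUrl('/chatHub')
  .build();

connection.start()
  .then(() => console.log('SignalR connected'))
  .catch((err) => console.error('SignalR handshake failed:', err));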

Nginx proxy to node.js server SSL ERR_SSL_PROTOCOL_ERROR

EDIT:
I have verified that nodejs is running on the correct port, on http, and I have also tried with and without:
app.set('trust proxy', true);
EDIT 2:
I turned off the nodejs server and tried to serve static files just with nginx, and the error persists, so clearly this has something to do with nginx and my ssl cert.
My domain is a free domain from freenom and the ssl certificate was generated with certbot.
Original:
I have a Node.js server running and want to use nginx as a proxy to it (nginx https -> Node.js http).
Running nginx -t gives no errors.
This is on Ubuntu 20.04.2, nginx 1.18.0, node 14.5.5.
I have verified that my site works fine via http (on port 3000), but I get the following error when visiting via browser on https:
ERR_SSL_PROTOCOL_ERROR
Further, if I use the openssl CLI to try to connect, I get this:
openssl s_client -connect my_domain.com:443 -servername my_domain.com
CONNECTED(00000003)
139662603941184:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:../ssl/record/ssl3_record.c:331:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 5 bytes and written 310 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---
/etc/nginx/conf.d/ssl.conf
server {
    listen 443 ssl;

    ssl_certificate /server/resources/cert.pem;
    ssl_certificate_key /server/resources/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
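For reference, the Node side this config proxies to has to be a plain HTTP listener on 127.0.0.1:3000, since TLS terminates at nginx; a minimal sketch:
// Plain HTTP only; TLS terminates at nginx (port 3000 as in the question).
const http = require('http');

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('hello from node\n');
}).listen(3000, '127.0.0.1');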
If you use Cloudflare, it may be that Cloudflare has not issued an SSL certificate for you yet, or that Cloudflare failed to connect to your origin over a secure connection. Check your dashboard.
The following is my working nginx.conf configuration. I have also set up SSL with certbot + Let's Encrypt.
server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    server_name example.com www.example.com;

    root "/home/ubuntu/domain/code/directory/path/";
    index index.html index.htm;
    client_max_body_size 75M; # adjust to taste

    location /api {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_read_timeout 600s;
    }

    location / {
        try_files $uri $uri/ /index.html;
    }

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_session_timeout 1h;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    add_header Strict-Transport-Security "max-age=15768000" always;
    ssl_ciphers EECDH+CHACHA20:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
}
I guess the above configuration might solve your issue.
With it, https://www.example.com/api/ping is proxied to http://localhost:3000/api/ping on the server.
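The Node side this assumes is an ordinary HTTP app that serves its routes under the /api prefix; a minimal sketch (the route itself is an assumption):
// proxy_pass has no URI part, so nginx forwards the /api prefix unchanged
// and the Express routes must include it.
const express = require('express');
const app = express();

app.get('/api/ping', (req, res) => res.json({ ok: true })); // assumed route

app.listen(3000);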

wss connection to node ws server via nginx, ERR_CERT_AUTHORITY_INVALID

I have a websocket server running behind nginx.
Connecting via http works OK (ws://mywebsite.com) with the following nginx config:
http {
    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;
        root /usr/share/nginx/html;

        location /myWebSocket {
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_pass http://localhost:5000;
        }
    }
}
I'd like to connect to this via https (wss://mywebsite.com) and have tried this:
http {
    server {
        listen 443 ssl http2;
        listen [::]:443 ssl http2;
        server_name mywebsite.com; # managed by Certbot
        root /usr/share/nginx/html;

        ssl_certificate /etc/mycert.pem;
        ssl_certificate_key /etc/mykey.pem;
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout 10m;
        ssl_ciphers PROFILE=SYSTEM;
        ssl_prefer_server_ciphers on;

        location /myWebSocket {
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_pass http://localhost:5000;
        }
    }
}
However, I get an ERR_CERT_AUTHORITY_INVALID error on the browser side (e.g. with wss://myWebsite/myWebSocket).
My understanding is that WSS is a connection to a standard WebSocket over https; is this correct?
Is my approach above correct?
Is there something obvious I'm doing wrong?
To confirm: the cert was installed with certbot, and e.g. https://example.com works.
Note also that there is another server block listening on 443 which does not define the /myWebSocket location (I think it is the nginx default SSL block, superseded by the Certbot block).
Edit: note that removing that first SSL server block changes the browser-side WebSocket connection error to ERR_CERT_COMMON_NAME_INVALID.
Another edit: I was connecting using the server's IP address, not the DNS name; connecting with the DNS name removed the error. :-)
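In other words, the browser has to connect using the hostname the certificate was issued for; a minimal client-side sketch:
// Connect with the DNS name from the certificate, not the raw IP,
// and use wss:// on https pages.
const socket = new WebSocket('wss://mywebsite.com/myWebSocket');
socket.addEventListener('open', () => socket.send('hello'));
socket.addEventListener('message', (event) => console.log(event.data));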

having trouble with ports and deploying react/node project to digital ocean

I tried to deploy my project to Digital Ocean.
At one point I was able to see my React client when I went to my_ip:8080, which was the port it was running on for whatever reason.
I set up SSL, then did cd /etc/nginx/sites-enabled, opened the default file with vim, and started to edit. This is where I started to run into problems, where my React project stopped showing up, and where ultimately I'm stuck.
So this is what is in that file right now:
server {
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    listen 80;
    server_name my_website.com;

    rewrite ^/a(.*) https://my_website.com/$1 permanent;

    location / {
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    root /var/www/client/build;
    index index.html index.htm index.nginx-debian.html;

    server_name my_website.com;

    ssl_certificate /root/my_website.crt;
    ssl_certificate_key /root/my_website.com.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';

    location / {
    }
Something is clearly not working here, though.
If I do pm2 list, it shows index running on 0 with status online, and static-page-server-8080 running on 1 but with status errored.
I tried to set up ufw, and I'm not even sure whether that messed anything up or not.
So at the moment, if I go to my IP in the browser I get nothing, and if I add a port at the end I get nothing. How should I go about fixing this? Should I just scrap it and start over?
Take a read through the documentation here. And for SSL, this is useful.
I'd also suggest, as a sanity check:
Stop everything you see running in pm2 list.
Start your project directly in the terminal on port 8080 and try to visit it from the browser (a quick sketch of this follows below).
This will tell you whether the problem is with your code, pm2, or how you're setting up nginx.
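For that second step, a minimal sketch of serving the React build directly on port 8080 without pm2 or nginx (the build path is an assumption, and this assumes Express 4):
// Quick sanity check: serve the built client on 8080 without pm2 or nginx.
const express = require('express');
const path = require('path');

const app = express();
app.use(express.static(path.join(__dirname, 'client', 'build'))); // assumed build path
app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, 'client', 'build', 'index.html'));
});
app.listen(8080);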
Also, this is what my config looks like below. Your localhost probably shouldn't be commented out.
server {
    server_name www.foo.com foo.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/foo.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/foo.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

WebSocket NGINX/NODEJS stickiness Issue

I'm writing a WebSocket project, and everything is working as expected (locally). I'm using:
NGINX as a WebSockets Proxy
NODEJS as a backend server
WS as websocket module: ws
NGINX configuration:
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

upstream backend_cluster {
    server 127.0.0.1:5050;
}

# Only retry if there was a communication error, not a timeout.
proxy_next_upstream error;

server {
    access_log /code/logs/access.log;
    error_log /code/logs/error.log info;

    listen 80;
    listen 443 ssl;
    server_name mydomain;
    root html;

    ssl_certificate /code/certs/sslCert.crt;
    ssl_certificate_key /code/certs/sslKey.key;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; # basically same as apache [all -SSLv2]
    ssl_ciphers HIGH:MEDIUM:!aNULL:!MD5;

    location /websocket/ws {
        proxy_pass http://backend_cluster;
        proxy_http_version 1.1;
        proxy_redirect off;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
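For context, the Node side behind backend_cluster is a plain ws server on port 5050; a minimal sketch (the path matches the config above, the echo handler is an assumption):
// Minimal ws backend; TLS terminates at nginx, so this listens on plain HTTP.
const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 5050, path: '/websocket/ws' });

wss.on('connection', (socket) => {
  socket.on('message', (msg) => {
    socket.send(`echo: ${msg}`); // assumed echo handler
  });
});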
As I mentioned, this is working just fine locally on a single machine in the development environment. The issue I'm worried about is production, where there will be more than one Node.js server.
In production, the nginx configuration will be something like:
upstream backend_cluster {
    server domain1:5050;
    server domain2:5050;
}
What I don't know is how NGINX handles stickiness: once the handshake/upgrade has been done with one server, how will it know to keep talking to that same server? Is there a way to tell NGINX to stick to the same server?
I hope I've made myself clear.
Thanks in advance.
Use this configuration:
upstream backend_cluster {
    ip_hash;
    server domain1:5050;
    server domain2:5050;
}
clody69's answer is pretty standard. However, I prefer the following configuration, for two reasons:
Users connecting from the same public IP should be able to land on two different servers if needed; ip_hash forces one server per public IP.
If user 1 is maxing out server 1's capacity, I want them to still be able to use the application smoothly if they open another tab; ip_hash doesn't allow that.
upstream backend_cluster {
    hash $content_type;
    server domain1:5050;
    server domain2:5050;
}
