WebSocket NGINX/NODEJS stickiness Issue - node.js

I'm writing a WebSocket project and everything is working as expected (locally). I'm using:
NGINX as a WebSockets proxy
NODEJS as the backend server
WS as the websocket module: ws
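For context, a minimal ws backend listening on port 5050 would look roughly like this (a sketch for illustration only; the path and echo handler are assumptions, not the actual project code):
const WebSocket = require('ws');

// Listen on the port the nginx upstream points at; ws only accepts
// upgrade requests whose URL matches the given path.
const wss = new WebSocket.Server({ port: 5050, path: '/websocket/ws' });

wss.on('connection', (ws) => {
  ws.on('message', (message) => {
    ws.send(`echo: ${message}`); // echo back whatever the client sends
  });
});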
NGINX configuration:
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
upstream backend_cluster {
server 127.0.0.1:5050;
}
# Only retry if there was a communication error, not a timeout.
proxy_next_upstream error;
server {
access_log /code/logs/access.log;
error_log /code/logs/error.log info;
listen 80;
listen 443 ssl;
server_name mydomain;
root html;
ssl_certificate /code/certs/sslCert.crt;
ssl_certificate_key /code/certs/sslKey.key;
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; # basically same as apache [all -SSLv2]
ssl_ciphers HIGH:MEDIUM:!aNULL:!MD5;
location /websocket/ws {
proxy_pass http://backend_cluster;
proxy_http_version 1.1;
proxy_redirect off ;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
}
}
Like I mentioned, this works just fine locally and on one machine in the development environment. The issue I'm worried about is production: the production environment will have more than one Node.js server.
In production the nginx configuration will be something like:
upstream backend_cluster {
server domain1:5050;
server domain2:5050;
}
So I don't know how NGINX solves the stickiness issue: once the handshake/upgrade is done on one server, how does NGINX know to keep sending that connection's traffic to the same server? Is there a way to tell NGINX to stick to the same server?
I hope I made myself clear.
Thanks in advance.

Use this configuration:
upstream backend_cluster {
ip_hash;
server domain1:5050;
server domain2:5050;
}

clody69's answer is pretty standard. However, I prefer using the following configuration, for 2 reasons:
Users connecting from the same public IP should be able to reach 2 different servers if needed; ip_hash enforces 1 server per public IP.
If user 1 is maxing out server 1's performance, I want him/her to be able to use the application smoothly by opening another tab. ip_hash doesn't allow that.
upstream backend_cluster {
hash $content_type;
server domain1:5050;
server domain2:5050;
}

Related

Two proxy servers on NGINX are not working simultaneously

I have two Nginx servers acting as reverse proxies for nodejs servers running on ports 5000 and 5001.
The one running on port 5000 is for normal form uploads.
The other one, running on port 5001, is for uploading images.
On the client side, after the user fills out the form (title, description, and image), the image is uploaded to the image server first, and then the imageURL, title, and description are uploaded to the normal web server.
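In code, the client-side flow is roughly the following (a sketch only; the endpoint paths and field names here are assumptions, not the real ones):
// Two-step upload: image first, then the rest of the form with the returned imageURL.
async function submitPost(title, description, imageFile) {
  const formData = new FormData();
  formData.append('image', imageFile);

  // 1. Upload the image to the image server.
  const imageRes = await fetch('https://imageserver.com/upload', {
    method: 'POST',
    body: formData,
  });
  const { imageURL } = await imageRes.json();

  // 2. Send the imageURL plus the form fields to the normal server.
  await fetch('https://normalserver.com/posts', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ title, description, imageURL }),
  });
}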
The Problem
When the client fills out the form and clicks on upload, if the image upload works then the upload to the normal server fails, and if the normal server upload works then the upload to the image server fails.
The error is the following (it can occur for either of them):
Access to XMLHttpRequest at 'https://myserver.com/imagev2api/profile-upload-single' from origin 'https://blogs.vercel.app' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
Note: I've used app.use(cors()) on both servers (the image server and the normal server).
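That is, both Express apps enable CORS roughly like this (a minimal sketch, not the actual server code):
const express = require('express');
const cors = require('cors');

const app = express();
app.use(cors()); // allow any origin; cors({ origin: 'https://blogs.vercel.app' }) would restrict it
app.use(express.json());

app.listen(5001); // 5001 for the image server, 5000 for the normal one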
Here are both nginx server configurations:
Image Server
upstream imageserver.com {
server 127.0.0.1:5001;
keepalive 600;
}
server {
server_name imageserver.com;
error_log /var/www/log/imagserver.com.error;
access_log /var/www/log/imagserver.com.access;
location / {
proxy_pass http://imageserver.com;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
# fastcgi_split_path_info ^(.+\.php)(/.+)$;
}
listen 443 ssl http2; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/linoxcloud.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/linoxcloud.com/privkey.pem; # managed by Certbot
ssl_protocols TLSv1.2 TLSv1.3 SSLv2 SSLv3;
ssl_session_cache shared:SSL:5m;
ssl_session_timeout 10m;
ssl_session_tickets off;
}
server {
if ($host = imageserver.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
listen 80;
server_name imageserver.com;
}
Normal Server
upstream normalserver.com {
server 127.0.0.1:5000;
keepalive 600;
}
server {
server_name normalserver.com;
error_log /var/www/log/normalserver.com.error;
access_log /var/www/log/normalserver.com.access;
location / {
proxy_pass http://normalserver.com;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
listen 443 ssl http2; # managed by Certbot
ssl_certificate ...; # managed by Certbot
ssl_certificate_key ...; # managed by Certbot
ssl_protocols TLSv1.2 TLSv1.3 SSLv2 SSLv3;
ssl_session_cache shared:SSL:5m;
ssl_session_timeout 10m;
ssl_session_tickets off;
}
server {
if ($host = normalserver.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
listen 80;
server_name normalserver.com;
}
I've been trying to overcome this problem for some time now, trying literally everything.
Reference: Two NGINX servers one passing CORS issue (but that question doesn't provide any insight into what the problem and solution are)
Any possible fixes, please?
You have to combine these reverse proxies in one configuration file. There was already a similar thread here: https://serverfault.com/questions/242679/how-to-run-multiple-nginx-instances-on-different-port
Hope it helps.
The problem in my case is that I'm running my NODEJS instances/servers using "pm2" and they are not working simultaneously.
Similar issue: https://github.com/Unitech/pm2/issues/4352
To elaborate on what happened: if two requests are made simultaneously, one pm2 process executes successfully, but the server then crashes/restarts right after that execution, which makes the other server throw a 502 Bad Gateway error (unreachable, as though the server is not running).
For now, I'm running one server on "pm2" and the other one with "forever".
Note: This issue has nothing to do with Nginx (since it can handle any number of websites with different domain names on a single port 80)
This problem only started happening quite recently, so maybe it's some "pm2" bug.
In simple words, when the two requests hit the individual pm2 processes, one executes, and then the pm2 processes kind of restart, leaving the second request unserved.
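For reference, under pm2 the two servers were defined roughly like this (a hypothetical ecosystem file sketched from the description above, not the actual configuration):
// ecosystem.config.js - hypothetical sketch of the two apps managed by pm2
module.exports = {
  apps: [
    { name: 'normal-server', script: './normal-server.js' }, // listens on 5000
    { name: 'image-server', script: './image-server.js' },   // listens on 5001
  ],
};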

wss connection to node ws server via nginx, ERR_CERT_AUTHORITY_INVALID

I have a websocket server running behind nginx.
Connecting via http works ok (ws://mywebsite.com) with the following nginx config:
http {
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
root /usr/share/nginx/html;
location /myWebSocket {
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_pass http://localhost:5000;
}
}
}
I'd like to connect to this via https (wss://mywebsite.com) and have tried this:
http {
server {
listen 443 ssl http2 ;
listen [::]:443 ssl http2 ;
server_name mywebsite.com; # managed by Certbot
root /usr/share/nginx/html;
ssl_certificate /etc/mycert.pem;
ssl_certificate_key /etc/mykey.pem;
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 10m;
ssl_ciphers PROFILE=SYSTEM;
ssl_prefer_server_ciphers on;
location /myWebSocket {
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_pass http://localhost:5000;
}
}
}
However, I get a ERR_CERT_AUTHORITY_INVALID on the browser side (e.g. with wss://myWebsite/myWebSocket).
My understanding is that WSS is a connection to a standard websocket via https, is this correct?
Is my approach above correct?
Is there something obvious I'm doing wrong?
To confirm, the cert was installed with certbot, and e.g. https://example.com works.
Note also that there is another server block listening on 443 which does not define the /myWebSocket location (I think it is the nginx default ssl block that was superseded by the certbot block).
Edit: note that removing that first ssl server block changes the browser-side websocket connection error to ERR_CERT_COMMON_NAME_INVALID.
Another edit: I was connecting using the server's IP address rather than the DNS name - using the DNS name removed the error. :-)
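In other words, the client has to connect using the hostname the certificate was issued for, roughly like this (a sketch using the example hostname from above):
// Browser side: connect via wss with the certificate's DNS name, not the raw IP.
const ws = new WebSocket('wss://mywebsite.com/myWebSocket');
ws.onopen = () => ws.send('hello');
ws.onmessage = (event) => console.log('received:', event.data);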

Socket IO websockets issues

I have a node express webserver starting up on my debian linux box on ports 8080-8083 using pm2 cluster mode.
I have an nginx reverse proxy server setup on the server to redirect correctly to the node-express server, with the following /etc/nginx/sites-available/default
server {
listen 80;
listen [::]:80;
server_name a.registered.dns.domain.com;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name a.registered.dns.domain.com;
ssl_certificate /home/admin/certs/a.registered.dns.domain.com.chained.crt;
ssl_certificate_key /home/admin/certs/a.registered.dns.domain.com.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_pass http://nodes;
}
location /socket.io {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_pass http://nodes;
# enable WebSockets
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_cache_bypass $http_upgrade;
}
}
upstream nodes {
# enable sticky session based on IP
ip_hash;
server 127.0.0.1:8080 fail_timeout=20s;
server 127.0.0.1:8081 fail_timeout=20s;
server 127.0.0.1:8082 fail_timeout=20s;
server 127.0.0.1:8083 fail_timeout=20s;
}
This creates the websocket just fine between the server and the client: the connection is upgraded from long polling to the websocket with status 101. If I do something from the site that sends an emit over the socket, the server receives it and acts on it appropriately. So far so good.
However, if I do something elsewhere that causes the server to emit out to the client, I can see (by running the server with DEBUG='socket.io*' pm2 restart http-server --update-env) that the socket information is received and emitted out, but the client never receives the data packet it should. I can confirm this by running localStorage.debug = '*'; in the console of my chrome dev tools: the emit goes out on the server, yet nothing but ping and pong packets show up on the websocket.
This all works correctly if I open ports 8080-8083 and use only an http connection. So it feels as if there is some issue with the nginx reverse proxy for the ssl connection of my site.
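For reference, the server-side pattern being described is roughly the following (a sketch assuming a socket.io v4-style API, not the actual application code):
const http = require('http');
const { Server } = require('socket.io');

const server = http.createServer();
const io = new Server(server);
server.listen(8080); // one of the 8080-8083 instances

io.on('connection', (socket) => {
  // Client -> server emits like this one arrive and are handled correctly.
  socket.on('clientEvent', (data) => console.log('got', data));
});

// Server -> client emits like this one show up in the server's debug output
// but never reach the browser when going through the nginx/ssl proxy.
setInterval(() => io.emit('serverEvent', { time: Date.now() }), 5000);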

Issues with nginx + node.js + websockets

I have the following nginx configuration to run node.js with websockets behind an nginx reverse proxy:
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
gzip on;
upstream nodejsserver {
server 127.0.0.1:3456;
}
server {
listen 443 ssl;
server_name myserver.com;
error_log /var/log/nginx/myserver.com-error.log;
ssl_certificate /etc/ssl/myserver.com.crt;
ssl_certificate_key /etc/ssl/myserver.com.key;
location / {
proxy_pass https://nodejsserver;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_read_timeout 36000s;
}
}
}
The node.js server uses the same certificates as specified in the nginx configuration.
My issue is that in my browser (Firefox, though this issue occurs in other browsers too), my websocket connection resets every few minutes with a 1006 code. I have researched the reason for this error in this particular (or a similar) setup, and most of the answers here, as well as on other resources, point to the proxy_read_timeout nginx configuration variable not being set or being set too low. But that is not the case in my configuration.
Worthy of note is also that when I run node.js and access it directly, I do not experience these disconnects, both locally and on the server.
In addition, I've tried running nginx and node.js insecurely (port 80), and accessing ws:// instead of wss:// in my client. The issue remains the same.
There are a few things you need to do to keep a connection alive.
You should establish a keepalive connection count per worker process, and the documentation states you need to be explicit about your protocol as well. Other than that, you may be running into other kinds of timeouts, so edit your upstream and server blocks:
upstream nodejsserver {
server 127.0.0.1:3456;
keepalive 32;
}
server {
#Stuff...
location / {
#Stuff...
# Your time can be different for each timeout, you just need to tune into your application's needs
proxy_read_timeout 36000s;
proxy_connect_timeout 36000s;
proxy_send_timeout 36000s;
send_timeout 36000s; # This is stupid, try a smaller number
}
}
There are a number of other discussions on SO about the same subject; check this answer out.

Google OAuth2 uri_mismatch when behind an nginx reverse proxy

I'm trying a new server configuration using an nginx reverse proxy and ssl, but it seems to break my google OAuth2. I'm using node v6.2.2, pm2 to manage nodejs, and using nginx for ssl and a reverse proxy.
My Nginx server blocks look like:
server {
listen 80;
server_name _;
return 301 https://$host$request_uri;
}
and
server {
listen 443;
ssl on;
include snippets/ssl-example.com.conf;
include snippets/ssl-params.conf;
server_name example.com;
location / {
proxy_pass http://localhost:3001;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
When running the nodejs server on my laptop I'm able to log in using passportjs' google strategy with no issues, but as soon as I run the same code behind the reverse proxy I get a redirect_uri_mismatch. I've tried hardcoding the callbackURL to http://example.com/auth/oauthCallback and https://example.com/auth/oauthCallback, and I've added all variations of those to the OAuth Client IDs. I've tried making small changes to my server blocks but couldn't make much headway, so here I am.
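For reference, the strategy configuration in question looks roughly like this (a sketch assuming the passport-google-oauth20 package; the credentials and callback route are placeholders):
const passport = require('passport');
const GoogleStrategy = require('passport-google-oauth20').Strategy;

passport.use(new GoogleStrategy(
  {
    clientID: process.env.GOOGLE_CLIENT_ID,
    clientSecret: process.env.GOOGLE_CLIENT_SECRET,
    // Must match one of the authorized redirect URIs in the Google console exactly.
    callbackURL: 'https://example.com/auth/oauthCallback',
  },
  (accessToken, refreshToken, profile, done) => done(null, profile)
));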
Any ideas for a next step?
Turns out Google's OAuth2 doesn't seem to recognize .xyz domains
