infrequent 504s at nginx when proxying to node/express app - node.js

We have a typical nginx+nodejs setup with node v4.2.2 & nginx v1.9.7.2
The backend service, an Express/Node.js app, is deployed in clustered mode using the recluster module: child processes (one per core) are forked and all listen on the same port. nginx is used as a reverse proxy to the backend.
Each such box (4 cores, 8 GB) running nginx+nodejs serves around 100 TPS under load, with a 90th-percentile latency of about 120 ms.
The problem is that we get infrequent 504s (1-2 times every 5 minutes) in the nginx access log, and the related error log shows
(110: Connection timed out) while connecting to upstream
As I understand it, this happens when nginx times out while establishing a connection to the nodejs server, but all the metrics show nodejs is healthy. Also, the requests immediately before and after the error have normal latency; just the one odd request gets stuck. There is no corresponding log entry on the nodejs server, which means the request never reached it.
Relevant nginx config below.
worker_processes auto;
worker_rlimit_nofile 40000;
events {
worker_connections 2000;
use epoll;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 60;
}
upstream node {
server 127.0.0.1:3000;
keepalive 256;
}
server {
listen 80;
server_name abc.com;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_read_timeout 60;
location / {
proxy_pass http://node;
}
}
Output of ss -s is
Total: 747 (kernel 0)
TCP: 7400 (estab 481, closed 6890, orphaned 0, synrecv 0, timewait 6890/0), ports 0
It shows a high number of TIME_WAIT sockets, but we don't see any errors in syslog, which suggests the operating system's limits are not being hit.
I have tried tuning the network stack and nginx, mainly taking clues from blog posts, with no luck.
I need help finding the right direction to debug this. Let me know what more info I can provide.

Related

Issues with nginx + node.js + websockets

I have the following nginx configuration to run node.js with websockets behind an nginx reverse proxy:
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
gzip on;
upstream nodejsserver {
server 127.0.0.1:3456;
}
server {
listen 443 ssl;
server_name myserver.com;
error_log /var/log/nginx/myserver.com-error.log;
ssl_certificate /etc/ssl/myserver.com.crt;
ssl_certificate_key /etc/ssl/myserver.com.key;
location / {
proxy_pass https://nodejsserver;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_read_timeout 36000s;
}
}
}
The node.js server uses the same certificates as specified in the nginx configuration.
My issue is that in my browser (Firefox, though this issue occurs in other browsers too), my websocket connection resets every few minutes with a 1006 code. I have researched the reason for this error in this particular (or similar) constellation, and most of the answers here as well as on other resources point to the proxy_read_timeout nginx configuration variable not being set or being set too low. But this is not the case in my configuration.
Worthy of note is also that when I run node.js and access it directly, I do not experience these disconnects, both locally and on the server.
In addition, I've tried running nginx and node.js insecurely (port 80), and accessing ws:// instead of wss:// in my client. The issue remains the same.
There are a few things you need to do to keep a connection alive.
You should establish a keepalive connection count per worker process, and the documentation states you need to be explicit about your protocol as well. Other than that, you may be running into other kinds of timeouts, so edit your upstream and server blocks:
upstream nodejsserver {
server 127.0.0.1:3456;
keepalive 32;
}
server {
#Stuff...
location / {
#Stuff...
# Your time can be different for each timeout, you just need to tune into your application's needs
proxy_read_timeout 36000s;
proxy_connect_timeout 36000s;
proxy_send_timeout 36000s;
send_timeout 36000s; # This is stupid, try a smaller number
}
}
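Being "explicit about your protocol" refers to the HTTP/1.1 and upgrade headers that the question's config already sets; a minimal sketch of the location block with those directives spelled out (the upstream name matches the example above, and the timeout value is only illustrative):
location / {
    proxy_pass https://nodejsserver;
    # Upstream keepalive and WebSocket upgrades both require HTTP/1.1.
    proxy_http_version 1.1;
    # Forward the client's Upgrade request and mark the connection as upgradable.
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    # Keep long-lived sockets open; tune to your application's needs.
    proxy_read_timeout 36000s;
}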
There are a number of other discussions on SO about the same subject; check this answer out

net::ERR_CONNECTION_CLOSED on remote server when there are more than 7 sub-documents in mongo document

I am developing a MEAN project with angular 4.1.0.
On my localhost, everything works without errors. When I deploy to the server however, retrieving a user with more than 8 question-answer pairs causes a net::ERR_CONNECTION_CLOSED error on the xhr request that angular's http module fires.
The DigitalOcean droplet I am hosting on uses an nginx reverse proxy and a Let's Encrypt SSL certificate.
I have tried:
Restarting server, nginx service, node.js etc.
Increasing client_max_body_size to 20M in the nginx config file
Increasing large_client_header_buffers' size to 128k in the nginx config file
Other important facts:
The GET request to qapairs?jwt=ey.. never reaches the node.js app
There is no mention of the request in /var/log/nginx/error.log
The failing requests shown in the /var/log/nginx/access.log are as follows:
89.15.159.19 - - [08/May/2017:14:25:53 +0000] "-" 400 0 "-" "-"
89.15.159.19 - - [08/May/2017:14:25:53 +0000] "-" 400 0 "-" "-"
Please point me in possible directions.
The chrome dev tool network tab screenshots
After logging in to an account where there are only 7 question answer pairs
Then, after going to mlab.com, manually adding another question-answer pair to the same account, and refreshing the page (notice the number of questions is now 8)
Finally, after logging in and out of the same account (notice the xhr request to qapairs?jwt=ey... returned a failed status)
/etc/nginx/sites-enabled/default
# HTTP — redirect all traffic to HTTPS
server {
listen 80;
listen [::]:80 default_server ipv6only=on;
return 301 https://$host$request_uri;
}
# etc
# HTTPS — proxy all requests to the Node app
server {
# Enable HTTP/2
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name subdomain.example.com;
# Use the Let's Encrypt certificates
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
# Include the SSL configuration from cipherli.st
include snippets/ssl-params.conf;
# Increase allowed URL length
large_client_header_buffers 4 128k;
# Increase max body size
client_max_body_size 20M;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://subdomain.example.com:3001/;
proxy_ssl_session_reuse off;
proxy_set_header Host $http_host;
proxy_cache_bypass $http_upgrade;
proxy_redirect off;
}
}
qa-pairs.service.ts
The error is being caught here in the getQAPairs function. The callback in the catch function is passed a ProgressEvent object with a type property of error and an eventPhase of 2.
@Injectable()
export class QaPairsService {
/* etc */
getQAPairs () {
const jwt = localStorage.getItem('jwt') ? `?jwt=${localStorage.getItem('jwt')}` : ''
return this.http.get(this.qapairsUrl + jwt)
.map(response => {
this.qapairs = response.json().map((qapair: IQAPair) => new QAPair(qapair))
this.qapairsChanged.emit(this.qapairs)
return this.qapairs
})
.catch(
(error: any) => {
error = error.json()
this.errorsService.handleError(error)
return Observable.throw(error)
}
)
}
/* etc */
}
Solution:
/etc/nginx/sites-enabled/default
# other code here
server {
# other code here
# Increase http2 max sizes
http2_max_field_size 64k;
http2_max_header_size 64k;
}
The reason I found this so hard to debug was that there was
no mention of the request in /var/log/nginx/error.log
and I didn't realize that nginx has the ability to be more verbose with its logging (duh)
So after changing /etc/nginx/sites-enabled/default to include
server {
error_log /var/log/nginx/error.log info;
}
I saw
2017/05/08 16:17:04 [info] 3037#3037: *9 client exceeded http2_max_field_size limit while processing HTTP/2 connection, client: 89.15.159.19, server: 0.0.0.0:443
which was the error message I needed.
This helped me!!! Unfortunately there were no error messages.
This helped me:
client_header_buffer_size 1k;
large_client_header_buffers 4 4k;

Nginx upstream prematurely closed connection while reading response header from upstream, for large requests

I am using nginx and a node server to serve update requests. I get a gateway timeout when I request an update on large data. I saw this error in the nginx error logs:
2016/04/07 00:46:04 [error] 28599#0: *1 upstream prematurely closed
connection while reading response header from upstream, client:
10.0.2.77, server: gis.oneconcern.com, request: "GET /update_mbtiles/atlas19891018000415 HTTP/1.1", upstream:
"http://127.0.0.1:7777/update_mbtiles/atlas19891018000415", host:
"gis.oneconcern.com"
I googled for the error and tried everything I could, but I still get the error.
My nginx conf has these proxy settings:
##
# Proxy settings
##
proxy_connect_timeout 1000;
proxy_send_timeout 1000;
proxy_read_timeout 1000;
send_timeout 1000;
This is how my server is configured
server {
listen 80;
server_name gis.oneconcern.com;
access_log /home/ubuntu/Tilelive-Server/logs/nginx_access.log;
error_log /home/ubuntu/Tilelive-Server/logs/nginx_error.log;
large_client_header_buffers 8 32k;
location / {
proxy_pass http://127.0.0.1:7777;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $http_host;
proxy_cache_bypass $http_upgrade;
}
location /faults {
proxy_pass http://127.0.0.1:8888;
proxy_http_version 1.1;
proxy_buffers 8 64k;
proxy_buffer_size 128k;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
I am using a nodejs backend to serve the requests on an AWS server. The gateway error shows up only when the update takes a long time (about 3-4 minutes). I do not get any errors for smaller updates. Any help will be highly appreciated.
Node.js code (spawn comes from require('child_process'); logger, pg_details, tilelive and sources are defined elsewhere in the app):
app.get("/update_mbtiles/:earthquake", function(req, res){
var earthquake = req.params.earthquake
var command = spawn(__dirname + '/update_mbtiles.sh', [ earthquake, pg_details ]);
//var output = [];
command.stdout.on('data', function(chunk) {
// logger.info(chunk.toString());
// output.push(chunk.toString());
});
command.stderr.on('data', function(chunk) {
// logger.error(chunk.toString());
// output.push(chunk.toString());
});
command.on('close', function(code) {
if (code === 0) {
logger.info("updating mbtiles successful for " + earthquake);
tilelive_reload_and_switch_source(earthquake);
res.send("Completed updating!");
}
else {
logger.error("Error occured while updating " + earthquake);
res.status(500);
res.send("Error occured while updating " + earthquake);
}
});
});
function tilelive_reload_and_switch_source(earthquake_unique_id) {
tilelive.load('mbtiles:///'+__dirname+'/mbtiles/tipp_out_'+ earthquake_unique_id + '.mbtiles', function(err, source) {
if (err) {
logger.error(err.message);
throw err;
}
sources.set(earthquake_unique_id, source);
logger.info('Updated source! New tiles!');
});
}
Thank you.
I solved this by setting a higher timeout value for the proxy:
location / {
proxy_read_timeout 300s;
proxy_connect_timeout 75s;
proxy_pass http://localhost:3000;
}
Documentation: https://nginx.org/en/docs/http/ngx_http_proxy_module.html
I think that error from Nginx is indicating that the connection was closed by your nodejs server (i.e., "upstream"). How is nodejs configured?
I had the same error for quite a while, and here is what fixed it for me.
I simply declared the following in the systemd service I use:
[Unit]
Description=Your node service description
After=network.target
[Service]
Type=forking
PIDFile=/tmp/node_pid_name.pid
Restart=on-failure
KillSignal=SIGQUIT
WorkingDirectory=/path/to/node/app/root/directory
ExecStart=/path/to/node /path/to/server.js
[Install]
WantedBy=multi-user.target
What should catch your attention here is "After=network.target".
I spent days and days looking for fixes on the nginx side, while the problem was just that.
To be sure, stop the node service you have running, launch the ExecStart command directly, and try to reproduce the bug. If it doesn't show up, it just means that your service has a problem. At least this is how I found my answer.
For everybody else, good luck!
I stumbled upon a "*145660 upstream prematurely closed connection while reading upstream" Nginx error log entry when trying to download a 2 GB file from the server Nginx was proxying for. The message indicates that the "upstream" closed the connection, but in fact it was related to the proxy_max_temp_file_size setting:
Syntax: proxy_max_temp_file_size size;
Default: proxy_max_temp_file_size 1024m;
Context: http, server, location
When buffering of responses from the proxied server is enabled, and the whole response does not fit into the buffers set by the proxy_buffer_size and proxy_buffers directives, a part of the response can be saved to a temporary file. This directive sets the maximum size of the temporary file. The size of data written to the temporary file at a time is set by the proxy_temp_file_write_size directive.
The zero value disables buffering of responses to temporary files.
This restriction does not apply to responses that will be cached or stored on disk.
The symptoms:
the download was being forcibly stopped at around 1 GB,
Nginx claimed that the upstream closed the connection, even though without the proxy the upstream returned the full content.
The solution:
I increased proxy_max_temp_file_size for the proxied location to 4096m, and it started sending the full content.
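A hedged sketch of that fix in context (the location path and upstream address below are placeholders for the proxied location in question):
location /downloads/ {
    proxy_pass http://127.0.0.1:8080;
    # Let a buffered response spill up to 4 GB to a temporary file
    # (0 would disable temp-file buffering entirely).
    proxy_max_temp_file_size 4096m;
}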
I was finding this error in the logs of my AWS Elastic Beanstalk instance when trying to post about half a million rows to my API.
I followed all the advice here to no avail.
What did finally work was increasing the size of my EC2 instance from 1 core and 1GB RAM to 4 core and 8 GB RAM.
You can increase the timeout in node like so.
app.post('/slow/request', function(req, res) {
  req.connection.setTimeout(100000); // 100 seconds
  ...
});
I don't think this is your case, but I'll post it in case it helps anyone. I had the same issue, and the problem was that Node didn't respond at all (I had a condition that, when it failed, didn't do anything, so no response was sent). So if increasing all your timeouts didn't solve it, make sure all scenarios get a response.
I ran into this issue as well and found this post. Ultimately none of these answers solved my problem; instead I had to put in a rewrite rule to strip out the /rt location prefix, because the backend my developers made was not expecting any additional path segments:
┌─(william#wkstn18)──(Thu, 05 Nov 20)─┐
└─(~)──(16:13)─>wscat -c ws://WebsocketServerHostname/rt
error: Unexpected server response: 502
Testing with wscat repeatedly gave a 502 response. The Nginx error logs showed the same upstream error as above, but notice the upstream string: the GET request is hitting localhost:12775/rt/socket.io/... rather than localhost:12775/socket.io/...:
2020/11/05 22:13:32 [error] 10175#10175: *7 upstream prematurely closed
connection while reading response header from upstream, client: WANIP,
server: WebsocketServerHostname, request: "GET /rt/socket.io/?transport=websocket
HTTP/1.1", upstream: "http://127.0.0.1:12775/rt/socket.io/?transport=websocket",
host: "WebsocketServerHostname"
The devs had not coded their websocket server (listening on 12775) to expect /rt/socket.io, but just /socket.io/ (NOTE: /socket.io/ appears to just be a way to specify the websocket transport, discussed here). Because of this, rather than ask them to rewrite their socket code, I just put in a rewrite rule to translate WebsocketServerHostname/rt to WebsocketServerHostname:12775, as below:
upstream websocket-rt {
ip_hash;
server 127.0.0.1:12775;
}
server {
listen 80;
server_name WebsocketServerHostname;
location /rt {
proxy_http_version 1.1;
#rewrite /rt/ out of all requests and proxy_pass to 12775
rewrite /rt/(.*) /$1 break;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_pass http://websocket-rt;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
}
I met the same problem, and none of the solutions detailed here worked for me.
First of all I had a 413 Entity Too Large error, so I updated my nginx.conf as follows:
http {
# Increase request size
client_max_body_size 10m;
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
##
# Proxy settings
##
proxy_connect_timeout 1000;
proxy_send_timeout 1000;
proxy_read_timeout 1000;
send_timeout 1000;
}
So I only updated the http part, and now I get a 502 Bad Gateway error; when I check /var/log/nginx/error.log I see the famous "upstream prematurely closed connection while reading response header from upstream".
What is really mysterious to me is that the request works when I run the app with virtualenv on my server and send the request directly to IP:8000/nameOfTheRequest
Thanks for reading
I got the same error; here is how I resolved it:
Downloaded logs from AWS.
Reviewed the Nginx logs; no additional details beyond the above.
Reviewed the node.js logs: an AccessDenied AWS SDK permissions error.
Checked the S3 bucket that AWS was trying to read from.
Added the additional bucket, with read permission, to the correct server role.
Even though I was processing large files, there were no other errors or settings I had to change once I corrected the missing S3 access.
Problem
The upstream server is timing out and I don't know what is happening.
Where to look first, before increasing the read or write timeouts, if your server is connecting to a database:
Check that the database connection is working fine and responding within a sane time, and that it is not the one causing the delay in server response time.
Make sure that connection state is not causing a cascading failure on your upstream.
Then you can move on to the read and write timeout configurations of the server and proxy.
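If it does come down to the proxy, a minimal sketch of the relevant nginx timeout directives looks like this (the upstream address and values are purely illustrative; tune them to your application):
location / {
    proxy_pass http://127.0.0.1:3000;  # placeholder upstream
    proxy_connect_timeout 60s;  # establishing the connection to the upstream
    proxy_send_timeout 120s;    # between two successive writes to the upstream
    proxy_read_timeout 120s;    # between two successive reads from the upstream
}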
This error can also occur when your code gets stuck in a loop, so investigate whether you have any (indirectly) self-referencing code that could be causing this.

get client ip of the request in net library nodejs

I am using sticky-session in nodejs, which is behind nginx.
sticky-session does the load balancing by checking the remoteAddress of the connection.
Now the problem is that it always takes the IP of the nginx server.
server = net.createServer({ pauseOnConnect: true },function(c) {
// Get int31 hash of ip
var worker,
ipHash = hash((c.remoteAddress || '').split(/\./g), seed);
// Pass connection to worker
worker = workers[ipHash % workers.length];
worker.send('sticky-session:connection', c);
});
Can we get the client IP using the net library?
Nginx Configuration:
server {
listen 80 default_server;
server_name localhost;
root /usr/share/nginx/html;
#auth_basic "Restricted";
#auth_basic_user_file /etc/nginx/.htpasswd;
#charset koi8-r;
#access_log /var/log/nginx/host.access.log main;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location / {
set_real_ip_from 0.0.0.0/0;
real_ip_header X-Forwarded-For;
real_ip_recursive on;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_http_version 1.1;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_pass http://socket_nodes;
proxy_read_timeout 3000;
As mef points out, sticky-session doesn't, at present, work behind a reverse proxy, where remoteAddress is always the same.
The pull request in the aforementioned issue, as well as an earlier pull request, might indeed solve the problem, though I haven't tested myself.
However, those fixes rely on partially parsing packets, doing low-level routing while peeking into headers at a higher level... As the comments on the pull requests indicate, they're unstable, depend on undocumented behavior, suffer from compatibility issues, might degrade performance, etc.
If you don't want to rely on experimental implementations like that, one alternative would be leaving load balancing entirely up to nginx, which can see the client's real IP and so keep sessions sticky. All you need is nginx's built-in ip_hash load balancing.
Your nginx configuration might then look something like this:
upstream socket_nodes {
ip_hash;
server 127.0.0.1:8000;
server 127.0.0.1:8001;
server 127.0.0.1:8002;
server 127.0.0.1:8003;
server 127.0.0.1:8004;
server 127.0.0.1:8005;
server 127.0.0.1:8006;
server 127.0.0.1:8007;
}
server {
listen 80 default_server;
server_name localhost;
root /usr/share/nginx/html;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location / {
# Note: Trusting all addresses like this means anyone
# can pretend to have any address they want.
# Only do this if you're absolutely certain only trusted
# sources can reach nginx with requests to begin with.
set_real_ip_from 0.0.0.0/0;
real_ip_header X-Forwarded-For;
real_ip_recursive on;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_http_version 1.1;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_pass http://socket_nodes;
proxy_read_timeout 3000;
}
}
Now, to get this to work, your server code would also need to be modified somewhat:
if (cluster.isMaster) {
var STARTING_PORT = 8000;
var NUMBER_OF_WORKERS = 8;
for (var i = 0; i < NUMBER_OF_WORKERS; i++) {
// Passing each worker its port number as an environment variable.
cluster.fork({ port: STARTING_PORT + i });
}
cluster.on('exit', function(worker, code, signal) {
// Create a new worker, log, or do whatever else you want.
});
}
else {
server = http.createServer(app);
// Socket.io initialization would go here.
// process.env.port is the port passed to this worker by the master.
server.listen(process.env.port, function(err) {
if (err) { /* Error handling. */ }
console.log("Server started on port", process.env.port);
});
}
The difference is that instead of using cluster to have all worker processes share a single port (load balanced by cluster itself), each worker gets its own port, and nginx can distribute load between the different ports to get to the different workers.
Since nginx chooses which port to go to based on the IP it gets from the client (or the X-Forwarded-For header in your case), all requests in the same session will always end up at the same process.
One major disadvantage of this method, of course, is that the number of workers becomes far less dynamic. If the ports are "hard-coded" in the nginx configuration, the Node server has to be sure to always listen to exactly those ports, no less and no more. In the absence of a good system for syncing the nginx config and the Node server, this introduces the possibility of error, and makes it somewhat more difficult to dynamically scale to e.g. the number of cores in an environment.
Then again, I imagine one could overcome this issue by either programmatically generating/updating the nginx configuration, so it always reflects the desired number of processes, or possibly by configuring a very high number of ports for nginx and then making Node workers each listen to multiple ports as needed (so you could still have exactly as many workers as there are cores). I have not, however, personally verified or tried implementing either of these methods so far.
Note regarding an nginx server behind a proxy
In the nginx configuration you provided, you seem to have made use of ngx_http_realip_module. While you made no explicit mention of this in the question, please note that this may in fact be necessary, in cases where nginx itself sits behind some kind of proxy, e.g. ELB.
The real_ip_header directive is then needed to ensure that it's the real client IP (in e.g. X-Forwarded-For), and not the other proxy's, that's hashed to choose which port to go to.
In such a case, nginx is actually serving a fairly similar purpose to what the pull requests for sticky-session attempted to accomplish: using headers to make the load balancing decisions, and specifically to make sure the same real client IP is always directed to the same process.
The key difference, of course, is that nginx, as a dedicated web server, load balancer and reverse proxy, is designed to do exactly these kinds of operations. Parsing and manipulating the different layers of the protocol stack is its bread and butter. Even more importantly, while it's not clear how many people have actually used these pull requests, nginx is stable, well-maintained and used virtually everywhere.
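As a hedged illustration of that note, when nginx itself sits behind an ELB you would typically trust X-Forwarded-For only from the front proxy's address range rather than from 0.0.0.0/0 (the CIDR below is just a placeholder for your ELB/proxy subnet):
location / {
    # Only trust X-Forwarded-For when the request comes from the front proxy.
    set_real_ip_from 10.0.0.0/16;   # placeholder: your ELB / proxy subnet
    real_ip_header X-Forwarded-For;
    real_ip_recursive on;
    proxy_pass http://socket_nodes; # the ip_hash upstream defined earlier
}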
It seems that the module you're using does not yet support running behind a reverse proxy.
Have a look at this GitHub issue; some pull requests seem to fix your problem, so you may have a solution by using a fork of the module (you can point to it on GitHub from your package.json file).

WebSocket NGINX/NODEJS stickiness Issue

I'm writing a web socket project; everything is working as expected (locally). I'm using:
NGINX as a WebSockets Proxy
NODEJS as a backend server
WS as websocket module: ws
NGINX configuration:
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
upstream backend_cluster {
server 127.0.0.1:5050;
}
# Only retry if there was a communication error, not a timeout.
proxy_next_upstream error;
server {
access_log /code/logs/access.log;
error_log /code/logs/error.log info;
listen 80;
listen 443 ssl;
server_name mydomain;
root html;
ssl_certificate /code/certs/sslCert.crt;
ssl_certificate_key /code/certs/sslKey.key;
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; # basically same as apache [all -SSLv2]
ssl_ciphers HIGH:MEDIUM:!aNULL:!MD5;
location /websocket/ws {
proxy_pass http://backend_cluster;
proxy_http_version 1.1;
proxy_redirect off ;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
}
}
Like I mentioned, this is working just fine locally and on a single machine in development environments. The issue I'm worried about is when we go to production: the production environment will have more than one nodejs server.
In production the configuration for nginx will be something like:
upstream backend_cluster {
server domain1:5050;
server domain2:5050;
}
So I don't know how NGINX solves the stickiness issue: after the handshake/upgrade is done on one server, how will it know to continue working with the same server? Is there a way to tell NGINX to stick to the same server?
I hope I'm making myself clear.
Thanks in advance
Use this configuration:
upstream backend_cluster {
ip_hash;
server domain1:5050;
server domain2:5050;
}
clody69's answer is pretty standard. However, I prefer using the following configuration, for two reasons:
Users connecting from the same public IP should be able to reach two different servers if needed; ip_hash enforces one server per public IP.
If user 1 is maxing out server 1's performance, I want them to be able to use the application smoothly if they open another tab; ip_hash doesn't allow that.
upstream backend_cluster {
hash $content_type;
server domain1:5050;
server domain2:5050;
}
