Socket.io with Nodejs not working with nginx reverse proxy - node.js

I have a Node.js server app with Express and Socket.io (Ubuntu 18.04). It always worked fine until an nginx (1.14) reverse proxy entered the scene. The nginx server runs on a different machine from the Node.js apps; each app runs on its own VM inside the same network.
Server and client are both on Socket.io version 2.1.1.
The nginx server is responsible for multiple app redirects.
I tried several configuration combinations but nothing works.
Here is what I've tried (examples for "company1"):
default.conf in /etc/nginx/conf.d
location /company1-srv/ {
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-NginX-Proxy true;
proxy_redirect off;
proxy_pass http://172.16.0.25:51001/;
}
Then in the client code I connect using the "path" option, because socket.io otherwise misplaces its library path.
// companySrv and the URL are actually returned by another service (the following code is for illustrative purposes):
let companyUrl = 'https://api.myserver.com/company1-srv';
let companySrv = '/company1-srv';
socket(companyUrl, {
path: companySrv + '/socket.io/'
});
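My understanding (an assumption on my part) is that the trailing slash on proxy_pass strips the /company1-srv/ prefix, so the backend can keep Socket.io on its default path. Roughly, the server side would look like this (a simplified sketch, not my real code):
// simplified sketch of the backend binding (assumption: nginx strips the prefix)
const app = require('express')();
const server = require('http').createServer(app);
const io = require('socket.io')(server); // default path '/socket.io'
io.on('connection', (socket) => {
  socket.emit('news', 'hello from company1-srv');
});
server.listen(51001);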
I also tried removing the path option and configuring a specific location block for the socket.io stuff (for testing purposes):
location /socket.io/ {
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-NginX-Proxy true;
proxy_redirect off;
proxy_pass http://172.16.0.25:51001/socket.io/;
}
Nothing worked.
It connects, but doesn't emit anything. And after a short while (a minute or so), it becomes unavailable, raising the "disconnect" (reason: transport close) event on the client.
Server:
const io = require('socket.io')(https || http, {
transports: ['polling', 'websocket'],
allowUpgrades: true,
pingInterval: 60000*60*24,
pingTimeout: 60000*60*24
});
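One thing I suspect (not confirmed) is that the roughly one-minute "transport close" matches nginx's default proxy_read_timeout of 60 seconds, and my huge ping values leave the connection idle long enough to hit it. The library defaults would keep the heartbeat well under that, e.g.:
const io = require('socket.io')(https || http, {
  transports: ['polling', 'websocket'],
  allowUpgrades: true,
  // socket.io 2.x defaults: heartbeat stays well below a 60s proxy idle timeout
  pingInterval: 25000,
  pingTimeout: 60000
});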
I also tried editing nginx.conf, defining an "upstream socket_nodes { ... }" block and using proxy_pass http://socket_nodes. It doesn't really fit my case, since I need an exact redirect depending on the company, but I tried it anyway for testing, and it doesn't work either.
What do I need to do?
Thanks

We also use socket.io behind an nginx reverse proxy. I can share a little bit of our setup; maybe it helps to rule things out.
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
}
stream {
log_format basic '$time_iso8601 $remote_addr '
'$protocol $status $bytes_sent $bytes_received '
'$session_time $upstream_addr '
'"$upstream_bytes_sent" "$upstream_bytes_received" "$upstream_connect_time"';
access_log /var/log/nginx/stream.log basic;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
gzip on;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
##
# Server Blocks
##
# DOMAINEXAMPLE A
server {
server_name exampleA.domain.com;
location / {
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $http_host;
proxy_pass http://192.168.21.105:5050;
}
}
# DOMAINEXAMPLE B
server {
server_name exampleB.domain.com;
location /api {
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $http_host;
proxy_pass http://192.168.21.106:5050;
}
}
}
The most interesting parts here are probably the server blocks:
# DOMAINEXAMPLE A
server {
server_name exampleA.domain.com;
location / {
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $http_host;
proxy_pass http://192.168.21.105:5050;
}
}
# DOMAINEXAMPLE B
server {
server_name exampleB.domain.com;
location /api {
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $http_host;
proxy_pass http://192.168.21.106:5050;
}
}
Domain Example A
For location / at http://192.168.21.105:5050 we have a Node.js process running, including the setup for socket.io:
const express = require('express');
const http = require('http');
const app = express();
const server = http.createServer(app);
const io = require('socket.io')(server);
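Connecting a client to Example A just uses the library defaults (this snippet is a minimal sketch, not taken from our code):
import io from 'socket.io-client'
const socket = io('https://exampleA.domain.com');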
Domain Example B
For location /api at http://192.168.21.106:5050 we have another Node.js process running, including a slightly different setup for socket.io:
const express = require('express');
const http = require('http');
const app = express();
const server = http.createServer(app);
const io = require('socket.io')(server, {path: '/api/socket.io'});
In both cases socket.io works perfectly fine for us.
Connecting from Client (Example B)
What we actually do on the server side here is create a namespace for socket.io, like
const io = require('socket.io')(server, {path: '/api/socket.io'});
const nsp = io.of('/api/frontend');
and then on the client side, connect to it like
import io from 'socket.io-client'
const socket = io('https://exampleB.domain.com/api/frontend', {path: "/api/socket.io"});
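As a quick sanity check we emit once from the namespace on the server and log it on the client (the event name here is only illustrative):
// server
nsp.on('connection', (socket) => {
  socket.emit('hello', 'connected via /api/frontend');
});
// client
socket.on('hello', (msg) => console.log(msg));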

Related

Nginx + nodejs, socket.io for https doesn't work

I have a problem configuring my nodejs app on the server (https, socket.io).
I'm using: pm2, nginx (proxying to port 3000), nodejs.
Previously I configured nginx for port 80 (http) and it worked fine.
But after I installed and configured the SSL certificate, I get errors in the console:
"Access to XMLHttpRequest at 'https://example:2053/socket.io/?EIO=3&transport=polling&t=NAPoEIt' from origin 'https://example.com' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
My nginx settings:
server {
listen 80 default_server;
server_name example.com www.example.com;
return 301 http://skinsgaben.com$request_uri;
}
server {
listen 443 ssl;
server_name site.com www.example.com;
ssl_certificate /var/www/site/ssl/example.crt;
ssl_certificate_key /var/www/site/ssl/example.key;
ssl_session_timeout 5m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://localhost:3000/;
proxy_redirect off;
# Socket.IO Support
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
location /socket.io/ {
proxy_pass http://localhost:3000/;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
error_log /var/www/site-log/error.log;
access_log /var/www/site-log/access.log;
}
I'm mostly using this pattern for local and remote nodejs apps.
let host = process.env.host || 'your host';
let PORT = process.env.port || 'your port';
let protocol = 'http';
let options = {};
if (APP_ENV === 'production') {
protocol = 'https';
options = {
key: fs.readFileSync(SSL.KEY),
cert: fs.readFileSync(SSL.CERT)
};
}
const server = require(protocol).createServer(options, app);
server.listen({ host, port: PORT }, () => {
console.log(`Server listening on ${host}:${PORT}`);
});
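If you attach Socket.io to that same server instance, it simply follows whichever protocol was selected above (a minimal sketch, assuming socket.io is installed):
const io = require('socket.io')(server);
io.on('connection', (socket) => {
  socket.emit('ready', `connected over ${protocol}`);
});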

Python 3.7 Flask-SocketIO + uWSGI + nginx configuration

I'm hosting two Python applications (app1 and app2) on an Ubuntu machine (16.04) under the same setup with different Python versions (2.7 and 3.7); both are based on the Flask framework using Flask-SocketIO, running on uWSGI (2.0.17.1) behind an nginx proxy.
I've been able to successfully implement websocket support in version 2.7, but I'm failing to do the same on 3.7.
The nginx configuration and uwsgi settings are the same, with the only exception being a different uwsgi plugin (for the Python version).
In both cases I'm using the uwsgi websocket server (via SocketIO) with a Redis queue.
Aside from the websocket problem, app2 works just fine.
Python Setup
Python 2.7 Libs:
Flask==0.12.2
Flask-SocketIO==2.9.4
gevent==1.2.2
greenlet==0.4.13
Python 3.7 Libs:
Flask==1.0.2
Flask-Script==2.0.6
gevent==1.3.7
greenlet==0.4.15
uWSGI - 2.0.17.1
Working configuration of app1:
__init__.py
app = Flask(__name__)
# SocketIO
try: # This step is required only for version deployed on UWSGI
import uwsgi
socketio = SocketIO(app, message_queue=app.config['REDIS_QUEUE_URL'])
except ImportError:
print 'Application runs outside of uWSGI context'
socketio = SocketIO(app)
manage.py
from flask_script import Manager
from app1 import app
@manager.command
def runserver(host = None, port = None, socket = True):
if not host:
host = 'localhost'
if not port:
port = 5000
if socket:
socketio.run(app)
else:
app.run(host, port, debug=False)
app1.ini
[uwsgi]
plugins-dir = /usr/local/lib/uwsgi
plugins = python27
#application's base folder
base = /home/ubuntu/app1
#python module to import
app = manage
module = %(app)
home = %(base)/venv
virtualenv = %(base)/venv
pythonpath = %(base)
#socket file's location
socket = %(base)/app1.sock
#permissions for the socket file
chmod-socket = 666
callable = app
logto = /var/log/uwsgi/%n.log
processes = 20
http-websockets = true
gevent = 500
vacuum = true
die-on-term = true
enable-threads = true
master = true
app1-site
server {
listen 1014 ssl default_server;
server_name server_name_1;
access_log /var/log/nginx/app1_access_log;
error_log /var/log/nginx/app1_error_log;
auth_basic off;
# SSL only
ssl on;
ssl_certificate /etc/nginx/ssl/nginx.crt;
ssl_certificate_key /etc/nginx/ssl/nginx.key;
ssl_session_timeout 5m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
location /socket.io/ {
include uwsgi_params;
uwsgi_pass unix:/home/ubuntu/app1/app1.sock;
proxy_http_version 1.1;
proxy_read_timeout 180s;
proxy_buffering on;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
}
location / {deny all;}
location = /app1{ rewrite ^ /app1/; }
location /app1{
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://localhost:8080;
proxy_read_timeout 90;
proxy_redirect http://server:8080 https://server_name_1:1014/app1;
try_files $uri @app1; }
location @app1 {
include uwsgi_params;
uwsgi_param SCRIPT_NAME /app1;
uwsgi_modifier1 30;
uwsgi_read_timeout 180s;
uwsgi_send_timeout 180s;
proxy_read_timeout 180s;
uwsgi_pass unix:/home/ubuntu/app1/app1.sock;
}}
Configuration of app2
__init__.py
def create_app():
...
app = Flask(__name__)
socket_io.init_app(app, message_queue = app.config['REDIS_URL'])
...
return app
wsgi.py
import uwsgi
from gevent.monkey import patch_all
patch_all()
print('Patching all!')
from app2 import create_app
application = create_app()
app2.ini
[uwsgi]
plugins-dir = /usr/local/lib/uwsgi
plugins = python37
#application's base folder
base = /home/ubuntu/app2
home = %(base)/venv
virtualenv = %(base)/venv
pythonpath = %(base)
mount = /app2=%(base)/wsgi.py
callable = application
socket = %(base)/app2.sock
chmod-socket = 666
chdir = %(base)
attach-daemon = %(virtualenv)/bin/celery -A celery_worker.celery worker
attach-daemon = %(virtualenv)/bin/celery -A celery_worker.celery beat
logto = /var/log/uwsgi/%n.log
processes = 20
vacuum = true
die-on-term = true
enable-threads = true
master = true
manage-script-name = true
http-websockets = true
gevent = 5000
#Workaround for flask send_file() failing on python 3 and uwsgi
wsgi-disable-file-wrapper = true
app2-site
server {
listen 1015 ssl default_server;
server_name server_name_2;
access_log /var/log/nginx/app2_access_log;
error_log /var/log/nginx/app2_error_log;
auth_basic off;
# SSL only
ssl on;
ssl_certificate /etc/nginx/ssl/nginx.crt;
ssl_certificate_key /etc/nginx/ssl/nginx.key;
ssl_session_timeout 5m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
location /socket.io/ {
include uwsgi_params;
uwsgi_buffering off;
uwsgi_pass unix:/home/ubuntu/app2/app2.sock;
proxy_http_version 1.1;
proxy_read_timeout 180s;
proxy_buffering on;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
}
location / {deny all;}
location = /app2 { rewrite ^ /app2/; }
location /app2/ {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://localhost:8080;
proxy_read_timeout 90;
proxy_redirect http://localhost:8080 https://server_name_2:1015/app2;
try_files $uri @app2; }
location @app2 {
include uwsgi_params;
uwsgi_read_timeout 180s;
uwsgi_send_timeout 180s;
proxy_read_timeout 180s;
uwsgi_pass unix:/home/ubuntu/app2/app2.sock;
}}
Based on my research, the problem is in uWSGI; for some reason it is not receiving any wss calls. From the client's perspective the socket connection shows as Finished instead of 101/Pending.
The issue persists no matter what client I use.
In app1 I can see each attempt and error of the socket connection in both the nginx and uwsgi (vassal) log files; in the case of app2 I can only see a 499 error for each socket connection attempt, without a matching entry in the vassal log.
Initially I blamed the uwsgi websocket server, suspecting it could host only one application, but I can freely duplicate app1 as many times as I want under different vassals and nginx sites, and the websocket connections are fine.
What I've tried
switching between lib versions (gevent must be >= 1.3.6)
using http socket instead of unix one
experiments with paths
juggling with buffer sizes on both nginx and uwsgi
Are there any known issues with python 3.7 & uwsgi & SocketIO integration? I'm out of ideas.

Socket.io, Express 4 and Nginx with SSL *AND CLUSTER* throw a 400 (Bad Request)?

I'm using nginx for web-facing traffic and proxying my node.js connections, as well as handling my SSL.
The connection IS successfully established--io.on('connection') does trigger a console log server side, but then I get a 400 (Bad Request) on the client (in both Firefox and Chrome) and then the connection resets over and over (and continues throwing the same error).
The error is as follows (from Chrome):
polling-xhr.js:264 GET https://192.168.56.101/socket.io/?EIO=3&transport=polling&t=M54C3iW&sid=byqOIkctI9uWOAU2AAAA 400 (Bad Request)
i.create # polling-xhr.js:264
i # polling-xhr.js:165
o.request # polling-xhr.js:92
o.doPoll # polling-xhr.js:122
n.poll # polling.js:118
n.onData # polling.js:157
(anonymous) # polling-xhr.js:125
n.emit # index.js:133
i.onData # polling-xhr.js:299
i.onLoad # polling-xhr.js:366
hasXDR.r.onreadystatechange # polling-xhr.js:252
XMLHttpRequest.send (async)
i.create # polling-xhr.js:264
i # polling-xhr.js:165
o.request # polling-xhr.js:92
o.doPoll # polling-xhr.js:122
n.poll # polling.js:118
n.doOpen # polling.js:63
n.open # transport.js:80
n.open # socket.js:245
n # socket.js:119
n # socket.js:28
n.open.n.connect # manager.js:226
n # manager.js:69
n # manager.js:37
n # index.js:60
(anonymous) # control.js:6
192.168.56.101/:1 WebSocket connection to 'wss://192.168.56.101/socket.io/?EIO=3&transport=websocket&sid=byqOIkctI9uWOAU2AAAA' failed: WebSocket is closed before the connection is established.
Nginx logs (at info level) show the following:
2018/01/29 19:37:10 [info] 28262#28262: *18403 client closed connection while waiting for request, client: 192.168.56.1, server: 192.168.56.101:443
My nginx config is as follows
(I HAVE tried this both with and without the "location /socket.io/ " block, and get exactly the same results.):
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
upstream altairServer {
server 192.168.56.101:8000;
}
server {
listen 192.168.56.101:443;
server_name altair.e6diaspora.com;
ssl on;
ssl_certificate /home/e6serv/crypto/domain.pem;
ssl_certificate_key /home/e6serv/crypto/server.key;
access_log /home/e6serv/logs/nginx/host.access.log;
error_log /home/e6serv/logs/nginx/host.error.log;
root /home/e6serv/e6Code/e6GS1/public;
location / {
try_files maintain.html $uri $uri/index.html @node;
}
location /socket.io/ {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://altairServer;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
location @node {
proxy_pass http://altairServer;
proxy_http_version 1.1;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-NginX-Proxy true;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_max_temp_file_size 0;
proxy_redirect off;
proxy_read_timeout 240s;
}
}
The relevant server side code is as follows:
const app = express();
app.set('port', 8000);
app.engine('html', require('ejs').renderFile);
app.use(methodOverride());
app.use(session({
secret: SITE_SECRET,
store: redisSesStore,
cookie: {maxAge: 604800000},
resave: false,
saveUninitialized: false
}));
app.use(parseCookie());
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));
app.use('/',router);
const httpServer = http.createServer(app)
const io = socketIo.listen(httpServer);
io.use(passportSocketIo.authorize({
key: 'connect.sid',
secret: SITE_SECRET,
store: redisSesStore,
passport: passport,
cookieParser: parseCookie
}));
httpServer.listen(app.get('port'), '192.168.56.101', function(){
log.warn('Worker Started HTTP Server')
});
io.on('connection', function(socket) {
log.debug(socket.request.user)
var event = { type:'userConnect',data:'Hello Client'};
process.send(event);
});
My client side code is as follows:
control.socket = io.connect('https://'+hostname);
console.log("Should be connected")
//NOTE: This final line does not work--the console.log never fires:
control.socket.on('userConnect',function (data) {console.log(data)})
I've discovered the source of the problem. The extra element involved here that I didn't know to mention was Node.js's Cluster module.
https://github.com/socketio/socket.io/issues/1942
https://socket.io/docs/using-multiple-nodes/
Socket.io defaults to polling, which requires sticky load balancing between the various workers. The solution was as described in the socket.io multiple-nodes documentation.
I added something like the following to my nginx config:
upstream io_nodes {
ip_hash;
server 127.0.0.1:6001;
server 127.0.0.1:6002;
server 127.0.0.1:6003;
server 127.0.0.1:6004;
}
(Also note, specific workers must be set up to listen on specific ports.)
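The worker side of that idea looks roughly like this (a sketch rather than my exact code; the ports must match the upstream block above):
const cluster = require('cluster');
const http = require('http');

if (cluster.isMaster) {
  // fork one worker per port listed in the nginx upstream block
  [6001, 6002, 6003, 6004].forEach((port) => cluster.fork({ PORT: port }));
} else {
  const app = require('express')();
  const server = http.createServer(app);
  const io = require('socket.io')(server);
  io.on('connection', (socket) => socket.emit('userConnect', 'Hello Client'));
  server.listen(Number(process.env.PORT), '127.0.0.1');
}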

Heroku nginx websockets

I've been trying to get websockets to work on heroku over nginx and seem to be stuck. I'm using the nginx-buildpack which has worked great but I haven't had any success thus far getting an upgraded websocket connection.
Here is my nginx.conf.erb, which is just slightly modified from the buildpack example...
daemon off;
#Heroku dynos have 4 cores.
worker_processes 4;
events {
use epoll;
accept_mutex on;
worker_connections 1024;
}
http {
gzip on;
gzip_comp_level 2;
gzip_min_length 512;
log_format l2met 'measure.nginx.service=$request_time request_id=$http_heroku_request_id';
access_log logs/nginx/access.log l2met;
error_log logs/nginx/error.log;
include mime.types;
default_type application/octet-stream;
sendfile on;
#Must read the body in 5 seconds.
client_body_timeout 5;
proxy_read_timeout 950s;
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen <%= ENV["PORT"] %>;
server_name _;
keepalive_timeout 5;
location /test {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://127.0.0.1:3000;
}
location / {
proxy_pass http://127.0.0.1:3001;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
}
}
}
To test this configuration, I've just been using the websocket.org Echo Test. Unfortunately it is not able to connect.
On port 3001 I've just got a simple socket.io server, and I'm logging any connections (none so far...)...
var app2 = require('http').createServer().listen(3001);
var io = require('socket.io').listen(app2);
io.on('connection', function(socketconnection){
socketconnection.send("Connected to Server-1");
console.log("connected to websocket server!");
socketconnection.on('message', function(message){
socketconnection.send(message);
});
});
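(As an aside, and this is only an assumption on my part: since Socket.io speaks its own protocol on top of WebSocket, a plain WebSocket tester may never complete the handshake against this server. A check with the matching client library would look roughly like this; the URL is a placeholder.)
// minimal socket.io-client check (URL is a placeholder)
var ioClient = require('socket.io-client');
var socket = ioClient('https://www.mydomain.com');
socket.on('connect', function () {
  console.log('socket.io handshake completed');
});
socket.on('message', function (message) {
  console.log('echo:', message);
});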
When I try connecting with the websocket tester, this is the error that I get in my Heroku logs:
*3 upstream prematurely closed connection while reading response header from upstream, client: 10.140.231.210, server: _, request: "GET /?encoding=text HTTP/1.1", upstream: "http://127.0.0.1:3001/?encoding=text", host: "www.mydomain.com"
Any ideas what I may be doing wrong here?
UPDATE 1:
Ok, I so I've found that if I alter the section of my config where it is says :
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
to:
map $http_upgrade $connection_upgrade {
default upgrade;
}
this seems to prevent that error from occurring. I suppose this is due to a blank response being returned on connect. However, now that I've done this I get a new error:
*5 connect() failed (111: Connection refused) while connecting to upstream, client: 10.99.212.2, server: _, request: "GET /?encoding=text HTTP/1.1", upstream: "http://127.0.0.1:3001/?encoding=text", host: "www.mydomain.com"
I suppose this may be an issue unrelated to the first, but I'm not sure!

Socket.io set cookie with nginx

My app architecture is as follows:
front-server 3000 - domain.com, serves files to the browser
api-server 3001 - api.domain.com
socket-server 3003 - io.domain.com
In dev mode, the socket request has all the HTTP request cookies, but in production mode with nginx (conf below), the socket request only has the io cookie.
This is the nginx conf (the socket-server part).
server {
server_name io.domain.com;
location / {
include proxy_params;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-NginX-Proxy true;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header X-Forwared-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_http_version 1.1;
proxy_pass http://127.0.0.1:3003;
}
location /socket.io/ {
include proxy_params;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-NginX-Proxy true;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header X-Forwared-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_http_version 1.1;
proxy_pass http://127.0.0.1:3003;
}
}
Here is the socket client:
const io = require('socket.io-client');
let socket;
if (process.env.NODE_ENV === 'production') {
socket = io.connect('http://io.domain.com/noti');
} else {
socket = io.connect('http://localhost:3003/noti');
}
module.exports = socket;
In the development environment it works well, but in production, because of this problem, I can't retrieve the user values.
I need the sessionId and token cookie values for auth, but both cookie values have disappeared.
What's wrong with it?
Most importantly, set the cookie with the domain.
For example, in Node.js:
res.setCookie({...
domain: 'domain.com'
});
And in the nginx conf:
proxy_cookie_domain io.domain.com domain.com;
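Once the cookie is scoped to the parent domain, its values arrive with the Socket.io handshake and can be read server side. A minimal sketch (the cookie package and this middleware are illustrative, not part of the original setup):
const cookie = require('cookie'); // any cookie parser works here
io.use((socket, next) => {
  const cookies = cookie.parse(socket.request.headers.cookie || '');
  // sessionId and token should now show up alongside the io cookie
  if (cookies.sessionId && cookies.token) return next();
  next(new Error('missing auth cookies'));
});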
