Flask-SocketIO returns status 400 when deployed to server - python-3.x

I'm using Flask-SocketIO to implement notifications to the multiple clients of my application.
Flask-SocketIO works well in my local development environment, but the problem appears once I deploy it to the server: it frequently returns status 400, and sometimes it brings the server down. I tried digging into the source to find the root cause but could not.
Below are the snippets I have a problem with.
Here is the init file that initializes the application:
from __future__ import absolute_import
import os
import redis
import flask_sqlalchemy as sa
from flask import Flask
from flask_socketio import SocketIO
from flask_cors import CORS
from .core.constant import MKT_BLUEPRINT, admin
# Initialize the core application
app = Flask(__name__, instance_relative_config=True)
CORS(app)
# Load the app configuration
app.config.from_object('mkt.config')
# redis address: "redis://localhost:6379/0"
redis_add = "redis://localhost:6379/0"
async_mode = None
notify_socketio = SocketIO(app, cors_allowed_origins="*",
                           message_queue=redis_add,
                           async_mode=async_mode)
db = sa.SQLAlchemy(app)
app.register_blueprint(admin, url_prefix='/%s'%MKT_BLUEPRINT)
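For context, the Redis message queue configured above is what allows processes other than the Socket.IO web server (for example a background worker) to push notifications to connected clients. Below is a minimal sketch, assuming the same Redis URL; the event name and room are placeholders, not taken from the asker's code:
# Hypothetical external emitter (e.g. a background job). A SocketIO object
# bound only to the message queue can emit to clients even though it is not
# the process that holds the websocket connections.
from flask_socketio import SocketIO

external_sio = SocketIO(message_queue="redis://localhost:6379/0")
external_sio.emit('notification', {'msg': 'hello'}, room='some_user')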
Then on the client side, I use this JS script:
$(document).ready(function() {
var domain = "{{notify_domain|safe}}"
var socket = io(domain);
socket.on('connect', function() {
var user = "{{user|safe}}"
console.log("im in connect", user)
socket.emit('join_room', {room: user});
});
});
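The question does not show the server-side handler for the 'join_room' event the client emits. For reference only, a sketch of what it might look like if it were added to the init file above (the handler name is an assumption):
# Hypothetical server-side counterpart to the client's 'join_room' emit:
# put the connecting client into a per-user room so notifications can be
# sent to that user only.
from flask_socketio import join_room

@notify_socketio.on('join_room')
def handle_join_room(data):
    join_room(data['room'])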
Then on the server, I configure nginx as below:
server {
listen 443 ssl;
server_name xxx;
include /config/nginx/ssl.conf;
location / {
proxy_read_timeout 300s;
proxy_connect_timeout 300s;
proxy_send_timeout 300s;
proxy_pass http://xxx:8000;
}
location /socket.io {
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Access-Control-Allow-Origin *;
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_pass http://xxx:8000/socket.io;
}
}
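The question does not show how the application is actually served on port 8000 behind this proxy. For reference only, a minimal sketch of a single-process entry point, assuming the package above is importable as mkt:
# Hypothetical run script; with Flask-SocketIO a single async worker is the
# usual deployment, e.g. via
#   gunicorn --worker-class eventlet -w 1 --bind 0.0.0.0:8000 "mkt:app"
# or started directly like this:
from mkt import app, notify_socketio

if __name__ == '__main__':
    notify_socketio.run(app, host='0.0.0.0', port=8000)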
And here are the dependencies I'm using in my application:
#for notification to client
Flask-SocketIO==5.1.1
python-engineio==4.2.1
python-socketio==5.4.0
Without any obvious misconfiguration, the errors still show up in the console log.
Till now I still don't know what causes this error 400 (Bad Request) or how to solve it. Please kindly help. Thanks so much.

Related

NodeJs App + AWS EC2 + Nginx + Websocket configuration

I'm running a Node.js app on an EC2 instance. The app runs node-rtsp-stream, which outputs a websocket that is then used with jsmpeg to display the stream in the web browser.
NGINX config port 80 (this works fine)
server {
listen 80 default_server;
listen [::]:80 default_server;
location / {
proxy_pass http://localhost:3000/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
NGINX websocket config (this doesn't work):
http {
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
upstream websocket {
server ws://localhost:9999;
}
server {
listen 80;
location / {
proxy_pass http://websocket;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $host;
}
}
}
app.js
Stream = require('node-rtsp-stream')
stream = new Stream({
    name: 'stream',
    streamUrl: 'rtsp://demo:demo@ipvmdemo.dyndns.org:554/onvif-media/media.amp',
    wsPort: 9999,
    ffmpegOptions: { // options ffmpeg flags
        '-stats': '', // an option with no necessary value uses a blank string
        '-r': 30 // options with required values specify the value after the key
    }
})
Script tag on HTML
player = new JSMpeg.Player('ws://localhost:9999', {
    canvas: document.getElementById('canvas')
})
Should this be calling 'ws://localhost:9999' or something else? The browser says it cannot find 'ws://localhost:9999'.
Each nginx config file is stored in sites-available
Thanks for your time!!!

Socket.io with Nodejs not working with nginx reverse proxy

I have a Node.js server app with Express and Socket.io (Ubuntu 18.04). It always worked fine until an nginx (1.14) reverse proxy entered the scene. The nginx server runs on a different machine than the Node.js apps, each app on its own VM, inside the same network.
Server and client are on version 2.1.1.
The nginx server is responsible for multiple app redirects.
I tried several configuration combinations but nothing works.
Here is what I've tried (examples for "company1"):
default.conf in /etc/nginx/conf.d
location /company1-srv/ {
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-NginX-Proxy true;
proxy_redirect off;
proxy_pass http://172.16.0.25:51001/;
}
Then in the client code I connect using the "path" option, because socket.io misplaces its library path.
// companySrv and URL is actually returned by another service (following code is for illustrative purposes):
let companyUrl = 'https://api.myserver.com/company1-srv';
let companySrv = '/company1-srv';
socket(companyUrl, {
path: companySrv + '/socket.io/'
});
I also tried removing the path option and configuring a specific location for the socket.io stuff (for testing purposes):
location /socket.io/ {
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-NginX-Proxy true;
proxy_redirect off;
proxy_pass http://172.16.0.25:51001/socket.io/;
}
Nothing worked.
It connects but doesn't emit anything, and after a short while (a minute or so) it becomes unavailable, raising the "disconnect" (reason: transport close) client event.
Server:
const io = require('socket.io')(https || http, {
transports: ['polling', 'websocket'],
allowUpgrades: true,
pingInterval: 60000*60*24,
pingTimeout: 60000*60*24
});
I also tried editing nginx.conf to add an "upstream socket_nodes { ..." block and use proxy_pass http://socket_nodes. It didn't make much sense, as I need an exact redirect depending on the company, but I tried it for the sake of testing and it doesn't work either.
What do I need to do?
Thanks
We also use socket.io with a reverse proxy from nginx. I can share a little bit of our setup; maybe it helps to rule things out.
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
}
stream {
log_format basic '$time_iso8601 $remote_addr '
'$protocol $status $bytes_sent $bytes_received '
'$session_time $upstream_addr '
'"$upstream_bytes_sent" "$upstream_bytes_received" "$upstream_connect_time"';
access_log /var/log/nginx/stream.log basic;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
gzip on;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
##
# Server Blocks
##
# DOMAINEXAMPLE A
server {
server_name exampleA.domain.com;
location / {
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $http_host;
proxy_pass http://192.168.21.105:5050;
}
}
# DOMAINEXAMPLE B
server {
server_name exampleB.domain.com;
location /api {
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $http_host;
proxy_pass http://192.168.21.106:5050;
}
}
}
The most interesting parts here are probably the server blocks:
# DOMAINEXAMPLE A
server {
server_name exampleA.domain.com;
location / {
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $http_host;
proxy_pass http://192.168.21.105:5050;
}
}
# DOMAINEXAMPLE B
server {
server_name exampleB.domain.com;
location /api {
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $http_host;
proxy_pass http://192.168.21.106:5050;
}
}
Domain Example A
For location / at http://192.168.21.105:5050 we have a NodeJS process running, including the setup for socket.io
const express = require('express');
const http = require('http');
const app = express();
const server = http.createServer(app);
const io = require('socket.io')(server);
Domain Example B
For location /api at http://192.168.21.106:5050 we have another NodeJS process running, including a slightly different setup for socket.io
const express = require('express');
const http = require('http');
const app = express();
const server = http.createServer(app);
const io = require('socket.io')(server, {path: '/api/socket.io'});
In both cases socket.io works perfectly fine for us
Connecting from Client (Example B)
What we actually do on the server side here is create a namespace for socket.io, like
const io= require('socket.io')(server, {path: '/api/socket.io'});
const nsp = io.of('/api/frontend');
and then on the client side, connect to it like
import io from 'socket.io-client'
const socket = io('https://exampleB.domain.com/api/frontend', {path: "/api/socket.io"});
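For what it's worth, the same two knobs (custom Engine.IO path and a namespace) also exist in Flask-SocketIO, the stack used in the original question. A rough sketch of the equivalent, with the path and namespace values copied from this answer rather than from any real app:
# Hypothetical Flask-SocketIO equivalent of the namespace + custom path
# setup shown above.
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app, path='/api/socket.io')

@socketio.on('connect', namespace='/api/frontend')
def on_connect():
    print('client connected to /api/frontend')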

Python 3.7 Flask-SocketIO + uWSGI + nginx configuration

I'm hosting two Python applications (app1 and app2) on an Ubuntu machine (16.04) under the same setup with different Python versions (2.7 and 3.7). Both are based on the Flask framework using Flask-SocketIO, running on uWSGI (2.0.17.1) behind an Nginx proxy.
I've been able to successfully implement websocket support in version 2.7, but I'm failing to do the same on 3.7.
The Nginx configuration and uwsgi settings are the same, with the only exception being a different uwsgi plugin (for the Python version).
In both cases I'm using the uwsgi websocket server (via SocketIO) with a Redis queue.
Apart from the websocket problem, app2 works just fine.
Python Setup
Python 2.7 Libs:
Flask==0.12.2
Flask-SocketIO==2.9.4
gevent==1.2.2
greenlet==0.4.13
Python 3.7 Libs:
Flask==1.0.2
Flask-Script==2.0.6
gevent==1.3.7
greenlet==0.4.15
uWSGI - 2.0.17.1
Working configuration of app1:
__init__.py
app = Flask(__name__)
# SocketIO
try:  # This step is required only for version deployed on UWSGI
    import uwsgi
    socketio = SocketIO(app, message_queue=app.config['REDIS_QUEUE_URL'])
except ImportError:
    print 'Application runs outside of uWSGI context'
    socketio = SocketIO(app)
manage.py
from flask_script import Manager
from app1 import app, socketio

manager = Manager(app)

@manager.command
def runserver(host=None, port=None, socket=True):
    if not host:
        host = 'localhost'
    if not port:
        port = 5000
    if socket:
        socketio.run(app)
    else:
        app.run(host, port, debug=False)
app1.ini
[uwsgi]
plugins-dir = /usr/local/lib/uwsgi
plugins = python27
#application's base folder
base = /home/ubuntu/app1
#python module to import
app = manage
module = %(app)
home = %(base)/venv
virtualenv = %(base)/venv
pythonpath = %(base)
#socket file's location
socket = %(base)/app1.sock
#permissions for the socket file
chmod-socket = 666
callable = app
logto = /var/log/uwsgi/%n.log
processes = 20
http-websockets = true
gevent = 500
vacuum = true
die-on-term = true
enable-threads = true
master = true
app1-site
server {
listen 1014 ssl default_server;
server_name server_name_1;
access_log /var/log/nginx/app1_access_log;
error_log /var/log/nginx/app1_error_log;
auth_basic off;
# SSL only
ssl on;
ssl_certificate /etc/nginx/ssl/nginx.crt;
ssl_certificate_key /etc/nginx/ssl/nginx.key;
ssl_session_timeout 5m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
location /socket.io/ {
include uwsgi_params;
uwsgi_pass unix:/home/ubuntu/app1/app1.sock;
proxy_http_version 1.1;
proxy_read_timeout 180s;
proxy_buffering on;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
}
location / {deny all;}
location = /app1{ rewrite ^ /app1/; }
location /app1{
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://localhost:8080;
proxy_read_timeout 90;
proxy_redirect http://server:8080 https://server_name_1:1014/app1;
try_files $uri @app1; }
location @app1 {
include uwsgi_params;
uwsgi_param SCRIPT_NAME /app1;
uwsgi_modifier1 30;
uwsgi_read_timeout 180s;
uwsgi_send_timeout 180s;
proxy_read_timeout 180s;
uwsgi_pass unix:/home/ubuntu/app1/app1.sock;
}}
Configuration of app2
__init__.py
def create_app():
    ...
    app = Flask(__name__)
    socket_io.init_app(app, message_queue=app.config['REDIS_URL'])
    ...
    return app
wsgi.py
import uwsgi
from gevent.monkey import patch_all
patch_all()
print('Patching all!')
from app2 import create_app
application = create_app()
app2.ini
[uwsgi]
plugins-dir = /usr/local/lib/uwsgi
plugins = python37
#application's base folder
base = /home/ubuntu/app2
home = %(base)/venv
virtualenv = %(base)/venv
pythonpath = %(base)
mount = /app2=%(base)/wsgi.py
callable = application
socket = %(base)/app2.sock
chmod-socket = 666
chdir = %(base)
attach-daemon = %(virtualenv)/bin/celery -A celery_worker.celery worker
attach-daemon = %(virtualenv)/bin/celery -A celery_worker.celery beat
logto = /var/log/uwsgi/%n.log
processes = 20
vacuum = true
die-on-term = true
enable-threads = true
master = true
manage-script-name = true
http-websockets = true
gevent = 5000
#Workaround for flask send_file() failing on python 3 and uwsgi
wsgi-disable-file-wrapper = true
app2-site
server {
listen 1015 ssl default_server;
server_name server_name_2;
access_log /var/log/nginx/app2_access_log;
error_log /var/log/nginx/app2_error_log;
auth_basic off;
# SSL only
ssl on;
ssl_certificate /etc/nginx/ssl/nginx.crt;
ssl_certificate_key /etc/nginx/ssl/nginx.key;
ssl_session_timeout 5m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
location /socket.io/ {
include uwsgi_params;
uwsgi_buffering off;
uwsgi_pass unix:/home/ubuntu/app2/app2.sock;
proxy_http_version 1.1;
proxy_read_timeout 180s;
proxy_buffering on;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
}
location / {deny all;}
location = /app2 { rewrite ^ /app2/; }
location /app2/ {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://localhost:8080;
proxy_read_timeout 90;
proxy_redirect http://localhost:8080 https://server_name_2:1015/app2;
try_files $uri @app2; }
location @app2 {
include uwsgi_params;
uwsgi_read_timeout 180s;
uwsgi_send_timeout 180s;
proxy_read_timeout 180s;
uwsgi_pass unix:/home/ubuntu/app2/app2.sock;
}}
Based on my research, the problem is in uWSGI: for some reason it is not receiving any wss calls. From the client's perspective the socket connection is "Finished" instead of "101 Pending".
The issue persists no matter which client I use.
In app1 I can see every attempt and error of the socket connection in both the nginx and uwsgi (vassal) log files; in the case of app2 I only see a 499 error for each socket connection attempt, without a matching entry in the vassal log.
Initially I blamed the uwsgi websocket server, thinking it could host only one application, but I can freely duplicate app1 as many times as I want under different vassals and nginx sites, and the websocket connections are fine.
What I've tried
switching between lib versions (gevent must be >= 1.3.6)
using http socket instead of unix one
experiments with paths
juggling with buffer sizes on both nginx and uwsgi
Are there any known issues with python 3.7 & uwsgi & SocketIO integration? I'm out of ideas.

Socket.io, Express 4 and Nginx with SSL *AND CLUSTER* throw a 400 (Bad Request)?

I'm using nginx for web-facing traffic and proxying my node.js connections, as well as handling my SSL.
The connection IS successfully established--io.on('connection') does trigger a console log server side, but then I get a 400 (Bad Request) on the client (in both Firefox and Chrome) and then the connection resets over and over (and continues throwing the same error).
The error is as follows (from Chrome):
polling-xhr.js:264 GET https://192.168.56.101/socket.io/?EIO=3&transport=polling&t=M54C3iW&sid=byqOIkctI9uWOAU2AAAA 400 (Bad Request)
i.create # polling-xhr.js:264
i # polling-xhr.js:165
o.request # polling-xhr.js:92
o.doPoll # polling-xhr.js:122
n.poll # polling.js:118
n.onData # polling.js:157
(anonymous) # polling-xhr.js:125
n.emit # index.js:133
i.onData # polling-xhr.js:299
i.onLoad # polling-xhr.js:366
hasXDR.r.onreadystatechange # polling-xhr.js:252
XMLHttpRequest.send (async)
i.create # polling-xhr.js:264
i # polling-xhr.js:165
o.request # polling-xhr.js:92
o.doPoll # polling-xhr.js:122
n.poll # polling.js:118
n.doOpen # polling.js:63
n.open # transport.js:80
n.open # socket.js:245
n # socket.js:119
n # socket.js:28
n.open.n.connect # manager.js:226
n # manager.js:69
n # manager.js:37
n # index.js:60
(anonymous) # control.js:6
192.168.56.101/:1 WebSocket connection to 'wss://192.168.56.101/socket.io/?EIO=3&transport=websocket&sid=byqOIkctI9uWOAU2AAAA' failed: WebSocket is closed before the connection is established.
Nginx logs (at info level) show the following:
2018/01/29 19:37:10 [info] 28262#28262: *18403 client closed connection while waiting for request, client: 192.168.56.1, server: 192.168.56.101:443
My nginx config is as follows
(I HAVE tried this both with and without the "location /socket.io/ " block, and get exactly the same results.):
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
upstream altairServer {
server 192.168.56.101:8000;
}
server {
listen 192.168.56.101:443;
server_name altair.e6diaspora.com;
ssl on;
ssl_certificate /home/e6serv/crypto/domain.pem;
ssl_certificate_key /home/e6serv/crypto/server.key;
access_log /home/e6serv/logs/nginx/host.access.log;
error_log /home/e6serv/logs/nginx/host.error.log;
root /home/e6serv/e6Code/e6GS1/public;
location / {
try_files maintain.html $uri $uri/index.html @node;
}
location /socket.io/ {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://altairServer;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
location @node {
proxy_pass http://altairServer;
proxy_http_version 1.1;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-NginX-Proxy true;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_max_temp_file_size 0;
proxy_redirect off;
proxy_read_timeout 240s;
}
}
The relevant server side code is as follows:
const app = express();
app.set('port', 8000);
app.engine('html', require('ejs').renderFile);
app.use(methodOverride());
app.use(session({
    secret: SITE_SECRET,
    store: redisSesStore,
    cookie: {maxAge: 604800000},
    resave: false,
    saveUninitialized: false
}));
app.use(parseCookie());
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));
app.use('/', router);
const httpServer = http.createServer(app)
const io = socketIo.listen(httpServer);
io.use(passportSocketIo.authorize({
    key: 'connect.sid',
    secret: SITE_SECRET,
    store: redisSesStore,
    passport: passport,
    cookieParser: parseCookie
}));
httpServer.listen(app.get('port'), '192.168.56.101', function(){
    log.warn('Worker Started HTTP Server')
});
io.on('connection', function(socket) {
    log.debug(socket.request.user)
    var event = { type: 'userConnect', data: 'Hello Client' };
    process.send(event);
});
My client side code is as follows:
control.socket = io.connect('https://'+hostname);
console.log("Should be connected")
//NOTE: This final line does not work--the console.log never fires:
control.socket.on('userConnect',function (data) {console.log(data)})
I've discovered the source of the problem. The extra element here that I didn't know to mention was Node.js's Cluster.
https://github.com/socketio/socket.io/issues/1942
https://socket.io/docs/using-multiple-nodes/
Socket.io defaults to polling, which requires sticky load balancing between the various workers. The solution was found in the socket.io multiple-nodes documentation.
I added something like the following to my nginx config:
upstream io_nodes {
ip_hash;
server 127.0.0.1:6001;
server 127.0.0.1:6002;
server 127.0.0.1:6003;
server 127.0.0.1:6004;
}
(Also note that specific workers must be set up to listen on specific ports.)
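The same constraint applies to the Flask-SocketIO setup in the original question whenever more than one worker process is used: clients must stick to one process, and cross-process emits go through the message queue. A hypothetical Python sketch of one worker per port (ports and the Redis URL are placeholders), which an ip_hash upstream like the one above can then pin clients to:
# Hypothetical per-port Flask-SocketIO worker: start one process per port so
# that long-polling requests from a given client always reach the same
# process that owns its Engine.IO session.
import sys
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app, message_queue="redis://localhost:6379/0")

if __name__ == '__main__':
    socketio.run(app, host='127.0.0.1', port=int(sys.argv[1]))  # e.g. 6001, 6002, ...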

How do I setup NGINX with a reverse proxy to port 80 for two apps that both need socket.io?

I've been going around on this for a few days now. I get so close, and then a connection seems to die or socket.io cannot be found. But then maybe I'm doing it wrong?
My NGINX file looks something like this:
upstream appOne {
server demo.someserver.com:1111;
}
upstream appTwo {
server demo.someserver.com:2222;
}
location /appOne/ {
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_http_version 1.1;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_pass http://appOne/;
}
location /appTwo/ {
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_http_version 1.1;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_pass http://appTwo/;
}
location /socket.io/ {
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_http_version 1.1;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_pass http://appOne/socket.io/;
}
So what I'm trying to do here is have appOne running in a subfolder at demo.someserver.com/appOne and appTwo running in a subfolder at demo.someserver.com/appTwo, with both behind the reverse proxy.
Everything connects great, except both apps need socket.io to run and shouldn't really need to connect to each other (although I'm starting to think that wouldn't be a bad idea). But at the moment they both connect to appOne/socket.io/socket.io.js because of the last NGINX location. This causes all sorts of problems when connecting, like the socket connection not being on the same port, etc.
What I'm trying to avoid is naming the ports and the app name inside any frontend JS files, as appOne and appTwo in this context could be clientOne and clientTwo.
I did think of something like this:
if ($request_uri == 'appOne') {
proxy_pass http://appOne/socket.io/;
}
if ($request_uri == 'appTwo') {
proxy_pass http://appTwo/socket.io/;
}
But I have no idea how that actually works. Any pointers, or has anyone tried to do something similar?
So my question is: how can I have separate connections to socket.io through the reverse proxy? Or should I have one socket.io connection and have both apps attach to that? (But I could have multiple clients on one server.)
If you need two separate socket.io apps, you can achieve this by setting the (undocumented) path option when initializing socket.io on the client.
To be complete, I will provide a full working example of the Nginx config and Node files:
nginx config:
upstream appOne {
server demo.someserver.com:1111;
}
upstream appTwo {
server demo.someserver.com:2222;
}
server {
listen 80;
server_name demo.someserver.com;
root /path/to/working/dir; #probably not necessary
location /appOne/ {
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_http_version 1.1;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_pass http://appOne/;
}
location /appTwo/ {
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_http_version 1.1;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_pass http://appTwo/;
}
# no need for /socket.io location
# each app will connect socket.io via /appOne/socket.io or /appTwo/socket.io
}
app1.js and app2.js (Express + Socket.io example):
var app = require('express')();
var server = require('http').Server(app);
var io = require('socket.io')(server);
var port = 1111; //or 2222 for app2.js

app.get('*', function(req, res) {
    res.sendFile(__dirname + '/index1.html'); //or index2.html for app2.js
});

io.on('connection', function(socket) {
    socket.emit('hello', {port: port});
});

server.listen(port);
index1.html and index2.html:
<!DOCTYPE html>
<html>
<head>
    <script src="/appOne/socket.io/socket.io.js"></script>
    <!--<script src="/appTwo/socket.io/socket.io.js"></script>-->
    <script>
        var socket = io('/', {path: '/appOne/socket.io'});
        //var socket = io('/', {path: '/appTwo/socket.io'});
        socket.on('hello', function(data) {
            console.log(data.port);
        });
    </script>
</head>
<body>
    <h1>app</h1>
</body>
</html>
So if you launch both app1.js and app2.js and navigate to
http://demo.someserver.com/appOne
and then
http://demo.someserver.com/appTwo
you will see in your console 1111 and 2222 respectively, which means that you have two independent socket.io apps.
You can set a custom path to socket.io in your script.
Sets the path v under which engine.io and the static files will be
served. Defaults to /socket.io.
If no arguments are supplied this method returns the current value.
Source: http://socket.io/docs/server-api/#server#path%28v:string%29:server
