I have a chat application using socket.io. I deployed the code on two servers (ip1, ip2) and use nginx for load balancing.
This is my nginx config:
upstream socket_nodes {
    ip_hash;
    server ip1:1654;
    server ip2:1653;
}

server {
    listen 1653;
    server_name livechatsoftware.com.vn;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-NginX-Proxy true;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://socket_nodes;
        proxy_redirect off;
    }
}
Everything works well when clients connect to the same server, but if client 1 connects to server ip1 and client 2 connects to server ip2, they cannot interact (e.g. they cannot emit or send messages to each other).
Thanks in advance.
I am using nginx to load balance my node processes (express.js + mongodb + socket.io + passport.js). Although they all run on the same server, socket.io events still don't get shared between them, so I've run into exactly the same issue. I solved it with mong.socket.io (https://github.com/tmfkmoney/mong.socket.io), which stores socket.io events in a Mongo store; that way each socket.io process reads the events from a centralized store.
You also need a central store through which socket.io events can be accessed from any of the servers. I've never used that kind of setup myself, but check out the package below:
https://www.npmjs.com/package/socket.io-redis
I'd install Redis on one of my servers and point my socket.io processes at that server, like below:
io.adapter(redis({ host: 'localhost', port: 6379 })); // replace 'localhost' with the IP address and port of the remote Redis server
You can also use the other adapter options mentioned in the link I shared above.
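As a rough sketch, here is how that wiring could look on each of the two app servers; the Redis host, port, and event name are placeholders rather than values from the question, and the snippet assumes socket.io 1.x/2.x together with the socket.io-redis package:

var server = require('http').createServer();
var io = require('socket.io')(server);
var redisAdapter = require('socket.io-redis');

// Point every instance (ip1 and ip2) at the same Redis server so
// broadcasts are relayed between instances.
io.adapter(redisAdapter({ host: 'ip1', port: 6379 })); // placeholder host/port

io.on('connection', function (socket) {
    socket.on('chat message', function (msg) {
        // This now reaches clients connected to either ip1 or ip2.
        io.emit('chat message', msg);
    });
});

server.listen(1654);

With the adapter in place, the ip_hash config above can stay as it is: nginx only decides which instance a client sticks to, while Redis relays events between the instances.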
I've been learning frontend development only and just recently went over the basics of Node.js. I know that I would connect to a certain port number when developing with Node.js alone. However, I'm confused about how I would connect a Vue application (built with the Vue CLI) to the backend, since npm run serve automatically serves on port 8080 by default.
My ultimate goal is to connect MongoDB to my application. The current error I'm getting is Error: Can't resolve 'dns'.
TLDR: Could someone please explain, in newbie terms, how I can connect a Vue application to MongoDB?
In my opinion, you have two ways of solving this:
First, there is a field called devServer in vue.config.js through which you can tweak the configuration of the dev server that starts when you run npm run serve. In particular, pay attention to its proxy field, which lets you tell the dev server to route certain requests to your node backend.
Second, depending on your setup, you could use a different host altogether for backend calls. For example, as you mentioned, the dev server runs on 8080 by default. You could run your node backend on, say, 8081, and every backend request your VueJS app makes would then explicitly use the host <host>:8081. When you move to production and get SSL certificates, you can have a reverse proxy like Nginx forward all requests for, say, api.example.com to port 8081.
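As a small illustration of that second option, a request from the Vue app would then name the backend host explicitly; the endpoint below is made up, and note that a cross-origin setup like this also needs CORS enabled on the node backend during development:

// Somewhere in a Vue component -- hypothetical endpoint, shown only to
// illustrate calling the backend on its own port.
fetch('http://localhost:8081/api/items')
    .then(function (res) { return res.json(); })
    .then(function (items) { console.log('items from the backend:', items); })
    .catch(function (err) { console.error('backend request failed:', err); });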
As for connections to MongoDB, IMO, here's a question you should be asking yourself:
Is it safe to provide clients direct access to the database?
If the answer is yes, then by all means, ensure the MongoDB server starts with its HTTP interface enabled, set up some access restrictions, update the proxy and/or nginx, and you're good to go.
If the answer is no, then you're going to have to write light-weight API endpoints in your NodeJS app. For example, instead of allowing users to directly talk to the database to get their list of privileges, you instead make a request to your NodeJS app via GET /api/privileges, and your NodeJS app will in turn communicate with your database to get this data and return it to the client.
Another benefit of having the backend talk to your database rather than the client is that your database instance's details are never exposed to malicious clients.
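To make that concrete, here is a rough sketch of such an endpoint using Express and the official mongodb driver; the connection string, database, collection, and query are placeholders and will depend on your own schema:

// A lightweight API endpoint in front of MongoDB -- only the backend
// ever talks to the database; the client just receives JSON.
const express = require('express');
const { MongoClient } = require('mongodb');

const app = express();
const client = new MongoClient('mongodb://localhost:27017'); // placeholder connection string

app.get('/api/privileges', async (req, res) => {
  try {
    const privileges = await client
      .db('myapp')                  // placeholder database name
      .collection('privileges')     // placeholder collection name
      .find({ userId: req.query.userId })
      .toArray();
    res.json(privileges);
  } catch (err) {
    res.status(500).json({ error: 'database error' });
  }
});

client.connect().then(() => app.listen(8081)); // 8081 as in the example above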
Here's a sample vue.config.js setup that I have on one of my websites:
const proxyPath = 'https://api.example.com'

module.exports = {
  devServer: {
    port: 8115, // Change the port from 8080
    public: 'dev.example.com',
    proxy: {
      '/api/': {
        target: proxyPath
      },
      '/auth/': {
        target: proxyPath
      },
      '/socket.io': {
        target: proxyPath,
        ws: true
      },
      '^/websocket': {
        target: proxyPath,
        ws: true
      }
    }
  }
}
Here's the nginx config for the same dev server. I quickly pulled what I could from our production config and obscured certain fields for safety. Consider this as pseudo-code (pseudo-config?).
server {
    listen 443 ssl;
    server_name dev.example.com;
    root "/home/www/workspace/app-dev";

    set $APP_PORT "8115";

    location / {
        # Don't allow robots to access the dev server
        if ($http_user_agent ~* "baiduspider|twitterbot|facebookexternalhit|rogerbot|linkedinbot|embedly|quora link preview|showyoubot|outbrain|pinterest|slackbot|vkShare|W3C_Validator|Googlebot") {
            return 404;
        }

        # Redirect all requests to the vue dev server at localhost:$APP_PORT
        proxy_pass $scheme://127.0.0.1:$APP_PORT$request_uri;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $http_connection;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
server {
    listen 443 ssl;
    server_name api.example.com;

    set $APP_PORT "8240";

    location / {
        # Don't allow robots to access the dev server
        if ($http_user_agent ~* "baiduspider|twitterbot|facebookexternalhit|rogerbot|linkedinbot|embedly|quora link preview|showyoubot|outbrain|pinterest|slackbot|vkShare|W3C_Validator|Googlebot") {
            return 404;
        }

        # Redirect all requests to the NodeJS backend at localhost:$APP_PORT
        proxy_pass $scheme://127.0.0.1:$APP_PORT$request_uri;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $http_connection;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
I've been able to find guides pertaining to various combinations of nginx, node, ssl, and websockets, but never all together and never on a single server.
Ultimately, this is what I'm after and I'm not sure if it's even possible:
single server (Ubuntu 14.04)
forced HTTPS (browsing to http://site forwards to https://)
node app is hosted on localhost:3000
node app uses web sockets
it's a single-page React app with no routing at all, so I don't need routes. I repeat, I'm only hosting one page with no navigation whatsoever.
With the below config, I have everything working except websockets - the client throws an error which doesn't happen if I browse straight to the node server and don't use nginx (browse to http://my.domain:3000):
bundle.js:26 WebSocket connection to 'wss://<my domain>/socket.io/?EIO=3&transport=websocket&sid=x1uQtRzF3gYYEvfIAAAi' failed: Error during WebSocket handshake: Unexpected response code: 400
server {
    listen 80;
    return 301 https://my.domain$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443;

    ssl_certificate /path/cert.crt;
    ssl_certificate_key /path/key.key;
    ssl_session_cache shared:SSL:10m;

    server_name blaze.chat;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Host $host;
        proxy_redirect http://localhost:3000 https://my.domain;
    }
}
Right, got it working... I found a lot of articles showing similar setups, but all of them were missing these key lines:
proxy_set_header Connection "upgrade";
proxy_read_timeout 86400;
In my case, websockets won't work without those lines, contrary to many other posts where similar questions were asked; I'm not sure what the difference was. Ironically, it is listed as a requirement in the Nginx WebSocket proxying documentation... I should have started there, my bad.
http://nginx.org/en/docs/http/websocket.html
Side note: I am only using this on the root path /, which works fine.
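For reference, this is roughly what the location block from the question looks like with those two lines folded in (domain and port kept as in the question):

location / {
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";   # the missing line that lets the WebSocket upgrade through
    proxy_read_timeout 86400;                # keep long-lived socket connections from timing out
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_redirect http://localhost:3000 https://my.domain;
}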
We have a nodejs app that currently uses socket.io (with namespaces). The app serves as a dashboard for a specific financial market: each instance subscribes to one market's data and provides a dashboard for it. Initially we were running 3 separate instances configured for 3 separate markets on the server, each binding to its own port for serving requests.
Since we plan to add more markets, it makes sense to put a reverse proxy in front so that a single port (with a separate URI per market) can be used. However, setting up nginx has been a nightmare for various reasons.
(a) Each market's instance of the app can be at a different development stage and hence can have different static files. Managing all the static files via nginx seems painful. What can be done to leave handling of the static files to the app itself?
(b) socket.io communication is a failure. We looked at the network traffic and it seems we keep getting a 404 Not Found error when trying to connect to the socket.io server. We are not sure why it connects via http://localhost/server.io/ instead of ws://localhost/server.io/. Can somebody point us to a similar example? Is there anything that needs to be taken care of?
In our case we have been trying the following inside nginx's sites-available/default:
location /app/ {
    proxy_pass http://localhost:3000/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
    #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    # kill cache
    add_header Last-Modified $date_gmt;
    add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
    if_modified_since off;
    expires off;
    etag off;
}
Using nginx as a reverse proxy should not give you a hard time. The great thing about nginx is that you can host multiple projects on the same server under different domains.
Here is an example of nginx with multiple projects:
server {
    listen 80;
    server_name yourdomain.com;

    location / {
        proxy_pass http://localhost:3000;
        # Remember to set the headers like this, otherwise the socket might not work.
        proxy_set_header X-Real-IP $remote_addr;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name subdomain.yourdomain.com;

    location / {
        proxy_pass http://localhost:3001;
    }
}
I'm not sure why your socket should fail. Perhaps the mistake is that you try to define the route on the client side. Try having the JavaScript like this:
var socket = io();
or if your socket runs on one of your other applications:
var socket = io('http://yourdomain.com');
And remember that your changes need to end up in sites-enabled, not just sites-available.
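If you do need to keep the app under a sub-path such as /app/ (as in the location /app/ block from the question), one thing worth checking is socket.io's path option, since the client defaults to /socket.io/ at the domain root. A hedged sketch, with /app/socket.io purely as an illustrative value:

// Server side -- serve the socket.io endpoint under the proxied prefix.
var io = require('socket.io')(server, { path: '/app/socket.io' });

// Client side -- point the client at the same prefix so the handshake
// goes through nginx's "location /app/" block instead of 404ing at /socket.io/.
var socket = io('http://yourdomain.com', { path: '/app/socket.io' });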
I am working on a node.js application using express to serve content and socket.io for websocket communication. The setup has been working fine, but now I want to be able to access the websocket via SSL, too. I thought using nginx (which we already used for other stuff) as a proxy was a good idea, and configured it like this:
upstream nodejs {
    server 127.0.0.1:8080;
}

server {
    listen 443 ssl;

    ssl_certificate /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;

    server_name _;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://nodejs;
        proxy_redirect off;
    }
}
The node.js server is set up like this:
var express = require('express'),
    http = require('http'),
    app = express();

// some app configuration that I don't think matters

var server = http.createServer(app).listen(8080);
var io = require('socket.io').listen(server);

io.configure(function() {
    io.set('match original protocol', true);
    io.set('log level', 0);
    io.set('store', redisStore); // creation of redisStore not shown
});
Both nginx and node.js run inside a Vagrant box which forwards port 443 (which nginx listens on) to port 4443 on the host system.
With this setup, navigating to https://localhost:4443 (using Firefox 23) gives me access to the files served by Express, but when socket.io tries to connect to the socket, it throws the following error:
Blocked loading mixed active content "http://localhost:4443/socket.io/1/?t=1376058430540"
This outcome is sadly obvious, as it tries to load the JS file via HTTP from inside an HTTPS page, which Firefox does not allow. The question is why it does so in the first place.
Socket.io tries to determine which protocol is used to access the web page, and uses the same protocol in the construction of the above URL. In this case, it thinks it is being accessed over HTTP, which may be the result of being proxied. However, as I understand, setting match original protocol to true in the socket.io config is supposed to help in situations like this, but it does not in my case.
I have found numerous questions and answers here about websocket proxying, but none that deal with this particular issue. So I'm pretty much at wit's end, and would really appreciate some advice.
Change match original protocol to match origin protocol:
io.configure(function() {
    //io.set('match original protocol', true);
    io.set('match origin protocol', true);
    ...
});
I have an NGINX instance (1.4 stable) in front of a few NodeJS instances. I'm trying to load balance with NGINX using the upstream module like so:
upstream my_web_upstream {
    server localhost:3000;
    server localhost:8124;
    keepalive 64;
}

location / {
    proxy_redirect off;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;
    proxy_set_header Connection "";
    proxy_http_version 1.1;
    proxy_cache one;
    proxy_cache_key sfs$request_uri$scheme;
    proxy_pass http://my_web_upstream;
}
The problem occurs when the instance at port 3000 is not available. I get a 502 Bad Gateway from NGINX.
If I change the upstream config to just point at one instance, 8124 for example, the 502 still occurs.
Running a netstat shows 0 other applications listening on any of the ports I've tried.
Why is NGINX reporting a bad gateway? How can I get NGINX to do a fallthrough if one of the instances is down?
If netstat shows that your nodejs applications aren't running on the ports, then the problem is that you haven't started your nodejs applications.
This nginx config knows how to proxy to the nodejs application, but you are guaranteed to get a 502 if the nodejs application has not been started. If you want to run it on multiple ports, you have to start the application on each port: don't hardcode port 3000 into the NodeJS code, but make it take the port from an environment variable, or spawn multiple instances using a process manager like pm2 (https://github.com/Unitech/pm2). Once these are running, nginx can proxy to them.
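A minimal sketch of taking the port from an environment variable (the variable name PORT and the file name are conventions, not something from the question):

// app.js -- read the listen port from the environment instead of hardcoding 3000.
const express = require('express');
const app = express();

const port = process.env.PORT || 3000; // fall back to 3000 for local development

app.get('/', (req, res) => res.send('hello from the instance on port ' + port));

app.listen(port, () => console.log('listening on ' + port));

One instance can then be started with PORT=3000 node app.js and another with PORT=8124 node app.js (or the pm2 equivalent), so both upstream entries in the nginx config actually have something listening behind them.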