Bottleneck with sockets approach? - node.js

Thinking of creating a real-time app where users can collaborate. I found Node.js + Socket.IO to be one of the solutions for this type of problem.
I hear from other developers that there will be a bottleneck in the number of sockets my server can give to users. So if I have hundreds of users collaborating at the same time, the number of open sockets will run out and users will not be able to connect. Is this a valid concern?
Update: on a sort of related note, I'm looking at using SockJS instead of Socket.IO. There is a thread that explains the pros and cons of these libraries. Also, this is a good read.

For hundreds of users I don't think it is a concern.
Sockets, as you know, keep a persistent connection open between the client and the server, and both parties can start sending data at any time. Keeping them open is not the problem so much as handling the load in terms of messages sent per second.
Socket.IO can easily handle 1000 concurrent connections, but it will fail if it is sending more than 8-10k messages per second. You will hit that load barrier before your sockets are exhausted; in most cases, handling more concurrent users translates to higher load. So don't worry about running low on sockets. Scaling beyond that barrier would require more server resources.
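To make the connections-versus-throughput distinction concrete, here is a minimal broadcast server sketch (the port and event name are arbitrary, and the API shown is current Socket.IO, not the 0.9-era one this answer predates): the idle open sockets are cheap, but every emit inside the handler multiplies into one message per connected client.

```js
const http = require('http');
const { Server } = require('socket.io');

const httpServer = http.createServer();
const io = new Server(httpServer);

io.on('connection', (socket) => {
  socket.on('chat message', (msg) => {
    // One incoming message fans out to every connected client, so total
    // messages/second grows with (senders x connected clients), while
    // the held-open sockets themselves cost very little.
    io.emit('chat message', msg);
  });
});

httpServer.listen(3000);
```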
Helpful links:
Socket.IO - are the open connections a concern?
http://www.quora.com/How-do-I-scale-socket-io-servers-2

There are already solutions using this approach, like Cloud9, and it works well. There will be a point where you will need to scale out, though, so if you are planning something big I would think about it.
Here are some tests of Socket.IO with 10,000 concurrent connections. It looks like a good solution, but not an easy one, because of the fallback mechanism.

Related

What's the best way to share objects between webservers on different hosts

TL;DR:
I am wondering what is the best way to reliably share an object or other data between n webservers on n machines?
I have looked at the likes of Redis, but it seems that this is not what I am actually looking for here. I am now thinking that something like IPC over remote / RPC might be more appropriate. Is there a better way to do this, given that it will be called at minimum 10 times over a 30-second interval, and that this can grow exponentially as the number of users running servers grows too?
Example & current use case:
I run a multiplayer mod for a game which receives a decent level of traffic, and we are starting to notice cases where requests sometimes get dropped. The backend webserver is written in Node.js and uses Express in a couple of places too. We are in the process of restructuring the system, and we have now come to the part that handles a heartbeat from each server that members of the public host. This information is then shared with the players so they can decide which server to join.
Based on my own research, I am looking to host the service on several different machines for redundancy. These machines are linked over a vlan / vswitch so that they have a secure way to communicate with each other. The database system is already set up to replicate this way, but I cannot see a performant way to handle sharing the objects containing information about the servers that have communicated with each webhost.
If it helps, the system works something like this:
User's server -> my load balancer -> webhost (backend).
Player -> my load balancer -> webhost (backend), which returns info on all currently online servers.
What is currently in use, as in the example above, is a single-instance webserver which handles the requests and processing needed.
Just an idea while the community proposes answers: consider reading about Apache Thrift. It is not so much IPC-like as RPC-like. If the architecture of your servers, or of the different components of the "backend network", is a "star" with one "master", I would consider that possibility.
If the architecture of your backend is not like that, but rather a group of "independent" entities, what comes to mind is to solve this with some "data bus", such as a private MQTT broker with a group of members subscribing to or publishing data for the rest of the network. The best serialization strategy for the object would, in my opinion, be Google Protobuf.
The integration of MQTT with Node.js is very simple, and if the weight of the packets is not too big, and you can accept some latency, I would really recommend you run some tests using MQTT with publish/subscribe at QoS=2. It would not take great effort to swap out the underlying communications library that you are using.
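As a rough illustration of that data-bus idea, here is a sketch using the `mqtt` npm client. The broker address and topic layout are made up for the example, and JSON stands in where Protobuf would go:

```js
// Every webhost publishes the heartbeats it receives and subscribes to
// the heartbeats relayed by the other webhosts.
const mqtt = require('mqtt');

const client = mqtt.connect('mqtt://10.0.0.1:1883'); // private broker on the vlan

client.on('connect', () => {
  // QoS 2 = exactly-once delivery, as recommended above.
  client.subscribe('servers/+/heartbeat', { qos: 2 });
});

client.on('message', (topic, payload) => {
  const info = JSON.parse(payload.toString()); // Protobuf could replace JSON here
  // ...merge `info` into this host's view of currently online servers
});

// Called whenever a game server sends this webhost a heartbeat.
function shareHeartbeat(serverId, info) {
  client.publish(`servers/${serverId}/heartbeat`, JSON.stringify(info), { qos: 2 });
}
```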
That said, there seems to be another solution: Kafka, which looks very interesting (I don't really know it).
Your choice will depend mostly on the nature of your data: the weight of the packets, the frequency per user, and the latency you are willing to accept in the worst of scenarios.

Does a Node.js App with Thousands of Concurrent Users Need Connection Pooling

Does a node.js app with thousands of concurrent users really need to use a connection pooling mechanism?
EDITED:
The app could be an ecommerce app that requires a high volume of reads and writes to databases.
Not necessarily; it depends on the situation. You should be able to handle thousands of concurrent connections, but of course it all depends on what you do in those connection handlers. That is the only answer that can really be given with so few details in the question.
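For illustration, a minimal pooling sketch assuming a MySQL backend and the `mysql` driver (credentials and the query are placeholders). The point of the pool is that thousands of concurrent requests share a small, fixed number of database connections instead of each opening their own:

```js
const mysql = require('mysql');

const pool = mysql.createPool({
  connectionLimit: 10,   // tune to what the database can actually serve
  host: 'localhost',
  user: 'shop',
  password: 'secret',
  database: 'ecommerce'
});

// pool.query acquires a connection, runs the query, and releases it
// back to the pool automatically.
pool.query('SELECT * FROM products WHERE id = ?', [42], (err, rows) => {
  if (err) throw err;
  console.log(rows);
});
```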

Strategy to implement a scalable chat server

I am looking to implement some sort of chat server, and I want it to scale. This seems like a big question, so I guess I expect the answers to be direction pointers, sort of exploratory.
The end-user clients are web or phone clients. I think some sort of websocket implementation, such as Socket.IO, would be nice.
On the server side I wish to use Node.js. I want the architecture to be scalable so that the number of users is not limited (well, within reason; a big hit is not expected, and if it happens, the chance of getting smarter, more experienced people to work on it is reasonable, instead of just me coding). The number of users per chatroom should ideally not be limited, or perhaps capped at some large fixed number. That means I need to scale horizontally using several servers written in Node.
Suppose some load balancer (hopefully, in the future, not a single point of failure, though I don't know how I would achieve that; maybe I'd just move to AWS) dispatches Socket.IO connections from the end clients to the chat servers. Users connected to different servers may be in the same room, so messages need to be sent on to the other servers.
How would I feasibly implement something like this? Hopefully not too complex.
Questions:
(1) If all servers need to handle all messages, since users can be logged on via any of the servers, does this scale?
(2) Do I need some sort of message queue for the servers to talk among themselves? Is pub-sub from RabbitMQ usable for this? Or, if ZeroMQ, how would I scale with pub-sub? The ZeroMQ guide has explanations for scaling REQ/REP applications to more than one server, but not PUB/SUB.
(3) Or should I start with XMPP?
I am hoping to make this work as easily as possible.
There's a rather good explanation on the Socket.IO site. Have a look at
http://socket.io/docs/using-multiple-nodes/
It suggests using Nginx as the HTTP load balancer, Node.js clustering (with sticky sessions), and Redis as the message backend.
I think your goals should be achievable with little to no coding involved, using only the given modules and configuration mechanisms.
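For reference, the message-backend part of those docs boils down to attaching the Redis adapter to each Node process. A minimal sketch with `socket.io-redis` (port, Redis host, and event names are placeholders):

```js
// Each Node process runs this; emits are relayed through Redis pub/sub
// to sockets connected to the other processes.
const io = require('socket.io')(3000);
const redisAdapter = require('socket.io-redis');

io.adapter(redisAdapter({ host: 'localhost', port: 6379 }));

io.on('connection', (socket) => {
  socket.on('chat', (msg) => {
    // Reaches clients on every worker and server, not just this one.
    io.emit('chat', msg);
  });
});
```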

Node.js Real-Time Stuff with DNode/shoe and all about load balancing

I have built a little system that uses dnode, shoe, and browserify on the client, and Node.js with dnode/shoe on the server end. I'm wondering if it is a good idea to use dnode (RPC) as the sole protocol for a real-time web application.
Let's look at the benefits of DNode, or any other RPC interface. I like being able to call functions remotely (RPC). It definitely beats Ajax because you get a consistent interface for communicating from client to server and from server to client. I'm also betting you get a small measure of performance over Ajax because of the HTTP overhead that Ajax carries.
However, using RPC you have to deal with load balancing and with client connections on the server, though that goes for any websocket implementation. With other websocket implementations, though, you get a more traditional event-based system, where the client listens for events from the server and responds to them. I tried replicating this sort of interface using EventEmitters, but it's awful, and I keep getting warnings about too many handlers. Ugh!
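For context, that warning comes from Node's EventEmitter, which by default warns once more than 10 listeners are attached to a single event, since that usually signals a listener leak. A minimal sketch of the behavior and the knob that controls it (names are illustrative):

```js
const { EventEmitter } = require('events');
const bus = new EventEmitter();

for (let i = 0; i < 11; i++) {
  bus.on('update', () => {}); // the 11th listener triggers MaxListenersExceededWarning
}

// If the fan-out is intentional, raise (or remove) the cap explicitly:
bus.setMaxListeners(100); // or 0 for unlimited
```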
I'm looking to achieve a lightweight, clean interface that I can use to develop my application. One that feels robust and is able to scale to many clients. It needs to feel solid.
I'm not really sure what my question is in writing this post. I'm tasked with updating this codebase I wrote so that connections aren't lost and it's more robust overall. I guess I'm just desperate for advice or consulting on my application. Is there anyone willing to discuss this topic (RPC and real-time web applications) face-to-face with me?
Thanks for reading.
I have been investigating some of the same topics, as it seemed to me that some of the RPC libs were very cool, but not altogether practical for large-scale apps. I actually started with NowJS, realized it was a dead project, moved to DNode/Shoe/Browserify, and finally moved on to SocketStream in an attempt to offload some of the dirty work to a project with a unified goal. I really didn't want to rewrite what others had already done on this subject, and SocketStream makes that easy. To get back to your question: as you can see on their page, SocketStream uses sticky sessions. This is a big assumption, but one that probably can't be worked around at the moment without further development. The reason I mention it is that they talk about some of the things they are working on as far as scaling goes. It might be worth a read, or worth reaching out to the developer to see if you could talk things over with him. Good luck!

Scale Socket.io vertically AND horizontally - what is the "right" way to go?

I want to scale my Node.js Socket application vertically and horizontally, and I haven't found a sophisticated solution yet.
My application has two use cases:
Broadcast messages from one user to all others
Push messages from one user to a subset of users
On one hand, I've read that I need Redis for both cases, together with socket.io-redis.
On the other hand, I've watched this video and read this SO answer, where it says that Redis isn't reliable and it's not guaranteed that published messages will arrive, so you should only use it for clustering/vertical scaling.
Microsoft Azure's solution of using Service Bus is out of the question, because I don't want to use Azure.
Instead of Redis, that guy recommends using RabbitMQ for horizontal scaling.
For vertical scaling there is also socket.io-clusterhub, an IPC for Node processes, but it seems to work only with Socket.IO <= v0.9.0.
Then there is this guy, who has implemented his own method of passing messages to other nodes via HTTP requests, which makes some sense. But why HTTP requests, if you could also establish direct socket connections between servers, push the message to all servers simultaneously, and avoid the delay of hopping from one server to another?
In conclusion, I thought maybe I could go with Redis on EACH server, just for exchanging messages when clustering my application across multiple processes, together with RabbitMQ as an S2S communication solution.
But it seems a bit like overkill to have one Redis per server plus another central RabbitMQ.
Is there any known shorter/better solution for scaling Socket.IO reliably in both directions?
EDIT:
I've tried using a single Redis server for multiple Node.js servers, where each of them uses clustering via sticky-session across all cores. While the clustering on its own works like a charm with Redis, there seems to be a problem when using multiple servers: messages won't arrive at the other nodes.
I'd say Kafka is a good fit for the horizontal scaling. It is a fairly sophisticated way of distributing a huge number of events across servers (which, in the end, is what you want). This is a good read about it: https://engineering.linkedin.com/kafka/running-kafka-scale
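As a sketch of what that could look like from Node (the answer doesn't name a client library; `kafkajs` is one assumption here, and the broker address, topic, and group naming are placeholders): each server publishes its users' messages to a topic and consumes everyone else's under its own consumer group, so every server sees every event.

```js
const { Kafka } = require('kafkajs');

const kafka = new Kafka({ clientId: 'socket-node-1', brokers: ['kafka:9092'] });
const producer = kafka.producer();
// One consumer group per server, so each server receives every event
// rather than splitting the stream between them.
const consumer = kafka.consumer({ groupId: 'socket-node-1' });

async function start(io) {
  await producer.connect();
  await consumer.connect();
  await consumer.subscribe({ topic: 'broadcasts', fromBeginning: false });

  // Deliver events published by the other servers to local sockets.
  await consumer.run({
    eachMessage: async ({ message }) => {
      io.emit('broadcast', JSON.parse(message.value.toString()));
    },
  });
}

// Publish a local user's message so every server (including this one) relays it.
async function broadcast(payload) {
  await producer.send({
    topic: 'broadcasts',
    messages: [{ value: JSON.stringify(payload) }],
  });
}
```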
Regarding vertical scaling, instead of socket.io-clusterhub I would use PM2 (https://github.com/Unitech/pm2), which lets you dynamically scale the number of app processes on each machine, as well as manage logs and report to keymetrics.io (if you are using it).
If you need any snippets, ask me and I will edit the answer, but there are quite a few in the PM2 GitHub repo.
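For example, a minimal PM2 cluster-mode config; `ecosystem.config.js` is the standard PM2 convention, while the app name and script path are placeholders:

```js
// ecosystem.config.js — 'max' forks one process per CPU core.
module.exports = {
  apps: [{
    name: 'socket-app',
    script: './app.js',
    exec_mode: 'cluster',
    instances: 'max',
  }],
};
```

`pm2 start ecosystem.config.js` launches it, and `pm2 scale socket-app 4` resizes the process count at runtime. Note that Socket.IO under cluster mode still needs sticky sessions, which the question's EDIT is already using.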
