SignalR Max Connections - azure-web-app-service

On an Azure Web App Service with a self-hosted SignalR hub (inside IIS), does anyone know the maximum number of concurrent connections that the hub can broadcast to without problems? Can anyone suggest a trick to avoid client disconnections?

For SignalR, the limit depends on your IIS configuration.
Thanks to Florin Secal's answer for explaining the maximum number of connections.
Can anyone suggest a trick to avoid client disconnections?
I don't think you need a way to avoid disconnections entirely; you only need to handle reconnection sensibly on the client.
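As a minimal sketch of what "handling reconnection sensibly" can look like: the ASP.NET Core SignalR TypeScript client supports automatic reconnection out of the box (a classic ASP.NET SignalR client would achieve the same by restarting the connection in its disconnected callback). The hub URL and retry delays below are assumptions:

```typescript
import * as signalR from "@microsoft/signalr";

async function connect(): Promise<void> {
  const connection = new signalR.HubConnectionBuilder()
    .withUrl("https://example.com/stockHub")         // hypothetical hub endpoint
    .withAutomaticReconnect([0, 2000, 10000, 30000]) // retry delays in ms
    .build();

  connection.onreconnecting(err => console.warn("Connection lost, retrying...", err));
  connection.onreconnected(id => console.info("Reconnected, connection id:", id));
  connection.onclose(async () => {
    // Automatic reconnect gave up after the last delay; start over manually.
    await connection.start();
  });

  await connection.start();
}

connect();
```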

Related

Does a Node.js App with Thousands of Concurrent Users Need Connection Pooling?

Does a Node.js app with thousands of concurrent users really need to use a connection pooling mechanism?
EDIT:
The app could be an e-commerce app that requires a high volume of reads and writes to databases.
Not necessarily; it depends on the situation. Node.js should be able to handle thousands of concurrent connections, but of course it all depends on what you do in those connection handlers. That's the only answer that can really be given with so few details in the question.
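For the read/write-heavy case described in the edit, a shared pool is usually the pragmatic answer: it caps the number of open database connections no matter how many users are connected. A minimal sketch with node-postgres; the database name and pool sizes are illustrative:

```typescript
import { Pool } from "pg"; // node-postgres

// One shared pool caps open database connections regardless of how
// many thousands of clients are connected; all values here are examples.
const pool = new Pool({
  host: "localhost",
  database: "shop",          // hypothetical e-commerce database
  max: 20,                   // upper bound on concurrent connections
  idleTimeoutMillis: 30000,  // recycle connections idle for 30 s
});

export async function getProduct(id: number) {
  // pool.query checks a client out, runs the query, and releases it.
  const { rows } = await pool.query("SELECT * FROM products WHERE id = $1", [id]);
  return rows[0];
}
```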

Why is message delivery time not scaling well using SignalR?

I'm still testing SignalR, but one of the things that's really important to me is that messages reach the client as quickly as possible (I'm dealing with real-time stock rates).
The thing is, under almost every scenario I've tried, from totally local to running hundreds of instances on Azure (with a backplane and everything), the time it takes a message to get from the server to the client increases exponentially as the number of connected clients grows.
I've tried this with Hubs, PersistentConnection, .NET clients, and JS clients running in PhantomJS, Zombie.js, and Node.js. I've tried dozens of configurations, but the behavior is always the same, which leads me to conclude that this is something inherent in SignalR.
I know SignalR can handle thousands of concurrent clients on very few servers, but if it takes a couple of seconds for a message to get across (within the same Azure region), it's of no use to me.
Any idea what might be slowing the messages down?
Thanks
This is explained in the SignalR scaleout guide: http://www.asp.net/signalr/overview/signalr-20/performance-and-scaling/scaleout-in-signalr
Limitations
Using a backplane, the maximum message throughput is lower than it is when clients talk directly to a single server node. That's because the backplane forwards every message to every node, so the backplane can become a bottleneck. Whether this limitation is a problem depends on the application. For example, here are some typical SignalR scenarios:
Server broadcast (e.g., stock ticker): Backplanes work well for this scenario, because the server controls the rate at which messages are sent.
Client-to-client (e.g., chat): In this scenario, the backplane might be a bottleneck if the number of messages scales with the number of clients; that is, if the rate of messages grows proportionally as more clients join.
High-frequency realtime (e.g., real-time games): A backplane is not recommended for this scenario.
So using a backplane will create delays, and depending on the type of application you are doing... it may not be the right choice.
So using a backplane will introduce delays, and depending on the type of application you are building, it may not be the right choice. I gave up on SignalR long ago and now focus on WebSockets with a proper queuing system behind them. You could use a WebSocket library together with MassTransit, for example. SignalR suits small projects, or scenarios that are not heavily segmented and not hard real-time.
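To make that direction concrete, here is a minimal sketch of a raw WebSocket fan-out using the ws package. In a real setup the queuing system (MassTransit/RabbitMQ) would sit between producers and these socket servers; the port is illustrative:

```typescript
import { WebSocketServer, WebSocket } from "ws";

// Bare WebSocket fan-out: every message from one client is relayed to
// all other connected clients.
const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (socket) => {
  socket.on("message", (data) => {
    for (const client of wss.clients) {
      if (client !== socket && client.readyState === WebSocket.OPEN) {
        client.send(data.toString());
      }
    }
  });
});
```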

Bottleneck with sockets approach?

I'm thinking of creating a real-time app where users can collaborate, and I found Node.js + Socket.io to be one of the solutions for this type of problem.
I hear from other developers that there will be a bottleneck in the number of sockets my server can hand out to users. So if I have hundreds of users collaborating at the same time, will the open sockets run out and leave users unable to connect? Is this a valid concern?
Update: on a somewhat related note, I'm looking at using SockJS instead of Socket.io. There is a thread that explains the pros and cons of these libraries, and this is also a good read.
For hundreds of users I don't think it is a concern.
Sockets, as you know, maintain a persistent connection between the client and the server, and both parties can start sending data at any time. Keeping them open is not the problem; handling the load in terms of messages sent per second is.
Socket.io can easily handle 1,000 concurrent connections, but it will fail if it is sending more than 8-10k messages per second. You will hit the load barrier before your sockets are exhausted. In most cases, handling more concurrent users translates to higher load, so don't worry about running low on sockets. Scaling beyond that barrier would require more server resources.
Helpful links :
Socket.IO - are the open connections a concern?
http://www.quora.com/How-do-I-scale-socket-io-servers-2
There are already solutions using this approach, like Cloud9, and they work well. There will be a point where you need to scale out, so if you are planning something big I would think about it up front.
Here are some tests of Socket.io with 10,000 concurrent connections. It looks like a good solution, but not an easy one, because of the fallback mechanism.
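To illustrate why the limit is throughput rather than socket count: in a collaboration app, each inbound change fans out to every other connected client, so one message in becomes N-1 messages out. A minimal Socket.io sketch; the port and event name are assumptions:

```typescript
import { Server } from "socket.io";

// Each open socket is cheap to keep; the cost that bites first is
// fan-out, so load grows with messages per second, not open sockets.
const io = new Server(3000);

io.on("connection", (socket) => {
  socket.on("edit", (change) => {
    socket.broadcast.emit("edit", change); // relay to every other client
  });
});
```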

WCF and client communication on a self hosted WCF service

I am new to WCF services. I have been working with WCF for over two months now and love its capabilities. I am using a self-hosted WCF service in a Windows Service. The binding is netTcp because the client and service are on the same machine. My communication is duplex and I use a WCF session. Given these features, one design need of my application is that the UI should always be connected to the service, so I use a separate thread in the UI to continuously poll the connection status and re-create and open the channel if it goes into the faulted state. Since I have async callbacks from the service, the client should always be connected. Here are a couple of questions:
Is it OK to use the self-hosting technique given that the client and service are on the same machine? I used WCF for ease of inter-process communication.
Does it make sense to keep this keep-alive thread on the client, or should I use some other technique?
I want to get better at using and configuring WCF. Is there a good book or online reading material on self-hosted WCF services?
Please advise.
Thanks,
Subbu
I think it's absolutely fine to use self-hosting with WCF. I've implemented many services hosted in a Windows Service, for example.
I'm assuming that client and server are hosted in different processes on the same machine? If so, then ideally you should use binary encoding over named pipes in your bindings.
If client and server are physically in the same process, you might consider something like Roman Kiss's Null Transport to reduce the serialization overhead. His CodeProject article can be found here: http://www.codeproject.com/KB/WCF/NullTransportForWCF.aspx
To answer point 2, I've suggested an alternative approach in my answer to another Stack Overflow question: WCF net.tcp server disconnects - how to handle properly on client side?
Hope this helps.
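Purely as an illustration of the pattern in question 2: the keep-alive watchdog the asker describes boils down to a poll-and-recreate loop. The sketch below is TypeScript used as language-neutral pseudocode (a real client would be C# checking the WCF channel's CommunicationState), and every name in it is hypothetical:

```typescript
// Hypothetical stand-ins for the WCF channel factory and proxy.
interface Channel { faulted: boolean; close(): void; }

function createAndOpenChannel(): Channel {
  // In WCF this would be DuplexChannelFactory<T>.CreateChannel() + Open().
  return { faulted: false, close() { /* release resources */ } };
}

let channel = createAndOpenChannel();

// Poll the channel state; on fault, tear down and rebuild the proxy.
setInterval(() => {
  if (channel.faulted) {
    try { channel.close(); } catch { /* channel already aborted */ }
    channel = createAndOpenChannel(); // re-subscribe duplex callbacks here too
  }
}, 5000); // poll interval is arbitrary
```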

Scale Socket.io vertically AND horizontally - what is the "right" way to go?

I want to scale my Node.js Socket.io application vertically and horizontally, and I haven't found a sophisticated solution yet.
My application has two use-cases:
Broadcast messages from one user to all others
Push messages from one user to a subset of users
On one hand, I've read that I need Redis for both cases, together with socket.io-redis.
On the other hand, I've watched this video and read this SO answer, where it says that Redis isn't reliable and there's no guarantee that published messages will arrive, so you should only use it for clustering/vertical scaling.
Microsoft Azure's solution of using Service Bus is out of the question, because I don't want to use Azure.
Instead of Redis, the speaker recommends using RabbitMQ for horizontal scaling.
For vertical scaling there is also socket.io-clusterhub, an IPC for Node processes, but it seems to work only with Socket.io <= v0.9.0.
Then there is this guy, who implemented his own method of passing messages to other nodes via HTTP requests, which somehow makes sense. But why HTTP requests, if you could also establish direct socket connections between servers, push a message to all servers simultaneously, and avoid the delay of hopping from one server to another?
In conclusion, I thought maybe I could go with Redis on EACH server, just for exchanging messages when clustering my application across multiple processes, together with RabbitMQ as a server-to-server communication solution.
But it seems a bit like overkill to have one Redis per server plus a central RabbitMQ.
Is there any known shorter/better solution to scale Socket.io reliably in both directions?
EDIT:
I've tried using a single Redis server for multiple Node.js servers, where each of them uses clustering via sticky-session across all its cores. While the clustering on its own works like a charm with Redis, there seems to be a problem when using multiple servers: messages won't arrive at the other nodes.
I'd say Kafka is a good fit for the horizontal scaling. It is a fairly sophisticated way of distributing a huge number of events across servers (which, in the end, is what you want). This is a good read about it: https://engineering.linkedin.com/kafka/running-kafka-scale
Regarding vertical scaling, instead of socket.io-clusterhub I would use PM2 (https://github.com/Unitech/pm2), which lets you scale an app across all the cores of each machine dynamically, as well as manage logs and report to keymetrics.io (if you are using it).
If you need any snippets, ask me and I will edit the answer, but there are quite a few on the PM2 GitHub page.
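For reference, the socket.io-redis wiring mentioned in the question is only a few lines, and it covers both of the question's use-cases (global broadcast and per-subset rooms) across processes and servers alike. A minimal sketch; the Redis host/port and event names are assumptions:

```typescript
import { Server } from "socket.io";
import redisAdapter from "socket.io-redis";

// Every Node.js process (cluster workers and separate servers alike)
// points at the same Redis instance, which relays broadcasts between them.
const io = new Server(3000);
io.adapter(redisAdapter({ host: "localhost", port: 6379 }));

io.on("connection", (socket) => {
  // Use-case 1: broadcast from one user to all others, on every node.
  socket.on("announce", (msg) => socket.broadcast.emit("announce", msg));

  // Use-case 2: push to a subset of users via rooms.
  socket.on("join", (room: string) => socket.join(room));
  socket.on("toRoom", ({ room, msg }) => io.to(room).emit("message", msg));
});
```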
