Limits for WebSocket connections in Azure

Somewhere I've read that websocket clients are limited... so what if I want to create a simple game with 10,000 users grouped into teams of 2 or 4 players? It seems I have no solution, or maybe I'm looking in the wrong direction. Any suggestions?
Thanks
Free: (5) concurrent connections per website instance
Shared: (35) concurrent connections per website instance
Standard: (350) concurrent connections per website instance

I see this has since been changed to the following:
Free: (5) connections
Shared: (35) connections
Basic: (350) connections
Standard: no limit

It makes sense that Azure Websites have a limit on concurrent connected users.
Are you really going to need to support 10k of them? It looks like you're going to have to look elsewhere for hosting, because ten Standard website instances give you at most 3,500 connections :-/
If you're using ASP.NET with SignalR, you may have more luck with Azure Web Roles (the Service Bus/SignalR NuGet package makes it really easy to build scalable applications without worrying about sharing client state between your web role instances).
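For reference, wiring SignalR to the Service Bus backplane is a single call in the OWIN startup class. A minimal sketch, assuming the Microsoft.AspNet.SignalR.ServiceBus package; the connection string and topic prefix below are placeholders:

using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Placeholder; use your Service Bus namespace's connection string.
        string connectionString = "Endpoint=sb://...";

        // Route SignalR messages through Service Bus topics so that every
        // web role instance sees every message.
        GlobalHost.DependencyResolver.UseServiceBus(connectionString, "myapp");

        app.MapSignalR();
    }
}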

Memory leak/consumption in Hubs due to large messages?

I'm currently trying SignalR and RabbitMQ in order to round-robin / load balance JSON web service queries, and I'm having trouble with the memory consumption of one of the applications when it processes large (~300-2500 KB) messages.
I have an IIS server hosting a web application (named "Backend") that needs to query another web application (named "Pricing"), also hosted on an IIS server.
In order to keep a connection alive with my RabbitMQ server, I developed console applications that are connected to Backend and Pricing using SignalR.
So when Backend needs to query Pricing, it asks its console to publish the message to the queue, and the console attached to Pricing takes the message and gives it to Pricing (via the Invoke<> method). When Pricing has finished its job, it asks its console to publish the reply message, and the console attached to Backend takes it and gives it to Backend.
To sum up:
[Backend] -> [Console] -> [RabbitMQ] <- [Console] <- [Pricing]
And I have two Pricing instances taking messages from the RabbitMQ queue via their consoles.
This setup replaces a traditional web service query between the two IIS servers and benefits from the advantages of RabbitMQ (load balancing and asynchronous calls in a micro/web services architecture).
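To make the topology concrete, here is a rough sketch of what the Pricing-side bridging console could look like, assuming the RabbitMQ .NET client and the SignalR .NET client. All URLs, hub, method, and queue names are made up for illustration, and RabbitMQ.Client signatures vary slightly between versions:

using System;
using System.Text;
using Microsoft.AspNet.SignalR.Client;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class PricingBridge
{
    static void Main()
    {
        // Connect to the Pricing IIS application as a SignalR client.
        var hubConnection = new HubConnection("http://pricing-host/"); // illustrative URL
        var hub = hubConnection.CreateHubProxy("BridgeHub");           // illustrative hub name
        hubConnection.Start().Wait();

        // Connect to RabbitMQ and consume the (pre-existing) request queue.
        var factory = new ConnectionFactory { HostName = "rabbitmq-host" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (sender, ea) =>
            {
                var json = Encoding.UTF8.GetString(ea.Body);
                // Hand the query to Pricing through the hub; the reply
                // travels back the same way on the other queue.
                hub.Invoke("HandleQuery", json).Wait();
            };
            channel.BasicConsume("pricing-requests", true, consumer); // illustrative queue name

            Console.ReadLine(); // keep the console (and its connections) alive
        }
    }
}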
I added
GlobalHost.Configuration.MaxIncomingWebSocketMessageSize = null;
in Startup.cs on both IIS servers in order to accept large messages.
When I look at Pricing's memory consumption in Windows Task Manager, it quickly grows from 500 MB to 1,500 MB (in 5 minutes, handling a continuous stream of queries from Backend to test the setup).
I tried something else: writing the query content to files in a shared folder and publishing only the file name in the RabbitMQ messages. With that change (and, of course, a code modification to load the file), Pricing's memory consumption doesn't move and stays around 500 MB.
So it seems it has something to do with the size of the messages my console passes to IIS.
I tried disconnecting the console from the IIS hubs because I thought it might free some memory, but it didn't.
Has anyone experienced this issue of memory consumption with large messages in Hubs? How can I check whether there's indeed a memory leak in my application?
What about using SignalR and RabbitMQ in a web/microservices environment? Any feedback?
Many thanks,
Jean-Francois
.NET Framework: 4.5
Microsoft.AspNet.SignalR: 2.4.1
So it seems that the version of SignalR I use (.NET Framework) allows tuning the number of messages per hub per connection kept in memory.
I set it to an arbitrary 50 in Startup.cs:
GlobalHost.Configuration.DefaultMessageBufferSize = 50;
Its default value is 1000, meaning (if I understood correctly) that IIS keeps a ring buffer of 1,000 messages in memory per hub per connection. Some of my messages weighed 2.5 MB, meaning the memory used could grow to 2,500 MB per connection.
As my IIS server has only one connection (its console) and doesn't need to keep track of past messages (it behaves like a web service), 1,000 buffered messages is way too much.
With the limit of 50 messages, the memory used by the application in Windows Task Manager stays put (around 500 MB).
Is there any flaw in the way I'm using it?
Thanks!
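Pulling the two settings from this thread together, a minimal OWIN Startup sketch could look like the following. The buffer value 50 is the poster's arbitrary choice; note that the buffer exists so messages can be replayed to clients that reconnect, so shrinking it trades memory for redelivery safety:

using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Accept arbitrarily large websocket messages (null = no limit).
        GlobalHost.Configuration.MaxIncomingWebSocketMessageSize = null;

        // Shrink the per-hub, per-connection ring buffer from its default
        // of 1000 messages. With ~2.5 MB messages, 50 caps the buffer at
        // roughly 125 MB worst case instead of ~2,500 MB.
        GlobalHost.Configuration.DefaultMessageBufferSize = 50;

        app.MapSignalR();
    }
}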

Stress testing Azure App Service periodically stops processing requests

I am currently stress testing a .NET Core application, targeting netcoreapp2.2, that is hosted on Azure as an App Service on a P1v2 (210 ACU, 3.5 GB memory) service plan with 2 instances.
The endpoint I'm stress testing is very simple: it validates an OAuth 2.0 token, gets the user and some info about the user from a P2 (250 DTU) Azure-hosted database (4 DB queries per request in total), and returns the string "Pong".
When running 15 concurrent users (or more) in 200 loops, I see the stops in processing shown in the image (between the high peaks). The service plan never exceeds roughly 20-35% CPU and the database never goes above 2% load. Increasing the number of users decreases the average throughput.
When looking at the slow requests, processing seems to just stop randomly, never in the same place. In the DB requests I never see a query that takes longer than a couple of hundred milliseconds, while some HTTP requests can take upwards of 5-6 s to process.
It feels like I'm hitting some limit that stalls processing for a period of time, but I can't figure out where the problem lies.
When running the same stress test locally I don't see these stops.
I'm using the JMeter CLI to run the stress tests against both environments.
Any help is greatly appreciated, thanks!
This could be due to Azure DDoS protection behaviour.
If Azure decides your application is under a DDoS attack, Microsoft will stop all connections to your endpoint, in effect taking down your service.
To avoid this you need to set up a Web Application Firewall (WAF) with rules that let your healthy requests through.

Azure Redis Cache - how connections are calculated

Can somebody explain how connections are counted in Azure Redis Cache?
For example, if I have an MVC app using a Basic Redis Cache with a 256-connection limit, and 256 users access my website, will 256 connections be made? How exactly does this work?
How many connections are made depends on the application you implement.
If you follow best practices, your application will be able to handle many users with a very low amount of connections.
For example, StackExchange.Redis should be able to handle thousands of users without exhausting your 256 connections if you reuse the connection multiplexer object.
Some more information:
https://gist.github.com/JonCole/925630df72be1351b21440625ff2671f
https://stackexchange.github.io/StackExchange.Redis/Basics
the key idea in StackExchange.Redis is that it aggressively shares the connection between concurrent callers
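As an illustration of that best practice, the usual StackExchange.Redis pattern is one lazily created ConnectionMultiplexer shared by the whole application; the connection string below is a placeholder:

using System;
using StackExchange.Redis;

public static class RedisConnection
{
    // One multiplexer for the whole process; it manages a small, fixed
    // number of TCP connections internally, no matter how many callers
    // (users, requests, threads) go through it.
    private static readonly Lazy<ConnectionMultiplexer> LazyConnection =
        new Lazy<ConnectionMultiplexer>(() =>
            ConnectionMultiplexer.Connect(
                "mycache.redis.cache.windows.net:6380,password=...,ssl=true"));

    public static ConnectionMultiplexer Connection => LazyConnection.Value;
}

// Usage from any request handler; no new Redis connection per user:
// IDatabase db = RedisConnection.Connection.GetDatabase();
// db.StringSet("greeting", "hello");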

Confused about the Azure Redis Connection Limit (Up to X Connections)

I'm looking into building some SignalR applications in .NET hosted in Azure (self-hosted workers).
I want to scale out and set up a backplane using Azure Redis. However, when I went to set up a new Redis Cache, I got confused about what 'Up to X connections' actually means.
For example, the 'C0 Basic 250MB Cache' has 'Up to 256 connections' and the 'C1 Standard 1GB Cache' has 'Up to 1,000 connections'.
To confirm, can I take 'Up to 256 connections' to mean that I could (in theory) have up to 256 worker threads all pushing SignalR messages around at once... Or does it mean the total number of connections (users) from my website that are connected to SignalR and, in turn, pushing messages through the Redis cache?
Obviously, if it means 256 workers that's fine, but if it means the total number of different connections from my website, then that is a deal breaker.
Thanks and sorry if this is a silly question!
From a SignalR backplane perspective, the SignalR websocket connections do not correlate with the number of connections to a Redis Cache server.
A user's SignalR connection is with the SignalR Hub server, which in turn acts as a Redis client in case of a scaleout.
The Redis client in SignalR connects using the standard ConnectionMultiplexer, which handles the connections to Redis internally. The guidance is to use a single multiplexer for the whole application, or a minimal number of them.
The Redis client is there to send and receive messages, not to create/access keys for each operation, so it makes sense to have a single channel open and to exchange all the messages on that single channel.
I am not sure exactly how that ConnectionMultiplexer manages Redis connections, but we do use the Redis backplane for SignalR scaleout on Azure for our application.
We load tested the app, and with around 200,000 always-active SignalR websocket connections scaled out over 10 servers, the Azure Redis Cache connection count hovered around 50 on average, almost never going above even 60.
I think it is safe to say that the connection limits on Azure Redis cache are not a limiting factor for SignalR, unless you scale out to hundreds of servers.
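For context, wiring the Redis backplane in SignalR 2.x is a single call in the OWIN startup class. A minimal sketch, assuming the Microsoft.AspNet.SignalR.Redis package; the server, password, and event key below are placeholders:

using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Each web server shares one multiplexer for backplane traffic,
        // which is why the cache's connection count stays small even with
        // hundreds of thousands of SignalR clients.
        GlobalHost.DependencyResolver.UseRedis(
            "mycache.redis.cache.windows.net", 6380, "password", "MyApp");

        app.MapSignalR();
    }
}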
The connection limit is not about the number of worker threads that are reading from or writing to Redis. It is about physical TCP connections. Redis supports pipelining. Many of the client libraries are thread safe so that you can use the same physical connection object to write from multiple threads at the same time.

Node.js websocket

I am working on a Node.js application, and the requirement is to send around 10k requests per second per connection. The client application has to open one websocket connection to send these requests, and on the server side it just has to receive the data and push it to a queue. The number of socket connections on the server side isn't that high, maybe around 1k. I have a few questions regarding this, and any help is greatly appreciated.
First, is it possible to achieve this setup with a single master process? Since I cannot share the websocket connections with the child processes, I need to get the bandwidth from the master process.
When I tried benchmarking the Node.js ws library, I was only able to send approximately 1k requests per second of 9 KB each. How can I increase the throughput?
Are there any examples of how to achieve max throughput? I can only find posts on how to achieve max connections.
Thanks.
You will eventually need to scale
First, is it possible to achieve this setup with a single master process?
I don't really think it's possible to achieve this with a single thread.
(You should plan for scaling and never design in a way that rules that option out.)
Since I cannot share the websocket connections with the child processes, I need to get the bandwidth from the master process.
I'm sure you will be happy to know about the existence of socket.io-redis.
With it you will be able to send/receive events (share clients) between multiple instances of your code (processes or servers). Read more: socket.io-redis (GitHub)
I know you are talking about ws, but maybe it's worth switching to socket.io.
Especially knowing you can scale both vertically (increase the number of worker processes per machine) and horizontally (deploy more "masters" across more machines) with relative ease (and, again, share your socket clients and communications across all instances).
When I tried benchmarking the Node.js ws library, I was only able to send approximately 1k requests per second of 9 KB each. How can I increase the throughput?
I would suggest trying socket.io + socket.io-redis:
spawn a master with a number of workers equal to the number of CPU cores (vertical scaling),
deploy your master across 2 or more machines (horizontal scaling),
and learn about load balancing & perform benchmarks.
Are there any examples of how to achieve max throughput, since I can only find posts on how to achieve max connections?
You will increase the total throughput if you increase the number of instances communicating with clients. (Horizontal + Vertical Scaling)
socket.io-redis Application Example (github)
Using Redis with Node.js and Socket.IO (tutorial)
This might also be an interesting read :
SocketCluster 100k messages / sec (ycombinator)
SocketCluster (github)
Hope it helped.
