Can somebody explain how connections are calculated in Azure Redis Cache?
For example, if I have an MVC app and use a Basic Redis Cache with a 256-connection limit, and 256 users access my website, will 256 connections be made? How exactly does this work?
How many connections are made depends on the application you implement.
If you follow best practices, your application will be able to handle many users with a very small number of connections.
E.g. StackExchange.Redis should be able to handle thousands of users without exhausting your 256 connections, provided you reuse the connection multiplexer object.
Some more information:
https://gist.github.com/JonCole/925630df72be1351b21440625ff2671f
https://stackexchange.github.io/StackExchange.Redis/Basics
the key idea in StackExchange.Redis is that it aggressively shares the connection between concurrent callers
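The pattern behind that quote is language-agnostic: create the expensive connection object once and hand the same instance to every caller. A minimal sketch in Node.js, where `connectToRedis` is a hypothetical stand-in for the real client library's connect call:

```javascript
let clientPromise = null;
let connectCount = 0;

// Hypothetical factory standing in for the real client library's
// connect call; it returns a fake client object for illustration.
function connectToRedis() {
  connectCount += 1;
  return Promise.resolve({ id: connectCount });
}

// Every caller goes through getClient(); the underlying connection
// is created exactly once and reused afterwards.
function getClient() {
  if (clientPromise === null) {
    clientPromise = connectToRedis(); // first caller pays the connection cost
  }
  return clientPromise; // all later callers share the same connection
}
```

In .NET the commonly recommended equivalent is a single lazily initialised ConnectionMultiplexer held for the application's lifetime.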
Related
We are trying to implement the strategy outlined in the following presentation (slides 13-18) using nodejs/mongo-native driver.
https://www.slideshare.net/mongodb/securing-mongodb-to-serve-an-awsbased-multitenant-securityfanatic-saas-application
In summary:
Create a connection pool to mongodb from node.js.
For every request for a tenant, get a connection from the pool and "authenticate" it. Use the authenticated connection to serve the request. After the response, return the connection to the pool.
I'm able to create a connection pool to MongoDB without specifying any database, using the mongo-native driver like so:
const client = new MongoClient('mongodb://localhost:27017', { useNewUrlParser: true, poolSize: 10 });
However, in order to get a db object, I need to do the following:
const db = client.db(dbName);
This is where I would like to authenticate the connection, but AFAICS this functionality has been deprecated/removed from the more recent MongoDB drivers (Node.js and Java).
Going by the presentation, it looks like this was possible with older versions of the Java driver.
Is it even possible to use a single connection pool and authenticate tenants against individual databases using the same connections?
The alternative we have is to have a connection pool per tenant, which is not attractive to us at this time.
Any help will be appreciated, including reasons why this feature was deprecated/removed.
it's me from the slides!! :) I remember that session, it was fun.
Yeah, that doesn't work any more; they killed this magnificent feature about 6 months after we implemented it, and we were out with it in Beta at the time. We had to change the way we work.
It's a shame, since to this day in Mongo, "connection" (network stuff, SSL, cluster identification) and authentication are 2 separate actions.
Think about when you run the mongo shell: you provide the host, port, and replica set if any, and you're in, connected! But not authenticated. You can then authenticate as user1, do stuff, then authenticate as user2 and do stuff only user2 can do. And this is done on the same connection, without the overhead of creating the channel again, the SSL handshake, and so on...
Back then, the driver let us have a connection pool of "blank" connections that we could authenticate at will to the current tenant in context of that current execution thread.
Then they deprecated this capability, I think with Mongo 2.4. Now they only support connections that are authenticated at creation. We asked enterprise support; they didn't say why, but to me it looked like they decided this way is not secure: "old" authentication may leak, lingering on that not-so-blank reusable connection.
We changed our multi-tenancy infrastructure from one large pool of blank connections to many small pools of authenticated connections, a pool per tenant. These per-tenant pools can be extremely small, like 3 or 5 connections each. This solution scaled nicely to several hundred tenants, but to reach thousands of tenants we had to make all kinds of optimizations: creating pools on demand, closing them after idle time, lazy creation for non-active or dormant tenants, etc. This allowed us to scale even further... We're still looking into solutions and optimizations.
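The pool-per-tenant approach with lazy creation and idle eviction can be sketched roughly like this in Node.js. `createPoolForTenant` is a hypothetical factory; in a real app it would construct an authenticated MongoClient with a small `poolSize` for the tenant:

```javascript
// Lazily creates one small pool per tenant and evicts pools that
// have sat idle too long, so dormant tenants cost nothing.
class TenantPools {
  constructor(createPoolForTenant, idleMs) {
    this.create = createPoolForTenant; // hypothetical per-tenant factory
    this.idleMs = idleMs;              // idle time before a pool is closed
    this.pools = new Map();            // tenantId -> { pool, lastUsed }
  }

  get(tenantId) {
    let entry = this.pools.get(tenantId);
    if (!entry) {
      // Lazy creation: a pool exists only once the tenant is active.
      entry = { pool: this.create(tenantId), lastUsed: 0 };
      this.pools.set(tenantId, entry);
    }
    entry.lastUsed = Date.now();
    return entry.pool;
  }

  // Close pools that have been idle too long (call periodically).
  evictIdle(now = Date.now()) {
    for (const [tenantId, entry] of this.pools) {
      if (now - entry.lastUsed > this.idleMs) {
        entry.pool.close();
        this.pools.delete(tenantId);
      }
    }
  }
}
```

The same pool object is returned for repeated requests from one tenant, so the per-request cost is just a Map lookup.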
You could always go back to a global pool of connections authenticated as a single Mongo user that has access to multiple databases. Yes, you can switch databases on that same authenticated connection. You just can't switch authentication.
This is an example with the plain Mongo Java driver; we used Spring, which provides similar functionality:
MongoClient mongoClient = new MongoClient();
DB cust1db = mongoClient.getDB("cust1");
cust1db.get...
DB cust2db = mongoClient.getDB("cust2");
cust2db.get...
Somewhat related: I would recommend looking at MongoDB encryption at rest. It's an Enterprise feature, and the only way to encrypt each database (each customer) with a different key.
I'm looking into building some SignalR applications in .NET hosted in Azure (self-hosted workers).
I want to scale out and set up a backplane using Azure Redis; however, when I went to set up a new Redis Cache, I got confused about what 'Up to X connections' actually means.
For example, the 'C0 Basic 250MB Cache' has 'Up to 256 connections' and the 'C1 Standard 1GB Cache' has 'Up to 1,000 connections'.
To confirm, can I take 'Up to 256 connections' to mean that I could (in theory) have up to 256 worker threads all pushing SignalR messages around at once? Or does it mean the total number of connections (users) from my website that are connected to my SignalR hub and, in turn, pushing messages around the Redis Cache?
Obviously, if it means 256 workers, that's fine. But if it means the total number of different connections from my website, then that is a deal breaker.
Thanks and sorry if this is a silly question!
From a SignalR backplane perspective, the SignalR websocket connections do not correlate with the number of connections to a Redis Cache server.
A user's SignalR connection is with the SignalR Hub server, which in turn acts as a Redis client in case of a scaleout.
The Redis client in SignalR connects using the standard ConnectionMultiplexer, which handles the connections to Redis internally. The guidance is to use a single multiplexer for the whole application, or a minimal number of them.
The Redis client only sends/receives messages; it does not create/access keys for each operation, so it makes sense to keep a single channel open and exchange all messages on that single channel.
I am not sure exactly how the ConnectionMultiplexer manages Redis connections, but we do use a Redis backplane for SignalR scale-out on Azure for our application.
We load tested the app, and with around 200 thousand always-active SignalR websocket connections scaled out over 10 servers, the Azure Redis Cache connection count hovered around 50 on average, almost never going above even 60.
I think it is safe to say that the connection limits on Azure Redis cache are not a limiting factor for SignalR, unless you scale out to hundreds of servers.
The connection limit is not about the number of worker threads that are reading from or writing to Redis. It is about physical TCP connections. Redis supports pipelining. Many of the client libraries are thread safe so that you can use the same physical connection object to write from multiple threads at the same time.
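A toy model of the pipelining mentioned above, assuming nothing about any particular client library: many commands go out on the one connection before any reply is read, and replies are matched back to the commands first-in-first-out:

```javascript
// Toy model of pipelining on one connection. `handler` simulates the
// remote server; in reality the commands would travel over a socket.
class PipelinedConnection {
  constructor(handler) {
    this.handler = handler;
    this.outbox = []; // commands already written, replies not yet read
  }

  // Write a command to the wire WITHOUT waiting for earlier replies.
  send(command) {
    this.outbox.push(command);
  }

  // One round trip drains the queue; replies come back in order and
  // are matched FIFO to the commands that were sent.
  flush() {
    const replies = this.outbox.map((cmd) => this.handler(cmd));
    this.outbox = [];
    return replies;
  }
}
```

Because replies arrive in command order, one physical connection can carry traffic from many callers at once, which is why the connection count stays far below the number of busy threads.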
I've recently read a lot about best practices with JMS, Spring (and TIBCO EMS) around connections, sessions, consumers & producers
When working within the Spring world, the prevailing wisdom seems to be
for consuming/incoming flows - to use an AbstractMessageListenerContainer with a number of consumers/threads.
for producing/publishing flows - to use a CachingConnectionFactory underneath a JmsTemplate to maintain a single connection to the broker and then cache sessions and producers.
For producing/publishing, this is what my (largeish) server application is now doing, where previously it was creating a new connection/session/producer for every single message it was publishing (bad!) due to use of the raw connection factory under JmsTemplate. The old behaviour would sometimes lead to 1,000s of connections being created and closed on the broker in a short period of time in high peak periods and even hitting socket/file handle limits as a result.
However, when switching to this model I am having trouble understanding what the performance limitations/considerations are with the use of a single TCP connection to the broker. I understand that the JMS provider is expected to ensure it can be used in the multi-threaded way etc - but from a practical perspective
it's just a single TCP connection
the JMS provider needs to coordinate writes down the pipe to some degree so they don't end up as an interleaved jumble, even if it has some chunking in its internal protocol
surely this involves some contention between threads/sessions using the single connection
with certain network semantics (high latency to broker? unstable throughput?) surely a single connection will not be ideal?
On the assumption that I'm somewhat on the right track
Am I off base here and misunderstanding how the underlying connections work and are shared by a JMS provider?
is any contention a problem mitigated by having more connections or does it just move the contention to the broker?
Does anyone have any practical experience of hitting such a limit they could share? Either with particular message or network throughput, or even caused by # of threads/sessions sharing a connection in parallel
Should one be concerned in a single-connection scenario about sessions that write very large messages blocking other sessions that write small messages?
Would appreciate any thoughts or pointers to more reading on the subject or experience even with other brokers.
When thinking about the bottleneck, keep in mind two facts:
TCP is a streaming protocol, and almost all JMS providers use a TCP-based protocol
lots of the actions from the TIBCO EMS client to the EMS server take the form of request/reply. For example, when you publish a message, acknowledge a received message, or commit a transactional session, what happens under the hood is that some TCP packets are sent from the client and the server responds with some packets as well. Because of the streaming nature of TCP, those actions have to be serialised if they are initiated from the same connection -- otherwise, if one thread publishes a message at exactly the same time another thread commits a session, the packets will be mixed on the wire and there is no way the server can interpret the right messages from them. [Note: this synchronisation is done at the EMS client library level, so the user can feel free to share one connection among multiple threads/sessions/consumers/producers.]
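A sketch of the serialisation described above (not EMS's actual implementation): a promise-chain mutex makes sure all chunks of one logical message hit the wire before another message's chunks can interleave, which is exactly the coordination a client library must do on a shared connection:

```javascript
// Serialises multi-chunk message writes on one shared "wire".
// Without the lock, concurrent writers yielding mid-message would
// interleave their chunks and corrupt the stream.
class SerialisedWire {
  constructor() {
    this.bytes = [];               // what actually goes out on the wire
    this.lock = Promise.resolve(); // promise-chain mutex
  }

  // Write all chunks of one logical message atomically with respect
  // to other concurrent callers of writeMessage().
  writeMessage(chunks) {
    const turn = this.lock.then(async () => {
      for (const chunk of chunks) {
        await Promise.resolve(); // yield, as a real socket write would
        this.bytes.push(chunk);
      }
    });
    this.lock = turn; // the next caller waits for this write to finish
    return turn;
  }
}
```

The cost of this safety is the contention the question asks about: while one large message is being written, every other session's write on that connection queues behind it.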
My own experience is that multiple connections always outperform a single connection. In a lossy network situation, using multiple connections is definitely a must. Under ideal network conditions, a single client with multiple connections can nearly saturate the network bandwidth between client and server.
That said, it really depends on your clients' performance requirements; a single connection over a good network can already provide good enough performance.
Even if you use one connection and 100 sessions, you are ultimately using 100 threads; it is the same as using 10 connections × 10 sessions = 100 threads.
You are good until you reach your system resource limits.
Trying to build a TCP server using Spring Integration that keeps connections open; these may run into the thousands at any point in time. Key concerns are:
Max no. of concurrent client connections that can be managed, as sessions would be live for a long period of time.
What is advised in case connections exceed the limit in (1)? Something along the lines of a cluster of servers would be helpful.
There's no mechanism to limit the number of connections allowed. You can, however, limit the workload by using fixed thread pools. You could also use an ApplicationListener to get TcpConnectionOpenEvents and immediately close the socket if your limit is exceeded (perhaps sending some error to the client first).
Of course you can have a cluster, together with some kind of load balancer.
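The accept-then-close approach can be sketched as plain bookkeeping, independent of Spring Integration (`ConnectionLimiter` and the limit value below are illustrative, not a real API): on each open event, check a counter; if the limit is exceeded, tell the caller to close the socket, optionally after sending an error to the client.

```javascript
// Tracks open connections; the same decision a
// TcpConnectionOpenEvent listener would make before closing a socket.
class ConnectionLimiter {
  constructor(maxConnections) {
    this.max = maxConnections; // e.g. sized from your thread pool
    this.open = 0;
  }

  // Called when a connection opens. Returns true if it may stay open;
  // false means the caller should close the socket immediately.
  onOpen() {
    if (this.open >= this.max) return false;
    this.open += 1;
    return true;
  }

  // Called when a connection closes, freeing a slot.
  onClose() {
    this.open = Math.max(0, this.open - 1);
  }
}
```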
Somewhere I've read that websocket clients are limited... so what if I want to create a simple game with 10,000 users grouped into teams of 2 or 4 players? It seems I have no solution, or maybe I'm looking in the wrong direction. Any suggestions?
Thanks
Free: (5) concurrent connections per website instance
Shared: (35) concurrent connections per website instance
Standard: (350) concurrent connections per website instance
I see this has now been changed to (from here):
Free: (5) connections
Shared: (35) connections
Basic: (350) connections
Standard: no limit
It makes sense that Azure Websites have a limit on concurrent connected users.
Are you really going to need to support 10k of them? It looks like you're going to have to look elsewhere for hosting, because 10 instances of Standard web sites is 3,500 users max :-/
If you're using ASP.NET with SignalR, you may have more luck with Azure Web Roles (the Service Bus/SignalR NuGet package makes it really easy to build scaling applications without worrying about sharing client state between your web role instances).