To improve application performance, I'm thinking of creating a Redis connection pool to share the load, instead of reusing the same single Redis connection for all incoming requests, as per the suggestion by the Redis team here.
What would be the best way to create a StackExchange.Redis connection pool for the same Redis server configuration in C#, rotating through the connections in the pool to serve incoming requests?
Is there any SDK/NuGet package available for creating a Redis connection pool?
At present we reuse a single ConnectionMultiplexer, created with the Lazy pattern in a singleton class; it initializes a single Redis connection object on the very first request and is reused throughout the application lifetime.
P.S.: Thread safety can be ignored, as all the instances in the connection pool use the same Redis server config.
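For reference, a minimal sketch of the Lazy singleton pattern currently in use (the class name and connection string are placeholders, assuming StackExchange.Redis):

using System;
using StackExchange.Redis;

// Current approach: one ConnectionMultiplexer, created lazily on the first
// request and shared for the whole application lifetime.
public static class RedisConnection
{
    private static readonly Lazy<ConnectionMultiplexer> LazyConnection =
        new Lazy<ConnectionMultiplexer>(() =>
            ConnectionMultiplexer.Connect("localhost:6379"));

    public static ConnectionMultiplexer Connection => LazyConnection.Value;

    public static IDatabase Database => Connection.GetDatabase();
}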
There's a library I implemented two years ago for exactly that requirement. It's thread safe and it creates the connection pool lazily.
You can also use the built-in connection selection strategies, such as round-robin and load-based.
The NuGet package is https://www.nuget.org/packages/StackExchange.Redis.MultiplexerPool/
You can see a sample here: https://github.com/mataness/StackExchange.Redis.MultiplexerPool/blob/master/samples/RedisConnectionPoolConsoleApp/Program.cs
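For illustration only, the round-robin rotation asked about can also be hand-rolled over plain StackExchange.Redis along these lines. This is a sketch of the idea, not the MultiplexerPool API (the class name, pool size, and configuration string are placeholders); see the linked sample for the library's actual usage.

using System;
using System.Linq;
using System.Threading;
using StackExchange.Redis;

// Sketch: a fixed-size pool of ConnectionMultiplexers handed out round-robin.
public sealed class RedisMultiplexerPool : IDisposable
{
    private readonly ConnectionMultiplexer[] _connections;
    private int _counter = -1;

    public RedisMultiplexerPool(string configuration, int poolSize = 5)
    {
        _connections = Enumerable.Range(0, poolSize)
            .Select(_ => ConnectionMultiplexer.Connect(configuration))
            .ToArray();
    }

    // Rotate through the pool; Interlocked keeps the index correct under concurrent callers.
    public IDatabase GetDatabase()
    {
        var index = (Interlocked.Increment(ref _counter) & int.MaxValue) % _connections.Length;
        return _connections[index].GetDatabase();
    }

    public void Dispose()
    {
        foreach (var connection in _connections)
            connection.Dispose();
    }
}

In practice you would create one such pool per application and call GetDatabase() per request, the same way the single multiplexer is shared today.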
What are the different use cases for the Java NIO transport connector vs. the PooledConnectionFactory in ActiveMQ? Both deal with a pool of connections. I want thousands of clients to connect to the broker and to maintain a separate queue for each client. What is the use case for each of them in this scenario?
The NIO transport connector is a server-side incoming connection API that uses a selector-based event loop to share the load of multiple active connections. The normal transport connector, by contrast, creates a single thread per connection to process IO, which leads to higher thread counts when large numbers of connections are active.
The PooledConnectionFactory is a client-side facility that provides a pool of one or more open connections for application code to use. It reduces the number of connection create/destroy events, which can make client-side code faster in some cases and lowers overhead on the remote broker, since the broker no longer has to process connection create/destroy events from an application whose model causes that sort of behavior. Depending on how you've coded your application, or what API layering you have (such as Camel or Spring), a pool may or may not be of benefit.
The two things are not related and should not be equated with one another.
At a low level, the NIO transport uses a selector, which is much more performant than the PooledConnectionFactory.
That means it gets notified when new data is ready, whereas the pool waits on each connection. For your use case I would strongly suggest the NIO connector.
I'm using Azure Functions with queue triggers for part of our workload. The specific function queries the database, and this creates problems with scaling, since the large number of concurrent function instances hitting the db means the maximum allowed number of Azure SQL Database connections is reached constantly.
This article https://learn.microsoft.com/en-us/azure/azure-functions/manage-connections lists HttpClient as one of those resources that should be made static.
Should database access also be made static, with a static SqlConnection, to resolve this issue, or would keeping a permanent connection object cause other problems?
Should database access also be made static with static SqlConnection
Definitely not. Each function invocation should open a new SqlConnection, with the same connection string, in a using block. It's not really clear how many concurrent Function Invocations the runtime will make to a single instance of your application. But if it's more than 1, then a singleton SqlConnection is a bad thing.
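A minimal sketch of that pattern (the class name, table, and connection string are placeholders); ADO.NET's built-in pooling reuses the physical connections underneath, so opening per invocation is cheap:

using System.Data.SqlClient;

public static class OrderQueries
{
    // Open per invocation and dispose promptly; the underlying physical
    // connection goes back to the ADO.NET pool when the using block ends.
    public static int CountOrders(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM Orders", connection))
        {
            connection.Open();
            return (int)command.ExecuteScalar();
        }
    }
}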
I wonder exactly which limit you're hitting in SQL Database, the connection limit or the concurrent request limit? In either case I'm a bit surprised (not a Functions expert) that you get that many concurrent function invocations, so there might be something else going on. Like you're leaking SqlConnections.
But reading the Functions docs, my guess is that the Functions runtime is scaling by launching multiple instances of your function app. Your .NET app could scale within a single process, but that's apparently not the way Functions works. Each instance of your Functions app has its own connection pool for SQL Server, and by default each pool can hold 100 connections.
Perhaps if you sharply limit the Max Pool Size in your connection string, you won't have so many connections open. When you hit the Max Pool Size, new calls to SqlConnection.Open() will block for up to 30 seconds waiting for a pooled SqlConnection to become available. So this not only limits the connection use for each instance of your application, it also throttles your throughput under load.
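For example, the cap goes directly into the connection string; the server, credentials, and the value of 20 below are purely illustrative:

// Max Pool Size caps the ADO.NET pool per application instance; Connect Timeout
// bounds how long Open() waits for a pooled connection to become available.
const string ConnectionString =
    "Server=tcp:myserver.database.windows.net;Database=mydb;" +
    "User ID=myuser;Password=...;Max Pool Size=20;Connect Timeout=30;";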
You can use the configuration settings in host.json to control the level of concurrency your functions execute at per instance, and the max scale-out setting to control how many instances you scale out to. Together these let you control the total amount of load put on your database.
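As a rough illustration, assuming the queue-trigger settings as I remember them from the docs (batchSize and newBatchThreshold control how many queue messages one instance processes in parallel), a trimmed host.json for the version 2.x schema might look like this; double-check the names against the current schema:

{
  "version": "2.0",
  "extensions": {
    "queues": {
      "batchSize": 8,
      "newBatchThreshold": 4
    }
  }
}

The maximum scale-out is capped separately from host.json, typically with the WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT app setting on the function app.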
For future readers, the documentation has been updated with some information about the SQL connection stating:
Your function code may use the .NET Framework Data Provider for SQL Server (SqlClient) to make connections to a SQL relational database. This is also the underlying provider for data frameworks that rely on ADO.NET, such as Entity Framework. Unlike HttpClient and DocumentClient connections, ADO.NET implements connection pooling by default. However, because you can still run out of connections, you should optimize connections to the database. For more information, see SQL Server Connection Pooling (ADO.NET).
So, as David Browne already mentioned, you shouldn't make your SqlConnection static.
I have a Node.js script and a PostgreSQL database, and I'll be using a library that maintains a pool of connections to the database.
Say I have a script that queries the database multiple times (not in a transaction) at different points in the script. How do I tell whether I should acquire a single connection/client and reuse it throughout*, or acquire a new client from the pool for each query? (Both work, but which has better performance?)
*task in the pg-promise library, connect in the node-postgres library.
...
// Acquire connection from pool.
(Database query)
(Non-database-related code)
(Database query)
// Release connection to pool.
...
or
...
// Acquire connection from pool.
(Database query)
// Release connection to pool.
(Non-database-related code)
// Acquire connection from pool.
(Database query)
// Release connection to pool.
...
I am not sure how the pool you are using works, but normally pools reuse connections (they are not disconnected after use), so you do not need to be concerned with caching connections yourself.
You can use the node-postgres module, which will make your task easier.
And as for your question about when to use a pool, here is the brief answer:
PostgreSQL server can only handle 1 query at a time per connection. That means if you have 1 global new pg.Client() connected to your backend, your entire app is bottlenecked by how fast Postgres can respond to queries. It literally will line everything up, queuing each query. Yeah, it's async and so that's alright... but wouldn't you rather multiply your throughput by 10x? Use pg.connect and set pg.defaults.poolSize to something sane (we do 25-100, not sure of the right number yet).
new pg.Client is for when you know what you're doing: when you need a single long-lived client for some reason, or need to very carefully control the life-cycle. A good example of this is when using LISTEN/NOTIFY. The listening client needs to stay around and connected, and not be shared, so it can properly handle NOTIFY messages. Another example would be opening up a one-off client to kill some hung stuff, or in command-line scripts.
Here is the link to that module: https://github.com/brianc/node-postgres
You can see the documentation there, including the section about pooling. Hopefully this will help. Thanks :)
And about closing connections: it provides the done callback, which you call when you are finished with a client and want to release it back to the pool.
I'm currently using Redis in cluster mode with 3 master instances. I'm using Jedis (the Java client) in a listening server: for every piece of data received I create a new thread, and the thread makes an update in Redis.
My question is: how can I use a Redis Cluster instance from multiple threads with a pool configuration?
JedisCluster is thread-safe.
It internally contains a JedisPool for each node, so you don't need to worry about sharing a JedisCluster instance across multiple threads.
What are your suggestions?
Is it better to place TRTCHttpServer on the main form, or on a data module with the other server components? The demo apps contain both implementations. The component will be set up with the MultiThreaded property set to TRUE. As far as I know, if it is kept separate from the main form, the data module is created for every thread when a client connects. Is that true?
Also, if I want to implement a pooling algorithm for the DB connection (TZConnection), where should it be put? In the data module with the other server components and DB-aware components, or in a separate data module? The pool algorithm would be threaded, like this:
DB Connection pool
The server should have a DB connection pool and be multithreaded. This could be achieved with RTC components. It would serve as the 2nd tier of a 3-tier architecture; the 3rd tier is MySQL, connected via ZeosLib.
Thanks for your answers.
PS: I have searched for other suggestions but could not get this clear. Please help.
I guess you have several questions in one...
AFAIK RTC uses a thread pool, for better scalability and lower resource use, so you cannot assume you have one thread per client.
It is always preferable to place your logic in a data module, and NEVER in a main form: do not mix UI and server. For instance, it could make sense to host your server in a service in production.
If you are using ZeosLib, the connection pool you are talking about has nothing to do with the ZDBC connection pool.