If possible, I need to configure a connection pool in IIS. The idea is that I want a limit of 20 concurrent connections and a queue of 10. Say I get 35 concurrent connections: the server should accept the first 20 and process them, the next 10 should be queued and processed as the earlier connections finish, and the remaining 5 should get a 503 response.
Is this possible? Is there an application that can help me achieve this?
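As far as I know there is no single IIS knob that implements exactly this 20-processing / 10-queued / 503-overflow split, but the closest built-in levers are the per-application concurrent request limit and the application pool's request queue length, both set in applicationHost.config. A sketch with the question's values (the exact queueing semantics differ somewhat from the model described, so treat this as a starting point):

```xml
<!-- applicationHost.config (illustrative values) -->
<system.applicationHost>
  <applicationPools>
    <!-- requests waiting in the HTTP.sys queue beyond this length get a 503 -->
    <add name="MyAppPool" queueLength="10" />
  </applicationPools>
</system.applicationHost>
<system.webServer>
  <!-- caps concurrent requests per application; excess requests get a 503 -->
  <serverRuntime appConcurrentRequestLimit="20" />
</system.webServer>
```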
Related
I need to keep 2 or more connection pools so that heavy tasks can be allocated to one pool with a defined connection size, and a second pool can serve small tasks.
For example: I have 10 to 12 dashboards with heavy data on the same page, so when a user visits that page, 10 to 12 connections get used at once. Because this is a heavy data-fetching activity, those connections will not respond for some time and are useless for other work while they are busy. If my connection pool holds 100 connections and 8 to 10 users make such requests, all 100 connections get blocked. So I want to keep only 50 connections for dashboard queries and 50 connections for smaller tasks, so that users running smaller tasks elsewhere in the system are not blocked by the dashboard queries.
Please let me know how I can define 2 or more connection pools on the same database with the same models, because I don't want to define the models repeatedly for each pool.
I am using Sequelize in our nodejs application.
Thanks in advance...
I am running a JMeter test with 20 concurrent users (Use KeepAlive is enabled). All 20 users, with different login IDs, try to log in (1st test) and create a record (2nd test). While creating a record, I observed that most of the create-record requests have a connect time of 0, but certain requests (say 5 of 20) have a connect time of 21000 ms, so the elapsed time of those 5 requests alone is very high compared to the other 15. Why is it happening for 5 users alone?
We don't know. According to the JMeter Glossary:
Connect Time. JMeter measures the time it took to establish the connection, including SSL handshake. Note that connect time is not automatically subtracted from latency. In case of connection error, the metric will be equal to the time it took to face the error, for example in case of Timeout, it should be equal to connection timeout.
So it's more of a network metric which indicates how long it took JMeter to establish a connection with the server.
You need to check:
Your network adapter statistics; it might be the case that it doesn't have enough bandwidth to send 20 concurrent requests
Your application's connection pool settings; for example, if it has 15 connections in the pool, the remaining 5 will be put into a queue and wait until a connection becomes available
Your server's baseline health metrics like CPU, RAM, etc. (this can be done using the JMeter PerfMon Plugin), as it might be the case that your server lacks the resources to serve all connections at the same time
That you follow JMeter Best Practices, as it might be the case that JMeter itself is overloaded and cannot send requests fast enough
Using varnishstat, the metric 'sess_herd' increases a lot under traffic, and it seems that I may have reached some limit (300 sess_herd/s).
I don't think I have a backend issue (busy, unhealthy, retry, and failed counters are all at 0).
Backend_req/Client_req is around 150 req/s.
Right now, our Varnish isn't caching at all; it is just "proxying" to our backend server, so the "pass" rate is about 150 req/s.
What could explain such a high sess_herd rate?
Regards
Olivier
Session herding is a counter that indicates when an ongoing session (TCP connection) is handed off from its worker thread to a waiter that keeps it while the client thinks.
By default a connection gets to keep its worker thread for 50 ms (the timeout_linger parameter in 4.1) before this happens.
Since networks and clients are slow, a worker thread can in this way serve a whole lot of clients. This reduces the number of running threads needed.
In practice this happens after a response has been sent and while waiting for another request on the reused connection.
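The counter and the linger window can be inspected (and tuned) from the command line; a short sketch, assuming Varnish 4.1's parameter names:

```
# Show the herd counter and the current linger window
varnishstat -1 -f MAIN.sess_herd
varnishadm param.show timeout_linger

# Raising the window keeps connections on worker threads longer
# (fewer herds, but more threads in use); lowering it does the opposite
varnishadm param.set timeout_linger 0.100
```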
I'm using a Managed Executor Service to implement a process manager which processes tasks in the background upon receiving a JMS message event. Normally there will be a small number of tasks running (maybe 10 max), but what if something happens and my application starts getting hundreds of JMS message events? How do I handle such an event?
My thought is to limit the number of threads if possible, save all the other messages to the database, and run them when a thread becomes available. Thanks in advance.
My thought is to limit the number of threads if possible, save all the other messages to the database, and run them when a thread becomes available.
The detailed answer to this question depends on which Java EE app server you choose to run on, since they all have slightly different configuration.
Any Java EE app server will allow you to configure the thread pool size of your Managed Executor Service (MES); this is the number of worker threads in your thread pool.
Say you have 10 worker threads and you get flooded with 100 requests all at once: the MES will keep a queue of backlogged requests, and the worker threads will take work off the queue whenever they finish, until the queue is empty.
Now, it's fine if work goes to the queue sometimes, but if your work queue grows faster than your worker threads can drain it, you will run into problems: the backlog will keep growing and your server will eventually run out of memory. The solution to this is to increase your thread pool size.
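The queueing behavior described above can be seen with Java SE's ThreadPoolExecutor, which is the machinery underneath most MES implementations (a minimal sketch, not tied to any particular app server; the class and method names are made up for illustration):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BacklogDemo {
    // Submits `tasks` tasks to a pool of `workers` threads and returns how many
    // of them sit in the backlog queue right after submission.
    static int backlogAfterSubmit(int workers, int tasks) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                workers, workers, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<>()); // unbounded backlog queue
        CountDownLatch gate = new CountDownLatch(1);
        for (int i = 0; i < tasks; i++) {
            pool.execute(() -> {
                // Simulate long-running work: every worker blocks until released
                try { gate.await(); } catch (InterruptedException ignored) { }
            });
        }
        int queued = pool.getQueue().size(); // tasks - workers are backlogged
        gate.countDown();                    // release the workers to drain the queue
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return queued;
    }
}
```

With 10 workers flooded by 100 requests, 90 requests sit in the backlog until worker threads free up.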
What if something happens and my application starts getting hundreds of JMS message events? How do I handle such an event?
If the load on your server will be so sporadic that tasks need to be saved to a database, it seems that the best approach would be one of the following:
increase thread pool size
have the server immediately reject incoming tasks when the task backlog queue is full
have clients do a blocking wait until the server task queue is no longer full (I would only advise this option if client task submission is in no way connected to user experience)
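The second option (reject incoming tasks when the backlog is full) can be sketched with plain Java SE, assuming a ThreadPoolExecutor stands in for the MES; the rejection branch is where you would persist the task to the database (names here are made up for illustration):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectDemo {
    // Floods a pool of `workers` threads with `tasks` long-running tasks; the
    // backlog queue holds at most `queueCap` of them. Returns how many
    // submissions were rejected because both workers and queue were full.
    static int floodAndCountRejections(int workers, int queueCap, int tasks)
            throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                workers, workers, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(queueCap),
                new ThreadPoolExecutor.AbortPolicy()); // throw when queue is full
        CountDownLatch gate = new CountDownLatch(1);
        int rejected = 0;
        for (int i = 0; i < tasks; i++) {
            try {
                pool.execute(() -> {
                    try { gate.await(); } catch (InterruptedException ignored) { }
                });
            } catch (RejectedExecutionException e) {
                rejected++; // here you would save the task to the database instead
            }
        }
        gate.countDown();
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return rejected;
    }
}
```

With 10 workers and a queue of 5, flooding with 20 tasks leaves 5 submissions rejected, which the caller can park in the database and replay later.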
I'm trying to build a TCP server using Spring Integration which keeps connections open; connections may run into the thousands at any point in time. Key concerns are:
1. The max number of concurrent client connections that can be managed, as sessions would be live for a long period of time.
2. What is advised in case connections exceed the limit specified in (1)?
3. Whether something along the lines of a cluster of servers would be helpful.
There's no mechanism to limit the number of connections allowed. You can, however, limit the workload by using fixed thread pools. You could also use an ApplicationListener to get TcpConnectionOpenEvents and immediately close the socket if your limit is exceeded (perhaps sending some error to the client first).
Of course you can have a cluster, together with some kind of load balancer.
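A sketch of the counting logic such a listener could use, in plain Java (in Spring Integration you would call tryAdmit() from an ApplicationListener for TcpConnectionOpenEvent and close the connection when it returns false, and call release() when the connection closes; the class and method names here are made up for illustration):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ConnectionLimiter {
    private final int maxConnections;
    private final AtomicInteger open = new AtomicInteger();

    public ConnectionLimiter(int maxConnections) {
        this.maxConnections = maxConnections;
    }

    // Called on a connection-open event: returns false when the limit is
    // reached, in which case the caller should immediately close the socket
    // (perhaps sending some error to the client first).
    public boolean tryAdmit() {
        while (true) {
            int n = open.get();
            if (n >= maxConnections) return false;
            if (open.compareAndSet(n, n + 1)) return true; // race-free increment
        }
    }

    // Called on a connection-close event to free the slot.
    public void release() {
        open.decrementAndGet();
    }
}
```

The compare-and-set loop keeps the check-then-increment atomic, so the limit holds even when many connections open concurrently.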