Problem: Limit the number of threads for each deployed application in a Tomcat container, so that no one deployed component can hog all the resources.
In the WebLogic world I used work managers with a min and max thread constraint, then specified the work manager for each deployed application.
I have read about Executors, which can set thread constraints, but only at the Connector level. One silly trick available is to make sure my clients use different ports on a single Tomcat instance, and then assign a different Executor to each Connector, but that seems inefficient.
Question: Are there better solutions than the silly idea proposed?
Example
deploymentA should be allocated minimum 5 threads and maximum 10 threads
deploymentB should be allocated minimum 10 threads and maximum 50 threads
Silly Solution
<Executor name="exeOne" maxThreads="10" minSpareThreads="5" maxQueueSize="10" />
<Executor name="exeTwo" maxThreads="50" minSpareThreads="10" maxQueueSize="10" />
<Connector port="11400" executor="exeOne" />
<Connector port="11500" executor="exeTwo" />
Have clients/users of deploymentA call port 11400 only. Have clients/users of deploymentB call port 11500 only.
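If the hard requirement is mainly the maximum constraint, another option is to throttle inside each deployment instead of at the Connector, so both apps can share one port. The following is only a sketch of that idea, not a built-in Tomcat feature: a servlet Filter holding a Semaphore, with the limit supplied per application through a web.xml init-param (the param name maxConcurrent is made up here). Note that it enforces a maximum but, unlike a WebLogic work manager, cannot reserve a minimum.
import java.io.IOException;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Per-deployment throttle: declare this filter in each WAR's web.xml with its own limit.
public class ConcurrencyLimitFilter implements Filter {

    private Semaphore permits;

    @Override
    public void init(FilterConfig config) {
        // "maxConcurrent" is a hypothetical init-param, e.g. 10 for deploymentA, 50 for deploymentB
        permits = new Semaphore(Integer.parseInt(config.getInitParameter("maxConcurrent")));
    }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        boolean acquired = false;
        try {
            // wait briefly for a slot, then reject with 503 rather than hold a container thread forever
            acquired = permits.tryAcquire(5, TimeUnit.SECONDS);
            if (!acquired) {
                ((HttpServletResponse) res).sendError(HttpServletResponse.SC_SERVICE_UNAVAILABLE);
                return;
            }
            chain.doFilter(req, res);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            ((HttpServletResponse) res).sendError(HttpServletResponse.SC_SERVICE_UNAVAILABLE);
        } finally {
            if (acquired) {
                permits.release();
            }
        }
    }

    @Override
    public void destroy() {
    }
}
Each WAR would declare this filter with its own limit (10 for deploymentA, 50 for deploymentB), so both applications can sit behind a single Connector.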
Related
I am using Spring Boot and Java 8
Calling an API with 1 employee id takes 1 millisecond. So if I am calling the API 100,000 times with different employee ids,
why does it take hours and not 100,000 × 1 ms, i.e. just about 1.6 minutes?
Spring Boot uses a thread pool to manage the workload of incoming tasks; the maximum number of worker threads is set to 200 by default.
Though this is a good number, the number of threads that can actually work in parallel depends upon CPU time slicing and the availability of backend resources. Assuming the backend resources are unlimited, throughput depends solely on the CPU time available to each thread. On a multi-core CPU, that is at most the number of cores available to serve the embedded Tomcat container.
As Spring MVC is a blocking framework, on a normal quad-core single-CPU machine (assuming all 4 cores are able to serve requests), this number is 4. That means a maximum of 4 requests can be served in parallel; the rest are queued and taken up when the next CPU slice is available.
Mathematical analysis:
Time taken by the API to process 1 request = 1 ms
Time taken by the API to process 4 concurrent requests = 1 ms
Time taken by the API to process 100,000 concurrent requests = 100,000 / 4 × 1 ms = 25 secs
This is just the best-case scenario. In real scenarios, all the cores are unlikely to provide a time slice at the same instant, so you are likely to see longer times.
In such scenarios, it would be better to use Spring's reactive stack (WebFlux) than the conventional blocking Spring MVC framework in Spring Boot.
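To see the queueing effect from the client side, here is a minimal sketch that pushes the 100,000 calls through a bounded thread pool; callApi is a hypothetical stand-in for the real HTTP call. With 4 workers and a 1 ms server-side cost, the total is roughly 100,000 / 4 ms, i.e. about 25 seconds, matching the analysis above.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BulkCaller {
    public static void main(String[] args) throws InterruptedException {
        // the pool bounds client-side concurrency; server-side parallelism is still
        // capped by Tomcat's worker threads and the CPU cores actually available
        ExecutorService pool = Executors.newFixedThreadPool(4);
        long start = System.nanoTime();
        for (int id = 1; id <= 100_000; id++) {
            final int employeeId = id;
            pool.submit(() -> callApi(employeeId));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
        System.out.printf("elapsed: %d ms%n", (System.nanoTime() - start) / 1_000_000);
    }

    private static void callApi(int employeeId) {
        // hypothetical: issue the real request here, e.g. via HttpURLConnection
    }
}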
The API you're pulling from could be limiting the number of requests you can make in a given period of time. If you don't have access to the API source, I would try running larger and larger batches of pulls until you notice the calls taking significantly longer.
Well, the time needed to get a response from a web server depends on its hosting machine and environment.
Usually a single machine has a limited number of threads in its thread pool, and each request is bound to one thread. So when you make concurrent requests, only a certain number of them are processed by the available threads at any one time, and the rest wait in a queue.
This can be the reason your requests take a while to get a response, and why some of them may even time out.
I am new to connection management in JBoss and Hibernate. I have an application using Spring + Hibernate running on JBoss 7. I did some reading but have a few doubts now:
How are connections and threads related when accessing an application?
Suppose I have a maximum pool size of 10. Does that mean only 10 threads can access my application to perform database operations at a time?
If not, what happens when more than 10 threads, say 15 or 20, access it? Do the other threads wait for the running threads to complete and then run next, or does this result in a connection error like "no managed connections available"?
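Not JBoss code, but the usual pool semantics can be illustrated with a toy model: with a maximum pool size of 10, the extra threads are not rejected immediately; they block waiting for a free connection up to the pool's blocking timeout, and only fail after that (in JBoss the error reads like "No managed connections available"). The Semaphore below merely stands in for the pool:
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class PoolModel {
    // 10 "connections", mimicking max-pool-size = 10
    private static final Semaphore CONNECTIONS = new Semaphore(10);

    public static void main(String[] args) {
        // 15 threads compete for 10 connections: 10 run, 5 wait in line
        for (int i = 0; i < 15; i++) {
            new Thread(PoolModel::doDatabaseWork, "worker-" + i).start();
        }
    }

    private static void doDatabaseWork() {
        try {
            // wait up to the blocking timeout for a free connection
            if (!CONNECTIONS.tryAcquire(30, TimeUnit.SECONDS)) {
                throw new IllegalStateException("no managed connections available");
            }
            try {
                Thread.sleep(1000); // stand-in for the actual JDBC work
            } finally {
                CONNECTIONS.release(); // return the connection to the pool
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}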
My understanding is that in Tomcat, each request takes up one Java (and thus OS) thread.
Imagine I have an app with lots of long-running requests (e.g. a poker game with multiple players) that involves in-game chat, AJAX long-polling, etc.
Is there a way to change the Tomcat configuration/architecture for my webapp so that I'm not using a thread for each request, but instead 'intercept' the request and response so they can be processed as part of a queue?
I think you're right that Tomcat likes to handle each request in its own thread, which can become problematic under many concurrent requests. I have the following suggestions:
Configure the maxThreads and acceptCount attributes of the Connector element in server.xml. This limits the number of threads that can be spawned to a threshold; once that limit is reached, requests are queued, and the acceptCount attribute sets the size of that queue. This is the simplest to implement, but not a good long-term solution.
Configure multiple Connector elements in server.xml and make them share a thread pool by adding an Executor element in server.xml. You probably want to point Tomcat to your own implementation of the Executor interface.
If you want finer-grained control over how requests are serviced, consider implementing your own connector. The protocol attribute of the Connector element in server.xml should point to your new connector. I have done this to add a custom SSL connector and it works great.
Could you reduce this problem to a general requirement of making Tomcat more scalable in terms of the number of requests/connections? The generic solution for that would be to configure a load balancer to handle multiple instances of Tomcat.
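For the "queue instead of a thread per request" part specifically, the Servlet 3.0 async API is worth a look (it requires Tomcat 7 or later). A minimal sketch, under that assumption: the container thread is released as soon as doGet returns, and a small application-owned executor completes the long-polls later.
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.servlet.AsyncContext;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/poll", asyncSupported = true)
public class LongPollServlet extends HttpServlet {

    // a deliberately small pool: thousands of idle long-polls no longer pin container threads
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        AsyncContext ctx = req.startAsync();
        ctx.setTimeout(30_000); // long-poll timeout in ms
        workers.submit(() -> {
            try {
                // stand-in for waiting on a game event or chat message
                ctx.getResponse().getWriter().write("event");
            } catch (IOException ignored) {
                // client went away; nothing to do
            } finally {
                ctx.complete(); // finish the response and release the request
            }
        });
    }
}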
We are using Tomcat 6 / IIS to host our Java MVC web applications (Spring MVC and Frontman). We recently started running into problems where we see threads stuck in the Service stage for hours.
Using Lambda Probe, we see the threads start to pile up until eventually the app becomes unresponsive. The processing time increases, with zero bytes in or out. The URL is reachable, and the logs show that the request starts but never finishes.
IP              Stage    Processing time  Bytes in  Bytes out  URL
111.11.111.111  Service  00:57:26.0       0b        0b         GET /Application/command/monitor
All of this is on a test server set up as follows:
ISAPI filter worker:
worker.testuser.type=ajp13
worker.testuser.host=localhost
worker.testuser.port=8009
worker.testuser.socket_timeout=300
worker.testuser.connection_pool_timeout=600
Server.xml:
<Connector
    port="8009"
    protocol="AJP/1.3"
    redirectPort="8443"
    tomcatAuthentication="false"
    connectionTimeout="6000"
/>
Any thoughts on why this happens or how to configure Tomcat to kill ancient application threads?
You can use the Java monitoring package to get all the threads and thread dumps, and kill a thread by its id (though Thread.stop is deprecated, it does the work).
http://docs.oracle.com/javase/1.5.0/docs/guide/management/overview.html
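A minimal sketch of that approach, using only the standard java.lang.management API; stopping threads this way is unsafe (it can leave shared state corrupted) and should be a last resort:
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.Map;

public class ThreadKiller {

    // print a dump of every live thread, including its state and stack frames
    public static void printThreadDump() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
            System.out.println(info);
        }
    }

    // locate a runaway thread by the id seen in the dump and stop it
    @SuppressWarnings("deprecation")
    public static void killById(long threadId) {
        for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
            if (e.getKey().getId() == threadId) {
                e.getKey().stop(); // deprecated and unsafe, but it does the work
                return;
            }
        }
    }
}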
We have a .NET 2.0 Remoting server running in Single-Call mode under IIS7. It has two APIs, say:
DoLongRunningCalculation() - has a lot of database requests and can take a long time to execute.
HelloWorld() - just returns "Hello World".
We tried to stress test the remoting server (on a Windows 7 machine) in a worst-case scenario by bombarding it randomly with the two API calls, and found that if we go beyond 10 client requests, the HelloWorld response (which generally takes less than 0.1 sec) starts taking longer and longer, stretching into many seconds. Our objective is that we don't want the long-running remoting calls to block the short-running calls. Here are the performance counters for ASP.NET v2.0.50727 when we have 20 client threads running:
Requests Queued: 0
Requests Executing: (Max:10)
Worker Processes Running: 0
Pipeline Instance Mode: (Max:10)
Requests in Application Queue: 0
We've tried setting maxConcurrentRequestsPerCPU to "5000" in registry as per Thomas's blog: ASP.NET Thread Usage on IIS 7.0 and 6.0 but it hasn't helped. Based on the above data, it appears that the number of concurrent requests is stuck at 10.
So, the question is:
How do we go about increasing the number of concurrent requests? The main objective is that we don't want the long-running remoting calls to block the short-running calls.
Why is Max Requests Executing always stuck at 10?
Thanks in advance.
Windows 7 has a 20 inbound connection limit; XP and earlier were limited to 10 (not sure about Vista). This is likely the cause of your drop in performance. Try testing on an actual server OS that doesn't have an arbitrary connection limit.