Limit number of threads in Mule's JDBC inbound - multithreading

I have a jdbc inbound endpoint that selects tens of thousands of records. Mule automatically splits them up and processes each of them concurrently. The problem is, the process involves calling another service that cannot handle so many requests at the same time.
Is there a way to limit the number of concurrent threads so that not that many requests happen at the same time?
I am reading http://www.mulesoft.org/documentation/display/current/Configuring+a+Transport#ConfiguringaTransport-receiver but I cannot grasp how to do it. I tried:
<jdbc-ee:inbound-endpoint doc:name="db" connector-ref="testConnector" exchange-pattern="one-way"
    pollingFrequency="60000" queryTimeout="-1" queryKey="findAllPersonIds">
    <receiver-threading-profile maxThreadsActive="2" />
</jdbc-ee:inbound-endpoint>
But when I try to start it, Mule complains that 'receiver-threading-profile' isn't valid.

A JDBC inbound endpoint is a poller endpoint, which is backed by a single thread per Mule instance (or per cluster if you run EE).
The parallelism you're experiencing comes from the flow processing strategy, which by default has a threading profile that will use multiple concurrent threads.
You need to limit this parallelism in the flow that performs the remote service invocation. You haven't shown your config, so I can't tell whether it's the same flow that contains the JDBC inbound endpoint.
With the information you have provided, the best I can do is direct you to the reference documentation: http://www.mulesoft.org/documentation/display/current/Flow+Processing+Strategies
From that documentation, here is an example flow that allows up to 500 concurrent threads:
<queued-asynchronous-processing-strategy name="allow500Threads"
                                         maxThreads="500"/>

<flow name="manyThreads" processingStrategy="allow500Threads">
    <vm:inbound-endpoint path="manyThreads" exchange-pattern="one-way"/>
    <vm:outbound-endpoint path="output" exchange-pattern="one-way"/>
</flow>
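In your case you would go the other way and cap the pool instead. A minimal sketch, assuming the JDBC polling and the call to the slow service live in the same flow (the strategy and flow names here are made up):

<queued-asynchronous-processing-strategy name="allowOnly2Threads" maxThreads="2"/>

<flow name="processPersonIds" processingStrategy="allowOnly2Threads">
    <jdbc-ee:inbound-endpoint connector-ref="testConnector" exchange-pattern="one-way"
        pollingFrequency="60000" queryTimeout="-1" queryKey="findAllPersonIds"/>
    <!-- at most 2 records are processed concurrently past this point -->
    <!-- ... call to the rate-limited service goes here ... -->
</flow>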

Related

Defining number of threads on HTTP Requester configurations/connectors

I'm trying to control the amount of maxThreadsActive and maxThreadsIdle for outgoing HTTP connections in Mule.
Setting the default-threading-profile doesn't affect the number of threads allocated to HTTP requesters.
For HTTP listeners it's possible to set the threading profile via the http:worker-threading-profile, like this:
<http:listener-config name="HTTP_Listener_Configuration" host="0.0.0.0" port="8081" doc:name="HTTP Listener Configuration">
    <http:worker-threading-profile maxThreadsActive="2" maxThreadsIdle="1" threadTTL="60000"/>
</http:listener-config>
But I can't find a way to apply a threading profile to an http:request element.
Besides this, I'm wondering how the http:worker-threading-profile works for listeners in this case: when I use a profiler (VisualVM) I don't see any changes in the number of threads allocated for the HTTP listener.
Any ideas on threads for HTTP endpoints, how to control them, and how to verify the settings?
The screenshot below is from a simple test app with the threading profile applied as mentioned above.
The same app has a simple http:request config; for the outbound HTTP connection (requester) I always get the same number of threads.
Never tried it myself, but some info from research and training says this: if your flow is using a synchronous processing strategy, which Mule sets based on your message source and flow behavior, processing is done in the same thread. This might explain why you don't see any changes in the number of threads allocated for the HTTP listener. The flow is set to synchronous if the message source is request-response (the sender of the message expects a response) or if the flow takes part in a transaction.
Otherwise, Mule sets the flow to queued-asynchronous. In this case you set threads using the flow's properties view (in Studio, select the flow itself and look for Processing Strategy in the properties). Set properties for the flow as described in the docs. You do not set threads for the HTTP Requester afaik.
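For illustration, the equivalent XML might look roughly like this (a sketch, assuming a one-way VM source feeding the requester; all names here are made up):

<queued-asynchronous-processing-strategy name="maxTwoThreads" maxThreads="2"/>

<flow name="callBackend" processingStrategy="maxTwoThreads">
    <vm:inbound-endpoint path="backendRequests" exchange-pattern="one-way"/>
    <!-- at most 2 flow threads execute the requester concurrently -->
    <http:request config-ref="HTTP_Request_Configuration" path="/backend" method="GET"/>
</flow>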

Can outgoing job counts be controlled in Tibco BW, and how?

I have a requirement: the backend can accept only 20 parallel requests at a time. It is shared by many other clients, so it is not dedicated.
I have 100 requests ready to be sent to the backend, but according to the requirement only 20 requests should reach the backend at a time.
How can I control the number of requests sent to the backend?
I checked the Tibco BW administrator and found that only the load on a startup process can be controlled, via the max job count properties, and that applies to incoming messages.
How would Tibco control the count of outgoing requests? Is there a max-job-count-like parameter for this, or any external way?
I assume it has to do with your business logic. However, you may not want to control the process's thread creation for this. You may want to be a little creative and design two different processes:
one to receive the requests and log them into a DB, and another to pick up to 20 jobs at a time and send them to the backend.
Moreover, you haven't specified whether you want to use SOAP over HTTP or JMS. Over JMS there are more options to control this scenario without introducing a second process.
Hope it may help.

Common timeout across ExecutorChannel threads

Our application integration flow is defined as splitter -> ws gateway -> aggregator. The splitter splits the request into a list of account numbers, so that for each account number a web service call is initiated, and the responses from the multiple web service calls are aggregated in the aggregator. The channel between the splitter and the ws gateway is defined with a commonj WorkManagerTaskExecutor dispatcher, so that each web service call is initiated in parallel on a different thread.
We have added a timeout for each web service call, but we would like to set a single timeout for the whole process, i.e. all the web service calls should complete within, say, 50 seconds, rather than setting a 50-second timeout on each individual call. The commonj WorkManager provides this feature through the waitForAll(Collection workItems, long timeout_ms) method when used directly in code. Is there any way to use this, or a similar feature, to achieve our requirement?
Unfortunately, no, we can't use such a custom feature of that specific TaskExecutor.
On the other hand, if you say "single timeout for the whole process", I can help you with the <gateway> pattern:
<chain>
    <gateway request-channel="splitterChannel" reply-timeout="50000"/>
</chain>
Where reply-timeout is:
Specifies how long this gateway will wait for the reply message
before returning. By default it will wait indefinitely. 'null' is returned
if the gateway times out.
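To put that in the context of your splitter -> ws gateway -> aggregator flow, here is a minimal sketch (assuming Spring Integration is the default XML namespace; the channel names, executor bean, and service URI are hypothetical):

<!-- the whole split/call/aggregate subflow must reply within 50s -->
<chain input-channel="accountsChannel">
    <gateway request-channel="splitterChannel" reply-timeout="50000"/>
</chain>

<splitter input-channel="splitterChannel" output-channel="wsCallChannel"/>

<!-- ExecutorChannel: each account number is dispatched on a WorkManager thread -->
<channel id="wsCallChannel">
    <dispatcher task-executor="workManagerTaskExecutor"/>
</channel>

<ws:outbound-gateway request-channel="wsCallChannel" reply-channel="aggregateChannel"
    uri="http://backend/accountService"/>

<!-- with no output-channel, the aggregated result becomes the <gateway> reply -->
<aggregator input-channel="aggregateChannel"/>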
Does it make sense for you?

Mule: Thread count under load with doThreading="false"

We have a Mule app with an HTTP inbound endpoint, and I'm trying to figure out how to control the thread count under load. As an experiment I added the following configuration:
<core:configuration>
    <core:default-threading-profile doThreading="false" maxThreadsActive="500" poolExhaustedAction="RUN"/>
</core:configuration>
Under load I'm seeing the thread count peak at over 1000 threads. I am not sure why this is the case, given the maxThreadsActive setting and doThreading="false". Reading about poolExhaustedAction="RUN", I would expect the listener thread to block while processing inbound requests rather than spawn new ones, and finally to reject connections once its backlog queue is full. I never see rejected client connections.
Does Mule maintain a separate thread pool for each inbound endpoint in the app (sorry if this is in the documentation)? Even if so, I don't think that explains what I'm seeing.
Any help appreciated. We are running a number of mule apps in one container and I'd like to control the total number of threads.
Thanks, Alfie.
Clearly the doThreading attribute on default-threading-profile is not enough to control Mule threading as a whole, nor to put a global cap on the specific threading behaviour of transports. I reckon you're getting 500 threads for the HTTP message receiver pool and 500 for the VM message dispatcher pool.
I strongly suggest reading about tuning Mule: http://www.mulesoft.org/documentation/display/current/Tuning+Performance
My gut feeling is that you need to:
configure threading on each transport (VM, HTTP), strictly specifying the pool size for receivers and dispatchers, as sketched below,
select flow processing strategies that prevent Mule from spawning new threads (i.e. use synchronous to hog the receiver threads),
select exchange patterns that also prevent Mule from spawning new threads (i.e. use request-response to piggyback the current execution thread).
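A sketch of those three points in Mule 3 XML; the connector names and pool sizes are illustrative only:

<http:connector name="limitedHttp">
    <receiver-threading-profile maxThreadsActive="100" poolExhaustedAction="WAIT"/>
    <dispatcher-threading-profile maxThreadsActive="100"/>
</http:connector>

<vm:connector name="limitedVm">
    <dispatcher-threading-profile maxThreadsActive="100"/>
</vm:connector>

<!-- synchronous strategy + request-response: processors run on the receiver thread -->
<flow name="inbound" processingStrategy="synchronous">
    <http:inbound-endpoint connector-ref="limitedHttp" host="0.0.0.0" port="8081"
        exchange-pattern="request-response"/>
    <!-- ... -->
</flow>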

Tomcat - one thread per request - or other alternatives?

My understanding is that in Tomcat, each request takes up one Java (and thus OS) thread.
Imagine I have an app with lots of long-running requests (e.g. a poker game with multiple players) that involves in-game chat, AJAX long-polling, etc.
Is there a way to change the Tomcat configuration/architecture for my webapp so that I'm not using a thread per request, but instead 'intercept' the request and response so they can be processed as part of a queue?
I think you're right that Tomcat likes to handle each request in its own thread. This can become problematic with many concurrent long-running requests. I have the following suggestions:
Configure the maxThreads and acceptCount attributes of the Connector elements in server.xml. This way you limit the number of threads that can be spawned to a threshold; once that limit is reached, requests get queued, and the acceptCount attribute sets the size of that queue. This is the simplest to implement but not a good long-term solution.
Configure multiple Connector elements in server.xml and make them share a thread pool by adding an Executor element in server.xml, as sketched below. You can also point Tomcat to your own implementation of the Executor interface.
If you want finer-grained control over how requests are serviced, consider implementing your own connector. The 'protocol' attribute of the Connector element in server.xml should point to your new connector. I have done this to add a custom SSL connector and it works great.
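For suggestions 1 and 2, a server.xml sketch (Tomcat 7/8 syntax; the ports and pool sizes are illustrative):

<!-- shared pool: caps the total thread count across both connectors -->
<Executor name="sharedPool" namePrefix="web-exec-" maxThreads="150" minSpareThreads="4"/>

<!-- acceptCount sets the queue size used once all pool threads are busy -->
<Connector port="8080" protocol="HTTP/1.1" executor="sharedPool"
    acceptCount="100" connectionTimeout="20000"/>
<Connector port="8009" protocol="AJP/1.3" executor="sharedPool"/>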
Could you reduce this problem to a general requirement of making Tomcat more scalable in terms of the number of requests/connections? The generic solution for that would be to configure a load balancer to handle multiple instances of Tomcat.
