Defining number of threads on HTTP Requester configurations/connectors - multithreading

I'm trying to control maxThreadsActive and maxThreadsIdle for outgoing HTTP connections in Mule.
Setting the default-threading-profile doesn't affect the number of threads allocated to HTTP requesters.
For HTTP listeners it's possible to set the threading profile via the http:worker-threading-profile, like this:
<http:listener-config name="HTTP_Listener_Configuration" host="0.0.0.0" port="8081" doc:name="HTTP Listener Configuration">
<http:worker-threading-profile maxThreadsActive="2" maxThreadsIdle="1" threadTTL="60000"/>
</http:listener-config>
But I can't find a way to apply a threading profile to an http:request element.
Besides this, I'm wondering how the http:worker-threading-profile actually works for listeners: when I use a profiler (VisualVM) I don't see any change in the number of threads allocated for the HTTP listener.
Any ideas on how to control and verify the threads used by HTTP endpoints?
The screenshot below is from a simple test app with the threading profile applied as mentioned above.
The same app has a simple http:request config; for the outbound HTTP connection (requester) I always get this number of threads:

Never tried it myself, but some info from research and training says this: if your flow is using a synchronous processing strategy, which Mule sets based on your message source and flow behavior, processing is done in the same thread. This might explain why you don't see any change in the number of threads allocated for the HTTP listener. The flow is set to synchronous if the message source is request-response (the sender of the message is expecting a response) or if the flow takes part in a transaction.
Otherwise, Mule sets the flow to queued-asynchronous. In this case you set the threads using the flow's properties view (in Studio, select the flow itself and look for Processing Strategy in the properties) and set the properties for the flow as described in the docs. As far as I know, you do not set threads on the HTTP Requester itself.
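To illustrate, a minimal sketch in Mule 3.x XML of capping the worker pool of a queued-asynchronous flow; the strategy name, VM path, and thread count are illustrative (note the source must be one-way, since a request-response source forces the flow synchronous, as described above):
<queued-asynchronous-processing-strategy name="cappedStrategy" maxThreads="2"/>
<flow name="workerFlow" processingStrategy="cappedStrategy">
    <!-- one-way source, so the queued-asynchronous strategy applies -->
    <vm:inbound-endpoint path="work.in" exchange-pattern="one-way"/>
    <logger level="INFO" message="running on one of at most 2 worker threads"/>
</flow>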

Related

Play Framework Scala thread affinity

We have our HTTP layer served by Play Framework in Scala. One of our APIs is something of the form:
POST /customer/:id
Requests are sent by our UI team, which calls these APIs from a React front end.
The issue is that, sometimes, the requests are issued in batches, one immediately after the other for the same customer ID. When this happens, different threads process these requests, so our persistence layer (MySQL) reaches an inconsistent state because the requests are handled at different timestamps.
Is it possible to configure some sort of thread affinity in Play Scala? That is, can I configure Play to ensure that requests for a particular customer ID are handled by the same thread throughout the life-cycle of the application?
"Batch" means putting several API calls into a single HTTP request. A batch request is a set of commands in one HTTP request, as described here: https://developers.facebook.com/docs/graph-api/making-multiple-requests/
You describe it as:
The issue is that, sometimes, the requests are issued in batches, one immediately after the other for the same customer ID. When this happens, different threads process these requests, so our persistence layer (MySQL) reaches an inconsistent state because the requests are handled at different timestamps.
This is a set of concurrent requests. The Play framework usually works as a stateless server, and I assume you have organized yours as stateless too. Nothing binds one request to another, so you can't control the order. Well, you could, if you created a special protocol: "open batch request", request #1, #2, ..., "close batch request", plus checks that all the requests arrived correctly, plus some stateful threads and some queues... Akka could help with this, but I am pretty sure you won't want to do it.
This issue is not specific to the Play framework; you will reproduce it with any server. For the general case, see: Is it possible to receive out-of-order responses with HTTP?
You can go in either way:
1. "Batch" the command in one request
You need to change the client so it jams "batch" requests into one. You also need to change server so it processes all the commands from the batch one after another.
Example of the requests: https://developers.facebook.com/docs/graph-api/making-multiple-requests/
2. "Pipeline" requests
You need to change the client so it sends the next request after receive the response from the previous.
Example: Is it possible to receive out-of-order responses with HTTP?
The solution to this is to pipeline Ajax requests, transmitting them serially. ... . The next request sent only after the previous one has returned successfully."

Common timeout across ExecutorChannel threads

Our application integration flow is defined as splitter -> ws gateway -> aggregator. The splitter splits the request into a list of account numbers, so that a web service call is initiated for each account number, and the responses from the multiple web service calls are aggregated in the aggregator. The channel between the splitter and the ws gateway is defined with a commonj WorkManagerTaskExecutor dispatcher, so that each web service call is initiated in parallel on a different thread.
We have added a timeout for each web service call, but we would like to set a single timeout for the whole process, i.e. all the web service calls should complete within, say, 50 secs, rather than each individual call having its own 50 sec timeout. commonj WorkManagerTaskExecutor provides this feature through its waitForAll(Collection workItems, long timeout_ms) method when used directly in code. Is there any way to use this, or a similar feature, to achieve our requirement?
Unfortunately, no, we can't use such a custom feature of that specific TaskExecutor.
On the other hand, since you say "single timeout for the whole process", I can help you with the <gateway> pattern:
<chain>
<gateway request-channel="splitterChannel" reply-timeout="50000"/>
</chain>
Where reply-timeout is:
Specifies how long this gateway will wait for the reply message
before returning. By default it will wait indefinitely. 'null' is returned
if the gateway times out.
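For context, a minimal sketch of how the gateway could wrap the whole splitter/aggregator leg; the channel names are illustrative, and the aggregated reply eventually flows back to the gateway, which gives up after 50 secs:
<int:chain input-channel="requestChannel">
    <int:gateway request-channel="splitterChannel" reply-timeout="50000"/>
</int:chain>
<int:splitter input-channel="splitterChannel" output-channel="wsRequestChannel"/>
<!-- each account number is dispatched on its own WorkManager thread -->
<int:channel id="wsRequestChannel">
    <int:dispatcher task-executor="workManagerTaskExecutor"/>
</int:channel>
<!-- ws outbound gateway and aggregator as in your existing flow; the
     aggregated reply returns to the <gateway> automatically -->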
Does it make sense for you?

How to log time taken by JDBC components

In my Spring Integration application I have several stored-proc-outbound-gateways; I would like to log how much time each call takes. Any help would be appreciated.
Ideally I would like to be able to enable/disable logging of the parameters used, the time taken, and the total rows retrieved (returning-resultset), for monitoring and performance-tuning purposes.
Thanks
You can add a ChannelInterceptor (subclass of ChannelInterceptorAdapter) to the request channel, which will give you raw timing (preSend/postSend), but the time will include any processing downstream of the gateway (on direct channels).
Since you want to examine the results too, you could start a timer (e.g. Spring's StopWatch) in the interceptor's preSend on the request channel and stop it in an interceptor on the reply channel. If you use the same interceptor bean for both, you can store the timer in a ThreadLocal.
You can turn collection on and off with a boolean property on the interceptor.
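A minimal sketch of the wiring in Spring Integration XML; com.example.TimingInterceptor is a hypothetical class extending ChannelInterceptorAdapter that starts a StopWatch in preSend on the request channel and stops and logs it when the reply passes through:
<bean id="timingInterceptor" class="com.example.TimingInterceptor"/>
<int:channel id="requestChannel">
    <int:interceptors>
        <ref bean="timingInterceptor"/>
    </int:interceptors>
</int:channel>
<int:channel id="replyChannel">
    <int:interceptors>
        <ref bean="timingInterceptor"/>
    </int:interceptors>
</int:channel>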
Alternatively, you can add a custom advice to the gateway.
EDIT
The advice is probably the best approach because with a ThreadLocal you will need to add code to the first interceptor to handle failures and clean up. With an around advice, the timer would just be a local method variable.

Mule: Thread count under load with doThreading="false"

We have a Mule app with an HTTP inbound endpoint, and I'm trying to figure out how to control the thread count under load. As an experiment I have added the following configuration:
<core:configuration>
<core:default-threading-profile doThreading="false" maxThreadsActive="500" poolExhaustedAction="RUN"/>
</core:configuration>
Under load I'm seeing the thread count peak at over 1000 threads. I'm not sure why this is the case, given the maxThreadsActive setting and doThreading="false". Reading about poolExhaustedAction="RUN", I would expect the listener thread to block while processing inbound requests rather than spawn new ones, and finally to reject connections once its backlog queue is full. I never see rejected client connections.
Does Mule maintain a separate thread pool for each inbound endpoint in the app (sorry if this is in the documentation)? Even if so, I don't think that explains what I'm seeing.
Any help appreciated. We are running a number of Mule apps in one container and I'd like to control the total number of threads.
Thanks, Alfie.
Clearly the doThreading attribute on default-threading-profile is not enough to control Mule threading as a whole, nor to put a global cap on the threading behaviour of specific transports. I reckon you're getting 500 threads for the HTTP message receiver pool and 500 for the VM message dispatcher pool.
I strongly suggest you reading about tuning Mule: http://www.mulesoft.org/documentation/display/current/Tuning+Performance
My gut feel is that you need to:
configure threading on each transport (VM, HTTP), strictly specifying the pool sizes for receivers and dispatchers, as sketched below,
select flow processing strategies that prevent Mule from spawning new threads (i.e. use synchronous to hog the receiver threads),
select exchange patterns that also prevent Mule from spawning new threads (i.e. use request-response to piggyback on the current execution thread).
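For the first point, a minimal sketch of per-transport threading profiles in Mule 3.x XML; the connector names and pool sizes are illustrative:
<http:connector name="httpConnector">
    <receiver-threading-profile maxThreadsActive="100" poolExhaustedAction="WAIT"/>
    <dispatcher-threading-profile maxThreadsActive="100"/>
</http:connector>
<vm:connector name="vmConnector">
    <receiver-threading-profile maxThreadsActive="50"/>
    <dispatcher-threading-profile maxThreadsActive="50"/>
</vm:connector>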

Mule poolExhaustedAction

I'm trying to make sure I understand the meaning of the poolExhaustedAction values for a threading profile. I'm not seeing a lot of examples out there.
Assume I have a thread pool on an HTTP endpoint that has maxThreadsActive set to "16". I receive 20 inbound requests in a short period (faster than I can process any of them).
If poolExhaustedAction is set to "WAIT" then the last 4 requests will wait for threadWaitTimeout. Is this correct?
If poolExhaustedAction is set to "RUN" then the last 4 requests will ????...use the thread that carried the request to the endpoint to run the flow???? I'm a bit confused on this one. Specifically, if set to "RUN", will the service ever reject a request (assuming Mule has threads to deliver messages to it)?
Have you read http://www.mulesoft.org/documentation/display/current/Tuning+Performance? Especially this part?
Answers to your questions are:
Yes.
Indeed, the thread that received the request will be used to process it in the flow. The service will start rejecting requests when inbound socket connections time out because the thread in charge of routing them in Mule is too busy to accept them.
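To make the two scenarios concrete, a minimal sketch of such a receiver threading profile, assuming a Mule 3.x HTTP connector; the name and values are illustrative:
<http:connector name="httpConnector">
    <!-- WAIT: requests 17-20 queue for up to threadWaitTimeout ms, then are rejected -->
    <!-- RUN: the receiver thread itself executes the flow instead of handing off -->
    <receiver-threading-profile maxThreadsActive="16"
                                poolExhaustedAction="WAIT"
                                threadWaitTimeout="30000"/>
</http:connector>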
