Mule poolExhaustedAction - multithreading

I'm trying to make sure I understand the meaning of the poolExhaustedAction values for a threading profile. I'm not seeing a lot of examples out there.
Assume I have a thread pool on an HTTP endpoint that has maxThreadsActive set to "16". I receive 20 inbound requests in a short period (faster than I can process any of them).
If poolExhaustedAction is set to "WAIT" then the last 4 requests will wait for threadWaitTimeout. Is this correct?
If poolExhaustedAction is set to "RUN" then the last 4 requests will ????...use the thread that carried the request to the endpoint to run the flow???? I'm a bit confused on this one. Specifically, if set to "RUN", will the service ever reject a request (assuming Mule has threads to deliver messages to it)?

Have you read http://www.mulesoft.org/documentation/display/current/Tuning+Performance?
Answers to your questions are:
Yes.
Indeed, the thread that received the request will be used to process it in the flow. The service will start rejecting requests when inbound socket connections time out because the thread in charge of routing them in Mule is too busy to accept them.

Related

Tuning gRPC thread pool

I'm dealing with a legacy synchronous server that has operations running for up to a minute, and it exposes 3 ports to work around this: a "light-requests" port, a "heavy-but-important" requests port, and a "heavy" port.
They all expose the same service, but since they run on separate ports, they end up with dedicated thread pools.
Now this approach is running into a problem with load balancing, as Envoy can't handle a single service exposing the same proto on 3 different ports.
I'm trying to come up with a single thread pool configuration that would work (probably an extremely overprovisioned one), but I can't find any documentation on what the thread pool settings actually do.
NUM_CQS: Number of completion queues.
MIN_POLLERS: Minimum number of polling threads.
MAX_POLLERS: Maximum number of polling threads.
CQ_TIMEOUT_MSEC: Completion queue timeout in milliseconds.
Is there some reason why you need the requests split into three different thread pools? By default, there is no limit to the number of request handler threads. The sync server will spawn a new thread for each request, so the number of threads will be determined by the number of concurrent requests -- the only real limit is what your server machine can handle. (If you actually want to bound the number of threads, I think you can do so via ResourceQuota::SetMaxThreads(), although that's a global limit, not one per class of requests.)
Note that the request handler threads are independent from the number of polling threads set via MIN_POLLERS and MAX_POLLERS, so those settings probably aren't relevant here.
UPDATE: Actually, I just learned that my description above, while correct in a practical sense, got some of the internal details wrong. There is actually just one thread pool for both polling and request handlers. When a request comes in, an existing polling thread basically becomes a request handler thread, and when the request handler completes, that thread is available to become a polling thread again. The MIN_POLLERS and MAX_POLLERS options allow tuning the number of threads that are used for polling: when a polling thread becomes a request handler thread, if there are not enough polling threads remaining, a new one will be spawned, and when a request handler finishes, if there are too many polling threads, the thread will terminate. But none of this affects the number of threads used for request handlers -- that is still unbounded by default.
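For comparison only (the question above is about the C++ sync server): in the Python binding the handler pool is bounded explicitly when the server is built, which is roughly the effect you would otherwise approximate with ResourceQuota. A minimal sketch, with the port and limits as arbitrary placeholders:

from concurrent import futures
import grpc

# Cap the handler pool at 16 worker threads and refuse (RESOURCE_EXHAUSTED)
# anything beyond 32 in-flight RPCs; both numbers are illustrative only.
server = grpc.server(
    futures.ThreadPoolExecutor(max_workers=16),
    maximum_concurrent_rpcs=32,
)
server.add_insecure_port("[::]:50051")
# ... register the generated servicers here ...
server.start()
server.wait_for_termination()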

How does gRPC handle multiple overlapping requests?

Let us say a gRPC client makes two requests, R1 and R2, to a gRPC server, one after the other (assume no significant time gap, i.e. R2 is made while R1 is still not served). Also, assume that R1 takes much more time than R2.
In this case, should I expect R2's response first as it takes less time or should I expect R1's response first as this request is made prior to R2? What will happen and why?
From what I have observed, requests seem to be served in FCFS (first-come, first-served) fashion, so R1's response would be received by the client first and then R2's, but I am not sure.
Theoretically, nothing prevents the server and client from processing gRPC requests in parallel. A gRPC connection is made over HTTP/2, which can handle multiple requests at once. So yes: unless the server uses some specific synchronization or limiting mechanism, requests will be processed with overlap. If server resources or policy don't allow it, they will be processed one by one. Note also that a request can have a timeout (deadline) after which it is cancelled, so a long wait can lead to cancellation and the request not being processed at all.
All requests should be processed in parallel. The gRPC architecture in the Java implementation, for example, is divided into two "parts":
The event loop runs in a worker thread group, similar to what we have in reactive implementations: one thread per core to handle the incoming requests.
The request processing is done in a dedicated thread, which by default is created from a CachedThreadPool.
For single-threaded languages like JavaScript, I am not sure how they do it, but I would guess it is handled on the same thread and would therefore end up queuing the requests.
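To make the ordering concrete, here is a small Python sketch that mimics two overlapping requests handled on a thread pool; the sleeps stand in for R1's and R2's processing times, and the shorter request finishes first even though it was submitted second:

import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def handle(name, seconds):
    # Stand-in for a request handler; the sleep models processing time.
    time.sleep(seconds)
    return name

with ThreadPoolExecutor(max_workers=2) as pool:
    pending = [
        pool.submit(handle, "R1", 2.0),  # slow request, submitted first
        pool.submit(handle, "R2", 0.2),  # fast request, submitted second
    ]
    for done in as_completed(pending):
        print(done.result(), "completed")  # prints R2 first, then R1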

Play Framework Scala thread affinity

We have our HTTP layer served by Play Framework in Scala. One of our APIs is something of the form:
POST /customer/:id
Requests are sent by our UI team, which calls these APIs through a React frontend.
The issue is that, sometimes, the requests are issued in batches, successively one after the other for the same customer ID. When this happens, different threads process these requests and so our persistent layer (MySQL) reaches an inconsistent state due to the difference in the timestamp of the handling of these requests.
Is it possible to configure some sort of thread affinity in Play Scala? What I mean by that is, can I configure Play to ensure that requests of a particular customer ID are handled by the same thread throughout the life-cycle of the application?
"Batch" means putting several API calls into a single HTTP request. A batch request is a set of commands in one HTTP request, as described here: https://developers.facebook.com/docs/graph-api/making-multiple-requests/
You describe it as
The issue is that, sometimes, the requests are issued in batches, successively one after the other for the same customer ID. When this happens, different threads process these requests and so our persistent layer (MySQL) reaches an inconsistent state due to the difference in the timestamp of the handling of these requests.
This is a set of concurrent requests. The Play framework usually works as a stateless server, and I assume you have organized yours as stateless too. There is nothing that binds one request to another, so you can't control their order. Well, you could, if you created a special protocol: an "opening batch request", request #1, #2, ..., then a "closing batch request". You would need to check that every request arrived correctly, and you would also need to run some stateful threads and some queues. Akka could help with this, but I'm pretty sure you don't want to do that.
This issue is not specific to Play Framework; you would reproduce it with any server. For the general case, see: Is it possible to receive out-of-order responses with HTTP?
You can go either way:
1. "Batch" the command in one request
You need to change the client so it jams "batch" requests into one. You also need to change server so it processes all the commands from the batch one after another.
Example of the requests: https://developers.facebook.com/docs/graph-api/making-multiple-requests/
2. "Pipeline" requests
You need to change the client so it sends the next request only after it has received the response to the previous one.
Example: Is it possible to receive out-of-order responses with HTTP?
"The solution to this is to pipeline Ajax requests, transmitting them serially ... The next request is sent only after the previous one has returned successfully."
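As a sketch of option 2, here is what a pipelined client could look like in Python with aiohttp (the URL and payloads are hypothetical, and the real client is a React frontend, but the idea is the same: await each response before issuing the next request for that customer ID):

import asyncio
import aiohttp

async def update_customer(customer_id, updates):
    # Send the updates for one customer serially: each POST is issued only
    # after the previous response has returned, so the server (and MySQL)
    # sees them in order.
    async with aiohttp.ClientSession() as session:
        for payload in updates:
            # Hypothetical endpoint mirroring POST /customer/:id above.
            url = f"http://localhost:9000/customer/{customer_id}"
            async with session.post(url, json=payload) as resp:
                resp.raise_for_status()  # stop the pipeline on failure

asyncio.run(update_customer("42", [{"name": "a"}, {"name": "b"}]))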

Background processes in the aiohttp event loop

I have a web service that accepts POST requests. A POST request specifies a job to be executed in the background that modifies a database used for later analysis. The sender of the request does not care about the result and only needs to receive a 202 acknowledgement from the web service.
How it was implemented so far:
The Flask web service gets the HTTP request, adds the necessary parameters to the task queue (RQ workers), and returns an acknowledgement. A separate RQ worker process listens on the queue and processes the job.
We have now switched to aiohttp, and realized that the web service can now schedule the actual job in its own event loop by using asyncio.ensure_future().
This however blurs the line between the web server and the task queue. On the positive side, it eliminates the need to manage the RQ workers.
Is this considered a good practice?
If your tasks are not CPU-heavy, then yes, it is good practice.
But if they are, you need to move them to a separate service or use run_in_executor(). Otherwise your aiohttp event loop will be blocked by these tasks and the server will not be able to accept new requests.
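A minimal sketch of both variants in an aiohttp handler (the /jobs route and the job functions are made-up placeholders):

import asyncio
from aiohttp import web

def cpu_heavy_job(payload):
    # Placeholder for a CPU-bound task; it must not run on the event loop.
    pass

async def io_job(payload):
    # Placeholder for a non-CPU-heavy (I/O-bound) background task.
    pass

async def handle_post(request):
    payload = await request.json()
    loop = asyncio.get_running_loop()
    # Light, I/O-bound work: schedule it on the event loop and return at once.
    asyncio.ensure_future(io_job(payload))
    # CPU-heavy work: push it to the default executor so the loop stays free.
    loop.run_in_executor(None, cpu_heavy_job, payload)
    return web.Response(status=202)  # acknowledge without waiting for the job

app = web.Application()
app.add_routes([web.post("/jobs", handle_post)])

if __name__ == "__main__":
    web.run_app(app)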

Mule: Thread count under load with doThreading="false"

We have a Mule app with an HTTP inbound endpoint and I'm trying to figure out how to control the thread count under load. As an experiment I added the following configuration:
<core:configuration>
<core:default-threading-profile doThreading="false" maxThreadsActive="500" poolExhaustedAction="RUN"/>
</core:configuration>
Under load I'm seeing the thread count peak at over 1000 threads. I'm not sure why this is the case given the maxThreadsActive setting and doThreading="false". Reading about poolExhaustedAction="RUN", I would expect the listener thread to block while processing inbound requests rather than spawn new ones, and finally to reject the connection if its backlog queue is full. I never see rejected client connections.
Does Mule maintain a separate thread pool for each inbound endpoint in the app (sorry if this is in the documentation)? Even if so, I don't think that explains what I'm seeing.
Any help appreciated. We are running a number of Mule apps in one container and I'd like to control the total number of threads.
Thanks, Alfie.
Clearly the doThreading attribute on default-threading-profile is not enough to control Mule threading as a whole, nor to impose a global cap on the threading behaviour of specific transports. I reckon you're getting 500 threads for the HTTP message receiver pool and 500 for the VM message dispatcher pool.
I strongly suggest reading about tuning Mule: http://www.mulesoft.org/documentation/display/current/Tuning+Performance
My gut feel is that you need to
configure threading on each transport (VM, HTTP), strictly specifying the pool size for receivers and dispatchers,
select flow processing strategies that prevent Mule from spawning new threads (i.e. use synchronous to hog the receiver threads),
select exchange patterns that also prevent Mule from spawning new threads (i.e. use request-response to piggyback the current execution thread).
