In my application, I have a multiple-file-upload AJAX client. I noticed (using a stub file-processing class) that Spring usually opens 6 threads at once, and the rest of the file upload requests are blocked until one of those 6 threads finishes its job; it is then assigned a new request, as in a thread pool.
I haven't done anything specific to reach this behavior. Is this something that Spring does by default behind the scenes?
While uploading, I haven't had any problems browsing the other parts of the application, with no significant performance overhead.
I noticed, however, that one of my "behind the scenes" calls to the server (I poll for new notifications every 20 seconds) gets blocked as well. On the server side, my app calls a Redis-based key-value store, which should always return even if there are no new notifications. The requests to it only start being processed normally after the uploads finish. Any explanation for this kind of blocking?
Edit: I think it has to do with a maximum number of concurrent requests per session.
I believe this type of threading belongs to the servlet container, not to Spring. (Note also that browsers typically cap concurrent connections per host at about 6, which matches the number of parallel uploads you are seeing.)
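If you want to see or tune those limits, they live on the container's connector rather than anywhere in Spring MVC. A minimal sketch, assuming Spring Boot 2.x with embedded Tomcat (maxThreads and acceptCount are standard Tomcat connector attributes; the values are placeholders):

    import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
    import org.springframework.boot.web.server.WebServerFactoryCustomizer;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class ConnectorConfig {

        // Tunes the servlet container's worker pool, not Spring itself:
        // these limits decide how many requests are processed at once.
        @Bean
        public WebServerFactoryCustomizer<TomcatServletWebServerFactory> tomcatCustomizer() {
            return factory -> factory.addConnectorCustomizers(connector -> {
                connector.setProperty("maxThreads", "200");   // worker threads
                connector.setProperty("acceptCount", "100");  // queued connections
            });
        }
    }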
Related
We have our HTTP layer served by Play Framework in Scala. One of our APIs is something of the form:
POST /customer/:id
Requests are sent by our UI team, which calls these APIs from a React front end.
The issue is that, sometimes, the requests are issued in batches, successively one after the other for the same customer ID. When this happens, different threads process these requests, and so our persistence layer (MySQL) reaches an inconsistent state due to differences in the timestamps at which the requests are handled.
Is it possible to configure some sort of thread affinity in Play Scala? What I mean is: can I configure Play to ensure that requests for a particular customer ID are handled by the same thread throughout the life cycle of the application?
A batch means putting several API calls into a single HTTP request. A batch request is a set of commands in one HTTP request, like here: https://developers.facebook.com/docs/graph-api/making-multiple-requests/
You describe it as
The issue is that, sometimes, the requests are issued in batches, successively one after the other for the same customer ID. When this happens, different threads process these requests, and so our persistence layer (MySQL) reaches an inconsistent state due to differences in the timestamps at which the requests are handled.
What you describe is a set of concurrent requests. The Play framework usually works as a stateless server, and I assume you have organized yours as stateless too. There is nothing that binds one request to another, so you can't control the order. Well, you can, if you create a special protocol: an "opening batch request", then request #1, #2, ..., then a "closing batch request". You would need to check that every request arrived and was correct, and you would also need to run some stateful threads and some queues... Akka can help with this, but I am pretty sure you won't want to do it.
This issue is not Play-specific; you will reproduce it in any server. For the general case, see: Is it possible to receive out-of-order responses with HTTP?
You can go either way:
1. "Batch" the command in one request
You need to change the client so it packs the "batched" commands into one request. You also need to change the server so it processes all the commands from the batch one after another.
Example of the requests: https://developers.facebook.com/docs/graph-api/making-multiple-requests/
2. "Pipeline" requests
You need to change the client so it sends the next request only after receiving the response to the previous one.
Example: Is it possible to receive out-of-order responses with HTTP?
"The solution to this is to pipeline Ajax requests, transmitting them serially ... the next request is sent only after the previous one has returned successfully."
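For completeness: if you ever did go down the stateful route mentioned above (queues plus something Akka-like), the core idea would be to pin all requests for one customer ID to a single-threaded lane, so they execute in arrival order. A rough sketch of the idea, in Java rather than Scala for brevity (in Play you would more naturally use an Akka actor per customer; handleRequest is a hypothetical handler):

    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Routes every request for the same customer ID to the same
    // single-threaded executor, so its requests run in arrival order.
    public class KeyedSerializer {

        private final ExecutorService[] lanes;

        public KeyedSerializer(int laneCount) {
            lanes = new ExecutorService[laneCount];
            for (int i = 0; i < laneCount; i++) {
                lanes[i] = Executors.newSingleThreadExecutor();
            }
        }

        public CompletableFuture<Void> submit(String customerId, Runnable handleRequest) {
            int lane = Math.floorMod(customerId.hashCode(), lanes.length);
            return CompletableFuture.runAsync(handleRequest, lanes[lane]);
        }
    }

Note this only serializes requests within one server instance; with several instances behind a load balancer you would still need sticky routing or ordering at the database level.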
I'm investigating what reactive means, and because the difference from the common non-reactive approach is fairly low-level, I'd like to understand what is going on. Let's take Tomcat as the server (I guess it will be different for Netty).
Non-reactive
A connection from the browser is created.
For each request, a thread is taken from the thread pool to process it.
After the thread finishes processing, it returns the result through the connection to the other side.
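Roughly, in code, I picture that non-reactive flow as a plain blocking servlet (a minimal sketch; fetchFromBackend is a placeholder):

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // One pool thread is held for the whole duration of this method:
    // it blocks on the downstream call, then writes the response.
    public class BlockingServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            String data = fetchFromBackend();  // the thread is parked here while waiting
            resp.getWriter().write(data);      // then the same thread writes the response
        }

        private String fetchFromBackend() {
            return "result"; // placeholder for a blocking backend call
        }
    }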
Reactive???
How is this done in Tomcat or Netty? I cannot find any decent article about how Tomcat supports reactive apps and how Netty does it differently (an explanation at the connection, thread, and request level).
What bothers me is how reactive makes the web server non-blocking when you still need to wait for the response. Maybe you can get the first part of the response quicker with reactive, but is that all? I guess the main point of reactiveness is effective thread utilization, and that is what I am asking about.
Your last point, "I guess the main point of reactiveness is effective thread utilization, and that is what I am asking about", is exactly what the reactive approach was designed for.
So how is effective utilization achieved?
Well, as an example, let's say you are requesting data from a server multiple times.
In the typical non-reactive way, you create or use multiple threads (maybe from a thread pool), one for each of your requests, and the job of each thread is only to serve its particular request. The thread takes the request, hands it to the server, waits idle until the data is fetched from the server, and then brings that data back to the client.
Now, in the reactive way, once a request arrives, a thread is allocated for it. If another request comes in, no new thread is created; rather, it is served by the same thread. How?
When the thread hands a request to the server, it does not wait for an immediate response; instead it comes back and serves other requests.
When the server has located the data and it becomes available, an event is raised, and the thread then goes to fetch that data. This is called the event-loop mechanism, because all the work of calling the thread back when data is available is done by raising an event.
There is, of course, added complexity in mapping each response back to its request.
All of this complexity is abstracted away by Spring WebFlux (in Java).
So the whole process becomes non-blocking. And since one thread is enough to serve all the requests, there is no thread switching; we can have one thread per CPU core, thus achieving effective utilization of threads.
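To make that concrete, here is a minimal Spring WebFlux sketch (the controller, base URL, and endpoint names are made up): the handler returns a Mono immediately, and no thread is parked while the downstream call is in flight.

    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;
    import org.springframework.web.reactive.function.client.WebClient;
    import reactor.core.publisher.Mono;

    @RestController
    public class QuoteController {

        private final WebClient client = WebClient.create("http://example.org");

        // Returns immediately with a Mono; the event-loop thread is free
        // while the downstream HTTP call is in flight. When the response
        // arrives, an event resumes the pipeline to deliver the body.
        @GetMapping("/quote")
        public Mono<String> quote() {
            return client.get()
                         .uri("/data")
                         .retrieve()
                         .bodyToMono(String.class);
        }
    }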
Sorry for the novel...
I'm working on a Node.js project where I need to decrypt millions of envelopes in multiple files. All APIs of my application have to run on localhost.
The main API handles client requests to decrypt a batch of files. Each file contains thousands to millions of envelopes that need to be decrypted. Each file is considered a job; these jobs are queued up by the Main API and then run concurrently by forking a new process for each job (I only allow 5 concurrent jobs/forks at one time). In each process, a script runs that goes through and decrypts the file.
This runs relatively quickly, but instead of doing the decryption in the code of each process/script forked by the Main API, I want to hand this work off to another API (call it the Decrypt API) that takes an envelope in the request and sends back the decrypted result in the response.
So I created this API and used 'forky' to cluster it. Then, from my processes, instead of doing the decryption there, I make multiple parallel requests to the Decrypt API, and once I get the responses back I just place the decrypted results in a file.
At first my problem was that I made requests as soon as I got each envelope, without waiting for a request to return before sending the next one. I would basically send "parallel" requests, and then just handle the vote in the callback of each request. This led to what I think were too many outstanding requests at one time, because I was getting an ECONNRESET error and some requests were dropped. So my solution was to allow a maximum of x outstanding requests (I used 10) at any one time. This seemed OK, but then I realized that since I was forking 5 processes from the Main API, each with this new limiting code running concurrently, I could still end up with up to 50 requests at once to the Decrypt API. Also, this method of using two separate microservices/APIs is slower than just having the Main API's forked processes do the decryption themselves. In the Decrypt API I'm also using the Node 'crypto' library, and some of the functions I use are synchronous, so I suspect that under high traffic that's a problem, but I can't avoid those sync methods.
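(For reference, the shape of that limiting code, sketched in Java rather than Node since the idea is language-agnostic; decryptRemote and the other names are hypothetical, and in Node the equivalent is a counter plus a queue of pending envelopes:)

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Semaphore;

    // Caps the number of in-flight calls to the Decrypt API from this process.
    public class OutstandingLimiter {

        private static final Semaphore PERMITS = new Semaphore(10); // max in flight
        private static final ExecutorService POOL = Executors.newCachedThreadPool();

        static void decryptAll(Iterable<byte[]> envelopes) throws InterruptedException {
            for (byte[] envelope : envelopes) {
                PERMITS.acquire();  // blocks once 10 calls are outstanding
                POOL.submit(() -> {
                    try {
                        decryptRemote(envelope);  // hypothetical call to the Decrypt API
                    } finally {
                        PERMITS.release();        // frees a slot for the next envelope
                    }
                });
            }
        }

        private static byte[] decryptRemote(byte[] envelope) {
            return envelope; // placeholder for the real request/response round trip
        }
    }

The catch is exactly the one described above: with 5 forked processes each allowing 10 outstanding requests, the effective limit is 50, so the cap has to be shared or divided across the processes.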
So finally, my question is: what can I do to increase the speed of the Decrypt API under the high traffic I described, and what can I do to avoid these dropped requests?
Forgive me if I sound like a noob, but since these APIs are all running on the same machine and localhost, could this be why this method is slower than just doing decryption in each process?
Thanks!
How can I have a JSP page send an HTTP response to the client and then keep executing?
I have a JSP page that has to execute a long request which generates a report with large amounts of data and saves it to disk. This normally takes minutes to complete. This JSP page is called by an application that I have no control over.
This application has a timeout shorter than 45 seconds, so each request to the JSP page results in a timeout. Therefore, I need the JSP page to send the complete HTTP response to the client as fast as possible (ideally, as soon as the input parameters have been validated) and then generate the report and save it to disk afterward.
Should I open a separate thread to generate the report? That way the response is sent back to the client while the thread is still running. Is there any other way to achieve this?
A JSP is a view, not a controller. It is not the preferred place to perform work.
By definition, the JSP's bytes are the output, so you cannot expect to do synchronous (same-thread) work in scriptlet tags after the end of the JSP. Well, unless you prematurely close the response in a scriptlet... but that is unclean (expect to trash your server logs with error messages...).
The spawned-thread trick only works if you can even start a thread (not all containers allow that). And if you can, you run the risk of attacks: you would need to control how many threads are allowed. That means having a pool in the context and implementing a fail-fast when all threads are busy.
This can take the form of a ThreadPoolExecutor (which you would lazily install in the servlet context), constructed with a BlockingQueue with a small limit, to which you submit() your task while handling the case that it is rejected (failing with a suitable message to the user).
Read the Queuing section of the ThreadPoolExecutor documentation carefully.
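A minimal sketch of such an executor (class and method names are made up; in a real webapp you would lazily install this in the ServletContext, as described above):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.RejectedExecutionException;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class ReportPool {

        // Small pool, small bounded queue: when both fill up, submit()
        // throws RejectedExecutionException instead of piling up work.
        private static final ThreadPoolExecutor POOL = new ThreadPoolExecutor(
                2, 4,                        // core and max worker threads
                60, TimeUnit.SECONDS,        // idle keep-alive
                new ArrayBlockingQueue<>(8), // fail fast once 8 tasks are queued
                new ThreadPoolExecutor.AbortPolicy());

        /** Returns false if the pool is saturated, so the page can tell the user. */
        public static boolean tryScheduleReport(Runnable generateAndSave) {
            try {
                POOL.submit(generateAndSave);
                return true;
            } catch (RejectedExecutionException busy) {
                return false; // all workers busy and queue full: ask the user to retry
            }
        }
    }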
Of course, you can toy with a timer (e.g., setInterval()) on the JavaScript side to poll for progress, or to cancel (all of that with a token in the session for the running task, of course).
My application queries 7 marketplaces in a row using the Indy HTTP client. All marketplaces provide a unified request/response interface; that is, the structure of the request/response is the same for all 7 marketplaces.
I submit a GTIN into the MainForm's TEdit box, and the app posts 7 RESTful requests to the marketplaces and receives XML responses from all of them.
My idea is to wrap each request in a separate thread, but I am really concerned about performance. Normally, one request takes 3-5 seconds.
Each thread is created in a for statement; it initializes a TIdHTTP object, makes a request, gets an XML response, parses it, and ships it back to the MainForm.
When the job is done, each thread needs to be terminated (or paused?).
If the thread terminates completely, then it must perform the same initialization routine on the next request. I find that relatively slow, given 7 thread initializations at a time.
However, if the thread is merely paused, it resides in memory with all its factories initialized, ready to accept the next request.
How do I leave the thread operationally finished but still completely initialized? I assume that if the TIdHTTP and XML-parsing objects stay alive in a paused thread, they will act much faster on the next request. Does this make sense?
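For reference, the pattern being described (threads that stay alive and keep their initialized objects between jobs) looks roughly like this, sketched in Java rather than Delphi; ThreadLocal plays the role of the per-thread TIdHTTP and parser instances, and fetchAndParse is hypothetical. In Delphi the analogue is a TThread whose Execute loops on a job queue instead of returning.

    import java.net.http.HttpClient;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class MarketplacePool {

        // 7 long-lived worker threads: created once, parked between jobs.
        private static final ExecutorService POOL = Executors.newFixedThreadPool(7);

        // One pre-initialized client per worker thread, reused across requests
        // (the analogue of keeping TIdHTTP and the XML parser alive per thread).
        private static final ThreadLocal<HttpClient> CLIENT =
                ThreadLocal.withInitial(HttpClient::newHttpClient);

        public static Future<String> query(String marketplaceUrl, String gtin) {
            return POOL.submit(() -> fetchAndParse(CLIENT.get(), marketplaceUrl, gtin));
        }

        private static String fetchAndParse(HttpClient client, String url, String gtin) {
            return ""; // placeholder: perform the request and parse the XML here
        }
    }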