Multiple threads are generated for one API request using Swagger in Spring Boot - multithreading

I have one GET API that handles a long-running process. When I hit it from Swagger, I see in Kibana that three threads are created, and as a result the request changes the database and publishes the same messages three times. I also have some Feign client calls in between.
Note: there is no multithreading code anywhere in the flow, and Swagger gives me a timeout.
I expect the process to run only once.

Related

Play Framework Scala thread affinity

We have our HTTP layer served by Play Framework in Scala. One of our APIs is something of the form:
POST /customer/:id
Requests are sent by our UI, which calls these APIs through a React framework.
The issue is that, sometimes, the requests are issued in batches, successively one after the other for the same customer ID. When this happens, different threads process these requests, and so our persistence layer (MySQL) reaches an inconsistent state due to differences in the timestamps at which these requests are handled.
Is it possible to configure some sort of thread affinity in Play Scala? What I mean by that is, can I configure Play to ensure that requests of a particular customer ID are handled by the same thread throughout the life-cycle of the application?
A batch puts several API calls into a single HTTP request.
A batch request is a set of commands in one HTTP request, as described here: https://developers.facebook.com/docs/graph-api/making-multiple-requests/
You describe it as
The issue is that, sometimes, the requests are issued in batches, successively one after the other for the same customer ID. When this happens, different threads process these requests, and so our persistence layer (MySQL) reaches an inconsistent state due to differences in the timestamps at which these requests are handled.
This is a set of concurrent requests. Play Framework usually works as a stateless server, and I assume you have organized yours that way too. Nothing binds one request to another, so you cannot control their order. Well, you could, if you created a special protocol: an "open batch" request, then request #1, #2, ..., then a "close batch" request. You would need to check that every request arrived correctly, and you would need to run some stateful threads and some queues. Akka can help with this, but I am pretty sure you don't want to do it.
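For what it's worth, the "stateful queues" part is less exotic than it sounds if all you need is per-customer ordering. Here is a minimal sketch of the idea (in TypeScript rather than Scala, since the per-key ordering trick is language-agnostic; updateCustomer is a hypothetical stand-in for your persistence call):

// Hypothetical stand-in for the real write to MySQL.
declare function updateCustomer(id: string, patch: object): Promise<void>;

// Per-key serial queue: work for one customer ID runs strictly in
// arrival order, while different customer IDs still run concurrently.
const queues = new Map<string, Promise<unknown>>();

function enqueue<T>(customerId: string, work: () => Promise<T>): Promise<T> {
  const tail = queues.get(customerId) ?? Promise.resolve();
  // Chain after the previous item; swallow its failure so the queue keeps moving.
  const next = tail.catch(() => undefined).then(work);
  queues.set(customerId, next.catch(() => undefined));
  return next;
}

// These two updates for customer "42" never interleave.
enqueue("42", () => updateCustomer("42", { name: "Alice" }));
enqueue("42", () => updateCustomer("42", { plan: "premium" }));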
This issue is not specific to Play Framework; you will reproduce it on any server. For the general case, see: Is it possible to receive out-of-order responses with HTTP?
You can go either way:
1. "Batch" the command in one request
You need to change the client so it jams "batch" requests into one. You also need to change server so it processes all the commands from the batch one after another.
Example of the requests: https://developers.facebook.com/docs/graph-api/making-multiple-requests/
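To make this concrete, here is a minimal client-side sketch in TypeScript of what a batch request could look like, assuming a hypothetical /batch endpoint (and host) that unpacks the commands and executes them in order:

type Command = { method: string; path: string; body?: unknown };

// Pack several commands into one HTTP request; the server controls
// their relative order because they all arrive together.
async function sendBatch(commands: Command[]): Promise<Response> {
  return fetch("https://api.example.com/batch", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ commands }),
  });
}

// Two updates for the same customer travel in a single request.
sendBatch([
  { method: "POST", path: "/customer/42", body: { name: "Alice" } },
  { method: "POST", path: "/customer/42", body: { plan: "premium" } },
]);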
2. "Pipeline" requests
You need to change the client so it sends the next request only after receiving the response to the previous one (see the sketch after the quote below).
Example: Is it possible to receive out-of-order responses with HTTP?
"The solution to this is to pipeline Ajax requests, transmitting them serially ... The next request is sent only after the previous one has returned successfully."
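A minimal sketch of that pipelining idea in TypeScript, again with a hypothetical command shape; each request is sent only after the previous one has completed:

// Send commands strictly one after another; the next request starts
// only once the previous response has arrived.
async function pipeline(updates: Array<{ path: string; body: unknown }>) {
  for (const { path, body } of updates) {
    const res = await fetch(path, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(body),
    });
    if (!res.ok) throw new Error(`Request to ${path} failed: ${res.status}`);
  }
}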

Handling Response on a Worker Process in NodeJs

I am trying to design a service following the Command Query Responsibility Segregation Pattern (CQRS) in NodeJs. To handle segregation, I am taking the following approach:
Create separate workers for querying and executing commands
Expose them using a REST API
The REST API has been designed with ExpressJs. All endpoints starting with 'update', 'create' and 'delete' keywords are treated as commands; all endpoints with 'get' or 'find' are treated as queries.
When a request reaches its designated handler, one of the following occurs:
If it's a command, a response is sent immediately after the task is delegated to a worker process; other services are notified by appropriate events once the master process receives a completion message from the worker.
If it's a query, the response is handled by a designated worker that uses a reference to the database connection, passed to it as an argument, to fetch and send the query result.
For (2) above, I am trying to create a mechanism that somehow "passes" the response object to the worker, which can then complete the request. Can this be done by "cloning" the response object and passing it as plain arguments? If not, what is the preferred way of achieving this?
I think you are better off in (2) passing the query off to a worker process, which returns the result to the master process, which then sends back the response.
First of all, you don't really want to give the worker processes "access" to the outside. They should all be internal workers, managed by the master process.
Second, the Express server's job is to receive requests, do something with them, then return a result. Trying to hand the response object itself off to a worker is over-complicating things.
If you are really worried about your Express server getting overwhelmed with requests, you should consider something like Docker to create a "swarm" of Express instances.
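A minimal sketch of that flow with Node's built-in cluster module, in TypeScript (runQuery is a hypothetical stand-in for the worker's real database access): the response object never leaves the master; only plain serializable data crosses the IPC boundary.

import cluster from "node:cluster";
import express from "express";

// Hypothetical stand-in for the worker's real database access.
async function runQuery(key: string): Promise<unknown> {
  return { key, value: "stubbed" };
}

if (cluster.isPrimary) {
  const worker = cluster.fork();
  const pending = new Map<number, (result: unknown) => void>();
  let nextId = 0;

  // Worker replies arrive here; match each one to its waiting HTTP response.
  worker.on("message", ({ id, result }: { id: number; result: unknown }) => {
    pending.get(id)?.(result);
    pending.delete(id);
  });

  const app = express();
  app.get("/find/:key", (req, res) => {
    const id = nextId++;
    pending.set(id, (result) => res.json(result)); // master sends the response
    worker.send({ id, key: req.params.key });      // only plain data crosses IPC
  });
  app.listen(3000);
} else {
  process.on("message", async ({ id, key }: { id: number; key: string }) => {
    const result = await runQuery(key);
    process.send?.({ id, result }); // completion message back to the master
  });
}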

Nodejs - High Traffic to Clustered Microservices Problems

Sorry for the novel...
I'm working on a Node.js project where I need to decrypt millions of envelopes in multiple files. All APIs of my application have to run on localhost.
The main API handles client requests to decrypt a batch of files. Each file contains thousands to millions of envelopes that need to be decrypted. Each file is considered a job, and these jobs are queued up by the Main API and then run concurrently by forking a new process for each job (I only allow 5 concurrent jobs/forks at one time). In each process, a script runs that goes through the file and decrypts it.
This runs relatively quickly, but instead of doing the decryption in the code of each process/script forked by the Main API, I want to hand this work off to another API (call it the Decrypt API) that takes an envelope in the request and sends back the decrypted result in the response.
So I created this API and used 'forky' to cluster it. Then, from my processes, instead of doing the decryption there, I make multiple parallel requests to the Decrypt API, and once I get the responses back I just place the decrypted results in a file.
At first my problem was that I fired off a request for each envelope without waiting for the previous request to return. I would send "parallel" requests, if you will, and handle the vote in the callback of each request. This led to what I think were too many outstanding requests at one time, because I was getting ECONNRESET errors and some requests were dropped. So my solution was to allow a maximum of x outstanding requests (I used 10) at any one time, to avoid too many concurrent requests. That seemed OK, but then I realized that since I fork 5 processes from the Main API, and each one has this new limiting code but they all run concurrently, I was still hitting the Decrypt API with too many requests at once. Also, this method of using two different microservices/APIs is slower than just having the Main API's forked processes do the decryption themselves. In the Decrypt API I'm also using the node 'crypto' library, and some of the functions I use are synchronous, so I suspect that's a problem under high traffic, but I can't avoid those sync methods.
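For reference, a minimal TypeScript sketch of the kind of limiter described above; mapWithLimit is a hypothetical helper, and postToDecryptApi stands in for the actual request to the Decrypt API:

// Bounded concurrency: at most `limit` tasks are in flight at once;
// a new one starts only when a slot frees up.
async function mapWithLimit<T, R>(
  items: T[],
  limit: number,
  task: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  async function runSlot(): Promise<void> {
    while (next < items.length) {
      const i = next++; // single-threaded event loop, so no race on `next`
      results[i] = await task(items[i]);
    }
  }
  await Promise.all(Array.from({ length: limit }, runSlot));
  return results;
}

// Usage: await mapWithLimit(envelopes, 10, (e) => postToDecryptApi(e));

Note that with 5 forked processes each running its own limit of 10, the Decrypt API can still see up to 50 outstanding requests; the cap has to be enforced in one place, or divided across the processes, to actually bound the load.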
So finally, my question is: what can I do to increase the speed of the Decrypt API under high traffic like I described, and what can I do to avoid these dropped requests?
Forgive me if I sound like a noob, but since these APIs are all running on the same machine on localhost, could that be why this method is slower than just doing the decryption in each process?
Thanks!

How to manage multiple parallel threads in nodejs?

Scenario:
I am using node.js to write a heavy application that listens to tweets using the streaming API and, for every tweet, calls 3-4 REST APIs, then listens to a web socket for about 5-10 minutes, and then calls 2 more REST APIs.
Current Approach:
I have prepared a function for every task, but I am not using callbacks: from the tweet handler F, I call another function F1, from inside F1 I call F2, and so on...
Problem:
It makes the code super messy, and, more importantly, the concurrent requests overlap and share data with each other. For example, while listening to a web socket I pass an endpoint and auth info:
listenXYZ(endpoint, auth)
But it seems that when two requests hit this function at the same time, the same endpoint is sometimes passed to both.
Why does this happen?
How can I accomplish multithreading with a nice, organized data flow and controlled memory management?
Can I use workers?
Multithreading?
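As a side note, Node runs your JavaScript on a single thread, so two requests cannot literally execute listenXYZ at the same instant; what usually produces this symptom is shared mutable state captured by asynchronous callbacks. A minimal TypeScript sketch of the difference, reusing the listenXYZ name from above (declared here as a hypothetical):

declare function listenXYZ(endpoint: string, auth: string): void;

// Shared mutable state: by the time the callback runs, a second call
// may have overwritten `endpoint`, so both callbacks see the same value.
let endpoint = "";
function startListening(newEndpoint: string, auth: string) {
  endpoint = newEndpoint;
  setTimeout(() => listenXYZ(endpoint, auth), 0);
}

// Per-call state: each invocation's arguments are captured by its own
// closure, so concurrent calls can no longer leak into each other.
function startListeningSafe(endpoint: string, auth: string) {
  setTimeout(() => listenXYZ(endpoint, auth), 0);
}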

How does Spring handle multiple POST requests?

In my application, I have a multi-file upload AJAX client. I noticed (using a stub file-processing class) that Spring usually opens 6 threads at once, and the remaining file upload requests are blocked until one of those 6 threads finishes its job; it is then assigned a new request, as in a thread pool.
I haven't done anything specific to reach this behavior. Is this something that Spring does by default behind the scenes?
While uploading, I haven't had any problems browsing the other parts of the application, with pretty much no significant overhead in performance.
I noticed, however, that one of my "behind the scenes" calls to the server (I poll for new notifications every 20 secs) gets blocked as well. On the server side, my app calls a Redis-based key-value store, which should always return even if there are no new notifications. Those requests start getting processed normally only after the uploads have finished. Any explanation for this kind of blocking?
Edit: I think it has to do with a maximum number of concurrent requests per session.
I believe this kind of threading belongs to the servlet container, not to Spring. (The number 6 also matches the usual browser limit on concurrent HTTP/1.1 connections per host, which would explain why the polling call to the same host gets blocked while the uploads are in flight.)
