How are request and response processed in ServiceStack? (IIS)

I am using ServiceStack to build RESTful services, though I don't have deep knowledge of it. It works for sending a request and getting a response back. I have a scenario, and my questions depend on it.
Scenario: I send a request from a browser or any other client. Suppose the server takes 3 seconds to process a single request and send the response back to the browser. After one second, I send another request to the server from the same browser (client). I then receive the response to the second request, the one I sent later.
Question 1: What happens behind the scenes with the first request, for which I never received a response?
Question 2: How can I stop the processing of such an orphaned request?
Edit: I am using IIS to host the services.

ServiceStack executes requests concurrently on multithreaded web servers, whether you're hosting on ASP.NET/IIS or self-hosting, so your 2 concurrent requests are running concurrently on different threads. Different scenarios are possible if you're executing async tasks in your Services, in which case the thread is freed up to execute other tasks, but the implementation details are largely irrelevant here.
HTTP web requests are each executed to their end; even when the client connection is lost, your Services are never notified and no Exceptions are raised.
But for long-running Services you can enable ServiceStack's high-level Cancellable Requests feature, which gives clients a way to cancel long-running requests.

Related

Sending a response after jobs have finished processing in Express

So, I have an Express server that accepts a request. The request triggers web scraping that takes 3-4 minutes to finish. I'm using Bull to queue the jobs and processing them as and when they are ready. The challenge is to send the results from the processed jobs back as the response. Is there any way I can achieve this? I'm running the app on Heroku, but Heroku has a request timeout of 30 seconds.
You don't have to wait until the back end has finished. When the request comes in, identify who is requesting it and authenticate the user, then do a res.status(202).send({ message: "text" }).
Even though the response has been sent to the client, you can keep processing.
NOTE: Do not put a return keyword before res.status(...).
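As a minimal sketch of that pattern (assuming an Express app in TypeScript; runScrapeJob is a hypothetical stand-in for the Bull-queued work):

```ts
import express from "express";

const app = express();
app.use(express.json());

app.post("/scrape", (req, res) => {
  // Respond right away so Heroku's 30s timeout never triggers.
  res.status(202).send({ message: "Job accepted, processing started" });

  // Keep working after the response has gone out.
  runScrapeJob(req.body.url).catch((err) => {
    console.error("Scrape job failed:", err);
  });
});

async function runScrapeJob(url: string): Promise<void> {
  // ... the 3-4 minutes of scraping happen here ...
}

app.listen(3000);
```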
The HyperText Transfer Protocol (HTTP) 202 Accepted response status code indicates that the request has been accepted for processing, but the processing has not been completed; in fact, processing may not have started yet. The request might or might not eventually be acted upon, as it might be disallowed when processing actually takes place.
202 is non-committal, meaning that there is no way for the HTTP to later send an asynchronous response indicating the outcome of processing the request. It is intended for cases where another process or server handles the request, or for batch processing.
You always need to send a response promptly because of the timeout. Since your process takes about 3-4 minutes, it is better to send a response immediately mentioning that the request was successfully received and will be processed.
Then, when the task is completed, you can use socket.io or WebSockets to notify the client from the server side, and pass the result along.
The client side can also check continuously whether the job has completed on the server side; this is called polling, and it is required with older browsers that don't support WebSockets. socket.io falls back to polling when the browser doesn't support WebSockets.
Visit socket.io for more information and documentation.
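A rough sketch of that server-side notification with socket.io (the room-per-job scheme and the watch-job / job:done event names are invented for illustration):

```ts
import { createServer } from "http";
import { Server } from "socket.io";

const httpServer = createServer();
const io = new Server(httpServer);

// Clients join a room keyed by the id of the job they are waiting for.
io.on("connection", (socket) => {
  socket.on("watch-job", (jobId: string) => socket.join(jobId));
});

// Called by the worker when a job finishes; pushes the result to the waiters.
export function notifyJobDone(jobId: string, result: unknown): void {
  io.to(jobId).emit("job:done", { jobId, result });
}

httpServer.listen(3000);
```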
The best approach to this problem is the socket.io library. It can send data to the client whenever you want, and it triggers a function on the client side that receives the data. socket.io supports different languages and is really easy to use.
- Create a jobs table in a database, or in persistent storage like Redis.
- Save each job in the table upon request, with a unique id.
- Update the status to "running" on starting the job.
- Send HTTP 202 Accepted.
- On the client, implement a polling script; on the server, implement a job-status route/API. The API accepts a job id, queries the jobs table, and responds with the status (see the sketch after this list).
- When the job finishes, update the jobs table with status "completed"; when the job errors, update it with status "failed", perhaps with a description column to store the cause of the error.
This solution makes your system horizontally scalable and distributed. It also prevents the consequences of unexpected connection drops. The polling interval depends on the average job-completion duration; I would recommend an average interval of 5 seconds.
This can even be improved by storing job-completion progress in the jobs table, so that the client can display a progress bar.
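Here is that minimal sketch of the job-status route, using an in-memory Map as a stand-in for the Redis/database jobs table (route paths and the Job shape are illustrative):

```ts
import express from "express";
import { randomUUID } from "crypto";

type Job = { status: "running" | "completed" | "failed"; description?: string; progress?: number };
const jobs = new Map<string, Job>(); // stand-in for the Redis/DB jobs table

const app = express();
app.use(express.json());

app.post("/jobs", (req, res) => {
  const id = randomUUID();
  jobs.set(id, { status: "running", progress: 0 });
  startJob(id); // hypothetical worker kick-off
  res.status(202).send({ id }); // client polls GET /jobs/:id with this id
});

app.get("/jobs/:id", (req, res) => {
  const job = jobs.get(req.params.id);
  if (!job) return res.status(404).send({ error: "unknown job" });
  res.send(job);
});

function startJob(id: string): void {
  // ... do the work, then:
  // jobs.set(id, { status: "completed", progress: 100 });
}

app.listen(3000);
```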
A request timeout occurs when your connection is idle; different servers implement it in different ways, so the timeout duration differs.
The solution to this timeout problem is to keep the connection open (constant), that is, the connection between client and server should remain open.
For such scenarios use WebSockets, which ensure that after the initial request/response handshake between client and server, the connection stays open.
There are many libraries for implementing realtime connections, e.g. PubNub and socket.io. This is the same technology used for live streaming.
Node.js can handle many concurrent connections, and it is lightweight too, so it won't use many resources.
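On the client side, keeping the connection open and reacting to a server push might look like this (socket.io-client; the server URL and the event names, matching the server sketch above, are placeholders):

```ts
import { io } from "socket.io-client";

// socket.io falls back to HTTP long-polling automatically
// where WebSockets are unavailable.
const socket = io("https://api.example.com");

// Subscribe to updates for one job.
socket.emit("watch-job", "some-job-id");

// React to the server's push when the job finishes.
socket.on("job:done", (payload: { jobId: string; result: unknown }) => {
  console.log(`Job ${payload.jobId} finished`, payload.result);
});
```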

One API call vs multiple

I have a process in the back-end which will take on average 30 to 90 seconds to complete.
Is it better to have a front-end React app make ONE API call and wait for the back-end to finish processing and return the data? Or is it better to have the front-end make multiple calls, say every 2 seconds, to check whether the process is complete and then fetch the result?
Both are valid approaches. You could also report status changes over a WebSocket so there's no need for polling.
If you do want to go the polling route, the general recommendation is to:
- Return 202 Accepted from your long-running process endpoint.
- Also return a Link header with a URL where the status of the process can be read.
- The client can then follow that link and ping it every x seconds (a sketch of this flow follows).
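A sketch of that flow, assuming Express on the server and fetch-based polling on the client (the /process and /status/:id routes and startLongProcess are made up):

```ts
import express from "express";

const app = express();
app.use(express.json());

// Hypothetical kick-off that enqueues the 30-90s job and returns its id.
function startLongProcess(data: unknown): string {
  /* enqueue the work here */
  return "job-123";
}

// Server: accept the work and point the client at a status URL.
app.post("/process", (req, res) => {
  const id = startLongProcess(req.body);
  res.status(202).set("Link", `</status/${id}>; rel="status"`).send();
});

// Client side: follow the Link header and ping it every 2 seconds.
async function waitForResult(statusUrl: string): Promise<unknown> {
  for (;;) {
    const reply = await fetch(statusUrl);
    const body = await reply.json();
    if (body.status === "completed") return body.result;
    await new Promise((r) => setTimeout(r, 2000));
  }
}

app.listen(3000);
```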
I think it's not good to make a single API call and wait 30-90 seconds for a response. Instead, send a response immediately mentioning that the request was successful and will be processed.
You can then use WebSockets or a library like socket.io so that the server can communicate directly with the client once the requested processing is complete.
Making multiple API calls to check whether the server is done, or has any new message, is called polling. It is not very efficient, but it is still required in old browsers which don't support WebSockets. socket.io supports polling automatically in old browsers.
But yes, if you want you can make multiple calls to check whether the server has finished processing; I would still prefer the server to communicate back to the client, as that is better.

socket.io server to relay/proxy some events

I currently have a socket.io server spawned by a Node.js web API server.
The UI runs separately and connects to the API via web socket. This is mostly used for notifications and connectivity status checks.
However, the API also acts as a gateway for several microservices. One of these is responsible for computing the data the UI needs to render some charts. This operation is long-running, and for several reasons the computation only starts when a request is received.
In a nutshell, the UI sends a REST request to the API, and the API currently uses gRPC to send the request on to the microservice. This is bad because it blocks both the API and the UI.
To avoid blocking, the socket server on the API should be able to relay the UI request and the "computation ended" event received from the microservice; this way nothing would be blocked. This could eventually allow the gRPC server on the microservice to be removed.
Is this something achievable with socket.io?
If not, is the only way for the API to spawn a secondary socket connection to the microservice for each one received from the UI?
Is this a bad idea?
I hope this is clear, thanks.
I actually ended up not using socket.io. However, this can still be done with it: if the API spawns a server and has the different services connect as clients, https://socket.io/docs/rooms-and-namespaces/ can be used.
This way messages can be relayed, and even broadcast from the server to both sides in case something happens.
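A sketch of such a relay (socket.io server on the API, with the chart microservice connected as an ordinary socket.io client; all event and room names here are invented):

```ts
import { createServer } from "http";
import { Server } from "socket.io";

const httpServer = createServer();
const io = new Server(httpServer);

io.on("connection", (socket) => {
  // The chart microservice identifies itself and joins its own room.
  socket.on("register-service", () => socket.join("chart-service"));

  // UI sockets ask for a chart; relay the request to the service room.
  socket.on("chart:request", (params: { id: string }) => {
    socket.join(`chart:${params.id}`); // wait in a room keyed by request id
    io.to("chart-service").emit("chart:compute", params);
  });

  // When the service reports completion, relay the result to the waiting UI sockets.
  socket.on("chart:done", (result: { id: string }) => {
    io.to(`chart:${result.id}`).emit("chart:done", result);
  });
});

httpServer.listen(4000);
```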

Web API call: wait for processing by another thread before responding

I have a Windows service that has a self-hosted Web API. There is a single thread outside of the Web API (call it the Main thread) whose sole responsibility is to check (somewhere) whether there is data to be processed, send it downstream to be processed, and then put a response (somewhere) on the success (or failure) of the processing. That part of the design cannot be changed.
What I need to do is accept an HTTP request on the Web API that will put the incoming data somewhere for the Main thread to pick up (I was thinking a ConcurrentQueue), and then wait for that data to be processed before sending an HTTP response.
The part I'm stuck on is how best for the HTTP request thread to wait for the data to be processed before responding. A while loop watching for a response doesn't seem very efficient. My current thought is to set up a message bus where the HTTP request thread subscribes by passing a reference to its ManualResetEvent and then waits on it. The Main thread then publishes to the bus after it's done, which sets the ManualResetEvent, allowing the HTTP request thread to grab the result and send an HTTP response.
Is using ManualResetEvents in this way a good idea? It feels like there should already be a way to handle this in Web API, but I haven't found anything.

grpc complete async java service request/response mapping

A Java service (let's call it the portal) is both a gRPC client and a gRPC server. It serves millions of gRPC clients, each requesting some task/resource. Based on the incoming request, the portal figures out the backend services, talks to one or more of them, and sends the returned response(s) to the originating client. The requirements here are:
- The original millions of clients will have their own timeouts.
- The portal should not block a thread for each of the millions of clients (async). It should also not block a thread for each client's call to the backend services (async). We can use the same thread that received a client call to invoke the backend services.
- If the original client times out, the portal should be able to communicate that to the backend services, or terminate the specific call to the backend services.
- On an error from the backend services, the portal should be able to communicate it back to the specific client whose call failed.
So the questions here are:
- We have to use async unary calls here, correct?
- How does the intermediate server (portal) match the original requests to the responses from the backend services?
- In case of errors on the backend services, how does the intermediate server propagate the error?
- How does the intermediate server propagate the deadlines?
- How does the intermediate server cancel the requests on the backend services if the originating client terminates?
gRPC Java can implement a proxy relatively easily, and using async stubs for such a proxy would be common. When the proxy creates its outgoing RPCs, it can save a reference to the original RPC in the callback of the outgoing RPC. When the outgoing RPC's callback fires, it simply issues the same call on the original RPC. That handles both messages and errors.
Deadline and cancellation propagation are handled automatically by io.grpc.Context.
You may want to reference this grpc-level proxy example (which has not been merged into grpc/grpc-java). It uses ClientCall/ServerCall because that was convenient and because it did not want to parse the messages. It is possible to do the same thing using the StreamObserver API.
The main difficulty in such a proxy is observing flow control; the example I referenced does this. If you use the StreamObserver API, you should cast the StreamObserver passed to the server to ServerCallStreamObserver, and obtain a ClientCallStreamObserver by passing a ClientResponseObserver to the client stub.
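The answer above is specific to grpc-java, where io.grpc.Context does the propagation for you. As a rough illustration of the same forwarding pattern in a different stack, here is a sketch using Node's @grpc/grpc-js: the proxy replies in the outgoing call's callback (pairing each backend response with its originating RPC), forwards the caller's deadline, and cancels the backend call when the original client cancels. The proto file, package, service, and method names are hypothetical:

```ts
import * as grpc from "@grpc/grpc-js";
import * as protoLoader from "@grpc/proto-loader";

// Hypothetical proto: package tasks; service TaskService { rpc GetTask(Req) returns (Rep); }
const pkg: any = grpc.loadPackageDefinition(protoLoader.loadSync("tasks.proto"));

// Client stub for the backend service the portal fans out to.
const backend = new pkg.tasks.TaskService("backend:50051", grpc.credentials.createInsecure());

const server = new grpc.Server();
server.addService(pkg.tasks.TaskService.service, {
  GetTask(call: grpc.ServerUnaryCall<any, any>, respond: grpc.sendUnaryData<any>) {
    // Forward the original deadline so the backend gives up when the client would.
    const outgoing = backend.GetTask(
      call.request,
      { deadline: call.getDeadline() },
      // This callback closes over `respond`, matching the backend's
      // response (or error) to the RPC that originated it.
      (err: grpc.ServiceError | null, reply: any) => respond(err, reply)
    );
    // If the originating client cancels or times out, cancel the backend call too.
    call.on("cancelled", () => outgoing.cancel());
  },
});

server.bindAsync("0.0.0.0:50052", grpc.ServerCredentials.createInsecure(), () => {
  // The server starts accepting calls once bound (recent @grpc/grpc-js versions).
});
```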
