AJAX request errors out with no response - node.js

I have an application with Webix on the UI and Node.js on the server side.
From the UI, if I trigger a long-running AJAX request, e.g. processing 1,000 records, the request errors out after approximately 1.5 minutes (not consistently).
The error object contains no information about the reason for the failure, but since processing a smaller set of records works fine, I suspect a timeout.
In the developer console I see that the request appears as "Stalled" and the response is empty.
Currently I can't drop the request and poll every few seconds to see whether processing has finished; I have to wait for the request to complete, but I'm not sure how to do that, and the Webix forum doesn't seem to have any information on this beyond setting a timeout.
If setting a timeout is the way to go, what happens tomorrow when the request grows to 2,000 records? I don't want to keep increasing the timeout.
Also, if I'm left with no choice, how would I implement the polling? If I drop a request onto the server, other clients may be triggering similar requests. How would I distinguish between requests originating from different clients?
I would really appreciate some help on this.

Related

IIS application HTTP method stops running

I have a web application on an IIS server.
I have a POST method that takes a long time to run (around 30-40 minutes).
After a period of time the application stops running (without any exception).
I set the idle timeout to 0 and it did not help.
What can I do to solve it?
Instead of doing all the work initiated by the request before responding at all:
Receive the request
Put the information in the request in a queue (which you could manage with a database table, ZeroMQ, or whatever else you like)
Respond with a "Request received" message.
That way you respond within seconds, which is acceptable for HTTP.
Then have a separate process monitor the queue and process the data on it (doing the 30-40 minute long job). When the job is complete, notify the user.
You could do this through the browser with a Notification or through a WebSocket or use a completely different mechanism (such as by sending an email to the user who made the request).
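A minimal Express sketch of this accept-and-queue flow, using an in-memory array as a stand-in for the real queue (a database table or ZeroMQ would replace it in production); the route and function names here are illustrative, and the sketch responds 202 though a plain 200 with a message works too:

const express = require('express');
const app = express();
app.use(express.json());

const queue = []; // stand-in for a database table, ZeroMQ, etc.

app.post('/long-job', (req, res) => {
  queue.push(req.body); // enqueue the work described by the request
  res.status(202).send('Request received'); // respond within seconds
});

// Separate monitor: drains the queue and runs the 30-40 minute job.
setInterval(() => {
  const job = queue.shift();
  if (job) processJob(job);
}, 1000);

function processJob(job) {
  // ... the long-running work; when complete, notify the user
  // (Notification, WebSocket, or email, as described above).
}

app.listen(3000);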

What is the correct client reaction to an HTTP 429 when the client is multi-threaded?

The HTTP status code 429 tells the client making the request to back off and retry the request after a period specified in the response's Retry-After header.
In a single-threaded client, it is obvious that the thread getting the 429 should wait as told and then retry. But the RFC explicitly states that
this specification does not define how the origin server identifies the user, nor how it counts requests.
Consequently, in a multi-threaded client, the conservative approach would stop all threads from sending requests until the Retry-After point in time. But:
Many threads may already be past the point where they can note the information from the one rejected thread and will send at least one more request.
The global synchronization between the threads can be a pain to implement and get right.
If the setup runs not only several threads but several clients, potentially on different machines, backing off all of them on one 429 becomes non-trivial.
Does anyone have specific data from the field on how cloud providers' servers actually handle this? Will they get immediately aggravated if I don't globally hold back all threads? Microsoft's advice is:
Wait the number of seconds specified in the Retry-After field.
Retry the request.
If the request fails again with a 429 error code, you are still being throttled. Continue to use the recommended Retry-After delay and retry the request until it succeeds.
It twice says 'the request', not 'any requests' or 'all requests', but this is a legal-style reading I am not confident about.
To be sure this is not an opinion question, let me phrase it as fact-based as possible:
Are there more detailed specifications for cloud APIs (Microsoft, Google, Facebook, Twitter) than the example above that allow me to make an informed decision on whether global back-off is necessary, or whether it suffices to back off only the specific request that got the 429?
Servers know that it's tough to synchronize clients, or to expect programmers to do so. So I doubt there is a penalty unless they get an ocean of requests that don't back off at all after a 429.
Each thread should wait, but each only after being told individually.
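For a single request, honoring Retry-After looks roughly like the following sketch (assuming Node 18+ with the global fetch; the URL and retry cap are placeholders, not from any particular API):

async function fetchWithBackoff(url, maxRetries = 5) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await fetch(url);
    if (res.status !== 429) return res;
    // Honor Retry-After (seconds); fall back to 1s if the header is
    // absent or in HTTP-date form.
    const delaySec = Number(res.headers.get('retry-after')) || 1;
    await new Promise((resolve) => setTimeout(resolve, delaySec * 1000));
  }
  throw new Error('Still throttled after ' + maxRetries + ' retries: ' + url);
}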
A good system would know what its rate limit is and stay within it. One way to implement this is to have a sleepFor variable between requests; the exact production value can be arrived at by trial and error, and the actual pause would be the configured sleep time minus the duration of the previous request.
So suppose a request ends, and say it took x milliseconds. Now, if the configured sleep time is:
0 or less: move immediately to the next request.
1 or more: compute sleepTime - x; if this is less than 1, go to the next request immediately, else sleep for that many milliseconds and then move to the next request (see the sketch below).
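As a sketch of that pacing logic (function and variable names are illustrative):

// Pace requests so that roughly sleepTimeMs elapses between request starts.
async function pacedLoop(doRequest, sleepTimeMs) {
  while (true) {
    const started = Date.now();
    await doRequest();
    const elapsedMs = Date.now() - started; // the "x" above
    const remaining = sleepTimeMs - elapsedMs;
    if (sleepTimeMs > 0 && remaining >= 1) {
      await new Promise((resolve) => setTimeout(resolve, remaining));
    }
    // otherwise move immediately to the next request
  }
}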
Another way would be to record a timeCountStarted at request 1 and count requests over a window of, say, 5 minutes. After every request, check whether the current thread's request count exceeds the configured figure; if yes, the current thread sleeps until the 5 minutes are up before moving on (here 5 minutes is the configurable timePeriod). If after a request the count is not above the set figure but the time elapsed since timeCountStarted is more than 5 minutes, reset timeCountStarted to the current time and the request count to 0.
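A sketch of that windowed-count approach, where maxRequests and windowMs correspond to the configured figure and the 5-minute timePeriod:

function makeWindowLimiter(maxRequests, windowMs) {
  let timeCountStarted = Date.now();
  let count = 0;
  return async function waitForSlot() {
    if (Date.now() - timeCountStarted >= windowMs) {
      timeCountStarted = Date.now(); // window elapsed: reset the counters
      count = 0;
    }
    if (count >= maxRequests) {
      // Over the cap: sleep until the current window is up.
      const remaining = timeCountStarted + windowMs - Date.now();
      await new Promise((resolve) => setTimeout(resolve, Math.max(remaining, 0)));
      timeCountStarted = Date.now();
      count = 0;
    }
    count++;
  };
}

// Usage: const waitForSlot = makeWindowLimiter(100, 5 * 60 * 1000);
// then `await waitForSlot();` before each request.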
What we do is keep these configuration values in a database but cache them at runtime.
We also have a page to invalidate the caches, so we can update the database from an admin page, invalidate the caches, and have the clients pick up the new values on the fly. This helps in tuning the values to stay within API limits while still getting enough jobs done.

Cancel a running query

I have an application where users are running a geospatial query against a mongo database. The query can return many thousands of results (~50k). These results are then streamed to the client over a websocket. However, users can abort a request mid result set and execute a new query. Users will frequently start, abort, and re-start requests on the order of several times per minute. Sometimes they even cancel/restart every couple of seconds.
The question is, when a user aborts a request, how do I cancel the query on the server so it doesn't continue to tie up resources streaming back thousands of unneeded results? I'm currently calling destroy() on the cursor, but it's not clear that this is actually stopping the query from executing on the server.
What's the best practice in this case?
Have you tried this?
db.currentOp()
db.killOp(IDRETURNEDHERE)
This is a good example.
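The same idea from Node.js, as a hedged sketch using the official mongodb driver's admin commands (it assumes the connected user is permitted to run currentOp/killOp):

const { MongoClient } = require('mongodb');

async function killOperation(uri, opid) {
  const client = await MongoClient.connect(uri);
  try {
    const admin = client.db('admin');
    const ops = await admin.command({ currentOp: 1 }); // like db.currentOp()
    console.log(ops.inprog.map((op) => op.opid));      // inspect running ops
    await admin.command({ killOp: 1, op: opid });      // like db.killOp(opid)
  } finally {
    await client.close();
  }
}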
The answer is it depends upon a lot of your implementation details.
If your server is in the middle of streaming results (e.g. still hasn't sent or queued everything) when the server receives some sort of other message that the previous results should be cancelled, then it is possible for you to communicate with that other stream and tell it to stop sending. How exactly you would do that depends entirely upon your code and you would have to show us your code for us to know.
Chances are the db query has long since completed, and what's going on is that the server is in the process of streaming results to the client. If that's the case, then it isn't the db you're looking at; it's the code that streams the response to the client. Since node.js is single-threaded, the only time another request would actually run on the server is while the streaming code is in some async write operation, waiting for it to finish. You would probably have to set some flag uniquely associated with a particular user, and your streaming code would have to check that flag before each chunk of data is sent. If it sees the cancel flag, it can abandon sending the rest of the results.
You could make things more cancellable by explicitly chunking your results (say 500 at a time) and checking for a cancel flag between the sending of each chunk, as sketched below.
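A sketch of that chunk-and-check idea, assuming a per-user cancel flag that is set elsewhere when an abort message arrives over the websocket (all names are illustrative, not from the question's code):

const CHUNK_SIZE = 500;
const cancelled = new Map(); // userId -> boolean, set true on "abort"

async function streamResults(socket, userId, results) {
  cancelled.set(userId, false);
  for (let i = 0; i < results.length; i += CHUNK_SIZE) {
    if (cancelled.get(userId)) return; // abandon the remaining chunks
    socket.send(JSON.stringify(results.slice(i, i + CHUNK_SIZE)));
    // Yield to the event loop so an abort message can be processed.
    await new Promise((resolve) => setImmediate(resolve));
  }
}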
If, on the other hand, all the data has already been buffered up by the TCP layer on the server, then the only way to stop that from being sent is to tear down the webSocket and force the client to reconnect.

Background jobs that run on every request on Heroku and node.js

I have an app that needs to run a very long process (30-60 seconds for each request). After the processing, the result is returned as the response to that request. This works fine locally, but it crashes my Heroku instance.
What I'd like to happen instead is:
User comes on site, request sent to backend
Backend returns immediately, and kicks off another process/task/job that does the processing
When the processing ends, the response is returned to the correct user.
I am not sure what I need for this. Based on an hour of research, it seems like I can use Redis as a queue, and a worker can poll it every x minutes. But what I can't understand is how to figure out which request to send the response to after processing ends.
Is there a sample Express/node.js setup for this? Any pointers are helpful.
Like you found in your research, setting up a worker queue using Redis is a good approach for long running processes. A nice library for this is kue (https://github.com/learnboost/kue).
When it comes to responding to a request with the results of the job, keeping an outstanding request hanging while waiting for a response is not a good way to go about it (and may not work; Heroku kills requests that have been idle for a certain period of time).
What you could do instead is start the background job when the request is made and respond right away with a job ID. The client can then poll the server for the status of the job; when the job is complete, it can fetch the result it needs.
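A minimal Express sketch of that pattern, using an in-memory Map as a stand-in for real job state (Redis/kue would hold it in practice); doTheSlowWork is a hypothetical placeholder for the 30-60 second task:

const express = require('express');
const crypto = require('crypto');
const app = express();
app.use(express.json());

const jobs = new Map(); // jobId -> { status, result }

app.post('/jobs', (req, res) => {
  const jobId = crypto.randomUUID();
  jobs.set(jobId, { status: 'pending', result: null });
  runJob(jobId, req.body); // kick off the work; deliberately not awaited
  res.status(202).json({ jobId }); // respond right away with the job ID
});

app.get('/jobs/:id', (req, res) => {
  const job = jobs.get(req.params.id);
  if (!job) return res.status(404).end();
  res.json(job); // client polls until status === 'done', then reads result
});

async function runJob(jobId, input) {
  const result = await doTheSlowWork(input); // hypothetical long task
  jobs.set(jobId, { status: 'done', result });
}

app.listen(3000);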
Kue (from #mattetre's answer) is not maintained anymore. Kue's GitHub page suggests Bull as a good alternative. It is a fast and reliable Redis-based queue for Node.js.
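A hedged sketch of the same flow with Bull, assuming Redis on localhost; the queue name and doTheSlowWork are illustrative placeholders:

const Queue = require('bull');
const workQueue = new Queue('long-jobs', 'redis://127.0.0.1:6379');

// Producer (in the request handler): enqueue and hand back the job id.
async function enqueue(data) {
  const job = await workQueue.add(data);
  return job.id; // return this to the client for status polling
}

// Worker process: runs the long job; the return value becomes the result.
workQueue.process(async (job) => {
  return doTheSlowWork(job.data); // hypothetical long-running task
});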

How to handle requests that have heavy load?

This is a brainstorming question: I'd like advice on which scenario is the smarter approach for handling heavy lifting on the server end while keeping the UI responsive for the user.
The setup:
My system consists of two services (written in Node): a Frontend Service that listens for requests from the user, and a Background Worker that does the heavy lifting and won't finish within 1-2 seconds (e.g. video conversion, image resizing, gzipping, spidering, etc.). The user is connected to the Frontend Service via WebSockets (and normal POST requests).
Scenario 1:
When a user uploads, say, a video, the Frontend Service only does some simple checks, creates a job in the user's name for the Background Worker to process, and directly responds with status 200. Later on, the Worker sees it has work, does it, and finishes the job. It then finds the socket the user is connected to (if any) and sends a "hey, job finished" message with the data from the video conversion job (URL, length, bitrate, etc.).
Pros I see: quick user feedback of a successful upload (e.g. the progress bar can be hidden).
Cons I see: the user gets a premature "success" response with no data to handle/display, and has to wait until the job finishes anyway.
Scenario 2:
Like Scenario 1, but the Frontend Service doesn't respond with status 200 right away; instead, it subscribes to the created job's "onComplete" event and lets the request dangle until the callback fires and the data can be sent down the pipe to the user.
Pros I see: on success, all the data is already at the user.
Cons I see: depending on the job's weight and the active job count, the user's request could time out.
While writing this question, things are getting clearer to me by the minute (Scenario 1, but with smart success and update events). Regardless, I'd like to hear about other scenarios you use, or further pros/cons of my two!
Thanks for helping me out!
Some additional info: for WebSockets I'm using socket.io, for job creation kue, and for pub/sub Redis.
I just wrote something like this, and I use both approaches for different things. Scenario 1 makes the most sense IMO because it matches reality best, which can then be conveyed most accurately to the user. By first responding with a 200 ("yes, I got the request and created the 'job' like you requested"), you can accurately update the UI to reflect that the request is being dealt with. You can then use the push channel to notify the user of updates such as progress percentage, errors, and success as needed, without the UI 'hanging' (obviously you wouldn't hang the UI in Scenario 2 either, but it's an awkward situation where things are happening and the UI just has to 'guess' that the job is being processed).
Scenario 1 -- but instead of responding with 200 OK, you should respond with 202 Accepted. From Wikipedia:
https://en.wikipedia.org/wiki/List_of_HTTP_status_codes
202 Accepted: The request has been accepted for processing, but the processing has not been completed. The request might or might not eventually be acted upon, as it might be disallowed when processing actually takes place.
This leaves the door open for the possibility of worker errors. You are just saying you accepted the request and are trying to do something with it.
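A sketch combining the 202 response with a socket.io push on completion; the auth/user-tracking wiring and enqueueJob are assumptions for illustration, not from the question:

const express = require('express');
const http = require('http');
const { Server } = require('socket.io');

const app = express();
app.use(express.json());
const server = http.createServer(app);
const io = new Server(server);

const socketsByUser = new Map(); // userId -> socket
io.on('connection', (socket) => {
  const { userId } = socket.handshake.auth; // assumed client auth payload
  socketsByUser.set(userId, socket);
  socket.on('disconnect', () => socketsByUser.delete(userId));
});

app.post('/upload', (req, res) => {
  const jobId = enqueueJob(req.body); // hypothetical: checks + kue job
  res.status(202).json({ jobId });    // accepted; processing not complete
});

// Called by the worker when the job finishes:
function notifyJobFinished(userId, payload) {
  const socket = socketsByUser.get(userId);
  if (socket) socket.emit('job:finished', payload); // "hey, job finished"
}

server.listen(3000);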
