I have an ajax call that invokes a function.
This function takes 5 minutes to complete.
When I run it on my machine, everything is OK.
But when I run it on my web site deployed to Azure, the request returns error 500 after about 3.5 minutes. The work keeps running and completes, though - I can see it in the database.
The response is blank.
Any help?
Thanks!
You can change approach and use web sockets.
5 minutes is a long time to hold a connection open; a lot can happen in 5 minutes.
A different approach would be to return a GUID before you start the process and make a poll request from the client every 10 seconds or so until the process state changes to finished, at which point you can return the result.
Good luck.
We are evaluating Logic Apps for long-running workflows.
Our process is as follows:
Once we receive a request (HTTP request trigger), we call another service with the webhook action, sending a callback URL. The process might take anywhere between 10 and 15 days to complete.
Question
Can the Logic App wait for 10 to 15 days?
What happens if the callback does not happen?
Thanks -Nen
A single HTTP request from Logic Apps will time out after 2 minutes. The default run-time limit for all synchronous actions in a multi-tenant Logic App is 2 minutes.
can the Logic App wait for 10 to 15 days? --> no
what happens if the callback does not happen? --> the action eventually times out; see the webhook action patterns in the links below
check the links below -
calling-long-running-functions-from-logic-apps
Limits and configuration information for Azure Logic Apps
There are two points that need to be made when answering your question.
Firstly, the standard amount of time an HTTP trigger can run for is two minutes (https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-limits-and-config?tabs=azure-portal#run-duration-and-retention-history-limits), but that's when the request/response architecture is synchronous. If you want to fire it in an asynchronous way (like you do), then you need to provide a Response to the calling application prior to the two-minute timeout, like this ..
Secondly, you can see from the above image that a delay had been running for 11 minutes at the time of posting this answer, which is more than the 2-minute restriction that would apply if the Response weren't provided.
I suspect (and would need to confirm, but it would take me 10 days) that a webhook will wait for your full 10 to 15 days, given there is no evidence to show it won't (i.e. the documentation does not explicitly state a shorter limit). I believe it will be bound by the 90-day period that applies to the full length of any multi-tenant Logic App run.
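To illustrate the webhook handshake the answers describe - this is a hedged sketch with invented names (`subscribe`, `complete`), not the Logic Apps runtime itself: the called service stores the callback URL, acknowledges immediately (well inside any timeout), and fires the callback only when the work finishes, possibly days later.

```javascript
// Pending jobs, keyed by a job id, each holding the caller's callback URL.
const pending = new Map();

// Handle the subscribe request from the workflow engine: remember the
// callback URL and acknowledge right away with an HTTP-style 202.
function subscribe(jobId, callbackUrl) {
  pending.set(jobId, callbackUrl);
  return { status: 202, body: "accepted" }; // respond long before any timeout
}

// Called by the worker when the job finally completes (maybe 10-15 days on).
// `post` is injected so the actual transport (an HTTP POST to the callback
// URL) stays out of this sketch.
function complete(jobId, result, post) {
  const url = pending.get(jobId);
  if (!url) return false; // unknown or already-completed job
  pending.delete(jobId);
  post(url, result); // the workflow resumes when this callback lands
  return true;
}
```

The "what if the callback never happens" case is exactly the branch where `complete` is never called: the engine's own run-duration limit is then the only thing that ends the wait.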
I'm using Cloud Run to run my deployment test suite. It takes about 3 minutes to run the suite, and my instance timeout is set to 5 minutes.
I've set up a Cloud Run project that accepts an HTTP request (from my CI provider) triggering the tests to run, and then reports back pass/fail.
Even though the containers are set to handle only 1 concurrent request, they accept a second request after the first test run completes. As the first run took up 3 of the available 5 minutes, the second request times out at 2 minutes.
So, does anyone know of a way to either self-terminate a given instance (preferably from within) or to set the total number of requests an instance will accept before closing itself?
Thank you very much for reading my question. Any help would be greatly appreciated!
You don't have an instance timeout in Cloud Run. The timeout is on the request processing: you set the maximum duration allowed to process a request (up to 3600 seconds). So in your case you shouldn't have this timeout issue, or else I haven't understood your configuration and current issue.
The other part of your question is how to stop an instance: simply exit it! The method differs by language; in Python, for example, exit(0).
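The "total number of requests before closing itself" idea from the question can be sketched in Node as a simple counter that exits the process once the limit is reached, letting the platform spin up a fresh instance. This is an illustrative sketch, not a Cloud Run feature; the `onLimit` hook is injectable so the exit can be stubbed out (and in practice you'd only exit after the in-flight response has been sent).

```javascript
// Count requests handled by this instance and shut the process down after
// a fixed number. Default behaviour exits with code 0; tests inject a stub.
function makeRequestCounter(maxRequests, onLimit = () => process.exit(0)) {
  let handled = 0;
  return function countRequest() {
    handled += 1;
    if (handled >= maxRequests) onLimit(); // e.g. exit after N requests
    return handled;
  };
}
```

Usage would be a call to `countRequest()` at the end of each request handler, after the response is flushed.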
I have a Node.js-based application running as a Google App Engine application. It accesses the database using the node-postgres module. I have noticed the following:
The first request I make from my machine (using Postman) takes longer (around 800 ms - 1.5 seconds). The subsequent requests take much less time (around 200 ms - 350 ms).
I am unable to pinpoint the exact reason for this. It could be due to one of the following:
A new connection is initiated the first time I make a request to the server.
There is some issue with the database fetching using node-postgres (but since the problem occurs only on the first request, this is unlikely).
I am worried about this because the logs show that almost 20% of my requests take around 2 seconds. When I viewed the logs for some of the slow requests, they seemed to be instantiating a new process, which led to the longer wait time.
What can I do to investigate further and resolve this issue?
Your first request takes more time than the others because App Engine standard has a startup time for each new instance. This time is really short, but it exists. On top of that, add the time to set up the connection to the database. This is why you see a longer response time for the first request.
To better understand App Engine startup time, you can read the Best practices for App Engine startup time doc (a little old, but I think still clear). To profile your App Engine application, you can read this public Medium blog post.
After this, you can set up a Stackdriver dashboard to determine whether your 20% of slow requests are due to the startup of a new App Engine instance.
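The "connection set up on the first request" part of the cost can be contained by connecting once at module load and reusing that connection for every request. Below is a hedged sketch of the idea; `connectToDb` is a stand-in for something like node-postgres' `Pool`, not a real API, and the counter exists only to make the single-handshake behaviour visible.

```javascript
// Stand-in for the expensive part: the TCP/TLS handshake to Postgres.
let connectCount = 0;
function connectToDb() {
  connectCount += 1; // pretend this is the slow handshake
  return Promise.resolve({ query: async (sql) => [] });
}

// Create the connection promise ONCE at module load, so only the cold
// start of an instance pays the connection cost - not every request.
const dbReady = connectToDb();

async function handleRequest(sql) {
  const db = await dbReady; // every request reuses the same connection
  return db.query(sql);
}
```

With node-postgres specifically, instantiating a single `Pool` at module scope (rather than per request) achieves the same effect; the remaining first-request latency is then the instance cold start itself.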
I've changed a long-running process on a Node/Express route to use Bull/Redis.
I've pretty much copied this tutorial from the Heroku docs.
The gist of that tutorial: the Express route schedules the job, immediately returns 200 to the client, and the browser long-polls a ping route on Express for the job status. When the client gets a completed status, it displays it in the UI. The worker is a separate file and is run with an additional yarn run worker.js.
Notice the end, where it recommends using Throng for clustering your workers.
I'm using this Bull Dashboard to monitor jobs/queues. The dashboard shows the available workers and their status (Idle when not running, not Idle when running).
I've got the MVP working, but the operation is super slow. The average time to complete is 1 minute 30 seconds, whereas before adding Bull it completed in seconds.
Another strange thing: it seems to take at least 30 seconds for a worker's status to change from Idle to not Idle, so a lot of the latency seems to be waiting for the worker.
Since the new operation is in a separate file (worker.js) and Throng enables clustering, I was expecting it to be super fast, but it's just the opposite.
Does anyone have experience with this? Or pointers to help figure out what is causing it to be so slow?
I have a time-consuming API route, and sometimes I need to run it locally.
I never have any problem when I'm on a fast connection, but when I'm on a slower one the request times out after a couple of minutes, showing the following:
The thing is:
the backend is working properly (it ends after a while, and I'm currently saving everything to a file to get my result)
I turned off the SSL toggle
there's no proxy
last but not least, timeout is set to 0, which should mean infinity.
Any suggestions?
After further investigation (thanks #Mykola Borysyuk), I can confirm the request sends the timeout event after 2 minutes.
server timeout doc
It was enough to add the following line to my route:
req.setTimeout(300000);
setTimeout doc