Increase timeout from Webhook response in API.ai - dialogflow-es

I am trying to make multiple API calls to build the response from my webhook, and together they take more than 5 seconds. I have gone through the API.ai documentation and found that the timeout for an intent request is set to 5 seconds. Is there a way to increase the webhook response timeout in API.ai?

The timeout is not configurable. The interaction with the user is conversational, so the user expects a response in a timely manner. Long delays will confuse users and make them think your app is unresponsive.
If your operation takes longer than the timeout, consider changing the design of your conversation: either have the user come back later, or gather other information from the user while the operation completes.

Someone suggested this on the forum (Jan '17):
What you would probably want is something that starts handling the request asynchronously, plus an intent that can be invoked to check the status of the request; so basically 2 intents/actions.
startprocess intent: the webhook returns the response "I'll start on that", handles the intent/action asynchronously, and when finished stores the results in a DB keyed by sessionId.
statusprocess intent: the webhook checks the status by pulling results from the DB by sessionId; if data is found it returns the results, otherwise it returns "still working on it".
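The two-intent pattern above can be sketched in Node. The function names, the in-memory Map standing in for the results database, and the reply strings are all illustrative assumptions:

```javascript
// In-memory stand-in for the results database, keyed by sessionId.
const db = new Map();

// "startprocess" intent handler: kick off the work and reply immediately.
function startProcess(sessionId, runJob) {
  db.set(sessionId, { status: 'working' });
  // runJob is the long-running task; store the result when it finishes.
  Promise.resolve()
    .then(runJob)
    .then(result => db.set(sessionId, { status: 'done', result }));
  return "I'll start on that.";
}

// "statusprocess" intent handler: poll the db for a finished result.
function checkStatus(sessionId) {
  const job = db.get(sessionId);
  if (job && job.status === 'done') return `Here you go: ${job.result}`;
  return 'Still working on it.';
}
```

The key point is that `startProcess` never awaits the job; it only records that work is in flight and returns within the webhook's 5-second budget.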

Related

What is the best way to have timeout triggers in Node.js? For example, I want to cancel an order if no driver accepts it within a certain time.

I am building an application similar to Uber, so I want timeout callbacks: when a new request comes in, I want to keep track of the order and cancel it after a certain time if no driver accepts it. I have planned to use setTimeout in Node.js for now. Is there a better way to do this?
So I assume the proper solution would be:
Accept the request.
Store it in the DB with the status "pending".
Return the response to the client.
In another process or worker, search for pending requests in the DB and dispatch them to drivers; if any driver accepts a request, change its status to "accepted".
Meanwhile, expose another endpoint that returns the job status; the client polls it for, say, 1 minute, pulling the result every 2 seconds.
Inside the worker, update any request that has passed the threshold to status "canceled".
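The steps above might be sketched like this, with an in-memory Map standing in for the DB and a `sweep()` function standing in for the periodic worker (all names and the pending/accepted/canceled states are assumptions):

```javascript
// In-memory stand-in for the orders table.
const orders = new Map();

// Step 1-2: accept the request and store it as "pending".
function createOrder(id, now = Date.now()) {
  orders.set(id, { status: 'pending', createdAt: now });
}

// Called when a driver takes the order.
function acceptOrder(id) {
  const order = orders.get(id);
  if (order && order.status === 'pending') order.status = 'accepted';
}

// Worker sweep: cancel pending orders older than timeoutMs.
// In production this would run on an interval in a separate worker.
function sweep(timeoutMs, now = Date.now()) {
  for (const order of orders.values()) {
    if (order.status === 'pending' && now - order.createdAt > timeoutMs) {
      order.status = 'canceled';
    }
  }
}
```

Compared with one `setTimeout` per order, a sweep survives process restarts (the state lives in the DB, not in timers) and scales to many concurrent orders.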

Working time of webhook in dialogflow or alternative

I'm writing a bot for myself which, on request, finds torrents and downloads them to my home media center.
I receive an error with my webhook: a request lives only ~5 seconds.
The parsers take 1-10 seconds, and my home server (a Hackberry board) is very slow.
Because of this, about 50% of my requests die.
How can I query and receive an answer after more than 5 seconds?
An action is expected to respond within 5 seconds. This does not necessarily have to be the final answer, but you'll need something to let the user know that your action is still processing.
This could be as simple as giving an intermediary state like, "Okay, I'm going to start. Do you want anything else?", or playing a short MediaResponse as "hold music". Then you can store the state in a short-term, quick-to-access database which is easy to poll and give as a status update when the user asks.
This can be done with followUpEvents. You can trigger any intent from the webhook through a followUpEvent. To solve your problem, maintain states in your web application such as "searching", "found", "downloading", and "downloaded"; the exact states are completely up to you.
Once the initial intent is called, initiate the process on your server, hold for 3-3.5 seconds, and send a followUpEvent to call another intent that does nothing but wait another 3-3.5 seconds while polling your server each second for an updated status. You can keep chaining follow-up intents until you get the desired status from the server.
So if your requests die at 50% with a single intent, they should work fine with two follow-up intents.
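A rough sketch of a webhook handler that either answers or bounces to a follow-up intent. The `followupEventInput` shape follows the Dialogflow ES (v2) webhook response format as I understand it; the event name, the states, and the reply text are assumptions:

```javascript
// Build a webhook response that triggers a follow-up event instead of a
// final answer; Dialogflow matches the intent bound to that event.
function buildFollowUpResponse(eventName, parameters) {
  return {
    followupEventInput: {
      name: eventName,
      parameters,
      languageCode: 'en-US',
    },
  };
}

// Decide what to send back based on the job state held on your server.
function respond(status, attempt) {
  if (status === 'downloaded') {
    return { fulfillmentText: 'Your torrent has finished downloading.' };
  }
  // Not done yet: bounce to a "keep waiting" intent via its event,
  // passing the attempt count so the chain can eventually give up.
  return buildFollowUpResponse('CHECK_STATUS_EVENT', { attempt });
}
```

The handler behind `CHECK_STATUS_EVENT` would wait a few seconds, poll the server, and call `respond` again, chaining as many hops as needed.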

Check the response of a 'Send Event' Logic App action

I have a Logic App that sends an event to a specified Event Hub using the Send Event action.
It seems that regardless of whether or not the event is accepted by the specified Event Hub, the Logic App continues on regardless. Unlike the Azure Functions action, there appears to be no automatically generated StatusCode property available for Send Event action.
Is it possible to check the response from Event Hubs so that I may determine whether or not to halt execution?
Update
After a completed run, it seems that there is a status code returned by Event Hubs, although unusually it seems to be 200, whereas typically when sending events it's 201.
However, when editing the Logic App, there doesn't seem to be any way of accessing that status code in order to check the success/failure of the Send Event action.
You should be able to use @outputs('Send_event')?['statusCode'] to access the status code.

API with Work Queue Design Pattern

I am building an API that is connected to a work queue, and I'm having trouble with the structure. What I'm looking for is a design pattern for a worker queue that is interfaced via an API.
Details:
I'm using a Node.js server and Express to create an API that takes a request and returns JSON. These requests can take a long time to process (very data-intensive), which is why we use a queuing system (RabbitMQ).
So, for example, let's say I send a request to the API that will take 15 minutes to process. The Express API formats the request and puts it in a RabbitMQ (AMQP) queue. The next available worker takes the request off the queue and starts processing it. After it's done (in this case 15 minutes), it saves the data into MongoDB. .... now what .....
My issue is: how do I get the finished data back to the caller of the API? The caller is a completely separate program that contacts the API via something like an Ajax request.
The worker will save the processed data into a database, but I have no way to push it back to the original calling program.
Does anyone have any API with a work queue resources?
Please and thank you.
On the initiating call, you should return to the client a task identifier that will persist with the data all the way to MongoDB.
You can then provide an additional API method for the client to check the task's status. This method should take a single parameter, the task identifier, and check whether a document with that identifier has made it into your MongoDB collection. Return false if it doesn't exist yet, and true when it does.
The client will have to poll the task-status API method repeatedly (but perhaps at a 1-minute interval) until it returns true.

How to handle requests that have heavy load?

This is a brain question asking for advice on which scenario is the smarter approach for heavy lifting on the server end while keeping a responsive UI for the user.
The setup:
My system consists of two services (written in Node): a Frontend Service that listens for requests from the user, and a Background Worker that does the heavy lifting and won't finish within 1-2 seconds (e.g. video conversion, image resizing, gzipping, spidering, etc.). The user is connected to the Frontend Service via WebSockets (and normal POST requests).
Scenario 1:
When a user uploads a video, the Frontend Service only does some simple checks, creates a job in the name of the user for the Background Worker to process, and directly responds with status 200. Later on, the Worker sees it has work, does it, and finishes the job. It then finds the socket the user is connected to (if any) and sends a "hey, job finished" message with the data from the video-conversion job (URL, length, bitrate, etc.).
Pros I see: quick user feedback of a successful upload (e.g. the progress bar can be hidden).
Cons I see: the user gets a premature "success" response with no data to handle/display and needs to wait until the job finishes anyway.
Scenario 2:
Like Scenario 1, except the Frontend Service doesn't respond with a status 200 but instead subscribes to the created job's "onComplete" event and lets the request dangle until the callback fires and the data can be sent down the pipe to the user.
Pros I see: "onSuccess", all the data reaches the user.
Cons I see: depending on the job's weight and the active job count, the user's request could time out.
While writing this question, things are getting clearer to me by the minute (Scenario 1, but with smart success and update events). Regardless, I'd like to hear about other scenarios you use, or further pros/cons of mine!
Thanks for helping me out!
Possibly useful info: for WebSockets I'm using socket.io, for job creation kue, and for pub/sub Redis.
I just wrote something like this, and I use both approaches for different things. Scenario 1 makes the most sense IMO because it matches reality best, which can then be conveyed most accurately to the user. By first responding with a 200 ("Yes, I got the request and created the 'job' like you requested"), you can accurately update the UI to reflect that the request is being dealt with. You can then use the push channel to notify the user of updates such as progress percentage, errors, and success as needed, all without the UI hanging. (Obviously you wouldn't hang the UI in Scenario 2 either, but it's an awkward situation where things are happening and the UI just has to guess that the job is being processed.)
Scenario 1, but instead of responding with 200 OK, you should respond with 202 Accepted. From Wikipedia:
https://en.wikipedia.org/wiki/List_of_HTTP_status_codes
202 Accepted: The request has been accepted for processing, but the processing has not been completed. The request might or might not eventually be acted upon, as it might be disallowed when processing actually takes place.
This leaves the door open for the possibility of worker errors. You are just saying you accepted the request and are trying to do something with it.
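A minimal sketch of the 202 Accepted handshake, using a plain object instead of a real Express response (the field names and the status URL shape are assumptions):

```javascript
// Frontend handler from Scenario 1: create the job, answer 202 right away.
function handleRequest(createJob) {
  const jobId = createJob(); // enqueue for the Background Worker
  return {
    status: 202, // accepted for processing, not yet completed
    body: { jobId, statusUrl: `/jobs/${jobId}` },
  };
}
```

Returning a status URL alongside the 202 gives the client both the push channel and a polling fallback for checking on the job.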
