How long does the Nest client review process take? - nest-api

I requested more clients (from 1,000 to 10,000) for my Nest application 10 days ago.
My request has been "in review" for the past 5 days.
When can I expect the review to be complete?
Thanks

It can take from 5 days up to two weeks. It depends on whether your app has passed the checklist for client review. Also make sure your app follows all the UI/marketing guidelines so the review isn't slowed down and you can get the higher user limit as soon as possible.

Related

How long can a Logic Apps webhook wait?

We are evaluating Logic Apps for long-running workflows.
Our process is as follows:
Once we receive a request (HTTP request trigger), we call another service with the webhook action, sending a callback URL. The process might take anywhere between 10 and 15 days to complete.
Questions
Can the Logic App wait for 10 to 15 days?
What happens if the callback does not happen?
Thanks -Nen
A single HTTP request from Logic Apps will time out after 2 minutes. The default run-duration limit for all synchronous actions in multi-tenant Logic Apps is 2 minutes.
Can the Logic App wait for 10 to 15 days? --> No.
What happens if the callback does not happen? --> See the Action patterns documentation.
Check the links below:
calling-long-running-functions-from-logic-apps
Limits and configuration information for Azure Logic Apps
There are two points that need to be made when answering your question.
Firstly, the standard amount of time that an HTTP trigger can run for is two minutes (https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-limits-and-config?tabs=azure-portal#run-duration-and-retention-history-limits), but that's when the request/response architecture is synchronous. If you want to fire it in an asynchronous way (like you do), then you need to provide a Response to the calling application before the two-minute timeout.
Secondly, in my own test a Delay action had been running for 11 minutes at the time of posting this answer, which is longer than the 2-minute restriction that would apply if the Response hadn't been provided back.
I suspect (and would need to confirm, but it would take me 10 days) that a webhook will work for your full 10 to 15 days, given there is absolutely no evidence to show it doesn't (i.e. the documentation does not explicitly state otherwise). I believe it will stick to the 90-day period that applies to the full length of any multi-tenant Logic App run.
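For what it's worth, the service being called in this pattern usually just holds on to the callback URL that the webhook action sends and POSTs back to it when the work finally finishes, however many days later. Below is a minimal sketch of that receiving side in Python (Flask); the /start-job route, the jobId/callbackUrl field names, and the in-memory store are all assumptions for illustration, not part of the Logic Apps API itself:

```python
from flask import Flask, request
import requests

app = Flask(__name__)
pending_callbacks = {}  # job_id -> callback URL (use durable storage in practice)

@app.route("/start-job", methods=["POST"])
def start_job():
    body = request.get_json()
    job_id = body["jobId"]                            # assumed field name
    pending_callbacks[job_id] = body["callbackUrl"]   # assumed field name
    # ... kick off the long-running (10-15 day) process here ...
    return "", 202                                    # accept immediately

def complete_job(job_id, result):
    """Called by the long-running process whenever it finally finishes."""
    callback_url = pending_callbacks.pop(job_id)
    # POSTing to the stored callback URL is what resumes the waiting Logic App run.
    requests.post(callback_url, json={"status": "done", "result": result})
```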

Allow DialogFlow's response time for web-service-fulfillment to be greater than 5 seconds

According to the Dialogflow docs:
The response must occur within 10 seconds for Google Assistant applications or 5 seconds for all other applications, otherwise the request will time out.
Is there any way we can increase this without going for an API WebClient approach?
I am using the Dialogflow web demo as the web client and need to make a call to a Node service to fetch data from a cloud DB.
The following limitations apply to your response:
The response must occur within 10 seconds for Google Assistant applications or 5 seconds for all other applications, otherwise the request will time out.
The response must be less than or equal to 64 KiB in size.
However, if you exceed that limit you will get:
Webhook call failed. Error: DEADLINE_EXCEEDED
So you must complete your task within 5 seconds. If you are not able to fetch the data within 5 seconds, there is likely something wrong with your infrastructure.
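Purely as an illustration, here is a minimal Python (Flask) sketch of a fulfillment webhook that guards the 5-second budget by putting a hard timeout on its own backend call; the /fulfillment path and the db-lookup URL are placeholders, while queryResult.queryText and fulfillmentText are standard Dialogflow v2 webhook fields:

```python
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)

@app.route("/fulfillment", methods=["POST"])
def fulfillment():
    query = request.get_json()["queryResult"]["queryText"]
    try:
        # Leave headroom: time out the backend call well before Dialogflow does.
        resp = requests.get("https://example.internal/db-lookup",   # placeholder URL
                            params={"q": query}, timeout=3)
        text = resp.json().get("answer", "Sorry, I couldn't find anything.")
    except requests.RequestException:
        text = "That lookup is taking too long, please try again."
    return jsonify({"fulfillmentText": text})
```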

Is API Polling more efficient than WebSockets for infrequent updates?

I am working on building a trading algorithm for cryptocurrencies and have added several different exchanges.
For the last 4 exchanges I added, I use the exchange's API endpoint to retrieve ticker prices for all coins (I make one API call every 5 minutes per exchange).
I started working on implementing Coinbase Pro, which does not have an endpoint to import all ticker prices at once.
Based on my current coding experience I came up with two options...
Option A: make 30 API calls every 5 minutes to import price data.
or
Option B: connect through WebSockets on 30 different channels, update a dictionary with the most recent prices, and write it to SQL every 5 minutes.
My concerns:
Coinbase is very active, and there seem to be multiple updates per second when connected to only 2 WebSockets.
Is it wasteful to use WebSockets in this scenario?
I am not building a high-frequency trading strategy, so minor latency is not a concern. At the same time, I am concerned that due to HTTP hangups/timeouts there could be a scenario where 30 requests take over a minute.
As well, if API polling is the best solution, will I be able to reduce latency using grequests (for async HTTP requests), urllib3 (connection pooling), or httplib (by keeping connections open)?
I apologize if this is a dumb question; I have only a year of coding experience and I cannot seem to find the right solution for this situation on Google/Stack Overflow.
Any advice would be greatly appreciated!
best regards,
Slurpgoose
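As a rough sketch (not an answer from the thread) of what Option A could look like in Python: a single requests.Session, which pools connections via urllib3, fetching each pair from Coinbase Pro's public GET /products/<product-id>/ticker endpoint every 5 minutes. The product list and the SQL step are placeholders:

```python
import time
import requests

BASE = "https://api.pro.coinbase.com"
PRODUCTS = ["BTC-USD", "ETH-USD", "LTC-USD"]  # extend to the ~30 pairs you track

session = requests.Session()  # reuses TCP connections between calls (urllib3 pooling)

def fetch_tickers():
    prices = {}
    for product in PRODUCTS:
        resp = session.get(f"{BASE}/products/{product}/ticker", timeout=5)
        resp.raise_for_status()
        prices[product] = float(resp.json()["price"])
    return prices

while True:
    tickers = fetch_tickers()
    # ... write `tickers` to SQL here ...
    time.sleep(300)  # one polling round every 5 minutes
```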

Webhook working time in Dialogflow, or an alternative

I'm writing a bot for myself which, on request, can find torrents and download them to my home media center.
I'm getting an error with my webhook: the request lives only ~5 seconds.
The parsers take 1-10 seconds, and my home server (a Hackberry board) is very slow.
Because of this, about 50% of my requests die.
How can I send a query and receive an answer after more than 5 seconds?
An action is expected to respond within 5 seconds. This does not necessarily have to be the exact answer, but you'll need to have something to let the user know that your action is still processing.
This could be as simple as giving an intermediary state like, "Okay, I'm going to start. Do you want anything else?", or playing a short MediaResponse as "hold music". Then you can store the state in a short-term and quick to access database which is easy to poll and give as a status update when the user asks.
This can be done simply through follow-up events. You can call any intent through the webhook's follow-up event. So, to solve your problem, you have to maintain states in your web application like "searching", "found", "downloading" and "downloaded"; that part is completely up to you.
Now, once the initial intent is called, you initiate the process on your server, hold for 3-3.5 seconds, and send a follow-up event to call another intent, which does nothing but wait another 3-3.5 seconds while polling your server each second for the updated status. You can keep calling the next follow-up intent until you get the desired status from the server.
So if your request dies at 50% on a single intent, it should work fine with two follow-up intents.
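A rough Python (Flask) sketch of that follow-up-event loop, assuming your own server exposes some way to check the download status (get_download_status below is a placeholder, as is the CHECK_STATUS_AGAIN event name); followupEventInput and session are standard fields of the Dialogflow v2 webhook response/request:

```python
import time
from flask import Flask, request, jsonify

app = Flask(__name__)

def get_download_status(session_id):
    # Placeholder: ask your own server/state store for the torrent's current state.
    return "downloading"

@app.route("/webhook", methods=["POST"])
def webhook():
    session_id = request.get_json()["session"]
    # Poll for ~3.5 seconds so we stay under Dialogflow's 5-second limit.
    deadline = time.time() + 3.5
    while time.time() < deadline:
        if get_download_status(session_id) == "downloaded":
            return jsonify({"fulfillmentText": "Your torrent is ready!"})
        time.sleep(1)
    # Not done yet: fire a follow-up event so another intent (and another
    # 5-second window) takes over and keeps polling.
    return jsonify({"followupEventInput": {"name": "CHECK_STATUS_AGAIN",
                                           "languageCode": "en-US"}})
```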

Azure Bot Service using over 1GB of data transfer out per day. Why? How can I stop that?

I created a QnA bot using the Azure Bot service, and now I'm seeing data transfers out of my subscription of over 1 GB a day! I cannot figure out why, but since it's billable, I'd like to know why and how I can stop it.
The bot isn't being used yet, so no one is sending queries to it. I'm confused how this is happening.
Here's a screenshot of the usage graph for the last hour, as well as a screenshot of the billing for the last few days, showing the sudden jump in use.
Is this normal?
If you add AzureWebJobsDisableHomepage with a value of true to the App settings, the outbound data will stop.
The setting itself is documented here: https://github.com/Azure/azure-webjobs-sdk-script/wiki/Configuration-Settings (although it doesn't provide an explanation for how this setting affects a bot specifically)
The reasoning behind what is happening is a little complex. Azure Functions are not normally "in memory" and available all the time; there is a small spin-up time that is not ideal within a bot. So, apparently, a job is set up with consumption-plan bots to ping it every 10 seconds (and by "ping", I mean retrieve the root of the site). If you open the Log Stream, you'll see an HTTP GET request every 10 seconds. Adding AzureWebJobsDisableHomepage doesn't disable the request, but changes the status of what is returned from "OK" to "NoContent".
This will be added to the Bot Service ARM template soon (so future consumption-plan bots do not automatically accrue these data usages).
