Is polling a folder the same as polling envelopes?

According to the DocuSign API rules and limits (https://developers.docusign.com/docs/esign-rest-api/esign101/rules-and-limits/), you must not poll an envelope more than once every 15 minutes. They provide this API call as an example:
GET /accounts/12345/envelopes/AAA/documents/1
My script is set to look at two folders:
/accounts/12345/folders/AAA
/accounts/12345/folders/BBB
Now both these folders may have the same envelope in each. If I'm polling those two folders in my script every 15 minutes, does that violate the Docusign polling rule since each folder may contain the same envelope?

If you're trying to create an effective polling rate of 7.5 minutes by staggering polls of each folder once every 15 minutes? Yes, that's against the rules.
But your case sounds like polling every 15 minutes where, in some cases outside your control, an envelope may be polled more often than once per 15 minutes. That's fine.
But use a webhook instead, and we'll all be much happier.
See my post on using AWS (free usage tier); we have code examples too.
For your use case, investigate Recipient Connect webhooks, since you're not the sender.
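For reference, a minimal sketch of a Connect listener in Node/Express, assuming JSON notification payloads; the endpoint path, port, and payload handling are illustrative rather than DocuSign-mandated:

```javascript
// Minimal sketch of a webhook listener for DocuSign Connect
// notifications, assuming an Express app and JSON payloads.
const express = require('express');
const app = express();

app.use(express.json({ limit: '5mb' })); // notification payloads can be large

app.post('/docusign/connect', (req, res) => {
  // Acknowledge quickly so DocuSign doesn't queue a retry.
  res.sendStatus(200);

  // Handle the envelope event asynchronously.
  setImmediate(() => {
    const event = req.body; // exact shape depends on your Connect configuration
    console.log('Received event:', event.event);
    // ...update your own datastore here instead of polling folders...
  });
});

app.listen(3000, () => console.log('Listening for Connect notifications'));
```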

How long can a Logic Apps webhook wait?

We are evaluating Logic Apps for long-running workflows.
Our process is as follows:
Once we receive a request (HTTP request trigger), we call another service with the webhook action, sending a callback URL. The process might take anywhere between 10 and 15 days to complete.
Questions
Can the Logic App wait for 10 to 15 days?
What happens if the callback does not happen?
Thanks -Nen
A single HTTP request from Logic Apps will time out after 2 minutes. The default run-duration limit for all synchronous actions in multi-tenant Logic Apps is 2 minutes.
Can the Logic App wait for 10 to 15 days? --> No.
What happens if the callback does not happen? --> See the webhook action patterns in the links below.
Check these links:
calling-long-running-functions-from-logic-apps
Limits and configuration information for Azure Logic Apps
There are two points that need to be made when answering your question.
Firstly, the standard amount of time that an HTTP trigger can run for is two minutes (https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-limits-and-config?tabs=azure-portal#run-duration-and-retention-history-limits), but that limit applies when the request/response architecture is synchronous. If you want to fire it in an asynchronous way (like you do), you need to provide a Response to the calling application prior to the two-minute timeout.
Secondly, as a demonstration: in my test run, a delay action had been running for 11 minutes at the time of posting this answer, which is well past the 2-minute restriction that would apply if the response hadn't been provided back first.
I suspect (and would need to confirm, but it would take me 10 days) that a webhook will wait for your full 10 to 15 days, given there is no evidence to show it doesn't (i.e. the documentation does not explicitly state a shorter limit). I believe it will stick to the 90-day period that applies to the full run of any multi-tenant Logic App.
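To make the asynchronous pattern concrete, here is a hedged sketch of the called service's side: accept the callback URL, respond immediately, and invoke the callback when the work finally finishes. It assumes Node 18+ (for the global fetch) and Express; the route and the `callbackUrl` field name are illustrative:

```javascript
// Sketch of the async (webhook) pattern from the called service's
// point of view. Assumes Node 18+ (global fetch) and Express.
const express = require('express');
const app = express();
app.use(express.json());

app.post('/start-long-job', (req, res) => {
  const { callbackUrl } = req.body; // provided by the Logic App webhook action

  // Respond immediately so the caller isn't held to the synchronous
  // 2-minute limit.
  res.status(202).json({ status: 'accepted' });

  // Do the work, then invoke the callback whenever it completes.
  doLongRunningWork().then((result) =>
    fetch(callbackUrl, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(result),
    })
  );
});

async function doLongRunningWork() {
  // Placeholder for work that may take days in the real scenario.
  return { status: 'completed' };
}

app.listen(3000);
```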

How to handle multiple API Requests

I am working with the Google Admin SDK to create Google Groups for my organization. I can't add members to a group while creating it; ideally, when I create a new group, I'd like to add roughly 60 members. In addition, the ability to add members in bulk after the group is created (a batch request) was deprecated in August 2020. Right now, after I create a group, I need to make a web request to the API to add each member of the group individually (about 60 members).
I am using Node.js and Express. Is there a good way to handle 60 web requests to an API? I don't know how taxing this will be on my server. If anyone has any resources to share where I can learn about the impact this would have on a Node.js server, that would be great.
Fortunately, these groups aren't created often, maybe 15 a month.
One idea I have is to offload the work to something like a cloud function, so my node server makes one request to the cloud function, and the cloud function then makes all the additional requests to add members to the group. I'm not 100% sure this is necessary, and I'm curious about other approaches.
Limits and Quotas
Note that adding group members may take 10 minutes to propagate.
The rate limit for the Directory API is 3,000 queries per 100 seconds per IP address, which works out to around 30 per second. 60 requests is not a large number, but if you try to send them all within a few milliseconds, the system may extrapolate the rate and deem it over the limit. I wouldn't expect so, but it's best to test it on your end with your own system and connection.
Exponential Backoff
If you do need to make many requests, this is the method Google recommends. It involves repeating a failed request while exponentially increasing the wait between attempts, up to 16 seconds. You can always implement a longer wait before retrying; it's only 60 requests, after all.
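A minimal sketch of that retry loop, where `insertMember` is an illustrative stand-in for whatever function actually performs the members.insert call:

```javascript
// Exponential backoff following Google's guidance: retry on failure,
// doubling the wait (plus jitter) until a cap is reached.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function withBackoff(fn, maxWaitMs = 16000) {
  let waitMs = 1000;
  for (;;) {
    try {
      return await fn();
    } catch (err) {
      if (waitMs > maxWaitMs) throw err; // give up past the cap
      await sleep(waitMs + Math.random() * 1000); // wait + jitter
      waitMs *= 2; // 1s, 2s, 4s, 8s, 16s ...
    }
  }
}

// Usage: add 60 members one at a time, retrying each with backoff.
async function addMembers(insertMember, emails) {
  for (const email of emails) {
    await withBackoff(() => insertMember(email));
  }
}
```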
Batch Requests
The previously mentioned methods should work without issue for you; since there are only 60 requests to make, it won't put any real stress on the system. That said, the most performant way to handle many requests to the Directory API is to use a batch request. This allows you to bundle up all your member requests into one large batch of up to 1,000 calls. This will also give you a nice cushion in case you need to increase your requests in future!
EDIT - I am sorry, I missed that you mentioned that batching is deprecated. Only global batching is deprecated. If you send a batch request to a specific API's batch endpoint, batching is still supported. What you can no longer do is send a single batch request spanning different APIs, like Directory and Sheets in one.
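Here is a hedged sketch of what an API-specific batch request could look like with plain HTTP, bundling the members.insert calls into one multipart/mixed request. The batch endpoint and payload framing follow Google's batching documentation, but verify both against the current docs before relying on this:

```javascript
// Batch many members.insert calls into a single HTTP request against
// the Directory API's own batch endpoint. Assumes Node 18+ (global fetch).
async function addMembersBatch(accessToken, groupKey, emails) {
  const boundary = 'batch_boundary';
  const parts = emails.map((email, i) =>
    [
      `--${boundary}`,
      'Content-Type: application/http',
      `Content-ID: ${i + 1}`,
      '',
      `POST /admin/directory/v1/groups/${groupKey}/members HTTP/1.1`,
      'Content-Type: application/json',
      '',
      JSON.stringify({ email, role: 'MEMBER' }),
      '',
    ].join('\r\n')
  );
  const body = parts.join('\r\n') + `\r\n--${boundary}--`;

  const res = await fetch('https://www.googleapis.com/batch/admin/directory_v1', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${accessToken}`,
      'Content-Type': `multipart/mixed; boundary=${boundary}`,
    },
    body,
  });
  return res.text(); // multipart/mixed response, one part per call
}
```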
References
Limits and Quotas
Exponential Backoff
Batch Requests

Schedule Nodemailer email based on info in database

I am creating an application that stores events and sends reminder emails one hour before each event to the people who signed up (the time of each event is stored in the database). At first I was thinking about using cron jobs to schedule these emails, but now I am not sure that will work. Is there another node module that will allow me to implement the reminder email functionality?
If you have Redis available to back it, you might look at something like bull.
From the readme:
Minimal CPU usage due to a polling-free design.
Robust design based on Redis.
Delayed jobs.
Schedule and repeat jobs according to a cron specification.
Rate limiter for jobs.
Retries.
Priority.
Concurrency.
Pause/resume—globally or locally.
Multiple job types per queue.
Threaded (sandboxed) processing functions.
Automatic recovery from process crashes.
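As a hedged illustration of how this could fit the reminder use case: a minimal sketch assuming a local Redis instance, where `sendReminderEmail` is an illustrative stand-in for your Nodemailer call:

```javascript
const Queue = require('bull');

const reminderQueue = new Queue('event-reminders', 'redis://127.0.0.1:6379');

// Worker: runs when a delayed job becomes due.
reminderQueue.process(async (job) => {
  const { email, eventName } = job.data;
  await sendReminderEmail(email, eventName);
});

// When an event is stored, enqueue a job delayed until 1 hour before it.
function scheduleReminder(email, eventName, eventTime) {
  const delay = eventTime.getTime() - Date.now() - 60 * 60 * 1000;
  return reminderQueue.add({ email, eventName }, { delay: Math.max(delay, 0) });
}

async function sendReminderEmail(email, eventName) {
  // Replace with your actual Nodemailer transport and sendMail call.
}
```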
You can give node-schedule a try. It uses cron underneath.
At a regular interval, you can check whether there is an upcoming event and send the reminder to the appropriate persons.
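A rough sketch of that interval-check approach with node-schedule and Nodemailer, where `findEventsStartingSoon` is a stand-in for your own database query and the SMTP settings are placeholders:

```javascript
const schedule = require('node-schedule');
const nodemailer = require('nodemailer');

const transporter = nodemailer.createTransport({
  host: 'smtp.example.com', // placeholder SMTP settings
  port: 587,
  auth: { user: 'user', pass: 'pass' },
});

// Every 5 minutes, look up events starting in roughly an hour and
// email their attendees.
schedule.scheduleJob('*/5 * * * *', async () => {
  const events = await findEventsStartingSoon(60); // minutes ahead
  for (const event of events) {
    await transporter.sendMail({
      from: 'reminders@example.com',
      to: event.attendeeEmails.join(','),
      subject: `Reminder: ${event.name} starts in 1 hour`,
      text: `${event.name} starts at ${event.time}.`,
    });
  }
});

async function findEventsStartingSoon(minutesAhead) {
  // Stand-in for your database query: return events whose start time
  // is about `minutesAhead` minutes away and not yet reminded.
  return [];
}
```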

What are the Docusign API resource/rate limits? Are they active only in production?

I found this document explaining the resource/rate limits for the DocuSign API: https://developers.docusign.com/esign-rest-api/guides/resource-limits However, I didn't get any errors related to resource limits during development and testing. Are these only active in the production environment? Is there a way to test these limits during development to make sure the application will work correctly in production? Is this document valid/up to date?
Update (I also want to expand my question here)
So there is only ONE TYPE of limit? 1,000 calls per hour and that's it? Or do I also need to wait 15 minutes between requests to the same URL?
If the second type of limitation exists (multiple calls to the same URL within an interval of 15 minutes), does it apply only to GET requests? Can I still create/update envelopes multiple times in 15 minutes?
Also, if the second type of limit exists, can I test it in the sandbox environment somehow?
The limits are also active in the sandbox system.
Not all API methods are metered (yet).
To test, just make a lot of calls and you'll see that the limits are applied. E.g., make 1,000 status calls in an hour, or create 1,000 envelopes, and you'll be throttled.
Added
Re: only one type of limit?
Correct. Calls per hour is the only hard limit at this time. If 1,000 calls per hour is not enough for your application in general, or not enough for a specific user of your application, then there's a process for increasing the limit.
Re: 15 minute per status call per envelope.
This is the polling limit. An application is not well behaved if it polls DocuSign more than once every 15 minutes per envelope. In other words, you can poll for status on envelope A once per 15 minutes and also poll once every 15 minutes about envelope B.
The polling limit is monitored during your application's test as part of the Go Live process. It is also soft-monitored once the app is in production. In the future, the monitoring of polling for production apps will become more automated.
If you have a lot of envelopes that you're polling on then you might also run into the 1,000 calls per hour limit.
But there's no need to run into issues with polling: don't poll! Instead set up a webhook via DocuSign Connect or eventNotification and we'll call you.
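As a hedged sketch, setting up an eventNotification when creating an envelope could look like the following; the field names follow the eSignature REST API envelope definition, but the URL and the chosen events are illustrative:

```javascript
// Envelope definition with an eventNotification block, so DocuSign
// calls your endpoint on status changes instead of you polling.
const envelopeDefinition = {
  emailSubject: 'Please sign this document',
  status: 'sent',
  // ...documents and recipients go here...
  eventNotification: {
    url: 'https://example.com/docusign/webhook', // your listener endpoint
    requireAcknowledgment: 'true',
    loggingEnabled: 'true',
    envelopeEvents: [
      { envelopeEventStatusCode: 'Completed' },
      { envelopeEventStatusCode: 'Declined' },
      { envelopeEventStatusCode: 'Voided' },
    ],
  },
};
```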
Re: limits for creating/updating an envelope and other methods
Only the (default) 1,000 calls per hour affects non-polling methods.
E.g., asking for the status of an envelope's recipients, field values, general status, etc., over and over again is polling. Creating/updating envelopes can be done as often as you want (up to the default of 1,000 per hour).
If you want to create more than 1,000 envelopes per hour, we'll be happy to accommodate you. (And many of our larger customers do exactly that.)
The main issue that we're concerned with is unnecessary polling.
There can be other unnecessary calls which we'd also prefer to not have. For example, the OAuth:getUser call is only needed once per user login. It shouldn't be repeated more often than that since the information doesn't change.

Instagram real-time API POST rate

I'm building an application using tag subscriptions in the real-time API and have a question related to capacity planning. We may have a large number of users posting to a subscribed hashtag at once, so the question is how often will the API actually POST to our subscription processing endpoint? E.g., if 100 users post to #testhashtag within a second or two, will I receive 100 POSTs or does the API batch those together as one update? A related question: is there a maximum rate at which POSTs can be sent (e.g., one per second or one per ten seconds, etc.)?
The Instagram API seems to lack detailed information about both how many updates are sent and what the rate limits are. From the API docs:
Limits
Be nice. If you're sending too many requests too quickly, we'll send back a 503 error code (server unavailable).
You are limited to 5000 requests per hour per access_token or client_id overall. Practically, this means you should (when possible) authenticate users so that limits are well outside the reach of a given user.
In other words, you'll need to check for a 503 and throttle your application accordingly. I've seen no information on how long they might block you, but it's best to avoid that completely. I would advise you to manage this by placing a rate-limiting mechanism on your own code, such as pushing your API requests through a queue with rate control. That will also give you the benefit of a retry if you're throttled, so you won't lose any of the updates.
Moreover, a mechanism such as a queue in the case of real-time updates is further relevant because of the following from the API docs:
You should build your system to accept multiple update objects per payload - though often there will be only one included. Also, you should acknowledge the POST within a 2 second timeout--if you need to do more processing of the received information, you can do so in an asynchronous task.
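A minimal sketch of that acknowledge-fast pattern in Express; the in-memory queue and `fetchMediaForUpdate` helper are illustrative stand-ins for a durable queue and your actual calls back to Instagram:

```javascript
const express = require('express');
const app = express();
app.use(express.json());

const pending = [];

app.post('/instagram/updates', (req, res) => {
  // Payloads may contain multiple update objects; accept one or many.
  pending.push(...[].concat(req.body));
  res.sendStatus(200); // acknowledge within the 2-second window
});

// Drain the queue at a fixed rate to stay under the API limits.
setInterval(async () => {
  const update = pending.shift();
  if (update) await fetchMediaForUpdate(update);
}, 1000);

async function fetchMediaForUpdate(update) {
  // Stand-in: call back to Instagram here to fetch the new media.
}

app.listen(3000);
```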
Regarding the number of updates, the API can send you one update or many. The problem is that you can absolutely murder your API quota, because I don't think you can batch calls to specific media items, at least not using the official Python or Ruby clients or the API console, as far as I have seen.
This means that if you receive 500 updates, whether as one request to your server or split across many, it won't matter: either way, you need to go and fetch these items. From what I observed in a real application, these fetches seemed to count against our quota; however, the quota itself seemed to be consumed erratically. That is, sometimes we saw no calls consumed at all; other times the available calls dropped by far more than we actually made. My advice is to be conservative and treat the 5,000 as a best guess rather than an absolute. You can check the remaining calls by parsing one of the headers they send back.
Use common sense, don't be stupid, and use a rate-limiting mechanism; it should keep you safe and has the benefit of handling failures caused by outages (these happen more often than you may think), network hiccups, and accidental rate limiting. You could try to be tricky and use different API keys in a pooling mechanism, but this is likely a violation of the TOS, and if they are doing anything by IP, you'd have to split the load across different machines with different IPs.
My final advice would be to restructure your application so it doesn't rely completely on the subscription mechanism. It's less than reliable and very expensive API-wise. It's only truly useful if you need to do something in your app that doesn't require calling back to Instagram, your number of items is small, or you can filter out the majority of items to avoid calling back to Instagram except when a specific business rule is matched.
Instead, you can do things like query the tag or the user (e.g., recent media) and scale it out that way. Normally this allows you to grab 100 items with 1 request rather than 100 items with 100 requests. If you really want to be clever, you could merge the subscription notifications asynchronously and combine similar ones into a single batched request, bucketing duplicates by shared characteristics such as the tag. Sort of like a map/reduce, but on a small data set; see the sketch below. You could of course run an actual map/reduce on your own data from time to time as another way of keeping things asynchronous. Again, be careful not to thrash Instagram; use batching to issue your calls in a way that's useful to your app.
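A small sketch of that merge-and-batch idea: buffer the incoming notifications briefly, deduplicate them per tag, and issue one recent-media query per tag. `fetchRecentMediaForTag` is an illustrative stand-in for the real API call:

```javascript
// Buffer notifications, then reduce to one query per tag instead of
// one query per notification.
const buffer = [];

function onNotification(update) {
  buffer.push(update); // e.g. { object: 'tag', object_id: 'testhashtag' }
}

setInterval(() => {
  const batch = buffer.splice(0); // take everything buffered so far
  const tags = new Set(batch.map((u) => u.object_id)); // dedupe by tag
  for (const tag of tags) {
    fetchRecentMediaForTag(tag); // one request returns many items at once
  }
}, 5000);

async function fetchRecentMediaForTag(tag) {
  // Stand-in: query the tag's recent media in a single request.
}
```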
Hope that helps.
