Does it mean 5,000 per client_id per hour OR per IP per hour?
It is not per IP! You can check that yourself
From a terminal:
curl -i "https://api.foursquare.com/v2/venues/search?ll=40.7,-74&client_id=<client_id_1>&client_secret=<client_secret_1>&v=<YYYYMMDD>"
and then use another client_id/secret from the same machine (same IP):
curl -i "https://api.foursquare.com/v2/venues/search?ll=40.7,-74&client_id=<client_id_2>&client_secret=<client_secret_2>&v=<YYYYMMDD>"
Now look at the X-RateLimit-Limit and X-RateLimit-Remaining headers in both responses and you can see that the two applications do not share the same limit!
Yes, this means that each application clientId can make 5,000 userless requests per hour. If you need more requests per hour, see Foursquare's rate limit guidelines.
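The header check from the curl examples can also be done in Python. This is a sketch, not Foursquare's official client: the credential names (CLIENT_ID_1 etc.) and the `v=20230101` version date are placeholders you would replace with your own values.

```python
# Sketch: check whether two client_id/secret pairs share one rate-limit bucket.
# CLIENT_ID_1, CLIENT_SECRET_1, etc. are placeholders, not real credentials.
import urllib.request


def rate_limit_info(headers):
    """Pull the Foursquare rate-limit headers out of a response's headers."""
    return {
        "limit": int(headers.get("X-RateLimit-Limit", -1)),
        "remaining": int(headers.get("X-RateLimit-Remaining", -1)),
    }


def probe(client_id, client_secret):
    url = (
        "https://api.foursquare.com/v2/venues/search"
        f"?ll=40.7,-74&client_id={client_id}"
        f"&client_secret={client_secret}&v=20230101"
    )
    with urllib.request.urlopen(url) as resp:
        return rate_limit_info(resp.headers)


# With two different credential pairs you should see two independent
# "remaining" counters decrement:
# print(probe(CLIENT_ID_1, CLIENT_SECRET_1))
# print(probe(CLIENT_ID_2, CLIENT_SECRET_2))
```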
Related
・What does QPS mean on this page? (https://developer.foursquare.com/docs/places-api/getting-started/#choose-your-account-tier)
・How many requests can I make in one second?
I think that Personal Tier accounts can make a maximum of 2 requests per second to endpoints,
but no error occurred when I tried to make more than 2 requests per second.
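Even if the server doesn't reject requests above the documented QPS, it's safer to throttle on the client side. Here is a minimal sketch of a throttle pinned to the 2-requests-per-second figure from the tier docs; the `Throttle` class and `fetch` call are illustrative, not part of any Foursquare SDK.

```python
# Sketch of a client-side throttle for a 2-requests-per-second budget.
# Whether the server strictly enforces the limit is unclear (the question
# above reports no errors beyond 2 rps), but staying under it is safer.
import time


class Throttle:
    def __init__(self, max_per_second):
        self.min_interval = 1.0 / max_per_second
        self.last = 0.0

    def wait(self):
        """Block until at least min_interval has passed since the last call."""
        now = time.monotonic()
        sleep_for = self.last + self.min_interval - now
        if sleep_for > 0:
            time.sleep(sleep_for)
        self.last = time.monotonic()


throttle = Throttle(2)  # stay within 2 requests/second
# for url in urls:
#     throttle.wait()
#     fetch(url)  # your actual HTTP call
```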
I am doing a project where I need to send device parameters to a server. I will be using a Raspberry Pi for that and the Flask framework.
1. I want to know whether there is any limit on HTTPS POST requests per second. I will be using PythonAnywhere for the server side, along with their SQL database.
Initially, my objective was to send data over HTTPS while the device is in sleep mode. When the device (e.g. a car) wakes up, I wanted to upgrade the HTTPS connection to a WebSocket and transmit data in real time. I later learned that PythonAnywhere doesn't support WebSockets.
Apart from answering the first question, can anyone shed some light on the second part? I could simply increase the number of HTTPS requests when the device is awake (e.g. 1 per 60 minutes in sleep mode and 6 per 60 seconds when awake), but over the wake period the per-request overhead would be unnecessary data consumption; a persistent channel would suit the wake period better.
PythonAnywhere developer here: from the server side, if you're running on our platform, there's no hard limit on the number of requests you can handle beyond the amount of time your Flask server takes to process each request. In a free account you would have one worker process handling all of the requests, each one in turn, so if it takes (say) 0.2 seconds to handle a request, your theoretical maximum throughput would be five requests a second. A paid "Hacker" plan would have two worker processes, and they would both be handling requests, so that would get you up to ten a second. And you could customize a paid plan and get more worker processes to increase that.
I don't know whether there would be any limits on the RPi side; perhaps someone else will be able to help with that.
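The throughput arithmetic in the answer above can be sketched as a one-liner; the worker counts and 0.2 s handling time are just the example figures from the answer, not guarantees.

```python
# Back-of-envelope throughput: one worker handles requests serially, so
# max requests/second = workers / seconds-per-request.
def max_rps(workers, seconds_per_request):
    return workers / seconds_per_request


print(round(max_rps(1, 0.2)))  # free account, one worker
print(round(max_rps(2, 0.2)))  # "Hacker" plan, two workers
```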
I found this document explaining the resource/rate limits for the DocuSign API: https://developers.docusign.com/esign-rest-api/guides/resource-limits However, I didn't get any errors related to resource limits during development and testing. Are these limits only active in the production environment? Is there a way to test these limits during development to make sure the application will work correctly in production? Is this document valid/up to date?
Update (I just also want to expand my question here too)
So there is only ONE TYPE of limit? 1,000 calls per hour and that's it? Or do I also need to wait 15 minutes between requests to the same URL?
If the second type of limitation exists (multiple calls to the same URL in an interval of 15 minutes) does it apply only to GET requests? So I can still create/update envelopes multiple times in 15 minutes?
Also if the second type of limit exists can I test it in the sandbox environment somehow?
The limits are also active in the sandbox system.
Not all API methods are metered (yet).
To test, just do a lot of calls and you'll see that the limits are applied. Eg do 1,000 status calls in an hour. Or create 1,000 envelopes and you'll be throttled.
Added
Re: only one type of limit?
Correct. Calls per hour is the only hard limit at this time. If 1,000 calls per hour is not enough for your application in general, or not enough for a specific user of your application, then there's a process for increasing the limit.
Re: 15 minute per status call per envelope.
This is the polling limit. An application is not well behaved if it polls DocuSign more than once every 15 minutes per envelope. In other words, you can poll for status on envelope A once per 15 minutes and also poll once every 15 minutes about envelope B.
The polling limit is monitored during your application's test as part of the Go Live process. It is also soft-monitored once the app is in production. In the future, the monitoring of polling for production apps will become more automated.
If you have a lot of envelopes that you're polling on then you might also run into the 1,000 calls per hour limit.
But there's no need to run into issues with polling: don't poll! Instead set up a webhook via DocuSign Connect or eventNotification and we'll call you.
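As a rough sketch of the webhook approach, here is the shape of an `eventNotification` block inside an envelope-create request. The field names follow the eSignature REST API, but treat this as an illustration: `YOUR_LISTENER_URL` is a placeholder, and you should check DocuSign's Connect/eventNotification docs for the full set of options.

```python
# Hedged sketch: ask DocuSign to call your server on envelope events
# instead of polling for status. YOUR_LISTENER_URL is a placeholder.
import json

envelope_definition = {
    "emailSubject": "Please sign",
    "status": "sent",
    # ... documents and recipients elided ...
    "eventNotification": {
        "url": "https://YOUR_LISTENER_URL/docusign-webhook",
        "requireAcknowledgment": "true",
        # Only get called back for the terminal states we care about:
        "envelopeEvents": [
            {"envelopeEventStatusCode": "completed"},
            {"envelopeEventStatusCode": "declined"},
        ],
    },
}

print(json.dumps(envelope_definition["eventNotification"], indent=2))
```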
Re: limits for creating/updating an envelope and other methods
Only the (default) 1,000 calls per hour affects non-polling methods.
Eg, asking for the status of an envelope's recipients, field values, general status, etc, over and over again is polling. Creating/updating envelopes can be done as often as you want (up to the default of 1,000 per hour).
If you want to create more than 1,000 envelopes per hour, we'll be happy to accommodate you. (And many of our larger customers do exactly that.)
The main issue that we're concerned with is unnecessary polling.
There can be other unnecessary calls which we'd also prefer to not have. For example, the OAuth:getUser call is only needed once per user login. It shouldn't be repeated more often than that since the information doesn't change.
According to the documentation, there is a rate limit for the Bitfinex REST API:
If an IP address exceeds 90 requests per minute to the REST APIs, the requesting IP address will be blocked for 10-60 seconds and the JSON response {"error": "ERR_RATE_LIMIT"} will be returned. Please note the exact logic and handling for such DDoS defenses may change over time to further improve reliability.
For users who need high-frequency connections, please switch to the WebSockets APIs.
But what is the limit for order count via Socket API (v2) using NodeJS?
After some experiments, I found that the limit is 500 orders per 10 minutes; after a minute or two of being blocked, you can post 500 more requests.
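Since the docs quoted above say a blocked IP recovers after 10-60 seconds and the error comes back as `{"error": "ERR_RATE_LIMIT"}`, a retry loop with backoff is a reasonable client-side pattern. This is a sketch: `do_request` is a stand-in for your actual HTTP call, not a Bitfinex SDK function.

```python
# Sketch: retry with increasing delay when Bitfinex returns ERR_RATE_LIMIT.
import json
import time


def call_with_backoff(do_request, max_tries=5, base_delay=10):
    """do_request() returns the raw JSON response body as a string."""
    for attempt in range(max_tries):
        payload = json.loads(do_request())
        if isinstance(payload, dict) and payload.get("error") == "ERR_RATE_LIMIT":
            # The docs say the block lasts 10-60 seconds; wait it out.
            time.sleep(base_delay * (attempt + 1))
            continue
        return payload
    raise RuntimeError("still rate-limited after %d tries" % max_tries)
```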
I use Amazon's Product Advertising API to retrieve their node hierarchy using the API's BrowseNodeLookup method (REST using Java). On Amazon's sandbox individual requests seem to work but if I keep sending requests for various nodes I eventually end up getting HTTP 503 errors.
A previous post on an Amazon forum indicated a limit of 20 requests per second on the sandbox: https://forums.aws.amazon.com/thread.jspa?messageID=152657
After I put throttling in place, I tried limiting my code to 20 requests/sec, and then to 10 requests/sec. In both cases I eventually got a 503 error. I posted my question on Amazon's forum but have not received any information, so I was wondering whether anybody knows the answers to the following questions:
What kind of limits does the sandbox environment impose in this case?
Are those or similar limits in place in the production environment?
Do those limits apply to both REST and SOAP calls?
Maybe 10 requests/sec is too many?
I am having the same problem. I found this link that mentions 1 request/sec:
http://www.mail-archive.com/google-appengine@googlegroups.com/msg19305.html
It's approximately 2,000 per hour, with the opportunity to scale up if you're a merchant shipping a lot of product sold through their marketplace.
One way to help with this limit is by batching multiple requests in each API call - they're treated as one invocation for the purposes of Amazon's rate-limiting governor. Not only does that help with throughput by permitting larger sets of requests to be issued, but because you're not dealing with inter-machine latency (between your app and the Amazon server handling your API request), you save a good deal of time there as well.
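Client-side, batching amounts to grouping IDs and joining each group into one request parameter. This sketch assumes the endpoint accepts comma-separated IDs per call, as the batching advice above describes; the chunk size of 10 is an assumption, so check the API's own cap.

```python
# Sketch of client-side batching: turn N single-ID lookups into N/size calls.
def chunks(ids, size=10):
    """Yield successive groups of at most `size` IDs."""
    for i in range(0, len(ids), size):
        yield ids[i:i + size]


def batched_params(ids, size=10):
    """One comma-joined parameter string per API call."""
    return [",".join(group) for group in chunks(ids, size)]


# 25 node IDs become 3 requests instead of 25:
print(len(batched_params([str(n) for n in range(25)])))
```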