According to the documentation, there is a limitation on the Bitfinex REST API:
If an IP address exceeds 90 requests per minute to the REST APIs, the requesting IP address will be blocked for 10-60 seconds and the JSON response {"error": "ERR_RATE_LIMIT"} will be returned. Please note the exact logic and handling for such DDoS defenses may change over time to further improve reliability.
For users who need high-frequency connections, please switch to the WebSockets APIs.
But what is the limit on order count via the WebSocket API (v2) using NodeJS?
After some experiments, I found that the limit is 500 orders per 10 minutes, and after a ban of a minute or two you can post another 500 requests.
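For illustration, here is a minimal client-side pacing sketch in Python built around those empirically observed numbers (they are not an official limit); submit_order and place_order are hypothetical stand-ins for whatever call your client library actually makes:

```python
import time
from collections import deque

WINDOW_SECONDS = 600   # 10-minute window observed experimentally
MAX_ORDERS = 500       # empirical limit from the experiments above

sent_times = deque()   # timestamps of orders sent in the current window

def wait_for_slot():
    """Block until sending one more order keeps us under the limit."""
    while True:
        now = time.monotonic()
        # Drop timestamps that have fallen out of the 10-minute window.
        while sent_times and now - sent_times[0] > WINDOW_SECONDS:
            sent_times.popleft()
        if len(sent_times) < MAX_ORDERS:
            sent_times.append(now)
            return
        # Sleep until the oldest order leaves the window, then re-check.
        time.sleep(WINDOW_SECONDS - (now - sent_times[0]) + 0.1)

def submit_order(order):
    wait_for_slot()
    # place_order(order) would be your actual WebSocket/REST call (hypothetical).
    ...
```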
My application sends requests to the Azure Machine Learning REST API in order to invoke a batch endpoint and start scoring jobs as described here. It works well for a small number of requests, but if the app sends many concurrent requests, the REST API sometimes responds with status code 429 "TooManyRequests" and the message "Received too many requests in a short amount of time. Retry again after 1 seconds.". For example, it happened after sending 77 requests at once.
The message is pretty clear, and the best solution I can think of is to throttle outgoing requests, that is, to make sure the app doesn't exceed the limits when it sends concurrent requests. The problem is that I don't know what the request limits for the Azure Machine Learning REST API are. Looking through the Microsoft documentation, I could only find this article, which provides limits for managed online endpoints, whereas I'm looking for batch endpoints.
I would really appreciate it if someone could help me find the Azure ML REST API request limits or suggest a better solution. Thanks.
UPDATE 20 Jun 2022:
I couldn't find out how many concurrent requests Azure Machine Learning batch endpoints allow, so I ended up with a limit of 10 concurrent outgoing requests, which solved the "TooManyRequests" problem. To throttle the requests I used SemaphoreSlim as described here.
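For comparison, a minimal sketch of the same idea in Python using asyncio.Semaphore (the endpoint URL and payload shape below are placeholders, not the real Azure ML request); the SemaphoreSlim approach linked above works the same way in .NET:

```python
import asyncio
import aiohttp

MAX_CONCURRENT = 10  # the limit that resolved the 429s in my case
semaphore = asyncio.Semaphore(MAX_CONCURRENT)

async def invoke_endpoint(session, url, payload):
    # At most MAX_CONCURRENT requests are in flight at any moment.
    async with semaphore:
        async with session.post(url, json=payload) as resp:
            return resp.status, await resp.text()

async def main(url, payloads):
    async with aiohttp.ClientSession() as session:
        tasks = [invoke_endpoint(session, url, p) for p in payloads]
        return await asyncio.gather(*tasks)

# asyncio.run(main("https://<your-endpoint>/jobs", payloads))  # placeholder URL
```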
According to the documentation, there is a chance to increase the request quota, which is the way to resolve the request-limit-exceeded issue. Regarding the batch quota limit, here is the document provided by Microsoft.
Change the quota values as shown in the image above.
Document Credit: prkannap and team
Alternatively, you could reduce the number of requests by storing multiple input files in a folder and invoking the job with the folder path.
If you want further assistance, please file a support ticket and a customer support engineer will assist you.
I have an API endpoint that I want to throttle using my API gateway in Azure, but it seems like throttling is always based on the caller's IP address, which restricts how many calls a single user can make per X seconds/minutes. I want to throttle solely by the number of requests per second. I don't care who calls it; it just can't exceed 100 requests per second.
So if 101 different people (with different IP addresses) all call the API at the same time, it will work for the first 100 people, but the 101st person will receive an error message saying something like "Too many requests, try again later".
Is this something that is even possible? How would I go about handling that?
Thanks!
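To illustrate the behaviour being asked for (this is not specific to Azure API Management's built-in policies, just a sketch of the idea), a global fixed-window counter that every caller shares, so the 101st request within the same second gets a 429:

```python
import time
import threading

LIMIT = 100          # requests allowed per one-second window, shared by all callers
_lock = threading.Lock()
_window_start = time.monotonic()
_count = 0

def allow_request() -> bool:
    """Return True if this request fits in the current one-second window."""
    global _window_start, _count
    with _lock:
        now = time.monotonic()
        if now - _window_start >= 1.0:
            _window_start, _count = now, 0   # start a new window
        if _count < LIMIT:
            _count += 1
            return True
        return False                          # caller should respond with 429

# In a request handler:
# if not allow_request():
#     return "Too many requests, try again later", 429
```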
I am doing a project where I need to send device parameters to the server. I will be using a Raspberry Pi for that and the Flask framework.
1. I want to know whether there is any limit on HTTPS POST requests per second. Also, I will be using PythonAnywhere for the server side and their SQL database.
Initially, my objective was to send data over the HTTPS channel when the device is in sleep mode. But when the device (e.g. a car) wakes up, I wanted to upgrade from HTTPS to a WebSocket and transmit data in real time. Later I came to know that PythonAnywhere doesn't support WebSockets.
Apart from answering the first question, can anyone shed some light on the second part? I could just increase the rate of HTTPS requests when the device is awake (e.g. 1 per 60 min in sleep mode and 6 per 60 sec when awake), but that would waste data during the wake period on per-request overhead; a WebSocket would give a persistent channel while the device is awake.
PythonAnywhere developer here: from the server side, if you're running on our platform, there's no hard limit on the number of requests you can handle beyond the amount of time your Flask server takes to process each request. In a free account you would have one worker process handling all of the requests, each one in turn, so if it takes (say) 0.2 seconds to handle a request, your theoretical maximum throughput would be five requests a second. A paid "Hacker" plan would have two worker processes, and they would both be handling requests, so that would get you up to ten a second. And you could customize a paid plan and get more worker processes to increase that.
I don't know whether there would be any limits on the RPi side; perhaps someone else will be able to help with that.
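On the RPi side, a rough sketch of the sleep/awake posting pattern described in the question; the URL, read_parameters, and is_awake below are hypothetical placeholders for your own endpoint and device code:

```python
import time
import requests

URL = "https://example.pythonanywhere.com/readings"  # hypothetical endpoint

SLEEP_INTERVAL = 60 * 60   # one POST per hour while the device sleeps
AWAKE_INTERVAL = 10        # one POST every 10 s (6 per minute) while awake

def post_reading(params: dict) -> None:
    # A plain HTTPS POST; each request carries its own TLS/HTTP overhead,
    # which is the extra data cost the question worries about during wake periods.
    requests.post(URL, json=params, timeout=10)

def run(device):
    while True:
        post_reading(device.read_parameters())   # hypothetical sensor read
        time.sleep(AWAKE_INTERVAL if device.is_awake() else SLEEP_INTERVAL)
```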
Once a minute (1,440 times/day), I'm reading a Gmail mailbox from an Azure Logic App. After two days, it consistently returns 429 Too Many Requests, even though the quota threshold is 20,000/day. It has not run successfully since.
You might be running into Gmail's threshold for concurrent requests due to the parallel actions in Logic Apps. That also returns a 429 error.
What exactly are you doing in the Logic App?
Based on this documentation, the Gmail API enforces the standard daily mail sending limits.
These limits are per-user and are shared by all of the user's clients, whether API clients, native/web clients or SMTP MSA. If these limits are exceeded a HTTP 429 Too Many Requests "User-rate limit exceeded" error mentioning "(Mail sending)" is returned with a time to retry. Note that daily limits being exceeded may result in these types of errors for multiple hours before the request is accepted, so your client may retry the request with standard exponential backoff.
These per-user limits cannot be increased for any reason.
The mail sending pipeline is complex: once the user exceeds their quota, there can be a delay of several minutes before the API begins to return 429 error responses, so you cannot assume that a 200 response means the email was successfully sent.
You may consider using exponential backoff. Here's also an additional link which might help: Gmail API error 429 rateLimitExceeded even where is no any activity
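A minimal sketch of exponential backoff with jitter in Python; send_message below is a hypothetical placeholder for whatever Gmail/Logic App call is returning the 429:

```python
import random
import time

def call_with_backoff(call, max_retries=6):
    """Retry `call` on 429, doubling the wait (plus jitter) each attempt."""
    for attempt in range(max_retries):
        status, body = call()
        if status != 429:
            return status, body
        wait = (2 ** attempt) + random.uniform(0, 1)   # 1, 2, 4, 8... seconds plus jitter
        time.sleep(wait)
    return status, body   # still failing after max_retries attempts

# status, body = call_with_backoff(lambda: send_message(msg))  # send_message is hypothetical
```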
I use Amazon's Product Advertising API to retrieve their node hierarchy using the API's BrowseNodeLookup method (REST using Java). On Amazon's sandbox, individual requests seem to work, but if I keep sending requests for various nodes I eventually end up getting HTTP 503 errors.
One of the previous posts on an Amazon forum indicated a limit of 20 requests per second on the sandbox: https://forums.aws.amazon.com/thread.jspa?messageID=152657#152657
After I put throttling in place, I tried limiting the code to 20 requests/sec and then to 10 requests/sec. In both cases I eventually got a 503 error. I posted my question on Amazon's forum but have not received any information, so I was wondering whether anybody knows the answers to the following questions:
What kind of limits does the sandbox environment impose in this case?
Are those or similar limits in place in the production environment?
Do those limits apply to both REST and SOAP calls?
Maybe 10 requests/sec is too many?
I am having the same problem. I found this link that mentions 1 request/sec.
http://www.mail-archive.com/google-appengine@googlegroups.com/msg19305.html
It's approximately 2,000 per hour, with the opportunity to scale up if you're a merchant selling a lot of product through their marketplace.
One way to work within this limit is to batch multiple requests into each API call; they're treated as one invocation for the purposes of Amazon's rate-limiting governor. Not only does that help with throughput by permitting larger sets of requests to be issued, but because you're not paying the inter-machine latency (between your app and the Amazon server handling your API request) for every single item, you make up a good deal of time as well.
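A rough sketch combining both suggestions, batching IDs into fewer calls and pacing them to roughly one request per second; lookup_nodes is a hypothetical stand-in for the actual BrowseNodeLookup call, and the batch size of 10 is an assumption, not a documented limit:

```python
import time

BATCH_SIZE = 10         # assumed maximum IDs per call; check the API docs
MIN_INTERVAL = 1.0      # ~1 request/second, per the limit mentioned above

def lookup_all(node_ids, lookup_nodes):
    """Fetch browse nodes in batches, pacing calls to about one per second."""
    results = []
    last_call = 0.0
    for i in range(0, len(node_ids), BATCH_SIZE):
        batch = node_ids[i:i + BATCH_SIZE]
        elapsed = time.monotonic() - last_call
        if elapsed < MIN_INTERVAL:
            time.sleep(MIN_INTERVAL - elapsed)
        last_call = time.monotonic()
        results.extend(lookup_nodes(batch))   # hypothetical BrowseNodeLookup wrapper
    return results
```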