I created a Web API that uses AAD for authentication. The client is a daemon application requesting tokens from AAD using client id and secret.
I was doing a stress test on the client/web API where a client requests a token for each call to the service.
After around 26 minutes of the test run, I started to see a very high request rate error from AAD when trying to get the token.
I understand I can reuse the token and avoid the issue, but I am curious to know what the throttle limit is. Does anyone have a pointer to the documentation explaining the request rate limit/throttling?
Thanks!
Throttling behavior can depend on the type and number of requests. For example, if you have a very high volume of requests, all request types are throttled. Threshold limits vary based on the request type. When you reach the limit, you receive the HTTP status code 429 Too Many Requests. The exact request rate limit is not currently exposed.
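If you cannot avoid approaching the limit, the usual pattern is to retry with backoff when a 429 comes back, honouring the Retry-After header if one is present. A minimal sketch, where `requestToken` is a hypothetical helper that POSTs to the AAD token endpoint:

```typescript
// Acquire a token, backing off and retrying when AAD throttles with HTTP 429.
// `requestToken` is a hypothetical helper that POSTs to the AAD token endpoint.
async function getTokenWithRetry(
  requestToken: () => Promise<Response>,
  maxRetries = 5
): Promise<string> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await requestToken();
    if (res.ok) {
      const body = await res.json();
      return body.access_token;
    }
    if (res.status !== 429) {
      throw new Error(`Token request failed with status ${res.status}`);
    }
    // Honour Retry-After when present, otherwise back off exponentially.
    const waitSeconds = Number(res.headers.get("Retry-After")) || 2 ** attempt;
    await new Promise((resolve) => setTimeout(resolve, waitSeconds * 1000));
  }
  throw new Error("Token request still throttled after retries");
}
```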
Please refer to these articles, which may help you understand this in more detail.
https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/request-limits-and-throttling
https://learn.microsoft.com/en-us/azure/architecture/best-practices/retry-service-specific#retry-usage-guidance
I have a website that has been the frequent target of botters attempting to scrape our content, and I have been trying to devise a better mechanism for preventing bot traffic. I've conceived the following workflow to protect our APIs and webpages while still allowing legitimate traffic through with minimal impact on performance:
1. When the user visits the page for the first time, make a request to the backend using reCAPTCHA v3 to tell whether the user is human or not. If they pass the captcha, generate a token for the user, store it in our backend with an expiration date, and give it back to the client to store as a cookie.
2. On the frontend, include the token in all API requests to our backend. Before handling any request, the backend checks the token and makes sure that it is valid and not expired.
3. If/when the token expires, inform the frontend and run step 1 again to make sure the user is human and generate a new token.
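Roughly, the backend half of steps 1-3 might look like the sketch below; `verifyRecaptcha`, the route names, and the in-memory session store are all illustrative placeholders:

```typescript
import express from "express";
import cookieParser from "cookie-parser";
import crypto from "crypto";

const app = express();
app.use(express.json());
app.use(cookieParser());

// In-memory store for illustration only; a real deployment would use a shared store.
const sessions = new Map<string, number>(); // token -> expiry timestamp (ms)
const TOKEN_TTL_MS = 60 * 60 * 1000; // 1 hour

// Hypothetical placeholder: call reCAPTCHA v3 siteverify and check the score.
async function verifyRecaptcha(recaptchaToken: string): Promise<boolean> {
  return recaptchaToken.length > 0;
}

// Step 1: verify the reCAPTCHA result, then hand back a session token as a cookie.
app.post("/verify-human", async (req, res) => {
  if (!(await verifyRecaptcha(req.body.recaptchaToken))) {
    return res.status(403).send("Captcha failed");
  }
  const token = crypto.randomUUID();
  sessions.set(token, Date.now() + TOKEN_TTL_MS);
  res.cookie("session", token, { httpOnly: true, secure: true });
  res.sendStatus(204);
});

// Steps 2-3: require a valid, unexpired token on every API request.
app.use("/api", (req, res, next) => {
  const expiry = sessions.get(req.cookies.session);
  if (!expiry || expiry < Date.now()) {
    return res.status(401).send("Token missing or expired"); // frontend re-runs step 1
  }
  next();
});
```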
I believe this approach would achieve the goals I have outlined above. There would be relatively few requests to reCAPTCHA (basically one request per session, assuming a token duration of, say, 1 hour), and there would not be any noticeable change to the user workflow. One limitation I see is that we would want verified bots (e.g. GoogleBot) to be able to bypass the captcha, but maybe there is a way to do this through Cloudflare by having it tell us when traffic is coming from a verified bot.
Are there any other limitations/downsides to this approach that I haven't thought of? I tried searching for a similar solution to bot traffic, but haven't been able to find anything. It seems like this is not a very common solution, and I'm wondering whether that may be because there is a flaw in the design that I haven't considered.
I would like to describe a problem to which I just cannot find a solution. Although I have looked around the web several times, the resources I find do not satisfy my curiosity.
The question is the following:
Suppose we have a REST API in Node.js (Express) exposed on the endpoint /stars.
Suppose we want to sell this API to a certain target of customers; the /stars endpoint should therefore only be usable by customers who have bought the API.
Here the problem arises: let's suppose that a pizza company buys my API and that I generate an access token for them; they would then call my endpoint with their token to get the resource. So far, so good.
However, all the requests are easily visible.
For example, in Chrome > dev tools > Network, I can see not only the endpoint with the full address, but even the payload that is passed!
So as an attacker I could very well (without paying for the API) catch the pizza company using the /stars endpoint with a token, copy everything, and use it in my own services by providing the same token and the same endpoint.
I already know about tokens like JWT, but they don't solve the problem, since such a token only has an expiration.
Even if it expires after 15 minutes or after 3 minutes, I can just retrieve another one and send an identical request with that token. Could anyone direct me to a solution?
The only service I've seen that seems to address this is Instagram, which sends a payload of thousands of lines behind each request. Is that really the only method?
Note: the API is not even public.
First, you can set up an encryption/decryption module for your response data with the help of the crypto module in Node.js: you send an encrypted response, and your API client decrypts it before using it.
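A minimal sketch of that idea with AES-256-GCM from Node's built-in crypto module (the shared key would have to be given to the client out of band; all names here are illustrative):

```typescript
import crypto from "crypto";

// Encrypt a JSON payload with a shared 32-byte key using AES-256-GCM.
function encryptResponse(payload: unknown, key: Buffer): string {
  const iv = crypto.randomBytes(12);
  const cipher = crypto.createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([
    cipher.update(JSON.stringify(payload), "utf8"),
    cipher.final(),
  ]);
  // Ship iv + auth tag + ciphertext together so the client can decrypt and verify.
  return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]).toString("base64");
}

// The client side reverses the process with the same shared key.
function decryptResponse(blob: string, key: Buffer): unknown {
  const buf = Buffer.from(blob, "base64");
  const decipher = crypto.createDecipheriv("aes-256-gcm", key, buf.subarray(0, 12));
  decipher.setAuthTag(buf.subarray(12, 28));
  const plaintext = Buffer.concat([decipher.update(buf.subarray(28)), decipher.final()]);
  return JSON.parse(plaintext.toString("utf8"));
}
```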
Second, you can set a key for your API, meaning that every time your client or user sends you a request they have to send that key in the body, not the header, so other people can't get your data because they don't have that key. In Express you can set up middleware that validates whether this key exists; if not, simply return "You are not authorized".
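A rough sketch of such a check as Express middleware, assuming an illustrative apiKey field and key list:

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Keys of paying customers; in practice these would live in a database.
const VALID_KEYS = new Set(["example-customer-key"]);

// Middleware: reject any request whose body does not carry a valid API key.
app.use((req, res, next) => {
  const key = req.body?.apiKey;
  if (!key || !VALID_KEYS.has(key)) {
    return res.status(401).send("You are not authorized");
  }
  next();
});

app.post("/stars", (_req, res) => res.json({ stars: 42 }));
```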
If anything is unclear, or you want to go deeper on a particular point, just let me know.
You may simply use an HTTP-only cookie and send the token in the cookie, instead of in a normal header.
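For example, with Express that could look like this (route and cookie names are illustrative, and issueTokenFor is a hypothetical placeholder):

```typescript
import express from "express";

const app = express();

// Hypothetical placeholder for whatever issues the customer's token.
const issueTokenFor = (_req: express.Request): string => "opaque-token";

app.post("/login", (req, res) => {
  // httpOnly keeps the token out of reach of page JavaScript; the browser
  // still attaches the cookie to subsequent same-site requests automatically.
  res.cookie("accessToken", issueTokenFor(req), {
    httpOnly: true,
    secure: true,
    sameSite: "strict",
  });
  res.sendStatus(204);
});
```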
A customer using your endpoint should not be sharing their API keys with the end-users.
This means that any customer using your service should run at least a thin proxy server in front of your specific endpoint:
CLIENT GET /pizza FROM CUSTOMER -> CUSTOMER GET /pizza?apiToken=<...> FROM SERVICE
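In code, the CUSTOMER side of that diagram could be a thin proxy route that keeps the API token on the server; the service URL and environment variable below are illustrative:

```typescript
import express from "express";

const app = express();

// The CUSTOMER's own backend proxies /pizza to the SERVICE, attaching the API
// token server-side so the end user's browser never sees it.
app.get("/pizza", async (_req, res) => {
  const upstream = await fetch(
    `https://api.stars-service.example/pizza?apiToken=${process.env.SERVICE_API_TOKEN}`
  );
  res.status(upstream.status).json(await upstream.json());
});

app.listen(3000);
```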
Obviously there can be a man-in-the-middle attack between the CUSTOMER and your SERVICE, but that is unlikely to occur when using SSL (related: Are querystring parameters secure in HTTPS (HTTP + SSL)?).
If a CUSTOMER suspects that their API key was leaked, they should revoke it and request a new one from your SERVICE.
I created a script to upload some 75,000 files to a container in Azure Blob Storage. But after some 47,000 uploads, the auth token failed and gave an auth error, due to which all the subsequent requests were canceled.
For each request, I check whether the token has failed; if so, I refresh it and update it with a new token. That is, I obtain new access tokens using the same refresh token.
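For reference, the refresh-on-failure logic is roughly the following (uploadBlob and refreshAccessToken are hypothetical stand-ins for the actual storage and token calls):

```typescript
// Hypothetical helpers standing in for the real storage upload and token refresh calls.
declare function uploadBlob(file: string, accessToken: string): Promise<Response>;
declare function refreshAccessToken(refreshToken: string): Promise<string>;

// Upload one file; if the request comes back 401/403, refresh the access token
// (using the same refresh token) and retry once.
async function uploadWithRefresh(
  file: string,
  creds: { accessToken: string; refreshToken: string }
): Promise<void> {
  let res = await uploadBlob(file, creds.accessToken);
  if (res.status === 401 || res.status === 403) {
    creds.accessToken = await refreshAccessToken(creds.refreshToken);
    res = await uploadBlob(file, creds.accessToken);
  }
  if (!res.ok) throw new Error(`Upload of ${file} failed with status ${res.status}`);
}
```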
When I tried to investigate rate limits for Azure, I found two articles:
Scalability and performance targets for standard storage accounts
Throttling Resource Manager requests
One of them says the limit is 20,000/sec, while the other says it is around 10/sec and 1,200/hr. Can someone please clarify what the actual limit is?
Can someone please clarify what the actual limit is?
For uploading objects to Azure Blob Storage, the 20,000 requests per second limit applies.
The 1,200/hour limit relates to write operations against the Resource Manager, i.e., updating Azure resources such as creating containers, creating storage accounts, and so on.
auth token failed
When reaching the limit, you are supposed to get an HTTP 500 or HTTP 503 response (as far as I recall; please check). But you say the auth token fails, which implies an authentication issue (a 401 or 403 response). So which is it? (Btw, the auth token has its own validity period.)
I'm considering using the https://account-d.docusign.com/oauth/userinfo endpoint to verify whether a user's access token is valid before calling the eSignature API.
However, I have not been able to find any information regarding the rate limits for OAuth calls, only for specific DocuSign APIs.
Are there any rate limits for OAuth calls, specifically for get userInfo, and if so what are they?
There is a limit, but your workflow wouldn't really be recommended, as that endpoint is not meant to be used for token validation. Couldn't you just call eSig and use retry logic (i.e., getting a new token) if you get an unauthorized response? Have a look at our quick start applications for ideas on how to do this.
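Something along these lines, as a generic sketch (the obtainNewAccessToken helper is a placeholder, not DocuSign-specific code):

```typescript
// Hypothetical helper that obtains a fresh access token (e.g. via a JWT grant).
declare function obtainNewAccessToken(): Promise<string>;

// Call the API directly; if the token has expired (401), refresh it and retry once.
async function callWithRetry(url: string, accessToken: string): Promise<Response> {
  let res = await fetch(url, { headers: { Authorization: `Bearer ${accessToken}` } });
  if (res.status === 401) {
    const fresh = await obtainNewAccessToken();
    res = await fetch(url, { headers: { Authorization: `Bearer ${fresh}` } });
  }
  return res;
}
```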
In my Node.js application I use OpenStack Swift (Object Storage) as a storage service. The app needs an authorization token to access the storage service; the (small) problem is that the access token needs to be refreshed every couple of hours.
How should I do this to provide a smooth experience to the end client?
Option 1: Refresh the token when it expires, in reaction to an OpenStack 401 response code.
Option 2: Schedule the token refresh manually via some sort of Node scheduler or cron task.
The app relies heavily on access to the storage service.
Using option 1 would effectively block access to the app for my clients for about a second.
This may seem like nothing, but multiplied by the number of clients it is not so small.
If the application relies on some database/storage that requires authorization, what is the industry standard for performing such server-to-server authorization requests?
For some reason, obtaining a token from OpenStack Keystone takes a lot of time (~1 s), which is why I'm asking.
NOTE: currently I'm not in a position to influence token lifetimes.
Considering that you do not have the ability to change auth token lifetimes and are looking to hide the authorization refresh from users, it would seem most appropriate to go with your second option. Fortunately, timed asynchronous actions are easy to implement in Node.js.
It seems best to have this update service rely on either the timeout threshold or the token's expiration timing; defining arbitrary timings doesn't seem optimal.
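A minimal sketch of such a scheduled refresh, assuming a hypothetical fetchKeystoneToken() helper that returns the token together with its expiry:

```typescript
// Hypothetical helper that authenticates against Keystone and returns the
// Swift token together with its expiry time (epoch milliseconds).
declare function fetchKeystoneToken(): Promise<{ token: string; expiresAt: number }>;

let currentToken = "";

// Requests elsewhere in the app read the cached token instead of calling Keystone.
export function getSwiftToken(): string {
  return currentToken;
}

async function refreshLoop(): Promise<void> {
  const { token, expiresAt } = await fetchKeystoneToken();
  currentToken = token;
  // Renew a few minutes before expiry rather than waiting for a 401,
  // so clients never have to wait on the ~1s Keystone round trip.
  const delayMs = Math.max(expiresAt - Date.now() - 5 * 60 * 1000, 1000);
  setTimeout(() => {
    refreshLoop().catch((err) => console.error("Token refresh failed", err));
  }, delayMs);
}

refreshLoop().catch((err) => console.error("Initial token fetch failed", err));
```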