I have a few services which are behind Azure API Management.
When a client sends a request, the flow looks like this:
Client -> Azure API Management -> Service A -> Service B
Sometimes Service B only responds after 5 minutes, but in the meantime the client gets a 504 Gateway Timeout.
In Azure API Management I set this policy:
<backend>
    <forward-request timeout="300" />
</backend>
In the documentation I saw the information below, but is it possible to configure the network infrastructure in the pipeline so that longer timeouts are honored?
The amount of time in seconds to wait for the HTTP response headers to
be returned by the backend service before a timeout error is raised.
Minimum value is 0 seconds. Values greater than 240 seconds may not be
honored as the underlying network infrastructure can drop idle
connections after this time.
As the documentation indicates, any value greater than 240 seconds won't be reliable. So your best bet is to change the implementation a bit, if that's in your control.
A simple option: instead of waiting for 5 minutes, change the request to "fire and forget". Once Service B has finished its execution, it can write the response somewhere Service A can access, and your client can then keep polling to see whether the response is available (a rough sketch of the polling side follows the next paragraph).
This may not be feasible if you don't have control, but just suggested it as a workaround in case you do :)
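A minimal sketch of the client-side polling half of that pattern, assuming a hypothetical status URL exposed through APIM that returns 200 once the result written by Service B is available; the URL, polling interval, and attempt count are placeholders, not anything from the original setup:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class ResultPoller {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Hypothetical endpoint where Service B's result is published once ready.
        HttpRequest statusRequest = HttpRequest.newBuilder()
                .uri(URI.create("https://example-apim.azure-api.net/results/12345"))
                .timeout(Duration.ofSeconds(30))
                .GET()
                .build();

        // Poll every 10 seconds for at most ~5 minutes instead of holding one
        // long-running request open through APIM.
        for (int attempt = 0; attempt < 30; attempt++) {
            HttpResponse<String> response =
                    client.send(statusRequest, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() == 200) {
                System.out.println("Result ready: " + response.body());
                return;
            }
            Thread.sleep(10_000);
        }
        System.out.println("Result not ready after 5 minutes, giving up.");
    }
}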
Related
I have an Azure API Management instance which contains one operation with a backend call to an Azure Function. The issue I'm facing is that the client calling my APIM endpoint receives a 500 error due to a timeout at forward-request (after 30 seconds), but processing continues at the backend and records are created.
So in this situation the client thinks the record was not created, but it actually was.
1. What is the default timeout for APIM? The documentation shows 300 seconds but also states 240 seconds as the maximum that is honored, which is misleading.
2. Backend processing doesn't stop, so is it expected that APIM returns a timeout error?
I have a logic app that gets called from APIM > Function > Logic app > D365 (a synchronous call based on an HTTP request trigger). When I call it the first time (after a day, or after a few hours), it takes longer than usual (around 25-30 seconds) and results in a Gateway Timeout error.
When I call it a second time, it usually completes the operation within 8-10 seconds with no timeouts.
The error message is the typical one:
The execution of template action 'Response_-_to_be_displayed' is failed: the client application timed out waiting for a response from service. This means that workflow took longer to respond than the allotted timeout value. The connection maintained between the client application and service will be closed and client application will get an HTTP status code 504 Gateway Timeout.
While keeping the pattern synchronous, I don't see anything here that should cause a timeout issue. I have already checked this link and this one too, but neither solves my problem.
I want to keep the call synchronous (it's only a 25-30 second call). Is there an APIM policy or any Logic Apps setting that can increase this timeout?
You can increase the timeout in both the Logic App and APIM. In APIM you implement a policy that dictates the timeout; in the Logic App it's done through app settings.
But in the case of APIM, if the timeout is more than 240 seconds it may not be honored, and it is advised to revisit the implementation of the function instead.
To increase the timeout in APIM, set up the forward-request policy; it has a timeout attribute that you can set to your desired limit.
Go to policy fragments in APIM and add the following to a policy:
<forward-request timeout="60"/>
Now regarding the Logic App, you can set a custom timeout through application settings. You need two settings: one is a retention threshold and the other is the run timeout. Add these under Configuration > Application settings in the portal for the Logic App.
The settings are called
Runtime.FlowRetentionThreshold
and
Runtime.Backend.FlowRunTimeout
The values of these settings use the format days.hours:minutes:seconds, for example 00.00:05:00 for 5 minutes.
Also please check that your function is working properly, since the default workflow timeout in Logic Apps and the default APIM timeout are both longer than the time you say the task takes to execute.
Refer to this MS doc on Logic App timeouts.
Refer to this MS doc on APIM timeouts.
I know this has been asked before, but I tried all known solutions and still no luck. I have a request that returns roughly 26 MB of JSON. It is returning a 502 on my Azure web app. I have set maxRequestLength and maxAllowedContentLength to their maximum allowed values as detailed here.
How to set the maxAllowedContentLength to 500MB while running on IIS7?
I have also set the applicationHost.xdt on the site folder of my webapp and verified it is applied as detailed here.
ApplicationHost.xdt in Azure Web Apps
Nonetheless, my request times out at exactly 4 minutes every time. I can run the same request against localhost running on IIS Express pointed at the Azure SQL database and it returns the data, so I know this is something Azure web app specific.
I have enabled all types of logging in the "App Service Logs" section of my web app. I see failed request traces for other requests, such as 401s when the session expires, but this request doesn't log a failed request trace or an application error. In the live log stream the web server logs show the request as a 200 response.
Any other ideas?
Thanks for a detailed question and for sharing the solutions that you have already tried. I'm unsure whether the "Always On" feature is turned on for your web app. Such a timeout error may occur because of this, so kindly enable it and let us know for further investigation.
Additional information: Azure Load Balancer has a default idle timeout setting of approximately four minutes (230 seconds). This is a general idle request timeout that will cause clients to get disconnected after 230 seconds; however, the command will still continue running server-side after that. For a typical scenario this is generally a reasonable response time limit for a web request. For longer operations, you could look at async methods to run such reports; WebJobs or Azure Functions is another option (a rough sketch of the async hand-off is below).
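A minimal sketch of what that async hand-off could look like, using the JDK's built-in HTTP server purely for illustration; the paths, the in-memory result store, and the report generation are hypothetical stand-ins for whatever the web app actually does:

import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncReportServer {
    // Hypothetical in-memory store; a real app would use a queue plus blob/table storage.
    private static final Map<String, String> results = new ConcurrentHashMap<>();
    private static final ExecutorService workers = Executors.newFixedThreadPool(4);

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // POST /reports: accept the work and return 202 immediately with a polling URL,
        // so no request stays open anywhere near the 230-second idle limit.
        server.createContext("/reports", exchange -> {
            String id = UUID.randomUUID().toString();
            workers.submit(() -> { results.put(id, generateLargeJsonReport()); });
            byte[] body = ("/reports/status?id=" + id).getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(202, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });

        // GET /reports/status?id=...: 200 with the data when ready, 404 while pending.
        server.createContext("/reports/status", exchange -> {
            String query = exchange.getRequestURI().getQuery();
            String id = query == null ? "" : query.replace("id=", "");
            String result = results.get(id);
            byte[] body = (result == null ? "pending" : result).getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(result == null ? 404 : 200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });

        server.start();
    }

    private static String generateLargeJsonReport() {
        // Placeholder for the long-running query that currently exceeds the idle timeout.
        return "{\"status\":\"done\"}";
    }
}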
If the ‘Always On’ setting is not turned on, please do turn it on. Always On helps keep the app loaded even when there's no traffic by sending a request to the root of your application; whatever file is served for a request to / is the one that gets warmed up. This feature comes with the App Service plan and is not charged separately.
1) From the Azure Portal, go to your WebApp.
2) Select Settings > Configuration > General settings.
3) For Always On, select On.
I am trying to get a list of all Storage Accounts present in my Azure subscription but I am getting a throttling error.
com.microsoft.azure.CloudException: Status code 429, {"error":{"code":"ResourceCollectionRequestsThrottled","message":"Operation 'Microsoft.Storage/storageAccounts/read' failed as server encountered too many requests. Please try after '17' seconds. Tracking Id is 'e982a894-0f3e-4291-a9b3-e147c18f8f60'."}}
The request immediately prior to this one shows there are 13869 remaining subscription reads, yet it still fails.
x-ms-ratelimit-remaining-subscription-reads: 13869
There are around 60 storage accounts in my subscription, which by that measure is a small number.
Any idea what's causing this, and why it happens only while listing storage accounts and nowhere else?
According to this article:
For each subscription and tenant, Resource Manager limits read requests to 15,000 per hour and write requests to 1,200 per hour. These limits apply to each Azure Resource Manager instance; there are multiple instances in every Azure region, and Azure Resource Manager is deployed to all Azure regions. So, in practice, limits are effectively much higher than those listed above, as user requests are generally serviced by many different instances.
If your application or script reaches these limits, you need to throttle your requests.
So if you reach the request limit, Resource Manager returns the 429 HTTP status code and a Retry-After value in the header. The Retry-After value specifies the number of seconds your application should wait (or sleep) before sending the next request. If you send a request before the retry value has elapsed, your request is not processed and a new retry value is returned.
I suggest you use this header to track how many reads you have left. If you are approaching the limit, or you receive a 429, you can write code that throttles the application before it sends the next request (a rough sketch follows).
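A rough sketch of that throttling logic, assuming a plain HTTP call to the ARM storage-accounts list endpoint rather than the SDK wrapper; the subscription id, token, and api-version are placeholders, and the key point is simply honoring Retry-After on a 429:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ThrottledArmClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Placeholder subscription id and bearer token; supply real values from your auth flow,
        // and use an api-version supported by the Storage resource provider.
        String url = "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000"
                + "/providers/Microsoft.Storage/storageAccounts?api-version=2023-01-01";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Authorization", "Bearer <access-token>")
                .GET()
                .build();

        for (int attempt = 0; attempt < 5; attempt++) {
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());

            if (response.statusCode() == 429) {
                // ARM says how long to back off; fall back to 17s (as in the error above) if absent.
                long waitSeconds = response.headers()
                        .firstValue("Retry-After")
                        .map(Long::parseLong)
                        .orElse(17L);
                System.out.println("Throttled, retrying in " + waitSeconds + "s");
                Thread.sleep(waitSeconds * 1000);
                continue;
            }

            // Log how much of the hourly read quota is left, then stop.
            response.headers()
                    .firstValue("x-ms-ratelimit-remaining-subscription-reads")
                    .ifPresent(remaining -> System.out.println("Remaining reads: " + remaining));
            System.out.println("Status: " + response.statusCode());
            return;
        }
    }
}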
This page says that the availability of an Azure storage account is computed as (billable requests)/(total requests), where billable requests means all requests except those that experienced anonymous failures (other than network errors), throttled requests, server timeout errors, and unknown errors.
Now, what I see on the Azure portal for my storage account is a straight line continuously at 100%, meaning the account is reported as 100% available at all times. The line has no breaks, which suggests the availability is being calculated continuously.
I know for sure that I don't send requests to the storage account continuously. So how is this metric calculated for periods when there are no requests?
Additionally, even a slight drop in storage availability means that some requests failed due to server-side issues. How can we ensure that these failed requests are retried and succeed?
When there aren't any incoming requests, the availability is reported as 100%. If your request encounters server-side failures, you should retry the request explicitly in your code (in the .NET client library you can easily leverage RetryPolicy; see more details about RetryPolicy here).
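For callers not using the .NET client library, a minimal sketch of the same retry idea in plain Java; the operation, attempt count, and delays are illustrative defaults, not values taken from any storage SDK:

import java.util.concurrent.Callable;

public class SimpleRetry {
    // Retry an operation a few times with exponential backoff on transient failures.
    public static <T> T withRetry(Callable<T> operation, int maxAttempts) throws Exception {
        long delayMillis = 1000;
        for (int attempt = 1; ; attempt++) {
            try {
                return operation.call();
            } catch (Exception e) {
                if (attempt >= maxAttempts) {
                    throw e; // Out of attempts: surface the last failure to the caller.
                }
                System.out.println("Attempt " + attempt + " failed (" + e.getMessage()
                        + "), retrying in " + delayMillis + " ms");
                Thread.sleep(delayMillis);
                delayMillis *= 2; // Exponential backoff between attempts.
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical usage: wrap whatever storage read/write fails intermittently.
        String blob = withRetry(() -> downloadBlobSomehow("container", "report.json"), 4);
        System.out.println(blob.length() + " bytes downloaded");
    }

    private static String downloadBlobSomehow(String container, String name) {
        // Placeholder for a real storage SDK call.
        return "{}";
    }
}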