Validating the endpoint for an Azure hosted service

I have deployed an Azure-hosted WCF service, with one service and one service contract. Everything works fine if I call it as below:
http://myexampleservice.cloudapp.net/TestSertvice.svc/Test1
Now we want to catch all invalid requests, i.e. those that do not have the correct service name or the correct operation name. For example, all requests of the form below:
http://myexampleservice.cloudapp.net/TestSertvice12.svc/Test1
Is there any way to do this?
If I call the invalid URLs above, the service returns a 404 response status. Is there any possibility that Azure Traffic Manager will degrade the service if it gets too many such requests?
-Manish

This is actually pretty easy to do. First you will need to catch the 404 requests occurring on your instance(s):
<customErrors mode="On">
  <error statusCode="404" redirect="~/Errors/Error404.aspx" />
</customErrors>
Each time a 404 error occurs, the user/consumer will be redirected to Error404.aspx, where you can record the event in a local counter (a file or a static variable) or in a shared counter if you have multiple instances (SQL Azure, Table Storage, ...).
Take a look at the options you have when configuring the Traffic Manager:
You can set up a monitoring endpoint. This would point to a different page (like /Status/CanBeUsed.aspx). This page should return an HTTP status code different from 200 if it decides that the deployment should not be used (i.e. if your local/shared counter contains too many 404 errors). The Traffic Manager will monitor this page, and after 3 failed requests it will fail over to a different deployment.
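The monitoring page itself can be very small. Here is a minimal sketch, assuming a shared `ErrorCounter` class that Error404.aspx increments; the class name, page name, and threshold are illustrative, not from the original answer:

```csharp
using System;
using System.Web.UI;

// Hypothetical monitoring page polled by Traffic Manager.
// ErrorCounter and the threshold below are illustrative assumptions.
public partial class CanBeUsed : Page
{
    private const int Max404Errors = 100; // pick a threshold that fits your traffic

    protected void Page_Load(object sender, EventArgs e)
    {
        // Error404.aspx increments this counter on every 404 it handles.
        if (ErrorCounter.Count404 > Max404Errors)
        {
            // Anything other than 200 tells Traffic Manager the deployment
            // is unhealthy; after 3 failed probes it fails over.
            Response.StatusCode = 503;
        }
        // Otherwise the page returns 200 OK and the deployment stays in rotation.
    }
}
```

Returning 503 (Service Unavailable) rather than another error code makes the intent clear to anyone else probing the endpoint.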

Related

Application Initialization for an App Service in Azure not working

We have an App Service in Azure configured for a maximum of 8 instances, and every time we deploy, we see restart activity under Availability and Performance (Diagnostics).
We have also observed loads of 5xx errors while this is happening. Our analysis so far is that requests are getting routed to cold instances which have just been spun up, and that this is the reason for the failures.
I found this guide -> https://azure.github.io/AppService/2020/05/15/Robust-Apps-for-the-cloud.html and am following the Application Initialization advice.
As a result, I have added
<applicationInitialization>
  <add initializationPage="/healthcheck" />
</applicationInitialization>
to web.config
I restarted the App Service and sent a few test requests to the app. In Application Insights I can see the health endpoint being called, so the application initialization logic is kicking in. However, it is calling http://localhost/healthcheck and a 307 is being returned.
I looked into the 307: it is returned because the App Service is configured to run only over HTTPS, but http://localhost is non-HTTPS, so the service redirects.
What do I need to do so that it calls the app service with the HTTPS protocol?
I tried adding the full app URL in the application initialization block, but then I can see http://localhost/https://app-service-name.azurewebsites.net/healthcheck being called - which is even worse.
What am I doing wrong?
I assume your application runs ASP.NET Core. As mentioned in the other answer, application initialization only supports HTTP, so you need to make that work. What we did is this:
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    // ...

    // IIS application initialization requires HTTP, so apply the
    // HTTPS redirect to every route except the warmup route.
    app.UseWhen(
        context => !WarmupController.IsWarmupRoute(context.Request.Path),
        mainApp => mainApp.UseHttpsRedirection());
}
WarmupController.IsWarmupRoute basically only contains a StartsWith check.
This way you allow HTTP requests only for your warmup/healthcheck route. There is no need to adapt web.config or AppService settings.
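For completeness, the check itself can be a one-liner. A minimal sketch, assuming the warmup route is /healthcheck (matching the initializationPage above) and that the controller serves a cheap endpoint at that path:

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;

public class WarmupController : Controller
{
    // Must match the initializationPage configured in web.config.
    private const string WarmupPath = "/healthcheck";

    // The "StartsWith check" mentioned above: true for the warmup route only.
    public static bool IsWarmupRoute(PathString path) =>
        path.StartsWithSegments(WarmupPath);

    [HttpGet(WarmupPath)]
    public IActionResult Get() => Ok(); // cheap 200 once the app is warm
}
```

Using PathString.StartsWithSegments (rather than a raw string StartsWith) avoids matching unrelated routes like /healthcheckfoo.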

Are custom error pages possible on Azure App Service slots?

I'm trying to build a "maintenance mode" screen for Azure App Services running a single-page Vue app. Is this possible without Application Gateway?
Background
Azure App Service instance
Runs a Node Express server on startup
Express serves up a single page app
All data is fed into the SPA via an API
When the API undergoes maintenance I need a way to tell the front-end
I'd prefer not to ask the API if it's available for each page view
Ideas and things we've tried
I know Azure Application Gateway can serve up custom error pages, but we're currently not using this service and might have some legal/data privacy issues (it's a healthcare tool) with its data caching requirements.
I added a web.config file to the wwwroot on the service slot as an attempt to catch errors and redirect but it seems to have no impact. This is what I expected since Express should be handling routing and errors (which it does).
Azure has the ability to "stop a slot". Is there any way to customize the page that's displayed?
Other web searches show that custom errors were requested at one point, but the Azure feature request page produces a very beautiful 404 page (oh the irony).
Is there a way to customize the stopped and/or server error pages in Azure? Are there other commonly accepted ways of solving a problem like this?
A custom error page is only available by using Application Gateway; this feature is not available without it. If you want this feature, please add your feedback/feature request here
You can catch HTTP errors by using a web.config in the wwwroot directory with the code below:
<httpErrors errorMode="Custom" defaultResponseMode="ExecuteURL">
  <remove statusCode="404" subStatusCode="-1" />
  <error statusCode="404" path="/public/CustomError.html" responseMode="ExecuteURL" />
</httpErrors>
The snippet above redirects to CustomError.html whenever a 404 error occurs.
Refer here

Azure API Management (consumption tier): First request gives timeout and is not sent to backend service

I have a service running behind an Azure API Management instance running in the consumption tier. When no traffic has been sent to the API Management instance in a while (15 minutes isn't enough to trigger it, but an hour is), the first request sent takes about 3 minutes 50 seconds and returns a HTTP 500 with this body content:
<html><head><title>500 - The request timed out.</title></head><body> <font color ="#aa0000"> <h2>500 - The request timed out.</h2></font> The web server failed to respond within the specified time.</body></html>
Following requests work fine. Based on application logs and testing with an API Management instance pointing to my local machine via ngrok, it doesn't look like API management is even trying to connect to the backend for these requests. For the local test, I ran my app under the debugger, put a breakpoint in my service method (there's no auth that could get in the way) and watched the "output" window in Visual Studio. It never hit my breakpoint, and never showed anything in the output window for that "500 request timed out" request. When I made another request to API Management, it forwarded along to my service as expected, giving me output and hitting my breakpoint.
Is this some known issue with the API Management consumption tier that I need to find some way to work around (i.e. a service regularly pinging it)? Or a possible configuration issue with the way I've set up my API Management instance?
My API management instance is deployed via an ARM template using the consumption tier in North Central US and has some REST and some SOAP endpoints (this request I've been using for testing is one of the SOAP ones and uses the envelope header to specify the SOAP action).
Additional information:
The request in question is about 2 KB, and a response from the server (which doesn't come into play in this scenario, as the call never makes it to my server) is about 1 KB, so it's not an issue with request/response sizes.
When I turn on request tracing (by sending the Ocp-Apim-Subscription-Key + Ocp-Apim-Trace headers), this 500 response I'm getting doesn't have the Ocp-Apim-Trace-Location header with the trace info that other requests do.
I get this behavior when I send 2 requests (to get the 4-minute 500 response and then a normal 5s 200 response), wait an hour, and make another request (which gets the 4-minute delay and 500 response), so I don't believe this could be related to the instance serving too much traffic (at least too much of my traffic).
Further testing shows that this happens about once every 60 to 90 minutes, even if I send one request every minute trying to keep the APIM instance "alive".
An HTTP 500 (Internal Server Error) status code indicates that the server encountered an unexpected condition that prevented it from fulfilling the request (possibly due to a large payload). There is no issue at the APIM level. Analyze the APIM inspector trace and you should see the HTTP 500 status code under the 'forward-request' response attribute.
You need to understand who is throwing these HTTP 404 and 500 responses: APIM, or the backend SOAP API. The best way to get that answer is to collect an APIM inspector trace to inspect the request and response. Debug your APIs using request tracing
The Consumption tier exposes serverless properties. It runs on a shared infrastructure, can scale down to zero in times of no traffic and is billed per execution. Connections are pooled and reused unless explicitly closed by the back end. Api management service limits
1. This pattern of symptoms is also often known to occur due to source network address translation (SNAT) port limits with your APIM service.
Whenever a client calls one of your APIM APIs, Azure API Management service opens a SNAT port to access your backend API. Azure uses SNAT and a Load Balancer (not exposed to customers) to communicate with end points outside Azure in the public IP address space, as well as end points internal to Azure that aren't using Virtual Network service endpoints. (This situation is only applicable to backend APIs exposed on public IPs.)
Each instance of API Management service is initially given a pre-allocated number of SNAT ports. That limit affects opening connections to the same host and port combination. SNAT ports are used up when you have repeated calls to the same address and port combination. Once a SNAT port has been released, the port is available for reuse as needed. The Azure Network load balancer reclaims SNAT ports from closed connections only after waiting four minutes.
A rapid succession of client requests to your APIs may exhaust the pre-allocated quota of SNAT ports if these ports are not closed and recycled fast enough, preventing your APIM service from processing client requests in a timely manner.
The following strategies can be considered:
Use multiple IPs for your backend URLs
Place your APIM and backend service in the same VNet
Place your APIM in a virtual network and route outbound calls to Azure Firewall
Consider response caching and other backend performance tuning (configuring certain APIs with response caching to reduce latency between client applications calling your API and the load on your APIM backend).
Consider implementing access restriction policies (a policy can be used to prevent API usage spikes on a per-key basis by limiting the call rate per a specified time period).
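The last two strategies map directly to APIM policies. A sketch with illustrative values (the cache duration, call limit, renewal period, and counter key are assumptions to adapt to your API):

```xml
<policies>
  <inbound>
    <base />
    <!-- Serve a cached response when one is available. -->
    <cache-lookup vary-by-developer="false" vary-by-developer-groups="false" />
    <!-- Illustrative limit: 100 calls per 60 seconds per subscription key. -->
    <rate-limit-by-key calls="100" renewal-period="60"
        counter-key="@(context.Subscription.Id)" />
  </inbound>
  <backend>
    <base />
  </backend>
  <outbound>
    <base />
    <!-- Cache successful responses for 5 minutes (illustrative). -->
    <cache-store duration="300" />
  </outbound>
</policies>
```

Note that rate-limit-by-key is not available in the consumption tier's built-in cache-less scenarios without an external cache configured; check the tier's policy support before relying on it.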
2. The forward-request policy forwards the incoming request to the backend service specified in the request context. The backend service URL is specified in the API settings and can be changed using the set-backend-service policy.
Policy statement:
<forward-request timeout="time in seconds" follow-redirects="false | true" buffer-request-body="false | true" buffer-response="true | false" fail-on-error-status-code="false | true"/>
Example:
The following API level policy forwards all API requests to the backend service with a timeout interval of 60 seconds.
<!-- api level -->
<policies>
<inbound>
<base/>
</inbound>
<backend>
<forward-request timeout="60"/>
</backend>
<outbound>
<base/>
</outbound>
</policies>
Attribute: timeout="integer"
Description: The amount of time in seconds to wait for the HTTP response headers to be returned by the backend service before a timeout error is raised. Minimum value is 0 seconds. Values greater than 240 seconds may not be honored, as the underlying network infrastructure can drop idle connections after this time.
Required: No
Default: None
This policy can be used in the following policy sections and scopes.
Policy sections: backend
Policy scopes: all scopes
Check out similar feedback for your reference. Also, refer here for detailed troubleshooting of 500 errors in APIM.

Azure App Service Deployment Slot - Application Gateway

Working on a project where we are starting to use Deployment Slots in our App Services.
All our Prod apps are located behind Application Gateway, and we would like to also have our Slots located behind Application Gateway.
I understand we cannot do this using "App Services" as the target type in the backend pools as of now, but wondered if it is doable using "IP Address or FQDN" as the target type.
I have tried to set it up with various changes in the HTTP settings, probe and so on, but haven't gotten it up and running.
Can anyone confirm if this is possible, and have any tips on how this should be configured?
Thanks!
I was able to get this working on one of my slots.
Basically, set up the listener with your necessary protocol, port, cert, hostname, etc. I'm using multi-site listeners so I can have multiple URLs for the one AppGW/public IP.
The rule points to the listener, the backend pool and the appropriate HTTP setting.
The HTTP setting should be configured to connect to your App Service URL accordingly. I'm using the azurewebsites.net URL, so I use the well-known CA certificate option and override the hostname from the backend target.
The backend pool then points to the azurewebsites.net URL.
Make sure that GET / works on your app service and returns a 200-399 HTTP status code; anything outside that range counts as a failure and the backend will be removed from the pool. You may need to create a custom health probe to a URL that responds properly, or adjust the acceptable HTTP status codes (if you get a 401 or 403 due to required auth, just add that code for testing purposes for now).
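A custom probe along those lines can be declared in an ARM template. This is a sketch: the probe name and the interval/timeout/threshold values are illustrative, and the extra "401" entry shows how to tolerate an auth-protected backend while testing:

```json
{
  "name": "slot-health-probe",
  "properties": {
    "protocol": "Https",
    "path": "/",
    "interval": 30,
    "timeout": 30,
    "unhealthyThreshold": 3,
    "pickHostNameFromBackendHttpSettings": true,
    "match": {
      "statusCodes": [ "200-399", "401" ]
    }
  }
}
```

Setting pickHostNameFromBackendHttpSettings to true makes the probe reuse the overridden azurewebsites.net hostname from the HTTP setting, which matters for App Service backends.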
I'm trying to do it again with a second slot and running into 502 errors from the App Gateway... However, I'm also waiting on DNS changes from my network team. My first one with my company domain works via hosts file edit, but the 2nd slot (which has 2 different URLs/listeners configured in the AppGW) doesn't want to work the same way for some reason.

Azure Application Gateway Url based routing does not work

I'm configuring Azure Application Gateway Url based routing for my two back-end pools and it is not working.
My default routing configuration is pointing to b1 end point and it is reachable by blabla.cloudapp.azure.com
When I add the additional route path /b1/*, I cannot access my backend pool via blabla.cloudapp.azure.com/b1/; I get a 404 page not found response.
Can anyone please help me to understand what is wrong with my configuration?
Azure Application Gateway URL Based routing will route different requests to different groups of servers (backend pools) based upon the URL of the request. Once the request is sent to a VM, it is received and treated like a normal web request. If the URL you are accessing is a Valid web URL for the web server that is receiving the request, then it will return a proper response.
The fact that you are getting a 404 error means that your web servers are receiving the request but not finding anything at the requested location. One way you can troubleshoot this is to log onto the VM that should be receiving the request and try the request in a browser, replacing blabla.cloudapp.azure.com/ with localhost/.
In the example you posted, you would need a folder from within your web directory called "b1" for the URL you specified to be a valid request.
You can use a Path-Based Rule to specify the default backend pool, as well as specific URL paths that should be sent to other backend pools. Here is an example of how to configure an Azure Application Gateway with URL based routing in the Portal.
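In ARM template terms, such a path-based rule is a urlPathMap. The sketch below shows the shape of the configuration; the names are illustrative and the resource IDs are placeholders you would fill in with references to your own pools and HTTP settings:

```json
"urlPathMaps": [
  {
    "name": "b1-path-map",
    "properties": {
      "defaultBackendAddressPool": { "id": "<default-pool-resource-id>" },
      "defaultBackendHttpSettings": { "id": "<http-settings-resource-id>" },
      "pathRules": [
        {
          "name": "b1-rule",
          "properties": {
            "paths": [ "/b1/*" ],
            "backendAddressPool": { "id": "<b1-pool-resource-id>" },
            "backendHttpSettings": { "id": "<http-settings-resource-id>" }
          }
        }
      ]
    }
  }
]
```

Keep in mind the point from the answer above: the /b1/* path is forwarded to the backend as-is, so the server in the b1 pool must actually serve content under /b1/ unless you also rewrite the path.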
