We have set up an Azure Application Gateway of tier WAF v2 (so it would be zone-redundant). It has a backend pool containing two Web Apps (App Services), intended as a primary and a secondary.
The idea was to use the gateway like a priority-based Traffic Manager: route to the primary Web App normally, and only route to the secondary Web App if the primary goes down.
The problem is that the only way I found to do this is to order the rules associated with the listeners of the backend pool (because I believe Azure prioritizes them in the order they are listed). But given that both apps are in the same backend pool, I'm unsure how to do that.
So now the gateway randomly routes to either the first or second WebApp.
Any advice or suggestions would be much appreciated,
Thank you
Note: We have also tried placing a Traffic Manager between the gateway and the Web Apps, but the gateway keeps connecting to the primary Web App even when it is down and its probe reports a health status of Unknown.
Application Gateway is a layer 7 load balancer, which means it works with web traffic only (HTTP/HTTPS/WebSocket). It supports capabilities such as SSL termination, cookie-based session affinity, and round-robin load balancing. As long as both endpoints are healthy, the gateway distributes incoming requests across them, which is why you see traffic going to either the first or the second Web App. See the Application Gateway FAQ. Application Gateway does not work like a priority-based Traffic Manager, which always sends requests to the primary web app unless the primary is unhealthy.
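To make the difference concrete, here is a minimal sketch (illustrative only, with hypothetical backend names; not Azure's actual implementation) of the two behaviours: round-robin across every healthy member of a pool, which is what Application Gateway does, versus priority failover, which is what Traffic Manager's priority routing does.

```python
from itertools import cycle

backends = ["primary-webapp", "secondary-webapp"]  # hypothetical backend names

def round_robin(healthy):
    """Application Gateway style: rotate across all healthy members of the pool."""
    return cycle([b for b in backends if b in healthy])

def priority_pick(healthy):
    """Traffic Manager priority style: always the first healthy endpoint in priority order."""
    return next((b for b in backends if b in healthy), None)

healthy = {"primary-webapp", "secondary-webapp"}
rr = round_robin(healthy)
print([next(rr) for _ in range(4)])         # alternates between both apps
print(priority_pick(healthy))               # 'primary-webapp' while it is healthy
print(priority_pick({"secondary-webapp"}))  # fails over only when the primary is down
```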
As for the Unknown health status, the most common reason is that access to the backend is being blocked by an NSG or by custom DNS. Ref: Troubleshooting bad gateway errors in Application Gateway
Related
We have set up Azure Front Door (AFD) with an Azure Load Balancer (ALB) behind it, as recommended by the decision tree approach found here --> Decision tree for load balancing in Azure
We have configured everything and it is working from a resolution perspective; the websites are being served.
We are struggling to configure session affinity: the backend websites and applications use ASP.NET MVC 5.0 and rely on session state, so once a user requests the application, each subsequent request should be routed to the same backend.
We have enabled session affinity on AFD and we can see the cookies being set and staying constant between requests, but we assume that because the ALB is a layer 4 load balancer it does not respect cookies, which is why session affinity is lost and, in some scenarios, a request is sent to a different backend. This means the Session variable is no longer available and the user is logged out.
We have also enabled Client IP & Protocol affinity on the ALB, but this does not seem to help; again, the assumption is that each request that comes through arrives with a new private IP and/or port.
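To illustrate that assumption (a rough sketch with made-up addresses, not Azure's actual hashing algorithm): a layer 4 load balancer picks a backend from a hash over connection-tuple fields, so cookies never enter the decision, and if the source IP or port the ALB sees changes between connections, the chosen backend can change too.

```python
import hashlib

BACKENDS = ["10.0.1.4", "10.0.1.5"]  # hypothetical backend private IPs

def pick_backend(src_ip, src_port, dst_ip, dst_port, proto):
    """Rough illustration of 5-tuple hash distribution (not Azure's actual algorithm)."""
    key = f"{src_ip}:{src_port}:{dst_ip}:{dst_port}:{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return BACKENDS[digest % len(BACKENDS)]

# Same client, but a different source port on each connection from the upstream hop:
print(pick_backend("10.0.0.10", 50001, "10.0.2.6", 443, "tcp"))
print(pick_backend("10.0.0.10", 50002, "10.0.2.6", 443, "tcp"))  # may land on the other backend
```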
We are using Azure Private Link and private IPs between these services to ensure our VNET has no internet-facing IP and is not reachable without our VPN.
We have thought of other solutions, such as replacing the ALB with Azure Application Gateway since it is also layer 7, but that requires a public IP, which we are trying to get away from.
Any ideas on how to get this right?
What's a good native Azure service that I can use for active/passive load balancing on VMs with private endpoints? The application on these servers will cause issues if more than one node is active. The VMs are in availability zones and are connected via private endpoints only. We need connectivity to TCP ports beyond just port 443.
Thank you
What's a good native Azure service that I can use for active/passive load balancing on VMs with private endpoints?
You can use Azure Traffic Manager with Private Endpoints for load balancing the Azure VMs.
If you are using Azure Traffic Manager, keep in mind that the health monitoring feature is not available for Azure Traffic Manager with private endpoints.
Understanding Traffic Manager probes
Traffic Manager considers an endpoint to be ONLINE only when the probe receives an HTTP 200 response back from the probe path. If your application returns any other HTTP response code, you should add that response code to the Expected status code ranges of your Traffic Manager profile.
A 30x redirect response is treated as a failure unless you have specified it as a valid response code in the Expected status code ranges of your Traffic Manager profile. Traffic Manager does not probe the redirection target.
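As a minimal sketch of how that expected-status-code check behaves (illustrative only; the ranges below are made up, not your profile's actual settings):

```python
def is_online(status_code, expected_ranges=((200, 200),)):
    """The endpoint counts as ONLINE only if the probe's status code falls in a configured range."""
    return any(lo <= status_code <= hi for lo, hi in expected_ranges)

print(is_online(200))                             # True  - default behaviour
print(is_online(302))                             # False - a 30x redirect counts as a failure
print(is_online(302, ((200, 200), (300, 399))))   # True  - only if you add 3xx to the expected ranges
```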
For HTTPS probes, certificate errors are ignored.
The actual content of the probe path doesn't matter, as long as a 200 is returned. Probing a URL to some static content like "/favicon.ico" is a common technique. Dynamic content, like the ASP pages, may not always return 200, even when the application is healthy.
A best practice is to set the probe path to something that has enough logic to determine whether the site is up or down. In the previous example, by setting the path to "/favicon.ico", you are only testing that w3wp.exe is responding. This probe may not indicate that your web application is healthy. A better option would be to set the path to something such as "/Probe.aspx" that has logic to determine the health of the site. For example, you could check performance counters such as CPU utilization or measure the number of failed requests. Or you could attempt to access database resources or session state to make sure that the web application is working.
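A minimal sketch of such a probe endpoint in Python (the ASP.NET equivalent would live in /Probe.aspx; check_database and check_session_store below are hypothetical placeholders for whatever dependencies your application actually needs):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def check_database():
    # Hypothetical: run a cheap query against the application's database.
    return True

def check_session_store():
    # Hypothetical: read/write a throwaway key in the session state store.
    return True

class ProbeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/probe":
            self.send_response(404)
            self.end_headers()
            return
        healthy = check_database() and check_session_store()
        # Return 200 only when the dependencies the app needs are reachable, so the
        # probe reflects application health rather than just "the process is running".
        self.send_response(200 if healthy else 503)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ProbeHandler).serve_forever()
```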
If all endpoints in a profile are degraded, then Traffic Manager treats all endpoints as healthy and routes traffic to all endpoints. This behavior ensures that problems with the probing mechanism do not result in a complete outage of your service.
Alternatively, you can use Azure Front Door Premium, since it supports routing traffic to Private Link; in that setup you use the Application Gateway / Load Balancer private IPs as the backends and Front Door for the routing method.
I have been trying to add multiple backend targets (multiple IP addresses) to a backend pool in Azure Application Gateway, so that requests are routed to any one of the servers.
Once I add two servers to the pool, requests return a 500 Internal Server Error, but it works when only one instance is in the backend pool.
The servers I have added are the IP addresses of two VMs. In Backend health, both servers show as healthy.
What could be the issue?
Go to the backend settings and try enabling cookie-based affinity.
We are planning to host a web app on two web servers in two different Azure regions, and I am planning to use either Traffic Manager or Azure Front Door for load balancing.
We want to distribute traffic based on priority, so that if the app in one region goes down, the load balancer can shift traffic to the other instance.
Suppose I have one instance hosted in Central US and the other in Europe, and I am using a Traffic Manager or Front Door in the India region.
I want to set the Central US instance as primary and Europe as secondary, so the load balancer routes traffic to Central US and fails over to Europe in a disaster.
What happens when a user is connected to the Central US region and it goes down? How does the load balancer handle session management? Is it handled automatically by the load balancer, or is any configuration needed?
I do not want to go with Azure Front Door sticky sessions, as I want to use priority-based routing.
Since Traffic Manager works at the DNS level, can I use it for my use case?
Yes, the Priority traffic-routing method of Azure Traffic Manager does exactly what you need in this scenario.
Select Priority when you want to use a primary service endpoint for all traffic, and provide backups in case the primary or the backup endpoints are unavailable.
By default, Traffic Manager sends all traffic to the primary (highest-priority) endpoint. If the primary endpoint is not available, Traffic Manager routes the traffic to the second endpoint. If both the primary and secondary endpoints are not available, the traffic goes to the third, and so on. Availability of the endpoint is based on the configured status (enabled or disabled) and the ongoing endpoint monitoring.
Reference: Tutorial: Configure priority traffic routing method in Traffic Manager
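A minimal sketch of the failover behaviour described above (illustrative only, with made-up endpoint names; not how Traffic Manager is actually implemented):

```python
# Endpoints ordered by priority (lower number = higher priority), matching the scenario above.
endpoints = [
    {"name": "centralus-app", "priority": 1, "enabled": True},
    {"name": "europe-app",    "priority": 2, "enabled": True},
]

def resolve(endpoints, healthy):
    """Return the endpoint that DNS queries would be pointed at."""
    candidates = sorted((e for e in endpoints if e["enabled"]), key=lambda e: e["priority"])
    for e in candidates:
        if e["name"] in healthy:
            return e["name"]
    # If no endpoint is healthy, Traffic Manager treats all endpoints as healthy
    # rather than returning nothing, so fall back to the highest-priority one.
    return candidates[0]["name"] if candidates else None

print(resolve(endpoints, {"centralus-app", "europe-app"}))  # centralus-app
print(resolve(endpoints, {"europe-app"}))                   # europe-app (failover)
```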
Update
Since Azure Traffic Manager works at the DNS layer, it has no way to track individual clients and cannot implement 'sticky' sessions. If you still insist on using it with sticky sessions, you will need extra configuration on your web apps.
So, in this case, Azure Front Door is the better recommendation for sticky sessions, and it also supports priority-based traffic routing.
Each backend in your backend pool of the Front Door configuration has a property called 'Priority', which can be a number between 1 and 5. With Azure Front Door, you configure the backend priority explicitly using this property for each backend. This property is a value between 1 and 5. Lower values represent a higher priority. Backends can share priority values.
When you add a backend web app to the backend pool, you just need to specify the priority in the Azure Front Door UI.
I have two VMs that are part of a Kubernetes cluster. I have a single service that is exposed as a NodePort (30001). I am able to reach this service on port 30001 via curl on each of these VMs. When I create an Azure Application Gateway, the gateway is not directing traffic to these VMs.
I've followed the steps for setting up the application gateway as listed in the Azure documentation.
I constantly get a 502 from the gateway.
In order for the Azure Application Gateway to redirect or route traffic to the NodePort, you need to add the backend servers to the backend pool inside the Application Gateway.
There are options to choose Virtual Machines as well.
A good tutorial explaining how to configure an application gateway in Azure and direct web traffic to the backend pool is:
https://learn.microsoft.com/en-us/azure/application-gateway/quick-create-portal
I hope this solves your problem.
So I finally ended up getting on a call with the support folks. It turned out that the UI in the Azure portal is slightly temperamental.
For the gateway to be able to determine which of your backends are healthy, it needs a health probe associated with the HTTP setting (the HTTP setting is what determines how traffic flows from the gateway to your backends).
Now, when you are configuring the HTTP setting, you need to select "Use custom probe", but when you do, it doesn't show the probe you have already created. Hence, I initially figured it wasn't required.
The trick is to first check the box below "Use custom probe" which reads "Pick host name from backend settings", and then click on the custom probe; your custom probe will show up and things will work.
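If you want to sanity-check this from a machine inside the VNet, a rough sketch like the following mimics what the probe effectively does: an HTTP GET against the NodePort with an explicit Host header, which is the part that "Pick host name from backend settings" changes. The IP, port, path, and host name below are hypothetical placeholders for your own values.

```python
import urllib.request

# Hypothetical values: one cluster node's private IP, the NodePort, a probe path,
# and the host name configured in the gateway's backend HTTP settings.
node_ip, node_port, path = "10.0.1.4", 30001, "/healthz"
backend_host_name = "myapp.internal.example.com"

req = urllib.request.Request(
    f"http://{node_ip}:{node_port}{path}",
    headers={"Host": backend_host_name},  # header the probe sends when picking the host name from backend settings
)
with urllib.request.urlopen(req, timeout=5) as resp:
    # By default, Application Gateway treats responses in the 200-399 range as healthy.
    print(resp.status)
```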