Azure Application Gateway overwrites custom health probe

As Azure Application Gateway doesn't support HTTP basic authentication and returns a 502 error when I try to reach my app in kubernetes, I've added custom settings to the health probe (treat a 401 response as healthy). Everything works fine; however, my settings are being overwritten after some time. Any ideas what to do?

Are you using AKS with the Application Gateway Ingress Controller (AGIC)? AGIC is an AKS pod that manages ingress. When AKS + AGIC is configured, AGIC takes full ownership of the Application Gateway, meaning that when you make a manual change to the AppGW (such as an updated health probe status code), AGIC has the authority to remove the manual change and revert to the original config.
To get around this, you need to use AGIC to configure the health probe's acceptable status codes using annotations. See this Microsoft doc. The doc doesn't list a health probe status code annotation, but I think you can use:
appgw.ingress.kubernetes.io/health-probe-status-codes: "200-401"
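A minimal Ingress sketch of what that might look like (the ingress name and backend service are placeholders, and the annotation itself is an assumption since it isn't in the linked doc):

```shell
# Hypothetical example: put the status-code annotation on the ingress so that
# AGIC itself programs the probe to accept 200-401 (names are placeholders).
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/health-probe-status-codes: "200-401"
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app   # placeholder service
                port:
                  number: 80
EOF
```

Because AGIC reads the annotation from the ingress, it will re-apply the setting itself on every sync instead of reverting it.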
If you are NOT using AKS + AGIC, there is more than likely some other automation overwriting your changes to the health probe. I'd recommend reviewing the Application Gateway Activity Log to see what or who is reverting your change. More info on reviewing activity logs is in this Microsoft doc.
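As a sketch of that Activity Log review (the resource group name is a placeholder), you can pull recent write operations with the Azure CLI:

```shell
# List who/what issued write operations against the Application Gateway
# in the last 24 hours (resource group name is a placeholder).
az monitor activity-log list \
  --resource-group myResourceGroup \
  --offset 24h \
  --query "[?contains(resourceId, 'applicationGateways')].{time:eventTimestamp, operation:operationName.value, caller:caller}" \
  --output table
```

The `caller` column will usually point at the service principal or user whose automation is reverting the probe.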

Related

Azure Application Gateway pointing to Azure CDN?

Does anyone have experience with pointing an Application Gateway to an Azure CDN? We can add internal and public services to our App Gateway, but as soon as we point to an Azure CDN, problems start to arise.
Here is what we are seeing:
When adding a health probe pointing to the CDN it says everything is fine and it got a 200 reply.
When using the Connection Troubleshoot tool and pointing it to the CDN, it says it is reachable.
However, when checking the Backend Health, it says
The backend health status could not be retrieved. This happens when an
NSG/UDR/Firewall on the application gateway subnet is blocking traffic
on ports 65503-65534 in case of v1 SKU, and ports 65200-65535 in case
of the v2 SKU or if the FQDN configured in the backend pool could not
be resolved to an IP address. To learn more visit -
https://aka.ms/UnknownBackendHealth.
For one, the backend health is not "unknown" but "unhealthy", and the tips there are not really useful. We don't have a blocking firewall and there are no NSGs.
And to make it even more confusing, it actually appears that the endpoint is functioning when accessing the App Gateway listener. But it is a bit sporadic and sometimes it doesn't work.
Any suggestion on how to debug this, as the tools available seem to indicate everything is fine until it is configured and the Backend Health says it is not?
Update:
It does in fact work if we use an IP from the CDN directly. This could indicate a DNS issue; however, our DNS log does show that the App Gateway resolves the DNS name.

Azure Traffic Manager Custom Header Settings for Application Gateway with Multiple Sites

I'm facing some issues getting the correct custom header in Traffic Manager to check health for multiple sites behind an Application Gateway. These applications are on a single listener in the Application Gateway.
No matter the header variations I am using, I am still getting the "Degraded" status on health monitoring.
Let's say my applications are as such: app1.example.com, app2.example.com
What would be the correct custom header settings in Traffic Manager? I was thinking such as below.
host:app1.example.com,host:app2.example.com
Thank you for your time.
Thank you Morariu for your comment. I am converting your comment to an answer to help other community members.
Fixed the monitoring "Degraded" status by allowing the Traffic Manager health checks in the NSG for the Application Gateway subnet.
Reference: https://learn.microsoft.com/en-us/azure/application-gateway/application-gateway-probe-overview
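A CLI sketch of that NSG fix (all resource names are placeholders), using the AzureTrafficManager service tag as the probe source:

```shell
# Allow Traffic Manager health probes into the Application Gateway subnet
# (NSG and resource group names are placeholders; adjust the port to match
# your monitoring protocol).
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name appgw-subnet-nsg \
  --name AllowTrafficManagerProbes \
  --priority 200 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes AzureTrafficManager \
  --destination-port-ranges 443
```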
For adding multiple hosts in the custom header:
Reference: https://learn.microsoft.com/en-us/azure/traffic-manager/traffic-manager-monitoring
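For the multiple-host case, one sketch is to set the Host header per endpoint rather than comma-joining both hosts in the profile (profile and endpoint names are placeholders; `--custom-headers` takes key=value pairs, and endpoint-level headers override the profile-level setting):

```shell
# Give each Traffic Manager endpoint its own Host header
# (profile and endpoint names are placeholders).
az network traffic-manager endpoint update \
  --resource-group myResourceGroup \
  --profile-name myTmProfile \
  --type azureEndpoints \
  --name app1-endpoint \
  --custom-headers host=app1.example.com

az network traffic-manager endpoint update \
  --resource-group myResourceGroup \
  --profile-name myTmProfile \
  --type azureEndpoints \
  --name app2-endpoint \
  --custom-headers host=app2.example.com
```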

Azure App Gateway with Internal API Management 503 backend server error

I am following this doc series to set up an internal API Management instance integrated with an App Gateway in Azure. I followed everything to the letter:
Created a new resource group
Set up a VNet with 3 subnets
Set up a private DNS zone and linked the VNet
Created self-signed certificates to be used with the DNS names created in the private DNS zone
Created an API Management instance and added custom domains
Created an App Gateway with a public IP, set up routing rules and backends, and set up health probes with the path /status-0123456789abcdef for APIM
But now I am getting this backend health error as below:
Can someone tell me what I am doing wrong?
Are there any security groups to be configured? I am using internal mode for the APIM, and when I try to test the default API (which is an echo test) it gives the below error:
Why is this not working? If you need any more information, let me know and I will update the question. Can someone please help me?
I had a similar situation which was driving me insane. I must have changed everything I possibly could. The answer was to create a custom health probe; at the very bottom of the HTTP settings there is an option to use the custom probe.
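A CLI sketch of that fix (gateway name, host, and setting names are placeholders): create a probe against the APIM status endpoint, then point the HTTP settings at it:

```shell
# Create a custom probe for the APIM status endpoint, then attach it to the
# HTTP settings so backend health uses it (all names and hosts are placeholders).
az network application-gateway probe create \
  --resource-group myResourceGroup \
  --gateway-name myAppGateway \
  --name apim-probe \
  --protocol Https \
  --host api.contoso.internal \
  --path /status-0123456789abcdef \
  --interval 30 --timeout 30 --threshold 3

az network application-gateway http-settings update \
  --resource-group myResourceGroup \
  --gateway-name myAppGateway \
  --name apim-settings \
  --probe apim-probe
```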
Since the Gateway URL is not registered on the public DNS, the test console available on the Azure portal will not work for an internal VNet-deployed service. Instead, use the test console provided on the Developer portal.
You can find more details here.

Set kubernetes VM with nodeports as backend for application gateway

I have two VMs that are part of a kubernetes cluster. I have a single service that is exposed as NodePort (30001). I am able to reach this service on port 30001 through curl on each of these VMs. When I create an Azure application gateway, the gateway is not directing traffic to these VMs.
I've followed the steps for setting up the application gateway as listed in the Azure documentation.
I constantly get a 502 from the gateway.
In order for the Azure Application Gateway to redirect or route traffic to the NodePort you need to add the Backend servers to the backend pool inside the Azure Application Gateway.
There are options to choose Virtual Machines as well.
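For example (node IPs and resource names are placeholders), add the node IPs to a backend pool and create HTTP settings that target the NodePort:

```shell
# Add the kubernetes node IPs to the backend pool and create HTTP settings
# targeting NodePort 30001 (IPs and names are placeholders).
az network application-gateway address-pool create \
  --resource-group myResourceGroup \
  --gateway-name myAppGateway \
  --name k8s-nodes \
  --servers 10.0.0.4 10.0.0.5

az network application-gateway http-settings create \
  --resource-group myResourceGroup \
  --gateway-name myAppGateway \
  --name nodeport-settings \
  --port 30001 \
  --protocol Http
```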
A good tutorial explaining how to configure an application gateway in azure and direct web traffic to the backend pool is:
https://learn.microsoft.com/en-us/azure/application-gateway/quick-create-portal
I hope this solves your problem.
So I finally ended up getting on a call with the support folks. It turned out that the UI on Azure's portal is slightly temperamental.
For the gateway to be able to determine which of your backends are healthy it needs to have a health probe associated with the HTTP setting (the HTTP Setting is the one that determines how traffic from the gateway flows to your backends).
Now, when you are configuring the HTTP setting, you need to select "Use Custom Probe", but when you do, it doesn't show the probe that you have already created. Hence, I figured it wasn't required.
The trick is to first check the box below "Use Custom Probe" which reads "Pick host name from backend settings", and then click on custom probe, and your custom probe will show up and things will work.
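The same portal steps can be sketched with the CLI (all names and the probe path are placeholders): a probe that takes its Host header from the HTTP settings, and HTTP settings wired to that probe:

```shell
# Probe that picks its Host header from the HTTP settings, plus HTTP settings
# that reference the probe with "pick host name from backend" enabled
# (all names are placeholders).
az network application-gateway probe create \
  --resource-group myResourceGroup \
  --gateway-name myAppGateway \
  --name nodeport-probe \
  --protocol Http \
  --path / \
  --host-name-from-http-settings true

az network application-gateway http-settings update \
  --resource-group myResourceGroup \
  --gateway-name myAppGateway \
  --name nodeport-settings \
  --probe nodeport-probe \
  --host-name-from-backend-pool true
```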

Configuring an AKS load balancer for HTTPS access

I'm porting an application that was originally developed for the AWS Fargate container service to AKS under Azure. In the AWS implementation an application load balancer is created and placed in front of the UI microservice. This load balancer is configured to use a signed certificate, allowing https access to our back-end.
I've done some searches on this subject and how something similar could be configured in AKS. I've found a lot of different answers to this for a variety of similar questions but none that are exactly what I'm looking for. From what I gather, there is no exact equivalent to the AWS approach in Azure. One thing that's different in the AWS solution is that you create an application load balancer upfront and configure it to use a certificate and then configure an https listener for the back-end UI microservice.
In the Azure case, when you issue the "az aks create" command the load balancer is created automatically. There doesn't seem to be a way to do much configuration, especially as it relates to certificates. My impression is that the default load balancer that is created by AKS is ultimately not the mechanism to use for this. Another option might be an application gateway, as described here. I'm not sure how to adapt this discussion to AKS. The UI pod needs to be the ultimate target of any traffic coming through the application gateway, but the gateway uses a different subnet than what is used for the pods in the AKS cluster.
So I'm not sure how to proceed. My question is: Is the application gateway the correct solution to providing https access to a UI running in an AKS cluster or is there another approach I need to use?
You are right, the default Load Balancer created by AKS is a Layer 4 LB and doesn't support SSL offloading. The equivalent of the AWS Application Load Balancer in Azure is the Application Gateway. As of now there is no option in AKS which allows you to choose the Application Gateway instead of a classic load balancer, but like alev said, there is an ongoing project, still in preview, which will allow you to deploy a special ingress controller that drives the routing rules on an external Application Gateway based on your ingress rules. If you really need something that is production ready, here are your options:
Deploy an Ingress controller like NGINX, Traefik, etc. and use cert-manager to generate your certificate.
Create an Application Gateway and manage your own routing rule that will point to the default layer 4 LB (k8s LoadBalancer service or via the ingress controller)
We implemented something similar lately, and we decided to manage our own Application Gateway because we wanted to do the SSL offloading outside the cluster and because we needed the WAF feature of the Application Gateway. We were able to automatically manage the routing rules inside our deployment pipeline. We will probably switch to the Application Gateway ingress project when it is production ready.
Certificate issuing and renewal are not handled by the ingress, but using cert-manager you can easily add your own CA or use Let's Encrypt to automatically issue certificates when you annotate the ingress or service objects. The http_application_routing addon for AKS is perfectly capable of working with cert-manager; it can even be further configured using ConfigMaps (addon-http-application-routing-nginx-configuration in the kube-system namespace). You can also look at the initial support for Application Gateway as ingress here
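A minimal sketch of the cert-manager annotation approach (issuer, host, and service names are placeholders; this assumes cert-manager and an NGINX ingress controller are already installed in the cluster):

```shell
# Ingress that asks cert-manager for a certificate and terminates TLS at the
# ingress controller (issuer, host, and service names are placeholders).
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ui-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # placeholder issuer
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - ui.example.com            # placeholder host
      secretName: ui-tls            # cert-manager stores the issued cert here
  rules:
    - host: ui.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ui-service    # placeholder UI microservice
                port:
                  number: 80
EOF
```

On annotating the ingress, cert-manager creates the `ui-tls` secret and keeps the certificate renewed automatically.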
