Azure gateway probe always unhealthy - azure

How do I debug the Azure gateway probe? I have a setup which Azure identifies as "unhealthy". The setup consists of one gateway with an HTTP setting that uses a custom probe. Behind it are two virtual machines.
When I test the probe path against each VM's IP directly, both respond. But the gateway identifies them as unhealthy.
Can I see a log somewhere that shows why they are unhealthy?
My response is a simple "OK" string with a 200 OK status code.

You could check this link.
View back-end health through PowerShell:
Get-AzureRmApplicationGatewayBackendHealth -Name ApplicationGateway1 -ResourceGroupName Contoso
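If you are on the current Az module (the AzureRM module above is deprecated), a minimal sketch of the equivalent looks like this; the gateway and resource group names are the same placeholders as above, and the property drill-down assumes the usual backend-health output shape:
# Show per-server health as seen by the gateway's probes
$health = Get-AzApplicationGatewayBackendHealth -Name "ApplicationGateway1" -ResourceGroupName "Contoso"
$health.BackendAddressPools.BackendHttpSettingsCollection.Servers |
    Format-Table Address, Health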

Related

Azure application gateway overwrites custom health probe

As Azure Application Gateway doesn't support HTTP basic authentication and throws a 502 error when I try to reach my app in Kubernetes, I've added custom settings to the health probe (treat a 401 error as healthy). Everything works fine, however my settings are being overwritten after some time. Any ideas what to do?
Are you using AKS with the Application Gateway Ingress Controller (AGIC)? AGIC is an AKS pod that manages ingress. When AKS + AGIC is configured, AGIC has full ownership over the Application Gateway. Meaning, when you make a change to the AppGW manually (such as an updated health probe status code), AGIC has the authority to remove the manual change and revert to the original config.
To get around this, you need to use AGIC to configure the health probe's acceptable status codes using annotations. See this Microsoft doc. The doc doesn't have the health probe status code annotation listed, but I think you can use:
appgw.ingress.kubernetes.io/health-probe-status-codes: "200-401"
If you are NOT using AKS + AGIC, there is more than likely some other automation overwriting your changes to the health probe. I'd recommend reviewing the Application Gateway activity log to see what or who is reverting your change. More info on reviewing activity logs is in this Microsoft doc.
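If you prefer to check the activity log from PowerShell rather than the portal, a minimal sketch (the resource ID is a placeholder for your own gateway's):
# List who/what wrote to the Application Gateway in the last day
$appGwId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/applicationGateways/<appgw-name>"
Get-AzActivityLog -ResourceId $appGwId -StartTime (Get-Date).AddDays(-1) |
    Select-Object EventTimestamp, Caller, OperationName, Status |
    Format-Table -AutoSize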

Azure Application Gateway API Management probe cannot connect to back end

I hope somebody can help me understand what I am doing wrong here, because I am totally confused and lost.
I am trying to build an API Management instance in internal mode, with an application gateway in front of it.
Following the Microsoft documentation, I built the following resources:
API Management
Application Gateway
Virtual network
In the virtual network I set up 2 subnets (application gateway and API Management)
2 network security groups, one for each resource
As per the documentation and general advice I found online, I created a Key Vault and generated a certificate. In the subject I set this CN:
api.test.com
I assigned a managed identity to this Key Vault.
After this step I created an API Management service, and the only API inside is
/configurations
Once this was done, in the Network tab I set the API Management to internal mode and selected my virtual network and the subnet I designated for this service. So far everything went smoothly. When the update completed, I set the custom domain in the API Management.
In the Custom Domains tab I added a new domain; as the hostname I set the same CN I used in the Key Vault certificate
api.test.com
and selected the Key Vault from which it has to fetch the right cert.
Everything is done here.
I created the application gateway in the designated virtual network and subnet.
The first thing I did was set the backend pool to the gateway URL of the API Management
api.test.com
I set an HTTP setting over the HTTPS protocol on port 443 as follows.
Still in the application gateway, I set the listener on port 443 and selected my certificate from the Key Vault.
In the rules I connected the listener to the backend pool as the backend target.
At this point, when I test the probe,
I get the following error:
Cannot connect to backend server. Check whether any NSG/UDR/Firewall is blocking access to the server. Check if application is running on correct port.
I checked both my network security groups, which are set as follows:
this is the NSG for the APIM
and this for the application gateway
Can anyone please help me understand what I am doing wrong here? Because I no longer have any clue what the issue could be.
And please, if you need any more info, don't hesitate to let me know. If it helps, I can post my Terraform script to deploy this infra here.
You said: "the first thing I did was set the backend pool to the gateway URL of the API Management, api.test.com".
That URL is inaccessible and points to a public IP (which I guess should be the app gateway's IP).
The backend pool should be the private (internal load balancer) IP address of the API Management service. The listener should listen on that host name, so that when it receives a request with the host name it forwards it to the API Management service through its private IP, with a host header holding the same listener host name as its value. Something like this:
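Roughly, in PowerShell (a sketch only; the gateway, pool and setting names and the 10.0.1.5 private VIP are placeholders for your own values):
$appgw = Get-AzApplicationGateway -Name "appgw" -ResourceGroupName "rg"
# Point the backend pool at APIM's private VIP, not the public gateway URL
Set-AzApplicationGatewayBackendAddressPool -ApplicationGateway $appgw `
    -Name "apim-backend" -BackendIPAddresses "10.0.1.5"
# HTTPS setting overrides the host header so APIM can match the custom domain
Set-AzApplicationGatewayBackendHttpSetting -ApplicationGateway $appgw `
    -Name "apim-https" -Port 443 -Protocol Https `
    -CookieBasedAffinity Disabled -HostName "api.test.com"
Set-AzApplicationGateway -ApplicationGateway $appgw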

Azure App Gateway with Internal API Management 503 backend server error

I am following this doc series to set up an internal API Management instance integrated with App Gateway in Azure. I followed everything to the letter:
Created a new resource group
Set up a VNet with 3 subnets
Set up a private DNS zone and linked the VNet
Created self-signed certificates to be used with the DNS names created in the private DNS zone
Created an API Management instance and added custom domains
Created an App Gateway with a public IP, set up routing rules and backends, and set up health probes with path /status-0123456789abcdef for APIM
But now I am getting this backend health error as below:
Can someone tell me what I am doing wrong?
Are there any security groups to be configured? I am using internal mode for the APIM, and when I try to test the default API (the echo test) it gives the below error:
Why is this not working? If you need any more information, let me know and I will update the question. Can someone please help me?
I had a similar situation which was driving me insane; I must have changed everything I possibly could. The answer was to create a custom health probe: at the very bottom of the HTTP settings there is an option to use the custom probe. In PowerShell it would look roughly like this:
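(This is only a sketch; the gateway, setting and host names are placeholders, and /status-0123456789abcdef is APIM's built-in health endpoint mentioned in the question.)
$appgw = Get-AzApplicationGateway -Name "appgw" -ResourceGroupName "rg"
# Custom probe against APIM's built-in health endpoint
Add-AzApplicationGatewayProbeConfig -ApplicationGateway $appgw `
    -Name "apim-probe" -Protocol Https -HostName "api.contoso.internal" `
    -Path "/status-0123456789abcdef" -Interval 30 -Timeout 30 -UnhealthyThreshold 3
# Attach the probe to the HTTPS setting so it is actually used
$probe = Get-AzApplicationGatewayProbeConfig -ApplicationGateway $appgw -Name "apim-probe"
Set-AzApplicationGatewayBackendHttpSetting -ApplicationGateway $appgw `
    -Name "apim-https" -Port 443 -Protocol Https `
    -CookieBasedAffinity Disabled -Probe $probe
Set-AzApplicationGateway -ApplicationGateway $appgw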
Since the Gateway URL is not registered on the public DNS, the test console available on the Azure portal will not work for Internal VNET deployed service. Instead, use the test console provided on the Developer portal.
You can find more details here.

Azure server which executes APIM Test

I have a URL which is only accessible from the internal network. Due to a certain business requirement, this URL has to be accessible from Azure APIM. The way I call the endpoint is as per the screenshot below, which I got from the Microsoft docs: https://learn.microsoft.com/en-us/azure/api-management/mock-api-responses
However, I get the following error message because myprivatedomain.com is only accessible from the internal VPN. May I know how APIM executes the test API (i.e. what's the IP address, etc.)? Thanks.
HTTP/1.1 400 Bad Request
content-length: 85
content-type: application/json
vary: Origin
{
"error": "The remote name could not be resolved: 'myprivatedomain.com'"
}
When API Management deploys in internal virtual network mode, all the service endpoints are only visible within a virtual network that you control the access to. None of the service endpoints are registered on the public DNS server.
1. Enable a virtual network connection using the Azure portal.
2. Enable a virtual network connection by using PowerShell cmdlets (a usage sketch follows this answer):
Update-AzApiManagementRegion
-ApiManagement <PsApiManagement>
-Location <String>
-Sku <PsApiManagementSku>
-Capacity <Int32>
[-VirtualNetwork <PsApiManagementVirtualNetwork>]
[-DefaultProfile <IAzureContextContainer>]
[<CommonParameters>]
3. For internal virtual network mode, you have to manage your own DNS.
You can set up custom domain names for all your service endpoints.
Then you can create records in your DNS server to access the endpoints that are only accessible from within your virtual network.
For more details, you could refer to this article.
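As referenced in item 2 above, a usage sketch of those cmdlets, assuming an existing service named "my-apim" in resource group "my-rg" and a placeholder subnet resource ID:
$apim = Get-AzApiManagement -ResourceGroupName "my-rg" -Name "my-apim"
$vnet = New-AzApiManagementVirtualNetwork -SubnetResourceId "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Network/virtualNetworks/my-vnet/subnets/apim-subnet"
# Attach the VNet to the primary region, then switch the service to internal mode
$apim = Update-AzApiManagementRegion -ApiManagement $apim -Location $apim.Location `
    -Sku $apim.Sku -Capacity $apim.Capacity -VirtualNetwork $vnet
$apim.VpnType = "Internal"
Set-AzApiManagement -InputObject $apim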

Set kubernetes VM with nodeports as backend for application gateway

I have two VMs that are part of a kubernetes cluster. I have a single service that is exposed as NodePort (30001). I am able to reach this service on port 30001 through curl on each of these VMs. When I create an Azure application gateway, the gateway is not directing traffic to these VMs.
I've followed the steps for setting up the application gateway as listed in the Azure documentation.
I constantly get a 502 from the gateway.
In order for the Azure Application Gateway to redirect or route traffic to the NodePort, you need to add the backend servers to the backend pool inside the Azure Application Gateway.
There are options to choose virtual machines as well. For example:
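(A minimal sketch in PowerShell; the gateway name, pool name and node IPs are placeholders for your own values.)
$appgw = Get-AzApplicationGateway -Name "appgw" -ResourceGroupName "rg"
# Add the two Kubernetes node VMs (by private IP) to the backend pool
Set-AzApplicationGatewayBackendAddressPool -ApplicationGateway $appgw `
    -Name "k8s-nodes" -BackendIPAddresses "10.0.0.4", "10.0.0.5"
Set-AzApplicationGateway -ApplicationGateway $appgw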
A good tutorial explaining how to configure an application gateway in azure and direct web traffic to the backend pool is:
https://learn.microsoft.com/en-us/azure/application-gateway/quick-create-portal
I hope this solves your problem.
So I finally ended up getting on a call with the support folks. It turned out that the UI on Azure's portal is slightly temperamental.
For the gateway to be able to determine which of your backends are healthy it needs to have a health probe associated with the HTTP setting (the HTTP Setting is the one that determines how traffic from the gateway flows to your backends).
Now, when you are configuring the HTTP setting, you need to select "Use custom probe", but when you do that it doesn't show the probe that you have already created. Hence, I figured it wasn't required.
The trick is to first check the box below "Use custom probe" which reads "Pick hostname from backend settings", then click on custom probe; your custom probe will show up and things will work. Scripted, the equivalent would look roughly like this:
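(A sketch only; the gateway and setting names are placeholders, and port 30001 is the NodePort from the question.)
$appgw = Get-AzApplicationGateway -Name "appgw" -ResourceGroupName "rg"
# Probe that picks its host name from the HTTP setting it is attached to
Add-AzApplicationGatewayProbeConfig -ApplicationGateway $appgw `
    -Name "nodeport-probe" -Protocol Http -Path "/" `
    -Interval 30 -Timeout 30 -UnhealthyThreshold 3 `
    -PickHostNameFromBackendHttpSettings
# HTTP setting on the NodePort, wired to the custom probe
$probe = Get-AzApplicationGatewayProbeConfig -ApplicationGateway $appgw -Name "nodeport-probe"
Set-AzApplicationGatewayBackendHttpSetting -ApplicationGateway $appgw `
    -Name "nodeport-http" -Port 30001 -Protocol Http `
    -CookieBasedAffinity Disabled -Probe $probe -PickHostNameFromBackendAddress
Set-AzApplicationGateway -ApplicationGateway $appgw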
