Azure Kubernetes Service ingress and clusters per region - Azure

I have an application that consists of microservices which are dockerized and deployed on Azure Kubernetes Service in West Europe.
In order to reach the application, an NGINX ingress controller is created and the public endpoint is mapped to a custom domain.
For example, the public IP x.x.x.x is mapped to the domain testwebsite.com in Azure DNS.
The Ingress takes care of the routing to the microservices.
How do I translate this setup to multiple regions and still use the same DNS name?

An ingress controller is a piece of software that provides a reverse
proxy, configurable traffic routing, and TLS termination for
Kubernetes services. Kubernetes ingress resources are used to
configure the ingress rules and routes for individual Kubernetes
services.
So an ingress controller only handles routing within a single cluster; on its own it is not appropriate for spanning multiple regions under one DNS name. Based on your description, I think you could try Azure Traffic Manager. Its documentation introduces it like this:
Azure Traffic Manager is a DNS-based traffic load balancer that
enables you to distribute traffic optimally to services across global
Azure regions, while providing high availability and responsiveness.
Given that description, I think it is the more appropriate tool for your purpose: run an AKS cluster with its own ingress controller in each region, and let Traffic Manager resolve the shared DNS name to the best regional endpoint.
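For example, assuming a second AKS cluster with its own NGINX ingress public IP in another region, a rough az CLI sketch could look like the following (the resource group, profile, region, and DNS names are hypothetical placeholders):

$ az network traffic-manager profile create \
    --resource-group MyRG --name testwebsite-tm \
    --routing-method Performance --unique-dns-name testwebsite-global
$ az network traffic-manager endpoint create \
    --resource-group MyRG --profile-name testwebsite-tm \
    --name westeurope --type externalEndpoints \
    --target <west-europe-ingress-ip-or-fqdn> --endpoint-location westeurope
$ az network traffic-manager endpoint create \
    --resource-group MyRG --profile-name testwebsite-tm \
    --name eastus --type externalEndpoints \
    --target <east-us-ingress-ip-or-fqdn> --endpoint-location eastus
# point the custom domain at the profile, e.g. www.testwebsite.com
$ az network dns record-set cname set-record \
    --resource-group MyRG --zone-name testwebsite.com \
    --record-set-name www --cname testwebsite-global.trafficmanager.net

Each regional cluster keeps its own NGINX ingress and public IP; Traffic Manager only answers DNS queries with the best endpoint, so TLS termination and path routing still happen at the regional ingress. For the zone apex (testwebsite.com itself) you would use an Azure DNS alias record to the Traffic Manager profile instead of a CNAME.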

Related

How to configure Azure App Gateway in Istio

I have an application set up on AKS (Azure Kubernetes Service) and I'm currently using Azure Application Gateway as the ingress resource for my application running on AKS.
Now after setting up Istio for my cluster the graphs are coming up fine except one part. Since the Azure App Gateway is unknown to Istio, it is showing the resource as "unknown". I even tried launching a virtual service and pointed it to the ingress resource but that didn't have any effect on the graph. How shall I establish to Istio that it is the Azure App Gateway and not an "unknown" resource?
This is because Azure Application Gateway is not part of the Istio mesh. Depending on how you have your Azure Application Gateway configured, you might not even get any benefits from using Istio.
Getting Istio to work with Azure Application Gateway is a lot more complicated than it seems.
There is a GitHub issue that describes using Istio and Azure Application Gateway at the same time.
With the following statement:
You may wonder why I chose to put the ingress resource into the istio-system namespace. I'm doing so because in my understanding the istio-ingress must be the endpoint for each app-gateway redirect. If I would let it redirect to the echo-server service, AGKI (application-gateway-kubernetes-ingress) would point to the IP address of the deployed pod, which would completely disregard Istio's service mesh.
So if you don't already have a configuration like that and you want to use Istio, I suggest setting the Istio ingress gateway as the backend for your Azure Application Gateway and treating its traffic as coming from outside the mesh (a rough sketch of that wiring is below).
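Assuming AGIC is installed and Istio's default ingress gateway service (istio-ingressgateway in the istio-system namespace) is in place, the Application Gateway ingress can simply target that service. All names here are hypothetical and this is only a sketch, not the exact manifest from the issue:

$ kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: appgw-to-istio
  namespace: istio-system
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: istio-ingressgateway   # hand all traffic to Istio's edge
            port:
              number: 80
EOF

From there, Istio Gateway and VirtualService resources decide which in-mesh service the request reaches; the Application Gateway itself will still show up as traffic coming from outside the mesh.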
Here is an explanation of why Azure Application Gateway is an "unknown" resource.
In this article you can find the following statement:
Ingress traffic
Istio expects traffic to go via the Ingress Gateway. When you see 'unknown' traffic it can simply be the case that you use the standard Kubernetes Ingress or an OpenShift route to send traffic from the outside to Istio.
Azure Application Gateway uses a custom ingress controller:
Application Gateway Ingress Controller (AGIC) allows you to use Application Gateway as the ingress for an Azure Kubernetes Service (AKS) cluster.
The ingress controller runs as a pod within the AKS cluster and consumes Kubernetes Ingress Resources and converts them to an Application Gateway configuration which allows the gateway to load-balance traffic to the Kubernetes pods. The ingress controller only supports Application Gateway V2 SKU.
For more information, see Application Gateway Ingress Controller (AGIC).
According to Kiali documentation:
In some situations you can see a lot of connections from an "Unknown" node to your services in the graph, because some software external to your mesh might be periodically pinging or fetching data. This is typically the case when you setup Kubernetes liveness probes, or have some application metrics pushed or exposed to a monitoring system such as Prometheus. Perhaps you wouldn’t like to see these connections because they make the graph harder to read.
To address your additional question:
How shall I establish to Istio that it is the Azure App Gateway and not an "unknown" resource?
As far as I know there is no way to make a custom (non-Istio) ingress gateway part of the Istio mesh, which leaves Azure Application Gateway labelled as "unknown".
Hope this helps.
AFAIK, Istio needs its own ingress gateway for apps.
Create an Istio VirtualService and bind it to Istio's ingress gateway. The steps to do it are here and here.
Istio's ingress gateway for the app can be seen in the output of kubectl get gateway:
$ kubectl get gateway
NAME               AGE
bookinfo-gateway   32s
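A minimal Gateway plus VirtualService pair for such an app might look roughly like this (the gateway, host, and service names are hypothetical; the selector assumes Istio's default istio-ingressgateway deployment):

$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: myapp-gateway
spec:
  selector:
    istio: ingressgateway        # bind to Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - "*"
  gateways:
  - myapp-gateway
  http:
  - route:
    - destination:
        host: myapp              # the in-cluster service to expose
        port:
          number: 8080
EOF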

Azure APIM pricing tiers and Virtual Networks

From the Azure API Management pricing page I see that Virtual Networks aren't supported outside of the Developer and Premium tiers.
Currently, with my Developer tier instance, when configuring the VNET of an APIM I can choose between "Off", "External" and "Internal". With the other tiers, can I still use an external VNET, or no VNET at all?
When I try to connect a Kubernetes cluster/VM to the APIM, I have to configure the APIM with an external VNET. So if that's not possible with the other pricing tiers, is it still possible to connect to a Kubernetes cluster?
VNET support (and hence those options) is available only in the Developer and Premium tiers.
You can still use APIM by routing requests from APIM to the AKS load balancer using a static IP and overriding the Host header as required. If possible, you could also use Azure Application Gateway as an ingress controller.
When taking the load balancer approach, you could set up a Network Security Group to allow traffic only from APIM (and any other IPs/services) to your AKS nodes.
When taking the Application Gateway approach, you could set up IP restrictions that only allow APIM's IP through.
You should be able to set up a similar source-IP rule on your own ingress controller instead of an NSG rule, I suppose.
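For the NSG approach, a hedged sketch (the node resource group, NSG name, and APIM IP below are placeholders you would look up in your own environment):

$ az network nsg rule create \
    --resource-group MC_myRG_myAKS_westeurope \
    --nsg-name <aks-node-nsg> \
    --name AllowApimOnly --priority 100 \
    --direction Inbound --access Allow --protocol Tcp \
    --source-address-prefixes <apim-public-ip> \
    --destination-port-ranges 80 443

On the APIM side, the Host header override mentioned above is typically done with a set-header policy on the API or operation.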

How to route specific port traffic on one domain to multiple Azure App Services?

How would one route a domain's traffic, by port, to multiple App Services (not VMs):
https://www.example.com:80 (Website) --> AppService A
https://www.example.com:2000 (WebService) --> AppService B or AppService A/Slot 1
https://www.example.com:3000 (WebService) --> AppService C or AppService A/Slot 2
Would this be best accomplished with an Application Gateway? Can it be done with a Load Balancer or Traffic Manager? Can it be routed to Deployment Slots?
As juunas commented, Traffic Manager works at the DNS level; it cannot route HTTP/HTTPS traffic to an application based on the port.
For routing to multiple App Services based on port, you could consider Azure Application Gateway with its routing rules, or Azure Front Door. However, that deployment will be complex and it will be quite expensive.
Moreover, you cannot bind the same custom domain to multiple different public services. So you could add the domain to one App Service and then use virtual directories/applications under it; that is a relatively cheap way. Refer to this to create a virtual directory.
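If you do go the Application Gateway route, each extra port maps to its own frontend port, listener, and rule. A hedged az CLI sketch for one additional port (the gateway and object names are hypothetical; the backend HTTP settings take the host name from the backend pool because the target is an App Service):

$ az network application-gateway frontend-port create -g MyRG \
    --gateway-name MyAppGw -n port2000 --port 2000
$ az network application-gateway address-pool create -g MyRG \
    --gateway-name MyAppGw -n svc-b-pool --servers appservice-b.azurewebsites.net
$ az network application-gateway http-settings create -g MyRG \
    --gateway-name MyAppGw -n svc-b-settings --port 443 --protocol Https \
    --host-name-from-backend-pool true
$ az network application-gateway http-listener create -g MyRG \
    --gateway-name MyAppGw -n listener2000 --frontend-port port2000 \
    --frontend-ip appGatewayFrontendIP   # name of your existing frontend IP config
$ az network application-gateway rule create -g MyRG \
    --gateway-name MyAppGw -n rule2000 --http-listener listener2000 \
    --address-pool svc-b-pool --http-settings svc-b-settings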

Can we have a single application gateway for all VMSS created in different regions?

Can we have a single Application Gateway for all VMSS created in different regions?
If yes please share the possible options.
As the comment mentioned, you cannot have a single Application Gateway for VMSS created in different regions, since an Application Gateway is always deployed in a virtual network subnet and it only directly supports VMSS backends in the same region and virtual network as the Application Gateway.
As a workaround, you could use a public IP address as the backend for communicating with instances outside of the virtual network, as long as there is IP connectivity. Read more details about backend pools. So you could use a public-facing load balancer associated with each VMSS and add its public address to the gateway's backend pool.
Furthermore, you could also use Traffic Manager to distribute traffic across multiple Application Gateways in different datacenters, or use the Azure Front Door Service, which provides a scalable and secure entry point for fast delivery of your global web applications.
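As a rough sketch of the Front Door option (classic Front Door via the front-door CLI extension; the profile name and regional gateway hostnames are hypothetical):

$ az extension add --name front-door            # one-time, if not already installed
$ az network front-door create \
    --resource-group MyRG --name my-global-entry \
    --backend-address appgw-westeurope.example.com
# look up the generated backend pool name, then add the other regions' gateways
$ az network front-door backend-pool list \
    --resource-group MyRG --front-door-name my-global-entry --query "[].name"
$ az network front-door backend-pool backend add \
    --resource-group MyRG --front-door-name my-global-entry \
    --pool-name <pool-name-from-previous-command> \
    --address appgw-eastus.example.com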

Azure Container Services (AKS) - Exposing containers to other VNET resources

I am using Azure Container Service (AKS, not ACS) to stand up some APIs, some of which are for public consumption and some of which are not.
For the public access route everything is as you might expect: a load-balancer service bound to a public IP is created, the DNS zone contains our A record pointing to that public IP, and traffic is routed through to an NGINX controller and then onwards to the correct internal service endpoints.
Currently the preview version assigns a new VNET to place the AKS resource group within; moving forward I will place the AKS instance inside an already existing VNET which houses other components (App Services on an App Service Environment).
My question is how to grant access to the private APIs to other components inside the same VNET, as well as to components in other VNETs.
I believe AKS supports an ILB-type load balancer, which I think might be what is required for routing traffic from other VNETs? But what about where the components already reside inside the same VNET?
Thank you in advance!
If you need to access these services from other services outside the AKS cluster, you still need an ILB to load balance across your service on the different nodes in your cluster. You can have that internal load balancer created for you by adding the Azure internal-load-balancer annotation to your Service. The alternative is using NodePort and then stringing up your own way to spread the traffic across all the nodes that host the endpoints.
I would use the ILB instead of trying to roll your own with NodePort service types (see the sketch below). The only other thing would perhaps be some type of API gateway VM inside your VNET where you can define the backend pool; that may be a solution if you are hosting APIs through a third-party API gateway hosted on an Azure VM in the same VNET.
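A minimal sketch of such an annotated Service (the name, ports, and subnet are hypothetical; the second annotation is optional and only needed if the ILB frontend IP should come from a specific subnet):

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: private-api
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    # optional: allocate the internal LB frontend IP from this subnet
    service.beta.kubernetes.io/azure-load-balancer-internal-subnet: apps-subnet
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: private-api
EOF

Components in the same VNET (or in VNETs peered to it) can then call the private IP Azure assigns to that internal load balancer, while pods inside the cluster keep using the normal ClusterIP/DNS name.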
Eddie Villalba
MCSD: Azure Solutions Architect | CKA: Certified Kubernetes Administrator
