Azure AKS App Gateway Ingress and Istio Ingress Gateway

Has anyone tried to use this combination? The main question is whether it really makes sense, given that the main advantage of App Gateway as a Kubernetes ingress controller is the ability to connect directly to pods, avoiding the NodePort scheme.
And in the case of the Istio Ingress Gateway we still have an additional hop to the pods, so a Layer 4 Azure ILB should also be fine?

I'd say that the main advantage of AGIC is not necessarily the ability to connect directly to pods, but rather being able to use the WAF functionality of the Application Gateway and having Microsoft support, which is sometimes needed in large corporations. If you are not planning to use the WAF functionality of the Application Gateway, it doesn't really make sense to use AGIC instead of an L4 load balancer in front of the Istio Ingress Gateway.
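If you go the plain L4 route, the usual way to get an Azure internal load balancer in front of the Istio Ingress Gateway is to annotate the gateway's Service. A minimal sketch using an IstioOperator overlay, assuming a standard istioctl-based install (adjust for Helm or other install methods):

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        service:
          type: LoadBalancer    # Azure provisions an L4 load balancer for this Service
        serviceAnnotations:
          # request an internal (ILB) frontend instead of a public IP
          service.beta.kubernetes.io/azure-load-balancer-internal: "true"

The gateway then receives plain L4 traffic from the Azure load balancer, and all HTTP-level routing is done by Istio Gateway and VirtualService resources.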

Related

Security for applications hosted in kubernetes ingress

I need to host the frontend and backend parts of my application behind a Kubernetes Ingress. I would like only the frontend to be able to send requests to the backend, even though both are exposed through the Ingress under one host (but on different paths). Is it possible to set up something like this in a Kubernetes cluster, so that no other applications can send requests to the backend? Can you do something like this with Kubernetes security headers?
Within the cluster, you can restrict traffic between services by using Network Policies. E.g. you can declare that service A can send traffic to service B, but that service C cannot send traffic to service B. However, you need to make sure that your cluster has a CNI with support for Network Policies; Calico is an example of such an add-on.
Ingress is useful for declaring which services can receive traffic from outside of the cluster.
Also, service meshes like Istio are useful for further enhancing this security, e.g. by using an egress proxy, mTLS, and requiring JWT-based authentication between services.
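A minimal sketch of such a Network Policy, assuming the frontend and backend pods carry app: frontend and app: backend labels and run in the same namespace (names and the backend port are illustrative):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend-only
spec:
  podSelector:
    matchLabels:
      app: backend          # the pods being protected
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend     # only frontend pods may connect
    ports:
    - protocol: TCP
      port: 8080            # assumed backend port

Because the policy selects the backend pods and lists only the frontend as an allowed source, traffic from any other pod is dropped once the policy is applied, provided the CNI actually enforces Network Policies.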

How to configure Azure App Gateway in Istio

I have an application set up on AKS (Azure Kubernetes Service) and I'm currently using Azure Application Gateway as the ingress resource for my application running on AKS.
Now, after setting up Istio for my cluster, the graphs are coming up fine except for one part. Since the Azure Application Gateway is unknown to Istio, it shows the resource as “unknown”. I even tried launching a virtual service and pointed it to the ingress resource, but that didn't have any effect on the graph. How can I make Istio recognize that it is the Azure Application Gateway and not an “unknown” resource?
This is because the Azure Application Gateway is not part of the Istio mesh. Depending on how you have your Azure Application Gateway configured, you might not even get any benefits from using Istio.
Getting Istio to work with Azure Application Gateway is a lot more complicated than it seems.
There is a GitHub issue describing a setup that uses Istio and Azure Application Gateway at the same time,
with the following statement:
You may wonder why I chose to put the ingress resource into the istio-system namespace. I'm doing so because, in my understanding, the istio-ingress must be the endpoint for each app-gateway redirect. If I let it redirect to the echo-server service, AGKI (application-gateway-kubernetes-ingress) would point to the IP address of the deployed pod, which would completely disregard Istio's service mesh.
So if you don't already have a configuration like that and you want to use Istio, I suggest setting the Istio Ingress Gateway as the endpoint for your Azure Application Gateway and treating its traffic as coming from outside the mesh.
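A rough sketch of that wiring, assuming a default Istio installation where the istio-ingressgateway Service lives in istio-system and listens on port 80 (the hostname and ingress name are placeholders, and older AGIC releases use the annotation shown rather than an IngressClass):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: appgw-to-istio
  namespace: istio-system
  annotations:
    # hand this Ingress to the Application Gateway Ingress Controller
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - host: myapp.example.com              # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: istio-ingressgateway   # all traffic enters the mesh here
            port:
              number: 80

Routing from the Istio Ingress Gateway to the individual applications is then handled by Istio Gateway and VirtualService resources, so the mesh sees all incoming traffic.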
Here is an explanation of why the Azure Application Gateway shows up as an “unknown” resource.
In this article you can find the following statement:
Ingress traffic
Istio expects traffic to go via the Ingress Gateway. When you see ‘unknown’ traffic it can simply be the case that you use the standard Kubernetes Ingress or an OpenShift route to send traffic from the outside to Istio.
Azure Application Gateway uses a custom ingress controller:
Application Gateway Ingress Controller (AGIC) allows you to use Application Gateway as the ingress for an Azure Kubernetes Service (AKS) cluster.
The ingress controller runs as a pod within the AKS cluster and consumes Kubernetes Ingress Resources and converts them to an Application Gateway configuration which allows the gateway to load-balance traffic to the Kubernetes pods. The ingress controller only supports Application Gateway V2 SKU.
For more information, see Application Gateway Ingress Controller (AGIC).
According to Kiali documentation:
In some situations you can see a lot of connections from an "Unknown" node to your services in the graph, because some software external to your mesh might be periodically pinging or fetching data. This is typically the case when you setup Kubernetes liveness probes, or have some application metrics pushed or exposed to a monitoring system such as Prometheus. Perhaps you wouldn’t like to see these connections because they make the graph harder to read.
To address your additional question:
How can I make Istio recognize that it is the Azure Application Gateway and not an “unknown” resource?
As far as I know, there is no way to make a custom (non-Istio) ingress gateway part of the Istio mesh, which leaves the Azure Application Gateway labelled as “unknown”.
Hope this helps.
AFAIK, Istio needs its own ingress gateway for apps.
Create an Istio VirtualService and point it to Istio's ingress gateway. The steps to do it are here and here.
Istio's ingress gateway for the app can be seen in the output of kubectl get gateway:
$ kubectl get gateway
NAME               AGE
bookinfo-gateway   32s
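For reference, a Gateway like the one above plus a matching VirtualService is roughly what binds an app to Istio's ingress gateway; the hosts, paths, and ports below are illustrative (loosely following the Bookinfo sample):

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway          # bind to Istio's default ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway               # attach these routes to the Gateway above
  http:
  - match:
    - uri:
        prefix: /productpage       # illustrative route
    route:
    - destination:
        host: productpage          # in-cluster Service name
        port:
          number: 9080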

When to use external LoadBalancer in K8s?

Explaining my confusion / lack of understanding
When reading about the external LoadBalancer in K8s, which is a cloud-provider-only feature, I don't quite understand when it should be used, since when one creates a Deployment, K8s will already do round-robin load balancing across the pods in that Deployment.
So from my current understanding, all one would need to do is create a NodePort service, and you have the equivalent of an external load balancer?
Or should I think of the LoadBalancer type as being like HAProxy/NGINX/Envoy, where one can do SSL termination, reverse proxying, and many other useful things?
My current guess is that the proper use of LoadBalancer is to spread traffic across many node IPs, but I can't find anything to back that up.
Question
Can anyone explain when and why to use a LoadBalancer instead of just using a NodePort service?
For example, say you want to deploy multiple applications in your cluster, e.g. 10 apps.
You would like to access these 10 apps over the internet. One way is to expose those 10 application services as NodePort services so you can access them from outside. For this to happen, Kubernetes opens 10 node ports on each cluster node, which is a security risk.
Most enterprises work behind a firewall in a closed network and don't allow external traffic to/from any ports other than HTTP/HTTPS (80/443).
Another way is to set the service type to LoadBalancer for each application service. So, to access the 10 apps, you would be provisioning 10 load balancers to reach the app servers over HTTP/HTTPS ports. Since load balancers are billed resources, it is economically not viable to have one load balancer for each service that you want to access over the internet.
Is there a way to access all those 10 app services running inside Kubernetes over a single IP and port? This is where an ingress controller comes into the picture.
An ingress controller allows a single IP and port to reach all services running in k8s through ingress rules. The ingress controller's own service is set to type LoadBalancer so it is accessible from the public internet.
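A minimal sketch of that fan-out with two of the apps; the hostname, paths, service names, and ingress class are illustrative and depend on which controller you run:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps
  annotations:
    kubernetes.io/ingress.class: nginx   # assumes an NGINX ingress controller
spec:
  rules:
  - host: apps.example.com               # placeholder hostname
    http:
      paths:
      - path: /app1
        pathType: Prefix
        backend:
          service:
            name: app1                   # ClusterIP Service for app 1
            port:
              number: 80
      - path: /app2
        pathType: Prefix
        backend:
          service:
            name: app2                   # ClusterIP Service for app 2
            port:
              number: 80

The individual application services can stay as ClusterIP; only the ingress controller's Service is exposed with type LoadBalancer, so a single cloud load balancer serves all 10 apps.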

Configuring an AKS load balancer for HTTPS access

I'm porting an application that was originally developed for the AWS Fargate container service to AKS under Azure. In the AWS implementation an application load balancer is created and placed in front of the UI microservice. This load balancer is configured to use a signed certificate, allowing https access to our back-end.
I've done some searches on this subject and how something similar could be configured in AKS. I've found a lot of different answers to this for a variety of similar questions but none that are exactly what I'm looking for. From what I gather, there is no exact equivalent to the AWS approach in Azure. One thing that's different in the AWS solution is that you create an application load balancer upfront and configure it to use a certificate and then configure an https listener for the back-end UI microservice.
In the Azure case, when you issue the "az aks create" command the load balancer is created automatically. There doesn't seem to be a way to do much configuration, especially as it relates to certificates. My impression is that the default load balancer created by AKS is ultimately not the mechanism to use for this. Another option might be an application gateway, as described here. I'm not sure how to adapt this discussion to AKS. The UI pod needs to be the ultimate target of any traffic coming through the application gateway, but the gateway uses a different subnet than the one used for the pods in the AKS cluster.
So I'm not sure how to proceed. My question is: Is the application gateway the correct solution to providing https access to a UI running in an AKS cluster or is there another approach I need to use?
You are right, the default load balancer created by AKS is a Layer 4 LB and doesn't support SSL offloading. The equivalent of the AWS Application Load Balancer in Azure is the Application Gateway. As of now there is no option in AKS that lets you choose the Application Gateway instead of a classic load balancer, but like alev said, there is an ongoing project, still in preview, which will allow you to deploy a special ingress controller that drives the routing rules on an external Application Gateway based on your ingress rules. If you really need something that is production-ready, here are your options:
Deploy an ingress controller like NGINX, Traefik, etc. and use cert-manager to generate your certificates.
Create an Application Gateway and manage your own routing rules that point to the default Layer 4 LB (a k8s LoadBalancer service, or via the ingress controller).
We implemented something similar lately and we decided to manage our own Application Gateway because we wanted to do the SSL offloading outside the cluster and because we needed the WAF feature of the Application Gateway. We were able to automatically manage the routing rules inside our deployment pipeline. We will probably switch to the Application Gateway ingress controller when it is production-ready.
Certificate issuing and renewal are not handled by the ingress itself, but using cert-manager you can easily add your own CA or use Let's Encrypt to automatically issue certificates when you annotate the ingress or service objects. The http_application_routing add-on for AKS is perfectly capable of working with cert-manager; it can even be further configured using ConfigMaps (addon-http-application-routing-nginx-configuration in the kube-system namespace). You can also look at initial support for Application Gateway as ingress here.
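To illustrate the first option, here is a sketch of an Ingress annotated for cert-manager. It assumes an NGINX ingress controller and an existing ClusterIssuer named letsencrypt-prod; the hostname and service name are placeholders:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ui
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod   # cert-manager issues and renews the certificate
spec:
  tls:
  - hosts:
    - ui.example.com                     # placeholder hostname
    secretName: ui-tls                   # cert-manager stores the signed certificate here
  rules:
  - host: ui.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ui                     # the UI microservice's ClusterIP Service
            port:
              number: 80

TLS then terminates at the ingress controller inside the cluster; if you instead terminate TLS on an Application Gateway in front of the cluster (the second option), the certificate is configured on the gateway's listener rather than in the Ingress.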

Azure kubernetes service ingress and clusters per region

I have an application that consists of microservices which are dockerized and deployed on Azure Kubernetes Service in West Europe.
In order to reach the application, an NGINX ingress controller is created and its public endpoint is mapped to a custom domain.
For example, the public IP x.x.x.x is mapped to the domain testwebsite.com in Azure DNS.
The Ingress takes care of the routing to the microservices.
How do I translate this setup to multiple regions and still use the same DNS name?
An ingress controller is a piece of software that provides a reverse proxy, configurable traffic routing, and TLS termination for Kubernetes services. Kubernetes ingress resources are used to configure the ingress rules and routes for individual Kubernetes services.
So ingress alone is not appropriate for your purpose. Based on your description, I think you could try Azure Traffic Manager. The introduction to Azure Traffic Manager reads:
Azure Traffic Manager is a DNS-based traffic load balancer that enables you to distribute traffic optimally to services across global Azure regions, while providing high availability and responsiveness.
Given this, I think it's more appropriate for your purpose: deploy a cluster with its own ingress controller and public IP in each region, add each regional public endpoint to a Traffic Manager profile, and point testwebsite.com at the Traffic Manager DNS name so clients are resolved to the closest healthy region.
