Integrate azurerm_application_gateway with AKS using Terraform - Azure

I am able to create an AKS cluster with advanced networking, and I am able to integrate an application load balancer with this AKS cluster as well, but I am unable to find any way to integrate an Azure application gateway with AKS.

Using Application Gateway as an Ingress controller for AKS is in a beta state at the moment (as shown on the GitHub page - https://github.com/Azure/application-gateway-kubernetes-ingress), so I don't believe there will be any support for setting it up with Terraform until it reaches GA.
You might be able to do something with a null_resource and a local-exec provisioner (Terraform's escape hatch for running arbitrary commands) to set it up, but that would be up to you to figure out; a sketch follows below.
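A minimal sketch of that workaround, assuming an azurerm_kubernetes_cluster.aks and an azurerm_application_gateway.appgw exist elsewhere in the configuration and that az and helm are available where Terraform runs; the chart values mirror the AGIC repository's documentation, and every name here is hypothetical:

```hcl
# Hypothetical sketch: shell out after cluster creation to install the
# (preview) AGIC Helm chart. A workaround, not an official integration.
data "azurerm_client_config" "current" {}

resource "null_resource" "agic_install" {
  depends_on = [
    azurerm_kubernetes_cluster.aks,
    azurerm_application_gateway.appgw,
  ]

  provisioner "local-exec" {
    command = <<-EOT
      az aks get-credentials \
        --resource-group ${azurerm_resource_group.rg.name} \
        --name ${azurerm_kubernetes_cluster.aks.name} --overwrite-existing
      helm repo add application-gateway-kubernetes-ingress \
        https://appgwingress.blob.core.windows.net/ingress-azure-helm-package/
      helm repo update
      helm install ingress-azure \
        application-gateway-kubernetes-ingress/ingress-azure \
        --set appgw.name=${azurerm_application_gateway.appgw.name} \
        --set appgw.resourceGroup=${azurerm_resource_group.rg.name} \
        --set appgw.subscriptionId=${data.azurerm_client_config.current.subscription_id} \
        --set armAuth.type=aadPodIdentity
    EOT
  }
}
```

The identity wiring for armAuth (AAD pod identity) is omitted here; the chart needs additional values for it, as described in the AGIC repository.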

Unfortunately, it seems there is no way to integrate the application gateway with the AKS cluster directly, and you can see all the things you can set for AKS here.
But you can integrate the application gateway with the AKS cluster yourself once you understand the AKS internal load balancer and the application gateway's backend pool addresses. You can take a look at the steps for how to integrate an application gateway with an AKS cluster.
First of all, you need to plan the AKS cluster network and choose an exact IP address for the application gateway backend pool address in Terraform; see the sketch below. I hope this helps; if you have any more questions, you can message me.
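As a concrete illustration of that plan, here is a hedged Terraform sketch: a private IP is reserved inside the AKS subnet for the cluster's internal load balancer, and the Application Gateway's backend pool targets that address. The SKU, the 10.240.0.100 address, and every resource name are assumptions, not a definitive layout.

```hcl
# Hypothetical sketch: Application Gateway fronting AKS via a fixed private
# IP that the AKS internal load balancer is expected to be given.
locals {
  aks_internal_lb_ip = "10.240.0.100" # must lie inside the AKS subnet (assumption)
}

resource "azurerm_application_gateway" "appgw" {
  name                = "aks-appgw"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location

  sku {
    name     = "Standard_v2"
    tier     = "Standard_v2"
    capacity = 1
  }

  gateway_ip_configuration {
    name      = "gateway-ip"
    subnet_id = azurerm_subnet.appgw.id # a dedicated subnet, separate from AKS
  }

  frontend_port {
    name = "http"
    port = 80
  }

  frontend_ip_configuration {
    name                 = "public"
    public_ip_address_id = azurerm_public_ip.appgw.id
  }

  backend_address_pool {
    name         = "aks-internal-lb"
    ip_addresses = [local.aks_internal_lb_ip]
  }

  backend_http_settings {
    name                  = "http-settings"
    port                  = 80
    protocol              = "Http"
    cookie_based_affinity = "Disabled"
  }

  http_listener {
    name                           = "http-listener"
    frontend_ip_configuration_name = "public"
    frontend_port_name             = "http"
    protocol                       = "Http"
  }

  request_routing_rule {
    name                       = "to-aks"
    rule_type                  = "Basic"
    priority                   = 100 # required on newer azurerm provider versions
    http_listener_name         = "http-listener"
    backend_address_pool_name  = "aks-internal-lb"
    backend_http_settings_name = "http-settings"
  }
}
```

For this to line up, the Kubernetes side must actually claim that IP, e.g. an internal LoadBalancer service with loadBalancerIP set to the same address.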

Related

AKS and Application Gateway network setup

Our AKS cluster and Application Gateway are in different VNets. From the AGIC documentation, the connection between AKS and the Application Gateway is through a route table, as can be seen here: AGIC GitHub.
However, we have a requirement that no route table should exist between these two resources. AKS is using kubenet and we cannot change it.
Is there another way to connect AKS and the Application Gateway? Thanks.
To connect AKS and Application Gateway, please try the following:
You can enable AGIC on an existing AKS cluster through the Azure portal:
enable the Application Gateway ingress controller add-on and select the application gateway you created. (A Terraform equivalent is sketched after the links below.)
If the above suggestion doesn't work in your scenario, then please refer to the links below:
Managing traffic to AKS through Azure Application Gateway using Application gateway Ingress Controller by SUDHAKARA RAO SAJJA
Enable ingress controller add-on for existing AKS cluster with existing Azure application gateway | Microsoft Docs
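For completeness, if the cluster is managed with Terraform rather than the portal, the same add-on can be enabled on the cluster resource; a minimal sketch, assuming azurerm provider 3.x and an existing azurerm_application_gateway.appgw (all names are hypothetical):

```hcl
# Hedged sketch: Terraform equivalent of the portal's AGIC toggle.
resource "azurerm_kubernetes_cluster" "aks" {
  name                = "my-aks"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = "myaks"

  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "Standard_DS2_v2"
  }

  identity {
    type = "SystemAssigned"
  }

  # Enables the AGIC add-on against the existing gateway; AGIC then drives the
  # gateway's listeners and rules from the cluster's Ingress objects.
  ingress_application_gateway {
    gateway_id = azurerm_application_gateway.appgw.id
  }
}
```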

Configuring an AKS load balancer for HTTPS access

I'm porting an application that was originally developed for the AWS Fargate container service to AKS under Azure. In the AWS implementation an application load balancer is created and placed in front of the UI microservice. This load balancer is configured to use a signed certificate, allowing https access to our back-end.
I've done some searching on how something similar could be configured in AKS. I've found a lot of different answers to a variety of similar questions, but none that are exactly what I'm looking for. From what I gather, there is no exact equivalent to the AWS approach in Azure. One thing that's different in the AWS solution is that you create the application load balancer upfront, configure it to use a certificate, and then configure an https listener for the back-end UI microservice.
In the Azure case, when you issue the "az aks create" command, the load balancer is created automatically. There doesn't seem to be a way to do much configuration, especially as it relates to certificates. My impression is that the default load balancer created by AKS is ultimately not the mechanism to use for this. Another option might be an application gateway, as described here. I'm not sure how to adapt this discussion to AKS. The UI pod needs to be the ultimate target of any traffic coming through the application gateway, but the gateway uses a different subnet than the one used by the pods in the AKS cluster.
So I'm not sure how to proceed. My question is: Is the application gateway the correct solution to providing https access to a UI running in an AKS cluster or is there another approach I need to use?
You are right: the default load balancer created by AKS is a layer 4 LB and doesn't support SSL offloading. The equivalent of the AWS Application Load Balancer in Azure is the Application Gateway. As of now there is no option in AKS that lets you choose an Application Gateway instead of a classic load balancer, but as alev said, there is an ongoing project, still in preview, that will let you deploy a special ingress controller to drive the routing rules on an external Application Gateway based on your ingress rules. If you really need something that is production ready, here are your options:
Deploy an Ingress controller like NGINX, Traefik, etc. and use cert-manager to generate your certificate.
Create an Application Gateway and manage your own routing rules that point to the default layer 4 LB (a k8s LoadBalancer service, or via the ingress controller)
We implemented something similar lately, and we decided to manage our own Application Gateway because we wanted to do the SSL offloading outside the cluster and because we needed the WAF feature of the Application Gateway. We were able to manage the routing rules automatically inside our deployment pipeline. We will probably move to the Application Gateway ingress project once it is production ready.
Certificate issuing and renewal are not handled by the ingress, but with cert-manager you can easily add your own CA or use Let's Encrypt to automatically issue certificates when you annotate the ingress or service objects. The http_application_routing add-on for AKS is perfectly capable of working with cert-manager; it can even be further configured using ConfigMaps (addon-http-application-routing-nginx-configuration in the kube-system namespace). You can also look at the initial support for Application Gateway as an ingress here.
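As a sketch of the first option via the Terraform kubernetes provider: an Ingress annotated for cert-manager, which then issues a Let's Encrypt certificate into the referenced secret. The issuer name, host, and backend service are assumptions, and cert-manager plus an NGINX ingress controller are presumed to already be installed.

```hcl
# Hedged sketch: cert-manager-managed TLS for the UI behind an NGINX ingress.
resource "kubernetes_ingress_v1" "ui" {
  metadata {
    name      = "ui"
    namespace = "default"
    annotations = {
      # Tells cert-manager which (hypothetical) issuer to use.
      "cert-manager.io/cluster-issuer" = "letsencrypt-prod"
    }
  }

  spec {
    ingress_class_name = "nginx"

    tls {
      hosts       = ["ui.example.com"]
      secret_name = "ui-tls" # cert-manager writes the certificate here
    }

    rule {
      host = "ui.example.com"
      http {
        path {
          path      = "/"
          path_type = "Prefix"
          backend {
            service {
              name = "ui" # assumes a ClusterIP service for the UI pods
              port {
                number = 80
              }
            }
          }
        }
      }
    }
  }
}
```

cert-manager watches the annotation, solves the ACME challenge, and stores the resulting certificate in the ui-tls secret that the TLS block references.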

Network setup for accessing Azure Redis service from Azure AKS

We have an application that runs on an Ubuntu VM. This application connects to Azure Redis, Azure Postgres and Azure CosmosDB(mongoDB) services.
I am currently working on moving this application to Azure AKS and intend to access all the above services from the cluster. The services will continue to be external and will not reside inside the cluster.
I am trying to understand how the network/firewall settings of both the services and AKS should be configured so that pods inside the cluster can access the above services, or any Azure service in general.
I tried the following:
Created a ConfigMap containing the connection params (public IP/address, username/password, port, etc.) of all the services and used this ConfigMap in the deployment resource.
Hardcoded the connection params of all the services as env vars inside the container image.
In the firewall/inbound rules of the services, I added the AKS API IP and the individual node IPs.
None of the above worked. Did I miss anything? What else should be configured?
I tested the setup locally on minikube with all the services running on my local machine and it worked fine.
I am currently working on moving this application to Azure AKS and
intend to access all the above services from the cluster.
I assume that you would like all the services to access each other and that all the services are in the AKS cluster? If so, I advise you to configure an internal load balancer in the AKS cluster.
Internal load balancing makes a Kubernetes service accessible to
applications running in the same virtual network as the Kubernetes
cluster.
You can give it a try by following this document: Use an internal load balancer with Azure Kubernetes Service (AKS). A sketch of such a service follows. Good luck!
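A minimal sketch of that pattern via the Terraform kubernetes provider; the service.beta.kubernetes.io/azure-load-balancer-internal annotation is the documented switch, while the app labels and the pinned IP are assumptions:

```hcl
# Hedged sketch: a Service placed on Azure's internal load balancer.
resource "kubernetes_service" "ui_internal" {
  metadata {
    name = "ui-internal"
    annotations = {
      # Documented Azure annotation: provision an internal (private) LB.
      "service.beta.kubernetes.io/azure-load-balancer-internal" = "true"
    }
  }

  spec {
    type             = "LoadBalancer"
    load_balancer_ip = "10.240.0.100" # optional: pin the private IP (assumption)

    selector = {
      app = "ui" # assumes pods labeled app=ui
    }

    port {
      port        = 80
      target_port = 8080 # assumes the container listens on 8080
    }
  }
}
```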
Outbound traffic in Azure is SNAT-translated, as stated in this article. If you already have a service in your AKS cluster, the outbound connection from all pods in your cluster will come through the first LoadBalancer-type service IP; I strongly suggest you create one for the sole purpose of having a consistent outbound IP. You can also pre-create a public IP and use it, as stated in this article, via the loadBalancerIP field of the service spec.
On a side note: rather than a ConfigMap, given the sensitivity of the connection strings, I'd suggest you create a Secret and pass that down to your Deployment to be mounted or exported as environment variables.
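A hedged sketch of that suggestion with the Terraform kubernetes provider; the key names, variable, and image are all assumptions:

```hcl
# Hypothetical sketch: connection settings in a Secret, surfaced to the
# Deployment as environment variables instead of a ConfigMap.
variable "redis_key" {
  type      = string
  sensitive = true
}

resource "kubernetes_secret" "conn" {
  metadata {
    name = "service-connections"
  }

  data = {
    REDIS_HOST = "myredis.redis.cache.windows.net" # assumption
    REDIS_KEY  = var.redis_key
  }
}

resource "kubernetes_deployment" "crawler" {
  metadata {
    name = "crawler"
  }

  spec {
    replicas = 1

    selector {
      match_labels = { app = "crawler" }
    }

    template {
      metadata {
        labels = { app = "crawler" }
      }

      spec {
        container {
          name  = "crawler"
          image = "myregistry.azurecr.io/crawler:latest" # assumption

          # Every key in the Secret becomes an environment variable.
          env_from {
            secret_ref {
              name = kubernetes_secret.conn.metadata[0].name
            }
          }
        }
      }
    }
  }
}
```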

How to deploy AKS (Azure container service) in a VPN?

I want to deploy some kubernetes workloads, which are visible from some other VM's on Azure but not visible from the outside world.
For example: I might have a VM running a Zuul gateway which, for some routes, I want to redirect to the K8s cluster, yet I don't want to allow people to access my K8s cluster directly.
Is it possible to place my AKS cluster inside a VPN? If so, how should I achieve this?
In addition to the options pointed out by 4c74356b41, you can run an ingress controller on the cluster and limit it to your internal server's IP only; a sketch follows below.
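A sketch of that idea, assuming an already-installed ingress-nginx controller and a Zuul VM at 10.1.0.4 (both hypothetical): the controller's service goes on an internal load balancer, and only the gateway's address is whitelisted.

```hcl
# Hedged sketch: internal-only ingress controller service restricted to one IP.
resource "kubernetes_service" "ingress_nginx" {
  metadata {
    name      = "ingress-nginx-controller"
    namespace = "ingress-nginx"
    annotations = {
      "service.beta.kubernetes.io/azure-load-balancer-internal" = "true"
    }
  }

  spec {
    type = "LoadBalancer"

    # Only the Zuul gateway VM may reach the ingress (address is an assumption).
    load_balancer_source_ranges = ["10.1.0.4/32"]

    selector = {
      "app.kubernetes.io/name" = "ingress-nginx"
    }

    port {
      name        = "https"
      port        = 443
      target_port = 443
    }
  }
}
```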
So this isn't possible right now (at least out of the box), due to the nature of AKS being a service with no VNet integration as of yet. You can try to hack around this, but it will probably not work very well, as your agents need to talk to the master.
I see 2 options:
Use internal load balancers instead of public ones to expose your services
Use ACS, which has VNet integration, but I'm not sure if you can apply 2 routes to the same VNet

Azure k8s pods to cluster external communication via internal IPs

I am migrating from GCP to Azure. My use case is simple: I have a k8s cluster running some web crawlers which need to talk to Elastic and Cassandra clusters (not in the k8s cluster) using internal IPs. All of these components can be in the same Azure region (e.g. East US). I understand from this discussion that VNet peering is the way to go.
This solution did not work for me; I am still unable to reach my Cass/ES cluster from the pods. I believe this solution is outdated. Is there some other approach to accomplish this that I am missing?
We can use an Azure route table to achieve that; then you can reach pod IP addresses from outside your k8s VNet. A sketch follows below.
I have answered your other question here; please check it.
If you would like further assistance, please let me know :)
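To make that concrete, a hedged Terraform sketch of the route-table approach under kubenet: each node's pod CIDR is routed to that node's private IP, and the table is associated with the subnet hosting the Elastic/Cassandra VMs. All CIDRs, IPs, and names are assumptions.

```hcl
# Hypothetical sketch: let VMs in a peered subnet reach kubenet pod IPs.
resource "azurerm_route_table" "pods" {
  name                = "aks-pod-routes"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  route {
    name                   = "node0-pods"
    address_prefix         = "10.244.0.0/24" # pod CIDR assigned to node 0
    next_hop_type          = "VirtualAppliance"
    next_hop_in_ip_address = "10.240.0.4" # node 0's private IP
  }

  # ...one route per node, mirroring the routes AKS generates in the
  # MC_* resource group's route table.
}

resource "azurerm_subnet_route_table_association" "es_cass" {
  subnet_id      = azurerm_subnet.es_cass.id # subnet hosting Elastic/Cassandra
  route_table_id = azurerm_route_table.pods.id
}
```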
