How to deploy AKS (Azure container service) in a VPN?

I want to deploy some Kubernetes workloads that are visible to some other VMs on Azure but not visible to the outside world.
For example, I might have a VM running a Zuul gateway that redirects some routes to the K8s cluster, yet I don't want to allow people to access my K8s cluster directly.
Is it possible to place my AKS cluster inside a VPN? If so, how should I achieve this?

In addition to the options pointed out by 4c74356b41, you can run an ingress controller on the cluster and limit it to your internal server IP only.
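For instance, the NGINX ingress controller can restrict an Ingress to internal source addresses with an annotation. A minimal sketch, assuming a hypothetical backend Service named internal-api and a placeholder internal CIDR of 10.0.1.0/24 for your gateway VM:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-api            # hypothetical name
  annotations:
    # Only accept traffic originating from the internal subnet (placeholder CIDR)
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.1.0/24"
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: internal-api   # hypothetical backend Service
            port:
              number: 80
```

Requests from any other source IP are rejected by the controller with a 403.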

So this isn't possible right now (at least out of the box), because AKS is a managed service with no VNet integration as of yet. You can try to hack around this, but it will probably not work well, as your agents need to talk to the master.
I see 2 options:
Use internal load balancers instead of public ones to expose your services (a minimal example follows this list)
Use ACS, which has VNet integration, but I'm not sure if you can apply 2 routes to the same VNet
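To sketch the first option: a Service of type LoadBalancer gets an internal (VNet-only) Azure load balancer when it carries the azure-load-balancer-internal annotation. The service and selector names below are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service                                              # placeholder name
  annotations:
    # Provision an internal load balancer inside the VNet instead of a public one
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: my-app                                                 # placeholder selector
  ports:
  - port: 80
    targetPort: 8080
```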

Related

integrate azurerm_application_gateway with AKS with terraform

I am able to create an AKS cluster with advanced networking, and I am able to integrate an application load balancer with this AKS cluster, but I am unable to find any way to integrate an Azure Application Gateway with AKS.
Using Application Gateway as an Ingress controller for AKS is in a beta state at the moment (as shown on the GitHub page: https://github.com/Azure/application-gateway-kubernetes-ingress), so I don't believe there will be any support for setting it up with Terraform until it gets to GA.
You might be able to do something with a null_resource and a local-exec provisioner to set it up, but that would be up to you to figure out.
Unfortunately, it seems there is no way to integrate the Application Gateway with the AKS cluster directly, and you can see all the things you can set for AKS here.
But you can integrate the Application Gateway with the AKS cluster if you make use of the AKS internal load balancer and the Application Gateway backend pool addresses. You can take a look at the steps for integrating an Application Gateway with an AKS cluster.
First of all, you need to plan the AKS cluster network and pick an exact IP address to use as the Application Gateway backend pool address in Terraform. Hope this helps; if you have any more questions, you can message me.
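One way to get that exact address is to pin the AKS internal load balancer to a fixed private IP, which the Application Gateway backend pool then targets. A rough sketch, assuming a placeholder address 10.240.0.25 inside the AKS subnet and placeholder service/selector names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ui-internal                                             # placeholder name
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  # Fixed private IP inside the AKS subnet (placeholder); use this same
  # address as the Application Gateway backend pool entry in Terraform
  loadBalancerIP: 10.240.0.25
  selector:
    app: ui                                                     # placeholder selector
  ports:
  - port: 80
    targetPort: 8080
```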

Configuring an AKS load balancer for HTTPS access

I'm porting an application that was originally developed for the AWS Fargate container service to AKS under Azure. In the AWS implementation an application load balancer is created and placed in front of the UI microservice. This load balancer is configured to use a signed certificate, allowing https access to our back-end.
I've done some searches on this subject and how something similar could be configured in AKS. I've found a lot of different answers to this for a variety of similar questions but none that are exactly what I'm looking for. From what I gather, there is no exact equivalent to the AWS approach in Azure. One thing that's different in the AWS solution is that you create an application load balancer upfront and configure it to use a certificate and then configure an https listener for the back-end UI microservice.
In the Azure case, when you issue the "az aks create" command the load balancer is created automatically. There doesn't seem to be a way to do much configuration, especially as it relates to certificates. My impression is that the default load balancer created by AKS is ultimately not the mechanism to use for this. Another option might be an application gateway, as described here. I'm not sure how to adapt this discussion to AKS. The UI pod needs to be the ultimate target of any traffic coming through the application gateway, but the gateway uses a different subnet than the one used for the pods in the AKS cluster.
So I'm not sure how to proceed. My question is: Is the application gateway the correct solution to providing https access to a UI running in an AKS cluster or is there another approach I need to use?
You are right, the default load balancer created by AKS is a layer 4 LB and doesn't support SSL offloading. The equivalent of the AWS Application Load Balancer in Azure is the Application Gateway. As of now there is no option in AKS that lets you choose an Application Gateway instead of a classic load balancer, but like alev said, there is an ongoing project, still in preview, that will let you deploy a special ingress controller that drives the routing rules on an external Application Gateway based on your ingress rules. If you really need something that is production ready, here are your options:
Deploy an Ingress controller like NGINX, Traefik, etc. and use cert-manager to generate your certificate.
Create an Application Gateway and manage your own routing rule that will point to the default layer 4 LB (k8s LoadBalancer service or via the ingress controller)
We implemented something similar lately, and we decided to manage our own Application Gateway because we wanted to do the SSL offloading outside the cluster and because we needed the WAF feature of the Application Gateway. We were able to automatically manage the routing rules inside our deployment pipeline. We will probably switch to the Application Gateway ingress project when it is production ready.
Certificate issuing and renewal are not handled by the ingress, but using cert-manager you can easily add your own CA or use Let's Encrypt to automatically issue certificates when you annotate the ingress or service objects. The http_application_routing addon for AKS is perfectly capable of working with cert-manager; it can even be further configured using ConfigMaps (the addon-http-application-routing-nginx-configuration ConfigMap in the kube-system namespace). You can also look at the initial support for Application Gateway as an ingress here.
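To make the cert-manager part concrete, here is a rough sketch of an Ingress that uses the http_application_routing class and asks cert-manager for a certificate. This is not the original poster's exact setup: the host, Service name, TLS secret name, and the ClusterIssuer called letsencrypt are all placeholders you would have to create yourself:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ui-ingress                                   # placeholder name
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
    # Assumes a cert-manager ClusterIssuer named "letsencrypt" already exists
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  tls:
  - hosts:
    - ui.example.com                                 # placeholder host
    secretName: ui-tls                               # cert-manager stores the issued certificate here
  rules:
  - host: ui.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ui                                 # placeholder backend Service
            port:
              number: 80
```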

Network setup for accessing Azure Redis service from Azure AKS

We have an application that runs on an Ubuntu VM. This application connects to the Azure Redis, Azure Postgres and Azure Cosmos DB (MongoDB API) services.
I am currently working on moving this application to Azure AKS and intend to access all the above services from the cluster. The services will continue to be external and will not reside inside the cluster.
I am trying to understand how the network/firewall settings of both the services and AKS should be configured so that pods inside the cluster can access the above services, or any Azure service in general.
I tried the following:
Created a ConfigMap containing the connection params (public IP/address, username/pwd, port, etc.) of all the services and used this ConfigMap in the deployment resource.
Hardcoded the connection params of all the services as env vars inside the container image.
In the firewall/inbound rules of the services, I added the AKS API server IP and the individual node IPs.
None of the above worked. Did I miss anything? What else should be configured?
I tested the setup locally on minikube with all the services running on my local machine and it worked fine.
I am currently working on moving this application to Azure AKS and intend to access all the above services from the cluster.
I assume that you would like all the services to be able to access each other, and that all the services are in the AKS cluster? If so, I advise you to configure an internal load balancer in the AKS cluster.
Internal load balancing makes a Kubernetes service accessible to applications running in the same virtual network as the Kubernetes cluster.
You can try following this document: Use an internal load balancer with Azure Kubernetes Service (AKS). Good luck!
Outbound traffic in Azure is SNAT-translated, as stated in this article. If you already have a service in your AKS cluster, the outbound connection from all pods in your cluster will come through the first LoadBalancer-type service IP; I strongly suggest you create one for the sole purpose of having a consistent outbound IP. You can also pre-create a Public IP and use it as stated in this article, using the LoadBalancerIP spec.
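A minimal sketch of that last approach, assuming you have already created a static Public IP in the cluster's node resource group; the address 52.224.0.10 and the service/selector names are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: egress-pin                    # placeholder; exists mainly to pin the outbound IP
spec:
  type: LoadBalancer
  # Pre-created static Public IP (placeholder address); outbound traffic from the
  # cluster's pods is then SNATed behind this address
  loadBalancerIP: 52.224.0.10
  selector:
    app: my-app                       # placeholder selector
  ports:
  - port: 80
```

With a known outbound address you can whitelist just that IP in the firewall rules of Redis, Postgres and Cosmos DB instead of listing individual node IPs.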
On a side note: given the sensitivity of the connection strings, rather than a ConfigMap I'd suggest you create a Secret and pass that down to your Deployment to be mounted or exported as environment variables.
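A rough sketch of that, where every name and value below is a placeholder:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-connections                               # placeholder name
type: Opaque
stringData:
  REDIS_HOST: myredis.redis.cache.windows.net         # placeholder values
  REDIS_PASSWORD: "<redis-access-key>"
  POSTGRES_URL: "postgres://user:pwd@myserver.postgres.database.azure.com:5432/db"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                                        # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: myregistry.azurecr.io/my-app:latest    # placeholder image
        envFrom:
        - secretRef:
            name: app-connections                     # each key above becomes an env var
```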

Azure Container Services (AKS) - Exposing containers to other VNET resources

I am using Azure Container Services (AKS - not ACS) to stand up some APIs - some of which are for public consumption, some of which are not.
For the public access route everything is as you might expect, a load-balancer service bound to a public IP is created, DNS zone contains our A record forwarding to the public IP, traffic is routed through to an NGINX controller and then onwards to the correct internal service endpoints.
Currently the preview version creates a new VNet to place the AKS resource group in; moving forwards I will place the AKS instance inside an already existing VNet which houses other components (App Services, on an App Service Environment).
My question is: how do I grant access to the private APIs to other components inside the same VNet, as well as to components in other VNets?
I believe AKS supports an ILB-type load balancer, which I think might be what is required for routing traffic from other VNets? But what about components that already reside inside the same VNet?
Thank you in advance!
If you need to access these services from other services outside the AKS cluster, you still need an ILB to load-balance across your service on the different nodes in your cluster. You can use the ILB that is created when you add the internal-load-balancer annotation to your service. The alternative is using NodePort and then stringing up your own way to spread the traffic across all the nodes that host the endpoints.
I would use an ILB instead of trying to make your own using NodePort service types. The only other thing would perhaps be using some type of API gateway VM inside your VNet where you can define the backend pool; that may be a solution if you are hosting APIs through a 3rd-party API gateway hosted on an Azure VM in the same VNet.

Create loadbalancer inside a vnet with azure

I want to create a load balancer for all my agents.
In the official docs I found a guide for an external load balancer, but I want to connect it to the API Management service, so it has to be visible only inside the VNet.
This post works if you only have one agent (you enter the private IP of the agent in your API route), but it does not handle the second agent:
Is it possible to use Azure API Management and Azure ACS (kubernetes) as frontend and backend?
So in my case I need to create a load balancer that handles all agents for the service and has a private IP in the same VNet that the API Management service is in.
Well, nothing prevents you from connecting API Management to an external endpoint, so there's that.
And if you really want an internal endpoint, I doubt that it is possible, since a NIC can only be attached to a single load balancer. Maybe if you detach the agent NICs from the external load balancer and attach them to an internal load balancer... that might work, but it looks like a solid hack.
Another way around this might be to use acs-engine to generate a template for you and alter the template to deploy an internal load balancer.
As 4c74356b41 said, we can't add a VM to two backend pools (if your K8s cluster was created via the Azure portal, the agents are in a VMSS).
In your scenario, I think you can create a VM in the ACS resource group and install load-balancing software on it, making this VM work as a load balancer.
For example, you can use HAProxy to load-balance the network traffic to the agents.
