Azure Kubernetes Service with custom routing tables

We are trying to deploy a Kubernetes cluster using Azure Kubernetes Service (AKS) into our existing virtual network. This virtual network has custom route tables.
The deployment is performed by an external application, which must be granted permissions through a service principal. The documentation says under the Limitations section:
Permissions must be assigned before cluster creation, ensure you are using a service principal with write permissions to your custom subnet and custom route table.
Our security team is responsible for granting permissions to service principals and for managing networking. Without knowing exactly what rules AKS will write into the route tables, they won't grant those permissions to the service principal.
Does somebody know what rules AKS wants to write into those route tables?

The documentation you are pointing to is for a cluster using kubenet networking. Is there a reason why you don't want to use Azure CNI instead? With Azure CNI you will of course consume more IP addresses, but AKS will not need to write into the route table.
With that said, if you really want to use kubenet, the rules written to the route table will depend on what you deploy inside your cluster, since kubenet uses the route table to route pod traffic. AKS adds and updates rules throughout the cluster lifecycle, most notably as nodes are added or removed: typically one route per node, pointing that node's pod CIDR (for example 10.244.0.0/24) to the node's private IP address as the next hop.

Related

How can I easily lock-down access to an Azure resource?

I have a resource (specifically, a Kubernetes service deployed to my AKS cluster) to which I want to limit access. I've looked through the MSDN documentation on What is Azure Virtual Network?, VPN Gateway design, and more, but I don't see a clear way that I can either:
Require AAD authentication before a specific IP/Port is accessed, or
Whitelist access coming from a specific IP/subnet (e.g., specifying CIDR format www.xxx.yyy.zzz/nn that should get access).
There seem to be ways to restrict access that require installing a RADIUS VPN client, but I don't want to require this. It seems like there are a ton of hoops to jump through -- is there a way I can block all incoming traffic to my AKS cluster except from specific AAD roles or from specific IP ranges?
It would be helpful to understand what you intend to use AKS for (web site, batch computing, etc.)
First, you should fully explore the networking options offered by the service itself. Start by locking access down to your personal IP address; based on the Azure docs, the service will likely append a deny-all rule to the end of your networking rules. To find the IP address it sees you coming from, try IP Chicken.
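If the service is exposed with a LoadBalancer-type Kubernetes service, the built-in way to do that lockdown is the loadBalancerSourceRanges field, which Azure translates into NSG rules on the cluster. A minimal sketch; the service name, ports, and CIDR range are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app               # hypothetical service name
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 443
      targetPort: 8443
  # Only these CIDR ranges may reach the load balancer;
  # Azure enforces this via network security group rules.
  loadBalancerSourceRanges:
    - 203.0.113.0/24
```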
I offer two additional options: Application Gateway or API Management.
One way to lock this down, based on the information you shared, is Application Gateway.
Application Gateway (Product Page)
Ingress Controller for AKS
"Application Gateway Ingress Controller (AGIC) allows you to use Application Gateway as the ingress for an Azure Kubernetes Service (AKS) cluster." - from Azure Docs
API Management
You also have API Management, paired with policies on that resource. It can restrict by AAD (check the pricing tier for details) and by IP address (on any pricing tier).
If the organic networking options of AKS don't cover your use case, I would choose API Management; the price and options are a better fit for what you seem to be aiming for.

Subnets in kubernetes

I'm still experimenting with migrating my Service Fabric application to Kubernetes. In Service Fabric I have two node types (effectively subnets), and for each service I configure which subnet it will be deployed to. I cannot see an equivalent option in the Kubernetes yml file. Is this possible in Kubernetes?
First of all, they are not effectively subnets; they could all be in the same subnet. But with AKS you have node pools. Similarly to Service Fabric, those could be in different subnets (inside the same vnet, afaik). You would then use nodeSelectors to assign pods to nodes in a specific node pool.
The same principle applies if you are creating the Kubernetes cluster yourself: you would need to label the nodes and use nodeSelectors to target specific nodes for your deployments.
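For example, a hedged sketch of pinning a Deployment to an AKS node pool with a nodeSelector; the pool name backendpool and the image are placeholders, and this assumes the AKS-provided agentpool node label:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      # Schedule only onto nodes of the 'backendpool' node pool;
      # AKS labels each node with its pool name under 'agentpool'.
      nodeSelector:
        agentpool: backendpool
      containers:
        - name: backend
          image: myregistry.azurecr.io/backend:latest  # placeholder image
```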
In Azure, the AKS cluster can be deployed to a specific subnet. If you are looking for deployment-level isolation, deploy the two node types to different namespaces in the k8s cluster. That way the node types are isolated and can be reached using the service name and namespace combination.
I want my backend services that access my SQL database to be in a different subnet from the front end. That way I can limit access to the DB to the backend subnet only.
This is an older way to approach network security. A more modern way is called Zero Trust Networking; see e.g. BeyondCorp at Google or the book Zero Trust Networks.
Limit who can access what
The Kubernetes way to limit what service can access what service is by using Network Policies.
A network policy is a specification of how groups of pods are allowed to communicate with each other and other network endpoints.
NetworkPolicy resources use labels to select pods and define rules which specify what traffic is allowed to the selected pods.
This is a more software-defined way to limit access than the older subnet-based approach, and the rules are more declarative, using Kubernetes labels instead of hard-to-understand IP numbers.
This should be used in combination with authentication.
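As a minimal sketch (all names and the port are placeholders), a NetworkPolicy that only admits traffic to the backend pods from pods labelled app: frontend:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  # The policy applies to the backend pods...
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    # ...and only admits traffic from pods labelled app: frontend.
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Note that on AKS, NetworkPolicy resources only take effect if the cluster was created with a network policy option (Azure or Calico).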
For inspiration, also read We built network isolation for 1,500 services to make Monzo more secure.
In the Security team at Monzo, one of our goals is to move towards a completely zero trust platform. This means that in theory, we'd be able to run malicious code inside our platform with no risk – the code wouldn't be able to interact with anything dangerous without the security team granting special access.
Istio Service Entry
In addition, if you are using Istio, you can limit access to external services by using an Istio Service Entry. It is also possible to get the same behavior with custom Network Policies, e.g. with Cilium.
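For illustration, a hedged ServiceEntry sketch that allowlists a single external host; the host is a placeholder, and this assumes the mesh's outboundTrafficPolicy is set to REGISTRY_ONLY so undeclared hosts are blocked:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-api
spec:
  # With outboundTrafficPolicy mode REGISTRY_ONLY, only hosts
  # declared in a ServiceEntry are reachable from the mesh.
  hosts:
    - api.example.com        # placeholder external host
  location: MESH_EXTERNAL
  ports:
    - number: 443
      name: https
      protocol: TLS
  resolution: DNS
```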

integrate azurerm_application_gateway with AKS with terraform

I am able to create an AKS cluster with advanced networking, and I am able to integrate an application load balancer with this AKS cluster, but I am unable to find any way to integrate an Azure application gateway with AKS.
Using Application Gateway as an ingress controller for AKS is in a beta state at the moment (as shown on the GitHub page - https://github.com/Azure/application-gateway-kubernetes-ingress), so I don't believe there will be any support for setting it up with Terraform until it reaches GA.
You might be able to do something with exec resources to set it up, but that would be up to you to figure out.
Unfortunately, it seems there is no way to integrate the application gateway with the AKS cluster directly. You can see all the things you can set for AKS here.
But you can integrate the application gateway with the AKS cluster if you know the AKS internal load balancer address and use it as the application gateway backend pool address. You can take a look at the steps for how to integrate an application gateway with an AKS cluster.
First of all, you need to plan the AKS cluster network and choose an exact IP address for the application gateway backend pool address in Terraform. Hope this helps; if there are any more questions, you can message me.
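On the Kubernetes side, one way to get that exact address is an internal load balancer service with a pinned frontend IP, which you then register as the Application Gateway backend pool address. A hedged sketch; 10.240.0.25 stands in for a free address in your AKS subnet, and the names are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-internal
  annotations:
    # Ask Azure for an internal (VNet-only) load balancer
    # instead of a public one.
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  # Pin the frontend IP so it can be referenced as the
  # Application Gateway backend pool address in Terraform.
  loadBalancerIP: 10.240.0.25
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```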

Network setup for accessing Azure Redis service from Azure AKS

We have an application that runs on an Ubuntu VM. This application connects to the Azure Redis, Azure Postgres, and Azure Cosmos DB (MongoDB) services.
I am currently working on moving this application to Azure AKS and intend to access all the above services from the cluster. The services will continue to be external and will not reside inside the cluster.
I am trying to understand how the network/firewall of both the services and AKS should be configured so that pods inside the cluster can access the above services, or any Azure service in general.
I tried the following:
Created a ConfigMap containing the connection params (public IP/address, username/pwd, port, etc.) of all the services and used this ConfigMap in the Deployment resource.
Hardcoded the connection params of all the services as env vars inside the container image
In the firewall/inbound rules of the services, I added the AKS API IP and the individual node IPs.
None of the above worked. Did I miss anything? What else should be configured?
I tested the setup locally on minikube with all the services running on my local machine and it worked fine.
I am currently working on moving this application to Azure AKS and intend to access all the above services from the cluster.
I assume that you would like to make all the services able to access each other, and that all the services are in the AKS cluster? If so, I advise you to configure an internal load balancer in the AKS cluster.
Internal load balancing makes a Kubernetes service accessible to applications running in the same virtual network as the Kubernetes cluster.
You can try it by following this document: Use an internal load balancer with Azure Kubernetes Service (AKS). In the end, good luck to you!
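For reference, a minimal sketch of such an internal load balancer service, assuming the Azure annotation described in that document; the service name and ports are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: internal-app         # hypothetical service name
  annotations:
    # Provision the Azure load balancer with a private frontend IP,
    # so the service is reachable from inside the VNet only.
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: internal-app
  ports:
    - port: 80
      targetPort: 8080
```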
Outbound traffic in Azure is SNAT-translated, as stated in this article. If you already have a service in your AKS cluster, the outbound connections from all pods in your cluster will go out through the first LoadBalancer-type service IP; I strongly suggest you create one for the sole purpose of having a consistent outbound IP. You can also pre-create a public IP and use it as stated in this article, using the LoadBalancerIP spec.
On a side note, rather than a ConfigMap, due to the sensitivity of the connection string, I'd suggest you create a Secret and pass that down to your Deployment to be mounted or exposed as an environment variable.
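A minimal sketch of that Secret-based wiring; the names, image, and connection string are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: redis-conn           # hypothetical secret name
type: Opaque
stringData:
  # stringData lets you supply the value in plain text;
  # Kubernetes stores it base64-encoded.
  REDIS_CONNECTION_STRING: "redis://:password@myredis.redis.cache.windows.net:6380"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: myregistry.azurecr.io/my-app:latest  # placeholder image
          # Export every key of the Secret as an environment variable.
          envFrom:
            - secretRef:
                name: redis-conn
```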

How to deploy AKS (Azure container service) in a VPN?

I want to deploy some kubernetes workloads, which are visible from some other VM's on Azure but not visible from the outside world.
For example: I might have a VM running a Zuul Gateway which for some routes I want to redirect to the K8s cluster, yet I don't want to allow people to directly access my K8s cluster.
Is it possible to place my AKS cluster inside a VPN? If so, how should I achieve this?
In addition to the options pointed out by 4c74356b41, you can run an ingress controller on the cluster and limit it to your internal server IP only.
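If that ingress controller is, say, ingress-nginx, the lockdown could look like this hedged sketch; the annotation is ingress-nginx-specific, and the hostname, backend service, and the 10.0.1.4/32 source address (your gateway VM) are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-only
  annotations:
    # Only accept traffic originating from the gateway VM's address.
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.1.4/32"
spec:
  rules:
    - host: internal.example.com   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app       # hypothetical backend service
                port:
                  number: 80
```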
So this isn't possible now (at least not out of the box), because AKS is a service with no VNet integration as of yet. You can try to hack around this, but it will probably not work really well, as your agents need to talk to the master.
I see 2 options:
Use internal load balancers instead of public ones to expose your services
Use ACS, which has VNet integration, but I'm not sure if you can apply 2 routes to the same vnet
