How to enable Calico network policy on an existing AKS cluster - Azure

I want to enable the Calico network policy on an existing AKS cluster. Is there any way to do that?
I found the following statement in the official Microsoft documentation: "You can't enable network policy on an existing AKS cluster. To use Azure Network Policy, you must use the Azure CNI plug-in and define your own virtual network and subnets."
I'm raising this question because we may need to implement this on existing production-level clusters.
If there is no option, then for this one change we would have to run several time-consuming operations on those production clusters before we could enable the policy.
Please help me with this.
Thank you.

I'm afraid you can't enable network policy on an existing AKS cluster. As the documentation states:
The network policy feature can only be enabled when the cluster is
created. You can't enable network policy on an existing AKS cluster.

Yes, right now there is no option to add network policy to an existing AKS cluster; we have to recreate the entire cluster environment. I am in the same situation and need to do it in multiple environments.

Has there been any update about this in 2022? I would like to implement this, but destroying all the production clusters just for this would be a nightmare, not to mention the required downtime.
Can we install Calico some other way, e.g. via a Helm chart, so that we don't need to destroy the AKS cluster?
I am also open to other tools that do the same job. My goal is to restrict/control inbound/outbound pod traffic for increased security.

Related

Azure Kubernetes Service with custom routing tables

We are trying to deploy a Kubernetes cluster with the help of Azure Kubernetes Service (AKS) into our existing virtual network. This virtual network has custom route tables.
The deployment process is done via an external application. Permissions should be given to this application with the help of a service principal. The documentation says under the Limitations section:
Permissions must be assigned before cluster creation, ensure you are using a service principal with write permissions to your custom subnet and custom route table.
We have a security team that is responsible for giving permissions to service principals and for managing networking. Without knowing exactly what rules AKS will write into the route tables, they won't give the permission to the proper service principal.
Does somebody know what rules AKS wants to write into those route tables?
The documentation you are pointing to is for a cluster using kubenet networking. Is there a reason why you don't want to use Azure CNI instead? If you are using Azure CNI, you will of course consume more IP addresses, but AKS will not need to write into the route table.
With that said, if you really want to use kubenet, the rules that will be written to the route table depend on what you deploy inside your cluster, since kubenet uses the route table to route the traffic. It will add rules throughout the cluster lifecycle as you add Pods, Services, etc.

Subnets in Kubernetes

I'm still experimenting with migrating my Service Fabric application to Kubernetes. In Service Fabric I have two node types (effectively subnets), and for each service I configure which subnet it will be deployed to. I cannot see an equivalent option in the Kubernetes YAML file. Is this possible in Kubernetes?
First of all, they are not effectively subnets; they could all be in the same subnet. But with AKS you have node pools. Similarly to Service Fabric, those could be in different subnets (inside the same VNet, AFAIK). Then you would use nodeSelectors to assign pods to nodes in a specific node pool.
The same principle applies if you are creating a Kubernetes cluster yourself: you would need to label nodes and use nodeSelectors to target specific nodes for your deployments, as in the sketch below.
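To make that concrete, here is a minimal sketch of a Deployment pinned to one node pool via a nodeSelector. The pool name backendpool, the workload name and the image are assumptions for illustration; AKS labels nodes with their node pool name under the agentpool key, which you can confirm with kubectl get nodes --show-labels.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-api              # hypothetical workload name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend-api
  template:
    metadata:
      labels:
        app: backend-api
    spec:
      nodeSelector:
        agentpool: backendpool   # assumed node pool name; check your nodes' labels
      containers:
      - name: api
        image: nginx:1.25        # placeholder image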
In Azure, the AKS cluster can be deployed to a specific subnet. If you are looking for deployment-level isolation, deploy the two node types to different namespaces in the Kubernetes cluster. That way the node types get isolation and can be reached using the service name and namespace combination.
I want my backend services that access my SQL database to be in a different subnet from the front-end. That way I can limit access to the DB to the backend subnet only.
This is an older way to approach network security. A more modern way is called Zero Trust Networking; see e.g. BeyondCorp at Google or the book Zero Trust Networks.
Limit who can access what
The Kubernetes way to limit what service can access what service is by using Network Policies.
A network policy is a specification of how groups of pods are allowed to communicate with each other and other network endpoints.
NetworkPolicy resources use labels to select pods and define rules which specify what traffic is allowed to the selected pods.
This is a more software-defined way to limit access than the older subnet-based approach, and the rules are more declarative, using Kubernetes labels instead of hard-to-understand IP numbers (see the example below).
This should be used in combination with authentication.
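As a rough sketch, a NetworkPolicy that only lets pods labelled app: frontend reach pods labelled app: backend on port 8080 could look like this (all names, labels and the port are made up for illustration):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend   # hypothetical policy name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend                  # the pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend             # only traffic from these pods is allowed in
    ports:
    - protocol: TCP
      port: 8080

Keep in mind that NetworkPolicy objects are only enforced when the cluster runs a network plugin that supports them, such as Calico or Cilium.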
For inspiration, also read We built network isolation for 1,500 services to make Monzo more secure.
In the Security team at Monzo, one of our goals is to move towards a completely zero trust platform. This means that in theory, we'd be able to run malicious code inside our platform with no risk – the code wouldn't be able to interact with anything dangerous without the security team granting special access.
Istio Service Entry
In addition, if you are using Istio, you can limit access to external services by using an Istio ServiceEntry. It is possible to achieve the same behavior with custom network policies as well, e.g. with Cilium.
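For example, a ServiceEntry that registers an external HTTPS API might look roughly like this (the host name is a placeholder); note that this only restricts traffic if the mesh's outboundTrafficPolicy is set to REGISTRY_ONLY, otherwise unknown destinations are allowed by default:

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-payments-api      # hypothetical name
spec:
  hosts:
  - api.example-payments.com       # placeholder external host
  location: MESH_EXTERNAL          # the service lives outside the mesh
  ports:
  - number: 443
    name: https
    protocol: TLS
  resolution: DNS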

Integrate azurerm_application_gateway with AKS using Terraform

I am able to create an AKS cluster with advanced networking and to integrate an application load balancer with this AKS cluster, but I am unable to find any way to integrate the Azure Application Gateway with AKS.
Using Application Gateway as an ingress controller for AKS is in a beta state at the moment (as shown on the GitHub page - https://github.com/Azure/application-gateway-kubernetes-ingress), so I don't believe there will be any support for setting it up with Terraform until it gets to GA.
You might be able to do something with exec resources to set it up, but that would be up to you to figure out.
Unfortunately, it seems there is no way to integrate the Application Gateway with the AKS cluster directly, and you can see all the things you can set for AKS here.
But you can integrate the Application Gateway with the AKS cluster once you know the AKS internal load balancer address and the Application Gateway backend pool addresses (see the Service sketch below). You can take a look at the steps for how to integrate an Application Gateway with an AKS cluster.
First of all, you need to plan the AKS cluster network and fix an exact IP address for the Application Gateway backend pool in Terraform. Hope this helps; if there are any more questions, you can leave me a message.
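To illustrate the AKS side of that plan, an internal Azure load balancer with a fixed private IP can be requested with annotations on a Service; that private IP is then what the Application Gateway backend pool would point at. The service name, subnet name and IP below are assumptions for your own network plan:

apiVersion: v1
kind: Service
metadata:
  name: myapp-internal             # hypothetical service name
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"             # ask AKS for an internal (private) load balancer
    service.beta.kubernetes.io/azure-load-balancer-internal-subnet: aks-subnet  # assumed subnet name
spec:
  type: LoadBalancer
  loadBalancerIP: 10.240.0.25      # assumed static private IP, reused in the App Gateway backend pool
  selector:
    app: myapp                     # assumed pod label
  ports:
  - port: 80
    targetPort: 8080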

kubectl exec vs ssh using bastion

kops lets us create a Kubernetes cluster along with a bastion that has SSH access to the cluster nodes.
With this setup, is it still considered safe to use kubectl to interact with the Kubernetes API server?
kubectl can also be used to open a shell on the pods; does this need any restrictions?
What are the precautionary steps that need to be taken, if any?
Should the Kubernetes API server also be made accessible only through the bastion?
Deploying a Kubernetes cluster with the default kops settings isn't secure at all and shouldn't be used in production as such. There are multiple configuration settings that can be changed using the kops edit command. The following points should be considered after creating a Kubernetes cluster via kops:
Cluster nodes in private subnets (existing private subnets can be specified using --subnets with the latest version of kops)
Private API load balancer (--api-loadbalancer-type internal)
Restrict the API load balancer to a certain private IP range (--admin-access 10.xx.xx.xx/24)
Restrict SSH access to the cluster nodes to a particular IP (--ssh-access xx.xx.xx.xx/32)
A hardened image can also be provisioned for the cluster nodes (--image )
The authorization mode must be RBAC. With the latest Kubernetes versions, RBAC is enabled by default.
Audit logs can be enabled via configuration in kops edit cluster:
kubeAPIServer:
  auditLogMaxAge: 10
  auditLogMaxBackups: 1
  auditLogMaxSize: 100
  auditLogPath: /var/log/kube-apiserver-audit.log
  auditPolicyFile: /srv/kubernetes/audit.yaml
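The auditPolicyFile referenced above needs to exist on the master; a minimal audit policy is sketched here (this particular rule set is just an example, not a recommendation):

# /srv/kubernetes/audit.yaml - minimal example policy
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata        # log request metadata for every request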
kops provides reasonable defaults, so the simple answer is: it is reasonably safe to use kops-provisioned infrastructure as-is after provisioning.

Specify network security group for docker-machine to use

I'm getting started using docker-machine on my Windows 2016 box. I'm trying to create some VMs in Azure, but I have a particular network security group that I want it to use, which already exists in Azure. I ran docker-machine create --driver azure and looked over the help text, which tells me how to set the resource group, subnet, etc., but I don't see an option for a network security group. Is there a way to specify an existing network security group for docker-machine to use when creating VMs in Azure?
OK, so according to the documentation, you should use the subnet/VNet or availability set options. The reason you are asking this is that you don't understand how NSGs work in Azure. NSGs are attached to a subnet or a network interface, so deploying a VM or container into that subnet will effectively apply that NSG to the entity you are deploying. But as the documentation states: "Once the machine is created, you can modify Network Security Group rules and open ports of the machine from the Azure Portal."
So I suppose it creates a new NSG each time you deploy something, so there's no way to achieve what you are trying to do (at least for now).
What you could try is deploying to an existing VNet and checking whether a new NSG is created specifically for the container host you are deploying. If none is created and you have an NSG in place, you've achieved exactly what you want.
