Subnets in Kubernetes - Azure

I'm still experimenting with migrating my Service Fabric application to Kubernetes. In Service Fabric I have two node types (effectively subnets), and for each service I configure which subnet it will be deployed to. I cannot see an equivalent option in the Kubernetes YAML file. Is this possible in Kubernetes?

First of all, they are not effectively subnets; they could all be in the same subnet. But with AKS you have node pools. Similarly to Service Fabric, those could be in different subnets (inside the same VNet, as far as I know). Then you would use nodeSelectors to assign pods to nodes in a specific node pool.
The same principle applies if you are creating a Kubernetes cluster yourself: you would need to label the nodes and use nodeSelectors to target specific nodes for your deployments.
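For illustration, a minimal sketch of such a Deployment, assuming an AKS node pool named backendpool (AKS labels its nodes with agentpool=<pool name>; on a self-built cluster you would apply the label yourself):

```yaml
# Minimal sketch: pin a Deployment to one node pool via nodeSelector.
# Assumes a node pool named "backendpool"; AKS applies the label
# agentpool=<pool name> to its nodes. On a self-managed cluster, label
# the nodes first: kubectl label nodes <node-name> agentpool=backendpool
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-api                  # hypothetical service name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend-api
  template:
    metadata:
      labels:
        app: backend-api
    spec:
      nodeSelector:
        agentpool: backendpool       # only schedule onto nodes in this pool
      containers:
        - name: backend-api
          image: myregistry.azurecr.io/backend-api:latest   # placeholder image
```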

In Azure the AKS cluster can be deployed to a specific subnet. If you are looking for deployment-level isolation, deploy the two node types to different namespaces in the k8s cluster. That way the node types get isolation and can be reached using the service name and namespace combination (e.g. backend-svc.backend.svc.cluster.local).

I want my backend services, which access my SQL database, in a different subnet from the front-end. This way I can limit access to the DB to the backend subnet only.
This is an older way to approach network security. A more modern way is called Zero Trust Networking; see e.g. BeyondCorp at Google or the book Zero Trust Networks.
Limit who can access what
The Kubernetes way to limit what service can access what service is by using Network Policies.
A network policy is a specification of how groups of pods are allowed to communicate with each other and other network endpoints.
NetworkPolicy resources use labels to select pods and define rules which specify what traffic is allowed to the selected pods.
This is a more software-defined way to limit access than the older subnet-based approach, and the rules are more declarative, using Kubernetes labels instead of hard-to-understand IP numbers.
This should be used in combination with authentication.
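As a sketch, a label-based policy could look like the following; the pod labels, port, and the prod namespace are assumptions for illustration, not taken from the question:

```yaml
# Sketch: only pods labelled app=frontend may reach pods labelled
# app=backend on TCP 8080. All names and the namespace are illustrative.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: prod
spec:
  podSelector:
    matchLabels:
      app: backend           # the policy applies to these pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend  # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080
```

Restricting which pods may reach an external database would additionally need an egress rule (or the Istio approach below).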
For inspiration, also read We built network isolation for 1,500 services to make Monzo more secure.
In the Security team at Monzo, one of our goals is to move towards a completely zero trust platform. This means that in theory, we'd be able to run malicious code inside our platform with no risk – the code wouldn't be able to interact with anything dangerous without the security team granting special access.
Istio Service Entry
In addition, if using Istio, you can limit access to external services by using an Istio Service Entry. It is possible to use custom Network Policies for the same behavior as well, e.g. with Cilium.
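A minimal sketch of a ServiceEntry registering an external SQL endpoint; the hostname is a placeholder, and blocking unregistered hosts additionally assumes the mesh is configured with outboundTrafficPolicy mode REGISTRY_ONLY:

```yaml
# Sketch: register an external database with the mesh so sidecars may
# reach it. The hostname below is a placeholder for your own server.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-sql
spec:
  hosts:
    - mydb.database.windows.net   # hypothetical Azure SQL hostname
  location: MESH_EXTERNAL
  ports:
    - number: 1433
      name: tcp-sql
      protocol: TCP
  resolution: DNS
```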

Related

Azure Kubernetes Service with custom routing tables

We are trying to deploy a Kubernetes cluster with help of Azure Kubernetes Service (AKS) to our existing virtual network. This virtual network has custom route tables.
The deployment process is done via an external application. Permissions should be given to this application with the help of a Service Principal. As the documentation says under the Limitations section:
Permissions must be assigned before cluster creation, ensure you are using a service principal with write permissions to your custom subnet and custom route table.
We have a security team which is responsible for granting permissions to service principals and managing networking. Without knowing exactly what rules AKS will write into the route tables, they won't give the permission to the proper service principal.
Does somebody know what rules the AKS wants to write into those route tables?
The documentation you are pointing to is for a cluster using Kubenet networking. Is there a reason why you don't want to use Azure CNI instead? If you are using Azure CNI, you will of course consume more IP addresses, but AKS will not need to write into the route table.
With that said, if you really want to use Kubenet, the rules that will be written to the route table will depend on what you are deploying inside your cluster, since Kubenet uses the route table to route the traffic... It will add rules throughout the cluster lifecycle as you add Pods, Services, etc.

How can I easily lock-down access to an Azure resource?

I have a resource (specifically, a Kubernetes service deployed to my AKS cluster) to which I want to limit access. I've looked through the MSDN documentation on What is Azure Virtual Network?, VPN Gateway design, and more, but I don't see a clear way that I can either:
Require AAD authentication before a specific IP/Port is accessed, or
Whitelist access coming from a specific IP/subnet (eg, specifying CIDR format www.xxx.yyy.zzz/nn that should get access).
There seem to be ways to restrict access that require me to install a RADIUS VPN client, but I don't want to require this. It seems like there are a ton of hoops to jump through -- is there a way I can block all incoming traffic to my AKS cluster except from specific AAD roles or from specific IP ranges?
It would be helpful to understand what you intend to use AKS for (web site, batch computing, etc.).
First you should fully explore the networking options offered by the service itself. Start with locking down to your personal IP address; the service will likely (based on the Azure docs) append a deny-all to the end of your networking rules. To get the IP address it sees you from, try IP Chicken.
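As a sketch of that kind of lock-down at the Kubernetes level, a LoadBalancer Service can restrict callers with loadBalancerSourceRanges (on AKS this is applied as NSG rules); the service name, ports, and CIDR below are placeholders:

```yaml
# Sketch: only allow the listed source CIDRs to reach this public
# LoadBalancer Service; other sources are denied. Replace the CIDR
# with the IP address IP Chicken reports for you.
apiVersion: v1
kind: Service
metadata:
  name: my-service              # hypothetical service name
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
    - 203.0.113.45/32           # placeholder: your own IP
  selector:
    app: my-app
  ports:
    - port: 443
      targetPort: 8443
```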
I offer two additional options: Application Gateway or API Management.
One way to lock this down, based on the information you shared, is Application Gateway.
Application Gateway (Product Page)
Ingress Controller for AKS
"Application Gateway Ingress Controller (AGIC) allows you to use Application Gateway as the ingress for an Azure Kubernetes Service (AKS) cluster." - from Azure Docs
API Management
You also have API Management paired with policies on that resource. It can restrict by AAD (check pricing tier for details) and IP address (on any pricing tier).
If the built-in networking options of AKS don't cover your use case, I would choose API Management. Price and options are better for what it seems you are aiming for.

Multiple Azure App Gateway for Different namespaces in Azure AKS

I am trying to create an Application Gateway for AKS. My requirement is to create multiple Application Gateways, one for each namespace in AKS.
Is it possible to do so? And additionally, can I use the ingress controller for load balancing in each namespace?
To sum it up: you can attach an Application Gateway (or multiple ones) like you normally would. Application Gateways are not aware of k8s primitives, so they cannot really route to a namespace; they will route to the node, and your ingress/service should handle it.
But there is an Application Gateway Ingress controller available (currently not GA), which can do that for you. You define ingress resources and it will configure the Application Gateway according to those. I'm not sure if it can configure multiple gateways, but you don't really need multiple unless you exceed the available inbound ports.
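For illustration, a sketch of how one Application Gateway (through the ingress controller) could serve two namespaces by hostname; the namespaces, hosts, and service names are assumptions:

```yaml
# Sketch: two Ingress resources in different namespaces, both picked up
# by the Application Gateway Ingress Controller and routed by host.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: team-a-ingress
  namespace: team-a
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
    - host: team-a.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: team-a-svc      # hypothetical Service in team-a
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: team-b-ingress
  namespace: team-b
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
    - host: team-b.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: team-b-svc      # hypothetical Service in team-b
                port:
                  number: 80
```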

How Failover works when Primary VM Set get restarted?

Above is a sample configuration for Azure Service Fabric.
I created it with the wizard, and I have deployed one ASP.NET Core application that I am able to access from outside.
Now, if you look at the image below, Service Fabric is being accessed at sfclustertemp.westus2.cloudapp.azure.com. I am able to access the application at
sfclustertemp.westus2.cloudapp.azure.com/api/values.
Now, if I restart the primary VM scale set, it should transfer the load to the secondary. I thought this would happen automatically, but it does not, as the second load balancer has a different DNS name. (If I specify the different DNS name, then it is accessible.)
My understanding is that the cluster has one ID, so it is common to both load balancers.
Is such configuration possible ?
Maybe you could use Azure Traffic Manager with health probes.
However, instead of using multiple node types for fail-over options during reboot, have a look at 'Durability tiers'. Using Silver or Gold will have the effect that reboots are performed sequentially on machine groups (grouped by fault domain), instead of all at once.
The durability tier is used to indicate to the system the privileges that your VMs have with the underlying Azure infrastructure. In the primary node type, this privilege allows Service Fabric to pause any VM level infrastructure request (such as a VM reboot, VM reimage, or VM migration) that impact the quorum requirements for the system services and your stateful services.
There is a misconception about what an SF cluster is.
On your diagram, the part you describe on the left as 'Service Fabric' does not belong there.
Service Fabric is nothing more than applications and services deployed on the cluster nodes. When you create a cluster, you define a primary node type; that is where Service Fabric will deploy the services used for managing the cluster.
A node type is formed by:
A VM Scale Set: machines with the OS and SF services installed
A load balancer with a DNS name and IP, forwarding requests to the VM Scale Set
So what you describe there should be represented as:
NodeTypeA (Primary)
Load Balancer (cluster domain + IP)
VM Scale Set
SF management services (explorer, DNS)
Your applications
NodeTypeB
Load Balancer (other dns + IP)
VM Scale Set
Your applications
Given that:
The first concern is: if the primary node type goes down, you will lose your cluster, because the management services won't be available to manage your service instances.
Second: you shouldn't rely on node types for this kind of reliability; you should increase the reliability of your cluster by adding more nodes to the node types.
Third: if the concern is a data center outage, you could:
Create a custom cluster that spans multiple regions
Add a reverse proxy or API gateway in front of your service to route the request wherever your service is.

How to deploy AKS (Azure container service) in a VPN?

I want to deploy some kubernetes workloads, which are visible from some other VM's on Azure but not visible from the outside world.
For example: I might have a VM running a Zuul Gateway which for some routes I want to redirect to the K8s cluster, yet I don't want to allow people to directly access my K8s cluster.
Is it possible to place my AKS inside a VPN? If so, how should I achieve this?
In addition to the options pointed out by 4c74356b41, you can run an ingress controller on the cluster and limit it to your internal server IP only.
So this isn't possible now (at least out of the box), due to the nature of AKS being a service with no VNet integration as of yet. You can try to hack around this, but it will probably not work really well, as your agents need to talk to the master.
I see 2 options:
Use internal load balancers instead of public ones to expose your services (see the sketch after this list)
Use ACS, which has VNet integration, but I'm not sure if you can apply 2 routes to the same VNet
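A minimal sketch of the internal load balancer option; the Azure-specific annotation is the only required switch, and the service/app names and ports are placeholders:

```yaml
# Sketch: expose a Service through an Azure internal load balancer so it
# only receives a private IP inside the VNet, not a public one.
apiVersion: v1
kind: Service
metadata:
  name: internal-app            # hypothetical service name
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: internal-app
  ports:
    - port: 80
      targetPort: 8080
```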
