Custom metrics using active TLS connections for AKS HPA - Azure

I am running a service in AKS pods that establishes TLS connections with clients. There is a hard limit of 5K active connections per pod. I need a way to determine the number of active TLS connections per pod and autoscale (HPA) when it reaches a threshold (say 3.5K TLS connections) and scale down when active connections fall below 1K.
Is there a way to collect such a metric in AKS and scale based on it? Kindly suggest.

By default, scale-up operations performed manually or by the cluster autoscaler require the allocation and provisioning of new nodes, and scale-down operations delete nodes. Scale-down Mode allows you to decide whether you would like to delete or deallocate the nodes in your Azure Kubernetes Service (AKS) cluster upon scaling down.
There is no Microsoft documentation that describes autoscaling based on TLS connections per pod.
Kubernetes has a cluster autoscaler that adjusts the number of nodes based on the requested compute resources in the node pool. By default, the cluster autoscaler checks the Metrics API server every 10 seconds for any required changes in node count. If the cluster autoscaler determines that a change is required, the number of nodes in your AKS cluster is increased or decreased accordingly. The cluster autoscaler works with Kubernetes RBAC-enabled AKS clusters that run Kubernetes 1.10.x or higher.
Cluster autoscaler is typically used alongside the horizontal pod autoscaler. When combined, the horizontal pod autoscaler increases or decreases the number of pods based on application demand, and the cluster autoscaler adjusts the number of nodes as needed to run those additional pods accordingly.
To get started with the cluster autoscaler in AKS, see Cluster Autoscaler on AKS.
Reference : https://learn.microsoft.com/en-us/azure/aks/concepts-scale#cluster-autoscaler
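As a rough sketch (the resource group, cluster name and node counts are placeholders), enabling the cluster autoscaler on an existing cluster looks like this:

# Enable the cluster autoscaler on an existing AKS cluster (placeholder names)
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5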
Counting the TLS connections to a particular node can be done using platform metrics -> Microsoft.Blockchain/blockchainMembers -> ClusterCommEgressTlsConnectionCount.
You can refer to the same here.
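For the per-pod TLS connection count itself, a common pattern (not an official Microsoft sample) is to expose an active-connection gauge from the application in Prometheus format, install the Prometheus Adapter so the gauge is served through the custom metrics API, and then point an HPA at it. A minimal sketch, where the metric name active_tls_connections, the deployment name tls-service and the replica bounds are all assumptions:

# Sketch only: assumes a Prometheus Adapter already exposes "active_tls_connections"
# through the custom metrics API for the pods of the "tls-service" deployment.
kubectl apply -f - <<'EOF'
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: tls-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: tls-service
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: Pods
    pods:
      metric:
        name: active_tls_connections
      target:
        type: AverageValue
        averageValue: "3500"
EOF

With an average target of 3500 connections per pod, the HPA adds replicas as the average approaches the 5K limit and removes them again as the average drops; the scale-down behavior is tuned with the HPA's behavior settings rather than a separate 1K rule.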

Related

Azure AKS auto scale vs. the underlying Scale Set Auto Scale

In Azure Kubernetes Service, you can scale the node pool, but we only define the min and max nodes.
When I check the node pool's scale set scale settings, I find them set to manual.
So I assume that the node pool autoscale doesn't rely on the underlying scale set, but I wonder: can we just rely on the scale set autoscale with its several metric rules instead of the very limited node pool scale settings?
AKS autoscaling works slightly differently from VMSS autoscaling.
From the official docs:
The cluster autoscaler watches for pods that can't be scheduled on
nodes because of resource constraints. The cluster then automatically
increases the number of nodes.
The AKS autoscaler is tightly coupled with the control plane and the kube-scheduler, so it takes resource requests and limits into account, which makes it a far better scaling method than the VMSS autoscaler (for k8s workloads), which in any case is not supported for AKS:
The cluster autoscaler is a Kubernetes component. Although the AKS
cluster uses a virtual machine scale set for the nodes, don't manually
enable or edit settings for scale set autoscale in the Azure portal or
using the Azure CLI.
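In practice that means you enable autoscaling at the node pool level and leave the backing scale set alone. A minimal sketch with placeholder names:

# Enable the cluster autoscaler on a node pool instead of the backing VMSS
az aks nodepool update \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name nodepool1 \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5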

Azure Kubernetes Service scale-up trigger

I am trying to figure out what the trigger is to scale an AKS cluster out horizontally with nodes. I have a cluster that runs at 103% CPU for 5+ minutes, but no action is taken. Any ideas what the triggers are and how I could customize them? If I start more jobs, the cluster will lower the CPU allocation for all pods.
The article that MS has doesn't have anything specific around that https://learn.microsoft.com/en-us/azure/aks/cluster-autoscaler
You need to notice that:
The cluster autoscaler is a Kubernetes component. Although the AKS
cluster uses a virtual machine scale set for the nodes, don't manually
enable or edit settings for scale set autoscale in the Azure portal or
using the Azure CLI. Let the Kubernetes cluster autoscaler manage the
required scale settings.
Which brings us to the actual Kubernetes Cluster Autoscaler:
Cluster Autoscaler is a tool that automatically adjusts the size of
the Kubernetes cluster when one of the following conditions is true:
there are pods that failed to run in the cluster due to insufficient resources.
there are nodes in the cluster that have been underutilized for an extended period of time and their pods can be placed on other existing
nodes.
The first condition above is the trigger you are looking for.
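In other words, the autoscaler reacts to pending pods, not to raw node CPU load. A hypothetical way to see this (the deployment name and request values are made up): give the pods CPU requests large enough that the scheduler cannot place another replica, and the resulting Pending pods are what trigger the scale-out:

# Hypothetical deployment whose CPU requests eventually exceed free node capacity
kubectl create deployment cpu-hungry --image=nginx --replicas=1
kubectl set resources deployment cpu-hungry --requests=cpu=1500m
kubectl scale deployment cpu-hungry --replicas=10
# Pods stuck in Pending due to insufficient resources are the scale-out trigger
kubectl get pods --field-selector=status.phase=Pending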
To get more details regarding the installation and configuration you can go through the Cluster Autoscaler on Azure. For example, you can customize your CA based on the Resources:
When scaling from an empty VM Scale Set (0 instances), Cluster
Autoscaler will evaluate the provided resources (cpu, memory,
ephemeral-storage) based on that VM Scale Set's backing instance type.
This can be overridden (for instance, to account for system reserved
resources) by specifying capacities with VMSS tags, formatted as:
k8s.io_cluster-autoscaler_node-template_resources_<resource name>: <resource value>. For instance:
k8s.io_cluster-autoscaler_node-template_resources_cpu: 3800m
k8s.io_cluster-autoscaler_node-template_resources_memory: 11Gi
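One way to apply those tags (a sketch; the resource group and scale set names are placeholders) is az tag update, which merges tags onto the scale set:

# Merge the cluster-autoscaler template tags onto the backing VMSS (placeholder names)
VMSS_ID=$(az vmss show --resource-group myResourceGroup --name myVmss --query id -o tsv)
az tag update --resource-id "$VMSS_ID" --operation Merge \
  --tags k8s.io_cluster-autoscaler_node-template_resources_cpu=3800m \
         k8s.io_cluster-autoscaler_node-template_resources_memory=11Gi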

How to run a node auto-scaler script without using a cron-job in Kubernetes (PKS)

I have a node auto-scaling shell script which takes care of auto-scaling the worker nodes based on the average CPU/memory of all the nodes in the Kubernetes cluster.
I currently run this script from a bastion where I have the pks and kubectl CLIs installed, and I have also configured a cron job to run it every 5 minutes.
Is there any other way to do this in Kubernetes (PKS on AWS)?
Or maybe without using a cron job, as the auto-scaling becomes completely dependent on the cron.
Thanks
TL;DR: Autoscale with k8s
To set up autoscaling on k8s use:
kubectl autoscale -f <controller>.yaml --min=3 --max=5
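For example, targeting an existing deployment by name (the deployment name and thresholds here are placeholders):

# Create an HPA for an existing deployment: 3-5 replicas, targeting 70% CPU
kubectl autoscale deployment my-app --cpu-percent=70 --min=3 --max=5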
Note: PKS over AWS is overkill
You mentioned PKS.
Using PKS over AWS infrastructure seems like overkill, simply because AWS has EKS.
To work with the AWS cloud, VMware recommends VMC on AWS.
PKS autoscale
If you do insist on using PKS over AWS, you may try this sample repo: pks-autoscale
The author of the repo also has a great PKS quickstart guide for AWS
Scaling on AWS
EKS autoscaling
AWS EKS supports three forms of autoscaling:
Cluster Autoscaler — The Kubernetes Cluster Autoscaler automatically adjusts the number of nodes in your cluster when pods fail to launch due to lack of resources or when nodes in the cluster are underutilized and their pods can be rescheduled on to other nodes in the cluster.
Horizontal Pod Autoscaler — The Kubernetes Horizontal Pod Autoscaler automatically scales the number of pods in a deployment, replication controller, or replica set based on that resource's CPU utilization.
Vertical Pod Autoscaler — The Kubernetes Vertical Pod Autoscaler automatically adjusts the CPU and memory reservations for your pods to help "right size" your applications. This can help you to better use your cluster resources and free up CPU and memory for other pods.
EC2 Auto Scaling
If you decide to build your own k8s cluster using PKS, you may use EC2 auto scaling - just create an Auto Scaling Group.
Using aws-cli:
aws autoscaling create-auto-scaling-group --auto-scaling-group-name <my-asg> --launch-configuration-name <my-launch-config> --min-size 3 --max-size 5 --vpc-zone-identifier "<zones>"
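To make the group actually scale on load you would also attach a policy; a sketch using target tracking on average CPU (the policy name is made up):

# Hypothetical target-tracking policy: keep the ASG's average CPU around 60%
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name <my-asg> \
  --policy-name cpu-target-60 \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":60.0}'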
EC2 predictive scaling
Recently, AWS introduced predictive scaling for EC2:
... predictive scaling. Using data collected from your actual EC2 usage and further informed by billions of data points drawn from our own observations, we use well-trained Machine Learning models to predict your expected traffic (and EC2 usage) including daily and weekly patterns.
If you mean EKS on AWS, then there are different auto-scaling options.

Azure AKS: Control which node should be removed while downscaling

I have an AKS cluster in Azure. When I scale down the cluster with az aks scale, for example, I want to control which node should be removed.
I cannot find any documentation that describes how Azure decides.
Will Azure prefer removing nodes that are already cordoned or drained?
Deleting it from the Azure Portal is not an option, because I want an application to communicate with Azure via CLI or API.
First of all, it's impossible to control which node to remove when you scale down the AKS cluster. I will show you how the nodes change when you scale the AKS cluster.
When you do not use a VMSS as the agent pool, the AKS cluster uses individual VMs as the nodes. If you scale up, it adds nodes with indexes after the existing nodes. For example, if the cluster has one node with index 0, it will use index 1 when you scale up by one node. If you scale down, it removes the nodes with the highest index first.
When you use a VMSS as the agent pool, it follows the scale rules of the VMSS, and you can see the VMSS scale rules in how the VMSS changes when it scales up and down.
Also, you can take a look at the Azure CLI command az aks scale that scales the AKS cluster, and the REST API.
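For reference, a minimal scale-down call looks like this (the names and counts are placeholders); which node gets removed is decided by the service, not by the caller:

# Scale the node pool down to 3 nodes; AKS picks which node to remove
az aks scale --resource-group myResourceGroup --name myAKSCluster --nodepool-name nodepool1 --node-count 3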

How to define autoscale rule on memory in Azure VMSS

I have created a VMSS in the Azure Portal to have the autoscale feature for my application. My application resides in a Kubernetes cluster - around 10 microservices.
I want to create a scale-out rule so that if there is not enough memory, the number of VM instances is increased. But I don't see an option to set the rule based on memory. There are rules we can define based on CPU utilization, disk space, etc., but this won't help me solve the problem. For my 10 microservices to work, each service having 5 pods, I need to set a rule based on memory. If I set the rule based on CPU, the VM doesn't scale up, as the CPU is not utilized much. The issue is with memory.
I get the error "0/3 nodes are available: 3 Insufficient pods.
The node was low on resource: [MemoryPressure]. "
I read that the memory rule is not available in host metrics in Azure, but it can be enabled via guest metrics. To enable guest metrics, see the link below.
https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-mvss-guest-based-autoscale-linux
But I don't see an option to edit the template as described in the above link. There is only an "Export template" option visible for the VMSS, where you cannot edit the template.
Could anyone please help me with this issue, to define a memory rule for a VMSS in Azure?
No option is seen to enable guest metrics for the VMSS. No option to edit the template; only the "Export template" option is visible, where you cannot edit the template.
For AKS autoscale, you just need to enable the autoscale function for your AKS cluster and set the min and max node counts, and then it will scale itself. You do not need to set an autoscale rule for it. Take a look at the AKS cluster autoscaler.
When does Cluster Autoscaler change the size of a cluster?
Cluster Autoscaler increases the size of the cluster when:
there are pods that failed to schedule on any of the current nodes
due to insufficient resources.
adding a node similar to the nodes currently present in the cluster
would help.
Cluster Autoscaler decreases the size of the cluster when some nodes are consistently unneeded for a significant amount of time. A node is unneeded when it has low utilization and all of its important pods can be moved elsewhere.
And that is what you have seen in the VMSS. The metrics server is already installed in newer AKS versions. If it is not installed, you can install it yourself; the steps are here.
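A quick way to check whether the metrics server is present in your cluster (not AKS-specific):

# Verify the metrics server deployment and that node metrics are being served
kubectl get deployment metrics-server -n kube-system
kubectl top nodes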
