Is it possible to scale up pods dynamically based on input parameters in AKS? - azure

I'm new to the Kubernetes universe and I have some doubts about an implementation that I want to do.
I have the following scenario: I have 200 instances of a worker that executes some business logic, and the only thing that differentiates them is their input parameters.
I was thinking of using AKS to scale this infrastructure up dynamically according to the input parameter: only create a new pod when there is demand for the worker with the input parameter "XYZ".
Simple architecture draft:
I have an API that receives a request and, based on that request, an orchestrator sends it to the correct worker.
So I'd like to know if this type of architecture is possible with AKS and whether it is a good approach.
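The scenario above can be sketched in a few lines (all names here are hypothetical; in a real deployment the per-parameter queues would be something like Azure Storage queues rather than in-process objects, and scaling would key off queue depth):

```python
from collections import defaultdict
from queue import Queue

# Hypothetical in-process stand-in for per-parameter work queues; in a real
# deployment these would be external queues, one per input parameter, so that
# an autoscaler can create worker pods only when a queue has pending work.
queues = defaultdict(Queue)

def dispatch(request):
    """Route a request to the queue for its input parameter."""
    param = request["parameter"]
    queues[param].put(request)
    return param

dispatch({"parameter": "XYZ", "payload": 1})
dispatch({"parameter": "XYZ", "payload": 2})
dispatch({"parameter": "ABC", "payload": 3})
print(queues["XYZ"].qsize())  # 2
```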

This is one of the scenarios where you can use Azure Functions with ACI, or KEDA, to autoscale the containers based on demand.
Use the AKS virtual node to provision pods inside Azure Container
Instances that start in seconds. This enables AKS to run with just
enough capacity for your average workload. As you run out of capacity
in your AKS cluster, scale out additional pods in Azure Container
Instances without additional servers to manage.
Here is my blog on Scale Applications with Kubernetes-based Event Driven Autoscaling (KEDA).
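As a hedged sketch of what a KEDA setup for this scenario could look like, assuming one Deployment and one Azure Storage queue per input parameter (all names below are hypothetical), a ScaledObject lets KEDA scale the worker between zero and N replicas based on queue depth:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: worker-xyz-scaler      # hypothetical name
spec:
  scaleTargetRef:
    name: worker-xyz           # one Deployment per input parameter
  minReplicaCount: 0           # scale to zero when there is no demand
  maxReplicaCount: 10
  triggers:
    - type: azure-queue
      metadata:
        queueName: worker-xyz-requests   # hypothetical queue name
        queueLength: "5"                 # target messages per replica
        connectionFromEnv: AZURE_STORAGE_CONNECTION
```

With `minReplicaCount: 0`, the worker for parameter "XYZ" only exists while its queue has pending requests, which matches the "only create a pod when there is demand" goal.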

You can do this with a Kubernetes ingress controller:
https://www.nginx.com/products/nginx/kubernetes-ingress-controller/
This is how to set it up on Azure Kubernetes Service:
https://learn.microsoft.com/en-us/azure/aks/ingress-tls
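As an illustrative sketch (the service and path names are made up), an Ingress resource for the NGINX controller could route per-parameter paths to per-parameter worker services like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: worker-routing         # hypothetical name
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /workers/xyz         # requests for parameter "XYZ"
            pathType: Prefix
            backend:
              service:
                name: worker-xyz       # hypothetical per-parameter service
                port:
                  number: 80
```

This only handles the routing half of the problem; scaling each backend up and down by demand would still need an autoscaler on top.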

Related

How to kick off Linux script in AKS from Web App (AZURE) on-demand

Given that I have a 24x7 AKS cluster on Azure, for which (as far as I know) Kubernetes cannot stop/pause a pod and then resume it in a standard way,
and that in my case a small container in a pod can be sidelined via --replicas=0:
how can I best kick off, on demand, a Linux script packaged in that pod/container (which may not be running)
from an Azure Web App?
I thought using ssh should work, after first scaling the pod up to 1 replica. Is this correct?
I am also curious whether there are simple HTTP calls in Azure to do this. I see CLI and PowerShell commands to start/stop an AKS cluster, but that is different, of course.
You can interact remotely with AKS by different methods. The key here is to use the control plane API to deploy your Kubernetes resources programmatically (https://kubernetes.io/docs/concepts/overview/kubernetes-api/).
To do that, you should use client libraries that enable that kind of access. Examples for different programming languages can be found here:
https://github.com/kubernetes-client
ssh is not really recommended, since that amounts to god-mode access to the cluster and its usage is not meant for your purpose.
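As a minimal sketch of that programmatic approach, assuming you already have the API server address and a bearer token authorized to scale the Deployment (the server, namespace, Deployment name, and token below are placeholders), you can PATCH the Deployment's scale subresource directly over HTTPS. The snippet only builds the request; actually sending it would also require handling the cluster's CA certificate:

```python
import json
from urllib import request

def build_scale_request(api_server, namespace, deployment, replicas, token):
    """Build (but do not send) a PATCH against the Kubernetes scale
    subresource: /apis/apps/v1/namespaces/{ns}/deployments/{name}/scale."""
    url = (f"{api_server}/apis/apps/v1/namespaces/{namespace}"
           f"/deployments/{deployment}/scale")
    body = json.dumps({"spec": {"replicas": replicas}}).encode()
    req = request.Request(url, data=body, method="PATCH")
    req.add_header("Content-Type", "application/merge-patch+json")
    req.add_header("Authorization", f"Bearer {token}")
    return req

# Placeholder values; a real Web App would read these from configuration.
req = build_scale_request("https://my-aks-api.example:443", "default",
                          "script-runner", 1, "TOKEN")
print(req.full_url)
```

From a Web App you would send this request (e.g. with `urllib.request.urlopen` and a proper SSL context), wait for the pod to become ready, and then trigger the script, for example via an HTTP endpoint the container itself exposes.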

Is there any way to find the Node scalability time on Azure Kubernetes Service (AKS) using Logs?

I want to find the Node scalability time on Azure Kubernetes Service (AKS) using Logs.
It's possible, with some assumptions.
This information is taken from the Azure AKS documentation (consider getting familiar with it; it describes how to enable autoscaler logging, where to look, etc.):
To diagnose and debug autoscaler events, logs and status can be
retrieved from the autoscaler add-on.
AKS manages the cluster autoscaler on your behalf and runs it in the
managed control plane. You can enable control plane node to see the
logs and operations from CA (cluster autoscaler).
The same cluster-autoscaler is used across different platforms, and each platform can have some specific setup (e.g. for Azure AKS). Based on that, the logs should contain events like:
status, scaleUp, scaleDown, eventResult
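For illustration, once those events are exported, estimating the scale-up time is just a matter of diffing timestamps. The event shapes below are simplified, made-up examples, not the exact Log Analytics schema:

```python
from datetime import datetime

# Hypothetical, simplified cluster-autoscaler events as they might appear
# once exported; field names here are illustrative, not the exact schema.
events = [
    {"eventType": "scaleUp",     "time": "2023-05-01T10:00:05Z"},
    {"eventType": "status",      "time": "2023-05-01T10:00:35Z"},
    {"eventType": "eventResult", "time": "2023-05-01T10:02:50Z"},
]

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ")

# Time from the scale-up decision to its completion result.
scale_up = next(e for e in events if e["eventType"] == "scaleUp")
result = next(e for e in events if e["eventType"] == "eventResult")
elapsed = parse(result["time"]) - parse(scale_up["time"])
print(f"node scale-up took {elapsed.total_seconds():.0f}s")  # 165s
```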

Reading from AKS Master node

From whatever I read, I could not find a way to connect to the master node in Azure Kubernetes Service. I have a requirement to read some parameters like 'enable-admission-plugins', which is possible from the master node. Is there any third-party API available to get this info?
More specifically, I need to read the files 'kube-apiserver.yaml' and 'kube-controller-manager.yaml'.
No, this is not possible. Masters are managed by Microsoft and you don't have access to them. All configuration is done through the AKS API (mostly when you create the cluster).
Azure Kubernetes Service (AKS) makes it simple to deploy a managed
Kubernetes cluster in Azure. AKS reduces the complexity and
operational overhead of managing Kubernetes by offloading much of that
responsibility to Azure. As a hosted Kubernetes service, Azure handles
critical tasks like health monitoring and maintenance for you. The
Kubernetes masters are managed by Azure. You only manage and maintain
the agent nodes.

Azure Kubernetes Service managed service for application log management

Problem statement:
As per my understanding, we can run Elasticsearch, Kibana, Logstash, etc. as pods in a Kubernetes cluster for log management, but that is also a memory-intensive setup. AWS provides various managed services like CloudWatch, CloudTrail, and a managed ELK stack for log management.
Do we have a similar substitute in Azure as well, i.e. some managed service?
You can use AKS with Azure Monitor (reading). I'm not sure you can apply this to a non-AKS cluster (at least not in a straightforward fashion).
Onboarding (for AKS clusters) is really simple and can be done using various methods (the portal included).
You can read more in the docs I've linked (for example, about its capabilities).
Azure Monitor for Containers is available now, and once integrated, some cluster metrics as well as the logs will be automatically collected and made available through Log Analytics.

Is Azure Resource Manager equivalent to what kubernetes is for Docker

Can you think of Azure Resource Manager as the equivalent to what kubernetes is for Docker?
I think the two are slightly different (caveat: I have only cursory knowledge of Resource Manager).
Azure Resource Manager lets you think about a collection of separate resources as a single composite application. Much like Google's Deployment Manager. It makes it easier to create repeatable deployments, and make sense of a big collection of heterogeneous resources as belonging to a single app.
Kubernetes, on the other hand, turns a collection of virtual machines into a new resource type (a cluster). It goes beyond configuration and deployment of resources and acts as a runtime environment for distributed apps. It has an API that can be used during runtime to deploy and wire up your containers and dynamically scale your cluster up or down, and it will make sure that your intent is being met (if you ask for three running containers of a certain type, it will make sure that there are always three healthy containers of that type running).
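That declarative, intent-based behaviour is visible in even the simplest Kubernetes object. A hypothetical Deployment like the one below declares a desired state (`replicas: 3`), and the cluster continuously works to keep three healthy pods of that type running:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app               # hypothetical name
spec:
  replicas: 3                # desired state: keep 3 healthy pods running
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: myregistry/my-app:1.0   # hypothetical image
```

An ARM template, by contrast, is evaluated at deployment time; it does not keep reconciling the running system against the template afterwards in the way the Kubernetes control loop does.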
