Is Azure Resource Manager equivalent to what Kubernetes is for Docker?

Can you think of Azure Resource Manager as the equivalent of what Kubernetes is for Docker?

I think that the two are slightly different (caveat: I have only cursory knowledge of Resource Manager).
Azure Resource Manager lets you think about a collection of separate resources as a single composite application. Much like Google's Deployment Manager. It makes it easier to create repeatable deployments, and make sense of a big collection of heterogeneous resources as belonging to a single app.
Kubernetes, on the other hand, turns a collection of virtual machines into a new resource type (a cluster). It goes beyond configuration and deployment of resources and acts as a runtime environment for distributed apps. So it has an API that can be used during runtime to deploy and wire in your containers, dynamically scale your cluster up and down, and it will make sure that your intent is being met (if you ask for three running containers of a certain type, it will make sure that there are always three healthy containers of that type running).
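To illustrate that reconciliation loop, here is a minimal Kubernetes Deployment manifest (the names and image are illustrative, not from the question); declaring the desired count is all it takes, and the cluster continuously works to keep three healthy replicas running:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                # illustrative name
spec:
  replicas: 3              # "always three healthy containers of this type"
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25  # any container image works here
```

If a node dies or a container crashes, the controller notices the actual count no longer matches `replicas: 3` and schedules a replacement — that runtime behavior is what Resource Manager does not do.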

Related

Azure Functions can be containerized. What is the use case for it?

Azure Functions can be containerized, but what are the actual use cases for it? Is it portability and the ease of running it in any Kubernetes environment, on-prem or otherwise? Or is there anything further?
As far as I know,
We can run Azure Functions in a serverless fashion, i.e., with the backend VMs and servers managed by the vendor (Azure). There are also two Azure container services: Container Instances and Azure Kubernetes Service.
Azure Kubernetes Service handles large volumes of containers.
Much like running multiple virtual machines on a single physical host, you can run multiple containers on a single physical or virtual host.
With VMs, you have to manage the OS, disks, networking, VM updates and patching, and the applications inside the VM, whereas with containers you don't have to worry about the OS; you can easily provision services like databases or a Python runtime in the container and use them.
Example:
You have full control over a VM; containers are not like that.
Let's say I'm a web developer / data scientist / data analyst who wants to work only with a SQL database.
It can be installed on a virtual machine, and it is also available as a container.
The primary difference:
When you deploy to a container, it is a simple package that lets you focus only on the SQL database; the rest of the configuration and dependencies (the OS, runtime configuration) come as part of the package and are taken care of by the container service.
But on a VM, the moment you install the SQL database, there are other dependencies you need to look after.
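To make the container side concrete: Microsoft publishes a SQL Server container image, so a single command gives you a working database without managing an OS (the password below is a placeholder you must replace):

```shell
# Run SQL Server in a container; the host OS, patching, and
# dependencies are inside the image, only the published port matters.
docker run -e "ACCEPT_EULA=Y" \
  -e "MSSQL_SA_PASSWORD=<YourStrong!Passw0rd>" \
  -p 1433:1433 --name sql1 -d \
  mcr.microsoft.com/mssql/server:2022-latest
```

On a VM, the equivalent would involve provisioning the OS, installing SQL Server, and then owning updates for both.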

Can I have a custom script executed in an AKS node group?

I would like to tweak some settings in an AKS node group with something like user data in AWS. Is this possible in AKS?
How about using:
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_machine_scale_set_extension
The underlying Virtual Machine Scale Set (VMSS) is an implementation detail, and one that you do not get to adjust beyond SKU and disk choice. Just as you cannot pick the image that goes on the VMSS, you also cannot use VM extensions on that scale set without being out of support. Any direct manipulation of those VMSSs (from an Azure resource provider perspective) behind your node pools puts you out of support. The only supported way to perform host (node)-level actions is to deploy your custom script work as a DaemonSet to the cluster. This is fully supported, and it gives you the ability to run (almost) anything you need at the host level; examples include installing/executing custom security agents, FIM solutions, and anti-virus.
From the support FAQ:
Any modification done directly to the agent nodes using any of the IaaS APIs renders the cluster unsupportable. Any modification done to the agent nodes must be done using kubernetes-native mechanisms such as Daemon Sets.
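As a sketch of that DaemonSet approach (the names and the tuning command are illustrative, not an official manifest), a privileged pod with `hostPID` can apply node-level settings by entering the host's namespaces with `nsenter`:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-tweaks            # illustrative name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: node-tweaks
  template:
    metadata:
      labels:
        app: node-tweaks
    spec:
      hostPID: true            # lets the pod see the host's PID namespace
      containers:
      - name: tweak
        image: busybox:1.36
        securityContext:
          privileged: true     # required to nsenter into the host
        # Example: raise a sysctl on the node, then sleep so the pod stays Running
        command: ["nsenter", "--target", "1", "--mount", "--uts", "--ipc", "--net", "--pid",
                  "--", "sh", "-c", "sysctl -w net.core.somaxconn=4096 && sleep infinity"]
```

Because a DaemonSet runs one pod per node, new nodes added by the autoscaler get the same treatment automatically, which is exactly what user data would have given you.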

Azure Functions App: what's the feature to publish a Docker container about?

I know that it is possible to create a Docker container based on the Azure Functions runtime. An example of this process is described in this article.
The benefit is that Azure Functions can be used anywhere - I could deploy the Container to AWS if I wanted.
But here's where it's becoming unclear to me:
When you create a new Functions app in the Azure portal, there's a switch labeled "Publish" that lets you select either "Code" or "Docker Container".
If I select "Docker Container", I can configure a Docker image to be used. This is documented in Microsoft's docs.
My questions are:
Why would I want to deploy a Docker container that contains the Functions runtime into a Functions App, instead of just deploying it to Azure Container Instances?
How does the container approach affect scaling? Who is responsible for scheduling and executing the functions? The runtime in the container, or the Functions runtime on Azure?
There are a couple of advantages to using a Docker container:
No sandbox (the sandbox applies to Windows plans only), so none of its limitations
Guaranteed to work, since the same image is used for tests, staging, and production (not that a normal deploy won't work, but there are differences, like a different version of Node.js, for example)
For languages like Python, there are cases where external dependencies need to be built (C++ libraries, etc.), and with containers you can guarantee that everything works as expected, since the container will already have been tested (and only built once)
Auto-scale: when deployed to Azure Container Instances, your function app won't have the same dynamic scale as when it is deployed to a Function App.
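To make the "built once, tested once" point concrete, a containerized function app starts from Microsoft's published Azure Functions base image; this is a sketch, and the Python version tag may differ for your setup:

```dockerfile
# Microsoft's published Azure Functions base image for Python
FROM mcr.microsoft.com/azure-functions/python:4-python3.11

# Build and install dependencies once, at image build time,
# instead of on every deployment target
COPY requirements.txt /
RUN pip install -r /requirements.txt

# Copy the function app itself into the runtime's expected location
COPY . /home/site/wwwroot
```

The exact image that passed your tests is the one that runs in production, including any compiled native dependencies.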
There are two main things to understand: invocations and instances.
Each instance, i.e., the Functions runtime (in this case a container), can handle many invocations, depending on the CPU/RAM it has.
The number of instances is scaled up by Azure based on incoming events like HTTP requests, queue messages, etc.
When running in Kubernetes, scaling is done by solutions like KEDA / Knative / HPA or similar. Also, using AKS virtual nodes (note the limitations), you can scale without having to add "real" nodes to your AKS cluster.
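When hosting on Kubernetes, the event-driven scaling described above is typically declared with a KEDA ScaledObject. A sketch (the object names and queue settings are illustrative) that scales a Functions container deployment on Azure Storage queue length:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: func-scaler              # illustrative name
spec:
  scaleTargetRef:
    name: my-function-deployment # the Deployment running the Functions container
  minReplicaCount: 0             # scale to zero when the queue is empty
  maxReplicaCount: 10
  triggers:
  - type: azure-queue
    metadata:
      queueName: orders          # illustrative queue name
      queueLength: "5"           # target messages per replica
      connectionFromEnv: AzureWebJobsStorage
```

Here KEDA, not the Functions runtime inside the container, decides how many instances exist; each instance's runtime then executes the individual invocations.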

How to deploy and run multiple different Azure Cloud Services on same Azure VM?

I have a few services which have very low usage but are quite hardware-intensive when they are used. Right now they run separately. Each of them has its own web role and worker roles, and each of these roles runs on its own VM with a specified size and instance count. That is, I believe, the standard way.
I would like to rent one quite powerful VM from Azure and deploy all of my services to it, ideally without changing the structure of the code, so I can keep them in separate projects/solutions. They will run simultaneously, and all of them will be accessible.
You cannot deploy more than one cloud service package to a set of role instance VMs. If you want all of your services to run in a single role instance (or set of instances), you need to bundle all of your services into that role. How you do that, and how you organize your code, is strictly up to you.

Which pieces do or do not persist in an Azure Cloud Service Web Role?

My understanding of the VMs involved in Azure Cloud Services is that at least some parts of them are not meant to persist throughout the lifetime of the service (unlike regular VMs that you can create through Azure).
This is why you must use Startup Tasks in your ServiceDefinition.csdef file in order to configure certain things.
However, after playing around with it for a while, I can't figure out what does and does not persist.
For instance, I installed an ISAPI filter into IIS by logging into remote desktop. That seems to have persisted across deployments and even a reimaging.
Is there a list somewhere of what does and does not persist and when that persistence will end (what triggers the clearing of it)?
See http://blogs.msdn.com/b/kwill/archive/2012/10/05/windows-azure-disk-partition-preservation.aspx for information about what is preserved on an Azure PaaS VM in different scenarios.
In short, the only things that will truly persist are things packaged in your cscfg/cspkg (i.e., startup tasks). Anything else done at runtime or via RDP will eventually be removed.
See - How to: Update a cloud service role or deployment - in most cases, an UPDATE to an existing deployment will preserve local data while updating the application code for your cloud service.
Be aware that if you change the size of a role (that is, the size of a virtual machine that hosts a role instance) or the number of roles, each role instance (virtual machine) must be re-imaged, and any local data will be lost.
Also if you use the standard deployment practice of creating a new deployment in the staging slot and then swapping the VIP, you will also lose all local data (these are new VMs).
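For reference, a startup task — the kind of change that does persist, because it is part of the package and re-runs on every fresh VM — is declared in ServiceDefinition.csdef like this (a sketch; the service, role, and script names are illustrative):

```xml
<ServiceDefinition name="MyService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole1">
    <Startup>
      <!-- Runs before the role starts; elevated so it can configure IIS,
           e.g. to install the ISAPI filter mentioned in the question -->
      <Task commandLine="install-isapi.cmd"
            executionContext="elevated"
            taskType="simple" />
    </Startup>
  </WebRole>
</ServiceDefinition>
```

Because the task ships inside the cspkg, a reimage or a staging/production VIP swap simply re-executes it on the new VM, which is why this is the supported alternative to configuring via RDP.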
