I am creating some VMs in Azure using the Azure CLI. These VMs require different setups. For example, one machine needs to be set up as a domain controller, so its setup includes activities such as creating domain users. The activities for the other VMs include things like joining the domain, setting up a file share, etc. Currently, any setup on the individual VMs is performed manually. However, I would like to automate the whole process, starting from creating the VMs through performing the setup on each individual VM. What would be the best way of doing this? Can this type of setup on individual VMs be performed remotely?
You will want to look at the Azure Desired State Configuration (DSC) extension. DSC is a declarative platform used for configuration, deployment, and management of systems. It consists of three primary components:
Configurations are declarative PowerShell scripts which define and configure instances of resources. Upon running the configuration, DSC (and the resources being called by the configuration) will simply "make it so", ensuring that the system exists in the state laid out by the configuration. DSC configurations are also idempotent: the Local Configuration Manager (LCM) will continue to ensure that machines are configured in whatever state the configuration declares.
Resources are the "make it so" part of DSC. They contain the code that puts and keeps the target of a configuration in the specified state. Resources reside in PowerShell modules and can be written to model something as generic as a file or a Windows process, or as specific as an IIS server or a VM running in Azure.
The Local Configuration Manager (LCM) is the engine by which DSC facilitates the interaction between resources and configurations. The LCM regularly polls the system using the control flow implemented by resources to ensure that the state defined by a configuration is maintained. If the system is out of state, the LCM makes calls to the code in resources to "make it so" according to the configuration.
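As a minimal sketch, a DSC configuration for one of the non-DC VMs from the question might look like the following. The configuration name, feature name, and share path are illustrative placeholders, not taken from the question:

```powershell
# Illustrative DSC configuration sketch; names and paths are placeholders.
Configuration FileServerSetup
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost'
    {
        # Ensure the File Server role is installed
        WindowsFeature FileServer
        {
            Ensure = 'Present'
            Name   = 'FS-FileServer'
        }

        # Ensure a directory exists to back the file share
        File ShareDirectory
        {
            Ensure          = 'Present'
            Type            = 'Directory'
            DestinationPath = 'C:\Shares\Public'
        }
    }
}
```

Because the LCM enforces the declared state, re-running this configuration is safe: it only changes what has drifted.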
An example Azure ARM template that uses DSC to stand up a domain controller can be seen here:
https://github.com/Azure/azure-quickstart-templates/tree/master/active-directory-new-domain
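Since you are already creating the VMs with the Azure CLI, the DSC extension can also be attached with az vm extension set. The resource group, VM name, URL, and settings values below are placeholders; the settings shape follows the DSC extension's configuration.url/script/function convention:

```shell
# Sketch: attach the Azure DSC extension to an existing VM.
# Resource names, version, and settings values are placeholders.
az vm extension set \
  --resource-group myResourceGroup \
  --vm-name myVM \
  --publisher Microsoft.Powershell \
  --name DSC \
  --version 2.77 \
  --settings '{
    "configuration": {
      "url": "https://example.com/FileServerSetup.zip",
      "script": "FileServerSetup.ps1",
      "function": "FileServerSetup"
    }
  }'
```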
Further Reading
https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/dsc-overview
https://learn.microsoft.com/en-us/powershell/scripting/dsc/overview/overview?view=powershell-7.1
Related
Currently, we are hosting five websites on a Linux VM. The websites reside in separate directories and are served by Nginx. SSL is terminated at an Azure Application Gateway, which sends the traffic to the VM. If a file is updated in a remote repository, the local copy is updated by a cron task, which is a simple Bash script running git pull plus a few additional lines. Not all five websites need to be updated at the same time.
We created an image of the VM and provisioned a VM Scale Set (VMSS) from it.
What would be the easiest or standard way of deploying the code to the VMSS? The code also needs some manual changes each time due to client requirements.
Have a look into Azure Durable Functions as an active scripted deployment manager.
You can configure your Durable Function to be triggered on a cron schedule. It can then orchestrate a series of tasks, monitoring the deployment targets for an acceptable response before continuing to each step, or even waiting for user input before proceeding.
By authoring your workflow in C#, JavaScript, Python, or PowerShell, you are only limited by your own ability to turn your manual process into a scripted one.
Azure Functions is just one option of many; it really comes down to the complexity of your workflow and the individual tasks. Octopus Deploy is a common product used to automate Azure application deployments and may have templates that match your current process. I go straight to Durable Functions when I find it too hard to configure complex steps that involve waiting for specific responses from targets before proceeding to the next step, and when I want to use C# to evaluate those responses or perhaps reuse some of my application logic as part of the workflow.
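The orchestration pattern itself (run a step, poll the target until it looks healthy, then move on) is what Durable Functions gives you with durability built in. As a plain-Python sketch of that control flow, with hypothetical step names standing in for your real deployment actions:

```python
import time

def wait_until_healthy(check, timeout_s=60, interval_s=5):
    """Poll a health check until it passes or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval_s)
    return False

def deploy(steps):
    """Run deployment steps in order, stopping at the first unhealthy one."""
    completed = []
    for name, action, health_check in steps:
        action()  # e.g. run git pull, reload a service, call an API
        if not wait_until_healthy(health_check, timeout_s=10, interval_s=1):
            raise RuntimeError(f"step {name!r} did not become healthy")
        completed.append(name)
    return completed

# Hypothetical steps: each is (name, action, health_check).
state = {"pulled": False}
steps = [
    ("git-pull", lambda: state.update(pulled=True), lambda: state["pulled"]),
    ("reload-nginx", lambda: None, lambda: True),
]
print(deploy(steps))  # ['git-pull', 'reload-nginx']
```

In a real Durable Function the framework checkpoints between steps, so a restart resumes where it left off instead of rerunning completed steps.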
I would like to tweak some settings in an AKS node group with something like user data in AWS. Is this possible in AKS?
How about using the Terraform azurerm provider's virtual_machine_scale_set_extension resource:
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_machine_scale_set_extension
The underlying Virtual Machine Scale Set (VMSS) is an implementation detail, and one that you do not get to adjust beyond SKU and disk choice. Just as you cannot pick the image that goes on the VMSS, you also cannot use VM extensions on that scale set without falling out of support. Any direct manipulation of the VMSSs behind your node pools (from an Azure resource provider perspective) puts you out of support. The only supported way to perform host (node)-level actions is to deploy your custom script work as a DaemonSet on the cluster. This is fully supported and will give you the ability to run (almost) anything you need at the host level, for example installing or executing custom security agents, FIM solutions, or anti-virus.
From the support FAQ:
Any modification done directly to the agent nodes using any of the IaaS APIs renders the cluster unsupportable. Any modification done to the agent nodes must be done using kubernetes-native mechanisms such as Daemon Sets.
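Following that guidance, node-level customization is typically done with a DaemonSet that runs a privileged pod on every node. A minimal sketch using the common nsenter-into-host trick; the image, namespace, and the command it runs are placeholders for your own script:

```yaml
# Illustrative DaemonSet sketch; image, namespace, and command are placeholders.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-setup
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: node-setup
  template:
    metadata:
      labels:
        app: node-setup
    spec:
      hostPID: true
      containers:
        - name: node-setup
          image: alpine:3.19
          securityContext:
            privileged: true
          command: ["/bin/sh", "-c"]
          args:
            # Enter the host's namespaces via PID 1 and run the setup script,
            # then idle so the pod stays Running.
            - nsenter -t 1 -m -u -i -n -p -- sh -c 'echo node configured' && sleep infinity
```

Because a DaemonSet schedules one pod per node, new nodes added by the scale set are configured automatically as they join the cluster.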
CloudFormation doesn't provide tools for orchestrating the deployment of several/many stacks. For example, consider a microservice/layered architecture where many stacks need to be deployed together to replicate an environment. With CloudFormation you need to use a tool like stacker, or something home-grown, to solve the problem.
Does Terraform offer a multi-stack deployment orchestration solution?
Terraform operates at the directory level, so you can simply define both stacks in the same place, either as one big group of resources or as modules.
In Terraform, if you need to deploy multiple resources together at the same time, you would typically use a module and then present a smaller surface area for configuring that module. This also extends to creating modules of modules.
For example, say you had one module that deployed a service consisting of a load balancer, a service of some form (such as an ECS task definition, a Kubernetes pod/service/deployment definition, or an AMI), and a database, and another module that contained a queue and another service. You could then create an over-arching module that contains both of those modules, so they are deployed at the same time with a smaller amount of configuration that may be shared between them.
Modules also allow you to define the source location as remote such as a git location or from a Terraform registry (both the public one or a private one) which means the Terraform code for the modules don't have to be stored in the same place or checked out/cloned into the same directory.
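As a sketch of that composition (module names, sources, and variables are hypothetical):

```hcl
# Illustrative composition; module names, sources, and variables are hypothetical.

# modules/platform/main.tf -- an over-arching module wrapping two child modules
module "web_service" {
  source      = "../web_service"
  environment = var.environment
}

module "queue_worker" {
  source      = "../queue_worker"
  environment = var.environment
}

# Root configuration -- consume the over-arching module from a remote git source
module "platform" {
  source      = "git::https://example.com/modules/platform.git?ref=v1.0.0"
  environment = "production"
}
```

A single terraform apply on the root configuration then plans and deploys everything the nested modules contain.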
Can you think of Azure Resource Manager as the equivalent, for Azure, of what Kubernetes is for Docker?
I think the two are slightly different (caveat: I have only cursory knowledge of Resource Manager).
Azure Resource Manager lets you think about a collection of separate resources as a single composite application. Much like Google's Deployment Manager. It makes it easier to create repeatable deployments, and make sense of a big collection of heterogeneous resources as belonging to a single app.
Kubernetes, on the other hand, turns a collection of virtual machines into a new resource type: a cluster. It goes beyond configuration and deployment of resources and acts as a runtime environment for distributed apps. It has an API that can be used at runtime to deploy and wire in your containers and to dynamically scale your cluster up or down, and it will make sure that your intent is being met (if you ask for three running containers of a certain type, it will make sure that there are always three healthy containers of that type running).
My understanding of the VMs involved in Azure Cloud Services is that at least some parts of it are not meant to persist throughout the lifetime of the service (unlike regular VMs that you can create through Azure).
This is why you must use Startup Tasks in your ServiceDefinition.csdef file in order to configure certain things.
However, after playing around with it for a while, I can't figure out what does and does not persist.
For instance, I installed an ISAPI filter into IIS by logging into remote desktop. That seems to have persisted across deployments and even a reimaging.
Is there a list somewhere of what does and does not persist and when that persistence will end (what triggers the clearing of it)?
See http://blogs.msdn.com/b/kwill/archive/2012/10/05/windows-azure-disk-partition-preservation.aspx for information about what is preserved on an Azure PaaS VM in different scenarios.
In short, the only things that will truly persist are things packaged in your cscfg/cspkg (i.e., startup tasks). Anything else done at runtime or via RDP will eventually be removed.
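For example, the ISAPI filter installation you did via remote desktop could instead be declared as a startup task. A minimal ServiceDefinition.csdef fragment; the service, role, and script names are placeholders:

```xml
<!-- Illustrative csdef fragment; service, role, and script names are placeholders. -->
<ServiceDefinition name="MyService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole1">
    <Startup>
      <!-- Runs elevated on every (re)deployment and reimage, so the
           configuration is reapplied even when the VM is rebuilt. -->
      <Task commandLine="install-isapi-filter.cmd"
            executionContext="elevated"
            taskType="simple" />
    </Startup>
  </WebRole>
</ServiceDefinition>
```

The script itself ships inside the cspkg, so the change survives anything that rebuilds the VM.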
See "How to: Update a cloud service role or deployment". In most cases, an update to an existing deployment will preserve local data while updating the application code for your cloud service.
Be aware that if you change the size of a role (that is, the size of a virtual machine that hosts a role instance) or the number of roles, each role instance (virtual machine) must be re-imaged, and any local data will be lost.
Also if you use the standard deployment practice of creating a new deployment in the staging slot and then swapping the VIP, you will also lose all local data (these are new VMs).