I've created a template-based deployment that over-provisions a number of Linux VMs. I'd like to autoscale them the way classic instances do, where Azure turns instances on and off according to CPU load.
Is this possible with ARM mode? And if not, is there a suggested alternative method? The only examples I can find revolve around Application Insights and PaaS functionality. I've got a Python app running in Docker on Ubuntu hosts.
For IaaS, you must use virtual machine scale sets for autoscale; otherwise you need to stick with PaaS (web apps).
For this you would first need to create an availability set for the VMs. The resource declaration in the ARM template looks something like this:
{
  "type": "Microsoft.Compute/availabilitySets",
  "name": "[variables('availabilitySetName')]",
  "apiVersion": "2015-05-01-preview",
  "location": "[parameters('location')]",
  "properties": {
    "platformFaultDomainCount": "2"
  }
}
Then the virtual machine resource declaration in the ARM template would look something like this:
{
  "apiVersion": "2015-05-01-preview",
  "type": "Microsoft.Compute/virtualMachines",
  "name": "[concat(variables('vmName'), '0')]",
  "location": "[parameters('location')]",
  "dependsOn": [
    "[concat('Microsoft.Storage/storageAccounts/', parameters('newStorageAccountName'))]",
    "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'), '0')]",
    "[concat('Microsoft.Compute/availabilitySets/', variables('availabilitySetName'))]"
  ],
  "properties": {
    "availabilitySet": {
      "id": "[resourceId('Microsoft.Compute/availabilitySets', variables('availabilitySetName'))]"
    },
    ...
  }
}
The quickstart templates are a good reference: https://raw.githubusercontent.com/azure/azure-quickstart-templates/master/201-2-vms-2-FDs-no-resource-loops/azuredeploy.json
Once you have two or more VMs of the same size in an availability set, you would configure autoscale using microsoft.insights/autoscalesettings, which I believe you referenced in the question. This is done at the cloud service level, so it works similarly to PaaS, like so:
{
  "apiVersion": "2014-04-01",
  "name": "[concat(variables('vmName'), '-', resourceGroup().name)]",
  "type": "microsoft.insights/autoscalesettings",
  "location": "East US",
  ...
}
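For reference, here is a fuller sketch of what an autoscalesettings resource with a single CPU-based scale-out rule might look like. The target resource URI (the variables('targetResourceUri') variable), capacity limits, and threshold below are illustrative assumptions, not values from the original template:
{
  "apiVersion": "2014-04-01",
  "name": "[concat(variables('vmName'), '-', resourceGroup().name)]",
  "type": "microsoft.insights/autoscalesettings",
  "location": "East US",
  "comments": "Sketch only: targetResourceUri, capacities and threshold are assumed values",
  "properties": {
    "name": "[concat(variables('vmName'), '-', resourceGroup().name)]",
    "enabled": true,
    "targetResourceUri": "[variables('targetResourceUri')]",
    "profiles": [
      {
        "name": "cpu-based-scaling",
        "capacity": {
          "minimum": "1",
          "maximum": "4",
          "default": "1"
        },
        "rules": [
          {
            "metricTrigger": {
              "metricName": "Percentage CPU",
              "metricResourceUri": "[variables('targetResourceUri')]",
              "timeGrain": "PT1M",
              "statistic": "Average",
              "timeWindow": "PT10M",
              "timeAggregation": "Average",
              "operator": "GreaterThan",
              "threshold": 80
            },
            "scaleAction": {
              "direction": "Increase",
              "type": "ChangeCount",
              "value": "1",
              "cooldown": "PT10M"
            }
          }
        ]
      }
    ]
  }
}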
A pretty good example is here: https://raw.githubusercontent.com/Azure/azure-quickstart-templates/6abc9f320e39d9d75dffb60846e88ab80d3ff33a/201-web-app-sql-database/azuredeploy.json
I also set up autoscale using the portal first and then reviewed the result in Azure Resource Explorer (https://resources.azure.com) to get a better idea of how things should look in my code.
Context: I'm looking to build out a test lab in Azure. The goal is to have VMs spun up from a CI/CD pipeline to run end-to-end automation tests. The VMs will need to be deployed from a custom image. However, I don't want to maintain specific virtual machine images with certain software installed in various flavors and permutations.
Furthermore, I'm looking for a self-service, declarative solution where teams can specify in automation (templates, scripts, etc.) which software they need provisioned on the VM after it comes up: desired state.
Example: get me a VM based on image template X, install package A version 2.3 and package B version 1.2, and configure the OS with settings X, Y, and Z.
Software packages can come from various sources: MSIs, Chocolatey, copy deployments, etc.
There seem to be so many ways of doing it that it feels like a jungle. Azure VM Apps? PowerShell Desired State Configuration? Something else?
Cheers
There are two more ways you can accomplish this task.
You can make use of the Custom Script Extension in your pipeline: store the scripts for the various packages or software in a storage account, and use different scripts to install different packages on different VMs. Your teams can just create a new script and store it in the Azure Storage account, and you can use any script with its package when deploying your VM.
Custom Script Extension:
I created a storage account and uploaded my custom script, which installs the IIS server on an Azure VM. While deploying your VM, you can select this custom script on the Advanced tab: select the extension and search for Custom Script Extension. You can browse the storage account and pick your script to be installed on the VM. You can also install the script after VM deployment by going to the VM > left pane > Extensions + applications. The script was deployed inside the VM and the IIS server was installed successfully.
As you want to automate this in your Azure DevOps pipeline, you can use an ARM template to install the Custom Script Extension on your VM. You can also use the TeamServicesAgent extension in the ARM template to connect the VM to your DevOps organization and deployment group. Refer to the template below:
ARM Template:
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "vmname",
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2021-03-01",
      "location": "[resourceGroup().location]",
      "resources": [
        {
          "name": "[concat('vmname', '/TeamServicesAgent')]",
          "type": "Microsoft.Compute/virtualMachines/extensions",
          "location": "[resourceGroup().location]",
          "apiVersion": "2021-03-01",
          "dependsOn": [
            "[resourceId('Microsoft.Compute/virtualMachines/', 'vmname')]"
          ],
          "properties": {
            "publisher": "Microsoft.VisualStudio.Services",
            "type": "TeamServicesAgent",
            "typeHandlerVersion": "1.0",
            "autoUpgradeMinorVersion": true,
            "settings": {
              "VSTSAccountName": "AzureDevOpsorg",
              "TeamProject": "Azuredevopsproject",
              "DeploymentGroup": "Deploymentgroup",
              "AgentName": "vmname"
            },
            "protectedSettings": {
              "PATToken": "personal-access-token-azuredevops"
            }
          }
        }
      ],
      "dependsOn": [
        "[resourceId('Microsoft.Storage/storageAccounts', toLower('vmstore8677676'))]"
      ],
      "properties": {
        "hardwareProfile": {
          "vmSize": "Standard_D2s_v3"
        },
        "osProfile": {
          "computerName": "vmname",
          "adminUsername": "username",
          "adminPassword": "Password"
        },
        "storageProfile": {
          "imageReference": {
            "publisher": "MicrosoftWindowsServer",
            "offer": "WindowsServer",
            "sku": "2019-Datacenter",
            "version": "latest"
          },
          "osDisk": {
            "name": "windowsVM1OSDisk",
            "caching": "ReadWrite",
            "createOption": "FromImage"
          }
        },
        "networkProfile": {
          "networkInterfaces": [
            {
              "id": "[resourceId('Microsoft.Network/networkInterfaces', 'app-interface')]"
            }
          ]
        },
        "diagnosticsProfile": {
          "bootDiagnostics": {
            "enabled": true,
            "storageUri": "[reference(resourceId('Microsoft.Storage/storageAccounts/', toLower('vmstore8677676'))).primaryEndpoints.blob]"
          }
        }
      }
    },
    {
      "type": "Microsoft.Compute/virtualMachines/extensions",
      "name": "[concat('vmname', '/config-app')]",
      "location": "[resourceGroup().location]",
      "apiVersion": "2018-06-01",
      "dependsOn": [
        "[resourceId('Microsoft.Compute/virtualMachines/', 'vmname')]"
      ],
      "properties": {
        "publisher": "Microsoft.Compute",
        "type": "CustomScriptExtension",
        "typeHandlerVersion": "1.10",
        "autoUpgradeMinorVersion": true,
        "settings": {
          "fileUris": [
            "https://storageaccountname.blob.core.windows.net/installers/script.ps1?sp=r&st=2022-08-13T16:32:07Z&se=sas-token"
          ],
          "commandToExecute": "powershell -ExecutionPolicy Unrestricted -File script.ps1"
        }
      }
    }
  ],
  "outputs": {}
}
You need to generate a SAS URL for the script file in your Azure storage account.
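Alternatively, instead of hand-generating a SAS URL, the Custom Script Extension can authenticate to the storage account itself if you pass the account name and key in protectedSettings. A minimal sketch, assuming the same placeholder storageaccountname account as above:
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat('vmname', '/config-app')]",
  "location": "[resourceGroup().location]",
  "apiVersion": "2018-06-01",
  "comments": "Sketch: storage account, container and script names are placeholders",
  "dependsOn": [
    "[resourceId('Microsoft.Compute/virtualMachines/', 'vmname')]"
  ],
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "CustomScriptExtension",
    "typeHandlerVersion": "1.10",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "fileUris": [
        "https://storageaccountname.blob.core.windows.net/installers/script.ps1"
      ]
    },
    "protectedSettings": {
      "commandToExecute": "powershell -ExecutionPolicy Unrestricted -File script.ps1",
      "storageAccountName": "storageaccountname",
      "storageAccountKey": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', 'storageaccountname'), '2021-04-01').keys[0].value]"
    }
  }
}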
You can make use of Azure DevTest Labs: deploy custom artifacts inside your lab, with different packages for different VMs, and copy the ARM template and VM tasks into the release pipeline of Azure DevOps.
DevTest Labs:
I created an Azure DevTest Labs resource. You can select directly from a set of pre-built images. After selecting an image, create the VM and add artifacts; here you can add any desired package that needs to be installed on your VM. You can create multiple DevTest Labs according to your requirements and add additional packages as artifacts after the VM has been deployed, by clicking Apply artifacts and adding additional or custom packages to your VMs.
You can also automate this deployment via an ARM template; refer here:
azure-docs/devtest-lab-use-resource-manager-template.md at main · MicrosoftDocs/azure-docs · GitHub
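As a rough sketch of what such a template can contain, a lab VM resource can list the artifacts to apply at creation time. The lab, network, image, and artifact values below are illustrative assumptions (the windows-chocolatey artifact with a packages parameter is one of the artifacts in the public artifact repo):
{
  "type": "Microsoft.DevTestLab/labs/virtualmachines",
  "name": "[concat(parameters('labName'), '/', parameters('vmName'))]",
  "apiVersion": "2018-09-15",
  "location": "[resourceGroup().location]",
  "comments": "Sketch: lab, network, image and artifact values are illustrative",
  "properties": {
    "labVirtualNetworkId": "[parameters('labVirtualNetworkId')]",
    "labSubnetName": "[parameters('labSubnetName')]",
    "size": "Standard_D2s_v3",
    "userName": "[parameters('userName')]",
    "password": "[parameters('password')]",
    "galleryImageReference": {
      "publisher": "MicrosoftWindowsServer",
      "offer": "WindowsServer",
      "sku": "2019-Datacenter",
      "osType": "Windows",
      "version": "latest"
    },
    "artifacts": [
      {
        "artifactId": "[resourceId('Microsoft.DevTestLab/labs/artifactSources/artifacts', parameters('labName'), 'public repo', 'windows-chocolatey')]",
        "parameters": [
          {
            "name": "packages",
            "value": "git"
          }
        ]
      }
    ]
  }
}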
You can automate Azure DevTest Labs deployment in Azure DevOps by following the steps in this document:
Integrate Azure DevTest Labs into Azure Pipelines - Azure DevTest Labs | Microsoft Learn
Apart from these methods, you can use Chef or Puppet to automate your deployments and packages.
Chef - Chef extension for Azure VMs - Azure Virtual Machines | Microsoft Learn
Puppet - Get Started on Azure With Puppet | Puppet by Perforce
I have created an instance of Azure Kubernetes Service (AKS) and have discovered that apart from the resource group I created the AKS instance in, two other resource groups were created for me. Here is what my resource groups and their contents look like:
MyResourceGroup-Production
  MyAKSInstance - Azure Kubernetes Service (AKS)
DefaultResourceGroup-WEU
  ContainerInsights(MyAKSInstance) - Solution
  MyAKSInstance - Log Analytics
MC_MyResourceGroup-Production_MyAKSInstance_westeurope
  agentpool-availabilitySet-36219400 - Availability set
  aks-agentpool-36219400-0 - Virtual machine
  aks-agentpool-36219400-0_OsDisk_1_09469b24b1ff4526bcfd5d00840cfbbc - Disk
  aks-agentpool-36219400-nic-0 - Network interface
  aks-agentpool-36219400-nsg - Network security group
  aks-agentpool-36219400-routetable - Route table
  aks-vnet-36219400 - Virtual network
I have a few questions about these two separate resource groups:
1. Can I rename the resource groups, or control how they are named from my ARM template in the first place at the time of creation?
2. Can I move the contents of DefaultResourceGroup-WEU into MyResourceGroup-Production?
3. Can I safely edit their settings?
4. The DefaultResourceGroup-WEU seems to be created if you enable Log Analytics. Can I use this instance for accepting logs from other instances?
UPDATE
I managed to pre-create a log analytics resource and use that for Kubernetes. However, there is a third resource that I'm having trouble moving into my resource group:
{
  "type": "Microsoft.Resources/deployments",
  "name": "SolutionDeployment",
  "apiVersion": "2017-05-10",
  "resourceGroup": "[split(parameters('omsWorkspaceId'),'/')[4]]",
  "subscriptionId": "[split(parameters('omsWorkspaceId'),'/')[2]]",
  "properties": {
    "mode": "Incremental",
    "template": {
      "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
      "contentVersion": "1.0.0.0",
      "parameters": {},
      "variables": {},
      "resources": [
        {
          "apiVersion": "2015-11-01-preview",
          "type": "Microsoft.OperationsManagement/solutions",
          "location": "[parameters('workspaceRegion')]",
          "name": "[concat('ContainerInsights', '(', split(parameters('omsWorkspaceId'),'/')[8], ')')]",
          "properties": {
            "workspaceResourceId": "[parameters('omsWorkspaceId')]"
          },
          "plan": {
            "name": "[concat('ContainerInsights', '(', split(parameters('omsWorkspaceId'),'/')[8], ')')]",
            "product": "[concat('OMSGallery/', 'ContainerInsights')]",
            "promotionCode": "",
            "publisher": "Microsoft"
          }
        }
      ]
    }
  },
  "dependsOn": []
}
1. No, you can't.
2. Yes, but I'd advise against it. I'd advise removing the health metrics from AKS, deleting that resource group, creating the OMS workspace in the same resource group as AKS (or wherever you need your OMS workspace to be), and then using that workspace. It will just create the container solution for you in the same resource group the workspace is in.
3. To an extent; if you break anything, AKS won't fix it.
4. Yes, you can, but you'd better rework it as I mention in point 2.
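For point 2, a minimal sketch of pre-creating the workspace in your own resource group might look like this (the workspace name parameter, SKU, and retention are assumed values); you would then pass its resource ID to AKS as the omsWorkspaceId:
{
  "type": "Microsoft.OperationalInsights/workspaces",
  "name": "[parameters('workspaceName')]",
  "apiVersion": "2020-08-01",
  "location": "[resourceGroup().location]",
  "comments": "Sketch: create the workspace alongside AKS and reference it by resource ID",
  "properties": {
    "sku": {
      "name": "PerGB2018"
    },
    "retentionInDays": 30
  }
}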
I'm trying to provision an Azure Redis Cache using an ARM template. This works as expected, with the exception that I can't specify the access keys.
Usually I work with the generated keys, which is probably the recommended way, but in this case I want to provide those keys within my deployment (for some legacy reasons).
Q: Is it possible to provide the access keys within an ARM template? Or can I set them after the deployment using PowerShell?
Here is a snippet of my ARM template:
"resources": [
{
"type": "Microsoft.Cache/Redis",
"name": "[parameters('myRedis_name')]",
"apiVersion": "2016-04-01",
"location": "West Europe",
"tags": {},
"properties": {
"redisVersion": "3.2",
"sku": {
"name": "Standard",
"family": "C",
"capacity": 1
},
"enableNonSslPort": false,
"redisConfiguration": {
"maxclients": "1000",
"maxmemory-reserved": "50",
"maxmemory-delta": "50"
}
},
"resources": [],
"dependsOn": []
},
Just like Azure Storage keys (or DocumentDB keys, etc.), you have no ability to specify the keys. You may either use what's provided or, at any time, regenerate the keys (either primary or secondary). This is how keys are managed, regardless of whether you're using ARM or the portal; the portal exposes regenerate options for both keys.
There's no way to enter a specific key of your own.
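If the reason for wanting fixed keys is that downstream deployment steps need them, one workaround is to read the generated keys back out of the deployment instead, e.g. via listKeys in the outputs section. A sketch, reusing the myRedis_name parameter from the snippet above (note that emitting secrets in outputs has security implications):
"outputs": {
  "redisPrimaryKey": {
    "type": "string",
    "value": "[listKeys(resourceId('Microsoft.Cache/Redis', parameters('myRedis_name')), '2016-04-01').primaryKey]"
  },
  "redisSecondaryKey": {
    "type": "string",
    "value": "[listKeys(resourceId('Microsoft.Cache/Redis', parameters('myRedis_name')), '2016-04-01').secondaryKey]"
  }
}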
According to the documentation I can enable the Azure Event Hubs Archive feature using an Azure Resource Manager template. The template takes a blobContainerName argument:
"The blob container where you want your event data be archived."
But as far as I know it's not possible to create a blob container using an ARM template, so how am I supposed to enable the Archive feature on an Event Hub?
The purpose of the ARM template is to provision everything from scratch, not to manually create some of the resources using the portal.
It used to be impossible to create containers in your storage account from a template, but this has changed: new functionality has been added to the ARM template for Storage Accounts that enables you to create containers.
To create a storage account with a container called theNameOfMyContainer, add this to your resources block of the ARM template.
{
  "name": "[parameters('storageAccountName')]",
  "type": "Microsoft.Storage/storageAccounts",
  "apiVersion": "2018-02-01",
  "location": "[resourceGroup().location]",
  "kind": "StorageV2",
  "sku": {
    "name": "Standard_LRS",
    "tier": "Standard"
  },
  "properties": {
    "accessTier": "Hot"
  },
  "resources": [
    {
      "name": "[concat('default/', 'theNameOfMyContainer')]",
      "type": "blobServices/containers",
      "apiVersion": "2018-03-01-preview",
      "dependsOn": [
        "[parameters('storageAccountName')]"
      ],
      "properties": {
        "publicAccess": "Blob"
      }
    }
  ]
}
To my knowledge, you can use None, Blob or Container for your publicAccess.
It's still not possible to create queues and tables this way, but hopefully this will be added soon.
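With the container in place, the Event Hub's Archive (Capture) settings can then reference it by name. A sketch, assuming namespaceName and eventHubName parameters and the container created above; the interval, size limit, and name format are illustrative values:
{
  "type": "Microsoft.EventHub/namespaces/eventhubs",
  "name": "[concat(parameters('namespaceName'), '/', parameters('eventHubName'))]",
  "apiVersion": "2017-04-01",
  "comments": "Sketch: capture interval, size limit and name format are assumed values",
  "dependsOn": [
    "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
  ],
  "properties": {
    "messageRetentionInDays": 1,
    "partitionCount": 2,
    "captureDescription": {
      "enabled": true,
      "encoding": "Avro",
      "intervalInSeconds": 300,
      "sizeLimitInBytes": 314572800,
      "destination": {
        "name": "EventHubArchive.AzureBlockBlob",
        "properties": {
          "storageAccountResourceId": "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]",
          "blobContainer": "theNameOfMyContainer",
          "archiveNameFormat": "{Namespace}/{EventHub}/{PartitionId}/{Year}/{Month}/{Day}/{Hour}/{Minute}/{Second}"
        }
      }
    }
  }
}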
Just like you said, there is no way to create a blob container in an ARM template, so the only logical answer to this question is: supply an existing container at deployment time. One way to do that would be to create the container with PowerShell and pass its name as a parameter to the ARM deployment.
I want to deploy a website (Microsoft.Web/sites) resource to an existing hosting plan (Microsoft.Web/serverfarms) without having to define the SKU, worker size, etc. in the ARM template. It should just use the hosting plan as-is without changing it. But the SKU seems to be required for the hosting plan definition, and the hosting plan definition seems to be required for the website definition.
At the moment we read the SKU of the hosting plan and set it as a parameter in the ARM template, but sometimes it still triggers a scaling operation in Azure and restarts all websites on the hosting plan.
The only thing you need in the ARM template to set the hosting plan is the resourceId of that server farm; that's the serverFarmId property below...
"name": "[variables('websiteName')]",
"type": "Microsoft.Web/sites",
"location": "centralus",
"apiVersion": "2015-08-01",
"dependsOn": [ ],
"tags": {
"displayName": "website"
},
"properties": {
"name": "[variables('websiteName')]",
"serverFarmId": "[resourceId(parameters('serverFarmResourceGroupName'), 'Microsoft.Web/serverFarms', parameters('AppSvcPlanName'))]"
}
That's barebones, but it will put a web app into the existing serverFarm.