Does anybody know how to set up Service Fabric cluster with VMs on managed disks(both OS and Data)? I would be very interested to know how to do this using template config.
You need to change the VMSS API version to 2016-04-30-preview and the storageProfile to this:
"storageProfile": {
"imageReference": {
"publisher": "[parameters('vmImagePublisher')]",
"offer": "[parameters('vmImageOffer')]",
"sku": "[parameters('vmImageSku')]",
"version": "[parameters('vmImageVersion')]"
},
"osDisk": {
"createOption": "FromImage"
"managedDisk": {
"storageAccountType": "Standard_LRS"
# defauls to Standard_LRS,
# you can choose to pick Premium_LRS if your VM size supports premium storage
# or you can omit this node completely if you need standard storage
}
}
}
Storage Accounts are redundant when using managed disks (you don't need them; Azure handles the storage for you).
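Since the question asks about data disks as well: under the same 2016-04-30-preview API version, the VMSS storageProfile can also carry a dataDisks array with managed disks. A minimal sketch (the lun and diskSizeGB values are illustrative, not required values):

```json
"dataDisks": [
  {
    "lun": 0,
    "createOption": "Empty",
    "diskSizeGB": 128,
    "managedDisk": {
      "storageAccountType": "Standard_LRS"
    }
  }
]
```

As with the OS disk, the managedDisk node can be omitted or set to Premium_LRS depending on the VM size.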
Context: looking to build out a test lab in Azure. The goal is to have VMs spun up from a CI/CD pipeline to run end2end automation tests. The VMs will need to be deployed based on a custom image. However, I don't want to maintain specific virtual machine images which have certain software installed in various flavors and permutations.
Furthermore, looking to have a self service and declarative solution where teams can specify in automation templates or scripts etc which software they need provisioned on the VM after it comes up, desired state.
Example: get me a VM based on image template X, install package A version 2.3 and package B version 1.2, and configure the OS with settings X, Y, and Z.
Software packages can come from various sources. MSIs, chocolatey, copy deploys etc.
There seem to be so many ways of doing it - it seems like a jungle. Azure VM Apps? PowerShell Desired State Configuration? Something else?
Cheers
There are two more ways you can accomplish this task.
You can make use of the Custom Script Extension in your pipeline: store the scripts and their software packages in a storage account, and use different scripts to install different packages on different VMs. Your teams can simply create a new script, store it in the Azure Storage account, and reference it when deploying a VM.
Custom Script Extension:
I created a Storage account and uploaded a custom script that installs the IIS server on an Azure VM.
While deploying your VM, you can select this custom script in the Advanced tab: select the extension and search for Custom Script Extension.
You can browse the Storage account and pick the script to be installed on the VM. You can also install the script after VM deployment by going to the VM > Extensions + applications.
The script was deployed inside the VM and the IIS server was installed successfully.
As you want to automate this in your Azure DevOps pipeline, you can use an ARM template to install the Custom Script Extension on your VM. You can also use the TeamServicesAgent extension in the template to connect the VM to your Azure DevOps organization and deployment group. Refer to the template below:
ARM Template:
{
"name": "vmname",
"type": "Microsoft.Compute/virtualMachines",
"apiVersion": "2021-03-01",
"location": "[resourceGroup().location]",
"resources": [
{
"name": "[concat('vmname','/TeamServicesAgent')]",
"type": "Microsoft.Compute/virtualMachines/extensions",
"location": "[resourceGroup().location]",
"apiVersion": "2021-03-01",
"dependsOn": [
"[resourceId('Microsoft.Compute/virtualMachines/','vmname')]"
],
"properties": {
"publisher": "Microsoft.VisualStudio.Services",
"type": "TeamServicesAgent",
"typeHandlerVersion": "1.0",
"autoUpgradeMinorVersion": true,
"settings": {
"VSTSAccountName": "AzureDevOpsorg",
"TeamProject": "Azuredevopsproject",
"DeploymentGroup": "Deploymentgroup",
"AgentName": "vmname"
},
"protectedSettings": {
"PATToken": "personal-access-token-azuredevops"
}
}
}
],
"dependsOn": [
"[resourceId('Microsoft.Storage/storageAccounts', toLower('vmstore8677676'))]"
],
"properties": {
"hardwareProfile": {
"vmSize": "Standard_D2s_v3"
},
"osProfile": {
"computerName": "vmname",
"adminUsername": "username",
"adminPassword": "Password"
},
"storageProfile": {
"imageReference": {
"publisher": "MicrosoftWindowsServer",
"offer": "WindowsServer",
"sku": "2019-Datacenter",
"version": "latest"
},
"osDisk": {
"name": "windowsVM1OSDisk",
"caching": "ReadWrite",
"createOption": "FromImage"
}
},
"networkProfile": {
"networkInterfaces": [
{
"id": "[resourceId('Microsoft.Network/networkInterfaces', 'app-interface')]"
}
]
},
"diagnosticsProfile": {
"bootDiagnostics": {
"enabled": true,
"storageUri": "[reference(resourceId('Microsoft.Storage/storageAccounts/', toLower('storaegeaccountname'))).primaryEndpoints.blob]"
}
}
}
},
{
"type": "Microsoft.Compute/virtualMachines/extensions",
"name": "[concat('vmname', '/config-app')]",
"location": "[resourceGroup().location]",
"apiVersion": "2018-06-01",
"dependsOn": [
"[resourceId('Microsoft.Compute/virtualMachines/', 'vmname')]"
],
"properties": {
"publisher": "Microsoft.Compute",
"type": "CustomScriptExtension",
"typeHandlerVersion": "1.10",
"autoUpgradeMinorVersion": true,
"settings": {
"fileUris": [
"https://storageaccountname.blob.core.windows.net/installers/script.ps1?sp=r&st=2022-08-13T16:32:07Z&se=sas-token"
],
"commandToExecute": "powershell -ExecutionPolicy Unrestricted -File script.ps1"
}
}
}
],
"outputs": {}
}
You need to generate a SAS URL for the script file in your Azure Storage account.
You can make use of Azure DevTest Labs: deploy custom artifacts inside your lab, with different packages for different VMs, and copy the ARM template and the VM tasks into the release pipeline of Azure DevOps.
DevTest Labs:
I created an Azure DevTest Labs resource.
You can select directly from a set of pre-built images.
After selecting an image, create the VM and add artifacts; here you can add any package that needs to be installed on your VM.
You can create multiple DevTest Labs according to your requirements, and add additional packages as artifacts after the VM is deployed.
Click on "apply artifacts" to add additional or custom packages to your VMs.
You can also automate this deployment via an ARM template; refer here:
azure-docs/devtest-lab-use-resource-manager-template.md at main · MicrosoftDocs/azure-docs · GitHub
You can automate Azure DevTest Labs deployment in Azure DevOps by following the steps in this document:
Integrate Azure DevTest Labs into Azure Pipelines - Azure DevTest Labs | Microsoft Learn
Apart from these methods, you can use Chef and Puppet to automate your deployments and packages.
Chef - Chef extension for Azure VMs - Azure Virtual Machines | Microsoft Learn
Puppet - Get Started on Azure With Puppet | Puppet by Perforce
I am currently deploying some new Azure VMs using a template. This template contains a link to a VHD image and uses availability sets.
After having a look at the Azure docs, I cannot tell whether it's possible to use my current procedure to deploy the VM into a specific zone.
I changed my template to use zones rather than sets but when I use it in Azure CLI I have this error message returned:
"Virtual Machines deployed to an Availability Zone must use managed disks."
I then tried to add the managed disk section to the template without success.
Below is the pseudocode of the template section related to the VM's storage:
"storageProfile": {
"osDisk": {
"managedDisk": {
"storageAccountType": "StandardSSD_LRS"
},
"osType": "Linux",
"name": "myName.vhd",
"createOption": "FromImage",
"image": {
"uri": "myUri.vhd"
},
"vhd": {
"uri": "myVhdImageUri.vhd"
},
"caching": "ReadWrite"
}
}
You have to convert the disk to a managed disk first. Then you will be able to use it in your template.
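After conversion, the storageProfile references the managed disk by resource ID and drops the vhd and image URIs entirely (a zone deployment cannot reference unmanaged VHDs). A rough sketch, assuming the converted disk is named myManagedOsDisk (a hypothetical name):

```json
"storageProfile": {
  "osDisk": {
    "osType": "Linux",
    "createOption": "Attach",
    "caching": "ReadWrite",
    "managedDisk": {
      "id": "[resourceId('Microsoft.Compute/disks', 'myManagedOsDisk')]"
    }
  }
}
```

Note that with createOption set to Attach, the vhd, image, and imageReference properties are not used at all.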
We are using the Azure SQL template to deploy VMs with managed disks instead of storage blobs. Unfortunately, the auto-generated managed disk names are not what we want, and we cannot find a way to change them in the deployment template.
Is there a way to rename a managed disk post deployment? (or during)
Well, it is super easy to give them names; there's a name property for that:
"storageProfile": {
"imageReference": {
"publisher": "MicrosoftWindowsServer",
"offer": "WindowsServer",
"sku": "2016-Datacenter",
"version": "latest"
},
"osDisk": {
"createOption": "FromImage",
"name": "somename" <<< THIS IS IT
}
},
I'm not sure it's possible to rename a disk after you've created it. It might be possible to create a new managed disk out of the existing managed disk, supplying the name you want for the new one.
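If you go the copy route, an ARM snippet along these lines should work. This is a sketch, assuming the auto-generated disk is named autoGeneratedDiskName (a placeholder) and the disk is not attached to a running VM:

```json
{
  "type": "Microsoft.Compute/disks",
  "name": "myDesiredDiskName",
  "apiVersion": "2017-03-30",
  "location": "[resourceGroup().location]",
  "sku": {
    "name": "Standard_LRS"
  },
  "properties": {
    "creationData": {
      "createOption": "Copy",
      "sourceResourceId": "[resourceId('Microsoft.Compute/disks', 'autoGeneratedDiskName')]"
    }
  }
}
```

You would then attach the copy to the VM and delete the original disk.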
I've been trying to get Premium managed disks (SSD) enabled for Azure Virtual Machine Scale Sets, but I don't seem to get it setup.
Standard (HDD) seems to work for managed disks.
Anybody got this working?
Just pick SSD-capable VMs when creating the VMSS.
The VMSS portal page may say that it's still using HDD, but if you check the actual resource properties, they will show:
"storageProfile": {
"osDisk": {
"createOption": "FromImage",
"caching": "ReadWrite",
"managedDisk": {
"storageAccountType": "Premium_LRS"
}
},
"imageReference": {
"publisher": "MicrosoftWindowsServer",
"offer": "WindowsServer",
"sku": "2016-Datacenter",
"version": "latest"
}
},
According to the documentation, I can enable the Azure Event Hubs Archive feature using an Azure Resource Manager template. The template takes a blobContainerName argument:
"The blob container where you want your event data be archived."
But as far as I know, it's not possible to create a blob container using an ARM template, so how am I supposed to enable the Archive feature on an Event Hub?
The purpose of the ARM template is to provision everything from scratch, not to manually create some of the resources using the portal.
It wasn't previously possible to create containers in your storage account, but this has changed: new functionality has been added to the Storage Account ARM resource which enables you to create containers.
To create a storage account with a container called theNameOfMyContainer, add this to your resources block of the ARM template.
{
"name": "[parameters('storageAccountName')]",
"type": "Microsoft.Storage/storageAccounts",
"apiVersion": "2018-02-01",
"location": "[resourceGroup().location]",
"kind": "StorageV2",
"sku": {
"name": "Standard_LRS",
"tier": "Standard"
},
"properties": {
"accessTier": "Hot"
},
"resources": [{
"name": "[concat('default/', 'theNameOfMyContainer')]",
"type": "blobServices/containers",
"apiVersion": "2018-03-01-preview",
"dependsOn": [
"[parameters('storageAccountName')]"
],
"properties": {
"publicAccess": "Blob"
}
}]
}
To my knowledge, you can use None, Blob, or Container for publicAccess.
It's still not possible to create Queues and Tables, but hopefully this will be added soon.
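Once the container exists in the template, you can wire it into the Event Hub's Capture (formerly Archive) settings. A hedged sketch of the relevant properties block on the Microsoft.EventHub/namespaces/eventhubs resource (the interval and size values are illustrative):

```json
"captureDescription": {
  "enabled": true,
  "encoding": "Avro",
  "intervalInSeconds": 300,
  "sizeLimitInBytes": 314572800,
  "destination": {
    "name": "EventHubArchive.AzureBlockBlob",
    "properties": {
      "storageAccountResourceId": "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]",
      "blobContainer": "theNameOfMyContainer",
      "archiveNameFormat": "{Namespace}/{EventHub}/{PartitionId}/{Year}/{Month}/{Day}/{Hour}/{Minute}/{Second}"
    }
  }
}
```

Add a dependsOn entry for the storage account so the container is created before Capture is enabled.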
Just like you said, there was no way to create a blob container in an ARM template, so the only logical answer to this question is: supply an existing container at deployment time. One way to do that would be to create the container with PowerShell and pass its name as a parameter to the ARM deployment.