Context: I'm looking to build out a test lab in Azure. The goal is to have VMs spun up from a CI/CD pipeline to run end-to-end automation tests. The VMs need to be deployed from a custom image, but I don't want to maintain specific virtual machine images with certain software installed in various flavors and permutations.
Furthermore, I'm looking for a self-service, declarative solution where teams can specify in automation (templates, scripts, etc.) which software they need provisioned on the VM after it comes up - desired state.
Example: get me a VM based on image template X, install package A version 2.3 and package B version 1.2, and configure the OS with settings X, Y and Z.
Software packages can come from various sources: MSIs, Chocolatey, copy deploys, etc.
There seem to be so many ways of doing it - it's a jungle. Azure VM Apps? PowerShell Desired State Configuration? Something else?
Cheers
There are two more ways you can accomplish this, in addition to the options you mentioned.
You can use the Custom Script Extension in your pipeline: store the scripts for the various packages or software in a storage account and use different scripts to install different packages on different VMs. Your teams can simply create a new script and store it in the Azure Storage account, and you can reference any of those scripts (with their packages) when deploying a VM.
Custom Script Extension:
I created a storage account and uploaded a custom script (with its package) that installs an IIS server on an Azure VM.
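For reference, the uploaded script can be any PowerShell you want run on the VM. A minimal sketch of such a script - the Chocolatey bootstrap and the package names/versions are just placeholders for whatever your teams declare, not part of my actual script:

# script.ps1 - sample provisioning script pulled by the Custom Script Extension
# Install the IIS role (Windows Server images)
Install-WindowsFeature -Name Web-Server -IncludeManagementTools

# Optionally bootstrap Chocolatey so teams can declare packages and versions
Set-ExecutionPolicy Bypass -Scope Process -Force
Invoke-Expression ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))

# Hypothetical package list - replace with the packages/versions a team requests
choco install packageA --version=2.3 -y
choco install packageB --version=1.2 -y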
While deploying your VM, you can add this custom script on the Advanced tab.
Select Extensions and search for Custom Script Extension.
You can browse the storage account and pick the script to be run in the VM. You can also add the script after VM deployment by going to the VM > left pane > Extensions + applications.
The script was deployed inside the VM and the IIS server was installed successfully.
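If you would rather attach the extension from automation than from the portal, one way is a pipeline step with the Az PowerShell module. This is a sketch; the resource group, VM name, location and blob URL are placeholders:

# Attach the Custom Script Extension to an existing VM from a pipeline step
$settings = @{
    fileUris         = @('https://<storageaccount>.blob.core.windows.net/installers/script.ps1?<sas-token>')
    commandToExecute = 'powershell -ExecutionPolicy Unrestricted -File script.ps1'
}
Set-AzVMExtension -ResourceGroupName 'my-rg' -VMName 'vmname' `
    -Name 'config-app' -Publisher 'Microsoft.Compute' `
    -ExtensionType 'CustomScriptExtension' -TypeHandlerVersion '1.10' `
    -Settings $settings -Location 'westeurope'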
Since you want to automate this in your Azure DevOps pipeline, you can use an ARM template to install the Custom Script Extension as part of the VM deployment. You can also use the TeamServicesAgent extension in the ARM template to connect the VM to your Azure DevOps organization and deployment group. Refer to the template below:
ARM Template:
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "vmname",
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2021-03-01",
      "location": "[resourceGroup().location]",
      "resources": [
        {
          "name": "[concat('vmname', '/TeamServicesAgent')]",
          "type": "Microsoft.Compute/virtualMachines/extensions",
          "location": "[resourceGroup().location]",
          "apiVersion": "2021-03-01",
          "dependsOn": [
            "[resourceId('Microsoft.Compute/virtualMachines/', 'vmname')]"
          ],
          "properties": {
            "publisher": "Microsoft.VisualStudio.Services",
            "type": "TeamServicesAgent",
            "typeHandlerVersion": "1.0",
            "autoUpgradeMinorVersion": true,
            "settings": {
              "VSTSAccountName": "AzureDevOpsorg",
              "TeamProject": "Azuredevopsproject",
              "DeploymentGroup": "Deploymentgroup",
              "AgentName": "vmname"
            },
            "protectedSettings": {
              "PATToken": "personal-access-token-azuredevops"
            }
          }
        }
      ],
      "dependsOn": [
        "[resourceId('Microsoft.Storage/storageAccounts', toLower('vmstore8677676'))]"
      ],
      "properties": {
        "hardwareProfile": {
          "vmSize": "Standard_D2s_v3"
        },
        "osProfile": {
          "computerName": "vmname",
          "adminUsername": "username",
          "adminPassword": "Password"
        },
        "storageProfile": {
          "imageReference": {
            "publisher": "MicrosoftWindowsServer",
            "offer": "WindowsServer",
            "sku": "2019-Datacenter",
            "version": "latest"
          },
          "osDisk": {
            "name": "windowsVM1OSDisk",
            "caching": "ReadWrite",
            "createOption": "FromImage"
          }
        },
        "networkProfile": {
          "networkInterfaces": [
            {
              "id": "[resourceId('Microsoft.Network/networkInterfaces', 'app-interface')]"
            }
          ]
        },
        "diagnosticsProfile": {
          "bootDiagnostics": {
            "enabled": true,
            "storageUri": "[reference(resourceId('Microsoft.Storage/storageAccounts/', toLower('storageaccountname'))).primaryEndpoints.blob]"
          }
        }
      }
    },
    {
      "type": "Microsoft.Compute/virtualMachines/extensions",
      "name": "[concat('vmname', '/config-app')]",
      "location": "[resourceGroup().location]",
      "apiVersion": "2018-06-01",
      "dependsOn": [
        "[resourceId('Microsoft.Compute/virtualMachines/', 'vmname')]"
      ],
      "properties": {
        "publisher": "Microsoft.Compute",
        "type": "CustomScriptExtension",
        "typeHandlerVersion": "1.10",
        "autoUpgradeMinorVersion": true,
        "settings": {
          "fileUris": [
            "https://storageaccountname.blob.core.windows.net/installers/script.ps1?sp=r&st=2022-08-13T16:32:07Z&se=sas-token"
          ],
          "commandToExecute": "powershell -ExecutionPolicy Unrestricted -File script.ps1"
        }
      }
    }
  ],
  "outputs": {}
}
You need to generate a SAS URL for the script file in your Azure Storage account.
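One way to generate the SAS URL and kick off the template from a pipeline step is with the Az modules. This is only a sketch: the account, container, blob and template file names are placeholders, and it assumes you parameterize the fileUris value in the template as a scriptUri parameter:

# Build a read-only SAS URL for the uploaded script
$ctx = New-AzStorageContext -StorageAccountName 'storageaccountname' -StorageAccountKey '<account-key>'
$sasUrl = New-AzStorageBlobSASToken -Container 'installers' -Blob 'script.ps1' `
    -Permission r -ExpiryTime (Get-Date).AddHours(2) -Context $ctx -FullUri

# Deploy the ARM template shown above, passing the SAS URL in as a parameter
New-AzResourceGroupDeployment -ResourceGroupName 'my-rg' `
    -TemplateFile .\vm-with-customscript.json `
    -TemplateParameterObject @{ scriptUri = $sasUrl }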
Alternatively, you can use Azure DevTest Labs: deploy custom artifacts inside your DevTest Lab (different packages for different VMs), and reuse the lab's ARM template and VM tasks in an Azure DevOps release pipeline.
DevTest Labs:
I created an Azure DevTest Labs resource.
You can select directly from a set of pre-built images.
After selecting an image, create the VM and add artifacts; here you can add any desired package that needs to be installed on your VM.
You can create multiple DevTest Labs according to your requirements and add more packages as artifacts after the VM has been deployed.
Click Apply artifacts to add additional or custom packages to your VMs.
You can also automate this deployment via an ARM template; refer here:
azure-docs/devtest-lab-use-resource-manager-template.md at main · MicrosoftDocs/azure-docs · GitHub
You can automate the Azure DevTest Labs deployment in Azure DevOps by following the steps in this document:
Integrate Azure DevTest Labs into Azure Pipelines - Azure DevTest Labs | Microsoft Learn
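If you want to drive the lab VM creation from a script as well (for example from a release stage), one option is to deploy a lab VM ARM template like the one in the linked doc with the Az module. This is a sketch only; the template file and parameter names below are placeholders for whatever that template actually defines:

# Deploy a DevTest Labs VM from an ARM template, with its artifacts declared as template parameters
New-AzResourceGroupDeployment -ResourceGroupName 'my-devtestlab-rg' `
    -TemplateFile .\devtestlab-vm-with-artifacts.json `
    -TemplateParameterObject @{
        labName   = 'MyTestLab'
        newVMName = 'e2e-test-vm'
    }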
Apart from these methods, you can use Chef or Puppet to automate your deployments and packages.
Chef - Chef extension for Azure VMs - Azure Virtual Machines | Microsoft Learn
Puppet - Get Started on Azure With Puppet | Puppet by Perforce
Related
I want to build an automation process where every VM connects to a Log Analytics workspace. Can anyone please help me: how do I connect a VM to a Log Analytics workspace via the REST API or the Node.js SDK?
or
How do I enable VM insights through the REST API or the Node.js SDK?
You can do it with virtual machine extensions, which enable the following agents:
Log Analytics agent - the VM extension for Windows and Linux.
Dependency agent - the VM extension for Windows and Linux.
Also, before a Log Analytics workspace can be used with VM insights, it must have the VMInsights solution installed; see Configure VM insights.
For example, I clicked the green Try It button in the REST API reference Virtual Machine Extensions - Create Or Update and provided my parameters and body to call this API.
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/extensions/{vmExtensionName}?api-version=2020-12-01
The request bodies below, for a Windows VM, are deployed in order.
Deploy MicrosoftMonitoringAgent
{
  "location": "<location>",
  "properties": {
    "publisher": "Microsoft.EnterpriseCloud.Monitoring",
    "type": "MicrosoftMonitoringAgent",
    "typeHandlerVersion": "1.0",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "workspaceId": "<workspaceId>",
      "stopOnMultipleConnections": true
    },
    "protectedSettings": {
      "workspaceKey": "<workspaceKey>"
    }
  }
}
Once the above extension is provisioned, you can deploy DependencyAgentWindows.
{
  "location": "<location>",
  "properties": {
    "publisher": "Microsoft.Azure.Monitoring.DependencyAgent",
    "type": "DependencyAgentWindows",
    "typeHandlerVersion": "9.5",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "workspaceId": "<workspaceId>"
    },
    "protectedSettings": {
      "workspaceKey": "<workspaceKey>"
    }
  }
}
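If you'd rather drive this from a script than the Try It button, one option is Invoke-AzRestMethod from the Az.Accounts module. This is a sketch; the subscription ID, resource group, VM name and location are placeholders:

# PUT the Log Analytics agent extension onto the VM via the ARM REST API
$body = @{
    location   = 'westeurope'
    properties = @{
        publisher               = 'Microsoft.EnterpriseCloud.Monitoring'
        type                    = 'MicrosoftMonitoringAgent'
        typeHandlerVersion      = '1.0'
        autoUpgradeMinorVersion = $true
        settings                = @{ workspaceId = '<workspaceId>'; stopOnMultipleConnections = $true }
        protectedSettings       = @{ workspaceKey = '<workspaceKey>' }
    }
} | ConvertTo-Json -Depth 5

Invoke-AzRestMethod -Method PUT `
    -Path "/subscriptions/<subscriptionId>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<vmName>/extensions/MicrosoftMonitoringAgent?api-version=2020-12-01" `
    -Payload $body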
I'm trying to deploy a SQL server and a SQL data warehouse with an ARM template from the Azure CLI. The issue is that the template fails because the data warehouse references the SQL server by name before the server exists. So my question is: how do I hold back the data warehouse deployment until the SQL server has been deployed successfully?
Or is there any other way to delay it until the SQL server deployment has finished?
You would use the dependsOn property of a resource definition:
{
  "type": "Microsoft.Compute/virtualMachineScaleSets",
  "name": "[variables('namingInfix')]",
  "location": "[variables('location')]",
  "apiVersion": "2016-03-30",
  "tags": {
    "displayName": "VMScaleSet"
  },
  "dependsOn": [
    "[variables('loadBalancerName')]",
    "[variables('virtualNetworkName')]",
    "storageLoop"
  ],
  ...
}
In the example above, the VM scale set does not get created until the load balancer, virtual network, and the storage accounts created by the storageLoop copy loop exist. For your scenario, add a dependsOn entry referencing the SQL server resource to the data warehouse resource so its deployment waits for the server.
Documentation on how to use it: https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-define-dependencies
How can I deploy my function app attached to a VNet using an ARM template?
Right now my ARM template deploys an App Service plan-based function app just fine and sets the "vnetName" in the "site" resource's "properties" section... but after the deployment the function app still shows as "not configured" for any VNet. I can then go into the portal and add it to the desired VNet in a couple of clicks... but why doesn't it work via the ARM template? Can someone please provide a sample ARM template that does this?
Upon further review, this is not possible via ARM alone. Here are the instructions to connect a web app to a VNet: https://learn.microsoft.com/en-gb/azure/app-service-web/app-service-vnet-integration-powershell
Old answer: here is another post trying to achieve the same thing with Web Apps (Functions is built on Web Apps): https://stackoverflow.com/a/39518349/5915331
If I had to guess based on the PowerShell commands, try adding a sub-resource to the site of type config with the name web, similar to the ARM template for a GitHub deployment: https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/app-service-web/app-service-web-arm-from-github-provision.md#web-app
There may be some magic happening under the hood, however.
{
  "name": "[parameters('siteName')]",
  "type": "Microsoft.Web/sites",
  "location": ...,
  "apiVersion": ...,
  "properties": ...,
  "resources": [
    {
      "name": "web",
      "type": "config",
      "apiVersion": "2015-08-01",
      "dependsOn": [
        "[resourceId('Microsoft.Web/Sites', parameters('siteName'))]"
      ],
      "properties": {
        "vnetName": "[parameters('vnetName')]"
      }
    }
  ]
}
I can do everything with ARM except create the VPN package, which is what makes the difference between just setting the resource properties and actually making the app connect to the VNet correctly.
Everything in this article (https://learn.microsoft.com/en-gb/azure/app-service-web/app-service-vnet-integration-powershell) can easily be converted from PowerShell to ARM, except for Get-AzureRmVpnClientPackage.
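For reference, this is the one step from that article with no ARM equivalent. A sketch using the old AzureRM module the article is based on; the gateway and resource group names are placeholders, and the exact parameter names may differ between module versions:

# Generates the point-to-site VPN client package URL - only available imperatively
Get-AzureRmVpnClientPackage -ResourceGroupName 'my-rg' `
    -VirtualNetworkGatewayName 'my-vnet-gateway' `
    -ProcessorArchitecture Amd64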
Hypothetically, if you were using a legacy (classic) VNet, you could get the VPN client package URL via ARM, because the classic network resource provider supports that operation (https://learn.microsoft.com/en-us/azure/active-directory/role-based-access-control-resource-provider-operations):
Microsoft.ClassicNetwork/virtualNetworks/gateways/listPackage/action
The ARM VNet provider does not seem to expose an equivalent operation.
According to the documentation I can enable the Azure Event Hubs Archive feature using an Azure Resource Manager template. The template takes a blobContainerName argument:
"The blob container where you want your event data be archived."
But as far as I know it's not possible to create a blob container using an ARM template, so how am I supposed to enable the Archive feature on an Event Hub?
The purpose of the ARM template is to provision everything from scratch, not to manually create some of the resources using the portal.
It wasn't possible to create containers in your storage account before, but this has changed: the ARM template schema for storage accounts now lets you create containers as child resources.
To create a storage account with a container called theNameOfMyContainer, add this to the resources block of your ARM template:
{
  "name": "[parameters('storageAccountName')]",
  "type": "Microsoft.Storage/storageAccounts",
  "apiVersion": "2018-02-01",
  "location": "[resourceGroup().location]",
  "kind": "StorageV2",
  "sku": {
    "name": "Standard_LRS",
    "tier": "Standard"
  },
  "properties": {
    "accessTier": "Hot"
  },
  "resources": [
    {
      "name": "[concat('default/', 'theNameOfMyContainer')]",
      "type": "blobServices/containers",
      "apiVersion": "2018-03-01-preview",
      "dependsOn": [
        "[parameters('storageAccountName')]"
      ],
      "properties": {
        "publicAccess": "Blob"
      }
    }
  ]
}
To my knowledge, you can use None, Blob or Container for your publicAccess.
It's still not possible to create queues and tables this way, but hopefully this will be added soon.
Just like you said, there was no way to create a blob container in an ARM template (at the time of writing), so the only logical answer is: supply an existing container at deployment time. One way to do that is to create the container with PowerShell and pass its name as a parameter to the ARM deployment.
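A sketch of that approach, using the current Az.Storage module rather than the older AzureRM cmdlets; the account, container and template names are placeholders, and blobContainerName is assumed to be the parameter your Event Hub template expects:

# Create the container up front, then hand its name to the ARM deployment
$ctx = New-AzStorageContext -StorageAccountName 'mystorageaccount' -StorageAccountKey '<account-key>'
New-AzStorageContainer -Name 'eventhub-archive' -Context $ctx -Permission Off

New-AzResourceGroupDeployment -ResourceGroupName 'my-rg' `
    -TemplateFile .\eventhub-archive.json `
    -TemplateParameterObject @{ blobContainerName = 'eventhub-archive' }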
I want to deploy a website (Microsoft.Web/sites) resource into an existing hosting plan (Microsoft.Web/serverfarms) without having to define the sku, worker size, etc. in the ARM template. It should just use the hosting plan as-is without changing it. But the sku seems to be required for the hosting plan definition, and the hosting plan definition seems to be required for the website definition.
At the moment we read the sku of the hosting plan and pass it as a parameter into the ARM template, but sometimes this still triggers a scaling operation in Azure and restarts all websites on the hosting plan.
The only thing you need in the ARM template to point the site at the existing hosting plan is the resource ID of that server farm - that's the serverFarmId property below:
"name": "[variables('websiteName')]",
"type": "Microsoft.Web/sites",
"location": "centralus",
"apiVersion": "2015-08-01",
"dependsOn": [ ],
"tags": {
"displayName": "website"
},
"properties": {
"name": "[variables('websiteName')]",
"serverFarmId": "[resourceId(parameters('serverFarmResourceGroupName'), 'Microsoft.Web/serverFarms', parameters('AppSvcPlanName'))]"
}
That's barebones, but it will put a web app into the existing serverFarm.
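If you prefer not to build the ID inside the template, a sketch of resolving the existing plan's resource ID at deployment time and passing it in directly - this assumes you expose a serverFarmId parameter instead of calling resourceId(), and the resource group, plan and template names are placeholders:

# Look up the existing App Service plan and hand its resource ID to the template
$plan = Get-AzAppServicePlan -ResourceGroupName 'shared-rg' -Name 'my-existing-plan'
New-AzResourceGroupDeployment -ResourceGroupName 'web-rg' `
    -TemplateFile .\website.json `
    -TemplateParameterObject @{ serverFarmId = $plan.Id }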