Azure Resource Manager: attach VHD to a VM during provisioning?

I'm trying to attach an existing VHD disk from a Storage Account to a VM during Azure Resource Manager provisioning with a template.
My dataDisks resource is:
"dataDisks": [
{
"name": "jmdisk",
"diskSizeGB": "100",
"lun": 0,
"vhd": {
"uri": "https://jmje.blob.core.windows.net/vhds/jenkinshome.vhd"
},
"createOption": "attach"
}
]
But during deployment I get an error from Azure:
Status message:
{
  "error": {
    "code": "OperationNotAllowed",
    "target": "dataDisk",
    "message": "Addition of a blob based disk to VM with managed disks is not supported."
  }
}
Unfortunately I can't find anything related to this, i.e. the correct way to attach an existing disk.
UPDATE: I solved this by creating a new managed disk and copying the data there.

You can create a managed disk from an existing blob -- you can see a sample of that here: https://github.com/chagarw/MDPP/blob/master/101-create-image-availabilityset-2vm-from-blob/azuredeploy.json
It uses existing blobs for both the OS and data disks, but you don't have to do it that way. In your case it sounds like you want an implicit OS disk and then an explicit data disk? You could also do that; just use different images for each.
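If you prefer to keep the VM on managed disks, here is a minimal sketch of that approach: import the blob into a Microsoft.Compute/disks resource and attach it by resource id. The disk name jmdisk-managed and the apiVersion are assumptions, and Import may additionally require a storageAccountId when the source blob lives in another subscription.
{
  "type": "Microsoft.Compute/disks",
  "apiVersion": "2018-06-01",
  "name": "jmdisk-managed",
  "location": "[resourceGroup().location]",
  "properties": {
    "creationData": {
      "createOption": "Import",
      "sourceUri": "https://jmje.blob.core.windows.net/vhds/jenkinshome.vhd"
    },
    "diskSizeGB": 100
  }
},
Then, in the VM's storageProfile, attach it by id instead of by blob URI:
"dataDisks": [
  {
    "lun": 0,
    "createOption": "Attach",
    "managedDisk": {
      "id": "[resourceId('Microsoft.Compute/disks', 'jmdisk-managed')]"
    }
  }
]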

Well, the error gives it away; you are probably not familiar with Managed Disks yet. You are creating a VM whose OS disk is managed, and in that case you cannot attach an existing blob-based disk to it. Either create the VM with a regular (unmanaged) OS disk, just like you did for the data disk, or attach the data disk as a managed disk.
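For the first option, a rough sketch of an unmanaged storageProfile is below; the osdisk.vhd blob name is hypothetical and the imageReference block is omitted for brevity. Because both the OS and data disks are plain blobs here, the Attach is allowed.
"osDisk": {
  "name": "osdisk",
  "createOption": "FromImage",
  "caching": "ReadWrite",
  "vhd": {
    "uri": "https://jmje.blob.core.windows.net/vhds/osdisk.vhd"
  }
},
"dataDisks": [
  {
    "name": "jmdisk",
    "lun": 0,
    "createOption": "Attach",
    "vhd": {
      "uri": "https://jmje.blob.core.windows.net/vhds/jenkinshome.vhd"
    }
  }
]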

Related

ARM Template deploymentScripts custom container instance fileshare name

I have created a template which deploys a deploymentScripts resource to do some work in SQL. Since we locked down SQL on a network level, I needed to create a custom container instance which is connected to my vnet. That container instance needs to have a volume mounted on an Azure Storage Account. The file share on that account is created by the deploymentScripts deployment, but you don't seem to be able to set it. How do I reliably get to that value so I can create the share myself and mount it? Here's the part of the container instance I'm talking about; it's the "shareName": "hvtqyj3nqhygoazscripts" I'm looking for. It seems to be the uniqueString() function concatenated with 'azscripts'. What is the input to the uniqueString() function?
"volumes": [
{
"name": "azscripts",
"azureFile": {
"shareName": "hvtqyj3nqhygoazscripts",
"storageAccountName": "<storename>",
"storageAccountKey": "<key>"
}
}
]
In deploymentScripts (DS), the ACI is not treated as a shared resource, so you cannot create it in advance and use it in DS (there is one ACI per DS), but you can use an existing Storage Account as seen here. File shares are controlled by DS to isolate script content and outputs, so you cannot control the name.
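A minimal sketch of pointing a deploymentScripts resource at an existing Storage Account via storageAccountSettings is below; the resource name, azCliVersion, and scriptContent are placeholders, and <storename>/<key> are the values from your template.
{
  "type": "Microsoft.Resources/deploymentScripts",
  "apiVersion": "2020-10-01",
  "name": "runSqlSetup",
  "location": "[resourceGroup().location]",
  "kind": "AzureCLI",
  "properties": {
    "azCliVersion": "2.30.0",
    "storageAccountSettings": {
      "storageAccountName": "<storename>",
      "storageAccountKey": "<key>"
    },
    "scriptContent": "echo 'run SQL setup here'",
    "retentionInterval": "P1D"
  }
}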

How do I provision throughput on a container?

I created a Cosmos Db account, database, and containers using this ARM template. Deployment via Azure DevOps Release pipeline is working.
I used this ARM template to adjust the database throughput. It is also in a Release pipeline and is working.
Currently the throughput is provisioned at the database level and shared across all containers. How do I provision throughput at the container level? I tried running this ARM template to update throughput at the container level. It appears that once shared throughput is provisioned at the database level there's no way to provision throughput at the container level.
I found this reference document but throughput is not listed. Am I missing something super obvious or is the desired functionality not implemented yet?
UPDATE:
When attempting to update the container with the above template I get the following:
2019-05-29T20:25:10.5166366Z There were errors in your deployment. Error code: DeploymentFailed.
2019-05-29T20:25:10.5236514Z ##[error]At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details.
2019-05-29T20:25:10.5246027Z ##[error]Details:
2019-05-29T20:25:10.5246412Z ##[error]NotFound: {
"code": "NotFound",
"message": "Entity with the specified id does not exist in the system.\r\nActivityId: 7ba84...b52b2, Microsoft.Azure.Documents.Common/2.4.0.0"
} undefined
2019-05-29T20:25:10.5246730Z ##[error]Task failed while creating or updating the template deployment.
I was also experiencing the same error:
"code": "NotFound",
"message": "Entity with the specified id does not exist in the system.
I was deploying an ARM template via DevOps pipeline to change the configuration of an existing resource in Azure.
The existing resource had dedicated throughput defined at the container/collection level, and my ARM template was trying to define the throughput at the database level...
Once adjusted, my deployment pipeline worked.
Here is some info on my throughput provisioning fix: https://github.com/MicrosoftDocs/azure-docs/issues/30853
I believe you have to create the container with a dedicated throughput, first. I have not seen any documentation for changing a container from shared to dedicated throughput. In the Microsoft documentation, the example is creating containers with both shared and dedicated throughput.
Set throughput on a database and a container
You can combine the two models. Provisioning throughput on both the database and the container is allowed. The following example shows how to provision throughput on an Azure Cosmos database and a container:
You can create an Azure Cosmos database named Z with provisioned throughput of "K" RUs.
Next, create five containers named A, B, C, D, and E within the database. When creating container B, make sure to enable the Provision dedicated throughput for this container option and explicitly configure "P" RUs of provisioned throughput on it. Note that you can configure shared and dedicated throughput only when creating the database and container.
The "K" RUs throughput is shared across the four containers A, C, D, and E. The exact amount of throughput available to A, C, D, or E varies. There are no SLAs for each individual container’s throughput.
The container named B is guaranteed to get the "P" RUs throughput all the time. It's backed by SLAs.
There is a prereq ARM template in a subfolder for the 101-cosmosdb-sql-container-ru-update. In the prereq version, the container has the throughput property set when the container is created. After the container is created with dedicated throughput, the update template works without error. I have tried it out and verified that it works.
{
  "type": "Microsoft.DocumentDB/databaseAccounts/apis/databases",
  "name": "[concat(variables('accountName'), '/sql/', variables('databaseName'))]",
  "apiVersion": "2016-03-31",
  "dependsOn": [ "[resourceId('Microsoft.DocumentDB/databaseAccounts/', variables('accountName'))]" ],
  "properties": {
    "resource": {
      "id": "[variables('databaseName')]"
    },
    "options": { "throughput": "[variables('databaseThroughput')]" }
  }
},
{
  "type": "Microsoft.DocumentDb/databaseAccounts/apis/databases/containers",
  "name": "[concat(variables('accountName'), '/sql/', variables('databaseName'), '/', variables('containerName'))]",
  "apiVersion": "2016-03-31",
  "dependsOn": [ "[resourceId('Microsoft.DocumentDB/databaseAccounts/apis/databases', variables('accountName'), 'sql', variables('databaseName'))]" ],
  "properties": {
    "resource": {
      "id": "[variables('containerName')]",
      "partitionKey": {
        "paths": [
          "/MyPartitionKey1"
        ],
        "kind": "Hash"
      }
    },
    "options": { "throughput": "[variables('containerThroughput')]" }
  }
}

Azure VM image does not have Source Blob URI

After following the instructions for creating a managed image in Azure, I'm trying to create a VM from the managed image inside the ARM template. The ARM template requires a source blob URI which should be listed on the VM image page within the Azure portal, but it's blank (see screen shot below).
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/capture-image-resource
Did I miss a step somewhere?
Yes, to create a VM from the managed disk image you need its resource id, not its URI (because it doesn't have one). Here's an ARM template snippet to create a VM from the managed disk image:
"storageProfile": {
"imageReference": {
"id": "[resourceId('Microsoft.Compute/images', concat(parameters('vmPrefix'), '-gateway-osImage'))]"
},
"osDisk": {
"name": "[concat(parameters('vmPrefix'), '-gateway-os-vhd')]",
"createOption": "FromImage"
}
},

How to add data disk to SF VMSS?

I'm trying to deploy Service Fabric with data disks, and I added the data disk configuration to my deployment template:
"dataDisks":[{
"lun": 1,
"createOption": "Empty",
"diskSizeGB": 1023
}
]
And I got this error:
Activity: Write-Error
Message: ==================
3:46:16 PM - Resource Microsoft.Compute/virtualMachineScaleSets 'inode' failed with message '{
"error": {
"code": "OperationNotAllowed",
"target": "dataDisk",
"message": "Addition of a managed disk to a VM with blob based disks is not supported."
}
}'
Is it even possible to add a data disk to a Service Fabric scale set? I'm looking for a template example.
Change your cluster VM template definition so that it uses Managed Disks.
Then you can attach managed data disks too.
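A rough sketch of a managed-disk storageProfile for the scale set's virtualMachineProfile is below; the imageReference values and storage account types are placeholders, not a recommendation.
"storageProfile": {
  "imageReference": {
    "publisher": "MicrosoftWindowsServer",
    "offer": "WindowsServer",
    "sku": "2016-Datacenter-with-Containers",
    "version": "latest"
  },
  "osDisk": {
    "createOption": "FromImage",
    "caching": "ReadWrite",
    "managedDisk": {
      "storageAccountType": "Standard_LRS"
    }
  },
  "dataDisks": [
    {
      "lun": 1,
      "createOption": "Empty",
      "diskSizeGB": 1023,
      "managedDisk": {
        "storageAccountType": "Standard_LRS"
      }
    }
  ]
}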

What are Resource Disks in Microsoft Azure?

What is the definition of a resource disk in Microsoft Azure? I see the term used in the response to my REST call to get the available Virtual Machine sizes. The documentation says that "ResourceDiskSizeInMB" specifies the size in MB of the temporary or resource disk. However I don't see the term resource disk being used anywhere else.
Does this mean that resource disks are the same as temporary disks?
Does this mean that resource disks are the same as temporary disks?
You are right: when we list all available virtual machine sizes for a subscription in a location, we get "resourceDiskSizeInMb", which is the local SSD (temporary disk). We can find it on this page.
{
  "maxDataDiskCount": 2,
  "memoryInMb": 3584,
  "name": "Standard_D1_v2",
  "numberOfCores": 1,
  "osDiskSizeInMb": 1047552,
  "resourceDiskSizeInMb": 51200
},
The term Resource Disk is used in /etc/waagent.conf. You can specify a mount point along with filesystem type and a few other options.
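For reference, a minimal sketch of the relevant waagent.conf settings (the values shown are illustrative, not a recommendation):
# Format the resource (temporary) disk and mount it at the given path
ResourceDisk.Format=y
ResourceDisk.Filesystem=ext4
ResourceDisk.MountPoint=/mnt/resource
# Optionally back a swap file with the resource disk
ResourceDisk.EnableSwap=n
ResourceDisk.SwapSizeMB=0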
