I'm trying to deploy Service Fabric with data disks, so I added a data disk configuration to my deployment template:
"dataDisks":[{
"lun": 1,
"createOption": "Empty",
"diskSizeGB": 1023
}
]
And I got this error:
Activity: Write-Error
Message: ==================
3:46:16 PM - Resource Microsoft.Compute/virtualMachineScaleSets 'inode' failed with message '{
"error": {
"code": "OperationNotAllowed",
"target": "dataDisk",
"message": "Addition of a managed disk to a VM with blob based disks is not supported."
}
}'
Is it even possible to add a data disk to a Service Fabric scale set? I'm looking for a template example.
Change your cluster VM template definition so that it uses Managed Disks.
Then you can attach managed data disks too.
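For illustration, a rough sketch of what the scale set's storageProfile could look like once it is fully managed; the imageReference values below are placeholders, not taken from your template:

"storageProfile": {
  "imageReference": {
    // placeholder image; use your cluster's actual image
    "publisher": "MicrosoftWindowsServer",
    "offer": "WindowsServer",
    "sku": "2016-Datacenter",
    "version": "latest"
  },
  "osDisk": {
    "createOption": "FromImage",
    "caching": "ReadOnly",
    "managedDisk": { "storageAccountType": "Standard_LRS" }
  },
  "dataDisks": [
    {
      "lun": 1,
      "createOption": "Empty",
      "diskSizeGB": 1023,
      "managedDisk": { "storageAccountType": "Standard_LRS" }
    }
  ]
}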
I am currently deploying some new Azure VMs using a template. This template contains a link to a VHD image and uses availability sets.
After having a look at the Azure docs, I cannot tell whether it's possible to use my current procedure to deploy the VMs into a specific zone.
I changed my template to use zones rather than sets, but when I use it with the Azure CLI this error message is returned:
"Virtual Machines deployed to an Availability Zone must use managed disks."
I then tried to add the managed disk section to the template, without success.
Below is the pseudocode of the storage-related part of the template:
"storageProfile": {
"osDisk": {
"managedDisk": {
"storageAccountType": "StandardSSD_LRS"
},
"osType": "Linux",
"name": "myName.vhd",
"createOption": "FromImage",
"image": {
"uri": "myUri.vhd"
},
"vhd": {
"uri": "myVhdImageUri.vhd"
},
"caching": "ReadWrite"
}
}
You have to convert the disk to a managed disk first. Then you will be able to use it in your template.
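A VM definition cannot mix a managedDisk block with vhd or image blob URIs. As a hedged sketch of the fix: first turn the VHD into a managed image (for example with az image create, pointing --source at the VHD blob; the image name myImage below is a placeholder), then reference that image and drop the blob URIs entirely:

"storageProfile": {
  "imageReference": {
    // placeholder managed image created from the VHD beforehand
    "id": "[resourceId('Microsoft.Compute/images', 'myImage')]"
  },
  "osDisk": {
    "osType": "Linux",
    "createOption": "FromImage",
    "caching": "ReadWrite",
    "managedDisk": {
      "storageAccountType": "StandardSSD_LRS"
    }
  }
}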
I created a Cosmos DB account, database, and containers using this ARM template. Deployment via an Azure DevOps release pipeline is working.
I used this ARM template to adjust the database throughput. It is also in a Release pipeline and is working.
Currently the throughput is provisioned at the database level and shared across all containers. How do I provision throughput at the container level? I tried running this ARM template to update throughput at the container level. It appears that once shared throughput is provisioned at the database level there's no way to provision throughput at the container level.
I found this reference document but throughput is not listed. Am I missing something super obvious or is the desired functionality not implemented yet?
UPDATE:
When attempting to update the container with the above template I get the following:
2019-05-29T20:25:10.5166366Z There were errors in your deployment. Error code: DeploymentFailed.
2019-05-29T20:25:10.5236514Z ##[error]At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details.
2019-05-29T20:25:10.5246027Z ##[error]Details:
2019-05-29T20:25:10.5246412Z ##[error]NotFound: {
"code": "NotFound",
"message": "Entity with the specified id does not exist in the system.\r\nActivityId: 7ba84...b52b2, Microsoft.Azure.Documents.Common/2.4.0.0"
} undefined
2019-05-29T20:25:10.5246730Z ##[error]Task failed while creating or updating the template deployment.
I was also experiencing the same error:
"code": "NotFound",
"message": "Entity with the specified id does not exist in the system.
I was deploying an ARM template via DevOps pipeline to change the configuration of an existing resource in Azure.
The existing resource had dedicated throughput defined at the container/collection level, and my ARM template was trying to define the throughput at the database level...
Once adjusted, my deployment pipeline worked.
Here is some info on my throughput provisioning fix: https://github.com/MicrosoftDocs/azure-docs/issues/30853
I believe you have to create the container with dedicated throughput first. I have not seen any documentation for changing a container from shared to dedicated throughput. In the Microsoft documentation, the example creates containers with both shared and dedicated throughput:
Set throughput on a database and a container
You can combine the two models. Provisioning throughput on both the database and the container is allowed. The following example shows how to provision throughput on an Azure Cosmos database and a container:
You can create an Azure Cosmos database named Z with provisioned throughput of "K" RUs.
Next, create five containers named A, B, C, D, and E within the database. When creating container B, make sure to enable the Provision dedicated throughput for this container option and explicitly configure "P" RUs of provisioned throughput on this container. Note that you can configure shared and dedicated throughput only when creating the database and container.
The "K" RUs throughput is shared across the four containers A, C, D, and E. The exact amount of throughput available to A, C, D, or E varies. There are no SLAs for each individual container’s throughput.
The container named B is guaranteed to get the "P" RUs throughput all the time. It's backed by SLAs.
There is a prerequisite ARM template in a subfolder of the 101-cosmosdb-sql-container-ru-update sample. In the prereq version, the container has the throughput property set when it is created. After the container is created with dedicated throughput, the update template works without error. I have tried it out and verified that it works.
{
  "type": "Microsoft.DocumentDB/databaseAccounts/apis/databases",
  "name": "[concat(variables('accountName'), '/sql/', variables('databaseName'))]",
  "apiVersion": "2016-03-31",
  "dependsOn": [ "[resourceId('Microsoft.DocumentDB/databaseAccounts/', variables('accountName'))]" ],
  "properties": {
    "resource": {
      "id": "[variables('databaseName')]"
    },
    "options": { "throughput": "[variables('databaseThroughput')]" }
  }
},
{
  "type": "Microsoft.DocumentDB/databaseAccounts/apis/databases/containers",
  "name": "[concat(variables('accountName'), '/sql/', variables('databaseName'), '/', variables('containerName'))]",
  "apiVersion": "2016-03-31",
  "dependsOn": [ "[resourceId('Microsoft.DocumentDB/databaseAccounts/apis/databases', variables('accountName'), 'sql', variables('databaseName'))]" ],
  "properties": {
    "resource": {
      "id": "[variables('containerName')]",
      "partitionKey": {
        "paths": [
          "/MyPartitionKey1"
        ],
        "kind": "Hash"
      }
    },
    "options": { "throughput": "[variables('containerThroughput')]" }
  }
}
I created an image from a Linux VM and all went well until I tried to deploy. I got the following error:
Deployment failed. Correlation ID: 52a94279-233b-45c1-96c4-8c9f3d5d95bc. {
"error": {
"code": "InvalidParameter",
"message": "StorageProfile.dataDisks.lun does not have required value(s) for image specified in storage profile.",
"target": "storageProfile"
}
}
Any ideas?
When you create the Linux VM and add data disks to it in the template, the lun property of each disk object is required, and you must supply an integer value:
Specifies the logical unit number of the data disk. This value is used to identify data disks within the VM and therefore must be unique for each data disk attached to a VM.
For more details, see Datadisk object in the template.
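Since the error points at an image in the storage profile, the image most likely contains data disks, and each of them has to be listed with its lun when you create the VM. A minimal sketch, assuming the captured image has a single data disk at LUN 0 (the image name myLinuxImage is a placeholder):

"storageProfile": {
  "imageReference": {
    // placeholder; the managed image captured from your Linux VM
    "id": "[resourceId('Microsoft.Compute/images', 'myLinuxImage')]"
  },
  "osDisk": {
    "createOption": "FromImage"
  },
  "dataDisks": [
    {
      // one entry per data disk captured in the image
      "lun": 0,
      "createOption": "FromImage"
    }
  ]
}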
After following the instructions for creating a managed image in Azure, I'm trying to create a VM from the managed image inside an ARM template. The ARM template requires a source blob URI, which should be listed on the VM image page within the Azure portal, but it's blank.
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/capture-image-resource
Did I miss a step somewhere?
Yes, to create a VM from the managed disk image you need its resource ID, not its URI (because it doesn't have one). Here's an ARM template snippet to create a VM from the managed disk image:
"storageProfile": {
"imageReference": {
"id": "[resourceId('Microsoft.Compute/images', concat(parameters('vmPrefix'), '-gateway-osImage'))]"
},
"osDisk": {
"name": "[concat(parameters('vmPrefix'), '-gateway-os-vhd')]",
"createOption": "FromImage"
}
},
I'm trying to attach an existing VHD disk from a Storage Account to a VM during Azure Resource Manager provisioning with a template.
My dataDisk resource is:
"dataDisks": [
{
"name": "jmdisk",
"diskSizeGB": "100",
"lun": 0,
"vhd": {
"uri": "https://jmje.blob.core.windows.net/vhds/jenkinshome.vhd"
},
"createOption": "attach"
}
]
But during deployment I get an error from Azure:
STATUS MESSAGE:
{
"error": {
"code": "OperationNotAllowed",
"target": "dataDisk",
"message": "Addition of a blob based disk to VM with managed disks is not supported."
}
}
Unfortunately I can't google anything related, i.e. a correct way to attach an existing disk.
UPDATE: Solved this by just creating a new managed disk and copying the data there.
You can create a managed disk from an existing blob -- you can see a sample of that here: https://github.com/chagarw/MDPP/blob/master/101-create-image-availabilityset-2vm-from-blob/azuredeploy.json
It uses existing blobs for both the OS and data disks, but you don't have to do it that way... In your case it sounds like you want an implicit OS disk and then an explicit data disk? You could do that too; just use different images for each.
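As a sketch of that approach, assuming the blob stays in the same subscription (the disk name jmdisk-managed is a placeholder), you can import the blob into a managed disk resource and then attach it by resource ID:

{
  "type": "Microsoft.Compute/disks",
  "apiVersion": "2018-06-01",
  "name": "jmdisk-managed",
  "location": "[resourceGroup().location]",
  "properties": {
    "creationData": {
      // Import copies the existing blob into a managed disk
      "createOption": "Import",
      "sourceUri": "https://jmje.blob.core.windows.net/vhds/jenkinshome.vhd"
    },
    "diskSizeGB": 100
  }
}

Then the VM's dataDisks entry references the managed disk instead of the blob:

"dataDisks": [
  {
    "lun": 0,
    "createOption": "Attach",
    "managedDisk": {
      "id": "[resourceId('Microsoft.Compute/disks', 'jmdisk-managed')]"
    }
  }
]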
Well, the error gives it away: you are probably not familiar with Managed Disks yet. You are creating a VM with a managed OS disk, and in that case you cannot attach existing blob-based disks to it. Just create the VM with a regular (blob-based) OS disk, the same way you define the data disk.
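A minimal sketch of what a blob-based (unmanaged) OS disk could look like, so it matches the blob-based data disk; the image reference and VHD URI below are placeholders:

"storageProfile": {
  "imageReference": {
    // placeholder marketplace image; use whatever image you deploy from
    "publisher": "Canonical",
    "offer": "UbuntuServer",
    "sku": "16.04-LTS",
    "version": "latest"
  },
  "osDisk": {
    "name": "osdisk",
    "vhd": {
      // destination blob for the unmanaged OS disk
      "uri": "https://jmje.blob.core.windows.net/vhds/osdisk.vhd"
    },
    "caching": "ReadWrite",
    "createOption": "FromImage"
  }
}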