I created an image from a Linux VM and all went well until I tried to deploy it. I got the following error:
Deployment failed. Correlation ID: 52a94279-233b-45c1-96c4-8c9f3d5d95bc. {
"error": {
"code": "InvalidParameter",
"message": "StorageProfile.dataDisks.lun does not have required value(s) for image specified in storage profile.",
"target": "storageProfile"
}
}
Any ideas?
When you create the Linux VM and add a data disk to it in the template, the data disk object's lun is required and you must supply an integer value.
Specifies the logical unit number of the data disk. This value is used
to identify data disks within the VM and therefore must be unique for
each data disk attached to a VM
For more details, see the DataDisk object in the template reference.
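As an illustration, here is a minimal sketch of how a data disk entry might look in the template's storageProfile when deploying from the image; the parameter name, disk name, and caching setting are placeholders of my own:
"storageProfile": {
  "imageReference": {
    "id": "[parameters('imageId')]"
  },
  "dataDisks": [
    {
      "lun": 0,
      "name": "datadisk0",
      "createOption": "FromImage",
      "caching": "ReadOnly"
    }
  ]
}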
I was trying to deploy a Flatcar image on Azure, but I am not able to deploy it. Following are the steps I performed:
I downloaded the latest Azure-supported VHD from https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_azure_image.vhd.bz2.
I uploaded this VHD to an Azure storage blob and converted it to an image as recommended by the Azure guides.
I tried creating a VM from this image. The VM gets created successfully, but an error is shown during creation and the deployment is reported as failed (even though it actually succeeds). Following is the error I see:
{
"code": "DeploymentFailed",
"message": "At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.",
"details": [
{
"code": "VMExtensionHandlerNonTransientError",
"message": "The handler for VM extension type 'Microsoft.Azure.Diagnostics.LinuxDiagnostic' has reported terminal failure for VM extension 'LinuxDiagnostic' with error message: '[ExtensionOperationError] Non-zero exit code: 1, /var/lib/waagent/Microsoft.Azure.Diagnostics.LinuxDiagnostic-3.0.141/diagnostic.py -install\n[stdout]\n\n\n[stderr]\n File \"/var/lib/waagent/Microsoft.Azure.Diagnostics.LinuxDiagnostic-3.0.141/diagnostic.py\", line 54\n print 'A local import (e.g., waagent) failed. Exception: {0}\\n' \\\n ^\nSyntaxError: invalid syntax\n'.\r\n \r\n'Install handler failed for the extension. More information on troubleshooting is available at https://aka.ms/VMExtensionLinuxDiagnosticsTroubleshoot'"
}
]
}
I tried going through the link provided, but it didn't help much.
I also tried another approach:
Deployed a Flatcar VM through the Azure Marketplace
Captured a generalized image from this VM
Deployed a VM using the image created in the above step
Even with this approach I am getting the same error.
For now, waagent (the Azure Linux agent) does not support Python 3.x, hence this syntax error. You need to have Python 2.x on your OS to avoid this issue.
I am using the following command to create an Azure Batch pool. Please note I am using a custom image. Also please note that I have authenticated Batch with Active Directory:
az batch pool create --json-file pool.json
The pool.json file looks like the following:
{
"id": "WEPool004",
"vmSize": "Standard_NC6",
"virtualMachineConfiguration": {
"imageReference": {
"virtual_machine_image_id": "/subscriptions/{sub id}/resourceGroups/{resource group name}/providers/Microsoft.Compute/images/{image definition name}",
"publisher": null,
"offer": null,
"sku": null
},
"nodeAgentSKUId": "batch.node.ubuntu 18.04"
},
"targetDedicatedNodes": 1
}
The Azure CLI complains with the error:
Reason: The specified resource id must be of the format /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/galleries/{galleryName}/images/{galleryImageName}/versions/{galleryImageVersionName} or /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/galleries/{galleryName}/images/{galleryImageName}
Now this means that this page is outdated, since I have followed its format.
If I follow the format specified in the error, I need to create an image gallery and subsequently an image definition. If I do create those two and then replace the virtual_machine_image_id, the Azure CLI no longer complains, but the Pools page of the Azure portal displays the following error message:
Message:
Desired number of dedicated nodes could not be allocated
Values:
Reason - The specified image is not found
So either I encounter the error that virtual_machine_image_id is in an invalid format, or the image is simply not found. Hence, I hypothesize that I am making an error while creating the image definition and image gallery. Can anyone please point me in the right direction?
Please note that I followed this tutorial for the Azure CLI for Batch.
The mistake I made was that I did not create a snapshot. I used the ID of the image definition created from that snapshot, as specified in this tutorial. Now it's working.
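For reference, with a Shared Image Gallery in place, the imageReference block in pool.json would point at the gallery image definition, following the format from the error message above (the placeholder names are mine):
"imageReference": {
  "virtual_machine_image_id": "/subscriptions/{sub id}/resourceGroups/{resource group name}/providers/Microsoft.Compute/galleries/{gallery name}/images/{image definition name}"
}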
While moving a VM from one resource group to another, I encountered this error. There is no SQL VM associated with the VM, yet I am still getting this error:
{
**"code": "ResourceMoveProviderValidationFailed",**
"message": "Resource move validation failed. Please see details. Diagnostic information: timestamp '20200908T142742Z', subscription id 'xxx-xxx-xxxx', tracking id 'xxxxxxx-414a-xxxxx-adb4-xxxxxx', request correlation id 'xxxxxxxxxxxx'.",
"details": [
{
"code": "MissingMoveResources",
"target": "Microsoft.SqlVirtualMachine/SqlVirtualMachines",
"message": **"Cannot move resource(s) because following resources /subscriptions/xxxxxxxxx/resourceGroups/myrgroup/providers/Microsoft.SqlVirtualMachine/sqlVirtualMachines/xxxxx0020 need to be included in move request to target resource group as well. Please include these and try again.**"
}
]
}
The error code 409 MissingMoveResources is documented in the Azure SQL VM REST API documentation as:
409 MissingMoveResources - Cannot move resources(s) because some
resources are missing in the request.
So, going by the error details posted above, it does mean that the Virtual Machine you're looking at is linked to a SQL Virtual Machine. The easiest way to verify this is from the portal itself:
the presence of the SQL Server Configuration tab under the Settings blade, and
the publisher being MicrosoftSQLServer
confirm the same.
Therefore, you'd have to know the associated SQL Virtual Machine and include it in your move request as well to complete the move operation successfully. You can find the SQL VM by opening the SQL Server Configuration tab.
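If you perform the move via the Resource Manager REST API rather than the portal, the request body would need to list both resource IDs, roughly like this sketch (all IDs are placeholders):
{
  "resources": [
    "/subscriptions/{sub id}/resourceGroups/myrgroup/providers/Microsoft.Compute/virtualMachines/{vm name}",
    "/subscriptions/{sub id}/resourceGroups/myrgroup/providers/Microsoft.SqlVirtualMachine/sqlVirtualMachines/{sql vm name}"
  ],
  "targetResourceGroup": "/subscriptions/{sub id}/resourceGroups/{target resource group name}"
}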
I am developing a Logic App which is scheduled every minute and creates a storage blob with logging data. My problem is that I must create the container for the blob manually to get it working. If I create a blob within a non-existent container, I get the following error:
"body": {
"status": 404,
"message": "Specified container tmp does not exist.\r\nclientRequestId: 1111111-2222222-3333-00000-4444444444",
"source": "azureblob-we.azconn-we.p.azurewebsites.net"
}
This Stack Overflow question suggested putting the container name in the blob name, but if I do so I get the same error message (also with /tmp/log1.txt):
{
"status": 404,
"message": "Specified container tmp does not exist.\r\nclientRequestId: 1234-8998989-a93e-d87940249da8",
"source": "azureblob-we.azconn-we.p.azurewebsites.net"
}
So you may say that is not a big deal, but I have to deploy this Logic App multiple times with an ARM template, and there is no possibility to create a container in a storage account from the template (see this link).
Do I really need to create the container manually or write an extra Azure Function to check whether the container exists?
I have run into this before; you have a couple of options:
You write something that runs after the ARM template to provision the container. This is simple enough if you are provisioning through VSTS release management, where you can just add another step.
You move from ARM templates to provisioning in PowerShell, where you have the power to create the container. See New-AzureStorageContainer.
CustomData can be injected into a virtual machine at the time it is created. I am looking for a way to inject/update the custom data on an existing VM.
I am using the Java SDK for Azure, and in VirtualMachine.update I couldn't find a method to do it.
Updating or adding custom data as part of a VM update is not supported by the compute service, hence this option is not exposed in the fluent VM update flow. An attempt to change custom data through a VM update will cause the error below:
{
"error": {
"code": "PropertyChangeNotAllowed",
"target": "customData",
"message": "Changing property 'customData' is not allowed."
}
}