I am trying to upload a VHD using the syntax below:
csupload Add-PersistentVMImage –Destination "<BlobStorageURL>/<YourImagesFolder>/<VHDName>" -Label <VHDName> -LiteralPath <PathToVHDFile> -OS Windows
I am assuming the <BlobStorageURL> can be seen in the portal and is http://xx.blob.core.windows.net/, where "xx" is replaced with the real subdomain. But can someone please explain where I can create or get <YourImagesFolder> from?
Kind Regards,
Chris
The xx is the name of your storage account, and the *.blob.core.windows.net URL points to the blob storage of that account (besides blob storage your account also contains queues and table storage).
Follow this guide to create a new storage account in the new Windows Azure Portal: Create a Windows Azure Storage Account
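Most likely <YourImagesFolder> is just the name of a blob container (or virtual folder) that you create yourself inside that blob storage. With hypothetical names, the pieces would fit together roughly like this:
# Hypothetical names for illustration only:
#   storage account : mystorageaccount   (the "xx" part of the URL)
#   container       : vhds               (plays the role of <YourImagesFolder>)
#   VHD file        : myimage.vhd
$destination = "http://mystorageaccount.blob.core.windows.net/vhds/myimage.vhd"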
I have a couple of storage accounts in my Azure subscription. I know my VMs are using them because when I look at the Boot diagnostics blade for those VMs in the portal, I can see a diagnostic screenshot and a serial log (apparently the storage account is where this information is held). However, I’ve looked high and low and can’t find the setting that specifies which storage account is being used by which VM.
I also tried the PowerShell script mentioned in Powershell to List Azure VMs with storage account name. However, any field in the output that relates to a storage account is empty.
Can someone please point me in the right direction?
Thanks
If you mean the storage account for the VM diagnostics, then you can get the storage account URL with the Get-AzVM command like this:
$vm = Get-AzVM -Name vmName
$vm.DiagnosticsProfile.BootDiagnostics
This shows you the URL of the storage account that stores the VM diagnostics logs, but that URL is the only storage-related detail exposed on the VM object. If you want more details about the storage account itself, run the Get-AzStorageAccount command.
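If all you need is the storage account name, one option is to parse it out of that URI, since the account name is the first label of the blob endpoint host name. A minimal sketch (the resource group and VM names are placeholders, and the URI may be empty if boot diagnostics is not using a customer-managed storage account):
$vm  = Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM"
$uri = $vm.DiagnosticsProfile.BootDiagnostics.StorageUri
if ($uri) {
    # e.g. https://mystorageaccount.blob.core.windows.net/ -> mystorageaccount
    ([Uri]$uri).Host.Split('.')[0]
}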
I actually want to know which storage account the VMs are using.
I ran the PowerShell that you suggested, both at my workstation and in the Azure shell console. However, even though the Enabled property returned True, nothing was returned for the StorageUri property (hopefully the image that I pasted into this reply shows this).
As per the pasted image, I also ran an Az command that did return a couple of URIs (one for the boot image and one for the serial log).
However, I don't believe that this information will lead me to discover which storage account is being used. Any other suggestion you can offer will be appreciated.
We want to update the access tiers for multiple paths in ADLS Gen2, and as per our requirement we want to use the Azure CLI or Python code.
According to the Microsoft documentation, we only see portal and PowerShell examples for doing this.
Can anyone let us know whether this can be done with either of those?
Are you looking for this command -
az storage account update -g <resource-group> -n <storage-account> --set kind=StorageV2 --access-tier=<Hot/Cool>
I was able to update the access tier of Azure Data Lake Storage Gen2 using this command.
I'm not sure whether you want to change the account access tier or the blob access tier.
If you want to change the blob access tier, you can try this command:
az storage blob set-tier --account-key 00000000 --account-name MyAccount --container-name MyContainer --name MyBlob --tier Hot
The --tier value can be Archive, Cool, or Hot.
Below is a screenshot of my test; it works:
Here is the API document.
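Since the question mentions multiple paths, a rough sketch of batching that CLI command from PowerShell might look like this (the account, key, container, and file names are placeholders, and the paths are assumed to be listed one per line in paths.txt):
$account   = "mystorageaccount"
$container = "mycontainer"
$key       = "<account-key>"

Get-Content .\paths.txt | ForEach-Object {
    # Set the tier on each path in turn
    az storage blob set-tier --account-name $account --account-key $key `
        --container-name $container --name $_ --tier Cool
}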
In TFS I selected Azure VMs File Copy:
My machine is classic and I created a classic storage account. I set up the connection using username and password, not a management certificate.
I had to populate the storage account and cloud service myself, because they did not appear in the drop-down menu (so possibly something is already wrong at this stage).
In the Cloud Service I entered MyMachine.cloudapp.net.
The task starts, it seems to login successfully, but throws:
Unable to find type [Hyak.Common.CloudException]
Log:
2017-11-24T14:21:28.80333Z Add-AzureAccount -Credential $psCredential
2017-11-24T14:21:35.866333Z Select-AzureSubscription -SubscriptionId -Default
2017-11-24T14:21:35.882333Z Set-AzureSubscription -SubscriptionId yy -CurrentStorageAccountName yyy
2017-11-24T14:21:35.898333Z ##[debug]Starting Azure File Copy Task
2017-11-24T14:21:35.898333Z ##[debug]connectedServiceNameSelector = ConnectedServiceName
2017-11-24T14:21:35.898333Z ##[debug]connectedServiceName = yyyyyy
(..)
2017-11-24T14:21:35.991333Z ##[debug]Loading AzureUtilityLTE9.8.ps1
2017-11-24T14:21:36.007333Z ##[debug]Connection type used is UsernamePassword
2017-11-24T14:21:36.022333Z ##[debug]Azure Call: Retrieving storage key for the storage account: mystorageaccount
2017-11-24T14:21:38.924333Z ##[error]Unable to find type [Hyak.Common.CloudException].
Please help.
Actually, you don't need to type the storage account manually; it should appear automatically in the drop-down list. You just need to specify a pre-existing classic storage account, which is also used as an intermediary for copying files to Azure VMs.
Classic Storage Account: Required if you select Azure Classic for the Azure Connection Type parameter. The name of an existing storage account within the Azure subscription.
According to your log, the issue may be related to the storage account setting. Double-check this configuration under your Azure subscription.
I also suggest you go through this documentation to get more info on the Azure File Copy task, such as making sure the target machine is configured to allow WinRM connections.
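If you want to double-check the classic storage account from PowerShell before re-running the task, a quick sanity check with the classic (ASM) cmdlets could look like this (the subscription and account names are placeholders):
# Sign in and select the same subscription the service connection uses
Add-AzureAccount
Select-AzureSubscription -SubscriptionName "MySubscription" -Default
# Confirm the classic storage account is visible under that subscription
Get-AzureStorageAccount -StorageAccountName "mystorageaccount"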
I am trying to create a template deployment from a custom VM image using the Azure Resource Management library for .NET. The goal can be achieved by creating a resource group and deploying the needed resources (using the template file) with the aforementioned library. There is a requirement that, upon deleting the resource group, I also need to delete the VHD that is created during VM creation in that resource group. But if I delete the resource group, the VHD file is not deleted, because it is created in a different resource group (hence a different storage account, where the VM image exists): the custom VM image needs to be present, at creation time, in the very same storage account that will host the virtual machine's VHD, and I can't delete the storage account containing the custom image. So, is there a way to copy the custom image (VHD) from a storage account to my newly created resource group's storage account using the Resource Management library for .NET?
Or are there any other workarounds to delete the VHD of the created VM without deleting the custom VM image?
Use Microsoft Azure Storage Explorer.
Copy/paste the VHD from Storage Account A\Blob Containers\uploads to Storage Account B\Blob Containers\uploads.
There isn't any way to copy a blob during a template deployment; currently the storage resource provider doesn't support data plane operations during template deployments. You could break it into multiple deployments, for example:
Deploy the storage account for the new VM into the new resource group
Run code to copy the custom image VHD to the new storage account in the new resource group (see the sketch after this list)
Deploy the VM to the new resource group
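For the copy step, a server-side blob copy with the storage cmdlets is one option. A rough sketch (the account names, keys, container names, and blob names are placeholders):
# Contexts for the source account (holds the custom image) and the new VM's account
$srcCtx  = New-AzureStorageContext -StorageAccountName "imagesaccount" -StorageAccountKey "<source-key>"
$destCtx = New-AzureStorageContext -StorageAccountName "newvmaccount" -StorageAccountKey "<destination-key>"

# Start an asynchronous server-side copy of the custom image VHD
Start-AzureStorageBlobCopy -Context $srcCtx -SrcContainer "images" -SrcBlob "customimage.vhd" `
    -DestContext $destCtx -DestContainer "vhds" -DestBlob "customimage.vhd"

# Optionally block until the copy completes
Get-AzureStorageBlobCopyState -Context $destCtx -Container "vhds" -Blob "customimage.vhd" -WaitForComplete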
But if you're doing this through code, likely the simplest approach is to grab the URI of the OS disk's VHD before you delete the resource group that holds the VM. Then, after you delete that RG, delete that blob. This PowerShell code will give you the URI of the VHD blob for a VM (the .NET SDK will be similar).
(Get-AzureRmVm -ResourceGroupName [name of RG]).StorageProfile.OsDisk.Vhd.Uri
If you have more than one VM in the RG, then you'll get an array and you can iterate through it.
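Putting the whole sequence together in PowerShell (the .NET SDK calls map onto the same steps), a sketch might look like this; the resource group, VM, and blob names are placeholders:
# Capture the OS disk VHD URI before deleting the resource group
$vhdUri = (Get-AzureRmVm -ResourceGroupName "myVmRg" -Name "myVm").StorageProfile.OsDisk.Vhd.Uri

# Delete the resource group that holds the VM
Remove-AzureRmResourceGroup -Name "myVmRg" -Force

# Parse the URI and delete the orphaned VHD blob in the image's storage account
$uri         = [Uri]$vhdUri
$accountName = $uri.Host.Split('.')[0]
$container   = $uri.Segments[1].Trim('/')
$blobName    = ($uri.Segments | Select-Object -Skip 2) -join ''
$account     = Get-AzureRmStorageAccount | Where-Object { $_.StorageAccountName -eq $accountName }
Remove-AzureStorageBlob -Context $account.Context -Container $container -Blob $blobName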
Another possible workaround is to download the VHD file from the storage account to your local file system, then delete the container or the storage account, whichever you want. Then push the VHD file from the local system to the Azure storage account you newly created, using:
Add-AzureRmVhd -ResourceGroupName '<resource group name>' -Destination 'newstorageaccounturi' -LocalFilePath 'C:\users....\' -NumberOfUploaderThreads 5
Although I do not understand why you want to delete the VHD of this created VM - do you want to attach a different disk to it?
Is it possible to set up a custom domain for an Azure Resource Manager (ARM) storage account using Azure PowerShell? If so, how?
I tried to set up a custom domain through the Azure Preview Web Portal but that functionality does not yet exist for the new resource manager storage accounts.
Using this documentation, I am able to log in and see the properties of my new RM storage account, but I am unsure how to update the CustomDomain property. I expected to find an example or documentation of how this worked with the old storage accounts, but I have not found anything.
I have found a solution that worked for us: you can use the Set-AzureRmStorageAccount command to set properties on an existing storage account. Not sure how I missed this one.
Set-AzureRmStorageAccount -ResourceGroupName "<YOUR RESOURCE GROUPNAME>" -Name "<YOUR STORAGE ACCOUNT NAME>" -CustomDomainName <YOUR.CUSTOM.DOMAIN> -UseSubDomain $true
In case, like me, you get ResourceGroupNotFound, run the following command first to select your subscription (you can find your subscription ID in the Azure Portal):
Select-AzureRmSubscription -SubscriptionId <YourSubscriptionID>
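To confirm the change took effect, you can read the property back afterwards, for example (same placeholder names as above):
(Get-AzureRmStorageAccount -ResourceGroupName "<YOUR RESOURCE GROUPNAME>" -Name "<YOUR STORAGE ACCOUNT NAME>").CustomDomain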