Azure Kubernetes: change pod disk type

I need to change my Azure Kubernetes pod disk type from Premium SSD to Standard SSD, but the disk contains some data. Can I change the type directly, or do I need to migrate the data first? Thanks.
Ideally, the disk type changes and the old data still exists.

To change the pod's disk type from Premium to Standard, create a new pod (and persistent volume) with the Standard disk type, migrate/transfer the data across, and finally delete the old Premium SSD pod; a sketch follows the link below.
See more details regarding Azure volumes and persistent storage in AKS:
https://learn.microsoft.com/en-us/azure/aks/concepts-storage#persistent-volumes
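
A minimal sketch of that migrate-and-replace flow, assuming the existing claim is named data-premium and the AKS built-in managed-csi StorageClass (Standard SSD) is available; all names and sizes here are illustrative. Scale down the workload that uses data-premium first, since an Azure Disk volume can only be attached to one pod/node at a time.

# 1. Create a new PVC backed by Standard SSD.
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-standard
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: managed-csi        # AKS built-in Standard SSD class
  resources:
    requests:
      storage: 64Gi                    # match or exceed the old disk size
EOF

# 2. Run a throwaway pod that mounts both claims and copy the data across.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: disk-copy
spec:
  containers:
  - name: copy
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: src
      mountPath: /src
    - name: dst
      mountPath: /dst
  volumes:
  - name: src
    persistentVolumeClaim:
      claimName: data-premium
  - name: dst
    persistentVolumeClaim:
      claimName: data-standard
EOF
kubectl wait --for=condition=Ready pod/disk-copy --timeout=5m
kubectl exec disk-copy -- sh -c "cp -a /src/. /dst/"

# 3. Point the workload at data-standard, then clean up. Deleting the old claim
#    releases or deletes the Premium disk according to its reclaim policy.
kubectl delete pod disk-copy
kubectl delete pvc data-premium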

Related

Not able to extend Azure SQL VM data disk size from 4 TB to 8 TB using Terraform

I am not able to extend an Azure SQL VM data disk from 4 TB to 8 TB using Terraform.
It gives me the error:
Error updating managed disk: disks can not be resized beyond 4TB when attached to VM
Note: this happens even though the VM is stopped (deallocated).
As per the official documentation, you can now resize your managed disks without deallocating your VM.
To register for the feature, use the following command:
Register-AzProviderFeature -FeatureName "LiveResize" -ProviderNamespace "Microsoft.Compute"
It may take a few minutes for the registration to complete. To confirm that you've registered, use the following command:
Get-AzProviderFeature -FeatureName "LiveResize" -ProviderNamespace "Microsoft.Compute"
Note: the new size should be greater than the existing disk size. The maximum allowed is 4,095 GB for OS disks. (It's possible to expand the VHD blob beyond that size, but the OS works only with the first 4,095 GB of space.)
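
If the error persists for a data disk that is attached to the VM, one common workaround is to detach the disk, resize it while detached, and reattach it. A hedged CLI sketch; the resource group, VM, and disk names are purely illustrative:

# Detach the data disk from the SQL VM.
az vm disk detach -g myRG --vm-name mySqlVm -n myDataDisk
# Grow the detached managed disk to 8 TB.
az disk update -g myRG -n myDataDisk --size-gb 8192
# Reattach it to the VM.
az vm disk attach -g myRG --vm-name mySqlVm --name myDataDisk

Afterwards, update the disk size in the Terraform configuration to match, so the state stays consistent.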

How to delete attached data disk after the VM is deleted in Terraform?

I am using Terraform to create on-demand Azure infrastructure for my users. The infrastructure includes a VM and a 256 GB data disk. At the end of the day, terraform destroy will delete the VM, but the data disk is not deleted. Is there a way to delete the attached disk on the destroy command?
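
As a stop-gap, leftover disks can also be cleaned up from the CLI after the destroy run. A minimal sketch, assuming the disks live in resource group myRG (the group and disk names are illustrative):

# List managed disks that are no longer attached to any VM.
az disk list -g myRG --query "[?diskState=='Unattached'].name" -o tsv
# Delete a specific orphaned data disk.
az disk delete -g myRG -n myvm-datadisk-0 --yes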

Update cache setting on a "live" data disk for an Azure VM using Azure CLI/bash

If I want to update host-cache settings on a live data disk, what is the approach?
Should I do a script where I first take a snapshot and then create a new disk from that with the new host-cache settings?
Is there any other way of doing this with the Azure CLI/bash?
You could try using az vm update to update the disk caching on a VM:
az vm update -n name -g group --disk-caching os=ReadWrite
Use a single value to apply it to all disks, or specify individual disks; e.g. os=ReadWrite 0=None 1=ReadOnly updates the OS disk and two data disks (see the example below).
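
A concrete, hedged invocation of the per-disk form (the VM and resource group names are illustrative):

# ReadWrite caching on the OS disk, no caching on data disk LUN 0, ReadOnly caching on data disk LUN 1.
az vm update -n myVm -g myRG --disk-caching os=ReadWrite 0=None 1=ReadOnly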

Unable to start Azure VM after size change

I have an Azure VM (Windows Server 2012 R2 with SQL Server).
Since I changed the size I cannot start the VM. When I try to start it, I get the following error:
Provisioning state Provisioning failed. One or more errors occurred while preparing VM disks. See disk instance view for details.. DiskProcessingError
DISKS
MyVM_OsDisk_1_47aaea403b8948fb8d0e3ba0e81e2fas Provisioning failed. Requested operation cannot be performed because storage account type 'Premium_LRS' is not supported for VM size 'Standard_D2_v3'.. VMSizeDoesntSupportPremiumStorage
MyVM_disk2_ccc04be996a5471688d357bf6f955fab Provisioning failed. Requested operation cannot be performed because storage account type 'Premium_LRS' is not supported for VM size 'Standard_D2_v3'.. VMSizeDoesntSupportPremiumStorage
What is the problem and how can I solve it, please?
Thanks!
As the error details show, this is because Premium disks are not supported for the D2_v3 VM size.
Solution:
If you want to keep using Premium SSD disks for your VM, resize the VM to a size that supports premium storage, such as the DS-series, DSv2-series, GS-series, Ls-series, or Fs-series.
If you don't mind using Standard HDD disks but want to keep the D2_v3 VM size, you can change the disk type to Standard (if your disks are managed):
Deallocate your VM > Disks > choose the disk > change the Account type to Standard > Save
Additionally, this assumes your disks are managed. If they are not, you'd better resize your VM rather than change back to Standard disks. A CLI sketch of both routes follows.
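
A hedged CLI sketch of the two options, reusing the disk names from the error above; the resource group name and the DS-series size are illustrative:

# Option A: keep the D2_v3 size and convert both managed disks to Standard HDD.
az vm deallocate -g myRG -n MyVM
az disk update -g myRG -n MyVM_OsDisk_1_47aaea403b8948fb8d0e3ba0e81e2fas --sku Standard_LRS
az disk update -g myRG -n MyVM_disk2_ccc04be996a5471688d357bf6f955fab --sku Standard_LRS
az vm start -g myRG -n MyVM

# Option B: keep the Premium disks and move to a premium-capable size instead.
az vm deallocate -g myRG -n MyVM
az vm resize -g myRG -n MyVM --size Standard_DS2_v2
az vm start -g myRG -n MyVM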

Dynamically created volumes from Kubernetes not being auto-deleted on Azure

I have a question about Kubernetes and the default reclaim behavior of dynamically provisioned volumes. The reclaim policy is "Delete" for dynamically created volumes in Azure, but after the persistent volume claim and persistent volume have been deleted using kubectl, the page blob for the VHD still exists and is not going away.
This is an issue because every time I restart the cluster, I get a new 1 GiB page blob I now have to pay for, and the old, unused one does not go away. They show up as unleased in the portal and I am able to delete them manually in the storage account; however, they will not delete themselves. According to "kubectl get pv" and "kubectl get pvc," they do not exist.
According to all the documentation I can find, they should go away upon deletion using "kubectl":
http://blog.kubernetes.io/2016/10/dynamic-provisioning-and-storage-in-kubernetes.html
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming
Any help on this issue would be much appreciated.
EDIT: I have found that this issue appears only when you delete the persistent volume before you delete the persistent volume claim. I know that is not the intended behavior, but it should either be fixed or throw an error.
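
Given that, a hedged workaround sketch (the claim and volume names are illustrative): delete only the claim and let the Delete reclaim policy remove the bound volume and its underlying page blob:

# Confirm the dynamically provisioned PV carries the Delete reclaim policy.
kubectl get pv pvc-1234abcd-0000-0000-0000-000000000000 -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'
# Delete only the claim; the bound PV and its Azure VHD/page blob should then be cleaned up automatically.
kubectl delete pvc my-claim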
