Default snapshot location for Velero - Azure

I want to use Velero with my Azure Kubernetes cluster to back up cluster data and persistent volumes.
As the docs say, I have annotated the pods, and the backup job even shows 4 snapshots successful.
I managed to take the backup for the cluster and I can see it in my Azure storage account. The problem is that I see only gz files and one JSON file in my storage account's Velero-designated container. Shouldn't I see files equivalent to my PVs (which are about 10 GB)?

This is in fact the correct setup. You should see only JSON files and gzipped files in the backup folder within the Velero container.
These files contain pointers to the actual snapshots in Azure. Look for the snapshots within the resource group you specified when configuring the backup; there should be snapshots corresponding to your PVC sizes.
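For example, assuming the Azure CLI is available, a quick way to confirm this is to list the snapshots in that resource group (the group name below is a placeholder; for AKS it is typically the MC_* node resource group that was given to Velero at install time):
az snapshot list --resource-group MC_myResourceGroup_myAKSCluster_eastus --output table
Each snapshot listed there should roughly match the size of the corresponding PV's disk.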

Related

How to add a filter to a container for deleting blobs, except some blobs in a virtual folder?

I have a set of folders in a container named records (Azure storage account). In general, whatever blobs (folders) are present in the records container will be deleted per the lifecycle management rule.
Rule: if a blob exists for more than 30 days, then the blob is deleted.
But in my case, all blobs (folders) should be deleted except one blob (folder), where the blob (folder) name is Backup, in the container.
Is there any way to add a rule that avoids deleting a particular blob (in my case it is a folder)?
So the Backup folder shouldn't be deleted when the existing rule runs.
Create a lease for the particular blob, using the Azure portal for example. A lease prevents processes from doing anything with the blob, and this includes lifecycle management rules.
You can also acquire or break a lease using the REST API or one of the many storage SDKs.
Another option would be to not use lifecycle management rules at all, but instead write a scheduled Azure Function that deletes blobs older than 30 days except the ones having Backup in their name.
Please do note: if you have enabled "Hierarchical namespace" then you have the concept of directories, but those cannot be leased. If you did not, realize that folders are a virtual construct and cannot be leased themselves; the items inside them are actually individual blobs. See the docs. So in that case you have to take a lease on each blob individually, or write a script that does it once, as sketched below.
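A rough sketch of that lease approach with the Azure CLI (account and container names are placeholders, and the loop assumes a flat namespace where the "folder" is just a blob name prefix):
# Acquire an infinite lease on every blob under the virtual Backup/ folder
for blob in $(az storage blob list --account-name mystorageaccount --container-name records --prefix "Backup/" --query "[].name" -o tsv); do
  az storage blob lease acquire --account-name mystorageaccount --container-name records --blob-name "$blob" --lease-duration -1
done
A lease acquired with --lease-duration -1 never expires, so per the note above the lifecycle rule should keep failing to delete those blobs until the lease is broken or released.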

How to copy an Azure disk from one location to another just by using Terraform

The challenge is to find a way to copy the OS disk and data disk from a VM located in one location to another location and, of course, spawn a new virtual machine there.
So far I found the disks-upload-vhd-to-managed-disk-cli article and was able to copy a disk between different locations by using the azcopy utility and creating SAS URI links.
As I use Terraform everywhere, I'd rather not use external tools for such a job.
I already tried to abuse azurerm_managed_disk to make a copy of my disk to another location, but it seems that it's not possible; those disks need to be in the same place.
So maybe some of you have an idea how to make such a copy of the disks (or the entire VM) to a different location in a purely Terraform way, and of course I don't mean using local-exec to wrap azcopy in it :)
Best Regards.
To copy a managed disk to another region, apart from the AzCopy command, you can only copy the disk with a generated SAS URL into a storage page blob in another region, and then create a managed disk from the VHD file in that storage blob.
The steps here:
export the disk that you want to copy, which will generate a SAS URL;
create a storage page blob from that SAS URL in a storage container located in the target region;
create a managed disk from the page blob in that same region.
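A rough Azure CLI sketch of those three steps (all resource names, regions, and URLs below are placeholders):
# 1. Export the source managed disk and obtain a temporary read-only SAS URL
az disk grant-access --resource-group source-rg --name source-disk --access-level Read --duration-in-seconds 3600 --query accessSas -o tsv
# 2. Copy the exported VHD into a page blob in a storage account that lives in the target region
az storage blob copy start --account-name targetstorageacct --destination-container vhds --destination-blob source-disk.vhd --source-uri "<SAS URL from step 1>"
# 3. Once the copy completes, create a managed disk in the target region from that page blob
az disk create --resource-group target-rg --name source-disk-copy --location westeurope --source https://targetstorageacct.blob.core.windows.net/vhds/source-disk.vhd
The new managed disk can then be attached to a VM created in the target region.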

Update Snapshot Location Velero Azure

I currently have Velero up and running and it's working great. The only issue I have is that the snapshots of the volumes are being created in the same region as the originals, which kind of defeats the purpose of disaster recovery. This flag
--snapshot-location-config
doesn't have an arg for region. I know there is a config for the default snapshot location
volumesnapshotlocations.velero.io "default"
Does anyone know how to modify the default so I can get my snapshots into new regions?
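For reference, that default location object can be inspected or edited with kubectl, assuming Velero is installed in the usual velero namespace:
kubectl -n velero get volumesnapshotlocations.velero.io default -o yaml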
Snapshot creation from the main region into a different region is not supported.
Azure zone-redundant snapshots and images for managed disks have a decent 99.9999999999% (12 9's) durability. The availability zones in a region are physically separated, so even if an outage affects one AZ, you can still access your data from a redundant AZ.
However, if you fear calamities that can affect several square kilometers (multiple zones in a region), you can manually move the snapshots to a different region, or even automate the process. Here is a guide to do it.
--snapshot-location-config doesn't have arg for region
--snapshot-location-config doesn't create the storage; you must do so yourself. You can specify a different region, a different Azure subscription, or even a different provider, like AWS.
For Azure, follow the instructions here to create your storage container.
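A minimal sketch of that setup with the Azure CLI (resource group, storage account, and container names are placeholders; the full prerequisites are in the linked instructions):
az group create --name velero-backups --location westus2
az storage account create --name velerobackupstorage --resource-group velero-backups --sku Standard_GRS --kind StorageV2
az storage container create --name velero --account-name velerobackupstorage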
If your provider supports a region config (Azure does not - see the Volume Snapshot Location Config doc and the Backup Storage Location Config doc), it is configurable using the --config flag, e.g. --config region=us-west-2. Check your provider plugin to see whether different regions are supported, what the key name is, and what possible values are supported.
Refer to the Velero locations documentation for examples of using multiple snapshot and backup locations.
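For a provider that does accept a region key (AWS in this example; the location name and region are placeholders), an additional snapshot location would be created like this:
velero snapshot-location create ebs-us-east-1 --provider aws --config region=us-east-1
A backup can then reference it with velero backup create ... --volume-snapshot-locations ebs-us-east-1.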
Update:
Although velero snapshot-location create allows you to specify a --provider, the Limitations/Caveats section of the Location documentation specifically states that only a single set of credentials is supported, and furthermore that Azure specifically does not allow creation of snapshots in a different region:
Velero only supports a single set of credentials for VolumeSnapshotLocations. Velero will always use the credentials provided at install time (stored in the cloud-credentials secret) for volume snapshots.
Volume snapshots are still limited by where your provider allows you to create snapshots. For example, AWS and Azure do not allow you to create a volume snapshot in a different region than where the volume is. If you try to take a Velero backup using a volume snapshot location with a different region than where your cluster’s volumes are, the backup will fail.
I personally find this confusing (how could one use a different provider without specifying credentials?). Regardless, it seems that storing snapshots in a different region in Azure is not possible.

How to delete attached data disk after the VM is deleted in Terraform?

I am using Terraform to create on-demand Azure infrastructure for my users. The infrastructure includes a VM and a 256 GB data disk. At the end of the day, 'terraform destroy' will delete the VM, but the data disk is not deleted. Is there a way to delete the attached disk on the destroy command?

Cannot delete blob: There is currently a lease on the blob and no lease ID was specified in the request

When I attempt to delete a blob from my storage account container, I get an error message, "There is currently a lease on the blob and no lease ID was specified in the request."
I have 4 virtual machine instances. I also have 8 virtual machine disks, 4 of which are in use (one by each of the virtual machine instances). Strangely, I have 10 blobs listed in my single storage account's lone container, called vhds. Here is a screenshot of the 10 blobs, highlighting the two that I cannot delete.
Can anyone give me guidance on how to delete these blobs? I have no use for them and I'd like to cut down on my storage costs for my subscription.
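For context, something like the following Azure CLI query (account name is a placeholder, and the lease property path assumes the current CLI output format) can show which blobs currently hold a lease:
az storage blob list --account-name mystorageaccount --container-name vhds --query "[].{name:name, leaseState:properties.lease.state}" --output table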
You need to delete the disks from the Virtual Machines section of the portal.
Navigate to Virtual Machines -> Disks
Delete the disks
Check this MSDN blog post for the complete instructions:
http://blogs.msdn.com/b/windows_azure_technical_support_wats_team/archive/2013/02/05/iaas-unable-to-delete-vhd-there-is-currently-a-lease-on-the-blob.aspx
Alternatively, you can just kill the lease on the Blobs with PowerShell:
(Get-AzureRmStorageAccount -ResourceGroupName "RESOURCE_GROUP_NAME" -Name "STORAGE_ACCOUNT_NAME" | Get-AzureStorageBlob -Container "CONTAINER_NAME" -Blob "BLOB_NAME.vhd").ICloudBlob.BreakLease()
Just realize that when you do this, the VMs that use this storage will not be able to turn on. (And you should turn them off, if they aren't already, before you do this.)
However, if you might use the VMs again in the future, this technique allows you to:
Stop the VM in question.
Download a copy of the VHD.
Release the lease on the VHD.
Delete the VHD in the storage account.
Insert an arbitrary time period where you don't need the VM.
Upload the VHD to the same storage account with the same container and same file name.
Start the VM back up and have it work :-).
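Roughly the same flow is possible with the Azure CLI instead of PowerShell (placeholder names; reads are still allowed on a leased blob, so the copy can be downloaded before the lease is broken):
az storage blob download --account-name mystorageaccount --container-name vhds --name mydisk.vhd --file ./mydisk.vhd
az storage blob lease break --account-name mystorageaccount --container-name vhds --blob-name mydisk.vhd
az storage blob delete --account-name mystorageaccount --container-name vhds --name mydisk.vhd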
There is an alternate (easier) way to break a lease if you use (or download) Microsoft Azure Storage Explorer (a really cool tool to manage Azure Storage).
You can browse to the Storage Account and find the relevant file (vhd) and then select the Break Lease option.
The same CAUTIONS above apply and the Explorer tool makes these clear.
You may still have images associated with your VMs. Even if you have deleted the VMs, the images have to be explicitly deleted.
Once the images are deleted, you should see the VHDs getting cleared as well.
