Dynamically created volumes from Kubernetes not being auto-deleted on Azure

I have a question about Kubernetes and the default reclaim behavior of dynamically provisioned volumes. The reclaim policy is "delete" for dynamically created volumes in Azure, but after the persistent volume claim and persistent volume have been deleted using kubectl, the page blob for the VHD still exists and is not going away.
This is an issue because every time I restart the cluster, I get a new 1 GiB page blob that I now have to pay for, and the old, unused one does not go away. They show up as unleased in the portal and I am able to manually delete them in the storage account. However, they will not delete themselves. According to "kubectl get pv" and "kubectl get pvc," they do not exist.
According to all the documentation I can find, they should go away upon deletion using "kubectl":
http://blog.kubernetes.io/2016/10/dynamic-provisioning-and-storage-in-kubernetes.html
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming
Any help on this issue would be much appreciated.
EDIT: I have found that this issue appears only when you delete the persistent volume before you delete the persistent volume claim. I know that is not the intended order of operations, but it should either be handled cleanly or throw an error.
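For reference, a quick way to confirm the reclaim policy and use the deletion order that avoids the orphaned blob (the claim name below is a placeholder):
# Confirm the PV's reclaim policy is Delete
kubectl get pv -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy
# Delete the claim first and let Kubernetes reclaim the PV and its backing blob
kubectl delete pvc my-claim
# The bound PV should now be removed automatically
kubectl get pv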

Related

Update Snapshot Location Velero Azure

Currently I have Velero up and running and it's working great. The only issue I have is that the snapshots of the volumes are being created in the same region as the originals, which rather defeats the purpose of disaster recovery. This flag
--snapshot-location-config
doesn't have an arg for region. I know there is a config for the default snapshot location
volumesnapshotlocations.velero.io "default"
Does anyone know how to modify the default so I can get my snapshots into new regions?
Snapshot creation from the main region into a different region is not supported.
Azure zone-redundant snapshots and images for managed disks have a decent 99.9999999999% (12 9's) durability. The availability zones in a region are usually physically separated and even if an outage affects one AZ, you can still access your data from a redundant AZ.
However, if you fear calamities that can affect several square kilometers (multiple zones in a region), you can manually move the snapshots to a different region or even automate the process. Here is a guide to do it.
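If you want to automate it, a rough sketch with the Azure CLI (resource, snapshot, and account names are placeholders): export the snapshot via a temporary SAS, then do a server-side copy into a storage account you created in the target region.
# Get a temporary read SAS for the snapshot in the source region
az snapshot grant-access --resource-group myRG --name mySnapshot --duration-in-seconds 3600 --query accessSas -o tsv
# Server-side copy of the snapshot VHD into a storage account in the target region
az storage blob copy start --account-name targetregionstore --destination-container snapshots --destination-blob mySnapshot.vhd --source-uri "<accessSas-from-previous-step>"
From the copied VHD you can then recreate a managed snapshot or disk in the target region.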
--snapshot-location-config doesn't have arg for region
--snapshot-location-config doesn't create the storage, you must do so yourself. You can specify a different region, a different Azure subscription, or even a different provider, like AWS.
For Azure, follow the instructions here to create your storage container.
If your provider supports a region config (Azure does not - see the Volume Snapshot Location Config doc and Backup Storage Location Config doc), it is configurable using the --config flag, e.g. --config region=us-west-2. Check your provider plugin to see whether different regions are supported, what the key name is, and what the possible values are.
Refer to the Velero locations documentation for examples of using multiple snapshot and backup locations.
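For the backup storage side (as opposed to volume snapshots), a sketch of registering and using an additional location, with placeholder names and assuming the resource group, storage account, and container already exist:
# Register a second backup storage location (names are illustrative)
velero backup-location create secondary --provider azure --bucket velero-secondary --config resourceGroup=velero-rg-secondary,storageAccount=velerosecondarysa
# Point a backup at the secondary location
velero backup create my-backup --storage-location secondary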
Update:
Although velero snapshot-location create allows you to specify a --provider, the Limitations/Caveats section of the locations documentation states that only a single set of credentials is supported, and furthermore that Azure does not allow creation of snapshots in a different region:
Velero only supports a single set of credentials for VolumeSnapshotLocations. Velero will always use the credentials provided at install time (stored in the cloud-credentials secret) for volume snapshots.
Volume snapshots are still limited by where your provider allows you to create snapshots. For example, AWS and Azure do not allow you to create a volume snapshot in a different region than where the volume is. If you try to take a Velero backup using a volume snapshot location with a different region than where your cluster’s volumes are, the backup will fail.
I personally find this confusing -- how could one use a different provider without specifying credentials? Regardless, it seems as if storage of snapshots in a different region in Azure is not possible.

How to copy blobs from one storage account's container to another storage account's container

I have two storage accounts (storage1 and storage2), and both of them have containers called data.
Now, storage1's data container has a folder called database-files, which contains lots of folders recursively. I mean, it's kind of huge.
What I am trying to do is copy database-files and everything in it from storage1's data container to storage2's data container. Note: both storage accounts are in the same resource group and subscription.
Here is what I've tried:
az storage blob copy start-batch --source-account-name "storage1" --source-container "data" --account-name "storage2" --destination-container "data"
This worked fine, but the problem is that it takes a ridiculously long time, and I can't wait that long because I want to run this command as part of one of my releases. I need it to finish as fast as possible so that my deployment happens fast.
Is there any way to make it faster? Maybe zip it, copy it, then unzip it? Even if I use AzCopy, I have no idea whether it will help with timing; as far as I can tell, it only removes the single point of failure. I also have no idea how to use it via the Azure CLI.
How can I proceed?
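For what it's worth, AzCopy (v10) does server-to-server copies, so the data never routes through your machine, which is usually where the time goes. A sketch with placeholder SAS tokens:
# Server-side recursive copy of the folder between the two accounts
azcopy copy "https://storage1.blob.core.windows.net/data/database-files?<source-sas>" "https://storage2.blob.core.windows.net/data/database-files?<dest-sas>" --recursive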

Storage account connectivity method for AKS

I'm setting up a Storage Account so I can dynamically create and use a persistent volume with Azure Files in Azure Kubernetes Service (AKS). I'm doing this to:
Have a PV and PVC for the database
Have a place to store the application files
AKS does create a storage account in the MC_<resource-group>_<aks-name>_<region> resource group that is automatically created. However, that storage account is destroyed if the node size/VM is changed (not node count), so it shouldn't be used since you'll lose your files and database if you need a node size/VM with more resources.
Neither this documentation nor any other I've come across says what the best practice is for the connectivity method:
Public endpoint (all networks)
Public endpoint (selected networks)
Private endpoint
The first option sounds like a bad idea.
The second option allows me to select a virtual network, and there are two choices:
MC_<resource-group>_<aks-name>_<region>... again, doesn't seem like a good idea because if the node size/VM is changed, the connection will be broken.
aks-vnet-<number>... not sure what this is, but it looks like it is part of the previous resource group, so it will also be destroyed in the previously mentioned scenario.
The third option contains a number of options, some of which are included in the second option.
So how should I securely set this up for AKS to share files with the application and persist database files?
EDIT
Looking at both the "Firewalls and virtual networks" and "Private endpoint connections" settings for the storage account that comes with the AKS node, it looks like it is just set up for "All networks"... so maybe having that be where my actual PV and PVC are stored isn't such an issue...? Could use some clarity on the topic.
Not sure where the problem lies. All the assets generated by AKS are tied to the AKS lifecycle; if you delete AKS, it will delete the MC_* resource group (and that is 100% right). Not sure what you mean about the storage account being destroyed; it wouldn't get destroyed unless you remove the PVC and the reclaim policy is set to delete.
Reading: https://learn.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv
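For context, a minimal sketch of the dynamic provisioning setup that doc describes, assuming a pre-created storage account named mystorageacct:
# StorageClass backed by Azure Files; PVCs referencing it get shares in mystorageacct
cat <<EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile
provisioner: kubernetes.io/azure-file
parameters:
  skuName: Standard_LRS
  storageAccount: mystorageacct
EOF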
As for the networking part, selected networks with the AKS nodes' network selected should be the way to go. You can figure out that network by looking at the AKS nodes or the AKS agent pool definition(s). I don't think this is configurable using only Kubernetes primitives, so it would be a manual/scripted action after the storage account is created; a sketch follows.
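A sketch of that scripted step with the Azure CLI (all names, resource groups, and IDs are placeholders):
# Enable the Microsoft.Storage service endpoint on the AKS nodes' subnet
az network vnet subnet update --resource-group MC_myRG_myAKS_westus --vnet-name aks-vnet-12345678 --name aks-subnet --service-endpoints Microsoft.Storage
# Deny public access on the storage account, then allow only that subnet
az storage account update --resource-group my-storage-rg --name mystorageacct --default-action Deny
az storage account network-rule add --resource-group my-storage-rg --account-name mystorageacct --subnet "/subscriptions/<sub-id>/resourceGroups/MC_myRG_myAKS_westus/providers/Microsoft.Network/virtualNetworks/aks-vnet-12345678/subnets/aks-subnet"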

Azure: Unable to delete container, there is a lease on the blob but no lease ID provided

I have deleted everything except a storage container which has a number of page and block blobs. These won't delete; the error says there is a lease on the blob but no lease ID was provided.
I have seen other posts about deleting VM disks. I don't have any VMs now just these blobs left.
Thanks for taking a look.
Please try the steps outlined in this article, and see if that solves your problem:
https://azure.microsoft.com/en-us/documentation/articles/storage-cannot-delete-storage-account-container-vhd/
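If the portal steps in that article don't do it, the lease can usually be broken from the Azure CLI first (account, container, and blob names are placeholders):
# Break the lease, then delete the blob
az storage blob lease break --account-name mystorageacct --container-name vhds --blob-name mydisk.vhd
az storage blob delete --account-name mystorageacct --container-name vhds --name mydisk.vhd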

Azure Disk Management

I just started using Azure virtual machines and I must admit I still have a few questions regarding disk management:
I manage my machines via the Node JS API in the following way:
azure vm create INSTANCE b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-12_10-amd64-server-20130227-en-us-30GB azureuser XXXXXX --ssh --location "West US" -t ./azure.pem
azure vm start INSTANCE
//do whatever
azure vm shutdown INSTANCE
azure vm delete INSTANCE
After deleting the instance I still have a bunch of disks left, which are not deleted but for which I am still charged (i.e. deducted from my free trial). Are they not deleted by default?
Is there an API call to delete them? (I only found the corresponding REST calls, but I'm unwilling to mix Node.js and REST API calls.)
Can I specify one of those existing disks when starting a new instance?
Thanks for your answers!
Jörg
After deleting the instance I still have a bunch of disks left, which are not deleted but for which I am still charged (i.e. deducted from my free trial). Are they not deleted by default? Is there an API call to delete them? (I only found the corresponding REST calls, but I'm unwilling to mix Node.js and REST API calls.)
Yes, the disks are not deleted by default. I believe the reason for that is so those disks can be reused to spin up new VMs. To delete the disk (which is a page blob stored in Windows Azure Blob Storage) you could use the Azure SDK for Node: https://github.com/WindowsAzure/azure-sdk-for-node.
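You may not even need the SDK: the same classic CLI you are already using has a disk subcommand. A sketch (the disk name is a placeholder; check that your CLI version supports the blob-delete option):
# List leftover disks, then delete one together with its backing page blob
azure vm disk list
azure vm disk delete -b DISK-NAME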
Can I specify one of those existing disks when starting a new instance?
Yes, you can. For that you would need to find the disk image and then use the following command:
azure vm create myVM myImage myusername --location "West US"
Where "myImage" is the name of the image. For more details, please visit: http://www.windowsazure.com/en-us/develop/nodejs/how-to-guides/command-line-tools/#VMs
Yes, when a VM is deleted the disk is left behind. Within the portal you can apply this disk image to a new VM instance on creation. There's some specific guidance on creating VMs from the API with existing disk images here:
http://www.windowsazure.com/en-us/develop/nodejs/how-to-guides/command-line-tools/#VMs
