Update Snapshot Location Velero Azure

I currently have Velero up and running and it's working great. The only issue I have is that the snapshots of the volumes are being created in the same region as the originals, which kind of defeats the purpose of disaster recovery. This flag
--snapshot-location-config
doesn't have an argument for region. I know there is a config for the default snapshot location
volumesnapshotlocations.velero.io "default"
Does anyone know how to modify the default so I can get my snapshots into a new region?
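For context, the default location is an ordinary Kubernetes object, so it can be inspected and edited with kubectl. A minimal sketch, assuming Velero is installed in the velero namespace:

```bash
# View the default volume snapshot location and its current config
kubectl -n velero get volumesnapshotlocations.velero.io default -o yaml

# Edit it in place (whether any config key changes the snapshot region
# depends entirely on the provider plugin)
kubectl -n velero edit volumesnapshotlocations.velero.io default
```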

Creating snapshots from the main region directly into a different region is not supported.
Azure zone-redundant snapshots and images for managed disks offer 99.9999999999% (twelve 9's) durability. The availability zones in a region are physically separated, so even if an outage affects one AZ, you can still access your data from a redundant AZ.
However, if you fear calamities that can affect several square kilometers (multiple zones in a region), you can manually move the snapshots to a different region, or even automate the process. Here is a guide for doing it.
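As a sketch of the automated route: Azure can copy an incremental managed-disk snapshot into another region with the az CLI's copy-start flow. The resource names and regions below are hypothetical:

```bash
# ID of the source (incremental) snapshot in the primary region
SNAP_ID=$(az snapshot show -g prod-rg -n mydisk-snap --query id -o tsv)

# Start a background cross-region copy into the DR region
# (--copy-start requires the source snapshot to be incremental)
az snapshot create \
  -g dr-rg -n mydisk-snap-dr \
  -l westeurope \
  --source "$SNAP_ID" \
  --incremental true \
  --copy-start true

# Poll until the copy finishes
az snapshot show -g dr-rg -n mydisk-snap-dr --query completionPercent -o tsv
```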

--snapshot-location-config doesn't have an argument for region
--snapshot-location-config doesn't create the storage; you must do so yourself. You can specify a different region, a different Azure subscription, or even a different provider, such as AWS.
For Azure, follow the instructions here to create your storage container.
If your provider supports a region config (Azure does not; see the Volume Snapshot Location Config and Backup Storage Location Config docs), it is configurable using --config, e.g. --config region=us-west-2. Check your provider's plugin to see whether different regions are supported, what the key name is, and what values are allowed.
Refer to the Velero locations documentation for examples of using multiple snapshot and backup locations.
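For example, with the AWS plugin (which does support a region key), additional locations might be created like this; the bucket and location names are hypothetical:

```bash
# A second backup storage location in another region
velero backup-location create backup-west \
  --provider aws \
  --bucket my-velero-backups-west \
  --config region=us-west-2

# A snapshot location in the same region
velero snapshot-location create snap-west \
  --provider aws \
  --config region=us-west-2

# Use both for a backup
velero backup create dr-test \
  --storage-location backup-west \
  --volume-snapshot-locations snap-west
```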
Update:
Although velero snapshot-location create allows you to specify a --provider, the Limitations/Caveats section of the Locations documentation states that only a single set of credentials is supported, and that Azure in particular does not allow creating snapshots in a different region:
Velero only supports a single set of credentials for VolumeSnapshotLocations. Velero will always use the credentials provided at install time (stored in the cloud-credentials secret) for volume snapshots.
Volume snapshots are still limited by where your provider allows you to create snapshots. For example, AWS and Azure do not allow you to create a volume snapshot in a different region than where the volume is. If you try to take a Velero backup using a volume snapshot location with a different region than where your cluster’s volumes are, the backup will fail.
I personally find this confusing: how could one use a different provider without specifying credentials? Regardless, it seems that storing snapshots in a different region is not possible on Azure.


Alternate Method for Azure Disaster Recovery

Currently, for our Azure disaster recovery plan, we replicate workloads from a primary site/region to a secondary site, where we mirror the source VM config and create the required or associated resource groups, storage accounts, virtual networks, etc.
We are looking into an alternate method that wouldn't require a second resource group. This would require:
Use one already-existing resource group, e.g. testGroup-rg in East-US
Deploy new IaC components into the same RG, but in Central-US
So within the single resource group, if we wanted a function app, we would have two sets of components: testFuncApp in East-US and testFuncApp in Central-US.
This way we would only ever have one set of IaC. Of course, we would need to automate how traffic flows into a particular region if both exist.
Is this a possibility? If it is, is it even necessary/worth it?
Unfortunately, there is no way to use the same RG with Site Recovery. You need a resource group in the target region; if one is not provided, Site Recovery creates a new resource group in the target region with an "asr" suffix.
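Outside of Site Recovery, though, the layout proposed in the question is valid: a resource group's location only determines where its metadata is stored, and member resources can live in any region. A sketch with hypothetical names:

```bash
# One resource group, homed in East US
az group create -n testGroup-rg -l eastus

# Paired components in two regions, inside the same resource group
az storage account create -n teststoreeast -g testGroup-rg -l eastus
az storage account create -n teststorecentral -g testGroup-rg -l centralus
```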

Default snapshot location for Velero

I want to use Velero with my Azure Kubernetes cluster to back up cluster data and persistent volumes.
As the docs say, I have annotated the pods, and the backup job even shows 4 snapshots successful.
I managed to take the backup for the cluster and I can see it in my Azure storage account. The problem is that I see only .gz files and one JSON file in my storage account's Velero-designated container. Shouldn't I see a file equivalent to my PVs (which are about 10 GB)?
This is in fact the correct setup. You should see only JSON files and gzipped files in the backup folder within the Velero container.
These files contain pointers to the actual snapshots in Azure. Look for the snapshots within the resource group you specified during backup configuration. There should be snapshots corresponding to the PVC sizes.
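To confirm this, you can cross-check the snapshots Velero recorded against what exists in Azure; a sketch, assuming a backup named my-backup and a hypothetical snapshot resource group:

```bash
# List the volume snapshots Velero took as part of the backup
velero backup describe my-backup --details

# List the managed-disk snapshots that actually exist in Azure
az snapshot list -g my-snapshot-rg -o table
```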

Storage account connectivity method for AKS

I'm setting up a Storage Account so I can dynamically create and use a persistent volume with Azure Files in Azure Kubernetes Service (AKS). I'm doing this to:
Have a PV and PVC for the database
A place to store the application files
AKS does create a storage account in the automatically created MC_<resource-group>_<aks-name>_<region> resource group. However, that storage account is destroyed if the node size/VM is changed (not the node count), so it shouldn't be used: you'll lose your files and database if you ever need a node size/VM with more resources.
This documentation, nor any other I've really come across, says what the best practice is for the Connectivity method:
Public endpoint (all networks)
Public endpoint (selected networks)
Private endpoint
The first option sounds like a bad idea.
The second option allows me to select a virtual network, and there are two choices:
MC_<resource-group>_<aks-name>_<region>... again, this doesn't seem like a good idea, because if the node size/VM is changed, the connection will be broken.
aks-vnet-<number>... I'm not sure what this is, but it looks like it is part of the previous resource group, so it would also be destroyed in the previously mentioned scenario.
The third option contains a number of choices, some of which are included in the second option.
So how should I securely set this up for AKS to share files with the application and persist database files?
EDIT
Looking at both the "Firewalls and virtual networks" and "Private endpoint connections" settings for the storage account that comes with the AKS node, it looks like it is just set up for "All networks"... so maybe having that be where my actual PV and PVC are stored isn't such an issue? I could use some clarity on the topic.
I'm not sure where the problem lies. All the assets generated by AKS are tied to the AKS lifecycle: if you delete AKS, it will delete the MC_* resource group (and that is 100% right). I'm not sure what you mean about the storage account being destroyed; it wouldn't get destroyed unless you remove the PVC and the reclaim policy is set to Delete.
Reading: https://learn.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv
As for the networking part, "selected networks" with the AKS nodes' network selected should be the way to go. You can figure that network out by looking at the AKS nodes or the AKS agent pool definition(s). I don't think this is configurable using only Kubernetes primitives, so it would be a manual/scripted action after the storage account is created.
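A sketch of that manual/scripted step with the az CLI, assuming hypothetical names for the node resource group, VNet, subnet, and storage account:

```bash
# Enable the Microsoft.Storage service endpoint on the AKS node subnet
az network vnet subnet update \
  -g MC_myrg_myaks_eastus \
  --vnet-name aks-vnet-12345678 \
  -n aks-subnet \
  --service-endpoints Microsoft.Storage

SUBNET_ID=$(az network vnet subnet show \
  -g MC_myrg_myaks_eastus \
  --vnet-name aks-vnet-12345678 \
  -n aks-subnet --query id -o tsv)

# Allow that subnet on the storage account, then deny everything else
az storage account network-rule add \
  -g storage-rg --account-name mystorageacct --subnet "$SUBNET_ID"
az storage account update \
  -g storage-rg -n mystorageacct --default-action Deny
```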

When creating an AzureFile persistent volume via a dynamic persistent volume claim, the AzureFile created has no metadata

I'm successfully creating persistent volumes dynamically on Azure using the kubernetes.io/azure-file provider. I can configure it to use the storage account I want, and I'm generally happy, except that the resulting objects in Azure (as viewed through the Azure portal or using the az CLI) have no metadata associated with them.
I'd like to contrast this with the setup I have on AWS using kubernetes.io/aws-ebs, where the EBS volumes get tagged with tags like KubernetesCluster, Name, kubernetes.io/created-for/pv/name and kubernetes.io/created-for/pvc/name.
This info was invaluable when we lost our cluster and had to write a script to re-attach these existing volumes by creating PVs. It's also useful for a host of other reasons.
Is it possible to achieve this behaviour with kubernetes.io/azure-file?
The only possible way to meet your requirement is through the metadata of the file share, but you need to add the info you want to the metadata yourself.
Metadata for a share or file resource is stored as name-value pairs associated with the resource. Metadata names must adhere to the naming rules for C# identifiers.
For more details, see Metadata names.
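So the tagging has to happen out of band, after provisioning. A sketch with the az CLI, using hypothetical share and account names and keys modeled on what the AWS provisioner sets (slashes are not allowed, since names must be valid C# identifiers):

```bash
# Attach name-value metadata to the dynamically provisioned file share
az storage share metadata update \
  --account-name mystorageacct \
  --name kubernetes-dynamic-pvc-1234 \
  --metadata createdForPvName=pvc-1234 createdForPvcName=my-claim

# Read it back
az storage share metadata show \
  --account-name mystorageacct \
  --name kubernetes-dynamic-pvc-1234
```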

Using Packer to spin up a VM and extract the image in an availability set

We have a corporate requirement (due to pricing and whitelisting) to have availability sets in our Azure subscription, and resources like compute should be spun up inside that particular availability set. Since Packer, while creating the image, spins up a temporary VM inside a temporary resource group, I am confused (since I did not find any documentation around it) about whether we can configure Packer to spin up the temporary VM inside the whitelisted availability set.
One possible way I can think of is to spin up the VM in the resource group we created for the availability set (since everything in Azure needs to be inside a resource group). That way, I am guessing, it will be tracked as part of billing, but I am still not sure whether the temporary VM will be part of the availability set.
Please help and suggest if there is an alternative way to achieve the same.
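To my knowledge, the azure-arm builder has no option to place the temporary VM in an availability set, but it does have build_resource_group_name, which runs the build VM in an existing resource group instead of a throwaway one (when it is set, location must be omitted). A minimal sketch with hypothetical names, assuming az CLI authentication:

```bash
cat > build.json <<'EOF'
{
  "builders": [{
    "type": "azure-arm",
    "use_azure_cli_auth": true,

    "build_resource_group_name": "availability-set-rg",

    "managed_image_resource_group_name": "availability-set-rg",
    "managed_image_name": "my-golden-image",

    "os_type": "Linux",
    "image_publisher": "Canonical",
    "image_offer": "UbuntuServer",
    "image_sku": "18.04-LTS",
    "vm_size": "Standard_DS2_v2"
  }]
}
EOF

packer build build.json
```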
