Terraform state file in storage account - Azure

We have a Terraform state file stored in an Azure Storage Account. If the storage account goes down, we're screwed. What is the best way to store the file, and where?

AFAIK, there are two places to store a Terraform state file: locally on your machine, or remotely in an Azure Storage Account.
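For reference, the remote option is just a backend block pointing at a blob container; a minimal sketch, where every name below is a placeholder:

    terraform {
      backend "azurerm" {
        resource_group_name  = "rg-terraform-state"   # placeholder names throughout
        storage_account_name = "sttfstate001"
        container_name       = "tfstate"
        key                  = "prod.terraform.tfstate"
      }
    }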
As confirmed, you are using Standard_LRS, which the Microsoft documentation does not recommend if you are looking for high availability:
"Locally redundant storage (LRS) copies your data synchronously three times within a single physical location in the primary region. LRS is the least expensive replication option, but is not recommended for applications requiring high availability or durability."
So, as a solution, you can change the replication type as per your requirement to Standard_GRS (replicated to a secondary region) or Standard_ZRS (replicated across availability zones in the primary region), so that your data exists in more than one location.
You can change it in the portal under your storage account > Configuration > Replication.
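If the storage account is itself managed by Terraform, the same change is a one-attribute edit; a minimal sketch with placeholder names:

    resource "azurerm_storage_account" "tfstate" {
      name                     = "sttfstate001"        # placeholder
      resource_group_name      = "rg-terraform-state"  # placeholder
      location                 = "westeurope"
      account_tier             = "Standard"
      account_replication_type = "GRS"  # was "LRS"; "ZRS" is the zone-redundant alternative
    }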
If You want more details on Disaster recovery (if one location is down) or data protection from Accidental Deletes then please refer the below documents:
Disaster recovery and storage account failover - Azure Storage | Microsoft Docs
Soft delete for containers - Azure Storage | Microsoft Docs

Related

Sync data between two storage accounts in Azure

Suppose I have two storage accounts, Storage 1 and Storage 2. When an entry is written to Storage 1, it should be automatically synced to Storage 2 in Azure for all services (table, file, blob). Is there any way to do this?
As Gaurav mentioned, redundancy does not mean backup: if data is deleted from the original location, the deletion replicates and the data is removed from the other locations as well. If you are specifically looking for backup solutions, I'd recommend checking the following documentation.
Below are some of the available backup options:
If the purpose of your backup is to make sure data stays available even through a rack-, datacenter-, or region-level failure (an Azure infrastructure failure), you can select a replication strategy when you create the storage account. In this case you do not have direct access to the backup copies; Microsoft uses these copies to recover when a failure is identified. The options are:
Locally redundant storage (LRS)
Zone-redundant storage (ZRS)
Geo-redundant storage (GRS)
Read-access geo-redundant storage (RA-GRS)
For options 1, 2, and 3, the replica is not accessible unless Microsoft initiates a failover.
Option 4 provides read-only access to the data in the secondary location, in addition to the geo-replication of GRS (option 3).
Not sure if you had a different purpose for taking a backup.
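For the blob part of the question specifically, there is also object replication between two accounts, which (unlike the redundancy options above) gives you a second, independently accessible copy. A hedged sketch using the azurerm provider's azurerm_storage_object_replication resource; the account references and container names are placeholders, and the feature covers block blobs only, not tables or files:

    # Both accounts must have blob versioning and change feed enabled
    # (blob_properties with versioning_enabled and change_feed_enabled set to true).
    resource "azurerm_storage_object_replication" "sync" {
      source_storage_account_id      = azurerm_storage_account.storage1.id
      destination_storage_account_id = azurerm_storage_account.storage2.id

      rules {
        source_container_name      = "data"   # placeholder container names
        destination_container_name = "data"
      }
    }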

Azure blob container backup and recovery

I am thinking of using Azure Blob Storage for a document management system I am developing. All blobs (images, videos, Word/Excel/PDF files, etc.) will be stored in Azure Blob Storage. As I understand it, I need to create a container, and these files are stored within the container.
I would like to know how to safeguard against accidental or malicious deletion of a container. If a container is deleted, all the files it contains are lost. I am trying to figure out how to put a backup and recovery mechanism in place for my storage account so that if something happens to a container, I can recover the files inside it.
Does Microsoft Azure provide anything for such backup and recovery, or do I need to explicitly write code so that files are stored in two separate Blob Storage accounts?
Anyone with access to your storage account's key (primary or secondary; there are two keys for a storage account) can manipulate the storage account in any way they see fit. The only way to ensure nothing happens? Don't give anyone access to the key(s). If you place the storage account within a resource group that only you have permissions on, you'll at least prevent others with access to the subscription from discovering the storage account and accessing it.
Within the subscription itself, you can place a lock on the actual resource (the storage account), so that nobody with access to the subscription accidentally deletes the entire storage account.
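If you manage the account with Terraform, such a lock is one small extra resource; a sketch assuming the storage account is defined elsewhere as azurerm_storage_account.docs:

    resource "azurerm_management_lock" "no_delete" {
      name       = "lock-storage-account"
      scope      = azurerm_storage_account.docs.id  # assumed resource name
      lock_level = "CanNotDelete"                   # blocks deletion, still allows reads/writes
      notes      = "Prevents accidental deletion of the document store"
    }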
Note: you can regenerate storage account keys at any time, so if you ever suspect a key has been compromised, you can regenerate it.
Backups
There are several backup solutions offered for blob storage in case containers get deleted. More product info can be found here: https://azure.microsoft.com/en-us/services/backup/
Redundancy
If you are concerned about availability, the documentation states: "The data in your Microsoft Azure storage account is always replicated to ensure durability and high availability. Replication copies your data, either within the same data center, or to a second data center, depending on which replication option you choose." There are several replication options:
Locally redundant storage (LRS)
Zone-redundant storage (ZRS)
Geo-redundant storage (GRS)
Read-access geo-redundant storage (RA-GRS)
More details can be found here:
https://learn.microsoft.com/en-us/azure/storage/common/storage-redundancy
Managing Access
Finally, managing access to your storage account is the best way to secure it and avoid data loss. If you don't want anyone to delete files or folders, you can grant read-only access through Shared Access Signatures (SAS), which let you create policies and grant access scoped to Read, Write, List, Delete, and so on. A quick GIF demo can be seen here: https://azure.microsoft.com/en-us/updates/manage-stored-access-policies-for-storage-accounts-from-within-the-azure-portal/
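If you generate such tokens with Terraform, the azurerm provider has an azurerm_storage_account_sas data source; below is a read-and-list-only sketch, again assuming a storage account resource named azurerm_storage_account.docs (the exact set of required permission attributes varies by provider version):

    data "azurerm_storage_account_sas" "read_only" {
      connection_string = azurerm_storage_account.docs.primary_connection_string
      https_only        = true

      start  = "2024-01-01T00:00:00Z"   # placeholder validity window
      expiry = "2024-07-01T00:00:00Z"

      services {
        blob  = true
        queue = false
        table = false
        file  = false
      }

      resource_types {
        service   = false
        container = true
        object    = true
      }

      permissions {
        read    = true    # read and list only; this token cannot delete anything
        list    = true
        write   = false
        delete  = false
        add     = false
        create  = false
        update  = false
        process = false
        tag     = false   # tag/filter are required by newer provider versions
        filter  = false
      }
    }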
We are using Blob Storage to store documents and for document management.
To prevent deletion of blobs, you can now enable soft delete, as described here:
https://azure.microsoft.com/en-us/blog/soft-delete-for-azure-storage-blobs-ga/
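With the azurerm provider, the same soft-delete settings live under blob_properties on the storage account; a minimal sketch with placeholder names and a 14-day retention window:

    resource "azurerm_storage_account" "docs" {
      name                     = "stdocs001"  # placeholder
      resource_group_name      = "rg-docs"    # placeholder
      location                 = "westeurope"
      account_tier             = "Standard"
      account_replication_type = "LRS"

      blob_properties {
        delete_retention_policy {
          days = 14   # deleted blobs are recoverable for 14 days
        }
        container_delete_retention_policy {
          days = 14   # deleted containers are recoverable as well
        }
      }
    }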
You can also build your own automation around PowerShell or AzCopy to do incremental and full backups.
The last element is to use RA-GRS, where you can read blobs from a secondary endpoint in another region in case the primary data center goes down.
Designing Highly Available Applications using RA-GRS
https://learn.microsoft.com/en-us/azure/storage/common/storage-designing-ha-apps-with-ragrs?toc=%2fazure%2fstorage%2fqueues%2ftoc.json
Use Microsoft's Azure Storage Explorer. It will allow you to download the full contents of blob containers including folders and subfolders with blobs. Conversely, you can upload to containers in the same way. Simple and free!

Azure Blob Storage: Does Microsoft Implement Redundant Backups?

I've searched the web and contacted technical support, yet no one seems able to give me a straight answer on whether items in Azure Blob Storage are backed up or not.
What I mean is, do I need to create a twin storage account as a "backup" and program copies of all content from one storage to another, or are the contents of a client's Blob Storage automatically redundantly backed up by Microsoft?
I know with AWS, storage is redundantly backed up via onsite drives as well as across other nodes in the cluster.
Yes, you will need to do backups manually; Azure Storage does not back up the contents of your storage account automatically.
Azure Storage does provide geo-redundant replication (provided you configure the redundancy level of your storage account as GRS or RA-GRS), but that is not backup. Once you delete content from your primary account (location), it is automatically removed from the secondary account (geo-redundant location).
Both AWS (EBS) and Azure (Blob Storage) provide durability by replicating data across different data centers. This is how the cloud provider guarantees high availability and durability of the data.
"In order to ensure that your data is durable, Azure Storage has the ability to keep (and manage) multiple copies of your data. This is called replication, or sometimes redundancy. When you set up your storage account, you select a replication type. In most cases, this setting can be modified after the storage account is set up."
For more details, refer to the replication section of the documentation.
If you need to capture changes to the storage and allow restores to previous versions (e.g., in situations like data corruption, or application requirements like restore points and backups), you need to take a snapshot manually. This is common to both AWS and Azure.
For more details on creating a snapshot of a blob in Azure, refer to the documentation.

Microsoft Azure change from geo redundant to local redundant

I recently bought a Microsoft Azure storage product for backing up a NAS.
At first I chose "read-access geo-redundant" and set up a backup schedule on my NAS.
Today I changed it to locally redundant (I saw the price difference), but my Synology NAS has not finished backing up yet. Will it automatically change to locally redundant, or should I cancel the backup and redo it?
To answer your question: you don't have to do anything. Azure will automatically convert your storage account's redundancy type from RA-GRS to LRS.
To elaborate, with RA-GRS, data is written to the primary location and then replicated to the secondary location by a background process. Once you change the redundancy to LRS, that replication stops.
One more point I would like to mention: if you're storing data in blob storage for backup purposes only, consider the Cool access tier offered by Azure Storage. Compared to standard (Hot) storage, the cost of storing data in Cool storage is much lower.
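In Terraform terms, both the redundancy change and the tier choice are single attributes on the storage account; a sketch with placeholder names:

    resource "azurerm_storage_account" "nas_backup" {
      name                     = "stnasbackup001"  # placeholder
      resource_group_name      = "rg-backup"       # placeholder
      location                 = "westeurope"
      account_tier             = "Standard"
      account_replication_type = "LRS"   # changed down from "RAGRS"
      access_tier              = "Cool"  # cheaper at-rest pricing for backup data
    }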

Can I use Azure Storage geo-replication as source?

I know Azure will geo-replicate a copy of the current storage account to another location.
My question is: can I access the other location programmatically, even just read-only?
I ask because this would let me build another deployment in a different geo-location for performance and disaster resilience, like Azure itself does. With the current setup, if I use the same storage from a different geo-location, I have to pay extra bandwidth costs.
You can only access your storage account by its primary name. In the event of failover, that name will be mapped to the alternate datacenter. You cannot access the failover storage directly, nor can you choose when to trigger a failover. For a multi-site setup as you described, you'd need to duplicate your data (which would then add the cost of storage in datacenter #2). This does give you ultimate flexibility in your DR and performance planning, but at an added cost of storage and bandwidth (egress-only).
Last week the storage team announced read-only access to the failover storage: Windows Azure Storage Redundancy Options and Read Access Geo Redundant Storage.
This means you can now deploy your application in a different datacenter which can be used for "full" failover (meaning that the storage will also be available there). Even if it's only read-only, your application will still be online - but simply in "degraded" mode.
The steps on how you can implement this with traffic manager are described here: http://fabriccontroller.net/blog/posts/adding-failover-to-your-application-with-read-access-geo-redundant-storage-and-the-windows-azure-traffic-manager/
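As a side note, if the account is defined in Terraform, the provider exports the read-only secondary endpoint directly; a sketch assuming a hypothetical RA-GRS account resource named app:

    resource "azurerm_storage_account" "app" {
      name                     = "stapp001"  # placeholder
      resource_group_name      = "rg-app"    # placeholder
      location                 = "westeurope"
      account_tier             = "Standard"
      account_replication_type = "RAGRS"  # enables the read-only secondary endpoint
    }

    output "secondary_blob_endpoint" {
      # e.g. https://stapp001-secondary.blob.core.windows.net/
      value = azurerm_storage_account.app.secondary_blob_endpoint
    }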
