I want to implement a multi-region architecture in Azure. My current architecture runs in one region and uses Blob Storage to save my data. Can the blob storage be shared between users in different regions? I have seen that the replication is read-only; otherwise, do I have to create another blob storage account for the other regions? And how do I synchronize it so that users in different regions see the same content in the software?
I was wondering if the blob storage can be shared between users in different regions?
Blob storage can certainly be shared between users in different regions. Blob storage resources are accessible over HTTP(S), so it doesn't really matter where your users are.
However, please note that you may incur extra charges for data egress if the blob storage data is consumed by the application in your secondary regions.
Furthermore, you will notice some increased latency for both reads and writes. You can reduce the read latency by fronting your blob storage with a CDN (but then you will pay extra for the CDN).
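As a rough illustration of the egress trade-off above, here is a minimal Python sketch that estimates monthly cost with and without a CDN in front. The per-GB prices are placeholder assumptions, not current Azure rates:

```python
# Rough monthly cost sketch for cross-region reads of a single-region
# blob store. The per-GB prices below are placeholders -- check the
# current Azure bandwidth/CDN pricing pages for real numbers.

EGRESS_PER_GB = 0.087   # assumed inter-region/internet egress price, USD
CDN_PER_GB = 0.081      # assumed CDN delivery price, USD

def monthly_egress_cost(gb_per_month: float, cdn_hit_ratio: float = 0.0) -> float:
    """Estimate egress cost; with a CDN, only cache misses hit the origin."""
    origin_gb = gb_per_month * (1 - cdn_hit_ratio)
    cdn_gb = gb_per_month * cdn_hit_ratio
    return origin_gb * EGRESS_PER_GB + cdn_gb * CDN_PER_GB

print(round(monthly_egress_cost(500), 2))                      # no CDN
print(round(monthly_egress_cost(500, cdn_hit_ratio=0.9), 2))   # 90% cache hits
```

With a high cache-hit ratio most traffic is served from CDN nodes instead of the origin, so the CDN mainly buys you lower read latency; whether it also saves money depends on the actual egress vs. CDN rates.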
I am using Azure CDN with a blob storage account as the endpoint. My data is static and changes only once per year. I am not sure: does it have any impact whether I choose the blob storage account redundancy as LRS vs. GRS?
I hope that the CDN will cache my data in different regions and that the blob storage is needed only the first time, when the CDN fetches the data from it.
From the CDN perspective, I don't think the redundancy matters, because the CDN will cache the content and serve it from CDN nodes. Redundancy becomes important from a data-protection perspective.
If you go with LRS and the datacenter becomes completely inoperable, you will lose all the source content.
If you opt for GRS, you at least have a copy of your content (though it will not be directly accessible to you), and if the datacenter becomes completely inoperable, your data is not lost and Microsoft will fail over to the secondary location.
The recommendation would be to go with GRS over LRS. If you want read access to the content in the secondary region, go with RA-GRS.
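One practical detail when choosing RA-GRS: the read-only secondary is exposed at a predictable endpoint, the account name with `-secondary` appended. A small sketch:

```python
def blob_endpoints(account_name: str) -> dict:
    """Primary and RA-GRS secondary Blob endpoints for a storage account.

    With RA-GRS, Azure exposes a read-only secondary endpoint whose host
    is the account name with '-secondary' appended.
    """
    return {
        "primary": f"https://{account_name}.blob.core.windows.net",
        "secondary": f"https://{account_name}-secondary.blob.core.windows.net",
    }

print(blob_endpoints("mystorage"))
```

An application in another region could read from the secondary endpoint for lower-stakes reads, while all writes still go to the primary.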
We are migrating from an on-premises virtual machine to Azure cloud. The virtual machine will eventually be decommissioned and we have many files and folders that we don't want to lose, like old websites and databases, scripts, programs etc.
We use an Azure storage account for storing and retrieving images via blob containers for the live websites.
Q: What is the best and most cost-effective way to back up a large amount of files that are unused in production and rarely accessed, from an on-premises virtual machine to the Azure cloud?
Changing the access tier to Archive (if storing data in blobs) would be your best option. A few notes:
The Archive tier is only available at the blob level, not at the storage account level.
Archive storage is offline and offers the lowest storage costs but also the highest access costs.
Hot, Cool, and Archive tiers can be set at the object level.
Additional info can be found here: https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blob-storage-tiers
The recommendation would be to move those unused files to the Azure Archive storage tier, which is cost effective; keep in mind that archived blobs must be rehydrated (to Hot or Cool) before they can be read again.
https://azure.microsoft.com/en-us/services/storage/archive/
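To see why Archive tends to win for rarely accessed data, here is a back-of-envelope Python sketch comparing yearly Hot vs. Archive cost. All prices are invented placeholders for illustration; check the Azure Blob Storage pricing page for real figures:

```python
# Back-of-envelope comparison of Hot vs Archive storage cost for data
# that is rarely read. All prices below are assumptions, not real rates.

HOT_PER_GB_MONTH = 0.018      # assumed Hot tier price, USD
ARCHIVE_PER_GB_MONTH = 0.002  # assumed Archive tier price, USD
REHYDRATE_PER_GB = 0.02       # assumed Archive read/rehydration price, USD

def yearly_cost(gb: float, per_gb_month: float,
                reads_gb_per_year: float = 0.0, read_price: float = 0.0) -> float:
    """Storage cost for 12 months plus the cost of any reads."""
    return gb * per_gb_month * 12 + reads_gb_per_year * read_price

hot = yearly_cost(1000, HOT_PER_GB_MONTH)
archive = yearly_cost(1000, ARCHIVE_PER_GB_MONTH,
                      reads_gb_per_year=50, read_price=REHYDRATE_PER_GB)
print(round(hot, 2), round(archive, 2))
```

The crossover point depends on how often the data is actually read back: frequent reads make the higher Archive access cost dominate.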
I do not understand how to find my usage stats on Azure Blob Storage. Egress and ingress show data volume, not reads/writes, and I do not think this necessarily reflects data operations, because there is no way something is downloading 20 GB of data a day from the blob storage (that is how much egress it shows). Pricing, on the other hand, is all about read/write operations.
I want to find the usage statistics on my blob storage so I can adapt the storage strategy: put the relevant stuff in hot/cool storage and archive things appropriately. I need practical data for analysis.
The metrics in the portal are mostly error counts.
Azure Storage Analytics provides more detailed metrics (aggregated per minute and per hour) about the usage of all services in the storage account (Blob, File, Table, and Queue), such as:
user;GetBlob -> TotalRequests, TotalBillableRequests, TotalIngress, TotalEgress, Availability, etc.;
Find more details at https://learn.microsoft.com/en-us/azure/storage/common/storage-analytics.
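As a sketch of what you can do with those metric rows, the snippet below aggregates egress per operation. The rows are made-up sample data shaped like the hourly transactions metrics table (the field names follow the metrics schema; the values are invented):

```python
# Sketch: find which blob operations drive egress, from Storage
# Analytics metric rows. Sample data is invented for illustration.

rows = [
    {"RowKey": "user;GetBlob",   "TotalRequests": 1200, "TotalEgress": 15_000_000_000},
    {"RowKey": "user;PutBlob",   "TotalRequests": 300,  "TotalIngress": 2_000_000_000},
    {"RowKey": "user;ListBlobs", "TotalRequests": 5000, "TotalEgress": 50_000_000},
]

# RowKey is "<user-or-system>;<operation>"; keep the operation part and
# convert egress from bytes to GB.
egress_by_op = {
    r["RowKey"].split(";", 1)[1]: r.get("TotalEgress", 0) / 1e9
    for r in rows
}
top = max(egress_by_op, key=egress_by_op.get)
print(top, egress_by_op[top])  # the operation to cache or tier first
```

Once you know which operation (and, via logging, which blobs) produces the traffic, you can decide what belongs in Hot vs. Cool vs. Archive.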
I've searched the web and contacted technical support yet no one seems to be able to give me a straight answer on whether items in Azure Blob Storage are backed up or not.
What I mean is, do I need to create a twin storage account as a "backup" and program copies of all content from one storage to another, or are the contents of a client's Blob Storage automatically redundantly backed up by Microsoft?
I know with AWS, storage is redundantly backed up via onsite drives as well as across other nodes in the cluster.
do I need to create a twin storage account as a "backup" and program copies of all content from one storage to another, or are the contents of a client's Blob Storage automatically redundantly backed up by Microsoft?
Yes, you will need to do backup manually. Azure Storage does not back up the contents of your storage account automatically.
Azure Storage does provide geo-redundant replication (provided you configure the redundancy level for your storage account as GRS or RA-GRS), but that is not backup. Once you delete content from your primary location, it will automatically be removed from the secondary (geo-redundant) location as well.
Both AWS (EBS) and Azure (Blob Storage) provide durability by replicating the data across different data centers. This is for high availability and durability of the data, which is what the cloud provider guarantees.
In order to ensure that your data is durable, Azure Storage has the ability to keep (and manage) multiple copies of your data. This is called replication, or sometimes redundancy. When you set up your storage account, you select a replication type. In most cases, this setting can be modified after the storage account is set up.
For more details, refer to the replication section in the documentation.
If you need to capture changes to the storage and restore previous versions (e.g. in situations like data corruption, or application features like restore points and backups), you need to take a snapshot manually. This is common to both AWS and Azure.
For more details on creating a snapshot of a blob in Azure, refer to the documentation.
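A minimal sketch of the manual "twin account" backup logic described above, assuming you have already listed blob names and ETags from both accounts (the dicts here are stand-ins for the Blob service's list operation):

```python
# Sketch of incremental backup planning between a primary storage
# account and a backup account: copy only what is missing or changed.
# The listings (name -> ETag) are stand-ins for real list-blobs output.

def plan_backup(primary: dict, backup: dict) -> tuple:
    """Return (blobs to copy to backup, blobs present only in backup)."""
    to_copy = {name for name, etag in primary.items()
               if backup.get(name) != etag}   # new or changed blobs
    only_in_backup = set(backup) - set(primary)
    return to_copy, only_in_backup

primary = {"img/a.png": "etag1", "img/b.png": "etag2"}
backup  = {"img/a.png": "etag1", "img/old.png": "etag9"}
to_copy, extras = plan_backup(primary, backup)
print(sorted(to_copy), sorted(extras))
```

Whether to delete the extras from the backup (a mirror) or keep them (retention) is a policy choice; a true backup usually keeps them, which is exactly what geo-replication does not do.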
I have two web apps on separate plans; each has multiple instances of large size (P3), and it says I get 250 GB of storage on P3.
I also have azure storage to store photos.
I want to know how Azure Storage relates to the web app plans... meaning, what if I reduce the web app to S3, where it's only 50 GB? How will that affect storage?
Also, do I get 50 GB for each instance or for the entire plan?
Thank you
Azure App Service plans represent the collection of physical resources used to host your apps.
App Service plans define:
Region (West US, East US, etc.)
Scale count (one, two, three instances, etc.)
Instance size (Small, Medium, Large)
SKU (Free, Shared, Basic, Standard, Premium)
If you scale down your App Service plan to S3, yes, you will get 50 GB of storage.
This storage holds all of your resources: deployment files, logs, etc.
You can only store data/files up to the storage available in the pricing tier that you choose. To increase the storage, you can scale up your pricing tier.
Also, note that increasing/decreasing instances simply changes the number of VM instances that run your app. You get only one storage allocation shared by all the instances, not individual storage per instance.
Before scaling based on instance count, consider that performance is affected by the pricing tier in addition to the instance count. Different pricing tiers can have different numbers of cores and amounts of memory, so they will give better performance for the same number of instances (this is scaling up or down).
For more details, you may refer to the Azure App Service plans in-depth overview and App Service pricing.
Hope this answers your questions.
App Service storage is completely different from Azure Storage (blobs/tables/queues).
App Service Storage
For a given tier size (e.g. S1), you get a specific amount of durable storage, shared across all instances of your web app. So, if you get 50GB for a given tier, and you have 5 instances, all 5 instances share that 50GB storage (and all see and use the same directories/files).
All files in your Web App's allocated storage are manipulated via standard file I/O operations.
App Service Storage is durable (meaning there's no single disk to fail, and you won't lose any info stored), until you delete your web app. Then all resources (including the allocated storage, in this example 50GB) are removed.
Azure Storage
Azure Storage, such as blobs, is managed completely independently of web apps. You must access each item in storage (a table, a queue, a blob / container) via REST or a language-specific SDK. A single blob can be as large as 4.75TB, far larger than the largest App Service plan's storage limit.
Unlike App Service / Web App storage, you cannot work with a blob with normal file I/O operations. As I mentioned already, you need to work via API/SDK. If, say, you needed to perform an operation on a blob (e.g. opening/manipulating a zip file), you would typically copy that blob down to working storage in your Web App instance (or VM, etc.), manipulate the file there, then upload the updated file back to blob storage.
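The copy-down/manipulate/upload pattern can be sketched as follows. The download and upload steps are placeholder comments (in practice they would use the Azure SDK or AzCopy); the local zip manipulation in the middle is ordinary file I/O and runs as-is:

```python
# Sketch of the "copy blob down, manipulate locally, upload back"
# pattern for a zip blob. Download/upload are placeholders; the local
# work uses only the standard library.

import os
import tempfile
import zipfile

workdir = tempfile.mkdtemp()
local_path = os.path.join(workdir, "archive.zip")

# 1. Download the blob to local_path (e.g. via the Azure SDK or AzCopy).
#    Here we fabricate a zip so the rest of the sketch is runnable.
with zipfile.ZipFile(local_path, "w") as zf:
    zf.writestr("readme.txt", "v1")

# 2. Manipulate the file locally with ordinary file I/O.
with zipfile.ZipFile(local_path, "a") as zf:
    zf.writestr("changelog.txt", "added a file")

with zipfile.ZipFile(local_path) as zf:
    names = sorted(zf.namelist())
print(names)

# 3. Upload local_path back to blob storage, overwriting the original.
```

The key point is step 2: the blob only becomes a normal file once it is on local (App Service or VM) storage; blob storage itself never exposes file I/O.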
Azure Storage is durable (triple-replicated within a region), but has additional options for replication to secondary regions, and even further, allowing for read-only access to the secondary region. Azure Storage also supports additional features such as snapshots, public access to private blobs (through Shared Access Policies & Signatures), and global caching via CDN. Azure Storage will remain in place even if you delete your Web App.
Note: There is also Azure File Storage (backed by Azure Storage), which provides a 5TB file share, and acts similarly to the file share provided by Web Apps. However: You cannot mount an Azure File Storage share with a Web App (though you can access it via API/SDK).