Can I use an Azure Function App storage account for other purposes like storing files in blob storage? - azure

Can I use the Azure Function App storage account for other purposes, like storing files in blob storage? If yes, is that in line with Microsoft guidelines, and will it cause any performance issues, especially when the blob storage grows to GBs?
I am close to production, so any suggestions, best practices, or solutions would be much appreciated.

Can I use the Azure Function App storage account for other purposes like
storing files in blob storage?
Yes, you can.
If yes, is that in line with Microsoft guidelines, and will it cause
any performance issues, especially when the blob storage grows to
GBs?
It depends. Each Azure Storage account has some pre-defined throughput limits. As long as you stay within those limits, you should be fine.
Having said this, ideally you should have a separate storage account. Considering that creating a storage account doesn't cost you anything until you perform transactions against it, you may be better off creating a separate account to store the data required by your application.
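A minimal sketch of that separation, using the Python SDK (pip install azure-storage-blob). The app-setting name and container name below are illustrative, not anything the Function App defines for you; the point is simply that application data goes to a connection string other than AzureWebJobsStorage.

```python
import os
from azure.storage.blob import BlobServiceClient

# Connection string of the *dedicated* data account, not AzureWebJobsStorage.
# "APP_DATA_STORAGE_CONNECTION_STRING" is an illustrative app-setting name.
data_account = BlobServiceClient.from_connection_string(
    os.environ["APP_DATA_STORAGE_CONNECTION_STRING"]
)

container = data_account.get_container_client("app-files")
if not container.exists():
    container.create_container()

# Upload a file to the dedicated account, leaving the Function's own
# storage account free to do its host/trigger bookkeeping.
with open("report.pdf", "rb") as f:
    container.upload_blob(name="reports/report.pdf", data=f, overwrite=True)
```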

Related

Limiting initial size of an Azure Storage container

I would like to know if there's a way to create an Azure storage container of a specific size, say 20 GB. I know it can be created without any restriction (I think up to 200 TB?), but can it be created with a specific size? What if I need that kind of setup? Like giving a user 20 GB initially, then at a later time increasing it to, say, 50? Is that possible?
Like, how do I create that boundary/limitation for a new user that signs up for my app?
Not possible with the service by itself. This should be a feature implemented in your app.
As mentioned in the other answer, it is not possible to do this with Blob Storage at the service level, and you will have to implement your own logic to calculate the size of the blob container.
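A minimal sketch of that "own logic", assuming one container per user and the Python SDK (pip install azure-storage-blob): list the blobs, sum their sizes, and enforce the quota in your application before accepting an upload. The connection string and container name are placeholders.

```python
from azure.storage.blob import ContainerClient

QUOTA_BYTES = 20 * 1024**3  # e.g. 20 GB allowance for this user

container = ContainerClient.from_connection_string(
    "<connection-string>", container_name="user-12345"
)

# Sum the size of every blob currently in the user's container
used_bytes = sum(blob.size for blob in container.list_blobs())

if used_bytes >= QUOTA_BYTES:
    raise RuntimeError("Upload rejected: user has exceeded the 20 GB quota")
```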
If restricting container size is the most important feature you are after, you may want to look at Azure File Storage. The equivalent of a blob container there is a File Share, and you can set the quota for a File Share and change it dynamically. The quota of a File Share can be any value between 1 GB and 5 TB (100 TB in the case of a Premium File Storage account) at the time of writing this answer.
Azure File Storage and Blob Storage are somewhat similar, but they are meant to serve different purposes. However, for simple object storage purposes you can use either of the two (File Storage is more expensive than Blob Storage though).
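A small sketch of the File Share quota approach described above, using the Python SDK (pip install azure-storage-file-share). The connection string and share name are illustrative; the quota values are the 20 GB / 50 GB figures from the question.

```python
from azure.storage.fileshare import ShareClient

share = ShareClient.from_connection_string(
    "<connection-string>", share_name="client-a"
)

# Create the share with an initial quota of 20 GB (the quota is specified in GiB)
share.create_share(quota=20)

# Later, grow the quota to 50 GB without touching the data in the share
share.set_share_quota(quota=50)
```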

Azure StorageV2 public containers

We have stored 200,000+ images in a classic Azure blob account with standard performance. We include the blob URLs in the HTML of our application so the browser downloads the images directly from blob storage. However, this is really slow. A simple 2 KB image can take up to 200 ms to download. Download speeds are irregular.
I made a new storage account, now V2 with premium performance. However, now I can't make any public containers anymore. The portal returns the error: 'This is a premium 'StorageV2 (general purpose v2)' account. Containers within this storage account must be set to 'private' access level.'
How can I host images in an Azure environment with good performance without having to deploy them on my web role?
Azure Storage V2 with premium performance only supports the private access level. You should consider using a BlockBlobStorage account with premium performance in your case, which supports public access.
And here is the benefit of BlockBlobStorage accounts:
Compared with general-purpose v2 and BlobStorage accounts, BlockBlobStorage accounts provide low and consistent latency, and higher transaction rates.
Azure storage accounts have certain limits (like a 20,000 IOPS limit per account) which might interfere with performance at the scale you are talking about. A step you can take to check whether this is the root cause: split your images across several storage accounts and see if that fixes the performance.
Alternatively (and probably better), you should use Azure CDN attached to the storage account to fix this performance issue (and even make it faster).
https://learn.microsoft.com/en-us/azure/cdn/cdn-create-a-storage-account-with-cdn

Azure Blob Storage Cost Analysis

Last month we got a $5K bill from Azure for my production workload, $1,160 of it just for blob storage.
I have a single storage account for all my services (Functions, WebJobs, etc.). Within that storage account I'm only using Blob storage, and I didn't store any big files in that account.
I have many Functions and WebJobs processing data from Event Hubs and storing checkpoint information in block blobs. One of my Functions processes 15M requests per day and stores its checkpoint in a blob.
I re-visited the Microsoft documentation but was unable to break this cost down by my containers/areas. Basically, I want to understand the cost in terms of storage, ingress, egress and read/write operations so I can take appropriate action.
If the issue is still not resolved, for more specialized assistance the Azure Billing and Subscription team would be the best to provide more insight and guidance on this scenario: https://azure.microsoft.com/en-us/support/options/. It's free, and it's the best choice for this scenario.
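In the meantime, a rough back-of-the-envelope estimate can show why frequent small checkpoint writes (rather than stored gigabytes) often dominate a blob bill. The per-operation price below is an assumption for illustration only; it varies by region, tier, and account type, so substitute the "write operations" rate from your own pricing page or invoice.

```python
# Rough estimate of transaction costs from frequent checkpoint writes.
requests_per_day = 15_000_000      # from the question: 15M requests/day
writes_per_request = 1             # ASSUMPTION: one checkpoint write per request
price_per_10k_writes = 0.05        # ASSUMED USD rate per 10,000 write operations

writes_per_month = requests_per_day * writes_per_request * 30
monthly_write_cost = writes_per_month / 10_000 * price_per_10k_writes

print(f"~{writes_per_month:,} write operations/month "
      f"= ${monthly_write_cost:,.2f} at the assumed rate")
# ~450,000,000 write operations/month = $2,250.00 at the assumed rate
```

Even at a fraction of the assumed rate, hundreds of millions of tiny writes per month add up, which is typically where checkpoint-heavy workloads see their blob costs.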

Azure Blob Storage: Does Microsoft Implement Redundant Backups?

I've searched the web and contacted technical support yet no one seems to be able to give me a straight answer on whether items in Azure Blob Storage are backed up or not.
What I mean is, do I need to create a twin storage account as a "backup" and program copies of all content from one storage to another, or are the contents of a client's Blob Storage automatically redundantly backed up by Microsoft?
I know with AWS, storage is redundantly backed up via onsite drives as well as across other nodes in the cluster.
do I need to create a twin storage account as a "backup" and program
copies of all content from one storage to another, or are the contents
of a client's Blob Storage automatically redundantly backed up by
Microsoft?
Yes, you will need to do backup manually. Azure Storage does not back up the contents of your storage account automatically.
Azure Storage does provide geo-redundant replication (provided you configure the redundancy level for your storage account as GRS or RA-GRS), but that is not backup. Once you delete content from your primary account (location), it will automatically be removed from the secondary account (geo-redundant location) as well.
Both the AWS (EBS) and Azure (Blob Storage) options provide durability by replicating the data across different data centers. This is for high availability and durability of the data, as part of the guarantee provided by the cloud provider.
In order to ensure that your data is durable, Azure Storage has the
ability to keep (and manage) multiple copies of your data. This is
called replication, or sometimes redundancy. When you set up your
storage account, you select a replication type. In most cases, this
setting can be modified after the storage account is set up.
For more details, refer to the replication section in the documentation.
If you need to capture changes to the storage and allow restoring to previous versions (e.g., in situations like data corruption, or application feature requirements like restore points and backups), you need to take a snapshot manually. This is common to both AWS and Azure.
For more details on creating a snapshot of a blob in Azure, refer to the documentation.
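A minimal sketch of taking such a manual snapshot with the Python SDK (pip install azure-storage-blob); the connection string, container, and blob names are placeholders.

```python
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client(container="data", blob="important.json")

# Create a read-only, point-in-time snapshot of the blob
snapshot_props = blob.create_snapshot()
print("Snapshot created at:", snapshot_props["snapshot"])

# Snapshots show up when listing with include="snapshots"
container = service.get_container_client("data")
for item in container.list_blobs(include="snapshots"):
    if item.snapshot:
        print("Snapshot of", item.name, "taken at", item.snapshot)
```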

Is this a sensible Azure Blob Storage setup and are there restructuring tools to help me migrate to it?

I think we have gone slightly wrong in the way we have used Azure storage in a SaaS system. We created a storage account per client (security was the prime consideration) and containers per system area, e.g. Vehicle, Work, etc.
Having done further reading, it seems a suggestion would be that we should have used one account for all clients. Each client should have a container (so we can programmatically create it) which we then secure. Files should then just be structured using a "virtual" folder structure, e.g. a container called "Client A", with files for the Jobs (in the Work area of the system) stored like Work/Jobs/{entity id}/blah.pdf. Does this sound sensible?
If so, we now have about 10 accounts that we need to restructure. Are there any tools that will let us easily copy one account's contents to another account's containers? I appreciate we probably can't move the files between accounts (as we set them up ages ago, so can't use the native copy function), so I guess some sort of copy. There are GBs of files across all the accounts.
It may not be such a bad idea to keep different storage accounts per client. The benefits of doing that (to me) are:
Better security as mentioned by you.
You'll be able to achieve better throughput / client as each client will have their own storage account. If you keep one storage account for all clients, and if one client starts hitting that account badly other clients will be impacted.
Better scalability. Each storage account can hold up to 200 TB of data. So if you keep just one storage account and assuming each client consumes 100 GB of data, you'll be able to accommodate only 2000 clients (I hope my math is right :)). With individual storage accounts, you won't be restricted in that sense.
There're some downsides as well. Some of them are:
Management would be a nightmare. Imagine you have 2000 customers; you would then end up managing 2000 storage accounts.
You may be limited by Windows Azure. Currently, by default you get about 10 or 20 storage accounts per subscription, and you would need to contact support to manually raise that limit. They can do that for you, but I would imagine you would want this to be a self-service model where you can create as many storage accounts as you want without contacting support.
Now coming to your question about tooling, you could possibly write something on your own which makes use of the Copy Blob functionality. This functionality allows you to copy blob data across storage accounts asynchronously. Basically, this is what you would do (a rough sketch follows the steps):
First create a blob container for each client in the target storage account.
Enumerate all blob containers in source storage account.
For each blob container in source storage account, enumerate the blobs.
Copy each blob asynchronously to target storage account in the client's blob container.
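A rough sketch of those four steps with the Python SDK (pip install azure-storage-blob). The account names, keys, and connection strings are placeholders, and because a cross-account copy needs the source to be readable, the sketch attaches a short-lived read SAS to each source URL. Copy Blob runs server-side and asynchronously, so the loop only starts the copies.

```python
from datetime import datetime, timedelta, timezone
from azure.storage.blob import (
    BlobServiceClient, BlobSasPermissions, generate_blob_sas
)

SOURCE_CONN = "<source-account-connection-string>"
TARGET_CONN = "<target-account-connection-string>"
SOURCE_ACCOUNT = "<source-account-name>"
SOURCE_KEY = "<source-account-key>"

source = BlobServiceClient.from_connection_string(SOURCE_CONN)
target = BlobServiceClient.from_connection_string(TARGET_CONN)

# Steps 1 & 2: enumerate source containers and make sure each exists in the target
for container in source.list_containers():
    target_container = target.get_container_client(container.name)
    if not target_container.exists():
        target_container.create_container()

    # Step 3: enumerate the blobs in this source container
    source_container = source.get_container_client(container.name)
    for blob in source_container.list_blobs():
        # Grant the target account temporary read access to the source blob
        sas = generate_blob_sas(
            account_name=SOURCE_ACCOUNT,
            container_name=container.name,
            blob_name=blob.name,
            account_key=SOURCE_KEY,
            permission=BlobSasPermissions(read=True),
            expiry=datetime.now(timezone.utc) + timedelta(hours=4),
        )
        source_url = (
            f"https://{SOURCE_ACCOUNT}.blob.core.windows.net/"
            f"{container.name}/{blob.name}?{sas}"
        )
        # Step 4: kick off the asynchronous server-side copy
        target_container.get_blob_client(blob.name).start_copy_from_url(source_url)
```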
If you're a PowerShell fan, you can look into Cerebrata's Azure Management Cmdlets (http://www.cerebrata.com/Products/AzureManagementCmdlets) as well, which wrap this functionality. I could have recommended Cerebrata's Azure Management Studio too, but I haven't tried this functionality there just yet. [Disclosure: I'm one of the devs on the Cerebrata team.]
Hope this helps.
Adding to Gaurav Mantri answer...
You can have a shared storage account for customers and use Shared Access Signatures (SAS) to limit access to a particular container or blobs (as well as tables and queues), as sketched below...
http://msdn.microsoft.com/en-us/library/windowsazure/hh508996.aspx
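A minimal sketch of that shared-account approach with the Python SDK (pip install azure-storage-blob): one container per client, and a container-scoped SAS handed to each client so it can only touch its own container. The account name/key and container names are illustrative.

```python
from datetime import datetime, timedelta, timezone
from azure.storage.blob import ContainerSasPermissions, generate_container_sas

ACCOUNT_NAME = "<shared-account-name>"
ACCOUNT_KEY = "<shared-account-key>"

def client_sas_url(client_container: str) -> str:
    """Return a URL that lets one client read/write/list only its own container."""
    sas = generate_container_sas(
        account_name=ACCOUNT_NAME,
        container_name=client_container,
        account_key=ACCOUNT_KEY,
        permission=ContainerSasPermissions(read=True, write=True, list=True),
        expiry=datetime.now(timezone.utc) + timedelta(hours=1),
    )
    return f"https://{ACCOUNT_NAME}.blob.core.windows.net/{client_container}?{sas}"

print(client_sas_url("client-a"))
```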
