Cost of migration to Azure File Storage

I wanted to check with you all to confirm some final numbers.
I would like to migrate around 20 TB of data (around 40 million files) into Azure File Share or Blob.
I am getting a little confused by the Azure cost estimator, since it talks about Azure transaction/operation fees and storage at rest, but not the cost of migrating data from an on-prem file server to Azure Files.
Am I wrong to assume all data migrations to Azure Files (Hot, Cool, Transaction Optimized) and Azure Blob (Archive or Cool) are free of charge, and that costs only start after the migration is completed and users start interacting with the data?
Please let me know what the correct answer is here.
Thank you all in advance!

There's nothing special about migration: if you copy files from on-prem to Azure, the normal billing meters apply. Inbound bandwidth (ingress) is free, but every write is a billable transaction, so copying 40 million files means at least 40 million write operations. You could use something like Azure Data Box to migrate the data, but that's a separate service with its own billing.
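For a back-of-envelope feel, the transaction side of a migration like the one described above can be sketched as follows. The per-10,000-operations price here is an illustrative assumption, not a quoted rate; check the Azure pricing page for your region, tier, and service.

```python
# Sketch: rough write-transaction cost for migrating ~40M files.
# The price below is a placeholder assumption, NOT a current Azure list price.

def migration_write_cost(num_files, price_per_10k_writes):
    """Cost of the write transactions needed to upload num_files objects."""
    return (num_files / 10_000) * price_per_10k_writes

# ~40 million files at an assumed $0.05 per 10,000 write operations:
cost = migration_write_cost(40_000_000, 0.05)
print(f"Estimated write-transaction cost: ${cost:,.2f}")  # $200.00
```

The point is that transaction cost scales with file count, not data size, which is why a 40-million-file migration can cost noticeably more in operations than a 20 TB migration of a few thousand large files.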

Related

Is there something like transfer acceleration in Azure Blob Storage?

I would like to create an Azure Storage Account, and use blob storage in the region US West.
However, my business need is to upload/download files from all over the world, not just US West.
When I download/upload files from India or other places far from US West, there is severe performance degradation.
For downloads I could use a geo-redundant read replica. This partially solves the problem, but it increases the cost significantly, and replication takes several minutes, which does not fit my requirements.
In AWS S3 storage, there is a feature called transfer acceleration. Transfer acceleration speeds up the uploads/downloads by optimizing the routing of packets. Is there any similar feature in Azure?
You may use AzCopy (a command-line utility for copying blobs or files to or from a storage account), Fast Data Transfer, or Azure Data Factory (a fully managed, serverless data integration service for ingesting, preparing, and transforming data at scale).
High-Throughput with Azure Blob Storage
You should look at the Azure Storage Data Movement Library: https://azure.microsoft.com/en-us/blog/introducing-azure-storage-data-movement-library-preview-2/
You should also take into account the performance guidelines from the Azure Storage team: https://azure.microsoft.com/en-us/documentation/articles/storage-performance-checklist/
This article provides an overview of some of the common Azure data transfer solutions. The article also links out to recommended options depending on the network bandwidth in your environment and the size of the data you intend to transfer. Choose an Azure solution for data transfer
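Much of the throughput gain from tools like AzCopy and the Data Movement Library comes from client-side parallelism: a blob is split into blocks that are uploaded concurrently. This is a minimal sketch of that block-splitting idea only (the real tools handle retries, commit, and concurrency for you); the block size is an assumption for illustration.

```python
# Sketch: splitting a payload into blocks that could be PUT concurrently.
# Real tools (AzCopy, Data Movement Library) do this automatically.

def block_ranges(total_size, block_size):
    """Yield (offset, length) pairs covering total_size in block_size chunks."""
    offset = 0
    while offset < total_size:
        length = min(block_size, total_size - offset)
        yield (offset, length)
        offset += length

# A 10 MiB payload in 4 MiB blocks -> three blocks, uploadable in parallel:
ranges = list(block_ranges(10 * 1024 * 1024, 4 * 1024 * 1024))
print(ranges)
```

Uploading those blocks over several connections at once is what hides per-request latency on long-haul links (e.g. India to US West), which is the same effect S3's transfer acceleration targets.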

Azure Blob Storage Cost Analysis

Last month we got a $5K bill from Azure for my production workload, $1,160 of it for blob storage alone.
I have a single storage account for all my services (Functions, WebJobs, etc.). Under that storage account I'm only using Blob, and I don't store any big files there.
I have many Functions and WebJobs processing data from Event Hubs and storing checkpoint information in block blobs. One of my Functions processes 15M requests per day and stores its checkpoint in a blob.
I revisited the Microsoft documentation but was unable to break this cost down by container/area. Basically, I want to understand the cost by storage, ingress, egress, and read/write operations so I can take appropriate action.
If you still can't reconcile the charges, the Azure Billing and Subscription team would be best placed to provide more insight and guidance on this scenario: https://azure.microsoft.com/en-us/support/options/. It's free, and it's the best choice for billing-specific questions.
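A quick sanity check on the workload described above: checkpointing per request turns request volume directly into write transactions. This sketch uses an assumed per-10,000-writes price (not a current Azure rate) just to show the shape of the math.

```python
# Sketch: monthly blob transaction cost for a per-request checkpoint pattern.
# Unit price is a placeholder assumption, NOT a current Azure list price.

def monthly_transaction_cost(requests_per_day, price_per_10k, days=30):
    """Cost of one write transaction per request over a billing month."""
    operations = requests_per_day * days
    return operations * price_per_10k / 10_000

# 15M checkpoint writes/day at an assumed $0.05 per 10,000 write operations:
cost = monthly_transaction_cost(15_000_000, 0.05)
print(f"${cost:,.2f} / month")
```

Even at fractions of a cent per operation, 450M writes a month adds up, which is consistent with a four-digit blob bill despite storing almost no data. Batching checkpoints (e.g. writing every N events instead of every event) divides this cost by N.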

Azure: how to count storage transactions

Can anyone explain how to count storage transactions?
For example, I need storage for 10 GB, with a daily incremental of about 100 MB.
How do I count the transactions?
The Azure Storage team published a blog post on this a while back: http://blogs.msdn.com/b/windowsazurestorage/archive/2010/07/09/understanding-windows-azure-storage-billing-bandwidth-transactions-and-capacity.aspx. To understand how you're going to be charged for using Azure Storage, I would highly recommend reading it.
Azure Storage also provides detailed analytics on the operations performed against your storage account. You can find information about transactions by looking at the storage analytics data; this link may be helpful: http://blogs.msdn.com/b/windowsazurestorage/archive/tags/analytics+2d00+logging+_2600_amp_3b00_+metrics/.
Every single access to storage counts as one transaction, even from within the data center (e.g. web app to storage). Then you just have to calculate an average.
Transactions include both read and write operations to storage.
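To make the 100 MB daily incremental concrete: if it lands as a block blob uploaded in chunks, each Put Block call is one transaction and the final Put Block List (which commits the blob) is one more. The 4 MB block size here is an illustrative assumption; clients choose their own block sizes.

```python
import math

# Sketch: write transactions to upload one blob of a given size,
# assuming an illustrative 4 MB block size.

def upload_transactions(bytes_to_upload, block_size=4 * 1024 * 1024):
    """Put Block calls plus one Put Block List to commit the blob."""
    put_blocks = math.ceil(bytes_to_upload / block_size)
    return put_blocks + 1

daily = upload_transactions(100 * 1024 * 1024)
print(daily, "write transactions for one 100 MB blob")  # 26
```

Reads work the same way: every GetBlob, ListBlobs, or metadata call is one transaction, so the total depends far more on access patterns than on the 10 GB at rest.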

Azure storage metrics data

I am trying to implement Azure storage metrics code in my role, but I'm checking whether there is an easy way to get Azure storage metrics data about my file usage. My code is stable and I do not want to change it again.
Actually, if you already have a Windows Azure role running, you don't need to make any changes to your code and you can still get Windows Azure Blob storage metrics data.
I wrote a blog post about this a while back: Collecting Windows Azure Storage REST API level metrics data without a single line of programming, just by using tools.
Please try the above and see if it works for you.
Storage analytics is disabled by default, so operations against your storage up until now have not been logged for analysis.
You may choose to enable analytics at any time, for both logging (detailed access information for every single object) and metrics (hourly rollups). Further, you may choose which specific storage service to track (blobs, tables, queues) and which operations to track (read, write, delete). Once analytics are enabled, you may access the resulting analytics data from any app (as long as you have the storage account name + key).
Persistent Systems just published a blog post on enabling storage analytics for Java apps. The same principles may be applied to a .net app (and the sdk's are very similar).
Additionally, Full Scale 180 published a sample app encapsulating storage analytics (based on REST API, as it was written before SDK v1.6 came out).
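Once metrics are enabled, the hourly rollups give you per-API transaction counts that you can aggregate yourself. The row shape below is a simplified stand-in for the real metrics table (the actual schema has many more columns); only the idea of summing per-operation counts into a billable total is the point.

```python
# Sketch: aggregating hourly metrics rollups into a billable transaction total.
# These rows are illustrative stand-ins for real storage-metrics records.

rows = [
    {"api": "GetBlob", "total_requests": 1200},
    {"api": "PutBlob", "total_requests": 300},
    {"api": "ListBlobs", "total_requests": 45},
]

billable = sum(row["total_requests"] for row in rows)
print(billable, "transactions this hour")  # 1545
```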

Azure blobs, what are they for?

I'm reading about Azure blobs and storage, and there are things I don't understand.
First, you can use Azure for just hosting, but when you create a web role... do you need storage for the .dlls and other files (.js and .css)? Or is there a small storage quota in a worker role you can use, and how large is it? I can't accept getting charged every time a browser downloads a CSS file, so I guess I can store those things in another kind of storage.
Second, you get charged for transactions and bandwidth, so it's not a good idea to provide direct links to blobs on your website. Then what do you do? Read from storage in your website code and write to the client output stream on the fly from ASP.NET? I think I've read that internal traffic/transactions are free, so that looks like a too-good-to-be-true solution :D
Is the traffic between hosting and storage also free?
Thanks in advance.
First, to answer your main question: blobs are best used for dynamic data files. If you ran a YouTube-sorta site, you would use blobs to store the videos in every compression state, plus the thumbnail images generated from those videos. Tables within table storage are best for dynamic data that does not involve files; comments on YouTube videos, for example, would likely best be stored as rows in Azure Table Storage.
You generally want a storage account for at least two things: publishing your deployments into Azure, and having your compute nodes transfer their diagnostic data to it, for when you're deployed and need to monitor them.
Even though you publish your deployments through a storage account, the deployment code lives on your compute nodes. .css/.html files served by your app are served from your node's own storage space, of which you get plenty (it is not a good place for your dynamic data, however).
You pay for traffic/data that crosses the Azure data center boundary, regardless of where it came from. Furthermore, transactions (reads or writes) between your Azure Table Storage and anywhere else are not free. You also pay for storing data in the storage account (storing data on the compute nodes themselves is not metered). Data that does not leave the data center is not subject to transfer fees. In reality the costs are so low that you have to be pushing gigabytes per day before you start noticing.
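To put "so low you have to be pushing gigabytes per day" in numbers: serving a small static file directly from blob storage incurs egress bandwidth plus one read transaction per download. The unit prices below are illustrative assumptions, not current Azure list prices.

```python
# Sketch: cost of serving a 20 KB CSS file one million times straight from
# blob storage. Unit prices are placeholder assumptions, NOT current rates.

def serving_cost(downloads, file_bytes, egress_per_gb, read_per_10k):
    """Egress bandwidth cost plus read-transaction cost."""
    gb_out = downloads * file_bytes / 1024**3
    return gb_out * egress_per_gb + downloads / 10_000 * read_per_10k

cost = serving_cost(1_000_000, 20 * 1024, egress_per_gb=0.08, read_per_10k=0.004)
print(f"${cost:.2f}")  # $1.93
```

So even a million CSS downloads costs a couple of dollars under these assumed rates, which is why linking directly to blobs is usually fine in practice.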
Don't store dynamic data only on compute instances. That data will be purged whenever you redeploy your app or whenever Azure decides to move your app onto a different node.
Hope this helps
