How would Azure storage be billed?

The Azure pricing page says:
[Storage, measured in GB](http://www.windowsazure.com/en-us/pricing/details/)
Storage is billed in units of the average daily amount of data stored (in GB) over a monthly period. For example, if you consistently utilized 10 GB of storage for the first half of the month and none for the second half of the month, you would be billed for your average usage of 5 GB of storage.
I don't clearly understand the term "utilized" here. Let's say I have 10 GB of data in my Azure Table storage, and only 1 GB (out of the 10 GB) is actually read during the month. In this case, will I be paying based on the storage space I've been using (i.e., 10 GB) or based on the data I have actually read (i.e., 1 GB)?

Azure Storage uses three "knobs" to measure your costs: transactions, outbound bandwidth, and storage.
In your example, this would be 10 GB of data stored, 1 GB of outbound bandwidth (assuming the consumers are outside the Azure datacenter that hosts your application), and any transactions (the REST requests/responses used to retrieve information from Storage) you need to get at the data.
However, the "average daily amount stored" only refers to storage (i.e., data at rest). It is simply measured daily, and an average for the billing period is calculated. If I recall correctly what I've been told in the past, the "daily" amount is the peak you had on any given day, but please don't hold me to that part.
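To make the averaging concrete, here is a minimal sketch of the calculation described above; the $0.10/GB rate and the end-of-day sampling are assumptions for illustration, not Azure's actual metering:

```python
# Minimal sketch of average-daily-capacity billing, per the quoted pricing
# text. The rate and end-of-day sampling are illustrative assumptions.
def monthly_storage_cost(daily_gb, price_per_gb_month=0.10):
    """daily_gb: GB stored at the end of each day of the billing month."""
    average_gb = sum(daily_gb) / len(daily_gb)
    return average_gb * price_per_gb_month

# 10 GB for the first 15 days, 0 GB for the last 15 -> 5 GB average
days = [10.0] * 15 + [0.0] * 15
print(f"${monthly_storage_cost(days):.2f}")  # $0.50
```

Note that the 1 GB you actually read shows up in the bandwidth and transaction meters, not in the storage meter.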
I would suggest reading the following Azure Storage Team blog post to understand billing thoroughly:
http://blogs.msdn.com/b/windowsazurestorage/archive/2010/07/09/understanding-windows-azure-storage-billing-bandwidth-transactions-and-capacity.aspx


What is "Compute Cost / vCore / Second: 0.000175" in Azure SQL Database?

I am new to Azure SQL Database. Can anyone tell me what the line item "Compute Cost / vCore / Second: 0.000175" means?
I want to know the total monthly bill for my Azure account.
Perhaps this MS Learn document might be of use.
I believe you define a minimum and a maximum number of vCores, and the database automatically scales between them based on demand. vCores are billed per second of usage; when there are no requests and no usage, no vCores are active.
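For a rough sense of what that rate means in practice, here is a small sketch; only the $0.000175 rate comes from the question, and the vCore count and active time are made-up numbers:

```python
# Per-vCore, per-second serverless compute billing.
# The rate is the one quoted in the question; the workload is invented.
RATE_PER_VCORE_SECOND = 0.000175

vcores = 4                  # assumed: database scaled up to 4 vCores
active_seconds = 2 * 3600   # assumed: 2 hours of activity in a day

daily_compute_cost = vcores * active_seconds * RATE_PER_VCORE_SECOND
print(f"${daily_compute_cost:.2f}")  # $5.04 for that day
```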

Charged for Publishing Azure Functions from Visual Studio?

Do you get billed in Azure each time you publish an Azure Function from Visual Studio?
The short answer is yes, you get charged for publishing an Azure Function from Visual Studio. But meaningfully charged each time? Not really.
So let's look at how that works. Although Azure Functions offers you 1,000,000 executions per month (subject to execution time and memory), your code has to live somewhere, and that somewhere is the Function's storage account.
Storage Accounts pricing could be broken down into two main costs:
Storage:
You pay for storage per month (pay-as-you-go) unless you are on a Premium storage plan. The first 50 TB of blob data in the Hot access tier costs roughly $0.0184 to $0.0424 per GB, depending on where your data is hosted and its redundancy.
In your case, that cost is incurred once per month on the data you store, which will be far below the 50 TB threshold.
Transfer:
When you deploy via Visual Studio, you are effectively making API calls to write your data, and those are charged (again depending on your data's location and redundancy) per 10,000 operations. That includes every PutBlob, PutBlock, PutBlockList, AppendBlock, SnapshotBlob, CopyBlob, and SetBlobTier call that you or your function makes on that storage account. The cost varies from $0.05 to $0.091 per 10,000 operations.
Others:
Other costs may be incurred by features such as Blob Index, blob change feed, and encryption.
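To put rough numbers on it, here is a back-of-the-envelope sketch; the operations-per-publish figure is a guess for illustration, and only the per-10,000-operations rate comes from the pricing above:

```python
# Back-of-the-envelope cost of deploy-time write operations.
# WRITE_OPS_PER_PUBLISH is an assumption; the rate is the upper end
# of the range quoted above.
WRITE_OPS_PER_PUBLISH = 50
COST_PER_10K_OPS = 0.091

def publish_cost(publishes_per_month):
    total_ops = publishes_per_month * WRITE_OPS_PER_PUBLISH
    return total_ops / 10_000 * COST_PER_10K_OPS

print(f"${publish_cost(1_000):.2f}")  # well under $1 for 1,000 publishes
```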
Conclusion
Publishing your Function from Visual Studio contributes to the overall cost of the Function's storage account. However, the cost is very small (under $1) even if you published your function thousands of times every month.
For more information about Azure Blob Storage pricing visit https://azure.microsoft.com/en-us/pricing/details/storage/blobs/#pricing

Estimating Azure Search Cost

I'm fairly new to Azure platform and need some help with cost estimates for Azure Search service.
Every month we will have about 500 GB worth of files dropped into Azure Blob Storage. We would like to index these files using Azure Search, based on the file names only.
When I look at the Standard S2 pricing, it has the following text for storage:
"100 GB/partition (max 1.2 TB documents per service)". What does this mean? Does it mean that once my storage crosses 1.2 TB, I'll need to purchase another service?
Any help would be greatly appreciated.
If a tier's capacity turns out to be too low for your needs, you will need to provision a new service at the higher tier and then reload your indexes; there is no in-place upgrade of a service from one SKU to another. The documentation states:
Storage is constrained by disk space or by a hard limit on the maximum number of indexes, document, or other high-level resources, whichever comes first.
A service is provisioned at a specific tier. Jumping tiers to gain capacity involves provisioning a new service (there is no in-place upgrade). For more information, see Choose a SKU or tier. To learn more about adjusting capacity within a service you've already provisioned, see Scale resource levels for query and indexing workloads.
Check the document for more details on this topic.
At the time of writing, S2 offers 100 GB of storage per partition and up to 12 partitions per service, i.e., 12 × 100 GB = 1.2 TB maximum.
You could use Pricing Calculator (https://azure.microsoft.com/pricing/calculator/) for the cost estimation as well.
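As a quick sanity check of those limits, here is a minimal sketch; the index sizes passed in are made-up examples:

```python
import math

# Capacity check against the S2 limits quoted above:
# 100 GB per partition, up to 12 partitions (1.2 TB) per service.
PARTITION_GB = 100
MAX_PARTITIONS = 12

def partitions_needed(index_gb):
    return math.ceil(index_gb / PARTITION_GB)

print(partitions_needed(250))   # 3 partitions -> fits in one S2 service
print(partitions_needed(1500))  # 15 partitions -> exceeds 12, needs another service
```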
The storage limits for Azure Search refer to the size of the documents in the index, which can be bigger or smaller than your original blobs depending on your use case.
For example, if you want to do text searches on blob content then the index size will be bigger than your original blobs. Whereas if you only want to search for file names, then size of the original blobs becomes irrelevant and only the number of blobs will affect your Azure Search index size.
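A rough sketch of the file-names-only case (every number below is an assumption for illustration):

```python
# Very rough index-size estimate when only file names are indexed.
# Both figures are illustrative assumptions, not measurements.
blobs_per_month = 500_000      # assumed number of blobs behind 500 GB of files
bytes_per_indexed_name = 200   # assumed per-document index footprint

index_gb = blobs_per_month * bytes_per_indexed_name / 1024**3
print(f"~{index_gb:.2f} GB of index growth per month")  # ~0.09 GB
```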

Is Azure Outbound Bandwidth free for first 5 GB every month?

I am confused about Azure's outbound data-transfer (bandwidth) pricing. The official website says that the first 5 GB/month is free.
Suppose I have used 5 GB in January; will the counter reset in February and start counting 5 GB again? Is the first 5 GB free in every month? And is this allowance independent of the resource, i.e., Virtual Machines, App Services, etc.?
Yes, the first 5 GB of Outbound Data Transfer is free each month. This means any and all* outbound traffic from your Azure Resources.
So whether you're downloading data from Azure Storage, running a (data-intensive) app in a VM that sends out a lot of data, or downloading Azure SQL Database backups every night: you're consuming outbound data.
Please be advised that data leaving an Azure region counts as outbound traffic, so data you copy between Azure regions is also counted as outbound data.
*As you can see in the article you shared:
Bandwidth refers to data moving in and out of Azure data centers other than those explicitly covered by the Content Delivery Network or ExpressRoute pricing.
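A minimal sketch of how the monthly allowance applies; the per-GB rate is an assumption for illustration, so check the bandwidth pricing page for your region's actual rate:

```python
# Monthly egress cost with the first 5 GB free; the allowance resets each month.
FREE_GB_PER_MONTH = 5
RATE_PER_GB = 0.087  # assumed rate for illustration

def egress_cost(gb_out_this_month):
    billable = max(0, gb_out_this_month - FREE_GB_PER_MONTH)
    return billable * RATE_PER_GB

print(f"${egress_cost(5):.2f}")   # $0.00 -- entirely within the free 5 GB
print(f"${egress_cost(50):.2f}")  # roughly $3.92 -- 45 GB billable
```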

Azure: Why is it advised to use multiple storage accounts for a virtual machine scale set?

In the documentation for Virtual Machine Scale Sets it says
Spread out the first letters of storage account names as much as possible
I have a few questions about this:
Why should you use multiple Storage Accounts at all?
Why does Azure create 5 Storage Accounts if I create a new Virtual Machine Scale Set through the portal?
Why should I spread the first letters as much as possible?
The answer to this lies in the limits of Azure. If you look at the storage limits specifically, you will find that a storage account is capped at 20,000 IOPS:
Total request rate (assuming 1 KB object size) per storage account: up to 20,000 IOPS, entities per second, or messages per second.
So that means your scale set would effectively be capped at 20,000 IOPS, no matter how many VMs you put in it.
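A small sketch of that sizing argument (the per-disk IOPS figure matches the Standard-tier number quoted in the next answer; the VM count is an assumption):

```python
import math

# How many standard storage accounts a scale set needs to stay under
# the 20,000 IOPS per-account cap. DISK_IOPS = 500 is the Standard-tier
# figure cited below; the VM count is illustrative.
ACCOUNT_IOPS_CAP = 20_000
DISK_IOPS = 500

def accounts_needed(vm_count, disks_per_vm=1):
    total_iops = vm_count * disks_per_vm * DISK_IOPS
    return math.ceil(total_iops / ACCOUNT_IOPS_CAP)

print(accounts_needed(100))  # 3 accounts for 100 single-disk VMs
```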
As for the storage account naming, I have no clue, but looking at the templates they link to, they are not doing it:
"uniqueStringArray": [
"[concat(uniqueString(concat(resourceGroup().id, variables('newStorageAccountSuffix'), '0')))]",
"[concat(uniqueString(concat(resourceGroup().id, variables('newStorageAccountSuffix'), '1')))]",
"[concat(uniqueString(concat(resourceGroup().id, variables('newStorageAccountSuffix'), '2')))]",
"[concat(uniqueString(concat(resourceGroup().id, variables('newStorageAccountSuffix'), '3')))]",
"[concat(uniqueString(concat(resourceGroup().id, variables('newStorageAccountSuffix'), '4')))]"
],
I suspect this may be linked to how storage accounts are distributed among the nodes hosting them (say, accounts starting with 'A' are all hosted on the same cluster or on nearby clusters).
It's about avoiding throttling:
https://learn.microsoft.com/en-us/azure/storage/storage-scalability-targets
For standard storage accounts: A standard storage account has a maximum total request rate of 20,000 IOPS. The total IOPS across all of your virtual machine disks in a standard storage account should not exceed this limit.
You can roughly calculate the number of highly utilized disks supported by a single standard storage account based on the request rate limit. For example, for a Basic Tier VM, the maximum number of highly utilized disks is about 66 (20,000/300 IOPS per disk), and for a Standard Tier VM, it is about 40 (20,000/500 IOPS per disk).
There's no price difference between five storage accounts and one, so why not five?
If you create 5 storage accounts on different storage stamps (datacenter infrastructure), you have less chance of being throttled, and the traffic load is distributed better. So I think those are the reasons.
