Azure Blob storage - pricing and speed "within the data center"

I am quite sure I know the answer, just want to make sure I got this right.
From Azure In Action :
If I use the CloudBlobClient from a WCF service that sits in my WebRole to access blobs (read/write/update):
1) Are reads/writes/updates charged as transactions, or are they free?
2) Is accessing those blobs as fast as mentioned in the note?

If I use the CloudBlobClient from a WCF service that sits in my
WebRole to access blobs (read/write/update):
1) Are reads/writes/updates charged as transactions, or are they free?
Transaction metering is independent of where the requests are made from. Storage read/write/update is done via REST API calls (or through an SDK call that wraps the REST API calls). Each successful REST API call will effectively count as a transaction. Specific details of what constitutes a transaction (as well as what's NOT counted as a transaction) may be found here.
By accessing blob storage from your Worker / Web role, you'll avoid Internet-based speed issues, and you won't pay for any data egress. (Note: Data ingress to the data center is free).
2) Is accessing those blobs as fast as mentioned in the note?
Speed between your role instance and storage is governed by two things:
Network bandwidth. The DS and GS series have documented network bandwidth. The other sizes only advertise IOPS rates for attached disks.
Transaction rate. On a given storage account, there are very specific documented performance targets. This article breaks down the numbers in detail for a storage account itself, as well as targets for blobs, tables and queues.
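To make the first point concrete, here is a minimal sketch using the classic storage SDK the question mentions (Microsoft.WindowsAzure.Storage; the connection string, container and blob names are placeholders). Each SDK call wraps one REST request, so each counts as one transaction, while the traffic between the role instance and storage stays inside the data center and is not billed as egress:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class BlobTransactionDemo
{
    // Connection string, container and blob names are placeholders.
    static async Task Main()
    {
        CloudStorageAccount account = CloudStorageAccount.Parse("<your-connection-string>");
        CloudBlobClient client = account.CreateCloudBlobClient();

        CloudBlobContainer container = client.GetContainerReference("mycontainer");
        CloudBlockBlob blob = container.GetBlockBlobReference("example.txt");

        // Each call below wraps exactly one REST request against storage,
        // i.e. one billable transaction. Because the role instance and the
        // storage account are in the same data center, none of this traffic
        // is metered as egress bandwidth.
        await blob.UploadTextAsync("hello");          // Put Blob            -> 1 transaction
        string text = await blob.DownloadTextAsync(); // Get Blob            -> 1 transaction
        await blob.FetchAttributesAsync();            // Get Blob Properties -> 1 transaction

        Console.WriteLine(text);
    }
}
```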

Related

"Storage account management operations" and ClientThrottlingError

According to Storage account scale limits, each storage account in Azure can handle 20,000 requests per second.
But there are also Storage resource provider scale limits that restrict storage account management operations (read) to 800 requests per 5 minutes.
We seem to have reached the latter limit, and we are wondering what kind of operations count as storage account management operations. We had a few minutes of intermittent 503 responses in our production system this morning, with 2,600 GetBlob operations in 5 minutes.
Which operations count as Storage account management operations?
Does it matter whether we use BlobClient from the blob storage SDK, or HttpClient from .NET?
How do we read blob properties and metadata, and download blobs, to (possibly) achieve 20,000 requests per second?
Are there any other ideas on what can lead to throttling when the load isn't that high altogether?
UPDATE:
After communication with Microsoft support (the proper ones...), they informed us of the following:
The type of throttling you experienced is a partition throttling error. This type of error occurs when the client makes too many requests against the same partition server. When that happens and the partition server gets overloaded, it does internal load-balancing operations as part of the normal Azure Storage healing process.
When the partition being accessed suffers a load balancing operation (reassigning partitions to less loaded servers), the storage service returns 500 or 503 errors.
The limits I previously mentioned (the 800 reads for 5 minutes) are indeed for management operations and not for data ones. In your case, the GetBlob ones are data operations and are not covered by these hard limits. After analyzing the ingress/egress limit and also the transactions per second of your storage account, I verified that you also seem to be far away from hitting the threshold.
Just for the record and improved searchability: In Metrics these errors showed up as ClientOtherError and ClientThrottlingError.
Which operations count as Storage account management operations?
All the operations listed here are considered storage account management operations. Essentially, the operations you perform to manage the storage account itself (and not the data in it) are management operations.
Does it matter whether we use BlobClient from the blob storage SDK, or
HttpClient from .NET?
No. These operations deal with the data and are not considered management operations. They have a separate throughput limit.
How do we read blob properties and metadata, and download blobs to
(possibly) achieve 20,000 requests per second?
Please see answer to previous question.
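For what it's worth, here is a minimal sketch with the current Azure.Storage.Blobs SDK (connection string, container and blob names are placeholders). Reading properties/metadata and downloading a blob are data-plane Get Blob Properties / Get Blob calls, so they count against the storage account's request-rate target rather than the 800-per-5-minutes management limit; the SDK's retry options also help ride out the transient 500/503 responses that partition load balancing can produce:

```csharp
using System;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

class DataPlaneReads
{
    static async Task Main()
    {
        // Retry settings smooth over transient 500/503 responses, e.g. while a
        // partition is being load-balanced to another server.
        var options = new BlobClientOptions();
        options.Retry.MaxRetries = 5;
        options.Retry.Delay = TimeSpan.FromSeconds(1);

        var service = new BlobServiceClient("<your-connection-string>", options);
        var blob = service.GetBlobContainerClient("mycontainer").GetBlobClient("example.txt");

        // Data-plane calls: these are Get Blob Properties / Get Blob requests,
        // metered as storage transactions, not as management operations.
        BlobProperties props = await blob.GetPropertiesAsync();
        Console.WriteLine($"{props.ContentLength} bytes, metadata entries: {props.Metadata.Count}");

        await blob.DownloadToAsync("example.txt");
    }
}
```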

MS Azure Storage transactions

I'm confused about storage transactions in Azure Storage for page blobs. What does "storage transaction" mean in Azure Storage?
I want to use it to store all of my website images, but I do not know how many transactions I will need.
If I have 1,000 photos on my website and 1,000 guests visit my website every day, how many storage transactions are required to retrieve them from Azure Storage to the client?
And what about bandwidth? Most of my big files will be in Azure Storage; only a small amount of HTML and CSS will be in the Azure Web App, so I do not know whether I need to buy bandwidth, since most of the load goes through storage transactions.
The first area we would want to cover for transactions is what equals 1 transaction to Windows Azure Storage. Each and every REST call to Windows Azure Blobs, Tables and Queues counts as 1 transaction (whether that transaction is counted towards billing is determined by the billing classification discussed later in this posting). The REST calls are detailed here:
Blobs
Table
Queues
Each one of the above REST calls counts as 1 transaction. This includes the following types of requests:
Query/List Requests and Continuation Tokens – A Table Query, and Listing Blob Containers, Tables and Queues can return continuation tokens. This means that the query/listing must be continued to complete it. As described above, each REST request to the storage service counts as 1 transaction. Therefore, each continuation of the query/list counts as an additional 1 transaction, since it is another REST request to the storage service.
For more information refer: https://blogs.msdn.microsoft.com/windowsazurestorage/2010/07/08/understanding-windows-azure-storage-billing-bandwidth-transactions-and-capacity/
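To put rough numbers on the original question (an estimate only, assuming no browser caching or CDN in front of storage, and using the penny-per-100,000-transactions rate quoted elsewhere on this page; check current pricing): if each of 1,000 daily visitors pulls all 1,000 images directly from blob storage, that is up to 1,000 x 1,000 = 1,000,000 Get Blob transactions per day, or roughly 30,000,000 per month, which at $0.01 per 100,000 transactions works out to about $3 per month in transaction charges. Outbound bandwidth from storage to the visitors' browsers is metered separately.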

Does accessing blob storage within Azure use bandwidth?

I am thinking about creating an Azure App Service that accesses files from an Azure Storage container, manipulates the files, and then returns the result to the end user. Does Azure consider transferring data from the storage blob to the App Service as bandwidth usage? I am wondering if doing this will incur a charge two times for every operation: once for blob -> App Service and again for App Service -> end user.
Azure uses internal bandwidth across its service fabric, so there is no charge for bandwidth utilization. However, any reads/writes are transactions against storage, and there is a (nominal) cost. You can use the Azure pricing calculator, based on your region, to determine approximate costs for data storage + storage transactions. https://azure.microsoft.com/en-us/pricing/details/storage/
There is no bandwidth charge as long as the data remains within a single Azure region.
As Neil mentioned, bandwidth within a region is not metered (between any services). You'll still be metered for outbound bandwidth from Web App to end user. And if you download blobs from storage in a different region to your Web App, that bandwidth is metered.
Also, if you ever choose to download direct from blob to end-user, that outbound bandwidth is also metered.
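A minimal sketch of that flow (ASP.NET Core minimal API with Azure.Storage.Blobs; the configuration key, container name and route are placeholders), assuming the storage account is in the same region as the App Service:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Azure.Storage.Blobs;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Storage account assumed to be in the SAME region as this App Service.
var service = new BlobServiceClient(builder.Configuration["StorageConnectionString"]);

app.MapGet("/image/{name}", async (string name) =>
{
    var blob = service.GetBlobContainerClient("images").GetBlobClient(name);

    // Blob -> App Service: one storage transaction; no bandwidth charge,
    // because the traffic stays inside the region.
    var download = await blob.DownloadContentAsync();

    // App Service -> end user: this response is outbound bandwidth and IS metered.
    return Results.Bytes(download.Value.Content.ToArray(), download.Value.Details.ContentType);
});

app.Run();
```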

Azure Cloud Role included storage = extra costs?

I'm currently working out the cost analysis for my upcoming Azure project. I am tempted to use an Azure Cloud Role, because it has a certain amount of storage included in the offer. However, I have the feeling that it is too good to be true.
Therefore, I was wondering: do you have to pay transaction costs/storage costs on this "included" storage? I can't find any information about this on the Azure website, and I want to be as accurate as possible (even if the cost of transactions is almost nothing).
EDIT:
To clarify, I specifically want to know about the transaction costs on the storage. Do you have to pay a small cost per transaction on the storage (like with Blob/Table storage), or is this included in the offer as well?
EDIT 2:
I am talking about the storage included with Cloud Services (web/worker roles), not separate Table/Blob storage.
Can you clarify which offer you're referring to?
With Cloud Services (web/worker roles), each VM instance has some local storage associated with it, which is free of charge and, because it's a local disk, there are no transactions or related fees associated with this storage. As Rik pointed out in his answer, that data is not durable: it's on a single disk and will be gone forever if, say, the disk crashes.
If you're storing data in Blobs, Tables, or Queues (Windows Azure Storage), then you pay per GB ($0.095 per GB per month for geo-redundant storage, or $0.07 per GB per month for locally-redundant storage), and a penny per 100,000 transactions. And as long as your storage account is in the same data center as your Cloud Service, there are no data egress fees.
Now we come back to the question of which offer you're referring to. The free 90-day trial, for instance, comes with 70GB of Windows Azure Storage, and 50M transactions monthly included. MSDN subscriptions come with included storage and transactions as well. If you're just working with a pay-as-you-go subscription, you'll pay for storage plus transactions.
The storage is included, but not guaranteed to be persistent. Your role could be shut down and started in a different physical location, which has no impact on the availability of your role, but you'll lose whatever you have in that storage, i.e. the included storage is very much temporary.
As for transaction costs, you only pay for outgoing data, not incoming data or data within Azure (one role to another).
You pay per GB, and $0.01 per 100,000 transactions.
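For completeness, a minimal sketch of how that included per-instance storage is typically used in a web/worker role (the "ScratchSpace" resource name is a placeholder for whatever you declare in ServiceDefinition.csdef). It maps to local disk on the VM instance, so there are no storage transactions and no per-GB billing, but as noted above the contents can disappear when the instance is moved or reimaged:

```csharp
using System.IO;
using Microsoft.WindowsAzure.ServiceRuntime;

static class ScratchSpaceExample
{
    public static string WriteTempFile()
    {
        // "ScratchSpace" is assumed to be declared as a <LocalStorage> resource
        // in ServiceDefinition.csdef. It is local disk on the role instance:
        // free, no storage transactions, but NOT durable.
        LocalResource scratch = RoleEnvironment.GetLocalResource("ScratchSpace");
        string tempFile = Path.Combine(scratch.RootPath, "work.tmp");
        File.WriteAllText(tempFile, "temporary data - may be lost if the role is moved");
        return tempFile;
    }
}
```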

Storage Transaction Profiler for Windows Azure Web Deploy Accelerator

I've recently begun using the Web Deployment Accelerator for my Windows Azure account. It is providing an immediate return in time saved and is an excellent offering.
However, since "everything" is now stored in Azure Storage rather than on the regular E: drive, I am immediately seeing a cost consequence for using the tool.
In one day I have racked up a mighty 4-cent (NZD) charge. To do that I had to burn through about 80,000 storage transactions, and frankly I can't figure out where they all went.
I uploaded 6 sites that are very small and wouldn't have more than 300 files each. So I'm wondering:
a. Is there a profiling tool for the Web Deployment Accelerator that will allow me to see where and how 80,000 storage transactions were used for such a small workload? Is it a storage-transaction-intensive tool? Has any cost analysis been carried out in terms of how this tool operates? Has it been optimised with cost in mind?
b. If I'm using this tool, do I pay for 2 storage transactions per HTTP request to a site? Since the tool now writes the web server logs to table storage, that would be one storage request to pull the requested resource (img, script, etc.) and another storage request to write the log entry, would it not?
I'm not concerned about current charges; I'm concerned about the future if I start rolling all my hosted business into the cloud. I mean, I'm now being charged even just to "look" at my data, right? If I list the contents of a storage folder using a tool like Azure Storage Explorer, is that x storage transactions, where x = the number of files in the folder?
Not sure of a 3rd-party profiler tool, but Windows Azure Storage logging and metrics will give you very detailed info regarding both individual accesses and hourly rollups. It's pretty straightforward to enable, and the November 2011 SDK includes support for the API calls required for enabling. See here for an overview of what's offered for metrics and logging.
My team worked with Fullscale180 to build a storage library, Azure Store XRay, to demonstrate how to enable and query storage metrics and logging. Note: This was published before the SDK had logging and metrics support, so it uses the REST API calls instead. But that won't impact you if you try to use the library.
You can also look at another code demo, Cloud Ninja, which calls the XRay library for its metrics display (see here for running demo).
Regarding querying storage for objects in blob containers: that's not a 1:1 transaction:file scenario. You can specify the maximum number of blobs to return when listing items in a container. It's possible that all blobs are returned in one transaction. Of course, if you then grab each blob, each of these will be at least one transaction (depending on blob size). See here for details about listing blobs.
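To illustrate that listing point with the current Azure.Storage.Blobs SDK (connection string, container name and page size below are placeholders/arbitrary): listing is returned in pages, and each page fetch is a single List Blobs call, i.e. one transaction, regardless of how many blobs the page contains. Only downloading individual blobs afterwards costs additional transactions.

```csharp
using System;
using System.Threading.Tasks;
using Azure;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

class ListingCost
{
    static async Task Main()
    {
        var container = new BlobContainerClient("<your-connection-string>", "mycontainer");

        int listCalls = 0;
        await foreach (Page<BlobItem> page in container.GetBlobsAsync().AsPages(pageSizeHint: 1000))
        {
            // One List Blobs REST call (one transaction) per page,
            // regardless of how many blobs the page contains.
            listCalls++;
            Console.WriteLine($"Page {listCalls}: {page.Values.Count} blobs");

            // Downloading each blob would be at least one further transaction apiece.
        }
    }
}
```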

Resources