Because I have been charged a lot of money for storage, and I have many storage accounts, I want to know: how can I check my total storage usage across my subscription?
I have checked this similar question, but it only covers how to check each storage account individually.
The quickest way:
Navigate to the Azure portal -> Monitor -> Storage accounts, then select your subscription. For storage accounts, select "All". Finally, click "Capacity". Screenshot below:
Remember that there is no built-in feature for a subscription-wide total, so you have to add up the space used account by account. Alternatively, you can write code to sum the used capacity across accounts, as sketched below.
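For example, a minimal sketch in Python (assuming the azure-identity, azure-mgmt-storage and azure-mgmt-monitor packages; the subscription ID is a placeholder) that sums the latest UsedCapacity metric over every storage account in the subscription:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.monitor import MonitorManagementClient

subscription_id = "<your-subscription-id>"  # placeholder
credential = DefaultAzureCredential()

storage_client = StorageManagementClient(credential, subscription_id)
monitor_client = MonitorManagementClient(credential, subscription_id)

total_bytes = 0
for account in storage_client.storage_accounts.list():
    # UsedCapacity is an account-level metric; take the most recent hourly average.
    metrics = monitor_client.metrics.list(
        account.id,
        metricnames="UsedCapacity",
        aggregation="Average",
        interval="PT1H",
    )
    for metric in metrics.value:
        for series in metric.timeseries:
            values = [d.average for d in series.data if d.average is not None]
            if values:
                total_bytes += values[-1]  # latest data point for this account

print(f"Total used capacity: {total_bytes / 1024**3:.2f} GiB")
```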
Related
How would I find the total cost of an Azure Function (including costs the function incurs on the Azure storage account, such as the cost of LRS List and Create Container operations when multiple Azure Functions share the same storage account)?
You can see function costs by going to
Cost Management >> Cost Analysis >> View CostByResource >> Sort By ResourceType
For me, it doesn't give much detail; the best option is to have an App Service plan applied to the function, where you can get the CPU/memory consumption cost:
This time, select the preview option (Sort By ResourceType):
As for the storage account cost related to a function, you'll get full details for storage costs, but they are not broken down per function (I'd assume you use the storage account only for your functions).
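If you prefer to pull the same "cost by resource" breakdown in code, here's a hedged sketch using the Cost Management Query API via the Python SDK (azure-mgmt-costmanagement). The query shape follows the public REST schema, but treat the exact field names as assumptions to verify against the documentation:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.costmanagement import CostManagementClient

subscription_id = "<your-subscription-id>"  # placeholder
scope = f"/subscriptions/{subscription_id}"

client = CostManagementClient(DefaultAzureCredential())

query = {
    "type": "ActualCost",
    "timeframe": "MonthToDate",
    "dataset": {
        "granularity": "None",
        "aggregation": {"totalCost": {"name": "Cost", "function": "Sum"}},
        # Group by resource type so function apps and storage accounts separate
        # out, mirroring "Sort By ResourceType" in the portal view.
        "grouping": [{"type": "Dimension", "name": "ResourceType"}],
    },
}

result = client.query.usage(scope, query)
for row in result.rows:
    print(row)  # e.g. [cost, resource type, currency]
```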
According to the official documentation for Azure storage accounts, at most 250 accounts can be created per subscription. Is there a way to request a higher limit through Azure support, or is this a hard limit that can't be adjusted due to technical constraints?
Please note the new Azure Storage feature in public preview, which helps solve this pain point:
https://azure.microsoft.com/en-us/updates/preview-5kaccountlimit/
Azure Storage is announcing public preview of the ability to create an additional 5000 Azure Storage accounts per subscription per region. This is a 20 times increase from the current limit of 250 and helps you create several hundred or thousand storage accounts to address your storage needs within a single subscription, instead of creating additional subscriptions.
Learn more - https://techcommunity.microsoft.com/t5/azure-storage-blog/public-preview-create-additional-5000-azure-storage-accounts/ba-p/3465466
You misunderstood the limit as stated in the official documentation.
It says: "Number of storage accounts per region per subscription xxx".
Note the limit is per region per subscription. So within one subscription you can create at most 250 storage accounts in one region (like East US), and you can still create up to another 250 storage accounts in a different region (like Central US).
You can also find the limits in the Azure portal -> your subscription -> Usage + quotas. Screenshot below:
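If you want to check how close each region is to the limit, a minimal Python sketch (azure-mgmt-storage; the subscription ID is a placeholder) that counts your accounts per region:

```python
from collections import Counter

from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

subscription_id = "<your-subscription-id>"  # placeholder
client = StorageManagementClient(DefaultAzureCredential(), subscription_id)

# Tally storage accounts by their region so each count can be compared
# against the per-region-per-subscription limit.
counts = Counter(account.location for account in client.storage_accounts.list())
for region, count in counts.most_common():
    print(f"{region}: {count} account(s)")
```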
Having used Azure for some time now, I'm well aware of the default 20,000 IOPS limit of an Azure Storage Account. What I've yet to find however is up to date documentation on how to monitor an account's IOPS in order to determine whether or not it's being throttled. This is important when debugging performance issues for applications, VMs, and ASR replication - to name but three possible uses.
If anyone knows the correct way to keep track of an account's total IOPS and/or whether it's being throttled at any point in time, I'd appreciate it. If there's a simple solution for monitoring this over time, all the better; otherwise, if all that exists is an API or PowerShell cmdlet, I guess I'll have to write something that saves the data periodically.
You can monitor your storage account for throttling using Azure Monitor | Metrics. There are three metrics relevant to your question:
AnonymousThrottlingError
SASThrottlingError
ThrottlingError
These metrics exist for each of the four storage abstractions (blob, file, table, queue). If you're unsure how your storage account is being used, monitor these metrics for all four services. Things like ASR, Backup and VMs will be using the blob service.
To configure this, go to the Azure Monitor | Metrics blade in the portal and select the storage account(s) you want to monitor, then check off the metrics you're interested in. The image below shows the chart with these three metrics configured for the blob service.
You can also configure an alert based on these metrics to alert you when any of these throttling events occur.
As for measuring IOPS for the storage account, you could monitor the Transactions metric. This doesn't measure IOPS exactly, but it does give you visibility into the number of transactions (which roughly correlates with IOPS) across the storage account. You can configure this from the storage account blade by clicking Metrics in the Monitoring section, as shown below.
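Note that the three *ThrottlingError metrics above come from the older (classic) metrics pipeline. On current Azure Monitor metrics, throttling surfaces as the Transactions metric filtered on its ResponseType dimension; here's a hedged Python sketch (azure-mgmt-monitor, placeholder IDs) where the exact dimension values are assumptions to verify against the metric documentation:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

subscription_id = "<your-subscription-id>"  # placeholder
resource_id = (                              # placeholder resource ID (blob service)
    f"/subscriptions/{subscription_id}/resourceGroups/<rg>"
    "/providers/Microsoft.Storage/storageAccounts/<account>/blobServices/default"
)

client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

# Count only the throttled transactions by filtering the ResponseType dimension.
metrics = client.metrics.list(
    resource_id,
    metricnames="Transactions",
    aggregation="Total",
    interval="PT1H",
    filter="ResponseType eq 'ClientThrottlingError' or ResponseType eq 'ServerBusyError'",
)
for metric in metrics.value:
    for series in metric.timeseries:
        for point in series.data:
            if point.total:
                print(point.time_stamp, point.total)
```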
We are considering moving archived data after some retention period to the newer cool tier of Azure Storage (https://azure.microsoft.com/en-us/blog/introducing-azure-cool-storage/).
Can I programmatically set up something that will automatically change the tier or move content to cool-tier storage after some period of time?
In addition, we can change a blob's access tier at any point. But when changing from cool to hot, you have to pay a significant I/O charge for the conversion; converting from hot to cool is free. You can find more details in this document.
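If you want to flip the tier yourself on a schedule (e.g. from a timer-triggered job), a minimal sketch with the azure-storage-blob v12 Python package; the connection string, container and blob names are placeholders:

```python
from azure.storage.blob import BlobServiceClient, StandardBlobTier

service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client(container="archive", blob="2019/report.csv")

# Hot -> Cool is free; moving back from Cool/Archive to Hot incurs read
# charges, as noted above.
blob.set_standard_blob_tier(StandardBlobTier.COOL)
```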
Can I programmatically set up something that will automatically change the tier or move content to a cool tier storage after some period of time?
Yes, (since August 2021) it's now possible to have Azure automatically move blobs between Hot, Cool, and Archive tiers - this is done using a Lifecycle management policy which you can configure.
This is documented in this page: https://learn.microsoft.com/en-us/azure/storage/blobs/lifecycle-management-overview
Some caveats:
You can't force-run a policy: once you've set up the policy you'll simply have to wait... up to 48 hours or even longer.
So if you want to archive or cool your blobs immediately, you'll need another approach. Unfortunately I can't make any recommendations at the time of writing.
Your blobs need to be readable by Azure itself, so this won't work with super secure or inaccessible storage accounts.
You need to enable blob access-tracking first (I describe how to do this in the section below).
Note that blob access tracking costs extra money, but I can't find any published pricing for it.
The policy rules you can create are somewhat restrictive: you can only filter the set of blobs to auto-archive or auto-move to the Cool tier on these parameters:
The Blob's Last-Modified time and Last-Accessed time - but only in relative days-ago (so you can't use absolute dates or complex interval/date-range expressions).
The blob type (only Block blobs can be auto-moved between tiers; Append blobs can only be auto-deleted (not auto-moved); while Page blobs aren't supported at all)
The blob container name, and optionally a blob name prefix match.
But you can't filter by blob-name prefix match without specifying a container name.
You can also filter by exact-matching Blob Tags already indexed by a Blob Index.
And Blob Indexes have their own limitations too.
The only possible rule actions are to move blobs between tiers or to delete blobs. You can't move blobs between containers or accounts or edit any blob metadata or change a blob's access-policy and so on...
While there is an action that will move a Cool blob back to the Hot tier on its first access/request (which is nice), I'd personally have liked an option to promote the blob back to Hot only after 2 or 3 (or more) requests in a given time-window, instead of always moving it on the first request. As it stands, junk HTTP requests (e.g. web spiders, your own blob-scanner tools, etc.) might incur the more expensive Cool-tier data transfer/read fees, and you can't move blobs back to Cool without a 30-day wait.
To set up a blob lifecycle policy to automatically move Hot blobs to the Cool tier if not accessed for 90 days (as an example), do this:
(Azure Portal screenshots taken as I wrote this, in August 2022 - if the screenshots or instructions are outdated leave a comment and I'll update them)
Open the Azure Portal (https://portal.azure.com) and navigate to your Storage Account's "blade".
BTW, if your storage account is more than a few years old, ensure it has been upgraded to GeneralPurposeV2 (or is a Blob-only account).
Look for "Lifecycle Management" in the sidebar:
Ensure "Enable access tracking" is enabled.
The tooltip says access tracking has additional charges, but I can't find any information about what those charges are.
NOTE: The access-tracking feature is distinct from Azure Blob Inventory, but lots of pages and Google results incorrectly report Blob Inventory pricing for it.
Click "Add a rule" and complete the wizard:
Enter a Rule name like move-blobs-not-accessed-for-90-days-to-cool-tier
If you only want to auto-move blobs in specific containers (instead of the entire account), then check the "Limit blobs with filters" option.
On the "Base blobs" tab of the wizard you can specify the Last-Modified and Last-Accessed rules and actions as described above.
Note that you can have up to 3 conditions in a single policy, for the 3 possible actions: move-to-cool (tierToCool), move-to-archive (tierToArchive), and delete-blob (delete).
And that's it - so now you have to wait up to 48 hours for the policy to start to take effect, and maybe wait even longer for Azure to actually finish moving the blobs between tiers.
As I write this, I've had my policy set up for about 2 hours and there's still no sign that it's even started yet. YMMV.
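For completeness, the same policy can also be created programmatically instead of through the portal wizard. A hedged Python sketch (azure-mgmt-storage; resource group, account name and subscription ID are placeholders), where the rule shape follows the management-policy schema described above:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

subscription_id = "<your-subscription-id>"  # placeholder
client = StorageManagementClient(DefaultAzureCredential(), subscription_id)

policy = {
    "policy": {
        "rules": [
            {
                "enabled": True,
                "name": "move-blobs-not-accessed-for-90-days-to-cool-tier",
                "type": "Lifecycle",
                "definition": {
                    # Only block blobs can be auto-moved between tiers.
                    "filters": {"blobTypes": ["blockBlob"]},
                    "actions": {
                        "baseBlob": {
                            # Requires last-access-time tracking to be enabled
                            # on the account first (see the steps above).
                            "tierToCool": {"daysAfterLastAccessTimeGreaterThan": 90}
                        }
                    },
                },
            }
        ]
    }
}

# A storage account has a single management policy, always named "default".
client.management_policies.create_or_update(
    "<resource-group>", "<storage-account>", "default", policy
)
```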
I'm currently working out the cost analysis for my upcoming Azure project. I am tempted to use an Azure Cloud Role, because it has a certain amount of storage included in the offer. However, I have the feeling that it is too good to be true.
Therefore, I was wondering: do you have to pay transaction/storage costs on this "included" storage? I can't find any information about this on the Azure website, and I want to be as accurate as possible (even if the cost of transactions is almost nothing).
EDIT:
To clarify, I specifically want to know about the transaction costs on the storage. Do you have to pay a small cost per transaction on the storage (like with Blob/Table storage), or is this included in the offer as well?
EDIT 2:
I am talking about the storage included with the Cloud Services (web/worker) and not a separate Table/blob storage.
Can you clarify which offer you're referring to?
With Cloud Services (web/worker roles), each VM instance has some local storage associated with it, which is free of charge and, because it's a local disk, there are no transactions or related fees associated with this storage. As Rik pointed out in his answer, that data is not durable: it's on a single disk and will be gone forever if, say, the disk crashes.
If you're storing data in Blobs, Tables, or Queues (Windows Azure Storage), then you pay per GB ($0.095 per GB per month for geo-redundant storage, or $0.07 per GB per month for locally-redundant storage), plus a penny per 100,000 transactions. And as long as your storage account is in the same data center as your Cloud Service, there are no data egress fees.
Now we come back to the question of which offer you're referring to. The free 90-day trial, for instance, comes with 70GB of Windows Azure Storage, and 50M transactions monthly included. MSDN subscriptions come with included storage and transactions as well. If you're just working with a pay-as-you-go subscription, you'll pay for storage plus transactions.
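To make that concrete, a tiny back-of-the-envelope calculation in Python using the historical rates quoted above (the example volumes are made up, and today's prices will differ):

```python
gb_stored = 100           # example: 100 GB stored for a month
transactions = 5_000_000  # example: 5M storage transactions per month

geo_redundant_rate = 0.095  # $/GB/month, geo-redundant rate quoted above
txn_rate = 0.01 / 100_000   # a penny per 100,000 transactions

monthly_cost = gb_stored * geo_redundant_rate + transactions * txn_rate
print(f"~${monthly_cost:.2f}/month")  # ~$10.00/month for this example
```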
The storage is included, but not guaranteed to be persistent. Your role could be shut down and started in a different physical location, which has no impact on the availability of your role, but you'll lose whatever you have in that storage, i.e. the included storage is very much temporary.
As for transfer and transaction costs: you only pay for outgoing data, not for incoming data or traffic within Azure (one role to another). You pay per GB of outbound data, and $0.01 per 100,000 storage transactions.