Is there a way to query all storage accounts whose names contain "storage" across my subscriptions and get the used capacity for each one? I'm trying to create an alert rule that fires if they exceed 1024 GB of capacity, but I'm not seeing used capacity anywhere. You can go into each storage account and configure the alert within that storage account, but that's not practical at scale.
I can get some information from the storage accounts using this query for availability:
AzureMetrics
| where TimeGenerated > ago(1d)
| where ResourceProvider == "MICROSOFT.STORAGE"
| where _ResourceId contains "storage"
| where MetricName =~ "availability"
| project TimeGenerated, ResourceGroup, _SubscriptionId, Resource, MetricName, Average, UnitName
Thanks in advance :)
The metric you are looking for is UsedCapacity. From "Exporting platform metrics to other locations" in the Azure Monitor docs:
"Using diagnostic settings is the easiest way to route the metrics, but there are some limitations: Exportability. All metrics are exportable through the REST API, but some can't be exported through diagnostic settings."
For Microsoft.Storage/storageAccounts the metric is documented as:
Metric: UsedCapacity
Exportable via Diagnostic Settings?: Yes
Metric Display Name: Used capacity
Unit: Bytes
Aggregation Type: Average
Description: The amount of storage used by the storage account. For standard storage accounts, it's the sum of capacity used by blob, table, file, and queue. For premium storage accounts and Blob storage accounts, it is the same as BlobCapacity or FileCapacity.
Dimensions: No Dimensions
AzureMetrics
| where ResourceProvider == "MICROSOFT.STORAGE"
| where MetricName == "UsedCapacity"
| where SubscriptionId == "ebb79bc0-aa86-44a7-8111-cabbe0c43993"
| summarize arg_max(TimeGenerated, ResourceGroup, Resource, Average) by _ResourceId
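Building on the query above, here is a sketch for the original 1024 GB alert threshold (the conversion below assumes binary gigabytes, i.e. GiB; adjust the divisor if you mean decimal GB):

```kusto
AzureMetrics
| where ResourceProvider == "MICROSOFT.STORAGE"
| where MetricName == "UsedCapacity"
| summarize arg_max(TimeGenerated, Average) by _ResourceId   // latest sample per account
| extend UsedGB = Average / 1024 / 1024 / 1024               // bytes -> GiB
| where UsedGB > 1024
| project _ResourceId, TimeGenerated, UsedGB
```

You could save this as a single log alert rule scoped to the Log Analytics workspace, so one rule covers every storage account sending metrics there instead of one alert per account.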
How to create a dashboard that shows the amount of storage left in a storage account?
I don't want to create alerts; I want to create a tile in the dashboard that shows one of the following:
the amount of storage used by the storage account against its quota, hence some sort of percentage
The amount of remaining storage space
The amount of remaining storage space as a percentage
Thanks to @Norrin Rad. Referring to that answer, I have written a query for storage availability below:
AzureMetrics
| where TimeGenerated > ago(24h) // adjust the time span as needed (e.g. 24 hrs)
| where ResourceProvider == "MICROSOFT.STORAGE"
| where _ResourceId contains "storage"
| where MetricName =~ "availability"
| project
TimeGenerated,
ResourceGroup,
_SubscriptionId,
Resource, // the storage account
MetricName,
Average,
UnitName
Go to the Azure portal and follow these steps:
Monitor -> Logs -> write the query -> save it
After running the query, you will be able to see the resulting logs.
After successfully executing the query, you can pin it to your dashboards.
Note: When saving the query, if you also save its content to an Azure storage account (e.g. Blob storage), make sure to assign a user-assigned managed identity with the "Storage Blob Data Contributor" role.
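For the percentage-style tile asked about in the question, one possible sketch: standard storage accounts don't expose a per-account quota metric, so you have to divide UsedCapacity by an assumed maximum. The 5 PiB figure below is an assumption based on the default standard-account capacity limit; substitute your own quota:

```kusto
AzureMetrics
| where ResourceProvider == "MICROSOFT.STORAGE"
| where MetricName == "UsedCapacity"
| summarize arg_max(TimeGenerated, Average) by Resource   // latest sample per account
| extend QuotaBytes = 5.0 * exp2(50)                      // assumed 5 PiB default limit
| extend UsedPercent = 100.0 * Average / QuotaBytes
| project Resource, UsedPercent, RemainingPercent = 100.0 - UsedPercent
```

Pinning this query to a dashboard gives you the "used vs. quota" and "remaining percentage" tiles in one place.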
We have an Azure Blob Storage container that is NFS-mounted on Linux virtual machines. The container has blobs in the Hot, Cool, and Archive tiers.
I see a lot of transactions (per day) in Azure Metrics for this storage account. We only have one container in the storage account.
Can you please help me narrow down where these transactions are coming from? Also, which category (Read, Iterative Read, All other operations) should I consider these transactions under in the blob transactions pricing table?
(Screenshots: the transactions split by API show mostly NFS3LookUp and NFS3GetAttr calls.)
For the transaction question, make use of Azure Storage Analytics metrics (classic) (see Azure Storage Analytics metrics (classic) | Microsoft Docs). High counts like this are often because duplicate records may exist for logs created within the same hour.
Make sure Azure Storage Analytics logging is enabled; the logs are written to the $logs blob container in your storage account, where you can browse the entries for more detail. Note that $logs is not returned by a List Containers operation.
For example :
http://<accountname>.blob.core.windows.net/$logs.
You can try the PowerShell below to filter the list of log blobs for a specific hour in your storage account's $logs container:
Get-AzStorageBlob -Container '$logs' |
Where-Object {
$_.Name -match 'blob/2014/05/21/05' -and
$_.ICloudBlob.Metadata.LogType -match 'write'
} |
ForEach-Object {
"{0} {1} {2} {3}" -f $_.Name,
$_.ICloudBlob.Metadata.StartTime,
$_.ICloudBlob.Metadata.EndTime,
$_.ICloudBlob.Metadata.LogType
}
Otherwise, you can make use of Azure Application Insights, which can automatically detect performance anomalies and help you diagnose issues to understand what is actually going on.
For reference, see the links below, answered by sVathis and Gaurav Mantri:
How can I find the source of my Hot LRS Write Operations on Azure Storage Account?
Azure Storage Analytics logging | Microsoft Docs
And also may I know which category (Read, Iterative Read, All other operations) I should consider these transactions as in the Blob transactions pricing table
You can use the Azure pricing calculator; pricing depends on the region where the data is stored.
Please refer to the official documentation for the operation categories relevant to your requirement.
I would like to retrieve all the resource names along with their types belonging to a particular subscription and resource group, along with the tags of the resources.
I should be able to dump them into a CSV file where the first column is subscription, then resource group, followed by resource name, type, and tags. I should be able to filter the CSV to what I need to see.
I need to run this for all subscriptions in a particular tenant so that I get this information for every subscription.
Can anyone please help me write a KQL query for this that I can run from the portal?
Thanks
I had a similar challenge with KQL when trying to provide user-friendly names for subscriptions in Azure Workbooks, and found a solution at this link.
The trick is to list the subscription names from the resourcecontainers table and then join the results with your resources query.
The answer to your question will look like this:
resources
| join kind=inner (
resourcecontainers
| where type == 'microsoft.resources/subscriptions'
| project subscriptionId, subscriptionName = name)
on subscriptionId
| project subscriptionName, resourceGroup, name, type, tags
Using KQL in Azure Resource Graph is actually an ideal way to retrieve this information. You can run the KQL queries from the Azure Portal using Resource Graph Explorer then export (or use PowerShell with the Search-AzGraph cmdlet and pipe to Export-Csv).
Resource Graph allows queries against the ARM graph backend using KQL, which is an extremely powerful and preferred method of accessing Azure configuration data. All subscriptions in the tenant are in scope by default (when selected in Resource Graph Explorer).
Please review Resource Graph concepts and query samples in Microsoft's docs:
Explore your Azure resources with Resource Graph
Starter Resource Graph query samples
Advanced Resource Graph query samples
Query below; if you choose to export all subscriptions and RGs at once just remove the subscriptionId and resourceGroup where clauses:
resources
| where subscriptionId == "subscription-id-here"
| where resourceGroup == "rg-name-here"
| project subscriptionId, resourceGroup, name, type, tags
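If you prefer the PowerShell-and-Export-Csv route mentioned above, a minimal sketch using the Az.ResourceGraph module (the output path is an arbitrary example):

```powershell
# Requires the Az.ResourceGraph module: Install-Module Az.ResourceGraph
$query = "resources | project subscriptionId, resourceGroup, name, type, tags"

# Search-AzGraph runs the Resource Graph query across subscriptions in scope
# and the results pipe straight into a CSV file.
Search-AzGraph -Query $query -First 1000 |
    Export-Csv -Path .\resources.csv -NoTypeInformation
```

Note that Search-AzGraph pages results (1000 per call here); for very large tenants you would loop with the -Skip parameter.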
Yes, @Ivan is right. KQL by itself is certainly not meant for this purpose. Kusto Query Language (KQL) is primarily a means of interacting with Azure Data Explorer and working with log data in Azure.
The simplest way to get this information about your Azure resources is from the Azure Portal itself, by viewing and filtering Azure resource information.
As your query spans across your Subscriptions, you could also run queries from Azure Resource Graph.
Azure PowerShell and Azure CLI would be other great ways to get detailed information about your Azure resources. Here is another post with a similar ask.
I'd like to create a dashboard in the Azure Portal that displays the number of active virtual machines per resource group. In this case I'm not interested in any deallocated or stopped VM's.
Since filtering the virtual machines blade doesn't work for the VM's power state, I turned to the Resource Graph. From there the solution gets close, but it doesn't seem possible to filter on power state (yet).
resources
| where type == "microsoft.compute/virtualmachines"
| summarize count() by resourceGroup
| order by resourceGroup asc
Is there a way to combine this data with another data table to be able to filter on power state and get only the running virtual machines? Or maybe a different solution altogether to just display the number of running VM's on a dashboard?
There doesn't seem to be a table that holds the PowerState of the VM in the Resource Graph schema (at least I couldn't find it)
Since you stated that you would also like to hear about a different approach altogether, I want to suggest the PowerShell route.
You can get the PowerState of the VM using the below command
Get-AzVM -Status
You can write this output to Azure Table storage (this link has details on how to use PowerShell to interact with Azure storage accounts: https://learn.microsoft.com/en-us/azure/storage/tables/table-storage-how-to-use-powershell).
You can then build a Power BI report on top of that table storage, filtering only for PowerState == "running", and light up your report.
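The status-and-filter step above can be sketched like this (property names as returned by Get-AzVM -Status in the Az module):

```powershell
# Count running VMs per resource group.
# Get-AzVM -Status includes a PowerState string such as "VM running".
Get-AzVM -Status |
    Where-Object { $_.PowerState -eq 'VM running' } |
    Group-Object ResourceGroupName |
    Select-Object Name, Count
```

The same filtered objects can be written to Table storage in the runbook described below.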
Now to schedule this, you will need to
a) Create an Automation Account. Details on how to create automation account can be found here [https://learn.microsoft.com/en-us/azure/automation/automation-create-standalone-account]
b) Create a PowerShell runbook which get the VM status and inserts rows to table storage
c) Create a schedule and link the runbook to it.
Details on how to schedule can be found here [https://learn.microsoft.com/en-us/azure/automation/shared-resources/schedules]
Thus, using an Azure Automation account and a runbook (point b), you can set up a schedule and link the runbook to it. Whenever the runbook executes, it gets the current power status and uploads it to Azure Table storage, which keeps the Power BI report updated.
Hope this helps
Hope the example below works for you:
resources
| where type == "microsoft.compute/virtualmachines"
| where properties.extended.instanceView.powerState.displayStatus=="VM running"
| summarize count() by resourceGroup
| order by resourceGroup asc
Cheers,
Is there a possible way to get the VM creation date?
I've tried the following so far:
AzureActivity
| where TimeGenerated > ago(90d)
| where ResourceProvider == "Microsoft.Compute" and OperationName == "Create or Update Virtual Machine"
| project Resource ,Datum = format_datetime(EventSubmissionTimestamp, 'MM') ,Caller
| distinct Datum , Resource , Caller
| order by Datum
This Kusto query reads the logs from the VMs connected to the workspace and gets all the "Create or Update Virtual Machine" operations along with the caller ID.
But since this operation covers both create and update, it gives me duplicate values every time a VM is updated.
I also tried PowerShell:
$GetVM = Get-AzureRMVM
Foreach ($vms in $GetVM)
{
$vm = get-azurermvm -name $vms.Name -ResourceGroupName $vms.ResourceGroupName
$log = Get-AzureRmLog -ResourceId $vm.Id -StartTime (Get-Date).AddDays(-90) -WarningAction silentlyContinue
Write-Output "- Found VM creation at $($log.EventTimestamp) for VM $($log.Id.split("/")[8]) in Resource Group $($log.ResourceGroupName) found in Azure logs"
}
But I can't seem to find the creation date inside the log files either. Does anyone have a clue whether it is possible to find the creation date of a virtual machine via a scripting language, Kusto, PowerShell, ...?
The easiest way that worked for me to get the Azure VM creation date was to look at the creation date of the OS Disk
Browse to your VM on Azure Portal
On Left Hand side, click on the blade "Disks"
Under OS Disk section, click on your OS Disk.
In the Overview blade of your OS Disk, you can see Time Created field.
Note: All my Azure VMs were created with the OS Disk and were never changed.
Hope it helps. Cheers.
There is no direct way to find the creation date if it's more than 90 days ago. But here is a nice workaround: https://savilltech.com/2018/02/13/checking-the-creation-time-of-an-azure-iaas-vm/
You can use the Azure CLI. Run the command below:
az vm list
This lists JSON data for each VM, including a timeCreated field you can filter on, for example with a JMESPath query:
az vm list --query "[].{name:name, created:timeCreated}" --output table
// "timeCreated": "2022-06-24T14:13:00.326985+00:00"
The portal does show Created for a cloud service in the Dashboard of a Cloud Service, but that is not shown for a specific VM (which you can see with Azure PowerShell with Get-AzureService <cloud service name> | select DateCreated).
When you do a Quick Create of a VM, that will always create a new cloud service, so the time created would be the same for VM and cloud service. But since you can add multiple VMs to a cloud service, you can't always rely on that.
On the VM's Dashboard in the portal, at the bottom if you look at the VHD column, the VHD name includes the date the disk was created as part of the name, though this is only true for VMs created from an image. If the VM was created from a disk, the name could be anything. You can get that OS disk name in Azure PowerShell with Get-AzureVM <cloud service name> <VM name> | Get-AzureOSDisk | select medialink.
Operation Logs under Management Services in the portal lets you search the last 30 days for operations, so if the VM was created in the last month, you can find evidence of the operation there (for example CreateHostedService and CreateDeployment operations).
For Windows VMs created from an image, the timestamp on WaSetup.log and WaSetup.xml in C:\Windows\panther\ reflect when the VM was provisioned.
Hope it helps.
If you check Deployments in the respective resource group, you will see a Last Modified date for each deployment in that RG.
I found another way to get this working for me by tweaking your activity log query instead of using PowerShell. Filtering on the HTTPRequest property seemed to give me what I needed:
AzureActivity
| where TimeGenerated > ago(7d)
| where ResourceProvider contains "Microsoft.Compute" and OperationName == "Create or Update Virtual Machine"
| where HTTPRequest contains "PUT"
| project VMName = Resource, Created_On = format_datetime(EventSubmissionTimestamp,'dd-MM-yyyy-HHtt'), User = Caller
| distinct Created_On, VMName, User
| order by Created_On
In my case, I was trying to get the VMs deleted in the last 7 days. For some reason the time wasn't displayed properly by the query below, so I had to convert it to my timezone.
AzureActivity
| where TimeGenerated > ago(7d)
| where ResourceProvider == "Microsoft.Compute" and OperationName == "Delete Virtual Machine"
| where HTTPRequest contains "DELETE"
| extend MyTimeZone = EventSubmissionTimestamp + 8h
| project VM_Name = Resource, Deleted_On = format_datetime(MyTimeZone, 'dd-MM-yyyy-HHtt'), User = Caller
| distinct Deleted_On , VM_Name , User
| order by Deleted_On