Difference between ARM SKU Name and SKU Name in Azure EA

I have been working with the Azure Consumption APIs and I noticed that these two APIs return very similar variables with different names:
The reservationcharges endpoint defines armSkuName as:
+-----------------------+---------+----------------------------------------------+
| Property Name | Type | Description |
+-----------------------+---------+----------------------------------------------+
| armSkuName | string | String representing the purchased resource. |
+-----------------------+---------+----------------------------------------------+
The reservationsummaries endpoint defines skuName as:
+-----------------------+---------+----------------------------------------------+
| Property Name | Type | Description |
+-----------------------+---------+----------------------------------------------+
| skuName | string | String representing the purchased resource. |
+-----------------------+---------+----------------------------------------------+
I know that ARM SKU means Azure Resource Manager Stock Keeping Unit and SKU means Stock Keeping Unit, but I'm unclear on the distinction between the two.
Is the armSkuName specifically naming it as an ARM SKU as opposed to a classic ASM SKU? Does skuName encompass both?
Are there any Azure docs that explain the distinction between the two? Is there a distinction to make?


Deploying multi-region AppSync API with latency-based routing custom-domain via CDK

I am attempting to deploy an AWS AppSync API into two AWS regions (accessible via the same hostname, using Route53 latency-based routing) using CDK.
I first ran into the issue that I couldn't deploy an AWS::AppSync::DomainName resource into the second region using the same custom domain name as the first region; I was getting CloudFormation failures which stated:
Invalid request provided: CNAME already exists
This was my assumption about how this ought to be configured:
          |-----------------------------|
          | my-appsync-api.example.com  |
          | 2 CNAMEs: 1 x ase2, 1 x ew1 |
          |-----------------------------|
                        |
          +-------------+--------------+
          |                            |
|----------------------------|  |----------------------------|
| my-appsync-api.example.com |  | my-appsync-api.example.com |
| AppSync custom domain name |  | AppSync custom domain name |
|----------------------------|  |----------------------------|
          |                            |
   |---------------------|      |--------------------|
   | ase2.cloudfront.net |      | ew1.cloudfront.net |
   |---------------------|      |--------------------|
          |                            |
      |---------|                 |---------|
      | AppSync |                 | AppSync |
      |---------|                 |---------|
Given I had set up my Route53 records as CNAMEs, I assumed I should change them to A records using AWS's alias feature to point at the AppSync domain name. However, although this is possible via the Route53 console, it is not possible via the CDK: when I tried to set this up, I found that there is currently no Route53 alias target for AppSync (as there is for API Gateway, CloudFront, etc.).
My next attempt was to configure region-specific custom domain names for each AWS::AppSync::DomainName resource and create region-specific CNAMEs for each; then finally create latency-based-routing A records with the desired domain name, routing to their respective region-specific domains:
            |-----------------------------|
            | my-appsync-api.example.com  |
            | 2 CNAMEs: 1 x ase2, 1 x ew1 |
            |-----------------------------|
                           |
            +--------------+---------------+
            |                              |
|---------------------------------|  |--------------------------------|
| my-ase2-appsync-api.example.com |  | my-ew1-appsync-api.example.com |
| CNAME for ase2                  |  | CNAME for ew1                  |
|---------------------------------|  |--------------------------------|
            |                              |
|---------------------------------|  |--------------------------------|
| my-ase2-appsync-api.example.com |  | my-ew1-appsync-api.example.com |
| AppSync custom domain name      |  | AppSync custom domain name     |
|---------------------------------|  |--------------------------------|
            |                              |
    |---------------------|       |--------------------|
    | ase2.cloudfront.net |       | ew1.cloudfront.net |
    |---------------------|       |--------------------|
            |                              |
        |---------|                   |---------|
        | AppSync |                   | AppSync |
        |---------|                   |---------|
Alas, this did not work either; I ended up with an SSL error, I assume because the CloudFront distribution (under the hood of AppSync) was configured with the region-specific domain.
It looks to me like the only option I have (given that you can apparently only have AWS::AppSync::DomainName resources with unique custom domain names) is to use a unique custom domain name per region and then put an API Gateway proxy in front of AppSync... although this adds around 200 ms (at least) of latency. It'd be great if there were a better way.
Have you seen this blog post by AWS?
The setup looks like what you are trying to achieve.

KQL :: How to Join Resources and AzureActivity

I found a list of KQL queries that are helping me dig into unused resources on Azure.
With this query for example I can see a list of Orphaned Disks:
Resources
| where type has "microsoft.compute/disks"
| extend diskState = tostring(properties.diskState)
| where (managedBy == "" and diskState != 'ActiveSAS')
    or (diskState == 'Unattached' and diskState != 'ActiveSAS')
| project id, diskState, resourceGroup, location, subscriptionId
which nicely renders into a list of unattached disks.
But I would like to add 3 more columns to it:
Who created the resource
When the resource was created
Ideally how much it cost that resource in the last 30 days
I see that I probably have to join AzureActivity in order to find who created the resource.
I still have no idea if KQL can help me find the costs per activity.
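For the "who" and "when" columns, a starting point could be the Administrative activity log. This is only a sketch, assuming the AzureActivity table is collected into a Log Analytics workspace and the disks were created within its retention window:

```kusto
// Creation events for managed disks, keyed by lowercased resource id so the
// result can be matched against the id column from the Resources query above.
AzureActivity
| where OperationNameValue =~ "Microsoft.Compute/disks/write"
| where ActivityStatusValue =~ "Success"
| summarize CreatedTime = min(TimeGenerated), CreatedBy = take_any(Caller)
    by ResourceId = tolower(_ResourceId)
```

As far as I know, costs are not stored in Log Analytics, so the per-resource cost for the last 30 days would come from Cost Management or the Consumption APIs rather than from a KQL join.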

Adding a drop-down filter to a KQL chart

I have a chart in Azure Monitor (Application Insights, to be exact). Essentially I have 5 servers, 2 of which are used by client B and 3 by client C. What I want is all servers displayed, with a drop-down option so that client B or C can be chosen.
At the moment I have two charts which show the servers standalone, and another combined, with the following query:
Perf
| where CounterName == "% Processor Time"
| where ObjectName == "Processor"
| where Computer contains "SQL"
| extend Client = iif(Computer == "X", "B", iif(Computer == "Y", "B", "C"))
| summarize avg(CounterValue) by bin(TimeGenerated, 5m), Client // bin sets the time grain to 5 minutes
| render timechart
A solution that may work for you, as suggested by Peter Bons and after testing in our local environment:
In your Application Insights resource, create a workbook with a new parameter.
Pick "Drop down" as the parameter type.
Pick "Query" as the "Get data from" option.
Set the data source from where you are getting your data.
The below query gets the list of subscriptions; you can use your own query for getting the list of servers:
ResourceContainers | where type =~ "microsoft.resources/subscriptions"
// add any other filters you want here
| project id, name, group=tenantId
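For the question's scenario, a comparable parameter query against the Perf table might be (a sketch; the "SQL" filter mirrors the question's query and may need adjusting):

```kusto
// Distinct servers reporting perf data, used to populate the drop-down.
Perf
| where TimeGenerated > ago(1d)
| where Computer contains "SQL"
| distinct Computer
| sort by Computer asc
```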
For further information you can go through the Microsoft documentation.

Azure Log Analytics query for how much and what data a VM has consumed

I would like to have a query that returns, for a single VM, what kind of log types / solutions it has used and how much.
I don't know if this is even possible, or if anything similar is. Tips?
With this query I'm able to list total usage for all VMs reporting to the Log Analytics workspace, but I would like to have more details about a single VM:
find where TimeGenerated > ago(30d) project _BilledSize, _IsBillable, Computer
| where _IsBillable == true
| extend computerName = tolower(tostring(split(Computer, '.')[0]))
| summarize BillableDataBytes = sum(_BilledSize) by computerName
| sort by BillableDataBytes nulls last
You should mostly be able to accomplish this by querying the standard columns _BilledSize, Type, _IsBillable and Computer.
Below is the sample query for your reference:
union withsource=tt *
| where TimeGenerated between (ago(7d) .. now())
| where _IsBillable == true
| where isnotempty(Computer)
| where Computer == "MM-VM-RHEL-7"
| summarize BillableDataBytes = sum(_BilledSize) by Computer, _IsBillable, Type
| render piechart
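If raw byte counts are hard to read, the same summarize can report gigabytes per table instead (a sketch using the same hypothetical computer name):

```kusto
// Same breakdown as above, converted to GB and sorted by volume.
union withsource = tt *
| where TimeGenerated between (ago(7d) .. now())
| where _IsBillable == true
| where Computer == "MM-VM-RHEL-7"
| summarize BillableDataGB = round(sum(_BilledSize) / 1.E9, 3) by Type
| sort by BillableDataGB desc
```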
Related references:
Log data usage - Understanding ingested data volume
Standard columns in logs

How can I access custom event values from Azure AppInsights analytics?

I am reporting some custom events to Azure; within each custom event is a value held under the customMeasurements object, named 'totalTime'.
The event itself looks like this:
loading-time: {
    customMeasurements: {
        totalTime: 123
    }
}
I'm trying to create a graph of the average total time of all the events reported to Azure per hour. So I need to be able to collect and average the values within the events.
I can't seem to figure out how to access the customMeasurements values from within Azure Application Insights Analytics. Here is some of the code that Azure provided:
union customEvents
| where timestamp between(datetime("2019-11-10T16:00:00.000Z")..datetime("2019-11-11T16:00:00.000Z"))
| where name == "loading-time"
| summarize Occurrences=count() by bin(timestamp, 1h)
| order by timestamp asc
| render barchart
This code simply counts the number of reported events within the last 24 hours and displays them per hour.
I have tried to access the customMeasurements object held in the event by doing
summarize Occurrences=avg(customMeasurements["totalTime"])
But Azure doesn't like that, so I'm doing it wrong. How can I access the values I require? I can't seem to find any documentation either.
It can be useful to project the data from the customDimensions / customMeasurements property collection into a new variable that you'll use for further aggregation. You'll normally need to cast the dimensions data to the expected type, using one of the todecimal, toint, or tostring functions.
For example, I have some extra measurements on dependency telemetry, so I can do something like this:
dependencies
| project ["ResponseCompletionTime"] = todecimal(customMeasurements.ResponseToCompletion), timestamp
| summarize avg(ResponseCompletionTime) by bin(timestamp, 1h)
Your query might look something like this:
customEvents
| where timestamp between(datetime("2019-11-10T16:00:00.000Z")..datetime("2019-11-11T16:00:00.000Z"))
| where name == "loading-time"
| project ["TotalTime"] = toint(customMeasurements.totalTime), timestamp
| summarize avg(TotalTime) by bin(timestamp, 1h)
| render barchart
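One caveat: if some "loading-time" events are missing totalTime, the cast yields null, and those events are silently dropped from the average. A variant that filters them out explicitly, and adds a 95th percentile alongside the hourly mean (a sketch, same assumptions as the query above):

```kusto
customEvents
| where timestamp > ago(24h)
| where name == "loading-time"
| extend TotalTime = toint(customMeasurements.totalTime)
| where isnotnull(TotalTime)
| summarize AvgTotalTime = avg(TotalTime), P95TotalTime = percentile(TotalTime, 95)
    by bin(timestamp, 1h)
| render timechart
```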
