Hi, I've been trying to convert a query from bytes to GB, however I'm getting some strange results. I also get the same component showing up and incrementally increasing in size, which makes sense as we are getting the bytes used, which we would expect to see increase, but I would only like the latest set of data (when the query was run). Can anyone see where I'm going wrong?
AzureMetrics
| where TimeGenerated >=ago(1d)
| where Resource contains "LB"
| where MetricName contains "ByteCount"
| extend TotalGB = Total/1024
| summarize by Resource, TimeGenerated, TotalGB, MetricName, UnitName
| sort by TotalGB desc
| render piechart
In the table below, loadbal-1 reports several times in a short window, and the same goes for loadbal-2 and loadbal-3. I'd like to capture these all in a single line. Also, I think I might have messed up the query for "TotalGB" (converting bytes to GB).
I would only like the latest set of data (when the query was run)
You can use the arg_max() aggregation function.
For example:
AzureMetrics
| where TimeGenerated >= ago(1d)
| where Resource has "LoadBal"
| where MetricName == "ByteCount"
| summarize arg_max(TimeGenerated, *) by Resource
I've been trying to convert a query from bytes to GB, however I'm getting some strange results... I think I might have messed up the query for "TotalGB" (converting bytes to GB)
If the raw data is in bytes, then you need to divide it by exp2(30) (i.e. 1024*1024*1024) to get the value in GB.
Or, you can use the format_bytes() function instead.
For example:
print bytes = 18027051483.0
| extend gb_1 = format_bytes(bytes, 2),
gb_2 = bytes/exp2(30),
gb_3 = bytes/1024/1024/1024
bytes         gb_1       gb_2               gb_3
18027051483   16.79 GB   16.7889999998733   16.7889999998733
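Putting the two parts together, a corrected version of the original query might look like the sketch below (an assumption on my side: Total is the column holding the byte count you care about, and exp2(30) does the bytes-to-GB conversion):
AzureMetrics
| where TimeGenerated >= ago(1d)
| where Resource contains "LB"
| where MetricName == "ByteCount"
| summarize arg_max(TimeGenerated, Total) by Resource
| extend TotalGB = Total / exp2(30)
| project Resource, TimeGenerated, TotalGB
| sort by TotalGB desc
| render piechart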
I'm trying to create a KQL query to capture all private endpoints and see if the bytes in or out equal null (0); however, when I run a query for private endpoints all I get is basic information.
Resources
| where type =~ 'Microsoft.network/privateendpoints'
| mvexpand ProvisioningState = properties.provisioningState
| mvexpand PLSprop = properties.networkInterfaces
| mvexpand PLSprop2 = properties.subnet
Is there a way to get bytes in or out and their values, or any that equal 0?
Thanks
Update:
Documentation typo ("out" instead of "in") has been fixed
Microsoft.Network/privateEndpoints
Metric       Exportable via Diagnostic Settings?   Metric Display Name   Unit    Aggregation Type   Description                  Dimensions
PEBytesIn    Yes                                   Bytes In              Count   Total              Total number of Bytes In*    No Dimensions
PEBytesOut   Yes                                   Bytes Out             Count   Total              Total number of Bytes Out    No Dimensions
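Assuming these platform metrics are exported to a Log Analytics workspace (for example via diagnostic settings into the AzureMetrics table), a rough sketch for spotting endpoints with zero traffic could look like this; the table and column names follow the AzureMetrics convention used above and are an assumption, not something verified against your environment:
AzureMetrics
| where TimeGenerated >= ago(1d)
| where MetricName in ("PEBytesIn", "PEBytesOut")
| summarize TotalBytes = sum(Total) by Resource, MetricName
| where TotalBytes == 0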
I have recently started working with Kusto. I am stuck with a use case where I need to confirm that the approach I am taking is right.
I have data in the following format
In the above example, if the status is 1 and the time frame adds up to 15 seconds, then I need to count it as 1 occurrence.
So in this case there are 2 occurrences of the status.
My approach was:
If the current and next rows' status are both equal to 1, then take the time difference and do row_cumsum, and break it if next(STATUS) != 1.
Even though the approach gives me the correct output, I assume the performance can slow down once the data size increases.
I am looking for an alternative approach, if any. I am also adding the complete scenario to reproduce this with sample data.
.create-or-alter function with (folder = "Tests", skipvalidation = "true") InsertFakeTrue() {
range LoopTime from ago(365d) to now() step 6s
| project TIME=LoopTime,STATUS=toint(1)
}
.create-or-alter function with (folder = "Tests", skipvalidation = "true") InsertFakeFalse() {
range LoopTime from ago(365d) to now() step 29s
| project TIME=LoopTime,STATUS=toint(0)
}
.set-or-append FAKEDATA <| InsertFakeTrue();
.set-or-append FAKEDATA <| InsertFakeFalse();
FAKEDATA
| order by TIME asc
| serialize
| extend cstatus = STATUS
| extend nstatus = next(STATUS)
// running sum of the seconds between consecutive status-1 rows, restarted whenever the current row's status is not 1
| extend WindowRowSum = row_cumsum(iff(nstatus == 1 and cstatus == 1, datetime_diff('second', next(TIME), TIME), 0), cstatus != 1)
// when a window of consecutive 1s ends (next status is not 1, or last row), turn the accumulated seconds into a count of 15-second occurrences
| extend windowCount = iff(nstatus != 1 or isnull(next(TIME)), iff(WindowRowSum == 15, 1, iff(WindowRowSum > 15, (WindowRowSum/15) + ((WindowRowSum%15)/15), 0)), 0)
| summarize IDLE_COUNT = sum(windowCount)
The approach in the question is the way to achieve such calculations in Kusto, and given that the logic requires sorting, it is also efficient (as long as the sorted data can reside on a single machine).
Regarding the union operator: it runs in parallel by default, and you can control the concurrency and spread using hints; see the union operator documentation.
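For example, a minimal sketch of passing those hints (the two legs below are arbitrary and only illustrate the syntax):
union hint.concurrency=4 hint.spread=4
    (FAKEDATA | where STATUS == 1),
    (FAKEDATA | where STATUS == 0)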
I'm trying to create a custom metric alert based on some metrics in my Application Insights logs. Below is the query I'm using:
let start = customEvents
| where customDimensions.configName == "configName"
| where name == "name"
| extend timestamp, correlationId = tostring(customDimensions.correlationId), configName = tostring(customDimensions.configName);
let ending = customEvents
| where customDimensions.configName == configName"
| where name == "anotherName"
| where customDimensions.taskName == "taskName"
| extend timestamp, correlationId = tostring(customDimensions.correlationId), configName = tostring(customDimensions.configName), name= name, nameTimeStamp= timestamp ;
let timeDiffs = start
| join (ending) on correlationId
| extend timeDiff = nameTimeStamp- timestamp
| project timeDiff, timestamp, nameTimeStamp, name, anotherName, correlationId;
timeDiffs
| summarize AggregatedValue=avg(timeDiff) by bin(timestamp, 1m)
When I run this query in the Analytics page I get results; however, when I try to create a custom metric alert, I get the error Search Query should contain 'AggregatedValue' and 'bin(timestamp, [roundTo])' for Metric alert type.
The only response I found was about adding AggregatedValue, which I already have, so I'm not sure why the custom metric alert page is giving me this error.
I found what was wrong with my query. Essentially, the aggregated value needs to be numeric; however, AggregatedValue=avg(timeDiff) produces a timespan value, and since it was in seconds it was a bit hard to notice. Converting it to int solves the problem.
I have just updated the last bit as follows:
timeDiffs
| summarize AggregatedValue=toint(avg(timeDiff)/time(1ms)) by bin(timestamp, 5m)
This brings another challenge with Aggregate On while creating the alert, as AggregatedValue is not part of the grouping that comes after the by statement.
When grabbing search results using the Azure Log Analytics Search REST API,
I'm able to receive only the first 5000 results (as per the specs, at the top of the document), but I know there are many more (from the "total" attribute in the metadata in the response).
Is there a way to paginate so to get the entire result set?
One hacky way would be to iteratively break down the desired time range until the "total" is less than 5000 for each sub-range, and repeat this across the entire desired time range, but that is guesswork that will cost many redundant requests.
While there doesn't appear to be a way to paginate using the REST API itself, you can use your query to perform the pagination. The two key operators here are top and skip:
Suppose you want page n with pagesize x (starting at page 1), then append to your query:
query | skip (n-1) * x | top x.
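For instance, with a page size of 50, page 3 (hypothetical numbers) would be:
query | skip 100 | top 50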
For a full reference list, see https://learn.microsoft.com/en-us/azure/log-analytics/log-analytics-search-reference
Yes, the skip operator is not available anymore, but if you want to create pagination there is still an option. You need the total count of entries, some simple math, and two opposite sortings.
Prerequisites for this query are the values ContainerName, Namespace, Page, and PageSize.
I'm using it in a Workbook where these values are set by fields.
let containers = KubePodInventory
| where ContainerName matches regex '^.*{ContainerName}$' and Namespace == '{Namespace}'
| distinct ContainerID
| project ContainerID;
let TotalCount = toscalar(ContainerLog
| where ContainerID in (containers)
| where LogEntry contains '{SearchText}'
| summarize CountOfLogs = count()
| project CountOfLogs);
ContainerLog
| where ContainerID in (containers)
| where LogEntry contains '{SearchText}'
// strip ANSI color escape sequences from the log lines
| extend Log=replace(@'(\x1b\[[0-9]*m|\x1b\[0 [0-9]*m)', '', LogEntry)
| project TimeGenerated, Log
// first sort ascending and take everything up to and including the requested page...
| sort by TimeGenerated asc
| take {PageSize}*{Page}
// ...then sort the other way and keep only the requested page (trimming the last, possibly partial, page)
| top iff({PageSize}*{Page} > TotalCount, TotalCount - ({PageSize}*({Page} - 1)), {PageSize}) by TimeGenerated desc;
// The '| extend' is not needed if the logs do not contain the annoying special characters
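To illustrate with hypothetical numbers: with PageSize = 50, Page = 3, and a TotalCount of 120, the take keeps the 120 oldest matching rows (capped by the data), and the top then keeps 120 - 50*2 = 20 rows, i.e. the partial third page, newest first.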
I'm trying to scale down the SQL Azure Database from P3 to S2 but I'm getting error below.
Database scale operation from P3 Premium to S2 Standard failed for xxDB.
ErrorCode: undefined
ErrorMessage: The edition 'Standard' does not support the database max size '500 GB'.
What's the best way to scale down?
It seems you have to first change the DB max size, and then downgrade the tier. Below is the entire syntax for changing it; I don't have an Azure DB to play with.
ALTER DATABASE database_name
{
MODIFY NAME =new_database_name
| MODIFY ( <edition_options> [, ... n] )
| COLLATE collation_name
| SET { <db_update_options> }
| ADD SECONDARY ON SERVER <partner_server_name>
[WITH (<add-secondary-option>::= [, ... n] ) ]
| REMOVE SECONDARY ON SERVER <partner_server_name>
| FAILOVER
| FORCE_FAILOVER_ALLOW_DATA_LOSS
}
<edition_options> ::=
{
MAXSIZE = { 100 MB | 500 MB |1 | 5 | 10 | 20 | 30 … 150 … 500 } GB
| EDITION = { 'web' | 'business' | 'basic' | 'standard' | 'premium' }
| SERVICE_OBJECTIVE =
{ 'S0' | 'S1' | 'S2' | 'S3'| 'P1' | 'P2' | 'P3' | 'P4'| 'P6' | 'P11'
{ ELASTIC_POOL (name = <elastic_pool_name>) }
}
}
Then try downgrading the tier.
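For example, a rough two-step sketch (the database name comes from the error message above; the 250 GB Standard-tier max size is an assumption, adjust it to your target tier):
ALTER DATABASE xxDB
MODIFY (MAXSIZE = 250 GB);

ALTER DATABASE xxDB
MODIFY (EDITION = 'standard', SERVICE_OBJECTIVE = 'S2');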
References:
https://azure.microsoft.com/en-us/documentation/articles/sql-database-scale-up/
In the above link, look for the section below:
NOTE:
Changing your database pricing tier does not change the max database size. To change your database max size use Transact-SQL (T-SQL) or PowerShell.
You should be able to scale down a database and change edition (e.g. from Premium to Standard) without first changing the MaxSize property. Can you share how you did this (e.g. T-SQL, Azure portal, PowerShell, REST API call, C# client SDK)? Is it possible that in your code you took the existing database MaxSize property and submitted that in your database update request? Because that would cause the error you saw (Standard doesn't support 500 GB). The answer indicating that you need to separately scale and change MaxSize is correct currently when you scale up from Standard to Premium and want to get the benefit of the greater storage allocation with Premium, but that apparently wasn't what you were doing.
We are looking to make the behavior more intelligent here, but in the meantime it's possible you hit a bug.