I'm trying to create a KQL query to capture all private endpoints and check whether their bytes in or bytes out equal null (0); however, when I run a query for private endpoints, all I get is basic information.
Resources
| where type =~ 'Microsoft.network/privateendpoints'
| mvexpand ProvisioningState = properties.provisioningState
| mvexpand PLSprop = properties.networkInterfaces
| mvexpand PLSprop2 = properties.subnet
Is there a way to get bytes in or out and their values, or any that equal 0?
Thanks
Update:
Documentation typo ("out" instead of "in") has been fixed
Microsoft.Network/privateEndpoints
Metric | Exportable via Diagnostic Settings? | Metric Display Name | Unit | Aggregation Type | Description | Dimensions
PEBytesIn | Yes | Bytes In | Count | Total | Total number of Bytes In* | No Dimensions
PEBytesOut | Yes | Bytes Out | Count | Total | Total number of Bytes Out | No Dimensions
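Based on that metric reference, one option is to look at the exported metrics rather than Azure Resource Graph. Below is a minimal sketch, assuming the PEBytesIn/PEBytesOut platform metrics are exported to a Log Analytics workspace via diagnostic settings; the one-day window and the zero-byte check are illustrative:
AzureMetrics
| where TimeGenerated >= ago(1d)
| where MetricName in ("PEBytesIn", "PEBytesOut")
| summarize TotalBytes = sum(Total) by Resource, MetricName
| where TotalBytes == 0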
Hi, I've been trying to convert a query from bytes to GB, but I'm getting some strange results. The same component also shows up repeatedly with an incrementally increasing size, which makes sense since we are getting the bytes used, which we would expect to increase, but I would only like the latest set of data (from when the query was run). Can anyone see where I'm going wrong?
AzureMetrics
| where TimeGenerated >=ago(1d)
| where Resource contains "LB"
| where MetricName contains "ByteCount"
| extend TotalGB = Total/1024
| summarize by Resource, TimeGenerated, TotalGB, MetricName, UnitName
| sort by TotalGB desc
| render piechart
In the table below, loadbal-1 reports several times in a short window, and the same goes for loadbal-2 and loadbal-3. I'd like to capture these all in a single line. I also think I might have messed up the "TotalGB" part of the query (converting bytes to GB).
I would only like the latest set of data (when the query was run)
You can use the arg_max() aggregation function.
For example:
AzureMetrics
| where TimeGenerated >= ago(1d)
| where Resource has "LoadBal"
| where MetricName == "ByteCount"
| summarize arg_max(TimeGenerated, *) by Resource
I've been trying to convert a query from bytes to GB, however I'm getting some strange results... I think I might have messed up the query for "TotalGB" (converting bytes to GB)
If the raw data is in bytes, then you need to divide it by exp2(30) (or 1024*1024*1024) to get the value in GB.
Or, you can use the format_bytes() function instead
For example:
print bytes = 18027051483.0
| extend gb_1 = format_bytes(bytes, 2),
gb_2 = bytes/exp2(30),
gb_3 = bytes/1024/1024/1024
bytes | gb_1 | gb_2 | gb_3
18027051483 | 16.79 GB | 16.7889999998733 | 16.7889999998733
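Putting the two suggestions together, a sketch of the original load balancer query with both fixes applied (arg_max() to keep only the latest sample per resource, and exp2(30) for the bytes-to-GB conversion) might look like this:
AzureMetrics
| where TimeGenerated >= ago(1d)
| where Resource has "LoadBal"
| where MetricName == "ByteCount"
| summarize arg_max(TimeGenerated, *) by Resource   // latest sample per resource only
| extend TotalGB = Total / exp2(30)                 // bytes -> GB
| project Resource, TimeGenerated, MetricName, UnitName, TotalGB
| sort by TotalGB desc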
I am using the following query to review inbound connections of VMs:
// the machines of interest
let ips=materialize(ServiceMapComputer_CL
| summarize ips=makeset(todynamic(Ipv4Addresses_s)) by MonitoredMachine=ResourceName_s
| mvexpand ips to typeof(string));
let StartDateTime = datetime(2020-07-01T00:00:00Z);
let EndDateTime = datetime(2021-01-01T01:00:00Z);
VMConnection
| where Direction == 'inbound'
| where TimeGenerated > StartDateTime and TimeGenerated < EndDateTime
| join kind=inner (ips) on $left.DestinationIp == $right.ips
| summarize sum(LinksEstablished) by Computer, Direction, SourceIp, DestinationIp, DestinationPort, RemoteDnsCanonicalNames, Protocol
There are a few IP addresses that I would like to filter out because they are not useful and could cause confusion. Any tips on how I could filter out IP addresses such as 10.30.0.0/20 and 10.40.0.0/25 from the result?
It is not quite clear what your input data looks like or how you define the IPs to filter out.
Therefore, the answer below is to get you started:
let ServiceMapComputer_CL = datatable(Ipv4Addresses_s:string, ResourceName_s:string)
[
'10.0.30.0/20', 'a',
'10.40.0.0/25', 'a',
'11.1.30.0/20', 'b', // only record that will be left
];
ServiceMapComputer_CL
| where not(ipv4_is_match(Ipv4Addresses_s, '10.0.30.0') or ipv4_is_match(Ipv4Addresses_s, '10.40.0.0'))
| distinct Ipv4Addresses_s, ResourceName_s
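Applied to the original VMConnection query (reusing the ips, StartDateTime and EndDateTime definitions from the question, and assuming it is the source address you want to exclude), the filter could be sketched as follows; ipv4_is_match() also accepts CIDR notation, so the /20 and /25 ranges can be passed directly:
VMConnection
| where Direction == 'inbound'
| where TimeGenerated > StartDateTime and TimeGenerated < EndDateTime
| where not(ipv4_is_match(SourceIp, '10.30.0.0/20') or ipv4_is_match(SourceIp, '10.40.0.0/25'))
| join kind=inner (ips) on $left.DestinationIp == $right.ips
| summarize sum(LinksEstablished) by Computer, Direction, SourceIp, DestinationIp, DestinationPort, RemoteDnsCanonicalNames, Protocol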
Please also note that the 'mvexpand' operator should be replaced with 'mv-expand': the semantics of the two are different ('mvexpand' is the deprecated version, and it also has an internal limitation of expanding only 128 values by default, which can cause incorrect results to be returned).
I'm trying to create a custom metric alert based on some metrics in my Application Insights logs. Below is the query I'm using:
let start = customEvents
| where customDimensions.configName == "configName"
| where name == "name"
| extend timestamp, correlationId = tostring(customDimensions.correlationId), configName = tostring(customDimensions.configName);
let ending = customEvents
| where customDimensions.configName == "configName"
| where name == "anotherName"
| where customDimensions.taskName == "taskName"
| extend timestamp, correlationId = tostring(customDimensions.correlationId), configName = tostring(customDimensions.configName), name= name, nameTimeStamp= timestamp ;
let timeDiffs = start
| join (ending) on correlationId
| extend timeDiff = nameTimeStamp - timestamp
| project timeDiff, timestamp, nameTimeStamp, name, anotherName, correlationId;
timeDiffs
| summarize AggregatedValue=avg(timeDiff) by bin(timestamp, 1m)
When I run this query in the Analytics page I get results; however, when I try to create a custom metric alert, I get the error: Search Query should contain 'AggregatedValue' and 'bin(timestamp, [roundTo])' for Metric alert type.
The only response I found suggested adding AggregatedValue, which I already have, so I'm not sure why the custom metric alert page is giving me this error.
I found what was wrong with my query. Essentially, the aggregated value needs to be numeric, but AggregatedValue=avg(timeDiff) produces a timespan value; since it was in seconds, it was a bit hard to notice. Converting it to an int solves the problem.
I have just updated the last bit as follows:
timeDiffs
| summarize AggregatedValue=toint(avg(timeDiff)/time(1ms)) by bin(timestamp, 5m)
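To see why the division works, here is a small standalone sketch: dividing one timespan by another yields a plain number, which toint() then turns into the numeric value the alert expects (the 90-second value is only an illustration):
print timeDiff = totimespan('00:01:30')
| extend AggregatedValue = toint(timeDiff / time(1ms))   // 90000 milliseconds, a plain numeric value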
This brings another challenge with 'Aggregate on' while creating the alert, as AggregatedValue is not part of the grouping that comes after the by statement.
When grabbing search results using the Azure Log Analytics Search REST API,
I'm able to receive only the first 5000 results (as per the specs at the top of the document), but I know there are many more (based on the "total" attribute in the metadata of the response).
Is there a way to paginate so as to get the entire result set?
One hacky way would be to attempt to break down the desired time-range iteratively until the "total" is less than 5000 for that timeframe, and do this process iteratively for the entire desired time-range - but this is guesswork that will cost many redundant requests.
While there doesn't appear to be a way to paginate using the REST API itself, you can use your query to perform the pagination. The two key operators here are top and skip.
Suppose you want page n with page size x (starting at page 1); then append to your query:
query | skip (n-1) * x | top x
For a full reference list, see https://learn.microsoft.com/en-us/azure/log-analytics/log-analytics-search-reference
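For example, using that formula with the (legacy) search language the reference describes, page 3 with a page size of 50 would be requested as:
<your query> | skip 100 | top 50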
Yes, the skip operator is not available anymore, but if you want to build pagination there is still an option. You need to get the total count of entries, use some simple math, and apply two opposite sortings.
Prerequisites for this query are the values ContainerName, Namespace, Page, and PageSize.
I'm using it in a Workbook where these values are set by fields.
let containers = KubePodInventory
| where ContainerName matches regex '^.*{ContainerName}$' and Namespace == '{Namespace}'
| distinct ContainerID
| project ContainerID;
let TotalCount = toscalar(ContainerLog
| where ContainerID in (containers)
| where LogEntry contains '{SearchText}'
| summarize CountOfLogs = count()
| project CountOfLogs);
ContainerLog
| where ContainerID in (containers)
| where LogEntry contains '{SearchText}'
| extend Log=replace(@'(\x1b\[[0-9]*m|\x1b\[0 [0-9]*m)', '', LogEntry)
| project TimeGenerated, Log
| sort by TimeGenerated asc
| take {PageSize}*{Page}
| top iff({PageSize}*{Page} > TotalCount, TotalCount - ({PageSize}*({Page} - 1)) , {PageSize}) by TimeGenerated desc;
// The '| extend' is not needed if the logs do not contain these annoying special characters
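The same two-sort idea can be sketched without the Workbook parameters, with hard-coded values (here page 2 with a page size of 100, and the container/search filters dropped for brevity), to make the pattern easier to test on its own:
let TotalCount = toscalar(ContainerLog | summarize count());
ContainerLog
| project TimeGenerated, LogEntry
| sort by TimeGenerated asc
| take 200                                                                 // PageSize * Page = 100 * 2
| top iff(200 > TotalCount, toint(TotalCount) - 100, 100) by TimeGenerated desc  // rows that belong to page 2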
I'm trying to scale down my Azure SQL database from P3 to S2, but I'm getting the error below.
Database scale operation from P3 Premium to S2 Standard failed for xxDB.
ErrorCode: undefined
ErrorMessage: The edition 'Standard' does not support the database max size '500 GB'.
What's the best way to scale down?
It seems you have to first change the database max size and then downgrade the tier. Below is the full syntax for making the change; I don't have an Azure DB to play with.
ALTER DATABASE database_name
{
MODIFY NAME = new_database_name
| MODIFY ( <edition_options> [, ... n] )
| COLLATE collation_name
| SET { <db_update_options> }
| ADD SECONDARY ON SERVER <partner_server_name>
[WITH (<add-secondary-option>::= [, ... n] ) ]
| REMOVE SECONDARY ON SERVER <partner_server_name>
| FAILOVER
| FORCE_FAILOVER_ALLOW_DATA_LOSS
}
<edition_options> ::=
{
MAXSIZE = { 100 MB | 500 MB |1 | 5 | 10 | 20 | 30 … 150 … 500 } GB
| EDITION = { 'web' | 'business' | 'basic' | 'standard' | 'premium' }
| SERVICE_OBJECTIVE =
{ 'S0' | 'S1' | 'S2' | 'S3' | 'P1' | 'P2' | 'P3' | 'P4' | 'P6' | 'P11'
  | { ELASTIC_POOL (name = <elastic_pool_name>) }
}
}
Then try downgrading the tier.
References:
https://azure.microsoft.com/en-us/documentation/articles/sql-database-scale-up/
In the above link, look for the section below:
NOTE:
Changing your database pricing tier does not change the max database size. To change your database max size use Transact-SQL (T-SQL) or PowerShell.
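Putting that note together with the syntax above, a tentative two-step downgrade for the database in the error message might look like the following (250 GB is assumed here as the largest max size the Standard edition supports):
-- Step 1: reduce the max size first (Standard does not allow 500 GB)
ALTER DATABASE xxDB
MODIFY (MAXSIZE = 250 GB);
-- Step 2: then change the edition and service objective
ALTER DATABASE xxDB
MODIFY (EDITION = 'standard', SERVICE_OBJECTIVE = 'S2');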
You should be able to scale down a database and change the edition (e.g. from Premium to Standard) without first changing the MaxSize property. Can you share how you did this (e.g. T-SQL, Azure portal, PowerShell, REST API call, C# client SDK)? Is it possible that in your code you took the existing database MaxSize property and submitted it in your database update request? That would cause the error you saw (Standard doesn't support 500 GB). The answer indicating that you need to separately scale and change MaxSize is currently correct when you scale up from Standard to Premium and want the benefit of Premium's greater storage allocation, but that apparently isn't what you were doing.
We are looking to make the behavior more intelligent here, but in the meantime it's possible you hit a bug.