I'm writing a log query for an Azure Monitor alert rule that should fire when a VM's average CPU over the last hour is above some value, say 80. The condition should evaluate true when the query finds the VM's average CPU for the last 1 hour greater than 80, which then fires the alert. But I don't want duplicate alerts: if the CPU stays above 80 for 3 hours, it shouldn't trigger 3 alerts. I'm looking for one alert only for as long as the current alert is still active. Also, is there any condition to close the fired alert once it clears, by integrating with an Azure DevOps board work item?
KQL query:
Perf
| where TimeGenerated > ago(1h)
| where CounterName == "% Processor Time" and InstanceName == "_Total" and Countervalue > 80
| summarize avg(CounterValue) by Computer, Countervalue
I don't want to fire an alert if one was already fired for the previous result, to avoid duplication. How do I set this up in the query and in the alert rule condition in Azure Monitor?
A couple of problems:
The query filters on Countervalue > 80, which means only samples above 80 are averaged: if the CPU load spikes above 80 even once during that hour, you are guaranteed an average of more than 80. That condition just needs to be removed. (Note also the casing: the column is CounterValue.)
The query summarizes by Countervalue, which gives you one row per distinct CPU usage value, which doesn't really make sense.
Should probably look like this:
Perf
| where TimeGenerated >= ago(1h)
| where CounterName == "% Processor Time" and InstanceName == "_Total"
| summarize avg(CounterValue) by Computer
Now to the problem that you do not want to trigger the alarm again if it has already fired: the solution here would be to look at the previous hour's average as well, and only trigger when the average has just risen above 80. So something like this:
Perf
| where TimeGenerated >= ago(2h)
| where CounterName == "% Processor Time" and InstanceName == "_Total"
| summarize AvgCPU = avg(CounterValue) by bin(TimeGenerated, 1h), Computer
| order by Computer asc, TimeGenerated asc
| extend ShouldTrigger = prev(Computer) == Computer and prev(AvgCPU) <= 80 and AvgCPU > 80 // guard prev() so rows from different computers don't mix
| summarize arg_max(TimeGenerated, ShouldTrigger) by Computer // keep the latest bin per computer
Then set your alert condition to be `ShouldTrigger == true`.
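If the alert rule can't key off a column value directly, a common pattern is to fire on row count instead. A minimal sketch, appended to the query above (the hourly evaluation frequency is an assumption chosen to match the 1-hour bins):
| where ShouldTrigger
// Alert rule condition (assumed settings): number of results greater than 0,
// evaluated every 1 hour over a 2-hour window.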
Related
Hi Kusto Query Language (KQL) lovers,
I am trying to write a query in Kusto Query Language (KQL) that compares the count of APIs that failed today in a specific time window (let's say 2:30 p.m. to 3 p.m.) with the count of APIs that failed yesterday in the same window.
For instance, if operation X failed 10 times with failure code 400 in the last 30 minutes today, I need to see alongside it the count with which operation X failed yesterday in the same time frame.
For this purpose, I used the scalar function bin() and wrote the following query, which extracts data from the requests table:
requests
| where timestamp > ago(1d)
| where client_Type != "Browser"
| where cloud_RoleName == 'X'
| where name != 'HTTP GET'
| where success == false
| summarize count() by bin(timestamp, 1d), name, resultCode
| sort by timestamp
Using timestamp > ago(1d), I was shown the APIs that failed today and yesterday, but there is no clear comparison between the two dates.
Is there any way I can display the count of APIs that failed yesterday in a separate column, adjacent to the count_ column that has the count of the corresponding APIs that failed today?
I know the project operator adds an extra column, but I don't know how to incorporate yesterday's counts into it.
Kindly add to my knowledge any relevant KQL function or operator that can achieve this task.
The other way I tried was to define two variables, startDateTime and endDateTime, to restrict the data to a specific time window, as shown below.
Blank output when I defined variables for the selected time frame:
let startDateTime = todatetime("2023-02-07 06:35:00.0");
let endDateTime = todatetime("2023-02-07 06:35:00.0");
requests
| where timestamp > startDateTime and timestamp < endDateTime
| where client_Type != "Browser"
| where cloud_RoleName == 'web-memberappservice-weu-prod'
| where name != 'HTTP GET'
| where success == false
| summarize count() by bin(timestamp, 1d), name, resultCode
| sort by timestamp
I searched for a KQL query that compares today's failed-API count with yesterday's, and checked some results from Stack Overflow, but they did not help me achieve the desired result.
I tried these links, but the queries in them do not reflect what I want to achieve:
Best way to show today Vs yesterday Vs week in KQL azure monitror
kql query to compare the hour which has minimum number of TriggersStarted from last week to today past hour TriggersStarted
What am I expecting?
A query that displays the count of APIs that failed yesterday in a separate column, adjacent to the count_ column holding today's counts, as described above.
Kindly identify any relevant function or operator that can help in this regard.
* The where clause was added for performance reasons.
// Sample data generation. Not part of the solution.
let requests = materialize(range i from 1 to 100000 step 1 | extend timestamp = ago(2d * rand()), name = tostring(dynamic(["PUT", "POST", "PATCH", "GET"])[toint(rand(4))]), resultCode = 400 + toint(rand(3)));
// Solution starts here.
let _period = 30m;
requests
// between (T .. span) covers [T, T + span]: the last 30 minutes today, and the same 30 minutes yesterday.
| where timestamp between (ago(_period) .. _period)
     or timestamp between (ago(_period + 1d) .. _period)
| summarize todayCount = countif(timestamp between (ago(_period) .. _period)),
            YesterdayCount = countif(timestamp between (ago(_period + 1d) .. _period))
  by name, resultCode
| sort by name asc, resultCode asc
| name  | resultCode | todayCount | YesterdayCount |
|-------|-----------:|-----------:|---------------:|
| GET   | 400        | 91         | 100            |
| GET   | 401        | 98         | 98             |
| GET   | 402        | 109        | 89             |
| PATCH | 400        | 93         | 77             |
| PATCH | 401        | 84         | 85             |
| PATCH | 402        | 74         | 82             |
| POST  | 400        | 78         | 85             |
| POST  | 401        | 96         | 77             |
| POST  | 402        | 85         | 102            |
| PUT   | 400        | 98         | 81             |
| PUT   | 401        | 97         | 85             |
| PUT   | 402        | 77         | 83             |
I am using Azure custom metrics to store application usage metrics, exporting the stats every 5 minutes. I am using the query below to create an aggregated series without any gaps.
I expect the start to be 5/10/2020, 12:00:00.000 AM and the end to be 5/14/2020, 12:00:00.000 AM. In my results the start is fine, but the end is 5/10/2020, 10:35:00.000 AM. I am running this query on 5/13/2020, 4:09:07.878 AM. The min timestamp in my data is 5/11/2020, 12:54:06.489 PM and the max is 5/12/2020, 2:32:47.459 PM.
What is wrong with my query? Why doesn't make-series give rows beyond day 1?
let start = floor(ago(1d),1d);
let end = floor(now(+1d),1d);
customMetrics
| where timestamp >= start
| where name == "endpoint_access_count_count_view"
| extend customMetric_valueSum = iif(itemType == 'customMetric',valueSum,todouble(''))
| make-series n_req = sum(customMetric_valueSum) on timestamp from start to end step 5m
| mvexpand n_req,timestamp
| extend timestamp=todatetime(timestamp),n_req=toint(n_req)
mvexpand, unlike mv-expand (note the hyphen), has a default limit of 128 values, so your results get truncated.
https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/mvexpandoperator
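If that is the culprit, the fix is likely just switching to the hyphenated operator. A minimal sketch of the last lines of the query above, otherwise unchanged:
| make-series n_req = sum(customMetric_valueSum) on timestamp from start to end step 5m
| mv-expand n_req, timestamp // mv-expand (with the hyphen) does not truncate at 128 values
| extend timestamp = todatetime(timestamp), n_req = toint(n_req)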
In Azure Workbooks, with the query below I am able to get the average of two columns for the selected time range. The smallest time range we can select is 30 minutes, but we have a requirement to show the status of the result for the last 1 minute. For that I need another column showing the last 1 minute's status.
let start = {TimeRange:start};
let grain = {TimeRange:grain};
workspace(name).site1_CL
| extend healthy = iff(Status_s == 'Connected', 100, 0)
| summarize table1 = avg(healthy) by ClientName_s
| join (
    workspace(name).site2_CL
    | extend Availability = iff(StatusDescription_s == 'OK', 100, 0)
    | summarize table2 = avg(Availability) by ClientName_s
) on ClientName_s
| extend HealthStatus = (table1 + table2) / 2
| project Client = ClientName_s, Environment = EnvName_s, HealthStatus
I require another column that shows the current status instead of the aggregation over the selected time range; this column should override the selected time range and show the last 1 minute's aggregation of the two tables.
Couldn't you just set the start to the value you need?
let start = now(-1m); //last minute
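To keep the picker-driven average and add a fixed last-minute column, a sketch along these lines might work (assuming the custom-log tables carry the standard TimeGenerated column; every other name comes from the query above):
let lastMinuteSite1 = workspace(name).site1_CL
    | where TimeGenerated > ago(1m) // fixed 1-minute window, independent of the TimeRange picker
    | extend healthy = iff(Status_s == 'Connected', 100, 0)
    | summarize LastMinuteHealth = avg(healthy) by ClientName_s;
// Join lastMinuteSite1 onto the existing result on ClientName_s to add the extra column;
// site2_CL would get the same treatment for a last-minute availability column.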
I'm trying to meticulously track interest growth and monthly payments on a loan in Excel. Instead of manually entering the amount on each first day of the month, is there a way to write an Excel IF statement so that it returns a certain value on the first day of a month and zero on all other days?
Sort of like: =IF("day is first day", $100, $0), so that I can drag the formula all the way down. Worth noting that inside the quotation marks will be a cell reference pointing to the column directly left of the formula, which contains a date.
So like this:
| Date | Payment | Balance |
|:--------:|:-------:|:-------:|
| 01/30/16 | $0 | $1000 |
| 01/31/16 | $0 | $1000 |
| 02/01/16 | $100 | $900 |
| 02/02/16 | $0 | $900 |
Try this:
=IF(A2=DATE(YEAR(A2),MONTH(A2),1),100,0)
You need the DAY Function.
=IF(DAY(A2) = 1,100,0)
You can use the DAY() function to get the day number
=IF(DAY(A1)=1,100,0)
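For the Balance column in the example table, the same drag-down approach works with a running formula (a sketch, assuming Date in column A, Payment in B, Balance in C, with the opening balance in C2):
=C2-B3
Entered in C3 and filled down, each balance is the previous balance minus that row's payment.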
In my input I have a key, a lower bound of range R1, an upper bound of range R1, and some data. I have to insert this data only after ensuring that the input range R1 does not overlap any of the ranges already present in Cassandra.
So before each insert I have to fire a select query:
key | lowerbound | upperbound | data
------+------------+------------+------------------------------------------------------------------------
1024 | 1000 | 1100 | <blob>
1024 | 1500 | 1900 | <blob>
1024 | 2000 | 2900 | <blob>
1024 | 3000 | 3900 | <blob>
1024 | 4000 | 4500 | <blob>
Case 1: given range R(S, E) = (1, 999).
This is a positive case, hence the system should insert the data.
Case 2: given range R(S, E) = (1001, 1010).
This is a negative case, hence the system should discard the data.
I have a solution that uses one range query plus one programmatic check, roughly as sketched below.
Please let me know whether this kind of problem has a solution in Cassandra, and if so, whether it can be optimized for better performance.
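A sketch of what that one range query plus programmatic check could look like (the schema is an assumption inferred from the output above: key as partition key, lowerbound as clustering column; the table name ranges is hypothetical):
-- Fetch the stored range with the greatest lowerbound that is <= the new range's end E.
-- Because stored ranges never overlap, this is the only row that can overlap (S, E).
SELECT lowerbound, upperbound
FROM ranges
WHERE key = 1024 AND lowerbound <= 999 -- E = 999 for Case 1
ORDER BY lowerbound DESC
LIMIT 1;
-- Application-side check: insert (S, E) only if no row comes back,
-- or if the returned upperbound < S.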
You don't have a better solution for your problem: this is the only way. In the future, Lightweight Transactions might help in these situations too, but for now the only option you have is to read before writing. One more consideration: make sure to avoid double insertion in concurrent situations, if that can happen in your application.
Cheers,
Carlo