I have a query to check average response times over:
Last 24 hours
24-192 hours
The difference between them as a percentage
let requests0to24HoursAgo = requests
| where timestamp > ago(24h)
| summarize last0to24HoursAverageRequestDuration=avg(duration), id=1;
let requests24to192HoursAgo = requests
| where timestamp > ago(192h)
| where timestamp < ago(24h)
| summarize last24to192HoursAverageRequestDuration=avg(duration), id=1;
let diff = requests0to24HoursAgo
| join
requests24to192HoursAgo
on id
| extend Diff = (last0to24HoursAverageRequestDuration - last24to192HoursAverageRequestDuration) / last24to192HoursAverageRequestDuration * 100
| project
["Average response (last 0-24 hours)"]=last0to24HoursAverageRequestDuration,
["Average response (last 24-192 hours)"]=last24to192HoursAverageRequestDuration,
Diff;
diff
This works perfectly in the Logs section in Azure, but as soon as I pin the query to a dashboard, the dashboard is unable to run it with the date range "Set in query" and returns NaN for two of the values.
When I click "Open Editing Pane", set it to "Set in Query" and run it, it works. When I then click "Apply", it is still broken on the dashboard.
As per the documentation, in Log Analytics a default time range of 24 hours is applied to all queries.
We have tested this in our local environment and tried overriding the time range parameter using the dashboard tile settings, which didn't help. What you are asking for looks like a feature request.
I would suggest you submit it to the feedback forum and raise the same issue over on Microsoft Q&A.
I stumbled across the same issue when the timespan I set within the query (| where timestamp > ago(7d)) was ignored in the dashboard.
I've tested the workaround with the tile settings that VenkateshDodda-MT mentioned:
Open the tile settings in the dashboard (-> Configure tile settings)
Tick Override the dashboard time settings at the tile level.
Select a greater timespan than 24h
Although there is no Set in query option like in the query editor, it would be enough to set a timespan of 30 days in your case.
I've also successfully tested it with your query.
I'm trying to visualize the browser statistics of our app hosted in Azure.
For that I'm using the nginx logs and run an Azure Log Analytics query like this:
ContainerLog
| where LogEntrySource == "stdout" and LogEntry has "nginx"
| extend logEntry=parse_json(LogEntry)
| extend userAgent=parse_user_agent(logEntry.nginx.http_user_agent, "browser")
| extend browser=parse_json(userAgent)
| summarize count=count() by tostring(browser.Browser.Family)
| sort by ['count']
| render piechart with (legend=hidden)
Then I'm getting this diagram, which is exactly what I want:
But the query is very slow. If I set the time range to more than just the last few hours, it takes several minutes or doesn't complete at all.
My solution is to use a search job like this:
ContainerLog
| where LogEntrySource == "stdout" and LogEntry has "nginx"
| extend d=parse_json(LogEntry)
| extend user_agent=parse_user_agent(d.nginx.http_user_agent, "browser")
| extend browser=parse_json(user_agent)
It creates a new table BrowserStats_SRCH on which I can do this search query:
BrowserStats_SRCH
| summarize count=count() by tostring(browser.Browser.Family)
| sort by ['count']
| render piechart with (legend=hidden)
This is much faster now and only takes some seconds.
But my problem is: how can I keep this up to date? Preferably this search job would run once a day automatically and refresh the BrowserStats_SRCH table, so that new queries on that table always run on the most recent logs. Is this possible? Right now I can't even trigger the search job manually again, because then I get the error "A destination table with this name already exists".
In the end I would like to have a deeplink to the pie chart with the browser stats without the need to do any further click. Any help would be appreciated.
But my problem is: how can I keep this up to date? Preferably this search job would run once a day automatically and refresh the BrowserStats_SRCH table, so that new queries on that table always run on the most recent logs. Is this possible?
You can leverage the API to create a search job, then use a timer-triggered Azure Function or Logic App to call that API on a schedule.
PUT https://management.azure.com/subscriptions/00000000-0000-0000-0000-00000000000/resourcegroups/testRG/providers/Microsoft.OperationalInsights/workspaces/testWS/tables/Syslog_suspected_SRCH?api-version=2021-12-01-preview
with a request body containing the query:
{
  "properties": {
    "searchResults": {
      "query": "Syslog | where * has 'suspected.exe'",
      "limit": 1000,
      "startSearchTime": "2020-01-01T00:00:00Z",
      "endSearchTime": "2020-01-31T00:00:00Z"
    }
  }
}
Or you can use the Azure CLI:
az monitor log-analytics workspace table search-job create --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name HeartbeatByIp_SRCH --search-query 'Heartbeat | where ComputerIP has "00.000.00.000"' --limit 1500 --start-search-time "2022-01-01T00:00:00.000Z" --end-search-time "2022-01-08T00:00:00.000Z" --no-wait
Right now I can't even trigger the search job manually again, because then I get the error "A destination table with this name already exists".
Before you start the job as described above, remove the old results table using an API call:
DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/{tableName}?api-version=2021-12-01-preview
Optionally, you could check the status of the job using this API before you delete the table, to make sure it is not InProgress or Deleting.
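For example (this is my reading of the Tables API; verify against the current reference), a GET on the same endpoint returns the table, and its properties.provisioningState tells you whether the search job is still running:
GET https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/{tableName}?api-version=2021-12-01-preview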
I currently have an alert set up for Data Factory that sends an email alert if a pipeline runs longer than 120 minutes, following this tutorial: https://www.techtalkcorner.com/long-running-azure-data-factory-pipelines/. So when a pipeline does in fact run longer than the expected time, I do receive an alert; however, I am also getting additional, unexpected alerts.
My query looks like:
ADFPipelineRun
| where Status =="InProgress" // Pipeline is in progress
| where RunId !in (( ADFPipelineRun | where Status in ("Succeeded","Failed","Cancelled") | project RunId ) ) // Subquery, pipeline hasn't finished
| where datetime_diff('minute', now(), Start) > 120 // It has been running for more than 120 minutes
I received an alert email on September 28th, of course saying a pipeline was running longer than 120 minutes, but when I tried to find the pipeline in the Azure Data Factory pipeline runs, nothing showed up. In the alert email there is a button that says "View the alert in Azure monitor"; when I go there, I can press "View Query Results" above the shown query. There I can re-enter the query above, filter the date to show all pipelines running longer than 120 minutes since September 27th, and it returns 3 pipelines.
Something I noticed about these pipelines is the end time column:
I'm thinking that at some point the UTC time is not properly configured and for that reason, maybe the alert is triggered? Is there something I am doing wrong, or a better way to do this to avoid a bunch of false alarms?
To create preemptive warnings for long-running jobs:
Create the activity.
Click on a blank space.
Follow the path: Settings > Elapsed time metric.
Refer to Operationalize Data Pipelines - Azure Data Factory.
I'm not sure if you're seeing false alerts. What you've shown here looks like the correct behavior.
You need to keep in mind:
Duration threshold should be offset by the time it takes for the logs to appear in Azure Monitor.
The email alert takes you to the query that triggered the event. Your query only shows "InProgress" statuses, so the End property is not set/updated. You'll need to extend your query to look at one of the other statuses to see the actual duration.
Run another query with the RunId of the suspect runs to inspect the durations.
ADFPipelineRun
| where RunId == 'bf461c8b-0b1e-43c4-9cdf-7d9f7ccc6f06'
| distinct TimeGenerated, OperationName, RunId, Start, End, Status
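For example, here is a sketch (assuming the standard ADFPipelineRun columns, including PipelineName) that computes the actual duration from the terminal record of each run:
ADFPipelineRun
| where Status in ("Succeeded", "Failed", "Cancelled")
// duration in minutes between the recorded start and end of the run
| extend DurationMinutes = datetime_diff('minute', End, Start)
| where DurationMinutes > 120
| project RunId, PipelineName, Start, End, Status, DurationMinutes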
We have a requirement to get the status of a Windows service when it is started and stopped. To do that I have written a query, but I am facing an issue when joining 2 tables to get the output.
I have tried using inner and left outer joins, but I am still getting duplicates.
Event
| where EventLog == "System" and EventID == 7036 and Source == "Service Control Manager"
| parse kind=relaxed EventData with * '<Data Name="param1">' Windows_Service_Name '</Data><Data Name="param2">' Windows_Service_State '</Data>' *
| where Windows_Service_State == "running" and Windows_Service_Name == "Microsoft Monitoring Agent Azure VM Extension Heartbeat Service"
| extend startedtime = TimeGenerated
| join (
Event
| where EventLog == "System" and EventID == 7036 and Source == "Service Control Manager"
| parse kind=relaxed EventData with * '<Data Name="param1">' Windows_Service_Name '</Data><Data Name="param2">' Windows_Service_State '</Data>' *
| where Windows_Service_State == "stopped" and Windows_Service_Name == "Microsoft Monitoring Agent Azure VM Extension Heartbeat Service"
| extend stoppedtime = TimeGenerated
) on Computer
| extend downtime = startedtime - stoppedtime
| project Computer, Windows_Service_Name, stoppedtime, startedtime, downtime
| top 10 by Windows_Service_Name desc
We want to get the number of times the service started and stopped. If the service restarted multiple times in a day, we get duplicate timings in startedtime when joining. Please have a look at this link: (https://ibb.co/JzqxjC0)
I am not sure I fully understand what is going on, since I don't have access to the data, but I can see you are using the default join flavor.
The default is inner unique:
The inner-join function is like the standard inner-join from the SQL world. An output record is produced whenever a record on the left side has the same join key as the record on the right side.
This means a new line in the result is created for every match between the left and the right side. Therefore, let's assume you have a computer that was restarted twice, so it has 2 'stopped' lines and 2 'running' lines. That will produce 4 rows in the Kusto answer.
Looking at your picture, this makes sense to me, because you have lines with negative downtime, which I guess should not be possible.
What I would do is look for an identifier that is unique for every service run on a Computer. Then you can join on that and stay safe from generating data that you don't want.
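If no such identifier exists, a join-free alternative (a sketch, untested against your data) is to serialize the events in time order and pair each 'running' event with the immediately preceding 'stopped' event using prev():
Event
| where EventLog == "System" and EventID == 7036 and Source == "Service Control Manager"
| parse kind=relaxed EventData with * '<Data Name="param1">' Windows_Service_Name '</Data><Data Name="param2">' Windows_Service_State '</Data>' *
| where Windows_Service_Name == "Microsoft Monitoring Agent Azure VM Extension Heartbeat Service"
| where Windows_Service_State in ("stopped", "running")
| sort by Computer asc, TimeGenerated asc
// prev() works on the serialized output of sort
| extend prevComputer = prev(Computer), prevState = prev(Windows_Service_State), prevTime = prev(TimeGenerated)
// keep only stopped -> running transitions on the same computer
| where Computer == prevComputer and Windows_Service_State == "running" and prevState == "stopped"
| project Computer, Windows_Service_Name, stoppedtime = prevTime, startedtime = TimeGenerated, downtime = startedtime - stoppedtime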
I have a question about azure log analytics alerts, in that I don't quite understand how the time frame works within the context of setting up an alert based on an aggregated value.
I have the code below:
Event
| where Source == "EventLog" and EventID == 6008
| project TimeGenerated, Computer
| summarize AggregatedValue = count(TimeGenerated) by Computer, bin_at(TimeGenerated, 24h, datetime(now()))
For time window : 24/03/2019, 09:46:29 - 25/03/2019, 09:46:29
In the above, the alert configuration interface insists on adding the bin_at(TimeGenerated, 24h, datetime(now())), so I add the function, passing the arguments for a 24-hour time period. If it is already adding this, then what is the point of the time frame?
Basically, the result I am looking for is capturing this event over a 24-hour period and alerting when the event count is over 2. I don't understand why a time window is also necessary on top of this, because I just want to run the code every five minutes and alert if it detects more than two instances of this event.
Can anyone help with this?
AFAIK you may use a query something like the one shown below to accomplish your requirement of capturing the required event over a 24-hour period.
Event
| where Source == "EventLog" and EventID == 6008
| where TimeGenerated > ago(24h)
| summarize AggregatedValue = any(EventID) by Computer, bin(TimeGenerated, 1s)
The '1s' in this sample query is the time frame by which we aggregate the output from the Log Analytics workspace repository. For more information, refer to https://learn.microsoft.com/en-us/azure/kusto/query/summarizeoperator
And to create an alert, you may have to do the following:
Go to the Azure portal -> YOURLOGANALYTICSWORKSPACE -> Monitoring tile -> Alerts -> Manage alert rules -> New alert rule.
Add condition -> Custom log search -> paste any of the above queries under the 'Search query' section.
Type '2' under the 'Threshold value' parameter of the 'Alert logic' section and click 'Done'.
Under the 'Action Groups' section, select an existing action group or create a new one as explained in the article mentioned below.
Update 'Alert Details' and click 'Create alert rule'.
https://learn.microsoft.com/en-us/azure/azure-monitor/platform/action-groups
Hope this helps!! Cheers!! :)
To answer the question in the comments: yes, the alert insists on adding the bin function, and that's why I provided a relevant query with the bin function using '1s' and tried to explain it in my previous answer.
If you put '1s' in the bin function, then you fetch output from Log Analytics aggregating the value of any EventID over a timespan of 1 second. The output would look something like shown below, where aaaaaaa stands for a VM name and x for a particular time.
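Computer    TimeGenerated    AggregatedValue
aaaaaaa     x                6008
aaaaaaa     x                6008
(one row per 1-second bin in which the event occurred; each x would be a different second)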
If you put '24h' instead of '1s' in the bin function, then you fetch output from Log Analytics aggregating the value of any EventID over a timespan of 24 hours. The output would look something like shown below, again with aaaaaaa as a VM name and x as a particular time.
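Computer    TimeGenerated    AggregatedValue
aaaaaaa     x                6008
(a single row for the whole 24-hour bin)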
So in this case we should not use '24h' in the bin function together with the 'any' aggregation, because then we would see only one output row per 24-hour timespan, and that doesn't help you find the event occurrence count with a query that uses 'any' for aggregation. Instead, you may use the 'count' aggregation if you want '24h' in the bin function. The query would then look something like shown below.
Event
| where Source == "EventLog" and EventID == 6008
| where TimeGenerated > ago(24h)
| summarize AggregatedValue = count(EventID) by Computer, bin(TimeGenerated, 24h)
The output of this query would look something like shown below, where aaaaaaa stands for a VM name, x for a particular time, and y and z for some numbers.
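Computer    TimeGenerated    AggregatedValue
aaaaaaa     x                y
aaaaaaa     x                z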
One other note: all the above queries and outputs are in the context of setting up an alert based on an aggregated value, i.e., an alert that uses 'Metric measurement' under the 'Alert logic based on' section. In other words, an AggregatedValue column is expected in the alert query when you choose 'Metric measurement'. But when you say you get a count of the events, then, if I am not wrong, you may be choosing 'Number of results' under 'Alert logic based on', which does not require any aggregation column in the query.
Hope this clarifies!! Cheers!!
Pretty new to AI queries so any help will be much appreciated.
We have a host of custom events for particular actions, for example booking appointments, ordering products, and setting an address. I would like to run a query to look at users who both ordered a product and set their address in the same session. I can get a count and dcount of either event happening, but I am struggling to specify that both happen in the same session. We capture the User_authID as well as the session id with the custom events. Any ideas?
Thanks,
Chris
There are a few ways to do this. I find using the in operator and a subquery to be the easiest to read. Here's an example:
let timeRange = ago(1d);
let sessionsWithBothEvents = customEvents
| where timestamp > timeRange
| summarize CountEvent1=countif(name == "event1"), CountEvent2=countif(name == "event2") by session_Id
| where CountEvent1 > 0 and CountEvent2 > 0
| project session_Id;
customEvents
| where timestamp > timeRange
| where session_Id in (sessionsWithBothEvents)
// Here you have all the events in sessions that contained at least one instance of each event
// From here you can dcount users, etc.
It is important to note that this approach will only work for up to 1 million session ids that match the criteria, due to the limitations of the in operator. See https://docs.loganalytics.io/docs/Language-Reference/Scalar-operators/in_!in-operators for more information.
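As a follow-on, here is a minimal sketch (assuming the standard Application Insights user_AuthenticatedId column; it must run together with the let statements above) that counts distinct authenticated users across those sessions:
customEvents
| where timestamp > timeRange
| where session_Id in (sessionsWithBothEvents)
| summarize UsersWithBothEvents = dcount(user_AuthenticatedId)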