Filter data from customEvents - Azure

I have data in Azure Application Insights saved in the custom events format.
Now I need to create a dashboard page on my website that will pull data from Insights and show graphs of that data.
My question is how I can filter the data in customEvents based on what is saved there, e.g. by custom event name or custom data.
Can you point me to any resource that explains how $filter, $search, and $query work?
I am looking at https://dev.applicationinsights.io/quickstart but it does not seem to be enough.
I tried to add a filter like
startswith(customEvent/name, 'BotMessageReceived')
in https://dev.applicationinsights.io/apiexplorer/events
but it is not working; it says "Something went wrong while running the query".
I have customEvents whose names start with BotMessageReceived.
Thanks
Dalvir

Update:
There is no like operator. If you want to use timestamp as a filter, you should use one of the three methods below:
customEvents
| where timestamp >= datetime('2018-11-23T00:00:00.000') and timestamp <= datetime('2018-11-23T23:59:00.000')

customEvents
| where tostring(timestamp) contains "2018-12-11"

customEvents
| where timestamp between (datetime('2018-11-23T00:00:00.000') .. datetime('2018-11-23T23:59:00.000'))
Please use this:
customEvents
| where name startswith "BotMessageReceived"
And if you use the API you mentioned above, you can use:
https://api.applicationinsights.io/v1/apps/Your_application_id/query?query=customEvents | where name startswith "BotMessageReceived"
It works on my side.
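As a sketch, if you call that REST endpoint from code rather than a browser, the query string needs to be URL-encoded; a minimal Python example (the application id below is a placeholder, not a real value):

```python
from urllib.parse import quote

# Placeholder application id -- replace with your own from the portal.
APP_ID = "your_application_id"
query = 'customEvents | where name startswith "BotMessageReceived"'

# Spaces, pipes, and quotes in the KQL must be percent-encoded
# when passed as a query-string parameter.
url = (
    "https://api.applicationinsights.io/v1/apps/"
    f"{APP_ID}/query?query={quote(query)}"
)
print(url)
```

You would then issue a GET request against that URL with your API key to receive the query result as JSON.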

Related

Azure Log Analytics: How to display AppServiceConsoleLogs AND AppServiceHTTPLogs together?

I can run the 2 queries below to view the logs for a certain time, separately.
AppServiceConsoleLogs | where TimeGenerated >= datetime('2021-04-10 14:00')
AppServiceHTTPLogs | where TimeGenerated >= datetime('2021-04-10 14:00')
How do I combine these into a single query to view the logs together?
The union operator does the job of showing all records from the specified tables.
I used the query below and did not run into the problems you mentioned:
union requests, traces
| where timestamp > ago(1d)
If you still have the problem, please share a screenshot and more detailed info.
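Applied to the two tables named in the question, the same union pattern might look like this (a sketch; adjust the filter and ordering as needed):

```kusto
union AppServiceConsoleLogs, AppServiceHTTPLogs
| where TimeGenerated >= datetime('2021-04-10 14:00')
| order by TimeGenerated asc
```

union keeps the columns of both tables; rows from a table that lacks a given column simply show an empty value there.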

Transcribing Splunk's "transaction" Command into Azure Log Analytics / Azure Data Analytics / Kusto

We're using AKS and have our container logs writing to Log Analytics. We have an application that emits several print statements in the container log per request, and we'd like to group all of those events/log lines into aggregate events, one event per incoming request, so it's easier for us to find lines of interest. So, for example, if the request started with the line "GET /my/app" and then later the application printed something about an access check, we want to be able to search through all the log lines for that request with something like | where LogEntry contains "GET /my/app" and LogEntry contains "access_check".
I'm used to queries with Splunk. Over there, this type of inquiry would be a cinch to handle with the transaction command.
But, with Log Analytics, it seems like multiple commands are needed to pull this off. Seems like I need to use extend with row_window_session in order to give all the related log lines a common timestamp, then summarize with make_list to group the lines of log output together into a JSON blob, then finally parse_json and strcat_array to assemble the lines into a newline-separated string.
Something like this:
ContainerLog
| sort by TimeGenerated asc
| extend RequestStarted = row_window_session(TimeGenerated, 30s, 2s, ContainerID != prev(ContainerID))
| summarize logLines = make_list(LogEntry) by RequestStarted
| extend parsedLogLines = strcat_array(parse_json(logLines), "\n")
| where parsedLogLines contains "GET /my/app" and parsedLogLines contains "access_check"
| project Timestamp=RequestStarted, LogEntry=parsedLogLines
Is there a better/faster/more straightforward way to be able to group multiple lines for the same request together into one event and then perform a search across the contents of that event?
After reading your question: no, there is no easy way to do that in Azure Log Analytics.
If the logs are in this format, you will need to do some extra work to meet your requirement.
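For intuition, the windowing that row_window_session performs in the query above can be sketched in plain Python: a new window starts when the container changes, the gap to the previous line exceeds 2 seconds, or the window would exceed 30 seconds. The log rows below are hypothetical:

```python
from datetime import datetime, timedelta

# Hypothetical log rows: (timestamp, container_id, log_entry).
logs = [
    (datetime(2021, 4, 10, 14, 0, 0), "c1", "GET /my/app"),
    (datetime(2021, 4, 10, 14, 0, 1), "c1", "access_check ok"),
    (datetime(2021, 4, 10, 14, 0, 10), "c1", "GET /other"),
]

# Mimic row_window_session(TimeGenerated, 30s, 2s, ContainerID != prev(...)).
sessions, current = [], None
for ts, cid, entry in sorted(logs):
    if (
        current is None
        or cid != current["cid"]
        or ts - current["last"] > timedelta(seconds=2)
        or ts - current["start"] > timedelta(seconds=30)
    ):
        current = {"cid": cid, "start": ts, "last": ts, "lines": []}
        sessions.append(current)
    current["last"] = ts
    current["lines"].append(entry)

# Like the final `where parsedLogLines contains ...` step: search the
# assembled per-request blobs rather than individual lines.
matches = [
    "\n".join(s["lines"])
    for s in sessions
    if "GET /my/app" in "\n".join(s["lines"])
    and "access_check" in "\n".join(s["lines"])
]
print(matches)
```

The first two lines fall into one window (1 s apart), while the third starts a new one (9 s gap), so only the first window matches both search terms.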

How can I access custom event values from Azure AppInsights analytics?

I am reporting some custom events to Azure; within each custom event is a value held under the customMeasurements object, named 'totalTime'.
The event itself looks like this:
loading-time: {
    customMeasurements: {
        totalTime: 123
    }
}
I'm trying to create a graph of the average total time of all the events reported to azure per hour. So I need to be able to collect and average the values within the events.
I can't seem to figure out how to access the customMeasurements values from within the Azure AppInsights Analytics. Here is some of the code that Azure provided.
union customEvents
| where timestamp between(datetime("2019-11-10T16:00:00.000Z")..datetime("2019-11-11T16:00:00.000Z"))
| where name == "loading-time"
| summarize Occurrences=count() by bin(timestamp, 1h)
| order by timestamp asc
| render barchart
This code simply counts the number of reported events within the last 24 hours and displays them per hour.
I have tried to access the customMeasurements object held in the event by doing
summarize Occurrences=avg(customMeasurements["totalTime"])
But Azure doesn't like that, so I'm doing it wrong. How can I access the values I require? I can't seem to find any documentation either.
It can be useful to project the data from the customDimensions / customMeasurements property collection into a new variable that you'll use for further aggregation. You'll normally need to cast the dimensions data to the expected type, using one of the todecimal, toint, or tostring functions.
For example, I have some extra measurements on dependency telemetry, so I can do something like this:
dependencies
| project ["ResponseCompletionTime"] = todecimal(customMeasurements.ResponseToCompletion), timestamp
| summarize avg(ResponseCompletionTime) by bin(timestamp, 1h)
Your query might look something like this:
customEvents
| where timestamp between(datetime("2019-11-10T16:00:00.000Z")..datetime("2019-11-11T16:00:00.000Z"))
| where name == "loading-time"
| project ["TotalTime"] = toint(customMeasurements.totalTime), timestamp
| summarize avg(TotalTime) by bin(timestamp, 1h)
| render barchart
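The summarize/bin step above is just a per-hour average; as a sketch of the same computation in plain Python over hypothetical exported rows:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical exported rows: (timestamp, totalTime) from customEvents.
rows = [
    ("2019-11-10T16:10:00+00:00", 100),
    ("2019-11-10T16:40:00+00:00", 140),
    ("2019-11-10T17:05:00+00:00", 200),
]

# Equivalent of `summarize avg(TotalTime) by bin(timestamp, 1h)`:
# truncate each timestamp to the hour, then average per bucket.
buckets = defaultdict(list)
for ts, total_time in rows:
    hour = datetime.fromisoformat(ts).replace(minute=0, second=0, microsecond=0)
    buckets[hour].append(total_time)

hourly_avg = {hour: sum(v) / len(v) for hour, v in sorted(buckets.items())}
for hour, avg in hourly_avg.items():
    print(hour.isoformat(), avg)
```

Here the 16:00 bucket averages two events (120.0) and the 17:00 bucket holds one (200.0), mirroring what the barchart would plot.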

Azure Log Analytics - Alerts Advice

I have a question about Azure Log Analytics alerts: I don't quite understand how the time frame works in the context of setting up an alert based on an aggregated value.
I have the code below:
Event
| where Source == "EventLog" and EventID == 6008
| project TimeGenerated, Computer
| summarize AggregatedValue = count(TimeGenerated) by Computer, bin_at(TimeGenerated, 24h, datetime(now()))
For time window : 24/03/2019, 09:46:29 - 25/03/2019, 09:46:29
In the above, the alert configuration interface insists on adding the bin_at(TimeGenerated, 24h, datetime(now())), so I add the function, passing the arguments for a 24-hour time period. If you are already adding this, then what is the point of the time window?
Basically, the result I am looking for is capturing this event over a 24-hour period and alerting when the event count is over 2. I don't understand why a time window is also necessary on top of this, because I just want to run the query every five minutes and alert if it detects more than two instances of this event.
Can anyone help with this?
AFAIK you may use a query like the one shown below to accomplish your requirement of capturing the required event over a 24-hour period.
Event
| where Source == "EventLog" and EventID == 6008
| where TimeGenerated > ago(24h)
| summarize AggregatedValue = any(EventID) by Computer, bin(TimeGenerated, 1s)
The '1s' in this sample query is the time grain with which we aggregate and get the output from the Log Analytics workspace repository. For more information, refer to https://learn.microsoft.com/en-us/azure/kusto/query/summarizeoperator
And to create an alert, go to Azure portal -> your Log Analytics workspace -> Monitoring tile -> Alerts -> Manage alert rules -> New alert rule -> Add condition -> Custom log search -> paste any of the above queries under the 'Search query' section -> type '2' under the 'Threshold value' parameter of the 'Alert logic' section -> click 'Done' -> under the 'Action Groups' section, select an existing action group or create a new one as explained in the article below -> update 'Alert Details' -> click 'Create alert rule'.
https://learn.microsoft.com/en-us/azure/azure-monitor/platform/action-groups
Hope this helps!! Cheers!! :)
To answer your question in the comments: yes, the alert insists on adding the bin function, and that's why I provided the relevant query with a bin function using '1s' and tried to explain it in my previous answer.
If you put '1s' in the bin function, you fetch output from Log Analytics that aggregates the value of any EventID over a timespan of 1 second, i.e., one row per computer per second in which the event occurred.
If you put '24h' instead of '1s' in the bin function, you fetch output that aggregates the value of any EventID over a timespan of 24 hours.
So in this case, we should not use '24h' in the bin function along with the 'any' aggregation, because then we would see only one row per 24-hour timespan, and that does not help you find the event occurrence count with the above query, which uses 'any' for aggregation. Instead, use the 'count' aggregation in place of 'any' if you want '24h' in the bin function. That query would look like the one shown below.
Event
| where Source == "EventLog" and EventID == 6008
| where TimeGenerated > ago(24h)
| summarize AggregatedValue = count(EventID) by Computer, bin(TimeGenerated, 24h)
The output of this query contains one row per computer per 24-hour bin, with the event count as the aggregated value.
One other note: all the above queries and outputs are in the context of setting up an alert based on an aggregated value, i.e., when choosing 'Metric measurement' under the 'Alert logic based on' section. In other words, an AggregatedValue column is expected in the alert query when you choose 'Metric measurement'. But when you say 'you get a count of the events', if I am not wrong, you may be choosing 'Number of results' instead, which does not require any aggregation column in the query.
Hope this clarifies!! Cheers!!

How to get the Qna Maker "Q" from Analytics Application Insights?

I have created my chatbot's knowledge base with Qna Maker and I am trying to visualize some statistics with Analytics Application Insights.
What I want to do
I would like to create a chart with the most requested Qna Maker questions.
My problem
I can't find the QnA Maker questions in the customDimensions of the traces on Analytics, only their Id.
My question
Is there a way to get the QnA Maker question linked to this Id directly from the Analytics Application Insights tool?
Thank you.
PS : I had to use "Q" instead of "Question" in the title due to Stackoverflow rules.
Not directly.
The only info you have in App Insights is whatever was submitted with the data. So if they aren't sending the question (odd that they send the answer but not the question?), then you're out of luck.
As a workaround, you could create a custom table in your Application Insights instance:
https://learn.microsoft.com/en-us/azure/application-insights/app-insights-analytics-import
and populate that table with the id and the question.
Then you could join those two things in analytics queries, in the Analytics tool or in workbooks.
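The join that this workaround describes is essentially a lookup by id. As a plain-Python sketch with made-up data (the mapping and trace rows below are hypothetical):

```python
# Hypothetical mapping you would populate in the custom table:
# knowledge base question id -> question text.
kb_questions = {"42": "How do I reset my password?"}

# Hypothetical trace rows pulled from Application Insights customDimensions.
traces = [{"Id": "42", "Answer": "Use the reset link on the sign-in page."}]

# Equivalent of `join kind=inner ... on id`: keep only traces whose Id
# exists in the mapping, enriched with the question text.
enriched = [
    {**t, "Question": kb_questions[t["Id"]]}
    for t in traces
    if t["Id"] in kb_questions
]
print(enriched)
```

In the real analytics query, the custom table plays the role of kb_questions and the inner join drops any trace whose Id has no match.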
If you're looking for a query with question and answer linked through id, here is your answer:
requests
| where url endswith "generateAnswer"
| project timestamp, id, name, resultCode, duration
| parse name with *"/knowledgebases/"KbId"/generateAnswer"
| join kind=inner (
    traces
    | extend id = operation_ParentId
) on id
| extend question = tostring(customDimensions['Question'])
| extend answer = tostring(customDimensions['Answer'])
| project KbId, timestamp, resultCode, duration, question, answer
This doesn't necessarily solve your issue but might be of help for other people looking for a simple report of question/answer to improve their QnA maker.
The sample could be found in the official documentation:
https://learn.microsoft.com/en-us/azure/cognitive-services/qnamaker/how-to/get-analytics-knowledge-base
Here is a query that I came up with that pulls in the id of the knowledge base question, the question the user typed, and the knowledge base answer. It also ties multiple questions together if they are from the same session.
I have not yet found a way to get the knowledge base question associated with the id, though.
requests
| where url endswith 'generateAnswer'
| project id, url, sessionId = extract('^[a-z0-9]{7}', 0, itemId)
| parse kind = regex url with *'(?i)knowledgebases/'knowlegeBaseId'/generateAnswer'
| join kind=inner (
    traces
    | extend id = operation_ParentId
    | where message == 'QnAMaker GenerateAnswer'
    | extend userQuestion = tostring(customDimensions['Question'])
    | extend knowlegeBaseQuestionId = tostring(customDimensions['Id'])
    | extend knowlegeBaseAnswer = tostring(customDimensions['Answer'])
    | extend score = tostring(customDimensions['Score'])
) on id
| where timestamp >= ago(10d)
| project knowlegeBaseId, timestamp, userQuestion, knowlegeBaseQuestionId, knowlegeBaseAnswer, score, sessionId
| order by timestamp asc
