Azure log analytics query for how much and what data has vm consumed - azure

I would like to have a query that returns something like this for a single VM: it should show, for that one VM, which log types / solutions it has used and how much data each has consumed.
I don't know if this is even possible, or whether anything similar can be done. Any tips?
With the query below I'm able to list total usage for all VMs reporting to the Log Analytics workspace, but I would like to have more details about a single VM:
find where TimeGenerated > ago(30d) project _BilledSize, _IsBillable, Computer
| where _IsBillable == true
| extend computerName = tolower(tostring(split(Computer, '.')[0]))
| summarize BillableDataBytes = sum(_BilledSize) by computerName
| sort by BillableDataBytes nulls last

You should be able to accomplish this by querying the standard columns _BilledSize, Type, _IsBillable and Computer.
Below is a sample query for your reference:
union withsource=tt *
| where TimeGenerated between (ago(7d) .. now())
| where _IsBillable == true
| where isnotempty(Computer)
| where Computer == "MM-VM-RHEL-7"
| summarize BillableDataBytes = sum(_BilledSize) by Computer, _IsBillable, Type
| render piechart
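If you prefer the result in gigabytes rather than bytes, a small variation of the same query (just an extra extend, offered as a sketch rather than anything official) would be:
union withsource=tt *
| where TimeGenerated between (ago(7d) .. now())
| where _IsBillable == true
| where Computer == "MM-VM-RHEL-7"
| summarize BillableDataBytes = sum(_BilledSize) by Computer, Type
| extend BillableDataGB = round(BillableDataBytes / pow(1024, 3), 3) // convert bytes to GB for readability
| sort by BillableDataGB desc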
Related references:
Log data usage - Understanding ingested data volume
Standard columns in logs

Related

KQL to extract RCE attempts

I am trying to query for Remote Code Execution Attempt alerts. Does anyone have an idea how to go about this?
SecurityAlert
| where TimeGenerated >= ago(20d)
| where AlertName contains "Remote code execution attempt"
| extend Entities = tostring(parse_json(Entities)[0])
| project Entities, AlertName, Status
I am trying to output the hostnames and other information.
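A hedged sketch of one way to get there, assuming the Entities JSON is an array whose host entries carry a HostName field (the exact entity shape can vary by alert product):
SecurityAlert
| where TimeGenerated >= ago(20d)
| where AlertName contains "Remote code execution attempt"
| mv-expand Entity = todynamic(Entities) // expand the JSON array into one row per entity
| where tostring(Entity.Type) == "host"
| project AlertName, Status, HostName = tostring(Entity.HostName)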

adding a drop down filter in KQL chart

I have a chart in Azure Monitor (Application Insights, to be exact). Essentially I have 5 servers, 2 of which are used by client B and 3 by client C. What I want is all servers displayed, but with a drop-down option so that client B or C can be chosen.
At the moment I have two charts, one showing the servers standalone and another combined, built with the following query:
Perf
| extend Client = iif(Computer == "X", "B", iif(Computer == "Y", "B", "C"))
| where CounterName == "% Processor Time"
| where ObjectName == "Processor"
| where Computer contains "SQL"
| summarize avg(CounterValue) by bin(TimeGenerated, 5min), Client // bin is used to set the time grain to 5 minutes
| render timechart
A solution that may work for you, as suggested by Peter Bons and after testing in our local environment:
In your Application Insights, create a workbook with a new parameter.
Pick "dropdown" as the parameter type.
Pick "query" as the "get data from" option.
Set your data source from where you are getting your data.
The query below gets the list of subscriptions; you can use your own query to get the list of servers:
ResourceContainers | where type =~ "microsoft.resources/subscriptions"
// add any other filters you want here
| project id, name, group=tenantId
For further information, you can go through the Microsoft documentation.
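Once the parameter exists, the chart query can reference it. A hedged sketch, assuming the dropdown parameter is named Servers (a hypothetical name), allows multiple selections, and is configured to quote values so that {Servers} expands to a comma-separated quoted list:
Perf
| where Computer in ({Servers}) // {Servers} is the hypothetical multi-select workbook parameter
| where CounterName == "% Processor Time" and ObjectName == "Processor"
| summarize avg(CounterValue) by bin(TimeGenerated, 5min), Computer
| render timechart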

Azure Log Analytics: How to display AppServiceConsoleLogs AND AppServiceHTTPLogs together?

I can run the 2 queries below to view the logs for a certain time, separately.
AppServiceConsoleLogs | where TimeGenerated >= datetime('2021-04-10 14:00')
AppServiceHTTPLogs | where TimeGenerated >= datetime('2021-04-10 14:00')
How do I combine these into a single query to view the logs together?
The union operator does the job to show all records from the specified tables.
I used the query below and did not hit the problems you mentioned:
union requests, traces
| where timestamp > ago(1d)
If you still have the problem, please share a screenshot and more detailed info.
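Applied to the two tables from the question, a sketch might look like this (ordering by TimeGenerated so the two log types interleave chronologically):
union AppServiceConsoleLogs, AppServiceHTTPLogs
| where TimeGenerated >= datetime('2021-04-10 14:00')
| order by TimeGenerated asc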

How can I access custom event values from Azure AppInsights analytics?

I am reporting some custom events to Azure; within each custom event there is a value held under the customMeasurements object, named 'totalTime'.
The event itself looks like this:
loading-time: {
    customMeasurements: {
        totalTime: 123
    }
}
I'm trying to create a graph of the average total time of all the events reported to azure per hour. So I need to be able to collect and average the values within the events.
I can't seem to figure out how to access the customMeasurements values from within Azure Application Insights Analytics. Here is some of the code that Azure provided:
union customEvents
| where timestamp between(datetime("2019-11-10T16:00:00.000Z")..datetime("2019-11-11T16:00:00.000Z"))
| where name == "loading-time"
| summarize Occurrences=count() by bin(timestamp, 1h)
| order by timestamp asc
| render barchart
This code simply counts the number of reported events within the last 24 hours and displays them per hour.
I have tried to access the customMeasurements object held in the event by doing
summarize Occurrences=avg(customMeasurements["totalTime"])
But Azure doesn't like that, so I'm doing it wrong. How can I access the values I require? I can't seem to find any documentation either.
It can be useful to project the data from the customDimensions / customMeasurements property collection into a new variable that you'll use for further aggregation. You'll normally need to cast the dimensions data to the expected type, using one of the todecimal, toint, or tostring functions.
For example, I have some extra measurements on dependency telemetry, so I can do something like this:
dependencies
| project ["ResponseCompletionTime"] = todecimal(customMeasurements.ResponseToCompletion), timestamp
| summarize avg(ResponseCompletionTime) by bin(timestamp, 1h)
Your query might look something like this:
customEvents
| where timestamp between(datetime("2019-11-10T16:00:00.000Z")..datetime("2019-11-11T16:00:00.000Z"))
| where name == "loading-time"
| project ["TotalTime"] = toint(customMeasurements.totalTime), timestamp
| summarize avg(TotalTime) by bin(timestamp, 1h)
| render barchart

getting duplicate records when joining in KQL

We have a requirement to get the status of a Windows service when it is started and stopped. To do that I have written the query below, but I am facing an issue when joining the 2 tables to get the output.
I have tried using inner and left outer joins, but I am still getting duplicates.
Event
| where EventLog == "System" and EventID == 7036 and Source == "Service Control Manager"
| parse kind=relaxed EventData with * '<Data Name="param1">' Windows_Service_Name '</Data><Data Name="param2">' Windows_Service_State '</Data>' *
| where Windows_Service_State == "running" and Windows_Service_Name == "Microsoft Monitoring Agent Azure VM Extension Heartbeat Service"
| extend startedtime = TimeGenerated
| join (
Event
| where EventLog == "System" and EventID == 7036 and Source == "Service Control Manager"
| parse kind=relaxed EventData with * '<Data Name="param1">' Windows_Service_Name '</Data><Data Name="param2">' Windows_Service_State '</Data>' *
| where Windows_Service_State == "stopped" and Windows_Service_Name == "Microsoft Monitoring Agent Azure VM Extension Heartbeat Service"
| extend stoppedtime = TimeGenerated
) on Computer
| extend downtime = startedtime - stoppedtime
| project Computer, Windows_Service_Name, stoppedtime, startedtime, downtime
| top 10 by Windows_Service_Name desc
We want to get the number of times the service started and stopped. If the service restarted multiple times in a day, we get duplicate timings in startedtime when joining. Please have a look at this link: https://ibb.co/JzqxjC0
I am not sure I fully understand what is going on, since I don't have access to the data. But I can see you are using the default join flavor.
The default is inner unique:
The inner-join function is like the standard inner-join from the SQL world. An output record is produced whenever a record on the left side has the same join key as the record on the right side.
This means a new line in the result is created on every match between the left and the right side. Therefore, let's assume you have a computer that was restarted twice, so it has 2 lines of stopped and 2 lines of running. That will produce 4 rows in the Kusto answer.
Looking at your picture, that makes sense to me, because you have lines with negative downtime, which should not be possible.
What I would do is look for an identifier that is unique for every Computer run. Then you can join on that and stay safe from generating data that you don't want.
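If no such identifier exists, a hedged alternative sketch is to pair each stop with the next start on the same computer using prev() over sorted rows, avoiding the join entirely (this assumes stop and start events alternate cleanly):
Event
| where EventLog == "System" and EventID == 7036 and Source == "Service Control Manager"
| parse kind=relaxed EventData with * '<Data Name="param1">' Windows_Service_Name '</Data><Data Name="param2">' Windows_Service_State '</Data>' *
| where Windows_Service_Name == "Microsoft Monitoring Agent Azure VM Extension Heartbeat Service"
| where Windows_Service_State in ("running", "stopped")
| sort by Computer asc, TimeGenerated asc // serialize rows so prev() can look at the previous event
| extend prevState = prev(Windows_Service_State), prevTime = prev(TimeGenerated), prevComputer = prev(Computer)
| where Windows_Service_State == "running" and prevState == "stopped" and prevComputer == Computer
| project Computer, Windows_Service_Name, stoppedtime = prevTime, startedtime = TimeGenerated, downtime = TimeGenerated - prevTime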
