Azure IoT Hub - how much data has a device sent?

I would like to know how much data a device has sent in a period of time to IoT Hub.
Currently I have the following base query:
AzureDiagnostics
| where TimeGenerated between (datetime("2022-10-01") .. datetime('2022-11-08'))
| extend DeviceId = extractjson("$.deviceId", properties_s)
| extend MessageSize = toint(extractjson("$.messageSize", properties_s))
| where DeviceId == "deviceId"
which returns log entries for a device. The logs that have a message size property are those of type D2CTwinOperations with operation name update or read.
The complete query, which sums the message sizes, looks like this:
AzureDiagnostics
| where TimeGenerated between (datetime("2022-10-01") .. datetime('2022-11-08'))
| extend DeviceId = extractjson("$.deviceId", properties_s)
| extend MessageSize = toint(extractjson("$.messageSize", properties_s))
| where DeviceId == "deviceId"
| where MessageSize > 0
| summarize totalSizeInBytes = sum(MessageSize) by bin(TimeGenerated, 1d)
| extend totalSizeInKiloBytes = totalSizeInBytes/1024
| order by TimeGenerated asc
What about the D2C messages that are not twin operations, i.e. the device is sending a message/event that is not a device twin update? Can I query for those somehow? And do they have a message size associated with them?

There are different category options available in the Diagnostic Settings section of the Azure IoT Hub that let you log different categories of messages to the AzureDiagnostics logs. Please find the image below displaying the different options available.
I am not sure if the other logs generated from D2C messages/events have the messageSize property in them, but you can see which categories of logs get generated and filter the set of logs that suits your need.
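A quick way to see which categories your hub is actually emitting, so you know what can be filtered (a sketch against the standard AzureDiagnostics schema; adjust the time range as needed):
AzureDiagnostics
| where TimeGenerated > ago(1d)
| where ResourceProvider == "MICROSOFT.DEVICES"
| summarize Count = count() by Category, OperationName
| order by Count desc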
You can also enable the "AllMetrics" option on the Diagnostic setting page to generate a number of platform metrics for monitoring your Azure IoT Hub.
Please refer to the resource Monitoring Azure IoT Hub data to find the different metrics provided out of the box. Here are some of the daily quota metrics available to you by enabling this setting.
Kindly note that while these metrics give you the cumulative data transferred to Azure IoT Hub by all connected devices, there is currently no metric that reports the data transferred per IoT device.
A workaround for a similar question has been posted on the following thread: Azure IoTHub - How to get usage data per device.
See that thread for the solution shared there.
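In the same spirit as that workaround (and not the linked thread's exact solution), the asker's own AzureDiagnostics query can be grouped per device to approximate per-device usage:
AzureDiagnostics
| where TimeGenerated > ago(7d)
| extend DeviceId = extractjson("$.deviceId", properties_s)
| extend MessageSize = toint(extractjson("$.messageSize", properties_s))
| where MessageSize > 0
| summarize totalSizeInBytes = sum(MessageSize) by DeviceId
| order by totalSizeInBytes desc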

Related

The message from Azure IoT Hub reaches Stream Analytics (and thereby the SQL db) only partially

The message from an Azure IoT Edge device installed on a Raspberry Pi is sent to the cloud.
In the logs of the IoT Edge device on the Azure Portal, the message sent to the cloud is visible (in the troubleshoot section).
In Stream Analytics, the input refers to the IoT hub and the connection is valid.
In the Query section of Stream Analytics, when loading the received message content, part of the message is lost. However, the message sent to the cloud from the device is complete.
Is there any way to see where the message is lost?
Is there a way to run a query before Stream Analytics to find out the data type, maybe?
This needs deeper investigation, but I have a few suggestions for looking at the issue, starting with an initial query.
Try this sample query to check the message flow from the IoT Edge device to the Stream Analytics job. Note that the IoTHub and StreamAnalytics table names here are placeholders; substitute the tables your diagnostic settings actually write to:
IoTHub
| where DeviceId == "your_device_id"
| join kind=inner (
    StreamAnalytics
    | where JobName == "your_stream_analytics_job_name"
) on $left.MessageId == $right.InputMessageId
| project TimeGenerated, DeviceId, MessageId, InputMessageId
Try the query below to check the data type of the messages being sent from the IoT Edge device (again with a placeholder table name):
IoTHub
| where DeviceId == "your_device_id"
| project TimeGenerated, DeviceId, MessageId, Properties = tostring(Properties)
I prefer to use the "Test" feature in the Stream Analytics query section to see if it processes the message as expected.
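If routing might be dropping or truncating events, IoT Hub's Routes diagnostic category can also be inspected (a sketch, assuming the Routes category is enabled in Diagnostic Settings):
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.DEVICES" and Category == "Routes"
| project TimeGenerated, OperationName, Level, properties_s
| order by TimeGenerated desc
Any Error-level entries here would point at the hub-to-endpoint hop rather than the Stream Analytics query itself.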

How to create an Azure Monitor alert for the daily usage of Sentinel

How can I create an Azure Monitor alert to send the daily usage of Sentinel?
According to documentation:
When the daily cap is reached for a Log Analytics workspace, a banner is displayed in the Azure portal, and an event is written to the Operations table in the workspace. You should create an alert rule to proactively notify you when this occurs.
When the daily cap is reached, you can receive an alert by creating a log alert rule with the appropriate target scope and conditions.
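A log alert rule can key off the event written when collection stops (a sketch, assuming the _LogOperation table schema described in the Azure Monitor workspace-health documentation):
_LogOperation
| where Category =~ "Ingestion"
| where Operation =~ "Data collection stopped"
| project TimeGenerated, Detail, _ResourceId
An alert rule with the condition "number of results greater than 0" over this query notifies you when the cap kicks in.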
To view the effect of the daily cap, try following Kusto query, according to documentation:
let DailyCapResetHour=14;
Usage
| where DataType !in ("SecurityAlert", "SecurityBaseline", "SecurityBaselineSummary", "SecurityDetection", "SecurityEvent", "WindowsFirewall", "MaliciousIPCommunication", "LinuxAuditLog", "SysmonEvent", "ProtectionStatus", "WindowsEvent")
| where TimeGenerated > ago(32d)
| extend StartTime=datetime_add("hour",-1*DailyCapResetHour,StartTime)
| where StartTime > startofday(ago(31d))
| where IsBillable
| summarize IngestedGbBetweenDailyCapResets=sum(Quantity)/1000. by day=bin(StartTime , 1d) // Quantity in units of MB
| render areachart
References: Daily quota for Sentinel, Ingestion Cost Spike detection Playbook and How to analyze Microsoft Sentinel Daily Cap Alerts

How to get only create logs of Virtual Machine in Azure?

So, I can see Create or Update Virtual Machine entries for my VM in the activity logs. As far as I am aware, there is no filter to get just the create logs.
Is there any way I can see only the create logs of a VM using an API or commands?
You can follow the steps below to achieve your requirement.
You need to enable diagnostic settings for the activity logs.
Refer to https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/activity-log#send-to-log-analytics-workspace for enabling the diagnostic settings.
Once the Log Analytics workspace is receiving the logs, you can query them as follows:
AzureActivity
| where OperationName == 'Create or Update Virtual Machine' and ActivitySubstatusValue == 'Created'
| order by TimeGenerated desc
The above output will show only the create operations. You can filter it further based on your requirement.
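To narrow the result to a single VM, you could filter on the resource as well (a sketch; the VM name is a placeholder, and OperationNameValue is the non-localized counterpart of OperationName):
AzureActivity
| where OperationNameValue =~ "Microsoft.Compute/virtualMachines/write"
| where ActivitySubstatusValue == "Created"
| where Resource =~ "myVM" // placeholder VM name
| project TimeGenerated, Resource, Caller
| order by TimeGenerated desc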

Failure to trigger alarm for custom HeartBeat metric in Azure

There is a set of machines hosted in my company's on-premises network.
There is one Windows service, let's say ServiceX, running on each of these hosts.
ServiceX polls a database at regular intervals.
Before polling the database, I have added the following line to emit a metric:
new TelemetryClient().GetMetric("AgentHeartBeat").TrackValue(1);
So, every time ServiceX polls, I can see a heartbeat custom metric in Azure Application Insights.
Now, to set up the alarm, I am using the following query:
customMetrics
| where name == 'AgentHeartBeat'
| summarize AggregatedValue = avg(valueMin) by bin(timestamp, 1min), cloud_RoleInstance
Note: the cloud_RoleInstance value is Environment.MachineName.
Please check the figure below for the complete alarm configuration.
To test the alarm, I turned off one of the machines (hosting ServiceX), but the alarm still did not trigger.
I am not sure what exactly I need to change to make it work.
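A likely cause (an assumption, since the full alert configuration is not shown): a machine that is off emits no customMetrics rows at all, so a query that aggregates existing values returns nothing for that instance, and a threshold on those values never fires. A sketch that inverts the logic to detect silent instances instead:
customMetrics
| where name == 'AgentHeartBeat'
| summarize LastBeat = max(timestamp) by cloud_RoleInstance
| where LastBeat < ago(5m) // instances silent for 5+ minutes
With this query, the alert condition becomes "number of results greater than 0".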

Alert for Azure Virtual Machine running for X hours?

I use an Azure VM for personal purposes and use it mostly like I would use a laptop for checking email etc. However, I have several times forgot to stop the VM when I am done using it and thus have had it run idle for days, if not weeks, resulting in unnecessarily high billing.
I want to set up an email (and if possible also SMS and push notification) alert.
I have looked at the alert function in the advisor, but it does not seem to have enough customization to handle such a specific alert (which would also reduce Microsoft's income!).
Do you know any relatively simple way to set up such an alert?
You can make use of Log Analytics workspaces and a Custom log search.
The steps below create an alert that fires if the Azure VM has been running for exactly 1 hour.
First:
You need to create a Log Analytics workspace and connect it to the Azure VM as per this link.
Second:
1. In the Azure portal, navigate to Azure Monitor -> Alerts -> New alert rule.
2. On the "Create rule" page, for Resource, select the Log Analytics workspace you created earlier. Screenshot as below:
Then for Condition, please select Custom log search. Screenshot as below:
Then in the Configure signal logic page, in Search query, input the following query:
Heartbeat
| where Computer == "yangtestvm" //this is your azure vm name
| order by TimeGenerated desc
For Alert logic: set Based on as Number of results, set Operator as Equal to, set Threshold value as 60.
For Evaluated based on: set Period as 60, set Frequency as 5.
The screenshot as below:
Note:
For the settings above, I query the Heartbeat table. A running Azure VM sends a record to the Heartbeat table in Log Analytics every minute. So to check whether the VM has been running for exactly 1 hour (meaning it has sent 60 records to the Heartbeat table), use the query above and set the Threshold value to 60.
The Period also needs to be set to 1 hour (60 minutes), since we are checking whether the VM has been running for 1 hour; the Frequency can be any value you like.
Once you understand the logic, you can change these values as needed; a standalone sanity check is sketched below.
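A standalone way to sanity-check the threshold logic before wiring up the alert (a sketch; the computer name is a placeholder):
Heartbeat
| where Computer == "yangtestvm" // your Azure VM name
| where TimeGenerated > ago(1h)
| summarize BeatCount = count()
If BeatCount is 60, the VM has reported a heartbeat for every minute of the last hour.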
Finally, configure the remaining settings for this alert.
Please let me know if you still have issues with this.
Another option is to use the Azure Activity log to determine whether a VM has been running for more than a specified amount of time. The benefit of this approach is that you don't need to enable diagnostic logging (Log Analytics), and it also supports appliances that can't have an agent installed (e.g. NVAs).
The logic behind this query is to determine whether the VM is in a running state and, if so, whether it has been running for more than a specified period of time (MaxUpTime).
This is achieved by getting the most recent event of type 'Start' or 'Deallocate', then checking whether that event is of type 'Start' and was generated more than MaxUpTime ago.
let DaysOfLogsToCheck = ago(7days);
let MaxUptime = ago(2h); // If the VM has been up for this long we want to know about it
AzureActivity
| where TimeGenerated > DaysOfLogsToCheck
// ActivityStatus == "Succeeded" makes more sense, but in practice it can be out of order, so "Started" is better in the real world
| where OperationName in ("Deallocate Virtual Machine", "Start Virtual Machine") and ActivityStatus == "Started"
// We need to keep only the most recent entry of type 'Deallocate Virtual Machine' or 'Start Virtual Machine'
| top 1 by TimeGenerated desc
// Check if the most recent entry was "Start Virtual Machine" and is older than MaxUpTime
| where OperationName == "Start Virtual Machine" and TimeGenerated <= MaxUptime
| project TimeGenerated, Resource, OperationName, ActivityStatus, ResourceId
