How to monitor a Windows Service on an Azure VM?

I have a Windows service running on an Azure VM in an availability set.
What is the best way to instrument monitoring for this service using any of the Azure monitoring solutions?

If you just want to monitor whether it is running or not, you can use Log Analytics. For more details, please refer to this article.
I have tested it on my side and it works well.
1. Create a workspace and enable the Log Analytics VM extension as per this doc.
2. Once step 1 is completed, navigate to your workspace -> in the left panel, select Advanced settings -> Data -> Windows Event Logs, then in the textbox type "system", select System in the dropdown, and click the add button.
3. Click the Save button.
4. In the left panel, click Logs. Then in the query editor, type the following query (note that == is case sensitive):
Event
| where TimeGenerated > ago(1d)
| where EventLog == "System" and EventID == 7036 and Source == "Service Control Manager"
| parse kind=relaxed EventData with * '<Data Name="param1">' Windows_Service_Name '</Data><Data Name="param2">' Windows_Service_State '</Data>'*
//you can add a filter by service name here like | where Windows_Service_Name =="Windows Update"
| sort by TimeGenerated desc
| project Computer, Windows_Service_Name, Windows_Service_State, TimeGenerated
5. The test result:
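If you only care about the current state of each service rather than the full change history, a minimal variant of the same query is sketched below; it assumes that event 7036 records the state in param2 as lowercase text such as "running" or "stopped".
Event
| where TimeGenerated > ago(1d)
| where EventLog == "System" and EventID == 7036 and Source == "Service Control Manager"
| parse kind=relaxed EventData with * '<Data Name="param1">' Windows_Service_Name '</Data><Data Name="param2">' Windows_Service_State '</Data>'*
// keep only the most recent state per computer and service
| summarize arg_max(TimeGenerated, Windows_Service_State) by Computer, Windows_Service_Name
// assumption: the state text is lowercase, e.g. "running" or "stopped"
| where Windows_Service_State == "stopped"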

Related

Azure Heartbeat displaying status of all Virtual Machines with Color indicators

I am trying to write a query in Azure Monitor > Logs which displays the status of all virtual machines. I am currently able to display all VMs (in a selected scope) with their heartbeats but can't show their status (with a green/red indicator) in the table.
My end goal is to display it on Azure Dashboard so that everyone in the team could look at the status of VMs.
I am pretty new to Azure and still trying to understand how it works. Any guidance will be appreciated.
My current simple heartbeat query is
Heartbeat
| summarize arg_max(TimeGenerated, *) by Computer
This displays the following columns:
Computer
TimeGenerated
SourceComputerId
ComputerIP
Category
OSType
along with other details.
I tried to reproduce this in my environment to create an Azure dashboard for checking the status of Azure VMs:
Go to Azure Portal > Virtual Machines > click the pin option > Create new.
Create a new dashboard, like below.
Note: If you select the shared option, anyone who has RBAC access can view the dashboard.
To change the dashboard view to a donut chart, follow the steps below.
Click the settings option > View > Summary.
The dashboard with status is successfully created.
Assign an RBAC role to users so they can view the dashboard.
Example: Monitoring Reader
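If you would rather keep it as a log query (which can also be pinned to the dashboard as a query tile), a minimal sketch is below; it assumes a VM counts as stopped when no heartbeat has arrived in the last 5 minutes.
Heartbeat
| summarize LastHeartbeat = max(TimeGenerated) by Computer
// assumption: no heartbeat for 5 minutes means the VM is stopped or unreachable
| extend Status = iff(LastHeartbeat > ago(5m), "Running", "Stopped")
| project Computer, LastHeartbeat, Status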

How to get only create logs of Virtual Machine in Azure?

I can see create_or_update logs of my VM in the activity logs, but as far as I am aware there is no filter to get only the create logs.
Is there any way to see only the create logs of a VM using an API or commands?
You can follow the steps below to achieve this:
You need to enable diagnostic settings for the activity log.
Refer to https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/activity-log#send-to-log-analytics-workspace for enabling the diagnostic settings.
Once the activity log is flowing into the Log Analytics workspace, you can query the logs as follows:
AzureActivity
| where OperationName == 'Create or Update Virtual Machine' and ActivitySubstatusValue == 'Created'
| order by TimeGenerated desc
The output above will show only the create operations. You can filter it further based on your requirements.
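For example, a sketch that narrows this down to a single VM and shows who created it is below; the VM name myVM is hypothetical, and Resource/Caller/ResourceGroup are the classic AzureActivity columns, so adjust them if your workspace schema differs.
AzureActivity
| where OperationName == 'Create or Update Virtual Machine' and ActivitySubstatusValue == 'Created'
| where Resource == 'myVM' // hypothetical VM name
| project TimeGenerated, Resource, Caller, ResourceGroup
| order by TimeGenerated desc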

Alert for Azure Virtual Machine running for X hours?

I use an Azure VM for personal purposes and use it mostly like I would use a laptop, for checking email etc. However, I have several times forgotten to stop the VM when I am done using it and have thus had it run idle for days, if not weeks, resulting in unnecessarily high billing.
I want to set up an email (and if possible also SMS and push notification) alert.
I have looked at the alert function in the advisor, but it does not seem to have enough customization to handle such a specific alert (which would also reduce Microsoft's income!).
Do you know any relatively simple way to set up such an alert?
You can make use of a Log Analytics workspace and a custom log search.
Below are the steps to create an alert that fires if the Azure VM has been running for exactly 1 hour.
First:
You need to create a Log Analytics workspace and connect the Azure VM to it as per this link.
Second:
1. In the Azure portal, navigate to Azure Monitor -> Alerts -> New alert rule.
2. On the "Create rule" page, for Resource, select the Log Analytics workspace you created earlier. Screenshot as below:
Then for Condition, please select Custom log search. Screenshot as below:
Then in the Configure signal logic page, in Search query, input the following query:
Heartbeat
| where Computer == "yangtestvm" // this is your Azure VM name
| order by TimeGenerated desc
For Alert logic: set Based on as Number of results, set Operator as Equal to, set Threshold value as 60.
For Evaluated based on: set Period as 60, set Frequency as 5.
The screenshot as below:
Note:
For the settings above, I query the Heartbeat table. A running Azure VM sends one record per minute to the Heartbeat table in Log Analytics. So to check whether the Azure VM has been running for exactly 1 hour (meaning it has sent 60 records to the Heartbeat table), just use the query above and set the Threshold value to 60.
The Period also needs to be set to 1 hour (60 minutes), since we are only checking whether the Azure VM has been running for 1 hour; for Frequency, you can set any value you like.
Once you understand the logic, you can change these values to suit your needs.
Finally, configure the remaining settings for the alert.
Please let me know if you have any further issues.
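If you prefer to put the uptime logic inside the query itself (and then alert when the number of results is greater than 0), a sketch along the same lines is below; it assumes the one-heartbeat-per-minute behaviour described above.
Heartbeat
| where TimeGenerated > ago(1h)
| where Computer == "yangtestvm" // your Azure VM name
| summarize HeartbeatCount = count() by Computer
// assumption: roughly one heartbeat per minute, so 60 records means the VM was up for the whole hour
| where HeartbeatCount >= 60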
Another option is to use the Azure Activity log to determine whether a VM has been running for more than a specified amount of time. The benefit of this approach is that you don't need to enable diagnostic logging (Log Analytics) on the VM, and it also supports appliances that can't have an agent installed (e.g. NVAs).
The logic behind this query is to determine whether the VM is in a running state and, if so, whether it has been running for more than a specified period of time (MaxUptime).
This is achieved by getting the most recent event of type 'Start' or 'Deallocate', then checking whether this event is of type 'Start' and was generated more than MaxUptime ago.
let DaysOfLogsToCheck = ago(7days);
let MaxUptime = ago(2h); // If the VM has been up for this long we want to know about it
AzureActivity
| where TimeGenerated > DaysOfLogsToCheck
// ActivityStatus == "Succeeded" makes more sense, but in practice it can be out of order, so "Started" is better in the real world
| where OperationName in ("Deallocate Virtual Machine", "Start Virtual Machine") and ActivityStatus == "Started"
// We need to keep only the most recent entry of type 'Deallocate Virtual Machine' or 'Start Virtual Machine'
| top 1 by TimeGenerated desc
// Check if the most recent entry was "Start Virtual Machine" and is older than MaxUpTime
| where OperationName == "Start Virtual Machine" and TimeGenerated <= MaxUptime
| project TimeGenerated, Resource, OperationName, ActivityStatus, ResourceId
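Note that top 1 keeps only the single most recent event across all VMs in the workspace. If you have several VMs and want to evaluate each one separately, a per-VM variant of the same idea (a sketch under the same assumptions) could look like:
let DaysOfLogsToCheck = ago(7days);
let MaxUptime = ago(2h);
AzureActivity
| where TimeGenerated > DaysOfLogsToCheck
| where OperationName in ("Deallocate Virtual Machine", "Start Virtual Machine") and ActivityStatus == "Started"
// keep only the most recent Start/Deallocate event per VM
| summarize arg_max(TimeGenerated, OperationName, ActivityStatus, ResourceId) by Resource
| where OperationName == "Start Virtual Machine" and TimeGenerated <= MaxUptime
| project TimeGenerated, Resource, OperationName, ActivityStatus, ResourceId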

Azure Monitor alert on a custom metric filtered by cloud_RoleInstance

I'm able to create an alert based on my custom metric. However, I'd like to have several different alerts, one for each cloud_RoleInstance I have. Is that possible somehow?
If the logs are stored in Azure Log Analytics or Azure Application Insights, you can use a Custom log search alert (see step 5 of this article). Note that you need to create one alert per cloud_RoleInstance in the query.
Steps as below:
Step 1:
In the Azure portal, navigate to Azure Monitor -> Alerts -> New alert rule, then for the resource, select the Azure Log Analytics workspace or Azure Application Insights resource.
Step 2:
Then in Condition, select Add, then select "Custom log search":
Step 3:
Then in the new window, write your query to trigger the alert; remember to use a where clause to filter on the cloud_RoleInstance.
Also note that you need to:
change "Based on" from "Number of results" to "Metric measurement",
and use this query:
customMetrics
| where name == 'MyMetricName'
| where cloud_RoleInstance == 'MyInstanceName'
| summarize AggregatedValue = sum(value) by bin(timestamp, 5m)
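If you first want to see the metric across all instances (for example, to decide which cloud_RoleInstance values to create alerts for), a quick exploratory sketch using the same hypothetical metric name is:
customMetrics
| where name == 'MyMetricName'
// split the aggregation per instance instead of filtering on a single one
| summarize AggregatedValue = sum(value) by cloud_RoleInstance, bin(timestamp, 5m)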

How to get instances count for VMSS in Application Insights?

I have a Virtual Machine Scale Set (VMSS) with autoscaling rules. I can get the performance metrics of a host, but there is no graph for the instance count.
There is a graph on VMSS settings "Scaling" -> "Run history", like this.
But how I can get it from Metrics and place on the dashboard?
By default, a VMSS does not emit anything to Application Insights (AI) unless you configure an app or platform (like Service Fabric, for example) to use AI.
So, if you do have software running on the VMSS that emits to AI, then you could write an AI analytics query to get the instance count like this:
requests
| summarize dcount(cloud_RoleInstance) by bin(timestamp, 1h)
Typically cloud_RoleInstance contains a VM identifier, so that is what I used in the query. It shows the distinct count of VMs.
This only works reliably if the software runs on all VMs in the VMSS and if all VMs emit data to AI at least once an hour. Of course, you can adapt the query to your liking/requirements.
Operators used:
dcount: counts the unique occurrences of the specified field
bin: groups results into slots of 1 hour
Thanks Peter Bons, that's what I need!
As I run Docker on the VMs, I can add the OMS agent container and use its data.
This is what I wanted.
ContainerInventory
| where TimeGenerated >= ago(3h)
| where Name contains "frontend"
| summarize dcount(Computer) by bin(TimeGenerated, 5m)
In the Azure portal, navigate to VMSS, select the required VMSS -> Scaling under Settings in the left navigation panel -> click the 'Run history' tab in the right panel.
The easy way is, after you have gone to the 'Run history' tab, to just click the 'Pin to Dashboard' button. You can see this button in the image included in the question.
