Custom CloudWatch Metrics - amazon-rds

I am using AWS RDS for SQL Server and need enhanced monitoring via CloudWatch. Some basic monitoring is available by default, but I want to use custom metrics as well.
In my scenario I need to create an alarm whenever the number of deadlocks in SQL Server increases. We are able to fetch the deadlock details via a script, and I need to build a custom metric from them.
Can anyone help with this or suggest an alternative solution?
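A minimal sketch of one common approach, assuming the deadlock count is already returned by the existing script: publish the value to a custom CloudWatch namespace with boto3's put_metric_data and then attach an alarm to that metric. The namespace, metric name, and instance identifier below are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

def publish_deadlock_count(deadlock_count: int, db_instance_id: str) -> None:
    """Push one data point to a custom namespace; run this on a schedule
    (cron, Lambda, etc.) right after the deadlock query completes."""
    cloudwatch.put_metric_data(
        Namespace="Custom/RDSSQLServer",  # placeholder; any namespace not starting with "AWS/"
        MetricData=[
            {
                "MetricName": "DeadlockCount",
                "Dimensions": [
                    {"Name": "DBInstanceIdentifier", "Value": db_instance_id},
                ],
                "Value": float(deadlock_count),
                "Unit": "Count",
            }
        ],
    )

if __name__ == "__main__":
    # Replace 3 with the value produced by the existing deadlock-detection script.
    publish_deadlock_count(deadlock_count=3, db_instance_id="my-rds-sql-server")
```

From there a CloudWatch alarm can be defined on the Custom/RDSSQLServer DeadlockCount metric, either in the console or with put_metric_alarm.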

Related

How to use Azure Kusto Query Language (KQL) to query a JMeter graph such as Active Threads Over Time

If JMeter is already connected to Azure (for example, JMeter logs are sent to a platform like a Log Analytics workspace), you have all the JMeter data you want and can easily use KQL to query it.
But suppose you don't know how to reproduce the JMeter graph "Active Threads Over Time".
Is there any query code for it? Thanks
Hi, I'm Charlie from the Microsoft for Founders Hub team. I'm not usually here, so I may not see a follow-up question, but I do want to help.
KQL is used to query telemetry and logs from technologies based on Azure Data Explorer (e.g., Application Insights, Log Analytics workspaces, Search in SharePoint).
That said, you must have your JMeter logs sent to a platform such as a Log Analytics workspace before you can query them. If you have, please follow these links to learn how to interact with that workspace in Azure:
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/log-query-overview
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/log-analytics-tutorial
To help you further, please include how you connected JMeter Logs to Azure.
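Purely as a hypothetical illustration, if the JMeter results ended up in a custom Log Analytics table, a query along the following lines (shown here via the azure-monitor-query Python SDK) could reproduce "Active Threads Over Time". The table name jmeter_results_CL and the column allThreads_d are placeholders and depend entirely on how the logs were ingested.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"

# Placeholder table/column names: adjust to match the actual ingestion pipeline.
QUERY = """
jmeter_results_CL
| summarize ActiveThreads = max(allThreads_d) by bin(TimeGenerated, 30s)
| order by TimeGenerated asc
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(hours=1))

for table in response.tables:
    for row in table.rows:
        print(row)  # (TimeGenerated, ActiveThreads) pairs, ready to chart
```

The same KQL can be pasted into the Logs blade and rendered as a time chart there.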

AzureMetrics Table vs Metrics in Azure Dashboard

I am working on Azure Monitor Dashboards.
I need to check the Health Status of my App Service.
If I use the Metrics option (2nd image), add the Health Status metric, and create a chart,
vs
if I run a query on the AzureMetrics table, will both return the same result? I mean, how are the two options different from each other?
Both use the same source. The difference is that with the "Metrics" blade you can create charts without having to write Kusto queries, so anyone with basic knowledge can quickly build them.
When using the "Logs" blade you have to write a Kusto query to get the desired results and format the chart manually, but you have more control over what data is displayed and how.
If I run a query on the AzureMetrics table, will both return the same result? I mean, how are the two options different from each other?
The difference between logs and metrics is that metrics reveal a service's or application's tendencies and behavior, while logs focus on specific events. The goal of logs is to save as much information (mostly technical detail) as possible about a single event. Log data can be used to investigate occurrences and assist with root-cause analysis of problems or defects, as well as a growing number of other applications.
For more information, please refer to the links below:
MSFT TechCommunity | Difference between Log Analytics and Monitor
Blogs | Azure Monitor and Azure Log Analytics, and Logs or Metrics.

Log Analytics Workspace Table-Level RBAC and Row-Level Security

We have a table in Azure Log Analytics that keeps the logs from many different systems.
For example, our CommonSecurityLog table has the logs from different firewalls. I have created a custom RBAC role that allows access to this specific table only, but I would like to go further and limit access to specific rows only.
I did some research but can't find a way to do this. Is it possible?
There's no way to do this natively in Azure - RBAC only supports controlling access at the Table level.
EDIT:
So, as @FidelCasto mentioned, there's also the option of using Custom Logs. This will be helpful in many cases where you need to collect custom Windows-related or application-related logs. It could be a more user-friendly option, but obviously there will be other cases where it will not apply, especially when you have devices sending non-standard logs.
If your requirements are not met by the option above, the only other catch-all option is to put a Log Collector between the firewalls and Azure, and use a script to filter the logs before sending them over via the Log Analytics (OpInsights) REST API. You could use a PowerShell script to handle this (a sketch of the approach is shown below).
Each firewall would send its logs to a local/remote Log Collector.
Have a script query and filter the logs with if/else logic based on the firewall name.
For each firewall, you would create a new Log-Type based on the firewall name. Log-Type corresponds to the table name in Log Analytics.
Assign permissions based on the newly created custom tables.
It's not as straightforward, but it gets the job done!
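The answer above suggests a PowerShell script; as a rough, hedged outline, the same collector-side step can be sketched in Python against the HTTP Data Collector (OpInsights) API. The workspace ID, shared key, and per-firewall routing rule are placeholders; each distinct Log-Type lands in its own custom table (suffixed _CL), which is what the per-table RBAC is then applied to.

```python
import base64
import datetime
import hashlib
import hmac
import json

import requests

WORKSPACE_ID = "<workspace-id>"            # placeholder
SHARED_KEY = "<workspace-primary-key>"     # placeholder


def build_signature(date: str, content_length: int, method: str,
                    content_type: str, resource: str) -> str:
    """Build the HMAC-SHA256 SharedKey header required by the Data Collector API."""
    string_to_hash = f"{method}\n{content_length}\n{content_type}\nx-ms-date:{date}\n{resource}"
    decoded_key = base64.b64decode(SHARED_KEY)
    hashed = hmac.new(decoded_key, string_to_hash.encode("utf-8"), hashlib.sha256).digest()
    return f"SharedKey {WORKSPACE_ID}:{base64.b64encode(hashed).decode()}"


def post_logs(records: list, log_type: str) -> None:
    """Send records to the workspace; log_type becomes the custom table name."""
    body = json.dumps(records)
    rfc1123date = datetime.datetime.utcnow().strftime("%a, %d %b %Y %H:%M:%S GMT")
    signature = build_signature(rfc1123date, len(body), "POST", "application/json", "/api/logs")
    uri = f"https://{WORKSPACE_ID}.ods.opinsights.azure.com/api/logs?api-version=2016-04-01"
    headers = {
        "Content-Type": "application/json",
        "Authorization": signature,
        "Log-Type": log_type,
        "x-ms-date": rfc1123date,
    }
    requests.post(uri, data=body, headers=headers, timeout=30).raise_for_status()


# Placeholder routing: one custom table per firewall so access can be
# granted per table afterwards.
incoming = [{"DeviceName": "FirewallA", "Message": "example syslog line"}]
for record in incoming:
    post_logs([record], log_type=f"{record['DeviceName']}Logs")
```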

Monitoring & Detecting Exceptions in Applications using Cloud Monitoring

I am new to GCP and come from an Azure background. Is there an equivalent of "Azure Application Insights" on the GCP side for monitoring applications?
Let me explain my use case more clearly with an example: if I have a .NET-based web application running on a Windows VM on GCP, can Google Cloud Monitoring help detect exceptions raised by the running application and send out alerts?
Any pointers/links to further explore this type of monitoring capability would be helpful.
Cloud Monitoring will provide you with many statistics, most probably including what you need. And if there isn't a metric to suit your need, you may create one based on the logs collected from the VM.
By default a number of logs are ingested, but if you want the full range and to experiment with various ones, you may want to install the monitoring agent. Go through the documentation and have a look.
You can then use the metrics to create charts and have a live view of a number of things such as CPU utilization, disk I/O per second, dropped/sent/received packets, etc. Here's the Cloud Monitoring documentation.
And finally, you can create alerts based on the metrics (set thresholds, time periods, etc.). They can be simple e-mail alerts, for example, but they can also be delivered via Pub/Sub and trigger functions or apps.
Since you're new to GCP there's a lot of reading ahead of you, but you will easily find documentation for most of GCP's services.
If you provide more details I can update my answer and give you a more precise one.
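As a concrete illustration of the "create metrics based on the logs" idea above, here is a sketch under assumptions using the google-cloud-logging client: it creates a log-based metric counting error-severity entries from the VM, which an alerting policy in Cloud Monitoring can then watch. The metric name and filter are placeholders and depend on how the .NET application writes its logs.

```python
from google.cloud import logging

client = logging.Client(project="<your-project-id>")  # placeholder project

# Hypothetical log-based metric: count ERROR-and-above entries from GCE instances.
metric = client.metric(
    "dotnet_app_exceptions",  # placeholder metric name
    filter_='resource.type="gce_instance" AND severity>=ERROR',
    description="Count of error/exception log entries coming from the VM",
)

if not metric.exists():
    metric.create()
    print("Log-based metric created; add a Cloud Monitoring alerting policy on it.")
```

An alerting policy can then target this metric the same way as any built-in one.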

Is there a script to create a custom Azure alert format, and any Log Analytics query to get Azure VM status?

I have the two questions below; can someone help with them?
1. Is there a script or a way to create a custom alert format for Azure alerts?
2. Is there a way to pin the status of all Azure VMs to a dashboard?
Regarding #1, the ability to customize or configure the alert email format is currently not supported. If interested, I suggest you raise your feedback / feature request in the UserVoice / feedback forum. The responsible product / feature team would triage it, check feasibility, and prioritize the feedback.
Regarding #2, if 'status' means 'PowerState' (i.e., whether the VM is running, deallocated, etc.), 'StatusCode' (i.e., ok, etc.), or 'ProvisioningState' (i.e., succeeded, etc.), then I don't think there is a straightforward way to ingest that particular data directly into a dashboard. That said, you could leverage the 'Heartbeat' Log Analytics Kusto table in the first place and create a custom view as a dashboard using the view designer; however, since views in Azure Monitor are being phased out and replaced with workbooks, I suggest leveraging workbooks now.
If not, you may leverage a feature called Azure Monitor for VMs, which helps you analyze the performance and health of your Windows and Linux VMs and monitor their processes and dependencies on other resources and external processes. Here again, you can create interactive reports for Azure Monitor for VMs with VM insights workbooks.
Hope these inputs help!
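As a hedged sketch of the Heartbeat approach mentioned above: the query below approximates each VM's reporting status, and the same KQL can be run in the Logs blade and pinned to a dashboard or embedded in a workbook. The workspace ID and the 5-minute "not reporting" threshold are placeholders.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# A VM with no heartbeat in the last 5 minutes is treated as not reporting;
# tune the threshold to your environment.
HEARTBEAT_QUERY = """
Heartbeat
| summarize LastHeartbeat = max(TimeGenerated) by Computer
| extend Status = iff(LastHeartbeat > ago(5m), "Running", "Not reporting")
"""

client = LogsQueryClient(DefaultAzureCredential())
result = client.query_workspace(
    "<workspace-id>", HEARTBEAT_QUERY, timespan=timedelta(hours=24)
)

for table in result.tables:
    for computer, last_heartbeat, status in table.rows:
        print(computer, last_heartbeat, status)
```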
