I am looking for help reading data from 100+ subscriptions in one go. I have a requirement to create a dashboard that can read the data from all the subscriptions at once and show us the trend in graphical format.
For example: CPU utilization of VMs.
Can we read the CPU utilization from all the VMs across all the subscriptions and put the highest one in the graph, so that it is easy for the monitoring team to monitor the platform? It is always easier to see the data in a dashboard than to go through 1000 emails on a daily basis.
You can use a metrics step to query many resources at once, if you need true metrics.
But there isn't generally a "top X metric value across N subscriptions" capability in metrics (yet); you are querying a specific metric for specific resources. You could use workbooks to find the resources in a subscription and use those resources in a metrics step.
Most of these things are limited, though; I know metrics is limited to 200 resources at a time.
If you were using something like log analytics, and had all the VMs emitting metrics to the same workspace, then you could do this as a single log analytics query, though.
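As a rough sketch of what that single query could look like, assuming the VMs send the standard Windows "% Processor Time" counter to the Perf table of one workspace (the 15-minute bin size is just an example):

```kusto
// Sketch: highest average CPU across all VMs reporting to this workspace,
// in 15-minute bins over the last day. Assumes the standard Windows
// "% Processor Time" counter is collected into the Perf table.
Perf
| where TimeGenerated > ago(1d)
| where ObjectName == "Processor"
    and CounterName == "% Processor Time"
    and InstanceName == "_Total"
| summarize AvgCpu = avg(CounterValue) by Computer, bin(TimeGenerated, 15m)
| summarize TopCpu = max(AvgCpu) by TimeGenerated
| render timechart
```

The resulting chart can be pinned to a shared Azure dashboard or placed in a workbook, which gives the monitoring team the single trend view the question asks for.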
I have two identical resource groups for dev and qa.
They have the same services and same configurations.
At some point, I found that the Application Insights resource in QA contains a lot of performance counters which are produced by the App Service plan.
I've tried to compare their configurations, but they look the same.
What can cause such different behaviour?
According to the documentation:
At the set sample interval, Azure Monitor gathers data from all deployed performance counters on all agents. The raw data is accessible, unaggregated, in all log query views for the retention period defined by your Log Analytics workspace.
The documentation also says:
Performance counters show how the system resources are performing. Performance counter data ingestion depends on your environment size and usage. In most cases, performance counters should make up 80 to 99% of your data ingestion for Azure Monitor for Azure Virtual Desktop.
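If you want to see exactly which counters are landing in the QA resource and from which role, a quick Application Insights Logs query can help narrow it down; this is only a sketch, meant to be run against both the dev and QA resources so the output can be compared:

```kusto
// Sketch: count performance counter rows per counter and per role over the
// last day, so the dev and QA resources can be compared side by side.
performanceCounters
| where timestamp > ago(1d)
| summarize Rows = count() by category, name, cloud_RoleName
| order by Rows desc
```

Comparing the cloud_RoleName values between the two resources should show which app is emitting the extra counters.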
References:
https://learn.microsoft.com/en-us/dotnet/framework/debug-trace-profile/performance-counters
How can we reduce the costs of Azure Data Factory (we have pipelines for data movement between tables, plus triggers, datasets, and alerts)?
Thanks in advance,
Before reducing costs, we need to monitor where the costs come from, so that we can avoid unnecessary spend.
We could:
First, at the beginning of the ETL project, conduct a proof of concept and use a combination of per-pipeline consumption and the pricing calculator to estimate costs.
After you have deployed your pipelines to production, use the cost management features to set budgets and monitor costs. You can also review the forecasted costs and identify spending trends.
In addition, you can view per-pipeline consumption and per-activity consumption information to understand which pipelines and which activities are the costliest, and identify candidates for cost reduction.
Please refer to the document Plan and manage costs for Azure Data Factory for more details.
In addition, we could save on the costs of running ADF pipelines by using triggers.
Check this thread for the details.
If you are using many copy activities to move data, then under the copy activity's Settings tab you can change the Data Integration Units from the default of 4 down to 2. This can make a big difference if you have many copy activities.
This will reduce the performance of the copy activity, however.
I want to see when my resources are idling (e.g. certain resources might only be used during business hours and not used for any other background process). I'd like to do that preferably through an API call.
It all depends on the type of resource and what you are wanting to do. You could use the Azure Monitor API or the Azure Data Explorer API with Kusto to query specific metrics for your different services. Depending on the type of data, this may require you to have additional analytics enabled.
Here are some examples based on types of services.
Azure App Service - You could query for CPU, Memory, HTTP Requests, etc. This would give you an idea of activity. These same metrics tie into the auto-scaling.
Azure VMs - CPU, Memory, Disk IO, etc. You could determine your baseline, and then you would know when a VM is idle or not.
Azure Storage - Transactions, Ingress, Egress, Requests, etc. You could use that to determine if there is activity in your storage account.
As you can see, it all depends on how you want to define idling. If the goal is to reduce costs, then that will be difficult with many of these services. You could scale your App Services up and down with scripts, or scale in/out based on metrics; the same can be done with your Azure VMs, or by stopping and starting them. Storage cannot be scaled in the same way, but you are only charged for storage and egress, so that cost is already dictated by activity. If you go the Log Analytics route for VMs, a query like the sketch below could flag idle hours.
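This is only a minimal sketch; it assumes the VMs send the standard "% Processor Time" counter to a Log Analytics workspace, and the 5% threshold for "idle" is an arbitrary example. The same query can also be run from code via the Log Analytics query REST API, which covers the "preferably through an API call" part.

```kusto
// Sketch: flag hourly buckets in the last 7 days where a VM's average CPU
// stayed below 5% (an arbitrary example threshold for "idle").
Perf
| where TimeGenerated > ago(7d)
| where ObjectName == "Processor"
    and CounterName == "% Processor Time"
    and InstanceName == "_Total"
| summarize AvgCpu = avg(CounterValue) by Computer, bin(TimeGenerated, 1h)
| where AvgCpu < 5
| order by Computer asc, TimeGenerated asc
```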
Hope this helps.
No, this is not possible. How do you define "idling"? How would Azure know whether your service is doing anything or not? Besides, most PaaS resources cannot be stopped, so what would be the use of that?
You can use Azure Advisor to get cost optimization advice, or Azure Monitor directly to gather performance data and then analyze it, but it's not going to be trivial.
I know that Application Insights only stores data for a certain period of time.
What do I need to do if I want to do analysis on a long time-period, let's say a year?
I know we have the continuous export feature and, as suggested, we can use Power BI on the data stored in blobs, but that will have a cost associated with it. Another way is to write code to transform that JSON data in the blobs into some Excel representation.
Is there any other way, apart from these two, to do analysis over long time periods of AI data? Something that picks up the data stored in blobs and uses it to show analytics?
There are two things being developed: the ability to specify a different retention period (one year will be more expensive) and the ability to run analytics queries on top of the blobs. Unfortunately, neither of them is available yet.
I will be providing updates to this answer.
Update: it is now possible to specify a retention period for Application Insights resources.
We run a software application on Azure for one of our customers. The customer wants to see the performance of the systems. This consists of two parts: one is the metric information of the servers, and the other is some information I want to provide via custom logging.
My plan is to give the customer access to the portal and only allow him access to the metric information and the custom tables.
It seems to me that by assigning a role to the customer I should be able to block all the other possibilities.
Can someone tell me which actions I have to allow/forbid to achieve this, or where I can find information on this?
Solution #1
Instead of giving Read access to the virtual machine, which may break your security policy, I'd recommend going with an Azure Log Analytics workspace (ref: https://learn.microsoft.com/en-us/azure/log-analytics/log-analytics-overview). That said, you will need to create a workspace which collects and stores server metrics (ref: https://learn.microsoft.com/en-us/azure/log-analytics/log-analytics-quick-collect-windows-computer) and other custom metrics.
Your customer will be given access to the workspace only, where they can see all the metrics in a dashboard. If there is a need for log filtering, you can use the Log Analytics query language (ref: https://learn.microsoft.com/en-us/azure/log-analytics/log-analytics-log-search-transition).
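For example, a simple filter over the collected server counters might look like the sketch below; it assumes the default Windows "Available MBytes" memory counter is enabled in the workspace:

```kusto
// Sketch: average available memory per server over the last day, assuming
// the "Available MBytes" counter is collected in the workspace.
Perf
| where TimeGenerated > ago(1d)
| where ObjectName == "Memory" and CounterName == "Available MBytes"
| summarize AvgAvailableMB = avg(CounterValue) by Computer, bin(TimeGenerated, 1h)
| render timechart
```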
Log Analytics is a paid service; you get up to 10 workspaces per subscription for free. The workspace is considered an Azure resource, so the limit follows the subscription limits, which means you can create up to 800 workspaces per resource group; a subscription can allow 800 * 800 in total (for reference if you would like to do capacity planning for your workspace-based solution). For Log Analytics pricing, read here: https://azure.microsoft.com/en-us/pricing/details/log-analytics/.
Log Analytics is a good choice, as its value proposition is to offer your customer an intuitive dashboard for monitoring their virtual machine performance, and to offer near-real-time monitoring. The solution is also cloud native.
There is a management solution which offers a bundle of VM capacity and performance monitoring which you can try now https://learn.microsoft.com/en-us/azure/log-analytics/log-analytics-capacity
Solution #2
Log Analytics might not be your choice because it adds another Azure service and more operational cost. If you need a cheaper option, you can collect your virtual machine metrics with Performance Counters, a built-in feature of the Windows OS. With Performance Counters you can export to an Excel file, or visualize the data in Power BI or some custom chart.
Other Solutions
You can utilize Azure Monitor and its REST API to get the data; for example, this API: https://learn.microsoft.com/en-us/rest/api/monitor/metricdefinitions/list. You would certainly need to visualize or format the output in some intuitive way to satisfy your customer. It can be a custom front-end web app, Power BI, or even Excel with charts.
You can also query Azure Blob Storage and use Stream Analytics combined with Power BI to visualize your data (https://thuansoldier.net/?p=7187).
There is not a single solution. It really depends on your existing resource capacity, your budget, and so on.