We run a software application on Azure for one of our customers. The customer wants to see the performance of the systems. This consists of two parts: the metric information of the servers, and some information I want to provide through custom logging.
My plan is to give the customer access to the portal and allow them access only to the metric information and the custom tables.
It seems to me that by assigning a role to the customer I should be able to block everything else.
Can someone tell me which actions I have to allow or forbid to achieve this? Or where I can find the information for this?
Solution #1
Instead of giving Read access to the virtual machine, which may break your security policy, I'd recommend going with an Azure Log Analytics workspace (ref: https://learn.microsoft.com/en-us/azure/log-analytics/log-analytics-overview). That said, you will need to create a workspace which collects and stores server metrics (ref: https://learn.microsoft.com/en-us/azure/log-analytics/log-analytics-quick-collect-windows-computer) and other custom metrics.
Your customer will be given access to the workspace only, where they can see all metrics in a dashboard. If there is a need for log filtering, you can use the Log Analytics query language (ref: https://learn.microsoft.com/en-us/azure/log-analytics/log-analytics-log-search-transition).
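To make the role part concrete, here is a minimal sketch of assigning the built-in Log Analytics Reader role scoped to just the workspace, assuming the azure-mgmt-authorization and azure-identity Python packages (model shapes vary between package versions, and all IDs below are placeholders):

```python
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = "<subscription-id>"
client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

# Scope the assignment to the workspace only, not the whole subscription.
scope = (
    f"/subscriptions/{subscription_id}/resourceGroups/<rg>"
    "/providers/Microsoft.OperationalInsights/workspaces/<workspace>"
)

# ID of the built-in "Log Analytics Reader" role; verify it in your tenant
# with: az role definition list --name "Log Analytics Reader"
role_definition_id = (
    f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization"
    "/roleDefinitions/73c42c96-874c-492b-b04d-ab87d138a893"
)

client.role_assignments.create(
    scope,
    str(uuid.uuid4()),  # role assignment names are GUIDs
    RoleAssignmentCreateParameters(
        role_definition_id=role_definition_id,
        principal_id="<customer-aad-object-id>",
    ),
)
```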
Log Analytics is a paid service, but you get up to 10 free workspaces per subscription. The workspace is an Azure resource, so the usual subscription limits apply: you can create up to 800 workspaces per resource group, and with up to 800 resource groups a subscription can hold 800 × 800 workspaces (for reference, if you would like to do capacity planning for your workspace-based solution). For Log Analytics pricing, read here (https://azure.microsoft.com/en-us/pricing/details/log-analytics/).
Log Analytics is a good choice: its value proposition is to offer your customer an intuitive dashboard for monitoring their virtual machine performance, with near-real-time monitoring. The solution is also cloud native.
There is also a management solution that bundles VM capacity and performance monitoring, which you can try now: https://learn.microsoft.com/en-us/azure/log-analytics/log-analytics-capacity
Solution #2
Log Analytics might not be your choice because it adds Azure service and operational cost. If you need something cheaper, you can collect metrics from your virtual machine with Performance Counters, a built-in feature of Windows. With Performance Counters you can export to an Excel file, or visualize the data in Power BI or a custom chart.
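As a sketch of that cheaper route, the following samples CPU and memory into a CSV you can open in Excel or feed into Power BI; it uses psutil as a cross-platform stand-in for reading Windows Performance Counters (on Windows itself you could equally use typeperf or PerfMon's own export):

```python
import csv
import time

import psutil  # pip install psutil

# Append one CPU/memory sample every 15 seconds for an hour.
with open("perf_samples.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "cpu_percent", "memory_percent"])
    for _ in range(240):
        writer.writerow([
            time.strftime("%Y-%m-%dT%H:%M:%S"),
            psutil.cpu_percent(interval=1),  # blocks for 1s while sampling
            psutil.virtual_memory().percent,
        ])
        time.sleep(14)
```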
Other Solutions
You can utilize Azure Monitor and its API to get data; for example, this API: https://learn.microsoft.com/en-us/rest/api/monitor/metricdefinitions/list. You would certainly need to visualize or format the data in some intuitive way to satisfy your customer. That can be a custom front-end web app, Power BI, or even Excel with charts.
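For illustration, a hedged sketch of calling that metric definitions endpoint with plain requests, assuming the azure-identity package for authentication; the resource ID is a placeholder:

```python
import requests
from azure.identity import DefaultAzureCredential

# Placeholder: the full ARM resource ID of the resource you want metrics for.
resource_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Compute/virtualMachines/<vm>"
)

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
response = requests.get(
    f"https://management.azure.com{resource_id}/providers/Microsoft.Insights/metricDefinitions",
    headers={"Authorization": f"Bearer {token}"},
    params={"api-version": "2018-01-01"},
)
response.raise_for_status()
for definition in response.json()["value"]:
    print(definition["name"]["value"], definition["unit"])
```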
You can also query Azure Blob Storage and use Stream Analytics combined with Power BI to visualize your data (https://thuansoldier.net/?p=7187).
There is no single solution. It really depends on your existing resource capacity, your budget, and so on.
Related
As part of a client requirement, I've been asked to set up a central log repository for different Azure workloads, including storage accounts and databases. I see a default diagnostic setting in place, but all of those are disabled. To enable them, we need to enable certain logs/metrics, which will then be ingested into the workspace. Now I want to make a cost-effective and accurate selection of the logs/metrics for storage accounts. Can someone with deeper knowledge of this domain enlighten me about it?
I have to make the same decision for PostgreSQL and Cosmos DB databases too. Please help me with this.
Please check the below points and references in detail.
Selection:
You can select the logs for the operations you want, to get all the details you wish for. The selection depends on the requirement.
A good practice is to go through your agents and monitoring settings and see exactly what you are logging. Capture the logs that are important for your monitoring purpose.
Choose the cheapest region to create and store your Log Analytics workspace.
If you have a very high volume of log ingestion, it would be prudent to opt for an Azure commitment tier.
In case you need to export the Log Analytics data, rather than exporting all of it, filter it and send only the relevant log data.
The above points can significantly reduce your Azure bill and help you use Azure Monitor effectively. See Understand Azure Monitor and Log Analytics Pricing and Cost Optimization (azurelib.com).
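As one way to apply that selectivity, here is a sketch that enables only write/delete logs and the Transaction metric on a storage account's blob service, assuming the azure-mgmt-monitor Python package (the setting name and all IDs are placeholders; read logs are deliberately left off to save cost):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    DiagnosticSettingsResource,
    LogSettings,
    MetricSettings,
)

client = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Blob logs are configured on the blob service sub-resource, not the account.
resource_uri = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Storage/storageAccounts/<account>/blobServices/default"
)

client.diagnostic_settings.create_or_update(
    resource_uri,
    "send-to-workspace",  # hypothetical setting name
    DiagnosticSettingsResource(
        workspace_id=(
            "/subscriptions/<sub-id>/resourceGroups/<rg>"
            "/providers/Microsoft.OperationalInsights/workspaces/<workspace>"
        ),
        logs=[
            LogSettings(category="StorageRead", enabled=False),   # usually the noisiest
            LogSettings(category="StorageWrite", enabled=True),
            LogSettings(category="StorageDelete", enabled=True),
        ],
        metrics=[MetricSettings(category="Transaction", enabled=True)],
    ),
)
```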
Storing:
Log data can accumulate in your account over time, which increases the cost of storage.
If you need log data for only a short period, you can reduce your costs by shortening the log data retention period.
Use lifecycle policy to move data between access tiers.
Data ingested into a Log Analytics workspace can be retained at no additional charge (free) for up to the first 31 days. See the design considerations and lower the data retention if you don't need more than that. See Monitoring Azure Blob Storage | Microsoft Docs.
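For example, a minimal sketch of lowering workspace retention, assuming the azure-mgmt-loganalytics Python package (resource names are placeholders; method names vary slightly between package versions):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.loganalytics import LogAnalyticsManagementClient

client = LogAnalyticsManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Fetch the workspace, shorten its retention, and write it back.
workspace = client.workspaces.get("<resource-group>", "<workspace-name>")
workspace.retention_in_days = 30  # keep only what you actually need
client.workspaces.begin_create_or_update(
    "<resource-group>", "<workspace-name>", workspace
).result()
```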
Storage Insights is a dashboard on top of Azure Storage metrics and logs. You can use Storage Insights to examine the transaction volume and used capacity of all your accounts. That information can help you decide which accounts you might want to retire.
Analyze:
Analyze the used capacity and monitor the use of the containers.
You can consider reducing the total cost by exporting logs to a storage account and then using a serverless query solution on top of the log data. See blob storage monitoring / optimize cost for infrequent queries.
Organize data into access tiers. Log Analytics has commitment tiers, which can save you as much as 30 percent compared to the pay-as-you-go price.
You should periodically review this information to determine if you can reduce your charges by moving to another tier.
References:
Plan and manage costs for Azure Blob Storage | Microsoft Docs
Azure Monitor Logs pricing details - Azure Monitor | Microsoft Docs
Azure Monitor Log Analytics too Expensive? Part 2 - Save Some Money | Thomas Stringer (trstringer.com)
There's an awful lot of disjointed documentation on monitoring networks/resources in Azure. What I'm looking for is which pieces are needed to get information from VMs, NVA firewalls, Azure load balancers, and other network resources, plus network connectivity, into a single pane of glass in Azure. I'm only concerned about Azure, not on-prem, for now.
I've come across Azure Monitor, Log Analytics workspaces, Event Hub, VM extensions, Network Watcher, Insights, etc., but I'm not sure which are required and which are not. One doc leads to the next and I end up with 30 tabs open. I'll also need to be able to push logs to other security devices such as a SIEM.
Does anyone know of a deployment guide that wraps this all up in a more logical fashion? Does anyone have any feedback on which pieces from azure (not 3rd parties) are required at a minimum to accomplish a single pane of glass to view my Azure environment holistically?
General overview of observability in Azure
Likely, the thing you're looking for is Azure Monitor. It's an umbrella term for everything observability related inside Azure.
To store metrics and logs you need Log Analytics: it can query data with the Kusto query language, visualize results, and define alerts on queries.
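For example, a minimal sketch of running a Kusto query against a workspace from Python, assuming the azure-monitor-query and azure-identity packages (the workspace GUID is a placeholder):

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Hourly average CPU per computer over the last day.
response = client.query_workspace(
    workspace_id="<workspace-guid>",
    query="""
        Perf
        | where ObjectName == 'Processor' and CounterName == '% Processor Time'
        | summarize avg(CounterValue) by Computer, bin(TimeGenerated, 1h)
    """,
    timespan=timedelta(days=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```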
Alerts are quite a complex beast, as they are spread across the entire cloud. The two types I use the most:
the Log Analytics alert (which I mentioned above)
the Alerts tab, which is available on every Azure component view. For example, open a resource group and scroll down to the Monitoring section.
Each component also has a subset of built-in metrics. You have likely noticed that many Azure components display some charts on their Overview view. For example, an Azure Storage Account displays Total egress, Total ingress, and other line charts. When you click on these charts you can customize them. These metrics and charts are free to use.
Microsoft also has an all-in-one observability solution for Azure Functions and Web Apps: Application Insights.
Dashboards allow you to join multiple charts into a single view and share it with others.
If you care about security, Azure offers Azure Security Center.
Deployment/management strategy
I suggest starting with the following:
Create a Log Analytics workspace, which is the storage for metrics and logs. The Azure docs article explains how to design it: how many instances to use, how to rate-limit ingestion (it can get expensive if it goes out of control), how to access it, and so on.
To get logs from Azure components, look for the Diagnostic settings tab on a component's page in the Azure portal, though not all components have it (sic!). I suggest:
sending the most critical data to the Log Analytics workspace, to store it in a queryable format for 30 days (within the free tier); this is needed for investigating current issues with your infrastructure
if you might need logs beyond 30 days, sending them to a Storage Account
for the SIEM integration you mentioned, routing the required events to Event Hub and then processing the stream according to your requirements (see the sketch below)
So, if you need long-term storage, you need to create an Azure Storage Account.
If you need real-time analysis, you need to build a pipeline based on Azure Event Hub.
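For the SIEM branch, a minimal sketch of reading the routed diagnostic events off the Event Hub, assuming the azure-eventhub Python package (connection details are placeholders; forwarding to your SIEM is left as a comment):

```python
from azure.eventhub import EventHubConsumerClient

client = EventHubConsumerClient.from_connection_string(
    conn_str="<event-hub-namespace-connection-string>",
    consumer_group="$Default",
    eventhub_name="<event-hub-name>",
)

def on_event(partition_context, event):
    # Each event is a JSON batch of diagnostic records; forward it to your
    # SIEM here. This sketch just prints it.
    print(event.body_as_str())
    partition_context.update_checkpoint(event)

with client:
    client.receive(on_event=on_event, starting_position="-1")  # "-1" = read from start
```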
If you have Azure Functions and Web Apps, add Application Insights. In my experience, I would suggest starting with a separate instance per Azure Function resource or service.
Create alerts for each component separately. If you do it through the UI, open the component page in the portal and look for the Alerts tab there. If you're automating the process (please do so as soon as possible), do not expect an easy trip: I used ARM templates and Terraform, and in both cases there are dozens of barely documented features.
Join related components' core metrics into dashboards and share them with the team. This guide is a good starting point. Note that when you share a dashboard, it is also persisted as an Azure resource in the subscription.
I need to build a dashboard which will visualize the usage and cost of many Azure subscriptions, accounts, and departments.
My plan was:
Send the data that is 'behind' the Azure Cost Analysis view, to the log analytics workspace.
In the log analytics workspace, perform custom aggregations / filters.
Display those aggregations as charts in Azure Metrics or directly in Azure Dashboard.
The problem is with step 1: I don't know how to send the data that is 'behind' the Azure Cost Analysis view to the Log Analytics workspace.
I thought of two solutions:
Fetching the data from the Azure Cost & Billing API.
Scheduling an export of the cost analysis data to a storage account, and then somehow moving the data from the storage account to the Log Analytics workspace.
Both solutions seem a bit like overkill to me. Is there a more direct approach to send the cost analysis data to a Log Analytics workspace?
If there is no option such as that, I would be happy to know how would you suggest moving the exported data from the storage account to the log analytics, or do you have some other idea?
Thank you!
The only native solution is to schedule an export of the costs as CSV into a storage account from the Cost Management blade. If you want to load the data into a Log Analytics workspace, Azure Automation and a scheduled script would work.
I believe a direct approach is currently not available, but I see a feature request raised in the UserVoice / feedback forum for this same requirement. If you're interested, you may upvote it, because in general the responsible Azure product/feature team triages and prioritizes received feedback based on various factors such as the number of votes, feasibility, open prioritized backlog items, etc.
I would suggest fetching the data from the Azure Cost & Billing API and sending it to Log Analytics from a REST API client using the HTTP Data Collector API. For more information and examples, refer to this Azure document. Alternatively, if you want to fetch the data from the Cost & Billing API and store it on a machine, you can go with custom logs; for more information, refer to this Azure document.
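To make the Data Collector route concrete, here is a sketch of the documented HTTP Data Collector API signing scheme in Python; the record shape and log type are my own examples (records would land in a CostAnalysis_CL custom table):

```python
import base64
import hashlib
import hmac
import json
from datetime import datetime, timezone

import requests

WORKSPACE_ID = "<workspace-id>"  # Log Analytics workspace ID
SHARED_KEY = "<primary-key>"     # workspace primary key
LOG_TYPE = "CostAnalysis"        # records land in the CostAnalysis_CL table

def build_signature(date: str, content_length: int) -> str:
    # String-to-sign format as documented for the HTTP Data Collector API.
    string_to_sign = f"POST\n{content_length}\napplication/json\nx-ms-date:{date}\n/api/logs"
    digest = hmac.new(
        base64.b64decode(SHARED_KEY), string_to_sign.encode("utf-8"), hashlib.sha256
    ).digest()
    return f"SharedKey {WORKSPACE_ID}:{base64.b64encode(digest).decode()}"

def post_records(records: list) -> None:
    body = json.dumps(records)
    rfc1123_date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")
    response = requests.post(
        f"https://{WORKSPACE_ID}.ods.opinsights.azure.com/api/logs?api-version=2016-04-01",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": build_signature(rfc1123_date, len(body)),
            "Log-Type": LOG_TYPE,
            "x-ms-date": rfc1123_date,
        },
    )
    response.raise_for_status()

# Example record pulled from the Cost & Billing API (shape is hypothetical).
post_records([{"SubscriptionName": "prod", "Cost": 42.17, "Currency": "EUR"}])
```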
Other related references:
Use cost alerts to monitor usage and spending
Supported metrics with Azure Monitor
I want to see when my resources are idling (e.g. certain resources might only be used during business hours and not used for any other background process). I'd like to do that preferably through an API call.
It all depends on the type of resource and what you want to do. You could use the Azure Monitor API or the Azure Data Explorer API with Kusto to query specific metrics for your different services. Depending on the type of data, this may require you to have more analytics enabled.
Here are some examples based on types of services.
Azure App Service - You could query for CPU, Memory, HTTP Requests, etc. This would give you an idea of activity. These same metrics tie into the auto-scaling.
Azure VMs - CPU, Memory, Disk IO, etc. You could determine your baseline then you would know when it is idle or not.
Azure Storage - Transactions, Ingress, Egress, Requests, etc. You could use that to determine if there is activity in your storage account.
As you can see, it all depends on how you want to define idling. If the goal is to reduce costs, that will be difficult with many of these services. You could scale your App Services up and down with scripts, or scale in/out based on metrics. The same can be done with your Azure VMs, or by stopping and starting them. Storage cannot be adjusted this way, but you are only charged for storage and egress, so that cost is dictated by activity.
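As a sketch of defining idleness yourself, the following pulls hourly average CPU for a VM with the azure-monitor-query package and flags hours under an arbitrary 5% threshold (the resource ID and threshold are assumptions, not a general-purpose idle detector):

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricAggregationType, MetricsQueryClient

client = MetricsQueryClient(DefaultAzureCredential())

# Placeholder: full ARM resource ID of the VM to inspect.
resource_uri = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Compute/virtualMachines/<vm>"
)

response = client.query_resource(
    resource_uri,
    metric_names=["Percentage CPU"],
    timespan=timedelta(days=1),
    granularity=timedelta(hours=1),
    aggregations=[MetricAggregationType.AVERAGE],
)
for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            if point.average is not None and point.average < 5:
                print(f"{point.timestamp}: likely idle ({point.average:.1f}% CPU)")
```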
Hope this helps.
No, this is not possible. How do you define "idling"? How would Azure know whether your service does anything or not? Besides, most PaaS resources cannot be stopped, so what's the use of that?
You can use Azure Advisor to get cost optimization advice, or Azure Monitor directly to gather performance data and then analyze it, but it's not going to be trivial.
I have multiple services running on Service Fabric. I would like to add Application Insights for logging. I'm just wondering whether I have to add an Application Insights resource for each microservice, or whether one common resource for all is enough. What is the best practice?
There is no such thing as the best practice for this. It really depends. Some considerations:
Pricing: depending on the tier (Basic or Enterprise) you will get an amount of data for free / included in the base price; see the docs. So in some cases, depending on the amount of traffic, you can reduce costs by having a dedicated AI resource per service. AI resources for services that send data below the threshold of the AI pricing plan are then (almost) free.
Querying: if you split up services into one AI resource each, getting an overview of the whole system is difficult, since at the moment you cannot create queries spanning multiple AI resources.
Responsibility: If you have multiple teams working on multiple services it might be an option to have an AI resource per team so they have a good insight in only the parts they are responsible for.
If you do decide to use a shared AI resource there are options like custom telemetry initializers to include custom data that further identify which ASF application or service is sending the data if it is not included by default.
See also Add Application Insights to an existing Azure Service Fabric cluster for more info about how to integrate AI.
Now, when it comes to bringing data together, you do have some additional options that may or may not need additional services or configuration. For example:
PowerBi: You can visualize data of AI resources using dashboards, see https://learn.microsoft.com/en-us/azure/application-insights/app-insights-export-power-bi
OMS: Operations Management Suite, see https://blogs.technet.microsoft.com/msoms/2016/09/26/application-insights-connector-in-oms/. As Jesse mentions, you can link multiple AI resources.
Custom dashboards: using the REST API, you can create your own solution that displays data for one or more AI resources.
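For that last option, a minimal sketch against the documented Application Insights REST query endpoint using plain requests; the app ID and API key come from the API Access blade of the AI resource, and the query itself is just an example:

```python
import requests

APP_ID = "<application-id>"  # from the AI resource's API Access blade
API_KEY = "<api-key>"        # ditto; a read-only telemetry key

# Request counts per role over the last 24 hours (the API's default timespan).
response = requests.get(
    f"https://api.applicationinsights.io/v1/apps/{APP_ID}/query",
    headers={"x-api-key": API_KEY},
    params={"query": "requests | summarize count() by cloud_RoleName, bin(timestamp, 1h)"},
)
response.raise_for_status()
for table in response.json()["tables"]:
    for row in table["rows"]:
        print(row)
```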