Azure Cognitive Search usage monitoring

I have multiple Azure Cognitive Search services, distributed among different subscriptions. I would like to monitor the usage of each service, which should include:
storage: current/quota
no. indexes: current/quota
no. indexers: current/quota
no. data sources: current/quota
as I can access them through the Overview/Usage pane in the Azure portal or through the Management REST API.
I would like to push this data to Grafana for monitoring, but I have some problems around that:
I am not able to fetch this type of data from Metrics
Diagnostic settings do not allow exporting this type of data (only metrics and operation logs)
Since it's possible to access the usage data through the REST API, I was thinking about creating a Function App that pings each search service to collect the data and then pushes it to Log Analytics, which I can then use in Grafana. Maybe I can have one Function App per subscription and use RBAC to grant access to the search services, but I still don't like having one app with access to multiple search services.
How can I push the data from the REST API to Log Analytics/Grafana other than by using a Function App?

As of now, it looks like there is no option other than pushing this data to your destination via an additional function.
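As a sketch of what that function could look like: the snippet below reads the counters from the search service's Get Service Statistics endpoint (GET /servicestats) and forwards them to a Log Analytics custom table via the HTTP Data Collector API. The service name, keys, and the SearchUsage log type are placeholders, and the exact counter names should be verified against the REST API reference.

```python
import base64, hashlib, hmac, json
from datetime import datetime, timezone

import requests

# --- Placeholders: replace with your own values (e.g. from app settings) ---
SEARCH_SERVICE = "my-search-service"          # hypothetical service name
SEARCH_ADMIN_KEY = "<admin-key>"
WORKSPACE_ID = "<log-analytics-workspace-id>"
WORKSPACE_SHARED_KEY = "<workspace-primary-key>"
LOG_TYPE = "SearchUsage"                      # becomes SearchUsage_CL in Log Analytics

def fetch_search_usage() -> dict:
    """Read usage/quota counters from the Get Service Statistics endpoint."""
    resp = requests.get(
        f"https://{SEARCH_SERVICE}.search.windows.net/servicestats",
        params={"api-version": "2020-06-30"},
        headers={"api-key": SEARCH_ADMIN_KEY},
    )
    resp.raise_for_status()
    counters = resp.json()["counters"]
    # Flatten a few counters into one record; adjust to the fields you need.
    return {
        "service": SEARCH_SERVICE,
        "storageUsed": counters["storageSize"]["usage"],
        "storageQuota": counters["storageSize"]["quota"],
        "indexes": counters["indexesCount"]["usage"],
        "indexesQuota": counters["indexesCount"]["quota"],
        "indexers": counters["indexersCount"]["usage"],
        "indexersQuota": counters["indexersCount"]["quota"],
        "dataSources": counters["dataSourcesCount"]["usage"],
        "dataSourcesQuota": counters["dataSourcesCount"]["quota"],
    }

def push_to_log_analytics(record: dict) -> None:
    """Send one JSON record via the HTTP Data Collector API (SharedKey auth)."""
    body = json.dumps([record]).encode("utf-8")
    date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")
    string_to_sign = f"POST\n{len(body)}\napplication/json\nx-ms-date:{date}\n/api/logs"
    signature = base64.b64encode(
        hmac.new(base64.b64decode(WORKSPACE_SHARED_KEY),
                 string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    ).decode()
    resp = requests.post(
        f"https://{WORKSPACE_ID}.ods.opinsights.azure.com/api/logs?api-version=2016-04-01",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Log-Type": LOG_TYPE,
            "x-ms-date": date,
            "Authorization": f"SharedKey {WORKSPACE_ID}:{signature}",
        },
    )
    resp.raise_for_status()

if __name__ == "__main__":
    push_to_log_analytics(fetch_search_usage())
```

Once records arrive, they land in a SearchUsage_CL custom table, which the Azure Monitor data source in Grafana can chart directly.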

Related

How to get holistic view of Azure environment

There's an awful lot of disjointed documentation on monitoring network/resources in Azure. What I'm looking for is which pieces are needed to get information from VMs, NVA firewalls, Azure load balancers, and other network resources and network connectivity into a single pane of glass in Azure. Only concerned about Azure, not on-prem for now.
I've come across Azure Monitor, Log Analytics workspaces, Event Hub, VM extensions, Network Watcher, Insights, etc., but I'm not sure which are required and which are not. One doc leads to the next and I end up with 30 tabs open. I'll also need to be able to push logs to other security devices such as a SIEM.
Does anyone know of a deployment guide that wraps this all up in a more logical fashion? Does anyone have any feedback on which pieces from Azure (not third parties) are required at a minimum to accomplish a single pane of glass to view my Azure environment holistically?
General overview of observability in Azure
Likely, the thing you're looking for is Azure Monitor. It's an umbrella term for everything observability-related inside Azure.
To store metrics and logs you need Log Analytics: it can query data with the Kusto Query Language (KQL), visualize results, and define alerts on queries.
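For example, a minimal sketch of running a KQL query against a workspace from Python, assuming the azure-monitor-query package (the workspace ID is a placeholder and the identity needs read access to the workspace):

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder

client = LogsQueryClient(DefaultAzureCredential())

# KQL: count heartbeat records per computer over the last day.
query = "Heartbeat | summarize count() by Computer"
response = client.query_workspace(WORKSPACE_ID, query, timespan=timedelta(days=1))

for table in response.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```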
Alerts are quite a complex beast, as they are spread across the entire cloud. The two types I use the most:
Log Analytics alerts (which I mentioned above)
the Alerts tab, which is available on every Azure component view. For example, open a resource group and scroll down to the Monitoring section
Each component also has a subset of built-in metrics. You have likely noticed that many Azure components display charts on their Overview view; for example, an Azure Storage Account displays Total egress, Total ingress, and other line charts. When you click on these charts you can customize them. These metrics and charts are free to use.
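These built-in metrics can also be read programmatically. A hedged sketch against the Azure Monitor metrics REST endpoint (the resource ID is a placeholder; metric names vary per resource type, Ingress/Egress being storage-account examples):

```python
import requests
from azure.identity import DefaultAzureCredential

# Placeholder resource ID of a storage account.
RESOURCE_ID = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Storage/storageAccounts/<account>"
)

token = DefaultAzureCredential().get_token("https://management.azure.com/.default")

resp = requests.get(
    f"https://management.azure.com{RESOURCE_ID}/providers/Microsoft.Insights/metrics",
    params={
        "api-version": "2018-01-01",
        "metricnames": "Ingress,Egress",  # built-in storage metrics
        "interval": "PT1H",
    },
    headers={"Authorization": f"Bearer {token.token}"},
)
resp.raise_for_status()

# Print the latest data point of each requested metric.
for metric in resp.json()["value"]:
    print(metric["name"]["value"], metric["timeseries"][0]["data"][-1])
```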
Microsoft also has an all-in-one observability solution for Azure Functions and Web Apps: Application Insights.
Dashboards allow you to join multiple charts into a single view and share it with others.
If you care about security, Azure offers Azure Security Center.
Deployment/management strategy
I suggest starting with the following:
Create a Log Analytics workspace, which is the storage for metrics and logs. The Azure docs explain how to design it: how many instances to use, how to rate-limit ingestion (it can get expensive if it goes out of control), how to access it, and so on.
To get logs from Azure components, look for the Diagnostic Settings tab on a component's page in the Azure portal, though note that not all components have it. I suggest (a sketch of wiring up the destinations via the REST API follows this list):
sending the most critical data to the Log Analytics workspace to store it in a queryable format for 30 days (that's within the free tier). This is needed for investigating current issues with your infrastructure
if you might need logs older than 30 days, sending them to a Storage Account
for the SIEM integration you mentioned, routing the required events to an Event Hub and then processing the stream according to your requirements
So, if you need long-term storage, you need to create an Azure Storage Account.
If you need real-time analysis, you need to build a pipeline based on Azure Event Hubs.
If you have Azure Functions and Web Apps, add Application Insights. In my experience, it's best to start with a separate instance per Azure Function resource or service.
Create alerts for each component separately. If you do it through the UI, open the component's page in the portal and look for the Alerts tab there. If you're automating the process (please do so as soon as possible), do not expect an easy trip: I used ARM templates and Terraform, and in both cases there are dozens of barely documented features.
Join related components' core metrics into dashboards and share them with the team. This guide is a good starting point. Note that when you share a dashboard, it's also persisted as an Azure resource in the subscription.
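As mentioned in the Diagnostic Settings step above, here is a hedged sketch of wiring one component's logs to all three destinations at once through the management REST API (all resource IDs, the log category, and the setting name are placeholders; the available categories differ per resource type):

```python
import requests
from azure.identity import DefaultAzureCredential

# Placeholder IDs: the monitored resource and the three destinations.
RESOURCE_ID = "/subscriptions/<sub>/resourceGroups/<rg>/providers/<type>/<name>"
WORKSPACE_ID = "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"
STORAGE_ID = "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>"
EVENT_HUB_RULE_ID = "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.EventHub/namespaces/<ns>/authorizationRules/RootManageSharedAccessKey"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default")

setting = {
    "properties": {
        "workspaceId": WORKSPACE_ID,                       # queryable, short retention
        "storageAccountId": STORAGE_ID,                    # long-term archive
        "eventHubAuthorizationRuleId": EVENT_HUB_RULE_ID,  # SIEM stream
        "logs": [{"category": "AuditLogs", "enabled": True}],    # placeholder category
        "metrics": [{"category": "AllMetrics", "enabled": True}],
    }
}

resp = requests.put(
    f"https://management.azure.com{RESOURCE_ID}"
    "/providers/Microsoft.Insights/diagnosticSettings/send-everywhere",
    params={"api-version": "2021-05-01-preview"},
    headers={"Authorization": f"Bearer {token.token}"},
    json=setting,
)
resp.raise_for_status()
```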

Best Practice to store Azure WebJob Logs incl. Data in Azure

I have several Azure WebJobs (.NET Framework, not .NET Core) running which interact with an Azure Service Bus. Now I want a convenient way to store and analyze their log messages (incl. the related message from the Service Bus). We are talking about a lot of log messages per day.
My idea is to send the logs to an Azure Event Hub and store them in an Azure SQL database. Later I can have, for example, a web app that enables users to conveniently browse and analyze the logs and view the messages.
Is this a bad idea? Should I instead use Application Insights?
Application Insights would cost more than your implementation, so I would say this is a good idea. Just one change: I would send each log to Logic Apps and do some processing, like handling error logs, info logs, etc. differently. Also, why are you thinking about SQL when this can be stored in non-SQL Azure Table storage and fetched from there?
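For the ingestion side, a minimal sketch of publishing structured log events to Event Hubs, shown here in Python with azure-eventhub for brevity (the connection string, hub name, and event fields are placeholders; the .NET Framework WebJobs would use the equivalent .NET client):

```python
import json
from datetime import datetime, timezone

from azure.eventhub import EventData, EventHubProducerClient

CONNECTION_STR = "<event-hub-namespace-connection-string>"  # placeholder
EVENT_HUB_NAME = "webjob-logs"                              # placeholder

producer = EventHubProducerClient.from_connection_string(
    CONNECTION_STR, eventhub_name=EVENT_HUB_NAME
)

# One structured log event per processed Service Bus message.
event = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "level": "Error",
    "webjob": "invoice-processor",     # hypothetical job name
    "serviceBusMessageId": "abc-123",  # hypothetical correlation field
    "message": "Failed to process message",
}

with producer:
    batch = producer.create_batch()
    batch.add(EventData(json.dumps(event)))
    producer.send_batch(batch)
```

A downstream consumer (a Logic App, Stream Analytics, or a function) can then split error/info events and write them to Table storage, as suggested above.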

Is there a way to feed IIS logs into App Insights from Log analytics workspace?

We have logs (W3CIISLog) in a Log Analytics workspace for websites hosted on VMs. Similarly, we have Application Insights enabled for websites hosted on App Service. Now we want to access the telemetry data of both types of websites through a single interface, either via Application Insights or via Log Analytics. Just wondering if it's possible and what's the best way.
With Azure Monitor you can now query not only across multiple Log Analytics workspaces, but also data from a specific Application Insights app in the same resource group, another resource group, or another subscription. This provides you with a system-wide view of your data. You can only perform these types of queries in Log Analytics.
Querying across Log Analytics workspaces and from Application Insights: to reference another workspace in your query, use the workspace() identifier, and for an app from Application Insights, use the app() identifier.
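For the IIS-logs scenario above, a hedged sketch of such a cross-resource query (the workspace ID and the my-app-insights app name are placeholders), runnable through the azure-monitor-query client the same way as any single-workspace query:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder

# Cross-resource KQL: IIS logs from the workspace unioned with request
# telemetry from a named Application Insights app via the app() identifier.
QUERY = """
union
    (W3CIISLog | project timestamp = TimeGenerated, url = csUriStem, source = "vm-iis"),
    (app("my-app-insights").requests | project timestamp, url, source = "app-service")
| order by timestamp desc
| take 100
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(days=1))
for table in response.tables:
    for row in table.rows:
        print(row)
```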
Cross-resource query limits:
The number of Application Insights resources that you can include in a single query is limited to 100.
Cross-resource query is not supported in View Designer. You can author a query in Log Analytics, pin it to an Azure dashboard, and visualize a log search.
Cross-resource query in log alerts is supported in the new scheduledQueryRules API. By default, Azure Monitor uses the legacy Log Analytics Alert API for creating new log alert rules from the Azure portal, unless you switch from the legacy Log Alerts API. After the switch, the new API becomes the default for new alert rules in the Azure portal and lets you create cross-resource query log alert rules. You can create cross-resource query log alert rules without making the switch by using the ARM template for the scheduledQueryRules API, but such an alert rule is manageable through the scheduledQueryRules API and not from the Azure portal.
Documentation Reference - Cross-Resource Log queries in Azure Monitor
Hope the above information helps.

Approach for creating consolidated trace/ logs for on-premises solution consuming Azure services

Following is the proposed transition in our application:
Web Application is deployed in on-premises IIS (Web Server 1).
Web Application has one functionality (for example, Generate Invoice for selected customer).
For each new request of Generate Invoice, the web application is writing message to the Azure Service Bus Queue.
Azure function gets triggered for each new message in Azure Service Bus Queue.
Azure function triggers Web API (deployed on-premises).
Web API generates Invoice for the customer and stores in the local file storage.
As of now, we have everything set up on-premises, and instead of the Service Bus and Azure Function we directly consume the Web API. With this type of infrastructure in place, we are currently logging all events in a MongoDB collection and providing a single consolidated view to the user, so they can identify what happened to a Generate Invoice request, and at which level and with which error it failed (in case of failures).
With the new proposed architecture, we are in process of identifying ways for logging and tracing here, and display consolidated view to the users.
The only option I can think of is to log all events in Azure Cosmos DB from everywhere (i.e., website, Service Bus, Function, Web API), and then provide the consolidated view.
Can anyone suggest if the suggested approach looks OK? Or if anyone has some better solution?
Application Insights monitors the availability, performance, and usage of your web applications whether they're hosted in the cloud or on-premises. It leverages the powerful data analysis platform in Azure Monitor to provide you with deep insights into your application's operations and diagnose errors without waiting for a user to report them.
Workbooks combine data visualizations, Analytics queries, and text into interactive documents. You can use workbooks to group together common usage information, consolidate information from a particular incident, or report back to your team on your application's usage.
For more details, you could refer to this article.
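Whichever store you pick, the key to a consolidated view is a correlation ID that travels with each Generate Invoice request. A sketch of stamping one onto the Service Bus message with azure-servicebus (the queue name and connection string are placeholders), so the Function and the Web API can log against the same ID:

```python
import uuid

from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONNECTION_STR = "<service-bus-connection-string>"  # placeholder
QUEUE_NAME = "generate-invoice"                     # placeholder

def enqueue_invoice_request(customer_id: str) -> str:
    """Send a Generate Invoice request tagged with a correlation ID."""
    correlation_id = str(uuid.uuid4())
    message = ServiceBusMessage(
        body=customer_id,
        correlation_id=correlation_id,  # built-in Service Bus message property
    )
    with ServiceBusClient.from_connection_string(CONNECTION_STR) as client:
        with client.get_queue_sender(QUEUE_NAME) as sender:
            sender.send_messages(message)
    # The web app logs this ID; the Function reads msg.correlation_id and
    # forwards it to the Web API so every tier logs against the same ID.
    return correlation_id
```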

Azure service bus statistics/Monitoring

I want to make a dashboard which shows the status of our Azure Service Bus queues and displays the history for "messages added to queue", "length of queue", "messages processed", etc. Using the Azure Management Portal, I can see most of these statistics manually for each queue.
Is there any way to get access to the data that is displayed in the Management Portal through one of the APIs, as I want to combine the data from the number of queues that we use into a single interface? I have searched in vain, but I don't want to log my own statistics, as that seems like redoing a task that Microsoft already performs.
Currently, with the REST API, all I can see is how to get the current approximate count of messages in the queue.
There is an API for this now (wasn't back when the OP created the thread):
https://msdn.microsoft.com/en-gb/library/azure/dn163589.aspx (REST)
https://msdn.microsoft.com/en-us/library/mt348562.aspx (.NET)
Also, I believe it should be available via Azure Insights API:
https://msdn.microsoft.com/en-us/library/microsoft.azure.insights.aspx
It is possible to fetch the count of messages in a queue, incoming messages, and outgoing messages with the help of the latest Azure Monitor metrics, with which you can build your own dashboard. Or you can make use of Azure Monitor in the Azure portal, which allows you to configure dashboards and alerts.
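For the current per-queue counters the portal shows, there is also the management-plane client; a sketch with azure-servicebus (the connection string is a placeholder):

```python
from azure.servicebus.management import ServiceBusAdministrationClient

CONNECTION_STR = "<service-bus-namespace-connection-string>"  # placeholder

with ServiceBusAdministrationClient.from_connection_string(CONNECTION_STR) as admin:
    # Enumerate every queue in the namespace and read its runtime counters.
    for queue in admin.list_queues():
        props = admin.get_queue_runtime_properties(queue.name)
        print(
            f"{queue.name}: active={props.active_message_count}, "
            f"dead-lettered={props.dead_letter_message_count}, "
            f"total={props.total_message_count}"
        )
```

Historical series such as incoming/outgoing messages would come from Azure Monitor metrics scoped to the Service Bus namespace resource ID, as described above.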
