Service Fabric: control what events get saved to Table Storage - Azure

I have a Service Fabric cluster on Azure running a very simple app taken from this tutorial.
When running the app locally, the Visual Studio Diagnostic Events viewer shows three kinds of events:
CRM
MasterCRM
ServiceMessage
I believe CRM and MasterCRM are related to the cluster manager, while ServiceMessage carries events from my app - in this case just a message reporting the current value of a counter.
This data is also saved to Table Storage. Is there any way for me to control what gets saved there? Right now my table consists of pages and pages of CRM and MasterCRM messages, and I've yet to see a message from my app; I'm sure if I keep paging I might eventually find one, but so far no luck.
I'd like to save only the events from my app to Table Storage and ignore the rest, but I've looked around and found no way to do it.
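For reference, the tutorial's stateful service emits the ServiceMessage event from its RunAsync loop, roughly like this (trimmed from the tutorial's project template, so names may differ slightly in your project):

```csharp
protected override async Task RunAsync(CancellationToken cancellationToken)
{
    var myDictionary = await this.StateManager
        .GetOrAddAsync<IReliableDictionary<string, long>>("myDictionary");

    while (true)
    {
        cancellationToken.ThrowIfCancellationRequested();

        using (var tx = this.StateManager.CreateTransaction())
        {
            var result = await myDictionary.TryGetValueAsync(tx, "Counter");

            // This is the ServiceMessage event shown in the Diagnostic Events viewer.
            ServiceEventSource.Current.ServiceMessage(this.Context,
                "Current Counter Value: {0}",
                result.HasValue ? result.Value.ToString() : "Value does not exist.");

            await myDictionary.AddOrUpdateAsync(tx, "Counter", 0, (key, value) => ++value);
            await tx.CommitAsync();
        }

        await Task.Delay(TimeSpan.FromSeconds(1), cancellationToken);
    }
}
```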

The events you refer to come from ETW: the Service Fabric runtime emits CRM and MasterCRM, and your application emits ServiceMessage, as you said. The Diagnostic Events viewer in Visual Studio reads these events directly from ETW, not from Azure Table Storage. If you want to filter the events shown in the viewer, click the gear icon and edit the sources listed.
The *CRM events come from Microsoft-ServiceFabric:5:0x4000000000000000.
Controlling what events get uploaded to Azure Table Storage in an Azure-hosted cluster requires a similar edit to the diagnostics (WadCfg) section of the cluster's ARM template.
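To make that concrete, here is a trimmed sketch of the relevant WadCfg fragment, recalled from the default cluster template, so verify it against your own; the application provider name below is a placeholder for your service's EventSource name:

```json
"WadCfg": {
  "DiagnosticMonitorConfiguration": {
    "EtwProviders": {
      "EtwEventSourceProviderConfiguration": [
        {
          "provider": "MyCompany-MyApplication-MyService",
          "scheduledTransferPeriod": "PT5M",
          "DefaultEvents": { "eventDestination": "ServiceFabricApplicationEventTable" }
        }
      ],
      "EtwManifestProviderConfiguration": [
        {
          "provider": "cbd93bc2-71e5-4566-b3a7-595d8eeca6e8",
          "scheduledTransferLogLevelFilter": "Information",
          "scheduledTransferKeywordFilter": "4611686018427387904",
          "scheduledTransferPeriod": "PT5M",
          "DefaultEvents": { "eventDestination": "ServiceFabricSystemEventTable" }
        }
      ]
    }
  }
}
```

The EtwManifestProviderConfiguration entry is what uploads the runtime events; note that 4611686018427387904 is 0x4000000000000000, the same keyword the viewer shows for the *CRM source above. Removing that entry, or tightening its keyword filter, stops the CRM/MasterCRM flood, while the EtwEventSourceProviderConfiguration entry keeps your app's events flowing to the table.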

Related

Push notifications from Azure Blob Storage to an arbitrary number of web apps

I use data stored in a blob as configuration for some Azure web apps, and I'd like to react to changes to it in near real time. Currently I just set a timed event and periodically check whether the ETag of the blob has changed; if it has, I download the new blob.
This is OK, but I don't want to poll the blob too often, and I also want to be reactive: the devs changing the values in the blob want to be able to test the new values quickly.
The web app scales up and down, and each instance of the web app needs to download the config file. So, as far as I can tell, I can't just use the event system Azure Storage has, as that would only send a notification to one instance.
Is there a recommended way to do this?
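For reference, my current polling loop looks roughly like this (a sketch using the Azure.Storage.Blobs SDK; the container and blob names are made up):

```csharp
using System;
using System.Threading.Tasks;
using Azure;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

class ConfigPoller
{
    public static async Task PollAsync(string connectionString)
    {
        // Hypothetical container/blob names for illustration.
        var blob = new BlobClient(connectionString, "config", "settings.json");
        ETag? lastSeen = null;

        while (true)
        {
            BlobProperties props = await blob.GetPropertiesAsync();
            if (props.ETag != lastSeen)
            {
                lastSeen = props.ETag;
                BlobDownloadResult download = await blob.DownloadContentAsync();
                // Reload the app's configuration from download.Content here.
            }

            // The polling interval is exactly the trade-off in question.
            await Task.Delay(TimeSpan.FromMinutes(5));
        }
    }
}
```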
As I understand it, you want centralized configuration management for your Azure web apps: once some config value changes, your app services should reload it automatically and promptly. Azure App Configuration provides exactly this kind of functionality.
You can also configure the reload conditions in code. There is a .NET Core sample here, and you can find other samples under the Enable dynamic configuration section of the docs.
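A minimal sketch of the dynamic-refresh wiring, assuming the Microsoft.Extensions.Configuration.AzureAppConfiguration package (the sentinel key name is made up, and newer package versions rename SetCacheExpiration to SetRefreshInterval):

```csharp
using System;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Configuration.AzureAppConfiguration;

IConfigurationRefresher refresher = null;

var config = new ConfigurationBuilder()
    .AddAzureAppConfiguration(options =>
    {
        options.Connect(Environment.GetEnvironmentVariable("APP_CONFIG_CONNECTION"))
               // When the sentinel key changes, reload everything.
               .ConfigureRefresh(refresh =>
                   refresh.Register("Sentinel", refreshAll: true)
                          .SetCacheExpiration(TimeSpan.FromSeconds(30)));
        refresher = options.GetRefresher();
    })
    .Build();

// Call this from a request pipeline or timer on every instance; it no-ops
// until the cache window elapses, so frequent calls stay cheap.
await refresher.TryRefreshAsync();
```

Because each scaled-out instance runs its own refresher, every instance picks up the change on its next refresh cycle, which addresses the "notification only goes to one instance" concern.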

How to get a holistic view of an Azure environment

There's an awful lot of disjointed documentation on monitoring networks/resources in Azure. What I'm looking for is which pieces are needed to get information from VMs, NVA firewalls, Azure load balancers, other network resources, and network connectivity into a single pane of glass in Azure. I'm only concerned about Azure, not on-prem, for now.
I've come across Azure Monitor, Log Analytics workspaces, Event Hubs, VM extensions, Network Watcher, Insights, etc., but I'm not sure which are required and which are not. One doc leads to the next and I end up with 30 tabs open. I'll also need to be able to push logs to other security devices such as a SIEM.
Does anyone know of a deployment guide that wraps this all up in a more logical fashion? Does anyone have feedback on which pieces of Azure (not third parties) are required, at a minimum, to get a single pane of glass for viewing my Azure environment holistically?
General overview of observability in Azure
Likely, the thing you're looking for is Azure Monitor. It's an umbrella term for everything observability-related inside Azure.
To store metrics and logs you need Log Analytics: it can query data with the Kusto Query Language (KQL), visualize results, and define alerts on queries.
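Once data is in a workspace, you can run the same KQL from code as from the portal. A sketch using the Azure.Monitor.Query package (the workspace id is a placeholder, and the Heartbeat table is just an example):

```csharp
using System;
using Azure.Identity;
using Azure.Monitor.Query;
using Azure.Monitor.Query.Models;

var client = new LogsQueryClient(new DefaultAzureCredential());

// Count recent heartbeats per computer over the last hour.
LogsQueryResult result = await client.QueryWorkspaceAsync(
    "<workspace-id>",
    "Heartbeat | where TimeGenerated > ago(1h) | summarize count() by Computer",
    new QueryTimeRange(TimeSpan.FromHours(1)));

foreach (LogsTableRow row in result.Table.Rows)
    Console.WriteLine($"{row["Computer"]}: {row["count_"]}");
```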
Alerts are quite a complex beast, as they are spread across the entire cloud. The two types I use the most:
log analytics alerts (which I mentioned above)
the Alerts tab, which is available on every Azure component view; for example, open a resource group and scroll down to the Monitoring section
Each component also has a set of built-in metrics. You've likely noticed that many Azure components display charts on their Overview view; for example, an Azure Storage Account displays Total egress, Total ingress, and other line charts. When you click on these charts you can customize them. These metrics and charts are free to use.
Microsoft also has an all-in-one observability solution for Azure Functions and Web Apps: Application Insights.
Dashboards allow you to join multiple charts into a single view and share it with others.
If you care about security, Azure offers Azure Security Center.
Deployment/management strategy
I suggest starting with the following:
Create a Log Analytics workspace, which is the storage for metrics and logs. The Azure docs article explains how to design it: how many instances to use, how to rate-limit ingestion (it can get expensive if it goes out of control), how to control access, and so on.
To get logs out of Azure components, look for the Diagnostic settings tab on a component's page in the Azure portal, though not all components have it (sic!). I suggest:
sending the most critical data to the Log Analytics workspace, to keep it in a queryable format for 30 days (that's within the free tier); this is needed for investigating current issues with your infrastructure
if you might need logs older than 30 days, sending them to a Storage Account as well
for the SIEM integration you mentioned, routing the required events to an Event Hub and then processing the stream according to your requirements
So, if you need long-term storage, you need to create an Azure Storage Account.
If you need real-time analysis, you need to build a pipeline based on Azure Event Hubs.
If you have Azure Functions and Web Apps, add Application Insights. In my experience, it's best to start with a separate instance per Azure Function resource or service.
Create alerts for each component separately. If you do it through the UI, open the component's page in the portal and look for the Alerts tab there. If you're automating the process (please do so as soon as possible), do not expect an easy trip: I've used both ARM templates and Terraform, and in both cases there are dozens of barely documented features.
Join related components' core metrics into dashboards and share them with the team. This guide is a good starting point. Note that when you share a dashboard, it is also persisted as an Azure resource in the subscription.

Best practice to store Azure WebJob logs (incl. data) in Azure

I have several Azure WebJobs (.NET Framework, not .NET Core) running which interact with an Azure Service Bus. Now I want a convenient way to store and analyze their log messages (including the related message from the Service Bus). We are talking about a lot of log messages per day.
My idea is to send the logs to an Azure Event Hub and store them in an Azure SQL Database. Later I can have, for example, a web app that enables users to conveniently browse and analyze the logs and view the messages.
Is this a bad idea? Should I use Application Insights instead?
Application Insights would likely cost more than your own implementation, so I would say this is a good idea. Just one change: I would send the logs through Logic Apps first and do some processing, such as handling error logs and info logs differently. Also, why are you considering SQL when this data can be stored in non-SQL Azure tables and fetched from there?
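If you do go the Event Hub route, sending from a .NET Framework WebJob is straightforward. A sketch using the Microsoft.Azure.EventHubs package (it targets .NET Standard 2.0, so it works from .NET Framework 4.6.1+; the connection string and payload shape are placeholders):

```csharp
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.EventHubs;
using Newtonsoft.Json;

class LogShipper
{
    private readonly EventHubClient _client;

    public LogShipper(string connectionString) =>
        // The connection string must include EntityPath=<hub-name>.
        _client = EventHubClient.CreateFromConnectionString(connectionString);

    public Task ShipAsync(string level, string message, string serviceBusMessageBody)
    {
        // Serialize the log entry (including the related Service Bus message body)
        // so the downstream consumer can route error/info logs differently.
        var json = JsonConvert.SerializeObject(new { level, message, serviceBusMessageBody });
        return _client.SendAsync(new EventData(Encoding.UTF8.GetBytes(json)));
    }
}
```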

Approach for creating consolidated trace/logs for an on-premises solution consuming Azure services

The following is the proposed transition in our application:
The web application is deployed in on-premises IIS (Web Server 1).
The web application has one piece of functionality (for example, Generate Invoice for a selected customer).
For each new Generate Invoice request, the web application writes a message to an Azure Service Bus queue.
An Azure Function is triggered for each new message in the Service Bus queue.
The Azure Function triggers a Web API (deployed on-premises).
The Web API generates the invoice for the customer and stores it in local file storage.
As of now, we have everything set up on-premises, and instead of Service Bus and an Azure Function we consume the Web API directly. With this type of infrastructure in place, we currently log all events in a MongoDB collection and provide a single consolidated view to the user, so they can identify what happened to a Generate Invoice request, and at which level and with which error it failed (in case of failures).
With the new proposed architecture, we are in the process of identifying ways of logging and tracing here, so we can display a consolidated view to users.
The only option I can think of is to log all events in Azure Cosmos DB from everywhere (i.e., the website, Service Bus, Function, and Web API), and then provide a consolidated view.
Can anyone confirm whether the suggested approach looks OK? Or does anyone have a better solution?
Application Insights monitors the availability, performance, and usage of your web applications, whether they're hosted in the cloud or on-premises. It leverages the powerful data analysis platform in Azure Monitor to give you deep insights into your application's operations and to diagnose errors without waiting for a user to report them.
Workbooks combine data visualizations, Analytics queries, and text into interactive documents. You can use workbooks to group together common usage information, consolidate information from a particular incident, or report back to your team on your application's usage.
For more details, you can refer to this article.
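To get the consolidated per-request view you have today with MongoDB, the key is to stamp every telemetry item, at each hop (website, Function, Web API), with the same correlation id. A sketch with the Application Insights SDK (the property name InvoiceRequestId is made up for illustration):

```csharp
using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

// One client per process; each hop uses the same instrumentation key
// so everything lands in a single Application Insights resource.
var configuration = TelemetryConfiguration.CreateDefault();
configuration.InstrumentationKey = "<instrumentation-key>";
var telemetry = new TelemetryClient(configuration);

// Generated once at the web app and passed along with the Service Bus message.
string invoiceRequestId = Guid.NewGuid().ToString();

// Stamp every item emitted by this hop with the shared request id, so a
// single query or workbook can reconstruct the whole Generate Invoice flow.
telemetry.Context.GlobalProperties["InvoiceRequestId"] = invoiceRequestId;

telemetry.TrackTrace("Invoice message queued to Service Bus");
// ... and on failure:
// telemetry.TrackException(ex);
telemetry.Flush();
```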

Is Azure Diagnostics only available through code?

Is Azure Diagnostics only implemented through code? Windows has the Event Viewer, where various types of information can be accessed, and ASP.NET websites have a Trace.axd file at the root that can be viewed for trace information.
I was thinking that something similar might exist in Azure. However, based on the following URL, Azure Diagnostics appears to require a custom code implementation:
https://azure.microsoft.com/en-us/documentation/articles/cloud-services-dotnet-diagnostics/#overview
Is there an easier, more built-in way to access Azure diagnostics, like those described for the other systems above? Or does a custom worker role need to be created to capture and process this information?
Azure Worker Roles have extensive diagnostics that you can configure.
You get to them via the Role configuration in Visual Studio.
Then, through the various tabs, you can configure specific types of diagnostics and have them periodically transferred to a Table Storage account for later analysis.
You can also enable a transfer of application-specific logs, which is handy and something I use to avoid having to remote into the service to view logs. (Here, I transfer all files under the AppRoot\logs folder to a blob container named wad-processor-logs, and do so every minute.)
If you go through the tabs, you will find that you can monitor quite a bit of detail, including custom Performance Counters.
Finally, you can also connect to your cloud service via the Server Explorer and dig into the same information: right-click on the instance and select View Diagnostics Data.
So, yes, you can get access to Event Logs, IIS logs, and custom application logs without writing custom code. Additionally, you can implement custom code to capture additional Performance Counters and other trace logging if you wish.
"Azure diagnostics" is a bit vague since there are a variety of services in Azure, each with potentially different diagnostic experiences. The article you linked to talks about Cloud Services, but are you restricted to using Cloud Services?
Another popular option is Azure App Service, which gives you many more options for capturing logs, including streaming them. This article goes into more detail: https://azure.microsoft.com/en-us/documentation/articles/web-sites-enable-diagnostic-log/
