Supported events from Azure to QRadar

Are the resource logs (which are part of platform logs) from Azure supported in QRadar, or do we need to build a custom parser for each resource type in the subscription?
I read the QRadar DSM documentation, and it mentions platform activity logs but not resource logs. Take an example where we get gateway logs, WebSocket connection logs, request logs, etc. from our Azure deployment. Can all resource logs be picked up from Event Hub and integrated into QRadar, and is there a list of the resource logs QRadar supports?

If I understand your question correctly, you are looking to extend the existing parsers in QRadar (QR) without having to implement custom properties.
For this, IBM has published the "IBM QRadar Content Extension for Azure":
https://exchange.xforce.ibmcloud.com/hub/extension/7a89f51852efa37de0809457ef1006dd
I also recommend installing the extension "Microsoft Azure Security Center Connected Assets & Risks Connector" (https://exchange.xforce.ibmcloud.com/hub/extension/0dbfab6a22bca7add7a99fa19fdd426f), which allows you to monitor other risk events via Azure Security Center (ASC) and integrate assets that are not yet parsed into QRadar.
Probably the best way to handle Azure log data is to run QRadar and Sentinel side by side: turn on Sentinel's Data Connectors for the Azure-specific resources, which keeps you up to date with integration, data parsing, and the current built-in rules. We have this scenario deployed for selected sources (Exchange, Teams, risky sign-ins, etc.), monitor them via built-in rules in Sentinel, and then integrate the incidents into QRadar; see https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/azure-sentinel-side-by-side-with-qradar/ba-p/1488333. We still store the logs in QRadar, but we use Sentinel for the Azure-specific rules.
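Before relying on any support matrix, it can also help to look at what actually lands in the hub. Below is a minimal sketch, assuming the azure-eventhub Python package and placeholder connection details: each message streamed by diagnostic settings is a JSON envelope with a "records" array, and every record names its "category", which is what you would compare against the categories the QRadar DSM claims to parse.

```python
# Peek at the resource-log categories arriving in the Event Hub that your
# diagnostic settings stream to. Connection string and hub name are placeholders.
import json
from azure.eventhub import EventHubConsumerClient

def on_event(partition_context, event):
    # Azure resource logs arrive as a JSON envelope with a "records" array;
    # each record carries the emitting resourceId and its log category.
    for record in json.loads(event.body_as_str()).get("records", []):
        print(record.get("resourceId"), "->", record.get("category"))

client = EventHubConsumerClient.from_connection_string(
    "<event-hub-namespace-connection-string>",
    consumer_group="$Default",
    eventhub_name="<hub-targeted-by-diagnostic-settings>",
)
with client:
    client.receive(on_event=on_event, starting_position="-1")  # read from the start
```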
Regards.

Related

Forward event logs to Sentinel using Azure Monitor Agent (AMA)

We are implementing Sentinel and we want to forward the Security Events log, the Application log, and other custom event logs to it. For the Security Events log there is a connector that can forward it to Sentinel, but how can I forward the other event logs using the Azure Monitor Agent?
Our servers are Windows Server 2019 with the Azure Monitor Agent, onboarded through Azure Arc (on-premises).
Thanks.
Just create a Data Collection Rule (DCR) that sends your other event logs to the Log Analytics workspace Sentinel is attached to; that is all the Windows Security Events via AMA connector you referenced in your question is doing. The other things a built-in connector gives you, such as workbooks and analytics rules, you will need to create on your own.
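For illustration, here is roughly what such a rule looks like at the ARM layer; a hedged sketch using the azure-identity and requests packages, with placeholder IDs and XPath queries you would adapt to your own channels:

```python
# Create a Data Collection Rule that collects extra Windows event channels
# and sends them to the Sentinel workspace. IDs and queries are placeholders.
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = ("https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>"
       "/providers/Microsoft.Insights/dataCollectionRules/dcr-custom-events"
       "?api-version=2022-06-01")
body = {
    "location": "westeurope",
    "properties": {
        "dataSources": {
            "windowsEventLogs": [{
                "name": "customChannels",
                "streams": ["Microsoft-Event"],
                # XPath queries pick the channels and levels AMA collects.
                "xPathQueries": [
                    "Application!*[System[(Level=1 or Level=2 or Level=3)]]",
                    "MyCustomChannel!*",  # hypothetical custom event log
                ],
            }]
        },
        "destinations": {
            "logAnalytics": [{
                "name": "sentinelWorkspace",
                "workspaceResourceId": "<sentinel-workspace-resource-id>",
            }]
        },
        "dataFlows": [{"streams": ["Microsoft-Event"],
                       "destinations": ["sentinelWorkspace"]}],
    },
}
requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"}).raise_for_status()
```

You still need to associate the rule with each Arc-enabled server (a dataCollectionRuleAssociations child resource on the machine) before AMA starts collecting; the events then land in the Event table.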
Considerations
Just be conscious of the fact that if you are sending data to Sentinel and you are not using any analytics rules, you are wasting money. Sentinel is a Security Information and Event Management (SIEM) product, and you pay extra (on top of the Log Analytics costs) for its alerting and automation. You are better off sending data that has no alerting value to a separate Log Analytics workspace; you can then do a cross-workspace query to hunt on that dataset.
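For illustration, a minimal sketch of that pattern using the azure-monitor-query package; the auxiliary workspace name, table, and column are hypothetical:

```python
# Hunt over data kept in a cheaper, non-Sentinel workspace from the Sentinel
# workspace, using the workspace() KQL function for the cross-workspace hop.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())
query = """
workspace('non-security-logs').AppEvents_CL
| where TimeGenerated > ago(1d)
| summarize count() by EventName_s
"""
result = client.query_workspace("<sentinel-workspace-id>", query,
                                timespan=timedelta(days=1))
for table in result.tables:
    for row in table.rows:
        print(row)
```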
Best Practices
https://learn.microsoft.com/en-us/azure/sentinel/billing-reduce-costs#separate-non-security-data-in-a-different-workspace

How to get a holistic view of an Azure environment

There's an awful lot of disjointed documentation on monitoring network/resources in Azure. What I'm looking for is which pieces are needed to get information from VMs, NVA firewalls, Azure load balancers, and other network resources and network connectivity into a single pane of glass in Azure. I'm only concerned about Azure, not on-prem for now.
I've come across Azure Monitor, Log Analytics workspaces, Event Hubs, VM extensions, Network Watcher, Insights, etc., but I'm not sure which are required and which are not. One doc leads to the next and I end up with 30 tabs open. I'll also need to be able to push logs to other security devices such as a SIEM.
Does anyone know of a deployment guide that wraps this all up in a more logical fashion? Does anyone have feedback on which pieces from Azure (not third parties) are required at a minimum to accomplish a single pane of glass for viewing my Azure environment holistically?
General overview of observability in Azure
Likely, the thing you're looking for is Azure Monitor. It's an umbrella term for everything observability-related inside Azure.
To store metrics and logs you need Log Analytics: it can query data with the Kusto Query Language (KQL), visualize results, and define alerts on queries.
Alerts are quite a complex beast, as they are spread across the entire cloud. The two types I use the most:
the log-analytics alert (which I mentioned above)
the Alerts tab, which is available on every Azure component's view; for example, open a resource group and scroll down to the Monitoring section
Each component also has a set of built-in metrics. You have likely noticed that many Azure components display charts on their Overview view. For example, an Azure Storage Account displays Total egress, Total ingress, and other line charts. When you click on these charts you can customize them. These metrics and charts are free to use.
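As an illustration, the same Storage Account numbers can be pulled programmatically; a minimal sketch assuming the azure-monitor-query package and a placeholder resource ID (Ingress and Egress are built-in Storage metrics):

```python
# Query the built-in Ingress/Egress metrics of a Storage Account, hourly
# totals over the last day; the programmatic twin of the Overview charts.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

client = MetricsQueryClient(DefaultAzureCredential())
response = client.query_resource(
    "<storage-account-resource-id>",
    metric_names=["Ingress", "Egress"],
    timespan=timedelta(hours=24),
    granularity=timedelta(hours=1),
    aggregations=[MetricAggregationType.TOTAL],
)
for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(metric.name, point.timestamp, point.total)
```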
Microsoft also has an all-in-one observability solution for Azure Functions and Web Apps: Application Insights.
Dashboards allow you to join multiple charts into a single view and share it with others.
If you care about security, Azure offers Azure Security Center.
Deployment/management strategy
I suggest starting with the following:
Create a Log Analytics workspace, which is the storage for metrics and logs. The Azure docs article explains how to design it: how many instances to use, how to rate-limit ingestion (it can get expensive if it goes out of control), how to control access, and so on.
To get logs from Azure components, look for the Diagnostic settings tab on a component's page in the Azure portal; note that not all components have one. I suggest the following routing (a concrete diagnostic setting is sketched in code below):
send the most critical data to the Log Analytics workspace to keep it in a queryable format for 30 days (included in the free tier); this is what you need for investigating current issues with your infrastructure
if you might need logs beyond 30 days, send them to a Storage Account
you mentioned SIEM integration: route the required events to an Event Hub and then process the stream according to your requirements
So, if you need long-term storage, you need to create an Azure Storage Account.
If you need real-time analysis, you need to build a pipeline based on Azure Event Hubs.
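To make the routing above concrete, here is a hedged sketch of a single diagnostic setting that fans one component's logs out to all three destinations, sent straight to the REST API with azure-identity and requests; the IDs are placeholders, and not every resource type accepts the allLogs category group:

```python
# One diagnostic setting, three destinations: Log Analytics for querying,
# Storage for cheap retention, Event Hub for the SIEM stream. IDs are placeholders.
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
resource_id = "<full-resource-id-of-the-component>"
url = (f"https://management.azure.com{resource_id}"
       "/providers/Microsoft.Insights/diagnosticSettings/route-everything"
       "?api-version=2021-05-01-preview")
body = {
    "properties": {
        "workspaceId": "<log-analytics-workspace-resource-id>",  # queryable copy
        "storageAccountId": "<storage-account-resource-id>",     # long-term archive
        "eventHubAuthorizationRuleId": "<event-hub-namespace-auth-rule-id>",
        "eventHubName": "siem-stream",                           # SIEM pipeline
        "logs": [{"categoryGroup": "allLogs", "enabled": True}],
        "metrics": [{"category": "AllMetrics", "enabled": True}],
    }
}
requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"}).raise_for_status()
```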
If you have Azure Functions and Web Apps, add Application Insights. In my experience, it is best to start with a separate instance per Azure Function resource or service.
Create alerts for each component separately. If you do it through the UI, open the component's page in the portal and look for the Alerts tab there. If you're automating the process (please do so as soon as possible), do not expect an easy trip: I used ARM templates and Terraform, and in both cases there are dozens of barely documented features. A rough sketch of one automated log alert follows below.
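For illustration, here is one log-search alert expressed as the raw scheduledQueryRules payload that both ARM templates and Terraform ultimately produce; the query, thresholds, and action group are placeholders:

```python
# A scheduled query rule that fires when any VM stops sending heartbeats.
# All IDs, the query, and the thresholds are placeholders to adapt.
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = ("https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>"
       "/providers/Microsoft.Insights/scheduledQueryRules/heartbeat-missing"
       "?api-version=2021-08-01")
body = {
    "location": "westeurope",
    "properties": {
        "displayName": "VM heartbeat missing",
        "severity": 2,
        "enabled": True,
        "scopes": ["<log-analytics-workspace-resource-id>"],
        "evaluationFrequency": "PT5M",
        "windowSize": "PT15M",
        "criteria": {"allOf": [{
            "query": ("Heartbeat "
                      "| summarize last = max(TimeGenerated) by Computer "
                      "| where last < ago(10m)"),
            "timeAggregation": "Count",
            "operator": "GreaterThan",
            "threshold": 0,
            "failingPeriods": {"numberOfEvaluationPeriods": 1,
                               "minFailingPeriodsToAlert": 1},
        }]},
        "actions": {"actionGroups": ["<action-group-resource-id>"]},
    },
}
requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"}).raise_for_status()
```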
Join related components' core metrics into dashboards and share them with the team. This guide is a good starting point. Note that when you share a dashboard, it is also persisted as an Azure resource in the subscription.

Can I use Azure Policy to audit or enforce Azure Activity Log diagnostic settings?

The built-in policy audits whether a log profile exists, but not the destination stores. I would like to specify the Storage Account/Event Hub/Log Analytics workspace.
Can I enforce those settings? With that approach, I would also like to use modern diagnostic settings and send to both a storage account and a Log Analytics workspace.
There are two options for configuring diagnostic settings (besides doing this manually on each resource):
Azure Resource Manager (ARM) templates
This requires a deeper understanding of Azure and its resources. However, it gives you all the flexibility to configure any type of resource and target (storage, event hub, or log analytics). This option does not come with an additional feature to check compliance and remediate configuration drift.
See Microsoft documentation here
Azure Policy
Configuration can be done through the portal (look for 'Policy' under 'All services'). However, only the following resources are covered by the built-in policies: Service Bus, Search services, Event Hubs, Stream Analytics, and Data Lake Storage Gen1. On the other hand, it comes with a compliance dashboard and remediation; for anything else, you can write a custom definition, as sketched below.
See Microsoft documentation here
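As an illustration of such a custom definition, here is a hedged sketch of an auditIfNotExists policy that checks the destination workspace rather than the mere existence of a setting; the target resource type, parameter, and names are illustrative, and a deployIfNotExists variant for remediation would add an ARM deployment under "details":

```python
# Custom policy: a resource is compliant only if one of its diagnostic
# settings points at the given Log Analytics workspace. Names are placeholders.
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = ("https://management.azure.com/subscriptions/<sub-id>"
       "/providers/Microsoft.Authorization/policyDefinitions/audit-diag-destination"
       "?api-version=2021-06-01")
body = {
    "properties": {
        "displayName": "Audit diagnostic settings destination workspace",
        "mode": "Indexed",
        "parameters": {"workspaceId": {"type": "String"}},
        "policyRule": {
            "if": {"field": "type", "equals": "Microsoft.EventHub/namespaces"},
            "then": {
                "effect": "auditIfNotExists",
                "details": {
                    "type": "Microsoft.Insights/diagnosticSettings",
                    "existenceCondition": {
                        "field": "Microsoft.Insights/diagnosticSettings/workspaceId",
                        "equals": "[parameters('workspaceId')]",
                    },
                },
            },
        },
    }
}
requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"}).raise_for_status()
```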

Approach for creating consolidated trace/logs for an on-premises solution consuming Azure services

Following is the proposed transition in our application:
The web application is deployed in on-premises IIS (Web Server 1).
The web application has one piece of functionality (for example, Generate Invoice for a selected customer).
For each new Generate Invoice request, the web application writes a message to an Azure Service Bus queue.
An Azure Function is triggered for each new message in the Service Bus queue.
The Azure Function calls a Web API (deployed on-premises).
The Web API generates the invoice for the customer and stores it in local file storage.
As of now, we have everything set up on-premises, and instead of Service Bus and an Azure Function we consume the Web API directly. With this infrastructure in place, we currently log all events in a MongoDB collection and provide a single consolidated view to the user, so they can identify what happened to a Generate Invoice request and, in case of failure, at which level and with which error it failed.
With the new proposed architecture, we are in the process of identifying ways of logging and tracing here, and of displaying a consolidated view to the users.
The only option I can think of is to log all events in Azure Cosmos DB from everywhere (i.e., website, Service Bus, Function, Web API), and then provide a consolidated view.
Can anyone confirm whether this approach looks OK, or suggest a better solution?
Application Insights monitors the availability, performance, and usage of your web applications whether they're hosted in the cloud or on-premises. It leverages the powerful data analysis platform in Azure Monitor to provide you with deep insights into your application's operations and diagnose errors without waiting for a user to report them.
Workbooks combine data visualizations, Analytics queries, and text into interactive documents. You can use workbooks to group together common usage information, consolidate information from a particular incident, or report back to your team on your application's usage.
For more details, you could refer to this article.
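To make that concrete, here is a minimal sketch of instrumenting one hop (say, the on-premises Web API) with the azure-monitor-opentelemetry distro; the connection string is a placeholder. If the web application, the Azure Function, and the Web API all send telemetry to the same Application Insights resource, the shared W3C trace context correlates them into a single end-to-end operation per invoice, which is what the hand-rolled Cosmos DB log store was meant to provide:

```python
# Send traces and logs from one hop to Application Insights; repeat the same
# configuration in each hop so spans correlate under one operation id.
import logging
from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import trace

configure_azure_monitor(connection_string="InstrumentationKey=<placeholder>")
tracer = trace.get_tracer(__name__)

def generate_invoice(customer_id: str) -> None:
    # The span (and any log records emitted inside it) shows up in the
    # end-to-end transaction view alongside the other hops' telemetry.
    with tracer.start_as_current_span("generate-invoice"):
        logging.getLogger(__name__).info("Generating invoice for %s", customer_id)
        # ... actual invoice generation ...

generate_invoice("C-1001")
```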

Is Azure diagnostics only available through code?

Is Azure diagnostics only implemented through code? Windows has the Event Viewer where various types of information can be accessed. ASP.NET websites have a Trace.axd file at the root that can be viewed for trace information.
I was thinking that something similar might exist in Azure. However, based on the following url, Azure Diagnostics appears to require a custom code implementation:
https://azure.microsoft.com/en-us/documentation/articles/cloud-services-dotnet-diagnostics/#overview
Is there an easier, more built-in way to access Azure diagnostics like I described for other systems above? Or does a custom Worker role need to be created to capture and process this information?
Azure Worker Roles have extensive diagnostics that you can configure.
You get to them via the Role configuration; through the various tabs, you can set up specific types of diagnostics and have them periodically transferred to a Table Storage account for later analysis.
You can also enable a transfer of application-specific logs, which is handy and something I use to avoid having to remote into the service to view logs; for example, transferring all files under the AppRoot\logs folder to a blob container named wad-processor-logs every minute.
If you go through the tabs, you will find that you have the ability to extensively monitor quite a bit of detail, including custom Performance Counters.
Finally, you can also connect to your cloud service via Server Explorer and dig into the same information: right-click on the instance and select View Diagnostics Data.
So, yes, you can get access to Event Logs, IIS Logs and custom application logs without writing custom code. Additionally, you can implement custom code to capture additional Performance Counters and other trace logging if you wish.
"Azure diagnostics" is a bit vague since there are a variety of services in Azure, each with potentially different diagnostic experiences. The article you linked to talks about Cloud Services, but are you restricted to using Cloud Services?
Another popular option is Azure App Service, which allows you many more options for capturing logs, including streaming them, etc. Here is an article which goes into more details: https://azure.microsoft.com/en-us/documentation/articles/web-sites-enable-diagnostic-log/
