Defender for Containers: what's needed for log analysis? - azure

Looking for some info on Defender for containers and specifically how it gets the logs that it analyzes.
Documentation says the following (doc link):
To protect your Kubernetes containers, Defender for Containers receives and analyzes:
Audit logs and security events from the API server
Cluster configuration information from the control plane
Workload configuration from Azure Policy
Security signals and events from the node level
Where/how does it get the audit logs? Does it matter if they are sent to log analytics workspaces or to a storage account?
Which audit logs does it use? Following types are configurable:
Kubernetes API Server
Kubernetes Audit
Kubernetes Controller Manager
Kubernetes Scheduler
Kubernetes Cluster Autoscaler
Kubernetes Audit Admin Logs
guard
Kubernetes Cloud Controller Manager
csi-azuredisk-controller
csi-azurefile-controller
csi-snapshot-controller
Can I assume defender only uses the Kubernetes API Server type logs? Or are the other tiers also analyzed?
Thanks!
Dove into the documentation but didn't find a definitive answer.

I have created an AKS cluster with a single node pool. For this cluster we can check the logs, and the same logs also show up in Microsoft Defender for Cloud.
Getting the logs from Microsoft Defender for Cloud:
Where/how does it get the audit logs
You have to enable the categories of logs required from the AKS Cluster > Diagnostic Settings:
Based on which logs you need, enable the relevant categories, or simply select all of them.
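As a sketch, the diagnostic settings can also be enabled from the Azure CLI; the resource IDs and setting name below are placeholders, not values from the question:

```shell
# Placeholder IDs -- substitute your own AKS cluster and Log Analytics workspace resource IDs.
AKS_ID="/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.ContainerService/managedClusters/<cluster>"
LAW_ID="/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"

# Enable the kube-audit category and route it to the Log Analytics workspace
az monitor diagnostic-settings create \
  --resource "$AKS_ID" \
  --name "aks-audit-to-law" \
  --workspace "$LAW_ID" \
  --logs '[{"category": "kube-audit", "enabled": true}]'
```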
Which audit logs does it use? Following types are configurable:
The kube-audit category contains all audit log data, including the security-related audit events from the Kubernetes cluster.
Does it matter if they are sent to log analytics workspaces or to a storage account?
With a Storage Account you don't get a querying feature for filtering the logs, but with a Log Analytics workspace you can query the logs with filters such as the time range, and you can also increase the data retention period.
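For example, a time-filtered query can be run against the workspace from the CLI. This is a sketch: the workspace GUID is a placeholder, and it assumes the resource logs land in the AzureDiagnostics table:

```shell
# Query kube-audit entries from the last hour
# (may require the log-analytics CLI extension)
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "AzureDiagnostics
    | where Category == 'kube-audit'
    | where TimeGenerated > ago(1h)
    | take 10"
```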
Can I assume defender only uses the Kubernetes API Server type logs?
As you can see in the screenshot above, Microsoft Defender shows all the log types, such as Kubernetes API Server and security-related logs, and you can drill deeper by selecting each log entry, along with the remediation steps.

Related

Is there any way to find the Node scalability time on Azure Kubernetes Service (AKS) using Logs?

I want to find the Node scalability time on Azure Kubernetes Service (AKS) using Logs.
It's possible, with some assumptions.
This information is taken from the Azure AKS documentation (consider getting familiar with it; it describes how to enable the logs, where to look, etc.):
To diagnose and debug autoscaler events, logs and status can be retrieved from the autoscaler add-on.
AKS manages the cluster autoscaler on your behalf and runs it in the managed control plane. You can enable the control plane logs to see the logs and operations from the CA (cluster autoscaler).
The same cluster-autoscaler is used across different platforms, and each of them can have some platform-specific setup (e.g. for Azure AKS). Based on that, the logs should contain events like:
status, scaleUp, scaleDown, eventResult
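Assuming the cluster-autoscaler diagnostic category is enabled and flows into the AzureDiagnostics table, a query for those events could be sketched as follows (the workspace GUID is a placeholder, and the log column name, here assumed to be log_s, may differ in your workspace):

```shell
# List recent scale-up/scale-down events from the cluster autoscaler logs
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "AzureDiagnostics
    | where Category == 'cluster-autoscaler'
    | where log_s has 'scaleUp' or log_s has 'scaleDown'
    | project TimeGenerated, log_s
    | order by TimeGenerated desc"
```

Comparing the timestamps of a scaleUp event and the subsequent node-ready event gives an estimate of the node scale-out time.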

Azure application gateway firewall logs not being populated to log analytics workspace

We have provisioned an instance of the Azure Application Gateway (Standard v2, East AU region) and enabled its diagnostic settings to send all metrics and logs to a Log Analytics workspace, and this seems to be working fine. However, we wanted additional insight into the requests, so we scaled up the tier and enabled WAF v2 (as shown in the image below).
Now, based on this documentation https://learn.microsoft.com/en-us/azure/application-gateway/application-gateway-diagnostics#diagnostic-logging, and after waiting for some time, we expected the firewall logs to automatically populate the same Log Analytics workspace, but this does not seem to work and they are simply not populated there.
Note that we can see the "ApplicationGatewayAccessLog" entries, and the following query confirms it: AzureDiagnostics | distinct Category returns only one category, i.e. "ApplicationGatewayAccessLog".
Does anyone know if we are missing something or have any input?
Sometimes the output is not the same when you explore data from Application Gateway > Logs and from your specific Log Analytics workspace > Logs. You can compare these results on your side. See this issue.
In this case, you need to perform some access actions against your Application Gateway to trigger firewall log collection before the data can be picked up by Azure monitoring. Although the documentation states that firewall logs are collected every 60 seconds, the data is sometimes delayed (even by more than two days) before it appears, and the region you are located in also affects the display time. In this blog you can see an hourly log of firewall actions on the WAF.
For more information, you can use Log Analytics to examine Application Gateway Web Application Firewall Logs.
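To check whether any firewall entries have arrived at all, a query against the ApplicationGatewayFirewallLog category could look like this (a sketch; the workspace GUID is a placeholder):

```shell
# Count WAF log entries per hour to see when data starts flowing
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "AzureDiagnostics
    | where Category == 'ApplicationGatewayFirewallLog'
    | summarize count() by bin(TimeGenerated, 1h)"
```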

Persistent AKS container's log

I have an AKS cluster in a dev environment with some containers running on it, and I have also enabled Azure Log Analytics on it. However, I can only see the logs of the currently running containers, not of the old ones that have been killed or stopped.
I'm wondering how I can keep the logs permanently, even for containers that have been deleted or stopped. What is the best solution for this? I hoped Azure logs could do it, but it seems they don't.
Your best option is to use Azure Monitor for Containers (can be enabled post-deployment of the cluster): https://learn.microsoft.com/en-us/azure/azure-monitor/insights/container-insights-overview
It collects logs from pods and stores them in a Log Analytics workspace, where they can be queried using the Kusto Query Language.
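Once Container insights is enabled, stdout/stderr from pods lands in the ContainerLog table, which persists for the workspace's retention period regardless of whether the container is still running. A sketch of such a query (the workspace GUID is a placeholder):

```shell
# Retrieve the last week of container logs, including stopped/deleted containers
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "ContainerLog
    | where TimeGenerated > ago(7d)
    | project TimeGenerated, ContainerID, LogEntrySource, LogEntry
    | order by TimeGenerated desc"
```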

Azure Kubernetes Service: managed service for application log management

Problem statement:
As per my understanding, we can run Elasticsearch, Kibana, Logstash, etc. as pods in a Kubernetes cluster for log management, but that is also a memory-intensive setup. AWS provides various managed services such as CloudWatch, CloudTrail, and a managed ELK stack for log management.
Do we have a similar substitute in Azure, i.e. some managed service?
You can use AKS with Azure Monitor (reading). I'm not sure you can apply this to a non-AKS cluster (at least not in a straightforward fashion).
Onboarding (for AKS clusters) is really simple and can be done using various methods (portal included).
You can read more in the docs I've linked (for example, about capabilities).
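Onboarding an existing AKS cluster from the CLI is one such method; a sketch with placeholder names:

```shell
# Enable the Container insights (monitoring) add-on on an existing AKS cluster.
# Omitting --workspace-resource-id lets Azure create or reuse a default workspace.
az aks enable-addons \
  --addons monitoring \
  --name "<cluster-name>" \
  --resource-group "<resource-group>"
```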
Azure Monitor for Containers is available now, and once integrated, some cluster metrics as well as the logs are automatically collected and made available through Log Analytics.

Logging/Monitoring of Kubernetes cluster in Azure Container Service

Does Azure Container Service integrate with Azure Monitor?
Wondering what the best way is to do logging/monitoring of kubernetes cluster?
If you are looking for monitoring tools on Azure, you may want to use Azure OMS (Operations Management Suite). It gives you the ability to monitor the container inventory, performance, and logs in a single location. To my understanding, container stats are currently only available for Linux nodes if you deploy your k8s cluster through the Azure portal.
To do this, you first need to create an OMS account. At that point you should have the Workspace ID and key available. The next step is to create the OMS pod on each node using a DaemonSet.
For the detailed setup, take a look at https://learn.microsoft.com/en-us/azure/container-service/kubernetes/container-service-kubernetes-oms.
For third party tools, Grafana+influxdb is one of the ways I have tried before. Basically it provides you with the metrics on two levels: POD and NODE respectively. The displayed metrics included CPU Usage, Memory Usage, Network Usage and Filesystem Usage, etc. Of course, you can always alter your query to add extra metrics.
For the implementation of this approach, you can refer to https://github.com/Azure/acs-engine/blob/master/docs/kubernetes/monitoring.md.
Hope this helps :)
You can use this CLI command to browse a Kubernetes cluster deployed using Azure Container Service:
az acs kubernetes browse -g -n
This way you can see the Kubernetes web UI.
You can also use the kubectl proxy command.
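A sketch of the kubectl proxy approach (assumes kubectl is already configured for the cluster; the dashboard URL depends on where and how the dashboard is deployed):

```shell
# Start a local proxy to the Kubernetes API server on port 8001
kubectl proxy --port=8001

# If the dashboard is deployed in kube-system, it would then typically be reachable at:
# http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/
```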
