I have an AKS cluster in a dev environment with some containers running on it, and I have enabled Azure Log Analytics on it. But I can only see the logs of the containers that are currently running, not the old ones that have been killed or stopped.
I'm wondering how I can keep the logs permanently, even for containers that have been deleted or stopped. What is the best solution for this? I hoped Azure Log Analytics could do it, but it seems it doesn't.
Your best option is to use Azure Monitor for Containers (can be enabled post-deployment of the cluster): https://learn.microsoft.com/en-us/azure/azure-monitor/insights/container-insights-overview
It collects logs from pods and stores them in a Log Analytics workspace, where they can be queried using the Kusto Query Language.
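For example, once the data is flowing you can query logs of containers that have since been stopped or deleted straight from the CLI. This is only a minimal sketch: the workspace GUID is a placeholder, and it assumes the monitoring add-on is populating the ContainerLog table and that the az monitor log-analytics commands are available in your CLI version.
# Query the last 7 days of container logs, including containers that no longer exist
az monitor log-analytics query \
  --workspace 00000000-0000-0000-0000-000000000000 \
  --analytics-query "ContainerLog | where TimeGenerated > ago(7d) | project TimeGenerated, ContainerID, LogEntry"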
Looking for some info on Defender for Containers, specifically how it gets the logs that it analyzes.
Documentation says the following (doc link):
To protect your Kubernetes containers, Defender for Containers receives and analyzes:
Audit logs and security events from the API server
Cluster configuration information from the control plane
Workload configuration from Azure Policy
Security signals and events from the node level
Where/how does it get the audit logs? Does it matter if they are sent to log analytics workspaces or to a storage account?
Which audit logs does it use? Following types are configurable:
Kubernetes API Server
Kubernetes Audit
Kubernetes Controller Manager
Kubernetes Scheduler
Kubernetes Cluster Autoscaler
Kubernetes Audit Admin Logs
guard
Kubernetes Cloud Controller Manager
csi-azuredisk-controller
csi-azurefile-controller
csi-snapshot-controller
Can I assume Defender only uses the Kubernetes API Server type logs? Or are the other types also analyzed?
Thanks!
Dove into the documentation but didn't find a definitive answer.
I created an AKS cluster with one node pool.
With this cluster and its node pool created, we can check the logs for the cluster and also find them in Microsoft Defender for Cloud.
Getting the logs from Microsoft Defender for Cloud:
Where/how does it get the audit logs
You have to enable the required log categories from the AKS cluster > Diagnostic settings.
Based on the type of logs you need, enable the corresponding categories, or select all categories.
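The same diagnostic settings can also be created from the CLI instead of the portal. This is a sketch only; the setting name and the resource IDs are placeholders, and it assumes you only need the kube-audit category:
# Route the kube-audit category of the AKS cluster to a Log Analytics workspace
az monitor diagnostic-settings create \
  --name aks-audit-to-law \
  --resource /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.ContainerService/managedClusters/<cluster> \
  --workspace /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace> \
  --logs '[{"category": "kube-audit", "enabled": true}]'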
Which audit logs does it use? Following types are configurable:
The kube-audit category contains all of the audit log data, i.e. the security-category audit events from the Kubernetes cluster.
Does it matter if they are sent to log analytics workspaces or to a storage account?
With a Storage Account you do not get a querying feature for filtering the logs, but with a Log Analytics workspace you can query the logs with filters such as the time range, and you can also increase the retention period of the data.
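For example, once kube-audit is routed to a workspace you can filter it there. A rough sketch, assuming the audit records land in the AzureDiagnostics table with the payload in the log_s column (the workspace GUID is a placeholder):
# Pull the last hour of kube-audit records from the workspace
az monitor log-analytics query \
  --workspace 00000000-0000-0000-0000-000000000000 \
  --analytics-query "AzureDiagnostics | where Category == 'kube-audit' and TimeGenerated > ago(1h) | project TimeGenerated, log_s"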
Can I assume defender only uses the Kubernetes API Server type logs?
Microsoft Defender for Cloud surfaces findings based on all of these logs (Kubernetes API Server, security-related logs, and so on), and you can drill down into each finding along with its remediation steps.
On Azure:
I have an AKS cluster and a load balancer.
I keep getting an operation "Create or Update Load Balancer" in the load balancer log, every 1-2 minutes. What can cause this log entry, and can I stop this operation from occurring?
You can click the log entry and look at the change history (currently a preview feature), where you can see more details about what changed.
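You can also inspect these operations from the CLI. A small sketch, assuming the load balancer lives in the MC_* node resource group; the resource ID below is a placeholder you would adapt:
# List recent operations on the AKS load balancer from the activity log
az monitor activity-log list \
  --resource-id /subscriptions/<sub-id>/resourceGroups/MC_<rg>_<cluster>_<region>/providers/Microsoft.Network/loadBalancers/kubernetes \
  --offset 1h \
  --query "[].{time:eventTimestamp, op:operationName.localizedValue, caller:caller}" \
  --output table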
These logs are written because Services of the LoadBalancer type are being changed: AKS updates the Azure load balancer to match those Services.
And unfortunately, I'm afraid you cannot stop these operations. You can only create the AKS cluster and its agent pools and set some configuration, but that does not cover this behavior. The settings for the underlying AKS infrastructure are not configurable by you; they are managed by Azure.
Problem statement:
As per my understanding, we can run Elasticsearch, Kibana, Logstash, etc. as pods in a Kubernetes cluster for log management, but that is a memory-intensive stack. AWS provides various managed services like CloudWatch, CloudTrail, and a managed ELK stack for log management.
Do we have a similar substitute in Azure as well, i.e. some managed service?
You can use AKS with Azure Monitor (see the linked docs). I'm not sure you can apply this to a non-AKS cluster (at least not in a straightforward fashion).
Onboarding (for AKS clusters) is really simple and can be done using various methods (portal included).
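For instance, onboarding an existing cluster from the CLI is essentially one command. A sketch only; the resource group, cluster name, and workspace resource ID are placeholders:
# Enable the monitoring add-on (Azure Monitor for containers) on an existing AKS cluster
az aks enable-addons \
  --resource-group <rg> \
  --name <cluster> \
  --addons monitoring \
  --workspace-resource-id /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>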
You can read more on the docs I've linked (for example, about capabilities).
Azure Monitor for Containers is available now, and once integrated, cluster metrics as well as logs are automatically collected and made available through Log Analytics.
Does Azure Container Service integrate with Azure Monitor?
Wondering what the best way is to do logging/monitoring of kubernetes cluster?
If you are looking for monitoring tools on Azure, you may want to use Azure OMS (Operations Management Suite). This gives you the ability to monitor the container inventory, performance, and logs in a single location. To my understanding, container stats are only available for Linux nodes at the moment, if you are deploying your k8s cluster from the Azure Portal.
To do this, you first need to create an OMS account. At that point you should have the Workspace ID and the key available. The next step would be to create the OMS agent pod on each node using a DaemonSet.
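Roughly, that step looks like the following. This is only a sketch under assumptions: it assumes the DaemonSet manifest from the linked doc is saved locally as omsagent-daemonset.yaml and that you have filled your Workspace ID and key into it; the file name is just an example.
# Edit omsagent-daemonset.yaml and set your Workspace ID and key,
# then create the OMS agent on every node
kubectl create -f omsagent-daemonset.yaml
# Verify one omsagent pod is running per node
kubectl get pods -o wide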
For the detailed setup, take a look at https://learn.microsoft.com/en-us/azure/container-service/kubernetes/container-service-kubernetes-oms.
For third-party tools, Grafana + InfluxDB is one of the approaches I have tried before. Basically it provides you with metrics on two levels: pod and node. The displayed metrics include CPU usage, memory usage, network usage, filesystem usage, etc. Of course, you can always alter your query to add extra metrics.
For the implementation of this approach, you can refer to https://github.com/Azure/acs-engine/blob/master/docs/kubernetes/monitoring.md.
Hope this helps :)
You can use this CLI command to browse a Kubernetes cluster deployed using Azure Container Service:
az acs kubernetes browse -g <resource-group> -n <cluster-name>
This way you can see the Kubernetes web UI.
You can also use the kubectl proxy command.
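A sketch of the kubectl proxy route; the port and dashboard path below are the conventional defaults for that Kubernetes generation, so treat them as assumptions:
# Start a local proxy to the API server
kubectl proxy --port=8001
# Then open the dashboard through the proxy in a browser, e.g.
# http://127.0.0.1:8001/ui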
There are many Docker containers running on a given server. I have configured each application instance running inside a container to send its logs to standard output. Now, I am interested in the following:
Manage the logs coming out of all these containers.
Store all of these logs and also rotate them as needed.
Only store the logs locally on the server or on a shared location but NOT sending them to any cloud/log management service.
Probably coming up with a container which can take up the task of log management independently.
Any suggestions on how I can achieve this?
You can create a data container which will gather the logs from all of your containers, as sketched below.
Then you can process those logs, for example with the ELK stack (Elasticsearch, Logstash, Kibana), to aggregate, transform, and visualise them for your needs.
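A minimal sketch of the data-container pattern with plain Docker commands; the container, volume, and image names here are made up for illustration:
# Create a data-only container that owns a shared /logs volume
docker create -v /logs --name logstore alpine:3 /bin/true
# Run application containers with the shared volume mounted,
# writing (or redirecting) their logs into /logs
docker run -d --name app1 --volumes-from logstore my-app-image
docker run -d --name app2 --volumes-from logstore my-other-app-image
# Any processing container (e.g. Logstash) can read the same volume
docker run -d --name logprocessor --volumes-from logstore my-logstash-image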
also rotate them as needed.
Docker will have log rotation built in in a future version, but you can already use logrotate for Docker containers: http://blog.sandro-keil.de/2015/03/11/logrotate-for-docker-container/
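A hedged sketch of that approach: install a logrotate rule for the JSON log files Docker keeps under /var/lib/docker/containers. The schedule and rotation counts are arbitrary examples, not recommendations.
# Install a logrotate rule for the json-file logs Docker writes on disk
sudo tee /etc/logrotate.d/docker-containers > /dev/null <<'EOF'
/var/lib/docker/containers/*/*.log {
  daily
  rotate 7
  compress
  delaycompress
  missingok
  copytruncate
}
EOF
copytruncate is used because the Docker daemon keeps the log file handle open while the container runs.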