Where are kubelet logs in AKS stored? - azure

I would like to view kubelet logs going back in time in Azure AKS. All I could find in the Azure docs was how to SSH into the nodes and list the logs (https://learn.microsoft.com/en-us/azure/aks/kubelet-logs), but I feel like this has to be aggregated in Log Analytics somewhere, right?
However, I wasn't able to find anything in Log Analytics for Kubernetes. Am I missing something?
We have the omsagent DaemonSet installed and a Microsoft.OperationalInsights/workspaces resource enabled.
Thanks :)

I tried to reproduce this issue in my environment and got below results:
I set the subscription, then created a resource group:
az account set --subscription "subscription_name"
az group create --name resourcegroup_name --location westus
Then I created the AKS cluster with the Container insights monitoring add-on enabled. The following example creates a cluster named AKSRG:
az aks create -g resourcegroup_name -n AKSRG --enable-managed-identity --node-count 1 --enable-addons monitoring --enable-msi-auth-for-monitoring --generate-ssh-keys
I configured kubectl to connect to the cluster with the az aks get-credentials command.
Then I created an interactive shell connection to a node using kubectl debug:
kubectl debug node/node_name -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0
From the resulting # prompt, switch into the host filesystem and read the kubelet journal:
chroot /host
journalctl -u kubelet -o cat
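Since the question is about going back in time, journalctl's time filters are useful from that same node shell; a sketch (the timestamps are placeholders, and how far back you can go depends on the node's journald retention):

```shell
# Kubelet entries within a specific time window; adjust the timestamps as needed
journalctl -u kubelet --since "2023-01-01 00:00:00" --until "2023-01-02 12:00:00" -o cat --no-pager

# Or everything from the last two hours
journalctl -u kubelet --since "2 hours ago" -o cat --no-pager
```

Note this still only reaches as far back as the node's local journal; it is not a substitute for centralized aggregation.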
To check a pod's container logs, use:
kubectl logs pod_name
Reference:
View kubelet logs in Azure Kubernetes Service (AKS) - Azure Kubernetes Service | Microsoft Docs

Related

Kubernetes many restarts but pod keeps running

I'm seeing a lot of restarts on all the pods of every service that I have deployed on Kubernetes.
But when I see the logs in real time:
kubectl -n my-namespace logs -c my-pod -f my-pod-some-hash --tail=50
I see nothing: no restarts, no sign of failure. Readiness probes keep working. So what do all those restarts mean? Where or how can I get more info about them?
Edit:
By viewing the details of the pod that shows 158 restarts in the picture above, I can see this, but I don't know what it means or whether it's related to the restarts:
Reproduction with a sample pod and CLI commands:
If a pod restarts, check the logs of the previous run using --previous.
Step 1:
Connect to the cluster:
az aks get-credentials --resource-group <resourcegroupname> --name <Clustername>
Step 2:
List the pods and their restart counts:
kubectl get pods
Step 3:
View the logs of a restarted pod's previous container:
kubectl logs <PodName> --previous
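Beyond the previous logs, the pod's last terminated state usually explains the restarts (OOMKilled, a non-zero exit code, or liveness-probe failures showing up in events); a sketch, assuming a pod named my-pod:

```shell
# Events and last container state in human-readable form
kubectl describe pod my-pod

# Just the last termination reason and exit code via JSONPath
kubectl get pod my-pod -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
kubectl get pod my-pod -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'
```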

Issue while enabling prometheus monitoring in azure k8s

Getting an error while configuring Prometheus in Azure Kubernetes.
I tried to reproduce the same issue in my environment and got the below results.
I have a cluster, I configured Prometheus on it, and the deployment succeeded.
To verify whether the agent is deployed, use the below commands:
kubectl get ds <daemonset_name> --namespace=kube-system
kubectl get rs --namespace=kube-system
This error occurs because you are using a service principal instead of a managed identity.
To enable the managed identity, follow the commands below.
For an AKS cluster with a service principal, first disable monitoring and then upgrade to managed identity; the Azure public cloud supports this migration.
To get the Log Analytics workspace ID:
az aks show -g <rg_name> -n <cluster_name> | grep -i "logAnalyticsWorkspaceResourceID"
To disable monitoring, use the below command:
az aks disable-addons -a monitoring -g <rg_name> -n <cluster_name>
(The workspace resource ID can also be found in the portal under Azure Monitor Logs.)
Next, upgrade the cluster to a system-assigned managed identity:
az aks update -g <rg_name> -n <cluster_name> --enable-managed-identity
Then re-enable the monitoring add-on with managed identity authentication:
az aks enable-addons -a monitoring --enable-msi-auth-for-monitoring -g <rg_name> -n <cluster_name> --workspace-resource-id <workspace_resource_id>
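To confirm the migration took effect, the cluster's identity type and add-on state can be checked afterwards; a sketch with placeholder names:

```shell
# Expected to print SystemAssigned after the update
az aks show -g <rg_name> -n <cluster_name> --query "identity.type" -o tsv

# Expected to print true once the monitoring add-on is re-enabled
az aks show -g <rg_name> -n <cluster_name> --query "addonProfiles.omsagent.enabled" -o tsv
```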
For more information, refer to the Microsoft documentation.

kubectl get nodes not returning any result

Issue: kubectl get nodes, returning an empty result
Cloud provider: Azure
K8s cluster built from scratch with VMSS instances/VMs
azureuser@khway-vms000000:~$ kubectl get no
No resources found in default namespace.
I am a bit stuck and do not know what else I could check to get to the bottom of this issue.
Thanks in advance!
It seems like you logged on to one of the nodes of the managed VMSS.
Instead, run the following (e.g. from your dev machine):
az aks get-credentials --name MyManagedCluster --resource-group MyResourceGroup
https://learn.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az_aks_get_credentials
Then you can run kubectl against the cluster.
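After merging the credentials, you can verify that kubectl now targets the managed cluster rather than anything node-local; a sketch:

```shell
# Show which context/cluster kubectl is currently using
kubectl config current-context

# The cluster's nodes should now be listed
kubectl get nodes -o wide
```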

How to configure kubectl from another pc?Namespaces are empty

I have successfully setup the kubectl cli on my laptop to connect to my azure cluster.
If I make, for example:
kubectl config get-contexts
I get my namespaces and I can see my resources by navigating to the current namespace.
Now I need to replicate this setup on another laptop of mine and I made the following:
az login <--login to azure
az aks install-cli <--install of kubectl
az aks get-credentials --resource-group myResourceGroup --name myCluster <--linking kubectl to my cluster
Problem is that, if I run get-contexts again, I only get the default namespace. And, of course, that namespace is empty because I put my deployment in another one.
What am I missing?
So I'm not sure what the actual question is. If your resources are in a different namespace, you can query those namespaces like you normally would:
kubectl get pods -n othernamespace
kubectl edit deployment xxx -n othernamespace
You can set the default namespace for a context like so:
kubectl config set-context xxx --namespace=othernamespace
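On current kubectl versions, `--current` avoids having to name the context explicitly, and `kubectl config view --minify` confirms the change; a sketch:

```shell
# Set the default namespace for the context currently in use
kubectl config set-context --current --namespace=othernamespace

# Print the active context's namespace to verify
kubectl config view --minify --output 'jsonpath={..namespace}'
```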

Get kubernetes credentials from Azure

I have created an Azure Container Service with:
orchestratorType : Kubernetes
Group-Name : Mygrp
DNS : MyDNs
Now I have to install kubectl on my machine and deploy services and pods too.
For this I logged into my account from the Azure CLI.
I need the Azure Kubernetes credentials for kubectl, and the command for that is:
az acs kubernetes get-credentials --resource-group=<cluster-resource-group> --name=<cluster-name>
From the above info, I know the cluster resource group is Mygrp (or am I wrong?), but what will be the cluster name?
Or is there something I have to configure for this?
We can find the --name (the container service resource name) via the Azure portal:
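If you prefer the CLI, the ACS command group could also list the service names in a group; a sketch, assuming the resource group Mygrp and an older Azure CLI that still ships `az acs` (the command group has since been removed):

```shell
# Each "Name" in the output is a valid value for --name
az acs list --resource-group Mygrp --output table
```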
