How to configure kubectl from another PC? Namespaces are empty - Azure

I have successfully setup the kubectl cli on my laptop to connect to my azure cluster.
If I run, for example:
kubectl config get-contexts
I get my namespaces and I can see my resources by navigating to the current namespace.
Now I need to replicate this setup on another laptop of mine, so I did the following:
az login   <-- log in to Azure
az aks install-cli   <-- install kubectl
az aks get-credentials --resource-group myResourceGroup --name myCluster   <-- point kubectl at my cluster
The problem is that if I run get-contexts again, I only get the default namespace. And of course that namespace is empty, as I put my deployment in another one.
What am I missing?

I'm not sure what the actual question is. If your resources are in a different namespace, you can query those namespaces as you normally would:
kubectl get pods -n othernamespace
kubectl edit deployment xxx -n othernamespace
You can set the default namespace for a context like so:
kubectl config set-context xxx --namespace=othernamespace
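For example, to change the default namespace of the context currently in use and then verify it (a minimal sketch; "othernamespace" is a placeholder):
kubectl config set-context --current --namespace=othernamespace
kubectl config view --minify --output 'jsonpath={..namespace}'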

Related

Where are kubelet logs in AKS stored?

I would like to view kubelet logs going back in time in Azure AKS. All I could find in the Azure docs was how to SSH into the nodes and list the logs (https://learn.microsoft.com/en-us/azure/aks/kubelet-logs), but I feel like this has to be aggregated in Log Analytics somewhere, right?
However, I wasn't able to find anything in Log Analytics for Kubernetes. Am I missing something?
We have the omsagent DaemonSet installed and Microsoft.OperationalInsights/workspaces is enabled.
Thanks :)
I tried to reproduce this issue in my environment and got the results below:
I created a resource group after setting the subscription:
az account set --subscription "subscription_name"
az group create --location westus --name resourcegroup_name
Then I created the AKS cluster with the Container insights monitoring add-on enabled.
The following example creates a cluster named myAKSCluster:
az aks create -g myResourceGroup -n myAKSCluster --enable-managed-identity --node-count 1 --enable-addons monitoring --enable-msi-auth-for-monitoring --generate-ssh-keys
Next, I configured kubectl to connect to the Kubernetes cluster with the get-credentials command.
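For example (the resource group and cluster names here match the create command above):
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster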
I then created an interactive shell connection to a node using kubectl debug:
kubectl debug node/<node_name> -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0
In the resulting shell (at the # prompt), I switched to the host filesystem and queried the kubelet logs:
chroot /host
journalctl -u kubelet -o cat
To find the node and pod names, list the nodes and pods first. To view the logs of a specific pod, use the below command:
kubectl logs pod_name
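Since the original question was about going back in time: from the same node shell, journalctl's standard time filters can scope the kubelet logs to a window (the timestamps here are illustrative):
journalctl -u kubelet --since "2 hours ago" --until "1 hour ago" -o cat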
Reference:
View kubelet logs in Azure Kubernetes Service (AKS) - Microsoft Docs: https://learn.microsoft.com/en-us/azure/aks/kubelet-logs

Load Balancer Service type in Azure VM

I have created a Kubernetes cluster (1 master, 2 worker VMs) using kubeadm on Azure. The NodePort service type works as expected, but the LoadBalancer service type does not.
I created a public IP address in Azure and attached it to the service. I can see the IP address attached to the service, but the IP is not reachable from outside.
I also created a load balancer in Azure and attached its public IP address to the service. That option didn't work either.
I'm just curious how to configure the LoadBalancer service type on Azure VMs.
I have tried with AKS and it worked without any issues.
• I would suggest following the steps below to create an AKS cluster and attach a load balancer with a public front-end IP to it:
a) First, execute the below command in Azure CLI (e.g., in the Azure Cloud Shell). It creates an AKS cluster with two Linux nodes and a 'Standard' load balancer in the given resource group, with the 'VM set type' set to 'VirtualMachineScaleSets' and an appropriate Kubernetes version specified:
az aks create \
--resource-group <resource group name> \
--name <AKS cluster name> \
--vm-set-type <VMSS or Availability set> \
--node-count <node count> \
--generate-ssh-keys \
--kubernetes-version <version number> \
--load-balancer-sku <basic or standard SKU>
Sample command: -
az aks create \
--resource-group AKSrg \
--name AKStestcluster \
--vm-set-type VirtualMachineScaleSets \
--node-count 2 \
--generate-ssh-keys \
--kubernetes-version 1.16.8 \
--load-balancer-sku standard
Use the below command to check the Kubernetes versions available for the specified region, and use an appropriate version in the above command:
az aks get-versions --location eastus --output table
Then, use the below command to get the credentials of the created AKS cluster:
az aks get-credentials --resource-group <resource group name> --name <AKS cluster name>
b) Then execute the below command to get information about the created nodes:
kubectl get nodes
Once that information is fetched, load the appropriate 'YAML' files into the AKS cluster and apply them to run your application on the nodes. Then check the service state as below:
kubectl get service <application service name> --watch
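For completeness, a minimal Service manifest of type LoadBalancer might look like the following sketch (the app label and ports are illustrative and must match your deployment):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: myapp-lb
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
EOF
Once Azure provisions the load balancer, the EXTERNAL-IP column in the watch output above changes from <pending> to the assigned public IP, which is the address noted in step c).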
c) Then press 'Ctrl+C' after noting the public IP address of the load balancer, and execute the below command to set the managed outbound public IP for the AKS cluster and the configured load balancer:
az aks update \
--resource-group myResourceGroup \
--name myAKSCluster \
--load-balancer-managed-outbound-ip-count 1
This ensures that outbound traffic from the services running in the cluster leaves through a single managed public IP. In this way, you will be able to create an AKS cluster with a load balancer that has a public IP address.
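To verify the outbound IP configuration afterwards, you can query the cluster's load balancer profile (a hedged example; the JMESPath below assumes the current az CLI output shape):
az aks show --resource-group myResourceGroup --name myAKSCluster --query networkProfile.loadBalancerProfile.effectiveOutboundIPs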

kubectl get nodes not returning any result

Issue: kubectl get nodes returns an empty result
Cloud provider: Azure
K8s cluster built from scratch with VMSS instances/VMs
azureuser@khway-vms000000:~$ kubectl get no
No resources found in default namespace.
I am a bit stuck and do not know what else I could check to get to the bottom of this issue.
Thanks in advance!
It seems like you logged on to one of the nodes of the managed VMSS.
Instead, run this (e.g. from your dev machine):
az aks get-credentials --name MyManagedCluster --resource-group MyResourceGroup
https://learn.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az_aks_get_credentials
Then you can run kubectl commands against the cluster.
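A quick way to confirm kubectl is now pointed at the right cluster (standard kubectl commands; the context name will vary):
kubectl config current-context
kubectl get nodes -o wide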

AKS - Error from server (Forbidden): User cannot create resource "namespaces" in API group "" at the cluster scope

I am trying to use K8s through Azure AKS.
But when doing a simple command like: kubectl create namespace airflow
I get the following error message:
Error from server (Forbidden): namespaces is forbidden: User
"xxx" cannot create resource "namespaces" in API
group "" at the cluster scope
I have already run az aks get-credentials to connect to the cluster, and then I try to create the namespace, but without success.
In my case, this works when I use this command:
az aks get-credentials --resource-group <RESOURCE GROUP NAME> --name <AKS Cluster Name> --admin
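Note that --admin fetches the cluster-admin credentials and writes them to a separate kubeconfig context (typically suffixed with -admin); you can see which contexts exist and which one is active with:
kubectl config get-contexts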
You don't have sufficient privileges to create a namespace in the k8s cluster, even though you have access to the cluster.
Use the below command to check whether you have permission to create namespaces:
# kubectl auth can-i create namespace
yes
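If that check returns "no" and you administer the cluster, one option is to grant the user the built-in cluster-admin ClusterRole through an RBAC binding (a sketch; the binding name is illustrative and the user must match the name shown in the error message):
kubectl create clusterrolebinding xxx-cluster-admin --clusterrole=cluster-admin --user=xxx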
Make sure your ~/.kube/config has been configured with the correct user name and credentials. Then run the following command to set the context:
kubectl config set-context yourclustername --user=xx

Kubernetes dashboard starting with Forbidden Errors

How do you apply already created clusterrolebindings to a cluster in Azure Kubernetes?
I have a new cluster and I'm trying to open and view it in the browser, but I am getting forbidden errors.
I tried to run this script, but the terminal says I've already created it. Now I don't know how to apply it to this cluster. Is there a way to do this in the Azure GUI? Any help or suggestions would be great. Thanks!!
az aks get-credentials --resource-group myAKScluster --name myAKScluster --admin
kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard
az aks browse --resource-group myAKScluster --name myAKScluster
That's because RBAC is enabled on your AKS cluster, and access to the dashboard is disabled by default. You can follow the troubleshooting described here and the solution here.
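Regarding the "already created it" message: clusterrolebindings are cluster-scoped objects, so if one with that name already exists you can inspect it, or delete and recreate it (standard kubectl; the binding name matches the script above):
kubectl get clusterrolebinding kubernetes-dashboard -o yaml
kubectl delete clusterrolebinding kubernetes-dashboard
kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard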
