Disconnect from Azure ACS on Local Machine - azure

I pulled my Azure ACS credentials using the command below, and I can communicate with the Kubernetes cluster on Azure from my local machine:
az acs kubernetes get-credentials --resource-group=<cluster-resource-group> --name=<cluster-name>
Now I want to disconnect so that my kubectl can connect to another cluster, local or otherwise (I am trying to connect to a local one).
But every time I run a kubectl command, it communicates with Azure ACS.

For your scenario, you can use kubectl config use-context CONTEXT_NAME to switch the default cluster; this way you can point kubectl at another Kubernetes cluster.
List the available contexts with this command:
root@shui:~# kubectl config get-contexts
CURRENT   NAME               CLUSTER            AUTHINFO                 NAMESPACE
          jasontest321mgmt   jasontest321mgmt   jasontest321mgmt-admin
*         jasonk8s321mgmt    jasonk8s321mgmt    jasonk8s321mgmt-admin
Then specify the context name with kubectl config use-context CONTEXT_NAME:
root@shui:~# kubectl config use-context -h
Sets the current-context in a kubeconfig file
Examples:
# Use the context for the minikube cluster
kubectl config use-context minikube
Usage:
kubectl config use-context CONTEXT_NAME [options]
For example:
root@shui:~# kubectl config use-context jasontest321mgmt
Switched to context "jasontest321mgmt".
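If the goal is to remove the ACS entry from your kubeconfig entirely rather than just switch away from it, kubectl config can also delete the context, cluster, and user entries. A minimal sketch, assuming the names from the listing above (switch to another context first):
# remove the context entry
kubectl config delete-context jasonk8s321mgmt
# remove the cluster entry
kubectl config delete-cluster jasonk8s321mgmt
# remove the user/credential entry
kubectl config unset users.jasonk8s321mgmt-admin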


Where are kubelet logs in AKS stored?

I would like to view kubelet logs going back in time in Azure AKS. All I could find in the Azure docs was how to SSH into the nodes and list the logs (https://learn.microsoft.com/en-us/azure/aks/kubelet-logs), but I feel like this has to be aggregated in Log Analytics somewhere, right?
However, I wasn't able to find anything in Log Analytics for Kubernetes. Am I missing something?
We have the omsagent DaemonSet installed and Microsoft.OperationalInsights/workspaces is enabled.
Thanks :)
I tried to reproduce this issue in my environment and got the results below.
First I set the subscription and created a resource group:
az account set --subscription "subscription_name"
az group create --location westus --name myResourceGroup
Then I created the AKS cluster with the monitoring add-on enabled, which turns on Container insights:
az aks create -g myResourceGroup -n cluster_name --enable-managed-identity --node-count 1 --enable-addons monitoring --enable-msi-auth-for-monitoring --generate-ssh-keys
Then I configured kubectl to connect to the Kubernetes cluster with the get-credentials command.
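For reference, with the names used above that command would look something like this (the resource group and cluster name are the placeholders from the previous steps):
az aks get-credentials -g myResourceGroup -n cluster_name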
Next I created an interactive shell connection to a node using kubectl debug:
kubectl debug node/node_name -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0
At the resulting # prompt I ran:
journalctl -u kubelet -o cat
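Note that, depending on the debug image, you may first need to chroot into the host's filesystem so journalctl can see the node's journal; the linked AKS doc includes this step:
chroot /host
journalctl -u kubelet -o cat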
To get logs from the pods themselves, check the nodes and pods and then use:
kubectl logs pod_name
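A short sketch of that check, assuming the pod of interest lives in kube-system (the pod name is a placeholder):
# list nodes and all pods to find the one you want
kubectl get nodes
kubectl get pods -A
# fetch the logs of a specific pod in a specific namespace
kubectl logs pod_name -n kube-system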
Reference:
View kubelet logs in Azure Kubernetes Service (AKS) - Azure Kubernetes Service | Microsoft Docs

How to use Cloud Shell to SSH into an AKS cluster and test the connection from inside AKS

Our company blocks the SSH port. How can we use Cloud Shell to SSH into an AKS cluster, so we can curl an external URL from there to test the connection? Thanks.
This wouldn't really make a lot of sense; you'd need to open up your SSH ports to the Azure region your Cloud Shell is in (determined by its storage, I suppose).
A better way would be to just do:
kubectl exec -it -n pod_namespace podname -- /bin/bash (or /bin/sh)
This opens a bash session in a pod on your AKS cluster, from which you can test your curl requests.
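For the curl test itself, you can run it in one shot, assuming the pod image ships curl (the pod name, namespace, and URL here are placeholders):
kubectl exec -it -n pod_namespace podname -- curl -v https://www.example.com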
For your requirements, you can use a pod in the AKS cluster as a jump box and then SSH to the AKS cluster nodes from inside the pod.
Steps here:
Get the nodes' IPs:
kubectl get nodes -o wide
Create a pod in the AKS cluster and open a bash session in it:
kubectl run -it --rm aks-ssh --image=debian
Install the SSH client inside the pod:
apt-get update && apt-get install openssh-client -y
Copy the SSH key you used when you created the AKS cluster to the pod:
kubectl cp ~/.ssh/id_rsa $(kubectl get pod -l run=aks-ssh -o jsonpath='{.items[0].metadata.name}'):/id_rsa
Or use the password; if you have forgotten it, you can find the AKS nodes and reset the password.
Choose one node and SSH to it:
ssh -i id_rsa azureuser@node_IP
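If ssh rejects the key because its permissions are too open (a common default after copying it in), tighten them inside the pod first:
chmod 600 id_rsa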
For more details, see Create the SSH connection to the AKS cluster nodes.

How to configure kubectl from another PC? Namespaces are empty

I have successfully set up the kubectl CLI on my laptop to connect to my Azure cluster.
If I run, for example:
kubectl config get-contexts
I get my namespaces and I can see my resources by navigating to the current namespace.
Now I need to replicate this setup on another laptop of mine, so I did the following:
az login <--login to azure
az aks install-cli <--install of kubectl
az aks get-credentials --resource-group myResourceGroup --name myCluster <--linking kubectl to my cluster
The problem is that if I run get-contexts again I only get the default namespace. And, of course, that namespace is empty, as I put my deployment in another one.
What am I missing?
So I'm not sure what the actual question is. If your resources are in a different namespace, you can query that namespace like you normally would:
kubectl get pods -n othernamespace
kubectl edit deployment xxx -n othernamespace
You can also set the default namespace for the context like so:
kubectl config set-context xxx --namespace othernamespace
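With a reasonably recent kubectl, you can also change the namespace of whatever context is currently active, which saves typing the context name:
kubectl config set-context --current --namespace=othernamespace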

AKS using Kubernetes: not able to connect to cluster nodes after logging in to the cluster through azure-cli on Ubuntu

I am having issues getting information about the nodes created using AKS (Azure Kubernetes Service) after creating the cluster and getting the credentials.
I am using the azure-cli on an Ubuntu Linux machine.
I followed this URL for creating the cluster: https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough
I get the following error when running kubectl get nodes after connecting to the cluster with:
az aks get-credentials --resource-group <resource_group_name> --name <cluster_name>
Error:
kubectl get nodes
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding (get nodes)
I get the same error when I use:
kubectl get pods -n kube-system -o=wide
When I connect back as another user with the following commands, i.e.,
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
I am able to retrieve the nodes, i.e.:
kubectl get nodes
NAME          STATUS    ROLES     AGE       VERSION
<host-name>   Ready     master    20m       v1.10.0
~$ kubectl get pods -n kube-system -o=wide
NAME                                     READY     STATUS    RESTARTS   AGE
etcd-actaz-prod-nb1                      1/1       Running   0
kube-apiserver-actaz-prod-nb1            1/1       Running   0
kube-controller-manager-actaz-prod-nb1   1/1       Running   0
kube-dns-86f4d74b45-4qshc                3/3       Running   0
kube-flannel-ds-bld76                    1/1       Running   0
kube-proxy-5s65r                         1/1       Running   0
kube-scheduler-actaz-prod-nb1            1/1       Running   0
But this actually overwrites the newly merged cluster information in $HOME/.kube/config.
Am I missing something when connecting to the AKS cluster with the get-credentials command that leads to the error
*Error from server (InternalError): an error on the server ("") has prevented the request from succeeding (get nodes)*?
After you run
az aks get-credentials -n cluster-name -g resource-group
it should have merged the credentials into your local configuration:
/home/user-name/.kube/config
Check your config with
kubectl config view
and verify that it is pointing to the right cluster.
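To see at a glance which context kubectl is currently using, there is also:
kubectl config current-context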
Assuming you chose the default configuration while deploying AKS, you need to create an SSH key pair to log in to an AKS node.
Push the public key you created to the AKS node using "az vm user update" (check the command's help for the switches you need to pass; it is quite simple).
To create an SSH connection to an AKS node, you run a helper pod in your AKS cluster. This helper pod provides SSH access into the cluster, and from there SSH access to the nodes.
To create and use this helper pod, complete the following steps:
- Run a Debian container image (or any other, such as CentOS 7) and attach a terminal session to it. This container can be used to create an SSH session with any node in the AKS cluster:
kubectl run -it --rm aks-ssh --image=debian
The base Debian image doesn't include SSH components, so install them:
apt-get update && apt-get install openssh-client -y
Copy the private key (the one you created at the beginning) to the pod using kubectl cp; the kubectl toolkit must be present on the machine where you created the SSH key pair:
kubectl cp <path-to-private-key> <pod-name>:/<path>
Now you will see the private key file at that location in the container; change the private key's permissions to 600 and you will be able to SSH to your AKS node.
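A minimal sketch of those last two steps, assuming the key was copied to /id_rsa in the pod and you know a node's internal IP (azureuser is the default AKS admin username; adjust if yours differs):
# inside the helper pod
chmod 600 /id_rsa
ssh -i /id_rsa azureuser@<node-internal-ip>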
Hope this helps.

Configure Kubernetes for an Azure cluster

I followed the guide to getting Kubernetes running in Azure here:
http://kubernetes.io/docs/getting-started-guides/coreos/azure/
In order to create pods, etc., the guide has you ssh into the master node kube-00 in the cloud service and run kubectl commands there:
ssh -F ./output/kube_randomid_ssh_conf kube-00
Once in, you can run the following:
kubectl get nodes
kubectl create -f ~/guestbook-example/
Is it possible to run these kubectl commands without logging in to the master node? E.g., how can I set up kubectl to connect to the cluster hosted in Azure from my development machine instead of ssh'ing into the node this way?
I tried creating a context, user and cluster in the config but the values I tried using did not work.
Edit
For some more background, the tutorial creates the Azure cluster using a script built on the Azure CLI. It ends up looking like this:
Resource Group: kube-randomid
- Cloud Service: kube-randomid
- VM: etcd-00
- VM: etcd-01
- VM: etcd-02
- VM: kube-00
- VM: kube-01
- VM: kube-02
It creates a Virtual Network that all of these VMs live in. As far as I can tell, all of the machines in the cloud service share a single virtual IP.
The kubectl command-line tool is just a wrapper that executes remote HTTPS REST API calls against the Kubernetes cluster. If you want to do this from your own machine, you need to open the correct port (443) on your master node and pass some parameters to the kubectl tool, as specified in this tutorial:
https://coreos.com/kubernetes/docs/latest/configure-kubectl.html
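A sketch of the manual kubeconfig setup that kind of tutorial walks through, assuming you have copied the cluster's CA certificate and an admin key/certificate pair from the master (the names azure-kube and azure-kube-admin and all paths are made-up placeholders):
# register the cluster endpoint and its CA certificate
kubectl config set-cluster azure-kube --server=https://<master-ip>:443 --certificate-authority=<path-to-ca.pem>
# register the admin client credentials
kubectl config set-credentials azure-kube-admin --client-certificate=<path-to-admin.pem> --client-key=<path-to-admin-key.pem>
# tie cluster and user together in a context, then activate it
kubectl config set-context azure-kube --cluster=azure-kube --user=azure-kube-admin
kubectl config use-context azure-kube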
