Renewing ACS Kubernetes cluster - Azure

I have an ACS Kubernetes cluster running on Azure VMSS. Recently I renewed my ACS service principal by adding the new key to /etc/kubernetes/azure.json on the master and worker nodes and restarting them. The issue is that new nodes created as part of scaling are not able to get the new service principal key.

Updating azure.json is not enough.
In order to update your cluster with new credentials, you should use the az aks update-credentials command:
az aks update-credentials \
--resource-group myResourceGroup \
--name myAKSCluster \
--reset-service-principal \
--service-principal $SP_ID \
--client-secret $SP_SECRET
After that, the cluster autoscaler will use the updated principal for new instances.
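The $SP_ID and $SP_SECRET variables above can be populated first; a minimal sketch following the documented reset flow (resource names are placeholders):
# Look up the cluster's current service principal
SP_ID=$(az aks show --resource-group myResourceGroup --name myAKSCluster \
    --query servicePrincipalProfile.clientId -o tsv)
# Reset its credential and capture the new secret (newer CLI versions use --id instead of --name)
SP_SECRET=$(az ad sp credential reset --name "$SP_ID" --query password -o tsv)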
Update:
For an ACS cluster, you have to manually update the service principal on each worker node.
Alternatively, you can use a Custom Script Extension, which you can integrate with an Azure Resource Manager template or run via the Azure Virtual Machines REST API.
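For example, a Custom Script Extension can be pushed to the scale set from the CLI; a rough sketch, where the script URL and resource names are placeholders for your own:
az vmss extension set \
    --resource-group myResourceGroup \
    --vmss-name myScaleSet \
    --name CustomScript \
    --publisher Microsoft.Azure.Extensions \
    --settings '{"fileUris":["https://example.com/update-azure-json.sh"],"commandToExecute":"./update-azure-json.sh"}'
# Roll the extension out to existing instances (new instances pick it up automatically)
az vmss update-instances --resource-group myResourceGroup --name myScaleSet --instance-ids "*"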

Related

Creating an AKS cluster with Kubernetes RBAC and AD Integration using a Service Principal. How can it also assign itself cluster admin?

I have a service principal which is an Owner on the subscription that I am using to create an Azure Kubernetes Service cluster as part of a script. I want my cluster to use:
Kubernetes RBAC --> enabled
AKS-managed AAD --> enabled
Local accounts --> disabled
I would like the same service principal that creates the cluster to be able to create k8s roles and role bindings; however, in order to do this the service principal seems to need a cluster-admin role binding.
When creating the cluster there is the option of adding an array of "admin group object ids", which seems to create cluster-admin role bindings for AD groups. However, the SPN cannot be part of a group.
Is there any way around this?
I tried to reproduce this in my environment and got the results below.
To assign the Azure Kubernetes Service RBAC Cluster Admin role to the service principal, you can use the following CLI command:
az role assignment create --assignee <appId> --scope <resourceScope> --role "Azure Kubernetes Service RBAC Cluster Admin"
When I run this command, the role assignment is added successfully.
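For reference, <resourceScope> is the resource ID of the cluster; a hypothetical example with placeholder IDs:
az role assignment create --assignee <appId> \
    --role "Azure Kubernetes Service RBAC Cluster Admin" \
    --scope /subscriptions/<subscriptionId>/resourceGroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/myAKSCluster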
Alternatively, in Azure AD, create a group and add the service principal as a member.
Then add the group to the cluster configuration under the admin group object IDs.
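The group and its membership can also be created from the CLI; a sketch (the display name is a placeholder, and note the member must be the service principal's object ID, not its appId):
# Create the AAD group (older CLI versions return objectId instead of id)
GROUP_ID=$(az ad group create --display-name aks-admins --mail-nickname aks-admins --query id -o tsv)
# Resolve the service principal's object ID and add it as a member
SP_OBJECT_ID=$(az ad sp show --id <appId> --query id -o tsv)
az ad group member add --group $GROUP_ID --member-id $SP_OBJECT_ID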
You can use the following CLI command to create the AKS cluster using a service principal:
az aks create \
--resource-group myResourceGroup \
--name myAKSCluster \
--service-principal <appId> \
--client-secret <password>
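To match the configuration described in the question (AKS-managed AAD enabled, local accounts disabled), flags along these lines can be added, assuming a CLI version that supports them; a sketch:
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --enable-aad \
    --aad-admin-group-object-ids <groupObjectId> \
    --disable-local-accounts \
    --service-principal <appId> \
    --client-secret <password>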
Reference:
Use a service principal with Azure Kubernetes Services (AKS) - Azure Kubernetes Service | Microsoft Learn

az aks with --admin switch does not require a password?

If I connect to my AKS cluster with:
az aks get-credentials --resource-group <rgname> --name <clustername> --admin
it does not require any credentials. Is this expected? Or is it using my "Az login" credentials and passing that through? My cluster is enabled for AD access but I was reading that the --admin flag can be used to force it to use the k8s admin. Should this be blocked for security reasons?
Sorry, quite new to AKS and Kubernetes.
Yes. The command below will not require any additional credentials to connect to AKS; az login is enough for anyone who has access to the subscription in which the AKS cluster was created.
az aks get-credentials --resource-group <rgname> --name <clustername> --admin
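To see what the flag actually added, you can inspect the merged kubeconfig; the admin context typically carries an "admin" suffix (exact naming may vary by version):
# Show the contexts merged into ~/.kube/config and the one currently in use
kubectl config get-contexts
kubectl config current-context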
--admin flag can be used to force it to use the k8s admin. Should this be blocked for security reasons?
Yes, you are correct. This should be blocked for security purposes. Unfortunately, switching --admin access on or off using a simple switch with az aks commands is still in a preview state, so it is not recommended for production use as of now.
For more information on how to disable the local user account (--admin) in Azure Kubernetes Service, you can refer to this document.
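On CLI versions that support it, turning the local admin account off on an existing cluster is a single flag; a sketch with placeholder names:
az aks update --resource-group <rgname> --name <clustername> --disable-local-accounts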
There is also a workaround given in this GitHub discussion that you can go through.

How to Simulate Eviction of nodes in Azure Kubernetes

I have spot instance nodes in an Azure Kubernetes cluster. I want to simulate the eviction of a node so that I can debug my code, but I am not able to. All I could find in the Azure docs is that you can simulate eviction for a single spot instance, using the following:
az vm simulate-eviction --resource-group test-eastus --name test-vm-26
However, I need to simulate the eviction of a spot node pool or a spot node in an AKS cluster.
For simulating evictions, there is no AKS REST API or Azure CLI command, because eviction of the underlying infrastructure is not handled by the AKS RP.
Only during creation of the AKS cluster can the AKS RP set the eviction policy on the underlying infrastructure, by instructing the Azure Compute RP to do so.
Instead, to simulate the eviction of node infrastructure, you can use the az vmss simulate-eviction command or the corresponding REST API.
az vmss simulate-eviction --instance-id
                          --name
                          --resource-group
                          [--subscription]
Reference Documents:
https://learn.microsoft.com/en-us/cli/azure/vmss?view=azure-cli-latest#az_vmss_simulate_eviction
https://learn.microsoft.com/en-us/rest/api/compute/virtual-machine-scale-set-vms/simulate-eviction
Use the following commands to get the name of the VMSS behind a node pool:
1.
az aks nodepool list -g $ClusterRG --cluster-name $ClusterName -o table
Get the desired node pool name from the output.
2.
CLUSTER_RESOURCE_GROUP=$(az aks show --resource-group YOUR_Resource_Group --name YOUR_AKS_Cluster --query nodeResourceGroup -o tsv)
az vmss list -g $CLUSTER_RESOURCE_GROUP --query "[?tags.poolName == '<NODE_POOL_NAME>'].{VMSS_Name:name}" -o tsv
References:
https://louisshih.gitbooks.io/kubernetes/content/chapter1.html
https://ystatit.medium.com/azure-ssh-into-aks-nodes-471c07ad91ef
https://learn.microsoft.com/en-us/cli/azure/vmss?view=azure-cli-latest#az_vmss_list_instances
(You may create a VMSS if you don't have one configured. Refer to: create a VMSS.)
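Putting these steps together, a minimal end-to-end sketch (the pool name spotpool and the other names are placeholders):
CLUSTER_RG=myResourceGroup
CLUSTER_NAME=myAKSCluster
NODE_POOL_NAME=spotpool
# Resource group that holds the cluster's infrastructure
NODE_RG=$(az aks show --resource-group $CLUSTER_RG --name $CLUSTER_NAME --query nodeResourceGroup -o tsv)
# VMSS backing the spot node pool
VMSS_NAME=$(az vmss list -g $NODE_RG --query "[?tags.poolName == '$NODE_POOL_NAME'].name" -o tsv)
# Pick the first instance and simulate its eviction
INSTANCE_ID=$(az vmss list-instances -g $NODE_RG -n $VMSS_NAME --query "[0].instanceId" -o tsv)
az vmss simulate-eviction --resource-group $NODE_RG --name $VMSS_NAME --instance-id $INSTANCE_ID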

Azure AKS failed to create

I'm trying to create an AKS cluster with the command:
az aks create --node-vm-size Standard_A2 --resource-group dev --name cluster --node-count 1 --generate-ssh-keys --debug
It successfully creates the AD App for the cluster.
However, it shows the error:
Operation failed with status: 'Bad Request'. Details: Service principal clientID: not found in Active Directory tenant.
The clientId is the ID of the app it has created in AD.
I don't even have an idea where it takes the tenant GUID from.
So, does somebody know how I can solve the issue?
Info about my subscription:
One account, one directory (Default), two subscriptions (an expired trial and a BizSpark one).
In my experience, I had to specify the clientId/clientSecret in the az aks command to be able to create an AKS cluster. I don't think it's a permissions issue (because I definitely have permissions to create a new service principal on my subscriptions), but rather a bug.
az aks create --resource-group aks --name aks --location westeurope --service-principal guid --client-secret 'secret'
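A sketch of that workaround: create the service principal yourself first, then pass its credentials in explicitly (the name is a placeholder, and the password is only shown once at creation time):
# Create a service principal and note the appId and password from the output
az ad sp create-for-rbac --name myAKSClusterSP
# Then create the cluster with those values
az aks create --resource-group dev --name cluster \
    --node-vm-size Standard_A2 --node-count 1 --generate-ssh-keys \
    --service-principal <appId> --client-secret <password>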

Where to find Kubernetes API credentials with AKS?

I'm trying to follow this guide to set up a K8s cluster with external-dns' Azure DNS provider.
The guide states that:
When your Kubernetes cluster is created by ACS, a file named /etc/kubernetes/azure.json is created to store the Azure credentials for API access. Kubernetes uses this file for the Azure cloud provider.
When I create a cluster using aks (e.g. az aks create --resource-group myResourceGroup --name myK8sCluster --node-count 1 --generate-ssh-keys) this file doesn't exist.
Where do the API credentials get stored when using AKS?
Essentially I'm trying to work out where to point this command:
kubectl create secret generic azure-config-file --from-file=/etc/kubernetes/azure.json
From what I can see, when using AKS the /etc/kubernetes/azure.json file doesn't get created. As an alternative, I followed the instructions for use with non-Azure-hosted sites and created a service principal (https://github.com/kubernetes-incubator/external-dns/blob/master/docs/tutorials/azure.md#optional-create-service-principal).
Creating the service principal produces some JSON that contains most of the detail. This can be used to manually create the azure.json file, and the secret can be created from it.
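A sketch of that manual step, assuming the field set described in the external-dns Azure tutorial (all values are placeholders taken from the service principal output):
cat > azure.json <<'EOF'
{
  "tenantId": "<tenant-guid>",
  "subscriptionId": "<subscription-guid>",
  "resourceGroup": "<dns-zone-resource-group>",
  "aadClientId": "<service-principal-appId>",
  "aadClientSecret": "<service-principal-password>"
}
EOF
kubectl create secret generic azure-config-file --from-file=azure.json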
Use this command to get credentials:
az aks get-credentials --resource-group myResourceGroup --name myK8sCluster
Source:
https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough
Did you try this command?
cat ~/.kube/config
It provided all I needed for my CI to connect to the Kubernetes cluster and use the API.
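For a CI pipeline, a non-interactive sketch (after az login with a service principal; the names are placeholders):
# Write the kubeconfig to an explicit file and use it for API access
az aks get-credentials --resource-group myResourceGroup --name myK8sCluster --file ./kubeconfig
KUBECONFIG=./kubeconfig kubectl get nodes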
