I'm trying to follow this guide to set up a K8s cluster with external-dns' Azure DNS provider.
The guide states that:
When your Kubernetes cluster is created by ACS, a file named /etc/kubernetes/azure.json is created to store the Azure credentials for API access. Kubernetes uses this file for the Azure cloud provider.
When I create a cluster using AKS (e.g. az aks create --resource-group myResourceGroup --name myK8sCluster --node-count 1 --generate-ssh-keys), this file doesn't exist.
Where do the API credentials get stored when using AKS?
Essentially I'm trying to work out where to point this command:
kubectl create secret generic azure-config-file --from-file=/etc/kubernetes/azure.json
From what I can see, when using AKS the /etc/kubernetes/azure.json file doesn't get created. As an alternative, I followed the instructions for use with non-Azure-hosted sites and created a service principal (https://github.com/kubernetes-incubator/external-dns/blob/master/docs/tutorials/azure.md#optional-create-service-principal)
Creating the service principal produces some JSON that contains most of the details. This can be used to manually create the azure.json file, and the secret can be created from it.
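For illustration, a minimal sketch of that manual step, assuming the field names from the external-dns Azure tutorial; all values are placeholders to be filled from the service principal output:

# Write azure.json by hand from the service principal JSON
# (every value below is a placeholder)
cat > azure.json <<'EOF'
{
  "tenantId": "<tenant-id>",
  "subscriptionId": "<subscription-id>",
  "resourceGroup": "<dns-zone-resource-group>",
  "aadClientId": "<service-principal-appId>",
  "aadClientSecret": "<service-principal-password>"
}
EOF

# Create the secret from the hand-written file
kubectl create secret generic azure-config-file --from-file=azure.json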
Use this command to get credentials:
az aks get-credentials --resource-group myResourceGroup --name myK8sCluster
Source:
https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough
Did you try this command?
cat ~/.kube/config
It provided all I needed for my CI to connect to the Kubernetes cluster and use the API.
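A minimal sketch of how a CI job might consume that file, assuming the CI system can store a base64-encoded secret (file and variable names are placeholders):

# Locally: encode the kubeconfig so it can be stored as a CI secret
base64 ~/.kube/config > kubeconfig.b64

# In the CI job: restore it and point kubectl at it
base64 -d kubeconfig.b64 > kubeconfig   # some platforms spell the flag --decode
export KUBECONFIG=$PWD/kubeconfig
kubectl get nodes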
If I connect to my AKS cluster with
az aks get-credentials --resource-group <rgname> --name <clustername> --admin
it does not require any credentials. Is this expected? Or is it using my "az login" credentials and passing those through? My cluster is enabled for AD access, but I was reading that the --admin flag can be used to force it to use the k8s admin. Should this be blocked for security reasons?
Sorry, quite new to AKS and Kubernetes.
Yes, the command below does not require any additional credentials to connect to AKS. az login is enough, as long as the logged-in identity has access to the subscription in which the AKS cluster was created.
az aks get-credentials --resource-group <rgname> --name <clustername> --admin
--admin flag can be used to force it to use the k8s admin. Should this be blocked for security reasons?
Yes, you are correct. This should be blocked for security purposes. Unfortunately, switching --admin access on or off with a simple az aks command is still in preview, so it is not recommended for production use as of now.
For more information on how to disable the local user account (--admin) in Azure Kubernetes Service, you can refer to this document.
There is also a workaround given in this GitHub discussion that you can go through.
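For reference, a sketch of the disable-local-accounts flow from the linked document; the flag was in preview at the time, and resource names are placeholders:

# The preview flag requires the aks-preview CLI extension
az extension add --name aks-preview

# Disable local accounts so --admin credentials can no longer be issued
az aks update --resource-group <rgname> --name <clustername> --disable-local-accounts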
A few weeks ago, I was able to use the Azure CLI to create my Container Registry (ACR) and Kubernetes (AKS) cluster. I could push images to my ACR and have AKS pull images successfully - everything worked great. Every now and then, I would have to refresh my login with az acr login --name <acrName>, but not a big deal.
Today, I found that when I go to deploy an updated image to my AKS cluster, I got a status of ImagePullBackOff:
Failed to pull image "MY_ACR.azurecr.io/MY_IMAGE:v1": rpc error: code = Unknown desc = Error response from daemon: Get https://MY_ACR.azurecr.io/v2/MY_IMAGE/manifests/v1: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.
I couldn't remember what I needed to do to make this work, so I went through my original steps and created an entirely new resource group, ACR, AKS cluster, and service principal connecting them. I pushed images to my ACR and was able to apply my Kubernetes manifest, and everything worked again.
A couple of hours later, when I applied an updated manifest, I again got the same error message. As part of my setup, I created a service principal:
az ad sp create-for-rbac --skip-assignment
az role assignment create --assignee <principal's appId> --scope <my ACR's id> --role Reader
I also used --role acrpull. It seems like the authentication has timed out, and the documentation for Authenticate with an Azure container registry says that individual AD identities will time out after 3 hours, but even after running az acr login --name <acrName>, I'm not able to fix the issue.
What are the required steps to get my AKS cluster to be able to authenticate again to my ACR?
I'll note that I also attached the ACR according to the documentation at Authenticate with Azure Container Registry from Azure Kubernetes Service by running:
az aks update -n cluster_name -g resource_group --attach-acr acr_name
I also tried using the ACR id instead of the name. After a minute or so, the command completed, and even a half hour+ later, I get the same permissions issue.
The easiest way to integrate AKS with ACR is to leverage the --attach-acr option during cluster creation. This will have AKS manage the service principal for you and handle the token refreshes.
https://learn.microsoft.com/en-us/azure/aks/cluster-container-registry-integration#create-a-new-aks-cluster-with-acr-integration
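A minimal sketch of both variants, with placeholder resource names:

# Attach the registry at cluster creation time (AKS then manages the pull credentials)
az aks create --resource-group <rg> --name <cluster> --attach-acr <acr-name> --generate-ssh-keys

# Or attach it to an existing cluster
az aks update --resource-group <rg> --name <cluster> --attach-acr <acr-name>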
According to the documentation, Azure Kubernetes Service Cluster User Role allows access to only the Microsoft.ContainerService/managedClusters/listClusterUserCredential/action API call.
My user is part of an AD group that has Azure Kubernetes Service Cluster User Role permissions on the AKS cluster, and all the cluster roles and cluster role bindings have been applied via kubectl.
I can double-check and verify that dashboard access and permissions work with these steps:
1. az login
2. az aks get-credentials --resource-group rg --name aks
3. kubectl proxy
4. Open web connection
5. Get prompt on terminal to login via device code flow
6. Return to web connection on dashboard
7. I can correctly verify that my permissions apply, i.e. deleting a job does not work, and this falls in line with my kubectl cluster role bindings to the Azure AD group (see the sketch after these steps).
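For context, a minimal sketch of the kind of binding referred to in step 7, assuming a hypothetical view-only binding; the binding name and group object ID are placeholders:

# Bind the built-in "view" cluster role to an Azure AD group
kubectl create clusterrolebinding aad-group-view --clusterrole=view --group=<aad-group-object-id>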
However when I try to use the az aks browse command to open the browser automatically like this, i.e. without kubectl proxy:
1. az login
2. az aks get-credentials --resource-group rg --name aks
3. az aks browse --resource-group rg --name aks
I keep getting the following error:
The client 'xxx' with object id 'yyyy' does not have authorization to perform action
'Microsoft.ContainerService/managedClusters/read' over scope
'/subscriptions/qqq/resourceGroups/rg/providers/Microsoft.ContainerService/managedClusters/aks'
or the scope is invalid. If access was recently granted, please refresh your credentials.
A dirty solution was to apply the Reader role on the AKS cluster for that AD group; then this issue goes away. But why does az aks browse require the Microsoft.ContainerService/managedClusters/read permission, and why is that not included in Azure Kubernetes Service Cluster User Role?
What is happening here?
Currently, the command
az aks browse --resource-group rg --name aks isn't working with the more recent versions of AKS; you can find the full details here.
https://github.com/MicrosoftDocs/azure-docs/issues/23789
Also, your current problem might be that your user 'xxx' doesn't have the right IAM access level at the subscription/resource group level.
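A sketch of the Reader workaround mentioned in the question, with placeholder IDs:

# Grant the AD group Reader on the cluster resource so managedClusters/read succeeds
az role assignment create --assignee <aad-group-object-id> --role Reader --scope /subscriptions/<sub-id>/resourceGroups/rg/providers/Microsoft.ContainerService/managedClusters/aks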
I am trying to experiment with a preview feature available in Azure AKS. As per the available documentation, we need to meet the following requirements:
Kubernetes version 1.12.4 or later
Azure CLI version 2.0.55 or later.
Add the aks-preview extension: az extension add --name aks-preview
Register the scale set provider: az feature register --name VMSSPreview --namespace Microsoft.ContainerService
Ensure that it is registered (see the check below)
Created the AKS cluster with Terraform
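A sketch of that registration check, using the feature name from above:

# Wait until the state shows "Registered"
az feature show --namespace Microsoft.ContainerService --name VMSSPreview --query properties.state

# Then refresh the resource provider registration
az provider register --namespace Microsoft.ContainerService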
When I try to apply the following command:
az aks update --resource-group rg-euwest-d04-dvag-001 --name k8s-euwest-d04-dvag-dfs-dfsapp-001 --enable-cluster-autoscaler --min-count 3 --max-count 5
I get this error:
Operation failed with status: 'Bad Request'. Details: AgentPool
'' has set auto scaling as enabled but is not on Virtual
Machine Scale Sets, this is not allowed
As per my understanding, it is not supported at this time through Terraform or from the Azure Portal, but only possible from the Azure CLI.
Your cluster needs to be created via the Azure CLI to enable autoscaling. So if you have created one via the Azure Portal, you need to delete it and create a new one through the Azure CLI. Ref: https://github.com/MicrosoftDocs/azure-docs/issues/29199
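A sketch of such a CLI creation; flag names have changed across CLI versions, so treat these as indicative rather than definitive:

# Create the cluster on Virtual Machine Scale Sets with the autoscaler enabled
az aks create --resource-group <rg> --name <cluster> --node-count 3 --vm-set-type VirtualMachineScaleSets --enable-cluster-autoscaler --min-count 3 --max-count 5 --generate-ssh-keys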
I've set up Kubernetes in Azure using Azure ACS and the Azure CLI.
az account list
az account set --subscription foobar
az group create --name foobar --location westus
az acs create --orchestrator-type=kubernetes --resource-group foobar --master-count 1 --name=foobar --dns-prefix=foobar
I want to be able to setup a site to site vpn, so that kubernetes can reach internal services in my datacenter.
Unfortunately, Azure ACS sets up Kubernetes on a 10.0.0.0 network, which overlaps with other resources in Azure and in my datacenter.
I can't find any way to change which subnet Kubernetes runs on in ACS. Is there a way to change the preferred network?
There does not appear to be a way to choose the network from the az acs create command:
az acs create --name
--resource-group
[--admin-password]
[--admin-username]
[--agent-count]
[--agent-vm-size]
[--client-secret]
[--dns-prefix]
[--generate-ssh-keys]
[--location]
[--master-count]
[--no-wait]
[--orchestrator-type {Custom, DCOS, Kubernetes, Swarm}]
[--service-principal]
[--ssh-key-value]
[--tags]
[--validate]
[--windows]
No, there's no way of doing that. There might be a way to create a new Kubernetes cluster in an existing VNet, but I'm not aware of one.
Another option would be to delete all the VMs and recreate them in the new VNet, though there's no guarantee that would work.
With ACS, through its CLI, you can specify a subnet ID so the ACS cluster is created in a particular VNet. However, this is only available in certain regions.