Workload identity with application on Azure Kubernetes Service (AKS)

I am trying to deploy and manage a Kubernetes cluster with the OpenID Connect (OIDC) issuer enabled. I followed this Microsoft document to deploy the application on AKS. For that I created the resource group, installed the aks-preview extension, and registered the preview feature:
az group create --name myResourceGroup --location eastus
az extension add --name aks-preview
az extension update --name aks-preview
# Register the EnableWorkloadIdentityPreview feature
az feature register --namespace "Microsoft.ContainerService" --name "EnableWorkloadIdentityPreview"
az feature show --namespace "Microsoft.ContainerService" --name "EnableWorkloadIdentityPreview"
az provider register --namespace Microsoft.ContainerService
After that, when I try to create the Kubernetes cluster with --enable-oidc-issuer, it takes more than 10 minutes and then fails with the error below:
(OIDCIssuerUnsupportedk8sVersion) OIDC issuer feature requires at least Kubernetes version 1.20.0. Code: OIDCIssuerUnsupportedK8sVersion Message: OIDC issuer feature requires at least Kubernetes version 1.20.0
I have upgraded to the latest version but am still getting the same error.
How can I export the OIDC issuer URL to set environment variables on the cluster?
Thanks in advance :)

I tried to reproduce the same issue in my environment and got the below results.
My current version is 1.23.12.
I created the resource group, installed the aks-preview extension, and registered the EnableWorkloadIdentityPreview feature using the below command:
az feature register --namespace "Microsoft.ContainerService" --name "EnableWorkloadIdentityPreview"
To verify the registration status, I used the below command:
az feature show --namespace "Microsoft.ContainerService" --name "EnableWorkloadIdentityPreview"
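If you only want the registration state (it needs to show "Registered" before the provider is re-registered), a small variant of the same command works as a sketch; the --query path below is the standard properties.state field:
# Prints just the registration state, e.g. "Registered"
az feature show --namespace "Microsoft.ContainerService" --name "EnableWorkloadIdentityPreview" --query properties.state -o tsv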
I created the AKS cluster with the --enable-oidc-issuer parameter to use the OIDC issuer, using the below command:
az aks create -g rg -n cluster --node-count 1 --enable-oidc-issuer --enable-workload-identity --generate-ssh-keys
When I checked in my environment, I got the same error.
To resolve this issue, I upgraded my AKS version using the below commands.
My version at that point was 1.23.12.
I upgraded to the newest version using this SO answer.
My current version is 1.24.3, and when I run the below OIDC command it works:
az aks create -g <rg-name> -n cluster --node-count 1 --enable-oidc-issuer --enable-workload-identity --generate-ssh-keys
NOTE: This error can occur even if the version is above 1.20.0; upgrade the cluster to the latest available version, not just the current default version, and then it will work.
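To answer the export part of the question: once the cluster is created with --enable-oidc-issuer, the issuer URL can be read from the cluster and set as an environment variable, for example (a sketch; the cluster and resource group names are placeholders):
# Read the OIDC issuer URL and export it for later use (e.g. federated identity credential setup)
export AKS_OIDC_ISSUER="$(az aks show -n cluster -g <rg-name> --query "oidcIssuerProfile.issuerUrl" -o tsv)"
echo $AKS_OIDC_ISSUER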

Related

Issue while enabling prometheus monitoring in azure k8s

Getting error while configuring prometheus in azure kubernetes
I tried to reproduce the same issue in my environment and got the below results
I have a cluster, and when I configure Prometheus in Azure Kubernetes the deployment succeeds.
To verify whether the agent is deployed, use the below commands:
kubectl get ds <dep_name> --namespace=kube-system
kubectl get rs --namespace=kube-system
You are getting this error because you are using a service principal instead of a managed identity.
To enable the managed identity, please follow the below commands.
For an AKS cluster with a service principal, first disable monitoring and then upgrade to managed identity; Azure public cloud supports this migration.
To get the Log Analytics workspace ID:
az aks show -g <rg_name> -n <cluster_name> | grep -i "logAnalyticsWorkspaceResourceID"
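Alternatively, a small sketch that captures the workspace resource ID into a shell variable with --query instead of grep (this assumes the monitoring addon appears under the omsagent addon profile):
# Store the workspace resource ID for reuse when re-enabling monitoring
WORKSPACE_ID=$(az aks show -g <rg_name> -n <cluster_name> --query "addonProfiles.omsagent.config.logAnalyticsWorkspaceResourceID" -o tsv)
echo $WORKSPACE_ID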
To disable monitoring, use the below command:
az aks disable-addons -a monitoring -g <rg_name> -n <cluster_name>
Alternatively, you can get it from the Azure Monitor logs in the portal.
I upgraded the cluster to a system-assigned managed identity; use the below command to upgrade:
az aks update -g <rg_name> -n <cluster_name> --enable-managed-identity
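To confirm the identity switch took effect, a quick check (a sketch based on the identity block in the az aks show output):
# Should return SystemAssigned once the update completes
az aks show -g <rg_name> -n <cluster_name> --query "identity.type" -o tsv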
I enabled the monitoring addon with managed identity authentication:
az aks enable-addons -a monitoring --enable-msi-auth-for-monitoring -g <rg_name> -n <cluster_name> --workspace-resource-id <workspace_resource_id>
For more information, use this document for reference.

Cannot get Azure container network profile Id

We are deploying a container to Azure using the Azure CLI and the create command as specified in the sample documentation below:
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-vnet
In this documentation it is clearly specified, from the sample command below, that when the container and the VNet/subnet get created, Azure creates a network profile ID for you (which is needed for YAML deployment):
az container create --name appcontainer --resource-group myResourceGroup --image mcr.microsoft.com/azuredocs/aci-helloworld --vnet aci-vnet --vnet-address-prefix 10.0.0.0/16 --subnet aci-subnet --subnet-address-prefix 10.0.0.0/24
After the container gets created successfully, you are supposed to get a network profile name or ID, which you can obtain using "az network profile list".
Which, in fact, does not return anything.
UPDATE:
I updated my Azure CLI to 2.30 in PowerShell, but the result is the same: the command returns nothing even though the container and VNet get successfully created.
Output result
Thanks for your help
regards
I have tested in my environment.
I deployed a container to a new virtual network using the below command:
az container create --name appcontainer --resource-group myResourceGroup --image mcr.microsoft.com/azuredocs/aci-helloworld --vnet aci-vnet --vnet-address-prefix 10.0.0.0/16 --subnet aci-subnet --subnet-address-prefix 10.0.0.0/24
The container got successfully created.
To get the Network Profile ID, I used the below command:
az network profile list --resource-group myResourceGroup --query [0].id --output tsv
In this way, we can fetch the Network Profile ID
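Since the YAML deployment needs this value, a small sketch that stores it in a shell variable (reusing the resource group name from the example above):
# Capture the network profile ID for the YAML deployment
NETWORK_PROFILE_ID=$(az network profile list --resource-group myResourceGroup --query "[0].id" --output tsv)
echo $NETWORK_PROFILE_ID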
If the network profile is not getting created using the CLI, try using an ARM template.
The same happened to me. I solved it using Azure CLI version 2.27.2; any newer version leaves me with the same problem.
There seems to be a problem with the latest versions of the Azure CLI.

Custom Script Extension failing on Virtual Scale Set upgrade

I have created an Azure virtual machine scale set with Linux VMs. I have to run Azure CLI commands via release pipelines on these VMs, so I am trying to install the Azure CLI using the Custom Script Extension so that every time a new VM comes up, the CLI is installed on it.
I have created a .sh file with the below command and uploaded it to blob storage:
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
I ran the below command from the CLI to deploy the custom extension:
az vmss extension set --vmss-name <VMSS Name> --resource-group <Resource Group> --name CustomScript --version 2.0 --publisher Microsoft.Azure.Extensions --settings '{"FileUris": ["https://<Blobscriptpath>/preinstallscript.sh"],"commandToExecute": "bash /preinstallscript.sh"}'
This command installs the extension and I can see it in Azure, but when I upgrade the VM instance I get the below error:
"Failed to upgrade virtual machine instance ''. Error: Multiple VM extensions failed to be provisioned on the VM. Please see the VM extension instance view for other failures. The first extension failed due to the error: VM has reported a failure when processing extension 'CustomScript'. Error message: "Enable failed: failed to get configuration: json validation error: invalid public settings JSON: FileUris: Additional property FileUris is not allowed""
Below are the images from Azure Portal showing the Extension:
Please suggest if I am missing something.
According to the error message, "Additional property FileUris is not allowed", you should use fileUris instead of FileUris. Read the property values.
Also, if the blob is not public, you need to provide the storage account name and key to access it. For example, you can put a .sh file on blob storage:
#!/bin/bash
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
and deploy the custom extension with the CLI:
az vmss extension set --vmss-name <VMSS Name> --resource-group <Resource Group> --name CustomScript --version 2.0 --publisher Microsoft.Azure.Extensions --settings '{"fileUris": ["https://xxx.blob.core.windows.net/shscripts/preinstallscript.sh"],"commandToExecute": "sh preinstallscript.sh"}'
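If the blob is private, the storage account name and key go into --protected-settings rather than --settings, for example (a sketch with placeholder values):
az vmss extension set --vmss-name <VMSS Name> --resource-group <Resource Group> --name CustomScript --version 2.0 --publisher Microsoft.Azure.Extensions --settings '{"fileUris": ["https://xxx.blob.core.windows.net/shscripts/preinstallscript.sh"],"commandToExecute": "sh preinstallscript.sh"}' --protected-settings '{"storageAccountName": "<storage_account_name>", "storageAccountKey": "<storage_account_key>"}'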
Edit
After installing the extension on the VMSS, you need to upgrade the VMSS instances for the script to take effect.
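To upgrade the instances from the CLI, something like this should work (a sketch; "*" targets all instances):
# Apply the updated extension model to all existing instances
az vmss update-instances --resource-group <Resource Group> --name <VMSS Name> --instance-ids "*"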

Kubernetes dashboard starting with Forbidden Errors

How do you apply already-created clusterrolebindings to a cluster in Azure Kubernetes?
I have a new cluster and I'm trying to open and view it in the browser, but I am getting forbidden errors.
I tried to run this script, but the terminal says I've already created it. Now I don't know how to apply it to this cluster. Is there a way to do this in the Azure GUI? Any help or suggestions would be great. Thanks!!
az aks get-credentials --resource-group myAKScluster --name myAKScluster --admin
kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard
az aks browse --resource-group myAKScluster --name myAKScluster
It is because you have enabled RBAC on your AKS cluster, and access to the dashboard is disabled by default. You can follow the troubleshooting described here and the solution here.
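Since the terminal says the clusterrolebinding already exists, one option is to delete it and recreate it so it points at the dashboard service account (a sketch; be careful when deleting cluster-wide RBAC objects):
# Remove the existing binding, then recreate it for the dashboard service account
kubectl delete clusterrolebinding kubernetes-dashboard
kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard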

Azure container service not creating the agent nodes

I have been working on deploying a Windows container from Azure Container Registry to Azure Container Service with the Kubernetes orchestrator, and it was working fine previously.
Now I'm trying to create an ACS Kubernetes cluster for Windows, but the create command is only creating a master node, and while deploying I'm getting the following error: No nodes are available that match all of the following predicates:: MatchNodeSelector (1)
I have followed this link https://learn.microsoft.com/en-us/azure/container-service/kubernetes/container-service-kubernetes-windows-walkthrough to create the Windows-based Kubernetes cluster.
This is the command I have used to create the cluster
az acs create --orchestrator-type=kubernetes \
--resource-group myResourceGroup \
--name=myK8sCluster \
--agent-count=2 \
--generate-ssh-keys \
--windows --admin-username azureuser \
--admin-password myPassword12
As per the above documentation, the above command should create a cluster named myK8sCluster with one Linux master node and two Windows agent nodes.
To verify the creation of the cluster, I used the below command:
kubectl get nodes
NAME STATUS AGE VERSION
k8s-master-98dc3136-0 Ready 5m v1.7.7
According to the above output, it created only the Linux master node, not the two Windows agent nodes.
But in my case I require the Windows agent nodes to deploy a Windows-based container in the cluster.
So I assume that due to this I'm getting the following error while deploying: No nodes are available that match all of the following predicates:: MatchNodeSelector (1)
As the documentation points out, ACS with a target of Kubernetes is deprecated. You want to use AKS (Azure Kubernetes Service).
To go about this, start here: https://learn.microsoft.com/en-us/azure/aks/windows-container-cli
Make sure you have the latest version of the CLI installed on your machine if you choose to do it locally, or use the Azure Cloud Shell.
Follow the guide on the rest of the steps as it will walk you through the commands.
For your issue, as far as I know the possible reason is that you need to enable the WindowsPreview feature. You can check it with a CLI command like this:
az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/WindowsPreview')].{Name:name,State:properties.state}"
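If the feature is not registered yet, it can be registered first (a sketch following the usual preview-feature flow; registration can take a while to complete):
# Register the Windows preview feature, then re-register the provider
az feature register --namespace Microsoft.ContainerService --name WindowsPreview
az provider register --namespace Microsoft.ContainerService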
When that is OK, you also need to pay attention to the Kubernetes version. When I use the command that you used, the Windows nodes are created successfully, but kubectl get nodes shows only the master, even though I can see the Windows nodes in the resource group.
Then I tried the command with the additional parameter --orchestrator-version set to 1.12.7; the whole command looks like below:
az acs create --orchestrator-type=kubernetes \
--resource-group myResourceGroup \
--name=myK8sCluster \
--agent-count=2 \
--generate-ssh-keys \
--windows --admin-username azureuser \
--admin-password myPassword12 \
--orchestrator-version 1.12.7 \
--location westcentralus
Then it works well, and the command kubectl get nodes -o wide shows output like below:
But as you know, ACS will be deprecated, so I would suggest you use AKS with Windows nodes in the preview version. Or you can use aks-engine, as AKS Engine is the next version of the ACS-Engine project.
