Error while detaching AKS cluster through Azure ML SDK extension

I created an AKS cluster using the Azure Machine Learning SDK extension and attached it to the workspace I created. The cluster was created and attached without showing any error, but when I try to detach it from the workspace, the operation is not accepted.
I would like to detach the existing AKS cluster from the workspace programmatically, using the CLI, or through the Azure portal.

Detaching the AKS cluster from the workspace does not delete the underlying cluster; removing the resource itself still requires the Azure CLI with the az aks commands. To detach the compute target from the workspace, there are two approaches we can perform.
Python:
from azureml.core.compute import AksCompute
aks_target = AksCompute(workspace=ws, name="youraksname")  # ws is your Workspace object
aks_target.detach()
Azure CLI:
Before performing this step, we need the name of the AKS cluster attached to our workspace, the resource group, and the workspace name:
az ml computetarget detach -n youraksname -g yourresourcegroup -w yourworkspacename
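If the goal is to remove the cluster entirely after detaching it, a minimal sketch using the AKS CLI (assuming the cluster lives in the same resource group):
az aks delete -n youraksname -g yourresourcegroup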

Related

Changing --network-plugin in Azure Kubernetes Service for existing cluster

I'm trying to implement Azure Key Vault such that API keys, credentials, and other Kubernetes secrets are read into production and staging environments. Ultimately, I'd like to expand that to local development environments so devs don't have to mess with it at all; secrets would just be read in when they start their cluster.
Anyway, I'm following this to enable Pod Identities:
https://learn.microsoft.com/en-us/azure/aks/use-azure-ad-pod-identity
When I get to this step, I'm modifying this command:
az aks create -g myResourceGroup -n myAKSCluster --enable-managed-identity --enable-pod-identity --network-plugin azure
to the following, because I'm trying to change an existing cluster:
az aks update -g myResourceGroup -n myAKSCluster --enable-managed-identity --enable-pod-identity --network-plugin azure
This doesn't work, and I figured out I need to run each flag one at a time, so I had to run --enable-managed-identity first since --enable-pod-identity depends on it.
At any rate, when I get to the --enable-pod-identity I get the following error:
Operation failed with status: 'Bad Request'. Details: Network plugin kubenet is not supported to use with PodIdentity addon.
So I try the --network-plugin azure and get:
az: error: unrecognized arguments: --network-plugin azure
Apparently this flag is not available with update.
Poking around in the Azure portal for the AKS resource, I do see kubenet listed, but I'm not able to change it.
So, the question: Is it possible to change the Network Plugin on an existing cluster, or do I need to start a new one?
EDIT: Looks like others are having similar issues on existing clusters:
https://github.com/Azure/AKS/issues/2094
Is it possible to change the Network Plugin on the existing cluster or do I need to start a new one?
It's impossible to change the network plugin on an existing cluster, so you need to create a new cluster and set the network plugin to azure at creation time. You can see that there is no --network-plugin parameter in the CLI command az aks update, even if you install the aks-preview extension. This means changing the network plugin of an existing cluster is not supported.
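Before recreating, you can confirm which plugin the existing cluster is running with a quick query (a sketch using the names from the question):
az aks show -g myResourceGroup -n myAKSCluster --query networkProfile.networkPlugin -o tsv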

How to create a Windows node pool in AKS cluster?

I'm trying to add a node pool that can run Windows-based containers. What I see in the Azure portal is a disabled option to select Windows as the OS. The hint says: Windows node pools require a Windows authentication profile. I tried googling for a solution but found nothing.
How can I provide the Windows authentication profile to the existing AKS cluster to make AKS run Windows-based containers?
Looks like there is an open issue regarding this situation.
The problem is that when the cluster was created for the first time, you didn't provide --windows-admin-username and --windows-admin-password. Therefore, when you try to create a new Windows node pool, the VMs it creates don't have a Windows authentication profile.
If you look at the cluster resource with az aks show and don't see a Windows profile, then you will have to create a new cluster, for example using the Azure CLI:
az aks create -g MyResourceGroup -n MyManagedCluster --load-balancer-sku Standard --network-plugin azure --windows-admin-username azure --windows-admin-password 'replacePassword1234$'
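Once the cluster has a Windows profile, adding the Windows node pool itself would look something like this (a sketch; npwin is a placeholder, and Windows node pool names are limited to six characters):
az aks nodepool add -g MyResourceGroup --cluster-name MyManagedCluster --name npwin --os-type Windows --node-count 1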
If you created your cluster using terraform, you can add this section:
# Create AKS Cluster
resource "azurerm_kubernetes_cluster" "akscluster" {
  # Code goes here..
  windows_profile {
    admin_username = "azure"
    admin_password = "azure"
  }
}
Pay attention to this thread as well.

Azure k8s dashboard does not open

I have a k8s cluster on Azure and cannot access the dashboard.
To access it I was running az aks browse --resource-group <res_group> --name <cluster_name>
It stopped opening after I accidentally deleted the kube-dashboard pod.
Error:
Couldn't find the Kubernetes dashboard pod.
I tried to enable and disable the dashboard add-on on Azure, and to re-install the k8s dashboard (Azure did not allow it).
Any ideas on how to solve the issue and restart the dashboard?
I did find the following solution that worked for me:
Created another Azure k8s cluster; for each cluster, Azure makes a dashboard deployment.
Copied the YAML of each deployment, replicaSet, service and pod related to the dashboard, with commands like:
kubectl get deployment -n kube-system <kubernetes-dashboard-xxx> -o yaml
Recreated them in the old, non-working cluster (see the sketch below).
Upgraded and downgraded the cluster version to re-deploy the objects.
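A sketch of that copy step for a single object, assuming kubectl contexts exist for both clusters (the context and object names are placeholders):
kubectl --context <new-cluster> get deployment -n kube-system <kubernetes-dashboard-xxx> -o yaml > dashboard-deployment.yaml
kubectl --context <old-cluster> apply -f dashboard-deployment.yaml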
Depending on your k8s version, AKS doesn't enable the dashboard when creating a new cluster. You can find details at the link below.
https://learn.microsoft.com/en-us/azure/aks/kubernetes-dashboard
I also suggest installing the dashboard directly from the Kubernetes dashboard page; it installs the dashboard into another namespace (which is actually better), and you can create an RBAC service account to see all resources with admin privileges.
https://github.com/kubernetes/dashboard
https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md
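A sketch of that direct install, assuming a release manifest from the dashboard repository (the version tag is an assumption; check the releases page for the current one):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml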
You can also use --enable-addons, as sketched below:
https://learn.microsoft.com/en-us/azure/aks/kubernetes-dashboard
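A sketch of re-enabling the AKS dashboard add-on that way (resource names are placeholders):
az aks enable-addons --addons kube-dashboard --resource-group <res_group> --name <cluster_name>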

Error while applying Node Autoscaler for existing AKS cluster

I am trying to experiment with a preview feature available in Azure AKS. As per the available documentation, we need the following:
Kubernetes version 1.12.4 or later
Azure CLI version 2.0.55 or later
Add the aks-preview extension: az extension add --name aks-preview
Register the scale set provider: az feature register --name VMSSPreview --namespace Microsoft.ContainerService
Ensure that it is registered (see the check after this list)
I created the AKS cluster with Terraform
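One way to check the registration state and, once it shows Registered, refresh the provider (a sketch; the feature and provider names come from the step above):
az feature show --name VMSSPreview --namespace Microsoft.ContainerService --query properties.state
az provider register --namespace Microsoft.ContainerService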
When I try to apply the following command:
az aks update --resource-group rg-euwest-d04-dvag-001 --name k8s-euwest-d04-dvag-dfs-dfsapp-001 --enable-cluster-autoscaler --min-count 3 --max-count 5
I get this error:
Operation failed with status: 'Bad Request'. Details: AgentPool '' has set auto scaling as enabled but is not on Virtual Machine Scale Sets, this is not allowed
As per my understanding, this is not supported at this time through Terraform or the Azure portal, but only through the Azure CLI.
Your cluster needs to be created via the Azure CLI to enable autoscaling. So if you have created one via the Azure portal, you need to delete it and create a new one through the Azure CLI. Ref: https://github.com/MicrosoftDocs/azure-docs/issues/29199
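A sketch of recreating the cluster with autoscaling enabled from the start, reusing the names from the question (--enable-vmss is the preview-era flag from the aks-preview extension and may differ in later CLI versions):
az aks create --resource-group rg-euwest-d04-dvag-001 --name k8s-euwest-d04-dvag-dfs-dfsapp-001 --enable-vmss --enable-cluster-autoscaler --min-count 3 --max-count 5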

Unable to open the kubernetes dashboard in Azure Kubernetes Service

I created a kubernetes cluster in my Azure resource group using Azure Kubernetes Service and logged into the cluster with the resource group credentials through the Azure CLI. I could open the kubernetes dashboard successfully the first time. After that, I deleted my resource group, along with the other resource groups that are created by default with the kubernetes cluster. Then I created a resource group and kubernetes cluster one more time in my Azure account. When I try to open the kubernetes dashboard this time, I get an error that port 8001 is not open. I tried proxy port-forwarding, but I don't know how to hit the dashboard URL with a different port.
Could anybody suggest how to resolve this issue?
I think you need to delete your kubernetes config and pull a new one with az aks get-credentials (or whatever you are using), because you are probably still using the config from the previous cluster (hint: it won't work because it's a different cluster).
You can do that by deleting the file ~/.kube/config, pulling the new one, and trying kubectl get nodes. If that works, try the port-forward. It is not port related; something is wrong with your config or az cli.
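A minimal sketch of that refresh flow (resource group and cluster names are placeholders):
rm ~/.kube/config
az aks get-credentials --resource-group <res_group> --name <cluster_name>
kubectl get nodes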
OK, I recall that in the previous question you mentioned you started using RBAC; you need to add a cluster role binding for the dashboard:
kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard
https://learn.microsoft.com/en-us/azure/aks/kubernetes-dashboard#for-rbac-enabled-clusters
