I deleted a Windows node in AKS thinking it would be recreated automatically, but it doesn't come back.
Now, in portal.azure.com I see 2 for node count, but only 1 for nodes ready.
How can I recreate the node I deleted?
I tried deleting and recreating the nodes in my environment and got the results below.
I have created the resource group, container, and storage account
I have created the AKS cluster; in that cluster I have one node which is running the pods.
I can use a tool called Velero to back up the whole AKS cluster.
The backup will be stored in the Azure storage account.
I have created the credential file to configure the credentials in Velero.
cat << EOF > /tmp/credentials-velero
AZURE_STORAGE_ACCOUNT_ACCESS_KEY=${AZURE_STORAGE_ACCOUNT_ACCESS_KEY}
AZURE_CLOUD_NAME=AzurePublicCloud
EOF
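For reference, one way to populate AZURE_STORAGE_ACCOUNT_ACCESS_KEY before writing the file (the resource group and storage account variables here are placeholders for your own names):
AZURE_STORAGE_ACCOUNT_ACCESS_KEY=$(az storage account keys list \
--resource-group $RESOURCE_GROUP \
--account-name $STORAGE_ACCOUNT \
--query "[0].value" -o tsv)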
I have installed the Velero client using this link.
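In case it helps, a minimal sketch of installing the Velero client on a Linux amd64 machine (the version below is just an example; pick the release that matches your plugin version):
VELERO_VERSION=v1.9.0
wget https://github.com/vmware-tanzu/velero/releases/download/$VELERO_VERSION/velero-$VELERO_VERSION-linux-amd64.tar.gz
tar -xzf velero-$VELERO_VERSION-linux-amd64.tar.gz
sudo mv velero-$VELERO_VERSION-linux-amd64/velero /usr/local/bin/
velero version --client-only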
I have installed Velero on the AKS cluster using the below command:
velero install \
--provider azure \
--plugins velero/velero-plugin-for-microsoft-azure:v1.1.0 \
--bucket $BLOB_CONTAINER \
--secret-file /tmp/credentials-velero \
--backup-location-config resourceGroup=$RESOURCE_GROUP,storageAccount=$STORAGE_ACCOUNT,storageAccountKeyEnvVar=AZURE_STORAGE_ACCOUNT_ACCESS_KEY,subscriptionId=$SUBSCRIPTION_ID \
--use-volume-snapshots=false
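To sanity-check the install (optional; Velero installs into the velero namespace by default):
kubectl get pods -n velero
velero backup-location get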
I have deleted the node which I created, using the below command (nodes are cluster-scoped, so no namespace flag is needed):
kubectl delete node node-name
Now that I have Velero in Kubernetes, I can create and schedule backups.
To create a backup, schedule backups, and restore from a backup:
velero backup create kubernetes-cluster
velero backup create node-backup --include-resources nodepool
velero schedule create kubernetes-weekly --schedule="@weekly" --ttl 720h0m0s
velero schedule create pv-backup-weekly --schedule="@weekly" --include-resources node
velero restore create kubernetes-restore --from-backup kubernetes-cluster
velero restore create pvc-restore --from-backup pv-backup
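You can then confirm the backups completed (the backup name here matches the example above):
velero backup get
velero backup describe kubernetes-cluster --details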
After the backup completes, the backup files are stored in the container of the storage account.
Note: even if we upgrade the cluster to the latest version, it will still be backed up as long as the backup schedule is in place.
I have an AKS cluster and it's private. I want to access it from my local machine, and I added the necessary commands for kubeconfig. Now I can list pods with command invoke, but I want to access it directly with a plain kubectl get pods command (I don't want to use an alias).
az aks command invoke \
--resource-group rg-network-spokes \
--name aks_dev_cluster \
--command "kubectl get pods -A"
If your AKS cluster is private, its control plane is not exposed on the internet, and therefore you cannot use kubectl to interact with the API without being in the same VNet as your cluster.
You have a few options to do so, such as:
Create a VM in the same VNet as your cluster and install the kubectl client
Create a VPN to connect your computer to the AKS cluster's network
If you are starting with Azure, I would suggest going with the first option as setting up a VPN can be a bit more tedious.
You can then download the kubeconfig file to ~/.kube/config and you are good to go.
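For option 1, a minimal sketch of what to run on the jumpbox VM (using the resource group and cluster names from the question); az aks install-cli installs kubectl, and get-credentials writes the kubeconfig:
az aks install-cli
az aks get-credentials --resource-group rg-network-spokes --name aks_dev_cluster
kubectl get pods -A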
Or, use Kubernetes Lens to manage it from a UI.
Your AKS cluster is private, so you must be accessing it via VPN, right? You can connect to the VPN to access the cluster over the private network, and download the kubeconfig via this command.
Prerequisite: the Azure CLI must be installed.
az aks get-credentials --resource-group my-rg --name my-aks --file my-aks-kubeconfig-ss
It will generate a kubeconfig for you with the name my-aks-kubeconfig-ss. You can copy this config into the .kube/ folder, or a folder of your choice. You can also access the AKS cluster from Lens via its UI.
The second option is to use Lens.
Install Lens. After installation, press Ctrl + Shift + A and a window will open asking for a kubeconfig. Copy the content of my-aks-kubeconfig-ss and paste it there. Bingo, your cluster is added to Lens.
I am using Azure CLI version 2.34.1. I ran the following commands to create a resource group and then a virtual machine. Note that I used options to delete the relevant resources when the VM is deleted.
az group create --name myTestRG --location eastus
az vm create --resource-group myTestRG --name myTestWindows11VM --image MicrosoftWindowsDesktop:windows-11:win11-21h2-pro:22000.493.220201 --admin-username someusername --os-disk-delete-option delete --nic-delete-option delete
Later I deleted the VM using the following command.
az vm delete --name MyTestWin11VM --resource-group myTestRG -y
However, when I browse to the portal, the resource group is still showing the following resources that are relevant to the VM.
What may I be doing wrong? Is there any way to delete all resources associated with the VM when I delete the virtual machine itself?
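For reference, the leftover resources can also be listed from the CLI:
az resource list --resource-group myTestRG --output table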
UPDATE: IT'S A BUG:
The way Azure works is to group resources in Resource Groups; it's a mandatory field when creating any service. Azure does this because many resources have dependencies, such as a VM with a NIC, VNet, and NSG.
You can use this to your advantage and simply delete the Resource Group:
az group delete --name myTestRG
Azure will work out the dependency order, e.g. NSG, VNet, NIC, VM. You can read up on how it does the ordering: https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/delete-resource-group?tabs=azure-cli
What happens if I have multiple VMs in a Resource Group and I only want to delete one?
There are three new options, --os-disk-delete-option, --data-disk-delete-option, and --nic-delete-option, to support deleting a VM's dependencies along with the VM:
az vm create \
--resource-group myResourceGroup \
--name myVM \
--image UbuntuLTS \
--public-ip-sku Standard \
--nic-delete-option delete \
--os-disk-delete-option delete \
--admin-username azureuser \
--generate-ssh-keys
Otherwise, script the whole thing using Azure Resource Manager templates (ARM templates), or the newer tool that compiles to ARM templates, Bicep. It's also worth continuing with raw CLI commands and deleting dependencies in order; if you get good with the CLI, you end up with a library of commands that you can reuse alongside ARM templates.
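If you go the template route, deployment is still driven from the CLI; for example (main.bicep here is a hypothetical template file):
az deployment group create --resource-group myTestRG --template-file main.bicep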
I have a GitLab CI pipeline in which I am creating an Azure Kubernetes cluster using Terraform scripts.
After creating the Kubernetes cluster, I need the kubeconfig so that I can connect to the cluster.
I am running the following command to create the kubeconfig:
$ az aks get-credentials --resource-group test1 --name k8stest --overwrite-existing
Merged "k8stest" as current context in C:\Windows\system32\config\systemprofile\.kube\config
Cleaning up file based variables
Instead of the user's home folder, it is creating the kubeconfig in the systemprofile folder.
How can I force GitLab to create the kubeconfig in the user's home folder ONLY?
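A sketch of one way to control where the kubeconfig is written, using the --file flag of az aks get-credentials and pointing kubectl at that file explicitly (the path is just an example; $CI_PROJECT_DIR is GitLab's predefined job directory):
az aks get-credentials --resource-group test1 --name k8stest --overwrite-existing --file "$CI_PROJECT_DIR/k8stest.kubeconfig"
kubectl --kubeconfig "$CI_PROJECT_DIR/k8stest.kubeconfig" get nodes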
I have been working on deploying a Windows container from Azure Container Registry to Azure Container Service with the Kubernetes orchestrator, and it was working fine previously.
Now I'm trying to create a Windows ACS Kubernetes cluster, but the create command is only creating a master node, and while deploying I'm getting the following error: No nodes are available that match all of the following predicates:: MatchNodeSelector (1)
I have followed this link https://learn.microsoft.com/en-us/azure/container-service/kubernetes/container-service-kubernetes-windows-walkthrough to create the windows based kubernetes cluster.
This is the command I have used to create the cluster
az acs create --orchestrator-type=kubernetes \
--resource-group myResourceGroup \
--name=myK8sCluster \
--agent-count=2 \
--generate-ssh-keys \
--windows --admin-username azureuser \
--admin-password myPassword12
As per the above documentation, the above command should create a cluster named myK8sCluster with one Linux master node and two Windows agent nodes.
To verify the creation of the cluster I have used the below command
kubectl get nodes
NAME STATUS AGE VERSION
k8s-master-98dc3136-0 Ready 5m v1.7.7
According to the above output, only the Linux master node was created, not the two Windows agent nodes.
But in my case I require the Windows agent nodes to deploy a Windows-based container in the cluster.
So I assume that this is why I'm getting the following error while deploying: No nodes are available that match all of the following predicates:: MatchNodeSelector (1)
As the documentation points out, ACS with a target of Kubernetes is deprecated. You want to use AKS (Azure Kubernetes Service).
To go about this, start here: https://learn.microsoft.com/en-us/azure/aks/windows-container-cli
Make sure you have the latest version of the CLI installed on your machine if you choose to do it locally, or use the Azure Cloud Shell.
Follow the guide on the rest of the steps as it will walk you through the commands.
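Roughly, the AKS flow from that guide looks like the below (cluster name, password variable, and node counts are placeholders; check the linked walkthrough for the current flags):
az aks create \
--resource-group myResourceGroup \
--name myAKSCluster \
--node-count 1 \
--generate-ssh-keys \
--windows-admin-username azureuser \
--windows-admin-password $WINDOWS_ADMIN_PASSWORD \
--vm-set-type VirtualMachineScaleSets \
--network-plugin azure
az aks nodepool add \
--resource-group myResourceGroup \
--cluster-name myAKSCluster \
--name npwin \
--os-type Windows \
--node-count 1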
For your issue, as far as I know, the likely reason is that you need to enable the WindowsPreview feature. You can check it through the CLI with a command like this:
az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/WindowsPreview')].{Name:name,State:properties.state}"
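If it is not registered yet, it can be enabled along these lines (preview feature registration can take a while to propagate):
az feature register --namespace Microsoft.ContainerService --name WindowsPreview
az provider register --namespace Microsoft.ContainerService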
When that's OK, you also need to pay attention to the Kubernetes version. When I use the command that you used, the Windows nodes are created successfully, but only the master shows up when I execute kubectl get nodes, even though I can see the Windows nodes in the resource group.
Then I tried the command with the additional parameter --orchestrator-version set to 1.12.7; the whole command looks like below:
az acs create --orchestrator-type=kubernetes \
--resource-group myResourceGroup \
--name=myK8sCluster \
--agent-count=2 \
--generate-ssh-keys \
--windows --admin-username azureuser \
--admin-password myPassword12 \
--orchestrator-version 1.12.7 \
--location westcentralus
Then it works well, and kubectl get nodes -o wide shows both the master and the Windows agent nodes.
But as you know, ACS will be deprecated. So I would suggest you use AKS with Windows nodes (currently in preview), or use aks-engine, as AKS Engine is the successor of the ACS-Engine project.
Problem statement
I have created multiple pipelines in my Jenkins environment which can deploy Kubernetes objects to multiple clusters. If I execute a single job at a time it works well, but it can produce unstable output if multiple jobs are executed for different environments.
Basic steps for deploying to AKS cluster
login to azure
az login --service-principal -u $AZURE_CLIENT_ID -p $AZURE_CLIENT_SECRET -t $AZURE_TENANT_ID
get credentials
az aks get-credentials --resource-group "+resourceGroup+" --name "+clustername+" --overwrite-existing
kubectl apply
kubectl apply -f myk8sfiles.yml
When I execute a single pipeline job it works fine, but when I try to execute multiple pipeline jobs, I assume my az aks get-credentials and kubectl apply commands will produce unstable output.
How can I execute deployments to multiple AKS clusters in parallel?
Just save the credentials to a specific place on disk for each cluster and point kubectl at those specific credentials, as in the sketch below.
Further reading: https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
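A minimal sketch of that approach for the pipeline above (the kubeconfig directory is just an example path on the Jenkins agent; resourceGroup and clustername are the same per-cluster values used in the pipeline):
az aks get-credentials --resource-group "$resourceGroup" --name "$clustername" --overwrite-existing --file "/var/lib/jenkins/kubeconfigs/${clustername}.config"
kubectl --kubeconfig "/var/lib/jenkins/kubeconfigs/${clustername}.config" apply -f myk8sfiles.yml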