how to get full yaml file from running Azure AKS - azure

I am trying to pull the yaml from my running AKS. Which kubectl options can I run in order to pull the full yaml file from a running Azure AKS? I need it for only one AKS cluster name.
By "full yaml file" I mean the same output as when I open my AKS cluster in the Azure portal and click "Export template" on the left pane - but that is in json, and I need the same in yaml.

In the following answer I have referenced Azure CLI commands. You can find installation instructions here.
If you want the managedCluster object of an AKS cluster in yaml format please run:
az aks show -g $ResourceGroupName -n $AKSClusterName -o yaml
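If you specifically want something closer to the portal's "Export template" blade (the ARM template rather than the managedCluster object), a rough CLI equivalent is to export the template as JSON and convert it to YAML - a sketch, assuming yq v4 is installed and using a placeholder resource group name:

```shell
# Export the ARM template for the resource group as JSON, then convert to YAML.
# Assumptions: yq v4 (mikefarah) is installed; myResourceGroup is a placeholder.
az group export --resource-group myResourceGroup | yq -P
```

Since JSON is valid YAML, yq parses the exported template directly and `-P` pretty-prints it as YAML.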
If you want specific Kubernetes resource(s) in a yaml format
First run
az aks get-credentials -g $ResourceGroupName -n $AKSClusterName
to get the access credentials for the AKS cluster and merge them into the kubeconfig file.
Now you can run:
kubectl get $resource-type $resource-name -n $namespace -o yaml
Please replace $resource-type with the correct Kubernetes resource type (e.g. pod, node, deployment, service, replicaset, ingress, etc.) and $resource-name with the corresponding desired resource name. If you want to get a list of all resources of $resource-type, you can skip specifying $resource-name. If you want to list resources of $resource-type in all namespaces, replace -n $namespace with --all-namespaces.
For example, if you want to get the list of all pods in the namespace development in yaml format, you should run:
kubectl get pods -n development -o yaml
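As a further sketch (the resource, namespace, and file names here are just placeholders), you can redirect the output to a file, or dump every resource of a type across all namespaces:

```shell
# Save a single deployment's manifest to a file (placeholder names)
kubectl get deployment my-app -n development -o yaml > my-app.yaml

# List every service in every namespace as one YAML document
kubectl get services --all-namespaces -o yaml
```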
References:
https://learn.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands

Related

"az connectedk8s connect" has error "Problem loading the kubeconfig file.module 'collections' has no attribute 'Hashable'"

I am trying to connect my non-AKS k8s cluster to Azure Arc. I want to do this entirely through the CLI. Looking at the quickstart-connect-cluster guide, it skips right from resource group creation to the az connectedk8s connect step.
When attempting to connect to my cluster currently I get the following error:
$ az connectedk8s connect --name $STACK_NAME --resource-group $STACK_NAME --location eastus --tags Datacenter=miami-lab City=Miami StateOrDistrict=Florada CountryOrRegion=USA
This operation might take a while...
Problem loading the kubeconfig file.module 'collections' has no attribute 'Hashable'
I believe I may need to run some other az command to create any resources I may be missing under https://portal.azure.com/#view/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/~/allresources
Am I missing some other resources I need to create before running the above command? If so, what is the az command needed to create these missing resources?
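For reference, the quickstart's usual prerequisite sequence looks roughly like this - the provider registrations and the connectedk8s extension are easy to miss ($STACK_NAME and the location are placeholders from the question):

```shell
# Prerequisites the quickstart assumes before `az connectedk8s connect` (sketch)
az extension add --name connectedk8s
az provider register --namespace Microsoft.Kubernetes
az provider register --namespace Microsoft.KubernetesConfiguration
az provider register --namespace Microsoft.ExtendedLocation

az group create --name $STACK_NAME --location eastus
az connectedk8s connect --name $STACK_NAME --resource-group $STACK_NAME --location eastus
```

Separately, the "module 'collections' has no attribute 'Hashable'" message itself usually points at an outdated PyYAML running on Python 3.10+ rather than a missing Azure resource, so upgrading the Azure CLI and the connectedk8s extension may be needed regardless.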

az storage container list doesn't work, referencing deleted storage

I am following this tutorial, running the az cli (v2.11) locally on my MacOS:
https://learn.microsoft.com/en-us/learn/modules/provision-infrastructure-azure-pipelines/6-run-terraform-remote-storage
After following a few steps, including this one:
az storage account create --name tfsa$UNIQUE_ID --resource-group tf-storage-rg --sku Standard_LRS
and have run this command:
az storage container list --query "[].{name:name}" --output tsv
I receive the following:
HTTPSConnectionPool(host='mystorageaccount20822.blob.core.windows.net', port=443): Max retries exceeded with url: /?comp=list&maxresults=5000 (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x10d2566a0>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known'))
The above command works in cloud shell, but fails in my local shell (running v2.20, up to date).
On cloud shell I do get this warning though:
There are no credentials provided in your command and environment, we
will query for the account key inside your storage account. Please
provide --connection-string, --account-key or --sas-token as
credentials, or use --auth-mode login if you have required RBAC
roles in your command. For more information about RBAC roles in
storage, visit
https://learn.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-cli.
I had previously created a mystorageaccount20822 a couple of weeks ago but deleted it... is my AZ CLI still bound to this previous account? Is there a way to tell my az cli (on Mac) to sync up with the current resources I have running? In the Azure Portal mystorageaccount20822 does NOT exist.
Does the Azure CLI cache some values? Is there some hidden config file that has the old 'mystorageaccount20822' set, which the CLI is trying to reference each time instead of the new account named tfsa$UNIQUE_ID?
After running the command with --debug:
az storage container list --debug --account-name tfsa$UNIQUE_ID --query [].name --output tsv
I was able to see what was being set. It turns out the environment variable 'AZURE_STORAGE_CONNECTION_STRING' had been set by a tutorial a few days ago, and it was overriding the account name when the command was sent, pointing it at the old example's value. After unsetting that environment variable, the command worked.
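A minimal reproduction of the fix (the connection string value here is a made-up example, not a real one):

```shell
# Simulate the stale value left behind by the earlier tutorial (fake example value)
export AZURE_STORAGE_CONNECTION_STRING="DefaultEndpointsProtocol=https;AccountName=mystorageaccount20822;AccountKey=fakekey123;EndpointSuffix=core.windows.net"

# az storage commands pick this variable up silently, overriding the account
# given on the command line, so the CLI keeps talking to the deleted account.
unset AZURE_STORAGE_CONNECTION_STRING

# Confirm nothing storage-related is still exported
env | grep AZURE_STORAGE || echo "no AZURE_STORAGE variables set"
```

After this, `az storage container list --account-name tfsa$UNIQUE_ID --auth-mode login` talks to the account you actually named.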

How to use variables in Azure CLI step in Azure CI/CD

I'm trying to deploy my configuration settings using the Azure CLI step in my release pipeline.
There are multiple commands in it that I need to run, which look like this:
az webapp config settings set -g resourceGroupName -n WebAppName --settings somekey=somevalue
Note that there is a resource group name and web app name present in it.
How can I create a variable for those two values and use them in my CLI?
I have tried adding them to the arguments but it didn't work.
I am getting the below error:
validation error: Parameter 'resource_group_name' must conform to the following pattern: '^[-\\w\\._\\(\\)]+[^\\.]$'.
The reason is that I have 30-odd values, and I need to create those variables for web apps present in multiple resource groups, so that I can simply swap the name of the resource group and run the CLI.
You need to define variables in your pipeline and then use them in the Azure CLI step.
So if you have a variable named ResourceGroup you should have:
az webapp config settings set -g $(ResourceGroup) -n WebAppName --settings somekey=somevalue
You can also create a variable group and reuse it across your pipelines.
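For example, in a YAML pipeline the variables and the Azure CLI step could look like this (the variable values and the service connection name are placeholders):

```yaml
variables:
  ResourceGroup: 'my-resource-group'   # placeholder values - swap per environment
  WebAppName: 'my-webapp'

steps:
- task: AzureCLI@2
  inputs:
    azureSubscription: 'my-service-connection'   # placeholder
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      az webapp config settings set -g $(ResourceGroup) -n $(WebAppName) --settings somekey=somevalue
```

The $(ResourceGroup) macro syntax is expanded by the pipeline before the script runs, so the az command sees the plain value.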

The Resource 'Microsoft.Sql/servers/server/databases/ABC' under resource group 'xyz' was not found

Copying a production database locally with the AZ CLI works - it is copied successfully. But when I integrate it into Azure DevOps I get the ERROR: The Resource 'Microsoft.Sql/servers/mi-tools/databases/ABC' under resource group 'xyz' was not found.
Here is the code that I need to execute in my Pipeline.
az sql db copy --subscription $(SubscriptionName) --dest-server $(ServerName) --name $(ProductionDatabaseName) --dest-name $(CopyDatabaseName) --resource-group $(ResourceGroupName) --server $(ServerName) -f Gen5 -c 4 --compute-model Serverless
Deleting a database through the Azure DevOps pipeline takes some time. My next line copies the database, so it executes immediately, while the database is not yet completely deleted.
You have to do it in two steps in Azure DevOps.
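A sketch of the two-step approach, reusing the question's pipeline variables (the polling loop is one way to wait; it assumes az is already logged in, and $(...) is expanded by the pipeline, not the shell):

```shell
# Step 1 (its own pipeline step): delete the old copy, then poll until it is gone.
az sql db delete --subscription $(SubscriptionName) --name $(CopyDatabaseName) \
  --resource-group $(ResourceGroupName) --server $(ServerName) --yes

# az sql db show exits non-zero once the database no longer exists
while az sql db show --subscription $(SubscriptionName) --name $(CopyDatabaseName) \
        --resource-group $(ResourceGroupName) --server $(ServerName) >/dev/null 2>&1; do
  sleep 15
done

# Step 2 (next pipeline step): now the copy can run safely.
az sql db copy --subscription $(SubscriptionName) --dest-server $(ServerName) \
  --name $(ProductionDatabaseName) --dest-name $(CopyDatabaseName) \
  --resource-group $(ResourceGroupName) --server $(ServerName) \
  -f Gen5 -c 4 --compute-model Serverless
```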

Azure function app create

I'm using az functionapp create to create a function app in Azure, where apart from creating the function app it also hooks it to a Bitbucket repo. I'm using the parameter --deployment-source-url/-u but it seems it is not working this way and is giving me an error. This is done by a Jenkinsfile pipeline:
node {
  stage('Azure Login') {
    withCredentials([azureServicePrincipal('6-8afd-ae40e9cf1e74')]) {
      sh 'az login --service-principal -u $AZURE_CLIENT_ID -p $AZURE_CLIENT_SECRET -t $AZURE_TENANT_ID'
      sh 'az account set -s $AZURE_SUBSCRIPTION_ID'
    }
  }
  stage('Build Azure FunctionApp') {
    sh 'az functionapp create -g $RG_NAME -p $SP_NAME -n grey-$JOB_NAME-$BUILD_NUMBER -s $SA_NAME --deployment-source-url https://bitbucket.org/xxxx/functions/src/develop --debug'
  }
}
If I put --deployment-source-url -u https://user#bitbucket.org I get:
ERROR: az functionapp create: error: argument --deployment-source-url/-u: expected one argument
If I try without the -u, just --deployment-source-url https://#bitbucket.org, the job gets done, but the link with the Bitbucket repo is not made.
So how is it that this works? How come if I put the user it says invalid argument, and if I don't, it passes but can't find the user? Has anyone ever used this command to create a function app? Thanks!
If you want to create an Azure function via azure-cli, you can change the deployment source url after --deployment-source-url. You can refer to my command, which creates a function with a blob trigger; replace the url with yours. It works fine on my side.
Note: The Access level of the repository should be public; you can check it in the repository Settings.
az functionapp create --deployment-source-url https://bitbucket.org/xxx/azure-function --resource-group resourcegroupname --consumption-plan-location westeurope --name joyfun22 --storage-account <storage_name>
Besides, you also can use a github repository to create a function.
For example, to use the command below to create a function with a blob trigger.
az functionapp create --deployment-source-url https://github.com/Joyw1/Azure-Function-Trigger --resource-group myResourceGroup --consumption-plan-location westeurope --name <app_name> --storage-account <storage_name>
Update:
If your Access level is private, you need an access token to access your Bitbucket repository. Please follow the steps below.
1. Go to Bitbucket Labs -> Access Management -> OAuth -> Add consumer. For more details, refer to this link.
2. Enable authenticated git deployment with the Azure CLI:
#!/bin/bash
gitrepo=<Replace with your GitHub repo URL e.g. https://github.com/Azure-Samples/functions-quickstart.git>
token=<Replace with a GitHub access token>
# Enable authenticated git deployment
az functionapp deployment source update-token \
--git-token $token
For complete command, refer to this link.
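Putting the two pieces together, the token step followed by the create call might look like this (the repository url, token, and resource names are all placeholders adapted for Bitbucket):

```shell
#!/bin/bash
# Placeholders: set these to your repository URL and your OAuth/access token
gitrepo=https://bitbucket.org/xxx/azure-function
token=<your-access-token>

# Register the token once so the private repository can be pulled
az functionapp deployment source update-token --git-token $token

# Then create the function app pointing at the now-accessible repository
az functionapp create --deployment-source-url $gitrepo \
  --resource-group myResourceGroup --consumption-plan-location westeurope \
  --name <app_name> --storage-account <storage_name>
```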
