Custom Script Extension failing on virtual machine scale set upgrade - Linux

I have created an Azure virtual machine scale set with Linux VMs. I have to run Azure CLI commands via release pipelines on these VMs, so I am trying to install the Azure CLI using the Custom Script Extension so that the CLI is installed every time a new VM comes up.
I have created a .sh file with the below command and put it on blob storage:
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
I ran the below command from the CLI to deploy the custom extension:
az vmss extension set --vmss-name <VMSS Name> --resource-group <Resource Group> --name CustomScript --version 2.0 --publisher Microsoft.Azure.Extensions --settings '{"FileUris": ["https://<Blobscriptpath>/preinstallscript.sh"],"commandToExecute": "bash /preinstallscript.sh"}'
This command installs the extension and I can see it on Azure, but when I upgrade a VM instance I get the below error:
"Failed to upgrade virtual machine instance ''. Error: Multiple VM extensions failed to be provisioned on the VM. Please see the VM extension instance view for other failures. The first extension failed due to the error: VM has reported a failure when processing extension 'CustomScript'. Error message: "Enable failed: failed to get configuration: json validation error: invalid public settings JSON: FileUris: Additional property FileUris is not allowed""
(Images from the Azure Portal showing the extension were attached here.)
Please suggest if I am missing something.

According to the error message, "Additional property FileUris is not allowed", you should use fileUris instead of FileUris. Read the Property values.
Also, if the blob is not public, you need to provide the storage account name and key to access the blob (see the example after the deploy command below). For example, you can put a .sh file like this on blob storage:
#!/bin/bash
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
and deploy the custom extension with the CLI:
az vmss extension set --vmss-name <VMSS Name> --resource-group <Resource Group> --name CustomScript --version 2.0 --publisher Microsoft.Azure.Extensions --settings '{"fileUris": ["https://xxx.blob.core.windows.net/shscripts/preinstallscript.sh"],"commandToExecute": "sh preinstallscript.sh"}'
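As mentioned above, if the blob is not public, one way to handle that (a sketch; the storage account name and key are placeholders you replace with your own) is to pass them via --protected-settings so the extension can download the script:
az vmss extension set --vmss-name <VMSS Name> --resource-group <Resource Group> --name CustomScript --version 2.0 --publisher Microsoft.Azure.Extensions --settings '{"fileUris": ["https://xxx.blob.core.windows.net/shscripts/preinstallscript.sh"],"commandToExecute": "sh preinstallscript.sh"}' --protected-settings '{"storageAccountName": "<storage account name>", "storageAccountKey": "<storage account key>"}'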
Edit
After installing the extension, you can upgrade the VMSS instances for the script to take effect.
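If the scale set's upgrade policy is Manual, a sketch of that instance upgrade from the CLI (it applies the latest scale set model, including the new extension, to all instances; with an Automatic policy this step is not needed):
az vmss update-instances --resource-group <Resource Group> --name <VMSS Name> --instance-ids "*"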

Related

Workload identity with application on Kubernetes Service

I am trying to deploy and manage a Kubernetes cluster using an OpenID Connect (OIDC) issuer. I followed this Microsoft document to deploy the application on AKS; for that I created the resource group and installed the aks-preview extension:
az group create --name myResourceGroup --location eastus
az extension add --name aks-preview
az extension update --name aks-preview
# register the workload identity preview feature and check its state
az feature register --namespace "Microsoft.ContainerService" --name "EnableWorkloadIdentityPreview"
az feature show --namespace "Microsoft.ContainerService" --name "EnableWorkloadIdentityPreview"
az provider register --namespace Microsoft.ContainerService
After that, when I try to create the Kubernetes cluster with --enable-oidc-issuer, it takes more than 10 minutes and then fails with the below error:
(OIDCIssuerUnsupportedk8sVersion) OIDC issuer feature requires at least Kubernetes version 1.20.0. Code: OIDCIssuerUnsupportedK8sVersion Message: OIDC issuer feature requires at least Kubernetes version 1.20.0
I have upgraded to the latest version but am still getting the same error.
How can I export the OIDC issuer to set environment variables on the cluster?
Thanks in Advance :)
I tried to reproduce the same issue in my environment and got the below results.
My current version is 1.23.12.
I created the resource group, installed the aks-preview extension, and registered EnableWorkloadIdentityPreview using the below command:
az feature register --namespace "Microsoft.ContainerService" --name "EnableWorkloadIdentityPreview"
To verify the registration status, I used the below command:
az feature show --namespace "Microsoft.ContainerService" --name "EnableWorkloadIdentityPreview"
I created the AKS cluster with the --enable-oidc-issuer parameter to use the OIDC issuer, using the below command:
az aks create -g rg -n cluster --node-count 1 --enable-oidc-issuer --enable-workload-identity --generate-ssh-keys
When I ran this in my environment, I got the same error.
To resolve the issue, I upgraded my AKS version.
My version before the upgrade was 1.23.12.
I upgraded to the newest version using this SO answer.
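For reference, a rough sketch of what that upgrade looks like from the CLI (the version number is only an example; pick one of the versions listed by the first command):
az aks get-upgrades --resource-group rg --name cluster --output table
az aks upgrade --resource-group rg --name cluster --kubernetes-version 1.24.3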
After the upgrade my version is 1.24.3, and when I run the below OIDC command it succeeds:
az aks create -g <rg-name> -n cluster --node-count 1 --enable-oidc-issuer --enable-workload-identity --generate-ssh-keys
NOTE: This error can occur even when the version is above 1.20.0; you have to upgrade to the latest available version rather than stay on the current one, and only then will it work.
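Regarding the question about exporting the OIDC issuer to set environment variables: once the cluster is created with --enable-oidc-issuer, the issuer URL can be read from the cluster and exported, roughly like this (the variable name AKS_OIDC_ISSUER is just a convention; the cluster and resource group names match the commands above):
export AKS_OIDC_ISSUER="$(az aks show --name cluster --resource-group rg --query "oidcIssuerProfile.issuerUrl" --output tsv)"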

Azure VM run-command fails with piped commands

I am trying to install the Azure CLI on an Ubuntu VM running in Azure.
I am using the run-command CLI operation to execute the command:
az vm run-command invoke --command-id RunShellScript --name trainingVM-1 --resource-group azure-privsec --scripts "curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash"
A non-piped command works fine, but when I run the above piped command to install azure-cli, it fails with the error:
Could not retrieve credential from local cache for service principal 8bf34d22-7230-47e7-907e-e0df201xxxxx. Please run 'az login' for this service principal.
One of the reasons could be that I am using an old version of the Azure CLI (2.29.2). For now, I don't have the option to upgrade the azure-cli version.
What is the best way to install azure-cli on my VM using the run-command CLI call?
Edit:
Splitting into two commands like this works for me but I am still trying to see if we can consolidate it into a single command:
az vm run-command invoke --command-id RunShellScript --name trainingVM-1 --resource-group azure-privsec --scripts "wget -O /tmp/file.sh https://aka.ms/InstallAzureCLIDeb"
az vm run-command invoke --command-id RunShellScript --name trainingVM-1 --resource-group azure-privsec --scripts "sudo bash /tmp/file.sh"
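If the pipe character is what the older CLI is choking on, one possible single-command consolidation (a sketch I have not verified on 2.29.2) is to chain the download and the install with && instead of a pipe:
az vm run-command invoke --command-id RunShellScript --name trainingVM-1 --resource-group azure-privsec --scripts "wget -O /tmp/file.sh https://aka.ms/InstallAzureCLIDeb && sudo bash /tmp/file.sh"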
I tested this in Azure PowerShell and in an Azure DevOps pipeline, and your first Azure VM run-command ran successfully.
Using Azure PowerShell, I can run this command directly without any issue.
In the DevOps pipeline, use the Azure CLI task with a Microsoft-hosted agent.
Please ensure that the service principal behind the Azure Resource Manager connection (service connection) has sufficient permissions on your Azure VM, such as Owner or Contributor.
Then I can see the command run successfully.
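If the service principal is missing that permission, a sketch for granting it Contributor at the resource-group scope (the IDs and names are placeholders):
az role assignment create --assignee <service-principal-app-id> --role "Contributor" --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"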
There are some docs for your reference:
Manage service connections
Connect to Microsoft Azure

Azure VM command fails with no credential error

I am running a command from the Azure CLI to execute a script on a Linux VM running in Azure.
The command looks like this:
az vm run-command invoke --command-id RunShellScript --name <VMName> --resource-group <RGName> --scripts "curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash"
The script installs azure CLI on the VM.
The command results in this error:
Could not retrieve credential from local cache for service principal 0cf4e5cf-de8f-4524-8be6-5e2exxxxxx. Please run 'az login' for this service principal.
Can someone help me understand the problem here?
When I run the command without the piped sudo, the command works fine.
I tried to reproduce the same in my environment and got the same error as below:
az vm run-command invoke --command-id RunShellScript --name <VMName> --resource-group <RGName> --scripts "curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash"
This error may occur if you are not using the latest version. Please check your version as below and try to update it:
az version
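If your local CLI is 2.11.0 or later, the built-in upgrade command is one way to update it (older installations need to be reinstalled via the install script or a package manager):
az upgrade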
Also, I tried logging in with my service principal in the Azure CLI and then running the command to install azure-cli inside the VM, and it ran successfully.
az login --service-principal -u <AppID> -p <secret> --tenant <TenantID>
You can also install the latest version locally by using this MS Docs page.
Reference:
Azure CLI GitHub Action fails with Azure CLI 2.30.0: Could not retrieve credential from local cache for service principal · Issue #20154 · Azure/azure-cli · GitHub

Read entire file from Azure VM, via Azure CLI

I'm trying to retrieve a Docker log for an application from an Azure DevOps pipeline. In a pipeline step, I can run a bash script on the VM that's hosting the application using the Azure CLI like this, which reads the Docker log from that VM:
az vmss run-command invoke --command-id RunShellScript --instance-id 0 -n aks-myservicename-12345678-vmss -g my-resource-group --scripts "docker logs [container_id]" > testing.txt
The problem is that the output of az vmss run-command invoke is limited to a maximum of 4096 bytes - see https://learn.microsoft.com/en-us/azure/virtual-machines/linux/run-command#restrictions
Is there any way I can read the entire docker log?
The following reference documents an --output-blob-uri parameter that can be used with az vm run-command create. I'm wondering if that gives a way to output the Docker log to a blob that I can then access in another Azure CLI step. I haven't found any documentation, though, on how this works: https://learn.microsoft.com/en-us/cli/azure/vmss/run-command?view=azure-cli-latest
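For what it's worth, a rough sketch of that approach based on the reference above (I have not verified it; the run command name, storage account, container, and SAS token are placeholders, and the SAS URL needs write access to the blob):
az vmss run-command create --resource-group my-resource-group --vmss-name aks-myservicename-12345678-vmss --instance-id 0 --run-command-name getDockerLogs --script "docker logs <container_id>" --output-blob-uri "https://<storageaccount>.blob.core.windows.net/<container>/docker.log?<sas-token>"
The full log could then be pulled down in a later pipeline step, for example with az storage blob download.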

VM Agent is either unavailable, or not installed, which may prevent VMAccess from Running

I created a VM using a managed data disk, but there is a problem with the VM agent when connecting through SSH.
The command is:
az vm create --resource-group Test_Group --location eastus --name myVM2 --os-type linux --attach-os-disk disk1
The error is:
VM Agent is either unavailable, or not installed, which may prevent VMAccess from Running.
If I try to install a new VM agent, it throws a "MultipleVMAgents not allowed" error, and the current enablevmaccess extension shows a status of (unavailable).
