Azure VM extension update failure

I tried to add a custom script to a VM through extensions. I observed that when the VM is created, an extension of type Microsoft.Azure.Extensions.CustomScript is created with the name "cse-agent" by default. So I tried to update that extension, passing the base64-encoded script file in the script property:
az vm extension set \
--resource-group test_RG \
--vm-name aks-agentpool \
--name CustomScript \
--subscription ${SUBSCRIPTION_ID} \
--publisher Microsoft.Azure.Extensions \
--settings '{"script": "'"$value"'"}'
$value represents the script file encoded in base64.
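For reference, a sketch of how $value could be produced, assuming GNU coreutils base64 and a local script file (the file name is hypothetical):
# encode the script without line wrapping (hypothetical file name)
value=$(base64 -w0 my-script.sh)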
Doing that gives me an error:
Deployment failed. Correlation ID: xxxx-xxxx-xxx-xxxxx.
VM has reported a failure when processing extension 'cse-agent'.
Error message: "Enable failed: failed to get configuration: invalid configuration:
'commandToExecute' and 'script' were both specified, but only one is validate at a time"
From the documentation, when the script attribute is present there is no need for commandToExecute. As you can see above, I haven't specified commandToExecute; it is somehow being taken from the previous extension settings. Is there a way to update the extension without deleting it? It would also be interesting to know what impact deleting the cse-agent extension would have.
FYI: I have tried deleting the 'cse-agent' extension from the VM and adding my own extension instead. That worked.
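(For reference, the extensions currently installed on the VM, including cse-agent, can be listed with the command below; the resource names are reused from the question.)
az vm extension list --resource-group test_RG --vm-name aks-agentpool --output table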

The cse-agent VM extension is crucial: it manages all of the post-install work needed to configure the nodes as valid Kubernetes nodes. Removing this CSE will break the VMs and render your cluster inoperable.
If you are interested in applying changes to nodes in an existing cluster, you could, while not officially supported, leverage the following project:
https://github.com/juan-lee/knode
This allows you to configure the nodes using a DaemonSet, which helps when your node pools have the auto-scaling feature enabled.
For simple alterations of the node filesystem, a privileged pod with a hostPath volume will also work (see the sketch after the link):
https://dev.to/dannypsnl/privileged-pod-debug-kubernetes-node-5129
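For illustration, a minimal sketch of such a privileged debug pod; the pod name, image, and mount path are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: node-debug
spec:
  hostPID: true
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "86400"]
    securityContext:
      privileged: true          # full access to the node
    volumeMounts:
    - name: host-root
      mountPath: /host          # node filesystem appears under /host
  volumes:
  - name: host-root
    hostPath:
      path: /
Running roughly kubectl exec -it node-debug -- chroot /host sh then gives a shell on the node itself.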

Related

Adding whl files to an Azure Synapse spark pool

According to the documentation, we should be able to add custom libraries as follows:
az synapse spark pool update --name testpool \
--workspace-name testsynapseworkspace --resource-group rg \
--package-action Add --package package1.jar package2.jar
However, when I try this with my Python package .whl files, I get an error message that the package does not exist.
> $new_package_names = "PACKAGE1-1.0.1-py3-none-any.whl PACKAGE2-1.0.6.3-py3-none-any.whl PACKAGE3-1.0.0-py3-none-any.whl PACKAGE4-1.0.1-py3-none-any.whl"
> az synapse spark pool update --name $pool_name --workspace-name $workspace_name --resource-group $resource_group --package-action Add --package $new_package_names
I receive the following error:
(LibraryDoesNotExistInWorkspace) The LibraryArtifact PACKAGE1-1.0.1-py3-none-any.whl PACKAGE2-1.0.6.3-py3-none-any.whl PACKAGE3-1.0.0-py3-none-any.whl PACKAGE4-1.0.1-py3-none-any.whl does not exist.
Code: LibraryDoesNotExistInWorkspace
Message: The LibraryArtifact PACKAGE1-1.0.1-py3-none-any.whl PACKAGE2-1.0.6.3-py3-none-any.whl PACKAGE3-1.0.0-py3-none-any.whl PACKAGE4-1.0.1-py3-none-any.whl does not exist.
The same works if I have only one package in the variable $new_package_names.
It looks to me like Azure thinks it's all one package instead of four different ones. All four are uploaded to the Synapse workspace and available for selection when I do the same process manually. Does anyone know of a fix for this issue? Does it only work for .jar files for some reason?
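(As an aside, one way to double-check from the CLI which library artifacts are actually uploaded to the workspace is the workspace-package command group, assuming the synapse CLI extension is installed; the variable is reused from the question:)
az synapse workspace-package list --workspace-name $workspace_name --output table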
It turns out that it really comes down to the format in which I pass the package names to the command. Something apparently changed internally, as the previous way no longer works.
As MartinJaffer from Microsoft answered in the MS Q&A forum:
"""
If you are using az in PowerShell, there is a better way to go about this.
$new_package_names = "PACKAGE1-1.0.1-py3-none-any.whl" , "PACKAGE2-1.0.6.3-py3-none-any.whl" , "PACKAGE3-1.0.0-py3-none-any.whl" , "PACKAGE4-1.0.1-py3-none-any.whl"
az synapse spark pool update --name $pool_name --workspace-name $workspace_name --resource-group $resource_group --package-action Add --package @new_package_names
Here we changed new_package_names into an array, and use the @ splatting operator to separate the elements.
As a simpler example, it makes the following two excerpts equivalent:
Copy-Item "test.txt" "test2.txt" -WhatIf
$ArrayArguments = "test.txt", "test2.txt"
Copy-Item @ArrayArguments -WhatIf
"""
Using the splatting operator when passing the parameters worked perfectly.

Azure CLI not recognizing the command az ml data create -f <file-name>.yml

I have a folder called data-asset which contains a YAML file with the following:
type: uri_folder
name: <name_of_data>
description: <description goes here>
path: <path>
In a pipeline I am referencing this via an Azure CLI inline script using the command az ml data create -f <file-name>.yml, but I am getting an error.
Full error: D:\a\1\s\ETL\data-asset>az ml data create -f data-asset.yml
ERROR: 'ml' is misspelled or not recognized by the system.
Examples from AI knowledge base:
az extension add --name anextension
Add extension by name
I am trying to implement this: https://learn.microsoft.com/en-us/azure/machine-learning/how-to-create-register-data-assets?tabs=CLI
How can I resolve this?
One of the workarounds you can follow to resolve the above issue:
Based on this GitHub issue, as suggested by adba-msft:
Please make sure that you have upgraded your Azure CLI to the latest version and
that the Azure CLI ML extension v2 is being used.
To check and upgrade the CLI, we can use the commands below:
az version
az upgrade
For more information, please refer to this similar SO thread: 'create' is misspelled or not recognized by the system on az ml dataset create.
I observed the same issue. After trying the aforementioned suggestion by Dor Lugasi-Gal, it works for me: after installing the extension with az extension add -n ml -y, I can run az ml -h (in my case) and get the result without any error.
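Putting the pieces together, a minimal sequence that should make the command available looks roughly like this; the resource group and workspace names are placeholders:
az upgrade
az extension add -n ml -y
az ml data create -f data-asset.yml --resource-group <resource-group> --workspace-name <workspace-name>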

Azure startup script is not executed

I've learned how to deploy .sh scripts to Azure with Azure CLI. But it seems like I have no clear understanding of how they work.
I'm creating a script that simply unarchives a .tgz archive in the current directory of an Azure Web App and then deletes the archive. Quite simple:
New-Item ./startup.sh
Set-Content ./startup.sh '#!/bin/sh'
Add-Content ./startup.sh 'tar zxvf archive.tgz; rm -rf ./archive.tgz'
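(For clarity, the resulting startup.sh should therefore contain:)
#!/bin/sh
tar zxvf archive.tgz; rm -rf ./archive.tgz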
And then I deploy the script like this:
az webapp deploy --resource-group Group
--name Name
--src-path ./startup.sh
--target-path /home/site/wwwroot/startup.sh
--type=startup
Supposedly, it should appear in /home/site/wwwroot/, but for some reason it never does, no matter what I try. I thought it would just get executed and then deleted automatically (since I specified it as a startup script), but the archive is still there, not unarchived at all.
My stack is .NET Core.
What am I doing wrong, and what's the right way to do what I need to do? Thank you.
I don't know if it makes sense, but I think the problem might be that you're using the target-path parameter while you should be using path instead.
From the documentation you cited, when describing the Azure CLI functionality, they state:
The CLI command uses the Kudu publish API to deploy the package and can be
fully customized.
The Kudu publish API reference indicates, when describing the different values for type and especially startup:
type=startup: Deploy a script that App Service automatically uses as the
startup script for your app. By default, the script is deployed to
D:\home\site\scripts\<name-of-source> for Windows and
home/site/wwwroot/startup.sh for Linux. The target path can be specified
with path.
Note the use of path:
The absolute path to deploy the artifact to. For example,
"/home/site/deployments/tools/driver.jar", "/home/site/scripts/helper.sh".
I never tested it, and I am aware that the option is not described for the az webapp deploy command itself, so it may just be an error in the documentation, but it may work:
az webapp deploy --resource-group Group
--name Name
--src-path ./startup.sh
--path /home/site/wwwroot/startup.sh
--type=startup
Note that the path you are providing is the default one; as a consequence, you could safely omit it if required:
az webapp deploy --resource-group Group
--name Name
--src-path ./startup.sh
--type=startup
Finally, try including some debug or echo commands in your script: perhaps the problem is caused by a permissions issue, and having some traces in the logs could be helpful as well.
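For instance, a minimal instrumented variant of the script might look like this; writing to /home/LogFiles is an assumption about where App Service on Linux keeps persistent logs:
#!/bin/sh
# trace where and when the script runs (hypothetical log file)
echo "startup.sh running in $(pwd) at $(date)" >> /home/LogFiles/startup.log
tar zxvf archive.tgz >> /home/LogFiles/startup.log 2>&1
rm -rf ./archive.tgz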

The Resource under resource group was not found error

I just started learning Azure by following a Pluralsight course. I'm following the author's video and doing the same in my system.
To create the App Service, I used the following command:
>az webapp create -p MahaAppServicePlan -g MAHAResourceGroup -n datingapp -l
I have already created the MahaAppServicePlan App Service plan and the MAHAResourceGroup resource group. Now I am trying to create the datingapp web app, hence the command above. But I am getting the error below.
ResourceNotFound - The Resource 'Microsoft.Web/sites/datingapp' under resource group 'MAHAResourceGroup' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix
I followed the above link with the hope that some suggestion could be helpful to me, but no luck.
When I googled, I found some resources, but with my existing knowledge I am unable to adapt them to my requirement. Can anyone please suggest how to fix the above error?
I'm not used to working with PowerShell, but I have recreated your problem and I get this:
If you explore the log info, you will see something like this:
I can confirm that the error is that the app name is invalid; I have manually created the App Service and can see this:
I can see in this last image that the runtime is mandatory, which the documentation does not say (https://learn.microsoft.com/en-us/cli/azure/webapp?view=azure-cli-latest#az-webapp-create). But if you add -r "your chosen runtime", the command will execute successfully:
az webapp create -g MAHAResourceGroup -p MahaAppServicePlan -n webappteststackoverflow -r "DOTNETCORE|3.1"
You can see the available runtimes with this command:
az webapp list-runtimes
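(On recent CLI versions the output can also be narrowed to one OS; the --os-type flag is an assumption about your CLI version:)
az webapp list-runtimes --os-type linux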

Helm - Spark operator examples/spark-pi.yaml does not exist

I've deployed Spark Operator to GKE using the Helm Chart to a custom namespace:
helm install --name sparkoperator incubator/sparkoperator --namespace custom-ns --set sparkJobNamespace=custom-ns
and confirmed the operator running in the cluster with helm status sparkoperator.
However, when I try to run the Spark Pi example with kubectl apply -f examples/spark-pi.yaml, I get the following error:
the path "examples/spark-pi.yaml" does not exist
There are a few things that I probably still don't get:
Where is examples/spark-pi.yaml actually located after deploying the operator?
What else should I check and what other steps should I take to make the example work?
Please find the spark-pi.yaml file here.
You should copy it to your filesystem, customize it if needed, and provide a valid path to it with kubectl apply -f path/to/spark-pi.yaml.
kubectl apply needs a YAML file that is either local to the system where you are running the kubectl command, or reachable via an http/https endpoint hosting the file.
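For example, once the file has been saved locally (the path is a placeholder, and the namespace matches the sparkJobNamespace used when installing the operator):
kubectl apply -f ./spark-pi.yaml --namespace custom-ns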
