Cannot list pipeline steps using AzureML CLI - azure-machine-learning-service

I'm trying to list steps in a pipeline using AzureML CLI extension, but get an error:
>az ml run list -g <group> -w <workspace> --pipeline-run-id 00886abe-3f4e-4412-aec3-584e8c991665
UserErrorException:
Message: Cannot specify ['--last'] for pipeline runs
InnerException None
ErrorResponse
{
    "error": {
        "code": "UserError",
        "message": "Cannot specify ['--last'] for pipeline runs"
    }
}
From the help output it looks like the --last option defaults to 10, even though it is not supported together with --pipeline-run-id. How is the latter supposed to work?
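As a possible workaround, the steps of a pipeline run can also be listed from the Python SDK via PipelineRun.get_steps() in azureml.pipeline.core. A minimal sketch; the helper is generic, and the actual SDK calls (which need a live workspace and experiment) are shown in comments:

```python
def step_summary(pipeline_run):
    """Return (name, run id, status) for each child step of a pipeline run."""
    return [(step.name, step.id, step.get_status())
            for step in pipeline_run.get_steps()]

# With a live workspace (assuming azureml-pipeline-core is installed):
#   from azureml.pipeline.core import PipelineRun
#   run = PipelineRun(experiment, "00886abe-3f4e-4412-aec3-584e8c991665")
#   for name, run_id, status in step_summary(run):
#       print(name, run_id, status)
```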

Related

errorCode": "6000" - azure synapse running the pipeline

I get an error when running a pipeline in Azure Synapse. If I execute the Synapse notebook manually it works fine (reading and writing). But when I call the same notebook from the ForEach activity inside the pipeline, it fails to run with the following error. Previously, I had no problem.
Error
{
    "errorCode": "6000",
    "message": "{\n \"code\": 400,\n \"message\": \"Failed to run notebook due to invalid request. [Error: Not supported language in Synapse: ]\",\n \"result\": {\n \"errorMessage\": null,\n \"details\": null\n }\n}",
    "failureType": "UserError",
    "target": "1 - Full load Parquet",
    "details": []
}
It may be worth mentioning that if I first run a cell in the notebook manually and then trigger the pipeline, it runs perfectly. So I don't know why, but it seems to require a manual start of the engine?
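One thing that makes this error hard to read is that the activity's "message" field is itself a JSON document, so the real notebook error is double-encoded. A small sketch that decodes it to surface the inner message, using the error shown above:

```python
import json

def inner_synapse_error(activity_error):
    """Decode the JSON-encoded "message" of a Synapse activity error."""
    inner = json.loads(activity_error["message"])
    return inner["message"]

activity_error = {
    "errorCode": "6000",
    "message": "{\n \"code\": 400,\n \"message\": \"Failed to run notebook"
               " due to invalid request. [Error: Not supported language in"
               " Synapse: ]\",\n \"result\": {\n \"errorMessage\": null,\n"
               " \"details\": null\n }\n}",
    "failureType": "UserError",
}

print(inner_synapse_error(activity_error))
# → Failed to run notebook due to invalid request. [Error: Not supported language in Synapse: ]
```

The empty language name after "Not supported language in Synapse:" suggests the request reached Synapse without a notebook language set, which would be consistent with the notebook only working after a cell has been run manually first.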

How to submit local jobs with dsl.pipeline

Trying to run and debug a pipeline locally. The pipeline is implemented with azure.ml.component.dsl.pipeline. When I try to set default_compute_target='local', the compute target cannot be found:
local not found in workspace, assume this is an AmlCompute
...
File "/home/amirabdi/miniconda3/envs/stm/lib/python3.8/site-packages/azure/ml/component/run_settings.py", line 596, in _get_compute_type
raise InvalidTargetSpecifiedError(message="Cannot find compute '{}' in workspace.".format(compute_name))
azure.ml.component._util._exceptions.InvalidTargetSpecifiedError: InvalidTargetSpecifiedError:
Message: Cannot find compute 'local' in workspace.
InnerException None
ErrorResponse
{
    "error": {
        "code": "UserError",
        "message": "Cannot find compute 'local' in workspace."
    }
}
A local run, for example, can be achieved with azureml.core.ScriptRunConfig:
src = ScriptRunConfig(script="train.py", compute_target="local", environment=myenv)
run = exp.submit(src)
We have different types of compute targets, and one of those is the local computer.
Create an experiment:
from azureml.core import Experiment
experiment_name = 'my_experiment'
experiment = Experiment(workspace=ws, name=experiment_name)
Select the compute target where we need to run:
compute_target = 'local'
If no compute_target is specified in the ScriptRunConfig, AzureML will run the script locally.
from azureml.core import Environment
myenv = Environment("user-managed-env")
myenv.python.user_managed_dependencies = True
Create the script job, based on the procedure mentioned in link:
from azureml.core import ScriptRunConfig
src = ScriptRunConfig(script='train.py', compute_target=compute_target, environment=myenv)
Submit the experiment:
run = experiment.submit(config=src)
run.wait_for_completion(show_output=True)
For troubleshooting the procedure, check with link.

How to have the root cause of a BadRequestError during azure function deployment?

I'm trying to deploy a new version of an already existing Azure Function using the CLI with the following command:
az functionapp deployment source config-zip -g "resourcegroupeok" -n "function-app" --src MyNewFunction.zip
But I only get an error:
BadRequestError: Operation returned an invalid status code 'Bad Request'
Is there a way to increase verbosity or to get more info on what to check?
MyNewFunction.zip contains a JAR and a host.json file.
NB: When I try to use a wrong resource group name or a wrong function app name, I get a precise error telling me to check these values.
Example:
The function app 'Bad-Function-App' was not found in resource group 'resourcegroupeok'. Please make sure these values are correct.
As suggested in the comment by @Marco, the correct option is --debug.
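Besides --debug, when driving the CLI from a script it can help to wrap the call so that stderr is echoed and a nonzero exit code fails loudly. A generic sketch (the az invocation in the comment is only illustrative):

```python
import subprocess
import sys

def run_cli(args):
    """Run a CLI command; echo its stderr and raise if it exits nonzero."""
    result = subprocess.run(args, capture_output=True, text=True)
    if result.returncode != 0:
        sys.stderr.write(result.stderr)
        raise RuntimeError(f"{args[0]} exited with code {result.returncode}")
    return result.stdout

# e.g. run_cli(["az", "functionapp", "deployment", "source", "config-zip",
#               "-g", "resourcegroupeok", "-n", "function-app",
#               "--src", "MyNewFunction.zip", "--debug"])
```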

Get exit code from `az vm run-command` in Azure pipeline

I'm running a rather hefty build in my Azure pipeline, which involves processing a large amount of data and hence requires more memory than my build agent can handle. My approach is therefore to start up a Linux VM, run the build there, and push the resulting Docker image to my container registry.
To achieve this, I'm using the Azure CLI task to issue commands to the VM (e.g. az vm start, az vm run-command ... etc).
The problem I am facing is that az vm run-command "succeeds" even if the script that you run on the VM returns a nonzero status code. For example, this "bad" vm script:
az vm run-command invoke -g <group> -n <vmName> --command-id RunShellScript --scripts "cd /nonexistent/path"
returns the following response:
{
    "value": [
        {
            "code": "ProvisioningState/succeeded",
            "displayStatus": "Provisioning succeeded",
            "level": "Info",
            "message": "Enable succeeded: \n[stdout]\n\n[stderr]\n/var/lib/waagent/run-command/download/87/script.sh: 1: cd: can't cd to /nonexistent/path\n",
            "time": null
        }
    ]
}
So, the command succeeds, presumably because it succeeded in executing the script on the VM. The fact that the script actually failed on the VM is buried in the response "message".
I would like my Azure pipeline task to fail if the script on the VM returns a nonzero status code. How would I achieve that?
One idea would be to parse the response (somehow) and search the text under stderr - but that sounds like a real hassle, and I'm not sure even how to "access" the response within the task.
Have you enabled the option "Fail on Standard Error" on the Azure CLI task? If not, you can try enabling it and running the pipeline again to see whether the error "cd: can't cd to /nonexistent/path" makes the task fail.
If the task still passes, the error "cd: can't cd to /nonexistent/path" is not being treated as a standard error. In that situation, you may need to add more command lines to your script to monitor the output of the az command. Once any output message shows an error, execute "exit 1" to exit the script and return a standard error so that the task fails.
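Building on that idea, here is a sketch of what such parsing could look like: the [stdout]/[stderr] sections are embedded in the response's "message" string, so a wrapper script can fail the task whenever [stderr] is non-empty (the response dict below is the one from the question):

```python
def stderr_of(response):
    """Extract the [stderr] section from an `az vm run-command` response."""
    message = response["value"][0]["message"]
    # The message embeds "[stdout]\n...\n[stderr]\n..." sections.
    _, _, stderr = message.partition("[stderr]\n")
    return stderr.strip()

response = {
    "value": [{
        "code": "ProvisioningState/succeeded",
        "message": "Enable succeeded: \n[stdout]\n\n[stderr]\n"
                   "/var/lib/waagent/run-command/download/87/script.sh: 1: "
                   "cd: can't cd to /nonexistent/path\n",
    }]
}

if stderr_of(response):
    print("script failed on the VM:", stderr_of(response))
    # in a pipeline task you would call sys.exit(1) here
```

One caveat: a script that exits nonzero without writing anything to stderr would still slip through, so the VM script itself should print its failures to stderr (or echo its own exit code into the output).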
I solved this by using the SSH pipeline task - this allowed me to connect to the VM via SSH, and run the given script on the machine "directly" via SSH.
This means from the context of the task, you get the status code from the script itself running on the VM. You also see any console output inside the task logs, which was obscured when using az vm run-command.
Here's an example:
- task: SSH@0
  displayName: My VM script
  timeoutInMinutes: 10
  inputs:
    sshEndpoint: <sshConnectionName>
    runOptions: inline
    inline: |
      echo "Write your script here"
Note that the SSH connection needs to be set up as a service connection in the Azure Pipelines UI. You reference the name of that service connection in the YAML.

Using PsExec in Jenkins, even if the script fails, it shows Success

I am trying to run a PowerShell script which first logs in to Azure and then deploys the zip file to Azure using PsExec.
I am using the following command:
F:\jenkins\VMScripts\PsExec64.exe \\WINSU9 -u "WINSU9\administrator" -p mypassword /accepteula -h PowerShell -noninteractive -File C:\Shared\Trial\webappscript.ps1
I am getting the output as:
PsExec v2.2 - Execute processes remotely
Copyright (C) 2001-2016 Mark Russinovich
Sysinternals - www.sysinternals.com
[
    {
        "cloudName": "AzureCloud",
        "id": "a7b6d14fddef2",
        "isDefault": true,
        "name": "subscription_name",
        "state": "Enabled",
        "tenantId": "b41cd",
        "user": {
            "name": "username@user.com",
            "type": "user"
        }
    }
]
WARNING: Getting scm site credentials for zip deployment
Connecting to WINSU9...
Starting PSEXESVC service on WINSU9...
Connecting with PsExec service on WINSU9...
Starting PowerShell on WINSU9...
PowerShell exited on WINSU9 with error code 0.
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
It only gives the output of the az login command; the output of the deployment is not shown. Also, if the deployment fails, it still shows SUCCESS, but it should show failure.
Answering my own question so that others facing the same issue can get help here. As @Alex said, PowerShell is exiting with error code 0, so I tried to return error code 1 whenever any command fails. Since the output of the Azure CLI is in JSON format, I stored that output in a variable and checked whether it contains anything. A sample of the code is below.
$output = az login -u "username" -p "password" | ConvertFrom-Json
if (!$output) {
    Write-Error "Error validating the credentials"
    exit 1
}
The Jenkins job succeeded because PSExec.exe returned exit code 0, which means that no errors were encountered. Jenkins jobs will fail if the underlying scripts fail (e.g. returning non-zero exit codes, like 1). If the PSExec.exe application isn't doing what you want it to - I would wrap it in another script which performs post-deploy validation, and returns 1 if the deployment failed.
See How/When does Execute Shell mark a build as failure in Jenkins? for more details.
You can use the powershell step; this should surface the error directly as well.
