In the past I used IoTHub Explorer for logging in and creating a session to then do further operations (like calling device methods). IoTHub Explorer has been deprecated by Microsoft. (I'm doing some application-level test automation)
How can I create sessions as I did with the explorer using the azure CLI az?
Here is what I did in the past:
iothub-explorer login "HostName=..."
iothub-explorer device-method <device> "<method>" ...
Here is what I do now:
az iot hub invoke-device-method -l "HostName=..." -n <hub-name> -d <device> --method-name <method>
As can be seen, I have to provide the -l option with every call to az iot. Ideally, I could avoid this by creating a session.
I tried to use az login, which opens a website and is not ideal for test automation. Even after that, calling az iot hub invoke-device-method without -l leads to an exception: AttributeError: 'IotHubResourceOperations' object has no attribute 'config'
I tried to generate a SAS token, but I'm not sure what to do with it.
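As far as I understand, such a SAS token could be passed as the Authorization header of a direct-method call against the IoT Hub REST API. A minimal sketch of that idea (the api-version value and the JSON body shape are assumptions on my part):
curl -X POST "https://<hub-name>.azure-devices.net/twins/<device>/methods?api-version=2021-04-12" \
  -H "Authorization: <sas-token>" \
  -H "Content-Type: application/json" \
  -d '{"methodName": "<method>", "responseTimeoutInSeconds": 30, "payload": {}}'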
It turned out that my azure-cli environment was not properly set up; refer to https://github.com/Azure/azure-cli/issues/15461. Do not mix the Debian/system packages of azure-cli (better not to use them at all) with pip-installed ones. Do everything with pip, either as a user or as root.
I created a new virtualenv to clean it:
$ virtualenv ~/python-venv/azure-venv
$ . ~/python-venv/azure-venv/bin/activate
(azure-venv) $ pip install azure-cli
(azure-venv) $ az login
(azure-venv) $ az iot hub generate-sas-token --duration 3600 -n <hubname> -l <login-string>
(azure-venv) $ az iot hub invoke-device-method -n <hub-name> -d <device> --method-name <method>
And it works.
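As a side note for unattended test automation: az login can also authenticate non-interactively with a service principal, which avoids the browser prompt. A minimal sketch (the app ID, client secret, and tenant ID are placeholders):
az login --service-principal -u <app-id> -p <client-secret> --tenant <tenant-id>
az iot hub invoke-device-method -n <hub-name> -d <device> --method-name <method>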
Related
I have a folder called data-asset which contains a YAML file with the following:
type: uri_folder
name: <name_of_data>
description: <description goes here>
path: <path>
In a pipeline, I am referencing this from an Azure CLI inline script using the following command: az ml data create -f .yml, but I am getting an error.
Full error:
D:\a\1\s\ETL\data-asset>az ml data create -f data-asset.yml
ERROR: 'ml' is misspelled or not recognized by the system.
Examples from AI knowledge base:
az extension add --name anextension
Add extension by name
I am trying to implement this: https://learn.microsoft.com/en-us/azure/machine-learning/how-to-create-register-data-assets?tabs=CLI
How can I resolve this?
One workaround you can follow to resolve the above issue, based on this GitHub issue as suggested by #adba-msft:
Please make sure that you have upgraded your Azure CLI to the latest version and that the Azure CLI ML extension v2 is being used.
To check and upgrade the CLI, we can use the commands below:
az version
az upgrade
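To get the v2 ml extension in place, something along these lines should work (a sketch; the remove step only applies if the old v1 azure-cli-ml extension is installed):
az extension remove -n azure-cli-ml
az extension add -n ml -y
az ml -h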
For more information, please refer to this similar SO thread: 'create' is misspelled or not recognized by the system on az ml dataset create.
I observed the same issue. The aforementioned suggestion by #Dor Lugasi-Gal works for me: after installing the extension with az extension add -n ml -y, I am able to run az ml -h (in my case) without any error.
I've learned how to deploy .sh scripts to Azure with Azure CLI. But it seems like I have no clear understanding of how they work.
I'm creating a script that simply unpacks a .tgz archive in the current directory of the Azure Web App and then deletes the archive. Quite simple:
New-Item ./startup.sh
Set-Content ./startup.sh '#!/bin/sh'
Add-Content ./startup.sh 'tar zxvf archive.tgz; rm -rf ./archive.tgz'
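The resulting startup.sh should then contain:
#!/bin/sh
tar zxvf archive.tgz; rm -rf ./archive.tgz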
And then I deploy the script like this:
az webapp deploy --resource-group Group
--name Name
--src-path ./startup.sh
--target-path /home/site/wwwroot/startup.sh
--type=startup
Supposedly, it should appear in /home/site/wwwroot/, but for some reason it never does, no matter what I try. I thought it might just get executed and then deleted automatically (since I specified it as a startup script), but the archive is still there, not unpacked at all.
My stack is .NET Core.
What am I doing wrong, and what's the right way to do what I need to do? Thank you.
I don't know if it makes sense, but I think the problem might be that you're using the target-path parameter while you should be using path instead.
From the documentation you cited, when describing the Azure CLI functionality, they state:
The CLI command uses the Kudu publish API to deploy the package and can be
fully customized.
The Kudu publish API reference indicates, when describing the different values for type and especially startup:
type=startup: Deploy a script that App Service automatically uses as the
startup script for your app. By default, the script is deployed to
D:\home\site\scripts\<name-of-source> for Windows and
home/site/wwwroot/startup.sh for Linux. The target path can be specified
with path.
Note the use of path:
The absolute path to deploy the artifact to. For example,
"/home/site/deployments/tools/driver.jar", "/home/site/scripts/helper.sh".
I never tested it, and I am aware that the option is not described for the az webapp deploy command itself (it may just be an error in the documentation), but it may work:
az webapp deploy --resource-group Group
--name Name
--src-path ./startup.sh
--path /home/site/wwwroot/startup.sh
--type=startup
Note that the path you are providing is already the default one; as a consequence, you could safely omit it if required:
az webapp deploy --resource-group Group
--name Name
--src-path ./startup.sh
--type=startup
Finally, try including some debug or echo commands in your script: perhaps the problem is caused by a permissions issue, and having some traces in the logs would be helpful as well.
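For example, you could enable and stream the container logs while the app starts, to see whether the script ran at all (a sketch, reusing the placeholder names from above):
az webapp log config --resource-group Group --name Name --docker-container-logging filesystem
az webapp log tail --resource-group Group --name Name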
I just started learning Azure by following a Pluralsight course. I'm following the author's video and doing the same on my system.
To create the App Service, I used the following command.
>az webapp create -p MahaAppServicePlan -g MAHAResourceGroup -n datingapp -l
I have already created the MahaAppServicePlan App Service plan and the MAHAResourceGroup resource group. Now I am trying to create the datingapp web app, so I issued the command above. But I am getting the error below.
ResourceNotFound - The Resource 'Microsoft.Web/sites/datingapp' under resource group 'MAHAResourceGroup' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix
I followed the above link with the hope that some suggestion could be helpful to me, but no luck.
When I googled, I found some resources, but with my existing knowledge I am unable to adapt them to my requirement. Can anyone please suggest how to fix the above error?
I'm not used to working with PowerShell, but I have recreated your problem and I get this:
If you explore the log info you will see something like this:
I can confirm that the error is that the app name is invalid. I manually created the App Service and saw this:
I can see in this last image that the runtime is mandatory, which the documentation does not say (https://learn.microsoft.com/en-us/cli/azure/webapp?view=azure-cli-latest#az-webapp-create). But if you add -r "<your chosen runtime>", the command will execute successfully:
az webapp create -g MAHAResourceGroup -p MahaAppServicePlan -n webappteststackoverflow -r "DOTNETCORE|3.1"
You can see the available runtimes with this command:
az webapp list-runtimes
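Applied to the command from the question, it would look roughly like this (a sketch; pick a runtime value from the list-runtimes output):
az webapp create -g MAHAResourceGroup -p MahaAppServicePlan -n datingapp -r "DOTNETCORE|3.1"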
I tried to add a custom script to a VM through extensions. I have observed that when the VM is created, an extension of type Microsoft.Azure.Extensions.CustomScript is created with the name "cse-agent" by default. So I tried to update the extension by passing the base64-encoded script file in the script property:
az vm extension set \
--resource-group test_RG \
--vm-name aks-agentpool \
--name CustomScript \
--subscription ${SUBSCRIPTION_ID} \
--publisher Microsoft.Azure.Extensions \
--settings '{"script": "'"$value"'"}'
$value represents the script file encoded in base 64.
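For example, on Linux $value could be produced roughly like this (a sketch; script.sh is a placeholder for the actual script file):
value=$(base64 -w 0 script.sh)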
Doing that gives me an error:
Deployment failed. Correlation ID: xxxx-xxxx-xxx-xxxxx.
VM has reported a failure when processing extension 'cse-agent'.
Error message: "Enable failed: failed to get configuration: invalid configuration:
'commandToExecute' and 'script' were both specified, but only one is validate at a time"
From the documentation, when the script attribute is present there is no need for commandToExecute. As you can see above, I haven't specified commandToExecute; it's somehow being taken from the previous extension. Is there a way to update the extension without deleting it? It would also be interesting to know what impact deleting the cse-agent extension would have.
FYI: I have tried deleting the 'cse-agent' extension from the VM and adding my extension. It worked.
The cse-agent VM extension is crucial and manages all of the post-install steps needed to configure the nodes to be considered valid Kubernetes nodes. Removing this CSE will break the VMs and render your cluster inoperable.
If you are interested in applying changes to nodes in an existing cluster, while not officially supported, you could leverage the following project:
https://github.com/juan-lee/knode
This allows you to configure the nodes using a DaemonSet, which helps when your node pools have the auto-scaling feature enabled.
For simple node filesystem alterations, a privileged pod with a hostPath mount will also work:
https://dev.to/dannypsnl/privileged-pod-debug-kubernetes-node-5129
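A related shortcut, not the exact approach from that article but the same idea automated by kubectl: recent kubectl versions can start a debug pod on a node with the host filesystem mounted under /host. A sketch with a placeholder node name:
kubectl debug node/aks-nodepool1-12345678-vmss000000 -it --image=busybox
# inside the debug pod, the node's root filesystem is mounted at /host
chroot /host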
In our company I use Azure ML and I have the following issue. I specify a conda_requirements.yaml file with the PyTorch estimator class, like so (... are placeholders so that I do not have to type everything out):
from azureml.train.dnn import PyTorch
est = PyTorch(source_directory='.', script_params=..., compute_target=..., entry_script=..., conda_dependencies_file_path='conda_requirements.yaml', environment_variables=..., framework_version='1.1')
The conda_requirements.yaml (shortened version of the pip part) looks like this:
dependencies:
- conda=4.5.11
- conda-package-handling=1.3.10
- python=3.6.2
- cython=0.29.10
- scikit-learn==0.21.2
- anaconda::cloudpickle==1.2.1
- anaconda::cffi==1.12.3
- anaconda::mxnet=1.1.0
- anaconda::psutil==5.6.3
- anaconda::pip=19.1.1
- anaconda::six==1.12.0
- anaconda::mkl==2019.4
- conda-forge::openmpi=3.1.2
- conda-forge::pycparser==2.19
- tensorboard==1.13.1
- tensorflow==1.13.1
- pip:
- torch==1.1.0
- torchvision==0.2.1
This successfully builds on Azure. Now, in order to reuse the resulting Docker image, I pass the custom_docker_image parameter to the Estimator:
from azureml.train.estimator import Estimator
est = Estimator(source_directory='.', script_params=..., compute_target=..., entry_script=..., custom_docker_image='<container registry name>.azurecr.io/azureml/azureml_c3a4f...', environment_variables=...)
But now Azure somehow seems to rebuild the image again and when I run the experiment it cannot install torch. So it seems to only install the conda dependencies and not the pip dependencies, but actually I do not want Azure to rebuild the image. Can I solve this somehow?
I also attempted to build a Docker image from my Dockerfile and then push it to the registry. I can do az login, and according to https://learn.microsoft.com/en-us/azure/container-registry/container-registry-authentication I should then also be able to do an acr login and push. This does not work.
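For reference, the login-and-push flow from that document that I tried looks roughly like this (the image name is a placeholder):
az acr login --name <container registry name>
docker tag my-image <container registry name>.azurecr.io/my-image:latest
docker push <container registry name>.azurecr.io/my-image:latest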
Even using the credentials from
az acr credential show --name <container registry name>
and then doing a
docker login <container registry name>.azurecr.io -u <username from credentials above> -p <password from credentials above>
does not work.
The error message is authentication required even though I used
az login
successfully. I would also be happy if someone could explain that to me, in addition to how to reuse Docker images when using Azure ML.
Thank you!
AzureML should actually cache your Docker image once it has been created. The service will hash the base Docker info and the contents of the conda.yaml file and use that as the hash key -- unless you change any of that information, the Docker image should come from the ACR.
As for the custom docker usage, did you set the parameter user_managed=True? Otherwise, AzureML will consider your docker to be a base image on top of which it will create the conda environment per your yaml file.
There is an example of how to use a custom docker image in this notebook:
https://github.com/Azure/MachineLearningNotebooks/blob/4170a394edd36413edebdbab347afb0d833c94ee/how-to-use-azureml/training-with-deep-learning/how-to-use-estimator/how-to-use-estimator.ipynb