ACR task tagging - Azure

I am creating an ACR task with the script below in the Azure CLI to patch my Azure container image when the base image is updated, and it is working fine.
az acr task create \
--registry Myregistry \
--name myacrtask \
--image myimage:{{.Run.ID}} \
--context https://dev.azure.com/testaccount/myproject/_git/acr-build-helloworld-node.git#master \
--file Dockerfile-app \
--commit-trigger-enabled true \
--base-image-trigger-enabled true \
--git-access-token *****************************
Right now my image is tagged with the run ID, as you can see in my command; the tag is generated when the task runs.
Now I want to create a custom tag made of the current date plus some text, like below:
if today's date is 09032020, then the tag should be
09032020_sometext
I am not sure how I can generate this kind of tag in place of the run ID. I tried
--image myimage:{{$(date +'%m%d%Y-BAU')}}
but no luck.
Any suggestion will be really appreciated.
Thanks
Rajiv

You can change the tag like this in the command:
--image myimage:$(date +%m%d%Y)-BAU
Then it will work fine, and the tag will look like this: 09032020-BAU
And if you want the tag to look like 09032020_sometext, you just need to change the - into _.
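Putting that together with the original command, a sketch of the full task creation might look like this (note that the shell evaluates $(date +%m%d%Y) once, when the task is created, not on every run; the token stays masked as in the question):
az acr task create \
--registry Myregistry \
--name myacrtask \
--image myimage:$(date +%m%d%Y)_BAU \
--context https://dev.azure.com/testaccount/myproject/_git/acr-build-helloworld-node.git#master \
--file Dockerfile-app \
--commit-trigger-enabled true \
--base-image-trigger-enabled true \
--git-access-token *****************************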

Related

Adding whl files to an Azure Synapse spark pool

According to the documentation, we should be able to add custom libraries as follows:
az synapse spark pool update --name testpool \
--workspace-name testsynapseworkspace --resource-group rg \
--package-action Add --package package1.jar package2.jar
However, when I try this with my python package whl files, I get an error message that the package does not exist.
> $new_package_names = "PACKAGE1-1.0.1-py3-none-any.whl PACKAGE2-1.0.6.3-py3-none-any.whl PACKAGE3-1.0.0-py3-none-any.whl PACKAGE4-1.0.1-py3-none-any.whl"
> az synapse spark pool update --name $pool_name --workspace-name $workspace_name --resource-group $resource_group --package-action Add --package $new_package_names
I receive the following error:
(LibraryDoesNotExistInWorkspace) The LibraryArtifact PACKAGE1-1.0.1-py3-none-any.whl PACKAGE2-1.0.6.3-py3-none-any.whl PACKAGE3-1.0.0-py3-none-any.whl PACKAGE4-1.0.1-py3-none-any.whl does not exist.
The same command works if I have only one package in the variable $new_package_names.
It looks to me like Azure treats it all as one package instead of four different ones. All four are uploaded to the Synapse workspace and available for selection when I do the same process manually. Does anyone know of a fix for this issue? Does it only work for .jar files for some reason?
It turns out that it really comes down to the format in which I pass the package names to the command. Something apparently changed internally, as the previous way did not work anymore.
As MartinJaffer from Microsoft answered in the MS Q&A forum:
"""
If you are using az in powershell, there is a better way to go about this.
$new_package_names = "PACKAGE1-1.0.1-py3-none-any.whl" , "PACKAGE2-1.0.6.3-py3-none-any.whl" , "PACKAGE3-1.0.0-py3-none-any.whl" , "PACKAGE4-1.0.1-py3-none-any.whl"
az synapse spark pool update --name $pool_name --workspace-name $workspace_name --resource-group $resource_group --package-action Add --package @new_package_names
Here we changed new_package_names into an array type, and use the @ splatting operator to separate the elements.
As a simpler example, it makes the following two excerpts equivalent:
Copy-Item "test.txt" "test2.txt" -WhatIf
$ArrayArguments = "test.txt", "test2.txt"
Copy-Item @ArrayArguments -WhatIf
"""
Utilizing the splatting operator when passing the parameters worked perfectly.
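For completeness, a PowerShell sketch that builds the array from the .whl files in a local folder and splats it into the same command (the ./wheels folder name is hypothetical; the pool variables are the ones from the question):
# Collect the wheel file names into an array
$new_package_names = (Get-ChildItem ./wheels -Filter *.whl).Name
# Splat the array so each file name is passed as a separate argument
az synapse spark pool update --name $pool_name --workspace-name $workspace_name --resource-group $resource_group --package-action Add --package @new_package_names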

Azure startup script is not executed

I've learned how to deploy .sh scripts to Azure with the Azure CLI, but it seems I have no clear understanding of how they work.
I'm creating a script that simply unarchives a .tgz archive in the current directory of an Azure Web App and then deletes the archive. Quite simple:
New-Item ./startup.sh
Set-Content ./startup.sh '#!/bin/sh'
Add-Content ./startup.sh 'tar zxvf archive.tgz; rm -rf ./archive.tgz'
And then I deploy the script like this:
az webapp deploy --resource-group Group `
--name Name `
--src-path ./startup.sh `
--target-path /home/site/wwwroot/startup.sh `
--type=startup
Supposedly, it should appear in /home/site/wwwroot/, but for some reason it never does, no matter what I try. I thought it would just get executed and then deleted automatically (since I specified it as a startup script), but the archive is still there, not unarchived at all.
My stack is .NET Core.
What am I doing wrong, and what's the right way to do what I need to do? Thank you.
I don't know if it makes sense, but I think the problem might be that you're using the target-path parameter while you should be using path instead.
From the documentation you cited, when describing the Azure CLI functionality, they state:
The CLI command uses the Kudu publish API to deploy the package and can be
fully customized.
The Kudu publish API reference indicates, when describing the different values for type and especially startup:
type=startup: Deploy a script that App Service automatically uses as the
startup script for your app. By default, the script is deployed to
D:\home\site\scripts\<name-of-source> for Windows and
home/site/wwwroot/startup.sh for Linux. The target path can be specified
with path.
Note the use of path:
The absolute path to deploy the artifact to. For example,
"/home/site/deployments/tools/driver.jar", "/home/site/scripts/helper.sh".
I never tested it, and I am aware that the option is not described when talking about the az webapp deploy command itself, so it may just be an error in the documentation, but it may work:
az webapp deploy --resource-group Group `
--name Name `
--src-path ./startup.sh `
--path /home/site/wwwroot/startup.sh `
--type=startup
Note that the path you are providing is the default one; as a consequence, you could safely omit it if required:
az webapp deploy --resource-group Group `
--name Name `
--src-path ./startup.sh `
--type=startup
Finally, try including some debug or echo commands in your script: perhaps the problem is caused by a permissions issue, and having some traces in the logs could be helpful as well.
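For instance, a sketch of the startup script with some tracing added (assuming, as in the question, that the archive sits in the app root):
#!/bin/sh
# Trace where the script runs and whether the archive is present
echo "startup.sh: running in $(pwd)"
ls -l archive.tgz || echo "startup.sh: archive.tgz not found"
# Unpack, and remove the archive only if extraction succeeded
tar zxvf archive.tgz && rm -rf ./archive.tgz
echo "startup.sh: done"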

Azure Container Instance | Environment Variables from an Environment Variables File

How can I create an Azure container instance and configure it with an environment variables file?
Something that'd be equivalent to Docker's --env-file flag for the run command. I couldn't find a way to do that but I'm new to both Azure and Docker.
So it'd look something like: az container create <...> --env-file myEnvFile where myEnvFile is stored somewhere on Azure so I could grab it, like how Docker can grab such a file locally.
You can find what you want here https://learn.microsoft.com/en-us/cli/azure/container?view=azure-cli-latest#az-container-create
i.e.
az container create -g MyResourceGroup --name myapp --image myimage:latest --environment-variables key1=value1 key2=value2
Apologies, I realised you want it from a file. If you are running in a script, can you not have the file set local environment variables, or parse the file to set them, and then run the command above?
I'm quite sure there is no parameter that sets the environment variables of an Azure container instance from a file through one command alone.
You can take a look at the parameter --environment-variables in the command az container create:
A list of environment variables for the container. Space-separated
values in 'key=value' format.
It requires a list as its value. So you can read the file to build a list, and then use that list as the value of the parameter --environment-variables in the create command.
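A minimal bash sketch of that idea (myEnvFile is the file name from the question; the parsing assumes simple key=value lines without spaces or quoting, and skips comment lines):
# Flatten the file into space-separated key=value pairs
env_vars=$(grep -v '^#' myEnvFile | xargs)
az container create -g MyResourceGroup --name myapp --image myimage:latest \
--environment-variables $env_vars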
As far as I'm aware, from answers and my research, this is currently not supported.

How to reuse successfully built docker images in Azure ML?

In our company I use Azure ML and I have the following issue. I specify a conda_requirements.yaml file with the PyTorch estimator class, like so (... are placeholders so that I do not have to type everything out):
from azureml.train.dnn import PyTorch
est = PyTorch(source_directory='.', script_params=..., compute_target=..., entry_script=..., conda_dependencies_file_path='conda_requirements.yaml', environment_variables=..., framework_version='1.1')
The conda_requirements.yaml (shortened version of the pip part) looks like this:
dependencies:
  - conda=4.5.11
  - conda-package-handling=1.3.10
  - python=3.6.2
  - cython=0.29.10
  - scikit-learn==0.21.2
  - anaconda::cloudpickle==1.2.1
  - anaconda::cffi==1.12.3
  - anaconda::mxnet=1.1.0
  - anaconda::psutil==5.6.3
  - anaconda::pip=19.1.1
  - anaconda::six==1.12.0
  - anaconda::mkl==2019.4
  - conda-forge::openmpi=3.1.2
  - conda-forge::pycparser==2.19
  - tensorboard==1.13.1
  - tensorflow==1.13.1
  - pip:
    - torch==1.1.0
    - torchvision==0.2.1
This successfully builds on Azure. Now, in order to reuse the resulting docker image, I pass it via the custom_docker_image parameter of the Estimator:
from azureml.train.estimator import Estimator
est = Estimator(source_directory='.', script_params=..., compute_target=..., entry_script=..., custom_docker_image='<container registry name>.azurecr.io/azureml/azureml_c3a4f...', environment_variables=...)
But now Azure somehow seems to rebuild the image again, and when I run the experiment it cannot install torch. So it seems to install only the conda dependencies and not the pip dependencies; in any case, I do not want Azure to rebuild the image at all. Can I solve this somehow?
I also attempted to build a docker image from my Dockerfile myself and then push it to the registry. I can do az login, and according to https://learn.microsoft.com/en-us/azure/container-registry/container-registry-authentication I should then also be able to do an acr login and push. This does not work.
Even using the credentials from
az acr credential show --name <container registry name>
and then doing a
docker login <container registry name>.azurecr.io -u <username from credentials above> -p <password from credentials above>
does not work.
The error message is authentication required even though I used
az login
successfully. Would also be happy if someone could explain that to me in addition to how to reuse docker images when using Azure ML.
Thank you!
AzureML should actually cache your docker image once it has been created. The service hashes the base docker info and the contents of the conda.yaml file and uses that as the hash key -- unless you change any of that information, the image should come from the ACR.
As for the custom docker usage, did you set the parameter user_managed=True? Otherwise, AzureML will consider your docker to be a base image on top of which it will create the conda environment per your yaml file.
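For illustration, a sketch of the Estimator call with that flag set (the ... placeholders follow the question's convention, and the image name is the one from the question):
from azureml.train.estimator import Estimator

# user_managed=True tells AzureML the image already contains every
# dependency, so no conda environment is built on top of it
est = Estimator(source_directory='.',
                script_params=...,
                compute_target=...,
                entry_script=...,
                custom_docker_image='<container registry name>.azurecr.io/azureml/azureml_c3a4f...',
                user_managed=True)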
There is an example of how to use a custom docker image in this notebook:
https://github.com/Azure/MachineLearningNotebooks/blob/4170a394edd36413edebdbab347afb0d833c94ee/how-to-use-azureml/training-with-deep-learning/how-to-use-estimator/how-to-use-estimator.ipynb

Update Docker tag using Docker task on Azure DevOps pipeline

I'm trying to change the tag of a Docker image using a Docker task on an Azure DevOps pipeline, without success.
Consider a Docker image hosted on an Azure container registry, which my task references through the variable $(DockerImageName), whose value is agents/standard-linux-docker2:310851.
I'm trying to change the Docker image tag (e.g. to latest), but so far I haven't been able to make it work. I've also tried setting the task arguments, without success.
Task fails with the following error message:
Error response from daemon: No such image: agents/standard-linux-docker2:310851
/usr/bin/docker failed with return code: 1
What am I missing here?
Try using the Azure CLI task instead and run the following command:
az acr import --name xxxxxacr --source xxxxxacr.azurecr.io/xxx/xxx-api:stage --image xxxxyyyyyyy/yyyyyyyy-api:prod --force
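Applied to the image from the question, the same idea might look like this (az acr import can pull from the same registry, which effectively retags the image; the registry name is a placeholder):
az acr import --name myregistry \
--source myregistry.azurecr.io/agents/standard-linux-docker2:310851 \
--image agents/standard-linux-docker2:latest --force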
