Deleting environments from azureml studio - azure-machine-learning-service

How can I delete an environment from an Azure Machine Learning workspace? I can create and list environments, but I could not figure out how to delete them.

If you want to remove an item listed in the Environments tab of Azure ML Studio, you can try the following Azure CLI command:
az ml environment delete -n <environment-name> -g <resource-group-name> -w <workspace-name> -v <environment-version>

Environments are versioned, however, so as a workaround you could simply register a new definition over the same name; consumers picking up the environment by default will then get the new version, which effectively "erases" the old one (if you just wanted to reuse the name).
If you're interested in shortening the list of returned values, it might be easier to use ws.environments["mysecondenv"] instead.
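If you go the register-over route from the command line, a rough sketch with the newer ML CLI (v2 extension) might look like this; the environment name, version, and base image here are placeholders, and flags may differ slightly between extension versions:
az ml environment create --name mysecondenv --version 2 \
    --image mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04 \
    -g <resource-group-name> -w <workspace-name>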

Currently, environments cannot be deleted from the GUI.
Here's what worked for me:
From a command prompt, log in with the Azure CLI: az login
Once you have verified your subscription/resource group/workspace, list the available environments with az ml environment list
Before deleting, it is good practice to list the specific environment: az ml environment list --name name-of-your-env
Once the environment is confirmed, you can delete it with az ml environment archive --name name-of-your-env
Please note there won't be any output if the deletion/archiving succeeds.
Instead of delete, the archive command seems to work. For more info, see here.
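Putting those steps together, a minimal sketch of the whole flow (the environment name is a placeholder; the az configure line is optional if you prefer passing -g/-w on each command):
az login
az configure --defaults group=<resource-group-name> workspace=<workspace-name>
az ml environment list
az ml environment list --name name-of-your-env
az ml environment archive --name name-of-your-env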

You probably mean un-registering an environment, because the environment itself does not exist under your workspace unless you build a container with this specific environment.
The answer is currently no. See here: https://github.com/Azure/MachineLearningNotebooks/issues/1201
The same goes for experiments and, I think, models as well. That actually makes sense to me, because you track them all together.

Related

IBM Cloud Code Engine: How can I deploy an app from GitLab source without CLI

I created a project and saved it in GitLab.
I tried to install the IBM Cloud CLI on my Windows 10 system and failed, even when using Run as administrator as mentioned in the CLI docs.
Now I want to deploy this project from source code without the CLI, but I could not find any docs about it.
I read about a Dockerfile that I should add to my project, but I know nothing about it.
Please help me in two ways:
Deploy my project from source code (connect GitLab to IBM Cloud Code Engine).
Deploy my project using the CLI on my Windows 10 system.
I just did the same thing as part 1 of your question yesterday. As a prerequisite, you will need a container registry to put things into, such as a free account on Docker Hub.
Start on the Code Engine console.
Choose Start with Source Code, and paste in your own GitLab URL. (The default is a sample repo which may be useful, especially https://github.com/IBM/hello.)
On the next page, you can accept most of the defaults, but you will need to create a project. That's free, but it needs a credit card so you can be on a Pay As You Go account.
You'll also need to Specify Build Details.
Here you tell it about your source repo and where your Dockerfile lives. If you don't have a Dockerfile, you can probably find a sample one for your chosen runtime (C#? Node.js? Java?), or you can try using the Cloud Native buildpack. The buildpack will try to work out how to run your code by inspecting what files you have.
Finally, on the last page of the build details, you will need to tell it where your container registry lives. This is a repository used to store the built images. If you set up a personal account on Docker Hub, you can just enter those credentials.
Once you've done that, choose Done in the sidebar, and then Create.
You should get a page showing that your image is building; once the build is done, a link in the top right will take you to your app's web page.
If you get stuck, there's a good set of documentation.
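For part 2 of the question, once the CLI install issue is sorted out, the same flow can be scripted. A rough sketch using the Code Engine CLI plugin (the project/app names and repo URL are placeholders, and exact flags may vary by plugin version):
ibmcloud plugin install code-engine
ibmcloud login
ibmcloud ce project create --name my-project
ibmcloud ce app create --name my-app --build-source https://gitlab.com/<user>/<repo> --strategy dockerfile
The --strategy dockerfile flag assumes a Dockerfile in the repo root; with the buildpacks strategy instead, Code Engine will try to detect the runtime itself.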

Azure DevOps Self-Hosted Agent - How to replicate cloud-hosted agents?

I wonder if, for simplicity reasons, it is possible to create Azure DevOps self-hosted agents locally, reproducing all capabilities of the cloud-hosted ones. I need to use self-hosted agents, but do not want to create installation and upgrade scripts for each and every application on them.
I would imagine there is something like a VM image with all tools preinstalled; possibly the same as in Azure DevOps. This could potentially have the benefit of 100% compatibility.
What I have found so far:
"Azure devops - Preparing self hosted test agents", which wants to automate agent installation; Ansible and silent installers are suggested to solve the issue.
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/docker?view=azure-devops, which suggests running agents in Docker.
https://github.com/microsoft/azure-pipelines-image-generation, which has been replaced by https://github.com/actions/virtual-environments and contains Packer files, but I cannot find any kind of documentation.
How can I create "the perfect Azure DevOps agent"?
I have had the same requirement before, and I agree with your points 2 and 3.
But because I am not very proficient in Docker and would need to maintain my Docker environment frequently, I chose to use Packer to build my image.
You could check these great articles for more details:
Build your own Hosted VSTS Agent Cloud: Part 1 - Build
Build your own Hosted VSTS Agent Cloud: Part 2 - Deploy
Looks like we are in the same rabbit hole. I started out with the same question as you posted, and it looks like you are on to the answer: set up a VM locally or on Azure and you are good to go. The links in the answer from "Leo Liu" are probably a good place to start. However, VMs are not Docker, and this doesn't answer your broader question.
If the question is rephrased as "Why doesn't Microsoft provide an easy way to set up self-hosted agents in Docker containers?", I believe the answer lies in their business model. Local VMs need Windows licenses, and Azure VMs are billed by the hour...
But, conspiracy theories aside, I don't think there is an easy way to set up a dockerized version of the cloud-hosted agents. It's probably not a very good idea either. Docker containers are meant to be small, and with all of the dependencies on those agents they are anything but small. It's also never going to be "perfect", as you say, since dockerized Windows isn't identical to what's running in their cloud-hosted VMs.
I have started tweaking something that is not perfect, but might work:
setup a docker-agent according to the documentation here
add the essence from the PS-scripts corresponding to the packages you need from here
add commands to the Dockerfile to COPY and RUN the scripts
docker build as usual, and you should have a container that's a bit more capable, with an agent that reports its capabilities in a similar way to the cloud agents
In an ideal world there would be a repository with all of the tweaked scripts, and a community that kept them updated. In an even more ideal world it would be a Microsoft-hosted repository, but like I said, that's probably unlikely.
Here is some code to get you started. Maybe I'll publish a more finished version somewhere in the future.
init.ps1 with some lines borrowed from here:
Write-Host "Install chocolatey"
$chocoExePath = 'C:\ProgramData\Chocolatey\bin'
if ($($env:Path).ToLower().Contains($($chocoExePath).ToLower())) {
Write-Host "Chocolatey found in PATH, skipping install..."
Exit
}
$systemPath = [Environment]::GetEnvironmentVariable('Path', [System.EnvironmentVariableTarget]::Machine)
$systemPath += ';' + $chocoExePath
[Environment]::SetEnvironmentVariable("PATH", $systemPath, [System.EnvironmentVariableTarget]::Machine)
$userPath = [Environment]::GetEnvironmentVariable('Path', [System.EnvironmentVariableTarget]::User)
if ($userPath) {
$env:Path = $systemPath + ";" + $userPath
}
else {
$env:Path = $systemPath
}
Invoke-Expression ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))
choco feature enable -n allowGlobalConfirmation
Remove-Item -Path $env:ChocolateyInstall\bin\cpack.exe -Force
Import-Module "$env:ChocolateyInstall\helpers\chocolateyInstaller.psm1" -Force
Get-ToolsLocation
Modified Dockerfile from the Microsoft documentation, which also runs the script at build time:
FROM mcr.microsoft.com/windows/servercore:ltsc2019
COPY init.ps1 /Windows/Temp/init.ps1
RUN powershell -executionpolicy bypass C:\Windows\Temp\init.ps1
WORKDIR /azp
COPY start.ps1 .
CMD powershell .\start.ps1
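To build and run it, something like the following should work, using the AZP_* environment variables from the Microsoft docker-agent documentation linked above; the organization URL, PAT, and agent name are placeholders:
docker build -t dockeragent:latest .
docker run -e AZP_URL="https://dev.azure.com/<organization>" -e AZP_TOKEN="<PAT>" -e AZP_AGENT_NAME="Docker Agent 1" dockeragent:latest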

Saving output to variable not working in Azure-CLI DevOps task

Trying to save the output of an Azure Advisor recommendation to a variable so that I can pass it on to the next task.
However, no matter what syntax I try, and believe me I have tried all possible combinations, the variable doesn't get saved.
Interestingly, these work in cloud shell (bash)
for E.g.
rgName="$(az group list --query "[?tags.Test=='yes'].name" --output tsv)"
az group show -n $rgName
This works just fine in Cloud Shell, but not in the DevOps Azure CLI task.
I also referred to multiple examples given in stack overflow itself, but none of them work.
Using task version 1.*
The error: 'rgName' is not recognised as an internal or external command
Can someone give a working example for the DevOps task?
Note: the whole reason I'm using the CLI is that I can't find an Advisor RM module, and the Az module won't load correctly in task version 4.*.
As Shayki mentioned above, task.setvariable can help with setting a variable from a script. The same has been detailed in this document. In a nutshell, you would have to do this:
rgName=$(az group list --query "[?tags.Test=='yes'].name" -o tsv | tr '\n' ' ')
echo "##vso[task.setvariable variable=RESULT]$rgName"
task.setvariable is a logging command: it does not update environment variables in the current step, but it does make the new variable available to downstream steps within the same job. Notice that multiple results come back separated by newlines rather than spaces, hence the tr '\n' ' ' trimming. Now, in the subsequent tasks where you need the variable, you can use it this way:
echo "Result: $(RESULT)"
Refer to this blog to get a detailed walkthrough. Hope this helps!
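Putting it together, the inline scripts of two consecutive Azure CLI tasks might look like this sketch (RESULT is just an example variable name):
# Task 1 (bash inline script): capture the value and publish it as a pipeline variable
rgName=$(az group list --query "[?tags.Test=='yes'].name" -o tsv | tr '\n' ' ')
echo "##vso[task.setvariable variable=RESULT]$rgName"
# Task 2 (bash inline script): $(RESULT) is expanded by the pipeline before the script runs
echo "Result: $(RESULT)"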
What fixed this for me was using a Linux agent job instead of a Windows agent, and then adding the Azure CLI task to run on the Linux agent, specifically ubuntu-16.04.

automate adding capabilities to Azure DevOps self-hosted agents

As far as I know, Azure DevOps agents are capable of automatically detecting their own capabilities. Based on the documentation, as long as I restart the host after a new piece of software has been installed, the capability should be registered automatically.
What I am having trouble with right now is getting the agent to detect the presence of Yarn on a self-hosted agent on a Windows host. The PATH environment variable shows the location of the Yarn executable, but Yarn is not listed as a capability despite restarting the host. My current workaround is to manually add Yarn to the capability list and set its value to true.
As a side note, Yarn was installed via Ansible using the win_chocolatey module. The install was successful with no errors.
I am wondering a few things:
1) Am I missing something which is causing this issue?
2) Is this an inherent issue with Yarn? If this is an inherent issue with Yarn, is there a way to automate the process of manually adding yarn as a capability?
Capabilities for a Windows agent come from its environment variables.
If you want to set a value, you add a machine-level environment variable:
[System.Environment]::SetEnvironmentVariable("CAPABILITYNAME", "value", "Machine")
When you restart the agent service, it picks this up.
I am currently trying to do something similar for a set of Linux agents...
The interesting thing about capabilities is that they are not paths: for example, it might show that you have MSBuild for 2019 and 2017, but I have not been able to use those as pipeline variables.
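For the Linux case, a minimal sketch, assuming the agent honors a .env file in its root directory (one VAR=value per line, read at startup) and runs as a systemd service; the agent directory and service name are placeholders:
# register an extra capability as an environment variable of the agent
echo "yarn=true" >> ~/myagent/.env
# restart the agent service so it re-scans its environment
sudo systemctl restart vsts.agent.<org>.<agent-name>.service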

Jenkins Azure VM Agent: Environment variables

I've set up a Jenkins CI/CD on Azure using Azure VM agents to build my android application.
For the build agent I use a template that is an "Advanced Image Configuration" using the following Image Reference:
Canonical, UbuntuServer, 16.04-LTS, latest
In my Initialization Script I installed all the components required to build my application (e.g. the android-sdk). It runs as root, using the sudo command for every operation.
The first time I launched my build it failed, because ANDROID_HOME was not defined. So I decided to add the Environment Injector Plugin to solve this.
My Questions are:
Is it possible to define the ENV within the Initialization script too?
Do I have to configure my agent in a different way?
Will I have to create and configure a VM image and use that instead?
Edit / Solution:
# Append the Android SDK variables to /etc/environment
# (note: sudo does not apply to the redirection itself; this works here because
# the init script already runs as root, otherwise use 'sudo tee -a' instead)
sudo cat >> /etc/environment <<EOL
ANDROID_HOME=/opt/android-sdk
PATH=${PATH}:/opt/android-sdk/tools:/opt/android-sdk/tools/bin:/opt/android-sdk/platform-tools
EOL
This was successful, thanks for all the help :)
Yeah, why not? Just set an environment variable as part of your script.
Not sure what you're asking here; what do you want to achieve?
I don't like images; I prefer an automated way to create a working VM with scripts. But you can certainly do that.
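For the first point, a minimal sketch of what setting the variables inside the initialization script could look like, assuming the SDK path from the solution above and that a profile script works for your build user (this is just one common approach):
# expose ANDROID_HOME to login shells via a profile script
sudo tee /etc/profile.d/android.sh > /dev/null <<'EOL'
export ANDROID_HOME=/opt/android-sdk
export PATH=$PATH:$ANDROID_HOME/tools:$ANDROID_HOME/tools/bin:$ANDROID_HOME/platform-tools
EOL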
