In the Sprint 171 Update of Azure DevOps, Microsoft announced support for Linux/ARM64 hosted agents. To use that as a Microsoft-hosted agent, I need to know the correct label for such an image, but I cannot find it anywhere.
We can add a Bash task and run the script printenv to list all environment variables, then check the variable AGENT_OSARCHITECTURE. As the test result shows, all Ubuntu hosted agent architectures are x64 instead of ARM64. You can raise this issue in the Developer Community; the Azure DevOps product team will check it and give you a detailed explanation.
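A minimal sketch of that check in a YAML pipeline:

steps:
- bash: printenv
  displayName: 'List environment variables (check AGENT_OSARCHITECTURE)'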
As a workaround, we can install a Linux ARM64 self-hosted agent; you can refer to this doc for more details.
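Roughly, the setup looks like this (a sketch; the package name, organization URL, token and pool below are placeholders, and you should take the current Linux ARM64 tarball from the doc/releases page):

mkdir myagent && cd myagent
# extract the downloaded vsts-agent-linux-arm64-<version>.tar.gz package here
tar zxvf ~/Downloads/vsts-agent-linux-arm64-<version>.tar.gz
# configure against your organization with a PAT, then run interactively
./config.sh --url https://dev.azure.com/<your-organization> --auth pat --token <your-pat> --pool Default
./run.sh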
That release announcement is pretty brief. I didn't necessarily take it as meaning hosted agents would be supported, just that you could self-host the agent if you wanted.
If you want to find the details of what is supported and available on the latest images, that is all captured in the GitHub virtual-environments repository. Specifically, you can find the YAML label there.
As of 2020-09, I don't see anything referencing ARM64 available.
ubuntu-20.04
ubuntu-latest or ubuntu-18.04
ubuntu-16.04
macos-latest or macos-10.15
windows-latest or windows-2019
windows-2016
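For reference, the label is what goes into vmImage in the pool section of the pipeline, e.g.:

pool:
  vmImage: 'ubuntu-20.04'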
I have found the solution now.
If you install the QEMU package on the hosted agent, it can emulate ARM devices and ARM applications can be executed. At least for Docker usage, that works well.
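For Docker, the pattern I use looks roughly like this as pipeline steps (a sketch; the second step is just a sanity check that emulation works):

steps:
# register QEMU binfmt handlers so the x64 agent can execute ARM binaries
- script: docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
  displayName: 'Register QEMU emulation'
# run an ARM64 container under emulation (should print aarch64)
- script: docker run --rm arm64v8/alpine uname -m
  displayName: 'Run ARM64 container'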
NOTE: I'm an embedded programmer, so devops stuff is mildly mysterious to me and I might be using the wrong terms.
When creating my Bitbucket self-hosted runners, do I HAVE to use docker-in-docker, or can I take the self-hosted runner container image and add my required tools and licenses to it?
I.e. the docker command it gives me when I create a self-hosted runner references docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner. Can I just create my own Dockerfile image which uses that as a base, add my software packages, environment variables, etc., and invoke that instead of the original one?
Or do I necessarily need to do docker-in-docker?
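To make that concrete, something like this hypothetical Dockerfile is what I have in mind (the packages and the assumption that the base image is Debian/Ubuntu-based are mine, not from the runner docs):

FROM docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner
# add the tools and packages my builds need on top of the stock runner image
RUN apt-get update && apt-get install -y --no-install-recommends build-essential cmake && rm -rf /var/lib/apt/lists/*
# non-secret environment defaults baked into the image
ENV TOOLCHAIN_DIR=/opt/toolchain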
As I mentioned in the beginning, a lot of the devops stuff is just what google/stackexchange tells me to do and thus vaguely cargo-cultish. Getting credentials and other stuff from the self-hosted runner image into my docker-in-docker image (without building credentials into the image) seems like it's more work to me.
Thanks for any insight
In Azure DevOps I'm getting a warning about the removal of the Microsoft-hosted agent that uses Windows 2016 (vs2017-win2016):
https://github.com/actions/virtual-environments/issues/4312
What I want to know in regard to that is: for the pipeline agent jobs where the agent specification is set to Windows 2016, will they automatically start using a newer version of the Windows agent, or stop working completely?
The GitHub issue seems to indicate that they will stop working.
For the ones where the agent job inherits from the pipeline, I believe there is no problem, except that for some reason the task(s) are tied to Windows 2016.
And what about the pipelines defined in the Releases section? When I click Create release, will it only fail after I try to deploy a created release?
I think your pipelines will fail. There was a similar situation when MS just gave a "friendly" reminder about a deprecation:
Check this issue: https://github.com/actions/virtual-environments/issues/4312
Releases are affected by the same issue. You have to update their jobs to use the new agent type:
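For YAML pipelines this is just a change to the image name in the pool, roughly:

pool:
  vmImage: 'windows-2019'  # was 'vs2017-win2016'; 'windows-2022' is the other current option

For classic Release pipelines, the equivalent change is the Agent Specification dropdown on each agent job.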
I wonder if, for simplicity reasons, it is possible to create Azure DevOps self-hosted agents locally, reproducing all capabilities of the cloud-hosted ones. I need to use self-hosted agents, but do not want to create installation and upgrade scripts for each and every application on them.
I would imagine there is something like a VM image with all tools preinstalled; possibly the same as in Azure DevOps. This could potentially have the benefit of 100% compatibility.
What I have found so far:
Azure devops - Preparing self hosted test agents wants to automate agent installation; ansible and silent installers are suggested to solve the issue
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/docker?view=azure-devops suggests to run agents in docker.
https://github.com/microsoft/azure-pipelines-image-generation which has been replaced by https://github.com/actions/virtual-environments and contains packer files, but I cannot find any kind of documentation
How can I create "the perfect Azure DevOps agent"?
I have had the same request as you before, and I agree with your points 2 and 3.
But because I am not very proficient with Docker and would have to maintain the Docker environment frequently, I chose to use Packer to build my image.
You could check the great documents below for some more details:
Build your own Hosted VSTS Agent Cloud: Part 1 - Build
Build your own Hosted VSTS Agent Cloud: Part 2 - Deploy
Looks like we are in the same rabbit hole. I started out with the same question as you posted, and it looks like you are on to the answer: set up a VM locally or on Azure and you are good to go. The links in the answer from "Leo Liu" are probably a good place to start. However, VMs are not Docker, and this doesn't answer your broader question.
If the question is rephrased as "Why doesn't Microsoft provide an easy way to set up self-hosted agents in Docker containers?", I believe the answer lies in their business model. Local VMs need Windows licenses and Azure VMs are billed by the hour...
But, conspiracy theories aside, I don't think there is an easy way to set up a dockerized version of the cloud-hosted agents. It's probably not a very good idea either. Docker containers are meant to be small, and with all of the dependencies on those agents they are anything but small. It's also never going to be "perfect", as you say, since dockerized Windows isn't identical to what's running in their cloud-hosted VMs.
I have started tweaking something that is not perfect, but might work:
set up a Docker agent according to the documentation here
add the essence of the PS scripts corresponding to the packages you need from here
add commands to the Dockerfile to COPY and RUN the scripts
docker build as usual and you should have a container that's a bit more capable, and an agent that reports its capabilities in a similar way to the cloud agents
In an ideal world there would be a repository with all of the tweaked scripts, and a community that kept them updated. In an even more ideal world that would be a Microsoft-hosted repository, but like I said, that's probably unlikely.
Here is some code to get you started. Maybe I'll publish a more finished version somewhere in the future.
init.ps1 with some lines borrowed from here:
Write-Host "Install chocolatey"
$chocoExePath = 'C:\ProgramData\Chocolatey\bin'
if ($($env:Path).ToLower().Contains($($chocoExePath).ToLower())) {
Write-Host "Chocolatey found in PATH, skipping install..."
Exit
}
$systemPath = [Environment]::GetEnvironmentVariable('Path', [System.EnvironmentVariableTarget]::Machine)
$systemPath += ';' + $chocoExePath
[Environment]::SetEnvironmentVariable("PATH", $systemPath, [System.EnvironmentVariableTarget]::Machine)
$userPath = [Environment]::GetEnvironmentVariable('Path', [System.EnvironmentVariableTarget]::User)
if ($userPath) {
$env:Path = $systemPath + ";" + $userPath
}
else {
$env:Path = $systemPath
}
Invoke-Expression ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))
choco feature enable -n allowGlobalConfirmation
Remove-Item -Path $env:ChocolateyInstall\bin\cpack.exe -Force
Import-Module "$env:ChocolateyInstall\helpers\chocolateyInstaller.psm1" -Force
Get-ToolsLocation
A modified Dockerfile from the Microsoft documentation that also runs the script on build:
# Windows Server Core base image, as in the Microsoft docs for a Windows agent container
FROM mcr.microsoft.com/windows/servercore:ltsc2019

# copy the init script into the image and run it at build time
COPY init.ps1 /Windows/Temp/init.ps1
RUN powershell -executionpolicy bypass C:\Windows\Temp\init.ps1

# agent startup script from the Microsoft docs
WORKDIR /azp
COPY start.ps1 .
CMD powershell .\start.ps1
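To build and run it, assuming the start.ps1 from the Microsoft docs (which reads AZP_URL, AZP_TOKEN, AZP_POOL and AZP_AGENT_NAME; the pool and agent names here are just examples):

docker build -t azp-agent:windows .
docker run -e AZP_URL=https://dev.azure.com/<your-organization> -e AZP_TOKEN=<your-pat> -e AZP_POOL=Default -e AZP_AGENT_NAME=docker-agent-1 azp-agent:windows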
I have a C project that I'd like to be tested on multiple different C compilers. I'm currently testing it using Azure Pipelines, but I'm not sure what the best way to add more compilers to my workflow.
Currently, I just use a script to sudo apt install a few other things I need for testing, but Azure warns me not to do this. I also run into a problem where the latest version of TCC isn't available through apt install, so I currently can't test that through my current method.
Is there a proper way to do this? I'm thinking maybe I could specify a VM for Azure to use, onto which I've already installed whatever software I need. I have no idea if this is possible or how to do it though. Looking through the Azure Pipelines documentation hasn't been very helpful either, since I don't know what I'm looking for.
(Please let me know if anything is not clear, I'm not 100% sure of the proper terminology surrounding this.)
EDIT: I basically want to be able to add something like this to my azure-pipelines.yml:
- job:
  displayName: "C TCC Ubuntu"
  pool:
    vmImage: 'ubuntu-latest'
  steps:
  - script: |
      set -e
      cmake -DCMAKE_C_COMPILER=tcc .
      make
    displayName: "Compile"
  - script: ./tests
    displayName: "Run Tests"
except with the vmImage being a custom one onto which I've already installed tcc. In case that's not possible, any other sort of workaround is also appreciated.
Azure DevOps Pipelines has two models for agents: self-hosted or hosted. You could run a self-hosted agent on which you preinstall your toolchain. That brings with it the management of that server and the cost of it sitting idle. To go self-hosted, here are the docs that walk you through the installation.
I would encourage you to use the hosted agents, as they give you the most flexibility and don't limit you to just one operating system to execute your build against, if you so desire. With that said, the common pattern with the hosted agents is to install your tools in a task, like you have said you are doing. The Azure DevOps Extension Marketplace has several examples of people creating extensions to install tools. Here is an example for Rust; notice the installer screenshot.
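In its simplest form that pattern is just an extra step in front of your compile step, e.g. (a sketch; as you noted, tcc from the Ubuntu repositories may lag behind the latest release):

  - script: |
      sudo apt-get update
      sudo apt-get install -y tcc
    displayName: "Install TCC"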
If you don't want to incur the penalty of installing your compiler on every build, you could also leverage the ability of the hosted agents to use a container to build your software. You could then prebuild a container image that has your compiler and other tools installed, and instruct Azure DevOps to use that in the hosted agent to do your compilation. Here is that documentation.
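Roughly, that would look like this for your example job (the image name is a placeholder for your prebuilt image; an image in a private registry would also need to be declared as a container resource with a service connection):

- job:
  displayName: "C TCC Ubuntu (container)"
  pool:
    vmImage: 'ubuntu-latest'
  container: myregistry.azurecr.io/tcc-build:latest
  steps:
  - script: |
      set -e
      cmake -DCMAKE_C_COMPILER=tcc .
      make
      ./tests
    displayName: "Compile and Test"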
As far as I know, Azure DevOps agents are capable of automatically detecting their own capabilities. Based on the documentation, as long as I restart the host once a new piece of software has been installed, the capability should be registered automatically.
What I am having trouble with right now is getting the agent to detect the presence of Yarn on a self-hosted agent on a Windows host. Looking at the PATH environment variable shows the existence of the Yarn executable, but it is not listed as a capability despite the host having been restarted. My current workaround is to manually add Yarn to the capability list and set its value to true.
As a side note, Yarn was installed via Ansible using the win_chocolatey module. The install was successful with no errors.
I am wondering a few things
1) Am I missing something which is causing this issue?
2) Is this an inherent issue with Yarn? If so, is there a way to automate adding Yarn as a capability?
Capabilities for a Windows agent come from the environment variables.
If you want to set a value, you add a line that adds an entry to the machine environment:
[System.Environment]::SetEnvironmentVariable("CAPABILITYNAME", "value", "Machine")
When you start the service, it then picks this up.
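For the Yarn case specifically, something along these lines should work when run as administrator on the agent host (the install path is an assumption about where Chocolatey put yarn.cmd, and the wildcard matches the agent's Windows service name):

# register a "yarn" capability for the agent (value is illustrative)
[System.Environment]::SetEnvironmentVariable("yarn", "C:\Program Files (x86)\Yarn\bin\yarn.cmd", "Machine")
# restart the agent service so it re-reads the machine environment variables
Get-Service "vstsagent*" | Restart-Service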
I am currently trying to do something similar for a set of Linux agents...
The interesting thing about capabilities is that they are not paths. For example, it might show you have MSBuild for 2019 and 2017, but I have not been able to use those as pipeline variables.