Installing compilers on Azure Pipelines

I have a C project that I'd like to test with multiple different C compilers. I'm currently testing it using Azure Pipelines, but I'm not sure of the best way to add more compilers to my workflow.
Currently, I just use a script to sudo apt install a few other things I need for testing, but Azure warns me not to do this. I also run into the problem that the latest version of TCC isn't available through apt install, so I can't test against it with my current method.
Is there a proper way to do this? I'm thinking maybe I could specify a VM for Azure to use, onto which I've already installed whatever software I need. I have no idea if this is possible or how to do it, though. Looking through the Azure Pipelines documentation hasn't been very helpful either, since I don't know what I'm looking for.
(Please let me know if anything is not clear, I'm not 100% sure of the proper terminology surrounding this.)
EDIT: I basically want to be able to add something like this to my azure-pipelines.yml:
- job:
  displayName: "C TCC Ubuntu"
  pool:
    vmImage: 'ubuntu-latest'
  steps:
    - script: |
        set -e
        cmake -DCMAKE_C_COMPILER=tcc .
        make
      displayName: "Compile"
    - script: ./tests
      displayName: "Run Tests"
except with the vmImage being a custom one onto which I've already installed tcc. In case that's not possible, any other sort of work-around is also appreciated.

Azure DevOps Pipelines has two models for agents: self-hosted or Microsoft-hosted. You could run a self-hosted agent onto which you preinstall your toolchain. That brings with it the management of that server and the cost of it sitting idle. To go self-hosted, here are the docs that walk you through the installation.
I would encourage you to use the hosted agents, as they give you the most flexibility and don't limit you to just one operating system to build against, if you so desire. With that said, the common pattern with the hosted agents is to install your tools in a task, like you said you are already doing. The Azure DevOps Extension Marketplace has several examples of people creating extensions to install tools. Here is an example for Rust; notice the installer screenshot.
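For the original question, that pattern is just an extra step ahead of the compile step, for example (a minimal sketch, assuming the tcc package in the Ubuntu feeds is recent enough for your tests):
    - script: |
        sudo apt-get update
        sudo apt-get install -y tcc
      displayName: "Install TCC"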
If you don't want to incur the penalty of installing your compiler on every build, you can also leverage the hosted agents' ability to build your software inside a container. You could prebuild a container image that has your compiler and other tools installed and instruct Azure DevOps to use it in the hosted agent to do your compilation. Here is that documentation.
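A minimal sketch of that container approach, where my-registry/tcc-build is a hypothetical prebuilt image that already contains tcc, cmake and make:
- job: tcc_container
  displayName: "C TCC Ubuntu (container)"
  pool:
    vmImage: 'ubuntu-latest'
  container: 'my-registry/tcc-build:latest'
  steps:
    - script: |
        set -e
        cmake -DCMAKE_C_COMPILER=tcc .
        make
        ./tests
      displayName: "Compile and test"
A public Docker Hub image can be referenced by name as above; an image in a private registry additionally needs a container resource with a Docker registry service connection.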

Related

How to restore NuGet package in Azure Pipeline?

I am new to Azure DevOps and trying to create my first Azure pipeline. I have an ASP.NET MVC project, and there are a few NuGet packages that need to be restored before the MSBuild step.
Unfortunately, the NuGet restore is failing with the following error:
The pipeline is not valid. Job Job_1: Step 'NuGetCommand' references
task 'NuGetCommand' at version '2.194.0' contains an execution handler
that relies on NodeJS version '6' which is restricted by your
administrator.
NodeJS 6 came disabled out of the box so we are not going to enable it.
My Questions:
Is there an alternative to NuGet restore that does not use NodeJS?
Is there a way to update the NodeJS6 to a higher version?
update 23-Nov-2021
I have found a workaround for the time being. I am using a custom PowerShell script to restore the NuGet packages and build the Visual Studio project:
$msBuildExe = 'C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\MSBuild\Current\Bin\MSBuild.exe'
Write-Host "Restoring NuGet packages" -foregroundcolor green
& "$($msBuildExe)" "$($path)" /p:Configuration=Release /p:platform=x86 /t:restore
Note: $path here is the path to my .csproj file
Apparently, other people are also hitting the same issue, and it is just a matter of time before the task is updated by the open-source community.
Here are some similar issues being faced in other tasks as well:
https://github.com/microsoft/azure-pipelines-tasks/issues/15526
https://github.com/microsoft/azure-pipelines-tasks/issues/15511
https://github.com/microsoft/azure-pipelines-tasks/issues/15516
https://github.com/microsoft/azure-pipelines-tasks/issues/15525
It's Azure DevOps' NuGetCommand task that uses NodeJS, not NuGet itself. Therefore, the fix is to find a way to restore without using Azure DevOps' NuGetCommand task.
Idea 1: use the DotNetCoreCLI task instead. However, this probably won't work for you, since you said your project is ASP.NET MVC rather than ASP.NET Core. It also appears to need NodeJS to run.
Idea 2: Use MSBuild restore. You can test whether this works on your local machine by clearing your global packages folder, or temporarily configuring NuGet to use a different path, and then running msbuild -t:restore My.sln from a Developer PowerShell for Visual Studio prompt. If your project uses packages.config rather than PackageReference, you'll also need to pass -p:RestorePackagesConfig=true (although maybe this is currently broken). I'm not an expert on Azure Pipelines tasks, so I don't know what it means that this task defines both PowerShell and Node execution entry points, but maybe it means it will work even if your CI agent doesn't allow NodeJS.
Idea 3: Don't use any of the built-in tasks; just use - script: or - task: PowerShell@2, although even that is a little questionable, since even the powershell task defines a Node execution entry point. I'm guessing it will work, but I don't have access to a CI agent where NodeJS is forbidden, so I couldn't test even if I wanted to. Anyway, if this works, then you can run MSBuild yourself (but it might also be your responsibility to find msbuild.exe if it's not on the PATH). Or you can download nuget.exe yourself and execute it in your script. The point is, if you can get Azure Pipelines' script task working, you can run any script and do everything you need yourself.
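For Ideas 2 and 3 combined, a minimal sketch of what that might look like as a pipeline step, assuming the Visual Studio 2019 Enterprise path from the workaround above and a hypothetical MySolution.sln:
steps:
  - script: |
      rem RestorePackagesConfig is only needed for packages.config projects (see Idea 2)
      "C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\MSBuild\Current\Bin\MSBuild.exe" MySolution.sln /t:restore /p:RestorePackagesConfig=true
      "C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\MSBuild\Current\Bin\MSBuild.exe" MySolution.sln /p:Configuration=Release
    displayName: "Restore and build without NuGetCommand"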
Idea 4: Use Microsoft-hosted agents. They have documented all the software they pre-install on the machines, which includes NodeJS. The downside is that once you exceed the free quota it costs money, and I've worked for companies where it's easier to get money to buy hardware once off (and pretend that maintaining the server is free, even though it reduces team productivity) than to pay for a monthly service. So I'll totally understand if this is not an option for you.
Idea 5: Talk to whoever maintains your CI agents and convince them to allow and install NodeJS. It's clearly a fundamental part of Azure Pipelines. The tasks are open source on GitHub, and you can see that pretty much all of them use NodeJS to orchestrate whatever work they do. Frankly, I thought the agent software itself was a NodeJS application, so I'm surprised that it runs without NodeJS.

Azure DevOps Self-Hosted Agent - How to replicate cloud-hosted agents?

I wonder if, for simplicity's sake, it is possible to create Azure DevOps self-hosted agents locally that reproduce all the capabilities of the cloud-hosted ones. I need to use self-hosted agents, but I do not want to create installation and upgrade scripts for each and every application on them.
I would imagine there is something like a VM image with all tools preinstalled; possibly the same as in Azure DevOps. This could potentially have the benefit of 100% compatibility.
What I have found so far:
Azure devops - Preparing self hosted test agents wants to automate agent installation; Ansible and silent installers are suggested to solve the issue.
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/docker?view=azure-devops suggests running agents in Docker.
https://github.com/microsoft/azure-pipelines-image-generation, which has been replaced by https://github.com/actions/virtual-environments, contains Packer files, but I cannot find any kind of documentation.
How can I create "the perfect Azure DevOps agent"?
I have had the same requirement before, and I agree with your points 2 and 3.
But because I am not very proficient with Docker and would need to maintain my Docker environment frequently, I chose to use Packer to build my image.
You can check the documents below for more details:
Build your own Hosted VSTS Agent Cloud: Part 1 - Build
Build your own Hosted VSTS Agent Cloud: Part 2 - Deploy
Looks like we are in the same rabbit hole. I started out with the same question as you posted, and it looks like you are on to the answer: set up a VM locally or on Azure and you are good to go. The links in the answer from "Leo Liu" are probably a good place to start. However, VMs are not Docker, and this doesn't answer your broader question.
If the question is rephrased as "Why doesn't Microsoft provide an easy way to set up self-hosted agents in Docker containers?", I believe the answer lies in their business model. Local VMs need Windows licenses, and Azure VMs are billed by the hour...
But, conspiracy theories aside, I don't think there is an easy way to set up a dockerised version of the cloud-hosted agents. It's probably not a very good idea either. Docker containers are meant to be small, and with all of the dependencies on those agents they are anything but small. It's also never going to be "perfect", as you say, since dockerized Windows isn't identical to what's running in their cloud-hosted VMs.
I have started tweaking something that is not perfect but might work:
set up a Docker agent according to the documentation here
add the essence of the PowerShell scripts corresponding to the packages you need from here
add commands to the Dockerfile to COPY and RUN the scripts
docker build as usual and you should have a container that's a bit more capable, and an agent that reports its capabilities in a similar way to the cloud agents (see the build/run sketch after the Dockerfile below)
In an ideal world there would be a repository with all of the tweaked scripts, and a community that kept them updated. In an even more ideal world it would be a Microsoft-hosted repository, but like I said, that's probably unlikely.
Here is some code to get you started. Maybe I'll publish a more finished version somewhere in the future.
init.ps1 with some lines borrowed from here:
Write-Host "Install chocolatey"
$chocoExePath = 'C:\ProgramData\Chocolatey\bin'
if ($($env:Path).ToLower().Contains($($chocoExePath).ToLower())) {
Write-Host "Chocolatey found in PATH, skipping install..."
Exit
}
$systemPath = [Environment]::GetEnvironmentVariable('Path', [System.EnvironmentVariableTarget]::Machine)
$systemPath += ';' + $chocoExePath
[Environment]::SetEnvironmentVariable("PATH", $systemPath, [System.EnvironmentVariableTarget]::Machine)
$userPath = [Environment]::GetEnvironmentVariable('Path', [System.EnvironmentVariableTarget]::User)
if ($userPath) {
$env:Path = $systemPath + ";" + $userPath
}
else {
$env:Path = $systemPath
}
Invoke-Expression ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))
choco feature enable -n allowGlobalConfirmation
Remove-Item -Path $env:ChocolateyInstall\bin\cpack.exe -Force
Import-Module "$env:ChocolateyInstall\helpers\chocolateyInstaller.psm1" -Force
Get-ToolsLocation
A modified Dockerfile from the Microsoft documentation, which also runs the script at build time:
FROM mcr.microsoft.com/windows/servercore:ltsc2019
COPY init.ps1 /Windows/Temp/init.ps1
RUN powershell -executionpolicy bypass C:\Windows\Temp\init.ps1
WORKDIR /azp
COPY start.ps1 .
CMD powershell .\start.ps1
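A minimal sketch of building and starting the resulting agent image; AZP_URL, AZP_TOKEN and AZP_AGENT_NAME are the environment variables the start.ps1 from the Microsoft documentation expects, and the values shown are placeholders:
docker build -t dockeragent:tweaked .
docker run -e AZP_URL=https://dev.azure.com/<yourOrganization> -e AZP_TOKEN=<PersonalAccessToken> -e AZP_AGENT_NAME=tweaked-agent dockeragent:tweaked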

ARM64 label of Azure Devops hosted agent

In the Sprint 171 Update of Azure DevOps, Microsoft announced support for Linux/ARM64 agents. To be able to use that as a Microsoft-hosted agent, I need to know the correct label for such an image. I cannot find it anywhere.
We can add a Bash task and run printenv to list all environment variables, then check the variable AGENT_OSARCHITECTURE. In my test, all Ubuntu hosted agent architectures are x64, not ARM64. You can raise this issue on the Developer Community, and the Azure DevOps product team will check it and give you a detailed explanation.
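For reference, a minimal sketch of that check:
steps:
  - bash: |
      printenv | sort
      echo "Agent architecture: $AGENT_OSARCHITECTURE"
    displayName: "Inspect agent environment"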
As a workaround, we can install a Linux ARM64 self-hosted agent; you can refer to this doc for more details.
That release announcement is pretty brief. I didn't necessarily take it as meaning hosted agents would be supported, just that you could self-host the agent on ARM64 if you wanted.
If you want to find the details of what is supported and available on the latest images, that is all captured on the GitHub page for virtual environments. Specifically, you can find the YAML label there.
As of 2020-09, I don't see anything referencing ARM64 available.
ubuntu-20.04, ubuntu-latest or ubuntu-18.04, ubuntu-16.04, macos-latest or macos-10.15, windows-latest or windows-2019, windows-2016
I have now found a solution.
If you install the QEMU package on the hosted agent, it can emulate ARM devices, and ARM applications can be executed. At least for use with Docker, that works well.
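A minimal sketch of that approach, using the multiarch/qemu-user-static image to register QEMU's binfmt handlers (the specific image and the arm64v8/ubuntu test image are assumptions on my part, not something from the announcement):
steps:
  - script: |
      set -e
      # Register QEMU binfmt handlers so the x64 hosted agent can execute ARM binaries
      docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
      # Any linux/arm64 image should now run (emulated); this prints "aarch64"
      docker run --rm arm64v8/ubuntu uname -m
    displayName: "Run ARM64 workload via QEMU"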

When using CI/CD in VSTS, should the publish step be part of the build or the "release"?

I have a VSTS project and I'm setting up CI/CD at the moment. All fine, but I seem to have two options for the publishing step:
Option 1: it's a task as part of the CI Build, e.g. see build step 3 here:
https://medium.com/#flu.lund/setting-up-a-ci-pipeline-for-deploying-your-angular-application-to-azure-using-visual-studio-team-f686c8f190cf
Option 2: The build phase produces artifacts, and as part of a separate release phase these artifacts are published, see here:
https://learn.microsoft.com/en-us/vsts/build-release/actions/ci-cd-part-1?view=vsts
Both options seem well supported in the MS documentation, is one of these options better than the other? Or is it a case of pros & cons for each and it depends on circumstances, etc?
Thanks!
You should definitely use Option 2. Your build should not make changes to your environments whatsoever; that is strictly what a "release" is for. The link you have under Option 1 is the wrong way to do it: a build should be just that, compiling code and producing artifacts, not actually deploying code.
When you mesh builds and releases together, you make it very difficult to debug build issues. Since your code is always being released, you really have to disable the "deploy" step to get any idea of what was built before you deployed.
Also, the nice thing about creating an artifact is that you have a deployable package, and if in the future you need to roll back to a previous working version, you have it ready to go. Using the "build only" strategy, you'd have to revert your code or make unnecessary backups to achieve this.
I think you'll find any new Microsoft documentation pointing you toward this approach, and VSTS is completely set up for it. Even the "Configure Continuous Delivery in Azure..." feature in Visual Studio 2017 will create both a build and a release.
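A minimal sketch of what the build side of Option 2 can look like in YAML; the solution wildcard and the /p:OutDir argument are assumptions about how your output ends up in the staging directory, and the release then picks up the "drop" artifact and does the actual deployment:
steps:
  - task: VSBuild@1
    inputs:
      solution: '**/*.sln'
      configuration: 'Release'
      msbuildArgs: '/p:OutDir=$(Build.ArtifactStagingDirectory)\'   # one way to collect build output
  - task: PublishBuildArtifacts@1
    inputs:
      PathtoPublish: '$(Build.ArtifactStagingDirectory)'
      ArtifactName: 'drop'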
Almost all build tasks are the same as release tasks, so you can deploy the app after building the project in the build process.
Also, there are many differences between a release and a build; for example, a release supports multiple environments and deployment group phases.
So which way is better depends on your specific requirements; for example, if your build > deploy > other process is simple, you can do it all in the build.
Regarding the Publish Artifact task: it is used to publish files to the VSTS server or another location (e.g. a shared folder), which can then be consumed in a release as an artifact (click Add artifact > Build in the release definition). You can also download the artifacts for troubleshooting; for example, if you are using a Hosted Agent that you can't access but want to inspect some files (e.g. build output), you can add a Publish Artifact task to publish them to the VSTS server and then download them (open the build result > Artifacts).

How to deploy Go program from windows to CentOS server

I have a Go package running on Windows, and it is working fine, but now I'm at the stage where I would like to test it on a production CentOS 6.5 server.
What is the best practice to deploy this from Windows to CentOS?
Would I have to use my Git repo to distribute to the Linux operating system, compile, and then deploy the binary to the server?
Also, I have multiple files, so I would imagine go build *.go would suffice, or are there better options for compilation?
What is the best practice to deploy this from Windows to CentOS?
As far as best practices go, I would recommend using continuous integration. You can set up Jenkins, or there are some cloud options out there: codeship.io, travis-ci.org, drone.io, wercker.com, ... Some of them have free plans available.
Basically, you'd commit your code to Git and push it out to GitHub (or Bitbucket if you want free private repos). The continuous integration server will be notified whenever you push changes, and will build, test and create a release tar archive of your project. You can then take the resulting tar and download it to your CentOS box. On 6.5 you'll need to create an init.d script to keep your program up and running. You can see an example here (the System V script).
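A minimal sketch of the build and packaging commands such a CI step might run (the binary name myapp and a single main package in the repository root are assumptions); Go cross-compiles, so the same commands also work when targeting your CentOS box from another platform:
# Cross-compile a static linux/amd64 binary for the CentOS server
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o myapp .
# Package it as the release tar archive mentioned above
tar -czf myapp.tar.gz myapp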
CentOS 7 uses systemd now, which would be slightly easier to set up.
Taking this one step further, it's also possible to set up continuous deployment, in which the download, extraction and installation are also automated. Depending on your project, it may or may not make sense to set up continuous deployment (auto-pushing to production might be a little too automatic). You can find an example in wercker here.
Although there is an up-front cost to setting up continuous integration, if this is a project that other people will contribute to, or one that you intend to work on long-term, the cost will definitely be worth it. (Future you will be grateful when you come back to this project six months from now, change one line of code, and don't have to remember all the manual steps it took to deploy.)