As far as I know, Azure DevOps agents can automatically detect their own capabilities. Based on the documentation, as long as I restart the host after a new piece of software has been installed, the capability should be registered automatically.
What I am having trouble with right now is getting the agent to detect the presence of Yarn on a self-hosted Windows agent. The PATH environment variable shows the Yarn executable, but Yarn is not listed as a capability despite restarting the host. My current workaround is to manually add Yarn to the capability list and set its value to true.
As a side note, Yarn was installed via Ansible using the win_chocolatey module. The install was successful with no errors.
I am wondering a few things:
1) Am I missing something which is causing this issue?
2) Is this an inherent issue with Yarn? If this is an inherent issue with Yarn, is there a way to automate the process of manually adding yarn as a capability?
Capabilities for a Windows agent come from the environment variables.
If you want to set a value, add a line that creates an entry on the machine:
[System.Environment]::SetEnvironmentVariable("CAPABILITYNAME", "value", "Machine")
When you restart the service, it then picks this up.
I am currently trying to do something similar for a set of Linux agents...
The interesting thing about capabilities is that they are not paths. For example, an agent might show that it has MSBuild for 2019 and 2017, but I have not been able to use those as pipeline variables.
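Putting the two answers together, a minimal PowerShell sketch of automating the Yarn workaround might look like this. The capability name, the Yarn path, and the `vstsagent*` service-name pattern are all assumptions — check the actual executable path and service name on your host:

```powershell
# Sketch: register a "yarn" capability by creating a machine-level
# environment variable, then restart the agent service so it re-scans
# its capabilities on startup.
# Assumptions: Yarn was installed by Chocolatey at the path below, and
# the agent runs as a Windows service whose name starts with "vstsagent".
[System.Environment]::SetEnvironmentVariable("yarn", "C:\ProgramData\chocolatey\bin\yarn.exe", "Machine")
Get-Service "vstsagent*" | Restart-Service
```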
Related
I am new to Azure DevOps and trying to create my first Azure pipeline. I have an ASP.NET MVC project, and there are a few NuGet packages that need to be restored before the MSBuild step.
Unfortunately, the NuGet restore is failing with the following error:
The pipeline is not valid. Job Job_1: Step 'NuGetCommand' references task 'NuGetCommand' at version '2.194.0' contains an execution handler that relies on NodeJS version '6' which is restricted by your administrator.
NodeJS 6 came disabled out of the box so we are not going to enable it.
My Questions:
Is there an alternative to NuGet restore that does not use NodeJS?
Is there a way to update the NodeJS6 to a higher version?
Update 23-Nov-2021
I have found a workaround for the time being. I am using a custom PowerShell script to restore the NuGet packages and build the Visual Studio project:
$msBuildExe = 'C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\MSBuild\Current\Bin\MSBuild.exe'
Write-Host "Restoring NuGet packages" -foregroundcolor green
& "$($msBuildExe)" "$($path)" /p:Configuration=Release /p:platform=x86 /t:restore
Note: $path here is the path to my .csproj file
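If it helps, a script like this can be wired into the pipeline with the built-in PowerShell task. A sketch — the file name `build.ps1` is a placeholder for wherever the script lives in the repo:

```yaml
# Sketch: run the custom restore/build PowerShell script as a pipeline step.
# "build.ps1" is a hypothetical path; adjust to your repo layout.
- task: PowerShell@2
  inputs:
    targetType: 'filePath'
    filePath: 'build.ps1'
  displayName: 'Restore and build via MSBuild'
```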
Apparently, other people are hitting the same issue, and it is just a matter of time before the task is updated by the open-source community.
Here are some similar issues being faced in other tasks as well:
https://github.com/microsoft/azure-pipelines-tasks/issues/15526
https://github.com/microsoft/azure-pipelines-tasks/issues/15511
https://github.com/microsoft/azure-pipelines-tasks/issues/15516
https://github.com/microsoft/azure-pipelines-tasks/issues/15525
It's Azure DevOps' NuGetCommand task that uses NodeJS, not NuGet itself. Therefore, you need to find a way to restore without using the NuGetCommand task.
Idea 1: use the DotNetCoreCLI task instead. However, this probably won't work for you, since you said your project is ASP.NET MVC rather than ASP.NET Core. It also appears to need NodeJS to run.
Idea 2: Use MSBuild restore. You can test on your local machine whether or not this works by clearing your global packages folder, or temporarily configuring NuGet to use a different path, and then running msbuild -t:restore My.sln from a Developer PowerShell For Visual Studio prompt. If your project uses packages.config, rather than PackageReference, you'll need to also pass -p:RestorePackagesConfig=true (although maybe this is currently broken). I'm not an expert on Azure Pipelines tasks, so I don't know what it means that this task defines both PowerShell and Node execution entry points, but maybe it means it will work even if your CI agent doesn't allow NodeJS.
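Idea 2 could be wired into the pipeline roughly like this — a sketch only; `My.sln` is a placeholder, and whether the MSBuild@1 task runs without NodeJS on your agent is exactly the open question above:

```yaml
# Sketch: restore via MSBuild instead of the NuGetCommand task.
# "My.sln" is a placeholder. Add /p:RestorePackagesConfig=true only if
# the project uses packages.config rather than PackageReference.
- task: MSBuild@1
  inputs:
    solution: 'My.sln'
    msbuildArguments: '/t:restore'
```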
Idea 3: Don't use any of the built-in tasks; just use - script: or - task: PowerShell@2. Even that is a little questionable, since the powershell task also defines a Node execution entry point. I'm guessing it will work, but I don't have access to a CI agent where NodeJS is forbidden, so I couldn't test even if I wanted to. Anyway, if this works, you can run MSBuild yourself (though it might also be your responsibility to find msbuild.exe if it's not on the path), or you can download nuget.exe yourself and execute it in your script. The point is, if you can get Azure Pipelines' script task working, you can run any script and do everything you need yourself.
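The download-nuget.exe variant of Idea 3 might look like this — a sketch; `My.sln` is a placeholder, and the URL is NuGet's published "latest" download link for the Windows command-line client:

```yaml
# Sketch: bypass the built-in NuGet tasks entirely and drive nuget.exe
# from a plain script step.
- powershell: |
    Invoke-WebRequest https://dist.nuget.org/win-x86-commandline/latest/nuget.exe -OutFile nuget.exe
    .\nuget.exe restore My.sln
  displayName: 'Restore with nuget.exe'
```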
Idea 4: Use Microsoft Hosted agents. They have documented all the software they pre-install on the machines, which includes Node JS. Downside is that once you exceed the free quota it costs money, and I've worked for companies where it's easier to get money to buy hardware once-off, and pretend that maintenance of the server is free, even though it reduces team productivity, rather than pay for a monthly service. So, I'll totally understand if this is not an option for you.
Idea 5: Talk to whoever maintains your CI agents and convince them to allow & install NodeJS. It's clearly a fundamental part of Azure Pipelines. The tasks are open source on github, and you can see that pretty much all of them use NodeJS to orchestrate whatever work it does. Frankly, I thought the agent software itself was a NodeJS application, so I'm surprised that it runs without NodeJS.
I have a C project that I'd like to test on multiple different C compilers. I'm currently testing it using Azure Pipelines, but I'm not sure of the best way to add more compilers to my workflow.
Currently, I just use a script to sudo apt install the few other things I need for testing, but Azure warns me not to do this. I also run into the problem that the latest version of TCC isn't available through apt install, so I currently can't test against it with my current method.
Is there a proper way to do this? I'm thinking maybe of specifying a VM for Azure to use, onto which I've already installed whatever software I need. I have no idea if this is possible or how to do it, though. Looking through the Azure Pipelines documentation hasn't been very helpful either, since I don't know what I'm looking for.
(Please let me know if anything is not clear, I'm not 100% sure of the proper terminology surrounding this.)
EDIT: I basically want to be able to add something like this to my azure-pipelines.yml:
- job:
  displayName: "C TCC Ubuntu"
  pool:
    vmImage: 'ubuntu-latest'
  steps:
  - script: |
      set -e
      cmake -DCMAKE_C_COMPILER=tcc .
      make
    displayName: "Compile"
  - script: ./tests
    displayName: "Run Tests"
except with the vmImage being a custom one onto which I've already installed tcc. In case that's not possible, any other sort of workaround is also appreciated.
Azure DevOps Pipelines has two models for agents: self-hosted and Microsoft-hosted. You could run a self-hosted agent on which you preinstall your toolchain; that brings with it the management of that server and the cost of it sitting idle. To go self-hosted, here are the docs that walk you through the installation.
I would encourage you to use the hosted agents, as they give you the most flexibility and don't limit you to just one operating system to build against, if you so desire. That said, the common pattern with the hosted agents is to install your tools in a task, as you said you are doing. The Azure DevOps extension marketplace has several examples of people creating extensions to install tools. Here is an example for Rust; notice the installer screenshot.
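For the TCC case in the question, the install-in-a-task pattern might look like the sketch below. Whether the `tcc` package in the Ubuntu archives is recent enough for you is an assumption — the question notes the latest version may not be available via apt:

```yaml
# Sketch: install the compiler at the start of the job on a hosted agent.
steps:
- script: |
    sudo apt-get update
    sudo apt-get install -y tcc
  displayName: 'Install TCC'
```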
If you don't want to incur the penalty of installing your compiler on every build, you could also leverage the ability of the hosted agents to use a container to build your software. You could then prebuild a container image that has your compiler and other tools installed and instruct Azure DevOps to use that in the hosted agent to do your compilation. Here is that documentation.
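A container-based job could be sketched like this — the image name is a hypothetical placeholder for a prebuilt image that already contains tcc, cmake, and make:

```yaml
# Sketch: run the whole job inside a prebuilt image instead of installing
# tools on every build. "myregistry.azurecr.io/tcc-build:latest" is a
# hypothetical image you would build and push yourself.
pool:
  vmImage: 'ubuntu-latest'
container: myregistry.azurecr.io/tcc-build:latest
steps:
- script: |
    cmake -DCMAKE_C_COMPILER=tcc .
    make
  displayName: 'Compile'
```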
I've set up a Jenkins CI/CD on Azure using Azure VM agents to build my android application.
For the build agent I use a template that is an "Advanced Image Configuration" using the following Image Reference:
Canonical, UbuntuServer, 16.04-LTS, latest
In my initialization script I installed all the components required to build my application (e.g. the android-sdk). The script runs as root, using the sudo command for every operation.
The first time I launched my build it failed, because ANDROID_HOME was not defined. So I decided to add the Environment Injector Plugin to solve this.
My Questions are:
Is it possible to define the ENV within the Initialization script too?
Do I have to configure my agent in a different way?
Will I have to create and configure a VM image and use that instead?
Edit / Solution:
# tee runs under sudo, so the append works even when the calling shell is
# not root (sudo does not apply to the shell's own >> redirection)
sudo tee -a /etc/environment > /dev/null <<EOL
ANDROID_HOME=/opt/android-sdk
PATH=${PATH}:/opt/android-sdk/tools:/opt/android-sdk/tools/bin:/opt/android-sdk/platform-tools
EOL
This was successful thanks for all the help :)
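For anyone adapting this: the append-and-verify pattern can be tried safely against a temporary file first. A minimal sketch, using the ANDROID_HOME value from the solution above (note that /etc/environment is parsed by pam_env at login, so a fresh session is needed before the variable is visible):

```shell
#!/bin/sh
# Sketch: exercise the heredoc-append pattern against a temp file instead
# of /etc/environment, so it can run without root.
envfile=$(mktemp)
cat >> "$envfile" <<EOL
ANDROID_HOME=/opt/android-sdk
EOL
# confirm the entry landed in the file
grep '^ANDROID_HOME=' "$envfile"
```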
Yeah, why not? Just set an environment variable as part of your script.
Not sure what you're asking here; what do you want to achieve?
I don't like images; I prefer an automated way to create a working VM with scripts. But you can certainly do that.
I'm evaluating the self-hosted GitLab Community Edition. Everything is fine with the product, except that anyone who has access to pipelines (master or admin) can run a deployment to production.
I looked at their issue board, and this is a feature that will not be coming to GitLab anytime soon.
See https://gitlab.com/gitlab-org/gitlab-ce/issues/20261
For now, I plan on deploying my spring boot applications using the following strategy.
A separate runner installed on the production server
There will be an install script with instructions somewhere on the production server
The gitlab-runner user will only have permissions to run that specific script (somehow)
Have validations in the script for GITLAB_USER_NAME variable.
But I also see that there are disadvantages in this approach.
GITLAB_USER_NAME is an environment variable that can easily be overridden, compromising the validation.
Things get complicated when introducing new prod servers.
Too many changes beyond .gitlab-ci.yml... CI/CD was supposed to be simple, not painful...
Do I have any alternate approaches or hacks for this...?
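One partial mitigation using features that do exist in GitLab CE: register the production runner with a dedicated tag (and lock it to the project), then gate the deploy job on a protected branch and make it manual. A sketch of the .gitlab-ci.yml side — the script path and tag name are placeholders, and the permission side still depends on protected-branch settings in the UI:

```yaml
# Sketch: restrict production deploys with existing GitLab features --
# a tagged, project-locked runner plus protected branches and a manual gate.
# "/opt/deploy/install.sh" and the "production" tag are hypothetical.
deploy_production:
  stage: deploy
  script:
    - /opt/deploy/install.sh
  tags:
    - production     # only the runner registered on the prod server matches
  only:
    - master         # pair with protected-branch permissions in the UI
  when: manual       # require an explicit click to deploy
```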
I've just implemented Puppet directory environments on our puppet masters, and I'm looking for a way to disable specific environments.
I have a production and a staging environment, and what I want to avoid is someone accidentally running Puppet with the staging environment on a production server, and vice versa. There is one puppetmaster in each of those environments, so I want to disable the "wrong" one on each. Both use the same repo, which includes all the code.
I'm on Puppet 3.8.7, if that's relevant, but since we're planning to upgrade soon, a solution that works for some other version is also welcome.