Injection of Golang environment variables into Azure Pipeline

I am currently migrating some build components to Azure Pipelines and am attempting to set some environment variables for all Golang related processes. I wish to execute the following command within the pipeline:
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build [...]
When utilizing the provided Golang integrations, it is easy to add arguments for Go-related processes, but setting an environment variable for all (or for each individual) Go process does not seem possible. Neither GoTool nor the default Go task seems to support it, and performing a script task with a shell execution in it does not seem to be supported either.
I have also tried adding an environment variable to the entire pipelines process that defines the desired flags, but these appear to be ignored by the Go task provided by Azure Pipelines itself.
Would there be a way to add these flags to each (or a single) Go process, such as in the following code block (in which the flags input line was made up by me)?
- task: Go@0
  inputs:
    flags: 'CGO_ENABLED=0 GOOS=linux GOARCH=amd64'
    command: 'build'
    arguments: '[...]'
    workingDirectory: '$(System.DefaultWorkingDirectory)'
  displayName: 'Build the application'

Based on the information I was able to find and many hours of debugging, I ended up with a workaround: I run the go commands in a CmdLine@2 task instead. This works because of how GoTool@0 sets up the pipeline environment (it puts the requested Go version on the PATH for subsequent tasks).
Thus, the code snippet below worked for my purposes.
steps:
- task: GoTool@0
  inputs:
    version: '1.19.0'
- task: CmdLine@2
  inputs:
    script: 'CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build'
    workingDirectory: '$(System.DefaultWorkingDirectory)'
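As an alternative to prefixing the command inline, script-type steps also accept an env mapping at the step level, so the same workaround could be written roughly like this (a sketch; I have not verified this against the Go@0 task itself):
steps:
- task: GoTool@0
  inputs:
    version: '1.19.0'
- task: CmdLine@2
  displayName: 'Build the application'
  env:
    # environment variables passed to the script process
    CGO_ENABLED: '0'
    GOOS: linux
    GOARCH: amd64
  inputs:
    script: 'go build'
    workingDirectory: '$(System.DefaultWorkingDirectory)'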

Related

Azure DevOps Pipelines - .NET Core Solution with Multiple Projects - How to do a single build and multiple project publishes?

Currently, I have a pipeline set up for this .NET Core solution, which is composed of several project apps, but I'm building each "deliverable project" independently (REST APIs, gRPC APIs, ASP.NET Core MVC apps and so on): one dotnet restore, dotnet build, dotnet test, dotnet publish, docker build and docker push task for each executable in the solution. It works.
Each "project sub-pipeline" is an "Agent Job" inside of the main pipeline. It's required by the client that I build the projects with the win-x64 RID and pack each App/API in Windows Containers (dotnet CLI doesn't allow passing a RID to build a whole solution - error NETSDK1134).
The question is: How could I do a single restore/build/test cycle for the whole solution (to improve build time) and only have the publish steps separated for each project?
I'm aware of this question, but I think it is a different issue. Could you point me to some docs where I could find more info related to my case?
I researched a bit about MSBuild Publish Profiles and custom MSBuild projects but didn't manage to find something similar.
Thank you in advance.
If you want to do a single restore/build/test, you need to publish your whole source folder as an artifact; that artifact will then contain your code and the built assemblies. You can reuse this artifact in the later steps. It could look like:
trigger:
- master

pool:
  vmImage: 'windows-latest'

variables:
  buildConfiguration: 'Release'

steps:
- task: DotNetCoreCLI@2
  inputs:
    command: 'restore'
    feedsToUse: 'select'
    vstsFeed: 'my-vsts-feed' # A series of numbers and letters
- task: DotNetCoreCLI@2
  inputs:
    command: 'build'
    arguments: '--configuration $(buildConfiguration)'
  displayName: 'dotnet build $(buildConfiguration)'
- task: DotNetCoreCLI@2
  inputs:
    command: test
    projects: '**/*Tests/*.csproj'
    arguments: '--configuration $(buildConfiguration)'
- task: PublishPipelineArtifact@1
  inputs:
    targetPath: '$(System.DefaultWorkingDirectory)'
    artifactName: 'solution'
Then, in the deployment job, you will get the whole solution with restored packages and built code, ready for further steps.
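A minimal sketch of such a follow-up job, assuming the 'solution' artifact from above; the job name, project path and output folder are hypothetical placeholders:
- job: PublishWebApi
  dependsOn: Build   # assumes the build steps above live in a job named Build
  steps:
  - task: DownloadPipelineArtifact@2
    inputs:
      buildType: 'current'
      artifactName: 'solution'
      targetPath: '$(Pipeline.Workspace)/solution'
  - task: DotNetCoreCLI@2
    displayName: 'dotnet publish WebApi'
    inputs:
      command: 'publish'
      publishWebProjects: false
      projects: '$(Pipeline.Workspace)/solution/src/WebApi/WebApi.csproj'   # hypothetical path inside the artifact
      # --no-build relies on the built assemblies being present in the artifact, as described above
      arguments: '--configuration $(buildConfiguration) --no-build --output $(Build.ArtifactStagingDirectory)/WebApi'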
Regarding the .NET RID, as far as I can see you have two options:
Set a single RID in the <RuntimeIdentifier> element of your project file.
Pass it with --runtime to dotnet build (see the sketch below).
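A hedged example of the second option, with a placeholder project path:
- script: dotnet build src/MyApi/MyApi.csproj --configuration $(buildConfiguration) --runtime win-x64
  displayName: 'Build with an explicit RID (win-x64)'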

How to configure condition based pipeline | Azure Pipeline

I have come across a scenario where I want to build source code depending on the source directory.
I have 2 languages in the same git repository (dotnet & Python).
I want to build the source code using a single Azure Pipeline:
if the commit touches both (dotnet & Python), all the tasks should be executed,
and if the commit only touches a specific directory, only the respective language should be built.
Please let me know how I can achieve this using condition, or if there are any other alternatives.
Below is my azure-pipelines.yml
# Trigger and pool name configuration
variables:
  name: files
steps:
- script: $(files) = git diff-tree --no-commit-id --name-only -r $(Build.SourceVersion)
  displayName: 'display Last Committed Files'
  ## Here I am getting changed files
  ## Dotnet/Server.cs
  ## Py/Hello.py
- task: PythonScript@0 ## It should only get called when there are changes in /Py
  inputs:
    scriptSource: 'inline'
    script: 'print(''Hello, FromPython!'')'
  condition: eq('${{ variables.files }}', '/Py')
- task: DotNetCoreCLI@2 ## It should only get called when there are changes in /Dotnet
  inputs:
    command: 'build'
    projects: '**/*.csproj'
  condition: eq('${{ variables.files }}', '/Dotnet')
Any help will be appreciated
I don't think it's possible to do directly what you want. The ${{ variables.files }} template expressions in those conditions are expanded when the pipeline is compiled, before any task runs, so a variable you set in a task, even the first one, is already too late.
If you really want to do this, you probably have to go scripting all the way: set the variables in the first script using the logging-command syntax from here:
(if there are C# files) echo '##vso[task.setvariable variable=DotNet]true'
(if there are Py files) echo '##vso[task.setvariable variable=Python]true'
And then in other scripts you evaluate them like:
if $(DotNet) = 'true' then dotnet build
Something along those lines. It will probably be quite brittle, so maybe it would make sense to reconsider the flow at some higher level, but without extra context it's hard to say.
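A rough sketch of that scripted flow, with illustrative folder and variable names and a deliberately simple diff check:
steps:
- script: |
    CHANGED=$(git diff-tree --no-commit-id --name-only -r $(Build.SourceVersion))
    echo "$CHANGED"
    # set pipeline variables for the later steps
    echo "$CHANGED" | grep -q '^Dotnet/' && echo '##vso[task.setvariable variable=DotNet]true'
    echo "$CHANGED" | grep -q '^Py/' && echo '##vso[task.setvariable variable=Python]true'
    exit 0
  displayName: 'Detect changed folders'
- script: |
    # variables set above surface as environment variables (DOTNET, PYTHON) in later steps
    if [ "$DOTNET" = "true" ]; then dotnet build; fi
  displayName: 'Build dotnet only if Dotnet/ changed'
- script: |
    if [ "$PYTHON" = "true" ]; then python Py/Hello.py; fi
  displayName: 'Run Python only if Py/ changed'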

Running azure powershell script through YAML release pipeline

I have my normal and working release pipeline that, given a certain deployment group, performs some tasks:
Copies a script
Executes that powershell script (on the target machines defined in the Deployment Group)
Deletes the script
I know that YAML doesn't support deployment groups, but (lucky me!) so far my deployment group has only one machine, let's call it MyTestVM.
So what I am trying to achieve is simply executing a PowerShell script on that VM. Normally, what happens with the release pipeline is that you have a tentacle/release agent installed on the VM, your deployment target (which is inside the Deployment Group) is hooked up to that, and your release pipeline (thanks to the Deployment Group specification) is able to use that release agent on the machine and do whatever it wants on the VM itself.
I need the same... but through YAML! I know there is a PowerShellOnTargetMachines task available in YAML, but I don't want to use that. It uses PSSession, it requires SSL certificates and many other things. I just want to use the already existing agent on the VM!
What I have in place so far:
pool: 'Private Pool'
steps:
- task: DownloadPipelineArtifact@2
  inputs:
    buildType: 'specific'
    project: 'blahblah'
    definition: 'blah'
    buildVersionToDownload: 'latest'
    targetPath: '$(Pipeline.Workspace)'
- task: CopyFiles@2
  displayName: 'Copy Files to: C:\TestScript'
  inputs:
    SourceFolder: '$(Pipeline.Workspace)/Scripts/'
    Contents: '**/*.ps1'
    TargetFolder: 'C:\TestScript'
    CleanTargetFolder: true
    OverWrite: true
The first part just downloads the artifact containing my script. To be honest, I am not even sure I need to copy the script in the second part: first, because I don't think it copies the script to the target VM's workspace, it copies it onto the VM where the Azure Pipelines agent is installed; and second, I think I can just reference it from my artifact... but this is not the important part.
How can I make my YAML pipeline make use of the release agent installed on the VM in the same way that a normal release pipeline does?
I somehow reached a solution. First of all, it's worth mentioning that since deployment groups don't work with YAML pipelines, the way to proceed is to create an Environment and add your target VM to it as a resource.
So I didn't need to create my own hosted agent or anything special, since the problem was the target itself and not the agent running the pipeline.
By creating an Environment and adding a resource (in my case a VM) to that environment, a new release agent is also created on the target itself. My target VM now has 2 release agents: the old one, which can be used by classic release pipelines, and the new one, attached to the Environment resource in Azure DevOps, which can be used by YAML pipelines.
Now I am finally able to hit my VM:
- stage: PerformScriptInVM
  jobs:
  - deployment: VMDeploy
    pool:
      vmImage: 'windows-latest'
    # watch out: this creates an environment if it doesn't exist
    environment:
      name: My Environment Name
      resourceType: VirtualMachine
    strategy:
      runOnce:
        deploy:
          steps:
          - task: DownloadPipelineArtifact@2
            inputs:
              buildType: 'specific'
              project: 'blahblahblah'
              definition: 'blah'
              buildVersionToDownload: 'latest'
              targetPath: '$(Pipeline.Workspace)'
          - task: PowerShell@2
            displayName: 'PowerShell Script'
            inputs:
              targetType: filePath
              filePath: '$(Pipeline.Workspace)/Scripts/TestScript.ps1'
              arguments: 'whatever your script needs..'
To get the job to run on the specific release agent you want, you can do two things:
Create a pool and only put your release agent into it.
pool: 'My Pool with only one release agent'
Use an existing pool, and publish/demand a capability for your agent.
On the agent machine itself, add a system environment variable (for example, MyCustomCapability) and give it a value such as 1.
then your pipeline becomes:
pool:
  name: 'My pool with potentially more than one agent'
  demands: 'MyCustomCapability'
If only this agent has this environment variable set, then only this agent can execute the job

Sending Azure Build Artifacts to Feed

I have been having issues with sending build artifacts to my feed and can't figure out where my issue is at.
I forked this repository from an Azure document, since I am new to this and am learning to create a CI/CD pipeline (https://github.com/Azure-Samples/python-docs-hello-world).
The twine and Universal Packages publishing setup guides include steps for connecting to the feed, such as creating a .pypirc file in your home directory, but is that done locally or somewhere within the pipeline YAML?
Universal Publishing YAML
steps:
- task: UniversalPackages@0
  displayName: 'Universal publish'
  inputs:
    command: publish
    vstsFeed: 'cd75ead1-7beb-42f9-9477-e958501bb986'
    publishDirectory: '$(Pipeline.Workspace)'
    vstsFeedPublish: 'cd75ead1-7beb-42f9-9477-e958501bb986'
    vstsFeedPackagePublish: drop
Twine Method
twine upload -r {Feed} --config-file $(PYPIRC_PATH) $(Pipeline.Workspace)
With Universal Publishing I receive an error saying the provided path is invalid.
With Twine I get an error about InvalidDistribution: Cannot find file (or expand pattern).
The $(Pipeline.Workspace) that I have written above was created as a path in my build pipeline to copy all files over from an Archive step. I see the artifact being made in the build pipeline and then downloaded in the first step of the release pipeline, so I'm not sure what is going on or if it's something as simple as using the incorrect path.
With Twine I get an error about InvalidDistribution: Cannot find file (or expand pattern)
You need to specify the specific artifact path instead of using $(Pipeline.Workspace).
$(Pipeline.Workspace) is equal to $(Agent.BuildDirectory). You could refer to this doc.
From the GitHub link, it seems that you want to publish a Python package to a feed.
You could refer to the following steps to create CI/CD.
In CI, you could build an sdist and publish the artifact to the pipeline.
Here is the sample:
steps:
- task: UsePythonVersion@0
  displayName: 'Use Python 3.6'
  inputs:
    versionSpec: 3.6
- script: 'python setup.py sdist'
  displayName: 'Build sdist'
- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact: dist'
  inputs:
    PathtoPublish: dist
    ArtifactName: dist
In CD, you could set the build artifact as a resource and use twine to upload the Python package to the feed.
Here is an example:
twine upload -r AzureTest23 --config-file $(PYPIRC_PATH) D:\a\r1\a\{Source alias}\dist\*
The Twine Authenticate task provides the $(PYPIRC_PATH) variable.
If you want to determine your correct path, you can find it in the release log.
Note: If there are spaces or special characters in the path, they need to be escaped in cmd, otherwise they will not be recognized.
The folder name corresponds to the source alias; you can change it in the artifact source settings.
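Putting those pieces together in a YAML pipeline would look roughly like the sketch below; the feed name, project name and dist path are placeholders, not values from the question:
- task: TwineAuthenticate@1
  inputs:
    artifactFeed: 'MyProject/MyFeed'   # hypothetical project/feed
- script: |
    pip install twine
    # the -r repository name matches the feed name, as in the example above
    twine upload -r MyFeed --config-file $(PYPIRC_PATH) $(Pipeline.Workspace)/dist/*
  displayName: 'Upload package with twine'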
By the way, if you use the Universal Publish task, you also need to give the correct path.
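For instance, if the dist artifact produced by the CI sample above has been downloaded into the pipeline workspace, pointing publishDirectory at that subfolder rather than at the workspace root might look like this (the 'dist' folder name is an assumption based on the CI sample; use whatever folder actually contains your files):
- task: UniversalPackages@0
  displayName: 'Universal publish'
  inputs:
    command: publish
    vstsFeed: 'cd75ead1-7beb-42f9-9477-e958501bb986'
    publishDirectory: '$(Pipeline.Workspace)/dist'   # the folder that actually holds the files to pack
    vstsFeedPublish: 'cd75ead1-7beb-42f9-9477-e958501bb986'
    vstsFeedPackagePublish: drop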

azure DevOps pipeline CI/CD

I am using an Open-Source project Magda (https://magda.io/docs/building-and-running) and want to make an Azure CI/CD Pipeline.
For this project, there are some prerequisites like having sbt + yarn + docker + java installed.
How can I specify those requirements in the azure-pipelines.yml file?
Is it possible, in the azure-pipelines.yml file, to just write scripts, without any use of jobs or tasks? And what's the difference between them (tasks, jobs, ...)?
(I'm currently starting with it, so I don't have much experience)
That's my current azure-pipelines.yml file (if there is something wrong please tell me)
# Node.js
# Build a general Node.js project with npm.
# Add steps that analyze code, save build artifacts, deploy, and more:
# https://learn.microsoft.com/azure/devops/pipelines/languages/javascript

trigger:
- release

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: NodeTool@0
  inputs:
    versionSpec: '10.0.0'
  displayName: 'Install Node.js'
- script: |
    npm install
    npm run build
  displayName: 'npm install and build'
- script: |
    curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
    chmod 700 get_helm.sh
    ./get_helm.sh
  displayName: 'install Helm'
- script: |
    yarn global add lerna
    yarn global add @gov.au/pancake
    yarn install
  displayName: 'install lerna & pancake packages'
- script: |
    export NODE_OPTIONS=--max-old-space-size=8192
  displayName: 'set Env Variable'
- script: |
    lerna run build --stream --concurrency=1 --include-dependencies
    lerna run docker-build-local --stream --concurrency=4 --include-filtered-dependencies
  displayName: 'Build lerna'
I recommend you read this: Key concepts for new Azure Pipelines users.
It is possible to put all your stuff in one script step, but with separate steps you have a logical separation, and that makes the file easier to navigate and read than one really long step.
Here are some basics from the above-mentioned documentation:
A trigger tells a Pipeline to run.
A pipeline is made up of one or more stages. A pipeline can deploy to one or more environments.
A stage is a way of organizing jobs in a pipeline and each stage can have one or more jobs.
Each job runs on one agent. A job can also be agentless.
Each agent runs a job that contains one or more steps.
A step can be a task or script and is the smallest building block of a pipeline.
A task is a pre-packaged script that performs an action, such as invoking a REST API or publishing a build artifact.
An artifact is a collection of files or packages published by a run.
But I really recommend you to go through it.
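As a quick, stripped-down illustration of how those pieces nest (trigger, stage, job, agent pool, steps, task):
trigger:
- release                        # the trigger tells the pipeline when to run

stages:                          # a pipeline is made up of one or more stages
- stage: Build
  jobs:                          # each stage contains one or more jobs
  - job: BuildJob
    pool:
      vmImage: 'ubuntu-latest'   # each job runs on one agent
    steps:                       # each job runs one or more steps
    - script: echo "a script step"
    - task: NodeTool@0           # a task is a pre-packaged script
      inputs:
        versionSpec: '10.x'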
For this project, there are some prerequisites like having sbt + yarn + docker + java installed. How can I specify those requirements in the azure-pipelines.yml file?
If you are using Microsoft-hosted agents, you cannot specify demands:
Demands and capabilities apply only to self-hosted agents. When using Microsoft-hosted agents, you select an image for the hosted agent. You cannot use capabilities with hosted agents.
So if you need something that is not already on the agent image, you can install it in a step and then use that new piece of software; when your job is finished, the agent is restored to its original state. If you go for a self-hosted agent, you can specify demands, and based on the agent's capabilities it can be assigned to your job.
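For the hosted-agent route, a rough sketch on the ubuntu-latest image might look like this (java, docker and yarn already ship with the image; the remaining install commands are placeholders you would adapt to Magda's documented prerequisites):
steps:
- task: NodeTool@0               # pin the Node.js version the build expects
  inputs:
    versionSpec: '10.x'
- task: JavaToolInstaller@0      # select one of the JDKs pre-installed on the image
  inputs:
    versionSpec: '8'
    jdkArchitectureOption: 'x64'
    jdkSourceOption: 'PreInstalled'
- script: |
    docker --version             # already on the hosted image
    yarn --version               # already on the hosted image
    # install anything still missing here, e.g. sbt via its apt repository
    # (exact commands omitted; check the sbt installation docs)
  displayName: 'Check / install remaining prerequisites'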

Resources