How can I simulate the build process of an Azure DevOps pipeline on my local machine before pushing to the branch, so I can catch possible errors?
The solution builds locally with no errors or warnings, and MSBuild from the VS command line also builds it cleanly, but on some pushes the pipeline build throws many errors, mostly related to preprocessor definitions and the precompiled header.
I want to know how I can test the same process locally on my machine without pushing to the repo.
azure-pipelines.yml
-------------------
pool:
  vmImage: 'vs2017-win2016'

steps:
- task: MSBuild@1
  displayName: 'Build solution'
  inputs:
    platform: 'Win32'
    configuration: 'release'
    solution: 'mysolution.sln'

- task: VSTest@2
  displayName: 'Run Test'
  inputs:
    platform: 'Win32'
    configuration: 'release'
    testAssemblyVer2: |
      **\*.Test.dll
      !**\*TestAdapter.dll
      !**\obj\**
    runSettingsFile: project.Test/test.runsettings
    codeCoverageEnabled: true
If you are using a Git repository, you can create another branch and make a pull request. As long as the pull request is not set to auto-complete, the code will not get committed to the repository.
If you are using a TFVC repository, you can set up a gated build that is configured to fail. The pipeline should be a copy of your original pipeline, with a PowerShell task added at the end that throws a terminating error. Be sure to set up this gated build on a separate branch so it does not block normal development.
Write-Error "Fail here" -ErrorAction 'Stop'
You can now make pull requests or trigger a gated build without the code actually being committed.
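As a rough sketch (assuming the same build step as the pipeline shown above; the failing PowerShell task is simply appended as the last step), the copied gated pipeline could end like this:

steps:
- task: MSBuild@1
  displayName: 'Build solution'
  inputs:
    platform: 'Win32'
    configuration: 'release'
    solution: 'mysolution.sln'

# Deliberately fail so the gated check-in / pull request never completes.
- task: PowerShell@2
  displayName: 'Force the build to fail'
  inputs:
    targetType: 'inline'
    script: Write-Error "Fail here" -ErrorAction 'Stop'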
You can use AzurePipelinesPS to install an agent on your local machine with the Install-APAgent command if you need another agent.
I'm only a few hours into development with Azure, but I think I found a solution that would work for you; I happen to already have it in place. Use Gradle: the default YAML then just runs Gradle, and you don't have to worry too much about it after the first run. In the Gradle file you could also spin up a Docker image if you want and build on that.
The issue you have is most likely related to the difference between your local environment and the one on the build agent where this YAML pipeline executes the build. Testing it locally (even if it were possible) will not help, since the test would run in your environment, where you already know every component required for a successful build is installed. On the environment where the build agent runs the build, on the other hand, there seem to be missing components (or different versions), which causes your build to fail. Try to compare the list of installed components and environment variables (like PATH) on your local machine and on the build agent - there may be differences between them.
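One hedged way to do that comparison is to add a temporary diagnostic step to the pipeline that prints the PATH and the MSBuild installations found on the agent, then run the equivalent commands locally. This is only a sketch; adjust the search path to the Visual Studio version on your image:

- task: PowerShell@2
  displayName: 'Dump agent environment for comparison'
  inputs:
    targetType: 'inline'
    script: |
      # Print PATH entries one per line for easier diffing against the local machine.
      Write-Host "PATH:"
      $env:Path -split ';' | ForEach-Object { Write-Host "  $_" }
      # List the MSBuild executables present on the agent.
      Write-Host "MSBuild versions found:"
      Get-ChildItem "C:\Program Files (x86)\Microsoft Visual Studio" -Filter MSBuild.exe -Recurse -ErrorAction SilentlyContinue |
        ForEach-Object { Write-Host "  $($_.FullName) $($_.VersionInfo.FileVersion)" }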
How can I avoid downloading the source code for every job in an Azure DevOps pipeline? How can I download the source code once and then use it in all jobs? I set up my jobs to run in parallel in the pipeline, and now I have to spend time downloading the code every time. Thanks.
How can I download the source code once and then use it in all jobs?
If you use Microsoft-hosted agents, it cannot be done, because each job in your pipeline gets a fresh virtual machine when you run the pipeline, and the virtual machine is discarded after one use. So the source code downloaded in one job is not available to another job.
However, it is possible with a self-hosted agent. You can try creating a self-hosted agent and running your pipeline on it. See the example below:
I have the below pipeline for testing on my self-hosted agent.
pool: Default #run pipeline on self-hosted agent

stages:
- stage: Build
  jobs:
  - job: A
    steps:
    - checkout: self
    - powershell: |
        echo "job1"> job1.txt
        ls

  - job:
    dependsOn: A
    steps:
    - checkout: none
    - powershell: |
        echo "job2"> job2.txt
        ls
See the output in the second PowerShell task: the source code is only downloaded once, in the first job, and the following jobs can use it too.
If you want to skip downloading the source code for your whole pipeline, you can follow the steps below.
Click the 3 dots on your YAML pipeline edit page --> select Triggers --> go to the YAML tab --> go to the Get sources section --> check Don't sync sources.
But if you still want to download the source code in some of the jobs, you can add a script task in those jobs that runs git clone to fetch the source (e.g. git clone https://$(System.AccessToken)@dev.azure.com/org/pro/_git/rep); a sketch of this appears after the YAML example below.
If you want to skip downloading the source code only for some of your jobs, you can instead use the checkout step (i.e. checkout: none).
stages:
- stage: Build
  jobs:
  - job:
    steps:
    - checkout: none  #skip loading source in this job
  - job:
    steps:
    - checkout: self  #loading source in this job
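As mentioned above, if checkout is disabled but one particular job still needs the code, a minimal sketch of cloning it manually could look like the following (the organization/project/repository segment is a placeholder, and the job is assumed to be allowed to use System.AccessToken):

- job: CloneManually
  steps:
  - checkout: none   # do not sync sources automatically in this job
  - script: |
      git clone https://$(System.AccessToken)@dev.azure.com/org/pro/_git/rep
    displayName: 'Clone source manually'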
This is not possible, given what a job is:
A stage contains one or more jobs. Each job runs on an agent. A job represents an execution boundary of a set of steps. All of the steps run together on the same agent. For example, you might build two configurations - x86 and x64. In this case, you have one build stage and two jobs.
So technically they run on separate machines.
The question is whether you need the source code in all of those jobs. If not, you can disable downloading the source code by adding the step checkout: none.
I created .NET Core Selenium tests in Azure Repos.
I have the .csproj in the repo as well.
I added an ASP.NET Core task; in the restore task I set Path to Project to "**/*.csproj".
I got the below error while running the build pipeline.
SYSTEMVSSCONNECTION exists true
SYSTEMVSSCONNECTION exists true
##[error]No files matched the search pattern
I'm not sure whether the agent didn't find my .csproj file. Any help is deeply appreciated.
I tried to reproduce your issue, but the task ran successfully.
My environment:
Agent: ubuntu-latest
.Net SDK Version: 3.1.403
My script:
- task: DotNetCoreCLI@2
  inputs:
    command: 'restore'
    projects: '**/*.csproj'
    feedsToUse: 'select'
You can do the following things to locate your issue:
Rerun the pipeline using my environment configuration.
Use the full, explicit path instead of a wildcard. For example, $(System.DefaultWorkingDirectory)/WindowsApp.csproj.
Add a command line task to run dotnet restore directly.
Set the system.debug variable to true. Rerun your pipeline and you will see a more detailed run log.
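A minimal sketch combining the last three suggestions (the explicit project path WindowsApp.csproj is only an example; use your actual .csproj location):

variables:
  system.debug: true   # produces a much more detailed run log

steps:
- script: dotnet restore $(System.DefaultWorkingDirectory)/WindowsApp.csproj
  displayName: 'Restore directly with the dotnet CLI'

- task: DotNetCoreCLI@2
  displayName: 'Restore with an explicit project path'
  inputs:
    command: 'restore'
    projects: '$(System.DefaultWorkingDirectory)/WindowsApp.csproj'
    feedsToUse: 'select'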
I am using the open-source project Magda (https://magda.io/docs/building-and-running) and want to make an Azure CI/CD pipeline.
For this project, there are some prerequisites like having sbt + yarn + docker + java installed.
How can I specify those requirements in the azure-pipelines.yml file?
Is it possible, in the azure-pipelines.yml file, to just write scripts, without any use of jobs or tasks? And what's the difference between them (tasks, jobs, ...)?
(I'm just starting out with this, so I don't have much experience.)
This is my current azure-pipelines.yml file (if there is something wrong, please tell me):
# Node.js
# Build a general Node.js project with npm.
# Add steps that analyze code, save build artifacts, deploy, and more:
# https://learn.microsoft.com/azure/devops/pipelines/languages/javascript

trigger:
- release

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: NodeTool@0
  inputs:
    versionSpec: '10.0.0'
  displayName: 'Install Node.js'

- script: |
    npm install
    npm run build
  displayName: 'npm install and build'

- script: |
    curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
    chmod 700 get_helm.sh
    ./get_helm.sh
  displayName: 'install Helm'

- script: |
    yarn global add lerna
    yarn global add @gov.au/pancake
    yarn install
  displayName: 'install lerna & pancake packages'

- script: |
    export NODE_OPTIONS=--max-old-space-size=8192
  displayName: 'set Env Variable'

- script: |
    lerna run build --stream --concurrency=1 --include-dependencies
    lerna run docker-build-local --stream --concurrency=4 --include-filtered-dependencies
  displayName: 'Build lerna'
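One note on the file above: each script step runs in its own shell, so the export NODE_OPTIONS=... in its own step will not be visible to the later lerna steps. Two common ways around this, sketched below, are setting the variable in the same step that needs it, or publishing it as a pipeline variable with the task.setvariable logging command:

# Option 1: set the variable in the same step that uses it.
- script: |
    export NODE_OPTIONS=--max-old-space-size=8192
    lerna run build --stream --concurrency=1 --include-dependencies
  displayName: 'build with increased heap'

# Option 2: publish it as a pipeline variable so all subsequent steps see it.
- script: echo "##vso[task.setvariable variable=NODE_OPTIONS]--max-old-space-size=8192"
  displayName: 'set NODE_OPTIONS for later steps'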
I recommend you read Key concepts for new Azure Pipelines users.
It is possible to put all your stuff in one script step, but as it stands you have logical separation, and that makes the file easier to navigate and read than one really long step.
Here are some basics from the above-mentioned documentation:
A trigger tells a Pipeline to run.
A pipeline is made up of one or more stages. A pipeline can deploy to one or more environments.
A stage is a way of organizing jobs in a pipeline and each stage can have one or more jobs.
Each job runs on one agent. A job can also be agentless.
Each agent runs a job that contains one or more steps.
A step can be a task or script and is the smallest building block of a pipeline.
A task is a pre-packaged script that performs an action, such as invoking a REST API or publishing a build artifact.
An artifact is a collection of files or packages published by a run.
But I really recommend you to go through it.
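To make the hierarchy concrete, here is a minimal, hypothetical pipeline showing how a trigger, a stage, a job, and steps (both a script and a task) fit together:

trigger:            # the trigger tells the pipeline to run
- main

stages:             # a pipeline is made up of one or more stages
- stage: Build
  jobs:             # each stage has one or more jobs
  - job: BuildJob   # each job runs on one agent
    pool:
      vmImage: 'ubuntu-latest'
    steps:          # steps are the smallest building blocks
    - script: echo "a script step is just a shell command"
      displayName: 'Script step'
    - task: PublishBuildArtifacts@1   # a task is a pre-packaged script
      inputs:
        PathtoPublish: '$(Build.ArtifactStagingDirectory)'
        ArtifactName: 'drop'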
For this project, there are some prerequisites like having sbt + yarn + docker + java installed. How can I specify those requirements in the azure-pipelines.yml file?
If you are using Microsoft-hosted agents, you cannot specify demands:
Demands and capabilities apply only to self-hosted agents. When using Microsoft-hosted agents, you select an image for the hosted agent. You cannot use capabilities with hosted agents.
So if you need something that is not already on the agent, you can install it and then use that new piece of software; when your job is finished, the agent is restored to its original state. If you go for a self-hosted agent, you can specify demands, and the agent will be assigned to your job based on its capabilities. A sketch of both approaches follows.
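Here is a minimal sketch of both variants; the install commands and the demand names are assumptions and need to be adapted to the actual Magda prerequisites:

# On a Microsoft-hosted agent you cannot demand tools, but anything missing
# can be installed by an early script step (the commands below are placeholders;
# follow each tool's official install instructions).
steps:
- script: |
    sudo npm install -g yarn
    sudo apt-get update && sudo apt-get install -y default-jdk
    # ...install sbt following its official instructions...
  displayName: 'Install prerequisites'

# On a self-hosted agent the same requirements can instead be expressed as
# demands, matched against the capabilities the agent advertises:
# pool:
#   name: MySelfHostedPool
#   demands:
#   - java
#   - yarn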
I have set up a build pipeline for a model library that's shared between several of my projects. I'm accessing it through a private feed in Azure DevOps, and it works just fine. I can retrieve the library in Visual Studio and my projects all get the most up-to-date version. However, in the feed are all the dependency libraries used within the model library (e.g. Microsoft.Azure.Storage.Blob, System.Threading, Microsoft.AspNetCore, etc.). I haven't been able to find any guidance on why this is happening, if it's the expected behavior, or if I'm screwing something up. My YAML file for the build pipeline is below:
Also, does anyone know a better way to handle package versioning? This seems really hacky, but it was the only way I could get auto-incrementing versions to work.
# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml

name: $(projectName)-$(majorMinorVersion).$(semanticVersion)

pool:
  vmImage: 'windows-latest'

# pipeline variables
variables:
  majorMinorVersion: 1.1
  # semanticVersion counter is automatically incremented by one in each execution of pipeline
  # second parameter is seed value to reset to every time the referenced majorMinorVersion is changed
  semanticVersion: $[counter(variables['majorMinorVersion'], 0)]
  projectName: 'MyProject.Models'
  buildConfiguration: 'Release'
  projectPath: 'Shared/MyProject.Models.csproj'
  fullVersion: '$(majorMinorVersion).$(semanticVersion)'

steps:
# show version number on start
- task: Bash@3
  inputs:
    targetType: 'inline'
    script: |
      echo Building $(projectName)-$(fullVersion)

- task: DotNetCoreCLI@2
  inputs:
    command: 'pack'
    packagesToPack: $(projectPath)
    versioningScheme: 'byEnvVar'
    versionEnvVar: 'fullVersion'

- task: DotNetCoreCLI@2
  inputs:
    command: 'push'
    packagesToPush: '$(Build.ArtifactStagingDirectory)/MyProject.Models*.nupkg'
    nuGetFeedType: 'internal'
    publishVstsFeed: '<feed GUID>'
Dependencies getting copied to private feed in Azure DevOps
That is because your private NuGet feed has nuget.org set as an upstream source by default, if you left Packages from public sources enabled when you created the feed.
If you go to Settings -> Upstream sources, you will find there are three public sources listed.
When we download a package from an upstream source, it is cached in Artifacts, which is why you then see it in your feed. These are cached packages from upstream sources, so they do not need to be downloaded from the upstream sources again the next time they are used, and the included upstream sources are all approved by Microsoft, so you do not need to worry about them.
If you are still concerned about them, you can disable the upstream sources, but in that case you need to publish all the dependencies to your private feed yourself; otherwise, restoring your model library package will fail because its dependencies cannot be found.
Hope this helps.
I have a free GitLab account.
I also have a company account (not sure which plan).
I have the exact same project, a wrapper on EventStore.
In the CI pipeline I want to spin up a container with event store so that I can run some integration tests against it.
This is my .gitlab-ci.yml, which restores, compiles, runs tests, and publishes NuGet packages:
#Stages
stages:
  - ci
  - pack

#Global variables
variables:
  GITLAB_RUNNER_DOTNET_CORE: mcr.microsoft.com/dotnet/core/sdk:2.2
  EVENT_STORE: eventstore/eventstore:release-5.0.2
  NUGET_REPOSITORY: $NEXUS_NUGET_REPOSITORY
  NUGET_API_KEY: $NEXUS_API_KEY
  NUGET_FOLDER_NAME: nupkgs

#Docker image
image: $GITLAB_RUNNER_DOTNET_CORE

#Jobs
ci:
  stage: ci
  services:
    - $EVENT_STORE
  variables:
    # event store service params testing with standard ports
    EVENTSTORE_INT_TCP_PORT: "1113"
    EVENTSTORE_EXT_TCP_PORT: "1113"
    EVENTSTORE_INT_HTTP_PORT: "2113"
    EVENTSTORE_EXT_HTTP_PORT: "2113"
    EVENTSTORE_EXT_HTTP_PREFIXES: "http://*:2113/"
  script:
    - dotnet restore --no-cache --force
    - dotnet build --configuration Release
    - dotnet vstest test/*Tests/bin/Release/**/*Tests.dll

pack-beta-nuget:
  stage: pack
  script:
    - export VERSION_SUFFIX=beta$CI_PIPELINE_ID
    - dotnet pack *.sln --configuration Release --output $NUGET_FOLDER_NAME --version-suffix $VERSION_SUFFIX --include-source --include-symbols -p:SymbolPackageFormat=snupkg
    - dotnet nuget push **/*.nupkg --api-key $NUGET_API_KEY --source $NUGET_REPOSITORY
  except:
    - master

pack-nuget:
  stage: pack
  script:
    - dotnet restore
    - dotnet pack *.sln --configuration Release --output $NUGET_FOLDER_NAME
    - dotnet nuget push **/*.nupkg --api-key $NUGET_API_KEY --source $NUGET_REPOSITORY
  only:
    - master
As you can see, I spin up the event store container.
From my integration tests I try to connect to the container within the CI using the following connection string:
"ConnectTo=tcp://admin:changeit#127.0.0.1:1113; HeartBeatTimeout=500;";
With my work account it works fine: there is a container listening on 127.0.0.1 on port 1113, and I can connect to it using the above connection string.
With my free personal account it is unable to connect.
Why?
I suspect it has something to do with the way Docker is available on the two GitLab CI runners, but why is it different?
And more importantly, how can I configure Event Store in my personal CI pipeline on the free account so that I can connect to it, if localhost is not a valid host URI for whatever reason?
Well, you have not provided any details but it seems you're using the Docker executor. In that case, services are not available on localhost but only accessible as service aliases.
This is an extract from the working CI file:
test:
  stage: test
  script:
    - dotnet test
  variables:
    ASPNETCORE_ENVIRONMENT: Testing
    EVENTSTORE_EXT_HTTP_PORT: 2113
    EVENTSTORE_EXT_TCP_PORT: 1113
    EVENTSTORE_RUN_PROJECTIONS: all
    EVENTSTORE_START_STANDARD_PROJECTIONS: "true"
    EventStore__ConnectionString: ConnectTo=tcp://admin:changeit@eventstore:1113
  services:
    - name: eventstore/eventstore:latest
      alias: eventstore
  only:
    refs:
      - branches
      - tags
For this to work, your appsettings.Testing.json file needs to point to ConnectTo=tcp://admin:changeit@eventstore:1113.
If you want to keep using the appsettings file with the configuration that points to localhost, you can override the setting using env variable in the CI file. Just remember to add environment variables as the configuration source. The code snippet above has such an override, matching our settings structure:
{
  "EventStore": {
    "ConnectionString": "ConnectTo=whatever"
  }
}
If you ever decide to use the Kubernetes executor, you will need to revert to using localhost, because the Kubernetes executor creates one pod per build with multiple containers, including all service containers. There's an open issue to support service aliases with Kubernetes runners; I think it will land in something like 12.9 or 13, pretty soon. That being said, using service aliases is a safe, future-proof way of making it all work.
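For completeness, a rough sketch of what the same job could look like under the Kubernetes executor, where the service shares the build pod and is therefore reachable on localhost (purely illustrative, based on the description above):

test:
  stage: test
  script:
    - dotnet test
  variables:
    ASPNETCORE_ENVIRONMENT: Testing
    # With the Kubernetes executor the service container runs in the same pod,
    # so it is reached on localhost rather than via an alias.
    EventStore__ConnectionString: ConnectTo=tcp://admin:changeit@127.0.0.1:1113
  services:
    - name: eventstore/eventstore:latest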
P.S. Just noticed that your setup works with one account and doesn't work with another. My guess would be that you either use different executors (Docker doesn't work and Kubernetes works) or different GitLab versions (if the service alias issue has already been fixed).