I have the following in my azure-pipelines.yml
jobs:
- job: TestifFolder1Exists
  pool:
    vmImage: 'ubuntu-16.04'
  steps:
  - bash: git log -1 --name-only | grep -c Folder1
    failOnStderr: false
- job: Folder1DoesntExist
  pool:
    vmImage: 'ubuntu-16.04'
  dependsOn: TestifFolder1Exists
  condition: failed()
- job: Folder1DoesExist
  pool:
    vmImage: 'ubuntu-16.04'
  dependsOn: TestifFolder1Exists
  condition: succeeded()
I am trying to test whether a folder has had a change made, so I can publish artifacts from that directory.
The problem I am having is that if nothing has been written to the folder, the script fails with Bash exited with code '1' (this is what I want), which in turn makes the whole build fail.
If I add continueOnError, the dependent jobs always run the succeeded() job.
How can I let this job fail without failing the entire build?
There is an option called continueOnError. It is set to false by default. Change it to true and a failing task will no longer fail the job.
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/tasks?view=azure-devops&tabs=yaml#controloptions
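For illustration, a minimal sketch of where the option goes on the step from the question (note the asker's caveat above: with continueOnError the step reports "succeeded with issues", so a downstream succeeded() condition still passes):

steps:
- bash: git log -1 --name-only | grep -c Folder1
  continueOnError: true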
I didn't figure out how to ignore a failed job, but this is how I solved this particular problem
jobs:
- job: TestifFolder1Exists
  pool:
    vmImage: 'ubuntu-16.04'
  steps:
  - bash: |
      if [ "$(git log -1 --name-only | grep -c Folder1)" -eq 1 ]; then
        echo "##vso[task.setVariable variable=Folder1Changed]true"
      fi
  - bash: echo succeeded
    displayName: Perform some task
    condition: eq(variables.Folder1Changed, 'true')
(although it turns out that Azure DevOps already provides what I was trying to build here - path filter triggers!)
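For reference, such a trigger looks roughly like this (a minimal sketch; the branch and folder names are placeholders):

trigger:
  branches:
    include:
    - master
  paths:
    include:
    - Folder1/*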
The PowerShell task has an ignoreLASTEXITCODE property that turns off propagation of the exit code back to the pipeline, so you can, for example, analyse it in a later task. Not sure why it was not provided for bash. https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/powershell?view=azure-devops
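A minimal sketch of that property in use (the grep check is borrowed from the question above; the variable name grepExitCode is made up):

steps:
- task: PowerShell@2
  inputs:
    targetType: inline
    script: |
      bash -c "git log -1 --name-only | grep -c Folder1"
      # With ignoreLASTEXITCODE, a non-zero $LASTEXITCODE does not fail the step;
      # forward it so a later task can inspect it.
      Write-Host "##vso[task.setVariable variable=grepExitCode]$LASTEXITCODE"
    ignoreLASTEXITCODE: true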
I want to tag my pipeline build during the build process itself. Based on the official documentation, I tried echo ##vso[build.addbuildtag]testing in the pipeline YAML. There was no error, but the build was not tagged either.
I'm able to add tags from portal successfully.
My pipeline YAML below.
pool:
  vmImage: ubuntu-latest

jobs:
- job: addingtag
  displayName: addingtag
  steps:
  - script: |
      echo ##vso[build.addbuildtag]testing
    displayName: addingtag
Below are other combinations I tried, which also failed.
echo ##vso[build.addbuildtag] testing
echo ##vso[build.addbuildtag]Tag_testing
You need to add double quotes. In bash, an unquoted # starts a comment, so echo ##vso[build.addbuildtag]testing prints an empty line and the logging command never reaches the agent. I could successfully add a tag by using a YAML script like below.
- script: |
    echo "##vso[build.addbuildtag]testing"
  displayName: addingtag
Let's say I have three pipelines that do the following:
Pipeline 1:
  Task A
Pipeline 2:
  Task B
Pipeline 3:
  Task A
  Task B
Now let's say my repo has two directories:
AStuff
BStuff
Is there any way to set path filters such that:
If AStuff has changes but BStuff doesn't, Pipeline 1 runs (and nothing else)
If BStuff has changes but AStuff doesn't, Pipeline 2 runs (and nothing else)
If both AStuff and BStuff have changes, Pipeline 3 runs (and nothing else)
The root of my problem is that I want Task A to run if AStuff has changes, and Task B to run if BStuff has changes. But if both have changes, I would prefer that Task A runs and then Task B runs, instead of ADO picking whichever order it wants. So, alternatively, maybe there's some way for Pipeline 2 to have a trigger/condition that causes it to run when Pipeline 1 completes, but only if the changes that triggered Pipeline 1 affected the BStuff directory.
There is no built-in feature to achieve your third requirement.
Also, it is impossible for the third pipeline to run alone: if the third pipeline runs, the first two pipelines will run as well, unless the third pipeline is not in the same environment as the first two.
The following pipeline definition should meet your requirements.
Check_StuffA
trigger:
  paths:
    include:
    - AStuff/*
    exclude:
    - BStuff/*

pool:
  vmImage: ubuntu-latest

steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script'
Check_StuffB
trigger:
  paths:
    include:
    - BStuff/*
    exclude:
    - AStuff/*

pool:
  vmImage: ubuntu-latest

steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script'
Check_StuffA&B
trigger:
  paths:
    include:
    - AStuff/*
    - BStuff/*

pool:
  vmImage: ubuntu-latest

jobs:
- job: check
  displayName: Check changed files
  pool:
    vmImage: ubuntu-latest
  steps:
  - task: ChangedFiles@1
    name: CheckChanges
    inputs:
      rules: |
        [CodeChanged]
        AStuff/*
        [TestsChanged]
        BStuff/*
- job: build
  displayName: Build only when code changes
  dependsOn: check
  condition: and(eq(dependencies.check.outputs['CheckChanges.CodeChanged'], 'true'), eq(dependencies.check.outputs['CheckChanges.TestsChanged'], 'true'))
  steps:
  - task: PowerShell@2
    inputs:
      targetType: 'inline'
      script: |
        # Write your PowerShell commands here.
        Write-Host "Hello World"
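If you would rather not depend on the ChangedFiles marketplace task, the check job can be approximated with the git/grep approach from the first question above (a sketch; note that isOutput=true is required for the dependencies.check.outputs references to resolve):

- job: check
  displayName: Check changed files
  steps:
  - bash: |
      # list the files touched by the last commit
      changed=$(git log -1 --name-only --pretty=format:)
      if echo "$changed" | grep -q '^AStuff/'; then
        echo "##vso[task.setvariable variable=CodeChanged;isOutput=true]true"
      fi
      if echo "$changed" | grep -q '^BStuff/'; then
        echo "##vso[task.setvariable variable=TestsChanged;isOutput=true]true"
      fi
    name: CheckChanges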
The task is running on Node.
Part of the yaml file for the pipeline is:
steps:
- script: |
    #!/bin/bash
    ./plan.sh
  displayName: "example deploy"
  continueOnError: false
Sometimes the ./plan.sh script fails, but the step still shows as a success (green tick) in the pipeline.
How do I make it show a "failed" red cross whenever it fails?
Invoked this way, the step does not capture the script's failure. One way that propagates the error correctly is as follows:
plan2.sh
xxx
pipeline YAML definition:
trigger:
- none

pool:
  vmImage: ubuntu-latest

steps:
- script: |
    bash $(System.DefaultWorkingDirectory)/plan2.sh
  displayName: "example deploy"
  continueOnError: false
With this, the failure is reported in the pipeline.
But usually, we use the bash task directly to run the bash script.
plan.sh
{
  xxx
  # Your code here.
} || {
  # save log for exception
  echo some error output here.
  exit 1
}
pipeline YAML definition part:
steps:
- task: Bash@3
  displayName: 'Bash Script'
  inputs:
    targetType: filePath
    filePath: ./plan.sh
Again, the failure is reported in the pipeline.
For your script step to signal failure, you need to make the script as a whole return a non-zero exit code. You may need some conditional logic in your script, checking the exit code from plan.sh after it returns.
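For example, a minimal sketch of that conditional logic (assuming plan.sh is in the working directory):

steps:
- script: |
    ./plan.sh
    rc=$?
    if [ $rc -ne 0 ]; then
      echo "plan.sh failed with exit code $rc"
      exit $rc   # a non-zero exit code marks the step as failed
    fi
  displayName: "example deploy"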
I was able to solve this by adding
set -o pipefail
at the start of the script block in the YAML file.
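A sketch of where that line goes; the tee is a hypothetical example of the kind of pipe that would otherwise mask plan.sh's exit code:

steps:
- script: |
    set -o pipefail
    # without pipefail, tee's exit code (0) would hide a failure in plan.sh
    ./plan.sh | tee plan.log
  displayName: "example deploy"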
I can't find a satisfying solution for my case.
I want to start a job manually only when a certain previous job has failed. The job in question does a validation. I want to make the next job manual so that the user acknowledges that something wasn't right and investigates the problem, continuing only if they deem the failure ignorable.
stages:
  - test
  - validate
  - build

lint:
  stage: test
  allow_failure: true
  script:
    - npm run lint

check:reducer:
  stage: test
  allow_failure: true
  script:
    - chmod +x ./check-reducers.py
    - ./check-reducers.py $CI_PROJECT_ID $CI_COMMIT_BRANCH
  except:
    - master
    - development

fail:pause:
  stage: validate
  allow_failure: true
  script:
    - echo The 'validate:reducer' job has failed
    - echo Check the job and decide if this should continue
  when: manual
  needs: ["check:reducer"]

build:
  stage: build
  script:
    - cp --recursive _meta/ $BUILD_PATH
    - npm run build
  artifacts:
    name: "build"
    expire_in: 1 week
    paths:
      - $BUILD_PATH
  needs: ["fail:pause"]
I would like that if check:reducer fails, fail:pause waits for user input. If check:reducer exits with 0, fail:pause should start automatically, or build should start.
Unfortunately, this isn't possible, as the when keyword is evaluated at the very start of the pipeline (i.e., before any job has run), so you cannot set the when condition based on a previous job's status.
This is possible if you use a generated gitlab-ci.yml as a child pipeline.
stages:
  - test
  - approve
  - deploy

generate-config:
  stage: test
  script:
    # capture the exit code without failing the job at this line
    - ./bin/run-tests.sh && rc=0 || rc=$?
    - ./bin/generate-workflows.sh $rc
  artifacts:
    paths:
      - deploy-gitlab-ci.yml

trigger-workflows:
  stage: deploy
  trigger:
    include:
      - artifact: deploy-gitlab-ci.yml
        job: generate-config
The generate-workflows.sh script writes out a deploy-gitlab-ci.yml that either contains the approval job or not, based on the return code of run-tests.sh passed as the first argument to the script.
You can make it easier on yourself using includes: either include the approve step or not in the generated deploy-gitlab-ci.yml file, and make the deploy jobs optionally need the approval.
approve-gitlab-ci.yml
approve:
  stage: approve
  when: manual
  script:
    - echo "Approved!"
deploy-gitlab-ci.yml
deploy:
  stage: deploy
  needs:
    - job: approve
      optional: true
Then the generated deploy-gitlab-ci.yml is simply an include with the jobs to run:
include:
  - approve-gitlab-ci.yml
  - deploy-gitlab-ci.yml
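For completeness, generate-workflows.sh could look something like this. A hypothetical sketch: the fragment holding the deploy job is renamed deploy-jobs-gitlab-ci.yml here so it does not clash with the generated file, which is itself called deploy-gitlab-ci.yml:

#!/bin/bash
# $1 is the exit code of run-tests.sh, forwarded by the generate-config job.
rc="$1"
{
  echo "include:"
  if [ "$rc" -ne 0 ]; then
    # tests failed: pull in the manual approval job
    echo "  - local: approve-gitlab-ci.yml"
  fi
  echo "  - local: deploy-jobs-gitlab-ci.yml"
} > deploy-gitlab-ci.yml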
I currently have two jobs in my CI file which are nearly identical.
The first is for manually compiling a release build from any git branch.
deploy_internal:
  stage: deploy
  script: ....<deploy code>
  when: manual
The second is to be used by the scheduler to release a daily build from develop branch.
scheduled_deploy_internal:
  stage: deploy
  script: ....<deploy code from deploy_internal copy/pasted>
  only:
    variables:
      - $MY_DEPLOY_INTERNAL != null
It feels wrong to have all that deploy code repeated in two places. And it gets worse: there are also deploy_external, deploy_release, and scheduled variants.
My question:
Is there a way that I can combine deploy_internal and scheduled_deploy_internal such that the manual/scheduled behaviour is retained (DRY basically)?
Alternatively: Is there is a better way that I should structure my jobs?
Edit:
Original title: Deploy job. Execute manually except when scheduled
You can use YAML anchors and aliases to reuse the script.
deploy_internal:
  stage: deploy
  script:
    - &deployment_scripts |
      echo "Deployment Started"
      bash command 1
      bash command 2
  when: manual

scheduled_deploy_internal:
  stage: deploy
  script:
    - *deployment_scripts
  only:
    variables:
      - $MY_DEPLOY_INTERNAL != null
Or you can use the extends keyword.
.deployment_script:
  script:
    - echo "Deployment started"
    - bash command 1
    - bash command 2

deploy_internal:
  extends: .deployment_script
  stage: deploy
  when: manual

scheduled_deploy_internal:
  extends: .deployment_script
  stage: deploy
  only:
    variables:
      - $MY_DEPLOY_INTERNAL != null
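On newer GitLab versions the two jobs can also be collapsed into a single job with the rules keyword; a sketch, not from the original answers:

deploy_internal:
  stage: deploy
  script:
    - echo "Deployment started"
    - bash command 1
    - bash command 2
  rules:
    # scheduled run: start automatically when the variable is set
    - if: '$MY_DEPLOY_INTERNAL != null'
      when: on_success
    # otherwise: require a manual trigger
    - when: manual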
Use GitLab's default section containing a before_script:
default:
  before_script:
    - ....<deploy code>

job1:
  stage: deploy
  script: ....<code to run after the deploy>

job2:
  stage: deploy
  script: ....<code to run after the deploy>
Note: the default section does not work if you execute a job locally with the gitlab-runner exec command - use YAML anchors instead.