Best Practice for Single-Job Stages - Azure Pipelines YAML

I have a series of pipeline stages, with only one job per stage.
What is the best practice for having only one job per stage?
Below is my example YAML setup:
trigger:
- main

resources:
- repo: self

stages:
# Test
##########################
- stage: Run_Tests
  displayName: Run Tests
  jobs:
  - job: Run_Tests
    displayName: Run Tests
    pool:
      vmImage: 'ubuntu-18.04'
    steps:
    # Testing Steps ...

# Build
##########################
- stage: Build
  displayName: Build
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: 'ubuntu-18.04'
    steps:
    # Build Steps ...

# Deploy
##########################
- stage: Deploy
  displayName: Deploy
  jobs:
  - deployment: VMDeploy
    displayName: Deploy
    # Deploy Steps ...
I have the following multiple times throughout the file:

jobs:
- job: ...
It seems so unnecessary and cluttered to me.
Am I just being pedantic, or is there a better way to do this?

This could make sense when you use a deployment job, because environment restrictions applied at the job level are evaluated at the stage level. So if you combine the Build, Test and Deploy stages and you have an approval configured on the environment used in the Deploy job, you will be asked for approval before the first job starts.
For me, Build and Test could go together; in fact, they should be part of the same job. Why? Because is a change really valid when the build is fine but the tests fail? You can also take advantage of the fact that you have already downloaded the dependencies for the whole project. Having a separate job for tests makes sense to me when we are talking about integration tests.
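For illustration, a minimal sketch of that combined layout, based on the question's pipeline (the environment name 'test' and the echo steps are placeholders, not from the original question):

trigger:
- main

stages:
- stage: Build_And_Test
  displayName: Build and Test
  jobs:
  - job: Build_And_Test
    pool:
      vmImage: 'ubuntu-18.04'
    steps:
    # restore dependencies once, then build and test in the same job
    - script: echo "restore dependencies and build"
      displayName: Build
    - script: echo "run tests against the build"
      displayName: Test

- stage: Deploy
  displayName: Deploy
  jobs:
  - deployment: VMDeploy
    displayName: Deploy
    pool:
      vmImage: 'ubuntu-18.04'
    environment: 'test'   # approvals configured on this environment gate the whole stage
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploy"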

Related

GitLab CI: How to make sure a job executes only if the previous job did?

I have 2 stages with multiple jobs. The jobs in the first stage have some rules that tell them whether they need to run or not, so what I am trying to do is tell some of the jobs in the second stage to execute only if the relevant job in the first stage ran.
I don't want to reuse the same rules I used for the first-stage jobs, to prevent conflicts.
Is there a way to do that?
stages:
  - build
  - deploy

Build0:
  stage: build
  extends:
    - .Build0Rules
    - .Build0Make

Build1:
  stage: build
  extends:
    - .Build1Rules
    - .Build1Make

Deploy0:
  stage: deploy
  dependencies:
    - Build0
  script:
    - bash gitlab-ci/deploy0.sh

Deploy1:
  stage: deploy
  dependencies:
    - Build1
  script:
    - bash gitlab-ci/deploy1.sh
Thank you in advance :)
No, you cannot specify that a job should be added to the pipeline only if another job was added. Each job can specify whether it is added to the pipeline using only/except conditions or rules, but these cannot reference other jobs.
It is possible to generate a pipeline YAML file and then trigger it, but I think this would not be ideal because of the amount of work involved.
stages:
  - Build
  - Deploy

build:
  stage: Build
  script:
    - do something...
  artifacts:
    paths:
      - deploy-pipeline-gitlab-ci.yml

deploy:
  stage: Deploy
  trigger:
    include:
      - artifact: deploy-pipeline-gitlab-ci.yml
        job: build
    strategy: depend
I would recommend using similar only/except conditions or rules on each job to build the pipeline that you want.
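For example, a minimal sketch of sharing one rules template between a paired build and deploy job so that they are always added together (the .build0-rules name and the branch condition are invented for illustration):

.build0-rules:
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'

Build0:
  stage: build
  extends:
    - .build0-rules
  script:
    - make build0     # placeholder

Deploy0:
  stage: deploy
  extends:
    - .build0-rules   # identical rules, so Deploy0 is added exactly when Build0 is
  dependencies:
    - Build0
  script:
    - bash gitlab-ci/deploy0.sh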
Yes you can. You should check the keyword needs, which allows you to do exactly what you want: execute a job based on the execution of other jobs, ignoring stage order.
The documentation: https://docs.gitlab.com/ee/ci/yaml/#needs
Here is also an example of how to build a DAG (directed acyclic graph) using needs: https://about.gitlab.com/blog/2020/12/10/basics-of-gitlab-ci-updated/#directed-acyclic-graphs-get-faster-and-more-flexible-pipelines
In your case:
Deploy0:
  stage: deploy
  needs: ["Build0"]
  script:
    - bash gitlab-ci/deploy0.sh

Deploy1:
  stage: deploy
  needs: ["Build1"]
  script:
    - bash gitlab-ci/deploy1.sh
Note that you can also specify multiple jobs in needs:
needs: ["build0", "test0", "test1"]
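As a side note, needs also accepts an expanded form that controls whether the needed job's artifacts are downloaded (a sketch reusing the job names from the question; artifacts: true is the default):

Deploy0:
  stage: deploy
  needs:
    - job: Build0
      artifacts: true   # also download Build0's artifacts
  script:
    - bash gitlab-ci/deploy0.sh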

Added needs relations to GitLab CI YAML but got an error: the job was not added to the pipeline

I am trying to add needs between jobs in the GitLab CI YAML configuration file.
stages:
  - build
  - test
  - package
  - deploy

maven-build:
  stage: build
  only:
    - merge_requests
    - master
    - branches
  ...

test:
  stage: test
  needs: [ "maven-build" ]
  only:
    - merge_requests
    - master
  ...

docker-build:
  stage: package
  needs: [ "test" ]
  only:
    - master
  ...

deploy-stage:
  stage: deploy
  needs: [ "docker-build" ]
  only:
    - master
  ...

deploy-prod:
  stage: deploy
  needs: [ "docker-build" ]
  only:
    - master
  when: manual
  ...
I have used the GitLab CI online lint tool to check my syntax; it is correct.
But when I pushed the code, it always complains:
'test' job needs 'maven-build' job
but it was not added to the pipeline
You can also test your .gitlab-ci.yml in CI Lint
The GitLab CI did not run at all.
Update: Finally I made it work. I think the position of needs is sensitive; after moving all needs directly under stage, it works. My original script had some other configuration between them.
CI jobs that depend on each other need to have the same limitations!
In your case that means sharing the same only targets:
stages:
  - build
  - test

maven-build:
  stage: build
  only:
    - merge_requests
    - master
    - branches

test:
  stage: test
  needs: [ "maven-build" ]
  only:
    - merge_requests
    - master
    - branches
that should work from my experience^^
Finally I made it work. I think the position of needs is sensitive; after moving all needs directly under stage, it works
Actually... that might no longer be the case with GitLab 14.2 (August 2021):
Stageless pipelines
Using the needs keyword in your pipeline configuration helps to reduce cycle times by ignoring stage ordering and running jobs without waiting for others to complete.
Previously, needs could only be used between jobs on different stages.
In this release, we’ve removed this limitation so you can define a needs relationship between any job you want.
As a result, you can now create a complete CI/CD pipeline without using stages by including needs in every job to implicitly configure the execution order.
This lets you define a less verbose pipeline that takes less time to create and can run even faster.
See Documentation and Issue.
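A minimal sketch of such a stageless pipeline (the job names are invented for illustration); every job declares its needs, and no stages block is required:

build:
  needs: []           # empty needs: starts immediately
  script:
    - make build

test:
  needs: ["build"]
  script:
    - make test

deploy:
  needs: ["test"]
  script:
    - make deploy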
The rules in both jobs should be the same, otherwise GitLab cannot create the job dependency between them when the trigger rules are different.
I don't know why, but if the jobs are in different stages (as in my case), you have to define the jobs that will run later with a "." at the start of their names.
Another interesting thing is that GitLab's own CI/CD lint online editor does not complain that there is an error, so you have to start the pipeline to see it.
Below, notice the "." in ".success_notification" and ".failure_notification":
stages:
  - prepare
  - build_and_test
  - deploy
  - notification

#SOME CODE

build-StandaloneWindows64:
  <<: *build
  image: $IMAGE:$UNITY_VERSION-windows-mono-$IMAGE_VERSION
  variables:
    BUILD_TARGET: StandaloneWindows64

.success_notification:
  needs:
    - job: "build-StandaloneWindows64"
      artifacts: true
  stage: notification
  script:
    - wget https://raw.githubusercontent.com/DiscordHooks/gitlab-ci-discord-webhook/master/send.sh
    - chmod +x send.sh
    - ./send.sh success $WEBHOOK_URL
  when: on_success

.failure_notification:
  needs:
    - job: "build-StandaloneWindows64"
      artifacts: true
  stage: notification
  script:
    - wget https://raw.githubusercontent.com/DiscordHooks/gitlab-ci-discord-webhook/master/send.sh
    - chmod +x send.sh
    - ./send.sh failure $WEBHOOK_URL
  when: on_failure

#SOME CODE
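For context, which the answer above does not mention: GitLab treats any top-level key starting with "." as a hidden job that is never added to the pipeline on its own; hidden jobs are normally used as reusable templates. A sketch (the .notify_template name and the STATUS variable are invented for illustration):

.notify_template:            # hidden job: never runs on its own
  stage: notification
  script:
    - ./send.sh $STATUS $WEBHOOK_URL

success_notification:
  extends: .notify_template  # inherits stage and script
  variables:
    STATUS: success
  when: on_success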

Azure YAML-Pipeline skips jobs and I have no idea why

Even though I set System.Debug=True, I get no other information than "The job was skipped". Literally just these four words.
I created a YAML release pipeline on Azure DevOps which basically runs these jobs:

- job: build_release
- deployment: deploy_test
- deployment: deploy_stage

To test the behavior I first ran only the first two jobs and deployed to TEST. Now I want to deploy to STAGE, but it seems that the pipeline only works when I start from the beginning / create a new release. What I want to do right now is deploy the already existing release from TEST to STAGE. When I try to do that by rerunning the pipeline, Azure just skips all steps. Why is this happening? How can I avoid this and rerun the pipeline? I did not set any conditions.
EDIT with additional info:
Structure of the pipeline
trigger:
- release/*

variables:
  ...

resources:
- repo: self

pool:
  vmImage: $(vmImageName)

stages:
- stage: build_release
  displayName: 'awesome build'
  condition: contains(variables['Build.SourceBranchName'], 'release/')
  jobs:
  - job: build_release
    steps:
      ...

- stage: deploy_test
  displayName: 'awesome test deploy'
  jobs:
  - deployment: deploy_test
    environment: 'test'
    strategy:
      runOnce:
        deploy:
          steps:
            ...

- stage: deploy_stage
  displayName: 'awesome stage deploy'
  jobs:
  - deployment: deploy_stage
    environment: 'stage'
    strategy:
      runOnce:
        deploy:
          steps:
            ...
I tried to trigger it in two different ways which had the same outcome (everything was skipped):
A. I created a new release which was a copy of the previously deployed release.
B. I clicked on run pipeline.
The issue is caused by the condition condition: contains(variables['Build.SourceBranchName'], 'release/') that you specified for the stage build_release.
When the trigger is set to - release/*, the variable variables['Build.SourceBranchName'] is evaluated to the branch name after the /.
For example:
If you trigger your pipeline from branch release/release1.0, the value of variables['Build.SourceBranchName'] will be release1.0 instead of release/release1.0. So the condition contains(variables['Build.SourceBranchName'], 'release/') will always be false, which causes the stage build_release to be skipped.
And if you did not specify a dependency for stages deploy_test and deploy_stage, each stage depends on the previous stage by default. So these two stages also got skipped, since stage build_release was skipped. This is why you saw all the steps being skipped.
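As an aside (not part of the original answer), that default chain can also be broken with an explicit dependsOn plus a condition; a hedged sketch:

- stage: deploy_test
  displayName: 'awesome test deploy'
  dependsOn: build_release
  condition: always()   # run this stage even when build_release is skipped or failed
  jobs:
  - deployment: deploy_test
    environment: 'test'
    ...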
Solution:
Use the variable Build.SourceBranch in the condition.
Change the condition as below (the YAML file in the release branches should also be changed like this):
- stage: build_release
  displayName: 'awesome build'
  condition: contains(variables['Build.SourceBranch'], 'release/') # use Build.SourceBranch
Note: if you manually trigger your pipeline, please make sure you select a release branch to run it from. Otherwise the pipeline will be triggered from the main branch by default.
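(For reference: Build.SourceBranch contains the full ref, for example refs/heads/release/release1.0 for that branch, so it still includes the release/ substring that the condition checks for.)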
The issue here is that when the run of the pipeline was created, it sounds like the deploy stage was not selected. As such, at compilation time of the pipeline the stage was skipped, because it was defined as being skipped within that run.
As for what you are running: the first question would be whether these changes to run deploy_stage are in the main branch. The pipeline will run against the main branch by default unless otherwise specified.

Sending to production or review environments depending on branch

We're using GitLab CI but we have some trouble having review and production environments at the same time.
We have several stages in our .gitlab-ci.yml, but here I'll focus on the deploy stage:
deploy:
  stage: deploy
  script:
    - some commands
  environment:
    name: review/$CI_BUILD_REF_NAME
    url: http://$CI_BUILD_REF_SLUG.$DEPLOY_SERVER
    on_stop: stop_deploy
  only:
    - /^feature-[cw]\/.*$/

deploy:
  stage: deploy
  script:
    - some other commands
  environment:
    name: production
  only:
    - prod

stop_deploy:
  stage: deploy
  variables:
    GIT_STRATEGY: none
  script:
    - some clean commands
  when: manual
  environment:
    name: review/$CI_BUILD_REF_NAME
    action: stop
  only:
    - /^feature-[cw]\/.*$/
The issue is that the first job is not run on the branches whose names start with feature-c/. However, when the second job is removed, the first job does run on those branches.
The job that deploys to production is correctly run when pushing to prod.
So why is the first job not run when the second job is defined? Where does the conflict come from?
Thanks!
The answer is quite simple: they can't have the same name :) Name one deploy-review and the other deploy-prod and it's fixed.
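The reason is that with duplicate keys in YAML, the later deploy: entry effectively overrides the earlier one, so only one of the two jobs ever exists. Applied to the configuration above, the fix would look roughly like this (only the job names change):

deploy-review:
  stage: deploy
  script:
    - some commands
  environment:
    name: review/$CI_BUILD_REF_NAME
    url: http://$CI_BUILD_REF_SLUG.$DEPLOY_SERVER
    on_stop: stop_deploy
  only:
    - /^feature-[cw]\/.*$/

deploy-prod:
  stage: deploy
  script:
    - some other commands
  environment:
    name: production
  only:
    - prod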

How to control whether a stage runs based on the previous stage's result, without using artifacts?

We have a project hosted on an internal GitLab installation.
The project's pipeline has 3 stages:
Build
Tests
Deploy
The objective is to hide or disable the Deploy stage when Tests fail.
The problem is that we can't use artifacts because they are lost each time our machines reboot.
My question: Is there an alternative solution to artifacts to achieve this task?
The used .gitlab-ci.yml looks like this:
stages:
  - build
  - tests
  - deploy

build_job:
  stage: build
  tags:
    # - ....
  before_script:
    # - ....
  script:
    # - ....
  when: manual
  only:
    - develop
    - master

all_tests:
  stage: tests
  tags:
    # - ....
  before_script:
    # - ....
  script:
    # - ....
  when: manual
  only:
    - develop
    - master

prod:
  stage: deploy
  tags:
    # - ....
  script:
    # - ....
  when: manual
  environment: prod
I think you might have misunderstood the purpose of the built-in CI. The goal is to have building and testing automated on each commit, or at least on every push. Having all tasks set to manual execution gives you almost no advantage over external CI tools like Jenkins or Bamboo; your only advantage over local execution right now is having visibility in a central place.
That said, there is no way to conditionally show or hide CI jobs, because it goes against this basic idea. If you insist on your approach, you could look up the artifacts of the previous stages and abort the manual execution in case something is wrong.
The problem is that we can't use artifacts because they are lost each time our machines reboot
AFAIK artifacts are uploaded to the GitLab server and not saved on the runners, so you should be fine having your artifacts passed from stage to stage.
By the way, the default for when is on_success, which means a job is executed only when all jobs from prior stages succeed.
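Building on that, a minimal sketch of the asked-for behavior, assuming the manual gates on build and tests can be dropped (the make commands are placeholders): with the default when: on_success, the prod job is skipped automatically whenever all_tests fails, so nothing needs to be hidden by hand.

stages:
  - build
  - tests
  - deploy

build_job:
  stage: build
  script:
    - make build          # placeholder
  only:
    - develop
    - master

all_tests:
  stage: tests
  script:
    - make test           # placeholder
  only:
    - develop
    - master

prod:
  stage: deploy
  script:
    - make deploy         # placeholder
  environment: prod
  only:
    - master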
