I currently have two jobs in my CI file which are nearly identical.
The first is for manually compiling a release build from any git branch.
deploy_internal:
  stage: deploy
  script: ....<deploy code>
  when: manual
The second is used by the scheduler to release a daily build from the develop branch.
scheduled_deploy_internal:
  stage: deploy
  script: ....<deploy code from deploy_internal copy/pasted>
  only:
    variables:
      - $MY_DEPLOY_INTERNAL != null
It feels wrong to have all that deploy code repeated in two places, and it gets worse: there are also deploy_external and deploy_release jobs, plus their scheduled variants.
My question:
Is there a way that I can combine deploy_internal and scheduled_deploy_internal such that the manual/scheduled behaviour is retained (DRY basically)?
Alternatively: is there a better way that I should structure my jobs?
Edit:
Original title: Deploy job. Execute manually except when scheduled
You can use YAML anchors and aliases to reuse the script.
deploy_internal:
  stage: deploy
  script:
    - &deployment_scripts |
      echo "Deployment Started"
      bash command 1
      bash command 2
  when: manual

scheduled_deploy_internal:
  stage: deploy
  script:
    - *deployment_scripts
  only:
    variables:
      - $MY_DEPLOY_INTERNAL != null
Or you can use the extends keyword.
.deployment_script:
  script:
    - echo "Deployment started"
    - bash command 1
    - bash command 2

deploy_internal:
  extends: .deployment_script
  stage: deploy
  when: manual

scheduled_deploy_internal:
  extends: .deployment_script
  stage: deploy
  only:
    variables:
      - $MY_DEPLOY_INTERNAL != null
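On newer GitLab versions you can go one step further and fold both jobs into a single job with rules. A minimal sketch, assuming it is the scheduled pipeline that sets $MY_DEPLOY_INTERNAL:

deploy_internal:
  stage: deploy
  script:
    - echo "Deployment started"   # shared deploy code lives here only once
  rules:
    # run automatically when the scheduler triggered the pipeline and set the variable
    - if: '$CI_PIPELINE_SOURCE == "schedule" && $MY_DEPLOY_INTERNAL != null'
      when: on_success
    # otherwise offer a manual play button on any branch
    - when: manual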
Use GitLab's default section containing a before_script:
default:
  before_script:
    - ....<deploy code>

job1:
  stage: deploy
  script: ....<code run after the deploy code>

job2:
  stage: deploy
  script: ....<code run after the deploy code>
Note: the default section does not take effect if you execute a job locally with the gitlab-runner exec command - use YAML anchors instead.
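For example, a sketch of the anchor-based variant (keeping the same placeholder deploy code):

.deploy_code: &deploy_code
  - ....<deploy code>

job1:
  stage: deploy
  before_script: *deploy_code
  script: ....<code run after the deploy code>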
I get an error from the GitLab pipeline when I commit the following .gitlab-ci.yml to a repository.
The pipeline is executed to build the solution, deploy to Artifactory, and trigger an API call.
The deploy job has to be executed manually, and there are two different deploy job options to execute.
stages:
  - build
  - deploy
  - trigger

variables:
  APP_PROJECT_ID: ${CUSTOMER_RELEASED}

build_job:
  stage: build
  tags:
    - dotnet
  script:
    - echo "build"
  only:
    - tags
  allow_failure: false

.deploy_job_base:
  stage: deploy
  needs: [build_job]
  tags:
    - dotnet
  script:
    - echo "deploy"
  dependencies:
    - build_job
  only:
    - tags

deploy_job_sport:
  extends: .deploy_job_base
  after_script:
    - $APP_PROJECT_ID = "2096"
  when: manual
  allow_failure: false

deploy_job_others:
  extends: .deploy_job_base
  after_script:
    - $APP_PROJECT_ID = "0"
  when: manual
  allow_failure: false

.trigger_base:
  stage: trigger
  script:
    - echo "Customer Project ID '{$APP_PROJECT_ID}'"
    - echo "Call API..."

trigger_sport:
  extends: .trigger_base
  needs: [deploy_job_sport]

trigger_others:
  extends: .trigger_base
  needs: [deploy_job_others]
The lint status is correct, but I get this error from the GitLab pipeline when I commit the changes:
Found errors in your .gitlab-ci.yml:
'trigger_sport' job needs 'deploy_job_sport' job but 'deploy_job_sport' is not in any previous stage
'trigger_others' job needs 'deploy_job_others' job but 'deploy_job_others' is not in any previous stage
If I remove the trigger_sport and trigger_others jobs and create only one trigger job, it works fine, but then I don't know how to target the two manual jobs (deploy_job_sport and deploy_job_others) from a single job.
Do you have any idea?
Thanks in advance.
I think this is related to the fact that you're using only: tags in your template for the deploy jobs, and the build job is also limited to running only when the commit contains a tag.
The trigger template, however, is missing this limitation, which most likely causes the error: when you push a commit without a tag, pipeline creation still adds the trigger_XY jobs to the pipeline, and they depend on deploy_XY jobs that are not part of it.
Updating your job template for the trigger jobs as follows should resolve the error:
.trigger_base:
  stage: trigger
  script:
    - echo "Customer Project ID '{$APP_PROJECT_ID}'"
    - echo "Call API..."
  only:
    - tags
I am trying to use the "rules" and "only" keywords to define my pipeline behavior for merge requests, pushes to the dev branch, and pushes to the master branch.
I noticed several weird behaviors in GitLab CI; let's look at my merge_requests pipelines.
With this gitlab-ci.yml file, without any rules, all the jobs are displayed and run.
image: "python:3.8"
stages:
- test_without_only_policy
- test_with_only_policy
test_without_only_policy:
stage: test_without_only_policy
when: manual
script:
- echo "Yay, I am in the pipeline"
test_with_only_policy:
stage: test_with_only_policy
script:
- echo "I am always in the pipeline"
Everything is working as expected, great 👍
With this gitlab-ci.yml file, with an "only" policy in the 2nd job, the 1st job without rules disappears.
image: "python:3.8"
stages:
- test_without_only_policy
- test_with_only_policy
test_without_only_policy:
stage: test_without_only_policy
when: manual
script:
- echo "No, I am not in the pipeline anymore"
test_with_only_policy:
stage: test_with_only_policy
only:
- merge_requests
script:
- echo "I am always in the pipeline"
Why did the 1st job without the rules or "only" policy disappear?
With this gitlab-ci.yml file, with a "rules" keyword in the 2nd job, the 1st job without rules disappears.
image: "python:3.8"
stages:
- test_without_only_policy
- test_with_only_policy
test_without_only_policy:
stage: test_without_only_policy
when: manual
script:
- echo "No, I am not in the pipeline anymore"
test_with_only_policy:
stage: test_with_only_policy
rules:
- if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
when: manual
script:
- echo "I am always in the pipeline"
Why did the 1st job without the rules or "only" policy disappear?
Thank you for your help; I don't understand why my job disappears when I add rules to other jobs.
According to the documentation for Merge Request Pipelines, if you have a pipeline where only some jobs use only/except/rules with merge_requests, only those jobs will be in the pipeline if it is a Merge Request pipeline. The other jobs will be left out.
Here's the example from the docs:
build:
  stage: build
  script: ./build
  only:
    - main

test:
  stage: test
  script: ./test
  only:
    - merge_requests

deploy:
  stage: deploy
  script: ./deploy
  only:
    - main
In this example, the only job that specifies it should run for Merge Request pipelines is the test job. For standard push pipelines, the build and deploy jobs will run. But when a new merge request is created, a change is pushed to the source branch of an existing merge request, or you hit the Run Pipeline button on the Pipelines tab of a merge request, a Merge Request pipeline runs instead, and only the test job is included in it.
Here's another example with a different scenario:
A:
  stage: some_stage
  only:
    - branches
    - tags
    - merge_requests
  script:
    - script.sh

B:
  stage: some_other_stage
  only:
    - branches
    - tags
    - merge_requests
  script:
    - script.sh

C:
  stage: third_stage
  only:
    - merge_requests
  script:
    - script.sh
In this example, jobs A and B run for all pipeline types (push, tags, merge_requests, etc.), but job C only runs for merge_request pipelines.
That's why your test_without_only_policy job will never be in a pipeline where test_with_only_policy runs. test_with_only_policy specifically runs for Merge Request events, but test_without_only_policy does not.
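If you want the first job to show up in merge request pipelines as well, one option (a sketch based on your own example) is to give it the same merge_requests target:

test_without_only_policy:
  stage: test_without_only_policy
  when: manual
  only:
    - merge_requests
  script:
    - echo "Now I am in the merge request pipeline too"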
I have a GitLab CI/CD configuration file like this:
image: docker:git

stages:
  - develop
  - production

default:
  before_script:
    - apk update && apk upgrade && apk add git curl

deploy:
  stage: develop
  script:
    - echo "Hello World"

backup:
  stage: develop
  when:
    - manual
    - on_success

remove:
  stage: develop
  when:
    - delayed
    - on_success
  start_in: 30 minutes
In my case, the deploy job runs automatically, and the backup job must run manually, but only after the deploy job completes successfully. However, this configuration doesn't work, and I get an error with this message:
Found errors in your .gitlab-ci.yml:
jobs:backup when should be one of:
on_success
on_failure
always
manual
delayed
How can I use multiple when option arguments in my case?
Basically you can't, because when does not expect an array. You can work around it with needs, though. But this solution only works if you run your jobs in different stages.
image: docker:git

stages:
  - deploy
  - backup
  - remove

deploy:develop:
  stage: deploy
  script:
    - exit 1

backup:develop:
  stage: backup
  script:
    - echo "backup"
  when: manual
  needs: ["deploy:develop"]

remove:develop:
  stage: remove
  script:
    - echo "remove"
  when: delayed
  needs: ["backup:develop"]
  start_in: 30 minutes
I am trying to add needs between jobs in my GitLab CI YAML configuration file.
stages:
  - build
  - test
  - package
  - deploy

maven-build:
  stage: build
  only:
    - merge_requests
    - master
    - branches
  ...

test:
  stage: test
  needs: [ "maven-build" ]
  only:
    - merge_requests
    - master
  ...

docker-build:
  stage: package
  needs: [ "test" ]
  only:
    - master
  ...

deploy-stage:
  stage: deploy
  needs: [ "docker-build" ]
  only:
    - master
  ...

deploy-prod:
  stage: deploy
  needs: [ "docker-build" ]
  only:
    - master
  when: manual
  ...
I have used the GitLab CI online lint tool to check my syntax, and it is correct.
But when I push the code, it always complains:
'test' job needs 'maven-build' job
but it was not added to the pipeline
You can also test your .gitlab-ci.yml in CI Lint
The GitLab CI did not run at all.
Update: Finally I made it work. I think the position of needs is sensitive; moving all needs to sit directly under the stage key fixed it. My original scripts had some other configuration between them.
CI jobs that depend on each other need to have the same limitations!
In your case, that means they should share the same only targets:
stages:
  - build
  - test

maven-build:
  stage: build
  only:
    - merge_requests
    - master
    - branches

test:
  stage: test
  needs: [ "maven-build" ]
  only:
    - merge_requests
    - master
    - branches
that should work from my experience^^
Finally I made it work. I think the needs position is sensitive; moving all needs under the stage key fixed it.
Actually... that might no longer be the case with GitLab 14.2 (August 2021):
Stageless pipelines
Using the needs keyword in your pipeline configuration helps to reduce cycle times by ignoring stage ordering and running jobs without waiting for others to complete.
Previously, needs could only be used between jobs on different stages.
In this release, we’ve removed this limitation so you can define a needs relationship between any job you want.
As a result, you can now create a complete CI/CD pipeline without using stages by including needs in every job to implicitly configure the execution order.
This lets you define a less verbose pipeline that takes less time to create and can run even faster.
See Documentation and Issue.
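For illustration, a minimal sketch of such a stageless pipeline on GitLab 14.2 or later (the script names are hypothetical), where the execution order comes entirely from needs:

build:
  script: ./build.sh

unit-test:
  needs: [build]
  script: ./run_tests.sh

deploy:
  needs: [unit-test]
  script: ./deploy.sh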
The rule in both jobs should be the same; otherwise GitLab cannot create the job dependency between them when the trigger rules differ.
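For example, a sketch with the same rule attached to both jobs (the mvn commands are placeholders):

maven-build:
  stage: build
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    - mvn package   # placeholder build command

test:
  stage: test
  needs: [ "maven-build" ]
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    - mvn test      # placeholder test command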
I don't know why, but if the jobs are in different stages (as in my case), you have to define the jobs that will run later with a "." at the start.
Another interesting thing is that GitLab's own CI/CD Lint online editor does not complain that there is an error, so you have to start the pipeline to see it.
Below, notice the "." in ".success_notification" and ".failure_notification":
stages:
  - prepare
  - build_and_test
  - deploy
  - notification

#SOME CODE

build-StandaloneWindows64:
  <<: *build
  image: $IMAGE:$UNITY_VERSION-windows-mono-$IMAGE_VERSION
  variables:
    BUILD_TARGET: StandaloneWindows64

.success_notification:
  needs:
    - job: "build-StandaloneWindows64"
      artifacts: true
  stage: notification
  script:
    - wget https://raw.githubusercontent.com/DiscordHooks/gitlab-ci-discord-webhook/master/send.sh
    - chmod +x send.sh
    - ./send.sh success $WEBHOOK_URL
  when: on_success

.failure_notification:
  needs:
    - job: "build-StandaloneWindows64"
      artifacts: true
  stage: notification
  script:
    - wget https://raw.githubusercontent.com/DiscordHooks/gitlab-ci-discord-webhook/master/send.sh
    - chmod +x send.sh
    - ./send.sh failure $WEBHOOK_URL
  when: on_failure

#SOME CODE
I was setting up a GitLab environment. After each push I am running 5 test cases, but if any of the test cases fails, the other test cases are skipped.
I want to run all the cases, because they are independent of each other.
gitlab-ci.yml
stages:
  - build
  - unit_test_1
  - unit_test_2
  - unit_test_3

job1:
  stage: build
  script:
    - bash build.sh

job2:
  stage: unit_test_1
  script:
    - bash ./unit_test_1.sh

job3:
  stage: unit_test_2
  script:
    - bash ./unit_test_2.sh

job4:
  stage: unit_test_3
  script:
    - bash ./unit_test_3.sh
If unit_test_1.sh fails, the other tests are skipped.
You can use the when property to make your jobs run every time, regardless of the status of jobs from prior stages of the build.
stages:
  - build
  - test

job1:
  stage: build
  script:
    - bash build.sh

job2:
  stage: test
  when: always
  script:
    - bash ./unit_test_1.sh

job3:
  stage: test
  when: always
  script:
    - bash ./unit_test_2.sh

job4:
  stage: test
  when: always
  script:
    - bash ./unit_test_3.sh
Also, if you want to make sure you never have jobs running in parallel, you can configure your runners with concurrency limits.
Configuring it globally limits all your runners to running 1 job concurrently across all runners.
Configuring it per runner limits that runner to running 1 job concurrently per build token.
You can try it like this:
gitlab-ci.yml
stages:
  - build
  - test

job1:
  stage: build
  script:
    - bash build.sh

job2:
  stage: test
  script:
    - bash ./unit_test_1.sh

job3:
  stage: test
  script:
    - bash ./unit_test_2.sh

job4:
  stage: test
  script:
    - bash ./unit_test_3.sh
The documentation says:
The ordering of elements in stages defines the ordering of builds' execution:
Builds of the same stage are run in parallel.
Builds of the next stage are run after the jobs from the previous stage complete successfully.
https://docs.gitlab.com/ce/ci/yaml/README.html#stages
To run jobs in parallel, you have to give them the same stage name.
https://docs.gitlab.com/ce/ci/pipelines.html#pipelines