Long-running job should not prevent an MR from being merged - gitlab

Consider a pipeline with the following jobs:
build: Runs the build and takes 1 minute
report: Runs a static code analysis, posts the result to the MR and takes 59 minutes
Developers should be informed about the results of the report stage as soon as possible, but it should not block the MR from being merged. The pipeline should behave like this:
build must always succeed before the MR can be merged.
report should always be started eventually and executed successfully, but it should not be mandatory to wait for it in order to be able to merge the MR.
Is there a possibility in gitlab to create such a pipeline?
So far, I am aware of the following options:
Disable the "Pipelines must succeed" setting: In this case, the MR can be merged, even if build is not successful.
Set allow_failure for report to true. In this case, the MR can be merged after build has completed by cancelling the report job, but this violates the requirement that the report should always be executed. Also, it is a poor developer experience to have to cancel an optional job before being able to merge.
Execute the report job after merge. This has two drawbacks:
I will get the report only when the MR is merged instead of as soon as possible.
The report job cannot post its result to the MR, which would notify the involved persons.
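For reference, the allow_failure option from the second bullet would look roughly like this in .gitlab-ci.yml (the script commands are placeholders):

```yaml
stages:
  - build
  - report

build:
  stage: build
  script: ./build.sh          # placeholder build command

report:
  stage: report
  script: ./run-analysis.sh   # placeholder analysis command
  allow_failure: true         # a failing report no longer fails the pipeline,
                              # but the MR still waits for the job unless it is cancelled
```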

You can move your report job into a child pipeline (= a separate .yml file in your project) and trigger it with the trigger keyword and without strategy: depend.
This allows you to trigger the job without waiting for it and without considering its state in your pipeline.
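A minimal sketch of that setup; the child file name report-pipeline.yml and the script commands are assumptions:

```yaml
# .gitlab-ci.yml (parent pipeline)
stages:
  - build
  - report

build:
  stage: build
  script: ./build.sh            # placeholder build command

trigger-report:
  stage: report
  trigger:
    include: report-pipeline.yml
  # no "strategy: depend", so this job completes as soon as the
  # child pipeline is created; its result does not affect the parent
```

```yaml
# report-pipeline.yml (child pipeline)
report:
  script: ./run-analysis.sh     # placeholder analysis command
```

Because the trigger job succeeds immediately, build remains the only gate for merging, while the report still runs (and can post to the MR) in the child pipeline.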

Related

Restrict GitLab pipeline/job based on whether the source is a merge commit

I am specifically not talking about merge requests, which I know I can restrict jobs for with only: or except: statements, and which have ample questions on the site already. I want to restrict certain jobs (expensive testing) from running for merge commits - for a merge request to be merged, unit testing must pass, so it makes no sense to run unit tests again right after merging.
But the only answer I found to that requirement was to restrict tests from ever running on the master branch completely, and we still want to be able to run tests there, we just don't want the tests to run if the commit was a merge commit coming from a merge request where the tests already ran (we only allow merging if tests pass). So simply restricting tests from running on the master branch is not a good solution for us.
I also found multiple references to $CI_PIPELINE_SOURCE containing "merge_request_event" if the pipeline was triggered by merging a merge request, but that doesn't seem to be the case in current GitLab instances. Instead, the variable contains "push", which seems like a bug or at least a misnomer, considering no pushes are involved when I click Merge now on a merge request.
So I am looking for a .gitlab-ci.yml declaration which will prevent a job from running if the pipeline was triggered by a merge commit, but still allow it to run if a pipeline is created on the same branch for any other reason than a merge commit.
As you observed, looking for merge_request_event in $CI_PIPELINE_SOURCE won't help you here because you're looking to change the pipeline behavior for the pipeline that results after the merge. By the time an MR is merged, the MR is closed and there are no more pipelines caused by merge request events.
Because merge commits are no different from any other kind of commit pushed to the branch, the best way to do this would be to rely on the commit message. When you merge an MR in GitLab, it will have a default, uniform message (unless a user edits the merge commit message manually) in the form Merge branch '<source_branch_name>' into '<target_branch_name>'.
my_job:
  rules: # skip merge commits
    # you could make this regex pattern more specific if you wish
    - if: $CI_COMMIT_MESSAGE =~ /^Merge branch/
      when: never
    - when: on_success

Customize pipelines list when a pipeline is scheduled multiple times a day, more frequently than the code changes

I need to run a GitLab pipeline at four specific times each day, which I have solved by setting up four schedules, one for each desired point in time. All pipelines run on the same branch, master.
In the list of pipelines, I get the following information for each pipeline:
status (success or not)
pipeline ID, a label indicating the pipeline was triggered by a schedule, and a label indicating the pipeline was run on the latest commit on that branch
the user that triggered the pipeline
branch and commit on which the pipeline was run
status (success, warning, failure) for each stage
duration and time at which the pipeline was run (X hours/days/... ago)
This seems optimized for pipelines that typically run no more than once after each commit: in such a scenario, it is relatively easy to identify a particular pipeline.
In my case, however, the code itself changes relatively little (the main purpose of the pipeline is to verify against external data that changes several times a day). As a result, I end up with a list of near-identical entries. In fact, the only difference is the time at which the pipeline was run, though for anything older than 24 hours I will get 4 pipelines that ran “2 days ago”.
Is there any way I can customize these entries? For a scheduled pipeline, I would like to have an indicator of the schedule which triggered the pipeline or the time of day (even for pipelines older than 24 hours), optionally a date (e.g. “August 16” rather than “5 days ago”).
To enable the use of absolute times in GitLab:
Click your Avatar in the top right corner.
Click Preferences.
Scroll to Time preferences and uncheck the box next to Use relative times.
Your pipelines will now show the actual date and time at which they were triggered rather than a relative time.
More info here: https://gitlab.com/help/user/profile/preferences#time-preferences

Azure DevOps Releases skip tasks

I'm currently working on implementing CI/CD pipelines for my company in Azure DevOps 2020 (on premise). There is one requirement I just don't seem to be able to solve conveniently: skipping certain tasks depending on user input in a release pipeline.
What I want:
User creates new release manually and decides if a task group should be executed.
Agent Tasks:
1. Powershell
2. Task Group (conditional)
3. Task Group
4. Powershell
What I tried:
Splitting the tasks into multiple jobs with the task group depending on a manual intervention task.
This does not work: if the manual intervention is rejected, the whole execution stops as failed.
Splitting the tasks into multiple stages doing almost the same as above with the same outcome.
Splitting the tasks into multiple stages and triggering every stage manually.
not very usable because you have to execute what you want in the correct order and after the previous stages succeeded.
Variable set at release creation (true/false).
I will use that if nothing better comes up, but it is prone to typos and not very usable for the colleagues who will use this. Unfortunately, Azure DevOps does not seem to support dropdown or checkbox variables for releases (though it works with parameters in builds).
Two Stages one with tasks 1,2,3,4 and one with tasks 1,3,4.
Not very desirable for me because of the duplication.
Any help would be highly appreciated!
It depends on what the criteria are for the pipelines to run. One recommendation would be two pipelines calling the same template, where each pipeline has a true/false embedded in it to pass as a parameter to the template.
The template will have all the tasks defined in it; however, the conditional one will have a condition like:
condition: and(succeeded(), eq('${{ parameters.runExtraStep}}', true))
This condition would be set at the task level.
Any specific triggers can be defined in the corresponding pipeline.
Here is the documentation on Azure YAML Templates to get you started.
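A sketch of that layout, assuming a boolean parameter named runExtraStep and a template file steps-template.yml (both names, and the PowerShell scripts, are illustrative):

```yaml
# azure-pipelines.yml (one of the two pipelines)
steps:
  - template: steps-template.yml
    parameters:
      runExtraStep: true   # the sibling pipeline would pass false
```

```yaml
# steps-template.yml
parameters:
  - name: runExtraStep
    type: boolean
    default: false

steps:
  - powershell: ./first-step.ps1    # placeholder script
  - powershell: ./extra-step.ps1    # placeholder for the conditional work
    condition: and(succeeded(), eq('${{ parameters.runExtraStep }}', true))
  - powershell: ./last-step.ps1     # placeholder script
```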
Unfortunately, it's impossible to add a custom condition for a Task Group, but this feature is on the roadmap. Check the following user voice, where you can vote for it:
https://developercommunity.visualstudio.com/idea/365689/task-group-custom-conditions-at-group-and-task-lev.html
The workaround is that you can clone the release definition (right click a release definition > Clone), then remove some tasks or task groups and save it, after that you can create release with corresponding release definition per to detailed scenario.
Finally I decided to stick with Releases and split my tasks into 3 agent jobs. Job 1 with the first powershell, job 2 with the conditional taskgroup that executes only if a variable is true and job 3 with the remaining tasks.
As both cece-dong and dreadedfrost stated, I could've achieved a selectable runtime parameter for the condition with YAML pipelines. Unfortunately, one of the task groups needs a specific artifact from a YAML pipeline. Most of the time it would be the "latest", which can easily be achieved with a download artifacts task, but sometimes a previous artifact gets chosen. I have found no easy way to achieve this as conveniently as in releases, where you get a dropdown with a list of artifacts by default.
I found this blog post for anyone interested in how you can handle different build artifacts in YAML pipelines.
Thanks for helping me out!

Merge request dropped from merge train with "No stages / jobs for this pipeline."

When I click Start merge train on a merge request (MR), GitLab adds the following to the MR's system notes:
#sferencik started a merge train just now
Great. However, seconds later the following is added to the system notes:
#sferencik removed this merge request from the merge train because No stages / jobs for this pipeline. just now
[emphasis mine]
The list of pipelines doesn't have a new entry: the merge train didn't even get as far as starting one.
The GitLab documentation touches on this. This GitLab issue also talks about this, though in a different context.
What am I doing wrong? I have cut down my .gitlab-ci.yml to the bare minimum, leaving only one stage with one job, which is not conditional. It beats me why GitLab, having performed the speculative merge, should create a pipeline with "no stages / jobs."
This is not a build issue: by the time I click Start merge train, a pipeline has succeeded on my feature branch (the one I want to merge).
Also, if I switch off pipelines for merge results, my MRs have a Merge button instead of Start merge train and it works just fine.
This started happening with our upgrade from GitLab 12.0 to 12.1.
OK, so this is due to my error: the "pipelines for merge requests" feature requires that each job is explicitly marked with
only:
  - merge_requests
My jobs didn't have this explicit condition. (In fact, as I describe above, I took care to remove all conditions from my jobs.)
Thus, when I hit Start merge train, a new pipeline is (or would be) instantiated with only those jobs that have the above condition. In my case that's no jobs at all, hence the error message: "No stages / jobs for this pipeline."
Possible solutions are:
switch off the pre-merge builds (Settings > General > Merge requests > uncheck Merge pipelines will try to validate the post-merge result prior to merging)
modify your .gitlab-ci.yml, tagging each job with the "only: merge_requests" condition (read here about how to build your condition so your jobs are not limited to merge requests)
With that approach you end up with duplicated jobs across merge request builds and merge train builds.
You can use the following config to build either only merge requests or only merge trains.
Merge requests:
merge-request-only:
  stage: build
  script: echo test
  only:
    refs:
      - merge_requests
    variables:
      - $CI_MERGE_REQUEST_EVENT_TYPE == "merged_result"
Merge trains only:
merge-train-only-test:
  stage: build
  script: echo test
  only:
    refs:
      - merge_requests
    variables:
      - $CI_MERGE_REQUEST_EVENT_TYPE == "merge_train"

Defining parallel sequences of jobs in GitLab CI

In my gitlab-ci.yml file, I have defined 3 stages, and the 2nd and 3rd stages have 3 jobs each, resulting in the following structure:
The 1st and 2nd stage works as I intended, however, for the 3rd stage what I'd actually like to have is something like this (the image is a mockup of course), i.e. "parallel sequences" of jobs if you will:
That is, I want "deploy-b" to start as soon as "build-b" is done, rather than waiting for the other build tasks to complete.
Is that possible with GitLab pipelines? (Apart from the obvious solution of defining just 2 stages, the second being "Build-and-Deploy", where I just "merge" the script steps of the current build-* and deploy-* jobs.)
This feature was added in GitLab 12.2.
For GitLab versions earlier than 12.2 this is not possible by design: the next stage only starts once the previous one has finished.
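The 12.2 feature referred to is presumably the needs keyword (directed acyclic graph pipelines). A minimal sketch using the job names from the question's mockup (script commands are placeholders):

```yaml
stages:
  - build
  - deploy

build-a:
  stage: build
  script: ./build-a.sh    # placeholder

build-b:
  stage: build
  script: ./build-b.sh    # placeholder

deploy-b:
  stage: deploy
  script: ./deploy-b.sh   # placeholder
  needs: [build-b]        # starts as soon as build-b finishes,
                          # without waiting for the other build jobs
```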
