Defining parallel sequences of jobs in GitLab CI

In my gitlab-ci.yml file, I have defined 3 stages, and the 2nd and 3rd stages have 3 jobs each, resulting in the following structure:
The 1st and 2nd stages work as I intended; however, for the 3rd stage what I'd actually like to have is something like this (the image is a mockup, of course), i.e. "parallel sequences" of jobs, if you will:
That is, I want "deploy-b" to start as soon as "build-b" is done, without waiting for the other build tasks to complete.
Is that possible with GitLab pipelines? (Apart from the obvious solution of defining just 2 stages, the second being "Build-and-Deploy", where I just "merge" the script steps of the current build-* and deploy-* jobs.)

This feature was added in GitLab 12.2, via the needs keyword (directed acyclic graph pipelines).

No, for GitLab versions earlier than 12.2 this is not possible by design: the next stage only starts once the previous one is done.
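For illustration, here is a minimal sketch of a pipeline using the needs keyword from GitLab 12.2+; the job names mirror the question, and the script commands are placeholders:

stages:
  - build
  - deploy

build-a:
  stage: build
  script: ./build-a.sh        # placeholder build command

build-b:
  stage: build
  script: ./build-b.sh

deploy-a:
  stage: deploy
  needs: ["build-a"]          # waits only for build-a
  script: ./deploy-a.sh

deploy-b:
  stage: deploy
  needs: ["build-b"]          # starts as soon as build-b finishes
  script: ./deploy-b.sh

With needs, each deploy job starts as soon as the build job it depends on succeeds, instead of waiting for the whole build stage.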

Related

GitLab-CI: Why do stageless pipelines show all jobs under stage "test"? Can this be changed?

I've moved my pipeline to a "stageless" one, simply by using needs rules and removing all stage declarations.
This all works fine, but I've noticed that all my jobs now appear under a single stage called "Test".
This is not a functional problem, but it does make developers question why it's the case. Is there any way to change this default stage name with Cloud-hosted GitLab?
Is it as simple as setting all of the jobs to use stage with the same value? Seems like a bit of a hack, and contrary to the instructions to "remove all stage keywords from .gitlab-ci.yml".
As per the documentation on stages,
If a job does not specify a stage, the job is assigned the test stage.
For your question:
Is there any way to change this default stage name with Cloud-hosted GitLab?
You can define a single stage by using the following:
stages:
- some-other-name
and then refer to the new stage name (some-other-name) in each of your jobs; the default test stage then no longer appears, because (from the same reference above)
if a stage is defined but no jobs use it, the stage is not visible in the pipeline
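For example, a minimal sketch (job names and scripts are placeholders) that keeps the needs rules but gives every job the same explicit stage:

stages:
  - checks                    # any name you prefer instead of "test"

lint:
  stage: checks
  script: ./lint.sh

unit-tests:
  stage: checks
  needs: ["lint"]             # ordering is still driven by needs
  script: ./run-tests.sh

All jobs then appear under "checks" in the pipeline view, and the default test stage disappears because no job uses it.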

Long running job should not prevent a MR from being merged

Consider a pipeline with the following jobs:
build: Runs the build and takes 1 minute
report: Runs a static code analysis, posts the result to the MR and takes 59 minutes
Developers should be informed about the results of the report stage as soon as possible, but it should not block the MR from being merged. The pipeline should behave like this:
build must always be successful, before the MR can be merged.
report should always be started eventually and executed successfully, but it should not be mandatory to wait for it in order to be able to merge the MR.
Is there a possibility in gitlab to create such a pipeline?
So far, I am aware of the following options:
Disable the "Pipelines must succeed" setting: In this case, the MR can be merged, even if build is not successful.
Set allow_failure for report to true. In this case, the MR can be merged after build has completed by cancelling the report job, but this violates the requirement that the report should always be executed. Also, it is a poor developer experience if you have to cancel an optional job before being able to merge.
Execute the report job after merge. This has two drawbacks:
I will get the report only when the MR is merged instead of as soon as possible.
The report job cannot post its result to the MR, which would notify the people involved.
You can move your report job into a child pipeline (= a separate .yml file in your project) and trigger it with the trigger keyword and without strategy: depend.
This allows you to trigger the job without waiting for it and without considering its state in your pipeline.
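A minimal sketch of that setup, assuming a child pipeline file named report-pipeline.yml (the file name and scripts are placeholders):

# .gitlab-ci.yml (parent pipeline)
build:
  stage: build
  script: ./build.sh                 # the 1-minute build

trigger-report:
  stage: test
  needs: ["build"]
  trigger:
    include: report-pipeline.yml
  # no "strategy: depend", so the parent pipeline neither waits for the
  # child pipeline nor takes its status into account

# report-pipeline.yml (child pipeline)
report:
  script: ./static-analysis.sh       # the long-running analysis

The trigger job finishes as soon as the child pipeline is created, so the parent pipeline succeeds shortly after build does and the MR can be merged without waiting for report, while report still runs to completion on its own.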

Customize pipelines list when a pipeline is scheduled multiple times a day, more frequently than the code changes

I need to run a GitLab pipeline at four specific times each day, which I have solved by setting up four schedules, one for each desired point in time. All pipelines run on the same branch, master.
In the list of pipelines, I get the following information for each pipeline:
status (success or not)
pipeline ID, a label indicating the pipeline was triggered by a schedule, and a label indicating the pipeline was run on the latest commit on that branch
the user that triggered the pipeline
branch and commit on which the pipeline was run
status (success, warning, failure) for each stage
duration and time at which the pipeline was run (X hours/days/... ago)
This seems optimized for pipelines which typically run no more than once after each commit: in such a scenario, it is relatively easy to identify a particular pipeline.
In my case, however, the code itself has relatively few changes (the main purpose of the pipeline is to verify against external data which changes several times a day). As a result, I end up with a list of near-identical entries. In fact, the only difference is the time at which the pipeline was run, though for anything older than 24 hours I will get 4 pipelines that ran “2 days ago”.
Is there any way I can customize these entries? For a scheduled pipeline, I would like to have an indicator of the schedule which triggered the pipeline or the time of day (even for pipelines older than 24 hours), optionally a date (e.g. “August 16” rather than “5 days ago”).
To enable the use of absolute times in GitLab:
Click your Avatar in the top right corner.
Click Preferences.
Scroll to Time preferences and uncheck the box next to Use relative times.
Your pipelines will now show the actual date and time at which they were triggered rather than a relative time.
More info here: https://gitlab.com/help/user/profile/preferences#time-preferences

Azure DevOps Releases skip tasks

I'm currently working on implementing CI/CD pipelines for my company in Azure DevOps 2020 (on premise). There is one requirement I just don't seem to be able to solve conveniently: skipping certain tasks depending on user input in a release pipeline.
What I want:
User creates new release manually and decides if a task group should be executed.
Agent Tasks:
1. Powershell
2. Task Group (conditional)
3. Task Group
4. Powershell
What I tried:
Splitting the tasks into multiple jobs with the task group depending on a manual intervention task.
This does not work: if the manual intervention is rejected, the whole execution stops as failed.
Splitting the tasks into multiple stages doing almost the same as above with the same outcome.
Splitting the tasks into multiple stages and triggering every stage manually.
Not very usable, because you have to trigger the stages you want in the correct order, and only after the previous stages have succeeded.
Variable set at release creation (true/false).
I will use that if nothing better comes up, but it is prone to typos and not very usable for the colleagues who will work with this. Unfortunately, Azure DevOps does not seem to support dropdown or checkbox variables for releases (though this works with runtime parameters in builds).
Two stages, one with tasks 1, 2, 3, 4 and one with tasks 1, 3, 4.
Not very desirable for me because of the duplication.
Any help would be highly appreciated!
It depends on what the criteria are for the pipelines to run. One recommendation would be two pipelines calling the same template, where each pipeline has a true/false value embedded in it to pass as a parameter to the template.
The template will have all the tasks defined in it; however, the conditional one will have a condition like:
condition: and(succeeded(), eq('${{ parameters.runExtraStep}}', true))
This condition would be set at the task level.
Any specific triggers can be defined in the corresponding pipeline.
Here is the documentation on Azure YAML Templates to get you started.
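As a rough sketch of that layout (file and parameter names here are hypothetical, and the conditional task group is stood in for by an ordinary step, since this is a YAML pipeline):

# steps-template.yml - shared by both pipelines
parameters:
- name: runExtraStep
  type: boolean
  default: false

steps:
- powershell: ./first-script.ps1
  displayName: First PowerShell step
- script: echo "work from the conditional task group goes here"
  displayName: Conditional step
  condition: and(succeeded(), eq('${{ parameters.runExtraStep }}', true))
- powershell: ./last-script.ps1
  displayName: Last PowerShell step

# azure-pipelines-extra.yml - the pipeline that runs the conditional step
trigger: none
pool:
  name: Default                      # assumption: an on-prem agent pool
steps:
- template: steps-template.yml
  parameters:
    runExtraStep: true

The two pipelines then only differ in the value they pass for runExtraStep, so the step list itself is defined once.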
Unfortunately, it is not possible to add a custom condition to a Task Group, but this feature is on the roadmap. Check the following user voice item, which you can vote for:
https://developercommunity.visualstudio.com/idea/365689/task-group-custom-conditions-at-group-and-task-lev.html
The workaround is that you can clone the release definition (right-click a release definition > Clone), then remove some tasks or task groups and save it; after that you can create a release from whichever release definition matches your scenario.
Finally, I decided to stick with Releases and split my tasks into 3 agent jobs: job 1 with the first PowerShell step, job 2 with the conditional task group that executes only if a variable is true, and job 3 with the remaining tasks.
As both cece-dong and dreadedfrost stated, I could have achieved a selectable runtime parameter for the condition with YAML pipelines. Unfortunately, one of the task groups needs a specific artifact from a YAML pipeline. Most of the time that would be the "latest", which is easy to fetch with a download-artifacts task, but sometimes a previous artifact gets chosen. I have found no way to achieve this as conveniently as in Releases, where you get a dropdown with a list of artifacts by default.
I found this blog post, for anyone interested, on how you can handle different build artifacts in YAML pipelines.
Thanks for helping me out!

JCL should read internal reader than completely submit outer JCL

I have a batch job that has 10 steps. In STEP05 I have written an internal JCL, and I want my next step in the parent job, which is STEP06, to execute after the internal reader step has completed successfully. Could you please suggest a resolution to this problem?
For what you have described, there are 2 approaches:
1. Break your process into 3 jobs: steps 1-5 as one job, the second job consisting of the JCL submitted in step 5, and the third job consisting of step 6 (or steps 6-10 - you do not make it clear if the main JCL has 6 steps and the 'inner' JCL 4 steps, making the 10 steps you mention, or if the main JCL has 10 steps). The execution of the 3 jobs will need to be serialised somehow.
2. Simply have the 'inner' JCL as a series of steps in the 'outer' JCL, so that you only have one job with steps that run in order.
The typical approach to this sort of issue would be to use a scheduler to handle the 3-part process as 3 jobs, with the middle one perhaps not submitted by the scheduler but monitored/tracked by it.
With a good scheduler setup, there is a good chance that the jobs could be tracked even if they were run on different machines, or even on different types of machines.
To have a single job delayed halfway through is possible, but it would require some sort of program running in a loop (waiting, so as not to use excessive CPU) checking for an event (a dataset being created or deleted, the job itself could be checked, or numerous other ways).
Another way could be to have part 1 submit a job to do part 2 and that job to then submit another as part 3.
Yet another way, perhaps frowned upon depending upon its importance, would be to have three jobs: the first part submitted to run, and the third part submitted but on hold. The first submits the second, which releases the third.
Of course, there is also the possibility that one job could do everything as a single job.
