Azure DevOps - Run build job with a conditional statement and expression

I am trying to run a CI stage in Azure DevOps on a self-hosted Linux agent. The stage looks like this:
CI - Build Job:
Task 1: Python script that checks a TRUE or FALSE condition
Task 2: Bash script that executes certain commands
Now, Task 2 should run only when the output of the Task 1 Python script contains "TRUE".
I have referred to a few docs that suggest using custom conditions, for example:
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/conditions?view=azure-devops&tabs=classic
But I am not sure how to write a custom condition, as I am new to this.
NOTE: I want to do this only with a custom condition in the classic editor, not in YAML.

We can define a new variable, or update an existing one, from the Python script when the Task 1 output contains "TRUE", and then use that variable in a custom condition. A sample condition is eq(variables['{variable name}'], '{variable value}'). Task 2 will run only if the condition evaluates to true; if it evaluates to false, Task 2 will be skipped.
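As a minimal sketch of what Task 1 could look like (the variable name taskResult and the check itself are placeholders, not from the original question): Azure Pipelines parses the ##vso[task.setvariable] logging command from a script's stdout to set a pipeline variable.

```python
# Task 1 sketch: run the check, then emit the Azure Pipelines logging
# command that sets a pipeline variable for later tasks to consume.

def check_condition() -> bool:
    # Placeholder: replace with the real TRUE/FALSE check from your script.
    return True

def set_variable_command(name: str, value: str) -> str:
    # ##vso[task.setvariable] is the logging command Azure Pipelines
    # reads from stdout to create or update a variable.
    return f"##vso[task.setvariable variable={name}]{value}"

if __name__ == "__main__":
    result = "TRUE" if check_condition() else "FALSE"
    print(set_variable_command("taskResult", result))
```

Task 2 would then get a custom condition such as and(succeeded(), eq(variables['taskResult'], 'TRUE')), entered under the task's Control Options in the classic editor.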

Related

How to exclude gitlab CI job from pipeline based on a value in a project file?

I need to exclude a job from pipeline in case my project version is pre-release.
How I know it's a pre-release?
I set the following in the version info file, that all project files and tools use:
version = "1.2.3-pre"
From CI script, I parse the file, extract the version value, and know whether it's a pre-release or not, and can set the result in an environment variable.
The only way I know to exclude a job from pipeline is to use rules, while, I know also from gitlab docs that:
rules are evaluated before any jobs run
before_script also is claimed to be called with the script, i.e. after applying the rules.
I can stop the job only after it starts, from the script itself, based on the version value. But what I need is to remove the job from the pipeline in the first place, so it's not displayed in the pipeline history. Any idea?
Thanks
How do you run (start) your pipeline, and is the information whether "it's a pre-release" already known at this point?
If yes, then you could add a flag like IS_PRERELEASE as a variable to the pipeline and use it in the rules: section of your job. The drawback is that this will not work with automatic pipelines (triggered by a commit or MR), but it does work with manually triggered pipelines (https://docs.gitlab.com/ee/ci/variables/#override-a-variable-when-running-a-pipeline-manually) or pipelines started via the API (https://docs.gitlab.com/ee/ci/triggers/#pass-cicd-variables-in-the-api-call).
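As a sketch, assuming IS_PRERELEASE is passed when the pipeline is started manually or via the API (the job name and script are placeholders):

```yaml
release_job:
  script:
    - ./publish_release.sh   # placeholder for the real release step
  rules:
    # Drop the job from the pipeline entirely for pre-releases.
    - if: '$IS_PRERELEASE == "true"'
      when: never
    # Otherwise include it as usual.
    - when: on_success
```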

Add GitLab CI job to pipeline based on script command result

I have a GitLab CI pipeline with a 'migration' job which I want to be added only if certain files changed between the current commit and the master branch. But in my current project I'm forced to use GitLab CI pipelines for push events, which complicates things.
The docs on rules:changes clearly state that it will glitch and not work properly without an MR (my case of push events), so that's out of the question.
The docs on rules:if state that it only works with environment variables. But the docs on passing CI/CD variables to another job clearly state that
These variables cannot be used as CI/CD variables to configure a
pipeline, but they can be used in job scripts.
So now I'm stuck. I could skip running the job in question by overriding the script and checking for file changes, but what I want is not to add the job to the pipeline in the first place.
While you can't add a single job to a pipeline based on the output of a script, you can add a child pipeline dynamically based on the script output. The method of using rules: with dynamic variables won't work because rules: are evaluated at the time the pipeline is created, as you found in the docs.
However, you can achieve the same effect using the dynamic child pipelines feature. The idea is that a job dynamically generates the YAML for the desired pipeline; that YAML is then used to create a child pipeline, which your pipeline can depend on.
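A minimal sketch, assuming a hypothetical generate-pipeline.sh that writes the child YAML with or without the migration job, based on your file-change check:

```yaml
generate-config:
  stage: build
  script:
    # Your script decides which jobs end up in the child pipeline.
    - ./generate-pipeline.sh > child-pipeline.yml
  artifacts:
    paths:
      - child-pipeline.yml

run-child:
  stage: test
  trigger:
    include:
      - artifact: child-pipeline.yml
        job: generate-config
```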
Sadly, adding or removing a GitLab job based on variables created by a previous job is not possible within a given pipeline.
A way to achieve this is to break your current pipeline into an upstream and a downstream pipeline.
The upstream will have two jobs:
The first one uses your script to define a variable
The second one triggers the downstream, passing this variable
Upstream
check_val:
  ...
  script:
    # the script implements the logic with the needed checks
    # if true:
    - echo "MY_CONDITIONAL_VAR=true" >> var.env
    # if false:
    - echo "MY_CONDITIONAL_VAR=false" >> var.env
  artifacts:
    reports:
      dotenv: var.env
trigger_your_original_pipeline:
  ...
  variables:
    MY_CONDITIONAL_VAR: "$MY_CONDITIONAL_VAR"
  trigger:
    project: "project_namespace/project"
The downstream would be your original pipeline:
Downstream
...
migration:
  ...
  rules:
    - if: '$MY_CONDITIONAL_VAR == "true"'
Now MY_CONDITIONAL_VAR will be available at the start of the downstream pipeline, so you can use rules to decide whether to include the migration job.

Using conditionals in GitLab CI with user intervention

Is it possible to add conditionals that execute an if or else block based on user input in a GitLab CI YAML file?
In the event the user input is YES, it must trigger an Ansible Tower template to restart a specific service (tower-cli job launch --job-template).
Should I use rules:if with when for such a conditional? Could someone give some insight into the format? I am a first-time user of GitLab and have some experience with Jenkins.
The rules section (and also only and except) is evaluated when the pipeline is first created, before the first job starts running, because it controls which jobs will be in the pipeline. So you can't use variables or conditionals that come from another job, or variables supplied to a manual task.
An alternative is to use a normal bash conditional in your script section and simply exit 0; when the job shouldn't run.
...
Job:
  stage: my_stage
  when: manual
  script:
    - if [ "$MY_USER_INPUT_VARIABLE" == "don't run the job" ]; then exit 0; fi
    - ./run_job.sh
This is just a simple example: you can check the variable for whatever you need it to do (or use multiple conditionals if using multiple variables), and if the job shouldn't run, execute exit 0;, which stops the job without marking it as failed. Otherwise, the job runs whatever it needs to do.

How to run a specific Gitlab job from another Gitlab pipeline?

Is there any way to extend and run only a specific job from another pipeline in my current pipeline without copy-pasting it?
For example I have two pipelines:
1. build -> code_check -> auto_test -> deploy
2. auto_test* -> report
I want to execute pipeline 2 where auto_test* executes on another runner while keeping the job's keys exactly as they are in pipeline 1 (except for "tags" which I add in the job to be able to use another runner).
I have a process restriction that I can't change anything in pipeline 1 config so I need a way to execute only a specific job.
I have tried to do that through include: of the .gitlab-ci.yml plus extends:. It somewhat works, but pipeline 2 ends up with all jobs from both pipelines, which is not what I would like to see.
The most straightforward way would be to copy the auto_test job specification from pipeline 1 into the gitlab-ci YAML of pipeline 2 on each update and add tags: ["MyRunner"], but I hoped there is a built-in way to do that.

Jenkins does not update build result to better result

I have a Groovy script that changes the build result using build.setResult(hudson.model.Result.SUCCESS).
But I realized that I cannot change the job result to a better result, only to worse ones. If I change the code to build.setResult(hudson.model.Result.UNSTABLE), then when the build is successful the result is changed (I can see in the console output: Build step 'Groovy Postbuild' changed build result to UNSTABLE.)
But I can't update the result to a better one.
Is there any solution?
(The same problem occurs with Groovy Postbuild.)
EDIT:
I'm using the MultiJob plugin in my main job to run 3 downstream jobs (named job1, job2, job3). I wrote a Groovy script so that the result of the main job is determined only by the first two downstream jobs (when job1 and job2 succeed and job3 is unstable, I wish to set the main job result to SUCCESS).
Because of the problem mentioned above I can't do it... any ideas?
Thanks.
I believe this is expected behavior in Jenkins. Other methods of changing the build result (such as the Fail The Build plugin) also cannot "improve" the build status; they can only make it worse (success to unstable to failed).
Using the PostBuildScript plugin and a Groovy system script, you can change the build result with Result.fromString(). For example, setting the result to UNSTABLE:
build.result = hudson.model.Result.fromString('UNSTABLE')
In the Console you'll see:
[PostBuildScript] - Execution post build scripts.
[Current build status] check if current [ABORTED] is worse or equals then [ABORTED] and better or equals then [UNSTABLE]
Run condition [Current build status] enabling perform for step [Execute system Groovy script]
Script returned: UNSTABLE