Passing build parameters downstream using Groovy (Jenkins build pipeline)

I've seen a number of examples that execute a pre-build system Groovy script to the effect of:
import hudson.model.*
def thr = Thread.currentThread()
def build = thr?.executable
printf "Setting SVN_UPSTREAM as "+ build.getEnvVars()['SVN_REVISION'] +"\n" ;
build.addAction(new ParametersAction(new StringParameterValue('SVN_UPSTREAM', build.getEnvVars()['SVN_REVISION'])))
This is intended to make SVN_UPSTREAM available to all downstream jobs.
With this in mind I attempt to use $SVN_UPSTREAM in a manually executed downstream job like
https://code.mikeyp.com/svn/mikeyp/client/trunk#$SVN_UPSTREAM
The variable is not resolved, which causes an error.
Can anyone spot the problem here?

The bleeding-edge Jenkins Build Pipeline plugin now supports parameter passing, which eliminated the need for the Groovy workaround for me.

Make sure that the parameter you are passing downstream is not also defined as a parameter in the downstream job where you wish to use it. That is, in the downstream job, if you have "This build is parameterized" checked, do not add SVN_UPSTREAM to the list of parameters. If you do, it overrides the value passed from upstream.
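As a quick sanity check that the value actually arrives, here is a minimal sketch (not from the original posts) of a system Groovy build step in the downstream job, following the same Thread.currentThread() pattern used above; it assumes the upstream job added SVN_UPSTREAM as shown:
import hudson.model.*
// resolve the parameter that the upstream job attached via ParametersAction
def build = Thread.currentThread()?.executable
def svnUpstream = build.buildVariableResolver.resolve('SVN_UPSTREAM')
out.println "SVN_UPSTREAM received from upstream: " + svnUpstream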

Related

How to exclude gitlab CI job from pipeline based on a value in a project file?

I need to exclude a job from pipeline in case my project version is pre-release.
How I know it's a pre-release?
I set the following in the version info file, that all project files and tools use:
version = "1.2.3-pre"
From CI script, I parse the file, extract the version value, and know whether it's a pre-release or not, and can set the result in an environment variable.
The only way I know to exclude a job from a pipeline is to use rules, but I also know from the GitLab docs that:
rules are evaluated before any jobs run
before_script is also documented to run together with script, i.e. after the rules have been applied.
I can stop the job from the script itself based on the version value, but only after it has started. What I need is to remove the job from the pipeline in the first place, so it's not displayed in the pipeline history. Any idea?
Thanks
How do you run (start) your pipeline, and is the information whether "it's a pre-release" already known at this point?
If yes, then you could add a flag like IS_PRERELEASE as a variable to the pipeline, and use that in the rules: section of your job. The drawback is that this will not work with automatic pipelines (triggered by a commit or MR); but you can use this approach with manually triggered pipelines (https://docs.gitlab.com/ee/ci/variables/#override-a-variable-when-running-a-pipeline-manually) or via the API (https://docs.gitlab.com/ee/ci/triggers/#pass-cicd-variables-in-the-api-call).
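For illustration, a minimal .gitlab-ci.yml sketch of that approach; the job name and publish script are hypothetical, and IS_PRERELEASE is assumed to be supplied as a pipeline variable when the pipeline is started manually or through the API:
release:
  script:
    - ./publish.sh            # hypothetical release step
  rules:
    - if: '$IS_PRERELEASE == "true"'
      when: never             # drop the job from the pipeline entirely
    - when: on_success        # otherwise run as usual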

How to read labels in Gitlab CI script

I have a few use cases in my Gitlab setup I would like to be able to support:
If a certain label (let's call it “skip_build”) is set, the deployment steps should not be run when I merge an MR to a main branch. This would be useful when we have multiple MRs being merged right after another and only need the last one built.
If another label (we'll call it “skip_tests”) is set, I should be able to read it as an env var from within the script and alter the flow within the script accordingly (using normal bash syntax), e.g. to alter the package command parameters used a bit. This is useful for small changes where it might not make sense to run a lengthy test suite.
Is this possible with Gitlab, and if so, how?
I've tried experimenting with CI_MERGE_REQUEST_LABELS, but I don't seem to be able to read it as an env var from within the script.
You have to use merge request pipelines for the CI_MERGE_REQUEST_LABELS variable (and other MR-related variables) to be present as documented in predefined variables.
You could use a rules: clause to skip jobs. Something like
build:
  rules: # only run this job if the regex pattern does not match
    - if: $CI_MERGE_REQUEST_LABELS !~ /skip_build/
You can also do this on any other kind of predefined (or user-defined) variable, like branch name, commit messages, MR titles, etc. Whatever works for you.
For example, a built in feature of GitLab is that if your commit message contains [ci skip] it will prevent the pipeline from running. You could implement similar functionality for your jobs and/or pipelines through rules: or workflow:rules:.
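As a concrete sketch combining both points, assuming merge request pipelines are enabled through workflow:rules (which is what makes CI_MERGE_REQUEST_LABELS available) and that the jobs run under bash; the test commands are placeholders:
workflow:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'   # MR pipelines expose CI_MERGE_REQUEST_LABELS
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'

test:
  script:
    # read the labels as an ordinary env var inside the script
    - |
      if [[ "$CI_MERGE_REQUEST_LABELS" == *"skip_tests"* ]]; then
        echo "skip_tests label set, running the reduced suite"
        ./run_tests.sh --quick    # hypothetical command
      else
        ./run_tests.sh            # hypothetical command
      fi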

Declarative Pipeline using env var as choice parameter value

Disclaimer: I can achieve the behavior I’m looking for with Active Choices plugin, BUT I really want this to work in a Jenkinsfile and controlled with scm because it’s tedious to configure the Active Choices on each job we may need them on. And with it being separate from the Jenkinsfile creation, it’s then one job defined in multiple places. :(
I am looking to verify if this is possible, because I can’t get the syntax right, if it is possible. And I haven’t been able to find any examples online:
pipeline {
    environment {
        ARTIFACTS = lib.myfunc() // this works well
    }
    parameters {
        choice(name: "Artifacts", choices: ARTIFACTS) // I can’t get this to work
    }
}
I cannot use the function inline in the declaration of the parameter. The errors were clear about that, but it seems as though I should be able to do what I’ve written out above.
I am not home, so I do not have the exceptions handy, but I will add them soon. They did not seem very helpful while I was working on this yesterday.
What have I tried?
I've tried having the function return a List, because the docs say it requires a list, and I've also tried (illogically) returning a String in the precise syntax of a list of strings. (It was hacky, like return "['" + artifacts.join("', '") + "']" to look like ['artifact1.zip', 'artifact2.zip'].)
I also tried things like "$ARTIFACTS" and ${ARTIFACTS} in desperation.
The list of choices has to be supplied as a String containing newline characters (\n): choices: 'TESTING\nSTAGING\nPRODUCTION'
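For example, a minimal declarative sketch with hard-coded, purely illustrative artifact names:
pipeline {
    agent any
    parameters {
        // the choices are one String with the options separated by \n
        choice(name: 'Artifacts', choices: 'artifact1.zip\nartifact2.zip', description: 'Artifact to use')
    }
    stages {
        stage('Show selection') {
            steps {
                echo "Selected: ${params.Artifacts}"
            }
        }
    }
}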
I was tipped off by this article:
https://st-g.de/2016/12/parametrized-jenkins-pipelines
Related to a bug:
https://issues.jenkins.io/plugins/servlet/mobile#issue/JENKINS-40358
:shrug:
First, we need to understand how Jenkins runs a parameterized pipeline: it starts by presenting you with the Parameters page. Once you've set up the parameters and pressed Build, a node is allocated, variables are set, and your code starts to run.
But in your pipeline, as presented above, you want to run some code to prepare the parameters.
This is not how Jenkins usually works. It is definitely not doing the following: allocating a node, setting the variables, running some of your code until the parameters clause is reached, stopping all that, presenting you with the GUI, and then continuing where it left off. Again, that is not how Jenkins works.
This is why, when writing a new pipeline, your first option to build it is Build and not Build with Parameters. Jenkins hasn't run your code yet, so it has no idea whether there are any parameters. When the pipeline runs for the first time, it will remember the parameters (and any choices, if there were any) as configured for that first run, so in the second run you will see the parameters as configured in the first run. (Generally, in run number n you will see the result of the configuration from run number n-1.)
There are a number of ways to overcome this.
If having a "somewhat recent" (rather than "current and absolutely up-to-date") list of choices fits your needs, your code may need only minor changes and will work from the second run onward. (I don't know what exactly lib.myfunc() returns, but if it's a choice of Development/Staging/Production this might be good enough.)
If having a "somewhat recent" situation is an absolute no-no (e.g. your lib.myfunc() returns the list of git branches, and "list of branches as of yesterday" is unacceptable), then your only solution is ActiveChoice. ActiveChoice allows you to run some code before showing you the Build with Parameters GUI (with script approval etc.).

How to make a pre-step system Groovy script interrupt a Jenkins build and set it to SUCCESS?

Say I have a maven2/3 project in Jenkins/Hudson, and BEFORE I run some goals on a Maven project configured in the corresponding config.xml file, I want to run a system Groovy script (ref. system groovy plugin) as a pre-step that interrupts the whole job and sets it to SUCCESS if some condition is met (for example, say I find something in the log file of the previous job). I DO NOT WANT MAVEN TO START EXECUTING THE GOALS.
I have tried
import hudson.model.*
def thr = Thread.currentThread()
def build = thr?.executable
build.executor.interrupt(hudson.model.Result.SUCCESS)
out.print "HELLO"
But it does not work as intended: "HELLO" is printed in the log, and then the build gets ABORTED.
Parsing POMs
Discovered a new module ...
Modules changed, recalculating dependency graph
...
...jdk1.6.0_22/bin/java -Xmx512m -cp ...
<===[JENKINS REMOTING CAPACITY]===>Build was aborted
Thanks for your time.
I do not fully understand what you want, since you have described a solution here rather than your exact problem. I know three plugins which could be useful for your problem:
The Fail The Build plugin lets you set the result of the job and stop further processing. Any status can be set, including success.
The Conditional BuildStep plugin lets you define conditions for its child build steps. If the condition is met, the child step(s) will run.
M2 extra build steps lets you run build steps before or after the Maven build in a Maven job in Jenkins. Note that this functionality has recently become part of core Jenkins.
So the basic idea is that you could add a Conditional BuildStep as a pre-build step in your job, and its child step could be a fail-the-build instance.

On a build of a Jenkins job, is it possible to change build parameters midway through?

We are using Jenkins to automate several of our build and test processes. For some of our processes, the engineer starting the build needs to specify a parameter, but the range of possible and optimal values for that parameter changes throughout the course of the day.
What I would like to do is let the engineer specify a value - if they know an optimal value - or leave it blank and have a value be calculated by an early build step. If the value is calculated, I would like the calculating build step to update the parameter value of the job. That way, all subsequent build steps don't have to worry about using the parameter or calculating it, they just use the parameter regardless.
It looks like the Groovy Script Plugin might be able to do this, but I can't see how I can SET the build parameters, just GET them.
Found the answer: use the EnvInject Plugin. One of the features is a build step that allows you to "inject" parameters into the build job from a settings file. I used one build step to create the settings file, then another build step to inject the new values. Then, all subsequent build steps and post-build operations used the new value.
Update with an example:
To add a new parameter (REPORT_FILE) based on an existing one (JOB_NAME), return a map with new or modified parameters from the Groovy Script box:
// Setting a map for new build parameters
def paramsMap = [:]
// Set REPORT_FILE based on JOB_NAME
def filename = JOB_NAME.replace(' ','_') + ".html"
paramsMap.put("REPORT_FILE", filename)
// Add or modify other parameters...
return paramsMap
Jenkins does have the ability to parameterize builds. For a string parameter, the developer can leave the field blank, and your build scripts can then check whether the environment variable for that parameter is set. If it is not set, the script can perform whatever calculation is needed (I don't think Jenkins has "pre-build steps") and pass the value along. For a choice parameter, the first line can be something like (Default), and again the build script can test its value and act accordingly.
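As a hypothetical sketch of that approach in an Execute Shell build step (the parameter name OPTIMAL_VALUE and the calculation script are illustrative, not from the original answer):
if [ -z "$OPTIMAL_VALUE" ]; then
    # parameter was left blank, so compute a value ourselves
    OPTIMAL_VALUE=$(./calculate_optimal_value.sh)
fi
echo "Using OPTIMAL_VALUE=$OPTIMAL_VALUE"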
Note on (Default)
I tried leaving the first line of the choice box blank, and Jenkins saved it correctly the first time; but when I came back to reconfigure the build, Jenkins ran some kind of trim on the options and the leading blank line was removed, so I settled on (Default).
I hope this helps,
Zachary
