Using "name" when Configuring Graphite with Jenkins Job DSL - groovy

I'm trying to configure the Graphite integration plugin for my jobs using Jenkins Job DSL. My block looks like this:
coreJobs = [my jobs here]
coreJobs.each { a ->
    // some extra job config here
    job("$a") {
        project / 'publishers' / 'org.jenkinsci.plugins.graphiteIntegrator.GraphitePublisher' {
            selectedIp '192.123.1.456'
            metrics {
                'org.jenkinsci.plugins.graphiteIntegrator.Metric' {
                    queueName ".${a}.BuildFailed"
                    name 'BUILD_FAILED'
                }
            }
        }
    }
}
Without this Graphite declaration the loop works fine, creating the jobs declared in $a. But because the Graphite DSL requires a "name" parameter, the DSL generator ignores the job names from $a and just creates a single job called "BUILD_FAILED"!
So my question is: how can I stop the DSL plugin from trying to use the "name" parameter as the job name?
Some additional info: I don't think BUILD_FAILED should be a string. I think it's an object, but I'm not sure how I would use that here or whether it requires different syntax.
Thanks

After reading the documentation again, I found an example of a conflicting element:
https://github.com/jenkinsci/job-dsl-plugin/wiki/The-Configure-Block
The doc suggests using the 'delegate' variable, so my code now uses:
delegate.name('BUILD_FAILED')
This means my jobs are created with the right names and no 'BUILD_FAILED' job is generated.
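For reference, a minimal sketch of the adjusted block (the loop and placeholder values are from the question; the explicit configure wrapper is assumed):
coreJobs.each { a ->
    job("$a") {
        configure { project ->
            project / 'publishers' / 'org.jenkinsci.plugins.graphiteIntegrator.GraphitePublisher' {
                selectedIp '192.123.1.456'
                metrics {
                    'org.jenkinsci.plugins.graphiteIntegrator.Metric' {
                        queueName ".${a}.BuildFailed"
                        // qualifying with 'delegate' makes 'name' an XML element
                        // instead of resolving to the enclosing job's name
                        delegate.name('BUILD_FAILED')
                    }
                }
            }
        }
    }
}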

Related

Is this valid Groovy syntax?

Obviously this is part of a Jenkinsfile. However, I want to know whether this syntax (with the necessary modifications) can be run outside of the Jenkins preprocessor/GroovyShell environment as a standalone Groovy program, or whether this structure is not supported by Groovy alone:
pipeline {
    stages {
        // extra stuff here
    }
}
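For what it's worth, the block is syntactically just nested method calls that each take a closure, so a sketch like the following (where pipeline and stages are hand-written stand-ins, not Jenkins APIs) does run as a standalone Groovy script:
// Hand-written stand-ins for the methods Jenkins normally provides.
// Each DSL 'block' is really a method call taking a closure.
def stages(Closure body) {
    println 'entering stages'
    body()
}

def pipeline(Closure body) {
    println 'entering pipeline'
    body()
}

pipeline {
    stages {
        // extra stuff here
    }
}
Inside Jenkins these names are provided (and validated) by the pipeline runtime, so a real Jenkinsfile still only runs there.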

Conditionally Set Environment Azure DevOps

I am working with an Azure Pipeline in which I need to conditionally set the environment property. I am invoking the pipeline from a REST API call, passing Parameters in the body as documented here. When I try to access that parameter at compile time to set the environment conditionally, though, the variable comes through as empty (I assume it is not accessible at compile time?).
Does anybody know a good way to solve this via the pipeline or the API call?
After some digging I have found the answer to my question, and I hope it helps someone else in the future.
As it turns out, the Build REST API does support template parameters that can be used at compile time; the documentation just doesn't explicitly tell you. This is also supported in the Runs endpoint.
My payload for my request ended up looking like:
{
    "Parameters": "{\"Env\":\"QA\"}",
    "templateParameters": { "SkipApproval": "Y" },
    "Definition": {
        "Id": 123
    },
    "SourceBranch": "main"
}
and my pipeline consumed those changes at compile time via the following (abbreviated) version of my pipeline:
parameters:
  - name: SkipApproval
    default: ''
    type: string
...
${{if eq(parameters.SkipApproval, 'Y')}}:
  environment: NoApproval-All
${{if ne(parameters.SkipApproval, 'Y')}}:
  environment: digitalCloud-qa
This is a common area of confusion for YAML pipelines. Run-time variables need to be accessed using a different syntax:
$[ variables.myVar ]
YAML pipelines go through several phases.
Compilation - This is where all of the YAML documents (templates, etc.) comprising the final pipeline are compiled into a single document. Final values for parameters and for variables using the ${{ }} syntax are inserted into the document.
Runtime - Run-time variables using the $[ ] syntax are plugged in.
Execution - The final pipeline is run by the agents.
This is a simplification; another explanation from Microsoft is a bit better:
First, expand templates and evaluate template expressions.
Next, evaluate dependencies at the stage level to pick the first stage(s) to run.
For each stage selected to run, two things happen:
- All resources used in all jobs are gathered up and validated for authorization to run.
- Evaluate dependencies at the job level to pick the first job(s) to run.
For each job selected to run, expand multi-configs (strategy: matrix or strategy: parallel in YAML) into multiple runtime jobs.
For each runtime job, evaluate conditions to decide whether that job is eligible to run.
Request an agent for each eligible runtime job.
...
This ordering helps answer a common question: why can't I use certain variables in my template parameters? Step 1, template expansion, operates solely on the text of the YAML document. Runtime variables don't exist during that step. After step 1, template parameters have been resolved and no longer exist.
[ref: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/runs?view=azure-devops]

Add parameter "Build Selector for Copy Artifact" using Jenkins DSL

I'm converting a Jenkins job from manual configuration to DSL, which means I'm attempting to create a DSL script that recreates the job(s) as they exist today.
The job is currently parameterized and one of the parameters is of the type "Build Selector for Copy Artifact". I can see in the job XML that it is the copyartifact plugin and specifically I need to use the BuildSelectorParameter.
However the Jenkins DSL API has no guidance on using this plugin to set a parameter - it only has help for using it to create a build step, which is not what I need.
I also can't find anything to do with this under the parameter options in the API.
I want to include something in the DSL seed script which will create a parameter in the generated job matching the existing manually-configured one.
If I need to use the configure block then any tips on that would be welcome too, because for a beginner the documentation on it is pretty useless.
I have found no way to set up the build selector parameter other than using the configure block. This is what I used to set it up:
freeStyleJob {
    ...
    configure { project ->
        def paramDefs = project / 'properties' / 'hudson.model.ParametersDefinitionProperty' / 'parameterDefinitions'
        paramDefs << 'hudson.plugins.copyartifact.BuildSelectorParameter'(plugin: "copyartifact@1.38.1") {
            name('BUILD_SELECTOR')
            description('The build number to deploy')
            defaultSelector(class: 'hudson.plugins.copyartifact.SpecificBuildSelector') {
                buildNumber()
            }
        }
    }
}
To arrive at that, I manually created a job with the build selector parameter, and then looked at the job's XML configuration under Jenkins to find the relevant part, which in my case was:
<project>
  ...
  <properties>
    <hudson.model.ParametersDefinitionProperty>
      <parameterDefinitions>
        ...
        <hudson.plugins.copyartifact.BuildSelectorParameter plugin="copyartifact@1.38.1">
          <name>BUILD_SELECTOR</name>
          <description></description>
          <defaultSelector class="hudson.plugins.copyartifact.SpecificBuildSelector">
            <buildNumber></buildNumber>
          </defaultSelector>
        </hudson.plugins.copyartifact.BuildSelectorParameter>
      </parameterDefinitions>
    </hudson.model.ParametersDefinitionProperty>
  </properties>
  ...
</project>
To replicate that using the configure block you need to understand the following things:
- The first argument to the configure block is the job's root node.
- The / operator returns the child of a node with the given name, creating it if it doesn't exist.
- The << operator appends the node given as its right-hand operand to the children of its left-hand operand.
- When creating a node, you can give it attributes in the call, like: myNodeName(attributeName: 'attributeValue').
- You can pass a closure to the new node and use it to populate its internal structure.
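To make those rules concrete, here is a small illustrative sketch (the node and attribute names are invented for the example, not taken from any real plugin):
configure { project ->
    // 'project' is the job's root XML node;
    // '/' returns (or creates) the named child node
    def parent = project / 'someParent' / 'someChild'
    // '<<' appends a new child node; attributes are passed like named
    // arguments, and the closure populates the node's nested elements
    parent << 'someNode'(someAttribute: 'someValue') {
        innerElement('inner value')
    }
}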
I have Jenkins version 1.6 (with the Copy Artifact plugin), and you can do it in the DSL like this:
job('my-job') {
    steps {
        copyArtifacts('job-id') {
            includePatterns('artifact-name')
            buildSelector { latestSuccessful(true) }
        }
    }
}
full example:
job('03-create-hive-table') {
    steps {
        copyArtifacts('seed-job-stash') {
            includePatterns('jenkins-jobs/scripts/landing/hive/landing-table.sql')
            buildSelector { latestSuccessful(true) }
        }
        copyArtifacts('02-prepare-landing-dir') {
            includePatterns('jenkins-jobs/scripts/landing/shell/02-prepare-landing-dir.properties')
            buildSelector { latestSuccessful(true) }
        }
        shell(readFileFromWorkspace('jenkins-jobs/scripts/landing/03-ps-create-hive-table.sh'))
    }
    wrappers {
        environmentVariables {
            env('QUEUE', 'default')
            env('DB_NAME', 'table_name')
            env('VERSION', '20161215')
        }
        credentialsBinding { file('KEYTAB', 'mycred') }
    }
    publishers { archiveArtifacts('03-create-landing-hive-table.properties') }
}

Jenkins Script to Restrict all builds to a given node

We've recently added a second slave node to our Jenkins build environment, running a different OS (Linux instead of Windows) for specific builds. Unsurprisingly, this means we need to restrict builds via the "Restrict where this project can be run" setting. However, we have a lot of builds (100+), so the prospect of clicking through them all to change this setting manually isn't thrilling me.
Can someone provide a Groovy script to achieve this via the Jenkins script console? I've used similar scripts in the past for changing other settings, but I can't find any reference for this particular one.
I managed to figure out the script for myself based on previous scripts and the Jenkins source. The script is as follows:
import hudson.model.*
import hudson.model.labels.*
import hudson.maven.*
import hudson.tasks.*
import hudson.plugins.git.*

hudsonInstance = hudson.model.Hudson.instance
allItems = hudsonInstance.allItems
buildableItems = allItems.findAll { job -> job instanceof BuildableItemWithBuildWrappers }
buildableItems.each { item ->
    item.allJobs.each { job ->
        job.assignedLabel = new LabelAtom('windows-x86')
    }
}
Replace 'windows-x86' with whatever your node label needs to be. You could also do conditional changes based on item.name to filter out some jobs, if necessary.
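For instance, a hedged variation that relabels only the jobs whose names match a pattern (the prefix here is purely illustrative):
// only touch items whose names carry an illustrative 'linux-' prefix
buildableItems.findAll { it.name.startsWith('linux-') }.each { item ->
    item.allJobs.each { job ->
        job.assignedLabel = new LabelAtom('linux-x64')
    }
}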
You could try the Jenkins Job DSL plugin, which allows you to create a job that alters your other jobs. It works by providing a build step, written in a Groovy-based DSL, that modifies other jobs.
The one here would add a label to the job 'xxxx'. I've cheated a bit by using the job itself as a template.
job {
    using 'xxxx'
    name 'xxxx'
    label 'Linux'
}
You might need to adjust it if some of your jobs are of different types.

Jenkins Groovy Postbuild use static file instead of script

Is it possible to load an external Groovy script into the Groovy Postbuild plugin instead of pasting the script content into each job? We have approximately 200 jobs, so updating them all is rather time consuming. I know that I could write a script to update the config files directly (as in this post: Add Jenkins Groovy Postbuild step to all jobs), but these jobs run 24x7, so finding a window when I can restart Jenkins or reload the config is problematic.
Thanks!
Just put the following in the "Groovy script:" field:
evaluate(new File("... groovy script file name ..."));
Also, you might want to go even further.
What if the script name or path changes?
Using the Template plugin you can create a single "template" job, define the call to the Groovy script (the line above) there, and in all jobs that need it add a post-build action called "Use publishers from another project" referencing this template project.
Update: This is what really solved it for me: https://issues.jenkins-ci.org/browse/JENKINS-21480
"I am able to do just that by doing the following. Enter these lines in lieu of the script in the "Groovy script" box:"
// Delegate to an external script
// The filename must match the class name
import JenkinsPostBuild
def postBuild = new JenkinsPostBuild(manager)
postBuild.run()
"Then in the "Additional groovy classpath" box enter the path to that file."
We do it in the following fashion.
We have a file c:\somepath\MyScriptLibClass.groovy (accessible to Jenkins) which contains the code of a Groovy class MyScriptLibClass. The class contains a number of functions designed to act like static methods (to be mixed in later).
We include these functions by writing the following statement at the beginning of system Groovy and postbuild Groovy steps:
[ // include lib scripts
    'MyScriptLibClass'
].each { this.metaClass.mixin(new GroovyScriptEngine('c:\\somepath').loadScriptByName(it + '.groovy')) }
This may look a bit ugly, but you need to write it only once per script. You can include more than one script, and also use inheritance between library classes.
Here you see that all methods from the library class are mixed into the current script. So if your class looks like:
class MyScriptLibClass {
    def setBuildName(String str) {
        binding?.variables['manager'].build.displayName = str
    }
}
in Groovy Postbuild you could write just:
[ // include lib scripts
    'MyScriptLibClass'
].each { this.metaClass.mixin(new GroovyScriptEngine('c:\\somepath').loadScriptByName(it + '.groovy')) }
setBuildName( 'My Greatest Build' )
and it will change your current build's name.
There are also other ways to load external Groovy classes, and it is not necessary to use mixing in. For instance, you can take a look here: Compiling and using Groovy classes from Java at runtime?
How did I solve this:
Create a file $JENKINS_HOME/scripts/PostbuildActions.groovy with the following content:
public class PostbuildActions {
    void setBuildName(Object manager, String str) {
        // use the manager passed in as an argument; 'binding' is not
        // available inside a plain class
        manager.build.displayName = str
    }
}
In this case in Groovy Postbuild you could write:
File script = new File("${manager.envVars['JENKINS_HOME']}/scripts/PostbuildActions.groovy")
Object actions = new GroovyClassLoader(getClass().getClassLoader()).parseClass(script).newInstance();
actions.setBuildName(manager, 'My Greatest Build');
If you wish to keep the Groovy script in your code repository and have it loaded onto the build/test slave in the workspace, then you need to be aware that Groovy Postbuild runs on the master.
For us, the master is a Unix server, while the build/test slaves are Windows PCs on the local network. As a result, prior to using the script, we must open a channel from the master to the slave, and use a FilePath to the file.
The following worked for us:
import hudson.FilePath

// Get an instance of the build object, and from there
// the channel from the master to the workspace
build = Thread.currentThread().executable
channel = build.workspace.channel

// Open a FilePath to the script
fp = new FilePath(channel, build.workspace.toString() + "<relative path to the script in Unix notation>")

// Some have suggested that the "not null" check is redundant;
// I've kept it for completeness
if (fp != null) {
    // 'evaluate' requires a string, so read the file contents into a String
    script = fp.readToString()
    // Execute the script
    evaluate(script)
}
I just faced the same task and tried to use @Blaskovicz's approach.
Unfortunately it did not work for me, but I found upgraded code here (Zach Auclair).
Published here with minor changes:
postbuild task
// imports
import hudson.model.*
import groovy.lang.GroovyClassLoader
import groovy.lang.GroovyObject
import java.io.File

// locate the class file checked out from git into the workspace
def postBuildFile = manager.build.getEnvVars()["WORKSPACE"] + "/Jenkins/SimpleTaskPostBuildReporter.GROOVY"
def file = new File(postBuildFile)

// load the custom class from the file
Class groovy = this.class.classLoader.parseClass(file)

// create the custom object
GroovyObject groovyObj = (GroovyObject) groovy.newInstance(manager)

// do the report
groovyObj.report()
postbuild class file in git repo (SimpleTaskPostBuildReporter.GROOVY)
class SimpleTaskPostBuildReporter {
    def manager

    public SimpleTaskPostBuildReporter(Object manager) {
        if (manager == null) {
            throw new RuntimeException("Manager object can't be null")
        }
        this.manager = manager
    }

    public def report() {
        // do work with the manager object
    }
}
I haven't tried this exactly.
You could try the Jenkins Job DSL plugin, which allows you to rebuild jobs from within Jenkins using a Groovy DSL. It supports Groovy post-build steps directly; from the wiki:
Groovy Postbuild
Executes Groovy scripts after a build.
groovyPostBuild(String script, Behavior behavior = Behavior.DoNothing)
Arguments:
- script: The Groovy script to execute after the build. See the plugin's page for details on what can be done.
- behavior: optional. If the script fails, allows you to mark the build as failed, unstable, or do nothing. The behavior argument uses an enum, which currently has three values: DoNothing, MarkUnstable, and MarkFailed.
Examples:
This example will run a Groovy script that prints hello, world, and if that fails, it won't affect the build's status:
groovyPostBuild('println "hello, world"')
This example will run a Groovy script, and if that fails will mark the build as failed:
groovyPostBuild('// some groovy script', Behavior.MarkFailed)
This example will run a Groovy script, and if that fails will mark the build as unstable:
groovyPostBuild('// some groovy script', Behavior.MarkUnstable)
(Since 1.19)
There is a facility to use a template job (this is the bit I haven't tried), which could be the job itself, so you would only need to add the post-build step. If you don't use a template you need to recode the whole project.
My approach is to have a script to regenerate or create all jobs from scratch, just so I don't have to apply the same upgrade multiple times. Regenerated jobs keep their build history.
I was able to get the following to work (I also posted on this jira issue).
in my postbuild task
// parseClass needs a File here; a bare String would be compiled as source text
this.class.classLoader.parseClass(new File("/home/jenkins/GitlabPostbuildReporter.groovy"))
GitlabPostbuildReporter.newInstance(manager).report()
in my file on disk at /home/jenkins/GitlabPostbuildReporter.groovy
class GitlabPostbuildReporter {
    def manager

    public GitlabPostbuildReporter(manager) {
        if (manager == null) {
            throw new RuntimeException("Manager object mustn't be null")
        }
        this.manager = manager
    }

    public def report() {
        // do work with the manager object
    }
}
