Jenkins Script to Restrict all builds to a given node - groovy

We've recently added a second slave node to our Jenkins build environment running a different OS (Linux instead of Windows) for specific builds. Unsurprisingly, this means we need to restrict builds via the "Restrict where this project can be run" setting. However, we have a lot of builds (100+), so the prospect of clicking through them all to change this setting manually isn't thrilling me.
Can someone provide a groovy script to achieve this via the Jenkins script console? I've used similar scripts in the past for changing other settings but I can't find any reference for this particular setting.

Managed to figure out the script for myself based on previous scripts and the Jenkins source. Script is as follows:
import hudson.model.*
import hudson.model.labels.*
import hudson.maven.*
import hudson.tasks.*
import hudson.plugins.git.*

hudsonInstance = hudson.model.Hudson.instance
allItems = hudsonInstance.allItems
buildableItems = allItems.findAll { job -> job instanceof BuildableItemWithBuildWrappers }

buildableItems.each { item ->
    item.allJobs.each { job ->
        job.assignedLabel = new LabelAtom('windows-x86')
    }
}
Replace 'windows-x86' with whatever your node label needs to be. You could also do conditional changes based on item.name to filter out some jobs, if necessary.
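For instance, a sketch of the same idea that only relabels jobs whose names match a prefix; the 'win-' prefix here is a made-up example, not something from the question:

import hudson.model.*
import hudson.model.labels.*

def targetLabel = new LabelAtom('windows-x86')
Hudson.instance.allItems
    .findAll { it instanceof BuildableItemWithBuildWrappers }
    .findAll { it.name.startsWith('win-') }       // hypothetical name filter
    .each { item ->
        item.allJobs.each { job ->
            job.assignedLabel = targetLabel       // same assignment as in the script above
        }
    }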

You could try the Jenkins Job-DSL plugin,
which would allow you to create a job that alters your other jobs. It works by providing a build step with a Groovy-based DSL for modifying other jobs.
This one would add a label to the job 'xxxx'. I've cheated a bit by using the job itself as a template.
job {
    using 'xxxx'
    name 'xxxx'
    label 'Linux'
}
You might need to adjust it if some of your jobs are different types.
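If there are many jobs to touch, a hedged sketch in the same (older) Job-DSL style, looping over a list of existing job names and reusing each job as its own template; the names in the list are placeholders:

['job-one', 'job-two', 'job-three'].each { existing ->
    job {
        using existing    // reuse the existing job's configuration as the template
        name existing
        label 'Linux'
    }
}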

Related

How to define the execution order of Cucumber test cases

I want to have two different scenarios in the same feature.
The thing is that Scenario 1 needs to be executed before Scenario 2. I have seen that this can be achieved through Cucumber hooks, but the explanations I have found don't include a concrete Cucumber implementation.
How can I get Scenario 1 executed before Scenario 2?
The feature file is like this:
@Events @InsertExhPlan @DelExhPln
Feature: Insert an Exh Plan and then delete it

  @InsertExhPlan
  Scenario: Add a new ExhPlan
    Given I login as admin
    And I go to automated test
    When I go to ExhPlan section
    And Insert a new exh plan
    Then The exh plan is listed

  @DeleteExhPlan
  Scenario: Delete an Exh Plan
    Given I login as admin
    And Open the automatized tests edition
    When I go to the exh plan section
    And The new exh plan is deleted
    Then The new exhibitor plan is deleted
The Hooks file is:
package com.barrabes.utilities;

import cucumber.api.java.After;
import cucumber.api.java.Before;
import static com.aura.steps.rest.ParentRestStep.logger;

public class Hooks {

    @Before(order = 1)
    public void beforeScenario() {
        logger.info("================This will run before every Scenario================");
    }

    @Before(order = 0)
    public void beforeScenarioStart() {
        logger.info("-----------------Start of Scenario-----------------");
    }

    @After(order = 0)
    public void afterScenarioFinish() {
        logger.info("-----------------End of Scenario-----------------");
    }

    @After(order = 1)
    public void afterScenario() {
        logger.info("================This will run after every Scenario================");
    }
}
The order is now as it should be, but I don't see how the Hooks file controls execution order.
You don't use hooks for that purpose. Hooks are for code that you need to run before and/or after tests, and/or before and/or after test suites; not for controlling the order of features and/or scenarios.
Cucumber scenarios are executed top to bottom. For the example you showed, Scenario: Add a new ExhPlan will execute before Scenario: Delete an Exh Plan if you pass the tag @Events to the test runner. Also, you should not have the scenario tags at the feature level, so remove @InsertExhPlan and @DelExhPln from the Feature. Alternatively, you could pass a comma-separated list of scenario tags to the test runner in the order you want: for example, if you need to run scenario 2 before scenario 1, pass the tags for the corresponding scenarios in the order you wish them to be executed. You can also do this from your CI environment; for example, you can have Jenkins jobs that execute the runs in a specific order by passing the scenario tags in that order. And if you wish to run in the default order, simply pass the feature tag.
About hooks: they should be used for code that needs to run for all features and scenarios. For setup specific to a particular feature, use Background in the Cucumber feature file; a Background block runs before each scenario in a given feature file.
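For reference, a minimal runner sketch (written in Groovy syntax, using the same cucumber.api packages as the question's hooks; the class name, feature path, and glue package are assumptions) that selects both scenarios via the feature-level tag, so they execute top to bottom as described above:

import org.junit.runner.RunWith
import cucumber.api.CucumberOptions
import cucumber.api.junit.Cucumber

@RunWith(Cucumber)
@CucumberOptions(
        features = ['src/test/resources/features'],   // assumed location of the .feature file
        glue = ['com.barrabes'],                       // assumed package containing steps and hooks
        tags = ['@Events']                             // selects both scenarios; they run in file order
)
class RunExhPlanTest {
}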

Magnolia Schedule a Groovy Script

On Magnolia, I've created a Groovy script to delete unused users.
When I run the Groovy script directly from the "DEV > Groovy scripts" interface (on the admin central), it works fine.
Now, I'm trying to schedule the execution of that script.
So I've configured a Command and a Scheduler.
The command:
scheduler > config > commands > default > groovyDeleteUsers
with attributes:
- class = my.commandes.GroovyDeleteAllPublicUsersCommand
The Scheduler:
scheduler > config > jobs > deleteUsersJob
with attributes:
- active=true
- catalog=default
- command=groovyDeleteUsers
- cron=0 0 8 * * * *
Here is how my Groovy script is structured:
package my.commands;

import info.magnolia.commands.*;
import info.magnolia.context.MgnlContext;
import javax.jcr.Node;
import javax.jcr.RepositoryException;
import javax.jcr.Session;

public class GroovyDeleteAllPublicUsersCommand extends MgnlCommand {
    public boolean execute(Context ctx) {
        ....
    }
}
The problem is that the scheduler job is not able to see my command.
Magnolia Can't find command [groovyDeleteUsers] for job in catalog [{default}]
I've tried the JCR query "select * from nt:base where jcr:path like '%/commands/%'" as specified in the documentation, and my newly created command is in the result.
[EDIT] It seems the problem comes from the command.
When I try defining a command with an existing class like info.magnolia.commands.impl.ImportCommand, the command is registered correctly by the application.
But when I try with my my.commandes.GroovyDeleteAllPublicUsersCommand, the application doesn't register my newly created command.
So do you have any idea?
Thanks for helping,
Regards,
Jimmy
Your problem is not the setup, which is correct, except that the command is not in the right place. Try putting your command definitions right under the module rather than under config, e.g. put it in ui-admincentral/commands.
Edit: Apparently another problem was the Groovy command vs. the Java one.
For more information and examples: this page should be sufficient.
Cheers,
As far as I know the catalog name must be unique in the system. "default" already exists in ui-admincentral/commands.

Jenkins Groovy: What triggered the build

I was thinking of using a Groovy script for my build job in Jenkins because I have some conditions to check for that might need access to the Jenkins API.
Is it possible to find out who or what triggered the build from a Groovy script? Either an SCM change, another project or user. I have just begun reading a little about Groovy and the Jenkins API.
I want to check for the following conditions and build accordingly. Some Pseudocode:
def buildTrigger = JenkinsAPI.thisBuild.Trigger
if (buildTrigger == scm) {
    execute build_with_automake
    def new_version = check_git_and_look_up_tag_for_version
    if (new_version) {
        execute git tag new_release_candidate
        publish release_candidate
    }
} else if (buildTrigger == "Build other projects") {
    execute build_with_automake
}
The project should build on every SCM change, but only tag and publish if the version has been increased. It should also build when the build has been triggered by another project.
I have something similar: I wanted to get the user who triggered the build. This is my code:
for (cause in bld.getCauses()) {
    if (cause instanceof Cause.UserIdCause) {
        return cause.getUserName()
    }
}
(bld is an instance of a Run subtype)
So, you can get the causes for your build and check their types.
See the different types in the Cause javadoc: http://javadoc.jenkins-ci.org/hudson/model/Cause.html
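Applied to the pseudocode in the question, a hedged sketch of a system Groovy build step that distinguishes an SCM trigger from an upstream-project trigger; the cause classes are real Jenkins types, while the build logic itself is only indicated by comments:

import hudson.model.Cause
import hudson.triggers.SCMTrigger

def build = Thread.currentThread().executable   // current build, in a system Groovy build step
def causes = build.getCauses()

if (causes.any { it instanceof SCMTrigger.SCMTriggerCause }) {
    // triggered by an SCM change: build, then tag and publish only if the version increased
} else if (causes.any { it instanceof Cause.UpstreamCause }) {
    // triggered by another project: just build
}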

Jenkins Groovy Postbuild use static file instead of script

Is it possible to load an external groovy script into the Groovy Postbuild plugin instead of pasting the script content into each job? We have approximately 200 jobs, so updating them all is rather time-consuming. I know that I could write a script to update the config files directly (as in this post: Add Jenkins Groovy Postbuild step to all jobs), but these jobs run 24x7, so finding a window when I can restart Jenkins or reload the config is problematic.
Thanks!
Just put the following in the "Groovy script:" field:
evaluate(new File("... groovy script file name ..."));
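For example, assuming the shared script is kept under $JENKINS_HOME/scripts (the directory and file name here are placeholders):

// Hypothetical location; adjust to wherever the shared script actually lives
evaluate(new File(manager.envVars['JENKINS_HOME'] + '/scripts/postbuild.groovy'))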
Also, you might want to go even further.
What if the script name or path changes?
Using the Template plugin you can create a single "template" job, define the call to the groovy script (the line above) there, and in all jobs that need it add the post-build action "Use publishers from another project" referencing this template project.
Update: This is what really solved it for me: https://issues.jenkins-ci.org/browse/JENKINS-21480
"I am able to do just that by doing the following. Enter these lines in lieu of the script in the "Groovy script" box:"
// Delegate to an external script
// The filename must match the class name
import JenkinsPostBuild

def postBuild = new JenkinsPostBuild(manager)
postBuild.run()
"Then in the "Additional groovy classpath" box enter the path to that file."
We do it in the following fashion.
We have a file c:\somepath\MyScriptLibClass.groovy (accessible to Jenkins) which contains the code of a Groovy class MyScriptLibClass. The class contains a number of functions designed to act like static methods (to be mixed in later).
We include these functions by writing the following statement at the beginning of system groovy and postbuild groovy steps:
[ // include lib scripts
    'MyScriptLibClass'
].each { this.metaClass.mixin(new GroovyScriptEngine('c:\\somepath').loadScriptByName(it + '.groovy')) }
This could look a bit ugly, but you need to write it only once per script. You could include more than one script and also use inheritance between library classes.
Here you see that all methods from the library class are mixed into the current script. So if your class looks like:
class MyScriptLibClass {
    def setBuildName(String str) {
        binding?.variables['manager'].build.displayName = str
    }
}
in Groovy Postbuild you could write just:
[ // include lib scripts
    'MyScriptLibClass'
].each { this.metaClass.mixin(new GroovyScriptEngine('c:\\somepath').loadScriptByName(it + '.groovy')) }

setBuildName('My Greatest Build')
and it will change your current build's name.
There are also other ways to load external Groovy classes, and it is not necessary to use mixins. For instance, take a look at Compiling and using Groovy classes from Java at runtime?
How I solved this:
Create file $JENKINS_HOME/scripts/PostbuildActions.groovy with following content:
public class PostbuildActions {
    void setBuildName(Object manager, String str) {
        manager.build.displayName = str
    }
}
In this case in Groovy Postbuild you could write:
File script = new File("${manager.envVars['JENKINS_HOME']}/scripts/PostbuildActions.groovy")
Object actions = new GroovyClassLoader(getClass().getClassLoader()).parseClass(script).newInstance();
actions.setBuildName(manager, 'My Greatest Build');
If you wish to have the Groovy script in your Code Repository, and loaded onto the Build / Test Slave in the workspace, then you need to be aware that Groovy Postbuild runs on the Master.
For us, the master is a Unix Server, while the Build/Test Slaves are Windows PCs on the local network. As a result, prior to using the script, we must open a channel from the master to the Slave, and use a FilePath to the file.
The following worked for us:
// Get an instance of the Build object, and from there
// the channel from the Master to the Workspace
build = Thread.currentThread().executable
channel = build.workspace.channel;

// Open a FilePath to the script
fp = new FilePath(channel, build.workspace.toString() + "<relative path to the script in Unix notation>")

// Some have suggested that the "Not NULL" check is redundant
// I've kept it for completeness
if (fp != null) {
    // 'Evaluate' requires a string, so read the file contents to a String
    script = fp.readToString();

    // Execute the script
    evaluate(script);
}
I've just faced the same task and tried to use @Blaskovicz's approach.
Unfortunately it did not work for me, but I found upgraded code here (Zach Auclair).
Published here with minor changes:
postbuild task
//imports
import hudson.model.*
import groovy.lang.GroovyClassLoader;
import groovy.lang.GroovyObject;
import java.io.File;
// define git file
def postBuildFile = manager.build.getEnvVars()["WORKSPACE"] + "/Jenkins/SimpleTaskPostBuildReporter.GROOVY"
def file = new File(postBuildFile)
// load custom class from file
Class groovy = this.class.classLoader.parseClass(file);
// create custom object
GroovyObject groovyObj = (GroovyObject) groovy.newInstance(manager);
// do report
groovyObj.report();
postbuild class file in git repo (SimpleTaskPostBuildReporter.GROOVY)
class SimpleTaskPostBuildReporter {
    def manager

    public SimpleTaskPostBuildReporter(Object manager) {
        if (manager == null) {
            throw new RuntimeException("Manager object can't be null")
        }
        this.manager = manager
    }

    public def report() {
        // do work with manager object
    }
}
I haven't tried this exactly.
You could try the Jenkins Job DSL plugin, which allows you to rebuild jobs from within Jenkins using a Groovy DSL, and it supports post-build groovy steps directly. From the wiki:
Groovy Postbuild
Executes Groovy scripts after a build.
groovyPostBuild(String script, Behavior behavior = Behavior.DoNothing)
Arguments:
script - the Groovy script to execute after the build. See the plugin's page for details on what can be done.
behavior - optional. If the script fails, allows you to mark the build as failed, unstable, or do nothing. The behavior argument uses an enum, which currently has three values: DoNothing, MarkUnstable, and MarkFailed.
Examples:
This example will run a groovy script that prints hello, world and if that fails, it won't affect the build's status:
groovyPostBuild('println "hello, world"')
This example will run a groovy script, and if that fails will mark the build as failed:
groovyPostBuild('// some groovy script', Behavior.MarkFailed)
This example will run a groovy script, and if that fails will mark the build as unstable:
groovyPostBuild('// some groovy script', Behavior.MarkUnstable)
(Since 1.19)
There is a facility to use a template job (this is the bit I haven't tried), which could be the job itself, so you only need to add the post-build step. If you don't use a template you need to recode the whole project.
My approach is to have a script to regenerate or create all jobs from scratch, just so I don't have to apply the same upgrade multiple times. Regenerated jobs keep their build history.
I was able to get the following to work (I also posted on this jira issue).
In my postbuild task:
this.class.classLoader.parseClass(new File("/home/jenkins/GitlabPostbuildReporter.groovy"))
GitlabPostbuildReporter.newInstance(manager).report()
In my file on disk at /home/jenkins/GitlabPostbuildReporter.groovy:
class GitlabPostbuildReporter {
    def manager

    public GitlabPostbuildReporter(manager) {
        if (manager == null) {
            throw new RuntimeException("Manager object mustn't be null")
        }
        this.manager = manager
    }

    public def report() {
        // do work with manager object
    }
}

Pre-send script in Groovy for Jenkins

I have two dependent jobs, and I need help with a Groovy script in Jenkins: a pre-send script for the email-ext plugin.
I want to check whether the build cause is an upstream cause, and if so set the cancel variable to true.
But I don't know how to write the if condition in Groovy for Jenkins. Are there separate classes in Jenkins for separate jobs (so I can create an instance and check for the upstream cause)?
Is there any way to check whether the build cause of the downstream job is an upstream trigger?
Please help me with this code snippet.
Use the Build.getCauses() method. It will return a list of causes for the build. Loop over it and check if there is an object of type hudson.model.Cause.UpstreamCause among them.
To get the build object, use the following code snippet:
def thr = Thread.currentThread()
def build = thr?.executable
FYI, here is a link to the complete Jenkins Module API.
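Putting the pieces together, a hedged sketch of such a pre-send script; the build object is obtained as in the snippet above, and cancel is the flag the question refers to (assumed to be exposed by email-ext to the pre-send script):

import hudson.model.Cause

def thr = Thread.currentThread()
def build = thr?.executable                      // current build, as shown above

// Cancel the outgoing email when the build was triggered by an upstream project
if (build?.getCauses()?.any { it instanceof Cause.UpstreamCause }) {
    cancel = true                                // email-ext pre-send cancel flag (assumption)
}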
