cloudbees, groovy, jobs, folders: How to determine the job result if the job is within a CloudBees folder?

Problem: I'm using a script to determine whether a certain set of jobs is in the SUCCESS state.
It worked fine as long as I was not using the CloudBees Folders plugin: I could easily get the list of projects and read each project's result. But after I moved the jobs into a CloudBees folder, the jobs, and therefore the job results, are no longer available!
Q: Does anybody know how to use Groovy to get the job results from jobs which are located in a CloudBees folder?

def job = Jenkins.instance.getItemByFullName('foldername/jobname');
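From there the last build's result can be read; a minimal sketch ('foldername/jobname' is a placeholder for your own folder/job path):
import jenkins.model.Jenkins
import hudson.model.Result

def job = Jenkins.instance.getItemByFullName('foldername/jobname')
def result = job?.getLastBuild()?.getResult()
println "Result: ${result}, success: ${result == Result.SUCCESS}"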

The Folder plugin provides the getItems() method, which can be used to get all immediate items (jobs/folders) under a folder:
folder.getItems()
Check this link to traverse across all the folders in Jenkins.
The snippet below shows the idea:
import jenkins.*
import jenkins.model.*
import hudson.*
import hudson.model.*
import com.cloudbees.hudson.plugins.folder.*

jen = Jenkins.instance
jen.getItems().each {
    if (it instanceof Folder) {
        processFolder(it)
    } else {
        processJob(it)
    }
}

void processJob(Item job) {
    // handle a job here
}

void processFolder(Item folder) {
    folder.getItems().each {
        if (it instanceof Folder) {
            processFolder(it)
        } else {
            processJob(it)
        }
    }
}
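For the original question (counting jobs in the SUCCESS state), processJob() could then be filled in along these lines (a sketch; it assumes each non-folder item is a Job):
import hudson.model.Job

void processJob(Item item) {
    if (item instanceof Job) {
        def lastBuild = ((Job) item).getLastBuild()
        println "${item.fullName}: ${lastBuild?.getResult()}"
    }
}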

Related

How to read Cassandra FQL logs in Java?

I have a bunch of Cassandra FQL logs with the "cq4" extension. I would like to read them in Java. Is there a Java class that those log entries can be mapped into?
These are the logs I see.
I want to read them with this code:
import net.openhft.chronicle.Chronicle;
import net.openhft.chronicle.ChronicleQueueBuilder;
import net.openhft.chronicle.ExcerptTailer;
import java.io.IOException;

public class Main {
    public static void main(String[] args) throws IOException {
        Chronicle chronicle = ChronicleQueueBuilder.indexed("/Users/pavelorekhov/Desktop/fql_logs").build();
        ExcerptTailer tailer = chronicle.createTailer();
        while (tailer.nextIndex()) {
            tailer.readInstance(/*class goes here*/);
        }
    }
}
I think from the code and screenshot you can understand what kind of class I need in order to read log entries into objects. Does that class exist in some Cassandra Maven dependency?
You are using Chronicle 3.x, which is very old.
I suggest using Chronicle 5.20.123, which is the version Cassandra uses.
I would assume Cassandra has its own tool for reading the contents of these files; however, you can dump the raw messages with net.openhft.chronicle.queue.main.DumpMain.
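For example (a sketch, assuming Chronicle Queue 5.x, i.e. net.openhft:chronicle-queue, is on the classpath; DumpMain prints the raw wire contents of the queue files to stdout):
import net.openhft.chronicle.queue.main.DumpMain

// Point this at the directory (or an individual .cq4 file) containing the FQL logs
DumpMain.main(["/Users/pavelorekhov/Desktop/fql_logs"] as String[])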
I ended up cloning Cassandra's GitHub repo from here: https://github.com/apache/cassandra
In their code they have the FQLQueryIterator class, which you can use to read logs, like so:
SingleChronicleQueue scq = SingleChronicleQueueBuilder.builder().path("/Users/pavelorekhov/Desktop/fql_logs").build();
ExcerptTailer excerptTailer = scq.createTailer();
FQLQueryIterator iterator = new FQLQueryIterator(excerptTailer, 1);
while (iterator.hasNext()) {
    FQLQuery fqlQuery = iterator.next(); // object that holds the log entry
    // do whatever you need to do with that log entry...
}

Spring state machine UML stays in memory

I've been working with Spring State Machine for over a year now, trying different ways to implement it according to my requirements, and I've come across a serious issue when I use UML.
I use Papyrus to draw the UML, and I have many UML models stored in a certain location. The one I need to use is selected dynamically; that has been done successfully. Now I have come across a serious problem. Below is the code showing how I load the UML.
Resource resource = new FileSystemResource(stmDir+"/"+model+".uml");
UmlStateMachineModelFactory umlBuilder = new UmlStateMachineModelFactory(resource);
umlBuilder.setStateMachineComponentResolver(resolveActionConfig(model));
StateMachineModelFactory<String, String> modelFactory = umlBuilder;
Builder<String, String> builder = StateMachineBuilder.builder();
builder.configureModel().withModel().factory(modelFactory);
builder.configureConfiguration().withConfiguration().beanFactory(new StaticListableBeanFactory());
stateMachine = builder.build();
As you can see, I use new UmlStateMachineModelFactory(resource).
The UmlStateMachineModelFactory class has the following code:
@Override
public StateMachineModel<String, String> build() {
    Model model = null;
    try {
        model = UmlUtils.getModel(getResourceUri(resolveResource()).getPath());
    } catch (IOException e) {
        throw new IllegalArgumentException("Cannot build build model from resource " + resource + " or location " + location, e);
    }
    UmlModelParser parser = new UmlModelParser(model, this);
    DataHolder dataHolder = parser.parseModel();
    // we don't set configurationData here, so assume null
    return new DefaultStateMachineModel<String, String>(null, dataHolder.getStatesData(), dataHolder.getTransitionsData());
}
and every time I create a UmlStateMachineModelFactory, it in turn creates a UmlModelParser.
That class has these imports:
import org.eclipse.emf.common.util.EList;
import org.eclipse.emf.ecore.util.EcoreUtil;
import org.eclipse.uml2.uml.Activity;
import org.eclipse.uml2.uml.Constraint;
import org.eclipse.uml2.uml.Event;
import org.eclipse.uml2.uml.Model;
import org.eclipse.uml2.uml.OpaqueBehavior;
import org.eclipse.uml2.uml.OpaqueExpression;
import org.eclipse.uml2.uml.PackageableElement;
import org.eclipse.uml2.uml.Pseudostate;
import org.eclipse.uml2.uml.PseudostateKind;
import org.eclipse.uml2.uml.Region;
import org.eclipse.uml2.uml.Signal;
import org.eclipse.uml2.uml.SignalEvent;
import org.eclipse.uml2.uml.State;
import org.eclipse.uml2.uml.StateMachine;
import org.eclipse.uml2.uml.TimeEvent;
import org.eclipse.uml2.uml.Transition;
import org.eclipse.uml2.uml.Trigger;
import org.eclipse.uml2.uml.UMLPackage;
import org.eclipse.uml2.uml.Vertex;
These remain in memory, using up a large amount of heap, and don't get collected by the garbage collector. This is causing a lot of trouble, as we are using this in a large-scale application and many instances are created every few minutes.
Please suggest a workaround.
EDIT: I managed to create a singleton wrapper for this problem but, regardless of that, it persists. My colleague found out that the loaded resources do not unload, so every time I call builder.build(), the following is executed:
ResourceSet resourceSet = new ResourceSetImpl();
resourceSet.getPackageRegistry().put(UMLPackage.eNS_URI, UMLPackage.eINSTANCE);
resourceSet.getResourceFactoryRegistry().getExtensionToFactoryMap().put(UMLResource.FILE_EXTENSION, UMLResource.Factory.INSTANCE);
resourceSet.createResource(modelUri);
Resource resource = resourceSet.getResource(modelUri, true);
I wonder if this is causing the heap build-up. Please help.
I pushed some fixes per gh572 to master and 1.2.x. Hopefully those work for you; at least I was able to see garbage collection work better. I'm planning to create releases later this week.
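A workaround to try in the meantime (an untested sketch against the ResourceSet shown in the edit above; whether it is safe depends on when Spring State Machine needs to re-read the model):
// Release the parsed UML once builder.build() has returned, so the
// ResourceSet no longer pins the EMF model graph in memory.
resourceSet.getResources().each { it.unload() }
resourceSet.getResources().clear()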

Custom update listener to set subtask's fix-version

I'm developing a custom listener which will update a subtask's fix version to the same value as its parent issue's.
Currently we are using a post-function in the workflow to set the subtask's fix version according to the parent on subtask creation. This, however, doesn't cover cases where the subtask already exists and the parent's fix version gets updated: the new value from the parent task is not propagated to the subtask.
I'm using ScriptRunner, and I'm creating a 'Custom listener' for my specific project with the specified event 'Issue Updated'. I added the script as follows:
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.config.SubTaskManager
import com.atlassian.jira.event.issue.AbstractIssueEventListener
import com.atlassian.jira.event.issue.IssueEvent
import com.atlassian.jira.event.type.EventDispatchOption
import com.atlassian.jira.issue.Issue
import com.atlassian.jira.issue.IssueManager
import com.atlassian.jira.issue.MutableIssue
import com.atlassian.jira.project.version.Version
import org.apache.log4j.Logger
class CopyFixVersionFromParentToChild extends AbstractIssueEventListener {
    Logger log = Logger.getLogger(CopyFixVersionFromParentToChild.class)
    SubTaskManager subTaskManager = ComponentAccessor.getComponent(SubTaskManager.class)
    IssueManager issueManager = ComponentAccessor.getComponent(IssueManager.class)

    @Override
    void issueUpdated(IssueEvent event) {
        log.warn("\nIssue updated!!!\n")
        try {
            Issue updatedIssue = event.getIssue()
            if (updatedIssue.issueTypeObject.name == "Parent issue type") {
                Collection<Version> fixVersions = updatedIssue.getFixVersions()
                Collection<Issue> subTasks = updatedIssue.getSubTaskObjects()
                if (subTaskManager.subTasksEnabled && !subTasks.empty) {
                    subTasks.each {
                        if (it instanceof MutableIssue) {
                            ((MutableIssue) it).setFixVersions(fixVersions)
                            issueManager.updateIssue(event.getUser(), it, EventDispatchOption.ISSUE_UPDATED, false)
                        }
                    }
                }
            }
        } catch (ex) {
            log.debug "Event: ${event.getEventTypeId()} fired for ${event.issue} and caught by script 'CopyVersionFromParentToChild'"
            log.debug(ex.getMessage())
        }
    }
}
The problem is that it doesn't work. I'm not sure whether it's a problem that my script logic is encapsulated inside a class. Do I have to register this in some specific way? Or am I using ScriptRunner completely wrong and pasting this script into the wrong section? I checked the code against the JIRA API and it looks like it should work; my IDE doesn't show any warnings/errors.
Also, could anyone give me hints on where to find the logging output from custom scripts like this? Whatever message I put into the logger, I seem to be unable to find anywhere in the JIRA logs (although I'm aware the script might not work for now).
Any response is much appreciated, guys. Thanks.
Martin
Well, I figured it out.
The method I posted, which implements the listener as a Groovy class, is used differently than I expected. That kind of script file used to be placed in a specific path in the JIRA installation, and ScriptRunner would register it with JIRA as a listener.
In order to create a 'simple' listener script which reacts to the issue-updated event, I had to strip it down to this code:
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.event.type.EventDispatchOption
import com.atlassian.jira.issue.Issue
import com.atlassian.jira.issue.IssueManager
import com.atlassian.jira.issue.MutableIssue
import com.atlassian.jira.project.version.Version
IssueManager issueManager = ComponentAccessor.getComponent(IssueManager.class)

// 'event' is bound by ScriptRunner for custom listeners
Issue updatedIssue = event.getIssue()
Collection<Version> fixVersions = updatedIssue.getFixVersions()
Collection<Issue> subTasks = updatedIssue.getSubTaskObjects()
subTasks.each {
    if (it instanceof MutableIssue) {
        ((MutableIssue) it).setFixVersions(fixVersions)
        issueManager.updateIssue(event.getUser(), it, EventDispatchOption.ISSUE_UPDATED, false)
    }
}
You paste this into the ScriptRunner interface and it works :-). Hope this helps anyone who's learning ScriptRunner. Cheers.
Matthew

In jenkins job, create file using system groovy in current workspace

My task is to collect node details and list them in a certain format. I need to write the data to a file, save it as a CSV file, and attach it as an artifact.
But I am not able to create a file using a Groovy script in Jenkins with the plugin "Execute System Groovy" as a build step.
import jenkins.model.Jenkins
import hudson.model.User
import hudson.security.Permission
import hudson.EnvVars

EnvVars envVars = build.getEnvironment(listener);
filename = envVars.get('WORKSPACE') + "\\node_details.txt";
//filename = "${manager.build.workspace.remote}" + "\\node_details.txt"
targetFile = new File(filename);

println "attempting to create file: $targetFile"
if (targetFile.createNewFile()) {
    println "Successfully created file $targetFile"
} else {
    println "Failed to create file $targetFile"
}
print "Deleting ${targetFile.getAbsolutePath()} : "
println targetFile.delete()
Output obtained
attempting to create file: /home/jenkins/server-name/workspace/GET_NODE_DETAILS\node_details.txt
FATAL: No such file or directory
java.io.IOException: No such file or directory
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.createNewFile(File.java:947)
at java_io_File$createNewFile.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:42)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:112)
at Script1.run(Script1.groovy:13)
at groovy.lang.GroovyShell.evaluate(GroovyShell.java:682)
at groovy.lang.GroovyShell.evaluate(GroovyShell.java:666)
at hudson.plugins.groovy.SystemGroovy.perform(SystemGroovy.java:81)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:772)
at hudson.model.Build$BuildExecution.build(Build.java:199)
at hudson.model.Build$BuildExecution.doRun(Build.java:160)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:535)
at hudson.model.Run.execute(Run.java:1732)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:234)
Sometimes I see people use the "manager" object; how can I get access to it?
Also, any ideas on how to accomplish the task?
Problem
A Groovy system script always runs on the Jenkins master node, while the workspace is a file path on your Jenkins slave node, which doesn't exist on the master node.
You can verify this with the code:
theDir = new File(envVars.get('WORKSPACE'))
println theDir.exists()
It will return false. (If you don't use a slave node, it will return true.)
Solution
As we can't use a normal File, we have to use FilePath: http://javadoc.jenkins-ci.org/hudson/FilePath.html
import hudson.FilePath

if (build.workspace.isRemote()) {
    channel = build.workspace.channel
    fp = new FilePath(channel, build.workspace.toString() + "/node_details.txt")
} else {
    fp = new FilePath(new File(build.workspace.toString() + "/node_details.txt"))
}
if (fp != null) {
    fp.write("test data", null) // writing to the file
}
Then it works in both cases.
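Reading the file back works the same way through the FilePath API, regardless of where the workspace lives (a small sketch reusing the fp handle from above):
// FilePath is transparent over master/slave boundaries, unlike java.io.File
println fp.readToString()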
The answer by @Larry Cai covers one part: writing a file to the slave node from a system Groovy script (as it runs on the master node).
The part I am answering is: "Sometimes I see people use the 'manager' object; how can I get access to it?"
This is an object already available in a post-build Groovy script for accessing a lot of things like environment variables, build status, build display name, etc.
Quoted from https://wiki.jenkins-ci.org/display/JENKINS/Groovy+Postbuild+Plugin :
"The groovy script can use the variable manager, which provides various methods to decorate your builds.
Those methods can be classified into whitelisted methods and non-whitelisted methods."
To access it, we can call it directly in the post-build Groovy script, e.g.:
manager.build.setDescription("custom description")
manager.addShortText("add your message here")
All methods available on manager objects are documented here.
https://wiki.jenkins-ci.org/display/JENKINS/Groovy+Postbuild+Plugin#GroovyPostbuildPlugin-Whitelistedmethods
I suspect the error was caused by the path format. Could you try changing
filename = envVars.get('WORKSPACE') + "\\node_details.txt";
to
filename = envVars.get('WORKSPACE') + "/node_details.txt";
When I tried this on my local Jenkins server, it executed successfully.
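A platform-neutral way to build the same path is to let java.io.File join the segments (a sketch; note that, per the accepted answer, this still resolves on the node the script runs on, i.e. the master):
// File inserts the separator appropriate for the JVM the script runs on
filename = new File(envVars.get('WORKSPACE'), "node_details.txt").getPath()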
The manager object is not available depending on how the Groovy is invoked, e.g. in "Execute system Groovy script".
You can find the BadgeManager class in the Jenkins GroovyPostBuild plugin API here: https://javadoc.jenkins.io/plugin/groovy-postbuild/org/jvnet/hudson/plugins/groovypostbuild/GroovyPostbuildRecorder.BadgeManager.html#addShortText-java.lang.String-
ANSWER: Import the GroovyPostBuild plugin and create a new manager object. E.g. here a job with "Execute System Groovy Script" creates a manager object and calls the addShortText method:
// java.lang.Object
//   org.jvnet.hudson.plugins.groovypostbuild.GroovyPostbuildRecorder.BadgeManager
// Constructor:
//   BadgeManager(hudson.model.Run<?,?> build, hudson.model.TaskListener listener, hudson.model.Result scriptFailureResult)
import hudson.model.*
import org.jvnet.hudson.plugins.groovypostbuild.GroovyPostbuildAction

def build = Thread.currentThread().executable
manager = new org.jvnet.hudson.plugins.groovypostbuild.GroovyPostbuildRecorder.BadgeManager(build, null, null)
manager.addShortText("MANAGER TEST", "black", "limegreen", "0px", "white")
This question gives a hint; see here for a nearly working answer:
In jenkins job, create file using system groovy in current workspace
org.jvnet.hudson.plugins.groovypostbuild.GroovyPostbuildAction and build.getActions().add(GroovyPostbuildAction.createShortText(text, "black", "limegreen", "0px", "white"));

How to Report Results to Sauce Labs using Geb/Spock?

I want to use the Sauce Labs Java REST API to send Pass/Fail status back to the Sauce Labs dashboard. I am using Geb+Spock, and my Gradle build creates a test results directory where results are output in XML. My problem is that the results XML file doesn't seem to be generated until after the Spock specification's cleanupSpec() exits. This causes my code to report the results of the previous test run, rather than the current one. Clearly not what I want!
Is there some way to get to the results from within cleanupSpec() without relying on the XML? Or a way to get the results to file earlier? Or some alternative that will be much better than either of those?
Some code:
In build.gradle, I specify the testResultsDir. This is where the XML file is written after the Spock specifications exit:
drivers.each { driver ->
    task "${driver}Test"(type: Test) {
        cleanTest
        systemProperty "geb.env", driver
        testResultsDir = file("$buildDir/test-results/${driver}")
        systemProperty "proj.test.resultsDir", testResultsDir
    }
}
Here is the setupSpec() and cleanupSpec() in my LoginSpec class:
class LoginSpec extends GebSpec {
    @Shared SauceREST client = new SauceREST("redactedName", "redactedKey")
    @Shared def sauceJobID
    @Shared def allSpecsPass = true

    def setupSpec() {
        sauceJobID = driver.getSessionId().toString()
    }

    def cleanupSpec() {
        String specResultsDir = System.getProperty("proj.test.resultsDir") ?: "./build/test-results"
        String specResultsFile = this.getClass().getName()
        String specResultsXML = "${specResultsDir}/TEST-${specResultsFile}.xml"
        def testsuiteResults = new XmlSlurper().parse(new File(specResultsXML))

        // read error and failure counts from the XML
        def errors = testsuiteResults.@errors.text()?.toInteger()
        def failures = testsuiteResults.@failures.text()?.toInteger()
        if ((errors + failures) > 0) { allSpecsPass = false }

        if (allSpecsPass) {
            client.jobPassed(sauceJobID)
        } else {
            client.jobFailed(sauceJobID)
        }
    }
}
The rest of this class contains login specifications that do not interact with SauceLabs. When I read the XML, it turns out that it was written at the end of the previous LoginSpec run. I need a way to get to the values of the current run.
Thanks!
Test reports are generated after a Specification has finished execution and the generation is performed by the build system, so in your case by Gradle. Spock has no knowledge of that so you are unable to get that information from within the test.
You can on the other hand quite easily get that information from Gradle. Test task has two methods that might be of interest to you here: addTestListener() and afterSuite(). It seems that the cleaner solution here would be to use the first method, implement a test listener and put your logic in afterSuite() of the listener (and not the task configuration). You would probably need to put that listener implementation in buildSrc as it looks like you have a dependency on SauceREST and you would need to build and compile your listener class before being able to use it as an argument to addTestListener() in build.gradle of your project.
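A minimal sketch of that approach (the class and its wiring are hypothetical; only the TestListener interface and its four methods are Gradle's API, and the SauceREST calls mirror the ones used in the question):
import org.gradle.api.tasks.testing.TestDescriptor
import org.gradle.api.tasks.testing.TestListener
import org.gradle.api.tasks.testing.TestResult

// Hypothetical listener; it would live in buildSrc so it is compiled before build.gradle runs
class SauceResultListener implements TestListener {
    void beforeSuite(TestDescriptor suite) {}
    void beforeTest(TestDescriptor test) {}
    void afterTest(TestDescriptor test, TestResult result) {}

    void afterSuite(TestDescriptor suite, TestResult result) {
        if (suite.parent == null) { // root suite: all specs have finished
            boolean passed = result.resultType == TestResult.ResultType.SUCCESS
            // look up the Sauce job id recorded by the tests, then call
            // client.jobPassed(jobId) or client.jobFailed(jobId) accordingly
        }
    }
}
It would then be registered with addTestListener(new SauceResultListener()) inside the Test task configuration.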
Following on from erdi's suggestion, I've created a Sauce Gradle helper library, which provides a Test Listener that parses the test XML output and invokes the Sauce REST API to set the pass/fail status.
The library can be included by adding the following to your build.gradle file:
import com.saucelabs.gradle.SauceListener

buildscript {
    repositories {
        mavenCentral()
        maven {
            url "https://repository-saucelabs.forge.cloudbees.com/release"
        }
    }
    dependencies {
        classpath group: 'com.saucelabs', name: 'saucerest', version: '1.0.2'
        classpath group: 'com.saucelabs', name: 'sauce_java_common', version: '1.0.14'
        classpath group: 'com.saucelabs.gradle', name: 'sauce-gradle-plugin', version: '0.0.1'
    }
}

gradle.addListener(new SauceListener("YOUR_SAUCE_USERNAME", "YOUR_SAUCE_ACCESS_KEY"))
You will also need to output the Selenium session id for each test, so that the SauceListener can associate the Sauce job with the pass/fail status. To do this, include the following output in stdout:
SauceOnDemandSessionID=SELENIUM_SESSION_ID
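In a GebSpec this can be printed from setupSpec(), for example (a sketch; the driver's session id is the same one stored in sauceJobID in the question above):
def setupSpec() {
    // Sauce's log parser matches on the exact "SauceOnDemandSessionID=" prefix
    println "SauceOnDemandSessionID=${driver.getSessionId()}"
}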
