Is there any way to clean up a Jenkins WorkflowJob workspace with a Groovy script via the Jenkins script console? - groovy

Why this type of question again?
This question seems to have been asked multiple times, but all the answers are irrelevant for Jenkins Pipeline jobs (plugin: workflow-job).
Situation
I am migrating a bunch of old freestyle jobs from an old standalone Jenkins server to a distributed Jenkins environment, and I've decided to convert them to Jenkins Pipeline jobs (I can't use Blue Ocean for it, as the SCM is SVN).
Anyway, for some of the jobs it is not desirable to clean up their workspaces, as they are sanity/verification jobs of sorts, and because the SVN checkout plus the built artifacts are large (2 GB in 20K files), just deleting them is slow.
However, I do occasionally (ad hoc) need to delete the workspace of such a job.
And I don't want to do it by:
modifying a Jenkinsfile and committing it to SVN
"Replaying" a pipeline runs with modification
And I don't have r/w access to a FS on that slave node (which would be the easiest thing to do).
Googling
A quick search on the internet avalanched me with dozens of results [1, 2, 3, 4, ...] on how to clean a workspace from a Groovy script run in the Jenkins script console.
Unfortunately, none of them deletes the workspace of a true org.jenkinsci.plugins.workflow.job.WorkflowJob item instance.
My Groovy attempt to cleanWS
Based on the answers gathered from the internet, I started my Groovy clean-up script, which can be executed from the Jenkins script console <Jenkins:port/script>:
import hudson.model.*
import com.cloudbees.hudson.*
import com.cloudbees.hudson.plugins.*
import com.cloudbees.hudson.plugins.folder.*
import org.jenkinsci.plugins.workflow.job.*

//jobsToRetrieve = ["aFolder/aJobInFolder","topLevelJob"]
jobsToRetrieve = ["Sandbox/PipelineTests/SamplePipeline"]

enumerateItems(Hudson.instance.items)

def enumerateItems(items) {
    items.each { item ->
        println("===============::: GENERAL INFO::: =======================")
        println(" item: " + item)
        println(" item FN: " + item.fullName)
        println(" item.getClass " + item.getClass())
        println("~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~")
        if (!(item instanceof Folder)) {
            jobName = item.getFullDisplayName()
            println(" :::jobname::: " + jobName)
            if (jobsToRetrieve.contains(item.getFullName())) {
                if (item instanceof WorkflowJob) {
                    println("XXXXXXXXXXXXX--- THIS IS THE JOB --- XXXXXXXXXXXXXXXXXXXXX")
                    println(" item.workspace: " + item.WORKSPACE)
                    println("XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX")
                    println(" the following methods are not implemented for the WorkflowJob item type,\nso this will blow up.")
                    //see https://javadoc.jenkins.io/hudson/model/FreeStyleProject.html
                    println(" customWS: " + item.getCustomWorkspace())
                    println(" WS:" + item.getWorkspace())
                    item.doDoWipeOutWorkspace()
                }
            }
        } else {
            println(" :::foldername::: " + item.displayName)
            enumerateItems(((Folder) item).getItems())
        }
        println("==========================================================")
    }
}
Results (kinda expected, but disappointing)
As you can see, my script is going to explode on calls of:
item.getCustomWorkspace()
item.getWorkspace()
item.doDoWipeOutWorkspace()
with a MissingMethodException:
groovy.lang.MissingMethodException: No signature of method: org.jenkinsci.plugins.workflow.job.WorkflowJob.doDoWipeOutWorkspace() is applicable for argument types: () values: []
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:58)
at org.codehaus.groovy.runtime.callsite.PojoMetaClassSite.call(PojoMetaClassSite.java:49)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:117)
at Script1$_enumerateItems_closure1.doCall(Script1.groovy:33)
Simply because those methods aren't available on this item type; they exist only on hudson.model.FreeStyleProject.
Question: how, then, can I delete the Pipeline job's workspace?
There is another Jenkins plugin, Workspace Cleanup, which is normally used within a Jenkinsfile by calling cleanWs() inside a stage() {} (minimal sketch below), but I didn't figure out how to utilise it from outside a Jenkinsfile (e.g. from my Groovy script run in the Jenkins script console).
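For reference, a minimal sketch of that usual in-Jenkinsfile usage (declarative syntax; the stage name is illustrative), i.e. exactly what I want to avoid committing to SVN:

pipeline {
    agent any
    stages {
        stage('Cleanup') {
            steps {
                cleanWs()   // Workspace Cleanup plugin step
            }
        }
    }
}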
Is this a bug in / a request for enhancement of the Jenkins Pipeline jobs plugin?
Or is there any other way to cast the item to something that exposes the desired functionality?

Alright, after investigating this more, googling even more, and listening to your ideas (I was particularly inspired by this one from Daniel Spilker), I have achieved what I wanted, which is:
to independently CLEAN UP a Pipeline job's WORKSPACE via the Jenkins script console
(using only the means Jenkins provides: no messing with the job configuration, no updating the Jenkinsfile, no replaying).
The code is, unsurprisingly, not difficult, and for a manual demonstration it looks like this:
// These imports are implicit in the script console; listed for completeness
import jenkins.model.Jenkins
import hudson.model.Job
import hudson.model.Node
import hudson.FilePath

Jenkins jenkins = Jenkins.instance
Job item = jenkins.getItemByFullName('Sandbox/PipelineTests/SamplePipeline')
println("RootDir: " + item.getRootDir())
for (Node node in jenkins.nodes) {
    // Make sure the slave is online
    if (!node.toComputer().online) {
        println "Node '$node.nodeName' is currently offline - skipping workspace cleanup"
        continue
    }
    println "Node '$node.nodeName' is online - performing cleanup:"
    // Do some cleanup
    FilePath wrksp = node.getWorkspaceFor(item)
    println("WRKSP " + wrksp)
    println("ls " + wrksp.list())
    println("Free space " + wrksp.getFreeDiskSpace())
    println("===== PERFORMING CLEAN UP!!! =====")
    wrksp.deleteContents()
    println("ls now " + wrksp.list())
    println("Free space now " + wrksp.getFreeDiskSpace())
}
Its output, if your job is found, looks like:
Result
RootDir: /var/lib/jenkins/jobs/Sandbox/jobs/PipelineTests/jobs/SamplePipeline
....
.... other node's output noise
....
Node 'mcs-ubuntu-chch' is online - performing cleanup:
WRKSP /var/lib/jenkins/workspace/Sandbox/PipelineTests/SamplePipeline
ls [/var/lib/jenkins/workspace/Sandbox/PipelineTests/SamplePipeline/README.md, /var/lib/jenkins/workspace/Sandbox/PipelineTests/SamplePipeline/.git]
Free space 3494574714880
===== PERFORMING CLEAN UP!!! =====
ls now []
Free space now 3494574919680
Mission completed :)
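One caveat: jenkins.nodes only lists agents; the built-in (master) node is not included. If the job also left a workspace there, the same FilePath approach should work (a sketch under that assumption):

// The built-in node is not in jenkins.nodes, so handle its workspace separately
FilePath builtInWrksp = jenkins.getWorkspaceFor(item)
if (builtInWrksp != null && builtInWrksp.exists()) {
    builtInWrksp.deleteContents()
}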
References
Mainly Jenkins javadoc
https://javadoc.jenkins.io/hudson/model/Node.html
https://javadoc.jenkins.io/hudson/model/TopLevelItem.html
https://javadoc.jenkins.io/hudson/model/Item.html
https://javadoc.jenkins.io/hudson/FilePath.html

This is not the most beautiful way, but you could just execute the OS command:
def isWin = Jenkins.instance.windows
def cmd = isWin ? "rmdir /s /q $workspace" : "rm -rf $workspace"
cmd.execute()
If you are only running the code once, or are not dealing with multiple OSes, you can shorten it to the respective command:
"rm -rf $workspace".execute()

Related

Do I need proc.out.close() (groovy execute shell command)?

Based on:
Groovy executing shell commands
I have this Groovy script:
def proc = "some bash command".execute()
//proc.out.close() // hm, does not seem to be needed...
proc.waitFor()
if (proc.exitValue()) {
    def errorMsg = proc.getErrorStream().text
    println "[ERROR] $errorMsg"
} else {
    println proc.text
}
that I use to execute various Linux bash commands. Currently it works fine, even without the proc.out.close() statement.
What is the purpose of proc.out.close(), and why is it (not?) needed?
proc.text is actually proc.getText().
From the Groovy API doc: "Read the text of the output stream of the Process. Closes all the streams associated with the process after retrieving the text."
http://docs.groovy-lang.org/docs/latest/html/groovy-jdk/java/lang/Process.html#getText()
So, when using proc.text, you don't need to call proc.out.close().
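For completeness, a minimal sketch of a case where closing proc.out does matter: when the child process reads stdin until EOF (assumes a Unix-like system with cat available):

def proc = 'cat'.execute()
proc.out << 'hello\n'
proc.out.close()     // sends EOF; without this, cat never exits and waitFor() blocks
proc.waitFor()
println proc.text    // prints "hello"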

How to get an artifact file's URI using Artifactory's checksum API when multiple artifacts have the same SHA-1 / SHA-256 values, aka the same file content

Artifactory Version: 5.8.4
In Artifactory, files are stored internally by the file's checksum (SHA-1); for retrieval purposes SHA-256 is useful (e.g. for verifying that a file is intact).
Read this first: https://www.jfrog.com/confluence/display/RTF/Checksum-Based+Storage
Let's say there are 2 Jenkins jobs, each of which creates a few artifacts/files (rpm/jar/etc). In my case, I'll take a simple .txt file which stores a date in MM/DD/YYYY format, plus some other jobA/B-specific build result files (jars/rpms, etc).
If we focus only on the text file (as I mentioned above), then:
Jenkins_jobA > generates jobA.date_mm_dd_yy.txt
Jenkins_jobB > generates jobB.date_mm_dd_yy.txt
Jenkins jobA and jobB run multiple times per day in no given order; sometimes jobA runs first and sometimes jobB.
As the contents of the files are mostly the same for both jobs (per day), the SHA-1 values of jobA's .txt file and jobB's .txt file will be the same, i.e. in Artifactory both files will be stored under the same first-2-characters-based directory structure (as per the checksum-based storage mechanism).
Basically, running sha1sum and sha256sum on both files in Linux returns exactly the same output.
Over time, these artifacts (.txt, etc) get promoted from one repository to another (the promotion process, i.e. snapshot -> stage -> release repo), so my current logic, written in Groovy, for finding the URI of an artifact sitting behind a "VIRTUAL" repository (which contains a set of physical local repositories in some order) is listed below:
// Groovy code
import groovy.json.JsonSlurper
import groovy.json.JsonOutput

jsonSlurper = new JsonSlurper()

// The following function takes artifact.SHA_256 as its parameter to find the URI of the artifact
def checkSumBasedSearch(artifactSha) {
    virt_repo = "jar-repo" // this virtual repo may have many physical repos (release/stage/snapshot) for jar (maven), or it can be a YUM repo (rpm) or a generic repo (.txt file)
    // Note: Virtual repos don't span different repo types (i.e. a virtual repository in Artifactory for "Maven" artifacts (jar/war/etc) can NOT see YUM/PyPi/Generic physical repos).
    // Run aqlCmd on Linux; requires "...", "..", "..." for every distinct word / character on the cmd line.
    // -s keeps curl's progress meter out of stderr, since stderr is treated as fatal below
    checkSum_URL = artifactoryURL + "/api/search/checksum?sha256="
    aqlCmd = ["curl", "-s", "-u", username + ":" + password, "${checkSum_URL}" + artifactSha + "&repos=" + virt_repo]

    def procedure = aqlCmd.execute()
    def standardOut = new StringBuilder(), standardErr = new StringBuilder()
    procedure.waitForProcessOutput(standardOut, standardErr)

    // Fail early if anything landed on stderr
    if (standardErr) {
        println "\n\n-- checkSumBasedSearch() - standardErr exists ---\n" + standardErr + "\n\n-- Exiting with error 12!!!\n\n"
        System.exit(12)
    }

    def obj = jsonSlurper.parseText(standardOut.toString())
    def results = obj.results
    def uri = results[0].uri // This would work if a file's SHA-1/256 were always different, or if the file at least sat in a different repo.
    return uri
    // to get the URL, I can use:
    //aqlCmd = ["curl", "-s", "-u", username + ":" + password, "${uri}"]
    //def procedure = aqlCmd.execute()
    //def obj = jsonSlurper.parseText(standardOut.toString())
    //def url = obj.downloadUri
    //return url
    //aqlCmd = ["curl", "-s", "-u", username + ":" + password, "${url}", "-o", somedirectory + "/" + variableContainingSomeArtifactFilenameThatIWant]
    //def procedure = aqlCmd.execute()
    //def standardOut = new StringBuilder(), standardErr = new StringBuilder()
    //procedure.waitForProcessOutput(standardOut, standardErr)
    // Now, I'll get the artifact downloaded in some directory as some filename.
}
My concern is: as both files (even though they have different names, or are file-<versioned-timestamp>.txt) have the same content and are generated multiple times per day, how can I get a specific versioned file downloaded for jobA or jobB?
In Artifactory, the SHA-256 property of all files containing the same content will be the same!! (Artifactory uses SHA-1 to store these files efficiently and save space; new uploads are just minimal database-level transactions, transparent to the user.)
Questions:
Will the above logic return jobA's .txt file, or jobB's .txt file, or whichever job's .txt file was uploaded first, or the latest according to last-modified (i.e. last upload) time?
How can I get jobA's .txt file and jobB's .txt file downloaded for a given timestamp?
Do I need to add more properties during my REST API call?
If I were only concerned with the file content, then it wouldn't matter much (being SHA-1/256 dependent) whether it comes from jobA's or jobB's .txt file; but in a more complex case the file name may contain meaningful information, and one would like to know which file was downloaded (A / B)!
You can use AQL (Artifactory Query Language):
https://www.jfrog.com/confluence/display/RTF/Artifactory+Query+Language
curl -u<username>:<password> -XPOST https://repo.jfrog.io/artifactory/api/search/aql -H "Content-Type: text/plain" -T ./search
The content of the file named search is:
items.find(
    {
        "artifact.module.build.name": {"$eq": "<build name>"},
        "artifact.sha1": "<sha1>"
    }
)
The above logic (in the original question) will return one of them arbitrarily, since you are taking the first result returned and there is no guarantee on the order.
Since your text file contains the timestamp in its name, you can add the name to the AQL given above; it will then also filter by name.
The AQL search API is more flexible than the checksum search; use it and customise your query according to the parameters you need.
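For example, the search file could additionally filter on the file name (the pattern below is illustrative):

items.find(
    {
        "artifact.module.build.name": {"$eq": "<build name>"},
        "artifact.sha1": "<sha1>",
        "name": {"$match": "jobA.date_*"}
    }
)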
So, I ended up doing this, instead of just returning the [0]th element of the array in every case.
// Do NOT return the [0] first element right away: Artifactory keys storage on SHA-1/256,
// so return the [N]th .uri whose artifact full name matches the SHA-256 we searched for
// def uri = results[0].uri
def nThIndex = 0
def foundFlag = false
for (r in results) {
    println "> " + r.uri + " < " + r.uri.toString() + " artifact: " + artFullName
    if (r.uri.toString().contains(artFullName)) {
        foundFlag = true
        println "- OK - Found artifact: " + artFullName + " at results[" + nThIndex + "] index."
        break // i.e. a match for the artifact name with the SHA-256 we want has been found
    } else {
        nThIndex++
    }
}
if (foundFlag) {
    def uri = results[nThIndex].uri
    return uri
} else {
    // Fail early if results were found for the SHA-256, but only for other filenames with the same content, not for our artifact
    if (!standardErr) {
        println "\n\n\n\n-- [Cool] -- checkSum_Search() - SHA-256 unwanted situation occurred !!! -- the results array was set with some values BUT it didn't contain the artifact (" + artFullName + ") that we were looking for\n\n\n-- !!! Artifact NOT FOUND in the results array during checkSum_Search() ---\n\n\n-- Exiting with error 17!!!\n\n\n\n"
        System.exit(17) // Nooka
    }
}
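As a side note, the loop above can be written more idiomatically with Groovy's find() (same assumptions: results and artFullName as in the question):

// Pick the first result whose URI contains the artifact's full name
def match = results.find { it.uri.toString().contains(artFullName) }
if (match != null) {
    return match.uri
}
// otherwise fall through to the fail-early handling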

How to run a specified step in SoapUI according to the test case result

I have a SoapUI project with multiple test cases. After running each test case I need to run one of two HTTP requests, depending on the results of the steps: if one or more steps in the test case failed, I need to run httprequest1, and if all steps passed, I need to run httprequest2. How can I do this?
I have tried many scripts. For now, my best solution is the one below; I just add a Groovy script at the end of the test case. The problem is that it checks only the last step. I have tried many other solutions, but nothing has worked for me. Can somebody help me with this? Thank you.
def lastResult = testRunner.getResults().last()
def lastResultStatus = lastResult.getStatus().toString()
log.info 'Test: ' + lastResultStatus
if (lastResultStatus == 'FAILED') {
    testRunner.gotoStepByName('httprequest1')
    testRunner.testCase.testSteps['httprequest2'].setDisabled(true)
} else {
    testRunner.gotoStepByName('httprequest2')
}
Another solution that I have tried:
for (r in testRunner.results)
    result = r.status.toString()
// note: without braces the loop body is only the assignment above,
// so 'result' again ends up holding just the LAST step's status
log.info result
if (result == 'FAILED') {
    testRunner.gotoStepByName('httprequest1')
    testRunner.testCase.testSteps['httprequest2'].setDisabled(true)
} else {
    testRunner.gotoStepByName('httprequest2')
}
As mentioned in the comments, and based on the details shared there, a Conditional GoTo test step could be used; however, it might require multiple of them. Instead, a Groovy Script step is the best fit for this scenario.
Here are the details assuming the following are the steps in the test case.
Test Case:
request step1
request step2
groovy script step (the proposed script to handle the scenario)
request1 step if above step1 & step2 are successful
request2 step otherwise
following step x
following step y
Here is the pseudo code for the proposed Groovy Script step mentioned in #3; a concrete sketch follows below.
Evaluate the previous test steps' execution results, like you are currently doing.
Based on the condition, run test step #4 if true, and step #5 otherwise. Note that you should not use the gotoStepByName method here; instead, run the step by its name. See sample #15 here.
Once the above if..else is done, use gotoStepByName to continue with steps #6 and #7 (if any, of course).
NOTE: if gotoStepByName is used to run a step from a Groovy Script step, control will not come back.
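Putting that pseudo code together, a minimal sketch of the Groovy Script step (step names are illustrative and must match your own):

// Check whether any previous step in this test case failed
def failed = testRunner.results.any { it.status.toString() == 'FAILED' }

// Run one of the two requests by name; runTestStepByName returns control here
if (failed) {
    testRunner.runTestStepByName('request1 step')
} else {
    testRunner.runTestStepByName('request2 step')
}

// Then jump to the remaining steps; control does NOT come back after this
testRunner.gotoStepByName('following step x')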
Use a test case TearDown Script to call the step, since you have to do it for all test cases. The TearDown Script will look something like this:
if (testRunner.status.toString() == "FAILED") {
    testRunner.runTestStepByName("httprequest1")
    println "in if"
} else {
    testRunner.runTestStepByName("httprequest2")
    println "in else"
}
Note that you have to use the SoapUI Runner to trigger the test case / suite, and note the difference in the method being called.

Execute shell command from groovy w/out timeout

In Groovy, you can execute a shell command like so:
def process = "<some shell command>".execute()
println process.text
But if the command is a long-running one, I find that it times out. Is there a way to prevent this from happening?
I do some long-running stuff (45 min+) this way, building up a cmdLine object that is the command line to run, and then:
def fose = new FileOutputStream(logFileErr)
def foss = new FileOutputStream(logFileStd)
Process proc = cmdLine.execute()
// Pump the process's stdout into the std log and stderr into the err log.
// consumeProcessOutput spawns reader threads, so neither pipe can fill up
// and block the process. (proc.out would be the process's stdin, not its output.)
proc.consumeProcessOutput(foss, fose)
proc.waitFor()
It's been working for me for a couple of years now (to the point that I haven't had to revisit this solution).
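If you want an upper bound rather than waiting indefinitely, Groovy also offers waitForOrKill (a sketch reusing cmdLine, foss and fose from above; the timeout value is arbitrary):

Process proc = cmdLine.execute()
proc.consumeProcessOutput(foss, fose)
proc.waitForOrKill(60 * 60 * 1000) // forcibly terminates the process after 1 hour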

Jenkins Build Test Case pass fail count using groovy script

I want to fetch the total test case PASS and FAIL counts for a build using a Groovy script. I am using JUnit test results. Since I am using a multi-configuration project, is there any way to find this information on a per-configuration basis?
If you use the Groovy Postbuild plugin (https://wiki.jenkins-ci.org/display/JENKINS/Groovy+Postbuild+Plugin), you can access the Jenkins TestResultAction directly from the build, i.e.:
import hudson.model.*
def build = manager.build
int total = build.getTestResultAction().getTotalCount()
int failed = build.getTestResultAction().getFailCount()
int skipped = build.getTestResultAction().getSkipCount()
// can also be accessed like build.testResultAction.failCount
manager.listener.logger.println('Total: ' + total)
manager.listener.logger.println('Failed: ' + failed)
manager.listener.logger.println('Skipped: ' + skipped)
manager.listener.logger.println('Passed: ' + (total - failed - skipped))
API for additional TestResultAction methods/properties http://javadoc.jenkins-ci.org/hudson/tasks/test/AbstractTestResultAction.html
If you want to access a matrix build from another job, you can do something like:
def job = Jenkins.instance.getItemByFullName('MyJobName/MyAxisName=MyAxisValue');
def build = job.getLastBuild()
...
For the Pipeline (Workflow) job type, the logic is slightly different from AlexS's answer, which works for most other job types:
build.getActions(hudson.tasks.junit.TestResultAction).each { action ->
    action.getTotalCount()
    action.getFailCount()
    action.getSkipCount()
}
(See http://hudson-ci.org/javadoc/hudson/tasks/junit/TestResultAction.html)
Pipeline jobs don't have a getTestResultAction() method.
I use this logic to disambiguate:
if (build.respondsTo('getTestResultAction')) {
    // normal logic, see answer by AlexS
} else {
    // pipeline logic above
}
I think you might be able to do it with something like this:
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
File fXmlFile = new File("junit-tests.xml");
DocumentBuilderFactory dbFactory = DocumentBuilderFactory.newInstance();
DocumentBuilder dBuilder = dbFactory.newDocumentBuilder();
Document doc = dBuilder.parse(fXmlFile);
doc.getDocumentElement().normalize();
println("Total : " + doc.getDocumentElement().getAttribute("tests"))
println("Failed : " +doc.getDocumentElement().getAttribute("failures"))
println("Errors : " +doc.getDocumentElement().getAttribute("errors"))
I also harvest JUnit XML test results and use the Groovy Postbuild plugin (https://wiki.jenkins-ci.org/display/JENKINS/Groovy+Postbuild+Plugin) to check pass/fail/skip counts. Make sure the order of your post-build actions puts the JUnit XML harvesting before the Groovy post-build script.
This example script shows how to get the test result, set the build description with the result counts plus version info from a VERSION.txt file, AND how to change the overall build result to UNSTABLE in case all tests were skipped.
def currentBuild = Thread.currentThread().executable
// must be run as a Groovy post-build action AFTER harvesting junit xml
testResult1 = currentBuild.testResultAction
currentBuild.setDescription(currentBuild.getDescription() + "\n pass:" + testResult1.result.passCount.toString() + ", fail:" + testResult1.result.failCount.toString() + ", skip:" + testResult1.result.skipCount.toString())
// if no pass and no fail but some skips, set the result to unstable
if (testResult1.result.passCount == 0 && testResult1.result.failCount == 0 && testResult1.result.skipCount > 0) {
    currentBuild.result = hudson.model.Result.UNSTABLE
}
currentBuild.setDescription(currentBuild.getDescription() + "\n" + currentBuild.result.toString())
def ws = manager.build.workspace.getRemote()
myFile = new File(ws + "/VERSION.txt")
desc = myFile.readLines()
currentBuild.setDescription(currentBuild.getDescription() + "\n" + desc)
