I have a build.gradle that compiles my project, runs the tests, creates a jar, and then packages that with launch4j. I want to be able to use WiX to create an installer as well, but I seem to be having a lot of trouble launching it from .execute().
The files necessary for candle and light are held in \build\installer. However, trying to access those files by calling execute() in the build file always fails.
I have made a second build.gradle in /build/installer that does work. It is:
task buildInstaller {
    def command = project.rootDir.toString() + "//" + "LSML Setup.wxs"
    def candleCommand = ['candle', command]
    def candleProc = candleCommand.execute()
    candleProc.waitFor()
    def lightCommand = ['light', '-ext', 'WixUIExtension', "LSML Setup.wixobj"]
    def lightProc = lightCommand.execute()
}
Is there some way I can run the second build file from the main one and have it work, or is there a way to call execute() directly and have it work?
Thanks.
If your project consists of a few Gradle builds (Gradle projects), you should use task dependencies; working with the execute() method is a bad idea. I would do it this way:
ROOT/candle/candle.gradle
task build(type: Exec) {
    commandLine 'cmd', '/C', 'candle.exe', '...'
}
ROOT/app/build.gradle
task build(dependsOn: ':candle:build') {
    println 'build candle'
}
ROOT/app/settings.gradle
include ':candle'
project(':candle').projectDir = "$rootDir/../candle" as File
BTW, I had problems with the Exec task, so in my projects I replaced it with ant.exec(); the candle task may look like this:
task candle << {
    def productWxsFile = new File(buildDir, "Product.wxs")
    ant.exec(executable: candleExe, failonerror: false, resultproperty: 'candleRc') {
        arg(value: '-out')
        arg(value: buildDir.absolutePath + "\\")
        arg(value: '-arch')
        arg(value: 'x86')
        arg(value: '-dInstallerDir=' + installerDir)
        arg(value: '-ext')
        arg(value: wixHomeDir + "\\WixUtilExtension.dll")
        arg(value: productWxsFile)
        arg(value: dataWxsFile)
        arg(value: '-v')
    }
    if (!ant.properties['candleRc'].equals('0')) {
        throw new Exception('ant.exec failed, rc: ' + ant.properties['candleRc'])
    }
}
More information about multi-project builds can be found here: http://www.gradle.org/docs/current/userguide/multi_project_builds.html
The SetupBuilder plugin can do the job. It creates the launch4j launcher for your Java application, signs it, creates the msi file, and signs that as well. You do not need to work with the complex WiX toolset syntax.
Related
My task is to collect node details and list them in a certain format. I need to write the data to a file, save it as a csv file, and attach it as an artifact.
However, I am not able to create a file using a Groovy script in Jenkins with the "Execute System Groovy" build step:
import jenkins.model.Jenkins
import hudson.model.User
import hudson.security.Permission
import hudson.EnvVars

EnvVars envVars = build.getEnvironment(listener);
filename = envVars.get('WORKSPACE') + "\\node_details.txt";
//filename = "${manager.build.workspace.remote}" + "\\node_details.txt"
targetFile = new File(filename);
println "attempting to create file: $targetFile"
if (targetFile.createNewFile()) {
    println "Successfully created file $targetFile"
} else {
    println "Failed to create file $targetFile"
}
print "Deleting ${targetFile.getAbsolutePath()} : "
println targetFile.delete()
Output obtained
attempting to create file: /home/jenkins/server-name/workspace/GET_NODE_DETAILS\node_details.txt
FATAL: No such file or directory
java.io.IOException: No such file or directory
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.createNewFile(File.java:947)
at java_io_File$createNewFile.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:42)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:112)
at Script1.run(Script1.groovy:13)
at groovy.lang.GroovyShell.evaluate(GroovyShell.java:682)
at groovy.lang.GroovyShell.evaluate(GroovyShell.java:666)
at hudson.plugins.groovy.SystemGroovy.perform(SystemGroovy.java:81)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:772)
at hudson.model.Build$BuildExecution.build(Build.java:199)
at hudson.model.Build$BuildExecution.doRun(Build.java:160)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:535)
at hudson.model.Run.execute(Run.java:1732)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:234)
Sometimes I see people use a "manager" object; how can I get access to it?
Also, any ideas on how to accomplish the task?
Problem
A Groovy system script always runs on the Jenkins master node, while the workspace is a file path on your Jenkins slave node, which doesn't exist on the master node.
You can verify this with the following code:
theDir = new File(envVars.get('WORKSPACE'))
println theDir.exists()
It will return false. If you don't use a slave node, it will return true.
Solution
As we can't use a normal File, we have to use FilePath: http://javadoc.jenkins-ci.org/hudson/FilePath.html
if (build.workspace.isRemote()) {
    channel = build.workspace.channel;
    fp = new FilePath(channel, build.workspace.toString() + "/node_details.txt")
} else {
    fp = new FilePath(new File(build.workspace.toString() + "/node_details.txt"))
}
if (fp != null) {
    fp.write("test data", null); //writing to file
}
Then it works in both cases.
The answer by @Larry Cai covers the part about writing a file to the slave node from a System Groovy script (as it runs on the master node).
The part I am answering is: "Sometimes I see people use a 'manager' object; how can I get access to it?"
This is an object already available in the post-build Groovy script for accessing a lot of things, like environment variables, the build status, the build display name, etc.
Quoted from https://wiki.jenkins-ci.org/display/JENKINS/Groovy+Postbuild+Plugin :
"The groovy script can use the variable manager, which provides various methods to decorate your builds.
Those methods can be classified into whitelisted methods and non-whitelisted methods."
To access it, we can call it directly in the post-build Groovy script, e.g.:
manager.build.setDescription("custom description")
manager.addShortText("add your message here")
All methods available on the manager object are documented here:
https://wiki.jenkins-ci.org/display/JENKINS/Groovy+Postbuild+Plugin#GroovyPostbuildPlugin-Whitelistedmethods
I suspect the error was caused by the path format; could you try the following:
change
filename = envVars.get('WORKSPACE') + "\\node_details.txt";
to
filename = envVars.get('WORKSPACE') + "/node_details.txt";
When I tried this on my local Jenkins server, it executed successfully.
The manager object is not available in every context where Groovy is invoked, e.g. in an "Execute System Groovy Script" build step.
You can find the BadgeManager class in jenkins GroovyPostBuild plugin API here: https://javadoc.jenkins.io/plugin/groovy-postbuild/org/jvnet/hudson/plugins/groovypostbuild/GroovyPostbuildRecorder.BadgeManager.html#addShortText-java.lang.String-
ANSWER: Import the GroovyPostBuild plugin and create a new manager object. E.g. here a job with an "Execute System Groovy Script" step creates a manager object and calls the addShortText method:
// java.lang.Object
// org.jvnet.hudson.plugins.groovypostbuild.GroovyPostbuildRecorder.BadgeManager
// Constructor and Description
// BadgeManager(hudson.model.Run<?,?> build, hudson.model.TaskListener listener, hudson.model.Result scriptFailureResult)
import hudson.model.*
import org.jvnet.hudson.plugins.groovypostbuild.GroovyPostbuildAction
def build = Thread.currentThread().executable
manager = new org.jvnet.hudson.plugins.groovypostbuild.GroovyPostbuildRecorder.BadgeManager(build, null, null)
manager.addShortText("MANAGER TEST", "black", "limegreen", "0px", "white")
This question gives a hint; see here for a nearly working answer:
In jenkins job, create file using system groovy in current workspace
org.jvnet.hudson.plugins.groovypostbuild.GroovyPostbuildAction and build.getActions().add(GroovyPostbuildAction.createShortText(text, "black", "limegreen", "0px", "white"));
First off, this is my first foray into Gradle/Groovy (using Gradle 1.10). I'm setting up a multi-project environment where I'm creating a jar artifact in one project and then want to define an Exec task, in another project, which depends on the created jar. I'm setting it up something like this:
// This is from the jar building project
jar {
    ...
}
configurations {
    loaderJar
}
dependencies {
    loaderJar files(jar.archivePath)
    ...
}
// From the project which consumes the built jar
configurations {
    loaderJar
}
dependencies {
    loaderJar project(path: ":gfxd-demo-loader", configuration: "loaderJar")
}

// This is my test task
task foo << {
    configurations.loaderJar.each { println it }
    println configurations.loaderJar.collect { it }[0]
    // The following line breaks!!!
    println configurations.loaderJar[0]
}
When executing the foo task it fails with:
> Could not find method getAt() for arguments [0] on configuration ':loaderJar'.
In my foo task I'm just testing to see how to access the jar. So the question is: why does the very last println fail? If a Configuration object is a Collection/Iterable, then surely I should be able to index into it?
Configuration is-a java.util.Iterable, but not a java.util.Collection. As can be seen in the Groovy GDK docs, the getAt method (which corresponds to the [] operator) is defined on Collection, but not on Iterable. Hence, you can't index into a Configuration.
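If you just need to get at an element, there are workarounds. A short sketch (plain java.util.Iterable access, plus Configuration's getFiles(), which returns a Set<File> that Groovy can coerce to a List):
// Sketch: grabbing the first resolved file without relying on getAt()
println configurations.loaderJar.iterator().next()   // plain java.util.Iterable access
def jars = configurations.loaderJar.files as List    // getFiles() returns a Set<File>
println jars[0]                                      // a List supports [] indexing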
I want to use the Sauce Labs Java REST API to send Pass/Fail status back to the Sauce Labs dashboard. I am using Geb+Spock, and my Gradle build creates a test results directory where results are output in XML. My problem is that the results XML file doesn't seem to be generated until after the Spock specification's cleanupSpec() exits. This causes my code to report the results of the previous test run, rather than the current one. Clearly not what I want!
Is there some way to get to the results from within cleanupSpec() without relying on the XML? Or a way to get the results to file earlier? Or some alternative that will be much better than either of those?
Some code:
In build.gradle, I specify the testResultsDir. This is where the XML file is written after the Spock specifications exit:
drivers.each { driver ->
    task "${driver}Test"(type: Test) {
        cleanTest
        systemProperty "geb.env", driver
        testResultsDir = file("$buildDir/test-results/${driver}")
        systemProperty "proj.test.resultsDir", testResultsDir
    }
}
Here is the setupSpec() and cleanupSpec() in my LoginSpec class:
class LoginSpec extends GebSpec {
    @Shared def SauceREST client = new SauceREST("redactedName", "redactedKey")
    @Shared def sauceJobID
    @Shared def allSpecsPass = true

    def setupSpec() {
        sauceJobID = driver.getSessionId().toString()
    }

    def cleanupSpec() {
        def String specResultsDir = System.getProperty("proj.test.resultsDir") ?: "./build/test-results"
        def String specResultsFile = this.getClass().getName()
        def String specResultsXML = "${specResultsDir}/TEST-${specResultsFile}.xml"
        def testsuiteResults = new XmlSlurper().parse(new File(specResultsXML))
        // read error and failure counts from the XML
        def errors = testsuiteResults.@errors.text()?.toInteger()
        def failures = testsuiteResults.@failures.text()?.toInteger()
        if ((errors + failures) > 0) { allSpecsPass = false }
        if (allSpecsPass) {
            client.jobPassed(sauceJobID)
        } else {
            client.jobFailed(sauceJobID)
        }
    }
}
The rest of this class contains login specifications that do not interact with SauceLabs. When I read the XML, it turns out that it was written at the end of the previous LoginSpec run. I need a way to get to the values of the current run.
Thanks!
Test reports are generated after a Specification has finished execution, and the generation is performed by the build system, in your case Gradle. Spock has no knowledge of that, so you are unable to get that information from within the test.
You can, on the other hand, quite easily get that information from Gradle. The Test task has two methods that might be of interest to you here: addTestListener() and afterSuite(). The cleaner solution is to use the first method: implement a test listener and put your logic in the listener's afterSuite() (and not in the task configuration). You would probably need to put that listener implementation in buildSrc, as it looks like you have a dependency on SauceREST, and you would need to build and compile your listener class before being able to use it as an argument to addTestListener() in your project's build.gradle.
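A minimal sketch of the shape of this, using the inline afterSuite(Closure) variant on the default test task. It assumes SauceREST is available on the build classpath (e.g. via buildSrc, as suggested above); the "sauce.jobId" property is hypothetical, standing in for however you pass the job id to the build:
// Sketch only: react to the suite result from Gradle instead of from Spock
test {
    afterSuite { desc, result ->
        if (desc.parent == null) {  // fires once, for the root suite
            def client = new SauceREST("redactedName", "redactedKey")  // from the question's code
            def jobId = System.getProperty("sauce.jobId")              // hypothetical property
            if (result.failedTestCount > 0) {
                client.jobFailed(jobId)
            } else {
                client.jobPassed(jobId)
            }
        }
    }
}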
Following on from erdi's suggestion, I've created a Sauce Gradle helper library, which provides a Test Listener that parses the test XML output and invokes the Sauce REST API to set the pass/fail status.
The library can be included by adding the following to your build.gradle file:
import com.saucelabs.gradle.SauceListener
buildscript {
    repositories {
        mavenCentral()
        maven {
            url "https://repository-saucelabs.forge.cloudbees.com/release"
        }
    }
    dependencies {
        classpath group: 'com.saucelabs', name: 'saucerest', version: '1.0.2'
        classpath group: 'com.saucelabs', name: 'sauce_java_common', version: '1.0.14'
        classpath group: 'com.saucelabs.gradle', name: 'sauce-gradle-plugin', version: '0.0.1'
    }
}
gradle.addListener(new SauceListener("YOUR_SAUCE_USERNAME", "YOUR_SAUCE_ACCESS_KEY"))
You will also need to output the Selenium session id for each test so that the SauceListener can associate the Sauce job with the pass/fail status. To do this, include the following output in stdout:
SauceOnDemandSessionID=SELENIUM_SESSION_ID
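In a Geb spec this can be as simple as printing the id from setupSpec(), reusing the same driver call as the question's code (a sketch, not the only way to obtain the session id):
def setupSpec() {
    // emit the marker line that the SauceListener scans stdout for
    println "SauceOnDemandSessionID=${driver.getSessionId().toString()}"
}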
I'm using Cucumber for my tests. How do I rerun only the failed tests?
Run Cucumber with the rerun formatter:
cucumber -f rerun --out rerun.txt
It will output the locations of all failed scenarios to this file.
Then you can rerun them by using
cucumber @rerun.txt
Here is my simple and neat solution.
Step 1: Write your Cucumber runner class as shown below, with rerun:target/rerun.txt. Cucumber writes the line numbers of failed scenarios to rerun.txt, e.g.:
features/MyScenaios.feature:25
features/MyScenaios.feature:45
We can use this file in Step 2. Name this file MyScenarioTests.java; it is your main file for running your tagged scenarios. If any scenarios fail, MyScenarioTests.java will write them to rerun.txt under the target directory.
@RunWith(Cucumber.class)
@CucumberOptions(
    monochrome = true,
    features = "classpath:features",
    plugin = {"pretty", "html:target/cucumber-reports",
              "json:target/cucumber.json",
              "rerun:target/rerun.txt"}, //Creates a text file with failed scenarios
    tags = "@mytag"
)
public class MyScenarioTests {
}
Step 2: Create another runner class as shown below. Let's call it FailedScenarios.java. Whenever you notice any failed scenarios, run this file. It uses target/rerun.txt as the input for running the scenarios.
@RunWith(Cucumber.class)
@CucumberOptions(
    monochrome = true,
    features = "@target/rerun.txt", //Cucumber picks the failed scenarios from this file
    format = {"pretty", "html:target/site/cucumber-pretty",
              "json:target/cucumber.json"}
)
public class FailedScenarios {
}
Every time you notice failed scenarios, run the file from Step 2.
task cucumber() {
    dependsOn assemble, compileTestJava
    doLast {
        javaexec {
            main = "io.cucumber.core.cli.Main"
            classpath = configurations.cucumberRuntime + sourceSets.main.output + sourceSets.test.output
            args = ['--plugin', 'json:target/cucumber-reports/json/cucumber.json',
                    '--plugin', "rerun:target/rerun.txt",
                    '--glue', 'steps',
                    'src/test/resources']
        }
    }
}

task cucumberRerunFailed() {
    doLast {
        javaexec {
            main = "io.cucumber.core.cli.Main"
            classpath = configurations.cucumberRuntime + sourceSets.main.output + sourceSets.test.output
            args = ['--plugin', 'json:target/cucumber-reports/json/cucumber.json',
                    '@target/rerun.txt']
        }
    }
}
I know this is old, but I found my way here first and only later found a much more up-to-date answer (not the accepted one, but Cyril Duchon-Doris' answer): https://stackoverflow.com/a/41174252/215789
Since cucumber 3.0 you can use --retry to specify how many times to retry a scenario that failed.
https://cucumber.io/blog/open-source/announcing-cucumber-ruby-3-0-0/
Just tack it on to your cucumber command:
cucumber ... --retry 2
You need at least version 1.2.0 in order to use the @target/rerun.txt feature. After that, just create a runner that runs at the end and uses this file. Also, if you are using Jenkins, you can put a tag on the randomly failing features so that the build doesn't fail unless a scenario fails twice in a row.
Is there a way to process FileTree files in parallel in Gradle tasks? I basically need to wait for the processing of all files to finish, much like what you can do with GPars, but how do I do this in Gradle with a FileTree?
task compressJs(dependsOn: [copyJsToBuild]) << {
    println 'Minifying JS'
    fileTree {
        from 'build/js'
        include '**/*.js'
    }.visit { element ->
        if (element.file.isFile()) {
            println "Minifying ${element.relativePath}"
            ant.java(jar: "lib/yuicompressor-2.4.6.jar", fork: true) {
                arg(value: "build/js/${element.relativePath}")
                arg(value: "-o")
                arg(value: "build/js/${element.relativePath}")
            }
        }
    }
}
It would be lovely if I could do something like .visit{}.async(wait:true), but my googling turned up nothing. Is there a way I can easily make this multi-threaded? The processing of one element has no effect on the processing of any other element.
Before thinking about going multi-threaded, I'd try the following:
Run everything in the same JVM. Forking a new JVM for each input file is very inefficient.
Make the compressJs task incremental so that it only executes if some input file has changed since the previous run.
Run the minifier directly rather than via Ant (saves creation of a new class loader for each input file; not sure if it matters); a sketch of this is below.
If this still leaves you unhappy with the performance, and you can't use a more performant minifier, you can still try to go multi-threaded. Gradle won't help you there (yet), but libraries like GPars or the Java Fork/Join framework will.
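For the third point, a minimal sketch of calling YUI Compressor in-process, assuming yuicompressor is on the buildscript classpath; the minify helper is a made-up name, not part of any plugin:
import com.yahoo.platform.yui.compressor.JavaScriptCompressor
import org.mozilla.javascript.ErrorReporter
import org.mozilla.javascript.EvaluatorException

// Hypothetical helper: minify one file inside the Gradle JVM instead of forking java -jar.
def minify(File input, File output) {
    def reporter = [
        warning     : { msg, src, line, lineSrc, off -> println "YUI warning: $msg" },
        error       : { msg, src, line, lineSrc, off -> println "YUI error: $msg" },
        runtimeError: { msg, src, line, lineSrc, off -> new EvaluatorException(msg) }
    ] as ErrorReporter
    input.withReader('UTF-8') { reader ->
        def compressor = new JavaScriptCompressor(reader, reporter)
        output.withWriter('UTF-8') { writer ->
            // args: line-break column (-1 = never break), munge, verbose,
            // preserveAllSemiColons, disableOptimizations
            compressor.compress(writer, -1, true, false, false, false)
        }
    }
}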
The GPars solution. Note that the compress() function could be modified to properly accept a source dir, target dir, etc., but since all my names are consistent, I'm just using the one argument for now. I was able to cut my build time from 7.3s to 5.4s with only 3 files being minified. I've seen build times spiral out of control, so I'm always wary of performance with this kind of behavior.
import groovyx.gpars.GParsPool

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath 'org.codehaus.gpars:gpars:0.12'
    }
}

def compress(String type) {
    def elementsToMinify = []
    fileTree {
        from type
        include "**/*.$type"
    }.visit { element ->
        if (element.file.isFile()) {
            elementsToMinify << element
        }
    }
    GParsPool.withPool(8) {
        elementsToMinify.eachParallel { element ->
            println "Minifying ${element.relativePath}"
            def outputFileLocation = "build/$type/${element.relativePath}"
            new File(outputFileLocation).parentFile.mkdirs()
            ant.java(jar: "lib/yuicompressor-2.4.6.jar", fork: true) {
                arg(value: "$type/${element.relativePath}")
                arg(value: "-o")
                arg(value: outputFileLocation)
            }
        }
    }
}

task compressJs {
    inputs.dir new File('js')
    outputs.dir new File('build/js')
    doLast {
        compress('js')
    }
}

task compressCss {
    inputs.dir new File('css')
    outputs.dir new File('build/css')
    doLast {
        compress('css')
    }
}