Reduce code repetition in Groovy closures

In a piece of Gradle build script, the amount of code I'm repeating keeps growing. All tasks share a large common part, except for a few lines:
task copyZipFile() {
    doLast {
        def remoteBuildProperties = getRemoteBuildProperties(project)
        ant {
            taskdef(name: 'ftp',
                    classname: 'org.apache.tools.ant.taskdefs.optional.net.FTP',
                    classpath: configurations.ftpAntTask.asPath)
            ftp(server: remoteBuildProperties['host.name'],
                userid: remoteBuildProperties['username'],
                password: remoteBuildProperties['password'],
                remotedir: 'some-folder', // This value differs from call to call
                passive: 'true') {
                // The next two lines also differ per case, and might be fewer or more lines
                fileset(dir: rootProject.buildDir) { include(name: 'build.zip') }
                fileset(dir: rootProject.projectDir) { include(name: 'build.properties') }
            }
        }
    }
}
I don't like to repeat myself, so I'd like to extract the common part into a helper method and keep each task as a simple caller, something like:
task copyZipFile() {
    doLast {
        def remoteBuildProperties = getRemoteBuildProperties(project)
        upload(remoteBuildProperties, 'some-folder') {
            fileset(dir: rootProject.buildDir) { include(name: 'build.zip') }
            fileset(dir: rootProject.projectDir) { include(name: 'build.properties') }
        }
    }
}
How would I achieve this?

You can pass the inner closure to your upload method as the final parameter. Set its delegate to the enclosing builder's delegate so that the calls inside the inner closure get dispatched to the Ant builder. For example:
def upload(remoteBuildProperties, folder, body) {
    ant {
        taskdef(name: 'ftp',
                classname: 'org.apache.tools.ant.taskdefs.optional.net.FTP',
                classpath: configurations.ftpAntTask.asPath)
        ftp(server: remoteBuildProperties['host.name'],
            userid: remoteBuildProperties['username'],
            password: remoteBuildProperties['password'],
            remotedir: folder,
            passive: 'true') {
            // Route the fileset(...) calls of the passed-in closure to the Ant builder
            body.delegate = delegate
            body()
        }
    }
}
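
Depending on what the body closure references, you may also want to set a resolve strategy so that names like fileset are looked up on the Ant builder before the calling scope. A sketch of that variant (the resolve strategy line is an optional addition, not part of the original answer):

    def upload(remoteBuildProperties, folder, Closure body) {
        ant {
            taskdef(name: 'ftp',
                    classname: 'org.apache.tools.ant.taskdefs.optional.net.FTP',
                    classpath: configurations.ftpAntTask.asPath)
            ftp(server: remoteBuildProperties['host.name'],
                userid: remoteBuildProperties['username'],
                password: remoteBuildProperties['password'],
                remotedir: folder,
                passive: 'true') {
                body.delegate = delegate
                // Assumption: prefer the Ant builder over the closure's owner when resolving calls
                body.resolveStrategy = Closure.DELEGATE_FIRST
                body()
            }
        }
    }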

Related

Jenkins Library / Groovy Script - Wrap Code dynamically

I'm developing a Jenkins shared library right now.
I haven't been able to figure out how to easily "wrap" code inside a function without copy-pasting the whole block. For example: if a developer sets a value to true, I want to wrap the whole code inside a function. Right now I want to use this to allow e.g. the gitlabIntegration to be turned off from the Jenkinsfile.
Example:
// vars/stageWrapper.groovy
def call(Map parameters = [:], body) {
    stage(stageName) {
        if (pushtoGitlab) {
            gitlabCommitStatus(stageName) {
                if (!containerName) body()
                else {
                    container(containerName) {
                        body()
                    }
                }
            }
        } else {
            if (!containerName) body()
            else {
                container(containerName) {
                    body()
                }
            }
        }
    }
}
The goals are to let the user select whether the stage should be pushed to GitLab via the gitlabCommitStatus wrapper, and to switch to a specified container or use the default container if none is specified.
To realize this I currently repeat the code, which I really don't like...
Is there any way of achieving the same thing without repeating the same code over and over?
Thank you!
In Groovy you can reuse a Closure in different DSL builders by setting its delegate to the builder's delegate.
Something like this should work:
def containerNameBody = { body ->
    if (!containerName)
        body()
    else
        container(containerName) {
            body()
        }
}

def call(Map parameters = [:], body) {
    stage(stageName) {
        containerNameBody.delegate = delegate
        if (pushtoGitlab)
            gitlabCommitStatus(stageName) {
                containerNameBody body
            }
        else
            containerNameBody body
    }
}
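
For illustration, a call to this step from a Jenkinsfile could look roughly like the following. This is a hypothetical usage sketch: it assumes the step lives in vars/stageWrapper.groovy and that stageName, pushtoGitlab and containerName are read from the parameters map, which the question's code does not show explicitly.

    // Hypothetical Jenkinsfile usage; parameter names follow the question's variables.
    stageWrapper(stageName: 'Build', pushtoGitlab: true, containerName: 'maven') {
        sh 'mvn -B clean verify'
    }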
Another approach is to pass the parameters down into the function and decide inside the function what to do based on their values:
def gitHub(gitHubOn) {
}

def gitLab(gitLabOn) {
}

def call(Map parameters = [:], body) {
    //some code....
    foo = bar
    gitLab(parameters.gitLabOn)
    gitHub(parameters.gitHubOn)
    body()
}
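The stubs above are left empty. As a rough sketch of how one of them might be filled in (an assumption, not part of the original answer), a stub that needs to wrap the stage body would also have to receive it:

    // Sketch only: wrap the body in gitlabCommitStatus when the flag is set.
    def gitLab(gitLabOn, stageName, Closure body) {
        if (gitLabOn)
            gitlabCommitStatus(stageName) { body() }
        else
            body()
    }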

Passing different parameters to created jobs

On Jenkins, using the Job DSL plugin, I'm trying to prepare one script which will create jobs configured for different environments (dev and preprod). Depending on which environment a job has to run against, different parameters are needed.
In this situation, what is the shortest way to define that the dev parameters include the same ones as the preprod parameters plus, additionally, two more?
An example of the code which I use is presented below.
def environments = ["DEV", "PREPROD"]
def names = ["name1", "name2", "name3"]

def jobParameters = {
    string {
        name("browser")
        defaultValue("CHROME")
        description("Browser on which one tests will run")
        trim(true)
    }
    string {
        name("parameter1")
        defaultValue("")
        description("")
        trim(true)
    }
}

def jobParametersDev = {
    jobParameters
    string {
        name("parameter2")
        defaultValue("")
        description("")
        trim(true)
    }
    string {
        name("parameter3")
        defaultValue("")
        description("")
        trim(true)
    }
}

def createJob(name, env, runCommand, jobParameters) {
    job("Job-${-> name}-${-> env}") {
        description("My first job for ${-> name}")
        parameters(jobParameters)
        steps {
            shell {
                command(runCommand)
            }
        }
    }
}

for (name in names) {
    for (env in environments) {
        if (env == 'DEV') {
            def runCommand = "python35 -u ./TestSuite-${-> name}.py %parameter1% %parameter2%,%parameter3% %browser%"
            createJob(name, env, runCommand, jobParametersDev)
        } else {
            def runCommand = "python35 -u ./TestSuite-${-> name}.py %parameter1% ${-> env} %browser%"
            createJob(name, env, runCommand, jobParameters)
        }
    }
}
To summarise, the last thing I tried was:

def jobParametersDev = {
    jobParameters
    ...
}

But it doesn't work... only the values from jobParametersDev are visible.
How can I also include the shared values? If it's not necessary, I don't want to duplicate the same code.
I will be really grateful for any help.
You cannot simply call one closure from within another like that, but you can chain method calls. You just need to pass the job reference.
def generateParameters = { job ->
    job.parameters {
        stringParam('param3', '', '')
        // more params here...
    }
}

def generateDevParameters = { job ->
    generateParameters(job)
    job.parameters {
        stringParam('param4', '', '')
        // more params here...
    }
}

def createJob(name, generateParameters) {
    def j = job(name) {
        // more config here...
    }
    generateParameters(j)
}

createJob('test1', generateParameters)
createJob('test2', generateDevParameters)

Groovy syntax for enter/exit block

Is there a way to define a block/environment with custom open and close methods? Currently, I have:
script {
    withCredentials([usernamePassword(credentialsId: '...', usernameVariable: 'CONFIG_USER', passwordVariable: 'CONFIG_PASS')]) {
        def sql = Sql.newInstance("...", CONFIG_USER, CONFIG_PASS, "com.mysql.jdbc.Driver")
        sql.rows("SELECT * FROM visualization").each { row ->
            println "row ${row.branch}"
        }
        sql.close()
    }
}
I would like to be able to do:
with sqlConnection() { sql ->
    sql.rows("SELECT * FROM visualization").each { row ->
        println "row ${row.branch}"
    }
}
Here it would automatically open and close the connection accordingly. I am new to Groovy, so it's the syntax I'm concerned about. In Python I would do this with an object's __enter__/__exit__ methods.
If I understand you correctly, you want a new method sqlConnection() that does the withCredentials part?
You can use a closure parameter to run something before and after the caller's code.
def sqlConnection(Closure withSqlClosure) {
    withCredentials([usernamePassword(credentialsId: '...', usernameVariable: 'CONFIG_USER', passwordVariable: 'CONFIG_PASS')]) {
        Sql.newInstance("...", CONFIG_USER, CONFIG_PASS, "com.mysql.jdbc.Driver").withCloseable { sql ->
            withSqlClosure(sql)
        }
    }
}
It can be used like this:

sqlConnection() { sql ->
    sql.rows("SELECT * FROM visualization").each { row ->
        println "row ${row.branch}"
    }
}
So everything before the call to the closure (withSqlClosure(sql)) corresponds to __enter__, and everything after the call is your __exit__. Note that you will need to look out for exceptions: usually you will want to wrap the closure call in a try { ... } finally { ... } statement. Here I used withCloseable, which does that for us (assuming Sql.newInstance returns a Closeable).
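For illustration, the explicit try/finally variant mentioned above could look roughly like this (a sketch, assuming you manage closing yourself instead of relying on withCloseable):

    def sqlConnection(Closure withSqlClosure) {
        withCredentials([usernamePassword(credentialsId: '...', usernameVariable: 'CONFIG_USER', passwordVariable: 'CONFIG_PASS')]) {
            def sql = Sql.newInstance("...", CONFIG_USER, CONFIG_PASS, "com.mysql.jdbc.Driver")
            try {
                withSqlClosure(sql)   // the "__enter__" part has run; hand control to the caller
            } finally {
                sql.close()           // the "__exit__" part: always close, even when the body throws
            }
        }
    }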
To aid your IDE and enable @CompileStatic you should also add a @ClosureParams annotation:
def sqlConnection(
        @ClosureParams(value = groovy.transform.stc.SimpleType,
                       options = ["your.sql.type"]) Closure withSqlClosure) {
    withCredentials([usernamePassword(credentialsId: '...', usernameVariable: 'CONFIG_USER', passwordVariable: 'CONFIG_PASS')]) {
        Sql.newInstance("...", CONFIG_USER, CONFIG_PASS, "com.mysql.jdbc.Driver").withCloseable { sql ->
            withSqlClosure(sql)
        }
    }
}
Here your.sql.type is the return type of Sql.newInstance.
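Since the connection in the snippets above comes from Sql.newInstance, the concrete type would presumably be groovy.sql.Sql, giving something like this sketch:

    import groovy.transform.stc.ClosureParams
    import groovy.transform.stc.SimpleType

    // Assumption: the closure receives the groovy.sql.Sql instance created by Sql.newInstance.
    def sqlConnection(
            @ClosureParams(value = SimpleType, options = ["groovy.sql.Sql"]) Closure withSqlClosure) {
        withCredentials([usernamePassword(credentialsId: '...', usernameVariable: 'CONFIG_USER', passwordVariable: 'CONFIG_PASS')]) {
            Sql.newInstance("...", CONFIG_USER, CONFIG_PASS, "com.mysql.jdbc.Driver").withCloseable { sql ->
                withSqlClosure(sql)
            }
        }
    }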

Reuse spinner in sync methods without repeating code

In our code we repeat the same sequence multiple times: start a spinner, execute a spawnSync call, and update the spinner depending on the result. For example, here is one of the methods:
cloneSync() {
    const spinner = ora({
        text: 'Cloning repository',
        color: 'cyan',
        spinner: 'arrow3'
    }).start();

    let clone = spawnSync('git', ['clone', repository.url, repository.name]);
    if (clone.stderr) {
        spinner.fail('Error while cloning repository');
        throw new Error(clone.stderr);
    } else {
        spinner.succeed('Successfully cloned repository');
        return clone.stdout;
    }
}
Another code example so you can see the logic is almost identical:
parseLatestTagAndTransmitToDocker() {
    const spinner = ora({
        text: 'Checking latest tag',
        color: 'cyan',
        spinner: 'arrow3'
    }).start();

    let tag = spawnSync('git', ['describe', '--abbrev=0']);
    if (tag.stderr) {
        spinner.fail('Error while fetching latest tag of repository');
        throw new Error(tag.stderr);
    } else {
        spinner.text(`Successfully retrieved latest tag: ${tag.stdout}`);
        let docker = spawnSync('docker', ['run', 'myimage:latest', tag.stdout]);
        if (docker.stderr) {
            spinner.fail('Error while transmitting tag to docker image');
            throw new Error(docker.stderr)
        } else {
            spinner.success('Successfully transmitted tag to docker service');
            return docker.stdout;
        }
    }
}
Is it possible, in Node 8+, to wrap this code and make it more reusable? I struggle to find a reusable form without having to repeat the spinner setup and the if/else condition. With async code I could use try/catch and async/await, but with these sync methods I can't find the proper way to express that kind of behaviour.
From the two examples you've provided, I can see a "SpinnerSpawner" function that returns a promise:
function spinnerSpawner(spinnerConfig, cmd, args) {
    if (typeof(spinnerConfig) == "string") spinnerConfig = {
        text: spinnerConfig,
        color: "cyan",
        spinner: "arrow3"
    }
    return new Promise(function(resolve, reject) {
        let spinner = ora(spinnerConfig).start(),
            proc = spawnSync(cmd, args)
        if (!proc.stderr) {
            resolve(spinner, proc)
        } else {
            reject(spinner, proc)
        }
    })
}
cloneSync() {
    spinnerSpawner("cloning repository", "git", ["clone", repository.url, repository.name])
        .then(function(spinner, proc) {
            spinner.succeed('Successfully cloned repository');
            return proc.stdout;
        }, function(spinner, proc) {
            spinner.fail('Error while cloning repository');
            throw new Error(proc.stderr);
        })
}

parseLatestTagAndTransmitToDocker() {
    spinnerSpawner("Checking latest tag", "git", ["describe", "--abbrev=0"])
        .then(function(spinner, proc) {
            spinner.text(`successfully retrieved latest tag: ${proc.stdout}`)
            return spinnerSpawner("checking docker", "docker", ["run", "myimage:latest", proc.stdout])
        })
        .then(function(spinner, proc) {
            spinner.success("Processing completed")
            return proc.stdout
        }, function(spinner, proc) {
            spinner.fail(`processing error: ${proc.stderr}`);
            throw new Error(proc.stderr);
        })
}
As always, my code is pseudo-code and not fit for execution, let alone production!

Groovy: More elegant way to achieve this?

I have the following Groovy function:
def getDependantLibs(updatedLib) {
    def dependants = []
    getAllLibs().each { lib ->
        try {
            def ref = getFromStash("/repos/${lib}/dependencies.json").find { it.name == updatedLib }
            if (ref != null) {
                dependants.add([name: lib, version: ref.version])
            }
        } catch (err) {}
    }
    return dependants
}
My question is: can I achieve this in a more elegant way (maybe with Groovy collection methods like collect, flatten, ...)?
Yes, you can use collectMany. E.g.:
def deps = [1, 2, 3, 4]
println deps.collectMany {
    try {
        if (it & 1) {
            throw new RuntimeException(it.toString())
        }
        (0..it).collect {
            [x: it]
        }
    }
    catch (Exception e) {
        println "failed: $e.message"
        return []
    }
}
// output:
// failed: 1
// failed: 3
// [[x:0], [x:1], [x:2], [x:0], [x:1], [x:2], [x:3], [x:4]]
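
Applied to the function from the question, that could look roughly like this (a sketch reusing the question's getAllLibs and getFromStash helpers):

    def getDependantLibs(updatedLib) {
        getAllLibs().collectMany { lib ->
            try {
                def ref = getFromStash("/repos/${lib}/dependencies.json").find { it.name == updatedLib }
                ref != null ? [[name: lib, version: ref.version]] : []
            } catch (err) {
                []   // a failing lib contributes nothing, as in the original
            }
        }
    }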
Instead of using Collection.each(Closure closure) you can use Collection.collect(Closure transform) to transform each element of the list from one format to another, and Collection.findAll(Closure predicate) to filter out null elements from the final list. Something like this:
def getDependantLibs(updatedLib) {
    return getAllLibs().collect { lib ->
        try {
            [lib: lib, ref: getFromStash("/repos/${lib}/dependencies.json").find { it.name == updatedLib }]
        } catch (err) {
            [lib: lib, ref: null]
        }
    }.findAll { it.ref != null }.collect { [name: it.lib, version: it.ref.version] }
}
Using Java 8 Stream API
You can always use the Java 8 Stream API with Groovy. The main advantage of the Stream API in this case is that all operations on the stream are lazy: they are executed only when the terminal reduction function is called. It means that you can apply any number of transformations and only one iteration will be triggered. In Groovy, if you apply, say, 3 bulk collection methods, Groovy will iterate 3 times.
Below you can find an example of using Java 8 Stream API in Groovy:
import java.util.stream.Collectors

def getDependantLibsWithStream(updatedLib) {
    return getAllLibs().stream()
        .map({ lib ->
            try {
                [lib: lib, ref: getFromStash("/repos/${lib}/dependencies.json").find { it.name == updatedLib }]
            } catch (err) {
                [lib: lib, ref: null]
            }
        })
        .filter({ it.ref != null })
        .map({ [name: it.lib, version: it.ref.version] })
        .collect(Collectors.toList())
}
I hope it helps.
