I'm trying to create a new job whenever there is a new branch entry in my SVN repo; below is the script.
svnCommand = "svn list --xml http://myrepo/svn/repo_name/branches"
def proc = svnCommand.execute()
proc.waitFor()
def xmlOutput = proc.in.text
def lists = new XmlSlurper().parseText(xmlOutput)
def listOfBranches = lists.list.entry.name
listOfBranches.each() {
    def branchName = it.text()
    println "found branch: '${branchName}'"
}

mavenJob('${branchName}') {
    mavenInstallation('M3.3.9')
    logRotator(365, 25, -1, -1)
    scm {
        svn {
            location('http://myrepo/svn/repo_name/branches/${branchName}') {
                credentials('4t4d8ef-p67a-5298-a011-580ghe898a65')
            }
        }
    }
}
The script is able to iterate through the branches and print the branch names,
found branch: 'feature_01'
but I'm facing an issue with variable substitution while creating the job name and while setting the SVN branch location:
hudson.model.Failure: ‘$’ is an unsafe character
Jenkins - V.2.32
Job DSL - V.1.57
Any suggestions please? Thanks.
@Rao is right: first, you have to change:
mavenJob('${branchName}')
to:
mavenJob(branchName)
and:
location('http://myrepo/svn/repo_name/branches/${branchName}')
to:
location("http://myrepo/svn/repo_name/branches/${branchName}")
Moreover, def branchName = it.text() inside the iteration limits the variable's scope to that single iteration. Try:
listOfBranches.each() {
    branchName = it.text()
    println "found branch: '${branchName}'"
}
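Putting these fixes together, a minimal sketch of the corrected seed script could look like this (same repo URL and credentials ID as above; moving mavenJob inside the loop also sidesteps the scoping issue entirely):
svnCommand = "svn list --xml http://myrepo/svn/repo_name/branches"
def proc = svnCommand.execute()
proc.waitFor()
def listOfBranches = new XmlSlurper().parseText(proc.in.text).list.entry.name

listOfBranches.each {
    def branchName = it.text()
    println "found branch: '${branchName}'"

    // pass the Groovy variable directly; double quotes are needed for ${} interpolation
    mavenJob(branchName) {
        mavenInstallation('M3.3.9')
        logRotator(365, 25, -1, -1)
        scm {
            svn {
                location("http://myrepo/svn/repo_name/branches/${branchName}") {
                    credentials('4t4d8ef-p67a-5298-a011-580ghe898a65')
                }
            }
        }
    }
}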
I am writing a Terraform infrastructure pipeline in which I take a multi-line string parameter from Jenkins, try to convert it to a map, and pass it on to a terraform command.
Following is the code:
import groovy.json.JsonOutput

def parameters = env.params
def config
def service_map = [:]
def service_returned = [:]

node("master") {
    withEnv(['variable="test"', 'DB_ENGINE=sqlite']) {
        stage('Input') {
            config = readYaml text: "$parameters"
            println(config)
            config.each { key, value ->
                service_map = "$value"
                service_returned = stringToMap(service_map)
                println(service_returned)
            }
        }
        stage('Terraform Plan') {
            sh """
            terraform init
            terraform plan -var="instance=$service_returned"
            """
        }
    }
}

def stringToMap(service_string) {
    def map = [:]
    service_string.split(" ").each { param ->
        def nameAndValue = param.split(":")
        map[nameAndValue[0]] = nameAndValue[1]
    }
    return map
}
When I print the service_returned map from the stringToMap method, it gives a map like so:
{service="service", ec2_type="t2.micro"}
which is exactly what is needed for terraform as a variable.
But the above code evaluates to this in the console output:
terraform plan -var='instance_ids=[service:"service", ec2_type:"t2.micro"]'
which does not work for terraform.
For reference, this is the input passed in Jenkins:
services:
  service:"service"
  ec2_type:"t2.micro"
What could be the reason for this?
Is there a way to use the same returned map in the sh step in the above code?
There is a problem with the terraform command line in your code: it should be -var 'foo=bar'.
Also, stringToMap does not return correct Go map syntax.
import groovy.json.JsonOutput

// I assume you got this config from YAML
def config = [
    services: [
        service: "service",
        ec2_type: "t2.micro"
    ]
]

// function that converts a plain Groovy map to a Go-lang representation of a map
def map2go(map) {
    return "{" + map.collect { k, v -> "$k=${JsonOutput.toJson(v)}" }.join(",") + "}"
}

goMap = map2go(config.services)

sh """
terraform plan -var 'instance_ids=${goMap}'
"""
I am running the Groovy script below to fetch dynamic values from an AWS S3 bucket.
The script is working fine and fetches all objects, as shown in the output below.
Current output:
test-bucket-name/test/folde1/1.war
test-bucket-name/test/folder2/2.war
test-bucket-name/test/folder3/3.txt
Whereas I want to display only the *.war files from "test-bucket-name" in the output, like below:
1.war
2.war
My Script:
def command = 'aws s3api list-objects-v2 --bucket=test-bucket-name --output=text'
def proc = command.execute()
proc.waitFor()
def output = proc.in.text
def exitcode= proc.exitValue()
def error = proc.err.text
if (error) {
    println "Std Err: ${error}"
    println "Process exit code: ${exitcode}"
    return exitcode
}
return output.split()
Please let me know how to extract/display only the .war files from the test-bucket-name bucket.
First you need to filter the entries that end with .war, then split every entry once again (on /) and pick the last element:
def input = '''test-bucket-name/test/folde1/1.war
test-bucket-name/test/folder2/2.war
test-bucket-name/test/folder3/3.txt'''
input
    .split()
    .findAll { it.endsWith('.war') }
    .collect { it.split('/') }
    .collect { it[-1] }
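Applied to the original script, the return statement could then become (a sketch, reusing the output variable from the question):
return output
    .split()
    .findAll { it.endsWith('.war') }
    .collect { it.split('/') }
    .collect { it[-1] }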
I have a constants.groovy file as below:
def testFilesList = 'test-package.xml'
def testdataFilesList = 'TestData.xml'
def gitId = '9ddfsfc4-fdfdf-sdsd-bd18-fgdgdgdf'
I have another Groovy file that will be called in a Jenkins pipeline job:
def constants

node('auto111') {
    stage("First Stage") {
        container('alpine') {
            script {
                constants = evaluate readTrusted('jenkins_pipeline/constants.groovy')
                def gitCredentialsId = constants."${gitId}"
            }
        }
    }
}
But constants."${gitId}" fails with "cannot get gitID from null object". How do I get it?
It's because they are local variables and cannot be referenced from outside the script. Use @Field to turn them into fields.
import groovy.transform.Field
@Field
def testFilesList = 'test-package.xml'
@Field
def testdataFilesList = 'TestData.xml'
@Field
def gitId = '9ddfsfc4-fdfdf-sdsd-bd18-fgdgdgdf'
return this;
Then, in the main script, you should load it using the load step:
script {
    // make sure that the file exists on this node
    checkout scm
    def constants = load 'jenkins_pipeline/constants.groovy'
    def gitCredentialsId = constants.gitId
}
You can find more details about variable scope in this answer
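Once loaded, the fields can be used anywhere in the pipeline; for example (a sketch with a hypothetical repository URL, assuming gitId holds a Jenkins credentials ID):
script {
    checkout scm
    def constants = load 'jenkins_pipeline/constants.groovy'
    // hypothetical repo URL; the credentials ID comes from constants.groovy
    git url: 'https://example.com/scm/my-repo.git', credentialsId: constants.gitId
}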
I am trying to write a Jenkins job (say CopyJob) that copies another job (in this case a job using the Multijob plugin) and also copies all of its downstream jobs to new jobs. The idea is to have a Multijob that serves as a template, so it can be copied to new Multijobs (e.g. for a specific branch or feature).
See:
MultiJob_Template
|
+-- Sub1_Template
+-- Sub2_Template
+-- Sub3_Template
CopyJob (Parameters: NewSuffix)
When the "CopyJob" is triggered manually, it shall create a new MultiJob with new SubJobs:
MultiJob_BranchXYZ
|
+-- Sub1_BranchXYZ
+-- Sub2_BranchXYZ
+-- Sub3_BranchXYZ
So far I have been successful in copying the Multijob and copying the Subjobs, but I couldn't find a way to make the new Multijob actually depend on the new Subjobs.
My code (for the CopyJob groovy script) so far is:
import jenkins.model.*
import com.tikal.jenkins.plugins.multijob.*
def templateJobName = "MultiJob_Template"
// Retrieve parameters
def newSfx = build.buildVariableResolver.resolve("NewSuffix")
def templateJob = Jenkins.instance.getJob(templateJobName)
// copy Multijob
def newJob = Jenkins.instance.copy(templateJob, 'Multijob_' + newSfx)
newJob.save()
// copy all downstream jobs
def subs = newJob.getDownstreamProjects()
for (s in subs) {
    def oldSubJob = Jenkins.instance.getJob(s.getDisplayName())
    def newSubJob = Jenkins.instance.copy(oldSubJob, s.getDisplayName().replaceFirst(/Template/, newSfx))
    newSubJob.save()
    // how to update the MultiJob_newSfx DownstreamJoblist to use the newSubJob?
    // ????
}
I actually managed to solve it myself. Maybe there are other ways too, but it seems best to step through the list of builders and then the list of PhaseJobs of the MultiJob template.
The code of the MultiJob plugin itself helped with this solution.
It is also worth having a look at this question if you are looking for similar things.
import jenkins.model.*
import com.tikal.jenkins.plugins.multijob.*
def jenkinsInstance = jenkins.model.Jenkins.instance
def templateJobName = "Multijob_Template"
// Retrieve parameters
def newSfx = build.buildVariableResolver.resolve("NewSuffix")
// create new MultiJob
def templateJob = Jenkins.instance.getJob(templateJobName)
def newJob = Jenkins.instance.copy(templateJob, 'Multijob_' + newSfx)
newJob.save()
// get MultiJob BuildPhases and clone each PhaseJob
def builders = newJob.getBuilders()
builders.each { builder ->
    builder.getPhaseJobs().each() { pj ->
        println "cloning phasejob: " + pj.getJobName()
        def subTemplate = Jenkins.instance.getJob(pj.getJobName())
        def newSubJob = Jenkins.instance.copy(subTemplate, pj.getJobName().replaceFirst(/Template/, newSfx))
        newSubJob.save()
        pj.setJobName(newSubJob.getDisplayName())
    }
}
// update dependencies
jenkinsInstance.rebuildDependencyGraph()
I am using the Scriptler plugin in Jenkins with the parameters:
NewSuffix, TemplateStr, and templateJobName. I tweaked the script from pitseeker to use these and work around a runtime issue in Jenkins v1.580.3:
import jenkins.model.*
import com.tikal.jenkins.plugins.multijob.*
def jenkinsInstance = jenkins.model.Jenkins.instance
// Retrieve parameters
def newSfx = NewSuffix
println "using new suffix " + newSfx
// create new MultiJob
def templateJob = Jenkins.instance.getJob(templateJobName)
println "Found templateJob " + templateJobName
def newJobName = templateJobName.replaceFirst(/$TemplateStr/, newSfx)
def newJob = Jenkins.instance.copy(templateJob, newJobName)
newJob.save()
println "Copied template job to " + newJob.getName()
// get MultiJob BuildPhases and clone each PhaseJob
def builders = newJob.getBuilders()
builders.each { builder ->
    builder.getPhaseJobs().each() { pj ->
        def pjNameOrig = pj.getJobName()
        def pjNameNew = pjNameOrig.replaceFirst(/$TemplateStr/, newSfx)
        println "pjNameOrig = $pjNameOrig, pjNameNew=$pjNameNew"
        if (pjNameNew != pjNameOrig) {
            println "cloning phasejob: " + pjNameOrig
            def subTemplate = Jenkins.instance.getJob(pjNameOrig)
            def newSubJob = Jenkins.instance.copy(subTemplate, pjNameNew)
            newSubJob.save()
            pj.setJobName(newSubJob.getDisplayName())
        } else {
            println "Not cloning phasejob, keeping original job: " + pjNameOrig
        }
    }
}
// update dependencies
jenkinsInstance.rebuildDependencyGraph()
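As an illustration of the renaming (hypothetical parameter values, not taken from the original jobs):
// hypothetical values for TemplateStr and NewSuffix
def TemplateStr = 'Template'
def newSfx = 'BranchXYZ'
assert 'MultiJob_Template'.replaceFirst(/$TemplateStr/, newSfx) == 'MultiJob_BranchXYZ'
assert 'Sub1_Template'.replaceFirst(/$TemplateStr/, newSfx) == 'Sub1_BranchXYZ'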
In Groovy Console I have this:
import groovy.util.*
import org.codehaus.groovy.runtime.*
def gse = new GroovyScriptEngine("c:\\temp")
def script = gse.loadScriptByName("say.groovy")
this.metaClass.mixin script
say("bye")
say.groovy contains
def say(String msg) {
println(msg)
}
Edit: I filed a bug report: https://svn.dentaku.codehaus.org/browse/GROOVY-4214
The failure happens when it hits the line:
this.metaClass.mixin script
The loaded script probably has a reference to the class that loaded it (this class), so when you try to mix it in, you get an endless loop.
A workaround is to do:
def gse = new groovy.util.GroovyScriptEngine('/tmp')
def script = gse.loadScriptByName('say.groovy')
script.newInstance().with {
    say("bye")
}
[edit]
It seems to work if you use your original script but change say.groovy to:
class Say {
    def say(msg) {
        println msg
    }
}
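With that change, the class returned by loadScriptByName can be used directly, and the original mixin approach works as well (a sketch, assuming say.groovy still lives in c:\temp):
def gse = new groovy.util.GroovyScriptEngine("c:\\temp")
def sayClass = gse.loadScriptByName("say.groovy")   // now returns the Say class

// direct use
sayClass.newInstance().say("bye")

// or mix it into the current script, as in the original example
this.metaClass.mixin sayClass
say("bye")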