Passing values from child declarative Jenkins pipeline to the parent - groovy

I have a declarative pipeline setup where the parent pipeline kicks off the child pipeline via "build job: workflow('workflow-name')", and I'm passing the parameters via the "parameters" directive.
In the child pipeline, one of the stages spawns a shell and writes a few values to a file; the contents of the file are then read via the readFile method and stored in a Groovy variable (myVal) defined at the top of the child pipeline.
This Groovy variable is visible in all the stages of the child pipeline, but I need to use myVal in the parent pipeline.
Question 1 - Will myVal be accessible in the parent pipeline?
Question 2 - If it is not accessible, how can I access it? Is that even viable?
As you can see, the child pipeline runs in a container but the parent pipeline does not, i.e. the agent of the parent pipeline is different from that of the child.
def myVal = ''

pipeline {
    agent {
        docker {
            image 'myDockerImage'
            label 'myRemoteVM'
            args '-v /home/myuser:/home/myuser'
        }
    }
    stages {
        stage('step1') {
            steps {
                script {
                    sh '''
                        ./myScript.sh
                    '''
                    myVal = readFile('myFileName.txt').trim()
                }
            }
        }
    }
}
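For context, a minimal, hedged sketch of the parent-side invocation described above; the job name, parameter name, and MY_VAL are placeholders, not details from the original setup. The child's Groovy variable itself is not visible to the parent, but the build step returns a handle to the child run, and its buildVariables map is one commonly used way to surface values the child exports via env:

// Hedged sketch of the parent side only, not a confirmed answer for this setup.
// If the child does env.MY_VAL = myVal, the parent can read it back from the
// returned RunWrapper's buildVariables map.
def childRun = build job: 'child-workflow-name',
        parameters: [string(name: 'SOME_PARAM', value: 'some-value')]
echo "Value reported by child: ${childRun.buildVariables['MY_VAL']}"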

Related

Calling a variadic function in a Jenkinsfile fails unpredictably

Context
I'm running Jenkins on Windows, writing declarative pipelines. I'm trying to put multiple commands in a single bat step, while still making the step fail if any of the included commands fail.
The purpose of this is twofold:
1. The best practices document suggests that creating a step for every little thing might not be the best idea (this might also be solved by putting more stuff in batch files, but my builds aren't that big yet).
2. I want to execute some commands in a Visual Studio command prompt, which is achieved by first calling a .bat file that sets up the environment, and then running any necessary commands.
Code
I wrote the following Groovy code in my Jenkinsfile:
def ExecuteMultipleCmdSteps(String... steps)
{
    bat ConcatenateMultipleCmdSteps(steps)
}

String ConcatenateMultipleCmdSteps(String... steps)
{
    String[] commands = []
    steps.each { commands += "echo ^> Now starting: ${it}"; commands += it; }
    return commands.join(" && ")
}
The problem/question
I can't get this to work reliably. That is, in a single Jenkinsfile, I can have multiple calls to ExecuteMultipleCmdSteps(), and some will work as intended, while others will fail with java.lang.NoSuchMethodError: No such DSL method 'ExecuteMultipleCmdSteps' found among steps [addBadge, ...
I have not yet found any pattern in the failures. I thought it only failed when executing from within a warnError block, but now I also see a failure from within a dir() block, while in a different Jenkinsfile the same thing works fine.
This problem seems to be related to ExecuteMultipleCmdSteps() being a variadic function. If I provide an overload with the correct number of arguments, then that overload is used without problem.
I'm at a loss here. Your input would be most welcome.
Failed solution
At some point I thought it might be a scoping/importing thing, so I enclosed ExecuteMultipleCmdSteps() in a class (code below) as suggested by this answer. Now, the method is called as Helpers.ExecuteMultipleCmdSteps(), and that results in an org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: No such static method found: staticMethod Helpers ExecuteMultipleCmdSteps org.codehaus.groovy.runtime.GStringImpl org.codehaus.groovy.runtime.GStringImpl
public class Helpers {
    public static environment

    public static void ExecuteMultipleCmdSteps(String... steps)
    {
        environment.bat ConcatenateMultipleCmdSteps(steps)
    }

    public static String ConcatenateMultipleCmdSteps(String... steps)
    {
        String[] commands = []
        steps.each { commands += "echo ^> Now starting: ${it}"; commands += it; }
        return commands.join(" && ")
    }
}
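For reference, the class above can only resolve environment.bat if the pipeline script is handed to it first. A hedged sketch of the wiring this approach appears to assume (the call arguments are just examples):

// Hypothetical wiring assumed by the Helpers class above: pass the running
// pipeline script ('this') in so that 'environment.bat' resolves to the bat step.
Helpers.environment = this
Helpers.ExecuteMultipleCmdSteps("dir", "echo done")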
Minimal failing example
Consider the following:
hello = "Hello"
pipeline {
agent any
stages {
stage("Stage") {
steps {
SillyEcho("Hello")
SillyEcho("${hello}" as String)
SillyEcho("${hello}")
}
}
}
}
def SillyEcho(String... m)
{
echo m.join(" ")
}
I'd expect all calls to SillyEcho() to result in Hello being echoed. In reality, the first two succeed, and the last one results in java.lang.NoSuchMethodError: No such DSL method 'SillyEcho' found among steps [addBadge, addErrorBadge,...
Curiously succeeding example
Consider the following Groovy script, pretty much equivalent to the failing example above:
hello = "Hello"
SillyEcho("Hello")
SillyEcho("${hello}" as String)
SillyEcho("${hello}")
def SillyEcho(String... m)
{
println m.join(" ")
}
When pasted into a Groovy Script console (for example the one provided by Jenkins), this succeeds (Hello is printed three times).
Even though I'd expect this example to succeed, I'd also expect it to behave consistently with the failing example, above, so I'm a bit torn on this one.
Thank you for adding the failing and succeeding examples.
I expect your issues are due to the incompatibility of String and GString.
With respect to the differences between running it as a pipeline job and running it in the Jenkins Script Console, I assume that the Script Console is not as strict with type references, or that it casts parameters based upon the function signature. I base this assumption on the following script, derived from yours:
hello = "Hello"
hello2 = "${hello}" as String
hello3 = "${hello}"
println hello.getClass()
println hello2.getClass()
println hello3.getClass()
SillyEcho(hello)
SillyEcho(hello2)
SillyEcho(hello3)
def SillyEcho(String... m)
{
    println m.getClass()
}
This is the output I got in the Jenkins Script Console:
class java.lang.String
class java.lang.String
class org.codehaus.groovy.runtime.GStringImpl
class [Ljava.lang.String;
class [Ljava.lang.String;
class [Ljava.lang.String;
I expect the pipeline doesn't cast the GString to String but simply fails, as there is no function with a GString parameter.
For debugging you could try to invoke .toString() on all elements you pass to your function.
Update
This seems to be a known issue (or at least reported) with the pipeline interpreter: JENKINS-56758.
In the ticket an additional workaround is described: using collections instead of varargs. This would remove the need to type-cast everything (see the sketch below).
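A hedged sketch of what that collection-based signature could look like when applied to the SillyEcho example (the general idea only, not verified against the ticket):

// Accepting a List instead of varargs avoids the String[]/GString dispatch
// problem: the single List argument matches regardless of element types.
def SillyEcho(List messages)
{
    echo messages.join(" ")
}

// Both of these should now behave the same:
SillyEcho(["Hello"])
SillyEcho(["${hello}"])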
Not sure if this will answer your question; if not, consider it a bigger comment.
I like how you borrowed the 'variadic functions' from C++ :)
However, in Groovy there is a more elegant way to deal with this.
Try this:
def ExecuteMultipleCmdSteps(steps)
{
    sh steps
        .collect { "echo \\> Now starting: $it && $it" }
        .join(" && ")
}
pipeline {
    agent any
    stages {
        stage("test") {
            steps {
                ExecuteMultipleCmdSteps(["pwd", "pwd"])
            }
        }
    }
}
which works just fine for me:
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/lib/jenkins/workspace/TestJob
[Pipeline] {
[Pipeline] stage
[Pipeline] { (test)
[Pipeline] sh
+ echo > Now starting: pwd
> Now starting: pwd
+ pwd
/var/lib/jenkins/workspace/TestJob
+ echo > Now starting: pwd
> Now starting: pwd
+ pwd
/var/lib/jenkins/workspace/TestJob
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
You may want to rewrite your function like this.
The 2 errors you mention may have different causes.
The first one, "No such DSL method ...", is indeed a scoping one; you found the solution yourself, but I do not understand why the overload works in the same scope.
The second error may be solved with this answer. However, for me your code from the second approach also works just fine.

String Interpolation in Groovy with Jenkins Pipeline file not working

So I have a Jenkins Pipeline which reads a text file (JSON) using the readFile method provided by Jenkins Pipeline. The text file app.json contains multiple variables which are already defined in the Jenkins Pipeline.
While readFile does read the file and convert it into a string, it does not interpolate these variables. What are my options to interpolate these variables besides a simple string replace (which I want to avoid)?
I know I can use readJSON or a JSON parser, but I want the output as a string so that it is easier to just read it and pass it along.
I have tried using GStrings, ${-> variable} and the .toString() method. Nothing worked for me.
Jenkins Pipeline Code
appServerName = 'gaga'
def appMachine = readFile file: 'util-silo-create-v2/app.json'
println appMachine
app.json
{
    "name":"${appServerName}",
    "fqdn":"${appServerName}"
}
There is more than one variable in both the pipeline and app.json that I want to substitute.
The issue is with the readFile method provided by Jenkins Pipeline. Although it is very neat and easy to use, it does not interpolate strings.
I expect the output below:
println appMachine
{
"name":"gaga",
"fqdn":"gaga"
}
Output I am getting
{
"name":"${appServerName}",
"fqdn":"${appServerName}"
}
Your assumption that the readFile step (or any other method that reads content from a text file) should bind the variables from the current scope and interpolate variable placeholders in the raw text is wrong. However, you can use Groovy's template engine to get something similar to GString variable interpolation. Consider the following example:
import groovy.text.SimpleTemplateEngine

def jsonText = '''{
    "name":"${appServerName}",
    "fqdn":"${appServerName}"
}'''

@NonCPS
def parseJsonWithVariables(String json, Map variables) {
    def template = new SimpleTemplateEngine()
    return template.createTemplate(json).make(variables.withDefault { it -> "\${$it}" }).toString()
}

node {
    stage("Test") {
        def parsed = parseJsonWithVariables(jsonText, [
            appServerName: "gaga"
        ])
        echo parsed
    }
}
The method parseJsonWithVariables does what you expect. It is critical to make this method @NonCPS, because the SimpleTemplateEngine, as well as the map created using withDefault(), are not serializable. It takes a JSON string read previously from a file (in this example I use a variable instead for simplicity) and a map of parameters. It converts this map to a map with a default value (the variables.withDefault { ... } part is responsible for that) so the template engine does not complain that there is no property with a given name. In this case the default method returns the variable placeholder "as is", but you can return an empty string or a null value instead, whatever works better for you.
When you run it you will see something like this:
[Pipeline] Start of Pipeline
[Pipeline] node
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] echo
{
"name":"gaga",
"fqdn":"gaga"
}
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
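Since the example above inlines the JSON for simplicity, here is a short sketch of the same helper fed from the file mentioned in the question (assuming parseJsonWithVariables is defined as above and the file path from the question is correct):

node {
    stage("Render") {
        // Read the raw JSON from the workspace, then interpolate the variables.
        def jsonText = readFile file: 'util-silo-create-v2/app.json'
        def appMachine = parseJsonWithVariables(jsonText, [appServerName: 'gaga'])
        echo appMachine
    }
}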

Best way to set reusable property in Jenkinsfile

My Jenkins pipeline shared library can send notifications on demand. The user basically has to provide the notification channel details, like the Slack channel name or email ID, for each stage for which they want a notification.
I do not want the user to repeat this property in every stage, but rather define it once in the Jenkinsfile so I can use it. What would be the best way to set this variable?
Sample:
// these properties need to be accessed by my groovy files
def properties = "channelNm,token"

node('myNode') {
    stage('checkout') {
        slackNotify = "yes"
        .....
    }
    stage('compile') {
        slackNotify = "yes"
        .....
    }
}
When you use a Jenkins shared library you can create a configuration class and expose a DSL script that allows you to modify your configuration object. Take a look at the following example. Let's say you have a class called NotificationConfig located in the src folder:
src/NotificationConfig.groovy
@Singleton
class NotificationConfig {
    String slackChannelName
    String email
    String otherStuff
}
This class is a singleton, which means that you can get its instance (a single one) with NotificationConfig.instance. Now, let's say you have a DSL script called notificationConfig.groovy located in the vars folder:
vars/notificationConfig.groovy
#!groovy
def call(Closure body) {
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = NotificationConfig.instance
    body()
}
This is a very simple script that delegates the closure body to be executed in the context of the NotificationConfig object. Now let's take a look at a very simple scripted pipeline that uses the notificationConfig DSL to set some configuration values:
Jenkinsfile
notificationConfig {
    email = 'test@test.com'
    slackChannelName = 'test'
}

node {
    stage('Test') {
        echo NotificationConfig.instance.email
    }
}
When I run this simple pipeline I see:
[Pipeline] node
Running on Jenkins in /var/jenkins_home/workspace/test-pipeline
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] echo
test@test.com
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
As you can see, with such a DSL you can expose the NotificationConfig object, let the pipeline define its own values, and then access these values in the shared library via the NotificationConfig.instance object.
ATTENTION: you can always set up some default values in the NotificationConfig object so library users can override them in their pipelines or rely on the defaults if needed, as sketched below.
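A minimal sketch of such defaults (the values are purely illustrative):

@Singleton
class NotificationConfig {
    // Hypothetical defaults; pipelines can override them via the
    // notificationConfig { ... } DSL or simply rely on these values.
    String slackChannelName = '#builds'
    String email = 'team@example.com'
    String otherStuff
}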
This is a pretty popular pattern in Jenkins Pipeline shared libraries. You can read more about it in this Jenkins blog post: https://jenkins.io/blog/2017/10/02/pipeline-templates-with-shared-libraries/

Spark/Gradle -- Getting IP Address in build.gradle to use for starting master and workers

I understand at a basic level the various moving parts of build.gradle build scripts but am having trouble tying it all together.
In Apache Spark standalone mode, I'm just trying to start a master and a worker on the same box from build.gradle. (Later I will extend this with a call to $SPARK_HOME/sbin/start-slaves with the proper argument for the master IP.)
Question: How can I assign my IP address to a variable in Groovy/build.gradle so I can pass it to a command in an Exec task? We want this to run on a couple of different development machines.
We have a (I think fairly standard) /etc/hosts config with the FQDN and hostname assigned to 127.0.1.1. The driver gets around this OK, but starting the master and slaves with hostnames is not an option; I need the IP address.
I am trying:
task getMasterIP(type: Exec) {
    // declare script scope variable using no def or type declaration
    executable "hostname"
    args += "-I"
    // need results of hostname call assigned to script scope variable
    sparkMasterIP = <resultsOfHostnameCall>
}
// added this because startSlave stops if Master is already running
task startSlaveOnly(dependsOn: 'getMasterIP', type: Exec) {
    executable "/usr/local/spark/sbin/start-slave.sh"
    args += "spark://$sparkMasterIP:7077"
    doLast {
        println "enslaved"
    }
}

// now make startSlave call startSlaveOnly after the initial startMaster
task startSlave(dependsOn: 'startMaster', type: Exec) {
    finalizedBy 'startSlaveOnly'
}
When I try something like what is suggested in the Exec docs for Groovy calls:
task getMasterIP(type: Exec) {
    // declare script scope variable using no def or type declaration
    sparkMasterIP = executable "hostname"
    args += "-I"
}
I get a warning that executable is not recognized.
The " for a little more background on what I am thinking" section, not the main question.
Googling "build.gradle script scope variables" and looking at the first two results, in the basic docs I only see one type of variable and ext properties to be used.
16.4. Declaring variables -- There are two kinds of variables that can be declared in a build script: local variables and extra properties.
But in this other Gradle doc, Appendix B. Potential Traps, I am seeing two kinds of variable scopes aside from the ext properties:
For Gradle users it is important to understand how Groovy deals with script variables. Groovy has two types of script variables: one with a local scope and one with a script-wide scope.
With this example usage:
String localScope1 = 'localScope1'
def localScope2 = 'localScope2'
scriptScope = 'scriptScope'
I am assuming I should be using script-scope variables with no "def" or type declaration.
To fetch local IPs:
// Return all IPv4 addresses
def getLocalIPv4() {
    def ip4s = []
    NetworkInterface.getNetworkInterfaces()
            .findAll { it.isUp() && !it.isLoopback() && !it.isVirtual() }
            .each {
                it.getInetAddresses()
                        .findAll { !it.isLoopbackAddress() && it instanceof Inet4Address }
                        .each { ip4s << it.getHostAddress() }
            }
    return ip4s
}

// Optionally, return all IPv6 addresses
def getLocalIPv6() {
    def ip6s = []
    NetworkInterface.getNetworkInterfaces()
            .findAll { it.isUp() && !it.isLoopback() && !it.isVirtual() }
            .each {
                it.getInetAddresses()
                        .findAll { !it.isLoopbackAddress() && it instanceof Inet6Address }
                        .each { ip6s << it.getHostAddress() }
            }
    return ip6s
}
task printIP {
    doLast {
        println getLocalIPv4()
        println getLocalIPv6()
    }
}
The two functions above return a list of IPv4 or IPv6 addresses respectively. You might notice that I'm skipping all localhosts, interfaces that are not up, all loopbacks and virtual interfaces. If you want to use the first IPv4 address, you can use it elsewhere as:
getLocalIPv4()[0]
or in your case:
args += "spark://"+ getLocalIPv4()[0] + ":7077"
I found this post that appears to be a more straightforward way of doing this, but it is limited to Linux platforms; hostname -I doesn't work on Windows and maybe not on all Linux distros. The post covers:
- getting the hostname
- assigning it to a variable
- using it in a build.gradle task
Here's the task I built as a result. The accepted answer is much better and more universal; this is just another way of looking at it.
task getMasterIP {
    doLast {
        new ByteArrayOutputStream().withStream { os ->
            def result = exec {
                executable = 'hostname'
                args += '-I'
                // capture the command's output; without this the stream stays empty
                standardOutput = os
            }
            ext.ipAddress = os.toString()
        }
    }
}
RaGe's answer does a better job of looking at all interfaces on all platforms

Run a remote command on all Jenkins slaves via Masters's script console

I want to run the same shell command (very simple shell commands like ls) on all the UNIX slaves which are connected to the master, using the master's script console.
How can I do this using Groovy?
I want to do something like this: Display Information About Nodes, but instead of displaying information, I also want to run some simple UNIX commands on each slave and print the results.
import hudson.util.RemotingDiagnostics;

print_ip = 'println InetAddress.localHost.hostAddress';
print_hostname = 'println InetAddress.localHost.canonicalHostName';

// here it is - the shell command, uname as example
uname = 'def proc = "uname -a".execute(); proc.waitFor(); println proc.in.text';

for (slave in hudson.model.Hudson.instance.slaves) {
    println slave.name;
    println RemotingDiagnostics.executeGroovy(print_ip, slave.getChannel());
    println RemotingDiagnostics.executeGroovy(print_hostname, slave.getChannel());
    println RemotingDiagnostics.executeGroovy(uname, slave.getChannel());
}
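The same pattern works for the simple commands mentioned in the question (ls, for example); a small sketch (the directory is just an example):

import hudson.util.RemotingDiagnostics;

// Run a simple 'ls' on every connected slave and print the output on the master.
list_files = 'def proc = "ls -la /tmp".execute(); proc.waitFor(); println proc.in.text';
for (slave in hudson.model.Hudson.instance.slaves) {
    println slave.name;
    println RemotingDiagnostics.executeGroovy(list_files, slave.getChannel());
}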
In the end, I don't use a wildcard to search the agents; instead I read and parse their names. For example, if I want to run a job on every agent that has LINUX in its name, I do the following:
for (aSlave in hudson.model.Hudson.instance.slaves)
{
    /* take into account just agents with LINUX in name */
    AGENT_NAME = aSlave.name
    if (AGENT_NAME.contains('LINUX'))
    {
        /* you can check also if the agent is online or other attributes */
        /* Add agent name as label of the agent */
        AGENT_LABELS = aSlave.getLabelString() + " " + AGENT_NAME
        aSlave.setLabelString(AGENT_LABELS)
        /* For each found agent, create a job that will run on it */
        job('My_job_name_' + AGENT_NAME)
        {
            label(AGENT_NAME)
            steps {
                /* Do whatever you want here.
                   This job will run just on this specific agent (due to label set) */
            }
        }
    } /* end if */
} /* end for */
/* If you want to run all jobs in parallel (every job on a specific agent), you can
   save all found agents in a list and then create one more pipeline job that contains
   something like:

   parallel(
       b0: { build 'My_job_name_' + AGENT_NAME_LIST[0] },
       b1: { build 'My_job_name_' + AGENT_NAME_LIST[1] },
       ....
       failFast: false
   )
*/
The pipeline looks something like this:
stages {
    stage('Checkout repo') {
        steps {
            // checkout what I need
        }
    }
    stage('Generate Jobs') {
        steps {
            jobDsl targets: 'generate_projects.groovy'
        }
    }
    stage('Build Projects') {
        steps {
            build job: "build-all",
                propagate: true,
                wait: true
        }
    }
}
and then the file generate_projects.groovy, where the actual DSL generation happens:
for (agent in hudson.model.Hudson.instance.slaves) {
    if (!agent.getComputer().isOffline()) { // check that agent is not offline
        node = jenkins.model.Jenkins.instance.getNode(agent.name) // get agent name
        agentIPs = node.computer.getChannel().call(new ListPossibleNames())
        agentIP = agentIPs[0] // get agent IP
        // Create a job that will run on that specific agent
        jobName = FOLDER + '/<Job_name>' + agent.name // need to create different names
        job(jobName)
        {
            label(agent.name)
            steps
            {
                shell(<shell script or commands that you want to run>)
            }
        }
    }
}
Besides the above generation of the jobs, you'll need to keep a list of the generated jobs and add all of its elements to the "build-all" pipeline job, which will look something like this:
parallel(
    b0: { build '<Job_name>' + agent.name },
    b1: { build '<Job_name>' + agent.name },
    b2: { build '<Job_name>' + agent.name },
    .....
    failFast: false
)
So when you run the pipeline, a job for each agent will be created, and all newly created jobs will run in parallel. I use it for updating a setup scenario.
Pretty old thread.
I managed the same situation in the following way. I have a pipeline job that performs the following stages:
- first it checks for online agents (since they are physical machines, one may happen to be down), using something like: for (slave in hudson.model.Hudson.instance.slaves) ...
- the next stage creates jobs for each found agent using the DSL plugin and list_of_agents.each.
Besides the jobs for every online agent, a job is created that will run all of them in parallel. Of course, the newly created jobs contain the commands that I want to run on the agents. When I run the pipeline, the same script/commands run on all agents and the output is returned to the master pipeline job.
