I'm trying to write a script that will get the build number of a build that has been triggered by another job. For example:
I have a build job that calls two other jobs (Call/trigger builds on other projects). When the main job finishes with success, I would like to get the build number of the first job that was triggered from within it. The script I'm trying to run finds the main job, but I can't find any way to get the build number of the triggered job.
def job = jenkins.model.Jenkins.instance.getItem("Hourly")
job.builds.each {
    def build = it
    if (it.getResult().toString().equals("SUCCESS")) {
        // The rest of the code should go here!
    }
}
I tried to find it in the Jenkins Javadoc API and online, but without any luck. Can somebody please help me with this?
P.S. The script runs after the job has finished (it is only triggered when needed).
You can parse the build number (of the child job) from the build log (of the parent job).
For example:
j = Jenkins.getInstance();
jobName = "parentJobName";
job = j.getItem(jobName);
bld = job.getBuildByNumber(parentBuildNumber); // parentBuildNumber: the number of the parent build you are inspecting
def buildLog = bld.getLog(10).join("\n"); // getLog() returns a List<String> - make sure you read enough lines
def group = (buildLog =~ /#(\d+) of Job : childJobName with/);
println("The triggered build number: ${group[0][1]}");
I have a series of smoke tests that my company uses to validate its web application. These tests are written in Ruby. We want to split these tests into a series of tasks within Locust.io. I am a newbie when it comes to Locust.io. I have written Python code that can run these tasks one after the other in succession. However, when I make them Locust.io tasks, nothing is reported in the stats window. I can see the tests run in the console but the statistics never get updated. What do I need to do? Here is a snippet of the Locustfile.py I generate.
def RunTask(name, task):
    code, logs = RunSmokeTestTask(name, task)
    info("Smoke Test Task {0}.{1} returned errorcode {2}".format(name, task, code))
    info("Smoke Test Task Log Follows ...")
    info(logs)

class SmokeTasks(TaskSet):
    @task
    def ssoTests_test_access_sso(self):
        RunTask("ssoTests.rb", "test_access_sso")

    # . . .
RunSmokeTestTask is what actually runs the task. It is the same code that I am using when I invoke the task outside of Locust.IO. I can see the info in the logfile. Some of them fail but the statistics never update. I know I am probably missing something silly.
You need to actually report the events. (edit: I realize now that maybe you were hoping that locust/python would be able to detect the requests made from Ruby, but that is not possible. If you are ok with just reporting the whole test as a single "request", then keep reading)
Add something like this to your taskset:
self.user.environment.events.request_success.fire(request_type="runtask", name=name, response_time=total_time, response_length=0)
You'll also need to measure the time it took. Here is a more complete example (but also a little complex):
https://docs.locust.io/en/stable/testing-other-systems.html#sample-xml-rpc-user-client
Note: TaskSets are an advanced (useless, imho) feature; you probably want to put the @task methods directly under a User, and the RunTask method as well.
Something like this:
class SmokeUser(User):
    def RunTask(self, name, task):
        start_time = time.time()
        code, logs = RunSmokeTestTask(name, task)
        total_time = (time.time() - start_time) * 1000  # Locust expects response_time in milliseconds
        self.environment.events.request_success.fire(request_type="runtask", name=name, response_time=total_time, response_length=0)
        info("Smoke Test Task {0}.{1} returned errorcode {2}".format(name, task, code))
        info("Smoke Test Task Log Follows ...")
        info(logs)

    @task
    def ssoTests_test_access_sso(self):
        self.RunTask("ssoTests.rb", "test_access_sso")
I am trying to deploy to a list of servers in parallel to save some time. The names of the servers are listed in a collection: serverNames
The original code was:
serverNames.each({
    def server = new Server([steps: steps, hostname: it, domain: "test"])
    server.stopTomcat()
    server.ssh("rm -rf ${WEB_APPS_DIR}/pc*")
    PLMScriptUtils.secureCopy(steps, warFileLocation, it, WEB_APPS_DIR)
})
Basically I want to stop Tomcat, remove a file, and then copy a WAR file to a location using the following lines:
server.stopTomcat()
server.ssh("rm -rf ${WEB_APPS_DIR}/pc*")
PLMScriptUtils.secureCopy(steps, warFileLocation, it, WEB_APPS_DIR)
The original code worked properly: it took one server at a time from the serverNames collection and performed the three lines to do the deployment.
But now I have a requirement to run the deployment to the servers listed in serverNames in parallel.
Below is my new modified code:
def threads = []
def th
serverNames.each({
    def server = new Server([steps: steps, hostname: it, domain: "test"])
    th = new Thread({
        steps.echo "doing deployment"
        server.stopTomcat()
        server.ssh("rm -rf ${WEB_APPS_DIR}/pc*")
        PLMScriptUtils.secureCopy(steps, warFileLocation, it, WEB_APPS_DIR)
    })
    threads << th
})
threads.each {
    steps.echo "joining thread"
    it.join()
}
threads.each {
    steps.echo "starting thread"
    it.start()
}
The echo statements were added to visualize the flow.
With this, the output comes out as:
joining thread
joining thread
joining thread
joining thread
starting thread
starting thread
starting thread
starting thread
The number of servers in the collection was 4, hence the thread is added and started 4 times. But it is not executing the 3 lines I want to run in parallel, which means "doing deployment" is never printed, and later the build fails with an exception.
Note that I am running this Groovy code as a pipeline through Jenkins. This whole piece of code is actually a function called deploy of the class deployment, and my pipeline in Jenkins creates an object of the deployment class and then calls the deploy function.
Can anyone help me with this? I am stuck like hell with this one. :-(
Have a look at the parallel step. In scripted pipelines (which you seem to be using), you can pass it a map of thread name to action (as a Groovy closure) which is then run in parallel.
deployActions = [
    Server1: {
        // stop tomcat etc.
    },
    Server2: {
        // ...
    }
]
parallel deployActions
This is much simpler and is the recommended way of doing it.
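Since the server names already live in a collection, the map can also be built dynamically from serverNames. A sketch reusing the classes and variables from your original code (untested; note that each branch closes over its own name parameter rather than the shared it, and that parallel is called through the steps handle because this code lives inside a class):
def deployActions = serverNames.collectEntries { name ->
    [(name): {
        // Same three deployment lines as the original, one branch per server.
        def server = new Server([steps: steps, hostname: name, domain: "test"])
        server.stopTomcat()
        server.ssh("rm -rf ${WEB_APPS_DIR}/pc*")
        PLMScriptUtils.secureCopy(steps, warFileLocation, name, WEB_APPS_DIR)
    }]
}
steps.parallel(deployActions)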
I'm attempting to use the Jenkins Job DSL plugin for the first time to create some basic job "templates" before getting into more complex stuff.
Jenkins is running on a Windows 2012 server. The Jenkins version is 1.650 and we are using the Job DSL plugin version 1.51.
Ideally what I would like is for the seed job to be parameterised so that when it is being run the user can enter four things: the Job DSL script location, the name of the generated job, a Slack channel for failure notifications, and an email address for failure notifications.
The first two are fine: I can call the parameters in the Groovy script, for example the script understands job("${JOB_NAME}") and takes the name I enter for the job when I run the seed job.
However, when I try to do the same thing with the Slack channel, the Groovy script doesn't seem to want to play along. Note that if I specify a Slack channel directly rather than trying to call a parameter, it works fine.
My Job DSL script is here:
job("${JOB_NAME}") {
triggers {
cron("#daily")
}
steps {
shell("echo 'Hello World'")
}
publishers {
slackNotifier {
room("${SLACK_CHANNEL}")
notifyAborted(true)
notifyFailure(true)
notifyNotBuilt(false)
notifyUnstable(true)
notifyBackToNormal(true)
notifySuccess(false)
notifyRepeatedFailure(false)
startNotification(false)
includeTestSummary(false)
includeCustomMessage(false)
customMessage(null)
buildServerUrl(null)
sendAs(null)
commitInfoChoice('NONE')
teamDomain(null)
authToken(null)
}
}
logRotator {
numToKeep(3)
artifactNumToKeep(3)
publishers {
extendedEmail {
recipientList('me#mydomain.com')
defaultSubject('Seed job failed')
defaultContent('Something broken')
contentType('text/html')
triggers {
failure ()
fixed ()
unstable ()
stillUnstable {
subject('Subject')
content('Body')
sendTo {
developers()
requester()
culprits()
}
}
}
}
}
}
}
But starting the seed job fails and gives me this output:
Started by user
Building on master in workspace D:\data\jenkins\workspace\tutorial-job-dsl-2
Disk space threshold is set to :5Gb
Checking disk space Now
Total Disk Space Available is: 28Gb
Node Name: master
Running Prebuild steps
Processing DSL script jobBuilder.groovy
ERROR: (jobBuilder.groovy, line 10) No signature of method: javaposse.jobdsl.plugin.structs.DescribableContext.room() is applicable for argument types: (org.codehaus.groovy.runtime.GStringImpl) values: [#dev]
Possible solutions: wait(), find(), dump(), grep(), any(), wait(long)
[BFA] Scanning build for known causes...
[BFA] No failure causes found
[BFA] Done. 0s
Started calculate disk usage of build
Finished Calculation of disk usage of build in 0 seconds
Started calculate disk usage of workspace
Finished Calculation of disk usage of workspace in 0 seconds
Finished: FAILURE
This is the first time I have tried to do anything with Groovy and I'm sure it's a basic error but would appreciate any help.
Hm, that's a bug in Job DSL, see JENKINS-39153.
You actually do not need to use the template string syntax "${FOO}" if you just want to use the value of FOO. All parameters are string variables which can be used directly:
job(JOB_NAME) {
    // ...
    publishers {
        slackNotifier {
            room(SLACK_CHANNEL)
            notifyAborted(true)
            notifyFailure(true)
            notifyNotBuilt(false)
            notifyUnstable(true)
            notifyBackToNormal(true)
            notifySuccess(false)
            notifyRepeatedFailure(false)
            startNotification(false)
            includeTestSummary(false)
            includeCustomMessage(false)
            customMessage(null)
            buildServerUrl(null)
            sendAs(null)
            commitInfoChoice('NONE')
            teamDomain(null)
            authToken(null)
        }
    }
    // ...
}
This syntax is more concise and does not trigger the bug.
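If you do need string interpolation (for example to build the channel name from several values), a workaround that should sidestep the GString issue reported in JENKINS-39153 is to convert the value to a plain String explicitly before passing it in:
publishers {
    slackNotifier {
        // Convert the GString to a java.lang.String so the generated room(String) method applies.
        room("${SLACK_CHANNEL}".toString())
        // ...
    }
}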
Using Windows 7 with SoapUI 5.2.0 freeware.
I also asked about this in the Smart Bear community and was only given recommended posts to read. The posts didn’t relate to this problem.
I have a REST project that has one test suite with one test case containing two test steps. The first step is a groovy step with a groovy script that calls the second test step. The second test step is a REST GET request that sends a string to our API server and receives a response back in JSON format. The second test step has a script assertion that does "log.info Test Is Run", so I can see when the second test is run.
When the groovy script calls the second test step it reads the second test step’s JSON response in the groovy script like this:
def response = context.expand('${PingTest#Response}').toString() // read results
I can also use this for getting JSON response:
def response = testRunner.testCase.getTestStepByName(testStepForPing).getPropertyValue("response")
The project runs as expected when run through the SoapUI GUI, but when I run the project with testrunner, the response from the Groovy script call to get the JSON response is empty, using either of the methods shown above. When run from testrunner, I know the second test step is being called because I see the log.info result in the script log.
This is part of the DOS log that shows the second test step is running; there seem to be no errors for the second test step run.
SoapUI 5.2.0 TestCase Runner
12:09:01,612 INFO [WsdlProject] Loaded project from [file:/C:/LichPublic/_Soap/_EdPeterWorks/DemoPing.xml]
12:09:01,617 INFO [SoapUITestCaseRunner] Running SoapUI tests in project [demo-procurement-api]
12:09:01,619 INFO [SoapUITestCaseRunner] Running Project [demo-procurement-api], runType = SEQUENTIAL
12:09:01,628 INFO [SoapUITestCaseRunner] Running SoapUI testcase [PingTestCase]
12:09:01,633 INFO [SoapUITestCaseRunner] running step [GroovyScriptForPingtest]
12:09:01,932 INFO [WsdlProject] Loaded project from [file:/C:/LichPublic/_Soap/_EdPeterWorks/DemoPing.xml]
12:09:02,110 DEBUG [HttpClientSupport$SoapUIHttpClient] Attempt 1 to execute request
12:09:02,111 DEBUG [SoapUIMultiThreadedHttpConnectionManager$SoapUIDefaultClientConnection] Sending request: GET /SomeLocation/ABC/ping?echoText=PingOne HTTP/1.1
12:09:02,977 DEBUG [SoapUIMultiThreadedHttpConnectionManager$SoapUIDefaultClientConnection] Receiving response: HTTP/1.1 200
12:09:02,982 DEBUG [HttpClientSupport$SoapUIHttpClient] Connection can be kept alive indefinitely
12:09:03,061 INFO [log] **Test Is Run**
This is the testrunner call I use in DOS command line:
"C:\Program Files\SmartBear\SoapUI-5.2.0\bin\testrunner.bat" DemoPing.xml
When the Groovy script is run through testrunner, I get the project using ProjectFactoryRegistry and WsdlProjectFactory. Any advice on why I can't read the JSON response when using testrunner would be appreciated.
I can provide more info/code if needed.
Thanks.
Please try the below script:
import groovy.json.JsonSlurper
//provide the correct rest test step name
def stepName='testStepForPing'
def step = context.testCase.getTestStepByName(stepName)
def response = new String(step.testRequest.messageExchange.response.responseContent)
log.info response
def json = new JsonSlurper().parseText(response)
Thank you Rao! Your suggestion worked when I used it as shown below. The DOS window showed the response text:
// setup stepName as a variable for the name of the test step to run.
def stepName = "PingTest"
// use stepName to get the test step to call (context.testCase is the current test case).
def step = context.testCase.getTestStepByName(stepName)
// call the test step.
step.run(testRunner, context)
// show the results.
def response = new String(step.testRequest.messageExchange.response.responseContent)
log.info response // this response shows correctly in the DOS window
The JsonSlurper also works. At your convenience, if you have any suggested links or books describing the technique(s) you used here, please let me know.
Thanks.
I have a MultiJob Project (made with the Jenkins Multijob plugin), with a series of MultiJob Phases. Let's say one of these jobs is called SubJob01. The jobs that are built are each configured with the "Restrict where this project can be run" option to be tied to one node. SubJob01 is tied to Slave01.
I would like it if these jobs would fail fast when the node is offline, instead of saying "(pending—slave01 is offline)". Specifically, I want there to be a record of the build attempt in SubJob01, with the build being marked as failed. This way, I can configure my MultiJob project to handle the situation as I'd like, instead of using the Jenkins build timeout plugin to abort the whole thing.
Does anyone know of a way to fail-fast a build if all nodes are offline? I could intersperse the MultiJob project with system Groovy scripts to check whether the desired nodes are offline, but that seems like it'd be reinventing, in the wrong place, what should already be a feature.
I ended up creating this solution, which has worked well. The first build step of SubJob01 is an "Execute system Groovy script" step, and this is the script:
import java.util.regex.Matcher
import java.util.regex.Pattern

int exitcode = 0
println("Looking for Offline Slaves:");
for (slave in hudson.model.Hudson.instance.slaves) {
    if (slave.getComputer().isOffline().toString() == "true") {
        println(' * Slave ' + slave.name + " is offline!");
        if (slave.name == "Slave01") {
            println(' !!!! This is Slave01 !!!!');
            exitcode++;
        } // if slave.name
    } // if slave offline
} // for slave in slaves

println("\n\n");
println "Slave01 is offline: " + hudson.model.Hudson.instance.getNode("Slave01").getComputer().isOffline().toString();
println("\n\n");

if (exitcode > 0) {
    println("The Slave01 slave is offline - we can not possibly continue....");
    println("Please contact IT to resolve the slave down issue before retrying the build.");
    return 1;
} // if

println("\n\n");
The Jenkins pipeline option 'beforeAgent true' can be used to evaluate the when condition before entering the agent.
stage('Windows') {
    when {
        beforeAgent true
        expression { return ("${TARGET_NODES}".contains("windows")) }
    }
    agent { label 'win10' }
    steps {
        cleanWs()
        ...
    }
}
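If the goal is specifically to fail fast (rather than just skip the stage) when the target node is offline, a hedged sketch is to check the node's computer in an earlier stage. 'Slave01' is a placeholder node name, and accessing Jenkins.instance may require script approval when running in the sandbox:
stage('Check node') {
    steps {
        script {
            // Fail the build immediately if the required node is offline.
            def computer = jenkins.model.Jenkins.instance.getNode('Slave01')?.toComputer()
            if (computer == null || computer.isOffline()) {
                error "Slave01 is offline - failing fast instead of waiting in the queue."
            }
        }
    }
}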
Ref:
https://www.jenkins.io/doc/book/pipeline/syntax/
https://www.jenkins.io/blog/2018/04/09/whats-in-declarative/