I am looking for a way to make a build run another build right after stopping it (i.e. clicking the red cross button) but BEFORE it actually aborts. Post-build Actions are not an option because they run after the main job is killed/finished, so the data is lost by the time the child build runs. Passing parameters is also not an option; the reason for this is to keep the parent PID and pass it to the second build. I could do it in a script and set it as an ENV variable, but I don't want to do this for each Jenkins job.
Thanks.
You can do this with try/catch in a scripted pipeline: wrap your build steps in the try block and trigger the other job from the catch block before re-throwing the abort:
try {
    // your build steps
} catch (org.jenkinsci.plugins.workflow.steps.FlowInterruptedException e) {
    currentBuild.result = 'ABORTED'
    build 'your_other_job'
    throw e
}
Related
I want to create a simple job using NodeJS, GitHub and Jenkins.
There is an exchange that runs at two server addresses:
for example, us.exchange.com and eu.exchange.com.
I created an environment variable named SERVERS_LOCATION,
browser.get(`http://${process.env.SERVERS_LOCATION}.exchange.com`);
and a Jenkins parameter named SERVERS_LOCATION_JEN which can take two options - US and EU.
I also created a pipeline in Jenkins where I want to run a parameterized build by choosing one option or the other. For that I use a pipeline script in a Jenkinsfile that looks like this:
pipeline{
agent any
options{
disableConcurrentBuilds()
}
stages{
stage("install npm"){
steps{
bat "npm install"
bat "npx webdriver-manager update --versions.chrome 76.0.3809.68"
}
}
stage("executing job"){
steps{
bat "SERVERS_LOCATION=%SERVERS_LOCATION_JEN% npx protractor config/conf.js"
}
}
}
}
The main idea is to take the chosen value of the Jenkins parameter SERVERS_LOCATION_JEN and put it into the environment variable SERVERS_LOCATION, which the code can then read via ${process.env.SERVERS_LOCATION} for further calls.
But when I run this job I get an error:
'SERVERS_LOCATION' is not recognized as an internal or external command, operable program or batch file.
P.S. Running this job from git-bash works fine. (Win10, Chrome browser)
Could you point out what I am doing wrong?
In batch you have to use the "set" command to assign a value to a variable, and chain the next command with &&, so please use the code below. Note there is no space before && - otherwise the trailing space would become part of the value:
bat "set SERVERS_LOCATION=%SERVERS_LOCATION_JEN%&& npx protractor config/conf.js"
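Alternatively, a sketch using the declarative environment directive (assuming your Jenkins supports declarative pipeline parameters via params): map the parameter to the environment variable once, so every bat step sees it without batch-specific quoting:

```groovy
pipeline {
    agent any
    environment {
        // expose the job parameter to all steps as SERVERS_LOCATION
        SERVERS_LOCATION = "${params.SERVERS_LOCATION_JEN}"
    }
    stages {
        stage("executing job") {
            steps {
                bat "npx protractor config/conf.js"
            }
        }
    }
}
```

This keeps the variable assignment out of the shell/batch layer entirely, so the same Jenkinsfile works on both Windows and Linux agents.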
I'm learning Jenkins Pipelines and I'm trying to execute anything on a Linux build server but I get errors about it being unable to create a folder.
Here is the pipeline code
node('server') {
stage("Build-Release-Linux64-${NODE_NAME}") {
def ws = pwd()
sh "ls -lha ${ws}"
}
}
This is the error I get:
sh: 1: cannot create /opt/perforce/workspace/Dels-Testing-Area/MyStream-main#tmp/durable-07c26e68/pid; jsc=durable-8c9234a2eb6c2feded950bac03c8147a;JENKINS_SERVER_COOKIE=$jsc /opt/perforce/workspace/Dels-Testing-Area/MyStream-main#tmp/durable-07c26e68/script.sh: Directory nonexistent
I've checked the server while this is running and I can see that it does create
the file "/opt/perforce/workspace/Dels-Testing-Area/MyStream-main#tmp/durable-07c26e68/script.sh"
The file is created by Jenkins itself, not by me, and contains the following:
#!/bin/sh -xe
It does not matter what I try to execute using the sh step, I get the same error.
Can anyone shed some light on why this is happening?
-= UPDATE =-
I'm currently using Jenkins 2.46.2 LTS and there are a number of updates available. I'm going to wait for a quiet period, perform a full update, and try this again in case it fixes anything.
I found out that the problem was caused by a single quote in my folder name. As soon as I removed the single quote it ran perfectly. This also relates to this Jenkins issue [https://issues.jenkins-ci.org/browse/JENKINS-44341], where I added a comment and voted for a fix.
So the fix is: only use the characters [0-9a-zA-Z_-] (excluding the square brackets) in folder and job names, and don't use spaces.
I can confirm that using special characters and spaces in the "display name" field of a folder's configuration works fine.
I'm writing Jenkins job using job DSL. It looks like:
job(jobName) {
description("This is my Jenkins job.")
steps {
// Executing some shell here.
}
scm {
// Checking out some branch from Git.
}
triggers {
bitbucketPush()
scm ''
}
}
It works fine, but for some reason executing my shell script fails with errors:
/usr/lib/git-core/git-pull: 83: /usr/lib/git-core/git-sh-setup: sed: not found
basename: write error: Broken pipe
/usr/lib/git-core/git-pull: 299: /usr/lib/git-core/git-sh-setup: uname: not found
etc.
As far as I understand, the issue is with the PATH variable. When I fix it in Jenkins from the UI (in the Configure section) it works fine (adding something like this: PATH=/usr/local/bin:/usr/bin).
As I'm creating a lot of jobs, it would be great to fix this PATH during the creation process in my DSL scripts.
How can this be added in my DSL?
The problem is not related to Job DSL. Try to configure the job manually and fix all problems, then translate your configuration to Job DSL.
In this case there is something wrong with the environment on your build agent, e.g. git is not installed properly.
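That said, if you do want to set PATH from the DSL itself, a sketch like the following should work, assuming the EnvInject plugin is installed (it provides the environmentVariables wrapper in Job DSL):

```groovy
job(jobName) {
    description("This is my Jenkins job.")
    wrappers {
        // requires the EnvInject plugin; injects PATH for all build steps
        environmentVariables {
            env('PATH', '/usr/local/bin:/usr/bin')
        }
    }
    steps {
        // Executing some shell here.
    }
}
```

This mirrors what you did manually in the Configure section, but is generated for every job your seed script creates.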
I'm running test cases with nosetests in Jenkins. In general there will be 100 test cases, and I want to mark the build unstable when fewer than 20 test cases fail. If more than 20 test cases fail, mark the build failed.
The command I ran:
nosetests test.py --tc-file config.yml --tc-format yaml
First of all, I tried to just change the status of the build to Unstable, but it still failed.
The groovy script I used:
manager.addWarningBadge("Thou shalt not use deprecated methods.")
manager.createSummary("warning.gif").appendText("<h1>You have been warned!</h1>", false, false, false, "red")
manager.buildUnstable()
The first two lines of the script are executed, but the job is still marked as Failed.
Is there anything wrong with my Jenkins config? Or does the Groovy Postbuild plugin not work with nosetests?
This is the console output:
FAILED (failures=2)
Build step 'Execute shell' marked build as failure
Build step 'Groovy Postbuild' marked build as failure
Finished: FAILURE
As DevD outlined, FAILED is a more significant build state than UNSTABLE. This means calling manager.buildUnstable() or manager.build.setResult(hudson.model.Result.UNSTABLE) after a step failed will still leave the build result FAILED.
However, you can override a failed build result state to be UNSTABLE by using reflection:
manager.build.@result = hudson.model.Result.UNSTABLE
The example below iterates over the build log lines looking for particular regexes. If a match is found, it changes (downgrades) the build status, adds badges, and appends to the build summary.
errpattern = ~/TIMEOUT - Batch \w+ did not complete within \d+ minutes.*/;
pattern = ~/INSERT COMPLETE - Batch of \d+ records was inserted to.*/;
manager.build.logFile.eachLine{ line ->
errmatcher=errpattern.matcher(line)
matcher=pattern.matcher(line)
if (errmatcher.find()) {
// warning message
String errMatchStr = errmatcher.group(0) // line matched
manager.addWarningBadge(errMatchStr);
manager.createSummary("warning.gif").appendText("<h4>${errMatchStr}</h4>", false, false, false, "red");
manager.buildUnstable();
// explicitly set build result
manager.build.@result = hudson.model.Result.UNSTABLE
} else if (matcher.find()) {
// ok
String matchStr = matcher.group(0) // line matched
manager.addInfoBadge(matchStr);
manager.createSummary("clipboard.gif").appendText("<h4>${matchStr}</h4>", false, false, false, "black");
}
}
Note: this iterates over every line, so it assumes that these matches are unique - or that you want a badge & summary appended for every matched line!
Post-build result is:
Build step 'Execute Groovy script' marked build as failure
Archiving artifacts
Build step 'Groovy Postbuild' changed build result to UNSTABLE
Email was triggered for: Unstable
Actually, this is the intended behaviour.
Precedence
FAILED -> UNSTABLE -> SUCCESS
Using Groovy Postbuild we can change a lower result (SUCCESS) to a higher-precedence one (FAILED/UNSTABLE),
not vice versa.
As a workaround, add "exit 0" to the execute-shell step after your nosetests command, so the step always reports the lowest-precedence result (SUCCESS). Then let your Groovy Postbuild script decide the final result based on the test results. This is actually a tweak; I will explore more and update you on this.
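Putting the workaround together, a sketch of the Groovy Postbuild logic might look like the following. It assumes the getLogMatcher helper provided by the Groovy Postbuild plugin, and uses the question's cut-off of 20 failures:

```groovy
// parse "FAILED (failures=N)" from the nosetests console output;
// the shell step already exited 0, so we set the real result here
def matcher = manager.getLogMatcher(/.*FAILED \(failures=(\d+)\).*/)
if (matcher?.matches()) {
    int failures = matcher.group(1) as int
    if (failures < 20) {
        manager.build.@result = hudson.model.Result.UNSTABLE
    } else {
        manager.build.@result = hudson.model.Result.FAILURE
    }
}
```

Because the shell step no longer fails the build, the precedence rule above never has to downgrade anything; the postbuild script is the only place the result is decided.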
Is there a way I can force a gradle task to run again, or reset all tasks back to the not UP-TO-DATE state?
Try running your build with -C rebuild, which rebuilds Gradle's cache.
In newer versions of Gradle, use --rerun-tasks
If you want just a single task to always run, you can set the outputs property inside of the task.
outputs.upToDateWhen { false }
Please be aware that if your task does not have any defined file inputs, Gradle may skip the task, even when using the above code. For example, in a Zip or Copy task there needs to be at least one file provided in the configuration phase of the task definition.
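For instance, a minimal sketch (with a hypothetical task name myTask) looks like:

```groovy
// myTask is a hypothetical example name; the upToDateWhen closure
// makes Gradle consider the task out-of-date on every build
task myTask {
    outputs.upToDateWhen { false }
    doLast {
        println 'this runs every time'
    }
}
```

With this in place, running the task repeatedly never reports UP-TO-DATE for it, while the rest of the build still benefits from incremental checks.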
You can use cleanTaskname
Let's say you have
:someproject:sometask1 UP-TO-DATE
:someproject:sometask2 UP-TO-DATE
:someproject:sometask3 UP-TO-DATE
And you want to force, let's say, sometask2 to run again, you can run
someproject:cleanSometask2
before you run the task that runs it all.
Apparently in Gradle, every task that understands UP-TO-DATE also understands how to clean itself.
I had a tough case where setting outputs.upToDateWhen { false } inside the task or adding the flag --rerun-tasks didn't help, since the task's onlyIf predicate kept being set to false each time I ran it.
Adding the following to build.gradle forced the execution of myTask:
gradle.taskGraph.whenReady { taskGraph ->
def tasks = taskGraph.getAllTasks()
tasks.each {
def taskName = it.getName()
if(taskName == 'myTask') {
println("Found $taskName")
it.setOnlyIf { true }
it.outputs.upToDateWhen { false }
}
}
}
You can run:
./gradlew clean
It will force Gradle to rebuild, since all task outputs are deleted.