I don't want to use CurrentBuild or CurrentUser, because those return information about the user who started the build, but I want the information of the user who is logged in to Jenkins.
For example, job X is started by a timer and a user aborts it; I want to find out which user aborted the job.
You can use the code below. Note that before executing it you have to approve the methods it calls: go to "Manage Jenkins / In-process Script Approval" and approve them. With this code you can find out who aborted a Jenkins pipeline, not only for manually started builds but also for builds started by a timer.
pipeline {
    agent any
    triggers { cron("*/1 * * * *") }
    stages {
        stage('Hello') {
            steps {
                sleep(20000)
                echo 'Hello World'
            }
        }
    }
    post {
        aborted {
            script {
                def causee = ''
                def actions = currentBuild.getRawBuild().getActions(jenkins.model.InterruptedBuildAction)
                for (action in actions) {
                    def causes = action.getCauses()
                    // on cancellation, report who cancelled the build
                    for (cause in causes) {
                        causee = cause.getUser().getDisplayName()
                        cause = null
                    }
                    causes = null
                    action = null
                }
                actions = null
                echo causee
            }
        }
    }
}
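One caveat worth noting: getUser() is only defined for user-initiated interruptions (jenkins.model.CauseOfInterruption.UserInterruption), so if the build can also be aborted by something other than a person, it is probably safer to guard the call with an instanceof check.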
I have a Node.js script that returns non-zero exit codes, and I'm running this script with the PowerShell plugin in a Jenkins pipeline job. I would like to use these exit codes in the pipeline to set build statuses. I can see the non-zero exit codes, e.g. with echo in PowerShell, but using exit $LastExitCode always exits with code 1.
Here's what I currently have:
def status = powershell(returnStatus: true, script: '''
    try {
        node foo.js
        echo $LastExitCode
        exit $LastExitCode
    } catch {
        exit 0
    }
''')
println status
if (status == 666) {
    currentBuild.result = 'UNSTABLE'
    return
} else if (status == 1) {
    currentBuild.result = 'FAILURE'
    return
} else {
    currentBuild.result = 'SUCCESS'
    return
}
The "foo.js" file there is very simple:
console.log("Hello from a js file!");
process.exit(666);
The above code sets the build status to failure, since println status prints "1". So my question is: is it even possible to bubble custom non-zero exit codes up to the pipeline code through the PowerShell plugin? Or is there some other way to achieve this, totally different from what I'm trying to do here?
UPDATE:
Eventually I scrapped the idea of exit codes for now, and went with an even uglier, hackier way :-(
import org.apache.commons.lang.StringUtils

def filterLogs(String filter_string, int occurrence) {
    def logs = currentBuild.rawBuild.getLog(10000).join('\n')
    int count = StringUtils.countMatches(logs, filter_string)
    if (count > occurrence - 1) {
        currentBuild.result = 'UNSTABLE'
    }
}
And later in the pipeline after the nodeJS script has run:
stage('Check logs') {
    steps {
        filterLogs('Some specific string I console.log from NodeJS', 1)
    }
}
I found this solution in an answer to the question "Jenkins Text finder Plugin, How can I use this plugin with jenkinsfile?".
If that's the only way, I guess I'll have to live with that then.
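For what it's worth, here is a sketch of one alternative that avoids log grepping: have the PowerShell script persist the real exit code to a file and read it back in the pipeline. This is untested, and the file name is arbitrary:

// Hypothetical workaround: write $LastExitCode to a file, then read it back.
// returnStatus: true keeps the step itself from failing the build.
powershell(returnStatus: true, script: '''
    node foo.js
    Set-Content -Path exitcode.txt -Value $LastExitCode
''')
def code = readFile('exitcode.txt').trim().toInteger()
echo "node exited with ${code}"
if (code == 666) {
    currentBuild.result = 'UNSTABLE'
}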
Is there a way to perform cleanup (or rollback) if a build in a Jenkinsfile fails?
I would like to inform our Atlassian Stash instance that the build failed (by doing a curl at the correct URL).
Basically it would be a post step when build status is set to fail.
Should I use try {} catch ()? If so, what exception type should I catch?
Since 2017-02-03, Declarative Pipeline Syntax 1.0 can be used to achieve this post build step functionality.
It is a new syntax for constructing Pipelines that extends Pipeline with a pre-defined structure and some new steps, enabling users to define agents, post actions, environment settings, credentials, and stages.
Here is a sample Jenkinsfile with declarative syntax:
pipeline {
    agent label:'has-docker', dockerfile: true
    environment {
        GIT_COMMITTER_NAME = "jenkins"
        GIT_COMMITTER_EMAIL = "jenkins@jenkins.io"
    }
    stages {
        stage("Build") {
            steps {
                sh 'mvn clean install -Dmaven.test.failure.ignore=true'
            }
        }
        stage("Archive") {
            steps {
                archive "*/target/**/*"
                junit '*/target/surefire-reports/*.xml'
            }
        }
    }
    post {
        always {
            deleteDir()
        }
        success {
            mail to: "me@example.com", subject: "SUCCESS: ${currentBuild.fullDisplayName}", body: "Yay, we passed."
        }
        failure {
            mail to: "me@example.com", subject: "FAILURE: ${currentBuild.fullDisplayName}", body: "Boo, we failed."
        }
    }
}
The post code block is what handles that post-step action.
The Declarative Pipeline syntax reference is available in the official Jenkins Pipeline documentation.
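Applied to the original question, the failure block can run the curl against Stash instead of sending mail. A rough sketch, with a placeholder host and credentials ID; the exact build-status REST path depends on your Stash/Bitbucket version:

post {
    failure {
        // Hypothetical Stash notification; host and credentialsId are placeholders,
        // and GIT_COMMIT assumes the build checked out from Git.
        withCredentials([usernamePassword(credentialsId: 'stash-creds',
                                          usernameVariable: 'STASH_USER',
                                          passwordVariable: 'STASH_PASS')]) {
            sh '''
                curl -s -u "$STASH_USER:$STASH_PASS" \\
                     -H "Content-Type: application/json" \\
                     -d "{\\"state\\": \\"FAILED\\", \\"key\\": \\"$JOB_NAME\\", \\"url\\": \\"$BUILD_URL\\"}" \\
                     "https://stash.example.com/rest/build-status/1.0/commits/$GIT_COMMIT"
            '''
        }
    }
}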
I'm currently also searching for a solution to this problem. So far the best I could come up with is to create a wrapper function that runs the pipeline code in a try/catch block. If you also want to notify on success, you can store the exception in a variable and move the notification code to a finally block. Also note that you have to rethrow the exception so that Jenkins considers the build failed. Maybe some reader will find a more elegant approach to this problem.
pipeline('linux') {
    stage 'Pull'
    stage 'Deploy'
    echo "Deploying"
    throw new FileNotFoundException("Nothing to pull")
    // ...
}

def pipeline(String label, Closure body) {
    node(label) {
        wrap([$class: 'TimestamperBuildWrapper']) {
            try {
                body.call()
            } catch (Exception e) {
                emailext subject: "${env.JOB_NAME} - Build # ${env.BUILD_NUMBER} - FAILURE (${e.message})!", to: "me@me.com", body: "..."
                throw e // rethrow so the build is considered failed
            }
        }
    }
}
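For completeness, a sketch of the finally variant described above, which notifies on success as well as on failure; the subject format is only an example:

def pipeline(String label, Closure body) {
    node(label) {
        Exception error = null
        try {
            body.call()
        } catch (Exception e) {
            error = e
            throw e // rethrow so the build is considered failed
        } finally {
            // runs on success and on failure alike
            def status = error ? "FAILURE (${error.message})" : "SUCCESS"
            emailext subject: "${env.JOB_NAME} - Build # ${env.BUILD_NUMBER} - ${status}", to: "me@me.com", body: "..."
        }
    }
}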
I managed to solve it by using try/finally: if the stage raises an error, the stage turns red and the finally block still runs; if the stage is okay, the stage turns green and the finally block runs as well.
stage('Tests') {
    script {
        try {
            sh """#!/bin/bash -ex
                docker stop \$(docker ps -a -q)
                docker rm \$(docker ps -a -q)
                export DOCKER_TAG=${DOCKER_TAG}
                docker-compose -p ${VISUAL_TESTING_PROJECT_TAG} build test
                docker-compose -p ${VISUAL_TESTING_PROJECT_TAG} up --abort-on-container-exit --exit-code-from test
            """
        } finally {
            sh """#!/bin/bash -ex
                export DOCKER_TAG=${DOCKER_TAG}
                docker-compose -p ${VISUAL_TESTING_PROJECT_TAG} down
            """
        }
    }
}
I was trying to find a solution for running different tasks in different threads (dependent/independent).
I have a scenario in Gradle where I need to run one task (which internally runs a server) in a different thread before running another task (tests that depend on the server above); after the second task completes, I need to kill the first task.
Then the same scenario again: run another set of server/test/kill tasks.
import java.util.concurrent.Callable
import java.util.concurrent.Executors

task exp {
    doFirst {
        run1stServerTask.execute()
    }
    def pool = Executors.newFixedThreadPool(5)
    try {
        def defer = { closure -> pool.submit(closure as Callable) }
        defer {
            run1stTest.execute()
            // After tests are finished, kill 1st server tasks
        }
        defer {
            run2ndServerTask.execute()
        }
        defer {
            run2ndTest.execute()
            // After tests are finished, kill 2nd server tasks
        }
    } finally {
        pool.shutdown()
    }
}
Hope all of the above makes sense... I am open to another approach if it is possible in build.gradle.
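As a side note, calling execute() on tasks is internal Gradle API (it was removed in later Gradle versions), so here is a sketch of an alternative that expresses the ordering through the task graph and keeps only the server lifecycle manual; the task bodies are placeholders:

// Sketch only: dependsOn orders the tests after the server start, and
// finalizedBy guarantees the kill task runs even when the tests fail.
task run1stServerTask {
    doLast {
        // start the server in the background, e.g. via ProcessBuilder
    }
}

task run1stTest {
    dependsOn 'run1stServerTask'
    finalizedBy 'kill1stServerTask'
    doLast {
        // run the tests against the server
    }
}

task kill1stServerTask {
    doLast {
        // stop the background server process
    }
}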
I can't connect to Jenkins and execute a script via the Groovy Remote Control Plugin.
I found only this documentation: https://wiki.jenkins-ci.org/display/JENKINS/Groovy+Remote+Control+Plugin
My example:
// imports from the remote-control library (not in the original snippet)
import groovyx.remote.client.RemoteControl
import groovyx.remote.transport.http.HttpTransport

class GroovyDemo {
    static void main(def args) {
        def transport = new HttpTransport("http://my-jenkins/plugin/groovy-remote/")
        def remote = new RemoteControl(transport)
        // 1
        def result1 = remote {
            return jenkins.version.value
        }
        println "Jenkins version was ${result1}."
        // 2
        def result2 = remote {
            return Jenkins.getInstance().getItems().size()
        }
        println "Jobs count: ${result2}."
    }
}
I get the result:
Jenkins version was 1.580.3.
0
Why do I get zero as the number of jobs, even though I have many jobs on my Jenkins?
Thanks
You used different variable names for the Jenkins instance: lower-case "jenkins" in the successful call and upper-case "Jenkins" in the unsuccessful one.
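A minimal sketch of the implied fix, assuming the lower-case jenkins binding that worked in the first call exposes the usual Jenkins API:

// use the lower-case "jenkins" binding in both closures
def result2 = remote {
    return jenkins.getItems().size()
}
println "Jobs count: ${result2}."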
Alright, brand new to GPars, so please forgive me if this has an obvious answer.
Here is my scenario. We currently have a piece of our code wrapped in a Thread.start {} block. It does this so it can send messages to a message queue in the background and not block the user request. An issue we recently ran into is that, for large blocks of work, the user can perform another action that causes this block to execute again. Because it is threaded, the second batch of messages can be sent before the first, corrupting data.
I would like to change this process to work as a queue flow with gpars. I've seen examples of creating pools such as
def pool = GParsPool.createPool()
or
def pool = new ForkJoinPool()
and then using the pool as
GParsPool.withExistingPool(pool) {
...
}
This seems like it would account for the case where the user performs an action again: I could reuse the created pool, and the actions would not be performed out of order, provided I have a pool size of one.
My question is: is this the best way to do this with GPars? And furthermore, how do I know when the pool has finished all of its work? Does it terminate when all the work is finished? If so, is there a method I can use to check whether the pool has finished/terminated, so I know I need a new one?
Any help would be appreciated.
No, explicitly created pools do not terminate by themselves; you have to call shutdown() on them explicitly.
Using the withPool() {} command, however, guarantees that the pool is destroyed once the code block finishes.
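For illustration, a minimal sketch of the withPool() variant; with a pool size of one, the submitted work items cannot overlap:

import groovyx.gpars.GParsPool

// The pool exists only for the duration of the block and is shut down
// automatically when the block exits.
GParsPool.withPool(1) {
    [1, 2, 3].eachParallel { n ->
        println "processing ${n}"
    }
}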
Here is the current solution we have to our issue. It should be noted that we followed this route due to our requirements:
Work is grouped by some context
Work within a given context is ordered
Work within a given context is synchronous
Additional work for a context should execute after the preceding work
Work should not block the user request
Contexts are asynchronous between each other
Once work for a context is finished, the context should clean up after itself
Given the above, we've implemented the following:
import groovyx.gpars.actor.Actors

class AsyncService {
    def queueContexts

    def AsyncService() {
        queueContexts = new QueueContexts()
    }

    def queue(contextString, closure) {
        queueContexts.retrieveContextWithWork(contextString, true).send(closure)
    }

    class QueueContexts {
        def contextMap = [:]

        def synchronized retrieveContextWithWork(contextString, incrementWork) {
            def context = contextMap[contextString]
            if (context) {
                if (!context.hasWork(incrementWork)) {
                    contextMap.remove(contextString)
                    context.terminate()
                }
            } else {
                def queueContexts = this
                contextMap[contextString] = new QueueContext({ ->
                    queueContexts.retrieveContextWithWork(contextString, false)
                })
            }
            contextMap[contextString]
        }

        class QueueContext {
            def workCount
            def actor

            def QueueContext(finishClosure) {
                workCount = 1
                actor = Actors.actor {
                    loop {
                        react { closure ->
                            try {
                                closure()
                            } catch (Throwable th) {
                                log.error("Uncaught exception in async queue context", th)
                            }
                            finishClosure()
                        }
                    }
                }
            }

            def send(closure) {
                actor.send(closure)
            }

            def terminate() {
                actor.terminate()
            }

            def hasWork(incrementWork) {
                workCount += (incrementWork ? 1 : -1)
                workCount > 0
            }
        }
    }
}
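For illustration, a hypothetical usage sketch (the helper closures are placeholders): work queued under the same context string runs in order on that context's actor, while different contexts proceed independently.

def service = new AsyncService()

// Both closures for "order-42" run one after the other on the same actor;
// "order-43" gets its own actor and runs concurrently with "order-42".
service.queue('order-42') { publishFirstBatch() }
service.queue('order-42') { publishSecondBatch() }
service.queue('order-43') { publishOtherContextBatch() }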