I am trying to eliminate all Freestyle jobs from our Jenkins server, so I call our Visual Studio builds directly from the Pipeline job:
bat """chcp 1252 & "PATHTOVS\\devenv.com" /rebuild Release^|$buildBranch $WORKSPACE\\SOLUTION.sln >> ${buildBranch}_$CPNUM_PARAM.txt"""
Now I am wondering how to differentiate between the three build states (SUCCESS/UNSTABLE/FAILED). Until now I have been using a try/catch block, but this isn't very clean and also does not provide the UNSTABLE state.
try {
    // build call
    state = 'SUCCESS'
} catch (e) {
    state = 'FAILED'
}
Unfortunately I am not sure which exit codes devenv.exe can return, or how to retrieve them.
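One way to avoid the try/catch entirely is to capture the exit code of the bat step with returnStatus and map it to a build result yourself. This is only a sketch: devenv.com conventionally returns 0 on success and a non-zero value on failure, and the warning check against the log file is an assumption of mine, not an official UNSTABLE signal from devenv.

def rc = bat(returnStatus: true, script: """chcp 1252 & "PATHTOVS\\devenv.com" /rebuild Release^|$buildBranch $WORKSPACE\\SOLUTION.sln >> ${buildBranch}_${CPNUM_PARAM}.txt""")
if (rc != 0) {
    // Non-zero exit code from devenv: treat as a failed build.
    currentBuild.result = 'FAILURE'
} else if (readFile("${buildBranch}_${CPNUM_PARAM}.txt").contains('warning')) {
    // Hypothetical rule: flag builds whose log contains compiler warnings as UNSTABLE.
    currentBuild.result = 'UNSTABLE'
} else {
    currentBuild.result = 'SUCCESS'
}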
Solved it the following way:
try {
    //do something
} catch (e) {
    String error = "${e}"
    println error
}
I want to build an Azure Devops build task that executes a list of SQL scripts against a SQL Server 2017 database. I followed this tutorial to build the task: https://learn.microsoft.com/en-us/azure/devops/extend/develop/add-build-task?view=azure-devops
The task generally runs successfully and I already executed various scripts against my local database (in SQL Server 2017 Express). I'm using the npm package "mssql/msnodesqlv8" (native SQL Server driver for mssql) to connect and execute the scripts directly in node.js.
async function executeBatches(script: string, pool: sql.ConnectionPool) {
    // Split the script into batches on the "GO" separator (assumes Windows line endings).
    const batches = script.split("\r\nGO");
    for (const batch of batches) {
        console.log("Executing script:", batch);
        await pool.batch(batch);
    }
}
Now I found a script that fails in a very strange way. The script in question executes a number of renames over 4 tables in a transaction like this:
BEGIN TRANSACTION
EXEC sp_rename 'table', 'newTable'
EXEC sp_rename 'newTable.column', 'newColumn', 'COLUMN'
-- (repeat for several columns)
EXEC sp_rename 'dbo.PK_table', 'PK_newTable'
-- (repeat for 3 more tables)
COMMIT
In SQL Server Management Studio the script executes correctly. But in the Devops task this script aborts after about 18 sp_rename calls. There is no error thrown and the transaction is left open. The client will continue running (since it got no error) and after executing some more queries SQL Server initiates a rollback and will of course roll back all changes since executing this script.
I switched the statements in the script around and tried commenting out some lines, but it always aborts after about 18 sp_rename calls. When I remove enough lines so there are 18 or fewer sp_rename calls, the task runs the script completely and commits the changes (it doesn't matter which lines).
When I remove the transaction, it executes all renames up to that magical number, then still aborts the script and leaves the implicit transaction from the last statement open, so all changes are still rolled back after some more queries.
I ran the SQL Profiler and it shows StmtStarting for a rename and then BatchCompleted with error "2 - Abort" but there is no other error or reason shown of why the batch was aborted.
The system_health session shows two errors when the script is executed:
Error #1: a connectivity error with tds_flags "DisconnectDueToReadError, NetworkErrorFoundInInputStream, NormalDisconnect" and tds_input_buffer_error 109.
Error #2: a security error with error_code 5023 (which means "The group or resource is not in the correct state to perform the requested operation.").
Searching for these errors online gave no usable results: they either have no solution or are related to login problems, which I believe is not the case here since I can execute other scripts just fine.
I also already checked the encoding and verified that the script is read correctly via Node's "fs" library.
Any help or pointers to where I could find a cause for this issue would be greatly appreciated.
EDIT: I got around to building a smaller example with just msnodesqlv8.
import tl = require("azure-pipelines-task-lib/task");
import fs = require("fs");
import path = require("path");
import util = require("util");
import { SqlClient } from "msnodesqlv8";
// tslint:disable-next-line: no-var-requires
const sqlClient: SqlClient = require("msnodesqlv8");
const open = util.promisify(sqlClient.open);
const query = util.promisify(sqlClient.query);
async function run() {
    try {
        const scriptDirectory = tl.getInput("ScriptDirectory", true) ?? "";
        const connectionString = "Driver={ODBC Driver 13 for SQL Server};Server={.\\SQLEXPRESS};Uid={sa};Pwd={start};Database={MyDatabase};Encrypt={yes};TrustServerCertificate={yes}";
        const scriptsDir = fs.readdirSync(scriptDirectory);
        const con = await open(connectionString);
        const close = util.promisify(con.close);
        try {
            const conQuery = util.promisify(con.query);
            for (const file of scriptsDir) {
                console.log("Executing:", file);
                const script = readFileWithoutBom(path.join(scriptDirectory, file));
                console.log("Executing script:", script);
                // await query(connectionString, { query_str: script, query_timeout: 120 });
                await conQuery({ query_str: script, query_timeout: 120 });
                const insert = `INSERT INTO AppliedDatabaseScript (ScriptFile, DateApplied) VALUES ('${file}', GETDATE())`;
                console.log("Executing script:", insert);
                // await query(connectionString, { query_str: insert });
                await conQuery({ query_str: insert });
            }
        } finally {
            await close();
        }
    } catch (err) {
        console.error(err);
        tl.setResult(tl.TaskResult.Failed, err.message);
    }
}

// Strip BOM. SQL Server won't execute certain scripts with BOM.
function readFileWithoutBom(filePath: string) {
    return fs.readFileSync(filePath, "utf-8").replace(/^\uFEFF/, "");
}

run();
The behavior is still the same. I tried with a common connection and with a separate connection for each query. It rolls back everything in the same connection and continues as if no error occurred. I also fiddled around with the query timeout, but it has no relation to the error at all.
I managed to get it working by switching to the non-native driver tedious (also supported by mssql). It seems the native SQL Server driver for node.js is not working correctly.
In case anyone is having problems connecting with tedious (since I couldn't find proper documentation online), these are the basic configuration options:
{
user: "username",
password: "password",
server: "hostname\\instancename",
database: "database",
port: 1433
}
You need to make sure the SQL Server Browser is running and the instance is configured for remote access (even when connecting via localhost) by activating TCP/IP in the SQL Server Configuration Manager.
I have a Node.js script that returns non-zero exit codes, and I'm running this script with the PowerShell plugin in Jenkins in a pipeline job. I would like to use these exit codes in the pipeline to set build statuses. I can see the non-zero exit codes (e.g. with echo in PowerShell), but using exit $LastExitCode always exits with code 1.
Here's what I currently have:
def status = powershell(returnStatus: true, script: '''
try{
node foo.js
echo $LastExitCode
exit $LastExitCode
} catch {
exit 0
}
''')
println status
if(status == 666) {
currentBuild.result = 'UNSTABLE'
return
} else if(status == 1) {
currentBuild.result = 'FAILURE'
return
} else {
currentBuild.result = 'SUCCESS'
return
}
The "foo.js" file there is very simple:
console.log("Hello from a js file!");
process.exit(666);
The above code sets the build status to failure, since println status prints "1". So my question is: is it even possible to bubble custom non-zero exit codes up to the pipeline code through the PowerShell plugin? Or is there some other way to achieve this, totally different from what I'm trying to do here?
UPDATE:
Eventually I scrapped the idea of exit codes for now, and went with an even uglier, hackier way :-(
import org.apache.commons.lang.StringUtils
def filterLogs(String filter_string, int occurrence) {
def logs = currentBuild.rawBuild.getLog(10000).join('\n')
int count = StringUtils.countMatches(logs, filter_string);
if (count > occurrence - 1) {
currentBuild.result='UNSTABLE'
}
}
And later in the pipeline after the nodeJS script has run:
stage ('Check logs') {
steps {
filterLogs ('Some specific string I console.log from NodeJS', 1)
}
}
I found this solution as an answer to the question "Jenkins Text finder Plugin, How can I use this plugin with jenkinsfile?".
If that's the only way, I guess I'll have to live with that then.
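For what it's worth, a simpler workaround may be to bypass PowerShell for the exit code entirely: on a Windows agent, cmd.exe propagates the ERRORLEVEL of the node process unchanged, so a bat step with returnStatus should see the raw value. A sketch, assuming the agent is Windows and node is on the PATH:

// cmd.exe passes the node process's ERRORLEVEL through unchanged,
// so returnStatus sees the real value (e.g. 666) instead of 1.
def status = bat(returnStatus: true, script: 'node foo.js')
echo "node exited with ${status}"
if (status == 666) {
    currentBuild.result = 'UNSTABLE'
} else if (status != 0) {
    currentBuild.result = 'FAILURE'
} else {
    currentBuild.result = 'SUCCESS'
}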
I am new to Groovy scripting and SoapUI. I added a script assertion to a test step; however, when I run the test case, the script assertions are not run. I have to run them manually to verify that my response is correct.
Can anyone help me with this, please?
My Groovy script assertion:
import groovy.json.JsonSlurper
//grab response
def response = messageExchange.response.responseContent
//Parse with JsonSlurper to access the data
def list = new JsonSlurper().parseText(response)
//check delegates are in one session per timeslot
for (i = 0; i < list.size(); i++) {
    // find all items for this delegate in this timeslot
    def itemsForThisDelegateInThisTimeslot = list.findAll { element ->
        element.delegateId == list[i].delegateId && element.agendaItemId == list[i].agendaItemId
    }
    log.info(list[i].delegateId)
    // there should not be more than 1
    if (itemsForThisDelegateInThisTimeslot.size() > 1) {
        log.info(list[i].delegateId + "Delegate already assigned to a workshop at same time");
        //Assert fail in execution
        throw new Error('Fail')
    }
}
Firstly, there are no assertions in this Script Assertion. Look up Groovy assert.
If you're 'asserting' that the script is Pass or Fail, you need something like...
assert (response.contains('Something from the response'));
or
assert (someBooleanVar == true);
If it passes, the step goes Green. If it fails, it goes Red.
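Applied to the script in the question, the duplicate check could be expressed as an assertion instead of a thrown Error. A minimal sketch, reusing the list variable from the original script:

list.each { item ->
    def matches = list.findAll { element ->
        element.delegateId == item.delegateId && element.agendaItemId == item.agendaItemId
    }
    // If this trips, the step goes Red and the message below is reported.
    assert matches.size() <= 1 : "Delegate ${item.delegateId} is assigned to more than one workshop in the same timeslot"
}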
IMHO, it looks like you're throwing an exception when the step fails. I would not use an exception in this way. An exception is there to catch and report code issues, not a test failure.
Regarding exceptions, look up Groovy Try and Catch.
As for this running (or not) when you run a test case. I suspect it is running, but as you're not asserting anything, you can't see the result.
Ever noticed the Script Log tab at the bottom of the screen? All of your log.info statements will be in there when the test step runs. I'd suggest clearing this log (right-click in the Script Log window...), then running the test case again and looking in the Script Log for some of your logging messages.
I have code similar to the one below in my Jenkinsfile:
node {
checkout scm
// do some stuff
try {
// do some maven magic
} catch (error) {
stage "Cleanup after fail"
emailext attachLog: true, body: "Build failed (see ${env.BUILD_URL}): ${error}", subject: "[JENKINS] ${env.JOB_NAME} failed", to: 'someone@example.com'
throw error
} finally {
step $class: 'JUnitResultArchiver', testResults: '**/TEST-*.xml'
}
}
If the above code fails because of some Jenkins-pipeline-related error in the try { } (e.g. using an unapproved static method), the script fails silently. When I remove the try/catch/finally I can see the errors.
Am I doing something wrong? Shouldn't rethrowing the error make the pipeline errors appear in the log?
EDIT:
I've managed to narrow the problem down to Groovy syntax errors, e.g. when I use a variable that hasn't been assigned yet.
Example:
echo foo
If foo is not declared/assigned anywhere, Jenkins will fail the build and won't show the reason if it happens inside a try/catch/finally that rethrows the exception.
This happens when an additional exception is thrown inside the finally block or before the re-throw inside catch. In these cases the RejectedAccessException is swallowed and script-security does not catch it.
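A pattern that at least keeps the original error visible (a sketch, not a full fix for the script-security behaviour) is to guard the cleanup code in the finally block with its own try/catch, so a secondary exception cannot replace the one that aborted the try block:

try {
    // do some maven magic
} finally {
    try {
        step $class: 'JUnitResultArchiver', testResults: '**/TEST-*.xml'
    } catch (cleanupError) {
        // Log the secondary failure instead of letting it mask the original exception.
        echo "Cleanup step failed: ${cleanupError}"
    }
}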
Is there a way to perform cleanup (or rollback) if the build in Jenkinsfile failed?
I would like to inform our Atlassian Stash instance that the build failed (by doing a curl at the correct URL).
Basically it would be a post step when build status is set to fail.
Should I use try {} catch ()? If so, what exception type should I catch?
Since 2017-02-03, Declarative Pipeline Syntax 1.0 can be used to achieve this post build step functionality.
It is a new syntax for constructing Pipelines that extends Pipeline with a pre-defined structure and some new steps that enable users to define agents, post actions, environment settings, credentials and stages.
Here is a sample Jenkinsfile with declarative syntax:
pipeline {
agent label:'has-docker', dockerfile: true
environment {
GIT_COMMITTER_NAME = "jenkins"
GIT_COMMITTER_EMAIL = "jenkins@jenkins.io"
}
stages {
stage("Build") {
steps {
sh 'mvn clean install -Dmaven.test.failure.ignore=true'
}
}
stage("Archive"){
steps {
archive "*/target/**/*"
junit '*/target/surefire-reports/*.xml'
}
}
}
post {
always {
deleteDir()
}
success {
mail to:"me#example.com", subject:"SUCCESS: ${currentBuild.fullDisplayName}", body: "Yay, we passed."
}
failure {
mail to:"me#example.com", subject:"FAILURE: ${currentBuild.fullDisplayName}", body: "Boo, we failed."
}
}
}
The post code block is what handles that post-step action.
See the Declarative Pipeline syntax reference for full details.
I'm currently also searching for a solution to this problem. So far, the best I could come up with is a wrapper function that runs the pipeline code in a try/catch block. If you also want to notify on success, you can store the exception in a variable and move the notification code to a finally block. Also note that you have to rethrow the exception so Jenkins considers the build failed. Maybe some reader will find a more elegant approach to this problem.
pipeline('linux') {
stage 'Pull'
stage 'Deploy'
echo "Deploying"
throw new FileNotFoundException("Nothing to pull")
// ...
}
def pipeline(String label, Closure body) {
node(label) {
wrap([$class: 'TimestamperBuildWrapper']) {
try {
body.call()
} catch (Exception e) {
emailext subject: "${env.JOB_NAME} - Build # ${env.BUILD_NUMBER} - FAILURE (${e.message})!", to: "me@me.com", body: "..."
throw e; // rethrow so the build is considered failed
}
}
}
}
I managed to solve it by using try/finally. If the stage raises an error, the stage goes red and the finally block still runs; if the stage is okay, the stage goes green and the finally block runs too.
stage('Tests'){
script{
try{
sh """#!/bin/bash -ex
docker stop \$(docker ps -a -q)
docker rm \$(docker ps -a -q)
export DOCKER_TAG=${DOCKER_TAG}
docker-compose -p ${VISUAL_TESTING_PROJECT_TAG} build test
docker-compose -p ${VISUAL_TESTING_PROJECT_TAG} up --abort-on-container-exit --exit-code-from test
"""
}
finally{
sh """#!/bin/bash -ex
export DOCKER_TAG=${DOCKER_TAG}
docker-compose -p ${VISUAL_TESTING_PROJECT_TAG} down
"""
}
}
}