replacing file variables by envsubst in jenkins pipeline - linux

I want to replace some variables of the form $variablename in a file, at runtime, from a Jenkins pipeline script. It seems envsubst is the best fit for my use case. When I run it from the command line on the Linux server it works fine, but when I execute it through the Jenkins pipeline in an sh step, nothing happens.
sonar-scanner.properties:
sonar.projectKey=Project:MavenTest$BRANCHNAME
sonar.projectName=MavenTest$BRANCHNAME
Example of the command line on the Linux box:
$ export BRANCHNAME=develop
$ envsubst '$BRANCHNAME' < sonar-scanner.properties
Output:
sonar.projectKey=Project:MavenTestdevelop
sonar.projectName=MavenTestdevelop
But when I execute it through the Jenkins file as a script, nothing is changed in the file.
Jenkins script:
node {
    stage('checkout') {
        checkout([$class: 'GitSCM', branches: [[name: ':^(?!origin/master$|origin/develop$).*']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'c0ce73db-3864-4360-9c17-d87caf8a9ea5', url: 'http://172.16.4.158:17990/scm/ctoo/testmaven.git']]])
    }
    stage('initialize variables') {
        // Configuring BRANCH_NAME variable
        sh 'git name-rev --name-only HEAD > GIT_BRANCH'
        sh label: '', script: 'cut -d \'/\' -f 3 GIT_BRANCH > BRANCH'
        branchname = readFile('BRANCH').trim()
        env.BRANCHNAME = branchname
    }
    stage('build & SonarQube analysis') {
        withSonarQubeEnv('Sonar') {
            sh "envsubst '$BRANCHNAME' <sonar-scanner.properties"
        }
    }
}
Output:
[Pipeline] sh
envsubst repotest
sonar.projectKey=Project:MavenTest$BRANCHNAME
sonar.projectName=MavenTest$BRANCHNAME
Can someone please help me?

Hi, I have no idea about envsubst, but this can be achieved by passing the sonar parameters on the command line to the scanner; see the example below:
withSonarQubeEnv('Sonar') {
    sh "<sonarscanner path> -Dsonar.projectKey=Project:MavenTest$BRANCHNAME"
}

I had this problem and solved it by using the escape character \, for example:
sh "envsubst '\${SERVER_NAME}' < ./config/nginx/nginx.conf.template > ./config/nginx/nginx.conf"

Related

How to write the correct pipeline: Jenkins, Docker, Groovy, node

I am rewriting my pipeline in node (scripted) syntax and I need to understand how to perform this stage in node; right now an error is coming from stage('Deploy'):
node {
    checkout scm
    def customImage = docker.build("python-web-tests:${env.BUILD_ID}")
    customImage.inside {
        sh "python ${env.CMD_PARAMS}"
    }
    stage('Deploy') {
        post {
            always {
                allure([
                    includeProperties: false,
                    jdk: '',
                    properties: [],
                    reportBuildPolicy: 'ALWAYS',
                    results: [[path: 'report']]
                ])
                cleanWs()
            }
        }
    }
}
And this is the old pipeline:
pipeline {
    agent { label "slave_first" }
    stages {
        stage("Creating the container image") {
            steps {
                catchError {
                    script {
                        docker.build("python-web-tests:${env.BUILD_ID}", "-f Dockerfile .")
                    }
                }
            }
        }
        stage("Running and debugging the test") {
            steps {
                sh 'ls'
                sh 'docker run --rm -e REGION=${REGION} -e DATA=${DATA} -e BUILD_DESCRIPTION=${BUILD_URL} -v ${WORKSPACE}:/tmp python-web-tests:${BUILD_ID} /bin/bash -c "python ${CMD_PARAMS} || exit_code=$?; chmod -R 777 /tmp; exit $exit_code"'
            }
        }
    }
    post {
        always {
            allure([
                includeProperties: false,
                jdk: '',
                properties: [],
                reportBuildPolicy: 'ALWAYS',
                results: [[path: 'report']]
            ])
            cleanWs()
        }
    }
}
I tried to port the Allure report step, but nothing worked. With the version above almost everything works. Can I still add environment variables to the build, for example the ones specified with -e DATA=${DATA}? How do I add them?
I don't recommend switching from declarative to scripted pipelines.
You lose the tooling connected with the declarative approach, such as syntax checkers.
If you still want to use the scripted approach, try this:
node('slave_first') {
    stage('Build') {
        checkout scm
        def customImage = docker.build("python-web-tests:${env.BUILD_ID}")
        customImage.inside {
            sh "python ${env.CMD_PARAMS}"
        }
    }
    stage('Deploy') {
        allure([
            includeProperties: false,
            jdk: '',
            properties: [],
            reportBuildPolicy: 'ALWAYS',
            results: [[path: 'report']]
        ])
        cleanWs()
    }
}
There are no post and always directives in scripted pipelines; it is on you to catch all exceptions and set the status of the job. I guess you were using this page: https://www.jenkins.io/doc/book/pipeline/syntax/, but that is a mistake: it only covers the declarative approach, and in a few cases the examples contain hidden scripted code.
Also, I don't know if you have a default agent label set in your Jenkins config, but looking at your declarative pipeline I think you missed the 'slave_first' argument in the node step.
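If you want the Deploy steps to run even when the build fails, the way post { always { ... } } did, the usual scripted-pipeline substitute is a try/finally around the stages. A minimal sketch of that pattern (the stage bodies are taken from the pipeline above):
node('slave_first') {
    try {
        stage('Build') {
            checkout scm
            def customImage = docker.build("python-web-tests:${env.BUILD_ID}")
            customImage.inside {
                sh "python ${env.CMD_PARAMS}"
            }
        }
    } catch (e) {
        // Mark the build as failed and rethrow so Jenkins still sees the error.
        currentBuild.result = 'FAILURE'
        throw e
    } finally {
        // Runs whether the build passed or failed, like post { always { ... } }.
        allure([
            includeProperties: false,
            jdk: '',
            properties: [],
            reportBuildPolicy: 'ALWAYS',
            results: [[path: 'report']]
        ])
        cleanWs()
    }
}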
those that are specified with -e DATA=${DATA}, how do I add them?
That's a Docker question, not a Jenkins one. If you want to launch a Docker image and then also have access to reports located in that container, you should mount the workspace/directory where those output files land. You should also pass the location of those files to Allure.
I suggest you try this (see the sketch after this list):
mount some subfolder of the workspace into the Docker container
cat the test report file to check that it is visible
add the Allure report, passing that file location to the allure step
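A minimal sketch of both suggestions, passing the environment variables through the arguments of inside() (the -e names are taken from the old docker run line; the report/ path is an assumption about where the tests write their results):
customImage.inside("-e REGION=${env.REGION} -e DATA=${env.DATA} -e BUILD_DESCRIPTION=${env.BUILD_URL}") {
    // inside() mounts the workspace into the container and runs there, so
    // results written to ./report land in the workspace on the agent.
    sh "python ${env.CMD_PARAMS}"
    // Sanity check: list the report to confirm it is visible.
    sh 'ls report'
}
// Hand the same location to the allure step.
allure([
    includeProperties: false,
    jdk: '',
    properties: [],
    reportBuildPolicy: 'ALWAYS',
    results: [[path: 'report']]
])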

How to define a test stage in Jenkins pipeline which makes use of curl to check if the server is up or not?

I am using the latest Jenkins on my Linux box.
I would like to implement a Test stage like below:
pipeline {
    agent any
    stages {
        stage('Test') {
            HTTP_CODE = sh (
                script: 'echo $(curl --write-out \\"%{http_code}\\" --silent --output /dev/null http://localhost/)',
                returnStdout: true
            ).trim()
        }
    }
}
In this stage, I want to execute a bash script to check whether the web server is up. The HTTP_CODE variable will have the value 200 if everything is fine; any other value can be treated as an error.
How can I implement this logic as a test stage in my Jenkins pipeline?
Thanks.
You should update your pipeline as follows:
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                script {
                    HTTP_CODE = sh (
                        script: 'echo $(curl --write-out \\"%{http_code}\\" --silent --output /dev/null http://localhost/)',
                        returnStdout: true
                    ).trim()
                    if ('200' != HTTP_CODE) {
                        currentBuild.result = "FAILURE"
                        error('Test stage failed!')
                    }
                }
            }
        }
    }
}
Regards.
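For reference, a shorter variant of the same check (a sketch, assuming curl's --fail flag is acceptable for your server) lets the exit code carry the result instead of echoing the status code:
script {
    // --fail makes curl exit non-zero on HTTP errors (status >= 400), and
    // returnStatus hands back that exit code instead of failing the sh step.
    def rc = sh(script: 'curl --fail --silent --output /dev/null http://localhost/', returnStatus: true)
    if (rc != 0) {
        error("Server check failed with curl exit code ${rc}")
    }
}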

Linux screen disappears after Jenkins job is done

I have made a Jenkins pipeline for my Angular application.
This Angular application uses SSR, so it has to run in the background.
Thus I decided to use a screen session for this; here is my Jenkinsfile:
pipeline {
    agent any
    environment {
        HOME = '.'
    }
    stages {
        stage('build') {
            steps {
                sh 'npm i'
                sh 'npm run build:ssr'
            }
        }
        stage('move') {
            steps {
                script {
                    if (BRANCH_NAME == 'master') {
                        sh 'rm -R /var/www/AngularJenkinsTest_Master/client || true'
                        sh 'mkdir /var/www/AngularJenkinsTest_Master/client || true'
                        sh 'cp -R $WORKSPACE/dist/AngularJenkinsTest/* /var/www/AngularJenkinsTest_Master/client'
                    }
                }
            }
        }
        stage('publish') {
            steps {
                script {
                    if (BRANCH_NAME == 'master') {
                        sh 'screen -X -S AngularJenkinsTest_Master kill | true'
                        sh 'screen -dmS AngularJenkinsTest_Master'
                        sh 'screen -S AngularJenkinsTest_Master -X stuff "node /var/www/AngularJenkinsTest_Master/client/server/main.js\n"'
                    }
                }
            }
        }
    }
}
As you can see in "publish", I kill the screen (as it is running a node application), then create a new screen and send a command to it.
I added a screen -ls at the end, and it did show that the session existed, but when I go to my Linux console:
jenkins#server:/root$ screen -ls
No Sockets found in /run/screen/S-jenkins.
This is the output that Jenkins gives me:
[screenshot: Jenkins output]
I am new to Jenkins, so maybe I am just being dumb, but is there any reason for this to happen?
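A likely cause is Jenkins' process tree killer, which kills every process spawned by a build once the job finishes, including detached screen sessions. A commonly used workaround is to set the JENKINS_NODE_COOKIE environment variable to dontKillMe for the process that must survive; a minimal sketch for the publish step above (note the || so a missing session does not fail the step):
sh 'screen -X -S AngularJenkinsTest_Master kill || true'
// JENKINS_NODE_COOKIE=dontKillMe tells the process tree killer to leave this
// screen session, and the node server inside it, alive after the build ends.
sh 'JENKINS_NODE_COOKIE=dontKillMe screen -dmS AngularJenkinsTest_Master'
sh 'screen -S AngularJenkinsTest_Master -X stuff "node /var/www/AngularJenkinsTest_Master/client/server/main.js\n"'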

Jenkins "sh" messes up gradle command

In short, I want to run this test in Jenkins CI:
sh ./gradlew run -PappArgs="['test##password##jdbc:postgresql://database.synchr.net:5432/matrixmaster_core','./viewregression.html']"
The goal is to export viewregression.html at the end and check the results of the test.
It works perfectly as ./gradlew run -PappArgs="['test##password##jdbc:postgresql://database.company.net:5432/matrixmaster_core','./viewregression.html']"
but when I add sh at the beginning and close it with quotes, Jenkins says:
Script1.groovy: 1: unexpected token: # # line 1, column 8.
Here is my whole Jenkins CI step:
stage('View Regression Test') {
    steps {
        script {
            sh "./gradlew run -PappArgs="['acc##pass##jdbc:postgresql://database.company.net:5432/matrixmaster_core','./viewregression.html']""
        }
    }
}
}
}
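The inner double quotes after -PappArgs= terminate the Groovy string early, so Groovy tries to parse the bracketed list, and its ## characters, as code; that is where "unexpected token: #" comes from. One way to quote it is a single-quoted Groovy string, so nothing is interpolated and the shell receives the argument unchanged (a sketch, not a verified fix for this exact build):
stage('View Regression Test') {
    steps {
        script {
            // Single-quoted Groovy string: the escaped \' become literal single
            // quotes, and the double quotes pass through to the shell untouched.
            sh './gradlew run -PappArgs="[\'acc##pass##jdbc:postgresql://database.company.net:5432/matrixmaster_core\',\'./viewregression.html\']"'
        }
    }
}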

Only build projects if something has changed

We want to split up our project into smaller pieces. Our current CI process goes through a short test phase and then runs a deployment script. However, if nothing has changed in one of the subprojects, we do not want to go through the build for it.
Jenkins without pipelines supports exclusions in the SCM configuration (we use git), and based on this you can configure a specific job to run. However, when using a pipeline, how can I know whether I should build this part or not? How do I get access to the paths that were affected by the last push?
At the moment our script is very simple, and we would like to keep it as simple as possible.
We played around with the scripted and the declarative syntax, but could not find a good solution.
Declarative:
#!groovy
pipeline {
    agent any
    tools {
        nodejs '8.1'
    }
    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }
        // Only continue if something has changed
        stage('Install') {
            steps {
                sh 'npm install'
            }
        }
        stage('Test') {
            steps {
                sh 'npm run test-jenkins'
            }
            post {
                always {
                    junit "artifacts/test/report.xml"
                }
            }
        }
    }
}
Scripted:
#!groovy
node {
    def nodejs = tool name: '8.1', type: 'jenkins.plugins.nodejs.tools.NodeJSInstallation'
    env.PATH = "${nodejs}/bin:${env.PATH}"
    stage('Checkout') {
        checkout scm
    }
    // Only continue if something has changed
    stage('Install') {
        sh 'npm install'
    }
    stage('Test') {
        try {
            sh 'npm run test-jenkins'
        } finally {
            junit "artifacts/test/report.xml"
        }
    }
}
Thanks to ElpieKay's fast comment on my question, we now have an elegant solution:
Tag the current commit on a successful build.
In the next build, compare the new commit to that tag for changes.
We are using a multibranch pipeline and a parallel build for the multiple projects we have under the same source root. We iterate through the projects (serviceX) and check the corresponding directory for changes:
def projects = ['service1', 'service2']
def builders = [:]
for (p in projects) {
    def label = p
    builders[label] = {
        def tag = "${BRANCH_NAME}_last"
        node {
            echo "Checking for changes compared to ${tag} in directory ${label}"
            try {
                sh "./check-for-changes ${tag} ${label}"
            } catch (ignored) {
                echo "Nothing to do"
                return
            }
            dir (label) {
                stage(label + ": Install") {
                    sh "npm install"
                }
                stage(label + ": Test") {
                    try {
                        sh "npm run test-jenkins"
                    } finally {
                        junit 'artifacts/test/report.xml'
                    }
                }
                echo "Setting tag for the last build on this branch"
                sh "git tag -f ${tag}"
            }
        }
    }
}
parallel builders
... and the script to check for changes:
#!/bin/bash
SHA_PREV=$1
if [ -z "${SHA_PREV}" ]; then
    echo "Usage: `basename $0` <tag> <path>"
    exit 1
fi
CHECK_PATH=$2
if [ -z "${CHECK_PATH}" ]; then
    echo "Usage: `basename $0` <tag> <path>"
    exit 1
fi
if git rev-parse ${SHA_PREV} >/dev/null 2>&1; then
    echo "Found previous tag: ${SHA_PREV}"
else
    SHA_PREV=`git rev-list --max-parents=0 HEAD`
    echo "Using initial commit: ${SHA_PREV}"
fi
changes=`git diff --name-only ${SHA_PREV} HEAD | grep ${CHECK_PATH}/`
if [ ! -n "${changes}" ]; then
    echo "No changes found"
    exit 2 # no changes found
fi
