Parameterisation of a Groovy pipeline for use with Jenkins - Node.js

I have a Groovy pipeline which I've inherited from a project that I forked.
I wish to pass in Jenkins Choice Parameters as a parameterised build. At present, I only wish to expose the environment in which to run (I will want to parameterise further at a later stage), so that a user can choose it from the Jenkins dropdown and run the job on demand.
I used the snippet generator to help.
Can someone please help with the syntax? I am using Node with a package.json to run a script, and I want to pass in either dev or uat:
properties([
    [$class: 'BuildConfigProjectProperty', name: '', namespace: '', resourceVersion: '', uid: ''],
    parameters([
        choice(choices: 'e2e\nuat', description: 'environment ', name: 'env')
    ])
])
node('de-no1') {
    try {
        stage('DEV: Initialise') {
            git url: 'https://myrepo.org/mycompany/create-new-user.git', branch: 'master', credentialsId: CREDENTIALS_ID
        }
        stage('DEV: Install Dependencies') {
            sh 'npm install'
        }
        stage('${params.env}: Generate new users') {
            sh 'npm run generate:${params.env}'
            archiveArtifacts artifacts: '{params.env}-userids.txt', fingerprint: true
        }
This currently fails with:
npm ERR! missing script: generate:{params.env}

I assume you want to replace ${params.env} with a value when you call npm?
If that is the case, you need to use double quotes " to let Groovy know you are doing String templating, i.e.:
sh "npm run generate:${params.env}"

Related

How to write the correct pipeline: Jenkins, Docker, Groovy, Node

I am rewriting my pipeline in scripted (node) syntax, and I need to understand how to perform a post step in node; now an error is coming from stage('Deploy')
node {
    checkout scm
    def customImage = docker.build("python-web-tests:${env.BUILD_ID}")
    customImage.inside {
        sh "python ${env.CMD_PARAMS}"
    }
    stage('Deploy') {
        post {
            always {
                allure([
                    includeProperties: false,
                    jdk: '',
                    properties: [],
                    reportBuildPolicy: 'ALWAYS',
                    results: [[path: 'report']]
                ])
                cleanWs()
            }
        }
    }
}
and this is the old pipeline
pipeline {
    agent { label "slave_first" }
    stages {
        stage("Build container image") {
            steps {
                catchError {
                    script {
                        docker.build("python-web-tests:${env.BUILD_ID}", "-f Dockerfile .")
                    }
                }
            }
        }
        stage("Running and debugging the test") {
            steps {
                sh 'ls'
                sh 'docker run --rm -e REGION=${REGION} -e DATA=${DATA} -e BUILD_DESCRIPTION=${BUILD_URL} -v ${WORKSPACE}:/tmp python-web-tests:${BUILD_ID} /bin/bash -c "python ${CMD_PARAMS} || exit_code=$?; chmod -R 777 /tmp; exit $exit_code"'
            }
        }
    }
    post {
        always {
            allure([
                includeProperties: false,
                jdk: '',
                properties: [],
                reportBuildPolicy: 'ALWAYS',
                results: [[path: 'report']]
            ])
            cleanWs()
        }
    }
}
I tried to transfer the method of creating an Allure report, but it did not work. Using the version above, almost everything works; what remains is adding environment variables to the build, for example the ones specified with -e DATA=${DATA}. How do I add those?
I don't recommend switching from a declarative to a scripted pipeline.
You lose the tooling connected with the declarative approach, such as syntax checkers.
If you still want to use the scripted approach, try this:
node('slave_first') {
    stage('Build') {
        checkout scm
        def customImage = docker.build("python-web-tests:${env.BUILD_ID}")
        customImage.inside {
            sh "python ${env.CMD_PARAMS}"
        }
    }
    stage('Deploy') {
        allure([
            includeProperties: false,
            jdk: '',
            properties: [],
            reportBuildPolicy: 'ALWAYS',
            results: [[path: 'report']]])
        cleanWs()
    }
}
There is no post or always directive in scripted pipelines. It's on you to catch all exceptions and set the status of the job. I guess you were using this page: https://www.jenkins.io/doc/book/pipeline/syntax/, but that is a mistake: the page only covers the declarative approach, with scripted code hidden in a few of the examples.
Also, I don't know if you have a default agent label set in your Jenkins config, but looking at your declarative pipeline I think you missed the 'slave_first' argument in the node step.
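A minimal sketch of emulating post { always { ... } } in a scripted pipeline with try/catch/finally, reusing the allure and cleanWs steps from your declarative version:
node('slave_first') {
    try {
        stage('Build') {
            checkout scm
            def customImage = docker.build("python-web-tests:${env.BUILD_ID}")
            customImage.inside {
                sh "python ${env.CMD_PARAMS}"
            }
        }
        currentBuild.result = 'SUCCESS'
    } catch (e) {
        // scripted pipelines have no post directive: record the failure ourselves
        currentBuild.result = 'FAILURE'
        throw e
    } finally {
        // the finally block plays the role of post { always { ... } }
        allure([
            includeProperties: false,
            jdk: '',
            properties: [],
            reportBuildPolicy: 'ALWAYS',
            results: [[path: 'report']]
        ])
        cleanWs()
    }
}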
"those that are specified -e DATA=${DATA} how do I add it"
That's a Docker question, not a Jenkins one. If you want to launch a Docker image and then also have access to reports located in that container, you should mount the workspace folder where those output files land. You should also pass the location of those files to allure.
I suggest you try this (see the sketch below):
mount some subfolder of the workspace into the Docker container
cat the test report file to check it's visible
add the allure report step, passing this file location to it
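A sketch of that approach, carrying the environment variables and the workspace mount over from the old docker run line (REGION, DATA and CMD_PARAMS are assumed to be parameters of your job):
node('slave_first') {
    stage('Build') {
        checkout scm
        def customImage = docker.build("python-web-tests:${env.BUILD_ID}")
        // inside() accepts the same flags as docker run: env vars plus an extra mount
        customImage.inside("-e REGION=${env.REGION} -e DATA=${env.DATA} -v ${env.WORKSPACE}:/tmp") {
            sh "python ${env.CMD_PARAMS}"
            sh 'ls report' // check the test report files are visible in the workspace
        }
    }
}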

How to set conditions in a parallel build to proceed to the next stage if one step succeeds

I am creating a declarative pipeline in Jenkins. There are 6 stages in it.
First Stage: Scenario Upload
Second Stage: Pull code from Git
Third Stage: Maven Build
Fourth Stage: A parallel stage; the first step launches the mobile emulator and the second step checks whether the device is connected.
Fifth Stage: I want this stage to start only when the second step's build succeeds; otherwise stop the job.
Sixth Stage: Send email
I am stuck at point 5 (Fifth Stage). Please help:
pipeline {
    agent any
    stages {
        stage("Scenario Upload") {
            steps {
                script {
                    def inputFile = input message: 'Upload file', parameters: [file(name: 'CyclosAppStatus.xlsx')]
                    new hudson.FilePath(new File("$workspace/Cucumber_BDD master/Result/CyclosAppStatus.xlsx")).copyFrom(inputFile)
                    inputFile.delete()
                }
            }
        }
        stage('Git Pull Code') {
            steps {
                git credentialsId: '708a126a-66bb-4eb5-8826-55cedf6497c3', url: 'https://github.com/divakar-ragupathy/Mobile_Automation_BDD.git'
            }
        }
        stage('Maven Clean Build') {
            steps {
                bat label: '', script: '''Echo Maven Clean Build...
                cd %WORKSPACE%\\ADB_Devices
                mvn clean compile'''
            }
        }
        stage('Building Android Setup') {
            steps {
                parallel(
                    Invoke_Emulator: {
                        bat label: '', script: '''Echo Invoking Emulator...
                        #echo off
                        set emulName=%Emulator_Name%
                        echo %emulName%
                        for /f "tokens=1 delims=:" %%e in ("%emulName%") do (
                            %ANDROID_AVD_PATH%emulator -avd "%%e" -no-boot-anim -no-snapshot-save -no-snapshot-load
                        )
                        endlocal'''
                    },
                    Checking_Device: {
                        bat label: '', script: '''Echo Checking Connected Device...
                        cd %WORKSPACE%\\ADB_Devices
                        mvn exec:java -Dexec.mainClass=com.expleo.adbListner.CheckConnectedAdbDevices -Dlog4j.configuration=file:///%WORKSPACE%\\ADB_Devices\\src\\log4j.properties -Dexec.args="%Emulator_Name%"'''
                    }
                )
            }
        }
    }
}
If you declare a variable without the def keyword, it is global. You can use that to store the condition from the previous stages. In the 5th stage you can then use a when block to check this condition.
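A minimal sketch of the idea (the deviceConnected flag and the stage bodies are hypothetical; set the flag from the result of your Checking_Device step):
pipeline {
    agent any
    stages {
        stage('Building Android Setup') {
            steps {
                script {
                    // assigned without 'def', so it is global and visible to later stages
                    deviceConnected = true // set according to the Checking_Device result
                }
            }
        }
        stage('Run Tests') {
            // only entered when the device check succeeded
            when { expression { return deviceConnected } }
            steps {
                echo 'Device connected, proceeding...'
            }
        }
    }
}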

Jenkins. Invalid agent type "docker" specified. Must be one of [any, label, none]

My Jenkinsfile looks like:
pipeline {
    agent {
        docker {
            image 'node:12.16.2'
            args '-p 3000:3000'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'node --version'
                sh 'npm install'
                sh 'npm run build'
            }
        }
        stage('Deliver') {
            steps {
                sh 'readlink -f ./package.json'
            }
        }
    }
}
I used to have Jenkins locally and this configuration worked, but I deployed it to a remote server and get the following error:
WorkflowScript: 3: Invalid agent type "docker" specified. Must be one of [any, label, none] @ line 3, column 9.
docker {
I could not find a solution to this problem on the Internet; please help me.
You have to install 2 plugins: Docker plugin and Docker Pipeline.
Go to the Jenkins root page > Manage Jenkins > Manage Plugins > Available and search for the plugins. (Learnt from here.)
Instead of
agent {
    docker {
        image 'node:12.16.2'
        args '-p 3000:3000'
    }
}
try
agent {
    any {
        image 'node:12.16.2'
        args '-p 3000:3000'
    }
}
That worked for me.
For those that are using CasC, you might want to include these in the plugin declaration:
docker:latest
docker-commons:latest
docker-workflow:latest

Jenkins pipeline NodeJS

My Jenkinsfile script started throwing an "npm not found" error (it works for Maven but fails at npm).
pipeline {
    environment {
        JENKINS = 'true'
    }
    agent any
    stages {
        stage('change permissions') {
            steps {
                sh "chmod 777 ./mvnw"
            }
        }
        stage('clean') {
            steps {
                sh './mvnw clean install'
            }
        }
        stage('yarn install') {
            steps {
                sh 'npm install -g yarn'
                sh 'yarn install'
            }
        }
        stage('yarn webpack:build') {
            steps {
                sh 'yarn webpack:build'
            }
        }
        stage('backend tests') {
            steps {
                sh './mvnw test'
            }
        }
        stage('frontend tests') {
            steps {
                sh 'yarn test'
            }
        }
    }
}
To fix that, I am trying to set up NodeJS on my Jenkins node. I installed the NodeJS plugin and wrote the script
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                nodejs(nodeJSInstallationName: 'Node 6.x', configId: '<config-file-provider-id>') {
                    sh 'npm config ls'
                }
            }
        }
    }
}
as shown in the https://wiki.jenkins.io/display/JENKINS/NodeJS+Plugin
I also set up NodeJS in the global tools configuration.
I also tried the solution from "installing node on jenkins 2.0 using the pipeline plugin", and it throws the error
Expected to find ‘someKey "someValue"’ @ line 4, column 7.
node {
But I am still getting the "npm not found" error on Jenkins. I am new to Jenkins, so any help is appreciated.
Thanks in advance.
I was able to fix the issue by following this link: https://medium.com/@gustavo.guss/jenkins-starting-with-pipeline-doing-a-node-js-test-72c6057b67d4
It's a puzzle. ;)
There is a little reference trick: you need to configure Jenkins so it can see your NodeJS config name. In Global Tool Configuration, you define your Node config name, and your Jenkinsfile references it. Here is an adapted Jenkinsfile example with that reference:
pipeline {
    agent any
    tools { nodejs "node" }
    stages {
        stage('Cloning Git') {
            steps {
                git 'https://github.com/xxxx'
            }
        }
        stage('Install dependencies') {
            steps {
                sh 'npm i -save express'
            }
        }
        stage('Test') {
            steps {
                sh 'node server.js'
            }
        }
    }
}
Complete case to study: Post at Medium by Gustavo Apolinario
Hope it helps!
If you need a different version of Node.js and npm, you can install the NodeJS plugin for Jenkins.
Go to Manage Jenkins -> Global Tool Configuration and find the NodeJS section.
Select the version you need and name it as you prefer. You can also add npm packages that need to be installed globally.
In a declarative pipeline, just reference the correct version of Node.js to use:
stage('Review node and npm installations') {
    steps {
        nodejs(nodeJSInstallationName: 'node13') {
            sh 'npm -v' // substitute with your code
            sh 'node -v'
        }
    }
}
Full example here: https://pillsfromtheweb.blogspot.com/2020/05/how-to-use-different-nodejs-versions-on.html

How can I use the Jenkins Copy Artifacts Plugin from within pipelines (Jenkinsfile)?

I am trying to find an example of using the Jenkins Copy Artifacts Plugin from within Jenkins pipelines (workflows).
Can anyone point to a sample Groovy code that is using it?
With a declarative Jenkinsfile, you can use the following pipeline:
pipeline {
    agent any
    stages {
        stage('push artifact') {
            steps {
                sh 'mkdir archive'
                sh 'echo test > archive/test.txt'
                zip zipFile: 'test.zip', archive: false, dir: 'archive'
                archiveArtifacts artifacts: 'test.zip', fingerprint: true
            }
        }
        stage('pull artifact') {
            steps {
                copyArtifacts filter: 'test.zip', fingerprintArtifacts: true, projectName: env.JOB_NAME, selector: specific(env.BUILD_NUMBER)
                unzip zipFile: 'test.zip', dir: './archive_new'
                sh 'cat archive_new/test.txt'
            }
        }
    }
}
Before version 1.39 of the CopyArtifact plugin, you must replace the second stage with the following (thanks @Yeroc):
stage('pull artifact') {
    steps {
        step([$class: 'CopyArtifact',
              filter: 'test.zip',
              fingerprintArtifacts: true,
              projectName: '${JOB_NAME}',
              selector: [$class: 'SpecificBuildSelector', buildNumber: '${BUILD_NUMBER}']
        ])
        unzip zipFile: 'test.zip', dir: './archive_new'
        sh 'cat archive_new/test.txt'
    }
}
With CopyArtifact, I use '${JOB_NAME}' as the project name, which is the currently running project.
The default selector used by CopyArtifact takes the last successful build number, never the current one (because the current build is not yet successful, and may never be). With SpecificBuildSelector you can choose '${BUILD_NUMBER}', which contains the currently running build number.
This pipeline works with parallel stages and can manage huge files (I'm using a 300 MB file; it does not work with stash/unstash).
This pipeline works perfectly with my Jenkins 2.74, provided you have all the needed plugins.
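To illustrate the selector difference described above, a short sketch (the upstream-job name is hypothetical):
// default selector: copies from the last successful build of the source job
copyArtifacts filter: 'test.zip', projectName: 'upstream-job'
// specific selector: copies from this very build, which is still running
copyArtifacts filter: 'test.zip', projectName: env.JOB_NAME, selector: specific(env.BUILD_NUMBER)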
If you are using multiple agents and want to copy artifacts between them within the same pipeline, you can use stash/unstash, for example:
stage 'build'
node {
    git 'https://github.com/cloudbees/todo-api.git'
    stash includes: 'pom.xml', name: 'pom'
}
stage name: 'test', concurrency: 3
node {
    unstash 'pom'
    sh 'cat pom.xml'
}
You can see this example at this link:
https://dzone.com/refcardz/continuous-delivery-with-jenkins-workflow
If the builds are not running in the same pipeline, you can use the CopyArtifact plugin directly; here is an example: https://www.cloudbees.com/blog/copying-artifacts-between-builds-jenkins-workflow and example code:
node {
    // setup env..
    // copy the deployment unit from another Job...
    step([$class: 'CopyArtifact',
          projectName: 'webapp_build',
          filter: 'target/orders.war'])
    // deploy 'target/orders.war' to an app host
}
name = "/" + "${env.JOB_NAME}"
def archiveName = 'relNum'
try {
step($class: 'hudson.plugins.copyartifact.CopyArtifact', projectName: name, filter: archiveName)
} catch (none) {
echo 'No artifact to copy from ' + name + ' with name relNum'
writeFile file: archiveName, text: '3'
}
def content = readFile(archiveName).trim()
echo 'value archived: ' + content
Try that, using the Copy Artifact plugin.
