I have set up some credentials environment variables using the Credentials plugin for Jenkins.
I am using them in my Jenkinsfile like this:
pipeline {
    agent any
    environment {
        DEV_GOOGLE_CLIENT_ID = credentials('DEV_GOOGLE_CLIENT_ID')
        DEV_GOOGLE_CLIENT_SECRET = credentials('DEV_GOOGLE_CLIENT_SECRET')
    }
    stages {
        stage('Install dependencies') {
            steps {
                dir("./codes") {
                    sh 'npm install'
                }
            }
        }
        stage('Stop previous forever process') {
            steps {
                dir("./codes") {
                    sh 'forever stop dev || ls'
                }
            }
        }
        stage('Clean forever logs') {
            steps {
                dir("./codes") {
                    sh 'forever cleanlogs'
                }
            }
        }
        stage('Test') {
            steps {
                dir("./codes") {
                    sh 'npm run test'
                }
            }
        }
    }
}
In my Node.js code, I'm trying to access those environment variables with process.env.DEV_GOOGLE_CLIENT_SECRET, but that is not working: I get undefined ...
Thank you
Which type of credentials did you use: secret text, or username and password?
When using username-and-password credentials, you can access the username and the password separately, like this:
pipeline {
    agent any
    environment {
        DEV_GOOGLE_CLIENT = credentials('DEV_GOOGLE_CLIENT')
    }
    stages {
        stage('Get username and password') {
            steps {
                echo "username is $DEV_GOOGLE_CLIENT_USR"
                echo "password is $DEV_GOOGLE_CLIENT_PSW"
            }
        }
    }
}
I don't know how to call $DEV_GOOGLE_CLIENT_USR and $DEV_GOOGLE_CLIENT_PSW in node.js, sorry.
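For reference, once the pipeline forwards them to the node process they are ordinary environment variables. A minimal sketch (the GOOGLE_USER/GOOGLE_PASS names and the sh line are illustrative, not part of the original answer):

```javascript
// app.js (hypothetical) -- the pipeline must forward the credential parts, e.g.:
//   sh 'GOOGLE_USER="$DEV_GOOGLE_CLIENT_USR" GOOGLE_PASS="$DEV_GOOGLE_CLIENT_PSW" node app.js'
const user = process.env.GOOGLE_USER;
const pass = process.env.GOOGLE_PASS;

if (user === undefined || pass === undefined) {
  // nothing was forwarded: exactly the "undefined" symptom from the question
  console.error('credentials were not forwarded to this process');
} else {
  console.log('got credentials for user of length', user.length);
}
```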
You pass your credentials along this chain: 1 Jenkins credentials > 2 pipeline environment variable + node.js command line parameter > 3 node.js environment variable
1 Jenkins credentials
Make sure to create a Jenkins credentials as secret text first: https://www.jenkins.io/doc/book/using/using-credentials/
2 pipeline environment variable + node.js command line parameter
pipeline {
    agent any
    environment {
        JENKINS_SECRET_TEXT = credentials('JENKINS_SECRET_TEXT')
    }
    stages {
        stage('Pass secret as command line parameter') {
            steps {
                sh 'SECRET_ENV_VAR="$JENKINS_SECRET_TEXT" node app.js'
            }
        }
    }
}
3 node.js environment variable
console.log("SECRET_ENV_VAR:", process.env.SECRET_ENV_VAR);
Related
I'm running a Terraform pipeline through a Jenkinsfile, where I'm using an input(...) block for user approval before apply. This is the code snippet:
stage('tf_plan') {
    agent {
        label 'Jenkins-Linux-Dev'
    }
    steps {
        sh(
            label: 'Terraform Plan',
            script: '''
                #!/usr/bin/env bash
                terraform plan -input=false -no-color -out=plan.tfplan
            '''
        )
    }
}
stage('tf_approve') {
    when { expression { return env.Action == 'apply' } }
    options {
        timeout( time: 1, unit: 'MINUTES' )
    }
    steps {
        input(
            message: 'Proceed with above Terraform Plan??',
            ok: 'Proceed'
        )
    }
}
stage('tf_apply') {
    agent {
        label 'Jenkins-Linux-Dev'
    }
    when { expression { return env.Action == 'apply' } }
    steps {
        sh(
            label: 'Terraform Apply',
            script: '''
                #!/usr/bin/env bash
                terraform apply -auto-approve -input=false -no-color plan.tfplan
            '''
        )
    }
}
stage('tf_plan') works absolutely fine, but when env.Action == 'apply', the build doesn't move any further after stage('tf_approve'). It's stuck at the Proceed or Abort step and doesn't move forward when clicking either of them. Any idea what might be the problem?
Any help would be very much appreciated.
-S
My setup:
Jenkins 2.277.1
Groovy 2.3
Pipeline 2.6
Pipeline Utility Steps 2.6.1
And the following code works fine:
pipeline {
    agent any
    parameters {
        choice(choices: ['-', 'apply'], name: 'Action')
    }
    stages {
        stage('Trigger Promotion') {
            when { expression { return env.Action == 'apply' } }
            options {
                timeout( time: 1, unit: 'MINUTES' )
            }
            steps {
                script {
                    input(
                        message: 'Proceed with above Terraform Plan??',
                        ok: 'Proceed'
                    )
                }
            }
        }
    }
}
Therefore, I don't think the issue is with the input step. Need more info on what's going on with Jenkins and its workers at that moment. Try grabbing logs of Jenkins main node.
P.S.: I'd suggest avoiding PascalCase variable names in Groovy. That convention is usually reserved for declaring classes.
Let's say we have a simple pipeline setup like this:
pipeline {
    agent any
    stages {
        stage('Stage1') {
            steps {
                sh '''
                    echo 'Copying files'
                    cp ./file1 ./directory1
                '''
            }
        }
        stage('Stage2') {
            steps {
                sh '''
                    echo 'This stage should still work and run'
                    cp ./directory2/files ./directory2/subdirectory
                '''
            }
        }
        stage('Stage3') { ... }
        ...
    }
}
Whenever I don't have the files in Stage1 or Stage2, it fails the build saying:
'cp cannot stat ./file1 ./directory1' or 'cp cannot stat ./directory2/files ./directory2/subdirectory'
Of course, if the files exist, both stages work perfectly fine. The problem is that when one stage fails, the build fails for all of the remaining stages. So if Stage1 fails because the files are missing, every stage after it fails without even running; likewise, if Stage2 fails, we know Stage1 succeeded, but Stage3 onwards fails and never runs.
Is there a way to make it so that if the cp command fails and the cp cannot stat shows, to just skip the stage and proceed to the next one? Or at least make it so that only that stage fails and it can proceed to build the next stage(s)?
Here is a simple way of skipping a stage when a file does not exist, using the when directive:
pipeline {
    agent any
    stages {
        stage('Stage1') {
            when { expression { fileExists './file1' } }
            steps {
                sh '''
                    echo 'Copying files'
                    cp ./file1 ./directory1
                '''
            }
        }
        stage('Stage2') {
            when { expression { fileExists './directory2/files' } }
            steps {
                sh '''
                    echo 'This stage should still work and run'
                    cp ./directory2/files ./directory2/subdirectory
                '''
            }
        }
        stage('Stage3') {
            steps {
                echo "stage 3"
            }
        }
    }
}
In the above case you have to specify the path twice, once in the when directive and once in the sh step; it is better to handle it another way, e.g. using variables or closures.
Because of the restrictions in the declarative pipeline, I would recommend using the scripted pipeline instead.
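To illustrate the variables/closures idea, here is a scripted-pipeline sketch (untested; the source and destination paths are taken from the question) where each path is listed exactly once:

```groovy
node {
    // each entry: [source, destination] -- paths from the question
    def copies = [
        ['./file1', './directory1'],
        ['./directory2/files', './directory2/subdirectory'],
    ]
    copies.each { src, dst ->
        if (fileExists(src)) {
            sh "cp ${src} ${dst}"
        } else {
            echo "Skipping: ${src} does not exist"
        }
    }
}
```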
This can be achieved using catchError:
pipeline {
    agent any
    stages {
        stage('1') {
            steps {
                script {
                    catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                        echo 'Copying files'
                        sh 'cp ./file1 ./directory1'
                    }
                }
            }
        }
        stage('2') {
            steps {
                script {
                    catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                        echo 'This stage should still work and run'
                        sh 'cp ./directory2/files ./directory2/subdirectory'
                    }
                }
            }
        }
        stage('3') {
            steps {
                sh 'exit 0'
            }
        }
    }
}
With the above pipeline script, all stages are executed. If the cp command fails in stage 1 or stage 2, that particular stage shows as failed, but all of the remaining stages still execute.
Modified Answer
The following pipeline script includes an sh ''' ''' block that does not need to be inside the catchError block.
Include inside catchError only those commands whose errors you want to catch.
pipeline {
    agent any
    stages {
        stage('1') {
            steps {
                sh """
                    echo 'Hello World!!!!!!!!'
                    curl https://www.google.com/
                """
                catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                    echo 'Copying files'
                    sh 'cp ./file1 ./directory1'
                }
            }
        }
        stage('2') {
            steps {
                catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                    echo 'This stage should still work and run'
                    sh 'cp ./directory2/files ./directory2/subdirectory'
                }
            }
        }
        stage('3') {
            steps {
                sh 'exit 0'
            }
        }
    }
}
You could just check whether the file exists before you try to copy it, using a conditional like this:
[ -f ./directory2/files ] && cp ./directory2/files ./directory2/subdirectory || echo "File does not exist"
Source and more info
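One caveat with the && / || one-liner: the echo also fires when cp itself fails (e.g. a permissions error), not only when the file is missing. An explicit if makes the intent unambiguous:

```shell
# Same check written out in full; the message only prints when the file is missing.
if [ -f ./directory2/files ]; then
    cp ./directory2/files ./directory2/subdirectory
else
    echo "File does not exist"
fi
```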
I need to implement this:
pipeline {
    agent none
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'python:2-alpine'
                }
            }
            steps {
                sh 'python -m py_compile sources/add2vals.py sources/calc.py'
            }
        }
        stage('Test') {
            agent {
                docker {
                    image 'qnib/pytest'
                }
            }
            steps {
                sh 'py.test --verbose --junit-xml test-reports/results.xml sources/test_calc.py'
            }
            post {
                always {
                    junit 'test-reports/results.xml'
                }
            }
        }
    }
}
on a Node.js Express project and run unit tests with Mocha and Chai.
This is my code:
pipeline {
    agent { docker { image 'node:6.3' } }
    stages {
        stage('build') {
            steps {
                sh 'npm --version'
            }
        }
    }
}
Can anyone tell me how I should do that? The example is in Python, so I have no idea what I need to do.
I would take a look at the resources on the Jenkins blog. What you are looking at is the Jenkinsfile, which sits in the root of your project directory.
https://jenkins.io/doc/tutorials/build-a-node-js-and-react-app-with-npm/
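As a rough sketch (untested; it assumes that npm test runs Mocha, and that a JUnit-style reporter such as mocha-junit-reporter is configured to write test-results.xml), the Python example maps to Node.js like this:

```groovy
pipeline {
    agent { docker { image 'node:6.3' } }
    stages {
        stage('build') {
            steps {
                sh 'npm install'
            }
        }
        stage('test') {
            steps {
                // assumes the "test" script in package.json invokes mocha
                sh 'npm test'
            }
            post {
                always {
                    // assumes a JUnit-style reporter writes this file
                    junit 'test-results.xml'
                }
            }
        }
    }
}
```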
I'm facing issues running jobs through a Jenkinsfile. Right now the job runs up to the build stage, and after that it fails at every stage. I've attached an image of the console output being received as well.
I've searched everywhere but didn't get any solution; I don't know where I'm making a mistake in the code.
[![enter image description here][1]][1]
pipeline {
    parameters {
        booleanParam(defaultValue: true, description: 'Execute Pipeline?', name: 'GO')
    }
    agent { label 'test' }
    stages {
        stage('Preconditions') {
            steps {
                script {
                    result = sh (script: "git log -1 | grep ' _*\\[ci skip\\].*'", returnStatus: true)
                    if (result == 0) {
                        echo "This build should be skipped. Aborting"
                        GO = "false"
                    }
                }
            }
        }
        stage('Build') {
            steps {
                script {
                    sh "pip install -r requirements.txt"
                    sh "mkdir -p ${out}/results"
                }
            }
        }
        stage('Smoke') {
            steps {
                script {
                    sh "robot -d results -i Smoke -v BROWSER:chrome test_suites"
                    currentBuild.result = 'SUCCESS'
                }
            }
        }
        stage('Sanity') {
            steps {
                script {
                    sh "robot -d results -i Sanity -v BROWSER:chrome test_suites"
                    currentBuild.result = 'SUCCESS'
                }
            }
        }
        stage('Process Results') {
            steps {
                script {
                    bat 'del "Results\\*.zip"'
                    zip zipFile: 'results/results.zip', archive: false, dir: 'results', glob: '*.html'
                    step([
                        $class               : 'RobotPublisher',
                        outputPath           : 'results',
                        outputFileName       : "output.xml",
                        reportFileName       : 'report.html',
                        logFileName          : 'log.html',
                        disableArchiveOutput : false,
                        passThreshold        : 95.0,
                        unstableThreshold    : 95.0,
                        otherFiles           : "**/*.png",
                    ])
                }
            }
        }
    }
    post {
        always {
            googlechatnotification url:
        }
    }
}
The requirements.txt file contains all the bindings, like:
selenium==3.141.0
virtualenv==16.5.0
robotframework==3.1.1
robotframework-pabot==0.53
robotframework-seleniumlibrary==3.3.1
robotframework-react==1.0.0a2
[![Console Output][2]][2]
[1]: https://i.stack.imgur.com/FPPPz.png
I suspect that the failing command is the creation of the results directory in the Build stage, because in a double-quoted string ${out} is interpolated by Groovy, not by the shell, and `out` is never defined:
stage('Build') {
    steps {
        script {
            sh "pip install -r requirements.txt"
            sh "mkdir -p ${out}/results"
        }
    }
}
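One possible fix, as a sketch (using the workspace as the output location is an assumption, not from the question):

```groovy
stage('Build') {
    steps {
        script {
            // hypothetical: define `out` in Groovy before it is interpolated
            def out = env.WORKSPACE
            sh "pip install -r requirements.txt"
            sh "mkdir -p ${out}/results"
        }
    }
}
```

Alternatively, use single quotes (sh 'mkdir -p "$out"/results') so the shell expands its own variable, and export `out` into the environment.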
I have a simple Node.js app set up with Jenkins; however, the test stage needs MongoDB to run, and my current Jenkinsfile doesn't start a Mongo container. How can I do this?
This is my current Jenkinsfile:
pipeline {
    agent {
        docker {
            image 'node:8-alpine'
            args '-p 3000:3000'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'npm install'
            }
        }
        stage('Test') {
            steps {
                sh 'npm test'
            }
        }
    }
}
I've seen some answers on SO say that docker-compose would be used in a case like this, but I figured maybe there's another way to just run a Mongo container with Docker before the test stage starts. Is this possible?