Node.js failed but bash doesn't stop even though I have set -e

I run a Node.js script that is triggered by a bash script, which is in turn run by a Jenkins pipeline.
The problem is that the pipeline doesn't stop when the error happens; it should never reach the echo "git add ." line.
I added set -e, but that doesn't seem to work in this case.
What can I do to stop the bash execution when an error occurs in npm/node?
In Jenkins:
stage('Deploy') {
sh './scripts/deploy.sh'
}
In deploy.sh:
#!/bin/bash
set -e
npm run prepare-version
echo "git add ."
In package.json:
"scripts": { "prepare-version": "node ./scripts/prepare.js" }
prepare.js:
(async() => {
throw new Error('bug!!!');
})();

Your Node.js file failed, but the bash script still executed successfully. Read the status code and propagate it explicitly (note that a bare exit would discard the code):
#!/bin/bash
set -e
npm run prepare-version
status=$?
[[ $status -ne 0 ]] && exit $status
echo 'git add .'
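
Note that set -e only helps if npm run actually exits non-zero. The likely root cause here is the unhandled promise rejection in prepare.js: on Node versions before 15, an unhandled rejection only prints a warning and the process still exits with status 0, so npm reports success and bash never sees a failure. A quick sketch to confirm which behaviour your Node version has:
node -e '(async () => { throw new Error("bug!!!"); })();'
# prints 0 on Node < 15 (warning only), 1 on Node >= 15 (fatal by default)
echo "node exited with status $?"
If you are on an older Node, attach a .catch handler in prepare.js that logs the error and calls process.exit(1), so the non-zero status propagates to npm and then to bash.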

Related

Cucumber tests fail but travis build still passes

I am using Travis for CI. For some reason, the builds pass even when some tests fail. See the full log here
https://travis-ci.org/msm1089/hobnob/jobs/534173396
The way I am running the tests is via a bash script, e2e.test.sh, that is run by yarn.
Searching for this specific issue has not turned up anything that helps. I believe it has something to do with exit codes: I need to somehow get the build to exit non-zero, but as you can see at the bottom of the log, yarn exits with 0.
e2e.test.sh
#!/usr/bin/env bash
RETRY_INTERVAL=${RETRY_INTERVAL:-0.2}
# Run our API server as a background process
if [[ "$OSTYPE" == "msys" ]]; then
if ! netstat -aon | grep "0.0.0.0:$SERVER_PORT" | grep "LISTENING"; then
pm2 start --no-autorestart --name test:serve "C:\Program Files\nodejs\node_modules\npm\bin\npm-cli.js" -- run test:serve
until netstat -aon | grep "0.0.0.0:$SERVER_PORT" | grep "LISTENING"; do
sleep $RETRY_INTERVAL
done
fi
else
if ! ss -lnt | grep -q :$SERVER_PORT; then
yarn run test:serve &
fi
until ss -lnt | grep -q :$SERVER_PORT; do
sleep $RETRY_INTERVAL
done
fi
npx cucumber-js spec/cucumber/features --require-module @babel/register --require spec/cucumber/steps
if [[ "$OSTYPE" == "msys" ]]; then
pm2 delete test:serve
fi
travis.yml
language: node_js
node_js:
- 'node'
- 'lts/*'
- '10'
- '10.15.3'
services:
- elasticsearch
before_install:
- curl -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.6.1.deb
- sudo dpkg -i --force-confnew elasticsearch-6.6.1.deb
- sudo service elasticsearch restart
before_script:
- sleep 10
env:
global:
- NODE_ENV=test
- SERVER_PROTOCOL=http
- SERVER_HOSTNAME=localhost
- SERVER_PORT=8888
- ELASTICSEARCH_PROTOCOL=http
- ELASTICSEARCH_HOSTNAME=localhost
- ELASTICSEARCH_PORT=9200
- ELASTICSEARCH_INDEX=test
package.json
...
"scripts": {
  "test": "yarn run test:unit && yarn run test:integration && yarn run test:e2e"
}
...
So, how can I ensure that the cucumber exit code is the one that is returned, so that the build fails as it should when the tests don't pass?
There are a few possible ways to solve this. Here are two of my favorite.
Option 1:
Add set -e at the top of your bash script, so that it exits on the first error, preserving the exit code and subsequently failing the Travis build if it's non-zero.
Option 2:
Capture whatever exit code you want, and exit with it wherever it makes sense.
# run whatever command here
exitcode=$?
[[ $exitcode == 0 ]] || exit $exitcode
As a side note, it seems like your bash script has too many responsibilities. I would consider separating them if possible; then you give Travis a list of commands to run, and possibly one or two before_script commands.
Something along these lines:
# .travis.yml
before_script:
- ./start_server.sh
script:
- npx cucumber-js spec/cucumber/features ...
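
The start_server.sh above is hypothetical; a minimal sketch of what it could contain, lifted from the Linux branch of the existing e2e.test.sh:
#!/usr/bin/env bash
# Hypothetical start_server.sh: start the API in the background and block
# until it is listening, so the cucumber step can run immediately afterwards.
RETRY_INTERVAL=${RETRY_INTERVAL:-0.2}
if ! ss -lnt | grep -q ":$SERVER_PORT"; then
  yarn run test:serve &
fi
until ss -lnt | grep -q ":$SERVER_PORT"; do
  sleep "$RETRY_INTERVAL"
done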

Make Nodejs script run in background in gitlab CI

Our dev project starts with the command npm run serve. Is it possible to run it in background mode? I tried using nohup and a & at the end of the line. It works properly in a shell, but when it is started by CI on GitLab, the pipeline state is always "running", because the npm output stays on screen permanently.
The clean way would be to run a container whose run command is npm run serve.
I'm not certain running a non-blocking command through your pipeline is the right way, but you could try using &:
npm run serve & will run the command in "detached" mode.
I've faced the same problem using nohup and &. It worked well in a shell, but not on GitLab CI; it looks like npm start was not detached.
What worked for me was to call npm start inside a bash script and run it in the before_script hook.
test:
stage: test
before_script:
- ./serverstart.sh
script:
- npm test
after_script:
- kill -9 $(ps aux | grep '\snode\s' | awk '{print $2}')
on the bash script serverstart.sh
#!/bin/bash
# start the server and send the console and error logs on nodeserver.log
npm start > nodeserver.log 2>&1 &
# keep waiting until the server is started
# (in my case wait for mongodb://localhost:27017/app-test to be logged)
while ! grep -q "mongodb://localhost:27017/app-test" nodeserver.log
do
sleep .1
done
echo -e "server has started\n"
exit 0
This allowed me to detach npm start and move on to the next command while keeping the npm start process alive.
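
As a side note, the kill -9 $(ps aux | grep ...) cleanup matches any node process, not just this one. A sketch of a PID-file variant (my own suggestion, not part of the original answer):
#!/bin/bash
# Start the server, record its PID, and kill exactly that process later.
npm start > nodeserver.log 2>&1 &
echo $! > nodeserver.pid
Then the after_script step becomes kill "$(cat nodeserver.pid)".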

How does the Shell executor run scripts?

We've recently moved to Gitlab and have started using pipelines. We've set up a build server (an Ubuntu 16.04 instance) and installed a runner that uses a Shell executor but I'm unsure of how it actually executes the scripts defined in the .gitlab-ci.yml file. Consider the following snippet of code:
script:
- sh authenticate.sh $DEPLOY_KEY
- cd MAIN && sh deploy.sh && cd ..
- sh deploy_service.sh MATCHMAKING
- sh deauthenticate.sh
I was under the impression that it will just pipe these commands to Bash, and hence I was expecting the default Bash behaviour. What happens, however, is that deploy.sh fails because of an ssh error; Bash then continues to execute deploy_service.sh (which is expected behaviour), but this fails with a can't open deploy_service.sh error and the job terminates without Bash executing the last statement.
From what I understand, Bash will only abort on error if you do a set -e first and hence I was expecting all the statements to be executed. I've tried adding the set -e as the first statement but this makes no difference whatsoever - it doesn't terminate on the first ssh error.
I've added the exact output from Gitlab below:
Without set -e
$ cd MAIN && sh deploy.sh && cd ..
deploy.sh: 72: deploy.sh: Bad substitution
Building JS bundles locally...
> better-npm-run build
running better-npm-run in x
Executing script: build
to be executed: node ./bin/build
-> building js bundle...
-> minifying js bundle...
Uploading JS bundles to server temp folder...
COMMENCING RESTART. 5,4,3,2,1...
ssh: Could not resolve hostname $: Name or service not known
$ sh deploy_service.sh MATCHMAKING
sh: 0: Can't open deploy_service.sh
ERROR: Job failed: exit status 1
With set -e
$ set -e
$ cd MAIN && sh deploy.sh && cd ..
deploy.sh: 72: deploy.sh: Bad substitution
Building JS bundles locally...
> better-npm-run build
running better-npm-run in x
Executing script: build
to be executed: node ./bin/build
-> building js bundle...
-> minifying js bundle...
Uploading JS bundles to server temp folder...
COMMENCING RESTART. 5,4,3,2,1...
ssh: Could not resolve hostname $: Name or service not known
$ sh deploy_service.sh MATCHMAKING
sh: 0: Can't open deploy_service.sh
ERROR: Job failed: exit status 1
Why is it, without set -e, terminating on error (also, why is it terminating on the second error only and not the ssh error)? Any insights would be greatly appreciated.
The GitLab script block is actually an array of shell scripts:
https://docs.gitlab.com/ee/ci/yaml/#script
A failure in any element of the array fails the whole array.
To work around this, put your script block in a script.sh file, like:
script:
- ./script.sh
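
For example, a sketch of script.sh built from the commands in the question (set -e added so the first failure stops the whole thing):
#!/bin/bash
set -e
./authenticate.sh "$DEPLOY_KEY"
(cd MAIN && ./deploy.sh)   # subshell, so the cd does not leak out
./deploy_service.sh MATCHMAKING
./deauthenticate.sh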
I don't think your sh deploy.sh is generating a non-zero exit code.
You are using set -e to tell the current process to exit if a command exits with a non-zero return code, but you are creating a sub-process to run the shell script.
Here's a simple example script that I've called deploy.sh:
#!/bin/bash

echo "First."

echox "Error"
echo "Second"
If I run the script, you can see how the error is not handled:
$ sh deploy.sh
First.
deploy.sh: line 5: echox: command not found
Second
If I run set -e first, you will see it has no effect.
$ set -e
$ sh deploy.sh
First.
deploy.sh: line 5: echox: command not found
Second
Now, I add -e to the /bin/bash shebang:
#!/bin/bash -e
echo "First."
echox "Error"
echo "Second"
When I run the script with sh, the -e still has no effect.
$ sh ./deploy.sh
First.
./deploy.sh: line 3: echox: command not found
Second
When this script is run directly using bash, the -e takes effect.
$ ./deploy.sh
First.
./deploy.sh: line 3: echox: command not found
To fix your issue I believe you need to:
Add -e to the script shebang line (#!/bin/bash -e)
Call the script direct from bash using ./deploy.sh and not run the script through sh.
Bear in mind that if deploy.sh does fail, the cd .. will not run (&& only runs the next command if the preceding one succeeded), which would leave you in the wrong directory to run deploy_service.sh. You would be better off with cd MAIN; sh deploy.sh; cd .., but I suggest replacing your call to deploy.sh with a simpler alternative:
script:
- sh authenticate.sh $DEPLOY_KEY
- (cd MAIN && sh deploy.sh)
- sh deploy_service.sh MATCHMAKING
- sh deauthenticate.sh
This is not wildly different, but will result in the cd MAIN && sh deploy.sh to be run in a sub-process (that's what the brackets do), which means that the current directory of the overall script is not affected. Think of it like "spawn a sub-process, and in the sub-process change directory and run that script", and when the sub-process finishes you end up where you started.
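A quick illustration of that subshell behaviour (the paths are hypothetical):
pwd                  # e.g. /builds/myproject
(cd MAIN && pwd)     # /builds/myproject/MAIN, but only inside the subshell
pwd                  # still /builds/myproject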
As other users have commented, you're actually running your scripts in sh, not bash, so all round this might be better:
script:
- ./authenticate.sh $DEPLOY_KEY
- (cd MAIN && ./deploy.sh)
- ./deploy_service.sh MATCHMAKING
- ./deauthenticate.sh

How to make jenkins move to the next stage if its "terminal" has been blocked?

I'm trying to run HTTP calls to test a live web API that's going to run on the Jenkins machine.
This is the pipeline script that's been used.
stage 'build'
node {
git url: 'https://github.com/juliomarcos/sample-http-test-ci/'
sh "npm install"
sh "npm start"
}
stage 'test'
node {
sh "npm test"
}
But Jenkins won't move to the test stage. How can I run npm test after the web app has fully started?
One approach is to start the web app with an & at the end so it will run in the background. i.e.
npm start &
You can try to redirect the output of npm start to a text file like this:
npm start > output.txt &
Then in the next step, loop until the "started" message is available, something like:
tail -f output.txt | while read LOGLINE
do
[[ "${LOGLINE}" == *"listening on port"* ]] && pkill -P $$ tail
done
Code not tested :)
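
If the tail/pkill combination proves fiddly, a polling loop is a simpler alternative; equally untested here, and it assumes the app really does log "listening on port" to output.txt:
npm start > output.txt 2>&1 &
# Poll the log until the startup message appears.
until grep -q "listening on port" output.txt 2>/dev/null; do
  sleep 1
done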

shell script waiting for process to finish which run in new terminal

I am trying to write a shell script for compiling RefactorErl. This is my code:
#!/bin/sh
osascript -e 'tell application "terminal"' -e 'do script "cd /Users/MacBookAir/Desktop/refactorerl-0.9.14.09 && sudo bin/referl -build tool && exit "' -e 'end tell'
The code is running, but my problem is that I want the script to wait until the build process finishes before continuing; Terminal is not waiting for the process to complete. I am using OS X Yosemite 10.10.5. Any ideas? Thanks!
Instead of letting osascript parse your && logic, maybe you can change it a bit, like this:
osascript -e 'tell application "terminal"' -e 'do script "cd /Users/MacBookAir/Desktop/refactorerl-0.9.14.09"' -e 'end tell' && sudo bin/referl -build tool
If the script
cd /Users/MacBookAir/Desktop/refactorerl-0.9.14.09
succeeds, the calling shell then runs
sudo bin/referl -build tool
and waits for it to finish.
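
If opening a separate Terminal window is not actually a requirement, the simplest way to make the script wait is to skip osascript entirely and run the build in the script's own shell (a sketch under that assumption):
#!/bin/sh
# Run the build directly; the script blocks until it finishes.
cd /Users/MacBookAir/Desktop/refactorerl-0.9.14.09 || exit 1
sudo bin/referl -build tool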
