Cucumber tests fail but travis build still passes - node.js

I am using Travis for CI. For some reason, the builds pass even when some tests fail. See the full log here
https://travis-ci.org/msm1089/hobnob/jobs/534173396
The way I am running the tests is via a bash script, e2e.test.sh, that is run by yarn.
Searching for this specific issue has not turned up anything that helps. It is something to do with exit codes, I believe. I think I need to somehow get the build to exit with a non-zero code, but as you can see at the bottom of the log, yarn exits with 0.
e2e.test.sh
#!/usr/bin/env bash
RETRY_INTERVAL=${RETRY_INTERVAL:-0.2}

# Run our API server as a background process
if [[ "$OSTYPE" == "msys" ]]; then
  if ! netstat -aon | grep "0.0.0.0:$SERVER_PORT" | grep "LISTENING"; then
    pm2 start --no-autorestart --name test:serve "C:\Program Files\nodejs\node_modules\npm\bin\npm-cli.js" -- run test:serve
    until netstat -aon | grep "0.0.0.0:$SERVER_PORT" | grep "LISTENING"; do
      sleep $RETRY_INTERVAL
    done
  fi
else
  if ! ss -lnt | grep -q :$SERVER_PORT; then
    yarn run test:serve &
  fi
  until ss -lnt | grep -q :$SERVER_PORT; do
    sleep $RETRY_INTERVAL
  done
fi

npx cucumber-js spec/cucumber/features --require-module @babel/register --require spec/cucumber/steps

if [[ "$OSTYPE" == "msys" ]]; then
  pm2 delete test:serve
fi
.travis.yml
language: node_js
node_js:
  - 'node'
  - 'lts/*'
  - '10'
  - '10.15.3'
services:
  - elasticsearch
before_install:
  - curl -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.6.1.deb
  - sudo dpkg -i --force-confnew elasticsearch-6.6.1.deb
  - sudo service elasticsearch restart
before_script:
  - sleep 10
env:
  global:
    - NODE_ENV=test
    - SERVER_PROTOCOL=http
    - SERVER_HOSTNAME=localhost
    - SERVER_PORT=8888
    - ELASTICSEARCH_PROTOCOL=http
    - ELASTICSEARCH_HOSTNAME=localhost
    - ELASTICSEARCH_PORT=9200
    - ELASTICSEARCH_INDEX=test
package.json
...
"scripts": {
  "test": "yarn run test:unit && yarn run test:integration && yarn run test:e2e"
}
...
So, how can I ensure that the cucumber exit code is the one that is returned, so that the build fails as it should when the tests don't pass?

There are a few possible ways to solve this. Here are two of my favorites.
Option 1:
Add set -e at the top of your bash script, so that it exits on the first error, preserving that command's exit code and, in turn, failing Travis when it is non-zero.
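A minimal sketch of how the top of e2e.test.sh might look with this option (the rest of the script stays as it is):
#!/usr/bin/env bash
# Exit on the first failing command, so the cucumber-js exit code
# propagates through yarn to Travis instead of being swallowed.
set -e
RETRY_INTERVAL=${RETRY_INTERVAL:-0.2}
...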
Option 2:
Capture whatever exit code you want, and exit with it wherever it makes sense.
# run whatever command here, e.g.:
npx cucumber-js spec/cucumber/features --require-module @babel/register --require spec/cucumber/steps
exitcode=$?
[[ $exitcode -eq 0 ]] || exit $exitcode
As a side note: it seems like your bash script has too many responsibilities. I would consider separating them where possible, so that you give Travis a list of commands to run, plus one or two before_script commands.
Something along these lines:
# .travis.yml
before_script:
  - ./start_server.sh
script:
  - npx cucumber-js spec/cucumber/features ...
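For completeness, a rough sketch of what a separated start_server.sh could contain, lifted from the non-Windows branch of the script above (start_server.sh is just an illustrative name):
#!/usr/bin/env bash
# Hypothetical start_server.sh: its only job is getting the API server up.
RETRY_INTERVAL=${RETRY_INTERVAL:-0.2}
if ! ss -lnt | grep -q ":$SERVER_PORT"; then
  yarn run test:serve &
fi
# Wait until the server is listening before handing control back to Travis.
until ss -lnt | grep -q ":$SERVER_PORT"; do
  sleep "$RETRY_INTERVAL"
done
That way the script phase only runs the tests, and the test runner's exit code is the one Travis sees.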

Related

Finish background process when next process is completed

Hi, all
I am trying to implement automated test running from a Makefile target. As my tests depend on a running Docker container, I need to check that the container is up and running during the whole test execution, and restart it if it's down. I am trying to do this with a bash script running in background mode.
At a glance, the code looks like this:
run-tests:
	./check_container.sh & \
	docker-compose run --rm tests; \
	# Need to finish check_container.sh right here, after tests execution
	RESULT=$$?; \
	docker-compose logs test-container; \
	docker-compose kill; \
	docker-compose rm -fv; \
	exit $$RESULT
Tests have varying execution times (from 20 min to 2 hrs), so I don't know in advance how long a run will take. So I try to poll within the script for longer than the longest test suite. The script looks like:
#!/bin/bash
time=0
while [ $time -le 5000 ]; do
  num=$(docker ps | grep selenium--standalone-chrome -c)
  if [[ "$num" -eq 0 ]]; then
    echo 'selenium--standalone-chrome container is down!'
    echo 'try to recreate'
    docker-compose up -d selenium
  elif [[ "$num" -eq 1 ]]; then
    echo 'selenium--standalone-chrome container is up and running'
  else
    docker ps | grep selenium--standalone-chrome
    echo 'more than one selenium--standalone-chrome container is up and running'
  fi
  time=$(($time + 1))
  sleep 30
done
So, I am looking for how to exit the script exactly when the test run is finished, that is, after the command docker-compose run --rm tests completes.
P.S. It is also OK if the background process is finished when the Makefile target finishes.
Docker (and Compose) can restart containers automatically when they exit. If your docker-compose.yml file has:
version: '3.8'
services:
  selenium:
    restart: unless-stopped
Then Docker will do everything your shell script does. If it also has
services:
  tests:
    depends_on:
      - selenium
then the docker-compose run tests line will also cause the selenium container to be started, and you don't need a script to start it at all.
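Putting the two fragments together, a minimal sketch of the relevant part of docker-compose.yml (the image and build lines are assumptions for illustration, not taken from the question):
version: '3.8'
services:
  selenium:
    image: selenium/standalone-chrome   # assumed image name
    restart: unless-stopped             # Compose restarts it whenever it exits
  tests:
    build: .                            # assumed: tests are built from a local Dockerfile
    depends_on:
      - selenium                        # docker-compose run --rm tests brings selenium up first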
When you launch a command in the background, the special parameter $! contains its process ID. You can save this in a variable and later kill(1) it.
In plain shell-script syntax, without Make-related escaping:
./check_container.sh &
CHECK_CONTAINER_PID=$!
docker-compose run --rm tests
RESULT=$?
kill "$CHECK_CONTAINER_PID"

Is it possible to suppress NPM's echo of the commands it is running?

I've got a bash script that starts up a server and then runs some functional tests. It's got to happen in one script, so I'm running the server in the background. This all happens via 2 npm commands: start:nolog and test:functional.
All good. But there's a lot of cruft in the output that I don't care about:
✗ ./functional-tests/runInPipeline.sh
(... "good" output here)
> @co/foo@2.2.10 pretest:functional /Users/jcol53/Documents/work/foo
> curl 'http://localhost:3000/foo' -s -f -o /dev/null || (echo 'Website must be running locally for functional tests.' && exit 1)
> @co/foo@2.2.10 test:functional /Users/jcol53/Documents/work/foo
> npm run --prefix functional-tests test:dev:chromeff
> @co/foo-functional-tests@1.0.0 test:dev:chromeff /Users/jcol53/Documents/work/foo/functional-tests
> testcafe chrome:headless,firefox:headless ./tests/**.test.js -r junit:reports/functional-test.junit.xml -r html:reports/functional-test.html --skip-js-errors
That's a lot of lines that I don't need there. Can I suppress the @co/foo-functional-tests etc. lines? They aren't telling me anything worthwhile...
npm run -s kills all output from the command, which is not what I'm looking for.
This is probably not possible but that's OK, I'm curious, maybe I missed something...

Make Nodejs script run in background in gitlab CI

Our dev project starts with the command npm run serve. Is it possible to run it in background mode? I tried using nohup and & at the end of the line. It works properly in a shell, but when it is started by CI on GitLab, the pipeline state is always "running" because the npm output keeps showing on screen.
The clean way would be to run a container whose run command is "npm run serve"
I'm not certain that running a non-blocking command through your pipeline is the right way, but you should try using &:
"npm run serve &" will run the command in "detached" mode.
I've faced the same problem using nohup and &. It was working well in a shell, but not on GitLab CI; it looked like npm start was not detached.
What worked for me is to call npm start inside a bash script and run it in the before_script hook.
test:
  stage: test
  before_script:
    - ./serverstart.sh
  script:
    - npm test
  after_script:
    - kill -9 $(ps aux | grep '\snode\s' | awk '{print $2}')
In the bash script serverstart.sh:
#!/bin/bash
# start the server and send the console and error logs to nodeserver.log
npm start > nodeserver.log 2>&1 &
# keep waiting until the server has started
# (in my case, wait for mongodb://localhost:27017/app-test to be logged)
while ! grep -q "mongodb://localhost:27017/app-test" nodeserver.log
do
  sleep .1
done
echo -e "server has started\n"
exit 0
This allowed me to detach npm start and move on to the next command while keeping the npm start process alive.
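If the server doesn't log a convenient line to grep for, the same wait loop can poll the port instead; a sketch, assuming the app listens on localhost:3000 (the actual port isn't given in the answer):
#!/bin/bash
npm start > nodeserver.log 2>&1 &
# keep polling until something accepts connections on the assumed port
while ! curl -s -o /dev/null http://localhost:3000
do
  sleep .1
done
echo "server has started"
exit 0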

Stopping a started background service (phantomjs) in gitlab-ci

I'm starting phantomjs with specific arguments as part of my job.
This is running on a custom gitlab/gitlab-ci server, I'm currently not using containers, I guess that would simplify that.
I'm starting phantomjs like this:
- "timeout 300 phantomjs --ssl-protocol=any --ignore-ssl-errors=true vendor/jcalderonzumba/gastonjs/src/Client/main.js 8510 1024 768 2>&1 >> /tmp/gastonjs.log &"
Then I'm running my behat tests, and then I'm stopping that process again:
- "pkill -f 'src/Client/main.js' || true"
The problem is that when a behat test fails, the pkill is never executed and the test run is stuck waiting on phantomjs to finish. I already added the timeout 300, but that means I'm still waiting 2 min or so after a failure, and once the tests get slow enough the timeout will kill phantomjs while they are still running.
I haven't found a way to run some kind of post-run/cleanup command that also runs in case of failures.
Is there a better way to do this? Can I start phantomjs in a way that gitlab-ci doesn't care that it is still running? nohup maybe?
TL;DR: spawn the process in the background with &, but then you have to make sure the process is killed in both successful and failed builds.
I use this (with comments):
'E2E tests':
  before_script:
    - yarn install --force >/dev/null
    # if there is already an instance running, kill it - this is OK in my case, as this is not run very often
    - /bin/bash -c '/usr/bin/killall -q lite-server; exit 0'
    - export DOCKERHOST=$(ifconfig | grep -E "([0-9]{1,3}\\.){3}[0-9]{1,3}" | grep -v 127.0.0.1 | awk '{ print $2 }' | cut -f2 -d ':' | head -n1)
    - export E2E_BASE_URL="http://$DOCKERHOST:8000/#."
    # start the lite-server in a new process
    - lite-server -c bs-config.js >/dev/null &
  script:
    # run the tests
    - node_modules/.bin/protractor ./protractor.conf.js --seleniumAddress="http://localhost:4444/wd/hub" --baseUrl="http://$DOCKERHOST:8000" --browser chrome
    # on a successful run - kill lite-server
    - killall lite-server >/dev/null
  after_script:
    # when a test fails - try to kill lite-server in the after_script. This looks rather complicated, but it makes sure your builds don't fail when the tests succeed and the lite-server is already killed. To have a successful build we ensure a non-error return code (exit 0)
    - /bin/bash -c '/usr/bin/killall -q lite-server; exit 0'
  stage: test
  dependencies:
    - Build
  tags:
    - selenium
https://gist.github.com/rufinus/9ee8f04fc1f9248eeb0c73ad5360a006#file-gitlab-ci-yml-L7
As hinted, basically my problem wasn't that I couldn't kill the process; it's that when my test script failed, execution stopped at that point, resulting in a deadlock.
I was already doing something quite similar to the example from @Rufinus, but it just didn't work for me. It could be down to a few different things, such as a different way of running the tests, or starting the server in before_script, which is not an option for me.
I did find a way to make it work for me, which was to prevent my test runner from stopping the execution of the subsequent tasks. I managed to do that with set +e and then storing the exit code (something I had tried before, but it didn't work).
This is the relevant part from my job:
# Set option to prevent gitlab from stopping if behat fails.
- set +e
- "phantomjs --ssl-protocol=any --ignore-ssl-errors=true vendor/jcalderonzumba/gastonjs/src/Client/main.js 8510 1024 768 2>&1 >> /dev/null &"
# Store the exit code.
- "./vendor/bin/behat -f progress --stop-on-failure; export TEST_BEHAT=${PIPESTATUS[0]}"
- "pkill -f 'src/Client/main.js' || true"
# Exit the build
- if [ $TEST_BEHAT -eq 0 ]; then exit 0; else exit 1; fi
Try the -9 signal:
- "pkill -9 -f 'src/Client/main.js' || true"
You can try other signals as well; you can find a list here

Travis-ci doesn't quit after running bash script over ssh to start activator

We are trying to have travis continually deploy to our own server when our build is successful.
env:
  global:
    - ACTIVATOR_VERSION=1.3.7
    - ACTIVATOR_ZIP_FILE=typesafe-activator-${ACTIVATOR_VERSION}-minimal.zip
    - ACTIVATOR_ZIP_URL=http://downloads.typesafe.com/typesafe-activator/${ACTIVATOR_VERSION}/${ACTIVATOR_ZIP_FILE}
    - ACTIVATOR_BIN=${TRAVIS_BUILD_DIR}/activator-${ACTIVATOR_VERSION}-minimal/activator
    - "DEPLOY_USERNAME=#######"
    - "DEPLOY_PASSWORD=########"
    - "DEPLOY_HOST=########"
language: java
jdk:
  - oraclejdk8
addons:
  ssh_known_hosts:
    - ########
  apt:
    packages:
      - sshpass
install:
  - wget $ACTIVATOR_ZIP_URL
  - unzip -q $ACTIVATOR_ZIP_FILE
script:
  - $ACTIVATOR_BIN test
after_success:
  - sshpass -p $DEPLOY_PASSWORD ssh $DEPLOY_USERNAME@$DEPLOY_HOST -o stricthostkeychecking=no 'bash deploy.sh'
After Travis finishes without errors, it runs a script over SSH on our server to pull from our Git repository, stop the running activator and start a new one.
The script:
#!/bin/bash
#Get the path of the local repository directory
set -o verbose
DIR="/home/ftpuser/eaglescience/"
TARGET="origin/develop"
SLEEP=1m
#echo "Go into directory " ${DIR}
cd ${DIR}
PID="`cat target/universal/stage/RUNNING_PID`"
#echo "Get the code from " ${TARGET}
git fetch --all
#echo "force checkout"
git checkout --force "${TARGET}"
#echo "Compiling activator"
activator clean stage
#echo "Running activator"
kill -15 ${PID}
target/universal/stage/bin/eaglescience -Dapplication.secret=############### &
#echo "Running..."
sleep ${SLEEP}
exit 0
The problem here is that Travis CI does not exit the bash script after it runs (even with the exit 0). This means that Travis CI will keep waiting for a response until it times out and fails our build.
The response we got after a while is the following:
[success] Total time: 33 s, completed Mar 9, 2016 11:19:20 AM
#echo "Running activator"
kill -15 ${PID}
target/universal/stage/bin/eaglescience -Dapplication.secret=########### &
#echo "Running..."
sleep ${SLEEP}
[warn] - application - system properties: application.secret is deprecated, use play.crypto.secret instead
[info] - play.api.Play - Application started (Prod)
[info] - play.core.server.NettyServer - Listening for HTTP on /0:0:0:0:0:0:0:0:9000
exit 0
No output has been received in the last 10 minutes, this potentially indicates a stalled build or something wrong with the build itself.
The build has been terminated
We have tried a lot of different things. We tried running the ssh bash command silently, but then Travis CI terminates the connection almost instantly and the command will not run. We also tried adding && exit 0, but then the server still keeps waiting for a response.
Try running the shell file with nohup, with its output redirected to /dev/null and backgrounded, e.g.: nohup filename.sh > /dev/null 2>&1 &
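Applied to the after_success step above, that could look something like this (a sketch; redirecting the output is what lets the ssh session close instead of hanging on the remote background process):
after_success:
  - sshpass -p $DEPLOY_PASSWORD ssh $DEPLOY_USERNAME@$DEPLOY_HOST -o stricthostkeychecking=no 'nohup bash deploy.sh > /dev/null 2>&1 &'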
