Make a Node.js script run in the background in GitLab CI

Our dev project starts with the command npm run serve. Is it possible to run it in background mode? I tried using nohup and & at the end of the line. It works properly in a shell, but when it is started by CI on GitLab, the pipeline state is always "running", because the npm output shows on screen permanently.

The clean way would be to run a container whose run command is "npm run serve".
I'm not certain running a non-blocking command through your pipeline is the right approach, but you could try appending "&" so that "npm run serve" runs in "detached" mode.

I've faced the same problem using nohup and &. It worked well in a shell, but not on GitLab CI; it looks like npm start was not detached.
What worked for me was to call npm start inside a bash script and run it in the before_script hook:
test:
  stage: test
  before_script:
    - ./serverstart.sh
  script:
    - npm test
  after_script:
    - kill -9 $(ps aux | grep '\snode\s' | awk '{print $2}')
and in the bash script serverstart.sh:
#!/bin/bash
# start the server and send the console and error logs on nodeserver.log
npm start > nodeserver.log 2>&1 &
# keep waiting until the server is started
# (in my case wait for mongodb://localhost:27017/app-test to be logged)
while ! grep -q "mongodb://localhost:27017/app-test" nodeserver.log
do
sleep .1
done
echo -e "server has started\n"
exit 0
This allowed me to detach npm start and move on to the next command while keeping the npm start process alive.

Related

How to start FastAPI, React and Node servers using a shell script file

I need to run many commands one by one to start my project. Instead of that, I tried to put the commands in a shell script file.
server.sh
#!/bin/bash
sudo systemctl start elasticsearch.service
sudo systemctl start kibana.service
cd fastapi
uvicorn main:app --reload --port 8000
cd ..
cd reactjs
npm i
npm start
cd ..
cd node
npm i
npm run dev
These are the commands I put in a .sh file. The problem is that after uvicorn main:app --reload --port 8000, the script fails to execute the rest of the commands.
How can I resolve this using a .sh file or a YAML file?
You must run the three main commands in your code in the background:
uvicorn main:app --reload --port 8000 &
npm start &
npm run dev &
The & after a command runs it in the background, so the script will not stop at the first command (uvicorn) and will continue with the rest of the code.
And because those commands generate output in the terminal (if you run them from one), the output can get mixed up, so I would recommend redirecting the output to a file for every command you run in the background.
Your code could be like this:
#!/bin/bash
sudo systemctl start elasticsearch.service
sudo systemctl start kibana.service
cd fastapi
uvicorn main:app --reload --port 8000 > uvicorn.log &
cd ../reactjs
npm i
npm start > npmstart.log &
cd ../node
npm i
npm run dev > npmdev.log &
To kill those processes you should use the kill command.
There are several options for killing; you can read about kill signals on Linux to understand how they work.
Or, if you want to use a GUI, the system monitor might work, but you must know which PID you want to kill.
When a process is started in the background, the shell saves its PID (process ID) in $!, so you can use the following statement to show the PID:
echo $!
And when you want to kill the process, use:
kill -9 the_pid_shown_with_echo
So your code could be like this (but it's not enough in this case):
#!/bin/bash
sudo systemctl start elasticsearch.service
sudo systemctl start kibana.service
cd fastapi
uvicorn main:app --reload --port 8000 > uvicorn.log &
echo "Uvicorn ID: $!" | tee uvicornpid.txt
cd ../reactjs
npm i
npm start > npmstart.log &
echo "npm start ID: $!" | tee npmstart.txt
cd ../node
npm i
npm run dev > npmdev.log &
echo "npm run dev ID: $!" | tee npmrundev.txt
The statements with the tee command, like echo "Uvicorn ID: $!" | tee uvicornpid.txt, show the text in the terminal and also redirect the output to a file, so you can check those files later for the PIDs.
But as I said, in this case this is not enough: node spawns several processes, and killing the PID you got with $! will kill that process (and maybe another one), but the process listening on the port will keep running. When you run the app again, it will crash because the port is in use (unless you run the app on another port, which I would not recommend).
You can use several commands to get the PID you should kill.
The first way is using commands like:
pgrep node
This will return the PIDs of all processes matching the word node.
ps -efl | grep -E "(node|PID)"
This will return output with several columns, where you can see the PID of every process matching the word node, along with other information that might be useful.
Other useful commands that might work better for you are these:
lsof -i :4000
This will return the processes running on port 4000 (you will get the PID, the name of the process and more information).
fuser 4000/tcp
This will return only 4000/tcp and the PID of the process running on that port.
So, once you get the PID with one of those methods, kill the process with the kill command as explained before.

Cucumber tests fail but travis build still passes

I am using Travis for CI. For some reason, the builds pass even when some tests fail. See the full log here
https://travis-ci.org/msm1089/hobnob/jobs/534173396
The way I am running the tests is via a bash script, e2e.test.sh, that is run by yarn.
Searching for this specific issue has not turned up anything that helps. I believe it has something to do with exit codes. I think I need to somehow get the build to exit with a non-zero code, but as you can see at the bottom of the log, yarn exits with 0.
e2e.test.sh
#!/usr/bin/env bash
RETRY_INTERVAL=${RETRY_INTERVAL:-0.2}
# Run our API server as a background process
if [[ "$OSTYPE" == "msys" ]]; then
  if ! netstat -aon | grep "0.0.0.0:$SERVER_PORT" | grep "LISTENING"; then
    pm2 start --no-autorestart --name test:serve "C:\Program Files\nodejs\node_modules\npm\bin\npm-cli.js" -- run test:serve
    until netstat -aon | grep "0.0.0.0:$SERVER_PORT" | grep "LISTENING"; do
      sleep $RETRY_INTERVAL
    done
  fi
else
  if ! ss -lnt | grep -q :$SERVER_PORT; then
    yarn run test:serve &
  fi
  until ss -lnt | grep -q :$SERVER_PORT; do
    sleep $RETRY_INTERVAL
  done
fi
npx cucumber-js spec/cucumber/features --require-module @babel/register --require spec/cucumber/steps
if [[ "$OSTYPE" == "msys" ]]; then
  pm2 delete test:serve
fi
travis.yml
language: node_js
node_js:
  - 'node'
  - 'lts/*'
  - '10'
  - '10.15.3'
services:
  - elasticsearch
before_install:
  - curl -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.6.1.deb
  - sudo dpkg -i --force-confnew elasticsearch-6.6.1.deb
  - sudo service elasticsearch restart
before_script:
  - sleep 10
env:
  global:
    - NODE_ENV=test
    - SERVER_PROTOCOL=http
    - SERVER_HOSTNAME=localhost
    - SERVER_PORT=8888
    - ELASTICSEARCH_PROTOCOL=http
    - ELASTICSEARCH_HOSTNAME=localhost
    - ELASTICSEARCH_PORT=9200
    - ELASTICSEARCH_INDEX=test
package.json
...
"scripts": {
  "test": "yarn run test:unit && yarn run test:integration && yarn run test:e2e"
}
...
So, how can I ensure that the cucumber exit code is the one that is returned, so that the build fails as it should when the tests don't pass?
There are a few possible ways to solve this. Here are two of my favorite.
Option 1:
Add set -e at the top of your bash script so that it exits on the first error, preserving the exit code and consequently failing Travis if it's non-zero.
Option 2:
Capture whatever exit code you want, and exit with it wherever it makes sense.
run whatever command here
exitcode=$?
[[ $exitcode == 0 ]] || exit $exitcode
As a side note, it seems like your bash script has too many responsibilities. I would consider separating them if possible, and then giving Travis a list of commands to run, possibly with one or two before_script commands.
Something along these lines:
# .travis.yml
before_script:
  - ./start_server.sh
script:
  - npx cucumber-js spec/cucumber/features ...

How to run another serve after node server has started using shell script?

I want to write a script which first runs my node server, and only after the node server has started runs another script. How can I implement this using a shell script?
Right now I have done this so far
echo "Going inside NodeServer folder";
cd ./../Server-Node
echo "Starting Node Server";
npm start
echo 'Going inside Project Folder';
cd ./../ionicApp
ionic serve
A simple hack is to use npm start & and add a sleep 15 on the next line (adjust according to the average time the start takes).
Note: to terminate the node process you might have to run a command to kill it (see: stop all instances of node.js server).
Otherwise you'll want to look at the approaches here: NPM run parallel task, but wait until resource is available to run second task
I found this out later; adding the modified script:
echo "Going inside Server-Node";
cd ./../Server-Node
echo "Starting Node Server";
npm start & echo OK
echo 'Going inside ionic-Project';
cd ./../learn-ionic
echo 'Starting ionic server';
ionic serve

Stopping a started background service (phantomjs) in gitlab-ci

I'm starting phantomjs with specific arguments as part of my job.
This is running on a custom GitLab/GitLab CI server. I'm currently not using containers; I guess that would simplify this.
I'm starting phantomjs like this:
- "timeout 300 phantomjs --ssl-protocol=any --ignore-ssl-errors=true vendor/jcalderonzumba/gastonjs/src/Client/main.js 8510 1024 768 2>&1 >> /tmp/gastonjs.log &"
Then I'm running my behat tests, and then I'm stopping that process again:
- "pkill -f 'src/Client/main.js' || true"
The problem is that when a behat test fails, the pkill is never executed and the test run is stuck waiting for phantomjs to finish. I already added the timeout 300, but that means after a failure I'm still waiting 2 min or so, and once the tests get slow enough it will eventually stop phantomjs while they are still running.
I haven't found a way to run some kind of post-run/cleanup command that also runs in case of failure.
Is there a better way to do this? Can I start phantomjs in a way that gitlab-ci doesn't care that it is still running? nohup maybe?
TL;DR: spawn the process in the background with &, but then you have to make sure the process is killed in both successful and failed builds.
I use this (with comments):
'E2E tests':
  before_script:
    - yarn install --force >/dev/null
    # if there is already an instance running, kill it - this is OK in my case, as this is not run very often
    - /bin/bash -c '/usr/bin/killall -q lite-server; exit 0'
    - export DOCKERHOST=$(ifconfig | grep -E "([0-9]{1,3}\\.){3}[0-9]{1,3}" | grep -v 127.0.0.1 | awk '{ print $2 }' | cut -f2 -d ':' | head -n1)
    - export E2E_BASE_URL="http://$DOCKERHOST:8000/#."
    # start the lite-server in a new process
    - lite-server -c bs-config.js >/dev/null &
  script:
    # run the tests
    - node_modules/.bin/protractor ./protractor.conf.js --seleniumAddress="http://localhost:4444/wd/hub" --baseUrl="http://$DOCKERHOST:8000" --browser chrome
    # on a successful run - kill lite-server
    - killall lite-server >/dev/null
  after_script:
    # when a test fails - try to kill lite-server in the after_script. This looks rather complicated, but it makes sure your build doesn't fail when the tests succeed and lite-server is already killed. To have a successful build we ensure a non-error return code (exit 0)
    - /bin/bash -c '/usr/bin/killall -q lite-server; exit 0'
  stage: test
  dependencies:
    - Build
  tags:
    - selenium
https://gist.github.com/rufinus/9ee8f04fc1f9248eeb0c73ad5360a006#file-gitlab-ci-yml-L7
As hinted, my problem wasn't that I couldn't kill the process; it's that my test script failing stopped execution at that point, resulting in a deadlock.
I was already doing something quite similar to the example from @Rufinus, but it just didn't work for me. There could be a few different reasons, like a different way of running the tests, or starting it in before_script, which is not an option for me.
I did find a way to make it work, which was to prevent my test runner from stopping the execution of further tasks. I managed that with set +e and then storing the exit code (something I had tried before without success).
This is the relevant part from my job:
# Set option to prevent gitlab from stopping if behat fails.
- set +e
- "phantomjs --ssl-protocol=any --ignore-ssl-errors=true vendor/jcalderonzumba/gastonjs/src/Client/main.js 8510 1024 768 2>&1 >> /dev/null &"
# Store the exit code.
- "./vendor/bin/behat -f progress --stop-on-failure; export TEST_BEHAT=${PIPESTATUS[0]}"
- "pkill -f 'src/Client/main.js' || true"
# Exit the build
- if [ $TEST_BEHAT -eq 0 ]; then exit 0; else exit 1; fi
Try the -9 signal:
- "pkill -9 -f 'src/Client/main.js' || true"
You can try other signals as well; you can find a list here.

How to make jenkins move to the next stage if its "terminal" has been blocked?

I'm trying to run HTTP calls to test a live web API that's going to run on the Jenkins machine.
This is the pipeline script that's been used.
stage 'build'
node {
  git url: 'https://github.com/juliomarcos/sample-http-test-ci/'
  sh "npm install"
  sh "npm start"
}
stage 'test'
node {
  sh "npm test"
}
But Jenkins won't move to the test stage. How can I run npm test after the web app has fully started?
One approach is to start the web app with an & at the end so it will run in the background. i.e.
npm start &
You can try to redirect the output of npm start to a text file like this:
npm start > output.txt &
Then in the next step, loop until the "started" message is available, something like:
tail -f output.txt | while read LOGLINE
do
  [[ "${LOGLINE}" == *"listening on port"* ]] && pkill -P $$ tail
done
Code not tested :)
