Is it possible to suppress NPM's echo of the commands it is running? - node.js

I've got a bash script that starts up a server and then runs some functional tests. It's got to happen in one script, so I'm running the server in the background. This all happens via 2 npm commands: start:nolog and test:functional.
All good. But there's a lot of cruft in the output that I don't care about:
✗ ./functional-tests/runInPipeline.sh
(... "good" output here)
> @co/foo@2.2.10 pretest:functional /Users/jcol53/Documents/work/foo
> curl 'http://localhost:3000/foo' -s -f -o /dev/null || (echo 'Website must be running locally for functional tests.' && exit 1)
> @co/foo@2.2.10 test:functional /Users/jcol53/Documents/work/foo
> npm run --prefix functional-tests test:dev:chromeff
> @co/foo-functional-tests@1.0.0 test:dev:chromeff /Users/jcol53/Documents/work/foo/functional-tests
> testcafe chrome:headless,firefox:headless ./tests/**.test.js -r junit:reports/functional-test.junit.xml -r html:reports/functional-test.html --skip-js-errors
That's a lot of lines that I don't need there. Can I suppress the @co/foo-functional-tests etc. lines? They aren't telling me anything worthwhile...
npm run -s kills all output from the command, which is not what I'm looking for.
This is probably not possible but that's OK, I'm curious, maybe I missed something...

Related

Start lots of background jobs but keep their logs separated

I have little experience with shell commands in Unix.
So far, I have checked Stack Overflow and know how to run simple shell scripts in order:
using echo:
echo $(sh dosomthing1.sh)
echo $(sh dosomthing2.sh)
directly using sh xxx and wait:
sh dosomthing1.sh
wait
sh dosomthing2.sh
using &&:
sh dosomthing1.sh && sh dosomthing2.sh
But none of these approaches solves my problem...
Here is my problem:
I have a basic shell script that does a Maven compile and then uses "nohup xxx &" to start a Java application in the background. The script is shown below:
# get the input env parameter
env=$1
# go to the application root directory
cd /applicationDir
# compile
mvn install -Dmaven.test.skip=true
# start with parameter env
nohup java -jar -Dspring.profiles.active=$env myApplication.jar &
# tail the log
tail -20f myApplication.log
I have too many different applications with the same startup scripts and it is hard to start them one by one. I need to start them with one command.
All the shell scripts are expected to be processed one by one in order. If there are any exceptions, skip and run the next one.
And when I tried to write a script like this:
sh start1.sh
wait
echo "application 1 was started"
sh start2.sh
wait
echo "application 2 was started"
...
sh startxxx.sh
wait
echo "application xxx was started"
Though all the child shell scripts run in order as I expected, and the output made it look like everything was functioning well, in fact only the last application ends up running; all the previous "nohup xxxx &" commands get shut down.
Also, I have tried writing it like this:
sh start1.sh &
sh start2.sh &
...
sh startxxx.sh &
Although the result was what I want (all the applications start up fine), the scripts run in parallel, so the interleaved console output is unreadable. It gets the right result, but not in a graceful way.
I have no idea how to solve this problem...
Please help me with this, thank you very much!
When you have a script with commands, you can do chmod +x start.sh. Now the script can be started with ./start.sh. You will avoid an additional sh process, and with ls -l you can see which scripts are executable.
In your scripts you have tail -f. This will be very confusing for a background process. Start the scripts in the background and view the logging from the console. I do hope that each script is using a different myApplication.jar and myApplication.log.
When the logging in the logfile is duplicated on stdout (your command-line window), you can throw that logging away.
./start1.sh > /dev/null 2>&1 &
./start2.sh > /dev/null 2>&1 &
./startxxx.sh > /dev/null 2>&1 &
The processes will be killed if you log out before the scripts have terminated. This can be avoided with nohup:
nohup ./start1.sh > /dev/null 2>&1 &
nohup ./start2.sh > /dev/null 2>&1 &
nohup ./startxxx.sh > /dev/null 2>&1 &
Edit:
The OP wants to start programs in a fixed order.
Starting the scripts exactly one after another, in order, should be possible by calling them in the right order (perhaps with an additional sleep 1).
When you need to wait until program 1 has finished some init work, you need to check for that. Use one script that calls all the scripts and add some control statements, like:
nohup java something &
while ! grep -q "Started" myApplication.log; do
    sleep 1
done
When the java program has an error, the while loop will wait forever, so replace it with some max retry count:
for ((retry=0; retry<100; retry++)); do
    grep -q "Started" myApplication.log && break
    sleep 1
done
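Putting it together, one parent script can start each application in order and only move on once it looks up. This is just a sketch; the script names, the per-app log names, and the "Started" marker are assumptions taken from the question:
#!/bin/bash
# launch each application in the background, silenced, then
# poll its log until it reports "Started" (or retries run out)
for app in start1 start2 startxxx; do
    nohup ./"$app.sh" > /dev/null 2>&1 &
    for ((retry=0; retry<100; retry++)); do
        grep -q "Started" "$app.log" && break
        sleep 1
    done
    echo "$app is up (or retries exhausted)"
done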
https://man7.org/linux/man-pages/man8/cron.8.html
This might help you. Cron is a task scheduler, which you can use to run programs in sequence. If the man page is difficult to understand, look for tutorials on it; I'm sure some would exist.
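For example, a single crontab entry can launch everything at boot, already detached from any terminal. This is a sketch; /path/to/start-all.sh is a hypothetical wrapper that calls the start scripts in order:
# crontab -e, then add:
@reboot /path/to/start-all.sh >> /var/log/start-all.log 2>&1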

Make Nodejs script run in background in gitlab CI

Our dev project starts with the command npm run serve. Is it possible to run it in background mode? I tried using nohup and putting & at the end of the line. It works properly in a shell, but when it is started by CI on GitLab, the pipeline state is always "running" because the npm output shows on screen permanently.
The clean way would be to run a container whose run command is "npm run serve".
I'm not certain running a non-blocking command through your pipeline is the right way, but you should try using "&":
"npm run serve &" will run the command in detached mode.
I've faced the same problem using nohup and &. It was working well in a shell, but not on GitLab CI; it looks like npm start was not detached.
What worked for me is to call npm start inside a bash script and run it in the before_script hook.
test:
  stage: test
  before_script:
    - ./serverstart.sh
  script:
    - npm test
  after_script:
    - kill -9 $(ps aux | grep '\snode\s' | awk '{print $2}')
In the bash script serverstart.sh:
#!/bin/bash
# start the server and send the console and error logs to nodeserver.log
npm start > nodeserver.log 2>&1 &
# keep waiting until the server is started
# (in my case, wait for mongodb://localhost:27017/app-test to be logged)
while ! grep -q "mongodb://localhost:27017/app-test" nodeserver.log
do
    sleep .1
done
echo -e "server has started\n"
exit 0
This allowed me to detach npm start and move on to the next command while keeping the npm start process alive.
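A variant of the wait loop polls the server's port instead of grepping the log. This is a sketch; the URL and port are assumptions, so adjust them to whatever your server exposes:
# keep waiting until the server answers over HTTP
until curl -sf http://localhost:3000/ > /dev/null; do
  sleep .1
done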

How do I always run a "clean up" command in bash propagating the previous error code?

I'm setting up tests for a Node.js project. The tests include interacting with static content (images) that is supposed to be served from a local http-server.
When the tests have completed - either successfully or failing - I want to end the server process and exit with a correct code. What I came up with in my npm scripts is the following:
"server": "http-server testdata -p 9876 -s",
"testcmd": "...",
"test": "npm run server & npm run testcmd && kill $(lsof -t -i:9876) || (kill $(lsof -t -i:9876) && exit 1)",
which "works", but has two problems:
it repeats code, as I do not know how to run the cleanup in every case instead of defining || and && branches
any non-zero exit code of testcmd will always be transformed into a 1 exit code - ideally I would like to propagate the exact exit code
I tried reading up on this and found people talking about traps, but could not get it to work.
What would be a good way to simplify this control flow scenario?
Your two problems are connected: it would be easier to achieve what you're after if your command were better factored.
Since you do not want to set up a trap (and for this task, I don't blame you), what you need to do is capture the exit status of testcmd so that you can reiterate it later. Having done that, you can run your cleanup unconditionally, and therefore without duplication. For example:
npm run server & (npm run testcmd; status=$?; kill $(lsof -t -i:9876); exit $status)
Perhaps the special $? parameter, which expands to the exit status of the most recently executed [partial] pipeline, is the piece you were missing.
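For completeness, the trap the question mentions would look roughly like this as a standalone script. It is only a sketch, relying on the fact that bash preserves the script's exit status across an EXIT trap as long as the trap itself does not call exit:
#!/usr/bin/env bash
npm run server &
# run the cleanup on any exit, successful or failing
trap 'kill $(lsof -t -i:9876)' EXIT
# the script exits with testcmd's status; the trap fires afterwards
npm run testcmd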

Stopping a started background service (phantomjs) in gitlab-ci

I'm starting phantomjs with specific arguments as part of my job.
This is running on a custom GitLab/GitLab CI server; I'm currently not using containers. I guess that would simplify things.
I'm starting phantomjs like this:
- "timeout 300 phantomjs --ssl-protocol=any --ignore-ssl-errors=true vendor/jcalderonzumba/gastonjs/src/Client/main.js 8510 1024 768 2>&1 >> /tmp/gastonjs.log &"
Then I'm running my behat tests, and then I'm stopping that process again:
- "pkill -f 'src/Client/main.js' || true"
The problem is that when a behat test fails, the pkill never executes and the test run is stuck waiting on phantomjs to finish. I already added the timeout 300, but that means I'm still waiting 2 min or so after a failure, and eventually it will stop phantomjs while tests are still running once they get slow enough.
I haven't found a way to run some kind of post-run/cleanup command that also runs in case of fails.
Is there a better way to do this? Can I start phantomjs in a way that gitlab-ci doesn't care that it is still running? nohup maybe?
TL;DR: spawn the process in the background with &, but then you have to make sure the process is killed in both successful and failed builds.
I use this (with comments):
'E2E tests':
  before_script:
    - yarn install --force >/dev/null
    # if there is already an instance running, kill it - this is OK in my case, as this is not run very often
    - /bin/bash -c '/usr/bin/killall -q lite-server; exit 0'
    - export DOCKERHOST=$(ifconfig | grep -E "([0-9]{1,3}\\.){3}[0-9]{1,3}" | grep -v 127.0.0.1 | awk '{ print $2 }' | cut -f2 -d ':' | head -n1)
    - export E2E_BASE_URL="http://$DOCKERHOST:8000/#."
    # start the lite-server in a new process
    - lite-server -c bs-config.js >/dev/null &
  script:
    # run the tests
    - node_modules/.bin/protractor ./protractor.conf.js --seleniumAddress="http://localhost:4444/wd/hub" --baseUrl="http://$DOCKERHOST:8000" --browser chrome
    # on a successful run, kill lite-server
    - killall lite-server >/dev/null
  after_script:
    # when a test fails, try to kill lite-server here. This looks complicated, but it makes sure your build doesn't fail when the tests succeed and lite-server is already killed. To have a successful build we ensure a non-error return code (exit 0)
    - /bin/bash -c '/usr/bin/killall -q lite-server; exit 0'
  stage: test
  dependencies:
    - Build
  tags:
    - selenium
https://gist.github.com/rufinus/9ee8f04fc1f9248eeb0c73ad5360a006#file-gitlab-ci-yml-L7
As hinted, my problem wasn't really that I couldn't kill the process; it's that when my test script failed, execution stopped at that point, resulting in a deadlock.
I was already doing something quite similar to the example from @Rufinus, but it just didn't work for me. There could be a few differences, like a different way of running the tests, or starting the server in before_script, which is not an option for me.
I did find a way to make it work for me, which was to prevent my test runner from stopping the execution of further tasks. I managed to do that with a "set +e" and then storing the exit code (something I had tried before but couldn't get to work).
This is the relevant part from my job:
# Set option to prevent gitlab from stopping if behat fails.
- set +e
- "phantomjs --ssl-protocol=any --ignore-ssl-errors=true vendor/jcalderonzumba/gastonjs/src/Client/main.js 8510 1024 768 2>&1 >> /dev/null &"
# Store the exit code.
- "./vendor/bin/behat -f progress --stop-on-failure; export TEST_BEHAT=${PIPESTATUS[0]}"
- "pkill -f 'src/Client/main.js' || true"
# Exit the build
- if [ $TEST_BEHAT -eq 0 ]; then exit 0; else exit 1; fi
Try the -9 signal:
- "pkill -9 -f 'src/Client/main.js' || true"
You can try other signals as well; you can find a list here.
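If the link is unavailable, the shell can list the available signal names directly:
$ kill -l    # prints every signal name/number pair, including 9 (SIGKILL)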

How to run Node.js as a background process and never die?

I connect to the linux server via putty SSH. I tried to run it as a background process like this:
$ node server.js &
However, after 2.5 hrs the terminal becomes inactive and the process dies. Is there any way I can keep the process alive even with the terminal disconnected?
Edit 1
Actually, I tried nohup, but as soon as I close the PuTTY SSH terminal or unplug my internet connection, the server process stops right away.
Is there anything I have to do in PuTTY?
Edit 2 (Feb 2012)
There is a node.js module, forever. It will run a node.js server as a daemon service.
nohup node server.js > /dev/null 2>&1 &
nohup means: do not terminate this process even when the stty is cut off.
> /dev/null means: stdout goes to /dev/null (a dummy device that does not record any output).
2>&1 means: stderr also goes to stdout (which is already redirected to /dev/null). You may replace &1 with a file path to keep a log of errors, e.g.: 2>/tmp/myLog
& at the end means: run this command as a background task.
Simple solution (if you are not interested in coming back to the process, just want it to keep running):
nohup node server.js &
There's also the jobs command to see an indexed list of those backgrounded processes. And you can kill a backgrounded process by running kill %1 or kill %2 with the number being the index of the process.
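A typical session looks like this (the PID and the jobs listing are illustrative):
$ nohup node server.js &
[1] 25132
$ jobs
[1]+  Running    nohup node server.js &
$ kill %1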
Powerful solution (allows you to reconnect to the process if it is interactive):
screen
You can then detach by pressing Ctrl+a followed by d, and attach back by running screen -r.
Also consider the newer alternative to screen, tmux.
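Its basic workflow mirrors screen's; these are standard tmux commands:
$ tmux new -s srv         # start a named session and run your command inside it
$ tmux detach             # or press Ctrl+b followed by d
$ tmux attach -t srv      # reattach later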
You really should try to use screen. It is a bit more complicated than just doing nohup long_running &, but once you understand screen you will never go back.
Start your screen session at first:
user#host:~$ screen
Run anything you want:
wget http://mirror.yandex.ru/centos/4.6/isos/i386/CentOS-4.6-i386-binDVD.iso
Press Ctrl+a and then d. Done. Your session keeps running in the background.
You can list all sessions with screen -ls, and attach to one with a command like screen -r 20673.pts-0.srv, where 20673.pts-0.srv is an entry from that list.
This is an old question, but it ranks high on Google. I almost can't believe the highest voted answers, because running a node.js process inside a screen session, with &, or even with nohup, are all just workarounds.
Especially the screen/tmux solution, which should really be considered an amateur solution. Screen and tmux are not meant to keep processes running, but to multiplex terminal sessions. It's fine when you are running a script on your server and want to disconnect, but for a node.js server you don't want your process attached to a terminal session. This is too fragile. To keep things running you need to daemonize the process!
There are plenty of good tools to do that.
PM2: http://pm2.keymetrics.io/
# basic usage
$ npm install pm2 -g
$ pm2 start server.js
# you can even define how many processes you want in cluster mode:
$ pm2 start server.js -i 4
# you can start various processes, with complex startup settings
# using an ecosystem.json file (with env variables, custom args, etc):
$ pm2 start ecosystem.json
One big advantage I see in favor of PM2 is that it can generate the system startup script to make the process persist between restarts:
$ pm2 startup [platform]
Where platform can be ubuntu|centos|redhat|gentoo|systemd|darwin|amazon.
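Once the startup script is installed, PM2's documented workflow is to save the currently running process list so it is resurrected at boot:
$ pm2 save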
forever.js: https://github.com/foreverjs/forever
# basic usage
$ npm install forever -g
$ forever start app.js
# you can run from a json configuration as well, for
# more complex environments or multi-apps
$ forever start development.json
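Day-to-day management uses the same CLI; these commands come from forever's documentation:
$ forever list             # show scripts currently managed by forever
$ forever stop app.js      # stop a running script
$ forever restart app.js   # restart it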
Init scripts:
I won't go into detail about how to write an init script, because I'm not an expert on the subject and it'd be too long for this answer, but basically they are simple shell scripts triggered by OS events. You can read more about this here.
Docker:
Just run your server in a Docker container with the -d option and, voilà, you have a daemonized node.js server!
Here is a sample Dockerfile (from node.js official guide):
FROM node:argon
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 8080
CMD [ "npm", "start" ]
Then build your image and run your container:
$ docker build -t <your username>/node-web-app .
$ docker run -p 49160:8080 -d <your username>/node-web-app
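Since the container is detached, you inspect it with the usual Docker commands:
$ docker ps                    # find the container id
$ docker logs <container id>   # print the app's output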
Always use the proper tool for the job. It'll save you a lot of headaches and extra hours!
Another solution: disown the job.
$ nohup node server.js &
[1] 1711
$ disown -h %1
nohup will allow the program to continue even after the terminal dies. I have actually had situations where nohup prevents the SSH session from terminating correctly, so you should redirect input as well:
$ nohup node server.js </dev/null &
Depending on how nohup is configured, you may also need to redirect standard output and standard error to files.
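For example (the log file names here are arbitrary):
$ nohup node server.js </dev/null >server.log 2>server.err &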
Nohup and screen offer great lightweight solutions for running Node.js in the background. The Node.js process manager (PM2) is a handy tool for deployment. Install it globally with npm:
npm install pm2 -g
to run a Node.js app as a daemon:
pm2 start app.js
You can optionally link it to Keymetrics.io, a monitoring SaaS made by Unitech.
$ node server.js & disown
It will remove the command from the active jobs list and keep it running in the background.
I have this function in my shell rc file, based on @Yoichi's answer:
nohup-template () {
    [[ "$1" = "" ]] && echo "Example usage:\nnohup-template urxvtd" && return 0
    nohup "$1" > /dev/null 2>&1 &
}
You can use it this way:
nohup-template "command you would execute here"
Have you read about the nohup command?
To run a command as a system service on Debian with sysv init:
Copy the skeleton script and adapt it for your needs; probably all you have to do is set some variables. Your script will inherit sensible defaults from /lib/init/init-d-script; if something does not fit your needs, override it in your script. If something goes wrong, you can see the details in the source of /lib/init/init-d-script. The mandatory vars are DAEMON and NAME. The script will use start-stop-daemon to run your command; in START_ARGS you can define additional start-stop-daemon parameters to use.
cp /etc/init.d/skeleton /etc/init.d/myservice
chmod +x /etc/init.d/myservice
nano /etc/init.d/myservice
/etc/init.d/myservice start
/etc/init.d/myservice stop
This is how I run some Python stuff for my Wikimedia wiki:
...
DESC="mediawiki articles converter"
DAEMON='/home/mss/pp/bin/nslave'
DAEMON_ARGS='--cachedir /home/mss/cache/'
NAME='nslave'
PIDFILE='/var/run/nslave.pid'
START_ARGS='--background --make-pidfile --remove-pidfile --chuid mss --chdir /home/mss/pp/bin'
export PATH="/home/mss/pp/bin:$PATH"
do_stop_cmd() {
    start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 \
        $STOP_ARGS \
        ${PIDFILE:+--pidfile ${PIDFILE}} --name $NAME
    RETVAL="$?"
    [ "$RETVAL" = 2 ] && return 2
    rm -f $PIDFILE
    return $RETVAL
}
Besides setting the vars, I had to override do_stop_cmd because Python substitutes the executable, so the service did not stop properly.
Apart from the cool solutions above, I'd also mention the supervisord and monit tools, which let you start a process, monitor its presence, and restart it if it dies. With monit you can also run active checks, like checking whether the process responds to an HTTP request.
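For illustration, here is a minimal supervisord program section; it is only a sketch, and every path and name in it is a placeholder:
; /etc/supervisor/conf.d/myapp.conf (hypothetical)
[program:myapp]
command=node /srv/myapp/server.js
autostart=true
autorestart=true
stdout_logfile=/var/log/myapp.out.log
stderr_logfile=/var/log/myapp.err.log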
For Ubuntu I use this:
(exec PROG_SH &> /dev/null &)
Try this for a simple solution:
cmd & exit
