On a GitLab CI job I am running a Node.js script in which I evaluate some conditions and may want to force the job to fail.
The script is written in ES6 and is run with yarn and babel-node: yarn babel-node ./script-in-es6.js.
When the condition fails I would like the job to fail. I have tried the following:
throw new Error('job failed')
process.exit(1)
require('shelljs').exit(1)
But none of these is enough to fail the job; it always succeeds. Is there a proper way to successfully fail a job in GitLab from Node.js?
If your script really does return exit code 1, try this:
script:
  - <run your script> || export RES="$?"
  - exit $RES
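Putting that together with the command from the question, a minimal sketch of the whole job (the job name is made up; RES is just a shell variable):

check-conditions:        # hypothetical job name
  script:
    - yarn babel-node ./script-in-es6.js || export RES="$?"
    - exit $RES

If the script exits 0, RES stays unset, so exit $RES becomes a bare exit that returns the status of the previous, successful command; passing runs are unaffected.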
I would like to add a job to a pipeline in GitLab, but only if a tool, e.g. Maven, exits with exit code 0.
For example, I would like to run the job for integration tests only if a given profile exists.
Currently I always run the job but skip the call to Maven if the profile does not exist. Unfortunately, my current approach still adds the job to the pipeline, and a viewer of the pipeline might think that the job has been executed.
integration-test-job:
  stage: build
  script:
    - |
      if mvn help:all-profiles | grep "Profile Id: IT" 2>&- 1>&-;
      then
        mvn -P IT clean install
      fi
Does someone have a better solution?
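One small mitigation, offered only as a sketch rather than a real answer: add an explicit else branch, so the job log at least states that the tests were skipped rather than appearing to have run them:

integration-test-job:
  stage: build
  script:
    - |
      if mvn help:all-profiles | grep "Profile Id: IT" > /dev/null 2>&1; then
        mvn -P IT clean install
      else
        # not from the thread - just makes the skip visible in the job log
        echo "Profile IT not found - skipping integration tests"
      fi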
I'm trying to run njsscan as a SAST check on my code in gitlab-ci, but the job always fails even though the scan itself reports no errors.
If I run the same command manually on my server, it completes without any problems.
Is this a bug in gitlab-ci, or is there something I can do? Thank you.
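GitLab marks a job failed on any non-zero exit code, and scanners commonly exit non-zero to flag warnings or findings even when the console output looks clean. If the immediate goal is to keep the scan from blocking the pipeline, a minimal sketch using the allow_failure flag covered further down (the job name is made up, and this assumes njsscan is installed on the job's image):

njsscan-job:             # hypothetical job name
  stage: test
  script:
    - njsscan .          # assumes njsscan is available in the image
  allow_failure: true    # a failing scan no longer blocks the pipeline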
I have the same issue using gitlab-runner 15.3.0 with the docker executor (docker version 20.10.17):
The job fails with RC=1 while running the before_script part.
Restarting the job (without any changes to the code or the pipeline definitions) succeeds in most cases.
We are using a dozen runners, but even when a job is restarted on the same runner, it succeeds although it had just failed there.
I'm running my e2e tests with Nightwatch.js. I want to run them from a bash script (with the end goal of running them in CI).
I am pretty new to bash, and here is what I have so far:
#!/bin/bash
# exit on errors
set -e
export NODE_ENV=development
export LIVERELOAD_DISABLED=YES
npm install
NODE_ENV=e2e grunt build
echo "...Starting Node App"
# start app in the background
NODE_ENV=e2e node server.js &
# save node app process id
NODE_PROC=$!
# wait a bit
sleep 10
echo "...Running Frontend Tests"
NODE_ENV=e2e npm run nightwatch
echo "...Tests Finished... Killing Node App"
kill -9 $NODE_PROC
echo "...Node App Killed"
The problem is that the script gets stuck after running all the tests (line: NODE_ENV=e2e npm run nightwatch).
The only output I'm getting is the logs and the usual test output. The script gets stuck regardless of whether the tests pass, fail, or some of each.
I've tried adding exit 0 at the end, which didn't work (makes sense, since execution never reaches that point).
Also, changing set -e to set -ex didn't change the output.
What am I missing here?
So I was looking in the wrong place; the script is completely fine. The issue was that I did not close the connection to the DB inside the tests.
Not sure why that caused the issue, but it fixed it.
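That makes sense: Node keeps a process alive as long as any handle, such as an open DB connection, is still open, so npm run nightwatch never exits. One more thing worth noting, as a sketch rather than part of the accepted fix: with set -e, the kill line is skipped whenever the tests fail, so the app keeps running. The capture-the-exit-code pattern from the other answers here makes the ending of the script more defensive:

# run the tests, but don't let set -e abort before cleanup
NODE_ENV=e2e npm run nightwatch || TEST_RC=$?
echo "...Tests Finished... Killing Node App"
kill -9 $NODE_PROC
echo "...Node App Killed"
# propagate the tests' exit code to CI (0 if TEST_RC was never set)
exit ${TEST_RC:-0}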
I have a job in my pipeline whose script has three very important steps:
mvn test to run JUnit tests against my code
junit2html to convert the XML result of the tests to an HTML format (the only way to see the results, as my pipelines aren't run through MRs) that is uploaded to GitLab as an artifact
docker rm to destroy a container created earlier in the pipeline
My problem is that when my tests fail, the script stops immediately at mvn test, so the junit2html step is never reached and the test results are never uploaded in the event of failure; docker rm is never executed either, so the container remains and breaks subsequent pipelines.
What I want is to keep the job going to the end even if the script fails at some point. The job should still count as failed in GitLab CI/CD, but its entire script should be executed. How can I configure this?
For each job that should let the pipeline continue even if it fails, you can add a flag to that job in your .gitlab-ci.yml file. For example:
...
Unit Tests:
  stage: tests
  only:
    - branches
  allow_failure: true
  script:
    - ...
It's the allow_failure: true flag that lets the pipeline continue even if that specific job fails. The GitLab CI documentation for allow_failure is here: https://docs.gitlab.com/ee/ci/yaml/#allow_failure
Update from comments:
If you need the script to keep going after a failure while being aware that something failed, this has worked well for me:
./script_that_fails.sh || FAILED=true
if [ "$FAILED" ]; then
  ./do_something.sh
fi
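Applied to the scenario from the question, a sketch might look like the following (the job name, report path, and container name are made up for illustration):

test-and-report:                       # hypothetical job name
  stage: test
  script:
    - mvn test || FAILED=true          # capture the failure, keep going
    - junit2html target/surefire-reports/TEST-results.xml report.html
    - docker rm -f my-container        # container name is illustrative
    - if [ "$FAILED" ]; then exit 1; fi
  artifacts:
    when: always                       # upload the report even on failure
    paths:
      - report.html

The artifacts: when: always setting uploads the report even when the job fails, and for cleanup alone GitLab also runs an after_script section after a failed script, which is another place the docker rm could go.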
GitLab CI seems to allow the build to succeed even though the script returns a non-zero exit code. I have the following minimal .gitlab-ci.yml:
# Run linter
lint:
  stage: build
  script:
    - exit 1
Producing the following result:
Running with gitlab-runner 11.1.0 (081978aa)
on gitlab-runner 72348d01
Using Shell executor...
Running on [hostname]
Fetching changes...
HEAD is now at 9f6f309 Still having problems with gitlab-runner
From https://[repo]
9f6f309..96fc77b dev -> origin/dev
Checking out 96fc77bb as dev...
Skipping Git submodules setup
$ exit 1
Job succeeded
Running on GitLab Community Edition 9.5.5 with gitlab-runner version 11.1.0. The closest post doesn't propose a resolution, nor does this issue. A related question shows this setup should fail.
What are the conditions for failing a job? Isn't a non-zero return code supposed to fail it?
The cause of the problem was that su was wrapped to call ksu, as the shared machines are authenticated using Kerberos. The wrapped ksu succeeds even when the script command fails, so the job is reported as succeeded. This affected gitlab-runner because the shell executor runs su to execute commands as the indicated user.
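A quick way to check for this kind of wrapper outside GitLab is to push a known exit code through su and see what comes back (the user name is illustrative):

# should print 7; a wrapper that swallows exit codes prints 0 instead
su - gitlab-runner -c 'exit 7'
echo "exit code: $?"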