I need to run a run_test.py script in CI:
import os
os.system("test1.py")
os.system("test2.py")
test1.py and test2.py are both unit-test scripts.
Even if there is, for example, an error in test1.py, run_test.py still finishes successfully and the CI pipeline passes, though I need it to fail.
Is there any way to catch the unit-test errors from test1.py so that the CI pipeline fails?
PS: running test1.py and test2.py independently in the CI file doesn't work; it only works via the run_test.py script.
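One approach (just a sketch, assuming test1.py and test2.py can be run with the same Python interpreter as run_test.py) is to collect the child exit codes and exit non-zero if any script failed, since the job result follows the exit code of run_test.py; os.system also returns a non-zero status when the command fails, so checking its return value would work as well:
# run_test.py - sketch: propagate child exit codes so the CI job fails
import subprocess
import sys

exit_code = 0
for script in ("test1.py", "test2.py"):
    result = subprocess.run([sys.executable, script])
    if result.returncode != 0:
        exit_code = 1  # remember the failure but still run the remaining script

sys.exit(exit_code)  # a non-zero exit makes the CI job fail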
I was trying to run the test from the GitLab CI/CD runner file, but it causes an issue when executing from GitLab.
I have successfully executed the test locally using the Karate options.
Working fine in a local run:
mvn test -Dkarate.env=stg "-DKarate.options=--tags #Ui" -Dtest.run.mode=localtest -Dtest.run.group=OKCUtest -Dtest=OKCUtest -Dtest.gitlabRunner=false -DbuildDirectory=stg-target/OKCUtest -Dtest.run.testSource=localtest
There are 5 test feature files that were executed using the #Api tag. I have now identified that one of them should be #Ui, so I changed the respective feature file, created a new pipeline OKCU-UI, and updated the command-line syntax to address the #Ui tests.
Can you try this command?
mvn test -Dkarate.options="--tags ~#Ui"
If that still doesn't work, try the same command with version 0.9.6.RC3.
The gitlab-runner command lets you "test" a GitLab job locally. However, the local run of a job seems to have the same problem as a job run in GitLab CI: the output is not immediate!
What I mean: Even if your code/test/whatever produces printed output, it is not shown immediately in your log or console.
Here is how you can reproduce this behavior (on Linux):
Create a new git repository
mkdir testrepo
cd testrepo
git init
Create file .gitlab-ci.yml with the following content
job_test:
  image: python:3.8-buster
  script:
    - python tester.py
Create a file tester.py with the following content:
import time

for index in range(10):
    print(f"{time.time()} test output")
    time.sleep(1)
Run this code locally
python tester.py
which produces the output
1648130393.143866 test output
1648130394.1441162 test output
1648130395.14529 test output
1648130396.1466148 test output
1648130397.147796 test output
1648130398.148115 test output
1648130399.148294 test output
1648130400.1494567 test output
1648130401.1506176 test output
1648130402.1508648 test output
with each line appearing on the console every second.
You commit the changes
git add tester.py
git add .gitlab-ci.yml
git commit -m "just a test"
You start the job within a gitlab runner
gitlab-runner exec docker job_test
....
1648130501.9057398 test output
1648130502.9068272 test output
1648130503.9079702 test output
1648130504.9090931 test output
1648130505.910158 test output
1648130506.9112566 test output
1648130507.9120533 test output
1648130508.9131665 test output
1648130509.9142723 test output
1648130510.9154003 test output
Job succeeded
Here you get essentially the same output, but you have to wait for 10 seconds and then you get the complete output at once!
What I want is to see the output as it happens. So like one line every second.
How can I do that for both, the local gitlab-runner and the gitlab CI?
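An assumption worth checking first: Python block-buffers stdout when it is not attached to a terminal (as is the case inside the Docker job), so a variant of tester.py that flushes each line explicitly rules out Python's own buffering as the source of the delay:
# tester.py - same loop, but flushing stdout after every line
import time

for index in range(10):
    print(f"{time.time()} test output", flush=True)
    time.sleep(1)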
In the source code, this is controlled mostly by the clientJobTrace's updateInterval and forceSendInterval properties.
These properties are not user-configurable. In order to change this functionality, you would have to patch the source code for the GitLab Runner and compile it yourself.
The parameters for the job trace are passed from the newJobTrace function and their defaults (where you would need to alter the source) are defined here.
Also note that the GitLab UI may not necessarily get the trace in real time, either. So, even if the runner has sent the trace to GitLab, the JavaScript responsible for updating the UI only polls for trace data every ~4 or 5 seconds.
You can poll the GitLab web interface for new log lines as fast as you like:
For a running job, use a URL like https://gitlab.example.sk/grpup/project/-/jobs/42006/trace. It will send you a JSON structure with lines of the log file, the offset, the size and so on. You can have a look at the documentation here: https://docs.gitlab.com/ee/api/jobs.html#get-a-log-file
Side note: you can use the undocumented "state" parameter from the response in a subsequent request to get only the new lines (if any). This is handy.
Though, this does not affect the latency with which new lines travel from the actual job (from the runner to the GitLab web/backend). See sytech's answer to this question.
This answer should help when a Redis cache and the incremental logging architecture are configured, and someone wants to get the logs of a currently running job in "realtime". Polling is still needed, though.
Some notes can be found also on forum: https://forum.gitlab.com/t/is-there-an-api-for-getting-live-log-from-running-job/73072
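For illustration, a rough polling loop against the documented Jobs API endpoint (not the undocumented web endpoint above) could look like this; the project ID, job ID and token are placeholders, and new lines still only appear as often as the runner sends them:
# Sketch: poll GET /projects/:id/jobs/:job_id/trace and print only
# the part of the log that has not been printed yet.
import time
import requests

GITLAB = "https://gitlab.example.sk"   # base URL from the example above
PROJECT_ID = 1234                      # placeholder project id
JOB_ID = 42006                         # job id from the example URL
HEADERS = {"PRIVATE-TOKEN": "<your-access-token>"}

printed = 0
while True:
    url = f"{GITLAB}/api/v4/projects/{PROJECT_ID}/jobs/{JOB_ID}/trace"
    response = requests.get(url, headers=HEADERS)
    response.raise_for_status()
    log = response.text
    print(log[printed:], end="", flush=True)
    printed = len(log)
    time.sleep(2)  # poll interval; tune as needed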
I'm having some real difficulty in running a pytest fixture within a conftest file only once when the tests are run in parallel via a shell script. Contents of the shell script are below:
#!/usr/bin/env bash
pytest -k 'mobile' --os iphone my_tests/ &
pytest -k 'mobile' --os ipad my_tests/
wait
The pytest fixture creates resources for the mobile tests to use before running the tests:
@pytest.fixture(scope='session', autouse=True)
def create_resources():
    # Do stuff to create the resources
    yield
    # Do stuff to remove the resources
When running each on its own it works perfectly: it creates the resources, runs the tests and finally removes the resources it created. When running in parallel (with the shell script), both processes try to run the create_resources fixture at the same time.
Does anyone know of a way I can run the create_resources fixture just once? If so, is it possible for the second device to wait until the resources are created before all devices run the tests?
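One pattern that can work here (a sketch under a few assumptions: both pytest runs happen on the same machine, share /tmp, and the third-party filelock package is installed) is to guard the fixture with a file lock plus a session counter, so the first session creates the resources and the last one to finish removes them:
# conftest.py - sketch: only the first parallel session creates the resources,
# only the last one to finish removes them. Paths and details are illustrative.
import os
import pytest
from filelock import FileLock  # third-party: pip install filelock

LOCK_FILE = "/tmp/mobile_resources.lock"
COUNTER_FILE = "/tmp/mobile_resources.count"

@pytest.fixture(scope='session', autouse=True)
def create_resources():
    with FileLock(LOCK_FILE):
        count = int(open(COUNTER_FILE).read()) if os.path.exists(COUNTER_FILE) else 0
        if count == 0:
            pass  # Do stuff to create the resources (first session only)
        with open(COUNTER_FILE, "w") as f:
            f.write(str(count + 1))
    yield
    with FileLock(LOCK_FILE):
        count = int(open(COUNTER_FILE).read()) - 1
        if count == 0:
            os.remove(COUNTER_FILE)
            # Do stuff to remove the resources (last session only)
        else:
            with open(COUNTER_FILE, "w") as f:
                f.write(str(count))
Because the creation happens while the lock is held, the second pytest process blocks at the start of its session until the resources exist; a stale counter file left over from an aborted run would need to be cleaned up manually.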
I have a job in my pipeline whose script has three very important steps:
mvn test to run JUnit tests against my code
junit2html to convert the XML result of the tests to an HTML format (the only possible way to see the results, as my pipelines aren't done through MRs) that is uploaded to GitLab as an artifact
docker rm to destroy a container created earlier in the pipeline
My problem is that when my tests fail, the script stops immediately at mvn test, so the junit2html step is never reached, meaning the test results are never uploaded in the event of failure, and docker rm is never executed either, so the container remains and messes up subsequent pipelines as a result.
What I want is to be able to keep a job going till the end even if the script fails at some point. Basically, the job should still count as failed in GitLab CI / CD, but its entire script should be executed. How can I configure this?
For each job that you need the pipeline to continue past even if it fails, you can add a flag to that job in your .gitlab-ci.yml file. For example:
...
Unit Tests:
  stage: tests
  only:
    - branches
  allow_failure: true
  script:
    - ...
It's the allow_failure: true flag that lets the pipeline continue even if that specific job fails. The GitLab CI documentation about allow_failure is here: https://docs.gitlab.com/ee/ci/yaml/#allow_failure
Update from comments:
If you need the script to keep going after a failure, and to be aware that something failed, this has worked well for me:
./script_that_fails.sh || FAILED=true
if [ $FAILED ]
then ./do_something.sh
fi
In a GitLab CI job I am running a Node.js script where I evaluate some conditions and where I might want to force the job to fail.
The script is written in ES6 and is run with yarn and babel-node: yarn babel-node ./script-in-es6.js.
When the condition fails I would like the job to fail. I have tried the following:
throw new Error('job failed')
process.exit(1)
require('shelljs').exit(1)
But none of these commands is enough to fail the job; it always succeeds. Is there a proper way to fail a GitLab job from Node.js?
If your script really does return exit code 1, try this:
script:
  - <run your script> || export RES="$?"
  - exit $RES