I have a job in my pipeline that has a script with three very important steps:
mvn test to run JUnit tests against my code
junit2html to convert the XML result of the tests to a HTML format (only possible way to see the results as my pipelines aren't done through MRs) that is uploaded to GitLab as an artifact
docker rm to destroy a container created earlier in the pipeline
My problem is that when my tests fail, the script stops immediately at mvn test, so the junit2html step is never reached and the test results are never uploaded in the event of failure. docker rm is never executed either, so the container remains and messes up subsequent pipelines.
What I want is to be able to keep a job going until the end even if the script fails at some point. Basically, the job should still count as failed in GitLab CI/CD, but its entire script should be executed. How can I configure this?
For each job where you need the pipeline to continue even if that job fails, you can add a flag to that job in your .gitlab-ci.yml file. For example:
...
Unit Tests:
  stage: tests
  only:
    - branches
  allow_failure: true
  script:
    - ...
It's that allow_failure: true flag that will let the pipeline continue even if that specific job fails. The GitLab CI documentation about allow_failure is here: https://docs.gitlab.com/ee/ci/yaml/#allow_failure
Update from comments:
If you need the step to keep going after a failure, and be aware that something failed, this has worked well for me:
./script_that_fails.sh || FAILED=true
if [ "$FAILED" ]; then
  ./do_something.sh
fi
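Applied to the original question, a minimal sketch (the junit2html invocation, report paths and container name are placeholders, not taken from the question): capture the mvn exit code, always run the conversion and the cleanup, then re-exit with the saved code so the job is still marked as failed. With artifacts:when: always the report is uploaded even on failure:
unit_tests:
  stage: tests
  script:
    # Remember the mvn exit code instead of letting the script abort here.
    - EXIT_CODE=0
    - mvn test || EXIT_CODE=$?
    # Convert the JUnit XML to HTML even if the tests failed (paths are placeholders).
    - junit2html target/surefire-reports/results.xml report.html || true
    # Remove the container created earlier in the pipeline (name is a placeholder).
    - docker rm -f my_test_container || true
    # Re-raise the saved exit code so the job is still marked as failed.
    - exit $EXIT_CODE
  artifacts:
    when: always
    paths:
      - report.html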
Related
I would like to add a job to a pipeline in Gitlab, but only if a tool, e.g. Maven, exits with exit code 0.
For example, I would like to run the job for integration tests only if a given profile exists.
Currently I always run the job, but skip the call to Maven if the profile does not exist. Unfortunately, my current approach adds the job to the pipeline, and a viewer of the pipeline might think that the job has been executed.
integration-test-job:
  stage: build
  script:
    - |
      if mvn help:all-profiles | grep -q "Profile Id: IT"; then
        mvn -P IT clean install
      fi
Does someone have a better solution?
The command gitlab-runner lets you "test" a gitlab job locally. However, the local run of a job seems to have the same problem as a gitlab job run in gitlab CI: The output is not immediate!
What I mean: Even if your code/test/whatever produces printed output, it is not shown immediately in your log or console.
Here is how you can reproduce this behavior (on Linux):
Create a new git repository
mkdir testrepo
cd testrepo
git init
Create file .gitlab-ci.yml with the following content
job_test:
  image: python:3.8-buster
  script:
    - python tester.py
Create a file tester.py with the following content:
import time

for index in range(10):
    print(f"{time.time()} test output")
    time.sleep(1)
Run this code locally
python tester.py
which produces the output
1648130393.143866 test output
1648130394.1441162 test output
1648130395.14529 test output
1648130396.1466148 test output
1648130397.147796 test output
1648130398.148115 test output
1648130399.148294 test output
1648130400.1494567 test output
1648130401.1506176 test output
1648130402.1508648 test output
with each line appearing on the console every second.
Commit the changes
git add tester.py
git add .gitlab-ci.yml
git commit -m "just a test"
Start the job within a gitlab runner
gitlab-runner exec docker job_test
....
1648130501.9057398 test output
1648130502.9068272 test output
1648130503.9079702 test output
1648130504.9090931 test output
1648130505.910158 test output
1648130506.9112566 test output
1648130507.9120533 test output
1648130508.9131665 test output
1648130509.9142723 test output
1648130510.9154003 test output
Job succeeded
Here you get essentially the same output, but you have to wait for 10 seconds and then you get the complete output at once!
What I want is to see the output as it happens. So like one line every second.
How can I do that for both the local gitlab-runner and gitlab CI?
In the source code, this is controlled mostly by the clientJobTrace's updateInterval and forceSendInterval properties.
These properties are not user-configurable. In order to change this functionality, you would have to patch the source code for the GitLab Runner and compile it yourself.
The parameters for the job trace are passed from the newJobTrace function and their defaults (where you would need to alter the source) are defined here.
Also note that the UI for GitLab may not necessarily get the trace in realtime, either. So, even if the runner has sent the trace to GitLab, the javascript responsible for updating the UI only polls for trace data every ~4 or 5 seconds.
You can poll gitlab web for new log lines as fast as you can:
For a running job, use a URL like https://gitlab.example.sk/grpup/project/-/jobs/42006/trace. It sends back a JSON structure with the lines of the log file, the offset, the size and so on. You can have a look at the documentation here: https://docs.gitlab.com/ee/api/jobs.html#get-a-log-file
Side note: you can use the undocumented "state" parameter from the response in a subsequent request to get only new lines (if any). This is handy.
However, this does not affect the latency with which new lines from the actual job arrive from the runner at the gitlab web/backend. See sytech's answer to this question for that.
This answer should help when a redis cache and the incremental logging architecture are configured, and someone wants to get logs from a currently running job in "realtime". Polling is still needed, though.
Some notes can also be found on the forum: https://forum.gitlab.com/t/is-there-an-api-for-getting-live-log-from-running-job/73072
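A minimal polling sketch against the documented job log endpoint (the host, project ID, job ID and token are placeholders; it re-downloads the whole trace on each iteration and prints only the part not seen yet):
#!/bin/sh
# Placeholders: GITLAB_TOKEN, PROJECT_ID and JOB_ID must be set; the host is an example.
OFFSET=0
while true; do
  curl --silent --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
    "https://gitlab.example.sk/api/v4/projects/$PROJECT_ID/jobs/$JOB_ID/trace" > trace.txt
  SIZE=$(wc -c < trace.txt | tr -d ' ')
  if [ "$SIZE" -gt "$OFFSET" ]; then
    tail -c +"$((OFFSET + 1))" trace.txt   # print only the newly appended bytes
    OFFSET=$SIZE
  fi
  sleep 1
done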
I have a gitlab pipeline with 3 jobs in the same stage that should run in parallel. After all jobs are completed, I need to know the following information for each job:
Job Name, Status (pass/fail), started at, finished at
I am using the below API call in the after_script in .gitlab-ci.yml:
curl "https://gitlab.com/api/v4/job?job_token=$CI_JOB_TOKEN"
But it always gives me the status as 'running'. How can I get the correct status, whether the job passed or failed?
You don't need to use the API in that case.
In the after_script section, you can use the CI_JOB_STATUS environment variable (available from GitLab Runner 13.5). From the documentation:
The status of the job as each runner stage is executed. Use with after_script. Can be success, failed, or canceled.
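For example, a minimal sketch (the script command and the messages are placeholders):
test-job:
  stage: test
  script:
    - ./run_tests.sh
  after_script:
    # CI_JOB_STATUS reflects the result of the script section at this point.
    - echo "Job finished with status $CI_JOB_STATUS"
    - if [ "$CI_JOB_STATUS" = "failed" ]; then echo "Job failed, collect diagnostics here"; fi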
I've got "running" from CI_JOB_STATUS
To fix that, set this environment variable for the job or globally: FF_USE_LEGACY_KUBERNETES_EXECUTION_STRATEGY: 1
More details can be found here: https://gitlab.com/gitlab-org/gitlab-runner/-/issues/27693
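A sketch of setting that flag globally in .gitlab-ci.yml (it can also go in a single job's variables section):
variables:
  # Runner feature flag mentioned in the linked issue; works around CI_JOB_STATUS
  # reporting "running" in after_script on the Kubernetes executor.
  FF_USE_LEGACY_KUBERNETES_EXECUTION_STRATEGY: 1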
I have a GitLab CI/CD job doing some stuff.
I want some executed commands to be able to fail and result in a warning for this job, but I also want other commands to result in an error in the pipeline if they fail.
I have set in the .yaml file allow_failure: true, which will always result in a warning for this job regardless of the error.
Can I tell a GitLab job to output an error for a specific exit code and a warning for another?
With gitlab 13.9, allow_failure:exit_codes was introduced. With that, you can now allow failure for certain exit codes and fail the job for all other ones.
allow_failure:
  exit_codes:
    - 137
    - 255
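In a full job, a minimal sketch (the script is a placeholder) where exit code 137 only marks the job as an allowed failure, while any other non-zero exit code fails it:
optional-checks:
  script:
    # Placeholder command; suppose it exits with 137 in the "warning only" case.
    - ./check_optional_things.sh
  allow_failure:
    exit_codes:
      - 137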
Having a job like the following one in gitlab-ci:
static_test_service:
  stage: test code
  script:
    - docker run --rm -v $(pwd):/data -w /data dparra0007/sonar-scanner:20171010-1 sonar-scanner
      -Dsonar.projectKey=$CI_PROJECT_NAMESPACE:$CI_PROJECT_NAME
      -Dsonar.projectName=$CI_PROJECT_NAME
      -Dsonar.branch=$CI_COMMIT_REF_NAME
      -Dsonar.projectVersion=$CI_JOB_ID
      -Dsonar.sources=./greetingapi/src
      -Dsonar.java.binaries=./greetingapi/target
      -Dsonar.gitlab.project_id=$CI_PROJECT_ID
      -Dsonar.gitlab.commit_sha=$CI_COMMIT_SHA
      -Dsonar.gitlab.ref_name=$CI_COMMIT_REF_NAME
I would need to fail the gitlab job when the sonarqube analysis fails. But in that case, the analysis error is reported, yet no failure status is sent to the job in Gitlab CI, and the step always finishes with success.
It seems that there is no way to raise any event from "docker run" to be handled by the gitlab job.
Any idea on how to force to fail the job if the sonarqube analysis fails?
Thanks,
To break the CI build for a failed Quality Gate, you have to write a script based on the following steps (a sketch follows the list):
1. Search /report-task.txt for the values of the CE Task URL (ceTaskUrl) and CE Task Id (ceTaskId).
2. Call /api/ce/task?id=XXX, where XXX is the CE Task Id retrieved in step 1. Ex: https://yourSonarURL/api/ce/task?id=yourCeTaskId
3. Wait for some time, until the status returned by step 2 is SUCCESS, CANCELED or FAILED.
4. If it is FAILED, break the build (here a failure means the sonar report could not be generated).
5. If successful, take the analysisId from the JSON returned by /api/ce/task?id=XXX (step 2) and immediately call /api/qualitygates/project_status?analysisId=YYY to check the status of the quality gate. Ex: https://yourSonarURL/api/qualitygates/project_status?analysisId=yourAnalysisId
6. Step 5 gives the status of the critical, major and minor error threshold limits.
7. Based on those limits, break the build.
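A minimal shell sketch of these steps (the report-task.txt path, server URL and token are placeholders; JSON parsing is crude and error handling is omitted):
#!/bin/sh
# Placeholders: adjust the report path, SonarQube URL and token.
REPORT=".scannerwork/report-task.txt"   # assumed default location of report-task.txt
SONAR_URL="https://yourSonarURL"
SONAR_TOKEN="your-sonar-token"

# Step 1: read the CE task id the scanner wrote into report-task.txt.
CE_TASK_ID=$(grep '^ceTaskId=' "$REPORT" | cut -d= -f2)

# Steps 2-3: poll /api/ce/task until the background task reaches a final status.
while true; do
  TASK=$(curl --silent -u "$SONAR_TOKEN:" "$SONAR_URL/api/ce/task?id=$CE_TASK_ID")
  STATUS=$(echo "$TASK" | grep -o '"status":"[A-Z_]*"' | head -n 1 | cut -d'"' -f4)
  case "$STATUS" in
    SUCCESS|CANCELED|FAILED) break ;;
  esac
  sleep 5
done

# Step 4: break the build if the report could not be processed.
[ "$STATUS" = "SUCCESS" ] || exit 1

# Steps 5-7: look up the quality gate for this analysis and fail on anything but OK.
ANALYSIS_ID=$(echo "$TASK" | grep -o '"analysisId":"[^"]*"' | cut -d'"' -f4)
GATE=$(curl --silent -u "$SONAR_TOKEN:" \
  "$SONAR_URL/api/qualitygates/project_status?analysisId=$ANALYSIS_ID" \
  | grep -o '"status":"[A-Z]*"' | head -n 1 | cut -d'"' -f4)
[ "$GATE" = "OK" ] || exit 1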
I faced this problem with GitLab and Sonar, where Sonar was failing the quality analysis but the GitLab job was still passing with
INFO: ANALYSIS SUCCESSFUL, you can find the results at:
The problem is the below missing config in sonar.properties:
sonar.qualitygate.wait=true
sonar.qualitygate.timeout=1800
So basically, the Sonar scan takes time to do the analysis, and by default the scanner won't wait for the analysis to complete and may return the default SUCCESSFUL ANALYSIS result to GitLab.
With the mentioned configuration, we explicitly ask the scanner to wait for the quality gate to finish, and set a timeout as well (in case the analysis takes a long time to finish).
Now we see the GitLab job fail with the below:
ERROR: QUALITY GATE STATUS: FAILED - View details
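Alternatively, a sketch of passing the same two properties directly to the scanner invocation from the question (abridged; this assumes a sonar-scanner and SonarQube version recent enough to support sonar.qualitygate.wait):
static_test_service:
  stage: test code
  script:
    # Same docker run as in the question (some -D flags omitted here); the last two
    # properties make the scanner wait for the quality gate result and exit non-zero,
    # failing the GitLab job, when the gate is red.
    - docker run --rm -v $(pwd):/data -w /data dparra0007/sonar-scanner:20171010-1 sonar-scanner
      -Dsonar.projectKey=$CI_PROJECT_NAMESPACE:$CI_PROJECT_NAME
      -Dsonar.sources=./greetingapi/src
      -Dsonar.java.binaries=./greetingapi/target
      -Dsonar.qualitygate.wait=true
      -Dsonar.qualitygate.timeout=1800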