The gitlab-runner command lets you "test" a GitLab job locally. However, the local run of a job seems to have the same problem as a job run in GitLab CI: the output is not immediate!
What I mean is: even if your code/test/whatever produces printed output, it is not shown immediately in your log or console.
Here is how you can reproduce this behavior (on Linux):
Create a new git repository
mkdir testrepo
cd testrepo
git init
Create file .gitlab-ci.yml with the following content
job_test:
  image: python:3.8-buster
  script:
    - python tester.py
Create a file tester.py with the following content:
import time

for index in range(10):
    print(f"{time.time()} test output")
    time.sleep(1)
Run this code locally
python tester.py
which produces the output
1648130393.143866 test output
1648130394.1441162 test output
1648130395.14529 test output
1648130396.1466148 test output
1648130397.147796 test output
1648130398.148115 test output
1648130399.148294 test output
1648130400.1494567 test output
1648130401.1506176 test output
1648130402.1508648 test output
with each line appearing on the console every second.
You commit the changes
git add tester.py
git add .gitlab-ci.yml
git commit -m "just a test"
You start the job within a gitlab runner
gitlab-runner exec docker job_test
....
1648130501.9057398 test output
1648130502.9068272 test output
1648130503.9079702 test output
1648130504.9090931 test output
1648130505.910158 test output
1648130506.9112566 test output
1648130507.9120533 test output
1648130508.9131665 test output
1648130509.9142723 test output
1648130510.9154003 test output
Job succeeded
Here you get essentially the same output, but you have to wait 10 seconds and then the complete output appears at once!
What I want is to see the output as it happens, i.e. one line every second.
How can I achieve that for both the local gitlab-runner and GitLab CI?
In the source code, this is controlled mostly by the clientJobTrace's updateInterval and forceSendInterval properties.
These properties are not user-configurable. In order to change this functionality, you would have to patch the source code for the GitLab Runner and compile it yourself.
The parameters for the job trace are passed from the newJobTrace function and their defaults (where you would need to alter the source) are defined here.
Also note that the UI for GitLab may not necessarily get the trace in realtime, either. So, even if the runner has sent the trace to GitLab, the javascript responsible for updating the UI only polls for trace data every ~4 or 5 seconds.
You can poll the GitLab web interface for new log lines as often as you like:
For a running job, use a URL like https://gitlab.example.sk/group/project/-/jobs/42006/trace. It returns a JSON structure with the log lines, offset, size and so on. You can have a look at the documentation here: https://docs.gitlab.com/ee/api/jobs.html#get-a-log-file
Side note: you can use the undocumented "state" parameter from the response in a subsequent request to get only new lines (if any). This is handy.
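For illustration, a rough polling loop against that endpoint might look like this. This is only a sketch: the host and job id are the placeholders from the example above, the session cookie is whatever authentication your instance requires, jq is assumed to be installed, and the "state" field name is taken from the response described above and may differ on your version:
#!/bin/bash
# Poll the trace endpoint of a running job and print whatever it returns.
TRACE_URL="https://gitlab.example.sk/group/project/-/jobs/42006/trace"
AUTH_COOKIE="_gitlab_session=<your_session_cookie>"   # placeholder; use the auth your instance accepts
STATE=""

while true; do
  # The undocumented "state" parameter (taken from the previous response) asks only for new lines.
  RESPONSE=$(curl -s --cookie "$AUTH_COOKIE" "$TRACE_URL?state=$STATE")
  echo "$RESPONSE" | jq .                       # inspect offset, size and the new log lines
  STATE=$(echo "$RESPONSE" | jq -r '.state')    # remember the state for the next request
  sleep 1
done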
Note, though, that this does not reduce the latency with which new lines travel from the actual job runner to the GitLab web/backend. See sytech's answer to this question for that.
This answer should help when a Redis cache and the incremental logging architecture are configured and someone wants to get the logs of a currently running job in "realtime". Polling is still needed, though.
Some notes can also be found on the forum: https://forum.gitlab.com/t/is-there-an-api-for-getting-live-log-from-running-job/73072
Related
I was trying to run the test from the CI/CD GitLab Runner file, but it causes an issue when executing from GitLab.
I have successfully executed the test locally using the Karate options.
Working fine in a local run:
mvn test -Dkarate.env=stg -DKarate.options="--tags #Ui" -Dtest.run.mode=localtest -Dtest.run.group=OKCUtest -Dtest=OKCUtest -Dtest.gitlabRunner=false -DbuildDirectory=stg-target/OKCUtest -Dtest.run.testSource=localtest
There are 5 test feature files which were executed using the #Api tag. I have now identified that one of them should be #Ui, so I changed the respective feature file, created the new pipeline OKCU-UI and updated the command-line syntax to address the #Ui tests.
Can you try this command?
mvn test -Dkarate.options="--tags ~#Ui"
If it still does not work, try the same command with version 0.9.6.RC3.
Context
After creating a general post-receive hook for a GitLab server, I noticed it gets triggered directly after a new commit is detected in any repository. However, I would like the post-receive script to do something with the GitLab Runner CI build status of the commit that triggered it.
Approach
Based on this question and answer, I wrote a post-receive script that gets the commit and repository, and I tried to get the build status of that commit from within the GitLab Docker container:
#!/bin/bash
read oldrev newrev refname
echo "Previous Commit: $oldrev"
echo "New/latest Commit: $newrev"
echo "Repository name: $refname"
# Get build status of $newrev
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects/17/$refname/commits/$newrev/statuses"
However, that API call does not work from within the Docker environment (which is where the post-receive script runs).
Docker GitLab Build Status File locations
I also found the build status badges inside the Docker container; they are located in /opt/gitlab/embedded/service/gitlab-rails/public/assets/. However, I do not (yet) know how to decode their filenames. For example, the build status badge accompanying Job #3, of commit 9514d16aafc1d741ba6a9ff47718d632fa8d435b, has filename icons-6d7d4be41eac996c72b30eac2f28399ac8c6eda840a6fe8762fc1b84b30d5a2d.svg. Basically, I do not know to which commit/repository that build status badge belongs.
On the other hand, I have found the location of the job logs in the hashed path of the repository:
/var/opt/gitlab/gitlab-rails/shared/artifacts/d4/73/d4735e3a265e16eee03f59718b9b5d03019c07d8b6c51f90da3a666eec13ab35/2021_10_09/1/1/job.log
/var/opt/gitlab/gitlab-rails/shared/artifacts/d4/73/d4735e3a265e16eee03f59718b9b5d03019c07d8b6c51f90da3a666eec13ab35/2021_10_14/3/3/job.log
Each of these in turn contains its respective commit and branch as:
Checking out 9514d16a as master...
So in principle I could scan the repository path and the accompanying job logs until I find the job.log that contains the commit of the post-receive script (for, e.g., 5 minutes, to account for the delay between the commit and the start of the GitLab Runner CI), and then search for the build status output in that job.log (e.g. Job succeeded) (for, e.g., 60 minutes, to allow for long jobs). However, that seems like a convoluted workaround.
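For what it is worth, a rough sketch of that workaround is below. The artifacts root and the two waiting windows come from the description above; the exact log strings ("Checking out ... as", "Job succeeded", "ERROR: Job failed") and the polling intervals are assumptions and would need checking against the actual job.log files:
#!/bin/bash
# Find the job.log belonging to a given commit and wait for its build status.
COMMIT="$1"                       # e.g. 9514d16aafc1d741ba6a9ff47718d632fa8d435b
ARTIFACTS_ROOT="/var/opt/gitlab/gitlab-rails/shared/artifacts"

# Wait up to ~5 minutes for a job.log that checks out this commit to appear.
for _ in $(seq 1 60); do
  JOB_LOG=$(grep -rl "Checking out ${COMMIT:0:8} as" "$ARTIFACTS_ROOT" --include=job.log | head -n1)
  [ -n "$JOB_LOG" ] && break
  sleep 5
done
[ -z "$JOB_LOG" ] && { echo "No job found for $COMMIT" >&2; exit 1; }

# Wait up to ~60 minutes for the job to report success or failure in its log.
for _ in $(seq 1 360); do
  grep -q "Job succeeded" "$JOB_LOG" && { echo "success"; exit 0; }
  grep -q "ERROR: Job failed" "$JOB_LOG" && { echo "failed"; exit 1; }
  sleep 10
done
echo "Timed out waiting for job status" >&2
exit 1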
Question
Hence, I was wondering: is there a better/faster/more robust method to get the GitLab Runner CI build status of the commit that triggered the general post-receive script of a GitLab server, inside that triggered instance/run of the post-receive script?
I have a pipeline on Azure that runs on a Windows 10 virtual machine that at some point calls a test task for an assembly (.dll) that tests functions for a Revit (3D modelling software) plugin.
In order to run the tests, the pipeline is simply running a command line task that starts RevitTestFramework, an open source application (https://github.com/DynamoDS/RevitTestFramework) used for this kind of testing.
Here are the relevant parts of my pipeline's yaml:
trigger:
- develop

pool: 'Default'

variables:
  solution: '**/*.sln'
  buildPlatform: 'Any CPU'
  buildConfiguration: 'Debug'

steps:
# Some steps here
- task: CmdLine@2
  inputs:
    script: |
      cd %ALLUSERSPROFILE%\RevitTestFramework\bin\AnyCPU\Debug
      RevitTestFrameworkConsole.exe --dir %ALLUSERSPROFILE%\RevitTestFramework\bin\AnyCPU\Debug -a %ALLUSERSPROFILE%\RevitTestFramework\Tests\ModelEstablishment.IntegrationTests\bin\Debug\ModelEstablishment.IntegrationTests.dll -r %ALLUSERSPROFILE%\RevitTestFramework\Tests\results.xml -revit:"C:\Program Files\Autodesk\Revit 2020\Revit.exe" --continuous
Where %ALLUSERSPROFILE% is C:\ProgramData, but I also tried different folders (including C:) with the same result.
The very last line is the one that causes the issue. In case it is a bit confusing: it simply invokes the program RevitTestFrameworkConsole.exe, which lives under the directory given by --dir, tests the assembly given by -a, writes the results to the path given by -r, and uses the version of Revit specified by the path after -revit.
If I run this from the command line in Windows (not through the Azure pipeline), it runs perfectly.
But if it is Azure running it then it starts idling, repeating these lines until it cancels itself:
DevTools listening on ws://127.0.0.1:8088/devtools/browser/fa35cb10-8f4d-468f-9b0e-6457845ff8b2
Running C:\RevitTestFramework\bin\AnyCPU\Debug\RTF_Batch_Test.txt
[1202/180047.796:ERROR:gpu_process_transport_factory.cc(1029)] Lost UI shared context.
I've done my research but all I can find is that that error shouldn't be an actual error that breaks things and it usually happens when testing headless Chrome (which I'm far from doing).
Does anyone know what's going on here and how do I fix this?
UPDATE
By comparing the process of manually running the command on the VM's command line with what happens when Azure runs the same command, I've noticed that right after the line that says Running C:\ProgramData\RevitTestFramework\bin\AnyCPU\Debug\RTF_Batch_Test.txt (see screenshots), Revit is supposed to start up and run the tests. So I'm thinking that Azure's pipeline runs that command differently from how I run it on the same VM's command line.
Maybe this can help in understanding the issue.
I have a job in my pipeline that has a script with two very important steps:
mvn test to run JUnit tests against my code
junit2html to convert the XML result of the tests to an HTML format (the only possible way to see the results, as my pipelines aren't run through MRs), which is uploaded to GitLab as an artifact
docker rm to destroy a container created earlier in the pipeline
My problem is that when my tests fail, the script stops immediately at mvn test, so the junit2html step is never reached, meaning the test results are never uploaded in the event of failure, and docker rm is never executed either, so the container remains and messes up subsequent pipelines as a result.
What I want is to be able to keep a job going till the end even if the script fails at some point. Basically, the job should still count as failed in GitLab CI / CD, but its entire script should be executed. How can I configure this?
For each step that needs to continue even if it fails, you can add a flag to that step in your .gitlab-ci.yml file. For example:
...
Unit Tests:
  stage: tests
  only:
    - branches
  allow_failure: true
  script:
    - ...
It's that allow_failure: true flag that keeps the pipeline going even if that specific step fails. The GitLab CI documentation about allow_failure is here: https://docs.gitlab.com/ee/ci/yaml/#allow_failure
Update from comments:
If you need the step to keep going after a failure, and be aware that something failed, this has worked well for me:
./script_that_fails.sh || FAILED=true
if [ $FAILED ]
then ./do_something.sh
fi
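Applied to the job described in the question, that pattern could look roughly like this. This is only a sketch; the junit2html arguments and the container name are placeholders for whatever your job actually runs:
# Run the tests, but remember whether they failed instead of aborting the script.
FAILED=false
mvn test || FAILED=true

# These reporting/cleanup steps now always run.
junit2html target/surefire-reports/TEST-MyTest.xml report.html
docker rm -f my_container

# Finally, report the job as failed if the tests failed.
if [ "$FAILED" = "true" ]; then
  exit 1
fi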
Having a job like the following one in gitlab-ci:
static_test_service:
  stage: test code
  script:
    - docker run --rm -v $(pwd):/data -w /data dparra0007/sonar-scanner:20171010-1 sonar-scanner
      -Dsonar.projectKey=$CI_PROJECT_NAMESPACE:$CI_PROJECT_NAME
      -Dsonar.projectName=$CI_PROJECT_NAME
      -Dsonar.branch=$CI_COMMIT_REF_NAME
      -Dsonar.projectVersion=$CI_JOB_ID
      -Dsonar.sources=./greetingapi/src
      -Dsonar.java.binaries=./greetingapi/target
      -Dsonar.gitlab.project_id=$CI_PROJECT_ID
      -Dsonar.gitlab.commit_sha=$CI_COMMIT_SHA
      -Dsonar.gitlab.ref_name=$CI_COMMIT_REF_NAME
I need to fail the GitLab job when the SonarQube analysis fails. But in that case, the error in the analysis is reported, yet no failure status is sent to the job in GitLab CI, and the step always finishes with success.
It seems that there is no way to raise any event from "docker run" to be handled by the GitLab job.
Any idea on how to force to fail the job if the sonarqube analysis fails?
Thanks,
To break the CI build on a failed Quality Gate, you have to write a script based on the following steps (a sketch of such a script follows the list):
1. Search /report-task.txt for the values of the CE Task URL (ceTaskUrl) and CE Task ID (ceTaskId).
2. Call /api/ce/task?id=XXX, where XXX is the CE Task ID retrieved in step 1, e.g. https://yourSonarURL/api/ce/task?id=YourCeTaskId
3. Wait until the status returned by step 2 is SUCCESS, CANCELED or FAILED.
4. If it is FAILED, break the build (failure here means the Sonar report could not be generated).
5. If it is successful, take the analysisId from the JSON returned by /api/ce/task?id=XXX (step 2) and immediately call /api/qualitygates/project_status?analysisId=YYY to check the status of the quality gate, e.g. https://yourSonarURL/api/qualitygates/project_status?analysisId=YourAnalysisId
6. Step 5 gives the status against the critical, major and minor error threshold limits.
7. Based on those limits, break the build.
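A minimal sketch of such a script, assuming bash with curl and jq available, a SONAR_TOKEN variable with API access, and that report-task.txt was written to the scanner's default .scannerwork directory (adjust the path and URL to your setup):
#!/bin/bash
set -e

# Step 1: read the CE task id from report-task.txt (the path is an assumption).
CE_TASK_ID=$(grep '^ceTaskId=' .scannerwork/report-task.txt | cut -d= -f2)
SONAR_URL="https://yourSonarURL"   # placeholder

# Steps 2-3: poll /api/ce/task until the background task is finished.
while true; do
  STATUS=$(curl -s -u "$SONAR_TOKEN:" "$SONAR_URL/api/ce/task?id=$CE_TASK_ID" | jq -r '.task.status')
  [ "$STATUS" != "PENDING" ] && [ "$STATUS" != "IN_PROGRESS" ] && break
  sleep 5
done

# Step 4: fail if the analysis itself failed or was canceled.
if [ "$STATUS" != "SUCCESS" ]; then
  echo "SonarQube background task finished with status $STATUS" >&2
  exit 1
fi

# Step 5: fetch the quality gate status for this analysis.
ANALYSIS_ID=$(curl -s -u "$SONAR_TOKEN:" "$SONAR_URL/api/ce/task?id=$CE_TASK_ID" | jq -r '.task.analysisId')
GATE_STATUS=$(curl -s -u "$SONAR_TOKEN:" \
  "$SONAR_URL/api/qualitygates/project_status?analysisId=$ANALYSIS_ID" | jq -r '.projectStatus.status')

# Steps 6-7: break the build when the quality gate is not OK.
if [ "$GATE_STATUS" != "OK" ]; then
  echo "Quality gate status: $GATE_STATUS" >&2
  exit 1
fi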
I faced this problem with GitLab and Sonar, where Sonar was failing the quality analysis but the GitLab job was still passing with
INFO: ANALYSIS SUCCESSFUL, you can find the results at:
The problem turned out to be the following missing config in sonar.properties:
sonar.qualitygate.wait=true
sonar.qualitygate.timeout=1800
So basically, the Sonar scan takes time to do the analysis; by default the scanner does not wait for the analysis to complete and may return the default SUCCESSFUL ANALYSIS result to GitLab.
With the mentioned configuration, we explicitly ask it to wait for the quality gate to finish and give it a timeout as well (in case the analysis takes a long time).
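The same properties can also be passed on the scanner command line instead of a properties file, for example (a sketch; the rest of the scanner invocation is omitted):
sonar-scanner -Dsonar.qualitygate.wait=true -Dsonar.qualitygate.timeout=1800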
Now we see the GitLab job fail with the following:
ERROR: QUALITY GATE STATUS: FAILED - View details