I'm trying to configure code quality checks for a .NET project in GitLab Enterprise Edition 15.8.1-ee (Premium tier), but the GitLab UI doesn't show any code issues.
Since I'm going to use a custom code inspection tool (the JetBrains InspectCode command line tool), I've written a converter that reformats the JetBrains report into the GitLab JSON format (https://docs.gitlab.com/ee/ci/testing/code_quality.html#implement-a-custom-tool). For testing purposes, I've prepared a GitLab code quality report, added it to the repository, and added an extra GitLab job to provide the file to the CI pipeline.
Part of the prepared GitLab code quality report (gl-code-quality-report.json):
[
  {
    "description": "Using directive is not required by the code and can be safely removed",
    "fingerprint": "a3d5c2a9-1761-4a18-8e17-35df9e2bc3a6",
    "severity": "critical",
    "location": {
      "path": "src/folder/Class.cs",
      "lines": {
        "begin": 8
      }
    }
  }
  ...
]
.gitlab-ci.yml part (since the report is already pregenerated, the PowerShell script does nothing):
check-code-quality:
  stage: check-code-quality
  only: ['branches']
  dependencies:
    - build
  script: ['powershell.exe .\build\check-code-quality.ps1']
  artifacts:
    when: always
    expire_in: 4 days
    reports:
      codequality: gl-code-quality-report.json
Current result: the CI pipeline doesn't fail. The pipeline has the new check-code-quality job, and there is a new tab on the pipeline page, Code quality. Unfortunately, the tab just shows the text "No code quality issues found." On the merge request page there is a new section with the text "Code Quality hasn't changed."
The check-code-quality job log contains:
gl-code-quality-report.json: found 1 matching files and directories
Uploading artifacts as "codequality" to coordinator... ok id=1684071 responseStatus=201 Created token=64_yasyB
Why can't I see any issues in the GitLab UI? Please tell me what I'm doing wrong.
I have a few things in mind.
First of all, your JSON structure could be invalid. Make sure that the JSON file conforms to the GitLab JSON format as described in the docs.
Another possibility is that the location field is incorrect. It specifies the path to the file that contains the code quality issue; make sure that the path is correct and accessible in your repository.
I would also check the artifact path. Please verify that the path to the JSON file is specified in the artifacts field of your .gitlab-ci.yml.
In some cases it might also be related to a cache issue, so try clearing the cache.
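As a quick sanity check for the first two points, you could let the job parse the report before uploading it; a minimal sketch, assuming the same PowerShell-capable Windows runner as in your job:
check-code-quality:
  script:
    # hypothetical pre-check: fail the job early if the report is not parseable JSON
    - powershell.exe -Command "Get-Content gl-code-quality-report.json -Raw | ConvertFrom-Json | Out-Null"
    - powershell.exe .\build\check-code-quality.ps1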
I've found that my pre-generated file's encoding was UTF-8 with BOM, and it seems GitLab doesn't recognize data in this encoding. When I changed the encoding to UTF-8 (without BOM), GitLab showed the code quality widget and all the issues described in the provided JSON file.
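For anyone hitting the same issue: one way to guard against it is to re-save the report without the BOM inside the job itself. A sketch, assuming the same Windows runner as in the question:
check-code-quality:
  script:
    - powershell.exe .\build\check-code-quality.ps1
    # re-write the report as UTF-8 without BOM (UTF8Encoding($false) omits the BOM)
    - powershell.exe -Command "[IO.File]::WriteAllText('gl-code-quality-report.json', (Get-Content 'gl-code-quality-report.json' -Raw), (New-Object System.Text.UTF8Encoding $false))"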
This answer by @joki to a previous question suggests that it is possible to deploy each active branch in a GitLab repo to a dynamic environment, by giving browsable artifacts a public URL.
Trying this out with an mkdocs-material project, I've found two issues.
Firstly, if the GitLab repo is within a group or a subgroup, the URLs in the .gitlab-ci.yml file need to be something more like this:
environment:
  name: review/$CI_COMMIT_REF_NAME
  url: "$CI_PAGES_URL/-/jobs/$CI_JOB_ID/artifacts/public/index.html"
  auto_stop_in: 1 week
variables:
  PUBLIC_URL: "$CI_PAGES_URL/-/jobs/$CI_JOB_ID/artifacts/public/"
Secondly, relative links within the site don't work well, leading to a lot of 404 errors and the loss of things like style files. Possibly the URLs above are not right, or maybe the site_url in mkdocs.yml needs changing to something like:
site_url: !!python/object/apply:os.getenv ["CI_ENVIRONMENT_URL"]
however, neither of these quite worked for me.
A minimal MR with a very small deployment and review app can be found here.
Does anyone have a working recipe for mkdocs review apps?
You can see the URL you need in the »Browse« button of the build step in your pipeline.
Does this work?
develop:
  artifacts:
    paths:
      - public
  environment:
    name: Develop
    url: "https://$CI_PROJECT_NAMESPACE.gitlab.io/-/snim2-test-subgroup/$CI_PROJECT_NAME/-/jobs/$CI_JOB_ID/artifacts/public/index.html"
  script: |
    # whatever
  stage: deploy
  variables:
    PUBLIC_URL: "/-/snim2-test-subgroup/$CI_PROJECT_NAME/-/jobs/$CI_JOB_ID/artifacts/public"
You'll also need your change to mkdocs.yml to actually use the PUBLIC_URL, and make sure it's used everywhere that absolute internal links are generated:
site_url: !!python/object/apply:os.getenv ["PUBLIC_URL"]
use_directory_urls: false
…
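As an aside: recent MkDocs releases (1.2 and later) support an !ENV tag for reading environment variables directly in mkdocs.yml, with an optional default value, which might be a less brittle way to consume PUBLIC_URL (untested here):
# mkdocs.yml — alternative sketch, assuming MkDocs 1.2+
site_url: !ENV [PUBLIC_URL, '']
use_directory_urls: false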
I am trying to run the Vorto dashboard on a Raspberry Pi to visualize my Bosch IoT "things" data.
In order to run the Vorto dashboard, I installed npm and Node.js and created the config.json file.
I get the error below whenever I try to run the dashboard with the command sudo vorto-dashboard config.json, even though I have already added the OAuth2 client credentials.
No credentials given, can not get things
Could not get the token with given credentials. - StatusCodeError: 400 -
{"error":"unauthorized_client","error_description":"INVALID_CREDENTIALS:
Invalid client credentials"}
I am currently contributing to the Vorto project as an intern at Bosch. Due to changes in the Vorto dashboard, we combined and merged the functionality of a previous dashboard with another coexisting, updated UI, providing more advanced ways to visualize the existing devices.
As the uploaded state was work in progress, we temporarily disabled the config.json mechanism and removed existing references from the documentation. Apparently, the reference in the tutorial you found was missed, sorry for that!
Today I deployed a new version 0.5.0 of the vorto-dashboard, which should work as usual. You are now able to work with either process.env.[...] variables or a config.json file. Thank you Mena for the quick response!
Feel free to let me know if you need any further help or have additional feedback.
TL;DR
To resolve your issue, store your OAuth credentials as environment variables.
E.g. on Debian et al., run export BOSCH_CLIENT_ID=... etc., then start the dashboard in the same terminal.
Context
I was about to ask the same question, as I got the same error message no matter how I referenced the config.json file (relative path, absolute path, no reference, etc.).
For clarification, the tutorial pointing to a config.json resource for storing OAuth credentials is here.
Quoting:
While the dependencies are being installed, create the config.json file and insert the client_id, secret and scope from your already created OAuth2 client. The content of the file has to look like this:
{
  "client_id": "<YOUR_CLIENT_ID>",
  "client_secret": "<YOUR_CLIENT_SECRET>",
  "scope": "<YOUR_SCOPE>",
  "intervalMS": 10000
}
The reference to the config.json file has been removed from the README.md resource in the vorto-dashboard module of vorto-examples.
The latest README.md suggests providing the OAuth credentials through environment variables:
You can provide your OAuth2 credentials through environment variables.
The three environment variables you have to provide are:
BOSCH_CLIENT_ID
BOSCH_CLIENT_SECRET
BOSCH_SCOPE
[...]
Looking at the source, I can only find an explicit reference to a config.json in the start script entry of package_for_deployment.json (and nothing else in the source seems to be consuming, say, argv[2] for that matter).
The AuthToken.js resource in charge of handling OAuth credentials only seems to read environment variables, through process.env.[...] references.
Elaboration
This is only speculation at the time of writing, but I suspect the reason why the config.json methodology has been abandoned might have something to do with strengthening security, i.e. not storing OAuth credentials permanently in a file.
If that much is true, then the tutorial page should probably be amended with the latest instructions from the README.md.
I am new to working with Azure DevOps, and I am trying to set up build pipelines for multiple projects and share a YAML template between them. I will demonstrate more clearly what I want to achieve, but first let me show you our projects' structure:
proj0-common/
|----src/
|----azure-pipelines.yml
|----pipeline-templates/
|    |----build-project.yml
|    |----install-net-core
proj1/
|----src/
|----azure-pipelines.yml
proj2/
|----src/
|----azure-pipelines.yml
proj3/
|----src/
|----azure-pipelines.yml
The first folder is our common project, in which we want to put the common scripts and packages used by the other projects. The remaining folders (proj1-proj3) are .NET Core projects that act as microservice projects. As you can see, each project has its own azure-pipelines.yml pipeline file, and each resides in its own repository on GitHub. The pipeline templates (build-project.yml and install-net-core) reside in the common project.
All the projects have the same build steps, so I would like to use the build-project.yml template for all three of them (instead of hardcoding every step in every file).
My problem is that, since they reside in distinct projects, I cannot simply reference the template files, say from proj3, like this:
.
.
.
- template: ../proj0-common/pipeline-templates/build-project.yml
.
.
.
And [I believe] the reason is that each project has its own isolated build pool (please correct me on this if I am wrong).
I was thinking that if Azure DevOps had functionality similar to variable groups, but for pipeline templates, it could solve my problem; however, I cannot find such a feature. Could someone suggest a solution to this problem?
Could you try copying this use case? I experimented a bit after checking out some of the docs. The docs had some gaps, though, like most of Microsoft's other documentation around Azure DevOps.
Say you have an azdevops-settings.yml that specifies the pipeline in one of your service branches. In the example below it has two steps that include an external template from another repository; in one of them I supply a parameter that is otherwise set to a default in the template.
Notice that I had to use the endpoint tag, otherwise it complains. That is something that could be spelled out more clearly in the docs.
# In ThisProject
# ./azdevops-settings.yml
resources:
  repositories:
    - repository: templates
      type: bitbucket
      name: mygitdomain/otherRepo
      endpoint: MyNameOfTheGitServiceConnection

steps:
  - template: sometemplate.yml@templates
    parameters:
      Param1: 'Changed Param1'
  - template: sometemplate.yml@templates
In the template, I first declare the parameters that I want to pipe into it. I also tried referencing things without piping them, like the build id and other predefined variables, and they worked fine.
I also tried using an inline script as well as a script path reference. The test.ps1 script just prints a string, as in the output below.
# otherRepo/sometemplate.yml
parameters:
  Param1: 'hello there'

steps:
  - powershell: |
      Write-Host "Your parameter is now: $env:Param"
      Write-Host "When outputting standard variable build id: $(Build.BuildId)"
      Write-Host "When outputting standard variable build id via env: $env:BuildNumber"
      Write-Host "The repo name is: $(Build.Repository.Name)"
      Write-Host "The build definition name is: $(Build.DefinitionName)"
    env:
      Param: ${{ parameters.Param1 }}
      BuildNumber: $(Build.BuildId)
  - powershell: './test.ps1'
And the separate PowerShell script:
# otherRepo/test.ps1
Write-Host "Running script from powershell specification"
Output:
========================== Starting Command Output ===========================
Your parameter is now: Changed Param1
When outputting standard variable build id: 23
When outputting standard variable build id via env: 23
The repo name is: mygitdomain/thisRepo
The build definition name is: ThisProject
Finishing: PowerShell
========================== Starting Command Output ===========================
Running script from powershell specification
Finishing: PowerShell
..and so on..
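Since the repos in the question live on GitHub rather than Bitbucket, the resource block would presumably look more like this (the service connection name is made up, and proj0-common stands in for the template repo):
resources:
  repositories:
    - repository: templates
      type: github
      name: your-org/proj0-common
      endpoint: MyGitHubServiceConnection

steps:
  # pulls the shared template from the common repo at compile time
  - template: pipeline-templates/build-project.yml@templates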
I found only one way to actually do that. You can reference the parent directory by using an absolute path; the key is to populate the root path with a system variable. The solution for your example:
- template: ${{variables['System.DefaultWorkingDirectory']}}/proj0-common/pipeline-templates/build-project.yml
I'm trying to run a number of API calls using dredd and API Blueprint to test a site. I would like to run the tests on CircleCI, as there are Selenium tests running in the same place. Each transaction needs to be accompanied by two tokens, which are set as cookies in the headers. Ideally, these would be set in the dredd.yml file. When running on a local machine, if I replace ACCESS_TOKEN and REFRESH_TOKEN with the actual values, the tests run as expected.
circle.yml:
test:
  override:
    - dredd
dredd.yml headers
header: ['Cookie: access_token=ACCESS_TOKEN; refresh_token=REFRESH_TOKEN']
Where ACCESS_TOKEN and REFRESH_TOKEN should get replaced by the actual values set in CircleCI's environment variables. I have also tried access_token=$[ACCESS_TOKEN], access_token=$["ACCESS_TOKEN"], and access_token=$ACCESS_TOKEN. None of these are replaced in the headers for the first API call.
The header looks like: {"Content-Type":"application/json; charset=utf-8","User-Agent":"Dredd/1.4.0 (Darwin 14.5.0; x64)","Cookie":" access_token=$ACCESS_TOKEN; refresh_token=$REFRESH_TOKEN"}
I am new to YAML files, so I'm probably missing something basic, but I did search around for a while. The hooks file is written in Node.js, so I don't think the Ruby/Rails help will be useful here. If I am missing anything in the question, don't hesitate to let me know.
YAML is a data representation language, not a template language (or template processor, for that matter). While an individual program might support loading environment variables or additional parameters named in the configuration, the YAML parser (probably, unless it's a custom module) isn't what would be injecting them. Skimming the dredd docs, I don't see any references to environment variables or parameters; it may be worth creating an issue on the project and starting a discussion with the developers to see if this is supported.
I can think of a number of ways to solve your specific problem, but they all involve additional tools to render the YAML with your variables injected. Perhaps the easiest solution for your case is to set the environment variables in the CircleCI web configuration (NOT the version-controlled circle.yml). Then set up a pre-build step in which the YAML configuration is generated. To do this, wrap the YAML in a Bash script, with the YAML document contained inside it as a here-doc.
#!/bin/bash
# ACCESS_TOKEN and REFRESH_TOKEN are injected by CircleCI
cat <<EOF > config.yml
---
header: ['Cookie: access_token=${ACCESS_TOKEN}; refresh_token=${REFRESH_TOKEN}']
EOF
Then run the rest of your job normally, perhaps deleting the configuration file or restoring it from version control before any artifacts are created, to avoid leaking your credentials.
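Wired into CircleCI 1.0 syntax, that wrapper could run as a pre-step; a sketch, assuming the script above is saved as scripts/render-dredd-config.sh and its here-doc is redirected to dredd.yml instead of config.yml, so that dredd picks it up by default:
test:
  pre:
    # render dredd.yml with the tokens injected before the tests run
    - bash scripts/render-dredd-config.sh
  override:
    - dredd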
A better way to work with headers is to use hook files, setting the headers before each request. As you are using Node.js, try reading Node environment variables:
var hooks = require('hooks');

hooks.beforeEach(function (transaction) {
  // read the tokens from the environment variables set in CircleCI
  transaction.request.headers.Cookie =
    'access_token=' + process.env.ACCESS_TOKEN +
    '; refresh_token=' + process.env.REFRESH_TOKEN;
});
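For dredd to pick the hooks up, the hooks file has to be registered; a minimal dredd.yml fragment, assuming the code above is saved as hooks.js next to it:
# dredd.yml — hedged sketch
language: nodejs
hookfiles: ./hooks.js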
I'm working with my own GitLab and GitLab CI server. I'd like to get the latest successful commit.
I can only get the latest build status of a branch, from the URL:
http://mygitlab.ci/projects/3/status?ref=master
I need that in order to deploy the latest successful version of my repo, but I really don't understand CI with a self-hosted GitLab, and there is not a lot of documentation.
UPDATE:
I.e., in the picture you can see the latest 3 commits and their status. I really need to get the latest successful commit (763a3077).
Solved:
Here is the answer. The URL must be something like this:
http://my.gitlabci/api/v1/commits?project_token=<my-project-token>&project_id=<my-project-id>
GET /commits

Parameters:
  project_id (required) - The ID of a project
  project_token (required) - Project token
  page (optional)
  per_page (optional) - items per request (default is 20)
https://docs.gitlab.com/ee/api/commits.html