I have GitLab Secret Detection enabled and I wanted to check that it works. I have a Spring project and the job set up. What kind of secret pattern would it pick up?
Does anyone know how I can check that it actually picks something up?
I have tried adding the following to the code (it's made up), but it doesn't get flagged:
aws_secret=AKIAIMNOJVGFDXXXE4OA
If the secret detection analyzer finds a secret, it doesn't fail the job (i.e., it doesn't exit with a non-zero code). The analyzer output shows how many leaks were found, but not what they were. The full details are written to a file called gl-secret-detection-report.json. You can either cat the file in the job so the results show up in the job output, or upload it as an artifact so it gets recognized as a SAST report.
Here's the secret detection job from one of my pipelines that both cats the file and uploads it as a SAST report artifact. Note: for my purposes, I wasn't able to use the template directly, so I run the analyzer manually:
Secrets Detector:
  stage: sast
  image:
    name: "registry.gitlab.com/gitlab-org/security-products/analyzers/secrets"
  needs: []
  only:
    - branches
  except:
    - main
  before_script:
    - apk add jq
  script:
    - /analyzer run
    - cat gl-secret-detection-report.json | jq '.'
  artifacts:
    reports:
      sast: gl-secret-detection-report.json
The gl-secret-detection-report.json file looks like this for a test repository I set up, where I added a GitLab Runner registration token to a file called TESTING:
{
  "version": "14.0.4",
  "vulnerabilities": [
    {
      "id": "138bf52be327e2fc3d1934e45c93a83436c267e45aa84f5b55f2db87085cb205",
      "category": "secret_detection",
      "name": "GitLab Runner Registration Token",
      "message": "GitLab Runner Registration Token detected; please remove and revoke it if this is a leak.",
      "description": "Historic GitLab Runner Registration Token secret has been found in commit 0a4623336ac54174647e151186c796cf7987702a.",
      "cve": "TESTING:5432b14f2bdaa01f041f6eeadc53fe68c96ef12231b168d86c71b95aca838f3c:gitlab_runner_registration_token",
      "severity": "Critical",
      "confidence": "Unknown",
      "raw_source_code_extract": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
      "scanner": {
        "id": "gitleaks",
        "name": "Gitleaks"
      },
      "location": {
        "file": "TESTING",
        "commit": {
          "author": "author",
          "date": "2022-09-12T17:30:33Z",
          "message": "a commit message",
          "sha": "0a4623336ac54174647e151186c796cf7987702a"
        },
        "start_line": 1
      },
      "identifiers": [
        {
          "type": "gitleaks_rule_id",
          "name": "Gitleaks rule ID gitlab_runner_registration_token",
          "value": "gitlab_runner_registration_token"
        }
      ]
    }
  ],
  "scan": {
    "analyzer": {
      "id": "secrets",
      "name": "secrets",
      "url": "https://gitlab.com/gitlab-org/security-products/analyzers/secrets",
      "vendor": {
        "name": "GitLab"
      },
      "version": "4.3.2"
    },
    "scanner": {
      "id": "gitleaks",
      "name": "Gitleaks",
      "url": "https://github.com/zricethezav/gitleaks",
      "vendor": {
        "name": "GitLab"
      },
      "version": "8.10.3"
    },
    "type": "secret_detection",
    "start_time": "2022-09-12T17:30:54",
    "end_time": "2022-09-12T17:30:55",
    "status": "success"
  }
}
This includes the type of secret found, what file it was in and what line(s), and information from the commit where the secret was added.
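If you just want a quick summary in the job log instead of the full report, the interesting fields can be pulled out with jq. This is a minimal sketch, using the field names from the example report above:
# Print the rule name, file, and start line for each finding
jq -r '.vulnerabilities[] | "\(.name) in \(.location.file):\(.location.start_line)"' gl-secret-detection-report.json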
If you want to force the job to fail when any secrets are found, you can do that with jq (note: I install jq in the before_script of this job; it's not available in the image by default):
Secrets Detector:
  stage: sast
  image:
    name: "registry.gitlab.com/gitlab-org/security-products/analyzers/secrets"
  needs: []
  only:
    - branches
  except:
    - main
  before_script:
    - apk add jq
  script:
    - /analyzer run
    - cat gl-secret-detection-report.json | jq '.'
    - if [[ $(jq '.vulnerabilities | length' gl-secret-detection-report.json) -gt 0 ]]; then echo "secrets found" && exit 1; fi
  artifacts:
    reports:
      sast: gl-secret-detection-report.json
Related
I am developing a handful of WordPress projects on GitLab and I would like to use semantic-release to automatically manage releases. To that end I'm trying to accomplish a few additional things:
Update and commit applicable version strings in the codebase via ${nextRelease.version}.
Similarly update version strings in the files generated for the release (which are zipped for convenience).
I'm pretty sure I'm close: I've got the first item working (via Google's semantic-release-replace-plugin) but not the second. Up to this point I've tried to do most things via semantic-release's plugin ecosystem, but if need be I can venture into script territory.
My .releaserc looks like:
{
  "branches": [ "main" ],
  "plugins": [
    "@semantic-release/commit-analyzer",
    "@semantic-release/release-notes-generator",
    [
      "@google/semantic-release-replace-plugin",
      {
        "replacements": [
          {
            "files": ["style.css"],
            "from": "Version: .*",
            "to": "Version: ${nextRelease.version}",
            "results": [
              {
                "file": "style.css",
                "hasChanged": true,
                "numMatches": 1,
                "numReplacements": 1
              }
            ],
            "countMatches": true
          }
        ]
      }
    ],
    [
      "@semantic-release/git",
      {
        "assets": ["style.css"]
      }
    ],
    [
      "@semantic-release/gitlab",
      {
        "assets": [
          {"path": "experiments.zip", "label": "zip"}
        ]
      }
    ]
  ]
}
And the .gitlab-ci.yml looks like:
variables:
  GL_TOKEN: $GL_TOKEN
stages:
  - release
before_script:
  - npm install
publish:
  image: cimg/php:7.4-node
  stage: release
  script:
    - npm run build
    - npm run zip
    - npx semantic-release
  only:
    refs:
      - main
Where npm run build compiles some assets and npm run zip is a JavaScript-based script that zips up the desired production-ready files, in this case to generate the experiments.zip.
Any suggestions would be appreciated!
So the main issue here was that the zip was just not being generated at the right time, and I needed to slip a
[
  "@semantic-release/exec",
  {
    "prepareCmd": "node bin/makezip.js"
  }
]
between "@semantic-release/git" and "@semantic-release/gitlab".
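For context, the tail of the plugins array then looks roughly like this (a sketch based on the .releaserc above; only the ordering of the last three entries is the point):
    [
      "@semantic-release/git",
      { "assets": ["style.css"] }
    ],
    [
      "@semantic-release/exec",
      { "prepareCmd": "node bin/makezip.js" }
    ],
    [
      "@semantic-release/gitlab",
      { "assets": [{ "path": "experiments.zip", "label": "zip" }] }
    ]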
I have an Azure DevOps pipeline whose resources section is given below:
resources:
  repositories:
    - repository: test
      type: git
      name: Hackfest/template
  pipelines:
    - pipeline: Build
      source: mybuild
      branch: main
      # version: # Latest by default
      trigger:
        branches:
          include:
            - main
I'm trying to invoke the pipeline using a REST API call. The body of the REST API call is given below:
$body='{
  "definition": { "id": "3321" },
  "resources": {
    "pipelines": {
      "Build": {
        "version": "20220304.15",
        "source": "mybuild"
      }
    }
  },
  "sourceBranch": "main"
}'
With the above JSON string I'm able to invoke the pipeline build, but it is not picking up the artifacts from version 20220304.15 of the build "mybuild". Rather, it is taking the latest artifact version of mybuild and starting the build.
How should I modify the above body string to pick the correct version of "mybuild"?
With the Runs - Run Pipeline endpoint, this worked for me:
"resources": {
"repositories": {
"self": {
"refName": "refs/heads/dev"
}
},
"pipelines": {
"Build": {
"version": "Build_202203040100.1"
}
}
}
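For reference, that body can be sent to the Runs - Run Pipeline endpoint with curl. This is only a sketch: the organization, project, PAT variable, and api-version are placeholders to adapt to your setup (the pipeline ID 3321 is taken from the question):
# Queue a run of pipeline 3321, pinning the "Build" pipeline resource to a specific run
curl -u ":$AZURE_DEVOPS_PAT" \
  -H "Content-Type: application/json" \
  -d '{ "resources": { "pipelines": { "Build": { "version": "Build_202203040100.1" } } } }' \
  "https://dev.azure.com/{organization}/{project}/_apis/pipelines/3321/runs?api-version=7.0"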
I am using the following YAML for my prometheus-adapter installation.
prometheus:
  url: http://prometheus-server.prometheus.svc.cluster.local
  port: 80
rules:
  custom:
    - seriesQuery: 'http_duration{kubernetes_namespace!="",kubernetes_pod_name!=""}'
      resources:
        overrides:
          kubernetes_namespace: { resource: "namespace" }
          kubernetes_pod_name: { resource: "pod" }
      name:
        matches: "^(.*)_sum"
        as: "${1}_avg"
      metricsQuery: "sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)"
This YAML is installed with the following command.
helm upgrade --install prometheus-adapter prometheus-community/prometheus-adapter --values=./prometheus-adapter-values.yaml --namespace prometheus
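As a quick sanity check (not specific to this setup), you can confirm that the adapter registered the custom metrics API at all:
# The adapter should have created this APIService; without it, /apis/custom.metrics.k8s.io returns nothing
kubectl get apiservice v1beta1.custom.metrics.k8s.io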
After generating some load with hey, I tried looking for the _avg metric with the following command.
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq -r '.resources[] | select (.name | contains ("pods/hello_http"))'
This is the output.
{
  "name": "pods/hello_http_duration_sum",
  "singularName": "",
  "namespaced": true,
  "kind": "MetricValueList",
  "verbs": [
    "get"
  ]
}
{
  "name": "pods/hello_http_duration_count",
  "singularName": "",
  "namespaced": true,
  "kind": "MetricValueList",
  "verbs": [
    "get"
  ]
}
{
  "name": "pods/hello_http_duration_bucket",
  "singularName": "",
  "namespaced": true,
  "kind": "MetricValueList",
  "verbs": [
    "get"
  ]
}
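(For what it's worth, the values behind one of the discovered metrics can be fetched directly from the custom metrics API; the namespace default here is only illustrative:)
# Fetch the values of one discovered metric across pods in a namespace
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/hello_http_duration_sum" | jq .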
Why is the _avg metric not seen? Note that, at the moment, the accuracy of the metricsQuery is not important; I just want to know why the _avg metric is not seen.
Where do I look for logs? Neither the prometheus-adapter nor the prometheus-server logs showed anything obvious.
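(For reference, I checked the adapter logs like this, assuming the Deployment name that the Helm release above creates:)
# Tail the logs of the prometheus-adapter Deployment installed by the Helm release above
kubectl logs deploy/prometheus-adapter -n prometheus --tail=100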
Do I need additionalScrapeConfigs as described here?
This post is similar to mine; however, my configuration matches that of the OP. What am I missing?
I want to list only GitLab repository names using the GitLab API.
I tried the command curl "https://gitlab.com/api/v4/projects?private_token=*************"
It lists merge requests, issues, and also repository names.
How can I list only the repository names?
As Igor mentioned, the GitLab API does not let you limit the response to a single field.
I would recommend combining the original API call with the jq command.
This will let you parse the returned JSON:
curl "https://gitlab.com/api/v4/projects?private_token=*************" | jq '.[].name'
There's no way to limit the response to a single field via the API, but there's a simple option to return a minimal set of fields:
https://docs.gitlab.com/ee/api/projects.html#list-all-projects
curl "https://gitlab.com/api/v4/projects?simple=true&private_token=*************" would return something like:
[
  {
    "id": 4,
    "description": null,
    "default_branch": "master",
    "ssh_url_to_repo": "git@example.com:diaspora/diaspora-client.git",
    "http_url_to_repo": "http://example.com/diaspora/diaspora-client.git",
    "web_url": "http://example.com/diaspora/diaspora-client",
    "readme_url": "http://example.com/diaspora/diaspora-client/blob/master/README.md",
    "tag_list": [ // deprecated, use `topics` instead
      "example",
      "disapora client"
    ],
    "topics": [
      "example",
      "disapora client"
    ],
    "name": "Diaspora Client",
    "name_with_namespace": "Diaspora / Diaspora Client",
    "path": "diaspora-client",
    "path_with_namespace": "diaspora/diaspora-client",
    "created_at": "2013-09-30T13:46:02Z",
    "last_activity_at": "2013-09-30T13:46:02Z",
    "forks_count": 0,
    "avatar_url": "http://example.com/uploads/project/avatar/4/uploads/avatar.png",
    "star_count": 0
  },
  {
    "id": 6,
    "description": null,
    "default_branch": "master",
    ...
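The two answers combine naturally if you only want the names; for example (per_page is standard GitLab API pagination, added here for illustration and capped at 100):
# Return the lightweight project representation and print only the names
curl "https://gitlab.com/api/v4/projects?simple=true&per_page=100&private_token=*************" | jq -r '.[].name'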
I was following this documentation, trying to run a simple shell script in a VM:
https://learn.microsoft.com/en-us/rest/api/compute/virtual%20machines%20run%20commands/runcommand#runcommandinputparameter
But it's not clear what the body of the POST request needs to contain. The commandId can be RunShellScript, but where do we provide the script value?
I have tried a body like this
{
  commandId: "RunShellScript",
  script: "/path/scriptname"
}
with other options
script: 'scriptname'
script: 'sh scriptname'
and others, each resulting in:
{
  "error": {
    "code": "BadRequest",
    "message": "Error converting value "/home/admin1/quick-python-test.sh" to type 'System.Collections.Generic.List`1[System.String]'. Path 'script', line 3, position 52.",
    "target": "runCommandInput.script"
  }
}
Can anyone help me do this properly? I am new to Azure.
To run a bash script in the VM through the Azure REST API, here is a sample body for the request:
{
  "commandId": "RunShellScript",
  "script": [
    "echo $arg1 $arg2"
  ],
  "parameters": [
    {
      "name": "arg1",
      "value": "hello"
    },
    {
      "name": "arg2",
      "value": "world"
    }
  ]
}
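Note that script is a list of strings, one entry per line of the script, which is why passing a plain string produced the List`1[System.String] conversion error shown in the question. As a rough sketch, the body can be POSTed to the runCommand endpoint like this; the bearer token variable, subscription, resource group, VM name, and api-version are placeholders:
# Run the command on the VM via the Azure REST API (all {placeholders} and the api-version are illustrative)
curl -X POST \
  -H "Authorization: Bearer $AZURE_ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{ "commandId": "RunShellScript", "script": [ "echo $arg1 $arg2" ], "parameters": [ { "name": "arg1", "value": "hello" }, { "name": "arg2", "value": "world" } ] }' \
  "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Compute/virtualMachines/{vmName}/runCommand?api-version=2022-08-01"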