What is the meaning of a colon in a list value in a YAML file, specifically the :image in the stage name build:image of a .gitlab-ci.yml file?

Is the list value build:image just a name like build_image? Or does it have a special usage in either YAML or the .gitlab-ci.yml file? If there isn't a special usage, what is the value of using name1:name2 instead of name1_name2?
The :image doesn't seem to be put into a variable. When I run this through the GitLab pipeline, the output is
Skipping Git submodules setup
Restoring cache
Downloading artifacts
Running before_script and script
$ echo image is $image
image is
.gitlab-ci.yml
stages:
  - build:image
  - tag:image
  - deploy

build:
  stage: build:image
  script:
    - echo image is $image

I don't see anything like this in the GitLab CI/CD Pipeline Configuration Reference
Where did you see this .gitlab-ci.yml file?
I ran the .gitlab-ci.yml you provided and it seems to work fine. Apparently GitLab CI doesn't treat the colon in any special way, and I wouldn't expect it to, as there is no mention of it in the documentation. In plain YAML, a colon only starts a key: value pair when it is followed by a space (or ends the line), so build:image is parsed as the single string "build:image" rather than a nested mapping.
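If you want to make it explicit that the stage name is one plain string, you can quote it; a minimal sketch based on the config above (the quotes are optional and GitLab treats both forms the same):

stages:
  - "build:image"   # quoted: clearly a single string, not a key/value pair
  - "tag:image"
  - deploy

build:
  stage: "build:image"
  script:
    - echo image is $image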

Related

GitLab submodule complains about AWS CloudFormation templates located in submodule

I am facing a problem with GitLab include and I'm wondering whether it's possible to do what I intend to.
I have 2 GitLab repositories:
my-infrastructure
my-prod-deployment
In my-infrastructure I have the YAML file that defines a job whose command references a local file. For example, in my-infrastructure I have the following:
A gitlab-infrastructure.yml template
image: amazon/aws-cli

variables:
  FOO: ReplaceMe

stages:
  - one

agent_ui:
  stage: microservices
  script:
    - aws cloudformation deploy --stack-name sample --template-file templates/aws-template.yml
and also I have a templates/aws-template.yml that contains some CloudFormation code.
Notice that the GitLab template needs access to a local file that exists in the same project, my-infrastructure.
Now in the other project my-prod-deployment I have a .gitlab-ci.yml with
include:
  - project: mycompany/my-infrastructure
    ref: main
    file: gitlab-infrastructure.yml

variables:
  FOO: Bar
When I run this CI/CD pipeline I can see the FOO variable being properly overridden and I can see that the included job's script is executed. The problem is that I get a
$ aws cloudformation deploy --stack-name sample --template-file templates/aws-template.yml
Invalid template path templates/aws-template.yml
This is probably because the relative path exists in my-infrastructure but not in my-prod-deployment; the file is not locally available in this project and therefore it can't be found.
Is there any solution to this?
Maybe a way to include not only gitlab but also other files or similar?
Or maybe some kind of shortcut or link to a different repo folder?
Or maybe a way to temporary copy a remote folder to the local CI/CD pipeline execution?
Notice that I cannot use an absolute path or a URL for that script parameter, since that specific tool (AWS CLI) does not allow it. Otherwise I wouldn't face this relative path issue.
UPDATE 1: I have tried a workaround with git submodules, separating the GitLab template into a different project and adding my-infrastructure as a submodule:
cd my-prod-deployment
git submodule add git@gitlab.com:mycompany/my-infrastructure.git
so that my .gitlab-ci.yml looks like this
include:
  - project: mycompany/my-gitlab-templates
    ref: main
    file: gitlab-infrastructure.yml

variables:
  CLOUDFORMATION_SUBMODULE: my-infrastructure
  FOO: Bar
and my repo has a local folder my-infrastructure, but I am shocked to find that it still complains about the AWS CloudFormation template path, so I've added the AWS CloudFormation tag to the question and edited it.
This is the error
$ aws cloudformation deploy --stack-name sample --template-file $CLOUDFORMATION_SUBMODULE/templates/aws-template.yml
Invalid template path my-infrastructure/templates/aws-template.yml
Cleaning up project directory and file based variables 00:00
ERROR: Job failed: exit code 1
There is a my-infrastructure/templates/aws-template.yml path under my repo. It's part of the submodule. So I don't understand why this workaround does not work.
Any help would be appreciated.
I fixed the issue with the git submodule approach.
I had to make two changes, as per https://docs.gitlab.com/ee/ci/git_submodules.html:
Add the submodule with a relative path:
git submodule add ../my-infrastructure.git
so that .gitmodules uses the relative URL within the same server:
[submodule "my-infrastructure"]
    path = my-infrastructure
    url = ../my-infrastructure.git
and add this variable to .gitlab-ci.yml
variables:
  # submodule behavior
  GIT_SUBMODULE_STRATEGY: recursive
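For completeness, the consuming project's .gitlab-ci.yml with both pieces in place would look roughly like this (a sketch using the project and variable names from the question's UPDATE above):

include:
  - project: mycompany/my-gitlab-templates
    ref: main
    file: gitlab-infrastructure.yml

variables:
  # check out the my-infrastructure submodule in every job
  GIT_SUBMODULE_STRATEGY: recursive
  CLOUDFORMATION_SUBMODULE: my-infrastructure
  FOO: Bar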

Changing Gitlab SAST json report names

Issue
Note: My CI contains a code complexity checker which can be ignored. This question is mainly focused on SAST.
I have recently set up a SAST pipeline for one of my GitLab projects. The GitLab CE and GitLab Runner instances are self-hosted. When the SAST scan is completed, the downloaded artifacts / JSON reports all have the same name, gl-sast-report.json. In this example, the artifacts bandit-sast and semgrep-sast both produce gl-sast-report.json when downloaded.
SAST configuration
stages:
  - CodeScan
  - CodeComplexity

sast:
  stage: CodeScan
  tags:
    - sast

code_quality:
  stage: CodeComplexity
  artifacts:
    paths: [gl-code-quality-report.json]
  services:
  tags:
    - cq-sans-dind

include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Code-Quality.gitlab-ci.yml
Completed SAST results
End Goal
If possible, how could I change the name of the artifacts for bandit-sast and semgrep-sast?
If question one is possible, does this mean I have to manually specify each analyser for various projects? Currently, based on my .gitlab-ci.yml, the SAST analysers are automatically detected based on the project language.
If you're using the pre-built SAST images, this isn't possible, even if you run the docker command manually like so:
docker run --volume "$PWD":/code --env=LM_REPORT_VERSION="2.1" --env=CI_PROJECT_DIR=/code registry.gitlab.com/gitlab-org/security-products/analyzers/license-finder:latest
When using these SAST (and DAST) images, the report file will always have the name given in the docs. However, if you run the docker command manually as above, you can rename the file before it's uploaded as an artifact, but it will still have the same JSON structure/content.
Run License Scanning Analyzer:
  stage: sast
  script:
    - docker run --volume "$PWD":/code --env=LM_REPORT_VERSION="2.1" --env=CI_PROJECT_DIR=/code registry.gitlab.com/gitlab-org/security-products/analyzers/license-finder:latest
    - mv gl-license-scanning-report.json license-scanning-report.json
  artifacts:
    reports:
      license_scanning: license-scanning-report.json
The only way to change the JSON structure/content is to implement the SAST tests manually, without using the provided images at all. You can see all the available SAST analyzers in this GitLab repo.
For the License Finder analyzer as an example, the Dockerfile says the entrypoint for the image is the run.sh script.
You can see on line 20 of run.sh that it sets the name of the file to 'gl-license-scanning-report.json'; we can already change the name by running the docker image manually, so this doesn't really help. However, we can see that the actual analyzing comes from the scan_project function, which you could replicate.
So while it is possible to manually run these analyzers without the pre-built images, it will be much more difficult to get them to work.

GitLab CI executes only one job

I'm using a GitLab.com free account and have installed GitLab Runner on my Windows PC. For some reason GitLab CI is executing only one job from my .gitlab-ci.yml file.
To test it I've created a simple .gitlab-ci.yml file with two jobs.
job1:
  script:
    - echo 1

job2:
  script:
    - echo 2
When I commit it to the repository, only job1 is executed. I've checked with CI Lint and the file is valid. What could be the problem?
It turns out that the problem was with the encoding of the .gitlab-ci.yml file. Initially it was encoded as UCS-2 LE BOM; after converting it to UTF-8, both jobs were recognized.
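As a side note, on a Linux runner or in WSL you can check and convert the encoding from the command line; a rough sketch (on Windows an editor such as Notepad++ or VS Code can do the same via its save-with-encoding option):

# show the current encoding of the CI file
file .gitlab-ci.yml          # e.g. "Little-endian UTF-16 Unicode text"
# convert it to UTF-8 (iconv reads the BOM when given -f UTF-16)
iconv -f UTF-16 -t UTF-8 .gitlab-ci.yml > .gitlab-ci.yml.utf8
mv .gitlab-ci.yml.utf8 .gitlab-ci.yml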

dynamically setting artifact path/folder structure on gitlab-ci.yml

I have the following gitlab-ci.yml file, which reads package.json with the jq processor to dynamically set a variable holding the name of the artifact folder, something along the lines of:
image: node:latest

stages:
  - build

before_script:
  ## steps ignored for purpose of question
  - export NAME_OF_ARTIFACT_FOLDER=$(cat package.json | jq -r .name)"_"$(cat package.json | jq -r .version)".zip"
  - echo $NAME_OF_ARTIFACT_FOLDER ## prints the expected name here eg. myApp_1.0.0.zip

prod_build:
  stage: build
  script:
    - echo $NAME_OF_ARTIFACT_FOLDER ## prints the expected name here eg. myApp_1.0.0.zip
    - yarn run build
  artifacts:
    paths:
      - dist/$NAME_OF_ARTIFACT_FOLDER ## this does not work
    expire_in: 2 hrs
The issue here is that dist/$NAME_OF_ARTIFACT_FOLDER does not work; I'm not sure if I am missing something here.
EDIT
Upon hardcoding the expected path as in the following, it works fine, which would mean that the folder name is valid and that the artifact is indeed identified appropriately, but it does NOT work when the path comes from $NAME_OF_ARTIFACT_FOLDER:
artifacts:
  paths:
    - dist/myApp_1.0.0.zip ## hardcoding the expected path works just fine
  expire_in: 2 hrs
Well, that is not possible currently. The manual says the following:
The artifacts:name variable can make use of any of the predefined variables.
That is, variables set in the script part of the job cannot be used.
This is an open issue at GitLab
Artifacts Filename Cannot Be Set with Dynamic Variables
I had a project variable defining the path to a zip file in a script, which I reused at the artifacts:paths level. The linked issue would have been more obvious had the artifacts:paths value completely failed to get assigned, but in my case it inherited a value different from the one set a mere two lines above in my job!
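A common workaround (not part of the answer above, just a sketch) is to keep the dynamic name inside the script and expose the file under a fixed path, or to use a glob that artifacts:paths can match without any variables:

prod_build:
  stage: build
  script:
    - yarn run build
    # copy the dynamically named file to a fixed name so artifacts:paths needs no script variable
    - cp "dist/$NAME_OF_ARTIFACT_FOLDER" dist/artifact.zip
  artifacts:
    paths:
      - dist/artifact.zip   # fixed path known at configuration time
      # - dist/*.zip        # alternatively, a glob that matches the dynamic name
    expire_in: 2 hrs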

Execute a script before the branch is deleted in GitLab-CI

GitLab-CI executes the stop-environment script in dynamic environments after the branch has been deleted. This effectively forces you to put all the teardown logic into the .gitlab-ci.yml instead of a script that .gitlab-ci.yml just calls.
Does anyone know a workaround for this? I have a shell script that removes the deployment. This script is part of the repository and can also be called locally (i.e. not only in a CI environment). I want GitLab-CI to call this script when removing a dynamic environment, but it's obviously not there anymore once the branch has been deleted. I also cannot put this script into the artifacts, as it is generated before the build by a configure script and contains secrets. It would be great if one could execute the teardown script before the branch is deleted.
Here's a relevant excerpt from the .gitlab-ci.yml
deploy_dynamic_staging:
  stage: deploy
  variables:
    SERVICE_NAME: foo-service-$CI_BUILD_REF_SLUG
  script:
    - ./configure
    - make deploy.staging
  environment:
    name: staging/$CI_BUILD_REF_SLUG
    on_stop: stop_dynamic_staging
  except:
    - master

stop_dynamic_staging:
  stage: deploy
  variables:
    GIT_STRATEGY: none
  script:
    - make teardown # <- this fails
  when: manual
  environment:
    name: staging/$CI_BUILD_REF_SLUG
    action: stop
Probably not ideal, but you can curl the script using the GitLab API before running it:
curl \
  -X GET "https://gitlab.example.com/raw/master/script.sh" \
  -H "PRIVATE-TOKEN: ${GITLAB_TOKEN}" > script.sh
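Applied to the stop job from the question, a rough sketch (the script name, host, and the $GITLAB_TOKEN variable are placeholders, and the raw-file URL must point at a branch that still exists, e.g. master):

stop_dynamic_staging:
  stage: deploy
  variables:
    GIT_STRATEGY: none   # nothing is checked out, the branch may already be gone
  script:
    # fetch the teardown script from the default branch and run it
    - curl -H "PRIVATE-TOKEN: ${GITLAB_TOKEN}" "https://gitlab.example.com/<group>/<project>/-/raw/master/teardown.sh" -o teardown.sh
    - chmod +x teardown.sh
    - ./teardown.sh
  when: manual
  environment:
    name: staging/$CI_BUILD_REF_SLUG
    action: stop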
GitLab-CI executes the stop-environment script in dynamic environments after the branch has been deleted.
That includes:
An on_stop action, if defined, is executed.
With GitLab 15.1 (June 2022), you can skip that on_stop action:
Force stop an environment
In 15.1, we added a force option to the stop environment API call.
This allows you to delete an active environment without running the specified on_stop jobs in cases where running these defined actions is not desired.
See Documentation and Issue.
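A rough sketch of that force-stop call (host, project ID, environment ID, and token are placeholders):

curl --request POST \
  --header "PRIVATE-TOKEN: <your_access_token>" \
  "https://gitlab.example.com/api/v4/projects/<project_id>/environments/<environment_id>/stop?force=true"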
