Why do we use images in a CI/CD pipeline? - GitLab

Actually, I'm facing a problem in this code:
Stages (in order):
- Changes
- Lint
- Build
- Tests
- E2E
- SAST
- DAST
- Publish
- Deployment
# Get Runner Image
image: node:latest

# Set variables for MySQL
variables:
  MYSQL_ROOT_PASSWORD: secret
  MYSQL_PASSWORD:
  ..
  ..

script:
  - ./addons/scripts/ci/lintphp.sh
Why do we use image? When I asked, someone said that we build on it, like the Dockerfile command FROM ubuntu:latest, and someone else told me it's because it executes the code. I also don't actually understand the script tag above: does it execute inside the image or on the runner?

GitLab Runner is an open source application that collects a pipeline job's payload and executes it. To do so, it implements a number of executors that can be used to run your builds in different scenarios; if you are using the Docker executor, you need to specify which image will be used to run your builds. The commands under script then execute inside a container started from that image, on the machine where the runner is hosted.
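As a minimal sketch of this (the job names here are invented for illustration), the image can be set globally or per job, and the script commands run inside a container started from that image:

```yaml
# Default image: any job without its own `image:` runs in this container
image: node:latest

lint:
  # These commands execute inside a `node:latest` container,
  # which the runner starts on whatever machine hosts it
  script:
    - ./addons/scripts/ci/lintphp.sh

docs:
  # A single job can override the default image
  image: alpine:latest
  script:
    - echo "this runs in alpine instead"
```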

Related

Use cache with multiple images in GitLab CI/CD

I am making a GitLab CI/CD pipeline that uses two different images.
One of them requires the installation of some packages using npm. In order to avoid installing them multiple times, I've added a cache.
Let's see this example:
stages:
  - build
  - quality

cache:
  paths:
    - node_modules/

build-one:
  image: node:latest
  stage: build
  script:
    - npm install <some package>

build-two:
  image: foo_image:latest
  stage: build
  script:
    - some cmd

quality:
  image: node:latest
  stage: quality
  script:
    - <some cmd using the previously installed package>
Having two different Docker images forces me to specify the image inside each job definition. From my tests, the cache isn't actually used, and the command executed by the quality job fails because the package isn't installed.
Is there a solution to this problem?
Many thanks !
Kev'.
There can be two cases:
- The same runner is used to run all the jobs. In this case the cache, as you specified it, should work fine.
- Different runners are used to run different jobs. Suppose the build job runs on runner 1 and the quality job runs on runner 2; the cache will then only be present on runner 1.
In order to make use of caching in case 2, you will have to use distributed caching.
Runner 1 will then push the cache to S3 when it runs the build job, and runner 2 will pull the cache during the quality job and can use it.
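Distributed caching is configured on the runners themselves, not in .gitlab-ci.yml. A sketch of the relevant section of the runner's config.toml, assuming an S3 bucket (the bucket name, region, and credential values are placeholders, and the exact keys depend on your gitlab-runner version):

```toml
[[runners]]
  name = "runner-1"
  executor = "docker"
  [runners.cache]
    Type = "s3"
    # Shared = true lets other runners reuse this cache
    Shared = true
    [runners.cache.s3]
      ServerAddress = "s3.amazonaws.com"
      BucketName = "my-ci-cache"
      BucketLocation = "us-east-1"
      AccessKey = "AWS_ACCESS_KEY"
      SecretKey = "AWS_SECRET_KEY"
```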

Gitlab docker ci template - what is the header name for

In the first example, the image name is docker:latest.
And stage is part of the pipeline definition: I can have build, test, deploy stages.
Snippet 1
gitlab-ci.yml
docker-build:
  # Use the official docker image.
  image: docker:latest
  stage: build
May I know the definition of docker-build?
Can I name it build or something else? What is its usage?
Snippet 2
gitlab-ci.yml
image: docker:latest

services:
  - docker:dind

build:
  stage: build
  script:
    - docker build -t test .
In this example, services is defined. Why do I need services, and when don't I need it?
Can I say this example must have another file, Dockerfile, so that the docker build command works?
Once built successfully, will the image be named docker:latest?
Job naming:
There are a few reserved keywords which you cannot use for a job name, like stages, services etc.; see https://docs.gitlab.com/ee/ci/yaml/#unavailable-names-for-jobs
You can name your job anything else you like.
Stages:
As you have written, there is a certain set of predefined stages: .pre, build, test, deploy and .post - but you can also define your own stages with
stages:
  - build
  - build-docker
  - test
  - deploy
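To connect the two points above, a small sketch (job names here are arbitrary, invented for illustration): the job name and the stage it belongs to are independent of each other:

```yaml
stages:
  - build
  - build-docker

compile:                # any non-reserved name is fine as a job name
  stage: build
  script:
    - echo "compiling"

package-image:          # the job name does not have to match the stage name
  stage: build-docker
  script:
    - echo "building the docker image"
```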
Dockerfile:
Yes, you need a Dockerfile for docker build to work, and the tag of your image will be test, as it is defined with -t test.
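For completeness, a minimal Dockerfile that docker build -t test . could consume; its contents are an assumption for illustration, not taken from the question:

```dockerfile
# Hypothetical Dockerfile for a small Node app
FROM node:latest
WORKDIR /app
# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm ci
COPY . .
CMD ["npm", "start"]
```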
Regarding building Docker images with GitLab CI, I can recommend reading https://blog.callr.tech/building-docker-images-with-gitlab-ci-best-practices/.
I hope this helps somehow. Generally speaking, I recommend you read the GitLab documentation and the getting started guide: https://docs.gitlab.com/ee/ci/quick_start/ - it explains a lot of the default concepts. I would also recommend not asking too many questions within one Stack Overflow question; keep it focused on one topic.

Changing Gitlab SAST json report names

Issue
Note: My CI contains a code complexity checker, which can be ignored. This question is mainly focused on SAST.
I have recently set up a SAST pipeline for one of my GitLab projects. The GitLab CE and GitLab Runner instances are self-hosted. When the SAST scan is completed, the downloaded artifacts / JSON reports all have the same name, gl-sast-report.json. In this example, the artifacts bandit-sast and semgrep-sast both produce gl-sast-report.json when downloaded.
SAST configuration
stages:
  - CodeScan
  - CodeComplexity

sast:
  stage: CodeScan
  tags:
    - sast

code_quality:
  stage: CodeComplexity
  artifacts:
    paths: [gl-code-quality-report.json]
  services:
  tags:
    - cq-sans-dind

include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Code-Quality.gitlab-ci.yml
Completed SAST results
End Goal
1. If possible, how could I change the name of the artifacts for bandit-sast and semgrep-sast?
2. If question one is possible, does this mean I have to manually specify each analyser for various projects? Currently, based on my .gitlab-ci.yml, the SAST analysers are automatically detected based on the project language.
If you're using the pre-built SAST images, this isn't possible, even if you run the docker command manually like so:
docker run --volume "$PWD":/code --env=LM_REPORT_VERSION="2.1" --env=CI_PROJECT_DIR=/code registry.gitlab.com/gitlab-org/security-products/analyzers/license-finder:latest
When using these SAST (and DAST) images, the report file will always have the name given in the docs. However, if you run the docker command manually as above, you can rename the file before it's uploaded as an artifact, though it will still have the same JSON structure/content.
Run License Scanning Analyzer:
  stage: sast
  script:
    - docker run --volume "$PWD":/code --env=LM_REPORT_VERSION="2.1" --env=CI_PROJECT_DIR=/code registry.gitlab.com/gitlab-org/security-products/analyzers/license-finder:latest
    - mv gl-license-scanning-report.json license-scanning-report.json
  artifacts:
    reports:
      license_scanning: license-scanning-report.json
The only way to change the JSON structure/content is to implement the SAST tests manually, without using the provided images at all. You can see all the available SAST analyzers in this GitLab repo.
For the License Finder analyzer, as an example, the Dockerfile says the entrypoint for the image is the run.sh script.
You can see on line 20 of run.sh that it sets the name of the file to gl-license-scanning-report.json. Since we can already change the name by running the docker image manually, this doesn't really help; however, we can see that the actual analyzing comes from the scan_project function, which you could replicate.
So while it is possible to manually run these analyzers without the pre-built images, it will be much more difficult to get them to work.

Are containers available between stages in Gitlab CI

Is a container that is used in the build stage accessible in the next stage? I have YAML like this:
build_backend:
  image: web-app
  services:
    - mysql:5.7
  stage: build
  script:
    - make build

test_frontend:
  image: node:8
  stage: test
  script:
    - make run-tests
My tests, which are triggered by make run-tests, need to make HTTP requests against the backend container. Is that possible?
I was trying to avoid building a new container and pushing it to a registry only to pull it down again, but maybe there is no other way. If I did this, would my web-app container still have access to the mysql container if I added it as a service in the test_frontend job?
No, containers are not available between stages. Job artifacts (i.e. files) are passed between stages by default and can also be passed explicitly between jobs.
If you need to run tests against a container, you should indeed pull it down again from a registry. Then, you can use the docker in docker (dind) service to run your tests.
I think this blog post explains a similar use case nicely. The testing job described there is the following:
test:
  stage: test
  script:
    - docker run -d --env-file=.postgres-env postgres:9.5
    - docker run --env-file=.environment --link=postgres:db $CONTAINER_TEST_IMAGE nosetests --with-coverage --cover-erase --cover-package=${CI_PROJECT_NAME} --cover-html
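Along the same lines, a job that pulls a previously pushed backend image and tests against it with docker-in-docker could be sketched like this; the image names, container names, and test command are placeholders, not from the question:

```yaml
test_frontend:
  stage: test
  image: docker:latest
  services:
    - docker:dind
  script:
    # Pull the backend image that an earlier job pushed to the registry
    - docker pull $CI_REGISTRY_IMAGE/web-app:latest
    # Recreate its dependencies and the backend itself inside this job
    - docker run -d --name db mysql:5.7
    - docker run -d --name backend --link db:mysql $CI_REGISTRY_IMAGE/web-app:latest
    # Run the tests in a linked container so they can reach the backend
    - docker run --link backend:backend node:8 npm test
```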

How to deploy to custom server after success CI in docker environment?

I already have CI set up, but now I want to deploy to my server. My server is the same machine where I run CI, but CI runs in a Docker executor, so it has no access to the server's folders to update production.
Here is my script:
image: node:9.11.2

cache:
  paths:
    - node_modules/

before_script:
  - npm install

stages:
  - test
  - deploy

test:
  stage: test
  script:
    - npm run test

deploy:
  stage: deploy
  script:
    # Here I want to go to /home/projectFolder and run git pull, npm i, npm start,
    # but I can't, because CI runs in a Docker environment which has no access
    # to my server's directories.
First of all, you should consider using GitLab Auto DevOps (or use it as a base to customize if you don't want to use Kubernetes).
You have multiple ways to do this, but the simplest should be to use an alpine image and:
- install ssh (if necessary)
- load your private SSH key (from pipeline secrets)
- run your npm commands through ssh.
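A sketch of such a deploy job, assuming an SSH_PRIVATE_KEY CI/CD variable and placeholder host, user, and path values:

```yaml
deploy:
  stage: deploy
  image: alpine:latest
  before_script:
    - apk add --no-cache openssh-client
    - eval $(ssh-agent -s)
    # SSH_PRIVATE_KEY is a (masked) CI/CD variable holding the private key
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - ssh-keyscan my-server.example.com >> ~/.ssh/known_hosts
  script:
    - ssh deployer@my-server.example.com "cd /home/projectFolder && git pull && npm i && npm start"
```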
The cleanest way would be:
- generating and adding a valid Dockerfile to your project
- adding docker image generation for each commit on master (in your pipeline)
- adding a docker rm of the running image (in your pipeline)
- adding a docker run of the newly generated image (in your pipeline), sharing your docker volume
- making nginx redirect to your container.
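The docker-based variant could look roughly like this in the deploy job (the image name, container name, and port are placeholders):

```yaml
deploy:
  stage: deploy
  script:
    - docker build -t myapp:latest .
    # Stop and remove the previously running container, if any
    - docker rm -f myapp || true
    # Start the new image; nginx on the host proxies to this port
    - docker run -d --name myapp -p 3000:3000 myapp:latest
```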
I can give more detailed advice depending on what you decide to do.
Hoping I helped.
