I'm very new to GitLab and trying to understand the .gitlab-ci.yml file. I would be thankful if someone could help me with this: what will all these commands do, and where will all these packages be installed, inside a Docker container?
staging:
  stage: staging
  script:
    - apt-get update -qy
    - apt-get install -y ruby-dev
    - gem install dpl
    - dpl --provider=heroku --app=$HEROKU_STAGING_APP --api-key=$HEROKU_STAGING_API_KEY --skip-cleanup
  only:
    - main
Let me try to explain.
.gitlab-ci.yml is the default file that GitLab reads to create and execute a pipeline whenever there are changes in the repository (a new commit, a new branch, a new tag, etc.).
Now, regarding the specifics of the question: this defines a job that will be executed as part of the staging stage. The actual sequence depends on how many stages are defined in your file and where staging appears in that sequence (please refer to stages: in your file).
By default, all jobs run on a runner. Runners are identified by tags; as I don't see any tags in your particular job, it's a bit difficult to comment on which runner will pick it up.
You may have an explicit image keyword in your job, e.g. image: openjdk:17-alpine, to run the job in a particular container.
So, whatever commands are written in the script: block get executed on either:
- the runner where the job is launched, or
- the container launched from the image specified by the image keyword.
I hope this helps you understand the basic execution.
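To make that concrete, here is a minimal sketch of the same job pinned to a Docker image. The stages: list and the ruby:3.2 image are assumptions for illustration, not part of the original job; with an image set, the apt-get and gem commands run inside that container rather than directly on the runner host:

```yaml
stages:
  - staging

staging:
  stage: staging
  image: ruby:3.2          # assumed image; all script commands run inside this container
  script:
    - apt-get update -qy             # refresh the container's package index
    - apt-get install -y ruby-dev    # headers needed to build native gems
    - gem install dpl                # dpl is the deployment tool used in the next line
    - dpl --provider=heroku --app=$HEROKU_STAGING_APP --api-key=$HEROKU_STAGING_API_KEY --skip-cleanup
  only:
    - main
```

Everything installed by apt-get and gem exists only inside that container for the duration of the job; it is discarded when the job finishes.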
I am making a GitLab CI/CD pipeline that uses two different images.
One of them requires the installation of some packages using npm. In order to avoid installing them multiple times, I've added a cache.
Let's see this example :
stages:
  - build
  - quality

cache:
  paths:
    - node_modules/

build-one:
  image: node:latest
  stage: build
  script:
    - npm install <some package>

build-two:
  image: foo_image:latest
  stage: build
  script:
    - some cmd

quality:
  image: node:latest
  stage: quality
  script:
    - <some cmd using the previously installed package>
Having two different Docker images forces me to specify the image inside each job definition. From my tests the cache isn't actually used, and the command executed by the quality job fails because the package isn't installed.
Is there a solution to this problem?
Many thanks!
Kev'.
There can be two cases:
1. The same runner runs all the jobs. In this case the cache, the way you specified it, should work fine.
2. Different runners run different jobs. Suppose the build job runs on runner 1 and the quality job runs on runner 2; then the cache is only present on runner 1.
In order to make use of caching in case 2, you will have to use distributed caching.
Then, when runner 1 runs the build job, it will push the cache to S3, and runner 2 will pull the cache during the quality job and can then use it.
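If setting up distributed caching isn't an option, one workaround is to pass node_modules between jobs as an artifact instead of a cache: artifacts are uploaded to the GitLab server, so they are available to whichever runner picks up the later job. A minimal sketch, reusing the job names from the question (the expire_in value is an arbitrary choice):

```yaml
build-one:
  image: node:latest
  stage: build
  script:
    - npm install
  artifacts:
    paths:
      - node_modules/     # uploaded to GitLab, so any runner can download it
    expire_in: 1 hour     # keep it only as long as the pipeline needs it

quality:
  image: node:latest
  stage: quality
  script:
    - <some cmd using the previously installed package>
```

Jobs in later stages download artifacts from earlier stages automatically, so the quality job sees node_modules without any cache involvement.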
Actually, I'm facing a problem in this code:
stages:
  - Changes
  - Lint
  - Build
  - Tests
  - E2E
  - SAST
  - DAST
  - Publish
  - Deployment

# Get Runner Image
image: node:latest

# Set Variables for mysql
variables:
  MYSQL_ROOT_PASSWORD: secret
  MYSQL_PASSWORD:
  ..
  ..

script:
  - ./addons/scripts/ci/lintphp.sh
Why do we use image? I asked, and someone said that we build on it, like the Dockerfile command FROM ubuntu:latest,
and someone else told me it's because it executes the code. And I don't actually know what the script keyword above even means: does it execute inside the image or on the runner?
GitLab Runner is an open source application that collects the pipeline job payload and executes it. It implements a number of executors that can be used to run your builds in different scenarios; if you are using a Docker executor, you need to specify which image will be used to run your builds.
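To illustrate (the job names and the openjdk image here are invented for this example, not taken from the question), with a Docker executor each job runs its script: commands inside a container created from whichever image the job resolves to:

```yaml
image: node:latest          # default image for any job that doesn't set its own

lint:
  script:                   # runs inside a node:latest container
    - ./addons/scripts/ci/lintphp.sh

build-jar:
  image: openjdk:17-alpine  # this job overrides the default image
  script:                   # runs inside an openjdk:17-alpine container
    - java -version
```

With a shell executor, by contrast, there is no container at all and script: commands run directly on the runner host.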
I'm trying GitLab for my first example.
I can't see where the error is here.
This is for Windows, running Firebase, Vue.js and Node.js on GitLab:
image: node:alpine

cache:
  paths:
    - node_modules/

deploy_production:
  stage: deploy
  environment: Production
  only:
    - master
script:
  - npm install
  - npm i -g firebase tools
  - npm run build
  - firebase deploy --non-interactive --token "1/CYHKW-CuYsKOcy2Eo6_oC9akwGjyqtmtRZok93xb5VY"
This GitLab CI configuration is invalid: jobs:deploy_production script
can't be blank
You specify a stage in your deploy_production job but you don't define stages.
Add:
stages:
  - deploy
before your job definition.
Late to the party, but one problem here is the indentation of script, which needs to be under the deploy_production job; script isn't allowed at the top level as you've shown it here.
The error is kind of confusing, but it does indicate the situation: because script isn't at the right indent level, it's not part of the job, and a job requires a script.
Should be:
deploy_production:
  stage: deploy
  environment: Production
  only:
    - master
  script:
    - npm install
Another issue is you should screen out your token in the post!
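Putting the stages fix and the script indentation fix from the answers above together, the file would look roughly like this. Note two assumptions in this sketch: the token is moved into a CI/CD variable (here named FIREBASE_TOKEN, a made-up name), and the package is assumed to be firebase-tools with a hyphen, which is what the npm package is actually called:

```yaml
image: node:alpine

stages:
  - deploy                  # declares the stage that deploy_production refers to

cache:
  paths:
    - node_modules/

deploy_production:
  stage: deploy
  environment: Production
  only:
    - master
  script:                   # indented under the job, as required
    - npm install
    - npm i -g firebase-tools
    - npm run build
    - firebase deploy --non-interactive --token "$FIREBASE_TOKEN"
```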
Another way you can get this error message:
Here's what I was trying to do in gitlab-ci.yml:
default:
  cache:
    paths:
      - .gradle
And I was getting this error message:
jobs:default script can't be blank
I was using the documentation here: https://docs.gitlab.com/ee/ci/yaml/
It clearly indicates how to use default. The message implies GitLab thought default was a job.
ANSWER
You probably know where this is going -- the version I was using was about 3 years behind latest, and the "default" keyword had been added since then.
Check the version of gitlab you're using by going to the Help page (gitlab.domain.com/help), and it's listed at the top of the page.
To find the right documentation, I went to https://gitlab.com/rluna-gitlab/gitlab-ce, then chose my version from the branch dropdown. From there I went to the docs folder, then clicked the link in the Popular Documentation table in the README.
https://gitlab.com/rluna-gitlab/gitlab-ce/-/blob/11-6-stable/doc/ci/yaml/README.md
My goal is to show badges (for example, a pylint score badge) based on pipeline results.
I have a private GitLab CE Omnibus instance with the following .gitlab-ci.yml:
image: python:3.6

stages:
  - lint
  - test

before_script:
  - python -V
  - pip install pipenv
  - pipenv install --dev

lint:
  stage: lint
  script:
    - pipenv run pylint --output-format=text --load-plugins pylint_django project/ | tee pylint.txt
    - score=$(sed -n 's/^Your code has been rated at \([-0-9.]*\)\/.*/\1/p' pylint.txt)
    - echo "Pylint score was $score"
    - ls
    - pwd
    - pipenv run anybadge --value=$score --file=pylint.svg pylint
  artifacts:
    paths:
      - pylint.svg

test:
  stage: test
  script:
    - pipenv run python manage.py test
So I thought that I would store the image in the artifacts of the lint job and display it via the badge feature.
But I encounter the following issue : when I browse https://example.com/[group]/[project]/-/jobs/[ID]/artifacts/file/pylint.svg, instead of seeing the badge I have the following message :
The image could not be displayed because it is stored as a job artifact. You can download it instead.
And anyway, I feel like this is the wrong approach, because even if I could get the image, there doesn't seem to be a way to get the image from the last job, since GitLab badge image URLs only support the %{project_path}, %{project_id}, %{default_branch} and %{commit_sha} placeholders.
So how would one add a badge to a GitLab project based on an SVG generated from results in a GitLab pipeline?
My guess is that I could push to a .badge folder, but that doesn't sound like a clean solution.
You can indeed get the artifact(s) for the latest job (see documentation here), but the trick is that you need to use a slightly different URL:
https://example.com/[group]/[project]/-/jobs/artifacts/[ref]/raw/pylint.svg?job=lint
where [ref] is the reference to your branch/commit/tag.
Speaking of badge placeholders available in Gitlab, you can potentially put %{default_branch} or %{commit_sha} into [ref]. This won't allow you to get the correct badge for every branch, but at least your default branch will get one.
Please also note that ?job=lint query parameter is required, without it the URL won't work.
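For instance, combining the URL pattern and the placeholders above, the badge image URL configured in the project's badge settings could look like the following (keeping the [group]/[project] placeholders from the question):

```
https://example.com/[group]/[project]/-/jobs/artifacts/%{default_branch}/raw/pylint.svg?job=lint
```

GitLab substitutes %{default_branch} when rendering the badge, so the image always comes from the latest lint job on the default branch.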
GitLab-CI executes the stop-environment script in dynamic environments after the branch has been deleted. This effectively forces you to put all the teardown logic into the .gitlab-ci.yml instead of a script that .gitlab-ci.yml just calls.
Does anyone know a workaround for this? I have a shell script that removes the deployment. This script is part of the repository and can also be called locally (i.e. not only in a CI environment). I want GitLab CI to call this script when removing a dynamic environment, but it's obviously not there anymore once the branch has been deleted. I also cannot put this script into the artifacts, as it is generated before the build by a configure script and contains secrets. It would be great if one could execute the teardown script before the branch is deleted.
Here's a relevant excerpt from the .gitlab-ci.yml
deploy_dynamic_staging:
  stage: deploy
  variables:
    SERVICE_NAME: foo-service-$CI_BUILD_REF_SLUG
  script:
    - ./configure
    - make deploy.staging
  environment:
    name: staging/$CI_BUILD_REF_SLUG
    on_stop: stop_dynamic_staging
  except:
    - master

stop_dynamic_staging:
  stage: deploy
  variables:
    GIT_STRATEGY: none
  script:
    - make teardown # <- this fails
  when: manual
  environment:
    name: staging/$CI_BUILD_REF_SLUG
    action: stop
Probably not ideal, but you can curl the script using the GitLab API before running it (note the double quotes so that ${GITLAB_TOKEN} is actually expanded; <project_id> is a placeholder for your project's ID):

curl \
  --header "PRIVATE-TOKEN: ${GITLAB_TOKEN}" \
  "https://gitlab.example.com/api/v4/projects/<project_id>/repository/files/script.sh/raw?ref=master" \
  > script.sh
GitLab-CI executes the stop-environment script in dynamic environments after the branch has been deleted.
That includes:
An on_stop action, if defined, is executed.
With GitLab 15.1 (June 2022), you can skip that on_stop action:
Force stop an environment
In 15.1, we added a force option to the stop environment API call.
This allows you to delete an active environment without running the specified on_stop jobs in cases where running these defined actions is not desired.
See Documentation and Issue.
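As a sketch, a force stop through that API call could look like the following; the host, project ID, environment ID, and the GITLAB_TOKEN variable are all placeholders:

```shell
# Force-stop environment 42 of project 7, skipping its on_stop job.
# All IDs, the host, and GITLAB_TOKEN are placeholders for this example.
curl --request POST \
  --header "PRIVATE-TOKEN: ${GITLAB_TOKEN}" \
  --data "force=true" \
  "https://gitlab.example.com/api/v4/projects/7/environments/42/stop"
```

This deletes the active environment without running stop_dynamic_staging, which sidesteps the missing-script problem entirely when you just want the environment gone.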