Call a specific runner for a CI job - GitLab

How can I call a specific runner for a CI job? Problem: I have default runners, and now I have to install my local runner and run tests on it. But I don't have the option of turning off the runners in GitLab, because the whole project builds on the remote runners.

You can use tags to tag a runner when you register it, and you can specify that your job will only run on runners with these tags.
E.g. if you tagged your runner with 'fancy-example', then you can use it like this:
build:
  tags:
    - fancy-example
  script:
    - echo Hello
See the GitLab docs for more detailed examples and explanations:
https://docs.gitlab.com/ce/ci/yaml/README.html#tags
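For instance, a rough sketch of routing only the tests to the newly registered local runner while everything else keeps running on the default runners (the job names and the test command are placeholders; the tag is the example tag from above):

test:
  tags:
    - fancy-example     # picked up only by the runner registered with this tag
  script:
    - ./run_tests.sh    # placeholder for your actual test command

build:
  script:
    - echo "No tags, so any available default runner can take this job"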


Why do we use images in a CI/CD pipeline?

Actually, I'm facing a problem in this code:

stages:
  - Changes
  - Lint
  - Build
  - Tests
  - E2E
  - SAST
  - DAST
  - Publish
  - Deployment

# Get Runner Image
image: node:latest

# Set Variables for mysql
variables:
  MYSQL_ROOT_PASSWORD: secret
  MYSQL_PASSWORD:
  ..
  ..

script:
  - ./addons/scripts/ci/lintphp.sh
Why do we use image? I asked, and someone said that we build on it, like the Dockerfile command FROM ubuntu:latest, and someone else told me it's because it executes the code. I also don't really understand the script tag above: does it execute inside the image or on the runner?
GitLab Runner is an open source application that collects a pipeline job's payload and executes it. To do that, it implements a number of executors that can be used to run your builds in different scenarios. If you are using a Docker executor, you need to specify which image will be used to run your builds.
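As a rough illustration (the image name and commands are placeholders), with a Docker executor the runner pulls the image, starts a container from it, and runs every line of script inside that container:

image: node:latest    # default container image for jobs on a Docker executor

lint:
  script:
    - node --version                    # runs inside the node:latest container
    - ./addons/scripts/ci/lintphp.sh    # so does your lint script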

Problems with GitLab CI/CD on a local machine

I'm using gitlab-runner to run CI/CD locally.
It works properly when I specify all jobs in .gitlab-ci.yml like
stages:
  - test

test1:
  stage: test
  script:
    - echo "ok"
and run gitlab-runner exec shell test1
In general, I'd like to store different jobs in different files. For example, I made test-pipeline.yml with the jobs that relate to the test stage and put it in a folder named .gitlab.
The .gitlab-ci.yml then contains only two rows:
include:
  local: .gitlab/test-pipeline.yml
I commit and push the changes to the remote repo and it works there, but the command gitlab-runner exec shell job_name fails because it can't find such a job.
Perhaps I have to edit some of gitlab-runner's config, but it's not obvious.
Has anybody faced the same problem?
Thanks in advance!
gitlab-runner exec has many limitations; it does not have all the same features as the regular gitlab-runner. One such limitation is that it does not support the include: statement.
So you won't be able to use gitlab-runner exec against this kind of config file that uses include:.
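For reference, a sketch of the split layout from the question (the body of the included job is assumed to match the inline example above): the GitLab server resolves the include, but gitlab-runner exec only sees jobs written directly in .gitlab-ci.yml, so test1 is invisible to it.

# .gitlab/test-pipeline.yml (assumed content)
stages:
  - test

test1:
  stage: test
  script:
    - echo "ok"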

How to run a script from repo A in the pipeline of repo B in GitLab

I have two repositories in GitLab, repositories A and B, let's say.
Repo A contains:
read_ci.yml
read_ci.sh
read_ci.yml contains:
stages:
  - initialise

create checksum from pipeline:
  stage: initialise
  script:
    - chmod +x read_ci.sh
    - source ./read_ci.sh
Repo B contains:
gitlab-ci.yml
gitlab-ci.yml contains:
include:
  project: 'Project/project_name'
  file:
    - '.gitlab-ci.yml'
  ref: main
Obviously, this doesn't do what I intend.
What I want to achieve is to run the project A script in the project B pipeline.
The reason is that I want project A to be called from multiple different pipelines and run there.
Essentially, I'm looking for a GitLab alternative to this Azure Pipelines question: Run script from resource repo.
Submodules would absolutely work, as Davide mentions, though it's kind of like using a sledgehammer to hang a picture. If all you want is a single script from the other repository, just download it into your container: use the v4 API with your CI_JOB_TOKEN to download the file, then simply run it using sh. If you have many files in your secondary repository and want access to them all, then use submodules as Davide mentions, and make sure your CI job retrieves them by setting the submodule strategy like this:
variables:
  GIT_SUBMODULE_STRATEGY: normal
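A rough sketch of the download approach, assuming (as suggested above) that the job token is allowed to read project A's repository files; the project ID and branch are placeholders, and a PRIVATE-TOKEN header with a project access token works the same way if the job token is not permitted:

fetch and run script:
  script:
    - >
      curl --header "JOB-TOKEN: $CI_JOB_TOKEN"
      "$CI_API_V4_URL/projects/<project-A-id>/repository/files/read_ci.sh/raw?ref=main"
      --output read_ci.sh
    - chmod +x read_ci.sh
    - sh read_ci.sh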
If you want to run the project A script in the project B pipeline, you can add repository A as a git submodule in B:
git submodule add -b <branch-A> <git-repository-A> <target-dir>
You also need to add the variable GIT_SUBMODULE_STRATEGY: recursive to the CI job.
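Putting it together, a minimal sketch of the submodule route (the submodule path repo-a is a placeholder chosen for illustration):

run repo A script:
  variables:
    GIT_SUBMODULE_STRATEGY: recursive
  script:
    - chmod +x repo-a/read_ci.sh
    - source repo-a/read_ci.sh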

Building a Docker image for a Node.js app in GitLab CI

I'm working on a Node.js application for which my current Dockerfile looks like this:
# Stage 0
# =======
FROM node:10-alpine as build-stage
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
COPY . ./
RUN yarn build
# Stage 1
# =======
FROM nginx:mainline-alpine
COPY --from=build-stage /app/build /usr/share/nginx/html
I'd like to integrate this into a GitLab CI pipeline but I'm not sure if I got the basic idea. So far I know that I need to create a .gitlab-ci.yml file which will be later picked up by GitLab.
My basic idea is:
I push my code changes to GitLab.
GitLab builds a new Docker image based on my Dockerfile.
GitLab pushes this newly created image to a "production" server (later).
So, my question is:
My .gitlab-ci.yml should then contain something like a build job which triggers... what? The docker build command? Or do I need to "copy" the Dockerfile content to the CI file?
GitLab CI executes the pipeline on runners that need to be registered with the project using generated tokens (Settings > CI/CD > Runners). You can also use shared runners across multiple projects. The pipeline is configured with the .gitlab-ci.yml file, and from that file you can build, test, push and deploy Docker images whenever something happens in the repo (push to a branch, merge request, etc.).
It's also useful when your application already has a Dockerfile that can be used to create and test an image.
So basically you need to install the runner, register it with the token of your project (or use shared runners) and configure your CI YAML file. The recommended approach is Docker-in-Docker, but it is up to you. You can also check this basic example. Finally, you can deploy your container directly to Kubernetes, Heroku or Rancher. Remember to safely configure your credentials and secrets in Settings > Variables.
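As a sketch, a Docker-in-Docker build-and-push job could look like this; the docker:24 image tags and the stage name are just one possible choice, the service requires a runner that allows privileged mode, and the CI_REGISTRY* variables are the ones GitLab predefines when the container registry is enabled:

build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind            # needs privileged = true on the runner
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"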
Conclusion
GitLab CI is awesome, but I recommend that you first think about the git workflow you want to use, in order to set the stages in the .gitlab-ci.yml file. This will allow you to configure your Node project as a pipeline, and then it would be easy to port it to other tools such as Jenkins pipelines or Travis, for example.
Build job trigger:
Option 1:
Add when: manual in the job, and you can run the job manually from CI/CD > Pipelines.
Option 2:
only:
  - <branchname>
In this case the job starts when you push to the defined branch (this is my personal suggestion).
Option 3:
Add nothing, and the job will run every time you push code.
Of course you can combine the options above; a combined sketch follows below.
In addition, you may start the job with a web request by using the job token.
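For instance, a sketch combining options 1 and 2 (the branch name and the build command are placeholders):

build:
  stage: build
  script:
    - docker build -t my-app .   # placeholder build step
  only:
    - master                     # option 2: only for pushes to this branch
  when: manual                   # option 1: started by hand from CI/CD > Pipelines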
The docker build command will work in the pipeline, in the script section. The requirement is a Docker engine on the gitlab-runner that picks up the job.
Or do I need to "copy" the Dockerfile content to the CI file?
No, you don't.

Execute a script before the branch is deleted in GitLab-CI

GitLab-CI executes the stop-environment script in dynamic environments after the branch has been deleted. This effectively forces you to put all the teardown logic into the .gitlab-ci.yml instead of a script that .gitlab-ci.yml just calls.
Does anyone know a workaround for this? I have a shell script that removes the deployment. This script is part of the repository and can also be called locally (i.e. not only in a CI environment). I want GitLab-CI to call this script when removing a dynamic environment, but it's obviously not there anymore once the branch has been deleted. I also cannot put this script into the artifacts, as it is generated before the build by a configure script and contains secrets. It would be great if one could execute the teardown script before the branch is deleted.
Here's a relevant excerpt from the .gitlab-ci.yml
deploy_dynamic_staging:
  stage: deploy
  variables:
    SERVICE_NAME: foo-service-$CI_BUILD_REF_SLUG
  script:
    - ./configure
    - make deploy.staging
  environment:
    name: staging/$CI_BUILD_REF_SLUG
    on_stop: stop_dynamic_staging
  except:
    - master

stop_dynamic_staging:
  stage: deploy
  variables:
    GIT_STRATEGY: none
  script:
    - make teardown # <- this fails
  when: manual
  environment:
    name: staging/$CI_BUILD_REF_SLUG
    action: stop
Probably not ideal, but you can curl the script using the GitLab API before running it:
curl \
  -X GET "https://gitlab.example.com/api/v4/projects/<project-id>/repository/files/script.sh/raw?ref=master" \
  -H "PRIVATE-TOKEN: ${GITLAB_TOKEN}" > script.sh
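To show where that would live, a sketch of the stop job using this workaround (the project ID and script path are placeholders; it only helps if the script, or at least a generic teardown entry point, is committed on a branch that still exists, e.g. master):

stop_dynamic_staging:
  stage: deploy
  when: manual
  variables:
    GIT_STRATEGY: none          # the branch checkout may already be gone
  environment:
    name: staging/$CI_BUILD_REF_SLUG
    action: stop
  script:
    - >
      curl --header "PRIVATE-TOKEN: ${GITLAB_TOKEN}"
      "https://gitlab.example.com/api/v4/projects/<project-id>/repository/files/script.sh/raw?ref=master"
      --output teardown.sh
    - sh teardown.sh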
GitLab-CI executes the stop-environment script in dynamic environments after the branch has been deleted.
That includes:
An on_stop action, if defined, is executed.
With GitLab 15.1 (June 2022), you can skip that on_stop action:
Force stop an environment
In 15.1, we added a force option to the stop environment API call.
This allows you to delete an active environment without running the specified on_stop jobs in cases where running these defined actions is not desired.
See Documentation and Issue.
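A sketch of that API call, wrapped here in a manual utility job purely for illustration (the project ID, environment ID and token variable are placeholders; the same request can be made from any shell):

force_stop_environment:
  stage: deploy
  when: manual
  script:
    - >
      curl --request POST --header "PRIVATE-TOKEN: ${GITLAB_TOKEN}"
      "https://gitlab.example.com/api/v4/projects/<project-id>/environments/<environment-id>/stop?force=true"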
