Is there a way to configure multiple specifically-named environments (specifically, test, stage, and prod)?
The GitLab documentation (https://docs.gitlab.com/ce/ci/environments.html) talks about dynamically-created environments, but those are all commit/branch based.
My build steps are the same for all of them, save for swapping out the slug:
deploy_to_test:
  environment:
    name: test
    url: ${CI_ENVIRONMENT_SLUG}.mydomain.com
  script:
    - deploy ${CI_ENVIRONMENT_SLUG}

deploy_to_stage:
  environment:
    name: stage
    url: ${CI_ENVIRONMENT_SLUG}.mydomain.com
  script:
    - deploy ${CI_ENVIRONMENT_SLUG}

deploy_to_prod:
  environment:
    name: prod
    url: ${CI_ENVIRONMENT_SLUG}.mydomain.com
  script:
    - deploy ${CI_ENVIRONMENT_SLUG}
Is there any way to compress this down into one set of instructions? Something like:
deploy:
  environment:
    url: ${CI_ENVIRONMENT_SLUG}.mydomain.com
  script:
    - deploy ${CI_ENVIRONMENT_SLUG}
Yes, you can use YAML anchors. If I follow the documentation properly, you would rewrite it using a hidden key (one starting with a dot, e.g. .job_template) that defines an anchor, and then apply it with <<: *anchor_name.
For example, this defines the key:
.job_template: &deploy_definition
  environment:
    url: ${CI_ENVIRONMENT_SLUG}.mydomain.com
  script:
    - deploy ${CI_ENVIRONMENT_SLUG}
And then all deploy jobs can be written using <<: *deploy_definition. I assume environment will merge the name with the predefined url (but see the note after the jobs below).
deploy_to_test:
  <<: *deploy_definition
  environment:
    name: test

deploy_to_stage:
  <<: *deploy_definition
  environment:
    name: stage

deploy_to_prod:
  <<: *deploy_definition
  environment:
    name: prod
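One caveat before copying this: the << merge is shallow, so an environment block declared in a job replaces the template's environment map entirely, and the templated url is dropped (the last answer in this thread runs into exactly that). A minimal sketch of a workaround, anchoring the shared environment keys separately and merging them at the nested level (.env_template and &env_defaults are illustrative names):

.job_template: &deploy_definition
  script:
    - deploy ${CI_ENVIRONMENT_SLUG}

.env_template: &env_defaults
  url: ${CI_ENVIRONMENT_SLUG}.mydomain.com

deploy_to_test:
  <<: *deploy_definition
  environment:
    <<: *env_defaults  # merges url into this nested map
    name: test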
Full docs section from the link above:
YAML has a handy feature called 'anchors', which lets you easily duplicate content across your document. Anchors can be used to duplicate/inherit properties, and are a perfect example to be used with hidden keys to provide templates for your jobs.
The following example uses anchors and map merging. It will create two jobs, test1 and test2, that will inherit the parameters of .job_template, each having their own custom script defined:
.job_template: &job_definition  # Hidden key that defines an anchor named 'job_definition'
  image: ruby:2.1
  services:
    - postgres
    - redis

test1:
  <<: *job_definition  # Merge the contents of the 'job_definition' alias
  script:
    - test1 project

test2:
  <<: *job_definition  # Merge the contents of the 'job_definition' alias
  script:
    - test2 project
& sets up the name of the anchor (job_definition), << means "merge the given hash into the current one", and * includes the named anchor (job_definition again). The expanded version looks like this:
.job_template:
  image: ruby:2.1
  services:
    - postgres
    - redis

test1:
  image: ruby:2.1
  services:
    - postgres
    - redis
  script:
    - test1 project

test2:
  image: ruby:2.1
  services:
    - postgres
    - redis
  script:
    - test2 project
Besides what the accepted answer offers, I'd like to add another way to achieve much the same thing, but more flexibly: instead of defining a whole job template and merging it into a job, you can anchor just a block of script commands. What you do is create a hidden key as well, but in this format:
.login: &login |
  cmd1
  cmd2
  cmd3
  ...
And then you can apply it in different jobs by referencing the anchor with the asterisk, like:
deploy:
  stage: deploy
  script:
    - ...
    - *login
    - ...

bake:
  stage: bake
  script:
    - ...
    - *login
    - ...
And the result would be equivalent to:
deploy:
  stage: deploy
  script:
    - ...
    - cmd1
    - cmd2
    - cmd3
    - ...

bake:
  stage: bake
  script:
    - ...
    - cmd1
    - cmd2
    - cmd3
    - ...
Based on this resource:
https://gitlab.com/gitlab-org/gitlab-ce/issues/19677#note_13008199
As for the template implementation, it is indeed "merged". In my experience, if you define your own script after merging a template, the template's script is overwritten. You also cannot apply multiple templates at a time; only the last template's script is executed. For example:
.tmp1: &tmp1
  script:
    - a
    - b

.tmp2: &tmp2
  script:
    - c
    - d

job1:
  <<: *tmp1
  <<: *tmp2
  stage: xxx

job2:
  <<: *tmp2
  stage: yyy
  script:
    - e
    - f
The equivalent result would be:
job1:
  stage: xxx
  script:
    - c
    - d

job2:
  stage: yyy
  script:
    - e
    - f
If you're not sure about syntax correctness, just copy and paste your .gitlab-ci.yml content into "CI Lint" to validate it. The button is on the Pipelines tab.
Just in case: GitLab offers (since 11.3) an extends keyword, which can be used to template YAML entries (as far as I understand it). See the official docs.
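A minimal sketch of how extends could replace the anchors above; unlike <<, extends performs a deep merge, so a job-level environment:name combines with the template's url instead of replacing the whole map:

.deploy_template:
  script:
    - deploy ${CI_ENVIRONMENT_SLUG}
  environment:
    url: ${CI_ENVIRONMENT_SLUG}.mydomain.com

deploy_to_test:
  extends: .deploy_template
  environment:
    name: test  # merged with the template's url, not replacing it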
Have you tried defining variables per environment and using different jobs for the various environments? I've come up with a solution for you.
image: node:latest

variables:
  GIT_DEPTH: '0'

stages:
  - build
  - deploy

workflow:
  rules:
    - if: $CI_COMMIT_REF_NAME == "develop"
      variables:
        DEVELOP: "true"
        ENVIRONMENT_NAME: Develop
        WEBSITE_URL: $DEVELOP_WEBSITE_URL
        S3_BUCKET: (develop-s3-bucket-name)
        AWS_REGION: ************** develop
        AWS_ACCOUNT: ********develop
    - if: $CI_COMMIT_REF_NAME == "main"
      variables:
        PRODUCTION: "true"
        ENVIRONMENT_NAME: PRODUCTION
        WEBSITE_URL: $PROD_WEBSITE_URL
        S3_BUCKET: $PROD_S3_BUCKET_NAME
        AWS_REGION: ************** (prod-region)
        AWS_ACCOUNT: *********** (prod-acct)
    - when: always

build-app:
  stage: build
  script:
    # build-script
  environment:
    name: $ENVIRONMENT_NAME

deploy-app:
  stage: deploy
  script:
    # deploy-script
  environment:
    name: $ENVIRONMENT_NAME
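If each environment should also get its own URL, environment:url accepts variables too; a small sketch building on the workflow rules above (WEBSITE_URL is set there):

deploy-app:
  stage: deploy
  script:
    - echo "Deploying to $ENVIRONMENT_NAME"  # placeholder deploy command
  environment:
    name: $ENVIRONMENT_NAME
    url: $WEBSITE_URL  # resolved from whichever workflow rule matched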
I'm developing a pipeline on GitLab CI; in the first job I use gittools/gitversion to obtain the semantic version of my software.
Here is a small piece of /gitversion-ci-cd-plugin-extension.gitlab-ci.yml (full documentation here: https://gitversion.net/docs/reference/build-servers/gitlab):
.gitversion_function:
  image:
    name: gittools/gitversion
    entrypoint: ['']
  stage: .pre
  .....
  .....
  artifacts:
    reports:
      # propagates variables into the pipeline level
      dotenv: thisversion.env
Then a simplified version of my pipeline is as follows
stages:
  - .pre
  - install_dependencies
  - build
  - deploy

include:
  - local: '/gitversion-ci-cd-plugin-extension.gitlab-ci.yml'

determineversion:
  extends: .gitversion_function

install_dependencies:
  image: node:16.14
  stage: install_dependencies
  script:
    - echo ${PACKAGE_VERSION}

build:
  image: node:16.14
  stage: build
  script:
    - echo $PACKAGE_VERSION

deploy:
  image: bitnami/kubectl
  stage: deploy
  needs: ['build']
  script:
    - echo $PACKAGE_VERSION
The problem is that the environment variable $PACKAGE_VERSION works in the first two jobs, install_dependencies and build:
echo $PACKAGE_VERSION  # 0.0.1
But when the deploy job is executed, the variable is not expanded by the pipeline, and I literally obtain this:
echo $PACKAGE_VERSION  # $PACKAGE_VERSION
I found the problem.
In the last job of my pipeline, I use needs (https://docs.gitlab.com/ee/ci/yaml/#needs) to establish dependencies between jobs.
The problem is that the dotenv artifact is not automatically passed, because there is no dependency between determineversion and deploy. To fix it, I did this:
...

deploy:
  image: bitnami/kubectl
  stage: deploy
  needs: ['determineversion', 'build']  # <------
  script:
    - echo $PACKAGE_VERSION

...
I added determineversion as a dependency of deploy; this way, $PACKAGE_VERSION is printed correctly.
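Equivalently, on reasonably recent GitLab versions the long form of needs makes the artifact dependency explicit; a minimal sketch:

deploy:
  image: bitnami/kubectl
  stage: deploy
  needs:
    - job: determineversion
      artifacts: true  # pull in the dotenv report explicitly
    - job: build
  script:
    - echo $PACKAGE_VERSION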
I have a simple .gitlab-ci.yml:
stages:
  - test
  - build

.docker_image: &docker_image
  image:
    name: foo
    entrypoint: [""]
  tags:
    - deploy

test:
  stage: test
  <<: *docker_image
  script:
    - 'echo $CI_MERGE_REQUEST_IID'
    - 'echo $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME'
  only:
    - merge_requests

build:
  stage: build
  <<: *docker_image
  script:
    - export
  only:
    - stage
How can I use CI_MERGE_REQUEST_IID and CI_MERGE_REQUEST_SOURCE_BRANCH_NAME in a build job?
I need this to add information to the commit message.
I have a repo, QA/tests, and I want to run all its jobs when there is a push to this repo.
I used a script to generate the jobs dynamically:
job-generator:
  stage: generate
  tags:
    - kuber
  script:
    - scripts/generate-job.sh > generated-job.yml
  artifacts:
    paths:
      - generated-job.yml

main:
  trigger:
    include:
      - artifact: generated-job.yml
        job: job-generator
    strategy: depend
Next, I have another repo, products/first, and I want to run one specific job in QA/tests on every push to products/first, so I tried:
stages:
  - test

tests:
  stage: test
  variables:
    TARGET: first
  trigger:
    project: QA/tests
    branch: master
    strategy: depend
Then I tried to define a global TARGET: all variable in my main gitlab-ci.yml and override it with TARGET: first in the YAML above.
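That is, presumably something like this at the top level of QA/tests's .gitlab-ci.yml (it matches the fix shown further down):

variables:
  TARGET: all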
generate-job.sh:
#!/bin/bash
PRODUCTS=("first" "second" "third")
for P in "${PRODUCTS[@]}"; do
  cat << EOF
$P:
  stage: test
  tags:
    - kuber
  script:
    - echo -e "Hello from $P"
  rules:
    - if: '"$TARGET" == "all"'
      when: always
    - if: '"$TARGET" == $P'
      when: always
EOF
done
But no results: the downstream pipeline doesn't have any jobs at all! Any ideas?
I am not sure if this is helpful, but from the outside this looks like an overcomplicated approach. I have to say I have limited knowledge of your setup, so my answer is based on these assumptions:
- the QA/tests repository contains certain test cases for all repositories
- QA/tests has the sole purpose of containing the tests, not an overview of the projects, etc.
My Suggestion
As QA/tests only contains tests which should be executed against each project, I would create a Docker image out of it which contains all the tests and can actually execute them (let's call it qa-tests:latest).
Within my projects, I would then add a step which uses this image with the project's source code and executes the tests:
qa-test:
  image: qa-tests:latest
  script:
    - echo "command to execute scripts"
  # add rules here accordingly
This would solve the issue of running on each push to the repositories. For easier usage, I could create a QA-Tests.gitlab-ci.yml file which can be included by the sub-projects with:
include:
  - project: QA/tests
    file: QA-Tests.gitlab-ci.yml
This way you do not need to update each repository if the CI snippet changes.
Finally, to trigger the execution on each push, you only need to trigger the sub-projects' pipelines from QA/tests, as sketched below.
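For example, a hypothetical trigger job inside QA/tests's own pipeline (the project path is assumed):

trigger-first:
  trigger:
    project: products/first  # sub-project to run on each push
    branch: master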
Disclaimer
As I said, I have only a limited view, as the goal is described but not the motivation. With this approach you remove some of the trigger calls, mainly the ones going from the sub-projects to QA/tests, and it gives a clear structure, but it might not fit your needs.
I solved it with:
gitlab-ci.yml:
variables:
  TARGET: all

job-generator:
  stage: generate
  tags:
    - kuber
  script:
    - scripts/generate-job.sh > generated-job.yml
  artifacts:
    paths:
      - generated-job.yml

main:
  variables:
    CHILD_TARGET: $TARGET
  trigger:
    include:
      - artifact: generated-job.yml
        job: job-generator
    strategy: depend
and use CHILD_TARGET in my generate-job.sh:
#!/bin/bash
PRODUCTS=("first" "second" "third")
for P in "${PRODUCTS[@]}"; do
  cat << EOF
$P:
  stage: test
  tags:
    - kuber
  script:
    - echo -e "Hello from $P"
  rules:
    - if: '\$CHILD_TARGET == "all"'
      when: always
    - if: '\$CHILD_TARGET == "$P"'
      when: always
EOF
done
So I could call it from other projects like this:
stages:
  - test

e2e-tests:
  stage: test
  variables:
    TARGET: first
  trigger:
    project: QA/tests
    branch: master
    strategy: depend
I have a simple pipeline config:
image: python:3.7.3

pipelines:
  branches:
    Server:
      - step:
          name: Test
          script:
            - pytest --ignore .
which yields the following error:
We didn't find the deployment keyword in your bitbucket-pipelines.yml file
What should I do?
I just figured out that I did not have "bitbucket-pipelines" enabled in the repository settings.
Here are examples of running Python with Bitbucket Pipelines; you can adapt the unit tests to your own. First make sure Pipelines are enabled, then:
- Generate an SSH key.
- Add the host key.
- Add the variables in the Deployments menu.
I'm also attaching pipelines that run by tag and by branch.
Generate the SSH key from:
https://bitbucket.org/<WORKSPACE>/<REPOSITORY_NAME>/admin/addon/admin/pipelines/ssh-keys
and put the public key into ~/.ssh/authorized_keys on your server.
definitions:
  steps:
    # Build
    - step: &build
        name: Install and Test
        image: python:3.7.2
        trigger: automatic
        script:
          - pip install -r requirements.txt
          - python3 test.py test
    # Deployment
    - step: &deploy
        name: Deploy Artifacts
        trigger: automatic
        deployment: test
        script:
          # Deploy New Artifact
          - pipe: atlassian/scp-deploy:0.3.11
            variables:
              USER: <REMOTE_USER>
              SERVER: <REMOTE_HOST>
              REMOTE_PATH: <REMOTE_PATH>
              LOCAL_PATH: $BITBUCKET_CLONE_DIR/**

# Runner
pipelines:
  # Running by tags
  tags:
    v*:
      - step: *build
      - step:
          <<: *deploy
          deployment: test
          trigger: manual
  # Running by branch
  branches:
    master:
      - step: *build
      - step:
          <<: *deploy
          deployment: test
          trigger: automatic
I've got something like this in my .gitlab-ci.yml:
.templates:
  - &deploy-master-common
    script:
      - ansible-playbook … --limit=${environment:name}.example.org
    environment:
      url: https://${environment:name}.example.org
    …

deploy-master1:
  <<: *deploy-master-common
  environment:
    name: master1
  only:
    - master
When running the deploy-master1 job, unfortunately, ${environment:name} is expanded to the empty string. Is this sort of expansion not supported by YAML/GitLab CI?
I can't tell yet whether this is a GitLab or YAML restriction, but it looks like the environment hash is being replaced rather than merged. Moving <<: *deploy-master-common to the bottom of deploy-master1, I get the following error message from the GitLab CI lint API endpoint:
.gitlab-ci.yml is not valid. Errors:
[
"jobs:deploy-master1:environment name can't be blank"
]
I was able to work around this with custom variables because the template doesn't specify any:
.templates:
  - &deploy-master-common
    script:
      - ansible-playbook … --limit=${environment_name}.example.com
    environment:
      name: $environment_name
      url: https://${environment_name}.example.com
    …

deploy-master1:
  <<: *deploy-master-common
  variables:
    environment_name: master1
  only:
    - master
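On GitLab 11.3+, the extends keyword mentioned earlier in this thread sidesteps the replacement problem, because extends deep-merges maps such as environment; a sketch using the predefined CI_ENVIRONMENT_SLUG instead of the unsupported ${environment:name} expansion:

.deploy-master-common:
  script:
    - ansible-playbook … --limit=${CI_ENVIRONMENT_SLUG}.example.org
  environment:
    url: https://${CI_ENVIRONMENT_SLUG}.example.org

deploy-master1:
  extends: .deploy-master-common
  environment:
    name: master1  # url is inherited via deep merge
  only:
    - master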