How to share environment variables in parallel Bitbucket pipeline steps?

So, I am using Bitbucket Pipelines to deploy my application. The app consists of two components: 1 and 2. They are deployed in two parallel steps of the pipeline:
pipelines:
  custom:
    1-deploy-to-test:
      - parallel:
          - step:
              name: Deploying 1
              image: google/cloud-sdk:latest
              script:
                - SERVICE_ENV=test
                - GCLOUD_PROJECT="some-project"
                - MEMORY_LIMIT="256Mi"
                - ./deploy.sh
          - step:
              name: Deploying 2
              image: google/cloud-sdk:latest
              script:
                - SERVICE_ENV=test
                - GCLOUD_PROJECT="some-project"
                - MEMORY_LIMIT="256Mi"
                - ./deploy2.sh
The environment variables SERVICE_ENV, GCLOUD_PROJECT and MEMORY_LIMIT are always the same for deployments 1 and 2.
Is there any way to define these variables once for both parallel steps?

You can use user-defined variables in Pipelines. For example, you can configure SERVICE_ENV, GCLOUD_PROJECT and MEMORY_LIMIT as repository variables, and they will be available to all steps in your pipeline.
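For illustration, a minimal sketch of the question's pipeline once those three values live in repository variables (the variable names come from the question; setting them happens in the repository settings UI, not in the YAML):

pipelines:
  custom:
    1-deploy-to-test:
      - parallel:
          - step:
              name: Deploying 1
              image: google/cloud-sdk:latest
              script:
                # SERVICE_ENV, GCLOUD_PROJECT and MEMORY_LIMIT are injected
                # from the repository variables; no per-step assignments needed
                - ./deploy.sh
          - step:
              name: Deploying 2
              image: google/cloud-sdk:latest
              script:
                - ./deploy2.sh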

To add to the other answers, here's a glimpse of how we currently handle this, inspired by all the other solutions found in Bitbucket's forums.
To let parallel tasks reuse deployment variables (which currently cannot be passed between steps), we first use the bash Docker image to write the environment variables to an artifact. The pure bash image is very fast (it usually runs in under 8 seconds).
All the other tasks can then run in parallel, benefiting from the deployment and repository variables we usually set, all of that bypassing the current Bitbucket Pipelines limitations.
definitions:
  steps:
    - step: &set_env
        name: Set multi-steps env variables
        image: bash:5.2.12
        artifacts:
          - set_env.sh
        script:
          ## Pass env
          - echo "Passing all env variables to next steps"
          - >-
            echo "
            export USERNAME=$USERNAME;
            export HOST=$HOST;
            export PORT_LIVE=$PORT_LIVE;
            export THEME=$THEME;
            " >> set_env.sh
    - step: &git_ftp
        name: Git-Ftp
        image: davidwebca/docker-php-deployer:8.0.25
        script:
          # check if env file exists
          - if [ -e set_env.sh ]; then
          - cat set_env.sh
          - source set_env.sh
          - fi
          # ...

pipelines:
  branches:
    staging:
      - parallel:
          - step:
              <<: *set_env
              deployment: staging
      - parallel:
          - step:
              <<: *git_ftp
          - step:
              <<: *root_composer
          - step:
              <<: *theme_assets
According to the forums, Bitbucket staff are working on a solution to allow more flexibility here as we speak (2022-12-01), but we shouldn't expect an immediate release, as it seems complicated to implement safely on their end.

As was explained in this link, you can define the environment variables and copy them to a file.
After that, you can share that file between steps as an artifact.
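A minimal sketch of that approach, assuming an illustrative variable SHARED_VAR (the file name shared_env.sh is arbitrary):

- step:
    name: Write shared variables
    script:
      - echo "export SHARED_VAR=$SHARED_VAR" > shared_env.sh
    artifacts:
      - shared_env.sh
- step:
    name: Consume shared variables
    script:
      # artifacts from earlier steps are downloaded automatically
      - source shared_env.sh
      - echo "$SHARED_VAR"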

Related

Accessing user-defined pipeline variables within definitions steps in Bitbucket

I'm trying to pass a user-defined variable from a custom pipeline to a step defined within the definitions section.
My YAML snippet is below:
definitions:
  steps:
    - step: &TagVersion
        trigger: manual
        script:
          - export VERSION=$VERSION
          - echo "VERSION $VERSION"

custom:
  run_custom:
    - variables:
        - name: VERSION
    - step:
        script:
          - echo "starting"
    - parallel:
        - step:
            <<: *TagVersion
            variables:
              VERSION: $VERSION
When I build the pipeline, I can see the variable listed as a pipeline variable when running the TagVersion step, and the correct value is shown there, but I don't know how to use it within the script section where I'm trying to echo out the value.
Thanks

Iterate through environments in GitLab CI pipeline

I have a .gitlab-ci.yml pipeline with a simple job that needs to run in several environments. Something similar to the following:
test:v1.0:
  stage: test
  environment:
    name: v1.0
  tags:
    - v1.0
  script:
    - ./run.sh $VERSION

test:v2.0:
  stage: test
  environment:
    name: v2.0
  tags:
    - v2.0
  script:
    - ./run.sh $VERSION

test:v2.5:
  stage: test
  environment:
    name: v2.5
  tags:
    - v2.5
  script:
    - ./run.sh $VERSION
Does GitLab have any kind of mechanism to create a job by iterating over an array? Something similar to Ansible's loops. The idea is to avoid copy-pasting the same job over and over when only the environment or runner tag changes. I couldn't see anything in the documentation, and all feature requests about this topic seem closed. Is there any workaround to achieve the same behaviour that is accepted as best practice by the community?
I know from other questions here that one proposed solution could be:
test:all:
  stage: test
  script:
    - Iterate here with v1.0, v2.0, v2.5, etc
The issue with this approach is that only one job is created; you also lose the ability to choose runners and other capabilities of GitLab's environments feature, so I'd rather avoid it.
With the recently implemented possibility of using variables in tags, and the use of a parallel matrix, you can do the following:
test:
  stage: test
  script:
    - ./run.sh $VERSION
  environment:
    name: $VERSION
  tags:
    - $VERSION
  parallel:
    matrix:
      - VERSION: [v1.0, v2.0, v3.0]
This will create a job for each of the three defined versions, and the jobs will run in parallel.
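If a job ever needs to vary more than one value, parallel:matrix also accepts several variables per entry and creates one job per combination (the values below are illustrative):

test:
  stage: test
  script:
    - ./run.sh $VERSION $PLATFORM
  parallel:
    matrix:
      - VERSION: [v1.0, v2.0]
        PLATFORM: [linux, windows]  # 2 x 2 = 4 parallel jobs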

Convert YAML from GitLab to Azure DevOps

I want to convert a YAML pipeline from GitLab to Azure DevOps. The problem is that I have no prior experience with GitLab. This is the YAML.
Is .package_deploy a template for a job? And is image a pool, or do I need to use a Docker task for it?
And does before_script: mean I need to create a task before the Docker task?
variables:
  myVar: "Var"

stages:
  - deploy

.package_deploy:
  image: registry.gitlab.com/docker-images/$myVar:latest
  stage: build
  script:
    - cd src
    - echo "Output file name is set to $OUTPUT_FILE_NAME"
    - echo $OUTPUT_FILE_NAME > version.txt
    - az login --service-principal -u $ARM_CLIENT_ID -p $ARM_CLIENT_SECRET --tenant $ARM_TENANT_ID

dev_package_deploy:
  extends: .package_deploy
  stage: deploy
  before_script:
    - export FOLDER=$FOLDER_DEV
    - timestampSuffix=$(date -u "+%Y%m%dT%H%M%S")
    - export OUTPUT_FILE_NAME=${myVar}-${timestampSuffix}-${CI_COMMIT_REF_SLUG}.tar.gz
  when: manual

demo_package_deploy:
  extends: .package_deploy
  stage: deploy
  before_script:
    - export FOLDER=$FOLDER_DEMO
    - timestampSuffix=$(date -u "+%Y%m%dT%H%M%S")
    - export OUTPUT_FILE_NAME=${myVar}-${timestampSuffix}.tar.gz
  when: manual
  only:
    refs:
      - master
.package_deploy: is a 'hidden job' that you can use with the extends keyword. By itself, it does not create any job; it's a way to avoid repeating yourself in other job definitions.
before_script really is no different from script, except that they're two different keys. The effect is that before_script + script together make up all the script steps of the job.
before_script:
  - one
  - two
script:
  - three
  - four
Is the same as:
script:
  - one
  - two
  - three
  - four
image: defines the Docker container in which the job runs. In this way, it is very similar to a pool you would define in ADO, but if you want things to run close to the way they do in GitLab, you probably want to define it as container: in ADO.
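A rough sketch of what that job could look like in Azure DevOps using container: (the job name and variable mapping are illustrative, and a private registry would additionally need a service connection):

jobs:
- job: package_deploy
  container: registry.gitlab.com/docker-images/myvar:latest  # counterpart of GitLab's image:
  steps:
  - script: |
      cd src
      echo "Output file name is set to $(OUTPUT_FILE_NAME)"
      echo $(OUTPUT_FILE_NAME) > version.txt
    displayName: Package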

How to reuse job in .gitlab-ci.yml

I currently have two jobs in my CI file which are nearly identical.
The first is for manually compiling a release build from any git branch.
deploy_internal:
  stage: deploy
  script: ....<deploy code>
  when: manual
The second is to be used by the scheduler to release a daily build from the develop branch.
scheduled_deploy_internal:
  stage: deploy
  script: ....<deploy code from deploy_internal copy/pasted>
  only:
    variables:
      - $MY_DEPLOY_INTERNAL != null
This feels wrong, having all that deploy code repeated in two places. It gets worse: there are also deploy_external, deploy_release, and scheduled variants.
My question:
Is there a way I can combine deploy_internal and scheduled_deploy_internal such that the manual/scheduled behaviour is retained (DRY, basically)?
Alternatively: is there a better way I should structure my jobs?
Edit:
Original title: Deploy job. Execute manually except when scheduled
You can use YAML anchors and aliases to reuse the script.
deploy_internal:
  stage: deploy
  script:
    - &deployment_scripts |
      echo "Deployment Started"
      bash command 1
      bash command 2
  when: manual

scheduled_deploy_internal:
  stage: deploy
  script:
    - *deployment_scripts
  only:
    variables:
      - $MY_DEPLOY_INTERNAL != null
Or you can use the extends keyword.
.deployment_script:
  script:
    - echo "Deployment started"
    - bash command 1
    - bash command 2

deploy_internal:
  extends: .deployment_script
  stage: deploy
  when: manual

scheduled_deploy_internal:
  extends: .deployment_script
  stage: deploy
  only:
    variables:
      - $MY_DEPLOY_INTERNAL != null
Use GitLab's default section containing a before_script:
default:
  before_script:
    - ....<deploy code>

job1:
  stage: deploy
  script: ....<code other than deploy>

job2:
  stage: deploy
  script: ....<code other than deploy>
Note: the default section fails to function as such if you try to execute a job locally with the gitlab-runner exec command - use YAML anchors instead.

Same steps for multiple named environments with GitLab CI

Is there a way to configure multiple specifically-named environments (specifically, test, stage, and prod)?
In their documentation (https://docs.gitlab.com/ce/ci/environments.html) they talk about dynamically-created environments, but they are all commit based.
My build steps are the same for all of them, save for swapping out the slug:
deploy_to_test:
  environment:
    name: test
    url: ${CI_ENVIRONMENT_SLUG}.mydomain.com
  script:
    - deploy ${CI_ENVIRONMENT_SLUG}

deploy_to_stage:
  environment:
    name: stage
    url: ${CI_ENVIRONMENT_SLUG}.mydomain.com
  script:
    - deploy ${CI_ENVIRONMENT_SLUG}

deploy_to_prod:
  environment:
    name: prod
    url: ${CI_ENVIRONMENT_SLUG}.mydomain.com
  script:
    - deploy ${CI_ENVIRONMENT_SLUG}
Is there any way to compress this down into one set of instructions? Something like:
deploy:
  environment:
    url: ${CI_ENVIRONMENT_SLUG}.mydomain.com
  script:
    - deploy ${CI_ENVIRONMENT_SLUG}
Yes, you can use anchors. If I follow the documentation properly, you would rewrite it using a hidden key .XX and then apply it with <<: *X.
For example this to define the key:
.job_template: &deploy_definition
  environment:
    url: ${CI_ENVIRONMENT_SLUG}.mydomain.com
  script:
    - deploy ${CI_ENVIRONMENT_SLUG}
And then all blocks can be written using <<: *job_template. I assume environment will merge the name with the predefined URL (though note that YAML merge keys don't deep-merge nested maps, so a job-level environment: block actually replaces the anchored one entirely).
deploy_to_test:
  <<: *deploy_definition
  environment:
    name: test

deploy_to_stage:
  <<: *deploy_definition
  environment:
    name: stage

deploy_to_prod:
  <<: *deploy_definition
  environment:
    name: prod
Full docs section from the link above:
YAML has a handy feature called 'anchors', which lets you easily duplicate content across your document. Anchors can be used to duplicate/inherit properties, and are a perfect example to use with hidden keys to provide templates for your jobs.
The following example uses anchors and map merging. It will create two jobs, test1 and test2, that will inherit the parameters of .job_template, each having their own custom script defined:
.job_template: &job_definition  # Hidden key that defines an anchor named 'job_definition'
  image: ruby:2.1
  services:
    - postgres
    - redis

test1:
  <<: *job_definition  # Merge the contents of the 'job_definition' alias
  script:
    - test1 project

test2:
  <<: *job_definition  # Merge the contents of the 'job_definition' alias
  script:
    - test2 project
& sets up the name of the anchor (job_definition), << means "merge the given hash into the current one", and * includes the named anchor (job_definition again). The expanded version looks like this:
.job_template:
  image: ruby:2.1
  services:
    - postgres
    - redis

test1:
  image: ruby:2.1
  services:
    - postgres
    - redis
  script:
    - test1 project

test2:
  image: ruby:2.1
  services:
    - postgres
    - redis
  script:
    - test2 project
Besides what the other answer offered, I'd like to add another, similar way to achieve much the same thing, but one that is more flexible than using a template and merging it into a stage.
What you can do is create a hidden key as well, but in this format, e.g.:
.login: &login |
  cmd1
  cmd2
  cmd3
  ...
And then you can apply it to different stages by using the asterisk (*), like:
deploy:
  stage: deploy
  script:
    - ...
    - *login
    - ...

bake:
  stage: bake
  script:
    - ...
    - *login
    - ...
And the result would be equivalent to:
deploy:
  stage: deploy
  script:
    - ...
    - cmd1
    - cmd2
    - cmd3
    - ...

bake:
  stage: bake
  script:
    - ...
    - cmd1
    - cmd2
    - cmd3
    - ...
Based on:
https://gitlab.com/gitlab-org/gitlab-ce/issues/19677#note_13008199
As for the template implementation, it's 'merged'. In my experience, if you append more scripts after merging a template, the template's scripts are overwritten, and you cannot apply multiple templates at a time; only the last template's scripts are executed. For example:
.tmp1: &tmp1
  script:
    - a
    - b

.tmp2: &tmp2
  script:
    - c
    - d

job1:
  <<: *tmp1
  <<: *tmp2
  stage: xxx

job2:
  <<: *tmp2
  stage: yyy
  script:
    - e
    - f
The equivalent result would be:
job1:
  stage: xxx
  script:
    - c
    - d

job2:
  stage: yyy
  script:
    - e
    - f
If you're not sure about syntax correctness, just copy and paste your .gitlab-ci.yml content into "CI Lint" to validate it. The button is in the Pipelines tab.
Just in case: GitLab offers (since 11.3) an extends keyword, which can be used to "template" YAML entries (as far as I understand it).
See the official doc.
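A sketch of the extends variant for the question's jobs (same deploy command as above; unlike YAML merge keys, extends performs a deep merge, so the shared url and the per-job name combine as expected):

.deploy_template:
  environment:
    url: ${CI_ENVIRONMENT_SLUG}.mydomain.com
  script:
    - deploy ${CI_ENVIRONMENT_SLUG}

deploy_to_test:
  extends: .deploy_template
  environment:
    name: test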
Have you tried implementing variables for the various environments and using a different job per environment? I've come up with a solution for you.
image: node:latest

variables:
  GIT_DEPTH: '0'

stages:
  - build
  - deploy

workflow:
  rules:
    - if: $CI_COMMIT_REF_NAME == "develop"
      variables:
        DEVELOP: "true"
        ENVIRONMENT_NAME: Develop
        WEBSITE_URL: DEVELOP_WEBSITE_URL
        S3_BUCKET: (develop-s3-bucket-name)
        AWS_REGION: ************** develop
        AWS_ACCOUNT: ********develop
    - if: $CI_COMMIT_REF_NAME == "main"
      variables:
        PRODUCTION: "true"
        ENVIRONMENT_NAME: PRODUCTION
        WEBSITE_URL: $PROD_WEBSITE_URL
        S3_BUCKET: $PROD-S3-BUCKET-NAME
        AWS_REGION: ************** (prod-region)
        AWS_ACCOUNT: ***********(prod-acct)
    - when: always

build-app:
  stage: build
  script:
    # build-script
  environment:
    name: $ENVIRONMENT_NAME

deploy-app:
  stage: deploy
  script:
    # deploy-script
  environment:
    name: $ENVIRONMENT_NAME
