Replace a single step in a Bitbucket Pipeline

I am trying to build a rather complex Bitbucket Pipeline while following DRY principles.
So far I have defined multiple custom steps in the definitions section:
definitions:
  steps:
    - step: &foo
        name: foo
        script:
          ...
    - step: &bar
        name: bar
        script:
          ...
Now, I have two custom pipelines that are almost identical except for one artifact/variable.
pipelines:
  custom:
    action-a:
      - step:
          name: action a
          script:
            - echo "1" > .value
          artifacts:
            - .value
      - step: *bar
      - step: *foo
    action-b:
      - step:
          name: action b
          script:
            - echo "2" > .value
          artifacts:
            - .value
      - step: *bar
      - step: *foo
Everything after the step generating the .value artifact is identical (and way more complex than just 2 steps). Would it be possible to define a single pipeline and somehow inject this artifact with anchors?

With Bitbucket Pipelines, you can now use Deployment Variables.
These are found under Repository Settings > Deployments.
You can create a step:
- step: &foo
    name: Deployment Step
    script:
      - echo ${DEPLOYMENT_VAR}
Then, in your pipelines definition, call that step and pass in your deployment name. Here's an example using branch-specific deployments:
pipelines:
  branches:
    develop:
      - step:
          <<: *foo
          name: 'foo (develop)'
          deployment: foo-dev
    main:
      - step:
          <<: *foo
          name: 'foo (main)'
          deployment: foo-main
Then, if your Deployment settings define DEPLOYMENT_VAR as dev for the foo-dev environment and as main for foo-main, pushing to the develop branch would echo dev and pushing to main would echo main.
see: https://support.atlassian.com/bitbucket-cloud/docs/variables-and-secrets/#:~:text=override%20team%20variables.-,Deployment%20variables,-You%20can%20also
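For reference, here is roughly how that maps onto the pipelines from the question. This is only a sketch: the generate-value step name and the action-a/action-b environment names are illustrative, and each environment would define its own VALUE deployment variable in Repository Settings > Deployments.
definitions:
  steps:
    - step: &generate-value
        name: generate value
        script:
          # VALUE is supplied by whichever deployment environment the step uses
          - echo "${VALUE}" > .value
        artifacts:
          - .value

pipelines:
  custom:
    action-a:
      - step:
          <<: *generate-value
          deployment: action-a
      - step: *bar
      - step: *foo
    action-b:
      - step:
          <<: *generate-value
          deployment: action-b
      - step: *bar
      - step: *foo
The two custom pipelines still exist, but the value that differs now lives in the Deployments settings instead of duplicated script blocks.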

Related

Getting an error like unexpected value "name" on Azure Pipelines

I have the below YAML file. I am trying to solve an issue where two deployment files, set up in two different pipelines, exist in a single folder, and a change to one file should not trigger the other pipeline.
For that purpose I made the change below, but I am getting the error unexpected value "name" in the pipeline.
The code is:
.....
trigger:
  paths:
    include:
      - '.devops/**'

variables:
  deploymentFile: 'test-hub-deploy-dev.yml'

stages:
  - stage: Get the deployment file name
    jobs:
      - job: Get deployment file
        displayName: 'Get deployment file'
        steps:
          - name: Get the deployment file name  # getting the error here
            script: |
              if [[ $Build.SourcesDirectory == *"deployment-folder"* ]]; then
                echo "##[set-output name=deploymentFile;]$(basename $Build.SourcesDirectory)"
              fi
            displayName: Get deployment file name
            env:
              SYSTEM_DEFAULTWORKINGDIRECTORY: $(System.DefaultWorkingDirectory)
              BUILD_SOURCESDIRECTORY: $(Build.SourcesDirectory)
            outputs:
              deploymentFile: '$(deploymentFile)'
          - name: Deploy to test-dev
            condition: and(succeeded(), eq(variables['deploymentFile'], 'test-hub-deploy-dev.yml'))
            script: echo "Deploying using test-hub-deploy-dev.yml"
            displayName: Deploy to test-dev
  - stage: Build
    jobs:
      ....
I don't know where I am going wrong with the above YAML. Please point me in the right direction.
1 - In the trigger section, 'include' and - '.devops/**' should be at the same indentation level.
https://learn.microsoft.com/en-us/azure/devops/pipelines/yaml-schema/trigger?view=azure-pipelines#examples-2
trigger:
  paths:
    include:
    - '.devops/**'
2 - Same for 'stages' and '- stage: abc'
https://learn.microsoft.com/en-us/azure/devops/pipelines/yaml-schema/stages-stage?view=azure-pipelines
stages:
- stage: abc
3 - The same for 'jobs' and '- job'
https://learn.microsoft.com/en-us/azure/devops/pipelines/yaml-schema/jobs?view=azure-pipelines#examples
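For example (abc is a placeholder job name):
jobs:
- job: abc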
4 - And for 'steps'
https://learn.microsoft.com/en-us/azure/devops/pipelines/yaml-schema/steps?view=azure-pipelines
First declare the step type (script/pwsh/other), then you can assign a name to it.
'script' needs a dash in front of it, as shown in the example below.
https://learn.microsoft.com/en-us/azure/devops/pipelines/yaml-schema/steps-script?view=azure-pipelines#examples
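For example (myStep is a placeholder):
steps:
- script: echo "hello"
  name: myStep
  displayName: My step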
5 - An output variable is used when you want to read the variable from a subsequent job; here your second step uses a variable from step one within the same job (see the sketch after this list for the cross-job pattern).
6 - Where did you find this 'set-output'? Use ##vso with task.setvariable instead; the line as written cannot work.
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/set-variables-scripts?view=azure-devops&tabs=bash#set-an-output-variable-for-use-in-the-same-job
7 - $(basename $Build.SourcesDirectory)
I suppose that 'basename' is a variable, not a constant string, but you did not provide this information.
This will never work; to access a variable you need to use the syntax $(VariableName).
If you want to concatenate variables, put them next to each other:
"$(basename)$(Build.SourcesDirectory)"
8 - I do not guarantee it will fully work, because there were enough mistakes that I may have missed something.
The 'steps' section of the YAML should be similar to this one:
steps:
- script: |
    if [[ $Build.SourcesDirectory == *"deployment-folder"* ]]; then
      echo "##vso[task.setvariable variable=deploymentFile;]$(basename)$(Build.SourcesDirectory)"
    fi
  # step names must be valid identifiers (no spaces); use displayName for readable text
  name: get_deployment_file_name
  displayName: Get deployment file name
  env:
    SYSTEM_DEFAULTWORKINGDIRECTORY: $(System.DefaultWorkingDirectory)
    BUILD_SOURCESDIRECTORY: $(Build.SourcesDirectory)
- script: echo "Deploying using test-hub-deploy-dev.yml"
  displayName: Deploy to test-dev
  name: deploy_to_test_dev
  condition: and(succeeded(), eq(variables['deploymentFile'], 'test-hub-deploy-dev.yml'))
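For point 5, if you do eventually need the variable in a later job, the output-variable pattern from the linked docs looks roughly like this (a sketch; the job and step names are placeholders):
jobs:
- job: A
  steps:
  # isOutput=true makes the variable readable from other jobs
  - script: echo "##vso[task.setvariable variable=deploymentFile;isOutput=true]test-hub-deploy-dev.yml"
    name: setVarStep
- job: B
  dependsOn: A
  variables:
    # map the output of job A's step into this job
    deploymentFile: $[ dependencies.A.outputs['setVarStep.deploymentFile'] ]
  steps:
  - script: echo "$(deploymentFile)"
    displayName: Print deployment file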

Accessing user-defined pipeline variables within definitions steps in Bitbucket

I'm trying to pass a user-defined variable from a custom pipeline to a step defined within the definitions section.
My YAML snippet is below:
definitions:
  steps:
    - step: &TagVersion
        trigger: manual
        script:
          - export VERSION=$VERSION
          - echo "VERSION $VERSION"

custom:
  run_custom:
    - variables:
        - name: VERSION
    - step:
        script:
          - echo "starting"
    - parallel:
        - step:
            <<: *TagVersion
            variables:
              VERSION: $VERSION
When I run the pipeline I can see the variable listed as a pipeline variable on the TagVersion step, and the correct value is shown there, but I don't know how to use it within the script section, where I'm trying to echo out the value.
Thanks

Run a specific job in GitLab CI based on a condition

I have a repo QA/tests, and I want to run all the jobs whenever there is a push to this repo.
I used a script to generate the jobs dynamically:
job-generator:
  stage: generate
  tags:
    - kuber
  script:
    - scripts/generate-job.sh > generated-job.yml
  artifacts:
    paths:
      - generated-job.yml

main:
  trigger:
    include:
      - artifact: generated-job.yml
        job: job-generator
    strategy: depend
Next, I have another repo products/first, and I want to run one specific job in QA/tests on every push to products/first, so I tried:
stages:
  - test

tests:
  stage: test
  variables:
    TARGET: first
  trigger:
    project: QA/tests
    branch: master
    strategy: depend
Then I tried to define a global TARGET: all variable in my main gitlab-ci.yml and override it with TARGET: first in the above YAML.
generate-job.sh:
#!/bin/bash
PRODUCTS=("first" "second" "third")
for P in "${PRODUCTS[@]}"; do
  cat << EOF
$P:
  stage: test
  tags:
    - kuber
  script:
    - echo -e "Hello from $P"
  rules:
    - if: '"$TARGET" == "all"'
      when: always
    - if: '"$TARGET" == $P'
      when: always
EOF
done
But no results: the downstream pipeline doesn't have any jobs at all!
Any idea?
I am not sure if this is helpful, but from the outside this looks like an overcomplicated approach. I have to say I have limited knowledge, and my answer is based on these assumptions:
- the QA/tests repository contains certain test cases for all repositories
- QA/tests has the sole purpose of containing the tests, not an overview of the projects etc.
My Suggestion
As QA/tests only contains tests which should be executed against each project, I would create a Docker image out of it which contains all the tests and can actually execute them (let's call it qa-tests:latest).
Within my projects I would add a job which uses this image together with the project's source code and executes the tests:
qa-test:
  image: qa-tests:latest
  script:
    - echo "command to execute scripts"
  # add rules here accordingly
This would solve the issue for each push into the repositories. For easier usage, I could create a QA-Tests.gitlab-ci.yml file which can be included by the sub-projects with:
include:
  - project: QA/tests
    file: QA-Tests.gitlab-ci.yml
This way you do not need to make updates within the repositories if the CI snippet changes.
Finally, to trigger the execution on each push, you only need to trigger the sub-projects' pipelines from QA/tests, as sketched below.
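A rough sketch of such a multi-project trigger job in QA/tests (the job name, stage, and project path are placeholders for your setup):
trigger-first:
  stage: test
  trigger:
    # run the pipeline of the sub-project
    project: products/first
    branch: master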
Disclaimer
As I said, I have only a limited view, as the goal is described but not the motivation. With this approach you remove some of the direct trigger calls, mainly the ones from the sub-projects to QA/tests. And it produces a clear structure, but it might not fit your needs.
I solved it with:
gitlab-ci.yml:
variables:
  TARGET: all

job-generator:
  stage: generate
  tags:
    - kuber
  script:
    - scripts/generate-job.sh > generated-job.yml
  artifacts:
    paths:
      - generated-job.yml

main:
  variables:
    CHILD_TARGET: $TARGET
  trigger:
    include:
      - artifact: generated-job.yml
        job: job-generator
    strategy: depend
and use CHILD_TARGET in my generate-job.sh:
#!/bin/bash
PRODUCTS=("first" "second" "third")
for P in "${PRODUCTS[@]}"; do
  cat << EOF
$P:
  stage: test
  tags:
    - kuber
  script:
    - echo -e "Hello from $P"
  rules:
    - if: '\$CHILD_TARGET == "all"'
      when: always
    - if: '\$CHILD_TARGET == "$P"'
      when: always
EOF
done
So I could call it from other projects like this:
stages:
  - test

e2e-tests:
  stage: test
  variables:
    TARGET: first
  trigger:
    project: QA/tests
    branch: master
    strategy: depend

Use variable for Bitbucket custom pipeline deployment value

I am trying to set the deployment value for a step using a variable that is passed to a custom pipeline. The idea is to avoid duplicating the custom pipeline when the only change is the deployment variables, which are read from the Bitbucket settings.
The definition looks as follows, but throws an error:
pipelines:
  custom:
    my-pipeline:
      - variables:
          - name: deployment
      - step:
          deployment: $deployment
          script:
            - ...
Am I missing something here, or is the deployment key not allowed to accept a variable?
Unfortunately you can't use variables in the deployment field; variables are only available in script fields. However, your problem is easily solved in another way: anchors.
For example:
definitions:
  steps:
    - step: &Test-step
        name: Run tests
        script:
          - npm install
          - npm run test
    - step: &Deploy-step
        name: Deploy to staging
        script:
          - npm install
          - npm run build
          - fab deploy

pipelines:
  default:
    - step: *Test-step
    - step:
        <<: *Deploy-step
        name: Deploy to Staging
        deployment: staging
        trigger: manual
  custom:
    Staging Deployment:
      - step: *Test-step
      - step:
          <<: *Deploy-step
          deployment: staging
    Production Deployment:
      - step: *Test-step
      - step:
          <<: *Deploy-step
          deployment: production

Same steps for multiple named environments with GitLab CI

Is there a way to configure multiple specifically-named environments (specifically test, stage, and prod)?
In the documentation (https://docs.gitlab.com/ce/ci/environments.html) they talk about dynamically-created environments, but those are all commit based.
My build steps are the same for all of them, save for swapping out the slug:
deploy_to_test:
  environment:
    name: test
    url: ${CI_ENVIRONMENT_SLUG}.mydomain.com
  script:
    - deploy ${CI_ENVIRONMENT_SLUG}

deploy_to_stage:
  environment:
    name: stage
    url: ${CI_ENVIRONMENT_SLUG}.mydomain.com
  script:
    - deploy ${CI_ENVIRONMENT_SLUG}

deploy_to_prod:
  environment:
    name: prod
    url: ${CI_ENVIRONMENT_SLUG}.mydomain.com
  script:
    - deploy ${CI_ENVIRONMENT_SLUG}
Is there any way to compress this down into one set of instructions? Something like:
deploy:
  environment:
    url: ${CI_ENVIRONMENT_SLUG}.mydomain.com
  script:
    - deploy ${CI_ENVIRONMENT_SLUG}
Yes, you can use anchors. If I follow the documentation properly, you would rewrite it using a hidden key with an anchor, and then apply it with <<: *anchor_name.
For example, use this to define the key:
.job_template: &deploy_definition
  environment:
    url: ${CI_ENVIRONMENT_SLUG}.mydomain.com
  script:
    - deploy ${CI_ENVIRONMENT_SLUG}
Then all blocks can be written using <<: *deploy_definition. I assume environment will merge the name with the predefined URL.
deploy_to_test:
  <<: *deploy_definition
  environment:
    name: test

deploy_to_stage:
  <<: *deploy_definition
  environment:
    name: stage

deploy_to_prod:
  <<: *deploy_definition
  environment:
    name: prod
Full docs section from the link above:
YAML has a handy feature called 'anchors', which lets you easily duplicate content across your document. Anchors can be used to duplicate/inherit properties, and is a perfect example to be used with hidden keys to provide templates for your jobs.
The following example uses anchors and map merging. It will create two jobs, test1 and test2, that will inherit the parameters of .job_template, each having their own custom script defined:
.job_template: &job_definition  # Hidden key that defines an anchor named 'job_definition'
  image: ruby:2.1
  services:
    - postgres
    - redis

test1:
  <<: *job_definition  # Merge the contents of the 'job_definition' alias
  script:
    - test1 project

test2:
  <<: *job_definition  # Merge the contents of the 'job_definition' alias
  script:
    - test2 project
& sets up the name of the anchor (job_definition), << means "merge the given hash into the current one", and * includes the named anchor (job_definition again). The expanded version looks like this:
.job_template:
  image: ruby:2.1
  services:
    - postgres
    - redis

test1:
  image: ruby:2.1
  services:
    - postgres
    - redis
  script:
    - test1 project

test2:
  image: ruby:2.1
  services:
    - postgres
    - redis
  script:
    - test2 project
Besides what the other answer offered, I'd like to add another, similar way to achieve much the same thing, but more flexibly than using a template and merging it into a job.
What you can do is create a hidden key as well, but in this format, e.g.:
.login: &login |
  cmd1
  cmd2
  cmd3
  ...
...
And then you can apply it to different jobs by using '*', the asterisk, like:
deploy:
  stage: deploy
  script:
    - ...
    - *login
    - ...

bake:
  stage: bake
  script:
    - ...
    - *login
    - ...
And the result would be equivalent to:
deploy:
  stage: deploy
  script:
    - ...
    - cmd1
    - cmd2
    - cmd3
    - ...

bake:
  stage: bake
  script:
    - ...
    - cmd1
    - cmd2
    - cmd3
    - ...
Based on the resource of:
https://gitlab.com/gitlab-org/gitlab-ce/issues/19677#note_13008199
As for the template implementation, it's "merged". In my own experience, if you define more scripts after merging a template, the template's scripts are overwritten. And you cannot apply multiple templates at a time; only the last template's scripts are executed. For example:
.tmp1: &tmp1
  script:
    - a
    - b

.tmp2: &tmp2
  script:
    - c
    - d

job1:
  <<: *tmp1
  <<: *tmp2
  stage: xxx

job2:
  <<: *tmp2
  stage: yyy
  script:
    - e
    - f
The equivalent result would be:
job1:
  stage: xxx
  script:
    - c
    - d

job2:
  stage: yyy
  script:
    - e
    - f
If you're not sure about the syntax correctness, just copy and paste your .gitlab-ci.yml file content into "CI Lint" to validate it. The button is in the Pipelines tab.
Just in case: GitLab offers (since 11.3) an extends keyword, which can be used to template YAML entries (so far as I understand it), as sketched below.
See the official doc.
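A minimal sketch of how extends could replace the anchor approach above, reusing the job and template shape from this thread (per the GitLab docs, extends merges nested hashes, so the environment name and url should combine):
.deploy_template:
  environment:
    url: ${CI_ENVIRONMENT_SLUG}.mydomain.com
  script:
    - deploy ${CI_ENVIRONMENT_SLUG}

deploy_to_test:
  # inherit everything from the hidden template job
  extends: .deploy_template
  environment:
    name: test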
Have you tried implementing variables for the various environments and using a different job per environment? I've come up with a solution for you.
image: node:latest

variables:
  GIT_DEPTH: '0'

stages:
  - build
  - deploy

workflow:
  rules:
    - if: $CI_COMMIT_REF_NAME == "develop"
      variables:
        DEVELOP: "true"
        ENVIRONMENT_NAME: Develop
        WEBSITE_URL: $DEVELOP_WEBSITE_URL
        S3_BUCKET: (develop-s3-bucket-name)
        AWS_REGION: ************** develop
        AWS_ACCOUNT: ********develop
    - if: $CI_COMMIT_REF_NAME == "main"
      variables:
        PRODUCTION: "true"
        ENVIRONMENT_NAME: PRODUCTION
        WEBSITE_URL: $PROD_WEBSITE_URL
        S3_BUCKET: $PROD-S3-BUCKET-NAME
        AWS_REGION: ************** (prod-region)
        AWS_ACCOUNT: ***********(prod-acct)
    - when: always

build-app:
  stage: build
  script:
    # build script
  environment:
    name: $ENVIRONMENT_NAME

deploy-app:
  stage: deploy
  script:
    # deploy script
  environment:
    name: $ENVIRONMENT_NAME
