How does a Bitbucket pipeline know the previous step is over and it's time to start the next one? - bitbucket-pipelines

A pipeline can contain multiple steps, and even multiple instructions inside a step, e.g.:
pipelines:
  default:
    - step:
        script:
          - pip install -r requirements.txt
          - python test.py

pipelines:
  default:
    - step: # non-parallel step
        name: Build
        script:
          - ./build.sh
    - parallel: # these 2 steps will run in parallel
        - step:
            name: Integration 1
            script:
              - ./integration-tests.sh --batch 1
        - step:
            name: Integration 2
            script:
              - ./integration-tests.sh --batch 2
    - step: # non-parallel step
        script:
          - ./deploy.sh
etc.
It seems that, in the absence of parallel, the steps/scripts/instructions run sequentially. So how does a Bitbucket pipeline know the previous step is over and it's time to start the next one?

Related

Failed Bitbucket NodeJS repo Pipeline with AWS Lambda function with "Error parsing parameter '--zip-file......' "

Our team is having a problem trying to set up a pipeline to update an AWS Lambda function.
Once the deploy is triggered, it fails with the following error:
Status: Downloaded newer image for bitbucketpipelines/aws-lambda-deploy:0.2.3
INFO: Updating Lambda function.
aws lambda update-function-code --function-name apikey-token-authorizer2 --publish --zip-file fileb://apiGatewayAuthorizer.zip
Error parsing parameter '--zip-file': Unable to load paramfile fileb://apiGatewayAuthorizer.zip: [Errno 2] No such file or directory: 'apiGatewayAuthorizer.zip'
Failed to update Lambda function code.
Looks like the script couldn't find the artifact, but we don't know why.
Here is the bitbucket-pipelines.yml file content:
image: node:16

# Workflow Configuration
pipelines:
  default:
    - parallel:
        - step:
            name: Build and Test
            caches:
              - node
            script:
              - echo Installing source YARN dependencies.
              - yarn install
  branches:
    testing:
      - parallel:
          - step:
              name: Build
              script:
                - apt update && apt install zip
                # Exclude files to be ignored
                - echo Zipping package.
                - zip -r apiGatewayAuthorizer.zip . -x *.git* bitbucket-pipelines.yml
              artifacts:
                - apiGatewayAuthorizer.zip
          - step:
              name: Deploy to testing - Update Lambda code
              deployment: Test
              trigger: manual
              script:
                - pipe: atlassian/aws-lambda-deploy:0.2.3
                  variables:
                    AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                    AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                    AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
                    FUNCTION_NAME: $LAMBDA_FUNCTION_NAME
                    COMMAND: 'update'
                    ZIP_FILE: 'apiGatewayAuthorizer.zip'
Does anyone know what I am missing here?
Thanks to Marc C. from Atlassian, here is the solution.
Based on your YAML configuration, I can see that you're using Parallel steps.
According to the documentation:
Parallel steps can only use artifacts produced by previous steps, not by steps in the same parallel set.
Hence, this is why the artifact is not generated in the "Build" step: those 2 steps are within a parallel set.
For that, you can just remove the parallel configuration and use multi-steps instead. This way, the first step can generate the artifact and pass it on to the second step. Hope it helps and let me know how it goes.
Regards, Marc C.
We tried the solution and it worked!
Here is the new pipeline:
pipelines:
  branches:
    testing:
      - step:
          name: Build and Test
          caches:
            - node
          script:
            - echo Installing source YARN dependencies.
            - yarn install
            - apt update && apt install zip
            # Exclude files to be ignored
            - echo Zipping package.
            - zip -r my-deploy.zip . -x *.git* bitbucket-pipelines.yml
          artifacts:
            - my-deploy.zip
      - step:
          name: Deploy to testing - Update Lambda code
          deployment: Test
          trigger: manual
          script:
            - pipe: atlassian/aws-lambda-deploy:0.2.3
              variables:
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
                FUNCTION_NAME: $LAMBDA_FUNCTION_NAME
                COMMAND: 'update'
                ZIP_FILE: 'my-deploy.zip'

GitLab Runner taking too long to run the pipeline

I created a specific runner for my GitLab project, and it's taking too long to run the pipeline.
It mainly gets stuck in the Cypress test: after "All Specs passed" it will not move forward.
Here is the configuration:
stages:
  - build
  - test

build:
  stage: build
  image: gradle:jdk11
  script:
    - gradle --no-daemon build
  artifacts:
    paths:
      - build/distributions
    expire_in: 1 day
    when: always

junit-test:
  stage: test
  image: gradle:jdk11
  dependencies: []
  script:
    - gradle test
  timeout: 5m

cypress-test:
  stage: test
  image: registry.gitlab.com/sahajsoft/gurukul2022/csv-parser-srijan:latestSrigin2
  dependencies:
    - build
  script:
    - unzip -q build/distributions/csv-parser-srijan-1.0-SNAPSHOT.zip -d build/distributions
    - sh build/distributions/csv-parser-srijan-1.0-SNAPSHOT/bin/csv-parser-srijan &
    - npm install --save-dev cypress-file-upload
    - npx cypress run --browser chrome
A recommended approach is to try to replicate your script (from your pipeline) locally, on your own machine.
That allows you to check:
- how long those commands take
- whether there is any interactive step where a command expects user input and waits on stdin
The second point would explain why, in an unattended environment like a pipeline, "it will not move forward".
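For example, here is a minimal sketch of replicating the cypress-test job locally, assuming Docker is installed and you can pull the job's image (the 300-second timeout is an illustrative bound, not a value from the pipeline):

# Pull the same image the job uses and open a shell in it
docker run --rm -it registry.gitlab.com/sahajsoft/gurukul2022/csv-parser-srijan:latestSrigin2 sh

# Inside the container, run the job's script lines one by one, bounding the
# suspect command with a timeout so a hang becomes an explicit failure:
timeout 300 npx cypress run --browser chrome
echo "exit code: $?"  # 124 means the command timed out, i.e. it was hanging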

We didn't find the deployment keyword in your bitbucket-pipelines.yml file

I have a simple pipeline config:
image: python:3.7.3

pipelines:
  branches:
    Server:
      - step:
          name: Test
          script:
            - pytest --ignore .
which yields the following error:
We didn't find the deployment keyword in your bitbucket-pipelines.yml file
What should I do?
I just figured out that I did not have Pipelines enabled in the repository settings.
Here are examples of running Python with Bitbucket Pipelines; you can adapt the unit tests to your own. The setup is:
- First, make sure Pipelines are enabled.
- Generate an SSH key.
- Add the host key.
- Add your variables to the Deployment menu.
I'm also attaching pipelines that run by tag and by branch.
Generate the SSH key from:
https://bitbucket.org/<WORKSPACE>/<REPOSITORY_NAME>/admin/addon/admin/pipelines/ssh-keys
and put the public key into ~/.ssh/authorized_keys on your server.
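For instance, a minimal sketch of installing the generated public key on the server (the key material is a placeholder; copy the real one from the Pipelines SSH keys page):

# On the remote server, append the public key generated by Pipelines
echo "ssh-rsa AAAA...pipelines-public-key..." >> ~/.ssh/authorized_keys
# sshd ignores the file if permissions are too open
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys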
definitions:
  steps:
    # Build
    - step: &build
        name: Install and Test
        image: python:3.7.2
        trigger: automatic
        script:
          - pip install -r requirements.txt
          - python3 test.py test
    # Deployment
    - step: &deploy
        name: Deploy Artifacts
        trigger: automatic
        deployment: test
        script:
          # Deploy new artifact
          - pipe: atlassian/scp-deploy:0.3.11
            variables:
              USER: <REMOTE_USER>
              SERVER: <REMOTE_HOST>
              REMOTE_PATH: <REMOTE_PATH>
              LOCAL_PATH: $BITBUCKET_CLONE_DIR/**

# Runner
pipelines:
  # Running by tags
  tags:
    v*:
      - step: *build
      - step:
          <<: *deploy
          deployment: test
          trigger: manual
  # Running by branch
  branches:
    master:
      - step: *build
      - step:
          <<: *deploy
          deployment: test
          trigger: automatic

How to share environment variables in parallel Bitbucket pipeline steps?

So, I am using Bitbucket pipelines to deploy my application. The app consists of two components: 1 and 2. They are deployed in two parallel steps in the Bitbucket pipeline:
pipelines:
  custom:
    1-deploy-to-test:
      - parallel:
          - step:
              name: Deploying 1
              image: google/cloud-sdk:latest
              script:
                - SERVICE_ENV=test
                - GCLOUD_PROJECT="some-project"
                - MEMORY_LIMIT="256Mi"
                - ./deploy.sh
          - step:
              name: Deploying 2
              image: google/cloud-sdk:latest
              script:
                - SERVICE_ENV=test
                - GCLOUD_PROJECT="some-project"
                - MEMORY_LIMIT="256Mi"
                - ./deploy2.sh
The environment variables SERVICE_ENV, GCLOUD_PROJECT and MEMORY_LIMIT are always the same for deployments 1 and 2.
Is there any way to define these variables once for both parallel steps?
You can use user-defined variables in Pipelines. For example, you can configure your SERVICE_ENV, GCLOUD_PROJECT and MEMORY_LIMIT as Repository variables, and they will be available to all steps in your pipeline.
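For instance, a minimal sketch assuming SERVICE_ENV, GCLOUD_PROJECT and MEMORY_LIMIT have been added as Repository variables (Repository settings > Repository variables); the steps then read them straight from the environment:

pipelines:
  custom:
    1-deploy-to-test:
      - parallel:
          - step:
              name: Deploying 1
              image: google/cloud-sdk:latest
              script:
                # SERVICE_ENV, GCLOUD_PROJECT and MEMORY_LIMIT are injected
                # by Pipelines, so no per-step assignments are needed
                - ./deploy.sh
          - step:
              name: Deploying 2
              image: google/cloud-sdk:latest
              script:
                - ./deploy2.sh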
To add to the other answers, here's a glimpse of how we currently handle this, inspired by all the other solutions found in Bitbucket's forums.
To let parallel tasks re-use deployment variables (which currently cannot be passed between steps), we first use the bash Docker image to write the environment variables into an artifact. The pure bash image is very fast (it usually runs in under 8 seconds).
All other tasks can then run in parallel, benefiting from the deployment and repository variables that we usually set, bypassing the current Bitbucket Pipelines limitations.
definitions:
  steps:
    - step: &set_env
        name: Set multi-steps env variables
        image: bash:5.2.12
        artifacts:
          - set_env.sh
        script:
          ## Pass env
          - echo "Passing all env variables to next steps"
          - >-
            echo "
            export USERNAME=$USERNAME;
            export HOST=$HOST;
            export PORT_LIVE=$PORT_LIVE;
            export THEME=$THEME;
            " >> set_env.sh
    - step: &git_ftp
        name: Git-Ftp
        image: davidwebca/docker-php-deployer:8.0.25
        script:
          # check if env file exists
          - if [ -e set_env.sh ]; then
          -   cat set_env.sh
          -   source set_env.sh
          - fi
          # ...

pipelines:
  branches:
    staging:
      - parallel:
          - step:
              <<: *set_env
              deployment: staging
      - parallel:
          - step:
              <<: *git_ftp
          - step:
              <<: *root_composer
          - step:
              <<: *theme_assets
According to the forums, Bitbucket staff is working on a solution to allow more flexibility on this as we speak (2022-12-01), but we shouldn't expect an immediate release, as it seems complicated to implement safely on their end.
As explained in this link, you can define the environment variables and copy them to a file.
After that, you can share that file between steps as an artifact.
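A minimal sketch of that approach, with illustrative names (env_vars.sh and BUILD_VERSION are not from the original post):

pipelines:
  default:
    - step:
        name: Export variables
        script:
          # write the variables to a file and publish it as an artifact
          - echo "export BUILD_VERSION=$BITBUCKET_BUILD_NUMBER" >> env_vars.sh
        artifacts:
          - env_vars.sh
    - step:
        name: Use variables
        script:
          # the artifact is restored into this step's working directory
          - source env_vars.sh
          - echo "Deploying build $BUILD_VERSION"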

Is it possible to use multiple docker images in bitbucket pipeline?

I have this pipeline file to unit-test my project:
image: jameslin/python-test

pipelines:
  default:
    - step:
        script:
          - service mysql start
          - pip install -r requirements/test.txt
          - export DJANGO_CONFIGURATION=Test
          - python manage.py test
but is it possible to switch to another Docker image to deploy?
image: jameslin/python-deploy

pipelines:
  default:
    - step:
        script:
          - ansible-playbook deploy
I cannot seem to find any documentation saying either Yes or No.
You can specify an image for each step, like this:
pipelines:
  default:
    - step:
        name: Build and test
        image: node:8.6
        script:
          - npm install
          - npm test
          - npm run build
        artifacts:
          - dist/**
    - step:
        name: Deploy
        image: python:3.5.1
        trigger: manual
        script:
          - python deploy.py
Finally found it:
https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html#Configurebitbucket-pipelines.yml-ci_stepstep(required)
step (required): Defines a build execution unit. Steps are executed in the order in which they appear in the pipeline. Currently, each pipeline can have only one step (one for the default pipeline and one for each branch). You can override the main Docker image by specifying an image in a step.
I have not found any information saying yes or no either, so I assume that, since the image can be configured with all the languages and technology you need, the following method works:
- Create your Docker image with all the utilities you need for both the default pipeline and deployment.
- Use the branching method shown in their examples: https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html#Configurebitbucket-pipelines.yml-ci_branchesbranches(optional)
- Use shell scripts or other scripts to run the specific tasks you need:
image: yourusername/your-image

pipelines:
  branches:
    master:
      - step:
          script: # Modify the commands below to build your repository.
            - echo "Starting pipelines for master"
            - chmod +x your-task-configs.sh # necessary to get the shell script to run in BB Pipelines
            - ./your-task-configs.sh
    feature/*:
      - step:
          script: # Modify the commands below to build your repository.
            - echo "Starting pipelines for feature/*"
            - npm install
            - npm install -g grunt-cli
            - npm install grunt --save-dev
            - grunt build
