CircleCI filter branch is not working - circleci-2.0

I am very new to CircleCI and was trying to limit my build to run only on a specific branch. I tried the config file below, but when I include the filters section I get the following error:
Your config file has errors and may not run correctly:
2 schema violations found
required key [jobs] not found
required key [version] not found
config with filters:
version: 2
jobs:
  provisioning_spark_installation_script:
    working_directory: ~/build_data
    docker:
      - image: circleci/python:3.6.6-stretch
    steps:
      - setup_remote_docker:
          docker_layer_caching: true
      - checkout
      - run: &install_awscli
          name: Install AWS CLI
          command: |
            sudo pip3 install --upgrade awscli
      - run: &login_to_ecr
          name: Login to ECR
          command: aws ecr get-login --region us-east-1 | sed 's/-e none//g' | bash
workflows:
  version: 2
  deployments:
    jobs:
      - provisioning_spark_installation_script
        filters:
          branches:
            only: master
When I remove the filters section, everything works fine. Without filters I know how to work around this with shell if/else logic, but it is less elegant.
Any advice?

I was just missing a colon after the job name, plus one more level of indentation for the filters section.
So now I have the following config:
version: 2
jobs:
  provisioning_spark_installation_script:
    working_directory: ~/build_data
    docker:
      - image: circleci/python:3.6.6-stretch
    steps:
      - setup_remote_docker:
          docker_layer_caching: true
      - checkout
      - run: &install_awscli
          name: Install AWS CLI
          command: |
            sudo pip3 install --upgrade awscli
      - run: &login_to_ecr
          name: Login to ECR
          command: aws ecr get-login --region us-east-1 | sed 's/-e none//g' | bash
workflows:
  version: 2
  deployments:
    jobs:
      - provisioning_spark_installation_script:
          filters:
            branches:
              only: master
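For reference, the branches filter also accepts a list of branch names and regular expressions, so the same job can run on several branches. A minimal sketch (the release branch pattern here is illustrative, not from the original config):

workflows:
  version: 2
  deployments:
    jobs:
      - provisioning_spark_installation_script:
          filters:
            branches:
              only:
                - master
                - /release\/.*/   # regex patterns must be wrapped in slashes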

Related

GitHub Workflow deploying Python app to Azure App Service

I have a requirements.txt with internal dependencies in private GitHub repositories. I've set up the build step of the workflow to use webfactory/ssh-agent@v0.5.4 to provide SSH authentication, which works perfectly during the build phase. The deployment phase fails to authenticate because of SSH issues, and I can't find a similar way to get SSH working when Azure Oryx handles the dependency build during the deploy.
The error:
Python Version: /opt/python/3.7.12/bin/python3.7
Creating directory for command manifest file if it doesnot exist
Removing existing manifest file
Python Virtual Environment: antenv
Creating virtual environment...
Activating virtual environment...
Running pip install...
"2022-09-12 15:13:31"|ERROR|ERROR: Command errored out with exit status 128: git clone -q 'ssh://****@github.com/Murphy-Hoffman/IBMi-MHC.git' /tmp/8da94d13f03a38b/antenv/src/ibmi-mhc-db2 Check the logs for full command output. | Exit code: 1 | Please review your requirements.txt | More information: https://aka.ms/troubleshoot-python
\n/bin/bash -c "oryx build /tmp/zipdeploy/extracted -o /home/site/wwwroot --platform python --platform-version 3.7 -i /tmp/8da94d13f03a38b --compress-destination-dir -p virtualenv_name=antenv --log-file /tmp/build-debug.log | tee /tmp/oryx-build.log ; exit $PIPESTATUS "
Generating summary of Oryx build
Parsing the build logs
Found 1 issue(s)
Build Summary :
===============
Errors (1)
1. ERROR: Command errored out with exit status 128: git clone -q 'ssh://****@github.com/Murphy-Hoffman/IBMi-MHC.git' /tmp/8da94d13f03a38b/antenv/src/ibmi-mhc-db2 Check the logs for full command output.
- Next Steps: Please review your requirements.txt
- For more details you can browse to https://aka.ms/troubleshoot-python
My requirements.txt file:
autopep8==1.7.0
ibm-db==2.0.9
-e git+ssh://git@github.com/Murphy-Hoffman/IBMi-MHC.git@57085a5e1f5637bfdd815397b45ba1b2dfd9b52c#egg=IBMi_MHC_db2&subdirectory=utility/db2
-e git+ssh://git@github.com/Murphy-Hoffman/IBMi-MHC.git@57085a5e1f5637bfdd815397b45ba1b2dfd9b52c#egg=IBMi_MHC_UNIT&subdirectory=IBMi/_UNIT
itoolkit==1.7.0
pycodestyle==2.9.1
pyodbc==4.0.32
toml==0.10.2
Finally, here is the GitHub Actions YAML that succeeds during the build phase but fails in deployment:
# Docs for the Azure Web Apps Deploy action: https://github.com/Azure/webapps-deploy
# More GitHub Actions for Azure: https://github.com/Azure/actions
# More info on Python, GitHub Actions, and Azure App Service: https://aka.ms/python-webapps-actions

name: Build and deploy Python app to Azure Web App - mhc-customers

on:
  push:
    branches:
      - main
  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Set up Python version
        uses: actions/setup-python@v1
        with:
          python-version: '3.7'

      - name: Create and start virtual environment
        run: |
          python -m venv venv
          source venv/bin/activate

      - name: Setup SSH for Private Repos
        uses: webfactory/ssh-agent@v0.5.4
        with:
          ssh-private-key: |
            ${{ secrets.IBMI_MHC_SECRET }}

      - name: Install Dependencies
        run: |
          pip install -r requirements.txt

      # Optional: Add step to run tests here (PyTest, Django test suites, etc.)

      - name: Upload artifact for deployment jobs
        uses: actions/upload-artifact@v2
        with:
          name: python-app
          path: |
            .
            !venv/

  deploy:
    runs-on: ubuntu-latest
    needs: build
    environment:
      name: 'Production'
      url: ${{ steps.deploy-to-webapp.outputs.webapp-url }}
    steps:
      - name: Setup SSH for Private Repos
        uses: webfactory/ssh-agent@v0.5.4
        with:
          ssh-private-key: |
            ${{ secrets.IBMI_MHC_SECRET }}

      - name: Download artifact from build job
        uses: actions/download-artifact@v2
        with:
          name: python-app
          path: .

      - name: 'Deploy to Azure Web App'
        uses: azure/webapps-deploy@v2
        id: deploy-to-webapp
        with:
          app-name: 'mhc-customers'
          slot-name: 'Production'
          publish-profile: ${{ secrets.AZUREAPPSERVICE_PUBLISHPROFILE_89B81B4839F24A7589B3A4D5D845DA59 }}
I've got this working - sort of. After reading up on the Oryx automated build platform (https://github.com/microsoft/Oryx), I added an appsvc.yaml in the application root with this config:
version: 1
pre-build: |
  git config --global url."https://{secret}@github".insteadOf https://github
The problem is that we have to put our actual GitHub token in the config YAML (in place of "{secret}"). This isn't ideal, but it works to get Oryx cloning with the correct credentials.
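One way to keep the token out of the committed file is to generate appsvc.yaml during the build job, just before the artifact upload, so the secret only ever lives in GitHub secrets. This is a sketch, not part of the original solution, and the secret name IBMI_MHC_PAT is hypothetical:

- name: Generate appsvc.yaml with deploy token
  run: |
    {
      echo 'version: 1'
      echo 'pre-build: |'
      echo '  git config --global url."https://${{ secrets.IBMI_MHC_PAT }}@github".insteadOf https://github'
    } > appsvc.yaml

Actions expands the ${{ }} expression before the shell runs, so the generated file contains the real token while the workflow file itself does not.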

GitLab pipeline image with AWSCLI and Git

Hi, I currently have this code in my pipeline:
stages:
  - api-init
  - api-plan
  - api-apply

run-api-init:
  stage: api-init
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest
  script:
    - git submodule add --name $rnme ../test.git
    - aws s3 cp xxx xxx
  artifacts:
    reports:
      dotenv: build.env
But I am getting the error:
> /usr/bin/bash: line 127: git: command not found
as my image does not include the git command.
Is there an image I can use that has both the awscli and git commands?
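One option, sketched below under the assumption that the aws-base image is Debian-based (swap in yum or apk if it is not), is to keep the image and install git in a before_script:

run-api-init:
  stage: api-init
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest
  before_script:
    # git is not in the base image, so install it before the script runs
    - apt-get update && apt-get install -y git
  script:
    - git submodule add --name $rnme ../test.git
    - aws s3 cp xxx xxx
  artifacts:
    reports:
      dotenv: build.env

Alternatively, any image you build yourself with both tools preinstalled will work; there is no requirement to use a stock one.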

Failed Bitbucket NodeJS repo Pipeline with AWS Lambda function with "Error parsing parameter '--zip-file......' "

Our team is having a problem trying to set up a pipeline that updates an AWS Lambda function.
Once the deploy is triggered, it fails with the following error:
Status: Downloaded newer image for bitbucketpipelines/aws-lambda-deploy:0.2.3
INFO: Updating Lambda function.
aws lambda update-function-code --function-name apikey-token-authorizer2 --publish --zip-file fileb://apiGatewayAuthorizer.zip
Error parsing parameter '--zip-file': Unable to load paramfile fileb://apiGatewayAuthorizer.zip: [Errno 2] No such file or directory: 'apiGatewayAuthorizer.zip'
*Failed to update Lambda function code.
Looks like the script couldn't find the artifact, but we don't know why.
Here is the bitbucket-pipelines.yml file content:
image: node:16

# Workflow Configuration
pipelines:
  default:
    - parallel:
        - step:
            name: Build and Test
            caches:
              - node
            script:
              - echo Installing source YARN dependencies.
              - yarn install
  branches:
    testing:
      - parallel:
          - step:
              name: Build
              script:
                - apt update && apt install zip
                # Exclude files to be ignored
                - echo Zipping package.
                - zip -r apiGatewayAuthorizer.zip . -x *.git* bitbucket-pipelines.yml
              artifacts:
                - apiGatewayAuthorizer.zip
          - step:
              name: Deploy to testing - Update Lambda code
              deployment: Test
              trigger: manual
              script:
                - pipe: atlassian/aws-lambda-deploy:0.2.3
                  variables:
                    AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                    AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                    AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
                    FUNCTION_NAME: $LAMBDA_FUNCTION_NAME
                    COMMAND: 'update'
                    ZIP_FILE: 'apiGatewayAuthorizer.zip'
Does anyone know what I am missing here?
Thanks to Mark C. from Atlassian, here is the solution.
Based on your YAML configuration, I can see that you're using Parallel steps.
According to the documentation:
Parallel steps can only use artifacts produced by previous steps, not by steps in the same parallel set.
Hence, this is why the artifact is not generated in the "Build" step: those 2 steps are within a parallel set.
For that, you can just remove the parallel configuration and use multi-steps instead. This way, the first step can generate the artifact and pass it on to the second step. Hope it helps and let me know how it goes.
Regards, Mark C
So we tried the solution and it worked!
Here is the new pipeline:
pipelines:
branches:
testing:
- step:
name: Build and Test
caches:
- node
script:
- echo Installing source YARN dependencies.
- yarn install
- apt update && apt install zip
# Exclude files to be ignored
- echo Zipping package.
- zip -r my-deploy.zip . -x *.git* bitbucket-pipelines.yml
artifacts:
- my-deploy.zip
- step:
name: Deploy to testing - Update Lambda code
deployment: Test
trigger: manual
script:
- pipe: atlassian/aws-lambda-deploy:0.2.3
variables:
AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
FUNCTION_NAME: $LAMBDA_FUNCTION_NAME
COMMAND: 'update'
ZIP_FILE: 'my-deploy.zip'

GitLab CI PIP AWSCLI Avoid Install Again

I'm working with GitLab CE, Docker runners, and AWS ECS for deployment.
We created a script that does what we need, but we separated the stages and jobs for development.
The problem is that we need to run these commands to connect to AWS so that we can register containers and deploy our resources:
services:
  - docker:dind

before_script:
  - apk add build-base python3-dev python3 libffi-dev libressl-dev bash git gettext curl
  - apk add py3-pip
  - pip install six awscli awsebcli
  - $(aws ecr get-login --no-include-email --region "${AWS_DEFAULT_REGION}")
  - IMAGE_TAG="$(echo $CI_COMMIT_SHA | head -c 8)"
The problem is that this before_script runs with every job. That would not be an issue if it avoided reinstalling the dependencies each time, so we need to know whether we can avoid this behavior and effectively run the installation only once, because it makes every job take a long time to finish.
Our complete .gitlab-ci.yml:
image: docker:latest

stages:
  - build
  - tag
  - push
  - ecs
  - deploy

variables:
  REPOSITORY_URL: OUR_REPO
  REGION: OUR_REGION
  TASK_DEFINITION_NAME: task
  CLUSTER_NAME: default
  SERVICE_NAME: service

services:
  - docker:dind

before_script:
  - apk add build-base python3-dev python3 libffi-dev libressl-dev bash git gettext curl
  - apk add py3-pip
  - pip install six awscli awsebcli
  - $(aws ecr get-login --no-include-email --region "${AWS_DEFAULT_REGION}")
  - IMAGE_TAG="$(echo $CI_COMMIT_SHA | head -c 8)"

build:
  stage: build
  tags:
    - deployment
  script:
    - docker build -t $REPOSITORY_URL:latest .
  only:
    - refactor/ca

tag_image:
  stage: tag
  tags:
    - deployment
  script:
    - docker tag $REPOSITORY_URL:latest $REPOSITORY_URL:$IMAGE_TAG
  only:
    - refactor/ca

push_image:
  stage: push
  tags:
    - deployment
  script:
    - docker push $REPOSITORY_URL:$IMAGE_TAG
  only:
    - refactor/ca

task_definition:
  stage: ecs
  tags:
    - deployment
  script:
    - TASK_DEFINITION=$(aws ecs describe-task-definition --task-definition "$TASK_DEFINITION_NAME" --region "${REGION}")
    - NEW_CONTAINER_DEFINTIION=$(echo $TASK_DEFINITION | python3 $CI_PROJECT_DIR/update_task_definition_image.py $REPOSITORY_URL:$IMAGE_TAG)
    - echo "Registering new container definition..."
    - aws ecs register-task-definition --region "${REGION}" --family "${TASK_DEFINITION_NAME}" --container-definitions "${NEW_CONTAINER_DEFINTIION}"
  only:
    - refactor/ca

register_definition:
  stage: ecs
  tags:
    - deployment
  script:
    - aws ecs register-task-definition --region "${REGION}" --family "${TASK_DEFINITION_NAME}" --container-definitions "${NEW_CONTAINER_DEFINTIION}"
  only:
    - refactor/ca

deployment:
  stage: deploy
  tags:
    - deployment
  script:
    - aws ecs update-service --region "${REGION}" --cluster "${CLUSTER_NAME}" --service "${SERVICE_NAME}" --task-definition "${TASK_DEFINITION_NAME}"
  only:
    - refactor/ca
One workaround is to define before_script only for the job that runs aws ecs register-task-definition and aws ecs update-service.
Additionally, since the push and ecs stages use the IMAGE_TAG variable, it is convenient to store it in a build.env dotenv artifact so that those jobs can read it by specifying a dependency on the tag_image job.
image: docker:latest

stages:
  - build
  - tag
  - push
  - ecs

variables:
  REPOSITORY_URL: OUR_REPO
  REGION: OUR_REGION
  TASK_DEFINITION_NAME: task
  CLUSTER_NAME: default
  SERVICE_NAME: service

services:
  - docker:dind

build:
  stage: build
  tags:
    - deployment
  script:
    - docker build -t $REPOSITORY_URL:latest .
  only:
    - refactor/ca

tag_image:
  stage: tag
  tags:
    - deployment
  script:
    - IMAGE_TAG="$(echo $CI_COMMIT_SHA | head -c 8)"
    - echo "IMAGE_TAG=$IMAGE_TAG" >> build.env
    - docker tag $REPOSITORY_URL:latest $REPOSITORY_URL:$IMAGE_TAG
  only:
    - refactor/ca
  artifacts:
    reports:
      dotenv: build.env

push_image:
  stage: push
  tags:
    - deployment
  script:
    - docker push $REPOSITORY_URL:$IMAGE_TAG
  only:
    - refactor/ca
  dependencies:
    - tag_image

task_definition:
  stage: ecs
  tags:
    - deployment
  before_script:
    - apk add build-base python3-dev python3 libffi-dev libressl-dev bash git gettext curl
    - apk add py3-pip
    - pip install six awscli awsebcli
    - $(aws ecr get-login --no-include-email --region "${AWS_DEFAULT_REGION}")
  script:
    - TASK_DEFINITION=$(aws ecs describe-task-definition --task-definition "$TASK_DEFINITION_NAME" --region "${REGION}")
    - NEW_CONTAINER_DEFINTIION=$(echo $TASK_DEFINITION | python3 $CI_PROJECT_DIR/update_task_definition_image.py $REPOSITORY_URL:$IMAGE_TAG)
    - echo "Registering new container definition..."
    - aws ecs register-task-definition --region "${REGION}" --family "${TASK_DEFINITION_NAME}" --container-definitions "${NEW_CONTAINER_DEFINTIION}"
    - aws ecs update-service --region "${REGION}" --cluster "${CLUSTER_NAME}" --service "${SERVICE_NAME}" --task-definition "${TASK_DEFINITION_NAME}"
  only:
    - refactor/ca
  dependencies:
    - tag_image
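Another way to avoid the repeated pip install entirely, sketched here as an alternative rather than part of the answer above, is to bake the tooling into a custom image once and use it for the AWS jobs. The registry path below is hypothetical:

# One-off Dockerfile, built and pushed manually (or from a scheduled job):
#   FROM alpine:3.12
#   RUN apk add --no-cache build-base python3-dev python3 py3-pip bash git gettext curl \
#    && pip install six awscli awsebcli

task_definition:
  stage: ecs
  image: registry.example.com/ci/aws-tools:latest   # hypothetical prebuilt image
  before_script:
    # only the ECR login remains; nothing gets reinstalled per job
    - $(aws ecr get-login --no-include-email --region "${AWS_DEFAULT_REGION}")
  script:
    - aws ecs register-task-definition --region "${REGION}" --family "${TASK_DEFINITION_NAME}" --container-definitions "${NEW_CONTAINER_DEFINTIION}"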

GitHub Action - AWS CLI

Recently the following GitHub Action was deprecated, with a deletion date already set for the end of the month (2019-12-31). The issue is that there is no "official" alternative yet (it should appear here). My questions are:
Does someone know if the "official" action will be released before 2019-12-31?
Is there an alternative?
The aws-cli package is already available in GitHub-hosted virtual environments (aws-cli/1.16.266 Python/2.7.12 Linux/4.15.0-1057-azure botocore/1.13.2).
Make sure to set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as environment variables. You can use GitHub secrets to store these credentials securely.
- name: Upload to S3
  run: |
    aws s3 sync ./build s3://test-bucket
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    AWS_DEFAULT_REGION: 'ap-south-1'
Per the GitHub documentation, the aws-cli is already available directly on the host image.
It would be nice if this information were included in the deprecation notice
¯\_(ツ)_/¯
The AWS CLI comes preinstalled on GitHub Actions environments; more information can be found in the actions/virtual-environments repository. In my case I needed the latest possible version of the CLI, so I followed the AWS CLI install documentation and added the following step to a workflow running on ubuntu-latest:
- name: Install AWS CLI v2
  run: |
    curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o /tmp/awscliv2.zip
    unzip -q /tmp/awscliv2.zip -d /tmp
    rm /tmp/awscliv2.zip
    sudo /tmp/aws/install --update
    rm -rf /tmp/aws/
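A quick follow-up step can confirm that the v2 binary now shadows the preinstalled v1:

- name: Check AWS CLI version
  run: aws --version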
An alternative to the default awscli, or to using third-party actions, is to configure Python and install the awscli at build time:
name: Sync to S3 bucket

on: [push]

jobs:
  sync:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: '3.7'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install awscli
      - run: aws s3 sync builddir s3://foobar --region eu-west-1 --cache-control max-age=0 --acl public-read --delete
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
"Github Actions > Building and testing Python" docs on Github https://docs.github.com/en/actions/guides/building-and-testing-python
The repo was updated yesterday with the following new deprecation notice:
This action has been deprecated in favor of https://github.com/aws-actions. This repo has been archived and will be made private on 12/31/2019
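For completeness, the aws-actions organization referenced in the notice later published configure-aws-credentials, which replaces the per-step env block. A minimal sketch, reusing the bucket and region from the example above:

- uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: ap-south-1
- name: Upload to S3
  run: aws s3 sync ./build s3://test-bucket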
