Gitlab CI empty script

I'm trying to set up a simple CI/CD environment on GitLab. My code is a Python app that needs an external service for testing. The service is a container that does not require any script to be run. My gitlab-ci.yml file is:
stages:
  - dynamodb
  - testing

build_dynamo:
  stage: dynamodb
  image: amazon/dynamodb-local:latest

unit_tests:
  stage: testing
  image: python:3.10.3
  before_script:
    - pip install -r requirements_tests.txt
    - export PYTHONPATH="${PYTHONPATH}:./src"
  script:
    - python -m unittest discover -s ./tests -p '*test*.py'
For this config I get the following error:
Found errors in your .gitlab-ci.yml: jobs build_dynamo config should
implement a script: or a trigger: keyword
How can I solve this or otherwise implement the setup I need?

Using a service solved this:
unit_tests:
  image: python:3.10.3-slim-buster
  services:
    - name: amazon/dynamodb-local:latest
  before_script:
    - pip install -r requirements_tests.txt
    - export PYTHONPATH="${PYTHONPATH}:./src"
  script:
    - python -m unittest discover -s ./tests -p '*test*.py'
The endpoint for the service is amazon-dynamodb-local:8000, since "/" in the image name is changed to "-".
Reference: https://docs.gitlab.com/ee/ci/services/#accessing-the-services
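If you prefer a fixed hostname instead of the generated one, GitLab also lets you give the service an alias. A minimal sketch of that idea; the alias dynamodb and the DYNAMODB_ENDPOINT variable are my own choices, not from the original config:

unit_tests:
  image: python:3.10.3-slim-buster
  services:
    - name: amazon/dynamodb-local:latest
      alias: dynamodb        # service is now reachable as "dynamodb"
  variables:
    DYNAMODB_ENDPOINT: "http://dynamodb:8000"  # read this in the tests instead of hard-coding the host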

Related

Failed Bitbucket NodeJS repo Pipeline with AWS Lambda function with "Error parsing parameter '--zip-file......' "

Our team is having a problem trying to set up a pipeline to update an AWS Lambda function.
Once the deploy is triggered, it fails with the following error:
Status: Downloaded newer image for bitbucketpipelines/aws-lambda-deploy:0.2.3
INFO: Updating Lambda function.
aws lambda update-function-code --function-name apikey-token-authorizer2 --publish --zip-file fileb://apiGatewayAuthorizer.zip
Error parsing parameter '--zip-file': Unable to load paramfile fileb://apiGatewayAuthorizer.zip: [Errno 2] No such file or directory: 'apiGatewayAuthorizer.zip'
*Failed to update Lambda function code.
Looks like the script couldn't find the artifact, but we don't know why.
Here is the bitbucket-pipelines.yml file content:
image: node:16

# Workflow Configuration
pipelines:
  default:
    - parallel:
        - step:
            name: Build and Test
            caches:
              - node
            script:
              - echo Installing source YARN dependencies.
              - yarn install
  branches:
    testing:
      - parallel:
          - step:
              name: Build
              script:
                - apt update && apt install zip
                # Exclude files to be ignored
                - echo Zipping package.
                - zip -r apiGatewayAuthorizer.zip . -x *.git* bitbucket-pipelines.yml
              artifacts:
                - apiGatewayAuthorizer.zip
          - step:
              name: Deploy to testing - Update Lambda code
              deployment: Test
              trigger: manual
              script:
                - pipe: atlassian/aws-lambda-deploy:0.2.3
                  variables:
                    AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                    AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                    AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
                    FUNCTION_NAME: $LAMBDA_FUNCTION_NAME
                    COMMAND: 'update'
                    ZIP_FILE: 'apiGatewayAuthorizer.zip'
Does anyone know what I am missing here?
Thanks to Marc C. from Atlassian, here is the solution.
Based on your YAML configuration, I can see that you're using Parallel
steps.
According to the documentation:
Parallel steps can only use artifacts produced by previous steps, not
by steps in the same parallel set.
Hence, this is why the artifact is not generated in the "Build" step,
because those 2 steps are within a parallel set.
For that, you can just remove the parallel configuration and use
multi-steps instead. This way, the first step can generate the
artifact and pass it on to the second step. Hope it helps and let me
know how it goes.
Regards, Mark C
So we've tried the solution and it worked!
Here is the new pipeline:
pipelines:
  branches:
    testing:
      - step:
          name: Build and Test
          caches:
            - node
          script:
            - echo Installing source YARN dependencies.
            - yarn install
            - apt update && apt install zip
            # Exclude files to be ignored
            - echo Zipping package.
            - zip -r my-deploy.zip . -x *.git* bitbucket-pipelines.yml
          artifacts:
            - my-deploy.zip
      - step:
          name: Deploy to testing - Update Lambda code
          deployment: Test
          trigger: manual
          script:
            - pipe: atlassian/aws-lambda-deploy:0.2.3
              variables:
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
                FUNCTION_NAME: $LAMBDA_FUNCTION_NAME
                COMMAND: 'update'
                ZIP_FILE: 'my-deploy.zip'

How to write .gitlab-ci.yml to build/deploy with conditions

I am new to CI/CD and GitLab. I have a CI/CD script to test, build and deploy, and I use 2 branches and 2 EC2 instances. My goal is to have a light, non-redundant script that builds and deploys my changes depending on the branch.
Currently my script looks like the one below, but after reading the GitLab docs I saw many conditional keywords like rules, and I'm lost on how to use conditionals in my script to optimise it.
Is there a way to use conditions to run some scripts when there is a merge from one branch or from another? Thanks in advance!
#image: alpine
image: "python:3.7"

before_script:
  - python --version

stages:
  - test
  - build_staging
  - build_prod
  - deploy_staging
  - deploy_prod

test:
  stage: test
  script:
    - pip install -r requirements.txt
    - pytest Flask_server/test_app.py
  only:
    refs:
      - develop

build_staging:
  stage: build_staging
  image: node
  before_script:
    - npm install -g npm
    - hash -d npm
    - nodejs -v
    - npm -v
  script:
    - cd client
    - npm install
    - npm update
    - npm run build:staging
  artifacts:
    paths:
      - client/dist/
    expire_in: 30 minutes
  only:
    refs:
      - develop

build_prod:
  stage: build_prod
  image: node
  before_script:
    - npm install -g npm
    - hash -d npm
    - nodejs -v
    - npm -v
  script:
    - cd client
    - npm install
    - npm update
    - npm run build
  artifacts:
    paths:
      - client/dist/
    expire_in: 30 minutes
  only:
    refs:
      - master

deploy_staging:
  stage: deploy_staging
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest  # gitlab image for aws cli commands
  before_script:
    - apt-get update
    # - apt-get -y install python3-pip
    # - apt-get --assume-yes install awscli
    - apt-get --assume-yes install -y shellcheck
  script:
    - shellcheck .ci/deploy_aws_STAGING.sh
    - chmod +x .ci/deploy_aws_STAGING.sh
    - .ci/deploy_aws_STAGING.sh
    - aws s3 cp client/dist/ s3://......./ --recursive
  only:
    refs:
      - develop

deploy_prod:
  stage: deploy_prod
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest  # gitlab image for aws cli commands
  before_script:
    - apt-get update
    # - apt-get -y install python3-pip
    # - apt-get --assume-yes install awscli
    - apt-get --assume-yes install -y shellcheck
  script:
    - shellcheck .ci/deploy_aws_PROD.sh
    - chmod +x .ci/deploy_aws_PROD.sh
    - .ci/deploy_aws_PROD.sh
    - aws s3 cp client/dist/ s3://........../ --recursive
  only:
    refs:
      - master
GitLab introduced rules for include in version 14.2:
include:
  - local: builds.yml
    rules:
      - if: '$INCLUDE_BUILDS == "true"'
A good pattern as your CI/CD grows in complexity is to use the include and extends keywords. For example, you could implement the following in your root-level .gitlab-ci.yml file:
# best practice is to pin to a specific version of node or build your own image to avoid surprises
image: node:12

# stages don't need an environment appended to them; you'll see why in the included file
stages:
  - build
  - test
  - deploy

# cache node modules in between jobs on a per branch basis like this
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .npm/

# include other definitions
include:
  - './ci-templates/.foo-app-ci.yml'
Then in another folder (or even another repository) you can include other templates. I didn't fully refactor this for you, but I hope it gives you the idea not only of how to use a rule to trigger your job, but also of how you can start to make reusable snippets and build on them to reduce the overall complexity. See the YAML comments for guidance on why I did things a certain way. Example .foo-app-ci.yml file:
# this script was repeated so define it once and reference it via anchor
.npm:install: &npm:install
  - npm ci --cache .npm --prefer-offline  # to use the cache you'll need to do this before installing dependencies
  - cd client
  - npm install
  - npm update

# you probably want the same rules for each stage. define once and reuse them via anchor
.staging:rules: &staging:rules
  - if: $CI_COMMIT_TAG
    when: never  # Do not run this job when a tag is created manually
  - if: $CI_COMMIT_BRANCH == 'develop'  # Run this job when commits are pushed or merged to the develop branch

.prod:rules: &prod:rules
  - if: $CI_COMMIT_TAG
    when: never  # Do not run this job when a tag is created manually
  - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH  # Run this job when commits are pushed or merged to the default branch

# many parts of the build stage were repeated; define it once and let's extend from it
.build:template:
  stage: build
  before_script: *npm:install
  artifacts:
    paths:
      - client/dist/
    expire_in: 30 minutes

# many parts of the deploy stage were repeated; define it once and let's extend from it
.deploy:template:
  stage: deploy
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest  # gitlab image for aws cli commands
  before_script:
    - apt-get update
    - apt-get --assume-yes install -y shellcheck

# here we extend from the build template to run the staging specific build
build:staging:
  extends: .build:template
  environment: staging
  script:
    - npm run build:staging
  rules: *staging:rules

# this is kind of an oddball... not used to seeing python to test a node app. we're not able to reuse as much here
test:staging:
  image: "python:3.7"
  stage: test
  script:
    - pip install -r requirements.txt
    - pytest Flask_server/test_app.py
  rules: *staging:rules  # apply staging rules to trigger test stage
  needs:
    - job: build:staging  # normally we want to build before test; this will trigger test after the build

# here we extend from the build template to run the prod specific build
build:prod:
  extends: .build:template
  environment: prod
  script:
    - npm run build
  rules: *prod:rules

# same thing for the deploy phases... extend from the deploy template for env specific requirements
deploy:staging:
  extends: .deploy:template
  script:
    - shellcheck .ci/deploy_aws_STAGING.sh
    - chmod +x .ci/deploy_aws_STAGING.sh
    - .ci/deploy_aws_STAGING.sh
    - aws s3 cp client/dist/ s3://......./ --recursive
  rules: *staging:rules
  needs:
    - job: build:staging
      artifacts: true

deploy:prod:
  extends: .deploy:template
  script:
    - shellcheck .ci/deploy_aws_PROD.sh
    - chmod +x .ci/deploy_aws_PROD.sh
    - .ci/deploy_aws_PROD.sh
    - aws s3 cp client/dist/ s3://........../ --recursive
  rules: *prod:rules
  needs:
    - job: build:prod
      artifacts: true
I would start basic, and as you get comfortable with a working pipeline you can experiment with further enhancements and break things out into more fragments. Hope this helps!

How to utilize Docker to run tests for multiple languages on Travis CI

I am attempting to create a CI/CD pipeline with Travis CI that tests the front-end, tests the back-end, and deploys. The front-end is using Node, the back-end is using Go.
My repository is structured as follows:
- client
  - DockerFile
  - ...(front-end code)
- server
  - DockerFile
  - ...(back-end code)
- .travis.yml
Would I be able to utilize the DockerFiles in some fashion to execute tests for both sides of the application and have Travis report their results properly?
I'm not well versed in either tool, so I was hoping to get some input before I dig myself into a hole. I plan on using a combination of Travis stages and docker build/docker run commands. Something like this:
jobs:
  include:
    - stage: test client side
      before_script:
        - cd client
        - docker build ...
      script:
        docker run image /bin/sh -c "run node tests"
      after_script:
        - cd ..
    - stage: test server side
      before_script:
        - cd server
      script:
        docker run image /bin/sh -c "run go tests"
      after_script:
        - cd ..
    - stage: deploy
      script: skip
      deploy:
        - provider: s3
          skip_cleanup: true
          on:
            branch: master
This doc page makes it look promising, but the inclusion of language: ruby and script: - bundle exec rake test throws me off. I am not sure why Ruby is required if the tests are run through Docker (at least that's what it looks like).
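For what it's worth, the language key only selects the base build environment on Travis; when every test runs inside Docker you can pick something lightweight. A minimal sketch of that idea, assuming nothing beyond Docker itself is needed on the host:

language: minimal   # no Ruby/Node/Go toolchain on the host; just the base environment
services:
  - docker          # tests run inside containers started from this host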
Update 1
I believe I got it to work correctly with the client side of the application.
Here is what I got:
services:
  - docker

jobs:
  include:
    - stage: test
      before_script:
        - docker pull node:12
      script:
        - docker run --rm -e CI=true -v $(pwd)/client:/src node:12 /bin/sh -c "cd src; npm install; npm test"
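A similar stage for the Go back end could look like the sketch below; the golang:1.18 image tag and the go test ./... invocation are assumptions, so adjust them to the server's actual toolchain and module layout:

jobs:
  include:
    - stage: test server side
      before_script:
        - docker pull golang:1.18   # assumed Go version
      script:
        - docker run --rm -v $(pwd)/server:/src -w /src golang:1.18 go test ./...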

How to configure Python coverage.py report for Gitlab

I'm new to GitLab and trying to set up coverage report -m for GitLab. When I run it manually, coverage report -m gives me the report. I just can't figure out what needs to be done to get that to display on GitLab.
This needs to run with Python 3.6 unit-test code coverage on Linux for GitLab.
Here is my .yml file:
stages:
  - build
  - test
  - coverage
  - deploy

before_script:
  - python --version
  - pip install -r requirements.txt

unit-tests:
  image:
    name: "python:3.6"
    entrypoint: [""]
  stage: test
  script: python3 -m unittest discover

test:
  image:
    name: "python:3.6"
  stage: test
  script:
    - PYTHONPATH=$(pwd) python3 my_Project_Lib/my_test_scripts/runner.py

coverage:
  stage: test
  script:
    # - docker pull $CONTAINER_TEST_IMAGE
    - python3 -m unittest discover
    # - docker run $CONTAINER_TEST_IMAGE /bin/bash -c "python -m coverage run tests/tests.py && python -m coverage report -m"
  coverage: '/TOTAL.+ ([0-9]{1,3}%)/'
This runs my unit tests and runner.py fine, but no test coverage data.
Below is a working solution for unit-test code coverage.
Here is my .yml file:
stages:
  - build
  - test
  - coverage
  - deploy

before_script:
  - pip install -r requirements.txt

test:
  image:
    name: "python:3.6"
  stage: test
  script:
    - python my_Project_Lib/my_test_scripts/runner.py

unit-tests:
  stage: test
  script:
    - python -m unittest discover
    - coverage report -m
    - coverage-badge
  coverage: '/TOTAL.+ ([0-9]{1,3}%)/'
This runs my unit tests and runner.py fine, and also runs coverage. You will need the following in requirements.txt:
coverage
coverage-badge
Also add this line in README.MD:
[![coverage report](https://gitlab.your_link.com/your_user_name/your directory/badges/master/coverage.svg)](https://gitlab.your_link.com/your_user_name/your directory/commits/master)
Your user name and link can be copied from the web address.
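One thing to double-check: coverage report only prints data that has already been collected, so if the job starts from a clean environment you may also need a coverage run step before it. A minimal sketch of such a job, using coverage.py's standard coverage run wrapper:

unit-tests:
  stage: test
  script:
    - coverage run -m unittest discover  # collect coverage data while running the tests
    - coverage report -m                 # print the report that the coverage regex parses
  coverage: '/TOTAL.+ ([0-9]{1,3}%)/'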

Is it possible to use multiple docker images in bitbucket pipeline?

I have this pipeline file to unittest my project:
image: jameslin/python-test

pipelines:
  default:
    - step:
        script:
          - service mysql start
          - pip install -r requirements/test.txt
          - export DJANGO_CONFIGURATION=Test
          - python manage.py test
But is it possible to switch to another Docker image to deploy?
image: jameslin/python-deploy

pipelines:
  default:
    - step:
        script:
          - ansible-playbook deploy
I cannot seem to find any documentation saying either Yes or No.
You can specify an image for each step, like this:
pipelines:
  default:
    - step:
        name: Build and test
        image: node:8.6
        script:
          - npm install
          - npm test
          - npm run build
        artifacts:
          - dist/**
    - step:
        name: Deploy
        image: python:3.5.1
        trigger: manual
        script:
          - python deploy.py
Finally found it:
https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html#Configurebitbucket-pipelines.yml-ci_stepstep(required)
step (required) Defines a build execution unit. Steps are executed in
the order in which they appear in the pipeline. Currently, each
pipeline can have only one step (one for the default pipeline and one
for each branch). You can override the main Docker image by specifying
an image in a step.
I have not found any information saying yes or no either. Since the image can be configured with all the languages and technology you need, I would suggest this method:
Create your docker image with all utilities you need for both default and deployment.
Use the branching method they show in their examples https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html#Configurebitbucket-pipelines.yml-ci_branchesbranches(optional)
Use shell scripts or other scripts to run the specific tasks you need, for example:
image: yourusername/your-image

pipelines:
  branches:
    master:
      - step:
          script: # Modify the commands below to build your repository.
            - echo "Starting pipelines for master"
            - chmod +x your-task-configs.sh # necessary to get shell script to run in BB Pipelines
            - ./your-task-configs.sh
    feature/*:
      - step:
          script: # Modify the commands below to build your repository.
            - echo "Starting pipelines for feature/*"
            - npm install
            - npm install -g grunt-cli
            - npm install grunt --save-dev
            - grunt build
