I use a GitLab pipeline.
I need to upload my build artifacts to S3.
This is my stage for it:
job:
  image: python:latest
  stage: upload_S3
  dependencies:
    - build
  before_script:
    - pip install awscli
  script:
    - aws --endpoint-url $ENDPOIN_URL s3 rm s3://$BUCKET_NAME/artifacts/ --recursive
    - aws --endpoint-url $ENDPOIN_URL s3 cp bin s3://$BUCKET_NAME/artifacts/build-$CI_COMMIT_REF_NAME-$CI_PIPELINE_ID --recursive --exclude "settings*" --exclude "web.config"
  only:
    changes:
      - Server/**/*
    refs:
      - master
      - /^release-.*/
  when: manual
But when I clear the folder in S3, the delete command doesn't remove all of the files.
If I delete the leftovers in S3 Browser, or simply re-run the clear script, they are removed.
So files accumulate in the bucket.
My build has many files. Why can't awscli delete all of the files at once?
How can I delete everything in a single run (without retrying the stage)?
Our team is having a problem trying to set up a pipeline for updating an AWS Lambda function.
Once the deploy is triggered, it fails with the following error:
Status: Downloaded newer image for bitbucketpipelines/aws-lambda-deploy:0.2.3
INFO: Updating Lambda function.
aws lambda update-function-code --function-name apikey-token-authorizer2 --publish --zip-file fileb://apiGatewayAuthorizer.zip
Error parsing parameter '--zip-file': Unable to load paramfile fileb://apiGatewayAuthorizer.zip: [Errno 2] No such file or directory: 'apiGatewayAuthorizer.zip'
Failed to update Lambda function code.
Looks like the script couldn't find the artifact, but we don't know why.
Here is the bitbucket-pipelines.yml file content:
image: node:16

# Workflow Configuration
pipelines:
  default:
    - parallel:
        - step:
            name: Build and Test
            caches:
              - node
            script:
              - echo Installing source YARN dependencies.
              - yarn install
  branches:
    testing:
      - parallel:
          - step:
              name: Build
              script:
                - apt update && apt install zip
                # Exclude files to be ignored
                - echo Zipping package.
                - zip -r apiGatewayAuthorizer.zip . -x *.git* bitbucket-pipelines.yml
              artifacts:
                - apiGatewayAuthorizer.zip
          - step:
              name: Deploy to testing - Update Lambda code
              deployment: Test
              trigger: manual
              script:
                - pipe: atlassian/aws-lambda-deploy:0.2.3
                  variables:
                    AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                    AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                    AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
                    FUNCTION_NAME: $LAMBDA_FUNCTION_NAME
                    COMMAND: 'update'
                    ZIP_FILE: 'apiGatewayAuthorizer.zip'
Does anyone know what I am missing here?
Thanks to Marc C. from Atlassian, here is the solution.
Based on your YAML configuration, I can see that you're using Parallel
steps.
According to the documentation:
Parallel steps can only use artifacts produced by previous steps, not
by steps in the same parallel set.
Hence, this is why the artifact is not generated in the "Build" step:
those two steps are within a parallel set.
For that, you can just remove the parallel configuration and use
multi-steps instead. This way, the first step can generate the
artifact and pass it on to the second step. Hope it helps and let me
know how it goes.
Regards, Mark C
So we tried the solution, and it worked!
Here is the new pipeline:
pipelines:
  branches:
    testing:
      - step:
          name: Build and Test
          caches:
            - node
          script:
            - echo Installing source YARN dependencies.
            - yarn install
            - apt update && apt install zip
            # Exclude files to be ignored
            - echo Zipping package.
            - zip -r my-deploy.zip . -x *.git* bitbucket-pipelines.yml
          artifacts:
            - my-deploy.zip
      - step:
          name: Deploy to testing - Update Lambda code
          deployment: Test
          trigger: manual
          script:
            - pipe: atlassian/aws-lambda-deploy:0.2.3
              variables:
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
                FUNCTION_NAME: $LAMBDA_FUNCTION_NAME
                COMMAND: 'update'
                ZIP_FILE: 'my-deploy.zip'
I have a "buildspec.yaml" file in my project. When I am in the "development" branch he makes a command to send the files in a given s3, but when I "push" my "master" branch it has to be sent to another S3, is there any way I can put a conditional on the file instead of me changing the file when I switch "branch"?
my file buildspec.yaml:
version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 12
    commands:
      - npm install -g @angular/cli
      - npm update
  build:
    commands:
      - ng build
  post_build:
    commands:
      - aws s3 rm s3://link-s3-dev --recursive # when changing branches I need to change the address manually
      - aws s3 cp ./dist/portaldev s3://link-s3-dev --recursive # when changing branches I need to change the address manually
artifacts:
  files:
    - '**/*'
When I switch to the master branch, I have to change the command to send the build to the correct S3 bucket. Is there a way to put a conditional in the file, instead of keeping a different version of this file in each branch?
If you create two pipelines for the two branches, you can use environment variables to accomplish this.
The S3 bucket becomes an environment variable, say BUILD_OUTPUT_BUCKET:
aws s3 rm $BUILD_OUTPUT_BUCKET --recursive
aws s3 cp ./dist/portaldev $BUILD_OUTPUT_BUCKET --recursive
This document provides information about adding additional environment variables for your build (Bullet 13).
Alternatively, if you can get the branch name from the variables available to the build here, you can run a script from within the build itself to set the appropriate bucket, as sketched below.
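For example, a minimal sketch of that second approach, assuming the branch name is exposed through CodeBuild's CODEBUILD_WEBHOOK_HEAD_REF variable (webhook-triggered builds) and that link-s3-prod is a placeholder for the production bucket:

version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 12
    commands:
      - npm install -g @angular/cli
      - npm update
  build:
    commands:
      - ng build
  post_build:
    commands:
      # Pick the bucket from the branch name instead of hard-coding it.
      # CODEBUILD_WEBHOOK_HEAD_REF looks like "refs/heads/master" for
      # webhook-triggered builds; adjust the check if builds are started manually.
      - |
        if [ "${CODEBUILD_WEBHOOK_HEAD_REF}" = "refs/heads/master" ]; then
          TARGET_BUCKET="s3://link-s3-prod"
        else
          TARGET_BUCKET="s3://link-s3-dev"
        fi
        aws s3 rm "${TARGET_BUCKET}" --recursive
        aws s3 cp ./dist/portaldev "${TARGET_BUCKET}" --recursive
artifacts:
  files:
    - '**/*'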
AWS beginner here
I have a repository on GitLab which has a branch named automatic_invoice_generator. This branch has the following content in it:
Script1.py
Script2.py
Script3.py
.gitlab-ci.yml
Now I have to deploy these three scripts as three different AWS Lambda functions. What I have done so far is create three different branches from the automatic_invoice_generator branch (script1_branch, script2_branch, script3_branch) and, for each branch, change the .gitlab-ci.yml file slightly to suit that particular script.
My .gitlab-ci.yml file for Script1.py looks as follows:
image: ubuntu:latest

variables:
  GIT_SUBMODULE_STRATEGY: recursive
  LAMBDA_NAME: Script1
  AWS_DEFAULT_REGION: eu-central-1
  S3_BUCKET: invoice-bucket

stages:
  - deploy

production:
  stage: deploy
  script:
    - apt-get -y update
    - apt-get -y install python3-pip python3.7 zip
    - python3.7 -m pip install --upgrade pip
    - python3.7 -V
    - pip3.7 install virtualenv
    - mv Script1.py ~
    - mv csv_data ~
    - mv requirements.txt ~
    # Move submodules
    - mv edilite/edilite ~
    - mv edilite/pydifact/pydifact ~
    # Set up virtual environment
    - mkdir ~/forlambda
    - cd ~/forlambda
    - virtualenv -p python3 venv
    - source venv/bin/activate
    - pip3.7 install -r ~/requirements.txt -t ~/forlambda/venv/lib/python3.7/site-packages/
    # Package environment and dependencies
    - cd ~/forlambda/venv/lib/python3.7/site-packages/
    - zip -r9 ~/forlambda/archive.zip .
    - cd ~
    - zip -g ~/forlambda/archive.zip Script1.py
    - zip -r ~/forlambda/archive.zip csv_data/*
    - zip -r ~/forlambda/archive.zip edilite/*
    - zip -r ~/forlambda/archive.zip pydifact/*
    # Install AWS CLI
    - pip install awscli --upgrade # --user
    - export PATH=$PATH:~/.local/bin # Add to PATH
    # Configure AWS connection
    - aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID
    - aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY
    - aws configure set default.region $AWS_DEFAULT_REGION
    - aws sts get-caller-identity --output text --query 'Account' # current account
    # Upload package to S3
    - aws s3 cp ~/forlambda/archive.zip s3://$S3_BUCKET/$LAMBDA_NAME-deployment.zip
I am using the same .gitlab-ci.yml file for all three branches (script1_branch, script2_branch, script3_branch), only changing LAMBDA_NAME and the name of the .py script. When I run the pipelines for all three branches, three different Lambda functions are created and everything works fine.
What I would like to know is whether there is a way to modify my .gitlab-ci.yml file so that, instead of creating three different branches for the three scripts, I can create just one branch from automatic_invoice_generator (say all_scripts_branch) and deploy all three scripts simultaneously as three different Lambda functions.
I am a bit new to both AWS and GitLab, so any help is appreciated.
Consider the following stub .gitlab-ci.yml, which illustrates leveraging the GitLab CI YAML anchors feature (https://docs.gitlab.com/ee/ci/yaml/#anchors) to reduce code duplication:
image: alpine

variables:
  GIT_SUBMODULE_STRATEGY: recursive
  AWS_DEFAULT_REGION: eu-central-1
  S3_BUCKET: invoice-bucket

stages:
  - deploy

.job_template: &job_definition  # Hidden key that defines an anchor named 'job_definition'
  stage: deploy
  script:
    - echo zip -g ~/forlambda/archive.zip ${LAMBDA_NAME}.py
    - echo aws s3 cp ~/forlambda/archive.zip s3://$S3_BUCKET/${LAMBDA_NAME}-deployment.zip

production1:
  variables:
    LAMBDA_NAME: Script1
  <<: *job_definition  # Merge the contents of the 'job_definition' alias

production2:
  variables:
    LAMBDA_NAME: Script2
  <<: *job_definition  # Merge the contents of the 'job_definition' alias
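Applied to the pipeline from the question, a condensed sketch could look like the following. The packaging steps are simplified relative to the original job, and the job names are only examples:

image: ubuntu:latest

variables:
  GIT_SUBMODULE_STRATEGY: recursive
  AWS_DEFAULT_REGION: eu-central-1
  S3_BUCKET: invoice-bucket

stages:
  - deploy

.deploy_template: &deploy_definition
  stage: deploy
  script:
    - apt-get -y update
    - apt-get -y install python3-pip zip
    - pip3 install awscli --upgrade
    # Package the script named by LAMBDA_NAME together with its dependencies
    - pip3 install -r requirements.txt -t package/
    - cd package && zip -r9 ../archive.zip . && cd ..
    - zip -g archive.zip ${LAMBDA_NAME}.py
    - zip -r archive.zip csv_data/*
    # Upload one deployment package per Lambda function
    - aws s3 cp archive.zip s3://$S3_BUCKET/${LAMBDA_NAME}-deployment.zip

deploy_script1:
  variables:
    LAMBDA_NAME: Script1
  <<: *deploy_definition

deploy_script2:
  variables:
    LAMBDA_NAME: Script2
  <<: *deploy_definition

deploy_script3:
  variables:
    LAMBDA_NAME: Script3
  <<: *deploy_definition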
References:
- https://docs.gitlab.com/ee/ci/yaml/#anchors
- https://gitlab.com/gitlab-org/gitlab-foss/issues/24535
I have problems with GitLab not uploading the artifacts generated by codeception when a test fails. It only uploads the .gitignore in the _output folder.
This is the relevant part from my .gitlab-ci.yml:
    - ./src/Vendor/codeception/codeception/codecept run acceptance || true
    - ls -a tests/_output
  artifacts:
    paths:
      - "tests/_output"
    expire_in: 20 days
    when: always
Interestingly, I can browse the artifacts (in this case only the .gitignore file) before the job even finishes. The logs of my runner prove that the artifacts do exist in the directory tests/_output (shortened):
$ ls -a tests/_output
.
..
.gitignore
commentsCest.answerCommentTest.fail.html
commentsCest.answerCommentTest.fail.png
commentsCest.normalCommentTest.fail.html
commentsCest.normalCommentTest.fail.png
failed
Uploading artifacts...
tests/_output: found 2 matching files
Uploading artifacts to coordinator... ok id=123456789 responseStatus=201 Created token=abcdefghij
Job succeeded
What am I doing wrong?
I figured out a workaround:
The GitLab runner only uploads artifact files properly when they are located inside the project directory.
To get the artifacts, copy all the files to ${CI_PROJECT_DIR}:
codeception_tests:
  stage: <your stage-name>
  image: <your image>
  script:
    - ...
  after_script:
    - mkdir ${CI_PROJECT_DIR}/artifacts
    - mkdir ${CI_PROJECT_DIR}/artifacts/codecept
    - cp tests/_output ${CI_PROJECT_DIR}/artifacts/codecept -R
  artifacts:
    paths:
      - ${CI_PROJECT_DIR}/artifacts/
    expire_in: 5 days
    when: always
In GitLab, is it possible to transfer caches or artifacts between pipelines?
I am building a library in one pipeline and I want to build an application with the library in another pipeline.
Yes, it is possible. There are a couple of options to achieve this:
Using the Jobs API and GitLab Premium
The first option is to use the Jobs API to fetch artifacts. This method is available only if you have GitLab Premium. With this option, you use CI_JOB_TOKEN with the Jobs API to fetch artifacts from another pipeline. Read more here.
Here is a quick example of a job you would put in your application pipeline configuration:
build_application:
  image: debian
  stage: build
  script:
    - apt update && apt install -y unzip
    - curl --location --output artifacts.zip "https://gitlab.example.com/api/v4/projects/${PROJECT_ID}/jobs/artifacts/master/download?job=build&job_token=$CI_JOB_TOKEN"
    - unzip artifacts.zip
Using S3
The second option is to use third-party intermediate storage, for instance AWS S3. To pass artifacts, follow the example below.
In your library pipeline configuration create the following job:
variables:
  TARGET_PROJECT_TOKEN: [get token from Settings -> CI/CD -> Triggers]
  TARGET_PROJECT_ID: [get project id from project main page]

publish-artifact:
  image: "python:latest"
  stage: publish
  before_script:
    - pip install awscli
  script:
    - aws s3 cp output/artifact.zip s3://your-s3-bucket-name/artifact.zip.${CI_JOB_ID}
    - "curl -X POST -F token=${TARGET_PROJECT_TOKEN} -F ref=master -F variables[ARTIFACT_ID]=${CI_JOB_ID} https://gitlab.com/api/v4/projects/${TARGET_PROJECT_ID}/trigger/pipeline"
Then, in your application pipeline configuration, retrieve the artifact from the S3 bucket:
fetch-artifact-from-s3:
  image: "python:latest"
  stage: prepare
  artifacts:
    paths:
      - artifact/
  before_script:
    - pip install awscli
  script:
    - mkdir artifact
    - aws s3 cp s3://your-s3-bucket-name/artifact.zip.${ARTIFACT_ID} artifact/artifact.zip
  only:
    variables:
      - $ARTIFACT_ID
Once the fetch-artifact-from-s3 job has completed, your artifact is available in the artifact/ directory and can be consumed by other jobs in the application pipeline.
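For example, a minimal sketch of a job that consumes it (the job name, stage, and build command here are only placeholders):

build_application:
  image: debian
  stage: build
  dependencies:
    - fetch-artifact-from-s3   # pulls in the artifact/ directory produced above
  script:
    - apt update && apt install -y unzip
    - unzip artifact/artifact.zip -d library/
    - ./build.sh --with-library library/   # placeholder build command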