Hi, I currently have this code in my pipeline:
stages:
  - api-init
  - api-plan
  - api-apply

run-api-init:
  stage: api-init
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest
  script:
    - git submodule add --name $rnme ../test.git
    - aws s3 cp xxx xxx
  artifacts:
    reports:
      dotenv: build.env
But I am getting the error:
> /usr/bin/bash: line 127: git: command not found
as my image does not have the git command installed.
Is there any image I can use that has both the awscli and git commands?
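If no single ready-made image fits, a common workaround is to install git on top of the AWS image at the start of the job. A minimal sketch, assuming the aws-base image ships a package manager (it may be apt-, apk-, or yum-based, so the install line has to match whichever one is actually present):

run-api-init:
  stage: api-init
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest
  before_script:
    # Assumption: apt-get is available; otherwise use `apk add git` or `yum install -y git`.
    - apt-get update -y && apt-get install -y git
  script:
    - git submodule add --name $rnme ../test.git
    - aws s3 cp xxx xxx
  artifacts:
    reports:
      dotenv: build.env

Alternatively, building a small custom image once (an AWS CLI base with git layered on top) and pushing it to the project registry avoids paying the install cost on every job.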
I am currently building a project with GitLab CI/CD and have come across the following error message:
ModuleNotFoundError: No module named 'Submodule1.Submodule2.filename'
Submodule2 is a submodule inside Submodule1. In .gitmodules, relative local paths were used (via SSH, see below).
In .gitlab-ci.yml I have tried using:
variables:
  GIT_SUBMODULE_STRATEGY: recursive
and (with various combinations):
before_script:
  - git submodule update --init --recursive
  - git config --global alias.clone = 'clone --init --recursive'
  - git submodule sync --recursive
Nothing seems to work. I would be thankful for any tips. Here is my .gitlab-ci.yml file:
image: docker:latest

services:
  - docker:dind
  - postgres:12.2-alpine

variables:
  #GIT_SUBMODULE_STRATEGY: recursive
  GIT_SUBMODULE_UPDATE_FLAGS: --jobs 4
  CI_DEBUG_TRACE: "true"
  POSTGRES_HOST_AUTH_METHOD: trust

stages:
  - build

build-run:
  stage: build
  before_script:
    - apk update && apk add git
    - git submodule update --init --recursive
  script:
    - echo Build docker image
    - docker build -t file -f Projectname .
    - docker build -t django -f django.Dockerfile .
    - echo Run docker compose
    - docker-compose -f docker-compose.unittest.yml up -d
    - sleep 60
    - docker logs unittests
  tags:
    - docker
My .gitmodules looks something like:
[submodule "Submodule1"]
path = Submodule1
url =../../...git
[submodule "Submodule1/Submodule2"]
path = Submodule1/Submodule2
url=../../...git
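For reference, the submodule checkout that the runner performs on its own is driven purely by variables; here is a minimal sketch of the documented setup (the GIT_SUBMODULE_FORCE_HTTPS line is an optional extra that assumes a reasonably recent GitLab and Runner, and is only relevant because the relative URLs above would otherwise resolve over SSH):

variables:
  GIT_SUBMODULE_STRATEGY: recursive      # let the runner clone Submodule1 and Submodule1/Submodule2 itself
  GIT_SUBMODULE_UPDATE_FLAGS: --jobs 4
  # GIT_SUBMODULE_FORCE_HTTPS: "true"    # rewrite relative/SSH submodule URLs to HTTPS with the CI job token

Note that relative URLs in .gitmodules are resolved against the project's own URL, so they only work in CI if the submodule projects live on the same GitLab instance and the job token has access to them.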
Our team is having a problem trying to set up a pipeline for updating an AWS Lambda function.
Once the deploy is triggered, it fails with the following error:
Status: Downloaded newer image for bitbucketpipelines/aws-lambda-deploy:0.2.3
INFO: Updating Lambda function.
aws lambda update-function-code --function-name apikey-token-authorizer2 --publish --zip-file fileb://apiGatewayAuthorizer.zip
Error parsing parameter '--zip-file': Unable to load paramfile fileb://apiGatewayAuthorizer.zip: [Errno 2] No such file or directory: 'apiGatewayAuthorizer.zip'
*Failed to update Lambda function code.
Looks like the script couldn't find the artifact, but we don't know why.
Here is the bitbucket-pipelines.yml file content:
image: node:16

# Workflow Configuration
pipelines:
  default:
    - parallel:
        - step:
            name: Build and Test
            caches:
              - node
            script:
              - echo Installing source YARN dependencies.
              - yarn install
  branches:
    testing:
      - parallel:
          - step:
              name: Build
              script:
                - apt update && apt install zip
                # Exclude files to be ignored
                - echo Zipping package.
                - zip -r apiGatewayAuthorizer.zip . -x *.git* bitbucket-pipelines.yml
              artifacts:
                - apiGatewayAuthorizer.zip
          - step:
              name: Deploy to testing - Update Lambda code
              deployment: Test
              trigger: manual
              script:
                - pipe: atlassian/aws-lambda-deploy:0.2.3
                  variables:
                    AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                    AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                    AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
                    FUNCTION_NAME: $LAMBDA_FUNCTION_NAME
                    COMMAND: 'update'
                    ZIP_FILE: 'apiGatewayAuthorizer.zip'
Does anyone know what I am missing here?
Thanks to Marc C. from Atlassian, here is the solution.
> Based on your YAML configuration, I can see that you're using Parallel steps.
> According to the documentation:
> Parallel steps can only use artifacts produced by previous steps, not by steps in the same parallel set.
> Hence, this is why the artifacts is not generated in the "Build" step because those 2 steps are within a parallel set.
> For that, you can just remove the parallel configuration and use multi-steps instead. This way, the first step can generate the artifact and pass it on to the second step. Hope it helps and let me know how it goes.
> Regards, Mark C
So we tried the solution and it worked!
Here is the new pipeline:
pipelines:
  branches:
    testing:
      - step:
          name: Build and Test
          caches:
            - node
          script:
            - echo Installing source YARN dependencies.
            - yarn install
            - apt update && apt install zip
            # Exclude files to be ignored
            - echo Zipping package.
            - zip -r my-deploy.zip . -x *.git* bitbucket-pipelines.yml
          artifacts:
            - my-deploy.zip
      - step:
          name: Deploy to testing - Update Lambda code
          deployment: Test
          trigger: manual
          script:
            - pipe: atlassian/aws-lambda-deploy:0.2.3
              variables:
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
                FUNCTION_NAME: $LAMBDA_FUNCTION_NAME
                COMMAND: 'update'
                ZIP_FILE: 'my-deploy.zip'
I have a GitLab pipeline that is failing when it attempts a docker build (using Kaniko).
I have yet to do a successful docker build, but this particular error started after I:
Changed the kaniko image from image: gcr.io/kaniko-project/executor:debug to gcr.io/kaniko-project/executor:51734fc3a33e04f113487853d118608ba6ff2b81
Added settings for pushing to insecure registries:
--insecure
--skip-tls-verify
--skip-tls-verify-pull
--insecure-pull
After these changes, part of the pipeline looks like this:
before_script:
  - 'dotnet restore --packages $NUGET_PACKAGES_DIRECTORY'

build_job:
  tags:
    - xxxx
  only:
    - develop
  stage: build
  script:
    - dotnet build --configuration Release --no-restore

publish_job:
  tags:
    - xxxx
  only:
    - develop
  stage: publish
  artifacts:
    name: "$CI_COMMIT_SHA"
    paths:
      - ./$PUBLISH_DIR
  script:
    - dotnet publish ./src --configuration Release --output $(pwd)/$PUBLISH_DIR

docker_build_dev:
  tags:
    - xxxx
  image:
    name: gcr.io/kaniko-project/executor:51734fc3a33e04f113487853d118608ba6ff2b81
    entrypoint: [""]
  only:
    - develop
  stage: docker
  before_script:
    - echo "Docker build"
  script:
    - echo "${CI_PROJECT_DIR}"
    - cp ./src/Dockerfile /builds/xxx/xxx/xxx/Dockerfile
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(printf "%s:%s" "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\"}}}" > /kaniko/.docker/config.json
    - >-
      /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --insecure
      --skip-tls-verify
      --skip-tls-verify-pull
      --insecure-pull
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_TAG}"
Part of the output from the pipeline is as below:
Skipping Git submodules setup
section_end:1652535765:get_sources
section_start:1652535765:download_artifacts
Downloading artifacts
Downloading artifacts for publish_job (33475)...
Downloading artifacts from coordinator... ok  id=33475 responseStatus=200 OK token=xxxxxxxxxxx
section_end:1652535769:download_artifacts
section_start:1652535769:step_script
Executing "step_script" stage of the job script
section_end:1652535769:step_script
section_start:1652535769:cleanup_file_variables
Cleaning up project directory and file based variables
section_end:1652539354:cleanup_file_variables
ERROR: Job failed: execution took longer than 1h0m0s seconds
What am I missing?
I was missing something in my GitLab project settings: enabling the Container Registry for the project.
https://domain/group/subgroup/project/edit
Visibility, project features, permissions > Container Registry (toggle to enable)
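Two hedged asides that follow from the config and log above rather than from the registry toggle itself: a job-level timeout makes a hanging push fail faster than the default one-hour limit, and CI_COMMIT_TAG is only populated on tag pipelines, so on a develop branch build the destination tag comes out empty. A sketch of both (suggestions only, not part of the original fix):

docker_build_dev:
  timeout: 30m   # fail fast if the push hangs again instead of waiting out the 1h0m0s job limit
  script:
    - >-
      /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_SLUG}"   # CI_COMMIT_TAG is empty on branch pipelines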
I'm working with GitLab CE, Docker runners, and AWS ECS for deployment.
We created a script that does what we need, but we separated the stages and jobs for development.
The problem is that we need to run this script to connect to AWS so that we can register containers and deploy our resources:
services:
  - docker:dind

before_script:
  - apk add build-base python3-dev python3 libffi-dev libressl-dev bash git gettext curl
  - apk add py3-pip
  - pip install six awscli awsebcli
  - $(aws ecr get-login --no-include-email --region "${AWS_DEFAULT_REGION}")
  - IMAGE_TAG="$(echo $CI_COMMIT_SHA | head -c 8)"
The problem is that the script runs with every job; this would not be an issue if the script avoided reinstalling the dependencies.
So we need to know if we can avoid this behavior and run the script only once, because the reinstallation makes every job take a long time to finish.
Our complete .gitlab-ci.yml:
image: docker:latest

stages:
  - build
  - tag
  - push
  - ecs
  - deploy

variables:
  REPOSITORY_URL: OUR_REPO
  REGION: OUR_REGION
  TASK_DEFINITION_NAME: task
  CLUSTER_NAME: default
  SERVICE_NAME: service

services:
  - docker:dind

before_script:
  - apk add build-base python3-dev python3 libffi-dev libressl-dev bash git gettext curl
  - apk add py3-pip
  - pip install six awscli awsebcli
  - $(aws ecr get-login --no-include-email --region "${AWS_DEFAULT_REGION}")
  - IMAGE_TAG="$(echo $CI_COMMIT_SHA | head -c 8)"

build:
  stage: build
  tags:
    - deployment
  script:
    - docker build -t $REPOSITORY_URL:latest .
  only:
    - refactor/ca

tag_image:
  stage: tag
  tags:
    - deployment
  script:
    - docker tag $REPOSITORY_URL:latest $REPOSITORY_URL:$IMAGE_TAG
  only:
    - refactor/ca

push_image:
  stage: push
  tags:
    - deployment
  script:
    - docker push $REPOSITORY_URL:$IMAGE_TAG
  only:
    - refactor/ca

task_definition:
  stage: ecs
  tags:
    - deployment
  script:
    - TASK_DEFINITION=$(aws ecs describe-task-definition --task-definition "$TASK_DEFINITION_NAME" --region "${REGION}")
    - NEW_CONTAINER_DEFINTIION=$(echo $TASK_DEFINITION | python3 $CI_PROJECT_DIR/update_task_definition_image.py $REPOSITORY_URL:$IMAGE_TAG)
    - echo "Registering new container definition..."
    - aws ecs register-task-definition --region "${REGION}" --family "${TASK_DEFINITION_NAME}" --container-definitions "${NEW_CONTAINER_DEFINTIION}"
  only:
    - refactor/ca

register_definition:
  stage: ecs
  tags:
    - deployment
  script:
    - aws ecs register-task-definition --region "${REGION}" --family "${TASK_DEFINITION_NAME}" --container-definitions "${NEW_CONTAINER_DEFINTIION}"
  only:
    - refactor/ca

deployment:
  stage: deploy
  tags:
    - deployment
  script:
    - aws ecs update-service --region "${REGION}" --cluster "${CLUSTER_NAME}" --service "${SERVICE_NAME}" --task-definition "${TASK_DEFINITION_NAME}"
  only:
    - refactor/ca
One workaround would be to define the before_script only on the job that runs aws ecs register-task-definition and aws ecs update-service.
Additionally, as the push and ecs stages access the IMAGE_TAG variable, it is convenient to store it in a build.env dotenv artifact so that those jobs can access it by specifying a dependency on the tag_image job.
image: docker:latest

stages:
  - build
  - tag
  - push
  - ecs

variables:
  REPOSITORY_URL: OUR_REPO
  REGION: OUR_REGION
  TASK_DEFINITION_NAME: task
  CLUSTER_NAME: default
  SERVICE_NAME: service

services:
  - docker:dind

build:
  stage: build
  tags:
    - deployment
  script:
    - docker build -t $REPOSITORY_URL:latest .
  only:
    - refactor/ca

tag_image:
  stage: tag
  tags:
    - deployment
  script:
    - IMAGE_TAG="$(echo $CI_COMMIT_SHA | head -c 8)"
    - echo "IMAGE_TAG=$IMAGE_TAG" >> build.env
    - docker tag $REPOSITORY_URL:latest $REPOSITORY_URL:$IMAGE_TAG
  only:
    - refactor/ca
  artifacts:
    reports:
      dotenv: build.env

push_image:
  stage: push
  tags:
    - deployment
  script:
    - docker push $REPOSITORY_URL:$IMAGE_TAG
  only:
    - refactor/ca
  dependencies:
    - tag_image

task_definition:
  stage: ecs
  tags:
    - deployment
  before_script:
    - apk add build-base python3-dev python3 libffi-dev libressl-dev bash git gettext curl
    - apk add py3-pip
    - pip install six awscli awsebcli
    - $(aws ecr get-login --no-include-email --region "${AWS_DEFAULT_REGION}")
  script:
    - TASK_DEFINITION=$(aws ecs describe-task-definition --task-definition "$TASK_DEFINITION_NAME" --region "${REGION}")
    - NEW_CONTAINER_DEFINTIION=$(echo $TASK_DEFINITION | python3 $CI_PROJECT_DIR/update_task_definition_image.py $REPOSITORY_URL:$IMAGE_TAG)
    - echo "Registering new container definition..."
    - aws ecs register-task-definition --region "${REGION}" --family "${TASK_DEFINITION_NAME}" --container-definitions "${NEW_CONTAINER_DEFINTIION}"
    - aws ecs update-service --region "${REGION}" --cluster "${CLUSTER_NAME}" --service "${SERVICE_NAME}" --task-definition "${TASK_DEFINITION_NAME}"
  only:
    - refactor/ca
  dependencies:
    - tag_image
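If more than one job ends up needing the AWS CLI, a YAML anchor (or extends:) keeps the install block in a single place while still running it only in the jobs that need it. A small sketch on top of the config above (the .aws_tools name is purely illustrative):

.aws_tools: &aws_tools
  before_script:
    - apk add build-base python3-dev python3 libffi-dev libressl-dev bash git gettext curl
    - apk add py3-pip
    - pip install six awscli awsebcli
    - $(aws ecr get-login --no-include-email --region "${AWS_DEFAULT_REGION}")

task_definition:
  <<: *aws_tools          # pulls in the before_script without repeating it
  stage: ecs
  # ...rest of the job unchanged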
I am trying to get the build_backend stage to pass before build_djangoapp with a Dockerfile on GitLab, but it fails with this error.
/busybox/sh: eval: line 111: apk: not found
Cleaning up file based variables
00:01
ERROR: Job failed: exit code 127
GitLab CI/CD project
.gitlab-ci.yml
# Official image for Hashicorp's Terraform. It uses light image which is Alpine
# based as it is much lighter.
#
# Entrypoint is also needed as image by default set `terraform` binary as an
# entrypoint.
image:
  name: hashicorp/terraform:light
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

# Default output file for Terraform plan
variables:
  GITLAB_TF_ADDRESS: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/${CI_PROJECT_NAME}
  PLAN: plan.tfplan
  PLAN_JSON: tfplan.json
  TF_ROOT: ${CI_PROJECT_DIR}
  GITLAB_TF_PASSWORD: ${CI_JOB_TOKEN}

cache:
  paths:
    - .terraform

before_script:
  - apk --no-cache add jq
  - alias convert_report="jq -r '([.resource_changes[]?.change.actions?]|flatten)|{\"create\":(map(select(.==\"create\"))|length),\"update\":(map(select(.==\"update\"))|length),\"delete\":(map(select(.==\"delete\"))|length)}'"
  - cd ${TF_ROOT}
  - terraform --version
  - echo ${GITLAB_TF_ADDRESS}
  - terraform init -backend-config="address=${GITLAB_TF_ADDRESS}" -backend-config="lock_address=${GITLAB_TF_ADDRESS}/lock" -backend-config="unlock_address=${GITLAB_TF_ADDRESS}/lock" -backend-config="username=${MY_GITLAB_USERNAME}" -backend-config="password=${MY_GITLAB_ACCESS_TOKEN}" -backend-config="lock_method=POST" -backend-config="unlock_method=DELETE" -backend-config="retry_wait_min=5"

stages:
  - validate
  - build
  - test
  - deploy
  - app_deploy

validate:
  stage: validate
  script:
    - terraform validate

plan:
  stage: build
  script:
    - terraform plan -out=$PLAN
    - terraform show --json $PLAN | convert_report > $PLAN_JSON
  artifacts:
    name: plan
    paths:
      - ${TF_ROOT}/plan.tfplan
    reports:
      terraform: ${TF_ROOT}/tfplan.json

# Separate apply job for manual launching Terraform as it can be destructive
# action.
apply:
  stage: deploy
  environment:
    name: production
  script:
    - terraform apply -input=false $PLAN
  dependencies:
    - plan
  when: manual
  only:
    - master

build_backend:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - echo "{\"auths\":{\"https://gitlab.amixr.io:4567\":{\"username\":\"gitlab-ci-token\",\"password\":\"$CI_JOB_TOKEN\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --cache=true --context ./djangoapp --dockerfile ./djangoapp/Dockerfile --destination $CONTAINER_IMAGE:$CI_COMMIT_REF_NAME

# https://github.com/GoogleContainerTools/kaniko#pushing-to-google-gcr
build_djangoapp:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  before_script:
    - echo 1
  script:
    - export GOOGLE_APPLICATION_CREDENTIALS=$TF_VAR_gcp_creds_file
    - /kaniko/executor --cache=true --context ./djangoapp --dockerfile ./djangoapp/Dockerfile --destination gcr.io/{TF_VAR_gcp_project_name}/djangoapp:$CI_COMMIT_REF_NAME
  when: manual
  only:
    - master
  needs: []

app_deploy:
  image: google/cloud-sdk
  stage: app_deploy
  before_script:
    - echo 1
  environment:
    name: production
  script:
    - gcloud auth activate-service-account --key-file=${TF_VAR_gcp_creds_file}
    - gcloud container clusters get-credentials my-cluster --region us-central1 --project ${TF_VAR_gcp_project_name}
    - kubectl apply -f hello-kubernetes.yaml
  when: manual
  only:
    - master
  needs: []
I looked at your project and it appears you figured out this one already.
Your .gitlab-ci.yml has a global before_script block that tries to install packages with apk. But your build_backend job is based on the Kaniko image, which uses BusyBox, which doesn't have apk (nor apt-get, for that matter). In a later commit you overrode the before_script for build_backend, and that took away the problem.
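For completeness, a minimal sketch of that kind of per-job override (illustrative, not copied from the project): giving the Kaniko job its own before_script, even an empty one, keeps the global apk line from ever running inside the BusyBox shell.

build_backend:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  before_script: []   # overrides the global before_script; nothing needs installing in the Kaniko image
  script:
    - echo "{\"auths\":{\"https://gitlab.amixr.io:4567\":{\"username\":\"gitlab-ci-token\",\"password\":\"$CI_JOB_TOKEN\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --cache=true --context ./djangoapp --dockerfile ./djangoapp/Dockerfile --destination $CONTAINER_IMAGE:$CI_COMMIT_REF_NAME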