GitLab CI PIP AWSCLI Avoid Install Again

I'm working with GitLab CE, Docker runners, and AWS ECS for deployment.
We created a script that does what we need, but we separated the stages and jobs for development.
The problem is that we need to run these commands to connect to AWS so that we can register containers and deploy our resources:
services:
  - docker:dind

before_script:
  - apk add build-base python3-dev python3 libffi-dev libressl-dev bash git gettext curl
  - apk add py3-pip
  - pip install six awscli awsebcli
  - $(aws ecr get-login --no-include-email --region "${AWS_DEFAULT_REGION}")
  - IMAGE_TAG="$(echo $CI_COMMIT_SHA | head -c 8)"
The problem is that this script runs with every job. That would not be a problem if the script avoided reinstalling the dependencies.
So we need to know if we can avoid this behavior and run the installation only once, because every job takes a long time to finish because of it.
Our complete .gitlab-ci.yml:
image: docker:latest

stages:
  - build
  - tag
  - push
  - ecs
  - deploy

variables:
  REPOSITORY_URL: OUR_REPO
  REGION: OUR_REGION
  TASK_DEFINITION_NAME: task
  CLUSTER_NAME: default
  SERVICE_NAME: service

services:
  - docker:dind

before_script:
  - apk add build-base python3-dev python3 libffi-dev libressl-dev bash git gettext curl
  - apk add py3-pip
  - pip install six awscli awsebcli
  - $(aws ecr get-login --no-include-email --region "${AWS_DEFAULT_REGION}")
  - IMAGE_TAG="$(echo $CI_COMMIT_SHA | head -c 8)"

build:
  stage: build
  tags:
    - deployment
  script:
    - docker build -t $REPOSITORY_URL:latest .
  only:
    - refactor/ca

tag_image:
  stage: tag
  tags:
    - deployment
  script:
    - docker tag $REPOSITORY_URL:latest $REPOSITORY_URL:$IMAGE_TAG
  only:
    - refactor/ca

push_image:
  stage: push
  tags:
    - deployment
  script:
    - docker push $REPOSITORY_URL:$IMAGE_TAG
  only:
    - refactor/ca

task_definition:
  stage: ecs
  tags:
    - deployment
  script:
    - TASK_DEFINITION=$(aws ecs describe-task-definition --task-definition "$TASK_DEFINITION_NAME" --region "${REGION}")
    - NEW_CONTAINER_DEFINTIION=$(echo $TASK_DEFINITION | python3 $CI_PROJECT_DIR/update_task_definition_image.py $REPOSITORY_URL:$IMAGE_TAG)
    - echo "Registering new container definition..."
    - aws ecs register-task-definition --region "${REGION}" --family "${TASK_DEFINITION_NAME}" --container-definitions "${NEW_CONTAINER_DEFINTIION}"
  only:
    - refactor/ca

register_definition:
  stage: ecs
  tags:
    - deployment
  script:
    - aws ecs register-task-definition --region "${REGION}" --family "${TASK_DEFINITION_NAME}" --container-definitions "${NEW_CONTAINER_DEFINTIION}"
  only:
    - refactor/ca

deployment:
  stage: deploy
  tags:
    - deployment
  script:
    - aws ecs update-service --region "${REGION}" --cluster "${CLUSTER_NAME}" --service "${SERVICE_NAME}" --task-definition "${TASK_DEFINITION_NAME}"
  only:
    - refactor/ca

One workaround would be to define before_script only for the job that runs aws ecs register-task-definition and aws ecs update-service.
Additionally, since the push and ecs stages access the IMAGE_TAG variable, it is convenient to store it in a build.env artifact so that these stages can access it by specifying a dependency on the tag_image job.
image: docker:latest

stages:
  - build
  - tag
  - push
  - ecs

variables:
  REPOSITORY_URL: OUR_REPO
  REGION: OUR_REGION
  TASK_DEFINITION_NAME: task
  CLUSTER_NAME: default
  SERVICE_NAME: service

services:
  - docker:dind

build:
  stage: build
  tags:
    - deployment
  script:
    - docker build -t $REPOSITORY_URL:latest .
  only:
    - refactor/ca

tag_image:
  stage: tag
  tags:
    - deployment
  script:
    - IMAGE_TAG="$(echo $CI_COMMIT_SHA | head -c 8)"
    - echo "IMAGE_TAG=$IMAGE_TAG" >> build.env
    - docker tag $REPOSITORY_URL:latest $REPOSITORY_URL:$IMAGE_TAG
  only:
    - refactor/ca
  artifacts:
    reports:
      dotenv: build.env

push_image:
  stage: push
  tags:
    - deployment
  script:
    - docker push $REPOSITORY_URL:$IMAGE_TAG
  only:
    - refactor/ca
  dependencies:
    - tag_image

task_definition:
  stage: ecs
  tags:
    - deployment
  before_script:
    - apk add build-base python3-dev python3 libffi-dev libressl-dev bash git gettext curl
    - apk add py3-pip
    - pip install six awscli awsebcli
    - $(aws ecr get-login --no-include-email --region "${AWS_DEFAULT_REGION}")
  script:
    - TASK_DEFINITION=$(aws ecs describe-task-definition --task-definition "$TASK_DEFINITION_NAME" --region "${REGION}")
    - NEW_CONTAINER_DEFINTIION=$(echo $TASK_DEFINITION | python3 $CI_PROJECT_DIR/update_task_definition_image.py $REPOSITORY_URL:$IMAGE_TAG)
    - echo "Registering new container definition..."
    - aws ecs register-task-definition --region "${REGION}" --family "${TASK_DEFINITION_NAME}" --container-definitions "${NEW_CONTAINER_DEFINTIION}"
    - aws ecs update-service --region "${REGION}" --cluster "${CLUSTER_NAME}" --service "${SERVICE_NAME}" --task-definition "${TASK_DEFINITION_NAME}"
  only:
    - refactor/ca
  dependencies:
    - tag_image
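If installing the AWS CLI in before_script on every run is still too slow, a further option worth sketching (this is an assumption of mine, not part of the workaround above) is to cache pip's download directory between runs with GitLab's cache keyword, so the pip install step reuses previously downloaded wheels instead of fetching them again. The PIP_CACHE_DIR path and cache key below are illustrative choices; the apk packages are still installed on each run.
# Hedged sketch: reuse pip's download cache between runs of the ecs job.
task_definition:
  stage: ecs
  tags:
    - deployment
  variables:
    PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"   # pip reads this environment variable
  cache:
    key: "$CI_JOB_NAME"
    paths:
      - .cache/pip
  before_script:
    - apk add --no-cache python3 py3-pip
    - pip install six awscli awsebcli             # wheels are served from the cached directory when present
  script:
    - aws ecs update-service --region "${REGION}" --cluster "${CLUSTER_NAME}" --service "${SERVICE_NAME}" --task-definition "${TASK_DEFINITION_NAME}"
Another route, also only a suggestion, is to run the AWS jobs on an image that already ships the AWS CLI (for example the registry.gitlab.com/gitlab-org/cloud-deploy/aws-base image that appears in a question further down), so nothing has to be installed at job time.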

Related

Dependency Scanning not triggering in GitLab CICD pipeline

I am new to GitLab and was trying to build a sample CI/CD pipeline. Following is my code:
variables:
  REPO_NAME: devsecopscollab/my_test_repo
  IMAGE_TAG: demo-app-1.0

include:
  - template: SAST.gitlab-ci.yml
  - template: Dependency-Scanning.gitlab-ci.yml

stages:
  - Test
  - Build
  - Deploy

job1_runTests:
  stage: Test
  image: python:3.10.8
  before_script:
    - apt-get update && apt-get install make
  script:
    - make test

sast:
  stage: Test
  artifacts:
    name: sast
    paths:
      - gl-sast-report.json
    reports:
      sast: gl-sast-report.json
    when: always

dependency_scanning:
  stage: Test
  variables:
    CI_DEBUG_TRACE: "true"
  artifacts:
    name: dependency_scanning
    paths:
      - gl-dependency-scanning-report.json
    reports:
      dependency_scanning: gl-dependency-scanning-report.json
    when: always

job2_buildImage:
  stage: Build
  image: docker:20.10.21
  services:
    - docker:20.10.21-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  before_script:
    - docker login -u $DOCKERHUB_USER -p $DOCKERHUB_PASS
  script:
    - docker build -t $REPO_NAME:$IMAGE_TAG .
    - docker push $REPO_NAME:$IMAGE_TAG

job3_deploy:
  stage: Deploy
  before_script:
    - chmod 400 $SSH_KEY
  script:
    - ssh -o StrictHostKeyChecking=no -i $SSH_KEY ubuntu@$PUBLIC_IP "
      docker login -u $DOCKERHUB_USER -p $DOCKERHUB_PASS &&
      docker ps -aq | xargs --no-run-if-empty docker stop | xargs --no-run-if-empty docker rm &&
      docker run -d -p 5000:5000 $REPO_NAME:$IMAGE_TAG"
But my pipeline looks like the image here (no dependency scanning stage is shown):
Is something wrong with this pipeline? Why is the dependency scanning stage not visible?
I tried the code snippet given above and was expecting a dependency scanning stage to be visible in the pipeline.
Dependency scanning works only in the Ultimate plan, cf. https://gitlab.com/gitlab-org/gitlab/-/issues/327366
https://i.stack.imgur.com/o7GE9.png
Also check this link: https://docs.gitlab.com/ee/user/application_security/dependency_scanning/
To trigger the dependency scanning template, the easiest way is to just use:
include:
  - template: SAST.gitlab-ci.yml
  - template: Dependency-Scanning.gitlab-ci.yml

Gitlab pipeline image with AWSCLI and Git

Hi, I currently have this code in my pipeline:
stages:
  - api-init
  - api-plan
  - api-apply

run-api-init:
  stage: api-init
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest
  script:
    - git submodule add --name $rnme ../test.git
    - aws s3 cp xxx xxx
  artifacts:
    reports:
      dotenv: build.env
But I am getting the error:
> /usr/bin/bash: line 127: git: command not found
as my image does not have the git command inside.
Is there any image I can use that has both the awscli and git commands?
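One hedged option (an assumption about the image's base distribution, not a verified fix) is to keep the aws-base image and install git in a before_script, trying the common package managers in turn:
run-api-init:
  stage: api-init
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest
  before_script:
    # Assumption: the image's base distro ships one of these package managers.
    - if ! command -v git; then (apt-get update && apt-get install -y git) || yum install -y git || apk add --no-cache git; fi
  script:
    - git submodule add --name $rnme ../test.git
    - aws s3 cp xxx xxx
  artifacts:
    reports:
      dotenv: build.env
Alternatively, building a small custom image with both tools preinstalled avoids the per-job install entirely.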

Gitlab pipeline failing on "Cleaning up project directory and file based variables"

I have a GitLab pipeline that is failing when it attempts a docker build (using Kaniko).
I have yet to do a successful docker build, BUT this particular error started after I:
Changed the kaniko image from gcr.io/kaniko-project/executor:debug to gcr.io/kaniko-project/executor:51734fc3a33e04f113487853d118608ba6ff2b81
Added settings for pushing to insecure registries:
--insecure
--skip-tls-verify
--skip-tls-verify-pull
--insecure-pull
After this change, part of the pipeline looks like this:
before_script:
  - 'dotnet restore --packages $NUGET_PACKAGES_DIRECTORY'

build_job:
  tags:
    - xxxx
  only:
    - develop
  stage: build
  script:
    - dotnet build --configuration Release --no-restore

publish_job:
  tags:
    - xxxx
  only:
    - develop
  stage: publish
  artifacts:
    name: "$CI_COMMIT_SHA"
    paths:
      - ./$PUBLISH_DIR
  script:
    - dotnet publish ./src --configuration Release --output $(pwd)/$PUBLISH_DIR

docker_build_dev:
  tags:
    - xxxx
  image:
    name: gcr.io/kaniko-project/executor:51734fc3a33e04f113487853d118608ba6ff2b81
    entrypoint: [""]
  only:
    - develop
  stage: docker
  before_script:
    - echo "Docker build"
  script:
    - echo "${CI_PROJECT_DIR}"
    - cp ./src/Dockerfile /builds/xxx/xxx/xxx/Dockerfile
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(printf "%s:%s" "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\"}}}" > /kaniko/.docker/config.json
    - >-
      /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --insecure
      --skip-tls-verify
      --skip-tls-verify-pull
      --insecure-pull
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_TAG}"
Part of the output from the pipeline is as below:
Skipping Git submodules setup
section_end:1652535765:get_sources
section_start:1652535765:download_artifacts
Downloading artifacts
Downloading artifacts for publish_job (33475)...
Downloading artifacts from coordinator... ok  id=33475 responseStatus=200 OK token=[xxxxxxxxxxx
section_end:1652535769:download_artifacts
section_start:1652535769:step_script
Executing "step_script" stage of the job script
section_end:1652535769:step_script
section_start:1652535769:cleanup_file_variables
Cleaning up project directory and file based variables
section_end:1652539354:cleanup_file_variables
ERROR: Job failed: execution took longer than 1h0m0s seconds
What am I missing?
I was missing something in my GitLab project settings, which is enabling the project's Container Registry:
https://domain/group/subgroup/project/edit
Visibility, project features, permissions > Container Registry (toggle to enable)

/busybox/sh: eval: line 109: apk: not found while build_backend on GitLab

I am trying to run the build_backend stage before build_djangoapp with a Dockerfile on GitLab, but it fails with this error:
/busybox/sh: eval: line 111: apk: not found
Cleaning up file based variables
00:01
ERROR: Job failed: exit code 127
GitLab CI/CD project
.gitlab-ci.yml
# Official image for Hashicorp's Terraform. It uses light image which is Alpine
# based as it is much lighter.
#
# Entrypoint is also needed as image by default set `terraform` binary as an
# entrypoint.
image:
  name: hashicorp/terraform:light
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

# Default output file for Terraform plan
variables:
  GITLAB_TF_ADDRESS: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/${CI_PROJECT_NAME}
  PLAN: plan.tfplan
  PLAN_JSON: tfplan.json
  TF_ROOT: ${CI_PROJECT_DIR}
  GITLAB_TF_PASSWORD: ${CI_JOB_TOKEN}

cache:
  paths:
    - .terraform

before_script:
  - apk --no-cache add jq
  - alias convert_report="jq -r '([.resource_changes[]?.change.actions?]|flatten)|{\"create\":(map(select(.==\"create\"))|length),\"update\":(map(select(.==\"update\"))|length),\"delete\":(map(select(.==\"delete\"))|length)}'"
  - cd ${TF_ROOT}
  - terraform --version
  - echo ${GITLAB_TF_ADDRESS}
  - terraform init -backend-config="address=${GITLAB_TF_ADDRESS}" -backend-config="lock_address=${GITLAB_TF_ADDRESS}/lock" -backend-config="unlock_address=${GITLAB_TF_ADDRESS}/lock" -backend-config="username=${MY_GITLAB_USERNAME}" -backend-config="password=${MY_GITLAB_ACCESS_TOKEN}" -backend-config="lock_method=POST" -backend-config="unlock_method=DELETE" -backend-config="retry_wait_min=5"

stages:
  - validate
  - build
  - test
  - deploy
  - app_deploy

validate:
  stage: validate
  script:
    - terraform validate

plan:
  stage: build
  script:
    - terraform plan -out=$PLAN
    - terraform show --json $PLAN | convert_report > $PLAN_JSON
  artifacts:
    name: plan
    paths:
      - ${TF_ROOT}/plan.tfplan
    reports:
      terraform: ${TF_ROOT}/tfplan.json

# Separate apply job for manual launching Terraform as it can be destructive
# action.
apply:
  stage: deploy
  environment:
    name: production
  script:
    - terraform apply -input=false $PLAN
  dependencies:
    - plan
  when: manual
  only:
    - master

build_backend:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - echo "{\"auths\":{\"https://gitlab.amixr.io:4567\":{\"username\":\"gitlab-ci-token\",\"password\":\"$CI_JOB_TOKEN\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --cache=true --context ./djangoapp --dockerfile ./djangoapp/Dockerfile --destination $CONTAINER_IMAGE:$CI_COMMIT_REF_NAME

# https://github.com/GoogleContainerTools/kaniko#pushing-to-google-gcr
build_djangoapp:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  before_script:
    - echo 1
  script:
    - export GOOGLE_APPLICATION_CREDENTIALS=$TF_VAR_gcp_creds_file
    - /kaniko/executor --cache=true --context ./djangoapp --dockerfile ./djangoapp/Dockerfile --destination gcr.io/{TF_VAR_gcp_project_name}/djangoapp:$CI_COMMIT_REF_NAME
  when: manual
  only:
    - master
  needs: []

app_deploy:
  image: google/cloud-sdk
  stage: app_deploy
  before_script:
    - echo 1
  environment:
    name: production
  script:
    - gcloud auth activate-service-account --key-file=${TF_VAR_gcp_creds_file}
    - gcloud container clusters get-credentials my-cluster --region us-central1 --project ${TF_VAR_gcp_project_name}
    - kubectl apply -f hello-kubernetes.yaml
  when: manual
  only:
    - master
  needs: []
I looked at your project and it appears you have figured this one out already.
Your .gitlab-ci.yml has a global before_script block that tries to install packages with apk. But your build_backend job is based on the kaniko image, which uses Busybox, which doesn't have apk (nor apt-get, for that matter). You overrode the before_script for build_backend in a later commit, and that took away the problem.
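For reference, a minimal sketch of what that job-level override might look like (the actual commit isn't shown here, so this is a reconstruction, not the project's exact fix): build_backend gets its own no-op before_script, which replaces the global one so no apk call ever reaches the Busybox shell inside the kaniko image.
build_backend:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  # A job-level before_script fully replaces the global one, so the apk/jq/terraform
  # setup never runs in this job.
  before_script:
    - echo "skipping global before_script"
  script:
    - echo "{\"auths\":{\"https://gitlab.amixr.io:4567\":{\"username\":\"gitlab-ci-token\",\"password\":\"$CI_JOB_TOKEN\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --cache=true --context ./djangoapp --dockerfile ./djangoapp/Dockerfile --destination $CONTAINER_IMAGE:$CI_COMMIT_REF_NAME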

circleci filter branch is not working

I am very new to CircleCI, and I was trying to limit my build to run only on a specific branch. I tried the config file below, but when I include the filters section I get the following error:
Your config file has errors and may not run correctly:
2 schema violations found
required key [jobs] not found
required key [version] not found
Config with filters:
version: 2
jobs:
  provisioning_spark_installation_script:
    working_directory: ~/build_data
    docker:
      - image: circleci/python:3.6.6-stretch
    steps:
      - setup_remote_docker:
          docker_layer_caching: true
      - checkout
      - run: &install_awscli
          name: Install AWS CLI
          command: |
            sudo pip3 install --upgrade awscli
      - run: &login_to_ecr
          name: Login to ECR
          command: aws ecr get-login --region us-east-1 | sed 's/-e none//g' | bash

workflows:
  version: 2
  deployments:
    jobs:
      - provisioning_spark_installation_script
    filters:
      branches:
        only: master
When I remove the filters section everything works fine. Without filters I know how to work around this using shell and if/else, but it is less elegant.
Any advice?
I was just missing a colon character after the job name in the workflow, in addition to more indentation for filters.
So now I have the following config:
version: 2
jobs:
  provisioning_spark_installation_script:
    working_directory: ~/build_data
    docker:
      - image: circleci/python:3.6.6-stretch
    steps:
      - setup_remote_docker:
          docker_layer_caching: true
      - checkout
      - run: &install_awscli
          name: Install AWS CLI
          command: |
            sudo pip3 install --upgrade awscli
      - run: &login_to_ecr
          name: Login to ECR
          command: aws ecr get-login --region us-east-1 | sed 's/-e none//g' | bash

workflows:
  version: 2
  deployments:
    jobs:
      - provisioning_spark_installation_script:
          filters:
            branches:
              only: master
