Accessing a script inside an image, GitLab CI

My config.yml:

stages:
  - collect

image:
  name: usd3221/log-stats:latest
  entrypoint: [""]

default:
  tags:
    - local

job:
  stage: collect
  script:
    - pwd
    # - ruby run_log.rb ${argone} ${arg_2}
I am using a custom image, log-stats, which has run_log.rb inside its $WORKDIR. My question is: how can I run this script, which lives inside the image, on my GitLab runner, and pass it arguments from my GitLab CI variables?

In the image field, change the entrypoint to the script you want to execute, i.e.:

entrypoint: ["sh", "-c", "ruby run_log.rb ${argone} ${argtwo}"]

This assumes the $WORKDIR in your image contains run_log.rb.
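Alternatively, a minimal sketch that keeps the empty entrypoint and runs the script from script: instead; this plays well with the Docker executor, which needs a shell it can feed the job script to. Here /app is a hypothetical stand-in for whatever WORKDIR the image sets, and argone/arg_2 are assumed to be CI/CD variables defined in the project settings:

stages:
  - collect

job:
  stage: collect
  image:
    name: usd3221/log-stats:latest
    entrypoint: [""]    # empty entrypoint so the runner can start its own shell
  tags:
    - local
  script:
    # cd into the image's working directory (hypothetical path /app);
    # GitLab exports CI/CD variables into the job's environment
    - cd /app && ruby run_log.rb "$argone" "$arg_2"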

Related

Unable to deploy to Azure Container Registry from GitLab

I have the following pipeline:

# .gitlab-ci.yml
stages:
  - build
  - push

build:
  stage: build
  services:
    - docker:dind
  image: docker:latest
  script:
    # Build the Docker image
    - docker build -t myfe:$CI_COMMIT_SHA .

push:
  stage: push
  image: bitnami/azure-cli
  script:
    # - echo $DOCKERHUB_PASSWORD | docker login -u $DOCKERHUB_USERNAME --password-stdin
    - echo $ACR_CLIENT_ID | docker login mycr.azurecr.io --username $ACR_CLIENT_ID --password-stdin
    # Push the Docker image to the ACR
    - docker push myfe:$CI_COMMIT_SHA
  only:
    - main
  # before_script:
  #   - echo "$DOCKERHUB_PASSWORD" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin
  variables:
    DOCKERHUB_USERNAME: $DOCKERHUB_USERNAME
    DOCKERHUB_PASSWORD: $DOCKERHUB_PASSWORD
It results in the following error:
Using docker image sha256:373... for bitnami/azure-cli with digest bitnami/azure-cli@sha256:9128... ...
ERROR: 'sh' is misspelled or not recognized by the system.
Examples from AI knowledge base:
https://aka.ms/cli_ref
Read more about the command in reference docs
Any idea what this might mean?
The bitnami/azure-cli image has an entrypoint of az, so your script lines are being passed to az as parameters.
To solve this, you need to override the entrypoint using entrypoint: [""] in your gitlab-ci.yml.
For more info, check: https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#override-the-entrypoint-of-an-image
If you want to use an Azure CLI image for this .gitlab-ci.yml file, you should use the official Microsoft image instead:

image: mcr.microsoft.com/azure-cli
Works like a charm!
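For instance, the push job with the override applied (login and push lines kept from the question); one caveat worth hedging: a Docker client must also be available in whatever image you pick, since the script calls docker login and docker push, and the azure-cli image does not ship one by itself:

push:
  stage: push
  image:
    name: mcr.microsoft.com/azure-cli
    entrypoint: [""]    # empty entrypoint so the runner can attach its own shell
  script:
    - echo $ACR_CLIENT_ID | docker login mycr.azurecr.io --username $ACR_CLIENT_ID --password-stdin
    - docker push myfe:$CI_COMMIT_SHA
  only:
    - main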

Is it possible to override the pipeline default image in a hidden job used as a template

I’ve created a hidden job that requires a specific image:

myhiddenjob.yml:

.dosomething:
  inherit:
    default: false
  image: curlimages/curl
  script: …
This job is used in a pipeline that has a different base image:

default:
  image: maven:3.8.6-jdk-11

include:
  - remote: 'https://myhiddenjob.yml'

build:
  stage: build
  script:
    - mvn $MAVEN_CLI_OPTS compile

testjob:
  stage: mystage
  script:
    - echo "Working…"
    - !reference [.dosomething, script]
The default image provides the mvn command used in the build job. However, I want testjob to use a different image, the one defined inside the hidden job (curlimages/curl). Although I specified inherit: default: false, my job uses the default image (maven:3.8.6-jdk-11) instead of the specified one (curlimages/curl).
How can I override it in the hidden job definition?
One way is to use extends, which merges the whole hidden job (including its image) into the extending job, whereas !reference [.dosomething, script] only copies the script array:

.dosomething:
  image: curlimages/curl
  script: …

testjob:
  stage: mystage
  before_script:
    - .....testjob specific commands
  extends: .dosomething

You can also use before_script to add testjob-specific commands.
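Put together, a minimal sketch of the whole pipeline with this approach (the before_script line is a hypothetical placeholder):

include:
  - remote: 'https://myhiddenjob.yml'

default:
  image: maven:3.8.6-jdk-11

build:
  stage: build
  script:
    - mvn $MAVEN_CLI_OPTS compile

testjob:
  stage: mystage
  extends: .dosomething    # merges in image: curlimages/curl and the hidden job's script
  before_script:
    - echo "testjob-specific setup"    # hypothetical placeholder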

How to pass global variable value to next stage in GitLab CI/CD

Based on the GitLab documentation, you can use the variables keyword to pass CI/CD variables to a downstream pipeline.
I have a global variable DATABASE_URL.
The init stage retrieves the connection string from AWS Secrets Manager and assigns it to DATABASE_URL.
Then, in the deploy stage, I want to use that variable to deploy the database. However, in the deploy stage the variable's value is empty.
variables:
  DATABASE_URL: ""

default:
  tags:
    - myrunner

stages:
  - init
  - deploy

init-job:
  image: docker.xxxx/awscli
  stage: init
  script:
    - SECRET_VALUE="$(aws secretsmanager get-secret-value --secret-id my_secret --region us-west-2 --output text --query SecretString)"
    - DATABASE_URL="$(jq -r .DATABASE_URL <<< $SECRET_VALUE)"
    - echo "$DATABASE_URL"

deploy-dev-database:
  image: node:14
  stage: deploy
  environment:
    name: development
  script:
    - echo "$DATABASE_URL"
    - npm install
    - npx sequelize-cli db:migrate
  rules:
    - if: $CI_COMMIT_REF_NAME == "dev"
The init job echoes the DATABASE_URL correctly. However, DATABASE_URL is empty in the deploy stage.
Questions:
1. How do I pass the global variable across stages?
2. The Node.js database deployment will consume this variable as process.env.DATABASE_URL; will it be available to the Node.js environment?
Variables are resolved by precedence: when you print a variable inside a job, the job first looks at itself, then moves up to what's defined in the CI YAML file (the variables: section), then the project, group, and instance. A job never looks at other jobs.
If you want to pass a variable from one job to another, make sure you don't set the variable globally at all, and instead pass it between jobs following the documentation on passing environment variables to another job.
Basically:
Make sure to remove DATABASE_URL: "" from the variables: section.
Make the last line of your init-job script - echo "DATABASE_URL=$DATABASE_URL" >> init.env (a dotenv file needs KEY=VALUE lines). You can call your .env file whatever you want, of course.
Add an artifacts: section to your init-job.
Add a dependencies: or needs: section to your deploy-dev-database job to pull the variable.
You should end up with something like this:

stages:
  - init
  - deploy

init-job:
  image: docker.xxxx/awscli
  stage: init
  script:
    - SECRET_VALUE="$(aws secretsmanager get-secret-value --secret-id my_secret --region us-west-2 --output text --query SecretString)"
    - DATABASE_URL="$(jq -r .DATABASE_URL <<< $SECRET_VALUE)"
    - echo "DATABASE_URL=$DATABASE_URL" >> init.env
  artifacts:
    reports:
      dotenv: init.env

deploy-dev-database:
  image: node:14
  stage: deploy
  dependencies:
    - init-job
  environment:
    name: development
  script:
    - echo "$DATABASE_URL"
    - npm install
    - npx sequelize-cli db:migrate
  rules:
    - if: $CI_COMMIT_REF_NAME == "dev"
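On the second question: variables loaded from a dotenv report are exported into the environment of the jobs that consume the artifact, so any process those jobs start inherits them, and Node.js will see the value as process.env.DATABASE_URL. A quick way to confirm from inside the deploy job (the node -e line is purely an illustrative probe):

deploy-dev-database:
  # ...same job as above, with one extra script line...
  script:
    - node -e "console.log(process.env.DATABASE_URL)"    # visible to Node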

GitLab issue while running helm command: Error: unknown command "sh" for "helm"

I have a package script which needs to run on the alpine/helm image. I have used this before, but for some reason it always gives me the error Error: unknown command "sh" for "helm":
package:
  <<: *artifacts
  stage: package
  image: alpine/helm
  variables:
    GIT_STRATEGY: none
  script:
    - echo $VERSION
    - helm package ./keycloak --app-version=$VERSION
  artifacts:
    paths:
      - "*.tgz"
Can anybody tell me what the issue is here? I'm not very sure. I assumed the helm command would run, but I don't see why it isn't.
As explained in the docs, a job in GitLab is started this way:
- the runner starts the docker container specified in image and uses the entrypoint of this container
- the runner attaches itself to the container
- the runner combines before_script, script and after_script into a single script
- the runner sends the combined script to the container's shell
If you take a look at the entrypoint of the alpine/helm image, you see that the entrypoint is helm, so when the container starts it runs helm. The GitLab runner expects either no entrypoint or an entrypoint that starts a shell, so you get Error: unknown command "sh" for "helm" because there is no running shell.
By overriding the entrypoint, we make sure the runner finds a shell in the container which can execute the script.
package:
  stage: package
  image:
    name: alpine/helm
    entrypoint: [""]
  variables:
    GIT_STRATEGY: none
  script:
    - echo $VERSION
    - helm package ./keycloak --app-version=$VERSION
  artifacts:
    paths:
      - "*.tgz"
EDIT: after reading the docs again, I changed the override to the empty entrypoint for Docker 17.06 and later (entrypoint: [""]), as this is more concise.
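For completeness, the same docs describe overriding to a shell rather than an empty value on older Docker versions; a minimal sketch, assuming Docker before 17.06:

image:
  name: alpine/helm
  entrypoint: ["/bin/sh", "-c"]    # start a shell the runner can drive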

GitLab runner uses the same folder for different environments

I have a problem: I have two merge requests, from two different branches, to the master branch in a project.
Now I want to start an environment in GitLab for each merge request. I do this with a shell executor, and I start a docker container with docker run image_name, mounting the folder from the build process inside the container. It looks like this:
stages:
  - deploy

deploy_stage:
  stage: deploy
  script:
    - docker run -d --name ContainerName -v ${CI_PROJECT_DIR}:/var/www/html -e VIRTUAL_HOST=example.com php
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: http://example.com
    on_stop: stop_stage
  tags:
    - shell
  except:
    - master

stop_stage:
  stage: deploy
  variables:
    GIT_STRATEGY: none
  script:
    - docker stop ContainerName
    - docker rm ContainerName
  when: manual
  environment:
    name: review/$CI_COMMIT_REF_NAME
    action: stop
  tags:
    - shell
Now my problem is that while one environment is running and a new job starts, the checkout/code gets overwritten by the new pipeline job, and both environments then have the same code even though they should be different.
Does anyone have a solution for how I can configure the GitLab runner to use a different checkout folder for each merge request?
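One possible direction, sketched rather than taken from an accepted answer: GitLab supports a per-job checkout location via the GIT_CLONE_PATH variable, provided [runners.custom_build_dir] is enabled in the runner's config.toml and the path stays under $CI_BUILDS_DIR. Making the checkout path (and the container name) depend on the branch keeps each merge request's environment separate:

deploy_stage:
  stage: deploy
  variables:
    # assumption: custom_build_dir is enabled for this runner;
    # one checkout directory per branch, kept under $CI_BUILDS_DIR
    GIT_CLONE_PATH: $CI_BUILDS_DIR/$CI_COMMIT_REF_SLUG/$CI_PROJECT_NAME
  script:
    # per-branch container name so concurrent review environments don't collide
    - docker run -d --name "review-$CI_COMMIT_REF_SLUG" -v ${CI_PROJECT_DIR}:/var/www/html -e VIRTUAL_HOST=example.com php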
