How to put multiple images in the image keyword in gitlab-ci

I have two jobs, build_binary and build deb, and I want to combine them. But the issue is that they use different images: the former uses golang:latest and the latter uses ubuntu:20.04, as shown:
gitlab-ci.yml

build_binary:
  stage: build
  image: golang:latest
  rules:
    - if: '$CI_COMMIT_TAG'
  # tags:
  #   - docker-executor
  artifacts:
    untracked: true
  script:
    - echo "Some script"

build deb:
  stage: deb-build
  rules:
    - if: '$CI_COMMIT_TAG'
  # tags:
  #   - docker-executor
  image: ubuntu:20.04
  dependencies:
    - build_binary
  script:
    - echo "Some script 2"
  artifacts:
    untracked: true
I have tried these two ways, but neither worked:
build_binary:
  stage: build
  image: golang:latest ubuntu:20.04

and

build_binary:
  stage: build
  image: [golang:latest, ubuntu:20.04]
Any pointers would be very helpful.

This isn't really about gitlab-ci. First you should understand what Docker images and containers are by nature.
You cannot magically get a mixed image that is somehow ubuntu:20.04 + golang:latest; it's simply impossible to do from the gitlab-ci file.
But you can create your own image.
Take the Dockerfile for ubuntu:20.04 from Docker Hub: https://hub.docker.com/_/ubuntu
Then add commands to it that install golang inside that operating system.
To do this, open the golang:latest Dockerfile and copy its installation steps into the ubuntu Dockerfile, with the required modifications, along the lines of the sketch below.
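A minimal sketch of such a Dockerfile, assuming the Ubuntu archive's Go package is good enough (the official golang:latest Dockerfile instead downloads a Go tarball; copy those steps if you need the latest Go):

FROM ubuntu:20.04
# Assumption: the distro Go package is acceptable; swap in the
# golang:latest install steps if you need a newer Go version.
RUN apt-get update \
 && apt-get install -y --no-install-recommends golang-go ca-certificates git \
 && rm -rf /var/lib/apt/lists/*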
Then run docker build -t my-super-ubuntu-and-golang . (see the manual).
Then check it: docker run the image and verify it's a normal ubuntu with golang.
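Concretely, something like (the go version call is just an illustrative smoke test):

docker build -t my-super-ubuntu-and-golang .
docker run --rm my-super-ubuntu-and-golang go version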
If all succeeds, you can push it to your own account on dockerhub and use it in gitlab-ci:
image: your-name/golang-ubuntu-20.04
...
The suggestion to use services is incorrect: a service starts another image and connects it to your job over the network, so you can run postgres, rabbitmq and other services and use them in your tests. For example:

image: alpine
services:
  - rabbitmq

does not mean that rabbitmq will be started on alpine. Both images start: the alpine image and the rabbitmq image, the latter with the local hostname rabbitmq, so your alpine job can connect to tcp://rabbitmq:5672 and use it. It's a different approach.
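A concrete sketch of that pattern (the rabbitmq:3 tag and the netcat probe are illustrative assumptions, not from the original answer):

test:
  image: alpine:latest
  services:
    - rabbitmq:3   # starts a second container, reachable by hostname "rabbitmq"
  script:
    # Probe the service over the network; it is not installed inside alpine
    - apk add --no-cache netcat-openbsd
    - nc -z rabbitmq 5672 && echo "rabbitmq reachable"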
P.S.
For example, you can look at https://hub.docker.com/r/partlab/ubuntu-golang
I think it's not the image you really want, but you can see how a mixed ubuntu-golang image is made.

Using the image and services keywords in your .gitlab-ci.yml file, you may define a GitLab CI/CD job that uses multiple Docker images.
For example, you could use the following configuration to specify a job that uses both a Golang image and an Ubuntu image:
build:
  image: golang:latest
  services:
    - ubuntu:20.04
  before_script:
    - run some command using Ubuntu
  script:
    - go build
    - run some other command using Ubuntu

Related

Cannot run DIND for GCloud SDK docker image in GitLab Runner

I have set up a simple .gitlab-ci.yml file which should be able to run the docker service:
docker:
  image: google/cloud-sdk:latest
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_HOST: tcp://127.0.0.1:2375
  services:
    - docker:dind
  tags:
    - docker
  script:
    - docker pull buster-slim
However it fails as:
https://gitlab.com/knyttl/runnerdemo/-/jobs/932204050
2020-12-25T19:31:04.558361767Z time="2020-12-25T19:31:04.558195638Z" level=info msg="API listen on [::]:2375"
2020-12-25T19:31:04.558522591Z time="2020-12-25T19:31:04.558447616Z" level=info msg="API listen on /var/run/docker.sock"
The service apparently correctly starts, but then it doesn't work:
Cannot connect to the Docker daemon at tcp://127.0.0.1:2375. Is the docker daemon running?
The problem comes from the fact that the docker daemon is not part of the google/cloud-sdk image that you specify for this job. You should create your own image with google/cloud-sdk as the base image. You can also install and start docker in the before_script section of the job (a rough sketch follows the quoted doc below). See the doc on Docker Hub for the image you use:
Installing additional components
By default, all gcloud components are installed on the default images (google/cloud-sdk:latest and google/cloud-sdk:VERSION).
The google/cloud-sdk:slim and google/cloud-sdk:alpine images do not contain additional components pre-installed.
You can extend these images by following the instructions below:
Debian-based images
cd debian_slim/
docker build --build-arg CLOUD_SDK_VERSION=159.0.0 \
  --build-arg INSTALL_COMPONENTS="google-cloud-sdk-datastore-emulator" \
  -t my-cloud-sdk-docker:slim .
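As for the before_script idea, a rough sketch, assuming the job image is Debian-based (google/cloud-sdk is) and the distro's docker.io package is acceptable; the docker:dind service supplies the daemon, so the job only needs the client:

docker:
  image: google/cloud-sdk:latest
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375   # reach the dind service by hostname
  before_script:
    # Assumption: Debian-based image; installs the docker client from the archive
    - apt-get update && apt-get install -y docker.io
  script:
    - docker info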
The image which can use DIND is docker:dind, not necessarily google/cloud-sdk:latest, so your .gitlab-ci.yml would look like:
docker:
  image: docker:dind
  variables:
    DOCKER_DRIVER: overlay2
  services:
    - docker:dind
  tags:
    - docker
  script:
    - docker pull buster-slim
    # ...
    # I don't know what needs to be built...
You can check this tutorial for a step by step recipe.
In fact, the only change needed to make this work was:

  DOCKER_HOST: tcp://docker:2375

The dind service CAN run alongside the cloud-sdk image, but it has to be addressed by its service hostname (docker), not 127.0.0.1.
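Putting that fix into the original job, a minimal corrected sketch (everything except DOCKER_HOST is taken from the question; note that the bare buster-slim is likely meant to be debian:buster-slim):

docker:
  image: google/cloud-sdk:latest
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_HOST: tcp://docker:2375   # the dind service hostname, not 127.0.0.1
  services:
    - docker:dind
  tags:
    - docker
  script:
    - docker pull debian:buster-slim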

Reuse artifacts from previous pipelines in Bitbucket

I would like to use an artifact from a previous pipeline, but checking the documentation I haven't been able to find out how.
I've only seen how to reuse them in the same pipeline (https://confluence.atlassian.com/bitbucket/using-artifacts-in-steps-935389074.html)
How can I reuse an existing artifact from a previous pipeline?
This is my current bitbucket-pipelines.yml:
image: php:7.2.18

pipelines:
  branches:
    delete-me:
      - step:
          name: Build docker containers
          artifacts:
            - docker_containers.tar
          services:
            - docker
          script:
            - docker/build_containers_if_not_exists.sh
            - sleep 30 # wait for docker to start all containers
            - docker save $(docker images -q) -o ${BITBUCKET_CLONE_DIR}/docker_containers.tar
      - step:
          name: Compile styles & js
          caches:
            - composer
          script:
            - docker load --input docker_containers.tar
            - docker-compose up -d
            - composer install
Maybe you can try the Pipelines Caches feature. You can define a custom cache, for example:
definitions:
  caches:
    docker_containers: /docker_containers
The cache will be saved after the first successful build and will be available to the next pipelines for the next 7 days. Here is more info about using caches https://confluence.atlassian.com/bitbucket/caching-dependencies-895552876.html
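A sketch of how a later step would then opt in to that custom cache (the step name and script here are illustrative):

pipelines:
  default:
    - step:
        name: Use cached containers
        caches:
          - docker_containers   # restores /docker_containers when the cache exists
        script:
          - ls /docker_containers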

Deploy docker container using gitlab ci docker-in-docker setup

I'm currently trying to set up a gitlab ci pipeline. I've chosen to go with the Docker-in-Docker setup.
I got my ci pipeline to build and push the docker image to the gitlab registry, but I cannot seem to deploy it using the following configuration:
.gitlab-ci.yml
image: docker:stable

services:
  - docker:dind

stages:
  - build
  - deploy

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2
  TEST_IMAGE: registry.gitlab.com/user/repo.nl:$CI_COMMIT_REF_SLUG
  RELEASE_IMAGE: registry.gitlab.com/user/repo.nl:release

before_script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
  - docker info

build:
  stage: build
  tags:
    - build
  script:
    - docker build --pull -t $TEST_IMAGE .
    - docker push $TEST_IMAGE
  only:
    - branches

deploy:
  stage: deploy
  tags:
    - deploy
  script:
    - docker pull $TEST_IMAGE
    - docker tag $TEST_IMAGE $RELEASE_IMAGE
    - docker push $RELEASE_IMAGE
    - docker run -d --name "review-$CI_COMMIT_REF_SLUG" -p "80:80" $RELEASE_IMAGE
  only:
    - master
  when: manual
When I run the deploy action I actually get the following feedback in my log, but when I check the server there is no container running.
$ docker run -d --name "review-$CI_COMMIT_REF_SLUG" -p "80:80" $RELEASE_IMAGE
7bd109a8855e985cc751be2eaa284e78ac63a956b08ed8b03d906300a695a375
Job succeeded
I have no clue as to what I am forgetting here. Am I right to expect this method to be correct for deploying containers? What am I missing / doing wrong?
tldr: Want to deploy images into production using gitlab ci and docker-in-docker setup, job succeeds but there is no container. Goal is to have a running container on host after deployment.
Found out that I needed to mount the docker socket in the gitlab-runner configuration as well, and not only have it available in the container.
By adding --docker-volumes '/var/run/docker.sock:/var/run/docker.sock' and removing DOCKER_HOST=tcp://docker:2375, I was able to connect to docker on my host system and spawn sibling containers.
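For reference, a sketch of where that flag goes when registering the runner (the URL and token are placeholders):

gitlab-runner register \
  --url https://gitlab.com/ \
  --registration-token YOUR_TOKEN \
  --executor docker \
  --docker-image docker:stable \
  --docker-volumes '/var/run/docker.sock:/var/run/docker.sock'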

What are services in gitlab pipeline job?

I am using gitlab's pipeline for CI and CD to build images for my projects.
In every job there are configurations to be set, like image and stage, but I can't wrap my head around what services are. Can someone explain their functionality? Thanks.
Here's a code snippet I found and use:
build-run:
  image: docker:latest
  stage: build
  services:
    - docker:dind
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker build -t "$CI_REGISTRY_IMAGE/my-project:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE/my-project:$CI_COMMIT_SHA"
  cache:
    untracked: true
  environment: build
The documentation says:
The services keyword defines just another Docker image that is run during your job and is linked to the Docker image that the image keyword defines. This allows you to access the service image during build time.

Using a private Docker Image from Gitlab Registry as the base image for CI

How should I authenticate if I want to use an image from the Gitlab Registry as a base image of another CI build?
According to https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/blob/master/docs/configuration/advanced-configuration.md#using-a-private-docker-registry I first have to manually login on the runner machine. Somehow it feels strange to login with an existing Gitlab user.
Is there a way to use the CI variable "CI_BUILD_TOKEN" (which is described as "Token used for authenticating with the GitLab Container Registry") for authentication to pull the base image from Gitlab Registry?
EDIT: I found out that I can use images from public projects. But I don't really want to make my docker projects public.
UPDATE: Starting with Gitlab 8.14 you can just use docker images from the built-in docker registry. See https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/blob/master/docs/configuration/advanced-configuration.md#support-for-gitlab-integrated-registry
All of the above answers, including the accepted one, are deprecated. This is possible in 2021:
https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#access-an-image-from-a-private-container-registry
TL;DR
Set the CI/CD variable DOCKER_AUTH_CONFIG with the appropriate authentication information in the following format:
Step 1:
# The "-n" prevents a newline from being encoded into the password.
echo -n "my_username:my_password" | base64
# Example output to copy
bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ=
Step 2 (this JSON is the value to set for the DOCKER_AUTH_CONFIG variable):
{
  "auths": {
    "registry.example.com:5000": {
      "auth": "(Base64 content from above)"
    }
  }
}
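With the variable set, a job can then use the private image directly; the registry host and image path below are illustrative placeholders:

build:
  # Pulled by the runner using the credentials in DOCKER_AUTH_CONFIG
  image: registry.example.com:5000/my-group/my-image:latest
  script:
    - echo "base image came from the private registry"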
Now it's possible; they included that option months ago.
Use gitlab-ci-token as the user and the variable $CI_BUILD_TOKEN as the password.
This example works on GitLab 8.13.6. It builds the test image if needed, and in the next stage uses it to perform syntax checks:
build_test:
  stage: build_test_image
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:test -f dockerfiles/test/Dockerfile .
    - docker push $CI_REGISTRY_IMAGE:test
  tags:
    - docker_build
  environment: test

test_syntax:
  image: $CI_REGISTRY_IMAGE:test
  stage: test
  script:
    - flake8 --ignore=E501,E265,E402 .
UPDATE: Re-reading the question, the accepted answer is correct. In my example, the job test_syntax will fail to authenticate to the registry unless the user logs in manually from the runner machine. It can work if the two runners are on the same host, but that's not the best solution anyway.
In gitlab-ci-multi-runner 1.8 there's an option to add the registry credentials as a variable, so you only need to log in once to get the encoded credentials. See the documentation.
No, this is currently not possible in any elegant way. GitLab should implement explicit credentials for the base images; that would be the most straightforward and correct solution.
You need to docker login on the GitLab Runner machine. You can't use gitlab-ci-token since those tokens expire and are also project-dependent, so you can't actually use one token for every project. Using your own login is pretty much the only solution available right now (happy to be corrected on this one).
This is absolutely possible as of September 2018. I'll post my naive implementation here.
Context:
You'll need to leverage the docker:dind service, which lets you run the docker command inside a docker container.
This will require a valid docker login, which you can do using GitLab's built-in credentials (gitlab-ci-token, $CI_JOB_TOKEN).
You should then be able to authenticate to your repo's registry (example $REGISTRY value: registry.gitlab.com/$USER/$REPO:$TAG), which will allow you to push or pull docker images from inside the CI/CD context, as well as from any authenticated docker server.
Implementation:
Create this block at top level to ensure it runs before the following jobs:
before_script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $REGISTRY
Job to build and save images to your registry:
build_container:
  image: docker:latest
  stage: build
  services:
    - docker:dind
  script:
    - docker build -t $REGISTRY .
    - docker push $REGISTRY
Job that uses the custom image:
build_app:
  image: $REGISTRY
  stage: deploy
  script:
    - npm run build
Regarding Cross-Repo Jobs:
I accomplish this by creating a "bot" GitLab user and granting it access to repos/groups as appropriate. Then it's just a matter of replacing gitlab-ci-token and $CI_JOB_TOKEN with the appropriate environment variables, as sketched below. This is only necessary if the base image is private.
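A sketch of that substitution; BOT_USER and BOT_TOKEN are hypothetical CI/CD variables you would define yourself with the bot account's credentials:

before_script:
  # BOT_USER / BOT_TOKEN: assumed project CI/CD variables holding bot credentials
  - docker login -u $BOT_USER -p $BOT_TOKEN $REGISTRY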
It's possible you first have to log in to the gitlab container registry of the image you want to use; see the example below. Note the before_script, which authenticates you before using the image.
image: docker:latest

services:
  - docker:dind

stages:
  - build

variables:
  CONTAINER_RELEASE_IMAGE: registry.gitlab.com/obonyojimmy/node-mono-clr:latest

before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_BUILD_TOKEN registry.gitlab.com

build-app:
  stage: build
  image: $CONTAINER_RELEASE_IMAGE
  script:
    - npm run build
I had a similar situation. My Java application uses the Testcontainers lib in tests, and this lib runs a Docker container from a private registry. I spent a lot of time trying to figure this out, and I managed to handle it by creating a ~/.docker/config.json file in the before_script section. I hope it helps somebody:
image: openjdk:11-jdk-slim

stages:
  - build

before_script:
  - mkdir ~/".docker"
  - echo "{\"auths\":{\"$REGISTRY_HOST\":{\"auth\":\"$(printf "$REGISTRY_USER:$REGISTRY_PASSWORD" | openssl base64 -A)\"}}}" > ~/".docker/config.json"

build:
  stage: build
  services:
    - docker:dind
  script:
    - ./gradlew build
