Cannot run DIND for GCloud SDK docker image in GitLab Runner - gitlab

I have set up a simple .gitlab-ci.yml file which should be able to use the docker service:
docker:
  image: google/cloud-sdk:latest
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_HOST: tcp://127.0.0.1:2375
  services:
    - docker:dind
  tags:
    - docker
  script:
    - docker pull buster-slim
However, it fails; see this job:
https://gitlab.com/knyttl/runnerdemo/-/jobs/932204050
2020-12-25T19:31:04.558361767Z time="2020-12-25T19:31:04.558195638Z" level=info msg="API listen on [::]:2375"
2020-12-25T19:31:04.558522591Z time="2020-12-25T19:31:04.558447616Z" level=info msg="API listen on /var/run/docker.sock"
The service apparently starts correctly, but then the job cannot reach it:
Cannot connect to the Docker daemon at tcp://127.0.0.1:2375. Is the docker daemon running?

The problem comes from the fact that the docker daemon is not part of the google/cloud-sdk image that you specify for this job. You should create your own image with google/cloud-sdk as the base image. You can also install and start Docker in the before_script section of the job; see the sketch after the excerpt below. See the documentation on Docker Hub for the image you use:
Installing additional components
By default, all gcloud components are installed on the default images (google/cloud-sdk:latest and google/cloud-sdk:VERSION).
The google/cloud-sdk:slim and google/cloud-sdk:alpine images do not contain additional components pre-installed.
You can extend these images by following the instructions below:
Debian-based images
cd debian_slim/
docker build --build-arg CLOUD_SDK_VERSION=159.0.0 \
    --build-arg INSTALL_COMPONENTS="google-cloud-sdk-datastore-emulator" \
    -t my-cloud-sdk-docker:slim .
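
For illustration, here is a minimal sketch of the before_script approach mentioned above; it assumes the Debian-based google/cloud-sdk image and the docker.io package from the Debian repositories (the package name and job layout are my assumptions, not part of the question):

docker:
  image: google/cloud-sdk:latest
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_HOST: tcp://docker:2375      # reach the dind service by its service name
  before_script:
    # install the Docker CLI inside the Debian-based cloud-sdk image (assumed package name)
    - apt-get update && apt-get install -y docker.io
  script:
    - docker info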

The image which can use DIND is docker:dind, not necessarily google/cloud-sdk:latest, so your .gitlab-ci.yml would look like:
docker:
  image: docker:dind
  variables:
    DOCKER_DRIVER: overlay2
  services:
    - docker:dind
  tags:
    - docker
  script:
    - docker pull buster-slim
    # ...
    # I don't know what needs to be built...
You can check this tutorial for a step-by-step recipe.

In fact, the only change needed to make this work was:
  DOCKER_HOST: tcp://docker:2375
The dind service CAN be used from within the cloud-sdk image, but the daemon has to be addressed by its service hostname (docker), not 127.0.0.1.
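
Putting it together, the corrected job is the original one with only DOCKER_HOST changed (a sketch; the pull target is written as debian:buster-slim on the assumption that this is the intended image):

docker:
  image: google/cloud-sdk:latest
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_HOST: tcp://docker:2375   # service hostname instead of 127.0.0.1
  services:
    - docker:dind
  tags:
    - docker
  script:
    - docker pull debian:buster-slim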

Related

How to put multiple images in image keyword in gitlab-ci

I have two jobs, build_binary and build deb, and I want to combine them. The issue is that they use different images: the former uses golang:latest and the latter uses ubuntu:20.04, as shown:
gitlab-ci.yml

build_binary:
  stage: build
  image: golang:latest
  rules:
    - if: '$CI_COMMIT_TAG'
  # tags:
  #   - docker-executor
  artifacts:
    untracked: true
  script:
    - echo "Some script"

build deb:
  stage: deb-build
  rules:
    - if: '$CI_COMMIT_TAG'
  # tags:
  #   - docker-executor
  image: ubuntu:20.04
  dependencies:
    - build_binary
  script:
    - echo "Some script 2"
  artifacts:
    untracked: true
I have tried these two ways, but neither works:

build_binary:
  stage: build
  image: golang:latest ubuntu:20.04

and

build_binary:
  stage: build
  image: [golang:latest, ubuntu:20.04]
Any pointers would be very helpful.
It's not really about gitlab-ci - first you should understand what Docker images and containers are by nature.
You cannot magically get a mixed image that is ubuntu:20.04 + golang:latest; it is simply impossible to do from the gitlab-ci file alone.
But you can create your own IMAGE.
You can take the Dockerfile for ubuntu:20.04 from Docker Hub: https://hub.docker.com/_/ubuntu
Then you can add commands to it that install golang inside this operating system.
After this, open the golang:latest Dockerfile and copy its installation steps into the Ubuntu Dockerfile, with the required modifications.
Then run docker build -t my-super-ubuntu-and-golang . (see the manual).
Then check it: docker run it and verify that it is a normal Ubuntu with Go installed.
If all goes well, you can push it to your own Docker Hub account and use it in gitlab-ci:
image: your-name/golang-ubuntu-20.04
...
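
As a sketch of how the two jobs could then share that single image (the image name is the hypothetical one from above; the build and packaging commands are placeholders):

build_binary:
  stage: build
  image: your-name/golang-ubuntu-20.04   # hypothetical combined image
  rules:
    - if: '$CI_COMMIT_TAG'
  script:
    - go build -o myapp ./...            # Go toolchain from the golang layer
  artifacts:
    untracked: true

build deb:
  stage: deb-build
  image: your-name/golang-ubuntu-20.04   # same image, so the Ubuntu tooling is available too
  rules:
    - if: '$CI_COMMIT_TAG'
  dependencies:
    - build_binary
  script:
    - echo "package the binary with Ubuntu tools here"
  artifacts:
    untracked: true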
The suggestion to use services is quite incorrect - a service starts another image and connects it over the network, so you can run postgres, rabbitmq and other services and use them in your tests, for example:

image: alpine
services: [ rabbitmq ]

This does not mean that rabbitmq is started inside alpine - both are started: the alpine image and the rabbitmq image, the latter under the local host name rabbitmq, and your alpine container can connect to tcp://rabbitmq:5672 and use it. It is a different approach.
P.S.
For example, you can look at https://hub.docker.com/r/partlab/ubuntu-golang
I think it is not the image you really want, but you can see how to make a mixed ubuntu-golang image.
Using the image and services keywords in your .gitlab-ci.yml file, you may define a GitLab CI/CD job that uses multiple Docker images.
For example, you could use the following configuration to specify a job that uses both a Golang image and an Ubuntu image:
build:
  image: golang:latest
  services:
    - ubuntu:20.04
  before_script:
    - run some command using Ubuntu
  script:
    - go build
    - run some other command using Ubuntu

Couldn't connect to Docker daemon

I am new to Docker and CI/CD.
I am using a VPS with Ubuntu 18.04.
The project's Docker setup runs locally and works fine.
I don't quite understand why the server is trying to reach Docker over http rather than tcp.
override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd
docker service status
daemon.json
{ "storage-driver":"overlay" }
gitlab-ci.yml

image: docker/compose:latest

services:
  - docker:dind

stages:
  - deploy

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""

deploy:
  stage: deploy
  only:
    - master
  tags:
    - deployment
  script:
    # - export DOCKER_HOST="tcp://127.0.0.1:2375"
    - docker-compose stop || true
    - docker-compose up -d
    - docker ps
  environment:
    name: production
Error
Set the DOCKER_HOST variable. When using the docker:dind service, the default hostname for the daemon is the name of the service, docker.
variables:
  DOCKER_HOST: "tcp://docker:2375"
You must also have set up your GitLab runner to enable privileged containers.
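
Applied to the configuration above, the variables block becomes (a sketch; the rest of the file stays as posted):

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
  DOCKER_HOST: "tcp://docker:2375"   # name of the docker:dind service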
Docker needs root permission to be accessed. If you want to run docker or docker-compose commands as a regular user, you need to add your user to the docker group:
sudo usermod -a -G docker yourUserName
By doing that, you can bring up your services and run other Docker commands as your regular user. If you don't want to add your user to the docker group, you have to prefix every docker command with sudo:
sudo docker-compose up -d

Deploy docker container using gitlab ci docker-in-docker setup

I'm currently trying to set up a GitLab CI pipeline. I've chosen to go with the Docker-in-Docker setup.
I got my CI pipeline to build and push the Docker image to the GitLab registry, but I cannot seem to deploy it using the following configuration:
.gitlab-ci.yml
image: docker:stable

services:
  - docker:dind

stages:
  - build
  - deploy

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2
  TEST_IMAGE: registry.gitlab.com/user/repo.nl:$CI_COMMIT_REF_SLUG
  RELEASE_IMAGE: registry.gitlab.com/user/repo.nl:release

before_script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
  - docker info

build:
  stage: build
  tags:
    - build
  script:
    - docker build --pull -t $TEST_IMAGE .
    - docker push $TEST_IMAGE
  only:
    - branches

deploy:
  stage: deploy
  tags:
    - deploy
  script:
    - docker pull $TEST_IMAGE
    - docker tag $TEST_IMAGE $RELEASE_IMAGE
    - docker push $RELEASE_IMAGE
    - docker run -d --name "review-$CI_COMMIT_REF_SLUG" -p "80:80" $RELEASE_IMAGE
  only:
    - master
  when: manual
When I run the deploy action I actually get the following feedback in my log, but when I go check the server there is no container running.
$ docker run -d --name "review-$CI_COMMIT_REF_SLUG" -p "80:80" $RELEASE_IMAGE
7bd109a8855e985cc751be2eaa284e78ac63a956b08ed8b03d906300a695a375
Job succeeded
I have no clue as to what I am forgetting here. Am I right to expect this method to be correct for deploying containers? What am I missing / doing wrong?
tldr: I want to deploy images into production using gitlab ci and a docker-in-docker setup; the job succeeds but there is no container. The goal is to have a running container on the host after deployment.
I found out that I needed to mount the docker socket in the gitlab-runner configuration as well, and not only have it available in the container.
By adding --docker-volumes '/var/run/docker.sock:/var/run/docker.sock' and removing DOCKER_HOST=tcp://docker:2375, I was able to connect to Docker on my host system and spawn sibling containers.
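
For reference, with the socket mounted by the runner, a deploy job along these lines needs neither DOCKER_HOST nor the dind service (an illustration of the socket-binding approach, not the poster's exact file):

deploy:
  stage: deploy
  tags:
    - deploy
  script:
    # the runner mounts /var/run/docker.sock, so the docker CLI talks to the host
    # daemon and "docker run" starts a sibling container directly on the host
    - docker pull $TEST_IMAGE
    - docker tag $TEST_IMAGE $RELEASE_IMAGE
    - docker push $RELEASE_IMAGE
    - docker run -d --name "review-$CI_COMMIT_REF_SLUG" -p "80:80" $RELEASE_IMAGE
  only:
    - master
  when: manual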

What are services in gitlab pipeline job?

I am using GitLab's pipelines for CI and CD to build images for my projects.
In every job there are configurations to be set, like image and stage, but I can't wrap my head around what services are. Can someone explain their functionality? Thanks.
Here's a code snippet I found and use:
build-run:
  image: docker:latest
  stage: build
  services:
    - docker:dind
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker build -t "$CI_REGISTRY_IMAGE/my-project:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE/my-project:$CI_COMMIT_SHA"
  cache:
    untracked: true
  environment: build
The documentation says:
The services keyword defines just another Docker image that is run during your job and is linked to the Docker image that the image keyword defines. This allows you to access the service image during build time.
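
To make that concrete, here is a small illustrative job (not taken from the answer above; the image names, the PGPASSWORD trick and the query are assumptions) where a database service becomes reachable under its image name:

test:
  image: python:3.11        # example job image
  services:
    - postgres:15           # started alongside the job, reachable as host "postgres"
  variables:
    POSTGRES_PASSWORD: example
    PGPASSWORD: example     # lets psql authenticate non-interactively
  script:
    # the job container and the service share a network; connect by service name
    - apt-get update && apt-get install -y postgresql-client
    - psql -h postgres -U postgres -d postgres -c 'SELECT 1;'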

build and push docker images with GitLab CI

I would like to build and push docker images to my local nexus repo with GitLab CI
This is my current CI file:
image: docker:latest

services:
  - docker:dind

before_script:
  - docker info
  - docker login -u some_user -p nexus-rfit some_host

stages:
  - build

build-deploy-ubuntu-image:
  stage: build
  script:
    - docker build -t some_host/dev-image:ubuntu ./ubuntu/
    - docker push some_host/dev-image:ubuntu
  only:
    - master
  when: manual
I also have a job for an alpine Docker image, but whichever of them I run, it fails with the following error:
Checking out 13102ac4 as master...
Skipping Git submodules setup
$ docker info
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
ERROR: Job failed: exit code 1
So technically the docker daemon in the image isn't running, but I have no idea why.
GitLab folks have a reference in their docs about using docker build inside docker-based jobs: https://docs.gitlab.com/ce/ci/docker/using_docker_build.html#use-docker-in-docker-executor. Since you seem to have everything in place (i.e. the right image for the job and the additional docker:dind service), it's most likely a runner-configuration issue.
If you look at the second step in the docs:
Register GitLab Runner from the command line to use docker and privileged mode:
[...]
Notice that it's using the privileged mode to start the build and service containers. If you want to use docker-in-docker mode, you always have to use privileged = true in your Docker containers.
Probably you're using a runner that was not configured in privileged mode and hence can't properly run the docker daemon inside. You can directly edit /etc/gitlab-runner/config.toml on the registered runner to add that option.
(Also, read the section in the docs for more information about the performance implications of the storage driver you choose / your runner supports when using dind.)

Resources