I'd like to create a Docker-based GitLab CI runner which pulls the Docker images for the build from a private Docker Registry (v2). I cannot make the GitLab Runner pull the image from a local registry; it tries to GET something from a /v1 API. I get the following error message:
ERROR: Build failed: Error while pulling image: Get http://registry:5000/v1/repositories/maven/images: dial tcp: lookup registry on 127.0.1.1:53: no such host
Here's a minimal example, using docker-compose and a web browser.
I have the following docker-compose.yml file:
version: "2"
services:
gitlab:
image: gitlab/gitlab-ce
ports:
- "22:22"
- "8080:80"
links:
- registry:registry
gitlab_runner:
image: gitlab/gitlab-runner
volumes:
- /var/run/docker.sock:/var/run/docker.sock
links:
- registry:registry
- gitlab:gitlab
registry:
image: registry:2
After the first GitLab login, I register the runner with the GitLab instance:
root@130d08732613:/# gitlab-runner register
Running in system-mode.
Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/ci):
http://192.168.61.237:8080/ci
Please enter the gitlab-ci token for this runner:
tE_1RKnwkfj2HfHCcrZW
Please enter the gitlab-ci description for this runner:
[130d08732613]: docker
Please enter the gitlab-ci tags for this runner (comma separated):
Registering runner... succeeded runner=tE_1RKnw
Please enter the executor: docker-ssh+machine, docker, docker-ssh, parallels, shell, ssh, virtualbox, docker+machine:
docker
Please enter the default Docker image (eg. ruby:2.1):
maven:latest
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!
After this, I see the GitLab Runner in my GitLab instance.
After this I push a simple Maven image to my newly created Docker registry:
vilmosnagy@vnagy-dell:~/$ docker tag maven:3-jdk-7 172.19.0.2:5000/maven:3-jdk7
vilmosnagy@vnagy-dell:~/$ docker push 172.19.0.2:5000/maven:3-jdk7
The push refers to a repository [172.19.0.2:5000/maven]
79ab7e0adb89: Pushed
f831784a6a81: Pushed
b5fc1e09eaa7: Pushed
446c0d4b63e5: Pushed
338cb8e0e9ed: Pushed
d1c800db26c7: Pushed
42755cf4ee95: Pushed
3-jdk7: digest: sha256:135e7324ccfc7a360c7641ae20719b068f257647231d037960ae5c4ead0c3771 size: 1794
(I got the 172.19.0.2 IP address from the output of a docker inspect command.)
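For reference, a lookup like the following prints a container's IP address; the container name comptest_registry_1 is a guess based on the compose project name seen later in this question:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' comptest_registry_1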
After this I create a test project in GitLab and add a simple .gitlab-ci.yml file:
image: registry:5000/maven:3-jdk-7
stages:
- build
- test
- analyze
maven_build:
stage: build
script:
- "mvn -version"
And after the build, GitLab gives the error seen at the beginning of the post.
If I enter the running gitlab-runner container, I can access the registry at the given URL:
vilmosnagy@vnagy-dell:~/$ docker exec -it comptest_gitlab_runner_1 bash
root@c0c5cebcc06f:/# curl http://registry:5000/v2/maven/tags/list
{"name":"maven","tags":["3-jdk7"]}
root@c0c5cebcc06f:/# exit
exit
vilmosnagy@vnagy-dell:~/$
But the error is still the same.
Do you have any idea how to force the gitlab-runner to use the v2 API of the private registry?
Current GitLab and GitLab Runner versions support this, see: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#use-a-private-container-registry
On older GitLab versions I've solved this by copying an auth key into ~/.docker/config.json:
{
"auths": {
"my.docker.registry.url": {
"auth": "dmlsbW9zLm5hZ3k6VGZWNTM2WmhC"
}
}
}
I logged in from my computer and copied this auth key into the GitLab Runner's Docker container.
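A minimal sketch of that, assuming the registry host from the config above and the comptest_gitlab_runner_1 container name from the question; the path inside the container is an assumption:
# log in locally, then copy the resulting auth file into the runner container
docker login my.docker.registry.url
docker exec comptest_gitlab_runner_1 mkdir -p /root/.docker
docker cp ~/.docker/config.json comptest_gitlab_runner_1:/root/.docker/config.json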
What version of Docker do you run on GitLab?
Also, for a v2 registry, you have to explicitly allow an insecure registry with a command-line switch, or secure your registry using a certificate.
Otherwise Docker falls back to the v1 registry if it gets a security exception.
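For example, on the Docker host that runs the builds, an insecure registry can be allowed in /etc/docker/daemon.json (a sketch, assuming the registry is reachable as registry:5000; restart the Docker daemon after changing it):
{
  "insecure-registries": ["registry:5000"]
}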
Related
I'm new to GitLab pipelines and want to set up one for one of my Python projects.
I'm using the GitLab Runner Docker container with this configuration file:
version: '3'
services:
runner:
container_name: runner
image: gitlab/gitlab-runner:latest
restart: unless-stopped
environment:
- TZ=Europe/Berlin
volumes:
- ./data:/etc/gitlab-runner/
- /var/run/docker.sock:/var/run/docker.sock
Whenever a pipeline is executed I get this error message:
Running with gitlab-runner 14.10.1 (f761588f)
on docker xxxxxxx
Preparing the "docker" executor
Using Docker executor with image python:latest ...
Pulling docker image python:latest ...
Using docker image sha256:8dec8e39f2eca1ee1f1b668619023da929039a39983de4433d42d25a7b79267c for python:latest with digest python@sha256:567018293e51a89db96ce4c9679fdefc89b3d17a9fe9e94c0091b04ac5bb4e89 ...
Preparing environment
Running on runner-xxxxxxxxx-project-38-concurrent-0 via xxxxxxxx...
Getting source from Git repository
Fetching changes with git depth set to 20...
Reinitialized existing Git repository in /builds/group/project/.git/
remote: HTTP Basic: Access denied
fatal: Authentication failed for 'http://mygitlab.de/group/projekt.git/'
Cleaning up a project directory and file-based variables
ERROR: Job failed: exit code 1
The GitLab Runner is assigned to a project. I already tried resetting everything and using my IP address, my DNS name, my local IP, and my local device name, but nothing has worked yet.
I read about others having the same problem, mostly from 2016 or earlier. Is there anything I'm missing? Is there a setting I have to configure correctly?
EDIT:
Thanks, @Vadim, for correcting my tags.
After some more testing, I tried the same with a public repository, and to my surprise, it worked. The problem is the authorization. I still need to add as much as possible to my configuration, test whether it affects the public repo, and then try it with a private repo.
I will keep this updated, as I have heard of others having the same problem.
In my case, GitLab was behind a proxy other than the built-in Traefik proxy. I believe this made this setting necessary. After registering your runner, edit the config.toml and add the clone_url:
[[runners]]
url = "https://gitlab.example.com"
clone_url = "https://gitlab.example.com"
This solved the issue for me.
One thing that might help is to try passing the actual IP in the extra hosts for the runner.
It should go into the runner's config.toml, something like extra_hosts = ["mygitlab.de:192.1xx.x.x"] (Docker expects the hostname:IP order).
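A sketch of where that goes in the runner's config.toml; the hostname and the partially masked IP come from the lines above:
[[runners]]
  [runners.docker]
    extra_hosts = ["mygitlab.de:192.1xx.x.x"]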
When pushing a Docker image with a modified tag (to contain the registry) to the GitLab integrated registry, I get an access denied error.
The GitLab registry is used per project. Once the registry is enabled for a project, there is a hint on how to push images to the registry at https://gitlab.mydomain.com/**path/to/project**/container_registry.
The problem was solved when the full path was included in the tag name.
When I changed the tag name to [registryUrl]:[registryPort]/path/to/project/[imageNameWithTags], I was able to push to the repository/registry.
Indeed you need to do docker login ... as described on the /container_registry page.
You can also rely on some GitLab Predefined environment variables to make code generic and re-use it in many projects.
Here is the example of doing it in .gitlab-ci.yml:
build-image:
stage: build
image: docker:latest
services:
- name: docker:dind
script:
- docker build -t $CI_REGISTRY_IMAGE .
- docker login -u $CI_REGISTRY_USER -p "$CI_JOB_TOKEN" $CI_REGISTRY
- docker push $CI_REGISTRY_IMAGE
See the full example in one of our projects.
I have problems with my GitLab CI/CD configuration; I'm using the free runners on GitLab itself.
I have a Joomla (test) project using Docker; I'm learning how it works.
I created .gitlab-ci.yml with:
image: docker:latest
services:
- docker:dind
at the top of the file.
In the test stage I want to run the Docker image created in the build stage.
When I add:
services:
- mariadb:latest
to the test stage, I always get
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? at the docker pull command. Without it, I get an error at the docker run command during Joomla image initialization because there is no MySQL server.
Any help will be appreciated.
If you set
services:
- mariadb:latest
in your test job, this will override the globally defined services. Therefore, the docker daemon is not running during test. This also explains why you do not get the Docker daemon error when you omit the services definition for the test job.
Either specify the docker:dind service also for the test job, or remove the local services definition and add mariadb to your global services definition.
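A sketch of the first option, listing both services for the test job; the job name and the image variable are placeholders:
test:
  stage: test
  services:
    - docker:dind      # keeps the Docker daemon available in this job
    - mariadb:latest   # the database service requested in the question
  script:
    - docker pull $CI_REGISTRY_IMAGE   # placeholder for the image built in the build stage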
I would like to build and push Docker images to my local Nexus repo with GitLab CI.
This is my current CI file:
image: docker:latest
services:
- docker:dind
before_script:
- docker info
- docker login -u some_user -p nexus-rfit some_host
stages:
- build
build-deploy-ubuntu-image:
stage: build
script:
- docker build -t some_host/dev-image:ubuntu ./ubuntu/
- docker push some_host/dev-image:ubuntu
only:
- master
when: manual
I also have a job for an Alpine Docker image, but when I want to run either of them, it fails with the following error:
Checking out 13102ac4 as master...
Skipping Git submodules setup
$ docker info
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
ERROR: Job failed: exit code 1
So technically the docker daemon in the image isn't running, but I have no idea why.
The GitLab folks have a reference in their docs about using docker build inside Docker-based jobs: https://docs.gitlab.com/ce/ci/docker/using_docker_build.html#use-docker-in-docker-executor. Since you seem to have everything in place (i.e. the right image for the job and the additional docker:dind service), it's most likely a runner configuration issue.
If you look at the second step in the docs:
Register GitLab Runner from the command line to use docker and privileged mode:
[...]
Notice that it's using the privileged mode to start the build and service containers. If you want to use docker-in-docker mode, you always have to use privileged = true in your Docker containers.
You're probably using a runner that was not configured in privileged mode and hence can't properly run the Docker daemon inside. You can directly edit /etc/gitlab-runner/config.toml on your registered runner to add that option.
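A sketch of the relevant part of config.toml with that flag set; the name, URL, and token are placeholders:
[[runners]]
  name = "docker-runner"
  url = "https://gitlab.example.com/"
  token = "RUNNER_TOKEN"
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    privileged = true       # required for the docker:dind service to start
    volumes = ["/cache"]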
(Also, read the section in the docs for more info about the performance implications of the storage driver you choose/your runner supports when using dind.)
How should I authenticate if I want to use an image from the Gitlab Registry as a base image of another CI build?
According to https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/blob/master/docs/configuration/advanced-configuration.md#using-a-private-docker-registry I first have to manually log in on the runner machine. Somehow it feels strange to log in with an existing GitLab user.
Is there a way to use the CI variable "CI_BUILD_TOKEN" (which is described as "Token used for authenticating with the GitLab Container Registry") for authentication to pull the base image from the GitLab Registry?
EDIT: I found out that I can use images from public projects. But I don't really want to make my docker projects public.
UPDATE: Starting with GitLab 8.14 you can just use the Docker images from the built-in Docker registry. See https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/blob/master/docs/configuration/advanced-configuration.md#support-for-gitlab-integrated-registry
All of the above answers, including the accepted one, are deprecated. This is possible in 2021:
https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#access-an-image-from-a-private-container-registry
TL;DR
Set the CI/CD variable DOCKER_AUTH_CONFIG with the appropriate authentication information in the following format:
Step 1:
# The use of "-n" - prevents encoding a newline in the password.
echo -n "my_username:my_password" | base64
# Example output to copy
bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ=
Step 2 (this JSON is the value to be set for the DOCKER_AUTH_CONFIG variable):
{
"auths": {
"registry.example.com:5000": {
"auth": "(Base64 content from above)"
}
}
}
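Once DOCKER_AUTH_CONFIG is set as a CI/CD variable, jobs can reference the private image directly; a minimal sketch, where the image path is a placeholder matching the registry above:
test:
  image: registry.example.com:5000/my-group/my-image:latest
  script:
    - echo "running inside the private base image"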
Now it's possible; they included that option months ago.
Use gitlab-ci-token as the user and the variable $CI_BUILD_TOKEN as the password.
This example works on GitLab 8.13.6. It builds the test image if needed, and in the next stage uses it to perform syntax checks:
build_test:
stage: build_test_image
script:
- docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
- docker build -t $CI_REGISTRY_IMAGE:test -f dockerfiles/test/Dockerfile .
- docker push $CI_REGISTRY_IMAGE:test
tags:
- docker_build
environment: test
test_syntax:
image: $CI_REGISTRY_IMAGE:test
stage: test
script:
- flake8 --ignore=E501,E265,E402 .
UPDATE: Re-reading the question, the accepted answer is correct. In my example, the job test_syntax will fail to authenticate to the registry unless the user logs in manually from the runner machine. It can work if the two runners are on the same host, but that's not the best solution anyway.
In gitlab-ci-multi-runner 1.8 there's an option to add the registry credentials as a variable, so you only need to log in once to get the encoded credentials. See the documentation.
No, this is currently not possible in any elegant way. GitLab should implement explicit credentials for the base images, it will be the most straight-forward and correct solution.
You need to docker login on the GitLab Runner machine. You can't use the gitlab-ci-token since those tokens expire and are project-dependent, so you can't actually use one token for every project. Using your own login is pretty much the only solution available right now (happy to be corrected on this one).
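In practice that means a one-time login on the runner host, for example (the registry host is a placeholder):
# run once on the machine where gitlab-runner is installed
docker login registry.gitlab.example.com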
This is absolutely possible as of September 2018. I'll post my naive implementation here.
Context:
You'll need to leverage the docker:dind service, which lets you run the docker command inside of a docker container.
This will require a valid docker login, which you can do using GitLab's built-in credentials (the gitlab-ci-token user and the $CI_JOB_TOKEN variable).
You should then be able to authenticate to your repo's registry (example $REGISTRY value: registry.gitlab.com/$USER/$REPO:$TAG), which will allow you to push or pull Docker images from inside the CI/CD context, as well as from any authenticated Docker host.
Implementation:
Create this block at top level to ensure it runs before the following jobs:
before_script:
- docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $REGISTRY
Job to build and save images to your registry:
build_container:
image: docker:latest
stage: build
services:
- docker:dind
script:
- docker build -t $REGISTRY .
- docker push $REGISTRY
Job that uses the custom image:
build_app:
image: $REGISTRY
stage: deploy
script:
- npm run build
Regarding Cross-Repo Jobs:
I accomplish this by creating a "bot" GitLab user and assigning them access to repos/groups as appropriate. Then it's just a matter of replacing gitlab-ci-token and $CI_JOB_TOKEN with appropriate environment variables. This is only necessary if the base image is private.
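A sketch of that substitution, where BOT_USER and BOT_TOKEN are hypothetical CI/CD variables holding the bot account's credentials:
before_script:
  # log in with the bot account instead of the per-project job token
  - docker login -u "$BOT_USER" -p "$BOT_TOKEN" "$CI_REGISTRY"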
It's possible you first have to log in to the GitLab container registry of the image you want to use; kindly see the example below. Notice the before_script:, which basically authenticates you before using the image.
image: docker:latest
services:
- docker:dind
stages:
- build
variables:
CONTAINER_RELEASE_IMAGE: registry.gitlab.com/obonyojimmy/node-mono-clr:latest
before_script:
- docker login -u $CI_REGISTRY_USER -p $CI_BUILD_TOKEN registry.gitlab.com
build-app:
stage: build
image: $CONTAINER_RELEASE_IMAGE
script:
- npm run build
I had a similar situation. My Java application uses the Testcontainers library in tests, and this library runs a Docker container from a private registry. I spent a lot of time trying to figure this out, and I managed to handle it by creating a ~/.docker/config.json file in the before_script section. I hope it helps somebody:
image: openjdk:11-jdk-slim
stages:
- build
before_script:
- mkdir ~/".docker"
- echo "{\"auths\":{\"$REGISTRY_HOST\":{\"auth\":\"$(printf "$REGISTRY_USER:$REGISTRY_PASSWORD" | openssl base64 -A)\"}}}" > ~/".docker/config.json"
build:
stage: build
services:
- docker:dind
script:
- ./gradlew build