GitLab pipeline failed: ERROR: Preparation failed: Error response from daemon: toomanyrequests

I have a local Harbor Docker registry with all the images we need in it, and GitLab is connected to Harbor so that all job images are pulled from there. But since November 2, Docker Hub enforces a limit on the number of pulls, and it seems the dind service is still pulled from Docker Hub.
Is it possible to make the dind service pull from Harbor instead?
Pipeline output:
Running with gitlab-runner 12.10.1 (ce065b93)
on docker_runner_7 WykGNjC6
Preparing the "docker" executor
Using Docker executor with image harbor.XXX.XXXX.net/library/docker_maven_jvm14 ...
Starting service docker:dind ...
Pulling docker image docker:dind ...
ERROR: Preparation failed: Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit (docker.go:198:2s)
Will be retried in 3s ...
Using Docker executor with image harbor.XXX.XXX.net/library/docker_maven_jvm14 ...
Starting service docker:dind ...
Pulling docker image docker:dind ...
ERROR: Preparation failed: Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit (docker.go:198:4s)
Will be retried in 3s ...
Using Docker executor with image harbor.XXX.XXX.net/library/docker_maven_jvm14 ...
Starting service docker:dind ...
Pulling docker image docker:dind ...
ERROR: Preparation failed: Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit (docker.go:198:3s)
Will be retried in 3s ...
ERROR: Job failed (system failure): Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit (docker.go:198:3s)

Another way:
If you don't want to touch daemon.json, you can do this instead:
1. Pull the dind image from Docker Hub:
   docker pull docker:stable-dind
2. Log in to Harbor:
   docker login harbor.XXX.com
3. Tag the image for Harbor:
   docker tag docker:stable-dind harbor.XXX.com/library/docker:stable-dind
4. Push it to Harbor:
   docker push harbor.XXX.com/library/docker:stable-dind
5. In your .gitlab-ci.yml, instead of
   services:
     - docker:dind
   write:
   services:
     - name: harbor.XXX.com/library/docker:stable-dind
       alias: docker
My .gitlab-ci.yml:
stages:
  - build_and_push

Build:
  image: ${DOCKER_REGISTRY}/library/docker:ci_tools
  stage: build_and_push
  tags:
    - dind
  services:
    - name: ${DOCKER_REGISTRY}/library/docker:stable-dind
      alias: docker
  script:
    - docker login -u $DOCKER_REGISTRY_USERNAME -p $DOCKER_REGISTRY_PASSWORD $DOCKER_REGISTRY
    - make build test release REGISTRY=${DOCKER_REGISTRY}/library/ TELEGRAF_DOWNLOAD_URL="https://storage.XXX.com/ops/packages/telegraf-1.15.3_linux_amd64.tar.gz" TELEGRAF_SHA256="85a1ee372fb06921d09a345641bba5f3488d2db59a3fafa06f3f8c876523801d"
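A complementary mitigation on the runner side, sketched below, is to let the runner reuse images it already has locally instead of pulling docker:dind on every job. This assumes the Docker executor and that your gitlab-runner version supports the pull_policy option:

```toml
# /etc/gitlab-runner/config.toml excerpt (sketch; surrounding runner settings omitted)
[[runners]]
  executor = "docker"
  [runners.docker]
    # reuse a locally cached docker:dind instead of contacting the registry each job
    pull_policy = "if-not-present"
```

This doesn't redirect pulls to Harbor, but it cuts the number of Docker Hub pulls to (at most) one per runner host.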

I couldn't find a GitLab-side solution, but you can tell Docker to use your local registry as a mirror instead of going straight to Docker Hub.
Add a daemon.json at /etc/docker/daemon.json; if the file doesn't exist, simply create it at that path:
{
  "registry-mirrors": ["https://harbor.XXX.com"]
}
Then restart Docker:
sudo systemctl restart docker
Note that registry-mirrors only helps if the mirror acts as a pull-through cache; in Harbor that means setting up a proxy cache project.

I faced the same issue while deploying some microservices to a Kubernetes cluster. Here is a blog post I wrote that provides a workaround to optimize the deployment workflow: https://mailazy.com/blog/optimize-docker-pull-gitlab-pipelines/

Related

Gitlab Preparation failed: Error response from daemon: Conflict. The container name is already in use by container

I am using GitLab CI for continuous integration in my development workflow, with my gitlab-runner running on an Ubuntu instance.
I have one application that uses MongoDB v3.6, and I have to run a database integration test in the test stage of my CI/CD.
prepare:
  image: node:11.10.1-alpine
  stage: setup
  script:
    - npm install --quiet node-gyp
    - npm install --quiet
    - npm install -g yarn
    - chmod a+rwx /usr/local/lib/node_modules/yarn/bin/yarn*
    - chmod a+rwx /usr/local/bin/yarn*
    - yarn install
    - cd client
    - yarn install
    - cd ../
    - cd admin
    - yarn install
  cache:
    key: "$CI_COMMIT_REF_SLUG"
    paths:
      - node_modules/
      - client/node_modules/
      - admin/node_modules/
    policy: push
app_testing:
  image: node:11.10.1-alpine
  services:
    - name: mongo:3.6
  stage: test
  cache:
    key: "$CI_COMMIT_REF_SLUG"
    paths:
      - node_modules/
      - client/node_modules/
      - admin/node_modules/
  script:
    - yarn run test
    - cd client
    - yarn run test
    - cd ../
    - cd admin
    - yarn run test
Every alternate pipeline fails with the below error in the app_testing (test) stage.
ERROR: Job failed (system failure): Error response from daemon: Conflict. The container name "/runner-e7ce6426-project-11081252-concurrent-0-mongo-0" is already in use by container "0964b061b56d8995966f577e7354852130915228bac1a7513a773bbb82aeefaf". You have to remove (or rename) that container to be able to reuse that name.
Below is the full log of the specific job which is failing
Running with gitlab-runner 10.8.0 (079aad9e)
on SharedRunner-XYZGroup e7ce6426
Using Docker executor with image node:11.10.1-alpine ...
Starting service mongo:3.6 ...
Pulling docker image mongo:3.6 ...
Using docker image sha256:57c2f7e051086c7618c26a2998afb689214b4213edd578f82fe4b2b1d19ee7c0 for mongo:3.6 ...
ERROR: Preparation failed: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Will be retried in 3s ...
Using Docker executor with image node:11.10.1-alpine ...
Starting service mongo:3.6 ...
Pulling docker image mongo:3.6 ...
Using docker image sha256:57c2f7e051086c7618c26a2998afb689214b4213edd578f82fe4b2b1d19ee7c0 for mongo:3.6 ...
ERROR: Preparation failed: Error response from daemon: Conflict. The container name "/runner-e7ce6426-project-11081252-concurrent-0-mongo-0" is already in use by container "0964b061b56d8995966f577e7354852130915228bac1a7513a773bbb82aeefaf". You have to remove (or rename) that container to be able to reuse that name.
Will be retried in 3s ...
Using Docker executor with image node:11.10.1-alpine ...
Starting service mongo:3.6 ...
Pulling docker image mongo:3.6 ...
Using docker image sha256:57c2f7e051086c7618c26a2998afb689214b4213edd578f82fe4b2b1d19ee7c0 for mongo:3.6 ...
ERROR: Preparation failed: Error response from daemon: Conflict. The container name "/runner-e7ce6426-project-11081252-concurrent-0-mongo-0" is already in use by container "0964b061b56d8995966f577e7354852130915228bac1a7513a773bbb82aeefaf". You have to remove (or rename) that container to be able to reuse that name.
Will be retried in 3s ...
ERROR: Job failed (system failure): Error response from daemon: Conflict. The container name "/runner-e7ce6426-project-11081252-concurrent-0-mongo-0" is already in use by container "0964b061b56d8995966f577e7354852130915228bac1a7513a773bbb82aeefaf". You have to remove (or rename) that container to be able to reuse that name.
I tried disabling secondary caches, but it didn't work for me.
I don't know how to fix this issue. As a workaround, I have to trigger a new pipeline every time it fails, which of course nobody likes; the whole point of automating things is to be able to focus on what matters.
Any help on this would be appreciated. Thanks in advance.
This is a known issue; see https://gitlab.com/gitlab-org/gitlab-runner/issues/4327. GitLab re-uses the same service container name, which fails if the previous container wasn't deleted in time.
If you read through the (long list of) comments you'll find several workarounds, among them:
limit concurrency to 1
increase your runner machine's IOPS (e.g. switch from HDD to SSD)
As we were facing the same issue with the Docker executor, we worked around it by switching to the Docker+Machine executor. You can't be entirely sure that avoids the error, but in my experience jobs have run more reliably since. The tradeoff is that a VM is provisioned (and paid for) for each job.
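The first workaround can be sketched in the runner configuration. This is a minimal example; the name, URL, and token are placeholders:

```toml
# /etc/gitlab-runner/config.toml (sketch; url/token are placeholders)
concurrent = 1  # one job at a time, so service container names can't collide

[[runners]]
  name = "SharedRunner"
  url = "https://gitlab.example.com/"
  token = "REDACTED"
  executor = "docker"
  [runners.docker]
    image = "node:11.10.1-alpine"
```

The cost is obvious: with concurrency limited to 1, queued pipelines wait for each other.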

GitLab CI/Docker on Windows/node container - unknown network http

I'm trying to run a basic pipeline that should build my web project using GitLab CI.
My setup:
GitLab CE 10.1.4 on Ubuntu 16.04
GitLab Runner 10.1.0 on Windows 10 1703 with the docker executor
Docker 17.09.0-ce-win33 (13620) on the same Windows machine
.gitlab-ci.yml:
image: node:latest

cache:
  paths:
    - node_modules/

build:
  script:
    - npm install
    - npm run build
And I get a failed build with this output:
Running with gitlab-runner 10.1.0 (c1ecf97f)
on ****** (9366a476)
Using Docker executor with image node:latest ...
Using docker image sha256:4d72396806765f67139745bb91135098acaf23ce7d627e41eb4da9c62e5d6729 for predefined container...
Pulling docker image node:latest ...
Using docker image node:latest ID=sha256:cf20b9ab2cbc1b6f76e820839ad5f296b4c9a9fd04f3e74651c16ed49943dbc4 for build container...
ERROR: Job failed (system failure): dial http: unknown network http
I thought the problem might be that the container can't access the internet, but a curl container (https://hub.docker.com/r/byrnedo/alpine-curl/) got the data, so I don't think that's the case.
UPD:
The problem occurs when the GitLab Runner tries to attach to the container.
The problem was in the GitLab Runner config (config.toml): the Docker host.
host = "http://127.0.0.1:2375" is wrong;
it should be host = "tcp://127.0.0.1:2375"
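In context, the relevant part of config.toml would look like this (a sketch; the port and the rest of the runner block are assumptions based on the defaults):

```toml
# config.toml excerpt (sketch)
[[runners]]
  executor = "docker"
  [runners.docker]
    # must be tcp://, not http:// — the runner dials this address itself
    host = "tcp://127.0.0.1:2375"
```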

build and push docker images with GitLab CI

I would like to build and push Docker images to my local Nexus repo with GitLab CI.
This is my current CI file:
image: docker:latest

services:
  - docker:dind

before_script:
  - docker info
  - docker login -u some_user -p nexus-rfit some_host

stages:
  - build

build-deploy-ubuntu-image:
  stage: build
  script:
    - docker build -t some_host/dev-image:ubuntu ./ubuntu/
    - docker push some_host/dev-image:ubuntu
  only:
    - master
  when: manual
I also have a job for an Alpine Docker image, but whichever one I run fails with the following error:
Checking out 13102ac4 as master...
Skipping Git submodules setup
$ docker info
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
ERROR: Job failed: exit code 1
So technically the Docker daemon in the image isn't running, but I have no idea why.
GitLab has a reference in their docs about using docker build inside Docker-based jobs: https://docs.gitlab.com/ce/ci/docker/using_docker_build.html#use-docker-in-docker-executor. Since you seem to have everything in place (i.e. the right image for the job and the additional docker:dind service), it's most likely a runner-configuration issue.
If you look at the second step in the docs:
Register GitLab Runner from the command line to use docker and privileged mode:
[...]
Notice that it uses privileged mode to start the build and service containers. If you want to use docker-in-docker mode, you always have to use privileged = true in your Docker containers.
You're probably using a runner that was not configured in privileged mode and hence can't run the Docker daemon inside. You can directly edit /etc/gitlab-runner/config.toml on your registered runner to add that option.
(Also, read on the section on the docs for some more info about the performance related to the storage driver you choose/your runner supports when using dind)
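The config.toml change described above can be sketched as follows (assuming a Docker-executor runner block already exists; restart or reload the runner afterwards):

```toml
# /etc/gitlab-runner/config.toml excerpt (sketch)
[[runners]]
  executor = "docker"
  [runners.docker]
    # required for the docker:dind service container to start its own daemon
    privileged = true
```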

Gitlab CI cannot pull image from private docker registry

I'd like to create a Docker-based GitLab CI runner that pulls the Docker images for the build from a private Docker registry (v2). I cannot make the GitLab Runner pull the image from a local registry; it tries to GET something from a /v1 API, and I get the following error message:
ERROR: Build failed: Error while pulling image: Get http://registry:5000/v1/repositories/maven/images: dial tcp: lookup registry on 127.0.1.1:53: no such host
Here's a minimal example, using docker-compose and a web browser.
I have the following docker-compose.yml file:
version: "2"
services:
  gitlab:
    image: gitlab/gitlab-ce
    ports:
      - "22:22"
      - "8080:80"
    links:
      - registry:registry
  gitlab_runner:
    image: gitlab/gitlab-runner
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    links:
      - registry:registry
      - gitlab:gitlab
  registry:
    image: registry:2
After the first GitLab login, I register the runner with the GitLab instance:
root@130d08732613:/# gitlab-runner register
Running in system-mode.
Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/ci):
http://192.168.61.237:8080/ci
Please enter the gitlab-ci token for this runner:
tE_1RKnwkfj2HfHCcrZW
Please enter the gitlab-ci description for this runner:
[130d08732613]: docker
Please enter the gitlab-ci tags for this runner (comma separated):
Registering runner... succeeded runner=tE_1RKnw
Please enter the executor: docker-ssh+machine, docker, docker-ssh, parallels, shell, ssh, virtualbox, docker+machine:
docker
Please enter the default Docker image (eg. ruby:2.1):
maven:latest
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!
After this, I see the runner in my GitLab instance.
Then I push a simple Maven image to my newly created Docker repository:
vilmosnagy@vnagy-dell:~/$ docker tag maven:3-jdk-7 172.19.0.2:5000/maven:3-jdk7
vilmosnagy@vnagy-dell:~/$ docker push 172.19.0.2:5000/maven:3-jdk7
The push refers to a repository [172.19.0.2:5000/maven]
79ab7e0adb89: Pushed
f831784a6a81: Pushed
b5fc1e09eaa7: Pushed
446c0d4b63e5: Pushed
338cb8e0e9ed: Pushed
d1c800db26c7: Pushed
42755cf4ee95: Pushed
3-jdk7: digest: sha256:135e7324ccfc7a360c7641ae20719b068f257647231d037960ae5c4ead0c3771 size: 1794
(I got the 172.19.0.2 IP address from the output of a docker inspect command.)
After this I create a test project in GitLab and add a simple .gitlab-ci.yml file:
image: registry:5000/maven:3-jdk-7

stages:
  - build
  - test
  - analyze

maven_build:
  stage: build
  script:
    - "mvn -version"
After the build, GitLab gives the error shown at the beginning of the post.
If I enter the running gitlab-runner container, I can access the registry under the given URL:
vilmosnagy@vnagy-dell:~/$ docker exec -it comptest_gitlab_runner_1 bash
root@c0c5cebcc06f:/# curl http://registry:5000/v2/maven/tags/list
{"name":"maven","tags":["3-jdk7"]}
root@c0c5cebcc06f:/# exit
exit
vilmosnagy@vnagy-dell:~/$
But the error stays the same.
Do you have any idea how to force the gitlab-runner to use the v2 API of the private registry?
Current GitLab and GitLab Runner versions support this; see: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#use-a-private-container-registry
On older GitLab versions I solved this by copying an auth key into ~/.docker/config.json:
{
  "auths": {
    "my.docker.registry.url": {
      "auth": "dmlsbW9zLm5hZ3k6VGZWNTM2WmhC"
    }
  }
}
I logged into this container from my computer and copied the auth key into the GitLab Runner's Docker container.
What version of Docker do you run on GitLab?
Also, for a v2 registry you have to either explicitly allow insecure registries with a command-line switch, or secure your registry with a certificate. Otherwise Docker falls back to the v1 registry when it gets a security exception.
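The insecure-registry switch mentioned above is typically set via the Docker daemon configuration. A sketch, assuming the registry is reachable as registry:5000 (restart the Docker daemon afterwards):

```json
{
  "insecure-registries": ["registry:5000"]
}
```

This goes in /etc/docker/daemon.json on the host whose daemon does the pulling; for TLS-secured registries this entry is unnecessary.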

Ansible and docker: locally build image get pulled and causes failure

I'm using Ansible to provision my server with everything required to make my website work. The goal is to install a base system and provide it with Docker containers running apps (at the moment it's just one app).
The problem I'm facing is that my Docker image isn't hosted on Docker Hub or anywhere else; instead, it's built by an Ansible task. However, when I try to run the built image, Ansible tries to pull it (which isn't possible) and then dies.
This is what the playbook section looks like:
- name: check or build image
  docker_image:
    path: /srv/svenv.nl-docker
    name: 'svenv/svenv.nl'
    state: build

- name: start svenv/svenv.nl container
  docker:
    name: svenv.nl
    volumes:
      - /srv/svenv.nl-docker/data/var/lib/mysql/:/var/lib/mysql/
      - /srv/svenv.nl-docker/data/svenv.nl/svenv/media:/svenv.nl/svenv/media
    ports:
      - 80:80
      - 3306:3306
    image: svenv/svenv.nl
When I run this, a failure indicates that svenv/svenv.nl gets pulled from the repository; it isn't there, so it crashes:
failed: [vps02.svenv.nl] => {"changes": ["{\"status\":\"Pulling repository svenv/svenv.nl\"}\r\n", "{\"errorDetail\":{\"message\":\"Error: image svenv/svenv.nl:latest not found\"},\"error\":\"Error: image svenv/svenv.nl:latest not found\"}\r\n"], "failed": true, "status": ""}
msg: Unrecognized status from pull.
FATAL: all hosts have already failed -- aborting
My question is: how can I
build a local Docker image, and
then start it as a container without pulling it?
You are hitting this error:
https://github.com/ansible/ansible-modules-core/issues/1707
Ansible is attempting to create a container, but the create fails with:
docker.errors.InvalidVersion: mem_limit has been moved to host_config in API version 1.19
Unfortunately, a catch-all except: hides this error, so rather than failing with the above message, Ansible assumes the image is simply missing locally and attempts to pull it.
You can work around this by setting docker_api_version to something earlier than 1.19:
- name: start svenv/svenv.nl container
  docker:
    name: svenv.nl
    ports:
      - 80:80
      - 3306:3306
    image: svenv/svenv.nl
    docker_api_version: 1.18
