I'm using Ansible to provision my server with everything required to make my website work. The goal is to install a base system and provide it with Docker containers running apps (at the moment it's just one app).
The problem I'm facing is that my Docker image isn't hosted on Docker Hub or anywhere else. Instead it's built by an Ansible task. However, when I try to run the built image, Ansible tries to pull it (which isn't possible) and then dies.
This is what the playbook section looks like:
- name: check or build image
  docker_image:
    path: /srv/svenv.nl-docker
    name: svenv/svenv.nl
    state: build

- name: start svenv/svenv.nl container
  docker:
    name: svenv.nl
    volumes:
      - /srv/svenv.nl-docker/data/var/lib/mysql/:/var/lib/mysql/
      - /srv/svenv.nl-docker/data/svenv.nl/svenv/media:/svenv.nl/svenv/media
    ports:
      - 80:80
      - 3306:3306
    image: svenv/svenv.nl
When I run this, the failure shows that svenv/svenv.nl gets pulled from the repository; it isn't there, so the task crashes:
failed: [vps02.svenv.nl] => {"changes": ["{\"status\":\"Pulling repository svenv/svenv.nl\"}\r\n", "{\"errorDetail\":{\"message\":\"Error: image svenv/svenv.nl:latest not found\"},\"error\":\"Error: image svenv/svenv.nl:latest not found\"}\r\n"], "failed": true, "status": ""}
msg: Unrecognized status from pull.
FATAL: all hosts have already failed -- aborting
My question is: how can I build a local Docker image and then start it as a container without pulling it?
You are hitting this error:
https://github.com/ansible/ansible-modules-core/issues/1707
Ansible is attempting to create a container, but the create is failing with:
docker.errors.InvalidVersion: mem_limit has been moved to host_config in API version 1.19
Unfortunately, there is a catch-all except: that hides this error. The result is that rather than failing with the above message, Ansible assumes that the image is simply missing locally and attempts to pull it.
You can work around this by setting docker_api_version to something earlier than 1.19:
- name: start svenv/svenv.nl container
  docker:
    name: svenv.nl
    ports:
      - 80:80
      - 3306:3306
    image: svenv/svenv.nl
    docker_api_version: 1.18
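On newer Ansible versions, where the old docker module has been replaced, the docker_container module exposes a pull option, so the locally built image is used without contacting a registry. A rough sketch, assuming that module is available in your setup (it is not part of the original answer):

- name: start svenv/svenv.nl container
  docker_container:
    name: svenv.nl
    image: svenv/svenv.nl
    pull: false        # only pull if the image is not present locally, so the freshly built image is used
    ports:
      - "80:80"
      - "3306:3306"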
I am trying to cache images locally in a Docker registry using a pull-through cache as described here: https://docs.docker.com/registry/recipes/mirror/. But when I do, Docker seems to ignore the pull-through cache and instead pulls images directly from Docker Hub.
I have a Docker Compose file that I am using to run the registry:
version: '3.9'
services:
  registry:
    restart: always
    image: registry:2
    ports:
      - 5000:5000
    volumes:
      - ~/.docker/registry:/var/lib/registry
      - ./registry-config.yml:/etc/docker/registry/config.yml
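For the registry to act as a pull-through cache, the registry-config.yml mounted above needs the proxy setting from the linked recipe. A minimal sketch (the storage and http settings are illustrative, not taken from the question):

version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
proxy:
  # mirroring Docker Hub is what turns this registry into a pull-through cache
  remoteurl: https://registry-1.docker.io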
And I have configured Docker to use the registry via the --registry-mirror option, which I have confirmed is taking effect by checking the output of docker info:
$ docker info
...
Registry Mirrors:
https://localhost:5000/
...
But when I try to pull an image, I see no activity in the registry's logs, as if Docker is ignoring the mirror option and just going straight to Docker Hub.
I have also confirmed the registry itself is working by pulling alpine:latest from Docker Hub, tagging it as localhost:5000/alpine:latest, pushing it to the registry, and then pulling it back out again, all of which works fine.
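Concretely, that sanity check was along these lines:

docker pull alpine:latest
docker tag alpine:latest localhost:5000/alpine:latest
docker push localhost:5000/alpine:latest
docker pull localhost:5000/alpine:latest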
This is all being done on a Linux VM running Ubuntu.
Can someone please help me understand what I am doing wrong, and how I am supposed to get Docker to pull all images through the pull-through cache?
Thanks in advance for your help.
I have two jobs, build_binary and build deb, and I want to combine them. The issue is that they use different images: the former uses golang:latest and the latter uses ubuntu:20.04, as shown:
gitlab-ci.yml
build_binary:
  stage: build
  image: golang:latest
  rules:
    - if: '$CI_COMMIT_TAG'
  # tags:
  #   - docker-executor
  artifacts:
    untracked: true
  script:
    - echo "Some script"

build deb:
  stage: deb-build
  rules:
    - if: '$CI_COMMIT_TAG'
  # tags:
  #   - docker-executor
  image: ubuntu:20.04
  dependencies:
    - build_binary
  script:
    - echo "Some script 2"
  artifacts:
    untracked: true
I have tried the following two ways, but neither works.
build_binary:
  stage: build
  image: golang:latest ubuntu:20.04

and

build_binary:
  stage: build
  image: [golang:latest, ubuntu:20.04]
Any pointers would be very helpful.
This is not really about gitlab-ci: first of all you should understand what Docker images and containers are by nature. You cannot magically get a mixed image that is ubuntu:20.04 + golang:latest; it's simply impossible to produce one from the gitlab-ci file.
But you can create your own IMAGE.
You can take the Dockerfile for ubuntu:20.04 from Docker Hub: https://hub.docker.com/_/ubuntu
Then you can add commands to it that install golang inside this operating system.
After this you open the golang:latest Dockerfile and copy its installation steps into the ubuntu Dockerfile, with the required modifications.
Then you run docker build -t my-super-ubuntu-and-golang . (see the manual).
Then you check it: docker run it and verify that it's a normal ubuntu with golang.
If all of that succeeds, you can push it to your own account on Docker Hub and use it in gitlab-ci:
image: your-name/golang-ubuntu-20.04
...
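A rough sketch of such a Dockerfile (installing Go from the Ubuntu repositories is just an illustration; the official golang image installs a newer toolchain from upstream tarballs):

FROM ubuntu:20.04
# Install a Go toolchain on top of Ubuntu; fetch a specific version
# from https://go.dev/dl/ instead if the distro package is too old.
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
      golang-go ca-certificates git \
 && rm -rf /var/lib/apt/lists/*

Then build, verify and push it:

docker build -t your-name/golang-ubuntu-20.04 .
docker run --rm your-name/golang-ubuntu-20.04 go version
docker push your-name/golang-ubuntu-20.04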
The suggestion to use services is quite incorrect: a service starts another image and connects it over the network, so you can run postgres, rabbit and other services and use them in your tests. For example:
image: alpine
services: [ rabbitmq ]
This does not mean that rabbitmq will be started inside alpine. Both will be started: the alpine image and the rabbitmq image with the local host name rabbitmq, and your alpine container can connect to tcp://rabbitmq:5672 and use it. That is a different approach.
P.S.
For example you can look at https://hub.docker.com/r/partlab/ubuntu-golang.
I think it's not the image you really want, but you can see how such a mixed ubuntu-golang image is made.
Using the image and services keywords in your .gitlab-ci.yml file, you may define a GitLab CI/CD job that uses multiple Docker images.
For example, you could use the following configuration to specify a job that uses both a Golang image and an Ubuntu image:
build:
  image: golang:latest
  services:
    - ubuntu:20.04
  before_script:
    - run some command using Ubuntu
  script:
    - go build
    - run some other command using Ubuntu
I have a local Harbor Docker registry, all the needed images are there, and GitLab is connected to Harbor so all the images are pulled from it. But after November 2, Docker put a limit on the number of pulls, and it seems the dind service still pulls from Docker Hub.
Is it possible to make the dind service pull from Harbor?
Pipeline output:
Running with gitlab-runner 12.10.1 (ce065b93)
on docker_runner_7 WykGNjC6
Preparing the "docker" executor
Using Docker executor with image harbor.XXX.XXXX.net/library/docker_maven_jvm14 ...
Starting service docker:dind ...
Pulling docker image docker:dind ...
ERROR: Preparation failed: Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit (docker.go:198:2s)
Will be retried in 3s ...
Using Docker executor with image harbor.XXX.XXX.net/library/docker_maven_jvm14 ...
Starting service docker:dind ...
Pulling docker image docker:dind ...
ERROR: Preparation failed: Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit (docker.go:198:4s)
Will be retried in 3s ...
Using Docker executor with image harbor.XXX.XXX.net/library/docker_maven_jvm14 ...
Starting service docker:dind ...
Pulling docker image docker:dind ...
ERROR: Preparation failed: Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit (docker.go:198:3s)
Will be retried in 3s ...
ERROR: Job failed (system failure): Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit (docker.go:198:3s)
Another way:
If you don't want to add daemon.json, you can do this:
Pull docker-dind from Docker Hub:
docker pull docker:stable-dind
Log in to Harbor:
docker login harbor.XXX.com
Tag the image for Harbor:
docker tag docker:stable-dind harbor.XXX.com/library/docker:stable-dind
Push it to Harbor:
docker push harbor.XXX.com/library/docker:stable-dind
Go to the .gitlab-ci.yml. Instead of:
services:
  - docker:dind
write:
services:
  - name: harbor.XXX.com/library/docker:stable-dind
    alias: docker
My .gitlab-ci.yml:
stages:
  - build_and_push

Build:
  image: ${DOCKER_REGISTRY}/library/docker:ci_tools
  stage: build_and_push
  tags:
    - dind
  services:
    - name: ${DOCKER_REGISTRY}/library/docker:stable-dind
      alias: docker
  script:
    - docker login -u $DOCKER_REGISTRY_USERNAME -p $DOCKER_REGISTRY_PASSWORD $DOCKER_REGISTRY
    - make build test release REGISTRY=${DOCKER_REGISTRY}/library/ TELEGRAF_DOWNLOAD_URL="https://storage.XXX.com/ops/packages/telegraf-1.15.3_linux_amd64.tar.gz" TELEGRAF_SHA256="85a1ee372fb06921d09a345641bba5f3488d2db59a3fafa06f3f8c876523801d"
I couldn't find a solution for GitLab itself, but you can tell Docker to bypass Docker Hub and go to the local registry instead.
Add daemon.json at /etc/docker/daemon.json; if it doesn't exist, simply create it at that path.
daemon.json
{
  "registry-mirrors": ["https://harbor.XXX.com"]
}
sudo systemctl restart docker
I too faced the same issue while deploying some micro-services to a Kubernetes cluster. Here is a blog I wrote that provides a workaround to optimize the deployment workflow: https://mailazy.com/blog/optimize-docker-pull-gitlab-pipelines/
I am using GitLab CI for continuous integration in my development. I have my gitlab-runner running on an Ubuntu instance.
I have one application where I use MongoDB v3.6, and I have to run a database integration test in the test stage of my CI/CD.
prepare:
  image: node:11.10.1-alpine
  stage: setup
  script:
    - npm install --quiet node-gyp
    - npm install --quiet
    - npm install -g yarn
    - chmod a+rwx /usr/local/lib/node_modules/yarn/bin/yarn*
    - chmod a+rwx /usr/local/bin/yarn*
    - yarn install
    - cd client
    - yarn install
    - cd ../
    - cd admin
    - yarn install
  cache:
    key: "$CI_COMMIT_REF_SLUG"
    paths:
      - node_modules/
      - client/node_modules/
      - admin/node_modules/
    policy: push

app_testing:
  image: node:11.10.1-alpine
  services:
    - name: mongo:3.6
  stage: test
  cache:
    key: "$CI_COMMIT_REF_SLUG"
    paths:
      - node_modules/
      - client/node_modules/
      - admin/node_modules/
  script:
    - yarn run test
    - cd client
    - yarn run test
    - cd ../
    - cd admin
    - yarn run test
For every alternate pipeline, I am getting the below error in the app_testing (test) stage.
ERROR: Job failed (system failure): Error response from daemon: Conflict. The container name "/runner-e7ce6426-project-11081252-concurrent-0-mongo-0" is already in use by container "0964b061b56d8995966f577e7354852130915228bac1a7513a773bbb82aeefaf". You have to remove (or rename) that container to be able to reuse that name.
Below is the full log of the specific job which is failing
Running with gitlab-runner 10.8.0 (079aad9e)
on SharedRunner-XYZGroup e7ce6426
Using Docker executor with image node:11.10.1-alpine ...
Starting service mongo:3.6 ...
Pulling docker image mongo:3.6 ...
Using docker image sha256:57c2f7e051086c7618c26a2998afb689214b4213edd578f82fe4b2b1d19ee7c0 for mongo:3.6 ...
ERROR: Preparation failed: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Will be retried in 3s ...
Using Docker executor with image node:11.10.1-alpine ...
Starting service mongo:3.6 ...
Pulling docker image mongo:3.6 ...
Using docker image sha256:57c2f7e051086c7618c26a2998afb689214b4213edd578f82fe4b2b1d19ee7c0 for mongo:3.6 ...
ERROR: Preparation failed: Error response from daemon: Conflict. The container name "/runner-e7ce6426-project-11081252-concurrent-0-mongo-0" is already in use by container "0964b061b56d8995966f577e7354852130915228bac1a7513a773bbb82aeefaf". You have to remove (or rename) that container to be able to reuse that name.
Will be retried in 3s ...
Using Docker executor with image node:11.10.1-alpine ...
Starting service mongo:3.6 ...
Pulling docker image mongo:3.6 ...
Using docker image sha256:57c2f7e051086c7618c26a2998afb689214b4213edd578f82fe4b2b1d19ee7c0 for mongo:3.6 ...
ERROR: Preparation failed: Error response from daemon: Conflict. The container name "/runner-e7ce6426-project-11081252-concurrent-0-mongo-0" is already in use by container "0964b061b56d8995966f577e7354852130915228bac1a7513a773bbb82aeefaf". You have to remove (or rename) that container to be able to reuse that name.
Will be retried in 3s ...
ERROR: Job failed (system failure): Error response from daemon: Conflict. The container name "/runner-e7ce6426-project-11081252-concurrent-0-mongo-0" is already in use by container "0964b061b56d8995966f577e7354852130915228bac1a7513a773bbb82aeefaf". You have to remove (or rename) that container to be able to reuse that name.
I tried disabling secondary caches, but it didn't work for me.
Now I don't know how to fix this issue. As a workaround, I have to trigger a new pipeline every time it fails, which of course no one likes, since the whole point of automating things is to be able to focus on more important work.
Any help on this would be appreciated.
Thanks in advance.
This is a known issue, see https://gitlab.com/gitlab-org/gitlab-runner/issues/4327. GitLab is re-using the same service container name. This approach fails if the previous container wasn't deleted in time.
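As a quick manual workaround, assuming you can reach the runner host's Docker daemon, the stale service container can be removed so its name becomes free again, e.g. for the container named in the log above:

docker rm -f runner-e7ce6426-project-11081252-concurrent-0-mongo-0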
If you read through the (long list of) comments you will discover several workarounds, among others:
limit concurrency to 1 (see the config.toml sketch after this list)
increase your Runner machine's IOPS (e.g. switch from HDD to SSD)
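A minimal sketch of that concurrency setting, assuming the default config location on the runner host (not taken from the linked issue):

# /etc/gitlab-runner/config.toml (global section)
concurrent = 1   # run at most one job at a time, so service container names cannot collide

followed by a gitlab-runner restart so the change takes effect.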
As we were facing the same issue with the Docker executor, we currently work around it by using the Docker+Machine executor. Although you can't be entirely sure of avoiding that error, my experience is that jobs have been running more reliably since then. The tradeoff, however, is that for each job a VM is provisioned, which has to be paid for.
I'd like to create a Docker-based GitLab CI runner which pulls the Docker images for the build from a private Docker Registry (v2). I cannot make the GitLab Runner pull the image from the local registry; it tries to GET something from a /v1 API. I get the following error message:
ERROR: Build failed: Error while pulling image: Get http://registry:5000/v1/repositories/maven/images: dial tcp: lookup registry on 127.0.1.1:53: no such host
Here's a minimal example, using docker-compose and a web browser.
I have the following docker-compose.yml file:
version: "2"
services:
  gitlab:
    image: gitlab/gitlab-ce
    ports:
      - "22:22"
      - "8080:80"
    links:
      - registry:registry
  gitlab_runner:
    image: gitlab/gitlab-runner
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    links:
      - registry:registry
      - gitlab:gitlab
  registry:
    image: registry:2
After the first Gitlab login, I register the runner into the Gitlab instance:
root@130d08732613:/# gitlab-runner register
Running in system-mode.
Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/ci):
http://192.168.61.237:8080/ci
Please enter the gitlab-ci token for this runner:
tE_1RKnwkfj2HfHCcrZW
Please enter the gitlab-ci description for this runner:
[130d08732613]: docker
Please enter the gitlab-ci tags for this runner (comma separated):
Registering runner... succeeded runner=tE_1RKnw
Please enter the executor: docker-ssh+machine, docker, docker-ssh, parallels, shell, ssh, virtualbox, docker+machine:
docker
Please enter the default Docker image (eg. ruby:2.1):
maven:latest
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!
After this, I see the Gitlab runner in my Gitlab instance:
After this I push a simple maven image to my newly created Docker repository:
vilmosnagy@vnagy-dell:~/$ docker tag maven:3-jdk-7 172.19.0.2:5000/maven:3-jdk7
vilmosnagy@vnagy-dell:~/$ docker push 172.19.0.2:5000/maven:3-jdk7
The push refers to a repository [172.19.0.2:5000/maven]
79ab7e0adb89: Pushed
f831784a6a81: Pushed
b5fc1e09eaa7: Pushed
446c0d4b63e5: Pushed
338cb8e0e9ed: Pushed
d1c800db26c7: Pushed
42755cf4ee95: Pushed
3-jdk7: digest: sha256:135e7324ccfc7a360c7641ae20719b068f257647231d037960ae5c4ead0c3771 size: 1794
(I got the 172.19.0.2 IP address from the output of a docker inspect command.)
After this I create a test project in the Gitlab and add a simple .gitlab-ci.yml file:
image: registry:5000/maven:3-jdk-7

stages:
  - build
  - test
  - analyze

maven_build:
  stage: build
  script:
    - "mvn -version"
And after the build, GitLab gives the error seen at the beginning of the post.
If I enter into the running gitlab-runner container, I can access the registry under the given URL:
vilmosnagy@vnagy-dell:~/$ docker exec -it comptest_gitlab_runner_1 bash
root@c0c5cebcc06f:/# curl http://registry:5000/v2/maven/tags/list
{"name":"maven","tags":["3-jdk7"]}
root#c0c5cebcc06f:/# exit
exit
vilmosnagy#vnagy-dell:~/$
But the error is still the same:
Do you have any idea how to force the gitlab-runner to use the v2 api of the private registry?
Current Gitlab and Gitlab Runners support this, see: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#use-a-private-container-registry
On older GitLab versions I solved this by copying an auth key into ~/.docker/config.json:
{
  "auths": {
    "my.docker.registry.url": {
      "auth": "dmlsbW9zLm5hZ3k6VGZWNTM2WmhC"
    }
  }
}
I logged in to this registry from my computer and copied the resulting auth key into the GitLab Runner's Docker container.
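If it helps, that copy step can be done roughly like this (the container name matches the compose example above; the path is the default for root):

# on your machine: log in once so ~/.docker/config.json gains the auth entry
docker login my.docker.registry.url
# copy it into the runner container
docker exec comptest_gitlab_runner_1 mkdir -p /root/.docker
docker cp ~/.docker/config.json comptest_gitlab_runner_1:/root/.docker/config.json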
What version of Docker do you run on GitLab?
Also, for a v2 registry you have to explicitly allow an insecure registry with a command-line switch, or secure your registry using a certificate.
Otherwise Docker falls back to the v1 registry when it gets a security exception.
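For example, the insecure-registry setting can go on the daemon's command line (dockerd --insecure-registry registry:5000) or into /etc/docker/daemon.json on the host whose daemon performs the pulls; the host name and port here match the compose file above:

{
  "insecure-registries": ["registry:5000"]
}

followed by a restart of the Docker daemon.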