I've tried to get my setup working with GitLab CI. I have a simple .gitlab-ci.yml file:
build_ubuntu:
  image: ubuntu:14.04
  services:
    - rikorose/gcc-cmake:gcc-5
  stage: build
  script:
    - apt-get update
    - apt-get install -y python3 build-essential curl
    - cmake --version
  tags:
    - linux
I want to get an Ubuntu 14.04 LTS with gcc and cmake installed (the apt-get version is too old). If I run it locally (via the docker --link command) everything works, but when the GitLab CI runner processes it I get the following warning (which is, in my case, an error):
Running with gitlab-ci-multi-runner 9.2.0 (adfc387)
on xubuntuci1 (19c6d3ce)
Using Docker executor with image ubuntu:14.04 ...
Starting service rikorose/gcc-cmake:gcc-5 ...
Pulling docker image rikorose/gcc-cmake:gcc-5 ...
Using docker image rikorose/gcc-cmake:gcc-5 ID=sha256:ef2ac00b36e638897a2046c954e89ea953cfd5c257bf60103e32880e88299608 for rikorose/gcc-cmake service...
Waiting for services to be up and running...
*** WARNING: Service runner-19c6d3ce-project-54-concurrent-0-rikorose__gcc-cmake probably didn't start properly.
Error response from daemon: Cannot link to a non running container: /runner-19c6d3ce-project-54-concurrent-0-rikorose__gcc-cmake AS /runner-19c6d3ce-project-54-concurrent-0-rikorose__gcc-cmake-wait-for-service/runner-19c6d3ce-project-54-concurrent-0-rikorose__gcc-cmake
Does anybody know how I can fix this?
Thanks in advance
Tonka
You must start the gitlab-runner container with
--privileged true
but that is not enough. Any runner containers that are spun up by GitLab after registering need to be privileged too. So you need to get into the gitlab-runner container and edit its configuration:
docker exec -it runner /bin/bash
nano /etc/gitlab-runner/config.toml
and change the privileged flag from false to true:
privileged = true
That will solve the problem!
Note: you can also mount config.toml as a volume on the container; then you won't have to log into the container to change privileged to true, because you can preconfigure the container before running it.
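For reference, a preconfigured start could look like the following; a sketch, assuming the host keeps the runner config under /srv/gitlab-runner/config (the location used in the GitLab docs):

docker run -d --name gitlab-runner --restart always \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest

A config.toml with privileged = true placed in that host directory is then picked up when the container starts.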
In my case, I had to add
variables:
  DOCKER_TLS_CERTDIR: ""
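For context, this variable usually sits at the top level of .gitlab-ci.yml next to the docker:dind service; the empty value disables the TLS certificate handshake that newer dind images expect. A minimal sketch:

image: docker:latest
services:
  - docker:dind
variables:
  DOCKER_TLS_CERTDIR: ""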
I have a problem when I run a mongo image with docker-compose.yml. I need to encrypt my data because it is very sensitive. My docker-compose.yml is:
version: '3'
services:
  mongo:
    image: "mongo"
    command: ["mongod", "--enableEncryption", "--encryptionKeyFile", "/data/db/mongodb-keyfile"]
    ports:
      - "27017:27017"
    volumes:
      - $PWD/data:/data/db
I checked that the mongodb-keyfile exists in data/db, no problem there, but when I build and bring up the image, the command is:
"docker-entrypoint.sh mongod --enableEncryption --encryptionKeyFile /data/db/mongodb-keyfile"
The status:
About a minute ago Exited (2) About a minute ago
I looked at the logs and see:
Error parsing command line: unrecognised option '--enableEncryption'
I understand the error, but I don't know how to solve it. I'm thinking of making a Dockerfile from an Ubuntu (or any Linux) base image and installing Mongo with all the necessary configuration myself. Or of finding another fix.
Please help me, thanks.
According to the documentation, the encryption is available in MongoDB Enterprise only. So you need a paid subscription to use it.
For the docker image of the enterprise version, it says here that you can build it yourself:
Download the Docker build files for MongoDB Enterprise.
Set MONGODB_VERSION to your major version of choice.
export MONGODB_VERSION=4.0
curl -O --remote-name-all https://raw.githubusercontent.com/docker-library/mongo/master/$MONGODB_VERSION/{Dockerfile,docker-entrypoint.sh}
Build the Docker container.
Use the downloaded build files to create a Docker container image wrapped around MongoDB Enterprise. Set DOCKER_USERNAME to your Docker Hub username.
export DOCKER_USERNAME=username
chmod 755 ./docker-entrypoint.sh
docker build --build-arg MONGO_PACKAGE=mongodb-enterprise --build-arg MONGO_REPO=repo.mongodb.com -t $DOCKER_USERNAME/mongo-enterprise:$MONGODB_VERSION .
Test your image.
The following commands run mongod locally in a Docker container and check the version.
docker run --name mymongo -itd $DOCKER_USERNAME/mongo-enterprise:$MONGODB_VERSION
docker exec -it mymongo /usr/bin/mongo --eval "db.version()"
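Once the enterprise image is built, the compose file from the question should work if pointed at it; a sketch, assuming the keyfile is owned by the container's mongodb user with mode 600:

version: '3'
services:
  mongo:
    image: "$DOCKER_USERNAME/mongo-enterprise:4.0"
    command: ["mongod", "--enableEncryption", "--encryptionKeyFile", "/data/db/mongodb-keyfile"]
    ports:
      - "27017:27017"
    volumes:
      - $PWD/data:/data/db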
I have a GitLab CI pipeline set up, and sometimes I get random failures where the test is ongoing but then it shows:
ERROR: Job failed (system failure): Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
What could be the reason for this?
This is on Gitlab 11.1.4, gitlab-runner 10.7.4, Docker version 1.13.1.
OK, so a Docker container cannot be created. It could be one of these reasons:
- The user gitlab-runner (the one who picks up the pipelines and starts them) is not a member of the docker group:
  sudo usermod -a -G docker gitlab-runner
- The daemon is not running. Enable it (so that it starts at boot) and start it:
  systemctl enable docker && systemctl start docker
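To quickly verify both conditions, something like the following should do (assuming a systemd host):

id gitlab-runner            # 'docker' should appear in the groups list
systemctl is-active docker  # should print 'active'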
The problem seemed to be a too-old docker daemon. Recent docker versions (>= 18.06.0-ce) seem to behave well.
I have two containers running on a host. When I'm in container A, I want to run a diff on container B against its image to see what has changed in the filesystem. I know this can be run easily from the host itself, but I'm wondering: is there any way of doing this from inside container A, to see the difference on container B?
You can run any docker command from within a container, and it will talk to the host's docker daemon, if:
You have access to the docker socket inside the container
You have a docker client inside the container
You can achieve the first condition by mounting the docker socket into the container - add the following to your docker run call:
-v /var/run/docker.sock:/var/run/docker.sock
The second condition depends on your docker image.
If you are running a bare Ubuntu image, you can get a shell inside the container that is able to do what you want with the following command:
docker run -it -v /var/run/docker.sock:/var/run/docker.sock ubuntu:latest sh -c "apt-get update ; apt-get install docker.io -y ; bash"
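From that shell, the diff itself is just the ordinary client command pointed at container B; a sketch, assuming container B is named containerB (hypothetical name):

docker diff containerB

Each line of its output is prefixed with A (added), C (changed), or D (deleted) relative to the image the container was started from.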
I would like to build and push Docker images to my local Nexus repo with GitLab CI. This is my current CI file:
image: docker:latest

services:
  - docker:dind

before_script:
  - docker info
  - docker login -u some_user -p nexus-rfit some_host

stages:
  - build

build-deploy-ubuntu-image:
  stage: build
  script:
    - docker build -t some_host/dev-image:ubuntu ./ubuntu/
    - docker push some_host/dev-image:ubuntu
  only:
    - master
  when: manual
I also have a job for an Alpine Docker image, but when I try to run either of them, it fails with the following error:
Checking out 13102ac4 as master...
Skipping Git submodules setup
$ docker info
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
ERROR: Job failed: exit code 1
So technically the docker daemon in the image isn't running, but I have no idea why.
The GitLab folks have a reference in their docs about using docker build inside docker-based jobs: https://docs.gitlab.com/ce/ci/docker/using_docker_build.html#use-docker-in-docker-executor. Since you seem to have everything in place (i.e. the right image for the job and the additional docker:dind service), it's most likely a runner-config issue.
If you look at the second step in the docs:
Register GitLab Runner from the command line to use docker and privileged mode:
[...]
Notice that it's using the privileged mode to start the build and service containers. If you want to use docker-in-docker mode, you always have to use privileged = true in your Docker containers.
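The command elided above boils down to passing --docker-privileged at registration time; roughly the following, with the URL, token, and description as placeholders:

sudo gitlab-runner register -n \
  --url https://gitlab.example.com/ \
  --registration-token REGISTRATION_TOKEN \
  --executor docker \
  --description "My Docker Runner" \
  --docker-image "docker:latest" \
  --docker-privileged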
Probably you're using a runner that was not configured in privileged mode and hence can't properly run the docker daemon inside. You can directly edit the /etc/gitlab-runner/config.toml on your registered runner to add that option.
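The relevant section of config.toml then looks roughly like this (the runner name and default image are illustrative):

[[runners]]
  name = "my-dind-runner"
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    privileged = true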
(Also, read the section of the docs for more info about the performance impact of the storage driver you choose/your runner supports when using dind.)
I'm trying to run an nginx container as a service and share 2 volumes between the host machine and container, so that files in one directory are automatically shared with the other paired directory.
My docker-compose.yml is the following:
version: '2'
services:
  nginx:
    image: nginx
    build: .
    ports:
      - "5000:80"
    volumes:
      - /home/user1/share:/share/user1
      - /home/user2/share:/share/user2
    restart: always
The only way I can currently get this to work is by adding privileged: true to the docker-compose file; however, I am not allowed to do this due to security requirements.
When trying to access the volume in the container, I get the following error:
[root@host docker-nginx]# docker exec -it dockernginx_nginx_1 bash
root@2d574f9c6131:/# ls /share/user1/
ls: cannot open directory /share/user1/: Permission denied
Even attaching myself to bash on the container with the following parameters denies me access to the resource (or at least to listing its contents):
docker exec -it --privileged=true -u 6004:6004 dockernginx_nginx_1 bash
(Note: 6004:6004 happens to be the uid:gid ownership that is passed on to /share/user1/)
Is there any way of accessing the contents without building the nginx service with elevated privileges?
Perhaps the issue lies in SELinux restrictions enforced in the container?
The container is running Debian GNU/Linux 8 (jessie) and the host is running CentOS Linux 7 (Core)
Related questions:
Permission denied inside Docker container
Docker was running with --selinux-enabled=true, which prohibited me from accessing the contents of directories in the container.
Read more: http://www.projectatomic.io/blog/2016/07/docker-selinux-flag/
The solution was to disable it. That can be done either by (1) configuring the daemon or by (2) installing the non-SELinux CentOS package; I went with option 2:
I made sure to reinstall and update Docker from 1.10 to 1.12.1 and not install docker-engine-selinux.noarch, but instead docker-engine.x86_64, with the SELinux package installed as a dependency (yum does this automatically). After doing this and starting the Docker daemon, you can verify with ps aux | grep "docker" that docker-containerd is not started with the --selinux-enabled=true option.
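If disabling SELinux outright is not acceptable, the blog post linked above also describes relabeling bind mounts with the :z (shared) or :Z (private) volume suffixes instead of turning enforcement off; a sketch against the compose file from the question:

version: '2'
services:
  nginx:
    image: nginx
    ports:
      - "5000:80"
    volumes:
      - /home/user1/share:/share/user1:z
      - /home/user2/share:/share/user2:z
    restart: always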