I installed gitlab_runner.exe and Docker Desktop on Windows 10 and am trying to execute the following job from .gitlab-ci.yml:
.docker-build:
  image: ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/docker:19.03.12
  services:
    - name: ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/docker:19.03.12-dind
      alias: docker
  before_script:
    - docker info
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker build -t $CI_REGISTRY/$CI_PROJECT_PATH/$IMAGE_NAME:$CI_PIPELINE_ID -t $CI_REGISTRY/$CI_PROJECT_PATH/$IMAGE_NAME:$TAG -f $DOCKER_FILE $DOCKER_PATH
    - docker push $CI_REGISTRY/$CI_PROJECT_PATH/$IMAGE_NAME:$TAG
    - docker push $CI_REGISTRY/$CI_PROJECT_PATH/$IMAGE_NAME:$CI_PIPELINE_ID
As I am running locally, the variable CI_REGISTRY is not getting set. I tried the following, but nothing worked:
1. gitlab-runner-windows-amd64.exe exec shell --env "CI_REGISTRY=gitco.com:4004" .docker-build
2. Running set CI_REGISTRY=gitco.com:4004 from the Windows command prompt
3. Setting the variable within .gitlab-ci.yml (roughly as sketched below)
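For attempt 3, the variables block looked roughly like this (a minimal sketch of what I tried; the value is just my local registry):
variables:
  CI_REGISTRY: "gitco.com:4004"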
No matter what I try, it does not recognize the CI_REGISTRY value, and the job errors as follows:
Error response from daemon: Get https://$CI_REGISTRY/v2/: dial tcp: lookup $CI_REGISTRY: no such host
I googled but was unable to find a relevant link for this issue. Any help is highly appreciated.
I'm using Jenkins to build a project that runs in a docker container and I've run into a problem.
When executing this piece of code:
stage('deploy front') {
    when { equals expected: 'do', actual: buildFront }
    agent { docker { image 'ebiwd/alpine-ssh' } }
    steps {
        sh 'chmod 400 .iac/privatekey'
        sh 'ssh -i .iac/privatekey ci_user@134.209.181.163'
    }
}
I get an error:
+ ssh -i .iac/privatekey ci_user@134.209.181.163
Pseudo-terminal will not be allocated because stdin is not a terminal.
Warning: Permanently added '134.209.181.163' (ECDSA) to the list of known hosts.
bind: No such file or directory
unix_listener: cannot bind to path: /root/.ssh/sockets/ci_user@134.209.181.163-22.uzumQ42Zb6Tcr2E9
Moreover, if I run the same command by hand inside the container, everything works:
ssh -i .iac/privatekey ci_user@134.209.181.163
The Jenkins container is started with this docker-compose.yaml:
version: '3.1'
services:
  jenkins:
    image: jenkins/jenkins:2.277.1-lts
    container_name: jenkins
    hostname: jenkins
    restart: always
    user: root
    privileged: true
    ports:
      - 172.17.0.1:8070:8080
      - 50000:50000
    volumes:
      - /opt/docker/jenkins/home:/var/jenkins_home
      - /etc/timezone:/etc/timezone
      - /usr/bin/docker:/usr/bin/docker
      - /etc/localtime:/etc/localtime
      - /var/run/docker.sock:/var/run/docker.sock
What could be the problem?
I have the same error in my GitLab pipelines:
bind: No such file or directory
unix_listener: cannot bind to path: /root/.ssh/sockets/aap_adm@wp-np2-26.ebi.ac.uk-22.LIXMnQy4cW5klzgB
lost connection
I think the error is related to this changeset.
In particular, the ssh config file requires the path "~/.ssh/sockets" to be present. Since we are not using the script /usr/local/bin/add-ssh-key (a custom script created for that image), this path is missing.
I've opened an issue in the image project: Error using the image in CI/CD pipelines #10.
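As a workaround until that is fixed, you could create the missing directory yourself before calling ssh. A minimal sketch for the Jenkins stage above (untested; assumes the container runs as root, so the home directory is /root):
steps {
    // create the sockets directory the image's ssh config expects
    sh 'mkdir -p /root/.ssh/sockets'
    sh 'chmod 400 .iac/privatekey'
    sh 'ssh -i .iac/privatekey ci_user@134.209.181.163'
}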
The problem was at this place:
agent {docker{image 'ebiwd/alpine-ssh'}}
When I changed it to a previous, pinned version, like:
agent {docker{image 'ebiwd/alpine-ssh:3.13'}}
everything started working.
I am new to Docker and CI/CD.
I am using a VPS with Ubuntu 18.04.
The Docker setup for the project runs locally and works fine.
I don't quite understand why the server is trying to reach the Docker daemon over http rather than tcp.
override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd
docker service status
daemon.json
{ "storage-driver":"overlay" }
gitlab-ci.yml
image: docker/compose:latest
services:
  - docker:dind
stages:
  - deploy
variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
deploy:
  stage: deploy
  only:
    - master
  tags:
    - deployment
  script:
    # - export DOCKER_HOST="tcp://127.0.0.1:2375"
    - docker-compose stop || true
    - docker-compose up -d
    - docker ps
  environment:
    name: production
Error
Set the DOCKER_HOST variable. When using the docker:dind service, the default hostname for the daemon is the name of the service, docker.
variables:
  DOCKER_HOST: "tcp://docker:2375"
You must also have set up your GitLab runner to allow privileged containers.
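For reference, the privileged flag lives in the runner's config.toml; a sketch of the relevant section (your executor settings may differ):
[[runners]]
  executor = "docker"
  [runners.docker]
    privileged = true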
Docker needs root permission to be accessed. If you want to run docker commands or docker-compose as a regular user, you need to add your user to the docker group:
sudo usermod -a -G docker yourUserName
By doing that, you can bring up your services and do other Docker work with your regular user. If you don't want to add your user to the docker group, you need to prefix sudo on every docker command you run:
sudo docker-compose up -d
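Note that the group change only takes effect for new login sessions. To pick it up in the current shell without logging out, you can start a subshell with the new group and verify:
newgrp docker
docker ps    # should now work without sudo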
I have a problem when I run a mongo image with docker-compose.yml. I need to encrypt my data because it is very sensitive. My docker-compose.yml is:
version: '3'
services:
  mongo:
    image: "mongo"
    command: ["mongod", "--enableEncryption", "--encryptionKeyFile", "/data/db/mongodb-keyfile"]
    ports:
      - "27017:27017"
    volumes:
      - $PWD/data:/data/db
I checked that the mongodb-keyfile exists in data/db: ok, no problem there. But when I build the file and bring the image up, the command is:
"docker-entrypoint.sh mongod --enableEncryption --encryptionKeyFile /data/db/mongodb-keyfile"
The status:
About a minute ago Exited (2) About a minute ago
The logs show:
Error parsing command line: unrecognised option '--enableEncryption'
I understand the error, but I don't know how to solve it. I am thinking of writing a Dockerfile based on an Ubuntu (or other Linux) image and installing Mongo with all the necessary configuration myself, unless there is a way to solve this directly.
Please help me, thanks.
According to the documentation, encryption is available in MongoDB Enterprise only, so you need a paid subscription to use it.
For the Docker image of the enterprise version, it says here that you can build it yourself:
Download the Docker build files for MongoDB Enterprise.
Set MONGODB_VERSION to your major version of choice.
export MONGODB_VERSION=4.0
curl -O --remote-name-all https://raw.githubusercontent.com/docker-library/mongo/master/$MONGODB_VERSION/{Dockerfile,docker-entrypoint.sh}
Build the Docker container.
Use the downloaded build files to create a Docker container image wrapped around MongoDB Enterprise. Set DOCKER_USERNAME to your Docker Hub username.
export DOCKER_USERNAME=username
chmod 755 ./docker-entrypoint.sh
docker build --build-arg MONGO_PACKAGE=mongodb-enterprise --build-arg MONGO_REPO=repo.mongodb.com -t $DOCKER_USERNAME/mongo-enterprise:$MONGODB_VERSION .
Test your image.
The following commands run mongod locally in a Docker container and check the version.
docker run --name mymongo -itd $DOCKER_USERNAME/mongo-enterprise:$MONGODB_VERSION
docker exec -it mymongo /usr/bin/mongo --eval "db.version()"
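Once the enterprise image is built, the compose file from the question should work unchanged. For a quick manual test of the encryption flags (a sketch, assuming the keyfile already exists under $PWD/data and has restrictive permissions, e.g. chmod 600):
docker run --name mymongo-enc -d \
  -v "$PWD/data:/data/db" \
  $DOCKER_USERNAME/mongo-enterprise:$MONGODB_VERSION \
  --enableEncryption --encryptionKeyFile /data/db/mongodb-keyfile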
I've been trying to get my setup working with gitlab-ci. I have a simple gitlab-ci.yml file:
build_ubuntu:
  image: ubuntu:14.04
  services:
    - rikorose/gcc-cmake:gcc-5
  stage: build
  script:
    - apt-get update
    - apt-get install -y python3 build-essential curl
    - cmake --version
  tags:
    - linux
I want an Ubuntu 14.04 LTS image with gcc and cmake installed (the apt-get version is too old). If I use it locally (via the docker --link command) everything works, but when the gitlab-ci-runner processes it I get the following warning (which in my case is an error):
Running with gitlab-ci-multi-runner 9.2.0 (adfc387)
on xubuntuci1 (19c6d3ce)
Using Docker executor with image ubuntu:14.04 ...
Starting service rikorose/gcc-cmake:gcc-5 ...
Pulling docker image rikorose/gcc-cmake:gcc-5 ...
Using docker image rikorose/gcc-cmake:gcc-5 ID=sha256:ef2ac00b36e638897a2046c954e89ea953cfd5c257bf60103e32880e88299608 for rikorose/gcc-cmake service...
Waiting for services to be up and running...
*** WARNING: Service runner-19c6d3ce-project-54-concurrent-0-rikorose__gcc-cmake probably didn't start properly.
Error response from daemon: Cannot link to a non running container: /runner-19c6d3ce-project-54-concurrent-0-rikorose__gcc-cmake AS /runner-19c6d3ce-project-54-concurrent-0-rikorose__gcc-cmake-wait-for-service/runner-19c6d3ce-project-54-concurrent-0-rikorose__gcc-cmake
Does anybody know how I can fix this?
Thanks in advance
Tonka
You must start the gitlab-runner container with
--privileged true
but that is not enough. Any runner containers that are spun up by GitLab after registering need to be privileged too, so you need to get into the gitlab-runner container and edit its config:
docker exec -it runner /bin/bash
nano /etc/gitlab-runner/config.toml
and change the privileged flag from false to true:
privileged = true
That will solve the problem!
Note: you can also mount config.toml as a volume on the container; then you won't have to log into the container to change privileged to true, because you can preconfigure the container before running it.
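For example, the usual way to run the runner with its configuration directory mounted (host paths are illustrative):
docker run -d --name gitlab-runner --restart always \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest
With that in place you can set privileged = true in /srv/gitlab-runner/config/config.toml on the host before the runner ever starts.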
In my case, I had to add:
variables:
  DOCKER_TLS_CERTDIR: ""
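For context: docker:dind images from version 19.03 onwards generate TLS certificates and listen on port 2376 by default, which breaks jobs expecting a plain tcp daemon on 2375; setting DOCKER_TLS_CERTDIR to an empty string disables that. Combined with the DOCKER_HOST fix above, a sketch:
variables:
  DOCKER_HOST: "tcp://docker:2375"
  DOCKER_TLS_CERTDIR: ""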
I am building a docker container with a nodejs application, which is built from meteorJS. A shell runner is used for the build (meteor build /opt/project/build/core --directory), as this is all done in GitLab CI.
build:
  stage: build
  tags:
    - deploy
  before_script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - meteor npm install
    - meteor build /opt/project/build/core --directory
  script:
    - cd /opt/project/build/core/bundle
    - docker build -t $CI_REGISTRY_IMAGE:latest .
So the files of the application are now at /opt/project/build/core. Now I want to copy those files into another docker image (project-e2e:latest).
I tried to do
docker cp /opt/project/build/core/bundle project-e2e:latest/opt/project/build/core
But this gives me the error
Error response from daemon: No such container: project-e2e
But as I can see, the container is running:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a238132e37a2 project-e2e:latest "/bin/bash" 14 hours ago Up 14 hours clever_kirch
Maybe the problem is that I'm trying to copy out of the shell runner docker image while the target project-e2e is 'outside'?
If you want to get at the files generated inside a container, you can copy them out using docker cp:
docker cp nightwatch:/opt/project/build/core/your_file <your_local_path>
Basically the pattern is:
docker cp <source> <target>
If the source/target is a container, you have to use:
<container_name>:<path_inside_container>
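Applied to the question: project-e2e is the image tag, not a container name, which is why Docker reports "No such container". Going by the docker ps output above, the running container is named clever_kirch, so the copy would be:
docker cp /opt/project/build/core/bundle clever_kirch:/opt/project/build/core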