I am new to Docker and CI/CD.
I am using a VPS with Ubuntu 18.04.
The project's Docker setup runs locally and works fine.
I don't quite understand why the server is trying to reach the Docker daemon over HTTP instead of TCP.
override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd
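For context: the override above starts dockerd with only its default unix socket. If the goal were to reach the host daemon over TCP (as the commented-out DOCKER_HOST export in the job below suggests), a typical drop-in would add -H listen flags, roughly like this (an illustrative sketch, not the poster's exact file):
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://127.0.0.1:2375
# apply the drop-in afterwards
sudo systemctl daemon-reload
sudo systemctl restart docker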
docker service status
daemon.json
{ "storage-driver":"overlay" }
.gitlab-ci.yml
image: docker/compose:latest
services:
  - docker:dind
stages:
  - deploy
variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
deploy:
  stage: deploy
  only:
    - master
  tags:
    - deployment
  script:
    # - export DOCKER_HOST="tcp://127.0.0.1:2375"
    - docker-compose stop || true
    - docker-compose up -d
    - docker ps
  environment:
    name: production
Error
Set the DOCKER_HOST variable. When using the docker:dind service, the default hostname for the daemon is the name of the service, docker.
variables:
  DOCKER_HOST: "tcp://docker:2375"
You must also have set up your GitLab Runner to enable privileged containers.
Docker needs root permissions to be accessed. If you want to run docker or docker-compose commands as a regular user, you need to add your user to the docker group, like:
sudo usermod -a -G docker yourUserName
By doing that, you can bring up your services and run other Docker tasks as your regular user. If you don't want to add your user to the docker group, you have to prefix every docker command with sudo:
sudo docker-compose up -d
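As a quick check (a sketch, not part of the original answer), you can pick up the new group membership without logging out and verify that Docker works without sudo:
# start a shell with the docker group active (or log out and back in)
newgrp docker
# should now succeed without sudo
docker ps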
Related
I have set up a simple .gitlab-ci.yml file which should be able to run the docker service:
docker:
  image: google/cloud-sdk:latest
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_HOST: tcp://127.0.0.1:2375
  services:
    - docker:dind
  tags:
    - docker
  script:
    - docker pull buster-slim
However it fails as:
https://gitlab.com/knyttl/runnerdemo/-/jobs/932204050
2020-12-25T19:31:04.558361767Z time="2020-12-25T19:31:04.558195638Z" level=info msg="API listen on [::]:2375"
2020-12-25T19:31:04.558522591Z time="2020-12-25T19:31:04.558447616Z" level=info msg="API listen on /var/run/docker.sock"
The service apparently correctly starts, but then it doesn't work:
Cannot connect to the Docker daemon at tcp://127.0.0.1:2375. Is the docker daemon running?
The problem comes from the fact that the Docker daemon is not part of the google/cloud-sdk image that you specify for this job. You should create your own image with google/cloud-sdk as the base image. You can also install and start Docker in the before_script section of the job. See the doc on Docker Hub for the image you use:
Installing additional components
By default, all gcloud components are installed on the default images (google/cloud-sdk:latest and google/cloud-sdk:VERSION).
The google/cloud-sdk:slim and google/cloud-sdk:alpine images do not contain additional components pre-installed.
You can extend these images by following the instructions below:
Debian-based images
cd debian_slim/
docker build --build-arg CLOUD_SDK_VERSION=159.0.0 \
    --build-arg INSTALL_COMPONENTS="google-cloud-sdk-datastore-emulator" \
    -t my-cloud-sdk-docker:slim .
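For the first option, a minimal custom image could look like the sketch below. It assumes the Debian-based google/cloud-sdk image and installs the docker.io package so the job has a Docker client that can talk to the docker:dind service (the image tag in the build command is a placeholder):
# Dockerfile (illustrative)
FROM google/cloud-sdk:latest
# docker.io ships the Docker CLI on Debian-based images
RUN apt-get update \
    && apt-get install -y --no-install-recommends docker.io \
    && rm -rf /var/lib/apt/lists/*
# build it once and reference it as the job image
docker build -t my-cloud-sdk-with-docker .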
The image which can use DinD is docker:dind, not necessarily google/cloud-sdk:latest, so your .gitlab-ci.yml would look like:
docker:
  image: docker:dind
  variables:
    DOCKER_DRIVER: overlay2
  services:
    - docker:dind
  tags:
    - docker
  script:
    - docker pull buster-slim
    # ...
    # I don't know what needs to be built...
You can check this tutorial for a step-by-step recipe.
In fact, the only reason why this was not working was:
DOCKER_HOST: tcp://docker:2375
The dind service CAN run alongside the cloud-sdk image, but it has to be addressed via its service hostname (docker), not 127.0.0.1.
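For reference, the job from the question with only that change applied would look roughly like this (a sketch based on the answer above; note that the fully qualified image name for the pull is debian:buster-slim, while the question wrote buster-slim):
docker:
  image: google/cloud-sdk:latest
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_HOST: tcp://docker:2375   # the dind service is reachable by its service name
  services:
    - docker:dind
  tags:
    - docker
  script:
    - docker pull debian:buster-slim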
I am running an Azure Container Job, where I spin up a different Docker container manually like this:
jobs:
  - job: RunIntegrationTests
    pool:
      vmImage: "ubuntu-18.04"
    container:
      image: mynamespace/frontend_image:latest
      endpoint: My Docker Hub Endpoint
    steps:
      - script: |
          docker run --rm --name backend_container -p 8000:8000 -d backend_image inv server
I have to create the container manually since the image lives in AWS ECR, and the password authentication scheme that Azure provides for it can only be used with a token that expires, so it seems useless. How can I make it so that my_container is reachable from within subsequent steps of my job? I have tried starting my job with:
options: --network mynetwork
And share it with "backend_container", but I get the following error:
docker: Error response from daemon: Container cannot be connected
to network endpoints: mynetwork
This error occurs while starting the "frontend" container, which might be because Azure is trying to attach the container to multiple networks.
To run a container job and attach a container from a custom image to the created network, you can use steps as shown in the example below:
steps:
  - task: DownloadPipelineArtifact@2
    inputs:
      artifactName: my-image.img
      targetPath: images
    target: host # Important, to run this on the host and not in the container
  - bash: |
      docker load -i images/my-image.img
      docker run --rm --name my-container -p 8042:8042 my-image
      # This is not really robust, as we rely on naming conventions in Azure Pipelines
      # But I assume they won't change to a really random name anyway.
      network=$(docker network list --filter name=vsts_network -q)
      docker network connect $network my-container
      docker network inspect $network
    target: host
Note: it's important that these steps run on the host, and not in the container (that is run for the container job). This is done by specifying target: host for the task.
In the example, the container from the custom image can then be addressed as my-container.
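For example (a sketch; the port is taken from the docker run above, while the use of curl and an HTTP endpoint are assumptions), a later step running inside the container job could reach it like this:
- script: |
    # my-container is now on the same Docker network as the job container,
    # so its name resolves directly
    curl -sf http://my-container:8042/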
I ended up not using the container: property altogether, and started all containers manually, so that I can specify the same network:
steps:
  - task: DockerInstaller@0
    displayName: Docker Installer
    inputs:
      dockerVersion: 19.03.8
      releaseType: stable
  - task: Docker@2
    displayName: Login to Docker hub
    inputs:
      command: login
      containerRegistry: My Docker Hub
  - script: |
      docker network create integration_tests_network
      docker run --rm --name backend --network integration_tests_network -p 8000:8000 -d backend-image inv server
      docker run --rm --name frontend -d --network integration_tests_network frontend-image tail -f /dev/null
Subsequent commands are then run against the frontend container with docker exec.
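For example (a sketch; the test command is an assumption about what the frontend image provides):
- script: |
    # run the integration tests inside the already-running frontend container
    docker exec frontend npm run test:integration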
I want to deploy a Docker stack from Azure Pipelines. I have set some variables, and I am referencing these variables in the Docker stack file. However, none of my environment variables are read in the stack file. My question: is there any explanation why I can't read the environment variables in the YAML file?
Below are all my variables
And here is my docker stack configuration
version: "3.1"
services:
postgres:
image: "postgres"
volumes:
- /home/db-postgres:/data/db
environment:
POSTGRES_PASSWORD: ${POSTGRESPASSWORD}
POSTGRES_DB: ${POSTGRESDB}
main:
command: "flask run --host=127.0.0.1"
image: "personal-image"
ports:
- 5000:5000
environment:
SECRET_KEY: ${FLASK_SERIALIZER_SECRET}
JWT_SECRET_KEY: ${FLASK_JWT_SECRET}
FLASK_APP: app.py
MAIL_USERNAME: ${MAIL_USERNAME}
MAIL_PASSWORD: ${MAIL_PASSWORD}
APP_ADMINS: ${APP_ADMINS}
SQLALCHEMY_DATABASE_URI: ${SQLALCHEMY_DATABASE_URI}
From the Azure Pipelines YAML file, I can read the environment variables, though...
What I don't understand is that in another project I'm doing the exact same thing, and everything works fine.
Edit: here is my azure-pipelines.yml script. The agent is a self-hosted EC2 Linux agent:
steps:
  - bash: |
      echo $(DOCKERREGISTRY_PASSWORD) | docker login --username $(DOCKERREGISTRY_USER) --password-stdin
    displayName: log in to Docker Registry
  - bash: |
      sudo service docker start
      sudo docker stack deploy --with-registry-auth --prune --compose-file stack.staging.yml my_cluster_name
    displayName: Deploy Docker Containers
  - bash: |
      sudo docker system prune --volumes -f
    displayName: Clean memory
  - bash: |
      docker logout
      sudo service docker stop
    displayName: logout of Docker Registry
You can check the Agent Specification of the YAML pipeline in the other projects. They might use different agents.
I created a test pipeline and found that the environment variables in the Docker stack file were not substituted on macOS or Ubuntu agents, but it seemed to work on Windows agents.
If you use macOS or Ubuntu agents to run your pipeline, you might need to define the environment variables in the dockerComposeFileArgs field. See below:
- task: DockerCompose@0
  displayName: 'Build services'
  inputs:
    containerregistrytype: 'Container Registry'
    dockerRegistryEndpoint: Dockerhost
    dockerComposeFileArgs: |
      MAIL_USERNAME=$(MAIL_USERNAME)
      MAIL_PASSWORD=$(MAIL_PASSWORD)
      APP_ADMINS=$(APP_ADMINS)
      SQLALCHEMY_DATABASE_URI=$(SQLALCHEMY_DATABASE_URI)
    action: 'Build services'
Update:
For the bash task, you can try using the env field to map the variables. See below:
- bash: |
    sudo docker stack deploy ...
  displayName: 'Bash Script'
  enabled: false
  env:
    MAIL_USERNAME: $(MAIL_USERNAME)
    MAIL_PASSWORD: $(MAIL_PASSWORD)
    APP_ADMINS: $(APP_ADMINS)
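A related caveat: plain sudo resets the environment by default, so if the deploy command keeps its sudo prefix, the mapped variables may still not reach docker stack deploy unless they are preserved, for example with sudo -E (a sketch, not from the original answer):
- bash: |
    # -E preserves the env: variables across sudo
    sudo -E docker stack deploy --with-registry-auth --prune --compose-file stack.staging.yml my_cluster_name
  displayName: Deploy Docker Containers
  env:
    MAIL_USERNAME: $(MAIL_USERNAME)
    MAIL_PASSWORD: $(MAIL_PASSWORD)
    APP_ADMINS: $(APP_ADMINS)
    SQLALCHEMY_DATABASE_URI: $(SQLALCHEMY_DATABASE_URI)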
I'm currently trying to set up a GitLab CI pipeline. I've chosen to go with the Docker-in-Docker setup.
I got my CI pipeline to build and push the Docker image to the GitLab registry, but I cannot seem to deploy it using the following configuration:
.gitlab-ci.yml
image: docker:stable
services:
  - docker:dind
stages:
  - build
  - deploy
variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2
  TEST_IMAGE: registry.gitlab.com/user/repo.nl:$CI_COMMIT_REF_SLUG
  RELEASE_IMAGE: registry.gitlab.com/user/repo.nl:release
before_script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
  - docker info
build:
  stage: build
  tags:
    - build
  script:
    - docker build --pull -t $TEST_IMAGE .
    - docker push $TEST_IMAGE
  only:
    - branches
deploy:
  stage: deploy
  tags:
    - deploy
  script:
    - docker pull $TEST_IMAGE
    - docker tag $TEST_IMAGE $RELEASE_IMAGE
    - docker push $RELEASE_IMAGE
    - docker run -d --name "review-$CI_COMMIT_REF_SLUG" -p "80:80" $RELEASE_IMAGE
  only:
    - master
  when: manual
When I run the deploy action, I actually get the following feedback in my log, but when I check the server, there is no container running.
$ docker run -d --name "review-$CI_COMMIT_REF_SLUG" -p "80:80" $RELEASE_IMAGE
7bd109a8855e985cc751be2eaa284e78ac63a956b08ed8b03d906300a695a375
Job succeeded
I have no clue as to what I am forgetting here. Am I right to expect this method to be correct for deploying containers? What am I missing / doing wrong?
tl;dr: I want to deploy images into production using GitLab CI and a Docker-in-Docker setup; the job succeeds, but there is no container. The goal is to have a running container on the host after deployment.
I found out that I needed to include the Docker socket in the gitlab-runner configuration as well, and not only have it available in the container.
By adding --docker-volumes '/var/run/docker.sock:/var/run/docker.sock' and removing DOCKER_HOST=tcp://docker:2375 I was able to connect to docker on my host system and spawn sibling containers.
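For reference, the relevant part of the runner registration could look like this (a sketch; URL, token, and image are placeholders):
gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "REGISTRATION_TOKEN" \
  --executor docker \
  --docker-image docker:stable \
  --docker-volumes '/var/run/docker.sock:/var/run/docker.sock'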
I would like to build and push Docker images to my local Nexus repo with GitLab CI.
This is my current CI file:
image: docker:latest
services:
  - docker:dind
before_script:
  - docker info
  - docker login -u some_user -p nexus-rfit some_host
stages:
  - build
build-deploy-ubuntu-image:
  stage: build
  script:
    - docker build -t some_host/dev-image:ubuntu ./ubuntu/
    - docker push some_host/dev-image:ubuntu
  only:
    - master
  when: manual
I also have a job for an Alpine Docker image, but when I try to run either of them, it fails with the following error:
Checking out 13102ac4 as master...
Skipping Git submodules setup
$ docker info
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
ERROR: Job failed: exit code 1
So technically the docker daemon in the image isn't running, but I have no idea why.
GitLab folks have a reference on their docs about using docker-build inside docker-based jobs: https://docs.gitlab.com/ce/ci/docker/using_docker_build.html#use-docker-in-docker-executor. Since you seem to have everything in place (i.e. the right image for the job and the additional docker:dind service), it's most likely a runner-config issue.
If you look at the second step in the docs:
Register GitLab Runner from the command line to use docker and privileged mode:
[...]
Notice that it's using the privileged mode to start the build and service containers. If you want to use docker-in-docker mode, you always have to use privileged = true in your Docker containers.
Probably you're using a runner that was not configured in privileged mode and hence can't properly run the docker daemon inside. You can directly edit the /etc/gitlab-runner/config.toml on your registered runner to add that option.
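The relevant part of that file would look roughly like this (an illustrative excerpt; name, url, and token are placeholders):
[[runners]]
  name = "my-dind-runner"
  url = "https://gitlab.com/"
  token = "RUNNER_TOKEN"
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    privileged = true   # required for the docker:dind service to start its daemon
    volumes = ["/cache"]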
(Also, read on the section on the docs for some more info about the performance related to the storage driver you choose/your runner supports when using dind)