I'm having trouble setting up a clean, encapsulated Docker environment for each GitLab CI pipeline.
What I want to achieve:
Each pipeline should run in its own docker environment.
Docker containers started in one job should be present in jobs of a following stage (that use the docker executor).
A sample pipeline could contain the following stages (a rough .gitlab-ci.yml outline follows the list):
startup containers (docker executor)
install some dependencies (docker executor)
run tests (docker executor)
run some other kind of tests (docker executor)
release to docker registry (docker executor)
deploy to kubernetes (Kubernetes executor)
rollback kubernetes (Kubernetes executor)
stop / remove containers (docker executor)
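Sketched as a .gitlab-ci.yml outline, that could look roughly like this (the stage names are placeholders of mine; the individual jobs are omitted):

stages:
  - startup
  - install
  - test
  - other-tests
  - release
  - deploy
  - rollback
  - cleanup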
When I use the docker executor with the docker-in-docker (dind) service, each job runs in a clean environment. But that means Docker containers started in one job won't be accessible in the following one.
When I make use of Docker socket binding, the given sample pipeline could be realized.
But if I understand everything correctly, this can lead to conflicts between different commits running that pipeline.
The Docker socket is passed through from the host, so all containers created within a pipeline are visible on the host and to concurrent pipelines as well.
To prevent naming conflicts, the predefined GitLab environment variable CI_COMMIT_SHA could be appended to each container's name, so that each pipeline creates its own identifiable containers (on the host).
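A sketch of that idea (the job names, image name, and the startup/cleanup stages are assumptions on my side):

startup-containers:
  stage: startup
  script:
    # with socket binding this container is created on the host's Docker daemon
    - docker run -d --name app_${CI_COMMIT_SHA} my-app-image

stop-containers:
  stage: cleanup
  script:
    # remove only the containers belonging to this pipeline
    - docker rm -f app_${CI_COMMIT_SHA}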
But this is a security issue. As the GitLab documentation notes, the command
docker rm -f $(docker ps -a -q)
run in any job would remove all containers, even those outside the pipeline, i.e. on the host, including the GitLab Runner containers.
I've read a lot in the GitLab docs and other sources, but I can't find a solution for setting up a clean, encapsulated Docker environment for a whole pipeline, where containers are accessible between stages but not from the outside (other pipelines). Containers already running on the host should also be safe.
Is there a clean solution to this problem? Or at least reasonable workarounds?
Thanks in advance for your support!
We need to build Docker images using a self-hosted Linux agent which is deployed as a Docker container (in Azure Container Instances).
As of now, the agent is an Ubuntu image. To enable building images inside this container, I thought of using the Kaniko image. However, I haven't figured out how to run the Kaniko image without it executing Kaniko itself right away (we need to run the DevOps agent as the primary process and invoke Kaniko on demand).
Any hints? Or better ideas for how to build Docker images inside a running Docker container?
Solved with the following code; however, Kaniko does not work as expected when running inside my container (I tested the same parameters with Kaniko inside my container and in the default Kaniko container; inside my container it cannot authenticate to ACR).
Might end up with the VMSS DevOps agent...
FROM whatever-base-image
...
COPY --from=gcr.io/kaniko-project/executor /kaniko/executor /kaniko/executor
Ref: https://github.com/GoogleContainerTools/kaniko/issues/2058#issuecomment-1104666901
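With the executor binary baked into the agent image like that, Kaniko can then be invoked on demand from a pipeline step; a rough sketch (the Dockerfile path, build context, and registry/image name are placeholders, and Kaniko expects the registry credentials in /kaniko/.docker/config.json):

# build and push an image from inside the running agent container
/kaniko/executor \
  --dockerfile ./Dockerfile \
  --context . \
  --destination myregistry.azurecr.io/myimage:latest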
I have a very simple .gitlab-ci.yml file:
build-job:
  stage: build
  script:
    - echo "Hello, $GITLAB_USER_LOGIN!"
And I run this pipeline. It uses a shared GitLab runner.
The job is executed in a Docker container, but we haven't specified the Docker executor for the GitLab runner, nor any image. So how does this all work?
Running your CI/CD jobs in Docker containers normally means you register a runner so that all jobs run in Docker containers.
But if you are using GitLab SaaS (GitLab.com), your CI jobs automatically run on runners in the GitLab Build Cloud.
With a Linux runner, that means n1-standard-1 instances with 3.75GB of RAM, CoreOS and the latest Docker Engine installed.
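If you want to control the environment yourself, you can specify an image explicitly in the job; a minimal sketch (alpine is just an arbitrary example here):

build-job:
  image: alpine:latest
  stage: build
  script:
    - echo "Hello, $GITLAB_USER_LOGIN!"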
We are starting to use Docker containers in our Azure Pipelines.
For one scenario we would need to run the Docker container in privileged mode, and I am wondering whether Azure Pipelines supports "privileged" container execution.
Thank you
If the Docker containers are located on Docker Hub or on the local machine, you could run them in privileged mode.
You could try to run the following script: docker run --privileged [image_name]
steps:
- task: Docker@2
  inputs:
    containerRegistry: 'DockerServiceConnectionName'
    command: 'login'
- script: docker run --privileged [image_name]
You could refer to this ticket about Azure Container Instances:
Azure Container Instances does not expose direct access to the underlying infrastructure that hosts container groups. This includes running privileged containers and thus it is not supported currently.
I have problems with my GitLab CI/CD configuration - I'm using the free runners on GitLab itself.
I have a Joomla (test) project using Docker - I'm learning how it works.
I created .gitlab-ci.yml with:
image: docker:latest
services:
- docker:dind
at top of file.
In the test stage I want to run the Docker image created in the build stage.
When I add:
services:
- mariadb:latest
to the test stage, I always get
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
at the docker pull command. Without it, I get an error at the docker run command during the Joomla image initialization, because of the missing MySQL server.
Any help will be appreciated.
If you set
services:
- mariadb:latest
in your test job, this will override the globally defined services. Therefore, the docker daemon is not running during test. This also explains why you do not get the Docker daemon error when you omit the services definition for the test job.
Either specify the docker:dind service also for the test job, or remove the local services definition and add mariadb to your global services definition.
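For the first option, a minimal sketch of the test job (the job name and script lines are placeholders; the MariaDB image also needs a root password variable to start):

test:
  stage: test
  services:
    - docker:dind
    - mariadb:latest
  variables:
    MYSQL_ROOT_PASSWORD: secret   # illustration only
  script:
    - docker info     # the daemon provided by docker:dind is reachable again
    - docker run ...  # start the Joomla image built earlier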
I have a Docker Swarm cluster; it contains 1 master and 3 nodes. When we deploy a container through the Swarm master, e.g. with the command below
docker -H tcp://<master_ip>:5001 run -dt --name swarm-test busybox /bin/sh
Swarm will automatically pick a node and deploy my container. Is there a way to hand-pick a node? E.g. I want to deploy a container on node 1.
Take a look at the Swarm filter docs. You can set various constraints on what node Swarm should pick for any given container. For your case try something like:
docker run ... -e constraint:node==node1 ...
This would start the container on node1.
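Combined with the command from the question, that would look roughly like this (assuming node1 is the name under which that node's Docker daemon is registered with the Swarm manager):

docker -H tcp://<master_ip>:5001 run -dt --name swarm-test \
  -e constraint:node==node1 busybox /bin/sh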