I am using an Azure DevOps pipeline to build a Docker image for my ASP.NET web application, and I have to use a self-hosted agent as the build and deployment server. Each time the CI pipeline runs, new images are created and pushed to the Docker registry. The problem is that the built images also stay on the agent, so after a while the agent runs low on disk space and I have to delete the old images manually.
How can I delete Docker images after pushing them to the registry during the CI pipeline?
Please check the attached snapshot.
After pushing the image, add a command line step to delete it:
- task: CmdLine@2
  inputs:
    script: 'docker rmi -f IMAGE:TAG'
Or, a more destructive option:
- task: CmdLine@2
  inputs:
    script: 'docker system prune -a --force'
Run
docker rmi -f image-name
which will forcefully remove the image after you have pushed it to the registry.
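For example, a minimal sketch as a pipeline step; the $(imageRepository) variable is a made-up placeholder, so use whatever name and tag your build step actually produced:
- task: CmdLine@2
  displayName: Remove local image after push
  inputs:
    script: 'docker rmi -f $(imageRepository):$(Build.BuildId)'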
After you push the image to the registry, add a cmd task with the command to remove the image:
docker rmi [OPTIONS] IMAGE [IMAGE...]
For example:
docker rmi test1:latest
How about using docker system prune? By default this removes all dangling images from the system, but running it with -a takes care of removing any unused images as well.
Please refer to the official documentation here.
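If you want to keep recently built layers around for caching, a sketch of a softer variant as a pipeline step; the 24-hour retention window is only an assumption about how often you build:
- task: CmdLine@2
  displayName: Prune unused images older than 24h
  inputs:
    script: 'docker image prune -a --force --filter "until=24h"'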
We need to build Docker images using a self-hosted Linux agent which is deployed as a Docker container (in Azure Container Instances).
As of now, the agent is an Ubuntu image. To enable building images inside this container I thought of using the Kaniko image; however, I haven't figured out how to run the Kaniko image without executing kaniko itself right away (we need to run the DevOps agent primarily and run kaniko on demand).
Any hints? Or better ideas on how to build Docker images inside a running Docker container?
Solved with the following code; however, Kaniko does not work as expected when running inside my container (I tested the same parameters with kaniko inside my container and in the default container, and in my container it does not work: it cannot authenticate to ACR).
I might end up with the VMSS DevOps agent...
FROM whatever-base-image
...
COPY --from=gcr.io/kaniko-project/executor /kaniko/executor /kaniko/executor
Ref: https://github.com/GoogleContainerTools/kaniko/issues/2058#issuecomment-1104666901
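For reference, a rough sketch of how the copied executor might then be invoked from a pipeline script step inside the agent container; the registry name, repository, and credential variable are placeholders, kaniko reads registry credentials from $DOCKER_CONFIG/config.json, and it may require --force when it detects it is not running inside its official image:
# hypothetical script step inside the agent container; all names are placeholders
export DOCKER_CONFIG=/kaniko/.docker
mkdir -p "$DOCKER_CONFIG"
# write ACR credentials in Docker config.json format ("auth" is base64 of user:password)
printf '{"auths": {"myregistry.azurecr.io": {"auth": "%s"}}}' "$ACR_AUTH_BASE64" > "$DOCKER_CONFIG/config.json"
/kaniko/executor \
  --context "$(Build.SourcesDirectory)" \
  --dockerfile Dockerfile \
  --destination "myregistry.azurecr.io/myrepo/myimage:latest" \
  --force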
The Azure documentation (https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/build/docker?view=azure-devops) does not specify how to run a Docker container in an Azure pipeline.
We can use the Docker@2 task to build/push Docker images, but it does not have a command to run a container. Looking at the source code of older versions of the Docker task, I can see there used to be a run command, but those versions are now deprecated and there is no documentation to be found.
I also followed the doc: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/container-phases?view=azure-devops
With the following YAML I was able to pull a Docker image which was previously pushed to ACR (my-acr is a service connection I added via project settings):
pool:
  vmImage: 'ubuntu-16.04'

container:
  image: somerepo/rnd-hello:latest
  endpoint: my-acr

steps:
  - script: printenv
But I cannot get the container to run.
Apparently the configuration mentioned in the question will pull the image and run the step (in this case the printenv command in the script) inside the container. A temporary working directory is mounted automatically and the step runs inside that directory.
However, this will not run the container itself (the CMD command defined in the Dockerfile will not be executed).
In order to run the container itself, we have to log in to the Docker registry with the built-in Docker@2 task and then manually execute docker run as a script. Here is an example:
trigger: none

jobs:
  - job: RunTest
    workspace:
      clean: all
    pool:
      vmImage: 'ubuntu-latest'
    steps:
      - task: Docker@2
        displayName: Login to ACR
        inputs:
          command: login
          containerRegistry: my-acr
      - script: |
          docker run my-registry.azurecr.io/somerepo/rnd-hello:latest
If you want, you can simply use a shell command to execute docker run and rely on that for all the further steps in your pipeline; you don't need to use the Docker tasks in Pipelines to be able to communicate with the daemon.
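For instance, a minimal sketch of that approach, reusing the image name from the example above; the container name is made up:
- script: |
    # start the container in the background so later steps can interact with it
    docker run -d --name rnd-hello my-registry.azurecr.io/somerepo/rnd-hello:latest
  displayName: Run container
- script: |
    # example of a later step using the same daemon
    docker logs rnd-hello
  displayName: Show container logs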
Another solution would be using Azure Container Registry for running a container, but that seems like a last resort in case something goes wrong with Pipelines.
When pushing a Docker image with a modified tag (to contain the registry) to the GitLab integrated registry, I get an access denied error.
The GitLab registry is used per project. Once the registry is enabled for a project, there is a hint on how to push images to the registry at https://gitlab.mydomain.com/**path/to/project**/container_registry.
The problem was solved when the full path was included in the tag name.
When I changed the tag to [registryUrl]:[registryPort]/path/to/project/[imageNameWithTags], I was able to push to the repository/registry.
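For example, a sketch of the full sequence; the registry host, port, and project path below are placeholders, so use whatever your project's /container_registry page shows:
docker login gitlab.mydomain.com:4567
docker build -t gitlab.mydomain.com:4567/path/to/project/my-image:latest .
# or retag an already-built image
docker tag my-image:latest gitlab.mydomain.com:4567/path/to/project/my-image:latest
docker push gitlab.mydomain.com:4567/path/to/project/my-image:latest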
Indeed, you need to do docker login ... as described on the /container_registry page.
You can also rely on some of GitLab's predefined environment variables to make the code generic and reuse it in many projects.
Here is an example of doing it in .gitlab-ci.yml:
build-image:
  stage: build
  image: docker:latest
  services:
    - name: docker:dind
  script:
    - docker build -t $CI_REGISTRY_IMAGE .
    - docker login -u $CI_REGISTRY_USER -p "$CI_JOB_TOKEN" $CI_REGISTRY
    - docker push $CI_REGISTRY_IMAGE
See the full example in one of our projects.
I am using GitLab's pipeline for CI and CD to build images for my projects.
In every job there are configurations to be set, like image and stage, but I can't wrap my head around what services are. Can someone explain their functionality? Thanks.
Here's a code snippet I use that I found:
build-run:
  image: docker:latest
  stage: build
  services:
    - docker:dind
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker build -t "$CI_REGISTRY_IMAGE/my-project:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE/my-project:$CI_COMMIT_SHA"
  cache:
    untracked: true
  environment: build
The documentation says:
The services keyword defines just another Docker image that is run during your job and is linked to the Docker image that the image keyword defines. This allows you to access the service image during build time.
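In the snippet above, that means the docker:dind service container runs the Docker daemon, and the docker CLI in the job container talks to it over the network. A minimal sketch that makes the linkage explicit; the DOCKER_HOST and DOCKER_TLS_CERTDIR values are assumptions for a non-TLS dind setup:
build-run:
  image: docker:latest
  stage: build
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375   # "docker" is the hostname of the dind service container
    DOCKER_TLS_CERTDIR: ""           # disable TLS for this sketch
  script:
    - docker info                    # served by the daemon running in the service container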
I would like to build and push Docker images to my local Nexus repo with GitLab CI.
This is my current CI file:
image: docker:latest

services:
  - docker:dind

before_script:
  - docker info
  - docker login -u some_user -p nexus-rfit some_host

stages:
  - build

build-deploy-ubuntu-image:
  stage: build
  script:
    - docker build -t some_host/dev-image:ubuntu ./ubuntu/
    - docker push some_host/dev-image:ubuntu
  only:
    - master
  when: manual
I also have a job for an Alpine Docker image, but when I try to run either of them, it fails with the following error:
Checking out 13102ac4 as master...
Skipping Git submodules setup
$ docker info
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
ERROR: Job failed: exit code 1
So technically the docker daemon in the image isn't running, but I have no idea why.
The GitLab folks have a reference in their docs about using docker build inside Docker-based jobs: https://docs.gitlab.com/ce/ci/docker/using_docker_build.html#use-docker-in-docker-executor. Since you seem to have everything in place (i.e. the right image for the job and the additional docker:dind service), it's most likely a runner-config issue.
If you look at the second step in the docs:
Register GitLab Runner from the command line to use docker and privileged mode:
[...]
Notice that it uses privileged mode to start the build and service containers. If you want to use docker-in-docker mode, you always have to use privileged = true in your Docker containers.
Probably you're using a runner that was not configured in privileged mode and hence can't properly run the docker daemon inside. You can directly edit the /etc/gitlab-runner/config.toml on your registered runner to add that option.
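As a rough sketch, the relevant part of config.toml would look something like this; the runner name and URL are placeholders:
[[runners]]
  name = "my-docker-runner"
  url = "https://gitlab.example.com/"
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    privileged = true   # required for docker-in-docker builds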
(Also, read the section in the docs for more info about the performance impact of the storage driver you choose/your runner supports when using dind.)
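For example, the docs suggest selecting the storage driver explicitly via a job variable; overlay2 is usually the fastest choice when the runner's kernel supports it:
variables:
  DOCKER_DRIVER: overlay2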