How to run a docker container in Azure pipeline?

The Azure documentation (https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/build/docker?view=azure-devops) does not specify how to run a Docker container in an Azure pipeline.
We can use the Docker@2 task to build and push Docker images, but it does not have a command to run a container. Looking at the source code of older versions of the Docker task, I can see there used to be a run command, but those versions are now deprecated and there is no documentation to be found.
I also followed the doc: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/container-phases?view=azure-devops
With the following YAML I was able to pull a Docker image that was previously pushed to ACR (my-acr is a service connection I added via project settings):
pool:
  vmImage: 'ubuntu-16.04'

container:
  image: somerepo/rnd-hello:latest
  endpoint: my-acr

steps:
- script: printenv
But I cannot get the container to run.

Apparently the configuration mentioned in the question will pull the image and run the step (in this case the printenv command in the script) inside the container. A temporary working directory is mounted automatically and the step runs inside that directory.
However, this will not run the container itself (the CMD defined in the Dockerfile will not be executed).
In order to run the container itself, we have to log in to the Docker registry with the built-in Docker@2 task and then manually execute docker run as a script. Here is an example:
trigger: none

jobs:
- job: RunTest
  workspace:
    clean: all
  pool:
    vmImage: 'ubuntu-latest'
  steps:
  - task: Docker@2
    displayName: Login to ACR
    inputs:
      command: login
      containerRegistry: my-acr
  - script: |
      docker run my-registry.azurecr.io/somerepo/rnd-hello:latest

If you want, you can simply use a shell command to execute docker run and rely on that for all the further steps in your pipeline. You don't need to use the Docker tasks in Pipelines to be able to communicate with the daemon.
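For example, a minimal sketch of a script-only approach (the registry host and image path are the placeholders from the example above; the $(acrUsername)/$(acrPassword) secret variables are hypothetical names, not from the original question):

steps:
- script: |
    # Log in with credentials stored as secret pipeline variables (hypothetical names)
    echo "$(acrPassword)" | docker login my-registry.azurecr.io -u "$(acrUsername)" --password-stdin
    # Run the container; this executes the image's CMD
    docker run --rm my-registry.azurecr.io/somerepo/rnd-hello:latest
  displayName: Run container with the plain docker CLI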
Another solution would be to run the container in Azure Container Registry itself (ACR Tasks), but that seems like a last resort in case something goes wrong with Pipelines.
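If you do go that route, ACR Tasks can execute a container on demand with az acr run; a rough sketch (the registry name is a placeholder, and $Registry is expanded by ACR to the registry's login server):

az acr run --registry my-registry --cmd '$Registry/somerepo/rnd-hello:latest' /dev/null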

Related

Running a Docker container in an Azure pipeline on a self-hosted agent that itself runs in a container

I am trying to pull a Docker container in my Azure pipeline. The pipeline is running on a self-hosted agent, which itself runs in a Docker container. I get the following error:
Is it possible to run the container in the pipeline when the pipeline itself runs on a containerized self-hosted agent?
Pipeline YAML:
# Node.js
# Build a general Node.js project with npm.
# Add steps that analyze code, save build artifacts, deploy, and more:
# https://learn.microsoft.com/azure/devops/pipelines/languages/javascript

trigger:
- master

resources:
  containers:
  - container: qmate
    image: qmate.int.repositories.cloud.sap/qmate-executor:latest

pool:
  vmImage: ubuntu-latest
  name: SYS-DEV-Self-hosted
  demands:
  - agent.name -equals SYSDEV-agent

steps:
- task: NodeTool@0
  inputs:
    versionSpec: '15.x'
  displayName: 'Install Node.js'

- task: DockerInstaller@0
  inputs:
    dockerVersion: '17.09.0-ce'

- script: docker pull qmate
  workingDirectory: ./
  displayName: 'Docker Pull'

- script: |
    cd tests/QmateE2E/regression
    npm install
    npx wdio config.js
  displayName: 'npm install and build'
You may configure the self-hosted agent inside the Docker container itself. You don't need to run the Docker container in the pipeline: install the self-hosted agent in the Docker instance, and then register that container as a self-hosted agent in the agent pool.
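A rough sketch of that setup, following Microsoft's "run a self-hosted agent in Docker" guide (the AZP_* variables come from that guide; the organization URL, PAT, and agent image are placeholders, the image being one you build from the Dockerfile in the guide):

docker run -e AZP_URL=https://dev.azure.com/yourOrg \
           -e AZP_TOKEN=<your-PAT> \
           -e AZP_POOL=SYS-DEV-Self-hosted \
           -e AZP_AGENT_NAME=SYSDEV-agent \
           your-agent-image:latest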
You can specify multiple containers to run with container jobs, if you want another container to interact with. Any container you specify on the pipeline is pulled and started automatically by Azure DevOps. I would normally specify the container to run on in a top-level container:, or under the specific job if multiple jobs are present.
The safe option, in case more containers are added, is to set target: qmate on each step that should run in the container.
For the error you had here: for steps that interact with Docker, like docker build, you can also set target: host on the specific task (Azure DevOps mounts enough into the container that most of the context is shared). In this case, the image you are trying to pull was likely already pulled when the pipeline started. A sketch of both targets follows.
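A rough sketch of how those targets could look (the step contents are illustrative, not from the original pipeline):

resources:
  containers:
  - container: qmate
    image: qmate.int.repositories.cloud.sap/qmate-executor:latest

steps:
- script: npx wdio config.js
  target: qmate   # runs inside the qmate container
- script: docker build -t myimage .
  target: host    # runs on the agent host, next to the Docker daemon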

How to run a docker container in Azure DevOps?

I'm currently playing around with Docker containers and Azure DevOps, with the goal of running a couple of tests from them.
This is what I have done so far:
I have created a Dockerfile in my repo.
I have created a pipeline that builds and pushes an image to a container registry.
I have checked that the image exists in the container registry.
I have started on a new release pipeline with the following tasks:
A login task:
steps:
- task: Docker@2
  displayName: Login
  inputs:
    containerRegistry: nameOfMyRegistry
    command: login
A run task:
steps:
- task: Docker@2
  displayName: 'Run tests'
  inputs:
    containerRegistry: nameOfRegistry
    repository: nameOfRepository
    command: run
    arguments: 'nameOfImage -p 8089:8089 -f tests.py --run-time 30s -u 1 -r 1'
But after I run this I get the following error:
2021-04-26T11:39:38.9204965Z ##[error]Unable to find image 'nameOfMyImage:latest' locally
2021-04-26T11:39:38.9228444Z ##[error]docker: Error response from daemon: manifest for nameOfMyImage:latest not found: manifest unknown: manifest tagged by "latest" is not found.
So I'm not sure if I'm missing something. I put in all the information for my Azure container registry, so I thought it would just get the image from there, but it seems it can't find it.
I know I got answers saying you can't use run with the Docker@2 task, but I actually managed to get it to run now. The problem was that I used the wrong image name. I had to write it like this:
nameOfRegistry.azurecr.io/nameOfRepository:tag
So just a mistake on my part, but I will leave this here in case someone else makes the same one.
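For reference, the run task from the question with the fully qualified image name swapped into the arguments (all names are still the question's placeholders):

steps:
- task: Docker@2
  displayName: 'Run tests'
  inputs:
    containerRegistry: nameOfRegistry
    repository: nameOfRepository
    command: run
    arguments: 'nameOfRegistry.azurecr.io/nameOfRepository:tag -p 8089:8089 -f tests.py --run-time 30s -u 1 -r 1'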
Nothing complex: it looks like the task's command input supports only buildAndPush, build, push, login, and logout; it doesn't support run (reference).
Something like this with a script should work (reference):
resources:
  containers:
  - container: builder
    image: ubuntu:18.04

steps:
- script: echo "I can run inside the container (it starts by default)"
  target:
    container: builder
There is no run option in the specification:
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/build/docker?view=azure-devops#task-inputs
To build an application and run tests right after the build, you can use the following commands:
steps:
- task: Docker@2
  displayName: Login to ACR
  inputs:
    command: login
    containerRegistry: dockerRegistryServiceConnection1
- task: Docker@2
  displayName: Build
  inputs:
    command: build
    repository: contosoRepository
    tags: tag1
If you want to run tests in a Docker container, you could use:
Container Structure Tests: https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/test/container-structure-test-task?view=azure-devops&tabs=yaml
Azure Container Instances: https://marketplace.visualstudio.com/items?itemName=DanielMeixner.de-danielmeixner-anycode&targetId=5467da56-2ffa-43cd-87b7-0b2c6da5b5af
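For Container Structure Tests, the checks themselves live in a YAML config file that the task points to; a minimal illustrative config (the node version check is made up for the example):

schemaVersion: 2.0.0
commandTests:
  - name: "node is installed"
    command: "node"
    args: ["--version"]
    expectedOutput: ["v.*"]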

Failing to rename a Docker image in Azure Pipelines

I am trying to rename the built image, using this task:
steps:
- task: Docker@0
  displayName: 'Run a Docker TAG rename'
  inputs:
    containerregistrytype: 'Container Registry'
    dockerRegistryConnection: 'docker hub'
    action: 'Run a Docker command'
    customCommand: 'tag azuretp:latest (my docker hub account)/dockerhub:myfirstpush'
but it fails with this error:
"C:\Program Files\Docker\docker.exe" tag azuretp:latest ***/dockerhub:myfirstpush
Error response from daemon: No such image: azuretp:latest
Running locally, I am able to rename it using the command:
docker tag trfoutwsrv:dev (my docker hub account)/dockerhub:myfirstpush
In Azure Pipeline Services the image name changes with each build. I already tried azuretp:{Build.BuildNumber}, but that variable doesn't exist at task run time.
The goal is to rename the image so it can then be pushed to my Docker Hub repository.
I already split the original task into a rename and a push, but now I am stuck on the rename.
In this case the solution was to use azuretp:$(Build.BuildNumber).
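In task form, that would be something like this (same placeholders as in the question):

steps:
- task: Docker@0
  displayName: 'Run a Docker TAG rename'
  inputs:
    containerregistrytype: 'Container Registry'
    dockerRegistryConnection: 'docker hub'
    action: 'Run a Docker command'
    customCommand: 'tag azuretp:$(Build.BuildNumber) (my docker hub account)/dockerhub:myfirstpush'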

Are containers available between stages in GitLab CI

Is a container that is used in the build stage accessible in the next stage? I have YAML like this:
build_backend:
  image: web-app
  services:
    - mysql:5.7
  stage: build
  script:
    - make build

test_frontend:
  image: node:8
  stage: test
  script:
    - make run-tests
My tests, which are triggered by make run-tests, need to make HTTP requests against the backend container, if possible.
I was trying to avoid building a new container and pushing it to a registry only to pull it down again, but maybe there is no other way? If I did this, would my web-app container still have access to the mysql container if I added it as a service in the test_frontend job?
No, containers are not available between stages. Job artifacts (i.e. files) are passed between stages by default and can also be passed explicitly between jobs.
If you need to run tests against a container, you should indeed pull it down again from a registry. Then you can use the Docker-in-Docker (dind) service to run your tests.
I think this blog post explains a similar use case nicely. The testing job described there is the following:
test:
  stage: test
  script:
    - docker run -d --env-file=.postgres-env postgres:9.5
    - docker run --env-file=.environment --link=postgres:db $CONTAINER_TEST_IMAGE nosetests --with-coverage --cover-erase --cover-package=${CI_PROJECT_NAME} --cover-html
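For completeness, a sketch of the matching build job that pushes $CONTAINER_TEST_IMAGE first so the test job can pull it (the variable definition follows the common GitLab pattern; CI_REGISTRY* and CI_COMMIT_REF_NAME are GitLab's predefined variables):

variables:
  CONTAINER_TEST_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME

build:
  stage: build
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker build -t $CONTAINER_TEST_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE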

Build and push docker images with GitLab CI

I would like to build and push Docker images to my local Nexus repo with GitLab CI.
This is my current CI file:
image: docker:latest

services:
  - docker:dind

before_script:
  - docker info
  - docker login -u some_user -p nexus-rfit some_host

stages:
  - build

build-deploy-ubuntu-image:
  stage: build
  script:
    - docker build -t some_host/dev-image:ubuntu ./ubuntu/
    - docker push some_host/dev-image:ubuntu
  only:
    - master
  when: manual
I also have a job for an Alpine Docker image, but when I try to run either of them, it fails with the following error:
Checking out 13102ac4 as master...
Skipping Git submodules setup
$ docker info
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
ERROR: Job failed: exit code 1
So technically the docker daemon in the image isn't running, but I have no idea why.
The GitLab folks have a reference in their docs about using docker build inside Docker-based jobs: https://docs.gitlab.com/ce/ci/docker/using_docker_build.html#use-docker-in-docker-executor. Since you seem to have everything in place (i.e. the right image for the job and the additional docker:dind service), it's most likely a runner-configuration issue.
If you look at the second step in the docs:
Register GitLab Runner from the command line to use docker and privileged mode:
[...]
Notice that it's using the privileged mode to start the build and service containers. If you want to use docker-in-docker mode, you always have to use privileged = true in your Docker containers.
Probably you're using a runner that was not configured in privileged mode and hence can't properly run the Docker daemon inside. You can directly edit /etc/gitlab-runner/config.toml on your registered runner to add that option.
(Also, read the section in the docs for more info about the performance implications of the storage driver you choose/your runner supports when using dind.)
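The relevant part of /etc/gitlab-runner/config.toml would look roughly like this (other fields elided; the runner name is a placeholder):

[[runners]]
  name = "my-docker-runner"
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    privileged = true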
