Initializing a container fails when executing useradd in an Azure pipeline

When running my Azure pipeline I run into the error below while trying to initialize a container:
OCI runtime exec failed: exec failed: container_linux.go:367: starting container process caused: exec: "useradd": executable file not found in $PATH: unknown
The snippet of the pipeline where it fails:
resources:
  containers:
  - container: linux2
    image: amazonlinux:latest
    options: --user 0:0

stages:
- stage: build_stage
  pool:
    vmImage: 'ubuntu-latest'
    demands:
    - msbuild
    - visualstudio
    - vstest
  jobs:
  - job: build_job
    container: linux2
    steps:
    - checkout: self
      clean: true
This seems like it could be a permissions issue, but I'm really not sure.

It is because amazonlinux:latest doesn't have the useradd command.
Azure DevOps creates the container from the image, then adds a new user and adds that user to the sudoers file (even if your image doesn't have sudo), and only after that does it run the container.
So the easiest approach is simply to change the image. Alternatively, you can base a custom image on amazonlinux:latest, add useradd to it, and use that image in your pipeline.
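A minimal sketch of such a custom image might look like the following; the assumption here is that shadow-utils is the Amazon Linux package providing useradd:

```dockerfile
# Custom image based on amazonlinux that provides the useradd command
# Azure DevOps needs when it initializes the container.
FROM amazonlinux:latest

# shadow-utils is the Amazon Linux package that ships useradd/groupadd
RUN yum install -y shadow-utils && yum clean all
```

Build and push this image to a registry your pipeline can reach, then reference it in the container resource in place of amazonlinux:latest.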

Related

Running a docker container pulled from Azure Container Registry in Azure DevOps pipeline

I would like to run a Docker container that I pull from Azure Container Registry. All of this I would like to do in an Azure DevOps pipeline.
First I created a sample Node.js app and Dockerized it following this tutorial: https://nodejs.org/en/docs/guides/nodejs-docker-webapp/
Then I wrote my Azure pipeline, which first does build & push and then pull and run.
My pipeline:
stages:
- stage: Build
  displayName: Build and push stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: Docker@2
      displayName: Build and push an image to container registry
      inputs:
        command: buildAndPush
        repository: $(imageRepository)
        dockerfile: $(dockerfilePath)
        containerRegistry: $(dockerRegistryServiceConnection)
        tags: |
          $(tag)
    - task: Docker@2
      displayName: Docker pull
      inputs:
        command: pull
        containerRegistry: $(dockerRegistryServiceConnection)
        arguments: container01.azurecr.io/devopsnodejs:latest
    - task: Docker@2
      displayName: Login to ACR
      inputs:
        command: login
        containerRegistry: $(dockerRegistryServiceConnection)
    - script: |
        docker run -p 49160:8080 -d container01.azurecr.io/devopsnodejs:latest
The pipeline runs every step successfully; the last script step with docker run prints this to the Azure DevOps console:
Generating script.
Script contents:
docker run -p 49160:8080 -d ***/devopsnodejs:latest
========================== Starting Command Output ===========================
/usr/bin/bash --noprofile --norc /home/vsts/work/_temp/b117892d-e34c-484c-ad8c-f99cd0a97e18.sh
7c6c9d548c4be3e4568e56ffc87cca27e698fc53b5ec15a1595cd45fe72dd143
And now the problem is that I cannot access the app, which should respond to a simple GET request with 'Hello World'.
I'm trying to go to localhost:49160 and to curl -i localhost:49160, but I only get curl: (7) Failed to connect to localhost port 49160 after 2258 ms: Connection refused
Also, if I do it locally rather than in Azure Pipelines, i.e. I simply run docker pull container01.azurecr.io/devopsnodejs:latest and docker run -p 49160:8080 -d container01.azurecr.io/devopsnodejs:latest in PowerShell, docker ps shows this container and curl -i localhost:49160 works. Am I able to access this locally, or if I run it in Azure Pipelines will it work only there?
Have you seen this question already, which I think matches your scenario?
Cannot conect to Docker container running in VSTS
You could use a bash script with ssh to log on to the target machine and run docker commands on it. The following is just an example; I have something like this in one of my pipelines.
- task: Bash@3
  displayName: "Deploy image $(imageName)"
  inputs:
    targetType: 'inline'
    failOnStderr: false
    script: |
      sshpass -p '$(localroot-userpassword)' ssh -o StrictHostKeyChecking=no $(localroot-username)@$(remoteHost) "{ sudo docker stop $(imageName) ; sudo docker rm $(imageName) ; sudo docker pull $(imageName) ; sudo docker run ... #add other commands separated by semicolon}"

Gitlab issue while running helm command as - Error: unknown command "sh" for "helm"

I have a package script which needs to run on the alpine/helm image. I have used this before, but for some reason it now always gives me the error Error: unknown command "sh" for "helm"
package:
  <<: *artifacts
  stage: package
  image: alpine/helm
  variables:
    GIT_STRATEGY: none
  script:
    - echo $VERSION
    - helm package ./keycloak --app-version=$VERSION
  artifacts:
    paths:
      - "*.tgz"
Can anybody tell me what the issue is here? I'm not sure why the helm command isn't running as I'd expect.
As explained in the docs, the runner in GitLab is started this way:
- the runner starts the docker container specified in image and uses the entrypoint of this container
- the runner attaches itself to the container
- the runner combines before_script, script and after_script into a single script
- the runner sends the combined script to the container's shell
If you take a look at the entrypoint of the alpine/helm image, you see that the entrypoint is helm, so when the container starts it runs helm. The GitLab runner expects either no entrypoint or an entrypoint that starts a shell, so you get Error: unknown command "sh" for "helm" because there is no running shell.
By overriding the entrypoint we make sure the runner finds a shell in the container which can execute the script.
package:
  stage: package
  image:
    name: alpine/helm
    entrypoint: [""]
  variables:
    GIT_STRATEGY: none
  script:
    - echo $VERSION
    - helm package ./keycloak --app-version=$VERSION
  artifacts:
    paths:
      - "*.tgz"
EDIT:
After rereading the docs, I changed the entrypoint to an empty entrypoint for Docker 17.06 and later (entrypoint: [""]), as this is more concise.

How to run a docker container in azure devops?

I'm currently playing around with Docker containers and Azure DevOps, with the goal of running a couple of tests from a container.
This is what I have done so far:
- created a Dockerfile in my repo
- created a pipeline that builds and pushes an image to the container registry
- checked that the image exists in the container registry
- started on a new release pipeline with the following tasks:
A login task:
steps:
- task: Docker@2
  displayName: Login
  inputs:
    containerRegistry: nameOfMyRegistry
    command: login
A run task:
steps:
- task: Docker@2
  displayName: 'Run tests'
  inputs:
    containerRegistry: nameOfRegistry
    repository: nameOfRepository
    command: run
    arguments: 'nameOfImage -p 8089:8089 -f tests.py --run-time 30s -u 1 -r 1'
But after I run this I get the following error:
2021-04-26T11:39:38.9204965Z ##[error]Unable to find image 'nameOfMyImage:latest' locally
2021-04-26T11:39:38.9228444Z ##[error]docker: Error response from daemon: manifest for nameOfMyImage:latest not found: manifest unknown: manifest tagged by "latest" is not found.
So I'm not sure what I'm missing. I put in all the information for my Azure container registry, so I thought it would just get the image from it, but it seems it can't find it.
I know I got answers saying you can't use run with the Docker@2 task, but I actually managed to get it to run now. The problem was that I used the wrong image name. I had to write it like this:
nameOfRegistry.azurecr.io/nameOfRepository:tag
So just a mistake on my part, but I will leave this here in case someone makes the same one.
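For reference, a sketch of what the corrected run task might look like with the fully-qualified image reference; the registry, repository, and test arguments are the placeholders from the question, not real names:

```yaml
steps:
- task: Docker@2
  displayName: 'Run tests'
  inputs:
    containerRegistry: nameOfRegistry
    command: run
    # the image must be fully qualified: <registry>.azurecr.io/<repository>:<tag>
    arguments: 'nameOfRegistry.azurecr.io/nameOfRepository:tag -f tests.py --run-time 30s -u 1 -r 1'
```

Without the registry host prefix, docker run treats the name as a Docker Hub image and fails with "manifest unknown", which matches the error above.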
Nothing complex: it looks like the task input command supports only buildAndPush, build, push, login, and logout; it doesn't support run. reference
Something like this with a script step should work. reference
resources:
  containers:
  - container: builder
    image: ubuntu:18.04

steps:
- script: echo "I can run inside the container (it starts by default)"
  target:
    container: builder
There is no run option in the task specification:
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/build/docker?view=azure-devops#task-inputs
To build an application and run tests right after the build, you can use the following tasks:
steps:
- task: Docker@2
  displayName: Login to ACR
  inputs:
    command: login
    containerRegistry: dockerRegistryServiceConnection1
- task: Docker@2
  displayName: Build
  inputs:
    command: build
    repository: contosoRepository
    tags: tag1
If you want to run tests in a Docker container, you could use:
- Container Structure Tests: https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/test/container-structure-test-task?view=azure-devops&tabs=yaml
- Azure Container Instances: https://marketplace.visualstudio.com/items?itemName=DanielMeixner.de-danielmeixner-anycode&targetId=5467da56-2ffa-43cd-87b7-0b2c6da5b5af

Can't access Azure pipeline variables in my Docker stack Yaml file

I want to deploy a Docker stack from Azure Pipelines. I have set some variables and I reference them in the Docker stack file. However, none of my environment variables are read in the Docker stack file. My question: is there any explanation for why I can't read the environment variables in the YAML file?
Below are all my variables.
And here is my Docker stack configuration:
version: "3.1"

services:
  postgres:
    image: "postgres"
    volumes:
      - /home/db-postgres:/data/db
    environment:
      POSTGRES_PASSWORD: ${POSTGRESPASSWORD}
      POSTGRES_DB: ${POSTGRESDB}
  main:
    command: "flask run --host=127.0.0.1"
    image: "personal-image"
    ports:
      - 5000:5000
    environment:
      SECRET_KEY: ${FLASK_SERIALIZER_SECRET}
      JWT_SECRET_KEY: ${FLASK_JWT_SECRET}
      FLASK_APP: app.py
      MAIL_USERNAME: ${MAIL_USERNAME}
      MAIL_PASSWORD: ${MAIL_PASSWORD}
      APP_ADMINS: ${APP_ADMINS}
      SQLALCHEMY_DATABASE_URI: ${SQLALCHEMY_DATABASE_URI}
From the Azure pipeline YAML file, I can read the environment variables, though.
What I don't understand is that in another project I'm doing the exact same thing and everything works fine.
Edit: here is my azure-pipelines.yml script. The agent is a self-hosted EC2 Linux agent:
steps:
- bash: |
    echo $(DOCKERREGISTRY_PASSWORD) | docker login --username $(DOCKERREGISTRY_USER) --password-stdin
  displayName: log in to Docker Registry
- bash: |
    sudo service docker start
    sudo docker stack deploy --with-registry-auth --prune --compose-file stack.staging.yml my_cluster_name
  displayName: Deploy Docker Containers
- bash: |
    sudo docker system prune --volumes -f
  displayName: Clean memory
- bash: |
    docker logout
    sudo service docker stop
  displayName: logout of Docker Registry
You can check the Agent Specification of the YAML pipeline in the other projects; they might use different agents.
I created a test pipeline and found that the environment variables in the Docker stack file could not be substituted on macOS or Ubuntu agents, but it seemed to work on Windows agents.
If you use macOS or Ubuntu agents to run your pipeline, you might need to define the environment variables in the dockerComposeFileArgs field. See below:
- task: DockerCompose@0
  displayName: 'Build services'
  inputs:
    containerregistrytype: 'Container Registry'
    dockerRegistryEndpoint: Dockerhost
    dockerComposeFileArgs: |
      MAIL_USERNAME=$(MAIL_USERNAME)
      MAIL_PASSWORD=$(MAIL_PASSWORD)
      APP_ADMINS=$(APP_ADMINS)
      SQLALCHEMY_DATABASE_URI=$(SQLALCHEMY_DATABASE_URI)
    action: 'Build services'
Update:
For the bash task, you can try using the env field to map the variables. See below:
- bash: |
    sudo docker stack deploy ...
  displayName: 'Bash Script'
  enabled: false
  env:
    MAIL_USERNAME: $(MAIL_USERNAME)
    MAIL_PASSWORD: $(MAIL_PASSWORD)
    APP_ADMINS: $(APP_ADMINS)

Deploy docker container using gitlab ci docker-in-docker setup

I'm currently trying to set up a GitLab CI pipeline. I've chosen to go with the Docker-in-Docker setup.
I got my CI pipeline to build and push the Docker image to the GitLab registry, but I cannot seem to deploy it using the following configuration:
.gitlab-ci.yml
image: docker:stable

services:
  - docker:dind

stages:
  - build
  - deploy

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2
  TEST_IMAGE: registry.gitlab.com/user/repo.nl:$CI_COMMIT_REF_SLUG
  RELEASE_IMAGE: registry.gitlab.com/user/repo.nl:release

before_script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
  - docker info

build:
  stage: build
  tags:
    - build
  script:
    - docker build --pull -t $TEST_IMAGE .
    - docker push $TEST_IMAGE
  only:
    - branches

deploy:
  stage: deploy
  tags:
    - deploy
  script:
    - docker pull $TEST_IMAGE
    - docker tag $TEST_IMAGE $RELEASE_IMAGE
    - docker push $RELEASE_IMAGE
    - docker run -d --name "review-$CI_COMMIT_REF_SLUG" -p "80:80" $RELEASE_IMAGE
  only:
    - master
  when: manual
When I run the deploy job I actually get the following feedback in my log, but when I check the server there is no container running:
$ docker run -d --name "review-$CI_COMMIT_REF_SLUG" -p "80:80" $RELEASE_IMAGE
7bd109a8855e985cc751be2eaa284e78ac63a956b08ed8b03d906300a695a375
Job succeeded
I have no clue what I'm forgetting here. Am I right to expect this method to work for deploying containers? What am I missing or doing wrong?
tl;dr: I want to deploy images into production using GitLab CI and a Docker-in-Docker setup. The job succeeds but there is no container; the goal is to have a running container on the host after deployment.
I found out that I needed to mount the Docker socket in the gitlab-runner configuration as well, and not only have it available in the container.
By adding --docker-volumes '/var/run/docker.sock:/var/run/docker.sock' when registering the runner and removing DOCKER_HOST=tcp://docker:2375, I was able to connect to Docker on my host system and spawn sibling containers.
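For reference, a sketch of the equivalent volume mount in the runner's config.toml; the file path and image name here are the common Docker-executor defaults, so adjust them to your setup:

```toml
# /etc/gitlab-runner/config.toml (excerpt)
[[runners]]
  executor = "docker"
  [runners.docker]
    image = "docker:stable"
    # Mounting the host's Docker socket lets jobs talk to the host daemon
    # and spawn sibling containers instead of going through docker:dind.
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
```

With this in place, containers started by docker run in a job run directly on the host, which is why they survive after the job and are visible on the server.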
