Azure DevOps self-hosted build agent - Kaniko - azure

We need to build Docker images using a self-hosted Linux agent which is deployed as a Docker container (in Azure Container Instances).
As of now, the agent is an Ubuntu image; to enable building images inside this container I thought of using the Kaniko image. However, I haven't figured out how to run the Kaniko image without executing Kaniko itself right away (we need to run the DevOps agent primarily and run Kaniko on demand).
Any hints? Or better ideas on how to build Docker images inside a running Docker container?

Solved with the following code. However, Kaniko does not work as expected when running inside my container: I tested the same parameters with Kaniko inside my container and in the default Kaniko container, and in my container it does not work (it cannot authenticate to ACR).
I might end up with the VMSS DevOps agent...
FROM whatever-base-image
...
COPY --from=gcr.io/kaniko-project/executor /kaniko/executor /kaniko/executor
Ref: https://github.com/GoogleContainerTools/kaniko/issues/2058#issuecomment-1104666901
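For anyone trying the same approach, here is a minimal sketch of how the copied executor might be invoked on demand from a pipeline script step inside that agent container. The registry name, service principal variables and image path are placeholders, and whether this resolves the ACR authentication issue depends on your setup; depending on the base image you may also need Kaniko's --force flag if its container detection fails.
# Kaniko reads registry credentials from $DOCKER_CONFIG/config.json.
# The official image sets DOCKER_CONFIG=/kaniko/.docker; a custom image has to set it itself.
export DOCKER_CONFIG=/kaniko/.docker
mkdir -p /kaniko/.docker
cat > /kaniko/.docker/config.json <<EOF
{ "auths": { "myregistry.azurecr.io": {
    "auth": "$(printf '%s:%s' "$ACR_SP_ID" "$ACR_SP_PASSWORD" | base64 -w0)" } } }
EOF
# Build and push without a Docker daemon
/kaniko/executor \
  --context "$(Build.SourcesDirectory)" \
  --dockerfile Dockerfile \
  --destination "myregistry.azurecr.io/myrepo/myimage:$(Build.BuildId)"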

Related

how to clean docker images from azure devops build server

I am using an Azure DevOps pipeline to build a Docker image for my ASP.NET web application. I have to use a self-hosted agent as the build and deployment server. After each run of the CI pipeline, new images are created and then pushed to the Docker registry. The problem is that the built images also remain on the agent, so after a while the agent runs low on disk space and I have to delete the old images manually.
How can I delete the Docker images after pushing them to the registry during the CI pipeline?
After pushing the image, add a command line step to delete it:
- task: CmdLine@2
  inputs:
    script: 'docker rmi -f IMAGE:TAG'
Or, more destructively:
- task: CmdLine@2
  inputs:
    script: 'docker system prune -a --force'
Run
docker rmi -f image-name
which will forcefully remove the image after you push it to the registry.
After you push the image to the registry add a cmd task with the command to remove the image:
docker rmi [OPTIONS] IMAGE [IMAGE...]
For example:
docker rmi test1:latest
How about using docker system prune? By default this removes dangling images (and other unused objects) from the system, but using it with -a removes any unused images as well.
Please refer to the official documentation here.
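A possible middle ground between removing a single tag and wiping everything, if you also want to keep some recent layers around as build cache, is a filtered prune (a sketch; tune the time window to how often you build):
# Remove unused images created more than 48 hours ago; newer layers stay available as cache
docker image prune -a -f --filter "until=48h"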

Azure App Service, docker container deployment, unable to pass cap-add=NET_ADMIN property during docker run

I was doing a single-container deployment in Azure App Service. As my container needs to run with the NET_ADMIN capability, I had to pass --cap-add=NET_ADMIN during docker run, something like this:
docker run --cap-add=NET_ADMIN -p 8080:8080 my_image:v1
In Azure App Service we have to pass the runtime arguments in the configuration settings.
But it is a known issue that we can't pass any key with a - (hyphen) from the configuration settings.
So I am unable to run my container with NET_ADMIN.
Is there any workaround so that I will be able to run with NET_ADMIN in Azure App Service?
Base image: alpine 4.1.4
PS: My requirement is to run a single container, not docker-compose.

Deploy a ASPNETAPP on a docker container to a Windows VM using Azure DevOps Server 2019

I'm currently using Azure DevOps Server 2019 (on-premise) to deploy an ASP.NET App (CI-CD).
Is it possible to deploy this app to run via a docker container to a Windows VM?
I'm currently following the examples at this link on how to run an ASP.NET app in a Docker container:
https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/docker/building-net-docker-images?view=aspnetcore-3.1
How could I do the same by utilising Azure DevOps Server 2019?
Basically, most if not all of the resources/guides/how-tos I saw point to deploying to the Azure cloud or Docker Hub.
Is it possible to deploy this app to run via a docker container to a Windows VM?
Yes, it is possible. You will need to create a self-hosted agent on the Windows VM to which you deploy your app. You can just use a PowerShell task to run docker build and docker run on the self-hosted agent, without the need to upload the image to ACR/Docker Hub.
Of course, you can also upload the built image to ACR/Docker Hub as @Aravind mentioned, and have a PowerShell task that pulls the image.
The main idea is to use a PowerShell task to run docker commands on the agent hosted on the Windows VM. You can refer to the steps below.
1. Create a self-hosted agent. Please check the detailed steps here.
2. Create a build pipeline.
Here is an example of creating a YAML pipeline.
Here is an example of creating a classic UI pipeline.
3. Customize your build pipeline. Use a single PowerShell task to run the docker build and docker run commands as described in the tutorial. (You can also use the Docker task to build and push the image to ACR/Docker Hub, and then use a PowerShell task to pull and run the image as @Aravind mentioned.)
steps:
- powershell: |
    docker build -t aspnetapp .
    docker run -it --rm -p 5000:80 --name aspnetcore_sample aspnetapp
  displayName: 'PowerShell Script'
Note: please make sure Docker is installed on the Windows VM (the PowerShell task will invoke the Docker CLI installed on the VM), and run your pipeline on the self-hosted agent (hosted on the Windows VM) by choosing the agent pool where the self-hosted agent resides (the agent pool that includes the self-hosted agent is decided at the creation of the agent).
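One caveat with the tutorial command above: docker run -it tries to attach a TTY, which a pipeline job does not have, and keeps the step attached to the container. For an actual deployment you would typically start the container detached and replace any previous instance. A rough sketch, reusing the tutorial's image and container names:
docker build -t aspnetapp .
docker rm -f aspnetcore_sample    # remove the previous deployment, if any (ignore the error on the first run)
docker run -d --restart unless-stopped -p 5000:80 --name aspnetcore_sample aspnetapp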

Docker commands in Azure

Maybe I do not understand the concept of Azure Container Instances (ACI), or Azure at all, correctly. I am using the Azure CLI on my Windows computer and want to create a Windows container (core image) from a Dockerfile, but there is no az command available for that. I am able to create a container without a problem, but not from a Dockerfile. Is there a possibility to run Docker commands for Azure (Azure CLI, Azure bash, Azure PowerShell)? Maybe somebody can clarify my misunderstanding.
Many thanks in advance, J.
Of course, yes, you can use Azure CLI commands to build containers from a Dockerfile. But there is an order to the steps.
The Docker image is the first step. You can use the CLI command az acr build to build the image directly in ACR with your Dockerfile. For example, if the Dockerfile is on your local machine and it's a Windows image:
az acr build -t sample/hello-world:{{.Run.ID}} -r MyRegistry . --platform windows
ACI is the second step. The CLI command az container create will create the container instance from your image. An example command:
az container create -g MyResourceGroup --name mywinapp --image winappimage:latest --os-type Windows --cpu 2 --memory 3.5
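If the image was built into ACR with az acr build as above, the create command also needs the full image reference and registry credentials, roughly like this (registry name and credential variables are placeholders):
az container create -g MyResourceGroup --name mywinapp \
  --image myregistry.azurecr.io/sample/hello-world:latest \
  --registry-login-server myregistry.azurecr.io \
  --registry-username $ACR_USERNAME \
  --registry-password $ACR_PASSWORD \
  --os-type Windows --cpu 2 --memory 3.5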
Once you have your image, you should publish it to Azure Container Registry or Docker Hub.
Take a look at the following links; they provide the information to:
Create a container image for deployment to Azure Container Instances
Deploy the container from Azure Container Registry
Deploy your application
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-tutorial-prepare-app
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-tutorial-prepare-acr
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-tutorial-deploy-app
I have recently done the same thing: I deployed my Windows service to Azure Container Instances through Azure Container Registry. Here is the step-by-step process you need to follow. Before performing these steps you need to have the published folder of your application, and Docker Desktop installed on your machine.
Create a Dockerfile with the below instructions and put it inside the published folder:
FROM mcr.microsoft.com/dotnet/framework/runtime:4.8-windowsservercore-ltsc2019
COPY . .
ENTRYPOINT Application.exe
Here you need to use the base image as per your need. You can find Windows base images here.
Now navigate to this directory (the published folder path) in PowerShell and execute the below commands:
docker image build -t IMAGE_NAME:TAG .    # build the image with a name and tag
docker run --rm IMAGE_NAME:TAG    # you can run it locally
Now, to push this image to Azure, below are the commands. First log in to Azure and then to the Azure Container Registry.
az login    # it will open the browser for login
docker login ACR_LOGIN_SERVER_NAME -u ACR_USERNAME --password ACR_PASSWORD
docker tag IMAGE_NAME:TAG ACR_LOGIN_SERVER_NAME/IMAGE_NAME:TAG    # tag the local image for ACR
docker push ACR_LOGIN_SERVER_NAME/IMAGE_NAME:TAG    # push the image to ACR
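As an aside, if you are already signed in with az login, you can let the CLI handle the registry authentication instead of passing the ACR username and password yourself (a sketch; MYREGISTRY is a placeholder for your registry name):
az acr login --name MYREGISTRY    # reuses your az login session for Docker authentication
docker tag IMAGE_NAME:TAG MYREGISTRY.azurecr.io/IMAGE_NAME:TAG
docker push MYREGISTRY.azurecr.io/IMAGE_NAME:TAG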
Once you have pushed the Docker image to ACR, you can see it under Repositories in the registry. Based on this repository, you need to create an Azure Container Instance to run your Docker image.
To create the ACI, click on "Create a resource" and select Containers > Container Instances. Here, you need to enter some info like the resource group and Docker image credentials. Make sure you select Private as the image type and enter the image registry credentials. This ACI deployment may take a couple of minutes as it will fetch the Docker image and then deploy it. Once the deployment is done, you will see the container running and you can check the logs as well.
Hope it helps!!

Setup Own Docker Environment For Each Gitlab CI Pipeline

I am having some trouble setting up a clean and encapsulated Docker environment for each GitLab CI pipeline.
What I want to achieve:
Each pipeline should run in its own docker environment.
Docker containers started in one job should be present in jobs of a following stage (that use the docker executor).
a sample pipeline could contain the following stages:
startup containers (docker executor)
install some dependencies (docker executor)
run tests (docker executor)
run some other kind of tests (docker executor)
release to docker registry (docker executor)
deploy to kubernetes (Kubernetes executor)
rollback kubernetes (Kubernetes executor)
stop / remove containers (docker executor)
When I use the Docker executor with the docker-in-docker (dind) service, each job runs in a clean environment. But that means Docker containers started in one job won't be accessible in the following one.
When I make use of Docker socket binding, the given sample pipeline could be realized.
But if I understand everything right, this could lead to conflicts between different commits running that pipeline.
The Docker socket is passed through from the host, and thus all Docker containers that are created within a pipeline will be visible on the host and to concurrent pipelines as well.
To prevent naming conflicts, the predefined GitLab environment variable CI_COMMIT_SHA could be appended to the name of each container, so each pipeline creates its own identifiable containers (on the host).
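To illustrate, the job scripts could scope everything to the commit, something like this (the service and network names are just examples):
# startup job: create pipeline-scoped containers and network
docker network create "net_${CI_COMMIT_SHA}"
docker run -d --name "db_${CI_COMMIT_SHA}" --network "net_${CI_COMMIT_SHA}" postgres:15
# cleanup job: remove only this pipeline's containers and network
docker ps -aq --filter "name=${CI_COMMIT_SHA}" | xargs -r docker rm -f
docker network rm "net_${CI_COMMIT_SHA}"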
But this is a security issue. As the GitLab documentation says, the command
docker rm -f $(docker ps -a -q)
run in any job would remove all containers, even those outside the pipeline, meaning everything on the host including the GitLab Runner containers.
I've read a lot in the GitLab docs and other sources, but I can't find a solution for setting up a clean and encapsulated Docker environment for a whole pipeline where containers are accessible between stages but not from the outside (other pipelines). The host's own containers should also be safe.
Is there a clean solution to this problem? Or at least reasonable workarounds?
Thanks in advance for your support!
