Running GitLab Runner in Azure Container Instances (ACI)

I would like to run GitLab Runner in Azure Container Instances (ACI).
For this I have the gitlab/gitlab-runner Docker container running in ACI.
With the following command I register this runner with my GitLab server:
gitlab-runner register \
--non-interactive \
--run-untagged=true \
--locked=false \
--executor "docker" \
--docker-image docker:latest \
--url "https://gitlab.com/" \
--registration-token "MyTokenYYYYYYYY" \
--description "my-own-runner" \
--tag-list "frontend, runner" \
--docker-volumes /var/run/docker.sock:/var/run/docker.sock
The new runner is also recognized in GitLab. However, when I run a job, I get the following error:
Preparing the "docker" executor
ERROR: Failed to remove network for build
ERROR: Preparation failed: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? (docker.go:960:0s)
If I run the runner with the identical configuration locally on my notebook, everything works. How do I get it to work in Azure ACI?
How can I mount the Docker socket in Azure ACI when registering the runner?
Many thanks in advance for your help.

You're not going to be able to run another Docker container inside the container you created in Azure ACI. To achieve "docker-in-docker" (dind), the daemon instance (your ACI container in this case) would need to run in privileged mode, which would allow escalated access to the host machine that you share with other ACI users. You can read more about this on Docker Hub, where it says:
Note: --privileged is required for Docker-in-Docker to function
properly, but it should be used with care as it provides full access
to the host environment, as explained in the relevant section of the
Docker documentation.
The common solution for this is to use an auto-scale group of 0 or more VMs to provide compute resources to your GitLab runners and the containers they spawn:
GitLab docs: docker-machine autoscaler
Blog post on doing this on Azure: https://www.n0r1sk.com/post/on-premise-gitlab-with-autoscale-docker-machine-microsoft-azure-runners/
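For reference, a minimal config.toml sketch of such an autoscaling runner, assuming the docker+machine executor with the docker-machine Azure driver (all IDs, names, and sizes below are illustrative placeholders, not values from the question):

concurrent = 4

[[runners]]
  name = "azure-autoscale-runner"
  url = "https://gitlab.com/"
  token = "RUNNER_TOKEN"           # the runner token, not the registration token
  executor = "docker+machine"
  [runners.docker]
    image = "docker:latest"
  [runners.machine]
    IdleCount = 0                  # scale to zero when no jobs are queued
    IdleTime = 1800                # seconds a VM may stay idle before removal
    MachineDriver = "azure"
    MachineName = "gitlab-runner-%s"
    MachineOptions = [
      "azure-subscription-id=SUBSCRIPTION_ID",
      "azure-resource-group=GitLabRunners",
      "azure-location=westeurope",
      "azure-size=Standard_D2s_v3",
    ]

Each job then gets a short-lived VM with a real Docker daemon, so no privileged ACI container is needed.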

Related

use blobfuse inside an Azure Kubernetes (AKS) container

We wanted to configure blobfuse inside an Azure Kubernetes container to access the Azure storage service.
I created the storage account and a blob container.
I installed blobfuse on the docker image (I tried with alpine and with ubuntu:22.04 images).
I start my application through a Jenkins pipeline with this configuration:
pipeline {
  agent {
    kubernetes {
      yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: test
    image: my_ubuntu:20.04
    command: ['cat']
    securityContext:
      allowPrivilegeEscalation: true
    devices:
      - /dev/fuse
"""
    }
  }
}
I ran this command inside my docker container:
blobfuse /path/to/my/buckett --container-name=${AZURE_BLOB_CONTAINER} --tmp-path=/tmp/path --log-level=LOG_DEBUG --basic-remount-check=true
I got
fuse: device not found, try 'modprobe fuse' first
Running modprobe fuse returns modprobe: FATAL: Module fuse not found in directory /lib/modules/5.4.0-1068-azure
All answers I googled mentioned using --privileged and /dev/fuse device, which I did, with no results.
The same procedure works fine on my linux desktop, but not from inside a docker container on the AKS cluster.
Is this even the right approach to access the Azure Storage service from inside Kubernetes?
Is it possible to fix the error fuse: device not found ?
fuse: device not found, try 'modprobe fuse' first
I have also researched the usual causes of fuse issues:
Either the fuse kernel module isn't loaded on your host machine (very unlikely), or the container you're using to perform the build doesn't have enough privileges.
--privileged gives too many permissions to the container; instead, you should be able to get things working by replacing it with --cap-add SYS_ADMIN, like below.
docker run -d --rm \
--device /dev/fuse \
--cap-add SYS_ADMIN \
<image_id/name>
If that still fails, also try relaxing the AppArmor confinement:
docker run -d --rm \
--device /dev/fuse \
--cap-add SYS_ADMIN \
--security-opt apparmor:unconfined \
<image_id/name>
If these commands still fail, check your setup, the blobfuse version, and the blobfuse installation.
For reference, I also suggest this article:
Mounting Azure Files and Blobs using Non-traditional options in Kubernetes - by Arun Kumar Singh
kubernetes-sigs/blob-csi-driver: Azure Blob Storage CSI driver (github.com)
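If the docker run flags above are what your workload needs, here is a rough sketch of how they might translate into the pod spec from the question (an untested assumption on my part: SYS_ADMIN is granted via capabilities, and /dev/fuse is exposed through a hostPath volume, since plain pod specs have no devices field):

apiVersion: v1
kind: Pod
metadata:
  name: fuse-test
spec:
  containers:
  - name: test
    image: my_ubuntu:20.04
    command: ['cat']
    securityContext:
      capabilities:
        add: ["SYS_ADMIN"]      # replaces docker run --cap-add SYS_ADMIN
    volumeMounts:
    - name: fuse
      mountPath: /dev/fuse      # replaces docker run --device /dev/fuse
  volumes:
  - name: fuse
    hostPath:
      path: /dev/fuse
      type: CharDevice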

Deploy docker compose on Azure Container App

From the ACA docs, we have to specify the target port of the container:
az containerapp create \
--name my-container-app \
--resource-group $RESOURCE_GROUP \
--environment $CONTAINERAPPS_ENVIRONMENT \
--image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
--target-port 80 \
--ingress 'external' \
--query configuration.ingress.fqdn
Now my question is: how do I deploy via docker-compose rather than just a single image, since there is only a single target-port?
If you want to deploy multiple containers to a single container app, you can define more than one container in the configuration's containers array
Reference: Containers in Azure Container Apps Preview | Microsoft Docs
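A rough sketch of what that can look like via the YAML path of az containerapp create (an illustrative fragment, not the full ACA YAML spec; image names are placeholders, and only one container can receive the ingress targetPort):

# app.yaml (fragment)
properties:
  configuration:
    ingress:
      external: true
      targetPort: 80
  template:
    containers:
      - name: web
        image: myregistry.azurecr.io/web:latest
      - name: sidecar
        image: myregistry.azurecr.io/sidecar:latest

az containerapp create \
  --name my-container-app \
  --resource-group $RESOURCE_GROUP \
  --environment $CONTAINERAPPS_ENVIRONMENT \
  --yaml app.yaml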
Alternative
If you want to deploy your application via Docker Compose, you can deploy to an Azure Web App.
While creating the web app, select publish as Docker Container. Under Docker, select Docker Compose as the option and provide the Docker Compose file.
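The CLI route is roughly as follows (a sketch, assuming an existing Linux App Service plan and a compose file in the current directory):

az webapp create \
  --resource-group $RESOURCE_GROUP \
  --plan my-appservice-plan \
  --name my-compose-app \
  --multicontainer-config-type compose \
  --multicontainer-config-file docker-compose.yml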

Azure App Service, docker container deployment, unable to pass cap-add=NET_ADMIN property during docker run

I was doing a single-container deployment in Azure App Service. As my container needs to run with NET_ADMIN, I had to pass --cap-add=NET_ADMIN during docker run, something like this:
docker run --cap-add=NET_ADMIN -p 8080:8080 my_image:v1
In Azure App Service, we have to pass the runtime arguments in the configuration.
But it is a known issue that we can't pass any key containing a - (hyphen) through the configuration.
So I am unable to run my container with NET_ADMIN.
Is there any workaround so that I can run my container with NET_ADMIN in Azure App Service?
Base image: alpine 4.1.4
PS: My requirement is to run a single container, not docker-compose.

Docker commands in Azure

Maybe I do not understand the concept of Azure Container Instances (ACI), or Azure in general, correctly. I am using the Azure CLI on my Windows computer and want to create a Windows container (core image) from a Dockerfile. But there is no az command available for that. I am able to create a container, that is no problem, but not from a Dockerfile. Is there a way to run docker commands against Azure (Azure CLI, Azure bash, Azure PowerShell)? Maybe somebody can clarify my misunderstanding.
Many thanks in advance, J.
Of course, yes, you can use Azure CLI commands to build containers from a Dockerfile. But the steps have to happen in order.
The Docker image is the first step: the CLI command az acr build builds the image directly in ACR from your Dockerfile. For example, if the Dockerfile is on your local machine and it is a Windows image:
az acr build -t sample/hello-world:{{.Run.ID}} -r MyRegistry . --platform windows
The ACI is the second step: the CLI command az container create creates the container instance from your image. An example command:
az container create -g MyResourceGroup --name mywinapp --image winappimage:latest --os-type Windows --cpu 2 --memory 3.5
If you build your image locally instead, publish it to Azure Container Registry or Docker Hub first.
Take a look at the following links; they show how to:
Create a container image for deployment to Azure Container Instances
Deploy the container from Azure Container Registry
Deploy your application
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-tutorial-prepare-app
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-tutorial-prepare-acr
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-tutorial-deploy-app
I have recently done the same thing. I deployed my Windows service to an Azure Container Instance through Azure Container Registry. Here is the step-by-step process you need to follow. Before performing these steps you need the published folder of your application, and Docker Desktop installed on your machine.
Create a Dockerfile with the commands below and put it inside the published folder:
FROM mcr.microsoft.com/dotnet/framework/runtime:4.8-windowsservercore-ltsc2019
COPY . .
ENTRYPOINT Application.exe
Here you need to pick the base image appropriate for your application. You can find the available Windows base images in Microsoft's documentation.
Now navigate to the published folder path in PowerShell and execute the commands below:
docker image build -t IMAGE_NAME:TAG .   # build the image with a name and tag
docker run --rm IMAGE_NAME:TAG           # run it locally to verify
Now push this image to Azure with the commands below. First log in to Azure, then to the Azure Container Registry.
az login                                                         # opens a browser for login
docker login ACR_LOGIN_SERVER_NAME -u ACR_USERNAME --password ACR_PASSWORD
docker tag IMAGE_NAME:TAG ACR_LOGIN_SERVER_NAME/IMAGE_NAME:TAG   # tag the local image for ACR
docker push ACR_LOGIN_SERVER_NAME/IMAGE_NAME:TAG                 # push the image to ACR
Once you have pushed the Docker image to ACR, you can see it under Repositories in the registry. From this repository you create an Azure Container Instance to run the image.
To create the ACI, click "Create a resource" and select Containers > Container Instances. Here you need to enter some information, such as the resource group and the Docker image details. Make sure you select Private as the image type and enter the image registry credentials. The deployment may take a couple of minutes, as it fetches the Docker image and then deploys it. Once the deployment is done, you will see the container running and you can check its logs.
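If you prefer the CLI over the portal for this last step, a rough equivalent using the placeholder names from above:

az container create \
  --resource-group MyResourceGroup \
  --name myservice \
  --image ACR_LOGIN_SERVER_NAME/IMAGE_NAME:TAG \
  --registry-login-server ACR_LOGIN_SERVER_NAME \
  --registry-username ACR_USERNAME \
  --registry-password ACR_PASSWORD \
  --os-type Windows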
Hope it helps!!

Neo4j HA model don't work in docker

I'm trying to run Neo4j in HA mode using Azure Container Service + Docker. HA mode requires 3 instances within the same network.
I created a network with the command:
docker network create --driver=bridge cluster
But when I try to attach instances to this network I get the following error:
docker: Error response from daemon: network cluster not found.
I've tried with the network ID as well and it does not work.
I'm following this tutorial: https://neo4j.com/developer/docker-3.x/ but without success. Any tips?
PS: Running in single mode works.
Commands and the results I get:
jefersonanjos@swarm-master-21858A81-0:~/neo4j/data$ docker network create --driver=bridge cluster
result: d9fb3dd121ded5bfe01765ce4d276b75ad4e66ef1f2bd62b858a2cea86ccc1ec
jefersonanjos@swarm-master-21858A81-0:~/neo4j/data$ docker run --name=instance1 --detach --publish=7474:7474 --publish=7687:7687 --net=cluster --hostname=instance1 \
--env=NEO4J_dbms_mode=HA --env=NEO4J_ha_serverId=1 \
--env=NEO4J_ha_host_coordination=instance1:5001 --env=NEO4J_ha_host_data=instance1:6001 \
--env=NEO4J_ha_initialHosts=instance1:5001,instance2:5001,instance3:5001 \
neo4j:enterprise
result: b57ca9a895535b07ef97d956a780b9687e7384b33f389e2470e0ed743c79ef11
jefersonanjos@swarm-master-21858A81-0:~/neo4j/data$ docker run --name=instance2 --detach --publish=7475:7474 --publish=7688:7687 --net=cluster --hostname=instance2 \
--env=NEO4J_dbms_mode=HA --env=NEO4J_ha_serverId=2 \
--env=NEO4J_ha_host_coordination=instance2:5001 --env=NEO4J_ha_host_data=instance2:6001 \
--env=NEO4J_ha_initialHosts=instance1:5001,instance2:5001,instance3:5001 \
neo4j:enterprise
docker: Error response from daemon: network cluster not found.
See 'docker run --help'.
jefersonanjos@swarm-master-21858A81-0:~/neo4j/data$ docker run --name=instance3 --detach --publish=7476:7474 --publish=7689:7687 --net=cluster --hostname=instance3 \
--env=NEO4J_dbms_mode=HA --env=NEO4J_ha_serverId=3 \
--env=NEO4J_ha_host_coordination=instance3:5001 --env=NEO4J_ha_host_data=instance3:6001 \
--env=NEO4J_ha_initialHosts=instance1:5001,instance2:5001,instance3:5001 \
neo4j:enterprise
08c4c5156dc8bb589f4c876de3a2bf0170450ae640606d505e1851da94220d7e
The problem in Azure was that I was running the test on a cluster of machines.
A network created with docker network create --driver=bridge cluster exists on a single host only, so it does not serve this purpose.
You must use --driver=overlay for the network to span multiple hosts.
More info: https://docs.docker.com/engine/userguide/networking/get-started-overlay/
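A minimal sketch of the fix, assuming the hosts already form a swarm (--attachable lets standalone docker run containers join the network; classic swarm setups instead needed an external key-value store for overlay networking):

docker network create --driver=overlay --attachable cluster

The docker run --net=cluster commands from the question can then attach containers on any host to that network.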
