We have been tasked with setting up a container-based Jenkins deployment, and there is strong pressure to do this in AKS. Our Jenkins needs to be able to build other containers. Normally I'd handle this with a docker-in-docker approach by mounting /var/run/docker.sock and /usr/bin/docker into my running container.
I do not know whether this is possible in AKS or not. Some forum posts on GitHub suggest that host-mounting is possible but broken in the latest AKS release. My limited experimentation with a Helm chart was met with this error:
Error: release jenkins4 failed: Deployment.apps "jenkins" is invalid:
[spec.template.spec.initContainers[0].volumeMounts[0].name: Required
value, spec.template.spec.initContainers[0].volumeMounts[0].name: Not
found: ""]
The change I made was to update the volumeMounts: section of jenkins-master-deployment.yaml and include the following:
-
type: HostPath
hostPath: /var/run/docker.sock
mountPath: /var/run/docker.sock
Is what I'm trying to do even possible based on AKS security settings, or did I just mess up my chart?
If it's not possible to mount the docker socket into a container in AKS, that's fine, I just need a definitive answer.
Thanks,
Well, we did this a while back for VSTS (cloud TFS, now called Azure DevOps) build agents, so it should be possible. We did it by mounting docker.sock as well.
The relevant part for us was:
... container spec ...
    volumeMounts:
    - mountPath: /var/run/docker.sock
      name: docker-volume
  volumes:
  - name: docker-volume
    hostPath:
      path: /var/run/docker.sock
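Applied to the Jenkins chart, the error in the question suggests that the volumeMounts entry is missing a name and that there is no matching volumes entry. A minimal sketch of how the relevant parts of jenkins-master-deployment.yaml could look, assuming a volume name of docker-sock and an illustrative container name (both are arbitrary, but the names must match):
      containers:
      - name: jenkins
        # ... rest of the container spec ...
        volumeMounts:
        - name: docker-sock               # must match the volume name below
          mountPath: /var/run/docker.sock
      volumes:
      - name: docker-sock
        hostPath:
          path: /var/run/docker.sock
Note that the two pieces live at different levels: the volumeMount goes under the container, the hostPath volume under the pod spec.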
I achieved the requirement using the following manifests.
Our k8s manifest carries this securityContext under the pod definition:
securityContext:
  privileged: true
In our Dockerfile we install Docker inside Docker like this:
FROM ubuntu:18.04
RUN apt-get update
RUN apt-get install curl wget -y
RUN apt-get install \
ca-certificates \
curl \
gnupg \
lsb-release -y
RUN mkdir -p /etc/apt/keyrings
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
RUN echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
RUN apt-get update
RUN apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin -y
# last lines of the Dockerfile
COPY ./agent_startup.sh .
RUN chmod +x /agent_startup.sh
# Docker only honors the last CMD, so the init line below is effectively
# ignored and the startup script is what actually runs.
CMD ["/usr/sbin/init"]
CMD ["./agent_startup.sh"]
Content of the agent_startup.sh file:
#!/bin/bash
echo "DOCKER STARTS HERE"
service --status-all
# start the Docker daemon inside the container
service docker start
# called twice in the original; harmless if the daemon is already running
service docker start
docker version
docker ps
echo "DOCKER ENDS HERE"
# keep the container alive so the pod does not exit immediately
sleep 100000
Sample k8s file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: build-agent
  labels:
    app: build-agent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: build-agent
  template:
    metadata:
      labels:
        app: build-agent
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: build-agent
        image: myecr-repo.azurecr.io/buildagent
        securityContext:
          privileged: true
Once the Dockerized agent pool was up, the Docker daemon was running inside the container.
My kubectl version:
PS D:\Temp\temp> kubectl.exe version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.25.2
Kustomize Version: v4.5.7
Server Version: v1.22.6
WARNING: version difference between client (1.25) and server (1.22) exceeds the supported minor version skew of +/-1
pod shell output:
root#**********-bcd967987-52wrv:/# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
**Disclaimer:** Our Kubernetes cluster version is 1.22 and the base image is Ubuntu 18.04. This was tested only to check that Docker-inside-Docker runs; the agent was not registered with Azure DevOps. You can modify the startup script according to your needs.
I am deploying an Azure self-hosted agent on a Kubernetes cluster (1.22+), following the steps in:
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/docker?view=azure-devops#linuxInstructions
I am adding podman to the self-hosted agent as the container manager; the following code is added to the self-hosted agent's Dockerfile:
# install podman
ENV VERSION_ID=20.04
RUN apt-get update -y && apt-get install curl wget gnupg2 -y && . ./etc/os-release && sh -c "echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list" && wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/xUbuntu_${VERSION_ID}/Release.key -O- | apt-key add - && apt-get update -y && apt-get -y install podman && podman --version
Everything runs smoothly when running the container in privileged mode.
...
securityContext:
  privileged: true
...
When I switch to privileged: false and try to connect to podman, I get the following error:
level=warning msg="\"/\" is not a shared mount, this could cause issues or missing mounts with rootless containers"
Error: mount /var/lib/containers/storage/overlay:/var/lib/containers/storage/overlay, flags: 0x1000: permission denied
The command I use for connecting is:
podman login private.container.registry \
--username $USER \
--password $PASS \
--storage-opt mount_program=/usr/bin/fuse-overlayfs
How can I use podman in unprivileged mode?
The issue was related to containerd's default AppArmor profile denying the mount syscall.
I fixed it for now by disabling AppArmor for the container while running in unprivileged mode:
...
  template:
    metadata:
      labels:
        app: vsts-agent-2
      annotations:
        container.apparmor.security.beta.kubernetes.io/kubepodcreation: unconfined
...
      securityContext:
        privileged: false #true
A better way would be to create an AppArmor profile that allows the mount and apply it to the container, as sketched below.
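A minimal sketch of what that could look like, assuming a custom profile named k8s-podman-mount has already been loaded on every node with apparmor_parser (the profile name and its contents are assumptions, not something tested here):
...
  template:
    metadata:
      labels:
        app: vsts-agent-2
      annotations:
        # reference a profile loaded on the node instead of "unconfined";
        # "k8s-podman-mount" is a hypothetical profile that permits the
        # mount operations podman needs
        container.apparmor.security.beta.kubernetes.io/kubepodcreation: localhost/k8s-podman-mount
...
      securityContext:
        privileged: false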
I am creating a Dockerfile in Azure DevOps and am trying to mount an Azure Blob Storage container (which contains files) into the image. I understand that there is help from Microsoft regarding volumes, but it is in YAML format, which is not suitable for me since I am using an Azure DevOps pipeline to build my image. I truly appreciate your help. Thanks in advance.
Currently my code looks like this:
FROM ubuntu:18.04
# Update the repository sources list
RUN apt-get update
FROM python:3.7.5
RUN apt-get update
RUN DEBIAN_FRONTEND=noninteractive
RUN apt-get install -y build-essential python3-pip python3-dev postgresql-11 postgresql-contrib ufw sudo libssl-dev libffi-dev
RUN apt-get install -y libldap2-dev libsasl2-dev ldap-utils
RUN apt-get install -y systemd vim
# Install ODBC driver 17
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/ubuntu/18.04/prod.list > /etc/apt/sources.list.d/mssql-release.list
RUN apt-get update -y
RUN ACCEPT_EULA=Y apt-get install -y msodbcsql17 mssql-tools unixodbc-dev redis-server rabbitmq-server
# Install python libraries
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code
RUN pip install --upgrade pip && \
pip install -r requirements.txt
RUN mkdir /home/workspace
WORKDIR /home/workspace
COPY ./ /home/workspace
# Mount
## *** REQUIRE HELP HERE ***##
RUN service postgresql start
CMD ["python", "home/workspace/python manage.py runserver 0:8000"]
The final runtime in which your container will be deployed is relevant, because Azure gives you different options depending on where you actually deploy it.
Please consider, for instance, this question in which I provided some help: as you can see, if you use Docker Compose and App Service, Azure gives you the option of mounting Azure Files or Blob Storage into your container.
According to your comments you are using AKS. AKS and Kubernetes are a very specific use case. Instead of trying to use the Azure CLI to copy content from the storage account into your container, it would be preferable to take advantage of the mechanisms that Kubernetes provides and use volume mounts.
Please consider, for instance, reading this excellent article. The author describes, in two companion posts, a use case very similar to yours, but with SQLite.
Basically, following the guidance provided in the above-mentioned posts, first create a Kubernetes Secret with the Azure Storage connection data. You need to provide your storage account name and the value of one of your access keys, both base64 encoded. The YAML configuration file will look similar to:
apiVersion: v1
kind: Secret
metadata:
  name: your-secret
  namespace: your-namespace
type: Opaque
data:
  azurestorageaccountname: Base64_Encoded_Value_Here
  azurestorageaccountkey: Base64_Encoded_Value_Here
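As a side note, if you prefer not to base64-encode the values by hand, Kubernetes also accepts a stringData block with plain values and encodes them on write. A sketch with placeholder values:
apiVersion: v1
kind: Secret
metadata:
  name: your-secret
  namespace: your-namespace
type: Opaque
stringData:
  # plain, non-encoded placeholder values
  azurestorageaccountname: yourstorageaccountname
  azurestorageaccountkey: your-access-key-value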
Then, create the Kubernetes deployment and define the appropriate volume mounts. For example, assuming the name of your Azure Storage file share is your-file-share-name:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-azurestorage-test
  namespace: your-namespace
spec:
  selector:
    matchLabels:
      app: your-app
  template:
    metadata:
      labels:
        app: your-app
    spec:
      containers:
      - name: your-app
        image: your-registry.azurecr.io/your-app:latest
        volumeMounts:
        - name: azurefileshare
          mountPath: /postgre-require-dmount-path
      imagePullSecrets:
      - name: your-pull-secret-if-you-have-one
      volumes:
      - name: azurefileshare
        azureFile:
          secretName: your-secret
          shareName: your-file-share-name
          readOnly: false
The important things to note in the deployment are volumeMounts, used to specify the path at which the file share will be mounted inside the container (whatever path is appropriate for your PostgreSQL deployment), and volumes, whose azureFile entry carries the name of the secret created previously and the name of your actual Azure Storage file share.
I have configured a Kubernetes cluster on Microsoft Azure and installed a Grafana helm chart on it.
In a directory on my local computer, I have a custom Grafana plugin that I developed in the past and I would like to install it in Grafana running on the Cloud.
Is there a way to do that?
You can use an initContainer like this:
initContainers:
- name: local-plugins-downloader
  image: busybox
  command:
  - /bin/sh
  - -c
  - |
    #!/bin/sh
    set -euo pipefail
    mkdir -p /var/lib/grafana/plugins
    cd /var/lib/grafana/plugins
    for url in http://192.168.95.169/grafana-piechart-panel.zip; do
      wget --no-check-certificate $url -O temp.zip
      unzip temp.zip
      rm temp.zip
    done
  volumeMounts:
  - name: storage
    mountPath: /var/lib/grafana
You need to have an emptyDir volume called storage in the pod; this is the default if you use the Helm chart.
It then needs to be mounted on the Grafana container as well. You also need to make sure that the Grafana plugin directory is /var/lib/grafana/plugins; the relevant pieces are sketched below.
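A minimal sketch of those pieces in the pod spec (the container name and image are illustrative; the Helm chart already generates an equivalent structure):
containers:
- name: grafana
  image: grafana/grafana
  env:
  # Grafana's default plugin directory, shown explicitly for clarity
  - name: GF_PATHS_PLUGINS
    value: /var/lib/grafana/plugins
  volumeMounts:
  - name: storage
    mountPath: /var/lib/grafana
volumes:
# ephemeral volume shared between the init container and the Grafana container
- name: storage
  emptyDir: {}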
I've created a Docker image on CentOS with systemd services enabled and built my image. I created a docker-compose.yml file, ran docker-compose up -d, the image gets built, and I can hit my application at localhost:8080/my/app.
I was using this tutorial - https://carlos.mendible.com/2017/12/01/deploy-your-first-service-to-azure-container-services-aks/.
After I was done with my Docker image, I pushed it to Azure Container Registry and then created an Azure Container Service (AKS) cluster. When I deploy that same working Docker image to the AKS cluster, I get 404 page not found when trying to access the load balancer's public IP. I got into the Kubernetes pod and tried to curl localhost:8080/my/app, still 404.
My services are up and running without any issue inside the Kubernetes pod, and the configuration is pretty much the same as for my Docker container.
Here is my Dockerfile:
#Dockerfile based on latest CentOS 7 image
FROM c7-systemd-httpd-local
RUN yum install -y epel-release # for nginx
RUN yum install -y initscripts # for old "service"
ENV container docker
RUN yum install -y bind bind-utils
RUN systemctl enable named.service
# webserver service
RUN yum install -y nginx
RUN systemctl enable nginx.service
# Without this, init won't start the enabled services and exec'ing and starting
# them reports "Failed to get D-Bus connection: Operation not permitted".
VOLUME /run /tmp
# Don't know if it's possible to run services without starting this
ENTRYPOINT [ "/usr/sbin/init" ]
VOLUME ["/sys/fs/cgroup"]
RUN mkdir -p /myappfolder
COPY . myappfolder
WORKDIR ./myappfolder
RUN sh ./setup.sh
WORKDIR /
EXPOSE 8080
CMD ["/bin/startServices.sh"]
Here is my Docker-Compose.yml
version: '3'
services:
  myapp:
    build: ./myappfolder
    container_name: myapp
    environment:
      - container=docker
    ports:
      - "8080:8080"
    privileged: true
    cap_add:
      - SYS_ADMIN
    security_opt:
      - seccomp:unconfined
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    command: "bash -c /usr/sbin/init"
Here is my Kubernetes YAML file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - args:
        - bash
        - -c
        - /usr/sbin/init
        env:
        - name: container
          value: docker
        name: myapp
        image: myapp.azurecr.io/newinstalled_app:v1
        ports:
        - containerPort: 8080
        args: ["--allow-privileged=true"]
        securityContext:
          capabilities:
            add: ["SYS_ADMIN"]
          privileged: true
        #command: ["bash", "-c", "/usr/sbin/init"]
      imagePullSecrets:
      - name: myapp-test
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer
  ports:
  - port: 8080
  selector:
    app: myapp
I used these commands:
1. az group create --name resourcegroup --location eastus
2. az aks create --resource-group rename --name kubname --node-count 1 --generate-ssh-keys
3. az aks get-credentials --resource-group rename --name kubname
4. kubectl get cs
5. kubectl cluster-info
6. kubectl create -f yamlfile.yml
7. kubectl get po --watch
8. kubectl get svc --watch
9. kubectl get pods
10. kubectl exec -it myapp-66678f7645-2r58w -- bash
Entered the pod - it's 404.
11. kubectl get svc -> External IP - 104.43.XX.XXX:8080/my/app -> goes to 404.
But my docker-compose up -d -> reaches the application.
Am I missing anything?
Figured it out. I needed the load balancer to listen on port 80 with the destination (target) port set to 8080.
That's the only change I made and things started working fine.
Thanks!
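For reference, a sketch of the Service from the question with that change applied (not the exact file used):
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer
  ports:
  - port: 80          # port exposed by the load balancer
    targetPort: 8080  # port the container actually listens on
  selector:
    app: myapp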
I'm a novice with Kubernetes, Docker and GCP, so sorry if the question is stupid and/or obvious.
I am trying to create a simple gRPC server with HTTP(S) mapping, using the Google samples as an example. The issue is that my container starts from Google Cloud Shell with no complaints but fails on Kubernetes Engine after deployment.
In Google Cloud Console:
git clone https://gitlab.com/myrepos/grpc.git
cd grpc
docker build -t gcr.io/project-id/python-grpc-diagnostic-server:v1 .
# Run the container "locally"
docker run --rm -p 8000:8000 gcr.io/project-id/python-grpc-diagnostic-server:v1
Server is started
^CServer is stopped
# Pushing the image to Container Registry
gcloud docker -- push gcr.io/project-id/python-grpc-diagnostic-server:v1
# Deployment
kubectl create -f grpc-diagnostic.yaml
In the Deployment details, the 'diagnostic' container has "CrashLoopBackOff" status, and in the logs the following error appears:
File "/diagnostic/diagnostic_pb2.py", line 17, in <module>
from google.api import annotations_pb2 as google_dot_api_dot_annotations__pb2
ModuleNotFoundError: No module named 'google.api'
Could you please give me any idea why the same container starts in the shell but fails on Kubernetes Engine?
Thanks.
requirements.txt
grpcio
grpcio-tools
pytz
google-auth
googleapis-common-protos
Dockerfile
FROM gcr.io/google_appengine/python
# Create a virtualenv for dependencies. This isolates these packages from
# system-level packages.
RUN virtualenv -p python3.6 /env
# Setting these environment variables are the same as running
# source /env/bin/activate.
ENV VIRTUAL_ENV -p python3.6 /env
ENV PATH /env/bin:$PATH
ADD . /diagnostic/
WORKDIR /diagnostic
RUN pip install -r requirements.txt
EXPOSE 8000
ENTRYPOINT ["python", "/diagnostic/diagnostic_server.py"]
grpc-diagnostic.yaml
apiVersion: v1
kind: Service
metadata:
  name: esp-grpc-diagnostic
spec:
  ports:
  # Port that accepts gRPC and JSON/HTTP2 requests over HTTP.
  - port: 80
    targetPort: 9000 # or 8000?
    protocol: TCP
    name: http2
  selector:
    app: esp-grpc-diagnostic
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: esp-grpc-diagnostic
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: esp-grpc-diagnostic
    spec:
      containers:
      - name: esp
        image: gcr.io/endpoints-release/endpoints-runtime:1
        args: [
          "--http2_port=9000",
          "--service=diagnostic.endpoints.project-id.cloud.goog",
          "--rollout_strategy=managed",
          "--backend=grpc://127.0.0.1:8000"
        ]
        ports:
        - containerPort: 9000
      - name: diagnostic
        image: gcr.io/project-id/python-grpc-diagnostic-server:v1
        ports:
        - containerPort: 8000
That was my stupid mistake. I changed the image, but the name of the image stayed the same, so the cluster continued using the old, wrong image, thinking nothing had changed.
The right way to redeploy code is to create an image with a new tag, for instance v1.01, and set the new image on the existing deployment as described in the documentation. I had deleted the service and the deployment and recreated them, but I hadn't deleted the cluster, so I wasn't really starting from scratch.
Right way:
docker build -t gcr.io/project-id/python-grpc-diagnostic-server:v1.01 .
gcloud docker -- push gcr.io/project-id/python-grpc-diagnostic-server:v1.01
kubectl set image deployment/esp-grpc-diagnostic diagnostic=gcr.io/project-id/python-grpc-diagnostic-server:v1.01
Another way to pull updated images without changing the image name is to change imagePullPolicy, which is set to IfNotPresent by default; see the sketch below, and the Kubernetes documentation on images for more info.
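A minimal sketch of where that setting goes, using the diagnostic container from the Deployment above (only the relevant fields are shown):
      containers:
      - name: diagnostic
        image: gcr.io/project-id/python-grpc-diagnostic-server:v1
        # pull the image on every pod start instead of reusing a cached copy
        imagePullPolicy: Always
        ports:
        - containerPort: 8000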