How to install a custom plugin for Grafana running in a Kubernetes cluster on Azure

I have configured a Kubernetes cluster on Microsoft Azure and installed a Grafana helm chart on it.
In a directory on my local computer, I have a custom Grafana plugin that I developed in the past and I would like to install it in Grafana running on the Cloud.
Is there a way to do that?

You can use an initContainer like this:
initContainers:
  - name: local-plugins-downloader
    image: busybox
    command:
      - /bin/sh
      - -c
      - |
        set -euo pipefail
        mkdir -p /var/lib/grafana/plugins
        cd /var/lib/grafana/plugins
        for url in http://192.168.95.169/grafana-piechart-panel.zip; do
          wget --no-check-certificate "$url" -O temp.zip
          unzip temp.zip
          rm temp.zip
        done
    volumeMounts:
      - name: storage
        mountPath: /var/lib/grafana
You need an emptyDir volume called storage in the pod; this is the default if you use the Helm chart.
It then needs to be mounted on the Grafana container. You also need to make sure that the Grafana plugin directory is /var/lib/grafana/plugins.
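If you installed Grafana via the grafana/grafana Helm chart, the same init container can be wired in through the chart's values instead of editing the Deployment by hand. A minimal sketch, assuming your chart version exposes the extraInitContainers value (verify against the chart's values.yaml; the plugin URL is the example from above):

```yaml
# values.yaml sketch for the grafana Helm chart (key name is an
# assumption; check your chart version's values.yaml)
extraInitContainers:
  - name: local-plugins-downloader
    image: busybox
    command:
      - /bin/sh
      - -c
      - wget --no-check-certificate http://192.168.95.169/grafana-piechart-panel.zip -O /tmp/p.zip && mkdir -p /var/lib/grafana/plugins && unzip /tmp/p.zip -d /var/lib/grafana/plugins
    volumeMounts:
      - name: storage
        mountPath: /var/lib/grafana
```

Then apply it with helm upgrade --install grafana grafana/grafana -f values.yaml.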

Related

run docker inside docker container in AKS [duplicate]

We have been tasked with setting up a container-based Jenkins deployment, and there is strong pressure to do this in AKS. Our Jenkins needs to be able to build other containers. Normally I'd handle this with a docker-in-docker approach by mounting /var/run/docker.sock & /usr/bin/docker into my running container.
I do not know if this is possible in AKS or not. Some forum posts on GitHub suggest that host-mounting is possible but broken in the latest AKS release. My limited experimentation with a Helm chart was met with this error:
Error: release jenkins4 failed: Deployment.apps "jenkins" is invalid:
[spec.template.spec.initContainers[0].volumeMounts[0].name: Required
value, spec.template.spec.initContainers[0].volumeMounts[0].name: Not
found: ""]
The change I made was to update the volumeMounts: section of jenkins-master-deployment.yaml and include the following:
-
type: HostPath
hostPath: /var/run/docker.sock
mountPath: /var/run/docker.sock
Is what I'm trying to do even possible based on AKS security settings, or did I just mess up my chart?
If it's not possible to mount the docker socket into a container in AKS, that's fine, I just need a definitive answer.
Thanks,
Well, we did this a while back for VSTS (cloud TFS, now called Azure DevOps) build agents, so it should be possible. The way we did it was also by mounting the docker.sock.
The relevant part for us was:
# ... container spec ...
volumeMounts:
  - mountPath: /var/run/docker.sock
    name: docker-volume
volumes:
  - name: docker-volume
    hostPath:
      path: /var/run/docker.sock
I achieved the requirement using the following manifests.
Our k8s manifest file carries this securityContext under the pod definition:
securityContext:
  privileged: true
In our Dockerfile we install Docker inside Docker like this:
FROM ubuntu:18.04
RUN apt-get update
RUN apt-get install curl wget -y
RUN apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release -y
RUN mkdir -p /etc/apt/keyrings
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
RUN echo \
    "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
RUN apt-get update
RUN apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin -y
# last two lines of Dockerfile
COPY ./agent_startup.sh .
RUN chmod +x /agent_startup.sh
# note: only the last CMD takes effect; the /usr/sbin/init line is overridden
CMD ["/usr/sbin/init"]
CMD ["./agent_startup.sh"]
Content of the agent_startup.sh file:
#!/bin/bash
echo "DOCKER STARTS HERE"
service --status-all
service docker start
service docker start
docker version
docker ps
echo "DOCKER ENDS HERE"
sleep 100000
Sample k8s file:
apiVersion: apps/v1
kind: Deployment
metadata:
name: build-agent
labels:
app: build-agent
spec:
replicas: 1
selector:
matchLabels:
app: build-agent
template:
metadata:
labels:
app: build-agent
spec:
nodeSelector:
kubernetes.io/os: linux
containers:
- name: build-agent
image: myecr-repo.azurecr.io/buildagent
securityContext:
privileged: true
When the Dockerized agent pool was up, the Docker daemon was running inside the Docker container.
My kubectl version:
PS D:\Temp\temp> kubectl.exe version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.25.2
Kustomize Version: v4.5.7
Server Version: v1.22.6
WARNING: version difference between client (1.25) and server (1.22) exceeds the supported minor version skew of +/-1
pod shell output:
root@**********-bcd967987-52wrv:/# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Disclaimer: Our Kubernetes cluster version is 1.22 and the base image is Ubuntu 18.04; this was tested only to check that Docker-inside-Docker runs, and it is not registered with Azure DevOps. You can modify the startup script according to your needs.

use blobfuse inside an Azure Kubernetes (AKS) container

We wanted to configure blobfuse inside an Azure Kubernetes container to access the Azure storage service.
I created the storage account and a blob container.
I installed blobfuse on the docker image (I tried with alpine and with ubuntu:22.04 images).
I start my application through a Jenkins pipeline with this configuration:
pipeline {
  agent {
    kubernetes {
      yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: test
      image: my_ubuntu:20.04
      command: ['cat']
      securityContext:
        allowPrivilegeEscalation: true
      devices:
        - /dev/fuse
"""
    }
  }
}
I ran this command inside my docker container:
blobfuse /path/to/my/buckett --container-name=${AZURE_BLOB_CONTAINER} --tmp-path=/tmp/path --log-level=LOG_DEBUG --basic-remount-check=true
I got
fuse: device not found, try 'modprobe fuse' first
Running modprobe fuse returns modprobe: FATAL: Module fuse not found in directory /lib/modules/5.4.0-1068-azure
All answers I googled mentioned using --privileged and /dev/fuse device, which I did, with no results.
The same procedure works fine on my linux desktop, but not from inside a docker container on the AKS cluster.
Is this even the right approach to access the Azure Storage service from inside Kubernetes?
Is it possible to fix the error fuse: device not found ?
fuse: device not found, try 'modprobe fuse' first
I have also researched the fuse issues; typically one of the following applies:
Either the fuse kernel module isn't loaded on your host machine (very unlikely), or the container you're using to perform the build doesn't have enough privileges.
--privileged grants the container too many permissions; instead, you should be able to get things working by replacing it with --cap-add SYS_ADMIN, like below.
docker run -d --rm \
--device /dev/fuse \
--cap-add SYS_ADMIN \
<image_id/name>
If that is not enough, also try relaxing the AppArmor profile:
docker run -d --rm \
--device /dev/fuse \
--cap-add SYS_ADMIN \
--security-opt apparmor:unconfined \
<image_id/name>
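If you need the same thing inside an AKS pod rather than plain docker run, a hedged sketch of the Kubernetes equivalent (the hostPath mount for /dev/fuse is an assumption; adjust names to your pod spec):

```yaml
# Container grants SYS_ADMIN instead of full privileged mode and
# mounts the host's /dev/fuse device node via a hostPath volume
containers:
  - name: test
    image: my_ubuntu:20.04
    securityContext:
      capabilities:
        add: ["SYS_ADMIN"]
    volumeMounts:
      - name: dev-fuse
        mountPath: /dev/fuse
volumes:
  - name: dev-fuse
    hostPath:
      path: /dev/fuse
```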
Try running these commands; if they fail, check your setup and versions, as well as the blobfuse installation.
For reference, I also suggest this article:
Mounting Azure Files and Blobs using Non-traditional options in Kubernetes - by Arun Kumar Singh
kubernetes-sigs/blob-csi-driver: Azure Blob Storage CSI driver (github.com)

Installing Argo Rollouts on Azure Kubernetes cluster

I'm using ArgoCD along with Argo Rollouts on my local cluster. Setting up a local cluster is straightforward: download the binaries, add them to the PATH, and execute kubectl argo rollouts version.
However, I'm trying to install it on a new Azure Kubernetes cluster but am unable to. As per the installation steps mentioned, the binaries need to be downloaded and placed on the PATH, but it fails at sudo mv ./kubectl-argo-rollouts-linux-amd64 /usr/local/bin/kubectl-argo-rollouts - which is understood, but how do I overcome that?
I've not come across any other way to install ArgoRollouts. There are documents available on installing ArgoCD but not ArgoRollouts.
We use Kustomize to generate our manifests for Argo Rollouts. We also have Argo CD manage Argo Rollouts as a separate application.
> cat kustomization.yml
resources:
  - https://raw.githubusercontent.com/argoproj/argo-rollouts/v1.2.1/manifests/install.yaml
  - https://raw.githubusercontent.com/argoproj/argo-rollouts/v1.2.1/manifests/dashboard-install.yaml
  - https://raw.githubusercontent.com/argoproj/argo-rollouts/v1.2.1/manifests/notifications-install.yaml
images:
  - name: quay.io/argoproj/argo-rollouts
    newTag: v1.2.1
  - name: quay.io/argoproj/kubectl-argo-rollouts
    newTag: v1.2.1
namespace: argo-rollouts
If you want to install manually (ie Argo CD not managing it), then you can navigate to the kustomization directory and run kustomize build . | kubectl apply -f -

Mounting Azure Blob Storage in Docker File

I am creating a Dockerfile in Azure DevOps and am trying to mount an Azure Blob Storage container (which contains files) into the image. I understand that there is help on Microsoft's site regarding volumes, but it is in YAML format, which is not suitable for me as I am using an Azure DevOps pipeline to build my image. I truly appreciate your help. Thanks in advance.
Currently my code looks like this:
FROM ubuntu:18.04
# Update the repository sources list
RUN apt-get update
FROM python:3.7.5
RUN apt-get update
RUN DEBIAN_FRONTEND=noninteractive
RUN apt-get install -y build-essential python3-pip python3-dev postgresql-11 postgresql-contrib ufw sudo libssl-dev libffi-dev
RUN apt-get install -y libldap2-dev libsasl2-dev ldap-utils
RUN apt-get install -y systemd vim
# Install ODBC driver 17
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/ubuntu/18.04/prod.list > /etc/apt/sources.list.d/mssql-release.list
RUN apt-get update -y
RUN ACCEPT_EULA=Y apt-get install -y msodbcsql17 mssql-tools unixodbc-dev redis-server rabbitmq-server
# Install python libraries
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code
RUN pip install --upgrade pip && \
pip install -r requirements.txt
RUN mkdir /home/workspace
WORKDIR /home/workspace
COPY ./ /home/workspace
# Mount
## *** REQUIRE HELP HERE ***##
RUN service postgresql start
CMD ["python", "home/workspace/python manage.py runserver 0:8000"]
The final runtime in which your container will be deployed is relevant, because Azure offers different options depending on where you actually deploy it.
Please consider, for instance, this question in which I provided some help: as you can see, if you use Docker Compose and App Service, Azure gives you the opportunity to mount an Azure File share or Blob Storage in your container.
According to your comments, you are using AKS. AKS and Kubernetes are a very specific use case. Instead of trying to use the AZ CLI to copy content from the storage account to your container, it would be preferable to take advantage of the mechanisms that Kubernetes provides and use volume mounts.
Please consider, for instance, reading this excellent article. The author covers, in two companion posts, a use case very similar to yours, but with SQLite.
Basically, following the guidance provided in the aforementioned post, first create a Kubernetes Secret with the Azure Storage connection data. You need to provide your storage account name and the value of one of your access keys. The yaml configuration file will look similar to:
apiVersion: v1
kind: Secret
metadata:
  name: your-secret
  namespace: your-namespace
type: Opaque
data:
  azurestorageaccountname: Base64_Encoded_Value_Here
  azurestorageaccountkey: Base64_Encoded_Value_Here
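The Base64 values can be produced with base64, or you can let kubectl encode them for you; a small sketch (the account name and secret name here are placeholders):

```shell
# Encode a placeholder account name for the Secret's data fields
printf 'mystorageaccount' | base64
# -> bXlzdG9yYWdlYWNjb3VudA==

# Or create the Secret directly and let kubectl handle the encoding:
# kubectl create secret generic your-secret \
#   --namespace your-namespace \
#   --from-literal=azurestorageaccountname=mystorageaccount \
#   --from-literal=azurestorageaccountkey=<your-access-key>
```

Note that printf (or echo -n) matters here: a stray trailing newline in the encoded value would corrupt the key.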
Then, create the Kubernetes deployment and define the appropriate volume mounts. For example, assuming the name of your Azure Storage file share is your-file-share-name:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-azurestorage-test
  namespace: your-namespace
spec:
  selector:
    matchLabels:
      app: your-app
  template:
    metadata:
      labels:
        app: your-app
    spec:
      containers:
        - name: your-app
          image: your-registry.azurecr.io/your-app:latest
          volumeMounts:
            - name: azurefileshare
              mountPath: /postgre-require-dmount-path
      imagePullSecrets:
        - name: your-pull-secret-if-you-have-one
      volumes:
        - name: azurefileshare
          azureFile:
            secretName: your-secret
            shareName: your-file-share-name
            readOnly: false
The important things to note in the deployment are volumeMounts, which specifies the path at which the file share will be mounted inside the container (choose whatever path is appropriate for your PostgreSQL deployment), and volumes, whose azureFile entry carries the name of the Secret created previously and the name of your actual Azure Storage file share.

How to use local docker images with Minikube?

I have several docker images that I want to use with minikube. I don't want to first have to upload and then download the same image instead of just using the local image directly. How do I do this?
Stuff I tried:
1. I tried running these commands (separately, deleting the instances of minikube both times and starting fresh)
kubectl run hdfs --image=fluxcapacitor/hdfs:latest --port=8989
kubectl run hdfs --image=fluxcapacitor/hdfs:latest --port=8989 imagePullPolicy=Never
Output:
NAME READY STATUS RESTARTS AGE
hdfs-2425930030-q0sdl 0/1 ContainerCreating 0 10m
It just gets stuck on some status but never reaches the ready state.
2. I tried creating a registry and then putting images into it but that didn't work either. I might've done that incorrectly but I can't find proper instructions to do this task.
Please provide instructions to use local docker images in local kubernetes instance.
OS: ubuntu 16.04
Docker : Docker version 1.13.1, build 092cba3
Kubernetes :
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.3", GitCommit:"029c3a408176b55c30846f0faedf56aae5992e9b", GitTreeState:"clean", BuildDate:"2017-02-15T06:40:50Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"1970-01-01T00:00:00Z", GoVersion:"go1.7.1", Compiler:"gc", Platform:"linux/amd64"}
If someone could help me get a solution that uses docker-compose to do this, that'd be awesome.
Edit:
Images loaded in eval $(minikube docker-env):
REPOSITORY TAG IMAGE ID CREATED SIZE
fluxcapacitor/jupyterhub latest e5175fb26522 4 weeks ago 9.59 GB
fluxcapacitor/zeppelin latest fe4bc823e57d 4 weeks ago 4.12 GB
fluxcapacitor/prediction-pmml latest cae5b2d9835b 4 weeks ago 973 MB
fluxcapacitor/scheduler-airflow latest 95adfd56f656 4 weeks ago 8.89 GB
fluxcapacitor/loadtest latest 6a777ab6167c 5 weeks ago 899 MB
fluxcapacitor/hdfs latest 00fa0ed0064b 6 weeks ago 1.16 GB
fluxcapacitor/sql-mysql latest 804137671a8c 7 weeks ago 679 MB
fluxcapacitor/metastore-1.2.1 latest ea7ce8c5048f 7 weeks ago 1.35 GB
fluxcapacitor/cassandra latest 3cb5ff117283 7 weeks ago 953 MB
fluxcapacitor/apachespark-worker-2.0.1 latest 14ee3e4e337c 7 weeks ago 3.74 GB
fluxcapacitor/apachespark-master-2.0.1 latest fe60b42d54e5 7 weeks ago 3.72 GB
fluxcapacitor/package-java-openjdk-1.8 latest 1db08965289d 7 weeks ago 841 MB
gcr.io/google_containers/kubernetes-dashboard-amd64 v1.5.1 1180413103fd 7 weeks ago 104 MB
fluxcapacitor/stream-kafka-0.10 latest f67750239f4d 2 months ago 1.14 GB
fluxcapacitor/pipeline latest f6afd6c5745b 2 months ago 11.2 GB
gcr.io/google-containers/kube-addon-manager v6.1 59e1315aa5ff 3 months ago 59.4 MB
gcr.io/google_containers/kubedns-amd64 1.9 26cf1ed9b144 3 months ago 47 MB
gcr.io/google_containers/kube-dnsmasq-amd64 1.4 3ec65756a89b 5 months ago 5.13 MB
gcr.io/google_containers/exechealthz-amd64 1.2 93a43bfb39bf 5 months ago 8.37 MB
gcr.io/google_containers/pause-amd64
As the handbook describes, you can reuse the Docker daemon from Minikube with eval $(minikube docker-env).
So to use an image without uploading it, you can follow these steps:
Set the environment variables with eval $(minikube docker-env)
Build the image with the Docker daemon of Minikube (eg docker build -t my-image .)
Set the image in the pod spec like the build tag (eg my-image)
Set the imagePullPolicy to Never, otherwise Kubernetes will try to download the image.
Important note: You have to run eval $(minikube docker-env) on each terminal you want to use, since it only sets the environment variables for the current shell session.
What worked for me, based on the solution by @svenwltr:
# Start minikube
minikube start
# Set docker env
eval $(minikube docker-env) # unix shells
minikube docker-env | Invoke-Expression # PowerShell
# Build image
docker build -t foo:0.0.1 .
# Run in minikube
kubectl run hello-foo --image=foo:0.0.1 --image-pull-policy=Never
# Check that it's running
kubectl get pods
There is one easy and effective way to push your local Docker image directly to minikube, which saves the time of building the image inside minikube again.
minikube image load <image name>
(minikube cache add <image name> is the old, deprecated way, kept for reference.)
More details here
All possible methods for pushing images to minikube are mentioned here: https://minikube.sigs.k8s.io/docs/handbook/pushing/
Notes:
This answer isn't limited to minikube!
If you want to create the registry on minikube's Docker, then run eval $(minikube docker-env) first (to make docker available in the host machine's terminal). Otherwise, enter the virtual machine via minikube ssh and then proceed with the following steps.
Depending on your operating system, minikube will automatically mount your home path onto the VM.
As Eli stated, you'll need to add the local registry as insecure in order to use http (this may not apply when using localhost, but does apply if you use the local hostname).
Don't use http in production; make the effort to secure things properly.
Use a local registry:
docker run -d -p 5000:5000 --restart=always --name local-registry registry:2
Now tag your image properly:
docker tag ubuntu localhost:5000/ubuntu
Note that localhost should be changed to the DNS name of the machine running the registry container.
Now push your image to local registry:
docker push localhost:5000/ubuntu
You should be able to pull it back:
docker pull localhost:5000/ubuntu
Now change your yaml file to use the local registry.
Think about mounting volumes at appropriate location, to persist the images on the registry.
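The registry image keeps its data under /var/lib/registry, so persisting pushed images across container restarts can be done with a bind mount; a sketch (the host directory is an example path):

```shell
# Local registry with image data persisted to a host directory,
# so pushed images survive container restarts
docker run -d -p 5000:5000 --restart=always --name local-registry \
  -v /srv/registry-data:/var/lib/registry \
  registry:2
```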
Adding to @Farhad's answer, based on this answer:
These are the steps to set up a local registry.
Setup on the local machine
Set up a hostname on the local machine: edit /etc/hosts to add this line:
127.0.0.1 docker.local
Now start a local registry (remove -d to run in non-daemon mode):
docker run -d -p 5000:5000 --restart=always --name registry registry:2
Now tag your image properly:
docker tag ubuntu docker.local:5000/ubuntu
Now push your image to local registry:
docker push docker.local:5000/ubuntu
Verify that image is pushed:
curl -X GET http://docker.local:5000/v2/ubuntu/tags/list
Setup in minikube
ssh into minikube with: minikube ssh
Edit /etc/hosts to add this line:
<your host machine's ip> docker.local
Verify access:
curl -X GET http://docker.local:5000/v2/ubuntu/tags/list
Now if you try to pull, you might get an http access error.
Enable insecure access:
If you are always planning to use minikube with this local setup, then create a minikube cluster that uses the insecure registry by default (this won't work on an existing cluster):
minikube start --insecure-registry="docker.local:5000"
Otherwise, follow the steps below:
systemctl stop docker
edit the docker service file: get the path from systemctl status docker
it might be :
/etc/systemd/system/docker.service.d/10-machine.conf or
/usr/lib/systemd/system/docker.service
append this text (replace 192.168.1.4 with your IP)
--insecure-registry docker.local:5000 --insecure-registry 192.168.1.4:5000
to this line
ExecStart=/usr/bin/docker daemon -H tcp://0.0.0.0:2376 -H
unix:///var/run/docker.sock --tlsverify --tlscacert /etc/docker/ca.pem
--tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=virtualbox --insecure-registry 10.0.0.0/24
systemctl daemon-reload
systemctl start docker
try pulling:
docker pull docker.local:5000/ubuntu
Now change your yaml file to use local registry.
containers:
  - name: ampl-django
    image: dockerhub/ubuntu
to
containers:
  - name: ampl-django
    image: docker.local:5000/nymbleup
Don't use http in production, make the effort for securing things up.
Newer versions of minikube allow you to load an image from the local Docker instance by running:
minikube image rm <imagename>:<version>
minikube image load <imagename>:<version> --daemon
The load command might show an error, but the image still gets loaded to your minikube instance.
One thing to remember regarding minikube is that minikube's host is not the same as your local host. Therefore, to use local images for testing with minikube, you must first build your Docker image locally (or pull it locally) and then add it into the minikube context, which is nothing more than another Linux instance, using the command below:
minikube cache add <image>:<tag>
Also, don't forget to set imagePullPolicy: Never in your Kubernetes deployment yamls, as it ensures the locally added image is used instead of being pulled remotely from a registry.
Update: minikube cache will be deprecated in upcoming versions; please switch to minikube image load.
One approach is to build the image locally and then do:
docker save imageNameGoesHere | pv | (eval $(minikube docker-env) && docker load)
minikube docker-env might not return the correct info when running under a different user / sudo. Instead, you can run sudo -u yourUsername minikube docker-env.
It should return something like:
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/home/chris/.minikube/certs"
export DOCKER_API_VERSION="1.23"
# Run this command to configure your shell:
# eval $(minikube docker-env)
In addition to the accepted answer, you can also achieve what you originally wanted (creating a deployment using the run command) with the following command:
kubectl run hdfs --image=fluxcapacitor/hdfs:latest --port=8989 --generator=run-pod/v1
I found the information about the generator on the Kubernetes-dev forum:
If you're using kubectl run, it generates a manifest for you that happens to have imagePullPolicy set to Always by default. You can use this command to get an imagePullPolicy of IfNotPresent, which will work for minikube:
kubectl run --image=<container> --generator=run-pod/v1
Dan Lorenc
https://groups.google.com/forum/#!topic/kubernetes-dev/YfvWuFr_XOM
If anyone is looking to come back to the local environment after setting the minikube env, use the following command:
eval $(docker-machine env -u)
A simpler method that answers the original question "How to use local docker images with Minikube?", is to save the image to a tar file and load it into minikube:
# export the docker image to a tar file
docker save --output my-image.tar the.full.path.to/the/docker/image:the-tag
# set local environment variables so that docker commands go to the docker in minikube
eval $(minikube docker-env)
# or if on windows: #FOR /f "tokens=*" %i IN ('minikube docker-env') DO #%i
# import the docker image from the tar file into minikube
docker load --input my-image.tar
# cleanup - put docker back to normal
eval $(minikube docker-env -u)
# or if on windows: #FOR /f "tokens=*" %i IN ('minikube docker-env -u') DO #%i
Then running the image involves a command like the following. Make sure to include the "--image-pull-policy=Never" parameter.
kubectl run my-image --image=the.full.path.to/the/docker/image:the-tag --image-pull-policy=Never --port=80
From the kubernetes docs:
https://kubernetes.io/docs/concepts/containers/images/#updating-images
The default pull policy is IfNotPresent which causes the Kubelet to skip pulling an image if it already exists. If you would like to always force a pull, you can do one of the following:
set the imagePullPolicy of the container to Always;
use :latest as the tag for the image to use;
enable the AlwaysPullImages admission controller.
Or, read the other way: using the :latest tag forces images to always be pulled. If you use eval $(minikube docker-env) as mentioned above, then either don't use any tag, or assign a tag to your local image; that way you avoid Kubernetes trying to forcibly pull it.
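A minimal container-spec sketch of this advice (the image name and tag are hypothetical):

```yaml
# Explicitly tagged local image; IfNotPresent (or Never) avoids the
# forced pull that an untagged/:latest image would trigger
containers:
  - name: my-app
    image: my-image:0.0.1
    imagePullPolicy: IfNotPresent
```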
One idea would be to save the docker image locally and later load it into minikube as follows:
Let say, for example, you already have puckel/docker-airflow image.
Save that image to local disk -
docker save puckel/docker-airflow > puckel_docker_airflow.tar
Now enter into minikube docker env -
eval $(minikube docker-env)
Load that locally saved image -
docker load < puckel_docker_airflow.tar
It is that simple and it works like a charm.
minikube addons enable registry -p minikube
💡 Registry addon on with docker uses 32769 please use that instead
of default 5000
📘 For more information see:
https://minikube.sigs.k8s.io/docs/drivers/docker
docker tag ubuntu $(minikube ip -p minikube):32769/ubuntu
docker push $(minikube ip -p minikube):32769/ubuntu
OR
minikube addons enable registry
docker tag ubuntu $(minikube ip):32769/ubuntu
docker push $(minikube ip):32769/ubuntu
The above is good enough for development purposes. I am doing this on Arch Linux.
There is now a Minikube Registry addon, this is probably the easiest way. Here is how to use it: https://minikube.sigs.k8s.io/docs/tasks/registry/insecure/
Note that I had DNS issues, might be a bug.
You should know that docker in your local machine is separated from the docker in your minikube cluster.
So you should load/copy a Docker image from your local machine into the minikube cluster:
minikube image load <IMAGE_NAME>
or alternatively when working with minikube, you can build images directly inside it:
#instead of:
docker image build -t <IMAGE_NAME> .
#do:
minikube image build -t <IMAGE_NAME> .
To add to the previous answers: if you have a tarball image, you can simply load it into your local Docker set of images with docker image load -i /path/image.tar. Please remember to run it after eval $(minikube docker-env), since minikube does not share images with the locally installed Docker engine.
Other answers assume you use minikube with a VM, so your local images are not accessible from the minikube VM.
If you use minikube with --vm-driver=none, you can easily reuse local images by setting the image pull policy to Never:
kubectl run hello-foo --image=foo --image-pull-policy=Never
or by setting the imagePullPolicy field for containers in the corresponding .yaml manifests.
Steps to run local docker images in kubernetes:
1. eval $(minikube -p minikube docker-env)
2. In the artifact file, under the spec section -> containers, add imagePullPolicy: IfNotPresent or imagePullPolicy: Never
apiVersion: "v1"
kind: Pod
metadata:
  name: web
  labels:
    name: web
    app: demo
spec:
  containers:
    - name: web
      image: web:latest
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 5000
          name: http
          protocol: TCP
3. Then run kubectl create -f <filename>
For minikube on Docker:
Option 1: Using minikube registry
Check your minikube ports
docker ps
You will see something like: 127.0.0.1:32769->5000/tcp
It means that your minikube registry is on port 32769 for external usage, but internally it's on port 5000.
Build your docker image tagging it:
docker build -t 127.0.0.1:32769/hello .
Push the image to the minikube registry:
docker push 127.0.0.1:32769/hello
Check if it's there:
curl http://localhost:32769/v2/_catalog
Build some deployment using the internal port:
kubectl create deployment hello --image=127.0.0.1:5000/hello
Your image is now in the minikube container; to see it, run:
eval $(minikube -p <PROFILE> docker-env)
docker images
Caveat: if you use only one profile, named "minikube", then the "-p <PROFILE>" section is redundant; but if you use more, don't forget about it. Personally, I delete the standard one (minikube) so as not to make mistakes.
Option 2: Not using registry
Switch to minikube container Docker:
eval $(minikube -p <PROFILE> docker-env)
Build your image:
docker build -t hello .
Create some deployment:
kubectl create deployment hello --image=hello
At the end, change the deployment's imagePullPolicy from Always to IfNotPresent:
kubectl edit deployment hello
In addition of minikube image load <image name>, check out the latest (Nov 2021 at the time of writing) release of Minikube.
v1.24.0
Add --no-kubernetes flag to start minikube without Kubernetes
See PR 12848, for
That gives you:
mk start --no-kubernetes
minikube v1.24.0-beta.0 on Darwin 11.6 (arm64)
Automatically selected the docker driver
Starting minikube without Kubernetes minikube in cluster minikube
Pulling base image ...
Creating docker container (CPUs=2, Memory=1988MB) ...
Done! minikube is ready without Kubernetes!
Things to try without Kubernetes
"minikube ssh" to SSH into minikube's node.
"minikube docker-env" to build images by pointing to the docker inside minikube
"minikube image" to build images without docker
Building off the earlier answer to use eval $(minikube docker-env) to load minikube's Docker environment: for an easier toggle, add the following function to your shell rc file:
dockube() {
  if [[ $1 = 'which' ]]; then
    if [[ $MINIKUBE_ACTIVE_DOCKERD = 'minikube' ]]; then
      echo $MINIKUBE_ACTIVE_DOCKERD
    else
      echo 'system'
    fi
    return
  fi
  if [[ $MINIKUBE_ACTIVE_DOCKERD = 'minikube' ]]; then
    eval $(minikube docker-env -u)
    echo "now using system docker"
  else
    eval $(minikube -p minikube docker-env)
    echo "now using minikube docker"
  fi
}
dockube with no argument toggles between the system and minikube Docker environments, and dockube which reports which one is in use.
For Windows users, the way I do it.
I use the docker desktop to host my MiniKube image and use PowerShell as a console.
First I create my MiniKube cluster:
minikube start --bootstrapper=kubeadm --vm-driver=docker --profile "cluster1"
For instance, let's say I have a Dockerfile contains:
FROM nginx
The 2-step way: build the image, then load it into minikube:
docker build -t mynginximage .
minikube image load mynginximage
Or the 1-step way: build directly in MiniKube:
minikube image build -t mynginximage .
To run my image in MiniKube
kubectl run myweb --image=mynginximage --image-pull-policy=Never
or via mynginxpod.yaml file:
apiVersion: v1
kind: Pod
metadata:
name: myweb
spec:
containers:
- name: myweb
image: mynginximage
imagePullPolicy: Never
ports:
- containerPort: 80
And kubectl apply -f .\mynginxpod.yaml
Now to test it, run:
kubectl get pods myweb
NAME READY STATUS RESTARTS AGE
myweb 1/1 Running 0 25s
To access it:
kubectl exec --stdin --tty myweb -- /bin/bash
To expose it:
kubectl port-forward myweb 3333:80
What if you could just run k8s within Docker's VM? There is native support for this in more recent versions of Docker Desktop... you just need to enable it.
https://www.docker.com/blog/kubernetes-is-now-available-in-docker-desktop-stable-channel/
https://www.docker.com/blog/docker-windows-desktop-now-kubernetes/
How I found this out:
While reading the docs for Helm, they give you a brief tutorial on how to install minikube. That tutorial installs minikube in a VM that's different/separate from Docker. So when it came time to install my Helm charts, I couldn't get Helm/k8s to pull the images I had built using Docker. That's how I arrived at this question.
So... if you can live with whatever version of k8s comes with Docker Desktop, and you can live with it running in whatever VM Docker has, then maybe this solution is a bit easier than some of the others.
Disclaimer: not sure how switching between Windows/Linux containers would impact anything.
setup minikube docker-env
again build the same docker image (using minikube docker-env)
change imagePullPolicy to Never in your deployment
Actually, what happens here is that your minikube can't see your local Docker daemon, as it is an independent service. You first have to set up your minikube Docker environment; use the command below:
eval $(minikube docker-env)
If you run the command below, it will show where your minikube looks for Docker:
~$ minikube docker-env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.37.192:2376"
export DOCKER_CERT_PATH="/home/ubuntu/.minikube/certs"
export MINIKUBE_ACTIVE_DOCKERD="minikube"
# To point your shell to minikube's docker-daemon, run:
# eval $(minikube -p minikube docker-env)
You have to rebuild your images after setting up the minikube docker-env; otherwise it will fail.
There are two easy ways to load local images to Minikube.
Always make sure to set imagePullPolicy: Never in your deployment yaml.
Eg:
spec:
  containers:
    - name: myapp
      image: pz/demo
      imagePullPolicy: Never
      ports:
        - containerPort: 8080
Luckily, there are two straightforward commands to help with this.
The first one is the image load command. You can load a Docker image from your local machine into the Minikube cluster with the following command.
General
minikube image load <IMAGE_NAME>
Example
minikube image load pz/demo
After loading the image to your Minikube cluster, you can restart your Pods of the above Deployment and notice that they are starting fine.
With the previous way, you always build the Docker image on your local machine and then move it to the Minikube container, which again takes a bit of time, even though not a lot.
Using the image build command of Minikube, we can build the image directly inside the Minikube container.
General
minikube image build -t <IMAGE_NAME> <PATH_TO_DOCKERFILE>
Example
minikube image build -t pz/demo /New APP/Dockerfile
Using the minikube image build command, the image is instantly available to Minikube and doesn't have to be explicitly loaded in a second step via the minikube image load command.
Using one of both methods to get our application Docker image into Minikube and restart the Pods, we can recheck the logs of the Deployment:
Further, to verify end to end that everything is working as expected, we can port forward our local port 8080 to the 8080 of the Deployment by using:
kubectl port-forward deployment/myapp 8080:8080
Rechecking the browser, we see that the locally built application runs fine on the Minikube cluster.
Ref: https://levelup.gitconnected.com/two-easy-ways-to-use-local-docker-images-in-minikube-cd4dcb1a5379
You can either reuse the Docker shell with eval $(minikube docker-env), or alternatively leverage docker save | docker load across the shells.
On minikube 1.20, minikube cache add imagename:tag is deprecated.
Instead use minikube image load imagename:tag
If I understand correctly, you have local images, maybe passed via a USB pen drive, and want to load them into minikube?
Just load the image like this:
minikube image load my-local-image:0.1
With this, in the Kubernetes yaml file, you can change imagePullPolicy to Never, and the image will be found because you just loaded it into minikube.
I had this problem, did this, and it worked.
Most of the answers are already great. But one important thing I have faced is that if you are using BuildKit (DOCKER_BUILDKIT=1), then images created after executing eval $(minikube docker-env) will not go to the minikube Docker engine. Instead they will go to the Docker engine on your local machine.
So remove any such references if you are using, for example:
--mount=type=cache,target=/root/.m2
