How to deploy my Cassandra cluster with Kubernetes

I have tried to install Cassandra on my Kubernetes cluster. After executing the commands
kubectl apply -f cassandra-service.yaml
and
kubectl apply -f cassandra-statefulset.yaml
I get no errors, but the three Cassandra pods do not come up.
When I execute
kubectl get pods -o wide
the result is that a pod called cassandra-0 is not ready. I expected all of the Cassandra pods to be up already.
This is my cassandra-statefulset.yaml file: https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/cassandra/cassandra-statefulset.yaml
I expect there to be three Cassandra pods, but there is only one, and it is stuck in the Pending state.
Here is the result of the previous command:

What Kubernetes environment do you use? Do you use Minikube?
It seems that the cluster cannot create the PersistentVolumeClaim. Maybe the StorageClass configuration doesn't suit your cluster.
Also, the example Cassandra deployment contains these resource requirements per replica:
resources:
  limits:
    cpu: "500m"
    memory: 1Gi
  requests:
    cpu: "500m"
    memory: 1Gi
So, for three replicas, your cluster needs about 1.5 free CPUs and ~3 GiB of memory.
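You can check whether the claim or the resource requests are what is blocking scheduling (a diagnostic sketch; pod and claim names depend on your manifests):
# why is the pod stuck?
kubectl describe pod cassandra-0
# is the volume claim bound or still pending?
kubectl get pvc
# which storage classes does the cluster offer?
kubectl get storageclass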
In my opinion, it's better and easier to use Helm charts for infrastructure deployments, for example: https://github.com/bitnami/charts/tree/master/bitnami/cassandra

Maybe there are insufficient resources in the minikube config, so try to delete, reconfigure, and start minikube again, then deploy Cassandra once more.
Note: minikube delete will delete the whole configured k8s cluster, so be careful.
minikube delete
minikube config set cpus 4
minikube config set memory 5120
minikube start
kubectl apply -f https://k8s.io/examples/application/cassandra/cassandra-service.yaml
kubectl apply -f https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml
Ref: https://kubernetes.io/docs/tutorials/stateful-application/cassandra/
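After minikube comes back up and you have redeployed, you can confirm that the node really has the new capacity and watch the pods come up (a quick check; the single node is named minikube by default):
kubectl describe node minikube | grep -A 5 Allocatable
kubectl get pods -o wide --watch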

Related

Azure Kubernetes Cluster: Could not find a ready tiller pod

I created an AKS cluster using the az aks create command with kubenet networking and 2 nodes. Due to a permissions issue in the AD account, the NSG had to be switched off before running the aks create command. After the AKS cluster was created successfully, the NSG was reapplied.
In order to check the health of the newly created cluster, when I run:
kubectl get nodes --all-namespaces;
there are no nodes returned.
However, when I look at the Azure portal and the corresponding vNet, there are 2 VMSS instances created using IPs within the subnet range.
When I run:
kubectl get pods --all-namespaces;
all pods are in pending state:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-xxxxdxxxxx-xxxxx 0/1 Pending 0 5h
kube-system coredns-autoscaler-xxdxxxxxxxx-xxxx 0/1 Pending 0 5h
kube-system kubernetes-dashboard-xxdxxxxxx-xxxxx 0/1 Pending 0 5h
kube-system metrics-server-xxxxxxxdxx-xxxx 0/1 Pending 0 5h
kube-system omsagent-rs-xxxxxxxxdx-xxxxx 0/1 Pending 0 5h
kube-system tiller-deploy-xxxxxxxdxxx-xxxx 0/1 Pending 0 34m
kube-system tunnelfront-xxxxxxxdx-xxxxx 0/1 Pending 0 5h
I then did a describe on the coredns pod:
kubectl describe pod coredns-xxxxxxxxxx-xxxx -n kube-system
Warning FailedScheduling 2m40s (x2242 over 2d5h) default-scheduler
no nodes available to schedule pods
I need to deploy some containers using helm/tiller and when I run the installation commands I get the error
Error: could not find a ready tiller pod
I know this is not directly to do with the helm/tiller installation; the issue may be a bit deeper.
I am new to Kubernetes, any thoughts on how to diagnose the issue will be much appreciated.
If no nodes are returned from kubectl get nodes, I'd suggest recreating the cluster, since with no nodes no pods can ever run on it. You might also try upgrading the cluster to a newer version of Kubernetes (this would effectively redeploy the nodes); that might help.
You can also inspect the tiller pod logs manually:
kubectl logs --namespace kube-system tiller-deploy-xxxxxxxdxxx-xxxx
As stated in the comments above, there are no nodes and all the pods are in the Pending state according to your logs, so as recommended you need to delete and recreate the cluster.
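If you want to try the upgrade route before recreating the cluster, the Azure CLI can do it (a sketch; substitute your own resource group and cluster name):
# list the Kubernetes versions the cluster can move to
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table
# upgrade, which effectively redeploys the nodes
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version <version-from-get-upgrades>
# or delete and recreate from scratch
az aks delete --resource-group myResourceGroup --name myAKSCluster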

Getting 'didn't match node selector' when running Docker Windows container in Azure AKS

On my local machine I created a Windows Docker/Nano Server container and was able to push this container into an Azure Container Registry using this command. (The reason I had to use a Windows container is that I have to use CSOM in ASP.NET Core, and that is not possible on Linux.)
docker push MyContainerRegistry.azurecr.io/myimage:v1
That Docker container IS visible inside the Azure container registry which is MyContainerRegistry
I know that in order to run it I have to create a Container Instance; however, our management team doesn't want to go with that path and wants to use AKS instead
We do have an AKS cluster created
The kubectl IS running in our Azure shell
I tried to create an AKS pod using this command
kubectl apply -f myyaml.yaml
These are the contents of the yaml file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mypod
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mypod
    spec:
      containers:
      - name: mypod
        image: MyContainerRegistry.azurecr.io/itataxsync:v1
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: mysecret
      nodeSelector:
        beta.kubernetes.io/os: windows
The pod was created successfully.
When I run 'get pods' I see a newly created pod
However, when I get into details of this pod, I see the following
"Warning FailedScheduling 3m (x2 over 3m) default-scheduler 0/3
nodes are available: 3 node(s) didn't match node selector."
Does it mean that I simply can't run Docker Windows container in Azure using AKS?
Is there any way I can run Docker Windows container in Azure at all?
Thank you very much for your help!
You cannot yet have Windows nodes on AKS; you can, however, use aks-engine (examples).
Bear in mind that Windows support in Kubernetes is still a bit lacking, so you will unfortunately run into issues.
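To see the mismatch the scheduler is complaining about, you can print the OS label the nodeSelector is matching against (a quick check; on a Linux-only AKS cluster every node will report linux):
kubectl get nodes -L beta.kubernetes.io/os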

PVC volume mount on Azure kubernetes takes over an hour

I have tectonic kubernetes cluster installed on Azure. It's made from tectonic-installer GH repo, from master (commit 0a7a1edb0a2eec8f3fb9e1e612a8ef1fd890c332).
> kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.2", GitCommit:"922a86cfcd65915a9b2f69f3f193b8907d741d9c", GitTreeState:"clean", BuildDate:"2017-07-21T08:23:22Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.3+coreos.0", GitCommit:"42de91f04e456f7625941a6c4aaedaa69708be1b", GitTreeState:"clean", BuildDate:"2017-08-07T19:44:31Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
On the cluster I created storage class, PVC and pod as in: https://gist.github.com/mwieczorek/28b7c779555d236a9756cb94109d6695
But the pod cannot start. When I run:
kubectl describe pod mypod
I get in events:
FailedMount Unable to mount volumes for pod "mypod_default(afc68bee-88cb-11e7-a44f-000d3a28f26a)":
timeout expired waiting for volumes to attach/mount for pod "default"/"mypod". list of unattached/unmounted volumes=[mypd]
In kubelet logs (https://gist.github.com/mwieczorek/900db1e10971a39942cba07e202f3c50) I see:
Error: Volume not attached according to node status for volume "pvc-61a8dc6a-88cb-11e7-ad19-000d3a28f2d3"
(UniqueName: "kubernetes.io/azure-disk//subscriptions/abc/resourceGroups/tectonic-cluster-mwtest/providers/Microsoft.Compute/disks/kubernetes-dynamic-pvc-61a8dc6a-88cb-11e7-ad19-000d3a28f2d3") pod "mypod" (UID: "afc68bee-88cb-11e7-a44f-000d3a28f26a")
When I create the PVC, a new disk is created on Azure.
And after creating the pod, I see in the Azure portal that the disk is attached to the worker VM where the pod is scheduled.
> fdisk -l
shows:
Disk /dev/sdc: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
I found a similar issue on GitHub (kubernetes/kubernetes/issues/50150), but I have a cluster built from master, so it's not the udev rules (I checked - the file /etc/udev/rules.d/66-azure-storage.rules exists).
Does anybody know if this is a bug (maybe a known issue)?
Or am I doing something wrong?
Also: how can I troubleshoot this further?
I tested this in my lab, using your yaml file to create the pod; after one hour it still showed Pending.
root@k8s-master-ED3DFF55-0:~# kubectl get pod
NAME READY STATUS RESTARTS AGE
mypod 0/1 Pending 0 1h
task-pv-pod 1/1 Running 0 2h
We can use this yaml file to create a pod:
PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
  namespace: kube-public
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
Output:
root@k8s-master-ED3DFF55-0:~# kubectl get pvc --namespace=kube-public
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
mypvc Bound pvc-1b097337-8960-11e7-82fc-000d3a191e6a 100Gi RWO default 3h
Pod:
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
Output:
root@k8s-master-ED3DFF55-0:~# kubectl get pods
NAME READY STATUS RESTARTS AGE
task-pv-pod 1/1 Running 0 3h
As a workaround, we can use default as the storageclass.
In Azure, there are managed disks and unmanaged disks. If your nodes use managed disks, two storage classes will be created to provide access to create Kubernetes persistent volumes using Azure managed disks.
They are managed-premium and managed-standard and map to the Premium_LRS and Standard_LRS managed disk types respectively.
If your nodes use unmanaged disks, the default storage class will be used if persistent volume resources don't specify a storage class as part of the resource definition.
The default storage class uses unmanaged blob storage and will provision the blob within an existing storage account present in the resource group, or provision a new storage account.
Unmanaged persistent volume types are available on all VM sizes.
More information about managed disk and non-managed disk, please refer to this link.
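If you want to pin a claim to a specific class instead of relying on the default, you can set it explicitly in the claim spec (a sketch; the class name must exist in your cluster, e.g. default, managed-standard or managed-premium):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: default
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi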
Here is the test result:
root@k8s-master-ED3DFF55-0:~# kubectl get pvc --namespace=default
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
shared Pending standard-managed 2h
shared1 Pending managed-standard 15m
shared12 Pending standard-managed 14m
shared123 Bound pvc-a379ced4-897c-11e7-82fc-000d3a191e6a 2Gi RWO default 12m
task-pv-claim Bound pvc-3cefd456-8961-11e7-82fc-000d3a191e6a 3Gi RWO default 3h
Update:
Here is my K8s agent's unmanaged disk:
In your case, kubectl describe pod <pod-name> does not provide sufficient info; you need to provide the k8s controller manager logs for troubleshooting.
Get the controller manager logs on master:
#get the "CONTAINER ID" of "/hyperkube controlle"
docker ps -a | grep "hyperkube controlle" | awk -F ' ' '{print $1}'
#get controller manager logs
docker logs "CONTAINER ID" > "CONTAINER ID".log 2>&1 &
Provisioning should be very quick. Check your controller logs to make sure the PV required by the PVC is provisioned correctly:
Navigate to Azure portal > cluster > Activity Log
Remove filter for namespaces and look for "Update Storage Account Create" entries.
In our case we needed to register our cluster subscription for the 'Microsoft.Storage' namespace so that the controller can provision the required PV. You can do this with the azure cli:
az provider register --namespace Microsoft.Storage
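You can then confirm that the registration went through (a quick check with the Azure CLI):
az provider show --namespace Microsoft.Storage --query registrationState --output tsv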
I had a similar issue, this command worked for me.
az resource update --ids /subscriptions/<SUBSCRIPTION-ID>/resourcegroups/<RESOURCE-GROUP>/providers/Microsoft.ContainerService/managedClusters/<AKS-CLUSTER-NAME>/agentpools/<NODE-GROUP-NAME>

How to use local docker images with Minikube?

I have several docker images that I want to use with minikube. I don't want to first have to upload and then download the same image instead of just using the local image directly. How do I do this?
Stuff I tried:
1. I tried running these commands (separately, deleting the instances of minikube both times and starting fresh)
kubectl run hdfs --image=fluxcapacitor/hdfs:latest --port=8989
kubectl run hdfs --image=fluxcapacitor/hdfs:latest --port=8989 imagePullPolicy=Never
Output:
NAME READY STATUS RESTARTS AGE
hdfs-2425930030-q0sdl 0/1 ContainerCreating 0 10m
It just gets stuck on some status but never reaches the ready state.
2. I tried creating a registry and then putting images into it but that didn't work either. I might've done that incorrectly but I can't find proper instructions to do this task.
Please provide instructions to use local docker images in local kubernetes instance.
OS: ubuntu 16.04
Docker : Docker version 1.13.1, build 092cba3
Kubernetes :
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.3", GitCommit:"029c3a408176b55c30846f0faedf56aae5992e9b", GitTreeState:"clean", BuildDate:"2017-02-15T06:40:50Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"1970-01-01T00:00:00Z", GoVersion:"go1.7.1", Compiler:"gc", Platform:"linux/amd64"}
If someone could help me get a solution that uses docker-compose to do this, that'd be awesome.
Edit:
Images loaded in eval $(minikube docker-env):
REPOSITORY TAG IMAGE ID CREATED SIZE
fluxcapacitor/jupyterhub latest e5175fb26522 4 weeks ago 9.59 GB
fluxcapacitor/zeppelin latest fe4bc823e57d 4 weeks ago 4.12 GB
fluxcapacitor/prediction-pmml latest cae5b2d9835b 4 weeks ago 973 MB
fluxcapacitor/scheduler-airflow latest 95adfd56f656 4 weeks ago 8.89 GB
fluxcapacitor/loadtest latest 6a777ab6167c 5 weeks ago 899 MB
fluxcapacitor/hdfs latest 00fa0ed0064b 6 weeks ago 1.16 GB
fluxcapacitor/sql-mysql latest 804137671a8c 7 weeks ago 679 MB
fluxcapacitor/metastore-1.2.1 latest ea7ce8c5048f 7 weeks ago 1.35 GB
fluxcapacitor/cassandra latest 3cb5ff117283 7 weeks ago 953 MB
fluxcapacitor/apachespark-worker-2.0.1 latest 14ee3e4e337c 7 weeks ago 3.74 GB
fluxcapacitor/apachespark-master-2.0.1 latest fe60b42d54e5 7 weeks ago 3.72 GB
fluxcapacitor/package-java-openjdk-1.8 latest 1db08965289d 7 weeks ago 841 MB
gcr.io/google_containers/kubernetes-dashboard-amd64 v1.5.1 1180413103fd 7 weeks ago 104 MB
fluxcapacitor/stream-kafka-0.10 latest f67750239f4d 2 months ago 1.14 GB
fluxcapacitor/pipeline latest f6afd6c5745b 2 months ago 11.2 GB
gcr.io/google-containers/kube-addon-manager v6.1 59e1315aa5ff 3 months ago 59.4 MB
gcr.io/google_containers/kubedns-amd64 1.9 26cf1ed9b144 3 months ago 47 MB
gcr.io/google_containers/kube-dnsmasq-amd64 1.4 3ec65756a89b 5 months ago 5.13 MB
gcr.io/google_containers/exechealthz-amd64 1.2 93a43bfb39bf 5 months ago 8.37 MB
gcr.io/google_containers/pause-amd64
As the handbook describes, you can reuse the Docker daemon from Minikube with eval $(minikube docker-env).
So to use an image without uploading it, you can follow these steps:
Set the environment variables with eval $(minikube docker-env)
Build the image with the Docker daemon of Minikube (e.g. docker build -t my-image .)
Set the image in the pod spec like the build tag (e.g. my-image)
Set the imagePullPolicy to Never, otherwise Kubernetes will try to download the image.
Important note: You have to run eval $(minikube docker-env) on each terminal you want to use, since it only sets the environment variables for the current shell session.
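Put together, a minimal pod manifest following these steps could look like this (a sketch; my-image is the hypothetical tag from the build step):
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-app
    image: my-image
    imagePullPolicy: Never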
What worked for me, based on the solution by @svenwltr:
# Start minikube
minikube start
# Set docker env
eval $(minikube docker-env) # unix shells
minikube docker-env | Invoke-Expression # PowerShell
# Build image
docker build -t foo:0.0.1 .
# Run in minikube
kubectl run hello-foo --image=foo:0.0.1 --image-pull-policy=Never
# Check that it's running
kubectl get pods
There is one easy and effective way to push your local Docker image directly to minikube, which saves the time of building the images inside minikube again.
minikube image load <image name>
(minikube cache add <image name> - old deprecated way, for reference)
More details here
All possible methods to push images to minikube are mentioned here: https://minikube.sigs.k8s.io/docs/handbook/pushing/
Notes:
This answer isn't limited to minikube!
If you want to create the registry on minikube's Docker, then run eval $(minikube docker-env) first (to make that Docker daemon available in the host machine's terminal).
Otherwise enter the virtual machine via minikube ssh, and then proceed with the following steps.
Depending on your operating system, minikube will automatically mount your home path onto the VM.
As Eli stated, you'll need to add the local registry as insecure in order to use HTTP (this may not apply when using localhost, but does apply if you use the local hostname).
Don't use HTTP in production; make the effort to secure things properly.
Use a local registry:
docker run -d -p 5000:5000 --restart=always --name local-registry registry:2
Now tag your image properly:
docker tag ubuntu localhost:5000/ubuntu
Note that localhost should be changed to the DNS name of the machine running the registry container.
Now push your image to local registry:
docker push localhost:5000/ubuntu
You should be able to pull it back:
docker pull localhost:5000/ubuntu
Now change your yaml file to use the local registry.
Think about mounting a volume at the appropriate location to persist the images in the registry.
Adding to @Farhad's answer, based on this answer, these are the steps to set up a local registry.
Setup in local machine
Set up a hostname on the local machine: edit /etc/hosts to add this line
127.0.0.1 docker.local
Now start a local registry (remove -d to run in non-daemon mode):
docker run -d -p 5000:5000 --restart=always --name registry registry:2
Now tag your image properly:
docker tag ubuntu docker.local:5000/ubuntu
Now push your image to local registry:
docker push docker.local:5000/ubuntu
Verify that image is pushed:
curl -X GET http://docker.local:5000/v2/ubuntu/tags/list
Setup in minikube
SSH into minikube with: minikube ssh
edit /etc/hosts to add this line
<your host machine's IP> docker.local
Verify access:
curl -X GET http://docker.local:5000/v2/ubuntu/tags/list
Now if you try to pull, you might get an HTTP access error.
Enable insecure access:
If you are always planning to use minikube with this local setup, then create the minikube cluster so that it uses the insecure registry by default (this won't work on an existing cluster):
minikube start --insecure-registry="docker.local:5000"
Otherwise follow the steps below:
systemctl stop docker
edit the docker service file: get the path from systemctl status docker
it might be:
/etc/systemd/system/docker.service.d/10-machine.conf or
/usr/lib/systemd/system/docker.service
append this text (replace 192.168.1.4 with your ip)
--insecure-registry docker.local:5000 --insecure-registry 192.168.1.4:5000
to this line
ExecStart=/usr/bin/docker daemon -H tcp://0.0.0.0:2376 -H
unix:///var/run/docker.sock --tlsverify --tlscacert /etc/docker/ca.pem
--tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=virtualbox --insecure-registry 10.0.0.0/24
systemctl daemon-reload
systemctl start docker
try pulling:
docker pull docker.local:5000/ubuntu
Now change your yaml file to use local registry.
containers:
  - name: ampl-django
    image: dockerhub/ubuntu
to
containers:
  - name: ampl-django
    image: docker.local:5000/nymbleup
Don't use HTTP in production; make the effort to secure things properly.
Newer versions of minikube allow you to load an image from the local Docker instance by running
minikube image rm <imagename>:<version>
minikube image load <imagename>:<version> --daemon
The load command might show an error, but the image still gets loaded to your minikube instance.
One thing to remember regarding minikube is that its host is not the same as your local host. Therefore, to use local images for testing with minikube, you must first build the Docker image locally (or pull it locally) and then add it to the minikube context, which is nothing more than another Linux instance, using the command below:
minikube cache add <image>:<tag>
Also, don't forget to set imagePullPolicy: Never in your Kubernetes deployment yamls, as it ensures the locally added images are used instead of being pulled remotely from the registry.
Update: minikube cache will be deprecated in upcoming versions, please switch to minikube image load.
One approach is to build the image locally and then do:
docker save imageNameGoesHere | pv | (eval $(minikube docker-env) && docker load)
minikube docker-env might not return the correct info running under a different user / sudo. Instead you can run sudo -u yourUsername minikube docker-env.
It should return something like:
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/home/chris/.minikube/certs"
export DOCKER_API_VERSION="1.23"
# Run this command to configure your shell:
# eval $(minikube docker-env)
In addition to the accepted answer, you can also achieve what you originally wanted (creating a deployment using the run command) with the following command:
kubectl run hdfs --image=fluxcapacitor/hdfs:latest --port=8989 --generator=run-pod/v1
I found the information about the generator on the Kubernetes-dev forum:
If you're using kubectl run, it generates a manifest for you that happens to have imagePullPolicy set to Always by default. You can use this command to get an imagePullPolicy of IfNotPresent, which will work for minikube:
kubectl run --image=<container> --generator=run-pod/v1
Dan Lorenc
https://groups.google.com/forum/#!topic/kubernetes-dev/YfvWuFr_XOM
If anyone is looking to come back to the local environment after setting the minikube env, use following command.
eval $(docker-machine env -u)
A simpler method that answers the original question "How to use local docker images with Minikube?", is to save the image to a tar file and load it into minikube:
# export the docker image to a tar file
docker save --output my-image.tar the.full.path.to/the/docker/image:the-tag
# set local environment variables so that docker commands go to the docker in minikube
eval $(minikube docker-env)
# or if on windows: #FOR /f "tokens=*" %i IN ('minikube docker-env') DO #%i
# import the docker image from the tar file into minikube
docker load --input my-image.tar
# cleanup - put docker back to normal
eval $(minikube docker-env -u)
# or if on windows: #FOR /f "tokens=*" %i IN ('minikube docker-env -u') DO #%i
Then running the image involves a command like the following. Make sure to include the "--image-pull-policy=Never" parameter.
kubectl run my-image --image=the.full.path.to/the/docker/image:the-tag --image-pull-policy=Never --port=80
From the kubernetes docs:
https://kubernetes.io/docs/concepts/containers/images/#updating-images
The default pull policy is IfNotPresent which causes the Kubelet to skip pulling an image if it already exists. If you would like to always force a pull, you can do one of the following:
set the imagePullPolicy of the container to Always;
use :latest as the tag for the image to use;
enable the AlwaysPullImages admission controller.
Or read the other way around: using the :latest tag forces images to always be pulled. If you use eval $(minikube docker-env) as mentioned above, then either don't use any tag, or assign a different tag to your local image, and you can avoid Kubernetes trying to forcibly pull it.
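For example, a minimal container spec that avoids the forced pull could look like this (a sketch; my-image:1.0 is a hypothetical locally built tag):
containers:
- name: my-app
  image: my-image:1.0
  imagePullPolicy: IfNotPresent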
One idea would be to save the docker image locally and later load it into minikube as follows:
Let's say, for example, you already have the puckel/docker-airflow image.
Save that image to local disk -
docker save puckel/docker-airflow > puckel_docker_airflow.tar
Now enter into minikube docker env -
eval $(minikube docker-env)
Load that locally saved image -
docker load < puckel_docker_airflow.tar
It is that simple and it works like a charm.
minikube addons enable registry -p minikube
💡 Registry addon on with docker uses 32769 please use that instead
of default 5000
📘 For more information see:
https://minikube.sigs.k8s.io/docs/drivers/docker
docker tag ubuntu $(minikube ip -p minikube):32769/ubuntu
docker push $(minikube ip -p minikube):32769/ubuntu
OR
minikube addons enable registry
docker tag ubuntu $(minikube ip):32769/ubuntu
docker push $(minikube ip):32769/ubuntu
The above is good enough for development purposes. I am doing this on Arch Linux.
There is now a Minikube Registry addon, this is probably the easiest way. Here is how to use it: https://minikube.sigs.k8s.io/docs/tasks/registry/insecure/
Note that I had DNS issues, might be a bug.
You should know that docker in your local machine is separated from the docker in your minikube cluster.
So you should load/copy a Docker image from your local machine into the minikube cluster:
minikube image load <IMAGE_NAME>
or alternatively when working with minikube, you can build images directly inside it:
#instead of:
docker image build -t <IMAGE_NAME> .
#do:
minikube image build -t <IMAGE_NAME> .
To add to the previous answers, if you have a tarball image, you can simply load it into your local set of Docker images with docker image load -i /path/image.tar. Please remember to run it after eval $(minikube docker-env), since minikube does not share images with the locally installed Docker engine.
Other answers assume you use minikube with a VM, so your local images are not accessible from the minikube VM.
If you use minikube with --vm-driver=none, you can easily reuse local images by setting the image pull policy to Never:
kubectl run hello-foo --image=foo --image-pull-policy=Never
or by setting the imagePullPolicy field for containers in the corresponding .yaml manifests.
Steps to run local docker images in Kubernetes:
1. eval $(minikube -p minikube docker-env)
2. In the artifact file, under the spec section -> containers, add
imagePullPolicy: IfNotPresent or imagePullPolicy: Never
apiVersion: "v1"
kind: Pod
metadata:
name: web
labels:
name: web
app: demo
spec:
containers:
- name: web
image: web:latest
imagePullPolicy: IfNotPresent
ports:
- containerPort: 5000
name: http
protocol: TCP
3. Then run kubectl create -f <filename>
For minikube on Docker:
Option 1: Using minikube registry
Check your minikube ports
docker ps
You will see something like: 127.0.0.1:32769->5000/tcp
It means that your minikube registry is on 32769 port for external usage, but internally it's on 5000 port.
Build your docker image tagging it:
docker build -t 127.0.0.1:32769/hello .
Push the image to the minikube registry:
docker push 127.0.0.1:32769/hello
Check if it's there:
curl http://localhost:32769/v2/_catalog
Build some deployment using the internal port:
kubectl create deployment hello --image=127.0.0.1:5000/hello
Your image is now inside the minikube container; to see it, run:
eval $(minikube -p <PROFILE> docker-env)
docker images
Caveat: if you use only one profile, named "minikube", then the "-p <PROFILE>" part is redundant, but if you use more, don't forget about it. Personally I delete the standard one (minikube) so as not to make mistakes.
Option 2: Not using registry
Switch to minikube container Docker:
eval $(minikube -p <PROFILE> docker-env)
Build your image:
docker build -t hello .
Create some deployment:
kubectl create deployment hello --image=hello
At the end change the deployment ImagePullPolicy from Always to IfNotPresent:
kubectl edit deployment hello
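If you prefer not to open an editor, the same change can be made non-interactively (a sketch; the JSON path assumes a single container in the deployment):
kubectl patch deployment hello --type=json -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "IfNotPresent"}]'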
In addition to minikube image load <image name>, check out the latest (Nov 2021 at the time of writing) release of Minikube.
v1.24.0
Add --no-kubernetes flag to start minikube without Kubernetes
See PR 12848, for
That gives you:
mk start --no-kubernetes
minikube v1.24.0-beta.0 on Darwin 11.6 (arm64)
Automatically selected the docker driver
Starting minikube without Kubernetes minikube in cluster minikube
Pulling base image ...
Creating docker container (CPUs=2, Memory=1988MB) ...
Done! minikube is ready without Kubernetes!
Things to try without Kubernetes
"minikube ssh" to SSH into minikube's node.
"minikube docker-env" to build images by pointing to the docker inside minikube
"minikube image" to build images without docker
Building off the earlier answer to use eval $(minikube docker-env) in order to load up minikube's docker environment, for an easier toggle, add the following function to your shell rc file:
dockube() {
  if [[ $1 = 'which' ]]; then
    if [[ $MINIKUBE_ACTIVE_DOCKERD = 'minikube' ]]; then
      echo $MINIKUBE_ACTIVE_DOCKERD
    else
      echo 'system'
    fi
    return
  fi
  if [[ $MINIKUBE_ACTIVE_DOCKERD = 'minikube' ]]; then
    eval $(minikube docker-env -u)
    echo "now using system docker"
  else
    eval $(minikube -p minikube docker-env)
    echo "now using minikube docker"
  fi
}
dockube with no argument will toggle between the system and minikube docker environment, and dockube which will return which one is in use.
For Windows users, the way I do it.
I use the docker desktop to host my MiniKube image and use PowerShell as a console.
First I create my MiniKube cluster:
minikube start --bootstrapper=kubeadm --vm-driver=docker --profile "cluster1"
For instance, let's say I have a Dockerfile that contains:
FROM nginx
Two-step way: build the image and upload it to minikube
docker build -t mynginximage .
minikube image load mynginximage
Or the one-step way: build directly in minikube
minikube image build -t mynginximage .
To run my image in MiniKube
kubectl run myweb --image=mynginximage --image-pull-policy=Never
or via mynginxpod.yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: myweb
spec:
  containers:
    - name: myweb
      image: mynginximage
      imagePullPolicy: Never
      ports:
        - containerPort: 80
And kubectl apply -f .\mynginxpod.yaml
Now to test it, run:
kubectl get pods myweb
NAME READY STATUS RESTARTS AGE
myweb 1/1 Running 0 25s
To access it:
kubectl exec --stdin --tty myweb -- /bin/bash
To expose it:
kubectl port-forward myweb 3333:80
What if you could just run k8s within Docker's VM? There is native support for this with the more recent versions of Docker Desktop... you just need to enable that support.
https://www.docker.com/blog/kubernetes-is-now-available-in-docker-desktop-stable-channel/
https://www.docker.com/blog/docker-windows-desktop-now-kubernetes/
How I found this out:
While reading the docs for Helm, they give you a brief tutorial on how to install minikube.
That tutorial installs minikube in a VM that's different/separate from Docker.
So when it came time to install my Helm charts, I couldn't get Helm/k8s to pull the images I had built using Docker. That's how I arrived here at this question.
So... if you can live with whatever version of k8s comes with Docker Desktop, and you can live with it running in whatever VM Docker has, then maybe this solution is a bit easier than some of the others.
Disclaimer: not sure how switching between Windows/Linux containers would impact anything.
Set up the minikube docker-env.
Build the same Docker image again (using the minikube docker-env).
Change imagePullPolicy to Never in your deployment.
What actually happens here is that your minikube can't recognise your local Docker daemon, as it is an independent service. You first have to set your minikube docker environment, with the command below:
"eval $(minikube docker-env)"
If you run the command below, it will show where your minikube looks for Docker.
~$ minikube docker-env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.37.192:2376"
export DOCKER_CERT_PATH="/home/ubuntu/.minikube/certs"
export MINIKUBE_ACTIVE_DOCKERD="minikube"
**# To point your shell to minikube's docker-daemon, run:**
# eval $(minikube -p minikube docker-env)
You have to build your images again once you set up the minikube docker-env, otherwise it will fail.
There are two easy ways to load local images to Minikube.
Always make sure to set imagePullPolicy: Never in your deployment yaml.
Eg:
spec:
  containers:
    - name: myapp
      image: pz/demo
      imagePullPolicy: Never
      ports:
        - containerPort: 8080
Luckily, there are two straightforward commands to help with this.
The first one is the image load command. You can load a Docker image from your local machine into the Minikube cluster with the following command.
General
minikube image load <IMAGE_NAME>
Example
minikube image load pz/demo
After loading the image to your Minikube cluster, you can restart your Pods of the above Deployment and notice that they are starting fine.
With the previous way, you always build the Docker image on your local machine and then move it to the Minikube container, which again takes a bit of time, even though not a lot.
Using the image build command of Minikube, we can build the image directly inside the Minikube container.
General
minikube image build -t <IMAGE_NAME> <PATH_TO_DOCKERFILE>
Example
minikube image build -t pz/demo /New APP/Dockerfile
Using the minikube image build command, the image is instantly available to Minikube and doesn't have to be explicitly loaded in a second step via the minikube image load command.
Using either of the two methods to get our application Docker image into Minikube and restarting the Pods, we can recheck the logs of the Deployment:
Further, to verify end to end that everything is working as expected, we can port forward our local port 8080 to the 8080 of the Deployment by using:
kubectl port-forward deployment/myapp 8080:8080
Rechecking the browser, we see that the locally built application runs fine on the Minikube cluster.
Ref: https://levelup.gitconnected.com/two-easy-ways-to-use-local-docker-images-in-minikube-cd4dcb1a5379
You can either reuse the Docker shell with eval $(minikube docker-env), or alternatively you can leverage docker save | docker load across the shells.
On minikube 1.20, minikube cache add imagename:tag is deprecated.
Instead use minikube image load imagename:tag
If I understand correctly, you have local images, maybe passed over on a USB stick, and want to load them into minikube?
Just load the image like:
minikube image load my-local-image:0.1
With this, in the kubernetes yaml file you can change the imagePullPolicy to Never, and it will be found because you just loaded it into minikube.
I had this problem, did this, and it worked.
Most of the answers are already great.
But one important thing I have faced is that if you are using BuildKit
(DOCKER_BUILDKIT=1)
then the images created after executing eval $(minikube docker-env) will not go to the minikube Docker engine. Instead they will go to your local Docker engine.
So remove any references like the one below if you are using them:
--mount=type=cache,target=/root/.m2
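Alternatively, you can disable BuildKit for just that one build so the image lands in minikube's Docker engine (a sketch; my-image is a hypothetical tag):
eval $(minikube docker-env)
DOCKER_BUILDKIT=0 docker build -t my-image .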

App container to cassandra node - one to one or?

I am using containers to run both app servers & Cassandra nodes.
When starting the app server container, I need to specify which Cassandra node(1..n) to connect to. How would you divide the workload?
One app container to one or more Cassandra nodes(How many).
One or more app container to one Cassandra node(How many).
Many to many(How many).
This is for a production setup with 100% uptime. Each data load from Cassandra is small, but there are many of them.
It should be scalable so I can put in more app containers - like in Kubernetes they have pods. A pod is a set of containers that makes up a granule of the application.
Therefore I am looking for the best possible grouping of containers (Cassandra and app server) that will scale.
Info: Kubernetes is too expensive a setup in the beginning, and while waiting for Docker Swarm to reach a release state I will do this manually. Any insight is welcome.
Regards
Please see:
https://github.com/kubernetes/kubernetes/blob/release-1.0/examples/cassandra/README.md
for a tutorial of how to run Cassandra on Kubernetes.
You will also need to add in best practices like snapshotting the databases to persistent storage and other such things.
(and why do you say that Kubernetes is expensive? Google Container Engine only charges the cost of the VMs for small clusters, and you can deploy open source Kubernetes yourself for free)
Don't run the app container and Cassandra node inside of the same pod. You want to be able to scale your Cassandra cluster independently of your application.
For the Cassandra side of things, I suggest:
A replication controller so you can easily scale your number of Cassandra nodes. Luckily for us, C* nodes are all the same.
A Cassandra service so that your application pods have a stable endpoint at which they can talk to C*
A headless Kubernetes service to provide your Cassandra pods with seed node IP addresses
You will need to have DNS working in your Kubernetes cluster.
The Cassandra Replication Controller
cassandra-replication-controller.yml
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: cassandra
  name: cassandra
spec:
  replicas: 1
  selector:
    name: cassandra
  template:
    metadata:
      labels:
        name: cassandra
    spec:
      containers:
        - image: vyshane/cassandra
          name: cassandra
          env:
            # Feel free to change the following:
            - name: CASSANDRA_CLUSTER_NAME
              value: Cassandra
            - name: CASSANDRA_DC
              value: DC1
            - name: CASSANDRA_RACK
              value: Kubernetes Cluster
            - name: CASSANDRA_ENDPOINT_SNITCH
              value: GossipingPropertyFileSnitch
            # The peer discovery domain needs to point to the Cassandra peer service
            - name: PEER_DISCOVERY_DOMAIN
              value: cassandra-peers.default.cluster.local.
          ports:
            - containerPort: 9042
              name: cql
          volumeMounts:
            - mountPath: /var/lib/cassandra/data
              name: data
      volumes:
        - name: data
          emptyDir: {}
The Cassandra Service
The Cassandra service is pretty simple. Add the thrift port if you need that.
cassandra-service.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: cassandra
  name: cassandra
spec:
  ports:
    - port: 9042
      name: cql
  selector:
    name: cassandra
The Cassandra Peer Discovery Service
This is a headless Kubernetes service that provides the IP addresses of Cassandra peers via DNS A records. The peer service definition looks like this:
cassandra-peer-service.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: cassandra-peers
  name: cassandra-peers
spec:
  clusterIP: None
  ports:
    - port: 7000
      name: intra-node-communication
    - port: 7001
      name: tls-intra-node-communication
  selector:
    name: cassandra
The Cassandra Docker Image
We extend the official Cassandra image thus:
Dockerfile
FROM cassandra:2.2
MAINTAINER Vy-Shane Xie <shane@node.mu>
ENV REFRESHED_AT 2015-09-16
RUN apt-get -qq update && \
    DEBIAN_FRONTEND=noninteractive apt-get -yq install dnsutils && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
COPY custom-entrypoint.sh /
ENTRYPOINT ["/custom-entrypoint.sh"]
CMD ["cassandra", "-f"]
Notice the custom-entrypoint.sh script. It simply configures the seed nodes by querying our Cassandra peer discovery service:
custom-entrypoint.sh
#!/bin/bash
#
# Configure Cassandra seed nodes.
my_ip=$(hostname --ip-address)
CASSANDRA_SEEDS=$(dig $PEER_DISCOVERY_DOMAIN +short | \
    grep -v $my_ip | \
    sort | \
    head -2 | xargs | \
    sed -e 's/ /,/g')
export CASSANDRA_SEEDS
/docker-entrypoint.sh "$@"
Starting Cassandra
To start Cassandra, simply run
kubectl create -f cassandra-peer-service.yml
kubectl create -f cassandra-service.yml
kubectl create -f cassandra-replication-controller.yml
This will give you a one-node Cassandra cluster. To add another node:
kubectl scale rc cassandra --replicas=2
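Once the second replica has joined, you can check the ring from inside any Cassandra pod (a sketch; pick a pod name from kubectl get pods):
kubectl exec -it <cassandra-pod-name> -- nodetool status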
Talking to Cassandra
Your application pods can connect to Cassandra using the cassandra hostname. It points to the Cassandra service.
Show me the code
I made a GitHub repo with the above setup: Multinode Cassandra Cluster on Kubernetes.
