I have several Docker images that I want to use with minikube. I don't want to have to upload them to a registry and then download them again, instead of just using the local images directly. How do I do this?
Stuff I tried:
1. I tried running these commands (separately, deleting the instances of minikube both times and starting fresh)
kubectl run hdfs --image=fluxcapacitor/hdfs:latest --port=8989
kubectl run hdfs --image=fluxcapacitor/hdfs:latest --port=8989 imagePullPolicy=Never
Output:
NAME READY STATUS RESTARTS AGE
hdfs-2425930030-q0sdl 0/1 ContainerCreating 0 10m
It just gets stuck on some status but never reaches the ready state.
2. I tried creating a registry and then putting images into it but that didn't work either. I might've done that incorrectly but I can't find proper instructions to do this task.
Please provide instructions to use local docker images in local kubernetes instance.
OS: ubuntu 16.04
Docker : Docker version 1.13.1, build 092cba3
Kubernetes :
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.3", GitCommit:"029c3a408176b55c30846f0faedf56aae5992e9b", GitTreeState:"clean", BuildDate:"2017-02-15T06:40:50Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"1970-01-01T00:00:00Z", GoVersion:"go1.7.1", Compiler:"gc", Platform:"linux/amd64"}
If someone could help me get a solution that uses docker-compose to do this, that'd be awesome.
Edit:
Images loaded in eval $(minikube docker-env):
REPOSITORY TAG IMAGE ID CREATED SIZE
fluxcapacitor/jupyterhub latest e5175fb26522 4 weeks ago 9.59 GB
fluxcapacitor/zeppelin latest fe4bc823e57d 4 weeks ago 4.12 GB
fluxcapacitor/prediction-pmml latest cae5b2d9835b 4 weeks ago 973 MB
fluxcapacitor/scheduler-airflow latest 95adfd56f656 4 weeks ago 8.89 GB
fluxcapacitor/loadtest latest 6a777ab6167c 5 weeks ago 899 MB
fluxcapacitor/hdfs latest 00fa0ed0064b 6 weeks ago 1.16 GB
fluxcapacitor/sql-mysql latest 804137671a8c 7 weeks ago 679 MB
fluxcapacitor/metastore-1.2.1 latest ea7ce8c5048f 7 weeks ago 1.35 GB
fluxcapacitor/cassandra latest 3cb5ff117283 7 weeks ago 953 MB
fluxcapacitor/apachespark-worker-2.0.1 latest 14ee3e4e337c 7 weeks ago 3.74 GB
fluxcapacitor/apachespark-master-2.0.1 latest fe60b42d54e5 7 weeks ago 3.72 GB
fluxcapacitor/package-java-openjdk-1.8 latest 1db08965289d 7 weeks ago 841 MB
gcr.io/google_containers/kubernetes-dashboard-amd64 v1.5.1 1180413103fd 7 weeks ago 104 MB
fluxcapacitor/stream-kafka-0.10 latest f67750239f4d 2 months ago 1.14 GB
fluxcapacitor/pipeline latest f6afd6c5745b 2 months ago 11.2 GB
gcr.io/google-containers/kube-addon-manager v6.1 59e1315aa5ff 3 months ago 59.4 MB
gcr.io/google_containers/kubedns-amd64 1.9 26cf1ed9b144 3 months ago 47 MB
gcr.io/google_containers/kube-dnsmasq-amd64 1.4 3ec65756a89b 5 months ago 5.13 MB
gcr.io/google_containers/exechealthz-amd64 1.2 93a43bfb39bf 5 months ago 8.37 MB
gcr.io/google_containers/pause-amd64
As the handbook describes, you can reuse the Docker daemon from Minikube with eval $(minikube docker-env).
So to use an image without uploading it, you can follow these steps:
Set the environment variables with eval $(minikube docker-env)
Build the image with the Docker daemon of Minikube (e.g. docker build -t my-image .)
Set the image in the pod spec to the build tag (e.g. my-image)
Set the imagePullPolicy to Never, otherwise Kubernetes will try to download the image.
Important note: You have to run eval $(minikube docker-env) on each terminal you want to use, since it only sets the environment variables for the current shell session.
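For example, a minimal pod manifest following these steps might look like the sketch below (my-image is just the placeholder tag from the steps above):
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: my-app
      image: my-image            # the tag built against minikube's Docker daemon
      imagePullPolicy: Never     # don't let Kubernetes try to pull from a registry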
What worked for me, based on the solution by @svenwltr:
# Start minikube
minikube start
# Set docker env
eval $(minikube docker-env) # unix shells
minikube docker-env | Invoke-Expression # PowerShell
# Build image
docker build -t foo:0.0.1 .
# Run in minikube
kubectl run hello-foo --image=foo:0.0.1 --image-pull-policy=Never
# Check that it's running
kubectl get pods
There is one easy and effective way to push your local Docker image directly to minikube, which saves the time of building the images in minikube again.
minikube image load <image name>
(minikube cache add <image name> - old deprecated way, for reference)
More details on all the possible ways to push images to minikube are in the handbook: https://minikube.sigs.k8s.io/docs/handbook/pushing/
Notes:
This answer isn't limited to minikube!
If you want to create the registry on minikube's Docker, run eval $(minikube docker-env) first (to make minikube's Docker available from the host machine's terminal).
Otherwise enter the virtual machine via minikube ssh, and then proceed with the following steps.
Depending on your operating system, minikube will automatically mount your home path onto the VM.
As Eli stated, you'll need to add the local registry as insecure in order to use HTTP (this may not apply when using localhost, but does apply if using the local hostname).
Don't use HTTP in production; make the effort to secure things.
Use a local registry:
docker run -d -p 5000:5000 --restart=always --name local-registry registry:2
Now tag your image properly:
docker tag ubuntu localhost:5000/ubuntu
Note that localhost should be changed to the DNS name of the machine running the registry container.
Now push your image to local registry:
docker push localhost:5000/ubuntu
You should be able to pull it back:
docker pull localhost:5000/ubuntu
Now change your yaml file to use the local registry.
Think about mounting a volume at the appropriate location to persist the images in the registry, as shown in the sketch below.
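For example, one way to do that (the host path /opt/registry-data is only an illustration) is to bind-mount the registry's default storage directory:
docker run -d -p 5000:5000 --restart=always --name local-registry \
  -v /opt/registry-data:/var/lib/registry \
  registry:2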
Adding to @Farhad's answer, based on this answer:
These are the steps to set up a local registry.
Setup in local machine
Set up the hostname on the local machine: edit /etc/hosts to add this line
127.0.0.1 docker.local
Now start a local registry (remove -d to run in non-daemon mode):
docker run -d -p 5000:5000 --restart=always --name registry registry:2
Now tag your image properly:
docker tag ubuntu docker.local:5000/ubuntu
Now push your image to local registry:
docker push docker.local:5000/ubuntu
Verify that image is pushed:
curl -X GET http://docker.local:5000/v2/ubuntu/tags/list
Setup in minikube
SSH into minikube with: minikube ssh
edit /etc/hosts to add this line
<your host machine's ip> docker.local
Verify access:
curl -X GET http://docker.local:5000/v2/ubuntu/tags/list
Now if you try to pull, you might get an HTTP access error.
Enable insecure access:
If you are always planning to use minikube with this local setup, then create a minikube cluster that uses the insecure registry by default (this won't work on an existing cluster):
minikube start --insecure-registry="docker.local:5000"
Otherwise, follow the steps below:
systemctl stop docker
edit the docker service file: get the path from systemctl status docker
it might be :
/etc/systemd/system/docker.service.d/10-machine.conf or
/usr/lib/systemd/system/docker.service
append this text (replace 192.168.1.4 with your ip)
--insecure-registry docker.local:5000 --insecure-registry 192.168.1.4:5000
to this line
ExecStart=/usr/bin/docker daemon -H tcp://0.0.0.0:2376 -H
unix:///var/run/docker.sock --tlsverify --tlscacert /etc/docker/ca.pem
--tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=virtualbox --insecure-registry 10.0.0.0/24
systemctl daemon-reload
systemctl start docker
try pulling:
docker pull docker.local:5000/ubuntu
Now change your yaml file to use the local registry.
containers:
  - name: ampl-django
    image: dockerhub/ubuntu
to
containers:
  - name: ampl-django
    image: docker.local:5000/ubuntu
Don't use HTTP in production; make the effort to secure things.
Newer versions of minikube allow you to load an image from the local Docker instance by running
minikube image rm <imagename>:<version>
minikube image load <imagename>:<version> --daemon
the load command might show an error but the image still gets loaded to your minikube instance
One thing to remember regarding minikube is that minikube's host is not the same as your local host. Therefore, to use local images for testing with minikube, you must first build or pull your Docker image locally and then add it into the minikube context (which is nothing more than another Linux instance) using the command below.
minikube cache add <image>:<tag>
Also, don't forget to set imagePullPolicy: Never in your Kubernetes deployment YAMLs, as this ensures the locally added image is used instead of trying to pull it remotely from the registry.
Update: minikube cache will be deprecated in upcoming versions; please switch to minikube image load.
One approach is to build the image locally and then do:
docker save imageNameGoesHere | pv | (eval $(minikube docker-env) && docker load)
minikube docker-env might not return the correct info when running under a different user or with sudo. Instead, you can run sudo -u yourUsername minikube docker-env.
It should return something like:
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/home/chris/.minikube/certs"
export DOCKER_API_VERSION="1.23"
# Run this command to configure your shell:
# eval $(minikube docker-env)
In addition to the accepted answer, you can also achieve what you originally wanted (creating a deployment using the run command) with the following command:
kubectl run hdfs --image=fluxcapacitor/hdfs:latest --port=8989 --generator=run-pod/v1
I found the information about the generator on the Kubernetes-dev forum:
If you're using kubectl run, it generates a manifest for you that happens to have imagePullPolicy set to Always by default. You can use this command to get an imagePullPolicy of IfNotPresent, which will work for minikube:
kubectl run --image=<container> --generator=run-pod/v1
Dan Lorenc
https://groups.google.com/forum/#!topic/kubernetes-dev/YfvWuFr_XOM
If anyone is looking to come back to the local environment after setting the minikube env, use the following command:
eval $(docker-machine env -u)
A simpler method that answers the original question "How to use local docker images with Minikube?", is to save the image to a tar file and load it into minikube:
# export the docker image to a tar file
docker save --output my-image.tar the.full.path.to/the/docker/image:the-tag
# set local environment variables so that docker commands go to the docker in minikube
eval $(minikube docker-env)
# or if on windows: #FOR /f "tokens=*" %i IN ('minikube docker-env') DO #%i
# import the docker image from the tar file into minikube
docker load --input my-image.tar
# cleanup - put docker back to normal
eval $(minikube docker-env -u)
# or if on windows: #FOR /f "tokens=*" %i IN ('minikube docker-env -u') DO #%i
Then running the image involves a command like the following. Make sure to include the "--image-pull-policy=Never" parameter.
kubectl run my-image --image=the.full.path.to/the/docker/image:the-tag --image-pull-policy=Never --port=80
From the kubernetes docs:
https://kubernetes.io/docs/concepts/containers/images/#updating-images
The default pull policy is IfNotPresent which causes the Kubelet to skip pulling an image if it already exists. If you would like to always force a pull, you can do one of the following:
set the imagePullPolicy of the container to Always;
use :latest as the tag for the image to use;
enable the AlwaysPullImages admission controller.
Or read the other way around: using the :latest tag forces images to always be pulled. If you use eval $(minikube docker-env) as mentioned above, then either don't use any tag, or assign a tag other than :latest to your local image, so you can avoid Kubernetes trying to forcibly pull it. For example:
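A quick sketch of that tagging approach (the image name and tag are placeholders):
# tag the locally built image with something other than :latest
docker tag my-image:latest my-image:0.1.0
# reference that tag; the pull policy is also set explicitly here for good measure
kubectl run my-app --image=my-image:0.1.0 --image-pull-policy=IfNotPresent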
One idea would be to save the docker image locally and later load it into minikube as follows:
Let's say, for example, you already have the puckel/docker-airflow image.
Save that image to local disk -
docker save puckel/docker-airflow > puckel_docker_airflow.tar
Now enter into minikube docker env -
eval $(minikube docker-env)
Load that locally saved image -
docker load < puckel_docker_airflow.tar
It is that simple and it works like a charm.
minikube addons enable registry -p minikube
💡 Registry addon on with docker uses 32769 please use that instead
of default 5000
📘 For more information see:
https://minikube.sigs.k8s.io/docs/drivers/docker
docker tag ubuntu $(minikube ip -p minikube):32769/ubuntu
docker push $(minikube ip -p minikube):32769/ubuntu
OR
minikube addons enable registry
docker tag ubuntu $(minikube ip):32769/ubuntu
docker push $(minikube ip):32769/ubuntu
The above is good enough for development purposes. I am doing this on Arch Linux.
There is now a Minikube Registry addon; this is probably the easiest way. Here is how to use it: https://minikube.sigs.k8s.io/docs/tasks/registry/insecure/
Note that I had DNS issues; it might be a bug.
You should know that docker in your local machine is separated from the docker in your minikube cluster.
So you should load/copy a Docker image from your local machine into the minikube cluster:
minikube image load <IMAGE_NAME>
or alternatively when working with minikube, you can build images directly inside it:
#instead of:
docker image build -t <IMAGE_NAME> .
#do:
minikube image build -t <IMAGE_NAME> .
To add to the previous answers: if you have a tarball image, you can simply load it into your local set of Docker images with docker image load -i /path/image.tar. Remember to run it after eval $(minikube docker-env), since minikube does not share images with the locally installed Docker engine.
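A minimal sequence might look like this (the tarball path is just an example):
# point the docker CLI at minikube's daemon
eval $(minikube docker-env)
# load the tarball into minikube's image store
docker image load -i /path/image.tar
# the image should now show up in the list
docker images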
Other answers assume you use minikube with a VM, so your local images are not accessible from the minikube VM.
If you use minikube with --vm-driver=none, you can easily reuse local images by setting the image pull policy to Never:
kubectl run hello-foo --image=foo --image-pull-policy=Never
or by setting the imagePullPolicy field for containers in the corresponding .yaml manifests.
Steps to run local docker images in Kubernetes:
1. eval $(minikube -p minikube docker-env)
2. In the manifest file, under the spec section -> containers, add imagePullPolicy: IfNotPresent or imagePullPolicy: Never
apiVersion: "v1"
kind: Pod
metadata:
name: web
labels:
name: web
app: demo
spec:
containers:
- name: web
image: web:latest
imagePullPolicy: IfNotPresent
ports:
- containerPort: 5000
name: http
protocol: TCP
3. Then run kubectl create -f <filename>
For minikube on Docker:
Option 1: Using minikube registry
Check your minikube ports
docker ps
You will see something like: 127.0.0.1:32769->5000/tcp
It means that your minikube registry is on port 32769 for external usage, but internally it's on port 5000.
Build your docker image tagging it:
docker build -t 127.0.0.1:32769/hello .
Push the image to the minikube registry:
docker push 127.0.0.1:32769/hello
Check if it's there:
curl http://localhost:32769/v2/_catalog
Build some deployment using the internal port:
kubectl create deployment hello --image=127.0.0.1:5000/hello
Your image is now in the minikube container; to see it, run:
eval $(minikube -p <PROFILE> docker-env)
docker images
Caveat: if you use only one profile named "minikube", then the "-p" part is redundant, but if you use more, don't forget about it. Personally, I delete the standard one (minikube) so as not to make mistakes.
Option 2: Not using registry
Switch to minikube container Docker:
eval $(minikube -p <PROFILE> docker-env)
Build your image:
docker build -t hello .
Create some deployment:
kubectl create deployment hello --image=hello
At the end, change the deployment's imagePullPolicy from Always to IfNotPresent:
kubectl edit deployment hello
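If you prefer not to open an editor, a non-interactive sketch of the same change with kubectl patch (assuming the first container of the hello deployment created above) would be:
kubectl patch deployment hello --type=json \
  -p='[{"op":"replace","path":"/spec/template/spec/containers/0/imagePullPolicy","value":"IfNotPresent"}]'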
In addition to minikube image load <image name>, check out the latest (Nov 2021 at the time of writing) release of Minikube.
v1.24.0
Add --no-kubernetes flag to start minikube without Kubernetes
See PR 12848.
That gives you:
mk start --no-kubernetes
minikube v1.24.0-beta.0 on Darwin 11.6 (arm64)
Automatically selected the docker driver
Starting minikube without Kubernetes minikube in cluster minikube
Pulling base image ...
Creating docker container (CPUs=2, Memory=1988MB) ...
Done! minikube is ready without Kubernetes!
Things to try without Kubernetes
"minikube ssh" to SSH into minikube's node.
"minikube docker-env" to build images by pointing to the docker inside minikube
"minikube image" to build images without docker
Building off the earlier answer to use eval $(minikube docker-env) to load up minikube's docker environment: for an easier toggle, add the following function to your shell rc file:
dockube() {
  if [[ $1 = 'which' ]]; then
    if [[ $MINIKUBE_ACTIVE_DOCKERD = 'minikube' ]]; then
      echo $MINIKUBE_ACTIVE_DOCKERD
    else
      echo 'system'
    fi
    return
  fi
  if [[ $MINIKUBE_ACTIVE_DOCKERD = 'minikube' ]]; then
    eval $(minikube docker-env -u)
    echo "now using system docker"
  else
    eval $(minikube -p minikube docker-env)
    echo "now using minikube docker"
  fi
}
dockube with no argument will toggle between the system and minikube docker environment, and dockube which will return which one is in use.
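A quick usage sketch of what a session with this function might look like (starting from the system Docker):
$ dockube
now using minikube docker
$ dockube which
minikube
$ dockube
now using system docker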
For Windows users, here is the way I do it.
I use Docker Desktop to host my minikube image and PowerShell as a console.
First I create my MiniKube cluster:
minikube start --bootstrapper=kubeadm --vm-driver=docker --profile "cluster1"
For instance, let's say I have a Dockerfile that contains:
FROM nginx
Two-step way: build an image and load it into minikube
docker build -t mynginximage .
minikube image load mynginximage
Or the one-step way: build directly in minikube
minikube image build -t mynginximage .
To run my image in MiniKube
kubectl run myweb --image=mynginximage --image-pull-policy=Never
or via mynginxpod.yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: myweb
spec:
  containers:
    - name: myweb
      image: mynginximage
      imagePullPolicy: Never
      ports:
        - containerPort: 80
And kubectl apply -f .\mynginxpod.yaml
Now to test it, run:
kubectl get pods myweb
NAME READY STATUS RESTARTS AGE
myweb 1/1 Running 0 25s
To access it:
kubectl exec --stdin --tty myweb -- /bin/bash
To expose it:
kubectl port-forward myweb 3333:80
What if you could just run k8s within Docker's VM? There's native support for this with the more recent versions of Docker Desktop... you just need to enable that support.
https://www.docker.com/blog/kubernetes-is-now-available-in-docker-desktop-stable-channel/
https://www.docker.com/blog/docker-windows-desktop-now-kubernetes/
How I found this out:
While reading the docs for Helm, they give you a brief tutorial on how to install minikube.
That tutorial installs minikube in a VM that's different/separate from Docker.
So when it came time to install my Helm charts, I couldn't get Helm/k8s to pull the images I had built using Docker. That's how I arrived here at this question.
So... if you can live with whatever version of k8s comes with Docker Desktop, and you can live with it running in whatever VM Docker has, then maybe this solution is a bit easier than some of the others.
Disclaimer: not sure how switching between Windows/Linux containers would impact anything.
Set up the minikube docker-env
Build the same Docker image again (using the minikube docker-env)
Change imagePullPolicy to Never in your deployment
What actually happens here is that your minikube can't recognize your Docker daemon, as it is an independent service. You first have to set your minikube Docker environment; use the command below to check:
"eval $(minikube docker-env)"
If you run the command below, it will show where your minikube looks for Docker.
~$ minikube docker-env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.37.192:2376"
export DOCKER_CERT_PATH="/home/ubuntu/.minikube/certs"
export MINIKUBE_ACTIVE_DOCKERD="minikube"
# To point your shell to minikube's docker-daemon, run:
# eval $(minikube -p minikube docker-env)
You have to build the images again once you set up the minikube docker-env, or else it will fail.
There are two easy ways to load local images to Minikube.
Always make sure to set imagePullPolicy: Never in your deployment yaml.
Eg:
spec:
  containers:
    - name: myapp
      image: pz/demo
      imagePullPolicy: Never
      ports:
        - containerPort: 8080
Luckily, there are two straightforward commands to help with this.
The first one is the image load command. You can load a Docker image from your local machine into the Minikube cluster with the following command.
General
minikube image load <IMAGE_NAME>
Example
minikube image load pz/demo
After loading the image to your Minikube cluster, you can restart your Pods of the above Deployment and notice that they are starting fine.
With the previous way, you always build the Docker image on your local machine and then move it to the Minikube container, which again takes a bit of time, even though not a lot.
Using the image build command of Minikube, we can build the image directly inside the Minikube container.
General
minikube image build -t <IMAGE_NAME> <PATH_TO_DOCKERFILE>
Example
minikube image build -t pz/demo /New APP/Dockerfile
Using the minikube image build command, the image is instantly available to Minikube and doesn't have to be explicitly loaded in a second step via the minikube image load command.
Using either of these methods to get our application Docker image into Minikube and restarting the Pods, we can recheck the logs of the Deployment.
Further, to verify end to end that everything is working as expected, we can port forward our local port 8080 to the 8080 of the Deployment by using:
kubectl port-forward deployment/myapp 8080:8080
Rechecking the browser, we see that the locally built application runs fine on the Minikube cluster.
Ref: https://levelup.gitconnected.com/two-easy-ways-to-use-local-docker-images-in-minikube-cd4dcb1a5379
You can either reuse the Docker shell with eval $(minikube docker-env), or alternatively you can leverage docker save | docker load across the shells.
On minikube 1.20, minikube cache add imagename:tag is deprecated.
Instead use minikube image load imagename:tag
If I understand correctly, you have local images, maybe transferred via a USB stick, and want to load them into minikube?
Just load the image like:
minikube image load my-local-image:0.1
With this, in your Kubernetes yaml file you can change the imagePullPolicy to Never, and it will be fine because you just loaded the image into minikube.
I had this problem, did this, and it worked.
Most of the answers are already great.
But one important thing I have faced is that if you are using BuildKit
(DOCKER_BUILDKIT=1)
then the images created after executing eval $(minikube docker-env) will not go to the minikube Docker engine. Instead they will go to your local Docker engine.
So remove any references like the one below if you are using them:
--mount=type=cache,target=/root/.m2
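As an aside, a workaround of my own (not part of the original answer) is to simply disable BuildKit for the build you run against minikube's daemon; the image tag below is a placeholder:
eval $(minikube docker-env)
DOCKER_BUILDKIT=0 docker build -t my-image:0.1 .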
Related
I have a problem when I run a mongo image with docker-compose.yml. I need to encrypt my data because it is very sensitive.
version: '3'
services:
  mongo:
    image: "mongo"
    command: ["mongod", "--enableEncryption", "--encryptionKeyFile", "/data/db/mongodb-keyfile"]
    ports:
      - "27017:27017"
    volumes:
      - $PWD/data:/data/db
I checked that the mongodb-keyfile exists in data/db, ok no problem, but when I build and bring up the image, the command is:
"docker-entrypoint.sh mongod --enableEncryption --encryptionKeyFile /data/db/mongodb-keyfile"
The status:
About a minute ago Exited (2) About a minute ago
I show the logs and see:
Error parsing command line: unrecognised option '--enableEncryption'
I understand the error, but I don't know how to solve it. I'm thinking of making a Dockerfile from an Ubuntu (or whatever Linux) image and installing mongo with all the necessary configuration, or otherwise trying to solve it.
Please help me, thx.
According to the documentation, encryption is available in MongoDB Enterprise only, so you need a paid subscription to use it.
For the Docker image of the enterprise version, it says here that you can build it yourself:
Download the Docker build files for MongoDB Enterprise.
Set MONGODB_VERSION to your major version of choice.
export MONGODB_VERSION=4.0
curl -O --remote-name-all https://raw.githubusercontent.com/docker-library/mongo/master/$MONGODB_VERSION/{Dockerfile,docker-entrypoint.sh}
Build the Docker container.
Use the downloaded build files to create a Docker container image wrapped around MongoDB Enterprise. Set DOCKER_USERNAME to your Docker Hub username.
export DOCKER_USERNAME=username
chmod 755 ./docker-entrypoint.sh
docker build --build-arg MONGO_PACKAGE=mongodb-enterprise --build-arg MONGO_REPO=repo.mongodb.com -t $DOCKER_USERNAME/mongo-enterprise:$MONGODB_VERSION .
Test your image.
The following commands run mongod locally in a Docker container and check the version.
docker run --name mymongo -itd $DOCKER_USERNAME/mongo-enterprise:$MONGODB_VERSION
docker exec -it mymongo /usr/bin/mongo --eval "db.version()"
I'm using lwieske/java-8:server-jre-8u121-slim with Alpine Linux
I'd like to set the hostname from a text file so that it is seen globally (for all shells).
/ # env
HOSTNAME=2fa4a43a975c
/ # cat /etc/afile
something
/ # hostname -F /etc/afile
hostname: sethostname: Operation not permitted
Everything is running as a service in swarm.
I want every node to have a unique hostname based on the container ID.
You can provide the --hostname flag to docker run as well:
docker run -d --net mynet --ip 162.18.1.1 --hostname mynodename
As a workaround, you can use docker-compose to assign the hostnames for multiple containers.
Here is the example docker-compose.yml:
version: '3'
services:
  ubuntu01:
    image: ubuntu
    hostname: ubuntu01
  ubuntu02:
    image: ubuntu
    hostname: ubuntu02
  ubuntu03:
    image: ubuntu
    hostname: ubuntu03
  ubuntu04:
    image: ubuntu
    hostname: ubuntu04
To make it dynamic, you can generate docker-compose.yml from a script.
Then run with: docker-compose up.
docker service create has a --hostname parameter that allows you to specify the hostname. On a more personal note, if you'll connect to one of your services, then any other service on the same network will be pingable and accessible using the service name, with the added benefit of allowing you multiple replicas without worrying about what those will be named.
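For example (the service, network, and image names are placeholders):
docker service create --name mynodename --hostname mynodename --network mynet nginx
# swarm also accepts Go templates here, e.g. --hostname "mynodename-{{.Task.Slot}}",
# if you want each replica to get its own hostname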
Better late than never. I found this question trying to find the same thing myself.
The answer is to give the Docker container the SYS_ADMIN capability, and hostname -F will then set the hostname properly.
docker-compose:
cap_add:
- SYS_ADMIN
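The plain docker run equivalent would be something along these lines (the file path is just an example):
docker run --rm -it --cap-add SYS_ADMIN alpine \
  sh -c 'echo myhostname > /tmp/afile && hostname -F /tmp/afile && hostname'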
I'm trying to run an nginx container as a service and share 2 volumes between the host machine and container, so that files in one directory are automatically shared with the other paired directory.
My docker-compose.yml is the following:
version: '2'
services:
  nginx:
    image: nginx
    build: .
    ports:
      - "5000:80"
    volumes:
      - /home/user1/share:/share/user1
      - /home/user2/share:/share/user2
    restart: always
The only way I can get this to work currently is by adding privileged: true to the docker-compose file; however, I am not allowed to do this due to security requirements.
When trying to access the volume in the container, I get the following error:
[root@host docker-nginx]# docker exec -it dockernginx_nginx_1 bash
root@2d574f9c6131:/# ls /share/user1/
ls: cannot open directory /share/user1/: Permission denied
Even attaching myself to bash on the container with the following parameters denies me access to the resource (or at least listing its contents):
docker exec -it --privileged=true -u 6004:6004 dockernginx_nginx_1 bash
(Note: 6004:6004 happens to be the uid:gid ownership that is passed on to /share/user1/)
Is there any way of accessing the contents without building the nginx service with elevated privileges?
Perhaps the issue lies in SELinux restrictions enforced in the container?
The container is running Debian GNU/Linux 8 (jessie) and the host is running CentOS Linux 7 (Core)
Related questions:
Permission denied inside Docker container
Docker was running with --selinux-enabled=true; this prohibited me from accessing the contents of directories in the container.
Read more: http://www.projectatomic.io/blog/2016/07/docker-selinux-flag/
The solution was to disable it, it can either be done by (1) configuring or by (2) installing the non-selinux CentOS package, I went with option 2:
I made sure to reinstall and update Docker from 1.10 to 1.12.1, and to install docker-engine.x86_64 instead of docker-engine-selinux.noarch, with the SELinux package installed as a dependency (yum does this automatically). After doing this and starting the Docker daemon, you can verify with ps aux | grep "docker" that docker-containerd is not started with the --selinux-enabled=true option.
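As a side note, if disabling SELinux is not an option, Docker can instead relabel bind-mounted content via the :z / :Z volume suffixes (the documented alternative; I did not try it here), e.g. in the compose file:
volumes:
  - /home/user1/share:/share/user1:z
  - /home/user2/share:/share/user2:z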
I am running a Jenkins cluster where in the Master and Slave, both are running as a Docker containers.
The Host is latest boot2docker VM running on MacOS.
To allow Jenkins to be able to perform deployment using Docker, I have mounted the docker.sock and docker client from the host to the Jenkins container like this :-
docker run -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker -v $HOST_JENKINS_DATA_DIRECTORY/jenkins_data:/var/jenkins_home -v $HOST_SSH_KEYS_DIRECTORY/.ssh/:/var/jenkins_home/.ssh/ -p 8080:8080 jenkins
I am facing issues while mounting a volume to Docker containers that are run inside the Jenkins container. For example, if I need to run another Container inside the Jenkins container, I do the following :-
sudo docker run -v $JENKINS_CONTAINER/deploy.json:/root/deploy.json $CONTAINER_REPO/$CONTAINER_IMAGE
The above runs the container, but the file "deploy.json" is NOT mounted as a file, but instead as a "Directory". Even if I mount a Directory as a Volume, I am unable to view the files in the resulting container.
Is this a problem, because of file permissions due to Docker in Docker case?
A Docker container in a Docker container uses the parent HOST's Docker daemon and hence, any volumes that are mounted in the "docker-in-docker" case is still referenced from the HOST, and not from the Container.
Therefore, the actual path mounted from the Jenkins container "does not exist" in the HOST. Due to this, a new directory is created in the "docker-in-docker" container that is empty. Same thing applies when a directory is mounted to a new Docker container inside a Container.
A very basic and obvious thing, which I missed but realized as soon as I typed the question.
Lots of good info in these posts but I find none of them are very clear about which container they are referring to. So let's label the 3 environments:
host: H
docker container running on H: D
docker container running in D: D2
We all know how to mount a folder from H into D: start D with
docker run ... -v <path-on-H>:<path-on-D> -v /var/run/docker.sock:/var/run/docker.sock ...
The challenge is: you want path-on-H to be available in D2 as path-on-D2.
But we all got bitten when trying to mount the same path-on-H into D2, because we started D2 with
docker run ... -v <path-on-D>:<path-on-D2> ...
When you share the docker socket on H with D, then running docker commands in D is essentially running them on H. Indeed if you start D2 like this, all works (quite unexpectedly at first, but makes sense when you think about it):
docker run ... -v <path-on-H>:<path-on-D2> ...
The next tricky bit is that for many of us, path-on-H will change depending on who runs it. There are many ways to pass data into D so it knows what to use for path-on-H, but probably the easiest is an environment variable. To make the purpose of such var clearer, I start its name with DIND_. Then from H start D like this:
docker run ... -v <path-on-H>:<path-on-D> --env DIND_USER_HOME=$HOME \
--env DIND_SOMETHING=blabla -v /var/run/docker.sock:/var/run/docker.sock ...
and from D start D2 like this:
docker run ... -v $DIND_USER_HOME:<path-on-D2> ...
Another way to go about this is to use either named volumes or data volume containers. This way, the container inside doesn't have to know anything about the host and both Jenkins container and the build container reference the data volume the same way.
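A rough sketch of the named-volume idea (the volume and image names are made up for illustration):
# create a named volume once; it lives on the host daemon
docker volume create build-data
# both the Jenkins container and the inner build container mount it by name,
# so neither needs to know any host path
docker run -d -v build-data:/var/jenkins_home/workspace -v /var/run/docker.sock:/var/run/docker.sock jenkins
docker run --rm -v build-data:/workspace my-build-image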
I have tried doing something similar to what you are doing, except with an agent rather that using the Jenkins master. The problem was the same in that I couldn't mount the Jenkins workspace in the inner container. What worked for me was using the data volume container approach and the workspace files were visible to both the agent container and the inner container. What I liked about the approach is the both containers reference the data volume in the same way. Mounting directories with an inner container would be tricky as the inner container now needs to know something about the host that its parent container is running on.
I have detailed blog post about my approach here:
http://damnhandy.com/2016/03/06/creating-containerized-build-environments-with-the-jenkins-pipeline-plugin-and-docker-well-almost/
As well as code here:
https://github.com/damnhandy/jenkins-pipeline-docker
In my specific case, not everything is working the way I'd like it to in terms of the Jenkins Pipeline plugin. But it does address the issue of the inner container being able to access the Jenkins workspace directory.
Regarding your use case related to Jenkins, you can simply fake the path by creating a symlink on the host:
ln -s $HOST_JENKINS_DATA_DIRECTORY/jenkins_data /var/jenkins_home
If you are like me and don't want to mess with the Jenkins setup, or are too lazy to go through all this trouble, here is a simple workaround I did to get this working for me.
Step 1 - Add the following variables to the environment section of the pipeline
environment {
    ABSOLUTE_WORKSPACE = "/home/ubuntu/volumes/jenkins-data/workspace"
    JOB_WORKSPACE = "\${PWD##*/}"
}
Step 2 - Run your container in the Jenkins pipeline with a command like the following.
steps {
    sh "docker run -v ${ABSOLUTE_WORKSPACE}/${JOB_WORKSPACE}/my/dir/to/mount:/targetPath imageName:tag"
}
Take note of the double quotes in the above statement; Jenkins will not expand the env variables if the quotes are not formatted properly or if single quotes are used instead.
What does each variable signify?
ABSOLUTE_WORKSPACE is the path of our Jenkins volume which we had mounted while starting Jenkins Docker Container. In my case, the docker run command was as follows.
sudo docker run \
-p 80:8080 \
-v /home/ubuntu/volumes/jenkins-data:/var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
-d -t jenkinsci/blueocean
Thus the variable ABSOLUTE_WORKSPACE = /home/ubuntu/volumes/jenkins-data + /workspace.
JOB_WORKSPACE gives us the current workspace directory where your code lives. This is also the root dir of your code base. I just followed this answer for reference.
How does this work ?
It is very straightforward. As mentioned in @ZephyrPLUSPLUS's answer (credit where due), the source path for the Docker container being run in the Jenkins pipeline is not a path in the current container; rather, the path taken is the host's path. All we are doing here is constructing the path where our Jenkins pipeline is being run and mounting it into our container. Voila!
Here's a little illustration to help clarify ...
This also works via docker-compose and/or named volumes so you don't need to create a data only container, but you still need to have the empty directory on the host.
Host setup
Make host side directories and set permissions to allow Docker containers to access
sudo mkdir -p /var/jenkins_home/{workspace,builds,jobs} && sudo chown -R 1000 /var/jenkins_home && sudo chmod -R a+rwx /var/jenkins_home
docker-compose.yml
version: '3.1'
services:
  jenkins:
    build: .
    image: jenkins
    ports:
      - 8080:8080
      - 50000:50000
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - workspace:/var/jenkins_home/workspace/
      # Can also do builds/jobs/etc here and below
  jenkins-lts:
    build:
      context: .
      args:
        versiontag: lts
    image: jenkins:lts
    ports:
      - 8081:8080
      - 50001:50000
volumes:
  workspace:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /var/jenkins_home/workspace/
When you docker-compose up --build jenkins (you may want to incorporate this into a ready-to-run example like https://github.com/thbkrkr/jks, where the .groovy scripts pre-configure Jenkins to be useful on startup), you will be able to have your jobs clone into the $JENKINS_HOME/workspace directory and shouldn't get errors about missing files/etc, because the host and container paths will match, and running further containers from within the Docker-in-Docker should work as well.
Dockerfile (for Jenkins with Docker in Docker)
ARG versiontag=latest
FROM jenkins/jenkins:${versiontag}
ENV JAVA_OPTS="-Djenkins.install.runSetupWizard=false"
COPY jenkins_config/config.xml /usr/share/jenkins/ref/config.xml.override
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
USER root
RUN curl -L http://get.docker.io | bash && \
usermod -aG docker jenkins
# Since the above takes a while make any other root changes below this line
# eg `RUN apt update && apt install -y curl`
# drop back to the regular jenkins user - good practice
USER jenkins
EXPOSE 8080
A way to work around this issue is to mount a directory (inside your docker container in which you mounted your docker socket) using the exact same path for its destination. Then, when you run a container from within that container, you are able to mount anything within that mount's path into the new container using docker -v.
Take this example:
# Spin up your container from which you will use docker
docker run -v /some/dir:/some/dir -v /var/run/docker.sock:/var/run/docker.sock docker:latest
# Now spin up a container from within this container
docker run -v /some/dir:/usr/src/app $CONTAINER_IMAGE
The folder /some/dir is now mounted across your host, the intermediate container as well as your destination container. Since the mount's path exists on both the host as the "nearly docker-in-docker" container, you can use docker -v as expected.
It's kind of similar to the suggestion of creating a symlink on the host but I found this (at least in my case), a cleaner solution. Just don't forget to cleanup the dir on the host afterwards! ;)
I had the same problem in GitLab CI. I solved it by using docker cp to do something like a mount:
script:
- docker run --name ${CONTAINER_NAME} ${API_TEST_IMAGE_NAME}
after_script:
- docker cp ${CONTAINER_NAME}:/code/newman ./
- docker rm ${CONTAINER_NAME}
Based on the description mentioned by @ZephyrPLUSPLUS,
here is how I managed to solve this:
vagrant@vagrant:~$ hostname
vagrant
vagrant@vagrant:~$ ls -l /home/vagrant/dir-new/
total 4
-rw-rw-r-- 1 vagrant vagrant 10 Jun 19 11:24 file-new
vagrant@vagrant:~$ cat /home/vagrant/dir-new/file-new
something
vagrant@vagrant:~$ docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock docker /bin/sh
/ # hostname
3947b1f93e61
/ # ls -l /home/vagrant/dir-new/
ls: /home/vagrant/dir-new/: No such file or directory
/ # docker run -it --rm -v /home/vagrant/dir-new:/magic ubuntu /bin/bash
root@3644bfdac636:/# ls -l /magic
total 4
-rw-rw-r-- 1 1000 1000 10 Jun 19 11:24 file-new
root@3644bfdac636:/# cat /magic/file-new
something
root@3644bfdac636:/# exit
/ # hostname
3947b1f93e61
/ # vagrant@vagrant:~$ hostname
vagrant
vagrant@vagrant:~$
So docker is installed on a Vagrant machine. Let's call it vagrant. The directory you want to mount is /home/vagrant/dir-new in vagrant.
It starts a container with hostname 3947b1f93e61. Notice that /home/vagrant/dir-new/ is not mounted for 3947b1f93e61.
Next we use the exact location from vagrant, which is /home/vagrant/dir-new as the source of the mount and specify any mount target we want, in this case it is /magic. Also note that /home/vagrant/dir-new does not exist in 3947b1f93e61.
This starts another container, 3644bfdac636.
Now the contents from /home/vagrant/dir-new in vagrant can be accessed from 3644bfdac636.
I think this is because docker-in-docker is not a child but a sibling, so the path you specify must be the parent's path and not the sibling's path. Any mount will still refer to the path from vagrant, no matter how deep you nest docker-in-docker.
You can solve this by passing in an environment variable.
Example:
.
├── docker-compose.yml
└── my-volume-dir
    └── test.txt
In docker-compose.yml
version: "3.3"
services:
test:
image: "ubuntu:20.04"
volumes:
- ${REPO_ROOT-.}/my-volume-dir:/my-volume
entrypoint: ls /my-volume
To test, run:
docker run -e REPO_ROOT=${PWD} \
-v /var/run/docker.sock:/var/run/docker.sock \
-v ${PWD}:/my-repo \
-w /my-repo \
docker/compose \
docker-compose up test
You should see in the output:
test_1 | test.txt
Looking at shipyard, I noticed that the deploy container launches containers on the host (redis, router, database, load balancer, shipyard).
This is done by using the -H flag.
So I decided to try this to deploy my apps as this would make deployment tons easier ( versus systemd, init.d ).
I was able to get about 70% there, but the thing that broke was the --volumes-from flag.
The container starts, but the volume it's mounting to is empty. I have a simple example posted here.
http://goo.gl/a558XL
If you run these commands on host. it works fine.
on_host$ docker run --name data joshuacalloway/data
on_host$ docker run --volumes-from data ubuntu cat /data/hello.txt
However, if you do this in a container, it is broken.
on_host$ docker run -it --entrypoint=/bin/bash -v /var/run/docker.sock:/var/run/docker.sock joshuacalloway/deploy -s
in_container:/# docker ps -----> this shows docker processes on the host
in_container:/# docker rm data ---> this removes docker container data that was created above
in_container:/# docker run --name data joshuacalloway/data
in_container:/# docker run --volumes-from data ubuntu cat /data/hello.txt