I have access to a virtual machine with a 10 GB disk, which was already at 99% usage. To make room to start a new container, I moved my Docker directory (previously at /var/snap/docker) to another disk where I have as much space as I need, and created a symbolic link named docker at /var/snap pointing to the new location.
Now when I try to run a container it prints out:
Error response from daemon: OCI runtime create failed: /var/snap/docker/common/var-lib-docker/overlay2/833f49d424b2b1089aee6989bda609fa85d3a72540142c464e00e1fa4eec0a3b/merged is not an absolute path or is a symlink: unknown
Error: failed to start containers: ycsb
I don't know what I did wrong, or if this isn't supposed to work at all.
This used to work (to at least v19.03.1), but now does not.
However, you can explicitly change the Docker root directory that the daemon uses, which sidesteps the symlink check entirely.
From https://unix.stackexchange.com/questions/452368/change-docker-root-dir-on-red-hat-linux:
Edit (or create) the /etc/docker/daemon.json file so that it reads
{
"data-root": "/mnt/your-volume/var/snap/docker",
...other possible entries...
}
Restart the daemon with
sudo systemctl restart docker
And try your run.
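Once the daemon is back up, docker info reports the active data root, so you can confirm the move took effect (the grep filter is just for readability):
docker info | grep 'Docker Root Dir'
# Docker Root Dir: /mnt/your-volume/var/snap/docker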
I have two Ubuntu server VMs running on the same Proxmox server. Both are running Docker. I want to migrate one container from one of the VMs to the other. For that I need to attach a USB drive to the target VM, which will be mounted inside the Docker container. I mounted the drive exactly the same way in both VMs (the old one is shut down, of course), and the mounting works; I can access the directory and see the contents of the drive. Now I want to run the container with the exact same command I used on the old VM, which looks something like this:
docker run -d --restart unless-stopped --stop-timeout 300 -p 8081:8081 --mount type=bind,source="/data",destination=/internal_data
This works in the old VM, but on the new one it says:
docker: Error response from daemon: invalid mount config for type "bind": bind source path does not exist: /data.
See 'docker run --help'.
I don't understand what's wrong. /data exists and is owned by root, the same as it is on the old VM. In fact, it's the same drive with the same contents. If I shut down the new VM and boot up the old one with the drive mounted in exactly the same way, it just works.
What can cause this error, if the source path does in fact exist?
I fixed it by mounting the drive at a mount point under /mnt/.
I changed nothing else; on the other VM it works when the drive is mounted at the root, with the same user and permissions. I have no idea why that fixed it.
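For anyone hitting the same thing, here is a sketch of the working setup; /dev/sdb1 and /mnt/data are placeholder names for the drive and its new mount point:
sudo mkdir -p /mnt/data
sudo mount /dev/sdb1 /mnt/data
docker run -d --restart unless-stopped --stop-timeout 300 -p 8081:8081 --mount type=bind,source="/mnt/data",destination=/internal_data <image>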
I'm working on learning Linux, Docker, and Docker volumes. I want to spin up an httpd container that references an httpd.conf file on my local machine containing proxy and load-balancing configuration, with the correct modules loaded. I've gotten it figured out manually, but now I want to try using a volume to simplify the process.
My issue is that I'm getting permission errors when trying to run the container. I'm using CentOS 7 on a VM, and I get an SELinux alert when I try to start the container. I believe this is a permissions issue: I've changed permissions on my local machine so that the directory and the httpd.conf file are accessible to anyone, but the actual issue seems to be with the httpd container's permissions.
How do I allow the volume to work? Please let me know what other information to provide if what I have here is too vague; I'm still learning Linux and Docker, so I'm not sure what else is required to diagnose this problem.
According to the Docker reference manual, you can add an additional option to the mount parameter which modifies the SELinux label of the host file/directory being mounted.
So your volume mount parameter will go from something like:
-v /folder/to/mount:/directory/in/container
To:
-v /folder/to/mount:/directory/in/container:z
Full command line example:
docker run -d --name my-httpd-cont -ti -v "$(pwd)"/httpd.conf:/usr/local/apache2/conf/httpd.conf:z httpd:latest
As you're learning Linux, you can also turn SELinux off, or put it into permissive mode, which logs warnings but doesn't act on them. This can be done by running the following command:
setenforce 0
To turn it off altogether, manually edit the /etc/selinux/config file and reboot the server.
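Note that setenforce 0 only lasts until the next reboot; to make permissive mode persistent, change the SELINUX= line in /etc/selinux/config, for example:
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
grep '^SELINUX=' /etc/selinux/config
# SELINUX=permissive   (set SELINUX=disabled instead to turn it off entirely)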
I have a Docker container based on centos/systemd. I run the container with:
docker run -d --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro <image>
Then I can access the container with:
docker exec -ti <containerID> /bin/bash
Then I can list all loaded units with the command systemctl. This works fine.
Now I want to deploy the image into a Kubernetes cluster. This also works fine, and I can access the running pod in the cluster via kubectl exec -ti <pod> /bin/bash
If I now type the command systemctl, I get the error message
Failed to get D-Bus connection: Operation not permitted
How is it possible to make systemd/systemctl available in the pod?
HINT: I need systemd because of the software running inside the container, so supervisord is not an option here.
It is a sad observation that the old proposal from Daniel Walsh (Red Hat) is still floating around, which includes a hint to run a "privileged container" to get some systemd behaviour by basically talking to the daemon outside of the container.
Drop that. Just forget it. You can't get that in a real cluster without violating its basic design.
And in most cases, the requirement for systemd in a container is not very strict when you look closer. There are quite a number of service-manager and init-daemon implementations for containers. You could try the docker-systemctl-replacement script, for example (a rough sketch follows below).
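As a sketch of that route: the script is baked into the image as a stand-in for systemctl. The file name systemctl3.py and its path in the project repository are taken from my reading of the project's docs and may have changed, so check its README first:
curl -fsSL -o systemctl3.py https://raw.githubusercontent.com/gdraheim/docker-systemctl-replacement/master/files/docker/systemctl3.py
cat > Dockerfile <<'EOF'
FROM centos:7
RUN yum install -y python3
COPY systemctl3.py /usr/bin/systemctl
RUN chmod +x /usr/bin/systemctl
# the script acts as a lightweight init when run as PID 1
CMD ["/usr/bin/systemctl"]
EOF
docker build -t myimage-systemctl .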
The command to start systemd would have to be in a script in the container. I use /usr/sbin/init or /usr/lib/systemd/systemd --system --unit=basic.target. Additionally, you need to start systemd with a tmpfs on /run to store runtime information. Scripting it is not easy, and Tableau is a good example of why it's done this way.
Also, I recommend NOT using --privileged at all, because it is a security risk, and you may accidentally alter or bring down the host with changes made inside the container.
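Putting those two points together for plain Docker: the commonly documented way to run a systemd-based image without --privileged is a writable tmpfs on /run plus the read-only cgroup mount, along these lines:
docker run -d --tmpfs /run --tmpfs /run/lock --tmpfs /tmp -v /sys/fs/cgroup:/sys/fs/cgroup:ro <image> /usr/sbin/init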
I have a Docker container (not image) that crashes when I try to start it. The Docker logs show that it is failing because an Apache2 conf file can't find a directory (/var/www/html/log/ - this is the result of me trying to get SSL set up: I forgot to create this directory after referencing it in the 000-default.conf file and restarting Apache).
How do I create this directory in the container without having to start the container itself?
You have 4.5 options that come to mind:
You can rebuild the image and set up the directory while doing so.
You can attach a volume when starting the image, but in that case your changes will remain on your disk and not in your container.
You can run the image overriding the entry point with --entrypoint="bash" or something similar. You need the -ti flags so that it starts in interactive mode. Then make your changes and run docker commit -p <container> <image:tag>; -p pauses the container while committing. I recommend this unless the container absolutely needs to keep running.
I am not sure if this one works, so I give it half a point :P but if it does, it would actually be the fastest option. You can start the container in interactive mode with docker start -i <container>, which attaches a terminal. If you have time before the container exits, or before it reads that part of the configuration, you can create the folder.
Ah, finally, I have just remembered: you should be able to copy files and folders between your file system and the container using docker cp [container:]<source> [container:]<destination>, even while the container is not running (see the example below).
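To illustrate that last option (docker cp copies rather than moves, and it works on stopped containers; the container name webserver is a placeholder):
mkdir log
# copies the empty directory into the stopped container as /var/www/html/log
docker cp log webserver:/var/www/html/log
docker start webserver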
In general, if you're using a base Docker image for Apache (for example, httpd/2.4/Dockerfile), it should already have "/var/www/html/log".
SUGGESTION 1: Please make sure you're starting with a "good" base image.
SUGGESTION 2: Add "mkdir -p /var/www/html/log" to your Dockerfile, and rebuild the image (see the sketch after these suggestions).
I'm not sure how you're using your image - what you want the image to contain besides Apache - but:
SUGGESTION 3: Google for a simple tutorial that matches your use case, and see what steps you might be "missing". For example: Dockerize your Laravel Application
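For SUGGESTION 2, the sketch is minimal; my-apache is a placeholder tag for your rebuilt image. Add this line to your Dockerfile:
RUN mkdir -p /var/www/html/log
Then rebuild and recreate the container:
docker build -t my-apache .
docker run -d --name my-apache-cont my-apache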
I tried installing docker on a server of mine using this tutorial.
I want to run Docker images remotely and use the Portainer web interface to administer everything.
However, when I get to the point where I need to test my installation and I enter the command $ sudo docker run hello-world, I only get the following error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "process_linux.go:424: container init caused \"join session keyring: create session key: disk quota exceeded\"": unknown. ERRO[0000] error waiting for container: context canceled
I tried the following methods:
"Install Docker CE / Install using the convenience script"
"Install Docker CE / Install using the repository"
This also happens when I try to run other images (e.g. portainer).
I hope this is enough information.
I am new to Docker, so I don't know how to debug this efficiently.
Try increasing the maxkeys kernel parameter:
echo 50000 > /proc/sys/kernel/keys/maxkeys
see: https://discuss.linuxcontainers.org/t/error-with-docker-inside-lxc-container/922/2
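That echo only lasts until the next reboot (and needs a root shell). A standard way to make it permanent, independent of the linked thread, is a sysctl drop-in:
echo 'kernel.keys.maxkeys = 50000' | sudo tee /etc/sysctl.d/99-keys.conf
sudo sysctl --system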
So, as it turns out, I connected to the wrong vServer.
The one I was connected to is using LXD (as you might have seen in my previous comment), which doesn't support Docker (at least not the way this guide advises).
When I ran the same setup on a vServer using a bare-metal (type 1) hypervisor, it worked without a problem.
I think this has to do with automatic storage allocation under LXD, but this is just a guess.