The architecture is simple: I have a small bash script that orchestrates two types of containers. A member of type "1" talks to a member of type "2", and there are about 15 containers of each type. When I detect that a member of type "2" (or "1") has died, I kill its partner and start both again.
The environment is Amazon EC2, 8 cores, Ubuntu 14.04.
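For reference, here is a minimal sketch of the kind of watchdog loop described above; the container names (a-1..a-15 paired with b-1..b-15), image names, and polling interval are hypothetical, not my actual script:

#!/bin/bash
# Hypothetical watchdog: a-N (type "1") is paired with b-N (type "2").
# When either member of a pair dies, kill both and start them again.
while true; do
  for i in $(seq 1 15); do
    for name in "a-$i" "b-$i"; do
      running=$(docker inspect -f '{{.State.Running}}' "$name" 2>/dev/null)
      if [ "$running" != "true" ]; then
        docker stop "a-$i" "b-$i" 2>/dev/null
        docker rm   "a-$i" "b-$i" 2>/dev/null
        docker run -d --name "a-$i" my/type1-image   # hypothetical image names
        docker run -d --name "b-$i" my/type2-image
        break
      fi
    done
  done
  sleep 5
done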
After some period of time, 'docker ps' stops responding.
I think the problem is the number of times I issue the 'docker stop' command; when I used 'docker kill' instead, the problem occurred even faster.
Reading the documentation, the documented behaviour of 'docker stop' matches what I see in docker.log, which is a lot of docker kill entries (this is what happens when a container does not respond to a 'docker stop' command).
Is there a problem with docker stop/kill and Docker's memory management?
That could be linked to issue 15101, where daemon containers have trouble being killed.
I still have this issue on docker 1.8.1.
Removing /etc/apparmor.d/docker and running sudo service apparmor reload appears to fix it.
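In other words, the workaround amounts to the following (assuming the default Ubuntu paths; the profile is backed up outside /etc/apparmor.d so it can be restored later):

# Back up the Docker AppArmor profile, remove it, and reload AppArmor
sudo cp /etc/apparmor.d/docker /root/docker.apparmor.bak
sudo rm /etc/apparmor.d/docker
sudo service apparmor reload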
This looks like a problem in the Docker infrastructure itself. There is a pretty big thread on the subject that matches what I asked here; I will post an answer when the issue is resolved there:
Docker Daemon Hangs under load
I'm running Jenkins inside a Docker container. I wonder if it's OK for the Jenkins container to also be a Docker host? What I'm thinking about is starting a new Docker container for each integration-test build from inside Jenkins (to start databases, message brokers, etc.). The containers should thus be shut down after the integration tests are completed. Is there a reason to avoid running Docker containers from inside another Docker container in this way?
Running Docker inside Docker (a.k.a. dind), while possible, should be avoided, if at all possible. (Source provided below.) Instead, you want to set up a way for your main container to produce and communicate with sibling containers.
Jérôme Petazzoni — the author of the feature that made it possible for Docker to run inside a Docker container — actually wrote a blog post saying not to do it. The use case he describes matches the OP's exact use case of a CI Docker container that needs to run jobs inside other Docker containers.
Petazzoni lists two reasons why dind is troublesome:
It does not cooperate well with Linux Security Modules (LSM).
It creates a mismatch in file systems that creates problems for the containers created inside parent containers.
From that blog post, he describes the following alternative:
[The] simplest way is to just expose the Docker socket to your CI container, by bind-mounting it with the -v flag.
Simply put, when you start your CI container (Jenkins or other), instead of hacking something together with Docker-in-Docker, start it with:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
Now this container will have access to the Docker socket, and will therefore be able to start containers. Except that instead of starting "child" containers, it will start "sibling" containers.
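As a concrete, hedged example of that sibling-container setup for the Jenkins case (the jenkins/jenkins:lts image, port, and volume names are illustrative assumptions; you also still need the docker CLI installed inside the Jenkins image so jobs can talk to the mounted socket):

# Start Jenkins with the host's Docker socket bind-mounted in
docker run -d \
  --name jenkins \
  -p 8080:8080 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts

# docker commands run from Jenkins jobs now go to the host daemon,
# so the containers they start are siblings of the Jenkins container.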
I answered a similar question before on how to run a Docker container inside Docker.
To run docker inside docker is definitely possible. The main thing is that you run the outer container with extra privileges (starting with --privileged=true) and then install docker in that container.
Check this blog post for more info: Docker-in-Docker.
One potential use case for this is described in this entry. The blog describes how to build docker containers within a Jenkins docker container.
However, Docker inside Docker is not the recommended approach for this type of problem. Instead, the recommended approach is to create "sibling" containers, as described in this post.
So, running Docker inside Docker was considered by many to be a good solution for this type of problem. Now, the trend is to use "sibling" containers instead. See the answer by @predmijat on this page for more info.
It's OK to run Docker-in-Docker (DinD) and in fact Docker (the company) has an official DinD image for this.
The caveat, however, is that it requires a privileged container, which, depending on your security needs, may not be a viable alternative.
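For completeness, a hedged sketch of how the official DinD image is typically used (TLS is disabled here via DOCKER_TLS_CERTDIR="" to keep the example short; a real setup should keep the TLS handshake the image configures by default):

# Start an inner Docker daemon in a container (requires --privileged)
docker network create dind-net
docker run -d --privileged --name dind \
  --network dind-net --network-alias docker \
  -e DOCKER_TLS_CERTDIR="" \
  docker:dind

# After giving the inner daemon a moment to start, run a client against it
docker run --rm --network dind-net \
  -e DOCKER_HOST=tcp://docker:2375 \
  docker:cli docker version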
The alternative solution of running Docker using sibling containers (aka Docker-out-of-Docker or DooD) does not require a privileged container, but has a few drawbacks that stem from the fact that you are launching the container from within a context that is different from the one in which it's running (i.e., you launch the container from within a container, yet it's running at the host's level, not inside the parent container).
I wrote a blog describing the pros/cons of DinD vs DooD here.
Having said this, Nestybox (a startup I just founded) is working on a solution that runs true Docker-in-Docker securely (without using privileged containers). You can check it out at www.nestybox.com.
Yes, we can run Docker in Docker; we'll need to attach the Unix socket /var/run/docker.sock, on which the Docker daemon listens by default, as a volume to the parent Docker container using -v /var/run/docker.sock:/var/run/docker.sock.
Sometimes permission issues may arise for the Docker daemon socket, for which you can run sudo chmod 757 /var/run/docker.sock.
It also requires running Docker in privileged mode, so the commands would be:
sudo chmod 757 /var/run/docker.sock
docker run --privileged=true -v /var/run/docker.sock:/var/run/docker.sock -it ...
I spent the past few days trying my best to run containers within containers, just like you, and wasted many hours. So far most people have advised me either to use Docker's DinD image, which is not applicable in my case as I need the main container to be Ubuntu, or to run some privileged command and map the daemon socket into the container (which never ever worked for me).
The solution I found was to use Nestybox's Sysbox on my Ubuntu 20.04 system, and it works best. It's also extremely simple to set up, provided your local system is Ubuntu (which they support best), as the container runtime is specifically designed for this kind of application. It also has the most flexible options. The free edition of Nestybox is perhaps the best method as of Nov 2022. I highly recommend you try it without bothering with all the tedious setups other people suggest. They have many pre-built solutions to address such specific needs with a simple command line.
Nestybox provides a special runtime environment for newly created Docker containers; they also provide some Ubuntu/common OS images with Docker and systemd built in.
Their goal is to make the main container function exactly like a virtual machine, securely. You can literally SSH into your Ubuntu main container without being able to access anything on the main machine. From your main container you can create all kinds of containers just like a normal local system does. That built-in systemd is very important for setting up Docker conveniently inside the container.
One simple, common command to run Sysbox:
docker run --runtime=sysbox-runc -it any_image
If you think that's what you are looking for, you can find out more on their GitHub:
https://github.com/nestybox/sysbox
Quick link to the instructions on how to deploy a simple Sysbox runtime environment container: https://github.com/nestybox/sysbox/blob/master/docs/quickstart/README.md
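For example, to get a container that can itself run systemd and Docker (the nestybox/ubuntu-focal-systemd-docker image name is an assumption based on their published sample images; any image can be run under the sysbox-runc runtime once Sysbox is installed):

# Start a "VM-like" container with systemd and Docker preinstalled, under Sysbox
docker run --runtime=sysbox-runc -d --name syscont nestybox/ubuntu-focal-systemd-docker
# Get a shell inside it; from there, systemctl and docker work as on a normal host
docker exec -it syscont bash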
I have a docker container based on centos/systemd. I run the container with
docker run -d --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro <image>
Then I can access the container with:
docker exec -ti <containerID> /bin/bash
Then I can list all loaded units with the systemctl command. This works fine.
Now I want to deploy the image into a Kubernetes cluster. This also works fine, and I can access the running pod in the cluster via kubectl exec -ti <pod> /bin/bash.
If I now type the systemctl command, I get the error message
Failed to get D-Bus connection: Operation not permitted
How is it possible to make systemd/systemctl available in the pod?
HINT: I need systemd because of the software running inside the container, so supervisord is not an option here.
It is a sad observation that the old proposal from Daniel Walsh (Red Hat) is still floating around, which includes a hint to run a "privileged container" to get some systemd behaviour by basically talking to the daemon outside of the container.
Drop that. Just forget it. You can't get that in a real cluster without violating its basic design.
And in most cases, the requirement for systemd in a container is not very strict when you look closer. There are quite a number of service-manager or init-daemon implementations for containers. You could try the docker-systemctl-replacement script, for example.
The command to start systemd would have to be in a script in the container. I use /usr/sbin/init or /usr/lib/systemd/systemd --system --unit=basic.target. Additionally, you need to start systemd with a tmpfs for /run to store runtime information. Scripting it is not easy, and Tableau is a good example of why it's being done.
Also, I recommend avoiding --privileged at all costs, because it's a security risk, plus you may accidentally alter or bring down the host with changes made inside the container.
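A hedged sketch of that startup, combining the tmpfs for /run with the cgroup mount from the question and avoiding --privileged (whether this is sufficient depends on the image and on the host's Docker and cgroup versions):

# Run systemd as PID 1 with a tmpfs on /run and /run/lock
docker run -d --name systemd-test \
  --tmpfs /run --tmpfs /run/lock \
  -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
  centos/systemd /usr/sbin/init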
In an openSUSE docker container, cron jobs are not working. When I try the systemctl command I get this error: Failed to get D-Bus connection: Unknown error -1. I have tried many blogs and Stack Overflow questions; everywhere it was advised that the basic architecture of the Docker image should be redesigned.
What exactly needs to be done here is not mentioned. Kindly help, I am stuck on this issue.
To a first approximation, commands like systemctl, initctl, service, or start just don't work in Docker and you should find a different way to do what you're attempting.
Stylewise, the standard way to use a Docker container is to launch some sort of service in the foreground. As one specific example, the standard Redis image doesn't go through any sort of init script; it just runs
CMD ["redis-server"]
In most Docker images it's unusual to even so much as launch a background process (with the shell & operator). It's not usually necessary and in Dockerfiles the interaction with the RUN directive has confused some people.
In the specific case of systemctl, it requires systemd, an extremely heavyweight init system that is not just a process manager but also wants to monitor and manage kernel-level parameters, includes a logging system, runs an inter-process message bus, and provides some other functionality. You can't run systemd under Docker without the container being --privileged, which gives the container the ability to "escape" onto the host system in some unfortunate ways.
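To bring this back to the cron question above, here is a hedged Dockerfile sketch of the foreground-process style (it assumes a Debian base image and a hypothetical crontab file for concreteness; openSUSE's cron package and flags may differ):

FROM debian:bullseye-slim
RUN apt-get update \
 && apt-get install -y --no-install-recommends cron \
 && rm -rf /var/lib/apt/lists/*
# Hypothetical crontab file supplied in the build context
COPY my-jobs /etc/cron.d/my-jobs
RUN chmod 0644 /etc/cron.d/my-jobs
# Run cron itself in the foreground as PID 1, instead of systemctl/systemd
CMD ["cron", "-f"]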
I'm following step one of this docker tutorial.
I have installed Ubuntu 14.04 on a VirtualBox VM.
I intentionally downgraded my docker version so that when I type "docker version" I get Client version: 1.5.0. This is because the server I intend to communicate with is on 1.5.0.
When trying the command "docker run hello-world" I get the response:
"Post http:///var/run/docker.sock/v1.17/containers/create: dial unix /var/run/docker.sock: permission denied. Are you trying to connect to a TLS-enabled daemon without TLS?"
When running "sudo docker run hello-world" I get the response:
Cannot connect to the Docker daemon. Is 'docker -d' running on this host?
Can someone please explain to me what's happening and how I can fix it?
Thanks.
Edit: I tried to follow the solution for Linux here and went through El Mesa's instructions in that post. However, when I got to running sudo docker -d I got an Error running DeviceCreate (createPool) dm_task_run failed. I don't think I should need to start anything up manually, since I was just following the tutorial, and the tutorial ran docker run hello-world immediately after installing Docker.
Pay attention to the text that immediately precedes Are you trying to connect to a TLS-enabled daemon without TLS in the error message. In the question asked here it is permission denied, but it could also be no such file or directory (or possibly something else). The former is more likely to mean that the current user lacks permission to access Docker, while the latter is more likely to mean that there is a problem with the Docker service itself, including the possibility that it is not running at all.
So, depending on your situation, look for the answers on this and the linked question page that focus on the respective problem area.
In my case (CentOS Linux release 7.1.1503 (Core), docker-1.7.1-108.el7.centos.x86_64) it was permission denied. I had added the user to the docker group (sudo usermod -a -G docker user), but the docker command still didn't work when I ran it as that user, while it ran fine under sudo. What I forgot to do was log the user out and back in after adding it to the docker group, which is a necessary step for the group membership to take effect.
Restarting the machine will also solve this issue, but it is a more drastic step, and it works because it implies the log out / log in step. I would recommend trying to log out and back in before restarting, because if that works it gives you more confidence that the group membership was the actual issue. And if it doesn't work you can always try restarting, though if things work after that it is probably because the restart took care of some other underlying issue.
And one more thing, in case you come across it and find yourself in doubt: when you first install Docker and wish to add a user to the docker group, you may notice (as I did in my case) that a "dockerroot" group exists but no "docker" group. Do not add the user to the dockerroot group assuming that is the one you need. Instead, create a new docker group and add the user to it.
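Put together, the fix for the permission denied case looks roughly like this (per the standard Docker post-install steps; skip groupadd if the docker group already exists):

# Add your user to the docker group and refresh group membership
sudo groupadd docker
sudo usermod -aG docker $USER
# Either log out and back in, or start a shell with the new group applied:
newgrp docker
docker run hello-world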
It may be that your docker daemon is not running.
I have ubuntu/docker on a desktop with wireless LAN.
It acts a bit finicky compared to the wired computers, from which docker works OK, and reproduces the error message you reported:
$ docker run -it ubuntu:latest /bin/bash
FATA[0000] Post http:///var/run/docker.sock/v1.17/containers/create: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?
However, after running:
sudo service docker start
It behaves correctly (at least until the host is rebooted):
$ docker run -it ubuntu:latest /bin/bash
root@2cea4e5f5028:/#
If the system is not starting the docker daemon on boot, as was the case here, then the docker daemon can be started automatically on boot by editing /etc/rc.local. Add the line below immediately before the exit line. This forks a new shell, waits 30 seconds for the network setup, etc., to settle, and starts the docker daemon. sudo is unnecessary here because /etc/rc.local runs as root.
( sleep 30; /usr/sbin/service docker start ) &
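The resulting /etc/rc.local would then look roughly like this (the rest of the file is whatever your distribution ships; only the line before exit 0 is added):

#!/bin/sh -e
#
# rc.local - executed at the end of each multiuser runlevel
#
( sleep 30; /usr/sbin/service docker start ) &
exit 0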
We are trying to move to Docker for deployment purposes. Our architecture requires a Redis, a MongoDB, and several Node.js- and Java-based Docker containers.
So my question is: if the redis/mongodb docker container crashes, do we lose all the data it had?
We want isolation, but at the same time we don't want to lose data due to malfunctions/crashes. Is it even possible to achieve this with Docker, or is it not relevant here?
Any help or comments will be greatly appreciated.
Thanks
The answer is: YES. If a container crashes in such a way that it cannot be restored/restarted, the data is gone. But normally containers can be restarted and continued, and in that case the data is not lost.
E.g., the following sequence from the Docker docs illustrates how container startup works. Note that the data is not lost here until the container is removed.
# Start a new container
$ JOB=$(sudo docker run -d ubuntu /bin/sh -c "while true; do echo Hello world; sleep 1; done")
# Stop the container
$ sudo docker stop $JOB
# Start the container
$ sudo docker start $JOB
# Restart the container
$ sudo docker restart $JOB
# SIGKILL a container
$ sudo docker kill $JOB
# Remove a container
$ sudo docker stop $JOB # Container must be stopped to remove it
$ sudo docker rm $JOB
Whenever you execute a docker run command you start a new container with fresh data. The data is based on the image you provide, and that data is consistent (unless you rebuild the image, of course).
So, how should you set up Docker to keep your data intact? I think a good approach is to keep the important data mounted in a volume. Volumes are simply external folders (i.e. a folder from the host system) that hold the data, and this data will not be lost even if you reinstall the entire Docker daemon.
Example:
docker run -v /some/local/dir:/some/dir/in/redis-container my/redis
This mounts the host folder /some/local/dir as the folder /some/dir/in/redis-container in the running container. If e.g. redis stores its data in that folder you're all set to go and reboots/crashes can be survived.
For more info about Docker volumes, check out the docs. Another great article, also from the Docker website, is Managing Data in Containers.
EDIT: After comments I clarified the answer - the data is lost if the container can't be restarted (total crash).
If a container crashes, you won't lose any data - at least not more than with a regular application crash.
The container itself is unlikely to crash (after all, it's only an envelope for your application(s)). Your application(s) running in a container can crash, and if they do, their data will still be on the container filesystem. All you have to do in such a situation is to restart the failed container.
One case where you could lose something is if you explicitly tell Docker to remove the container when it's not running anymore (--rm option).
That being said, for IO-intensive applications such as databases, it is highly recommended to host the data on Docker volumes for performance reasons (a Docker volume is a traditional filesystem, while the container's default filesystem is a stack of layers and will be slower).
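A hedged example of that recommendation for the Redis case from the question, using a named volume instead of a host folder (the volume and container names are illustrative; the official redis image stores its data under /data):

# Create a named volume and mount it at Redis' data directory
docker volume create redis-data
docker run -d --name redis -v redis-data:/data redis

# The data survives the container being removed; just reattach the volume:
docker rm -f redis
docker run -d --name redis -v redis-data:/data redis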