I keep randomly receiving "connection refused" errors while trying to SSH into Linux containers.
To check whether a rogue machine was impersonating the container's IP, I ran arping on the interface of the machine that was trying to reach the container and also inside the container, but there are no duplicate IPs.
I double-checked the SSH configuration of the container and of the "hypervisor" host to make sure it wasn't a timeout or anything similar.
The workaround I found so far is to keep a crontab entry SSHing into the container; that way I stopped receiving the "connection refused" error.
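For reference, the workaround is just a cron entry along these lines (the schedule, user, and hostname are placeholders):

# SSH into the container every 5 minutes and run a no-op command
*/5 * * * * ssh user@container-host true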
So I started to wonder: does anybody know whether containers go to sleep when they're inactive?
Thank you very much in advance for your kind answers!
Best regards!
No, not at all.
One thing you must realise is that processes running inside a container are just like any other process running on the kernel. The kernel does not see any difference between them, apart from the fact that they belong to different namespaces and cgroups that limit their resource usage.
If you are using Docker, you can easily find the PID of any container by running the following command:
docker inspect --format '{{.State.Pid}}' container_name
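For example, you can feed that PID to ordinary host tools to confirm that the container's processes look like any other process; a small sketch (container_name is a placeholder):

# show the container's main process from the host's point of view
ps -fp "$(docker inspect --format '{{.State.Pid}}' container_name)"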
Related
I'm having the (apparently common) issue that I'm dockerizing applications that used to run on one machine, and these applications now need to run in different containers (because that's the Docker paradigm and how things should be done). Currently I'm having issues with Postfix and Dovecot: people have found this painful enough that there are tons of images running both Dovecot and Postfix in a single container. I'm doing my best to do this right, but the lack of examples using the inet protocol (over TCP) is just too painful to continue with, to say nothing of bad logging and things that just don't work. I digress.
The question
Is it correct to have shared Docker volumes containing socket files shared across different containers, and to expect the containers to communicate correctly through them? Are there limitations I have to be aware of?
Bonus: Out of curiosity, can this be extended to virtual machines?
EDIT: I would really appreciate it if you shared the source of the information you provide.
A Unix socket can't cross VM or physical-host boundaries. If you're thinking about ever deploying this setup in a multi-host setup like Kubernetes, Docker Swarm, or even just having containers running on multiple hosts, you'll need to use some TCP-based setup instead. (Sharing files in these environments is tricky; sharing a Unix socket actually won't work.)
If you're using Docker Desktop, also remember that it runs a hidden Linux virtual machine, even on native Linux. That may limit your options. There are other setups that more directly use a VM; my day-to-day Docker turns out to be Minikube, for example, which runs a single-node Kubernetes cluster with a Docker daemon inside a VM.
I'd expect sharing a Unix socket to work only if the two containers are on the same physical system, and inside the same VM if appropriate, and with the same storage mounted into both (not necessarily in the same place). I'd expect putting the socket on a named Docker volume mounted into both containers to work. I'd probably expect a bind-mounted host directory to work only on a native Linux system not running Docker Desktop.
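For what it's worth, a minimal sketch of the named-volume approach with plain docker run (the volume, container, and image names are placeholders):

# create a named volume that will hold only the socket file
docker volume create mail-sock
# the server container creates its Unix socket somewhere under /var/sockets
docker run -d --name dovecot -v mail-sock:/var/sockets my-dovecot-image
# the client container mounts the same volume and connects to the same socket path
docker run -d --name postfix -v mail-sock:/var/sockets my-postfix-image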
I have two Docker containers, A and B. On container A a Django application is running. On container B a WebDAV source is mounted.
Now I want to check from container A whether a folder exists in container B (in the WebDAV mount destination).
What is the best solution for something like that? Currently I have solved it by mounting the Docker socket into container A to execute commands from A inside B. I am aware that mounting the Docker socket into a container is a security risk for the host and the whole application stack.
Other possible solutions would be to use SSH, or to share and mount the directory that should be checked. Of course there are further possible solutions, like doing it with HTTP requests.
Because there are so many ways to solve a problem like that, I want to know if there is a best practice (considering security, implementation effort, and performance) for executing commands from container A in container B.
Thanks in advance
WebDAV provides a file-system-like interface on top of HTTP, so I'd just use it directly. This requires almost no setup other than providing the other container's name in configuration (and, if you're using plain docker run, putting both containers on the same network), and it's the same setup in basically all container environments (including Docker Swarm, Kubernetes, Nomad, AWS ECS, ...) as well as in a non-Docker development environment.
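As an illustration, checking whether a folder exists can be a single WebDAV request from container A; a sketch assuming container B is reachable as the hostname "webdav" on port 80 (the path is a placeholder):

# PROPFIND with Depth: 0 prints 207 if the collection exists, 404 if it doesn't
curl -s -o /dev/null -w '%{http_code}\n' -X PROPFIND -H 'Depth: 0' http://webdav/path/to/folder/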
Of the other options you suggest:
Sharing a filesystem is possible. It leads to potential permission problems which can be tricky to iron out. There are potential security issues if the client container isn't supposed to be able to write the files. It may not work well in clustered environments like Kubernetes.
ssh is very hard to set up securely in a Docker environment. You don't want to hard-code a plain-text password that can be easily recovered from docker history; a best-practice setup would require generating host and user keys outside of Docker and bind-mounting them into both containers (I've never seen a setup like this in an SO question). This also brings the complexity of running multiple processes inside a container.
Mounting the Docker socket is complicated, non-portable across environments, and a massive security risk (you can very easily use the Docker socket to root the entire host). You'd need to rewrite that code for each different container environment you might run in. This should be a last resort; I'd consider it only if creating and destroying containers would need to be a key part of this one container's operation.
Is there a best practice for executing commands from container A in container B?
"Don't." Rearchitect your application to have some other way to communicate between the two containers, often over HTTP or using a message queue like RabbitMQ.
One solution would be to mount the same filesystem read-only in one container and read-write in the other container.
See this answer: Docker, mount volumes as readonly
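A minimal sketch with plain docker run (the volume and image names are placeholders); the :ro suffix makes the second mount read-only:

# the writer container mounts the shared volume read-write
docker run -d --name writer -v shared-data:/data my-writer-image
# the reader container mounts the same volume read-only
docker run -d --name reader -v shared-data:/data:ro my-reader-image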
I'm looking for a way for a user to be able to execute a limited set of commands on the host, while only accessing it from containers/browser. The goal is to avoid having to SSH into the host just to occasionally run commands like make start, make stop, etc. These make commands just execute a series of docker-compose commands and are sometimes needed in dev.
The two possible ways I can think of are:
Via a cloud9 terminal inside the browser (we'll already be using it). By default this terminal only accesses the container itself, of course.
Via a custom mini webapp (e.g. node.js/express) with buttons that map to commands. This would be easy to do if running on the host itself, but I want to keep all code like this as containers.
Although it might not be best practice, it is still possible to control the host from inside a container. If you are running docker-compose commands, you can bind-mount the Docker socket by using -v /var/run/docker.sock:/var/run/docker.sock on Ubuntu.
If you want to use other system tools, you will have to bind-mount all required volumes using -v; this gets really tricky and tedious when you want to use system binaries that depend on /lib/*.so files.
If you need to use sudo commands, don't forget to add the --privileged flag when running the container.
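For example, a hedged sketch of driving the host's Docker daemon from inside a container (docker:cli is the official CLI-only image; the command shown just proves the socket works):

# bind-mount the host's Docker socket so the CLI inside talks to the host daemon
docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock docker:cli docker ps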
Named pipes can be very useful for running commands on the host machine from Docker. Your question is very similar to this one.
The solution using named pipes was also given in that same question. I have tried and tested this approach and it works perfectly fine.
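A rough sketch of the named-pipe idea, not taken verbatim from the linked answer (all paths are placeholders):

# on the host: create a pipe and execute every line written into it
mkfifo /path/on/host/hostpipe
while true; do eval "$(cat /path/on/host/hostpipe)"; done &
# in the container, started with -v /path/on/host:/hostpipe-dir:
echo "make start" > /hostpipe-dir/hostpipe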
That approach goes against the Docker concept of process/resource encapsulation. With Docker you encapsulate processes completely from the host and from each other (unless you link containers or volumes). From within a Docker container you cannot see any processes running on the host, due to process namespaces. Executing processes on the host from within a container therefore goes against the Docker methodology.
A container is not supposed to break out and access the host. Docker is (amongst other things) about process isolation. You may find various tricks to execute some code on the host if you set things up for it, though.
I am currently developing new content, so I am building the server.
On my server the base system is CentOS 7. I installed Docker, pulled the CentOS image, and set up the "web server container": Django with uWSGI and nginx.
However, I now want to bring up another service (a database with Postgres). What is the best way to do it?
Install Postgres in my existing container (alongside the web server).
Build a new container only for the database.
I want to know the advantages and weak points of each.
It's idiomatic to use two separate containers. This is also simpler: if you have two or more processes in a container, you need a parent process to monitor them (typically people use a process manager such as supervisord). With only one process, you won't need to do this.
By monitoring, I mainly mean that you need to make sure all processes are correctly shut down if the container receives a SIGTERM signal. If you don't do this properly, you will end up with zombie processes. You won't need to worry about this if you only have a single process or use a process manager.
Further, as Greg points out, having separate containers allows you to orchestrate and schedule them separately, so you can update/change/scale/restart each container without affecting the other one.
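For what it's worth, a minimal sketch of the two-container setup with plain docker commands (image names, the password, and the network name are placeholders):

# put both containers on a user-defined network so they can reach each other by name
docker network create app-net
docker run -d --name db --network app-net \
  -e POSTGRES_PASSWORD=example -v pgdata:/var/lib/postgresql/data postgres:15
docker run -d --name web --network app-net -p 8000:8000 my-django-image
# the web container connects to the database at the hostname "db", port 5432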
If you want to keep the data in the database after a restart, the database shouldn't be in a container but on the host. I will assume you want the db in a container as well.
Setting up a second container is a lot more work. You need a way for the containers to know each other's address. The address changes each time you start a container, so you need some scripts on the host: the host must find out the IP addresses and inform the containers.
The containers might then update their /etc/hosts file with the address of the other container. This is a nice solution when you want to emulate different servers and perform resilience tests. You will need quite some bash knowledge before you get this running well.
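As an illustration, such a host script might look up a running container's address like this (container_name is a placeholder):

# print the IP address of a running container
docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name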
In almost all other situations, choose one container. Installing everything in one container is easier to set up and easier to develop with afterwards. Docker is just the environment in which you want to do your real work; tooling should help you with your real work, not take all your time and effort.
I am trying to find the best way to automatically start services inside a docker container once it has been restarted.
I don't mean starting the Docker container on restart. I'm trying to achieve the following:
I stop a container; and
when I start it again, the same services (processes) I was running before will start up again.
I.e. if I am running apache and ssh inside the container, those services should start again when the container restarts.
That's really not the Docker way (multiple processes per container). You can try to go down that path, as I did for several months, but you'll find that you'll be going against the Docker team's design principles most of the time. I used the phusion/baseimage base image and it really is well designed, with a good init process and support for runit and ssh out of the box. Tread carefully if you go down that path, however.
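If you do go down that path, phusion/baseimage's convention is to register each service as a runit run script; a hedged sketch (the exact apache invocation depends on your distribution and is an assumption here):

# inside the image build: register apache as a runit-managed service
mkdir -p /etc/service/apache
printf '#!/bin/sh\nexec apachectl -D FOREGROUND\n' > /etc/service/apache/run
chmod +x /etc/service/apache/run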