communication between containers in docker - linux

Is there any way to communicate between Docker containers other than via sockets/network? I have read the Docker documentation, which says we can link containers using the --link option, but it doesn't specify how to transfer data/messages from one container to another. I have already created a container named checkram.
Now I want to link a new container with this container, and I run
docker run -i -t --privileged --link=checkram:linkcheck --name linkcont topimg command.
Then I checked the env variable LINKCHECK_PORT in the linkcont container, which contains tcp://172.17.0.14:22.
I don't know what to do with this IP and port, or how to communicate with the checkram container from the linkcont container. Can anyone help me out? Thanks in advance.
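With the legacy --link flag, Docker injects per-port environment variables and adds an /etc/hosts entry for the alias, so the linked container is reachable by hostname; the 22 in the question comes from the port the checkram image exposes. A sketch of what could be run inside linkcont (any TCP client works; nc is used only as an illustration):

```shell
# Inside linkcont: the alias "linkcheck" resolves to checkram's IP
# via the /etc/hosts entry Docker created for the link.
ping -c 1 linkcheck

# Legacy links also inject per-port variables for each exposed port:
echo "$LINKCHECK_PORT_22_TCP_ADDR"   # checkram's IP, e.g. 172.17.0.14
echo "$LINKCHECK_PORT_22_TCP_PORT"   # the exposed port, e.g. 22

# Any TCP client can then connect to that address, for example:
nc "$LINKCHECK_PORT_22_TCP_ADDR" "$LINKCHECK_PORT_22_TCP_PORT"
```

Whatever service checkram actually listens with on that port (an SSH daemon, in this case) defines the protocol the two containers speak.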

There are several tools you can use to run multiple Docker containers and have them interact. Docker has its own tool, Docker Compose, for building and wiring up multiple containers.
Another tool that works as well is decking. You can also use Fig, but I found decking very straightforward and easy to configure. When I was using decking, Docker Compose had not been released yet; it is a newer tool, but it is developed by Docker itself.
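To sketch how Compose wires containers together, here is a minimal docker-compose.yml (the service and image names are illustrative, not from the original question). Services on the same Compose project share a network and can reach each other by service name, with no manual --link flags:

```yaml
# docker-compose.yml — minimal sketch; "web" and "db" are made-up services
version: "3"
services:
  web:
    image: alpine
    command: ping -c 1 db   # "db" resolves by name on the shared network
    depends_on:
      - db
  db:
    image: redis
```

Running docker compose up creates the shared network, starts both services, and the ping in web reaches db by hostname.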

Related

Use of Docker and Linux containers (LXC)

Given that using Docker alongside LXC containers on the same host can create problems with iptables (if I understood correctly) (source: https://github.com/docker/for-linux/issues/103), does the same apply when using Docker INSIDE an LXC container?
In other words, could we create an LXC container on host A, install Docker inside that container, and use it without being affected by this issue?
Context: why am I asking this? Because I want to create and run a gitlab-runner inside a Docker container (with other things such as Docker, Maven, etc.) on a host that already has LXC containers running, and I do not want to touch those containers.

linuxamazon not running docker daemon [duplicate]

I'm running Jenkins inside a Docker container. I wonder if it's OK for the Jenkins container to also be a Docker host? What I'm thinking of is starting a new Docker container for each integration-test build from inside Jenkins (to start databases, message brokers, etc.). The containers would thus be shut down after the integration tests complete. Is there a reason to avoid running Docker containers from inside another Docker container in this way?
Running Docker inside Docker (a.k.a. dind), while possible, should be avoided, if at all possible. (Source provided below.) Instead, you want to set up a way for your main container to produce and communicate with sibling containers.
Jérôme Petazzoni — the author of the feature that made it possible for Docker to run inside a Docker container — actually wrote a blog post saying not to do it. The use case he describes matches the OP's exact use case of a CI Docker container that needs to run jobs inside other Docker containers.
Petazzoni lists two reasons why dind is troublesome:
It does not cooperate well with Linux Security Modules (LSM).
It creates a mismatch in file systems that creates problems for the containers created inside parent containers.
From that blog post, he describes the following alternative:
[The] simplest way is to just expose the Docker socket to your CI container, by bind-mounting it with the -v flag.
Simply put, when you start your CI container (Jenkins or other), instead of hacking something together with Docker-in-Docker, start it with:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
Now this container will have access to the Docker socket, and will therefore be able to start containers. Except that instead of starting "child" containers, it will start "sibling" containers.
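As a concrete sketch of the sibling-container setup (the jenkins/jenkins image is the usual official one; note it does not ship a docker CLI by default, so assume one has been installed in the image or bind-mounted):

```shell
# Start the CI container with the host's Docker socket bind-mounted:
docker run -d --name jenkins \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts

# From inside that container, any docker command talks to the HOST daemon,
# so this starts a sibling container on the host, not a child:
docker exec jenkins docker run --rm alpine echo "hello from a sibling"
```

Because the sibling runs on the host, any volumes it mounts are resolved against host paths, not paths inside the Jenkins container.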
I answered a similar question before on how to run a Docker container inside Docker.
Running Docker inside Docker is definitely possible. The main thing is to run the outer container with extra privileges (start it with --privileged=true) and then install Docker in that container.
Check this blog post for more info: Docker-in-Docker.
One potential use case for this is described in this entry. The blog describes how to build docker containers within a Jenkins docker container.
However, Docker inside Docker is not the recommended approach for this type of problem. Instead, the recommended approach is to create "sibling" containers, as described in this post.
So, running Docker inside Docker was long considered a good solution for this type of problem; the trend now is to use "sibling" containers instead. See the answer by #predmijat on this page for more info.
It's OK to run Docker-in-Docker (DinD) and in fact Docker (the company) has an official DinD image for this.
The caveat however is that it requires a privileged container, which depending on your security needs may not be a viable alternative.
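For reference, a sketch of the official-image pattern (flags follow the docker image's Docker Hub page; setting DOCKER_TLS_CERTDIR to empty disables TLS, which is acceptable only for local experiments, not production):

```shell
# Run the Docker daemon itself inside a privileged container:
docker network create dind-net
docker run --privileged -d --name dind --network dind-net \
  -e DOCKER_TLS_CERTDIR="" docker:dind

# Talk to that inner daemon from a client container on the same network:
docker run --rm --network dind-net -e DOCKER_HOST=tcp://dind:2375 \
  docker:cli docker version
```

Containers started through the inner daemon live entirely inside the dind container's own storage, which is exactly the filesystem mismatch Petazzoni warns about.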
The alternative of running Docker using sibling containers (aka Docker-out-of-Docker or DooD) does not require a privileged container, but it has a few drawbacks that stem from launching the container from a context different from the one in which it runs (i.e., you launch the container from within a container, yet it runs at the host's level, not inside the parent container).
I wrote a blog describing the pros/cons of DinD vs DooD here.
Having said this, Nestybox (a startup I just founded) is working on a solution that runs true Docker-in-Docker securely (without using privileged containers). You can check it out at www.nestybox.com.
Yes, we can run Docker in Docker. We need to attach the Unix socket /var/run/docker.sock, on which the Docker daemon listens by default, as a volume to the parent container using -v /var/run/docker.sock:/var/run/docker.sock.
Sometimes permission issues may arise with the Docker daemon socket, for which you can run sudo chmod 757 /var/run/docker.sock (note that this opens the socket to every user on the host; adding your user to the docker group is the safer fix).
It also requires running the container in privileged mode, so the commands would be:
sudo chmod 757 /var/run/docker.sock
docker run --privileged=true -v /var/run/docker.sock:/var/run/docker.sock -it ...
I tried my best to run containers within containers, just like you, for the past few days and wasted many hours. So far, most people advised me to do things like using Docker's DinD image, which was not applicable in my case since I need the main container to be Ubuntu, or to run some privileged command and map the daemon socket into the container (which never worked for me).
The solution I found was to use Nestybox on my Ubuntu 20.04 system, and it works great. It is also extremely simple to set up, provided your local system is Ubuntu (which they support best), as the container runtime is specifically designed for such applications. It also has the most flexible options. The free edition of Nestybox is perhaps the best method as of Nov 2022. I highly recommend trying it before bothering with all the tedious setups other people suggest. They have many pre-built solutions that address such specific needs with a simple command line.
Nestybox provides a special runtime environment for newly created Docker containers, and they also provide some Ubuntu/common OS images with Docker and systemd built in.
Their goal is to make the main container function exactly like a virtual machine, securely. You can literally SSH into your Ubuntu main container without being able to access anything on the host machine. From your main container you can create all kinds of containers, just as a normal local system does. The built-in systemd makes it very convenient to set up Docker inside the container.
One simple command to run a container with the Sysbox runtime:
docker run --runtime=sysbox-runc -it any_image
If you think that's what you are looking for, you can find out more at their GitHub:
https://github.com/nestybox/sysbox
Quick link to the instructions on how to deploy a simple Sysbox runtime container: https://github.com/nestybox/sysbox/blob/master/docs/quickstart/README.md

Docker web terminal

I have a VPS running Debian 8 with Docker. I want to give my customers some kind of terminal access to their container through the web interface.
What's the best way of implementing this? Does anyone have some kind of example?
Cheers,
Ramon
You can spin up your own web interface easily, since Docker includes a REST-based API. There are also plenty of existing implementations out there, including:
Universal Control Plane
UI for Docker
Docker WebUI
And various others if you search Docker Hub.
Because you're also asking for examples, a very easy way to get a UI running is the following:
Install the Docker engine (curl -sSL https://get.docker.com/ | sh)
Start the Docker daemon (sudo service docker start)
Run the ui-for-docker container and map port 9000:
docker run -d -p 9000:9000 --privileged -v /var/run/docker.sock:/var/run/docker.sock uifd/ui-for-docker
Access server-ip:9000 in your browser.
If you just want to know what is happening in your Docker registry, you may also want to try this UI for Docker Registry. It is a bit "raw" for now, but it has features that others don't:
It shows the dependency tree (FROM directive) of stored images.
It shows useful statistics about upload counts and image sizes.
It can serve multiple repositories.

Can I run Docker-in-Docker without using the --privileged flag

I'd like to use Docker-in-Docker, but the --privileged flag gives blanket access to devices. Is there a way to run this using a combination of volumes, --cap-add, etc. instead?
Unfortunately no; you must use the --privileged flag to run Docker in Docker. You can take a look at the official announcement, where they state that this is one of the many purposes of the --privileged flag.
Basically, you need more access to the host system's devices to run Docker than you get without --privileged.
Yes, you can run Docker in Docker without the --privileged flag. It involves mounting the Docker socket into the container, like so:
docker run -it -v /var/run/docker.sock:/var/run/docker.sock \
-v $(which docker):/bin/docker \
alpine docker ps -a
That mounts the Docker socket and the docker executable into the container and runs docker ps -a inside the Alpine container. Jérôme Petazzoni, who authored the dind example and did much of the work on the --privileged flag, had this to say about Docker in Docker:
https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
I have been using this approach for a while now and it works pretty well.
The caveat with this approach is that things get funky with storage. You're better off using data volume containers or named data volumes rather than mounting directories. Since you're using the Docker socket from the host, any directory you want to mount into a child container must be specified from the context of the host, not the parent container. It gets weird. I have had better luck with data volume containers.
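A sketch of the named-volume variant mentioned above (the volume name and paths are illustrative):

```shell
# Create a named volume; Docker manages where it lives on the host:
docker volume create ci-data

# Mounting by name sidesteps the host-vs-parent path confusion, because
# no host directory path appears in the command at all:
docker run --rm -v ci-data:/workspace alpine sh -c 'echo hello > /workspace/f'
docker run --rm -v ci-data:/workspace alpine cat /workspace/f

# See where the data actually lives on the host:
docker volume inspect ci-data
```

The same named volume works identically whether the docker run is issued on the host or through the host's socket from inside a container.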
Yes. There are dind-rootless versions of the Docker image on Docker Hub.
https://hub.docker.com/_/docker

Why use a data-only container over a host mount?

I understand the concept of data-only containers.
But why would you use a data-only container over a simple host mount, given that data-only containers seem to make it harder to find the data?
When you don't want to manage the mount yourself and don't need to find the data frequently. A good example is database containers, where using a data-only container provides the following conveniences:
No need to even know which volumes you have to create for a mature container, e.g.:
docker run --name my-data tutum/mysql:5.5 true
docker run -d --name my --volumes-from my-data tutum/mysql:5.5
Simplified management via Docker: you don't have to manually delete the host directory or create a new path when you need to start afresh.
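One convenience that follows from the --volumes-from pattern above is easy backup; a sketch assuming the tutum/mysql image keeps its data in /var/lib/mysql (busybox is just a throwaway helper container):

```shell
# Archive the data-only container's volume into the current host directory:
docker run --rm --volumes-from my-data -v "$(pwd)":/backup \
  busybox tar czf /backup/mysql-data.tar.gz /var/lib/mysql
```

Restoring is the same pattern in reverse: mount the archive and untar it into a fresh data-only container's volume.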
