Starting and stopping docker container from other container - node.js

I need to start, stop and restart containers from inside another container.
For example:
Container A -> start Container B
Container A -> stop Container C
My Dockerfile:
FROM node:7.2.0-slim
WORKDIR /docker
COPY . /docker
CMD [ "npm", "start" ]
Docker Version 1.12.3
I want to avoid using an SSH connection. Any ideas?

A container runs in an isolated environment (e.g. with its own file system and network stack) and thus has no direct way to interact with the host it is running on. This is of course intended, to allow for real isolation.
But there is a way to run containers with some more privileges. To talk to the Docker daemon on the host, you can for example mount the Docker socket of the host system into the container. This works the same way as mounting a host folder into the container:
docker run -v /var/run/docker.sock:/var/run/docker.sock yourimage
For an example, see the docker-compose file of the Traefik proxy, a process that listens for containers starting and stopping on the host in order to activate proxy routes to them. You can find the example in the Traefik repository.
To be able to talk to the Docker daemon on the host, you also need a Docker client installed in the container, or a Docker API library for your programming language. There is an official list of such libraries for different languages in the Docker docs.
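As a minimal sketch: once the socket is mounted as shown above and a Docker client is installed in Container A's image, the usual CLI commands operate on the host's daemon (the container names are placeholders from the question):
docker ps                  # lists the host's containers via the mounted socket
docker start containerB    # Container A -> start Container B
docker stop containerC     # Container A -> stop Container C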
Of course, you should be aware of what privileges you give to the container. Someone who manages to exploit your application could shut down your other containers or - even worse - start their own containers on your system, which can easily be used to gain control over it. Keep that in mind when you build your application.

Related

Use of Docker and Linux containers (LXC)

Given that using Docker alongside LXC containers on the same host can create problems with iptables (if I understood correctly) (source: https://github.com/docker/for-linux/issues/103), does the same apply when using Docker INSIDE an LXC container?
In other words, could we create an LXC container on host A and, inside that container, install and use Docker without being affected by this issue?
Context: why am I asking this? Because I want to create and run a gitlab-runner inside a Docker container (together with other things such as Docker, Maven, etc.) on a host that already has LXC containers running, and I do not want to touch those containers.

linuxamazon not running docker daemon [duplicate]

I'm running Jenkins inside a Docker container. I wonder if it's OK for the Jenkins container to also be a Docker host? What I'm thinking about is starting a new Docker container for each integration-test build from inside Jenkins (to start databases, message brokers etc.). The containers should thus be shut down after the integration tests are completed. Is there a reason to avoid running Docker containers from inside another Docker container in this way?
Running Docker inside Docker (a.k.a. dind), while possible, should be avoided, if at all possible. (Source provided below.) Instead, you want to set up a way for your main container to produce and communicate with sibling containers.
Jérôme Petazzoni — the author of the feature that made it possible for Docker to run inside a Docker container — actually wrote a blog post saying not to do it. The use case he describes matches the OP's exact use case of a CI Docker container that needs to run jobs inside other Docker containers.
Petazzoni lists two reasons why dind is troublesome:
It does not cooperate well with Linux Security Modules (LSM).
It creates a file-system mismatch that causes problems for the containers created inside the parent container.
In that blog post, he describes the following alternative:
[The] simplest way is to just expose the Docker socket to your CI container, by bind-mounting it with the -v flag.
Simply put, when you start your CI container (Jenkins or other), instead of hacking something together with Docker-in-Docker, start it with:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
Now this container will have access to the Docker socket, and will therefore be able to start containers. Except that instead of starting "child" containers, it will start "sibling" containers.
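As an illustrative sketch (image and container names are examples, not from the blog post): with the socket mounted, a command run inside the CI container creates a sibling container managed by the host's daemon:
docker run -d --name it-db -e POSTGRES_PASSWORD=secret postgres:13   # sibling database for the integration tests
docker rm -f it-db                                                    # tear it down once the tests finish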
I answered a similar question before on how to run a Docker container inside Docker.
Running Docker inside Docker is definitely possible. The main thing is to run the outer container with extra privileges (starting it with --privileged=true) and then install Docker in that container.
Check this blog post for more info: Docker-in-Docker.
One potential use case for this is described in this entry. The blog describes how to build docker containers within a Jenkins docker container.
However, Docker inside Docker is not the recommended approach for this type of problem. Instead, the recommended approach is to create "sibling" containers, as described in this post.
So, running Docker inside Docker was long considered a good solution for this type of problem. Now, the trend is to use "sibling" containers instead. See the answer by #predmijat on this page for more info.
It's OK to run Docker-in-Docker (DinD) and in fact Docker (the company) has an official DinD image for this.
The caveat however is that it requires a privileged container, which depending on your security needs may not be a viable alternative.
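For reference, a minimal sketch of using the official DinD image (flags as commonly documented; adjust to your setup):
docker run --privileged -d --name docker-in-docker docker:dind   # inner Docker daemon runs inside this container
docker exec -it docker-in-docker docker ps                       # the inner daemon manages its own, nested set of containers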
The alternative of running Docker using sibling containers (aka Docker-out-of-Docker or DooD) does not require a privileged container, but it has a few drawbacks that stem from launching the container from a context different from the one in which it runs (i.e., you launch the container from within a container, yet it runs at the host's level, not inside that container).
I wrote a blog describing the pros/cons of DinD vs DooD here.
Having said this, Nestybox (a startup I just founded) is working on a solution that runs true Docker-in-Docker securely (without using privileged containers). You can check it out at www.nestybox.com.
Yes, we can run Docker in Docker. We need to mount the Unix socket /var/run/docker.sock, on which the Docker daemon listens by default, into the container as a volume using -v /var/run/docker.sock:/var/run/docker.sock.
Sometimes permission issues arise on the Docker daemon socket; you can work around them with sudo chmod 757 /var/run/docker.sock (note that this makes the socket writable for everyone on the host).
It also requires running the container in privileged mode, so the commands would be:
sudo chmod 757 /var/run/docker.sock
docker run --privileged=true -v /var/run/docker.sock:/var/run/docker.sock -it ...
I tried my best to run containers within containers, just like you, for the past few days and wasted many hours. So far, most people advised me either to use Docker's DinD image, which is not applicable in my case because I need the main container to run Ubuntu, or to run some privileged command and map the daemon socket into the container (which never worked for me).
The solution I found was to use Nestybox's Sysbox on my Ubuntu 20.04 system, and it works best. It is also extremely simple to set up, provided your local system is Ubuntu (which they support best), as the container runtime is specifically designed for this kind of application. It also has the most flexible options. The free edition is perhaps the best method as of Nov 2022. I highly recommend trying it instead of the tedious setups other people suggest. They have many pre-built solutions that address such needs with a single command line.
Nestybox provides a special runtime environment for newly created Docker containers, and they also provide some Ubuntu/common OS images with Docker and systemd built in.
Their goal is to make the main container behave exactly like a virtual machine, securely. You can literally SSH into your Ubuntu main container without being able to access anything on the host machine. From your main container you can create all kinds of containers, just like on a normal local system. Having systemd is very important for setting up Docker conveniently inside the container.
One simple command to run a container with the Sysbox runtime:
docker run --runtime=sysbox-runc -it any_image
If you think that's what you are looking for, you can find out more at their GitHub:
https://github.com/nestybox/sysbox
Quick link to the instructions on how to deploy a simple Sysbox runtime environment container: https://github.com/nestybox/sysbox/blob/master/docs/quickstart/README.md
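As a minimal sketch, assuming Sysbox is installed per the quickstart above (the image name is taken from Nestybox's examples and may change over time):
docker run --runtime=sysbox-runc -d --name syscont nestybox/ubuntu-focal-systemd-docker   # outer "system" container, no --privileged needed
docker exec -it syscont docker run --rm alpine echo "hello from an inner container"       # inner container started by the Docker daemon running inside syscont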

Escaping Docker Container

I have installed a Docker on my Ubuntu machine 16.04.
Is there any way to break out of a Docker container to the host (RCE, privilege escalation, etc.)? In other words, is there any way to access the host machine from inside the Docker container?
Below is the command which I am using it to launch the container.
docker run --rm -ti ubuntu:16.04
I am going to give access to Docker containers at my college for testing purposes, and I have hosted everything on my personal cloud. Is it possible to compromise the host machine from within a container?
Please let me know about this. Before I start giving access at my college, I need to be sure about it.
PS: I have configured macvlan and containers cannot talk to each other.
Thanks!!

Live reload Node.js dev environment with Docker

I'm trying to work on a dev environment with Node.js and Docker.
I want to be able to:
run my docker container when I boot my computer once and for all;
make changes in my local source code and see the changes without interacting with the docker container (with a mount).
I've tried the Node image and, if I understand correctly, it is not what I'm looking for.
I know how to make the mount point, but I'm missing how the server is supposed to detect the changes and "relaunch" itself.
I'm new to Node.js so if there is a better way to do things, feel free to share.
run my docker container when I boot my computer once and for all;
Start containers automatically with the Docker daemon (via restart policies) or with your process manager.
make changes in my local source code and see the changes without interacting with the docker container (with a mount).
You need to mount your dev app folder as a volume:
$ docker run --name myapp -v /app/src:/app image/app
and set this in your Node.js Dockerfile:
CMD ["nodemon", "-L", "/app"]

Docker containers as Linux services?

I just created a secure Docker Registry and ran it on a remote VM (using docker run ...). I then ran docker ps and saw that it is in fact running. I exited the machine and then SSHed back in. Again, I ran docker ps and verified it "survived" me exiting the SSH session.
This has me wondering: do Docker containers actually run as Linux services? If not, is there any way of getting them to run as traditional (upstart- or systemd-based) services? Is there even any reason/merit to do so?
The docker engine runs as a daemon.
That is mentioned in "Host integration":
As of Docker 1.2, restart policies are the built-in Docker mechanism for restarting containers when they exit. If set, restart policies will be used when the Docker daemon starts up, as typically happens after a system boot. Restart policies will ensure that linked containers are started in the correct order.
If restart policies don’t suit your needs (i.e., you have non-Docker processes that depend on Docker containers), you can use a process manager like upstart, systemd or supervisor instead.
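As a minimal sketch for the registry from the question (standard restart-policy flags):
docker run -d --restart=always --name registry -p 5000:5000 registry:2   # policy set when the container is created
docker update --restart=always registry                                   # or added to an already-running container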
That involves (when a container runs with certain options) some security concerns, by the way: see issue 14767 and issue 6401:
The container (with the --net host option) is the host as far as the network stack is concerned, so any services running on the host are accessible to the container. It just so happens that you communicate with upstart (and others) this way.
This feature is a runtime-only option, just like the --privileged flag, therefore an image cannot request it; it must be explicitly set at runtime.
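To illustrate the point about --net host (alpine is just an example image):
docker run --rm --net host alpine ip addr   # the container shares the host's network namespace and sees the host's interfaces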
