Execute host commands from within a docker container - security

I'm looking for a way for a user to be able to execute a limited set of commands on the host, while only accessing it from containers/browser. The goal is to prevent the need for SSH'ing to the host just to run commands occasionally like make start, make stop, etc. These make commands just execute a series of docker-compose commands and are needed sometimes in dev.
The two possible ways in I can think of are:
Via cloud9 terminal inside browser (we'll already be using it). By default this terminal only accesses the container itself of course.
Via a custom mini webapp (e.g. node.js/express) with buttons that map to commands. This would be easy to do if running on the host itself, but I want to keep all code like this as containers.

Although it might not be best practice, it is still possible to control the host from inside a container. If you are running docker-compose commands, you can bind mount the docker socket by using -v /var/run/docker.sock:/var/run/docker.sock on Ubuntu.
If you want to use other system tools, you will have to bind mount all required volumes using -v; this gets really tricky and tedious when you want to use system bins that use /lib.*.so files.
If you need to use sudo commands, don't forget to add the --privileged flag when running the container.

Named pipes can be very useful to run commands on the host machine from docker. Your question is very similar to this one.
The solution using named pipes was also given in the same question. I had tried and tested this approach and it works perfectly fine.
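One way to keep the named-pipe approach limited to the question's "make start" and "make stop", shown as an added example that is not part of the original answers: a host-side listener that treats each line from the pipe as a key into a fixed list rather than as a command. The pipe path, the project directory, the request words and the DRY_RUN switch are assumptions made for the example.

```shell
#!/bin/sh
# Added example: host-side dispatcher for a FIFO shared with a container.
# Assumed names (not from the original post): PIPE, PROJECT_DIR, DRY_RUN.
PIPE="${PIPE:-/srv/hostpipe/requests}"      # its directory is bind-mounted into the container
PROJECT_DIR="${PROJECT_DIR:-/srv/project}"  # directory holding the Makefile

# Map a request word to a fixed make target. Anything else is rejected,
# so the container only ever supplies a key, never a command line.
resolve_target() {
    case "$1" in
        start) echo "start" ;;
        stop)  echo "stop" ;;
        *)     return 1 ;;
    esac
}

# Handle one request line: look it up, run the fixed target, report the outcome.
handle_request() {
    if target=$(resolve_target "$1"); then
        echo "accepted: make $target"
        # DRY_RUN (assumed switch) skips execution, e.g. for testing.
        [ -n "${DRY_RUN:-}" ] || make -C "$PROJECT_DIR" "$target"
    else
        echo "rejected request" >&2
        return 1
    fi
}

# Read requests forever; the FIFO is reopened each time a writer closes it.
serve() {
    # Owner (the listener) reads, group (the container's user) writes.
    [ -p "$PIPE" ] || mkfifo -m 0620 "$PIPE"
    while true; do
        while IFS= read -r line; do
            handle_request "$line" || true
        done < "$PIPE"
    done
}

# Start the loop only when asked, so the functions can be sourced on their own.
if [ "${1:-}" = "serve" ]; then
    serve
fi
```

Inside the container, a request is a single word written to the mounted pipe, for example `echo start > /hostpipe/requests` (mount point assumed). Unlike a mounted docker socket, the worst a compromised container can do through this channel is trigger one of the two listed targets.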

That approach would be against the docker concepts of process/resources encapsulation. With docker you encapsulate processes completely from the host and from each other (unless you link containers/volumes). From within a docker container you cannot see any processes running on the host due to process namespaces. When you now want to execute processes on the host from within a container that would be against the docker methodology.

A container is not supposed to break out and access the host. Docker is (amongst other things) process isolation. You may find various tricks to execute some code on the host, when you set it up, though.

Related

Best Practise for docker intercontainer communication

I have two docker containers A and B. On container A a django application is running. On container B a WEBDAV Source is mounted.
Now I want to check from container A if a folder exists in container B (in the WebDAV mount destination).
What is the best solution to do something like that? Currently I solved it mounting the docker socket into the container A to execute cmds from A inside B. I am aware that mounting the docker socket into a container is a security risk for the host and the whole application stack.
Other possible solutions would be to use SSH or share and mount the directory which should be checked. Of course there are further possible solutions like doing it with HTTP requests.
Because there are so many ways to solve a problem like that, I want to know if there is a best practise (considering security, effort to implement, performance) to execute commands from container A in container B.
Thanks in advance
WebDAV provides a file-system-like interface on top of HTTP. I'd just directly use this. This requires almost no setup other than providing the other container's name in configuration (and if you're using plain docker run putting both containers on the same network), and it's the same setup in basically all container environments (including Docker Swarm, Kubernetes, Nomad, AWS ECS, ...) and a non-Docker development environment.
Of the other options you suggest:
Sharing a filesystem is possible. It leads to potential permission problems which can be tricky to iron out. There are potential security issues if the client container isn't supposed to be able to write the files. It may not work well in clustered environments like Kubernetes.
ssh is very hard to set up securely in a Docker environment. You don't want to hard-code a plain-text password that can be easily recovered from docker history; a best-practice setup would require generating host and user keys outside of Docker and bind-mounting them into both containers (I've never seen a setup like this in an SO question). This also brings the complexity of running multiple processes inside a container.
Mounting the Docker socket is complicated, non-portable across environments, and a massive security risk (you can very easily use the Docker socket to root the entire host). You'd need to rewrite that code for each different container environment you might run in. This should be a last resort; I'd consider it only if creating and destroying containers would need to be a key part of this one container's operation.
Is there a best practise to execute commands from container A in container B?
"Don't." Rearchitect your application to have some other way to communicate between the two containers, often over HTTP or using a message queue like RabbitMQ.
One solution would be to mount one filesystem readonly on one container and read-write on the other container.
See this answer: Docker, mount volumes as readonly

How to create a docker image of current file and OS system?

I wonder if one can take all the current environment variables, settings and OS applications and create a simple docker layer on top of it all, so that a docker container user will not be able to damage the host system even if he would remove all files, yet will have the ability to access all installed applications and system settings inside his docker layer?
Technically you might be able to hack together a solution that does this by copying in all data/apps, installing dependencies, re-configuring the applications and providing a bash shell to attach to for a user to play around with but this is not what Docker is designed for at all, not to mention that I would not recommend anyone to attempt this.
I always try to explain docker's usecase as processes which run in isolated containers with defined interfaces that may be exposed. Meaning you would ideally run one application within it which has an interface exposed for communication.
What you are looking for is essentially a VM with snapshots which you can provide to different users.

Sandboxing a node.js script written by a user running on the server

I'm developing a platform where users can create their own "widgets", widgets are basically js snippets ( in the future there will be html and css too ).
The problem is they must run even when the user is not on the website, so basically my service will have to schedule those user scripts to run every now and then.
I'm trying to figure out which would be the best way to "sandbox" that script. One of the first ideas I had was to run it in its own process inside of a Docker, so let's say the user manages to somehow get into the shell, it would be a virtual machine and hopefully he would be locked inside.
I'm not a Docker specialist so I'm not even sure if that makes sense; anyway, that would yield another problem, which is spinning up hundreds of dockers to run 1 simple javascript snippet.
Is there any "secure" way of doing this? Perhaps running the script on an empty scope and somehow removing access to the "require" method?
Another requirement would be to kill the script if it times out.
EDIT:
- Found this relevant stackexchange link
This can be done with docker: you would create a docker image with their script in it and then run the image, which creates a container for the script to run in.
You could even make it super easy and create a common image, based on the official node.js docker image, and pass in the user's custom files at run time, run them, save the output, and then you are done. This approach is good because there is only one image to maintain, and it keeps the setup simple.
The best way to pass in the data would be to create a volume mount on the container, and mount the user's directory into the container at the same spot every time.
For example, let's say you had a host with a directory structure like this.
/users/
    aaron/
    bob/
    chris/
Then when you run the containers you just need to change the volume mount.
docker run -v /users/aaron:/user/ myimagename/myimage
docker run -v /users/bob:/user/ myimagename/myimage
I'm not sure what the output would be, but you could write it to /user/output inside the container and the output would be stored in the user's output directory.
As far as timeouts, you could write a simple script that looks at docker ps and, if a container has been running for longer than the limit, docker stops the container.
Because everything is run in a container, you can run many at a time and they are isolated from each other and the host.
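The launch and timeout steps from this answer, as an added example that is not part of the original post: each user's directory is mounted into a container with no network, no capabilities and resource caps, and a reaper pass stops containers that outlive the limit. The image name and /users layout follow the answer; the label names, the 30-second limit and the resource values are assumptions.

```shell
#!/bin/sh
# Added example. Assumed: the labels "widget"/"started", LIMIT, resource caps.
IMAGE="myimagename/myimage"
LIMIT="${LIMIT:-30}"   # seconds a user script may run

# Accept only simple user names so the bind-mount source stays under /users.
valid_user() {
    case "$1" in
        ''|*[!a-z0-9_-]*) return 1 ;;
        *) return 0 ;;
    esac
}

# Start one user's script with no network, no capabilities, a read-only
# root filesystem and resource caps; record the start time in a label.
run_widget() {
    valid_user "$1" || { echo "bad user name" >&2; return 1; }
    docker run -d --rm \
        --label widget=1 --label "started=$(date +%s)" \
        --network none --read-only --cap-drop ALL \
        --security-opt no-new-privileges \
        --memory 128m --pids-limit 64 \
        -v "/users/$1:/user/" "$IMAGE"
}

# Read "ID START_EPOCH" lines on stdin; print the IDs older than the limit.
expired_ids() {
    now=$1 limit=$2
    while read -r id started; do
        case "$started" in ''|*[!0-9]*) continue ;; esac   # skip malformed lines
        [ $((now - started)) -gt "$limit" ] && echo "$id"
    done
    return 0
}

# One reaper pass: list the labelled containers and stop the expired ones.
reap() {
    docker ps --filter label=widget=1 --format '{{.ID}} {{.Label "started"}}' |
        expired_ids "$(date +%s)" "$LIMIT" |
        while read -r id; do docker stop -t 2 "$id"; done
}

case "${1:-}" in
    run)  run_widget "${2:-}" ;;
    reap) reap ;;
esac
```

Running the `reap` action from cron or a loop gives the periodic check the answer describes; the user-name check matters because the name is interpolated into a host path for `-v`.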

Docker for a one shot CLI application

Since I first knew of Docker, I thought it might be the solution for several problems we are usually facing at the lab. I work as a Data Analyst for a small Biology research group. I am using Snakemake for defining the -usually big and quite complex- workflows for our analyses.
From Snakemake, I usually call small scripts in R, Python, or even Command Line Applications such as aligners or annotation tools. In this scenario, it is not uncommon to suffer from dependency hell, hence I was thinking about wrapping some of the tools in Docker containers.
At this moment I am stuck at a point where I do not know if I have chosen technology badly, or if I am not able to properly assimilate all the information about Docker.
The problem is related to the fact that you have to run the Docker tools as root, which is something I would not like to do at all, since the initial idea was to make the dockerized applications available to every researcher willing to use them.
In AskUbuntu, the most voted answer proposes to add the final user to the docker group, but it seems that this is not good for security. In the security articles at Docker, on the other hand, they explain that running the tools as root is good for your security. I have found similar questions at SO, but related to the environment inside the container.
Ok, I have no problem with this, but as every moderate-complexity example I happen to find, it seems it is more oriented towards web-applications development, where the system could initially start the container once and then forget about it.
Things I am considering right now:
Configuring the Docker daemon as a TLS-enabled, TCP remote service, and provide the corresponding users with certificates. Would there be any overhead in running the applications? Security issues?
Create images that only make available the application to the host by sharing a /usr/local/bin/ volume or similar. Is this secure? How can you create a daemonized container that does not need to execute anything? The only example I have found implies creating an infinite loop.
The nucleotid.es page seems to do something similar to what I want, but I have not found any reference to security issues. Maybe they are running all the containers inside a virtual machine, where they do not have to worry about these issues, due to the fact that they do not need to expose the dockerized applications to more people.
Sorry about my verbosity. I just wanted to write down the mental process (possibly flawed, I know, I know) where I am stuck. To sum up:
Is there any possibility to create a dockerized command line application which does not need to be run using sudo, is available for several people in the same server, and which is not intended to run in a daemonized fashion?
Thank you in advance.
Regards.
If users are able to execute docker run, then they will be able to control the host system, because they could map files from the host into the container, and in the container they could always be root if they can use docker run or docker exec. So users should not be able to execute docker directly. I think the easiest solution here is to create scripts which run docker; these scripts could either have the suid flag, or users could have sudo access to them.
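A sketch of the sudo-wrapper variant of this suggestion, as an added example that is not from the original answer: the script fixes the image and mount target, runs the tool as the invoking user, and only mounts a directory that resolves inside an allowed tree. The image name, the base directory and the BASE variable are assumptions.

```shell
#!/bin/sh
# Added example: the only docker-related command users may run through sudo.
# Assumed names (not from the original post): IMAGE, BASE.
IMAGE="lab/aligner:1.0"          # hypothetical image
BASE="${BASE:-/data/projects}"   # hypothetical tree users may mount

# Resolve symlinks and ".." first, then accept the directory only if the
# resulting physical path lies inside $BASE.
safe_workdir() {
    real=$(cd "$1" 2>/dev/null && pwd -P) || return 1
    base=$(cd "$BASE" 2>/dev/null && pwd -P) || return 1
    case "$real/" in
        "$base"/*) echo "$real" ;;
        *) return 1 ;;
    esac
}

# Run the tool on one directory; remaining arguments go to the tool itself,
# never to docker, so callers cannot add -v, --privileged or another image.
run_tool() {
    dir=$(safe_workdir "$1") || { echo "directory not allowed" >&2; return 1; }
    shift
    # sudo sets SUDO_UID and SUDO_GID; refuse to run without them so the
    # process in the container is never root.
    docker run --rm --network none --cap-drop ALL \
        --user "${SUDO_UID:?run via sudo}:${SUDO_GID:?run via sudo}" \
        -v "$dir:/work" -w /work "$IMAGE" "$@"
}

if [ "${1:-}" = "run" ]; then
    shift
    run_tool "$@"
fi
```

The script itself must be owned by root and not writable by the users who invoke it, and the sudoers entry should name this script only, not docker.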

How to consist the containers in Docker?

Now I am developing the new content so building the server.
On my server, the base system is CentOS 7. I installed Docker, pulled CentOS, and established the "WEB SERVER container": Django with uwsgi and nginx.
However I want to up the service, (Database with postgres), what is the best way to do it?
Install postgres on my existing container (with web server)
Build up the new container only for database.
and I want to know each advantage and weak point of those.
It's idiomatic to use two separate containers. Also, this is simpler - if you have two or more processes in a container, you need a parent process to monitor them (typically people use a process manager such as supervisord). With only one process, you won't need to do this.
By monitoring, I mainly mean that you need to make sure that all processes are correctly shut down if the container receives a SIGSTOP signal. If you don't do this properly, you will end up with zombie processes. You won't need to worry about this if you only have a single process or use a process manager.
Further, as Greg points out, having separate containers allows you to orchestrate and schedule the containers separately, so you can do update/change/scale/restart each container without affecting the other one.
If you want to keep the data in the database after a restart, the database shouldn't be in a container but on the host. I will assume you want the db in a container as well.
Setting up a second container is a lot more work. You should find a way for the containers to know about each other's address. The address changes each time you start the container, so you need to make some scripts on the host. The host must find out the IP addresses and inform the containers.
The containers might want to update the /etc/hosts file with the address of the other container. When you want to emulate different servers and perform resilience tests this is a nice solution. You will need quite some bash knowledge before you get this running well.
In about all other situations, choose one container. Installing everything in one container is easier for setting up and for developing afterwards. Setting up Docker is just the environment where you want to do your real work. Tooling should help you with your real work, not take all your time and effort.
