I wonder if one can take all the current environment variables, settings, and OS applications and create a simple Docker layer on top of it all, so that a Docker container user will not be able to damage the host system even if he removes all files, yet will still have the ability to access all installed applications and system settings inside his Docker layer?
Technically you might be able to hack together a solution that does this by copying in all data/apps, installing dependencies, re-configuring the applications, and providing a bash shell for a user to attach to and play around with. But this is not what Docker is designed for at all, and I would not recommend anyone attempt it.
I always try to explain Docker's use case as processes that run in isolated containers with defined interfaces that may be exposed. Ideally, you would run one application per container, with an interface exposed for communication.
What you are looking for is essentially a VM with snapshots which you can provide to different users.
I have two Docker containers, A and B. In container A a Django application is running. In container B a WebDAV source is mounted.
Now I want to check from container A whether a folder exists in container B (in the WebDAV mount destination).
What is the best solution for something like that? Currently I have solved it by mounting the Docker socket into container A to execute commands from A inside B. I am aware that mounting the Docker socket into a container is a security risk for the host and the whole application stack.
Other possible solutions would be to use SSH, or to share and mount the directory that should be checked. Of course there are further possible solutions, such as doing it with HTTP requests.
Because there are so many ways to solve a problem like that, I want to know if there is a best practice (considering security, implementation effort, and performance) for executing commands from container A in container B.
Thanks in advance
WebDAV provides a file-system-like interface on top of HTTP. I'd just directly use this. This requires almost no setup other than providing the other container's name in configuration (and if you're using plain docker run putting both containers on the same network), and it's the same setup in basically all container environments (including Docker Swarm, Kubernetes, Nomad, AWS ECS, ...) and a non-Docker development environment.
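For illustration, a minimal sketch with plain docker run, assuming the containers are named django and webdav, that the WebDAV server listens on port 80, and that curl is available in container A (add -u user:pass if the share requires authentication):

    # Put both containers on the same user-defined network so A can reach B by name.
    docker network create appnet
    docker network connect appnet webdav
    docker network connect appnet django

    # From container A, ask the WebDAV server whether the folder (collection) exists.
    docker exec django curl -s -o /dev/null -w '%{http_code}\n' \
      -X PROPFIND -H 'Depth: 0' http://webdav/path/to/folder/
    # 207 (Multi-Status) means the folder exists; 404 means it does not.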
Of the other options you suggest:
Sharing a filesystem is possible. It leads to potential permission problems which can be tricky to iron out. There are potential security issues if the client container isn't supposed to be able to write the files. It may not work well in clustered environments like Kubernetes.
ssh is very hard to set up securely in a Docker environment. You don't want to hard-code a plain-text password that can be easily recovered from docker history; a best-practice setup would require generating host and user keys outside of Docker and bind-mounting them into both containers (I've never seen a setup like this in an SO question). This also brings the complexity of running multiple processes inside a container.
Mounting the Docker socket is complicated, non-portable across environments, and a massive security risk (you can very easily use the Docker socket to root the entire host). You'd need to rewrite that code for each different container environment you might run in. This should be a last resort; I'd consider it only if creating and destroying containers would need to be a key part of this one container's operation.
Is there a best practice to execute commands from container A in container B?
"Don't." Rearchitect your application to have some other way to communicate between the two containers, often over HTTP or using a message queue like RabbitMQ.
One solution would be to mount the filesystem read-only in one container and read-write in the other container.
See this answer: Docker, mount volumes as readonly
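A minimal sketch of that approach, with placeholder volume and image names; the :ro suffix is what makes the mount read-only in the second container:

    docker volume create shared-data
    docker run -d --name writer -v shared-data:/data    writer-image
    docker run -d --name reader -v shared-data:/data:ro reader-image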
I see that a number of people have set up Docker containers with Guacamole or other tools to allow them to remote in to a GUI as if the container were a remote Linux desktop. A friend of mine had a conversation with a professor who told him that they set up Ubuntu desktop access for their students via ubuntu/rdp Docker containers.
It's an attractive concept for efficiently packed cloned desktops, since you don't need 50 copies of the guest OS, but how would you manage such a swarm without a connection broker like a VDI solution or a hypervisor console like a KVM setup? Would you simply use standard Docker (or Swarm) management tools to manage the containers themselves, and then some separate remote client for the actual remote control connections?
I'm currently reading up on Docker, but I'm unclear on one thing: if each desktop is the same (say Firefox, LibreOffice, etc.), is there any way to gain efficiency by sharing these resources as well? For instance, could there be a container with those resources that the others all connect to, or could they be shared at a lower level, such as the OS? I'm looking for any way to gain efficiency and lower the overall CPU, RAM, etc. for all the combined machines on the server. Really, I'm looking for anything other than a separate copy of the same thing in each container.
I see that there are solutions for shared persistent storage in containers like Hatchway. Are there other issues caused by statelessness of the container that this does not address?
Also, I see a few ways people have cobbled together internet connectivity for docker containers (like IP per container), but most of the older posts are people frustrated with the process. Is there now a standard or preferred way to do something like this?
Or, if docker/containers are absolutely the wrong way to go about setting up the most efficient possible Linux remote desktop clones, I'd love to understand exactly what part does not work so I can find the right way.
After days of reading, I see that LXD (Linux machine containers) is actually what I'm looking for, rather than Docker (process containers).
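In case it helps anyone following the same path, the basic LXD workflow looks roughly like this (assuming LXD is installed and lxd init has been run; the image alias and container names are just examples):

    lxc launch ubuntu:22.04 desktop01   # full system container with its own init
    lxc exec desktop01 -- bash          # shell in; install xrdp/Guacamole etc. here
    lxc snapshot desktop01 clean        # snapshot a "golden" desktop
    lxc copy desktop01/clean desktop02  # cheap clone for the next user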
The company where I work (strictly regulated/audited environment) is yet to embrace containers but would like to adopt them for some applications. There is the view that as the image build process issues commands as root (or could be overridden by the user by use of the USER command), that building (not running) a container is effectively giving a user unfettered access as root during the build process. This is anathema to them and goes against all manner of company policies. Access to certain commands for computers is restricted via PowerBroker, i.e. access to certain commands requires explicit permissioning and is logged/subject to audit.
We need to allow container images to be built by a CI/CD system as well as ideally to allow developers to be able to build containers locally. Containers will generally be run in Kubernetes, but may be run directly on a VM. I'd like to be able to have CI build agents spin up on demand, as there are a lot of developers, so I want to run the build process within Kubernetes.
What is the best practice for building docker containers in this sort of environment please? Should we look to restrict access to commands within the Dockerfile?
My current thinking for this approach:
CI/CD:
Define "company-approved" image to act as build agent within
Kubernetes.
Build image defines a user that the build process runs as (not
root).
Build agent image contains PowerBroker, enabling locking down access
to sensitive commands.
Scan docker file for use of user command and forbid this.
Build agent runs docker-in-docker, as per here
(https://applatix.com/case-docker-docker-kubernetes-part-2/). This
achieves isolation between multiple build instances whilst ensuring
all containers are controlled via Kubernetes.
Images are scanned for security compliance via OpenSCAP or similar.
Passing the scan is part of the build process. Passing the scan
allows the image to be tagged as compliant and pushed to a registry.
I'm uncomfortable with the thinking around (4), as this seems a bit rule bound (i.e. it's a sort of blacklist approach) and I'm sure there must be a better way.
Developer's localhost:
Define "company-approved" base images (tagged as such inside a
trusted registry).
Image defines a user that the build process runs
as (not root).
Base image contains PowerBroker, enabling locking
down access to sensitive commands.
Create wrapper script on localhost that wraps docker build. No direct access to docker build: user must use script instead. Access to script is secured via PowerBroker. Script can also scan docker file for use of user command and forbid this.
Pushing of images to registry requires tagging which requires scanning for security compliance via OpenSCAP or similar as above.
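As an illustration of the wrapper-script idea, here is what such a gate could look like; the script name, internal registry name, and exact policy checks are all assumptions, and PowerBroker would control who may invoke the script:

    #!/bin/bash
    # /usr/local/bin/safe-docker-build -- hypothetical wrapper around "docker build".
    # Usage: safe-docker-build <tag> <build-context>
    set -euo pipefail

    tag="$1"
    context="$2"
    dockerfile="$context/Dockerfile"

    # Blacklist-style check from the plan above: forbid any USER instruction so the
    # non-root user baked into the approved base image cannot be overridden.
    if grep -qiE '^[[:space:]]*USER[[:space:]]' "$dockerfile"; then
      echo "ERROR: USER instructions are not allowed in Dockerfiles" >&2
      exit 1
    fi

    # Require an approved base image from the trusted internal registry
    # ("registry.internal" is a placeholder for your own registry).
    if ! grep -qiE '^[[:space:]]*FROM[[:space:]]+registry\.internal/' "$dockerfile"; then
      echo "ERROR: base image must come from registry.internal" >&2
      exit 1
    fi

    exec docker build -t "$tag" "$context"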
I'd like to use the OpenSCAP results plus the CI system to create an audit trail of the images that exist; similarly for the deploy process. The security team that monitors for CVEs etc. should be able to understand what containers exist and have been deployed, and should be able to trigger rebuilds of images to make use of updated libraries, or flag up to developers when containers need to be rebuilt/redeployed. I want to be able to demonstrate that all containers meet a security configuration policy that is itself defined as code.
Is this a sensible way to go? Is there even a risk in allowing a user to build (but not run) a container image without restriction? If there is not, what's the best way to ensure that a foolish/malicious developer has not undone the best practices inside the "approved base image", other than a manual code review (which is going to be done anyway, but might miss something)?
By the way, you must assume that all code/images are hosted in-house/on-premises, i.e. nothing is allowed to use a cloud-based product/service.
When docker build runs, each layer executes in the context of a container, so the risks presented by the commands being executed are constrained by what access is available to that container.
Locking down the build environment can be achieved by restricting what the Docker engine instance that performs the build is able to do.
Things like ensuring that user namespaces are used can reduce the risk of a command run inside a container having a wider effect on the environment.
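Concretely, that could mean enabling user-namespace remapping on the Docker engine that performs the builds, roughly like this (a sketch that assumes there is no existing /etc/docker/daemon.json; note that userns-remap has some documented restrictions):

    # Map root inside containers to an unprivileged UID/GID range on the host.
    cat <<'EOF' | sudo tee /etc/docker/daemon.json
    {
      "userns-remap": "default"
    }
    EOF
    sudo systemctl restart docker
    # The remapped ranges are taken from /etc/subuid and /etc/subgid.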
Of course that doesn't mitigate the risks of a developer curl|bashing from an untrusted location, but then what's to stop that being done outside of Docker? (i.e. what additional risk is being introduced by the use of Docker in this scenario)
If you have a policy of restricting externally hosted code, for example, then one option could be to just restrict access from the Docker build host to the Internet.
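On a single build host, one low-tech way to approximate this is to disable networking for the build's RUN steps, so every input has to come from the build context or the (internally hosted) base image:

    # No network access during RUN instructions; the image name is a placeholder.
    docker build --network=none -t registry.internal/app:candidate .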
If you're making use of Kubernetes for the build process and are concerned about malicious software being executed in containers, it could be worth reviewing the CIS Kubernetes standard and making sure you've locked down your clusters appropriately.
There is the view that as the image build process issues commands as root (or could be overridden by the user by use of the USER command), that building (not running) a container is effectively giving a user unfettered access as root during the build process
This view is not correct. When you build an image, all you are doing is creating new Docker layers (files), which are stored under /var/lib/docker/aufs/layers. There are simply no security concerns when building Docker images.
There are tools to analyze the security of images you have already built. One is the image analyzer built into Docker Hub.
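For instance, an open-source scanner such as Trivy can be pointed at a locally built image (the image name here is a placeholder):

    # Report known CVEs in the image's OS packages and application dependencies.
    trivy image registry.internal/app:candidate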
Since I first learned of Docker, I have thought it might be the solution to several problems we usually face at the lab. I work as a Data Analyst for a small biology research group. I use Snakemake to define the usually big and quite complex workflows for our analyses.
From Snakemake, I usually call small scripts in R or Python, or even command line applications such as aligners or annotation tools. In this scenario it is not uncommon to suffer from dependency hell, hence I was thinking about wrapping some of the tools in Docker containers.
At this moment I am stuck at a point where I do not know if I have chosen technology badly, or if I am not able to properly assimilate all the information about Docker.
The problem is related to the fact that you have to run the Docker tools as root, which is something I would not like to do at all, since the initial idea was to make the dockerized applications available to every researcher willing to use them.
On AskUbuntu, the most-voted answer proposes adding the final user to the docker group, but it seems that this is not good for security. The security articles at Docker, on the other hand, explain that running the tools as root is good for your security. I have found similar questions on SO, but they relate to the environment inside the container.
OK, I have no problem with this, but like every moderate-complexity example I happen to find, it seems to be oriented more towards web application development, where the system can start the container once and then forget about it.
Things I am considering right now:
Configuring the Docker daemon as a TLS-enabled TCP remote service and providing the corresponding users with certificates. Would there be any overhead in running the applications? Any security issues?
Creating images that only make the application available to the host by sharing a /usr/local/bin/ volume or similar. Is this secure? How can you create a daemonized container that does not need to execute anything? The only example I have found involves creating an infinite loop.
The nucleotid.es page seems to do something similar to what I want, but I have not found any reference to security issues. Maybe they are running all the containers inside a virtual machine, where they do not have to worry about these issues, since they do not need to expose the dockerized applications to more people.
Sorry about my verbosity. I just wanted to write down the mental process (possibly flawed, I know, I know) where I am stuck. To sum up:
Is there any way to create a dockerized command line application that does not need to be run using sudo, is available to several people on the same server, and is not intended to run in a daemonized fashion?
Thank you in advance.
Regards.
If users are able to execute docker run, then they are able to control the host system, because they can map files from the host into the container, and inside the container they can always be root as long as they can use docker run or docker exec. So users should not be able to execute docker directly. I think the easiest solution here is to create scripts that run docker for them; users can then be given sudo access to those scripts (the setuid flag is ignored for shell scripts on Linux, so sudo is the practical route).
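A rough sketch of that idea; the script name, image name, and group are placeholders, and the point is that users get sudo rights to one fixed script rather than to docker itself:

    #!/bin/bash
    # /usr/local/bin/run-aligner -- hypothetical wrapper around one dockerized tool.
    set -euo pipefail
    # Run the container as the invoking (non-root) user so that output files in
    # the mounted working directory belong to them rather than to root.
    exec docker run --rm \
      --user "${SUDO_UID:-$(id -u)}:${SUDO_GID:-$(id -g)}" \
      -v "$PWD":/data -w /data \
      aligner-image "$@"

    # Then grant access to the script (not to docker) via sudo, e.g. in
    # /etc/sudoers.d/aligner:
    #   %researchers ALL=(root) NOPASSWD: /usr/local/bin/run-aligner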
I am now developing new content, so I am building the server.
On my server the base system is CentOS 7. I installed Docker, pulled the CentOS image, and set up the web server container: Django with uWSGI and nginx.
Now I want to add another service (a database with PostgreSQL). What is the best way to do it?
Install PostgreSQL in my existing container (alongside the web server), or
build a new container only for the database?
I would like to know the advantages and weak points of each.
It's idiomatic to use two separate containers. Also, this is simpler - if you have two or more processes in a container, you need a parent process to monitor them (typically people use a process manager such as supervisord). With only one process, you won't need to do this.
By monitoring, I mainly mean that you need to make sure that all processes are correctly shut down if the container receives a stop signal (SIGTERM). If you don't do this properly, you will end up with zombie processes. You won't need to worry about this if you only have a single process or if you use a process manager.
Further, as Greg points out, having separate containers allows you to orchestrate and schedule the containers separately, so you can update/change/scale/restart each container without affecting the other one.
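A rough sketch of the two-container layout with plain docker run (image names, password, and ports are placeholders, and how your Django settings read the database host is up to you); the web container reaches the database by the name db over the shared network:

    docker network create appnet
    docker volume create pgdata

    # Database container, keeping its data in a named volume so it survives restarts.
    docker run -d --name db --network appnet \
      -e POSTGRES_PASSWORD=change-me \
      -v pgdata:/var/lib/postgresql/data \
      postgres:15

    # Web container (your Django + uWSGI + nginx image), pointed at "db".
    docker run -d --name web --network appnet -p 80:80 \
      -e DATABASE_HOST=db \
      my-django-image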
If you want to keep the data in the database after a restart, the database shouldn't be in a container but on the host. I will assume you want the db in a container as well.
Setting up a second container is a lot more work. You need to find a way for the containers to know each other's addresses. The address changes each time you start a container, so you need to write some scripts on the host. The host must find out the IP addresses and inform the containers.
The containers might want to update their /etc/hosts files with the address of the other container. When you want to emulate different servers and perform resilience tests, this is a nice solution. You will need quite a bit of bash knowledge before you get this running well.
In almost all other situations, choose one container. Installing everything in one container is easier to set up and to develop with afterwards. Setting up Docker is just the environment in which you want to do your real work. Tooling should help you with your real work, not take all your time and effort.