Disable certain Docker run options - security

I'm currently working on a setup to make Docker available on a high-performance cluster (HPC). The idea is that every user in our group should be able to reserve a machine for a certain amount of time and use Docker in a "normal way", meaning they access the Docker daemon via the Docker CLI.
To do that, the user would be added to the docker group. But this poses a big security problem for us, since it basically means that the user has root privileges on that machine.
The new idea is to make use of the user namespace remapping option (as described in https://docs.docker.com/engine/reference/commandline/dockerd/#/daemon-user-namespace-options). As I see it, this would tackle our biggest security concern, namely that root in a container is the same as root on the host machine.
But as long as users are able to bypass this via --userns=host, it doesn't increase security in any way.
Is there a way to disable this and other Docker run options?

As mentioned in issue 22223:
There are a whole lot of ways in which users can elevate privileges through docker run, e.g. by using --privileged.
You can stop this either by not directly providing access to the daemon in production and using scripts instead (which is not what you want here), or by using an authorization plugin to disallow some options. That is:
dockerd --authorization-plugin=plugin1
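As a sketch of how the two ideas could be combined on the daemon side (plugin1 is just the placeholder name from the quote above; whether it actually blocks --userns=host or --privileged depends entirely on the policy the plugin enforces):

dockerd --userns-remap=default --authorization-plugin=plugin1

An authorization plugin sees every API request, including the options passed to docker run, and can deny requests that use flags you want to forbid, which is exactly the hook needed to reject --userns=host.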

Related

How to create a docker image of current file and OS system?

I wonder if one can take all the current environment variables, settings and OS applications and create a simple Docker layer on top of it all, so that a Docker container user would not be able to damage the host system even if he removed all files, yet would still have the ability to access all installed applications and system settings inside his Docker layer?
Technically you might be able to hack together a solution that does this by copying in all data/apps, installing dependencies, re-configuring the applications and providing a bash shell to attach to for a user to play around with, but this is not what Docker is designed for at all, and I would not recommend anyone attempt it.
I always try to explain Docker's use case as processes which run in isolated containers with defined interfaces that may be exposed. Ideally you would run one application within a container, with an interface exposed for communication.
What you are looking for is essentially a VM with snapshots which you can provide to different users.

Security restrictions when building dockerfile

The company where I work (a strictly regulated/audited environment) is yet to embrace containers but would like to adopt them for some applications. There is the view that, because the image build process issues commands as root (or as a user overridden via the USER instruction), building (not running) a container effectively gives a user unfettered root access during the build process. This is anathema to them and goes against all manner of company policies. Access to certain commands on company machines is restricted via PowerBroker, i.e. access to certain commands requires explicit permission and is logged/subject to audit.
We need to allow container images to be built by a CI/CD system as well as ideally to allow developers to be able to build containers locally. Containers will generally be run in Kubernetes, but may be run directly on a VM. I'd like to be able to have CI build agents spin up on demand, as there are a lot of developers, so I want to run the build process within Kubernetes.
What is the best practice for building docker containers in this sort of environment please? Should we look to restrict access to commands within the Dockerfile?
My current thinking for this approach:
CI/CD:
Define "company-approved" image to act as build agent within
Kubernetes.
Build image defines a user that the build process runs as (not
root).
Build agent image contains PowerBroker, enabling locking down access
to sensitive commands.
Scan docker file for use of user command and forbid this.
Build agent runs docker-in-docker, as per here
(https://applatix.com/case-docker-docker-kubernetes-part-2/). This
achieves isolation between multiple build instances whilst ensuring
all containers are controlled via Kubernetes.
Images are scanned for security compliance via OpenSCAP or similar.
Passing the scan is part of the build process. Passing the scan
allows the image to be tagged as compliant and pushed to a registry.
I'm uncomfortable with the thinking around (4), as this seems a bit rule bound (i.e. it's a sort of blacklist approach) and I'm sure there must be a better way.
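For what it's worth, the check in (4) could be as small as the following sketch (the script name and the decision to only reject the USER instruction are assumptions), which also shows why it feels like a blacklist: it only catches the one instruction you thought to forbid.

#!/bin/sh
# check-dockerfile.sh (hypothetical name): fail the build if the
# Dockerfile tries to switch users during the build.
if grep -iqE '^[[:space:]]*USER[[:space:]]' "${1:-Dockerfile}"; then
    echo "USER instruction found in ${1:-Dockerfile} - build rejected" >&2
    exit 1
fi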
Developer's localhost:
Define "company-approved" base images (tagged as such inside a
trusted registry).
Image defines a user that the build process runs
as (not root).
Base image contains PowerBroker, enabling locking
down access to sensitive commands.
Create wrapper script on localhost that wraps docker build. No direct access to docker build: user must use script instead. Access to script is secured via PowerBroker. Script can also scan docker file for use of user command and forbid this.
Pushing of images to registry requires tagging which requires scanning for security compliance via OpenSCAP or similar as above.
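A rough sketch of the wrapper script from point 4 (the script name, registry host and exact checks are assumptions; PowerBroker would control who may run it):

#!/bin/sh
# docker-build-approved.sh (hypothetical name): the only sanctioned entry
# point for local image builds.
set -e
DOCKERFILE=Dockerfile

# Only allow builds whose base image comes from the trusted internal registry.
grep -iqE '^FROM[[:space:]]+registry\.example\.internal/' "$DOCKERFILE" || {
    echo "Base image must come from registry.example.internal" >&2; exit 1; }

# Reject attempts to switch user (e.g. back to root) during the build.
! grep -iqE '^[[:space:]]*USER[[:space:]]' "$DOCKERFILE" || {
    echo "USER instruction is not allowed" >&2; exit 1; }

# Everything else (tag, build args) is passed straight through.
exec docker build "$@" .

It would be invoked as, for example, docker-build-approved.sh -t myteam/myapp:1.0 from the directory containing the Dockerfile.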
I'd like to use the OpenSCAP results plus the CI system to create an audit trail of the images that exist; similarly for the deploy process. The security team that monitor for CVEs etc should be able to understand what containers exist and have been deployed and be able to trigger rebuilds of images to make use of updated libraries, or to flag up to developers when containers need to be rebuilt/redeployed. I want to be able to demonstrate that all containers meet a security configuration policy that is itself defined as code.
Is this a sensible way to go? Is there even a risk for allowing a user to build (but not run) a container image without restriction? If there is not, what's the best way to ensure that a foolish/malicious developer has not undone the best practices inside the "approved base image", other than a manual code review (which is going to be done anyway, but might miss something)?
By the way, you must assume that all code/images are hosted in-house/on-premises, i.e. nothing is allowed to use a cloud-based product/service.
When docker build runs, each layer executes in the context of a container, so the risks presented by the commands being executed are constrained by what access is available to that container.
Locking down the build environment can therefore be achieved by restricting what the Docker Engine instance that completes the build is allowed to do.
Things like ensuring that user namespaces are used can reduce the risk of a command run inside a container having a wider effect on the environment.
Of course that doesn't mitigate the risks of a developer curl|bashing from an untrusted location, but then what's to stop that being done outside of Docker? (i.e. what additional risk is being introduced by the use of Docker in this scenario)
If you have a policy of restricting externally hosted code, for example, then one option could be to just restrict access from the Docker build host to the Internet.
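If cutting the build host off from the Internet entirely is impractical, a narrower option (hedged: it needs a reasonably recent Docker release) is to deny the build itself any network access, so that RUN steps simply cannot reach external hosts:

docker build --network=none -t myapp:candidate .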
If you're making use of Kubernetes for the build process and are concerned about malicious software being executed in containers, it could be worth reviewing the CIS Kubernetes standard and making sure you've locked down your clusters appropriately.
"There is the view that, because the image build process issues commands as root (or as a user overridden via the USER instruction), building (not running) a container effectively gives a user unfettered root access during the build process."
This view is not correct. When you build an image, all you are doing is creating new Docker layers (files), which are stored under /var/lib/docker/aufs/layers (with the aufs storage driver). There are simply no security concerns when building Docker images.
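That also means the result of a build can be inspected after the fact; for example, the command that produced each layer is visible with (myimage:latest being whatever was just built):

docker history --no-trunc myimage:latest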
There are tools to analyze the security of images you have already built. One is the image analyzer built into Docker Hub.

What is the correct way to run an nginx docker container in OpenShift?

Red Hat OpenShift runs Docker containers with random user IDs.
This works fine for some containers, but the NGINX container requires file permissions to be set to world read/write/execute in order to work.
Is there a more correct way to build/run a container for use with OpenShift?
For example, does OpenShift provide any kind of process ownership groups or rules?
Here is the nginx image I pull down, and the chmod command we currently run to make it work in OpenShift.
registry-nginxinc.rhcloud.com/nginx/rhel7-nginx:1.9.2
RUN chmod -R 777 /var/log/nginx /var/cache/nginx/ \
    && chmod -R 777 /var/run \
    && chmod -R 777 /etc/nginx/*
References:
http://mailman.nginx.org/pipermail/nginx-devel/2015-November/007511.html
https://github.com/fsimorbrian/openshift-secure-routes
Why does this openshift route succeed in CDK but fail in RHEL7 Atomic?
Best practice is that you do not run your containers as root. Many Docker images out there, even some official images, ignore this and require you to run as root. The general advice is that you should set up the image so that your application doesn't require root and can start up as a non-root user you set up in the Dockerfile. Even this advice, though, isn't the most secure option, for a couple of reasons.
The first is that such advice will say to use USER username, where username is obviously not root. For a platform that is hosting that image, that doesn't actually guarantee your application isn't running as root, because a named user such as username could be mapped to uid 0 in the container and so still run with root privileges. To allow a platform to properly verify that your image isn't set up to run as root, you should use a uid instead of a username. That should be anything except uid 0.
The second problem is that although running as a specific non-root user in your own Docker service instance may be fine, it isn't when you consider a multi-tenant environment, be that for different users, or even different applications where it is important that the different applications can't access each other in any way.
In a multi-tenant environment, the safest thing you can do is to run all applications owned by a specific account, or in different projects, as different users. One reason this is important is from the perspective of data access on persistent volumes: if everything were running as the same user and managed to get access to persistent volumes it shouldn't, it could see data from other applications.
As far as OpenShift goes, by default it runs with the highest level of security to protect you. Thus, applications in one project are run with a different user to applications in another project.
You can reduce the security measures and override this if you have the appropriate privileges, but you should only make changes if you are comfortable that the application you are doing it for has a low risk profile. That is, you don't grab some arbitrary Docker image off the Internet you don't know anything about and let it run as root.
To learn more about changing the security context constraints around a specific application start by reading through:
https://docs.openshift.com/enterprise/latest/admin_guide/manage_scc.html
You can override the default and say that an image can run as the user it declares in the Dockerfile or even run it as root if need be.
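As a hedged example of what such an override can look like on OpenShift 3.x (the project name is a placeholder, and this should only be done for images you trust), granting the anyuid security context constraint to the default service account of a project lets its pods run as the user declared in the Dockerfile:

oc adm policy add-scc-to-user anyuid -z default -n myproject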
The better way, if you want the best security, is to construct the Docker image so that it can run as any user and not just a specific one.
The general guidelines for how to do this are:
Create a new user account in the container to run the application as. Make the primary group of this account be group ID 0. That is, its group will be that of root, but the user will not. It needs to be group ID 0 as that is what UNIX will default the group to if running as a user that has no entry in the UNIX passwd file.
Any directories/files that the application needs read access to should be readable/accessible by others, or readable/accessible by group root.
Any directories/files that the application needs write access to should be writable by group root.
The application should not require the ability to bind privileged ports. Technically you could work around that by using Linux capabilities, but some build systems for Docker images, such as Docker Hub automated builds, appear not to support using aspects of Linux capabilities, so you wouldn't be able to build images on them if you need to use setcap.
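A minimal Dockerfile sketch following these guidelines (the base image, uid and paths are placeholders):

FROM registry.example.internal/rhel7-base

# Non-root account with an arbitrary uid; its primary group is root (gid 0).
RUN useradd -u 1001 -g 0 -m appuser

# Directories the app writes to are group-owned by root and group-writable,
# so whatever arbitrary uid the platform assigns (with gid 0) can still use them.
RUN mkdir -p /app/data \
    && chgrp -R 0 /app \
    && chmod -R g=u /app

# Declare the user by uid, not by name, so the platform can verify it is not root.
USER 1001

# Bind to an unprivileged port.
EXPOSE 8080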
Finally, you will find that none of this is required if you use OpenShift Local (CDK) from Red Hat or the all-in-one VM for OpenShift Origin. This is because those VM images have purposely been set up to allow, as the default policy, the ability to run any image, even images wanting to run as root. This is purely so that it is easier to try out arbitrary images you download, but in a production system you really want to be running images in a more secure way, with the ability to run images as root off by default.
If you want to read more about some of the issues around running containers as root and the issues that can come up when a platform runs containers as an arbitrary user ID, have a look at the series of blog posts at:
http://blog.dscpl.com.au/2016/01/roundup-of-docker-issues-when-hosting.html

Is there a way to restrict untrusted container scheduler?

I have an application which I'd like to give the privilege to launch short-lived tasks and schedule these as docker containers. I was thinking of doing this simply via docker run.
As I want to make the attack surface as small as possible, I treat the application as untrusted. As such, it could potentially run arbitrary docker run commands (if the codebase contained a bug, the container was compromised, input was improperly escaped somewhere, etc.) against a predefined Docker API endpoint.
This is why I'd like to restrict that application (effectively a scheduler) in some ways:
prevent --privileged use
enforce --read-only flag
enforce memory & CPU limits
I looked at couple of options:
SELinux
SELinux policies would need to be set at the host level and then propagated into the containers via the --selinux-enabled flag at the daemon level. The scheduler can however override this anyway via run --privileged.
seccomp profiles
these are only applied at the time the container is launched (seccomp flags are available for docker run)
AppArmor
this can (again) be overridden by the scheduler via --privileged
docker daemon --exec-opts flag
only a single option is actually available for this flag (native.cgroupdriver)
It seems that Docker is designed to trust container schedulers by default.
Does anyone know if this is a design decision?
Is there any other possible solution available w/ current latest Docker version that I missed?
I also looked at Kubernetes and its Limit Ranges & Resource Quotas which can be applied to K8S namespaces, which looked interesting, assuming there's a way to enforce certain schedulers to only use certain namespaces. This would however increase the scope of this problem to operating K8S cluster.
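For completeness, a minimal sketch of what that would look like if the scheduler were confined to a single namespace (the namespace name and values are assumptions; RBAC would additionally have to restrict the scheduler's credentials to that namespace):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: task-quota
  namespace: scheduler-tasks
spec:
  hard:
    pods: "20"
    limits.cpu: "8"
    limits.memory: 16Gi
---
apiVersion: v1
kind: LimitRange
metadata:
  name: task-limits
  namespace: scheduler-tasks
spec:
  limits:
  - type: Container
    default:
      cpu: 500m
      memory: 512Mi
    defaultRequest:
      cpu: 250m
      memory: 256Mi
EOF

The LimitRange supplies defaults so that containers launched without explicit limits still fall under the quota.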
Running Docker on a Unix platform should be compatible with nice, or so I would think at first. Looking a little more closely, it looks like you need something like --cpuset-cpus="0,1".
From the second link: "The --cpu-quota looks to be similar to the --cpuset-cpus ... allocate one or a few cores to a process, it's just time managed instead of processor number managed."
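Applied to docker run, that looks roughly like this (the image name and values are illustrative):

# Pin the container to cores 0 and 1 and limit it to 50000us of CPU time
# per 100000us period, i.e. half a core's worth of time overall.
docker run --rm --cpuset-cpus="0,1" --cpu-period=100000 --cpu-quota=50000 task-image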

Docker for a one shot CLI application

Since I first learned about Docker, I have thought it might be the solution to several problems we usually face at the lab. I work as a data analyst for a small biology research group. I am using Snakemake for defining the (usually big and quite complex) workflows for our analyses.
From Snakemake, I usually call small scripts in R, Python, or even command-line applications such as aligners or annotation tools. In this scenario, it is not uncommon to suffer from dependency hell, hence I was thinking about wrapping some of the tools in Docker containers.
At this moment I am stuck at a point where I do not know if I have chosen technology badly, or if I am not able to properly assimilate all the information about Docker.
The problem is related to the fact that you have to run the Docker tools as root, which is something I would not like to do at all, since the initial idea was to make the dockerized applications available to every researcher willing to use them.
On AskUbuntu, the most voted answer proposes adding the end user to the docker group, but it seems that this is not good for security. In the security articles at Docker, on the other hand, they explain that running the tools as root is good for your security. I have found similar questions on SO, but related to the environment inside the container.
OK, I have no problem with this, but like every moderate-complexity example I happen to find, it seems more oriented towards web application development, where the system can initially start the container once and then forget about it.
Things I am considering right now:
Configuring the Docker daemon as a TLS-enabled TCP remote service and providing the corresponding users with certificates (see the sketch after this list). Would there be any overhead in running the applications? Security issues?
Creating images that only make the application available to the host by sharing a /usr/local/bin/ volume or similar. Is this secure? How can you create a daemonized container that does not need to execute anything? The only example I have found implies creating an infinite loop.
The nucleotid.es page seems to do something similar to what I want, but I have not found any reference to security issues. Maybe they are running all the containers inside a virtual machine, where they do not have to worry about these issues, because they do not need to expose the dockerized applications to more people.
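As a sketch of the first option (paths and hostname are placeholders, and generating the CA and certificates is a separate step):

# On the server: expose the daemon over TCP, requiring client certificates.
dockerd --tlsverify \
    --tlscacert=/etc/docker/ca.pem \
    --tlscert=/etc/docker/server-cert.pem \
    --tlskey=/etc/docker/server-key.pem \
    -H tcp://0.0.0.0:2376

# On a researcher's machine: talk to the remote daemon with their certificate.
docker --tlsverify \
    --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
    -H tcp://dockerhost.example.org:2376 version

Note that TLS only controls who may talk to the daemon; anyone who gets through can still run arbitrary docker run commands, so on its own it does not solve the root-equivalence concern.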
Sorry about my verbosity. I just wanted to write down the mental process (possibly flawed, I know, I know) where I am stuck. To sum up:
Is there any possibility to create a dockerized command-line application which does not need to be run using sudo, is available to several people on the same server, and which is not intended to run in a daemonized fashion?
Thank you in advance.
Regards.
If users are able to execute docker run, then they are able to control the host system, simply because they can map files from the host into a container, and inside the container they can always be root as long as they can use docker run or docker exec. So users should not be able to execute docker directly. I think the easiest solution here is to create scripts which run docker; these scripts could either have the suid flag set or users could be given sudo access to them.
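Taking the sudo route, a minimal sketch (the image, script name and group are hypothetical):

#!/bin/sh
# /usr/local/bin/run-aligner (hypothetical): the only way users invoke the tool.
# sudo exports SUDO_UID/SUDO_GID, so the container process runs with the
# calling user's ids, not as root, and only the current directory is mounted.
exec docker run --rm \
    --user "${SUDO_UID:-65534}:${SUDO_GID:-65534}" \
    --volume "$PWD":/data \
    --workdir /data \
    lab/aligner:latest "$@"

# /etc/sudoers.d/docker-tools: researchers may run this one script as root,
# but not the docker binary itself.
%researchers ALL=(root) NOPASSWD: /usr/local/bin/run-aligner

Users then call sudo run-aligner <args>, and everything inside the container runs with their own uid against the directory they invoked it from.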

Resources