I am currently prototyping Docker hosted on WSL with Ubuntu, to be run on a compliance-controlled workstation. Since this is an early prototype, we want the setup heavily restricted so we can sidestep compliance issues.
Now, a piece of the puzzle is being able to restrict users to only the few commands that allow them to accomplish their job. For example, can I use Unix permissions to restrict Docker commands such as docker network create and flags such as --privileged, --mount, etc.? The goal here is to deploy a specific configuration and ensure that it cannot be changed by non-admin users. Thank you.
Related
I currently use Ansible to manage and deploy a fleet of servers.
I wish to start using Docker for some applications and would like to build a Docker image using the same scripts we use to configure non-Dockerized hosts.
For example, we have an Ansible role that builds Nginx with third-party modules; we would like to use the same role to build a Docker image with that custom Nginx.
Any ideas how I would get this done?
There is the "Ansible Container" project, https://www.ansible.com/integrations/containers/ansible-container. That page points also to the github repo.
It is not clear how well maintained it is, but their reasoning and approach makes sense.
Consider that you might need to make some adjustments regarding two aspects:
a container should do only one thing (microservice)
how to pass configuration to the container at runtime (Docker has some guidelines, such as environment variables if possible, or mounting a volume with the configuration files)
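For the second point, a minimal sketch of passing configuration at runtime, where the image name, variable and paths are placeholders:

docker run -d \
  -e APP_ENV=production \
  -v /srv/myapp/config:/etc/myapp:ro \
  myrepo/myapp:latest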
That's a perfect example of where the docker-systemctl-replacement script should be used.
It was developed to allow Ansible scripts to target both virtual machines and Docker containers. It dates from when distros switched to systemd, which is hard to enable inside containers. If you overwrite /usr/bin/systemctl with it, the Docker container looks close enough to a real machine that all the old Ansible scripts continue to run: installing rpm/deb packages and having 'service:' entries started and enabled.
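A minimal Dockerfile sketch of that overwrite, assuming you have already downloaded the replacement script locally as systemctl.py (the base image is just an example):

# make the container's systemctl be the replacement script so Ansible's service tasks work
FROM centos:7
COPY systemctl.py /usr/bin/systemctl
RUN chmod +x /usr/bin/systemctl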
I wonder if one can take the current environment (variables, settings, OS, installed applications) and create a simple Docker layer on top of it all, so that a user inside the Docker container will not be able to damage the host system even if he removed all files, yet will still have the ability to access all installed applications and system settings inside his Docker layer?
Technically you might be able to hack together a solution that does this: copy in all data and apps, install the dependencies, re-configure the applications, and provide a bash shell for a user to attach to and play around with. But this is not what Docker is designed for at all, and I would not recommend anyone attempt it.
I always try to explain Docker's use case as processes running in isolated containers with defined interfaces that may be exposed. Ideally you would run one application per container, with an interface exposed for communication.
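For instance, a minimal sketch of that pattern, where the image name and ports are placeholders:

docker run -d --name web -p 8080:80 example/web-app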
What you are looking for is essentially a VM with snapshots which you can provide to different users.
I have developed an application and am using docker to build it. I would like to ship it as a VMware OVF package. What are my options? How do I ship it so customer can deploy it in their VMware environment?
Also I am using a base Ubuntu image and installed NodeJS, MongoDB and other dependencies on it. But I would like to configure my NodeJS based application and MongoDB database as a service within the package I intend to ship. I know how to configure these as a service using init.d on a normal VM. How do I go about this in Docker? Should I have my init.d files in my application folder and copy them over to Docker container during build? Or are there better ways?
Appreciate any advice.
Update:
The reason I ask this question is that my target users need not necessarily know Docker. The application should be easy to deploy for someone who does not have Docker experience. Having all services in a single VM makes it easy to troubleshoot issues: all log files are saved under /var/log for the different services, and we can see the status of all services at once, rather than the user having to look into each Docker service, and probably having to troubleshoot issues with Docker itself.
But at the same time I feel it convenient to build the application the docker way.
VMware vApps are usually made of multiple VMs running together to provide a service. They may have startup dependencies, etc.
Now, using Docker, you can run those VMs as containers on a single Docker host VM. So a single VM removes the need for a vApp.
On the other hand, the containerization philosophy asks us to use microservices; in short, in your case, put each service in a separate container. Then write up a Docker Compose file to bring the containers up and run it at startup. After that you can make an OVF of your Docker host VM and ship it.
A better way, in my opinion, is to create Docker images, put them in your repository and let the customers pull them. Then provide a Docker Compose file for them.
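A minimal sketch of such a Compose file for the NodeJS app plus MongoDB, where the image names, ports and versions are placeholders:

# docker-compose.yml (sketch)
version: "3"
services:
  app:
    image: myrepo/node-app:latest
    ports:
      - "3000:3000"
    depends_on:
      - mongo
  mongo:
    image: mongo:4.4
    volumes:
      - mongo-data:/data/db
volumes:
  mongo-data: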
I'm currently working on a setup to make Docker available on a high performance cluster (HPC). The idea is that every user in our group should be able to reserve a machine for a certain amount of time and use Docker in a "normal way", meaning accessing the Docker daemon via the Docker CLI.
To do that, the user would be added to the docker group. But this poses a big security problem for us, since it basically means that the user has root privileges on that machine.
The new idea is to make use of the user namespace mapping option (as described in https://docs.docker.com/engine/reference/commandline/dockerd/#/daemon-user-namespace-options). As I see it, this would tackle our biggest security concern that the root in a container is the same as the root on the host machine.
But as long as users are able to bypass this via --userns=host, this doesn't increase security in any way.
Is there a way to disable this and other Docker run options?
As mentioned in issue 22223
There are a whole lot of ways in which users can elevate privileges through docker run, e.g. by using --privileged.
You can stop this by:
either not directly providing access to the daemon in production, and using scripts,
(which is not what you want here)
or by using an auth plugin to disallow some options.
That is:
dockerd --authorization-plugin=plugin1
With such a plugin in place, requests that use disallowed options (for example --privileged or --userns=host) can be denied before they ever reach the daemon.
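Equivalently, the plugin can be enabled through the daemon configuration file, typically /etc/docker/daemon.json (the plugin name below is a placeholder), followed by a restart of the Docker service:

{
  "authorization-plugins": ["plugin1"]
}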
RedHat OpenShift runs docker containers as random user IDs.
This works fine for some containers, but the NGINX container requires file permissions to be set to world read/write/execute in order to work.
Is there a more correct way to build/run a container for use with OpenShift?
For example, does OpenShift provide any kind of process ownership groups or rules?
Here is the nginx image I pull down, and the chmod command we currently run to make it work in OpenShift.
registry-nginxinc.rhcloud.com/nginx/rhel7-nginx:1.9.2
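# world-readable/writable so the random, non-root UID that OpenShift assigns can use these paths at runtime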
RUN chmod -R 777 /var/log/nginx /var/cache/nginx/ \
&& chmod -R 777 /var/run \
&& chmod -R 777 /etc/nginx/*
References:
http://mailman.nginx.org/pipermail/nginx-devel/2015-November/007511.html
https://github.com/fsimorbrian/openshift-secure-routes
Why does this openshift route succeed in CDK but fail in RHEL7 Atomic?
Best practice is that you do not run your containers as root. Many Docker images out there, even some official images, ignore this and require you to run as root. The general advice is that you should set up the image so that your application doesn't require root and can start up as a non-root user you set up in the Dockerfile. Even this advice, though, isn't the most secure option, for a couple of reasons.
The first is that they will say to use USER username, where username is obviously not root. For a platform hosting that image, that doesn't actually guarantee your application isn't running as root. This is because a named user such as username could be mapped to UID 0 in the container and so still be running with root privileges. To allow a platform to properly verify that your image isn't set up to run as root, you should use a UID instead of a username. That should be anything except UID 0.
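For example, a minimal sketch (1001 is an arbitrary, non-zero UID chosen for illustration):

# numeric UID so the platform can verify the container does not run as root
USER 1001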
The second problem is that although running as a specific non-root user in your own Docker service instance may be fine, it isn't when you consider a multi-tenant environment, be that for different users, or even for different applications where it is important that the applications can't access each other in any way.
In a multi-tenant environment, the safest thing you can do is to run all applications owned by a specific account, or in different projects, as different users. One reason this is important is from the perspective of data access on persistent volumes. If everything was running as the same user and an application managed to get access to persistent volumes it shouldn't, it could see data from other applications.
As far as OpenShift goes, by default it runs with the highest level of security to protect you. Thus, applications in one project are run as a different user than applications in another project.
You can reduce the security measures and override this if you have the appropriate privileges, but you should only make changes if you are comfortable that the application you are doing it for has a low risk profile. That is, you don't grab some arbitrary Docker image off the Internet you don't know anything about and let it run as root.
To learn more about changing the security context constraints around a specific application start by reading through:
https://docs.openshift.com/enterprise/latest/admin_guide/manage_scc.html
You can override the default and say that an image can run as the user it declares in the Dockerfile or even run it as root if need be.
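For illustration, relaxing the constraint for a single project might look roughly like this (the project name is a placeholder and the command requires cluster-admin rights):

oc adm policy add-scc-to-user anyuid -z default -n myproject

This allows pods run under that project's default service account to run as the user the image declares, including root, so only do it for images you trust.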
The better way, if you want the best security, is to construct the Docker image so that it can run as any user and not just a specific user.
The general guidelines for how to do this are as follows (a minimal Dockerfile sketch follows the list):
Create a new user account in the container to run the application as. Make the primary group of this account group ID 0. That is, its group will be that of root, but the user itself will not be root. It needs to be group ID 0 because that is what UNIX defaults the group to when running as a user that has no entry in the UNIX passwd file.
Any directories/files that the application needs read access to should be readable/accessible by others, or readable/accessible by group root.
Any directories/files that the application needs write access to should be writable by group root.
The application should not require the ability to bind privileged ports. Technically you could work around that by using Linux capabilities, but some build systems for Docker images, such as Docker Hub automated builds, appear not to support using aspects of Linux capabilities, so you wouldn't be able to build images on those systems if you need to use setcap.
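A minimal Dockerfile sketch of these guidelines, where the UID 1001, the /app and /data paths and the port are illustrative and useradd assumes a base image that provides it:

# non-root user with primary group root (GID 0), no home directory created
RUN useradd -u 1001 -g 0 -M -d /app app \
 && mkdir -p /app /data \
 && chgrp -R 0 /app /data \
 && chmod -R g=u /app /data
USER 1001
# non-privileged port, so no extra capabilities are needed
EXPOSE 8080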
Finally, you will find that if you are using OpenShift Local (CDK) from Red Hat, or the all-in-one VM for OpenShift Origin, none of this is required. This is because those VM images have purposely been set up with a default policy that allows any image to run, even images wanting to run as root. This is purely so that it is easier to try out arbitrary images you download, but in a production system you really want to be running images in a more secure way, with the ability to run images as root off by default.
If you want to read more about some of the issues around running containers as root and the issues that can come up when a platform runs containers as an arbitrary user ID, have a look at the series of blog posts at:
http://blog.dscpl.com.au/2016/01/roundup-of-docker-issues-when-hosting.html