When developing in a Docker container on Linux, there is a problem with permissions: how to manage file ownership and permissions between the host and the container.
Imagine that I have a Docker image that runs Ubuntu and an Apache server. Using the default settings for (recent versions of) Apache, the document root will be /var/www/html and Apache will be run as the www-data user.
In order to do some development, I expose the document root via Docker with -v /path/to/my/files:/var/www/html. And this is where the problem arises:
The files in /path/to/my/files are owned by the container's www-data user. If I'm lucky and my host has a www-data user, it will be that user; otherwise, it will be a distinct user local to the container. The permissions on those files will (probably) be 0755.
So, when I'm working away as myself (a user called jsmith), I can't edit those files because of the mismatched ownership and permissions.
I could change the ownership of the files to jsmith, but that will cause problems with Apache - it will have difficulty accessing files in the document root.
I could change the permissions to 0777, but any new files I create in the course of my work will be owned by jsmith with restrictive default permissions, and the problem starts all over again.
The end result is that it is necessary to constantly adjust ownership & permissions on the development files. Other people must have this problem, but every post I've seen on the topic of using Docker in a development workflow just kind of overlooks this problem.
I do have a solution, but I'm not entirely happy with it:
I set up a folder at /src/myproject. This holds my development files and is owned by www-data:www-data.
Using BindFS, I mount /src/myproject at ~/myproject, mapping www-data:www-data to jsmith:jsmith. This allows me to edit files in ~/myproject without messing around with permissions.
The Apache Docker container mounts the /src/myproject directory with -v /src/myproject:/var/www/html. Apache sees the www-data ownership of the files and has no problems.
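For reference, a bindfs invocation along these lines does that mapping (this is a sketch, not necessarily the exact command; adjust the paths and user names to your setup):

sudo bindfs --map=www-data/jsmith:@www-data/@jsmith /src/myproject ~/myproject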
This works well, but seems overly complicated. How do other people solve this problem?
I realize I'm very likely too late but this might be of help to someone.
In your Dockerfile, you could do this:
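# Remap the image's www-data user and group to uid/gid 1000 (typically the first regular user on the host)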
RUN usermod -u 1000 www-data
RUN groupmod -g 1000 www-data
This remaps www-data inside the image to uid/gid 1000, which is typically the first regular user on a desktop Linux install, so files Apache writes to the mounted volume end up owned by that host user. It may work in some setups.
I can think of two solutions:
Use a common group id among all developers and images. The uid may show up as just a number inside the container, but the shared gid would give at least read access, and optionally write access, without giving it out globally. Use the setgid bit on the containing directories to have files automatically created with this gid. This isn't the cleanest approach, and may lead to giving out access to other group members, but it may be much easier to manage depending on your organization's workflow.
The second option is named volumes, which I believe were added after you asked this question. They let you have the data exist with the uid/gids known to the containers. This has the downside of moving the data into the internal docker directories where it's less easy to manage outside of a container. However, there are microservices approaches that keep the volume synchronized with an outside source (git pull, rsync, etc.) using a dedicated container that mounts the same volume. You essentially move all of the reads and writes of the data into containers, including any backups, update routines, and testing scripts.
Update: A third option I often use for development environments is to run an entrypoint script as root that compares the mounted volume's uid/gid to the uid/gid of a user inside the container. If they do not match, the uid/gid of the user inside the container is updated to match the host. This allows developers to reuse the same image across multiple hosts where the uid/gid of each developer may be different on their local machine. The code to do this is included in my bin/fix-perms script that is part of my example base image. The last step of my entrypoint script is to then use gosu to drop from root back to the user, now with the changed uid/gid, and all files written will now match those of the user on the host.
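A minimal sketch of that pattern (this is not the exact fix-perms script from the linked image; the user name, volume path, and use of gosu here are assumptions):

#!/bin/sh
# entrypoint.sh - started as root, aligns the container user with the volume owner, then drops privileges
set -e
APP_USER=appuser           # user created in the Dockerfile (placeholder name)
VOLUME=/var/www/html       # bind-mounted host directory

# uid/gid that own the mounted volume on the host
HOST_UID=$(stat -c %u "$VOLUME")
HOST_GID=$(stat -c %g "$VOLUME")

# If they differ from the container user's, renumber the container user/group to match
[ "$HOST_UID" = "$(id -u "$APP_USER")" ] || usermod -u "$HOST_UID" "$APP_USER"
[ "$HOST_GID" = "$(id -g "$APP_USER")" ] || groupmod -g "$HOST_GID" "$(id -gn "$APP_USER")"

# Drop from root to the (possibly renumbered) user and run the real command
exec gosu "$APP_USER" "$@"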
If you happen to be running on MacOS, a recent feature called osxfs automatically corrects for uid/gid mismatches with host volumes.
Related
I'm trying to set up a Docker host, but I want to make it right from the start, and there is one concept I don't understand:
If I write a file from within the container to a mounted volume, it has the UID of the user from the container, even if the same UID doesn't exist on the host.
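For example, roughly (the image and path here are arbitrary):

# touch a file from inside a container into a bind-mounted directory,
# then check its owner on the host: it shows the container's uid (0 here), not my user
docker run --rm -v "$PWD/data:/data" ubuntu touch /data/from-container
ls -ln data/from-container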
I know that I can set the container to run as a different user, but this might break things if the container image expects it to be run as a specific user.
If I have a bunch of containers running later, my goal is that every file in a volume is owned by a single host user, so that adding/changing files doesn't cause permission conflicts for the container.
(Example: if I add a file from the host as a different user, the container can't modify it.)
What's the best practice here? How do I make it so that I have the best experience later?
I plan to create a web interface to configure a part of my system, including some files owned by root. It will be a NodeJS server, and I know that running it as root is not a good idea.
Any suggestions about how to perform that without performance and security issues?
Thank you.
I decided to create a specific script, owned by root with highly restricted rights, and to allow passwordless sudo on that script for a dedicated user that cannot log in (only root can su to it).
In the script I will perform the wanted actions (update, upgrade, file copies, etc.).
Let's hope that security is good enough.
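A sketch of that setup (the user and script names here are placeholders, not the real ones):

# The privileged script: owned by root, readable and executable by root only
chown root:root /usr/local/sbin/admin-tasks.sh
chmod 700 /usr/local/sbin/admin-tasks.sh

# Allow the dedicated service user (here "nodeapp") to run only that one script as root,
# without a password; visudo -cf checks the syntax before it takes effect
echo 'nodeapp ALL=(root) NOPASSWD: /usr/local/sbin/admin-tasks.sh' > /etc/sudoers.d/nodeapp
chmod 440 /etc/sudoers.d/nodeapp
visudo -cf /etc/sudoers.d/nodeapp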
As far as I understand, membership in the docker group is basically equivalent to root privileges. I need to allow one user to run a specific container, but I don't want him to be root-equivalent.
Since it's quite problematic to make a bash script setgid, I was thinking about writing a short executable which would be setgid to the docker group, and the only thing it would do is launch that specific Docker container. This particular executable would be +x and owned by another user. The side effect that ANY user could launch it is acceptable to me.
Would this bring any security over being in docker group? Or is it still trivial to break?
I doubt I'm the first person to want this; how is this normally solved?
Red Hat OpenShift runs Docker containers with random user IDs.
This works fine for some containers, but the NGINX container requires file permissions to be set to world read/write/execute in order to work.
Is there a more correct way to build/run a container for use with OpenShift?
For example, does OpenShift provide any kind of process ownership groups or rules?
Here is the nginx image I pull down, and the chmod command we currently run to make it work in OpenShift.
registry-nginxinc.rhcloud.com/nginx/rhel7-nginx:1.9.2
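# Loosen permissions on everything nginx needs to read or write at runtime, so an arbitrary OpenShift uid can use it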
RUN chmod -R 777 /var/log/nginx /var/cache/nginx/ \
&& chmod -R 777 /var/run \
&& chmod -R 777 /etc/nginx/*
References:
http://mailman.nginx.org/pipermail/nginx-devel/2015-November/007511.html
https://github.com/fsimorbrian/openshift-secure-routes
Why does this openshift route succeed in CDK but fail in RHEL7 Atomic?
Best practice is that you do not run your containers as root. Many Docker images out there, even some official images, ignore this and require you to run as root. Advice is generally that you should set up the image so that your application doesn't require root and can start up as a non root user you set up in the Dockerfile. Even this advice though isn't the most secure option for a couple of reasons.
The first is that they will say to use USER username, where username is obviously not root. For a platform that is hosting that image, that doesn't actually guarantee your application isn't running as root. This is because a named user such as username could be mapped to uid 0 in the container and so still run with root privileges. To allow a platform to properly verify that your image isn't set up to run as root, you should specify a numeric uid instead of a username; it should be anything except 0.
The second problem is that although running as a specific non root user in your own Docker service instance may be fine, it isn't when you consider a multi tenant environment, be that for different users, or even different applications where it is important that the different applications can't access each other in any way.
In a multi tenant environment, the safest thing you can do is to run all applications owned by a specific account, or in different projects, as different users. One reason this is important is from the perspective of data access on persistent volumes. If everything was running as the same user and it managed to get access to persistent volumes it shouldn't, it could see data from other applications.
As far as OpenShift goes, by default it runs with the highest level of security to protect you. Thus, applications in one project are run with a different user to applications in another project.
You can reduce the security measures and override this if you have the appropriate privileges, but you should only make changes if you are comfortable that the application you are doing it for has a low risk profile. That is, you don't grab some arbitrary Docker image off the Internet you don't know anything about and let it run as root.
To learn more about changing the security context constraints around a specific application start by reading through:
https://docs.openshift.com/enterprise/latest/admin_guide/manage_scc.html
You can override the default and say that an image can run as the user it declares in the Dockerfile or even run it as root if need be.
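For example, granting the anyuid SCC to the default service account of a project looks roughly like this (the project name is a placeholder; it requires cluster-admin rights, and older releases use oadm policy instead of oc adm policy):

# let pods using the default service account in "myproject" keep the uid declared in their image, including uid 0
oc adm policy add-scc-to-user anyuid -z default -n myproject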
The better way, if you want the best security, is to construct the Docker image so that it can run as any user and not just a specific user.
The general guidelines for how to do this are:
Create a new user account in the container to run the application as. Make the primary group of this account be group ID 0. That is, its group will be that of root, but the user will not. It needs to be group ID 0 as that is what UNIX will default the group to if running as a user that has no entry in the UNIX passwd file.
Any directories/files that the application needs read access to should be readable/accessible by others, or readable/accessible by group root.
Any directories/files that the application needs write access to should be writable by group root.
The application should not require the ability to bind privileged ports. Technically you could work around that by using Linux capabilities, but some build systems for Docker images, such as Docker Hub automated builds, appear not to support Linux capabilities, so you wouldn't be able to build images there if you need to use setcap.
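Putting those guidelines together, a Dockerfile sketch might look something like this (the base image, paths, port, and user name are placeholders, not a recommendation for any particular application):

FROM centos:7

# Non-root user whose primary group is root (gid 0), as described above
RUN useradd -u 1001 -g 0 -d /app -s /sbin/nologin appuser

WORKDIR /app
COPY . /app

# Anything the application writes to is owned by the root group and group-writable,
# so an arbitrary uid running with gid 0 can still use it
RUN chgrp -R 0 /app && chmod -R g+rwX /app

# Numeric uid (not a name) so the platform can verify the image doesn't run as root
USER 1001

# Unprivileged port, so no capabilities are needed to bind it
EXPOSE 8080
CMD ["./run.sh"]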
Finally, you will find that if using OpenShift Local (CDK) from Red Hat, or the all-in-one VM for OpenShift Origin, that none of this is required. This is because those VM images have purposely been set up to allow as the default policy the ability to run any image, even images wanting to run as root. This is purely so that it is easier to try out arbitrary images you download, but in a production system you really want to be running images in a more secure way, with the ability to run images as root off by default.
If you want to read more about some of the issues around running containers as root and the issues that can come up when a platform runs containers as an arbitrary user ID, have a look at the series of blog posts at:
http://blog.dscpl.com.au/2016/01/roundup-of-docker-issues-when-hosting.html
I have a folder on my server on which I have changed the permissions to 777 (read, write and execute all) to allow users to upload their pictures.
So I want to know, what are the security risks involved in this?
I have implemented code to restrict what file formats can be uploaded, but what would happen if someone were to find the location of the directory? Could this pose any threat to my server?
Can they start uploading any files they desire?
Thanks.
When users are uploading files to your server through a web form and some PHP script, the disk access on the server happens with the user id the web server is running under (usually nobody, www-data, apache, _httpd or even root).
Note here, that this single user id is used, regardless of which user uploads the file.
As long as there are no local users accessing the system by other means (ssh, for example), setting the upload directory's permissions to 0777 would not make much of a difference -- apart from somebody exploiting a security vulnerability somewhere else in your system, there's no one those permissions apply to anyway, and such an attacker would probably just use /tmp.
It is always good practice to set only those permissions on a file or directory that are actually needed. In this case that means probably something like:
drwxrws--- 5 www-data www-data 4096 Nov 17 16:44 upload/
I'm assuming that other local users besides the web server will want to access those files, like the sysadmin or a web designer. Add those users to the group your web server runs under and they don't need sudo or root privileges to access that directory. Also, the +s means that new files and directories in upload/ will automatically be owned by the same group.
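Roughly, getting to that state (assuming the web server runs as www-data and jdoe is a designer who needs access; both names are examples):

# upload directory owned by the web server's user and group, no access for "other"
chown www-data:www-data upload/
chmod 2770 upload/            # rwxrws---: the setgid bit makes new files inherit the group
# give a designer access by adding them to the web server's group
usermod -aG www-data jdoe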
As to your last question: just because an attacker knows where the directory is, doesn't mean he can magically make files appear there. There still has to be some sort of service running that accepts files and stores them there... so no, setting the permissions to 0777 doesn't directly make it any less safe.
Still, there are several more dimensions to "safety" and "security" that you cannot address with file permissions in this whole setup:
uploaders can still overwrite each other's files because they all work with the same user id
somebody can upload a malicious PHP script to the upload directory and run it from there, possibly exploit other vulnerabilities on your system and gain root access
somebody can use your server to distribute child porn
somebody could run a phishing site from your server after uploading a lookalike of paypal.com
...and there are probably more. Some of those problems you may have addressed in your upload script, but then again, understanding of unix file permissions and where they apply usually comes waaaay at the beginning when learning about security issues, which suggests you are probably not ready yet to tackle all of the possible problems.
Have your code looked at by somebody!
By what means are these users uploading their pictures? If it's over the web, then you only need to give the web server or the CGI script user access to the folder.
The biggest danger here is that users can overwrite other users' files, or delete other users' files. Nobody without access to this folder will be able to write to it (unless you have some kind of guest/anonymous user).
If you need a directory that everyone can create files in, what you want is to mimic the permissions of the /tmp directory.
$ chown root:root dir; chmod 777 dir; chmod +t dir;
This way any user can create a file, but they cannot delete files owned by other users.
Contrary to what others have said, the executable bit on a directory in unix systems means you can traverse it: make it your current directory (cd to it) and access the files inside. It has nothing to do with executing the directory itself (which is meaningless). If you remove the executable bit, nobody will be able to 'cd' to it.
If nothing else, I would remove the executable permissions for all users (if not owner and group as well). With this enabled, someone could upload a file that looks like a picture but is really an executable, which might cause no end of damage.
Possibly remove the read and write permissions for all users as well and restrict it to just owner and group, unless you need anonymous access.
You do not want the executable bit on. As far as *nix goes, the executable bit means you can actually run the file. So, for example, PHP scripts can be uploaded as type JPEG, and then someone can run that script if they know the location and it's within the web directory.