Security of Docker as it runs as root user

A Docker blog post indicates:
"Docker containers are, by default, quite secure; especially if you take care of running your processes inside the containers as non-privileged users (i.e. non root)."
So, what is the security issue if I'm running as root inside Docker? If the container is only "quite secure" when my processes run as non-privileged users, how can a root user inside a container harm the host? I'm just asking to understand it: how is the container isolated if it is not secure when running as root? Which system calls can expose the host system?

When you run as root, you can access a broader range of kernel services. For instance, you can:
manipulate network interfaces, routing tables, netfilter rules;
create raw sockets (and generally speaking, "exotic" sockets, exercising code that has received less scrutiny than good old TCP and UDP);
mount/unmount/remount filesystems;
change file ownership, permissions, and extended attributes, overriding regular permission checks (i.e. exercising slightly different code paths);
etc.
(It's worth noting that all of those examples are gated by kernel capabilities.)
The key point is that as root, you can exercise more kernel code; if there is a vulnerability in that code, you can trigger it as root, but not as a regular user.
Additionally, if someone finds a way to break out of a container, breaking out as root obviously lets you do much more damage than breaking out as a regular user.
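For example (a minimal sketch; "myimage" is a placeholder for your own image), you can shrink exactly this attack surface when starting a container by dropping capabilities and switching to an unprivileged user:

# Drop every capability and run as an unprivileged UID:GID
docker run --cap-drop ALL --user 1000:1000 myimage

# Or keep Docker's default whitelist and drop only what you know you
# don't need (NET_RAW covers the raw-socket example above)
docker run --cap-drop NET_RAW --cap-drop MKNOD myimage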

You can reboot the host machine by echoing to /proc/sysrq-trigger from inside a container; processes running as root in Docker have been able to do this.
That seems like a pretty good reason not to run processes as root in Docker ;)
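For the curious, the proof of concept is a one-liner (a sketch only; current Docker releases mask /proc/sysrq-trigger as read-only, so this generally needs --privileged or an old daemon). Do not try it on a machine you care about:

# As root in the container: "b" asks the kernel to reboot immediately,
# bypassing a clean shutdown; it only works if /proc/sysrq-trigger is writable
echo b > /proc/sysrq-trigger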

Related

Why is it unsafe to run applications as root in a Docker container?

Many sources say it is bad to run apps as root inside a Docker container, but they always refer to this link: https://blog.docker.com/2014/06/docker-container-breakout-proof-of-concept-exploit/ That issue was fixed long ago, since newer Docker versions whitelist kernel capabilities.
Therefore:
Were there any other Docker exploits that worked under the container's root user but not under a non-root user?
Were there any Linux kernel exploits that worked under the container's root user but not under a non-root user?
So, this is skirting the question a little bit but I'm going to try my best to give you an informative and in-depth answer to help you understand the issues involved with running an application as root.
First off, this isn't a 100% definite no-go. You can run applications as root, and in some cases you may need to. But in software we have something known as the Principle of Least Privilege (also called the Principle of Least Authority), an important concept in computer security: each user, system component, or process should have only the least authority necessary to perform its duties. This helps reduce the "attack surface" of the computer by eliminating unnecessary privileges that can lead to exploits and compromises. You can apply this principle to the computers you work on by ordinarily operating without administrative rights.
Unnecessarily running an application as root gives the program permission to do things it does not need to do, such as perform system functions and manage the operating system's configuration. If your application is a basic website full of cooking recipes, it does not need access to the system configuration files.
Applications are meant to run with non-administrative privileges (as mere mortals), so you have to explicitly elevate their privileges to modify the underlying system. This is how the general security model has worked for years.
It also makes applications easier to deploy and adds a layer of scalability. In general, the fewer privileges an application requires, the easier it is to deploy within a larger environment. Applications that install device drivers or require elevated security privileges typically have extra steps in their deployment. For example, on Windows a solution with no device drivers can be run directly with no installation, while device drivers must be installed separately using the Windows installer service in order to grant them elevated privileges.
I apologise if this does not answer your question, but I've done my best to explain why you should not run applications as root. I hope this helps!
To combat image misuse. Running applications as a non-privileged user helps keep the system secure when image users misuse Docker, e.g. by running with --privileged or mounting system directories into the container.
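To make that concrete (a sketch; the image name is invented), consider a user who bind-mounts the host's root filesystem into a container:

# "Image misuse": the host's entire filesystem is mounted into the container
docker run -v /:/host -it someimage sh

# If the image's process runs as root (UID 0), it can now rewrite any file on
# the host through /host; if the image switched to an unprivileged user,
# ordinary host file permissions still apply and limit the damage.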

How can I access files of different users and retain permission restrictions in Linux using Node.JS?

I'm trying to reimplement an existing server service in Node.JS. That service can be compared to a classic FTP server: authenticated users can read/create/modify files, but restricted to the permissions given to the matching system user name.
I'm pretty sure I can't have Node.js run as root and switch users using seteuid() or the like, since that would break concurrency.
Instead, can I let my Node.js process run as root and manually check permissions when accessing files? I'm thinking of some system call like "could user X create a file in directory Y?"
Otherwise, could I solve this by using user groups? Note that the service must be able to delete/modify a file created by the real system user, which may not set a special group just so that the service can access the file.
Running node as root sounds dangerous, but I assume there aren't many options left for you. Most FTP servers run as root too, for the same reason. It does mean, though, that you need to pay serious attention to the security of the code you are going to run.
Now to the question:
You are asking whether you can reimplement Unix permission checks in Node.js. Yes, you can, but you should not! There is an almost 100% chance that you will leave holes or miss edge cases that the Unix core has already taken care of.
Instead, use process.setuid(id) as you mentioned. It will not defeat concurrency, but you will need to think in terms of parallel processes rather than async calls. That is extra work, but it will spare you the headache of reinventing Unix security.
Alternatively, if all of the operations you want to carry out on the filesystem involve shell commands, you can simply wrap them in the following pattern:
runuser -l userNameHere -c 'command'
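For example (a sketch; the user name and path are placeholders), deleting a file on behalf of an authenticated user becomes:

# The kernel enforces alice's own permissions; nothing to reimplement
runuser -l alice -c 'rm /home/alice/uploads/old-report.txt'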

Is it necessary to run multiple uWSGI websites with different users?

In the docs they say:
The customers’ applications can now be run (using the process manager of your choice, such as rc.local, Running uWSGI via Upstart, Supervisord or whatever strikes your fancy) with a different uid and a limited (if you want) address space for each socket:
Is it really necessary to do this? If so, why?
It is a general security rule. Imagine one of your apps being compromised: if it runs with the same permissions as the others, it can potentially damage them as well.
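With uWSGI itself this is just a couple of flags per instance (a sketch; the account names, sockets and paths are assumptions):

# Each customer's app gets its own unprivileged account, socket, and
# address-space cap (--limit-as is in megabytes)
uwsgi --uid customer1 --gid customer1 --limit-as 128 \
      --socket /tmp/customer1.sock --wsgi-file /srv/customer1/app.py
uwsgi --uid customer2 --gid customer2 --limit-as 128 \
      --socket /tmp/customer2.sock --wsgi-file /srv/customer2/app.py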

When should I use "apache:apache" or "nobody:nobody" on my web server files?

Background: I remember at my old place of employment how the web server admin would always make me change the httpd-accessible file upload directories so that they were owned by apache:apache or nobody:nobody.
He said this was for security reasons.
Question: Can you tell me what, specifically, the security implications of this were? Also, is there a way to get Apache to run as nobody:nobody, and are there security implications for that as well?
TIA
There is a valid reason. Suppose httpd (Apache) ran as root and also belonged to the root group, and a vulnerability was found in the code itself; for example, a malicious user requests a URL longer than expected and httpd seg-faults. That exploit would now yield root access, meaning control over the system, and a malicious user could ultimately seize control and wreak havoc on the box.
That is why the httpd daemon runs under nobody:nobody or apache:apache. It is effectively a preventative measure to ensure that no exploit or vulnerability will expose root access. Imagine the security implications if that were to happen.
Fortunately, these days the Linux distributions, the BSD variants (OpenBSD/FreeBSD/NetBSD) and the commercial Unix variants all run the httpd daemon under an account with the least privileges. Furthermore, it is safe to say that a lot of the Apache code has been well tested and is stable. About 49% of servers across all domains run Apache, while Microsoft's IIS runs on 29% of domains, according to the Netcraft survey.
More generally, this shows that running a program with the least privileges is deemed 'safe' and mitigates the potential impact of exploits and vulnerabilities.
This is the wrong site for this question. Ordinarily you would not want the source code to be owned by the same user as Apache. Should a security flaw in Apache or your server-side scripts arise, an attacker could maliciously modify your web site's files without privilege escalation.
The one exception would be file upload directories, as you said. In this case, you want Apache to make changes to that directory.
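In practice that split looks something like this (a sketch; the paths are typical Red Hat defaults and may differ on your system, and the account httpd runs under is set by the User and Group directives in httpd.conf):

# Site code: owned by root, merely readable by the apache user
chown -R root:root /var/www/html
find /var/www/html -type d -exec chmod 755 {} +
find /var/www/html -type f -exec chmod 644 {} +

# Upload directory: the one place httpd is allowed to write
chown apache:apache /var/www/html/uploads
chmod 755 /var/www/html/uploads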

Secure way to run other people's code (sandbox) on my server?

I want to make a web service that runs other people's code locally. Naturally, I want to limit their code's access to a certain "sandbox" directory, so that they won't be able to connect to other parts of my server (DB, main webserver, etc.)
What's the best way to do this?
Run VMware/Virtualbox:
+ I guess it's as secure as it gets. Even if someone manages to "hack" it, they only hack the guest machine
+ Can limit the CPU & memory the processes use
+ Easy to set up - just create the VM
- Harder to "connect" the sandbox directory from the host to the guest
- Wasting extra memory and CPU for managing the VM
Run as an unprivileged user:
+ Doesn't waste extra resources
+ Sandbox directory is just a plain directory
? Can't limit CPU and memory?
? I don't know if it's secure enough
Any other way?
The server runs Fedora Core 8; the "other" code is written in Java & C++.
To limit CPU and memory, you want to set limits for groups of processes (POSIX resource limits only apply to individual processes). You can do this using cgroups.
For example, to limit memory start by mounting the memory cgroups filesystem:
# mkdir -p /cgroups/memory
# mount -t cgroup -o memory cgroup /cgroups/memory
Then, create a new sub-directory for each group, e.g.
# mkdir /cgroups/memory/my-users
Put the processes you want constrained (process with PID "1234" here) into this group:
# cd /cgroups/memory/my-users
# echo 1234 >> tasks
Set the total memory limit for the group:
# echo 1000000 > memory.limit_in_bytes
If processes in the group fork child processes, they will also be in the group.
The above sets the group's resident-memory limit (i.e. constrained processes will start to swap rather than use more memory). Other cgroup controllers let you constrain other things, such as CPU time.
You could either put your server process itself into the group (so that the server and all its users fall under one shared limit) or have the server put each new session into its own group.
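On newer kernels the same idea uses the unified cgroup v2 hierarchy (a sketch; /sys/fs/cgroup is the usual mount point and 1234 is a placeholder PID):

# mkdir /sys/fs/cgroup/my-users
# echo 1000000 > /sys/fs/cgroup/my-users/memory.max
# echo 1234 > /sys/fs/cgroup/my-users/cgroup.procs

Here memory.max replaces memory.limit_in_bytes, and cgroup.procs replaces the tasks file.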
chroot, jail, container, VServer/OpenVZ/etc., are generally more secure than running as an unprivileged user, but lighter-weight than full OS virtualization.
Also, for Java, you might trust the JVM's built-in sandboxing, and for compiling C++, NaCl claims to be able to sandbox x86 code.
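If you do lean on the JVM, the classic mechanism is a policy file plus the security manager (a sketch; UntrustedApp and sandbox.policy are placeholders, and note that the SecurityManager has since been deprecated in modern JDKs):

# Grant the untrusted code nothing beyond what sandbox.policy allows
java -Djava.security.manager -Djava.security.policy=sandbox.policy UntrustedApp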
But as Checkers' answer states, it's been proven possible to cause malicious damage from almost any "sandbox" in the past, and I would expect more holes to be continually found (and hopefully fixed) in the future. Do you really want to be running untrusted code?
Reading the codepad.org/about page might give you some cool ideas: http://codepad.org/about
Running under an unprivileged user still allows a local attacker to exploit privilege-escalation vulnerabilities.
Allowing code to execute in a VM can be insecure as well; an attacker can gain access to the host system, as a recent VMware vulnerability report has shown.
In my opinion, allowing native code to run on your system at all is not a good idea from a security point of view. Maybe you should reconsider allowing native code; that would certainly reduce the risk.
Check out ulimit and friends for ways of limiting an unprivileged user's ability to DoS the machine.
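For instance (a sketch; the numbers are arbitrary), set the limits in the shell that launches the untrusted code, since child processes inherit them:

ulimit -u 64       # max processes for this user (blunts fork bombs)
ulimit -v 262144   # max virtual memory, in KB
ulimit -t 30       # max CPU time per process, in seconds
ulimit -f 10240    # max file size, in 1024-byte blocks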
Try learning a little about setting up policies for SELinux. If you're running a Red Hat box, you're good to go since they package it into the default distro.
This will be useful if you know the things to which the code should not have access. Or you can do the opposite, and only grant access to certain things.
However, those policies are complicated, and may require more investment in time than you may wish to put forth.
Use the Ideone API; that is the simplest way.
Try using LXC as a container for your Apache server.
I'm not sure how much effort you want to put into this, but you could run Xen like the VPS web hosts out there:
http://www.xen.org/
This would allow full root access on their little piece of the server without compromising the other users or the base system.
