Why is it unsafe to run applications as root in a Docker container? - linux

There are many sources saying it is bad to run apps as root inside a Docker container, but they always refer to this link: https://blog.docker.com/2014/06/docker-container-breakout-proof-of-concept-exploit/ - an issue fixed long ago, because newer Docker versions whitelist kernel capabilities.
Therefore:
Were there any other Docker exploits that worked under the container root user but didn't work under a container non-root user?
Were there any Linux kernel exploits that worked under the container root user but didn't work under a container non-root user?

So, this is skirting the question a little bit but I'm going to try my best to give you an informative and in-depth answer to help you understand the issues involved with running an application as root.
First off, this isn't a 100% definite no-go. You can run applications as root, and in some cases you may need to. But in software, we have something known as the Principle of Least Privilege, also known as the Principle of Least Authority in some areas. This is an important concept in computer security, promoting minimal privileges on computers, based on users' job necessities. Each system component or process should have the least authority necessary to perform its duties. This helps reduce the "attack surface" of the computer by eliminating unnecessary privileges that can result in network exploits and computer compromises. You can apply this principle to the computers you work on by ordinarily operating without administrative rights.
Unnecessarily running an application as root gives the program permission to do things that it does not need to do - such as performing system functions and managing a variety of the operating system's configuration settings. If your application is a basic website filled with cooking recipes, it does not need access to the system configuration files.
Applications are meant to be run with non-administrative security (or as mere mortals), so you have to elevate their privileges to modify the underlying system. This is how the general security model has worked for years.
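To make this concrete, here is a minimal sketch (the image name and UID are arbitrary examples, not something from the question) of applying least privilege when starting a container:

```
# Default: the process inside the container runs as root (uid 0)
docker run --rm alpine id
# uid=0(root) gid=0(root) ...

# Least privilege: run the same image under an unprivileged UID instead
docker run --rm --user 1000:1000 alpine id
# uid=1000 gid=1000
```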
It also makes applications easier to deploy and adds a layer of scalability. In general, the fewer privileges an application requires, the easier it is to deploy within a larger environment. Applications that install device drivers or require elevated security privileges typically have additional steps involved in their deployment. For example, on Windows a solution with no device drivers can be run directly with no installation, while device drivers must be installed separately using the Windows installer service in order to grant the driver elevated privileges.
I apologise if this does not answer your question, but I've done my best to explain why you should not run applications as root. I hope this helps!

To combat image misuse. Running applications as a non-privileged user helps keep the system secure when image users misuse Docker - e.g., run it with --privileged or mount system directories into the container.
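As an illustration (a sketch; "my-image" and the "app" user are made up, and the adduser syntax shown is Alpine's), an image that bakes in a non-root USER keeps the application unprivileged even when someone runs it carelessly:

```
# Suppose the image was built from a Dockerfile ending in:
#   RUN adduser -D app
#   USER app
# Then even a careless "docker run --privileged" still starts the
# application as the unprivileged "app" user rather than as root:
docker run --privileged my-image id
# uid=1000(app) gid=1000(app)
```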


Kubernetes: privileged containers and security concerns

Running a container in privileged mode is discouraged for security reasons.
For example: https://www.cncf.io/blog/2020/10/16/hack-my-mis-configured-kubernetes-privileged-pods/
It seems obvious to me that it is preferable to avoid privileged containers when a non-privileged container would be sufficient instead.
However, let's say I need to run a service that requires root access on the host to perform some tasks. Is there an added security risk in running this service in a privileged container (or with some linux capabilities) rather than, for example, a daemon that runs as root (or with those same linux capabilities)? What is the added attack surface?
If a hacker manages to run a command in the context of the container, all right, it is game over. But what kind of vulnerability would allow him to do so that couldn't also be exploited in the case of the aforementioned daemon (apart from sharing the kubeconfig file thoughtlessly)?
Firstly, as you said, it is important to underline that running a container in privileged mode is highly discouraged for some obvious security reasons, and here is why:
The risk of running a privileged container lies in the fact that it has access to the host's resources, including the ability to modify the host's system files, access sensitive information, and gain elevated privileges. Basically, as it provides more permissions to the container than it would have in non-privileged mode, it significantly increases the attack surface.
If a hacker gains access to the privileged container, he can potentially access and manipulate the host system, move laterally to other systems, and compromise the security of your entire infrastructure. A similar vulnerability in a daemon running as root or with additional Linux capabilities would carry the same risk, as the hacker would have access to the same resources and elevated privileges.
In both cases, it is very important to follow best practices for securing the system, such as reducing the attack surface, implementing least privilege, and maintaining proper network segmentation to reduce the risk of compromise.
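As a sketch of the least-privilege alternative (the capability and command here are illustrative assumptions, not from the question), you can often replace --privileged with a narrowly scoped capability grant:

```
# Instead of --privileged, drop everything and add back only what the
# service actually needs - here CAP_NET_ADMIN for network configuration:
docker run --rm --cap-drop=ALL --cap-add=NET_ADMIN alpine \
    ip link set lo up
```

The same idea maps to Kubernetes via securityContext.capabilities in the pod spec.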
In this security article written by the Astra security team, they mention a PHP remote code execution vulnerability (2020) through which an attacker can get hold of your server. If the process is run by a non-root user, the attack surface is reduced; but if the same service has root access, the attacker can reach the remaining containers. This is why it is always preferred to configure least-privileged access for all services. Also, go through this document for an overview of attacks that can be performed using privileged containers.

What is a container? And gVisor?

I am trying to understand what containers are and what their purpose is.
I am a little bit confused. When I started to read about them, I saw that they rely on Linux namespaces (is that true?) - a way to isolate the processes within the container from the other processes on the machine - and got the impression that their main purpose is security.
For instance, let's say that I own a server that runs multiple services. I also don't want a single hacked service to be able to compromise the whole system. So I put each service inside a container that makes the service unable to interfere with the other processes on the machine - for example, to kill them or to play with their memory - and in that way eliminate the risk.
But later I saw other purposes, like being able to ship the app easily, or something like that. So what is their main purpose? I also read that if their main purpose is security, they have a problem, because they run directly on the host kernel (again, is that true?) - an exploit like "Dirty COW" was, or would be, able to get out of the container and corrupt the machine. So I ended up reading about gVisor, which from what I understood tries to secure containers, and in some cases succeeds. So what does gVisor do differently that lets it secure containers? Is gVisor a container itself, or just a runtime environment for containers?
Finally, I always see comparisons between containers and VMs, and I ask: why? And when should I use each?
I don't know if anything I wrote is correct, and I will be glad if you point out my mistakes and answer my questions. Yes, I know there are a lot of them and I am sorry, but thanks!
The answer below is not guaranteed to be concise. Anyone is welcome to point out my mistakes.
It might be a little bit vague, because many people mix these concepts nowadays.
1. LXC
When I first got to know these concepts, "container" still meant LXC, a long-existing technique in Linux. IMHO, a container is a complete process that does not simulate a kernel. The difference between a container and a normal process is that the container provides an isolated view via namespaces and cgroups, as if it were running in a new operating system. But in fact, containers still share the host kernel (you are right), so people do worry about security, especially when you want to deploy in a public cloud (I don't see people using LXC directly on public clouds yet).
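You can see this namespace isolation directly from a shell (a minimal demo; requires root and util-linux's unshare):

```
# Create a new PID namespace and remount /proc inside it; ps then sees
# only the processes of this namespace, not the host's:
sudo unshare --pid --fork --mount-proc ps aux
#   PID USER   ...  COMMAND
#     1 root   ...  ps aux
```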
Despite the potential insecurity, the convenience and light weight of containers (fast boot, small memory footprint) seem to outweigh their drawbacks in most security-insensitive situations. Tools like Docker and Kubernetes make large-scale deployment and management more efficient.
2. Virtual Machine & Hardware-assisted virtualization
In contrast to containers, the Virtual Machine represents another category of isolated execution environment. Considering that most VMs leverage hardware-acceleration techniques like VT-x, I will assume you are talking about hardware-assisted virtualization. A virtual machine usually contains a full kernel inside it.
See this picture from Doug Chamberlain
The Intel VT-x technique provides 2 modes: root mode (privileged) and non-root mode (not privileged). Each mode has its own ring0-ring3 (e.g., non-root ring3, non-root ring0, root ring3, root ring0). The whole virtual machine runs in non-root mode, and the hypervisor (VMM, e.g., KVM) runs in root mode.
In the classic qemu+kvm setup, qemu runs in root ring3, and kvm runs in root ring0.
The strong isolation and the existence of a guest kernel make virtual machines more secure and compatible. But, of course, the price is performance and efficiency (slower boot, etc.).
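You can check whether your CPU exposes this hardware support (vmx is the Intel VT-x flag, svm is AMD's equivalent):

```
# A non-zero count means hardware-assisted virtualization is available:
grep -E -c 'vmx|svm' /proc/cpuinfo
```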
3. Container-based Virtualization
People want the isolation of hardware-assisted virtualization, but don't want to give up the convenience of containers. Therefore, a hybrid solution seems like the intuitive next step.
There are 2 typical solutions at present: Kata Containers and gVisor.
Kata Containers tries to slim down the whole virtual machine stack to make it more lightweight. However, there is still Linux inside it, and it is still a virtual machine - just a lighter one.
gVisor claims to be a secure container, but it still leverages hardware virtualization techniques (or ptrace, if you don't want virtualization). There is a component called the Sentry, which runs both in non-root ring0 and root ring3. The Sentry does part of the guest kernel's job but is much smaller than Linux. If the Sentry cannot finish a request itself, it proxies the request down to the host kernel.
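A quick way to observe the Sentry at work, assuming runsc (gVisor's runtime) is installed and registered with Docker under the name "runsc":

```
# dmesg inside a gVisor container prints the Sentry's own boot
# messages rather than the host kernel's log - evidence that syscalls
# are handled by the user-space kernel, not passed straight to the host:
docker run --rm --runtime=runsc alpine dmesg
```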
The reason most people believe gVisor is somewhat more secure is that it achieves "defense in depth" - more layers of indirection between the application and the host kernel make a successful escape harder. This is usually true, but again, it is not a guarantee.

Containers - what are their benefits, if they can't run across platforms?

I read on the internet that "containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries - anything you can install on a server".
I also read that Linux containers cannot run on Windows.
The stated benefit of containers is that "containers run as an isolated process in userspace on the host operating system."
I don't understand: if containers are not platform-independent, what are we actually achieving with them?
1) Anyhow, all the applications on a Linux box already run as isolated processes in their userspace.
2) If containers only contain app code + runtimes + tools + libraries, they can be shipped together. What are containers getting us here?
Posting the comment as an answer:
If containers only contain app code + runtimes + tools + libraries,
they can be shipped together. What are containers getting us here?
Suppose there is an enterprise with thousands of employees, all of whom work in Visual Studio C++. Now, the administrator can create a container with Visual Studio installed (only the C++ components) and configured, and deploy that container to all employees. The employees can instantly start working, without bothering about installation and configuration of the application. And if an employee somehow corrupts the application, they only need to download the container again and they are good to go.
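For example (a sketch; "vs-cpp" is a made-up image name), shipping a prepared image to another machine without a registry is a one-liner on each side:

```
# On the administrator's machine: export the configured image
docker save vs-cpp | gzip > vs-cpp.tar.gz

# On each employee's machine: import the image and start working
gunzip -c vs-cpp.tar.gz | docker load
docker run -it vs-cpp
```

Beyond easy shipping, the same isolation brings further benefits: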
Sandboxing
Security
Maintenance
Mobility
Backup
Many more to go.
Are containers platform-independent?
IMHO, I don't think so, as they rely on the host's system calls. Though I am open to other views if anybody knows better on this topic.
Even considering only one platform, containers have their advantages; just perhaps not the ones you need right now. :-) Containers help in the administration/maintenance of complex IT systems. With containers you can easily isolate applications, their configuration, and their users, to achieve:
Better security (if someone breaks in, damage is usually limited to one container)
Better safety (if something breaks, or e.g. you make an error, only applications in a given container will be victim to this)
Easier management (containers can be started/stopped separately, and can be transferred to other hosts (granted: hosts with the same OS; in the case of Linux containers, the host must also be Linux))
Easier testing (you can create and dispose of containers at will, anytime)
Lighter backup (you can back up just the container, not the whole host)
Some form of increased availability (with proper pre-configuration and automatic switch-over of a container to another host, you can be up and running more quickly in case of a primary host failure)
...just to name the first advantages coming to mind.

Security of Docker as it runs as root user

A Docker blog post indicates:
"Docker containers are, by default, quite secure; especially if you take care of running your processes inside the containers as non-privileged users (i.e. non-root)."
So, what is the security issue if I'm running as root under Docker? I mean, it is quite secure if I take care to run my processes as non-privileged users, so how can I harm the host from inside a container as a root user? I'm just asking in order to understand: how can the container be isolated if it is not secure when running as root? Which system calls can expose the host system then?
When you run as root, you can access a broader range of kernel services. For instance, you can:
manipulate network interfaces, routing tables, netfilter rules;
create raw sockets (and generally speaking, "exotic" sockets, exercising code that has received less scrutiny than good old TCP and UDP);
mount/unmount/remount filesystems;
change file ownership, permissions, extended attributes, overriding regular permissions (i.e. using slightly different code paths);
etc.
(It's interesting to note that all those examples are protected by capabilities.)
The key point is that as root, you can exercise more kernel code; if there is a vulnerability in that code, you can trigger it as root, but not as a regular user.
Additionally, if someone finds a way to break out of a container, breaking out as root obviously lets you do much more damage than breaking out as a regular user.
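You can inspect exactly which capabilities the container's root user holds (a minimal probe; alpine is just an example image):

```
# CapEff is the effective capability set. Docker's default profile
# grants root a whitelisted subset (chown, net_raw, setuid, ...)
# rather than the full set a host root would have:
docker run --rm alpine grep Cap /proc/self/status
```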
You can reboot the host machine by echoing to /proc/sysrq-trigger in Docker. Processes running as root in Docker can do this.
This seems like quite a good reason not to run processes as root in Docker ;)
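For the record, a sketch of that trick (do NOT run this on a machine you care about; note that on current Docker versions the default read-only /proc protections block it, so it additionally needs --privileged and a kernel with sysrq enabled):

```
# 'b' written to sysrq-trigger means "reboot immediately, without
# syncing or unmounting filesystems". With --privileged, /proc is
# writable from the container and this reboots the HOST:
docker run --privileged alpine sh -c 'echo b > /proc/sysrq-trigger'
```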

Developing in VS2012 with no admin rights

I'm doing some research on the limitations of developing with VS2012/Windows 7 with no local admin rights.
I found this link re: VS2003 and lack of admin rights; however, I cannot find any information regarding VS2012. Can someone please help me?
You do not need administrative rights to develop applications using Visual Studio on Windows. I do it all the time. Like any good Windows user, my primary account does not run with administrative privileges.
There are only a couple of cases that I can think of where you might need additional privileges.
First is if you're doing something like developing Windows services or shell extensions (rather than, say, desktop or Web applications). Then you'll need the ability to install, remove, start, and stop services; install shell extensions; relaunch Explorer; and so on. Or better yet, just do all of your testing inside a virtual machine, on which you can grant full administrative privileges with little or no security concern.
Second is if you need to debug a process that your user account does not have access to. In practice, this would mean attaching the debugger to a running process that is not your own. Normally you won't need to do this, as the only processes you'll be attaching a debugger to are the ones that you're writing, and you'll own those processes. But if you do need to debug the operating system or processes running in the context of another user, you will need some degree of administrative privileges. Fortunately, you can grant the debug privilege separately from the whole suite of administrative privileges. Somewhat less fortunately, this still gives away the farm—a skilled hacker with debug privileges effectively has the run of the system. One hopes you would be able to trust your programmers at least a little bit, though! The worst case is they trash their own machine and it has to be reloaded.
