For reasons beyond the scope of this post, I want to run external (user-submitted) code, similar to the Computer Language Benchmarks Game. Obviously this needs to be done in a restricted environment. Here are my restriction requirements:
Can only read/write the current working directory (which will be a large tempdir)
No external access (internet, etc.)
I probably don't care about anything else (e.g., processor/memory usage).
I myself have several restrictions. A solution which uses standard *nix functionality (specifically RHEL 5.x) would be preferred, as then I could use our cluster for the backend. It is also difficult to get software installed there, so something in the base distribution would be optimal.
Now, the questions:
Can this even be done with externally compiled binaries? It seems like it could be possible, but also like it could just be hopeless.
What about if we force the code itself to be submitted, and compile it ourselves. Does that make the problem easier or harder?
Should I just give up on home directory protection, and use a VM/rollback? What about blocking external communication (isn't the VM usually talked to over a bridged LAN connection?)
Something I missed?
Possibly useful ideas:
rssh. Doesn't help with compiled code, though.
Using a VM with rollback after the code finishes (can the network be configured so there is a local bridge but no WAN bridge?). Doesn't work on the cluster.
I would examine and evaluate both a VM and a special SELinux context.
I don't think you'll be able to do what you need with simple filesystem protection, because you won't be able to stop the binary from making syscalls that give it access to the network and so on. You can probably use AppArmor to do what you need, though: it uses the kernel's security framework to confine the foreign binary.
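For the filesystem side specifically, here is a minimal sketch (not a complete sandbox) of a launcher that uses only calls available in a base RHEL 5.x install: chroot into the scratch directory, cap resources with setrlimit, drop to an unprivileged account, then exec the submitted binary. The jail path and the uid/gid are made-up examples, and this does nothing about network access on its own - that still needs SELinux/AppArmor, per-uid iptables rules, or a VM.

```
// Minimal sketch, assuming a prepared scratch directory and an unprivileged
// account (both hypothetical). Start as root; compile with: g++ runner.cpp
#include <sys/resource.h>   // setrlimit, RLIMIT_*
#include <unistd.h>         // chroot, chdir, setgid, setuid, execl
#include <cstdio>

int main() {
    const char* jail = "/var/sandbox/job1";   // hypothetical tempdir
    const uid_t unpriv_uid = 40000;           // hypothetical unprivileged account
    const gid_t unpriv_gid = 40000;

    // Confine the filesystem view to the scratch directory.
    if (chroot(jail) != 0 || chdir("/") != 0) {
        perror("chroot/chdir");
        return 1;
    }

    // Cap CPU time and disable core dumps; add RLIMIT_AS, RLIMIT_NPROC, etc.
    rlimit cpu  = {60, 60};   // seconds of CPU time
    rlimit core = {0, 0};
    setrlimit(RLIMIT_CPU,  &cpu);
    setrlimit(RLIMIT_CORE, &core);

    // Drop privileges only after chroot; group first, then user.
    if (setgid(unpriv_gid) != 0 || setuid(unpriv_uid) != 0) {
        perror("dropping privileges");
        return 1;
    }

    // The submitted binary was copied into the jail beforehand.
    execl("/submitted", "submitted", (char*)NULL);
    perror("execl");
    return 1;
}
```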
Related
I am running a Java minecraft server on my Linux server. I have been asked to install some mods (e.g. data packs) on the server which appear to contain Java code written by a third party other than Mojang.
Is this Java code restricted in what it can do, or can it run any arbitrary code it likes (e.g. read /etc/passwd, open TCP ports, claim huge amounts of memory, etc.)?
In other words, how risky are minecraft mods containing Java code?
This is a very good question; I was actually wondering that too. I have spent a bit of time looking at the binaries of Minecraft and the Minecraft server Spigot. I assume we both mean the Java Edition.
First of all, Java code, once you run it on a host, can do anything. The only mechanism in Java that can prevent that and protect the user from the developer is the security manager. Once you turn on the security manager (which is an opt-in mechanism), you can define a set of rules that Java will obey, e.g. that it will not write into directories you don't allow it to.
So the question is: does Minecraft use the security manager by default? I am 99% sure it does not. Hardly anyone uses the security manager, because it is a pain to configure correctly and things stop working every time you get it wrong (you can tell an application uses the security manager because you run into policy-misconfiguration problems every now and then).
Minecraft is launched by running an executable; I would not know where to turn the security manager on even if I wanted to. There is a bit of hope with the Spigot server: you can enable the security manager with -Djava.security.manager and supply your policy with -Djava.security.policy==my.policy. But getting the policy right will be a pain. I will try to look into it when I have a free week or so.
Minecraft mods can act with the full permissions of the user running Minecraft. If you don't trust the author of a mod, there are basically two approaches to using it safely:
Audit the source code yourself, and then compile the mod yourself so you know it matches the binary
Run Minecraft in a sandbox such as a VM, or as a heavily restricted user.
I am trying to understand what containers are and what their purpose is.
I am a little bit confused. When I started reading about them, I saw that they rely on Linux namespaces (is that true?) - a way to isolate the processes within the container from the other processes on the machine - and I got the impression that their main purpose is security.
For instance, let's say that I own a server that runs multiple services, and I don't want a single hacked service to be able to compromise the whole system. So I put each service inside a container that makes the service unable to interfere with the other processes on the machine (e.g., kill them or play with their memory), and in that way eliminate the risk.
But later I saw other stated purposes, like being able to ship the app easily, or something like that. So what is their main purpose? I also read that if their main purpose is security, they have a problem: they run directly on the host kernel (again, is that true?), so an exploit like Dirty COW was able to get out of the container and corrupt the machine. That led me to read about gVisor, which from what I understood tries to secure containers, and in some cases succeeds. So what does gVisor do differently that lets it secure containers? Is gVisor a container itself, or just a runtime environment for containers?
Finally, I always see comparisons between containers and VMs, and I wonder why. When should I use each?
I don't know if anything I wrote is correct, and I would be glad if you would point out my mistakes and answer my questions. Yes, I know there are a lot of them and I am sorry, but thanks!
The answer below is not guaranteed to be concise. Anyone is welcome to point out my mistakes.
It might be a little bit vague, because many people mix these concepts up nowadays.
1. LXC
When I first got to know these concepts, "container" still meant LXC, a long-existing technique in Linux. IMHO, a container is an ordinary process (or process tree) that does not simulate a kernel. The difference between a container and a normal process is that the container gets an isolated view of the system via namespaces (with resource limits enforced via cgroups), as if it were running in a new operating system. But in fact, containers still share the host kernel (you are right about that), so people do worry about security, especially when deploying to a public cloud (I don't see people using LXC directly on public clouds yet).
Despite the potential insecurity, the convenience and light weight of containers (fast boot, small memory footprint) seem to outweigh their drawbacks in most security-insensitive situations. Tools like Docker and Kubernetes make large-scale deployment and management more efficient.
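To make that "isolated view" concrete, here is a tiny sketch of the kernel primitive containers build on - Linux namespaces - using unshare(). It is not how Docker or LXC are implemented in full, just a demonstration that the process gets a private view (here, of the hostname and mounts) while still running on the host kernel. It needs root (or a user namespace) to succeed.

```
// Minimal namespace sketch; compile with: g++ ns_demo.cpp, run as root.
#ifndef _GNU_SOURCE
#define _GNU_SOURCE        // for unshare() and the CLONE_* flags
#endif
#include <sched.h>         // unshare
#include <unistd.h>        // sethostname, execlp
#include <cstdio>
#include <cstring>

int main() {
    // New UTS namespace: hostname changes stay private to this process tree.
    // New mount namespace: future mounts are invisible to the host.
    if (unshare(CLONE_NEWUTS | CLONE_NEWNS) == -1) {
        perror("unshare (root required?)");
        return 1;
    }
    const char name[] = "inside-container";
    if (sethostname(name, strlen(name)) == -1) {
        perror("sethostname");
        return 1;
    }
    // `hostname` in this shell prints "inside-container"; the host keeps its
    // own name, and we are still running on the host's kernel the whole time.
    execlp("/bin/sh", "sh", (char*)NULL);
    perror("execlp");
    return 1;
}
```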
2. Virtual Machine & Hardware-assisted virtualization
In contrast to containers, a virtual machine represents another category of isolated execution environment. Considering that most VMs leverage hardware-acceleration features such as Intel VT-x, I will assume you are talking about hardware-assisted virtualization. A virtual machine usually contains a full guest kernel inside it.
See this picture from Doug Chamberlain
The Intel VT-x technology provides two modes: root mode (privileged) and non-root mode (unprivileged). Each mode has its own ring 0 through ring 3 (i.e., non-root ring 3, non-root ring 0, root ring 3, root ring 0). The whole virtual machine runs in non-root mode, and the hypervisor (VMM, e.g., KVM) runs in root mode.
In the classic QEMU+KVM setup, QEMU runs in root ring 3, and KVM runs in root ring 0.
The strong isolation and the existence of a guest kernel make virtual machines more secure and more compatible. But, of course, the price is performance and efficiency (slower boot, etc.).
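As a rough illustration of that split (hedged: this is only a sketch of the userspace side, not how QEMU actually works), the hypervisor is driven from an ordinary root-ring-3 process through ioctls on /dev/kvm, while the kvm kernel module does the privileged root-ring-0 work of running guest code:

```
// Hypothetical probe; needs a host with /dev/kvm. Compile with: g++ kvm_probe.cpp
#include <fcntl.h>        // open
#include <sys/ioctl.h>    // ioctl
#include <linux/kvm.h>    // KVM_* ioctl numbers
#include <unistd.h>       // close
#include <cstdio>

int main() {
    int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
    if (kvm < 0) {
        perror("open /dev/kvm");
        return 1;
    }
    // Everything a userspace VMM does goes through ioctls like these, while
    // the privileged work (running guest code in VT-x non-root mode) happens
    // inside the kvm module in the kernel.
    int version = ioctl(kvm, KVM_GET_API_VERSION, 0);
    printf("KVM API version: %d\n", version);

    int vm = ioctl(kvm, KVM_CREATE_VM, 0);   // an empty VM: no memory, no vCPUs
    if (vm < 0) {
        perror("KVM_CREATE_VM");
        close(kvm);
        return 1;
    }
    printf("created an empty VM (fd %d)\n", vm);

    close(vm);
    close(kvm);
    return 0;
}
```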
3. Container-based Virtualization
People want the isolation of hardware-assisted virtualization but don't want to give up the convenience of containers, so hybrid solutions were an intuitive next step.
There are two typical solutions at present: Kata Containers and gVisor.
Kata Containers tries to slim down the whole virtual machine stack to make it more lightweight. There is still a Linux kernel inside it, and it is still a virtual machine, just a lighter one.
gVisor claims to be a secure container, but it still leverages hardware virtualization techniques (or ptrace if you don't want virtualization). It has a component called the Sentry, which runs both in non-root ring 0 and in root ring 3. The Sentry does part of the guest kernel's job but is much smaller than Linux. If the Sentry cannot finish a request itself, it proxies the request down to the host kernel.
The reason most people believe gVisor is somewhat more secure is that it achieves "defense in depth" - more layers of indirection between the untrusted code and the host kernel lead people to believe it is more secure. This is usually true, but again, it is not a guarantee.
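For a feel of what the ptrace platform mentioned above involves (this is a drastically reduced sketch, not gVisor code), here is a tracer that stops a child at each syscall boundary and can inspect the call; a real sandbox like the Sentry would decide at that point whether to emulate the call itself, forward it to the host kernel, or reject it. The register access is x86-64 specific.

```
// Drastically reduced ptrace sketch (x86-64 only); compile with: g++ tracer.cpp
#include <sys/ptrace.h>   // ptrace, PTRACE_*
#include <sys/user.h>     // user_regs_struct
#include <sys/wait.h>     // waitpid
#include <sys/types.h>
#include <unistd.h>       // fork, execlp
#include <cstdio>

int main() {
    pid_t child = fork();
    if (child == 0) {
        ptrace(PTRACE_TRACEME, 0, nullptr, nullptr);  // ask to be traced
        execlp("/bin/ls", "ls", (char*)NULL);         // the "untrusted" workload
        _exit(1);
    }

    int status;
    waitpid(child, &status, 0);                       // initial stop after execve
    while (true) {
        // Resume until the next syscall entry or exit, then stop again.
        ptrace(PTRACE_SYSCALL, child, nullptr, nullptr);
        waitpid(child, &status, 0);
        if (WIFEXITED(status))
            break;

        user_regs_struct regs;
        ptrace(PTRACE_GETREGS, child, nullptr, &regs);
        // On x86-64 the syscall number is in orig_rax. A sandbox would decide
        // here whether to handle the call itself or pass it to the host.
        printf("syscall %lld\n", (long long)regs.orig_rax);
    }
    return 0;
}
```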
I read on the internet that "containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries - anything you can install on a server".
I also read that Linux containers cannot run on Windows.
The stated benefits of containers include: "Containers run as an isolated process in userspace on the host operating system."
I don't understand: if containers are not platform independent, what are we actually achieving with them?
1) All applications on a Linux box run as isolated processes in their own userspace anyhow.
2) If containers only contain app code + runtimes + tools + libraries, those can be shipped together anyway. What are containers gaining us here?
Posting the comment as an answer:
"If containers only contain app code + runtimes + tools + libraries, those can be shipped together anyway. What are containers gaining us here?"
Suppose there is an enterprise with thousands of employees, all of whom work in Visual Studio C++. The administrator can create a container with Visual Studio installed (only the C++ components) and configured, and deploy that container to all employees. The employees can start working instantly, without bothering with installation and configuration of the application. And if an employee somehow corrupts the application, they only need to download the container again and they are good to go.
Sandboxing
Security
Maintenance
Mobility
Backup
Many more to go.
Are containers platform independent?
IMHO, I don't think so, as they rely on the host's system calls. Though I am open to other views if anybody knows this topic better.
Even considering only one platform, containers have their advantages; just perhaps not the ones you need right now. :-) Containers help in the administration/maintenance of complex IT systems. With containers you can easily isolate applications, their configuration, and their users, to achieve:
Better security (if someone breaks in, damage is usually limited to one container)
Better safety (if something breaks, or e.g. you make an error, only applications in a given container will be victim to this)
Easier management (containers can be started/stopped separately and can be transferred to other hosts (granted: a host with the same OS; in the case of Linux containers the host must also be Linux))
Easier testing (you can create and dispose of containers at will, anytime)
Lighter backups (you can back up just the container, not the whole host)
Some form of increased availability (with proper pre-configuration and automatic failover of a container to another host, you can be up and running again more quickly if the primary host fails)
...just to name the first advantages coming to mind.
I'm in the process of writing an application in C++ on Linux. The goal is to have it load dynamically linked libraries at run-time and provide all the services that those libraries require. The main aim is to have it act as a black box, where code loaded at run-time cannot break out and damage the rest of the system.
I've never done anything like this before and am a little lost as to the best approach to take. If I load all the dynamically linked libraries in a special process and then use something like SELinux to limit the ability of that central daemon to do anything outside of its requirements, would that be a reasonable solution?
The reason I ask is that I want to allow people to load code into this container application that then handles all the server side stuff for them, so things such as security, permissions, networking, logging etc are all provided with a simple, clean and cross platform API regardless of the version of UNIX that the container is running on.
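For reference, the plugin-loading part itself is just dlopen/dlsym; the names plugin.so and plugin_entry below are hypothetical. The important point for this question is that dlopen gives no isolation at all: once loaded, the library code runs with the full privileges of the daemon process, which is exactly why an outer layer such as SELinux (or running each plugin in a separate, confined process) is needed.

```
// Sketch of the loading mechanism only; plugin.so and plugin_entry are
// hypothetical names. Compile with: g++ host.cpp -ldl
#include <dlfcn.h>   // dlopen, dlsym, dlerror, dlclose
#include <cstdio>

// The entry point each plugin is assumed to export (hypothetical API).
using plugin_entry_fn = int (*)(const char* config);

int main() {
    void* handle = dlopen("./plugin.so", RTLD_NOW | RTLD_LOCAL);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    // dlsym returns void*, so it has to be cast to the function pointer type.
    auto entry = reinterpret_cast<plugin_entry_fn>(dlsym(handle, "plugin_entry"));
    if (!entry) {
        fprintf(stderr, "plugin_entry not found: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    // From this call onward, the plugin code can do anything this process
    // can do -- hence the need for SELinux/seccomp or a separate process.
    int rc = entry("example-config");
    printf("plugin returned %d\n", rc);

    dlclose(handle);
    return 0;
}
```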
I want to sharpen my GNU/Linux skills and get a better understanding of how servers work. So I thought I'd set up an Apache web server with FTP, SSH, SVN, etc. Since I use Adobe products every day in my line of work, installing a Linux distribution directly on my machine isn't an option. Yes, I could probably dual-boot Linux and Vista, but since I am a novice I don't want to risk anything happening to my machine.
So I thought I'd start by installing a distribution with a pretty steep learning curve and a lot of manual configuration, to get as familiar as possible with command-line operations and such. The goal is to get it working and to have a safe setup.
So before I write a wall of text:
I was curious what the pros and cons are, in terms of security, of a setup like this (running the server inside a VM)?
Thank you!
None; there is no difference whether the *nix system is on a VM or on physical hardware, as long as you give it access to the same resources.
In the case of the VM, if you don't want it to have access to the host's hard drive, then don't attach the physical hard drive. The same goes for the network and any other resource.
I am running a bunch of virtual servers on my single server. I'm using OpenVZ but the basic pros and cons are the same.
Pros
I enjoy the fact that I get to experiment a lot. I can install stuff, screw things up royally, and then just wipe out the entire virtual server and start over. It beats reinstalling the OS in real life. I can also easily compare and contrast competing products this way, and I'm able to monitor the running system and understand how it works in a more intimate way.
Cons
Resource consumption, which is the reason I chose OpenVZ: it doesn't consume as much as VirtualBox.
Security-wise, you need to take the same precautions as you do for a real system. The difference is that if your machine is compromised, you can just wipe it out easily.