On the site Ideone, a user uploads code to be run on a remote server. This is similar to what an online judge does.
The problem is that users might upload code that attempts to 'hack' the system. I understand that in C and C++ it's easy to disable a certain set of system calls (e.g. by patching a few DLLs), but I'm not so sure about other languages.
How would you protect your system if you were to support higher level languages (Erlang, Haskell) on the online judge?
Use the Ideone API.
Run in a sandbox as a non-privileged user. That's not absolutely foolproof, but it makes the bar for doing lasting damage or serious compromise very high. It also does not depend on possible options or modifications to the language run-time in question. If you are dealing with a fully compiled language (that is, no run-time interpreter), you can do this as well.
For example, take Erlang. Set up a chroot jail that contains only what you need to run Erlang. Add a non-privileged user account and home directory. Bring in the code to be run, verify all file/directory permissions, change to the non-privileged UID and run the code.
You can find more detailed instructions on setting up jails in the Wikipedia article on chroot. Procedures and requirements differ slightly between OSes.
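To make the launcher side of that procedure concrete, here is a minimal C sketch. It assumes a jail at /srv/jail has already been populated (the Erlang runtime, the submission, and nothing else) and that an unprivileged account named judge exists; those names, and the /bin/run_submission path, are placeholders, and the program has to be started as root so that it has privileges to give up.

```c
/* Minimal launcher sketch: enter a chroot jail and drop privileges
 * before exec'ing the submitted program. Assumes the jail at JAIL_DIR
 * is already populated and that an unprivileged "judge" account exists;
 * both are placeholders. Must be started as root. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pwd.h>
#include <grp.h>

#define JAIL_DIR "/srv/jail"   /* hypothetical jail root            */
#define RUN_AS   "judge"       /* hypothetical unprivileged account */

int main(void) {
    struct passwd *pw = getpwnam(RUN_AS);   /* look up before chroot */
    if (!pw) { perror("getpwnam"); return 1; }

    if (chroot(JAIL_DIR) != 0) { perror("chroot"); return 1; }
    if (chdir("/") != 0)       { perror("chdir");  return 1; }

    /* Drop supplementary groups, then the gid, then the uid, in that
     * order; afterwards the process can no longer regain them. */
    if (setgroups(0, NULL) != 0) { perror("setgroups"); return 1; }
    if (setgid(pw->pw_gid) != 0) { perror("setgid");    return 1; }
    if (setuid(pw->pw_uid) != 0) { perror("setuid");    return 1; }

    /* Paths below are now relative to the jail root. */
    char *const argv[] = { "/bin/run_submission", NULL };
    execv(argv[0], argv);
    perror("execv");           /* only reached on failure */
    return 1;
}
```

The ordering matters: the passwd lookup happens before chroot() (there may be no /etc/passwd inside the jail), and groups are dropped before the uid, because afterwards the process no longer has permission to change them.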
I don't usually post in forums because normally I can find any answer I need using Google. However, every search I run gives me very specific results, such as buffer overflow vulnerabilities that already exist in a specific game or system, which is not what I need.
I have a home network including Windows Server 2008 R2, and my son wants to start a Minecraft server, which of course I want to give him full access to so he can learn. However, I know that every game is "moddable", and he uses custom maps and the like in a lot of his games.
My concern is that I am going to create security risks on my network based on inexperienced programming. Will giving him the ability to install and create mods on my server potentially open up vulnerabilities (beyond the open Minecraft ports) due to the possible inexperience of the people actually writing the mods? Or do mods simply not work that way, and I can't find an answer to my question because it doesn't make sense and nobody actually programs mods like that?
It depends on how the game's mod system works, and on whether the game itself is sandboxed. Importantly, no software is perfectly secure. You have to decide what level of security and reliability you are happy with.
There are several ways a mod could expose a vulnerability:
1. The game could allow the mod access to an inappropriately permissive set of actions, such as access to the filesystem. This can include the developer not sandboxing the mod properly.
2. The mod could exploit a vulnerability in the game's API to access actions the game developer didn't intend. This would be due to a bug in the API.
3. The mod could exploit a vulnerability in the language engine (for example, Java has a long history of security vulnerabilities).
4. The mod itself could be vulnerable to attack, and could be made to launch one of the attacks above.
If the mod system is script or VM based, such as Lua, JavaScript or Java, I would feel relatively safe installing mods (so long as the game has a well implemented API/sandbox), because exploits 2-4 are relatively unlikely.
(My understanding of native code mods/plugins is limited, but I'm pretty sure you MUST trust a native mod if you want to run it. Even if you do, it might still be exploitable.)
My understanding of Minecraft mods is that they are written in Java. My feeling about the Mojang guys is that they know what they're doing, so I'd be surprised if their mod API isn't exceptionally well designed and implemented. Having said that, installing mods necessarily introduces a security risk.
If this risk is unacceptable, you can reduce it by introducing depth to the system. Why not, say, run your Minecraft server in a virtual machine, with limited access to the network (only the required ports, for example)? That way the impact of vulnerabilities is greatly reduced.
I'd recommend creating an Ubuntu VM on VirtualBox (because they're both free as in beer), but you could use whatever OS you're comfortable with.
Buffer overrun vulnerabilities are associated with programming languages that permit unchecked memory access. Minecraft is written in Java, a language which is not susceptible to buffer overruns, so a pure-Java mod would be very unlikely to exhibit anything resembling this kind of vulnerability.
Naturally, programs in Java can still be vulnerable to other kinds of security issues, either against the game itself (e.g. there have been game-account login exploits against Minecraft servers) or against the server (I'm not aware of any known cases of this for Minecraft, but it's always possible). The usual mitigations for running servers apply: for example, lock down network access to known-good IPs where possible, run the server as a limited user, and so on.
I'm currently offering an assembly compile service for some people. They can enter their assembly code in an online editor and compile it. When they compile it, the code is sent to my server with an AJAX request, gets compiled, and the output of the program is returned.
However, I'm wondering what I can do to prevent any serious damage to the server. I'm quite new to assembly myself, so what is possible when they run their code on my server? Can they delete or move files? Is there any way to prevent these security issues?
Thank you in advance!
Have a look at http://sourceforge.net/projects/libsandbox/. It is designed for doing exactly what you want on a Linux server:
This project provides API's in C/C++/Python for testing and profiling simple (single process) programs in a restricted environment, or sandbox. Runtime behaviours of binary executable programs can be captured and blocked according to configurable / programmable policies.
The sandbox libraries were originally designed and utilized as the core security module of a full-fledged online judge system for ACM/ICPC training. They have since then evolved into a general-purpose tool for binary program testing, profiling, and security restriction. The sandbox libraries are currently maintained by the OpenJudge Alliance (http://openjudge.net/) as a standalone, open-source project to facilitate various assignment grading solutions for IT/CS education.
If this is a tutorial service, where the clients just need to test miscellaneous assembly code and do not need to perform operations outside of their program (such as reading or modifying the file system), then another option is to permit only a selected subset of instructions. In particular, do not allow any instructions that can make system calls, and allow only limited control-transfer instructions (e.g., no returns, branches only to labels defined within the user's code, and so on). You might also provide some limited ways to return output, such as a library call that prints whatever value is in a particular register. Do not allow data declarations in the text (code) section, since arbitrary machine code could be entered as numerical data definitions.
Although I wrote “another option,” this should be in addition to the others that other respondents have suggested, such as sandboxing.
This method is error-prone and, if used, should be carefully and thoroughly designed. For example, some assemblers permit multiple instructions on one line, so merely ensuring that the text in the first instruction field of a line is acceptable would miss the remaining instructions on the line.
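As a rough illustration of the mnemonic-whitelisting idea, here is a C sketch that reads submitted assembly from stdin and rejects any line whose first token is not on an allowed list. The allowed set is purely illustrative, not a vetted policy, and the sketch assumes one instruction per line with the mnemonic first; as noted above, a real filter has to understand the assembler's full syntax (data directives, macros, multiple instructions per line).

```c
/* Rough sketch of mnemonic whitelisting for submitted assembly.
 * The allowed set is illustrative, and the code assumes one
 * instruction per line with the mnemonic first; a real assembler
 * accepts far more, all of which must also be handled. */
#include <ctype.h>
#include <stdio.h>
#include <string.h>

static const char *allowed[] = {
    "mov", "add", "sub", "cmp", "jmp", "je", "jne", "push", "pop", NULL
};

static int mnemonic_allowed(const char *line) {
    char word[32];
    size_t n = 0;

    while (*line && isspace((unsigned char)*line)) line++;   /* skip indent   */
    if (*line == '\0' || *line == ';') return 1;             /* blank/comment */

    while (*line && !isspace((unsigned char)*line) && n < sizeof word - 1)
        word[n++] = (char)tolower((unsigned char)*line++);
    word[n] = '\0';

    /* Labels ("foo:") pass; anything else must match the whitelist. */
    if (n > 0 && word[n - 1] == ':') return 1;
    for (int i = 0; allowed[i]; i++)
        if (strcmp(word, allowed[i]) == 0) return 1;
    return 0;
}

int main(void) {
    char line[256];
    int lineno = 0, ok = 1;
    while (fgets(line, sizeof line, stdin)) {
        lineno++;
        if (!mnemonic_allowed(line)) {
            fprintf(stderr, "rejected: line %d: %s", lineno, line);
            ok = 0;
        }
    }
    return ok ? 0 : 1;
}
```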
Compiling and running someone else's arbitrary code on your server is exactly that: arbitrary code execution, which is the holy grail of every malicious hacker's quest. Someone could probably use this question to find your service and exploit it this second. Stop running the service immediately. If you wish to continue offering it, compile and run the programs within a sandbox, and keep the service suspended until that is implemented.
You should run the code in a virtual machine sandbox, because if the code is malicious, the sandbox will prevent it from damaging your actual OS. VirtualBox and Xen are two such virtual machine options. You could also perform some sort of signature detection on the code to search for known malicious functionality, though any form of signature detection can be beaten.
This is a link to VirtualBox's homepage: https://www.virtualbox.org/
This is a link to Xen: http://xen.org/
As per Using ptrace to write a program supervisor in userspace, I'm attempting to create the program supervisor component of an online judge.
What system calls would I need to block entirely, always allow, or check the attributes of, in order to:
Prevent forking or running other commands
Restrict to standard 'safe' C and C++ libs
Prevent net access
Restrict access to all but 2 files 'in.txt' and 'out.txt'
Prevent access to any system functions or details.
Prevent the application from escaping its supervisor
Prevent anything nasty.
Thanks, any help/advice/links much appreciated.
From a security perspective, the best approach is to figure out what you need to permit rather than what you need to deny. I would recommend starting with a supervisor that just logs everything that a known-benign set of programs does, and then whitelist those syscalls and file accesses. As new programs run afoul of this very restrictive sandbox, you can then evaluate loosening restrictions on a case-by-case basis until you find the right profile.
This is essentially how application sandbox profiles are developed on Mac OS X.
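A minimal sketch of that logging-first supervisor for x86-64 Linux, using ptrace to print the number of every system call the child makes. /bin/ls stands in for the program being profiled, and a real supervisor would record these numbers and turn them into a whitelist rather than just printing them.

```c
/* Minimal syscall-logging supervisor (x86-64 Linux), in the spirit of
 * "log everything a known-benign program does, then whitelist it".
 * The traced command (/bin/ls) is just a stand-in for the submission. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t child = fork();
    if (child == 0) {
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);   /* ask to be traced */
        execl("/bin/ls", "ls", NULL);            /* stand-in program */
        perror("execl");
        _exit(1);
    }

    int status;
    waitpid(child, &status, 0);                  /* stop at exec */

    while (1) {
        /* Run until the next syscall entry or exit, then log it. */
        if (ptrace(PTRACE_SYSCALL, child, NULL, NULL) == -1) break;
        if (waitpid(child, &status, 0) == -1) break;
        if (WIFEXITED(status)) break;

        struct user_regs_struct regs;
        if (ptrace(PTRACE_GETREGS, child, NULL, &regs) == 0)
            printf("syscall %llu\n", (unsigned long long)regs.orig_rax);
        /* Each syscall stops twice (entry and exit), so numbers appear
         * twice; a real supervisor would track entry/exit state and
         * consult a whitelist here instead of just printing. */
    }
    return 0;
}
```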
Perhaps you can configure AppArmor to do what you want. From the FAQ:
AppArmor is the most effective and easy-to-use Linux application security system available on the market today. AppArmor is a security framework that proactively protects the operating system and applications from external or internal threats, even zero-day attacks, by enforcing good program behavior and preventing even unknown software flaws from being exploited. AppArmor security profiles completely define what system resources individual programs can access, and with what privileges. A number of default policies are included with AppArmor, and using a combination of advanced static analysis and learning-based tools, AppArmor policies for even very complex applications can be deployed successfully in a matter of hours.
If you only want to inspect another process's system calls, you can use ptrace(), but you will have no guarantees, as discussed in Using ptrace to write a program supervisor in userspace.
You can use Valgrind to inspect and hook function and library calls, but it will be tedious, and blacklisting is probably not the right approach anyway.
You can also use systrace (http://en.wikipedia.org/wiki/Systrace) to write rules that authorize or block various things, such as opening only certain files. It is simple to use for sandboxing a process.
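systrace is no longer really maintained on modern Linux; the closest current equivalent is a seccomp-BPF filter. Below is a minimal sketch using libseccomp (an assumption: the library is installed and you link with -lseccomp) that whitelists a handful of system calls and kills the program on anything else; the allowed list is illustrative only and would come from profiling in practice.

```c
/* Sketch of a seccomp-BPF whitelist using libseccomp, as a modern Linux
 * stand-in for systrace-style rules. Build with -lseccomp. The allowed
 * list below is illustrative only. */
#include <seccomp.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Default action: kill the program on any syscall not whitelisted. */
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL);
    if (!ctx) { fprintf(stderr, "seccomp_init failed\n"); return 1; }

    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(read), 0);
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write), 0);
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(brk), 0);
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0);

    if (seccomp_load(ctx) != 0) {
        fprintf(stderr, "seccomp_load failed\n");
        return 1;
    }
    seccomp_release(ctx);

    write(1, "still alive\n", 12);   /* allowed */
    /* Anything else, e.g. open(), fork(), connect(), now kills the program. */
    return 0;
}
```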
I have read the Wikipedia article, but I am not really sure what it means, and how similar it is to version control.
It would be helpful if somebody could explain in very simple terms what sandboxing is.
A sandpit or sandbox is a low, wide container or shallow depression filled with sand in which children can play. Many homeowners with children build sandpits in their backyards because, unlike much playground equipment, they can be easily and cheaply constructed. A "sandpit" may also denote an open pit sand mine.
Well, a software sandbox is no different from a sandbox built for a child to play in. By providing a sandbox we simulate the environment of a real playground (in other words, an isolated environment), but with restrictions on what the child can do, because we don't want the child to get hurt or to cause trouble for others. :) Whatever the reason, we just want to put restrictions on what the child can do, for security reasons.
Now coming to our software sandbox: we let any software (the child) execute (play), but with some restrictions on what it can do, so we can feel safe and secure about what the executing software is able to do.
You've seen and used antivirus software, right? It is also a kind of sandbox. It puts restrictions on what any program can do: when a malicious activity is detected, it stops it and informs the user that "this application is trying to access such-and-such resources; do you want to allow it?"
Download a program named Sandboxie and you can get hands-on experience of a sandbox. Using this program you can run any program in a controlled environment.
Sandboxie's site illustrates this with an animation. The red arrows indicate changes flowing from a running program into your computer. The box labeled Hard disk (no sandbox) shows changes made by a program running normally. The box labeled Hard disk (with sandbox) shows changes made by a program running under Sandboxie. The animation illustrates that Sandboxie is able to intercept the changes and isolate them within a sandbox, depicted as a yellow rectangle. It also illustrates that grouping the changes together makes it easy to delete all of them at once.
Now, from a programmer's point of view, a sandbox means restricting the API calls available to the application. In the antivirus example, we are limiting system calls (the operating system API).
Another example would be online coding arenas like TopCoder. You submit code (a program), but it runs on their server. For the safety of the server, they have to limit the API access the program has; in other words, they need to create a sandbox and run your program inside it.
If you have a proper sandbox, you can even run a virus-infected file, stop all the malicious activity of the virus, and see for yourself what it is trying to do. In fact, this is often the first step for an antivirus researcher.
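As a concrete taste of what "limiting what the program can do" looks like in the online-judge case, here is a small C sketch that forks, caps the child's CPU time, memory, open files, and process count with setrlimit(), and then execs the submission. The limits and the ./submission path are placeholders, and resource limits are only one layer; syscall filtering, a chroot, and an unprivileged user are still needed on top.

```c
/* Sketch of the resource-limit half of an online-judge sandbox:
 * fork, cap CPU / memory / files / processes in the child, then exec
 * the submission. Limits and the program path are placeholders. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static void cap(int resource, rlim_t value) {
    struct rlimit rl = { value, value };        /* soft == hard */
    if (setrlimit(resource, &rl) != 0) { perror("setrlimit"); _exit(1); }
}

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        cap(RLIMIT_CPU, 2);                     /* 2 seconds of CPU     */
        cap(RLIMIT_AS, 64UL * 1024 * 1024);     /* 64 MB address space  */
        cap(RLIMIT_NOFILE, 8);                  /* few open files       */
        cap(RLIMIT_NPROC, 1);                   /* no fork bombs        */
        execl("./submission", "submission", NULL);   /* placeholder */
        perror("execl");
        _exit(1);
    }

    int status;
    waitpid(pid, &status, 0);
    if (WIFSIGNALED(status))
        printf("killed by signal %d (limit exceeded?)\n", WTERMSIG(status));
    else
        printf("exited with status %d\n", WEXITSTATUS(status));
    return 0;
}
```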
This definition of sandboxing basically means having test environments (developer integration, quality assurance, stage, etc). These test environments mimic production, but they do not share any of the production resources. They have completely separate servers, queues, databases, and other resources.
More commonly, I've seen sandboxing refer to something like a virtual machine -- isolating some running code on a machine so that it can't affect the base system.
For a concrete example: suppose you have an application that deals with money transfers. In the production environment, real money is exchanged. In the sandboxed environment, everything runs exactly the same, but the money is virtual. It's for testing purposes.
Paypal offers such a sandboxed environment, for example.
For the "sandbox" in software development, it means to develop without disturbing others in an isolated way.
It is not similiar to version control. But some version control (as branching) method can help making sandboxes.
More often we refer to the other sandbox.
In anyway, sandbox often mean an isolated environment. You can do anything you like in the sandbox, but its effect won't propagate outside the sandbox. For instance, in software development, that means you don't need to mess with stuff in /usr/lib to test your library, etc.
A sandbox is an isolated testing environment that enables users to run programs or execute files without affecting the application, system, or platform on which they run. Software developers use sandboxes to test new programming code. Especially cybersecurity professionals use sandboxes to test potentially malicious software. Without sandboxing, an application or other system process could have unlimited access to all the user data and system resources on a network.
Sandboxes are also used to safely execute malicious code to avoid harming the device on which the code is running, the network, or other connected devices. Using a sandbox to detect malware offers an additional layer of protection against security threats, such as stealthy attacks and exploits that use zero-day vulnerabilities.
The main article is here.
Inspired by a much more specific question on ServerFault.
We all have to trust a huge number of people for the security and integrity of the systems we use every day. Here I'm thinking of all the authors of all the code running on your server or PC, and everyone involved in designing and building the hardware. This is mitigated by reputation and, where source is available, peer review.
Someone else you might have to trust, who is mentioned far less often, is the person who previously had root on a system. Your predecessor as system administrator at work. Or for home users, that nice Linux-savvy friend who configured your system for you. The previous owner of your phone (can you really trust the Factory Reset button?)
You have to trust them because there are so many ways to retain root despite the incoming admin's best efforts, and those are only the ones I could think of in a few minutes. Anyone who has ever had root on a system could have left all kinds of crazy backdoors, and your only real recourse under any Linux-based system I've seen is to reinstall your OS and all code that could ever run with any kind of privilege. Say, mount /home with noexec and reinstall everything else. Even that's not sufficient if any user whose data remains may ever gain privilege or influence a privileged user in sufficient detail (think shell aliases and other malicious configuration). Persistence of privilege is not a new problem.
How would you design a Linux-based system on which the highest level of privileged access can provably be revoked without a total reinstall? Alternatively, what system like that already exists? Alternatively, why is the creation of such a system logically impossible?
When I say Linux-based, I mean something that can run as much software that runs on Linux today as possible, with as few modifications to that software as possible. Physical access has traditionally meant game over because of things like keyloggers which can transmit, but suppose the hardware is sufficiently inspectable / tamper-evident to make ongoing access by that route sufficiently difficult, just because I (and the users of SO?) find the software aspects of this problem more interesting. :-) You might also assume the existence of a BIOS that can be provably reflashed known-good, or which can't be flashed at all.
I'm aware of the very basics of SELinux, and I don't think it's much help here, but I've never actually used it: feel free to explain how I'm wrong.
First and foremost, you did say design :) My answer will contain references to stuff that you can use right now, but some of it is not yet stable enough for production. My answer will also contain allusions to stuff that would need to be written.
You cannot accomplish this unless you (as user9876 pointed out) fully and completely trust the individual or company that did the initial installation. If you can't trust that, your problem is infinitely recursive.
Several years ago I was very active in a new file system called ext3cow, a copy-on-write version of ext3. Snapshots were cheap and 100% immutable; the port from Linux 2.4 to 2.6 dropped the ability to modify or delete files in the past.
Pound for pound, it was as efficient as ext3. Sure, that's nothing to write home about, but ext3 was (and to a large extent still is) the production-standard FS.
Using that type of file system, assuming a snapshot was made of the pristine installation after all services had been installed and configured, it would be quite easy to diff an entire volume to see what changed and when.
At this point, after going through the diff, you can decide that nothing is interesting and just change the root password, or you can go inspect things that seem a little odd.
Now, for the stuff that has to be written if something interesting is found:
Something that you can pipe the diff through that investigates each file. What you're going to see is a list of revisions per file, which would then have to be compared recursively: i.e., present against former-present, former-present against past1, past1 against past2, and so on, until you reach the original file or the point at which it no longer exists. Doing this by hand would seriously suck. Also, you need to identify files that were never versioned to begin with.
Something to inspect your currently running kernel. If someone has tainted the VFS, none of this is going to work; CoW file systems use temporal inodes to access files in the past. I know a lot of enterprise customers who modify the kernel quite a bit, up to and including modules, the VMM and the VFS. This may not be such an easy task: comparing against 'pristine' may not be tenable, since the old admin may have made legitimate modifications to the kernel since it was installed.
Databases are a special headache, since they typically change every second or more often, including the user table. That will need to be checked manually, unless you come up with something that can verify that nothing is strange; such a tool would be very specific to your setup. Classic UNIX 'root' is not your only concern here.
Now, consider the other computers on the network. How many of them are running an OS that is known to be easily exploited and bot-infested? Even if your server is clean, what if this guy joins #foo on IRC and starts an attack on your servers via your own LAN? Most people will click links that a co-worker sends, especially if it's a juicy blog entry about the company. Social engineering is very easy if you're doing it from the inside.
In short, what you suggest is tenable; however, I'm dubious that most companies could enforce the best practices needed for it to work when needed. If the end result is that you find a BOFH in your workforce and need to can him, you had better have contained him throughout his employment.
I'll update this answer more as I continue to think about it. It's a very interesting topic. What I've posted so far are my own collected thoughts on the subject.
Edit:
Yes, I know about virtual machines and checkpointing; a solution that assumes those brings on a whole new level of recursion. Did the (now departed) admin have direct root access to the privileged domain or the storage server? Probably yes, which is why I'm not considering it for the purposes of this question.
Look at Trusted Computing. The general idea is that the BIOS loads the bootloader, then hashes it and sends that hash to a special chip. The bootloader then hashes the OS kernel, which in turn hashes all the kernel-mode drivers. You can then ask the chip whether all the hashes were as expected.
Assuming you trust the person who originally installed and configured the system, this would enable you to prove that your OS hasn't had a rootkit installed by any of the later sysadmins. You could then manually run a hash over all the files on the system (since there is no rootkit the values will be accurate) and compare these against a list provided by the original installer. Any changed files will have to be checked carefully (e.g. /etc/passwd will have changed due to new users being legitimately added).
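For the "manually run a hash over all the files" step, something along these lines would do: a minimal sketch using OpenSSL's EVP interface (assumes the OpenSSL headers are installed and you link with -lcrypto), printing sha256sum-style "hash  path" lines that can be diffed against the installer's known-good list.

```c
/* Sketch of hashing one file for comparison against a known-good list,
 * once you trust that the kernel isn't lying about file contents.
 * Uses OpenSSL's EVP API; build with -lcrypto. */
#include <openssl/evp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    if (argc != 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 2; }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);

    unsigned char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        EVP_DigestUpdate(ctx, buf, n);
    fclose(f);

    unsigned char md[EVP_MAX_MD_SIZE];
    unsigned int len = 0;
    EVP_DigestFinal_ex(ctx, md, &len);
    EVP_MD_CTX_free(ctx);

    /* Print "hash  path" so the output can be diffed against the
     * original installer's list. */
    for (unsigned int i = 0; i < len; i++) printf("%02x", md[i]);
    printf("  %s\n", argv[1]);
    return 0;
}
```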
I have no idea how you'd handle patching such a system without breaking the chain of trust.
Also, note that your old sysadmin should be assumed to know any password typed into that system by any user, and to have unencrypted copies of any private key used on that system by any user. So it's time to change all your passwords.