Safely executing arbitrary code - security

I have a program that receives code from a user as input (this question is language-agnostic, though I am primarily interested in answers for Java and Python). Usually this code will be useful, but I have no guarantee that the user isn't making a mistake, or even deliberately supplying malicious code.
I want to be able to execute this code safely, i.e. without harmful side effects if it turns out to be faulty or malicious.
More specifically:
the user specifies that the input code should operate on some objects that exist in the primary program (the program that gets the code from the user and executes it). Ideally, it should be able to access these objects directly, but sending them over to the child program through some communication protocol or a file is also fine.
in the same way, the code should generate some output that is transmitted back to the parent program.
the user can specify whether the code should be allowed to access any other data, whether it should be allowed to read or write to files, and whether it should have access to any other interfaces or OS methods.
it is possible to specify a maximum runtime after which the code will be interrupted if it hasn't finished executing yet.
the parent program and the code to execute may be in different languages. You can assume that the programs necessary to compile and execute the given code are installed and available to the parent program. If the languages are different, assume that some standard format like JSON can be used for transmitting the data (or is there a way to do this more efficiently?)
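To make this concrete, here is a minimal sketch of the kind of harness I have in mind, assuming the untrusted code is itself a Python script and that JSON over stdin/stdout is acceptable; note it covers only the timeout and data-exchange requirements, not the actual isolation:

import json, subprocess, sys

def run_untrusted(code_path, input_objects, timeout_s=5.0):
    # Send the shared objects to the child as JSON on stdin, read its
    # results back from stdout, and kill it if it exceeds the time limit.
    try:
        proc = subprocess.run(
            [sys.executable, code_path],   # assumes the user code is a Python script
            input=json.dumps(input_objects),
            capture_output=True, text=True,
            timeout=timeout_s,
        )
    except subprocess.TimeoutExpired:
        return None                        # child was killed for running too long
    return json.loads(proc.stdout) if proc.returncode == 0 else None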
I think this should be doable with a virtual machine. However, speed is a concern: I want to be able to execute many code blocks quickly, so creating and tearing down a VM for each of them may be prohibitively expensive.
Another option is creating a sandbox, which e.g. Java can do, but as far as I am aware only for executing other Java code. I have been unable to find a solution that works with arbitrary languages.
For which languages does this work well, for which is it difficult?
Is this easier on some OS than on others?

Related

Is there a supported way to obtain LDT entries of debuggee?

A userspace process can call modify_ldt(2) to alter entries of its LDT. To correctly analyze what the process reads and where, as well as what code it is currently executing, a debugger needs to know which descriptor a value of e.g. CS=0x7 selects.
Currently, the only way I can think of that could possibly work is injecting some code into the debuggee to retrieve the LDT, executing it, and then restoring the original state. But this is quite error-prone and is likely to break the debugger user's workflow, e.g. when signals arrive.
So is there a better way? I've googled for something like PTRACE_LDT, but the pages are from 2005, and grepping the modern Linux source doesn't turn up anything relevant to x86 other than comments.
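For reference, the retrieval itself is easy from inside the target process; this sketch (Python via ctypes, assuming x86-64, where modify_ldt is syscall 154; it is 123 on 32-bit x86) shows roughly what injected code would have to do:

import ctypes, ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
SYS_modify_ldt = 154                  # x86-64; use 123 on 32-bit x86
LDT_ENTRIES, LDT_ENTRY_SIZE = 8192, 8

buf = ctypes.create_string_buffer(LDT_ENTRIES * LDT_ENTRY_SIZE)
nread = libc.syscall(SYS_modify_ldt, 0, buf, len(buf))   # func=0 means "read the LDT"
# CS=0x7 selects index 0 (0x7 >> 3) with table-indicator bit 1 (LDT) and RPL 3
descriptor = buf.raw[:LDT_ENTRY_SIZE] if nread > 0 else None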

Designing concurrency in a Python program

I'm designing a large-scale project, and I think I see a way I could drastically improve performance by taking advantage of multiple cores. However, I have zero experience with multiprocessing, and I'm a little concerned that my ideas might not be good ones.
Idea
The program is a video game that procedurally generates massive amounts of content. Since there's far too much to generate all at once, the program instead tries to generate what it needs as or slightly before it needs it, and expends a large amount of effort trying to predict what it will need in the near future and how near that future is. The entire program, therefore, is built around a task scheduler, which gets passed function objects with bits of metadata attached to help determine what order they should be processed in and calls them in that order.
Motivation
It seems to me like it ought to be easy to make these functions execute concurrently in their own processes. But looking at the documentation for the multiprocessing module makes me reconsider: there doesn't seem to be any simple way to share large data structures between processes. I can't help but imagine this is intentional.
Questions
So I suppose the fundamental questions I need to know the answers to are thus:
Is there any practical way to allow multiple processes to access the same list/dict/etc. for both reading and writing at the same time? Could I just launch multiple instances of my star generator, give them access to the dict that holds all the stars, and have new objects appear to just pop into existence in the dict from the perspective of other processes (that is, I wouldn't have to explicitly grab the star from the process that made it; I'd just pull it out of the dict as if the main thread had put it there itself)?
If not, is there any practical way to allow multiple processes to read the same data structure at the same time, but feed their resulting data back to a main process to be rolled into that same data structure safely?
Would this design work even if I ensured that no two concurrent functions tried to access the same data structure at the same time, either for reading or for writing?
Can data structures be inherently shared between processes at all, or do I always have to explicitly send data from one process to another, as I would with processes communicating over a TCP stream? I know there are objects that abstract away that sort of thing, but I'm asking whether it can be done away with entirely; having the object each process is looking at actually be the same block of memory.
How flexible are the objects that the modules provide to abstract away the communication between processes? Can I use them as a drop-in replacement for data structures used in existing code and not notice any differences? If I do such a thing, would it cause an unmanageable amount of overhead?
Sorry for my naivete, but I don't have a formal computer science education (at least, not yet) and I've never worked with concurrent systems before. Is the idea I'm trying to implement here even remotely practical, or would any solution that allows me to transparently execute arbitrary functions concurrently cause so much overhead that I'd be better off doing everything in one thread?
Example
For maximum clarity, here's an example of how I imagine the system would work:
The UI module has been instructed by the player to move the view over to a certain area of space. It informs the content management module of this, and asks it to make sure that all of the stars the player can currently click on are fully generated and ready to be clicked on.
The content management module checks and sees that a couple of the stars the UI is saying the player could potentially try to interact with have not, in fact, had the details that would show upon click generated yet. It produces a number of Task objects containing the methods of those stars that, when called, will generate the necessary data. It also adds some metadata to these task objects, assuming (possibly based on further information collected from the UI module) that it will be 0.1 seconds before the player tries to click anything, and that stars whose icons are closest to the cursor have the greatest chance of being clicked on and should therefore be requested for a time slightly sooner than the stars further from the cursor. It then adds these objects to the scheduler queue.
The scheduler quickly sorts its queue by how soon each task needs to be done, then pops the first task object off the queue, makes a new process from the function it contains, and then thinks no more about that process, instead just popping another task off the queue and stuffing it into a process too, then the next one, then the next one...
Meanwhile, the new process executes, stores the data it generates on the star object it is a method of, and terminates when it gets to the return statement.
The UI then registers that the player has indeed clicked on a star now, and looks up the data it needs to display on the star object whose representative sprite has been clicked. If the data is there, it displays it; if it isn't, the UI displays a message asking the player to wait and continues repeatedly trying to access the necessary attributes of the star object until it succeeds.
Even though your problem seems very complicated, there is a very easy solution. You can hide away all the complicated details of sharing your objects across processes using a proxy.
The basic idea is that you create a manager that manages all the objects that should be shared across processes. The manager then creates its own process, where it waits for other processes to instruct it to change the object. But enough said; it looks like this:
import multiprocessing as m

manager = m.Manager()
starsdict = manager.dict()
process = m.Process(target=yourfunction, args=(starsdict,))  # Process comes from multiprocessing
process.start()  # start(), not run(): run() would execute yourfunction in the current process
The object stored in starsdict is not the real dict; instead, it forwards every change and request you make to its manager. This is called a "proxy", and it has almost exactly the same API as the object it mimics. These proxies are picklable, so you can pass them as arguments to functions in new processes (as shown above) or send them through queues.
You can read more about this in the documentation.
I don't know how proxies react if two processes access them simultaneously. Since they're made for parallelism, I'd guess they should be safe, though I've heard that they're not. It would be best to test this yourself or look it up in the documentation.
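If you do want to test it, a minimal experiment might look like this; each write below is forwarded to the manager process, which serializes individual operations (compound read-modify-write sequences would still need a lock):

import multiprocessing as m

def writer(shared, key):
    for i in range(1000):
        shared[key] = i            # each assignment is a request sent to the manager

if __name__ == "__main__":
    manager = m.Manager()
    shared = manager.dict()
    procs = [m.Process(target=writer, args=(shared, k)) for k in ("a", "b")]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(dict(shared))            # expected: {'a': 999, 'b': 999}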

Linux non-su script indirectly triggering su script?

I'd like to create an auto-testing/grading script for students on a Linux system such that:
Any student user can initiate the script at any time.
A separate script (with root privileges) copies student code to a non-student-accessible file space, using non-student-accessible unit tests, etc.
The user receives limited feedback in the form of a text file generated by the grading script.
In short, I'm looking to create something similar to programming contest submission systems, but allowing richer feedback without revealing all teacher unit testing.
I would imagine that a spooling behavior between one initiating script and one root-permission cron script might be in order. Are there any models/examples of how one might best structure communication between a user-initiated script and a separate root-initiated script for such purposes?
There are many options.
The main things I would mention up front:
Don't use su; use sudo. There are several reasons for this, the main one being that with su you need the password of the user you want to become, while with sudo you don't;
Scripts can't be setuid; you must either use binaries or have a normal script that is started via sudo (of course, students must have a sudoers entry that allows them to run the script);
Cron may not be as fast as you need: it only runs tasks once a minute. Consider using inotify instead;
To communicate between the components of your system you need something that reacts in real time; there are many open-source components/libraries/frameworks that could help, but I would recommend taking a look at ZeroMQ and Redis (see the sketch after this list);
Results of the script executions/tests can be written either to the filesystem (which I think would be better) or to a DBMS.
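As a sketch of the ZeroMQ route (assuming pyzmq, with a grade() function standing in for your actual test runner), the root-side service could start out as small as this:

import zmq   # pyzmq

ctx = zmq.Context()
sock = ctx.socket(zmq.REP)
sock.bind("ipc:///var/run/grader.sock")   # root-owned endpoint

while True:
    submission = sock.recv_json()         # e.g. {"user": ..., "code_path": ...}
    sock.send_json({"report": grade(submission)})   # grade() is your test runner

The student-facing script would connect a REQ socket to the same endpoint and block until the report comes back.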
If you want to stick to shell scripting, the method I suggest for communicating between processes is to have the root script continually check a named pipe for input (i.e. keep reopening it after each EOF) and run each input through whatever tests must be done. Have part of the input be a 'return address': where to send the result.
This should allow the tests to be performed in a privileged space without exposing any control over the privileged space to the students. The students don't need sudo, and you don't need to pull in libraries. Just have the students pipe their code into a non-privileged script that adds the return address and whatever other markup you may need, which then gives it to the named pipe.
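A rough sketch of that loop (in Python rather than shell, assuming the FIFO was created beforehand with mkfifo and made world-writable, and with run_tests() standing in for your private grading logic):

FIFO = "/var/run/grader.fifo"   # created beforehand, e.g. mkfifo -m 622 /var/run/grader.fifo

while True:
    with open(FIFO) as pipe:    # blocks until a student-side script opens it for writing
        for line in pipe:       # EOF when the last writer closes; then reopen and wait
            reply_to, _, code_path = line.rstrip("\n").partition(" ")
            report = run_tests(code_path)       # your private unit tests go here
            with open(reply_to, "w") as out:    # the 'return address' from the submission
                out.write(report)               # (validate reply_to before writing as root)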

How do you safely read memory in Unix (or at least Linux)?

I want to read a byte of memory but don't know if the memory is truly readable or not. You can do it under OS X with the vm_read function and under Windows with ReadProcessMemory or with __try/__except. Under Linux I believe I can use ptrace, but only if the process is not already being debugged.
FYI, the reason I want to do this is that I'm writing state-dumping code for an exception handler, and it helps the user a lot to see what the various memory values are, or for that matter to know that they were invalid.
If you don't have to be fast, which is typical for state-dump/exception-handler code, you can install your own signal handler before the access attempt and restore it afterwards. Incredibly painful and slow, but it is done.
Another approach is to parse the contents of /proc/<pid>/maps to build a map of the memory once, then on each access decide whether the address is inside a readable mapping or not.
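Building that map is straightforward; a sketch in Python (use pid="self" for your own process):

def readable_ranges(pid="self"):
    # Each /proc/<pid>/maps line looks like:
    # 7f2a1c000000-7f2a1c021000 r-xp 00000000 08:01 1234 /usr/lib/libc.so.6
    ranges = []
    with open(f"/proc/{pid}/maps") as maps:
        for line in maps:
            addrs, perms = line.split()[:2]
            if "r" in perms:
                lo, hi = (int(x, 16) for x in addrs.split("-"))
                ranges.append((lo, hi))
    return ranges

def is_readable(addr, ranges):
    return any(lo <= addr < hi for lo, hi in ranges)

Bear in mind the map goes stale if the process later maps or unmaps memory, so re-parse it when in doubt.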
If it were me, I'd try to find something that already does this and re-implement or copy the code directly if the licence met my needs. It's painful to write from scratch, and nice to have something that can produce stack traces with symbol resolution.

Control Linux Application Launch/Licensing

I need to employ some sort of licensing on some Linux applications whose code base I don't have access to.
What I'm thinking of is having a separate process read the license key and check the availability of that application. I would then need to ensure that this process runs on every invocation of the respective application. Is there some feature of Linux that can assist with this? For example, something like the sudoers file, where I could detect which user is launching which application, and if a combination is matched, run the license-check process first.
Or can I do something like not allow the user to launch the (command-line) application by itself, and force them to pipe it to my license process like so:
/usr/bin/tm | license_process // whereas '/usr/bin/tm' would fail on its own
I need to employ some sort of licensing on some Linux applications
Please note that license checks will generally cost you way more (in support and administration) than they are worth: anybody who wants to bypass the check and has a modicum of skill will do so, and if he can't, he wasn't going to pay for the license anyway (that is, by not implementing a licensing scheme you are generally not leaving any money on the table).
whose code base I don't have access to.
That makes your task pretty much impossible: the only effective copy-protection schemes require that you rebuild your entire application and make it check the license in so many distinct places that the would-be attacker gets bored and goes away. You can read about such schemes here.
I'm thinking is having a separate process read the license key and check for the availability of that application.
Any such scheme will be bypassed in under 5 minutes by someone skilled with strace and gdb. Don't waste your time.
You could write a wrapper binary that does the checks and then links in the real application as part of that binary; using some dlsym tricks, you may be able to call the real main function from the wrapper's main function.
IDEA
read up on ELF-hacking: http://www.linuxforums.org/articles/understanding-elf-using-readelf-and-objdump_125.html
use ld (or objcopy --redefine-sym) to rename the main function of the program you want to protect: http://fixunix.com/aix/399546-renaming-symbol.html
write a wrapper that does the checks and uses dlopen and dlsym to call the real main.
link together real application with your wrapper, as one binary.
Now you have an application that has your custom checks that are somewhat hard to break, but not impossible.
I have not tested this and don't have the time to, but it would be a fun experiment.
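The steps above describe a C wrapper linked into one binary; purely to illustrate the dlopen/dlsym step, here is the same idea sketched with Python's ctypes, assuming (hypothetically) the protected program has been relinked as a shared object tm_real.so with its main renamed to real_main, and with license_is_valid() standing in for whatever check you devise:

import ctypes, sys

if not license_is_valid():                # hypothetical: whatever check you devise
    sys.exit("license check failed")

app = ctypes.CDLL("./tm_real.so")         # hypothetical: the relinked application
argv = (ctypes.c_char_p * (len(sys.argv) + 1))(
    *[a.encode() for a in sys.argv], None)
sys.exit(app.real_main(len(sys.argv), argv))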
