My first approach is a simple architecture: I keep a common interface in its own repository, pull it into every plugin and implement it there, then have the main app add the plugin and run it dynamically at runtime.
interface PluginI {
    val core: CoreApplication // handle for communicating with the core app; this object is passed in by the core app via the constructor
    fun version(): Double
    suspend fun load(pluginConfiguration: PluginConfiguration)
    suspend fun run()
}
But how can the plugin be restricted to some limited area, i.e. protected from potentially destroying, hijacking or crashing the main app? In particular it should be restricted from using any of the JVM's static entry points such as System. For example, a hijacking plugin could call Runtime.getRuntime().exec(), which could execute exploits in a shell.
Is sandboxing the solution, or not? It seems to just break the connection between the main app and the plugin.
I am looking for a solution where the main app hands the plugin a single shared object through which it communicates and sends only harmless information, such as the date/time, and nothing that can hurt the runtime.
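To make the idea concrete, here is a minimal sketch in Java of what such a restricted shared object could look like. All names here (CoreFacade, SafePlugin, HelloPlugin) are illustrative inventions for this sketch, not part of any real API:

```java
import java.time.Instant;

// Hypothetical sketch: the core app exposes only a narrow, read-only facade
// to plugins instead of a full reference to itself. The facade is the
// plugin's ONLY link to the core, so the core controls exactly what
// crosses the boundary.
interface CoreFacade {
    Instant now();            // harmless information the core is willing to share
    void log(String message); // output goes through the core, which can filter or rate-limit it
}

interface SafePlugin {
    double version();
    void run(CoreFacade core);
}

class HelloPlugin implements SafePlugin {
    public double version() { return 1.0; }
    public void run(CoreFacade core) {
        core.log("started at " + core.now());
    }
}
```

Note that this narrow interface alone does not stop a plugin from calling JVM statics such as System or Runtime directly; it only controls what the core voluntarily hands over, which is why the answer below turns to SecurityManager and modules.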
Creating a sandboxed environment in Java is pretty much impossible; with Java 9+ you can use modules to make this a bit easier, especially if you want to allow some kinds of reflection.
But allowing reflection otherwise is really hard and risky, and everything you do should definitely work as a whitelist. Blacklists just "don't work": I don't believe anyone can find all the things to exclude, and then remember to keep the list updated whenever any dependency is updated.
Java has a built-in system for this called the SecurityManager. Your application should set up its own SecurityManager and use it to filter every method call: only allow invocation of methods from the same plugin the call originated from, block all reflection, block replacing the security manager, and block any manual class loading.
Additionally, Java 9 modules can be used to simplify this setup, as modules can block calls between modules while still allowing reflection on classes from the same module. But they cannot fully replace the security manager.
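The same whitelist principle can also be applied at class-loading time rather than per-call: a plugin class loader that only resolves classes from explicitly allowed packages, so plugin code cannot even link against Runtime or reflection classes. This is a hedged sketch, not a complete sandbox (an allowed class can still hand out references to forbidden ones, which is exactly why per-call filtering is discussed above), and the whitelist entries are illustrative:

```java
import java.util.Set;

// Sketch of the whitelist principle at class-loading time: refuse to
// resolve any class whose name is not explicitly allowed. The prefixes
// below are invented for the example.
class WhitelistClassLoader extends ClassLoader {
    private static final Set<String> ALLOWED_PREFIXES = Set.of(
        "java.lang.String",  // exact class
        "java.util.",        // whole package
        "plugin.api."        // hypothetical shared plugin API package
    );

    WhitelistClassLoader(ClassLoader parent) { super(parent); }

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        // Everything not explicitly whitelisted is rejected, so
        // java.lang.Runtime, java.lang.reflect.*, etc. all land here.
        if (ALLOWED_PREFIXES.stream().noneMatch(name::startsWith)) {
            throw new ClassNotFoundException("not whitelisted: " + name);
        }
        return super.loadClass(name, resolve);
    }
}
```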
Also, just remember that no matter what you do, someone can still make your application unstable, especially if you allow any kind of I/O or the plugin's own threading. A plugin can still allocate a huge amount of memory or just run an infinite loop, or actually both at the same time ;)
And there is no way in Java to forcibly stop a thread or to limit the amount of memory allocated by some piece of code.
The only partial solution I can see would be to use a Java agent to instrument the code of each plugin and insert checks for thread interruption into any code that could run for too long.
The same would be possible for allocations.
But then you also need to be 100% sure that none of the whitelisted methods can loop or allocate too much at once.
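A rough illustration of what the injected checks would amount to, written here by hand (a real agent would weave this in via bytecode instrumentation; the class name is invented):

```java
// Hand-written version of the check a Java agent would inject into plugin
// loops: poll the interrupt flag so the host can stop a runaway plugin.
// Without such checks, an infinite loop in plugin code is unstoppable.
class CooperativePlugin implements Runnable {
    volatile long iterations = 0;

    @Override
    public void run() {
        while (true) {
            // The injected check: bail out when the host interrupts us.
            if (Thread.currentThread().isInterrupted()) {
                return;
            }
            iterations++; // stand-in for real plugin work
        }
    }
}
```

The host can then call interrupt() on the plugin's thread and join it with a timeout; without the injected polling, interrupt() would have no effect on a busy loop.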
Basically: don't.
It's only a good solution (without the agent) if you can trust the plugin somewhat and just want to set some rules to avoid bad practices. It will not stop someone who wants to break something, but it will stop a typical programmer from writing non-clean code.
If you need to run untrusted code, either run it in an actual sandbox at the OS level (look at Sphere Engine, for example) and only communicate with it in some safe way,
or use another language that lets you do the same as stated above but much more easily, like Lua: http://lua-users.org/wiki/SandBoxes
You can then run such a language from Java using a script engine: https://github.com/luaj/luaj
But even then it is hard to be 100% sure it will work correctly and that there are no holes someone will find and exploit; it does not take much if all an attacker needs is to stress the CPU or memory enough to disrupt the main application.
Is it possible to disable / bypass Unity3D main thread check when accessing methods and fields from classes in UnityEngine and UnityEditor from other threads?
If so, what are the ways to achieve this?
Does anyone know how the Unity team has implemented this check?
(I know of, and am currently using, other techniques that allow me to successfully resolve any multi-threading problems, but this question is of a rather academic nature.)
Please respond only with possible solutions, or with information on why this cannot be done.
If Unity doesn't allow you to do something on the main thread, or conversely only allows it on the main thread, assume there is at least one good reason for that.
Trying to circumvent it will almost inevitably result in problems, possibly hard-to-debug ones, because Unity is trying to protect you both from outright illegal things that would corrupt or crash the app (i.e. accessing the OpenGL context from a different thread) and from things that are very likely to lead to issues like race conditions, deadlocks, etc.
So even if there were a way to bypass these checks, doing so without access to the code and a profound understanding of multithreaded programming in the context of that code would be highly counterproductive and self-defeating, and would set you up for failures and hard-to-debug issues.
Every framework comes with a set of rules to work within. If Unity says "this method must be called on the main thread", you just follow that rule, especially because you don't have access to the engine source code and therefore cannot prove that in some specific set of circumstances it wouldn't be a problem. And if you had the source code, you could simply disable the check for those specific circumstances. Still, if the framework developers insist that this wouldn't be a good idea, you had better listen to them, because they surely know their framework far better than anyone else.
I'm currently offering an assembly compile service for some people. They can enter their assembly code in an online editor and compile it. When they compile it, the code is sent to my server with an AJAX request, gets compiled, and the output of the program is returned.
However, I'm wondering what I can do to prevent any serious damage to the server. I'm quite new to assembly myself, so what is possible when their code runs on my server? Can they delete or move files? Is there any way to prevent these security issues?
Thank you in advance!
Have a look at http://sourceforge.net/projects/libsandbox/. It is designed to do exactly what you want on a Linux server:
This project provides API's in C/C++/Python for testing and profiling simple (single process) programs in a restricted environment, or sandbox. Runtime behaviours of binary executable programs can be captured and blocked according to configurable / programmable policies.
The sandbox libraries were originally designed and utilized as the core security module of a full-fledged online judge system for ACM/ICPC training. They have since then evolved into a general-purpose tool for binary program testing, profiling, and security restriction. The sandbox libraries are currently maintained by the OpenJudge Alliance (http://openjudge.net/) as a standalone, open-source project to facilitate various assignment grading solutions for IT/CS education.
If this is a tutorial service, so the clients just need to test miscellaneous assembly code and do not need to perform operations outside of their program (such as reading or modifying the file system), then another option is to permit only a selected subset of instructions. In particular, do not allow any instructions that can make system calls, and allow only limited control-transfer instructions (e.g., no returns, branches only to labels defined within the user’s code, and so on). You might also provide some limited ways to return output, such as a library call that prints whatever value is in a particular register. Do not allow data declarations in the text (code) section, since arbitrary machine code could be entered as numerical data definitions.
Although I wrote “another option,” this should be in addition to the others that other respondents have suggested, such as sandboxing.
This method is error prone and, if used, should be carefully and thoroughly designed. For example, some assemblers permit multiple instructions on one line. So merely ensuring that the text in the first instruction field of a line was acceptable would miss the remaining instructions on the line.
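To illustrate the whitelist approach described above, here is a toy validator in Java. The instruction set and syntax rules are invented for the example; a real validator must match your actual assembler's grammar, including labels, operand checking, and the multiple-instructions-per-line case just mentioned:

```java
import java.util.List;
import java.util.Set;

// Toy validator for the instruction-whitelist approach: accept only a
// small set of mnemonics and reject anything that could reach the OS
// (syscall, int) or hide machine code in data directives (db, dd, ...).
// Purely illustrative; the whitelist and syntax handling are simplified.
class AsmWhitelist {
    private static final Set<String> ALLOWED =
        Set.of("mov", "add", "sub", "cmp", "jmp", "nop");

    static boolean isAllowed(List<String> lines) {
        for (String line : lines) {
            String code = line.split(";", 2)[0].trim(); // strip a trailing comment
            if (code.isEmpty()) continue;               // blank or comment-only line
            String mnemonic = code.split("\\s+", 2)[0].toLowerCase();
            if (!ALLOWED.contains(mnemonic)) {
                return false; // syscall, int, db/dd directives, etc. all land here
            }
        }
        return true;
    }
}
```

Note this deliberately rejects everything it does not recognize, in line with the whitelist principle: labels, macros, and section directives would all need explicit (and careful) handling before being admitted.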
Compiling and running someone else's arbitrary code on your server is exactly that, arbitrary code execution. Arbitrary code execution is the holy grail of every malicious hacker's quest. Someone could probably use this question to find your service and exploit it this second. Stop running the service immediately. If you wish to continue running this service, you should compile and run the program within a sandbox. However, until this is implemented, you should suspend the service.
You should run the code in a virtual machine sandbox because if the code is malicious, the sandbox will prevent the code from damaging your actual OS. Some Virtual Machines include VirtualBox and Xen. You could also perform some sort of signature detection on the code to search for known malicious functionality, though any form of signature detection can be beaten.
This is a link to VirtualBox's homepage: https://www.virtualbox.org/
This is a link to Xen: http://xen.org/
In some languages (Java, C# without unsafe code, ...) it is, or should be, impossible to corrupt memory: there is no manual memory management, etc. This makes it quite easy to restrict the resources available to an application (access to files, access to the network, maximum memory usage, ...), as with Java applets (Java Web Start). This is sometimes called sandboxing.
My question is: is this possible with native programs (e.g. written in a memory-unsafe language like C or C++, and without having the source code)? I don't mean a simple, bypassable sandbox or anti-virus software.
I think about two possibilities:
run the application as a different OS user and set restrictions for that user. Disadvantage: you would need many users, one for every combination of parameters and access rights?
(somehow) limit which (OS API) functions can be called
I don't know whether either possibility allows, at least in theory, full protection with no possibility of bypass.
Edit: I'm interested more in the theory; I don't care that some OS has undocumented functions, or how to sandbox a given application on a particular OS. For example, I want to sandbox an application and allow only two functions: get a char from the console and put a char to the console. How can this be done unbreakably, with no possibility of bypass?
Answers mentioned:
Google Native Client, which uses a subset of x86; in development, together with (possibly?) PNaCl, the portable native client
a full VM: obviously overkill, imagine tens of programs...
In other words: could native code (with unsafe memory access) be used within a restricted environment, e.g. in a web browser, with 100% security (at least in theory)?
Edit 2: Google Native Client is exactly what I would like: any language, safe or unsafe, running at native speed in a sandbox, even in a web browser. Everybody can use whatever language they want, on the web or on the desktop.
You might want to read about Google's Native Client which runs x86 code (and ARM code I believe now) in a sandbox.
You pretty much described AppArmor in your original question. There are quite a few good videos explaining it which I highly recommend watching.
Possible? Yes. Difficult? Also yes. OS-dependent? Very yes.
Most modern OSes support various levels of process isolation that can be used to achieve what you want. The simplest approach is to attach a debugger and break on all system calls, then filter those calls in the debugger. This, however, incurs a large performance hit and is difficult to make safe in the presence of multiple threads. It is also difficult to implement safely on OSes where the low-level syscall interface is not documented, such as Mac OS or Windows.
The Chrome browser folks have done a lot of work in this field. They've posted design docs for Windows, Linux (in particular the SUID sandbox), and Mac OS X. Their approach is effective but not totally foolproof - there may still be some minor information leaks between the outer OS and the guest application. In addition, some of the OSes require specific modifications to the guest program to be able to communicate out of the sandbox.
If some modification to the hosted application is acceptable, Google's Native Client is worth a look. It restricts the compiler's code-generation choices in such a way that the loader can prove the code doesn't do anything nasty. This obviously doesn't work on arbitrary executables, but it gives you the performance benefits of native code.
Finally, you can always run the program in question, together with an entire OS of its own, in an emulator. This approach is basically foolproof, but adds significant overhead.
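The coarsest building block of the approaches above, running the untrusted program as a separate OS process and killing it when it exceeds a time budget, can be sketched as follows. This is only one ingredient of a real sandbox (a real one would add user restrictions, syscall filtering, and memory limits); "sleep 30" stands in for the untrusted binary and assumes a Unix-like host:

```java
import java.util.concurrent.TimeUnit;

// Run an untrusted program in its own process and enforce a wall-clock
// budget. Process isolation gives the OS-level memory protection
// discussed above; the timeout handles runaway programs.
class TimedRun {
    static boolean runWithTimeout(long seconds, String... command) throws Exception {
        Process p = new ProcessBuilder(command).start();
        boolean finished = p.waitFor(seconds, TimeUnit.SECONDS);
        if (!finished) {
            p.destroyForcibly(); // hard kill on timeout
            p.waitFor();         // reap the killed process
        }
        return finished;         // false means it was killed
    }
}
```

On its own this does nothing about what the program can touch while it runs; it only guarantees the host gets control back, which is why the answers above pair it with user-level or syscall-level restrictions.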
Yes, this is possible IF the hardware provides mechanisms to restrict memory accesses. Desktop processors are usually equipped with an MMU and privilege levels, so the OS can use these to deny access to any memory address a thread should not have access to.
Virtual memory is implemented by the very same means: any access to memory currently swapped out to disk is trapped, the memory is fetched from disk, and the thread is then resumed. Virtualization takes this a little further, because it also traps accesses to hardware registers.
All the OS really needs to do is use those features properly, and it will be impossible for any code to break out of the sandbox. Of course, this is much easier said than done, mostly because the OS takes liberties in favor of performance, there are oversights in what certain OS calls can be used to do, and, last but not least, there are bugs in the implementation.
I have read the Wikipedia article, but I am not really sure what it means, and how similar it is to version control.
It would be helpful if somebody could explain in very simple terms what sandboxing is.
A sandpit or sandbox is a low, wide container or shallow depression filled with sand in which children can play. Many homeowners with children build sandpits in their backyards because, unlike much playground equipment, they can be easily and cheaply constructed. A "sandpit" may also denote an open pit sand mine.
Well, a software sandbox is no different from a sandbox built for a child to play in. By providing a sandbox to a child, we simulate a real playground (in other words, an isolated environment), but with restrictions on what the child can do, because we don't want the child to get hurt or to cause trouble for others. :) Whatever the reason, we just want to put restrictions on what the child can do, for security reasons.
Now, coming to our software sandbox: we let any software (the child) execute (play), but with some restrictions on what it can do, so we can feel safe and secure about what the executing software might do.
You've seen and used antivirus software, right? It is also a kind of sandbox: it puts restrictions on what any program can do. When malicious activity is detected, it stops the program and informs the user: "this application is trying to access such-and-such resources. Do you want to allow it?"
Download a program named Sandboxie and you can get hands-on experience with a sandbox. Using this program you can run any application in a controlled environment.
The red arrows indicate changes flowing from a running program into your computer. The box labeled Hard disk (no sandbox) shows changes by a program running normally. The box labeled Hard disk (with sandbox) shows changes by a program running under Sandboxie. The animation illustrates that Sandboxie is able to intercept the changes and isolate them within a sandbox, depicted as a yellow rectangle. It also illustrates that grouping the changes together makes it easy to delete all of them at once.
Now, from a programmer's point of view, a sandbox means restricting the API available to the application. In the antivirus example, we are limiting system calls (the operating system API).
Another example would be online coding arenas like TopCoder. You submit code (a program), but it runs on their server. For the safety of the server, they have to limit the program's level of API access; in other words, they need to create a sandbox and run your program inside it.
If you have a proper sandbox, you can even run a virus-infected file, stop all of the virus's malicious activity, and see for yourself what it is trying to do. In fact, this would be the first step for an antivirus researcher.
This definition of sandboxing basically means having test environments (developer integration, quality assurance, stage, etc). These test environments mimic production, but they do not share any of the production resources. They have completely separate servers, queues, databases, and other resources.
More commonly, I've seen sandboxing refer to something like a virtual machine -- isolating some running code on a machine so that it can't affect the base system.
For a concrete example: suppose you have an application that deals with money transfers. In the production environment, real money is exchanged. In the sandboxed environment, everything runs exactly the same, but the money is virtual. It's for testing purposes.
Paypal offers such a sandboxed environment, for example.
In software development, a "sandbox" means developing in an isolated way, without disturbing others.
It is not similar to version control, but some version control methods (such as branching) can help in making sandboxes.
More often we refer to the other sandbox.
In any case, a sandbox usually means an isolated environment. You can do anything you like in the sandbox, but its effects won't propagate outside of it. For instance, in software development, it means you don't need to mess with the stuff in /usr/lib to test your library, etc.
A sandbox is an isolated testing environment that enables users to run programs or execute files without affecting the application, system, or platform on which they run. Software developers use sandboxes to test new programming code; cybersecurity professionals, in particular, use them to test potentially malicious software. Without sandboxing, an application or other system process could have unlimited access to all the user data and system resources on a network.
Sandboxes are also used to safely execute malicious code to avoid harming the device on which the code is running, the network, or other connected devices. Using a sandbox to detect malware offers an additional layer of protection against security threats, such as stealthy attacks and exploits that use zero-day vulnerabilities.
I'm trying to write a tool which lets me inspect the state of a PowerBuilder-based application. What I'm thinking of is something like Spy++ (or, even nicer, 'Snoop' as it exists for .NET applications) which lets me inspect the object tree (and properties of objects) of some PowerBuilder-based GUI.
I did the same for ordinary (MFC-based) applications as well as .NET applications already, but unfortunately I never developed an application in PowerBuilder myself, so I'm generally thinking about two problems at this point:
Is there some API (preferably in Java or C/C++) available which lets one traverse the tree of visual objects of a PowerBuilder application? I read up a bit on the PowerBuilder Native Interface, but it seems that it is meant for writing PowerBuilder extensions in C/C++ which can then be called from the PowerBuilder script language, right?
If there is some API available - maybe PowerBuilder applications even expose some sort of IPC-enabled API which lets me inspect the state of a PowerBuilder object hierarchy without being within the process of the PowerBuilder application? Maybe there's an automation interface available, or something COM-based - or maybe something else?
Right now, my impression is that I probably need to inject a DLL into the PowerBuilder application's process and then gain access to the running PowerBuilder VM so that I can query it for the object tree. Some sort of IPC mechanism would then let me transport this information out of the PowerBuilder application's process.
Does anybody have experience with this, or can anyone shed some light on whether this has been attempted before?
Best regards,
Frerich
First, the easy answer: I think what you're trying to do has been done, sort of. Rex from Enable does what I think you're after, but IIRC from talking with the developers, it depends on code hooks built into the application.
Which leads to my suggestion: I don't think you'll be able to do what you're attempting entirely from outside the application. You can grab window handles with Windows APIs and do some basic things with them, but not as much as you want. And getting information about DataWindows with Windows APIs? Forget it.
I believe I've heard of an API like the one you're asking about, but I've never heard of anyone other than manufacturers of automated testing tools getting their hands on it. If this is true (and the quality of this information is along the lines of "heard it in the hallway"), I suspect there might be application security concerns in letting it get out. (I know you'd never want to infect my application, or poke around and find out my secrets. *grin*)
Even with hooks into the PowerBuilder VM's memory space, I'm not aware of any way to get a list of objects in memory without some PowerScript framework hooks (e.g. populating a list of object handles in every Open event and Constructor). Once you've got a window handle, you can easily traverse its control arrays (and those of its subclasses) to get a list of objects on the window, but things like handles to NVO instance variables would be problematic.
I admire the idea. I wish I had better news (other than that Rex might solve your problem without the headaches of doing it yourself). Now I'm looking forward even more to whatever eran may release! *grin*
Good luck,
Terry.
I've just created such a tool, but I cheated a bit; I was actually about to ask the same question myself on the PB newsgroups. My solution is made of two parts:
A Spy++-like tool: a stand-alone app that, like Spy++, lets you drag a target onto a control, using Windows API functions (though written in PB).
Internal infrastructure for target applications, located in the ancestor of all of the application's windows. Once given a certain (Windows) handle, it goes through the Control[] array and looks for the control whose handle matches the given one. If necessary, it also recurses into control containers such as tabs.
When the user selects a control, the spy tool first looks for its containing window using Windows API. When found, the tool sends a custom message to that window, which is then handled by the app's infrastructure. The control is then located in the PB app, and its details are finally sent back to the spy tool, which presents them to the user.
I suspect the infrastructure part could be replaced with something external, as I've seen tools that seem to be able to do that (Visual Expert, QTP). However, I haven't had the time to investigate further, and this solution was relatively easy to develop.
I've got to say, your question comes at a surprising time. See this recent question of mine. If you're interested in the tool I've created, drop me a comment.