How to disable Node.js internal modules? - security

I am in the process of evaluating Node.js for a shared programming platform.
Users should be able to submit code and run it on the server. To give them a good foundation, several Node.js modules should be provided.
For security reasons the processes should be chrooted to forbid access to system resources.
The best approach seems to be the use of child processes, especially the fork() function.
For further security, some Node.js modules should also be disabled, such as the ability to launch additional child processes.
How can I disable these modules for a child process? I can't even find compile options to disable some of them by default.

Basically, what you are looking for is running untrusted code within a trusted environment. The key here is sandboxing, I guess.
Please note that there are various solutions out there for creating and managing sandboxes in Node.js, among others:
gf3/sandbox, a nifty JavaScript sandbox for Node.js
hflw/node-sandbox, an advanced sandboxing library that allows communication between the sandbox and the parent process
I do not have any practical experience with either of them, but I guess they are a step in the right direction for you. Maybe you would like to share your experiences with them here? I think that would be awesome :-)
Hope this helps.

Related

How to build a safe (restricted ecosystem) plugin based architecture (in Kotlin)

Primarily I tried a simple architecture: I keep a common interface in its own repo, pull it into every plugin and implement it there, then have the main app load the plugin and run it dynamically at runtime.
interface PluginI {
    val core: CoreApplication // Means of communicating with the core app; this object is passed in by the core app via the constructor
    fun version(): Double
    suspend fun load(pluginConfiguration: PluginConfiguration)
    suspend fun run()
}
But how should the plugin be restricted to some area, i.e. protected from potentially destroying, hijacking, or crashing the main app? In particular it should be restricted from using the JVM's static classes such as System; for example, a hijacking plugin could call Runtime.getRuntime().exec(), which could execute exploits in a shell.
Sandboxing isn't the solution, or is it? It seems to break the connection between the main app and the plugin.
I am searching for a solution that exposes only a shared object through which the main app communicates with the plugin, sending it only information that can't hurt the runtime, such as the date/time.
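The "shared object" idea above can be sketched in plain Java: the plugin only ever receives a narrow facade, never the core application itself. The names here (CoreFacade, ClockPlugin, FacadeDemo) are hypothetical, and note that this alone does not stop a plugin from calling JVM statics directly; it only narrows the intended API surface.

```java
import java.time.Instant;

// The narrow "shared object": the only thing a plugin is meant to see.
interface CoreFacade {
    Instant now();            // safe, read-only information
    void log(String message); // output is routed through the core app
}

interface Plugin {
    void run(CoreFacade core);
}

// A well-behaved plugin that uses only the facade.
class ClockPlugin implements Plugin {
    @Override
    public void run(CoreFacade core) {
        core.log("plugin started at " + core.now());
    }
}

public class FacadeDemo {
    public static void main(String[] args) {
        CoreFacade facade = new CoreFacade() {
            public Instant now() { return Instant.now(); }
            public void log(String m) { System.out.println("[core] " + m); }
        };
        new ClockPlugin().run(facade);
    }
}
```

The design choice here is capability passing: the plugin can only do what the facade lets it do, which is why the answers below focus on also blocking the escape hatches (statics, reflection, class loading) that bypass the facade.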
Creating a sandboxed environment in Java is pretty much impossible. With Java 9+ you can use modules to make this a bit easier, especially if you want to allow some kind of reflection.
But allowing reflection otherwise is really hard and risky, and everything you do should definitely work as a whitelist; blacklists just "don't work" (that is, I don't believe anyone will be able to find all the things to block and remember to keep the list updated when updating any of your dependencies).
Java has a built-in system for this kind of thing called SecurityManager: your application should set up its own SecurityManager and use it to filter every method call, allowing only invocations of methods from the same plugin the call came from, blocking all reflection, blocking changes to the security manager, and blocking any manual class loading.
Additionally, Java 9 modules can be used to simplify this setup, as modules can block calls between modules while not restricting reflection on classes from the same module. But they cannot fully replace the security manager.
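A minimal sketch of the SecurityManager approach, assuming a pre-Java-17 JVM (SecurityManager is deprecated for removal since Java 17, where it must be enabled with -Djava.security.manager=allow). For brevity this sketch blacklists only the escape hatches named above; per the advice above, real code should whitelist instead.

```java
import java.security.Permission;

public class PluginSecurityManager extends SecurityManager {
    @Override
    public void checkPermission(Permission perm) {
        String name = perm.getName();
        // Sketch only: block the known escape hatches, allow everything else.
        // A real whitelist would deny by default.
        if ("setSecurityManager".equals(name)          // swapping the manager out
                || "suppressAccessChecks".equals(name) // setAccessible(true)
                || "createClassLoader".equals(name)) { // manual class loading
            throw new SecurityException("blocked: " + name);
        }
    }

    @Override
    public void checkExec(String cmd) {
        // Blocks Runtime.getRuntime().exec(...) from plugin code.
        throw new SecurityException("process launch blocked: " + cmd);
    }

    public static void main(String[] args) {
        try {
            System.setSecurityManager(new PluginSecurityManager());
        } catch (UnsupportedOperationException e) {
            // Java 17+ without -Djava.security.manager=allow
            System.out.println("SecurityManager unavailable on this JVM");
            return;
        }
        try {
            Runtime.getRuntime().exec("sh"); // the hijack from the question
        } catch (SecurityException | java.io.IOException e) {
            System.out.println("denied: " + e.getMessage());
        }
    }
}
```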
Also remember that no matter what you do, someone can still make your application unstable, especially if you allow any kind of I/O or plugin-managed threading. Even without those, a plugin can still allocate a huge amount of memory or just run an infinite loop - or both at the same time ;)
And there is no way in Java to force a thread to stop or to limit the amount of memory allocated by some piece of code.
The only partial solution I can see would be to use a Java agent to instrument the code of each plugin and add calls that check whether the thread was interrupted inside any code that could run for too long.
The same would be possible for allocations.
But then you also need to be 100% sure that none of the whitelisted methods can loop or allocate too much at once.
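The cooperative-interruption idea can be sketched without an agent by writing the check the agent would inject by hand (the Watchdog name and the 200 ms budget are illustrative):

```java
public class Watchdog {
    public static void main(String[] args) throws InterruptedException {
        Thread plugin = new Thread(() -> {
            long n = 0;
            // An agent would inject this interrupt check into plugin loops;
            // without it, interrupt() would have no effect on a busy loop.
            while (!Thread.currentThread().isInterrupted()) {
                n++; // stand-in for plugin work
            }
            System.out.println("plugin stopped cooperatively");
        });
        plugin.start();
        Thread.sleep(200);  // the time budget granted to the plugin
        plugin.interrupt(); // request a stop; the JVM cannot force one
        plugin.join(1000);
        System.out.println("plugin alive: " + plugin.isAlive());
    }
}
```

This is exactly why the approach is only partial: a plugin whose loop never reaches an injected check (or that swallows the interrupt flag) keeps running.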
Basically: don't.
It's only a good solution (without the agent) if you can trust the plugin a bit and just want to set some rules to avoid bad practices. It will not stop someone who wants to break something, but it will stop a typical programmer from writing non-clean code.
If you need to run untrusted code, run it in an actual sandbox at the OS level - look at Sphere Engine, for example - and only communicate with it in some safe way.
Or use another language that lets you do the same as above but much more easily, like Lua: http://lua-users.org/wiki/SandBoxes
You can then run such a language from Java using a script engine: https://github.com/luaj/luaj
But even then it's hard to be 100% sure that it will work correctly and that there are no holes someone will find and exploit; it doesn't take much if all an attacker needs is to stress CPU or memory enough to disrupt the main application.

Vala and PolicyKit

I'm creating a simple GTK+-based application in Vala which should be able to write into system directories, so it needs root access. I realize that granting full root access is a bad idea, so I need a way to gain temporary privileges.
In theory, the PolicyKit D-Bus service is the tool for the job, but I have no idea how to use it, let alone in Vala code. Any insight would be appreciated.
update:
I have done some further digging. My starting point was this. So basically what I need is to find out how to adapt these solutions to PolicyKit. For this, it is necessary to find the D-Bus interface of PolicyKit, which I found here. (Strangely, I didn't find it in my local /usr/share/dbus-1/interfaces folder.) But now I have no idea how to continue.
The polkit Reference Manual contains some good information, including a high-level overview on writing polkit applications.
Instead of using the D-Bus interface directly, you should probably consider the libpolkit-gobject-1 library. You can use the GIR directly, or generate a VAPI (which I would recommend) with vapigen; here is one I just generated. I'm not really familiar with the API, but it is easy to use the C API reference to figure out the Vala API.

Sandboxing a program using WinAPI hooks

I'd like to sandbox native code by hooking WinAPI and system functions to block or allow the program to perform operations such as reading/writing files, modifying the Windows registry, or using an Internet connection. Is this a good and secure approach? How difficult would it be for that program to bypass such a security layer?
I've checked your questions, and they all relate to a task that seems invalid from the very beginning, and here's why: you are trying to secure one application and are ready to reinvent the wheel to do it. Several approaches (and many ready-made solutions) already exist for your problem, so instead of coding you should look at existing solutions.
The approaches are:
1. use Windows permissions to restrict your application's access to resources
2. take VMware, Parallels, or another virtualization platform and run your program there
3. take a sandboxing SDK (such as BoxedApp) and "wrap" your application.
+1 to Hans; however, if you are really into it, I can recommend EasyHook. I have personally used it successfully on Windows XP, Vista, and 7. I don't know how bypassable it is, but other alternatives exist: madshi's hooks and, if you want to go the official way, Detours from Microsoft.
Antivirus apps try to solve nearly the same problem, without much success.
1. You'd never know how even the most common operations can be abused.
2. There are syscalls, so the program doesn't have to use the WinAPI at all.

What is sandboxing?

I have read the Wikipedia article, but I am not really sure what it means, or how similar it is to version control.
It would be helpful if somebody could explain in very simple terms what sandboxing is.
A sandpit or sandbox is a low, wide container or shallow depression filled with sand in which children can play. Many homeowners with children build sandpits in their backyards because, unlike much playground equipment, they can be easily and cheaply constructed. A "sandpit" may also denote an open pit sand mine.
Well, a software sandbox is no different from a sandbox built for a child to play in. By providing a sandbox to a child we simulate a real playground (in other words, an isolated environment), but with restrictions on what the child can do - because we don't want the child to get hurt or to cause trouble for others. :) Whatever the reason, we just want to put restrictions on what the child can do, for security reasons.
Now, coming to our software sandbox: we let any software (the child) execute (play), but with some restrictions on what it can do, so we can feel safe and secure about what the executing software does.
You've seen and used antivirus software, right? It is also a kind of sandbox. It puts restrictions on what any program can do: when malicious activity is detected, it stops the program and informs the user that "this application is trying to access such-and-such resources; do you want to allow it?".
Download a program named Sandboxie and you can get hands-on experience of a sandbox; using it you can run any program in a controlled environment.
Sandboxie's website illustrates this with an animation (not reproduced here): red arrows indicate changes flowing from a running program into your computer. A box labeled "Hard disk (no sandbox)" shows changes made by a program running normally, while a box labeled "Hard disk (with sandbox)" shows changes made by a program running under Sandboxie. The animation illustrates that Sandboxie intercepts the changes and isolates them within a sandbox, depicted as a yellow rectangle, and that grouping the changes together makes it easy to delete all of them at once.
Now, from a programmer's point of view, a sandbox restricts the API available to the application. In the antivirus example, we limit system calls (the operating system's API).
Another example would be online coding arenas like TopCoder. You submit code (a program), but it runs on the server. For the safety of the server, they must limit the program's level of API access; in other words, they need to create a sandbox and run your program inside it.
If you have a proper sandbox you can even run a virus-infected file, stop all the malicious activity of the virus, and see for yourself what it is trying to do. In fact, this would be the first step for an antivirus researcher.
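One ingredient of such a server-side sandbox can be sketched as follows: run the submitted program in a separate OS process with a time limit. This is a minimal sketch assuming a Unix-like system (the sleep command stands in for a submission that runs too long); a real judge would also restrict filesystem, network, and memory access.

```java
import java.util.concurrent.TimeUnit;

public class TimeoutRunner {
    public static void main(String[] args) throws Exception {
        // Stand-in for untrusted submitted code: a process that runs 10 s.
        Process p = new ProcessBuilder("sleep", "10").start();

        // Give it a 2-second budget, then kill it if it is still running.
        boolean finished = p.waitFor(2, TimeUnit.SECONDS);
        if (!finished) {
            p.destroyForcibly();
            System.out.println("killed: time limit exceeded");
        } else {
            System.out.println("finished with exit code " + p.exitValue());
        }
    }
}
```

Running the code in a child process, rather than in the server's own process, is what makes a forced kill possible at all; as the JVM plugin discussion above notes, an in-process thread cannot be forcibly stopped.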
One definition of sandboxing basically means having test environments (developer integration, quality assurance, staging, etc.). These test environments mimic production but do not share any production resources; they have completely separate servers, queues, databases, and other resources.
More commonly, I've seen sandboxing refer to something like a virtual machine -- isolating some running code on a machine so that it can't affect the base system.
For a concrete example: suppose you have an application that deals with money transfers. In the production environment, real money is exchanged. In the sandboxed environment, everything runs exactly the same, but the money is virtual. It's for testing purposes.
Paypal offers such a sandboxed environment, for example.
For the "sandbox" in software development, it means developing in an isolated way without disturbing others.
It is not similar to version control, but some version-control methods (such as branching) can help create sandboxes.
More often, though, we mean the other kind of sandbox.
In any case, a sandbox usually means an isolated environment: you can do anything you like in the sandbox, but its effects won't propagate outside it. In software development, for instance, that means you don't need to mess with the files in /usr/lib to test your library.
A sandbox is an isolated testing environment that enables users to run programs or execute files without affecting the application, system, or platform on which they run. Software developers use sandboxes to test new programming code, and cybersecurity professionals use them to test potentially malicious software. Without sandboxing, an application or other system process could have unlimited access to all the user data and system resources on a network.
Sandboxes are also used to safely execute malicious code to avoid harming the device on which the code is running, the network, or other connected devices. Using a sandbox to detect malware offers an additional layer of protection against security threats, such as stealthy attacks and exploits that use zero-day vulnerabilities.
The main article is here.

Common Lisp: What's the best way to use libraries in a shared hosting environment?

I was thinking about this the other day and wanted to see what the SO community had to say about the subject.
As it stands right now Common Lisp is getting some attention as a web development platform, and with good reason (of which I'm sure you are already convinced).
I was wondering how one would go about using a library in a shared environment in a similar fashion to PHP.
If I set up something like SBCL as an interpreter to run FASL files, the way Python or PHP scripts are run, what would be the best way to use libraries (like CLSQL, for instance)?
Most come as ASDF-installable libraries, but it would be a stupid amount of overhead to require and install the library on each and every request.
Keeping in mind this is for shared hosting, would it be best to:
1) Install system-wide copies of the libraries for applications to use; this reduces space, but there may be problems with using the correct version of a library.
2) Allow users (through a control panel) to install local copies for themselves; more space, but no version problems.
3) Tell them to wrap it into a module and load it on demand, like Python does (I'm not sure if/how this can be done with Lisp). Just being able to load a library for use would be the best option, but I don't think many of them are designed to be used this way.
Anyways, looking to hear your opinions, thanks.
There are two ways I would look at it:
1) Start a Lisp for each request. In this case it would be much better for the Lisp to be a saved image with all necessary libraries and data already loaded, but this approach does not look very promising to me.
2) Run a Lisp and let a frontend (web browser, another web server, ...) connect to it. This way you can either start a saved image, or a Lisp that loads a bunch of stuff once and then serves the requests.
I like to use saved images/applications in deployment scenarios. They start quickly, contain all the necessary software, and are independent of library changes.
So it might be useful to provide pre-configured Lisp images that contain the necessary software, or to let the user configure and save an image.
