This is an imaginary example of what I'd like to do. Don't take it too literally.
Let's say my process is running as www-data and I have a Lua script called thedevil.lua. It will try to delete, corrupt, and cause as many problems as possible. I'd like to fire up a process (or load a shared object) that has a Lua interpreter, and it will try to ruin all my websites, since it runs as www-data.
Is there a way I can say: let's create this process (or load this library) with LIMITED permissions? Say the script is in /var/www/devilscript/thedevil.lua. I'd like to give it permissions only for /tmp/www/devilscript and /var/www/devilscript/. Is that possible? I don't want to create a new user called devilscript, give it limited permissions, and then run the process as that user. I just want to say: I am www-data, but I only want to give this process/lib a subset of what I can do.
-edit- Could you give me the names of the functions used to execute the said .so or binary with lower permissions?
-edit2- Can Windows do something like I asked?
Yes, there are various sorts of sandboxing methods available in modern Unix systems; which ones you can use depends on the operating system you are running. Under Linux there are almost too many: SELinux, AppArmor, Tomoyo, and others. FreeBSD has a Mandatory Access Control system as well as the Capsicum capability system. Mac OS X has a sandboxing system as well.
Most such systems allow you to reduce the privilege that a particular process gets in a fairly granular manner. In general, capability systems are easier to work with than Mandatory Access Control (MAC) systems, but they are less frequently available.
A primitive way of doing this sort of privilege restriction in older Unix systems was "chrooting" a process, that is, running it in a restricted part of the file hierarchy using the chroot system call. Unfortunately, that remains the only truly portable form of privilege reduction available in Unix systems -- you thus encounter it in the configuration systems of many system daemons.
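For the Lua example above, a minimal chroot sketch could look like this (the jail path is made up, chroot itself must be run as root, and the Lua interpreter plus the libraries it links against would have to be copied into the jail as well):
# build a throwaway jail and drop the script into it
mkdir -p /var/jail/devilscript
cp /var/www/devilscript/thedevil.lua /var/jail/devilscript/
# run the interpreter with its root set to the jail, still as www-data;
# /usr/bin/lua here refers to a copy of lua *inside* the jail
chroot --userspec=www-data:www-data /var/jail /usr/bin/lua /devilscript/thedevil.lua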
SELinux will allow you to create a domain that has restricted access to various file contexts and resources, regardless of the user the process is running as (even root).
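On such a system, the policycoreutils sandbox utility also gives you a ready-made restricted domain without writing policy yourself; a rough sketch for the example above (the directories come from the question, and whether lua behaves under the stock sandbox_t domain is an assumption):
# -M gives the sandbox private mounts; the -H and -T directories are relabeled and
# mounted as the sandbox's home and /tmp, and sandbox_t denies network access
sandbox -M -H /var/www/devilscript -T /tmp/www/devilscript /usr/bin/lua thedevil.lua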
I found this answer on learning Linux kernel programming, and my question is more specifically about the security features of the Linux kernel. I want to know how to limit a privileged user's or process's access rights to other processes and files, in contrast to the full access root has.
So far I have found:
users and groups for Discretionary Access Control (DAC), with separate read, write, and execute permissions for user, group, and other
the root user for higher-privileged tasks
setuid and setgid to extend a user's DAC rights by setting the user/group ID of the calling process, e.g. a user runs ping with root rights so it can open raw sockets
Capabilities for fine-grained rights, e.g. removing the setuid bit from ping and setting cap_net_raw instead (see the sketch after this list)
Control Groups (cgroups) to limit access to resources, i.e. CPU, network, and I/O devices
Namespaces to separate a process's view of IPC, network, filesystem, and PIDs
Secure Computing (seccomp) to limit the system calls a process may make
Linux Security Modules (LSM) to add additional security features like Mandatory Access Control, e.g. SELinux with Type Enforcement
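A concrete sketch of the ping example from the capabilities item above (the binary path varies by distro):
# drop the setuid bit and grant only the capability ping actually needs
chmod u-s /bin/ping
setcap cap_net_raw+ep /bin/ping
# verify: getcap should report something like "/bin/ping = cap_net_raw+ep"
getcap /bin/ping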
Is the list complete? While writing the question I found fanotify, which monitors filesystem events, e.g. for antivirus scans. There are probably more security features available.
Are there any more Linux security features that can be used in a programmable way, from inside or outside a process, to limit privileged access? Perhaps there is a complete list.
The traditional Unix way to limit a process that somehow needs more privileges, and yet contain it so that it cannot use more than what it needs, is to "chroot" it.
chroot changes the apparent root of a process. If done right, the process can only access resources inside that newly created chroot environment (a.k.a. chroot jail).
E.g. it can only access the files in the jail, but also only those devices, etc.
Creating a program that does this to itself willingly is relatively easy, and not that uncommon.
Creating an environment that an existing piece of software (e.g. a web server, mail server, ...) feels at home in and still functions properly in is something that requires experience. The main task is to find the minimal set of resources it needs (shared libraries, configuration files, devices, dependent services such as syslog, ...).
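As a very rough sketch of what that dependency hunt looks like for a single binary (the daemon name and jail path are made up):
# populate a minimal jail with one binary plus the shared libraries ldd reports
JAIL=/srv/jail
BIN=/usr/sbin/mydaemon
mkdir -p "$JAIL/bin"
cp "$BIN" "$JAIL/bin/"
ldd "$BIN" | awk '{ print $(NF-1) }' | grep '^/' | while read -r lib; do
    mkdir -p "$JAIL$(dirname "$lib")"
    cp "$lib" "$JAIL$lib"
done
# configuration files, devices and support services (syslog, ...) still have to be added by hand
chroot "$JAIL" /bin/mydaemon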
You may add:
EFS, AppArmor, Yama
auditctl, ausearch, aureport (see the sketch below)
Tools similar to fanotify:
Snort, ClamAV, OpenSSL, AIDE, nmap, GnuPG
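As a small example of the audit tools just mentioned (the watched path and key name are arbitrary):
# watch a directory tree for writes and attribute changes, tagged with a key
auditctl -w /var/www -p wa -k www-watch
# later: list the matching events, or summarize the audit log per key
ausearch -k www-watch --start today
aureport -k --summary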
I'm trying to reimplement an existing server service in Node.js. That service can be compared to a classic FTP server: authenticated users can read/create/modify files, but are restricted to the permissions given to the matching system user name.
I'm pretty sure I can't have Node.js run as root and switch users using seteuid() or the like, since that would break concurrency.
Instead, can I let my Node.js process run as root and manually check permissions when accessing files? I'm thinking of some system call along the lines of "could user X create a file in directory Y?"
Otherwise, could I solve this by using user groups? Note that the service must be able to delete/modify a file created by the real system user, which may not set a special group just so that the service can access the file.
Running node as root sounds dangerous, but I assume there aren't many options left for you. Most FTP servers run as root too, for the same reason. However, it means you need to pay serious attention to the security of the code you are going to run.
Now to the question:
You are asking whether you can reimplement the Unix permission checks in Node.js. Yes, you can, but you should not! There is an almost 100% chance that you will leave holes or miss edge cases that the Unix core has already taken care of.
Instead, use process.setuid(id) as you mentioned. It will not defeat concurrency, but you will need to think in terms of parallel processes rather than async now. That is extra work, but it will relieve you of the headache of reinventing Unix security.
Alternatively, if all of the operations you want to carry out on the filesystem involve shell commands, then you can simply modify them to follow this pattern:
runuser -l userNameHere -c 'command'
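For example, if the authenticated user maps to the system account alice (a made-up name), a delete issued by the service could become:
# run the command with alice's UID/GID and login environment, not the daemon's
runuser -l alice -c 'rm /home/alice/shared/old-report.pdf'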
Ok, so over the past year I have built some rather complex automation scripts (mostly bash, but with some perl here and there) for some of the more common work we do at my place of business. They rely heavily on ImageMagick, Ghostscript, and PhantomJS to name just a few. They also traverse a huge number of directories across the network and several different file systems and host OSs... Frankly the fact that they work is a bit of a miracle and perhaps a testament to my willingness to keep beating my head against the wall... Also, trust me, this is easier and more effective than trying to corral my resources. Our archives are... organic... and certain high-ranking individuals in the company think of them as belonging to them and do not look out for the interests of the company in their management. They are, at least, relatively well backed-up these days.
In any case, these scripts automate the production of a number of image-based, print-ready products of varying degrees of complexity, up to multi-hundred-page image-heavy books, and as such some of them accept absurdly complex argument structures to do all the things they do. (P.S. embedded JavaScript in SVGs is a MAGICAL thing!) These systems have been in "working beta" for a while now, which basically means I've been hand-entering the commands at a terminal to run them. I want to move them up to production and offer them as a web service, so that those in production who are not friends with the command line can use them, and also to potentially integrate them with our new custom-developed order management system.
TL;DR below
So that's the background; the problem is this:
I'm running everything on a headless CentOS 6.4 virtual machine with SELinux disabled.
Apache2 serves up my interface.sh CGI just fine, and the internet has already helped me make the POST data into shell variables. Now I need to launch the worker scripts that actually direct the heavy lifting and coordinate the binaries... from the CGI:
# get POST data from the form and turn it into shell variables...
/bin/bash /path/to/script/worker.sh "$arg1" "$arg2" "$arg3" "$arg5" "$arg6" "$opt1" "$arg7"
Nothing.
httpd log shows permission denied, fair enough!
Ok, googling suggests that the script being called by the CGI must also be owned by the apache user and group, or by root with 755 permissions. Done!
Now the httpd log shows permission denied for the things worker.sh is trying to do :/
Google has led me to believe that, for security reasons, fcgi requires that everything the CGI process chain interacts with must have correctly controlled permissions, all the way down to the binaries and source files... Sure, this is smart for security and damage control, but almost impossible in my case. We have very dynamic data and terabytes of resources... :/
The script worker.sh exports its own environment and runs all its commands as root. This is largely to overcome the minefield of permissions disasters that I have to contend with and CentOS's own paranoia about allowing stuff to happen. I had hoped this might be a workaround, but no.
One suggestion I have seen is to simply write out the composed command to a text file and have cron or incron do something with the text file. Seems like that would work... BUT, I'd love to be able to get STDIO back into my web page as there are verbose errors and notifications (though no interaction) in many of these worker scripts, and I would like to provide notification of completion as well. Is there any way to do this that doesn't require a permissions war to be waged?
To run certain commands as another user, you can use sudo.
Set up sudo to allow the apache user to run your command without a password. Then you can have the CGI script call sudo /path/to/script args to run it as root (or use -u to run it as another user of your choice).
It's very hard to make this secure, so you should make sure your CGI script is only accessible by trustworthy individuals.
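A rough sketch of that setup (the sudoers.d file name is arbitrary, the worker path is the one from the question, and on CentOS 6 the requiretty default usually has to be relaxed for the apache user):
# allow the apache user to run exactly this one script as root, with no password
echo 'Defaults:apache !requiretty' > /etc/sudoers.d/worker
echo 'apache ALL=(root) NOPASSWD: /path/to/script/worker.sh' >> /etc/sudoers.d/worker
chmod 0440 /etc/sudoers.d/worker
visudo -cf /etc/sudoers.d/worker   # syntax-check before relying on it
# inside interface.sh, capture the worker's output so it can be echoed back to the page
sudo /path/to/script/worker.sh "$arg1" "$arg2" "$arg3" "$arg5" "$arg6" "$opt1" "$arg7" 2>&1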
My Apache module launches a helper subprocess which does, for example (but is not limited to), the following things:
It sets up a socket so that it can communicate with Apache.
Reads and writes files in a temporary location that is deleted when Apache exits. These files are used e.g. for storing large amounts of data received over the network, in case that data does not comfortably fit in RAM.
It spawns user-specified executables, similar to CGI. Each of these spawned processes is run as its own dedicated user.
The helper subprocess is launched as root so that it can manage file ownerships and permissions and can spawn more processes as specific users.
Some users of my module run on systems with SELinux installed, e.g. Red Hat-based distros. SELinux usually interferes with my module. Until now I've been telling people to disable SELinux system-wide because I can't figure out how to write a proper policy for my software. The documentation is very scattered and complex, and it usually targets only system administrators, not software developers.
As a step into the right direction, I want to implement minimal support for SELinux. I'm looking for a way to launch my helper subprocess without any SELinux constraints without disabling SELinux system-wide. Is there a way to do that, and if so, how?
Well... you could write a rule that transitions your domain to unconfined_t, but then you'd piss off quite a few sysadmins. Best to write yourself a new domain that inherits from httpd_t and also adds the appropriate contexts for access.
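A common way to bootstrap such a domain is to let the logged denials write the first draft for you; a rough sketch (the module name is made up, and the generated rules should be reviewed and trimmed, not loaded blindly):
# collect the AVC denials the helper triggered and turn them into a loadable module
grep denied /var/log/audit/audit.log | audit2allow -M myhelper
# inspect myhelper.te first, then install the compiled policy package
semodule -i myhelper.pp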
For reasons beyond the scope of this post, I want to run external (user submitted) code similar to the computer language benchmark game. Obviously this needs to be done in a restricted environment. Here are my restriction requirements:
Can only read/write to current working directory (will be large tempdir)
No external access (internet, etc)
Anything else I probably don't care about (e.g., processor/memory usage, etc).
I myself have several restrictions. A solution which uses standard *nix functionality (specifically RHEL 5.x) would be preferred, as then I could use our cluster for the backend. It is also difficult to get software installed there, so something in the base distribution would be optimal.
Now, the questions:
Can this even be done with externally compiled binaries? It seems like it could be possible, but also like it could just be hopeless.
What if we force the code itself to be submitted and compile it ourselves? Does that make the problem easier or harder?
Should I just give up on home directory protection, and use a VM/rollback? What about blocking external communication (isn't the VM usually talked to over a bridged LAN connection?)
Something I missed?
Possibly useful ideas:
rssh. Doesn't help with compiled code though
Using a VM with rollback after code finishes (can network be configured so there is a local bridge but no WAN bridge?). Doesn't work on cluster.
I would examine and evaluate both a VM and a special SELinux context.
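If you go the SELinux route, the rough shape is a dedicated file type for the job directory and a dedicated domain for the binary; both type names below are invented and would have to be defined in a custom policy module:
# label the per-job scratch directory with a type the custom domain may read/write
chcon -R -t submission_rw_t /scratch/job42
# execute the submitted binary confined to the custom domain
runcon -t submission_t ./submitted_binary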
I don't think you'll be able to do what you need with simple file system protection, because you won't be able to prevent access to the syscalls that allow access to the network, etc. You can probably use AppArmor to do what you need, though. It confines the foreign binary from within the kernel.