How to limit privileged user access at the Linux kernel level?

I found this answer on learning Linux kernel programming, but my question is more specific to the security features of the Linux kernel. I want to know how to limit the access rights of privileged users or processes to other processes and files, in contrast to the full access that root has.
So far I have found:
user and group for Discretionary Access Control (DAC), with separate read, write and execute permissions for user, group and other
user root for higher-privileged tasks
setuid and setgid to extend a user's DAC rights by setting the user/group ID of the calling process, e.g. a user runs ping with root rights to open raw sockets
Capabilities for fine-grained rights, e.g. remove the setuid bit from ping and set cap_net_raw instead
Control Groups (cgroups) to limit access to resources, i.e. CPU, network, I/O devices
Namespaces to separate a process's view of IPC, network, filesystem and PIDs
Secure Computing (seccomp) to limit system calls (see the sketch after this list)
Linux Security Modules (LSM) to add additional security features like Mandatory Access Control, e.g. SELinux with Type Enforcement
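For example, a process can irreversibly restrict itself to a handful of system calls using seccomp's strict mode. This is only a minimal sketch, assuming Linux 2.6.23 or later; the BPF filter mode added later is the more flexible variant:
/* Minimal sketch: seccomp strict mode. After the prctl() call only
 * read(), write(), _exit() and sigreturn() are permitted; any other
 * system call kills the process with SIGKILL. */
#include <stdio.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <linux/seccomp.h>

int main(void)
{
    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0) != 0) {
        perror("prctl(PR_SET_SECCOMP)");
        return 1;
    }
    write(STDOUT_FILENO, "sandboxed\n", 10);
    _exit(0);
}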
Is the list complete? While writing this question I found fanotify, which can monitor filesystem events, e.g. for anti-virus scans. There are probably more security features available.
Are there any more Linux security features that could be used in a programmable way, from inside or outside of a file or process, to limit privileged access? Perhaps there is a complete list somewhere.

The traditional Unix way to limit a process that somehow needs more privileges, and yet contain it so that it cannot use more than it needs, is to "chroot" it.
chroot changes the apparent root of a process. Done right, the process can only access resources inside that newly created chroot environment (a.k.a. chroot jail);
e.g. it can only access those files, but also only those devices, etc.
Creating a process that does this willingly is relatively easy, and not that uncommon.
Creating an environment in which an existing piece of software (e.g. a web server, mail server, ...) feels at home and still functions properly is something that requires experience. The main task is to find the minimal set of resources it needs (shared libraries, configuration files, devices, dependent services such as syslog, ...).
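As a rough sketch of the "willing" case, assuming a jail at /var/jail that already contains everything the program needs (the path, UID/GID and server binary are placeholders):
/* Sketch: confine this process to /var/jail, then drop root.
 * Dropping root afterwards is essential; a root process can escape
 * a chroot jail fairly easily (e.g. by calling chroot() again). */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    if (chroot("/var/jail") != 0 || chdir("/") != 0) {
        perror("chroot/chdir");
        return EXIT_FAILURE;
    }
    if (setgid(65534) != 0 || setuid(65534) != 0) {  /* e.g. nogroup/nobody */
        perror("drop privileges");
        return EXIT_FAILURE;
    }
    execl("/bin/server", "server", (char *)NULL);    /* a path inside the jail */
    perror("execl");
    return EXIT_FAILURE;
}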

You may add:
EFS, AppArmor, Yama
auditctl, ausearch, aureport
Tools similar to fanotify:
Snort, ClamAV, OpenSSL, AIDE, nmap, GnuPG

Related

Why does unshare(CLONE_NEWNET) require CAP_SYS_ADMIN?

I'm playing with Linux namespaces and I've noticed that if a user wants to execute a process in a new network namespace (without using user namespaces), they need to be root or have the CAP_SYS_ADMIN capability.
The unshare(2) manpage says:
CLONE_NEWNET (since Linux 2.6.24)
This flag has the same effect as the clone(2) CLONE_NEWNET flag.
Unshare the network namespace, so that the calling process is moved into a new network namespace which is not shared with any previously existing process. Use of CLONE_NEWNET requires the CAP_SYS_ADMIN capability.
So if I want to execute a PDF reader in a network sandbox, I must use user namespaces together with network namespaces, or some privileged wrapper.
Why? The new process will be placed in a new network namespace with no interfaces, so it will be isolated from the real network, right? What kinds of problems or security threats do unprivileged, non-user network namespaces raise?
Creating a network namespace allows manipulating the execution environment of binaries that have the setuid flag or are otherwise privileged. User namespaces take away this possibility, because a process cannot gain privileges that are not included in the user namespace.
In general, it cannot be guaranteed that denying a privileged process access to the network causes no security vulnerability. The kernel therefore treats the operation as privileged, and it is up to the system policy to decide whether a privileged utility is provided to ordinary users.
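For the PDF-reader use case from the question, the user-namespace route looks roughly like this (a hedged sketch; it assumes the kernel allows unprivileged user namespaces):
/* Sketch: run a program with no usable network, without being root. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s program [args...]\n", argv[0]);
        return EXIT_FAILURE;
    }
    /* CLONE_NEWUSER grants CAP_SYS_ADMIN inside the new user namespace,
     * so the CLONE_NEWNET part no longer needs global privileges. */
    if (unshare(CLONE_NEWUSER | CLONE_NEWNET) != 0) {
        perror("unshare");
        return EXIT_FAILURE;
    }
    /* The new network namespace contains only a down loopback device,
     * so the program below is cut off from the real network. */
    execvp(argv[1], &argv[1]);
    perror("execvp");
    return EXIT_FAILURE;
}
Invoked as, say, ./netsandbox xpdf some.pdf, the PDF reader starts with no usable network interfaces.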

Can I load a library or process with limited permissions?

This is an imaginary example of what I'd like to do. Don't take it too literally.
Let's say my process is being run as www-data and I have a Lua script called thedevil.lua. It will try to delete, corrupt and cause as many problems as possible. I'd like to fire up a process (or load a shared object) that has a Lua interpreter, and since it runs as www-data it could ruin all my websites.
Is there a way I can say: create this process (or load this library) with LIMITED permissions? Say the script is in /var/www/devilscript/thedevil.lua; I'd like to give it permissions only for /tmp/www/devilscript and /var/www/devilscript/. Is that possible? I don't want to create a new user called devilscript, give it limited permissions and then run the process as that user. I just want to say: I am www-data, but I only want to give this process/library a subset of what I can do.
Edit: Could you give me the names of the functions used to execute the shared object or binary with lower permissions?
Edit 2: Can Windows do something like what I asked?
Yes, depending on the operating system you are running, there are various sorts of sandboxing methods available in modern Unix systems. Under Linux there are almost too many: SELinux, AppArmor, TOMOYO, and others. FreeBSD has a Mandatory Access Control system as well as the Capsicum capability system. Mac OS X has a sandboxing system as well.
Most such systems allow you to reduce the privilege that a particular process gets in a fairly granular manner. In general, capability systems are easier to work with than Mandatory Access Control (MAC) systems, but they are less frequently available.
A primitive way of doing this sort of privilege restriction in older Unix systems was "chrooting" a process, that is, running it in a restricted part of the file hierarchy using the chroot system call. Unfortunately, that remains the only truly portable form of privilege reduction available in Unix systems -- you thus encounter it in the configuration systems of many system daemons.
SELinux will allow you to create a domain that has restricted access to various file contexts and resources, regardless of the user the process is running as (even root).
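None of the standard C library calls will confine a process to just those two directories; that is what the MAC systems above (or a chroot/namespace setup) are for. What your process can do purely from code, before loading the interpreter, is shed the privileges it already holds, for example with libcap plus the no_new_privs flag. A hedged sketch (link with -lcap; PR_SET_NO_NEW_PRIVS needs Linux 3.5 or later, and the lua command line is just the example from the question):
/* Sketch: run the untrusted script in a child that has dropped all
 * capabilities and can no longer gain privileges via setuid binaries.
 * Note this does NOT restrict filesystem access by itself. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <sys/wait.h>
#include <sys/capability.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        cap_t empty = cap_init();                        /* all capabilities cleared */
        if (empty == NULL || cap_set_proc(empty) != 0)
            _exit(126);
        cap_free(empty);
        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) != 0) /* block setuid re-escalation */
            _exit(126);
        execlp("lua", "lua", "/var/www/devilscript/thedevil.lua", (char *)NULL);
        _exit(127);                                      /* exec failed */
    }
    if (pid < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }
    int status;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : EXIT_FAILURE;
}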

Lock an SSH user to their home dir & one service

I'm really new to Linux. I Googled for a couple of days, and installed Java and Tomcat on CentOS.
Now I need a user that has all privileges in their home directory (including files, subdirectories and files in subdirectories), but cannot access any other directory.
Also, this user has to have permission to manage one service (I created a tomcat service, which I can start, stop and restart).
Can anyone explain how to do this?
You've asked for a lot.
There are a few possible approaches:
Entirely with "native" Linux permissions
Using a mandatory access control system
Native Linux permissions
Create this new user their own new group. Make them the only member of this group.
Remove world read, write and execute permissions on all your data files. If any users were getting their privileges to the data files via world permissions, create new groups for all the users and data as appropriate (maybe one for accounting, one for billing, one for sales, one for engineering, etc.; whatever works).
Add a new sudoers(5) entry for this user for the sudo stop tomcat, sudo start tomcat, sudo restart tomcat, sudo status tomcat -- or whichever commands this user will need to execute to manage the tomcat service. See visudo(8) for details on editing the sudo(8) config file.
If you really want to lock this user down, copy the utilities that this person will need into their ~/bin/ directory and then remove the world execute bit on /bin, /sbin, /usr/bin, /usr/sbin. (Leave /lib, /usr/lib, etc. alone; copying in the libraries this user will need is doubtless a lot of work.)
Mandatory access controls
I'll explain this using the AppArmor system; I've worked on AppArmor for over a decade, and it is the system I know best. There are more choices: TOMOYO, SMACK, and SELinux are all excellent tools. AppArmor and TOMOYO work on the idea that you confine access to pathnames. SMACK and SELinux work on the idea that every object on the system is assigned a label and the policy specifies which labels (on processes) can read, write, execute, etc. labels (on data or other processes). If you wanted to enforce a comprehensive Open, Classified, Secret, Top Secret style of protection, SMACK or SELinux would be the better tools. If you want to confine some programs to some files, AppArmor or TOMOYO would be the better tools.
AppArmor should come ready-to-use on most Ubuntu, SUSE, PLD, Annvix, Mandriva, and Pardus distributions.
The AppArmor system confines processes and controls how processes can move from domain to domain when the processes execute new programs. Domains are usually assigned by program.
The easiest way to get started is to copy /bin/bash to /bin/jail_bash (or some other name not in /etc/shells), set the shell for the user in /etc/passwd (chsh(1) can make this easy), and create an AppArmor profile for /bin/jail_bash that allows only the actions you want to allow. If we confine the process correctly, then the user cannot escape the profile we make for them.
Add a new sudoers(5) entry for this user for the sudo stop tomcat, sudo start tomcat, sudo restart tomcat, sudo status tomcat -- or whichever commands this user will need to execute to manage the tomcat service. See visudo(8) for details on editing the sudo(8) config file.
In one terminal, run aa-genprof jail_bash. In another terminal, log in as the user (or otherwise run /bin/jail_bash) and begin doing tasks that you want to allow the person to do. We'll use what you do as training material to build a profile iteratively. You might be interested to watch /var/log/syslog or /var/log/audit/audit.log (if you have the auditd package installed) to see what operations AppArmor notices your program doing. Don't do too much at once -- just a few new things per iteration.
In the aa-genprof terminal, answer the questions as they come up. Allow what needs to be allowed. Deny what ought to be denied. When you are asked about execution privileges, prefer inherit or child over profile. (The profile option will influence everyone else on the system; inherit or child will only influence executions from whatever profile you're currently working on improving. Child breaks apart privileges into smaller pieces, while inherit keeps permissions in larger profiles. Prefer inherit for this case.)
Once you get to questions about executing tomcat, use the unconfined execute privilege. This is dangerous -- if a bug in the way tomcat is started allows people to start unconfined shells, then this can be used to break out of the jail. You could confine tomcat (and this is even a good idea -- tomcat isn't perfect) to prevent this from being an escape route, but that is probably not necessary right away.
AppArmor is designed to make it easy to grow the profiles on a system over time. AppArmor isn't applicable to all security situations, but we deployed scenarios very similar to this at the DEF CON Capture-the-flag hacking contest with excellent results. We had to allow fellow attackers root (and ephemeral user accounts) access to the machine via telnet, as well as POP3, SMTP, HTTP+CGI, and FTP.
Be sure to hand-inspect the profiles in /etc/apparmor.d/ before allowing your user to log in. You can fix anything you want with a plain text editor; run /etc/init.d/apparmor restart to reload all profiles (and unload the profiles you might remove).
It's handy to keep an unconfined root sash(1) shell open when you're first learning how to configure AppArmor. If you ignore the warning about programs that shouldn't have their own profile, it might be difficult to get back into your own system. (Don't forget about booting with init=/bin/sh in the worst of situations.)
You can easily create a very restricted environment by starting bash in restricted mode. Set the user's shell to rbash instead of bash, and that will put it into restricted mode.
http://www.gnu.org/s/bash/manual/html_node/The-Restricted-Shell.html
There's a chance that rbash will be too restrictive for your needs. Among other things, the restricted environment forbids changing directories. But take a look at it and see if it's sufficient.

How do I disable SELinux for a subprocess launched from Apache?

My Apache module launches a helper subprocess which does, among other things, the following:
It sets up a socket so that it can communicate with Apache.
It reads and writes files in a temporary location that is deleted when Apache exits. These files are used e.g. for storing large amounts of data received over the network, in case that data does not comfortably fit in RAM.
It spawns user-specified executables, similar to CGI. Each of these spawned processes is run as its own dedicated user.
The helper subprocess is launched as root so that it can manage file ownerships and permissions and can spawn more processes as specific users.
Some users of my module run it on systems with SELinux enabled, e.g. Red Hat-based distros. SELinux usually interferes with my module. Until now I've been telling people to disable SELinux system-wide because I can't figure out how to write a proper policy for my software. The documentation is scattered, complex and usually targets only system administrators, not software developers.
As a step in the right direction, I want to implement minimal support for SELinux. I'm looking for a way to launch my helper subprocess without any SELinux constraints, without disabling SELinux system-wide. Is there a way to do that, and if so, how?
Well... you could write a rule that transitions your domain to unconfined_t, but then you'd piss off quite a few sysadmins. Best to write yourself a new domain that inherits from httpd_t and also adds the appropriate contexts for access.
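If you do create such a domain, the helper can be launched into it programmatically with libselinux; a hedged sketch (my_helper_t is a made-up type name, the helper path is a placeholder, the transition must actually be allowed by your policy, and you link with -lselinux):
/* Sketch: request an SELinux domain for the next exec in a child process. */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <selinux/selinux.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* Affects only this child's next execve(); it fails cleanly if
         * SELinux is disabled or the policy forbids the context. */
        if (setexeccon("system_u:system_r:my_helper_t:s0") != 0)
            perror("setexeccon");
        execl("/usr/lib/mymodule/helper", "helper", (char *)NULL);  /* placeholder path */
        _exit(127);
    }
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    int status;
    waitpid(pid, &status, 0);
    return 0;
}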

Secure way to run other people code (sandbox) on my server?

I want to make a web service that runs other people's code locally. Naturally, I want to limit their code's access to a certain "sandbox" directory, so that they won't be able to connect to other parts of my server (DB, main webserver, etc.)
What's the best way to do this?
Run VMware/VirtualBox:
+ I guess it's as secure as it gets. Even if someone manages to "hack" it, they only hack the guest machine
+ Can limit the CPU & memory the processes use
+ Easy to set up - just create the VM
- Harder to "connect" the sandbox directory from the host to the guest
- Wasting extra memory and CPU for managing the VM
Run underprivileged user:
+ Doesn't waste extra resources
+ Sandbox directory is just a plain directory
? Can't limit CPU and memory?
? I don't know if it's secure enough
Any other way?
The server runs Fedora Core 8; the "other" code is written in Java & C++.
To limit CPU and memory, you want to set limits for groups of processes (POSIX resource limits only apply to individual processes). You can do this using cgroups.
For example, to limit memory start by mounting the memory cgroups filesystem:
# mount -t cgroup -o memory cgroup /cgroups/memory
Then, create a new sub-directory for each group, e.g.
# mkdir /cgroups/memory/my-users
Put the processes you want constrained (process with PID "1234" here) into this group:
# cd /cgroups/memory/my-users
# echo 1234 >> tasks
Set the total memory limit for the group:
# echo 1000000 > memory.limit_in_bytes
If processes in the group fork child processes, they will also be in the group.
The above group sets the resident memory limit (i.e. constrained processes will start to swap rather than using more memory). Other cgroups let you constrain other things, such as CPU time.
You could either put your server process into the group (so that the whole system with all its users falls under the limits) or get the server to put each new session into a new group.
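If the server classifies sessions itself, putting a process into a group is just a write of its PID into that group's tasks file. A sketch matching the legacy cgroup-v1 layout mounted above (my-users is the group created in the example):
/* Sketch: move the calling process into an existing cgroup. */
#include <stdio.h>
#include <unistd.h>

static int join_cgroup(const char *tasks_path)
{
    FILE *f = fopen(tasks_path, "w");
    if (f == NULL)
        return -1;
    int rc = (fprintf(f, "%d\n", (int)getpid()) > 0) ? 0 : -1;
    if (fclose(f) != 0)          /* the kernel may reject the write at flush time */
        rc = -1;
    return rc;
}

int main(void)
{
    if (join_cgroup("/cgroups/memory/my-users/tasks") != 0) {
        perror("join_cgroup");
        return 1;
    }
    /* This process and any children forked from now on are subject to
     * the group's memory.limit_in_bytes limit. */
    return 0;
}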
chroot, jail, container, VServer/OpenVZ/etc., are generally more secure than running as an unprivileged user, but lighter-weight than full OS virtualization.
Also, for Java, you might trust the JVM's built-in sandboxing, and for compiling C++, NaCl claims to be able to sandbox x86 code.
But as Checkers' answer states, it's been proven possible to cause malicious damage from almost any "sandbox" in the past, and I would expect more holes to be continually found (and hopefully fixed) in the future. Do you really want to be running untrusted code?
Reading the codepad.org/about page might give you some cool ideas.
http://codepad.org/about
Running under an unprivileged user still allows a local attacker to exploit vulnerabilities to elevate privileges.
Allowing code to execute in a VM can be insecure as well; the attacker can gain access to the host system, as a recent VMware vulnerability report has shown.
In my opinion, allowing native code to run on your system in the first place is not a good idea from a security point of view. Maybe you should reconsider letting them run native code; that would certainly reduce the risk.
Check out ulimit and friends for ways of limiting the underprivileged user's ability to DoS the machine.
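ulimit is just the shell front end to setrlimit(2); the wrapper that launches the untrusted program can set the limits itself right before exec. A sketch with placeholder numbers and path (these are per-process limits, so combine with cgroups as shown above if you need whole-group limits):
/* Sketch: cap address space and CPU time, then hand over to the
 * untrusted program. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit mem = { 256UL * 1024 * 1024, 256UL * 1024 * 1024 };  /* 256 MiB */
    struct rlimit cpu = { 10, 10 };                                    /* 10 s of CPU */

    if (setrlimit(RLIMIT_AS, &mem) != 0 || setrlimit(RLIMIT_CPU, &cpu) != 0) {
        perror("setrlimit");
        return EXIT_FAILURE;
    }
    execl("/path/to/untrusted-program", "untrusted-program", (char *)NULL);  /* placeholder */
    perror("execl");
    return EXIT_FAILURE;
}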
Try learning a little about setting up policies for SELinux. If you're running a Red Hat box, you're good to go since they package it into the default distro.
This will be useful if you know the things to which the code should not have access. Or you can do the opposite, and only grant access to certain things.
However, those policies are complicated, and may require more investment in time than you may wish to put forth.
Use Ideone API - the simplest way.
Try using LXC as a container for your Apache server.
Not sure how much effort you want to put into this, but could you run Xen like the VPS web hosts out there?
http://www.xen.org/
This would allow full root access on their little piece of the server without compromising the other users or the base system.

Resources