How is the SELinux MLS range restricted during user login?

I'm trying to understand what logic determines whether a user can log in with a particular MLS sensitivity level. At first I suspected that pam_selinux.so reads the /etc/selinux/.../seusers file to understand which user is bound to which seuser and then restricts the user to sensitivities equal to or lower than the high component of the MLS range.
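As an aside, that per-login binding is easy to inspect programmatically. Here is a minimal sketch of mine, assuming libselinux and linking with -lselinux; getseuserbyname() is effectively the same lookup pam_selinux and the login programs perform:

#include <stdio.h>
#include <stdlib.h>
#include <selinux/selinux.h>

int main(int argc, char **argv)
{
    char *seuser = NULL, *level = NULL;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <linux-login>\n", argv[0]);
        return 1;
    }
    /* Consults the seusers mapping of the active policy. */
    if (getseuserbyname(argv[1], &seuser, &level) < 0) {
        perror("getseuserbyname");
        return 1;
    }
    printf("seuser=%s default-range=%s\n", seuser, level ? level : "(none)");
    free(seuser);
    free(level);
    return 0;
}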
However, after scratching through its source code I found that, after asking the user whether they would like to change their security context from the default, pam_selinux checks that the new MLS labels are appropriate by calling into the kernel policy.
The following code is in modules/pam_selinux/pam_selinux.c from the Ubuntu libpam-modules 1.1.1-4ubuntu2 package.
static int mls_range_allowed(pam_handle_t *pamh, security_context_t src, security_context_t dst, int debug)
{
    struct av_decision avd;
    int retval;
    unsigned int bit = CONTEXT__CONTAINS;
    context_t src_context = context_new (src);
    context_t dst_context = context_new (dst);

    /* Rewrite dst so that it carries src's MLS range. */
    context_range_set(dst_context, context_range_get(src_context));
    if (debug)
        pam_syslog(pamh, LOG_NOTICE, "Checking if %s mls range valid for %s", dst, context_str(dst_context));

    /* Ask the kernel security server: does a context carrying src's
     * range "contain" the requested context dst? */
    retval = security_compute_av(context_str(dst_context), dst, SECCLASS_CONTEXT, bit, &avd);
    context_free(src_context);
    context_free(dst_context);
    if (retval || ((bit & avd.allowed) != bit))
        return 0;

    return 1;
}
So this check is actually performed against the kernel policy, via the security_compute_av() call. That turned my understanding of SELinux login on its head.
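Digging a bit further, the policy side of that check appears to be an MLS constraint on the context class. From memory, the reference policy defines it roughly as follows, where l1/h1 are the source's low/high levels and l2/h2 the target's; treat the exact expression as my assumption rather than a quote:

mlsconstrain context contains
    (( h1 dom h2 ) and ( l1 domby l2 ));

In words: "contains" is granted only when the source range covers the target range at both ends.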
So, could someone please explain:
How is the validity of a user-chosen login security level determined?
How exactly is that logic implemented in the policy, in pam_selinux, and in the kernel?
Currently, I'm not too interested in type enforcement, multi-category security, or role-based access control, so no need to explain how those components are validated if they don't affect MLS sensitivities.

Given that I also share the "SELinux folds my brain in half" problem, I think I can help. First and foremost, you need to remember the difference between discretionary access control and mandatory access control. You also need to remember that user space defines a lot of things, but the kernel gets to enforce them.
First, here is a partial list of user space versus kernel space issues:
User space defines a valid user ID, the kernel creates processes owned by that user ID (the number, not the name)
User space puts permissions and ownership on a file on an ext3/4 file system, the kernel enforces access to that file based upon the file inode and every subsequent parent directory inode
If two users share the same user ID in /etc/passwd, the kernel will grant them both the same privileges because enforcement is done by the numeric identifier, not the textual one
User space requests a network socket to another host, the kernel isolates that conversation from others on the same system
With SELinux, user space defines roles, logins, and users via semanage and loads the compiled policy into the kernel, which enforces role-based and mandatory access control (caching its decisions in the Access Vector Cache, AVC)
Also under SELinux, a security administrator can use semanage to define a minimum and maximum security context. If you are in a multi-level security (MLS) configuration and the user picks some context at login, the kernel checks it against the loaded policy (through the AVC) to determine whether it is allowed.
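If you want to poke at that kernel check in isolation, the query pam_selinux makes can be reproduced in a few lines. A hedged sketch of mine, assuming libselinux, an MLS-enabled policy, and contexts valid for your system (compile with -lselinux):

#include <stdio.h>
#include <selinux/selinux.h>
#include <selinux/context.h>

/* Mirrors pam_selinux's mls_range_allowed(): may the user-chosen
 * context `chosen` carry a range, given the assigned `deflt`? */
static int range_allowed(const char *deflt, const char *chosen)
{
    struct av_decision avd;
    security_class_t tclass = string_to_security_class("context");
    access_vector_t perm = string_to_av_perm(tclass, "contains");
    context_t def_ctx = context_new(deflt);
    context_t cho_ctx = context_new(chosen);
    int ok = 0;

    /* Build "chosen, but carrying the default's MLS range", then ask
     * the kernel whether that contains the chosen context. */
    context_range_set(cho_ctx, context_range_get(def_ctx));
    if (security_compute_av(context_str(cho_ctx), chosen, tclass, perm, &avd) == 0)
        ok = ((avd.allowed & perm) == perm);

    context_free(def_ctx);
    context_free(cho_ctx);
    return ok;
}

int main(void)
{
    /* Hypothetical contexts; substitute ones from your own policy. */
    const char *deflt  = "staff_u:staff_r:staff_t:s0-s2";
    const char *chosen = "staff_u:staff_r:staff_t:s0-s1";

    printf("allowed: %s\n", range_allowed(deflt, chosen) ? "yes" : "no");
    return 0;
}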
What would probably help this make sense is to be in a multi-level security configuration. I took the class on SELinux and we touched it for about two hours. Most people don't want to go there. Ever. I've been in an MLS configuration quite a bit, so I understand the reasoning behind the coding decision you were chasing, but I agree that tinkering with MLS is a pretty painful way to understand how and why PAM works like it does.
Discretionary Access Control (DAC) is where user space, especially non-root users, can define who can access data that they control and in what fashion. Think file permissions. Because users control it, there is a trivial amount of effort needed to allow one user to see processes and/or files owned by another user. Normally, we don't care that much because a good administrator assumes that any one user could compromise the whole box and so all users are trusted equally. This might be very little trust, but there is still some level of trust.
Mandatory Access Control (MAC) is where user space is not to be trusted. Not all users are created equal. From a non-MLS perspective, consider the case where you have a web server and a database server on the same hardware (it will never survive the Slashdot effect). The only time the two processes communicate is over a dedicated connection channel over TCP. Otherwise, they must not even know that the other exists. We would operate them under two different contexts and the kernel will enforce the separation. Even looking at the process table or wandering around the hard drive as root will not get you any closer unless you change contexts.
In an MLS configuration, I can't tell you how many times I've tried to get random combinations of sensitivity and context, only to be rebuffed for picking an invalid combination. It can be very frustrating because it takes a lot of exploring of your existing policy (/etc/selinux/policy/src/policy/policy under Red Hat 5) to know what is or is not allowed.
Under a Strict configuration I can clearly see why all of this feels like overkill; SELinux simply is overkill for simple situations. It has other features that partially redeem it though, chief among them fine-grained access control that enforces administrator-set permissions. The area where this is used the most is restricting service daemons to just the access they need. It is painful to set up a new daemon, but it keeps trivial exploitations like a shared-library exploit from going any further, because the process in question may be assigned to a role that won't let it run non-daemon commands like /bin/ls or a shell. Exploits don't do you much good in those situations.

Related

Can kernel code make things read-only in a way that other kernel code can't undo?

I'm under the impression that the Linux kernel's attempts to protect itself revolve around not letting malicious code run in kernel space. In particular, I assumed that if a malicious kernel module were to get loaded, it would be "game over" from a security standpoint. However, I recently came across a post that contradicts this belief and says there is some way the kernel can protect parts of itself from other parts of itself:
There is plenty of mechanisms to protect you against malicious modules. I write kernel code for fun so I have some experience in the field; it's basically a flag in the pagetable.
What's there to stop any kernel module from changing that flag in the pagetable back? The only protection against malicious modules is keeping them from loading at all. Once one loads, it's game over.
Make the pagetable readonly. Done.
Kernel modules can just make it read-write again, the same way your code made it read-only, then carry on with their changes.
You can actually lock this down so that kernel mode cannot modify the page table until an interrupt occurs. If your IDT is read-only as well there is no way for the module to do anything about it.
That doesn't make any sense to me. Am I missing something big about how kernel memory works? Can kernelspace code restrict itself from modifying the page table? Would that really prevent kernel rootkits? If so, then why doesn't the Linux kernel do that today to put an end to all kernel rootkits?
If the malicious kernel code is loaded in the trusted way (e.g. loading a kernel module and not exploiting a vulnerability) then no: kernel code is kernel code.
Intel CPUs do have a series of mechanisms to disable read/write access to kernel memory:
CR0.WP if set disallows write access to read-only pages even from supervisor (kernel) mode. Used to detect bugs in the kernel code.
CR4.PKE if set (4-level paging must be enabled, which is mandatory in 64-bit mode) disallows the kernel from accessing (not including instruction fetches) user-mode pages unless they are tagged with the right protection key (which marks their read/write permissions). Used to allow the kernel to write to structures like the VDSO and KUSER_SHARED_DATA but not to other user-mode structures. The key access rights live in a CPU register, not in memory; the keys themselves are in the page-table entries. (Protection keys are also visible from user space; see the sketch below.)
CR4.SMEP if set disallows kernel instruction fetching from user mode pages. Used to prevent attacks where a kernel function pointer is redirected to a user mode allocated page (e.g. the nelson.c privilege escalation exploit).
CR4.SMAP if set disallows kernel access to user-mode pages: implicit accesses are always blocked, and explicit accesses are blocked too while EFLAGS.AC=0 (this overrides the protection keys). Used to enforce a stricter no-user-mode-access policy.
Of course the R/W and U/S bits in the paging structures control if the item is read-only/read-write and assigned to user or kernel.
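As an aside, the protection-key machinery mentioned above for CR4.PKE is reachable from user space too. A hedged sketch, assuming x86 pkeys support (Linux >= 4.9 with the glibc >= 2.27 wrappers):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long pagesz = sysconf(_SC_PAGESIZE);
    char *p = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    int key;

    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    key = pkey_alloc(0, PKEY_DISABLE_WRITE);  /* fresh key, writes denied */
    if (key < 0) { perror("pkey_alloc"); return 1; }

    pkey_mprotect(p, pagesz, PROT_READ | PROT_WRITE, key);
    printf("read ok: %d\n", p[0]);            /* reads still allowed */
    /* p[0] = 1;  would SIGSEGV here: the key denies writes */

    pkey_set(key, 0);                         /* flip the rights in PKRU */
    p[0] = 1;                                 /* now fine */
    printf("write ok\n");
    return 0;
}

The access rights live in the PKRU register, so toggling them is just a register write; that is what makes the mechanism cheap.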
You can read how permissions are applied for supervisor-mode accesses in the Intel manual:
Data writes to supervisor-mode addresses. Access rights depend on the value of CR0.WP:
- If CR0.WP = 0, data may be written to any supervisor-mode address.
- If CR0.WP = 1, data may be written to any supervisor-mode address with a translation for which the R/W flag (bit 1) is 1 in every paging-structure entry controlling the translation; data may not be written to any supervisor-mode address with a translation for which the R/W flag is 0 in any paging-structure entry controlling the translation.
So even if the kernel protected a page X as read-only and then protected the page structures themselves as read-only, a malicious module could simply clear CR0.WP.
It could also change CR3 and use its own paging structures.
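To make the CR0.WP point concrete, here is a minimal sketch of the flip as a malicious x86-64 module could perform it. Recent kernels pin CR0.WP and will forcibly restore it or warn, but nothing architectural stops ring 0 from executing these MOVs:

/* Ring 0 only; a sketch, not a complete module. WP is bit 16 of CR0. */
static inline void clear_cr0_wp(void)
{
    unsigned long cr0;

    asm volatile("mov %%cr0, %0" : "=r"(cr0));
    cr0 &= ~(1UL << 16);          /* clear WP: supervisor writes to
                                     read-only pages no longer fault */
    asm volatile("mov %0, %%cr0" : : "r"(cr0) : "memory");
}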
Note that Intel developed SGX to address the threat model where the kernel itself is evil.
However, running kernel components inside enclaves in a secure way (i.e. with no single point of failure) may not be trivial.
Another approach is virtualizing the kernel with the VMX extension, though this is by no means trivial to implement either.
Finally, the CPU has four protection levels at the segmentation layer, but paging has only two: supervisor (CPL < 3) and user (CPL = 3).
It is theoretically possible to run a kernel component in "Ring 1" but then you'd need to make an interface (e.g. something like a call gate or syscall) for it to access the other kernel functions.
It's easier to run it in user mode altogether (since you don't trust the module in the first place).
I have no idea what this is supposed to mean:
You can actually lock this down so that kernel mode cannot modify the page table until an interrupt occurs.
I don't recall any mechanism by which the interrupt handling will lock/unlock anything.
I'm curious though, if anybody can shed some light they are welcome.
Security in x86 CPUs (though this can be generalized) has always been hierarchical: whoever comes first sets up the constraints for whoever comes later.
There is usually little to no protection between non-isolated components at the same hierarchical level.

Why many Linux distros use setuid instead of capabilities?

capabilities(7) are a great way to avoid giving a process full root privileges and, AFAIK, can be used instead of setuid(2). According to this and many others,
"Unfortunately, still many binaries have the setuid bit set, while they should be replaced with capabilities instead."
As a simple example, on Ubuntu,
$ ls -l `which ping`
-rwsr-xr-x 1 root root 44168 May 8 2014 /bin/ping
As you know, setting the setuid/setgid bit on a root-owned file changes the effective user ID to root. So if a setuid-enabled program contains a flaw, a non-privileged user can break out and become the equivalent of the root user.
My question is why many Linux distributions still use the setuid method when capabilities could be used instead, with fewer security concerns?
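For context, file capabilities are easy to inspect with getcap(8), or programmatically; here is a minimal libcap sketch of mine (compile with -lcap) that prints whatever is set on a binary:

#include <stdio.h>
#include <sys/capability.h>

int main(int argc, char **argv)
{
    cap_t caps;
    char *text;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    caps = cap_get_file(argv[1]);      /* NULL if none set (or on error) */
    if (!caps) {
        perror("cap_get_file");
        return 1;
    }
    text = cap_to_text(caps, NULL);    /* e.g. "cap_net_raw+ep" */
    printf("%s %s\n", argv[1], text);
    cap_free(text);
    cap_free(caps);
    return 0;
}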
This may not give the reason why some dudes somewhere decided one way or another, but some auditing tools and interfaces may not yet know about capabilities.
An example is the proc_connector netlink interface and the programs based on it (like forkstat): there are events for a process changing its credentials, but not for it changing its capabilities.
FWIW, the reason why you may not get e.g. a cap_net_raw+ep ping(8) instead of a setuid one on a Debian-like distro is that it depends on the setcap(8) utility from the libcap2-bin package already being installed before you install ping. From iputils-ping.postinst:
if command -v setcap > /dev/null; then
    if setcap cap_net_raw+ep /bin/ping; then
        chmod u-s /bin/ping
    else
        echo "Setcap failed on /bin/ping, falling back to setuid" >&2
        chmod u+s /bin/ping
    fi
else
    echo "Setcap is not installed, falling back to setuid" >&2
    chmod u+s /bin/ping
fi
Also notice that ping itself will drop any setuid privileges and switch to using capabilities on Linux upon starting, so your concerns about it may be a bit exaggerated. From ping.c:
int
main(int argc, char **argv)
{
    struct addrinfo hints = { .ai_family = AF_UNSPEC,
                              .ai_protocol = IPPROTO_UDP,
                              .ai_socktype = SOCK_DGRAM,
                              .ai_flags = getaddrinfo_flags };
    struct addrinfo *result, *ai;
    int status;
    int ch;
    socket_st sock4 = { .fd = -1 };
    socket_st sock6 = { .fd = -1 };
    char *target;

    limit_capabilities();
From ping_common.c:
void limit_capabilities(void)
{
    ...
    if (setuid(getuid()) < 0) {
        perror("setuid");
        exit(-1);
    }
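For readers unfamiliar with the pattern, here is a hedged, minimal sketch of what such a drop-to-one-capability routine looks like with libcap (compile with -lcap). This is not ping's actual code; iputils' real limit_capabilities()/drop_capabilities() pair is more elaborate:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <sys/capability.h>

static void drop_to_net_raw(void)
{
    cap_t caps;
    cap_value_t keep[] = { CAP_NET_RAW };

    /* Keep the permitted set across the uid change (the effective set
     * is still cleared by setuid). */
    if (prctl(PR_SET_KEEPCAPS, 1) < 0) { perror("prctl");  exit(1); }
    if (setuid(getuid()) < 0)          { perror("setuid"); exit(1); }

    /* Shrink to CAP_NET_RAW only, and re-raise it as effective. */
    caps = cap_init();                 /* empty capability state */
    cap_set_flag(caps, CAP_PERMITTED, 1, keep, CAP_SET);
    cap_set_flag(caps, CAP_EFFECTIVE, 1, keep, CAP_SET);
    if (cap_set_proc(caps) < 0)        { perror("cap_set_proc"); exit(1); }
    cap_free(caps);
}

int main(void)
{
    drop_to_net_raw();
    printf("now uid=%d with only CAP_NET_RAW\n", (int)getuid());
    return 0;
}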
Consider a different planet where capabilities patches were rejected, the submitter had to try harder to make a neighborly improvement, and came up with this:
filesystems can be mounted suid, nosuid, or capsuid.
if mounted capsuid, setuid bits on ${file} don't count unless .${file}.suid_capability also exists, is setuid, and is either 0-length or parseable in some standardized format:
-r-sr-xr-x. 1 carton carton 3812 Dec 27 15:39 t1
-r-Sr--r--. 1 carton carton 0 Dec 27 15:39 .t1.suid_capability
if the file is empty, setuid bit works as normal. If it has parseable contents, it works as partial-root capabilities.
if desired the .t1.suid_capability files can be annotated with fields for:
binding to a hash of their matching 't1' file
binding to an absolute path, so for example a setuid file within a chroot does not become "hot" until you chroot into it.
binding to a public key or hash nonce loaded at boot identifying the installed system like a uuid. If the private key or nonce is hidden from a backup server, traditional password resets would still work but some forms of rootkit persistence would become more difficult. It also lets you work with images of other systems in subdirectories, automatically making them effectively "nosuid"-mounted even if you are undisciplined about mount options and elect not to use the absolute path feature.
Now answer some questions about this planet relative to our own:
any missing functionality compared to our native planet?
which system works better with 'ls', 'tar', 'rsync', 'cpio', 'find', 'pax', 'mtree'? with gentoo's ebuild sandbox? with LXC?
which system works better with diskless or guest systems running on NFS or 9p roots?
which system works better with tripwire?
Now repeat those questions considering three systems instead of two:
our planet: mysterious capabilities metadata in filesystems
alternate planet using civilized Unix approach
mosvy's 'ping' example, where the binary drops privileges right after it starts
With the third system we are finally starting to lose features relative to the status quo: someone could write a hypothetical capabilities-aware tripwire that works less well in the 'ping' example. Has anyone actually done that, though? Anyone outside the NSA? Have the people forcing this complexity on us done the work to deliver the (relatively small) value of that complexity? And if that's the feature-hill we want to die on (as opposed to reducing attack surface), is there maybe a better way to get it, like dm-verity, that once again moots the complexity?
Now let's make a slight refinement to mosvy's 'ping':
crt.o or some early runtime startup code reads the sidecar file and /etc/suid_capability_hash_nonce. It drops suid if /etc/suid_capability_hash_nonce exists but the sidecar file does not (in "alternate planet" terms, the existence of /etc/suid_capability_hash_nonce makes the whole system '-o capsuid'-mounted). The early runtime startup code handles parsing the sidecars and dropping to the enumerated capabilities so that logic doesn't have to be coded into main.c one-by-one.
Honestly, I think the 'ping' example is superior because programs know what capabilities they need, and the need changes only when the program's source code changes. By setting them in a drop_privileges()-style function the need will never get out of sync. Fanned-out distributions won't have to scramble to update capability sets.
If you disagree and want the sidecar/capabilities-style system, something equivalent could have been implemented without tampering with kernel, mount options, bootup, any of it: old dracut-nfsroot initrds would keep working. It is just a matter of style.
This is all a Socratic way of saying capabilities offers nothing beyond dm-verity + 'ping'-style privileges-dropping. There is nothing "unfortunate" about eschewing complexity that's not pulling its weight.
Every few years CADT developers invent some new form of metadata to cram into filesystem directories: Finder metadata, POSIX ACLs, SELinux contexts, NFS ACLs. What will they think of next? How long will it take to update all the filesystems, all the network protocols, all the fileutils tools, all the non-Linux storage operating systems, all the fancy licensed "enterprise" tape backup suites? Will anyone ever get around to updating all that stuff, or will we just limp along with substandard tools? Will even the core feature be documented usefully, or will it be the undocumented plaything of a few annoying "distributions"? Will the feature work most of the time, but stop systems from booting unexpectedly and create chicken-and-egg recovery problems? Will the feature actually stop any attacks?
Is it fair to all the people who have to clean up after them and tolerate complexity in their own systems that made different, simpler choices, just to unbreak interop that their new feature broke?
The value delivered is particularly low in this case, but we have enough experience to set the value bar high on this type of proposal. I think it was a mistake to accept the feature into Linux and applaud any distribution that avoids it.
Capability support was really enabled in 2008 when file capabilities were added to the kernel. So, that's 13 years and counting to replace setuid. Your question is very valid.
If capability support wasn't implemented in a backwardly compatible way in Linux, it would either have been rejected at birth, or things would have surely changed by now! I suspect it all comes down to the fact that there is no economic incentive to adopt them when their benefit is only evident temporarily when a bug in some code is found to be exploitable.
I think that people are resigned to all the ways that setuid binaries seem to be exploitable, and to the point fixes that people quickly roll out when another vulnerability surfaces. Buffer overflow -> launch shell -> root exploit -> code fix -> start over. They are comfortable with the idea that user identity = privilege (i.e., root). This spills into how people describe the futility of breaking all-powerful root down into independent capabilities, when a chain of exploits can yield all privilege. Clearly, the pervasive idea that privilege and identity should be equivalent is why ambient capabilities even exist.
Capabilities, however, when used as they were intended to be used, are not identity = privilege. They are capable binaries = privilege, where the privilege is reduced relative to a user identity executing arbitrary code by the combination of those capability bits and the code in the actual program that carries them.
If you write code that edits files, it is clear it shouldn't need to worry about being directly abused for raw ethernet packet formation, or loading kernel modules. Exploiting a bug in that code may well allow a malicious edit of a file but, unlike setuid, it won't allow strange packets to be sent on the network. At least not without tricking the code of some independently capable executable into doing it by proxy.
However, no one intentionally writes buggy code, so I think the answer to your question comes down to another question: "where exactly is the benefit of limiting how exploitable an exploit is?".
I suspect that if some distribution were to figure out how to restructure their code base to eliminate setuid-root binaries, in favor of file capabilities, it would fare better (certainly no worse) over time as code exploits are found vs. other distributions clinging to setuid-root. But, until such a distribution comes into being, I can't fault the idea that that is just an opinion.

Persistent storage values handling in Linux

I have a QSPI flash on my embedded board.
I have a driver + process "Q" to handle reading from and writing to it.
I want to store variables like SW revisions, IP, operation time, etc.
I would like to ask for suggestions on how to handle the different access rights for reading and writing values from user space and other processes.
I was thinking of having a file for each variable. Then I can assign access rights to those files, and process Q can change the value in a file whenever it changes. So process Q will only write to the files, and other processes or users can only read them.
But I don't know about writing. I was thinking about using a message queue or ZeroMQ and building the software around it, but I am not sure whether that is overkill. And I am not sure how to manage access rights there anyway.
What would be the best approach? I would really appreciate if you could propose even totally different approach.
Thanks!
This question will probably be downvoted / flagged due to the "Please suggest an X" nature.
That said, if a file per variable is what you're after, you might want to look at implementing a FUSE file system that wraps your SPI driver/utility "Q" (or build it into "Q" if you can compile/control the source to "Q"). I'm doing this to store settings in an EEPROM on a current work project and it's turned out nicely. So I have, for example, a file that, when read, retrieves 6 bytes from the EEPROM (or a cached copy) and presents them as a MAC address in standard hex/colon-separated notation.
The biggest advantage here, is that it becomes trivial to access all your configuration / settings data from shell scripts (e.g. your init process) or other scripting languages.
Another neat feature of doing it this way is that you can use inotify (which comes "free", no extra code in the fusefs) to create applications that efficiently detect when settings are changed.
A disadvantage of this approach is that it's non-trivial to do atomic transactions on multiple settings and still maintain normal file semantics.
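A stripped-down sketch of the idea, assuming the libfuse 3 high-level API (compile with gcc fs.c -o fs $(pkg-config fuse3 --cflags --libs), then run ./fs <mountpoint>). It exposes one read-only file, mac, whose content is a hard-coded placeholder where the real code would call into "Q":

#define FUSE_USE_VERSION 31
#include <fuse.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

static const char *mac_path = "/mac";

/* Placeholder: a real implementation would fetch 6 bytes through the
 * "Q" driver and format them; here the value is hard-coded. */
static void read_mac(char *buf, size_t len)
{
    snprintf(buf, len, "02:00:00:aa:bb:cc\n");
}

static int fs_getattr(const char *path, struct stat *st,
                      struct fuse_file_info *fi)
{
    (void)fi;
    memset(st, 0, sizeof(*st));
    if (strcmp(path, "/") == 0) {
        st->st_mode = S_IFDIR | 0755;
        st->st_nlink = 2;
    } else if (strcmp(path, mac_path) == 0) {
        st->st_mode = S_IFREG | 0444;  /* everyone reads, nobody writes */
        st->st_nlink = 1;
        st->st_size = 18;
    } else {
        return -ENOENT;
    }
    return 0;
}

static int fs_readdir(const char *path, void *buf, fuse_fill_dir_t fill,
                      off_t off, struct fuse_file_info *fi,
                      enum fuse_readdir_flags flags)
{
    (void)off; (void)fi; (void)flags;
    if (strcmp(path, "/") != 0)
        return -ENOENT;
    fill(buf, ".", NULL, 0, 0);
    fill(buf, "..", NULL, 0, 0);
    fill(buf, mac_path + 1, NULL, 0, 0);
    return 0;
}

static int fs_read(const char *path, char *buf, size_t size, off_t off,
                   struct fuse_file_info *fi)
{
    char mac[32];
    size_t len;

    (void)fi;
    if (strcmp(path, mac_path) != 0)
        return -ENOENT;
    read_mac(mac, sizeof(mac));
    len = strlen(mac);
    if ((size_t)off >= len)
        return 0;
    if (off + size > len)
        size = len - off;
    memcpy(buf, mac + off, size);
    return size;
}

static const struct fuse_operations ops = {
    .getattr = fs_getattr,
    .readdir = fs_readdir,
    .read    = fs_read,
};

int main(int argc, char *argv[])
{
    return fuse_main(argc, argv, &ops, NULL);
}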

Limiting the File System Usage Programmatically in Linux

I was assigned to write a system call for the Linux kernel which, oddly, determines (and reduces) users' maximum transfer amount per minute (for file operations). This system call will be called lim_fs_usage and will take a parameter for the maximum number of bytes all users can access in a minute. In short, I am going to limit the bandwidth of all filesystem operations in Linux. The project also asks for choosing an appropriate method for distributing this restricted resource (file access) among the users, but I think this won't be a big problem.
I did a long, long search but could not find a method for managing file system access programmatically. I thought of mapping the hard drive into memory with mmap() and managing the memory operations, but this turned out to be useless. I also tried to find an API for the virtual file system in order to monitor and limit it, but I could not find one. Any ideas, please? Any help is greatly appreciated. Thank you in advance.
I wonder if you could do this as an IO scheduler implementation.
The main difficulty of doing IO bandwidth limitation under Linux is, by the time it reaches anywhere near the device, the kernel has probably long since forgotten who caused it.
Likewise, you can get on some very tricky ground in determining who is responsible for a given piece of IO:
If a binary is demand-loaded, who owns the IO doing that?
A mapped section of memory (demand-loaded executable or otherwise) might be kicked out of memory because someone else used too much ram, thus causing the kernel to choose to evict those pages, which places an unfair burden on the quota of the other user to then page it back in
IO operations can be combined, and might come from different users
A write operation might cause an IO sooner or later depending on how the kernel schedules it; a later schedule may mean that fewer IOs need to be done in the long run, as another write gets done to the same block in the interim; writing to an already dirty block in cache does not make it any dirtier.
If you understand all these and more caveats, and still want to, I imagine doing it as an IO scheduler is the way to go.
IO schedulers are pluggable under Linux (2.6) and can be changed dynamically - the kernel waits for all IO on the device (IO scheduler is switchable per block device) to end and then switches to the new one.
Since it's urgent I'll give you an idea off the top of my head without doing any research on its feasibility: what about inserting a hook to monitor the system calls that deal with file system access?
You might end up writing specialised kernel modules to handle the various filesystems (ext3, ext4, etc.), but as a proof of concept you can start with one. Do not forget that root has reserved blocks in memory, process space and disk for its own operations.
Managing memory operations does not sound related to what you're trying to do (but perhaps I am mistaken here).
After a long period of thinking and searching, I decided to use the "hooking" method proposed. I am thinking of creating a new system call that initializes and manages a global variable like hdd_bandwidth_limit. This variable will be used in the modified implementations of the read() and write() system calls (instead of the "count" variable). Then I will decide on the distribution of this resource, which is the real issue. Probably I will find out how many users are using the system at a given moment and divide this resource equally. It will be a Round-Robin-like distribution. But still, I am open to suggestions on this distribution issue. Should it be SJF, FCFS, or Round-Robin? Synchronization is another issue. How can I know whether a user's job is short or long? Or whether he is done with the operation or not?
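For the distribution question specifically: the simplest workable shape is a per-user byte budget that a timer refills once a minute and that the modified read()/write() paths consult and decrement. A toy user-space sketch of the bookkeeping (all names hypothetical; real kernel code would need locking and a better lookup than a linear scan):

#include <stdio.h>

#define MAX_USERS 64

struct fs_quota {
    unsigned int uid;
    unsigned long bytes_left;    /* remaining budget for this minute */
};

static struct fs_quota quotas[MAX_USERS];
static unsigned long per_minute_limit;

/* Called once a minute (e.g. from a timer): split the budget equally. */
static void fs_quota_refill(unsigned int nusers)
{
    for (unsigned int i = 0; i < nusers; i++)
        quotas[i].bytes_left = per_minute_limit / nusers;
}

/* Called from the modified read()/write(): trim `count` to what the
 * user may still transfer; 0 means "block or fail with -EDQUOT". */
static unsigned long fs_quota_charge(unsigned int uid, unsigned long count)
{
    for (unsigned int i = 0; i < MAX_USERS; i++) {
        if (quotas[i].uid == uid) {
            if (count > quotas[i].bytes_left)
                count = quotas[i].bytes_left;
            quotas[i].bytes_left -= count;
            return count;
        }
    }
    return count;                /* unknown uid: unlimited in this toy */
}

int main(void)
{
    per_minute_limit = 60UL << 20;          /* 60 MiB per minute overall */
    quotas[0].uid = 1000;
    quotas[1].uid = 1001;
    fs_quota_refill(2);

    /* A 40 MiB write by uid 1000 is trimmed to its 30 MiB share. */
    printf("granted: %lu bytes\n", fs_quota_charge(1000, 40UL << 20));
    return 0;
}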

Secure access to files in a directory identified by an environment variable?

Can anyone point to some code that deals with the security of files access via a path specified (in part) by an environment variable, specifically for Unix and its variants, but Windows solutions are also of interest?
This is a big long question - I'm not sure how well it fits the SO paradigm.
Consider this scenario:
Background:
Software package PQR can be installed in a location chosen by users.
The environment variable $PQRHOME is used to identify the install directory.
By default, all programs and files under $PQRHOME belong to a special group, pqrgrp.
Similarly, all programs and files under $PQRHOME either belong to a special user, pqrusr, or to user root (and those are SUID root programs).
A few programs are SUID pqrusr; a few more programs are SGID pqrgrp.
Most directories are owned by pqrusr and belong to pqrgrp; some can belong to other groups, and the members of those groups acquire extra privileges with the software.
Many of the privileged executables must be run by people who are not members of pqrgrp; the programs have to validate that the user is permitted to run it by arcane rules that do not directly concern this question.
After startup, some of the privileged programs have to retain their elevated privileges because they are long-running daemons that may act on behalf of many users over their lifetime.
The programs are not authorized to change directory to $PQRHOME for a variety of arcane reasons.
Current checking:
The programs currently check that $PQRHOME and key directories under it are 'safe' (owned by pqrusr, belong to pqrgrp, do not have public write access).
Thereafter, programs access files under $PQRHOME via the full value of the environment variable.
In particular, G11N and L10N are achieved by accessing files in 'safe' directories and reading format strings for printf() etc. out of the files in those directories, using the full pathname derived from $PQRHOME plus a known sub-structure (for example, $PQRHOME/g11n/en_us/messages.l10n).
Assume that the 'as installed' value of $PQRHOME is /opt/pqr.
Known attack:
Attacker sets PQRHOME=/home/attacker/pqr.
This is actually a symlink to /opt/pqr, so when one of the PQR programs, call it pqr-victim, checks the directory, it has correct permissions.
Immediately after the security checking is completed successfully, the attacker changes the symlink so that it points to /home/attacker/bogus-pqr, which is clearly under the attacker's control.
Dire things happen when the pqr-victim now accesses a file under the supposedly safe directory.
Given that PQR currently behaves as described, and is a large package (multiple millions of lines of code, developed over more than a decade to a variety of coding standards, which were frequently ignored, anyway), what techniques would you use to remediate the problem?
Known options include:
Change all formatting calls to use a function that checks the actual arguments against the format string, with an extra argument indicating the types actually passed to the function; see the sketch after this list. (This is tricky, and potentially error-prone because of the sheer number of format operations to be changed, but if the checking function is itself sound, it works well.)
Establish the direct path to PQRHOME and validate it for security (details below), refusing to start if it is not secure, and thereafter using the direct path and not the value of $PQRHOME (when they differ). (This requires all file operations that use $PQRHOME to use not the value from getenv() but the mapped path. For example, this would require the software to establish that /home/attacker/pqr is a symlink to /opt/pqr, that the path to /opt/pqr is secure, and thereafter, whenever a file is referenced as $PQRHOME/some/thing, the name used would be /opt/pqr/some/thing and not /home/attacker/pqr/some/thing. This is a large code base - not trivial to fix.)
Ensure that all directories on $PQRHOME, even tracking through symlinks, are secure (details below, again), and the software refuses to start if anything is insecure.
Hard-code the path to the software install location. (It makes testing hell, if nothing else; and for users it means they can have but one version installed, with upgrades requiring parallel running. This does not work for PQR.)
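To show what option 1's checking function might look like, here is a simplified sketch; safe_printf() and its signature argument are hypothetical names, and a production version would also have to handle length modifiers, '*' widths, positional arguments, and so on:

#include <stdarg.h>
#include <stdio.h>
#include <string.h>

/* Return 1 iff the conversions in fmt exactly match the signature,
 * e.g. sig "sd" accepts "hello %s, %d messages" but rejects "%n". */
static int format_matches(const char *sig, const char *fmt)
{
    for (; *fmt; fmt++) {
        if (*fmt != '%')
            continue;
        fmt++;
        if (*fmt == '%')                         /* literal %% is safe */
            continue;
        fmt += strspn(fmt, "-+ #0123456789.");   /* flags/width/precision */
        if (*sig == '\0' || *fmt != *sig)
            return 0;                            /* wrong or extra conversion */
        sig++;
    }
    return *sig == '\0';                         /* all expected ones seen */
}

static int safe_printf(const char *sig, const char *fmt, ...)
{
    va_list ap;
    int rc;

    if (!format_matches(sig, fmt)) {
        fprintf(stderr, "rejected unsafe format string: %s\n", fmt);
        return -1;
    }
    va_start(ap, fmt);
    rc = vprintf(fmt, ap);
    va_end(ap);
    return rc;
}

int main(void)
{
    safe_printf("sd", "Hello %s, you have %d messages\n", "pqrusr", 3);
    safe_printf("sd", "Gotcha %n\n", "pqrusr", 3);   /* rejected */
    return 0;
}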
Proposed criteria for secure paths:
For each directory, the owner must be trusted. (Rationale: the owner can change permissions at any time, so the owner must be trusted not to make changes at random that break the security of the software.)
For each directory, the group must either not have write privileges (so members of the group cannot modify the directory contents) or the group must be trusted. (Rationale: if the group members can modify the directory, then they can break the security of the software, so either they must be unable to change it, or they must be trusted not to change it.)
For each directory, 'others' must have no write privilege on the directory.
By default, the users root, bin, sys, and pqrusr can be trusted (where bin and sys exist).
By default, the group with GID=0 (variously known as root, wheel or system), bin, sys, and pqrgrp can be trusted. Additionally, the group that owns the root directory (which is called admin on MacOS X) can be trusted.
The POSIX function realpath() provides a mapping service that will map /home/attacker/pqr to /opt/pqr; it does not do the security checking, but that need only be done on the resolved path.
So, with all that as background, is there any known software which goes through vaguely related gyrations to ensure its security? Is this being overly paranoid? (If so, why - and are you really sure?)
Edited:
Thanks for the various comments.
#S.Lott: The attack (outlined in the question) means that at least one setuid root program can be made to use a format string of the (unprivileged) user's choosing, and can at least crash the program and therefore most probably can acquire a root shell. It requires local shell access, fortunately; it is not a remote attack. It requires a non-negligible amount of knowledge to get there, but I consider it unwise to assume that the expertise is not 'out there'.
So, what I'm describing is a 'format string vulnerability' and the known attack path involves faking the program out so that although it thinks it is accessing secure message files, it actually goes and uses the message files (which contain format strings) that are under the control of the user, not under the control of the software.
Option 2 works, if you write a new value for $PQRHOME after resolving its real path and check its security. That way very little of your code needs changing thereafter.
As far as keeping the setuid privileges goes, it would help if you can do some sort of privilege separation, so that any operation involving input from the real user runs under the real uid. The privileged process and the real-uid process then talk over a socketpair or something like it.
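For what it's worth, here is a hedged sketch of that approach, assuming POSIX; is_trusted_dir() is a stub where the question's owner/group/mode rules would go, and the usual TOCTOU caveat applies (the directories must stay attacker-unwritable for the check to keep meaning anything):

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>

/* Stub: apply the owner/group/mode criteria from the question here. */
static int is_trusted_dir(const char *path)
{
    struct stat st;

    if (lstat(path, &st) < 0 || !S_ISDIR(st.st_mode))
        return 0;
    return (st.st_mode & S_IWOTH) == 0;   /* plus uid/gid checks in real code */
}

static int secure_pqrhome(void)
{
    const char *env = getenv("PQRHOME");
    char real[PATH_MAX], probe[PATH_MAX];
    char *p;

    if (!env || !realpath(env, real))     /* resolves away the symlink game */
        return -1;

    /* Check every directory on the resolved path, top-down. */
    strncpy(probe, real, sizeof(probe) - 1);
    probe[sizeof(probe) - 1] = '\0';
    for (p = probe + 1; *p; p++) {
        if (*p == '/') {
            *p = '\0';
            if (!is_trusted_dir(probe))
                return -1;
            *p = '/';
        }
    }
    if (!is_trusted_dir(real))
        return -1;

    return setenv("PQRHOME", real, 1);    /* overwrite with the safe value */
}

int main(void)
{
    if (secure_pqrhome() != 0) {
        fprintf(stderr, "PQRHOME missing or insecure; refusing to start\n");
        return 1;
    }
    printf("PQRHOME pinned to %s\n", getenv("PQRHOME"));
    return 0;
}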
Well, it sounds paranoid, but whether it is or not depends on which system(s) your application runs on and how much damage an attacker can do.
So, if your userbase is possibly hostile and the potential damage is very high, I'd go for option 4, but modified as follows to remove its drawbacks.
Let me quote two relevant things:
1) The programs currently check that $PQRHOME and key directories under it are 'safe' (owned by pqrusr, belong to pqrgrp, do not have public write access).
2) Thereafter, programs access files under $PQRHOME via the full value of the environment variable.
You don't need to actually hard-code the full path; you can hard-code just the relative path from the "program" mentioned in 1) to the path mentioned in 2) where the files are.
Issue to control:
a) you must be sure that there isn't anything "attacker-accessible" (e.g. in terms of symlinks) between the executable's path and the files' path
b) you must be sure that the executable checks its own path in a reliable way, but this should not be a problem on any of the Unixes I know (though I don't know all of them, and I don't know Windows at all).
EDITED after the 3rd comment:
If your OS supports /proc, the symlink /proc/${pid}/exe is the best way to solve b)
EDITED after sleeping on it:
Is the installation a "safe" process? If so, you might create (at installation time) a wrapper script. This script should be executable but not writable (and possibly neither readable). It would set the $PQRHOME env var to the "safe" value and then call your actual program (it might eventually do other useful things too). Since in UNIX the env vars of a running process cannot be changed by anything else but the running process, you are safe (of course the env vars can be changed by the parent before the process starts). I do not know if this approach works in Windows, though.
