Secure access to files in a directory identified by an environment variable?

Can anyone point to some code that deals with the security of file access via a path specified (in part) by an environment variable, specifically for Unix and its variants? Windows solutions are also of interest.
This is a big long question - I'm not sure how well it fits the SO paradigm.
Consider this scenario:
Background:
Software package PQR can be installed in a location chosen by users.
The environment variable $PQRHOME is used to identify the install directory.
By default, all programs and files under $PQRHOME belong to a special group, pqrgrp.
Similarly, all programs and files under $PQRHOME either belong to a special user, pqrusr, or to user root (and those are SUID root programs).
A few programs are SUID pqrusr; a few more programs are SGID pqrgrp.
Most directories are owned by pqrusr and belong to pqrgrp; some can belong to other groups, and the members of those groups acquire extra privileges with the software.
Many of the privileged executables must be run by people who are not members of pqrgrp; the programs have to validate that the user is permitted to run them by arcane rules that do not directly concern this question.
After startup, some of the privileged programs have to retain their elevated privileges because they are long-running daemons that may act on behalf of many users over their lifetime.
The programs are not authorized to change directory to $PQRHOME for a variety of arcane reasons.
Current checking:
The programs currently check that $PQRHOME and key directories under it are 'safe' (owned by pqrusr, belong to pqrgrp, do not have public write access).
Thereafter, programs access files under $PQRHOME via the full value of the environment variable.
In particular, G11N and L10N are achieved by accessing files in 'safe' directories, and reading format strings for printf() etc. out of the files in those directories, using the full pathname derived from $PQRHOME plus a known sub-structure (for example, $PQRHOME/g11n/en_us/messages.l10n).
Assume that the 'as installed' value of $PQRHOME is /opt/pqr.
Known attack:
Attacker sets PQRHOME=/home/attacker/pqr.
This is actually a symlink to /opt/pqr, so when one of the PQR programs, call it pqr-victim, checks the directory, it has correct permissions.
Immediately after the security checking is completed successfully, the attacker changes the symlink so that it points to /home/attacker/bogus-pqr, which is clearly under the attacker's control.
Dire things happen when the pqr-victim now accesses a file under the supposedly safe directory.
Given that PQR currently behaves as described, and is a large package (multiple millions of lines of code, developed over more than a decade to a variety of coding standards, which were frequently ignored, anyway), what techniques would you use to remediate the problem?
Known options include:
Change all formatting calls to use a function that checks actual arguments against the format strings, with an extra argument indicating the actual types passed to the function. (This is tricky, and potentially error-prone because of the sheer number of format operations to be changed - but if the checking function is itself sound, it works well. A sketch of such a checking function appears after this list of options.)
Establish the direct path to PQRHOME and validate it for security (details below), refusing to start if it is not secure, and thereafter using the direct path and not the value of $PQRHOME (when they differ). (This requires all file operations that use $PQRHOME to use not the value from getenv() but the mapped path. For example, this would require the software to establish that /home/attacker/pqr is a symlink to /opt/pqr, that the path to /opt/pqr is secure, and thereafter, whenever a file is referenced as $PQRHOME/some/thing, the name used would be /opt/pqr/some/thing and not /home/attacker/pqr/some/thing. This is a large code base - not trivial to fix.)
Ensure that all directories on $PQRHOME, even tracking through symlinks, are secure (details below, again), and the software refuses to start if anything is insecure.
Hard-code the path to the software install location. (This won't work for PQR; it makes testing hell, if nothing else. For users, it means they can have but one version installed, and upgrades etc. require parallel running. This does not work for PQR.)
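As promised above, here is a sketch of the kind of checking function option 1 describes: the caller passes a 'spec' string naming the conversions it expects (e.g. "sd" for one %s followed by one %d), and a format string loaded from a message file is rejected if it asks for anything else. The names checked_fprintf and format_matches are illustrative, not part of PQR.

#include <stdarg.h>
#include <stdio.h>
#include <string.h>

static int format_matches(const char *fmt, const char *spec)
{
    size_t i = 0;

    for (const char *p = fmt; *p; p++) {
        if (*p != '%')
            continue;
        p++;
        if (*p == '%')                      /* literal %% is harmless */
            continue;
        while (*p && strchr("-+ #0.123456789", *p))
            p++;                            /* flags, width, precision ('*' deliberately excluded) */
        while (*p && strchr("hlLqjzt", *p))
            p++;                            /* length modifiers */
        if (*p == '\0' || *p == 'n')        /* reject truncated formats and %n outright */
            return 0;
        if (spec[i] == '\0' || *p != spec[i])
            return 0;                       /* conversion not expected at this position */
        i++;
    }
    return spec[i] == '\0';                 /* every expected conversion was consumed */
}

/* Drop-in replacement for fprintf() when fmt comes from a message file. */
int checked_fprintf(FILE *out, const char *spec, const char *fmt, ...)
{
    va_list ap;
    int rc;

    if (!format_matches(fmt, spec)) {
        fprintf(stderr, "rejected untrusted format string\n");
        return -1;
    }
    va_start(ap, fmt);
    rc = vfprintf(out, fmt, ap);
    va_end(ap);
    return rc;
}

For example, a catalogue entry intended for printf(fmt, filename, errno) would be printed with checked_fprintf(stderr, "sd", fmt, filename, errno); a message file that sneaks in an extra %s or a %n is refused.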
Proposed criteria for secure paths:
For each directory, the owner must be trusted. (Rationale: the owner can change permissions at any time, so the owner must be trusted not to make changes at random that break the security of the software.)
For each directory, the group must either not have write privileges (so members of the group cannot modify the directory contents) or the group must be trusted. (Rationale: if the group members can modify the directory, then they can break the security of the software, so either they must be unable to change it, or they must be trusted not to change it.)
For each directory, 'others' must have no write privilege on the directory.
By default, the users root, bin, sys, and pqrusr can be trusted (where bin and sys exist).
By default, the group with GID=0 (variously known as root, wheel or system), bin, sys, and pqrgrp can be trusted. Additionally, the group that owns the root directory (which is called admin on MacOS X) can be trusted.
The POSIX function realpath() provides a mapping service that will map /home/attacker/pqr to /opt/pqr; it does not do the security checking, but that need only be done on the resolved path.
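For concreteness, here is a minimal sketch of how realpath() plus per-directory checks against the criteria above might look. It is illustrative only: the trusted-ID stubs and function names are placeholders, and a real implementation would look up root, bin, sys, pqrusr and pqrgrp at start-up rather than hard-coding IDs.

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

/* Placeholder trust checks; only root/GID 0 trusted in this sketch. */
static int uid_trusted(uid_t uid) { return uid == 0; }
static int gid_trusted(gid_t gid) { return gid == 0; }

/* Apply the criteria above to a single directory. */
static int dir_is_secure(const char *dir)
{
    struct stat sb;

    if (lstat(dir, &sb) == -1 || !S_ISDIR(sb.st_mode))
        return 0;
    if (!uid_trusted(sb.st_uid))
        return 0;                                   /* owner must be trusted */
    if ((sb.st_mode & S_IWGRP) && !gid_trusted(sb.st_gid))
        return 0;                                   /* writable group must be trusted */
    if (sb.st_mode & S_IWOTH)
        return 0;                                   /* no public write access */
    return 1;
}

/* Resolve $PQRHOME and check every directory on the resolved path. */
static char *resolve_and_check_pqrhome(const char *value)
{
    static char resolved[PATH_MAX];
    char prefix[PATH_MAX];

    if (value == NULL || realpath(value, resolved) == NULL)
        return NULL;
    for (const char *p = resolved; p != NULL; p = strchr(p + 1, '/')) {
        size_t len = (p == resolved) ? 1 : (size_t)(p - resolved);
        memcpy(prefix, resolved, len);
        prefix[len] = '\0';
        if (!dir_is_secure(prefix))                 /* "/", "/opt", ... */
            return NULL;
    }
    if (!dir_is_secure(resolved))                   /* the install directory itself */
        return NULL;
    return resolved;
}

Note that there is still a window between realpath() and the checks; checking the resolved path component by component (and refusing to proceed if any component turns out not to be a plain directory, hence lstat()) narrows that window but does not eliminate it.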
So, with all that as background, is there any known software which goes through vaguely related gyrations to ensure its security? Is this being overly paranoid? (If so, why - and are you really sure?)
Edited:
Thanks for the various comments.
@S.Lott: The attack (outlined in the question) means that at least one setuid root program can be made to use a format string of the (unprivileged) user's choosing, and can at least crash the program and therefore most probably can acquire a root shell. It requires local shell access, fortunately; it is not a remote attack. It requires a non-negligible amount of knowledge to get there, but I consider it unwise to assume that the expertise is not 'out there'.
So, what I'm describing is a 'format string vulnerability' and the known attack path involves faking the program out so that although it thinks it is accessing secure message files, it actually goes and uses the message files (which contain format strings) that are under the control of the user, not under the control of the software.

Option 2 works if you write a new value for $PQRHOME after resolving its real path and checking its security. That way, very little of your code needs to change thereafter.
As far as keeping the setuid privileges goes, it would help if you can do some sort of privilege separation, so that any operation involving input from the real user runs under the real uid. The privileged process and the real-uid process then talk over a socketpair or something like it.
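A minimal sketch of the first suggestion, assuming a resolve-and-check routine along the lines of the earlier sketch exists (pqr_pin_home and resolve_and_check_pqrhome are hypothetical names): overwrite $PQRHOME with the vetted real path once at start-up, so the rest of the code can keep calling getenv("PQRHOME") unchanged.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical: resolves $PQRHOME via realpath() and applies the security
 * checks, returning the resolved path or NULL (see the earlier sketch). */
extern char *resolve_and_check_pqrhome(const char *value);

int pqr_pin_home(void)
{
    const char *raw = getenv("PQRHOME");
    char *safe = resolve_and_check_pqrhome(raw);

    if (safe == NULL) {
        fprintf(stderr, "PQRHOME is missing or insecure; refusing to start\n");
        return -1;
    }
    /* From here on, getenv("PQRHOME") yields the vetted real path. */
    if (setenv("PQRHOME", safe, 1) == -1) {
        perror("setenv");
        return -1;
    }
    return 0;
}

Calling something like this once from each program's start-up code avoids touching the thousands of existing getenv()-based path constructions.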

Well, it sounds paranoid, but whether it is or not depends on which system(s) your application runs on and how much damage an attacker can do.
So, if your user base is possibly hostile and the potential damage is very high, I'd go for option 4, but modified as follows to remove its drawbacks.
Let me quote two relevant things:
1) The programs currently check that $PQRHOME and key directories under it are 'safe' (owned by pqrusr, belong to pqrgrp, do not have public write access).
2) Thereafter, programs access files under $PQRHOME via the full value of the environment variable.
You don't need to hard-code the full path; you can hard-code just the relative path from the program mentioned in 1) to the files mentioned in 2).
Issues to control:
a) you must be sure that there isn't anything "attacker-accessible" (e.g. in terms of symlinks) between the executable's path and the files' path;
b) you must be sure that the executable checks its own path in a reliable way, which should not be a problem on any of the Unixes I know (though I don't know all of them, and I don't know Windows at all).
EDITED after the 3rd comment:
If your OS supports /proc, the symlink /proc/${pid}/exe is the best way to solve b).
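For example, on a /proc-based system a process can read its own path reliably like this (a sketch; error handling kept minimal):

#include <limits.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char exe[PATH_MAX];
    /* readlink() does not NUL-terminate, so leave room and terminate ourselves. */
    ssize_t n = readlink("/proc/self/exe", exe, sizeof(exe) - 1);

    if (n == -1) {
        perror("readlink /proc/self/exe");
        return 1;
    }
    exe[n] = '\0';
    printf("running from: %s\n", exe);
    return 0;
}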
EDITED after sleeping on it:
Is the installation a "safe" process? If so, you might create a wrapper script at installation time. This script should be executable but not writable (and possibly not even readable). It would set the $PQRHOME env var to the "safe" value and then call your actual program (it might do other useful things too). Since in Unix the environment variables of a running process cannot be changed by anything but the process itself, you are safe (of course, the parent can set the variables before the process starts). I do not know whether this approach works on Windows, though.
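The same trick can be done with a tiny compiled wrapper instead of a shell script; a sketch, where the install-time paths are placeholders:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Placeholder paths baked in at installation time. */
#define SAFE_PQRHOME "/opt/pqr"
#define REAL_BINARY  "/opt/pqr/bin/pqr-victim.real"

int main(int argc, char **argv)
{
    (void)argc;
    /* Pin $PQRHOME to the safe value, overriding whatever the caller set. */
    if (setenv("PQRHOME", SAFE_PQRHOME, 1) == -1) {
        perror("setenv");
        return 1;
    }
    argv[0] = REAL_BINARY;
    execv(REAL_BINARY, argv);   /* only returns on failure */
    perror("execv");
    return 1;
}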


Why do many Linux distros use setuid instead of capabilities?

capabilities(7) are a great way to avoid giving a process full root privileges, and AFAIK they can be used instead of setuid(2). According to this and many other sources,
"Unfortunately, still many binaries have the setuid bit set, while they should be replaced with capabilities instead."
As a simple example, on Ubuntu,
$ ls -l `which ping`
-rwsr-xr-x 1 root root 44168 May 8 2014 /bin/ping
As you know, setting the suid/sgid bit on a file changes the effective user ID of the process to the file's owner (here, root). So if the suid-enabled program contains a flaw, a non-privileged user can break out and become the equivalent of the root user.
My question is: why do many Linux distributions still use the setuid method when capabilities could be used instead, with fewer security concerns?
This may not give the reason why some dudes somewhere decided one way or another, but some auditing tools and interfaces may not yet know about capabilities.
An example is the proc_connector netlink interface and the programs based on it (like forkstat): there are events for a process changing its credentials, but not for it changing its capabilities.
FWIW, the reason you may not get, e.g., a net_raw+ep ping(8) instead of a setuid one on a Debian-like distro is that it depends on the setcap(8) utility from the libcap2-bin package already being present when you install ping. From iputils-ping.postinst:
if command -v setcap > /dev/null; then
    if setcap cap_net_raw+ep /bin/ping; then
        chmod u-s /bin/ping
    else
        echo "Setcap failed on /bin/ping, falling back to setuid" >&2
        chmod u+s /bin/ping
    fi
else
    echo "Setcap is not installed, falling back to setuid" >&2
    chmod u+s /bin/ping
fi
Also notice that ping itself will drop any setuid privileges and switch to using capabilities on Linux upon starting, so your concerns about it may be a bit exaggerated. From ping.c:
int
main(int argc, char **argv)
{
    struct addrinfo hints = { .ai_family = AF_UNSPEC, .ai_protocol = IPPROTO_UDP,
                              .ai_socktype = SOCK_DGRAM, .ai_flags = getaddrinfo_flags };
    struct addrinfo *result, *ai;
    int status;
    int ch;
    socket_st sock4 = { .fd = -1 };
    socket_st sock6 = { .fd = -1 };
    char *target;

    limit_capabilities();
From ping_common.c:

void limit_capabilities(void)
{
    ...
    if (setuid(getuid()) < 0) {
        perror("setuid");
        exit(-1);
    }
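For comparison with the quoted code, here is a hedged sketch (not the actual iputils source) of how a program could retain only CAP_NET_RAW using libcap; it assumes the process currently holds that capability (via setuid root or file capabilities) and must be linked with -lcap:

#include <stdio.h>
#include <stdlib.h>
#include <sys/capability.h>

/* Reduce the permitted and effective sets to CAP_NET_RAW only. */
static void keep_net_raw_only(void)
{
    cap_value_t keep[] = { CAP_NET_RAW };
    cap_t caps = cap_init();            /* starts with all flags cleared */

    if (caps == NULL) {
        perror("cap_init");
        exit(1);
    }
    if (cap_set_flag(caps, CAP_PERMITTED, 1, keep, CAP_SET) == -1 ||
        cap_set_flag(caps, CAP_EFFECTIVE, 1, keep, CAP_SET) == -1) {
        perror("cap_set_flag");
        exit(1);
    }
    if (cap_set_proc(caps) == -1) {     /* everything else is dropped */
        perror("cap_set_proc");
        exit(1);
    }
    cap_free(caps);
}

int main(void)
{
    keep_net_raw_only();
    /* ... open a raw socket here; no other root privilege remains ... */
    return 0;
}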
Consider a different planet where capabilities patches were rejected, the submitter had to try harder to make a neighborly improvement, and came up with this:
filesystems can be mounted suid, nosuid, or capsuid.
if mounted capsuid, setuid bits on ${file} don't count unless .${file}.suid_capability also exists, is setuid, and is either 0-length or parseable in some standardized format:
-r-sr-xr-x. 1 carton carton 3812 Dec 27 15:39 t1
-r-Sr--r--. 1 carton carton 0 Dec 27 15:39 .t1.suid_capability
if the file is empty, setuid bit works as normal. If it has parseable contents, it works as partial-root capabilities.
if desired the .t1.suid_capability files can be annotated with fields for:
binding to a hash of their matching 't1' file
binding to an absolute path, so for example a setuid file within a chroot does not become "hot" until you chroot into it.
binding to a public key or hash nonce loaded at boot identifying the installed system like a uuid. If the private key or nonce is hidden from a backup server, traditional password resets would still work but some forms of rootkit persistence would become more difficult. It also lets you work with images of other systems in subdirectories, automatically making them effectively "nosuid"-mounted even if you are undisciplined about mount options and elect not to use the absolute path feature.
Now answer some questions about this planet relative to our own:
any missing functionality compared to our native planet?
which system works better with 'ls', 'tar', 'rsync', 'cpio', 'find', 'pax', 'mtree'? with gentoo's ebuild sandbox? with LXC?
which system works better with diskless or guest systems running on NFS or 9p roots?
which system works better with tripwire?
Now repeat those questions considering three systems instead of two:
our planet: mysterious capabilities metadata in filesystems
alternate planet using civilized Unix approach
mosvy's 'ping' example, where the binary drops privileges right after it starts
With the third system we are finally starting to lose features relative to the status quo: someone could write a hypothetical capabilities-aware tripwire that works less well in the 'ping' example. Has anyone done that, though? Has anyone outside the NSA? Have the people forcing this complexity on us done that work, to deliver the (relatively small) value from the complexity? And if that's the feature-hill we want to die on (as opposed to reducing attack surface), is there maybe a better way to get it, like dm-verity, that once again moots the complexity?
Now let's make a slight refinement to mosvy's 'ping':
crt.o or some early runtime startup code reads the sidecar file and /etc/suid_capability_hash_nonce. It drops suid if /etc/suid_capability_hash_nonce exists but the sidecar file does not (in "alternate planet" terms, the existence of /etc/suid_capability_hash_nonce makes the whole system '-o capsuid'-mounted). The early runtime startup code handles parsing the sidecars and dropping to the enumerated capabilities so that logic doesn't have to be coded into main.c one-by-one.
Honestly, I think the 'ping' example is superior because programs know what capabilities they need, and the need changes only when the program's source code changes. By setting them in a drop_privileges()-style function the need will never get out of sync. Fanned-out distributions won't have to scramble to update capability sets.
If you disagree and want the sidecar/capabilities-style system, something equivalent could have been implemented without tampering with kernel, mount options, bootup, any of it: old dracut-nfsroot initrds would keep working. It is just a matter of style.
This is all a Socratic way of saying capabilities offers nothing beyond dm-verity + 'ping'-style privileges-dropping. There is nothing "unfortunate" about eschewing complexity that's not pulling its weight.
Every few years CADT developers invent some new form of metadata to cram into filesystem directories: Finder metadata, POSIX ACLs, SELinux contexts, NFS ACLs. What will they think of next? How long will it take to update all the filesystems, all the network protocols, all the fileutils tools, all the non-Linux storage operating systems, all the fancy licensed "enterprise" tape backup suites? Will anyone ever get around to updating all that stuff, or will we just limp along with substandard tools? Will even the core feature be documented usefully, or will it be the undocumented plaything of a few annoying "distributions"? Will the feature work most of the time, but stop systems from booting unexpectedly and create chicken-and-egg recovery problems? Will the feature actually stop any attacks?
Is it fair to all the people who have to clean up after them and tolerate complexity in their own systems that made different, simpler choices, just to unbreak interop that their new feature broke?
The value delivered is particularly low in this case, but we have enough experience to set the value bar high on this type of proposal. I think it was a mistake to accept the feature into Linux and applaud any distribution that avoids it.
Capability support was really enabled in 2008 when file capabilities were added to the kernel. So, that's 13 years and counting to replace setuid. Your question is very valid.
If capability support hadn't been implemented in a backwardly compatible way in Linux, it would either have been rejected at birth or things would surely have changed by now! I suspect it all comes down to the fact that there is no economic incentive to adopt capabilities when their benefit is only evident, temporarily, when a bug in some code is found to be exploitable.
I think that people are resigned to all the ways that setuid binaries seem to be exploitable and the point fixes that people quickly roll out when another vulnerability surfaces. Buffer overflow -> launch shell -> root exploit -> code fix -> start over. They are comfortable with the idea that user identity = privilege (i.e., root). This spills into how people want to describe the futility of breaking down all-powerful root into independent capabilities, when a chain of exploits can yield all privilege. Clearly, the pervasive idea that privilege and identity should be equivalent is why Ambient capabilities even exist.
Capabilities, however, when used as they were intended to be used are not identity = privilege. They are capable binaries = privilege - where the privilege is reduced relative to a user identity executing arbitrary code, by the combination of those capability bits and the code in the actual program that has them.
If you write code that edits files, it is clear it shouldn't need to worry about being directly abused for raw ethernet packet formation, or loading kernel modules. Exploiting a bug in that code may well allow a malicious edit of a file but, unlike setuid, it won't allow strange packets to be sent on the network. At least not without tricking the code of some independently capable executable into doing it by proxy.
However, no one intentionally writes buggy code, so I think the answer to your question comes down to another question: "where exactly is the benefit of limiting how exploitable an exploit is?".
I suspect that if some distribution were to figure out how to restructure their code base to eliminate setuid-root binaries, in favor of file capabilities, it would fare better (certainly no worse) over time as code exploits are found vs. other distributions clinging to setuid-root. But, until such a distribution comes into being, I can't fault the idea that that is just an opinion.

Change or hide process name in htop

It seems that htop shows all running processes to every user, and process names in htop contain all the file names that I include on the command line. Since I usually use very long file names that contain a lot of detailed information about my project, I do not want that information to be visible to everyone (but I am OK with other users seeing what software I am running).
How can I hide the details in the process name?
How can I hide the details in the process name?
Since kernel 3.3, you can mount procfs with the hidepid option set to 1 or 2.
The kernel documentation file proc.txt describes this option:
The following mount options are supported:
hidepid= Set proc access mode.
hidepid=0 means classic mode - everybody may access all /proc/<pid>/ directories (default).
hidepid=1 means users may not access any /proc/<pid>/ directories but their own. Sensitive files like cmdline, sched*, status are now protected against other users. This makes it impossible to learn whether any user runs specific program (given the program doesn't reveal itself by its behaviour). As an additional bonus, as /proc/<pid>/cmdline is unaccessible for other users, poorly written programs passing sensitive information via program arguments are now protected against local eavesdroppers.
hidepid=2 means hidepid=1 plus all /proc/<pid>/ will be fully invisible to other users. It doesn't mean that it hides a fact whether a process with a specific pid value exists (it can be learned by other means, e.g. by "kill -0 $PID"), but it hides process' uid and gid, which may be learned by stat()'ing /proc/<pid>/ otherwise. It greatly complicates an intruder's task of gathering information about running processes, whether some daemon runs with elevated privileges, whether other user runs some sensitive program, whether other users run any program at all, etc.
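For completeness, remounting /proc with that option programmatically is just the C equivalent of running "mount -o remount,hidepid=2 /proc"; this sketch assumes root privileges and a kernel with hidepid support (3.3 or later):

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    /* Remount /proc so other users' /proc/<pid>/ entries become invisible. */
    if (mount("proc", "/proc", "proc", MS_REMOUNT, "hidepid=2") == -1) {
        perror("remount /proc");
        return 1;
    }
    return 0;
}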

Make isolated build or any shell command

Please give me a hint about the simplest and lightest solution for isolating a Linux shell script (usually on Ubuntu, in case it has something special).
What I mean by isolation:
1. Filesystem - the most important - the script must not be able to read any folders outside the workspace except those I explicitly configure in some way.
2. Other types of isolation do not really matter.
"Soft" isolation is OK, meaning the script may simply fail or abort when trying to read a denied path, but "hard" isolation, where such attempts get "Not found", looks like a cleaner solution.
I do not need any process isolation; the script may use sudo/fakeroot/etc. inside it, but this should not affect the isolation.
Also, I plan to use different isolation setups inside one workspace.
For example, I have folders:
a/
b/
include/
target/
I want to run make a, giving it access only to "a" (rw), "include" (r) and "target" (rw+sudo),
and make b, giving it access only to "b" (rw), "include" (r) and "target" (rw+sudo),
so that target gets the results from both A and B, with B allowed to overwrite any of A's results - the same as if there were no isolation.
The point of the isolation is to prevent B from reading from A, or even knowing that A exists, and vice versa.
Thanks!
Two different users and SSH is a simple way to solve your problem. One of the key benefits is that this will start a "clean" environment in a new shell.
ssh <user_a>@localhost '<path_to_build_script_a>'
ssh <user_b>@localhost '<path_to_build_script_b>'
Users a and b must both be members of the group that owns the common directories.
Note that it's the directory write permission that decides whether a user can create new files inside that directory.
Edit: 2013-07-29
For lots of sequential isolated builds like in your case, one solution is to do as you have already suggested: automate file permission changes so that each build only has access to the files and folders it should.

Regarding Hard Link

Can somebody please explain to me why the kernel doesn't allow us to make a hard link to a directory? Is it because it would break the directed acyclic graph structure of the file system, or is it for some other reason? What other complications would arise if it were allowed?
Back in the days of 7th Edition (or Version 7) UNIX, there were no system calls mkdir(2) and rmdir(2). The mkdir(1) program was SUID root, and used the mknod(2) system call to create the directory and the link(2) system call to make the entries for . and .. in the new directory. The link(2) system call only allowed root to do that. Consequently, way back then (circa 1978), it was possible for the superuser to create links to directories, but only the superuser was permitted to do so to ensure that there were no problems with cycles or other missing links. There were diagnostic programs to pick up the pieces if the system crashed while a directory was partly created, for example.
You can find the Unix 7th Edition manuals at Bell Labs. Sections 2 and 3 are devoid of mkdir(2) and rmdir(2). You used the mknod(2) system call to make the directory:
NAME
mknod – make a directory or a special file
SYNOPSIS
mknod(name, mode, addr)
char *name;
DESCRIPTION
Mknod creates a new file whose name is the null-terminated string pointed to by name. The mode of the new file (including directory and special file bits) is initialized from mode. (The protection part of the mode is modified by the process's mode mask; see umask(2)). The first block pointer of the i-node is initialized from addr. For ordinary files and directories addr is normally zero. In the case of a special file, addr specifies which special file.
Mknod may be invoked only by the super-user.
SEE ALSO
mkdir(1), mknod(1), filsys(5)
DIAGNOSTICS
Zero is returned if the file has been made; -1 if the file already exists or if the user is not the superuser.
The entry for link(2) states:
DIAGNOSTICS
Zero is returned when a link is made; -1 is returned when name1 cannot be found; when name2 already exists; when the directory of name2 cannot be written; when an attempt is made to link to a directory by a user other than the super-user; when an attempt is made to link to a file on another file system; when a file has too many links.
The entry for unlink(2) states:
DIAGNOSTICS
Zero is normally returned; -1 indicates that the file does not exist, that its directory cannot be written, or that the file contains pure procedure text that is currently in use. Write permission is not required on the file itself. It is also illegal to unlink a directory (except for the super-user).
The manual page for the ln(1) command noted:
It is forbidden to link to a directory or to link across file systems.
The manual page for the mkdir(1) command notes:
Standard entries, '.', for the directory itself, and '..' for its parent, are made automatically.
This would not be worthy of comment were it not that it was possible to create directories without those links.
Nowadays, the mkdir(2) and rmdir(2) system calls are standard and permit any user to create and remove directories, preserving the correct semantics. There is no longer a need to permit users to create hard links to directories. This is doubly true since symbolic links were introduced - they were not in 7th Edition UNIX, but were in the BSD versions of UNIX from quite early on.
With normal directories, the .. entry unambiguously links back to the (single, solitary) parent directory. If you have two hard links (two names) for the same directory in different directories, where does the .. entry point? Presumably, to the original parent directory - and presumably there is no way to get to the 'other' parent directory from the linked directory. That's an asymmetry that can cause trouble. Normally, if you do:
chdir("./subdir");
chdir("..");
(where ./subdir is not a symbolic link), then you will be back in the directory you started from. If ./subdir is a hard link to a directory somewhere else, then you will be in a different directory from where you started after the second chdir(). You'd have to show that with a pair of stat() calls before and after the chdir() operations shown.
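A sketch of that demonstration: record the starting directory's device and inode, step into ./subdir (a placeholder name) and back up through '..', and compare the two locations.

#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    struct stat before, after;

    if (stat(".", &before) == -1) { perror("stat ."); return 1; }
    if (chdir("./subdir") == -1)  { perror("chdir ./subdir"); return 1; }
    if (chdir("..") == -1)        { perror("chdir .."); return 1; }
    if (stat(".", &after) == -1)  { perror("stat ."); return 1; }

    /* Same device and inode means '..' really led back to the start. */
    if (before.st_dev == after.st_dev && before.st_ino == after.st_ino)
        printf("back where we started\n");
    else
        printf("'..' led somewhere else: the subdirectory was a hard link\n");
    return 0;
}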
This is entirely because allowing hard links to directories allows for potential loops and cycles in the directory graph without adding much value.
In addition to the possibility of getting cycles (much like with symlinks, by the way, but these are easier to detect and handle), there is a second reason I can think of.
On UNIX, there is a common assumption made by many programs: that every directory has a link count of 2 + the number of child directories. This is due to the POSIX-standard directory entries '.' and '..', which link to the directory itself and to its parent.
(After verification, I can say that the root (/) is not an exception).
This is especially useful as a performance optimization to detect leaf directories when recursing, but many applications have found other uses for it as well.
Clarifying:
By allowing user-defined hard links to directories, these invariants would, so to speak, no longer hold, and any dependent applications might stop working correctly.
This element of surprise is why you need root permissions (and some careful design (re)thinking) in order to force the creation of directory hard links.
Because then the directory tree will cease to be a directory tree. One directory could have multiple parents.
Cyclic references will break garbage collection by reference counting. Wikipedia describes the problem:
There are a variety of ways of handling the problem of detecting and collecting reference cycles. One is that a system may explicitly forbid reference cycles.
That is the way Linux does it.

How is the SELinux MLS range restricted during user login?

I'm trying to understand what logic determines whether a user can log in with a particular MLS sensitivity level. At first I suspected that pam_selinux.so reads the /etc/selinux/.../seusers file to understand which user is bound to which seuser and then restricts the user to sensitivities equal to or lower than the high component of the MLS range.
However, after scratching through its source code I found that, after asking the user if he would like to change their security context from the default context, pam_selinux checks that the new MLS labels are appropriate by calling into the kernel policy.
The following code is in modules/pam_selinux/pam_selinux.c from the Ubuntu libpam-modules 1.1.1-4ubuntu2 package.
static int mls_range_allowed(pam_handle_t *pamh, security_context_t src, security_context_t dst, int debug)
{
    struct av_decision avd;
    int retval;
    unsigned int bit = CONTEXT__CONTAINS;
    context_t src_context = context_new (src);
    context_t dst_context = context_new (dst);

    context_range_set(dst_context, context_range_get(src_context));
    if (debug)
        pam_syslog(pamh, LOG_NOTICE, "Checking if %s mls range valid for %s", dst, context_str(dst_context));

    retval = security_compute_av(context_str(dst_context), dst, SECCLASS_CONTEXT, bit, &avd);
    context_free(src_context);
    context_free(dst_context);
    if (retval || ((bit & avd.allowed) != bit))
        return 0;

    return 1;
}
It seems to me that this check is actually performed against the kernel policy, as seen in the security_compute_av() call. This turned my understanding of SELinux login on its head.
So, could someone please explain:
How is the validity of a user-chosen login security level determined?
How exactly is that logic implemented in the policy, in pam_selinux, and in the kernel?
Currently, I'm not too interested in type enforcement, multi-category security, or role-based access control, so there is no need to explain how those components are validated if they don't affect MLS sensitivities.
Given that I also share the "SELinux folds my brain in half" problem, I think I can help. First and foremost, you need to remember the difference between discretionary access control and mandatory access control. You also need to remember that user space defines a lot of things, but the kernel gets to enforce them.
First, here is a partial list of user space versus kernel space issues:
User space defines a valid user ID, the kernel creates processes owned by that user ID (the number, not the name)
User space puts permissions and ownership on a file on an ext3/4 file system, the kernel enforces access to that file based upon the file inode and every subsequent parent directory inode
If two users share the same user ID in /etc/passwd, the kernel will grant them both the same privileges because enforcement is done by the numeric identifier, not the textual one
User space requests a network socket to another host, the kernel isolates that conversation from others on the same system
With SELinux, user space defines roles, logins, and users via semanage and the kernel compiles those down into a large Access Vector Cache (AVC) so that it can enforce role-based access control and mandatory access control
Also under SELinux, a security administrator can use semanage to define a minimum and maximum security context. If you are in a multi-level security (MLS) configuration, and during login the user picks some context, then the kernel measures that against the AVC to determine if it is allowed.
What would probably help this make sense is to be in a multi-level security configuration. I took the class on SELinux and we touched it for about two hours. Most people don't want to go there. Ever. I've been in an MLS configuration quite a bit, so I understand the reasoning behind the coding decision you were chasing, but I agree that tinkering with MLS is a pretty painful way to understand how and why PAM works like it does.
Discretionary Access Control (DAC) is where user space, especially non-root users, can define who can access data that they control and in what fashion. Think file permissions. Because users control it, there is a trivial amount of effort needed to allow one user to see processes and/or files owned by another user. Normally, we don't care that much because a good administrator assumes that any one user could compromise the whole box and so all users are trusted equally. This might be very little trust, but there is still some level of trust.
Mandatory Access Control (MAC) is where user space is not to be trusted. Not all users are created equal. From a non-MLS perspective, consider the case where you have a web server and a database server on the same hardware (it will never survive the Slashdot effect). The only time the two processes communicate is over a dedicated connection channel over TCP. Otherwise, they must not even know that the other exists. We would operate them under two different contexts and the kernel will enforce the separation. Even looking at the process table or wandering around the hard drive as root will not get you any closer unless you change contexts.
In an MLS configuration, I can't tell you how many times I've tried to get random combinations of sensitivity and context, only to be rebuffed for picking an invalid combination. It can be very frustrating, because it takes a lot of exploring of your existing policy (/etc/selinux/policy/src/policy/policy under Red Hat 5) to know what is or is not allowed.
Under a Strict configuration, I can clearly see why all of this is overkill. And that's simply because SELinux is overkill for simple situations. It has other features that partially redeem it, though; chief among them is fine-grained access control that enforces administrator-set permissions. One of the areas where this is used most is restricting service daemons to just the access they need. It is painful to set up a new daemon, but it keeps trivial exploitations like a shared library exploit from going any further, because the process in question may be assigned to a role that won't let it run non-daemon commands like /bin/ls or a shell. Exploits don't do you much good in those situations.
