Make an isolated build or any shell command - Linux

Please give me a hint toward the simplest and lightest solution for isolating a Linux shell script (usually Ubuntu, in case that matters).
What I mean by isolation:
1. Filesystem - the most important one - I want the script to be unable to access (read) any folders outside the workspace, except those I will manually configure in some way
2. Actually, other types of isolation do not matter
"Soft" isolation is fine, meaning the script may simply fail/abort when trying to access (read) a denied path, but "hard" isolation that reports "Not found" for such attempts looks like a cleaner solution.
I do not need any process isolation; the script may use sudo/fakeroot/etc. inside it, but this should not affect the isolation.
Also, I plan to use different isolations inside one workspace.
For example, I have these folders:
a/
b/
include/
target/
I want to run make a giving it access only to "a" (rw), "include" (r) and "target" (rw+sudo),
and make b giving it access only to "b" (rw), "include" (r) and "target" (rw+sudo),
and target will get the results from both A and B, allowing B to overwrite any of A's results - the same as if there were no isolation.
The goal of the isolation is to prevent B from reading from A, even knowing that A exists, and vice versa.
Thanks!

Two different users and SSH is a simple way to solve your problem. One of the key benefits is that this will start a "clean" environment in a new shell.
ssh <user_a>@localhost '<path_to_build_script_a>'
ssh <user_b>@localhost '<path_to_build_script_b>'
Users a and b must both be members of the group that owns the common directories.
Note that it's the directory write permission that decides whether a user can create new files inside that directory.
Edit: 2013-07-29
For lots of sequential isolated builds like in your case, one solution is to do as you have already suggested: automate file permission changes so that each build only has access to the files and folders it should.
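A minimal sketch of that permission setup for the layout in the question (the user and group names are made up, and this gives "soft" isolation via permission errors rather than "Not found"):

# run once as root to prepare the workspace
groupadd builders
useradd -m -G builders user_a
useradd -m -G builders user_b

# per-build private directories: only the owning user may read them
chown -R user_a:builders a/
chown -R user_b:builders b/
chmod -R 700 a/ b/

# shared read-only includes, shared writable target
chown -R root:builders include/ target/
chmod -R 750 include/       # builders may read, only root may write
chmod -R 770 target/        # builders may read and write
chmod g+s target/           # setgid so new files keep the builders group

# run each build in its own clean login shell
ssh user_a@localhost 'cd /path/to/workspace && make a'
ssh user_b@localhost 'cd /path/to/workspace && make b'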

Related

Counter file placement and naming convention

OK, this one might be stupid, but I'm losing too much time overthinking a solution.
I have a web app with 2 different kinds of payment modules.
These modules each need a counter file, incremented each time someone wants to pay, and locked while incrementing to make sure the payment gets a unique payment reference.
The files were placed inside the main directory (public_html) and have been overwritten by a bad versioning move.
So I want to move them outside of public_html, where I already placed the main config file.
But having these critical files placed at the root of my FTP sounds stupid and dangerous. So I'll create a directory to place them in.
This is a lot of text just to ask this:
What would you name this directory?
IMO, your question is not really related to PHP specifically; it's a common issue. You can use one of the standard directories to share data between applications.
/var
From the Filesystem Hierarchy Standard (FHS):
/var contains variable data files. This includes spool directories and files, administrative and logging data, and transient and temporary files.
Some options:
You can store your files directly in /var.
Also, /var/tmp can hold temporary files for a longer time and is not cleaned after a reboot (depending on your system).
Or you can create a custom subdirectory in /var/opt with a name relevant to your application.
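A minimal sketch of the locked increment against such a counter file (the /var/opt/myapp path and the file name are invented for the example; the app itself would do the equivalent with its own locking calls):

#!/bin/bash
# Atomically bump the payment counter and print the new unique reference.
COUNTER=/var/opt/myapp/payment.counter    # example location; seed it once with 0

(
    flock -x 9                            # exclusive lock held for the whole block
    n=$(cat "$COUNTER")
    echo $((n + 1)) > "$COUNTER"
    echo "payment reference: $((n + 1))"
) 9>>"$COUNTER"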

It's about Linux soft links with one source and several destinations :)

Let's assume that we have several non-identical versions of the same folder in different locations, as follows:
/in/some/location/version1
/different/path/version2
/third/place/version3
Each version contains callerFile, a pre-compiled executable whose behaviour we cannot control. This callerFile will create and edit a folder called cache:
/some/fourth/destination/cache
So we have a conflict between the settings of the different versions, and what I want to do is convert /some/fourth/destination/cache into a link with 3 different destinations:
/some/fourth/destination/cache --> /in/some/location/version1/cache
/some/fourth/destination/cache --> /different/path/version2/cache
/some/fourth/destination/cache --> /third/place/version3/cache
So, for example:
if /in/some/location/version1/callerFile accesses /some/fourth/destination/cache, it should be redirected to /in/some/location/version1/cache,
if /different/path/version2/callerFile accesses /some/fourth/destination/cache, it should be redirected to /different/path/version2/cache,
and if /third/place/version3/callerFile accesses /some/fourth/destination/cache, it should be redirected to /third/place/version3/cache.
So, how can I do this on Ubuntu 12.04 64-bit?
Assuming you have no control over what callerFile actually does - it does what it wants and always the same thing - the conclusion is that you need to modify its environment. This is quite an advanced trick, requiring deep experience of the Linux kernel and Unix programming in general, and you should consider whether it's worth it. It will also require root privileges on the machine where your callerFile binary lives.
The solution I'd propose is creating an executable (or a script calling one of the exec() family of functions) which prepares a special environment (or makes sure it's ready to use), based on "mount -o bind" or the unshare() system call.
As said, playing with the so-called "execution context" is quite an advanced trick. Theoretically you could also try some autofs-like solution, but you'll probably end up in the same place, and bind mounts/unshare will probably be more effective than some FS-detection daemon. I wouldn't recommend diving into FUSE, for the same reason. And playing some over-complicated game with symlinks is probably not the way either.
http://www.kernel.org/doc/Documentation/unshare.txt
Note: whatever the "callerFile" binary does, I'm pretty sure it won't check its own filename, which makes it possible to replace it with something else in between that will do exec() on "callerFileRenamed".
As I understand it, basically what you want is to get a different result from the same activity, distinguished by some condition external to the activity itself - like, for example, returning a different listing for "ls" in the same directory based upon, e.g., the UID of the user who issued the "ls" command, without modifying some ./ls program binary.
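A minimal sketch of that approach for version1, using the unshare(1) wrapper from util-linux to get a private mount namespace (requires root, and assumes the real binary has been renamed to callerFileRenamed as suggested above; repeat with the other paths for version2 and version3):

#!/bin/bash
# Wrapper installed at /in/some/location/version1/callerFile.
# It gets its own mount namespace, binds the per-version cache over the
# shared path, then execs the real (renamed) binary.
mkdir -p /in/some/location/version1/cache /some/fourth/destination/cache
exec unshare --mount /bin/bash -c '
    mount --bind /in/some/location/version1/cache /some/fourth/destination/cache
    exec /in/some/location/version1/callerFileRenamed "$@"
' -- "$@"

On reasonably recent util-linux, the mounts in the new namespace default to private propagation, so the bind mount is visible only to callerFile and its children, not to the rest of the system.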

Regarding Hard Link

Can somebody please explain to me why the kernel doesn't allow us to make a hard link to a directory? Is it because it would break the directed-acyclic-graph structure of the filesystem, or is it for some other reason? What other complications arise if it is allowed?
Back in the days of 7th Edition (or Version 7) UNIX, there were no system calls mkdir(2) and rmdir(2). The mkdir(1) program was SUID root, and used the mknod(2) system call to create the directory and the link(2) system call to make the entries for . and .. in the new directory. The link(2) system call only allowed root to do that. Consequently, way back then (circa 1978), it was possible for the superuser to create links to directories, but only the superuser was permitted to do so to ensure that there were no problems with cycles or other missing links. There were diagnostic programs to pick up the pieces if the system crashed while a directory was partly created, for example.
You can find the Unix 7th Edition manuals at Bell Labs. Sections 2 and 3 are devoid of mkdir(2) and rmdir(2). You used the mknod(2) system call to make the directory:
NAME
mknod – make a directory or a special file
SYNOPSIS
mknod(name, mode, addr)
char *name;
DESCRIPTION
Mknod creates a new file whose name is the null-terminated string pointed to by name. The mode of
the new file (including directory and special file bits) is initialized from mode. (The protection part of
the mode is modified by the process’s mode mask; see umask(2)). The first block pointer of the i-node
is initialized from addr. For ordinary files and directories addr is normally zero. In the case of a special
file, addr specifies which special file.
Mknod may be invoked only by the super-user.
SEE ALSO
mkdir(1), mknod(1), filsys(5)
DIAGNOSTICS
Zero is returned if the file has been made; -1 if the file already exists or if the user is not the superuser.
The entry for link(2) states:
DIAGNOSTICS
Zero is returned when a link is made; -1 is returned when name1 cannot be found; when name2 already
exists; when the directory of name2 cannot be written; when an attempt is made to link to a directory by
a user other than the super-user; when an attempt is made to link to a file on another file system; when a
file has too many links.
The entry for unlink(2) states:
DIAGNOSTICS
Zero is normally returned; -1 indicates that the file does not exist, that its directory cannot be written,
or that the file contains pure procedure text that is currently in use. Write permission is not required on
the file itself. It is also illegal to unlink a directory (except for the super-user).
The manual page for the ln(1) command noted:
It is forbidden to link to a directory or to link across file systems.
The manual page for the mkdir(1) command notes:
Standard entries, '.', for the directory itself, and '..'
for its parent, are made automatically.
This would not be worthy of comment were it not that it was possible to create directories without those links.
Nowadays, the mkdir(2) and rmdir(2) system calls are standard and permit any user to create and remove directories, preserving the correct semantics. There is no longer a need to permit users to create hard links to directories. This is doubly true since symbolic links were introduced - they were not in 7th Edition UNIX, but were in the BSD versions of UNIX from quite early on.
With normal directories, the .. entry unambiguously links back to the (single, solitary) parent directory. If you have two hard links (two names) for the same directory in different directories, where does the .. entry point? Presumably, to the original parent directory - and presumably there is no way to get to the 'other' parent directory from the linked directory. That's an asymmetry that can cause trouble. Normally, if you do:
chdir("./subdir");
chdir("..");
(where ./subdir is not a symbolic link), then you will be back in the directory you started from. If ./subdir is a hard link to a directory somewhere else, then you will be in a different directory from where you started after the second chdir(). You'd have to show that with a pair of stat() calls before and after the chdir() operations shown.
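A shell sketch of that check (using the GNU coreutils form of stat; ./subdir is whatever directory you are testing):

# record device:inode of the current directory, hop in and out via '..',
# then compare - a hypothetical hard-linked subdir would land us elsewhere
before=$(stat -c '%d:%i' .)
cd ./subdir
cd ..
after=$(stat -c '%d:%i' .)
if [ "$before" = "$after" ]; then
    echo "back in the starting directory"
else
    echo "somewhere else: ./subdir/.. points to a different parent"
fi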
This is entirely because allowing hard links to directories allows for potential loops and cycles in the directory graph without adding much value.
In addition to the possibility of getting cycles (much like with symlinks, by the way, but these are easier to detect and handle), there is a second reason I can think of.
On UNIX, there is a common assumption, relied on by many programs, that all directories have a link count of 2 + the number of child directories. This is due to the POSIX standard directory entries '.' and '..', which link to the directory itself and to its parent.
(After verification, I can say that the root (/) is not an exception).
This is especially useful as a performance optimization to detect leaf directories when recursing, but many applications exist that have found other uses for it.
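A quick way to observe that invariant from the shell (the directory names are made up; stat -c is the GNU coreutils form):

mkdir -p demo/one demo/two demo/three
stat -c '%h %n' demo
# prints "5 demo": 2 (the '.' entry plus the entry in demo's parent)
# plus 3 (one '..' entry in each child directory)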
Clarifying
By allowing 'user-defined' hard links to directories, these invariants, so to say, will no longer hold, and any dependent applications might stop working correctly.
That element of surprise is why you need root permissions (and some good design (re)thinking) in order to force the creation of directory hard links.
Because then the directory tree will cease to be a directory tree. One directory could have multiple parents.
Cyclic references will break garbage collection by reference counting. Wikipedia describes the problem:
There are a variety of ways of handling the problem of detecting and collecting reference cycles. One is that a system may explicitly forbid reference cycles.
That is the way Linux does it.

Keeping home directories synchronized on two Linux boxes

I have two servers, computer A and computer B, both running Linux. I need to write a program or a shell script which will continuously monitor the contents of my home directory on computer A and, if anything changes, copy the changes to my home directory on computer B such that both home directories are always the same. (Any changes made to the home directory on computer B can safely be ignored.)
Have you considered exporting /home from computer A to computer B via a network file system, e.g. NFS?
You could also mount the exported filesystem on B in read-only mode so you won't be able to modify the contents of /home from B if that's desired.
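A minimal sketch of that setup (the hostnames computerA/computerB and the export options are only examples):

# on computer A: export /home to computer B, then reload the export table
echo '/home  computerB(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

# on computer B: mount it read-only so nothing can be modified from B
mount -t nfs -o ro computerA:/home /home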
Assuming a reasonably recent Linux kernel (one including inotify - it's been present since 2.6.13), you could use inotify-tools as described here to monitor for changes and call rsync on the files to update computer B. That should do what you're asking for, and allow changes on B that don't propagate to A, as well.
You could probably do the same job with incron, which works like cron but based on filesystem events instead of times, but it seems more intended for use with single files.
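A minimal sketch of the inotify-tools + rsync approach (the hostname and remote path are placeholders):

#!/bin/bash
# Mirror $HOME on computer A to computer B whenever something changes.
SRC="$HOME/"
DEST="user@computerB:/home/user/"

while inotifywait -r -e modify,create,delete,move,attrib "$SRC"; do
    rsync -az --delete "$SRC" "$DEST"
done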
Use rsync, which will solve your problems. Most distributions would have this already pre-installed.

Secure access to files in a directory identified by an environment variable?

Can anyone point to some code that deals with the security of files access via a path specified (in part) by an environment variable, specifically for Unix and its variants, but Windows solutions are also of interest?
This is a big long question - I'm not sure how well it fits the SO paradigm.
Consider this scenario:
Background:
Software package PQR can be installed in a location chosen by users.
The environment variable $PQRHOME is used to identify the install directory.
By default, all programs and files under $PQRHOME belong to a special group, pqrgrp.
Similarly, all programs and files under $PQRHOME either belong to a special user, pqrusr, or to user root (and those are SUID root programs).
A few programs are SUID pqrusr; a few more programs are SGID pqrgrp.
Most directories are owned by pqrusr and belong to pqrgrp; some can belong to other groups, and the members of those groups acquire extra privileges with the software.
Many of the privileged executables must be run by people who are not members of pqrgrp; the programs have to validate that the user is permitted to run them, by arcane rules that do not directly concern this question.
After startup, some of the privileged programs have to retain their elevated privileges because they are long-running daemons that may act on behalf of many users over their lifetime.
The programs are not authorized to change directory to $PQRHOME for a variety of arcane reasons.
Current checking:
The programs currently check that $PQRHOME and key directories under it are 'safe' (owned by pqrusr, belong to pqrgrp, do not have public write access).
Thereafter, programs access files under $PQRHOME via the full value of environment variable.
In particular, G11N and L10N are achieved by accessing files in 'safe' directories and reading format strings for printf() etc. out of the files in those directories, using the full pathname derived from $PQRHOME plus a known sub-structure (for example, $PQRHOME/g11n/en_us/messages.l10n).
Assume that the 'as installed' value of $PQRHOME is /opt/pqr.
Known attack:
Attacker sets PQRHOME=/home/attacker/pqr.
This is actually a symlink to /opt/pqr, so when one of the PQR programs, call it pqr-victim, checks the directory, it has correct permissions.
Immediately after the security checking is completed successfully, the attacker changes the symlink so that it points to /home/attacker/bogus-pqr, which is clearly under the attacker's control.
Dire things happen when the pqr-victim now accesses a file under the supposedly safe directory.
Given that PQR currently behaves as described, and is a large package (multiple millions of lines of code, developed over more than a decade to a variety of coding standards, which were frequently ignored, anyway), what techniques would you use to remediate the problem?
Known options include:
Change all formatting calls to use a function that checks the actual arguments against the format strings, with an extra argument indicating the actual types passed to the function. (This is tricky, and potentially error-prone because of the sheer number of format operations to be changed - but if the checking function is itself sound, it works well.)
Establish the direct path to PQRHOME and validate it for security (details below), refusing to start if it is not secure, and thereafter using the direct path and not the value of $PQRHOME (when they differ). (This requires all file operations that use $PQRHOME to use not the value from getenv() but the mapped path. For example, this would require the software to establish that /home/attacker/pqr is a symlink to /opt/pqr, that the path to /opt/pqr is secure, and thereafter, whenever a file is referenced as $PQRHOME/some/thing, the name used would be /opt/pqr/some/thing and not /home/attacker/pqr/some/thing. This is a large code base - not trivial to fix.)
Ensure that all directories on $PQRHOME, even tracking through symlinks, are secure (details below, again), and the software refuses to start if anything is insecure.
Hard-code the path to the software install location. (This won't work for PQR; it makes testing hell, if nothing else. For users, it means they can have but one version installed, and upgrades etc. require parallel running. This does not work for PQR.)
Proposed criteria for secure paths:
For each directory, the owner must be trusted. (Rationale: the owner can change permissions at any time, so the owner must be trusted not to make changes at random that break the security of the software.)
For each directory, the group must either not have write privileges (so members of the group cannot modify the directory contents) or the group must be trusted. (Rationale: if the group members can modify the directory, then they can break the security of the software, so either they must be unable to change it, or they must be trusted not to change it.)
For each directory, 'others' must have no write privilege on the directory.
By default, the users root, bin, sys, and pqrusr can be trusted (where bin and sys exist).
By default, the group with GID=0 (variously known as root, wheel or system), bin, sys, and pqrgrp can be trusted. Additionally, the group that owns the root directory (which is called admin on MacOS X) can be trusted.
The POSIX function realpath() provides a mapping service that will map /home/attacker/pqr to /opt/pqr; it does not do the security checking, but that need only be done on the resolved path.
So, with all that as background, is there any known software which goes through vaguely related gyrations to ensure its security? Is this being overly paranoid? (If so, why - and are you really sure?)
Edited:
Thanks for the various comments.
#S.Lott: The attack (outlined in the question) means that at least one setuid root program can be made to use a format string of the (unprivileged) user's choosing, and can at least crash the program and therefore most probably can acquire a root shell. It requires local shell access, fortunately; it is not a remote attack. It requires a non-negligible amount of knowledge to get there, but I consider it unwise to assume that the expertise is not 'out there'.
So, what I'm describing is a 'format string vulnerability' and the known attack path involves faking the program out so that although it thinks it is accessing secure message files, it actually goes and uses the message files (which contain format strings) that are under the control of the user, not under the control of the software.
Option 2 works, if you write a new value for $PQRHOME after resolving its real path and checking its security. That way very little of your code needs to change thereafter.
As far as keeping the setuid privileges goes, it would help if you can do some sort of privilege separation, so that any operation involving input from the real user runs under the real uid. The privileged process and the real-uid process then talk over a socketpair or something like it.
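A minimal sketch of that resolve-check-re-export idea, written as shell for brevity (the real fix would live in the programs' startup code; the trusted-owner list follows the question, and the group checks from the proposed criteria are omitted here):

# resolve the real path once, before doing any security checks
real="$(realpath "$PQRHOME")" || exit 1

# walk every component of the resolved path: trusted owner, no world write
p="$real"
while [ "$p" != "/" ]; do
    owner=$(stat -c '%U' "$p")
    perms=$(stat -c '%A' "$p")
    case "$owner" in root|bin|sys|pqrusr) ;; *) echo "untrusted owner on $p"; exit 1 ;; esac
    case "$perms" in ????????w?) echo "world-writable: $p"; exit 1 ;; esac
    p=$(dirname "$p")
done

# from here on, use only the resolved value, never the raw environment one
PQRHOME="$real"
export PQRHOME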
Well, it sounds paranoid, but whether it is or not depends on which system(s) your application runs on and how much damage an attacker can do.
So, if your user base is possibly hostile and the potential damage is very high, I'd go for option 4, but modified as follows to remove its drawbacks.
Let me quote two relevant things:
1)
The programs currently check that $PQRHOME and key directories under it are 'safe' (owned by pqrusr, belong to pqrgrp, do not have public write access).
2)
Thereafter, programs access files under $PQRHOME via the full value of the environment variable.
You don't need to actually hard-code the full path; you can hard-code just the relative path from the "program" you mentioned in 1) to the path mentioned in 2) where the files are.
Issues to control:
a) you must be sure that there isn't anything "attacker-accessible" (e.g. in terms of symlinks) between the executable's path and the files' path
b) you must be sure that the executable checks its own path in a reliable way, but this should not be a problem on any of the Unixes I know (though I don't know all of them, and I don't know Windows at all).
EDITED after the 3rd comment:
If your OS supports /proc, the symlink /proc/${pid}/exe is the best way to solve b).
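A small sketch of deriving the install root from the executable's own location, assuming a hypothetical $PQRHOME/bin/<program> layout (a compiled program would readlink /proc/self/exe; a shell sketch has to fall back to $0):

# in C you would readlink("/proc/self/exe", ...); $0 stands in for it here
self="$(readlink -f "$0")"
bindir="$(dirname "$self")"
PQRHOME="$(dirname "$bindir")"    # hard-coded relative layout: binary lives in bin/
export PQRHOME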
EDITED after sleeping on it:
Is the installation a "safe" process? If so, you might create (at installation time) a wrapper script. This script should be executable but not writable (and possibly not even readable). It would set the $PQRHOME env var to the "safe" value and then call your actual program (it might do other useful things too). Since in UNIX the env vars of a running process cannot be changed by anything but the running process itself, you are safe (of course the env vars can be changed by the parent before the process starts). I do not know if this approach works on Windows, though.
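A sketch of such an install-time wrapper (the /opt/pqr location and pqr-daemon name are invented; the installer would generate this file, owned by root and writable by nobody else, e.g. chown root:pqrgrp wrapper; chmod 0711 wrapper):

#!/bin/sh
# Generated at install time: pins PQRHOME to the audited install location,
# so any value supplied in the caller's environment is simply ignored.
PQRHOME=/opt/pqr
export PQRHOME
exec "$PQRHOME/bin/pqr-daemon" "$@"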
