Can I restrict access to certain files for a certain process? - linux

Is it possible to start a process in Linux, and restrict its access to certain files/directories? For example:
$ start-process --enable-dir=./sandbox --exec="some-script.sh"
some-script.sh won't be able to do anything outside of ./sandbox.

You can use chroot to set the root directory of your process tree. This means, however, that all dependencies of that process must be available in its new root.
There are a number of packages that can help you set up chroot environments for your needs. Google is your friend ;)
Some pointers on building a chroot environment
When building a chroot for some program or daemon, you have to have a complete environment for the program you want to chroot. This means you have to provide a minimal system in a directory. That might contain:
A shell and some shell utilities, or a variant of busybox (this covers the next step too, unless you plan to deploy a single static executable).
Libc and other dependent shared libraries.
You need to check shared library dependencies using ldd or objdump; every library that appears has to be present in your private root directory (see the sketch after this list). This step may need to be repeated for every executable and library you add. Note that libraries loaded explicitly at runtime via dlopen have to be checked separately.
Depending on what you plan to chroot, a minimal /dev tree.
If you plan to chroot a daemon process, it may well need some device nodes in /dev such as random or zero. You can create those with the mknod command; refer to the mknod documentation, as well as the Linux documentation for the major/minor numbers each device should have.
Also depending on what you plan to chroot, a minimal /etc. Files needed therein are:
A minimal passwd and shadow (not your system passwd/shadow).
A minimal mtab containing /.
A minimal group (again, not your system group file).
You have to start somewhere, so it's best to start with the prerequisites for your program. Refer to its documentation for specifics.
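A rough, un-hardened sketch of the library-copying and device-node steps, using /bin/sh as the example executable and the ./sandbox directory from the question as the target (the grep is a quick-and-dirty way to pull paths out of ldd's output; treat this as a starting point, not a finished tool):
# Copy an executable plus every shared library ldd reports into the new root
mkdir -p ./sandbox/bin ./sandbox/dev
cp /bin/sh ./sandbox/bin/
for lib in $(ldd /bin/sh | grep -o '/[^ ]*'); do
    mkdir -p "./sandbox$(dirname "$lib")"
    cp "$lib" "./sandbox$(dirname "$lib")/"
done
# Minimal device nodes (creating these needs root; the major/minor numbers
# come from the kernel's devices list)
sudo mknod -m 666 ./sandbox/dev/null c 1 3
sudo mknod -m 666 ./sandbox/dev/zero c 1 5
sudo mknod -m 666 ./sandbox/dev/random c 1 8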

Typically you want to chroot the process, so that it can only access a directory and its sub-directories, and only execute some defined commands.
See How to chroot.

Related

Solution for shared home directory across separate machines - change $HOME variable?

I have an account on a set of machines which have separate processors, memory, and storage but share the same home directory. So, for instance, if I ssh into either machine, they will both source the same bashrc file.
The sysadmin does not install all of the software I wish to use so I have compiled some from source and store it in bin, lib, etc. directories in the home directory and change my PATH, LD_LIBRARY_PATH variables to include these. Each machine, at least up until recently, had different operating systems (or at least versions) installed and I was told that compiled code on one machine would not necessarily give the same result on the other. Therefore my (very hacky) solution was the following:
Create two directories in $HOME: ~/server1home and ~/server2home, each with their own set of bin, lib, etc. with separately compiled libraries.
Edit my .bashrc to check which server I am on and set the path variables to look in the correct directories for binaries and libraries for the server.
Lately, we moved buildings and the servers were rebooted, and I believe they both run the same OS now. Most of my setup was broken by the reboot, so I have to remake it. In principle, I don't need anything different on each machine and they could be identical, apart from the fact that there are more processors and memory to run code on. They don't have the same hardware, as far as I'm aware, so I still don't know if they can safely run the same binaries. Is such a setup safe for running code that needs to be numerically precise?
Alternatively, I would redo my hack differently this time. I had a lot of dotfiles that still ended up going into $HOME, rather than my serverXhome directories and the situation was always a little messy. I want to know if it's possible to redefine $HOME on login, based on hostname and then have nothing but the two serverXhome directories inside the shared $HOME, with everything duplicated inside each of these new home directories. Is this possible to set up without administrative privileges? I imagine I could make a .profile script that runs on login and changes $HOME to point at the right directory and then sources the new .bashrc within that directory to set all the rest of the environment variables. Is this the correct way of going about this or are there some pitfalls to be wary of?
TL;DR: I have the same home directory for two separate machines. Do binary libraries and executables compiled on one machine run safely on the other? Otherwise, is there a strategy to redefine $HOME on each machine to point to a subdirectory of the shared home directory and have separate binaries for each?
PS: I'm not sure if this is more relevant in superuser stackexchange or not. Please let me know if I'd have better luck posting there.
If the two machines have the same processor architecture, in general compiled binaries should work on both.
Now, there are a number of factors that come into play, and the result will hugely depend on the type of programs you want to run, but in general, if the same shared libraries are installed, your programs will work identically.
On different distributions, or different versions of a given distribution, it is likely that the set of installed libraries, or that the version of them will differ, which means that your programs will work on the machine on which they are built, but probably not on another one.
If you can control how they are built, you can rebuild your applications to have static linkage instead of dynamic, which means that they will embed all the libraries they need when built, resulting in a much bigger executable, but providing a much improved compatibility.
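For example, if you build from source yourself and static versions of the libraries are available, a gcc-based build could be switched over roughly like this (myprog.c is just a placeholder):
# Link statically so the binary carries its own copy of every library it uses
gcc -static -o myprog myprog.c
# ldd should now report "not a dynamic executable"
ldd myprog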
If the above doesn't work and you need to use a different set of programs for each machine, I would recommend leaving the $HOME environment variable alone, and only change your $PATH depending on the machine you are on.
You can have a short snippet in your .bashrc like so:
export PATH=$HOME/$(hostname)/bin:$PATH
export LD_LIBRARY_PATH=$HOME/$(hostname)/lib:$LD_LIBRARY_PATH
Then all you need is a folder bearing the machine's hostname for each machine you can connect to. If several machines share the same operating system and architecture, making symbolic links will save you some space.
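For instance, assuming the hostnames are literally server1 and server2 (placeholders), the one-time setup could be:
mkdir -p "$HOME/server1/bin" "$HOME/server1/lib"
# If server2 turns out to run the same OS and architecture, just reuse server1's tree
ln -s "$HOME/server1" "$HOME/server2"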

How to "safely" allow others to work on my server?

I sometimes have a need to pay someone to perform some programming which exceeds my expertise. And sometimes that someone is someone I might not know.
My current need is to configure Apache which happens to be running on Centos.
Giving root access via SSH on my main physical server is not an option.
What are my options?
One thought is to create a VPS (guest as Linux) on my main physical server (operating system as Linux) using VirtualBox (or equivalent), have them do the work, figure out what they did, and manually implement the changes myself.
Seem secure? Maybe better options? Thank you
I suggest looking into the chroot command.
chroot() changes the root directory of the calling process to that specified in path. This directory will be used for pathnames beginning with /. The root directory is inherited by all children of the calling process.
The implication of this is that once inside a chroot "jail", a user cannot see "outside" of the jail; you've changed their root directory. You can include custom binaries, or none at all (I don't see why you'd want that, but the point being that YOU decide what the developer can and can't see).
We can use a directory for chroot, or you could use my personal favorite: a mounted file, so your "jail" is easily portable.
Unfortunately I am a Debian user, and I would use debootstrap to build a minimal system in a small file (say, 5GB), but there doesn't seem to be an official RPM equivalent. However, the process is fairly simple. Create a file; I would do so with dd if=/dev/zero of=jailFile bs=1M count=5120. Then we can mkfs.ext4 jailFile. Finally, we must mount it and include any files we wish the jailed user to use (this is what debootstrap does: it downloads all the default goodies in /bin and such), either manually or with a tool.
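Put together, and with /mnt/jail as a made-up mount point, the whole sequence looks roughly like this (the populate step is the part debootstrap would otherwise automate for you):
# Create and format a 5 GB file-backed filesystem for the jail
dd if=/dev/zero of=jailFile bs=1M count=5120
mkfs.ext4 jailFile
# Loop-mount it and populate it with whatever the jailed user should see
sudo mkdir -p /mnt/jail
sudo mount -o loop jailFile /mnt/jail
# ... copy a shell, libraries, /etc pieces, etc. into /mnt/jail here ...
sudo chroot /mnt/jail /bin/bash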
After these steps you can copy this file around, make backups, or move servers even. All with little to no effort on the user side.
From a short Google search there appears to be a third-party tool that does nearly the same thing as debootstrap, here. If you are comfortable compiling this tool, can build a minimal system manually, or can find an alternative, and the idea of a portable ext4 jail appeals to you, I suggest this approach.
If the idea is unappealing, you can always chroot a directory which is very simple.
Here are some great links on chroot:
https://wiki.archlinux.org/index.php/Change_root
https://wiki.debian.org/chroot
http://www.unixwiz.net/techtips/chroot-practices.html
Also, here and here are great links about using chroot with an OpenSSH server.
On a side note: I do not think the question was off topic, but if you feel the answers here are inadequate, you can always ask on https://serverfault.com/ as well!
Controlling permissions is some of the magic at the core of the Linux world.
You... could add the individual as a non-root user, and then work towards providing specific access to the files you would like him to work on.
Doing this requires a fair amount of 'nixing to get right.
Of course, this is one route... If the user is editing something like an Apache configuration file, why not set up the file within a private Bitbucket or GitHub repository?
This way, you can see the changes that are made, confirm they are suitable, then pull them into production at your leisure.

Developmental testing of programs using Linux's POSIX capabilities

I'm developing a project where the executables use Linux's POSIX capabilities rather than being setuid root. So far I've had to keep one root shell open so that each time I recompile I can redo the setcap command to give the needed capability to the executable file so that I can test the results. That's getting tedious, plus if I ever hope that anyone else would want to contribute to the project's development I'll have to come up with a better way of doing it.
So far I've come up with two ways of dealing with this:
1) Have a single make target, to be run as root, that creates a special setuid program which will be used by the makefiles to give the capability to the executables. The program will be compiled from a template modified via sed so that it will only run if used by the non-root user the developer is working as, and will only modify files owned by the developer (and which sit in directories owned by the developer that aren't world-writable).
The problem with this is that I'm using GNU autotools to generate my make files, and I can't figure out how to get the makefiles to run a program on a linked executable after it's been linked. I could create a setcap-all target which has all the executables as its dependencies, with a rule that runs the setuid program on them, but then you can't simply do make executable-1 if that's all you want to build.
2) Have a single make target to be run as root to create a setuid daemon which will use inotify to monitor the src directory and grant the capability to any new executables (and which has security consideration similar to the setuid program from #1).
My problem with this is that I can't figure out how to get the build system to automatically and transparently start up the daemon, plus my intuition tells me that This Is Not The Way Things Are Done in a proper build system.
Are there any better ways of doing this?
Maybe I'm a bit confused about the question, but it seems you're trying to use the build-system to solve an installation problem.
Whether you're packaging your project using dpkg, rpm or anything else, there should be a rule to enforce usage of setcap, which will set the capabilities of the installed binary using the Filesystem Extended Attributes (xattrs).
# Post-install rule example
setcap cap_net_raw=+pe /usr/bin/installed-binary
However, if you're installing a system daemon, you may count on the init script already having all the capabilities, so it's just a matter of letting your process drop the unneeded ones.
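As a quick sanity check (a minimal sketch; /usr/bin/installed-binary is just the placeholder from the rule above, and cap_sys_admin below is only an example of a capability being dropped):
# Verify the capabilities attached by the post-install rule
getcap /usr/bin/installed-binary
# A root-started daemon can instead be launched with a reduced bounding set via capsh
capsh --drop=cap_sys_admin -- -c '/usr/bin/installed-binary'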

What is the proper place to put named pipes on Linux?

I've got a few processes that talk to each other through named pipes. Currently, I'm creating all my pipes locally, and keeping the applications in the same working directory. At some point, it's assumed that these programs can (and will) be run from different directories. I need to create these pipes I'm using in a known location, so all of the different applications will be able to find the pipes they need.
I'm new to working on Linux and am not familiar with the filesystem structure. In Windows, I'd use something like the AppData folder to keep these pipes. I'm not sure what the equivalent is in Linux.
The /tmp directory looks like it could probably function just nicely. I've read in a few places that it's cleared on system shutdown (and that's fine, I have no problem re-creating the pipes when I start back up), but I've seen a few other people say they're losing files while the system is up, as if it's cleaned periodically, which I don't want to happen while my applications are using those pipes!
Is there a place more suited for application-specific stores? Or would /tmp be the place I'd want to keep these (since they are, after all, temporary)?
I've seen SaltStack using /var/run. The only problem is that you need root access to write into that directory, but let's say that you are going to run your process as a system daemon. SaltStack creates /var/run/salt at the installation time and changes the owner to salt so that later on it can be used without root privileges.
I also checked the Filesystem Hierarchy Standard and, even though it's not all that important here, even they say:
System programs that maintain transient UNIX-domain sockets must place them in this directory.
Since named pipes are something very similar, I would go the same way.
On newer Linux distros with systemd, /run/user/<userid> (created by pam_systemd during login if it doesn't already exist) can be used for opening sockets and putting .pid files in, instead of /var/run, where only root has access. Also note that /var/run is a symlink to /run, so /var/run/user/<userid> can also be used. For more info, check out this thread. The idea is that system daemons should have a /var/run/<daemon name>/ directory created during installation with proper permissions and put their sockets/pid files in there, while daemons run by the user (such as pulseaudio) should use /run/user/<userid>/. Other options are /tmp and /var/tmp.
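A minimal sketch of the per-user variant (my-app.fifo is a made-up name); on systemd-based systems pam_systemd also exports the same location as $XDG_RUNTIME_DIR, so you can fall back to /tmp when it is unset:
# Prefer the per-user runtime directory, fall back to /tmp if it isn't available
RUNDIR="${XDG_RUNTIME_DIR:-/tmp}"
mkfifo "$RUNDIR/my-app.fifo"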

How can I sandbox filesystem activity, particularly writes?

Gentoo has a feature in portage, that prevents and logs writes outside of the build and packaging directories.
Checkinstall is able to monitor writes, and package up all the generated files after completion.
Autotools have the DESTDIR macro that enables you to usually direct most of the filesystem activity to an alternate location.
How can I do this myself with the safety of the Gentoo sandboxing method?
Can I use SELinux, rlimit, or some other resource limiting API?
What APIs are available to do this from C or Python?
Update0
The mechanism used must not require root privileges or any involved/persistent system modification. This rules out creating users and using chroot().
Please link to the documentation for the APIs you mention; for some reason they're exceptionally difficult to find.
Update1
This is to prevent accidents. I'm not worried about malicious code, only the poorly written variety.
The way Debian handles this sort of problem is to not run the installation code as root in the first place. Package build scripts are run as a normal user, and install scripts are run using fakeroot - this LD_PRELOAD library redirects permission-checking calls to make it look like the installer is actually running as root, so the resulting file ownership and permissions are right (ie, if you run /usr/bin/install from within the fakeroot environment, further stats from within the environment show proper root ownership), but in fact the installer is run as an ordinary user.
Builds are also, in some cases (primarily for development), done in chroots using e.g. pbuilder - this is likely easier on a binary distribution, however, as each build using pbuilder reinstalls all dependencies beyond the base system, acting as a test that all necessary dependencies are specified (this is the primary reason for using a chroot; not for protection against accidental installs).
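A minimal illustration of the fakeroot trick described above (the file name is arbitrary); outside the fakeroot environment the file is still owned by the ordinary user:
# Inside fakeroot, chown appears to succeed and ls shows root ownership,
# even though nothing actually ran as root
fakeroot sh -c 'touch demo.txt && chown root:root demo.txt && ls -l demo.txt'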
One approach is to virtualize a process, similar to how wine does it, and reinterpret file paths. That's rather heavy duty to implement though.
A more elegant approach is to use the chroot() system call which sets a subtree of the filesystem as a process's root directory. Create a virtual subtree, including /bin, /tmp, /usr, /etc as you want the process to see them, call chroot with the virtual tree, then exec the target executable. I can't recall if it is possible to have symbolic links within the tree reference files outside, but I don't think so. But certainly everything needed could be copied into the sandbox, and then when it is done, check for changes against the originals.
Maybe you can get the sandbox safety with regular user permissions, so that the process running the show only has specific access to specific directories?
chroot would be an option, but I can't figure out how to track attempted writes outside the root that way.
Another idea would be along the lines of intercepting system calls. I don't know much about this, but strace is a start; try running a program through it and check if you see something you like.
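A rough starting point for the strace route (the syscall list is just a guess at the interesting calls, and build-and-install.sh is a placeholder for whatever you want to watch):
# Log file-modifying syscalls of the command and everything it spawns
strace -f -e trace=open,openat,creat,unlink,unlinkat,rename,mkdir,chmod \
    -o build-trace.log ./build-and-install.sh
# Then look for anything touching paths outside the current directory
grep -v "$PWD" build-trace.log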
edit:
Is using kernel modules an option? Because you could replace the write system call with your own, so you could prevent whatever you needed and also log it.
It sounds a bit like what you are describing is containers. Once you've got the container infrastructure set up, it's pretty cheap to create containers, and they're quite secure.
There are two methods to do this. One is to use LD_PRELOAD to hook library calls that result in syscalls, such as those in libc, and call dlsym/dlopen. This will not allow you to directly hook syscalls.
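To give a sense of how the LD_PRELOAD method is wired up in practice (write_guard.so is a hypothetical preload library you would have to write yourself, wrapping open/openat and friends; it is not an existing tool):
# Every dynamically linked child of make picks up the hook via the environment
LD_PRELOAD=$PWD/write_guard.so make install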
The second method, which allows hooking syscalls, is to run your executable under ptrace, which provides options to stop and examine syscalls when they occur. This can be set up programmatically to sandbox calls to restricted areas of the filesystem, among other things.
LD_PRELOAD can not intercept syscalls, but only libcalls?
Dynamic linker tricks: Using LD_PRELOAD to cheat, inject features and investigate programs
Write Yourself an Strace in 70 Lines of Code
