How does AppArmor do "Environment Scrubbing"? - linux

The AppArmor documentation mentions giving applications the ability to execute other programs with or without environment scrubbing. Apparently a scrubbed environment is more secure, but the documentation doesn't seem to specify exactly how environment scrubbing happens.
What is environment scrubbing and what does AppArmor do to scrub the environment?

"Environment scrubbing" is the removal of various "dangerous" environment variables which may be used to affect the behaviour of a binary - for example, LD_PRELOAD can be used to make the dynamic linker pull in code which can make essentially arbitrary changes to the running of a program; some variables can be set to cause trace output to files with well-known names; etc.
This scrubbing is normally performed for setuid/setgid binaries as a security measure, but the kernel provides a hook to allow security modules to enable it for arbitrary other binaries as well.
The kernel's ELF loader code uses this hook to set the AT_SECURE entry in the "auxiliary vector" of information which is passed to the binary. (See the AppArmor kernel code for the implementation of this hook.)
As execution starts in userspace, the dynamic linker picks up this value and uses it to set the __libc_enable_secure flag; you'll see that the same routine also contains the code which sets this flag for setuid/setgid binaries. (There is equivalent code elsewhere for binaries which are statically linked.)
__libc_enable_secure affects a number of places in the main body of the dynamic linker code, and causes a list of specific environment variables to be removed.
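To observe this from userspace, here is a minimal sketch (assuming glibc 2.17 or later for secure_getenv) that reads AT_SECURE from the auxiliary vector and compares getenv with secure_getenv for LD_PRELOAD:
/* secure_check.c - print whether the kernel marked this process "secure"
 * (AT_SECURE), which is what triggers glibc's scrubbing, and compare
 * getenv vs secure_getenv for LD_PRELOAD. */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sys/auxv.h>

int main(void)
{
    unsigned long secure = getauxval(AT_SECURE);
    const char *plain  = getenv("LD_PRELOAD");
    const char *vetted = secure_getenv("LD_PRELOAD");

    printf("AT_SECURE = %lu\n", secure);
    /* In a secure execution (setuid/setgid, or a profile that requests
     * scrubbing), the dynamic linker strips LD_PRELOAD from the
     * environment before main() runs, and secure_getenv() returns NULL
     * in any case. */
    printf("getenv(LD_PRELOAD)        = %s\n", plain  ? plain  : "(unset)");
    printf("secure_getenv(LD_PRELOAD) = %s\n", vetted ? vetted : "(unset)");
    return 0;
}
Run as a normal binary with LD_PRELOAD set, it prints the variable; in a secure execution AT_SECURE is 1 and the variable is gone.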

Related

How to use mod_exec proftpd linux

I used this configuration to execute an external script via mod_exec in ProFTPD.
ExecEngine on
ExecLog /opt/proftpd_mod_exec.log
ExecOptions logStderr logStdout
<IfUser yogi>
ExecBeforeCommand STOR,RETR /home/yogi/Desktop/kab.sh EVENT=BeforeCommand FILE='%f'
ExecOnCommand STOR,RETR /home/yogi/Desktop/kab.sh EVENT=OnCommand FILE='%f'
</IfUser>
But I get an error like this in the proftpd_mod_exec.log file: STOR ExecBeforeCommand '/home/yogi/Desktop/kab.sh' failed: Exec format error
How can I fix it?
from http://www.proftpd.org/docs/contrib/mod_exec.html
This module will not work properly for logins, or for logins that are affected by DefaultRoot. These directives use the chroot(2) system call, which wreaks havoc when it comes to scripts. The path to script/shell interpreters often assume a certain location that is no longer valid within a chroot. In addition, most modern operating systems use dynamically loaded libraries (.so libraries) for many binaries, including script/shell interpreters. The location of these libraries, when they come to be loaded, are also assumed; those assumptions break within a chroot. Perl, in particular, is so wrought with filesystem location assumptions that it's almost impossible to get a Perl script to work within a chroot, short of installing Perl itself into the chroot environment.
From the error message it sounds like exactly that: you have chroot enabled and the script cannot be executed because files are not available at the expected places within the chroot.
The author suggests not using the module for this reason.
To get it to work you need to figure out the dependencies you need inside the chroot target and set them up there at the appropriate places. Or disable chroot for the users and try again. A third possibility: build a statically linked binary with almost no dependencies (see the sketch after this answer).
Or try, as the author of the module suggests, using a FIFO and ProFTPD's logging functionality to trigger the scripts outside of the chroot environment.
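For the statically linked route, a minimal sketch of a C replacement for kab.sh (the log path is an assumption and resolves relative to the chroot; build with gcc -static so the binary has no interpreter or shared-library dependencies):
/* kab.c - hypothetical statically linked replacement for kab.sh.
 * mod_exec passes EVENT=... and FILE=... as command-line arguments,
 * so just append them to a log file. */
#include <stdio.h>

int main(int argc, char **argv)
{
    /* Inside the chroot this path is relative to the chroot root. */
    FILE *log = fopen("/tmp/kab.log", "a");
    if (log == NULL)
        return 1;
    for (int i = 1; i < argc; i++)
        fprintf(log, "%s ", argv[i]);
    fputc('\n', log);
    fclose(log);
    return 0;
}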

Gulp/Node: error while loading shared libraries: cannot allocate memory in static TLS block

Trying to run gulp and getting this output
$ gulp
node: error while loading shared libraries: cannot allocate memory in static TLS block
From what I have found, this seems to relate to gcc or g++; I'm not sure how it pertains to node or gulp. Either way I can't seem to run gulp anymore. I should also mention that this just popped up today. It was running fine yesterday.
EDIT: seems like it's for all node commands. Just tried running npm -v to get the version number and it has the same output. Same with node -v
Running CentOS 6.9
The GNU toolchain supports various kinds of TLS, and one of them (the initial-exec model) involves what is essentially a fixed offset from the thread control block. At program startup, the dynamic linker computes all the offsets and makes sure that all threads have sufficient space for all the required thread local variables.
However, with dlopen, this does not work in general because it is not possible to move the thread control block around to make room for more thread-local variables. The current glibc dynamic linker has a heuristic which reserves some space for future dlopen calls, but if you load a number of shared objects, each with their own thread-local variables, this is not enough.
The usual workaround is to use the LD_DEBUG=files environment variable (or strace) to find the relevant shared objects loaded with dlopen (unfortunately, the error message you quoted does not provide this information). After that, you can use the LD_PRELOAD environment variable to tell the dynamic linker to load them early. (It is sufficient to do this for the shared object which is dlopened; its dependencies are processed automatically.) This has the side effect that the computation at program startup takes into account their TLS needs, and when the dlopen call happens later at run time, no additional TLS variables have to be allocated. However, this approach does not work for all shared objects because it affects symbol lookup and the order in which ELF constructors run.
In the general case, it may be necessary to switch some shared objects to the global-dynamic TLS model (which requires recompiling them), or use a glibc build with an increased TLS reserve. Unfortunately, the reserve cannot currently be set at run time.
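If recompiling is an option, the switch to the global-dynamic model mentioned above can be done per variable with GCC's tls_model attribute, or for a whole translation unit with -ftls-model=global-dynamic. A small sketch (the file and function names are placeholders):
/* tls_example.c - build as part of the offending shared object, e.g.
 *   gcc -shared -fPIC -ftls-model=global-dynamic -o libexample.so tls_example.c
 * so its thread-locals are allocated dynamically rather than in the
 * static TLS block. */
#include <stdio.h>

/* Per-variable override of the TLS model: */
static __thread long counter __attribute__((tls_model("global-dynamic")));

void bump_and_print(void)
{
    counter++;
    printf("counter in this thread: %ld\n", counter);
}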

Grub bootloader with shared library support

I'd like to load a shared library (a closed-source binary user-space library) at boot stage with the GRUB boot loader. Is there any chance of doing this, or must I write a custom ELF loader (GRUB module) to do it?
29/08/2014: For more detail, this is a programming problem in which I
want to customize or add some new features to the GRUB boot-loader
project. Thank you for all your support!
So, you don't make it crystal clear what you are trying to do, but:
Loading a userspace (assuming Linux SysV ELF type) shared library straight into GRUB is not possible. GRUB modules are indeed in ELF format, but they contain additional headers. Among the information contained in that header is an explicit license statement - GRUB will refuse to load any modules that are not explicitly GPLv2+, GPLv3 or GPLv3+.
It should be possible to write an ELF loader, but an easier way might be to write a tool to convert a userspace library to a GRUB module. There would of course be several restrictions here:
You would need to ensure the userspace library performed no system calls - GRUB would have nothing in place to handle them.
You would need to abide by the licensing rules (so only above three licenses would be acceptable).
You would need to ensure these libraries were not dependent on a global offset table being set up by glibc for them.
If recompiling is an option, GRUB also provides a POSIX emulation layer - add CPPFLAGS_POSIX to your CPPFLAGS, and use core standard POSIX header files. Have a look at the gcrypt support for an example.
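For reference, the extra header information mentioned above (including the mandatory license tag) is declared in the module source itself; a minimal GRUB module skeleton looks roughly like this (the module name hello_lib is a placeholder):
/* hello_lib.c - minimal GRUB module skeleton (hypothetical name).
 * GRUB_MOD_LICENSE is the explicit license declaration that GRUB
 * checks before it will load the module. */
#include <grub/dl.h>
#include <grub/misc.h>

GRUB_MOD_LICENSE ("GPLv3+");

GRUB_MOD_INIT (hello_lib)
{
  grub_printf ("hello_lib module loaded\n");
}

GRUB_MOD_FINI (hello_lib)
{
  /* nothing to clean up in this sketch */
}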

How to build the elf interpreter (ld-linux.so.2/ld-2.17.so) as static library?

I apologize if my question is not precise because I don't have a lot
of Linux related experience. I'm currently building a Linux from
scratch (mostly following the guide at linuxfromscratch.org version
7.3). I ran into the following problem: when I build an executable it
gets a hardcoded path to something called ELF interpreter.
readelf -l program
shows something like
[Requesting program interpreter: /lib/ld-linux.so.2]
I traced this library ld-linux.so.2 to be part of glibc. I am not very
happy with this behaviour because it makes the binary very unportable
- if I change the location of /lib/ld-linux.so.2 the executable no
longer works and the only "fix" I found is to use the patchelf utility
from NixOS to change the hardcoded path to another hardcoded path. For
this reason I would like to link against a static version of the ld
library but such is not produced. And so this is my question, could
you please explain how could I build glibc so that it will produce a
static version of ld-linux.so.2 which I could later link to my
executables. I don't fully understand what this ld library does, but I
assume this is the part that loads other dynamic libraries (or at
least glibc.so). I would like to link my executables dynamically, but
I would like the dynamic linker itself to be statically built into
them, so they would not depend on hardcoded paths. Or alternatively I
would like to be able to set the path to the interpreter with
environment variable similar to LD_LIBRARY_PATH, maybe
LD_INTERPRETER_PATH. The goal is to be able to produce portable
binaries, that would run on any platform with the same ABI no matter
what the directory structure is.
Some background that may be relevant: I'm using Slackware 14 x86 to
build i686 compiler toolchain, so overall it is all x86 host and
target. I am using glibc 2.17 and gcc 4.7.x.
I would like to be able to set the path to the interpreter with environment variable similar to LD_LIBRARY_PATH, maybe LD_INTERPRETER_PATH.
This is simply not possible. Read carefully (and several times) the execve(2), elf(5) & ld.so(8) man pages and the Linux ABI & ELF specifications. And also the kernel code doing execve.
The ELF interpreter is responsible for dynamic linking. It has to be a file (technically a statically linked ELF shared library) at some fixed location in the file hierarchy (often /lib/ld.so.2 or /lib/ld-linux.so.2 or /lib64/ld-linux-x86-64.so.2).
The old a.out format from the 1990s had a builtin dynamic linker, partly implemented in the old Linux 1.x kernels. It was much less flexible and much less powerful.
By accepting an (in principle) arbitrary dynamic linker path, the kernel makes it possible to have various dynamic linkers, but most systems have only one. This is a good way to parameterize the dynamic linker: if you want to try another one, install it in the file system and generate ELF executables mentioning that path.
With great pain and effort, you might make your own ld.so-like dynamic linker implementing your LD_INTERPRETER_PATH wish, but that linker still has to be an ELF shared library sitting at some fixed location in the file tree.
If you want a system not needing any files (at some predefined, wired-in locations, like /lib/ld.so, /dev/null, /sbin/init ...), you'll need to build all its executable binaries statically. You may want (but current Linux distributions usually don't do that) to have a few statically linked executables (like /sbin/init, /bin/sash...) that will enable you to repair a system broken to the point of not having any dynamic linker.
BTW, the /sbin/init (or /bin/sh) path is wired into the kernel itself. You may pass an argument to the kernel at boot time (e.g. with GRUB) to override the default. So even the kernel wants some files to be there!
As I commented, you might look into MUSL-Libc for an alternative Libc implementation (providing its own dynamic linker). Read also about VDSO and ASLR and initrd.
In practice, accept the fact that modern Linuxes and Unixes are expecting some non-empty file system ... Notice that dynamic linking and shared libraries are a huge progress (it was much more painful in the 1990s Linux kernels and distributions).
Alternatively, define your own binary format, then make a kernel module or a binfmt_misc entry to handle it.
BTW, most (or all) of Linux is free software, so you can improve it (but this will take months, or many years, of work). Please share your improvements by publishing them.
Read also Drepper's How to Write Shared Libraries paper, and this question.
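To see that the interpreter path really is baked into the executable's program headers (rather than looked up at run time), here is a small sketch that uses dl_iterate_phdr to print the running program's own PT_INTERP entry:
/* print_interp.c - print the PT_INTERP string of the main executable,
 * i.e. the hard-coded ELF interpreter path the kernel will load. */
#define _GNU_SOURCE
#include <link.h>
#include <stdio.h>

static int callback(struct dl_phdr_info *info, size_t size, void *data)
{
    /* The first object reported is the main program itself. */
    for (int i = 0; i < info->dlpi_phnum; i++) {
        if (info->dlpi_phdr[i].p_type == PT_INTERP) {
            const char *interp =
                (const char *)(info->dlpi_addr + info->dlpi_phdr[i].p_vaddr);
            printf("ELF interpreter: %s\n", interp);
        }
    }
    return 1;   /* non-zero stops the iteration after the first object */
}

int main(void)
{
    dl_iterate_phdr(callback, NULL);
    return 0;
}
Its output should match what readelf -l reports in the question above.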
I ran into the same issue. In my case I want to bundle my application with a different GLIBC than the system-installed one. Since ld-linux.so must match the GLIBC version, I can't simply deploy my application with the corresponding GLIBC. The problem is that I can't run my application on older installations that don't have the required GLIBC version.
The path to the interpreter can be modified with --dynamic-linker=/path/to/interp. However, this needs to be set at link time and therefore would require my application to be installed in that location (or at least I would need to deploy the ld-linux.so that goes with my GLIBC in that location, which goes against a simple xcopy deployment).
So what's needed is an $ORIGIN option equivalent to what the -rpath option can handle. That would allow for a fully dynamic deployment.
The lack of a dynamic interpreter path (at runtime) leaves two options:
a) Use patchelf to modify the path before the executable gets launched.
b) Invoke the ld-linux.so directly with the executable as an argument.
Both options are not as 'integrated' as a compiled $ORIGIN path in the executable itself.

Executing binaries without execve?

I saw it mentioned somewhere that one can "emulate" execve (primarily with open and mmap) in order to load some other binary (without the actual execve syscall).
Are there any already implemented examples for it?
Can we load both static and dynamic binaries?
Can it be done portably?
Such a feature may be useful for delegating work to arbitrary binaries while ignoring filesystem bits, or when a seccomp policy is installed that does not allow the actual execve.
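Implementations of this idea exist under the name "userland exec". Very roughly, the core of the trick is sketched below: the code only maps the PT_LOAD segments of a statically linked, non-PIE x86-64 binary and reports the entry point. All names are illustrative; a real implementation must also zero .bss precisely, build the initial stack (argc, argv, envp, auxv) and jump to the entry point, and it needs much more care for PIE and dynamically linked binaries.
/* ulexec_sketch.c - simplified sketch of loading an ELF with open+mmap.
 * Maps the PT_LOAD segments of a static, non-PIE x86-64 binary at their
 * requested addresses and prints the entry point; it does NOT jump there. */
#include <elf.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s <static-elf>\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    Elf64_Ehdr eh;
    if (pread(fd, &eh, sizeof eh, 0) != sizeof eh ||
        memcmp(eh.e_ident, ELFMAG, SELFMAG) != 0) {
        fprintf(stderr, "not an ELF file\n");
        return 1;
    }

    for (int i = 0; i < eh.e_phnum; i++) {
        Elf64_Phdr ph;
        if (pread(fd, &ph, sizeof ph, eh.e_phoff + i * sizeof ph) != sizeof ph)
            return 1;
        if (ph.p_type != PT_LOAD)
            continue;

        /* Map anonymous memory at the requested (page-aligned) address.
         * MAP_FIXED will clobber anything already mapped there. */
        uintptr_t page = ph.p_vaddr & ~0xfffUL;
        size_t    len  = (ph.p_vaddr - page) + ph.p_memsz;
        void *m = mmap((void *)page, len,
                       PROT_READ | PROT_WRITE | PROT_EXEC,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
        if (m == MAP_FAILED) { perror("mmap"); return 1; }

        /* Copy file contents; the rest of p_memsz stays zeroed (.bss). */
        pread(fd, (void *)ph.p_vaddr, ph.p_filesz, ph.p_offset);
        printf("mapped segment: vaddr=0x%lx filesz=0x%lx memsz=0x%lx\n",
               (unsigned long)ph.p_vaddr, (unsigned long)ph.p_filesz,
               (unsigned long)ph.p_memsz);
    }

    printf("entry point: 0x%lx (a real loader would now build the initial "
           "stack and jump here)\n", (unsigned long)eh.e_entry);
    close(fd);
    return 0;
}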
