I am implementing a Linux security sandbox for a custom bytecode interpreter using seccomp mode. To minimize the attack surface as much as possible, I want to run it in a completely clean virtual address space: I only need the code and data segments plus the stack, not vsyscall, vDSO, or vvar.
Is there any way to disable allocation of these pages for a given process?
Basically, no: you will have to disable vsyscall/vDSO globally if you want the mapping itself to be unavailable. If you only want the program to be unable to make vsyscall/vDSO calls, then seccomp can do that. Some caveats, though:
See https://www.kernel.org/doc/Documentation/prctl/seccomp_filter.txt
On x86-64, vsyscall emulation is enabled by default. (vsyscalls are
legacy variants on vDSO calls.) Currently, emulated vsyscalls will honor seccomp, with a few oddities:
A return value of SECCOMP_RET_TRAP will set a si_call_addr pointing to
the vsyscall entry for the given call and not the address after the
'syscall' instruction. Any code which wants to restart the call
should be aware that (a) a ret instruction has been emulated and (b)
trying to resume the syscall will again trigger the standard vsyscall
emulation security checks, making resuming the syscall mostly
pointless.
A return value of SECCOMP_RET_TRACE will signal the tracer as usual,
but the syscall may not be changed to another system call using the
orig_rax register. It may only be changed to -1 in order to skip the
currently emulated call. Any other change MAY terminate the process.
The rip value seen by the tracer will be the syscall entry address;
this is different from normal behavior. The tracer MUST NOT modify
rip or rsp. (Do not rely on other changes terminating the process.
They might work. For example, on some kernels, choosing a syscall
that only exists in future kernels will be correctly emulated, by
returning -ENOSYS.)
To detect this quirky behavior, check for addr & ~0x0C00 ==
0xFFFFFFFFFF600000. (For SECCOMP_RET_TRACE, use rip. For
SECCOMP_RET_TRAP, use siginfo->si_call_addr.) Do not check any other
condition: future kernels may improve vsyscall emulation and current
kernels in vsyscall=native mode will behave differently, but the
instructions at 0xF...F600{0,4,8,C}00 will not be system calls in these
cases.
Note that modern systems are unlikely to use vsyscalls at all -- they
are a legacy feature and they are considerably slower than standard
syscalls. New code will use the vDSO, and vDSO-issued system calls
are indistinguishable from normal system calls.
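As a concrete (and hedged) illustration of the detection rule quoted above, a SIGSYS handler installed for a SECCOMP_RET_TRAP filter could classify a trapped call roughly like this; the handler registration around it is assumed, not shown:

    /* Sketch: was this SIGSYS raised from the emulated vsyscall page? */
    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdint.h>

    static int is_emulated_vsyscall(const siginfo_t *si)
    {
        uintptr_t addr = (uintptr_t)si->si_call_addr;
        /* Entries sit at 0xFFFFFFFFFF600{0,4,8,C}00; masking bits 10-11
         * maps all four onto the page base, as the docs describe.      */
        return (addr & ~(uintptr_t)0x0C00) == 0xFFFFFFFFFF600000UL;
    }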
So emulated vsyscalls can be confined by seccomp, and vDSO-issued system calls are likewise confined. If you disable gettimeofday(), the confined program will not be able to invoke that system call through the emulated vsyscall, the vDSO, or a regular syscall instruction. If you confine them this way with seccomp, you shouldn't have to worry about the attack surface they create.
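For concreteness, here is a minimal sketch (mine, not from the original answer; x86-64 only) of a seccomp-BPF filter that kills the process whenever a gettimeofday call actually reaches the kernel, whichever entry path it took:

    #include <stddef.h>
    #include <linux/audit.h>
    #include <linux/filter.h>
    #include <linux/seccomp.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>

    /* Deny gettimeofday, allow everything else. */
    static int deny_gettimeofday(void)
    {
        struct sock_filter filter[] = {
            /* Check the architecture first; kill on unexpected ABIs. */
            BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                     offsetof(struct seccomp_data, arch)),
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, AUDIT_ARCH_X86_64, 1, 0),
            BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),
            /* Then match the syscall number. */
            BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                     offsetof(struct seccomp_data, nr)),
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_gettimeofday, 0, 1),
            BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),
            BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
        };
        struct sock_fprog prog = {
            .len = sizeof filter / sizeof filter[0],
            .filter = filter,
        };

        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
            return -1;
        return prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
    }

One caveat about the sketch: the vDSO's gettimeofday usually answers from the vvar data without entering the kernel at all, so the filter only sees the calls that fall back to a real syscall.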
If you are worried about an attacker exploiting the vDSO mapping itself (which doesn't require calling a syscall), then I don't believe there's a reliable way to disable it on a per-process basis. You can prevent it from being linked in, but it would be hard to prevent a compromised bytecode interpreter from mapping it back in. You can boot with the vdso=0 kernel parameter, though, which disables it globally, so linking it in would do nothing.
I'm seeking a possible solution for achieving Kernel Mode Linux without modifying glibc.
The project, called "Kernel Mode Linux on aarch64", makes specified processes (e.g., programs under /trusted/) execute in kernel mode, rather than all processes. This speeds up system call invocation. The background research is from the Toshiyuki Maeda website and sonicyang/KML.
If a user program executes in kernel mode, it can call syscall functions directly (as in a monolithic kernel). However, the syscall path is hard-wired in arm64 glibc: a syscall eventually executes "svc 0", which causes an "Instruction Abort" exception (see # define INTERNAL_SYSCALL_RAW(name, nr, args...) \ in sysdeps/unix/sysv/linux/aarch64/sysdep.h). Of course, there is the vDSO (vsyscall) route, but the current implementation doesn't give most syscall functions the option to go the vsyscall way.
In this situation, I have two modification plans, but both are missing a critical step.
1. Modify INTERNAL_SYSCALL_RAW in glibc to multiplex between a syscall and a dl-call (or vsyscall). How can I determine whether the process is in kernel mode or user mode without heavy overhead? (mrs x0, CurrentEL isn't allowed in EL0.)
2. Replace svc 0 with bl dl-call when the ELF loader loads the binary. The program is loaded by the ELF loader and we set it to kernel mode; no problem there. But as we know, libc.so is a dynamic link library: it is kept as a single copy in the VMA, and other normal user programs use it too. How can I deal with this situation? Compiling statically would work, but the resulting size is really not acceptable.
My understanding here is limited, so please drop me any practical idea.
After some research: option 1 can work well, as long as you compile a customized glibc. A program that runs in kernel mode must link against that customized glibc; the system's glibc is unaffected.
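For illustration, the multiplexing could look roughly like the sketch below. This is my hedged reconstruction, not KML's actual code: kml_direct_syscall and __kml_in_kernel_mode are invented names, and the real change would live inside the INTERNAL_SYSCALL_RAW macro in sysdeps/unix/sysv/linux/aarch64/sysdep.h rather than in a standalone function. The point is that the mode check can be a cheap load-and-branch on a flag the loader sets once at startup, avoiding a privileged read like mrs x0, CurrentEL:

    /* Assumed to be provided by the KML side: a direct in-kernel call. */
    extern long kml_direct_syscall(long nr, long a0, long a1, long a2,
                                   long a3, long a4, long a5);

    /* Set once by the custom loader when the process starts in kernel
     * mode (e.g. via an auxv entry), so each call pays only a load and
     * a well-predicted branch.                                         */
    extern int __kml_in_kernel_mode;

    static inline long
    internal_syscall_raw(long nr, long a0, long a1, long a2,
                         long a3, long a4, long a5)
    {
      if (__builtin_expect(__kml_in_kernel_mode, 0))
        return kml_direct_syscall(nr, a0, a1, a2, a3, a4, a5);

      /* Ordinary aarch64 syscall: number in x8, arguments in x0-x5. */
      register long _x8 __asm__("x8") = nr;
      register long _x0 __asm__("x0") = a0;
      register long _x1 __asm__("x1") = a1;
      register long _x2 __asm__("x2") = a2;
      register long _x3 __asm__("x3") = a3;
      register long _x4 __asm__("x4") = a4;
      register long _x5 __asm__("x5") = a5;
      __asm__ volatile("svc 0"
                       : "+r"(_x0)
                       : "r"(_x8), "r"(_x1), "r"(_x2), "r"(_x3),
                         "r"(_x4), "r"(_x5)
                       : "memory");
      return _x0;
    }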
I'm under the impression that the Linux kernel's attempts to protect itself revolve around not letting malicious code run in kernelspace. In particular, if a malicious kernel module were to get loaded, it would be "game over" from a security standpoint. However, I recently came across a post that contradicts this belief and says that there is some way the kernel can protect parts of itself from other parts of itself:
There are plenty of mechanisms to protect you against malicious modules. I write kernel code for fun, so I have some experience in the field; it's basically a flag in the page table.
What's there to stop any kernel module from changing that flag in the page table back? The only protection against malicious modules is keeping them from loading at all. Once one loads, it's game over.
Make the page table read-only. Done.
Kernel modules can just make it read-write again, the same way your code made it read-only, then carry on with their changes.
You can actually lock this down so that kernel mode cannot modify the page table until an interrupt occurs. If your IDT is read-only as well there is no way for the module to do anything about it.
That doesn't make any sense to me. Am I missing something big about how kernel memory works? Can kernelspace code restrict itself from modifying the page table? Would that really prevent kernel rootkits? If so, then why doesn't the Linux kernel do that today to put an end to all kernel rootkits?
If the malicious kernel code is loaded in the trusted way (e.g. loading a kernel module and not exploiting a vulnerability) then no: kernel code is kernel code.
Intel CPUs do have a series of mechanisms to disable read/write access to kernel memory:
CR0.WP, if set, disallows write accesses to both user and kernel read-only pages. Used to detect bugs in kernel code.
CR4.PKE, if set (4-level paging must be enabled, which is mandatory in 64-bit mode), disallows kernel accesses (not including instruction fetches) to user-mode pages unless they are tagged with the right key (which marks their RW permissions). Used to allow the kernel to write to structures like the vDSO and KUSER_SHARED_DATA but not to other user-mode structures. The key permissions are in a register, not in memory; the keys themselves are in the page-table entries.
CR4.SMEP, if set, disallows kernel instruction fetches from user-mode pages. Used to prevent attacks where a kernel function pointer is redirected to a user-mode allocated page (e.g., the nelson.c privilege-escalation exploit).
CR4.SMAP, if set, disallows kernel access to user-mode pages: implicit accesses are always blocked, and explicit accesses are blocked too while EFLAGS.AC=0 (thus overriding the protection keys). Used to enforce a stricter no-user-mode-access policy.
Of course, the R/W and U/S bits in the paging structures control whether an entry is read-only or read-write and whether it is assigned to user or supervisor.
You can read how permissions are applied for supervisor-mode accesses in the Intel manual:
Data writes to supervisor-mode addresses.
Access rights depend on the value of CR0.WP:
- If CR0.WP = 0, data may be written to any supervisor-mode address.
- If CR0.WP = 1, data may be written to any supervisor-mode address with a translation for which the R/W flag (bit 1) is 1 in every paging-structure entry controlling the translation; data may not be written to any supervisor-mode address with a translation for which the R/W flag is 0 in any paging-structure entry controlling the translation.
So even if the kernel protected a page X as read-only and then protected the paging structures themselves as read-only, a malicious module could simply clear CR0.WP.
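To make that concrete, here is a minimal sketch (mine, not from the answer) of what ring 0 code can do; note that since Linux 5.3 the kernel's own write_cr0() helper pins the WP bit, but raw inline assembly is not subject to that:

    /* Illustrative only: from ring 0, CR0 is directly writable. */
    static inline unsigned long rd_cr0(void)
    {
        unsigned long v;
        asm volatile("mov %%cr0, %0" : "=r"(v));
        return v;
    }

    static inline void wr_cr0(unsigned long v)
    {
        asm volatile("mov %0, %%cr0" : : "r"(v) : "memory");
    }

    static void disable_wp(void)
    {
        /* Clear CR0.WP (bit 16); supervisor writes then ignore the
         * R/W=0 protection on "read-only" kernel mappings.          */
        wr_cr0(rd_cr0() & ~(1UL << 16));
    }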
It could also change CR3 and use its own paging structures.
Note that Intel developed SGX to address the threat model where the kernel itself is evil.
However, running kernel components inside enclaves in a secure way (i.e., with no single point of failure) may not be trivial.
Another approach is virtualizing the kernel with the VMX extension, though this is by no means trivial to implement.
Finally, the CPU has four protection levels at the segmentation layer, but paging has only two: supervisor (CPL < 3) and user (CPL = 3).
It is theoretically possible to run a kernel component in "Ring 1", but then you'd need to make an interface (e.g., something like a call gate or a syscall) for it to access the other kernel functions.
It's easier to run it in user mode altogether (since you don't trust the module in the first place).
I have no idea what this is supposed to mean:
You can actually lock this down so that kernel mode cannot modify the page table until an interrupt occurs.
I don't recall any mechanism by which interrupt handling would lock/unlock anything.
I'm curious, though; if anybody can shed some light, they are welcome.
Security in x86 CPUs (though this can be generalized) has always been hierarchical: whoever comes first sets up the constraints for whoever comes later.
There is usually little to no protection between non-isolated components at the same hierarchical level.
I'm studying "Operating Systems" for the first time. In my book I found this sentence about "user mode" and "kernel mode":
"Switch from user to kernel mode" instruction is executed only in kernel
mode
I think that is an incorrect sentence, as in practice there is no "switch of kernel". In fact, when a user process needs a privileged operation done, it simply asks the kernel to do something for it. Is that correct?
In fact, when a user process needs a privileged operation done, it simply asks the kernel to do something for it.
But how does that happen? The details are processor- (i.e., instruction set architecture-) and OS-specific (explained in the ABI specifications relevant to your system, e.g. here), but they usually involve some machine instruction like SYSENTER or SYSCALL (or SVC on mainframes) capable of atomically changing the CPU mode (that is, switching it in a controlled manner to kernel mode). The actual parameters of the system call (even the syscall number) are often passed in registers (but the details are ABI-specific).
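To make that concrete, here is a minimal sketch (mine, assuming x86-64 Linux) of what a libc wrapper boils down to: the syscall number goes in rax, the arguments in rdi, rsi, rdx, and the syscall instruction performs the controlled switch into kernel mode:

    /* Raw write(2) on x86-64 Linux, no libc wrapper involved. */
    static long raw_write(int fd, const void *buf, unsigned long len)
    {
        long ret;
        register long r_dx asm("rdx") = (long)len;
        asm volatile("syscall"
                     : "=a"(ret)
                     : "a"(1 /* __NR_write on x86-64 */),
                       "D"((long)fd), "S"(buf), "r"(r_dx)
                     : "rcx", "r11", "memory"); /* syscall clobbers rcx/r11 */
        return ret;
    }

    int main(void)
    {
        raw_write(1, "hello\n", 6);
        return 0;
    }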
So I feel the concept of switching from user-mode to kernel-mode is relevant, and meaningful (so "correct").
BTW, user-mode code is forbidden (by the hardware) from executing privileged machine instructions, such as those interacting with I/O hardware devices (read about protection rings). If you try, you get some hardware exception (a bit similar to an interrupt). Hence your code (even if it is malicious) has to make system calls, which the kernel controls (it has lots of code related to permission checking), for e.g. all I/O.
Read also Operating Systems: Three Easy Pieces (freely downloadable). See also http://osdev.org/. Read the system call wikipage, syscalls(2), and the Assembler HowTo.
In real life, things are much more complex. Read about System Management Mode and about the (scary) Intel Management Engine.
I understand that system calls exist to provide access to capabilities that are disallowed in user space, such as accessing an HDD using the read() system call. I also understand that these are abstracted by a user-mode layer in the form of library calls such as fread(), to provide compatibility across hardware.
So from the application developer's point of view, we have something like:
//library //syscall //k_driver //device_driver
fread() -> read() -> k_read() -> d_read()
My question is: what is stopping me from inlining all the instructions of the fread() and read() functions directly into my program? The instructions are the same, so the CPU should behave in the same way? I have not tried it, but I assume that this does not work for some reason I am missing; otherwise any application could get arbitrary kernel-mode operation.
TL;DR: What allows system calls to 'enter' kernel mode in a way that cannot be copied by an application?
System calls do not enter the kernel by themselves. More precisely, the read function you call is still, as far as your application is concerned, a library call. What read(2) does internally is invoke the actual system call using some interrupt or the syscall assembly instruction, depending on the CPU architecture and OS.
This is the only way for userland code to get privileged code executed, but it is an indirect way. The userland and kernel code execute in different contexts.
That means you cannot add the kernel source code to your userland code and expect it to do anything useful but crash. In particular, the kernel code has access to the physical memory addresses required to interact with the hardware. Userland code is limited to a virtual memory space that lacks this capability. Also, the instructions userland code is allowed to execute are a subset of the ones the CPU supports. Several I/O, interrupt, and virtualization related instructions are examples of prohibited code. They are known as privileged instructions and require being in a lower ring or supervisor mode, depending on the CPU architecture.
You could inline them. You can issue system calls directly through syscall(2), but that soon gets messy. Note that the system call overhead (context switches back and forth, in-kernel checks, ...), not to mention the time the system call itself takes, makes any gain from inlining disappear in the noise (if there is a gain at all: more code means the cache is less useful, and performance suffers). Trust the libc/kernel folks to have studied the matter and done the inlining for you behind your back (in the relevant *.h file) if it really is a measurable gain.
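For reference, bypassing the fread()/read() wrappers with syscall(2), as suggested above, looks like this:

    /* Issue write(2) directly through the syscall(2) wrapper. */
    #define _GNU_SOURCE
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void)
    {
        syscall(SYS_write, 1, "hi\n", 3);
        return 0;
    }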
I am writing a JIT on ARM Linux that executes an instruction set that contains self-modifying code. The instruction set does not have any cache flush instructions (similar to x86 in that respect).
If I write out some code to a page and then call mprotect on that page, is that sufficient to invalidate the instruction cache? Or do I also need to use the cacheflush syscall on those pages?
You'd expect that the mmap/mprotect syscalls would establish mappings that are updated immediately, and need no further interaction to use the memory ranges as specified. I see that the kernel does indeed flush caches on mprotect. In that case, no cache flush would be required.
However, I also see that some versions of libc do call cacheflush after mprotect, which would imply that some environments do need the caches flushed (or did previously). I'd guess that this is a workaround for a bug.
You could always add the call to cacheflush; although it's extra code, it shouldn't be too harmful - at worst, the caches will already have been flushed. You could always write a quick test and see what happens...
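A belt-and-braces version of the sequence under discussion might look like the sketch below (my code, untested; the A32 machine-code words are just a tiny stub returning 42). __builtin___clear_cache is the GCC/Clang builtin that performs the required cache maintenance, issuing the cacheflush syscall on ARM Linux where needed:

    #include <stdint.h>
    #include <string.h>
    #include <sys/mman.h>

    typedef int (*jit_fn)(void);

    int main(void)
    {
        /* mov r0, #42 ; bx lr   (A32 encodings) */
        uint32_t code[] = { 0xe3a0002a, 0xe12fff1e };

        void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED)
            return 1;
        memcpy(page, code, sizeof code);

        /* Explicit icache/dcache maintenance before making it executable. */
        __builtin___clear_cache((char *)page, (char *)page + sizeof code);

        if (mprotect(page, 4096, PROT_READ | PROT_EXEC))
            return 1;

        return ((jit_fn)page)();  /* should return 42 */
    }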
In Linux specifically, mprotect DOES flush all relevant caches, since at least version 2.6.39 (and even before that, for sure). You can see it in the code:
https://elixir.bootlin.com/linux/v2.6.39.4/source/mm/mprotect.c#L122
If you are writing POSIX-portable code, I would call cacheflush anyway, as the standard C library demands no such behavior from the kernel or from the implementation.
Edit: You should also be careful and check what flush_cache_range does on the specific architecture you are targeting, as on some architectures (like ARM64) this function does nothing...
I believe you do not have to explicitly flush the cache.
Which processor is this? ARMv5? ARMv7?