How can ASLR be effective?

I've heard the theory. Address Space Layout Randomization takes libraries and loads them at randomized locations in the virtual address space, so that in case a hacker finds a hole in your program, he doesn't have a pre-known address to execute a return-to-libc attack against, for example. But after thinking about it for a few seconds, it doesn't make any sense as a defensive measure.
Let's say that our hypothetical TargetLib (libc or anything else the hacker is looking for) is loaded at a randomized address instead of a deterministic one. Now the hacker doesn't know ahead of time where TargetLib and the routines inside it are, but neither does the application code. It needs to have some sort of lookup table somewhere in the binary in order to find the routines inside of TargetLib, and that has to be at a deterministic location. (Or at a random location, pointed to by something else. You can add as many indirections as you want, but eventually you have to start at a known location.)
This means that instead of pointing his attack code at the known location of TargetLib, all the hacker needs to do is point his attack code at the application's lookup table's entry for TargetLib and dereference the pointer to the target routine, and the attack proceeds unimpeded.
Is there something about the way ASLR works that I don't understand? Because as described, I don't see how it's anything more than a speed bump, providing the image of security but no actual substance. Am I missing something?

I believe that this is effective because it changes the base address of the shared library. Recall that imported functions from a shared library are patched into your executable image when it is loaded; therefore there is no table per se, just specific addresses pointing at data and code scattered throughout the program's code.
It raises the bar for an effective attack because it turns a simple buffer overrun (where the return address on the stack can be set) into one where the overrun must contain code to determine the correct location and then jmp to it. Presumably this just makes it harder.
Virtually all DLLs in Windows are compiled for a base address that they will likely not run at and will be moved anyway, but the core Windows ones tend to have their base address optimized so that the relocation is not needed.
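To make that concrete, here is a hedged language-level sketch (not how the Windows loader actually patches imports) of the indirection the question describes: a slot the loader fills in at load time, which an exploit would have to read at run time instead of using a fixed target address:

    #include <stdio.h>

    /* Hypothetical "imported" routine standing in for a TargetLib function. */
    static void target_routine(void) { puts("inside TargetLib"); }

    /* Load-time-patched slot: the loader writes the randomized address here.
       If the executable itself is not randomized, this slot sits at a fixed
       address, which is exactly the indirection the question proposes
       attacking. */
    static void (*import_slot)(void);

    int main(void) {
        import_slot = target_routine;   /* "relocation" applied at load time */
        import_slot();                  /* call through the slot */
        printf("slot lives at %p, points to %p\n",
               (void *)&import_slot, (void *)import_slot);
        return 0;
    }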

I don't know if I got your question correctly, but I'll explain when ASLR is effective and when it is not.
Let's say that we have app.exe and TargetLib.dll.
app.exe uses (is linked to) TargetLib.dll.
To make the explanation simple, let's assume that the virtual address space only has these 2 modules.
If both are ASLR enabled, app.exe's base address is unknown. It may resolve some function call addresses when it is loaded, but an attacker knows neither where the functions are nor where the resolved variables are. The same thing happens when TargetLib.dll is loaded.
Even though app.exe has a lookup table, an attacker does not know where the table is.
Since an attacker cannot tell what the contents of any specific address are, he must attack the application without using any fixed address information. That is usually harder with the usual attack methods: stack overflow, heap overflow, use-after-free, and so on.
On the other hand, if app.exe is NOT ASLR enabled, it is much easier for an attacker to exploit the application, because there may be a function call to an interesting API at a specific address in app.exe, and the attacker can use that address as a target address to jump to. (Attacking an application usually starts with jumping to an arbitrary address.)
Supplementation:
You may already understand it but I want to make one thing clear.
When an attacker exploits an application through a vulnerability like memory corruption, he is usually forced to use fixed-address jump instructions. He cannot use relative-address jump instructions in the exploit. This is the reason why ASLR is really effective against such exploits.
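You can observe the randomization directly; a minimal C program (shown for a POSIX-ish system, but the same idea applies to app.exe/TargetLib.dll) prints different addresses on every run when ASLR is enabled:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        int local = 0;
        /* With ASLR on, both values change between runs; with it off
           (e.g. via "setarch -R" on Linux), they repeat. */
        printf("libc memcpy at %p\n", (void *)memcpy);
        printf("stack local at %p\n", (void *)&local);
        return 0;
    }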

Related

Does the .so file still contain information about labels?

In what phase of compilation does the compiler replace labels with actual addresses?
My understanding of instructions like jmp abc is that abc is just a label and will eventually be replaced with an actual address. Is that right?
Does the final .so file still contain information about labels, or are the labels replaced with actual addresses when it is loaded into memory?
TL;DR - your question is hard to answer, because it mixes a few concepts. For typical assembler labels, we use PC-relative addressing, and labels are resolved at assembly time. For other 'external' labels, there are many cases, and the resolution depends on the case.
There are four conceptual ways to address on almost all CPUs, and definitely on the ARM.
PC relative address. Current instruction +/- offset.
Absolute address. This is the one you are conceptually thinking of.
Register computed address. Calculated at run time. ldr pc, [rn, #xx]
Table based addressing. Global offset table, etc. Much like register-computed addresses. ldr pc, [Rbase, Rindex, lsl #2]
The first two fit in a single instruction and are very efficient. The first is most desirable, as the code can execute at ANY address as long as it maintains its original layout (i.e., you don't load it by splitting the code up).
In the list above, there is also the concept of 'build time' versus 'run time'. The distinction is the difference between a linker and a loader. You have tagged this 'linux' and refer to an 'so', or shared library. You also refer to assembler 'labels'. These are very similar concepts, but they can differ, as each will fall into one of the four classes of addressing above.
Most often in assembler, labels are PC-relative. No additional structure needs to be implemented for PC-relative addressing, except to keep the chunk of code contiguous. When assembler source forms a 'module' (a compilation unit, for a compiler) and is processed by the assembler to produce an 'object', it will use PC-relative addressing.
The object format can be annotated with external addresses, and there are many choices in how an assembler may output these addresses; they are generally controlled by 'pseudo-ops'. The annotation is a note (a separate section with a defined format) in the object file, and the instruction is semi-complete in this form. It may prepare to use an offset table, a register-based computation (like r9 + constant), etc.
For the typical case of linking (done at build time), we will use either PC-relative or absolute addressing. If we fix our binary to run at only one address, the assembler can set up for absolute addressing and resolve these through linking. In this case, the binary must be loaded at a fixed address. The assembler 'modules' or object files can be completely glued together with everything resolved, and then there are no 'load'-time fix-ups. Other determining factors are whether code and data are separate, and whether the system is using an MMU. It is often desirable to keep code constant, so that many processes can use the same RAM/ROM pages while they have separate data. As well as being memory efficient, this can provide some form of security (although not an extremely robust one): it will prevent accidental code overwrites and will provide debugging help in the form of SIGSEGV.
It is possible to write a PC-relative initialization routine which will do the fix-ups to create a table in your own binary. A 'loader' then only has to determine where you are running and do the calculations. For statically shared libraries, you typically know which libraries you will run, but not where they are. For dynamically shared libraries, you might not even know at compile time what library you will run.
A Linux distribution can use either. If you have some sort of standard Linux desktop distribution (Ubuntu/Debian, Red Hat, etc.), you will have something based on the ARM ELF LSB and dynamic shared libraries. You need to use the assembler pseudo-ops to support this type of addressing, or use a compiler to do it for you. The majority of all 'labels' in a shared library will be PC-relative and not show up. Some labels can show up for debugging reasons (man strip), and some are absolutely needed to resolve addresses at run time.
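As a rough illustration (a sketch; the exact instruction sequences vary by toolchain and target), a position-independent C object keeps local calls PC-relative while routing external symbols through the GOT/PLT:

    /* lib.c -- build with: cc -fPIC -shared -o lib.so lib.c */
    #include <stdio.h>

    static void local_helper(void) { /* local label */ }

    void exported(void) {
        local_helper();   /* typically a PC-relative call; resolved at
                             assembly/link time, no run-time fix-up */
        puts("hello");    /* external symbol; typically a call through
                             the PLT, with the address resolved via the
                             GOT at load/run time */
    }

    /* Inspect with: objdump -dr lib.so   (look for "call ...@plt") */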
I also asked a related question some time ago, Using GCC pre-processor as an assembler... The key concept is that the assembler is generally 'two pass' and needs to do these local address fix-ups. This question then adds a second level to the concept: shared libraries. The online book Linkers and Loaders is a great resource if you want to know more.
See also:
Static linked shared libraries
Thumb start function
What is the point of busybox?
The final executable has to have all addresses; otherwise it would not work.
The thing to remember is that there is static linking and dynamic linking (e.g. using shared libraries). In the case of static linkage, the binary file has all addresses resolved. In the case of dynamic linkage, addresses are resolved during loading: the binary carries relocation information that is replaced with actual addresses by the dynamic linker. But at the end of the day, the loaded binary in memory has all addresses.
In what phase of compilation does the compiler replace labels with actual addresses?
The compiler can replace a label with an actual address when it knows the destination address, for example, for a call to a function in the same compilation unit.
When the destination address is outside the compilation unit and out of reach for the compiler, the compiler leaves relocation information in the object file. The linker replaces that with an actual in-memory address at link time.
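A minimal sketch of the two cases (identifiers are illustrative; the relocation type named in the comment is the typical one on x86-64):

    /* unit.c -- compile only: cc -c unit.c */
    static void local_fn(void) {}
    void external_fn(void);   /* defined in some other compilation unit */

    void caller(void) {
        local_fn();      /* destination known: the compiler/assembler can
                            emit a final PC-relative displacement */
        external_fn();   /* destination unknown: the object file carries a
                            relocation (e.g. R_X86_64_PLT32 on x86-64),
                            filled in by the linker */
    }

    /* objdump -r unit.o shows the relocation entry for external_fn. */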

Running a brief asm script inline for dynamic analysis

Is there any good reason not to run a brief unknown (30 line) assembly script inline in a usermode c program for dynamic analysis directly on my laptop?
There's only one system call (to time), and at this point I can tell that it's a function that takes a C string and its length, and performs some sort of encryption on it in a loop, which only iterates through the string as far as the length argument tells it to.
I know that the script is (supposed to be) from a piece of malicious code, but for the life of me I can't think of any way it could possibly pwn my computer barring some sort of hardware bug (which seems unlikely given that the loop is ~ 7 instructions long and the strangest instruction in the whole script is a shr).
I know it sounds bad running an unknown piece of assembly code directly on the metal, but given my analysis up to this point I can't think of any way it could bite me or escape.
Yes, you can, but I wouldn't recommend it.
The problem is not how dangerous the code is this time (assuming you really understand all of the code and can predict the outcome of any system call); the problem is that it's a slippery slope, and it's not worth it considering what's at stake.
I've done quite a few malware analyses, and it rarely happened that a piece of code caught me off guard, but it did happen.
Luckily I was working on a virtual machine within an isolated network: I just restored the last snapshot and stepped through the code more carefully.
If you do this analysis on your real machine, you may make a habit of it, and one day this will bite you back.
Working with VMs, albeit not as comfortable as using your OS's native GUI, is the way to go.
What could go wrong with running a 7-line assembly snippet?
I don't know, it really depends on the code, but here are a few things to be careful about:
Exceptions. An instruction may intentionally fault to pass control to an exception handler. This is why it is very important that you totally understand the code: both the instructions and the data.
System call exploits. A specially crafted input to a system call may trigger a 0-day or an unpatched vulnerability in your system. This is why it is important that you can predict the outcome of every system call.
Anti-debugger techniques. There are a lot of ways a piece of code can escape a debugger (I'm thinking of Windows debugging here); it's hard to remember them all, so be suspicious of everything.
I've just named a few. It's also possible that, catastrophically, a hardware bug could lead to privileged code execution, but if that's really a possibility, then nothing but a spare, sacrificial machine will do.
Finally, if you are going to run the malware up to a breakpoint on your machine (because I assume the work of extracting the code and its context is too much of a burden), think of what's at stake.
If you place the breakpoint on the wrong spot, if the malware takes another path, or if the debugger has a glitchy GUI, you may lose your data or the confidentiality of your machine.
In my opinion it is not worth it.
I had to make this premise for generality's sake, but we all sin a little, don't we?
I've never run a piece of malware on my machine, but I've stepped through some with a virtual machine connected directly to the company network.
It was a controlled move, nothing happened, the competent personnel were advised, and it was a happy ending.
This may very well be your case: it can just be a decryption algorithm and nothing more.
However, only you have the final responsibility to judge if it is acceptable to run the piece of code or not.
As I remarked above, in general it is not a good idea, and it presupposes that you really understand the code (something that is hard to do and to be honest about).
If you think these prerequisites are all satisfied then go ahead and do it.
Before that I would:
Create an unprivileged user and deny it access to my data and common folders (ideally deny it everything except what is necessary to make the program work).
Back up the critical data, if any.
Optionally
Make a restore point.
Take a hash of the system folders, a list of installed services, and the values of the usual startup registry keys (Sysinternals has a tool to enumerate them all).
After the analysis, you can check that nothing important system-wide has changed.
It may be helpful to subst a folder and put the malware there so that a dummy path traversal stops in that folder.
Isn't there a better solution?
I like using VMs for their snapshotting capabilities, though you may stumble into an anti-VM check (but those are usually really dumb checks, so it's easy to skip them).
For a 7-line assembly snippet, I'd simply rewrite it as a JS function and run it directly in a browser console.
You can simply turn each register into a variable and transcribe the code; you don't need to understand it globally, only locally (i.e., each instruction).
JS is handy if you don't have to work with 64-bit quantities, because you have an interpreter in front of you right now :)
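The same register-to-variable transcription works in any language with fixed-width integers. Here is a hedged C sketch of a made-up loop of the kind described (these instructions are invented for illustration, not taken from any actual sample):

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical transcription: esi = string pointer, ecx = length,
       al = current byte. Each C statement mirrors one instruction. */
    static void decrypt(uint8_t *buf, uint32_t len) {
        uint32_t ecx = len;          /* mov ecx, len   */
        uint8_t *esi = buf;          /* mov esi, buf   */
        while (ecx != 0) {
            uint8_t al = *esi;       /* mov al, [esi]  */
            al ^= 0x5a;              /* xor al, 0x5a   */
            al = (uint8_t)(al >> 1); /* shr al, 1      */
            *esi = al;               /* mov [esi], al  */
            esi++;                   /* inc esi        */
            ecx--;                   /* dec ecx        */
        }
    }

    int main(void) {
        uint8_t data[] = { 0xb5, 0xcb, 0xd9 };
        decrypt(data, 3);
        for (int i = 0; i < 3; i++) printf("%02x ", data[i]);
        putchar('\n');
        return 0;
    }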
Alternatively, I use any programming language I have at hand (one time even assembly itself; it seems paradoxical, but due to a nasty trick I had to convert a 64-bit piece of code to a 32-bit one and patch the malware with it).
You can use Unicorn to easily emulate a CPU (if the architecture is supported) and play with your shellcode without any risk.
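For example, a minimal sketch using Unicorn's documented C API (the two-byte code string is a placeholder for the snippet under analysis):

    /* emu.c -- build with: cc emu.c -lunicorn */
    #include <unicorn/unicorn.h>
    #include <stdio.h>

    int main(void) {
        /* Placeholder code: inc ecx; dec edx -- substitute your snippet. */
        const char code[] = "\x41\x4a";
        const uint64_t base = 0x1000000;

        uc_engine *uc;
        if (uc_open(UC_ARCH_X86, UC_MODE_32, &uc) != UC_ERR_OK) return 1;

        uc_mem_map(uc, base, 2 * 1024 * 1024, UC_PROT_ALL); /* scratch RAM */
        uc_mem_write(uc, base, code, sizeof(code) - 1);

        int ecx = 0x1234;
        uc_reg_write(uc, UC_X86_REG_ECX, &ecx);

        /* Emulate to the end of the snippet; nothing touches the host. */
        uc_emu_start(uc, base, base + sizeof(code) - 1, 0, 0);

        uc_reg_read(uc, UC_X86_REG_ECX, &ecx);
        printf("ecx = 0x%x\n", ecx);
        uc_close(uc);
        return 0;
    }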

Why is return-to-libc much earlier than return-to-user?

I'm really new to this topic and only know some basic concepts. Nevertheless, there is a question that quite confuses me.
Solar Designer proposed the idea of return-to-libc in 1997 (http://seclists.org/bugtraq/1997/Aug/63). Return-to-user, from my understanding, didn't become popular until 2007 (http://seclists.org/dailydave/2007/q1/224).
However, return-to-user seems much easier than return-to-libc. So my question is, why did hackers spend so much effort in building a gadget chain by using libc, rather than simply using their own code in the user space when exploiting a kernel vulnerability?
I don't believe that they did not realize there are NULL pointer dereference vulnerabilities in the kernel.
Great question, and thank you for taking some time to research the topic before making a post that causes its readers to lose hope in the future of mankind (you might be able to tell I've read a few [exploit] tags tonight).
Anyway, there are two reasons:
return-to-libc is generic, provided you have the offsets. There is no need to programmatically or manually build a return chain or scrape existing user functionality.
Partly because of linker-enabled relocations and partly because of history, the executable runtime of most programs executing on a Linux system essentially demands the libc runtime, or at least a runtime that can correctly handle _start and constructors. The same formula holds on Windows, just under the slightly different paradigm of return-to-kernel32/oleaut/etc., but it can actually be more immediately powerful there, particularly for shellcode with length requirements, for reasons relating to the way in which system calls are invoked indirectly through kernel32/SSDT functions.
As a side note, if you are discussing NULL pointer dereferences in the kernel, you may be confusing this with return-to-vDSO, which is actually subject to a different set of constraints that standard "mprotect and roll" userland is not.
Ret2libc or ROP is a technique you deploy when you cannot return to your shellcode (jmp esp/whatever) because of memory protections (DEP/NX).
They serve different goals, and you may use both.
Return to libc
This is a buffer-overrun exploit where you have control over some input, and you know there exists a badly written program that will copy your input onto the stack and return through it. This allows you to start executing code on a machine on which you have no login, or only very restricted access.
Return to libc gives you the ability to control what code is executed, and would generally result in you running a more natural piece of code within the process's address space.
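The classic 32-bit stack layout makes this concrete (a sketch; every address below is made up, and in a real exploit they come from the target's libc, leaked or known in advance):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical addresses -- in a real exploit these are leaked or
       known from a non-ASLR libc. Little-endian host assumed. */
    #define SYSTEM_ADDR 0xb7e42da0u  /* &system in libc (made up)      */
    #define EXIT_ADDR   0xb7e369d0u  /* &exit, the fake return address */
    #define BINSH_ADDR  0xb7f63a24u  /* "/bin/sh" string inside libc   */
    #define PAD         64           /* bytes up to the saved EIP      */

    int main(void) {
        uint8_t payload[PAD + 12];
        memset(payload, 'A', PAD);          /* filler up to return addr */
        uint32_t chain[3] = { SYSTEM_ADDR,  /* overwrites saved EIP     */
                              EXIT_ADDR,    /* "return" for system()    */
                              BINSH_ADDR }; /* argument to system()     */
        memcpy(payload + PAD, chain, sizeof chain);
        fwrite(payload, 1, sizeof payload, stdout);
        return 0;
    }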
Return from kernel
This is a privilege escalation attack. It takes code which is running on the machine, and exploits bugs in the kernel causing the kernel privileges to be applied to the user process.
Thus you may use return-to-libc to get your code running in a web-browser, and then return-to-user to perform restricted tasks.
Earlier shellcode
Before return-to-libc, there was straight shellcode, where the buffer overrun included the exploit code; with knowledge of where the stack would be, this could be run directly. This became somewhat obsolete with the NX bit in Windows. (x86 hardware was capable of enforcing execute permission on a segment, but not on a page.)
Why is return-to-libc much earlier than return-to-user?
The goal in these attacks is to own the machine. Frequently it was easy to find a vulnerability in a user-mode process that gave you the access you needed; it is only with the hardening of the system, by reducing the privilege at the boundary and fixing the bugs in important programs, that the newer return-from-kernel became necessary.

Copying garbage collection of generated code

Copying (generational) garbage collection offers the best performance of any form of automatic memory management, but it requires pointers to relocated chunks of data to be fixed up. This is made possible, in languages that support this memory-management technique, by disallowing pointer arithmetic and making sure all pointers point to the beginning of identifiable objects.
If you're generating code at run time with a JIT compiler, things look a bit trickier, because return addresses on the call stack will point not to the beginning of code blocks but to random locations within them, so fixup is a problem.
How is this typically solved?
Quite often, you don't relocate code. This is both because it is indeed complicated to fix up the stack and other addresses (think of jumps across code fragments), and because you don't actually need garbage collection for such code (as it is only manipulated by code you write anyway, you can manage its memory manually). You also don't expect to create a whole lot of machine code (compared to application objects), so fragmentation etc. are not a concern.
If you insist on moving machine code and fixing up the stack, there is a way, I think: Similar to Mark-Compact, build a "break table" (I have no idea where this name comes from; "relocation table" might be clearer) that tells you the amount by which you should adjust pointers to moved objects. Now, walk the stack for return addresses (highly platform-specific, of course) and fix them if they refer to relocated code. Instead of looking for exact matches, search for the highest address lower than the return address you're currently replacing. You can check that this address indeed refers to some machine code that moved by looking at the object size (you have a pointer to the start of the object, after all). This approach isn't feasible for all objects, for various reasons.
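A hedged sketch of that lookup (the stack walk itself is omitted, being highly platform-specific); the table is sorted by old address, so "the highest address lower than the return address" is a binary search:

    #include <stddef.h>
    #include <stdint.h>

    /* One entry per relocated code object, sorted by old_start. */
    typedef struct {
        uintptr_t old_start;  /* address before the copying collection */
        uintptr_t new_start;  /* address after                          */
        size_t    size;       /* object size, for the bounds check      */
    } BreakEntry;

    /* Fix one return address, or return it unchanged if it does not
       point into any moved code object. */
    static uintptr_t fix_return_address(uintptr_t ra,
                                        const BreakEntry *tab, size_t n) {
        size_t lo = 0, hi = n;  /* find the last entry with old_start <= ra */
        while (lo < hi) {
            size_t mid = lo + (hi - lo) / 2;
            if (tab[mid].old_start <= ra) lo = mid + 1; else hi = mid;
        }
        if (lo == 0) return ra;            /* ra is below all moved code */
        const BreakEntry *e = &tab[lo - 1];
        if (ra - e->old_start < e->size)   /* really inside this object? */
            return e->new_start + (ra - e->old_start);
        return ra;                         /* not in moved code */
    }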
There are other reasons to do something similar, though. Some JIT compilers feature on-stack replacement, which means creating a new version (e.g. more optimized, or less optimized) of some machine code and replacing all occurrences of the old version with it. This is far more complicated than just fixing the return addresses, though: you have to ensure the new version logically continues where the old one left off. I am not familiar with how this is implemented, so I will not go into detail.

How to modify an ELF file in a way that changes data length of parts of the file?

I'm trying to modify the executable contents of my own ELF files to see if this is possible. I have written a program that reads and parses ELF files, searches for the code that it should update, changes it, then writes it back after updating the sh_size field in the section header.
However, this doesn't work. If I simply exchange some bytes with other bytes, it works. However, if I change the size, it fails. I'm aware that some sh_offsets are immediately adjacent to each other; however, this shouldn't matter when I'm reducing the size of the executable code.
Of course, there might be a bug in my program (or more than one), but I've already painstakingly gone through it.
Instead of asking for help with debugging my program, I'm just wondering: is there anything other than the sh_size field I need to update in order to make this work (when reducing the size)? Is there anything other than that field that would make changing the length fail?
Edit:
It seems that Andy Ross was perfectly correct. Even in this very simple program I have come across some indirect addressing in __libc_start_main that I cannot trivially modify to update the offset it will reach.
I was curious, though: what would be the best approach to still get as far as possible with this problem? I know I cannot solve this in every case, but for some simple programs it should be possible to update what is required to make it run. Should I try writing my own virtual machine, or try developing a "debugger" that would replace each suspected problem instruction with INT 3? Any ideas?
The text segment is likely internally linked with relative offsets. So one function might be trying to jump to, say, "current address plus 194 bytes". If you move things around such that the jump target is now 190 bytes away, you will obviously break things. The same is true of constant data on some architectures (e.g. x86-64 but not i686). There is no simple way, short of a complete disassembly, to know where the internal references are, and in fact it's computationally undecidable to find them all (i.e. trying to figure out all possible jump targets of a runtime-computed branch is the Halting Problem).
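To see why, consider how a near call is encoded (a simplified sketch): the instruction stores only a displacement relative to the end of the instruction, so removing bytes between the call and its target without rewriting the displacement silently retargets the call:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* x86 rel32 call: E8 <disp32>; target = address_after_insn + disp */
        uintptr_t call_at = 0x1000;   /* illustrative addresses */
        uintptr_t target  = 0x10c2;
        int32_t disp = (int32_t)(target - (call_at + 5)); /* 5-byte insn */
        printf("disp32 = 0x%x\n", disp);

        /* Remove 4 bytes of code between the call and its target without
           rewriting disp, and the call now lands 4 bytes past the
           (moved) function: */
        uintptr_t new_target = target - 4;
        printf("call resolves to 0x%lx, function now at 0x%lx\n",
               (unsigned long)(call_at + 5 + disp),
               (unsigned long)new_target);
        return 0;
    }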
Basically, this isn't solvable in the general case, so if you have an ELF binary from someone else you're trying to patch, you'll need to try other techniques. But with (great!) care it's possible to produce a library where all internal references go through the GOT/PLT which can be sliced up and relinked like this. What are you trying to accomplish?
is there anything other than the sh_size field I need to update in order to make this work
It sounds like you are patching a fully-linked binary (ET_EXEC or ET_DYN). Please note that .sh_size is not used for anything after the static link is done. You can strip the entire section table, and the binary will continue to work fine. What matters at runtime are the segments in the ELF, not sections.
ELF stands for Executable and Linking Format, and the "executable" and "linking" views form the dual nature of ELF. Sections are used at (static) link time to be combined into segments, which are used at execution time (aka runtime, aka dynamic linking time).
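You can see the segment side of that dual nature with a minimal reader (a sketch assuming Linux <elf.h> and a 64-bit ELF, with minimal error handling); these program headers, not the section table, are what the loader consumes:

    /* phdrs.c -- print the program headers (segments); build: cc -o phdrs phdrs.c */
    #include <elf.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        if (argc != 2) { fprintf(stderr, "usage: %s elf-file\n", argv[0]); return 1; }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        Elf64_Ehdr eh;
        if (fread(&eh, sizeof eh, 1, f) != 1) { perror("fread"); return 1; }

        /* Walk the program header table: one entry per segment. */
        fseek(f, (long)eh.e_phoff, SEEK_SET);
        for (int i = 0; i < eh.e_phnum; i++) {
            Elf64_Phdr ph;
            if (fread(&ph, sizeof ph, 1, f) != 1) break;
            printf("segment %d: type=%u vaddr=0x%lx filesz=0x%lx memsz=0x%lx\n",
                   i, ph.p_type, (unsigned long)ph.p_vaddr,
                   (unsigned long)ph.p_filesz, (unsigned long)ph.p_memsz);
        }
        fclose(f);
        return 0;
    }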
Of course you haven't told us exactly what your patching strategy is when you are shrinking your binary, and in what way the result is broken. It is very likely that Andy Ross's answer is the real cause of your breakage.
