Injecting code into Windows process running under Wine in Linux

I'm trying to inject code into a Windows process (a game) running under Wine on Linux (for learning purposes, of course). Not a pre-built DLL, but native code.
On Windows, we can use procedures like VirtualAllocEx and WriteProcessMemory to write code to the virtual memory of a foreign process and then create a thread in it via CreateRemoteThread, thus executing our code in the context of the foreign process.
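The Windows-side sequence described above reads roughly like this minimal sketch (hedged: all error handling is omitted, and the payload bytes and target PID are placeholders the caller must supply; real code must check every return value):

```c
#include <windows.h>

/* Sketch of the classic injection sequence: allocate, write, run. */
int inject(DWORD pid, const unsigned char *code, SIZE_T len) {
    HANDLE proc = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);
    if (!proc) return -1;

    /* 1. Allocate executable memory in the target process. */
    void *remote = VirtualAllocEx(proc, NULL, len,
                                  MEM_COMMIT | MEM_RESERVE,
                                  PAGE_EXECUTE_READWRITE);

    /* 2. Copy our code into that memory. */
    WriteProcessMemory(proc, remote, code, len, NULL);

    /* 3. Start a fresh thread in the target at the copied code. */
    HANDLE th = CreateRemoteThread(proc, NULL, 0,
                                   (LPTHREAD_START_ROUTINE)remote,
                                   NULL, 0, NULL);
    WaitForSingleObject(th, INFINITE);
    CloseHandle(th);
    CloseHandle(proc);
    return 0;
}
```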
This is a popular approach, which is often used by all sorts of cheats and trainers for games. The popular game hacking program Cheat Engine has its own API that does this trick.
To do the same on Linux, you need far more complex manipulations: attach to the process with ptrace, swap registers and make system calls to allocate memory, write the code there, then create a thread with the clone system call that will execute it. It is even better and safer if the thread created by clone spawns another thread via pthread_create, and that last thread is the one that actually runs our code.
For the thread created with clone to jump to the desired code, the address of that code is written to the end of the memory allocated for its stack, and the clone system call itself is performed with the syscall instruction followed by a ret instruction. This combination of instructions is easily found in libc.so.
I found an example in C on Github on how to do it correctly and implemented my own program. I won't include its source code here because it's quite large. Let me just say that it works as expected with native Linux processes. Moreover, I can also inject code that calls Linux routines (for example, puts from libc.so) into the Wine process and see the correct result:
lea rdi, [text]
call <libc.so code vaddr + `puts` offset>
ret
text:
db 'puts called successfully!', 0
But when I try to call any Windows procedure from the thread created by pthread_create (whether a procedure from the game code or, say, MessageBoxA from user32.dll), the thread hangs:
sub rsp, 32
xor rcx, rcx
lea rdx, [text]
lea r8, [caption]
mov r9, 0
call <user32.dll code vaddr + `MessageBoxA` offset>
add rsp, 32
ret
text:
db 'MessageBoxA called successfully!', 0
caption:
db 'Yaaay!', 0
But why is this happening? After all, Wine is not an emulator, but an API and system-call compatibility layer. Doesn't that mean the code itself runs natively on the processor, just like the code of any Linux program, and that I should be able to interoperate between them?
I suspect that the problem may be, for example, stack incompatibility. I once heard that Wine uses some kind of trick to switch stacks between Windows and Linux code, but I don't know exactly what that trick is, and I haven't been able to find and understand it in the Wine source code.
Could you explain to me exactly why my idea of calling Windows procedures from a thread created by pthread_create does not work and how can I make it work?

Related

Fastest Linux system call

On an x86-64 Intel system that supports syscall and sysret what's the "fastest" system call from 64-bit user code on a vanilla kernel?
In particular, it must be a system call that exercises the syscall/sysret user <-> kernel transition [1], but does the least amount of work beyond that. It doesn't even need to do the syscall itself: some type of early error which never dispatches to the specific call on the kernel side is fine, as long as it doesn't go down some slow path because of that.
Such a call could be used to estimate the raw syscall and sysret overhead independent of any work done by the call.
[1] In particular, this excludes things that appear to be system calls but are implemented in the VDSO (e.g., clock_gettime) or are cached by the runtime (e.g., getpid).
One that doesn't exist, and therefore returns -ENOSYS quickly.
From arch/x86/entry/entry_64.S:
#if __SYSCALL_MASK == ~0
	cmpq	$__NR_syscall_max, %rax
#else
	andl	$__SYSCALL_MASK, %eax
	cmpl	$__NR_syscall_max, %eax
#endif
	ja	1f			/* return -ENOSYS (already in pt_regs->ax) */
	movq	%r10, %rcx
	/*
	 * This call instruction is handled specially in stub_ptregs_64.
	 * It might end up jumping to the slow path. If it jumps, RAX
	 * and all argument registers are clobbered.
	 */
#ifdef CONFIG_RETPOLINE
	movq	sys_call_table(, %rax, 8), %rax
	call	__x86_indirect_thunk_rax
#else
	call	*sys_call_table(, %rax, 8)
#endif
.Lentry_SYSCALL_64_after_fastpath_call:
	movq	%rax, RAX(%rsp)
1:
Use an invalid system call number so the dispatching code simply returns with
eax = -ENOSYS instead of dispatching to a system-call handling function at all.
Unless this causes the kernel to use the iret slow path instead of sysret / sysexit. That might explain the measurements showing an invalid number being 17 cycles slower than syscall(SYS_getpid), because glibc error handling (setting errno) probably doesn't explain it. But from my reading of the kernel source, I don't see any reason why it wouldn't still use sysret while returning -ENOSYS.
This answer is for sysenter, not syscall. The question originally said sysenter / sysret (which was weird because sysexit goes with sysenter, while sysret goes with syscall). I answered based on sysenter for a 32-bit process on an x86-64 kernel.
Native 64-bit syscall is handled more efficiently inside the kernel. (Update: with Meltdown / Spectre mitigation patches, it still dispatches via C do_syscall_64 in 4.16-rc2.)
My What happens if you use the 32-bit int 0x80 Linux ABI in 64-bit code? Q&A gives an overview of the kernel side of system-call entry points from compat mode into an x86-64 kernel (entry_64_compat.S). This answer is just taking the relevant parts of that.
The links in that answer and this are to Linux 4.12 sources, which doesn't contain the Meltdown mitigation page-table manipulation, so that will be significant extra overhead.
int 0x80 and sysenter have different entry points. You're looking for entry_SYSENTER_compat. AFAIK, sysenter always goes there, even if you execute it in a 64-bit user-space process. Linux's entry point pushes a constant __USER32_CS as the saved CS value, so it will always return to user-space in 32-bit mode.
After pushing registers to construct a struct pt_regs on the kernel stack, there's a TRACE_IRQS_OFF hook (no idea how many instructions that amounts to), then call do_fast_syscall_32 which is written in C. (Native 64-bit syscall dispatching is done directly from asm, but 32-bit compat system calls are always dispatched through C).
do_syscall_32_irqs_on in arch/x86/entry/common.c is pretty light-weight: just a check if the process is being traced (I think this is how strace can hook system calls via ptrace), then
	...
	if (likely(nr < IA32_NR_syscalls)) {
		regs->ax = ia32_sys_call_table[nr]( ... arg );
	}
	syscall_return_slowpath(regs);
}
AFAIK, the kernel can use sysexit after this function returns.
So the return path is the same whether or not EAX had a valid system call number, and obviously returning without dispatching at all is the fastest path through that function, especially in a kernel with Spectre mitigation where the indirect branch on the table of function pointers would go through a retpoline and always mispredict.
If you want to really test sysenter/sysexit without all that extra overhead, you'll need to modify Linux to put a much simpler entry point without checking for tracing or pushing / popping all the registers.
You'd probably also want to modify the ABI to pass a return address in a register (like syscall does on its own) instead of saved on the user-space stack which Linux's current sysenter ABI does; it has to get_user() to read the EIP value it should return to.
Or if all this overhead is part of what you want to measure, you're definitely all set with an eax value that gives you -ENOSYS; at worst you'll get one extra branch miss from the range-check branch if the predictors are hot for normal 32-bit system calls.
In this benchmark by Brendan Gregg (linked from this blog post which is interesting reading on the topic) close(999) (or some other fd not in use) is recommended.
Some system calls don't even go through any user -> kernel transition; read vdso(7).
I suspect that these VDSO "system calls" (e.g. time(2), ...) are the fastest. You could argue that they are not "real" system calls at all.
BTW, you could add a dummy system call to your kernel (e.g. some system call always returning 0, or a hello world system call, see also this) and measure it.
I suspect (without having benchmarked it) that getpid(2) should be a very fast system call, because the only thing it needs to do is fetch some data from kernel memory. And AFAIK, it is a genuine system call, not using VDSO techniques. You could use syscall(2) to bypass any caching done by your libc and force the genuine system call.
I maintain my position (given in a comment to your initial question): without actual motivation your question does not make any concrete sense. That said, I still think that syscall(2) doing getpid measures the typical overhead of making a system call (and I guess that is what you really care about). In practice, almost all system calls do more work than such a getpid (or getppid).

64 bit PE injection - Suspend, inject, resume (x64 equivalent to changing EAX?)

All,
I'm trying to figure this out by endlessly debugging applications, but I can't seem to find my answer.
In my 32-bit PE injection I eventually change EAX to the new entry point of the injected PE, then resume the thread. I've read that the kernel runs a call EAX at the end to get to the entry point (I did not see this when debugging applications, so no idea if that is really the case).
However, I can't seem to find whether this is possible on x64 (I've tried about all the registers :)).
So all in all, two questions:
Does the kernel actually call EAX? I can't see that call when debugging.
Is the same method of changing a register to make the new entry point run usable on x64, or do I need to rely on e.g. CreateRemoteThread?
P.S.: I'm a security researcher :)
On x64, the RCX register is used as the application-defined entry point of the thread; on x86 it is EAX. And it is not the kernel that calls this address, but kernel32.dll.

Linux syscall strategy through vsyscall page

I am reading about VM handling on Linux. Apparently, to perform a syscall there's a page at 0xFFFFF000 on x86, called the vsyscall page. In the past, the strategy to make a syscall was to use int 0x80. Does this vsyscall-page strategy still use int 0x80 under the hood, or does it use a different strategy (e.g. the syscall opcode)? Collateral question: is the int 0x80 method outdated?
If you run ldd on a modern Linux binary, you'll see that it's linked to a dynamic library called linux-vdso.so.1 (on amd64) or linux-gate.so.1 (on x86), the vDSO, the modern successor of the vsyscall page. This is a shared library provided by the kernel and mapped into every process's address space; it contains C functions that encapsulate the specifics of how to perform a system call.
The reason for this encapsulation is that the "preferred" way to perform a system call can differ from one machine to another. The interrupt 0x80 method should always work on x86, but recent processors support the sysenter (Intel) or syscall (AMD) instructions, which are much more efficient. You want your programs to use those when available, but you also want the same compiled binary to run on both Intel and AMD (and other) processors, so it shouldn't contain vendor-specific opcodes. The linux-vdso/linux-gate library hides these processor-specific decisions behind a consistent interface.
For more information, see this article.

Which linux process handles syscalls?

This might be a silly question, but I was debugging a binary with gdb trying to "reverse engineer" it and reached an instruction that makes a syscall after which the effect I want to reverse engineer appears. I assume that another process is taking over and does the job so I was wondering if it was possible to debug the kernel code that handles the syscall with gdb.
Here is the x86-64 assembly snippet that makes the syscall (eax = 0x14, which is 20; note that in the 64-bit syscall table this is sys_writev, not sys_getpid as I first assumed):
0x00007ffff7660d3e <+14>: movsxd rdx,edx
0x00007ffff7660d41 <+17>: movsxd rdi,edi
0x00007ffff7660d44 <+20>: mov eax,0x14
0x00007ffff7660d49 <+25>: syscall
The syscall (or sysenter or int 0x80 etc...) machine instruction is for making syscalls which by definition are handled by the Linux kernel. Details are defined in the x86-64 ABI specification. Read Advanced Linux Programming to get an overview of most of them. See also Linux Assembly HowTo.
From the point of view of a user application, a syscall is a virtual atomic instruction.
No specific userland process is handling syscalls, it is the job of the kernel to handle them, and it is nearly the sole way for an application to interact with the kernel.
The processing of syscalls by the kernel for a given process is accounted as system CPU time, e.g. by time(1).
The list of documented syscalls is given in syscalls(2). See also <asm/unistd.h> and <asm/unistd_64.h> etc... headers.
You could use strace(1) to understand the sequence of syscalls done by a particular run (of some process).
See also vdso(7).

Can we modify the int 0x80 routine?

How does linux 2.6 differ from 2.4?
Can we modify the source kernel?
Can we modify the int 0x80 service routine?
UPDATE:
1. The 0x80 handler is essentially the same between 2.4 and 2.6, although in 2.6 the function called from the handler is also called by the 'syscall' instruction handler for x86-64.
2. The 0x80 handler can be modified like the rest of the kernel.
3. You won't break anything by modifying it, unless you remove backwards compatibility. E.g., you can add your own tracing or a backdoor if you feel so inclined. The other post saying that you will break your libs and toolchain if you modify the handler is incorrect; you will break things only if you break the dispatch algorithm or modify the dispatch table incorrectly.
3a. As I originally posted, the best way to extend the 0x80 service is to extend the system call handler.
As the kernel source says:
What: The kernel syscall interface
Description:
This interface matches much of the POSIX interface and is based
on it and other Unix based interfaces. It will only be added to
over time, and not have things removed from it.
Note that this interface is different for every architecture
that Linux supports. Please see the architecture-specific
documentation for details on the syscall numbers that are to be
mapped to each syscall.
The system call table entries for i386 are in:
arch/i386/kernel/syscall_table.S
Note that the table is a sequence of pointers, so if you want to maintain a degree of forward compatibility with the kernel maintainers, you'd need to pad the table before placement of your pointer.
The syscall vector number is defined in irq_vectors.h
Then traps.c sets the address of the system_call function via set_system_gate, which places the entry into the interrupt descriptor table. The system_call function itself is in entry.S, and calls the requested pointer from the system call table.
There are a few housekeeping details, which you can see reading the code, but direct modification of the 0x80 interrupt handler is accomplished in entry.S inside the system_call function. In a more sane fashion, you can modify the system call table, inserting your own function without modifying the dispatch mechanism.
In fact, having read the 2.6 source, it says directly that int 0x80 and x86-64 syscall use the same code, so far. So you can make portable changes for x86-32 and x86-64.
END Update
The INT 0x80 method invokes the system call table handler. This matches register arguments to a call table, invoking kernel functions based on the contents of the EAX register. You can easily extend the system call table to add custom kernel API functions.
This may even work with the new syscall code on x86-64, as it uses the system call table, too.
If you alter the current system call table in any manner other than to extend it, you will break all dependent libraries and code, including libc, init, etc.
Here's the current Linux system call table: http://asm.sourceforge.net/syscall.html
It's an architectural overhaul. Everything has changed internally. SMP support is complete, the process scheduler is vastly improved, memory management got an overhaul, and many, many other things.
Yes. It's open-source software. If you do not have a copy of the source, you can get it from your vendor or from kernel.org.
Yes, but it's not advisable: it will break libc, your baselayout, and your toolchain if you change the sequence of existing syscalls, and nearly everything you might think you want to do should be done in userspace whenever possible.
