gettimeofday is a syscall on x86-64, according to this page (just search for gettimeofday in the box):
int gettimeofday(struct timeval *tv, struct timezone *tz);
I thought the disassembly would be simple enough: just prepare the two pointers and invoke the related syscall. But its disassembly is doing much more:
(gdb) disas gettimeofday
Dump of assembler code for function gettimeofday:
0x00000034f408c2d0 <gettimeofday+0>: sub $0x8,%rsp
0x00000034f408c2d4 <gettimeofday+4>: mov $0xffffffffff600000,%rax
0x00000034f408c2db <gettimeofday+11>: callq *%rax
0x00000034f408c2dd <gettimeofday+13>: cmp $0xfffff001,%eax
0x00000034f408c2e2 <gettimeofday+18>: jae 0x34f408c2e9 <gettimeofday+25>
0x00000034f408c2e4 <gettimeofday+20>: add $0x8,%rsp
0x00000034f408c2e8 <gettimeofday+24>: retq
0x00000034f408c2e9 <gettimeofday+25>: mov 0x2c4cb8(%rip),%rcx # 0x34f4350fa8 <free+3356736>
0x00000034f408c2f0 <gettimeofday+32>: xor %edx,%edx
0x00000034f408c2f2 <gettimeofday+34>: sub %rax,%rdx
0x00000034f408c2f5 <gettimeofday+37>: mov %edx,%fs:(%rcx)
0x00000034f408c2f8 <gettimeofday+40>: or $0xffffffffffffffff,%rax
0x00000034f408c2fc <gettimeofday+44>: jmp 0x34f408c2e4 <gettimeofday+20>
End of assembler dump.
And I don't see a syscall instruction at all.
Can anyone explain how it works?
gettimeofday() on Linux is what's called a vsyscall and/or vdso. Hence you see the two lines:
0x00000034f408c2d4 : mov $0xffffffffff600000,%rax
0x00000034f408c2db : callq *%rax
in your disassembly. The address 0xffffffffff600000 is the vsyscall page (on x86_64).
The mechanism maps a specific kernel-created code page into user memory, so that a few "syscalls" can be made without the overhead of a user/kernel context switch, but rather as an "ordinary" function call. The actual implementation is right here.
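For illustration, here's a minimal stand-alone sketch (NASM, x86-64; the file name and build line are made up) that calls gettimeofday through that fixed vsyscall address directly, much like the libc wrapper disassembled above does. It assumes the kernel still maps or emulates the legacy vsyscall page; real code should go through the vDSO or libc instead.
; build (hypothetical): nasm -felf64 vsys.asm && ld -o vsys vsys.o
global _start
section .bss
timeval: resq 2                      ; struct timeval { tv_sec; tv_usec; }
section .text
_start:
    lea  rdi, [rel timeval]          ; struct timeval *tv
    xor  esi, esi                    ; struct timezone *tz = NULL
    mov  rax, 0xffffffffff600000     ; fixed vsyscall address of gettimeofday
    call rax                         ; an "ordinary" call, no syscall instruction here
    mov  eax, 231                    ; __NR_exit_group
    xor  edi, edi
    syscall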
Syscalls generally create a lot of overhead, and given the abundance of gettimeofday() calls, one would prefer not to use a syscall. To that end, the Linux kernel may map one or two special areas into each program, called the vDSO and vsyscall pages. Your implementation of gettimeofday() seems to be using vsyscall:
mov $0xffffffffff600000,%rax
This is the standard address of the vsyscall mapping. Try cat /proc/self/maps to see that mapping. The idea behind vsyscall is that the kernel provides fast user-space implementations of some functions, and libc just calls them.
Read this nice article for more details.
Related
I decided yesterday to learn assembly (NASM syntax) after years of C++ and Python, and I'm already confused about the way to exit a program. It's mostly about ret, because it's the suggested instruction in the SASM IDE.
I'm speaking about main, obviously. I don't care about x86 backward compatibility, only the best way on x86-64 Linux. I'm curious.
If you use printf or other libc functions, it's best to ret from main or call exit. (Which are equivalent; main's caller will call the libc exit function.)
If not, if you were only making other raw system calls like write with syscall, it's also appropriate and consistent to exit that way, but either way, ret or call exit are 100% fine in main.
If you want to work without libc at all, e.g. put your code under _start: instead of main: and link with ld or gcc -static -nostdlib, then you can't use ret. Use mov eax, 231 (__NR_exit_group) / syscall.
main is a real & normal function like any other (called with a valid return address), but _start (the process entry point) isn't. On entry to _start, the stack holds argc and argv, so trying to ret would set RIP=argc, and then code-fetch would segfault on that unmapped address. See Nasm segmentation fault on RET in _start.
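A minimal sketch of that no-libc case (the file name and exit status are just for illustration):
; build: nasm -felf64 exit.asm && ld -o exit exit.o
global _start
section .text
_start:                     ; not a function: there's no return address on the stack
    mov edi, 42             ; exit status
    mov eax, 231            ; __NR_exit_group
    syscall                 ; never returns; a ret here would jump to argc and crash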
System call vs. ret-from-main
Exiting via a system call is like calling _exit() in C - it skips atexit() and libc cleanup, notably not flushing any buffered stdout output (line-buffered on a terminal, fully buffered otherwise).
This leads to symptoms such as Using printf in assembly leads to empty output when piping, but works on the terminal (or if your output doesn't end with \n, even on a terminal.)
main is a function, called (indirectly) from CRT startup code. (Assuming you link your program normally, like you would a C program.) Your hand-written main works exactly like a compiler-generated C main function would. Its caller (__libc_start_main) really does do something like int result = main(argc, argv); exit(result);,
e.g. call rax (pointer passed by _start) / mov edi, eax / call exit.
So returning from main is exactly¹ like calling exit.
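As a sketch, here's a hand-written main that returns, assuming it's linked against libc the normal way (e.g. nasm -felf64 main.asm && gcc main.o, possibly with gcc -no-pie on distros that default to PIE):
global main
extern puts
section .rodata
msg:    db "hello", 0
section .text
main:                       ; a normal function, called by __libc_start_main
    sub  rsp, 8             ; re-align the stack to 16 bytes after the call pushed 8
    lea  rdi, [rel msg]
    call puts               ; buffered stdio output
    add  rsp, 8
    xor  eax, eax           ; return 0; the caller then calls exit(0),
    ret                     ; which flushes stdio buffers and runs atexit handlers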
See Syscall implementation of exit() for a comparison of the relevant C functions: exit vs. _exit vs. exit_group, and the underlying asm system calls.
C question: What is the difference between exit and return? is primarily about exit() vs. return, although there is mention of calling _exit() directly, i.e. just making a system call. It's applicable because C main compiles to an asm main just like you'd write by hand.
Footnote 1: You can invent a hypothetical intentionally weird case where it's different. e.g. you used stack space in main as your stdio buffer with sub rsp, 1024 / mov rsi, rsp / ... / call setvbuf. Then returning from main would involve putting RSP above that buffer, and __libc_start_main's call to exit could overwrite some of that buffer with return addresses and locals before execution reached the fflush cleanup. This mistake is more obvious in asm than C because you need leave or mov rsp, rbp or add rsp, 1024 or something to point RSP at your return address.
In C++, return from main runs destructors for its locals (before global/static exit stuff), exit doesn't. But that just means the compiler makes asm that does more stuff before actually running the ret, so it's all manual in asm, like in C.
The other difference is of course the asm / calling-convention details: exit status in EAX (return value) or EDI (first arg), and of course to ret you have to have RSP pointing at your return address, like it was on function entry. With call exit you don't, and you can even do a conditional tailcall of exit like jne exit. Since it's a noreturn function, you don't really need RSP pointing at a valid return address. (RSP should be aligned by 16 before a call, though, or RSP%16 = 8 before a tailcall, matching the alignment after call pushes a return address. It's unlikely that exit / fflush cleanup will do any alignment-required stores/loads to the stack, but it's a good habit to get this right.)
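For example, a hedged sketch of such a conditional tailcall (the function name and condition are made up):
extern exit
some_check:                 ; called normally, so RSP % 16 == 8 here
    mov  edi, 1             ; exit status, already in the first-arg register
    test eax, eax
    jnz  exit               ; taken: behaves like call exit(1); exit never returns
    ret                     ; not taken: return normally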
(This whole footnote is about ret vs. call exit, not syscall, so it's a bit of a tangent from the rest of the answer. You can also run syscall without caring where the stack-pointer points.)
SYS_exit vs. SYS_exit_group raw system calls
The raw SYS_exit system call is for exiting the current thread, like pthread_exit().
(eax=60 / syscall, or eax=1 / int 0x80).
SYS_exit_group is for exiting the whole program, like _exit.
(eax=231 / syscall, or eax=252 / int 0x80).
In a single-threaded program you can use either, but conceptually exit_group makes more sense to me if you're going to use raw system calls. glibc's _exit() wrapper function actually uses the exit_group system call (since glibc 2.3). See Syscall implementation of exit() for more details.
However, nearly all the hand-written asm you'll ever see uses SYS_exit.¹ It's not "wrong", and SYS_exit is perfectly acceptable for a program that didn't start more threads. Especially if you're trying to save code size with xor eax,eax / inc eax (3 bytes in 32-bit mode) or push 60 / pop rax (3 bytes in 64-bit mode), while push 231 / pop rax would be even larger than mov eax,231 because 231 doesn't fit in a signed imm8.
Note 1: (Usually actually hard-coding the number, not using __NR_... constants from asm/unistd.h or their SYS_... names from sys/syscall.h)
And historically, it's all there was. Note that in unistd_32.h, __NR_exit has call number 1, but __NR_exit_group = 252 wasn't added until years later when the kernel gained support for tasks that share virtual address space with their parent, aka threads started by clone(2). This is when SYS_exit conceptually became "exit current thread". (But one could easily and convincingly argue that in a single-threaded program, SYS_exit does still mean exit the whole program, because it only differs from exit_group if there are multiple threads.)
To be honest, I've never used eax=252 / int 0x80 in anything, only ever eax=1. It's only in 64-bit code where I often use mov eax,231 instead of mov eax,60 because neither number is "simple" or memorable the way 1 is, so might as well be a cool guy and use the "modern" exit_group way in my single-threaded toy program / experiment / microbenchmark / SO answer. :P (If I didn't enjoy tilting at windmills, I wouldn't spend so much time on assembly, especially on SO.)
And BTW, I usually use NASM for one-off experiments, so it's inconvenient to use pre-defined symbolic constants for call numbers; with GCC preprocessing a .S before running GAS, you can make your code self-documenting with #include <sys/syscall.h>, so you can write mov $SYS_exit_group, %eax (or $__NR_exit_group), or mov eax, __NR_exit_group with .intel_syntax noprefix.
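For example, a sketch of that GAS workflow (hypothetical file exit.S, built with gcc -nostdlib exit.S -o exit so the C preprocessor runs first):
#include <sys/syscall.h>            /* defines __NR_* / SYS_* call numbers */
.intel_syntax noprefix
.globl _start
_start:
    xor  edi, edi                   # exit status 0
    mov  eax, __NR_exit_group       # self-documenting instead of a bare 231
    syscall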
Don't use the 32-bit int 0x80 ABI in 64-bit code:
What happens if you use the 32-bit int 0x80 Linux ABI in 64-bit code? explains what happens if you use the compat (CONFIG_IA32_EMULATION) int 0x80 ABI in 64-bit code.
It's totally fine for just exiting, as long as your kernel has that support compiled in; otherwise it will fault just like any other random int number such as int 0x7f. (e.g. on WSL1, or for people that built custom kernels and disabled that support.)
But the only reason you'd do it that way in asm would be so you could build the same source file with nasm -felf32 or nasm -felf64. (You can't use syscall in 32-bit code, except on some AMD CPUs which have a 32-bit version of syscall. And the 32-bit ABI uses different call numbers anyway so this wouldn't let the same source be useful for both modes.)
Related:
Why am I allowed to exit main using ret? (CRT startup code calls main, you're not returning directly to the kernel.)
Nasm segmentation fault on RET in _start - you can't ret from _start
Using printf in assembly leads to empty output when piping, but works on the terminal - stdout buffer (not) flushing with raw system-call exit
Syscall implementation of exit() - call exit vs. mov eax,60 / syscall (_exit) vs. mov eax,231 / syscall (exit_group).
Can't call C standard library function on 64-bit Linux from assembly (yasm) code - modern Linux distros configure GCC in a way where call exit or call puts won't link with nasm -felf64 foo.asm && gcc foo.o.
Is main() really start of a C++ program? - Ciro's answer is a deep dive into how glibc + its CRT startup code actually call main (including x86-64 asm disassembly in GDB), and shows the glibc source code for __libc_start_main.
Linux x86 Program Start Up - or - How the heck do we get to main()? - 32-bit asm, and more detail than you'll probably want until you're a lot more comfortable with asm, but if you've ever wondered why CRT runs so much code before getting to main, that covers what's happening at a level that's a couple steps up from using GDB with starti (stop at the process entry point, e.g. in the dynamic linker's _start) and stepi until you get to your own _start or main.
https://stackoverflow.com/tags/x86/info - lots of good links about this and everything else.
I'm using the Linux kernel as the basis for my 'OS' (which is just a game). Obviously, I use system calls to interact with the kernel, e.g. for input, sound, graphics, etc. As I don't have a linker on this system to use the pre-defined wrapper from <sys/syscall.h>, I use this syscall wrapper for C, written in NASM asm:
global _syscall
_syscall:
mov rax, rdi
mov rdi, rsi
mov rsi, rdx
mov rdx, rcx
mov r10, r8
mov r8, r9
mov r9, [rsp + 8]
syscall
ret
I understand the registers used, based on the System V calling convention. However, I don't understand the last line, mov r9, [rsp + 8]. I feel like it has something to do with the return address, but I'm not sure.
The 7th arg is passed on the stack in x86-64 System V.
This is taking the first C arg and putting it in RAX, then copying the next 6 C args to the 6 arg-passing registers of the kernel's system-call calling convention. (Like functions, but with R10 instead of RCX).
The only reason the craptastic glibc syscall() function exists / is written that way is because there's no way to tell C compilers about a custom calling convention where an arg is also passed in RAX. That wrapper makes it look just like any other C function with a prototype.
It's fine for messing around with new system calls, but as you noted it's inefficient. If you want something better in C, use inline-asm macros for your ISA, e.g. https://github.com/linux-on-ibm-z/linux-syscall-support/blob/master/linux_syscall_support.h. Inline asm is hard to get right, and historically some syscall1 / syscall2 (per number of args) macros have been missing things like a "memory" clobber to tell the compiler that pointed-to memory could also be an input or output. That GitHub project is safe and has code for various ISAs. (There are some missed optimizations, like being able to use a dummy input operand instead of a full "memory" clobber, but that's irrelevant to hand-written asm.)
Of course, you can do much better if you're writing in asm:
Just use the syscall instruction directly, with args in the right registers (RDI, RSI, RDX, R10, R8, R9), instead of call _syscall with the function-calling convention. Calling a wrapper is strictly worse than just inlining the syscall instruction: with syscall you know that registers are unmodified except for RAX (the return value) and RCX/R11 (syscall itself uses them to save RIP and RFLAGS before kernel code runs). And it would take just as much code to get args into registers for a function call as it would for syscall.
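E.g. a hypothetical stand-alone example with the syscall instructions inlined, no wrapper:
global _start
section .rodata
msg:     db "hi", 10
msg_len  equ $ - msg
section .text
_start:
    mov eax, 1              ; __NR_write
    mov edi, 1              ; fd = stdout
    lea rsi, [rel msg]      ; buffer
    mov edx, msg_len        ; count
    syscall                 ; clobbers only rax (return value), rcx and r11
    mov eax, 231            ; __NR_exit_group
    xor edi, edi
    syscall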
If you do want a wrapper function at all (e.g. to cmp rax, -4095 / jae handle_syscall_error afterwards and maybe set errno), use the same calling convention for it as the kernel expects, so the first instruction can be syscall, not all that stupid shuffling of args over by 1.
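A hedged sketch of such a wrapper (the name sys_checked and the error handling are made up); because it uses the kernel's register convention, the first instruction can be syscall:
global sys_checked
sys_checked:                ; caller puts the number in rax, args in rdi, rsi, rdx, r10, r8, r9
    syscall
    cmp  rax, -4095         ; return values in [-4095, -1] encode -errno
    jae  .error
    ret
.error:
    neg  rax                ; rax = positive errno; store it or handle it as needed
    ret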
Functions in asm (that you only need to call from asm) can use whatever calling convention is convenient. It's a good idea to use a good standard one most of the time, but any "obviously special" function can certainly use a special convention.
I'm trying to run a binary program that uses CMPXCHG16B instruction at one place, unfortunately my Athlon 64 X2 3800+ doesn't support it. Which is great, because I see it as a programming challenge. The instruction doesn't seem to be that hard to implement with a cave jump, so that's what I did, but something didn't work, program just froze in a loop. Maybe someone can tell me if I implemented my CMPXCHG16B wrong?
Firstly the actual piece of machine code that I'm trying to emulate is this:
f0 49 0f c7 08 lock cmpxchg16b OWORD PTR [r8]
Excerpt from Intel manual describing CMPXCHG16B:
Compare RDX:RAX with m128. If equal, set ZF and load RCX:RBX into m128.
Else, clear ZF and load m128 into RDX:RAX.
First I replace all 5 bytes of the instruction with a jump to a code cave containing my emulation procedure; luckily the jump takes up exactly 5 bytes! The jump is actually a call instruction, e8, but it could be a jmp, e9; both work.
e8 96 fb ff ff call 0xfffffb96(-649)
This is a relative jump with a 32-bit signed offset encoded in two's complement; the offset points to the code cave, relative to the address of the next instruction.
Next the emulation code I'm jumping to:
PUSH R10
PUSH R11
MOV r10, QWORD PTR [r8]
MOV r11, QWORD PTR [r8+8]
TEST R10, RAX
JNE ELSE
TEST R11, RDX
JNE ELSE
MOV QWORD PTR [r8], RBX
MOV QWORD PTR [r8+8], RCX
JMP END
ELSE:
MOV RAX, r10
MOV RDX, r11
END:
POP R11
POP R10
RET
Personally, I'm happy with it, and I think it matches the functional specification given in the manual. It restores the stack and the two registers r10 and r11 to their original state and then resumes execution. Alas, it does not work! That is, the code runs, but the program acts as if it's waiting for a tip and burning electricity, which indicates my emulation was not perfect and I inadvertently broke its loop. Do you see anything wrong with it?
I notice that this is the atomic variant, owing to the lock prefix. I'm hoping it's something else besides contention that I did wrong. Or is there a way to emulate atomicity too?
It's not possible to emulate lock cmpxchg16b. It's sort of possible if all accesses to the target address are synchronised with a separate lock, but that includes all other instructions, including non-atomic stores to either half of the object, and atomic read-modify-writes (like xchg, lock cmpxchg, lock add, lock xadd) with one half (or other part) of the 16 byte object.
You can emulate cmpxchg16b (without lock) like you've done here, with the bugfixes from #Fifoernik's answer. That's an interesting learning exercise, but not very useful in practice, because real code that uses cmpxchg16b always uses it with a lock prefix.
A non-atomic replacement will work most of the time, because it's rare for a cache-line invalidate from another core to arrive in the small time window between two nearby instructions. This doesn't mean it's safe, it just means it's really hard to debug when it does occasionally fail. If you just want to get a game working for your own use, and can accept occasional lockups / errors, this might be useful. For anything where correctness is important, you're out of luck.
What about MFENCE? Seems to be what I need.
MFENCE before, after, or between the loads and stores won't prevent another thread from seeing a half-written value ("tearing"), or from modifying the data after your code has made the decision that the compare succeeded, but before it does the store. It might narrow the window of vulnerability, but it can't close it, because MFENCE only prevents reordering of the global visibility of our own stores and loads. It can't stop a store from another core from becoming visible to us after our loads but before our stores. That requires an atomic read-modify-write bus cycle, which is what locked instructions are for.
Doing two 8-byte atomic compare-exchanges would solve the window-of-vulnerability problem, but only for each half separately, leaving the "tearing" problem.
Atomic 16B loads/stores solve the tearing problem but not the atomicity problem between the loads and stores. It's possible with SSE on some hardware, but not guaranteed to be atomic by the x86 ISA the way 8B naturally-aligned loads and stores are.
Xen's lock cmpxchg16b emulation:
The Xen virtual machine has an x86 emulator, I guess for the case where a VM starts on one machine and migrates to less-capable hardware. It emulates lock cmpxchg16b by taking a global lock, because there's no other way. If there was a way to emulate it "properly", I'm sure Xen would do that.
As discussed in this mailing list thread, Xen's solution still doesn't work when the emulated version on one core is accessing the same memory as the non-emulated instruction on another core. (The native version doesn't respect the global lock).
See also this patch on the Xen mailing list that changes the lock cmpxchg8b emulation to support both lock cmpxchg8b and lock cmpxchg16b.
I also found that KVM's x86 emulator doesn't support cmpxchg16b either, according to the search results for emulate cmpxchg16b.
I think all this is good evidence that my analysis is correct, and that it's not possible to emulate it safely.
I see these things wrong with your code to emulate the cmpxchg16b instruction:
You need to use cmp instead of test to get a correct comparison.
You need to save/restore all flags except the ZF. The manual mentions:
The CF, PF, AF, SF, and OF flags are unaffected.
The manual contains the following:
IF (64-Bit Mode and OperandSize = 64)
THEN
TEMP128 ← DEST
IF (RDX:RAX = TEMP128)
THEN
ZF ← 1;
DEST ← RCX:RBX;
ELSE
ZF ← 0;
RDX:RAX ← TEMP128;
DEST ← TEMP128;
FI;
FI
So to really write code that "matches the functional specification given in the manual", a write to the m128 is required. Although this particular write is part of the locked version, lock cmpxchg16b, it of course won't do anything for the atomicity of the emulation! A straightforward emulation of lock cmpxchg16b is thus not possible. See #PeterCordes' answer.
This instruction can be used with a LOCK prefix to allow the instruction to be executed atomically. To simplify the interface to the processor’s bus, the destination operand receives a write cycle without regard to the result of the comparison
ELSE:
MOV RAX, r10
MOV RDX, r11
MOV QWORD PTR [r8], r10
MOV QWORD PTR [r8+8], r11
END:
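Putting both fixes together (cmp instead of test, plus the unconditional write-back), a non-atomic sketch of the whole emulation might look like the following. It still clobbers CF, PF, AF, SF and OF, which the real instruction leaves unaffected, and per #PeterCordes' answer it cannot reproduce the atomicity of lock cmpxchg16b:
PUSH R10
PUSH R11
MOV r10, QWORD PTR [r8]
MOV r11, QWORD PTR [r8+8]
CMP RAX, r10               ; cmp, so ZF really reflects equality of the low halves
JNE ELSE
CMP RDX, r11               ; and of the high halves
JNE ELSE
MOV QWORD PTR [r8], RBX    ; equal: ZF=1 from the last CMP, store RCX:RBX
MOV QWORD PTR [r8+8], RCX
JMP END
ELSE:                      ; not equal: ZF=0 from the failing CMP
MOV RAX, r10               ; load the old value into RDX:RAX
MOV RDX, r11
MOV QWORD PTR [r8], r10    ; write cycle regardless of the comparison result
MOV QWORD PTR [r8+8], r11
END:
POP R11
POP R10
RET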
I decided to take a crack at assembly the other day, and I've been playing around with really basic things like printing stuff from argv to stdout. I found this great list of Linux syscall numbers with arguments and everything, and I'm curious why r10 is used for arguments before r8 and r9. I've found all kinds of weird conventions about what can be used for what and when, like how loop counters go in rcx. Is there a particular reason why r10 was moved up? Was it more convenient?
I should probably also mention I'm interested in this out of curiosity, not because it's causing me problems.
Edit: I found this question which gets close, referencing the x64 ABI documentation on page 124, where it notes that user level applications use rdi, rsi, rdx, rcx, r8, r9. The kernel on the other hand uses r10 instead of rcx, and destroys rcx and r11. That might explain how r10 ended up there, but then why was it swapped in?
RCX, along with R11, is used by the syscall instruction itself and is immediately destroyed by it. So these registers are not only not preserved across syscall, they can't even be used for parameter passing. R10 was therefore chosen to replace the unusable RCX for passing the fourth parameter.
See also this answer for a bit more information on how syscall uses these registers.
Reference: Intel's Instruction Set Reference, look for SYSCALL.
See the x86-64.org ABI documentation, page 124:
User-level applications use as integer registers for passing the sequence
%rdi, %rsi, %rdx, %rcx, %r8 and %r9. The kernel interface uses %rdi,
%rsi, %rdx, %r10, %r8 and %r9.
A system-call is done via the syscall instruction. The kernel destroys
registers %rcx and %r11.
This is saying that when you use the syscall instruction the kernel destroys %rcx so you need to use %r10 instead.
Also the comment from #technosaurus explains that the kernel is using %rcx to store the entry point in case of an interrupt during a syscall.
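As an illustration (the call number and argument values are just an example), here's a 4-argument system call where the 4th argument goes in r10 rather than rcx:
    ; pread64(fd, buf, count, offset); buf assumed defined elsewhere
    mov eax, 17             ; __NR_pread64 on x86-64
    mov edi, 3              ; fd (assumed already open)
    lea rsi, [rel buf]      ; buffer
    mov edx, 128            ; count
    mov r10, 4096           ; offset: the 4th arg; a user-space function call would use rcx
    syscall                 ; syscall itself clobbers rcx and r11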
Inspired by this question
How can I force GDB to disassemble?
and related to this one
What is INT 21h?
How does a system call actually happen under Linux? What happens, from the moment the call is performed, until the actual kernel routine is invoked?
Assuming we're talking about x86:
The ID of the system call is deposited into the EAX register
Any arguments required by the system call are deposited into the locations dictated by the system call. For example, some system calls expect their argument to reside in the EBX register. Others may expect their argument to be sitting on the top of the stack.
An INT 0x80 interrupt is invoked.
The Linux kernel services the system call identified by the ID in the EAX register, depositing any results in pre-determined locations.
The calling code makes use of any results.
I may be a bit rusty at this, it's been a few years...
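A small hypothetical example of those steps on 32-bit x86, writing a string and then exiting via int 0x80:
; build: nasm -felf32 hello32.asm && ld -m elf_i386 -o hello32 hello32.o
global _start
section .data
msg:     db "hello", 10
msg_len  equ $ - msg
section .text
_start:
    mov eax, 4          ; __NR_write in the 32-bit ABI
    mov ebx, 1          ; fd = stdout, first arg in ebx
    mov ecx, msg        ; buffer
    mov edx, msg_len    ; count
    int 0x80            ; trap into the kernel; result comes back in eax
    mov eax, 1          ; __NR_exit
    xor ebx, ebx        ; status 0
    int 0x80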
The given answers are correct, but I would like to add that there are more mechanisms to enter kernel mode. Every recent kernel maps the "vsyscall" page into every process' address space. It contains little more than the most efficient syscall trap method.
For example on a regular 32 bit system it could contain:
0xffffe000: int $0x80
0xffffe002: ret
But on my 64-bit system I have access to the way more efficient method using the syscall/sysenter instructions:
0xffffe000: push %ecx
0xffffe001: push %edx
0xffffe002: push %ebp
0xffffe003: mov %esp,%ebp
0xffffe005: sysenter
0xffffe007: nop
0xffffe008: nop
0xffffe009: nop
0xffffe00a: nop
0xffffe00b: nop
0xffffe00c: nop
0xffffe00d: nop
0xffffe00e: jmp 0xffffe003
0xffffe010: pop %ebp
0xffffe011: pop %edx
0xffffe012: pop %ecx
0xffffe013: ret
This vsyscall page also maps some system calls that can be done without a context switch. I know for certain that gettimeofday, time and getcpu are mapped there, but I imagine getpid could fit in there just as well.
This is already answered at
How is the system call in Linux implemented?
It probably did not match this question because of the differing "syscall" terminology.
Basically, it's very simple: somewhere in memory lies a table mapping each syscall number to the address of the corresponding handler (see http://lxr.linux.no/linux+v2.6.30/arch/x86/kernel/syscall_table_32.S for the x86 version).
The INT 0x80 interrupt handler then just takes the arguments out of the registers, puts them on the (kernel) stack, and calls the appropriate syscall handler.