I was reading through some ARM kernel sources when I stumbled upon the following function:
314 #define __get_user_asm_byte(x, addr, err) \
315 __asm__ __volatile__( \
316 "1: " TUSER(ldrb) " %1,[%2],#0\n" \
317 "2:\n" \
318 " .pushsection .fixup,\"ax\"\n" \
319 " .align 2\n" \
320 "3: mov %0, %3\n" \
321 " mov %1, #0\n" \
322 " b 2b\n" \
323 " .popsection\n" \
324 " .pushsection __ex_table,\"a\"\n" \
325 " .align 3\n" \
326 " .long 1b, 3b\n" \
327 " .popsection" \
328 : "+r" (err), "=&r" (x) \
329 : "r" (addr), "i" (-EFAULT) \
330 : "cc")
The calling context is as follows:
299 #define __get_user_err(x, ptr, err) \
300 do { \
301 unsigned long __gu_addr = (unsigned long)(ptr); \
302 unsigned long __gu_val; \
303 __chk_user_ptr(ptr); \
304 might_fault(); \
305 switch (sizeof(*(ptr))) { \
306 case 1: __get_user_asm_byte(__gu_val, __gu_addr, err); break; \
Now I have a few doubts I'd like to clear up about the ARM assembly above.
What is the function __get_user_asm_byte doing? I can see that r3 is copied into r0 and that the value 0 is moved into r1. After that, does it branch to the offset 0x2b?
What is the function trying to do? What does the "+r" (err), "=&r" (x) from line 328 onward mean?
What is a .pushsection and a .popsection?
Why is ARM assembly that's written for the kernel so different syntactically? (What assembler is used? Why are there %<regno> instead of r<regno>?)
First, as comments have noted, the syntax is standard gcc inline assembly syntax (the +r, =&r, %<arg> parts).
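Here's a tiny stand-alone sketch of that syntax (my own example, not kernel code), just to decode the constraint strings the question asks about:
/* Stand-alone sketch (mine, not kernel code) of the operand syntax:
 *   "+r" (err)  - err is read AND written by the asm, kept in a register
 *   "=&r" (x)   - x is write-only; '&' (early clobber) warns GCC that the asm
 *                 writes it before it has finished reading the inputs, so x
 *                 must not share a register with addr
 *   "r" (addr)  - addr is an input in some register
 *   "i" (...)   - an immediate constant (the kernel passes -EFAULT here)
 *   %0, %1, %2, %3 refer to those operands in the order they are listed.
 */
static inline unsigned int get_byte_demo(const unsigned char *addr, int *err)
{
    unsigned int x;
    __asm__ __volatile__(
        "mov %1, %3\n\t"      /* x = 42: %1 is written before %2 is read ...  */
        "ldrb %1, [%2]\n\t"   /* ... and then overwritten with *addr, which is
                                 why the '&' early clobber is needed           */
        "adds %0, %0, %1"     /* *err += x; sets flags, hence the "cc" clobber */
        : "+r" (*err), "=&r" (x)
        : "r" (addr), "i" (42)
        : "cc");
    return x;
}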
The rest is kernel magic designed to handle page faults. The point of get_user_asm_byte is to pull a byte from user-space. However, in pulling data from user-space, two special situations need to be accommodated:
A perfectly legitimate user-space address that is simply not present at the moment (i.e. usually because it's paged out)
An illegal user-space address.
Either could cause a page fault. For (1), the desired behavior is to restore the user page (read it back in from swap space or whatever), then re-execute the load instruction, resulting in eventual success; for (2), the desired behavior is to fail the operation without retrying, returning an EFAULT error to the caller.
Without special handling, the normal behavior of a page fault in kernel mode is an "Oops". The special sections coordinate with the page fault handling code and make it possible to recover correctly. If you want to understand the details of how this works, search for __ex_table in the kernel source.
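If you just want the gist of the mechanism: when one of the marked instructions faults, the fault handler looks the faulting PC up in __ex_table and, if it finds an entry, resumes at the fixup stub instead of oopsing. Roughly, from memory (a sketch modeled on arch/arm/mm/extable.c and the generic search_exception_tables(), not the literal source):
/* From-memory sketch of the ARM fixup path; approximate, not the literal source. */
struct exception_table_entry {
    unsigned long insn;    /* address of the faulting instruction (label 1: above) */
    unsigned long fixup;   /* address of the recovery code        (label 3: above) */
};

int fixup_exception(struct pt_regs *regs)
{
    const struct exception_table_entry *fixup;

    fixup = search_exception_tables(instruction_pointer(regs));
    if (fixup)
        regs->ARM_pc = fixup->fixup;  /* resume at 3:, which sets err = -EFAULT,
                                         x = 0, and branches back to 2:           */
    return fixup != NULL;             /* no entry -> genuine kernel bug -> Oops   */
}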
Also, check out the kernel doc at Documentation/x86/exception-tables.txt.
The only ARM-specific item is the use of either ldrt or MMU domains; this is conditional in domain.h. Modern ARM CPUs support domains; the translated load/store variants perform the access with user-mode permissions on older CPUs.
I'm trying to create a Linux i386 a.out executable shorter than 4097 bytes, but all my efforts have failed so far.
I'm compiling it with:
$ nasm -O0 -f bin -o prog prog.nasm && chmod +x prog
I'm testing it in a Ubuntu 10.04 i386 VM running Linux 2.6.32 with:
$ sudo modprobe binfmt_aout
$ sudo sysctl vm.mmap_min_addr=4096
$ ./prog; echo $?
Hello, World!
0
This is the source code of the 4097-byte executable which works:
; prog.nasm
bits 32
cpu 386
org 0x1000 ; Linux i386 a.out QMAGIC file format has this.
SECTION_text:
a_out_header:
dw 0xcc ; magic=QMAGIC; Demand-paged executable with the header in the text. The first page (0x1000 bytes) is unmapped to help trap NULL pointer references.
db 0x64 ; type=M_386
db 0 ; flags=0
dd SECTION_data - SECTION_text ; a_text=0x1000 (byte size of .text; mapped as r-x)
dd SECTION_end - SECTION_data ; a_data=0x1000 (byte size of .data; mapped as rwx, not just rw-)
dd 0 ; a_bss=0 (byte size of .bss)
dd 0 ; a_syms=0 (byte size of symbol table data)
dd _start ; a_entry=0x1020 (in-memory address of _start == file offset of _start + 0x1000)
dd 0 ; a_trsize=0 (byte size of relocation info or .text)
dd 0 ; a_drsize=0 (byte size of relocation info or .data)
_start: mov eax, 4 ; __NR_write
mov ebx, 1 ; argument: STDOUT_FILENO
mov ecx, msg ; argument: address of string to output
mov edx, msg_end - msg ; argument: number of bytes
int 0x80 ; syscall
mov eax, 1 ; __NR_exit
xor ebx, ebx ; argument: EXIT_SUCCESS == 0.
int 0x80 ; syscall
msg: db 'Hello, World!', 10
msg_end:
times ($$ - $) & 0xfff db 0 ; padding to multiple of 0x1000 ; !! is this needed?
SECTION_data: db 0
; times ($$ - $) & 0xfff db 0 ; padding to multiple of 0x1000 ; !! is this needed?
SECTION_end:
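(For reference, the a_out_header block above hand-assembles the classic struct exec header from <linux/a.out.h>; a sketch of the layout, field names from memory:)
/* Sketch of the classic a.out header, struct exec from <linux/a.out.h>
 * (field layout from memory); a_out_header above fills exactly these fields. */
struct exec {
    unsigned int a_info;    /* magic (QMAGIC = 0xcc), machine type (M_386 = 0x64), flags */
    unsigned int a_text;    /* byte size of the text segment                */
    unsigned int a_data;    /* byte size of the data segment                */
    unsigned int a_bss;     /* byte size of the zero-initialized bss        */
    unsigned int a_syms;    /* byte size of the symbol table data           */
    unsigned int a_entry;   /* entry point (here 0x1020, i.e. _start)       */
    unsigned int a_trsize;  /* byte size of text relocation info            */
    unsigned int a_drsize;  /* byte size of data relocation info            */
};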
How can I make the executable file smaller? (Clarification: I still want a Linux i386 a.out executable. I know that it's possible to create a smaller Linux i386 ELF executable.) There are several thousand bytes of padding at the end of the file, which seem to be required.
So far I've discovered the following rules:
If a_text or a_data is 0, Linux doesn't run the program. (See relevant Linux source block 1 and 2.)
If a_text is not a multiple of 0x1000 (4096), Linux doesn't run the program. (See relevant Linux source block 1 and 2.)
If the file is shorter than a_text + a_data bytes, Linux doesn't run the program. (See relevant Linux source code location.)
Thus file_size >= a_text + a_data >= 0x1000 + 1 == 4097 bytes.
The combinations nasm -f aout + ld -s -m i386linux, nasm -f elf + ld -s -m i386linux, and as -32 + ld -s -m i386linux produce an executable of 4100 bytes, which doesn't even work (because its a_data is 0); adding a single byte to section .data makes the executable file 8196 bytes long, and then it works. Thus this path doesn't lead to less than 4097 bytes.
Did I miss something?
TL;DR It doesn't work.
It is impossible to make a Linux i386 a.out QMAGIC executable shorter than 4097 bytes work on Linux 2.6.32, based on evidence in the Linux kernel source code of the binfmt_aout module.
Details:
If a_text is 0, Linux doesn't run the program. (Evidence for this check: a_text is passed as the length argument to mmap(2) here.)
If a_data is 0, Linux doesn't run the program. (Evidence for this check: a_data is passed as the length argument to mmap(2) here.)
If a_text is not a multiple of 0x1000 (4096), Linux doesn't run the program. (Evidence for this check: fd_offset + ex.a_text is passed as the offset argument to mmap(2) here. For QMAGIC, fd_offset is 0.)
If the file is shorter than a_text + a_data bytes, Linux doesn't run the program. (Evidence for this check: the file size is compared to a_text + a_data + a_syms + ... here.)
Thus file_size >= a_text + a_data >= 0x1000 + 1 == 4097 bytes.
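Paraphrasing where those limits come from in the loader (a sketch of the 2.6.32 fs/binfmt_aout.c logic from memory, not the literal source; for QMAGIC, fd_offset == 0 and N_TXTOFF(ex) == 0):
/* the file must hold at least a_text + a_data (+ symbol table) bytes: */
if (file_size < ex.a_text + ex.a_data + ex.a_syms + N_TXTOFF(ex))
        return -ENOEXEC;

/* the text is mmap()ed straight from the file; a zero length fails, so a_text != 0: */
error = do_mmap(bprm->file, N_TXTADDR(ex), ex.a_text,
                PROT_READ | PROT_EXEC, MAP_FIXED | MAP_PRIVATE, fd_offset);

/* the data is mmap()ed at file offset fd_offset + a_text; mmap offsets must be
   page-aligned, so a_text % 0x1000 == 0, and a zero length fails again, so
   a_data != 0: */
error = do_mmap(bprm->file, N_DATADDR(ex), ex.a_data,
                PROT_READ | PROT_WRITE | PROT_EXEC, MAP_FIXED | MAP_PRIVATE,
                fd_offset + ex.a_text);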
I've also tried OMAGIC, ZMAGIC and NMAGIC, but none of them worked. Details:
For OMAGIC, read(2) is used instead of mmap(2) here, thus it can work. However, Linux tries to load the code to virtual memory address 0 (N_TXTADDR is 0), and this causes SIGKILL (if non-root and vm.mmap_min_addr is larger than 0) or SIGILL (otherwise), thus it doesn't work. Maybe the reason for SIGILL is that the page allocated by set_brk is not executable (but that should be indicated by SIGSEGV); this could be investigated further.
For ZMAGIC and NMAGIC, read(2) is used instead of mmap(2) here if fd_offset is not a multiple of the page size (0x1000). fd_offset is 32 for NMAGIC and 1024 for ZMAGIC, so that's fine. However, they don't work for the same reason (the load to virtual memory address 0).
I wonder if it's possible to run OMAGIC, ZMAGIC or NMAGIC executables at all on Linux 2.6.32 or later.
First I tried to reverse engineer it a bit:
printf '
#include <stdio.h>
int main() {
puts("hello world");
}
' > main.c
gcc -std=c99 -pie -fpie -ggdb3 -o pie main.c
echo 2 | sudo tee /proc/sys/kernel/randomize_va_space
readelf -s ./pie | grep -E 'main$'
gdb -batch -nh \
-ex 'set disable-randomization off' \
-ex 'start' -ex 'info line' \
-ex 'start' -ex 'info line' \
-ex 'set disable-randomization on' \
-ex 'start' -ex 'info line' \
-ex 'start' -ex 'info line' \
./pie \
;
Output:
64: 000000000000063a 23 FUNC GLOBAL DEFAULT 14 main
Temporary breakpoint 1, main () at main.c:4
4 puts("hello world");
Line 4 of "main.c" starts at address 0x5575f5fd263e <main+4> and ends at 0x5575f5fd264f <main+21>.
Temporary breakpoint 2 at 0x5575f5fd263e: file main.c, line 4.
Temporary breakpoint 2, main () at main.c:4
4 puts("hello world");
Line 4 of "main.c" starts at address 0x55e3fbc9363e <main+4> and ends at 0x55e3fbc9364f <main+21>.
Temporary breakpoint 3 at 0x55e3fbc9363e: file main.c, line 4.
Temporary breakpoint 3, main () at main.c:4
4 puts("hello world");
Line 4 of "main.c" starts at address 0x55555555463e <main+4> and ends at 0x55555555464f <main+21>.
Temporary breakpoint 4 at 0x55555555463e: file main.c, line 4.
Temporary breakpoint 4, main () at main.c:4
4 puts("hello world");
Line 4 of "main.c" starts at address 0x55555555463e <main+4> and ends at 0x55555555464f <main+21>.
which indicates that it is 0x555555554000 + random offset + 63e.
But then I tried to grep the Linux kernel and glibc source code for 555555554 and there were no hits.
Which part of which code calculates that address?
I came across this while answering: What is the -fPIE option for position-independent executables in gcc and ld?
An Internet search for 0x555555554000 gives hints: there were problems with ThreadSanitizer https://github.com/google/sanitizers/wiki/ThreadSanitizerCppManual
Q: When I run the program, it says: FATAL: ThreadSanitizer can not mmap the shadow memory (something is mapped at 0x555555554000 < 0x7cf000000000). What to do? You need to enable ASLR:
$ echo 2 >/proc/sys/kernel/randomize_va_space
This may be fixed in future kernels, see https://bugzilla.kernel.org/show_bug.cgi?id=66721
...
$ gdb -ex 'set disable-randomization off' --args ./a.out
and https://lwn.net/Articles/730120/ "Stable kernel updates." Posted Aug 7, 2017 20:40 UTC (Mon) by hmh (subscriber) https://marc.info/?t=150213704600001&r=1&w=2
(https://patchwork.kernel.org/patch/9886105/, commit c715b72c1ba4)
Moving the x86_64 and arm64 PIE base from 0x555555554000 to 0x000100000000
broke AddressSanitizer. This is a partial revert of:
commit eab09532d400 ("binfmt_elf: use ELF_ET_DYN_BASE only for PIE") (https://patchwork.kernel.org/patch/9807325/ https://lkml.org/lkml/2017/6/21/560)
commit 02445990a96e ("arm64: move ELF_ET_DYN_BASE to 4GB / 4MB") (https://patchwork.kernel.org/patch/9807319/)
Reverted code was:
+++ b/arch/arm64/include/asm/elf.h
/*
* This is the base location for PIE (ET_DYN with INTERP) loads. On
- * 64-bit, this is raised to 4GB to leave the entire 32-bit address
+ * 64-bit, this is above 4GB to leave the entire 32-bit address
 * space open for things that want to use the area for 32-bit pointers.
 */
-#define ELF_ET_DYN_BASE 0x100000000UL
+#define ELF_ET_DYN_BASE (2 * TASK_SIZE_64 / 3)
+++ b/arch/x86/include/asm/elf.h
/*
* This is the base location for PIE (ET_DYN with INTERP) loads. On
- * 64-bit, this is raised to 4GB to leave the entire 32-bit address
+ * 64-bit, this is above 4GB to leave the entire 32-bit address
* space open for things that want to use the area for 32-bit pointers.
*/
#define ELF_ET_DYN_BASE (mmap_is_ia32() ? 0x000400000UL : \
- 0x100000000UL)
+ (TASK_SIZE / 3 * 2))
So, 0x555555554000 is related to the ELF_ET_DYN_BASE macro (referenced in fs/binfmt_elf.c for ET_DYN as the non-randomized load_bias), and for x86_64 and arm64 it is roughly 2/3 of TASK_SIZE. When there is no CONFIG_X86_32, x86_64 has a TASK_SIZE of 2^47 minus one page, in arch/x86/include/asm/processor.h:
/*
* User space process size. 47bits minus one guard page. The guard
* page is necessary on Intel CPUs: if a SYSCALL instruction is at
* the highest possible canonical userspace address, then that
* syscall will enter the kernel with a non-canonical return
* address, and SYSRET will explode dangerously. We avoid this
* particular problem by preventing anything from being mapped
* at the maximum canonical address.
*/
#define TASK_SIZE_MAX ((1UL << 47) - PAGE_SIZE)
Older versions:
/*
* User space process size. 47bits minus one guard page.
*/
#define TASK_SIZE_MAX ((1UL << 47) - PAGE_SIZE)
Newer versions also have support for 5-level paging with a __VIRTUAL_MASK_SHIFT of 56 bits - v4.17/source/arch/x86/include/asm/processor.h (but the kernel doesn't want to use it before it is enabled by user space; see commit b569bab78d8d ".. Not all user space is ready to handle wide addresses").
So, 0x555555554000 is rounded down (by load_bias = ELF_PAGESTART(load_bias - vaddr), where vaddr is zero) from the formula (2^47 - one page) * 2/3 (or from 2^56 for larger systems):
$ echo 'obase=16; (2^47-4096)/3*2'| bc -q
555555554AAA
$ echo 'obase=16; (2^56-4096)/3*2'| bc -q
AAAAAAAAAAA000
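The trailing bits are then cleared by the page rounding; a sketch of the macro involved (shape from memory, cf. fs/binfmt_elf.c):
/* Sketch (macro shape from memory, cf. fs/binfmt_elf.c); ELF_MIN_ALIGN is the
   4 KiB page size on x86_64, so the load bias is truncated to a page boundary: */
#define ELF_MIN_ALIGN      0x1000UL
#define ELF_PAGESTART(_v)  ((_v) & ~(ELF_MIN_ALIGN - 1))
/* ELF_PAGESTART(0x555555554AAAUL) == 0x555555554000UL */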
Some history of 2/3 * TASK_SIZE:
commit 9b1bbf6ea9b2 "use ELF_ET_DYN_BASE only for PIE" has useful comments: "The ELF_ET_DYN_BASE position was originally intended to keep loaders away from ET_EXEC binaries ..."
Don't overflow 32 bits with 2 * TASK_SIZE - "[uml-user] [PATCH] x86, UML: fix integer overflow in ELF_ET_DYN_BASE", 2015, and "ARM: 8320/1: fix integer overflow in ELF_ET_DYN_BASE", 2015:
Almost all arches define ELF_ET_DYN_BASE as 2/3 of TASK_SIZE. Though
it seems that some architectures do this in a wrong way. The problem
is that 2*TASK_SIZE may overflow 32-bits so the real ELF_ET_DYN_BASE
becomes wrong. Fix this overflow by dividing TASK_SIZE prior to
multiplying: (TASK_SIZE / 3 * 2)
The same definition appears in 4.x, 3.y, 2.6.z, 2.4.z, ... (where is the davej-history git repo? archive and at or.cz); it was added in 2.1.54 on 06-Sep-1997:
diff --git a/include/asm-i386/elf.h b/include/asm-i386/elf.h
+/* This is the location that an ET_DYN program is loaded if exec'ed. Typical
+ use of this is to invoke "./ld.so someprog" to test out a new version of
+ the loader. We need to make sure that it is out of the way of the program
+ that it will "exec", and that there is sufficient room for the brk. */
+
+#define ELF_ET_DYN_BASE (2 * TASK_SIZE / 3)
I am reading the copy_from_user function, in which the macro __get_user_asm is used.
There is an mmap syscall in Linux; the mmap syscall will call the function copy_from_user, and this function will use the macro __get_user_asm if the size is constant. The content of __get_user_asm is:
#define __get_user_asm(x, addr, err, itype, rtype, ltype, errret) \
asm volatile("1: mov"itype" %2,%"rtype"1\n" \
"2:\n" \
".section .fixup,\"ax\"\n" \
"3: mov %3,%0\n" \
" xor"itype" %"rtype"1,%"rtype"1\n" \
" jmp 2b\n" \
".previous\n" \
_ASM_EXTABLE(1b, 3b) \
: "=r" (err), ltype(x) \
: "m" (__m(addr)), "i" (errret), "0" (err))
When I try to translate
__get_user_asm(*(u8 *)dst, (u8 __user *)src, ret, "b", "b", "=q", 1);
to the real source, I get:
1: movb %2,%b1\n
2:\n
.section .fixup, "ax" \n
3: mov %3, %0 \n
xorb %b1, %b1\n
jmp 2b\n
.previous\n
: "=r" (ret), =q(dst)
:"m"(dst), "i"(1), "0"(ret)
.quad "1b", "2b"\n
.previous\n
There are some parts I can't understand:
1. In xorb %b1, %b1, what's %b1 (b one, not b L)?
2. In jmp 2b, is 2b a label or a memory address? If 2b is a label, how can I find this label?
3. What's the function of .quad "1b", "2b"?
Where can I get the knowledge that will let me understand the Linux kernel source at the semantic level?
Reading the docs for gcc's extended asm, we see that %1 refers to the second parameter (because parameter numbers are zero based). In your example, that's dst.
Adding b (ie %b1) is described here:
Modifier | Description                             | Operand | masm=att | masm=intel
b        | Print the QImode name of the register.  | %b0     | %al      | al
jmp 2b means look backward for a label named 2.
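A stand-alone illustration (my example, not from the kernel): numeric labels may be reused, and a reference like 2b or 2f picks the nearest definition searching backward or forward:
/* Stand-alone demo (mine): n must be > 0.  The label "2" can be defined many
   times in one file; "2b" always resolves to the nearest one behind the jump. */
static inline unsigned int countdown_demo(unsigned int n)
{
    asm("2:\n\t"
        "subl $1, %0\n\t"   /* n--                                 */
        "jnz 2b"            /* loop back to the nearest "2:" above */
        : "+r" (n)
        :
        : "cc");
    return n;               /* always 0 */
}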
The .quad directive is defined here:
.quad expects zero or more bignums, separated by commas. For each
bignum, it emits an 8-byte integer. If the bignum won't fit in 8
bytes, it prints a warning message; and just takes the lowest order 8
bytes of the bignum.
As for where to get info, hopefully the links I have provided help.
XORing any register with itself sets it to zero. So %b1 = 0.
Problem
I want to execute the exit system call in ARM using inline assembly on a Linux Android device, and I want the exit value to be read from a location in memory.
Example
Without giving this extra argument, a macro for the call looks like:
#define ASM_EXIT() __asm__("mov %r0, #1\n\t" \
"mov %r7, #1\n\t" \
"swi #0")
This works well.
To accept an argument, I adjust it to:
#define ASM_EXIT(var) __asm__("mov %r0, %0\n\t" \
"mov %r7, #1\n\t" \
"swi #0" \
: \
: "r"(var))
and I call it using:
#define GET_STATUS() (*(int*)(some_address)) //gets an integer from an address
ASM_EXIT(GET_STATUS());
Error
invalid 'asm': operand number out of range
I can't explain why I get this error, as I use one input variable in the above snippet (%0/var). Also, I have tried with a regular variable, and still got the same error.
Extended-asm syntax requires writing %% to get a single % in the asm output. e.g. for x86:
asm("inc %eax") // bad: undeclared clobber
asm("inc %%eax" ::: "eax"); // safe but still useless :P
%r7 is treating r7 as an operand number. As commenters have pointed out, just omit the %s, because you don't need them for ARM, even with GNU as.
Unfortunately, there doesn't seem to be a way to request input operands in specific registers on ARM, the way you can for x86. (e.g. "a" constraint means eax specifically).
You can use register int var asm ("r7") to force a var to use a specific register, and then use an "r" constraint and assume it will be in that register. I'm not sure this is always safe, or a good idea, but it appears to work even after inlining. Jeremy comments that this technique was recommended by the GCC team.
I did get some efficient code generated, which avoids wasting an instruction on a reg-reg move:
See it on the Godbolt Compiler Explorer:
__attribute__((noreturn)) static inline void ASM_EXIT(int status)
{
register int status_r0 asm ("r0") = status;
register int callno_r7 asm ("r7") = 1;
asm volatile("swi #0\n"
:
: "r" (status_r0), "r" (callno_r7)
: "memory" // any side-effects on shared memory need to be done before this, not delayed until after
);
// __builtin_unreachable(); // optionally let GCC know the inline asm doesn't "return"
}
#define GET_STATUS() (*(int*)(some_address)) //gets an integer from an address
void foo(void) { ASM_EXIT(12); }
push {r7} # # gcc is still saving r7 before use, even though it sees the "noreturn" and doesn't generate a return
movs r0, #12 # stat_r0,
movs r7, #1 # callno,
swi #0
# yes, it literally ends here, after the inlined noreturn
void bar(int status) { ASM_EXIT(status); }
push {r7} #
movs r7, #1 # callno,
swi #0 # doesn't touch r0: already there as bar()'s first arg.
Since you always want the value read from memory, you could use an "m" constraint and include a ldr in your inline asm. Then you wouldn't need the register int var asm("r0") trick to avoid a wasted mov for that operand.
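For instance, a minimal sketch of that variant (my code, untested on a real device; status_ptr is a made-up parameter standing in for your some_address):
/* Sketch of the "m"-constraint idea: the load from memory happens inside the
   asm itself, so nothing has to be staged into r0 beforehand. */
__attribute__((noreturn)) static inline void ASM_EXIT_MEM(const int *status_ptr)
{
    register int callno_r7 asm("r7") = 1;   /* __NR_exit */
    asm volatile("ldr r0, %[status]\n\t"    /* r0 = *status_ptr, read at syscall time */
                 "swi #0"
                 :
                 : [status] "m" (*status_ptr), "r" (callno_r7)
                 : "r0", "memory");
    __builtin_unreachable();
}
Called as ASM_EXIT_MEM((const int *)some_address), it reads the status right at the point of the syscall.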
The mov r7, #1 might not always be needed either, which is why I used the register asm() syntax for it, too. If gcc wants a 1 constant in a register somewhere else in a function, it can do it in r7 so it's already there for the ASM_EXIT.
Any time the first or last instructions of a GNU C inline asm statement are mov instructions, there's probably a way to remove them with better constraints.
In the switch_to macro in 32-bit mode, the following code is executed before the __switch_to function is called:
asm volatile("pushfl\n\t" /* save flags */ \
"pushl %%ebp\n\t" /* save EBP */ \
"movl %%esp,%[prev_sp]\n\t" /* save ESP */ \
"movl %[next_sp],%%esp\n\t" /* restore ESP */ \
"movl $1f,%[prev_ip]\n\t" /* save EIP */ \
"pushl %[next_ip]\n\t" /* restore EIP */ \
__switch_canary \
"jmp __switch_to\n" /* regparm call */
The EIP is pushed onto the stack (restore EIP). When __switch_to finishes, there is a ret which returns to that location.
Here is the corresponding 64-bit code:
asm volatile(SAVE_CONTEXT \
"movq %%rsp,%P[threadrsp](%[prev])\n\t" /* save RSP */ \
"movq %P[threadrsp](%[next]),%%rsp\n\t" /* restore RSP */ \
"call __switch_to\n\t"
There, only the RSP is saved and restored. I think that the RIP is already at the top of the stack, but I cannot find the instruction where that is done.
How is the 64 bit context switch, especially for the RIP register, actually done?
Thanks in advance!
In 32 bit kernel, the thread.ip may be one of:
the 1 label in switch_to
ret_from_fork
ret_from_kernel_thread
The return to the proper place is ensured by simulating a call using a push + jmp pair.
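(A quick stand-alone demo, mine rather than kernel code, of why a plain call already records the return address:)
/* "call target" behaves like "push address_of_next_instruction; jmp target",
   so after "call __switch_to" the old task's saved RSP already has the resume
   address on top.  (Demo caveat: the call's push writes below the original RSP,
   so avoid this pattern in leaf functions that keep data in the red zone.) */
static inline void *return_address_demo(void)
{
    void *ra;
    asm volatile("call 1f\n\t"  /* pushes the address of the pop below   */
                 "1:\n\t"
                 "pop %0"       /* ra = what the call pushed             */
                 : "=r" (ra));
    return ra;                  /* the instruction right after the call */
}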
In 64 bit kernel, thread.ip is not used like this. Execution always continues after the call (which used to be the 1 label in the 32 bit case). As such, there is no need to emulate the call, it can be done normally. Dispatching to ret_from_fork happens using a conditional jump after __switch_to returns (you have omitted this part):
#define switch_to(prev, next, last) \
asm volatile(SAVE_CONTEXT \
"movq %%rsp,%P[threadrsp](%[prev])\n\t" /* save RSP */ \
"movq %P[threadrsp](%[next]),%%rsp\n\t" /* restore RSP */ \
"call __switch_to\n\t" \
"movq "__percpu_arg([current_task])",%%rsi\n\t" \
__switch_canary \
"movq %P[thread_info](%%rsi),%%r8\n\t" \
"movq %%rax,%%rdi\n\t" \
"testl %[_tif_fork],%P[ti_flags](%%r8)\n\t" \
"jnz ret_from_fork\n\t" \
RESTORE_CONTEXT \
The ret_from_kernel_thread is incorporated into the ret_from_fork path, using yet another conditional jump in entry_64.S:
ENTRY(ret_from_fork)
DEFAULT_FRAME
LOCK ; btr $TIF_FORK,TI_flags(%r8)
pushq_cfi $0x0002
popfq_cfi # reset kernel eflags
call schedule_tail # rdi: 'prev' task parameter
GET_THREAD_INFO(%rcx)
RESTORE_REST
testl $3, CS-ARGOFFSET(%rsp) # from kernel_thread?
jz 1f