fio supports a large number of I/O engines; all supported engines are present here: https://github.com/axboe/fio/tree/master/engines
I have been trying to understand the internals of how fio works and got stuck on how fio loads all the I/O engines.
For example, I see that every engine has a function to register and unregister itself; sync.c registers and unregisters using the following functions:
fio_syncio_register : https://github.com/axboe/fio/blob/master/engines/sync.c#L448
and fio_syncio_unregister :
https://github.com/axboe/fio/blob/master/engines/sync.c#L461
My question is: who calls these functions?
To find the answer I ran fio under gdb, placing breakpoints on fio_syncio_register and on main. fio_syncio_register gets called even before main, which tells me it has something to do with __libc_csu_init,
and the backtrace confirms that:
(gdb) bt
#0 fio_syncio_register () at engines/sync.c:450
#1 0x000000000047fb9d in __libc_csu_init ()
#2 0x00007ffff6ee27bf in __libc_start_main (main=0x40cd90 <main>, argc=2, argv=0x7fffffffe608, init=0x47fb50 <__libc_csu_init>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fffffffe5f8)
at ../csu/libc-start.c:247
#3 0x000000000040ce79 in _start ()
I spent some time reading about __libc_csu_init and __libc_csu_fini, and every description says that functions decorated with __attribute__((constructor)) are called before main. But in fio's sync.c I don't see fio_syncio_register decorated with __attribute__.
Can someone help me understand how this flow works? Are there other materials I should read to understand this?
Thanks
Interesting question. I couldn't figure out the answer looking at the source, so here are the steps I took:
$ make
$ find . -name 'sync.o'
./engines/sync.o
$ readelf -WS engines/sync.o | grep '\.init'
[12] .init_array INIT_ARRAY 0000000000000000 0021f0 000008 00 WA 0 0 8
[13] .rela.init_array RELA 0000000000000000 0132a0 000018 18 36 12 8
This tells us that global initializers are present in this object. These are called at program startup. What are they?
$ objdump -Dr engines/sync.o | grep -A4 '\.init'
Disassembly of section .init_array:
0000000000000000 <.init_array>:
...
0: R_X86_64_64 .text.startup
Interesting. There is apparently a special .text.startup section. What's in it?
$ objdump -dr engines/sync.o | less
...
Disassembly of section .text.startup:
0000000000000000 <fio_syncio_register>:
0: 48 83 ec 08 sub $0x8,%rsp
4: bf 00 00 00 00 mov $0x0,%edi
5: R_X86_64_32 .data+0x380
9: e8 00 00 00 00 callq e <fio_syncio_register+0xe>
a: R_X86_64_PC32 register_ioengine-0x4
...
Why, it's exactly the function we are looking for. But how did it end up in this special section? To answer that, we can look at the preprocessed source (in retrospect, I should have started with that).
How do we get it? The command line used to compile sync.o is hidden, but looking in the Makefile, we can unhide it with QUIET_CC=''.
$ rm engines/sync.o && make QUIET_CC=''
gcc -o engines/sync.o -std=gnu99 -Wwrite-strings -Wall -Wdeclaration-after-statement -g -ffast-math -D_GNU_SOURCE -include config-host.h -I. -I. -O3 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -DBITS_PER_LONG=64 -DFIO_VERSION='"fio-2.16-5-g915ca"' -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -DFIO_INTERNAL -DFIO_INC_DEBUG -c engines/sync.c
LINK fio
Now we know the command line, and can produce preprocessed file:
$ gcc -E -dD -std=gnu99 -ffast-math -D_GNU_SOURCE -include config-host.h -I. -I. -O3 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -DBITS_PER_LONG=64 -DFIO_VERSION='"fio-2.16-5-g915ca"' -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -DFIO_INTERNAL -DFIO_INC_DEBUG engines/sync.c -o /tmp/sync.i
Looking in /tmp/sync.i, we see:
static void __attribute__((constructor)) fio_syncio_register(void)
{
register_ioengine(&ioengine_rw);
register_ioengine(&ioengine_prw);
...
Hmm, it is __attribute__((constructor)) after all. But how did it get there? Aha! I missed the fio_init on this line:
static void fio_init fio_syncio_register(void)
What does fio_init stand for? Again in /tmp/sync.i:
#define fio_init __attribute__((constructor))
So that is how it works.
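If you want to see the mechanism in isolation, here is a minimal stand-alone sketch. The struct and register_ioengine() below are simplified stand-ins for fio's real engine descriptor and registration function, not copies of them:

#include <stdio.h>

/* Simplified stand-ins for fio's engine descriptor and register_ioengine();
 * the real definitions live in the fio source tree. */
struct ioengine_ops {
    const char *name;
};

static void register_ioengine(struct ioengine_ops *ops)
{
    printf("registering engine: %s\n", ops->name);
}

static struct ioengine_ops ioengine_rw = { .name = "sync" };

/* Same mechanism as fio's fio_init, which expands to __attribute__((constructor)):
 * the compiler emits a pointer to this function into .init_array, and the libc
 * startup code (__libc_csu_init on older glibc) calls it before main(). */
static void __attribute__((constructor)) my_engine_register(void)
{
    register_ioengine(&ioengine_rw);
}

int main(void)
{
    puts("main() runs after all constructors");
    return 0;
}

Compile this with gcc and the constructor's message prints before main's, which is exactly why fio_syncio_register shows up below __libc_csu_init in the gdb backtrace above.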
Related
For a university course, I like to compare the code sizes of functionally similar programs written and compiled with gcc/clang versus hand-written assembly. While re-evaluating how to further shrink the size of some executables, I couldn't trust my eyes when the very same assembly code I assembled/linked 2 years ago has now grown more than 10x in size after building it again (which is true for multiple programs, not only helloworld):
$ make
as -32 -o helloworld-asm-2020.o helloworld-asm-2020.s
ld -melf_i386 -o helloworld-asm-2020 helloworld-asm-2020.o
$ ls -l
-rwxr-xr-x 1 xxx users 708 Jul 18 2018 helloworld-asm-2018*
-rwxr-xr-x 1 xxx users 8704 Nov 25 15:00 helloworld-asm-2020*
-rwxr-xr-x 1 xxx users 4724 Nov 25 15:00 helloworld-asm-2020-n*
-rwxr-xr-x 1 xxx users 4228 Nov 25 15:00 helloworld-asm-2020-n-sstripped*
-rwxr-xr-x 1 xxx users 604 Nov 25 15:00 helloworld-asm-2020.o*
-rw-r--r-- 1 xxx users 498 Nov 25 14:44 helloworld-asm-2020.s
The assembly code is:
.code32
.section .data
msg: .ascii "Hello, world!\n"
len = . - msg
.section .text
.globl _start
_start:
movl $len, %edx # EDX = message length
movl $msg, %ecx # ECX = address of message
movl $1, %ebx # EBX = file descriptor (1 = stdout)
movl $4, %eax # EAX = syscall number (4 = write)
int $0x80 # call kernel by interrupt
# and exit
movl $0, %ebx # return code is zero
movl $1, %eax # exit syscall number (1 = exit)
int $0x80 # call kernel again
The same hello world program, assembled with GNU as and linked with GNU ld (always as 32-bit assembly), was 708 bytes then and has grown to 8.5K now. Even when telling the linker to turn off page alignment (ld -n), it is still almost 4.2K. Stripping/sstripping doesn't pay off either.
readelf tells me that the start of the section headers is much later in the file (byte 468 vs 8464), but I have no idea why. It's running on the same Arch system as in 2018, the Makefile is the same, and I'm not linking against any libraries (especially not libc). I guess something about ld has changed (the object file is still quite small), but what and why?
Disclaimer: I'm building 32-bit executables on an x86-64 machine.
Edit: I'm using GNU binutils (as & ld) version 2.35.1. Here is a base64-encoded archive which includes the source and both executables (the small old one and the large new one):
cat << EOF | base64 -d | tar xj
QlpoOTFBWSZTWVaGrEQABBp////xebj/7//Xf+a8RP/v3/rAAEVARARAeEADBAAAoCAI0AQ+NAam
ytMpCGmpDVPU0aNpGmh6Rpo9QAAeoBoADQaNAADQ09IAACSSGUwaJpTNQGE9QZGhoADQPUAA0AAA
AA0aA4AAAABoAAAAA0GgAAAAZAGgAHAAAAANAAAAAGg0AAAADIA0AASJCBIyE8hHpqPVPUPU/VAa
fqn6o0ep6BB6TQaNGj0j1ABobU00yeU9JYiuVVZKYE+dKNa3wls6x81yBpGAN71NoylDUvNryWiW
E4ER8XkfpaJcPb6ND12ULEqkQX3eaBHP70Apa5uFhWNDy+U3Ekj+OLx5MtDHxQHQLfMcgCHrGayE
Dc76F4ZC4rcRkvTW4S2EbJAsbBGbQxSbx5o48zkyk5iPBBhJowtCSwDBsQBc0koYRSO6SgJNL0Bg
EmCoxCDAs5QkEmTGmQUgqZNIoxsmwDmDQe0NIDI0KjQ64leOr1fVk6AaVhjOAJjLrEYkYy4cDbyS
iXSuILWohNh+PA9Izk0YUM4TQQGEYNgn4oEjGmAByO+kzmDIxEC3Txni6E1WdswBJLKYiANdiQ2K
00jU/zpMzuIhjTbgiBqE24dZWBcNBBAAioiEhCQEIfAR8Vir4zNQZFgvKZa67Jckh6EHZWAWuf6Q
kGy1lOtA2h9fsyD/uPPI2kjvoYL+w54IUKBEEYFBIWRNCNpuyY86v3pNiHEB7XyCX5wDjZUSF2tO
w0PVlY2FQNcLQcbZjmMhZdlCGkVHojuICHMMMB5kQQSZRwNJkYTKz6stT/MTWmozDCcj+UjtB9Cf
CUqAqqRlgJdREtMtSO4S4GpJE2I/P8vuO9ckqCM2+iSJCLRWx2Gi8VSR8BIkVX6stqIDmtG8xSVU
kk7BnC5caZXTIynyI0doXiFY1+/Csw2RUQJroC0lCNiIqVVUkTqTRMYqKNVGtCJ5yfo7e3ZpgECk
PYUEihPU0QVgfQ76JA8Eb16KCbSzP3WYiVApqmfDhUk0aVc+jyBJH13uKztUuva8F4YdbpmzomjG
kSJmP+vCFdKkHU384LdRoO0LdN7VJlywJ2xJdM+TMQ0KhMaicvRqfC5pHSu+gVDVjfiss+S00ikI
DeMgatVKKtcjsVDX09XU3SzowLWXXunnFZp/fP3eN9Rj1ubiLc0utMl3CUUkcYsmwbKKrWhaZiLO
u67kMSsW20jVBcZ5tZUKgdRtu0UleWOs1HK2QdMpyKMxTRHWhhHwMnVEsWIUEjIfFEbWhRTRMJXn
oIBSEa2Q0llTBfJV0LEYEQTBTFsDKIxhgqNwZB2dovl/kiW4TLp6aGXxmoIpVeWTEXqg1PnyKwux
caORGyBhTEPV2G7/O3y+KeAL9mUM4Zjl1DsDKyTZy8vgn31EDY08rY+64Z/LO5tcRJHttMYsz0Fh
CRN8LTYJL/I/4u5IpwoSCtDViIA=
EOF
Update:
When using ld.gold instead of ld.bfd (to which /usr/bin/ld is symlinked by default), the executable size becomes as small as expected:
$ cat Makefile
TARGET=helloworld
all:
as -32 -o ${TARGET}-asm.o ${TARGET}-asm.s
ld.bfd -melf_i386 -o ${TARGET}-asm-bfd ${TARGET}-asm.o
ld.gold -melf_i386 -o ${TARGET}-asm-gold ${TARGET}-asm.o
rm ${TARGET}-asm.o
$ make -q
$ ls -l
total 68
-rw-r--r-- 1 eso eso 200 Dec 1 13:57 Makefile
-rwxrwxr-x 1 eso eso 8700 Dec 1 13:57 helloworld-asm-bfd
-rwxrwxr-x 1 eso eso 732 Dec 1 13:57 helloworld-asm-gold
-rw-r--r-- 1 eso eso 498 Dec 1 13:44 helloworld-asm.s
Maybe I just used gold previously without being aware.
It's not 10x in general; it's page alignment of a couple of sections, as Jester says, due to changes to ld's default linker script for security reasons:
First change: Making sure data from .data isn't present in any of the mapping of .text, so none of that static data is available for ROP / Spectre gadgets in an executable page. (In older ld, that meant the program-headers mapped the same disk-block twice, also into a RW-without-exec segment for the actual .data section. The executable mapping was still read-only.)
More recent change: Separate .rodata from .text into separate segments, again so static data isn't mapped into an executable page. Previously, const char code[]= {...} could be cast to a function pointer and called, without needing mprotect or gcc -z execstack or other tricks, if you wanted to test shellcode that way. (A separate Linux kernel change made -z execstack only apply to the actual stack, not READ_IMPLIES_EXEC.)
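To make that last point concrete, here is a minimal sketch of mine (not from the question) of that shellcode-test trick. With the modern separate-code layout it faults; with the old layout, or when linking with -z noseparate-code, the call goes through:

/* A single 0xc3 byte is the x86-64 "ret" instruction; as a const array it ends
 * up in .rodata. With the old layout (or ld -z noseparate-code), .rodata shares
 * an executable segment with .text and the call returns normally; with the
 * modern separate-code default it segfaults with a protection error. */
static const unsigned char code[] = { 0xc3 };

int main(void)
{
    void (*fn)(void) = (void (*)(void))code;  /* cast data pointer to function pointer */
    fn();                                     /* executes the byte in .rodata, if allowed */
    return 0;
}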
See Why an ELF executable could have 4 LOAD segments? for this history, including the strange fact that .rodata is in a separate segment from the read-only mapping for access to the ELF metadata.
That extra space is just 00 padding and will compress well in a .tar.gz or whatever.
So it has a worst-case upper bound of about 2x 4k extra pages of padding, and tiny executables are close to that worst case.
gcc -Wl,--nmagic will turn off page alignment of sections if you want that for some reason (see the ld(1) man page). I don't know why that doesn't pack everything down to the old size. Perhaps checking the default linker script would shed some light, but it's pretty long; run ld --verbose to see it.
stripping won't help for padding that's part of a section; I think it can only remove whole sections.
ld -z noseparate-code uses the old layout, only 2 total segments to cover the .text and .rodata sections, and the .data and .bss sections. (And the ELF metadata that dynamic linking wants access to.)
Related:
Linking with gcc instead of ld
This question is about ld, but note that gcc -nostdlib also used to default to making a static executable. Modern Linux distros configure GCC with -pie as the default, and GCC won't make a static-pie by default even if there aren't any shared libraries being linked, unlike in -no-pie mode, where it will simply make a static executable in that case. (A static-pie still needs startup code to apply relocations for any absolute addresses.)
So the equivalent of ld directly is gcc -nostdlib -static (which implies -no-pie). Or gcc -nostdlib -no-pie should let it default to -static when there are no shared libs being linked. You can combine this with -Wl,--nmagic and/or -Wl,-z -Wl,noseparate-code.
Also:
A Whirlwind Tutorial on Creating Really Teensy ELF Executables for Linux - eventually making a 45 byte executable, with the machine code for an _exit syscall stuffed into the ELF program header itself.
FASM can make quite small executables, using its mode where it outputs a static executable (not object file) directly with no ELF section metadata, just program headers. (It's a pain to debug with GDB or disassemble with objdump; most tools assume there will be section headers, even though they're not needed to run static executables.)
What is a reasonable minimum number of assembly instructions for a small C program including setup?
What's the difference between "statically linked" and "not a dynamic executable" from Linux ldd? (static vs. static-pie vs. (dynamic) PIE that happens to have no shared libraries.)
Suppose my binaries are running at a customer site where I cannot enable core dump generation using ulimit -c. How do engineers debug segmentation faults in such real-world scenarios? Is there any other method of debugging or identifying crashes without core dumps being generated?
In the past, I had to deal with this kind of restriction on several occasions. A segmentation fault or, more generally, abnormal process termination had to be investigated with the caveat that a core dump was not available.
For Linux, our platform of choice for this walkthrough, a few reasons why a dump might be missing come to mind:
Core dump generation is disabled altogether (using limits.conf or ulimit)
The target directory (current working directory or a directory in /proc/sys/kernel/core_pattern) does not exist or is inaccessible due to filesystem permissions or SELinux
The target filesystem has insufficient disk space, resulting in a partial dump
For all of those, the net result is the same: there's no (valid) core dump to use for analysis. Fortunately, a workaround exists for post-mortem debugging that has the potential to save the day, but given its inherent limitations, your mileage may vary from case to case.
Identifying the Faulting Instruction
The following sample contains a classic use-after-free memory error:
#include <iostream>

struct Test
{
    const std::string &m_value;

    Test(const std::string &value):
        m_value(value)
    {
    }

    void print()
    {
        std::cout << m_value << std::endl;
    }
};

int main()
{
    std::string *value = new std::string("this is a test");
    Test test(*value);
    delete value;
    test.print();
    return 0;
}
After delete value, the std::string reference Test::m_value points to inaccessible memory. Therefore, running it results in a segmentation fault:
$ ./a.out
Segmentation fault
When a process terminates due to an access violation, the Linux kernel creates a log entry accessible via dmesg and, depending on the system's configuration, the syslog (usually /var/log/messages). The example (compiled with -O0) creates the following entry:
$ dmesg | grep segfault
[80440.957955] a.out[7098]: segfault at ffffffffffffffe8 ip 00007f9f2c2b56a3 sp 00007ffc3e75bc48 error 5 in libstdc++.so.6.0.19[7f9f2c220000+e9000]
The corresponding Linux kernel source from arch/x86/mm/fault.c:
printk("%s%s[%d]: segfault at %lx ip %px sp %px error %lx",
loglvl, tsk->comm, task_pid_nr(tsk), address,
(void *)regs->ip, (void *)regs->sp, error_code);
The error (error_code) reveals what the trigger was. It's a CPU-specific bit set (x86). In our case, the value 5 (101 in binary) indicates that the fault happened in user mode (bit 2), that the page represented by the faulting address 0xffffffffffffffe8 was mapped but inaccessible due to page protection (bit 0), and that a read was attempted (bit 1 cleared).
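As a side note, those low bits are easy to decode yourself. Here is a small sketch; the bit meanings are the x86 page-fault error code bits the kernel reports in that message:

#include <stdio.h>

/* Low bits of the x86 page-fault error code as printed in the kernel's
 * "segfault at ... error N" message. */
#define PF_PROT  0x1  /* set: protection violation on a present page; clear: page not present */
#define PF_WRITE 0x2  /* set: write access; clear: read access */
#define PF_USER  0x4  /* set: fault raised while in user mode */

int main(void)
{
    unsigned long err = 5;  /* value from the dmesg entry above: 101 in binary */

    printf("%s, %s, %s mode\n",
           (err & PF_PROT)  ? "protection violation" : "page not present",
           (err & PF_WRITE) ? "write"                : "read",
           (err & PF_USER)  ? "user"                 : "kernel");
    return 0;
}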
The log message also identifies the module that executed the faulting instruction: libstdc++.so.6.0.19. The sample was compiled without optimization, so the call to std::basic_ostream<char, std::char_traits<char> >& std::operator<< <char, std::char_traits<char>, std::allocator<char> >(std::basic_ostream<char, std::char_traits<char> >&, std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) was not inlined:
400bef: e8 4c fd ff ff callq 400940 <_ZStlsIcSt11char_traitsIcESaIcEERSt13basic_ostreamIT_T0_ES7_RK
SbIS4_S5_T1_E#plt>
The STL performs the read access. Knowing those basics, how can we identify where the segmentation fault occurred exactly? The log entry features two essential addresses we need for doing so:
ip 00007f9f2c2b56a3 [...] error 5 in
^^^^^^^^^^^^^^^^
libstdc++.so.6.0.19[7f9f2c220000+e9000]
^^^^^^^^^^^^
The first is the instruction pointer (rip) at the time of the access violation; the second is the address the .text section of the library is mapped to. By subtracting the .text base address from rip, we get the relative address of the instruction in the library and can disassemble the implementation using objdump (you can simply search for the offset):
0x7f9f2c2b56a3-0x7f9f2c220000=0x956a3
$ objdump --demangle -d /usr/lib64/libstdc++.so.6
[...]
00000000000956a0 <std::basic_ostream<char, std::char_traits<char> >& std::operator<< <char, std::char_traits<char>, s
td::allocator<char> >(std::basic_ostream<char, std::char_traits<char> >&, std::basic_string<char, std::char_traits<ch
ar>, std::allocator<char> > const&)##GLIBCXX_3.4>:
956a0: 48 8b 36 mov (%rsi),%rsi
956a3: 48 8b 56 e8 mov -0x18(%rsi),%rdx
^^^^^
956a7: e9 24 4e fc ff jmpq 5a4d0 <std::basic_ostream<char, std::char_traits<char> >& std::__ostream_insert<char, std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&, char const*, long)#plt>
956ac: 0f 1f 40 00 nopl 0x0(%rax)
[...]
Is that the correct instruction? We can consult GDB to confirm our analysis:
Program received signal SIGSEGV, Segmentation fault.
0x00007ffff7b686a3 in std::basic_ostream<char, std::char_traits<char> >& std::operator<< <char, std::char_traits<char>, std::allocator<char> >(std::basic_ostream<char, std::char_traits<char> >&, std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) () from /lib64/libstdc++.so.6
Missing separate debuginfos, use: debuginfo-install glibc-2.17-323.el7_9.x86_64 libgcc-4.8.5-44.el7.x86_64 libstdc++-4.8.5-44.el7.x86_64
(gdb) disass
Dump of assembler code for function _ZStlsIcSt11char_traitsIcESaIcEERSt13basic_ostreamIT_T0_ES7_RKSbIS4_S5_T1_E:
0x00007ffff7b686a0 <+0>: mov (%rsi),%rsi
=> 0x00007ffff7b686a3 <+3>: mov -0x18(%rsi),%rdx
0x00007ffff7b686a7 <+7>: jmpq 0x7ffff7b2d4d0 <_ZSt16__ostream_insertIcSt11char_traitsIcEERSt13basic_ostreamIT_T0_ES6_PKS3_l#plt>
End of assembler dump.
GDB shows the very same instruction. We can also use a debugging session to verify the read address:
(gdb) print /x $rsi-0x18
$2 = 0xffffffffffffffe8
This value matches the read address in the log entry.
Identifying the Callers
So, despite the absence of a core dump, the kernel output enables us to identify the exact location of the segmentation fault. In many scenarios, though, that is far from being enough. For one thing, we're missing the list of calls that got us to that point - the call stack or stack trace.
Without a dump in the backpack, you have two options to get hold of the callers: you can start your process using catchsegv (a glibc utility) or you can implement your own signal handler.
catchsegv serves as a wrapper, generates the stack trace, and also dumps register values and the memory map:
$ catchsegv ./a.out
*** Segmentation fault
Register dump:
RAX: 0000000002158040 RBX: 0000000002158040 RCX: 0000000002158000
[...]
Backtrace:
/lib64/libstdc++.so.6(_ZStlsIcSt11char_traitsIcESaIcEERSt13basic_ostreamIT_T0_ES7_RKSbIS4_S5_T1_E+0x3)[0x7f1794fd36a3]
??:?(_ZN4Test5printEv)[0x400bf4]
??:?(main)[0x400b2d]
/lib64/libc.so.6(__libc_start_main+0xf5)[0x7f179467a555]
??:?(_start)[0x4009e9]
Memory map:
00400000-00401000 r-xp 00000000 08:02 50331747 /home/user/a.out
[...]
7f1794f3e000-7f1795027000 r-xp 00000000 08:02 33600977 /usr/lib64/libstdc++.so.6.0.19
7f1795027000-7f1795227000 ---p 000e9000 08:02 33600977 /usr/lib64/libstdc++.so.6.0.19
7f1795227000-7f179522f000 r--p 000e9000 08:02 33600977 /usr/lib64/libstdc++.so.6.0.19
7f179522f000-7f1795231000 rw-p 000f1000 08:02 33600977 /usr/lib64/libstdc++.so.6.0.19
[...]
How does catchsegv work? It essentially injects a signal handler using LD_PRELOAD and the library libSegFault.so. If your application already happens to install a signal handler for SIGSEGV and you intend to take advantage of libSegFault.so, your signal handler needs to forward the signal to the original handler (as retrieved beforehand with sigaction(SIGSEGV, NULL, &old_handler)).
The second option is to implement the stack trace functionality yourself using a custom signal handler and backtrace(). This allows you to customize the output location and the output itself.
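Here is a minimal sketch of such a handler. It uses backtrace() and backtrace_symbols_fd() from execinfo.h; strictly speaking these are not async-signal-safe, which is usually an acceptable trade-off for a crash handler of last resort:

#include <execinfo.h>
#include <signal.h>
#include <string.h>
#include <unistd.h>

static void segv_handler(int sig)
{
    void *frames[64];
    int count = backtrace(frames, 64);

    /* Write the raw frame addresses (plus module/symbol info where available)
     * to stderr; they can be translated with objdump/addr2line as shown above. */
    backtrace_symbols_fd(frames, count, STDERR_FILENO);

    /* Restore the default action and re-raise, so the process still terminates
     * with SIGSEGV (and produces a core dump where that is enabled). */
    signal(sig, SIG_DFL);
    raise(sig);
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sigemptyset(&sa.sa_mask);
    sa.sa_handler = segv_handler;
    sigaction(SIGSEGV, &sa, NULL);

    /* ... application code that might crash ... */
    return 0;
}

Link with -rdynamic so the executable's own function names show up in the output; the printed addresses can then be translated exactly as shown above.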
Based on that information, we can essentially do the same as before (0x7f1794fd36a3-0x7f1794f3e000=0x956a3). This time around, we can walk back through the callers to dig deeper. The second frame is represented by the following line:
??:?(_ZN4Test5printEv)[0x400bf4]
0x400bf4 is the return address inside Test::print(), i.e. the address the callee (operator<<) returns to; it's located in the executable. We can visualize the call site as follows:
$ objdump --demangle -d ./a.out
[...]
400bea: bf a0 20 60 00 mov $0x6020a0,%edi
400bef: e8 4c fd ff ff callq 400940 <std::basic_ostream<char, std::char_traits<char> >& std::operator<< <char, std:
:char_traits<char>, std::allocator<char> >(std::basic_ostream<char, std::char_traits<char> >&, std::basic_string<char, std::char_trai
ts<char>, std::allocator<char> > const&)#plt>
400bf4: be 70 09 40 00 mov $0x400970,%esi
^^^^^^
400bf9: 48 89 c7 mov %rax,%rdi
400bfc: e8 5f fd ff ff callq 400960 <std::ostream::operator<<(std::ostream& (*)(std::ostream&))#plt>
[...]
Note that the output of objdump matches the address in this instance because we run it against the executable, which has a default base address of 0x400000 on x86_64 - objdump takes that into account. With address space layout randomization (ASLR) enabled (compiled with -fpie, linked with -pie), the base address has to be taken into account as outlined before.
Going back further involves the same steps:
??:?(main)[0x400b2d]
$ objdump --demangle -d ./a.out
[...]
400b1c: e8 af fd ff ff callq 4008d0 <operator delete(void*)#plt>
400b21: 48 8d 45 d0 lea -0x30(%rbp),%rax
400b25: 48 89 c7 mov %rax,%rdi
400b28: e8 a7 00 00 00 callq 400bd4 <Test::print()>
400b2d: b8 00 00 00 00 mov $0x0,%eax
^^^^^^
400b32: eb 2a jmp 400b5e <main+0xb1>
[...]
Until now, we've been manually translating the absolute address to a relative address. Instead, the base address of the module can be passed to objdump via --adjust-vma=<base-address>. That way, the value of rip or a caller's address can be used directly.
Adding Debug Symbols
We've come a long way without a dump. However, another critical puzzle piece for effective debugging is still missing: debug symbols. Without them, it can be difficult to map the assembly to the corresponding source code. Compiling the sample with -O3 and without debug information illustrates the problem:
[98161.650474] a.out[13185]: segfault at ffffffffffffffe8 ip 0000000000400a4b sp 00007ffc9e738270 error 5 in a.out[400000+1000]
As a consequence of inlining, the log entry now points to our executable as the trigger. Using objdump gets us to the following:
400a3e: e8 dd fe ff ff callq 400920 <operator delete(void*)#plt>
400a43: 48 8b 33 mov (%rbx),%rsi
400a46: bf a0 20 60 00 mov $0x6020a0,%edi
400a4b: 48 8b 56 e8 mov -0x18(%rsi),%rdx
^^^^^^
400a4f: e8 4c ff ff ff callq 4009a0 <std::basic_ostream<char, std::char_traits<char> >& std::__ostream_insert<char, std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&, char const*, long)#plt>
400a54: 48 89 c5 mov %rax,%rbp
400a57: 48 8b 00 mov (%rax),%rax
Part of the stream implementation was inlined, making it harder to identify the associated source code. Without symbols, you have to rely on exported symbols, calls (like operator delete(void*)), and the surrounding instructions (mov $0x6020a0 loads the address of std::cout: 00000000006020a0 <std::cout@@GLIBCXX_3.4>) for orientation.
With debug symbols (-g), more context is available by calling objdump with --source:
400a43: 48 8b 33 mov (%rbx),%rsi
operator<<(basic_ostream<_CharT, _Traits>& __os,
const basic_string<_CharT, _Traits, _Alloc>& __str)
{
// _GLIBCXX_RESOLVE_LIB_DEFECTS
// 586. string inserter not a formatted function
return __ostream_insert(__os, __str.data(), __str.size());
400a46: bf a0 20 60 00 mov $0x6020a0,%edi
400a4b: 48 8b 56 e8 mov -0x18(%rsi),%rdx
^^^^^^
400a4f: e8 4c ff ff ff callq 4009a0 <std::basic_ostream<char, std::char_traits<char> >& std::__ostream_insert<char, std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&, char const*, long)#plt>
400a54: 48 89 c5 mov %rax,%rbp
That worked as expected. In the real world, debug symbols are usually not embedded in the binaries; they are managed in separate debuginfo packages. In those circumstances, objdump ignores debug symbols even if they are installed. To address this limitation, the symbols have to be re-added to the affected binary. The following procedure creates detached symbols and re-adds them using eu-unstrip from elfutils, for the benefit of objdump:
# compile with debug info
g++ segv.cxx -O3 -g
# create detached debug info
objcopy --only-keep-debug a.out a.out.debug
# remove debug info from executable
strip -g a.out
# re-add debug info to executable
eu-unstrip ./a.out ./a.out.debug -o ./a.out-debuginfo
# objdump with executable containing debug info
objdump --demangle -d ./a.out-debuginfo --source
Using GDB instead of objdump
Thus far, we've been using objdump because it's usually available, even on production systems. Can we just use GDB instead? Yes, by executing gdb with the module of interest. I use 0x400a4b as in the previous objdump invocation:
$ gdb ./a.out
[...]
(gdb) disass 0x400a4b
Dump of assembler code for function main():
[...]
0x0000000000400a43 <+67>: mov (%rbx),%rsi
0x0000000000400a46 <+70>: mov $0x6020a0,%edi
0x0000000000400a4b <+75>: mov -0x18(%rsi),%rdx
0x0000000000400a4f <+79>: callq 0x4009a0 <_ZSt16__ostream_insertIcSt11char_traitsIcEERSt13basic_ostreamIT_T0_ES6_PKS3_l#plt>
0x0000000000400a54 <+84>: mov %rax,%rbp
In contrast to objdump, GDB can deal with external symbol information without a hitch. disass /m corresponds to objdump --source:
(gdb) disass /m 0x400a4b
Dump of assembler code for function main():
[...]
21 Test test(*value);
22 delete value;
0x0000000000400a25 <+37>: test %rbx,%rbx
0x0000000000400a28 <+40>: je 0x400a43 <main()+67>
0x0000000000400a3b <+59>: mov %rbx,%rdi
0x0000000000400a3e <+62>: callq 0x400920 <_ZdlPv#plt>
23 test.print();
24 return 0;
25 }
0x0000000000400a88 <+136>: add $0x18,%rsp
[...]
End of assembler dump.
In the case of an optimized binary, GDB might skip instructions in this mode if the source code cannot be mapped unambiguously; our instruction at 0x400a4b is not listed. objdump never skips instructions and might skip the source context instead - an approach that I prefer for debugging at this level. This does not mean that GDB is not useful for this task; it's just something to be aware of.
Final Thoughts
Termination reason, registers, memory map, and stack trace. It's all there without even a trace of a core dump. While definitely useful (I fixed quite a few crashes that way), you have to keep in mind that you're still missing valuable information by going that route, most notably the stack and heap as well as per-thread data (thread metadata, registers, stack).
So, whatever the scenario may be, you should seriously consider enabling core dump generation and ensure that dumps can be generated successfully if push comes to shove. Debugging in itself is complex enough; debugging without information you could technically have needlessly increases complexity and turnaround time and, more importantly, significantly lowers the probability that the root cause can be found and addressed in a timely manner.
I'm new to Cygwin, so hopefully someone can point me in the right direction. I would like to be able to choose to link my code against the shared libraries. However, so far it seems that it always uses the static library, and I don't know what exactly I did wrong.
I installed Cygwin on my Windows 10 computer and created a file, test.c, which contains:
#include <stdio.h>
const char msg[] = "Hello, world.";
int main(void){
    puts(msg);
    return 0;
}
I then compiled it with:
$ gcc -Wall -c test.c -o test.o
Then I checked the symbols using:
$ nm test.o
It gives me what I expected:
U __main
0000000000000000 T main
0000000000000000 R msg
U puts
where none of the symbols have been assigned addresses yet. This is all good.
Then, I linked it using the following:
$ gcc -Wall test.o -o test
Then I checked the symbols again:
$ nm test
I got the following:
0000000100401080 T main
0000000100401000 T mainCRTStartup
0000000100401640 T malloc
0000000100403000 R msg
0000000100401650 T posix_memalign
00000001004010d0 T puts
while I was expecting the puts symbol to be something like
U puts@@GLIBC_x.x.x
It seems like I don't have the shared libraries, or I'm not using the process correctly. What am I doing wrong? Thanks.
Using objdump:
objdump -x test.exe
DLL Name: cygwin1.dll
vma: Hint/Ord Member-Name Bound-To
813c 15 __cxa_atexit
814c 46 __main
8158 108 _dll_crt0
8164 115 _impure_ptr
8174 257 calloc
8180 373 cygwin_detach_dll
8194 375 cygwin_internal
81a8 403 dll_dllcrt0
81b8 579 free
81c0 909 malloc
81cc 1015 posix_memalign
81e0 1170 puts
81e8 1196 realloc
So puts is an external symbol imported from the shared library cygwin1.dll and resolved by the loader at run time. Note that you will never see a GLIBC version suffix here: Cygwin programs link against Cygwin's own C library (cygwin1.dll), not glibc.
Edit: added an important note that this is about debugging an MPI application.
The system-installed shared library doesn't have debugging symbols:
$ readelf -S /usr/lib64/libfftw3.so | grep debug
$
I have therefore compiled and installed my own version in my home directory, with debugging enabled (--with-debug CFLAGS=-g):
$ readelf -S ~/lib64/libfftw3.so | grep debug
[26] .debug_aranges PROGBITS 0000000000000000 001d3902
[27] .debug_pubnames PROGBITS 0000000000000000 001d8552
[28] .debug_info PROGBITS 0000000000000000 001ddebd
[29] .debug_abbrev PROGBITS 0000000000000000 003e221c
[30] .debug_line PROGBITS 0000000000000000 00414306
[31] .debug_str PROGBITS 0000000000000000 0044aa23
[32] .debug_loc PROGBITS 0000000000000000 004514de
[33] .debug_ranges PROGBITS 0000000000000000 0046bc82
I have set both LD_LIBRARY_PATH and LD_RUN_PATH to include ~/lib64 first, and running ldd on the program confirms that the local version of the library should be used:
$ ldd a.out | grep fftw
libfftw3.so.3 => /home/narebski/lib64/libfftw3.so.3 (0x00007f2ed9a98000)
The program in question is a parallel numerical application using MPI (Message Passing Interface). Therefore, to run it one must use the mpirun wrapper (e.g. mpirun -np 1 valgrind --tool=callgrind ./a.out). I use the OpenMPI implementation.
Nevertheless, various profilers (the callgrind tool in Valgrind, CPU profiling with google-perftools, and perf) don't find those debugging symbols, resulting in more or less useless output:
callgrind:
$ callgrind_annotate --include=~/prog/src --inclusive=no --tree=none
[...]
--------------------------------------------------------------------------------
Ir file:function
--------------------------------------------------------------------------------
32,765,904,336 ???:0x000000000014e500 [/usr/lib64/libfftw3.so.3.2.4]
31,342,886,912 /home/narebski/prog/src/nonlinearity.F90:__nonlinearity_MOD_calc_nonlinearity_kxky [/home/narebski/prog/bin/a.out]
30,288,261,120 /home/narebski/gene11/src/axpy.F90:__axpy_MOD_axpy_ij [/home/narebski/prog/bin/a.out]
23,429,390,736 ???:0x00000000000fc5e0 [/usr/lib64/libfftw3.so.3.2.4]
17,851,018,186 ???:0x00000000000fdb80 [/usr/lib64/libmpi.so.1.0.1]
google-perftools:
$ pprof --text a.out prog.prof
Total: 8401 samples
842 10.0% 10.0% 842 10.0% 00007f200522d5f0
619 7.4% 17.4% 5025 59.8% calc_nonlinearity_kxky
517 6.2% 23.5% 517 6.2% axpy_ij
427 5.1% 28.6% 3156 37.6% nl_to_direct_xy
307 3.7% 32.3% 1234 14.7% nl_to_fourier_xy_1d
perf events:
$ perf report --sort comm,dso,symbol
# Events: 80K cycles
#
# Overhead Command Shared Object Symbol
# ........ ....... .................... ............................................
#
32.42% a.out libfftw3.so.3.2.4 [.] fdc4c
16.25% a.out 7fddcd97bb22 [.] 7fddcd97bb22
7.51% a.out libatlas.so.0.0.0 [.] ATL_dcopy_xp1yp1aXbX
6.98% a.out a.out [.] __nonlinearity_MOD_calc_nonlinearity_kxky
5.82% a.out a.out [.] __axpy_MOD_axpy_ij
Edit (added 11-07-2011):
I don't know if it is important, but:
$ file /usr/lib64/libfftw3.so.3.2.4
/usr/lib64/libfftw3.so.3.2.4: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, stripped
and
$ file ~/lib64/libfftw3.so.3.2.4
/home/narebski/lib64/libfftw3.so.3.2.4: ELF 64-bit LSB shared object, x86-64, version 1 (GNU/Linux), dynamically linked, not stripped
If /usr/lib64/libfftw3.so.3.2.4 is listed in the callgrind output, then your LD_LIBRARY_PATH=~/lib64 had no effect.
Try again with export LD_LIBRARY_PATH=$HOME/lib64. Also watch out for any shell scripts you invoke, which might reset your environment.
You and Employed Russian are almost certainly right; the mpirun script is messing things up here. Two options:
Most x86 MPI implementations, as a practical matter, treat just running the executable
./a.out
the same as
mpirun -np 1 ./a.out.
They don't have to do this, but OpenMPI certainly does, as do MPICH2 and IntelMPI. So if you can do the debugging serially, you should just be able to run
valgrind --tool=callgrind ./a.out.
However, if you do want to run with mpirun, the issue is probably that your ~/.bashrc (or whatever) is being sourced, undoing your changes to LD_LIBRARY_PATH etc. The easiest fix is to temporarily put your changed environment variables in your ~/.bashrc for the duration of the run.
The way recent profiling tools typically handle this situation is to consult an external, matching non-stripped version of the library.
On Debian-based Linux distros this is typically done by installing the -dbg-suffixed version of a package; on Red Hat-based distros they are named -debuginfo.
In the case of the tools you mentioned above, they will typically Just Work (tm) and find the debug symbols for a library if the debuginfo package has been installed in the standard location.