How to get complete stacktrace from a stripped binary using perf? - linux

I am trying to profile a process written for an embedded Linux system. It is for a product we are developing. The name of the process has been changed to "dummy" for confidentiality reasons. It was suggested to me to use perf to profile dummy, so I tried a bunch of different perf commands to get an idea of which internal functions defined within dummy are in the hot zone and how heavily we are calling into libc and libglib. Here is an example of perf top output while dummy was running at 99% CPU. It was run as "perf top -G -p "
+ 13.75% libc-2.18.so [.] 0x000547a2
+ 7.52% [kernel] [k] __do_softirq
+ 6.97% libglib-2.0.so.0.3600.4 [.] 0x00089d60
+ 2.87% libpthread-2.18.so [.] 0x000085c4
+ 2.71% libpthread-2.18.so [.] pthread_mutex_lock
+ 2.62% libc-2.18.so [.] malloc
+ 2.51% [kernel] [k] vector_swi
+ 1.87% [kernel] [k] _raw_spin_unlock_irqrestore
+ 1.49% [kernel] [k] __getnstimeofday
+ 1.45% dummy [.] chksum_crc32_int
+ 1.34% [kernel] [k] fget_light
+ 1.29% [kernel] [k] _raw_spin_unlock_bh
+ 1.28% [kernel] [k] memcpy
+ 1.24% [adkNetD] [k] adk_netd_clean_nca_queue
.
.
.
I don't understand the output fully, even though I have some ideas. So here are my questions.
What does the last column signify? Isn't it the function being called within the binary or library listed on the left?
Does that mean 0x000547a2 in the first row is a symbol/function defined in libc?
Since all libraries and binaries on the target product are stripped, I understand that perf can't show a resolved stack trace with real function names. But I searched for 0x000547a2 in the debug symbol file built separately for libc (it was delivered together with libc), and I couldn't find that symbol. The closest I could find was
00054750 t _int_malloc
Is there a way I can specify the debug symbol files that I have to perf?
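If you have separate debug files, two things are worth trying (a sketch; the directory layout and debug file name below are assumptions based on how your toolchain delivered them). perf report can be pointed at a directory tree containing unstripped copies of the libraries with --symfs, and individual offsets can be resolved by hand with addr2line against the debug file:
# Assumed layout: ~/symbols mirrors the target rootfs, but with unstripped libraries,
# e.g. ~/symbols/lib/libc-2.18.so containing the debug info.
perf report --symfs ~/symbols    # --symfs also exists for perf top in recent perf versions
# Resolve one offset against the separate debug file (file name is an assumption):
addr2line -f -C -e ~/symbols/lib/libc-2.18.so.debug 0x547a2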

Related

Minimal executable size now 10x larger after linking than 2 years ago, for tiny programs?

For a university course, I like to compare the code sizes of functionally similar programs written and compiled using gcc/clang versus assembly. In the process of re-evaluating how to further shrink the size of some executables, I couldn't trust my eyes when the very same assembly code I assembled/linked 2 years ago has now grown more than 10x in size after building it again (which is true for multiple programs, not only helloworld):
$ make
as -32 -o helloworld-asm-2020.o helloworld-asm-2020.s
ld -melf_i386 -o helloworld-asm-2020 helloworld-asm-2020.o
$ ls -l
-rwxr-xr-x 1 xxx users 708 Jul 18 2018 helloworld-asm-2018*
-rwxr-xr-x 1 xxx users 8704 Nov 25 15:00 helloworld-asm-2020*
-rwxr-xr-x 1 xxx users 4724 Nov 25 15:00 helloworld-asm-2020-n*
-rwxr-xr-x 1 xxx users 4228 Nov 25 15:00 helloworld-asm-2020-n-sstripped*
-rwxr-xr-x 1 xxx users 604 Nov 25 15:00 helloworld-asm-2020.o*
-rw-r--r-- 1 xxx users 498 Nov 25 14:44 helloworld-asm-2020.s
The assembly code is:
.code32
.section .data
msg: .ascii "Hello, world!\n"
len = . - msg
.section .text
.globl _start
_start:
movl $len, %edx # EDX = message length
movl $msg, %ecx # ECX = address of message
movl $1, %ebx # EBX = file descriptor (1 = stdout)
movl $4, %eax # EAX = syscall number (4 = write)
int $0x80 # call kernel by interrupt
# and exit
movl $0, %ebx # return code is zero
movl $1, %eax # exit syscall number (1 = exit)
int $0x80 # call kernel again
The same hello world program, compiled using GNU as and GNU ld (always using 32-bit assembly) was 708 bytes then, and has grown to 8.5K now. Even when telling the linker to turn off page alignment (ld -n), it still has almost 4.2K. stripping/sstripping doesn't pay off either.
readelf tells me that the start of section headers is much later in the code (byte 468 vs 8464), but I have no idea why. It's running on the same arch system as in 2018, the Makefile is the same and I'm not linking against any libraries (especially not libc). I guess something regarding ld has changed due to the fact that the object file is still quite small, but what and why?
Disclaimer: I'm building 32-bit executables on an x86-64 machine.
Edit: I'm using GNU binutils (as & ld) version 2.35.1. Here is a base64-encoded archive which includes the source and both executables (the small old one and the large new one):
cat << EOF | base64 -d | tar xj
QlpoOTFBWSZTWVaGrEQABBp////xebj/7//Xf+a8RP/v3/rAAEVARARAeEADBAAAoCAI0AQ+NAam
ytMpCGmpDVPU0aNpGmh6Rpo9QAAeoBoADQaNAADQ09IAACSSGUwaJpTNQGE9QZGhoADQPUAA0AAA
AA0aA4AAAABoAAAAA0GgAAAAZAGgAHAAAAANAAAAAGg0AAAADIA0AASJCBIyE8hHpqPVPUPU/VAa
fqn6o0ep6BB6TQaNGj0j1ABobU00yeU9JYiuVVZKYE+dKNa3wls6x81yBpGAN71NoylDUvNryWiW
E4ER8XkfpaJcPb6ND12ULEqkQX3eaBHP70Apa5uFhWNDy+U3Ekj+OLx5MtDHxQHQLfMcgCHrGayE
Dc76F4ZC4rcRkvTW4S2EbJAsbBGbQxSbx5o48zkyk5iPBBhJowtCSwDBsQBc0koYRSO6SgJNL0Bg
EmCoxCDAs5QkEmTGmQUgqZNIoxsmwDmDQe0NIDI0KjQ64leOr1fVk6AaVhjOAJjLrEYkYy4cDbyS
iXSuILWohNh+PA9Izk0YUM4TQQGEYNgn4oEjGmAByO+kzmDIxEC3Txni6E1WdswBJLKYiANdiQ2K
00jU/zpMzuIhjTbgiBqE24dZWBcNBBAAioiEhCQEIfAR8Vir4zNQZFgvKZa67Jckh6EHZWAWuf6Q
kGy1lOtA2h9fsyD/uPPI2kjvoYL+w54IUKBEEYFBIWRNCNpuyY86v3pNiHEB7XyCX5wDjZUSF2tO
w0PVlY2FQNcLQcbZjmMhZdlCGkVHojuICHMMMB5kQQSZRwNJkYTKz6stT/MTWmozDCcj+UjtB9Cf
CUqAqqRlgJdREtMtSO4S4GpJE2I/P8vuO9ckqCM2+iSJCLRWx2Gi8VSR8BIkVX6stqIDmtG8xSVU
kk7BnC5caZXTIynyI0doXiFY1+/Csw2RUQJroC0lCNiIqVVUkTqTRMYqKNVGtCJ5yfo7e3ZpgECk
PYUEihPU0QVgfQ76JA8Eb16KCbSzP3WYiVApqmfDhUk0aVc+jyBJH13uKztUuva8F4YdbpmzomjG
kSJmP+vCFdKkHU384LdRoO0LdN7VJlywJ2xJdM+TMQ0KhMaicvRqfC5pHSu+gVDVjfiss+S00ikI
DeMgatVKKtcjsVDX09XU3SzowLWXXunnFZp/fP3eN9Rj1ubiLc0utMl3CUUkcYsmwbKKrWhaZiLO
u67kMSsW20jVBcZ5tZUKgdRtu0UleWOs1HK2QdMpyKMxTRHWhhHwMnVEsWIUEjIfFEbWhRTRMJXn
oIBSEa2Q0llTBfJV0LEYEQTBTFsDKIxhgqNwZB2dovl/kiW4TLp6aGXxmoIpVeWTEXqg1PnyKwux
caORGyBhTEPV2G7/O3y+KeAL9mUM4Zjl1DsDKyTZy8vgn31EDY08rY+64Z/LO5tcRJHttMYsz0Fh
CRN8LTYJL/I/4u5IpwoSCtDViIA=
EOF
Update:
When using ld.gold instead of ld.bfd (to which /usr/bin/ld is symlinked by default), the executable size becomes as small as expected:
$ cat Makefile
TARGET=helloworld
all:
as -32 -o ${TARGET}-asm.o ${TARGET}-asm.s
ld.bfd -melf_i386 -o ${TARGET}-asm-bfd ${TARGET}-asm.o
ld.gold -melf_i386 -o ${TARGET}-asm-gold ${TARGET}-asm.o
rm ${TARGET}-asm.o
$ make -q
$ ls -l
total 68
-rw-r--r-- 1 eso eso 200 Dec 1 13:57 Makefile
-rwxrwxr-x 1 eso eso 8700 Dec 1 13:57 helloworld-asm-bfd
-rwxrwxr-x 1 eso eso 732 Dec 1 13:57 helloworld-asm-gold
-rw-r--r-- 1 eso eso 498 Dec 1 13:44 helloworld-asm.s
Maybe I just used gold previously without being aware.
It's not 10x in general, it's page-alignment of a couple of sections, as Jester says, due to changes to ld's default linker script for security reasons:
First change: Making sure data from .data isn't present in any of the mapping of .text, so none of that static data is available for ROP / Spectre gadgets in an executable page. (In older ld, that meant the program-headers mapped the same disk-block twice, also into a RW-without-exec segment for the actual .data section. The executable mapping was still read-only.)
More recent change: Separate .rodata from .text into separate segments, again so static data isn't mapped into an executable page. Previously, const char code[]= {...} could be cast to a function pointer and called, without needing mprotect or gcc -z execstack or other tricks, if you wanted to test shellcode that way. (A separate Linux kernel change made -z execstack only apply to the actual stack, not READ_IMPLIES_EXEC.)
See Why an ELF executable could have 4 LOAD segments? for this history, including the strange fact that .rodata is in a separate segment from the read-only mapping for access to the ELF metadata.
That extra space is just 00 padding and will compress well in a .tar.gz or whatever.
So it has a worst-case upper bound of about 2x 4k extra pages of padding, and tiny executables are close to that worst case.
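You can see where that padding goes by comparing the program headers of the old and new executables; roughly (a sketch, using the file names from the question):
readelf -lW helloworld-asm-2018    # old layout: a couple of tightly packed LOAD segments
readelf -lW helloworld-asm-2020    # new layout: several page-aligned LOAD segments (R, R E, RW)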
gcc -Wl,--nmagic will turn off page-alignment of sections if you want that for some reason. (see the ld(1) man page) I don't know why that doesn't pack everything down to the old size. Perhaps checking the default linker script would shed some light, but it's pretty long. Run ld --verbose to see it.
stripping won't help for padding that's part of a section; I think it can only remove whole sections.
ld -z noseparate-code uses the old layout, only 2 total segments to cover the .text and .rodata sections, and the .data and .bss sections. (And the ELF metadata that dynamic linking wants access to.)
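For reference, the corresponding link commands look like this (a sketch with the question's file names; -z noseparate-code needs a reasonably recent binutils):
ld -melf_i386 -n -o helloworld-asm-2020-nmagic helloworld-asm-2020.o
ld -melf_i386 -z noseparate-code -o helloworld-asm-2020-oldlayout helloworld-asm-2020.o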
Related:
Linking with gcc instead of ld
This question is about ld, but note that if you're using gcc -nostdlib, that also used to default to making a static executable. But modern Linux distros configure GCC with -pie as the default, and GCC won't make a static-pie by default even if no shared libraries are being linked, unlike in -no-pie mode, where it will simply make a static executable in that case. (A static-pie still needs startup code to apply relocations for any absolute addresses.)
So the equivalent of ld directly is gcc -nostdlib -static (which implies -no-pie). Or gcc -nostdlib -no-pie should let it default to -static when there are no shared libs being linked. You can combine this with -Wl,--nmagic and/or -Wl,-z -Wl,noseparate-code.
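Putting that together, a gcc invocation roughly equivalent to the bare ld link might look like this (a sketch; -m32 assumes a multilib-capable toolchain):
gcc -m32 -nostdlib -static -no-pie -Wl,--nmagic -Wl,-z,noseparate-code -o helloworld-asm-gcc helloworld-asm-2020.s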
Also:
A Whirlwind Tutorial on Creating Really Teensy ELF Executables for Linux - eventually making a 45 byte executable, with the machine code for an _exit syscall stuffed into the ELF program header itself.
FASM can make quite small executables, using its mode where it outputs a static executable (not object file) directly with no ELF section metadata, just program headers. (It's a pain to debug with GDB or disassemble with objdump; most tools assume there will be section headers, even though they're not needed to run static executables.)
What is a reasonable minimum number of assembly instructions for a small C program including setup?
What's the difference between "statically linked" and "not a dynamic executable" from Linux ldd? (static vs. static-pie vs. (dynamic) PIE that happens to have no shared libraries.)

How does fio load its various io engines when it starts?

fio supports a whole bunch of io engines - all supported engines are present here: https://github.com/axboe/fio/tree/master/engines
I have been trying to understand the internals of how fio works and got stuck on how fio loads all the io engines.
For example, I see that every engine has methods to register and unregister itself; for example, sync.c registers and unregisters using the following methods:
fio_syncio_register: https://github.com/axboe/fio/blob/master/engines/sync.c#L448
and fio_syncio_unregister:
https://github.com/axboe/fio/blob/master/engines/sync.c#L461
My question is: who calls these methods?
To find the answer I tried running fio under gdb - I placed a breakpoint in fio_syncio_register and in the main function; fio_syncio_register gets called even before main, which tells me it has something to do with __libc_csu_init,
and backtrace confirmed that
(gdb) bt
#0 fio_syncio_register () at engines/sync.c:450
#1 0x000000000047fb9d in __libc_csu_init ()
#2 0x00007ffff6ee27bf in __libc_start_main (main=0x40cd90 <main>, argc=2, argv=0x7fffffffe608, init=0x47fb50 <__libc_csu_init>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fffffffe5f8)
at ../csu/libc-start.c:247
#3 0x000000000040ce79 in _start ()
I spent some time reading about __libc_csu_init and __libc_csu_fini, and every description talks about methods decorated with __attribute__((constructor)) being called before main, but in the case of fio's sync.c I don't see fio_syncio_register decorated with __attribute__.
Can someone please help me understand how this flow works? Are there other materials I should read to understand this?
Thanks
Interesting question. I couldn't figure out the answer looking at the source, so here are the steps I took:
$ make
$ find . -name 'sync.o'
./engines/sync.o
$ readelf -WS engines/sync.o | grep '\.init'
[12] .init_array INIT_ARRAY 0000000000000000 0021f0 000008 00 WA 0 0 8
[13] .rela.init_array RELA 0000000000000000 0132a0 000018 18 36 12 8
This tells us that global initializers are present in this object. These are called at program startup. What are they?
$ objdump -Dr engines/sync.o | grep -A4 '\.init'
Disassembly of section .init_array:
0000000000000000 <.init_array>:
...
0: R_X86_64_64 .text.startup
Interesting. There is apparently a special .text.startup section. What's in it?
$ objdump -dr engines/sync.o | less
...
Disassembly of section .text.startup:
0000000000000000 <fio_syncio_register>:
0: 48 83 ec 08 sub $0x8,%rsp
4: bf 00 00 00 00 mov $0x0,%edi
5: R_X86_64_32 .data+0x380
9: e8 00 00 00 00 callq e <fio_syncio_register+0xe>
a: R_X86_64_PC32 register_ioengine-0x4
...
Why, it's exactly the function we are looking for. But how did it end up in this special section? To answer that, we can look at the preprocessed source (in retrospect, I should have started with that).
How do we get it? The command line used to compile sync.o is hidden. Looking in the Makefile, we can unhide it with QUIET_CC=''.
$ rm engines/sync.o && make QUIET_CC=''
gcc -o engines/sync.o -std=gnu99 -Wwrite-strings -Wall -Wdeclaration-after-statement -g -ffast-math -D_GNU_SOURCE -include config-host.h -I. -I. -O3 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -DBITS_PER_LONG=64 -DFIO_VERSION='"fio-2.16-5-g915ca"' -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -DFIO_INTERNAL -DFIO_INC_DEBUG -c engines/sync.c
LINK fio
Now we know the command line, and can produce preprocessed file:
$ gcc -E -dD -std=gnu99 -ffast-math -D_GNU_SOURCE -include config-host.h -I. -I. -O3 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -DBITS_PER_LONG=64 -DFIO_VERSION='"fio-2.16-5-g915ca"' -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -DFIO_INTERNAL -DFIO_INC_DEBUG engines/sync.c -o /tmp/sync.i
Looking in /tmp/sync.i, we see:
static void __attribute__((constructor)) fio_syncio_register(void)
{
register_ioengine(&ioengine_rw);
register_ioengine(&ioengine_prw);
...
Hmm, it is __attribute__((constructor)) after all. But how did it get there? Aha! I missed the fio_init on this line:
static void fio_init fio_syncio_register(void)
What does fio_init stand for? Again in /tmp/sync.i:
#define fio_init __attribute__((constructor))
So that is how it works.
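To see the mechanism in isolation, here is a minimal standalone sketch (not fio code, just the same __attribute__((constructor)) trick that the fio_init macro expands to):
$ cat > ctor-demo.c << 'EOF'
#include <stdio.h>

/* stand-in for fio's fio_init macro */
#define demo_init __attribute__((constructor))

static void demo_init demo_register(void)
{
        /* runs before main(), invoked via the .init_array machinery */
        puts("demo_register");
}

int main(void)
{
        puts("main");
        return 0;
}
EOF
$ gcc -o ctor-demo ctor-demo.c && ./ctor-demo
demo_register
main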

How do you get debugging symbols working in linux perf tool inside Docker containers?

I am using Docker containers based on the "ubuntu" tag and cannot get the linux perf tool to display debugging symbols.
Here is what I'm doing to demonstrate the problem.
First I start a container, here with an interactive shell.
$ docker run -t -i ubuntu:14.04 /bin/bash
Then from the container prompt I install the linux perf tool.
$ apt-get update
$ apt-get install -y linux-tools-common linux-tools-generic linux-tools-`uname -r`
I can now use the perf tool. My kernel is 3.16.0-77-generic.
Now I'll install gcc, compile a test program, and try to run it under perf record.
$ apt-get install -y gcc
I paste the following test program into test.c:
#include <stdio.h>
int function(int i) {
int j;
for(j = 2; j <= i / 2; j++) {
if (i % j == 0) {
return 0;
}
}
return 1;
}
int main() {
int i;
for(i = 2; i < 100000; i++) {
if(function(i)) {
printf("%d\n", i);
}
}
}
Then compile, run, and report:
$ gcc -g -O0 test.c && perf record ./a.out && perf report
The output looks something like this:
72.38% a.out a.out [.] 0x0000000000000544
8.37% a.out a.out [.] 0x000000000000055a
8.30% a.out a.out [.] 0x000000000000053d
7.81% a.out a.out [.] 0x0000000000000551
0.40% a.out a.out [.] 0x0000000000000540
This does not have symbols, even though the executable does have symbol information.
Doing the same general steps outside the container works fine, and shows something like this:
96.96% a.out a.out [.] function
0.35% a.out libc-2.19.so [.] _IO_file_xsputn@@GLIBC_2.2.5
0.14% a.out [kernel.kallsyms] [k] update_curr
0.12% a.out [kernel.kallsyms] [k] update_cfs_shares
0.11% a.out [kernel.kallsyms] [k] _raw_spin_lock_irqsave
In the host system I have already turned on kernel symbols by becoming root and doing:
$ echo 0 > /proc/sys/kernel/kptr_restrict
How do I get the containerized version to work properly and show debugging symbols?
Running the container with the -v /:/host flag and running perf report inside the container with the --symfs /host flag fixes it:
96.59% a.out a.out [.] function
2.93% a.out [kernel.kallsyms] [k] 0xffffffff8105144a
0.13% a.out [nvidia] [k] 0x00000000002eda57
0.11% a.out libc-2.19.so [.] vfprintf
0.11% a.out libc-2.19.so [.] 0x0000000000049980
0.09% a.out a.out [.] main
0.02% a.out libc-2.19.so [.] _IO_file_write
0.02% a.out libc-2.19.so [.] write
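Concretely, the workflow looks something like this (a sketch based on the flags described above):
# start the container with the host's root bind-mounted at /host
$ docker run -t -i -v /:/host ubuntu:14.04 /bin/bash
# inside the container: record as before, then point perf at the host mount
$ perf record ./a.out
$ perf report --symfs /host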
Why doesn't it work as-is? The output from perf script sheds some light on this:
...
a.out 24 3374818.880960: cycles: ffffffff81141140 __perf_event__output_id_sample ([kernel.kallsyms])
a.out 24 3374818.881012: cycles: ffffffff817319fd _raw_spin_lock_irqsave ([kernel.kallsyms])
a.out 24 3374818.882217: cycles: ffffffff8109aba3 ttwu_do_activate.constprop.75 ([kernel.kallsyms])
a.out 24 3374818.884071: cycles: 40053d [unknown] (/var/lib/docker/aufs/diff/9bd2d4389cf7ad185405245b1f5c7d24d461bd565757880bfb4f970d3f4f7915/a.out)
a.out 24 3374818.885329: cycles: 400544 [unknown] (/var/lib/docker/aufs/diff/9bd2d4389cf7ad185405245b1f5c7d24d461bd565757880bfb4f970d3f4f7915/a.out)
...
Note the /var/lib/docker/aufs path. That's from the host so it won't exist in the container and you need to help perf report to locate it. This likely happens because the mmap events are tracked by perf outside of any cgroup and perf does not attempt to remap the paths.
Another option is to run perf host-side, e.g. sudo perf record -a docker run -ti <container name>. But the collection has to be system-wide here (the -a flag), as containers are spawned by the docker daemon process, which is not in the process hierarchy of the docker client tool we run here.
Another way that doesn't require changing how you run the container (so you can profile an already running process) is to mount the container's root on the host using bindfs:
bindfs /proc/$(docker inspect --format {{.State.Pid}} $CONTAINER_ID)/root /foo
Then run perf report as perf report --symfs /foo
You'll have to run perf record system wide, but you can restrict it to only collect events for the specific container:
perf record -g -a -F 100 -e cpu-clock -G docker/$(docker inspect --format {{.Id}} $CONTAINER_ID) sleep 90

What does perf's option to measure events at user and kernel levels mean?

The Linux perf tool provides access to CPU event counters. It lets you specify the events to be counted and when to count those events.
https://perf.wiki.kernel.org/index.php/Tutorial
By default, events are measured at both user and kernel levels:
perf stat -e cycles dd if=/dev/zero of=/dev/null count=100000
To measure only at the user level, it is necessary to pass a modifier:
perf stat -e cycles:u dd if=/dev/zero of=/dev/null count=100000
To measure both user and kernel (explicitly):
perf stat -e cycles:uk dd if=/dev/zero of=/dev/null count=100000
From this, I expected that cycles:u meant "only count events while running non-kernel code" and that recorded counts would not map to kernel symbols, but that doesn't seem to be the case.
Here's an example:
perf record -e cycles:u du -sh ~
[...]
perf report --stdio -i perf.data
[...]
9.24% du [kernel.kallsyms] [k] system_call
[...]
0.70% du [kernel.kallsyms] [k] page_fault
[...]
If I do the same but use cycles:uk then I do get more kernel symbols reported so the event modifiers do have an effect. Using cycles:k produces reports with almost exclusively kernel symbols but it does include a few libc symbols.
What's going on here? Is this the expected behavior? Am I misunderstanding the language used in the linked document?
The linked document also includes this table which uses slightly different descriptions if that helps:
Modifiers | Description | Example
----------+--------------------------------------+----------
u | monitor at priv level 3, 2, 1 (user) | event:u
k | monitor at priv level 0 (kernel) | event:k
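A quick way to see the modifiers side by side is to request all three variants in one perf stat run (a sketch; the :u and :k counts should roughly add up to the :uk count):
perf stat -e cycles:u,cycles:k,cycles:uk dd if=/dev/zero of=/dev/null count=100000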
Edit: more info:
CPU is an Intel Haswell. The specific model is an i7-5820K.
Distro is up to date Arch Linux (rolling release schedule) with kernel 4.1.6.
The version of perf itself is 4.2.0.
Edit2:
More output from example runs. As you can see, cycles:u mostly reports non-kernel symbols. I know that perf sometimes mis-attributes counts to a neighboring instruction when you look at the annotated assembly output. Maybe this is related?
cycles:u
# perf record -e cycles:u du -sh ~
179G /home/khouli
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.116 MB perf.data (2755 samples) ]
# sudo perf report --stdio -i perf.data
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 2K of event 'cycles:u'
# Event count (approx.): 661835375
#
# Overhead Command Shared Object Symbol
# ........ ....... ................. ..............................
#
11.02% du libc-2.22.so [.] _int_malloc
9.73% du libc-2.22.so [.] _int_free
9.24% du du [.] fts_read
9.23% du [kernel.kallsyms] [k] system_call
4.17% du libc-2.22.so [.] strlen
4.17% du libc-2.22.so [.] __memmove_sse2
3.47% du libc-2.22.so [.] __readdir64
3.33% du libc-2.22.so [.] malloc_consolidate
2.87% du libc-2.22.so [.] malloc
1.83% du libc-2.22.so [.] msort_with_tmp.part.0
1.63% du libc-2.22.so [.] __memcpy_avx_unaligned
1.63% du libc-2.22.so [.] __getdents64
1.52% du libc-2.22.so [.] free
1.47% du libc-2.22.so [.] __memmove_avx_unaligned
1.44% du du [.] 0x000000000000e609
1.41% du libc-2.22.so [.] _wordcopy_bwd_dest_aligned
1.19% du du [.] 0x000000000000e644
0.93% du libc-2.22.so [.] __fxstatat64
0.85% du libc-2.22.so [.] do_fcntl
0.73% du [kernel.kallsyms] [k] page_fault
[lots more symbols, almost all in du...]
cycles:uk
# perf record -e cycles:uk du -sh ~
179G /home/khouli
[ perf record: Woken up 1 times to write data ]
[ext4] with build id 0f47443e26a238299e8a5963737da23dd3530376 not found,
continuing without symbols
[ perf record: Captured and wrote 0.120 MB perf.data (2856 samples) ]
# perf report --stdio -i perf.data
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 2K of event 'cycles:uk'
# Event count (approx.): 3118065867
#
# Overhead Command Shared Object Symbol
# ........ ....... ................. ..............................................
#
13.80% du [kernel.kallsyms] [k] __d_lookup_rcu
6.16% du [kernel.kallsyms] [k] security_inode_getattr
2.52% du [kernel.kallsyms] [k] str2hashbuf_signed
2.43% du [kernel.kallsyms] [k] system_call
2.35% du [kernel.kallsyms] [k] half_md4_transform
2.31% du [kernel.kallsyms] [k] ext4_htree_store_dirent
1.97% du [kernel.kallsyms] [k] copy_user_enhanced_fast_string
1.96% du libc-2.22.so [.] _int_malloc
1.93% du du [.] fts_read
1.90% du [kernel.kallsyms] [k] system_call_after_swapgs
1.83% du libc-2.22.so [.] _int_free
1.44% du [kernel.kallsyms] [k] link_path_walk
1.33% du libc-2.22.so [.] __memmove_sse2
1.19% du [kernel.kallsyms] [k] _raw_spin_lock
1.19% du [kernel.kallsyms] [k] __fget_light
1.12% du [kernel.kallsyms] [k] kmem_cache_alloc
1.12% du [kernel.kallsyms] [k] __ext4_check_dir_entry
1.05% du [kernel.kallsyms] [k] lockref_get_not_dead
1.02% du [kernel.kallsyms] [k] generic_fillattr
0.95% du [kernel.kallsyms] [k] do_dentry_open
0.95% du [kernel.kallsyms] [k] path_init
0.95% du [kernel.kallsyms] [k] lockref_put_return
0.91% du libc-2.22.so [.] do_fcntl
0.91% du [kernel.kallsyms] [k] ext4_getattr
0.91% du [kernel.kallsyms] [k] rb_insert_color
0.88% du [kernel.kallsyms] [k] __kmalloc
0.88% du libc-2.22.so [.] __readdir64
0.88% du libc-2.22.so [.] malloc
0.84% du [kernel.kallsyms] [k] ext4fs_dirhash
0.84% du [kernel.kallsyms] [k] __slab_free
0.84% du [kernel.kallsyms] [k] in_group_p
0.81% du [kernel.kallsyms] [k] get_empty_filp
0.77% du libc-2.22.so [.] malloc_consolidate
[more...]
cycles:k
# perf record -e cycles:k du -sh ~
179G /home/khouli
[ perf record: Woken up 1 times to write data ]
[ext4] with build id 0f47443e26a238299e8a5963737da23dd3530376 not found, continuing
without symbols
[ perf record: Captured and wrote 0.118 MB perf.data (2816 samples) ]
# perf report --stdio -i perf.data
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 2K of event 'cycles:k'
# Event count (approx.): 2438426748
#
# Overhead Command Shared Object Symbol
# ........ ....... ................. ..............................................
#
17.11% du [kernel.kallsyms] [k] __d_lookup_rcu
6.97% du [kernel.kallsyms] [k] security_inode_getattr
4.22% du [kernel.kallsyms] [k] half_md4_transform
3.10% du [kernel.kallsyms] [k] str2hashbuf_signed
3.01% du [kernel.kallsyms] [k] system_call_after_swapgs
2.59% du [kernel.kallsyms] [k] ext4_htree_store_dirent
2.24% du [kernel.kallsyms] [k] copy_user_enhanced_fast_string
2.14% du [kernel.kallsyms] [k] lockref_get_not_dead
1.86% du [kernel.kallsyms] [k] ext4_getattr
1.85% du [kernel.kallsyms] [k] kfree
1.68% du [kernel.kallsyms] [k] __ext4_check_dir_entry
1.53% du [kernel.kallsyms] [k] __fget_light
1.34% du [kernel.kallsyms] [k] link_path_walk
1.34% du [kernel.kallsyms] [k] path_init
1.22% du [kernel.kallsyms] [k] __kmalloc
1.22% du [kernel.kallsyms] [k] kmem_cache_alloc
1.14% du [kernel.kallsyms] [k] do_dentry_open
1.11% du [kernel.kallsyms] [k] ext4_readdir
1.07% du [kernel.kallsyms] [k] __find_get_block_slow
1.07% du libc-2.22.so [.] do_fcntl
1.04% du [kernel.kallsyms] [k] _raw_spin_lock
0.99% du [kernel.kallsyms] [k] _raw_read_lock
0.95% du libc-2.22.so [.] __fxstatat64
0.94% du [kernel.kallsyms] [k] rb_insert_color
0.94% du [kernel.kallsyms] [k] generic_fillattr
0.93% du [kernel.kallsyms] [k] ext4fs_dirhash
0.93% du [kernel.kallsyms] [k] find_get_entry
0.89% du [kernel.kallsyms] [k] rb_next
0.89% du [kernel.kallsyms] [k] is_dx_dir
0.89% du [kernel.kallsyms] [k] in_group_p
0.89% du [kernel.kallsyms] [k] cp_new_stat
[more...]
perf_event_paranoid
$ cat /proc/sys/kernel/perf_event_paranoid
1
kernel config for perf
$ cat /proc/config.gz | gunzip | grep -A70 'Kernel Perf'
# Kernel Performance Events And Counters
#
CONFIG_PERF_EVENTS=y
# CONFIG_DEBUG_PERF_USE_VMALLOC is not set
CONFIG_VM_EVENT_COUNTERS=y
CONFIG_SLUB_DEBUG=y
# CONFIG_COMPAT_BRK is not set
# CONFIG_SLAB is not set
CONFIG_SLUB=y
CONFIG_SLUB_CPU_PARTIAL=y
CONFIG_SYSTEM_TRUSTED_KEYRING=y
CONFIG_PROFILING=y
CONFIG_TRACEPOINTS=y
CONFIG_OPROFILE=m
# CONFIG_OPROFILE_EVENT_MULTIPLEX is not set
CONFIG_HAVE_OPROFILE=y
CONFIG_OPROFILE_NMI_TIMER=y
CONFIG_KPROBES=y
CONFIG_JUMP_LABEL=y
CONFIG_KPROBES_ON_FTRACE=y
CONFIG_UPROBES=y
# CONFIG_HAVE_64BIT_ALIGNED_ACCESS is not set
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
CONFIG_ARCH_USE_BUILTIN_BSWAP=y
CONFIG_KRETPROBES=y
CONFIG_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_IOREMAP_PROT=y
CONFIG_HAVE_KPROBES=y
CONFIG_HAVE_KRETPROBES=y
CONFIG_HAVE_OPTPROBES=y
CONFIG_HAVE_KPROBES_ON_FTRACE=y
CONFIG_HAVE_ARCH_TRACEHOOK=y
CONFIG_HAVE_DMA_ATTRS=y
CONFIG_HAVE_DMA_CONTIGUOUS=y
CONFIG_GENERIC_SMP_IDLE_THREAD=y
CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
CONFIG_HAVE_CLK=y
CONFIG_HAVE_DMA_API_DEBUG=y
CONFIG_HAVE_HW_BREAKPOINT=y
CONFIG_HAVE_MIXED_BREAKPOINTS_REGS=y
CONFIG_HAVE_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_PERF_EVENTS_NMI=y
CONFIG_HAVE_PERF_REGS=y
CONFIG_HAVE_PERF_USER_STACK_DUMP=y
CONFIG_HAVE_ARCH_JUMP_LABEL=y
CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y
CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y
CONFIG_HAVE_CMPXCHG_LOCAL=y
CONFIG_HAVE_CMPXCHG_DOUBLE=y
CONFIG_ARCH_WANT_COMPAT_IPC_PARSE_VERSION=y
CONFIG_ARCH_WANT_OLD_COMPAT_IPC=y
CONFIG_HAVE_ARCH_SECCOMP_FILTER=y
CONFIG_SECCOMP_FILTER=y
CONFIG_HAVE_CC_STACKPROTECTOR=y
CONFIG_CC_STACKPROTECTOR=y
# CONFIG_CC_STACKPROTECTOR_NONE is not set
# CONFIG_CC_STACKPROTECTOR_REGULAR is not set
CONFIG_CC_STACKPROTECTOR_STRONG=y
CONFIG_HAVE_CONTEXT_TRACKING=y
CONFIG_HAVE_VIRT_CPU_ACCOUNTING_GEN=y
CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y
CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE=y
CONFIG_HAVE_ARCH_HUGE_VMAP=y
CONFIG_HAVE_ARCH_SOFT_DIRTY=y
CONFIG_MODULES_USE_ELF_RELA=y
CONFIG_HAVE_IRQ_EXIT_ON_IRQ_STACK=y
CONFIG_ARCH_HAS_ELF_RANDOMIZE=y
CONFIG_OLD_SIGSUSPEND3=y
CONFIG_COMPAT_OLD_SIGACTION=y
I understand your question to be: Why does perf for user mode recording show values from inside the kernel? Well, it's doing exactly what it's supposed to do, from a "system accounting" standpoint.
You did: perf record -e cycles:u du -sh ~ and you got stats on system_call and page_fault and you're wondering why that happened?
When you did the du, it had to traverse the file system. In doing so, it issued system calls for things it needed (e.g. open, readdir, etc.). du initiated these things for you, so it got "charged back" for them. Likewise, du page faulted a number of times.
perf is keeping track of any activity caused by a given process/program, even if it happens inside kernel address space. In other words, the program requested the activity, and the kernel performed it at the program's behest, so it gets charged appropriately. The kernel had to do "real work" to do FS I/O and/or resolve page faults, so you must "pay for the work you commissioned". Anything that a given program does that consumes system resources gets accounted for.
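If you want to see concretely what kernel work du commissions on its own behalf, a syscall summary makes the "charge back" idea tangible (a sketch; the exact counts will vary from system to system):
$ strace -c -f du -sh ~ > /dev/null
# prints a summary table of the syscalls du issued (openat, getdents64, newfstatat, ...)
# together with call counts and the time spent in each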
This is the standard accounting model for computer systems, dating back to the 1960's when people actually rented out time on mainframe computers. You got charged for everything you did [just like a lawyer :-)], directly or indirectly.
* charge per minute of connect time
* charge per cpu cycle consumed in user program
* charge per cpu cycle executed for program in kernel space
* charge for each network packet sent/received
* charge for any page fault caused by your program
* charge for each disk block read/written, either to a file or to the paging/swap disk
At the end of the month, they mailed you an itemized bill [just like a utility bill], and you had to pay:
Real money.
Note that there are some things that will not be charged for. For example, let's assume your program is compute bound but does not do [much] I/O and uses a relatively small amount of memory (i.e. it does not cause a page fault on its own). The program will get charged for user space CPU usage.
The OS may have to swap out (i.e. steal) one or more of your pages to make room for some other memory hog program. After the hog runs, your program will run again. Your program will need to fault back in the page or pages that were stolen from it.
Your program will not be charged for these because your program did not cause the page fault. In other words, for every page "stolen" from you, you're given a "credit" for that page when your program has to fault it back in.
Also, when trying to run a different process, the kernel does not charge the CPU time consumed by its process scheduler to any process. This is considered "overhead" and/or standard operating costs. For example, if you have a checking account with a bank, they don't charge you for the upkeep costs on the local branch office that you visit.
So perf, while useful for measuring performance, uses an accounting model to gather its data.
It's like a car. You can drive your car to the store and pick up something, and you will consume some gasoline. Or, you can ask a friend to drive your car to the store. In either case, you have to pay for the gasoline because you drove the car, or because [when the friend drove the car] the gasoline was consumed when doing something for you. In this case, the kernel is your friend :-)
UPDATE:
My source for this is the source [kernel source]. And, I've been doing kernel programming for 40 years.
There are two basic types of perf counters. The "macro" ones, such as page faults and syscall counts, are events the kernel itself can generate.
The other type is the "micro" or "nano" kind. These come from the x86 PMC architecture and count things like "cache miss", "branch mispredict", "data fetch mispredict", etc. that the kernel can't compute on its own.
The PMC counters just free-run. That's why you get global stats, regardless of what recording mode you're using. The kernel can interrogate them periodically, but it can't get control every time a PMC is incremented. Want the global/system-wide and/or per-CPU values for these? Just execute the appropriate RDPMC instruction.
To keep track of PMCs for a process: when a task is started, the kernel does RDPMC and saves the value in the task struct [for as many counters as are marked "of interest"] as the "PMC value at start". When the given CPU core is rescheduled and the scheduler picks the "next" task, it reads the current PMC value, takes the difference between that and the value it stored in the "old" task's block when that task was started, and bumps up that task's "total count" for the PMC. The current value then becomes the new task's "PMC value at start".
In Linux, when a task/context switch occurs, it generates two perf events, one for "entering new task on cpu X" and one for "stopping old task on cpu X".
Your question was why monitoring for "user mode" produced kernel addresses. That's because when recording (and this is the kernel at work, not the perf program), the temporary data [as mentioned above] is kept against the current task/context until a task switch actually occurs.
The key thing to note is that this context does not change simply because a syscall was executed--only when a context switch occurs. For example, the gettimeofday syscall just gets the wall clock time and returns it to user space. It does not cause a context switch, so any perf event it kicks off is charged to the active/current context. It doesn't matter whether the event comes from kernel space or user space.
As a further example, suppose the process does a file read syscall. In traversing the file handle data, inode, etc., it may generate several perf events. It will also probably generate a few more cache misses and other PMC counter bumps. If the desired block is already in the FS block cache, the syscall will just do a copy_to_user and then reenter user space: no expensive context switch, and no PMC difference calculation, because the PMC-value-at-start is still valid.
One of the reasons that it's done this way is performance [of the perf mechanism]. If you did the PMC save/restore immediately upon crossing to kernel space after a syscall starts [to separate kernel stats from user stats for a given process, as you'd like], the overhead would be enormous. You wouldn't be performance measuring the base kernel. You'd be performance measuring the kernel + a lot of perf overhead.
When I had to do performance analysis of a commercial hard realtime system based on Linux, I developed my own performance logging system. The system had 8 CPU cores, interacting with multiple custom hardware boards on the PCIe bus with multiple FPGAs. The FPGAs also had custom firmware running inside a Microblaze. Event logs from user space, kernel space, and microblaze could all be time coordinated to nanosecond resolution and the time to store an event record was 70ns.
To me, Linux's perf mechanism is a bit crude and bloated. If one were to use it to troubleshoot a performance/timing bug that involves race conditions, possible lock/unlock problems, etc., it might be problematic. That is, run the system without perf and you get the bug. Turn on perf, and you don't, because you've changed the fundamental characteristic timing of the system. Turn perf off, and the timing bug reappears.
What's going on here? Is this the expected behavior? Am I misunderstanding the language used in the linked document?
There is a wide difference between the kernel and processor stated in the link and the ones used for this evaluation.
The introduction section of https://perf.wiki.kernel.org/index.php/Tutorial states that "Output was obtained on a Ubuntu 11.04 system with kernel 2.6.38-8-generic results running on an HP 6710b with dual-core Intel Core2 T7100 CPU", whereas the current evaluation is on an Intel Haswell (i7-5820K, 6 cores) running the Arch Linux distro with kernel 4.1.6.
One option to rule out a difference between the observed behavior and the documentation is to test on a system with a configuration equivalent to the one mentioned in the introduction section of that tutorial.

How to make profilers (valgrind, perf, pprof) pick up / use local version of library with debugging symbols when using mpirun?

Edit: added the important note that this is about debugging an MPI application
The system-installed shared library doesn't have debugging symbols:
$ readelf -S /usr/lib64/libfftw3.so | grep debug
$
I have therefore compiled and installed my own version in my home directory, with debugging enabled (--with-debug CFLAGS=-g):
$ readelf -S ~/lib64/libfftw3.so | grep debug
[26] .debug_aranges PROGBITS 0000000000000000 001d3902
[27] .debug_pubnames PROGBITS 0000000000000000 001d8552
[28] .debug_info PROGBITS 0000000000000000 001ddebd
[29] .debug_abbrev PROGBITS 0000000000000000 003e221c
[30] .debug_line PROGBITS 0000000000000000 00414306
[31] .debug_str PROGBITS 0000000000000000 0044aa23
[32] .debug_loc PROGBITS 0000000000000000 004514de
[33] .debug_ranges PROGBITS 0000000000000000 0046bc82
I have set both LD_LIBRARY_PATH and LD_RUN_PATH to include ~/lib64 first, and ldd confirms that the local version of the library should be used:
$ ldd a.out | grep fftw
libfftw3.so.3 => /home/narebski/lib64/libfftw3.so.3 (0x00007f2ed9a98000)
The program in question is a parallel numerical application using MPI (Message Passing Interface). Therefore, to run this application one must use the mpirun wrapper (e.g. mpirun -np 1 valgrind --tool=callgrind ./a.out). I use the OpenMPI implementation.
Nevertheless, various profilers (the callgrind tool in Valgrind, CPU profiling with google-perftools, and perf) don't find those debugging symbols, resulting in more or less useless output:
callgrind:
$ callgrind_annotate --include=~/prog/src --inclusive=no --tree=none
[...]
--------------------------------------------------------------------------------
Ir file:function
--------------------------------------------------------------------------------
32,765,904,336 ???:0x000000000014e500 [/usr/lib64/libfftw3.so.3.2.4]
31,342,886,912 /home/narebski/prog/src/nonlinearity.F90:__nonlinearity_MOD_calc_nonlinearity_kxky [/home/narebski/prog/bin/a.out]
30,288,261,120 /home/narebski/gene11/src/axpy.F90:__axpy_MOD_axpy_ij [/home/narebski/prog/bin/a.out]
23,429,390,736 ???:0x00000000000fc5e0 [/usr/lib64/libfftw3.so.3.2.4]
17,851,018,186 ???:0x00000000000fdb80 [/usr/lib64/libmpi.so.1.0.1]
google-perftools:
$ pprof --text a.out prog.prof
Total: 8401 samples
842 10.0% 10.0% 842 10.0% 00007f200522d5f0
619 7.4% 17.4% 5025 59.8% calc_nonlinearity_kxky
517 6.2% 23.5% 517 6.2% axpy_ij
427 5.1% 28.6% 3156 37.6% nl_to_direct_xy
307 3.7% 32.3% 1234 14.7% nl_to_fourier_xy_1d
perf events:
$ perf report --sort comm,dso,symbol
# Events: 80K cycles
#
# Overhead Command Shared Object Symbol
# ........ ....... .................... ............................................
#
32.42% a.out libfftw3.so.3.2.4 [.] fdc4c
16.25% a.out 7fddcd97bb22 [.] 7fddcd97bb22
7.51% a.out libatlas.so.0.0.0 [.] ATL_dcopy_xp1yp1aXbX
6.98% a.out a.out [.] __nonlinearity_MOD_calc_nonlinearity_kxky
5.82% a.out a.out [.] __axpy_MOD_axpy_ij
Edit Added 11-07-2011:
I don't know if it is important, but:
$ file /usr/lib64/libfftw3.so.3.2.4
/usr/lib64/libfftw3.so.3.2.4: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, stripped
and
$ file ~/lib64/libfftw3.so.3.2.4
/home/narebski/lib64/libfftw3.so.3.2.4: ELF 64-bit LSB shared object, x86-64, version 1 (GNU/Linux), dynamically linked, not stripped
If /usr/lib64/libfftw3.so.3.2.4 is listed in callgrind output, then your LD_LIBRARY_PATH=~/lib64 had no effect.
Try again with export LD_LIBRARY_PATH=$HOME/lib64. Also watch out for any shell scripts you invoke, which might reset your environment.
You and Employed Russian are almost certainly right; the mpirun script is messing things up here. Two options:
Most x86 MPI implementations, as a practical matter, treat just running the executable
./a.out
the same as
mpirun -np 1 ./a.out.
They don't have to do this, but OpenMPI certainly does, as do MPICH2 and IntelMPI. So if you can do the debugging serially, you should just be able to run
valgrind --tool=callgrind ./a.out.
However, if you do want to run with mpirun, the issue is probably that your ~/.bashrc
(or whatever) is being sourced, undoing your changes to LD_LIBRARY_PATH etc. The easiest fix is just to temporarily put your changed environment variables in your ~/.bashrc for the duration of the run.
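With OpenMPI specifically, you can also check what the launched processes actually see, and explicitly export the variable through mpirun (a sketch, assuming OpenMPI's -x option):
$ export LD_LIBRARY_PATH=$HOME/lib64
$ mpirun -np 1 env | grep LD_LIBRARY_PATH          # what the MPI-launched process sees
$ mpirun -np 1 -x LD_LIBRARY_PATH valgrind --tool=callgrind ./a.out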
The way recent profiling tools typically handle this situation is to consult an external, matching non-stripped version of the library.
On Debian-based Linux distros this is typically done by installing the -dbg suffixed version of a package; on Red Hat-based distros they are named -debuginfo.
In the case of the tools you mentioned above, they will typically Just Work (tm) and find the debug symbols for a library if the debug info package has been installed in the standard location.
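For example, something along these lines (the package names are assumptions; check your distro's naming):
# Debian/Ubuntu style
$ sudo apt-get install libfftw3-3-dbg
# Fedora/RHEL style (debuginfo-install comes from yum-utils)
$ sudo debuginfo-install fftw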
