Weird Backtrace in Perf - linux

I used the following command to extract backtraces leading to user level L3-misses in a simple evince benchmark:
sudo perf record -d --call-graph dwarf -c 10000 -e mem_load_uops_retired.l3_miss:uppp /opt/evince-3.28.4/bin/evince
As you can see, the sampling period is quite large (10000 events between consecutive samples). For this experiment, the output of perf script included some samples similar to this one:
EvJobScheduler 27529 26441.375932: 10000 mem_load_uops_retired.l3_miss:uppp: 7fffcd5d8ec0 5080022 N/A|SNP N/A|TLB N/A|LCK N/A
7ffff17bec7f bits_image_fetch_separable_convolution_affine+0x2df (inlined)
7ffff17bec7f bits_image_fetch_separable_convolution_affine_pad_x8r8g8b8+0x2df (/usr/lib/x86_64-linux-gnu/libpixman-1.so.0.34.0)
7ffff17d1fd1 general_composite_rect+0x301 (/usr/lib/x86_64-linux-gnu/libpixman-1.so.0.34.0)
ffffffffffffffff [unknown] ([unknown])
At the bottom of the backtrace there is a symbol called [unknown], which seems OK. But the frame right above it goes straight into general_composite_rect(). Is this backtrace OK?
AFAIK, the outermost caller in the backtrace should be something like _start() or __GI___clone(), but this backtrace is not in that form. What is wrong?
Is there any way to resolve the issue? Are the truncated backtraces (the parts that are present) reliable?

TL;DR: perf's backtracing may stop at a function if there is no frame pointer saved on the stack (fp method) or no CFI tables (dwarf method). Recompile the libraries with -fno-omit-frame-pointer or with -g, or install their debuginfo packages. With release binaries and libraries, perf will often stop the backtrace early, with no chance of reaching the top functions main(), _start or clone()/start_thread().
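For example (command lines are illustrative, not a recipe for your exact setup):
gcc -O2 -g -fno-omit-frame-pointer -o myapp myapp.c
perf record --call-graph fp ./myapp
or keep --call-graph dwarf and install the debug symbol packages for the libraries that show up truncated (libpixman in your sample).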
The perf profiling tool in Linux is a statistical sampling profiler (no binary instrumentation): it programs a software timer, a software event source, or the hardware performance monitoring unit (PMU) to generate periodic interrupts. In your example,
-c 10000 -e mem_load_uops_retired.l3_miss:uppp selects a hardware PMU event on x86_64 in a kind of PEBS mode (https://easyperf.net/blog/2018/06/08/Advanced-profiling-topics-PEBS-and-LBR) so that an interrupt is generated after every 10000 mem_load_uops_retired events (with the l3_miss mask). The interrupt is handled by the Linux kernel (the perf_events subsystem, kernel/events and arch/x86/events). In this handler the PMU is reset (reprogrammed) to generate the next interrupt after 10000 more events, and a sample is generated. The sample data is saved into the perf.data file by the perf record command (every wakeup of the tool can flush thousands of samples); the samples can later be read with perf script or perf script -D.
The perf_events interrupt handler, somewhere near __perf_event_overflow in kernel/events/core.c, has full access to the registers of the interrupted code and has some time to do additional data retrieval, recording the current time, pid, etc. Part of that process is collecting the call stack (https://en.wikipedia.org/wiki/Call_stack). But on x86_64 with -fomit-frame-pointer (often enabled for many system libraries of Debian/Ubuntu/other distributions) there is no default place in the registers or in the function stack to store frame pointers:
https://gcc.gnu.org/onlinedocs/gcc-4.6.4/gcc/Optimize-Options.html#index-fomit_002dframe_002dpointer-692
-fomit-frame-pointer
Don't keep the frame pointer in a register for functions that don't need one. This avoids the instructions to save, set up and restore frame pointers; it also makes an extra register available in many functions. It also makes debugging impossible on some machines.
Starting with GCC version 4.6, the default setting (when not optimizing for size) for 32-bit Linux x86 and 32-bit Darwin x86 targets has been changed to -fomit-frame-pointer. The default can be reverted to -fno-omit-frame-pointer by configuring GCC with the --enable-frame-pointer configure option.
With frame pointers saved on the function stack, backtracing/unwinding is easy. But for some functions modern gcc (and other compilers) may not generate a frame pointer, so backtracing code like the perf_events handler will either stop the backtrace at such a function or need another method of recovering the caller's frame. The --call-graph option of perf record (which implies -g) selects the method to be used. It is documented in man perf-record (http://man7.org/linux/man-pages/man1/perf-record.1.html):
--call-graph
Setup and enable call-graph (stack chain/backtrace) recording, implies -g. Default is "fp".
Allows specifying "fp" (frame pointer) or "dwarf" (DWARF's CFI - Call Frame Information) or "lbr" (Hardware Last Branch Record facility) as the method to collect the information used to show the call graphs.
In some systems, where binaries are build with gcc --fomit-frame-pointer, using the "fp" method will produce bogus call graphs, using "dwarf", if available (perf tools linked to the libunwind or libdw library) should be used instead. Using the "lbr" method doesn't require any compiler options. It will produce call graphs from the hardware LBR registers. The main limitation is that it is only available on new Intel platforms, such as Haswell. It can only get user call chain. It doesn't work with branch stack sampling at the same time.
When "dwarf" recording is used, perf also records (user) stack dump when sampled. Default size of the stack dump is 8192 (bytes). User can change the size by passing the size after comma like "--call-graph dwarf,4096".
So, the dwarf method reuses CFI tables to find stack frame sizes and locate the caller's stack frame. I'm not sure whether CFI tables are stripped from release libraries by default, but debuginfo packages will probably have them. LBR will not help because it is a rather short hardware buffer. The split dwarf processing (the kernel handler saves part of the stack and the perf user-space tool parses it with libdw/libunwind) may lose some parts of the call stack, so also try increasing the dwarf stack dump with --call-graph dwarf,10240 or --call-graph dwarf,81920, etc.
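To see what CFI-based unwinding looks like from user space, here is a minimal sketch using libunwind (one of the libraries perf can be linked against for the dwarf method; this is only an illustration, not perf's kernel-side code). It steps through caller frames using the CFI records in .eh_frame/.debug_frame:
/* cfi_unwind.c - sketch of CFI-based unwinding with libunwind.
 * Build (assuming libunwind-dev is installed): gcc cfi_unwind.c -o cfi_unwind -lunwind */
#define UNW_LOCAL_ONLY
#include <libunwind.h>
#include <stdio.h>

static void show_backtrace(void)
{
    unw_context_t context;
    unw_cursor_t cursor;
    unw_word_t ip, offset;
    char name[128];

    unw_getcontext(&context);            /* capture current register state        */
    unw_init_local(&cursor, &context);   /* start unwinding from this frame       */
    while (unw_step(&cursor) > 0) {      /* step to the caller using CFI records  */
        unw_get_reg(&cursor, UNW_REG_IP, &ip);
        if (unw_get_proc_name(&cursor, name, sizeof name, &offset) == 0)
            printf("%#lx  %s+%#lx\n", (unsigned long)ip, name, (unsigned long)offset);
        else
            printf("%#lx  [unknown]\n", (unsigned long)ip);
    }
}

static void leaf(void) { show_backtrace(); }

int main(void)
{
    leaf();
    return 0;
}
If a library in the chain has neither frame pointers nor usable CFI, this walk (like perf's) stops early with an [unknown] frame.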
Backtracing is implemented in the arch-dependent part of perf_events: arch/x86/events/core.c:perf_callchain_user(), called from kernel/events/callchain.c:get_perf_callchain() <- perf_callchain <- perf_prepare_sample <- __perf_event_output <- *(event->overflow_handler), i.e. the READ_ONCE(event->overflow_handler)(event, data, regs) call in __perf_event_overflow.
Brendan Gregg warned about incomplete call stacks in perf: http://www.brendangregg.com/blog/2014-06-22/perf-cpu-sample.html
Incomplete stacks usually mean -fomit-frame-pointer was used – a compiler optimization that makes little positive difference in the real world, but breaks stack profilers. Always compile with -fno-omit-frame-pointer. More recent perf has a -g dwarf option, to use the alternate libunwind/dwarf method for retrieving stacks.
I also wrote about backtraces in perf, with some additional links, in: How does linux's perf utility understand stack traces?

I had the same problem, and it turned out to be this: when you are collecting traces with --call-graph dwarf, if the stack is too big, you will get [unknown] frames in the backtrace.
The default maximum stack dump size is 8 kB, but it can be increased, e.g. --call-graph dwarf,16578. Unfortunately, perf has some other problems when you increase the stack size. In my case, the solution was to get rid of a large stack-allocated array by allocating it on the heap.
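For example, a minimal sketch of that kind of change (the function name and sizes are made up for illustration): a large local array inflates the sampled stack beyond the dwarf dump size, while a heap allocation keeps the frame small:
#include <stdlib.h>

void process(void)                              /* hypothetical worker function */
{
    /* double buf[8192];                           64 KiB stack frame, far larger than the 8 kB dump */
    double *buf = malloc(8192 * sizeof *buf);   /* heap allocation keeps the stack frame small       */
    if (buf == NULL)
        return;
    /* ... work on buf ... */
    free(buf);
}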

Related

How to collect some readable stack traces with perf?

I want to profile a C++ program on Linux using the random-sampling approach described in this answer:
However, if you're in a hurry and you can manually interrupt your
program under the debugger while it's being subjectively slow, there's
a simple way to find performance problems.
The problem is that I can't use the gdb debugger, because I want to profile in production under heavy load, and the debugger is too intrusive and considerably slows down the program. However, I can use perf record and perf report to find bottlenecks without affecting program performance. Is there a way to collect a number of readable (gdb-like) stack traces with perf instead of gdb?
perf does offer call-stack recording with three different techniques:
By default it uses the frame pointer (fp). This is widely supported and performs well, but it doesn't work with certain optimizations. Compile your applications with -fno-omit-frame-pointer etc. to make sure it works well.
dwarf uses a dump of the stack for each sample for post-processing, which has a significant performance penalty.
Modern systems can use the hardware-supported Last Branch Record, lbr.
The stack is accessible in perf analysis tools such as perf report or perf script.
For more details check out man perf-record.
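For example (illustrative command lines; ./myapp stands for your program):
perf record --call-graph fp ./myapp
perf record --call-graph dwarf ./myapp
perf record --call-graph lbr ./myapp
perf report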

Perf trace calling function

I am learning how to use perf. I have used perf stat followed by perf report. So I noticed that I was getting cache misses in memcpy. Is it possible to do a backtrace of some sort to figure out which memcpy this is? Just knowing that it's from memcpy is pretty useless.
Passing the -g flag to perf record will make it collect the call stack with each sample. Viewing perf report for a profile collected with the -g flag will help you understand where the problematic memcpy was called from. You may also want to use the --children flag of the perf report command.
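For example (illustrative commands; the event name and program are placeholders):
perf record -e cache-misses -g ./myapp
perf report --children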

Major Perf and PIN profiling discrepancies

To analyze certain attributes of execution times, I was going to use both Perf and PIN in separate executions of a program to get all of my information. PIN would give me instruction mixes, and Perf would give me hardware performance on those mixes. As a sanity check, I profiled the following command line argument:
g++ hello_world.cpp -o hello
So my complete command line inputs were the following:
perf stat -e cycles -e instructions g++ hello_world.cpp -o hello
pin -t icount.so -- g++ hello_world.cpp -o hello
In the PIN commands, I ignored all the path stuff for the files for the sake of this post. Additionally, I altered the basic icount.so to also record instruction mixes in addition to the default dynamic instruction count. The results were astonishingly different
PIN Results:
Count 1180608
14->COND_BR: 295371
49->UNCOND_BR: 21869
//skipping all of the other instruction types for now
Perf Results:
20,538,346 branches
105,662,160 instructions # 0.00 insns per cycle
0.072352035 seconds time elapsed
This was supposed to serve as a sanity check, with roughly the same instruction counts and roughly the same branch distributions. Why would the dynamic instruction counts be off by a factor of ~100x?! I was expecting some noise, but that's a bit much.
Also, branches are about 20% of instructions for Perf, but PIN reports around 25% (that also seems like a tad wide of a discrepancy, but it's probably just a side effect of the massive distortion in the instruction counts).
There are significant differences between what's counted by the icount pintool and the instructions performance event, which is mapped to the architectural Instructions Retired hardware performance event on modern Intel processors. I assume you're on an Intel processor.
pin is only injected in child processes when the -follow_execv command-line option is specified and, if the pintool registered a callback function to intercept process creation, the callback returned true. On the other hand, perf profiles all child processes by default. You can tell perf to only profile the specified process using the -i option.
perf, by default, counts events that occur in both user mode and kernel mode (if /proc/sys/kernel/perf_event_paranoid is smaller than 2). pin only supports profiling in user mode.
The icount pintool counts at basic-block granularity; a basic block is essentially a short, single-entry, single-exit sequence of instructions. If an instruction in the block causes an exception, the rest of the instructions in the block will not be executed, but they have already been counted (an exception may be handled without terminating the program). The instructions event only counts instructions at retirement.
The icount pintool, by default, counts each iteration of a rep-prefixed instruction as one instruction. The instructions event counts a rep-prefixed instruction as a single instruction irrespective of the number of iterations.
On some processors, the instructions event may over count or under count.
The instructions event count may be larger due to the first two reasons. The icount pintool instruction count may be larger due to the next two reasons. The last reason may result in unpredictable discrepancies. Since the perf count is about 100x larger than the icount count, it's clear that the first two factors are dominant in this case.
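To see the rep-prefix difference in isolation, you could run a toy program like the following (a sketch; build it with gcc and profile it with both tools, with and without -reps 1). It executes a single rep movsb that iterates 4096 times, which the default icount counts as 4096 instructions while the instructions event counts it once:
#include <stddef.h>

int main(void)
{
    static char src[4096], dst[4096];
    char *d = dst;
    const char *s = src;
    size_t n = sizeof dst;

    /* One rep-prefixed instruction, 4096 iterations. */
    __asm__ volatile ("rep movsb"
                      : "+D"(d), "+S"(s), "+c"(n)
                      :
                      : "memory");
    return 0;
}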
You can get the two tools to get a lot closer counts by passing -i to perf to not profile children, adding the :u modifier to the instructions event name to count only in user mode, and passing -reps 1 to pin to count rep-prefixed instructions per instruction rather than per iteration.
perf stat -i -e cycles,instructions:u g++ hello_world.cpp -o hello
pin -t icount.so -reps 1 -- g++ hello_world.cpp -o hello
Instead of passing -i to perf, you can pass -follow_execv to pin as follows:
pin -follow_execv -t icount.so -reps 1 -- g++ hello_world.cpp -o hello
In this way, both tools will profile the entire process hierarchy rooted at the specified process (i.e., a running g++).
I expect the counts to be very close with these measures, but they still won't be identical.

changing linux memory protection

Is there a way to check which memory protection mechanism is used by the OS?
I have a program that fails with a segmentation fault on one computer (Ubuntu) but not on another (RHEL 6).
One of the suggested explanations was the memory protection mechanism used by the OS.
Is there a way I can find / change it?
Thanks,
You might want to learn more about virtual memory, system calls, the Linux kernel, and ASLR.
Then you could study the role and usage of the mmap & munmap system calls (also mprotect). They are the syscalls used to obtain memory (e.g. to implement malloc & free), sometimes alongside obsolete syscalls like sbrk (which is increasingly useless).
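For instance, here is a minimal sketch (made up purely for illustration) of how per-page protection works with mmap and mprotect; writing to a page after dropping PROT_WRITE is exactly the kind of access that produces a segmentation fault:
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);
    char *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(p, "writable");                /* fine: the page is read/write      */
    printf("before mprotect: %s\n", p);

    if (mprotect(p, page, PROT_READ))     /* drop write permission on the page */
        perror("mprotect");

    p[0] = 'X';                           /* now faults with SIGSEGV           */
    return 0;
}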
You should use the gdb debugger (its watch command may be handy), and the valgrind utility. strace could also be useful.
Look also inside the /proc pseudo file system. Try to understand what
cat /proc/self/maps
is telling you (about the process running that cat). Look also inside /proc/$(pidof your-program)/maps
Consider also using the pmap utility.
If it is your own source code, always compile it with all warnings and debugging info, e.g. gcc -Wall -Wextra -g, and improve it until the compiler doesn't give any warnings. Use a recent version of gcc (e.g. 4.7) and of gdb (e.g. 7.4).

Is there any profiler that works with -fomit-frame-pointer on x86_64?

SysProf doesn't properly generate call stacks without it, and GProf isn't accurate at all. Also, are profilers that work without -fno-omit-frame-pointer as accurate as those that rely on it?
Recent versions of linux perf can be used (with --call-graph dwarf):
perf record -F99 --call-graph dwarf myapp
It uses .eh_frame (or .debug_frame) with libunwind to unwind the stack.
In my experience, it gets lost sometimes.
With recent version of perf+kernel on Haswell, you might be able to use the Last Branch Record with --call-graph lbr.
There are none that I'm aware of. With frame pointers, walking a stack is a fairly simple exercise. You simply dereference the frame pointer to find the old frame pointer, stack pointer, and instruction pointer, and repeat until you're done. Without frame pointers you cannot reliably walk a stack without additional information, which on ELF platforms generally means DWARF CFI. DWARF is fairly complex to parse, and requires you to read in a fair amount of additional information which is tricky to do in the time constraints that profilers need to work in.
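As a rough illustration of that walk (a minimal sketch; it assumes the code and its callers keep frame pointers, e.g. built with -O0 or -fno-omit-frame-pointer, and a real unwinder would also validate that each frame pointer stays inside the thread's stack):
/* fpwalk.c - naive frame-pointer walk on x86_64, the kind of unwinding "fp" mode relies on.
 * Build: gcc -O0 -fno-omit-frame-pointer fpwalk.c -o fpwalk */
#include <stdio.h>

static void backtrace_fp(void)
{
    /* With frame pointers, each x86_64 frame starts with:
     * [saved caller rbp][return address into the caller]   */
    void **fp = __builtin_frame_address(0);

    /* Only walk the few frames this program creates itself. */
    for (int depth = 0; depth < 4 && fp; depth++) {
        printf("frame %d: return address %p\n", depth, fp[1]);
        fp = (void **)fp[0];   /* follow the saved rbp to reach the caller's frame */
    }
}

static void leaf(void)   { backtrace_fp(); }
static void middle(void) { leaf(); }

int main(void)
{
    middle();
    return 0;
}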
One plausible method for implementing this would be to simply save the stack memory at every sample and then walk it offline using the CFI to unwind properly. Depending on the depth of the stack this could require quite a bit of storage, and the copying could be prohibitive. I've never heard of a profiler using this technique, but Julian Seward floated it as a potential implementation strategy for Firefox's built-in profiler.
It would be hard for most profilers to work when -fomit-frame-pointer is used. You probably need to avoid that option and link against debugging versions of the libraries (which are almost certainly compiled without -fomit-frame-pointer) if you want to do reasonable profiling.
