FreeBSD: LLDB crashes even on hello.c

On FreeBSD I started to play around with LLDB, but it crashes right at the start.
user@host ~/sandbox % rake hello
cc -I/usr/local/include -g -O0 -o hello.o -c hello.c
cc -Wl,-L/usr/local/lib -o hello hello.o
user@host ~/sandbox % lldb
(lldb) target create hello
Current executable set to 'hello' (i386).
(lldb) source list
8 {
9 printf( "Hello, world!\n");
10 return 0;
11 }
12
(lldb) breakpoint set -f hello.c -l 9
Breakpoint 1: where = hello`main + 31 at hello.c:9, address = 0x080485af
(lldb) process launch
Process 2409 launching
Process 2409 stopped
(lldb) Process 2409 launched: '/usr/home/user/sandbox/hello' (i386)
Process 2409 stopped
* thread #1: tid = 100224, 0x0818188f, stop reason = hardware error
frame #0: 0x0818188f
-> 0x818188f: addb %al, (%eax)
0x8181891: addb %al, (%eax)
0x8181893: addb %al, (%eax)
0x8181895: addb %al, (%eax)
(lldb)
It is the same on three machines.
I have also tried GDB on Linux; there, everything worked fine.
What did I do wrong?
Thanks in advance,
Bertram

LLDB doesn't support FreeBSD/i386 as a host for now. Use a recent gdb from ports or switch to amd64.
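For reference, the question never shows hello.c itself; a minimal program consistent with the source listing above (the exact line numbering is an assumption) would be:

#include <stdio.h>

int main( void )
{
    printf( "Hello, world!\n" );
    return 0;
}

With a recent gdb installed from ports or packages (for example pkg install gdb), the same cc -g -O0 build can then be debugged on i386 with break hello.c:9 and run.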

Related

Cygwin install does not have shared libraries, or how should I activate the shared libraries?

I'm new to Cygwin - so hopefully, someone can point me in the right direction. I would like to be able to choose to use the shared libraries when building my code. However, so far it seems that it always uses the static library, and I don't know where exactly I went wrong.
I installed Cygwin on my Windows 10 computer. Created a file: test.c, which contains:
#include <stdio.h>
const char msg[] = "Hello, world.";
int main(void){
puts (msg);
return 0;
}
I then compiled it with:
$ gcc -Wall -c test.c -o test.o
Then I checked the symbols using:
$ nm test.o
It gives me what I expected:
U __main
0000000000000000 T main
0000000000000000 R msg
U puts
where none of the symbols have been assigned addresses yet. This is all good.
Then, I linked it using the following:
$ gcc -Wall test.o -o test
Then checked the symbols like below:
$ nm test
I got the following:
0000000100401080 T main
0000000100401000 T mainCRTStartup
0000000100401640 T malloc
0000000100403000 R msg
0000000100401650 T posix_memalign
00000001004010d0 T puts
while I was expecting the symbol puts to look something like
U puts@@GLIBC_x.x.x
It seems like I do not have the shared libraries, or I'm not using the process correctly. What is wrong, then? Thanks.
Using objdump:
objdump -x test.exe
DLL Name: cygwin1.dll
vma: Hint/Ord Member-Name Bound-To
813c 15 __cxa_atexit
814c 46 __main
8158 108 _dll_crt0
8164 115 _impure_ptr
8174 257 calloc
8180 373 cygwin_detach_dll
8194 375 cygwin_internal
81a8 403 dll_dllcrt0
81b8 579 free
81c0 909 malloc
81cc 1015 posix_memalign
81e0 1170 puts
81e8 1196 realloc
So puts is an external symbol resolved at run time from the cygwin1.dll shared library: the executable is dynamically linked after all. The U puts@@GLIBC_x.x.x output you expected is an ELF/glibc symbol-versioning convention; Cygwin produces PE executables, where dynamic imports appear in the import table shown by objdump instead, so nm will never print that form here.
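As a quick cross-check (assuming a standard Cygwin installation, where cygcheck is available), you can also list the DLLs the executable depends on:

$ cygcheck ./test.exe

The output should include the full path of cygwin1.dll, confirming that puts is indeed satisfied by the shared library at run time.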

GNU debugger __text_start () at wrong filepath

First a little preface: I'm using the Windows Subsystem for Linux and the Gaisler BCC version of GCC for cross-compiling (aka "machine-gcc" where the machine is sparc-gaisler-elf in this case).
I compile a simple hello world program for debugging like this
$ sparc-gaisler-elf-gcc -g hello.c -o hello
Then I open the simulator for the particular processor with the GNU debugger (GDB) as a server
$ tsim-leon3 -gdb
...
gdb interface: using port 1234
Starting GDB server. Use Ctrl-C to stop waiting for connection.
In another bash session I start a remote GDB and connect to the server
$ sparc-gaisler-elf-gdb -ex "target extended-remote localhost:1234"
...
Remote debugging using localhost:1234
This works fine. But if I try to load the hello executable I get a problem
$ sparc-gaisler-elf-gdb -ex "target extended-remote localhost:1234" hello
...
Remote debugging using localhost:1234
__text_start () at /opt/bcc-2.0.4-gcc/src/libbcc/shared/trap/trap_table_mvt.S:167
167 /opt/bcc-2.0.4-gcc/src/libbcc/shared/trap/trap_table_mvt.S: No such file or directory.
in /opt/bcc-2.0.4-gcc/src/libbcc/shared/trap/trap_table_mvt.S
Current language: auto; currently asm
(gdb) run
The program being debugged has been started already.
Start it from the beginning? (y or n) y
Starting program: /mnt/c/Users/<username>/bcc-2.0.4-gcc/src/examples/hello/hello
Program received signal SIGSEGV, Segmentation fault.
__text_start () at /opt/bcc-2.0.4-gcc/src/libbcc/shared/trap/trap_table_mvt.S:167
167 in /opt/bcc-2.0.4-gcc/src/libbcc/shared/trap/trap_table_mvt.S
Now, with my Windows Subsystem for Linux setup I have the particular file it's looking for at
/mnt/c/Users/<username>/bcc-2.0.4-gcc/src/libbcc/shared/trap/trap_table_mvt.S
instead of in /opt/bcc-2.0.4-gcc/...
How can I tell it where to find this file?
Update
I tried setting dir as per Employed Russian's answer
(gdb) dir /mnt/c/Users/<user>/bcc-2.0.4-gcc/src/libbcc/shared/trap
Source directories searched: /mnt/c/Users/<user>/bcc-2.0.4-gcc/src/libbcc/shared/trap:$cdir:$cwd
(gdb) list
162 BAD_TRAP; BAD_TRAP; BAD_TRAP; BAD_TRAP; ! 78 - 7B undefined
163 BAD_TRAP; BAD_TRAP; BAD_TRAP; BAD_TRAP; ! 7C - 7F undefined
164
165 /* trap_instruction 0x80 - 0xFF */
166 /* NOTE: "ta 5" can be generated by compiler. */
167 SOFT_TRAP; ! 0 System calls
168 SOFT_TRAP; ! 1 Breakpoints
169 SOFT_TRAP; ! 2 Division by zero
170 FLW_TRAP; ! 3 Flush windows
171 SOFT_TRAP; ! 4 Clean windows
(gdb) run
The program being debugged has been started already.
Start it from the beginning? (y or n) y
Starting program: /mnt/c/Users/<user>/bcc-2.0.4-gcc/src/examples/hello/hello
Program received signal SIGSEGV, Segmentation fault.
__text_start () at /opt/bcc-2.0.4-gcc/src/libbcc/shared/trap/trap_table_mvt.S:167
167 SOFT_TRAP; ! 0 System calls
Even though it's still saying /opt/... it seems to have found the right file now. However, I don't know why it's giving a segmentation fault.
How can I tell it where to find this file?
With the directory command.
(gdb) dir /mnt/c/Users/<username>/bcc-2.0.4-gcc/src/libbcc/shared/trap
(gdb) list # should find the file in the new location
See also the GDB documentation on source path and set substitute-path.
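Alternatively, a substitute-path rule rewrites the compile-time prefix recorded in the debug info to the path on your machine. Using the paths from the question, something along these lines should work:

(gdb) set substitute-path /opt/bcc-2.0.4-gcc /mnt/c/Users/<username>/bcc-2.0.4-gcc
(gdb) list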

How does fio load its various io engines when it starts?

fio supports a whole bunch of io engines; all supported engines are listed here: https://github.com/axboe/fio/tree/master/engines
I have been trying to understand the internals of how fio works and got stuck on how fio loads all the io engines.
For example, I see that every engine has functions to register and unregister itself; sync.c registers and unregisters using the following:
fio_syncio_register: https://github.com/axboe/fio/blob/master/engines/sync.c#L448
and fio_syncio_unregister: https://github.com/axboe/fio/blob/master/engines/sync.c#L461
My question is: who calls these functions?
To find the answer I tried running fio under gdb. I placed breakpoints in fio_syncio_register and in main, and fio_syncio_register gets called even before main, which tells me it has something to do with __libc_csu_init,
and the backtrace confirmed that:
(gdb) bt
#0 fio_syncio_register () at engines/sync.c:450
#1 0x000000000047fb9d in __libc_csu_init ()
#2 0x00007ffff6ee27bf in __libc_start_main (main=0x40cd90 <main>, argc=2, argv=0x7fffffffe608, init=0x47fb50 <__libc_csu_init>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fffffffe5f8)
at ../csu/libc-start.c:247
#3 0x000000000040ce79 in _start ()
I spent some time reading about __libc_csu_init and __libc_csu_fini, and every description says that functions decorated with __attribute__((constructor)) are called before main, but in fio's sync.c I don't see fio_syncio_register decorated with __attribute__.
Can someone please help me understand how this flow works? Are there other materials I should read to understand this?
Thanks
Interesting question. I couldn't figure out the answer looking at the source, so here are the steps I took:
$ make
$ find . -name 'sync.o'
./engines/sync.o
$ readelf -WS engines/sync.o | grep '\.init'
[12] .init_array INIT_ARRAY 0000000000000000 0021f0 000008 00 WA 0 0 8
[13] .rela.init_array RELA 0000000000000000 0132a0 000018 18 36 12 8
This tells us that global initializers are present in this object. These are called at program startup. What are they?
$ objdump -Dr engines/sync.o | grep -A4 '\.init'
Disassembly of section .init_array:
0000000000000000 <.init_array>:
...
0: R_X86_64_64 .text.startup
Interesting. There is apparently a special .text.startup section. What's in it?
$ objdump -dr engines/sync.o | less
...
Disassembly of section .text.startup:
0000000000000000 <fio_syncio_register>:
0: 48 83 ec 08 sub $0x8,%rsp
4: bf 00 00 00 00 mov $0x0,%edi
5: R_X86_64_32 .data+0x380
9: e8 00 00 00 00 callq e <fio_syncio_register+0xe>
a: R_X86_64_PC32 register_ioengine-0x4
...
Why, it's exactly the function we are looking for. But how did it end up in this special section? To answer that, we can look at preprocessed source (in retrospect, I should have started with that).
How could we get it? The command line used to compile sync.o is hidden; looking at the Makefile, we can unhide it with QUIET_CC=''.
$ rm engines/sync.o && make QUIET_CC=''
gcc -o engines/sync.o -std=gnu99 -Wwrite-strings -Wall -Wdeclaration-after-statement -g -ffast-math -D_GNU_SOURCE -include config-host.h -I. -I. -O3 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -DBITS_PER_LONG=64 -DFIO_VERSION='"fio-2.16-5-g915ca"' -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -DFIO_INTERNAL -DFIO_INC_DEBUG -c engines/sync.c
LINK fio
Now we know the command line, and can produce preprocessed file:
$ gcc -E -dD -std=gnu99 -ffast-math -D_GNU_SOURCE -include config-host.h -I. -I. -O3 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -DBITS_PER_LONG=64 -DFIO_VERSION='"fio-2.16-5-g915ca"' -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -DFIO_INTERNAL -DFIO_INC_DEBUG engines/sync.c -o /tmp/sync.i
Looking in /tmp/sync.i, we see:
static void __attribute__((constructor)) fio_syncio_register(void)
{
register_ioengine(&ioengine_rw);
register_ioengine(&ioengine_prw);
...
Hmm, it is __attribute__((constructor)) after all. But how did it get there? Aha! I missed the fio_init on this line:
static void fio_init fio_syncio_register(void)
What does fio_init stand for? Again in /tmp/sync.i:
#define fio_init __attribute__((constructor))
So that is how it works.
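To see the mechanism in isolation, here is a minimal, self-contained C sketch of the same pattern; the registry, register_engine and my_engine_register names are made up for illustration, while fio's real registration goes through register_ioengine as shown above:

#include <stdio.h>

/* Toy registry standing in for fio's list of io engines. */
static const char *registry[8];
static int registered;

static void register_engine(const char *name)
{
    registry[registered++] = name;
}

/* The constructor attribute makes the compiler emit a pointer to this
 * function into .init_array; the C runtime walks that array before main. */
static void __attribute__((constructor)) my_engine_register(void)
{
    register_engine("sync");
}

int main(void)
{
    printf("%d engine(s) registered before main ran: %s\n",
           registered, registry[0]);
    return 0;
}

Compiled with gcc and run, it prints that one engine was registered before main executed, which is exactly the behaviour seen in the fio backtrace.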

How do you get debugging symbols working in linux perf tool inside Docker containers?

I am using Docker containers based on the "ubuntu" tag and cannot get the Linux perf tool to display debugging symbols.
Here is what I'm doing to demonstrate the problem.
First I start a container, here with an interactive shell.
$ docker run -t -i ubuntu:14.04 /bin/bash
Then from the container prompt I install linux perf tool.
$ apt-get update
$ apt-get install -y linux-tools-common linux-tools-generic linux-tools-`uname -r`
I can now use the perf tool. My kernel is 3.16.0-77-generic.
Now I'll install gcc, compile a test program, and try to run it under perf record.
$ apt-get install -y gcc
I paste the test program into test.c:
#include <stdio.h>
int function(int i) {
int j;
for(j = 2; j <= i / 2; j++) {
if (i % j == 0) {
return 0;
}
}
return 1;
}
int main() {
int i;
for(i = 2; i < 100000; i++) {
if(function(i)) {
printf("%d\n", i);
}
}
}
Then compile, run, and report:
$ gcc -g -O0 test.c && perf record ./a.out && perf report
The output looks something like this:
72.38% a.out a.out [.] 0x0000000000000544
8.37% a.out a.out [.] 0x000000000000055a
8.30% a.out a.out [.] 0x000000000000053d
7.81% a.out a.out [.] 0x0000000000000551
0.40% a.out a.out [.] 0x0000000000000540
This does not have symbols, even though the executable does have symbol information.
Doing the same general steps outside the container works fine, and shows something like this:
96.96% a.out a.out [.] function
0.35% a.out libc-2.19.so [.] _IO_file_xsputn@@GLIBC_2.2.5
0.14% a.out [kernel.kallsyms] [k] update_curr
0.12% a.out [kernel.kallsyms] [k] update_cfs_shares
0.11% a.out [kernel.kallsyms] [k] _raw_spin_lock_irqsave
In the host system I have already turned on kernel symbols by becoming root and doing:
$ echo 0 > /proc/sys/kernel/kptr_restrict
How do I get the containerized version to work properly and show debugging symbols?
Running the container with the -v /:/host flag and running perf report in the container with the --symfs /host flag fixes it:
96.59% a.out a.out [.] function
2.93% a.out [kernel.kallsyms] [k] 0xffffffff8105144a
0.13% a.out [nvidia] [k] 0x00000000002eda57
0.11% a.out libc-2.19.so [.] vfprintf
0.11% a.out libc-2.19.so [.] 0x0000000000049980
0.09% a.out a.out [.] main
0.02% a.out libc-2.19.so [.] _IO_file_write
0.02% a.out libc-2.19.so [.] write
Part of the reason why it doesn't work as is? The output from perf script sort of sheds some light on this:
...
a.out 24 3374818.880960: cycles: ffffffff81141140 __perf_event__output_id_sample ([kernel.kallsyms])
a.out 24 3374818.881012: cycles: ffffffff817319fd _raw_spin_lock_irqsave ([kernel.kallsyms])
a.out 24 3374818.882217: cycles: ffffffff8109aba3 ttwu_do_activate.constprop.75 ([kernel.kallsyms])
a.out 24 3374818.884071: cycles: 40053d [unknown] (/var/lib/docker/aufs/diff/9bd2d4389cf7ad185405245b1f5c7d24d461bd565757880bfb4f970d3f4f7915/a.out)
a.out 24 3374818.885329: cycles: 400544 [unknown] (/var/lib/docker/aufs/diff/9bd2d4389cf7ad185405245b1f5c7d24d461bd565757880bfb4f970d3f4f7915/a.out)
...
Note the /var/lib/docker/aufs path. That path is from the host, so it won't exist inside the container, and you need to help perf report locate the binary. This likely happens because the mmap events are tracked by perf outside of any cgroup, and perf does not attempt to remap the paths.
Another option is to run perf host-side, e.g. sudo perf record -a docker run -ti <container name>. The collection has to be system-wide here (the -a flag) because containers are spawned by the Docker daemon, which is not in the process hierarchy of the docker client we run here.
Another way, which doesn't require changing how you run the container (so you can profile an already running process), is to mount the container's root on the host using bindfs:
bindfs /proc/$(docker inspect --format {{.State.Pid}} $CONTAINER_ID)/root /foo
Then run perf report as perf report --symfs /foo
You'll have to run perf record system-wide, but you can restrict it to only collect events for the specific container:
perf record -g -a -F 100 -e cpu-clock -G docker/$(docker inspect --format {{.Id}} $CONTAINER_ID) sleep 90

How to single step ARM assembly in GDB on QEMU?

I'm trying to learn about ARM assembler programming using the GNU assembler. I've set up my PC with QEMU and have a Debian ARM-HF chroot environment.
If I assemble and link my test program:
.text
.global _start
_start:
mov r0, #6
bx lr
with:
as test.s -o test.o
ld test.o -o test
Then load the file into gdb and set a breakpoint on _start:
root@Latitude-E6420:/root# gdb test
GNU gdb (GDB) 7.6.1 (Debian 7.6.1-1)
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "arm-linux-gnueabihf".
For bug reporting instructions, please see:
...
Reading symbols from /root/test...(no debugging symbols found)...done.
(gdb) break _start
Breakpoint 1 at 0x8054
(gdb)
How do I single-step the code, display the assembler source, and monitor the registers?
I tried some basic commands and they did not work:
(gdb) break _start
Breakpoint 1 at 0x8054
(gdb) info regi
The program has no registers now.
(gdb) stepi
The program is not being run.
(gdb) disas
No frame selected.
(gdb) r
Starting program: /root/test
qemu: Unsupported syscall: 26
qemu: uncaught target signal 11 (Segmentation fault) - core dumped
qemu: Unsupported syscall: 26
During startup program terminated with signal SIGSEGV, Segmentation fault.
(gdb)
Your problem here is that you're trying to run an ARM gdb under QEMU's user-mode emulation. QEMU doesn't support the ptrace syscall (that's what syscall number 26 is), so this is never going to work.
What you need to do is run your test binary under QEMU with the QEMU options that enable QEMU's own builtin gdb stub, which will listen on a TCP port. Then you can run a gdb compiled for your host system but with support for ARM targets, and tell it to connect to that TCP port.
(Emulating ptrace within QEMU is technically very tricky, and it would not provide much extra functionality that you can't already achieve via the QEMU builtin gdbstub. It's very unlikely it'll ever be implemented.)
Minimal working QEMU user mode example
I was missing the -fno-pie -no-pie options:
sudo apt-get install gdb-multiarch gcc-arm-linux-gnueabihf qemu-user
printf '
#include <stdio.h>
#include <stdlib.h>
int main() {
puts("hello world");
return EXIT_SUCCESS;
}
' > hello_world.c
arm-linux-gnueabihf-gcc -fno-pie -ggdb3 -no-pie -o hello_world hello_world.c
qemu-arm -L /usr/arm-linux-gnueabihf -g 1234 ./hello_world
On another terminal:
gdb-multiarch -q --nh \
-ex 'set architecture arm' \
-ex 'set sysroot /usr/arm-linux-gnueabihf' \
-ex 'file hello_world' \
-ex 'target remote localhost:1234' \
-ex 'break main' \
-ex continue \
-ex 'layout split'
;
This leaves us at main, in a split source/disassembly view thanks to layout split. You will also be interested in:
layout regs
which shows the registers.
At the end of the day, however, GDB Dashboard is more flexible and reliable: see gdb split view with code.
-fno-pie -no-pie is required because the packaged Ubuntu GCC uses -fpie -pie by default, and those fail due to a QEMU bug: How to GDB step debug a dynamically linked executable in QEMU user mode?
There was no gdbserver --multi-like functionality for the QEMU GDB stub on QEMU 2.11: How to restart QEMU user mode programs from the GDB stub as in gdbserver --multi?
For those learning ARM assembly, I am starting some runnable examples with assertions and using the C standard library for IO at: https://github.com/cirosantilli/arm-assembly-cheat
Tested on Ubuntu 18.04, gdb-multiarch 8.1, gcc-arm-linux-gnueabihf 7.3.0, qemu-user 2.11.
Freestanding QEMU user mode example
This analogous procedure also works on an ARM freestanding (no standard library) example:
printf '
.data
msg:
.ascii "hello world\\n"
len = . - msg
.text
.global _start
_start:
/* write syscall */
mov r0, #1 /* stdout */
ldr r1, =msg /* buffer */
ldr r2, =len /* len */
mov r7, #4 /* Syscall ID. */
swi #0
/* exit syscall */
mov r0, #0 /* Status. */
mov r7, #1 /* Syscall ID. */
swi #0
' > hello_world.S
arm-linux-gnueabihf-gcc -ggdb3 -nostdlib -o hello_world -static hello_world.S
qemu-arm -g 1234 ./hello_world
On another terminal:
gdb-multiarch -q --nh \
-ex 'set architecture arm' \
-ex 'file hello_world' \
-ex 'target remote localhost:1234' \
-ex 'layout split' \
;
We are now left at the first instruction of the program.
QEMU full system examples
Linux kernel: How to debug the Linux kernel with GDB and QEMU?
Bare metal: https://github.com/cirosantilli/newlib-examples/tree/f70f8a33f8b727422bd6f0b2975c4455d0b33efa#gdb
Single-stepping an assembly instruction is done with stepi. disas will disassemble around the current PC. info regi (short for info registers) will display the current register state. There are some examples for various processors on my blog for my ELLCC cross-development tool chain project.
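Put together with the QEMU gdbstub setup from the answers above (qemu-arm -g 1234 starts the program stopped at its first instruction, so nothing needs to be launched from inside gdb), a bare-bones inspection session might look like this sketch rather than a verbatim transcript:

(gdb) target remote localhost:1234
(gdb) info registers
(gdb) x/6i $pc
(gdb) stepi
(gdb) info registers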
You should also pass the -g option when assembling; otherwise the source-line info is not included.
The crash probably comes from execution running on into garbage after your code.
Maybe you should add the exit system call (shown here in ARM form, matching the question):
mov r0, #0    @ return value
mov r7, #1    @ exit syscall number
swi #0        @ system call
