I am going to set up kgdb to debug the Ubuntu/Debian kernel.
By default, the kernel compiled by make-kpkg is optimized with -O2, so I am not able to inspect variables while debugging.
Is there a way to disable the kernel compilation optimization (for example, use -O0)?
Thanks!
Currently, gdb reports that the variable has been optimized out:
(gdb) p pb
$5 = <optimized out>
The Linux kernel depends on -O2. It will not compile at lower optimization levels: it uses several GCC "tricks" that only work when certain optimizations are enabled, such as inline functions that are expected to behave like macros.
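If only a few variables matter, one possible workaround (a sketch under my assumptions; the directory and object names are hypothetical) is to keep -O2 but reduce inlining for the single file you are stepping through, using Kbuild's per-object flags, and then rebuild just that object:

# add to the subdirectory Makefile (hypothetical object name):
#   CFLAGS_myfile.o += -fno-inline -fno-omit-frame-pointer
# then rebuild only that object and relink the kernel:
make drivers/mydriver/myfile.o
make vmlinux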
I need to build a complete Linux development framework for a Cortex-M MCU, specifically an STM32F7 Cortex-M7. First I have to explain some background, so please bear with me.
I've downloaded and built a gcc toolchain with crosstool-ng 1.24, specifying an armv7e-m architecture with Thumb-only instructions, Linux 4.20 as the OS, and FLAT executables as the output format (I assumed that means bFLT).
Then I compiled the Linux kernel (version 4.20) using configs/stm32_defconfig, and then a statically compiled busybox rootfs, all with my new toolchain.
The kernel booted just fine, but it threw an error and panicked with the following message:
Starting init: /sbin/init exists but couldn't execute it (error -8)
and
request_module: modprobe binfmt-464c cannot be processed, kmod busy with 50 threads
The interesting part is the last message. My busybox executable turned out to be an ELF! Cortex-M has no MMU, and a Linux kernel for an MMU-less architecture cannot be built with regular ELF support; that's why a loader for the (464c) "LF" binary format can't be found: there is none.
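A quick way to confirm which binary-format loaders the kernel was actually built with (a sketch; run it in the kernel build directory):

grep -E 'CONFIG_BINFMT_(ELF|ELF_FDPIC|FLAT)=' .config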
So at last, my question is:
How can I build bFLT executables to run on MMU-less Linux architectures? My toolchain has elf2flt, but in crosstool-ng I had already specified an MMU-less architecture and FLAT binaries, so I was expecting direct bFLT output, not a "useless" ELF executable. Is that even possible?
Or better: is there a documented, standard procedure anywhere for building a complete, working Linux system based on Cortex-M?
Follow-up:
I gave up on building FLAT binaries and tried FDPIC executables. Another dead end. As far as I know:
Linux has long supported ELF FDPIC, but the ABI for ARM is pretty new.
It seems that, even today, GCC has no standard way to enable FDPIC. On some architectures you can use -mfdpic; not on ARM, and I don't know why. I don't even know whether ARM FDPIC is supported at all by mainline GCC. Information is extremely scarce, if it exists at all.
It seems crosstool-ng 1.24 is BROKEN when it comes to building ARM ELF FDPIC support. The resulting gcc has no -mfdpic, and -fPIC generates plain ARM executables, not ARM FDPIC.
Any insight will be very appreciated.
You can generate FDPIC ELF files with just a prebuilt arm-linux-gnueabi-gcc compiler.
Specifications of an FDPIC ELF file:
Position-independent executable/code (i.e. -fPIE and -fPIC)
Should be compiled as a shared executable (ET_DYN ELF) to be position independent
Use these flags to compile your programs:
arm-linux-gnueabi-gcc -shared -fPIE -fPIC <YOUR PROGRAM.C> -o <OUTPUT FILE>
I've compiled busybox successfully for STM32H7 with this method.
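As a quick sanity check (a sketch; the file name follows from the busybox build above), you can confirm that the result is a position-independent ET_DYN object rather than a fixed-address ET_EXEC:

readelf -h busybox | grep Type   # expect: DYN (Shared object file)
file busybox                     # should report a shared object / PIE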
As far as I know, FDPIC ELFs unfortunately must be compiled with the -shared flag, so they use shared libraries and cannot be built as static ELFs.
For more information take a look at this file:
https://github.com/torvalds/linux/blob/master/fs/binfmt_elf_fdpic.c
Track the crosstool-ng bFLT issue here:
https://github.com/crosstool-ng/crosstool-ng/issues/1399
I have set up a Linux kernel debug environment with VMware Workstation. gdb connects to the target correctly, but I can't set any breakpoints or examine any kernel symbols.
Target machine (debuggee), Ubuntu 18:
I have compiled Linux kernel 5.0-0 with the following options:
CONFIG_DEBUG_INFO=y
# CONFIG_DEBUG_INFO_REDUCED is not set
# CONFIG_DEBUG_INFO_SPLIT is not set
CONFIG_DEBUG_INFO_DWARF4=y
CONFIG_DEBUG_FS=y
# CONFIG_DEBUG_SECTION_MISMATCH is not set
# CONFIG_DEBUG_FORCE_WEAK_PER_CPU is not set
Also my VMX file configuration:
debugStub.listen.guest64 = "TRUE"
debugStub.listen.guest64.remote="TRUE"
After that I transferred vmlinux to the debugger machine and used gdb:
bash$ gdb vmlinux
gdb-peda$ target remote 10.251.31.28:8864
Remote debugging using 10.251.31.28:8864
Warning: not running or target is remote
0xffffffff9c623f36 in ?? ()
gdb-peda$ disas do_sys_open
No symbol "do_sys_open" in current context.
First you need to install kernel-debug-devel, kernel-debuginfo, and kernel-debuginfo-common for the corresponding kernel version.
Then you can use the crash utility to debug the kernel; it uses gdb internally.
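For example, to inspect the running kernel with crash (a sketch; the debuginfo path is typical for RPM-based distributions and may differ on your system):

crash /usr/lib/debug/lib/modules/$(uname -r)/vmlinux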
The symbol name you're looking for is sometimes not exactly what you expect it to be. You can use readelf or other similar tools to find the full name of the symbol in the kernel image. These names sometimes differ from the names in the code because of architecture-level differences and the related header and C definitions in the kernel source. For example, you might be able to disassemble the open() system call by using:
disas __x64_do_sys_open
if you've compiled it for the x86-64 architecture.
Also keep in mind that these naming conventions can change between kernel versions.
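One way to find the exact spelling used by a particular build (a sketch; vmlinux is the image transferred earlier):

readelf -s vmlinux | grep -i sys_open   # search the symbol table of the image
grep sys_open /proc/kallsyms            # or look it up on the running target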
The result of the investigation below: recent Node.js is not portable to AMD Geode (or other non-SSE x86) processors!
I dived deeper into the code and got stuck in the ia32 assembler implementation, which deeply integrates SSE/SSE2 instructions into the code (macros, macros, macros, ...). The main consequence is that you cannot run a recent version of Node.js on AMD Geode processors, due to the lack of the newer instruction-set extensions. The fallback to 387 arithmetic only works for the Node.js code, not for the V8 JavaScript compiler implementation that it depends on. Adjusting V8 to support non-SSE x86 processors is a pain and a lot of effort.
If someone produces proof to the contrary, I would be really happy to hear about it ;-)
Investigation History
I have a running ALIX.2D13 (https://www.pcengines.ch), which has an AMD Geode LX as the main processor. It runs Voyage Linux, a Debian Jessie based distribution for resource-restricted embedded devices.
root@voyage:~# cat /proc/cpuinfo
processor : 0
vendor_id : AuthenticAMD
cpu family : 5
model : 10
model name : Geode(TM) Integrated Processor by AMD PCS
stepping : 2
cpu MHz : 498.004
cache size : 128 KB
physical id : 0
siblings : 1
core id : 0
cpu cores : 1
apicid : 0
initial apicid : 0
fdiv_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 1
wp : yes
flags : fpu de pse tsc msr cx8 sep pge cmov clflush mmx mmxext 3dnowext 3dnow 3dnowprefetch vmmcall
bugs : sysret_ss_attrs
bogomips : 996.00
clflush size : 32
cache_alignment : 32
address sizes : 32 bits physical, 32 bits virtual
When I install Node.js 8.x following the instructions on https://nodejs.org/en/download/package-manager/, I get an "invalid machine instruction" error (not sure whether that is the exact wording; it is translated from the German output). This also happens when I download the binary for 32-bit x86, and when I compile it manually.
Following the answers below, I changed the compiler flags in deps/v8/gypfiles/toolchain.gypi by removing -msse2 and adding -march=geode -mtune=geode. Now I get the same error, but with a stack trace:
root@voyage:~/GIT/node# ./node
#
# Fatal error in ../deps/v8/src/ia32/assembler-ia32.cc, line 109
# Check failed: cpu.has_sse2().
#
==== C stack trace ===============================
./node(v8::base::debug::StackTrace::StackTrace()+0x12) [0x908df36]
./node() [0x8f2b0c3]
./node(V8_Fatal+0x58) [0x908b559]
./node(v8::internal::CpuFeatures::ProbeImpl(bool)+0x19a) [0x8de6d08]
./node(v8::internal::V8::InitializeOncePerProcessImpl()+0x96) [0x8d8daf0]
./node(v8::base::CallOnceImpl(int*, void (*)(void*), void*)+0x35) [0x908bdf5]
./node(v8::internal::V8::Initialize()+0x21) [0x8d8db6d]
./node(v8::V8::Initialize()+0xb) [0x86700a1]
./node(node::Start(int, char**)+0xd3) [0x8e89f27]
./node(main+0x67) [0x846845c]
/lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0xb74fc723]
./node() [0x846a09c]
Ungültiger Maschinenbefehl
root@voyage:~/GIT/node#
If you now look into this file, you will find the following
... [line 107-110]
void CpuFeatures::ProbeImpl(bool cross_compile) {
base::CPU cpu;
CHECK(cpu.has_sse2()); // SSE2 support is mandatory.
CHECK(cpu.has_cmov()); // CMOV support is mandatory.
...
I commented out that line, but I still get "Ungültiger Maschinenbefehl" (invalid machine instruction).
This is what gdb ./node shows (after executing run):
root@voyage:~/GIT/node# gdb ./node
GNU gdb (Debian 7.7.1+dfsg-5) 7.7.1
[...]
This GDB was configured as "i586-linux-gnu".
[...]
Reading symbols from ./node...done.
(gdb) run
Starting program: /root/GIT/node/node
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/i386-linux-gnu/libthread_db.so.1".
[New Thread 0xb7ce2b40 (LWP 29876)]
[New Thread 0xb74e2b40 (LWP 29877)]
[New Thread 0xb6ce2b40 (LWP 29878)]
[New Thread 0xb64e2b40 (LWP 29879)]
Program received signal SIGILL, Illegal instruction.
0x287a23c0 in ?? ()
(gdb)
I think it is necessary to compile with debug symbols...
make clean
make CFLAGS="-g"
No chance to resolve all the SSE/SSE2 problems... Giving up! See the summary at the top.
Conclusion: node.js + V8 normally requires SSE2 when running on x86.
On the V8 ports page: x87 (not officially supported)
Contact/CC the x87 team in the CL if needed. Use the mailing list v8-x87-ports.at.googlegroups.com for that purpose.
Javascript generally requires floating point (every numeric variable is floating point, and using integer math is only an optimization), so it's probably hard to avoid having V8 actually emit FP math instructions.
V8 is currently designed to always JIT, not interpret. It starts off with, or falls back to, JITing unoptimized machine code while it's still profiling, or when it hits something that makes it "de-optimize".
There is an effort to add an interpreter to V8, but it might not help because the interpreter itself will be written using the TurboFan JIT backend. It's not intended to make V8 portable to architectures it doesn't currently know how to JIT for.
Crazy idea: run Node.js on top of a software emulation layer (like Intel's SDE or maybe qemu-user) that can emulate x86 with SSE/SSE2 on an x86 CPU supporting only x87. These use dynamic translation, so code that doesn't use any SSE instructions would probably run at near-native speed.
This may be crazy because Node.js + V8 probably uses some virtual-memory tricks that might confuse an emulation layer. I'd guess that qemu should be robust enough, though.
The original answer is left below as a generic guide to investigating this kind of issue in other programs. (Tip: grep the Makefiles and so on for -msse or -msse2, or check the compiler command lines for those flags with pgrep -a gcc while the build is running; see the example below.)
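For instance (a sketch; run from the top of the Node.js source tree, and the file list is only illustrative):

grep -rn -- '-msse' deps/v8 common.gypi node.gyp   # find hard-coded SSE flags
pgrep -a gcc                                       # watch live compiler command lines while building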
Your cpuinfo says it has CMOV, which is a 686 (ppro / p6) feature. This says that Geode supports i686. What's missing compared to a "normal" CPU is SSE2, which is enabled by default for -m32 (32-bit mode) in some recent compiler versions.
Anyway, what you should do is compile with -march=geode -O3, so gcc or clang will use everything your CPU supports, but no more.
-O3 -msse2 -march=geode would tell gcc that it can use everything Geode supports as well as SSE2, so you need to remove any -msse and -msse2 options, or add -mno-sse after them. In node.js, deps/v8/gypfiles/toolchain.gypi was setting -msse2.
Using -march=geode implies -mtune=geode, which affects code-gen choices that don't involve using new instructions, so with luck your binary will run faster than if you'd simply used -mno-sse to control instruction-set stuff without overriding -mtune=generic. (If you're building on the geode, you could use -march=native, which should be identical to using -march=geode.)
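A quick way to confirm what a given -march setting enables (a sketch; it assumes your gcc is new enough to know -march=geode):

gcc -march=geode -dM -E -x c /dev/null | grep -E '__SSE2?__'   # expect no output for Geode
gcc -march=geode -Q --help=target | grep -E 'sse|arch|tune'    # list the resulting target options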
The other possibility is that the problem instructions are in JavaScript functions that were JIT-compiled.
Node.js uses V8. I did a quick Google search but didn't find anything about telling V8 not to assume SSE/SSE2. If it doesn't have a fallback code-generation strategy (x87 instructions) for floating point, then you might have to disable the JIT altogether and make it run in interpreter mode (which is slower, so that may be a problem).
But hopefully V8 is well-behaved, and checks what instruction sets are supported before JITing.
You should check by running gdb /usr/bin/node, and see where it faults. Type run my_program.js on the GDB command line to start the program. (You can't pass args to node.js when you first start gdb. You have to specify args from inside gdb when you run.)
If the address of the instruction that raised SIGILL is in a region of memory that's mapped to a file (look in /proc/pid/maps if gdb doesn't tell you), that tells you which ahead-of-time compiled executable or library is responsible. Recompile it with -march=geode.
If it's in anonymous memory, it's most likely JIT-compiler output.
GDB prints the instruction address when it stops on SIGILL. You can also print $ip to see the current value of EIP (the 32-bit mode instruction pointer).
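A minimal sketch of that check (gdb commands are shown as comments; <pid> is the PID of the stopped node process):

#   (gdb) p/x $pc               # address of the faulting instruction
#   (gdb) info proc mappings    # shows which file, if any, the address falls in
# or, from another shell:
cat /proc/<pid>/maps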
I'm trying to make custom binaries for an initrd for an x86 system. I took the generic precompiled Debian 7 gcc (version 4.7.2-5) and compiled the kernel with it. The next step was to put a hello-world program in place of the init script in the initrd, to check my development progress. The hello-world program was also compiled with that gcc. When I tried to start my custom system, the kernel started with no problem, but the hello-world program ran into errors:
kernel: init[24879] general protection ip:7fd7271585e0 sp:7fff1ef55070 error:0 in init[7fd727142000+20000]
(The numbers are not mine; I took a similar string from Google.) The hello-world program:
#include <stdio.h>
#include <unistd.h>   /* needed for sleep() */

int main(void) {
    printf("Helloworld\r\n");
    sleep(9999999);   /* keep init running so the kernel does not panic */
    return 0;
}
Compilation:
gcc -static -o init test.c
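One way to narrow down this kind of fault (a sketch; the binary name matches the command above) is to check what the toolchain actually produced and compare it with what the kernel expects:

file init                                   # architecture, static vs. dynamic linking
readelf -h init | grep -E 'Class|Machine'   # 32- vs. 64-bit, target machine
readelf -d init                             # "no dynamic section" confirms a static link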
Earlier I also got stuck on the same problem on an ARM system (I took a generic compiler, compiled the kernel and some binaries with it, and tried to run them; the kernel ran, but the binaries did not). I solved it with a complete Buildroot system, and used the Buildroot compiler in later projects.
So my question is: what is the difference between a gcc built as part of Buildroot and a generic precompiled gcc?
I know that the Buildroot compiler is made in several steps, with different libs and so on; is this the main difference, platform independence?
I don't need a solution; I can use Buildroot anytime. I want to know the source of my problem, to avoid such problems in the future. Thanks.
UPD: I replaced sleep() with while(1); and got the same result. My kernel output:
init[1]: general protection ip: 8053682 sp: bf978294 error: 0 in init[8048000+81000]
printk: 14300820 message suppressed.
and this repeats every second.
UPD2: I added vdso32-int80.so (original name, as in the kernel tree) and tested: no luck.
I added ld-linux.so (two files: ld-2.13.so plus a symbolic link) and tested: same error.
The Busybox approach allows running binaries without any of these libraries; I tested this myself on an ARM platform.
Thanks for trying to help me, any other ideas?
I want to debug pthreads on my custom Linux distribution, but I am missing something. My host is Ubuntu 12.04; my target is an i486 custom embedded Linux built with a crosstool-NG cross-compiler toolchain, and the rest of the OS is made with Buildroot.
I will list the facts:
I can run multi-threaded applications on my target
Google Breakpad fails to create a crash report when I run a multi-threaded application on the target. The exact same application with the exact same build of Breakpad libraries will succeed when I run it on my host.
GDB fails to debug multithreaded applications on my target.
e.g.
$./gdb -n -ex "thread apply all backtrace" ./a.out --pid 716
dlopen failed on 'libthread_db.so.1' - /lib/libthread_db.so.1: undefined symbol: ps_lgetfpregs
GDB will not be able to debug pthreads.
GNU gdb 6.8
I don't think ps_lgetfpregs is a problem because of this.
My crosstool build created the libthread_db.so file, and I put it on the target.
My crosstool build created the gdb for my target, so it should have been linked against the same libraries that I run on the target.
If I run gdb on my host, against my test app, I get a backtrace of each running thread.
I suspect the problem with Breakpad is related to the problem with GDB, but I cannot substantiate this. The only commonality is the lack of multithreaded debugging.
There is some crucial difference between my host and target that stops me from being able to debug pthreads on the target.
Does anyone know what it is?
EDIT:
Denys Dmytriyenko from TI says:
Normally, GDB is not very picky and you can mix and match different versions of gdb and gdbserver. But, unfortunately, if you need to debug multi-threaded apps, there are some dependencies for specific APIs...
For example, this is one of the messages you may see if you didn't build GDB properly for the thread support:
dlopen failed on 'libthread_db.so.1' - /lib/libthread_db.so.1: undefined symbol: ps_lgetfpregs
GDB will not be able to debug pthreads.
Note that this error is the same as the one I get, but he doesn't go into detail about how to build GDB "properly".
and the GDB FAQ says:
(Q) GDB does not see any threads besides the one in which the crash occurred; or SIGTRAP kills my program when I set a breakpoint.
(A) This frequently happens on Linux, especially on embedded targets. There are two common causes:
you are using glibc, and you have stripped libpthread.so.0
mismatch between libpthread.so.0 and libthread_db.so.1
GDB itself does not know how to decode "thread control blocks" maintained by glibc and considered to be a glibc private implementation detail. It uses libthread_db.so.1 (part of glibc) to help it do so. Therefore, libthread_db.so.1 and libpthread.so.0 must match in version and compilation flags. In addition, libthread_db.so.1 requires certain non-global symbols to be present in libpthread.so.0.
Solution: use strip --strip-debug libpthread.so.0 instead of strip libpthread.so.0.
I tried a non-stripped libpthread.so.0, but it didn't make a difference. I will investigate any mismatch between libpthread and libthread_db.
This:
dlopen failed on 'libthread_db.so.1' - /lib/libthread_db.so.1: undefined symbol: ps_lgetfpregs
GDB will not be able to debug pthreads.
means that the libthread_db.so.1 library was not able to find the symbol ps_lgetfpregs in gdb.
Why?
Because I built gdb using crosstool-NG with the "Build a static native gdb" option, and this adds the -static option to gcc.
The native gdb is normally built with the -rdynamic option, which populates the .dynsym dynamic symbol table in the ELF file with all symbols, even unused ones. libthread_db uses this symbol table to find ps_lgetfpregs inside gdb.
But -static removes the .dynsym table from the ELF file.
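A quick way to check this on a given gdb binary (a sketch; the path to gdb is illustrative):

readelf --dyn-syms /usr/bin/gdb | grep ps_lgetfpregs   # empty output: libthread_db cannot resolve it
readelf -s /usr/bin/gdb | grep ps_lgetfpregs           # full symbol table, for comparison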
At this point there are two options:
Don't build a static native gdb if you want to debug threads.
Build a static gdb and a static libthread_db (not tested)
Edit:
By the way, this does not explain why Breakpad is unable to debug multithreaded applications on my target.
Just a thought... To use the gdb debugger, you need to compile your code with the -g option. For instance, gcc -g -c *.c.