Running Berkeley Boot Loader on gem5 RISCV FS mode - riscv

I was trying to run the Berkeley Boot Loader on gem5 RISCV FS mode. I used the fs.py script provided with gem5 and passed the bbl binary path to the script's --kernel option. gem5 shows
'Starting Simulation...' and then just hangs without any output. How can I specify a binary for bbl to execute? Are there any kernels that can be booted on gem5 RISCV FS mode? Can anyone provide some resources on FS mode in gem5 RISCV?

In the current implementation of gem5, RISC-V only supports bare-metal applications. So when you pass the --kernel flag, it is actually converted to --boot-loader internally and run as a bare-metal ELF.
You can find out what's going on by enabling the execution debug flag, which will print a trace of every executed instruction:
--debug-flags=Exec
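For example, a hypothetical invocation (the build path, config script, and bbl path are placeholders; exact script names vary across gem5 versions) that dumps the trace to a file so you can see where bbl stops making progress:

```shell
# Hypothetical gem5 invocation: --debug-flags enables the Exec trace and
# --debug-file redirects it to a file instead of stdout (paths are placeholders).
build/RISCV/gem5.opt --debug-flags=Exec --debug-file=exec.trace \
    configs/example/fs.py --kernel=/path/to/bbl
```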

Related

How to get kernel version without running it (Get ARM kernel version on an AMD64 Linux)

I need to know the version of a kernel file without running it. Therefore, the following questions arise in this realm:
Is it possible to get the kernel version in the u-boot environment? I mean before running the kernel I want to get the version of my kernel file.
Suppose I am running Ubuntu on my amd64 processor and I have a zImage file which is cross compiled for an ARM processor. Therefore I cannot run this zImage file on amd64. How can I get the version of this zImage file without running it on an ARM processor? I checked the uname manual, but it does not accept a file as an argument. I also tried readelf -V on a vmlinux kernel file, but that was an unsuccessful attempt.
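One approach, sketched below with a synthetic stand-in file (the file name, offsets, and version string are all invented for the demo): a zImage carries its payload as a compressed stream that contains the "Linux version ..." banner, so you can locate the gzip magic bytes, decompress from there, and grep. The kernel tree's scripts/extract-vmlinux script automates essentially this.

```shell
# Build a stand-in "zImage": a boot stub followed by a gzip stream containing
# a version banner (a real zImage embeds its payload in a similar way).
printf 'BOOTSTUB' > fake-zImage
printf 'Linux version 5.4.0-fake (gcc 9.3.0)\n' | gzip -c >> fake-zImage

# Locate the gzip magic (1f 8b 08), then decompress from that byte offset.
off=$(grep -abo $'\x1f\x8b\x08' fake-zImage | head -n1 | cut -d: -f1)
ver=$(tail -c "+$((off + 1))" fake-zImage | gzip -dc 2>/dev/null | grep -m1 'Linux version')
echo "$ver"
```

On a real image, pipe the decompressed output through strings first, since it is a binary rather than plain text.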

A working linux kernel + gem5 config for FS boot up in x86 SMP

I'm trying to bring up a gem5 FullSystem (FS) simulation of Linux kernel 2.6.22.9 (as the binary was provided by gem5) and also with a custom Linux kernel 3.4.112 on a TimingSimpleCPU. While both of these work in a single core x86 FS simulated machine, they fail to boot up in a multi-core simulated machine.
I'm lost on how to even begin debugging. I have tried connecting to the remote gdb port provided by gem5 TimingSimpleCPU for each processor on ports 7000, 7001 and so on. I see that on a dual core boot up, after a point, core 0 gets stuck on schedule() call and core 1 always stays on idle() and never schedules() anything until core 0 also gets stuck on the schedule() call.
What is a proper way to go about debugging gem5 and its compatibility with Linux Kernel for multi core full system boot up on a TimingSimpleCPU X86 arch? I'm thinking there could be issues relating to spinlock support or the APIC.
X86, 2 cores, Linux kernel 5.1, TimingSimpleCPU, gem5 08c79a194d1a3430801c04f37d13216cc9ec1da3 happened to work on this setup: https://github.com/cirosantilli/linux-kernel-module-cheat/tree/6aa2f783a8a18589ae66e85f781f86b08abb3397#gem5-buildroot-setup-getting-started The boot completes and cat /proc/cpuinfo reports 2 CPUs.
The final run command was:
./run --cpus 2 --emulator gem5 -- --cpu-type TimingSimpleCPU
Everything is specified in that repo, including how to build gem5, the Linux kernel, and how to run them.
Then, with a mere flick of a switch, the same works on aarch64 as well if you are curious:
./run --arch aarch64 --cpus 2 --emulator gem5 -- --cpu-type TimingSimpleCPU
I then added the options --caches --l2cache as per OP's comment, and now I reproduce the failure, to which I don't have a solution:
./run --cpus 2 --emulator gem5 -- --cpu-type TimingSimpleCPU --caches --l2cache
Boot hangs, the last terminal message is:
pci 0000:00:04.0: legacy IDE quirk: reg 0x1c: [io 0x0376]
and a bit above we can see the suspicious message:
[Firmware Bug]: CPU1: APIC id mismatch. Firmware: 1 APIC: 0
ARM boot with the extra options worked however:
./run --arch aarch64 --cpus 2 --emulator gem5 -- --cpu-type TimingSimpleCPU --caches --l2cache
However, I later tried with more cache options:
./run --arch aarch64 --cpus 2 --emulator gem5 --run-id 2 -- --cpu-type=HPI --caches --l2cache --l1d_size=64kB --l1i_size=64kB --l2_size=256kB
and it also failed as explained at: https://github.com/cirosantilli/linux-kernel-module-cheat/tree/99180e6616331b7385b09147f11f67962f9facc4#gem5-arm-multicore-hpi-boot-fails ...
How to debug such problems to get things working in general is an extremely difficult problem that requires understanding enough Linux kernel + X86 ISA + gem5, where enough is undefined. This learning process is closely intertwined with enabling just the right log options / focusing on the right part of the code. That setup just happened to work out of "luck".

gdb can't resolve symbols for linux kernel

I have set up a Linux kernel debug environment with VMware Workstation. gdb connects to the target correctly, but I can't set any breakpoints or examine any kernel symbols.
Target Machine (debugee) Ubuntu 18:
I have compiled linux kernel 5.0-0 with the following directives:
CONFIG_DEBUG_INFO=y
# CONFIG_DEBUG_INFO_REDUCED is not set
# CONFIG_DEBUG_INFO_SPLIT is not set
CONFIG_DEBUG_INFO_DWARF4=y
CONFIG_DEBUG_FS=y
# CONFIG_DEBUG_SECTION_MISMATCH is not set
# CONFIG_DEBUG_FORCE_WEAK_PER_CPU is not set
Also my VMX file configuration:
debugStub.listen.guest64 = "TRUE"
debugStub.listen.guest64.remote="TRUE"
After that I transferred vmlinux to the debugger machine and used gdb:
bash$ gdb vmlinux
gdb-peda$ target remote 10.251.31.28:8864
Remote debugging using 10.251.31.28:8864
Warning: not running or target is remote
0xffffffff9c623f36 in ?? ()
gdb-peda$ disas sys_open
No symbol "do_sys_open" in current context.
First you need to install kernel-debug-devel, kernel-debuginfo, and kernel-debuginfo-common for the corresponding kernel version.
Then you can use the crash utility to debug the kernel; it uses gdb internally.
The symbol name you're looking for is sometimes not exactly what you expect it to be. You can use readelf or other similar tools to find the full name of the symbol in the kernel image. These names sometimes differ from the names in the code because of various architecture level differences and their related header and C definitions in kernel code. For example you might be able to disassemble the open() system call by using:
disas __x64_do_sys_open
if you've compiled it for x86-64 architecture.
Also keep in mind that these naming conventions are subject to change in different versions of kernel.

make-kpkg build kernel with -O0 for kgdb

I am going to set up kgdb to debug the Ubuntu/Debian kernel.
By default, the kernel compiled by make-kpkg is optimized (-O2), so I am not able to inspect variables.
Is there a way to disable the kernel compilation optimization (for example, with -O0)?
Thanks!
Currently, gdb reports the variable has been optimized:
(gdb) p pb
$5 = <optimized out>
The Linux kernel depends on -O2. It will not compile with any lower optimization levels. It uses several GCC "tricks" that only work when certain optimizations are turned on, such as inline functions that are supposed to act like macros.

How to debug Linux kernel modules with QEMU?

I am working on an academic project that modifies some kernel networking code and adds a new kernel module.
I am using QEMU to load the modified kernel and test it.
However, I find that a complete OS in some .img file seems to be required to debug.
Is it possible without it?
Or, which distro can be used with a 2.6 kernel for the system? The distro need not have any features beyond the ability to run programs, including networking support.
The easiest way in my opinion is to use buildroot
http://buildroot.uclibc.org/
Clone it, and configure it to use your custom kernel (the default userspace is fine for a start; you might want to change it later).
It will build your kernel and root filesystem. The entire process takes about half an hour, twenty minutes of which is spent compiling the monster.
My run line looks something like:
qemu-system-i386 \
    -hda rootfs.ext2 \
    -kernel bzImage \
    -m 512M \
    -append "root=/dev/sda console=ttyS0" \
    -localtime \
    -serial stdio
and some more options regarding a tap device
Minimal fully automated QEMU + GDB + Buildroot example
QEMU + GDB on non-module Linux kernel is covered in detail at: How to debug the Linux kernel with GDB and QEMU? and building the kernel modules inside QEMU at: How to add Linux driver as a Buildroot package Get those working first.
Next, I have also fully automated GDB module debugging at: https://github.com/cirosantilli/linux-kernel-module-cheat/tree/1c29163c3919d4168d5d34852d804fd3eeb3ba67#kernel-module-debugging
These are the main steps you have to take:
Compile the kernel module with debug symbols:
ccflags-y += -g -DDEBUG
as mentioned at: kernel module no debugging symbols found
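For an out-of-tree module, the whole Kbuild Makefile can be as small as this (the module name is a placeholder):

```make
# Hypothetical out-of-tree module Makefile; mymodule.c is the placeholder source.
obj-m += mymodule.o
ccflags-y += -g -DDEBUG
```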
Stop GDB with Ctrl + C and run:
lx-symbols path/to/parent/of/modules/
This amazing command, which is defined in a GDB Python script inside the Linux kernel source tree, automatically loads symbols for loaded modules present under the given directory recursively whenever GDB stops.
The best way to make that command available is to use:
gdb -ex add-auto-load-safe-path /full/path/to/linux/kernel
as explained at: GDB: lx-symbols undefined command
insmod the kernel module.
This must be done before setting breakpoints, because we don't know where the kernel will insert the module in memory beforehand.
lx-symbols automatically takes care of finding the module location (in host filesystem and guest memory!) for us.
Break GDB again with Ctrl + C, set breakpoints, and enjoy.
If you are feeling hardcore, you could also drop lx-symbols entirely and find the module location after insmod with:
cat /proc/modules
and then add the .ko manually with:
add-symbol-file path/to/mymodule.ko 0xfffffffa00000000
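If you go that manual route, the .text load address that add-symbol-file needs is also exported by the guest kernel through sysfs (the module name below is a placeholder; reading these files requires root):

```shell
# In the guest, as root: each loaded module exposes its section load addresses.
cat /sys/module/mymodule/sections/.text
# Feed the printed address to gdb on the host:
#   add-symbol-file path/to/mymodule.ko <that address>
```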
