What is the lldb equivalent for gdb load - rust

I am trying to flash a binary to an STM32F303 MCU. The gdb command for this is "load"; what is the equivalent command in lldb?

target modules load --load --set-pc-to-entry --slide 0 --file <absolute path to binary>
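For context, a complete lldb session against an OpenOCD GDB server would first connect and then load the image. This is a sketch, assuming OpenOCD is listening on its usual GDB port 3333; the binary path placeholder is kept from the answer above:
(lldb) target create <absolute path to binary>
(lldb) gdb-remote 3333
(lldb) target modules load --load --set-pc-to-entry --slide 0 --file <absolute path to binary>
(lldb) continue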

Related

Rust discovery, config file not working as expected

I am trying to avoid entering the same commands in each GDB session. For this, I have followed the instructions in the Rust Discovery book, but the program is not working as described there. When I run it through cargo run, it gives the following error:
ts/project/discovery/src/06-hello-world$ cargo run
error: could not load Cargo configuration
cargo run --target thumbv7em-none-eabihf
Finished dev [unoptimized + debuginfo] target(s) in 0.04s
Running `arm-none-eabi-gdb -q -x openocd.gdb /home/jawwad-turabi/Documents/project/discovery/target/thumbv7em-none-eabihf/debug/led-roulette`
error: could not execute process `arm-none-eabi-gdb -q -x openocd.gdb /home/jawwad-turabi/Documents/project/discovery/target/thumbv7em-none-eabihf/debug/led-roulette` (never executed)
Caused by:
No such file or directory (os error 2)
My openocd.gdb file contains this content:
target remote :3333
load
break main
continue
My config file contains this content:
[target.thumbv7em-none-eabihf]
runner = "arm-none-eabi-gdb -q -x openocd.gdb"
rustflags = [
"-C", "link-arg=-Tlink.x",
]
+[build]
+target = "thumbv7em-none-eabihf"
Please change runner = "arm-none-eabi-gdb -q -x openocd.gdb" to
runner = "gdb-multiarch -q -x openocd.gdb"
because, if you are using Ubuntu 18.04 LTS, that is the command the book tells you to use (a complete corrected config is shown after the notes below):
Ubuntu 18.04 or newer / Debian stretch or newer
NOTE gdb-multiarch is the GDB command you'll use to debug your ARM
Cortex-M programs
Ubuntu 14.04 and 16.04
NOTE arm-none-eabi-gdb is the GDB command you'll use to debug your ARM
Cortex-M programs
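Putting the two snippets together, the corrected .cargo/config from the question would look roughly like this on Ubuntu 18.04 or newer (a sketch; keep arm-none-eabi-gdb in the runner if that is the GDB you actually have installed):
[target.thumbv7em-none-eabihf]
runner = "gdb-multiarch -q -x openocd.gdb"
rustflags = [
"-C", "link-arg=-Tlink.x",
]
[build]
target = "thumbv7em-none-eabihf"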
While flashing the STM32F3, we have to connect to the respective GDB server. The client may be arm-none-eabi-gdb, gdb-multiarch or gdb; you may have to try all three.
Now, as far as your question is concerned, you have to use whichever of those GDBs works for you when launching with openocd.gdb. In my case, I have had success with arm-none-eabi-gdb. Remember, I am using Rust on Windows 10.

How to make gdb for a target and use it there

I am trying to compile gdb-8.2 from source.
Build machine: x86-64
Host AND target: arm-linux-gnueabi
I ran:
CC=arm-linux-gnueabi-gcc ./configure --host=arm-linux-gnueabi --target=arm-linux-gnueabi
make
Then I ran:
make DESTDIR=<Some Path>/gdb_installation install
So I got a usr folder inside the gdb_installation folder. I copied usr/local/bin/gdb to my target and ran:
./gdb
Output:
#./gdb
#
But it does not show anything. It exits without any message.
What am I missing here?
Running the file command shows that the gdb executable is indeed built for my target.
PS: Running a sample hello world program built with arm-linux-gnueabi-gcc works perfectly fine on the target, and the file command shows the same kind of output for it that it did for gdb.
What am I missing here?
Your build looks correct, but doesn't work. It's not clear why, so you need to debug that.
What is the exit status of this gdb on the target?
./gdb --version; echo $?
Does it actually do anything? strace ./gdb --version
Is there anything interesting in the kernel message log?
Depending on answers to above questions, further guesses of what has gone wrong will be possible.
Perhaps there is some .gdbinit that tells GDB to quit? What does this do:
./gdb -nx --version?
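As a concrete starting point, those checks can be combined into a few commands run on the target (a sketch, assuming strace and dmesg are available there):
./gdb -nx --version; echo $?          # exit status, skipping any .gdbinit
strace -o gdb.trace ./gdb --version   # capture the system-call trace for later inspection
dmesg | tail                          # recent kernel messages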

Can we debug the goldfish kernel 3.4 of the default SDK Android emulator running on Windows? (i.e. by a break-point)

I need to know if we can add a breakpoint inside the goldfish kernel and monitor some variables. Please note that I am using a Linux machine to cross-compile both the emulator and the goldfish kernel 3.4 (fetched from AOSP). Also, as I said, the emulator is a .exe process running on Windows.
I spent a couple of days searching for an answer to this question and figured out that it is possible. Here is the answer:
Compile goldfish 3.4 with the debugging options turned on in the kernel configuration file
Get prebuilt toolchain
cd ~
git clone https://android.googlesource.com/platform/prebuilts/gcc/linux-x86/arm/arm-eabi-4.6
Set the environment variables for the prebuilt toolchain by appending the following to ~/.bashrc:
gedit ~/.bashrc
export ARCH=arm
export SUBARCH=arm
export PATH=~/arm-eabi-4.6/linux-x86/toolchain/arm-eabi-4.4.3/bin:$PATH
export CROSS_COMPILE=arm-eabi-
export PATH=$PATH:~/arm-eabi-4.6/darwin-x86/toolchain/arm-eabi-4.4.3
bash # start a new shell so that ~/.bashrc is re-read
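You can sanity-check that the toolchain is now on your PATH (the binary names follow the arm-eabi- prefix set in CROSS_COMPILE above):
which arm-eabi-gcc
arm-eabi-gcc --version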
Get goldfish kernel
git clone https://android.googlesource.com/kernel/goldfish.git
git checkout -t origin/android-goldfish-3.4 -b goldfish3.4
Configure and compile goldfish kernel
gedit $GOLDFISH_KERNEL_PATH/arch/arm/configs/goldfish_armv7_defconfig
Add 'CONFIG_DEBUG_INFO=y' to goldfish_armv7_defconfig
make goldfish_armv7_defconfig
make -j8
Testing new built goldfish kernel
Now you can find the newly built kernel at $GOLDFISH_KERNEL_PATH/arch/arm/boot/zImage
also you need the ELF format of kernel since it contains the debug symbols needed by gdb: $GOLDFISH_KERNEL_PATH/vmlinux
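If you want to confirm that vmlinux really carries debug information, you can look for the DWARF sections with the cross readelf (assuming the prebuilt toolchain ships readelf under the same arm-eabi- prefix):
arm-eabi-readelf -S vmlinux | grep debug_info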
Start the default Android emulator with a gdb server instance listening on port 1234
emulator-arm.exe -verbose -show-kernel -netfast -avd test -qemu -S -gdb tcp::1234,ipv4 -kernel $GOLDFISH_KERNEL_PATH/arch/arm/boot/zImage
Get an appropriate gdb version (the target should be Android ARM)
Download gdb for windows: https://github.com/ikonst/gdb-7.7-android
Copy vmlinux from the kernel root folder to your Windows machine
Start gdb using command line: gdb.exe %GOLDFISH_KERNEL_PATH/vmlinux
(gdb) set remotetimeout 10
(gdb) set debug remote 1
(gdb) target remote localhost:1234
(gdb) b sdhci_request
(gdb) step
(gdb) step
(gdb) cont
(gdb) etc ..
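To avoid retyping these on every run, the same commands can go into a small GDB command file (a hypothetical goldfish.gdb) passed with -x, mirroring the openocd.gdb approach from the first question:
set remotetimeout 10
set debug remote 1
target remote localhost:1234
break sdhci_request
continue
and start gdb with: gdb.exe -x goldfish.gdb vmlinux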

chroot into other arch's environment

Following the Linux From Scratch book, I have managed to build a toolchain for an ARM on an ARM. This covers the book up to chapter 6, and on the ARM board itself I could go on further with no problems.
My question is whether I can use the prepared environment to continue building the software from chapter 6 onwards on my x86_64 Fedora 16 laptop.
I thought that, since I have all the binaries set up, I could just copy them to the laptop, chroot inside and work as if I were on the ARM board, but the command from the book fails:
# chroot "$LFS" /tools/bin/env -i HOME=/root TERM="$TERM" PS1='\u:\w\$ ' \
    PATH=/bin:/usr/bin:/sbin:/usr/sbin:/tools/bin /tools/bin/bash --login +h
chroot: failed to run command `/tools/bin/env': No such file or directory
The binary is there, but it doesn't belong to this system:
# ldd /tools/bin/env
    not a dynamic executable
The binary is compiled as per the book:
# readelf -l /tools/bin/env | grep interpreter
[Requesting program interpreter: /tools/lib/ld-linux.so.3]
So I wonder if there is a way, such as setting the proper environment variables for CC, LD and READELF, to continue building for ARM using these tools on the x86_64 host.
Thank you.
Yes, you certainly can chroot into an ARM rootfs on an x86 box.
Basically, like this:
$ sudo chroot /path/to/arm/rootfs /bin/sh
sh-4.3# ls --version 2>&1 | head
/bin/ls: unrecognized option '--version'
BusyBox v1.22.1 (2017-03-02 15:41:43 CST) multi-call binary.
Usage: ls [-1AaCxdLHRFplinsehrSXvctu] [-w WIDTH] [FILE]...
List directory contents
-1 One column output
-a Include entries which start with .
-A Like -a, but exclude . and ..
sh-4.3# ls
bin css dev home media proc sbin usr wav
boot data etc lib mnt qemu-arm sys var
My rootfs is for a small embedded device, so everything is BusyBox-based.
How is this working? Firstly, I have binfmt_misc support running in the kernel. I didn't have to do anything; it came with Ubuntu 18. When the kernel sees an ARM binary, it hands it off to the registered interpreter /usr/bin/qemu-arm-static.
A static executable by that name is found inside my rootfs:
sh-4.3# ls /usr/bin/q*
/usr/bin/qemu-arm-static
I got it from a Ubuntu package. I installed:
$ apt-get install qemu-user-static
and then copied /usr/bin/qemu-arm-static into the usr/bin subdirectory of the rootfs tree.
That's it; now I can chroot into that rootfs without even mentioning QEMU on the chroot command line.
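In other words, the whole setup boils down to copying the static QEMU into the rootfs and letting binfmt_misc do the rest; roughly like this, assuming your rootfs lives at /path/to/arm/rootfs and the Debian/Ubuntu entry name qemu-arm:
sudo apt-get install qemu-user-static
sudo cp /usr/bin/qemu-arm-static /path/to/arm/rootfs/usr/bin/
cat /proc/sys/fs/binfmt_misc/qemu-arm   # confirm the interpreter is registered
sudo chroot /path/to/arm/rootfs /bin/sh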
Nope. You can't run ARM binaries on x86, so you can't enter its chroot. No amount of environment variables will change that.
You might be able to continue the process by creating a filesystem image for the target and running it under an emulator (e.g, qemu-system-arm), but that's quite a different thing.
No you cannot, at least not using chroot. What you have in your hands is a toolchain with an ARM target for an ARM host. Binaries are directly executable only on architectures compatible with their host architecture - and x86_64 is not ARM-compatible.
That said, you might be able to use an emulated environment. qemu, for example, offers two emulation modes for ARM: qemu-system-arm that emulates a whole ARM-based system and qemu-arm that uses ARM-native libraries to provide a thinner emulation layer for running ARM Linux executables on non-ARM hosts.
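For the user-mode case, a single ARM binary can also be run directly against the ARM libraries without entering a chroot at all; a sketch, assuming $LFS points at the root of the ARM tree from the question:
qemu-arm -L "$LFS" "$LFS"/tools/bin/env --version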

How to debug my Cross compiled Linux Kernel?

I've cross-compiled a Linux kernel (for ARM on i686, using Cross-LFS).
Now I'm trying to boot this Kernel using QEMU.
$ qemu-system-arm -m 128 -kernel /mnt/clfs-dec4/boot/clfskernel-2.6.38.2 --nographic -M versatilepb
Then it prints this line and hangs indefinitely:
Uncompressing Linux... done, booting the kernel.
So, I want to debug the kernel, so that I can study what exactly is happening.
I'm new to kernel builds. Can someone please help me debug my custom-built kernel, as it does not show anything after that statement? Is there any possibility that the kernel is broken? (I don't think so, because it did not give any errors while compiling.)
My aim is to generate a very minimal custom-built Linux OS. Any suggestions regarding toolchains that would be easy and flexible for my requirements (drivers, etc.) are welcome.
Thank you.
You can use GDB to debug your kernel: with QEMU, use the -s and -S options. If you want a simple and reliable toolchain, you can use ELDK from DENX (http://www.denx.de/wiki/DULG/ELDK).
You can install it like this (it's not the latest version, but you get the idea):
wget http://ftp.denx.de/pub/eldk/4.2/arm-linux-x86/iso/arm-2008-11-24.iso
sudo mkdir -p /mnt/cdrom (if necessary)
sudo mount -o loop arm-2008-11-24.iso /mnt/cdrom
/mnt/cdrom/install -d $HOME/EMBEDDED_TOOLS/ELDK/
The command above should install the toolchain under $HOME/EMBEDDED_TOOLS/ELDK (modify it if you need to).
echo "export PATH=$PATH:$HOME/EMBEDDED_TOOLS/ELDK/ELDK42/usr/bin" >> $HOME/.bashrc
You can then see the version of your ARM toolchain like this:
arm-linux-gcc -v
You can test a hello_world.c program like this:
arm-linux-gcc hello_world.c -o hello_world
Then type file hello_world to see the target architecture of the binary; it should be something like this:
hello_world: ELF 32-bit LSB executable, ARM, version 1 (SYSV)
Now, if you want to compile a production kernel, you need to optimize it (I suggest using BusyBox); if you just want one for testing for now, try these steps:
Create a script, set_toolchain.sh, to set up your toolchain:
#!/bin/sh
PATH=$PATH:$HOME/EMBEDDED_TOOLS/ELDK/ELDK42/usr/bin
ARCH=arm
CROSS_COMPILE=arm-linux-gnueabi-
export PATH ARCH CROSS_COMPILE
And run your script (source ./set_toolchain.sh)
Download a Linux kernel and unpack it (let's assume 2.6.x; it's an old kernel, but there is a good chance it will build without compilation errors).
Inside your unpacked kernel tree (the default configurations live under arch/arm/configs, but make is run from the top of the tree):
cd ~/linux-2.6.29
make versatile_defconfig
Here we use the Versatile board; you may need to run make menuconfig and change the OABI option to ARM EABI (this option is under the Kernel Features menu).
After all these steps, you can compile your kernel:
make
If you want verbose compilation, use make V=1.
After this you will find your kernel at arch/arm/boot/zImage.
Hope this helps.
Regards.
I would suggest building your kernel with the debugging options in the Kernel hacking section of your configuration enabled.
Then you may use kdb, or kgdb, which is easier to use but requires another machine running gdb.
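A rough sketch of the kgdb route over a serial console (generic mainline option names; adapt them to your kernel version, and ttyAMA0 here assumes the QEMU versatilepb UART):
# kernel configuration, under Kernel hacking
CONFIG_DEBUG_INFO=y
CONFIG_KGDB=y
CONFIG_KGDB_SERIAL_CONSOLE=y
# kernel command line on the target
console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 kgdbwait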
You can also connect Qemu and GDB. Qemu has the -s and -S options that run a GDB server and allow you to connect to it via TCP to localhost:1234. Then you can load your kernel image (the unzipped one) in GDB and see how far your kernel boots.
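Concretely, that could look something like the following (a sketch: the cross gdb name depends on your toolchain, gdb-multiarch also works, and the kernel must be built with CONFIG_DEBUG_INFO so that vmlinux has symbols):
qemu-system-arm -M versatilepb -m 128 -kernel arch/arm/boot/zImage --nographic -s -S
arm-linux-gdb vmlinux
(gdb) target remote localhost:1234
(gdb) break start_kernel
(gdb) continue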
