Kernel debugging - vmlinux-gdb.py fails to run on gdb - linux

I'm trying to remotely debug a Linux kernel.
I've created a VM (using VMware) and connected to it from my PC using gdb, and everything works fine.
The problem is that gdb fails to load the vmlinux-gdb.py script. I tried to add it using the source command in gdb, and got the following error:
Traceback (most recent call last):
File "~/workspace/kernels/default-kernel/scripts/gdb/vmlinux-gdb.py", line 28, in <module>
ImportError: No module named 'linux'
The directory tree:
drwxr-xr-x 2 iofek iofek 4096 Mar 22 19:59 linux
-rwxr-xr-x 1 iofek iofek 577 Mar 22 19:43 Makefile
-rwxrwxr-x 1 iofek iofek 0 Mar 22 19:43 modules.order
-rwxr-xr-x 1 iofek iofek 759 Mar 22 20:00 vmlinux-gdb.py
Now I can't understand why the script fails to find the linux directory.
I've updated PYTHONPATH, and I've also added the path using sys.path.append.
Additionally, all files under linux have the right permissions.
Any ideas?

Short answer
Never use ...linux.../scripts/gdb/vmlinux-gdb.py. Use the file vmlinux-gdb.py that is in the root directory of your kernel build output, alongside your vmlinux file.
If this file does not exist, you need to:
Activate CONFIG_GDB_SCRIPTS in your kernel configuration and rebuild
Long tutorial
First make sure the gdb-scripts will be created during the kernel build:
make menuconfig
Enable CONFIG_GDB_SCRIPTS
make
Then find out if your kernel build is using a separate build output folder and then follow ONE (xor) of the following chapters:
Either: Building in-place without a build binary dir
If you compile your kernel in place, with the .o and .ko files sitting inside the source tree (which is e.g. the way Ubuntu recommends it on wiki.ubuntu.com), you can cd into the source root folder (let's assume for example that you built in ~/gdbenv), start gdb from there, and loading should be possible out of the box:
cd ~/gdbenv
gdb ./vmlinux
(gdb) add-auto-load-safe-path ~/gdbenv
(gdb) source ~/gdbenv/vmlinux-gdb.py
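If you don't want to repeat the add-auto-load-safe-path step every session, it can be made persistent in ~/.gdbinit. A minimal sketch, assuming the example ~/gdbenv build directory from above:

```shell
# Optional: make the safe-path persistent so every future gdb session
# may auto-load vmlinux-gdb.py without the manual 'source' step
# (~/gdbenv is the example build directory from above):
mkdir -p "$HOME/gdbenv"
echo "add-auto-load-safe-path $HOME/gdbenv" >> "$HOME/.gdbinit"
```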
Or: When your way of building a kernel outputs binaries in a separate build dir
This is done e.g. in a Yocto build, where all binaries end up in a separate folder, not mixed with the source folder. In such environments you need to gather everything into one place (vmlinux, the gdb scripts, and optionally the kernel sources).
tar -xf ~/Downloads/linux-blabla.tgz -C ~/gdbenv (optional)
cp .../build/vmlinux-gdb.py ~/gdbenv
mkdir ~/gdbenv/scripts
cp -r .../build/scripts/gdb ~/gdbenv/scripts
cp .../build/vmlinux ~/gdbenv
Then proceed as in the preceding chapter (cd ~/gdbenv, gdb ./vmlinux ...)
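The copy steps above can be sketched as one script. Normally BUILD would point at your real kernel build output directory; here a throwaway fake tree is created only so the sketch can be run as-is:

```shell
# BUILD would be your kernel build output dir; a fake tree is created
# here only so this sketch is runnable as-is -- substitute your path.
BUILD="$(mktemp -d)"
mkdir -p "$BUILD/scripts/gdb"
touch "$BUILD/vmlinux" "$BUILD/vmlinux-gdb.py"

# Gather vmlinux, the loader script, and the gdb helper package into
# one environment, mirroring the cp steps above:
GDBENV="$HOME/gdbenv"
mkdir -p "$GDBENV/scripts"
cp "$BUILD/vmlinux-gdb.py" "$GDBENV/"
cp -r "$BUILD/scripts/gdb" "$GDBENV/scripts/"
cp "$BUILD/vmlinux" "$GDBENV/"
ls "$GDBENV"
```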

For recent kernels, you need to build the gdb scripts first:
<root of kernel source>: make scripts_gdb
After the make, a symlink to vmlinux-gdb.py is created at the root of the kernel source. Then:
<root of kernel source>: gdb vmlinux
<gdb cmd>: add-auto-load-safe-path root-of-kernel-source
<gdb cmd>: source vmlinux-gdb.py

Related

Why can't I execute binary copied into a container?

I have a container built from the base image alpine:3.11.
Now I have a binary my_bin that I copied into the running container. From within the running container I moved to /usr/local/bin and confirmed that the binary is there with the right permissions, e.g.:
/ # ls -l /usr/local/bin/my_bin
-rwxr-xr-x 1 root root 55662376 Jun 12 18:52 /usr/local/bin/my_bin
But when I attempt to execute/run this binary I get the following:
/ # my_bin init
/bin/sh: my_bin: not found
This is also the case if I switch into /usr/local/bin/ and run it via ./my_bin,
and also if I try using the full path:
/# /usr/local/bin/my_bin init
/bin/sh: /usr/local/bin/my_bin: not found
Why am I seeing this behavior, and how can I get the binary to execute?
EDIT 1
I installed file and can also confirm that the binary was copied and is an executable:
file /usr/local/bin/my_bin
/usr/local/bin/my_bin: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=b36f0aad307c3229850d8db8c52e00033eae900c, for GNU/Linux 3.2.0, not stripped
Maybe this gives some extra clues?
Edit 2
As suggested by @BMitch in the answer, I also ran ldd; here is the output:
# ldd /usr/local/bin/my_bin
/lib64/ld-linux-x86-64.so.2 (0x7f91a79f3000)
libpthread.so.0 => /lib64/ld-linux-x86-64.so.2 (0x7f91a79f3000)
libdl.so.2 => /lib64/ld-linux-x86-64.so.2 (0x7f91a79f3000)
libc.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7f91a79f3000)
Edit 3
Based on the output of ldd and more googling, I found that running apk add libc6-compat installed the missing libraries and I could then run the binary.
For a binary, this most likely indicates a missing dynamic library or dynamic loader. You can run ldd /usr/local/bin/my_bin to see all the libraries that binary uses. With Alpine, the most common missing dependency for an externally compiled program is the C library: Alpine is built with musl instead of glibc, and therefore you'll want to compile programs specifically for Alpine.
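The shell's misleading "not found" refers to the missing ELF interpreter, not the binary itself. One way to see which loader a binary requests (using /bin/sh here as a stand-in for my_bin, which only exists in the asker's container):

```shell
# Show which dynamic loader an ELF binary requests. If that loader
# path does not exist in the container (as with glibc binaries on a
# musl-based Alpine image), the shell reports the *binary* as
# 'not found' even though it is present and executable.
readelf -l /bin/sh | grep -i interpreter
```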
For others that may encounter this error in docker containers, I cover various issues in my faq presentation and other questions on the site.
/ # my_bin init
/bin/sh: my_bin: not found
When you execute the above line, the shell reports that the file you are trying to execute cannot be found (my_bin in your case).
Check that the file was copied properly and with the same name, or you might be trying to execute it from a different location.
E.g. try /usr/local/bin/my_bin init if you are not doing cd /usr/local/bin after the ls -l /usr/local/bin/my_bin command.

Why two kernel module copies are required in a linux kernel debug package installed system?

In a Linux machine with kernel debug packages installed, I could see that two copies of kernel modules are there in two locations as mentioned below:
/lib/modules/<$KERNELVERSION>/kernel/
/usr/lib/debug/lib/modules/<$KERNELVERSION>/kernel/
What I don't understand is which module will be loaded, and why two copies are needed.
/lib/modules/<$KERNELVERSION>/kernel/ - modules that will be loaded with the kernel (they are without debug symbols)
Example:
ll /lib/modules/4.15.0-20-generic/kernel/fs/xfs/xfs.ko
-rw-r--r-- 1 root root 1883966 Apr 24 2018 /lib/modules/4.15.0-20-generic/kernel/fs/xfs/xfs.ko
/usr/lib/debug/lib/modules/<$KERNELVERSION>/kernel/ - modules with debug symbols
Example:
ll /usr/lib/debug/lib/modules/4.15.0-20-generic/kernel/fs/xfs/xfs.ko
-rw-r--r-- 1 root root 40247182 Apr 24 2018 /usr/lib/debug/lib/modules/4.15.0-20-generic/kernel/fs/xfs/xfs.ko
As you can see, it's 1.8 MB vs 40 MB. If you compare the outputs of readelf -S <module>, you will notice additional sections like .debug_aranges, .debug_info, .debug_ranges, etc. in the debug module.

How to load kernel module while system is booting up

I cannot load a kernel module when the system boots up. I found an article that suggests trying the following steps:
(a) Create a directory for kmodule (the module I created):
# mkdir -p /lib/modules/$(uname -r)/kernel/drivers/mymodule
(b) Copy kmodule to that directory:
# cp kmodule.ko /lib/modules/$(uname -r)/kernel/drivers/mymodule/
(c) Edit the /etc/modules file and add a line containing your module name. In my case it's kmodule, as follows:
# vi /etc/modules
1 # /etc/modules: kernel modules to load at boot time.
2 #
3 # This file contains the names of kernel modules that should be loaded
4 # at boot time, one per line. Lines beginning with "#" are ignored.
5 kmodule
(d) Reboot the system to see the changes. Use the lsmod command to check whether the module is loaded:
# lsmod | grep kmodule
My problem: it's not loaded when I reboot the system. When I debugged using cat /var/log/syslog | grep kmodule,
I found this:
May 20 15:40:14 SARATHI kernel: [17499.486762] kmodule: loading out-of-tree module taints kernel.
May 20 15:40:14 SARATHI kernel: [17499.486800] kmodule: module verification failed: signature and/or required key missing - tainting kernel
May 20 19:31:46 SARATHI systemd-modules-load[243]: Failed to find module 'kmodule'
What does that mean? How to resolve it?
NOTE: I'm new to kernel modules and I'm using Ubuntu 16.04. Also note that when I load it manually using the insmod command, it loads successfully.
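One step the quoted article omits, and a likely cause of the "Failed to find module" line: systemd-modules-load resolves names via the module dependency index, and after copying a new .ko under /lib/modules that index has to be rebuilt. A sketch (run as root; kmodule is the example module name from the question):

```shell
# Rebuild the module index so modprobe / systemd-modules-load can
# resolve 'kmodule' by name. Needs root; the '|| echo' keeps this
# sketch from aborting when run unprivileged.
depmod -a 2>/dev/null || echo "depmod needs root"
# The module should then appear in the index for the running kernel:
grep kmodule "/lib/modules/$(uname -r)/modules.dep" 2>/dev/null || true
```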

Installing Qt on Ubuntu Linux -- a few questions

I have installed Qt 5.1 on my host Linux platform.
Now I am getting an error while opening Qt Creator from the dashboard :---
qt creator linux cannot overwrite file /home qt version xml
This link suggests how to get rid of this error; I followed it and the above error is resolved :---
https://askubuntu.com/questions/253785/cannot-overwrite-file-home-baadshah-config-qtproject-qtcreator-toolchains-xml
.config folder contains :---
dinesh#ubuntu:~/.config$ ls
dconf goa-1.0 QtProject update-notifier
gnome-disk-utility ibus QtProject.conf user-dirs.dirs
gnome-session nautilus Trolltech.conf user-dirs.locale
My PATH variable is :---
dinesh#ubuntu:~/.config$ $PATH
bash: /usr/lib/lightdm/lightdm:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games: No such file or directory
The qt creator executable is here :---
dinesh#ubuntu:~/.config/QtProject$ ls -l
total 16
drwxr-xr-x 5 dinesh dinesh 4096 Nov 19 21:48 qtcreator
-rw-r--r-- 1 dinesh dinesh 3072 Nov 18 02:56 QtCreator.db
-rw-r--r-- 1 dinesh dinesh 5739 Nov 19 21:53 QtCreator.ini
The which command is not able to locate the qt creator executable :---
dinesh#ubuntu:~/.config/QtProject$ which qtcreator
dinesh#ubuntu:~/.config/QtProject$
I have installed Qt in the /opt/Qt5.1.1 folder :---
dinesh#ubuntu:/opt/Qt5.1.1$ ls
5.1.1 Licenses MaintenanceTool.ini README.txt
components.xml MaintenanceTool network.xml Tools
InstallationLog.txt MaintenanceTool.dat qt-project.org.html
Now I have a few questions :---
1> In the $PATH environment variable we have not mentioned the location of Qt Creator, so how is the dashboard able to open Qt Creator?
2> Also, what exactly is the difference between these two folders, ~/.config and /opt/Qt5.1.1?
3> How can I make my Qt project compile against the Qt 4.8.5 library? Do I have to make some changes in ~/.config?
4> If I install Qt 4.8.5, does it install a separate Qt Creator for itself?
Please advise, so that I understand how everything works together.
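On question 1: the dashboard does not consult $PATH at all; desktop launchers start applications through .desktop files whose Exec= line carries an absolute path. Also note that typing $PATH by itself makes the shell try to execute the expanded string, which is what produced the "No such file or directory" output above. A sketch (the .desktop locations shown are the usual ones; exact file names vary by install):

```shell
# Print PATH correctly (typing $PATH alone tries to *execute* the
# expansion, producing the error seen above):
echo "$PATH"

# Desktop launchers use .desktop entries, not $PATH; look for the
# Exec= line of Qt Creator's entry in the usual locations:
grep -ri "^Exec=" "$HOME/.local/share/applications" /usr/share/applications 2>/dev/null | grep -i qtcreator || true
```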

chroot into other arch's environment

Following the Linux From Scratch book I have managed to build a toolchain for an ARM on an ARM. This is up to chapter 6 of the book, and on the ARM board itself I could go on further with no problems.
My question is if I can use the prepared environment to continue building the soft from chapter 6 on my x86_64 Fedora 16 laptop?
I thought that since I have all the binaries set up I could just copy them to the laptop, chroot inside, and feel as if on the ARM board, but using the command from the book gives no result:
# chroot "$LFS" /tools/bin/env -i HOME=/root TERM="$TERM" PS1='\u:\w\$ ' \
    PATH=/bin:/usr/bin:/sbin:/usr/sbin:/tools/bin /tools/bin/bash --login +h
chroot: failed to run command `/tools/bin/env': No such file or directory
The binary is there, but it doesn't belong to this system:
# ldd /tools/bin/env
    not a dynamic executable
The binary is compiled as per the book:
# readelf -l /tools/bin/env | grep interpreter
[Requesting program interpreter: /tools/lib/ld-linux.so.3]
So I wonder if there is a way, like using proper environment variables for CC, LD, and READELF, to continue building for ARM using these tools on the x86_64 host.
Thank you.
Yes, you certainly can chroot into an ARM rootfs on an x86 box.
Basically, like this:
$ sudo chroot /path/to/arm/rootfs /bin/sh
sh-4.3# ls --version 2>&1 | head
/bin/ls: unrecognized option '--version'
BusyBox v1.22.1 (2017-03-02 15:41:43 CST) multi-call binary.
Usage: ls [-1AaCxdLHRFplinsehrSXvctu] [-w WIDTH] [FILE]...
List directory contents
-1 One column output
-a Include entries which start with .
-A Like -a, but exclude . and ..
sh-4.3# ls
bin css dev home media proc sbin usr wav
boot data etc lib mnt qemu-arm sys var
My rootfs is for a small embedded device, so everything is BusyBox-based.
How is this working? Firstly, I have the binfmt-misc support running in the kernel. I didn't have to do anything; it came with Ubuntu 18. When the kernel sees an ARM binary, it hands it off to the registered interpreter /usr/bin/qemu-arm-static.
A static executable by that name is found inside my rootfs:
sh-4.3# ls /usr/bin/q*
/usr/bin/qemu-arm-static
I got it from an Ubuntu package. I installed:
$ apt-get install qemu-user-static
and then copied /usr/bin/qemu-arm-static into the usr/bin subdirectory of the rootfs tree.
That's it; now I can chroot into that rootfs without even mentioning QEMU on the chroot command line.
Nope. You can't run ARM binaries on x86, so you can't enter its chroot. No amount of environment variables will change that.
You might be able to continue the process by creating a filesystem image for the target and running it under an emulator (e.g, qemu-system-arm), but that's quite a different thing.
No you cannot, at least not using chroot. What you have in your hands is a toolchain with an ARM target for an ARM host. Binaries are directly executable only on architectures compatible with their host architecture - and x86_64 is not ARM-compatible.
That said, you might be able to use an emulated environment. qemu, for example, offers two emulation modes for ARM: qemu-system-arm that emulates a whole ARM-based system and qemu-arm that uses ARM-native libraries to provide a thinner emulation layer for running ARM Linux executables on non-ARM hosts.