Linux Kernel Code Coverage - GCOV

I'm trying to use some test scenarios from the Linux Test Project (LTP) to get kernel source code coverage.
I'm using GCOV/LCOV to do so.
Here are the things I have so far:
GCOV flags in the build config
GCOV-based kernel profiling
CONFIG_GCOV_KERNEL=y
CONFIG_GCOV_PROFILE_ALL=y
On Linux kernel version: 2.6.32.60+drm33.26
After building the kernel I have all the .gcov files in the source folder
GCOV/LCOV works when I use a source file as the input
Things that I should have but don't:
A /proc/gcov folder
The GCOV kernel module (gcov.o?)
Now what I want is to run the test scenarios and, with LCOV, get which portion of the Linux kernel code has been used so far. But when I call lcov -c, this is what I get, even though all the build flags are OK:
Loading required gcov kernel module.
lcov: ERROR: cannot load required gcov kernel module!
There is a kernel patch for kernels < 2.6.30; in later kernels, GCOV support is built in.

Please read this document:
http://www.mjmwired.net/kernel/Documentation/gcov.txt
Here are the answers to your questions:
There is no proc fs for kernel coverage. After booting the new kernel, you have to mount debugfs via the command "mount -t debugfs none /sys/kernel/debug" and read the kernel's coverage data from there.
Kernel coverage cannot be built as a module. As you can see, the CONFIG option is 'y', not 'm'.
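With those options set, a typical capture run looks roughly like this (a sketch, not from the original answer; when no --directory is given, lcov captures kernel data from the debugfs gcov directory):
mount -t debugfs none /sys/kernel/debug
lcov --capture --output-file kernel.info          # kernel capture via /sys/kernel/debug/gcov
genhtml kernel.info --output-directory coverage-html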

Below is my try on the Ubuntu 12.04 default kernel.
Though gcov is not enabled there, debugfs is mounted and some KVM debug information can be found in it.
ubuntu:/sys/kernel# mount -t debugfs none /sys/kernel/debug
mount: none already mounted or /sys/kernel/debug busy
mount: according to mtab, none is already mounted on /sys/kernel/debug
ubuntu:/sys/kernel# umount /sys/kernel/debug
ubuntu:/sys/kernel# mount -t debugfs none /sys/kernel/debug
ubuntu:/sys/kernel# ls debug
acpi bdi bluetooth extfrag gpio hid kprobes kvm mce regmap regulator sched_features suspend_stats tracing usb wakeup_sources x86
ubuntu:/sys/kernel# ls debug/kvm/
efer_reload host_state_reload io_exits mmio_exits mmu_pte_write nmi_window signal_exits
exits hypercalls irq_exits mmu_cache_miss mmu_recycled pf_fixed tlb_flush
fpu_reload insn_emulation irq_injections mmu_flooded mmu_shadow_zapped pf_guest
halt_exits insn_emulation_fail irq_window mmu_pde_zapped mmu_unsync remote_tlb_flush
halt_wakeup invlpg largepages mmu_pte_updated nmi_injections request_irq
ubuntu:/sys/kernel# cat debug/kvm/io_exits
467789515
ubuntu:/sys/kernel#
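On a kernel that does have CONFIG_GCOV_KERNEL enabled, the same mount point additionally contains a gcov directory: a global reset file plus a tree mirroring the kernel build directory, with a .gcda file and a .gcno link per profiled object file. An illustrative listing, assuming the kernel was built in /tmp/linux:
ubuntu:/sys/kernel# ls debug/gcov
reset  tmp
ubuntu:/sys/kernel# ls debug/gcov/tmp/linux/kernel | head -4
async.gcda
async.gcno
audit.gcda
audit.gcno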

Related

Single partition & no initramfs Linux boot gets kernel panic

Compiled a kernel using Gentoo specifications for a ThinkPad T430.
Mounted an empty ext4 hard drive and created a boot/ directory on it; moved bzImage and System.map inside.
Installed extlinux to it with "extlinux --install [mounted directory]/boot".
Placed syslinux.cfg inside with the following config:
DEFAULT linux
SAY Now booting the kernel from EXTLINUX...
LABEL linux
KERNEL /boot/bzImage
APPEND root=/dev/sdb rw init=/bin/bash
Created a bin/ folder on the mounted hard drive and placed the bash binary inside.
At this point I'm able to boot the kernel to the point where it has to run init; however, it panics:
---[ Kernel Panic - not syncing: Requested init /bin/bash failed (error -2). ]---
4chan solved my question in 10 minutes: I didn't have the libc.so libraries.
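(Error -2 is -ENOENT; for a dynamically linked /bin/bash, the "missing file" is usually the dynamic linker or a shared library rather than bash itself. A hedged sketch of the fix, with the target disk mounted at /mnt; library names and paths will vary by distribution:)
ldd /bin/bash                                   # list the libraries bash needs
mkdir -p /mnt/lib/x86_64-linux-gnu /mnt/lib64
cp /lib/x86_64-linux-gnu/libc.so.6 /lib/x86_64-linux-gnu/libtinfo.so.* /mnt/lib/x86_64-linux-gnu/
cp /lib64/ld-linux-x86-64.so.2 /mnt/lib64/      # the program interpreter itself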

QEMU ARM with versatilepb board: hda image support with kernel 3.17.6

I am trying to build & run Linux kernel version 3.17.6 for the QEMU versatile board target.
qemu-system-arm -M versatilepb -kernel zImage-versatile -hdc rootfs.img
While running QEMU with the hda image, it reports:
Please append a correct "root=" boot option; here are the available partitions:
1f00 65536 mtdblock0 (driver?)
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
Can anyone tell what I am missing?
Why are hda and the other SCSI drives not recognised?
I am following the steps from the link below:
http://fedoraproject.org/wiki/Architectures/ARM/HowToQemu#Build_Kernel_Image_From_Source
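(No answer was recorded for this one, but two common causes fit the symptoms: no root= option is being passed, and on versatilepb the disk hangs off a SYM53C8XX SCSI controller, so it appears as /dev/sda and needs CONFIG_SCSI_SYM53C8XX_2 built into the kernel. A sketch of an invocation under those assumptions:)
qemu-system-arm -M versatilepb -kernel zImage-versatile \
    -hda rootfs.img -append "root=/dev/sda console=ttyAMA0"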

Failed to execute /init

I am trying to build a basic root filesystem using Buildroot, for an embedded system (the Banana PI D1).
I am using a kernel from an SDK supplied by the SoC vendor. From this repo I am using only the kernel, found in src/kernel.
There's nothing remarkable about the Buildroot configuration. It builds with no errors and the resulting root filesystem looks like it contains everything I would expect.
I have configured it to build the filesystem as an initramfs embedded within the zImage.
The kernel appears to start up correctly, but cannot load init and then panics:
Booting Linux on physical CPU 0
Linux version 3.4.35 (harmic@penski.harmic.moo.org) (gcc version 4.8.4 (Buildroot 2015.02) ) #7 Sat Mar 21 22:59:18 AEDT 2015
CPU: ARM926EJ-S [41069265] revision 5 (ARMv5TEJ), cr=00053177
...
Kernel command line: root=/dev/mtdblock1 ro init=/sbin/init mem=64M console=ttySAK0,115200
...
Freeing init memory: 4632K
Failed to execute /init
Failed to execute /sbin/init. Attempting defaults...
mmc0: host does not support reading read-only switch. assuming write-enable.
mmc0: new SDHC card at address 0007
mmcblk0: mmc0:0007 SD08G 7.42 GiB
mmcblk0: p1
Kernel panic - not syncing: No init found. Try passing init= option to kernel. See Linux Documentation/init.txt for guidance.
I have tried a number of troubleshooting steps:
I've built a root filesystem using this miniroot project (took some doing, as it is quite out of date). It booted OK, using the same kernel as I am trying to use with the buildroot root fs.
I've tried using both uClibc and eglibc
I've tried using Buildroot's own cross-tools as well as the cross tools supplied by the SoC vendor
I've confirmed that the built rootfs does include an /init (it does!)
There is a gist here containing the buildroot configuration, a copy of the kernel boot log, and a listing of the contents of the generated filesystem.
What steps can I take to troubleshoot this further?
Update:
The generated rootfs.cpio.gz weighs in at 2139200 bytes. I have read that there is a maximum size of initramfs you can use, but I have not been able to find where the hard limit is documented.
I have attached a listing of the generated root filesystem to the gist linked above.
I have unpacked the rootfs on the host and inspected it. /init contains this:
#!/bin/sh
# devtmpfs does not get automounted for initramfs
/bin/mount -t devtmpfs devtmpfs /dev
exec 0</dev/console
exec 1>/dev/console
exec 2>/dev/console
exec /sbin/init $*
/sbin/init is a symlink to /bin/busybox.
/bin/busybox is dynamically linked:
$ file busybox
busybox: setuid ELF 32-bit MSB executable, ARM, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.16, stripped
$ ../../../host/usr/bin/armeb-buildroot-linux-gnueabi-readelf -a busybox | grep "Shared library:"
0x00000001 (NEEDED) Shared library: [libc.so.6]
libc.so.6 is present in /lib. /lib32 is a symlink to /lib for good measure.
The device has 64M RAM.
Both the vendor's cross tools and the buildroot cross tools are set up for eabi
As suggested by @sawdust, the problem was that the CPU was not supposed to be run in big endian mode.
After changing the target to 'ARM (Little Endian)', cleaning and re-building, it now boots correctly.
In retrospect this should have been obvious - inspecting the vendor's kernel image:
$ file zImage
zImage: Linux kernel ARM boot executable zImage (little-endian)
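For reference, a quick way to catch such a mismatch before booting is to compare the ELF header of a target binary (a hypothetical session, using the Buildroot readelf from above) with the file output of the kernel image:
$ ../../../host/usr/bin/armeb-buildroot-linux-gnueabi-readelf -h busybox | grep Data
  Data:                              2's complement, big endian
A big-endian userspace can never run under a little-endian kernel.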

Upgraded Red Hat Linux kernel gives kernel panic on boot

I'm trying to use a new kernel (2.6.32) on RHEL 5.10 32-bit (2.6.18 kernel). The .32 kernel was downloaded from kernel.org and is not patched by Red Hat. I know this is silly, but upgrading to RHEL 6 is not an option for us.
I did make menuconfig; make; make modules; make modules_install; make install; reboot. Then I got a kernel panic. I also built the 2.6.18 kernel from source, both with and without the Red Hat patches; both worked fine.
My question is whether it's possible to use a 2.6.32 kernel with all the filesystems and libraries from a RHEL 5.10 (2.6.18) installation. If it's possible, then what's wrong with my process?
========
Mounting root filesystem.
mount: could not find filesystem '/dev/root'
Setting up other filesystems.
Setting up new root fs
setuproot: moving /dev failed: No such file or directory
no fstab.sys, mounting internal defaults
setuproot: error mounting /proc: No such file or directory
Switching to new root and running init.
unmounting old /dev
unmounting old /proc
unmounting old /sys
switchroot: mount failed: No such file or directory
Kernel panic - not syncing: Attempted to kill init!
Pid: 1, comm: init Not tainted 2.6.32.63 #1
Call Trace:
[<c0xxxxxx>] ? panic
[<c0xxxxxx>] ? do_exit
[<c0xxxxxx>] ? do_group_exit
[<c0xxxxxx>] ? sys_exit_group
[<c0xxxxxx>] ? syscall_call
My /boot/grub/grub.conf has the following.
root (hd0,0)
kernel /boot/vmlinuz-2.6.32-63 ro root=LABEL=/ rhgb
initrd /boot/initrd-2.6.32-63.img
Thanks to all the help and comments, I'm able to answer it myself.
This is what I tried first, without success. I made a diff between the old and new initrd (gunzip | cpio); there are different modules installed, but they don't matter. I disabled loadable modules (everything built in), and the problem remained. I then compiled busybox, put it into the initrd (editing /init), and got a shell. From there I could manually mknod and mount the filesystem, but I still got the kernel panic when switching root.
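(For anyone repeating that debugging step, the unpack/edit/repack cycle looks roughly like this; paths are illustrative:)
mkdir /tmp/initrd && cd /tmp/initrd
gunzip -c /boot/initrd-2.6.32-63.img | cpio -id
cp /path/to/busybox bin/                # a static busybox provides a rescue shell
vi init                                 # e.g. exec bin/busybox sh before switchroot
find . | cpio -o -H newc | gzip -9 > /boot/initrd-debug.img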
Finally I found this. It has a better description of the problem and its solution: enable "deprecated sysfs" and it's all fixed.
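(For reference, a sketch of the .config lines that "deprecated sysfs" corresponds to on 2.6.32; verify the exact names in your menuconfig:)
CONFIG_SYSFS_DEPRECATED=y
CONFIG_SYSFS_DEPRECATED_V2=y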

chroot into another arch's environment

Following the Linux From Scratch book, I have managed to build a toolchain for ARM on
an ARM board. This takes me up to chapter 6 of the book, and on the ARM board itself I could go on further with no problems.
My question is whether I can use the prepared environment to continue building the software from chapter 6 on my x86_64 Fedora 16 laptop.
I thought that, since I have all the binaries set up, I could just copy them to the laptop, chroot inside, and work as if I were on the ARM board, but the command from the book gives no result:
# chroot "$LFS" /tools/bin/env -i HOME=/root TERM="$TERM" PS1='\u:\w\$ ' \
    PATH=/bin:/usr/bin:/sbin:/usr/sbin:/tools/bin /tools/bin/bash --login +h
chroot: failed to run command `/tools/bin/env': No such file or directory
The binary is there, but it doesn't belong to this system:
# ldd /tools/bin/env
not a dynamic executable
The binary is compiled as per the book:
# readelf -l /tools/bin/env | grep interpreter
[Requesting program interpreter: /tools/lib/ld-linux.so.3]
So I wonder if there is a way, such as setting the proper environment variables for CC, LD, and READELF, to continue building for ARM using these tools on an x86_64 host.
Thank you.
Yes, you certainly can chroot into an ARM rootfs on an x86 box.
Basically, like this:
$ sudo chroot /path/to/arm/rootfs /bin/sh
sh-4.3# ls --version 2>&1 | head
/bin/ls: unrecognized option '--version'
BusyBox v1.22.1 (2017-03-02 15:41:43 CST) multi-call binary.
Usage: ls [-1AaCxdLHRFplinsehrSXvctu] [-w WIDTH] [FILE]...
List directory contents
-1 One column output
-a Include entries which start with .
-A Like -a, but exclude . and ..
sh-4.3# ls
bin css dev home media proc sbin usr wav
boot data etc lib mnt qemu-arm sys var
My rootfs is for a small embedded device, so everything is BusyBox-based.
How does this work? Firstly, I have binfmt_misc support running in the kernel. I didn't have to do anything; it came with Ubuntu 18. When the kernel sees an ARM binary, it hands it off to the registered interpreter, /usr/bin/qemu-arm-static.
A static executable by that name is found inside my rootfs:
sh-4.3# ls /usr/bin/q*
/usr/bin/qemu-arm-static
I got it from an Ubuntu package. I installed:
$ apt-get install qemu-user-static
and then copied /usr/bin/qemu-arm-static into the usr/bin subdirectory of the rootfs tree.
That's it; now I can chroot into that rootfs without even mentioning QEMU on the chroot command line.
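(If the hand-off doesn't happen automatically on your distribution, you can inspect the binfmt_misc registration; a sketch from my system, where the entry names may differ:)
$ ls /proc/sys/fs/binfmt_misc/
qemu-arm  register  status
$ head -2 /proc/sys/fs/binfmt_misc/qemu-arm
enabled
interpreter /usr/bin/qemu-arm-static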
Nope. You can't run ARM binaries on x86, so you can't enter its chroot. No amount of environment variables will change that.
You might be able to continue the process by creating a filesystem image for the target and running it under an emulator (e.g., qemu-system-arm), but that's quite a different thing.
No, you cannot, at least not using chroot. What you have in your hands is a toolchain with an ARM target for an ARM host. Binaries are directly executable only on architectures compatible with the architecture they were built for, and x86_64 is not ARM-compatible.
That said, you might be able to use an emulated environment. QEMU, for example, offers two emulation modes for ARM: qemu-system-arm, which emulates a whole ARM-based system, and qemu-arm, which uses ARM-native libraries to provide a thinner emulation layer for running ARM Linux executables on non-ARM hosts.
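(As a concrete illustration of the user-mode option, qemu-arm can run a single ARM binary against the target's own libraries via the -L path prefix; a hedged sketch using the $LFS layout from the question, where the interpreter /tools/lib/ld-linux.so.3 is resolved relative to the prefix:)
$ qemu-arm -L "$LFS" "$LFS/tools/bin/env" --version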
