How to extract the config from a kernel image file when CONFIG_IKCONFIG is set as a module (=m)?

How do I extract the kernel configuration from a kernel image file?
The kernel image file type is:
/boot/kernel7.img: Linux kernel ARM boot executable zImage (little-endian)
The kernel has been compiled with CONFIG_IKCONFIG enabled. However,
scripts/extract-ikconfig /boot/kernel7.img
returns
extract-ikconfig: Cannot find kernel config.
Note: I am trying to get the config without booting the kernel.

If the kernel has been compiled with CONFIG_IKCONFIG=m (note the m), the configuration is stored in a module (configs.ko) and not in the kernel image itself. That is why running extract-ikconfig on the kernel image fails.
In this case, we can extract the config from the configuration module:
/usr/src/$(uname -r)/scripts/extract-ikconfig \
/lib/modules/$(uname -r)/kernel/kernel/configs.ko
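For example, to save the extracted configuration and check it (a minimal sketch; $(uname -r) assumes you want the config of the currently running kernel, so substitute an explicit version when inspecting a different one):
/usr/src/$(uname -r)/scripts/extract-ikconfig \
/lib/modules/$(uname -r)/kernel/kernel/configs.ko > extracted.config
grep CONFIG_IKCONFIG= extracted.config   # should report CONFIG_IKCONFIG=m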

Related

Yocto Warrior on INITRAMFS_IMAGE_BUNDLE and Linux Kernel Image on SD Card Image

I am building my Embedded Linux system using Yocto warrior on Ubuntu 18.04. I have my own core image recipe and an initramfs image recipe.
I've been reading the docs ( https://www.yoctoproject.org/docs/current/mega-manual/mega-manual.html#var-INITRAMFS_IMAGE ) and various posts on the internet in order to come up with the following in my local.conf:
# Use the INITRAMFS bundled in kernel
#KERNEL_IMAGETYPE = "Image-initramfs-jetson-nano.bin"
#KERNEL_IMAGE_BASE_NAME = "Image-initramfs-jetson-nano.bin"
#INITRAMFS_LINK_NAME = ""
INITRAMFS_NAME = "Initramfs"
INITRAMFS_IMAGE = "tegra-minimal-initramfs"
INITRAMFS_IMAGE_BUNDLE = "1"
These lines do in fact create an initramfs-bundled version of my kernel and put it in the deploy directory under the name Image-Initramfs.bin. It is slightly larger than the plain Image kernel file that successfully boots. So Yocto ends up building two kernels: one with the initramfs and one without.
ubuntu@ip:~/Desktop/jetson-yocto/build$ du -sh tmp/deploy/images/jetson-nano/Image-Initramfs.bin
36M tmp/deploy/images/jetson-nano/Image-Initramfs.bin
ubuntu@ip:~/Desktop/jetson-yocto/build$ du -sh tmp/deploy/images/jetson-nano/Image--4.9+git0+3c02a65d91-r0-jetson-nano-20190729195650.bin
33M tmp/deploy/images/jetson-nano/Image--4.9+git0+3c02a65d91-r0-jetson-nano-20190729195650.bin
The docs say this is accomplished with a secondary compilation path:
Controls whether or not the image recipe specified by INITRAMFS_IMAGE is run through an extra pass (do_bundle_initramfs) during kernel compilation in order to build a single binary that contains both the kernel image and the initial RAM filesystem (initramfs) image. This makes use of the CONFIG_INITRAMFS_SOURCE kernel feature.
Note
Using an extra compilation pass to bundle the initramfs avoids a circular dependency between the kernel recipe and the initramfs recipe should the initramfs include kernel modules. Should that be the case, the initramfs recipe depends on the kernel for the kernel modules, and the kernel depends on the initramfs recipe since the initramfs is bundled inside the kernel image.
The problem is that this initramfs Kernel is not installed by Yocto into the final SD Card image. Only the non-initramfs Kernel is installed. I have not been able to find a Yocto directive/setting on how to make it install the initramfs version instead of the non-initramfs one.
How can I do this?
If you are using the wic tool to generate the SD card image, then you can add something like the following in local.conf:
`IMAGE_BOOT_FILES_append = " Image-Initramfs.bin;${KERNEL_IMAGETYPE}"`
However, if you are using a custom script, you will have to provide more information and customize the SD card generation script yourself.
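Putting it together, a local.conf along these lines should both bundle the initramfs and deploy the bundled kernel (a sketch, not a tested recipe; the file name Image-Initramfs.bin follows from the INITRAMFS_NAME used in the question and may differ per machine or branch):
INITRAMFS_NAME = "Initramfs"
INITRAMFS_IMAGE = "tegra-minimal-initramfs"
INITRAMFS_IMAGE_BUNDLE = "1"
# copy the bundled kernel into the wic boot partition, renamed to the stock kernel name
IMAGE_BOOT_FILES_append = " Image-Initramfs.bin;${KERNEL_IMAGETYPE}"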

What is the use of vmlinux file generated when we compile linux kernel

I am compiling the Linux kernel for my ARM board. I have seen a file called vmlinux generated in the kernel root folder. Can someone give a good explanation of this file and its use?
vmlinux is an ELF file: the uncompressed version of the kernel image, which can be used for debugging. The zImage or bzImage files are compressed versions of the kernel image and are what is normally used for booting.
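Because vmlinux keeps the ELF symbol information, it is what debugging tools consume. For example, provided the kernel was built with debug info (CONFIG_DEBUG_INFO), you can resolve an oops address to a source line (the address below is made up for illustration):
addr2line -e vmlinux 0xc01234ab
# or interactively:
gdb vmlinux
(gdb) list *0xc01234ab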
vmlinux as such cannot be used directly by U-Boot. However, by adding metadata in the process of creating a uImage from vmlinux, it becomes possible to boot it via U-Boot.
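A rough sketch of that process (the load and entry addresses are board-specific assumptions, borrowed from the uImage log later in this page; most ARM trees automate the equivalent via make uImage):
objcopy -O binary vmlinux linux.bin   # strip the ELF headers to get a raw binary
gzip -9 linux.bin
mkimage -A arm -O linux -T kernel -C gzip \
-a 0x00208000 -e 0x00208000 \
-n "Linux kernel" -d linux.bin.gz uImage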
vmlinux is the kernel in ELF format; on a typical installation it sits in /boot alongside the initrd file (RAM disk) that is loaded with it.
The vmlinux file is practically the kernel itself.

Linux: Compiling a kernel device driver in standalone fashion

I'm compiling linux for an ARM board. I need to make some customized changes into an existing driver code present in the kernel repository and reload the driver.
I was expecting to find a ".ko" file in the driver directory after doing the make, but no such file exists. Apparently uImage/device tree image compilation doesn't work that way.
Do I need to write my own Makefile for standalone device driver compilation?
It may be a silly question, but sorry I'm pretty new to kernel/device drivers.
EDIT:
I followed the process outlined here: http://odroid.com/dokuwiki/doku.php?id=en:c1_building_kernel
After the git checkout and installing the cross-compiler (arm-linux-gnueabihf-gcc 4.9.2), I issue the basic make commands
$ make odroidc_defconfig
$ make -j4
$ make -j4 modules
$ make uImage
All the steps are successful. The last few lines of log look like
KSYM .tmp_kallsyms1.o
KSYM .tmp_kallsyms2.o
LD vmlinux
SORTEX vmlinux
SYSMAP System.map
OBJCOPY arch/arm/boot/ccImage
Kernel: arch/arm/boot/ccImage is ready
Image arch/arm/boot/ccImage.lzo is ready
UIMAGE arch/arm/boot/uImage
Image Name: Linux-3.10.72
Created: Sat Mar 28 22:44:45 2015
Image Type: ARM Linux Kernel Image (lzo compressed)
Data Size: 5459649 Bytes = 5331.69 kB = 5.21 MB
Load Address: 00208000
Entry Point: 00208000
Image arch/arm/boot/uImage is ready
EDIT 2: Path to the driver code
https://github.com/hardkernel/linux/tree/odroidc-3.10.y/drivers/amlogic/efuse
Examining your Makefile
#
# Makefile for eFuse.
#
obj-$(CONFIG_EFUSE) += efuse_bch_8.o efuse_version.o efuse_hw.o efuse.o
We learn that the code can be built as either a loadable module, or permanently linked into the kernel itself.
Examining odroidc_defconfig from branch odroidc-3.10.y-android mentioned in your instructions we find
#
# EFUSE Support
#
CONFIG_EFUSE=y
With the "y" indicating that the code is to be linked into the driver. Had it instead said "m" it would be built as a module.
It's possible you could change that in the kernel config, but it might also cause problems if nothing is set up to load the module before it is needed.
Most likely, simply installing the newly built kernel with the code already linked inside (i.e., forgetting about the module idea) will work.
Not sure if you are still looking for an answer to this question.
But looking at the Kconfig file for your code shows:
config EFUSE
	bool "EFUSE Driver"
And since all your driver files are compiled under this config, the above description only allows CONFIG_EFUSE to be 'n' or 'y'. So you can only build the code statically (built-in) with this.
All you need to do is change the above description to:
config EFUSE
	tristate "EFUSE Driver"
and also change the other configs in the Kconfig to tristate.
This will allow your driver to be compiled as a module once you select it as 'M' in your kernel config. Then you should be able to see the ".ko" file corresponding to the driver.
Also make sure to use EXPORT_SYMBOL(foo) when building the driver as a module so that any dependencies are taken care of when module symbols are loaded.
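As an alternative to changing the Kconfig, the driver can also be built out of tree with a small kbuild Makefile, which answers the original "do I need my own Makefile" question. A sketch, assuming the four objects can be linked into one module (the module name efuse_mod and the KDIR path are hypothetical placeholders):
# external-module Makefile; recipe lines must be indented with a tab
obj-m := efuse_mod.o
efuse_mod-objs := efuse_bch_8.o efuse_version.o efuse_hw.o efuse.o

KDIR := /path/to/odroid/kernel        # a configured and already-built kernel tree

all:
	$(MAKE) -C $(KDIR) M=$(PWD) ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- modules

clean:
	$(MAKE) -C $(KDIR) M=$(PWD) clean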

Compiling a kernel - no bzImage/vmlinuz produced

I'm trying to compile a kernel (altered version of 2.6.32.9, found here https://github.com/rabeeh/linux-2.6.32.9). I am doing the compilation on an emulated ARM system (qemu) (yes, I should probably cross-compile, but that's a different topic) running Ubuntu Core (https://wiki.ubuntu.com/Core) and the kernel (vmlinuz) from Ubuntu 11.04 (downloaded from http://ports.ubuntu.com/ubuntu-ports/dists/natty/main/installer-armel/current/images/versatile/netboot/vmlinuz).
After running make bzImage, I looked in the arch/arm/boot folder and found only a file called zImage. I tried using this zImage in qemu instead of the vmlinuz I downloaded from ubuntu.com, but that doesn't work; it just shows a black screen. I guess zImage is not the same as bzImage, which (judging from different articles on the internet) is what I think vmlinuz is.
So, a few questions:
Why doesn't make bzImage produce a bzImage/vmlinuz?
Can I convert a vmlinux to a vmlinuz using for example mkimage (there are lots of guides on the opposite...)?
Thanks
The bzImage filename and make target were originally x86-specific ("big zImage"). Many of the bootloaders on architectures other than bare-metal x86 (SPARC, PPC, IA64, etc., and also Xen on *) directly take vmlinux (or one of its compressed forms, for example vmlinux.gz, aka zImage). I guess some maintainers just added bzImage as a make target name because they wanted to have the x86 madness on their arch as well.
I get the result you describe when asking qemu to emulate a CPU other than arm926ej-s, but booting versatilepb with the default CPU works. I cross-compiled my kernel, and I compiled all the drivers into it (so I don't use an initrd).
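For reference, an invocation along these lines boots a versatilepb kernel under qemu (the disk image, root device, and console arguments are assumptions that depend on your setup):
qemu-system-arm -M versatilepb -cpu arm926ej-s \
-kernel arch/arm/boot/zImage \
-hda rootfs.img \
-append "root=/dev/sda console=ttyAMA0" \
-nographic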
Just download the 100 MB arm-eabi toolchain from http://www.mentor.com/embedded-software/sourcery-tools/sourcery-codebench/editions/lite-edition/ (it's free, but they want your email, like the x86 Intel compiler). It has an installer; just click "next" until it's done, like on Windows. Then add the bin directory to your path:
export PATH=~/CodeSourcery/Sourcery_CodeBench_Lite_for_ARM_EABI/bin/:$PATH
Then go back to your kernel source dir and do
make ARCH=arm CROSS_COMPILE=arm-none-eabi- menuconfig
make ARCH=arm CROSS_COMPILE=arm-none-eabi- zImage modules
You can do
sudo make ARCH=arm CROSS_COMPILE=arm-none-eabi- INSTALL_MOD_PATH=path_to_arm_root modules_install
if you can reach your ARM filesystem from the host. If you're using an NFS root it's trivial, but if you're using a disk image you need to either:
use a raw disk image and kpartx (depends on your host kernel having dm-multipath), or
use qemu-nbd, which supports qcow (and depends on the host kernel having network block device support); see the sketch below.
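For example, with qemu-nbd the sequence looks roughly like this (device and partition names are assumptions):
sudo modprobe nbd max_part=8
sudo qemu-nbd --connect=/dev/nbd0 disk.qcow2
sudo mount /dev/nbd0p1 /mnt/arm_root
sudo make ARCH=arm CROSS_COMPILE=arm-none-eabi- \
INSTALL_MOD_PATH=/mnt/arm_root modules_install
sudo umount /mnt/arm_root
sudo qemu-nbd --disconnect /dev/nbd0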
To boot in qemu with disk you need the right drivers (SYM53C8XX SCSI). The versatile defconfig doesn't select those.

Recompile Linux kernel (with Xen) config flags not being used

I am trying to compile the Linux kernel (3.0.0-13) with the Xen dom0 config flags, which are not exposed via menuconfig. (Yes, I know that Ubuntu provides a 'virtual'-flavoured kernel that supports Xen paravirtualization, but that kernel does not seem to boot on my hardware. So I am trying to compile the 'generic'-flavoured Ubuntu kernel with the extra Xen config flags, since I know that the 'generic' flavour runs on my hardware.) Every time I try to compile, my config flags are ignored, judging by the .config file that is generated and packaged with my kernel binary.
I have tried the following:
Downloaded the kernel source using apt-get source linux-image
I have then followed all of the steps from this guide: How to compile a new Ubuntu 11.10 (Oneiric) kernel and performed the following extra steps:
put my own config flags in the config.flavour.xxx file, then compiled the linux-image package
paused the 'debian/rules editconfigs' command immediately after it runs 'menuconfig', replaced the build/.config file with my custom .config file, then compiled the linux-image package
I have also used the following howto How To Compile A Kernel - The Ubuntu Way and run the following commands on kernel source code that I already had:
edit the .config file to have my config flags
run 'make oldconfig'
run 'make-kpkg clean && fakeroot make-kpkg --initrd --append-to-version=-custom kernel_image kernel_headers'
Each time after compiling the kernel, I installed the newly compiled linux-image package and discovered that my config flags are not in the /boot/config-xxx file as I expected.
What am I doing wrong to cause my config flags to be ignored?
What can I do to make sure that my kernel config flags are used when compiling?
Is there some other option than recompiling the kernel to get a Xen dom0 kernel that works for my hardware?
For question 3: Is there another way to get a xen dom0 kernel for my hardware?
Yes.
Although all of the Xen documentation says that all stock kernels support Xen dom0, what it means is that the source for all stock kernels now supports Xen dom0, but that support is turned off in the precompiled binaries.
On Debian there is a prebuilt Linux kernel package with Xen dom0 support turned on: linux-image-xen-686.
For anyone else who really wants to compile their own Xen dom0 kernel, the following site has a good guide: Compiling a Xen Dom0 Kernel for Ubuntu Jaunty.
What am I doing wrong to cause my config flags to be ignored?
The root of the issue lies in the first part of your problem: the Xen dom0 config flags are not exposed via menuconfig.
Simply setting them in the .config doesn't mean they'll be activated. You have to consider the dependencies for the config options.
From the linux 3.0 tag at github: https://github.com/torvalds/linux/blob/02f8c6aee8df3cdc935e9bdd4f2d020306035dbe/arch/x86/xen/Kconfig
config XEN_DOM0
	def_bool y
	depends on XEN && PCI_XEN && SWIOTLB_XEN
	depends on X86_LOCAL_APIC && X86_IO_APIC && ACPI && PCI
Are all of those depends conditions met in your .config?
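If not, you can switch them on before re-running the config step. A sketch using the kernel's scripts/config helper (the option list is taken from the Kconfig above; the exact dependency set can differ between kernel versions):
scripts/config --enable XEN --enable PCI_XEN --enable SWIOTLB_XEN \
--enable X86_LOCAL_APIC --enable X86_IO_APIC --enable ACPI --enable PCI
make oldconfig
grep CONFIG_XEN_DOM0 .config   # should now report CONFIG_XEN_DOM0=y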
What can I do to make sure that my kernel config flags are used when compiling?
In the early stages of the kernel compile process, the .config file is rewritten if there are any discrepancies. A good test to make sure your edits will persist is to check whether they still exist in your .config file after doing a make menuconfig and saving. If your flags are still there after that, you can be sure they will be used.
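Concretely, the test looks like this:
make menuconfig                  # open, then just save and exit
grep CONFIG_XEN_DOM0 .config     # flag still present? then it will be honored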
Is there some other option than recompiling the kernel to get a Xen dom0 kernel that work for my hardware?
Not unless another distribution ships with XEN_DOM0 enabled.
