Trying to find all the kernel modules needed for my machine using a shell script - linux

I'm developing kernel modules right now, and the build times are starting to get under my skin. As a side effect I'm taking way too many "coffee" breaks during builds.
So I was looking for a way to build only the stuff I need for my platform. Chapters 7 and 8 of "Linux Kernel in a Nutshell" give a good account of how to do that by hand. It's a good read: http://www.kroah.com/lkn/
But although I understand the material, it still takes a lot of manual tweaking to make that work.
Kernels 2.6.32 and later added a new target, make localmodconfig, which scans through lsmod output and changes the .config appropriately. So I thought I had found my "automation". But the Perl script behind it has some problems too.
This thread describes the problems : https://bbs.archlinux.org/viewtopic.php?pid=845113
There was also a proposed solution, which apparently worked for others: run the script directly instead of using make's target.
For me, though, make localmodconfig does not work at all. It's because of the following:
make clean
make mrproper
cp /boot/config-`uname -r` .config
make localmodconfig
and it halts with
vboxguest config not found!!
nf_defrag_ipv6 config not found!!
vboxsf config not found!!
vboxvideo config not found!!
The thing is, my kernel development environment is inside VirtualBox. These vbox modules were installed when I chose to install the VirtualBox Guest Additions.
And the netfilter module might be a distribution-specific module (a lot of netfilter modules are not part of the mainline kernel, so that's no shock to me) that is not included in the mainline kernel.
The obvious workaround is unloading these modules and trying again. But I'm wondering whether there is a patch for streamline_config.pl that would let the user exclude certain modules if s/he wants. The problem is I have zero knowledge of Perl, and I like it that way.
So my problem in a nutshell:
Patch streamline_config.pl so I can give it a list of module names as an argument, which it will exclude while processing the config file.
The script is located at kernel.org
EDIT: Removed the part about the Perl script not running, as mugen kenichi pointed out (how dumb can I be?). But make localmodconfig still does not work, because some modules' code is not under the source tree. Patching streamline_config.pl is still a valid requirement.
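A possible workaround without patching any Perl (assuming the streamline_config.pl in your tree honors the LSMOD file override, which later kernels document): feed the target a filtered module list instead of the live lsmod output, e.g.
$lsmod | grep -vE 'vboxguest|vboxsf|vboxvideo|nf_defrag_ipv6' > /tmp/lsmod.txt
$make LSMOD=/tmp/lsmod.txt localmodconfig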

Anyone else trying to build a minimal kernel image and reduce build time should do the following:
1) Copy the distribution kernel config into your source tree. It can be done with either of the commands given below:
$zcat /proc/config.gz > .config
or
$cp /boot/config-`uname -r` .config
2) Use localmodconfig target.
$make localmodconfig
It will use lsmod to find which modules are loaded at the moment. Then it will go through the distribution's .config, enable those modules and disable the others.
It's important to know that this does not always work flawlessly, so you should tweak your config further using make menuconfig. You will see that some modules are still marked to be built which are in reality unnecessary for your system.
Sometimes out-of-tree modules may cause make localmodconfig to fail. If that's the case you can work around the issue in two ways:
a) Unload the out-of-tree modules and try make localmodconfig again (see the sketch after these options).
b) Run the perl script directly:
$chmod +x scripts/kconfig/streamline_config.pl
$perl scripts/kconfig/streamline_config.pl > .config
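For option a), something along these lines usually does it (the module names are the ones from the error messages above; unloading may fail if another loaded module or a mounted share still uses one of them):
$sudo modprobe -r vboxvideo vboxsf vboxguest nf_defrag_ipv6
$make localmodconfig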
3) Install ccache[1]. It will improve your build time dramatically: it caches objects, so subsequent builds get much faster.
ccache is most likely included in your distribution's repositories, so you can install it through apt-get or yum. On CentOS it's available in the EPEL repo.[2]
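For example (assuming the package is simply called ccache in your distro's repositories):
$sudo apt-get install ccache    # Debian/Ubuntu
$sudo yum install ccache        # CentOS (with EPEL enabled) / Fedora
$ccache -s                      # shows cache hit/miss statistics after a build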
4) Give as many cores as possible to the build job:
$make -j8 CC="ccache gcc"
My results are :
real 3m10.871s
user 4m36.949s
sys 1m52.656s
[1] http://ccache.samba.org/
[2] http://fedoraproject.org/wiki/EPEL

Related

Manually building a Kernel source from Yocto build

I have a Yocto build for i.MX6 and I want to modify its kernel. I figured that if I copy the kernel source outside the Yocto project and make my modifications without dealing with patches, I can speed things up significantly. But the thing is, the kernel source I have to use is already patched, and I want to fetch it and continue working from there; working on the already-patched source files and re-arranging them myself is a painful process.
As a starting point, my patches work fine and I can get a working image using the bitbake fsl-image-multimedia-full command. The kernel source I want to use is the one created after this process.
I have tried copying the source from under ..../tmp/work-shared/imx6qsabresd/kernel-source. Although make zImage and make modules finished without any trouble, the manual build was not successful: it stopped with an error in a dtsi file (Unable to parse...). Of course, I have checked the file and there was no syntax error.
Also, I checked the kernel source files I copied, and it seems that the patches were applied successfully.
Am I doing something wrong with the patches? With my manual build routine I can build the unpatched kernel source with no errors. I am sure there are experienced Yocto users here who have their own workarounds to make this process shorter, so any help is appreciated. Thanks in advance.
You can also edit files in tmp/work-shared/<machine>/kernel-source and then compile the modified kernel with bitbake -C compile virtual/kernel.
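A sketch of that loop, using the machine name from the question (the file you edit is a placeholder for whatever you are changing):
$ vi tmp/work-shared/imx6qsabresd/kernel-source/<file-you-are-changing>
$ bitbake -C compile virtual/kernel
$ bitbake fsl-image-multimedia-full   # or just redeploy the kernel artifacts from tmp/deploy/images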
My favorite method of doing kernel development in a Yocto project is to create an SDK and build the kernel outside of the Yocto system. This allows more rapid builds because make will only build new changes, whereas a kernel build within Yocto always starts from scratch.
Here are some of my notes on compiling the Linux kernel outside of the Yocto system. The exact paths for this will depend on your exact configuration and software versions. In your case, IMAGE_NAME=fsl-image-multimedia-full
1) Run bitbake -c populate_sdk ${IMAGE_NAME}. You will get a self-extracting and self-installing shell script.
2) Run the shell script (for me it was tmp/deploy/sdk/${NAME}-glibc-i686-${IMAGE_NAME}-cortexa9hf-neon-toolchain-1.0.0.sh) and agree to the default SDK location (for me it was /usr/local/oecore-i686).
3) Source the scripts generated by the install script. I use the following helper script to load the SDK so I don't have to keep track of the paths involved. You need to source this in each time you want to use the SDK.
enable_sdk.sh:
#!/bin/bash
if [[ "$0" = "$BASH_SOURCE" ]]
then
echo "Error: you must source this script."
exit 1
fi
source /usr/local/oecore-i686/environment-setup-corei7-32-${NAME}-linux
source /usr/local/oecore-i686/environment-setup-cortexa9hf-neon-${NAME}-linux-gnueabi
4) Copy the defconfig file from your Yocto directory to your kernel directory (checked out somewhere outside of the Yocto tree) as .config.
5) Run make oldconfig in your kernel directory so that the Linux kernel build system picks up the existing .config.
Note: you may have to answer questions about config options that are not set in the .config file.
Note: running make menuconfig will fail when the SDK is enabled, because the SDK does not have the ncurses libraries set up correctly. For this command, run it in a new terminal that has not enabled the SDK, so that it uses the local ncurses-dev packages you have installed.
6) Run make -jN to build the kernel.
7) To run the new kernel, copy the zImage and ${NAME}.dtb files to your NFS/TFTP share or boot device. I use another script to speed up the process.
update_kernel.sh:
#!/bin/bash
set -x
sudo cp /path-to-linux-source/arch/arm/boot/dts/${NAME}.dtb /srv/nfs/${DEVICE}/boot/
sudo cp /path-to-linux-source/arch/arm/boot/zImage /srv/nfs/${DEVICE}/boot/
set +x
You can also point Yocto to your local Linux repo in your .bb file. This is useful for making sure your kernel changes still build correctly within Yocto.
SRC_URI = "git:///path-to-linux-source/.git/;branch=${KBRANCH};protocol=file"
UPDATE: Over a year later, I realize that I completely missed the question about broken patches. Without more information, I can't be sure what went wrong copying the kernel source from Yocto to an external build. I'd suggest opening a Bitbake devshell for the kernel and doing a diff with the external directory after manually applying patches to see what went wrong, or just copy the source from inside the devshell to your external directory.
https://www.yoctoproject.org/docs/1.4.2/dev-manual/dev-manual.html#platdev-appdev-devshell
When debugging certain commands or even when just editing packages, devshell can be a useful tool. When you invoke devshell, source files are extracted into your working directory and patches are applied.
Since it can't parse the file, there seems to be a problem with the patch. How do you patch the device tree? Are you patching it in the .bb file?
If so, check your patch for possible syntax errors; it's very easy to overlook syntax errors in a device tree. You can remove the patch and apply it manually from bitbake -c devshell <kernel-name>.
If not, please try to do it there and check again. Please share the results if any of this helps you.
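As a quick sketch of that manual check (the patch file name is a placeholder):
$ bitbake -c devshell virtual/kernel
# inside the devshell, where the source is unpacked and the earlier patches are already applied:
$ patch -p1 --dry-run < /path/to/your-dts-change.patch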

Installing dependencies in configure script

I'm writing a program that requires LLVM, and thinking of using autotools to ship it on Linux, so from the user's viewpoint the process would look like the well-known ./configure && make && sudo make install.
With autotools, one normally relies on the system package manager to install dependencies. The problem is that, for whatever reason, this doesn't work with LLVM; on Ubuntu 14.04, apt-get thinks the latest version is 3.4, whereas a more recent version would actually be needed. Thus, I need to supply a script to download and build LLVM first (a local copy thereof, not interfering with any older version that might be on the system), a process which takes a few hours.
The most obvious place to put this process is at the start of configure. Is this considered normal and reasonable? Or is there a convention that configure should only contain the things autotools normally puts in it, and installing dependencies should be another script that the user runs first and separately? In the latter case, is there a convention regarding what that separate script should be called?
Don't install anything during configure. The script's name is "configure", not "install-dependencies".
Write a configure check, and if LLVM is missing, give the user an explanation of how to install it. If necessary, provide a separate script to download LLVM.
It is good practice to run configure (and make) as a normal unprivileged user and not as root, so you may not even have permission to install anything. You would have to check whether sudo is installed, and so on.
It may also happen that the system the user is installing on has no network connectivity (firewall, etc.), so your download would fail.
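A minimal sketch of such a check in configure.ac, assuming llvm-config is the right program to probe for; the minimum version number and the helper script name are illustrative, not from the question:
# Abort with a helpful message if llvm-config cannot be found on PATH
AC_PATH_PROG([LLVM_CONFIG], [llvm-config], [no])
AS_IF([test "x$LLVM_CONFIG" = "xno"],
      [AC_MSG_ERROR([llvm-config not found. Install LLVM (>= 3.5) or run ./scripts/fetch-llvm.sh first, then re-run configure.])])
# Export the compile/link flags reported by llvm-config to the Makefiles
LLVM_CXXFLAGS=`$LLVM_CONFIG --cxxflags`
LLVM_LDFLAGS=`$LLVM_CONFIG --ldflags`
LLVM_LIBS=`$LLVM_CONFIG --libs`
AC_SUBST([LLVM_CXXFLAGS])
AC_SUBST([LLVM_LDFLAGS])
AC_SUBST([LLVM_LIBS])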

Set environment vars and enable core dumps in autotools build

I am using Autotools for my current project, on Ubuntu and Linux Mint. With Autotools I can have it check a user's system for any libraries my project requires in order to function properly. Now I would like to check whether the user's system has core dumps enabled and, if not, execute the command ulimit -c unlimited to enable them. How and where do I specify this?
Also, once the user has executed the make command to compile the source code, they execute sudo make install in order to move the binaries to /usr/local/bin/MYPROJECT. I want to add the location of my project's binaries to the PATH environment variable, so that the user can execute any of the binaries in my project from a terminal without needing to type the full path. How and where do I specify this in Autotools?
I'm thinking this is something I would add in the configure.ac file, but I haven't found any examples on how I can do this. Any help would be appreciated.
It sounds as if you basically misunderstand what installation of a software package on Linux is about.
The job of autotools is to build a portable installation package of your software. When I install your package, it does not become your decision whether programs that crash will generate core dumps on my computer when I run them. It does not become your decision what PATH I use to invoke programs by unqualified name. These are my decisions, or defaults that I have accepted from my OS distribution.
If you execute ulimit -c unlimited, the command will in any case only apply to the shell in which it is invoked. It doesn't reconfigure the host system (!).
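If you want to document this for your users instead, it is a one-liner they run themselves (per shell, or from their shell profile), not something configure should do:
$ ulimit -c             # show the current core-dump size limit (often 0)
$ ulimit -c unlimited   # enable core dumps, but only for this shell and its children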
If you would like users to be able to invoke your program by unqualified name, the normal procedure is to make your package install it by default in the place, /usr/local/bin, that unix-like OSes traditionally add to a user's default PATH for finding locally installed programs. That is where autotools will configure it to be installed, by default. Change it only if you don't want your program to be in the user's default PATH.
And in any case, a user can decide where your software is installed by passing --prefix=/path/of/my/choice to the ./configure command. Unless you have some unavoidable reason not to, make your package installation use the defaults that everybody expects and leave it up to the installing user to change them.
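For instance, an installing user who wants the binaries under their home directory, without sudo, can simply do:
$ ./configure --prefix=$HOME/.local
$ make
$ make install    # installs into $HOME/.local/bin, no root needed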
Bottom line: You are asking how to do installation actions with autotools that are not meant to be done with autotools, because they are not meant to be done by package installations.

How can I modify the source code of random.c in Linux? And do I have to recompile the kernel to make it take effect?

I want to add some debug info or printf-style output to random.c in order to look more deeply into the Linux random number generator. The entropy in /dev/random and /dev/urandom is generated by random.c. My questions are:
1. Where can I find the random.c file in Linux 2.6.32?
2. What is the best way to get my modification of the random source code into the kernel? Is it OK to just compile random.c and load it as a loadable kernel module? Or do I have to recompile and install the kernel to make the new random.c with debug messages take effect? The key point is to make sure that only one copy of the random number generator is running in the kernel.
Thank you. Any kind of suggestion is highly appreciated.
random.c is linked directly into the kernel; it isn't built as a module, so you can't just recompile it alone and load it into your running kernel. You need to recompile the whole kernel.
To build the kernel, make sure you have the usual development tools installed: gcc, GNU make, etc. Some distros provide a "build-essentials" or "Development Tools" or similar metapackage that include all of the usual development tools for building the core system packages.
How you build your kernel depends on whether you have any distribution-specific patches that are needed to use your system, or whether you want to ensure that you use your distro's packaging system to install the kernel. If so, you should probably follow your distro's instructions for building the kernel. For example, Ubuntu's instructions, Arch's instructions, Fedora's instructions, CentOS instructions (likely similar on RHEL 6; Red Hat doesn't provide documentation, as they don't support building custom kernels), SuSE instructions.
Otherwise, if you don't mind configuring and installing your kernel manually, you can do it by hand. The following instructions should cover most distros reasonably well, but be sure to check your distro's docs in case there are any distro-specific gotchas.
Download the appropriate tarball from kernel.org and decompress it somewhere. Or if you prefer, check it out using Git. Since you reference 2.6.32, I've included the latest version of 2.6.32 in the below instructions.
$ curl -O https://www.kernel.org/pub/linux/kernel/v2.6/longterm/v2.6.32/linux-2.6.32.61.tar.xz
$ xzcat linux-2.6.32.61.tar.xz | tar xvf -
$ cd linux-2.6.32.61
# or...
$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
$ cd linux
$ git checkout -b my-branch v2.6.32.61
Now you need to configure it, build it, and install it. Greg Kroah-Hartman, a leading kernel developer and stable kernel maintainer, has a free book on the subject. I'd recommend reading his book, but if you want a quick rundown, I'll summarize the highlights.
There are several ways to configure it. A good way to start is to just copy your current config in, and then run make menuconfig or make xconfig to get a curses-based or graphical kernel configuration utility that allows you to easily browse and choose the right options (as there may be new options in the new kernel that you are building). Many distros install the config for a given kernel in /boot/config or /boot/config-version, corresponding to the kernel version. Copy that into your source tree as .config, and then run make menuconfig or make xconfig:
$ cp /boot/config .config
$ make xconfig
After configuring it, I'd recommend adding something to the EXTRAVERSION definition in the Makefile. Its contents are tacked on to the version string, which helps distinguish your modified kernel from the upstream one and keeps track of which kernel is yours.
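For example (a hedged sketch; the tag is arbitrary, and you can just as well edit the Makefile by hand):
$ sed -i 's/^EXTRAVERSION =.*/&-randomdbg/' Makefile
$ make kernelversion    # should now print something like 2.6.32.61-randomdbg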
Once it's configured, just build it like anything else. I recommend using -j to run a parallel build if you have multiple cores.
$ make -j8
Now it's built, and you can install it. On most systems, the following works; if not, check out Greg's book or check your distro's documentation:
$ sudo make modules_install
$ sudo make install
And finally you have to add it to your bootloader (on some systems, make install may do this, on some it may not). Depending on whether you use Lilo, Grub, or Grub2, you may need to edit /etc/lilo.conf (followed by running sudo lilo to install the changes), /boot/grub/menu.lst, or /boot/grub/custom.cfg (followed by sudo grub-mkconfig -o /boot/grub/grub.cfg to install the changes). See the relevant documentation for the given bootloader for more details. Generally you want to copy an existing entry and duplicate it, updating it to point to your new kernel. Make sure you leave the existing entries, so you will be able to boot back into your old kernel if this doesn't work.
Now reboot, select your new kernel, and hope your system boots. Woo! You've built your own kernel.
Now that you've made sure you can do that successfully without modifications, you can make your change. You are going to want to modify drivers/char/random.c. To print out debugging statements, use printk(). It works mostly like printf(), though it's not exactly the same so check out the documentation before using it. After you modify, rebuild, and reinstall your new kernel, and reboot into it, you can see the messages printed out with printk() statements using the dmesg command.
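For example, after booting into the modified kernel (assuming your messages contain a recognizable tag such as "random"):
$ dmesg | grep random
$ tail -f /var/log/kern.log    # follow the kernel log live; the path varies by distro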
For more information, check out Greg's book that I linked to above, the kernel README, and HOWTO, browse around the kernel's Documentation directory, and various other docs.
If you look at its Makefile, the char driver is not meant to be compiled as a module (random.o is included as obj-y in drivers/char/Makefile).
You can read more about how the kbuild (kernel build) system works from: https://www.kernel.org/doc/Documentation/kbuild/makefiles.txt
In particular, section 3.1, "Goal definitions", touches on this topic.
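A quick way to confirm that for yourself in the source tree:
$ grep -n 'random\.o' drivers/char/Makefile    # it shows up in an obj-y line, i.e. built in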
Generally you can search for files in the kernel sources using source cross-references (called LXRs). One is provided, for example, at http://lxr.free-electrons.com/
Indeed, you can add your modifications to drivers/char/random.c and recompile the char driver. After that you will have to rebuild the kernel so that it also links your new random.o into the kernel image. And then you will have to boot that kernel; that process will depend on your distribution.
Most distributions have good/decent instructions around how to recompile/boot your own kernel.

Linux Kernel Source Code Modification and Re-compile

I'm modifying the kernel source code (/linux/net/mac80211/mesh_hwmp.c) to add some signature authentication to the routing frames. After modifying the source code, do I have to build and install the kernel again for the changes to take effect?
Following are the steps I followed:
Downloaded the kernel from git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-testing.git
After downloading, copied the current config from the /boot directory into wireless-testing: $ cp /boot/config-`uname -r` ./.config
Ran make menuconfig and selected the following features:
Networking -> Wireless -> Generic IEEE 802.11 Networking Stack (mac80211)
Built it using fakeroot make-kpkg --initrd kernel_image kernel_headers
After building the kernel, installed the created .deb packages (the core and its headers) using the command
$ sudo dpkg -i linux-*.deb
Did a reboot of the system
It is a time-consuming process if I have to go through this for every change that I make to the code (/net/mac80211/mesh_hwmp.c). I'm not sure if I'm overdoing it by rebuilding the whole kernel. Is it sufficient if I just run the Makefile(s) in the mac80211 directory? Or do I have to go through this process no matter what?
Is the current config from /boot the distro's default config? If so, it probably contains hundreds or thousands of modules you will never need. Build with that once, install and boot the kernel. Then make sure you load the modules you're interested in (e.g. enable wifi, plug in USB devices) and run make localmodconfig in your kernel source tree (see make help for details). Enable more configs as needed, and use that config for development.
You might also find that sudo make INSTALL_MOD_STRIP=1 modules_install install will do the right thing on a lot of distros to install the kernel, and you'll avoid any problems related to creating a package and the forced rebuilds it brings. The downside is that you'll have to manually remove the old kernels, configs, and initrds from /boot and old modules from /lib/modules.
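Putting those two suggestions together, the per-change loop after the one-time localmodconfig trim can be as short as the following (a sketch, assuming make install updates your bootloader entry, as it does on most Debian-style setups):
$ make -j$(nproc)                                         # incremental: only rebuilds what changed
$ sudo make INSTALL_MOD_STRIP=1 modules_install install
$ sudo reboot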
