Building a minimal Linux kernel for VMware

I am building a Linux kernel for a VMware machine for learning kernel development (understanding and using basic kernel APIs for writing kernel code). What configuration options can I safely turn off if I want a minimal kernel?
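One possible starting point, as a rough sketch: begin from the smallest configuration the build system can generate and add back only what the VM needs. The specific CONFIG_ options below (VMware paravirtual SCSI and vmxnet3 networking) are illustrative assumptions for a recent kernel tree, not a verified minimal set; check them in menuconfig for your version.

    # Practical route: boot a stock distro kernel in the VM first, then trim the
    # config down to the modules that are actually loaded on that system
    make localmodconfig

    # More aggressive route: start from the smallest config the tree can generate
    # and add options back by hand
    make tinyconfig

    # Re-enable a few things a VMware guest typically needs (illustrative names,
    # verify them in menuconfig for your kernel version)
    scripts/config --enable CONFIG_PCI \
                   --enable CONFIG_BLK_DEV_SD \
                   --enable CONFIG_VMWARE_PVSCSI \
                   --enable CONFIG_VMXNET3

    make olddefconfig    # resolve newly exposed options with their defaults
    make -j"$(nproc)"    # build the kernel

Sound, wireless, most filesystems and exotic drivers can usually stay off for a learning kernel, as long as the options for your root disk and root filesystem remain enabled.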

Related

How do I compile a userspace application for a particular Linux kernel?

I have the Linux kernel source and I am building a Linux kernel image from it. Now, I have a userspace application. How do I compile the userspace application for that particular Linux kernel source?
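A rough sketch of the usual approach, assuming the application only needs the kernel's exported userspace (UAPI) headers; a normal userspace binary does not otherwise have to be rebuilt per kernel, as long as the system calls it uses exist. The file name myapp.c is a placeholder.

    # In the kernel source tree: export the sanitized userspace headers
    make headers_install INSTALL_HDR_PATH=/tmp/kernel-headers

    # Build the application against those headers instead of the distro's copy
    gcc -I/tmp/kernel-headers/include -o myapp myapp.c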

Testing the Linux kernel for an embedded device with different debug options

How can one test a Linux kernel for a specific target? Also keeping in mind Point 12 from https://www.kernel.org/doc/html/latest/process/submit-checklist.html:
Has been tested with CONFIG_PREEMPT, CONFIG_DEBUG_PREEMPT, CONFIG_DEBUG_SLAB, CONFIG_DEBUG_PAGEALLOC, CONFIG_DEBUG_MUTEXES, CONFIG_DEBUG_SPINLOCK, CONFIG_DEBUG_ATOMIC_SLEEP, CONFIG_PROVE_RCU and CONFIG_DEBUG_OBJECTS_RCU_HEAD all simultaneously enabled.
Consider the scenario: I want to compile a Linux kernel for an ARM-based target. I have the Linux source files and I code on my Ubuntu host PC. I did some research and found out that there are Linux kernel self-tests in the tools/testing directory. This page from kernel.org says:
These are intended to be small tests to exercise individual code paths in the kernel. Tests are intended to be run after building, installing and booting a kernel.
But when I run this makefile and execute the script, it tests the kernel on my host system; it does not test the source files of the kernel I want to build.
Is my understanding correct?
What I want to do is run this self-test suite after my ARM target has booted. Is this possible? How can one run these selftests, or some other kind of tests, for the Linux kernel?
My knowledge background: embedded C developer, certified Linux Foundation sysadmin. I have generated custom images for the Raspberry Pi using Yocto, so I have limited knowledge of Yocto. I have no knowledge of Linux kernel driver development.
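One hedged way to approach this: kselftests are meant to run on the booted target, but they can be cross-compiled on the host from the same kernel tree, staged into a directory, and copied over. The ARCH/CROSS_COMPILE values, the chosen test targets, and the target address below are assumptions for an ARM board.

    # On the Ubuntu host, in the kernel source tree used for the target: enable the
    # debug options from the submit-checklist before building the kernel itself
    scripts/config --enable CONFIG_PREEMPT --enable CONFIG_DEBUG_PREEMPT \
                   --enable CONFIG_DEBUG_SLAB --enable CONFIG_DEBUG_PAGEALLOC \
                   --enable CONFIG_DEBUG_MUTEXES --enable CONFIG_DEBUG_SPINLOCK \
                   --enable CONFIG_DEBUG_ATOMIC_SLEEP --enable CONFIG_PROVE_RCU \
                   --enable CONFIG_DEBUG_OBJECTS_RCU_HEAD

    # Cross-compile a subset of the selftests and stage them for installation
    make -C tools/testing/selftests ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- \
         TARGETS="timers futex" install INSTALL_PATH=/tmp/kselftest-arm

    # Copy the staged tests to the booted ARM target and run them there
    scp -r /tmp/kselftest-arm root@192.168.1.50:/opt/
    ssh root@192.168.1.50 /opt/kselftest-arm/run_kselftest.sh

Running the selftest makefile on the host without ARCH/CROSS_COMPILE builds and runs the tests against the host system, which matches the behaviour described above.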

Linux porting for a RISC-V multicore processor

We are developing a multi-core processor with the RISC-V architecture.
We have already ported Linux for a single-core RISC-V processor, and it is working on our own FPGA-based board with a BusyBox rootfs.
I now want to port Linux for the multi-core RISC-V processor.
My questions are:
Does the gnu-riscv-gcc toolchain available now support multi-core?
Does the Spike simulator available now support multi-core?
Should I make any changes to the BBL bootloader (Berkeley Boot Loader) to support multi-core?
What changes should I make to my single-core Linux kernel to support multi-core?
The current RISC-V ecosystem already supports SMP Linux.
No changes to the compiler are required for multicore.
Spike can simulate multicore when using the '-p' flag.
BBL supports multicore.
Before building Linux, configure it to support SMP.
Any hiccups are probably due to a toolchain that is out of sync with the newest privileged-spec changes. Last fall, users successfully built and ran multicore Linux on RISC-V.
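A minimal sketch of that flow; the toolchain prefix, directory layout, and hart count are assumptions.

    # Configure and build an SMP-enabled RISC-V kernel
    make ARCH=riscv CROSS_COMPILE=riscv64-unknown-linux-gnu- defconfig
    scripts/config --enable CONFIG_SMP
    make ARCH=riscv CROSS_COMPILE=riscv64-unknown-linux-gnu- olddefconfig
    make ARCH=riscv CROSS_COMPILE=riscv64-unknown-linux-gnu- -j"$(nproc)"

    # Build BBL with the kernel as its payload (a riscv-pk checkout is assumed
    # to sit alongside the kernel tree)
    cd ../riscv-pk && mkdir -p build && cd build
    ../configure --host=riscv64-unknown-linux-gnu --with-payload=../../linux/vmlinux
    make

    # Run on Spike with 4 harts
    spike -p4 bbl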
This is all expected to work out of the box. My standard testing flow for Linux and QEMU pull requests is to boot a Fedora root filesystem on QEMU via Linux+BBL. Instructions can be found on the QEMU Wiki Article about RISC-V. This will boot in our "virt" board, which uses VirtIO based devices. These devices have standard upstream Linux drivers that are very well supported, so there isn't really any platform-level work to be done.
In addition to the standard VirtIO-based devices, SiFive has devices that are part of the Freedom SoC platform. If your platform differs significantly from SiFive's Freedom platform, then you'll need some additional drivers in both Linux and BBL.
We maintain an out-of-tree version of the drivers we haven't cleaned up for upstream yet in freedom-u-sdk, which should give you a rough idea of how much work it is. Running make qemu in that repository will boot Linux on QEMU via BBL, and running make will show you how to flash an SD card image for the HiFive Unleashed board.
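For reference, such a QEMU invocation might look roughly like the sketch below; the machine options, device names, and Fedora image filename are assumptions, and the QEMU Wiki article mentioned above is the authoritative source.

    # Boot Linux+BBL on the "virt" board with 4 harts and VirtIO disk/network
    qemu-system-riscv64 -nographic -machine virt -smp 4 -m 2G \
        -kernel bbl \
        -append "console=ttyS0 ro root=/dev/vda" \
        -drive file=stage4-disk.img,format=raw,id=hd0 \
        -device virtio-blk-device,drive=hd0 \
        -netdev user,id=usernet -device virtio-net-device,netdev=usernet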

Does installing a driver increase the Linux kernel memory footprint?

I have read that the Linux kernel is a monolithic kernel and that it contains the drivers within it, in a single file.
For example, say I have two Linux kernels, 3.16.0.40 and 3.16.0.50.
I have currently booted the system with 3.16.0.40 and installed, for example, the nvidia driver.
Is the driver pushed into the kernel?
If so, and I select 3.16.0.50 from GRUB and boot the system,
can it access the driver installed under the previous kernel?
The Linux kernel is indeed monolithic, but it also borrows some microkernel ideas; one of those is support for loadable kernel modules. So the Linux kernel has two options for a driver:
the driver can be built in; such drivers reside inside the kernel image file, which is /boot/vmlinuz-$(uname -r)
the driver can be loadable; such drivers are separate files; look at /lib/modules/$(uname -r)/kernel/*
So in your case you are going to have two video drivers (two files) on your system, one for each kernel version, and only one of them will be used: the one for the kernel version you are running at the moment.
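A hedged way to check which case applies on a given system; the nvidia module name and the /boot/config-* path are distro-dependent assumptions.

    # Built in or a module? (=y means built in, =m means module, absent means not configured)
    grep -i nvidia /boot/config-$(uname -r)

    # Is there a loadable module file installed for this kernel version?
    find /lib/modules/$(uname -r) -name 'nvidia*.ko*'

    # Is the driver currently loaded into the running kernel?
    lsmod | grep -i nvidia

    # If it is a module, where does the file for the running kernel live?
    modinfo -n nvidia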

How are device driver development and Linux kernel programming related/different?

This might be a stupid question, but I am confused and Google couldn't help.
I know Linux is the kernel at the heart of many distros (Ubuntu, Mint). But when we say "Linux kernel programming", what exactly do we mean? Is it Bash scripting?
And how is it related to device driver development? (Do we mean that the hardware runs the Linux kernel and we do kernel programming to support peripherals, and that this, in general, is device driver development in relation to Linux?)
Linux kernel programming is something that involves kernel components, meaning kernel data structures and headers. A program that uses existing kernel features or enhances them is a kernel program, typically a kernel module. In a way, even Bash scripting could be called Linux kernel programming. A device driver, in broad terms, is essentially a set of interrupt handlers. That said, a device driver is a kernel program in itself, as it uses Linux kernel capabilities and is tied to a specific device or piece of hardware. So, in short, the relation between the two is that device driver development is a form of Linux kernel programming.
Basically you have two kinds of programs running on your computer: the kernel, which has access to the computer hardware, and "userland" programs, which ask the kernel to do low-level stuff (allocate memory, send data to the network, ...).
To do this, the kernel must know how to interact with a given piece of hardware. This is what we call "device drivers". In Linux, device drivers are implemented as kernel modules, and device driver programming is akin to kernel programming because you deal with low-level operations straight on the metal instead of higher-level operations that go through the kernel.
Bash scripting is programming a shell (Bash) to run userland programs that themselves use the kernel to do the actual work. Bash scripting is userland programming.
Device driver development is a subset of Linux kernel programming.
Device driver development is writing or modifying kernel modules that handle a device. A device driver is a special case of a kernel module.
Kernel modules are code that runs from within the kernel and performs privileged tasks.
Kernel modules are an integral part of Linux kernel programming. That is how device driver development and Linux kernel programming are related: the former is a part of the latter.
Also, device drivers are ultimately inserted into the kernel and work in a kernel context; that is, device drivers ultimately become a part of the kernel.
Hence driver development is a subset of Linux kernel programming.
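As a hedged illustration of that lifecycle, assuming an out-of-tree module directory containing a hypothetical hello.c and a standard Kbuild makefile:

    # Build the module against the headers of the currently running kernel
    make -C /lib/modules/$(uname -r)/build M=$PWD modules

    # Insert it: from this point its code runs in kernel context
    sudo insmod hello.ko

    # Confirm it is part of the live kernel and read its printk output
    lsmod | grep hello
    dmesg | tail

    # Remove it again
    sudo rmmod hello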
