I have read that the Linux kernel is a monolithic kernel and that it contains the drivers within it, in a single file.
For example, I have two Linux kernels, 3.16.0.40 and 3.16.0.50.
Currently I have booted the system with 3.16.0.40 and installed, for example, the NVIDIA driver.
Is the driver pushed into the kernel?
If so, and I then select 3.16.0.50 from GRUB and boot the system,
can it access the driver installed under the previous kernel?
The Linux kernel is indeed monolithic, but it also borrows some microkernel features. One of those is support for loadable kernel modules. So the Linux kernel has two options for a driver:
the driver can be built-in; such drivers reside inside the kernel image file, which is /boot/vmlinuz-$(uname -r)
the driver can be loadable; such drivers are separate files; look at /lib/modules/$(uname -r)/kernel/*
So in your case you are going to have two video driver files on your system, one for each kernel version, and only one of them will be used: the one matching the kernel version you are running at the moment.
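For illustration, a loadable module is just a small piece of C code compiled against a particular kernel version. A minimal sketch (a hypothetical hello.c; once built, the resulting hello.ko would live under /lib/modules/$(uname -r)/) could look like this:

    #include <linux/init.h>
    #include <linux/module.h>

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Minimal example of a loadable kernel module");

    /* Called when the module is loaded (e.g. via insmod or modprobe). */
    static int __init hello_init(void)
    {
        pr_info("hello: module loaded\n");
        return 0;
    }

    /* Called when the module is removed (e.g. via rmmod). */
    static void __exit hello_exit(void)
    {
        pr_info("hello: module unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);

Because a module is built against one specific kernel version, a driver like NVIDIA's ends up as a separate .ko file per installed kernel, which is exactly why each of your two kernels needs its own copy.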
I am building a Linux kernel for a VMware machine for learning kernel development (understanding and using basic kernel APIs for writing kernel code). What configuration options can I safely turn off if I want a minimal kernel?
We are developing a multi-core processor with the RISC-V architecture.
We have already ported Linux to a single-core RISC-V processor, and it is working on our own FPGA-based board with a BusyBox rootfs.
I now want to port Linux to the multi-core RISC-V processor.
My questions are:
Does the gnu-riscv-gcc toolchain available now support multi-core?
Does the Spike simulator available now support multi-core?
Should I make any changes to the bbl bootloader (Berkeley Boot Loader) to support multi-core?
What changes should I make to my single-core Linux kernel to support multi-core?
The current RISC-V ecosystem already supports SMP Linux.
No changes to the compiler are required for multicore.
Spike can simulate multiple cores when the -p flag is used.
BBL supports multicore.
Before building Linux, configure it to support SMP.
Any hiccups are probably due to the toolchain being out of sync with the latest privileged-spec changes. Last fall, users successfully built and ran multicore Linux on RISC-V.
This is all expected to work out of the box. My standard testing flow for Linux and QEMU pull requests is to boot a Fedora root filesystem on QEMU via Linux+BBL. Instructions can be found in the QEMU wiki article about RISC-V. This boots our "virt" board, which uses VirtIO-based devices. These devices have standard upstream Linux drivers that are very well supported, so there isn't really any platform-level work to be done.
In addition to the standard VirtIO-based devices, SiFive has devices that are part of the Freedom SoC platform. If your platform differs significantly from SiFive's Freedom platform, then you'll need some additional drivers in both Linux and BBL.
We maintain an out-of-tree version of the drivers we haven't cleaned up for upstream yet in freedom-u-sdk, which should give you a rough idea of how much work it is. Running make qemu in that repository will boot Linux on QEMU via BBL, and running make will show you how to flash an SD card image for the HiFive Unleashed board.
I was checking the project Embedded ECG data acquisition system on Instructables, and it mentions a TODO:
Combining the OS and bare-bone firmware
UNDER CONSTRUCTION
** Since the bootloader only loads one firmware to the Core,
I need to modify the ELF file, to have Linux and bare-bone Core at the same time **
It seems to me an interesting approach for running a full-featured Linux and a critical real-time OS on one board (for example a Raspberry Pi). Is it really possible? I have heard that Linux can be set up not to use some of the cores. But I suppose that Linux uses virtual memory while bare-metal firmware usually does not. Can memory be shared between these OSes? What about interrupts? Can the two OSes handle interrupts separately? Can the boot loader load these two systems onto the cores at once? I can imagine that one thread in the boot loader would jump to the address of the bare-metal OS. Is that the correct approach?
Yes, it is possible, even if the full setup is not straightforward.
A couple of examples:
Xilinx released a white paper explaining how to run Linux + FreeRTOS on a dual-core Zynq ARM
Evidence explained how to run Linux + Erika Enterprise RTOS on a dual-core Freescale imx6 ARM
Those examples are based on system partitioning by hard-coding the assignment of the different cores to different OSs.
If your system is capable of hardware-assisted virtualization, you can use a hypervisor for making (and enforcing) such partitioning. You can, for example, use Siemens' Jailhouse, KVM or Xen.
Kind of. This is what people already do to some extent with the network stack / drivers. For example, the IsoStack idea works in a similar way. There is a project that actually implements this on Linux by dedicating cores to network cards, but my google-fu is failing me.
This might be a stupid question, but I am confused and Google couldn't help.
I know Linux is the kernel, which is the heart of many distros (Ubuntu, Mint). But when we say "Linux kernel programming", what exactly do we mean? Is it Bash scripting?
And how is it related to device driver development? (Do we mean that the hardware is running the Linux kernel and we do kernel programming to support peripherals, and that this, in general, is what device driver development means in relation to Linux?)
Linux kernel programming is programming that involves kernel components, meaning kernel data structures and headers. A program that uses existing kernel features, or enhances them, is a kernel program, typically a kernel module. In a very loose sense even Bash scripting could be called Linux kernel programming. A device driver, in broad terms, is essentially a set of interrupt handlers. That said, a device driver is a kernel program in itself, as it uses Linux kernel capabilities and is tied to a particular device or piece of hardware. So, in short, the relation between the two is that device driver development is a form of Linux kernel programming.
Basically you have two kinds of programs running on your computer: the kernel, which has access to the computer hardware, and "userland" programs, which ask the kernel to do low-level stuff (allocate memory, send data to the network, ...).
To do this, the kernel must know how to interact with each given piece of hardware. This is what we call "device drivers". In Linux, device drivers are implemented as kernel modules, and device driver programming is akin to kernel programming because you deal with low-level operations straight on the metal instead of higher-level operations that go through the kernel.
Bash scripting is programming a shell (Bash) to run userland programs that themselves use the kernel to do the actual work. Bash scripting is userland programming.
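To make the userland side concrete, here is a tiny sketch of an ordinary user-space C program (the file name is just an example): every open/write/close call below is a system call, i.e. a request to the kernel, which in turn talks to the disk through its drivers.

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        /* open() and write() are system calls: the program asks the kernel
         * to do the low-level work of talking to the storage driver. */
        int fd = open("/tmp/example.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return 1;

        write(fd, "hello from userland\n", 20);
        close(fd);
        return 0;
    }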
Device driver development is a subset of Linux kernel programming.
Device driver development is writing or modifying kernel modules that will handle a device. A device driver is a special case of a kernel module.
Kernel modules are code that runs within the kernel and performs privileged tasks.
Kernel modules are an integral part of Linux kernel programming. That is how device driver development and Linux kernel programming are related. The former is a part of the latter.
Also, device drivers will ultimately be inserted into the kernel, and will work in a kernel context. That is, device drivers ultimately become a part of the kernel.
Hence driver development is a subset of Linux kernel programming.
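To make that relationship concrete, here is a rough sketch of a trivial character-device driver (a hypothetical /dev/demo that always returns "hi" when read); it is an ordinary kernel module whose job happens to be exposing a device:

    #include <linux/module.h>
    #include <linux/fs.h>
    #include <linux/miscdevice.h>

    MODULE_LICENSE("GPL");

    /* read() handler: every read from /dev/demo returns "hi\n". */
    static ssize_t demo_read(struct file *file, char __user *buf,
                             size_t count, loff_t *ppos)
    {
        return simple_read_from_buffer(buf, count, ppos, "hi\n", 3);
    }

    static const struct file_operations demo_fops = {
        .owner = THIS_MODULE,
        .read  = demo_read,
    };

    /* Describes the device node to be registered as /dev/demo. */
    static struct miscdevice demo_dev = {
        .minor = MISC_DYNAMIC_MINOR,
        .name  = "demo",
        .fops  = &demo_fops,
    };

    static int __init demo_init(void)
    {
        return misc_register(&demo_dev);
    }

    static void __exit demo_exit(void)
    {
        misc_deregister(&demo_dev);
    }

    module_init(demo_init);
    module_exit(demo_exit);

The module half (module_init/module_exit) is generic kernel programming; the driver half (the file_operations and the registered device) is the device-specific part layered on top of it.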
filp_open allows us to open a file in the file system. But is it safe to use from kernel space? If it is used, what needs to be taken care of? Will this be supported in future versions of the Linux kernel as well?
I am currently using Linux kernel version 2.6.28.
A lot of drivers use the filp_open() function; it is pretty much a helper for opening a file from kernel space. There is no reason to assume it won't continue to be supported. Even the kernel's filesystem subsystem uses filp_open().
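For illustration, here is a rough sketch of using filp_open() from kernel code on a reasonably recent kernel; the path is just an example, and the read helper differs on older kernels such as 2.6.28 (where vfs_read() together with set_fs() was the usual pattern), so treat the details as assumptions:

    #include <linux/fs.h>
    #include <linux/err.h>
    #include <linux/kernel.h>

    /* Opens a file from kernel space and logs its first bytes. */
    static int read_example_file(void)
    {
        struct file *filp;
        char buf[64];
        loff_t pos = 0;
        ssize_t n;

        filp = filp_open("/etc/example.conf", O_RDONLY, 0);
        if (IS_ERR(filp))
            return PTR_ERR(filp);   /* filp_open returns an ERR_PTR on failure */

        n = kernel_read(filp, buf, sizeof(buf) - 1, &pos);
        if (n >= 0) {
            buf[n] = '\0';
            pr_info("read %zd bytes: %s\n", n, buf);
        }

        filp_close(filp, NULL);
        return n < 0 ? (int)n : 0;
    }

The main things to take care of are checking the returned pointer with IS_ERR() before using it, always pairing the open with filp_close(), and being aware that opening files from arbitrary kernel context (for example from interrupt context or while holding locks) is generally discouraged.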