I have a question regarding best practices for Yocto Project.
What I want to do is add the sources of a driver from GitHub to the kernel and rebuild the whole Yocto image, but I am not sure of the best way to do that. I can think of two possible options here.
Fork the kernel sources into my own repo, then add the driver sources, update the Makefile and Kconfig, and provide my own defconfig file. (This definitely works.)
The second thing that crosses my mind is to use the original kernel sources and just create a recipe that fetches the driver code into place (drivers/net/...), plus a patch that adds the driver to the Makefile and Kconfig and replaces the defconfig file, and then rebuild. (I am not sure about this; is it even possible to fetch driver sources into a specific place in the kernel source tree?)
So my question is whether the second way is possible, and whether it is common to do it this way.
Thinking about it again, though, maybe it is not possible: I have one recipe for the kernel, which fetches the kernel sources and then compiles them, so it may not be possible to have the kernel recipe fetch the kernel sources while another recipe fetches the driver sources and applies the patch, with the kernel only being compiled after that. Am I right, or should this be possible somehow?
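For reference, the second approach is usually done not with a separate recipe but with a `.bbappend` for the kernel recipe: extra `SRC_URI` entries are fetched alongside the kernel sources and patches are applied before compilation. A minimal sketch, assuming a `linux-yocto`-based kernel (the append name, file names and patch names are hypothetical):

```bitbake
# linux-yocto_%.bbappend -- hypothetical append for the kernel recipe
FILESEXTRAPATHS:prepend := "${THISDIR}/files:"

SRC_URI += " \
    file://0001-add-mydriver-to-Makefile-and-Kconfig.patch \
    file://mydriver.cfg \
"
```

Here the driver sources themselves can be carried inside the patch (or added as an extra `file://` tarball), and `mydriver.cfg` is a kernel configuration fragment enabling the driver, which is often preferable to replacing the whole defconfig.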
Thanks.
I'm trying to add a system call to my OS, and when I read the online tutorials, they always start with downloading and extracting kernel source code from the Internet. For example:
$ wget https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.4.56.tar.xz to download the .tar.xz file.
And $ tar -xvJf linux-4.4.56.tar.xz to extract the kernel source code.
My question is: why do we have to use another kernel source from the Internet? Can we add the new system call to the running OS and compile it directly?
I'm trying to add a system call to my OS and when I read the online tutorials, it always starts with downloading and extracting a kernel source code from the Internet.
Well, that's right. You need to modify the kernel source code in order to implement a new syscall.
why do we have to use another kernel source from the Internet?
It's not "another kernel source", it's just "a kernel source". You don't usually have the source code for your currently installed kernel already at hand.
Normally, most Linux distributions provide a binary package for the kernel itself (which is installed automatically), a package for its headers only (which can be used to compile new modules), and possibly a source package related to the binary package.
For example, on Ubuntu or Debian (assuming that you have enabled source packages) you should be able to get the current kernel source:
apt-get source linux-image-$(uname -r)
Since the tutorial author cannot possibly know which kernel version or which Linux distribution you are using, or even whether your distribution provides a kernel source package at all, they just tell you to download a kernel source package from the Linux kernel website. This also ensures you use the exact same version that is shown in the tutorial, avoiding any compatibility problems with newer/older kernel versions.
Furthermore, you usually don't want to play around with the kernel of the machine you are using, since if something bad happens, you can end up damaging your system. You usually want to use a virtual machine for experimenting.
Can we add the new system call to the running OS and compile it directly?
Not really: it is not possible to hot-patch a new syscall into a running kernel. Since you need to modify the source code, first of all you need to have the source. Secondly, you need to make whatever modifications are required and then compile the new kernel. Thirdly, you need to properly install the new kernel and reboot the machine.
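As an outline, that whole cycle on a Debian-like system might look roughly as follows. This is a sketch, not meant to be run verbatim: package names, config paths and install steps vary by distribution.

```
# 1. Get the source for the running kernel (requires deb-src entries)
apt-get source linux-image-$(uname -r)
cd linux-*/

# 2. Edit the syscall table and add your implementation, then reuse
#    the current configuration and build
cp /boot/config-$(uname -r) .config
make olddefconfig
make -j"$(nproc)"

# 3. Install the modules and the kernel, then reboot into it
sudo make modules_install
sudo make install
sudo reboot
```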
I'm learning about embedded systems, and I was able to compile and set up a SAM9X35-EK with Buildroot, building the bootstrap, U-Boot, the Linux kernel and the rootfs (Buildroot's basic root filesystem skeleton).
I have LOTS of questions, but one of the most important is:
Pre-question statements, for context:
I already have a provided JFFS2 image with an app inside that is made up of several NetBeans (C++) projects.
These projects use libraries that are built in (if selected in Buildroot's menu).
How does it work?
How do the rootfs and the NetBeans projects (their makefiles) connect to Linux packages?
What I mean is: how does the kernel manage the makefiles from the NetBeans projects?
i.e.: If I create a project that shows a picture on the screen, I add the needed packages to the rootfs and then flash this to the device. How does the kernel know how to read and run this app? What I have read since asking this question is that the kernel starts some script in the init.d folder, but I would like a more conceptual explanation of the interaction between the kernel and the rootfs.
Any explanation would help, because I don't understand exactly how it works. The application is a standalone application that is loaded when Linux starts (power-on); that's all it does: it runs and uses the hardware to go through its different functions.
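On the init.d remark: with Buildroot's default BusyBox init, the kernel itself only mounts the rootfs and executes /sbin/init (PID 1); init then reads /etc/inittab, which runs /etc/init.d/rcS, which executes the S??* scripts in /etc/init.d in order. Everything from there on is plain userspace. A hypothetical start script for an application (all names are made up):

```
#!/bin/sh
# /etc/init.d/S99myapp -- hypothetical Buildroot-style start script
case "$1" in
  start)
    printf "Starting myapp: "
    /usr/bin/myapp &          # launch the application in the background
    echo "OK"
    ;;
  stop)
    printf "Stopping myapp: "
    killall myapp
    echo "OK"
    ;;
  restart)
    "$0" stop
    "$0" start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
    ;;
esac
```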
Please feel free to use links or examples.
Thank you very Much.
EDIT: As stated in the comments, the question seems too broad to answer, so I'll leave the explanation of the problem and the questions as they are, because they haven't changed, but I have changed the title (it still doesn't seem great, but it's better than before) so that it matches them more closely.
What led me to the question
I want to compile OpenWRT for my board. At the moment I am compiling it for a BeagleBone Black, which is quite straightforward since there are tutorials available for that, but it got me thinking: how would I build it for a completely bare board? Like it or not, the BBB comes with U-Boot and a version of Linux (Angstrom, if I'm not mistaken), so when I build OpenWRT for it, many things may already have been taken care of for me.
I know that I first need to set up the board to boot from somewhere; then it must have the bootloader and finally the kernel (there is the SPL and all that, but OK, let's leave that aside for now).
Hypothetical system
Let's imagine I have hardware similar to the BeagleBone, except it has a DIP switch connected to the boot pins in order to select where the device boots from. Imagine I have set it to boot from Ethernet, which means that on startup a bootloader located in ROM will receive a binary file and store it in flash, all via TFTP.
The questions
At this point I imagine that the binary file given via TFTP is the bootloader, am I right?
So after that I'd need to give the bootloader the kernel?
Does this mean that it is a 2-step process? First load the bootloader and then the kernel?
Is it possible to compile both at the same time and load it into the microprocessor?
Does OpenWRT build u-boot as well or do I need to compile it separately? I know it downloads the kernel and compiles it.
How would I build this for production? Imagining that I have to build u-boot and openwrt separately, would I create a script that compiles both and then does the entire process of downloading it into the microprocessor?
Is it possible to pre-configure the kernel so that it doesn't need to be configured after the code is downloaded? I mean, for example, compile it with initialization scripts instead of connecting to the device and configuring this. Is it possible or do I have to connect to the board and configure it manually?
PS: Sorry for such basic questions, but it's my first time compiling the kernel for real, and I've only worked with microcontrollers and RTOSs at most
Let's try to answer the queries one by one
At this point I imagine that the binary file given via TFTP is the bootloader, am I right?
No, it should be the firmware (kernel + HLOS). TFTP only becomes available in U-Boot, i.e. after the SBL (secondary boot loader) has been loaded into memory.
So after that I'd need to give the bootloader the kernel?
The bootloader needs to be present in memory already; if required, it can then fetch the firmware over Ethernet. This can be done simply by changing the U-Boot environment (bootcmd), and it can also be configured at compile time.
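As a sketch of that bootcmd approach (the load addresses, file names and exact commands depend on the board and the U-Boot build):

```
# In the U-Boot shell: fetch kernel + DTB over TFTP and boot them
setenv serverip 192.168.1.10
setenv ipaddr 192.168.1.20
setenv bootcmd 'tftpboot 0x82000000 zImage; tftpboot 0x88000000 board.dtb; bootz 0x82000000 - 0x88000000'
saveenv
```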
Does this mean that it is a 2-step process? First load the bootloader and then the kernel?
Yes, the bootloader needs to be loaded first, but if you are designing a custom board, you can combine the images into one big file and then flash/load that file in one go.
Is it possible to compile both at the same time and load it into the microprocessor?
Does OpenWRT build u-boot as well or do I need to compile it separately? I know it downloads the kernel and compiles it.
Yes, OpenWRT is very flexible: it compiles U-Boot, the kernel and the userspace packages at once and creates the desired image (based on the user's configuration).
How would I build this for production? Imagining that I have to build u-boot and openwrt separately, would I create a script that compiles both and then does the entire process of downloading it into the microprocessor?
You can configure OpenWRT to generate the appropriate image (based on the flash layout and system requirements) and then simply flash that image in production.
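In outline, a reproducible production build from a saved configuration might look like this (the saved config path is hypothetical; the exact output directory depends on the selected target):

```
# Reproducible production build from a saved configuration
cp configs/myboard.config .config   # hypothetical saved config
make defconfig                      # expand it to a full .config
make -j"$(nproc)"                   # builds toolchain, U-Boot, kernel, rootfs
ls bin/targets/                     # flashable images end up under here
```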
Is it possible to pre-configure the kernel so that it doesn't need to be configured after the code is downloaded? I mean, for example, compile it with initialization scripts instead of connecting to the device and configuring this. Is it possible or do I have to connect to the board and configure it manually?
Yes: use make kernel_menuconfig to configure the kernel parameters at compile time.
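Beyond kernel options, OpenWRT also lets you bake runtime configuration into the image, so no manual setup is needed after flashing: files placed under files/ in the build tree are copied into the rootfs, and scripts under /etc/uci-defaults run once on first boot. A hypothetical example (the file name and settings are made up):

```
# files/etc/uci-defaults/99-custom -- runs once on first boot
uci set system.@system[0].hostname='myboard'
uci set network.lan.ipaddr='192.168.10.1'
uci commit
```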
Hope I have answered all the queries!
I am trying to build a minimal kernel under 1 MB with Buildroot. It is intended for a small board with QSPI memory and basic functionality: Ethernet, USB, SPI and some GPIOs, plus basic terminal access via SSH and UART.
My first question is whether it is even possible to reach this size by modifying the kernel .config via linux-menuconfig.
Also, is it possible to identify the redundant parts without deep knowledge of the kernel architecture, and exclude them from compilation?
If someone could point me in a good direction on how to solve this problem, or even name some tools and approaches, that would be very helpful.
Thank you!
If you have a working Buildroot for your board, then it's better to continue working with it; the technique for disabling kernel options is the same either way. In the article, the author reached a ~0.7 MB uImage with a lot of functionality lost (p. 40). He started with a minimal (bare) config (p. 27) and added blocks of configs. So instead of identifying the redundant parts, you can build the smallest uImage you can boot, then add more options to it: ext2, serial, and so on. This work actually requires a lot of testing, and you will probably break dependencies along the way.
The kernel configs (the working one and the new one) can be compared using diff -Naur, so you can see what changed.
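As a toy sketch of that comparison, with two made-up config files standing in for the real working and new .config:

```shell
# Two toy kernel configs standing in for the working and the new one
printf 'CONFIG_NET=y\nCONFIG_USB=y\n' > config.working
printf 'CONFIG_NET=y\n# CONFIG_USB is not set\n' > config.new

# A unified diff shows exactly which options changed between the two
diff -Naur config.working config.new || true   # diff exits 1 when files differ
```

Each removed line (prefixed with -) is an option the new config dropped, and each added line (prefixed with +) is one it changed or disabled.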
Offtopic:
It looks like Yocto is officially supported by Altera; here are instructions on how to build altera-image-minimal. If you are fine with its size, then use it and don't spend time minimizing the uImage. If you need extra packages installed into it, you can easily extend it.
And here are instructions for building Angstrom (a Yocto-based distribution). You can create your custom image based on console-image-minimal.
I use Angstrom in production. I must say, it was really hard to use the first time.
Whether or not you build the kernel with buildroot is not really relevant. The important thing is to configure it so it fits in 1MB. When you build the kernel from buildroot, you can do that with make linux-menuconfig, as you mention.
That said, getting a kernel under 1 MB will be quite hard. Biff once did this for an x86-based platform, the Bifferboard, but that was without networking or USB.
You can refer to the kernel size tuning guide, which has links to some patches to reduce the size. But it's not been updated in a couple of years.
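To give a feel for the kind of options involved, here is a fragment of the sort of size-focused settings such guides suggest. This is illustrative, not a complete configuration; exact symbols vary by kernel version:

```
# Trade speed and debuggability for size
CONFIG_CC_OPTIMIZE_FOR_SIZE=y
# CONFIG_KALLSYMS is not set
# CONFIG_DEBUG_KERNEL is not set
# CONFIG_MODULES is not set
# CONFIG_PRINTK is not set
```

Note that disabling CONFIG_PRINTK saves a lot of space but removes all kernel log output, which makes board bring-up much harder, so it is usually one of the last things to drop.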
When building a kernel driver out of tree, I run make like this in the driver's directory, where KERNELDIR is either the path to the kernel source or to the headers:
make -C $(KERNELDIR) M=$(PWD) modules
When trying to build the headers myself using:
make headers_install ARCH=i386 INSTALL_HDR_PATH=$(HEADERSDIR)
I find the export unsuitable to build modules against (without a full kernel source tree): several files and folders seem to be missing, such as a Makefile, scripts/, include/generated/autoconf.h, include/config/auto.conf, etc.
Debian does things in a usable way, as described in rules.real, although it does more than what is described in Documentation/kbuild/headers_install.txt, which does not seem to be the "standard" way.
In short: how do I correctly export Linux headers so that I can build external modules against them?
headers_install is meant to export a set of header files suitable for use from a userspace point of view: it is the userspace-exposed API of the kernel. Let's say you create a wonderful new ioctl with a custom data structure. This is the kind of information you want userspace to know, so that userspace programs can use your wonderful new ioctl.
But everything that is not visible from userspace, that is, everything "private" to the kernel (in other words, the internal API), is not exposed to userspace.
So to build an out-of-tree module, you need either a fully configured source tree, or the kernel headers as packaged by your distro. Look for the linux-headers or linux-kernel-headers package on Ubuntu / Debian, for example.
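For completeness, this is the kind of Makefile such an out-of-tree module uses against the distro headers (the module name hello is hypothetical):

```makefile
# Minimal kbuild Makefile for an out-of-tree module built from hello.c
obj-m := hello.o

# Default to the headers of the running kernel, as installed by the distro
KERNELDIR ?= /lib/modules/$(shell uname -r)/build

all:
	$(MAKE) -C $(KERNELDIR) M=$(CURDIR) modules

clean:
	$(MAKE) -C $(KERNELDIR) M=$(CURDIR) clean
```

The -C / M= invocation hands control to the kernel's own build system, which is why the configured tree (or the distro's headers package, which includes the Makefile, scripts/ and auto.conf) must be present.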
I believe the headers_install make target is meant for producing Linux headers for building a C library and toolchain, not for enabling out-of-tree kernel module builds without fully configured kernel source code.
In fact, I'm guessing that building out-of-tree kernel modules without the full kernel source code is not supported upstream and is in fact a "hack" created by distributions.