What is the difference between images and recipes in Yocto?

I am trying to learn Yocto. I have read the documentation on the Yocto Project website, but I am still confused: what is the difference between images and recipes?

A recipe is the configuration for a single module or program: it tells the build system how to build the libraries or programs we want (such as an SSH service or the nano editor) and install them into the OS. An image is the resulting OS image, ready for deployment onto, for example, a USB stick or NAND flash.

Recipe - every piece of software or library you want must be described for the Yocto Project so that BitBake can process it: download, decompress, patch if necessary, compile, and package it (.deb, .ipk, .rpm).
Image - lists all the recipes you want installed (openssh, picocom, python3, ...), and can also automate configuration such as the hostname, a static IP, and startup scripts. The image is the end result: bootloader + kernel + rootfs (your installed applications and programs), along with many other OS customizations such as startup behavior and the choice of file system.

Recipes are the most common file type in a Yocto build description. They contain instructions on how to configure, compile, and deploy a given piece of software. Recipes also contain the location of the source code. This location can either be a static release archive or a reference to a Git repository. Custom modifications to the sources themselves or to the build process can be provided in the form of patches. To minimize repetition for common tasks in recipes, such behavior is encapsulated in recipe class files, from which recipes can inherit.
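As an illustrative sketch only (the project name, URL, and checksum placeholders below are hypothetical, not from the answer), a minimal recipe combining these ingredients might look like:

SUMMARY = "Example hello application"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://COPYING;md5=<placeholder>"

# Source location: a static release archive plus a custom patch
SRC_URI = "https://example.com/hello-1.0.tar.gz \
           file://fix-build.patch"
SRC_URI[sha256sum] = "<placeholder>"

# Recipe class encapsulating the configure/compile/install behavior
inherit autotools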
An image lists all the packages that have to be built and installed into the final root file system; the build system takes care that any known dependency is installed as well. The end goal for anyone using the Yocto Project is to create a Linux distribution customized to match their product's requirements. Images are a central concept within the Yocto Project and essential to the definition of a Linux distribution.
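A minimal image recipe, again only as a sketch (the image name my-image is hypothetical; core-image is the stock image class shipped with openembedded-core):

SUMMARY = "Small custom image for my product"
LICENSE = "MIT"

# Pull in the stock image-building logic
inherit core-image

# Packages to install into the root file system; known dependencies follow automatically
IMAGE_INSTALL += "openssh picocom python3"

Building it with bitbake my-image then produces the root file system and the deployable image artifacts.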

Related

Linux kernel getting rebuilt when moving the source folder

I'm trying to optimize the way our system is built, and one of the problems I'm facing is that the Linux kernel gets rebuilt every time the build system recompiles.
There is a customized cache mechanism in place that allows our developers to patch the root fs at different points of the build process. Some applications are copied in just before Buildroot generates the target Linux image (vmlinux, which includes the initramfs), by updating the target root fs.
To avoid recompiling Buildroot we have a system that copies all the object files from a previously compiled folder into a local folder and then invokes make in the latter. It works fine for all packages in Buildroot BUT the Linux kernel, which gets rebuilt every time.
After a long analysis of the makefile logs, I think this is happening because absolute paths are present in some of the kernel dependencies, which forces some generated files to be regenerated, thus recompiling almost everything (a quick check for this is sketched after the questions below).
I have multiple tracks to explore starting from there, but I can't find any more information on any of them:
Can I configure/compile the Linux kernel so that it uses only relative paths?
If not, can I patch those paths safely?
If not, can I tell Buildroot to use a previously compiled vmlinux image to build its final package?
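A hedged aside on the absolute-path diagnosis above (not from the original thread; /old/build/path is a placeholder for your previous source folder):

# List kernel .cmd dependency files that still mention the old path;
# these are the files that force regeneration after the tree is moved.
grep -rl "/old/build/path" output/build/linux-*/ --include='*.cmd' | head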

How to correctly "make install" binaries and data after compiling in Linux?

After running make on the sources, I have a compiled executable and a data directory with images for it. What should I do in the "make install" phase to install these files correctly on a Linux system? And how can the application then find the installed data (in the case where the binary and the data are placed in different directories)?
Are there any standards for this?
There are many ways to install packages on a Linux or Unix system, much like on any other operating system. The normal method of installing software is through your distribution's package manager. Package managers differ between distributions, but in general they take a package (a file containing binaries, source code, or other files required for the piece of software to work) and place its contents into the corresponding locations as defined by the Filesystem Hierarchy Standard (FHS).

When you run "make install" you are bypassing the package manager and placing the binaries into that hierarchy directly, making it nearly impossible for the package manager to handle or account for that program's existence. This is not a good thing for anyone, as it is hard to keep a system secure or stable with many unknown files placed throughout it. If you want to install something manually, please take a look at the Filesystem Hierarchy Standard and place the files under the appropriate folder: either under /opt (with a symlink in an area covered by your PATH variable) or under /usr/local.
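As a sketch of the conventional approach the question asks about (the program name hello and its data files are hypothetical assumptions), an install target normally honors PREFIX and DESTDIR, and the data location is compiled into the binary so the program can find its files afterwards:

# Illustrative Makefile fragment (recipe lines must be indented with a tab)
PREFIX ?= /usr/local
BINDIR  = $(PREFIX)/bin
DATADIR = $(PREFIX)/share/hello

# Tell the program where its data will live
CFLAGS += -DDATADIR='"$(DATADIR)"'

install: hello
	install -d $(DESTDIR)$(BINDIR) $(DESTDIR)$(DATADIR)
	install -m 0755 hello $(DESTDIR)$(BINDIR)/hello
	install -m 0644 data/*.png $(DESTDIR)$(DATADIR)/

Running make DESTDIR=/tmp/stage install lets a packager stage the files without touching the live system, which is exactly what distribution package managers rely on.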

Yocto and Linux

I am following the instructions here:
http://www.rocketboards.org/foswiki/Documentation/AlteraSoCDevelopmentBoardYoctoGettingStarted
I run this command
bitbake virtual/kernel
Everything works fine except it does not create a socfpga_cyclone5.dtb
I run this command, which should be the same
bitbake altera-image
And I get the error
ERROR: Multiple .bb files are due to be built which each provide virtual/kernel (/home/bobo/yocto/meta-altera/recipes-kernel/linux/linux-altera_3.11.bb /home/bobo/yocto/meta-altera/recipes-kernel/linux/linux-altera-dist.bb).
This usually means one provides something the other doesn't and should.
Does anyone know how to create that .dtb file or fix the second command? Up to that point I had no errors.
Ideally your .dtb file should be coming from the Altera software flow through Qsys, and that is the one you should use, rather than the one created by the Yocto build system.
The Yocto build system will not be aware of the FPGA design and hence that .dtb won't be useful.
The error you're getting is most likely due to conflicting recipe files. Sometimes a target has multiple providers; a common example is "virtual/kernel", which is provided by every kernel recipe. Each machine usually selects the best kernel provider with a line similar to the following in its machine configuration file, which should be somewhere in
poky/meta-altera/conf/machine/your-machine.conf:
PREFERRED_PROVIDER_virtual/kernel = "linux-altera"
PREFERRED_VERSION_virtual/kernel = "3.11"
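A hedged aside, not part of the original answer: the same preference can also be set globally in the build directory's conf/local.conf if you would rather not edit the machine file:

PREFERRED_PROVIDER_virtual/kernel = "linux-altera"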

Open Embedded startup difficulties

So, I am currently trying to get a handle on OpenEmbedded to build for an i.MX53 platform, but I have some difficulties understanding the main outline of the OE concept, as well as how the folder structure should look, to gain an overview.
So, I was hoping someone could summarize in a few words why one should not just use the make command in the kernel root.
More importantly (for me), I would like to know what the folder structure should look like, with oe-core and the meta-fsl-arm layer set up for an i.MX53 QSB.
Which file am I supposed to run with bitbake to get a custom image for my device?
You should start by building some BitBake example recipes; if you have success with this, you can move forward to building your own images. Look for the configuration scripts for Angstrom; they will help you set up things like the architecture and deploy platform. After all of this, you should put your custom image in your OpenEmbedded images folder and execute:
bitbake my_image
Start with this link. It is a comprehensive study of the Yocto/OpenEmbedded project. Yocto is a ready-to-go subset that is known to work and is a good place to start.
This link shows you the directory hierarchy.
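As a rough sketch of what such a checkout often looks like (everything beyond oe-core, bitbake, and meta-fsl-arm is an illustrative assumption):

sources/
  openembedded-core/    # oe-core: core metadata, classes, and base recipes
  bitbake/              # the build tool itself
  meta-fsl-arm/         # BSP layer for Freescale i.MX boards
  meta-your-layer/      # your own layer with custom recipes and images
build/
  conf/
    bblayers.conf       # lists the layers above
    local.conf          # MACHINE selection (e.g. "imx53qsb"), parallelism, etc.

With that in place, bitbake your-image (a custom image recipe in meta-your-layer) builds an image for the selected machine.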

Distributing source files with an open source app

I have written an open source (GPL) application for Linux and OSX and now wish to distribute it. Is it normal to distribute the source code along with the binaries by default, or just provide a link to where it can be obtained?
If I include the source files, where is the normal location for writing them on the users system for Linux and OSX (I thought /usr/local/src but on my Ubuntu machine, supposedly chock-full of open source apps, this directory is empty).
It is usual to distribute the sources and binaries separately. Binaries would normally be distributed in distro-specific package formats whilst sources would be a simple .tar.gz containing a project folder. The user could unpack it to /usr/local/src if they wanted but it should build anywhere. It's not up to your program to drop its sources in any particular location.
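A minimal sketch of producing such a tarball, assuming the project lives in Git (the name myapp and the tag v1.0 are hypothetical):

# Creates myapp-1.0.tar.gz whose contents unpack into a myapp-1.0/ folder
git archive --format=tar.gz --prefix=myapp-1.0/ -o myapp-1.0.tar.gz v1.0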
I thought /usr/local/src but on my Ubuntu machine, supposedly chock-full of open source apps, this directory is empty
It will be empty if you are only using the Ubuntu repos. The OS is in charge of /usr and will drop any sources you install into /usr/src. But /usr/local is left for you to play with; that's where you install stuff that the distro doesn't provide.
About /usr/local/src
/usr/local and any subdirectories are always going to be empty on your machine unless YOU have specifically put something in there. It's a section of the filesystem that is reserved for user-installed software for that specific machine. Ubuntu (or any distribution) is not ever supposed to touch it.
Your distro will have separate places for its own source code, if any. Most Ubuntu installations won't need source code anyway (though you can download it if you want to), but if they do it'll go somewhere like /usr/src. But if you want to place your own source code somewhere and don't want your distro to mess with it, then just:
If it's just for developing/compiling in your own user account, you can just put it somewhere in your home directory.
If it's a piece of software you'll be installing on the system, /usr/local/src is the suggested spot, and your distro won't mess with it there.
FHS is the standard which says where in the filesystems things go, and includes distinctions such as the ones I've discussed above.
Your software should be able to compile no matter which directory it's in because, as you can see, the right location can vary.
It's worth looking at a few projects on SourceForge (http://www.sf.net). As mentioned by bobince, it's normal to distribute binaries and source separately. It's certainly kind to users not to require compilation, so they can just download and run.
