I'm a newbie with the Yocto Project. I'm managing several projects, each of which has two versions: development/debug and field/industrialization. While working with the build system, I've noticed the following recurrent scenario.
Let's assume the workspace is clean, a fresh build:

1. Start from that clean workspace.
2. Launch bitbake for a minimal image with a certain Linux kernel device tree and defconfig. Bitbake takes some time and the output files are created.
3. Now change parameters in the previously mentioned device tree and defconfig (imagine new peripherals are added). Relaunch bitbake; output files are created for this new compilation.
4. Now, here comes the trick. Reset the device tree and defconfig files to the settings used in step 2. Relaunch bitbake and it will be nearly instant. The output files are replaced by the ones created in step 2.
So, I know this is possible thanks to bitbake and its use of the sstate cache, or at least that's what I suppose. I've been googling around for a while but the information isn't very clear. How exactly does this work? Is some kind of signature created at compilation time from the inputs (the config files) and stored alongside the compiled output? I'm concerned about this because I really need to trust that what I'm sending to the field is precisely the correct compilation and not the unsafe development version.
And related to this, what is the difference between launching bitbake -c cleanall and hard-deleting the deploy and sstate-cache directories?
Thanks in advance.
The information is available in the official manual:
https://www.yoctoproject.org/docs/3.0/overview-manual/overview-manual.html#shared-state-cache
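In short, yes: BitBake computes a checksum ("signature") for every task from its inputs - the relevant recipe variables plus the signatures of the tasks it depends on - and stores that task's output in the sstate cache under that checksum. When the inputs match a previous run, the output is restored from the cache instead of being rebuilt, which is why reverting the device tree and defconfig makes the build nearly instant. You can inspect the signatures yourself; a quick sketch, assuming the kernel recipe is linux-yocto (adjust recipe and task names for your BSP):

    # dump signature data for a recipe's tasks without actually running them
    bitbake -S none virtual/kernel

    # show which inputs differ between the last two signatures of a task
    bitbake-diffsigs -t linux-yocto do_compile

    # dump a single signature file found under sstate-cache/ or tmp/stamps/
    bitbake-dumpsig <path-to-siginfo-file>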
I'm packaging an application I have written into an AppImage so that it can be delivered to Linux users.
One of the key features of the GUI toolkit I'm using is that it is small and lightweight, allowing me to compile a build of around 6 MB that is statically linked to the GUI library.

However, after building the AppImage - where I do what the instructions say and exercise all the functionality (which basically only involves using file browser dialogues to load files) - it generates an absolutely enormous AppImage of around 200 MB!

I know that AppImages are supposed to be a "little bit" big, but this is completely mad as a proposed solution for portability when the natively compiled binary, including the statically linked GUI toolkit, is only 6 MB.

I'm not convinced at all that I need all of that 200 MB. A very similar piece of software to mine, but one that additionally uses Qt (which is pretty bloated in comparison), is only about 30 MB. I actually suspect that appimage-builder is doing something very wrong: I think it is listing the files in the directory I browse to with the file dialogue as dependencies (and they are big files). I have no other real explanation. But if so, how do I stop it doing that?
Why is mine so big? What can I do about it?
For the record, this is the method I am using to build my AppImage:
Building my binary separately
Running appimage-builder --generate and completing the form
Running appimage-builder --recipe AppImageBuilder.yml --skip-tests
Edit: Removing the obviously unneeded files that were being packaged has reduced the size of the AppImage to 140 MB, but this is still almost 5 times bigger than equivalent AppImages I've seen. Are there some tricks/options I'm not aware of?
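For anyone trying to track down what is taking up the space, one option is to unpack the image and look at the largest entries. This assumes the standard type-2 AppImage runtime (which supports --appimage-extract); the file name is a placeholder:

    ./MyApp-x86_64.AppImage --appimage-extract     # unpacks the payload into ./squashfs-root
    du -ah squashfs-root | sort -rh | head -n 30   # list the 30 largest files and directories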
I got started with AppImage in the last few days and faced the same problem.

In short: check your app's dependencies by every means available, and configure the recipe to include only the concrete dependencies, avoiding the inclusion of any theme/icon/etc. packages that are not really used :)
My case is a small app written in Dart (with Flutter). The built project itself weighs about 22 MB (du -sh . in the output directory). My host OS is Linux Mint (Cinnamon).

The first time I ran appimage-builder --generate, it generated a "recipe" with 17 packages to be installed and a bunch of libraries to be copied from /lib/x86_64-linux-gnu/. When I generated an AppImage from this recipe, the result was about 105 MB, which is extremely large in my opinion for a small app.

My first experiment was to clean up the included files section, since I assumed all the necessary libraries would be installed from apt. I referred to a few configs from the internet in which only a few libraries were marked for inclusion and there was an exclude section containing some DE-related files (themes, fonts, icons, etc.).

Using that I got a result of about 50 MB (which is still quite large).
My next experiments came from this issue: https://github.com/AppImageCrafters/appimage-builder/issues/130#issuecomment-843288012

In short: after generating an AppImage, a .bundle.yml file appears inside the AppDir folder listing the deployed libraries, and the advice is to try excluding things from that. It may be good enough advice, but it takes too long to check, for each package/library, whether removing it breaks the resulting AppImage, even with just the official appimage-builder tests (docker containers). I got more broken results than any sane size reduction.

My next experiment was to reduce the dependencies installed from the package manager and use files from the host system instead. I deleted the AppDir and appimage-builder-cache folders and regenerated the recipe. Then I commented out all the packages to be installed from the package manager and left only the included files. The result was a failure because one package was still needed, but after adding it back I got an AppImage of 36 MB. That sounds much better than the starting 105 MB or even the previous result of 50 MB.

Here I got a small "boost": a Flutter project is built into AOT binaries, without a runtime. So I checked the output of ldd for my app and mapped the list of required libraries against the list of library files detected by appimage-builder. Some of them matched, some were not in the ldd output, and some were in the ldd output but had not been detected by appimage-builder. I added everything undetected and removed everything unused. My final result is 26 MB, and it passed all the appimage-builder tests (running in docker images of Fedora, CentOS, Debian, Ubuntu and Arch).
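A rough sketch of that comparison (the binary path and AppDir layout are specific to my project, so treat them as placeholders):

    # libraries the AOT-compiled binary actually links against
    ldd AppDir/usr/bin/my_app | awk '/=>/ {print $1}' | sed 's/\.so.*/.so/' | sort -u > needed.txt

    # libraries appimage-builder actually bundled into the AppDir
    find AppDir -name 'lib*.so*' -printf '%f\n' | sed 's/\.so.*/.so/' | sort -u > bundled.txt

    # '<' lines are needed but not bundled; '>' lines are bundled but possibly unused
    diff needed.txt bundled.txt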
I understand this is not great for continuous building, because it requires re-checking the used libraries and adapting the config whenever something changes, but for infrequent builds I think it strikes a reasonable balance.
I am creating a simple Linux kernel with Buildroot and adding a small driver I've written myself. I created the Config.in file and drivername.mk so that the driver can be selected in make menuconfig successfully.

When executing make to build the image, the compilation goes correctly until my driver starts to compile. It appears to compile and create the image fine, but I get lots of warnings saying that various files in ./lib/gcc/arm-buildroot-linux-uclibcgnueabihf/ are touched by more than one package: [u'host-gcc-initial', u'host-gcc-final'].

Can anyone explain a bit about this issue and what is causing it? Do you need any more info to work out what is happening? Is it safe to ignore the warnings?
Thanks beforehand
Actually, doing a search on 'touched by more than one package', I found http://lists.busybox.net/pipermail/buildroot/2017-October/205602.html, where we find that this warning can safely be ignored if you're not doing a parallel build and aren't a kernel maintainer.
That said, if you're submitting code for inclusion in the Linux kernel, please be a good citizen and make sure you identify all of the things your code is dependent upon. (I'm not actually an active kernel hacker, so I don't know what method they're using for this right now.)
The basic idea is that compiling things involves a bunch of steps that need to be done in a logical order. In a small project, we simply specify the dependencies we know about, because we coded them in ourselves. But with a project the size of the kernel, you can guarantee that not everyone does this. Some people only specify a dependency when it's needed for things to build properly; if the default order happens to work, it can be years before someone figures out that a dependency was missing, causing grief when they try to update just the one thing that was the missing dependency and the other code doesn't get rebuilt as a result.
When you're building in parallel, on the other hand, it becomes a lot more complicated. Now you really do need every dependency specified, because there is no longer any inherent, dependable order. Some people will still build serially, others will use two processing threads, I'll use 8, and I've worked in groups that would happily use 30 because they're on a 32-processor machine and don't really need all of it during off hours. Suddenly the file you needed from a directory that normally got processed 30 directories before yours is being built at the same time as your file that needs it, because you didn't list the dependency, and everything in those 30 directories that hasn't already been processed is waiting on dependencies that haven't finished yet.
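A toy illustration of that failure mode (not Buildroot or the kernel build system itself, just plain make with a hypothetical generated header): app needs generated.h but never declares it, so a serial build happens to work while a parallel one can fail.

    # write a tiny Makefile; printf is used so the recipe lines get real TAB characters
    printf 'all: generated.h app\n\ngenerated.h:\n\tsleep 1; echo "#define N 1" > generated.h\n\napp: app.c\n\tcc -o app app.c\n' > Makefile
    printf '#include "generated.h"\nint main(void) { return N; }\n' > app.c

    make           # serial: generated.h happens to be created before app, so this works
    rm -f app generated.h
    make -j8       # parallel: cc can run before generated.h exists and fail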
The scenario outlined is this:
Someone has built the Linux kernel from source code.
That person wants to change the build configuration.
They still have all of the object files and temporary files that were produced by the previous build operation.
Given all of that, what needs to be done to rebuild as few things as possible in order to save time?
I understand that these will trigger or necessitate a complete recompilation of the source code:
Running make clean.
Running make menuconfig.
make clean is obviously something to avoid if I want to achieve the desired goal, because it deletes all object files, both those that would need to be rebuilt and those that could otherwise be left alone. I don't know why make menuconfig would cause the build system to recompile everything, but I've read on here that that is what it does.
The problem I see with not having the second avenue open to me is that if I change the configuration manually with a text editor, the options that I change might require changes in other options that depend on them (e.g., IMA_TRUSTED_KEYRING depends on SYSTEM_TRUSTED_KEYRING) and I'd be working without an interface that would automatically make those required secondary changes.
It occurred to me that invoking scripts/kconfig/mconf, the program built and launched by make menuconfig, could possibly be a solution to the problems described in the previous paragraph since it was not stated that mconf is what makes the build system recompile everything. But, it possibly could be that very program, so I do not wish to try it until I know it won't do that.
Sooooo, how does one achieve the stated objective given the stated scenario?
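For concreteness, the kind of non-interactive route I'm wondering about would look like the following. scripts/config and make olddefconfig are standard parts of the kernel tree, but I have not verified that this avoids the full rebuild, and the options are just the example from above:

    # flip options in .config without the menuconfig interface
    scripts/config --enable CONFIG_SYSTEM_TRUSTED_KEYRING
    scripts/config --enable CONFIG_IMA_TRUSTED_KEYRING

    # let Kconfig resolve dependent/new options non-interactively
    make olddefconfig

    # rebuild, hopefully recompiling only what the changed options affect
    make -j"$(nproc)"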
I have a CMake project I am working on, and when I run "make" then the rebuild of one of the targets is getting triggered regardless of whether I touch any of the files in the system. I have managed to discover that this is because one of the files on which the target depends is somehow getting deleted when I run "make", so the system registers that it needs to be rebuilt. I tried setting "chattr +i" on this file to make it immutable and indeed, with this flag set, the rebuilding of the target is not triggered, though there are also no errors from a deletion attempt. But I think I can be sure that this is the culprit.
So now my question: how can I figure out what script or makefile is actually deleting this file? It is a big project with quite a few scripts and sub-makefiles which potentially run at different times during the build, so I am having a hard time manually discovering what is doing this. Is there some nice trick I can pull from the filesystem side to help? I tried doing "strace make", but parsing the output I can't find the filename anywhere. I am no strace expert, though, so perhaps the file is getting deleted via some identifier rather than the filename? Or perhaps I am not tracing spawned processes?
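For reference, a sketch of the tracing approach (the file and directory names are placeholders): strace can follow child processes with -f and be limited to the syscalls that remove or replace files, and inotifywait can watch the directory from outside the build.

    # -f follows forked children; -e limits output to calls that delete or replace files
    strace -f -e trace=unlink,unlinkat,rename,renameat -o strace.log make
    grep 'the-missing-file' strace.log      # the PID at the start of the line identifies the process

    # alternative: watch the directory while the build runs (inotify-tools package)
    inotifywait -m -e delete,moved_from path/to/the/directory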
I'm trying to automate the process of recompiling an upgraded kernel (I mean a version upgrade).
What I do (sketched as commands after the list):
Backup the object files (*.o) with rsync
Remove the directory and make mrproper
Extract new source and patch
Restore object files with rsync
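Roughly, in commands (versions and paths are placeholders):

    # 1. back up only the object files, preserving paths and timestamps
    rsync -am --include='*/' --include='*.o' --exclude='*' linux-3.18.11/ ../obj-backup/

    # 2. remove the old tree, extract the new source and apply my patches
    rm -rf linux-3.18.11
    tar xf linux-3.18.12.tar.xz

    # 3. restore the object files into the new tree
    rsync -am --include='*/' --include='*.o' --exclude='*' ../obj-backup/ linux-3.18.12/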
But I found this doesn't make sense: for the build to skip already-compiled things it needs some kind of hash or timestamp information, and this procedure seems to throw that away.

Question: what files do I need to keep? Or do such files not exist?

BTW: I already know about ccache, but it broke with even a little config change.
You're doing it wrong™ :-)
Keep the kernel tree as-is and simply patch it using the appropriate incremental patch. For example, for 3.x, you find these patches here:
https://www.kernel.org/pub/linux/kernel/v3.x/incr/
If you currently have 3.18.11 built and want to upgrade to 3.18.12, download the 3.18.11-12 patch:
https://www.kernel.org/pub/linux/kernel/v3.x/incr/patch-3.18.11-12.xz
(or the .gz file, if you don't have the xz utilities installed.)
and apply it. Then "make oldconfig" and "make". Whatever needs to be rebuilt will be rebuilt.
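Spelled out, the whole sequence described above is roughly (run from the top of the already-built tree):

    cd linux-3.18.11
    xz -dc ../patch-3.18.11-12.xz | patch -p1   # apply the incremental patch
    make oldconfig                              # answer prompts for any new config options
    make                                        # only what the patch touched is rebuilt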
However, it's actually best not to rely on the object file dependency mechanism. Who knows whether something might end up not being rebuilt even though it should be, due to a bug. So I'd recommend starting clean every time with a "make clean" before applying the patch, even though it will rebuild everything.
Are you really in such a big need to save build time? If yes, it might be a better idea to configure the kernel ("make menuconfig") and disable all functionality you don't need (like device drivers for hardware you don't have, file systems you don't care about, networking features you will not use, etc.) Such a kernel that's optimized for my needs only takes about 3 or 4 minutes to build (normally, the full kernel with everything enabled would need over half an hour; or even more these days, it's been a very long time since I've built non-optimized kernels.)
Some more info on kernel patches:
https://www.kernel.org/doc/Documentation/applying-patches.txt
The incremental patch is a good way since it updates time stamps properly.
(GNU) Make uses time stamps to decide what to rebuild, so just preserve the time stamps to avoid unnecessary rebuilds.

If we need rsync, we should use it with the -t option (preserve modification times).

Also, for an upgrade that doesn't have an incremental patch, we can make one manually by comparing the patched files.
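A sketch of both points (tree names are placeholders):

    # preserve modification times when copying files around
    rsync -rt linux-3.18.11/ backup/

    # build an incremental patch yourself when upstream doesn't ship one:
    # diff two pristine source trees, then apply the result to the built tree
    diff -urN linux-3.18.11-vanilla/ linux-3.18.12-vanilla/ > patch-3.18.11-12
    cd linux-3.18.11 && patch -p1 < ../patch-3.18.11-12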