Meaning of the warning "File is touched by more than one package" - linux

I am building a simple Linux kernel with Buildroot, and I am adding a small driver I wrote myself. I created the Config.in file and drivername.mk, and I can select the driver in make menuconfig successfully.
When executing make to build the image, the compilation goes fine until my driver starts to compile. It appears to compile and create the image correctly, but I get lots of warnings saying that different files in ./lib/gcc/arm-buildroot-linux-uclibcgnueabihf/ are touched by more than one package: [u'host-gcc-initial', u'host-gcc-final'].
Can anyone explain a bit about this issue and what is causing it? Do you need any more info to work out what is happening? Is it safe to ignore these warnings?
Thanks in advance.

Actually, doing a search on 'touched by more than one package', I found http://lists.busybox.net/pipermail/buildroot/2017-October/205602.html, where we find that this warning can safely be ignored if you're not doing a parallel build and aren't a kernel maintainer.
That said, if you're submitting code for inclusion in the Linux kernel, please be a good citizen and make sure you identify all of the things your code is dependent upon. (I'm not actually an active kernel hacker, so I don't know what method they're using for this right now.)
The basic idea is that there are a bunch of steps in compiling things that need to be done in a logical order. In a small project, we simply write in the dependencies we know about, because we also wrote the code that created them. But with a project the size of the kernel, you can be sure that not everyone does this. Some people only specify a dependency when it is needed for things to build properly; if the default order happens to work, things can go for years before someone discovers the missing dependency, usually when they try to update just that one piece and the code depending on it does not get rebuilt.
When you're doing things in parallel, on the other hand, it becomes a lot more complicated. Now you really do need every dependency specified, because there is no longer any inherent, dependable order. Some people will still build serially, others will use two jobs, I'll use 8, and I've worked in groups that would happily use 30 because they're on a 32-processor machine that isn't busy during off hours. Suddenly a file from a directory that normally got processed 30 directories before yours is being generated at the same moment your file needs it, simply because nobody listed the dependency.
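To make this concrete, here is a minimal, hypothetical Makefile that works serially but can fail under make -j2, assuming a main.c that does #include "gen.h":

# gen.h is generated by a rule, but main.o doesn't list it as a prerequisite.
all: prog

gen.h:
	echo '#define GREETING "hi"' > gen.h

main.o: main.c          # BUG: should be "main.o: main.c gen.h"
	$(CC) -c main.c -o main.o

prog: gen.h main.o
	$(CC) main.o -o prog

With a serial make, gen.h happens to be finished before main.o because prog's prerequisites are walked left to right; with -j2 both rules can run at once, and the compile of main.c fails (or silently picks up a stale gen.h).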

Related

Changing the configuration of an already-built kernel and recompiling only what's been changed

The scenario outlined is this:
Someone has built the Linux kernel from source code.
That person wants to change the build configuration.
They still have all of the object files and temporary files that were produced by the previous build operation.
Given all of that, what needs to be done to rebuild as few things as possible in order to save time?
I understand that these will trigger or necessitate a complete recompilation of the source code:
Running make clean.
Running make menuconfig.
Running make clean is obviously something to avoid for the stated goal, because it deletes all object files, both those that would need to be rebuilt anyway and those that could otherwise be left alone. I don't know why make menuconfig would cause the build system to recompile everything, but I've read on here that that is what it does.
The problem I see with not having the second avenue open to me is that if I change the configuration manually with a text editor, the options that I change might require changes in other options that depend on them (e.g., IMA_TRUSTED_KEYRING depends on SYSTEM_TRUSTED_KEYRING) and I'd be working without an interface that would automatically make those required secondary changes.
It occurred to me that invoking scripts/kconfig/mconf, the program built and launched by make menuconfig, could possibly be a solution, since it was never stated that mconf itself is what makes the build system recompile everything. But it could be that very program, so I do not wish to try it until I know it won't do that.
So, how does one achieve the stated objective given the stated scenario?
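For what it's worth, a common non-interactive approach (a sketch, not a definitive answer; scripts/config is a helper shipped in the kernel source tree, and the symbols are the ones from the question) is to flip options from the command line and then let the config machinery resolve dependent options:

# Flip the options you care about:
scripts/config --enable SYSTEM_TRUSTED_KEYRING
scripts/config --enable IMA_TRUSTED_KEYRING
# Resolve dependent/secondary options against the existing .config:
make olddefconfig    # or "make oldconfig" to be prompted about each change
# Rebuild; make only recompiles objects affected by the change:
make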

Track origin of file deletion during make

I have a CMake project I am working on, and when I run "make", the rebuild of one of the targets is triggered regardless of whether I touch any of the files in the system. I have managed to discover that this is because one of the files the target depends on is somehow getting deleted when I run "make", so the system registers that it needs to be rebuilt. I tried setting "chattr +i" on this file to make it immutable, and indeed, with this flag set, the rebuild of the target is not triggered, though there are also no errors from a deletion attempt. Still, I think I can be fairly sure this is the culprit.
So now my question: how can I figure out which script or makefile is actually deleting this file? It is a big project with quite a few scripts and sub-makefiles that potentially run at different times during the build, so I am having a hard time discovering this manually. Is there some nice trick I can pull from the filesystem side to help? I tried "strace make", but parsing the output I can't find the filename anywhere. I am no strace expert, though, so perhaps the file is being deleted via some identifier rather than the filename? Or perhaps I am not tracing spawned processes?
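Two standard tricks here (a sketch; the file names are placeholders). The likely reason the filename never appeared is that a plain "strace make" does not follow the child processes that make spawns; -f fixes that:

# Trace make and everything it spawns, logging only deletion/rename syscalls:
strace -f -e trace=unlink,unlinkat,rename,renameat -o /tmp/build-trace.log make
grep suspect-file /tmp/build-trace.log

# Alternatively, watch the file itself (from the inotify-tools package):
inotifywait -m -e delete,delete_self,move path/to/suspect-file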

Fast kernel recompile

I'm trying to automate the process of recompiling an upgraded kernel (I mean a version upgrade).
What I do:
Back up the object files (*.o) with rsync
Remove the directory and make mrproper
Extract new source and patch
Restore object files with rsync
But I found this doesn't make sense: if deciding which compiled files to skip relies on some hash or timestamp check, then extracting the new source should invalidate the restored objects anyway.
Question: which files do I need to keep? Or is there no such file?
BTW: I already know about ccache, but even a small config change defeats it.
You're doing it wrong™ :-)
Keep the kernel tree as-is and simply patch it using the appropriate incremental patch. For example, for 3.x, you find these patches here:
https://www.kernel.org/pub/linux/kernel/v3.x/incr/
If you currently have 3.18.11 built and want to upgrade to 3.18.12, download the 3.18.11-12 patch:
https://www.kernel.org/pub/linux/kernel/v3.x/incr/patch-3.18.11-12.xz
(or the .gz file, if you don't have the xz utilities installed.)
and apply it. Then "make oldconfig" and "make". Whatever needs to be rebuilt will be rebuilt.
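In terms of commands, that workflow looks roughly like this (a sketch; the versions are from the example above):

xz -d patch-3.18.11-12.xz
cd linux-3.18.11          # the tree you already built in
patch -p1 < ../patch-3.18.11-12
make oldconfig            # carry the existing .config forward
make                      # rebuilds only what the patch touched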
However, it's actually best not to rely on the object-file dependency mechanism: who knows whether, due to a bug, something might end up not being rebuilt even though it should be. So I'd recommend starting clean every time with a "make clean" before applying the patch, even though that means rebuilding everything.
Are you really in such a big need to save build time? If yes, it might be a better idea to configure the kernel ("make menuconfig") and disable all functionality you don't need (like device drivers for hardware you don't have, file systems you don't care about, networking features you will not use, etc.) Such a kernel that's optimized for my needs only takes about 3 or 4 minutes to build (normally, the full kernel with everything enabled would need over half an hour; or even more these days, it's been a very long time since I've built non-optimized kernels.)
Some more info on kernel patches:
https://www.kernel.org/doc/Documentation/applying-patches.txt
The incremental patch is a good approach because it updates time stamps properly.
(GNU) Make uses time stamps to decide what to rebuild, so preserving time stamps is what avoids unnecessary rebuilds.
If we need rsync, we should use it with the -t option so modification times are preserved.
Also, for a release that has no incremental patch, we can create one manually by diffing the patched trees.
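For example (a sketch; the tree names are illustrative):

# Copy a tree while preserving modification times (-t):
rsync -rt linux-old/ linux-work/
# Hand-roll an incremental patch between two patched trees:
diff -ruN linux-3.18.11/ linux-3.18.12/ > patch-3.18.11-12.custom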

Exactly what do the phases of distcc mean? Am I already using pump mode? And how do I use pump mode in Cygwin?

From what I have read, being able to use pump mode with distcc requires that you wrap make in the pump script. However, I do not have it in my path, and I cannot find it as a package or included in the distcc package for Cygwin.
However, when I compile with distcc and use distccmon-text to monitor which hosts are contacted and their phase, I clearly see that some of them, sometimes, are in the Preprocess phase. I thought all preprocessing was done on the client executing the make script when not using pump mode, and that the whole idea of pump mode was preprocessing on the remote hosts (and thus requiring the same include files).
This has left me confused. My main question is: exactly what do the phases Startup, Blocked, Connected, Preprocess, Conect, Send, Receive and Done of distcc mean?
And as a sub-question: how can I use pump mode with distcc in Cygwin?
Exactly what do the phases of distcc mean?
Ok, this is embarrassing, but I just wasted four hours on the web trying to answer that question. Next time I'll just pull the source and look at it. But, you raise a good point: It's amazing this isn't readily available information.
THESE ARE MY GUESSES (because I don't want to admit I wasted four hours not answering the question!):
Startup - could otherwise be called "initialization/loading"; not yet ready for the first task
Blocked - is awaiting access to a local file or the local processor; I stumbled across recent bug fixes that set a one-second "timeout" while it waited for the processor to become available, and I'm aware that it uses zero-length "flock" files to block at times
Connected - process initiated contact with a client, is now reserved for a job (??), or is compiling a job (??)
Preprocess - is performing a preprocess operation
Conect - is hand-shaking with a client for an atomic operation, maybe to become reserved (??)
Send - is sending a compiled object file back to the client
Receive - is receiving source to be compiled, or receiving zipped headers (if using pump mode)
Done - could otherwise be called "idle/available"
NOTE: Because of Google's "pump-mode" algorithm, there's actually quite a bit of "hand-shaking" that goes on between the client (running distcc) and volunteer (running distccd). First, in pump-mode, all the headers that are expected to be "needed" are bundled, zipped, and pushed to the volunteer (where it is unzipped into a local mirror like that on the client machine). However, it appears that some further communication between the volunteer-and-client is possible to incrementally transfer other headers as-needed, so that would explain the "more-rich" communications phases/states listed above.
Am I already using pump mode?
I very much doubt it, as you did not configure it by wrapping the compile through the pump script with make or scons (necessary to run the Google algorithm that predicts header usage for bundling-and-transport to the volunteer), nor can you find the "pump" script. But I cannot explain your seeing the "Preprocess" state on your volunteers. (You're not referring to the "Preprocess" state on your clients, right? That would be entirely understandable, as by default preprocessing is on the client.)
Rather, I suppose the implementation makes it possible that the hard state machine moves through ALL the states, INCLUDING "Preprocess", even when there is no preprocessing to do, before it advances to the next state. For example, even if it did no preprocessing on the volunteer side, distccd would receive the source file, write it to disk, and then launch the compiler. If you're on Cygwin, those steps are not instantaneous, especially for a large source file (especially after all the headers included in it). So you might see the "Preprocess" phase until it initiates the next phase for the compile operation itself.
HEY ... I don't see an obvious "compile" phase, so it is POSSIBLE that the "preprocess" phase embodies "compile" or "preprocess-and-compile" (since those phases are often combined in many compilers anyway).
Sorry -- I'm just guessing.
And how do I use pump mode in Cygwin?
I haven't tried it, but it is supposed to be possible. Apparently the most common problem with Cygwin is that some Windows compilers cannot handle the default TMPDIR setting when distcc is run under Cygwin. The fix is to put something like "export TMPDIR=c:/temp" in /etc/profile.
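Beyond that, the standard pump invocation (assuming the pump script is available; the hosts and job count here are examples, and the ,cpp,lzo host options are what enable remote preprocessing and compression) looks like:

export DISTCC_HOSTS='host1,cpp,lzo host2,cpp,lzo'
pump make -j8 CC="distcc gcc" CXX="distcc g++"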
The FAQ may be able to help more: http://distcc.googlecode.com/svn/trunk/doc/web/faq.html

How to reduce compilation cost in GCC and make?

I am trying to build some big libraries, like Boost and OpenCV, from their source code via make and GCC under Ubuntu 8.10 on my laptop. Unfortunately, compiling those big libraries seems to be a big burden for my laptop (Acer Aspire 5000). Its fan makes higher and higher noises until, all of a sudden, my laptop shuts itself down without the OS gracefully turning off.
So I wonder: how can I reduce the compilation cost with make and GCC?
I wouldn't mind if the compilation will take much longer time or more space, as long as it can finish without my laptop shutting itself down.
Is building the debug version of libraries always less costly than building the release version, because there is no optimization?
Generally speaking, is it possible to specify just some part of a library to build and install instead of the full library? Can the rest be built and added later if it turns out to be needed?
Is it correct that if I restart my laptop, I can resume compilation from around where it was when my laptop shut itself down? For example, I noticed this is true for OpenCV: the progress percentage shown during its compilation does not restart from 0%. But I am not sure about Boost, since there is no obvious progress information, and the compilation seems to take much longer.
UPDATE:
Thanks, brianegge and Levy Chen! How do I use the wrapper script for GCC and/or g++? Is it like defining an alias for gcc or g++? And how would I call a script to check sensors and wait until the CPU temperature drops before continuing?
I'd suggest creating a wrapper script for gcc and/or g++
#!/bin/bash
# Pause before each compile so the CPU gets a chance to cool down,
# then hand off to the real compiler with the original arguments.
sleep 10
exec gcc "$@"
Save the above as "gccslow" or something, and then:
export CC="gccslow"
Alternatively, you can call the script gcc and put its directory at the front of your PATH. If you do that, be sure to use the full path to the real compiler inside the script; otherwise, the script will call itself recursively.
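That variant would look like this (the compiler's location is an assumption; check yours with "which -a gcc"):

#!/bin/bash
sleep 10
exec /usr/bin/gcc "$@"   # full path, so the wrapper doesn't re-invoke itself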
A better implementation could call a script to check sensors and wait until the CPU temperature drops before continuing.
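A hypothetical sketch of that idea, assuming lm-sensors is installed and prints a line like "Core 0: +45.0°C" (the label and the 70°C threshold are assumptions to adjust for your machine):

#!/bin/bash
# Wait until the reported core temperature drops below 70°C, then compile.
while true; do
    temp=$(sensors | awk '/^Core 0:/ { gsub(/[^0-9.]/, "", $3); print int($3); exit }')
    [ "${temp:-0}" -lt 70 ] && break
    sleep 5
done
exec gcc "$@"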
For your latter question: a well-written Makefile defines dependencies as a directed acyclic graph (DAG), and make tries to satisfy those dependencies by compiling them in the order the DAG dictates. Thus, once a file is compiled, that dependency is satisfied and it need not be compiled again.
It can, however, be tricky to write good Makefiles, so sometimes an author will resort to a brute-force approach and recompile everything from scratch.
For your question: for such well-known libraries, I will assume the Makefile is written properly, and the build should resume from the last operation (with the caveat that make needs to rescan the DAG and recalculate the compilation order, which should be relatively cheap).
Instead of compiling the whole thing, you can build each target separately; see the sketch below. You will have to examine the Makefile to identify the targets.
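For example (the target name is illustrative; real names come from the project's Makefile):

# List rule targets declared at the top level of the Makefile:
grep -E '^[A-Za-z0-9_.-]+:' Makefile
# Build just one of them, leaving the rest for later:
make opencv_core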
Tongue-in-cheek: What about putting the laptop into the fridge while compiling?
