I have to build custom kernels often enough that it would make sense for me to have a tool which takes a short list of approx. 50-100 configuration options and applies them over the default stock config file.
Now, I surely could just take a fresh kernel config (call make nconfig -> save the file without any changes), then change the corresponding options and compile the thing. But the nconfig/menuconfig UIs also perform some dependency handling - automatically setting the values of dependent/related options. So, my question is:
(How) can I easily tap into that dependency-resolving mechanism? What I'd ideally like is the following: take a fresh .config file, apply a single change, then do something that updates the values of related configuration options, if there are any to be changed.
Thanks in advance!
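For what it's worth, the kernel tree ships a fragment-merging helper that does roughly this, and make olddefconfig is the non-interactive way to let Kconfig resolve whatever the fragment leaves inconsistent. A sketch, assuming a reasonably recent kernel source tree; my_options.config is a placeholder name for your list of overrides:
# Start from the stock defaults for your architecture.
make defconfig
# A fragment containing only the options you care about.
cat > my_options.config <<'EOF'
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
# CONFIG_DEBUG_INFO is not set
EOF
# Merge the fragment over .config; -m merges only, without resolving yet.
scripts/kconfig/merge_config.sh -m .config my_options.config
# Now let Kconfig apply its dependency handling and fill in everything
# else with defaults, without any interactive prompts.
make olddefconfig
The end result should get the same kind of dependency fix-ups that nconfig/menuconfig would perform, just driven from a plain list of options.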
Related
I am building on one machine and running it on the other.
Build:
runcpu --action build --config xxx
Run:
runcpu --action run --config xxx --nobuild
All cases reported a checksum mismatch. How do I resolve this?
Explanation
For SPEC CPU 2017, check out the config file options for runcpu. It lists two options that may be of interest, which you can put in a header section: strict_rundir_verify and verify_binaries. I've pasted their descriptions below.
strict_rundir_verify=[yes|no]:
When set, the tools will verify that the file contents in existing run directories match the expected checksums. Normally, this should always be on, and reportable runs will force it to be on. Turning it off might make the setup phase go a little faster while you are tuning the benchmarks.
Developer notes: setting strict_rundir_verify=no might be useful when prototyping a change to a workload or testing the effect of differing workloads. Note, though, that once you start changing your installed tree for such purposes it is easy to get lost; you might as well keep a pristine tree without modifications, and use a second tree that you convert_to_development.
verify_binaries=[yes|no]:
runcpu uses checksums to verify that executables match the config file that invokes them, and if they do not, runcpu forces a recompile. You can turn that feature off by setting verify_binaries=no.
Warning: It is strongly recommended that you keep this option at its default, yes (that is, enabled). If you disable this feature, you effectively say that you are willing to run a benchmark even if you don't know what you did or how you did it -- that is, you lack information as to how it was built!
The feature can be turned off because it may be useful to do so sometimes when debugging (for an example, see env_vars), but it should not be routinely disabled.
Since SPEC requires that you disclose how you build benchmarks, reportable runs (using the command-line switch --reportable or config file setting reportable=yes) will cause verify_binaries to be automatically enabled. For CPU 2017, this field replaces the field check_md5.
For SPEC CPU 2006, these two options also exist, but note that verify_binaries used to be called check_md5.
Example
I recently built the SPEC CPU 2017 binaries, patched them (in their respective exe directories), and then performed a (non-reportable) run. To do this, I put the following in the "global options" header section of my configuration file:
#--------- Global Settings ----------------------------------------------------
...
reportable = 0
verify_binaries = 0
...
before building, patching, and running (with the --nobuild flag) the suite.
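Put together, the workflow described above looks roughly like this; xxx is the placeholder config name from the question:
# Build the binaries.
runcpu --action build --config xxx
# ... patch the executables in their respective exe directories ...
# Run without rebuilding; with verify_binaries = 0 in the config header,
# runcpu will not force a recompile over the checksum mismatch.
runcpu --action run --nobuild --config xxx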
I am creating a simple Linux kernel with Buildroot, and I am adding a small driver I've written myself. I created the Config.in file and drivername.mk, and I can successfully select the driver in make menuconfig.
When executing make to build the image, the compilation goes fine until my driver starts to compile. It appears to compile and create the image correctly, but I get lots of warnings saying that various files in ./lib/gcc/arm-buildroot-linux-uclibcgnueabihf/ are touched by more than one package: [u'host-gcc-initial', u'host-gcc-final'].
Can anyone explain a bit about this issue and what is causing it? Do you need any more info to work out what is happening? Is it safe to ignore the warnings?
Thanks beforehand
Actually, doing a search on 'touched by more than one package', I found http://lists.busybox.net/pipermail/buildroot/2017-October/205602.html, where we find that this warning can safely be ignored if you're not doing a parallel build and aren't a kernel maintainer.
That said, if you're submitting code for inclusion in the Linux kernel, please be a good citizen and make sure you identify all of the things your code is dependent upon. (I'm not actually an active kernel hacker, so I don't know what method they're using for this right now.)
The basic idea is that there are a bunch of steps in compiling things that need to happen in a logical order. In a small project, we simply add the dependencies we know about, because we also wrote the code that created them. But with a project the size of the kernel, you can guarantee that not everyone does this. Some people only specify a dependency when it is needed for things to build properly - and if the default order happens to work, it can be years before anyone notices that a dependency is missing, at which point it causes real grief: someone updates just the one thing that was the missing dependency, and the code that depends on it doesn't get rebuilt.
When you're building in parallel, on the other hand, it becomes a lot more complicated. Now you really do need every dependency specified, because there is no longer any inherent, dependable order. Some people will still build serially, others will use two jobs; I'll use 8, and I've worked in groups inclined to use 30 because they're on a 32-processor machine and don't really need all of it during off hours. Suddenly the file you needed from a directory that normally got processed 30 directories before yours is being built at the same time as the file that needs it, because you never listed the dependency and nothing else forces it to finish first.
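To make the parallel-build failure mode concrete, here is a deliberately broken, hypothetical example: prog links against lib.o but never lists it as a prerequisite, so a serial build only works by accident of the order under all. (Recipe lines in the Makefile must be indented with real tabs.)
cat > Makefile <<'EOF'
all: lib.o prog

lib.o: lib.c
	cc -c lib.c

prog: main.c
	cc -o prog main.c lib.o
EOF
printf 'int lib(void) { return 42; }\n' > lib.c
printf 'int lib(void); int main(void) { return lib(); }\n' > main.c
make        # serial: lib.o happens to be built before prog, so the link succeeds
rm -f lib.o prog
make -j8    # parallel: the link of prog can start before lib.o exists and fail
The fix, of course, is simply to declare the dependency: prog: main.c lib.o.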
The scenario outlined is this:
Someone has built the Linux kernel from source code.
That person wants to change the build configuration.
They still have all of the object files and temporary files that were produced by the previous build operation.
Given all of that, what needs to be done to rebuild as few things as possible in order to save time?
I understand that these will trigger or necessitate a complete recompilation of the source code:
Running make clean.
Running make menuconfig.
make clean is obviously something to avoid if the goal is to rebuild as little as possible, because it deletes all object files, both those that would need to be rebuilt anyway and those that could otherwise be left alone. I don't know why make menuconfig would cause the build system to recompile everything, but I've read here that that is what it does.
The problem I see with not having the second avenue open to me is that if I change the configuration manually with a text editor, the options that I change might require changes in other options that depend on them (e.g., IMA_TRUSTED_KEYRING depends on SYSTEM_TRUSTED_KEYRING) and I'd be working without an interface that would automatically make those required secondary changes.
It occurred to me that invoking scripts/kconfig/mconf, the program built and launched by make menuconfig, could possibly be a solution to the problems described in the previous paragraph since it was not stated that mconf is what makes the build system recompile everything. But, it possibly could be that very program, so I do not wish to try it until I know it won't do that.
So, how does one achieve the stated objective given the stated scenario?
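For what it's worth, here is a sketch of the manual-edit route discussed above that still gets Kconfig's dependency handling, assuming an already-built tree; scripts/config and the olddefconfig target ship with the kernel source, and the option names are the ones from the question:
# Flip the options you care about directly in .config.
scripts/config --enable SYSTEM_TRUSTED_KEYRING
scripts/config --enable IMA_TRUSTED_KEYRING
# Resolve dependencies non-interactively: options required by the changes
# are updated, and a symbol whose other dependencies are unmet is dropped.
make olddefconfig
# Rebuild; kbuild tracks which objects use which config options, so only
# the parts affected by the changed options should be recompiled.
make -j"$(nproc)"
Neither step deletes object files the way make clean does, and nothing interactive like mconf gets involved.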
I wonder if there is any way I could retrieve, in one shot, all the changes I have made to my various configuration files (residing in /etc and so on) since install.
I imagine some kind of loop that uses diff to compare all those files to a 'standard installation' of Ubuntu. The output should be a single file recording the changes that were made, with a timestamp.
Perhaps there is even a way to put all that in a script and let it run regularly to automatically keep track of future config file changes.
If the files are already modified, I guess your only option is to diff your files against a fresh install. Keep in mind that some files might be specific to your computer; I'm thinking of files that hold device-specific values, like your MAC address in udev/rules.d/70-persistent-net.rules, your drives' UUIDs in /etc/fstab, etc.
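As a rough sketch of that idea, assuming a pristine copy of a fresh install's /etc is available to compare against (the paths below are placeholders):
#!/bin/sh
# Compare the live /etc against a pristine copy (e.g. extracted from the
# install media or taken from a throwaway VM) and keep one timestamped
# diff per run.
PRISTINE=/mnt/pristine/etc
OUT=/var/log/etc-changes-$(date +%Y%m%d-%H%M%S).diff
diff -ruN "$PRISTINE" /etc > "$OUT"
echo "Changes recorded in $OUT"
Dropping something like this into /etc/cron.daily/ would cover the 'let it run regularly' part.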
If you're planning this ahead, there are at least two options you can consider:
use a VCS such as git (see the sketch after this list).
use a filesystem that keeps a complete history of the changes made.
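A minimal sketch of the first option, putting /etc itself under git (the etckeeper package automates essentially this, including permission and metadata handling):
cd /etc
sudo git init
sudo git add -A
sudo git commit -m "Baseline after install"
# Later, after editing configuration files:
sudo git status             # which files changed
sudo git diff               # what changed, line by line
sudo git log -p --date=iso  # full history with timestamps, once changes are committed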
What it says, really. We need a cache because most builds use the same versions of every file, but any developer who is altering files will only be altering a few of them, and generally those get altered a lot.
There's little point in writing such a change into the cache specified with CacheDir() until it is approved for production, but there's a lot of point in copying stuff from the CacheDir.
But I can only see options to disable the cache entirely.
(I'd post this to the SCons mailing list, but it's just come up with a completely illegible captcha.)
I can think of 2 different options:
Instead of a CacheDir, how about using a Repository() for those files that almost never change?
Consider using the --implicit-deps-unchanged option as described in the SCons man pages, and here.
Here's a similar discussion.
How do you plan on toggling the read-only behavior? You'll need some logic to do that. Since it's not possible to use the CacheDir in a read-only way, an alternative would be to use this same toggling logic to switch between using a Repository and the CacheDir.
Fwiw, SCons has this now, as of version 2.3.1.
From: http://www.scons.org/doc/production/HTML/scons-man.html
scons can maintain a cache of target (derived) files that can be shared between multiple builds. When caching is enabled in a SConscript file, any target files built by scons will be copied to the cache. If an up-to-date target file is found in the cache, it will be retrieved from the cache instead of being rebuilt locally. Caching behavior may be disabled and controlled in other ways by the --cache-force, --cache-disable, --cache-readonly, and --cache-show command-line options. The --random option is useful to prevent multiple builds from trying to update the cache simultaneously.
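Given those options, the behaviour asked about above comes down to which switch each build is run with (a sketch, assuming SCons 2.3.1 or later per the note above):
# Developer builds: pull hits from the shared CacheDir but never add to it.
scons --cache-readonly
# The build that is allowed to populate the cache (e.g. the approved
# production build) just runs normally.
scons
# For completeness, a build that ignores the cache entirely:
scons --cache-disable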