I am trying to build some big libraries, such as Boost and OpenCV, from their source code via make and GCC under Ubuntu 8.10 on my laptop. Unfortunately, compiling those big libraries seems to be a big burden for my laptop (Acer Aspire 5000). Its fan gets louder and louder until, all of a sudden, the laptop shuts itself down without the OS turning off gracefully.
So I wonder how I can reduce the cost of compilation with make and GCC.
I wouldn't mind if the compilation took much longer or used more space, as long as it could finish without my laptop shutting itself down.
Is building the debug version of a library always less costly than building the release version, because there is no optimization?
Generally speaking, is it possible to build and install just part of a library instead of the full library? Can the rest be built and added in later if it turns out to be needed?
Is it correct that if I restart my laptop, I can resume compilation from around where it was when the laptop shut itself down? For example, I noticed this is true for OpenCV, because the progress percentage shown during its compilation does not restart from 0%. But I am not sure about Boost, since there is no obvious indicator and the compilation seems to take much longer.
UPDATE:
Thanks, brianegge and Levy Chen! How do I use the wrapper script for GCC and/or g++? Is it like defining an alias for gcc or g++? And how do I call a script that checks the sensors and waits until the CPU temperature drops before continuing?
I'd suggest creating a wrapper script for gcc and/or g++
#!/bin/bash
sleep 10            # pause before each compiler invocation so the CPU can cool a little
exec gcc "$@"       # pass all arguments through to the real gcc
Save the above as "gccslow" or something, and then:
export CC="gccslow"
Alternatively, you can call the script gcc and put it at the front of your path. If you do that, be sure to include the full path in the script, otherwise, the script will call itself recursively.
A better implementation could call a script to check sensors and wait until the CPU temperature drops before continuing.
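For example, here is a minimal sketch of such a wrapper, assuming the lm-sensors package is installed and that the sensors command prints a core temperature line such as "Core 0: +62.0°C"; the parsing and the 70 °C threshold are assumptions you would adjust for your own hardware:

#!/bin/bash
# Hypothetical temperature-aware gcc wrapper: wait for the CPU to cool down before compiling.
MAX_TEMP=70   # degrees Celsius; assumed threshold, pick one your laptop tolerates

get_temp() {
    # take the first "Core" reading and keep only the integer part
    sensors | grep -m1 'Core' | grep -o '+[0-9]*' | head -n1 | tr -d '+'
}

while [ "$(get_temp)" -ge "$MAX_TEMP" ]; do
    sleep 15   # give the fan time to catch up
done

exec gcc "$@"   # hand off to the real compiler

As with gccslow above, point CC (and CXX, for a g++ variant) at this script before running make.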
For your latter question: A well-written Makefile defines dependencies as a directed acyclic graph (DAG), and make satisfies those dependencies by compiling files in an order consistent with the DAG. Once a file has been compiled, its dependency is satisfied and it need not be compiled again.
It can, however, be tricky to write good Makefiles, so sometimes the author will resort to a brute-force approach and recompile everything from scratch.
For such well-known libraries, I will assume the Makefile is written properly, so the build should resume from the last operation (with the caveat that make needs to rescan the DAG and recalculate the compilation order, which should be relatively cheap).
Instead of compiling the whole thing, you can also compile each target separately. You will have to examine the Makefile to identify them.
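For instance, something along these lines; the target names here are purely hypothetical, so read the Makefile (or try make help, if the project provides such a target) to find the real ones:

make some_module        # build just one piece of the library
make another_module     # build further pieces later, as you find you need them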
Tongue-in-cheek: What about putting the laptop into the fridge while compiling?
Related
I am compiling the Linux kernel in a VM (VirtualBox) with 2 of 4 GB of RAM and 4 of 8 CPUs allocated. My initial compilation took around 8-9 hours, and I was using make -j4 too. Now I have added a simple system call to the kernel and just ran make -j4 again, and it has been compiling for the past 3 hours. I thought that after the initial compilation make would only compile the small changes, but it seems to be compiling everything (mostly the drivers). Is there any way I can speed up this compilation process?
For example, is there any way I can disable some of the drivers that I don't really need? If I just want to implement a simple system call, I don't really need all the networking drivers, and maybe disabling them would speed things up. In other words, I just want the bare minimum functionality for my kernel so I can test my system calls.
Compiling the kernel will always take a long time; unfortunately, there's no way around that besides having a really good processor with a lot of threads. However, in huge projects like this, ccache will help with compilation times tremendously. It's not perfect, but it is far better than recompiling every object from scratch.
You won't see the difference on the initial compilation, but it will speed up recompilation by using the cache it has generated instead of recompiling most of what has already been compiled before.
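As a sketch of how ccache is commonly hooked into a kernel build, assuming ccache is installed (the /usr/lib/ccache symlink directory exists on Debian/Ubuntu-style systems; the path may differ on your distribution):

# option 1: invoke the compiler through ccache explicitly
make CC="ccache gcc" -j4

# option 2: put the ccache compiler symlinks first in PATH so every gcc call is cached
export PATH="/usr/lib/ccache:$PATH"
make -j4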
I'm trying to automate the process of recompiling an upgraded kernel (I mean a version upgrade).
What I do:
Backup the object files (*.o) with rsync
Remove the directory and make mrproper
Extract new source and patch
Restore object files with rsync
But I found this doesn't work as I hoped: if make relies on some hash or other metadata to skip already-compiled files, removing the directory presumably destroys it.
Question: which files do I need to keep, or do no such files exist?
By the way, I already know about ccache, but it broke with even a small config change.
You're doing it wrong™ :-)
Keep the kernel tree as-is and simply patch it using the appropriate incremental patch. For example, for 3.x, you find these patches here:
https://www.kernel.org/pub/linux/kernel/v3.x/incr/
If you currently have 3.18.11 built and want to upgrade to 3.18.12, download the 3.18.11-12 patch:
https://www.kernel.org/pub/linux/kernel/v3.x/incr/patch-3.18.11-12.xz
(or the .gz file, if you don't have the xz utilities installed.)
and apply it. Then "make oldconfig" and "make". Whatever needs to be rebuilt will be rebuilt.
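A sketch of that workflow on the command line, using the version numbers from the example above (adapt the paths and -j value as needed):

cd linux-3.18.11
xzcat ../patch-3.18.11-12.xz | patch -p1   # apply the incremental patch in place
make oldconfig                             # carry the existing .config forward, answering any new options
make -j4                                   # rebuild only what the patch touched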
However, it's actually best not to rely on the object-file dependency mechanism. Who knows whether something might end up not being rebuilt, even though it should be, due to a bug. So I'd recommend starting clean every time with a "make clean" before applying the patch, even though it will rebuild everything.
Do you really need to save that much build time? If yes, it might be a better idea to configure the kernel ("make menuconfig") and disable all the functionality you don't need (device drivers for hardware you don't have, file systems you don't care about, networking features you will not use, etc.). Such a kernel, optimized for my needs only, takes about 3 or 4 minutes to build; a full kernel with everything enabled would normally need over half an hour, or even more these days (it's been a very long time since I've built non-optimized kernels).
Some more info on kernel patches:
https://www.kernel.org/doc/Documentation/applying-patches.txt
The incremental patch is a good approach since it updates the time stamps properly.
(GNU) make uses time stamps to decide what to rebuild, so preserving the time stamps avoids unnecessary rebuilds.
If you do use rsync, use it with the -t option so modification times are preserved.
Also, for a release that doesn't have an incremental patch, you can create one manually by comparing the patched trees.
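For example (a sketch; the directory names are placeholders for a backup copy and for two fully patched trees):

# preserve modification times when backing up or restoring the tree
rsync -at linux-src/ linux-src-backup/

# generate a missing incremental patch by diffing two patched trees
diff -uprN linux-3.18.11/ linux-3.18.12/ > patch-3.18.11-12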
I have a Core i5 with 8 GB of RAM.
I have VMware Workstation 10.0.1 installed on my machine.
I have Fedora 20 Desktop Edition installed on VMware as the guest OS.
I am working on the Linux kernel source code, v3.14.1. I am developing an I/O scheduler for the Linux kernel. After any modification to the code, it takes around 1 hour and 30 minutes to compile and install the whole kernel in order to see the changes.
Compilation and Installation commands:
make menuconfig
make
make modules
make modules_install
make install
So my question is: is it possible to reduce that 1 hour and 30 minutes to only 10 or 15 minutes?
Do not do make menuconfig for every change you make to the sources, because it will trigger a full recompilation of everything, no matter how trivial your change is. It is only needed when the kernel's configuration options change, and that should seldom happen during your development.
Just do:
make
or if you prefer the parallel compilation:
make -j4
or whatever number of concurrent tasks you fancy.
Then the make install, etc. may be needed for deploying the recently built binaries, of course.
Another trick is to configure the kernel to the minimum needed for your tests. I've found that for many tasks a UML build (User Mode Linux) is the fastest. You may also find make localmodconfig useful instead of make menuconfig as a starting point.
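A quick sketch of the localmodconfig route; it shrinks the configuration to the modules currently loaded on the running machine, so run it on a system that resembles your test target:

make localmodconfig   # base the config on the modules loaded right now, answering the remaining prompts
make -j4              # the much smaller kernel builds far faster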
Use make's parallel build with the -j option.
Compile for the target architecture only, by passing ARCH= explicitly, so make builds exactly the kernel you need.
For example, instead of running:
make
run:
make ARCH=<your architecture> -jN
where N is the number of cores on your machine (cat /proc/cpuinfo lists the cores). For example, for an i386 target on a host machine with 4 cores:
make ARCH=i386 -j4
Similarly you can run the other make targets (modules, modules_install, install) with -jN flag.
Note: make checks which files have been modified and compiles only those, so only the initial build should take a long time; subsequent builds will be faster.
make -j with no number does not limit the number of parallel jobs, so it will make use of all available CPUs.
You do not need to run make menuconfig again every time you make a change — it is only needed once to create the kernel .config file. (Or possibly again if you edit Kconfig files to add or modify configuration options, but this certainly shouldn't be happening often.)
So long as your .config is left alone, running make should only recompile files that you changed. There are a few files that must be compiled every time, but the vast majority are not.
ccache should be able to dramatically speed up your compile times. It speeds up recompilation by caching previous compilations and detecting when the same compilation is being done again. Your first compilation with ccache will be slower since it needs to populate the cache, but subsequent builds should be much faster.
If you don't want to fuss with ccache configurations you can just run it like so to compile the kernel:
ccache make
Perhaps in addition to the previous suggestions, while using ccache you might want to unset CONFIG_GCC_PLUGINS (if it was set); otherwise you may get a lot of cache misses, as seen in this example.
Perhaps in addition to the previous suggestions, using the ccache software (https://ccache.samba.org/) and putting the build directory on an SSD should drastically decrease the compilation time.
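For instance, something like the following; the cache path is just an example, and CCACHE_DIR is ccache's standard variable for relocating its cache:

export CCACHE_DIR=/mnt/ssd/ccache      # keep the ccache cache on the fast drive
export PATH="/usr/lib/ccache:$PATH"    # use the distro's ccache compiler wrappers, if provided
make -j4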
If you have sufficient RAM and you won't be using your machine while the kernel is being built, you can spawn a large number of concurrent jobs. But make sure your RAM really is sufficient, otherwise your system will hang and crash.
Use this command:
sudo make -j 4 && sudo make modules_install -j 4 && sudo make install -j 4
Where 4 is the number of cores I have allotted to this process.
A simple trick: if you don't need to use the machine interactively (or have another one to work on), log out of the desktop completely and switch to a TTY terminal using Ctrl + Alt + F*. Everything is much, much faster.
I downloaded the Linux kernel and started compiling it. A question has arisen, since I'm building on an old laptop: what if I hit Ctrl-C while make is running and then run make again later; does it start the build from the beginning?
The simple answer is: as long as all the target files that make creates still exist and have a newer timestamp than the files they depend on, they will not be rebuilt. Generally, to get the behavior you're trying to avoid, you would have to issue a make clean command or something similar.
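A minimal illustration of that timestamp rule, using a toy Makefile with a single hello.o target (the exact wording of make's messages varies by version):

# Makefile (toy example)
hello.o: hello.c
	gcc -c hello.c -o hello.o

$ make
gcc -c hello.c -o hello.o
$ make              # hello.o is newer than hello.c, so nothing to do
make: 'hello.o' is up to date.
$ touch hello.c     # the source is now newer than the object file
$ make              # ...so make recompiles it
gcc -c hello.c -o hello.o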
I downloaded the TightVNC source code from its website. Now I am trying to use gdb on its executable. The debugger successfully adds breakpoints on functions, but when I try to step through a function it says:
Single Stepping until exit from function func, which has no line number information
I think this is because the compilation wasn't done with the correct flags. I am searching the configuration files to work out how to enable them, but haven't managed to so far. I am not acquainted with Imakefiles and the like. Maybe someone who has done this before can help?
I am using GNU GCC and GDB on an Ubuntu machine.
You should compile with the -g flag.
If you are trying to learn the code, I would recommend "-g -O0". That will turn off the optimizer; gcc optimization can make it confusing to step through code.
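One way to get those flags into a Makefile-driven build is sketched below. TightVNC's Unix build uses Imake, so the flags may instead need to go into the generated Makefiles or the Imake configuration, but overriding CFLAGS/CXXFLAGS on the make command line is a reasonable first attempt:

make clean
make CFLAGS="-g -O0" CXXFLAGS="-g -O0"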