I have a Core i5 with 8 GB RAM.
I have VMware Workstation 10.0.1 installed on my machine.
I have Fedora 20 Desktop Edition installed on VMware as the guest OS.
I am working on Linux kernel source code v3.14.1, developing an I/O scheduler for the kernel. After any modification to the code, it takes around 1 hour and 30 minutes to compile and install the whole kernel in order to see the changes.
Compilation and installation commands:
make menuconfig
make
make modules
make modules_install
make install
So my question is: is it possible to reduce that 1 hour and 30 minutes to only 10 to 15 minutes?
Do not run make menuconfig for every change you make to the sources, because it can trigger a recompilation of nearly everything, no matter how trivial your change is. It is only needed when the kernel configuration changes, and that should seldom happen during your development.
Just do:
make
or, if you prefer parallel compilation:
make -j4
or whatever number of concurrent tasks you fancy.
Then make install, etc. may still be needed to deploy the freshly built binaries, of course.
Another trick is to configure the kernel down to the minimum needed for your tests. I've found that for many tasks a UML build (User Mode Linux) is the fastest. You may also find make localmodconfig useful instead of make menuconfig to start with.
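As a rough sketch, localmodconfig trims the configuration down to the modules currently loaded in the guest (the LSMOD file path here is just an example):
lsmod > /tmp/lsmod.txt
make LSMOD=/tmp/lsmod.txt localmodconfig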
Use make's parallel build with the -j option.
Compile for the target architecture only; passing ARCH explicitly ensures make builds exactly the architecture you intend.
That is, instead of running:
make
run:
make ARCH=<your architecture> -jN
where N is the number of cores on your machine (cat /proc/cpuinfo lists the number of cores). For example, for an i386 target and a host machine with 4 cores (per the output of cat /proc/cpuinfo):
make ARCH=i386 -j4
Similarly, you can run the other make targets (modules, modules_install, install) with the -jN flag.
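For example, continuing the i386 / 4-core illustration (the install targets typically need root):
make ARCH=i386 modules -j4
sudo make ARCH=i386 modules_install
sudo make ARCH=i386 install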
Note: make checks which files have been modified and recompiles only those, so only the initial build should take long; subsequent builds will be faster.
make -j without a number places no limit on the number of jobs, so it will use all available CPUs (at the risk of spawning far more jobs than you have cores).
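If you want the job count to match the core count automatically, and coreutils' nproc is available:
make -j"$(nproc)"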
You do not need to run make menuconfig again every time you make a change — it is only needed once to create the kernel .config file. (Or possibly again if you edit Kconfig files to add or modify configuration options, but this certainly shouldn't be happening often.)
So long as your .config is left alone, running make should only recompile files that you changed. There are a few files that must be compiled every time, but the vast majority are not.
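Since an I/O scheduler lives under block/, you can also compile just that directory as a quick sanity check of a change before kicking off the full build (this builds only the objects in that directory, not a bootable kernel):
make block/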
ccache should be able to dramatically speed up your compile times. It speeds up recompilation by caching previous compilations and detecting when the same compilation is being done again. Your first compilation with ccache will be slower since it needs to populate the cache, but subsequent builds should be much faster.
If you don't want to fuss with ccache configuration, you can simply point the kernel build at the wrapped compiler:
make CC="ccache gcc"
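Alternatively (an assumption about your distribution's packaging), most distributions ship ccache with a directory of compiler symlinks; putting it first in PATH routes every gcc call through the cache:
export PATH=/usr/lib64/ccache:$PATH   # Fedora; Debian/Ubuntu use /usr/lib/ccache
make -j4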
Perhaps in addition to the previous suggestions, while using ccache you might want to unset CONFIG_GCC_PLUGINS (if it was set), otherwise you may get a lot of cache misses, as seen in this example.
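If your tree is recent enough to have the scripts/config helper, one way to flip that option without opening menuconfig might be:
./scripts/config --disable GCC_PLUGINS
make olddefconfig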
Perhaps in addition to the previous suggestions, using ccache (https://ccache.samba.org/) and a build directory on an SSD should drastically decrease the compilation time.
If you have sufficient RAM and you won't be using your machine while the kernel is being built, you can spawn a large number of concurrent jobs. But make sure your RAM really is sufficient, otherwise your system will hang and crash.
Use this command:
sudo make -j 4 && sudo make modules_install -j 4 && sudo make install -j 4
Where 4 is the number of cores I have allotted to this process.
Simple trick: if you don't need to use your machine in the meantime (or have another one available), log out of the graphical session completely and switch to a TTY terminal using Ctrl + Alt + F*. Everything is much, much faster.
Related
I am compiling the Linux kernel in a VM (VirtualBox) with 2 of my 4 GB of RAM and 4 of my 8 CPUs allocated. My initial compilation took around 8-9 hours, and I was using make -j4 too. Now I added a simple system call to the kernel and just ran make -j4 again, and it has been compiling for the past 3 hours. I thought that after the initial compilation make would only compile the small changes, but it seems to be compiling everything (mostly the drivers). Is there any way I can speed up this compilation process?
For example, is there any way to disable some of the drivers that I don't really need? If I just want to implement a simple system call, I don't really need all the networking drivers, and maybe dropping them would speed it up. In other words, I just want the bare minimum functionality for my kernel to test my system calls.
Compiling the kernel will always take a long time; unfortunately there's no way around that besides having a really good processor with a lot of threads. However, in huge projects like this, ccache helps compilation times tremendously. It's not perfect, but it is far better than recompiling every object from scratch.
You won't see the difference on the initial compilation, but it will speed up recompilation by serving results from the cache it has generated instead of recompiling most of what has already been compiled before.
I am doing a project on a PandaBoard using embedded Linux (Ubuntu 12.10 Server prebuilt image) to optimize boot time. I need techniques or tools with which I can measure the boot time, and techniques to optimize it. Any help is appreciated.
Just remove the applications which are not required from the /etc/init.d/rc file. Also put an echo after every process initialization and check which process is taking the most time to start;
if you find an application which is taking too long, debug that application, and so on.
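A minimal sketch of that timing trick inside an init script (the service name and log path here are just placeholders):
# log a timestamp before and after starting a service
echo "$(date +%s.%N) starting foo" >> /var/log/boot-timing.log
/etc/init.d/foo start
echo "$(date +%s.%N) foo started" >> /var/log/boot-timing.log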
There is a program that can be helpful for finding the approximate boot-up time. Check this link:
Time Stamp.
First of all, the best thing to do is to compile your own kernel: get the source from the internet, do a make xconfig, and then deselect everything you don't need.
Next, create your own root filesystem using Buildroot, again using make xconfig to select/deselect exactly what you need.
Hope this helps.
I had the same problem and went this way; now it's clearly not the same ;)
EDIT: Everything you need will be here
To analyze the boot process, you can use Bootchart2; it's available on GitHub: https://github.com/mmeeks/bootchart
or Bootchart, from the Ubuntu packages:
sudo apt-get update
sudo apt-get install bootchart pybootchartgui
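After rebooting with bootchart enabled, the collected data can be rendered into a chart; a rough example (the collection path is the usual Ubuntu default, and the file name varies per host and date):
pybootchartgui /var/log/bootchart/*.tgz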
There are broadly three areas where you can reduce boot time:
Bootloader:
Modify the linker script to initialize only the required hardware. Also, if you are using an SD card to boot, merge the kernel and bootloader images to save time.
Kernel:
Remove unwanted modules from the kernel config. Also compare compressed and uncompressed images: if your CPU is fast enough to handle it, go with a compressed image, and check the decompression time required for the different compression types.
Filesystem:
The filesystem size can be reduced significantly by removing unwanted binaries and libraries. Check the dependencies and keep only the ones that are required.
For more techniques and information on tools that help in measuring the boot time please refer to the following link.
Refer to Training Material
The basic rule is: the fastest code is code that never gets loaded and run, so remove everything you don't need:
in U-Boot: don't load and run the full U-Boot at all; use FALCON mode and have the SPL load the Linux kernel and DTB directly.
in Linux: remove all drivers and other stuff you don't really need; build all drivers that are not essential for your core application as modules, and load them after your application has started (see the sketch after this answer). If you take this seriously, you may even want to start only one CPU core initially (and start the remaining ones after your application is running).
in user space: minimize the size of the root file system. Throw out anything you don't need; configure tools (like busybox) to contain only the really needed functionality; use efficient code (for example, link against musl libc instead of glibc), etc.
What can be achieved by combining all these measures can be seen in this video, and yes, the complete code for this optimization is available here.
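A tiny illustration of the deferred-module idea (the application and module names here are purely hypothetical):
# start the core application first, then load the non-essential drivers
/usr/bin/myapp &
modprobe some_nic_driver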
Optimizing the embedded Linux boot process needs modifications at three levels of the embedded Linux design.
Note: you will need the source code of the bootloader and the kernel.
Boot: the first step in optimizing and reducing the boot time of the board is optimizing the bootloader. First you should know what your bootloader is. If your bootloader is open source, like U-Boot, then you have the opportunity to modify and optimize it. In U-Boot there is a procedure by which you can skip unnecessary system checks and just load the kernel image into RAM and start it; the documentation and instructions for this are available on the U-Boot website. By doing this you will save about 4 to 5 seconds of boot time.
Kernel: for a quicker kernel, you should optimize the kernel in many areas. For editing the configuration you can use one of the Linux config menus. I always use the low-graphics (ncurses) menu; it needs a few dependencies, and you can start it with this command:
$ make menuconfig
Our goal for the Linux kernel is a smaller kernel image and fewer modules to load at boot. First, change the compression algorithm from gzip to LZO: gzip takes comparatively long to decompress the kernel, and with LZO we get a quicker kernel decompression. Second, disable any unnecessary driver or module that you don't have on your board or don't use any more. By doing this you will lose access to some devices and cannot use them in Linux, but you gain two things: less RAM usage and a quicker boot time.
But please remember that some drivers are necessary for Linux, and by disabling them you will lose main features that you need (for example, if you disable the I2C driver you may no longer have a working HDMI interface), or in the worst case you will have a boot problem (such as a boot loop). Third, disable unused filesystems to reduce kernel size and boot time. Fourth, remove some compression algorithms to get a smaller kernel image.
The last thing: if you are using the U-Boot bootloader, create a uImage instead of a zImage. The steps above are the general, main actions; to get a boot as quick as 1 second after power is applied you will have to change more options.
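As an illustrative sketch of two of those points (the config option names are from mainline Kconfig, the helper assumes a reasonably recent tree, and the load address is board-specific and purely an example):
# switch the kernel compression from gzip to LZO
./scripts/config --disable KERNEL_GZIP
./scripts/config --enable KERNEL_LZO
make olddefconfig
# build a uImage for U-Boot (needs mkimage from u-boot-tools)
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- LOADADDR=0x80008000 uImage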
After these two lower-layer modifications, we should optimize the boot process in user space (the root filesystem). Depending on which init system you are using, there are different changes to make. In a basic Linux root filesystem that has only the packages necessary to boot, we should use systemd instead of the old SysV init, because systemd initializes services in parallel and is faster. After that comes udev, where you should trim the modules it loads. If you have a graphical user interface, an easy trick for a big boot-time reduction is to start the GUI first and load the other modules after the GUI is up.
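If the image does end up using systemd, its built-in timing tools are a quick way to see where the boot time actually goes:
systemd-analyze
systemd-analyze blame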
If you do all of these tasks, you will have a quick boot time and a fast system to work with.
I have a question about changing the kernel timer frequency.
I compiled the kernel by using:
make menuconfig (with some changes in the config,
under Processor type and features -> Timer frequency, to change the frequency), then:
1. fakeroot make-kpkg --initrd --append-to-version=-mm kernel-image kernel-headers
2. export CONCURRENCY_LEVEL=3
3. sudo dpkg -i linux-image-3.2.14-mm_3.2.14-mm-10.00.Custom_amd64.deb
4. sudo dpkg -i linux-headers-3.2.14-mm_3.2.14-mm-10.00.Custom_amd64.deb
Then, if I want to change the kernel frequency again, what I do is:
I replace the .config file with my own config file
(since I want to do this automatically, without opening the make menuconfig UI)
and then I repeat steps 1, 2, 3, 4 again.
Is there any way to avoid repeating the above 4 steps?
Thanks a lot!!!!
The timer frequency is fixed in Linux (unless you build a tickless kernel - CONFIG_NO_HZ=y - but the upper limit will still be fixed). You cannot change it at runtime or at boot time. You can only change it at compile time.
So the answer is: no. You need to rebuild the kernel when you want to change it.
The kernel timer frequency (CONFIG_HZ) is not configurable at runtime - you will have to compile a new kernel when you change the setting and you will have to reboot the system with the new kernel to see the effects of any change.
If you are doing this a lot, though, you should be able to create a little shell script to automate the kernel configure/build/install process. For example it should not be too hard to automate the procedure so that e.g.
./kernel-prep-with-hz 100
would rebuild and install a new kernel, only requiring you to issue the final reboot command.
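A minimal sketch of what such a script could look like (kernel-prep-with-hz is just the hypothetical name from above; it assumes it is run from the kernel source tree, that the scripts/config helper is available, and that the requested value is one of the standard HZ choices):
#!/bin/bash
# usage: ./kernel-prep-with-hz 100
set -e
HZ="$1"
# deselect all the standard tick rates, then enable the requested one
for f in 100 250 300 1000; do
    ./scripts/config --disable HZ_"$f"
done
./scripts/config --enable HZ_"$HZ"
./scripts/config --set-val HZ "$HZ"
make olddefconfig
make -j"$(nproc)"
sudo make modules_install
sudo make install
echo "New kernel installed; reboot to use it."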
Keep in mind though, that the timer frequency may subtly affect various subsystems in unpredictable ways, although things have become a lot better since the tickless timer code was introduced.
Why do you want to do this anyway?
Maybe this will help. As the article says, you can change the CPU frequency between the available frequencies that your system supports. (Check whether CPUfreq is already enabled on your system.)
Example (mine):
#cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies
2000000 1667000 1333000 1000000
#echo 1000000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq
http://www.ibm.com/developerworks/linux/library/l-cpufreq-2/
I recently read a post (admittedly it's a few years old) and it was advice for a fast number-crunching program:
"Use something like Gentoo Linux with 64 bit processors as you can compile it natively as you install. This will allow you to get the maximum punch out of the machine as you can strip the kernel right down to only what you need."
Can anyone elaborate on what they mean by stripping down the kernel? Also, as this post was about 6 years old, which current version of Linux would be best for this (to aid my Google searches)?
There is some truth in the statement, as well as something somewhat nonsensical.
You do not spend resources on processes you are not running. So as a first step I would try to minimise the number of processes running. For that we quite enjoy the Ubuntu server ISO images at work -- if you install from those, log in, and run ps or pstree, you see a thing of beauty: six or seven processes. Nothing more. That is good.
That the kernel is big (in terms of source size or installation) does not matter per se. Much of this size stems from drivers you may not be using anyway. And the same rule applies again: what you do not run does not compete for resources.
So think about a headless server, stripped down -- rather than your average desktop installation with more than a screenful of processes trying to make the life of a desktop user easier.
You can create a custom linux kernel for any distribution.
Start by going to kernel.org and downloading the latest source. Then choose your configuration interface (these days you have the choice of console text 'config', ncurses-style 'menuconfig', KDE-style 'xconfig', and GNOME-style 'gconfig') and run make whateverconfig. After choosing all the options, type make to build the kernel, then make modules to compile all the selected modules. Then make install will copy the files to your /boot directory, and make modules_install copies the modules. Next, go to /boot and use mkinitrd to create the RAM disk needed to boot properly, if needed. Then add the kernel to GRUB's menu.lst by copying the latest entry and adding a similar one pointing to the new kernel version.
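A condensed version of that sequence as shell commands (assuming an initramfs is needed and a GRUB-legacy menu.lst, as described above; mkinitrd's arguments vary by distribution, and the version placeholders are yours to fill in):
make menuconfig
make -j"$(nproc)"
make modules
sudo make modules_install
sudo make install
# if your distribution doesn't generate an initramfs for you:
sudo mkinitrd /boot/initrd-<version>.img <version>
# then add a matching entry to /boot/grub/menu.lst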
Of course, that's a basic overview and you should probably search for 'linux kernel compile' to find more detailed info. Selecting the necessary kernel modules and options takes a bit of experience - if you choose the wrong options, the kernel might not be bootable and you'll have to start over, which is a pain because selecting the options and compiling the kernel can take 15-30 minutes.
Ultimately, it isn't going to make a large difference to compile a stripped-down custom kernel unless your given task is very, very performance sensitive. It makes sense to remove things you're never going to use from the kernel, though, like say ISDN support.
I'd have to say this question is more suited to SuperUser.com, by the way, as it's not quite about programming.
I am trying to build some big libraries, like Boost and OpenCV, from their source code via make and GCC under Ubuntu 8.10 on my laptop. Unfortunately, compiling those big libraries seems to be a big burden for my laptop (Acer Aspire 5000). Its fan gets louder and louder until, all of a sudden, the laptop shuts itself down without the OS gracefully turning off.
So I wonder how to reduce the compilation cost in case of make and GCC?
I wouldn't mind if the compilation took much longer or used more space, as long as it can finish without my laptop shutting itself down.
Is building the debug version of the libraries always less costly than building the release version, because there is no optimization?
Generally speaking, is it possible to specify just some part of a library to build and install instead of the full library? Can the rest be built and added in later if found necessary?
Is it correct that if I restart my laptop, I can resume compilation from around where it was when the laptop shut itself down? For example, I noticed that this is true for OpenCV: the progress percentage shown during its compilation does not restart from 0%. But I am not sure about Boost, since there is no obvious indicator to tell, and the compilation seems to take much longer.
UPDATE:
Thanks, brianegge and Levy Chen! How do I use the wrapper script for GCC and/or g++? Is it like defining an alias for gcc or g++? And how would I call a script to check the sensors and wait until the CPU temperature drops before continuing?
I'd suggest creating a wrapper script for gcc and/or g++
#!/bin/bash
# pause briefly before each compile to give the CPU a chance to cool down
sleep 10
exec gcc "$@"
Save the above as "gccslow" or something, and then:
export CC="gccslow"
Alternatively, you can call the script gcc and put it at the front of your path. If you do that, be sure to include the full path in the script, otherwise, the script will call itself recursively.
A better implementation could call a script to check sensors and wait until the CPU temperature drops before continuing.
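A hedged sketch of that idea (it assumes the CPU temperature is exposed under /sys/class/thermal/thermal_zone0, which varies between machines, and uses 70 °C as an arbitrary threshold):
#!/bin/bash
# wait until the CPU has cooled down, then run the real compiler
while [ "$(( $(cat /sys/class/thermal/thermal_zone0/temp) / 1000 ))" -ge 70 ]; do
    sleep 10
done
exec gcc "$@"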
For your latter question: a well-written Makefile will define dependencies as a directed acyclic graph (DAG), and it will try to satisfy those dependencies by compiling them in the order dictated by the DAG. Thus, once a file is compiled, the dependency is satisfied and the file need not be compiled again.
It can, however, be tricky to write good Makefiles, and thus sometimes the author will resort to a brute-force approach and recompile everything from scratch.
For your question: for such well-known libraries, I will assume the Makefile is written properly, and that the build should resume from the last operation (with the caveat that it needs to rescan the DAG and recalculate the compilation order, which should be relatively cheap).
Instead of compiling the whole thing, you can compile each target separately. You have to examine the Makefile to identify them.
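For CMake-generated build trees (such as OpenCV's), the generated Makefile usually includes a help target that lists the individual targets; the target name below is just a hypothetical example:
make help
make opencv_core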
Tongue-in-cheek: What about putting the laptop into the fridge while compiling?