Can anyone point me to a good tutorial on creating a bootable Linux CD from scratch?
I need help with a fairly specialized problem: my firm sells an expansion card that requires custom firmware. Currently we use an extremely old live CD image of RH7.2 that we update with current firmware. Manufacturing puts the cards in a machine, boots off the CD, the CD writes the firmware, they power off and pull the cards. Because of this cycle, it's essential that the CD boot and shut down as quickly as possible.
The problem is that with the next generation of cards, I have to update the CD to a 2.6 kernel. It's easy enough to acquire a pre-existing live CD - but those all are designed for showing off Linux on the desktop - which means they take forever to boot.
Can anyone fix me up with a current How-To?
Update:
So, just as a final update for anyone reading this later - the tool I ended up using was "livecd-creator".
My reason for choosing this tool was that it is available for RedHat-based distributions like CentOS, Fedora, and RHEL - which are all distributions that my company already supports. In addition, while the project is very poorly documented, it is extremely customizable. I was able to create a minimal LiveCD and edit the boot sequence so that it booted directly into the firmware updater instead of a bash shell.
The whole job would have only taken an hour or two if there had been a README explaining the configuration file!
There are a couple of interesting projects you could look into.
But first: does it have to be a CD-ROM? That's probably the slowest possible storage (well, apart from tape, maybe) you could use. What about a fast USB stick, an IEEE 1394 hard disk, or maybe even an eSATA hard disk?
Okay, there are several Live-CDs that are designed to be very small, in order to, e.g., fit on a business-card-sized CD. Some were also designed to be booted from a USB stick, back when that meant 64-128 MiByte: Damn Small Linux is one of the best known ones; however, it uses a 2.4 kernel. There is a sister project called Damn Small Linux - Not, which has a 2.6 kernel (although it seems it hasn't been updated in years).
Another project worth noting is grml, a Live-CD for system administration tasks. It does not boot into a graphic environment, and is therefore quite fast; however, it still contains about 2 GiByte of software compressed onto a CD-ROM. But it also has a smaller flavor, aptly named grml-small, which only contains about 200 MiByte of software compressed into 60 MiByte.
Then there is Morphix, which is a Live-CD builder toolkit based on Knoppix. ("Morphable Knoppix"!) Morphix is basically a tool to build your own special purpose Live-CD.
The last thing I want to mention is MachBoot. MachBoot is a super-fast Live-CD. It uses various techniques to massively speed up the boot process. I believe they even trace the order in which blocks are accessed during booting and then remaster the ISO so that those blocks are laid out contiguously on the medium. Their current record is less than 6 seconds to boot into a full graphical desktop environment. However, this also seems to be stale.
One key piece of advice I can give is that most LiveCDs use a compressed filesystem called squashfs to cram as much data on the CD as possible. Since you don't need compression, you could run the mksquashfs step (present in most tutorials) with -noDataCompression and -noFragmentCompression to save on decompression time. You may even be able to drop the squashfs approach entirely, but this would require some restructuring. This may actually be slower depending on your CD-ROM read speed vs. CPU speed, but it's worth looking into.
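If you go that route, the mksquashfs invocation could look roughly like this (the directory and output names are only placeholders for wherever your unpacked live filesystem lives):
mksquashfs livecd-root/ livecd.squashfs -noDataCompression -noFragmentCompression # hypothetical paths; skips data and fragment compression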
This Ubuntu tutorial was effective enough for me to build a LiveCD based on 8.04. It may be useful for getting the feel of how a LiveCD is composed, but I would probably not recommend using an Ubuntu LiveCD.
If at all possible, find a minimal LiveCD and build up with only minimal stripping out, rather than stripping down a huge LiveCD like Ubuntu. There are some situations in which the smaller distros are using smaller/faster alternatives rather than just leaving something out. If you want to get seriously hardcore, you could look at Linux From Scratch, and include only what you want, but that's probably more time than you want to spend.
Creating Your Own Custom Ubuntu 7.10 Or Linux Mint 4.0 Live-CD With Remastersys
Depends on your distro. Here's a good article you can check out from LWN.net
There is a book I used which covers a lot of distros, though it does not cover creating a flash-bootable image. The book is Live Linux(R) CDs: Building and Customizing Bootables. You can use it with supplemental information from your distro of choice.
Debian Live provides the best tools for building a Linux Live CD. Webconverger uses Debian Live for example.
It's very easy to use.
sudo apt-get install live-helper # from Debian unstable, which should work fine from Ubuntu
lh_config # edit config/* to your liking
sudo lh_build
Related
I am doing a project on a Pandaboard using embedded Linux (an Ubuntu 12.10 Server prebuilt image) where I need to optimize the boot time. I need techniques or tools to measure the boot time, and techniques to optimize it. Any help would be appreciated.
Just remove applications that are not required from the /etc/init.d/rc file. Also put an echo after every process initialization and check which process takes the most time to start; if you find an application that takes a long time, debug that application, and so on.
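As a rough sketch of the echo idea (the log path and service name are just examples; a busybox date may not support %N, in which case plain seconds still help):
echo "$(date +%s.%N) starting networking" >> /tmp/boot-trace.log # timestamp before the service starts
/etc/init.d/networking start
echo "$(date +%s.%N) networking started" >> /tmp/boot-trace.log # timestamp after it returns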
There is also a program that can be helpful for finding the approximate boot-up time; check this link: Time Stamp.
First of all, the best thing to do is to compile your own kernel: get the source from the internet, run make xconfig, and then deselect everything you don't need.
Then create your own root filesystem using Buildroot, again using make xconfig to select or deselect everything you do or don't need.
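A minimal sketch of those two steps, assuming a downloaded kernel tarball and a Buildroot checkout (the version in the path is only an example):
tar xf linux-3.x.tar.xz && cd linux-3.x # example version
make xconfig # deselect everything you don't need
make -j4 # build the trimmed kernel
cd ../buildroot
make menuconfig # or 'make xconfig'; pick only the packages you need
make # builds the toolchain and the root filesystem image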
Hope this helps.
I had the same problem and solved it this way; the boot time is now clearly not the same ;)
EDIT: Everything you need will be here
To analyze the boot process, you can use Bootchart2; it's available on GitHub: https://github.com/mmeeks/bootchart
or Bootchart, from the Ubuntu packages:
sudo apt-get update
sudo apt-get install bootchart pybootchartgui
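After a reboot with the collector running, the chart is rendered with pybootchartgui; the exact options and the log location vary between versions and distros, so check pybootchartgui --help, but it is roughly:
pybootchartgui -f png -o boot-chart /var/log/bootchart.tgz # log path differs per distro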
There are broadly three areas where you can reduce boot time:
Bootloader:
Modify the linker script to initialize only the required hardware. Also, if you are booting from an SD card, merge the kernel and bootloader images to save time.
Kernel:
Remove unwanted modules from the kernel config. Also try both compressed and uncompressed images: if your CPU is fast enough, go with a compressed image, and check the decompression time required for the different compression types.
Filesystem:
FS size can be significantly reduced by removing unwanted binaries and libraries. Check the dependencies and keep only the ones that are required.
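One simple way to check those dependencies before deleting libraries from the target (the binary name and toolchain prefix are only examples):
ldd /usr/bin/my-app # lists the shared libraries the binary links against
arm-linux-gnueabi-readelf -d my-app | grep NEEDED # same check on the host for a cross-compiled binary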
For more techniques and information on tools that help in measuring the boot time please refer to the following link.
Refer to Training Material
The basic rule is: the fastest code is code that never gets loaded and run, so remove everything you don't need:
In U-Boot: don't load and run the full U-Boot at all; use FALCON mode and have the SPL load the Linux kernel and DTB directly.
In Linux: remove all drivers and other stuff you don't really need; load all drivers that are not essential for your core application as modules - and load them after your application has started. If you take this seriously, you may even want to start only one CPU core initially (and start the remaining ones after your application is running).
In user space: minimize the size of the root file system. Throw out anything you don't need; configure tools (like busybox) to contain only the really needed functionality; use efficient code (for example, link against musl libc instead of glibc - see the small sketch below), etc.
What can be achieved by combining all these measures can be seen in this video - and yes, the complete code for this optimization is available here.
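As a tiny illustration of the user-space point about efficient code: linking a small tool against musl instead of glibc might look like this (musl-gcc is the wrapper shipped with musl; the source file name is just an example):
musl-gcc -Os -static -o fwupdate fwupdate.c # small, self-contained static binary
strip fwupdate # typically far smaller than the glibc equivalent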
Optimizing the embedded Linux boot process requires modifications at three levels of the embedded Linux design.
Note: you will need the source code of the bootloader and the kernel.
Boot: the first step in optimizing and reducing the boot time of the board is optimizing the bootloader. First you should know what your bootloader is. If it is an open-source bootloader like U-Boot, you have the opportunity to modify and optimize it. In U-Boot there is a procedure to skip unnecessary system checks and just load the kernel image to RAM and start it; the documentation and instructions for this are available on the U-Boot website. By doing this you will save about 4-5 seconds of boot time.
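One concrete, safe example of that kind of U-Boot tweak is removing the autoboot delay from the environment (bootdelay is a standard U-Boot variable, but check your board's defaults):
setenv bootdelay 0 # at the U-Boot prompt: boot immediately instead of waiting for a key
saveenv # persist the change to the environment storage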
Kernel: for a quicker kernel, you should optimize the kernel in many areas. For editing the configuration you can use one of the Linux config menus; I always use the low-graphics one. It needs a few dependencies, and you can start it with this command:
$ make menuconfig
Our goal for the Linux kernel is a smaller kernel image and fewer modules to load at boot. First, change the compression algorithm from gzip to LZO: gzip takes a long time to extract the kernel, while LZO gives a quicker kernel decompression. Second, disable any unnecessary drivers or modules that you don't have on your board or don't use any more. By doing this you lose access to some devices from Linux, but you gain two things: less RAM usage and a quicker boot.
But please keep in mind that some drivers are necessary, and by disabling them you may lose main features you need (for example, if you disable the I2C driver in Linux you may no longer have a working HDMI interface) or, in the worst case, get a boot problem (such as a boot loop). Third, disable unused filesystems to reduce kernel size and boot time. Fourth, remove some of the compression algorithms to get a smaller kernel image.
Lastly, if you are using the U-Boot bootloader, create a uImage instead of a zImage. The steps above are the general, main actions; for a boot as quick as 1 second after power-on you will have to change more options.
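A sketch of those kernel-side changes, assuming a reasonably recent kernel tree (the load address is board-specific and only an example, and make uImage needs mkimage from u-boot-tools):
./scripts/config --disable KERNEL_GZIP --enable KERNEL_LZO # switch the kernel compression to LZO
make LOADADDR=0x80008000 uImage -j4 # build a uImage for U-Boot; LOADADDR depends on your board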
After these two base-layer modifications, we should optimize the boot process in user space (the root file system). Depending on which system you are using, there are different changes to make. For a basic root file system that has the packages needed to boot Linux, use systemd instead of the old SysV init, because systemd starts services in parallel and is faster. After that comes udev, where you should tweak how modules are loaded. If you have a graphical user interface, an easy trick for a big boot-time reduction is to start the GUI first and load the other modules after the GUI is up.
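If you do end up on systemd, it ships a tool for exactly this kind of measurement:
systemd-analyze # total time spent in kernel and user space (plus firmware/loader where available)
systemd-analyze blame # per-service startup times, slowest first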
If you do all of these tasks, you can get a quick boot time and a fast system to work with.
NanoBSD is a script that makes a light, small, in-memory FreeBSD copy. It is useful in embedded systems. Is there something similar to NanoBSD for Linux? Especially a feature like "everything is read-only at run-time", as mentioned here.
A lot of toolchain / system build systems build Linux root filesystems which are designed to run completely out of a ramdisc (rootfs / tmpfs). This means that everything is read/write at runtime, but it does not persist across reboots (a persistent FS can, of course, be mounted as a non-root FS).
The most well known of these is Busybox (with or without uclibc), which ships with various scripts to build very small-footprint Linux-based embedded systems (root FS is typically a few Mb only; just add a kernel). Busybox/Linux is not the same as GNU/Linux, but it is fairly similar - most things are simpler or have fewer options; some features are entirely absent or can be disabled at compile-time.
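For a feel of how small that gets, this is roughly the standard Busybox build flow (it installs a complete tree of applet symlinks into ./_install by default):
make defconfig # or 'make menuconfig' to trim the applet list further
make
make install # populates ./_install with the busybox binary and symlinks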
Linux is NOT an operating system like FreeBSD; rather, it is a kernel. You can choose to layer on it either the GNU C library and tools (which I think all major general-purpose distributions do) or something else - the latter being mostly used for smaller systems, including uclibc, Android, etc.
There are literally hundreds of toolchains, build environments and embedded distros of Linux, some only a couple of megabytes in size. Many also support some or many of the different processors Linux runs on (i386 and friends, ARM, Power, ...).
To get you started, a couple of projects I find interesting: OpenWrt and OpenEmbedded, and lpclinux (Linux for the NXP LPC3xxx ARM processors) - but there are really hundreds of them.
Some other resources
A very good source that (also) touches a number of issues specific to embedded systems is Linux from scratch. And this pdf gives some insight in the different available filesystems for an embedded Linux system.
I would take a look at TinyCore Linux.
It isn't really read-only, but it's nearly the same concept, and I think there is also a way to get the OS/binary part read-only while the config part stays writeable.
I have an application that runs on an embedded Linux device and every now and then changes are made to the software and occasionally also to the root file system or even the installed kernel.
In the current update system the contents of the old application directory are simply deleted and the new files are copied over it. When changes to the root file system have been made the new files are delivered as part of the update and simply copied over the old ones.
Now, there are several problems with the current approach and I am looking for ways to improve the situation:
The root file system of the target that is used to create file system images is not versioned (I don't think we even have the original rootfs).
The rootfs files that go into the update are manually selected (instead of a diff)
The update continually grows and that becomes a pita. There is now a split between update/upgrade where the upgrade contains larger rootfs changes.
I have the impression that the consistency checks in an update are rather fragile if at all implemented.
Requirements are:
The application update package should not be too large and it must also be able to change the root file system in the case modifications have been made.
An upgrade can be much larger and only contains the stuff that goes into the root file system (like new libraries, kernel, etc.). An update can require an upgrade to have been installed.
Could the upgrade contain the whole root file system and simply do a dd on the flash drive of the target?
Creating the update/upgrade packages should be as automatic as possible.
I absolutely need some way to do versioning of the root file system. This has to be done in a way, that I can compute some sort of diff from it which can be used to update the rootfs of the target device.
I already looked into Subversion since we use that for our source code but that is inappropriate for the Linux root file system (file permissions, special files, etc.).
I've now created some shell scripts that can give me something similar to an svn diff but I would really like to know if there already exists a working and tested solution for this.
Using such diff's I guess an Upgrade would then simply become a package that contains incremental updates based on a known root file system state.
What are your thoughts and ideas on this? How would you implement such a system? I prefer a simple solution that can be implemented in not too much time.
I believe you are looking at the problem the wrong way - any update which is not atomic (e.g. dd'ing a file system image, replacing files in a directory) is broken by design: if the power goes off in the middle of an update, the system is a brick, and for an embedded system, power can go off in the middle of an upgrade.
I have written a white paper on how to correctly do upgrade/update on embedded Linux systems [1]. It was presented at OLS. You can find the paper here: https://www.kernel.org/doc/ols/2005/ols2005v1-pages-21-36.pdf
[1] Ben-Yossef, Gilad. "Building Murphy-compatible embedded Linux systems." Linux Symposium. 2005.
I absolutely agree that an update must be atomic - I recently started an open source project with the goal of providing a safe and flexible way to do software management, with both local and remote updates. I know my answer comes very late, but maybe it can help you on future projects.
You can find sources for "swupdate" (the name of the project) at github.com/sbabic/swupdate.
Stefano
Currently, quite a few open source embedded Linux update tools are growing, each with a different focus.
Another one that is worth being mentioned is RAUC, which focuses on handling safe and atomic installations of signed update bundles on your target while being really flexible in the way you adapt it to your application and environment. The sources are on GitHub: https://github.com/rauc/rauc
In general, you can find a good overview and comparison of current update solutions on the Yocto Project wiki page about system updates:
https://wiki.yoctoproject.org/wiki/System_Update
Atomicity is critical for embedded devices; one of the reasons highlighted is power loss, but there could be others, like hardware or network issues.
Atomicity is perhaps a bit misunderstood; this is a definition I use in the context of updaters:
An update is always either completed fully, or not at all
No software component besides the updater ever sees a half installed update
Full image update with a dual A/B partition layout is the simplest and most proven way to achieve this.
For Embedded Linux there are several software components that you might want to update and different designs to choose from; there is a newer paper on this available here: https://mender.io/resources/Software%20Updates.pdf
File moved to: https://mender.io/resources/guides-and-whitepapers/_resources/Software%2520Updates.pdf
If you are working with the Yocto Project you might be interested in Mender.io - the open source project I am working on. It consists of a client and a server, and the goal is to make it much faster and easier to integrate an updater into an existing environment, without needing to redesign too much or spend time on custom/homegrown coding. It will also allow you to manage updates centrally with the server.
You can journal an update and divide your update flash into two slots. A power failure always returns you to the currently executing slot; the last step of an update is to modify the journal value. It is not atomic, yet there is no way to make it a brick, even if power fails at the moment of writing the journal flags. There is no such thing as a truly atomic update - I've never seen one in my life: iPhone, Android, my network switch - none of them are atomic. If you don't have enough room to do that kind of design, then fix the design.
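A very rough sketch of that dual-slot idea, assuming a U-Boot environment flag selects the slot (the device path, slot name and variable name are all placeholders):
dd if=rootfs-new.img of=/dev/mmcblk0p3 bs=1M conv=fsync # write the new image to the inactive slot
fw_setenv boot_slot B # only after the image is fully written, flip the flag the bootloader reads (fw_setenv ships with u-boot-tools)
reboot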
I recently read a post (admittedly it's a few years old) and it was advice for a fast number-crunching program:
"Use something like Gentoo Linux with 64 bit processors as you can compile it natively as you install. This will allow you to get the maximum punch out of the machine as you can strip the kernel right down to only what you need."
Can anyone elaborate on what they mean by stripping down the kernel? Also, as this post was about 6 years old, which current version of Linux would be best for this (to aid my Google searches)?
There is some truth in the statement, as well as something somewhat nonsensical.
You do not spend resources on processes you are not running. So as a first step I would try to minimise the number of processes running. For that, we quite enjoy the Ubuntu Server ISO images at work -- if you install from those, log in and run ps or pstree, you see a thing of beauty: six or seven processes. Nothing more. That is good.
That the kernel is big (in terms of source size or installation) does not matter per se. Much of this size stems from drivers you may not be using anyway. And the same rule applies again: what you do not run does not compete for resources.
So think about a headless server, stripped down -- rather than your average desktop installation with more than a screenful of processes trying to make the life of a desktop user easier.
You can create a custom linux kernel for any distribution.
Start by going to kernel.org and downloading the latest source. Then choose your configuration interface (these days you have the choice of console text 'config', ncurses-style 'menuconfig', KDE-style 'xconfig' and GNOME-style 'gconfig') and run make whateverconfig. After choosing all the options, type make to build the kernel, then make modules to compile all the selected modules for this kernel. make install will copy the files to your /boot directory, and make modules_install copies the modules. Next, go to /boot and use mkinitrd to create the RAM disk needed to boot properly, if needed. Then add the kernel to your GRUB menu.lst by editing menu.lst, copying the latest entry, and adding a similar one pointing to the new kernel version.
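Condensed into commands, the sequence looks roughly like this (the <version> placeholders and the initrd/menu.lst details depend on your distribution):
cd linux-<version> # unpacked source from kernel.org
make menuconfig # or config / xconfig / gconfig
make # build the kernel image
make modules # build the selected modules
make modules_install # install modules under /lib/modules/<version>
make install # copy the kernel and System.map to /boot
mkinitrd /boot/initrd-<version>.img <version> # if an initrd is needed
Then add a new entry for the kernel to /boot/grub/menu.lst by hand.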
Of course, that's a basic overview and you should probably search for 'linux kernel compile' to find more detailed info. Selecting the necessary kernel modules and options takes a bit of experience - if you choose the wrong options, the kernel might not be bootable and you'll have to start over, which is a pain because selecting the options and compiling the kernel can take 15-30 minutes.
Ultimately, it isn't going to make a large difference to compile a stripped-down custom kernel unless your given task is very, very performance sensitive. It makes sense to remove things you're never going to use from the kernel, though, like say ISDN support.
I'd have to say this question is more suited to SuperUser.com, by the way, as it's not quite about programming.
When I use top, the iowait on the host is really high.
iostat tells me which disk is being utilized the most, but how do I find out which process is the culprit?
I am trying to find this out on a Red Hat Linux host. Any suggestions?
EDIT: My Linux flavor has neither atop nor ntop, and building a kernel is not an option for me - don't ask why :) (this is not my personal box). Are there any other alternatives?
I usually use atop. There's a really good article at Debian Package A Day about it. It does require kernel patching (although Ubuntu already has the patch applied, I'm not sure about any other distributions.)
Use iotop.
Or you can get it standalone; it's a simple Python script which requires a recent kernel (I can't remember exactly which, but at least > 2.6.20).
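Typical iotop invocations for this kind of hunt (needs root):
sudo iotop -o # show only processes/threads that are actually doing I/O
sudo iotop -b -n 5 # batch mode, five iterations - handy for logging over time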