I have an application that runs on an embedded Linux device and every now and then changes are made to the software and occasionally also to the root file system or even the installed kernel.
In the current update system, the contents of the old application directory are simply deleted and the new files are copied over them. When changes to the root file system have been made, the new files are delivered as part of the update and simply copied over the old ones.
Now, there are several problems with the current approach and I am looking for ways to improve the situation:
The root file system of the target that is used to create file system images is not versioned (I don't think we even have the original rootfs).
The rootfs files that go into the update are manually selected (instead of a diff)
The update continually grows, and that is becoming a pain. There is now a split between update and upgrade, where the upgrade contains the larger rootfs changes.
I have the impression that the consistency checks in an update are rather fragile, if implemented at all.
Requirements are:
The application update package should not be too large, and it must also be able to change the root file system in case modifications have been made.
An upgrade can be much larger and only contains the stuff that goes into the root file system (like new libraries, kernel, etc.). An update can require an upgrade to have been installed.
Could the upgrade contain the whole root file system image and simply dd it onto the target's flash?
Creating the update/upgrade packages should be as automatic as possible.
I absolutely need some way to do versioning of the root file system. This has to be done in a way that lets me compute some sort of diff from it, which can then be used to update the rootfs of the target device.
I already looked into Subversion since we use that for our source code but that is inappropriate for the Linux root file system (file permissions, special files, etc.).
I've now created some shell scripts that can give me something similar to an svn diff, but I would really like to know if there already exists a working and tested solution for this.
Using such diffs, I guess an upgrade would then simply become a package that contains incremental updates based on a known root file system state.
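For concreteness, the kind of diff I have in mind can be sketched with rsync (all paths and version numbers here are just examples):
rsync -a --compare-dest=/build/rootfs-1.0/ /build/rootfs-1.1/ /build/delta-1.0-to-1.1/   # -a preserves permissions, owners, symlinks and device nodes
rsync -a /mnt/update/delta-1.0-to-1.1/ /                                                 # applying the delta on the target
Note that --compare-dest only captures new and changed files; deletions would still need a separate file list.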
What are your thoughts and ideas on this? How would you implement such a system? I prefer a simple solution that can be implemented in not too much time.
I believe you are looking at the problem the wrong way: any update which is non-atomic (e.g. dd'ing a file system image, replacing files in a directory) is broken by design. If the power goes off in the middle of an update, the system is a brick, and for an embedded system, power can go off in the middle of an upgrade.
I have written a white paper on how to correctly do upgrade/update on embedded Linux systems [1]. It was presented at OLS. You can find the paper here: https://www.kernel.org/doc/ols/2005/ols2005v1-pages-21-36.pdf
[1] Ben-Yossef, Gilad. "Building Murphy-compatible embedded Linux systems." Linux Symposium. 2005.
I absolutely agree that an update must be atomic. I recently started an open-source project whose goal is to provide a safe and flexible way to do software management, with both local and remote updates. I know my answer comes very late, but maybe it can help you on future projects.
You can find sources for "swupdate" (the name of the project) at github.com/sbabic/swupdate.
Stefano
Currently, quite a few open-source embedded Linux update tools are emerging, each with a different focus.
Another one worth mentioning is RAUC, which focuses on safe and atomic installation of signed update bundles on your target, while being really flexible in how you adapt it to your application and environment. The sources are on GitHub: https://github.com/rauc/rauc
In general, you can find a good overview and comparison of current update solutions on the Yocto Project wiki page about system updates:
https://wiki.yoctoproject.org/wiki/System_Update
Atomicity is critical for embedded devices. Power loss is one of the most commonly highlighted reasons, but there are others, such as hardware or network issues.
Atomicity is perhaps a bit misunderstood; this is a definition I use in the context of updaters:
An update is always either completed fully, or not at all
No software component besides the updater ever sees a half installed update
Full image update with a dual A/B partition layout is the simplest and most proven way to achieve this.
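One common implementation, sketched here with U-Boot (the device names, the image file, and the bootslot variable are assumptions, and your boot script would have to select the root partition based on that variable):
dd if=rootfs-new.img of=/dev/mmcblk0p3 bs=1M   # write the full image to the inactive slot
sync                                           # make sure the data hits the flash
fw_setenv bootslot B                           # a single environment write flips the active slot
reboot
With U-Boot's redundant environment feature enabled, the fw_setenv step itself survives power loss, which is what makes the switchover effectively atomic.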
For Embedded Linux there are several software components that you might want to update and different designs to choose from; there is a newer paper on this available here: https://mender.io/resources/Software%20Updates.pdf
File moved to: https://mender.io/resources/guides-and-whitepapers/_resources/Software%2520Updates.pdf
If you are working with the Yocto Project, you might be interested in Mender.io - the open source project I am working on. It consists of a client and a server, and the goal is to make it much faster and easier to integrate an updater into an existing environment, without needing to redesign too much or spend time on custom homegrown coding. It will also allow you to manage updates centrally with the server.
You can journal an update and divide your update flash into two slots. A power failure always returns you to the currently executing slot; the last step is to modify the journal value. Non-atomic, and no way to make it a brick, even if it fails at the moment of writing the journal flags. There is no such thing as an atomic update. Ever. I have never seen one in my life. iPhone, Android, my network switch -- none of them are atomic. If you don't have enough room to do that kind of design, then fix the design.
I am looking for a program that would allow me to mirror one partition to another disk (something like RAID 1) for Linux. It doesn't have to be a GUI application; a console application is fine. I just want what is in one place to be mirrored to another.
It would be nice if it were possible to mirror only a specific folder that I care about instead of copying everything from the given partition.
I have been looking on the internet, but it's hard to find something that offers these capabilities, hence this question.
I do not want to use fake RAID on Linux or hardware RAID, because I read that if the motherboard fails, it is best to have an identical second board to recover the data.
I will be grateful for every suggestion :)
You can check my script "CopyDirFile", written in Bash, which is on GitHub.
It can perform a replication (mirroring) task from any source folder to a destination folder (deleting a file in the source folder also deletes it in the destination folder).
The script also allows you to create copy tasks (files deleted in the source folder will not be deleted in the target folder).
Tasks are executed in the background at a specified time rather than continuously; the frequency is set by the user when creating the task.
You can also set the task to start automatically when the user logs on.
All the necessary information can be found in the README file in repository.
If I understood you correctly, I think it meets your requirements.
Linux has standard support for software RAID: mdraid.
It allows you to bundle two disk devices into a RAID 1 device (among other things); you then create a filesystem on top of that device.
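For example, creating a RAID 1 mirror from two partitions might look like this (the device names are examples):
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # build the mirror
sudo mkfs.ext4 /dev/md0          # create a filesystem on top of the RAID device
sudo mount /dev/md0 /mnt/mirror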
LVM offers another way to do software RAID; it doesn't seem to be very popular, but it's certainly supported.
(If your system supports hardware RAID, on the motherboard or with a separate RAID controller, Linux can use that, too, but that doesn't seem to be what you're asking here.)
I have a Linux busybox-based system on a chip. I want to provide an update to users in the field, and this requires updating some files in /lib, /usr/bin, and /etc. I don't think it's safe to simply untar the files directly. Is there a safe way to do this, including /lib files that may be in use?
Some things I strongly prefer in embedded systems:
a) Have the root file system be a ramdisk uncompressed from an image in flash. This is great because you can experimentally monkey around with it to your heart's content, and if you mess up, all you need is a reboot to get back to the flashed configuration. When you have tested a set of changes you like, you generate a new compressed root filesystem image and flash that.
b) Use a bootloader such as u-boot to do your updates - flashing a new complete image - rather than trying to change the linux system while it is running. Though since the flashed copy isn't live, you can actually flash it while running. If you flash a bad version, u-boot is still there to flash a good one.
c) Processors which have mask-ROM UART (or even USB) bootloaders, making the system un-brickable - nothing more than a laptop and a serial cable or USB/serial converter is ever needed to do maintenance (i.e., get a working u-boot image onto the flash, which you then use to get a working Linux kernel + compressed root fs image onto it).
Ideally your flash device is big enough to partition into two complete filesystems; each update then writes the side not currently in use (copying over config files if necessary) and updates the boot configuration to boot from the updated side.
Less ideal is to update in place, but with some means of detecting boot failure (a watchdog that isn't touched until after boot, for example) and a smaller fallback partition which is capable of accepting another update and fixing the primary partition.
As far as the in-place update of a live filesystem goes, just use a real installer (which will move the target files out of the way before replacing them, to avoid the problem you describe).
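The trick a real installer relies on is that rename(2) is atomic and that running processes keep the old inode open; a sketch, with a made-up library name:
cp libfoo.so.1.2 /lib/libfoo.so.1.2.new           # stage the new file on the same filesystem
mv -f /lib/libfoo.so.1.2.new /lib/libfoo.so.1.2   # atomic rename; running processes keep the old inode until restarted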
You received two excellent answers above, and I strongly encourage you to do what you were advised to do.
There is, however, a simpler way. As a matter of fact, you can just untar your libraries, provided that the process that does this is statically linked.
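For instance, with a statically linked busybox (the archive name is an example):
file /bin/busybox                           # should report "statically linked"
/bin/busybox tar -xzf update.tar.gz -C /    # safe even while it overwrites the shared libraries it would otherwise depend on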
I recently read a post (admittedly a few years old) with advice for a fast number-crunching program:
"Use something like Gentoo Linux with 64 bit processors as you can compile it natively as you install. This will allow you to get the maximum punch out of the machine as you can strip the kernel right down to only what you need."
Can anyone elaborate on what they mean by stripping down the kernel? Also, as this post is about six years old, which current version of Linux would be best for this (to aid my Google searches)?
There is some truth in the statement, as well as something somewhat nonsensical.
You do not spend resources on processes you are not running. So as a first step I would try to minimise the number of running processes. For that we quite enjoy the Ubuntu Server ISO images at work -- if you install from those, log in and run ps or pstree, you see a thing of beauty: six or seven processes. Nothing more. That is good.
That the kernel is big (in terms of source size or installation) does not matter per se. Much of that size stems from drivers you may not be using anyway. And the same rule applies again: what you do not run does not compete for resources.
So think about a headless server, stripped down -- rather than your average desktop installation with more than a screenful of processes trying to make the life of a desktop user easier.
You can create a custom linux kernel for any distribution.
Start by going to kernel.org and downloading the latest source. Then choose your configuration interface (these days you have the choice of console text 'config', ncurses-style 'menuconfig', KDE-style 'xconfig', and GNOME-style 'gconfig') and run make menuconfig (or whichever you chose). After choosing all the options, type make to build the kernel, then make modules to compile all the selected modules for this kernel. Then make install will copy the files to your /boot directory, and make modules_install copies the modules. Next, go to /boot and use mkinitrd to create the RAM disk needed to boot properly, if needed. Finally, add the kernel to your GRUB menu.lst by copying the latest entry and adding a similar one pointing to the new kernel version.
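Condensed into commands, the sequence looks roughly like this (the version number is an example):
tar -xf linux-4.19.tar.xz && cd linux-4.19
make menuconfig            # or config / xconfig / gconfig
make                       # build the kernel image
make modules               # build the selected modules
sudo make install          # copy the kernel files to /boot
sudo make modules_install  # install the modules under /lib/modules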
Of course, that's a basic overview and you should probably search for 'linux kernel compile' to find more detailed info. Selecting the necessary kernel modules and options takes a bit of experience - if you choose the wrong options, the kernel might not be bootable and you'll have to start over, which is a pain because selecting the options and compiling the kernel can take 15-30 minutes.
Ultimately, it isn't going to make a large difference to compile a stripped-down custom kernel unless your given task is very, very performance sensitive. It makes sense to remove things you're never going to use from the kernel, though, like say ISDN support.
I'd have to say this question is more suited to SuperUser.com, by the way, as it's not quite about programming.
I am currently working on a PXE bootable environment that I would like to put into revision control.
The filesystem and server will both be Linux (SLES if you must know).
I've considered using some kind of hack that stores file ownership/permissions via getfacl -R -P, but this doesn't cover symlinks or devices. And it's kind of ugly.
Tricky things that I need to be covered:
file ownership
file permissions (ACLs are not necessary)
devices
symlinks
Are there any revision control systems that will cover my needs for this?
Note: Rather than a "block volume", I need to put a "set of files" into revision control and keep individual changes.
I'm pretty sure that Perforce will handle all of those things. I believe another group at work uses it for nearly the exact same purpose that you list (storing a variant of linux for an embedded device).
etckeeper is a tool that puts the contents of /etc under revision control (with a choice of revision control systems). It has a layer on top that takes care of permissions, symlinks, etc. I'm not sure if it handles device nodes - probably not - but it could probably easily be extended to do so.
You may find that this tool can be adapted to handle an entire filesystem.
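To give a feel for it, basic usage on a Debian-style system looks like this (the commit message is an example):
sudo apt-get install etckeeper                   # uses git by default
sudo etckeeper init                              # put /etc under revision control
sudo etckeeper commit "baseline before changes"  # record a version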
You can use versioning filesystems, that is, filesystems with version control built into the filesystem code itself.
Some examples:
NILFS
Ext3cow
In principle, any filesystem that supports snapshots could be used.
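For instance, with a snapshot-capable filesystem such as Btrfs (named here only as an illustration; the paths are examples), capturing a version is a one-liner:
sudo btrfs subvolume snapshot -r / /snapshots/rootfs-v1   # read-only snapshot of the root subvolume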
More information: Versioning file system on Wikipedia.
If your file system is not extremely huge, you might just store it all in a single image file, version-control that binary file, and then mount it to work with it. It won't give you pretty file-level control, but it might work acceptably for some tasks.
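A minimal sketch of that single-file approach (sizes and paths are examples):
dd if=/dev/zero of=rootfs.img bs=1M count=512   # create a 512 MiB image file
mkfs.ext4 -F rootfs.img                         # make a filesystem inside it
sudo mount -o loop rootfs.img /mnt/rootfs       # loop-mount it to edit files; unmount and commit rootfs.img when done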
Can anyone point me to a good tutorial on creating a bootable Linux CD from scratch?
I need help with a fairly specialized problem: my firm sells an expansion card that requires custom firmware. Currently we use an extremely old live CD image of RH7.2 that we update with current firmware. Manufacturing puts the cards in a machine, boots off the CD, the CD writes the firmware, they power off and pull the cards. Because of this cycle, it's essential that the CD boot and shut down as quickly as possible.
The problem is that with the next generation of cards, I have to update the CD to a 2.6 kernel. It's easy enough to acquire a pre-existing live CD - but those are all designed for showing off Linux on the desktop - which means they take forever to boot.
Can anyone fix me up with a current How-To?
Update:
So, just as a final update for anyone reading this later - the tool I ended up using was "livecd-creator".
My reason for choosing this tool was that it is available for RedHat-based distributions like CentOS, Fedora, and RHEL - all distributions that my company already supports. In addition, while the project is very poorly documented, it is extremely customizable. I was able to create a minimal LiveCD and edit the boot sequence so that it booted directly into the firmware updater instead of a bash shell.
The whole job would have only taken an hour or two if there had been a README explaining the configuration file!
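For later readers, the invocation is roughly like this (the kickstart file name and label are examples):
sudo livecd-creator --config=firmware-updater.ks --fslabel=Firmware-Updater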
There are a couple of interesting projects you could look into.
But first: does it have to be a CD-ROM? That's probably the slowest possible storage (well, apart from tape, maybe) you could use. What about a fast USB stick, an IEEE 1394 hard disk, or maybe even an eSATA hard disk?
Okay, there are several live CDs that are designed to be very small, in order to fit on, e.g., a business-card-sized CD. Some were also designed to be booted from a USB stick, back when that meant 64-128 MiByte: Damn Small Linux is one of the best-known ones; however, it uses a 2.4 kernel. There is a sister project called Damn Small Linux - Not, which has a 2.6 kernel (although it seems it hasn't been updated in years).
Another project worth noting is grml, a Live-CD for system administration tasks. It does not boot into a graphic environment, and is therefore quite fast; however, it still contains about 2 GiByte of software compressed onto a CD-ROM. But it also has a smaller flavor, aptly named grml-small, which only contains about 200 MiByte of software compressed into 60 MiByte.
Then there is Morphix, which is a Live-CD builder toolkit based on Knoppix. ("Morphable Knoppix"!) Morphix is basically a tool to build your own special purpose Live-CD.
The last thing I want to mention is MachBoot. MachBoot is a super-fast Live-CD. It uses various techniques to massively speed up the boot process. I believe they even trace the order in which blocks are accessed during booting and then remaster the ISO so that those blocks are laid out contiguously on the medium. Their current record is less than 6 seconds to boot into a full graphical desktop environment. However, this also seems to be stale.
One key piece of advice I can give is that most LiveCDs use a compressed filesystem called squashfs to cram as much data on the CD as possible. Since you don't need compression, you could run the mksquashfs step (present in most tutorials) with -noDataCompression and -noFragmentCompression to save on decompression time. You may even be able to drop the squashfs approach entirely, but this would require some restructuring. This may actually be slower depending on your CD-ROM read speed vs. CPU speed, but it's worth looking into.
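The corresponding mksquashfs invocation would look roughly like this (the paths are examples):
mksquashfs rootfs/ filesystem.squashfs -noDataCompression -noFragmentCompression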
This Ubuntu tutorial was effective enough for me to build a LiveCD based on 8.04. It may be useful for getting the feel of how a LiveCD is composed, but I would probably not recommend using an Ubuntu LiveCD.
If at all possible, find a minimal LiveCD and build up with only minimal stripping out, rather than stripping down a huge LiveCD like Ubuntu. There are some situations in which the smaller distros are using smaller/faster alternatives rather than just leaving something out. If you want to get seriously hardcore, you could look at Linux From Scratch, and include only what you want, but that's probably more time than you want to spend.
Creating Your Own Custom Ubuntu 7.10 Or Linux Mint 4.0 Live-CD With Remastersys
Depends on your distro. Here's a good article you can check out from LWN.net
There is a book I used which covers a lot of distros, though it does not cover creating a flash-bootable image. The book is Live Linux(R) CDs: Building and Customizing Bootables. You can use it with supplemental information from your distro of choice.
Debian Live provides the best tools for building a Linux Live CD. Webconverger uses Debian Live for example.
It's very easy to use.
sudo apt-get install live-helper # from Debian unstable, which should work fine from Ubuntu
lh_config # edit config/* to your liking
sudo lh_build