Embedded Linux – mechanism for deploying firmware updates?

I am considering developing on the Yocto Project for an embedded Linux project (an industrial application), and I have a few questions for those with experience with embedded Linux in general (Yocto experience is a bonus). I just need an idea of what is commonly done for firmware updates.
I have a few requirements: authentication, a secure communications protocol, and some type of rollback if the update fails. Also, if there is a way to gradually release a patch across the fleet of devices, that would be interesting, as I want to avoid bricking devices in the field.
How do you deploy updates/patches to field devices today, and how long did it take to develop the mechanism? Are there any other considerations I am missing?

Although you certainly can use rpm, deb, or ipk for your upgrades, my preferred way (at least for small to reasonably sized images) is to have two images stored on flash, and to only update complete rootfs images.
Today I would probably look at meta-swupdate if I were to start working with embedded Linux using OpenEmbedded / Yocto Project.
What I've been using for myself and multiple clients is something more like this:
A container upgrade file, which is a tarball containing another tarball (hereafter called the upgrade file), the md5sum of the upgrade file, and often a GPG signature.
An updater script stored in the running image. This script is responsible for unpacking the outer container, verifying the correctness of the upgrade file using the md5sum, and often verifying a cryptographic signature (normally GPG based). If the upgrade file passes these tests, the updater script looks for an upgrade script inside it and executes that (a minimal sketch follows this list).
The upgrade script inside the upgrade file performs the actual upgrade, i.e. it normally rewrites the non-running image, extracts and rewrites the kernel, and, if these steps are successful, instructs the bootloader to use the newly written kernel and image instead of the currently running system.
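A minimal sketch of such an updater script, assuming hypothetical file names (update.tar, update.tar.md5, update.tar.sig) and an inner script called run_upgrade.sh:

    #!/bin/sh
    # Hedged sketch of the updater described above; all file names are assumptions.
    set -e

    CONTAINER="$1"                 # outer container tarball delivered to the device
    WORK=/tmp/upgrade
    rm -rf "$WORK" && mkdir -p "$WORK"

    # 1. Unpack the outer container: payload, checksum, signature
    tar -xf "$CONTAINER" -C "$WORK"
    cd "$WORK"

    # 2. Verify integrity and authenticity before touching anything
    md5sum -c update.tar.md5
    gpg --verify update.tar.sig update.tar

    # 3. Unpack the payload and hand control to the upgrade script it carries
    mkdir payload && tar -xf update.tar -C payload
    sh payload/run_upgrade.sh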
The benefit of having the script that performs the actual upgrade inside the upgrade file is that you can do whatever you need in the future in a single step. I've made special upgrade images that upgrade the firmware of attached modems, or that extract some extra diagnostics information instead of performing an actual upgrade. This flexibility will pay off in the future.
To make the system even more reliable, the bootloader uses a feature called bootcount, which counts the number of boot attempts; if this number gets above a threshold, e.g. 3, the bootloader boots the other image instead (as the image configured to be booted is considered faulty). This ensures that if the image is completely corrupt, the other, stored image will automatically be booted.
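With U-Boot, for instance, the wiring looks roughly like this; boot_primary and boot_secondary are placeholder commands, and the exact environment handling depends on how bootcount is configured for your board:

    # U-Boot environment (sketch): fall back to the other image after 3 failed boots
    setenv bootlimit 3
    setenv bootcmd 'run boot_primary'
    setenv altbootcmd 'run boot_secondary'
    # altbootcmd is run automatically once bootcount > bootlimit
    saveenv

    # From userspace, once the new image has booted and passed its health checks,
    # clear the counter so the chosen image stays selected:
    fw_setenv bootcount 0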
The main risk with this scheme is upgrading to an image whose upgrade mechanism is broken. Normally, we also implement some kind of restoration mechanism in the bootloader, so that the bootloader can reflash a completely new system; though this rescue mechanism usually means that the data partition (used to store configurations, databases, etc.) will also be erased. This is partly for security (not leaking information) and partly to ensure that after the rescue operation the system state is completely known to us, which is a great benefit when it is performed by an inexperienced technician far away.

If you have enough flash storage, you can do the following. Make two identical partitions, one for the live system, the other for the update. Let the system pull the updated image over a secure channel and write it directly to the other partition. This can be as simple as plugging in a flash drive, with the USB socket behind a locked plate (physical security), or using ssh/scp with appropriate host and user keys. Swap the partitions with sfdisk, or edit the settings of your bootloader, but only if the image was downloaded and written correctly. If not, nothing happens: the old firmware lives on, and you can retry later. If you need gradual releases, let the clients select an image based on the last byte of their MAC address. All this can be implemented with a few simple shell scripts in a few hours. Or a few days if you actually test it :)
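A rough sketch of that flow, with hypothetical partition, server, and bootloader-variable names, and a U-Boot-style fw_setenv used to switch the boot partition:

    #!/bin/sh
    # Sketch: pull an image, write it to the inactive partition, switch only on success.
    set -e

    IMAGE_SRC=updates.example.com:/images/rootfs.img   # placeholder server/path
    SPARE_PART=/dev/mmcblk0p3                          # the non-running partition (assumption)

    # Gradual rollout: only devices whose MAC address ends in 0x00-0x3f take this wave
    LAST_BYTE=$(awk -F: '{print $6}' /sys/class/net/eth0/address)
    [ $((0x$LAST_BYTE)) -lt 64 ] || exit 0

    scp "$IMAGE_SRC" /tmp/rootfs.img
    dd if=/tmp/rootfs.img of="$SPARE_PART" bs=1M conv=fsync

    # Verify what actually landed on flash before switching over
    SRC_SUM=$(md5sum < /tmp/rootfs.img | cut -d' ' -f1)
    DST_SUM=$(head -c "$(stat -c%s /tmp/rootfs.img)" "$SPARE_PART" | md5sum | cut -d' ' -f1)
    if [ "$SRC_SUM" = "$DST_SUM" ]; then
        fw_setenv rootpart 3    # which variable the bootloader reads is board-specific
    fi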

Anders' answer is complete, exhaustive, and very good. The only things I can add as suggestions are to think about the following:
Does your application have an internet connection, USB port, or SD card through which to receive a complete new rootfs? Working with embedded Linux is not like writing a 128 KB firmware image to a Cortex-M3.
Does your end user have the ability to carry out the update?
Is your application installed in an accessible location, or is it remotely controlled?
As for how long it takes to develop a complete, robust, stable solution: that is not a simple question, but note that the update mechanism is a key part of an application and affects how the market perceives it, especially in the early days and months after first deployment, when it is common to push updates to fix small and large teething bugs.

Related

Easy ways to separate DATA (Dev-Environments/apps/etc) partition from Linux System minimal sized OS partition? Docker or Overlayfs Overlayroot? Other?

Back when the powers that be didn't squeeze the middle class as much and there was time to waste "fooling around", I used to compile everything from scratch from .tgz archives, chase down dependencies manually, and make install to a local directory.
Sadly, there's no more time for such l(in)uxuries these days, so I need a quick, lazy way to keep my 16GB Linux boot OS partition as small as possible and have apps/software/development environments and other data on a separate partition.
I can deal with mounting my home dir on another partition, but my remaining issue is with /var and /usr etc. and all the stuff that gets installed there every time I apt-get some packages. I end up with a trillion dependencies installed because the author of a 5kb app decided not to include a 3kb parser and wanted me to install another 50MB package to get that 3kb library :) yay!
Of course, later when I uninstall those packages, all the dependencies that got installed and are never really needed anymore get left behind.
But anyway, the point is I don't want to have to manually compile, spend hours chasing down dependencies so I can compile and install to my own paths, and then have to tinker with a bunch of configuration files. So after some research, this is the best I could come up with; did I miss some easier solution?
Use OVERLAYFS and Overlayroot to do an overlay of my root / partition on my secondary drive or partition so that my Linux OS is never written to anymore but everything will be transparently written to the other partition.
I like the idea of this method and I want to know who uses it and if it's working out well (a sample overlayroot.conf is sketched below). What I like is that this way I can continue to be lazy and just blindly apt-get install a toolchain, and everything should work as normal without any special tinkering with each app's config files to change paths.
It's also nice that dependencies will be easily reused by the different apps.
Any problems I haven't foreseen with this method? Is anyone using this solution?
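For reference, the Ubuntu overlayroot package is configured through /etc/overlayroot.conf; something along these lines points all writes at a second partition (the device path is an assumption, and the exact option syntax should be checked against the comments shipped in your version's overlayroot.conf):

    # /etc/overlayroot.conf (sketch): keep / read-only, send all writes to /dev/sdb1
    overlayroot="device:dev=/dev/sdb1"
    # or, to discard all writes on every reboot:
    # overlayroot="tmpfs"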
DOCKER or Other Application Containers, libvirt/lxc etc?
This might be THE WAY to go? With this method, I assume I should install ALL the apps I want to try out inside ONE Docker container, otherwise I will be wasting storage space by duplicating dependencies in each container? Or do Docker and other app containers do DEDUPLICATION of files/libs across containers?
Does this work fine for graphical/X-Windows apps inside Docker/containers?
If you know of something easier than Overlayfs/overlayroot or Docker/LXC to accomplish what I want, and that's not any more hassle to set up, please tell me. Thanks.
After further research and testing/trying out Docker for this, I've decided that "containers" like Docker are the easy way to go for installing apps you may want to purge later. It seems that this technique already uses the overlayfs/overlayroot kind of technique under the hood to make use of dependencies already installed in the host, and installs the other needed dependencies in the Docker image. So basically I gain the same advantages as the manual overlayroot technique I talked about, yet without having to do the work to set all that up.
So yep, I'm a believer in application containers now! Even works for GUI apps.
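For GUI apps, the usual trick is to share the host's X11 socket with the container; a minimal sketch (the image name is a placeholder):

    # Run a GUI app from a container by passing through the host's X11 socket
    # You may also need to relax X access control first:  xhost +local:
    docker run --rm \
        -e DISPLAY="$DISPLAY" \
        -v /tmp/.X11-unix:/tmp/.X11-unix \
        some-gui-image              # hypothetical image name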
Now I can keep a very lightweight, small main root partition and simply install anything I want to try out inside app containers, deleting them when done.
This also solves the problem of lingering, no-longer-needed dependencies, which I'd have to deal with myself if I were manually doing an overlayroot over the main /.

How does RunKit make their virtual servers?

There are many websites providing cloud coding, such as Cloud9 and repl.it. They must use server virtualisation technologies. For example, Cloud9's workspaces are powered by Docker Ubuntu containers. Every workspace is a fully self-contained VM (see details).
I would like to know if there are other technologies for building a sandboxed environment. For example, RunKit seems to have a lightweight solution:
It runs a completely standard copy of Node.js on a virtual server created just for you. Every one of npm's 300,000+ packages are pre-installed, so try it out.
Does anyone know how RunKit achieves this?
You can see more in "Tonic is now RunKit - A Part of Stripe!" (see discussion):
we attacked the problem of time traveling debugging not at the application level, but directly on the OS by using the bleeding edge virtualization tools of CRIU on top of Docker.
The details are in "Time Traveling in Node.js Notebooks"
we were able to take a different approach thanks to an ambitious open source project called CRIU (which stands for checkpoint and restore in user space).
The name says it all. CRIU aims to give you the same checkpointing capability for a process tree that virtual machines give you for an entire computer.
This is no small task: CRIU incorporates a lot of lessons learned from earlier attempts at similar functionality, and years of discussion and work with the Linux kernel team. The most common use case of CRIU is to allow migrating containers from one computer to another.
The next step was to get CRIU working well with Docker.
Part of that setup is being open-sourced, as mentioned in this Hacker News thread.
It uses linux containers, currently powered by Docker.
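To get a feel for how CRIU and Docker fit together, Docker's experimental checkpoint support (which relies on CRIU being installed and the daemon running with experimental features enabled) exposes it roughly like this; the container and checkpoint names are just placeholders:

    # Start a long-running container, freeze it, and later resume it exactly where it stopped
    docker run -d --name demo busybox sh -c 'i=0; while true; do echo $i; i=$((i+1)); sleep 1; done'

    docker checkpoint create demo cp1       # dump the process tree to disk via CRIU
    docker start --checkpoint cp1 demo      # restore and continue from the checkpoint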

Best practices for maintaining configuration of embedded linux system for production

I have a small single board computer which will be running a linux distribution and some programs and has specific user configuration, directory structure, permissions settings etc.
My question is, what is the best way to maintain the system configuration for release? In my time thinking about this problem I've thought of a few ideas but each has its downsides.
Configure the system and burn the image to an ISO file for distribution.
This one has the advantage that the system will be configured precisely the way I want it, but committing an ISO file to a repository is less than desirable since it is quite large, and checking out a new revision means reflashing the system.
Install a base OS (which is version locked) and write a shell script to configure the settings from scratch.
This one has the advantage that I could maintain the script in a repository and apply config changes by pulling changes to the script and running it again; however, now I have to maintain a shell script to configure a system, and it's another place where something can go wrong (a minimal sketch of such a script follows below).
I'm wondering what the best practices are in embedded in general so that I can maybe implement a good deployment and maintenance strategy.
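For option 2, such a configuration script, kept in version control next to the config files it installs, might look roughly like this (package names, users, and paths are all placeholders):

    #!/bin/sh
    # Sketch of a version-controlled provisioning script; everything named here is an example
    set -e

    # packages the product needs
    apt-get update && apt-get install -y lighttpd sqlite3

    # application user, directory structure and permissions
    useradd -r -s /usr/sbin/nologin appuser || true
    mkdir -p /opt/app/bin /var/lib/app
    chown -R appuser:appuser /var/lib/app
    chmod 750 /var/lib/app

    # drop in configuration files tracked alongside this script
    cp -r ./config/* /etc/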
Embedded systems tend to have a long lifetime. Do not be surprised if you need to refer to something released today in ten years' time. Make an ISO of the whole setup, source code, diagrams, everything... and store it away redundantly. Someone will be glad you did a decade from now. Just pretend it's going to last forever and that you'll have to answer a question or research a defect in ten years.

maintain multiple versions of Windows CE (QFEs)

We build firmware using Windows CE (6 and 7) on a Windows XP system. We often install the QFEs (CE patches/updates) from Microsoft as they are released. When we have to go back to a certain release to develop a patch, it can be a real pain because we will need to build a system with the same patch level that existed on the system at the time that the product was released. Is there any easy way to maintain a QFE history that can easily be reverted at any given time? Something along the lines of snapshotting the system state as it pertains to the CE install/QFEs at each release? We don't want to use virtual machine snapshots or anything that controls the state of anything outside of the Windows CE components for this. It is a pretty specific requirement, so I am guessing no, but perhaps someone has tackled this exact problem.
I understand that you're saying you don't want to use VMs, though I'm not entirely sure why. I'd recommend at least thinking about it.
Back when I controlled builds for multiple platforms across multiple OS versions, I used virtual machines for this. Each VM was a bare snapshot of a PC with the tools and SDKs installed. A build script would then pull the source for each BSP and build it nightly. The key is to maintain and archive "clean" VMs (without source) and just pitch the changes after doing builds. It was way faster and way cleaner than trying to keep the WINCEROOT for each QFE level in source control and pulling that - you have to reset the machine to zero in that case anyway to be confident there is no cross-pollution between levels.

How accurate is "everything is a file" in regards to Linux?

Suppose I were to set up an Ubuntu machine and install some services and software on it. Further suppose I were to set up another stock Ubuntu machine, this time without the additional services and software. I know there are ways of creating installation/setup scripts or taking disk images and such to build large numbers of identical machines. But if I were to programmatically take a file-based diff between the installations and migrate all file additions/changes/removals from the fully configured system to the stock system, would I then have two identical, working systems (i.e. a full realization of the 'everything is a file' Linux philosophy), or would the newly configured system be left in an inconsistent state because simply transferring files isn't enough? I am excluding hostname references and such from my definitions of identical and inconsistent.
I ask this because I need to create a virtual machine, install a bunch of software, and add a bunch of content to tools like Redmine, and in the near future I'm going to have to mirror that onto another VM. I cannot simply take a disk image because the source I receive the second VM from does not give me that sort of access, and the VM will have different specs. I also cannot go with an installation-script-based approach at this point because that would require a lot of overhead, would not account for the added user content, and I won't know everything that is going to be needed on the first VM until our environment is stable. The approach I asked about above seems to be a roundabout but reasonable way to get things done, so long as its assumptions are theoretically accurate.
Thanks.
Assuming that the two systems are largely identical in terms of hardware (that is, same network cards, video cards, etc.), simply copying the files from system A to system B is generally entirely sufficient. In fact, at my workplace we have used exactly this process as a "poor man's P2V" mechanism in a number of successful cases.
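In practice that copy is often done with rsync from the running source system, skipping the kernel-generated pseudo-filesystems; a sketch (hostname is a placeholder):

    # Run on system B: copy system A's files over, preserving ACLs, xattrs and hard links
    rsync -aAXH root@systemA:/ / \
        --exclude=/proc --exclude=/sys --exclude=/dev \
        --exclude=/run --exclude=/tmp --exclude=/lost+found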
If the two systems have different hardware configurations, you may need to make appropriate adjustments on the target system to take this into account.
UUID Mounts
If you have UUID based mounts -- e.g., your /etc/fstab looks like this...
UUID=03b4f2f3-aa5a-4e16-9f1c-57820c2d7a72 /boot ext4 defaults 1 2
...then you will probably need to adjust those identifiers. A good solution is to use label based mounts instead (and set up the appropriate labels, of course).
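Switching to labels is straightforward on ext filesystems: label the filesystem and reference the label in fstab (device name and label below are examples):

    # Give the filesystem a label, then mount by LABEL instead of UUID
    e2label /dev/sda1 boot

    # /etc/fstab
    LABEL=boot    /boot    ext4    defaults    1 2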
Network cards
Some distributions record the MAC address of your network card as part of the network configuration and will refuse to configure your NIC if the MAC address is different. Under RHEL-derivatives, simply removing the MAC address from the configuration will take care of this. I don't think this will be an issue under Ubuntu.
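On a RHEL-style system, the MAC address lives in the interface configuration file; for example (typical path and values, adjust for your system):

    # /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    HWADDR=00:11:22:33:44:55    # remove or correct this line on the cloned system
    BOOTPROTO=dhcp
    ONBOOT=yes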
