Maintain multiple versions of Windows CE (QFEs)

We build firmware using Windows CE (6 and 7) on a Windows XP system. We often install the QFEs (CE patches/updates) from Microsoft as they are released. When we have to go back to a certain release to develop a patch, it can be a real pain because we will need to build a system with the same patch level that existed on the system at the time that the product was released. Is there any easy way to maintain a QFE history that can easily be reverted at any given time? Something along the lines of snapshotting the system state as it pertains to the CE install/QFEs at each release? We don't want to use virtual machine snapshots or anything that controls the state of anything outside of the Windows CE components for this. It is a pretty specific requirement, so I am guessing no, but perhaps someone has tackled this exact problem.

I understand that you're saying you don't want to use VMs, though I'm not entirely sure why. I'd recommend at least thinking about it.
Back when I controlled builds for multiple platforms across multiple OS versions, I used virtual machines for this. Each VM was a bare snapshot of a PC with the tools and SDKs installed. A build script would then pull the source for each BSP and build it nightly. The key is to maintain and archive "clean" VMs (without source) and just pitch the changes after doing builds. It was far faster and cleaner than trying to keep the WINCEROOT for each QFE level in source control and pulling that - you have to reset the machine to zero in that case anyway to be confident there is no cross-pollution between levels.
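A rough sketch of that nightly flow, assuming VMware's `vmrun` CLI; the VM paths, snapshot names, and the in-guest build step are all hypothetical:

```shell
#!/bin/sh
# Nightly build sketch: one archived "clean" VM per QFE level.
# Revert each VM to its clean snapshot, build, then discard changes.
for vm in ce6-qfe-2009-06 ce7-qfe-2011-03; do
    vmrun revertToSnapshot "/vms/$vm.vmx" clean-tools-only
    vmrun start "/vms/$vm.vmx" nogui
    # Inside the guest: pull the BSP source, build, and copy artifacts
    # out (e.g. via vmrun runProgramInGuest; credentials omitted here).
    vmrun stop "/vms/$vm.vmx" soft
done
```

Reverting to the snapshot before every build is what guarantees no cross-pollution between QFE levels.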

Related

Best practices for maintaining configuration of embedded linux system for production

I have a small single-board computer which will run a Linux distribution and some programs, and which has specific user configuration, directory structure, permission settings, etc.
My question is, what is the best way to maintain the system configuration for release? In my time thinking about this problem I've thought of a few ideas but each has its downsides.
Configure the system and burn the image to an iso file for distribution
This one has the advantage that the system will be configured precisely the way I want it, but committing an iso file to a repository is less than desirable since it is quite large and checking out a new revision means reflashing the system.
Install a base OS (which is version locked) and write a shell script to configure the settings from scratch.
This one has the advantage that I could maintain the script in a repository and roll out config changes by pulling updates to the script and running it again. However, now I have to maintain a shell script to configure a system, and it's another place where something can go wrong.
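A minimal sketch of such a configuration script. All names here (`/opt/app`, the `config-rev` marker) are hypothetical; making the script idempotent and parameterizing the target root keeps it re-runnable and testable off-device:

```shell
#!/bin/sh
# configure.sh - idempotent configuration sketch for the target system.
# ROOT defaults to a scratch directory so the script can be dry-run on
# a build host; on the real device you would run it with ROOT=/.
set -eu
ROOT="${ROOT:-$(mktemp -d)}"

# Directory structure with explicit permissions.
mkdir -p "$ROOT/etc" "$ROOT/opt/app/bin" \
         "$ROOT/opt/app/data" "$ROOT/var/log/app"
chmod 755 "$ROOT/opt/app/bin"
chmod 700 "$ROOT/opt/app/data"

# Record the configuration revision so a deployed unit can report it.
echo "config-rev: 42" > "$ROOT/etc/app-release"
```

Because every step is idempotent, re-running the script after a change converges the system rather than breaking it.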
I'm wondering what the best practices are in embedded in general so that I can maybe implement a good deployment and maintenance strategy.
Embedded systems tend to have a long lifetime. Do not be surprised if you need to refer to something released today in ten years' time. Make an ISO of the whole setup, source code, diagrams, everything... and store it away redundantly. Someone will be glad you did a decade from now. Just pretend it's going to last forever and that you'll have to answer a question or research a defect in ten years.
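A minimal archival sketch along those lines, using `genisoimage`; the directory names and volume label are made up:

```shell
#!/bin/sh
# Gather everything needed to rebuild and understand the release.
mkdir -p /archive/rel-1.4
cp -a source/ docs/ diagrams/ toolchain/ /archive/rel-1.4/

# Burn it to an ISO with Rock Ridge (-r) and Joliet (-J) extensions
# so the filenames survive on both Unix and Windows readers.
genisoimage -r -J -V "REL_1_4" -o rel-1.4.iso /archive/rel-1.4

# Record a checksum so you can verify the redundant copies years later.
md5sum rel-1.4.iso > rel-1.4.iso.md5
```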

Containers - What are their benefits, if they can't run across platform

I read on the internet that "containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries - anything you can install on a server".
I also read that linux containers cannot run on windows.
The benefits of containers are stated as: "Containers run as an isolated process in userspace on the host operating system."
I don't understand: if containers are not platform independent, what are we actually achieving with them?
1) All applications on a Linux box already run as isolated processes in their own userspace anyway.
2) If containers only contain app code + runtimes + tools + libraries, those can be shipped together. What are containers gaining us here?
Posting the comment as an answer:
If containers only contain app code + runtimes + tools + libraries,
they can be shipped together. What are containers gaining us here?
Suppose there is an enterprise with thousands of employees, all of whom work in Visual Studio C++. The administrator can create a container with VS installed (only the C++ components) and configured, and deploy that container to all employees. The employees can start working immediately without bothering with installation and configuration of the application. And if an employee somehow corrupts the application, they only need to download the container again and they are good to go.
Sandboxing
Security
Maintenance
Mobility
Backup
Many more to go.
Are containers platform independent?
IMHO, I don't think so, as they rely on the host's system calls. Though I am open to other ideas if anybody knows this topic better.
Even only considering one platform, containers have their advantages; just not perhaps the ones you need right now. :-) Containers help in administration/maintenance of complex IT systems. With containers you can easily isolate applications, their configuration, and their users, to achieve:
Better security (if someone breaks in, damage is usually limited to one container)
Better safety (if something breaks, or e.g. you make an error, only applications in a given container will be victim to this)
Easier management (containers can be started/stopped separately and can be transferred to another host - granted, a host with the same OS; in the case of Linux containers, the host must also be Linux)
Easier testing (you can create and dispose of containers at will, anytime)
Lighter backup (you can back up just the container, not the whole host)
Some form of increased availability (with proper pre-configuration and automatic failover of a container to another host, you can be back up and running more quickly if the primary host fails)
...just to name the first advantages coming to mind.
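Several of these points can be illustrated with plain Docker commands; the image and container names below are hypothetical:

```shell
#!/bin/sh
# Isolation: two apps run side by side without seeing each other.
docker run -d --name app-a -v appa-data:/data myorg/app-a:1.2
docker run -d --name app-b myorg/app-b:2.0

# Management: stop and restart one app without touching the other.
docker stop app-a
docker start app-a

# Lighter backup: export just this container's filesystem, not the host.
docker export app-a > app-a-backup.tar

# Disposable testing: spin up a throwaway copy, gone when you exit.
docker run --rm -it myorg/app-a:1.2 /bin/sh
```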

Using Vagrant, why is puppet provisioning better than a custom packaged box?

I'm creating a virtual machine to mimic our production web server so that I can share it with new developers to get them up to speed as quickly as possible. I've been through the Vagrant docs, however I do not understand the advantage of using a generic base box and provisioning everything with Puppet versus packaging a custom box with everything already installed and configured. All I can think of is:
Advantages of using Puppet vs custom packaged box
Easy to keep everyone up to date - ability to put manifests under version control and share the repo so that other developers can simply pull new updates and re-run Puppet, i.e. 'vagrant provision'.
Environment is documented in the manifests.
Ability to use Puppet modules defined in the production environment to ensure identical environments.
Disadvantages of using Puppet vs custom packaged box
Takes longer to write the manifests than to simply install and configure a custom packaged box.
Building the virtual machine the first time would take longer using Puppet than simply downloading a custom packaged box.
I feel like I must be missing some important details, can you think of any more?
Advantages:
As dependencies may change over time, building a new box from scratch will involve either manually removing packages, or throwing the box away and repeating the installation process by hand all over again. You could obviously automate the installation with a bash or some other type of script, but you'd be making calls to the native OS package manager, meaning it will only run on the operating system of your choice. In other words, you're boxed in ;)
As far as I know, Puppet (like Chef) contains a generic and operating system agnostic way to install packages, meaning manifests can be run on different operating systems without modification.
Additionally, those same scripts can be used to provision the production machine, meaning that the development machine and production will be practically identical.
Disadvantages:
Having to learn another DSL, when you may not be planning on ever switching your OS or production environment. You'll have to decide if the advantages are worth the time you'll spend setting it up. Personally, I think that having an abstract and repeatable package management/configuration strategy will save me lots of time in the future, but YMMV.
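For comparison, the two workflows boil down to something like this at the command line (box and file names are made up):

```shell
#!/bin/sh
# Custom packaged box workflow (fast to consume, opaque to maintain):
vagrant package --output webdev.box   # snapshot the hand-configured VM
vagrant box add webdev webdev.box     # teammates import the binary box

# Puppet-provisioned workflow (slower first build, but repeatable):
git pull              # fetch the updated manifests from version control
vagrant up            # bring up the base box and run the provisioner
vagrant provision     # re-apply the manifests after a config change
```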
One great advantage not explicitly mentioned above is that you'd be documenting your setup (properly), and your documentation will be the actual setup - not a (one-time) description of how things were, or may have been intended to be.

Linux development environment for a small team

Approach (A)
From my experience I saw that for a small team there's a dedicated server with all development tools (e.g. compiler, debugger, editor, etc.) installed on it. Testing is done on a dedicated per-developer machine.
Approach (B)
At my new place there's a team utilizing a different approach. Each developer has a dedicated PC which is used as both a development and a testing machine. For testing, an in-house platform is installed on the PC to run the application on. The platform executes several modules in kernel space and several processes in user space.
Problem
Now there are an additional 2 small teams (~6 developers in all) joining to work on exactly the same OS and development environment. These teams don't use the platform mentioned above and can execute their applications on plain Linux, so they have no need for a dedicated testing machine. We'd like to adopt approach (A) for all 3 teams, but the server must be stable, and installing the in-house platform described above on it is highly undesirable.
What would you advise?
What is the practice for development environments in your workplace - one server per team, or a dedicated PC/server per developer?
Thanks
Dima
We've started developing on VMs that run on the individual developers' computers, with a common subversion repository.
Benefits:
Developers work on multiple projects simultaneously; one VM per project.
It's easy to create a snapshot (or simply to copy the VM) at any time, particularly before those "what happens if I try something clever" moments. A few clicks will restore the VM to its previous (working) state. For you, this means you needn't worry about kernel-space bugs "blowing up" a machine.
Similarly, it's trivial to duplicate one developer's environment so, for example, a temporary consultant can help troubleshoot. Best-practices warning: It's tempting to simply copy the VM each time you need a new development machine. Be sure you can reproduce the environment from your repository!
It doesn't really matter where the VMs run, so you can host them either locally or on a common server; the developers can still either collaborate or work independently.
Good luck — and enjoy the luxury of 6 additional developers!

LINUX: Upgrading a production machine

Our production machines are running Debian Etch. Now that Lenny has finally been released, the day will come when we need to upgrade these systems. How can I do this with minimal risk? Are there any prerequisites, preparations, or fall-back scenarios, and do I need a plan B in case something goes wrong? Besides the binary packages handled by the Debian package manager, there are a couple of compiled applications running on the machines.
Personally I wouldn't upgrade any OS on an important server. OS upgrades always have the potential for subtle bugs, whether it's Windows, Linux or anything else. Debian has got better than it used to be in this regard; dist-upgrade doesn't hose the machine nearly as often as it used to back in the day. But for production machines there is no point in risking it.
Set up new servers with a fresh OS and application deployment and swap them in as needs arise. There is no need to hurry to replace Etch companywide in one go. It will be supported with security updates for a while yet.
Having just gone through that transition for some dev boxes, I wanted to point out that you'll probably want to recompile any custom libraries that you'll be linking against. Lenny uses GCC 4.3, whereas Etch uses 4.1. The output from either compiler isn't very compatible with the other. You may need to install the gcc-4.1 package to do things like compile custom kernel modules.
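That recompile on a Lenny box might look like the following; the `gcc-4.1` and `g++-4.1` packages are the real Debian package names, while the autoconf-style build commands are a generic assumption about the custom libraries:

```shell
#!/bin/sh
# Install the Etch-era compiler alongside Lenny's default GCC 4.3.
apt-get install gcc-4.1 g++-4.1

# Rebuild the custom libraries against the older compiler, so they
# stay link-compatible with code built on Etch (e.g. kernel modules).
CC=gcc-4.1 CXX=g++-4.1 ./configure
make clean && make
```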
If you're using 3rd party tools that have a plugin interface, you may have challenges there. I've been having troubles getting Matlab plugins (mex files) to work.
I'd suggest starting with a test system. After hammering it for a while and verifying that everything's working, switch it to be a production box.
Most people don't update production servers for exactly this reason - if it's working correctly, you wouldn't update unless you had a compelling reason.
Assuming you have a dev box built similarly to the production machine, you can simulate the update on the dev box.
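A sketch of simulating that Etch-to-Lenny upgrade on the dev box; this is the standard Debian in-place upgrade procedure, but check the official Lenny release notes for caveats before running it anywhere important:

```shell
#!/bin/sh
# On the dev box that mirrors production - never on production first.
cp /etc/apt/sources.list /etc/apt/sources.list.etch  # keep a way back
sed -i 's/etch/lenny/g' /etc/apt/sources.list

apt-get update
apt-get -d dist-upgrade   # download only: preview what would change
apt-get dist-upgrade      # then perform the actual upgrade

aptitude search '~o'      # afterwards: list now-obsolete packages
```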
