What was the reason for the non-preemptibility of older Linux kernels? - linux

Why did the first Linux developers choose to implement a non-preemptive kernel? Was it to save on synchronization?
As far as I know, Linux was developed in the early '90s, when PCs had a single processor. What advantage does a non-preemptive kernel offer on such PCs, and why is that advantage diminished on multi-core processors?

Remember that Linux was intended to be somewhat compatible with the versions of Unix already out there, notably System V and BSD.
Unix of that era was very primitive compared to the commercial operating systems available at the time, and in many ways remains so to this day. The big selling point of Unix in the 1990s was "open systems." Unix allowed the various upstart computer companies (e.g., Apollo, Sun) to have an operating system without doing much operating system development. They were able to turn the really poor quality of Unix compared to commercial operating systems of the day (e.g., VMS) into an advantage as an "open system."
One of the many features lacking in Unix was a preemptive kernel. If you were building a Unix clone, there was little reason to create one.
There are DEC and IBM systems that run for years without rebooting. It's amazing how far backwards we have gone.
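As for the synchronization angle raised in the question: on a single processor, a non-preemptive kernel guarantees that a kernel code path runs until it voluntarily yields, so shared kernel data needs no locking against other kernel paths (only against interrupt handlers). A hedged illustration in C, using a pthread mutex as a stand-in for a kernel spinlock (this is illustrative, not actual kernel code):

    /* Illustrative only -- not actual kernel source. */
    #include <pthread.h>

    static unsigned long open_files;   /* hypothetical shared kernel counter */

    /* Non-preemptive uniprocessor kernel: nothing else can run between
       the load and the store below, so no lock is required. */
    void bump_nonpreemptive(void) {
        open_files++;                  /* safe: we cannot be preempted */
    }

    /* Preemptive and/or SMP kernel: another CPU, or a higher-priority
       task, can interleave with the update, so a lock is mandatory. */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void bump_preemptive(void) {
        pthread_mutex_lock(&lock);     /* stand-in for a kernel spinlock */
        open_files++;
        pthread_mutex_unlock(&lock);
    }

This is also why the advantage evaporates on multi-core machines: with a second CPU executing kernel code concurrently, the locking is needed anyway, so the simplicity bought by non-preemption is largely lost.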

Related

Is an operating system kernel an interpreter for all other programs?

So, from my understanding, there are two types of programs: those that are interpreted and those that are compiled. Interpreted programs are executed by an interpreter that is a native application for the platform it's on, and compiled programs are themselves native applications (or system software) for the platform they are on.
But my question is this: is anything besides the kernel actually being directly run by the CPU? A Windows Executable is a "Windows Executable", not an x86 or amd64 executable. Does that mean every other process that's not the kernel is literally being interpreted by the kernel in the same way that a browser interprets Javascript? Or is the kernel placing these processes on the "bare metal" that the kernel sits on top of?
If they're on the "bare metal", how does, say, Windows know that a program is a Windows program and not a Linux program, since they're both compiled for amd64 processors? If it's because of the "format" of the executable, how is that executable able to run on the "bare metal", since, to me, the fact that it's formatted to run on a particular OS would mean that some interpretation would be required for it to run.
Is this question too complicated for Stack Overflow?
They run on the "bare metal", but they do contain operating system-specific things. An executable file will typically provide some instructions to the kernel (which are, arguably, "interpreted") as to how the program should be loaded into memory, and the file's code will provide ways for it to "hook" in to the running operating system, such as by an operating system's API or via device drivers. Once such a non-interpreted program is loaded into memory, it runs on the bare metal but continues to communicate with the operating system, which is also running on the bare metal.
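To make the "format" point concrete, here is a minimal sketch (not the kernel's actual loader) that inspects the magic bytes a loader checks. The ELF magic (0x7F 'E' 'L' 'F') and the PE file's leading "MZ" are real; everything else is illustration:

    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv) {
        unsigned char magic[4] = {0};
        FILE *f;

        if (argc < 2) {
            fprintf(stderr, "usage: %s <executable>\n", argv[0]);
            return 1;
        }
        f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }
        fread(magic, 1, 4, f);
        fclose(f);

        if (memcmp(magic, "\x7f" "ELF", 4) == 0)
            puts("ELF header: a Linux/Unix loader knows how to map this");
        else if (memcmp(magic, "MZ", 2) == 0)
            puts("MZ/PE header: the Windows loader knows how to map this");
        else
            puts("no recognized loader metadata");
        return 0;
    }

Either way, once loaded, the instructions inside are executed directly by the CPU; only the loading metadata and the system-call conventions differ.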
In the days of single-process operating systems, it was common for executables to essentially "seize" control of the entire computer and communicate with hardware directly. Computers like the Apple ][ and the Commodore 64 work like that. In a modern multitasking operating system like Windows or Linux, applications and the operating system share use of the CPU via a complex multitasking arrangement, and applications access the hardware via a set of abstractions built in to the operating system's API and its device drivers. Take a course in Operating System design if you are interested in learning lots of details.
Bouncing off Junaid's answer, the way that the kernel blocks a program from doing something "funny" is by controlling the allocation and usage of memory. The kernel requires that memory be requested and accessed through it via its API, and thus protects the computer from "unauthorized" access. In the days of single-process operating systems, applications had much more freedom to access memory and other things directly, without involving the operating system. An application running on an old Apple ][ can read from or write to any address in RAM that it wants to on the entire computer.
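A small sketch of that protection, assuming a POSIX/Linux system: we ask the kernel for a read-only page and then write to it, and the kernel (via the MMU) stops the write with SIGSEGV rather than letting it happen:

    #include <signal.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static void on_segv(int sig) {
        (void)sig;
        /* keep the handler async-signal-safe: write() and _exit() only */
        const char msg[] = "kernel blocked the write (SIGSEGV)\n";
        write(STDERR_FILENO, msg, sizeof msg - 1);
        _exit(0);
    }

    int main(void) {
        char *p;

        signal(SIGSEGV, on_segv);
        /* ask the kernel to map one read-only page for us */
        p = mmap(NULL, 4096, PROT_READ, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        p[0] = 'x';   /* unauthorized write: never completes */
        return 0;     /* never reached */
    }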
One of the reasons why a compiled application won't just "run" on another operating system is that these "hooks" are different for different operating systems. For example, an application that knows how to request the allocation of RAM from Windows might not have any idea how to request it from Linux or the Mac OS. As Disk Crasher mentioned, these low level access instructions are inserted by the compiler.
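A hedged sketch of such a "hook" difference: the same logical request, one page of memory, goes through a different OS entry point on each platform (VirtualAlloc and mmap are the real platform APIs; the wrapper function is just for illustration):

    #ifdef _WIN32
    #include <windows.h>
    void *get_page(void) {
        /* Windows hook: ask the NT kernel via VirtualAlloc */
        return VirtualAlloc(NULL, 4096, MEM_COMMIT | MEM_RESERVE,
                            PAGE_READWRITE);
    }
    #else
    #include <sys/mman.h>
    void *get_page(void) {
        /* Linux/Unix hook: ask the kernel via mmap */
        void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        return p == MAP_FAILED ? NULL : p;
    }
    #endif

The compiled machine code of get_page is plain amd64 either way; what differs is which kernel it knows how to ask.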
I think you are confusing things. A compiled program is in machine-readable format. When you run the program, the kernel will allocate memory, CPU time, etc., and ensure that the program does not interfere with other programs. If the program requires access to hardware resources or the disk, the kernel will handle it, so the kernel will always sit between the hardware and any software you run in user space.
If the program is interpreted, then a relevant interpreter for that language will convert the code to machine-readable form on the fly, and the kernel will still provide the same functionality, such as access to hardware and making sure programs aren't doing anything funny like trying to access another program's memory.
The only thing that runs on the "bare metal" is machine code, which is abstracted from the programmer by many layers in the OS and compiler. Generally speaking, applications are compiled for an OS and CPU architecture. They will not run on other OSes, at least not without a compatible framework in place (e.g., Mono on Linux).
Back in the day a lot of code used to be written on bare metal using macro assemblers, but that's pretty much unheard of on PCs today. (And there was even a time before macro assemblers.)

Does it help to have a Linux frame of mind for being a better embedded programmer?

I was wondering whether knowing the Linux way of life or the Linux architecture would give a better frame of mind for programming embedded devices, especially when they have some kind of OS in them.
Just want to be sure that I did not miss a major thing :)
Note:
I come from a Windows background and can program in C and C++.
I'm passionate about this and finally want to get started in embedded programming. I would like to start by doing typical hobbyist projects at home.
It would be nice if anyone would also comment on BeagleBoard as a starting point for me.
"Embedded" is a fuzzy word. There are two categories:
There are real-time embedded systems: microcontroller/microprocessor applications that communicate intimately and directly with the hardware at a low abstraction level. Typical applications are control systems/automation, industrial, automotive, medtech, household electronics, data/telecom communications, etc.
And then there are fluffy embedded systems: various laptop-ish computers, embedded Linux, embedded Windows, phones and phone-y operating systems, anything involving the internet, human-machine interfaces, etc.
People working in both categories will firmly state that they are working with embedded systems, while the latter kind are often just doing another flavour of desktop applications. Depending on which category you are aiming for, Linux may or may not be a merit. The telecom branch, for example, overlaps both of these categories and often uses embedded Linux even for non-fluffy applications.
In either case, *nix may be used as the development platform, so knowing it won't hurt.
Yes and no. Mostly yes.
Lundin correctly described the "two worlds of embedded" (although the border between them is very fuzzy).
If you're writing for the "higher embedded", like Android or other devices that run Linux, then expert knowledge of Linux will definitely be of much help. You still need to know some bare-bones details and not get scared when you see the likes of the &= ~ operator in C (see the sketch below), but what helps is knowing Linux - the Linux of old, where you configured stuff by editing files in /etc, where you compiled your own kernels for everyday use, where you would build software from tarballs. Knowing modern Linux - GNOME, gconf-editor, Synaptic and the like - will not be of much help.
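For reference, the &= ~ idiom mentioned above just clears bits in a hardware register. The register name, address, and bit position below are hypothetical, purely for illustration; a real part's datasheet supplies them:

    #include <stdint.h>

    #define UART_CTRL   (*(volatile uint32_t *)0x40001000u) /* hypothetical address */
    #define UART_ENABLE (1u << 3)                           /* hypothetical bit */

    void uart_disable(void) {
        UART_CTRL &= ~UART_ENABLE;  /* clear the enable bit, leave the rest alone */
    }

    void uart_enable(void) {
        UART_CTRL |= UART_ENABLE;   /* set the enable bit */
    }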
Next, if you're programming devices without an OS, in the middle area - fast and strong enough to run C programs, but without an OS - you still need Linux. Because cross-compiling. You don't need actual Linux; Cygwin is okay for that, and MinGW may suffice. Still, you will probably need to be able to build your own cross-compiler (based on GCC), linker, debugger, make tools, and the rest of the "backbone" of the IDE. Unless your chip supplier is awesome and provides a complete toolchain with an IDE.
Only when you're into tiny processors do you not need Linux. Stuff like a car alarm remote, a Christmas-lights blinker, a car tire-pressure sensor, a battery level monitor - devices that may have 16 bytes of RAM, 1 KB of EEPROM, and a CPU to match - there you will need to use an IDE that works with that CPU: no OS, no C compiler, nothing remotely close to Linux. The IDE will most likely be Windows-based.
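On parts that small the firmware is often written in assembly, but where a C compiler does exist, a blinker of the kind described above boils down to a loop poking a memory-mapped register. The port address and pin below are hypothetical:

    #include <stdint.h>

    #define GPIO_OUT (*(volatile uint8_t *)0x32u)   /* hypothetical output port */
    #define LED_PIN  (1u << 0)                      /* hypothetical pin */

    int main(void) {
        for (;;) {
            GPIO_OUT ^= LED_PIN;                    /* toggle the LED */
            for (volatile uint32_t i = 0; i < 50000; i++)
                ;                                   /* crude busy-wait delay */
        }
    }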
I'd say you really do not need to know Linux for embedded programming. Many companies developing embedded software do it on Windows and have no contact with other OSes.
But sure, knowing more makes you more versatile, and general knowledge makes you a better engineer. This includes different OSes, as well as many other things.
When it comes to BeagleBoard, it depends on the kind of application you are interested in.
If you want to understand the low-level, I would start on a simpler processor and learn how to use peripherals, hardware interrupts, debouncing signals... There is an educational point in doing this yourself some time.
I suppose you can also skip that and start with an ARM-A8 and possibly an embedded OS, it's just not the path I followed.
What I am about to say may cause a flame war, but...
I have found that Linux is a much more productive development environment than Windows. At my previous job, we were developing firmware for managed switches and industrial automation equipment, which ran an embedded Linux operating system. All the developers had both Windows and Linux boxes, as the user interface software only ran on Windows. We all used Linux for developing, though, as it was simply easier.
At my current job, the only choice is to run Windows, but to make it more productive we are running Cygwin, which provides a Linux-like environment. It is very difficult to develop software on Windows that is not specifically for Windows.
As for developing for an embedded system without an OS... I have an Arduino that I play with occasionally. I have programmed it both from Windows and Linux, and have found the experiences fairly similar. Using Arduino's own tools, Windows seems to run a bit more smoothly, but if you want to hack on it and make something interesting, you're better off using Linux.
Personally (and this will likely provoke some nasty comments), I feel that Linux is best for doing productive work, and Windows is best for playing games.
So basically, this all boils down to this: Try using Linux for developing your project. You will probably find it to be a much smoother, more productive experience. If you don't like it, you don't have to keep using it. But the experience will probably be worth it.
Edit (due to question rewording): Knowing the "Linux way of life" is unlikely to help much when coding for an embedded project that is not running Linux itself. As I understand it, the Unix philosophy is about two main issues:
Each tool should do one thing and do it well (don't make something that tries to be everything).
Whenever possible, data should be plain text (allows for simple piping through processes and searching for content).
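A minimal sketch of both points at once: a tool that does exactly one thing (uppercase text) and speaks plain text on stdin/stdout, so it composes with pipes, e.g. cat notes.txt | ./upper | sort:

    #include <ctype.h>
    #include <stdio.h>

    /* upper.c: read text on stdin, write it uppercased on stdout */
    int main(void) {
        int c;
        while ((c = getchar()) != EOF)
            putchar(toupper(c));
        return 0;
    }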
If you are working on a system without an operating system, you are writing code for a compiler and not likely working with a full shell at any point. You are also unlikely to have any sort of file system. So both of these points are moot; you are not likely to gain anything concretely related to embedded programming by studying Linux, although it certainly couldn't hurt :-)
I really think that if you want to learn a little about the embedded sphere you should not start by using an OS directly. Get hands-on with a small low-level project first, then add an OS if it's really needed for your final application.
I don't think porting an OS to an embedded device will be easier than starting from scratch. It will bring you some functionality (which I am not sure you really need in order to learn embedded), but it will also bring you a lot of hard debugging time if there are problems in the OS port.
I have been doing embedded programming for 10 years, currently for networking equipment and before that Apache helicopters. Both companies had POSIX-like operating systems on the target, but not embedded Linux directly. My current company uses mostly Windows for individual developer environments. However, we do have a few Linux boxes hanging around for special purposes. My previous company used a mix of Windows and Sun Solaris Unix. So wherever you go, you may not use Unix or Linux on your day to day computer, but you are likely to come across it at least occasionally.
On the other hand, I've known developers who have programmed on Linux for embedded Linux targets their entire careers. It really depends on the company, as smaller or newer companies have a tendency to use Linux more than corporations. However, using embedded forms of Windows on targets is very rare in my experience. I know devices are out there, but I've never personally met a developer who worked on one.
Anyway, Linux is free to use and has other benefits besides being good for a job. There's really no downside to giving it a try for a couple of months, other than giving up some of your time.
Linux is growing in embedded... see latest research:
Top 10 trends for the embedded software and tools market in 2011 - VDC research
Android Becomes Number One in U.S. Smartphone Market Share
Knowing the Linux way of life will definitely be a plus in the embedded domain, provided the kinds of apps you are interested in are covered by the links above.
Understanding the Linux architecture in depth would be overkill (although a basic overview is good) when just starting out in the embedded field.
For example, to cut down a tree you don't have to invent an axe - just start using one, then gradually learn to sharpen it.
It's better to start small: get hands-on, and focus on specific areas as the need arises. Grow with your work, and work with your goals in mind.
You will surely progress much faster and not get stuck in a self-loop - doing R&D in order to do R&D ;)
Only if you want to embed Linux! And as an embedded systems developer of some 22 years, I would suggest that Linux is unsuitable and unnecessary for a very large proportion of embedded systems projects.
Understanding the workings of an RTOS, and real-time priority based pre-emptive scheduling and IPC mechanisms would stand you in better stead. Take a look at this for example.

Shifting from Windows to a *nix programming platform

How does one migrate to a *nix platform after spending more than 10 years on Windows? Which flavor would be easiest to handle and make me more comfortable, so that maybe later I can switch over to more standard *nix flavors?
I have been postponing for a while now. Help me with the extra push.
Linux is the most accessible and has the most mature desktop functionality. BSD (in its various flavours) has less userspace baggage and would be easier to understand at a fundamental level. In this regard it is more like a traditional Unix than a modern Linux distribution. Some might view this as a good thing (and from certain perspectives it is) but will be more alien to someone familiar with Windows.
The main desktop distributions are Ubuntu and Fedora. These are both capable systems but differ somewhat in their userspace architecture. The tooling for the desktop environment and the default configuration for system security work a bit differently on Ubuntu than on most other Linux or Unix flavours, but this is of little relevance to development. From a user perspective, either of these would be a good start.
From the perspective of a developer, all modern flavours of Unix and Linux are very similar and share essentially the same developer toolchain. If you want to learn about the system from a programmer's perspective, there is relatively little to choose between them.
Most unix programming can be accomplished quite effectively with a programmer's editor such as vim or emacs, both of which come in text-mode and windowing flavours. These editors are very powerful and have rather quirky user interfaces - the interfaces are unusual but contribute significantly to the power of the tools. If you are not comfortable with these tools, this posting discusses several other editors that offer a user experience closer to common Windows tooling.
There are several IDEs such as Eclipse that might be of more interest to someone coming off Windows/Visual Studio.
Some postings on Stackoverflow that discuss linux/unix resources are:
What are good linux-unix books for an advancing user
What are some good resources for learning C beyond K&R
Resources for learning C program design
If you have the time and want to do a real tour of the nuts and bolts Linux From Scratch is a tutorial that goes through building a linux installation by hand. This is quite a good way to learn in depth.
For programming, get a feel for C/unix from K&R and some of the resources mentioned in the questions linked above. The equivalent of Petzold, Prosise and Richter in the Unix world are W Richard Stevens' Advanced Programming in the Unix Environment and Unix Network Programming vol. 1 and 2.
Learning one of the dynamic languages such as Perl or Python if you are not already familiar with these is also a useful thing to do. As a bonus you can get good Windows ports of both the above from Activestate which means that these skills are useful on both platforms.
If you're into C++, take a look at Qt. This is arguably the best cross-platform GUI toolkit on the market and (again) has the benefit of a skill set and toolchain that are transferable back to Windows. There are also several good books on the subject and (as a bonus) it also works well with Python.
Finally, Cygwin is a unix emulation layer that runs on Windows and gives a substantially unix-like environment. Architecturally, Cygwin provides a POSIX C library (the counterpart of glibc and the CRT) as an adaptor on top of Win32. This emulation layer makes it easy to port unix/linux apps onto Cygwin. The platform comes with a pretty complete set of software - essentially a full linux distribution hosted on a Windows kernel. It allows you to work in a unix-like way on Windows without having to maintain a separate operating system installation. If you don't want to run VMs, multiple boots or multiple PCs, it may be a way of easing into unix.
Ubuntu is nicely balanced, with a user friendly desktop but the potential to set up a fully functional programming environment.
I would advise experimenting with virtual machines - there is no reason to ditch your current setup until you've tried a few of the major distributions. VMware and others have a wide variety of server and desktop builds available.
I guess it also depends on what programming languages you are comfortable with.
If you have worked with C# in the past, then you could apply that knowledge by running Mono, or maybe look at using Java (which is syntactically very similar). Either way, Linux would be good.
I personally would recommend you look at Mac OS X. It's a BSD-based Unix OS, but with a really slick user interface on top. To me it feels like the best of both the Windows and Unix worlds.
I do all my unix development on it, deploying onto Ubuntu servers. If you do look at a Mac, definitely take a look at the MacPorts project, which packages up a large amount of open-source unix/linux software, making installation of programming tools incredibly easy.
Ubuntu seems to be very user-friendly, and has a lot of specific information for it in forums etc. So support-wise you'll be covered.
I experienced the shift from Windows to Ubuntu as very much doable; things you can do graphically in Windows can be done in much the same way in Ubuntu (with maybe some exceptions), and a bit more. A computer-savvy individual should not have any problems.
However, it helps greatly if you are familiar with the basic shell commands (you'll need them as a programmer!). Some are the same as on Windows, but especially ls (dir) sometimes has me racking my brain for "what was that command again", and vice versa when I'm back on Windows.
Take some time to try them out (for example: pwd, ls, mv, rm, ps, kill).
Finally, when installing programs, often a simple "sudo apt-get install X" does all the work for you - even more user-friendly than Windows installer executables, I find.
Edit: You might want to try VMware Player and play around with a few Linux distributions before you install the dual boot.
Get a MacBook Pro. OS X is the smoothest flavour of unix, and the laptop should give you the push you need.
Then when you're feeling more confident, you can decide whether or not you want to spend most of your time configuring your soundcard, running ./autoconfigure && make, and debugging package manager screwups.
Any modern version of Unix (or Linux) you can get running on your machine will be fine.
Here are the ones that I would consider:
Ubuntu. As others have noted, this is often considered to be the easiest to use. However, some parts are not "standard" Unix. For example, the startup scripts do not use init. This is mostly a good thing, but if you're trying to learn Unix it may not be what you need.
Fedora. Bleeding edge but with rough edges.
Slackware. Possibly the most Unix-like Linux distribution (some would say dated!).
One of the *BSDs: FreeBSD, OpenBSD, NetBSD. Different approach to some things than Linux.
Solaris. This is "proper" Unix. Seems bare-bones compared with Linux but worth playing with to see what's "standard."
In fact, I would consider running at least a couple of them, most run fine as a VM. One of the good and bad things about Unix is that what's standardised is more the philosophy than many of the details. There's no Visual Studio, there's no C# (by that I mean no canonical high level language; I know about Mono).
Excellent answers. A few comments:
Almost all distros provide LiveCDs, to let you try before installing. Folks mentioned VMware and VirtualBox; also note that Ubuntu's WUBI installer lets you install Linux under Windows without repartitioning - very nice; I used it when I first switched to my 64-bit system, since I wasn't sure how good the driver support was. Ubuntu 9.04 works great in 64-bit, though. Also, since Ubuntu is so popular, there are many variants: Kubuntu uses KDE instead of GNOME, and Mint and Xubuntu are both lighter weight.
Expect to run side-by-side for a while when transitioning from Windows. Cygwin has some nice downloadable manuals for people getting used to bash, and basic information about how *nix works underneath, targeted at Windows users. There are tons of useful sites; the Ubuntu community forums have a tremendous amount of information, for both beginners and advanced.
For getting used to developing under Linux, check the Linux documentation project. In addition to KDevelop, there's Anjuta, Eclipse, and many more. Some are light, some are heavyweight.
One thing that can ease the transition is to use software that runs in both operating systems. Firefox, Thunderbird, OpenOffice, Subversion, and hundreds if not thousands of others run fine in both Linux and Windows. And with very little effort, you can use the same folders for application settings and data for many of these. Firefox and Thunderbird can easily use the same folders/files on an NTFS partition. Makes dual booting much easier. Instructions are on the Ubuntu community site and other locations.
Note that some Linux software isn't NTFS friendly; in Linux keep your Subversion working folders on a native partition.
One caveat for sharing application settings; some applications store absolute paths; as a workaround, you can create symlinks that look like Windows drive letters.
After you get comfortable with Linux, branch out and try non-Windowsy applications and tools. Sometimes different is better. Lots of people use Emacs and Vim for good reasons.
Try Kubuntu as a distro and Kdevelop and Qt to start programming with, it's all very civilised.
Kate's an ok notepad-esque text editor if you want to go that way but I don't see why you'd want to get in to Vi or Emacs apart from the geeky appeal of using something really arcane.

In which situations is it advisable to opt for BSD systems instead of Linux?

For an everyday-user with new hardware Linux seems for me the natural choice if somebody is looking for an alternative to Windows. But when does it make sense to give the BSD variants a try?
I've always found the BSDs to be more intuitive. There are some different philosophies in BSD than in Linux. For example, Linux prefers GNU commands, while BSD opts for either classic BSD commands (which are similar, but often have different options) or newly written ones, falling back to GNU when nothing else is available. Also, I find the BSD man pages to be more comprehensive and to contain more examples than GNU man pages, since GNU tends to prefer info pages (which I despise) for examples.
Many ISP sysadmins swear by BSD. They claim it holds up better under load, hasn't made as many compromises for the desktop, and that its networking stack is more efficient and less buggy. I don't know whether those claims are, or still are, true, but this is what I've been told.
Also, OpenBSD has a reputation of focusing heavily on security, and they have historically had a very good track record when it comes to security. They take proactive measures (developing new C Runtime library routines, for instance) to prevent security flaws before they can be written.
NetBSD has a reputation of running on just about anything. They have a long list of platforms they actively support. Linux, to some extent, tries to do this as well, but typically only a small subset of these platforms is supported in the mainline kernel.
Finally, it often just comes down to personal preference. Do the guys you have or are going to hire know BSD? Do you personally like it?
There are also some reasons NOT to run BSD. If you're primarily a desktop user, BSD may not be the best choice. Sure, you can install most of the same stuff on BSD as Linux, but you won't find a "distro" similar to, say, Ubuntu which focuses strictly on the desktop. Also, some device drivers aren't available on BSD because they were written with GPL only licenses.
I'm told that the BSDs are more... coherent than the Linuxes. I've had long conversations with my sysadmin friend on why/why not BSD/Linux. Here's a link:
http://www.over-yonder.net/~fullermd/rants/bsd4linux/bsd4linux1.php?dupe=with_honor
Having said that, I started using Debian in 2007, and I've never looked back! :)
One of the big areas that BSD has over Linux is licensing. Linux's GPL can make it difficult to use some differently-licensed features of other Operating Systems. The first one that springs to mind is ZFS.
Plus, BSD is a bit more mature as an operating system (being directly descended from the original AT&T UNIX).
The commonly cited wisdom is that BSD is more useful for a server OS and Linux is more useful for a desktop OS. But don't take that as the gospel truth as lots of people have successfully used Linux as a server OS and lots of people have used BSD as a desktop OS.

Windows CE vs Embedded Linux [closed]

Now I'm sure we're all well aware of the relative merits of Linux vs. Windows on the desktop. However, I've heard much less about the world of embedded development. I'm mainly interested in solutions for industry, and am therefore uninterested in the iPhone or Android and more interested in these two OSes.
What are the relative trade-offs between the two platforms in the embedded world? If you were considering building a box for a specific project with custom hardware, a partially customised OS and a custom app then which would you choose and why?
I would assume that Windows CE wins on tools and Linux wins on both cost and possibly performance. However this is just utter speculation. Does anyone have any facts or experience of the two?
I worked for several years at a company that provided both CE and Linux for all of their hardware, so I'm fairly familiar with both sides of this equation.
Tools: Windows CE tools are certainly better than those provided by Linux, though the Linux tools are getting better.
Performance: Windows CE is real-time. Linux is not. The Linux kernel is not designed for determinism at all. There are extensions you can add to get sort-of real-time behaviour, but CE beats it.
Cost: This is an area of great misunderstanding. My general experience is that CE is lower cost out of the box ($1k for Platform Builder and as low as $3 per device for a shipping runtime). "What?" you ask? "Linux is free." Well, not really so much, especially in the embedded arena. Yes, there are free distributions like Debian. But there are plenty of pieces that you might need that aren't in that free category: UI frameworks like Qt, Java runtimes and media codecs, just as a start. Also, most Linux distributions with a commercially-backed support system (e.g., MontaVista) are far from free.
Source Availability: Linux proponents may like to say that CE is a bad choice due to lack of source code. All I can say is that in over a decade of working with CE, half of which I spent doing custom kernel and driver work for custom boards, I've only ever had need for source that didn't ship with CE (they ship the vast majority of it) once. I like having source too, but Microsoft provides support, so in the rare case you might think you need that source, you can get them to fix the problem (the one time we needed source, Microsoft provided a fix, and for free - which is their model under CE).
Does this mean that CE wins every time? No. I wouldn't suggest that at all. If you are a Linux shop and you have lots of Linux experience and code assets, you'd be foolish to run out and go CE. However, if you're coming into it from scratch, CE usually has a lower TCO. Developers with Win32/C# experience are more prevalent and consequently less expensive. You also get a lot more "in the box" with CE than with most other distributions, meaning faster time to market if you don't already have these things done in-house.
I'll speak for the Linux side, at least for the category of software I'm familiar with (which is RF data collection equipment). Or industrial apps vs. consumer apps.
Windows CE (and its associated tools) is, in my humble (and fairly recent) experience, strongly biased toward creating a "Windows experience" on a small screen. The user input mode emphasizes mouse-like actions. Logons, application selection, etc. all try to be as similar to standard Windows as possible.
If a user is driving a lift truck, or filling a picking cart, or moving material from one place to another, there's a problem.
And it's a moving target - particularly on the .NET side. The Compact .NET runtime is seriously handicapped, important libraries (like networking, data handling, and UI) are incomplete, and new versions too often deprecate the previous version. CE seems to be the stepchild in the Windows family (possibly because there's not a lot of active competition selling to the hardware integrators).
A nice stable rows-and-columns Linux console is a pretty handy context for many (in my experience most) high-use apps on a dinky screen.
Not much good for games on your cell-phone or Zune, though.
NOTE:
I think ctacke probably speaks accurately for the hardware integrator's side. I'm more aligned with the players further down the pipe - software integrators and users.
Choice is often made largely on perception and culture, rather than concrete data. And, making a choice based on concrete data is difficult when you consider the complexity of a modern OS, all the issues associated with porting it to custom hardware, and unknown future requirements. Even from an application perspective, things change over the life of a project. Requirements come and go. You find yourself doing things you never thought you would, especially if they are possible. The ubiquitous USB and network ports open a lot of possibilities -- for example adding Cell modem support or printer support. Flash based storage makes in-field software updates the standard mode of operation. And in the end, each solution has its strengths and weaknesses -- there is no magic bullet that is the best in all cases.
When considering Embedded Linux development, I often use the iceberg analogy; what you see going into a project is the part above the water. These are the pieces your application interacts with, drivers you need to customize, the part you understand. The other 90% is under water, and herein lies a great deal of variability. Quality issues with drivers or not being able to find a driver for something you may want to support in the future can easily swamp known parts of the project. There are very few people who have a lot of experience with both WinCE and Linux solutions, hence the tendency to go with what is comfortable (or what managers are comfortable with), or what we have experience with. Below are thoughts on a number of aspects to consider:
SYSTEM SOFTWARE DEVELOPMENT
Questions in this realm include CPU support, driver quality, in-field software updates, filesystem support, driver availability, etc. One of the changes that has happened in the past two years is that CPU vendors are now porting Linux to their new chips as the first OS. Before, the OS porting was typically done by Linux software companies such as MontaVista, or by community efforts. As a result, the Linux kernel now supports most mainstream embedded CPUs with few additional patches. This is radically different from the situation 5 years ago. Because many people are using the same source code, issues get fixed, and the fixes are often contributed back to the mainstream source. With WinCE, the BSP/driver support tends to be more of a reference implementation, and then OEMs/users take it, fix any issues, and that is where the fixes tend to stay.
From a system perspective, it is very important to consider flexibility for future needs. Just because it is not a requirement now does not mean it will not be a requirement in the future. Obtaining driver support for a peripheral may be nearly impossible, or be too large an effort to make it practical.
Most people give very little thought to the build system, or never look much beyond the thought that "if there is a nice GUI wrapped around the tool, it must be easy". OpenEmbedded is a very popular way to build embedded Linux products; it has recently been endorsed as the technology base of MontaVista's Linux 6 product, and is generally considered "hard to use" by new users. While WinCE build tools look simpler on the surface (the 10% above water), you still have the problem of what happens when you need to customize something, implement complex features such as software updates, etc. To build a production system with production-grade features, you still need someone on your team who understands the OS and can work at the detail level of both the operating system and the build system. With either WinCE or Embedded Linux, this generally means companies either need to have experienced developers in-house, or hire experts to do portions of the system software development. System software development is not the same as application development, and is generally not something you want to take on with no experience unless you have a lot of time. It is quite common for companies to hire expert help for the first couple of projects, and then do follow-on projects in-house. Another feature to consider is parallel build support. With quad-core workstations becoming the standard, is it a big deal that a full build can be done in 1.2 hours versus 8? How flexible is the build system at pulling and building source code from various sources such as diverse revision control systems, etc.?
Embedded processors are becoming increasingly complex. It is no longer good enough just to have the CPU running. If you consider the OMAP3 CPU family from TI, then you have to ask the following questions: are there libraries available for the 3D acceleration engine, and can I even get them without committing to millions of units per year? Is there support for the DSP bridge? What is the cost of all this? On a recent project I was involved in, a basic WinCE BSP for the Atmel AT91SAM9260 cost $7000. In terms of developer time, this is not much, but you also have to consider the ongoing costs of maintenance, upgrading to new versions of the operating system, etc.
APPLICATION DEVELOPMENT
Both Embedded Linux and WinCE support a range of application libraries and programming languages. C and C++ are well supported. Most business-type applications are moving to C# in the WinCE world. Linux has Mono, which provides extensive support for .NET technologies and runs very well in embedded Linux systems. There are numerous Java development environments available for Embedded Linux. One area where you do run into differences is graphics libraries. Generally the Microsoft graphical APIs are not well supported on Linux, so if you have a large application team of die-hard Windows GUI programmers, then perhaps WinCE makes sense. However, there are many options for GUI toolkits that run on both Windows PCs and Embedded Linux devices. Some examples include GTK+, Qt, wxWidgets, etc. The Gimp is an example of a GTK+ application that runs on Windows, and there are many others. There are C# bindings to GTK+ and Qt. Another feature that seems to be coming on strong in the WinCE space is the Windows Communication Foundation (WCF). But again, there are projects to bring WCF to Mono, depending on what portions you need. Embedded Linux support for scripting languages like Python is very good, and Python runs very well on 200 MHz ARM processors.
There is often the perception that WinCE is real-time and Linux is not. Linux real-time support is decent in the stock kernels with the PREEMPT option, and real-time support is excellent with the addition of a relatively small real-time patch. You can easily attain sub-millisecond timing with Linux. This is something that has changed in the past couple of years with the merging of real-time functionality into the stock kernel.
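As a hedged sketch of what "sub-millisecond timing" looks like in practice on a PREEMPT/PREEMPT_RT kernel: a 1 ms periodic loop under SCHED_FIFO with absolute deadlines (requires root or CAP_SYS_NICE; link with -lrt on older glibc; the 100 us reporting threshold is arbitrary):

    #include <sched.h>
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        struct sched_param sp = { .sched_priority = 80 };
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
            perror("sched_setscheduler");   /* falls back to normal scheduling */

        struct timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);
        for (int i = 0; i < 1000; i++) {
            next.tv_nsec += 1000000;        /* 1 ms period */
            if (next.tv_nsec >= 1000000000L) {
                next.tv_nsec -= 1000000000L;
                next.tv_sec++;
            }
            /* sleep until the absolute deadline, then measure lateness */
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);

            struct timespec now;
            clock_gettime(CLOCK_MONOTONIC, &now);
            long jitter_ns = (now.tv_sec - next.tv_sec) * 1000000000L
                           + (now.tv_nsec - next.tv_nsec);
            if (jitter_ns > 100000)         /* report wakeups >100 us late */
                printf("iteration %d late by %ld ns\n", i, jitter_ns);
        }
        return 0;
    }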
DEVELOPMENT FLOW
In a productive environment, most advanced embedded applications are developed and debugged on a PC, not the target hardware. Even in setups where remote debugging on a target system works well, debugging an application on a workstation works better. So the fact that one solution has nice on-target debugging while the other does not is not really relevant. For data-centric systems, it is common to have simulation modes where the application can be tested without a connection to real I/O. With both Linux and WinCE applications, application programming for an embedded device is similar to programming for a PC. Embedded Linux takes this a step further. Because embedded Linux technology is the same as desktop and server Linux technology, almost everything developed for desktop/server (including system software) is available for embedded for free. This means very complete driver support (see the USB cell modem and printer examples above), robust file system support, memory management, etc. The breadth of options for Linux is astounding, but some may consider this a negative point, and would prefer a more integrated solution like Windows CE where everything comes from one place. There is a loss of flexibility, but in some cases the tradeoff might be worth it. For an example of the number of packages that can be built for Embedded Linux systems using OpenEmbedded, see.
GUI TRENDS
It is important to consider trends for embedded devices with small displays being driven by Cell Phones (iPhone, Palm Pre, etc). Standard GUI widgets that are common in desktop systems (dialog boxes, check boxes, pull down lists, etc) do not cut it for modern embedded systems. So, it will be important to consider support for 3D effects, and widget libraries designed to be used by touch screen devices. The Clutter library is an example of this type of support.
REMOTE SUPPORT
Going back to the issue of debugging tools, most people stop at the scenario where the device is sitting next to a workstation in the lab. But what about when you need to troubleshoot a device that is being beta-tested halfway around the world? That is where a command-line debugger like gdb is an advantage, not a disadvantage. And how do you connect to the device if you don't have support for cell modems in New Zealand, or an efficient connection mechanism like ssh for shell access and transferring files?
SUMMARY
Selecting any advanced technology is not a simple task, and is fairly difficult to do even with experience. So it is important to be asking the right questions, and looking at the decision from many angles. Hopefully this article can help in that.
I have worked on projects that involved customizing the software of an OEM board, and I wouldn't say that Linux is cheaper. When buying a board you also need to buy the SDK, and you still need to pay even for the Linux version. Some manufacturers offer both Windows CE and Linux solutions for their boards, and there isn't a price difference. For Windows CE you also need Platform Builder and have to pay for the licenses, but it is easier to go without support.
Another important issue is whether you are building a user interface or a headless device. For devices that require an LCD screen and human interaction, it is much easier to go with Windows CE. If, on the other hand, you are building a headless device, Linux may be a sounder option - especially if network protocols are involved. I believe that Linux implementations are more reliable and easier to tweak.
With Linux you are never on your own, and you are never dependent on a single entity to grant permissions. There are many support options, and you have the freedom to choose your support options for any part of the system from many competing sources.
With Windows CE you must adhere to the license and restrictions as set forth in the complex license agreements that must be agreed to. Get a lawyer. With Windows CE you have only one proprietary source for OS support, and you will proceed only as they see fit to support and provide what you need. You may not agree with their position, but you will have no recourse but to bend to what they prescribe. The costs of incremental components, modules, development kits, licensing, and support tend to pile up with proprietary platforms. In the longer term, what happens when the vendor no longer desires to support the platform and you do not have the rights to support and distribute it yourself? What happens when the vendor moves to newer technology and wants you to move along with them even though you may not be ready to make the move? $$$
Our experience with Windows solutions in general is that they tend to become more expensive over time. What was originally considered lowest-TCO quickly gravitates towards a solution that is encumbered and costly to maintain and support. Licenses have to be renegotiated over time, and new technologies, often unneeded, are forced into the picture at the whim of the provider for the sake of THEIR business needs. On top of that, the license agreements are CONTINUALLY changing - get a lawyer.
With Linux you have the freedom to provide in-house support and expertise without being encumbered against distributing the solution as you need. You also have the freedom to continue to use and support technology that original providers no longer want to support. Having the source code and the RIGHTs to do with it what you want (GPL, LGPL) is a powerful attractor when it comes to business continuity and containing costs while providing access to the very latest technologies or technologies that fit your needs.
I have developed network drivers that work both on RT Linux (to be more specific, the Linux preemptive kernel with the RT patch) and on Windows CE. My experience was that Windows CE was more stable in terms of real-time response. Frame timings also showed that Windows CE had less jitter.
On RT Linux, we had all sorts of problems. For example, when the user moved the mouse, our frames were delayed. Guess what: certain variants of X windows disable interrupts. You may also feel that you are safer on a console-only screen. If you have VGA frame buffers enabled, you are doomed again. We had only one jitter problem with Windows CE: it happened when the USB controller was set to an incorrect mode in the BIOS and Windows CE was spending a lot of time polling.
To be honest, Windows CE had more support. On Linux, you are on your own. You have to read every possible mailing list to understand what problems you may have.
a partially customised OS
Is much easier to achieve if the OS is open source (and you have the expertise).
Android is a good option for some embedded systems (it's Linux-based).
You have many experts who are able to develop on this system.
You have access to many libraries in Java or C.
But it uses a lot of memory and energy.
What we often forget with paid/licensed software is that you have to deal with licenses. That takes time and energy! Then you have to track whether you are paying correctly. It involves many different people with different skills, and it adds cost to every decision.
This cost is often not included in the studies that claim open-source/free software is more expensive than paid software.
With "free software" it's much easier to deal with licenses, and you spend less time on these issues. Personally, I prefer to avoid unnecessary communication with the legal/finance team every time some piece of the software changes.
