Reprogram a device [closed]

Is it possible to take a device, say, a PDA, wipe the software off of it, and install your own?
For example, could I take a Mac terminal program and install it onto a PDA (with WiFi) and use it for SSH and such?
And what language would/could it be in?

The language this could be in is not really the issue; it is mostly a matter of system compatibility.
Software applications do not run in a vacuum: they rely on the underlying operating system, or at the very least on some form of virtual environment or runtime such as Java, Silverlight etc.
Before one can re-purpose a PDA or other similar device, one needs to install some system / host software of sorts on it, and doing so can be rather complicated because of the proprietary and dedicated nature of many of the hardware subsystems therein.
General purpose systems such as Linux or Windows can be installed on various hardware platforms (including appliances) provided that:
- said hardware subsystems (CPU, keyboard/input devices, display device, storage devices...) comply with some specification, and
- the corresponding device drivers are available.
In the case of PDAs, GPS appliances, smartphones and various other hardware platforms (and while many such platforms run on custom versions of Windows, Linux, Android etc.), there are typically enough proprietary differences, custom hardware and other deviations from specifications that installing alternative operating systems or runtimes is a challenge. Lack of documentation can also be a limiting factor.
Many such devices, however, host some form of runtime atop the system (Java in many cases), and rather than installing an alternative operating system anew, it is possible, in some cases, to install and run applications written in these hosted languages.
Even then, uninstalling existing applications (say, to make room) and installing new applications may be a challenge as well. Difficulties arise because of:
- purposeful "locking in" of the appliances (the manufacturers purposely prevent such re-purposing, using various forms of encryption, undocumented features and the like)
- intrinsic limitations of the runtime (whereby only a subset / sandboxed version of the language features is available).
In short, the specific approach for re-purposing appliances depends on:
- the specific appliance/device: make, version etc.
- the intended purpose: which particular uses are desired for the new device
- the technical expertise and patience of the implementers ;-)
In general this is far from trivial: beginners beware! (*)
(*) BTW, the relative lack of sophistication apparent in the question seems to indicate that the OP may not have the necessary skills for this kind of "hacking". It can, however, be a very fun and rewarding learning experience.

No, but you can probably find a PDA terminal application and do SSH with it.
Macs and PDAs have different architectures (their processors speak different languages).


Why do I need to choose my processor architecture when downloading an application for Linux? [closed]

Isn't the operating system an abstraction on top of the hardware, making hardware architectures irrelevant for software run on the same operating system?
If so, why do I need to choose my processor architecture (e.g. ARM or amd64) when downloading NodeJS, for example?
Different platforms abstract away different things:
- Java/WASM abstract away CPU architecture, memory model, device access, terminal output and file access. Any program can run anywhere.
- Linux/Windows abstract away device access, terminal output and file access. Any program built for that CPU and ABI can run.
- DOS abstracts away terminal output and file access. Any program built for that CPU and ABI that includes drivers for devices can run.
- BIOS abstracts away terminal output. Any program built for that CPU and ABI that includes device drivers and file system drivers to load its own data can run.
You need to account for everything that is not abstracted away, and on Linux that includes the CPU architecture.
It's better than DOS where you additionally needed to make sure your program supported your sound card, but not as convenient as Java where a single Android app can run on both x86 and arm64.
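To make that concrete, here is a small C sketch of my own (not from the answers above): the compiler bakes the target architecture into the binary, and predefined macros reveal which target was chosen at build time.

```c
/* Sketch: the same source compiles to different, non-interchangeable
   machine code per target; these predefined macros show which target
   this particular binary was built for. */
#include <stdio.h>

int main(void) {
#if defined(__x86_64__) || defined(_M_X64)
    puts("compiled for x86-64 (amd64)");
#elif defined(__aarch64__) || defined(_M_ARM64)
    puts("compiled for arm64");
#elif defined(__i386__)
    puts("compiled for 32-bit x86");
#else
    puts("compiled for some other architecture");
#endif
    return 0;
}
```

Compile the same file with an x86-64 compiler and with an arm64 cross-compiler and you get two binaries that cannot run on each other's CPU, which is exactly why the NodeJS download page asks you to pick.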
You've probably heard programs can be compiled to "machine code." These are low-level instructions for the hardware, different for every type of machine (and are influenced not only by CPUs but also by peripherals).
The NodeJS interpreter is written in C and C++ and compiled to machine code. This compiled code is only valid on a particular type of machine. So you need to download the correct version of the NodeJS interpreter for your machine.
You can write pure JS code to be run on NodeJS, and then it will usually not depend on the machine type; it will be "universal" to an extent. But as soon as the JS code uses native code (C, C++ and others) for performance reasons (this is usually true of some specific modules and libraries), that code is compiled for a specific machine, and then the JS module also becomes bound to a specific machine.
The operating system has little to no influence in all of this. It basically says how the machine code will be written into a file (e.g. which file format to use), and abstracts access to hardware (such as the disk drives) in a way this code can use.
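That file-format bookkeeping is easy to see for yourself. Here is a rough sketch of my own (the `elfarch` name and usage are made up for illustration; it assumes a glibc system providing `elf.h`) that reads the ELF header of a 64-bit Linux binary and reports which CPU it targets:

```c
/* Sketch: peek at the ELF header of a 64-bit Linux executable and report
   which CPU it was built for. 32-bit ELF files use a different header
   layout, which this sketch deliberately ignores.
   Usage (hypothetical): ./elfarch /usr/bin/node */
#include <elf.h>
#include <stdio.h>

int main(int argc, char **argv) {
    if (argc < 2) { fprintf(stderr, "usage: %s <binary>\n", argv[0]); return 2; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    Elf64_Ehdr hdr;
    if (fread(&hdr, sizeof hdr, 1, f) != 1) { fclose(f); return 1; }
    fclose(f);

    switch (hdr.e_machine) {
    case EM_X86_64:  puts("x86-64 (amd64) binary"); break;
    case EM_AARCH64: puts("arm64 binary");          break;
    default:         printf("e_machine = %u\n", hdr.e_machine);
    }
    return 0;
}
```

Run it on the NodeJS binary you downloaded and it will print the architecture you picked on the download page.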
Historically, there have been attempts to create operating systems which would completely abstract the underlying machine in a way which makes programs completely portable. They usually do it by disallowing user-written machine code (i.e. user-compiled programs) to execute, and only allow interpreted code to run.
The operating system installed must support the processor(s), data buses and memory addressing of the hardware.
At a systems level, in kernel code and device drivers it is impossible to ignore details of the hardware architecture. Applications typically sit a level above all this but are still dependent on the abstraction layers below.
Incidentally, Node.js is written in part in C and C++, which take advantage of the performance improvements offered by 64-bit processing. Wringing out optimised performance has been a key objective of Node.js's design; it has been refactored more than once to that end.

Can one Linux OS based environment application run in another Linux OS environment?

I don't have any knowledge of the Linux/Unix environment, so to gain some understanding I am putting this question to all the developers and Unix/Linux technical people.
By applications I mean the IDEs used by developers, especially:
Visual Studio
IntelliJ Idea Community Version
PyCharm Community Version
Eclipse
And other peripheral apps used by developers, gamers and network engineers.
To some experienced Linux users, my question might seem baseless. But consider me a beginner with Linux. Thank you in advance.
The term "application" is a very vague, fuzzy one these days. It does not describe some artifact with a certain internal structure and way how to invoke it but merely the general fact that it is something that can be "used".
Different types of applications are in wide spread use on today's systems, that is why I asked for a clarification of your usage of the term "application" in the comments. The examples you then gave are diverse though they appear comparable at first sight.
A correct and general answer to your question would be:
One application can be used in different Linux based environments if that environment provides the necessary preconditions to do so.
So the core of your question shifts towards whether different flavors of Linux based systems offer similar execution environments. Actually it makes sense to extend that question to operating systems in general; the differences between today's alternatives are relatively small from an application's point of view.
A more detailed answer has to distinguish between the different types of applications, or better, between their different preconditions. Those can be derived from the architectural platform the application is built on. The following is a bit simplified, but should express what the situation actually is:
Take for example IntelliJ IDEA and the Eclipse IDE. Both are Java based IDEs. Java can be seen as a kind of abstraction layer that offers a very similar execution environment on different systems. Therefore both IDEs can typically be used on all systems offering such a "Java runtime environment", though differences in behavior will exist where necessary. Those differences are either programmed into the IDEs or originate from the fact that certain components (for example file selection dialogs) are not actually part of the application, but of the chosen platform. Naturally they may look and behave differently on different platforms.
There is however another aspect that is important here, especially when regarding Linux based environments: the diversity of what is today referred to as "Linux". Unlike pure operating systems like MS-Windows or Apple's MacOSX, which both follow a centralized and restrictively controlled approach, we find differences between various Linux flavors that extend far beyond things like component versions and their availability. Freedom of choice allows for flexibility, but also results in a slightly more complex reality. Here that means different Linux flavors do indeed offer different environments:
- different hardware architectures: unlike MS-Windows and MacOSX, the system can not only be used on Intel x86 based hardware, but on a variety of maybe 120 completely different hardware architectures.
- the graphical user interface (GUI or desktop environment, so windows, panels, buttons, ...) is not an integral part of the operating system in the Linux (Unix) world, but a separate add-on. That means you can choose.
- the amount of base components available in installations of different Linux flavors differs vastly. For example there are "full fledged, fat desktop flavors" like openSUSE, RedHat or Ubuntu, but there are also minimalistic variants like Raspbian, Damn Small Linux, Puppy, Scientific Linux, distributions specialized in certain tasks like firewalling, or even variants tailored for embedded devices like washing machines or moon rockets. Obviously they offer different environments for applications. They only share the same operating system core, the "kernel", which is all that the name "Linux" actually refers to.
...
However, given all that diversity with its positive and negative aspects, the Linux community has always been extremely clever and active and has crafted solutions to handle these specific situations. That is why all modern desktop-targeting distributions come with a mighty software management system these days. That system controls dependencies between software packages and makes sure those dependencies are met or resolved when attempting to install some package, for example an additional IDE as in your example. So the system would take care to install a working Java environment if you attempt to install one of the two Java based IDEs mentioned above.
That mechanism only works, however, if the package to be installed is correctly prepared for the distribution. This is where the usage of Linux based systems differs dramatically from other operating systems: here come repositories, and how to search, select and install available and usable software packages for a system, and so on, all a bit too wide a field to be covered here. Basically: if the producer of a package does his homework (or someone else does it for him) and correctly "packages" the product, then the dependencies are correctly resolved. If however the producer only dumps a raw bunch of files, maybe as a ZIP archive, and insists on a "wild" installation as typically done for example on MS-Windows based systems (writing files into the local file system by handing administrative rights to some bundled "installer" script that can do whatever it wants, including breaking, ruining or corrupting the system it is executed on), then the system's software management is bypassed and the outcome is often "broken".
However, no sane Linux user or administrator would follow such a path and install such software. That would show a complete lack of understanding of how one's own system actually works, and the consequent abandonment of all the advantages and comfort it offers.
To make a complex story simple:
An "application" usually can be used in different Linux based environments if that application is packaged in a suitable way and the requirements like runtime environment posed by the application are offered by that system.
I hope that shed some light on a non trivial situation ;-)
Have fun!

Will using the Linux Kernel support current programs? [closed]

There are many distributions of Linux. All of them have one thing in common, however: the kernel. And Linux programs run across all of them. If I make a minimalistic distribution from the kernel, will current programs made for Linux run on it? What defines the differences between distributions? I am a beginner at this stuff, so don't be harsh if it is a stupid question. Thank you.
Yes, with caveats.
You need to make sure you have full C support, and by that I mean something like glibc installed or installable, or you cannot build programs for your minimal install. If you can install and compile C programs on Linux, then you can in effect build practically everything else from scratch.
If you want to be able to download binaries and run them, that is different: the binaries will likely require shared libraries that they would have had on the systems they were built for. Unless you have those libraries, you cannot run the existing binaries you find online.
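As a small illustration of that last point (my own sketch; "libcrypto.so.3" is just an arbitrary example of a dependency), a program can probe for a shared library at runtime and fail exactly the way a downloaded binary would on a minimal system that lacks it:

```c
/* Sketch: probe for a shared library the way the dynamic loader would.
   Build with: gcc probe.c -o probe -ldl */
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    void *handle = dlopen("libcrypto.so.3", RTLD_NOW);
    if (!handle) {
        /* Essentially the failure a pre-built binary hits on a minimal
           distribution that is missing one of its dependencies. */
        fprintf(stderr, "cannot load: %s\n", dlerror());
        return 1;
    }
    puts("library found; a binary depending on it would run");
    dlclose(handle);
    return 0;
}
```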
What defines the differences of distributions?
There are a lot of defining factors in each distribution. If we disregard things like...
- Licensing, i.e. Red Hat vs Debian
- Stance on things like GPL/BSD/non-free
- Release schedules, i.e. Debian vs Ubuntu
- Target audience, i.e. Ubuntu vs Debian
I think the biggest defining factor is package management (i.e. yum/rpm vs apt/dpkg) and how the base configuration is managed on the machine. This is certainly the thing I seem to use the most and miss the most when I change distributions. The kernel itself is very rarely on my mind, which is a large part of its success.
Most people start with something like ISOLINUX and get a bootable CD, but even then you normally choose a base distribution. If you want to create a base distribution, that's a ton of work. Have a look at this great infographic of the Linux family tree:
https://en.wikipedia.org/wiki/List_of_Linux_distributions#/media/File:Linux_Distribution_Timeline.svg
If you look at Debian/Ubuntu, the amount of infrastructure these distributions have set up is quite staggering. They have millions, perhaps even billions, of lines of code in them, all designed to run on their supported versions. You might be able to take a binary from one of them and run it on Red Hat, but it's likely to fail unless the planets are in alignment. Some people think this is actually a bad thing:
https://en.wikipedia.org/wiki/Ingo_Moln%C3%A1r#Quotes
The basic failure of the free Linux desktop is that it's, perversely, not free
enough...
Desktop Linux distributions are trying to "own" 20 thousand
application packages consisting of over a billion lines of code and have
created parallel, mostly closed ecosystems around them... The Linux package
management method system works reasonably well in the enterprise (which is a
hierarchical, centrally planned organization in most cases), but desktop Linux
on the other hand stopped scaling 10 years ago, at the 1000 packages limit...
If I make a minimalistic distribution from the kernel, will current programs made for Linux run?
Very few programs actually use the kernel directly. They also need a libc, which is responsible for actually implementing most of the C routines used by either the programs themselves or the VMs running their code.
It is possible to statically link the libc to the program, but this both bloats the size of the program and makes it impossible to fix security issues in the linked libraries without rebuilding the whole program.
Well, certain programs demand a specific version of the kernel. Usually these programs act as "drivers" for the rest of the system (e.g. the nvidia proprietary drivers: some parts act in kernel space while others run in user space, but the latter require that very specific kernel module and thus that very specific kernel build).
A less strict case is when a program demands a specific capability from the kernel. For example, almost all modern Linux virtualization systems rely on the cgroups feature. Thus, to use them you need a reasonably fresh kernel.
Nevertheless, a lot of the kernel API is stable, so you can rely on it. But usually programs don't call kernel routines directly. The typical way to use a kernel function is to call a corresponding library routine which wraps and leverages the kernel API. The main, most basic library of that kind is libc.
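To make that wrapper relationship concrete, here is a minimal Linux-only sketch of my own showing the same kernel facility reached both ways. Almost all real programs use the first form, which is why they depend on libc rather than on the kernel directly:

```c
/* Sketch: one kernel facility, two routes. The first goes through the
   libc wrapper write(2); the second traps into the kernel directly.
   Linux-specific (syscall() and SYS_write). */
#include <unistd.h>
#include <sys/syscall.h>

int main(void) {
    /* The usual route: the libc wrapper. */
    write(STDOUT_FILENO, "via libc wrapper\n", 17);

    /* The rare route: the raw system call behind the wrapper. */
    syscall(SYS_write, STDOUT_FILENO, "via raw syscall\n", 16);
    return 0;
}
```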
Technically, programs compiled for one version of libc (as well as of other shared libraries) can be used with slightly different versions of the corresponding libraries. For example, a lot of people use the Skype build compiled for SuSE on completely different Linux distributions. Skype is a pretty complex application with a lot of libraries linked in, but nevertheless it works without any significant problem, and so do a lot of other proprietary programs which couldn't be compiled for a given distribution or even for a given installation. But sometimes shit just happens :) Those binary incompatibilities are quite rare, but they do happen from time to time.

What OSes can I use if I want to use Intel Atom based board as an embedded system? [closed]

I'm planning to use an Intel Atom on a board for an embedded system. The embedded system will be running programs written in C for image processing. Since it's an embedded system, footprint is obviously a concern. I was thinking about using a modified version of the Linux kernel. Any other options?
I've written my own O/S for embedded systems, so I'm not too sure. But one project I've been wanting to try is uClinux, though that might not be enough for what you want to do. If you have more resources you might want Puppy Linux or Damn Small Linux. They all should have a C compiler to suit your needs.
Hope this helps!
P.S. Since I'm a new user I can only post one hyperlink; you'll have to google the other two, sorry!
I don't know how much memory you have, but Windows CE might be another choice. Going this route lets you stay with Windows tools (if you like those). There is also a Micro edition of the .NET Framework available for use on Windows CE.
It depends what services you need from your OS. The smallest footprint will be achieved by using a simple RTOS kernel such as uC/OS-II or FreeRTOS; however, support for devices, filesystems etc. will be entirely down to you or to third-party libraries, with the associated integration issues. Also, the simpler kernels do not utilise the MMU to provide protection between tasks and the kernel; typically everything runs as a single multithreaded application, as in the sketch below.
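For a feel of that "single multithreaded application" model, here is a rough FreeRTOS-style sketch of my own (board setup omitted; `vBlinkTask` and its body are placeholders, and a project-supplied FreeRTOSConfig.h is assumed): the kernel and all tasks link into one binary.

```c
/* Sketch: under a small RTOS kernel there is no process boundary;
   the scheduler and your tasks are compiled into a single image. */
#include "FreeRTOS.h"
#include "task.h"

static void vBlinkTask(void *pvParameters) {
    (void)pvParameters;
    for (;;) {
        /* toggle an LED, sample a camera line, etc. (placeholder) */
        vTaskDelay(pdMS_TO_TICKS(500));   /* sleep 500 ms */
    }
}

int main(void) {
    /* board/clock initialization would go here */
    xTaskCreate(vBlinkTask, "blink", configMINIMAL_STACK_SIZE,
                NULL, tskIDLE_PRIORITY + 1, NULL);
    vTaskStartScheduler();                /* never returns on success */
    for (;;) {}
}
```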
Broader and more comprehensive hardware support can be provided by 'heavyweights' such as Linux or Windows Embedded.
A middle ground can probably be achieved with a more fully featured RTOS such as eCos, VxWorks, Nucleus, or QNX Neutrino. QNX is especially strong on MMU support.
"Image processing" in an embedded box almost always means real-time image processing. Your number one concerns are going to be maximizing data throughput and minimizing latency processing overhead.
My personal prejudice, from having done real-time image processing (staring focal plane array FLIR nonuniformity compensation and target tracking) for a living, is that using an Intel x86-ANYTHING for real-time embedded image processing is a horrible mistake.
However, assuming that your employer has crammed that board down your throat, and you aren't willing to quit over their insistence on screwing up, my first recommendation would be QNX, and my second choice would be VxWorks. I might consider uCOS.
Because of the low-overhead, low-latency requirements inherent in moving massive numbers of pixels through a system, I would not consider ANYTHING from Microsoft, and I would put any Linux at a distant third or fourth place, behind QNX, VxWorks, and uCOS.
If you need to do real-time image processing, then you will likely want a Real-Time Operating System. If that is the route you want to take, I would recommend trying out QNX. I (personally) find that QNX has a nice balance of available features and low overhead. I have not used VxWorks personally, but I have heard good things about it as well.
If you do not need Real-Time capabilities, then I would suggest starting with a Linux platform. You will have much better luck stripping it down to meet your hardware limitations than you would a Windows OS.
The biggest factor you should consider is not your CPU, but the rest of the hardware on your board. You will want to make sure that whatever OS you choose has drivers available for all of your hardware (unless you are planning on writing your own drivers), and embedded boards can often have uncommon or specialized chipsets that don't yet have open-source drivers available. Driver availability alone might make your decision for you.

Windows CE vs Embedded Linux [closed]

Now I'm sure we're all well aware of the relative merits of Linux vs Windows on the desktop. However, I've heard much less about the world of embedded development. I'm mainly interested in solutions for industry, and am therefore uninterested in the iPhone or Android and more interested in these two OSes.
What are the relative trade-offs between the two platforms in the embedded world? If you were considering building a box for a specific project with custom hardware, a partially customised OS and a custom app then which would you choose and why?
I would assume that Windows CE wins on tools and Linux wins on both cost and possibly performance. However this is just utter speculation. Does anyone have any facts or experience of the two?
I worked for several years at a company that provided both CE and Linux for all of their hardware, so I'm fairly familiar with both sides of this equation.
Tools: Windows CE tools certainly are better than those provided by Linux, though the Linux tools are getting better.
Performance: Windows CE is real-time. Linux is not. The Linux kernel is not designed for determinism at all. There are extensions you can add to get sort-of real time, but CE beats it.
Cost: This is an area of great misunderstanding. My general experience is that CE is lower cost out of the box ($1k for Platform Builder and as low as $3 per device for a shipping runtime). "What?" you ask? "Linux is free." Well, not really so much, especially in the embedded arena. Yes, there are free distributions like Debian. But there are plenty of pieces that you might need that aren't in that free category: UI frameworks like Qt, Java runtimes and media codecs, just as a start. Also, most Linux distributions with a commercially-backed support system (e.g. MontaVista) are far from free.
Source Availability: Linux proponents may like to say that CE is a bad choice due to lack of source code. All I can say is that in over a decade of working with CE, half of it spent doing custom kernel and driver work for custom boards, I've only once had need for source that didn't ship with CE (they ship the vast majority of it). I like having source too, but Microsoft provides support, so in the rare case you might think you need that source, you can get them to fix the problem (the one time we needed source, Microsoft provided a fix, and for free, which is their model under CE).
Does this mean that CE wins every time? No. I wouldn't suggest that at all. If you are a Linux shop and you have lots of Linux experience and code assets, you'd be foolish to run out and go CE. However, if you're coming into it from scratch, CE usually has a lower TCO. Developers with Win32/C# experience are more prevalent and consequently less expensive. You also get a lot more "in the box" with CE than with most other distributions, meaning faster time to market if you don't already have these things done in-house.
I'll speak for the Linux side, at least for the category of software I'm familiar with (which is RF data collection equipment). Or industrial apps vs. consumer apps.
Windows CE (and its associated tools), in my humble (fairly recent) experience, is strongly biased toward creating a "Windows Experience" on a small screen. The user input mode emphasizes mouse-like actions. Logons, application selection, etc. all try to be as similar to standard Windows as possible.
If a user is driving a lift truck, or filling a picking cart, or moving material from one place to another, there's a problem.
And it's a moving target, particularly on the .NET side. The Compact .NET runtime is seriously handicapped, important libraries (like networking, data handling, and UI) are incomplete, and new versions too often deprecate the previous one. CE seems to be the stepchild in the Windows family (possibly because there's not a lot of active competition selling to the hardware integrators).
A nice stable rows-and-columns Linux console is a pretty handy context for many (in my experience most) high-use apps on a dinky screen.
Not much good for games on your cell-phone or Zune, though.
NOTE:
I think ctacke probably speaks accurately for the hardware integrator's side. I'm more aligned with the players further down the pipe - software integrators and users.
Choice is often made largely on perception and culture, rather than concrete data. And, making a choice based on concrete data is difficult when you consider the complexity of a modern OS, all the issues associated with porting it to custom hardware, and unknown future requirements. Even from an application perspective, things change over the life of a project. Requirements come and go. You find yourself doing things you never thought you would, especially if they are possible. The ubiquitous USB and network ports open a lot of possibilities -- for example adding Cell modem support or printer support. Flash based storage makes in-field software updates the standard mode of operation. And in the end, each solution has its strengths and weaknesses -- there is no magic bullet that is the best in all cases.
When considering Embedded Linux development, I often use the iceberg analogy; what you see going into a project is the part above the water. These are the pieces your application interacts with, drivers you need to customize, the part you understand. The other 90% is under water, and herein lies a great deal of variability. Quality issues with drivers or not being able to find a driver for something you may want to support in the future can easily swamp known parts of the project. There are very few people who have a lot of experience with both WinCE and Linux solutions, hence the tendency to go with what is comfortable (or what managers are comfortable with), or what we have experience with. Below are thoughts on a number of aspects to consider:
SYSTEM SOFTWARE DEVELOPMENT
Questions in this realm include CPU support, driver quality, in-field software updates, filesystem support, driver availability, etc. One of the changes that has happened in the past two years is that CPU vendors are now porting Linux to their new chips as the first OS. Before, the OS porting was typically done by Linux software companies such as MontaVista, or by community efforts. As a result, the Linux kernel now supports most mainstream embedded CPUs with few additional patches. This is radically different from the situation 5 years ago. Because many people are using the same source code, issues get fixed, and the fixes are often contributed back to the mainstream source. With WinCE, the BSP/driver support tends to be more of a reference implementation; OEMs/users take it and fix any issues, and that is where the fixes tend to stay.
From a system perspective, it is very important to consider flexibility for future needs. Just because it is not a requirement now does not mean it will not be a requirement in the future. Obtaining driver support for a peripheral may be nearly impossible, or be too large an effort to make it practical.
Most people give very little thought to the build system, or never look much beyond the thought that "if there is a nice GUI wrapped around the tool, it must be easy". OpenEmbedded is a very popular way to build embedded Linux products; it has recently been endorsed as the technology base of MontaVista's Linux 6 product, and is generally considered "hard to use" by new users. While WinCE build tools look simpler on the surface (the 10% above water), you still have the problem of what happens when you need to customize something, implement complex features such as software updates, etc. To build a production system with production-grade features, you still need someone on your team who understands the OS and can work at the detail level of both the operating system and the build system. With either WinCE or Embedded Linux, this generally means companies either need to have experienced developers in house, or hire experts to do portions of the system software development. System software development is not the same as application development, and is generally not something you want to take on with no experience unless you have a lot of time. It is quite common for companies to hire expert help for the first couple of projects, and then do follow-on projects in-house. Another feature to consider is parallel build support: with quad-core workstations becoming the standard, is it a big deal that a full build can be done in 1.2 hours versus 8? How flexible is the build system at pulling and building source code from various sources, such as diverse revision control systems?
Embedded processors are becoming increasingly complex. It is no longer good enough to just have the CPU running. If you consider the OMAP3 CPU family from TI, you have to ask the following questions: are there libraries available for the 3D acceleration engine, and can I even get them without committing to millions of units per year? Is there support for the DSP bridge? What is the cost of all this? On a recent project I was involved in, a basic WinCE BSP for the Atmel AT91SAM9260 cost $7000. In terms of developer time this is not much, but you also have to consider the ongoing costs of maintenance, upgrading to new versions of the operating system, etc.
APPLICATION DEVELOPMENT
Both Embedded Linux and WinCE support a range of application libraries and programming languages. C and C++ are well supported. Most business-type applications are moving to C# in the WinCE world. Linux has Mono, which provides extensive support for .NET technologies and runs very well in embedded Linux systems. There are numerous Java development environments available for Embedded Linux. One area where you do run into differences is graphics libraries. Generally the Microsoft graphical APIs are not well supported on Linux, so if you have a large application team of die-hard Windows GUI programmers, then perhaps WinCE makes sense. However, there are many options for GUI toolkits that run on both Windows PCs and Embedded Linux devices. Some examples include GTK+, Qt, wxWidgets, etc. The Gimp is an example of a GTK+ application that runs on Windows, and there are many others. There are C# bindings for GTK+ and Qt. Another feature that seems to be coming on strong in the WinCE space is the Windows Communication Foundation (WCF), but again, there are projects to bring WCF to Mono, depending on what portions you need. Embedded Linux support for scripting languages like Python is very good, and Python runs very well on 200MHz ARM processors.
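For a taste of what such a cross-platform toolkit looks like in C, here is a minimal GTK+ 3 sketch of my own (the application id and window title are placeholders; build flags come from `pkg-config --cflags --libs gtk+-3.0`):

```c
/* Sketch: a minimal GTK+ 3 window; the same source builds on a desktop
   PC and on an embedded Linux device with a GTK+ backend. */
#include <gtk/gtk.h>

static void on_activate(GtkApplication *app, gpointer user_data) {
    GtkWidget *window = gtk_application_window_new(app);
    gtk_window_set_title(GTK_WINDOW(window), "Embedded UI sketch");
    gtk_window_set_default_size(GTK_WINDOW(window), 320, 240);
    gtk_widget_show_all(window);
}

int main(int argc, char **argv) {
    GtkApplication *app = gtk_application_new("org.example.sketch",
                                              G_APPLICATION_FLAGS_NONE);
    g_signal_connect(app, "activate", G_CALLBACK(on_activate), NULL);
    int status = g_application_run(G_APPLICATION(app), argc, argv);
    g_object_unref(app);
    return status;
}
```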
There is often the perception that WinCE is realtime, and Linux is not. Linux realtime support is decent in the stock kernels with the PREEMPT option, and real-time support is excellent with the addition of a relatively small real-time patch. You can easily attain sub millisecond timing with Linux. This is something that has changed in the past couple years with the merging of real-time functionality into the stock kernel.
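The following sketch (mine, not the author's; it assumes a Linux box, root or CAP_SYS_NICE for SCHED_FIFO, and a `-lrt` link flag on older glibc) shows the usual way such sub-millisecond claims are checked: a periodic task that records how late each wakeup lands relative to its 1 ms deadline.

```c
/* Sketch: measure worst-case wakeup latency of a 1 ms periodic task,
   the basic jitter test for a PREEMPT/PREEMPT_RT kernel. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <time.h>

int main(void) {
    struct sched_param sp = { .sched_priority = 80 };
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        perror("sched_setscheduler (continuing without RT priority)");

    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);
    long worst_ns = 0;

    for (int i = 0; i < 1000; i++) {
        next.tv_nsec += 1000000;                       /* 1 ms period */
        if (next.tv_nsec >= 1000000000) { next.tv_nsec -= 1000000000; next.tv_sec++; }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);

        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        long late_ns = (now.tv_sec - next.tv_sec) * 1000000000L
                     + (now.tv_nsec - next.tv_nsec);
        if (late_ns > worst_ns) worst_ns = late_ns;    /* track the worst miss */
    }
    printf("worst-case wakeup latency: %ld us\n", worst_ns / 1000);
    return 0;
}
```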
DEVELOPMENT FLOW
In a productive environment, most advanced embedded applications are developed and debugged on a PC, not on the target hardware. Even in setups where remote debugging on a target system works well, debugging an application on a workstation works better. So the fact that one solution has nice on-target debugging where the other does not is not really relevant. For data-centric systems, it is common to have simulation modes where the application can be tested without a connection to real I/O. With both Linux and WinCE, application programming for an embedded device is similar to programming for a PC. Embedded Linux takes this a step further: because embedded Linux technology is the same as desktop and server Linux technology, almost everything developed for desktop/server (including system software) is available for embedded for free. This means very complete driver support (see the USB cell modem and printer examples above), robust file system support, memory management, etc. The breadth of options for Linux is astounding, but some may consider this a negative point and would prefer a more integrated solution like Windows CE where everything comes from one place. There is a loss of flexibility, but in some cases the tradeoff might be worth it. The number of packages that can be built for Embedded Linux systems using OpenEmbedded is one example of this breadth.
GUI TRENDS
It is important to consider trends for embedded devices with small displays being driven by Cell Phones (iPhone, Palm Pre, etc). Standard GUI widgets that are common in desktop systems (dialog boxes, check boxes, pull down lists, etc) do not cut it for modern embedded systems. So, it will be important to consider support for 3D effects, and widget libraries designed to be used by touch screen devices. The Clutter library is an example of this type of support.
REMOTE SUPPORT
Going back to the issue of debugging tools, most people stop at the scenario where the device is sitting next to a workstation in the lab. But what about when you need to troubleshoot a device that is being beta-tested halfway around the world? That is where a command-line debugger like gdb is an advantage, not a disadvantage. And how do you connect to the device if you don't have support for cell modems in New Zealand, or an efficient connection mechanism like ssh for shell access and transferring files?
SUMMARY
Selecting any advanced technology is not a simple task, and is fairly difficult to do even with experience. So it is important to be asking the right questions, and looking at the decision from many angles. Hopefully this article can help in that.
I have worked on projects that involved customizing the software of an OEM board, and I wouldn't say that Linux is cheaper. When buying a board you also need to buy the SDK, and you still need to pay even for the Linux version. Some manufacturers offer both Windows CE and Linux solutions for their boards, and there isn't a price difference. For Windows CE you also need Platform Builder and have to pay for the licenses, but it is easier to go without support.
Another important issue is whether you are building a user interface or a headless device. For devices that require an LCD screen and human interaction, it is much easier to go with Windows CE. If on the other hand you are building a headless device, Linux may be a sounder option, especially if network protocols are involved. I believe that Linux implementations are more reliable and easier to tweak.
With Linux you are never on your own and you are never dependent on one single entity to grant permissions. There are many support options, and you have the freedom to choose your support options for any part of the system from many competing sources.
With Windows CE you must adhere to the license and restrictions as set forth in the complex license agreements that must be agreed to. Get a lawyer. With Windows CE you have only one proprietary source for OS support, and you will proceed only as they see fit to support and provide what you need. You may not agree with their position, but you will have no recourse but to bend to what they prescribe. The costs of incremental components, modules, development kits, licensing, and support tend to pile up with proprietary platforms. In the longer term, what happens when the vendor no longer desires to support the platform and you do not have the rights to support and distribute it yourself? What happens when the vendor moves to newer technology and wants you to move along with them even though you may not be ready to make the move? $$$
Our experience with Windows solutions in general is that they tend to become more expensive over time. What was originally considered lowest TCO quickly gravitates towards a solution that is encumbered and costly to maintain and support. Licenses have to be re-negotiated over time, and new technologies, often unneeded, are forced into the picture at the whim of the provider for the sake of THEIR business needs. On top of that, the license agreements are CONTINUALLY changing; get a lawyer.
With Linux you have the freedom to provide in-house support and expertise without being encumbered against distributing the solution as you need. You also have the freedom to continue to use and support technology that original providers no longer want to support. Having the source code and the RIGHTs to do with it what you want (GPL, LGPL) is a powerful attractor when it comes to business continuity and containing costs while providing access to the very latest technologies or technologies that fit your needs.
I have developed network drivers that work both on RT Linux (to be more specific, the Linux preemptive kernel with the RT patch) and on Windows CE. My experience was that Windows CE was more stable in terms of real-time response. Frame timings also showed that Windows CE had less jitter.
On RT Linux we had all sorts of problems. For example, when the user moved the mouse, our frames were delayed. Guess what: certain variants of X Windows disable interrupts. You may also think you are safer on a console-only screen, but if you have VGA frame buffers enabled, you are doomed again. We had only one jitter problem with Windows CE: it happened when the USB controller was set to an incorrect mode in the BIOS and Windows CE spent a lot of time polling.
To be honest, Windows CE had more support. On Linux, you are on your own. You have to read every possible mailing list to understand what problems you may have.
a partially customised OS
Is much easier to achieve if the OS is open source (and you have the expertise).
Android is a good option for some embedded systems (it's Linux based).
You have many experts who are able to develop on this system.
You have access to many libraries in Java or C.
But it uses a lot of memory and energy.
What we often forget with paid/licenced software is that you have to deal with licenses. It takes time and energy! Then you have to track whether you are paying them correctly. It involves many different people with different skills, and it adds cost to every decision.
This cost is often not included in the studies which claim that open-source/free software is more expensive than paid software.
With "free software" it's way easier to deal with licenses, and you spend less time on these issues. Personally, I prefer to avoid unnecessary communications with the legal/financing team every time some piece of the software changes.
