Sat nav with Windows CE and iGO - windows-ce

I have a GPS 84H-3 sat nav which runs on Windows CE. It uses iGO for navigation.
The interface has an icon which opens the Windows CE desktop. It appears to give unrestricted access to the file system. It has reader versions of various MS Office programs; other than that it seems to serve no purpose.
I have 3 questions:
Why would the manufacturer leave access to the desktop in the device's interface?
Is it possible to download and run an updated version of iGO on the machine? (I found that the existing iGO version has an icon in the Programs folder, and the non-Windows interface has an app which lets me set the file the system runs when I click the navigation icon; remember that the file system is accessible.)
Is it possible to run alternative navigation software on the machine?

This question is not well suited to Stack Overflow, but I will try to answer:
1) To allow users to run other programs, simply to make the device more user-friendly.
2) Probably yes; I suggest you ask iGO for any updates to your software.
3) I suppose yes, but be aware that navigation software very often needs some form of integration with the device, i.e. it should take over the device's sound subsystem while generating navigation voice prompts, it should allow switching between the device UI and the navigation app, and so on. This might require changes on the navigation software's side (such as using the device's API).
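As a side note on point 2: if the device's launcher really does let you point the navigation icon at an arbitrary file, it is most likely just calling CreateProcess on a configurable path. A minimal Windows CE sketch of that idea (the path here is made up for illustration; on CE most CreateProcess parameters must be NULL or FALSE):

    #include <windows.h>

    /* Launch whatever executable the "navigation" icon is configured to run.
       The path is purely illustrative -- substitute the real location on the device. */
    int LaunchNavigation(void)
    {
        PROCESS_INFORMATION pi;

        /* Windows CE requires most CreateProcess arguments to be NULL/FALSE/0:
           no security attributes, no handle inheritance, no environment block,
           no current directory and no STARTUPINFO. */
        if (!CreateProcess(TEXT("\\Storage Card\\iGO8\\iGO8.exe"), NULL,
                           NULL, NULL, FALSE, 0, NULL, NULL, NULL, &pi))
            return -1;

        /* The handles aren't needed here; close them so the kernel objects are freed. */
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
        return 0;
    }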

Be aware that an embedded OS like Windows CE isn't the same as a desktop OS. The application is likely part of the OS and it's quite likely that you cannot replace it without replacing the entire OS, which would have to come from the device OEM. You might be able to "hide" the existing app with a new one, but it's also possible that any replacement will get lost when the device is restarted.
Running any other applications (a replacement navigation app or other) would depend on a lot of things. The app would have to be built for CE. The OS would have to have any dependencies the app needs. The app would have to "understand" or be configurable for any peripherals you may need to use (like the GPS).
It may be possible to install apps or override behavior, but it depends completely on how the device OEM implemented things. They have the choice of blocking all apps but the one(s) they want, which would give you zero ability to do anything, or they may have left it wide open. Short of some formal documentation from them, testing would be the only way to know for sure.

Related

What is meant by 'Universal Windows Platform (UWP) app is a Windows experience'?

In this link https://msdn.microsoft.com/en-us/library/windows/apps/dn726767.aspx
it is said that a 'Universal Windows Platform (UWP) app is a Windows experience'. What does 'Windows experience' actually mean?
Well, basically what it means is that by creating a UWP app you are creating an app that will run across the Windows platforms, thus giving the full Windows experience. The goal is to have an app that gives a familiar "experience" across all the devices that support UWP (Xbox One, PC, tablet, phone, IoT, HoloLens).
Microsoft’s language around Windows has changed in the last year. Where Windows was originally a brand indicating an interconnected system of software, it is now used philosophically to represent the mission statement to let people do more.
The idea is, because of the Windows experience, developers can create software that operates in a new paradigm. More specifically, users can experience software in a way that centers around them as a user and not the device they are using.
This new workflow approach downplays the built-in interoperability of Microsoft products, and highlights the opportunity to create software that can do far more to change the way users and companies leverage and experience software.
At its heart, the Windows experience is the experience that defines a better way to use software. This is a subjective thing depending on the type of user or industry – but it is also a far more broad-reaching definition that no longer simply implies: “build an app that can run on multiple devices”.
Good job teasing out this new language. Not everyone has noticed yet.
At the core of the Universal Windows Platform is the technology enabling code written for one Windows device to seamlessly transition to other devices and form factors. The Windows experience is the full panoply of Microsoft services, including those targeting iOS, Android, and traditionally competing products like Salesforce.
PS: the Windows experience is not the Windows Experience Index, the performance measurement tool introduced in Vista to evaluate a machine's readiness for advanced graphics, etc.
The Windows experience, in the new mission-oriented form, is intended to promote a love for Windows - but thinking of Windows not as an Operating System, but, instead, as a family of reliable solutions. It's a nice change, and I (personally) am excited to see how it inspires developers.
I hope this helps.
I believe the intention is to encourage those creating software to not think about an isolated app that just runs on a single device (or class of devices) but to create an "experience" that can travel with the user across multiple devices.
For example, don't just think about creating a phone app or a desktop/tablet app. Instead think about how the user will experience interacting with your software (and presumably the same data) as they use different devices, in the Windows 10 family, at different times and in different ways.
Windows as an ecosystem has been converging for years in terms of developing for different devices. With Windows 10, you finally have a true universal platform where you can develop for phones, tablets, desktops, HoloLens, Xbox, etc. with one code base. Sure, there are device-specific API tweaks, but those devices run the same core, allowing you, the developer, to create experiences across many different devices!

GPS navigation software/SDK for Linux

Is there a (open source or commercial) software solution available for the Linux platform to build a custom embedded navigation device? It should be able to display maps and do routing (just like a TomTom/Navigon/Garmin/... navi device).
Unfortunately all navigation solutions seem to target Windows CE only.
Something based on OpenStreetMap data is not an option, because the map data is IMHO not always good enough for serious routing / driving instructions.
Since I've been searching for a long time now without luck, I'm not too restrictive on the implementation details; however, it should be possible to extend the software with custom functions or, ideally, to embed the navigation in my own software.
Android with Google Maps comes to my mind, but I'd like to avoid setting up Android for my device.
Alternatively, if there is no such solution, I might use an end-user navigation device if it allows me some kind of communication with my own device to control it.
I'm open to any suggestions, thanks.
There is a huge list here. Take a look to see if anything suits your needs.

Looking for a super tiny Linux distro whose sole purpose is running an AIR application?

I'm looking for a really, really small Linux distribution, or a process for making my own, whose sole purpose is to get an AIR application to launch full screen and stay there. Essentially I'm building a home kitchen computer that runs entirely as an AIR app.
I have looked into using Windows XP and Windows XP Embedded, but they pose so many issues that I figured I'd try modern Linux.
I have also seen Tiny Core Linux, which looks interestingly small, but I'm not sure what issues that poses with regard to running AIR and hardware-accelerated display. I've also thought about stripping down an Ubuntu installation, but I'm sure somebody must have done this already; Google is just failing me right now...
I'm also interested in running an "embedded" version of, say, Android and running the AIR app on some ARM-based hardware, again with just the AIR runtime - although this is less preferred, as it's more complex.
I'm also hooking this up to a touch-screen monitor (not yet arrived), so I'll need to hunt down or write some drivers for translating the touch events into something AIR can understand... (this was my main reason for considering Windows, in that all the drivers would just work).
What I'm after
Minified Linux kernel with JUST the drivers for the box I need
X Display with accelerated graphics support (Doesn't have to be X if AIR can run on a frame buffer?)
Running a Full screen AIR application (simple enough)
Ability to write back to the filesystem (enough support for AIR)
SSH Access for remote control
Samba for updating the filesystem (easier to maintain the system)
Touch screen support (3M Ex III I think...)
Audio support
Don't need
Don't need any window manager or any other GUI tools unless required by AIR
Don't need any toolbars or file managers or anything; The AIR app is the "OS"
Don't need any package managers or repos
Don't need multi user or logging in; everything can just run as an unprivileged account
Don't need to
I don't mind hand-crafting the filesystem and configs if that makes it easier; I'm mainly looking for a "filesystem" that is as tiny as possible that I can just plop my AIR app into and write some scripts to get it to start when the X server starts.
Thanks,
Chris
Try an embedded Linux build system such as Buildroot. It can build an entire system from source, and the result is very lightweight. The basic system is less than 1 MB in size.
Ended up going with Tiny Core. Very tiny and quick to boot up. You can also write extensions for it, and you don't have a persistent drive, which allows you to just switch the thing off without worrying that it's going to break something -- exactly what you need in a kitchen :-D.
My current plan is to:
Just set up a working version using Ubuntu as this is mostly supported by Adobe
Slowly strip it back and try to get as few things as possible to start on boot
Try building my own distro/package from source and selecting only the packages I need
Compile my own kernel with nearly everything turned off and just leave on the things I need

Remote browser access to Windows CE/Mobile/Embedded emulators?

My company has a Compact Framework.NET WinForms application which runs on rugged handhelds manufactured by companies like Motorola, Intermec and Psion. These are expensive devices with built-in barcode scanners that are used in harsh conditions.
The configuration of the handheld application is managed by business users through our web site. The devices pick up the configuration when they sync from within the handheld application. Field workers use the handhelds, business users use the web site.
The business users have expressed the desire to, for lack of a better description, configure and preview or even fully use the actual handheld application through a web browser. They want to make configuration changes in the web site and immediately see what the impact will be in the handheld, without having to have a physical device (again, the devices are quite expensive). They want to be able to create training materials or conduct sales meetings and be able to demonstrate the application to their customers without having a physical device on hand.
Microsoft offers several Device Emulators, but they are probably too complex for business users; they are developer tools. One idea might be to somehow use the emulators within virtual machines, possibly in conjunction with Terminal Services, or even some kind of clever screen capture/VNC to show an emulated device in a browser. I suspect running emulators in this fashion may not exactly be a scalable solution, however. Also, only one emulator at a time on a single machine can be "cradled" and connected to the network.
I'm looking for any suggestions which might help me meet the business users' requirements.
Thanks.
The only thing I can think of offhand is not that simple, but would probably be useful (and certainly the only "true" way for them to test).
I'd create a service that works like the Remote Display app (part of the WinMo developer power tools; it also ships with Platform Builder for CE). In fact, it might just use that app (the source code for it actually ships with Platform Builder, so the eval version of PB would get you that source).
You would then create a web interface that acts as a "shell" for that service, marshalling the display image out to a web page and sending image clicks back to the device as mouse events.
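A very rough sketch of the device-side half of that idea (this is my own illustration, not the Remote Display source): grab the screen with GDI, then hand the pixels to whatever transport feeds the web front end. Encoding and networking are stubbed out, and the function name is made up:

    #include <windows.h>
    #include <stdlib.h>
    #include <string.h>

    /* Capture the current screen into a 24-bit, top-down DIB.
       The caller must free() the returned buffer; pushing it to the web
       "shell" (and replaying clicks as mouse events) is left as a stub. */
    unsigned char *CaptureScreen(int *width, int *height)
    {
        HDC screen = GetDC(NULL);                   /* DC covering the whole display */
        int w = GetDeviceCaps(screen, HORZRES);
        int h = GetDeviceCaps(screen, VERTRES);

        BITMAPINFO bmi;
        memset(&bmi, 0, sizeof bmi);
        bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
        bmi.bmiHeader.biWidth       = w;
        bmi.bmiHeader.biHeight      = -h;           /* negative height = top-down rows */
        bmi.bmiHeader.biPlanes      = 1;
        bmi.bmiHeader.biBitCount    = 24;
        bmi.bmiHeader.biCompression = BI_RGB;

        void *pixels = NULL;
        HDC mem = CreateCompatibleDC(screen);
        HBITMAP dib = CreateDIBSection(mem, &bmi, DIB_RGB_COLORS, &pixels, NULL, 0);
        HGDIOBJ old = SelectObject(mem, dib);

        BitBlt(mem, 0, 0, w, h, screen, 0, 0, SRCCOPY);   /* copy screen -> DIB */

        size_t stride = ((size_t)w * 3 + 3) & ~(size_t)3; /* rows padded to 4 bytes */
        unsigned char *copy = (unsigned char *)malloc(stride * (size_t)h);
        if (copy && pixels)
            memcpy(copy, pixels, stride * (size_t)h);

        SelectObject(mem, old);
        DeleteObject(dib);
        DeleteDC(mem);
        ReleaseDC(NULL, screen);

        *width = w;
        *height = h;
        return copy;   /* TODO: encode (PNG/JPEG) and send to the web service */
    }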

X11 / X - Linux desktop software, I don't understand how this fits together

I recently started using Linux (where I work is a Microsoft shop, so I only code in C#, work with MS products etc).
I'm trying to understand at a high level how some basic things in Linux hang together.
I've been reading www.linfo.org
Anyway, I've never quite got what X is.
From reading this article it seems to me that X is a layer that sits on top of the operating system (one X server sitting on top of the OS??) and X client applications make requests to the X server.
I think KDE, Xfce and Gnome are display managers; are they X server clients then?
I'm quite confused where everything sits.
Any explanation would be really appreciated!
It's all very modular and flexible; however this leads to complexity.
The "X Server" drives the display device. It provides graphics services to clients, and those services are pretty simple - such as:
"Give me a window frame to draw in"
"Put this bitmap here"
"Draw a horizontal black line 100px wide"
"Render the text 'hello' at (100,100)"
"Tell me if any mouse clicks or key presses have been aimed at my window frame"
There is a library called Xlib, provided by X, that has a standard interface for all these simple services. Any program that wants to use the X server's display eventually uses this client library and is called an X Client. Xlib knows how to connect to an arbitrary X server - on the local machine, or via TCP/IP across the LAN, or across the world - to call these services.
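To make that concrete, here is a minimal Xlib client sketch (assuming a standard X11 development setup; build with something like cc demo.c -lX11). It connects to whichever X server the DISPLAY variable names, asks for a window, draws into it, and waits for events:

    #include <X11/Xlib.h>
    #include <stdio.h>

    int main(void)
    {
        /* Connect to the X server named by the DISPLAY environment variable. */
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) {
            fprintf(stderr, "cannot open display\n");
            return 1;
        }

        int screen = DefaultScreen(dpy);

        /* "Give me a window frame to draw in" */
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                                         10, 10, 300, 200, 1,
                                         BlackPixel(dpy, screen),
                                         WhitePixel(dpy, screen));

        /* "Tell me about exposures, mouse clicks and key presses" */
        XSelectInput(dpy, win, ExposureMask | ButtonPressMask | KeyPressMask);
        XMapWindow(dpy, win);

        for (;;) {
            XEvent ev;
            XNextEvent(dpy, &ev);      /* blocks until the server sends an event */

            if (ev.type == Expose) {
                /* "Draw a horizontal black line" and "render the text 'hello'" */
                XDrawLine(dpy, win, DefaultGC(dpy, screen), 20, 50, 120, 50);
                XDrawString(dpy, win, DefaultGC(dpy, screen), 100, 100, "hello", 5);
            }
            if (ev.type == KeyPress || ev.type == ButtonPress)
                break;                 /* any click or key press quits */
        }

        XCloseDisplay(dpy);
        return 0;
    }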
The Window Manager, which is just another X client program, is in charge of the "look and feel" of the desktop - how you move and arrange windows, etc. Because the window manager draws all the window decorations, it can make the desktop look like WindowsXP, or a Mac, or NeXTSTEP.
Part of the philosophy of X was to define "mechanism and not policy" - meaning, they give you tools to do it, but don't tell you how to use those tools. One such tool is the window manager, which can be replaced at will.
Many modern X applications are written to use a desktop environment such as Gnome or KDE. This offers these programs a consistent set of buttons and controls to draw, and a consistent interface for some things not traditionally included in X, but often considered part of a desktop - such as how to respond to drag-and-drop or how to present a standard file chooser dialog box.
The desktop environment usually provides an object model or programmatic interface that takes care of making all the simple X client requests and lets the program handle more important things. Removing these low-level calls yields another important benefit - platform independence.
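For contrast with the raw Xlib calls above, here is a sketch of the same "open a window" task written against a desktop-environment toolkit (GTK 3 is used here purely as an example); the toolkit makes the low-level X requests on the program's behalf:

    #include <gtk/gtk.h>

    /* Build with: cc demo.c `pkg-config --cflags --libs gtk+-3.0` (assumed setup) */
    int main(int argc, char *argv[])
    {
        gtk_init(&argc, &argv);            /* connects to the display for us */

        GtkWidget *window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
        gtk_window_set_title(GTK_WINDOW(window), "hello");

        /* Quit the main loop when the window manager closes the window. */
        g_signal_connect(window, "destroy", G_CALLBACK(gtk_main_quit), NULL);

        gtk_widget_show_all(window);
        gtk_main();                        /* event loop; X requests happen underneath */
        return 0;
    }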
Many desktop environments include a window manager, so that the look and feel of window controls and buttons is consistent and works with the desktop metaphor provided by the environment. However, it can usually still be switched out.
The separation of the X Server (running the display) and the X Client (wanting to use the display) has a few implications:
The graphics system is separate from the GUI programs, and they are separated about as completely as a web browser and web server are.
So the GUI program might not be displaying on the local machine - just like a web browser doesn't have to point at a web server on the local machine.
A machine can run JUST the client, with the X server elsewhere.
The machine with the display doesn't have to run the client - it can run JUST the X server, and all the clients can run on a dedicated machine. This is the original thin client: big programs running on big central server - with graphical user interaction handled by dedicated hardware on the desk in front of the user.
You need to know what your X server's network address is so you can tell GUI programs where to display their GUI. (This is usually done by setting the DISPLAY environment variable; see the short sketch after this list.)
You can display many programs, from many different machines, all on the same desktop at the same time. It is all handled seamlessly, including cut-and-paste.
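A tiny sketch of that DISPLAY mechanism in client code (the remote host name below is made up for illustration):

    #include <X11/Xlib.h>
    #include <stdio.h>

    int main(void)
    {
        /* NULL means "use whatever $DISPLAY says", e.g. ":0" or "otherhost:0". */
        printf("would connect to: %s\n", XDisplayName(NULL));

        /* Or name the server explicitly: display 0 on a (hypothetical) remote host. */
        Display *dpy = XOpenDisplay("otherhost:0");
        if (!dpy) {
            fprintf(stderr, "cannot reach the remote X server\n");
            return 1;
        }
        XCloseDisplay(dpy);
        return 0;
    }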
X11 is a network protocol, currently at release 7 (hence X11R7). It encapsulates graphics and input information, and connects an X client (application or window manager) running on a local or remote machine to the X server currently driving the local screen and input devices.
Gnome, KDE, XFCE, and LXDE are desktop environments; they contain pieces that talk to/with the X server (metacity, kwin, etc.), but also consist of specifications that applications must follow and libraries that are available in order for an application to "belong" to the DE.
In addition, it's worth remembering that the X server is just another program that gets run under Linux. There's nothing special about it; it just happens to know how to grab onto the graphics card and take over the monitor using video drivers.
You can (theoretically) run Linux very happily without ever running an X server - although, of course, you would be limited to command-line programs.
That's how linux organises itself - kernel at the base, then a set of programs that provide functionality to higher level programs, which themselves provide functionality to higher level programs, all building up into a complete stack of software oriented to whatever the job of the machine is (say, general desktop, software development, web server, etc).
Beyond the kernel and its modules, nothing is 'special'.
Wikipedia has some info about it.
