Can we run two applications on an embedded device simultaneously - linux

Just curious: can we run two applications (any sample applications) on an embedded device (let's say an imx6) simultaneously, given that we generally have only one serial console to access the target?

Related

How to extend Shared Address Space Process Model to Devices and Services

I tried to generate code for a device, changing the implementation type to SharedLibrary and renaming the entry point to Device_Name.so. I was able to generate and build, but main.cpp kept a main function rather than a make_component to be called by ComponentHost. The device constructors deal with arguments that ComponentHost doesn't, such as the Device Manager IOR. I believe this functionality extension implies changing the source code of ComponentHost. Is it part of the REDHAWK roadmap? Any comments on how I can make it work?
So, are you trying to use the shared process space within a node to communicate between devices and services? I don't believe there is tooling specifically for this yet, but I think there is a way to do it. Just to be clear, I haven't tried this, but based on the test the bulkio ports use to determine local vs. remote transport usage, I think it will work.
If you look at the persona pattern, you'll see that there is a Programmable Device which is responsible for loading Persona Devices. Most of the details aren't necessary for what you're trying to do, but the pattern should be helpful. To accomplish communication between Devices using shared memory, you could generate a Programmable Device whose sole purpose is to forward parameters from the DeviceManager to the Personas. The Personas would then act as your Devices normally do, just launched in the same process space as one another.
The code generators for the Programmable and Persona Devices are not yet integrated into the IDE, so you'll have to create a new Device project in Eclipse for each Device you want (so that you'll have the SPD files). Be sure to add the appropriate AggregateDevice interface to your Devices. This lets the framework know that multiple Devices can technically be considered one entity, while still allowing you to communicate with each individually. Also make sure that the Programmable is an Executable Device, since it needs to launch the Persona Devices. Then, from the command line, you can run redhawk-codegen --pgdevice </path/to/programmable/spd> to generate a Programmable Device, and redhawk-codegen --persona </path/to/persona/spd> to generate your Persona Device(s).
Once all of this is done, you'll notice that the main function for your Programmable launches the Device as you described in your question. However, the main function for the Personas has code to launch the Device either as a standalone Device or simply as an object in its own thread.
This should allow the bulkio ports of the Programmable and Personas to communicate with each other via shared memory. Obviously this will break down if you attempt to push data out of the process, at least until someone adds interprocess shared memory via something like shm. Not sure if that's on the road map, but it would certainly be neat.
Update: It appears that interprocess shared memory was added in RH 2.1.2, so you should be able to communicate between collocated Devices, Services, and Components using that mechanism. This renders the above unnecessary, but I'm going to leave it for earlier versions of RH.
Let me know if you have any questions!
As of RH 2.1.2, the default behavior for Devices/Services/Components whose user code uses redhawk::buffer as the data memory allocator and the stream API to interact with the bulkio port is to use a shared-memory transport between C++ entities that are running in different processes.
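For reference, here is a rough, untested sketch of that user-code pattern, assuming a generated REDHAWK 2.1+ C++ component (hypothetically named MyComponent_i) with a bulkio float output port called dataFloat_out. Allocating with redhawk::buffer and writing through the stream API lets the framework negotiate the transport underneath the port:

    // Sketch only: assumes this runs inside the serviceFunction() of a generated
    // REDHAWK 2.1+ C++ component/device whose SPD declares a bulkio float output
    // port named "dataFloat_out" (the port member is created by the base class).
    int MyComponent_i::serviceFunction()
    {
        // Allocate the payload with the framework's allocator so it can be shared
        // with a collocated reader instead of being copied over CORBA.
        redhawk::buffer<float> data(1024);
        for (size_t i = 0; i < data.size(); ++i) {
            data[i] = static_cast<float>(i);
        }

        // Use the stream API; transport selection (shared memory vs. CORBA)
        // happens underneath the port, not in user code.
        bulkio::OutFloatStream stream = dataFloat_out->getStream("example_stream");
        if (!stream) {
            stream = dataFloat_out->createStream("example_stream");
        }
        stream.write(data, bulkio::time::utils::now());

        return NORMAL;
    }

The key point is that nothing transport-specific appears in user code; whether the data crosses the process boundary via shared memory or CORBA is decided during transport negotiation.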

Why is a Trusted Execution Environment more significant for mobile devices?

I've been trying to understand what a Trusted Execution Environment is and how it works. Why is there such a strong emphasis on mobile devices? I've been trying to find out what the difference is between personal computers and mobile devices with respect to a TEE. What am I missing?
Even though it's late, I will add my comments in the simplest possible way for reference.
As the world moves toward enterprise mobility, using mobile devices for work is becoming essential for many companies and organizations. From there a need arises to secure those devices, not only the data but the processes and memory allocation as well, especially when governments and sensitive departments start to use mobile devices.
Starting from the very low level of mobile device architecture: every mobile device has a processor, and processor manufacturers have come up with technology that creates two isolated areas running at the same time on the CPU (e.g. ARM TrustZone), controlled by the SoC (System on Chip).
The first area is the one everyone uses on mobile devices (Normal World / Rich Execution Environment, REE); the second is the secure area (Secure World / Trusted Execution Environment, TEE). Each area has its own operating system running on the same CPU, but their processes and memory allocation are totally separate.
Many mobile device manufacturers (e.g. Samsung) have started to utilize that area by loading a third-party secure operating system (OS) into it (e.g. Kinibi OS from Trustonic).
Developing applications (Trusted Applications, TAs) in the secure world is not an easy process; provisioning them there is another story, and integrating those applications with the normal world is yet another (some sort of special SDK provided by the TEE OS owner has to be used; see the sketch below).
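To make that integration step a bit more concrete, below is a minimal, untested sketch of a normal-world client using the GlobalPlatform TEE Client API (the interface exposed to normal-world code by TEE OSes such as OP-TEE). The TA UUID and command ID are placeholders for a hypothetical Trusted Application:

    // Minimal sketch of a normal-world client talking to a Trusted Application
    // through the GlobalPlatform TEE Client API (as provided by e.g. OP-TEE).
    // The UUID and command ID below are placeholders for a hypothetical TA.
    #include <tee_client_api.h>
    #include <cstdio>
    #include <cstring>

    int main()
    {
        // Placeholder UUID of the hypothetical Trusted Application.
        TEEC_UUID ta_uuid = { 0x12345678, 0x1234, 0x1234,
                              { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07 } };
        const uint32_t CMD_DO_SOMETHING = 0;  // placeholder command ID

        TEEC_Context ctx;
        TEEC_Session session;
        TEEC_Operation op;
        uint32_t err_origin = 0;

        // Connect to the TEE and open a session with the TA.
        if (TEEC_InitializeContext(NULL, &ctx) != TEEC_SUCCESS)
            return 1;
        if (TEEC_OpenSession(&ctx, &session, &ta_uuid, TEEC_LOGIN_PUBLIC,
                             NULL, NULL, &err_origin) != TEEC_SUCCESS) {
            TEEC_FinalizeContext(&ctx);
            return 1;
        }

        // Pass a single value into the secure world and read a value back.
        std::memset(&op, 0, sizeof(op));
        op.paramTypes = TEEC_PARAM_TYPES(TEEC_VALUE_INOUT, TEEC_NONE,
                                         TEEC_NONE, TEEC_NONE);
        op.params[0].value.a = 42;
        if (TEEC_InvokeCommand(&session, CMD_DO_SOMETHING, &op, &err_origin) == TEEC_SUCCESS)
            std::printf("TA returned %u\n", op.params[0].value.a);

        TEEC_CloseSession(&session);
        TEEC_FinalizeContext(&ctx);
        return 0;
    }

The TA itself lives in the secure world and is built and provisioned with the TEE OS owner's SDK; the normal-world side only ever sees it through sessions and commands like the above.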
It is worth mentioning that applications running in the TEE can have extraordinary privileges, so TEE OS owners normally limit what TAs can do.
Lastly, although the TEE is considered a secure area for sensitive processes (so far), there are other ways to achieve the same level of security (or even better) on mobile devices.

Can I run ndk-gdb connected to two devices?

I am trying to run two instances of ndk-gdb attached to two devices,
but based on this page I cannot find a way to do that...
https://developer.android.com/ndk/guides/ndk-gdb.html
Is it even possible to have two separate devices connected to two instances of ndk-gdb running over adb?

Is there a way to run FirefoxOS with multiple outputs (HDMI devices)

We are trying to get FirefoxOS to use multiple outputs (HDMI devices) simultaneously, so as to show applications running on either of the screens (presumably different apps on different screens). Has anyone tried it?
There's a proof-of-concept demonstration of the multiple-screen feature on Firefox OS:
https://www.youtube.com/watch?v=A9QbW-paPZo
You can contact the author for further information.

X11 networking on linux

I know you can connect to a remote X11 server and use it like a local X11 system.
My question is: can you connect multiple computers to work together and display (through their video outputs) a single instance of an X11 desktop?
Or, another phrasing: Can you process and display an image using several X11 servers?
Take a look at the Distributed Multihead X (DMX) project.
X11 is a protocol. If you use it over the network, the GUI application that you run remotely actually connects to your local X11 server. So yes, you can have multiple client applications running on the server machine that display on different X11 servers. As for processing images using an X11 server - what do you actually mean by that? The only thing that comes to mind is multiple monitors. If so, then yes - you can use a dedicated X11 server per monitor.
If I understood your question correctly, you want to have multiple computers collaborate on displaying a single X11 display. This is not directly possible.
However, you can have multiple video cards in a single computer and use the Xinerama extension to have the multiple cards show a single logical X server. This can allow you to use a single machine to drive several monitors with ease. (With video cards that support multiple outputs, you ought to be able to get up to four or six monitors without too much hassle. Dozens might be very difficult.)
I can't think of any mechanism that would allow a single keyboard and mouse to reliably work across multiple monitors driven by multiple computers. But if your problem is significantly restricted (if it really is just viewing an image via several X servers), then you could write a client application that renders only a portion of the image, and run multiple clients that each display their own portion -- so that, taken together, the image looks as if it is seamlessly displayed by several systems simultaneously. This is definitely a bit awkward though, as coordinating the system will require some thought.
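As a minimal, untested illustration of that "one client per X server" idea, the Xlib sketch below (a hypothetical tile.cpp, built with g++ tile.cpp -lX11) opens whichever display it is given on the command line and would draw only its assigned portion of the image there:

    // Hypothetical sketch: one instance of this client is started per X server,
    // each pointed at a different display and responsible for one tile of the image.
    #include <X11/Xlib.h>
    #include <cstdio>
    #include <unistd.h>

    int main(int argc, char* argv[])
    {
        // Display name such as "hostA:0" or "hostB:0"; NULL falls back to $DISPLAY.
        const char* display_name = (argc > 1) ? argv[1] : NULL;

        Display* dpy = XOpenDisplay(display_name);
        if (!dpy) {
            std::fprintf(stderr, "cannot open display %s\n",
                         display_name ? display_name : "(default)");
            return 1;
        }

        int screen = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                                         0, 0, 640, 480, 0,
                                         BlackPixel(dpy, screen),
                                         WhitePixel(dpy, screen));
        XMapWindow(dpy, win);
        // Real code would render this instance's tile of the image here,
        // e.g. via XPutImage(), based on which portion it was assigned.
        XFlush(dpy);
        sleep(10);  // keep the window visible briefly for the sketch
        XCloseDisplay(dpy);
        return 0;
    }

You would then run one instance per X server, e.g. ./tile hostA:0 and ./tile hostB:0, with some external coordination telling each instance which tile of the image it owns.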
If you want to drag windows from one screen to another, or display a part of a window here and another part of it there, then no, this is not possible with existing out-of-the-box software. You can try to modify a "virtual" X server such as Xephyr such that it uses multiple backend X servers for portions of its framebuffer. This is not exactly trivial but should be much easier than writing your own multi-box X server from scratch.
If you want to clone one desktop to many displays connected to different computers, you can try running VNC or RDP clients on all displays but one.
