Linux and RTOS using an SoC (ARM, Xilinx)

I am facing a design "issue". I have a board with a Xilinx Zynq SoC, which includes a dual-core ARM Cortex-A9, and I need to develop a control application with real-time properties (hard deadlines on response time) alongside an application that does heavy processing (image processing etc.), with some basic communication between them. Most importantly, I need to be able to control the Linux part: at least to suspend or "pause" it somehow, and ideally to be able to shut it down and then run it again. So I was wondering how to combine these.
One option could be RTLinux, which, according to the descriptions I found, offers the possibility of running a real-time kernel with the Linux kernel next to it as a thread, but it seems that it is now proprietary and owned by Wind River.
Then I stumbled upon MicroBlaze, where it would be possible to "create" a soft processor in the programmable logic, but I am not sure whether I can run an RTOS on the ARM and Linux there?

There are two things that seem to be known as RTLinux. The one you mention, a Wind River revival of the MERT system, is a product of that company. The other, usually written "RT Linux", is a real-time patch to the mainline kernel which provides deterministic scheduling and fine-grained kernel preemption.
I think it is the latter one that you want. Ten seconds of googling indicates that there is a Kconfig target for this SoC, so all the pieces you need should be there.
Do remember there is more to a real-time system than just the ability to be real-time; the subsystems it relies on also have to be well behaved.

Given your description, you have (at least) the following design options:
Dual-kernel approach: this means patching the Linux kernel with a (quite invasive) patch that runs a tiny real-time kernel alongside the standard kernel. This approach can reach very good real-time performance (latencies on the order of microseconds) at the cost of complexity. It was implemented by the RTLinux project (acquired and then discontinued by Wind River), then by RTAI (mostly focused on x86) and by Xenomai.
If you go down this path, check whether Xenomai supports your specific SoC; then patch, configure and rebuild the kernel; and finally write the real-time code against Xenomai's API.
Improving the responsiveness of the standard Linux kernel: this is what the PREEMPT_RT project aims at. The achievable real-time performance is lower than with the previous approach, but you don't have to write real-time-specific code. With this approach, you patch and build the kernel, then check whether the real-time performance is sufficient for your needs (a sketch of what the real-time code looks like in this case follows after this list of options).
Synthesizing a MicroBlaze soft core on the FPGA, then running Linux on the ARM cores and the real-time code (either bare-metal or with an RTOS) on the MicroBlaze.
Unfortunately, your specific SoC does not support ARM's virtualization extensions. Otherwise there would be the additional option of a multi-OS approach: running the Linux OS on one ARM core and the real-time code (either bare-metal or with an RTOS like ERIKA Enterprise) on the other ARM core, under a hypervisor like Jailhouse or Xen.
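Coming back to the PREEMPT_RT option: the real-time part there is just a normal POSIX thread with a real-time scheduling policy and locked memory, no special API. A minimal sketch (the 1 ms period and priority 80 are arbitrary values for illustration; compile with -lpthread and run it with root or real-time privileges):

    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <time.h>

    #define PERIOD_NS (1000 * 1000)   /* 1 ms control period, arbitrary */

    static void *rt_loop(void *arg)
    {
        struct timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);
        for (;;) {
            /* ... read sensors, compute control output, write actuators ... */
            next.tv_nsec += PERIOD_NS;
            if (next.tv_nsec >= 1000000000L) {
                next.tv_nsec -= 1000000000L;
                next.tv_sec++;
            }
            /* Sleep until the next absolute deadline. */
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        pthread_attr_t attr;
        struct sched_param sp = { .sched_priority = 80 }; /* arbitrary RT priority */

        /* Lock all pages to avoid page faults in the real-time path. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
            perror("mlockall");

        pthread_attr_init(&attr);
        pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
        pthread_attr_setschedparam(&attr, &sp);
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);

        if (pthread_create(&tid, &attr, rt_loop, NULL) != 0) {
            perror("pthread_create (missing real-time privileges?)");
            return 1;
        }
        pthread_join(tid, NULL);
        return 0;
    }

The same code runs on a stock kernel too; the PREEMPT_RT patch is what makes the wake-up latencies of that SCHED_FIFO thread deterministic.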

Related

Best way to simulate old, slow processor on modern hardware?

I really like the idea of running and optimizing my software on old hardware, because you can viscerally feel when things are slower (or faster!). The most obvious way to do this is to buy an old system and literally use it for development, but that would slow down my IDE, compiler and all other development tasks, which is less helpful and (possibly) unnecessary.
I want to be able to:
Run my application at various levels of performance, on demand
At the same time, run my IDE, debugger, compiler at full speed
On a single system
Nice to have:
Simulate real, specific old systems, with some accuracy
Similarly throttle memory speed, and size
Optionally run my build system slowly
Try using QEMU in full system emulation mode, but keep in mind that it uses more CPU resources.
https://stuff.mit.edu/afs/sipb/project/phone-project/OldFiles/share/doc/qemu/qemu-doc.html
QEMU has two operating modes:
Full system emulation. In this mode, QEMU emulates a full system (for example a PC), including one or several processors and various peripherals. It can be used to launch different Operating Systems without rebooting the PC or to debug system code.
User mode emulation (Linux host only). In this mode, QEMU can launch Linux processes compiled for one CPU on another CPU.
The possible architectures can be seen here:
https://wiki.qemu.org/Documentation/Platforms

Is it possible to combine Linux (one core) and bare-bone firmware (second core) on one dual-core computer?

I was looking at the Embedded ECG data acquisition system project from Instructables, and there is a TODO mentioned:
Combining the OS and bare-bone firmware
UNDER CONSTRUCTION
** Since the bootloader only loads one firmware to the Core,
I need to modify the ELF file, to have Linux and bare-bone Core at the same time **
It seems to me like an interesting approach to getting a full-featured Linux and a critical real-time OS on one board (for example a Raspberry Pi). Is it really possible? I have heard that Linux can be set up not to use some cores. But I suppose that Linux uses virtual memory and bare-bone firmware usually does not. Can memory be shared between these OSes? What about interrupts? Can these two OSes handle interrupts separately? Can the boot loader load these two systems onto both cores at once? I can imagine that one thread in the boot loader would jump to the address of the bare-bone OS. Is that the correct approach?
Yes, it is possible, even if the full setup is not straightforward.
A couple of examples:
Xilinx released a white paper explaining how to run Linux + FreeRTOS on a dual-core Zynq ARM
Evidence explained how to run Linux + ERIKA Enterprise RTOS on a dual-core Freescale i.MX6 ARM
Those examples are based on partitioning the system statically, by hard-coding the assignment of the different cores to different OSs.
If your system is capable of hardware-assisted virtualization, you can use a hypervisor to create (and enforce) such a partitioning, for example Siemens' Jailhouse, KVM or Xen.
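For the basic communication between the two sides, a common pattern is to reserve a chunk of physical RAM that Linux does not use for itself and map it from both sides. A minimal Linux-side sketch using /dev/mem (the 0x30000000 address and 4 KiB size are made-up placeholders; the real region comes from your memory map or device-tree reservation, cache attributes need care, and the bare-metal core simply reads and writes the same physical addresses):

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define SHM_PHYS_ADDR 0x30000000UL  /* placeholder: RAM region reserved from Linux */
    #define SHM_SIZE      4096UL

    int main(void)
    {
        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        if (fd < 0) { perror("open /dev/mem"); return 1; }

        void *map = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, SHM_PHYS_ADDR);
        if (map == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        volatile uint32_t *shm = map;

        shm[0] = 0xCAFEBABE;           /* "command" word for the bare-metal side */
        while (shm[1] != 0xDEADBEEF)   /* poll for the bare-metal side's "ack" word */
            usleep(1000);

        printf("bare-metal core answered\n");
        munmap(map, SHM_SIZE);
        close(fd);
        return 0;
    }

A real design would replace the polling with a proper mailbox or inter-processor interrupt, but the partitioned-memory idea is the same.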
Kind of. This is what people already do to some extent with the network stack / drivers. For example, the IsoStack idea works in a similar way. There's a project which actually implements this on Linux by dedicating cores to network cards, but my google-fu is failing me.

Hardware clock signals implementation in Linux Kernel

I am looking for some pointers to understand how the Linux kernel implements the setting up of various hardware clocks. This basically relates to setting up the clocks that hardware blocks like the LCD, UART etc. will use. For example, when Linux boots, how does it handle setting up the clocks for the UART or USB? Maybe something like a clock manager.
I am basically trying to implement something similar for a different OS on new hardware that I am working on. Any help would be really appreciated.
[Edit]
Thanks for the replies and the links. So here is what I have implemented up until now; this should give you an idea of where I'm headed.
I looked up the hardware reference manual (HRM) for the particular system I'm targeting and wrote some code to monitor/modify the signals/pins of the peripherals I am interested in, i.e. turning them ON/OFF from the command line. A collection of these clocks/signals together controls a peripheral; the HRM says that if you want to turn on the UART or whatever, then turn on such-and-such signals/pins. And @BjoernD, yes, I am using something like mmap() to talk to the peripherals.
The meat of my question is that I want to understand the design and implementation of a clock/peripheral manager that builds on the utility I have already written. This clock/peripheral manager would give me control over enabling/disabling the peripherals I want; it would also let me change the init code that is running right now. In addition, at run time processes could call this manager to turn devices ON/OFF so that power consumption is optimized. It might not make perfect sense yet, as I'm still trying to wrap my head around it myself; something like the sketch below is what I have in mind.
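To make that concrete, here is a minimal sketch of the kind of interface I am picturing (all names and register offsets are hypothetical; the real values would come from the HRM, and reg_set_bit()/reg_clear_bit() stand for the mmap()-based utility I already have):

    #include <stdint.h>

    /* Hypothetical peripheral IDs; the real list comes from the HRM. */
    enum periph_id { PERIPH_UART0, PERIPH_USB0, PERIPH_LCD, PERIPH_COUNT };

    /* One entry per peripheral: which clock-gate register/bit controls it. */
    struct periph_clk {
        uint32_t gate_reg_off;  /* offset of the clock-gate register */
        uint32_t gate_bit;      /* bit within that register */
        int      refcount;      /* how many users currently need this peripheral */
    };

    static struct periph_clk clk_table[PERIPH_COUNT] = {
        [PERIPH_UART0] = { .gate_reg_off = 0x0150, .gate_bit = 20 }, /* made-up values */
        [PERIPH_USB0]  = { .gate_reg_off = 0x0154, .gate_bit = 4  },
        [PERIPH_LCD]   = { .gate_reg_off = 0x0158, .gate_bit = 1  },
    };

    /* Provided by the existing mmap()-based utility. */
    extern void reg_set_bit(uint32_t reg_off, uint32_t bit);
    extern void reg_clear_bit(uint32_t reg_off, uint32_t bit);

    /* Enable a peripheral's clock; reference counted so shared clocks stay on. */
    void clk_enable(enum periph_id id)
    {
        struct periph_clk *c = &clk_table[id];
        if (c->refcount++ == 0)
            reg_set_bit(c->gate_reg_off, c->gate_bit);
    }

    /* Disable a peripheral's clock once the last user releases it. */
    void clk_disable(enum periph_id id)
    {
        struct periph_clk *c = &clk_table[id];
        if (c->refcount > 0 && --c->refcount == 0)
            reg_clear_bit(c->gate_reg_off, c->gate_bit);
    }

Is a reference-counted table like this roughly how existing OSes structure it?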
Now I'm sure something like this has been implemented in Linux, or for that matter in any OS, for power reasons (nobody would want to waste power by turning on all peripherals at boot time). I want to understand its software architecture; a reference from any OS would do for now, at least to get a head start. Also, I am not writing my own OS: there is an OS in place, but I'm looking more at board-level software, a.k.a. a BSP. But thanks for the OS links anyway, they are really good. Appreciate it.
Thanks!
What you want to achieve is highly specific to (a) the platform you are using and (b) the device you want to drive. For instance, on x86 there are three ways to communicate with a device:
Interrupts allow the device to signal the CPU. The OS usually provides mechanisms to register interrupt handlers - functions that are called upon occurrence of an interrupt. In Linux, see request_irq() and friends in include/linux/interrupt.h (a small sketch follows after this list).
Memory-mapped I/O is physical memory of the device that the platform's BIOS makes available in the same way you also access plain physical memory - simply by writing to a memory address. What exactly is behind such memory (e.g., network interface config registers or an LCD frame buffer) depends on the device and is usually specified in the device's data sheet.
I/O ports are accessed through a special address space and special instructions (inb/outb and co.). Other than that, they work similarly to I/O memory.
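To make the interrupt case concrete, here is a hedged sketch of a Linux kernel module registering a handler (the IRQ number 42 and the "mydev" name are placeholders; on real hardware the number comes from the platform, PCI or device-tree code):

    #include <linux/init.h>
    #include <linux/interrupt.h>
    #include <linux/module.h>

    #define MY_IRQ 42               /* placeholder IRQ number */

    static int my_dev_cookie;       /* dev_id cookie required for shared IRQs */

    static irqreturn_t my_handler(int irq, void *dev_id)
    {
        /* Acknowledge the device, read status registers, schedule deferred work... */
        return IRQ_HANDLED;
    }

    static int __init my_init(void)
    {
        /* Share the line with other devices; "mydev" shows up in /proc/interrupts. */
        return request_irq(MY_IRQ, my_handler, IRQF_SHARED, "mydev", &my_dev_cookie);
    }

    static void __exit my_exit(void)
    {
        free_irq(MY_IRQ, &my_dev_cookie);
    }

    module_init(my_init);
    module_exit(my_exit);
    MODULE_LICENSE("GPL");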
There is a multitude of ways to find out what resources a device provides and where the BIOS mapped them. Some platforms use ACPI tables (google for the 1,000+ page spec), PCI provides info on devices in a standardized way through the PCI config space, USB has similar ways of discovering devices attached to the bus, and some devices, e.g. UARTs, are simply specified to be available at a pre-configured I/O range that is fixed for your platform.
As a start for understanding Linux, I'd recommend "Understanding the Linux Kernel". For specifics on how Linux handles devices and what it takes to write drivers, have a look at "Linux Device Drivers". Furthermore, you will need to look at the peculiarities of your platform and of the device you want to drive.
If you want to start your own OS, a UART is certainly something that will be veeery helpful for printing debug output, so you might want to go for that first.
Now that I wrote down all this, it seems that your actual question is: How to get started with Operating System design. This question should be highly valuable for you: What are some resources for getting started in operating system development?
The two big power users in most computers are the CPU and the disks. Both of these have power-saving capabilities in Linux: the CPU clock can be slowed down when the system is not busy, and the disk motors can be stopped when no I/O is happening. For a UART, even if you save all of the power it uses by turning off its clock, the saving is still tiny compared to the others, because a UART doesn't have much logic in it.
Best ways to save power are
1) more efficient power supply
2) replace rotating disk with SSD
3) Slow down the CPU and memory bus (see the sketch below for how the CPU side of this is exposed on Linux)
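On Linux, CPU clock slowing is exposed through the cpufreq subsystem in sysfs. A minimal sketch that requests the "powersave" governor on CPU 0 (assuming a kernel built with cpufreq support; it needs root, and the set of available governors depends on the driver):

    #include <stdio.h>

    int main(void)
    {
        /* Each CPU has a policy directory; governors such as "powersave",
         * "ondemand" or "performance" decide how the clock is scaled. */
        const char *path =
            "/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor";

        FILE *f = fopen(path, "w");
        if (!f) {
            perror("fopen (is cpufreq enabled on this kernel?)");
            return 1;
        }
        fputs("powersave\n", f);
        fclose(f);
        return 0;
    }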

Minimum configuration to run embedded Linux on an ARM processor?

I need to produce an embedded ARM design that has requirements to do many of the things that embedded Linux would do. However, the design is cost sensitive and does not need huge amounts of horsepower; it will mostly be talking to serial interfaces. Ideally I would like to use one of the low-end ARMs. What is the lowest configuration of an ARM that you have successfully used embedded Linux on?
Edit:
The application needs a file system on some kind of flash device and the ability to run applications that process the data. Some of the applications might be written by people other than myself. I also need the ability to load new applications, or update old ones, over the serial ports.
When I have looked at other embedded OSes, they seem to be more of a real-time threading solution than something with the ability to run applications. I am open to whatever will get the job done.
I think you need to weigh your cost options here.
ARM + Linux is an option, but you will be paying a very high overhead for such a simple (from your description) set of features. You can't just look at the cost of the ARM chip; you must also consider the external RAM that will very likely be required, as well as enough flash to hold the kernel + apps.
NOTE: you may be able to avoid the external parts with a very minimal kernel and simple apps combined with a microcontroller that has large internal resources.
A second option is a much simpler microcontroller with a lightweight OS. This will cut your hardware costs on the CPU, and you can likely run something like this without external RAM or flash (depending on the application's RAM and program-space requirements).
Third option: I don't actually see anything in your requirements that demands any OS at all. Basic file systems are very simple; for instance, there are even FAT drivers out there for 8-bit PICs. Interfacing to an SD card only requires an SPI port and minimal external circuitry (see the sketch below for how little application-level code that takes).
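For a sense of scale, logging to a FAT-formatted SD card with a small library such as ChaN's FatFs boils down to a handful of calls. A sketch under that assumption (the library's disk I/O layer still has to be wired to your SPI driver, and the call names follow the FatFs API as I recall it, so treat them as an assumption):

    #include <string.h>
    #include "ff.h"   /* FatFs public API */

    /* Append one log record to the SD card; returns 0 on success. */
    int log_sample(const char *line)
    {
        FATFS fs;
        FIL   file;
        UINT  written = 0;

        if (f_mount(&fs, "", 1) != FR_OK)          /* mount the default drive */
            return -1;
        if (f_open(&file, "log.txt",
                   FA_WRITE | FA_OPEN_APPEND) != FR_OK) {
            f_mount(NULL, "", 0);                  /* unmount on failure */
            return -1;
        }
        f_write(&file, line, strlen(line), &written);
        f_close(&file);
        f_mount(NULL, "", 0);                      /* unmount */
        return (written == strlen(line)) ? 0 : -1;
    }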
The application part could be simple or complex. I've built systems around PIC18 microcontrollers that run a web server and allow program updates via a simple upload screen: the device just stores the new program into an EEPROM or flash, reboots into a bootloader and copies the new program into internal program memory. You could likely design a way to do this without the reboot via a cooperative-multitasking type of architecture. Whichever way you go, the programmers writing the apps are going to need knowledge of the architecture and access to the libraries/drivers you write. Your best bet to simplify this is to provide as simple an API as possible and to try to automate the build process for them.
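The bootloader side of that update scheme is conceptually just a copy loop. A very rough sketch (all names, addresses and the word-oriented flash interface are hypothetical placeholders for whatever your part's self-programming routines look like):

    #include <stdint.h>

    /* Hypothetical board-support routines for the external EEPROM and
     * the MCU's self-programming interface. */
    extern uint16_t eeprom_read_word(uint32_t addr);
    extern void     flash_erase_page(uint32_t addr);
    extern void     flash_write_word(uint32_t addr, uint16_t word);
    extern void     jump_to_application(uint32_t addr);

    #define APP_BASE   0x0800u     /* placeholder start of application area */
    #define APP_WORDS  0x3C00u     /* placeholder application size in words */
    #define PAGE_WORDS 32u         /* placeholder flash page size */

    void bootloader_update(void)
    {
        for (uint32_t i = 0; i < APP_WORDS; i++) {
            if (i % PAGE_WORDS == 0)
                flash_erase_page(APP_BASE + i);       /* erase before writing */
            flash_write_word(APP_BASE + i,
                             eeprom_read_word(i));    /* copy staged image */
        }
        jump_to_application(APP_BASE);                /* start the new app */
    }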
The third option will be the "cheapest" in terms of hardware, as there will be very little overhead in the processing of your applications, allowing you to get away with minimal processing power and memory. It will likely require more programming/software architecting on your part, but it won't require nearly the research you would need to get Linux up and running, in addition to learning to write the needed device drivers under the Linux paradigm.
As always, you have to include the software development costs in the build cost of the device. If you plan to build 10,000+ of these, you're likely better off keeping hardware costs down and putting more manpower into designing a software solution that allows that hardware to meet the design goals. If you're building 10 of them, you're better off spending an extra $15-20 on hardware if it can cut down on your software development costs, for example an ARM with an MMU, full Linux kernel support and available device drivers.
I kind of feel that you're selecting the worst of both worlds at the moment: you're paying extra to get a microcontroller you can run Linux on, but by doing so you're also selecting a part that will likely be the most complex to get Linux up and running on, especially having not worked with Linux on embedded platforms before.
I've had success even on ARM7TDMI, so I don't think you're going to have any trouble. If you have a low-requirements system, you could use any kind of lightweight real-time executive and have a lot better experience than you would getting Linux to work.
I've used a TS-7200 for about five years to run a web server and mail server, using Debian GNU/Linux. It is 200 MHz and has 32 MB of RAM, which is quite adequate for these tasks. It has a serial port built in. It's based on an ARM920T.
This would be overkill for your job; I mention it so you have another data point.
For several years I've been using a gumstix to do prototyping and testing and I've had good results with it. I don't know if the processor they are using (Intel PXA255 on my board) is considered low-cost, but the entire Verdex line seems pretty cheap to me for an adaptable device.
uClinux is designed specifically for resource-constrained targets, and perhaps more importantly for targets without an MMU.
However, you need a good reason to use Linux on such a system rather than a small real-time executive. Out-of-the-box networking, readily available drivers and protocol stacks for complex hardware, and support for existing POSIX legacy or open-source code are a few, perhaps. However, if you don't need those, Linux is still large, and you may be squandering resources for no real benefit. In most cases you will still need off-chip SDRAM and flash if you choose Linux of any flavour.
I would not regard serial I/O as 'complex hardware', so unless you are running a complex but standard protocol, your brief description does not appear to warrant the use of Linux, IMO.
My D-Link DIR-320 router runs Linux inside.
And I know some handymen who flash it with Optware and connect a USB hub, HDDs, USB flash drives, and much more.
It's a low-cost, ready-to-use "platform" (if you don't need mass production), but maybe more powerful than you need.
Additionally, it can be configured wirelessly via a web interface, even from your PDA :)

Why do I need to re-compile the VMware kernel modules after a Linux kernel upgrade?

After a Linux kernel upgrade, my VMware Server cannot start until I use vmware-config.pl to do some re-configuration work (including building some kernel modules).
If I update my Windows VMware host with the latest Windows service pack, I usually don't need to do anything to keep VMware running.
Why does VMware work differently between Linux and Windows? Does this re-compilation bring any benefits on the Linux platform over Windows?
Go read The Linux Kernel Driver Interface.
This is being written to try to explain why Linux does not have a binary kernel interface, nor does it have a stable kernel interface. Please realize that this article describes the _in kernel_ interfaces, not the kernel to userspace interfaces. The kernel to userspace interface is the one that application programs use, the syscall interface. That interface is _very_ stable over time, and will not break. I have old programs that were built on a pre 0.9something kernel that still works just fine on the latest 2.6 kernel release. This interface is the one that users and application programmers can count on being stable.
It reflects the view of a large portion of Linux kernel developers:
the freedom to change in-kernel implementation details and APIs at any time allows them to develop much faster and better.
Without the promise of keeping in-kernel interfaces identical from release to release, there is no way for a binary kernel module like VMWare's to work reliably on multiple kernels.
As an example, if some structures change on a new kernel release (for better performance or more features or whatever other reason), a binary VMWare module may cause catastrophic damage using the old structure layout. Compiling the module again from source will capture the new structure layout, and thus stand a better chance of working -- though still not 100%, in case fields have been removed or renamed or given different purposes.
If a function changes its argument list, or is renamed or otherwise made no longer available, not even recompiling from the same source code will work; the module will have to be adapted to the new kernel. This is considered acceptable because everybody (should) have the source and (can find somebody who) is able to modify it to fit. "Push work to the end-nodes" is a common idea in both networking and free software: since the resources at the fringes (the developers outside the Linux kernel) are larger than the limited resources of the backbone (the Linux kernel developers), the trade-off of making the former do more of the work is accepted.
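A tiny illustration of the structure-layout point (the struct is made up; imagine it is shared between the kernel and a binary module built against an older kernel):

    #include <stddef.h>
    #include <stdio.h>

    /* Layout the binary module was compiled against. */
    struct old_task_info {
        int  pid;
        long runtime_ns;
    };

    /* Layout after a hypothetical kernel release inserts a field. */
    struct new_task_info {
        int  pid;
        int  numa_node;    /* new field shifts everything after it */
        long runtime_ns;
    };

    int main(void)
    {
        /* The old module still reads runtime_ns at its old offset, which now
         * lands in the middle of a different field -> silent corruption. */
        printf("runtime_ns offset: old=%zu new=%zu\n",
               offsetof(struct old_task_info, runtime_ns),
               offsetof(struct new_task_info, runtime_ns));
        return 0;
    }

Rebuilding against the new kernel's headers picks up the new offsets, which is essentially what vmware-config.pl does when it recompiles the VMware modules.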
On the other hand, Microsoft has made the decision that they must preserve binary driver compatibility as much as possible -- they have no choice, as they are playing in a proprietary world. In a way, this makes it much easier for outside developers who no longer face a moving target, and for end-users who never have to change anything. On the downside, this forces Microsoft to maintain backwards-compatibility, which is (at best) time-consuming for Microsoft's developers and (at worst) is inefficient, causes bugs, and prevents forward progress.
Linux does not have a stable kernel ABI - things like the internal layout of data structures change from version to version. VMware's modules need to be rebuilt to match the ABI of the new kernel.
On the other hand, Windows has a very stable kernel ABI that does not change from service pack to service pack.
To add to bdonlan's answer, ABI compatibility is a mixed bag. On the one hand, it allows you to distribute binary modules and drivers which will keep working with newer versions of the kernel. On the other hand, it forces kernel programmers to add a lot of glue code to retain backwards compatibility. Because Linux is open source, and because kernel developers debate whether binary modules should even be allowed at all, the ability to distribute them isn't considered that important. On the upside, Linux kernel developers don't have to worry about ABI compatibility when altering data structures to improve the kernel. In the long run, this results in cleaner kernel code.
It's a consequence of Linux and Windows being developed in different cultural environments and expectations: http://www.joelonsoftware.com/articles/Biculturalism.html. In short: Windows is designed to be suitable for users, whereas Linux evolves to be suitable for open source developers.
