Minimum CSR requirements to run Linux - riscv

What are the minimum CSRs required to run Linux on a RISC-V processor?
The privileged ISA spec does not seem to clarify this point.

A minimum requirement for an operating system would be the ability to service traps: external interrupts and software exceptions.
The following two answers may shed some light on this, though neither specifically addresses the requirements for Linux. I don't know Linux's additional requirements, but a clock/timer source and virtual memory would presumably be among them:
RISC-V Interrupt Handling Flow
Risc-V: Minimum CSR requirements for simple RV32I implementation capable of leveraging GCC
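To make the "service exceptions" requirement concrete, here is a hedged, bare-metal sketch (not Linux itself) of a minimal machine-mode trap setup for RV32, touching only the handful of CSRs involved: mtvec, mie, mstatus, and mcause inside the handler. It relies on GCC's RISC-V interrupt attribute, and the handler body is illustrative only:

    #include <stdint.h>

    #define MIE_MTIE    (1u << 7)   /* machine timer interrupt enable    */
    #define MIE_MEIE    (1u << 11)  /* machine external interrupt enable */
    #define MSTATUS_MIE (1u << 3)   /* global machine interrupt enable   */

    void __attribute__((interrupt("machine"), aligned(4)))
    trap_handler(void)
    {
        uint32_t cause;
        __asm__ volatile ("csrr %0, mcause" : "=r"(cause));
        if (cause >> 31) {
            /* asynchronous: timer/software/external interrupt */
        } else {
            /* synchronous exception: ecall, illegal instruction, ... */
        }
        /* the interrupt attribute emits register save/restore and mret */
    }

    void trap_init(void)
    {
        /* mtvec with low bits 0 = direct mode: all traps to one entry */
        __asm__ volatile ("csrw mtvec, %0"   :: "r"((uintptr_t)trap_handler));
        __asm__ volatile ("csrw mie, %0"     :: "r"(MIE_MTIE | MIE_MEIE));
        __asm__ volatile ("csrs mstatus, %0" :: "r"(MSTATUS_MIE));
    }

For Linux proper, the usual expectation is considerably more than this: S-mode with the supervisor CSRs (stvec, sepc, scause, and satp for paging) on top of an SBI firmware layer, though a no-MMU M-mode port exists as well.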

Related

Linux and RTOS using SoC (ARM, Xilinx)

I am facing a design "issue". I have a board with a Xilinx Zynq SoC (dual-core ARM Cortex-A9), and I need to develop a control application with real-time properties (deadlines on response time), an application that does heavy processing (image work, etc.), and some basic communication between the two. Most importantly, I need to be able to control the Linux part: at least suspend or "pause" it somehow, and in the best case shut it down and then run it again. So I was wondering how to combine these.
One option could be RTLinux, which, at least according to the descriptions I found, offers the possibility of running a real-time kernel with the Linux kernel beside it as a thread, but it seems it is now proprietary, owned by Wind River.
Then I stumbled upon MicroBlaze, with which it would be possible to "create" a soft processor in the programmable logic, but I am not sure whether I could run Linux on the ARM and an RTOS there?
There are two things that seem to be known as RTLinux. The one you mention, a Wind River revival of the MERT system, is a product of that company. The other, usually written "RT Linux", is a real-time patch set for the mainline kernel which provides deterministic scheduling and fine-grained kernel preemption.
I think it is the latter one that you want. Ten seconds of googling indicates that there is a kconfig target for this SoC, so all the pieces you need should be there.
Do remember there is more to a real time system than just the ability to be real time; the subsystems also have to be well behaved.
Given your description, you have (at least) the following design options:
Dual-kernel approach: this means patching the Linux kernel with a (quite invasive) patch that runs a tiny real-time kernel alongside the standard kernel. This approach allows reaching good real-time performance (even on the order of microseconds) at the cost of complexity. It was implemented by the RTLinux project (acquired and then discontinued by Wind River), then by RTAI (mostly focusing on x86) and Xenomai.
If you go along this path, you can see if Xenomai supports your specific SoC; then patch, configure and rebuild the kernel; and finally write the real-time code following Xenomai's API.
Improving the responsiveness of the standard Linux kernel: this is what the PREEMPT_RT project aims at. The real-time performance is lower than with the previous approach, but you don't have to write real-time-specific code. With this approach, you can patch and build the kernel, then see if the real-time performance is sufficient for your needs (a minimal user-space sketch follows at the end of this answer).
Synthesizing a MicroBlaze soft core on the FPGA, then running Linux on the ARM cores and the real-time code (either bare-metal or with an RTOS) on the MicroBlaze.
Unfortunately, your specific SoC does not support ARM's virtualization extensions. Otherwise there would be an additional multi-OS option: running Linux on one ARM core and the real-time code (either bare-metal or with an RTOS like ERIKA Enterprise) on the other ARM core, under a hypervisor like Jailhouse or Xen.
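As for the PREEMPT_RT route mentioned above, the user-space side is plain POSIX. A hedged sketch of the standard pattern (lock memory, then request a real-time scheduling class) might look like the following; it runs on a stock kernel too, but only a PREEMPT_RT kernel makes the latencies deterministic, and the priority value is an arbitrary example:

    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        struct sched_param sp;

        /* avoid page faults in the time-critical path */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
            perror("mlockall");

        memset(&sp, 0, sizeof(sp));
        sp.sched_priority = 80;   /* SCHED_FIFO priorities run 1..99 */
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
            perror("sched_setscheduler");

        /* ... time-critical loop would go here ... */
        return 0;
    }

Note that this needs root or CAP_SYS_NICE to succeed.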

How is the TPM related to the CPU?

I'm confusing myself at the moment on how the CPU relates to the TPM.
When I tried learning about Apple's Secure Enclave (its TPM-like component), the video I watched made it seem like the TPM is a separate processing unit connected to the CPU, as in the TPM itself is a microprocessor attached to the main processing unit.
However, when I tried to learn about ARM TrustZone TPM (found in Android based devices), the article I am reading made it seem like the TPM is within the CPU, not separate. The article specifically states "ARM TrustZone Technology is a hardware-based solution embedded in the ARM processor cores that allows the cores to run two execution environments".
I am having a hard time finding the answer online. I just want to understand the data flow so I can better understand mobile based security options for applications.
Think of the TPM as a specification that describes the inputs and outputs necessary for its operation. Theoretically you could implement this specification purely in software and remain compliant with it. You could also implement it as firmware running on another chip. However, the more removed from the host OS and other hardware the implementation is, the more secure it is considered, since that makes it harder to compromise the secrets it holds; so the so-called "discrete implementation" is the preferred one, if it can be afforded.

Will using the Linux Kernel support current programs? [closed]

There are many distributions of Linux. All of them have one thing in common, however: the kernel. And Linux programs run across all of them. If I make a minimalistic distribution from the kernel, will current programs made for Linux run? What defines the differences between distributions? I am a beginner at this stuff, so don't be harsh if it is a stupid question. Thank you.
Yes, with caveats.
You need to make sure you have full C support, and by that I mean something like glibc installed or installable, or you cannot build programs for your minimal install. If you can install and compile C programs on Linux, then you can in effect build practically everything else from scratch.
If you want to be able to download binaries and run them, that is different: the binaries will likely require shared libraries that they would have had on the systems they were built for. Unless you have those libraries, you cannot run the existing binaries you find online.
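A tiny illustration of that coupling between a binary and its C library; gnu_get_libc_version() is a glibc-specific extension, so on a distribution using a different libc (musl, uClibc) this won't even link, which is exactly the portability point:

    #include <stdio.h>
    #include <gnu/libc-version.h>   /* glibc-only header */

    int main(void)
    {
        /* prints the runtime C library version the binary is bound to */
        printf("running against glibc %s\n", gnu_get_libc_version());
        return 0;
    }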
What defines the differences of distributions?
There are a lot of defining factors in each distribution. If we disregard things like...
Licensing, e.g. Red Hat vs Debian
Stance on things like GPL/BSD/non-free software
Release schedules, e.g. Debian vs Ubuntu
Target audience, e.g. Ubuntu vs Debian
I think the biggest defining factor is package management, i.e. yum/rpm vs apt/dpkg, and how the base configuration is managed on the machine. This is certainly the thing I seem to use the most and miss the most when I change distributions. The kernel itself is very rarely on my mind, which in large part is a measure of its success.
Most people start with something like ISOLINUX and get a bootable CD, but even then you normally choose a base distribution. If you want to create a base distribution, that's a ton of work. Have a look at this great infographic of the Linux family tree:
https://en.wikipedia.org/wiki/List_of_Linux_distributions#/media/File:Linux_Distribution_Timeline.svg
If you look at Debian/Ubuntu, the amount of infrastructure these distributions have set up is quite staggering. They have millions, perhaps even billions, of lines of code in them, all designed to run on their supported versions. You might be able to take a binary from one of them and run it on Red Hat, but it's likely to fail unless the planets are in alignment. Some people think this is actually a bad thing:
https://en.wikipedia.org/wiki/Ingo_Moln%C3%A1r#Quotes
The basic failure of the free Linux desktop is that it's, perversely, not free enough...
Desktop Linux distributions are trying to "own" 20 thousand application packages consisting of over a billion lines of code and have created parallel, mostly closed ecosystems around them... The Linux package management method system works reasonably well in the enterprise (which is a hierarchical, centrally planned organization in most cases), but desktop Linux on the other hand stopped scaling 10 years ago, at the 1000 packages limit...
If I make a minimalistic distribution from the kernel, will current programs made for Linux run?
Very few programs actually use the kernel directly. They also need a libc, which is responsible for implementing most of the C routines used either by the programs themselves or by the VMs running their code.
It is possible to statically link libc into a program, but this both bloats the program and makes it impossible to fix security issues in the linked libraries without rebuilding the whole program.
Well, certain programs demand a specific version of the kernel. Usually these programs act as "drivers" for the rest of the system (e.g. the Nvidia proprietary drivers: some parts run in kernel space while others run in user space, but the latter require that very specific kernel module and thus that very specific kernel build).
A less strict case is when a program demands a specific capability from the kernel. For example, almost all modern Linux virtualization systems rely on the cgroups feature, so to use them you need a reasonably fresh kernel.
Nevertheless, much of the kernel API is stable, so you can rely on it. But usually programs don't call kernel routines directly: the typical way to use a kernel function is to call the corresponding library routine, which wraps the kernel API. The main, most basic library of that kind is libc.
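A hedged sketch of that layering: the same write reaches the kernel whether it goes through the usual libc wrapper or the raw syscall() interface (both calls below are standard glibc/POSIX):

    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void)
    {
        /* libc wrapper: what programs normally call */
        write(STDOUT_FILENO, "via libc\n", 9);

        /* raw system call: what the wrapper performs underneath */
        syscall(SYS_write, STDOUT_FILENO, "via syscall\n", 12);
        return 0;
    }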
Technically, programs compiled against one version of libc (or of other shared libraries) can often run with slightly different versions of those libraries. For example, a lot of people use the Skype build compiled for SuSE on completely different Linux distributions. Skype is a pretty complex application with a lot of libraries linked in, yet it works without any significant problems, and the same goes for many other proprietary programs that were never compiled for a given distribution or even a given installation. But sometimes things just break :) Such binary incompatibilities are quite rare, but they do happen from time to time.

Minimum configuration to run embedded Linux on an ARM processor?

I need to produce an embedded ARM design that must do many of the things embedded Linux would do. However, the design is cost-sensitive and does not need huge amounts of horsepower; mostly it will be talking to serial interfaces. Ideally I would like to use one of the low-end ARMs. What is the lowest configuration of an ARM on which you have successfully used embedded Linux?
Edit:
The application needs a file system on some kind of flash device and the ability to run applications for processing the data. Some of the applications might be written by people other than myself. I also need the ability to load new applications, or update old ones, over the serial ports.
When I have looked at other embedded OSes, they seem to be more of a real-time threading solution than something with the ability to run applications. I am open to whatever will get the job done.
I think you need to weigh your cost options here.
ARM + Linux is an option, but you will be paying a very high operating overhead for such a simple (judging from your description) set of features. You can't just look at the cost of the ARM chip: you must also consider the external RAM that will very likely be required, as well as enough flash to hold the kernel plus the apps.
NOTE: you may be able to avoid the external parts with a very minimal kernel and simple apps, combined with a microcontroller that has large internal resources.
A second option is a much simpler microcontroller with a lightweight OS. This will cut your hardware costs on the CPU, and you can likely run something like this without external RAM or flash (depending on the application's RAM and program-space requirements).
Third option: I don't actually see anything in your requirements that demands any OS at all. Basic file systems are very simple; for instance, there are even FAT drivers out there for 8-bit PICs, and interfacing to an SD card only requires an SPI port and minimal external circuitry.
The application bit could be simple or complex. I've built systems around PIC18 microcontrollers that run a web server and allow program updates via a simple upload screen: the device stores the new program into an EEPROM or flash, reboots into a bootloader, and copies the new program into internal program memory. You could likely design a way to do this without the reboot via a cooperative-multitasking type of architecture (sketched just below). Whichever way you go, the programmers writing the apps will need knowledge of the architecture and access to the libraries/drivers you write. Your best bet to simplify this is to provide as simple an API as possible and to try to automate the build process for them.
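For what it's worth, the cooperative-multitasking idea can be as small as a fixed table of task functions polled in a loop; a hedged sketch, where the task names are hypothetical placeholders:

    #include <stdint.h>

    typedef void (*task_fn)(void);

    static void poll_serial(void)  { /* read bytes, feed command parser */ }
    static void service_fs(void)   { /* advance FAT/SD state machine    */ }
    static void run_user_app(void) { /* one slice of the loaded app     */ }

    static const task_fn tasks[] = { poll_serial, service_fs, run_user_app };

    int main(void)
    {
        for (;;) {
            /* round-robin, no preemption: the contract is that every
             * task returns quickly */
            for (uint32_t i = 0; i < sizeof(tasks) / sizeof(tasks[0]); i++)
                tasks[i]();
        }
    }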
The third option will be the "cheapest" in terms of hardware, as there will be very little overhead in the processing of your applications, allowing you to get away with minimal processing power and memory. It will likely require more programming and software architecting on your part, but it won't require nearly the research needed to get Linux up and running, nor learning to write the needed device drivers under a Linux paradigm.
As always, you have to include the software development costs in the build cost of the device. If you plan to build 10,000+ of these, you are likely better off keeping hardware costs down and putting more manpower into designing a software solution that allows that hardware to meet the design goals. If you are building 10 of them, you are better off spending an extra $15-20 on hardware if it can cut down your software development costs, for example on an ARM with an MMU, full Linux kernel support, and available device drivers.
I kind of feel that you are selecting the worst of both worlds at the moment: you are paying extra for a microcontroller you can run Linux on, but by doing so you are also selecting a part that will likely be the most complex to get Linux up and running on, especially having not worked with Linux on embedded platforms before.
I've had success even on an ARM7TDMI, so I don't think you're going to have any trouble. If you have a low-requirements system, you could use any kind of lightweight real-time executive and have a much better experience than you would getting Linux to work.
I've used a TS-7200 for about five years to run a web server and mail server, using Debian GNU/Linux. It is 200 MHz and has 32 MB of RAM, which is quite adequate for these tasks. It has a serial port built in. It's based on an ARM920T.
This would be overkill for your job; I mention it so you have another data point.
For several years I've been using a gumstix to do prototyping and testing and I've had good results with it. I don't know if the processor they are using (Intel PXA255 on my board) is considered low-cost, but the entire Verdex line seems pretty cheap to me for an adaptable device.
uClinux is designed specifically for resource-constrained targets and, perhaps more importantly, for targets without an MMU.
However, you need a good reason to use Linux on such a system rather than a small real-time executive. Out-of-the-box networking, readily available drivers and protocol stacks for complex hardware, and support for existing POSIX legacy or open-source code are a few, perhaps. However, if you don't need those, Linux is still large, and you may be squandering resources for no real benefit. In most cases you will still need off-chip SDRAM and flash if you choose Linux of any flavour.
I would not regard serial I/O as 'complex hardware', so unless you are running a complex but standard protocol, your brief description does not appear to warrant the use of Linux, IMO.
My D-Link DIR-320 router runs Linux inside.
And I know some handymen who flash it with Optware and connect a USB hub, HDDs, USB flash drives, and much more.
It's a low-cost, ready-to-use "platform" (if you don't need mass production), though maybe more powerful than you need.
Additionally, it can be configured wirelessly via its web interface, even from your PDA :)

Learning kernel hacking and embedded development at home? [closed]

I was always attracted to the world of kernel hacking and embedded systems.
Has anyone got good tutorials (+easily available hardware) on starting to mess with such stuff?
Something like kits for writing drivers etc, which come with good documentation and are affordable?
Thanks!
If you are completely new to kernel development, I would suggest not starting with hardware development but with some "software-only" kernel modules instead: a proc file or sysfs entry or, for more complex examples, filesystem or network development. Develop on a UML/VMware/VirtualBox/... machine so crashing it won't hurt so much :) For embedded development you could go for a small ARM development kit or a small Via C3/C4 machine, or any old PC which you can burn with your homebrew USB/PCI/whatever device.
A good place to start is probably Kernelnewbies.org, which has lots of links and useful information for kernel developers, and also features a list of easy-to-implement tasks for beginners to tackle.
Some books to read:
Understanding the Linux Kernel - a very good reference detailing the design of the kernel subsystems
Linux Device Drivers - is written more like a tutorial with a lot of example code, focusing on getting you going and explaining key aspects of the linux kernel. It introduces the build process and the basics of kernel modules.
Linux Kernel Module Programming Guide - Some more introductory material
As suggested earlier, looking at the Linux code is always a good idea, especially as Linux kernel APIs tend to change quite often... LXR helps a lot here with a very nice browsing interface - lxr.linux.no
To understand the Kernel Build process, this link might be helpful:
Linux Kernel Makefiles (kbuild)
Last but not least, browse the Documentation directory of the Kernel Source distribution!
Here are some interesting exercises insolently stolen from a kernel development class:
Write a kernel module which creates the file /proc/jiffies, reporting the current time in jiffies on every read access (a minimal sketch of this one follows the list).
Write a kernel module providing the proc file /proc/sleep. When an application writes a number of seconds as ASCII text into this file ("echo 3 > /proc/sleep"), it should block for the specified number of seconds. Write accesses should have no side effect on the contents of the file, i.e., on read accesses the file should appear to be empty (see LDD3, ch. 6/7).
Write a proc file where you can store some text temporarily (using echo "blah" > /proc/pipe) and get it out again (cat /proc/pipe), clearing the file. Watch out for synchronisation issues.
Modify the pipe example module to register as a character device /dev/pipe, add dynamic memory allocation for write requests.
Write a really simple file system.
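For the first exercise, here is a hedged sketch of a /proc/jiffies module. It assumes a kernel of 5.6 or newer (which switched proc entries from file_operations to struct proc_ops) and is an untested illustration, not class-provided code:

    #include <linux/module.h>
    #include <linux/proc_fs.h>
    #include <linux/seq_file.h>
    #include <linux/jiffies.h>
    #include <linux/errno.h>

    static int jiffies_show(struct seq_file *m, void *v)
    {
        seq_printf(m, "%lu\n", jiffies);   /* current tick counter */
        return 0;
    }

    static int jiffies_open(struct inode *inode, struct file *file)
    {
        return single_open(file, jiffies_show, NULL);
    }

    static const struct proc_ops jiffies_proc_ops = {
        .proc_open    = jiffies_open,
        .proc_read    = seq_read,
        .proc_lseek   = seq_lseek,
        .proc_release = single_release,
    };

    static int __init jiffies_init(void)
    {
        /* world-readable /proc/jiffies */
        return proc_create("jiffies", 0444, NULL, &jiffies_proc_ops)
               ? 0 : -ENOMEM;
    }

    static void __exit jiffies_exit(void)
    {
        remove_proc_entry("jiffies", NULL);
    }

    module_init(jiffies_init);
    module_exit(jiffies_exit);
    MODULE_LICENSE("GPL");

It builds with the usual one-line kbuild makefile (obj-m += jiffies_proc.o, invoked via make -C /lib/modules/$(uname -r)/build M=$PWD modules), as described in the kbuild document linked above.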
An absolute must is this book by Rubini (available both as a hardcopy and as a free soft copy).
He gives implementations of several dummy drivers that don't require any hardware other than your PC, so for getting started in kernel development it's the easiest way to go.
As for doing embedded work, I would recommend purchasing one of the numerous SBCs (single-board computers) out there. A number of these are based on x86 processors, usually with PC/104 interfaces (electrically, PC/104 is identical to the ISA bus standard, but it is based on stackable connectors rather than edge connectors, which makes it very easy to interface custom hardware to).
They usually have VGA connectors that make debugging easier.
For embedded Linux hacking, the simple Linksys WRT54G router that you can buy everywhere is a development platform in its own right (http://en.wikipedia.org/wiki/Linksys_WRT54G_series):
The WRT54G is notable for being the first consumer-level network device that had its firmware source code released to satisfy the obligations of the GNU GPL. This allows programmers to modify the firmware to change or add functionality to the device. Several third-party firmware projects provide the public with enhanced firmware for the WRT54G.
I've tried installing OpenWrt and DD-WRT firmware on it. You can check those out as a starting point for hacking on a low-cost platform.
For starters, the best way is to read a lot of code. Since Linux is Open Source, you'll find dozens of drivers. Find one that works in some ways like what you want to write. You'll find some decent and relatively easy-to-understand code (the loopback device, ROM fs, etc.)
You can also use lxr.linux.no, the Linux code cross-referenced. If you have to find out how something works and need to look into the code, this is a good and easy way to do it.
There's also an O'Reilly book (Understanding the Linux Kernel; the 3rd edition covers the 2.6 kernels), or, if you want something free, you can use the Advanced Linux Programming book (http://www.advancedlinuxprogramming.com/). There is also a lot of specific documentation about file systems, networking, etc.
Some things to be prepared for:
you'll be cross-compiling. The embedded device will use a MIPS, PowerPC, or ARM CPU but won't have enough CPU power, memory, or storage to compile its own kernel in a reasonable amount of time.
An embedded system often uses a serial port as the console, and to lower cost there is usually no connector soldered onto production boards. Debugging kernel panics is very difficult: unless you can solder on a serial port connector, you won't have much information about what went wrong.
The Linksys NSLU2 is a low-cost way to get a real embedded system to work with, and has a USB port to add peripherals. Any of a number of wireless access points can also be used, see the OpenWrt compatibility page. Be aware that current models of the Linksys WRT54G you'll find in stores can no longer be used with Linux: they have less RAM and Flash in order to reduce the cost. Cisco/Linksys now uses vxWorks on the WRT54G, with a smaller memory footprint.
If you really want to get into it, evaluation kits for embedded CPUs start at a couple hundred US dollars. I'd recommend not spending money on these unless you need it professionally for a job or consulting contract.
I am a complete beginner in kernel hacking :) I decided to buy two books, "Linux Program Development: a guide with exercises" and "Writing Linux Device Drivers: a guide with exercises". They are very clearly written and provide a good base for further learning.
