Some PCIe devices have a CPU on board, for example a DPU.
I want to use QEMU to emulate such a device.
Can QEMU support this requirement?
QEMU's emulation framework doesn't support devices that have fully programmable CPUs able to execute arbitrary guest-provided code in the same way as the main emulated system CPUs. (The main blocker is that all the CPUs in the system have to be the same architecture, e.g. all x86 or all Arm.)
For devices that have a CPU on them as part of their implementation but where that CPU is generally running fixed firmware that exposes a more limited interface to guest code, a QEMU device model can provide direct emulation of that limited interface, which is typically more efficient anyway.
In theory you could write a device that did a purely interpreted emulation of an onboard CPU, using QEMU facilities like timers and bottom-half callbacks to interpret a small chunk of instructions every so often. I don't know of any examples of anybody having written a device like that, though. It would be quite a lot of work and the speed of the resulting emulation would not be very fast.
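To give a feel for what that would look like, here is a minimal sketch of the shape such a device could take inside a QEMU device model (it only builds in the QEMU source tree). A QEMU_CLOCK_VIRTUAL timer periodically interprets a small batch of instructions for the onboard CPU; OnboardCpuState, onboard_cpu_step() and the timing constants are made-up names for illustration, not an existing QEMU device.

    /* Sketch only: periodic interpretation of a hypothetical onboard CPU
     * from inside a QEMU device model, driven by a virtual-clock timer. */
    #include "qemu/osdep.h"
    #include "qemu/timer.h"

    typedef struct OnboardCpuState {
        QEMUTimer *step_timer;
        uint32_t pc;            /* program counter of the emulated onboard CPU */
        uint8_t  *firmware;     /* its firmware image, loaded elsewhere */
    } OnboardCpuState;

    #define STEP_INTERVAL_NS  (1 * 1000 * 1000)   /* run a batch every 1 ms */
    #define INSNS_PER_BATCH   256

    static void onboard_cpu_step(OnboardCpuState *s)
    {
        /* Decode and execute one instruction at s->pc ... (interpreter body
         * omitted; this is the "quite a lot of work" part). */
        s->pc += 4;
    }

    static void onboard_cpu_tick(void *opaque)
    {
        OnboardCpuState *s = opaque;

        for (int i = 0; i < INSNS_PER_BATCH; i++) {
            onboard_cpu_step(s);
        }
        /* Re-arm the timer for the next batch of instructions. */
        timer_mod(s->step_timer,
                  qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + STEP_INTERVAL_NS);
    }

    static void onboard_cpu_start(OnboardCpuState *s)
    {
        /* Typically called from the device's realize method. */
        s->step_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, onboard_cpu_tick, s);
        timer_mod(s->step_timer,
                  qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + STEP_INTERVAL_NS);
    }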
Alternatively, this can be done by running two instances of QEMU, one built for the host system's architecture and the other for the secondary architecture, and connecting them through some interface.
This has been done by Xilinx by starting two separate QEMU processes with some inter-process communication between them, and by Neuroblade by building QEMU for the nios2 architecture as a shared library and then loading it from a QEMU process that models the host architecture (in which case the interface can simply be modelled by direct function calls).
Related:
How can I use QEMU to simulate mixed platforms?
https://lists.gnu.org/archive/html/qemu-devel/2021-12/msg01969.html
I am developing a program for a microcontroller using FreeRTOS. My microcontroller has a CAN driver and uses hardware interrupts. There is an interrupt fired when the CAN driver has finished transmitting a CAN frame.
For simplicity I am developing and testing some parts on Linux (Ubuntu 20). I am using socketCAN on Linux, with a virtual CAN interface.
Is it possible to mimic the hardware interrupts on Linux?
I was thinking of using POSIX signals; what do you think?
Thanks
I found the solution.
The solution is to run a task in parallel and have it call the function that the interrupt vector would normally point to (see the sketch below).
While finding the solution I noticed that using POSIX signals between FreeRTOS tasks of the POSIX_GCC port may lead to problems with Linux system calls.
I also found that FreeRTOS tasks monopolize CPU time if they are used alongside classical pthreads.
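As an illustration of that approach, here is a minimal sketch for the FreeRTOS POSIX/Linux (POSIX_GCC) port: a dedicated task blocks on a socketCAN read() and then calls the handler that the interrupt vector would point to on the real hardware. The interface name vcan0 and the handler name CAN_RxCompleteISR() are assumptions for illustration; a transmit-complete notification can be emulated the same way (for example by enabling reception of the socket's own transmitted frames).

    /* Sketch: emulate a CAN "interrupt" on the FreeRTOS POSIX port by blocking
     * on a socketCAN socket in a dedicated task and calling the ISR handler.
     * CAN_RxCompleteISR() and "vcan0" are hypothetical names. */
    #include <string.h>
    #include <unistd.h>
    #include <net/if.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <linux/can.h>
    #include <linux/can/raw.h>

    #include "FreeRTOS.h"
    #include "task.h"

    /* On the real target this would be the function in the vector table. */
    extern void CAN_RxCompleteISR(const struct can_frame *frame);

    static void prvCanIrqTask(void *pvParameters)
    {
        (void)pvParameters;

        /* Open a raw CAN socket bound to the virtual interface vcan0. */
        int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
        struct ifreq ifr;
        strcpy(ifr.ifr_name, "vcan0");
        ioctl(s, SIOCGIFINDEX, &ifr);

        struct sockaddr_can addr = { 0 };
        addr.can_family = AF_CAN;
        addr.can_ifindex = ifr.ifr_ifindex;
        bind(s, (struct sockaddr *)&addr, sizeof(addr));

        for (;;) {
            struct can_frame frame;
            /* read() blocks until a frame arrives, playing the role of the
             * hardware interrupt line; then the "ISR" is called directly. */
            if (read(s, &frame, sizeof(frame)) == sizeof(frame)) {
                CAN_RxCompleteISR(&frame);
            }
        }
    }

    void vStartCanIrqEmulation(void)
    {
        xTaskCreate(prvCanIrqTask, "can_irq", configMINIMAL_STACK_SIZE * 4,
                    NULL, tskIDLE_PRIORITY + 2, NULL);
    }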
I find the boundary between QEMU and KVM very blurred. Some people say a virtual machine is a QEMU process, while others say it is a KVM process. Which is it exactly?
And what roles do QEMU and KVM play in virtual machine I/O? For example, when a VM does PIO/MMIO, is it QEMU or KVM that traps it and turns it into a hardware operation? Or are both responsible?
KVM: the code in the Linux kernel which provides a friendly interface to userspace for using the CPU virtualization. This includes functions that userspace can call for "create CPU", "run CPU", etc. For a full virtual machine, you need to have some userspace code which can use this. This is usually either QEMU, or the simpler "kvmtool"; some large cloud providers have their own custom userspace code instead.
QEMU: emulates a virtual piece of hardware, including disks, memory, CPUs, serial port, graphics, and other devices. Also provides mechanisms (a UI, and some programmable interfaces) for doing things like starting, stopping, and migrating. QEMU supports several different 'accelerator' modes for how it handles the CPU emulation:
TCG: pure emulation -- works anywhere, but very slow
KVM: uses the Linux kernel's KVM APIs to allow running guest code using host CPU hardware virtualization support
hax: similar to KVM, but using the Intel HAXM code, which will work on Windows hosts
From an implementation point of view the boundary between KVM and QEMU is very clear -- KVM is a part of the host kernel, whereas QEMU is a separate userspace program. For a user, you typically don't have to care where the boundary is, because that's an implementation detail.
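To make the split concrete, here is a minimal sketch (not taken from QEMU or kvmtool) of the ioctl sequence a userspace VMM performs against the KVM interface for "create CPU". Only the fd plumbing is shown; guest memory and register setup are omitted, so KVM_RUN is not issued yet.

    /* Sketch of the KVM userspace API that a VMM (QEMU, kvmtool, ...) uses. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/kvm.h>

    int main(void)
    {
        int kvm = open("/dev/kvm", O_RDWR);            /* talk to the KVM module */
        if (kvm < 0) { perror("/dev/kvm"); return 1; }

        printf("KVM API version: %d\n", ioctl(kvm, KVM_GET_API_VERSION, 0));

        int vm = ioctl(kvm, KVM_CREATE_VM, 0);         /* one fd per virtual machine */
        int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);      /* "create CPU" */

        /* Each vCPU has a shared kvm_run area describing why KVM_RUN
         * ("run CPU") returned to userspace, e.g. KVM_EXIT_MMIO or KVM_EXIT_IO. */
        int run_size = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0);
        struct kvm_run *run = mmap(NULL, run_size, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, vcpu, 0);
        printf("kvm_run mapped at %p (%d bytes)\n", (void *)run, run_size);
        return 0;
    }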
To answer your question about what happens for MMIO:
the guest makes an MMIO access
this is trapped to the host kernel by the hardware
the host kernel (KVM) might then emulate this MMIO access itself, because a few devices are implemented in the kernel for performance reasons. This usually applies only to the interrupt controller and perhaps the iommu.
otherwise, KVM reports the MMIO access back to userspace (ie to QEMU, kvmtool, etc)
userspace then can handle the access, using its device emulation code
userspace then returns the result (eg the data to return for a read) to the kernel
the kernel updates the vcpu's register state as required to complete emulation of the instruction
the kernel then resumes execution of the VM at the following instruction
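The userspace side of that MMIO round trip can be sketched as a dispatch loop around KVM_RUN. The handle_mmio_read()/handle_mmio_write() helpers below are hypothetical stand-ins for the device emulation code, and the vcpu fd and kvm_run mapping are assumed to have been set up as in the previous sketch (plus guest memory and registers).

    /* Sketch of the userspace half of MMIO handling: run the vCPU, and when
     * the kernel hands back a KVM_EXIT_MMIO, feed it to (hypothetical) device
     * emulation code and resume. */
    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Hypothetical device-emulation hooks, not real QEMU/kvmtool functions. */
    uint64_t handle_mmio_read(uint64_t addr, unsigned len);
    void     handle_mmio_write(uint64_t addr, const void *data, unsigned len);

    void run_vcpu(int vcpu, struct kvm_run *run)
    {
        for (;;) {
            ioctl(vcpu, KVM_RUN, 0);          /* guest runs until it traps */

            switch (run->exit_reason) {
            case KVM_EXIT_MMIO:
                if (run->mmio.is_write) {
                    /* Guest stored to an emulated device register. */
                    handle_mmio_write(run->mmio.phys_addr, run->mmio.data,
                                      run->mmio.len);
                } else {
                    /* Guest loaded from an emulated device register: put the
                     * result in run->mmio.data; on the next KVM_RUN the kernel
                     * completes the instruction and resumes the guest. */
                    uint64_t val = handle_mmio_read(run->mmio.phys_addr,
                                                    run->mmio.len);
                    memcpy(run->mmio.data, &val, run->mmio.len);
                }
                break;
            case KVM_EXIT_HLT:
                return;                       /* guest executed HLT */
            default:
                return;                       /* unhandled exit, bail out */
            }
        }
    }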
I was checking the project Embedded ECG data acquisition system from Instructables, and there is a TODO mentioned:
Combining the OS and bare-bone firmware
UNDER CONSTRUCTION
** Since the bootloader only loads one firmware to the Core,
I need to modify the ELF file, to have Linux and bare-bone Core at the same time **
It seems to me like an interesting approach to having a full-featured Linux and a critical real-time OS on one board (for example a Raspberry Pi). Is it really possible? I have heard that Linux can be set up not to use some cores. But I suppose that Linux uses virtual memory while bare-bone firmware usually does not. Can memory be shared between these OSs? What about interrupts? Can these two OSs handle interrupts separately? Can the bootloader load these two systems onto both cores at once? I can imagine that one thread in the bootloader will jump to the address of the bare-bone OS. Is that a correct approach?
Yes, it is possible, even if the full setup is not straightforward.
A couple of examples:
Xilinx released a white paper explaining how to run Linux + FreeRTOS on a dual-core Zynq ARM
Evidence explained how to run Linux + Erika Enterprise RTOS on a dual-core Freescale imx6 ARM
Those examples are based on system partitioning by hard-coding the assignment of the different cores to different OSs.
If your system is capable of hardware-assisted virtualization, you can use a hypervisor for making (and enforcing) such a partitioning. You can for example use Siemens' Jailhouse, KVM or Xen.
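On the memory-sharing part of the question: in these hard-partitioned (AMP) setups, communication typically goes through a physical RAM region that Linux is told to stay out of (via the mem= kernel parameter or a reserved-memory device tree node) and whose address the bare-metal firmware knows. As a hedged sketch, with a made-up address and mailbox layout, the Linux side can reach such a region through /dev/mem:

    /* Sketch: Linux-side access to a physical RAM region shared with
     * bare-metal firmware running on another core.  SHARED_PHYS/SHARED_SIZE
     * are hypothetical; the region must genuinely be reserved away from
     * Linux and agreed upon with the firmware. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define SHARED_PHYS  0x20000000UL   /* hypothetical reserved region */
    #define SHARED_SIZE  0x1000UL

    int main(void)
    {
        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        if (fd < 0) { perror("/dev/mem"); return 1; }

        volatile uint32_t *shm = mmap(NULL, SHARED_SIZE, PROT_READ | PROT_WRITE,
                                      MAP_SHARED, fd, SHARED_PHYS);
        if (shm == MAP_FAILED) { perror("mmap"); return 1; }

        /* Simple mailbox convention (made up): word 0 is a "doorbell" the
         * bare-metal side polls, word 1 carries a payload. */
        shm[1] = 0x1234;
        shm[0] = 1;

        printf("posted message to bare-metal core\n");
        munmap((void *)shm, SHARED_SIZE);
        close(fd);
        return 0;
    }

In practice the remoteproc/RPMsg framework in the mainline kernel provides a more structured version of this kind of shared-memory messaging.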
Kind of. This is what people already do to some extent with the network stack / drivers. For example, the IsoStack idea works in a similar way. There's a project which actually implements this on Linux by dedicating cores to network cards, but my google-fu is failing me.
Normally MMU-less systems don't have an MPU (memory protection unit) either, and there is also no distinction between user and kernel modes. In such a case, assuming we have an MMU-less system with some piece of hardware mapped into the CPU address space, does it really make sense to have device drivers in the kernel, if all the hardware resources can be accessed from userspace?
Does kernel code have more control over memory than user code?
Yes, on MMU-less platforms that run uClinux it makes sense to do everything as if you had a normal embedded Linux environment. It is a cleaner design to have user applications and services go through their normal interfaces (syscalls, etc.) and have the OS route those requests through to device drivers, file systems, the network stack, and so on.
Although the kernel does not have more control over the hardware in these circumstances, the actual hardware should only be touched by system software running in the kernel. Not limiting access to the hardware would make debugging things like system resets and memory corruption virtually impossible. This practice also makes your design more portable.
Exceptions may be for user mode debugging binaries that are only used in-house for platform bring-up and diagnostics.
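To illustrate the point, here is a minimal sketch of the "clean" path: even on a no-MMU part where userspace could poke the GPIO controller registers directly, an LED is driven through the kernel's LED class driver instead. The LED name "status" is hypothetical and board-specific.

    /* Sketch: drive an LED through the kernel's LED class interface rather
     * than touching the GPIO/PWM registers from userspace. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/sys/class/leds/status/brightness", O_WRONLY);
        if (fd < 0) { perror("open"); return 1; }

        /* The LED class driver, not this program, knows which hardware
         * register to touch and how to serialize access with other users. */
        write(fd, "1", 1);
        close(fd);
        return 0;
    }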
I have experience writing a C program and burning the program into a chip using an IDE provided by the chip manufacturer.
I also heard that there is a concept called SoC, which means an operating system, like Linux, is running on a chip. In this case, I can run my program on the chip just like on a Linux PC.
I don't really know the differences between these two kinds of chips. Are they the same? Can I install Linux on every chip?
And I have to use a chip called Renesas V850 in my work. Which kind of chip is this V850?
SoC is just a marketing term for 'more than a processor on a chip'. It doesn't imply Linux or any operating system.
Years ago, each part of a system was on its own chip: processor, serial port, memory, ADC, DAC, etc. You had a PCB and a schematic that tied them all together.
Over time, more and more got integrated into the processor, particularly for application-specific processors and microcontrollers. Today, pretty much only big-iron processors like Intel's and AMD's flagship parts are stand-alone, and even then there are some x86 chips produced that are 'SoCs' (like the AMD Geode line, if that's still around). Everything else has USB ports, serial ports, ADCs, DACs, even wireless radios integrated into the same die.
As for 'what is a Renesas V850?': you'd do better to google that and read the product documentation. It isn't an ARM or MIPS core, and it doesn't appear to support the mainline Linux kernel, only μClinux.
The Renesas V850 Wikipedia page states that Linux kernel support for the V850 has been absent since version 2.6.27 (released in 2008).
Typically, you need to know what group your chip belongs to and read more about it on the Renesas website. They provide all the documentation you may need. There is also a section with application notes and sample code that may also help.