MMU-less systems normally lack an MPU (memory protection unit) as well, and there is no distinction between user and kernel modes. In such a case, assuming we have an MMU-less system with some piece of hardware mapped into the CPU address space, does it really make sense to have device drivers in the kernel if all the hardware resources can be accessed from userspace?
Does kernel code have more control over memory than user code?
Yes, on platforms without MMUs that host uClinux it makes sense to do everything as if you had a normal embedded Linux environment. It is a cleaner design to have user applications and services go through their normal interfaces (syscalls, etc.) and have the OS route those requests through to device drivers, file systems, the network stack, etc.
Although the kernel does not have more control over the hardware in these circumstances, the actual hardware should still only be touched by system software running in the kernel. Not limiting access to the hardware would make debugging things like system resets and memory corruption virtually impossible. This practice also makes your design more portable.
Exceptions might be user-mode debugging binaries that are only used in-house for platform bring-up and diagnostics.
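To make this concrete, here is a minimal sketch of why the boundary is purely a convention on such hardware; the UART address and register layout below are invented for illustration:

    #include <stdint.h>

    /* Hypothetical UART mapped into the CPU address space; the address
     * and register layout are made up for this example. */
    #define UART_BASE  0x40001000u
    #define UART_TX    (*(volatile uint32_t *)(UART_BASE + 0x0))
    #define UART_STAT  (*(volatile uint32_t *)(UART_BASE + 0x4))
    #define TX_READY   (1u << 0)

    /* On an MMU-less system nothing stops application code from doing
     * this directly -- which is precisely the problem: */
    static void putc_raw(char c)
    {
        while (!(UART_STAT & TX_READY))
            ;                        /* busy-wait on the status register */
        UART_TX = (uint32_t)c;
    }

Under uClinux the same character would instead go through write() on a tty device, keeping the driver as the single owner of the busy/ready protocol so concurrent writers can't interleave mid-transfer.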
Related
Now, some PCIe devices have a CPU on board, e.g. a DPU.
I want to use QEMU to emulate such a device.
Can QEMU support this requirement?
QEMU's emulation framework doesn't support having devices which have fully programmable CPUs which can execute arbitrary guest-provided code in the same way as the main system emulated CPUs. (The main blocker is that all the CPUs in the system have to be the same architecture, e.g. all x86 or all Arm.)
For devices that have a CPU on them as part of their implementation but where that CPU is generally running fixed firmware that exposes a more limited interface to guest code, a QEMU device model can provide direct emulation of that limited interface, which is typically more efficient anyway.
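As a rough sketch of what emulating such a limited interface looks like, here is a skeletal QEMU MMIO device model (the device name and register layout are invented; the TypeInfo/type_init registration boilerplate and reset handling are omitted):

    #include "qemu/osdep.h"
    #include "hw/sysbus.h"

    #define TYPE_MYDEV "mydev"            /* invented device name */
    OBJECT_DECLARE_SIMPLE_TYPE(MyDevState, MYDEV)

    struct MyDevState {
        SysBusDevice parent_obj;
        MemoryRegion iomem;
        uint32_t cmd;                     /* last command written by the guest */
    };

    /* Guest reads/writes of the device's registers land here; the
     * "onboard CPU running fixed firmware" is modelled as the behaviour
     * of these handlers rather than as a second emulated CPU. */
    static uint64_t mydev_read(void *opaque, hwaddr addr, unsigned size)
    {
        MyDevState *s = opaque;
        return addr == 0 ? s->cmd : 0;    /* trivial register file */
    }

    static void mydev_write(void *opaque, hwaddr addr, uint64_t val,
                            unsigned size)
    {
        MyDevState *s = opaque;
        if (addr == 0) {
            s->cmd = val;                 /* "firmware" reacts to the command */
        }
    }

    static const MemoryRegionOps mydev_ops = {
        .read = mydev_read,
        .write = mydev_write,
        .endianness = DEVICE_NATIVE_ENDIAN,
    };

    static void mydev_init(Object *obj)
    {
        MyDevState *s = MYDEV(obj);
        memory_region_init_io(&s->iomem, obj, &mydev_ops, s,
                              TYPE_MYDEV, 0x1000);
        sysbus_init_mmio(SYS_BUS_DEVICE(obj), &s->iomem);
    }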
In theory you could write a device that did a purely interpreted emulation of an onboard CPU, using QEMU facilities like timers and bottom-half callbacks to interpret a small chunk of instructions every so often. I don't know of any examples of anybody having written a device like that, though. It would be quite a lot of work and the speed of the resulting emulation would not be very fast.
This can be done by running two instances of QEMU, one built for the host system architecture and the other for the secondary architecture, and connecting them through some interface.
This has been done by Xilinx, by starting two separate QEMU processes with some inter-process communication between them, and by Neuroblade, by building QEMU for the nios2 architecture as a shared library and then loading it from a QEMU process that models the host architecture (in this case the interface can simply be modelled by direct function calls).
Related:
How can I use QEMU to simulate mixed platforms?
https://lists.gnu.org/archive/html/qemu-devel/2021-12/msg01969.html
I apologize in advance for the lack of precision in my phrasing/terminology... I'm not a system programmer by any means...
This is a security-related programming question... At work, I've been asked to assess the risk to a PCIe add-in card that depends on the integrity of the host operating system (specifically, Windows Server 2012 x64 and Red Hat Enterprise Linux 6/7 x86-64).
So my question is this:
We have a PCIe peripheral (an add-in board) that contains several embedded processors that will handle sensitive data. The preferred solution would be to encrypt the data before it enters the PCIe bus and decrypt it after it leaves the PCIe bus, but we can't do this for a variety of reasons (performance, cost, etc.). Instead, we'll be passing data in cleartext form over the PCIe bus.
Let's assume an attacker has network access to the machine, but not physical access. If a vendor's PCIe endpoint device is installed in a server, and the vendor's (signed) driver is up and running with the associated hardware, is it possible for a malicious process/thread to access (read/write) the PCI memory-mapped space(s) of the PCIe endpoint?
I know there are utilities that allow me to dump (read) the PCI config space of all endpoints in a PCIe hierarchy, but I have no idea whether that extends to reading and writing inside the memory-mapped windows of the installed endpoints (especially if the endpoint is already associated with a device driver).
Also, if this is possible, how difficult is it?
Are we talking about a user-space program being able to do this, or does it require the attacker to have root/admin access to the machine (to run a program of his design, or to install a fake/proxy driver)?
Also, does virtualization make a difference?
Accessing device memory requires operating in a more privileged protection ring than userland software, i.e. kernel mode. The only way to access it is by going through a driver or the kernel.
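On Linux specifically, a process with root privileges can map a device's BAR through sysfs and read or write it directly, with no custom driver required, which gives one concrete way to calibrate "how difficult". A minimal sketch (the PCI address is a placeholder for the endpoint's real domain:bus:device.function):

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* Placeholder address -- substitute the real endpoint's
         * domain:bus:device.function. */
        const char *res = "/sys/bus/pci/devices/0000:03:00.0/resource0";

        int fd = open(res, O_RDWR | O_SYNC);
        if (fd < 0) { perror("open (needs root)"); return 1; }

        /* Map the first 4 KiB of BAR0 into our address space. */
        volatile uint32_t *bar = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                      MAP_SHARED, fd, 0);
        if (bar == MAP_FAILED) { perror("mmap"); return 1; }

        printf("BAR0[0] = 0x%08x\n", (unsigned)bar[0]);  /* read a register */

        munmap((void *)bar, 4096);
        close(fd);
        return 0;
    }

Without root this path is denied, and with an IOMMU and signed-driver policies in place the attacker is pushed toward first compromising the kernel, which matches the answer above.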
So, from my understanding, there are two types of programs: those that are interpreted and those that are compiled. Interpreted programs are executed by an interpreter that is a native application for the platform it's on, and compiled programs are themselves native applications (or system software) for the platform they are on.
But my question is this: is anything besides the kernel actually being directly run by the CPU? A Windows Executable is a "Windows Executable", not an x86 or amd64 executable. Does that mean every other process that's not the kernel is literally being interpreted by the kernel in the same way that a browser interprets Javascript? Or is the kernel placing these processes on the "bare metal" that the kernel sits on top of?
If they're on the "bare metal", how, say, does Windows know that a program is a Windows program and not a Linux program, since they're both compiled for amd64 processors? If it's because of the "format" of the executable, how is that executable able to run on the "bare metal", since, to me, the fact that it's formatted to run on a particular OS would mean that some interpretation would be required for it to run.
Is this question too complicated for Stack Overflow?
They run on the "bare metal", but they do contain operating-system-specific things. An executable file will typically provide some instructions to the kernel (which are, arguably, "interpreted") as to how the program should be loaded into memory, and the file's code will provide ways for it to "hook" into the running operating system, such as via the operating system's API or via device drivers. Once such a non-interpreted program is loaded into memory, it runs on the bare metal but continues to communicate with the operating system, which is also running on the bare metal.
In the days of single-process operating systems, it was common for executables to essentially "seize" control of the entire computer and communicate with hardware directly. Computers like the Apple ][ and the Commodore 64 work like that. In a modern multitasking operating system like Windows or Linux, applications and the operating system share use of the CPU via a complex multitasking arrangement, and applications access the hardware via a set of abstractions built in to the operating system's API and its device drivers. Take a course in Operating System design if you are interested in learning lots of details.
Bouncing off Junaid's answer, the way that the kernel blocks a program from doing something "funny" is by controlling the allocation and usage of memory. The kernel requires that memory be requested and accessed through it via its API, and thus protects the computer from "unauthorized" access. In the days of single-process operating systems, applications had much more freedom to access memory and other things directly, without involving the operating system. An application running on an old Apple ][ can read from or write to any address in RAM that it wants to on the entire computer.
One of the reasons why a compiled application won't just "run" on another operating system is that these "hooks" are different for different operating systems. For example, an application that knows how to request the allocation of RAM from Windows might not have any idea how to request it from Linux or the Mac OS. As Disk Crasher mentioned, these low level access instructions are inserted by the compiler.
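To make the "different hooks" point concrete, here is the same operation, asking the OS for a page of memory, written against the two interfaces; the surrounding machine code is plain amd64 either way, but each request only makes sense to its own kernel:

    #include <stddef.h>

    #ifdef _WIN32
    #include <windows.h>

    void *get_page(void)
    {
        /* Windows hook: ask the NT kernel (via the Win32 API) for memory. */
        return VirtualAlloc(NULL, 4096, MEM_COMMIT | MEM_RESERVE,
                            PAGE_READWRITE);
    }
    #else
    #include <sys/mman.h>

    void *get_page(void)
    {
        /* Linux hook: the mmap system call serves the same purpose. */
        return mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    }
    #endif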
I think you are confusing things. A compiled program is in machine-readable format. When you run the program, the kernel will allocate memory, CPU time, etc., and ensure that the program does not interfere with other programs. If the program requires access to HW resources, disk, etc., the kernel will handle it, so the kernel always sits between the hardware and any software you run in user space.
If the program is interpreted, then a relevant interpreter for that language will convert the code to machine code on the fly, and the kernel will still provide the same functionality, like access to hardware and making sure programs aren't doing anything funny such as trying to access other programs' memory.
The only thing that runs on "bare metal" is machine code (which assembly language maps to directly), and it is abstracted from the programmer by many layers in the OS and compiler. Generally speaking, applications are compiled for an OS and CPU architecture. They will not run on other OSes, at least not without a compatible framework in place (e.g. Mono on Linux).
Back in the day a lot of code used to be written on bare metal using macro assemblers, but that's pretty much unheard of on PCs today. (And there was even a time before macro assemblers.)
I understand that an operating system enforces security policies on users when they use the system and filesystem, via the system calls supplied by said OS.
Is it possible to circumvent this security by issuing your own hardware instructions instead of making use of the OS's supplied system call interface? Even writing a single bit to a file you normally have no access to would be enough.
First, for simplicity, I'm treating the OS and the kernel as the same thing.
A CPU can be in different modes when executing code.
Let's say a hypothetical CPU has just two modes of execution (Supervisor and User).
When in Supervisor mode, you are allowed to execute any instructions, and you have full access to the hardware resources.
When in User mode, there is a subset of instructions you don't have access to, such as instructions to deal with hardware or to change the CPU mode. Trying to execute one of those instructions will cause the OS to be notified that your application is misbehaving, and it will be terminated. This notification is done through interrupts. Also, when in User mode, you will only have access to a portion of the memory, so your application can't even touch memory it is not supposed to.
Now, the trick for this to work is that while in Supervisor Mode, you can switch to User Mode, since it's a less privileged mode, but while in User Mode, you can't go back to Supervisor Mode, since the instructions for that are not permitted anymore.
The only way to go back to Supervisor mode is through system calls, or interrupts. That enables the OS to have full control of the hardware.
A possible example of how everything fits together for this hypothetical CPU:
The CPU boots in Supervisor mode
Since the CPU starts in Supervisor Mode, the first thing to run has access to the full system. This is the OS.
The OS sets up the hardware any way it wants (memory protections, etc.).
The OS launches any application you want after configuring permissions for that application. Launching the application switches to User Mode.
The application is running, and only has access to the resources the OS allowed when launching it. Any access to hardware resources need to go through System Calls.
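In illustrative pseudo-C for this hypothetical CPU (every name below is invented, not a real API), the launch step in the list above boils down to:

    /* Pseudo-C for the hypothetical two-mode CPU described above;
     * none of these functions exist on a real system. */
    void os_launch(struct app *a)
    {
        configure_memory_protection(a);  /* app may only touch its own pages */
        set_user_entry_point(a->entry);
        switch_to_user_mode();           /* privileged instruction: a one-way door */
        /* From here on, the app runs in User mode. The only way back into
         * this OS code is a system call or an interrupt. */
    }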
I've only explained the flow for a single application.
As a bonus to help you understand how this fits together with several applications running, a simplified view of how preemptive multitasking works:
In a real-world situation, the OS will set up a hardware timer before launching any applications.
When this timer expires, it causes the CPU to interrupt whatever it was doing (e.g. running an application), switch to Supervisor mode, and execute code at a predetermined location, which belongs to the OS and which applications don't have access to.
Since we're back in Supervisor Mode and running OS code, the OS now picks the next application to run, sets up any required permissions, switches to User Mode, and resumes that application.
These timer interrupts are how you get the illusion of multitasking. The OS keeps switching between applications quickly.
The bottom line here is that unless there are bugs in the OS (or the hardware design), the only way an application can go from User Mode to Supervisor Mode is through the OS itself with a System Call.
This is the mechanism I use in my hobby project (a virtual computer) https://github.com/ruifig/G4DevKit.
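Continuing the same illustrative pseudo-C (again, every name is invented), the preemption path sketched above boils down to a timer handler of roughly this shape:

    /* Installed by the OS at boot; the CPU jumps here in Supervisor mode
     * whenever the hardware timer expires, regardless of which app ran. */
    void timer_interrupt_handler(void)
    {
        save_registers(current_app);           /* freeze the interrupted app */
        current_app = scheduler_pick_next();   /* choose who runs next */
        program_timer(TIME_SLICE);             /* re-arm the timer */
        restore_registers(current_app);
        return_to_user_mode(current_app);      /* resume the chosen app */
    }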
HW devices are connected to the CPU through a bus. The CPU communicates with them either by using in/out instructions to read/write values at I/O ports (not used much with current HW; in the early age of home computers this was the common way), or a part of the device memory is "mapped" into the CPU address space, and the CPU controls the device by writing values at defined locations in that shared memory.
None of this should be accessible in a "user level" context, where common applications are executed by the OS (an application trying to write to that shared device memory would crash on an illegal memory access; actually, that piece of memory is usually not even mapped into user space, i.e. it doesn't exist from the user application's point of view). Direct in/out instructions are blocked at the CPU level too.
The device is controlled by the driver code, which either runs in a specially configured user-level context that has the particular ports and memory mapped (the micro-kernel model, where drivers are not part of the kernel, as in the OS MINIX). This architecture is more robust (a crash in a driver can't take down the kernel; the kernel can isolate a problematic driver and restart it, or just kill it completely), but the context switches between kernel and user level are a very costly operation, so data throughput is hurt a bit.
Or the device driver code runs at kernel level (the monolithic kernel model, as in Linux), so any vulnerability in driver code can attack the kernel directly (still not trivial, but a lot easier than trying to tunnel out of the user context through some kernel bug). But the overall performance of I/O is better (especially with devices like graphics cards or RAID disk clusters, where data bandwidth runs into GiBs per second). This is, for example, the reason why early USB drivers were such a huge security risk: they tended to be quite buggy, so a specially crafted USB device could execute rogue code in a kernel-level context.
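To see the "blocked at the CPU level" part in practice on x86 Linux: the in/out instructions fault in user mode unless the kernel has explicitly granted the process port access, and that grant itself requires root. A minimal sketch (port 0x378, the legacy parallel port, is just a convenient example):

    #include <stdio.h>
    #include <sys/io.h>   /* x86 Linux only */

    int main(void)
    {
        /* Ask the kernel for access to 3 ports starting at 0x378.
         * Without this (root-only) grant, the outb below would fault
         * and the process would be killed. */
        if (ioperm(0x378, 3, 1) != 0) {
            perror("ioperm (needs root)");
            return 1;
        }
        outb(0xFF, 0x378);   /* the privileged I/O instruction is now allowed */
        return 0;
    }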
So, as Hyd already answered, under ordinary circumstances, when everything works as it should, a user-level application should not be able to emit a single bit outside of its user sandbox, and suspicious behaviour outside of system calls will either be ignored or crash the app.
If you find a way to break this rule, it's a security vulnerability, and those usually get patched ASAP once the OS vendor is notified.
Some of the current problems are difficult to patch, though. For example, "row hammering" of current DRAM chips can't be fixed at the SW (OS) or CPU (configuration/firmware flash) level at all! Most current PC HW is vulnerable to this kind of attack.
Or, in the mobile world, devices use radio chips based on legacy designs, with closed-source firmware developed years ago, so if you have enough resources to pay for research on these, it's very likely you could seize any particular device with a fake BTS station sending malicious radio signals to the target device.
Etc. It's a constant war between vendors and security researchers to patch all vulnerabilities, and hackers trying to find, ideally, a zero-day exploit, or at least to pick off users who don't patch their devices/SW against known bugs fast enough.
Not normally. If it is possible, it is because of an operating system software error. If the software error is discovered, it is fixed fast, as it is considered a software vulnerability, which equals bad news.
"System" calls execute at a higher processor level than the application: generally kernel mode (but system systems have multiple system level modes).
What you see as a "system" call is actually just a wrapper that sets up registers and then triggers a change-mode exception of some kind (the method is system-specific). The system exception handler dispatches to the appropriate system server.
You cannot just write your own function and do bad things. True, sometimes people find bugs that allow circumventing the system protections. As a general principle, you cannot access devices unless you do it through the system services.
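For instance, on x86-64 Linux the wrapper's job is visible in a few lines of inline assembly; this is a hand-rolled equivalent of the libc write() wrapper, shown only to illustrate the register setup followed by the mode-change trap:

    /* Hand-rolled write(2) for x86-64 Linux: the "wrapper" is just
     * register setup followed by the syscall instruction. */
    static long my_write(int fd, const void *buf, unsigned long count)
    {
        long ret;
        __asm__ volatile (
            "syscall"                    /* trap into kernel mode */
            : "=a" (ret)                 /* rax: return value */
            : "a" (1),                   /* rax: syscall number (write) */
              "D" (fd),                  /* rdi: 1st argument */
              "S" (buf),                 /* rsi: 2nd argument */
              "d" (count)                /* rdx: 3rd argument */
            : "rcx", "r11", "memory");   /* clobbered by syscall */
        return ret;
    }

    int main(void)
    {
        my_write(1, "hello from a raw syscall\n", 25);
        return 0;
    }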
I know that user applications can run only in user mode, which is for system security. By contrast, most drivers run in kernel mode, to access I/O devices. Nevertheless, some drivers run in user mode but are still allowed to access I/O devices. So I have the following question: what is the main difference between drivers and user applications? Can't user applications be allowed to access I/O devices like some drivers do?
Thanks.
First of all, some background from this link:
Applications run in user mode, and core operating system components run in kernel mode. Many drivers run in kernel mode, but some drivers run in user mode.

When you start a user-mode application, Windows (or any OS) creates a process for the application. The process provides the application with a private virtual address space and a private handle table. Because an application's virtual address space is private, one application cannot alter data that belongs to another application.

In addition to being private, the virtual address space of a user-mode application is limited. A processor running in user mode cannot access virtual addresses that are reserved for the operating system. Limiting the virtual address space of a user-mode application prevents the application from altering, and possibly damaging, critical operating system data.

All code that runs in kernel mode shares a single virtual address space. This means that a kernel-mode driver is not isolated from other drivers and the operating system itself. If a kernel-mode driver accidentally writes to the wrong virtual address, data that belongs to the operating system or another driver could be compromised.
Also, from this link:
Software drivers

Some drivers are not associated with any hardware device at all. For example, suppose you need to write a tool that has access to core operating system data structures, which can be accessed only by code running in kernel mode. You can do that by splitting the tool into two components. The first component runs in user mode and presents the user interface. The second component runs in kernel mode and has access to the core operating system data. The component that runs in user mode is called an application, and the component that runs in kernel mode is called a software driver. A software driver is not associated with a hardware device.

Also, software drivers always run in kernel mode. The main reason for writing a software driver is to gain access to protected data that is available only in kernel mode. But device drivers do not always need access to kernel-mode data and resources. So some device drivers run in user mode.
What is the main difference between drivers and user applications?
The difference is the same as that between a submarine and a ship. Drivers are hardware-dependent and operating-system-specific. They usually provide the interrupt handling required for any necessary asynchronous, time-dependent hardware interface. Hence, almost all of them run in kernel mode. Whereas, as specified in the second paragraph above, to prevent applications from damaging critical OS data, user applications are bound to run in user space.
Also, not all drivers communicate directly with a device. For a given I/O request (like reading data from a device), there are often several drivers, layered in a stack, that participate in the request. The one driver in the stack that communicates directly with the device is called the function driver; the drivers that perform auxiliary processing are called filter drivers.
Can't user applications be allowed to access I/O devices like some drivers do?
The application calls a function implemented by the operating system, and the operating system calls a function implemented by the driver. The driver knows how to communicate with the device hardware to get the data. After the driver gets the data from the device, it returns the data to the operating system, which returns it to the application.
Applications connect to I/O devices through APIs/interfaces presented by device drivers (or rather, by the OS). The OS handles most hardware/software interaction. Hardware vendors write "plugins/modules/drivers" which allow the OS to control their specific hardware. So, using the interfaces provided by the OS, you can write your application to access I/O devices.
So you can't have a user application directly access the hardware without the help of drivers, because everything below in the hierarchy goes through drivers: device drivers run at a privilege level that lets them communicate with the hardware, whereas user applications do not.
Also, check this answer to get more of an idea about driver address space in various OSes.
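As a concrete illustration of that call chain on Linux (the device node below is just a placeholder; a real driver registers its own), the application only ever sees a file descriptor, and each read goes application → OS → driver → hardware and back:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Placeholder device node; an actual driver exposes its own. */
        int fd = open("/dev/ttyS0", O_RDONLY | O_NOCTTY);
        if (fd < 0) { perror("open"); return 1; }

        char buf[64];
        /* read() is the OS-provided function; the OS in turn invokes the
         * serial driver's read path, which talks to the UART hardware. */
        ssize_t n = read(fd, buf, sizeof buf);
        if (n > 0)
            printf("got %zd bytes from the device\n", n);

        close(fd);
        return 0;
    }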
Your question lacks any concrete operating system tags, so I'll answer generically about the subject.
Yes, drivers can be very similar to a user application in a microkernel-based operating system. In a microkernel, a driver is usually just a userspace app that queries the kernel for some generic capabilities and then functions normally in userspace.
Take for example a hypothetical implementation of a pagein/pageout driver in such an OS.
When a page fault happens in user mode, the transition is made to the kernel as usual via a synchronous exception, but a microkernel would queue the page-in request to a userspace driver via a local message queue.
After that (still in the exception handler), the faulting process is blocked and dequeued.
Then the userspace driver app does the necessary page-in and notifies the microkernel that the work is done, again via the local message queue. The kernel is then ready to reschedule the faulting process, as its memory mappings will be ready.
Such a design incurs a performance overhead, but it has its advantages, mainly that drivers can be implemented and tested just like any other user app.
In a monolithic kernel such an arrangement is impossible, and that case is better described by the other answers.
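A rough pseudo-C sketch of the flow described above; every identifier here is invented, since real microkernels (MINIX, seL4, etc.) each have their own IPC primitives:

    /* All types and functions below are invented for illustration. */
    struct pagein_req { struct thread *thread; vaddr_t addr; };

    /* Kernel side: the synchronous page-fault exception handler. */
    void page_fault_handler(struct thread *t, vaddr_t addr)
    {
        struct pagein_req req = { .thread = t, .addr = addr };
        block_and_dequeue(t);          /* the faulting thread stops running */
        mq_send(pager_queue, &req);    /* hand the request to the pager app */
        /* return to the scheduler; other threads run in the meantime */
    }

    /* Userspace driver side: an ordinary app looping on its message queue. */
    void pager_main(void)
    {
        struct pagein_req req;
        for (;;) {
            mq_recv(pager_queue, &req);          /* wait for work */
            load_page_from_backing_store(req.addr);
            mq_send(kernel_queue, &req);         /* "done": the kernel can now
                                                    install the mapping and
                                                    reschedule the thread */
        }
    }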