I/O instructions by a user-mode process - io

There are two ways to access hardware:
by memory-mapped I/O (MMIO)
by I/O ports
If a user-mode process wants to access I/O directly, without using system calls, and it knows about a particular piece of hardware, it can't reach it through memory-mapped I/O: the device memory is not in its address space, so a segmentation fault would occur. But what about using I/O ports? Since I/O ports are not in the memory address space, but are accessed by instructions executed directly by the processor, what will happen? Can the process access them or not?

Short version
No. For example, on the x86 architecture, in and out are privileged instructions that, with the usual IOPL of 0, can only run in Ring 0. Since user-mode applications run in Ring 3, the CPU faults when one of them tries to execute such an instruction.
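To see this concretely, here is a minimal sketch (assuming x86/x86-64 Linux and a compiler with GCC-style inline assembly) of what happens when user-mode code tries the instruction anyway; the #GP raised by the CPU is delivered to the process as SIGSEGV:

    /* Hypothetical demo: try an `out` instruction from Ring 3 (x86/x86-64 Linux,
     * GCC or Clang). Unless the kernel has granted port access beforehand, the
     * CPU raises #GP and the kernel delivers it to the process as SIGSEGV. */
    #include <stdio.h>

    int main(void)
    {
        unsigned char value = 0;
        unsigned short port = 0x80;   /* the classic POST diagnostic port */

        printf("about to execute `out` in user mode...\n");
        __asm__ volatile ("outb %0, %1" : : "a"(value), "Nd"(port));
        printf("not reached on a default configuration\n");
        return 0;
    }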
Long version
In general, in an OS that enforces memory protection and process separation (such as Windows, Linux, or macOS, but not DOS), user-mode applications don't have direct access to the hardware; they use system calls to ask the kernel to perform the required actions on their behalf. The kernel can then decide whether the application has the right to do said action, whether the action is safe to do, etc.
If a user-mode application is able to do anything that can normally only be done by the kernel, it is a bug/backdoor and a huge security vulnerability in the kernel (or sometimes in the CPU itself).
On the x86 architecture, I/O port instructions are privileged in all operating modes except Real Mode. An attempt to execute a privileged instruction from insufficiently privileged code generates a fault, specifically #GP, the General-Protection Fault, which corresponds to interrupt vector 0xD.
In order to allow processes like user-mode drivers to work, the OS is able to override this privileged status of I/O instructions in two ways:
Use the IOPL (I/O Privilege Level) field in the FLAGS register (bits 12 and 13). This is a number from 0 to 3 that specifies the least privileged ring that can run I/O instructions. Setting IOPL to 3 would allow ALL processes to access ANY I/O port, which is insecure and defeats the purpose of protecting everything else, like memory. For this reason, operating systems usually set this field to 0.
Use the IOPB (I/O Permission Bitmap) in the Task State Segment (TSS) structure. This is simply a collection of 65536 bits, one per port, each specifying whether that I/O port can be accessed (0) or not (1). If a user-mode driver needs to access, say, port 0xAB, the OS can clear the corresponding bit in the TSS and thereby allow the user-mode code to access only that port, even though IOPL = 0. If the operating system doesn't include all 65536 bits, the missing ones are assumed by the CPU to be set, i.e. those ports are inaccessible.
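Linux exposes this per-port mechanism to user space through the ioperm(2) system call (there is also iopl(2) for the IOPL route). A minimal sketch, assuming x86 Linux, root or CAP_SYS_RAWIO, and the harmless POST diagnostic port 0x80:

    /* Sketch: ask the Linux kernel to open up a single port for this process
     * (the kernel clears the matching bit in the task's I/O bitmap), then do
     * port I/O directly from user mode. Needs root or CAP_SYS_RAWIO. */
    #include <stdio.h>
    #include <sys/io.h>                     /* ioperm(), inb(), outb(); x86 Linux only */

    int main(void)
    {
        const unsigned short port = 0x80;   /* POST diagnostic port, harmless to poke */

        if (ioperm(port, 1, 1) != 0) {      /* grant access to exactly one port */
            perror("ioperm");
            return 1;
        }

        outb(0x42, port);                   /* now legal from Ring 3, for this port only */
        printf("wrote 0x42 to port 0x%x from user mode\n", port);

        ioperm(port, 1, 0);                 /* drop the permission again */
        return 0;
    }

Without the necessary privilege the ioperm() call itself fails with EPERM; with it, the subsequent outb() no longer faults even though IOPL stays at 0.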

Related

In a dual-socket system -- Linux using CPU socket 1 and some other program running on CPU socket 2 -- how do you prevent memory regions from overlapping?

I'm reading about operating system privilege levels and how the CPU's MMU, in cooperation with the OS, prevents invalid memory access.
I think I understand how that works, but I fail to understand how it would work if a non-OS program were running on socket 2 while Linux is running only on socket 1.
My question is as follows:
Is it possible to run this configuration -- Linux is given control of CPU socket 1 at boot, and a different OS (or program) takes control of CPU socket 2?
If yes, how do you prevent these two OSs (or Linux and the other program) from stomping on overlapping regions of memory?
How does privilege-level security work in this case?

Is it possible to circumvent OS security by not using the supplied System Calls?

I understand that an operating system enforces security policies on users when they use the system and filesystem, via the system calls supplied by said OS.
Is it possible to circumvent this security by issuing your own hardware instructions instead of making use of the system call interface supplied by the OS? Even writing a single bit to a file where you normally have no access would be enough.
First, for simplicity, I'm treating the OS and the kernel as the same thing.
A CPU can be in different modes when executing code.
Let's say a hypothetical CPU has just two modes of execution (Supervisor and User).
When in Supervisor mode, you are allowed to execute any instruction, and you have full access to the hardware resources.
When in User mode, there is a subset of instructions you don't have access to, such as instructions to deal with hardware or change the CPU mode. Trying to execute one of those instructions will cause the OS to be notified that your application is misbehaving, and it will be terminated. This notification is done through interrupts. Also, when in User mode, you only have access to a portion of the memory, so your application can't even touch memory it is not supposed to.
Now, the trick that makes this work is that while in Supervisor mode you can switch to User mode, since it's a less privileged mode, but while in User mode you can't go back to Supervisor mode, since the instructions for that are no longer permitted.
The only way to go back to Supervisor mode is through system calls or interrupts. That enables the OS to keep full control of the hardware.
A possible example of how everything fits together for this hypothetical CPU:
The CPU boots in Supervisor mode.
Since the CPU starts in Supervisor mode, the first thing to run has access to the full system. This is the OS.
The OS sets up the hardware any way it wants: memory protections, etc.
The OS launches any application you want, after configuring permissions for that application. Launching the application switches to User mode.
The application is running and only has access to the resources the OS allowed it when it was launched. Any access to hardware resources needs to go through system calls.
I've only explained the flow for a single application.
As a bonus, to help you understand how this fits together with several applications running, here is a simplified view of how preemptive multitasking works:
In a real-world situation, the OS will set up a hardware timer before launching any applications.
When this timer expires, it causes the CPU to interrupt whatever it was doing (e.g. running an application), switch to Supervisor mode, and execute code at a predetermined location, which belongs to the OS and which applications don't have access to.
Since we're back in Supervisor mode and running OS code, the OS now picks the next application to run, sets up any required permissions, switches to User mode, and resumes that application.
These timer interrupts are how you get the illusion of multitasking: the OS keeps switching between applications quickly.
The bottom line here is that unless there are bugs in the OS (or the hardware design), the only way an application can go from User Mode to Supervisor Mode is through the OS itself with a System Call.
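On a real system the picture is the same. The sketch below (assuming Linux and its libc) makes the one sanctioned crossing explicit by issuing the write system call through the raw syscall() wrapper instead of the usual library function:

    /* Sketch: on a real system (Linux assumed here), the only sanctioned way
     * from user mode into the kernel is a system call. Using the raw syscall()
     * wrapper instead of libc's write() makes the kernel transition explicit. */
    #define _GNU_SOURCE                     /* for syscall() in <unistd.h> */
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        const char msg[] = "hello from user mode\n";

        /* Traps into the kernel, which runs its write handler in supervisor
         * mode on our behalf and then drops back to user mode. */
        syscall(SYS_write, 1, msg, sizeof msg - 1);
        return 0;
    }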
This is the mechanism I use in my hobby project (a virtual computer) https://github.com/ruifig/G4DevKit.
Hardware devices are connected to the CPU through a bus, and the CPU communicates with them either by using in/out instructions to read/write values at I/O ports (not used much with current hardware; in the early age of home computers this was the common way), or by having part of the device's memory "mapped" into the CPU address space, so that the CPU controls the device by writing values to defined locations in that shared memory.
None of this should be accessible from the "user level" context in which the OS runs common applications: an application trying to write to that shared device memory would crash on an illegal memory access (in fact that piece of memory is usually not even mapped into user space, i.e. it doesn't exist from the user application's point of view), and direct in/out instructions are blocked at the CPU level too.
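An OS can still choose to hand a particular MMIO range to a trusted user process by mapping it in. A minimal sketch of that idea on Linux, using /dev/mem (frequently disabled by CONFIG_STRICT_DEVMEM); the physical address below is a made-up placeholder, not a real device:

    /* Sketch: MMIO only works from user mode if the kernel deliberately maps a
     * device's physical range into the process. On Linux this can be done via
     * /dev/mem (often disabled by CONFIG_STRICT_DEVMEM) or via a UIO driver.
     * DEVICE_PHYS_ADDR below is a made-up placeholder, not a real device. */
    #include <fcntl.h>
    #include <inttypes.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define DEVICE_PHYS_ADDR 0xFE000000UL   /* placeholder MMIO base address */
    #define DEVICE_REGION_SZ 4096UL

    int main(void)
    {
        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        if (fd < 0) { perror("open /dev/mem"); return 1; }

        volatile uint32_t *regs = mmap(NULL, DEVICE_REGION_SZ,
                                       PROT_READ | PROT_WRITE, MAP_SHARED,
                                       fd, DEVICE_PHYS_ADDR);
        if (regs == MAP_FAILED) { perror("mmap"); return 1; }

        /* The device's registers now appear as ordinary memory in this process. */
        printf("register 0 reads as 0x%08" PRIx32 "\n", regs[0]);

        munmap((void *)regs, DEVICE_REGION_SZ);
        close(fd);
        return 0;
    }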
The device is controlled by driver code, which runs either in a specially configured user-level context that has the particular ports and memory mapped (the microkernel model, where drivers are not part of the kernel, as in MINIX). This architecture is more robust (a crash in a driver can't take down the kernel; the kernel can isolate a problematic driver and restart it, or just kill it completely), but the context switches between kernel and user level are a very costly operation, so data throughput suffers a bit.
Or the device driver code runs at kernel level (the monolithic kernel model, as in Linux), so any vulnerability in driver code can attack the kernel directly (still not trivial, but a lot easier than trying to tunnel out of the user context through some kernel bug). The overall I/O performance is better, though (especially with devices like graphics cards or RAID disk arrays, where the data bandwidth runs into GiB per second). This is, for example, why early USB drivers were such a huge security risk: they tended to be quite buggy, so a specially crafted USB device could get rogue code executed in a kernel-level context.
So, as Hyd already answered, under ordinary circumstances, when everything works as it should, a user-level application is not able to emit a single bit outside of its user sandbox, and suspicious behaviour outside of system calls will either be ignored or crash the app.
If you find a way to break this rule, it's a security vulnerability, and those usually get patched ASAP once the OS vendor is notified about it.
Although some of the current problems are difficult to patch. For example, "row hammering" of current DRAM chips can't be fixed at the software (OS) or CPU (configuration/firmware) level at all; most current PC hardware is vulnerable to this kind of attack.
Or, in the mobile world, devices use radio chips based on legacy designs, with closed-source firmware developed years ago, so if you have enough resources to pay for research on these, it's very likely you could seize any particular device with a fake BTS station sending a malicious radio signal to the target device.
Etc. It's a constant war between vendors and security researchers patching vulnerabilities on one side, and attackers on the other looking for, ideally, a zero-day exploit, or at least picking off users who don't patch their devices/software fast enough against known bugs.
Not normally. If it is possible, it is because of an operating system software error. If the software error is discovered, it is fixed fast, as it is considered a security vulnerability, which equals bad news.
"System" calls execute at a higher processor privilege level than the application: generally kernel mode (but some systems have multiple system-level modes).
What you see as a "system" call is actually just a wrapper that sets up registers and then triggers a change-mode exception of some kind (the method is system specific). The system exception handler then dispatches to the appropriate system server.
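For illustration, on x86-64 Linux that wrapper boils down to something like the following hand-rolled sketch (the register assignments follow the Linux syscall ABI; real libc wrappers add error handling and portability):

    /* Sketch of what the wrapper reduces to on x86-64 Linux: load the call
     * number and arguments into registers, then execute the instruction that
     * switches the CPU into kernel mode (`syscall` here; other systems use a
     * trap or software interrupt instead). */
    #include <stddef.h>

    static long raw_write(int fd, const void *buf, size_t len)
    {
        long ret;
        __asm__ volatile ("syscall"
                          : "=a"(ret)                    /* return value comes back in rax  */
                          : "a"(1),                      /* 1 == __NR_write on x86-64 Linux */
                            "D"(fd), "S"(buf), "d"(len)  /* args in rdi, rsi, rdx           */
                          : "rcx", "r11", "memory");     /* registers clobbered by syscall  */
        return ret;
    }

    int main(void)
    {
        raw_write(1, "entered the kernel and came back\n", 33);
        return 0;
    }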
You cannot just write your own function and do bad things. True, sometimes people find bugs that allow circumventing the system protections, but as a general principle you cannot access devices unless you do it through the system services.

What is the main difference between drivers and user applications?

I know that user applications can run only in user mode, which is for system security. In contrast, most drivers run in kernel mode, to access I/O devices. Nevertheless, some drivers run in user mode but are still allowed to access I/O devices. So I have the following questions: What is the main difference between drivers and user applications? Can't user applications be allowed to access I/O devices the way some drivers do?
Thanks.
First of all, some background from this link:
Applications run in user mode, and core operating system components
run in kernel mode. Many drivers run in kernel mode, but some drivers
run in user mode.
When you start a user-mode application, Windows(/any OS) creates a process for
the application. The process provides the application with a private
virtual address space and a private handle table. Because an
application's virtual address space is private, one application cannot
alter data that belongs to another application.
In addition to being private, the virtual address space of a user-mode
application is limited. A processor running in user mode cannot access
virtual addresses that are reserved for the operating system. Limiting
the virtual address space of a user-mode application prevents the
application from altering, and possibly damaging, critical operating
system data.
All code that runs in kernel mode shares a single virtual address
space. This means that a kernel-mode driver is not isolated from other
drivers and the operating system itself. If a kernel-mode driver
accidentally writes to the wrong virtual address, data that belongs to
the operating system or another driver could be compromised.
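To make the user-mode side of this concrete, here is a small sketch (assuming 64-bit x86 Linux; the address used is just a typical kernel-half address chosen for illustration) showing that a process which pokes at a kernel virtual address gets SIGSEGV rather than kernel data:

    /* Sketch (64-bit x86 Linux assumed): a user-mode process cannot read kernel
     * virtual addresses. Dereferencing one is answered with SIGSEGV, not data.
     * The address below is just a typical kernel-text address for illustration. */
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static void on_segv(int sig)
    {
        (void)sig;
        /* write() is async-signal-safe; printf() is not. */
        const char msg[] = "SIGSEGV: kernel addresses are off limits to user mode\n";
        write(2, msg, sizeof msg - 1);
        _exit(0);
    }

    int main(void)
    {
        signal(SIGSEGV, on_segv);

        volatile unsigned long *kernel_addr =
            (unsigned long *)0xffffffff81000000UL;

        printf("peeking at %p from user mode...\n", (void *)kernel_addr);
        unsigned long v = *kernel_addr;          /* faults here */
        printf("never reached: 0x%lx\n", v);
        return 0;
    }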
Also, from this link
Software drivers
Some drivers are not associated with any hardware device at all. For
example, suppose you need to write a tool that has access to core
operating system data structures, which can be accessed only by code
running in kernel mode. You can do that by splitting the tool into two
components. The first component runs in user mode and presents the
user interface. The second component runs in kernel mode and has
access to the core operating system data. The component that runs in
user mode is called an application, and the component that runs in
kernel mode is called a software driver. A software driver is not
associated with a hardware device.
Also, software drivers always run in kernel mode. The main reason
for writing a software driver is to gain access to protected data that
is available only in kernel mode. But device drivers do not always
need access to kernel-mode data and resources. So some device drivers
run in user mode.
What is the main difference between drivers and user applications?
The difference is the same as that between a submarine and a ship. Drivers are hardware-dependent and operating-system-specific. They usually provide the interrupt handling required for any necessary asynchronous, time-dependent hardware interface. Hence almost all of them run in kernel mode, whereas, as explained in the second quoted paragraph above, user applications are bound to run in user space to prevent them from damaging critical OS data.
Also, not all drivers communicate directly with a device. For a given I/O request (like reading data from a device), there are often several drivers, layered in a stack, that participate in the request. The one driver in the stack that communicates directly with the device is called the function driver; the drivers that perform auxiliary processing are called filter drivers.
Can't user application be allowed to access I/O devices like some
drivers do?
The application calls a function implemented by the operating system, and the operating system calls a function implemented by the driver. The driver knows how to communicate with the device hardware to get the data. After the driver gets the data from the device, it returns the data to the operating system, which returns it to the application.
Applications connect to I/O devices through APIs/interfaces presented by device drivers (or rather, by the OS). The OS handles most hardware/software interaction. Hardware vendors write "plugins/modules/drivers" which allow the OS to control their specific hardware. So, using the interfaces provided by the OS, you can write your application to access I/O devices.
So you can't have a user application access the hardware directly without the help of drivers: everything below it in the hierarchy goes through a driver, because drivers are written against the device's low-level hardware interface and run with the access needed to communicate with it, whereas user applications are written against the OS-provided interfaces and never get that direct access.
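To make that call chain concrete, on Linux it usually amounts to nothing more than opening a device node that a driver exposes and reading from it. A minimal sketch (/dev/ttyS0, the first serial port, is just an example device):

    /* Sketch: how a user application typically reaches hardware on Linux, namely
     * by opening a device node that a driver exposes, never by touching the
     * device itself. /dev/ttyS0 (first serial port) is just an example device. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[64];

        int fd = open("/dev/ttyS0", O_RDONLY | O_NOCTTY | O_NONBLOCK);
        if (fd < 0) { perror("open"); return 1; }       /* e.g. no permission, no device */

        ssize_t n = read(fd, buf, sizeof buf);          /* the serial driver talks to the UART */
        if (n < 0)
            perror("read");                             /* e.g. EAGAIN if nothing buffered */
        else
            printf("the driver handed us %zd bytes\n", n);

        close(fd);
        return 0;
    }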
Also, check this answer to get more of an idea about driver address space in various OSes.
Your question lacks any concrete operating system tags, so I'll answer generically about the subject.
Yes, drivers can be very similar to a user application in a microkernel-based operating system. In a microkernel, a driver is usually just a userspace app that requests some generic capabilities from the kernel and then functions normally in userspace.
Take for example a hypothetical implementation of a pagein/pageout driver in such an OS.
When a page fault happens in user mode, the transition to the kernel is made as usual for synchronous exceptions, but a microkernel would queue the page-in request to a userspace driver via a local message queue.
After that (still in the exception handler), the faulting process is marked as blocked and dequeued from the scheduler.
Then the userspace driver app does the necessary page-in and notifies the microkernel that the work is done, again via a local message queue. The kernel is then ready to reschedule the faulting process, as its memory mappings will be ready.
Such a design incurs a performance overhead, but it has its advantages, mainly that drivers can be implemented and tested just like any other user app.
In a monolithic kernel such an arrangement is impossible, and that case is better described by the other answers.
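Purely to illustrate the shape of such a userspace driver, here is a toy sketch; the queue names and message layout are invented for this example and do not correspond to any real microkernel's interface (POSIX message queues merely stand in for the kernel's local message passing):

    /* Toy sketch only: the general shape of a userspace pager in a microkernel.
     * The queue names and message layout are invented for this example; POSIX
     * message queues merely stand in for the kernel's local message passing. */
    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdint.h>
    #include <stdio.h>

    struct fault_msg { uint64_t pid; uint64_t vaddr; };          /* invented layout */
    struct done_msg  { uint64_t pid; uint64_t vaddr; int ok; };  /* invented layout */

    static int page_in(uint64_t pid, uint64_t vaddr)
    {
        /* ...fetch the page from backing store and hand it to the kernel... */
        (void)pid; (void)vaddr;
        return 1;
    }

    int main(void)
    {
        mqd_t in  = mq_open("/pager.faults", O_RDONLY);   /* hypothetical queue names */
        mqd_t out = mq_open("/pager.done",   O_WRONLY);
        if (in == (mqd_t)-1 || out == (mqd_t)-1) { perror("mq_open"); return 1; }

        for (;;) {                                        /* service faults forever */
            struct fault_msg f;
            if (mq_receive(in, (char *)&f, sizeof f, NULL) < 0) break;

            struct done_msg d = { f.pid, f.vaddr, page_in(f.pid, f.vaddr) };
            mq_send(out, (const char *)&d, sizeof d, 0);
        }
        return 0;
    }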

Is there a difference between sudo mode and kernel mode?

In a UNIX-like system, we have a user mode and a kernel mode. There are some instructions which cannot be executed in user mode. However, when we use sudo, we can access many critical parts of the OS and perform critical actions.
My question is: when a program is executed under sudo, does the whole program run in kernel mode? Or is sudo simply an administrative user whose powers are a mere subset of the operations that can be performed by the kernel?
Yes, there is a huge difference between sudo and kernel mode.
Kernel mode is related to CPU modes. Most processors (in particular all those running a common Linux kernel, not a µClinux one), e.g. the Intel processor inside your laptop, have several modes of operation, at least two: the privileged (or supervisor) mode, where all machine instructions are possible (including the most unsafe ones, like those configuring the MMU, disabling interrupts, halting the machine, or doing physical I/O, i.e. sending bytes to the network, a printer, or a disk), and the user mode, where some machine instructions are prohibited (in particular physical I/O instructions, MMU configuration, interrupt disabling, etc.).
On Linux, only kernel code (including kernel modules) is running in kernel mode.
Everything else is in user mode.
Applications (even commands running as root) execute in user mode and interact with the Linux kernel through system calls (this is the only way for an application to interact with the kernel), listed in syscalls(2). So application code sees a "virtual machine" capable of doing syscalls and executing user-mode instructions. The kernel manages authentication and credentials (see credentials(7) and capabilities(7)).
sudo simply gives a command (using setuid techniques) the permissions of root (i.e. user id 0). Some more syscalls then become possible (or succeed where they would otherwise fail), but the command (i.e. the process running that command) is still running in user mode, uses virtual memory, and has its own address space.
There is no such thing as sudo mode. There is only user space and kernel space.
As you said, kernel mode may execute any instruction offered by the CPU and do anything to the hardware. User mode programs may only access memory that is mapped to the running process, and they are blocked from any direct hardware access. Via the system call mechanism, a user mode program may call the kernel code, which will perform the hardware access on its behalf and return the result back into user space.
In user space, there are additional restrictions on users who are not root (root being user ID number 0). For example, they can only access certain files, and they can only listen on TCP ports numbered above 1024. Running sudo will start a process as the root user, who does not have these restrictions in force.
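As a concrete example of one such user-space restriction (one that sudo lifts while leaving the process firmly in user mode), a sketch that tries to bind a listening socket to TCP port 80; run unprivileged it fails with EACCES, run under sudo it succeeds:

    /* Sketch: one restriction that applies to non-root users but not to root.
     * Binding to TCP port 80 fails with EACCES without privilege; under sudo
     * it succeeds, yet the process runs in user mode either way. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        if (s < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family      = AF_INET;
        addr.sin_port        = htons(80);             /* a privileged port */
        addr.sin_addr.s_addr = htonl(INADDR_ANY);

        if (bind(s, (struct sockaddr *)&addr, sizeof addr) < 0)
            perror("bind to port 80");                /* EACCES when not root */
        else
            printf("bound to port 80 (running as root?)\n");

        close(s);
        return 0;
    }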
But processes which are run as the root user (via sudo) are still running in user space, and are still subject to all the same restrictions that implies.
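And to see that root is still user mode, a sketch that executes a genuinely privileged instruction (hlt, assuming x86 Linux and GCC-style inline assembly); the resulting #GP is delivered as SIGSEGV whether the process is root or not:

    /* Sketch: even under sudo the process is still in user mode, so a genuinely
     * privileged instruction such as `hlt` faults (delivered as SIGSEGV on x86
     * Linux) whether or not the effective user id is 0. */
    #include <stdio.h>

    int main(void)
    {
        printf("uid 0 or not, we are still in ring 3; executing hlt...\n");
        __asm__ volatile ("hlt");
        printf("never reached\n");
        return 0;
    }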

In single-threaded applications, is that one and only thread a kernel thread?

From Wikipedia it says:
A kernel thread is the "lightest" unit of kernel scheduling. At least one kernel thread exists within each process.
I've learned that a process is a container that houses memory space, file handles, device handles, system resources, etc., and that the thread is the thing that actually gets scheduled by the kernel.
So in a single-threaded application, is that one thread (the main thread, I believe) a kernel thread?
I assume you are talking about this article:
http://en.wikipedia.org/wiki/Kernel_thread
According to that article, in a single-threaded application, since you have only one thread by definition, it has to be a kernel thread; otherwise it would not get scheduled and would not run.
If you had more than one thread in your application, then it would depend on how user-mode multithreading is implemented (kernel threads, fibers, etc.).
It's important to note, however, that it would be a kernel thread running in user mode when executing the application code (unless you make a system call). Any attempt to execute a protected instruction while running in user mode will cause a fault that eventually leads to the process being terminated.
So "kernel thread" here is not to be confused with supervisor/privileged mode and kernel code.
You can execute kernel code, but you have to go through a system call gate first.
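For a Linux-flavoured illustration of the first point, a sketch showing that even a single-threaded program's main thread has a thread id known to the kernel scheduler (for the main thread it equals the process id):

    /* Sketch (Linux assumed): even a single-threaded program's main thread is an
     * entity the kernel knows about and schedules; it has a thread id of its own,
     * which for the main thread equals the process id. */
    #define _GNU_SOURCE                      /* for syscall() */
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = getpid();
        pid_t tid = (pid_t)syscall(SYS_gettid);   /* raw call; works on older glibc too */

        printf("pid=%d tid=%d: the one thread in this process is kernel-scheduled\n",
               (int)pid, (int)tid);
        return 0;
    }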
No. In modern operating systems applications and the kernel run at different processor protection levels (often called rings). For example, Intel CPUs have four protection levels. Kernel code runs at Ring 0 (kernel mode) and is able to execute the most privileged processor instructions, whereas application code runs at Ring 3 (user mode) and is not allowed to execute certain operations. See http://en.wikipedia.org/wiki/Ring_(computer_security)

Resources