Is it possible for an external hardware device to use system calls and access the operating system? It reminds me of the auto-run feature of a disk-on-key (USB flash) device, but I'm not sure whether that happens because of the operating system or the device itself.
A system call requires a process. Unless there is a process tied to the device, there is no way to do a system call.
Normally, devices access the operating system through interrupts. Interrupts are handled in a manner similar to the way system calls are dispatched. The difference is that system calls are triggered by exceptions rather than interrupts.
I am sure someone could build one, or perhaps already has. But in general that is not how it works. The hardware causes some software, via an interrupt, polling, or some other mechanism, to handle that hardware event (e.g., the insertion of a USB key, after which enabled features of the USB driver, or of operating system software above the driver level, search for autorun files and run them).
When you click the mouse button, it simply sends a signal to the computer, which in turn wakes up some software that handles that button press. That software is either part of, or interacts with, the operating system.
Automatically mounting a drive, CD, or DVD when it is inserted starts with signals being sent that wake up low-level software for that interface. That is then passed up to the operating system, which, if the feature is enabled, might automatically mount the drive, and then, if further features are enabled, might search for certain agreed-upon files that allow software on that device to be run by the operating system. All of the operating system interaction is done by software.
Let me give some concrete context for motivation.
I've been enjoying the program AHK (AutoHotkey) for quite some time. It allows the user to script various tasks on a Windows machine and, if need be, bind those actions to hotkeys.
I've never understood how it is that if I create a binding for, say, Alt+K, Windows will then know to inform AHK first when that key combination is pressed. And if AHK then decides to synthesize a keystroke in response, Windows will know the intended target for that command.
Furthermore, if I start a program in administrator mode, AHK seems to no longer get to intercept any device input; the input is passed immediately to the currently focused program. That is, unless I also run the AHK script in administrator mode, in which case everything is back to normal.
Can anybody shed some light on what's going on behind the scenes here? And if there are considerable differences on Linux, I'm also interested to hear about those.
Based on my Operating Systems course, I will answer this in the most generic way.
Every I/O device has a device controller. The operating system never communicates with the device controller directly. The OS uses a special piece of software called a device driver (usually provided by the vendor), which sits between the device controller and the OS.
The device driver understands the device controller and provides the OS with a uniform interface to communicate with the device. For example, to start an I/O operation, the device driver will load the appropriate registers in the device controller. The controller will examine the contents of those registers and determine what action is to be taken (like reading a character from the keyboard), then initiate the transfer from the device to a local buffer.
Once the transfer is complete, the controller will inform the driver using an interrupt. The driver then returns control back to the operating system, possibly returning a pointer to the data. For other operations, the driver returns status information.
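To make this concrete, here is a minimal sketch in C of that request/interrupt flow, assuming a hypothetical memory-mapped controller; the register addresses and the command value are made up:

```c
#include <stdint.h>

/* Hypothetical memory-mapped controller registers (addresses made up). */
#define CTRL_REG ((volatile uint32_t *)0xFFFF0000u)  /* command register */
#define DATA_REG ((volatile uint32_t *)0xFFFF0008u)  /* data buffer      */

#define CMD_READ_CHAR 0x1u   /* made-up "read one character" command */

static volatile int transfer_done;

/* Called by the interrupt dispatcher when the controller raises its IRQ. */
void device_irq_handler(void)
{
    transfer_done = 1;
}

/* Driver entry point the OS calls: start the I/O operation by loading the
 * controller's registers, then wait for the completion interrupt. */
uint32_t driver_read_char(void)
{
    transfer_done = 0;
    *CTRL_REG = CMD_READ_CHAR;  /* controller examines this and acts on it */

    while (!transfer_done)
        ;                       /* a real driver would sleep, not busy-wait */

    return *DATA_REG;           /* data the controller placed in its buffer */
}
```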
I understand that an operating system enforces security policies on users when they use the system and filesystem, via the system calls supplied by said OS.
Is it possible to circumvent this security by issuing your own hardware instructions instead of making use of the supplied system call interface of the OS? Even writing a single bit to a file you normally have no access to would be enough.
First, for simplicity, I'm considering the OS and the kernel to be the same thing.
A CPU can be in different modes when executing code.
Let's say a hypothetical CPU has just two modes of execution (Supervisor and User).
When in Supervisor mode, you are allowed to execute any instruction, and you have full access to the hardware resources.
When in User mode, there is a subset of instructions you don't have access to, such as instructions that deal with hardware or change the CPU mode. Trying to execute one of those instructions will cause the OS to be notified that your application is misbehaving, and it will be terminated. This notification is done through interrupts. Also, when in User mode, you only have access to a portion of the memory, so your application can't even touch memory it is not supposed to.
Now, the trick for this to work is that while in Supervisor Mode you can switch to User Mode, since it's a less privileged mode, but while in User Mode you can't simply go back to Supervisor Mode, since the instructions for that are no longer permitted.
The only way to go back to Supervisor mode is through system calls or interrupts. That enables the OS to have full control of the hardware.
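You can actually watch the User-mode restriction bite on a real machine. For example, on x86-64 Linux (just one concrete system; the details differ elsewhere), executing the privileged hlt instruction from an ordinary program gets the process killed:

```c
#include <stdio.h>

int main(void)
{
    printf("about to execute a privileged instruction...\n");

    /* 'hlt' is only allowed in supervisor mode. From user mode the CPU
     * raises a general protection fault, which the kernel delivers to
     * this process as a fatal signal, terminating it. */
    __asm__ volatile("hlt");

    printf("never reached\n");
    return 0;
}
```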
A possible example of how everything fits together for this hypothetical CPU:
The CPU boots in Supervisor mode
Since the CPU starts in Supervisor Mode, the first thing to run has access to the full system. This is the OS.
The OS sets up the hardware any way it wants: memory protections, etc.
The OS launches any application you want after configuring permissions for that application. Launching the application switches to User Mode.
The application is running, and only has access to the resources the OS allowed when launching it. Any access to hardware resources needs to go through System Calls.
I've only explained the flow for a single application.
As a bonus to help you understand how this fits together with several applications running, here is a simplified view of how preemptive multitasking works:
In a real-world situation, the OS will set up a hardware timer before launching any applications.
When this timer expires, it causes the CPU to interrupt whatever it was doing (e.g., running an application), switch to Supervisor Mode, and execute code at a predetermined location, which belongs to the OS and which applications don't have access to.
Since we're back in Supervisor Mode and running OS code, the OS now picks the next application to run, sets up any required permissions, switches to User Mode, and resumes that application.
These timer interrupts are how you get the illusion of multitasking: the OS keeps switching between applications quickly.
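To make the sequence above concrete, here is a much-simplified sketch of the OS-side handler. This is only an illustration: every name in it (scheduler_pick_next, switch_to_user_mode, and so on) is hypothetical, not taken from any real kernel.

```c
/* Hypothetical OS-side timer interrupt handler. The CPU has already
 * switched to Supervisor Mode before this function is entered. */
struct cpu_state { unsigned long regs[32]; unsigned long pc; };
struct task      { struct cpu_state saved_state; /* permissions, ... */ };

extern struct task *current_task;
extern struct task *scheduler_pick_next(void);
extern void timer_set(unsigned int ticks);                  /* re-arm the timer */
extern void switch_to_user_mode(const struct cpu_state *s); /* never returns    */

#define TIME_SLICE 10 /* e.g. 10 timer ticks per application */

void timer_interrupt_handler(const struct cpu_state *interrupted_state)
{
    current_task->saved_state = *interrupted_state;  /* save the interrupted app  */
    current_task = scheduler_pick_next();            /* pick the next app to run  */
    timer_set(TIME_SLICE);                           /* ensure we're called again */
    switch_to_user_mode(&current_task->saved_state); /* resume it in User Mode    */
}
```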
The bottom line here is that unless there are bugs in the OS (or the hardware design), the only way an application can go from User Mode to Supervisor Mode is through the OS itself with a System Call.
This is the mechanism I use in my hobby project (a virtual computer) https://github.com/ruifig/G4DevKit.
HW devices are connected to the CPU through a bus. The CPU communicates with them either by using in/out instructions to read/write values at I/O ports (not used much by current HW; in the early days of home computers this was the common way), or by having a part of the device's memory "mapped" into the CPU address space, so the CPU controls the device by writing values at defined locations in that shared memory.
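To illustrate the two styles, here is a hedged C sketch. The port number 0x3F8 is the classic COM1 base on a PC; the memory-mapped address is made up, and, as the next paragraph explains, both accesses only work with supervisor-level privileges:

```c
#include <stdint.h>

/* Style 1: port-mapped I/O (x86 'out' instruction, privileged). */
static inline void outb(uint16_t port, uint8_t value)
{
    __asm__ volatile("outb %0, %1" : : "a"(value), "Nd"(port));
}

/* Style 2: memory-mapped I/O; this register address is made up. */
#define DEVICE_REG ((volatile uint32_t *)0xFE000000u)

void poke_device(void)
{
    outb(0x3F8, 'A');   /* write a byte to an I/O port (0x3F8 = COM1 base) */
    *DEVICE_REG = 0x1u; /* write a value to a memory-mapped device register */
}
```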
None of this should be accessible in the "user level" context where the OS runs ordinary applications (an application trying to write to that shared device memory would crash with an illegal memory access; actually, that piece of memory is usually not even mapped into user space, i.e., it doesn't exist from the user application's point of view). Direct in/out instructions are blocked at the CPU level too.
The device is controlled by the driver code, which either runs in a specially configured user-level context that has the particular ports and memory mapped (the micro-kernel model, where drivers are not part of the kernel, as in the OS MINIX). This architecture is more robust (a crash in a driver can't take down the kernel; the kernel can isolate a problematic driver and restart it, or just kill it completely), but the context switches between kernel and user level are a very costly operation, so data throughput suffers a bit.
Or the device driver code runs at kernel level (the monolithic kernel model, like Linux), so any vulnerability in driver code can attack the kernel directly (still not trivial, but a lot easier than trying to tunnel out of the user context through some kernel bug). But the overall I/O performance is better (especially with devices like graphics cards or RAID disk clusters, where data bandwidth runs into GiBs per second). For example, this is the reason why early USB drivers were such a huge security risk: they tended to be quite buggy, so a specially crafted USB device could execute rogue code in a kernel-level context.
So, as Hyd already answered, under ordinary circumstances, when everything works as it should, a user-level application should not be able to emit a single bit outside of its user sandbox, and suspicious behaviour outside of system calls will either be ignored or crash the app.
If you find a way to break this rule, it's a security vulnerability, and those usually get patched ASAP once the OS vendor is notified.
Although some current problems are difficult to patch. For example, the "row hammering" of current DRAM chips can't be fixed at the SW (OS) or CPU (configuration/firmware flash) level at all! Most current PC HW is vulnerable to this kind of attack.
Or, in the mobile world, devices use radio chips based on legacy designs, with closed-source firmware developed years ago; if you have enough resources to pay for research on these, it's very likely you could seize any particular device with a fake BTS station sending malicious radio signals to the target device.
Etc. It's a constant war: vendors and security researchers race to patch all vulnerabilities, while attackers try to find, ideally, a zero-day exploit, or at least pick off users who don't patch their devices/SW against known bugs fast enough.
Not normally. If it is possible, it is because of an operating-system software error. If such an error is discovered, it gets fixed fast, as it is considered a software vulnerability, which equals bad news.
"System" calls execute at a higher processor level than the application: generally kernel mode (but system systems have multiple system level modes).
What you see as a "system" call is actually just a wrapper that sets up registers and then triggers a change-mode exception of some kind (the method is system-specific). The system exception handler then dispatches to the appropriate system service.
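As a concrete illustration (for x86-64 Linux specifically; as said, the method is system-specific), the wrapper around write(2) boils down to roughly this sketch:

```c
/* A hand-rolled wrapper for write(2) on x86-64 Linux: set up the registers,
 * then execute the instruction that triggers the change to kernel mode. */
static long my_write(int fd, const void *buf, unsigned long count)
{
    long ret;
    __asm__ volatile(
        "syscall"                              /* the change-mode instruction   */
        : "=a"(ret)                            /* return value comes back in rax */
        : "a"(1),                              /* system call number 1 = write   */
          "D"((long)fd), "S"(buf), "d"(count)  /* arguments in rdi, rsi, rdx     */
        : "rcx", "r11", "memory");             /* syscall clobbers these         */
    return ret;
}

int main(void)
{
    my_write(1, "hello from a raw system call\n", 29);  /* fd 1 = stdout */
    return 0;
}
```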
You cannot just write your own function and do bad things. True, sometimes people find bugs that allow the system protections to be circumvented, but as a general principle, you cannot access devices unless you go through the system services.
Normally, MMU-less systems don't have an MPU (memory protection unit) either, and there is also no distinction between user and kernel modes. In such a case, assuming we have an MMU-less system with some piece of hardware mapped into the CPU address space, does it really make sense to have device drivers in the kernel if all the hardware resources can be accessed from userspace?
Does kernel code have any more control over memory than user code?
Yes, on platforms without MMUs that host uClinux, it makes sense to do everything as if you had a normal embedded Linux environment. It is a cleaner design to have user applications and services go through their normal interfaces (syscalls, etc.) and have the OS route those kernel requests through to device drivers, file systems, the network stack, etc.
Although the kernel does not have more control over the hardware in these circumstances, the actual hardware should only be touched by system software running in the kernel. Not limiting access to the hardware would make debugging things like system resets and memory corruption virtually impossible. This practice also makes your design more portable.
Exceptions may be made for user-mode debugging binaries that are only used in-house for platform bring-up and diagnostics.
Is there a mechanism to log the peripheral interactions? E.g., if there is an application running on the Linux kernel and it interacts with the physical world over UART, CAN, or any other interface, is there some command or tool that can log these interactions (the data transferred is not required), so that it comes in handy for understanding which peripheral the application interacts with?
Thanks in advance
Assuming a user-mode (not in-kernel) program, you can run it through strace, which will trace the program's use of system calls.
To interact with peripheral hardware, a program must cooperate with the kernel and with the device drivers of the respective peripherals. This communication often happens through device files (like /dev/sda). To open these "files", the program issues a system call, which strace will show. For example, `strace -e trace=open,openat,read,write,ioctl ./yourapp` (where ./yourapp stands for your program) prints each of those calls, so you can see which device files the application talks to.
I know that user applications can run only in user mode, which is for system security. In contrast, most drivers run in kernel mode, to access I/O devices. Nevertheless, some drivers run in user mode yet are still allowed to access I/O devices. So I have the following question: what is the main difference between drivers and user applications? Can't user applications be allowed to access I/O devices like some drivers do?
Thanks.
First of all, some background from this link:
Applications run in user mode, and core operating system components run in kernel mode. Many drivers run in kernel mode, but some drivers run in user mode.
When you start a user-mode application, Windows (or any OS) creates a process for the application. The process provides the application with a private virtual address space and a private handle table. Because an application's virtual address space is private, one application cannot alter data that belongs to another application.
In addition to being private, the virtual address space of a user-mode application is limited. A processor running in user mode cannot access virtual addresses that are reserved for the operating system. Limiting the virtual address space of a user-mode application prevents the application from altering, and possibly damaging, critical operating system data.
All code that runs in kernel mode shares a single virtual address space. This means that a kernel-mode driver is not isolated from other drivers and the operating system itself. If a kernel-mode driver accidentally writes to the wrong virtual address, data that belongs to the operating system or another driver could be compromised.
Also, from this link
Software drivers
Some drivers are not associated with any hardware device at all. For example, suppose you need to write a tool that has access to core operating system data structures, which can be accessed only by code running in kernel mode. You can do that by splitting the tool into two components. The first component runs in user mode and presents the user interface. The second component runs in kernel mode and has access to the core operating system data. The component that runs in user mode is called an application, and the component that runs in kernel mode is called a software driver. A software driver is not associated with a hardware device.
Also, software drivers always run in kernel mode. The main reason for writing a software driver is to gain access to protected data that is available only in kernel mode. But device drivers do not always need access to kernel-mode data and resources, so some device drivers run in user mode.
What is the main difference between drivers and user applications?
The difference is the same as that between a submarine and a ship. Drivers are hardware-dependent and operating-system-specific. They usually provide the interrupt handling required for any necessary asynchronous, time-dependent hardware interface. Hence, almost all of them run in kernel mode, whereas, as noted in the second quoted paragraph above, to prevent applications from damaging critical OS data, user applications are bound to run in user space.
Also, not all drivers communicate directly with a device. For a given I/O request (like reading data from a device), there are often several drivers, layered in a stack, that participate in the request. The one driver in the stack that communicates directly with the device is called the function driver; the drivers that perform auxiliary processing are called filter drivers.
Can't user applications be allowed to access I/O devices like some drivers do?
The application calls a function implemented by the operating system, and the operating system calls a function implemented by the driver. The driver knows how to communicate with the device hardware to get the data. After the driver gets the data from the device, it returns the data to the operating system, which returns it to the application.
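Seen from the application's side, that whole chain hides behind ordinary file operations. A small Linux-specific sketch (/dev/sda is just an example device file, and reading it requires root):

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[512];

    /* open() is a system call; the OS routes it to the disk driver stack. */
    int fd = open("/dev/sda", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* read() asks the OS, the OS asks the driver, the driver asks the disk. */
    ssize_t n = read(fd, buf, sizeof buf);
    if (n < 0) { perror("read"); close(fd); return 1; }

    printf("read %zd bytes through the driver stack\n", n);
    close(fd);
    return 0;
}
```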
Applications connect to I/O devices through APIs/interfaces presented by device drivers (or rather, by the OS). The OS handles most hardware/software interaction. Hardware vendors write "plugins/modules/drivers" that allow the OS to control their specific hardware. So, using the interfaces provided by the OS, you can write your application to access I/O devices.
So you can't have a user application directly access the hardware without the help of drivers, because every access to the device goes down through the driver hierarchy; device drivers are written in low-level languages that can communicate with the hardware, whereas user applications are written in high-level languages.
Also, check this answer to get more of an idea about driver address space in various OSes.
Your question lacks any concrete operating system tags, so I'll answer generically.
Yes, drivers can be very similar to a user application in a microkernel-based operating system. In a microkernel, a driver is usually just a userspace app that queries the kernel for some generic capabilities and then functions normally in userspace.
Take for example a hypothetical implementation of a pagein/pageout driver in such an OS.
When a page fault happens in user mode, the transition is made to the kernel as usual with synchronous exceptions, but a microkernel would queue the page-in request to a userspace driver via a local message queue.
After that (still in the exception handler), the faulting process is marked as blocked and taken off the run queue.
Then the userspace driver app does the necessary page-in and notifies the microkernel that the work is done, again via a local message queue. The kernel is then ready to reschedule the faulting process, as its memory mappings will be ready.
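A rough sketch of the userspace driver's side of that exchange; note that the message-queue functions and all other names here are made up for illustration, not taken from any real microkernel:

```c
#include <stdint.h>

/* Made-up message format describing one page fault. */
struct pagein_request {
    int       faulting_pid;  /* process to resume once the page is in */
    uintptr_t fault_addr;    /* virtual address that faulted */
};

/* Assumed IPC and paging primitives; hypothetical signatures. */
extern int   ipc_receive(int queue, void *msg, unsigned size);
extern int   ipc_send(int queue, const void *msg, unsigned size);
extern void *alloc_frame(void);
extern void  read_page_from_disk(uintptr_t fault_addr, void *frame);

#define PAGEIN_QUEUE 1  /* kernel -> driver: page-in requests */
#define DONE_QUEUE   2  /* driver -> kernel: completions */

/* Main loop of the userspace pager driver. */
void pager_main(void)
{
    struct pagein_request req;

    for (;;) {
        ipc_receive(PAGEIN_QUEUE, &req, sizeof req);  /* kernel queued a fault */
        void *frame = alloc_frame();                  /* get a free page frame */
        read_page_from_disk(req.fault_addr, frame);   /* do the actual page-in */
        ipc_send(DONE_QUEUE, &req, sizeof req);       /* kernel resumes process */
    }
}
```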
Such a message-based design incurs a performance overhead, but it has its advantages, mainly that drivers can be implemented and tested just like any other user app.
In a monolithic kernel such an arrangement is impossible, and that case is better described by the other answers.