In the IDT, each entry has some bits called "DPL" - Descriptor Privilege Level, 0 for the kernel and 3 for normal users (there may be more levels). I don't understand two things:
Is this the level required to run the interrupt handler code, or to trigger the event that leads to it? Because system_call has DPL=3, in user mode we can do "int 0x80". But in Linux only the kernel handles interrupts, so can we trigger the event but not handle it, even though we have the right CPL?
In Linux only the kernel handles interrupts, but when an interrupt (or trap) happens, what gets us into kernel mode?
Sorry for any mistakes, I am new to all this stuff and just trying to learn.
The IDT has three types of entries - trap gates, interrupt gates and task gates (which nobody uses). For trap gates and interrupt gates, the entry mostly describes the target CS and EIP of the interrupt handler.
The DPL field in an IDT entry determines the privilege level required to use the gate (or, to switch to the target CS and EIP described by the gate). Software can only use a gate via a software interrupt (e.g. int 0x80).
For IRQs and exceptions, hardware uses the gate, not software. Hardware has no privilege level and is always able to use a gate (regardless of which privilege level software is currently running at and regardless of the gate's DPL). This means that IRQ handlers should have DPL=0 (to ensure that software running at CPL=3 can't reach them via software interrupts).
When an interrupt handler is started, the CPU determines whether there will be a privilege level change (based on the privilege level that was in use beforehand and the target privilege level, which is almost always zero) and automatically switches privilege level where necessary. This is what causes the switch to CPL=0. Note: the CPU will also switch stacks and save the "return SS:ESP" on the new stack if a privilege level change was necessary.
The old x86 Intel architecture provided context switching (in the form of the TSS) at the hardware level. But I have read that Linux long ago "abandoned" the hardware context-switching functionality, as it was less optimised, less flexible and not available on all architectures.
What confuses me is how software (Linux) can control hardware operations (saving/restoring context). Linux can choose not to use the context set up by the hardware, but the hardware context switch would nevertheless happen (making the "optimisation" argument irrelevant).
Also, if Linux is not using the hardware context switch, how can the value of %eip (pointing to the next instruction in the user program) be saved and the kernel stack pointer restored by the kernel? (And vice versa.)
I think the kernel would need some support from the hardware to save the user program's %eip and switch the %esp register (user to kernel stack) even before the interrupt service routine starts.
If this support is indeed provided by the hardware, then how is Linux not using hardware context switches?
Terribly confused!!!
I learned that whenever a hardware interrupt occurs, it drives an interrupt line of the processor high (or low, depending on the processor architecture) to make the CPU stop what it was doing and serve the interrupt request.
But why does the same thing happen for a software interrupt? I mean, why set those interrupt pins of the processor? Why can't the OS handle a software interrupt like a function call, for example by performing these steps: 1. save the current state, 2. load the instruction pointer with the memory address of the interrupt service routine. Why does a software interrupt need to go that low-level to get served?
Software interrupts need not be mapped to any hardware pins.
For example, RSTx software interrupts in 8085 don't have any hardware pins and they are used to alter the program flow.
One big difference would be: interrupt routines execute in privileged mode whereas functions don't. This is one of the use cases of software interrupts: switching from user mode to privileged mode.
I'm using the Luminary LM3S8962 microcontroller and its included Library Guide, but this should be relevant to any ARM Cortex-M3 with a Nested Vectored Interrupt Controller (NVIC).
You can only register one interrupt service routine function with an entire GPIO port. A GPIO port typically has 8 pins on it, each of which can be configured with an interrupt. For each pin, you can test whether an interrupt "happened" on it (is pending), right? And for each pin you can clear a pending interrupt, right?
If a pin on the GPIO port triggers the ISR, then the processor is in the ISR. What happens if another pin on the same port triggers an interrupt while we're in the ISR? Assume the code detects which pins have pending interrupts.
- Is this ISR interrupted and a new one begun, with the same code but an updated PinInterruptStatus register? (I hope not)
- Is this ISR executed to completion, with the interrupt for the other pin executed immediately afterward? (I know the ARM Cortex-M3 implements tail-chaining of interrupts)
- Or must there be a while loop that loops until all the pins have been cleared, clearing each pin after it has been processed?
maybe this will help:
http://www.ti.com/lit/gpn/lm3s8962
As stated in the comment: generally, ISRs should take steps to prevent reentrancy. In something like a PIC, this could be as simple as disabling the interrupt at the "top" of the ISR and enabling it at the "bottom". The M3's NVIC is a bit more complicated. This white paper (http://www.arm.com/files/pdf/IntroToCortex-M3.pdf) states the following on p. 7:
The NVIC supports nesting (stacking) of interrupts, allowing an
interrupt to be serviced earlier by exerting higher priority. It also
supports dynamic reprioritisation of interrupts. Priority levels can
be changed by software during run time. Interrupts that are being
serviced are blocked from further activation until the interrupt
service routine is completed, so their priority can be changed without
risk of accidental re-entry.
The above discussion directly addresses the possibility of same-interrupt reentrancy, and it also introduces the concept of prioritization, whereby higher-priority interrupts can interrupt your ISR.
This reference is pretty good: http://infocenter.arm.com/help/topic/com.arm.doc.dui0552a/DUI0552A_cortex_m3_dgug.pdf. On p. 4-9, you'll find instructions to enable/disable interrupts. On p. 4-6, you'll find a description of the Interrupt Clear-Pending Registers. Using these, you can determine which interrupts are pending. If you really want to get fancy with interrupt enable/disable control, check out the BASEPRI and BASEPRI_MAX registers.
Having said that, I'm not sure I agree with your statement that your question is relevant to any Cortex-M3. Keil (my flavor of Cortex-M3) mentions that the EXTI (external interrupt controller) handles GPIO pin interrupts. Interestingly, the ARM documentation briefly discusses "EXTI", but does not refer to it as a "controller" the way the Keil STM32 documentation does. A quick google on "STM32 EXTI" yields lots of hits; a similar search on "Luminary EXTI" does not yield much. Given that, I'm guessing that this particular controller is one of the peripheral devices that ARM leaves to third parties.
This document bolsters that view: http://www.st.com/internet/com/TECHNICAL_RESOURCES/TECHNICAL_LITERATURE/REFERENCE_MANUAL/CD00171190.pdf. There are several AFIO_EXTI registers mentioned there. These permit the mapping of GPIO lines to interrupts. Unfortunately, I can't find anything similar in the Luminary documentation.
So... what does this mean? It looks like you only have port-level granularity for your interrupt. Thus, your ISR will have to determine which pin transitioned (assuming you are looking for edges). Good luck!
In the Cortex-M3, if two interrupts have the same priority (as all the GPIO pins on one port do), the first will not be interrupted; the interrupt that arrives later will be left in the pending state.
When a GPIO interrupt occurs, you can check the rising/falling GPIO interrupt status registers, IO0IntEnR/IO0IntEnF (depending on which edge is configured), for the bit corresponding to the pin that caused the interrupt.
I am doing some research trying to find the code in the Linux kernel that implements interrupt handling; in particular, I am trying to find the code responsible for handling the system timer.
According to http://www.linux-tutorial.info/modules.php?name=MContent&pageid=86
The kernel treats interrupts very similarly to the way it treats exceptions: all the general purpose registers are pushed onto the system stack and a common interrupt handler is called. The current interrupt priority is saved and the new priority is loaded. This prevents interrupts at lower priority levels from interrupting the kernel while it handles this interrupt. Then the real interrupt handler is called.
I am looking for the code that pushes all of the general purpose registers on the stack, and the common interrupt handling code.
At the very least, pushing the general purpose registers onto the stack is architecture dependent, so I'm looking for the code associated with the x86 architecture. At the moment I'm looking at version 3.0.4 of the kernel source, but any version is probably fine. I've started looking in kernel/irq/handle.c, but I don't see anything that looks like saving the registers; it just looks like it is calling the registered interrupt handler.
The 32-bit versions are in arch/x86/kernel/entry_32.S, the 64-bit versions in entry_64.S. Search for the various ENTRY macros that mark kernel entry points.
I am looking for the code that pushes all of the general purpose registers on the stack
Hardware stores the current state (which includes registers) before executing an interrupt handler. Code is not involved. And when the interrupt exits, the hardware reads the state back from where it was stored.
Now, code inside the interrupt handler may read and write the saved copies of registers, causing different values to be restored as the interrupt exits. That's how a context switch works.
On x86, the hardware only saves those registers that change before the interrupt handler starts running. On most embedded architectures, the hardware saves all registers. The reason for the difference is that x86 has a huge number of registers, and saving and restoring any not modified by the interrupt handler would be a waste. So the interrupt handler is responsible for saving and restoring any registers it voluntarily uses.
See the Intel® 64 and IA-32 Architectures Software Developer's Manual, starting on page 6-15.
In the Linux kernel, how can I determine whether interrupts are disabled? Enabling and disabling interrupts must be balanced, so is there an API for this other than irqs_disabled()?
Because enabling interrupts must be balanced with disabling them, if I force-enable them the kernel will report a warning. I know the depth ++ or -- may be useful.
Every interrupt service routine, and every piece of kernel code that disables interrupts, is required to reenable them. There should be exactly a one-to-one ratio.
Reenabling interrupts should not be conditional. If it is, there are deep problems in the logic of the added components.