Linux page table of the process

Intel Core i5, Ubuntu 16.04
I'm reading about memory paging here and am now trying to experiment with it. I wrote a simple assembly program to trigger a segmentation fault and ran it in gdb. Here it is:
section .text
global _start

_start:
    mov     rax, 0xFFFFFFFFFFFF0A31   ; low bytes are '1' (0x31) and '\n' (0x0A)
    mov     [val], eax                ; store the low 32 bits of rax at val
    mov     eax, 4                    ; 32-bit ABI sys_write
    mov     ebx, 1                    ; fd = stdout
    mov     ecx, val                  ; buffer = val
    mov     edx, 2                    ; length = 2 bytes ("1\n")
    int     0x80
    mov     eax, 1                    ; 32-bit ABI sys_exit
    int     0x80

segment .bss
dummy   resb 0xFFA
val     resb 1                        ; only 1 byte reserved here, but 4 bytes are stored above
I assemble and link this into a 64-bit ELF static executable.
As far as I have read, each process has its own page table, which the cr3 register points to. Now I would like to look at the page table myself. Is it possible to find info about a process's page table in Linux?

You would need a program compiled as a kernel module to read the page tables. I am sure there are projects out there that do this.
Take a look here: https://github.com/jethrogb/ptdump
It seems to describe what you want.

You can see all the mappings your process has in /proc/PID/smaps. This tells you what you can access without getting a SIGSEGV.
This is not the same thing as your cr3 page table, because the kernel doesn't always "wire" all your mappings. That is, a hardware page fault isn't always a SIGSEGV: the kernel's page-fault handler checks whether your process logically has that memory mapped (and fixes up the page tables if so), or whether you really did violate the memory protections.
After an mmap() system call, or on process startup to map the text / data / BSS segments, you logically have memory mapped, but Linux might have decided to be lazy and not provide any physical pages yet (e.g. maybe the pages aren't in the page cache yet, so there's no need to block now; the work is deferred until you actually touch that memory and take a page fault).
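To see that laziness in action, mincore(2) reports which pages of a mapping are currently resident; a rough sketch (error handling kept minimal):

// Sketch: anonymous mmap'd pages are typically not backed by physical pages
// until they are touched; mincore(2) reports per-page residency.
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static size_t resident_pages(void *addr, size_t len, size_t page)
{
    unsigned char vec[16];                 // one status byte per page; 16 pages here
    if (mincore(addr, len, vec) != 0) {
        perror("mincore");
        return 0;
    }
    size_t count = 0;
    for (size_t i = 0; i < len / page; i++)
        count += vec[i] & 1;               // bit 0 set means the page is resident
    return count;
}

int main(void)
{
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    size_t len  = 16 * page;

    char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    printf("resident after mmap : %zu of 16 pages\n", resident_pages(buf, len, page));

    memset(buf, 1, 4 * page);              // touch (write) the first 4 pages
    printf("resident after touch: %zu of 16 pages\n", resident_pages(buf, len, page));

    munmap(buf, len);
    return 0;
}

Typically the first count is 0 and the second is 4, because only the touched pages got physical frames.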
Or for BSS memory, multiple logical pages might start out copy-on-write mapped to the same physical page of zeros. Even though according to Unix semantics your memory is read-write, the page tables would actually have read-only mappings. Writing a page will page-fault, and the kernel will point that entry at a new physical page of zeros before returning to your process at the instruction which faulted (which will then be re-run and succeed).
Anyway, this doesn't directly answer your question, but might be part of what you actually want. If you want to look under the hood, then sure have fun looking at the actual page tables, but you generally don't need to do that. smaps can tell you how much of a mapping is resident in memory.
See also what does pss mean in /proc/pid/smaps for details on what the fields mean.
BTW, see Why in 64bit the virtual address are 4 bits short (48bit long) compared with the physical address (52 bit long)? for a nice diagram of the 4-level page table format (and how 2M / 1G hugepages fit in).
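For a rough feel of that format, this is how a 48-bit virtual address splits into the four 9-bit table indices plus the 12-bit page offset (just the index arithmetic, not a real table walk; the address is arbitrary):

// Sketch of how a 48-bit x86-64 virtual address is chopped up by the
// 4-level page-table walk: 9 bits per level (PML4, PDPT, PD, PT) plus a
// 12-bit offset into the 4K page.
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t vaddr = 0x00007f1234567abcULL;       // an arbitrary user-space address

    unsigned pml4_index = (vaddr >> 39) & 0x1ff;  // bits 47..39
    unsigned pdpt_index = (vaddr >> 30) & 0x1ff;  // bits 38..30 (for a 1G page, the rest is the offset)
    unsigned pd_index   = (vaddr >> 21) & 0x1ff;  // bits 29..21 (for a 2M page, the rest is the offset)
    unsigned pt_index   = (vaddr >> 12) & 0x1ff;  // bits 20..12
    unsigned page_off   = vaddr & 0xfff;          // bits 11..0

    printf("vaddr %#llx -> PML4[%u] PDPT[%u] PD[%u] PT[%u] + offset %#x\n",
           (unsigned long long)vaddr,
           pml4_index, pdpt_index, pd_index, pt_index, page_off);
    return 0;
}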

I wrote a simple assembly program to trigger a segmentation fault and ran it in gdb.... As far as I have read, each process has its own page table, which the cr3 register points to. Now I would like to look at the page table myself. Is it possible to find info about a process's page table in Linux?
The operating system maintains the page tables. They are protected from the kind of user-mode access you are attempting.
To understand how protection works you are going to need to understand the difference between processor modes (e.g., Kernel and User) and how the processor shifts between these modes.
In short, however, trying to write code to examine page tables as you are doing is a dead end. You are better off learning about page table structure from books rather than trying to write code. I suggest looking at the Intel manuals.
https://software.intel.com/en-us/articles/intel-sdm
Sadly, this material is rather dry, and Intel writes the worst processor manuals I have seen. I recommend looking exclusively at 64-bit mode; Intel's 32-bit material is overly complicated. If there is talk about segments, you are reading about 32-bit mode and can mostly ignore it. Intel's documentation also often fails to make clear whether an address is physical or logical, so you may need to look at online lectures for clarification.
To supplement this reading, you can look at the Linux Source code. https://github.com/torvalds/linux
To conclude, it appears you need two prerequisites to get where you want to go: (1) processor modes; and (2) page table structure.

Related

How are stack and heap segment managed in x86 without utilizing the segmentation mechanism?

From Understanding the Linux Kernel:
Segmentation has been included in 80x86 microprocessors to encourage programmers to split their applications into logically related entities, such as subroutines or global and local data areas. However, Linux uses segmentation in a very limited way. In fact, segmentation and paging are somewhat redundant, because both can be used to separate the physical address spaces of processes: segmentation can assign a different linear address space to each process, while paging can map the same linear address space into different physical address spaces. Linux prefers paging to segmentation for the following reasons:
Memory management is simpler when all processes use the same segment register values—that is, when they share the same set of linear addresses.
One of the design objectives of Linux is portability to a wide range of architectures; RISC architectures, in particular, have limited support for segmentation.
The 2.6 version of Linux uses segmentation only when required by the 80x86 architecture.
The x86-64 architecture does not use segmentation in long mode (64-bit mode). Since x86 always has segments, it is not possible to avoid them entirely; instead, four of the segment registers (CS, SS, DS, and ES) have their bases forced to 0, and limit checks are effectively disabled (the limit behaves as 2^64). Given that, the following questions have been raised:
Stack data (stack segment) and heap data (data segment) are mixed together; does that mean popping from the stack and incrementing the ESP register no longer works?
How does the operating system know which type of data (stack or heap) is at a specific virtual memory address?
How do different programs share the kernel code by sharing memory?
Stack data (stack segment) and heap data (data segment) are mixed together; does that mean popping from the stack and incrementing the ESP register no longer works?
As Peter states in the above comment, even though CS, SS, ES and DS are all treated as having zero base, this does not change the behavior of PUSH/POP in any way. It is no different than any other segment descriptor usage really. You could get overlapping segments even in 32-bit multi-segment mode if you point multiple selectors to the same descriptor. The only thing that "changes" in 64-bit mode is that you have a base forced by the CPU, and RSP can be used to point anywhere in addressable memory. PUSH/POP operations will work as usual.
How does the operating system know which type of data (stack or heap) is at a specific virtual memory address?
User-space programs can (and will) move the stack and heap around as they please. The operating system doesn't really need to know where the stack and heap are, but it can keep track of them to some extent, assuming the user-space application does everything by convention, that is, it uses the stack allocated by the kernel at program startup and treats the program break as the heap.
For the stack allocated by the kernel at program startup, or for a memory area obtained through mmap(2) with MAP_GROWSDOWN, the kernel tries to help by automatically growing the mapping downward when an access falls just below it (instead of treating it as a fatal fault), but this has its limits; exceed them and you get a genuine stack overflow. Manual MAP_GROWSDOWN mappings are rarely used in practice (see 1, 2, 3, 4). POSIX threads and other more modern implementations use fixed-size mappings for thread stacks.
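For example, with POSIX threads you pick the stack size up front and the resulting mapping is fixed; a minimal sketch (the 1 MiB size is arbitrary; compile with -pthread):

// Sketch: pthreads use a fixed-size mapping for each thread's stack,
// chosen at creation time; the kernel does not grow it on demand.
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    (void)arg;
    char local;                                  // lives somewhere inside the thread's fixed-size stack
    printf("thread stack is around %p\n", (void *)&local);
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    pthread_t tid;

    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 1 << 20);   // request a 1 MiB stack for this thread
    int err = pthread_create(&tid, &attr, worker, NULL);
    if (err != 0) {
        fprintf(stderr, "pthread_create failed: %d\n", err);
        return 1;
    }
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}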
"Heap" is a pretty abstract concept in modern user-space applications. Linux provides user-space applications with the basic ability to manipulate the program break through brk(2) and sbrk(2), but this is rarely in a 1-to-1 correspondence with what we got used to call "heap" nowadays. So in general the kernel does not know where the heap of an application resides.
How do different programs share the kernel code by sharing memory?
This is done through paging. The kernel's code and data are mapped into the upper half of every process's virtual address space, and those kernel mappings in each process's page tables point to the same physical pages, so a single copy of the kernel is shared by everyone. The entries are marked supervisor-only, so user-space code cannot touch them even though they are present. Classically there is no CR3 switch when entering the kernel through a syscall; CR3 only changes on a context switch from one task to another, when the kernel loads the next task's page global directory. (With kernel page-table isolation, introduced to mitigate Meltdown, each task additionally has a stripped-down user-mode copy of its page tables, and the kernel does switch CR3 between the two copies on every kernel entry and exit.)

Explain Linux commit message that patches/secures POP SS followed by a #BP interrupt (INT3)

This is in reference to CVE-2018-8897 (which appears related to CVE-2018-1087), described as follows:
A statement in the System Programming Guide of the Intel 64 and IA-32 Architectures Software Developer's Manual (SDM) was mishandled in the development of some or all operating-system kernels, resulting in unexpected behavior for #DB exceptions that are deferred by MOV SS or POP SS, as demonstrated by (for example) privilege escalation in Windows, macOS, some Xen configurations, or FreeBSD, or a Linux kernel crash. The MOV to SS and POP SS instructions inhibit interrupts (including NMIs), data breakpoints, and single step trap exceptions until the instruction boundary following the next instruction (SDM Vol. 3A; section 6.8.3). (The inhibited data breakpoints are those on memory accessed by the MOV to SS or POP to SS instruction itself.) Note that debug exceptions are not inhibited by the interrupt enable (EFLAGS.IF) system flag (SDM Vol. 3A; section 2.3). If the instruction following the MOV to SS or POP to SS instruction is an instruction like SYSCALL, SYSENTER, INT 3, etc. that transfers control to the operating system at CPL < 3, the debug exception is delivered after the transfer to CPL < 3 is complete. OS kernels may not expect this order of events and may therefore experience unexpected behavior when it occurs.
When reading this related git commit to the Linux kernel, I noted that the commit message states:
x86/entry/64: Don't use IST entry for #BP stack
There's nothing IST-worthy about #BP/int3. We don't allow kprobes
in the small handful of places in the kernel that run at CPL0 with
an invalid stack, and 32-bit kernels have used normal interrupt
gates for #BP forever.
Furthermore, we don't allow kprobes in places that have usergs while
in kernel mode, so "paranoid" is also unnecessary.
In light of the vulnerability, I'm trying to understand the last sentence/paragraph in the commit message. I understand that an IST entry refers to one of the (allegedly) "known good" stack pointers in the Interrupt Stack Table that can be used to handle interrupts. I also understand that #BP refers to a breakpoint exception (equivalent to INT3), and that kprobes is the kernel's dynamic instrumentation/debugging mechanism, which the commit says is not allowed in the small handful of places that run at ring 0 (CPL0) with an invalid stack.
But I'm completely lost in the next part, which may be because "usergs" is a typo and I'm simply missing what was intended:
Furthermore, we don't allow kprobes in places that have usergs while
in kernel mode, so "paranoid" is also unnecessary.
What does this statement mean?
usergs refers to the state where the gs base still holds the user-space value. The x86-64 swapgs instruction exchanges the current GS base with an internally saved value (the IA32_KERNEL_GS_BASE MSR), which is how the kernel finds the kernel stack from a syscall entry point. swapgs swaps only the cached gs base, rather than reloading anything from the GDT based on the gs selector value itself. (wrgsbase, or the MSRs, can change the GS base independently of the GDT/LDT.)
AMD's design is that syscall doesn't change RSP to point to the kernel stack, and doesn't read/write any memory, so syscall itself can be fast. But then you enter the kernel with all registers holding their user-space values. See Why does Windows64 use a different calling convention from all other OSes on x86-64? for some links to mailing list discussions between kernel devs and AMD architects in ~2000, tweaking the design of syscall and swapgs to make it usable before any AMD64 CPUs were sold.
Apparently keeping track of whether GS currently holds the kernel or the user value is tricky for error handling: there's no way to say "I want kernel gs now"; you have to know whether or not to run swapgs in any given error-handling path. The only instruction is a swap, not a "set it to the kernel value" or "set it to the user value".
Read comments in arch/x86/entry/entry_64.S e.g. https://github.com/torvalds/linux/blob/9fb71c2f230df44bdd237e9a4457849a3909017d/arch/x86/entry/entry_64.S#L1267 (from current Linux) which mentions usergs, and the next block of comments describes doing a swapgs before jumping to some error handling code with kernel gsbase.
IIRC, the Linux kernel [gs:0] holds a thread info block, at the lowest addresses of the kernel stack for that thread. The block includes the kernel stack pointer (as an absolute address, not relative to gs).
I wouldn't be surprised if this bug basically boils down to tricking the kernel into loading the kernel rsp from a user-controlled gs base, or otherwise screwing up swapgs's dead-reckoning so the kernel has the wrong gs base at some point.
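The kernel's gs state can't be observed from user space, but gs-relative addressing with a software-settable base can be illustrated there. A minimal user-space sketch, assuming x86-64 Linux, gcc/clang inline asm, and the raw arch_prctl(ARCH_SET_GS) syscall (the block layout here is made up for illustration, not the kernel's):

// Sketch: set our own GS base and read data through gs:0 / gs:8, analogous
// in spirit to how the kernel reaches per-CPU / per-thread data through its
// own (swapgs-selected) GS base.
#define _GNU_SOURCE
#include <stdio.h>
#include <stdint.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <asm/prctl.h>

static uint64_t block[2] = { 0xdeadbeefcafebabeULL, 42 };

int main(void)
{
    // Point the GS base at our block (raw syscall; no libc wrapper needed).
    if (syscall(SYS_arch_prctl, ARCH_SET_GS, (unsigned long)block) != 0) {
        perror("arch_prctl(ARCH_SET_GS)");
        return 1;
    }

    uint64_t first, second;
    __asm__ volatile("mov %%gs:0, %0" : "=r"(first));    // loads block[0] via gs:0
    __asm__ volatile("mov %%gs:8, %0" : "=r"(second));   // loads block[1] via gs:8

    printf("gs:0 = %#llx, gs:8 = %llu\n",
           (unsigned long long)first, (unsigned long long)second);
    return 0;
}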

Linux Segmentation

Recently I read the book Understanding the Linux Kernel. There is a passage that confused me a lot. Can anybody explain it to me?
As stated earlier, the Current Privilege Level of the CPU indicates whether the processor is in User or Kernel Mode and is specified by the RPL field of the Segment Selector stored in the cs register. Whenever the CPL is changed, some segmentation registers must be correspondingly updated. For instance, when the CPL is equal to 3 (User Mode), the ds register must contain the Segment Selector of the user data segment, but when the CPL is equal to 0, the ds register must contain the Segment Selector of the kernel data segment.
A similar situation occurs for the ss register. It must refer to a User Mode stack inside the user data segment when the CPL is 3, and it must refer to a Kernel Mode stack inside the kernel data segment when the CPL is 0. When switching from User Mode to Kernel Mode, Linux always makes sure that the ss register contains the Segment Selector of the kernel data segment.
When saving a pointer to an instruction or to a data structure, the kernel does not need to store the Segment Selector component of the logical address, because the ss register contains the current Segment Selector.
As an example, when the kernel invokes a function, it executes a call assembly language instruction specifying just the Offset component of its logical address; the Segment Selector is implicitly selected as the one referred to by the cs register. Because there is just one segment of type “executable in Kernel Mode,” namely the code segment identified by __KERNEL_CS, it is sufficient to load __KERNEL_CS into cs whenever the CPU switches to Kernel Mode. The same argument goes for pointers to kernel data structures (implicitly using the ds register), as well as for pointers to user data structures (the kernel explicitly uses the es register).
My understanding is that the ss register contains the Segment Selector pointing to the base of the stack. Does the ss register have anything to do with a pointer to an instruction or to a data structure? If it doesn't, why mention it here?
I have finally worked out the meaning of that paragraph. This description is really about how segmentation is used in Linux, and it has an implicit point of comparison: systems that rely on segmentation rather than paging. How do those systems work? Each process has different segment selectors in its logical addresses, pointing to different entries in the Global Descriptor Table, and the segments do not necessarily share the same base. In that case, when you save a pointer to an instruction or a data structure, you really do have to keep track of its segment. A logical address is a 16-bit segment selector plus a 32-bit offset, and if you save only the offset, you cannot find the pointer again, because there are many different segments in the GDT.
Things are different in Linux. All segment selectors have the same base, 0, so a pointer's offset alone is enough to locate it in memory. You might ask: does that still work when there are many processes running? It does, because each process has its own page table, which can map the same virtual addresses to different physical addresses. Thanks to everyone who took an interest in this question!
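A quick sketch of that last point: after a fork(), parent and child see the same virtual address for a global variable but, thanks to separate page tables and copy-on-write, end up with different contents behind it.

// Sketch: two processes can use the same virtual address for different data,
// because each has its own page tables. After fork(), parent and child print
// the same address of a global variable, yet see different values once the
// child writes to it (copy-on-write gives the child its own physical page).
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int shared_value = 1;

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {                       // child
        shared_value = 2;                 // write triggers copy-on-write
        printf("child : &shared_value = %p, value = %d\n",
               (void *)&shared_value, shared_value);
        _exit(0);
    }
    wait(NULL);                           // parent: wait so the output isn't interleaved
    printf("parent: &shared_value = %p, value = %d\n",
           (void *)&shared_value, shared_value);
    return 0;
}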
My understanding is that the ss register contains the Segment Selector pointing to the base of the stack.
Right
Does the ss register have anything to do with a pointer to an instruction or to a data structure?
No, the ss register does not have anything to do with instructions that access the data segment.
If it doesn't, why mention it here?
Because the ss register influences the result of instructions that operate on the stack (e.g. push, pop, etc.).
They are just explaining that Linux also maintains two stack segments (one for user mode and one for kernel mode), as well as two data segments (one for user mode and one for kernel mode).
Just as with the data segment, if ss were not updated when switching from user mode to kernel mode, the ss selector would still point to the user stack and the kernel would work with the user stack (which would be very bad, right?). So the kernel takes care of updating the ss register as well as the ds register.
NB: let's recall that an instruction may access/modify bytes in the data segment (e.g. a mov to a memory address) as well as in the stack segment (push, pop, etc.).

Segmentation registers use

I am trying to understand how memory management works at a low level and have a couple of questions.
1) A book about assembly language by Kip R. Irvine says that in real mode, the first three segment registers are loaded with the base addresses of the code, data, and stack segments when the program starts. This is a bit ambiguous to me. Are these values specified manually, or does the assembler generate instructions to write the values into the registers? If it happens automatically, how does it find out what the size of these segments is?
2) I know that Linux uses a flat linear model, i.e. it uses segmentation in a very limited way. Also, according to "Understanding the Linux Kernel" by Daniel P. Bovet and Marco Cesati, there are four main segments in the GDT: user data, user code, kernel data, and kernel code. All four segments have the same size and base address. I do not understand why there is a need for four of them if they differ only in type and access rights (they all produce the same linear address, right?). Why not use just one of them and write its descriptor to all the segment registers?
3) How do operating systems that do not use segmentation divide programs into logical segments? For example, how do they differentiate the stack from the code without segment descriptors? I read that paging can be used to handle such things, but I don't understand how.
You must have read some really old books, because nobody programs for real mode anymore ;-) In real mode, the physical address of a memory access is physical address = segment register * 0x10 + offset, the offset being a value inside one of the general-purpose registers. Because these registers are 16 bits wide, a segment is 64 KB long and there is nothing you can do about its size, simply because there is no size attribute. With the * 0x10 multiplication, 1 MB of memory becomes available, but there are overlapping combinations depending on what you put in the segment registers and the address register. I haven't compiled any code for real mode, but I think it's up to the OS to set up the segment registers during binary loading, just like a loader allocates pages when loading an ELF binary. However, I have compiled bare-metal kernel code, and I had to set up these registers myself.
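A quick worked example of that arithmetic (the segment:offset values are arbitrary; note that two different pairs can name the same physical byte):

// Sketch of real-mode address arithmetic: physical = segment * 0x10 + offset.
#include <stdio.h>
#include <stdint.h>

static uint32_t real_mode_phys(uint16_t segment, uint16_t offset)
{
    return (uint32_t)segment * 0x10 + offset;
}

int main(void)
{
    // 0x1234:0x0005 and 0x1000:0x2345 overlap: both name physical address 0x12345.
    printf("0x1234:0x0005 -> %#x\n", real_mode_phys(0x1234, 0x0005));
    printf("0x1000:0x2345 -> %#x\n", real_mode_phys(0x1000, 0x2345));
    return 0;
}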
Four segments are mandatory in the flat model because of architecture constraints. In protected mode the segment registers no longer contain the segment base address, but a segment selector, which is basically an index into the GDT. Depending on the value of the segment selector, the CPU will be at a given privilege level; this is the CPL (Current Privilege Level). The segment selector points to a segment descriptor, which has a DPL (Descriptor Privilege Level), and the DPL becomes the CPL when the segment register is loaded with this selector (at least this is true for the code-segment selector). Therefore you need at least a pair of segment selectors to differentiate the kernel from userland. Moreover, segments are either code segments or data segments, so you eventually end up with four segment descriptors in the GDT.
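For reference, this is roughly how an 8-byte code/data segment descriptor is laid out in the GDT; the field names are made up for illustration, but the bit layout follows the Intel SDM (the DPL lives in bits 5..6 of the access byte):

// Sketch of an 8-byte x86 code/data segment descriptor as stored in the GDT.
#include <stdio.h>
#include <stdint.h>

struct gdt_descriptor {
    uint16_t limit_low;        // limit bits 0..15
    uint16_t base_low;         // base  bits 0..15
    uint8_t  base_mid;         // base  bits 16..23
    uint8_t  access;           // type, S, DPL (bits 5..6), present bit
    uint8_t  limit_high_flags; // limit bits 16..19 + AVL/L/D/G flags
    uint8_t  base_high;        // base  bits 24..31
} __attribute__((packed));

int main(void)
{
    struct gdt_descriptor d = { .access = 3 << 5 };   // example: DPL = 3 (user)
    printf("descriptor size = %zu bytes, DPL = %u\n",
           sizeof d, (d.access >> 5) & 3);
    return 0;
}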
I don't have any example of a serious OS that makes real use of segmentation, simply because segmentation is still present only for backward compatibility. Using the flat-model approach is nothing but a means to get rid of it. Anyway, you're right, paging is far more efficient and versatile, and it is available on almost all architectures (the concepts at least). I can't explain paging internals here, but all the information you need is in the excellent Intel manual: Intel® 64 and IA-32 Architectures Software Developer's Manual, Volume 3A: System Programming Guide, Part 1.
Expanding on Benoit's answer to question 3...
The division of programs into logical parts such as code, constant data, modifiable data and stack is done by different agents at different points in time.
First, your compiler (and linker) creates executable files where this division is specified. If you look at a number of executable file formats (PE, ELF, etc), you'll see that they support some kind of sections or segments or whatever you want to call it. Besides addresses and sizes and locations within the file, those sections bear attributes telling the OS the purpose of these sections, e.g. this section contains code (and here's the entry point), this - initialized constant data, that - uninitialized data (typically not taking space in the file), here's something about the stack, over there is the list of dependencies (e.g. DLLs), etc.
Next, when the OS starts executing the program, it parses the file to see how much memory the program needs, where and what memory protection is needed for every section. The latter is commonly done via page tables. The code pages are marked as executable and read-only, the constant data pages are marked as not executable and read-only, other data pages (including those of the stack) are marked as not executable and read-write. This is how it ought to be normally.
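You can actually watch this from a running program: glibc's dl_iterate_phdr() lists the loadable ELF program segments and their R/W/X flags, which is roughly the information the loader and kernel base the page protections on. A Linux/glibc-specific sketch:

// Sketch: list the PT_LOAD program segments (and their protection flags)
// of the running executable and its shared objects.
#define _GNU_SOURCE
#include <link.h>
#include <stdio.h>

static int show_phdrs(struct dl_phdr_info *info, size_t size, void *data)
{
    (void)size; (void)data;
    printf("object \"%s\"\n", info->dlpi_name[0] ? info->dlpi_name : "(main executable)");
    for (int i = 0; i < info->dlpi_phnum; i++) {
        const ElfW(Phdr) *ph = &info->dlpi_phdr[i];
        if (ph->p_type != PT_LOAD)
            continue;                       // only loadable segments get mapped
        printf("  PT_LOAD at %#lx, size %#lx, flags %c%c%c\n",
               (unsigned long)(info->dlpi_addr + ph->p_vaddr),
               (unsigned long)ph->p_memsz,
               (ph->p_flags & PF_R) ? 'r' : '-',
               (ph->p_flags & PF_W) ? 'w' : '-',
               (ph->p_flags & PF_X) ? 'x' : '-');
    }
    return 0;                               // keep iterating over all loaded objects
}

int main(void)
{
    dl_iterate_phdr(show_phdrs, NULL);
    return 0;
}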
Often times programs need read-write and, at the same time, executable regions for dynamically generated code or just to be able to modify the existing code. The combined RWX access can be either specified in the executable file or requested at run time.
There can be other special pages such as guard pages for dynamic stack expansion, they're placed next to the stack pages. For example, your program starts with enough pages allocated for a 64KB stack and then when the program tries to access beyond that point, the OS intercepts access to those guard pages, allocates more pages for the stack (up to the maximum supported size) and moves the guard pages further. These pages don't need to be specified in the executable file, the OS can handle them on its own. The file should only specify the stack size(s) and perhaps the location.
If there's no hardware or code in the OS to distinguish code memory from data memory or to enforce memory access rights, the division is very formal. 16-bit real-mode DOS programs (COM and EXE) didn't have code, data and stack segments marked in any special way. COM programs had everything in one common 64KB segment and started with IP=0x100 and SP=0xFFxx; the order of code and data inside could be arbitrary, and they could intertwine practically freely. DOS EXE files only specified the starting CS:IP and SS:SP locations, and beyond that the code, data and stack segments were indistinguishable to DOS. All it needed to do was load the file, perform relocation (for EXEs only), set up the PSP (Program Segment Prefix, containing the command-line parameters and some other control info), and load SS:SP and CS:IP. It could not protect memory because memory protection isn't available in real-address mode, and so the 16-bit DOS executable formats were very simple.
Wikipedia is your friend in this case. http://en.wikipedia.org/wiki/Memory_segmentation and http://en.wikipedia.org/wiki/X86_memory_segmentation should be good starting points.
I'm sure there are others here who can personally provide in-depth explanations, though.

program life in terms of paged segmentation memory

I have a confused notion of how segmentation and paging work on x86 Linux machines, and I will be glad if someone clarifies all the steps involved from start to end.
x86 uses a paged segmentation technique for memory management.
Can anyone please explain what happens from the moment an executable ELF file is loaded from the hard disk into main memory to the time it dies? When compiled, the executable has different sections in it (text, data, stack, heap, bss). How will these be loaded? How will they be set up under the paged segmentation memory technique?
I want to know how the page tables get set up for the loaded program, how the GDT gets set up, and how the registers are loaded. Also, why is it said that logical addresses (the ones processed by the segmentation unit of the MMU) are 48 bits (a 16-bit segment selector + a 32-bit offset) when it is a 32-bit machine? Where are the other 16 bits stored? If anything accessed from RAM must be 32 bits (4 bytes), how are the remaining 16 bits accessed (to be loaded into the segment registers)?
Thanks in advance. The question touches a lot of things, but I wanted clarification about the entire life cycle of an executable. I will be glad if someone answers and starts a discussion on this.
Unix traditionally has implemented protection via paging. The 286+ provides segmentation, and the 386+ provides paging. Everyone uses paging; few make any real use of segmentation.
In x86, every memory operand has an implicit segment (so the address is really a 16-bit selector + a 32-bit offset), depending on the registers used: if you access [ESP + 8] or [EBP - 4], the implied segment register is SS; for ordinary accesses through other registers, such as [ESI] or [EDI + 4], the implied segment register is DS (ES is implied only for the destination of the string instructions, e.g. MOVS/STOS). You can override the default with a segment-override prefix.
Linux, and virtually every modern x86 OS, uses a flat memory model (or something similar). Under a flat memory model each segment provides access to the whole of memory, with a base of 0 and a limit of 4 GB, so you don't have to worry about the complications segmentation brings. Basically there are four segments: kernel-space code (RX), kernel-space data (RW), user-space code (RX), and user-space data (RW).
An ELF file consists of some headers that point to "program segments" and "sections". Sections are used for linking; program segments are used for loading. Program segments are mapped into memory via mmap(), which sets up page-table entries with the appropriate permissions.
Now, older x86 CPUs' paging mechanism only provided R/W access control (read permission implied execute permission), while segmentation provided R/W/X access control. The final permission takes both segmentation and paging into account (e.g. RW data segment + read-only page = read-only, while RX code segment + read-only page = read-and-execute).
So there are some patches that provide execution prevention via segmentation: e.g. OpenWall provided a non-executable stack by shrinking the code segment (the one with execute permission) and adding special emulation in the page-fault handler for anything that legitimately needed to execute from a high memory address (e.g. GCC trampolines, small pieces of code generated on the stack to efficiently implement nested functions).
There's no such thing as paged segmentation, not in the official documentation at least. There are two different mechanisms working together and more or less independently of each other:
Translation of a logical address of the form "16-bit segment selector value : 16/32/64-bit segment offset value" (that is, a pair of two numbers) into a 32/64-bit virtual address.
Translation of the virtual address into a 32/64-bit physical address.
Logical addresses are what your applications operate with directly. Then follows the above two-step translation of them into what the RAM will understand: physical addresses.
In the first step the GDT (or the LDT, depending on the selector value) is indexed by the selector to find the relevant segment's base address and size. The virtual address is the sum of the segment base address and the offset. The segment size and other fields of the segment descriptor are there to provide protection.
In the second step the page tables are indexed by different parts of the virtual address and the last indexed table in the hierarchy gives the final, physical address that goes out on the address bus for the RAM to see. Just like with segment descriptors, page table entries contain not only addresses but also protection control bits.
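To make the first step concrete, here is a small sketch of decoding a selector and forming the linear address; the descriptor base is taken as 0, as in the flat setup described below, and 0x2b is the value Linux uses for the user data selector on x86-64:

// Sketch of step 1 (segmentation): decode a selector, then linear = base + offset.
#include <stdio.h>
#include <stdint.h>

struct selector_fields { unsigned index, table_indicator, rpl; };

static struct selector_fields decode_selector(uint16_t sel)
{
    struct selector_fields f;
    f.index           = sel >> 3;        // which GDT/LDT entry
    f.table_indicator = (sel >> 2) & 1;  // 0 = GDT, 1 = LDT
    f.rpl             = sel & 3;         // requested privilege level
    return f;
}

int main(void)
{
    uint16_t sel = 0x2b;                 // Linux x86-64 user data selector (GDT entry 5, RPL 3)
    struct selector_fields f = decode_selector(sel);

    uint64_t segment_base = 0;           // flat model: base is 0
    uint64_t offset       = 0x00401000;  // some offset inside the segment
    uint64_t linear       = segment_base + offset;

    printf("selector %#x -> index %u, %s, RPL %u\n",
           (unsigned)sel, f.index, f.table_indicator ? "LDT" : "GDT", f.rpl);
    printf("linear address = base %#llx + offset %#llx = %#llx\n",
           (unsigned long long)segment_base, (unsigned long long)offset,
           (unsigned long long)linear);
    return 0;
}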
That's about it on the mechanisms.
Now, in many x86 OSes the segment selectors that are used for applications are fixed: they are the same in all of them, they never change, and they point to segment descriptors that have base addresses equal to 0 and sizes equal to the possible maximum (e.g. 4 GB in non-64-bit modes). Such a GDT setup effectively means that the first step does no useful work and the offset part of the logical address translates into a numerically equal virtual address.
This makes the segment selector values practically useless. They still have to be loaded into the CPU's segment registers (in non-64-bit modes into at least CS, SS, DS and ES), but beyond that point they can be forgotten about.
This all (except Linux-related details and the ELF format) is explained in or directly follows from Intel's and AMD's x86 CPU manuals. You'll find many more details there.
Perhaps read the Assembly HOWTO. When a Linux process starts executing an ELF executable via the execve system call, it is essentially (sort of) mmap-ing some segments (and initializing registers and a tiny part of the stack). Read also the SVR4 x86 ABI supplement and its x86-64 variant. Don't forget that a Linux process only sees memory mappings for its own address space and only cares about virtual memory.
There are many good books on operating system (OS) kernels, notably by A. Tanenbaum and by M. Bach, and some on the Linux kernel.
NB: segment registers are nearly unused on Linux.

Resources