ARM Linux kernel page table

Ref. Linux kernel ARM Translation table base (TTB0 and TTB1)
I have a further doubt/query on the topic discussed in the previous link:
0x0 to 0xbfffffff is the lower part of the address space (for user processes), managed by the page table pointed to by TTBR0; it contains the page table of the current process.
Ref. arch/arm/include/asm/pgtable-2level.h: PTRS_PER_PGD = 2048, PTRS_PER_PMD = 1, PTRS_PER_PTE = 512
0xc0000000 to 0xffffffff is the upper part of the address space (OS and memory-mapped I/O), managed/translated by the page table pointed to by TTBR1.
The TTBR1 table is fixed in size and alignment (16 KB). Each level 1 entry is 32 bits and represents a 1 MB section. This is the swapper_pg_dir page table (see System.map), which is placed 16 KB below the actual kernel text address.
Is it the case that the first 768 entries in swapper_pg_dir are 0 (0x0 to 0xbfffffff, for user processes) and only entries 768 to 1024 (0xc0000000 to 0xffffffff, for the OS and memory-mapped I/O) are valid?
Would anyone like to share some sample kernel-space code (a kernel module) to browse this swapper_pg_dir PGD?

Because of how the ARM MMU was designed, the hardware TTBR0/TTBR1 split can only be placed at certain power-of-two boundaries, which does not match the split Linux wants.
Most Linux kernels use a 3:1 mapping on ARM (3 GB user space : 1 GB kernel space).
This means that 0-0xBFFFFFFF is user space while 0xC0000000-0xFFFFFFFF is kernel space.
Now, for the hardware memory translations, only TTBR0 is used. TTBR1 only holds the address of the initial swapper page table (which contains all the kernel mappings) and isn't really used for virtual address translations. TTBR0 holds the address of the page directory currently in use (the page table the hardware walks for translations). Each user process has its own page tables, and on each process switch TTBR0 is changed to point to the page tables of the new current process (they are all located in kernel space).
For example, for each new user process the kernel creates a new page directory, copies all the kernel mappings (the first-level entries covering 3-4 GB) from the swapper page table into the new one, and clears the user entries (those covering 0-3 GB). It then sets TTBR0 to the base address of this page directory and flushes the TLB/caches to install the new address space. The swapper page table is also always kept up to date with changes to the kernel mappings.
For your question:
Simplified, at the hardware level the first-level page table has 4096 entries. Each entry represents 1 MB of virtual address space, totalling 4 GB. Entries 0-3071 represent user space and entries 3072-4095 represent kernel space.
The swapper page table is usually located at addresses 0xC0004000-0xC0008000 (4096 entries * 4 bytes per entry = 16384 bytes = 16 KB = 0x4000). By examining the memory at 0xc0004000-0xc0007000 you will find the entries for user space (empty), and from 0xc0007000-0xc0008000 the kernel entries. I use gdb with the command x /100x 0xc0007000 to examine the first 100 kernel entries. You can then consult the technical reference manual for your platform to decipher the page table attributes.
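Since the question also asked for sample kernel code: here is a minimal sketch (not a polished, version-proof module) that walks the kernel half of the first-level table with pgd_offset_k() and prints any non-empty entries. It assumes a 32-bit ARM kernel with the classic 2-level page table and 3G/1G split; the macro names are from mainline, but details vary between kernel versions, and on some kernels init_mm/swapper_pg_dir are not exported to loadable modules, so you may need to build this into the kernel instead.

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <asm/pgtable.h>

static int __init pgd_dump_init(void)
{
    unsigned long addr;

    /* Walk the kernel portion of the address space (PAGE_OFFSET up to
     * 4 GB) in PGDIR_SIZE steps and print the raw first-level entries. */
    for (addr = PAGE_OFFSET; ; addr += PGDIR_SIZE) {
        pgd_t *pgd = pgd_offset_k(addr);
        unsigned long long val = (unsigned long long)pgd_val(*pgd);

        if (val)                          /* skip empty entries */
            pr_info("va %08lx: first-level entry %08llx\n", addr, val);

        if (addr + PGDIR_SIZE < addr)     /* wrapped past 4 GB: done */
            break;
    }
    return 0;
}

static void __exit pgd_dump_exit(void)
{
}

module_init(pgd_dump_init);
module_exit(pgd_dump_exit);
MODULE_LICENSE("GPL");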
If you want to learn more about the Linux kernel, I recommend using QEMU to simulate the BeagleBoard together with gdb to examine and debug the source code. I did this to learn how the kernel builds its page tables during initialization.

Related

What about the many addresses not present in the /proc/$pid/maps file?

Brief Version:
What is the status of the addresses not present in the maps file? Do they belong to unallocated virtual pages, to pages allocated from an anonymous file, or to something else?
Detailed Version
I'm learning about virtual memory. In my book (CS:APP) I learned that all virtual pages can be divided into three sets: unallocated, allocated but not cached, and allocated and cached. I have some questions: what are allocated and unallocated pages, and when are pages allocated? Also, do the stack and heap belong to the allocated or the unallocated pages, or are they allocated only when used?
Trying to answer these questions, I read the /proc/$pid/maps file, thinking I could get everything I wanted from it. In my mind the file contains all of the memory mappings. But there is no information about whether a page is cached (I know this may not be visible from user mode...), and are the pages that don't appear there unallocated?
Honestly, I don't know much about the maps file. What I do know is that the information on every page is stored in page structures at all times. I'll take x86-64 as an example.
For x86-64 on Linux you have the Page Global Directory (PGD), the Page Upper Directory (PUD), the Page Middle Directory (PMD) and the page table (PTE level). The base address of the PGD is stored in the CR3 register. The PGD contains the addresses of the PUDs, the PUDs contain the addresses of the PMDs, the PMDs contain the addresses of the page tables, and the page tables contain the addresses of the physical pages.
A virtual address is 64 bits wide, of which only the 48 least significant bits are used, and it is split into 5 parts. The 12 least significant bits are the offset within the physical page. The next chunk of 9 bits is the index into the page table, the next chunk the index into the PMD, and so on. For example, say you have the virtual address 0x0000000000000123. The MMU in the CPU translates it by looking at entry (index) 0 of the PGD, entry 0 of the PUD, entry 0 of the PMD, entry 0 of the page table, and finally offset 0x123 within the actual physical page in RAM.
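A tiny user-space illustration of that arithmetic (ordinary C, not kernel code; it just extracts the 9-bit indices and the 12-bit offset described above):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t va = 0x0000000000000123ULL;   /* example virtual address */

    unsigned pgd_idx = (va >> 39) & 0x1ff; /* bits 47..39 */
    unsigned pud_idx = (va >> 30) & 0x1ff; /* bits 38..30 */
    unsigned pmd_idx = (va >> 21) & 0x1ff; /* bits 29..21 */
    unsigned pte_idx = (va >> 12) & 0x1ff; /* bits 20..12 */
    unsigned offset  = va & 0xfff;         /* bits 11..0  */

    printf("pgd=%u pud=%u pmd=%u pte=%u offset=0x%x\n",
           pgd_idx, pud_idx, pmd_idx, pte_idx, offset);
    return 0;
}

For 0x123 every index is 0 and the offset is 0x123, matching the walk described above.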
At boot, the kernel makes checks to determine how much memory is available. It then builds its kernel structures accordingly.
When the kernel boots it will mark all pages as unallocated in its own structures (except those it needs for itself). The struct page is important for this: the kernel has one of these C structures for every physical page in the system (https://linux-kernel-labs.github.io/refs/heads/master/labs/memory_mapping.html and https://elixir.bootlin.com/linux/v4.6/source/include/linux/mm_types.h). This structure tells the kernel whether the page is allocated or not.
Each physical page in the system has a struct page associated with it to keep track of whatever it is we are using the page for at the moment. Note that we have no way to track which tasks are using a page, though if it is a pagecache page, rmap structures can tell us who is mapping it.
At first the pages are mostly unallocated. When you start a new process by launching an executable as a user of the system, pages are allocated for that process. On Linux, executables are ELF files. ELF is a conventional format which separates code into loadable segments. Each segment has a virtual address at which it is to be loaded into the virtual address space.
Let's say you have an ELF file with one segment which should be loaded at virtual address 0x400000. When you launch that ELF executable, the Linux kernel calls functions which look at the size of the code and allocate pages accordingly. The kernel looks at its structures and uses its allocation algorithms to determine where in RAM the process will be placed. It then sets up the page tables according to where the virtual addresses for that process should land in actual physical memory.
What's important to understand is that each CPU core in the system runs only one process at a time. Each core has its own CR3, which points to the page tables of the process it is currently running. When a process switch occurs on a core, CR3 is changed so that translations go through a different set of page tables in RAM. The same virtual address can therefore point anywhere in RAM, depending on how the page tables are set up.
The kernel holds a task_struct for every process running in the system. Its mm field points to an mm_struct, whose pgd field is a pointer to the PGD of the process. Each process has its very own PGD. The first entry of the PGD holds the address of a PUD, and so on down the tree; with this single pointer, the kernel can reach every table belonging to the process and modify them at will.
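As a hedged sketch of how kernel code can reach those structures (field names are from mainline Linux; the helper name and the printing are purely illustrative):

#include <linux/sched.h>
#include <linux/mm.h>
#include <linux/printk.h>

/* Illustrative only: print where the current process's PGD lives. */
static void show_current_pgd(void)
{
    struct mm_struct *mm = current->mm;   /* memory descriptor */

    if (!mm)                              /* kernel threads have no user mm */
        return;

    /* mm->pgd is the kernel virtual address of the PGD; the physical
     * address of the same table is what ends up in CR3 on x86. */
    pr_info("pid %d: pgd virt %p, phys %lx\n",
            current->pid, mm->pgd, (unsigned long)__pa(mm->pgd));
}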
While a process is running, it can ask for more memory. This is called dynamic memory allocation. The kernel cannot know in advance how much memory the process will ask for, since the requests are dynamic (made while the code is executing). When the process asks for more memory, the kernel decides, based on an allocation algorithm, which pages to give it, and marks those pages as allocated to that process. The mm_struct pointed to by task_struct->mm is the memory descriptor for the process (https://manybutfinite.com/post/how-the-kernel-manages-your-memory/): it lets the kernel know what memory the process is using. The process itself doesn't need that information; it just has to ask the operating system for memory properly and not jump somewhere in RAM it doesn't own.
You ask about the heap and the stack. The stack for a process is set up when the process starts and has a fixed maximum size (see RLIMIT_STACK); its pages are typically faulted in on demand as the stack grows. If you overflow the stack, the resulting page fault cannot be satisfied and the kernel kills your process (SIGSEGV). Each CPU core has a special register called RSP, the stack pointer, which points to the top of the stack (the stack grows downward toward low memory). When the kernel sets up the stack for a process you launch, it points this register at the top of the stack. The stack pointer holds a virtual address, so it is translated through the page tables just like any other address.
The heap has no special register the way the stack has RSP. It grows only when the process asks for more memory during execution: the C library allocator (malloc, or new in C++) requests memory from the kernel through system calls such as brk and mmap. The kernel does know in advance how much static memory a process requires, because that is all recorded in the ELF executable; the sizes of the code and static data are fixed when the program is compiled and linked. The only time the kernel needs to hand new memory to a process is when the process actually asks for it. Variables that are not dynamically allocated live in the static data sections or on the stack.
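A tiny user-space example of that distinction: the size of the static buffer is recorded in the ELF file, while the heap buffer is only requested from the kernel at run time (malloc/new end up issuing brk or mmap system calls under the hood):

#include <stdlib.h>
#include <string.h>

static char static_buf[4096];            /* size known from the ELF file */

int main(void)
{
    char *heap_buf = malloc(4096);       /* requested from the kernel at run time */
    if (!heap_buf)
        return 1;

    memset(static_buf, 0, sizeof(static_buf));
    memset(heap_buf, 0, 4096);

    free(heap_buf);
    return 0;
}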

How exactly do kernel virtual addresses get translated to physical RAM?

On the surface, this appears to be a silly question. Some patience please.. :-)
I am structuring this question into 2 parts:
Part 1:
I fully understand that platform RAM is mapped into the kernel segment; especially on 64-bit systems this works well. So each kernel virtual address is indeed just an offset from physical memory (DRAM).
Also, it's my understanding that as Linux is a modern virtual memory OS, (pretty much) all addresses are treated as virtual addresses and must "go" via hardware - the TLB/MMU - at runtime and get translated via the kernel paging tables. Again, easy to understand for user-mode processes.
HOWEVER, what about kernel virtual addresses? For efficiency, would it not be simpler to direct-map these (and an identity mapping is indeed set up from PAGE_OFFSET onwards)? But still, at runtime, the kernel virtual address must go via the TLB/MMU and get translated, right? Is this actually the case? Or is kernel virtual address translation just an offset calculation? (But how can that be, as we must go via the hardware TLB/MMU?) As a simple example, let's consider:
char *kptr = kmalloc(1024, GFP_KERNEL);
Now kptr is a kernel virtual address.
I understand that virt_to_phys() can perform the offset calculation and return the physical DRAM address.
But, here's the Actual Question: it can't be done in this manner via software - that would be pathetically slow! So, back to my earlier point: it would have to be translated via hardware (TLB/MMU).
Is this actually the case??
Part 2:
Okay, let's say this is the case, and we do use paging in the kernel to do this; we must of course set up kernel paging tables, and I understand they are rooted at swapper_pg_dir.
(I also understand that vmalloc() unlike kmalloc() is a special case- it's a pure virtual region that gets backed by physical frames only on page fault).
If (in Part 1) we do conclude that kernel virtual address translation is done via kernel paging tables, then how exactly does the kernel paging table (swapper_pg_dir) get "attached" or "mapped" to a user-mode process?? This should happen in the context-switch code? How? Where?
E.g.:
On x86_64, two processes A and B are alive, with one CPU.
A is running, so its higher-canonical addresses 0xFFFF8000 00000000 through 0xFFFFFFFF FFFFFFFF "map" to the kernel segment, and its lower-canonical addresses 0x0 through 0x00007FFF FFFFFFFF map to its private userspace.
Now, if we context-switch A->B, process B's lower-canonical region is unique, but it must "map" to the same kernel of course!
How exactly does this happen? How do we "auto" refer to the kernel paging table when in kernel mode? Or is that a wrong statement?
Thanks for your patience, would really appreciate a well thought out answer!
First a bit of background.
This is an area where there is a lot of potential variation between
architectures, however the original poster has indicated he is mainly
interested in x86 and ARM, which share several characteristics:
no hardware segments or similar partitioning of the virtual address space (when used by Linux)
hardware page table walk
multiple page sizes
physically tagged caches (at least on modern ARMs)
So if we restrict ourselves to those systems it keeps things simpler.
Once the MMU is enabled, it is never normally turned off. So all CPU
addresses are virtual, and will be translated to physical addresses
using the MMU. The MMU will first look up the virtual address in the
TLB, and only if it doesn't find it in the TLB will it refer to the
page table - the TLB is a cache of the page table - and so we can
ignore the TLB for this discussion.
The page table
describes the entire virtual 32 or 64 bit address space, and includes
information like:
whether the virtual address is valid
which mode(s) the processor must be in for it to be valid
special attributes for things like memory mapped hardware registers
and the physical address to use
Linux divides the virtual address space into two: the lower portion is
used for user processes, and there is a different virtual to physical
mapping for each process. The upper portion is used for the kernel,
and the mapping is the same even when switching between different user
processes. This keep things simple, as an address is unambiguously in
user or kernel space, the page table doesn't need to be changed when
entering or leaving the kernel, and the kernel can simply dereference
pointers into user space for the
current user process. Typically on 32-bit processors the split is 3G
user/1G kernel, although this can vary. Pages for the kernel portion
of the address space will be marked as accessible only when the processor
is in kernel mode to prevent them being accessible to user processes.
The portion of the kernel address space which is identity mapped to RAM
(kernel logical addresses) will be mapped using big pages when possible,
which may allow the page table to be smaller but more importantly
reduces the number of TLB misses.
When the kernel starts it creates a single page table for itself
(swapper_pg_dir) which just describes the kernel portion of the
virtual address space and with no mappings for the user portion of the
address space. Then every time a user process is created a new page
table will be generated for that process, the portion which describes
kernel memory will be the same in each of these page tables. This could be
done by copying all of the relevant portion of swapper_pg_dir, but
because page tables are normally tree structures, the kernel is
frequently able to graft the portion of the tree which describes the
kernel address space from swapper_pg_dir into the page tables for each
user process by just copying a few entries in the upper layer of the
page table structure. As well as being more efficient in memory (and possibly
cache) usage, it makes it easier to keep the mappings consistent. This
is one of the reasons why the split between kernel and user virtual
address spaces can only occur at certain addresses.
To see how this is done for a particular architecture look at the
implementation of pgd_alloc(). For example ARM
(arch/arm/mm/pgd.c) uses:
pgd_t *pgd_alloc(struct mm_struct *mm)
{
    ...
    init_pgd = pgd_offset_k(0);
    memcpy(new_pgd + USER_PTRS_PER_PGD, init_pgd + USER_PTRS_PER_PGD,
           (PTRS_PER_PGD - USER_PTRS_PER_PGD) * sizeof(pgd_t));
    ...
}
or
x86 (arch/x86/mm/pgtable.c) pgd_alloc() calls pgd_ctor():
static void pgd_ctor(struct mm_struct *mm, pgd_t *pgd)
{
    /* If the pgd points to a shared pagetable level (either the
       ptes in non-PAE, or shared PMD in PAE), then just copy the
       references from swapper_pg_dir. */
    ...
    clone_pgd_range(pgd + KERNEL_PGD_BOUNDARY,
                    swapper_pg_dir + KERNEL_PGD_BOUNDARY,
                    KERNEL_PGD_PTRS);
    ...
}
So, back to the original questions:
Part 1: Are kernel virtual addresses really translated by the TLB/MMU?
Yes.
Part 2: How is swapper_pg_dir "attached" to a user mode process.
All page tables (whether swapper_pg_dir or those for user processes)
have the same mappings for the portion used for kernel virtual
addresses. So as the kernel context switches between user processes,
changing the current page table, the mappings for the kernel portion
of the address space remain the same.
The kernel address space is mapped into every process's address space, for example above address 0xC0000000 with a 3:1 split. If user code tries to access this region it generates a fault; the region is guarded by the kernel.
The kernel address space is divided into two parts, the logical (direct-mapped) address space and the vmalloc virtual address space; the boundary is defined by the constant VMALLOC_START. The CPU uses the MMU all the time, in user space and in kernel space (it cannot be switched on and off).
Kernel virtual (vmalloc) addresses are mapped through page tables in the same way as user-space addresses. The logical address space is contiguous, so translating it to a physical address is a simple offset calculation, done with the macros __pa and __va. On some architectures the kernel's vmalloc mappings are only propagated into a process's page tables on demand: the kernel touches an address, the MMU raises a fault, the fault handler fills in the missing kernel entry, and execution resumes at the faulting instruction as if nothing had happened. This behaviour is platform dependent.
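For the logical (direct-mapped) part, the translation really is just an offset. A minimal sketch of the idea behind __pa/__va, assuming a 3G/1G split and RAM starting at physical address 0 (both constants are platform dependent; the my_* names are made up for illustration):

#define MY_PAGE_OFFSET  0xC0000000UL   /* start of the kernel logical mapping */
#define MY_PHYS_OFFSET  0x00000000UL   /* start of RAM, platform dependent    */

/* virtual -> physical for kernel logical (lowmem) addresses */
static inline unsigned long my_virt_to_phys(unsigned long vaddr)
{
    return vaddr - MY_PAGE_OFFSET + MY_PHYS_OFFSET;
}

/* physical -> virtual for RAM covered by the logical mapping */
static inline unsigned long my_phys_to_virt(unsigned long paddr)
{
    return paddr - MY_PHYS_OFFSET + MY_PAGE_OFFSET;
}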

How does ARM Linux maintain segments?

Linux translates flat virtual addresses to physical addresses with the MMU. In the virtual address space of Linux there are many types of segments:
Kernel space
User stack
Memory mapping region
User heap
Bss segment
Data segment
Text segment
How does Linux maintain these segments (aka sections)? Where are the base addresses and sizes of these segments stored? In registers, the GDT/LDT, mm_struct, or other data structures in the kernel?
Appreciate any help.
The GDT/LDT is an x86-family feature. Kernel space is translated via the kernel part of the page tables, user space via the user-space part. The page tables live in main memory. mm_struct is the structure the Linux kernel uses to describe a process's memory layout; it is per-process.
User stack
User heap
Bss segment
Data segment
Text segment
This layout is described in mm_struct (see the excerpt below). mm_struct also contains a ->pgd field, which is the root page table pointer (loaded into TTBR0/TTBR1 on ARM).
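For illustration, a trimmed excerpt of the relevant mm_struct fields (names as in mainline include/linux/mm_types.h; the exact set varies between kernel versions, and the struct below is a standalone mock, not the real definition):

/* Mock excerpt: where the classic segment boundaries are recorded. */
struct mm_struct_excerpt {
    void *pgd;                           /* root page table (pgd_t * in the kernel) */
    unsigned long mmap_base;             /* base of the memory mapping region       */
    unsigned long start_code, end_code;  /* text segment                            */
    unsigned long start_data, end_data;  /* data segment                            */
    unsigned long start_brk, brk;        /* heap (brk area)                         */
    unsigned long start_stack;           /* user stack start                        */
};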

Linux x86: Where is the real mode address space mapped to in protected kernel mode?

In Linux running on an x86 platform, where is the real-mode address space mapped to in protected kernel mode? In kernel mode, a thread can access the kernel address space directly. The kernel is in the lower 8 MB, the page table is at a certain position, etc. (as described here). But where does the real-mode address space go? Can it be accessed directly? For example, the BIOS and BIOS add-ons (see here)?
(My x86-fu is a bit weak. I'll add some tags so that other people can (hopefully) correct me if I'm lying anywhere.)
Physical addresses are the same in real and protected mode. The only difference is in how you get from an address (offset) specified in an instruction to a physical address:
In real mode, the physical address is basically (segment_reg << 4) + offset.
In protected mode, the physical address is translate_via_page_table([segment_reg] + offset).
By [segment_reg] I mean the base address of the segment, looked up in the Global or Local Descriptor Table at the offset in segment_reg. translate_via_page_table() means the address translation done via paging (if enabled).
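A quick worked example of the real-mode formula in plain C (F000:FFF0 is the well-known reset vector):

#include <stdio.h>
#include <stdint.h>

static uint32_t real_mode_phys(uint16_t seg, uint16_t off)
{
    return ((uint32_t)seg << 4) + off;   /* (segment << 4) + offset */
}

int main(void)
{
    /* F000:FFF0 -> physical 0xFFFF0 */
    printf("0x%05x\n", (unsigned)real_mode_phys(0xF000, 0xFFF0));
    return 0;
}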
Looking here, it seems the BIOS ROM appears at physical addresses 0x000F0000-0x000FFFFF. To get at that memory in protected mode with paging, you would have to map it into the virtual address space somewhere by setting up correct page table entries. Assuming 4 KB pages (the usual case), mapping the entire range should require 16 ((0xFFFFF-0xF0000+1)/4096) entries.
To see how the Linux kernel does things, you could look into how e.g. /dev/mem, which allows reading of arbitrary physical addresses, is implemented. The implementation is in drivers/char/mem.c.
The following command (from e.g. this answer) will dump the memory range 0xC0000-0xFFFFF (meaning it includes the video BIOS too, per the memory map linked above):
$ dd if=/dev/mem bs=1k skip=768 count=256 > bios
1024*768 = 0xC0000, and 1024*(768+256) - 1 = 0xFFFFF, which gives the expected physical memory range.
Tracing things a bit, read_mem() in drivers/char/mem.c calls xlate_dev_mem_ptr(), which has an x86-specific implementation in arch/x86/mm/ioremap.c. The ioremap_cache() call in that function seems to be responsible for mapping in the page if needed.
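If you wanted to look at that range from a module instead of through /dev/mem, a hedged sketch (modern kernels, minimal error handling) could map the BIOS ROM area with ioremap():

#include <linux/module.h>
#include <linux/io.h>

#define BIOS_ROM_BASE   0xF0000UL
#define BIOS_ROM_SIZE   0x10000UL        /* 64 KB -> 16 pages of 4 KB */

static void __iomem *bios;

static int __init bios_map_init(void)
{
    bios = ioremap(BIOS_ROM_BASE, BIOS_ROM_SIZE);
    if (!bios)
        return -ENOMEM;

    /* Read the first two bytes of the ROM through the new mapping. */
    pr_info("BIOS ROM starts with %02x %02x\n", readb(bios), readb(bios + 1));
    return 0;
}

static void __exit bios_map_exit(void)
{
    iounmap(bios);
}

module_init(bios_map_init);
module_exit(bios_map_exit);
MODULE_LICENSE("GPL");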
Note that BIOS routines won't work in protected mode by the way. They assume the CPU is running in real mode.
For 32-bit Linux x86, the first 896 MB of physical RAM is mapped to a contiguous block of virtual memory from virtual address 0xC0000000 to 0xF7FFFFFF. Virtual addresses from 0xF8000000 to 0xFFFFFFFF are assigned dynamically to various parts of physical memory, so the kernel can have a window of 128 MB mapped into any part of physical memory beyond the 896 MB limit.
The kernel itself loads at physical address 1 MB and up, leaving the first MB free. This first MB is used, for instance, for DMA buffers that ISA devices need placed there, because they use the 8237 DMA controller, which can only reach such low addresses.
So, reading from virtual memory address 0xC0000000 is actually reading from physical address 0x00000000 (provided the kernel has flagged that page as present)

Is kernel space mapped into user space on Linux x86?

It seems that on 32-bit Windows, the kernel reserves 1 GB of virtual memory out of the total 4 GB of user virtual address space and maps some of the kernel space into this 1 GB.
So my questions are:
Is there any similar situation on 32-bit Linux?
If so, how can we see the whole memory layout?
I think
cat /proc/pid/maps
can only show the user-space layout of a certain process...
Thank you!
Actually, on 32-bit Windows, without the /3G boot option, the kernel is mapped at the top 2GB of linear address space, leaving 2GB for user process.
Linux does a similar thing, but it maps the kernel in the top 1GB of linear space, thus leaving 3GB for user process.
I don't know if you can peek at the entire memory layout just by using the /proc filesystem. For a lab I designed for my students, I created a tiny device driver that allows a user to peek at a physical memory address and to read the contents of several control registers, such as CR3 (the page directory base address).
By using these two operations, one can walk through the page directory of the current process (the one performing the operation) and see which pages are present, which are owned by the user and the kernel or just by the kernel, which are read/write or read-only, and so on. With that information, the students have to display a map showing memory usage, including kernel space.
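As a hedged sketch (not the actual lab driver), the CR3 part of such a driver could look roughly like this on a modern x86 kernel (the accessor name has changed over time, e.g. read_cr3() in older kernels vs __read_cr3() in newer ones, so adjust for your version):

#include <linux/module.h>
#include <linux/kernel.h>
#include <asm/special_insns.h>

static int __init cr3_demo_init(void)
{
    /* Physical base of the current page tables, plus flag/PCID bits. */
    unsigned long cr3 = __read_cr3();

    /* Mask off the low flag/PCID bits to get the page directory base. */
    pr_info("CR3 = %lx, page table base = %lx\n", cr3, cr3 & ~0xfffUL);
    return 0;
}

static void __exit cr3_demo_exit(void)
{
}

module_init(cr3_demo_init);
module_exit(cr3_demo_exit);
MODULE_LICENSE("GPL");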
Take a look at this PDF. It's the compiled version of all the labs we did in my course.
http://www.atc.us.es/asignaturas/tpbn/PracticasTPBN2011.pdf
On page 36 of the PDF (page 30 of the document) you can see what a memory map looks like. This is the result of doing exercise #3.2 from lab #3.
The text is in Spanish, but I'm sure you can use a translator or something like that if there are things you cannot understand. This lab assumes the student has previously read about how the paging system works and how to interpret the layout of the directory and page entries.
The map is like this: a 16x64 block, where each cell represents 4 MB of the current process's virtual address space. The map should really be three-dimensional, since a 4 MB region is described by a page table with 1024 entries (pages) and not all of those pages may be present; but to keep the map clear, the exercise requires the user to collapse these regions, showing the attributes of the first page entry that describes a present page, in the hope that all subsequent pages in that page table share the same attributes (which may or may not actually be true).
This map is for 2.6.x kernels in which PAE is not used and PSE is (PAE and PSE being two bit fields in control register CR4). PAE enables 2 MB pages and PSE enables 4 MB pages; 4 KB pages are always available.
. : PDE not present, or page table empty.
X : 4MB page, supervisor.
R : 4MB page, user, read only.
* : 4MB page, user, read/write.
x : Page table with at least one entry describing a supervisor page.
r : Page table with at least one entry describing a user page, read only.
+ : Page table with at least one entry describing a user page, read/write.
................................r...............................
................................................................
................................................................
................................................................
................................................................
................................................................
................................................................
................................................................
................................................................
................................................................
................................................................
...............................+..............................+.
xXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXxX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX..x...........................xx
You can see there is a vast space of 3 GB of memory, almost empty in this case (the process is just a little C application and uses less than 4 MB, all contained in one page table, whose first present page is a read-only page, assumed to be part of the program code, or maybe static strings).
Near the 3GB border there are two small regions read/write, which may belong to shared libraries loaded by the user program.
The last 4 rows (256 directory entries) belong almost entirely to the kernel. 224 of those entries are actually present and used: they map the first 896 MB of physical memory (224 x 4 MB), which is the space where the kernel lives. The last 32 entries are used by the kernel to access physical memory beyond the 896 MB mark on systems with more than 896 MB of RAM.
Is there any similar situation on 32-bit Linux?
Yes. On 32-bit Linux, by default, the kernel reserves the high quarter of the address space (the 1G from C0000000 to the top of memory) for its own use.
If so, how can we see the whole memory layout?
You can't. /proc/pid/maps only displays mappings which are present in userspace. Kernel memory is not accessible from userspace applications, so it is not shown.
Keep in mind the reason why this arrangement is used - while the kernel is active, it needs to be able to install its own mappings while still keeping userspace mappings active (so that, for instance, it can copy data from or to userspace). It accomplishes this by reserving that high memory range for itself.
The locations of memory mappings within the kernel are not relevant to anything besides the kernel itself, so they are not exposed to userspace at all except by accident, or in some debug messages.
