Why is kmalloc() more efficient than vmalloc()? - linux

I think kmalloc() allocates contiguous physical pages in the kernel, because the kernel virtual address space maps directly onto physical memory, simply by adding an offset.
However, I still don't understand why it is more efficient than vmalloc().
It still needs to go through the page table (the kernel page table), right? Because the MMU is not disabled when a process switches into the kernel. So why does Linux map the kernel virtual space directly to physical memory? What is the benefit?
In include/asm-x86/page_32.h, there is:
#define __pa(x) ((unsigned long)(x)-PAGE_OFFSET)
#define __va(x) ((void *)((unsigned long)(x)+PAGE_OFFSET))
Why does the kernel need to calculate the physical address? It has to use the virtual address to access the memory anyway, right? I cannot figure out why the physical address is needed.

Your queries:
Why is kmalloc() more efficient than vmalloc()?
kmalloc() allocates a region of physically contiguous (and therefore also virtually contiguous) memory; the physical-to-virtual mapping is one-to-one.
For vmalloc(), an MMU/PTE entry is set up for each page, so the physical-to-virtual mapping is not contiguous.
vmalloc() is often slower than kmalloc(), because it may have to remap its pages into a virtually contiguous range; kmalloc() never remaps.
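A minimal sketch of the difference (module context assumed; the sizes and the function name are arbitrary):

#include <linux/slab.h>     /* kmalloc, kfree */
#include <linux/vmalloc.h>  /* vmalloc, vfree */

static void alloc_demo(void)
{
	/* Physically (and virtually) contiguous; comes from the direct-mapped
	 * region, so no new page-table entries have to be set up. */
	void *kbuf = kmalloc(16 * 1024, GFP_KERNEL);

	/* Only virtually contiguous; each page gets its own PTE in the
	 * vmalloc area, which is the extra work mentioned above. */
	void *vbuf = vmalloc(16 * 1024);

	if (kbuf)
		kfree(kbuf);
	if (vbuf)
		vfree(vbuf);
}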
Why does Linux map the kernel virtual space directly to physical memory?
One concept in the Linux kernel is DMA (Direct Memory Access), which requires contiguous physical memory. When the kernel sets up a DMA operation, it must supply a physically contiguous buffer; that is why the direct memory mapping is needed.
Why does the kernel need to calculate the physical address? It has to use the virtual address to access the memory anyway, right?
To answer this you need to understand the difference between virtual and physical memory, but in short: every load and store is ultimately performed on physical memory (the RAM in your PC), yet the CPU only ever issues virtual addresses, which the MMU translates.
The kernel needs the physical address whenever it programs something that does not go through its page tables, for example a page-table entry itself, or a device doing DMA.
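As a sketch of why the physical address matters (buffer and names are only illustrative): the CPU keeps using the virtual address, while anything that bypasses the kernel's page tables must be handed the physical address that __pa()/virt_to_phys() computes.

#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <asm/io.h>    /* virt_to_phys */

static void pa_demo(void)
{
	void *buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
	phys_addr_t phys;

	if (!buf)
		return;

	/* For directly mapped memory this is just "virtual - PAGE_OFFSET",
	 * i.e. exactly what the __pa() macro above does. */
	phys = virt_to_phys(buf);

	memset(buf, 0, PAGE_SIZE);                   /* CPU access: virtual address  */
	pr_info("virt=%p phys=%pa\n", buf, &phys);   /* hardware would be given phys */

	kfree(buf);
}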

Related

Is it faster to access contiguous physical addresses than virtual addresses?

What's the benefit of allocating a chunk of contiguous physical memory?
Is it faster to access contiguous physical addresses than virtual addresses? And why?
All memory accesses from the CPU go through the MMU; the speed does not depend on the actual location of the pages in physical memory.
Physically contiguous memory is needed for other devices that access memory but are not able to remap pages.
In that case, the contiguous allocation is needed to make the device work to begin with, and is not a question of speed.
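For completeness, this is roughly how a driver asks for such device-visible, physically contiguous memory (a sketch; 'dev' stands in for a real peripheral's struct device and is not from the discussion above):

#include <linux/dma-mapping.h>
#include <linux/gfp.h>

static void *get_device_buffer(struct device *dev, dma_addr_t *dma_handle)
{
	/* Returns a kernel virtual address for the CPU and fills *dma_handle
	 * with the bus/physical address the device will use; the backing
	 * pages are physically contiguous. */
	return dma_alloc_coherent(dev, 64 * 1024, dma_handle, GFP_KERNEL);
}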

What is the rationale for the Linux kernel mapping as much RAM as possible in the direct-mapping (linear mapping) area?

The discussion below applies to 32-bit ARM Linux.
Suppose there is 512MB of physical RAM in my system. For common configurations, all 512MB of physical RAM will be direct-mapped by the kernel (0xC0000000 to 0xE0000000).
Question is: the kernel itself only uses part of this RAM; most of it will be allocated to user space. Why bother mapping all 512MB of physical RAM into the kernel's virtual space (0xC0000000 to 0xE0000000)? Why doesn't the kernel just map the part it needs for its own usage (say 64MB of RAM)?
If physical RAM is greater than 1GB, things get a little more complicated. Let's say the directly-mapped area is 768MB in size. The result would be 768MB out of 1GB being directly mapped into the kernel's virtual space. I guess the rest of the RAM (256MB) goes to two places: either the high memory area or allocation by the kernel to user space. But I still don't see any advantage of mapping so much physical RAM into the kernel's virtual space.
Actually this question can be reduced to:
what are the drawbacks if kernel only directly maps a small part of physical RAM(say 64MB out of 512MB)?
Before further discussion, it is beneficial to know that:
After the MMU is turned on, every address issued by the CPU is a virtual address.
If the kernel wants to access ANY address in RAM, a mapping must be set up before the actual access happens.
If the kernel only directly maps a small part of physical RAM, the cost is that every time the kernel needs to access another part of RAM, it has to set up a temporary mapping before the access and tear that mapping down afterwards, which is tedious and inefficient.
If that mapping is set up in advance and is always there, it saves quite a lot of trouble for kernel.
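A sketch of what such a temporary mapping looks like in practice (assuming an older 32-bit kernel that uses kmap_atomic(); the helper name is made up):

#include <linux/highmem.h>

/* Read one byte of a page that is outside the permanent linear map:
 * a mapping must be created and destroyed around every access. */
static u8 read_first_byte(struct page *page)
{
	void *vaddr = kmap_atomic(page);   /* set up a temporary PTE */
	u8 val = *(u8 *)vaddr;

	kunmap_atomic(vaddr);              /* tear the mapping down again */
	return val;
}

/* A page inside the direct map needs none of this:
 * page_address(page) already gives a usable kernel virtual address. */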

kmalloc() functionality in linux kernel

I came across, in the LDD book, the claim that with kmalloc we can allocate from high memory. I have one basic question here.
1) To my knowledge we can't access high memory directly from the kernel (unless it is mapped into kernel space through kmap()). And I didn't see any mapping area reserved for kmalloc(), though there is one for vmalloc(). So, to which part of the kernel address space will kmalloc() map if it allocates from high memory?
This is on the x86 architecture, a 32-bit system.
My knowledge may be out of date but the stack is something like this:
kmalloc allocates physically contiguous memory by calling get_free_pages (this is what the acronym GFP stands for). The GFP_* flags passed to kmalloc end up in get_free_pages, which is the page allocator.
Since special handling is required for highmem pages, you won't get them unless you add the GFP_HIGHMEM flag to the request.
All memory in Linux is virtual (a generalization that is not exactly true and that is architecture-dependent, but let's go with it, until the next parenthesized statement in this paragraph). There is a range of memory, however, that is not subject to the virtualization in the sense of remapping of pages: it is just a linear mapping between virtual addresses and physical addresses. The memory allocated by get_free_pages is linearly mapped, except for high memory. (On some architectures, linear mappings are supported for memory ranges without the use of a MMU: it's just a simple arithmetic translation of logical addresses to physical: add a displacement. On other architectures, linear mappings are done with the MMU.)
Anyway, if you call get_free_pages (directly, or via kmalloc) to allocate two or more pages, it has to find physically contiguous ones.
Now virtual memory is also implemented on top of get_free_pages, because we can take a page allocated that way, and install it into a virtual address space.
This is how mmap works and everything else for user space. When a piece of virtual memory is committed (becomes backed by a physical page, on a page fault or whatever), a page comes from get_free_pages. Unless that page is highmem, it has a linear mapping so that it is visible in the kernel. Additionally, it is wired into the virtual address space for which the request is being made. Some kernel data structures keep track of this, and of course it is punched into the page tables so the MMU makes it happen.
vmalloc is similar in principle to mmap, but far simpler because it doesn't deal with multiple backends (devices in the filesystem with a mmap virtual function) and doesn't deal with issues like coalescing and splitting of mappings that mmap allows. The vmalloc area consists of a reserved range of virtual addresses visible only to the kernel (whose base address is architecture-dependent and can be tweaked by you at kernel compile time). The vmalloc allocator carves out this virtual space and populates it with pages from get_free_pages. These need not be contiguous, and so can be obtained one at a time and wired into the allocated virtual space.
Highmem pages are physical memory that is not addressable through the kernel's linear map of physical memory. Highmem exists because the kernel's linear "window" onto physical memory isn't always large enough to cover all of memory. (E.g. suppose you have a 1GB window, but 4GB of RAM.) So, to cover all of memory, there is, in addition to the linear map, a smaller "non-linear" map where pages are selectively made visible on a temporary basis using kmap and kunmap. Placement of a page into this view is considered the acquisition of a precious resource that must be used sparingly and released as soon as possible.
A highmem page can be installed into a virtual memory map just like any other page, and no special "highmem" handling is needed for that view of the page. Any map: that of a process, or the vmalloc range.
If you're dealing with some virtual memory that could be a mixture of highmem and non-highmem pages, which you have to view through the kernel's linear space, you have to be prepared to use the mapping functions.
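A sketch (names arbitrary) of the highmem case described above: allocate a page that may be highmem, then use the mapping functions to view it through kernel space.

#include <linux/mm.h>
#include <linux/highmem.h>
#include <linux/string.h>

static void highmem_demo(void)
{
	/* GFP_HIGHUSER lets the page allocator hand back a highmem page, so we
	 * receive a struct page rather than a ready-to-use kernel address. */
	struct page *page = alloc_page(GFP_HIGHUSER);
	char *vaddr;

	if (!page)
		return;

	vaddr = kmap(page);            /* acquire the "precious" mapping ...   */
	memset(vaddr, 0, PAGE_SIZE);
	kunmap(page);                  /* ... and release it as soon as we can */

	__free_page(page);
}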

Linux 3/1 virtual address split

I am missing something when it comes to understanding the need for highmem to address more than 1GB of RAM. Could someone point out where I go wrong? Thanks!
What I know:
1 GB of a process's virtual memory (the high region of the address space) is reserved for kernel operations. User space can use the remaining 3 GB. This is the 3/1 split.
The virtual memory features of the VM map the (contiguous) virtual memory pages to physical pages (RAM).
What I don't know:
What operations use the kernel virtual memory? I suppose things like kmalloc(...) in kernel-space would use kernel virtual memory.
I would think that 4GB of RAM could be used under this scheme. I don't get why the kernel's 1 GB of virtual space is the limiting factor when addressing physical space. This is where my understanding breaks down. Please advise.
I've been reading this (http://kerneltrap.org/node/2450), which is great. But it doesn't quite address my question to my liking.
The reason that kernel virtual space is a limiting factor on useable physical memory is because the kernel needs access to all physical memory, and the way it accesses physical memory is through kernel virtual addresses. The kernel doesn't use special instructions that allow direct access to physical memory locations - it has to set up page table entries for any physical ranges that it wants to talk to.
In the "old style" scheme, the kernel set things up so that every process's page tables mapped virtual addresses from 0xC0000000 to 0xFFFFFFFF directly to physical addresses from 0x00000000 to 0x3FFFFFFF (these pages were marked so that they were only accessible in ring 0 - kernel mode). These are the "kernel virtual addresses". Under this scheme, the kernel could directly read and write any physical memory location without having to fiddle with the MMU to change the mappings.
Under the HIGHMEM scheme, the mappings from kernel virtual addresses to physical addresses aren't fixed - parts of physical memory are mapped in and out of the kernel virtual address space as the kernel needs access to that memory. This allows more physical memory to be used, but at the cost of having to constantly change the virtual-to-physical mappings, which is quite an expensive operation.
Mapping 1 GB to kernel in each process allows processes to switch to kernel mode without also performing a context switch. Responses to system calls such as read(), mmap() and others can then be appropriately processed in the calling process' address space.
If space for the kernel were not reserved in each process, switching to "kernel mode" in between executing user space code would be more expensive, and be unable to use virtual address mapping through the hardware MMU (memory management unit) for the system calls being serviced.
Systems running a 32-bit kernel with more than 1GB of physical memory can assign physical memory locations in ZONE_HIGHMEM (roughly above the 1GB mark), which can require the kernel to jump through hoops for certain operations in order to interact with them. The addition of PAE (Physical Address Extension) extends this problem by allowing up to 64GB of physical memory, decreasing the proportion of memory that lies below the 1GB mark relative to the regions allocated in ZONE_HIGHMEM.
For example, system calls use the kernel space.
You can have 64GB of physical RAM, but on 32-bit platforms the processor can only address 4GB because of 32-bit virtual addressing. Actually, you can have 1GB of RAM and 3GB of swap, and virtual memory will make it look like you have 4GB. On 64-bit platforms, virtual addressing is practically unlimited.

why do we need zone_highmem on x86?

In the Linux kernel, mem_map is the array which holds all "struct page" descriptors. Those pages include the 128MiB of lowmem reserved for dynamically mapping highmem.
Since the lowmem size is 1GiB, the mem_map array has only 1GiB/4KiB = 256Ki entries. If each entry is 32 bytes, then mem_map occupies 8MiB. But if we could use mem_map to map all 4GiB of physical memory (if we had that much available on x86-32), the mem_map array would occupy 32MiB, which is not a lot of kernel memory (or am I wrong?).
So my question is: why do we need to use that 128MiB of lowmem for indirect highmem mapping in the first place? Or, put another way, why not map all of the (up to) 4GiB of physical memory (if available) into kernel space directly?
Note: if my understanding of the kernel source above is wrong, please correct me. Thanks!
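The arithmetic from the question, written out (a userspace sketch, assuming 4KiB pages and the 32-byte struct page size quoted above):

#include <stdio.h>

int main(void)
{
	unsigned long long page  = 4ULL << 10;   /* 4KiB page size          */
	unsigned long long entry = 32;           /* bytes per struct page   */
	unsigned long long low   = 1ULL << 30;   /* 1GiB of lowmem          */
	unsigned long long all   = 4ULL << 30;   /* 4GiB of RAM             */

	/* 256Ki entries -> 8MiB of mem_map for 1GiB */
	printf("%llu MiB\n", low / page * entry >> 20);
	/* 1Mi entries -> 32MiB of mem_map for 4GiB */
	printf("%llu MiB\n", all / page * entry >> 20);
	return 0;
}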
Look Here: http://www.xml.com/ldd/chapter/book/ch13.html
Kernel low memory is the permanently mapped part of physical memory, directly addressable with 32-bit kernel pointers on x86.
Kernel high memory is the part with no permanent kernel mapping; it has to be reached through temporary virtual mappings on x86.
You don't want to map it all into the kernel address space, because you can't always address all of it, and you need most of your memory for virtual memory segments (virtual, page-mapped process space).
At least, that's how I read it. Wow, that's a complicated question you asked.
To throw more confusion, chapter 13 talks about some PCI devices not being able to address the 32-bit space, which was the genesis of my previous comment:
On x86, some kernel memory usage is limited to the first gigabyte of memory because of DMA addressing concerns. I'm not 100% familiar with the topic, but there's a compatibility mode for DMA on the PCI bus. That may be what you are looking at.
3.6 GB is not the ceiling when using physical address extension, which is commonly needed on most modern x86 boards, especially with memory hotplug.
Or, put another way, why not map all of the (up to) 4GiB of physical memory (if available) into kernel space directly?
One reason is userspace: every userspace process has its own virtual address space. Suppose you have 4GB of RAM on x86. If we say the kernel owns 1GB of it (~800MB directly mapped + ~200MB vmalloc), all the other ~3GB should be dynamically distributed between processes running in user space. So how can you map all 4GB directly when you have several address spaces?
why do we need zone_highmem on x86?
The reason is the same. The kernel reserves only ~800MB for low mem. All other memory is allocated and connected with a particular virtual address only on demand. For example, if you execute a binary, a new virtual address space is created and some pages are allocated for storing your binary's code and data (heap, stack, ...). So the key purpose of high mem is to serve dynamic memory allocation requests; you never know in advance what will be triggered by userspace...
