How does a contiguous block of memory reduce memory access time?

When we use kmalloc(), it is said that this function returns physically contiguous blocks of memory (if available), while with vmalloc() we get virtually contiguous but physically non-contiguous memory (if available).
It is further stated that access to a contiguous block of memory is faster than access to a non-contiguous block [Source Link].
To be more specific, let's consider two cases:
Let 1 physical frame = 4 KB, page size = 4 KB.
Case 1:
In my module code, I am using kmalloc() to allocate 20 KB of memory for a char array; the call succeeds.
Case 2:
I have made the same request using vmalloc(), and the call has succeeded.
My questions are:
a) Why does kmalloc() take less time to fulfil the request than vmalloc()?
b) How does contiguous allocation lead to faster memory access than non-contiguous allocation?
In each case the CPU generates a virtual address and hands it to the MMU; on a TLB miss the MMU does a page walk, identifies the frame number, and converts the virtual address into a physical address. Why does it matter whether the addresses are contiguous or non-contiguous?

For kmalloc the whole physical RAM is already mapped 1:1 with an offset¹, i.e. physical RAM address N is mapped to virtual address N+PAGE_OFFSET. This makes allocation using kmalloc simpler than with vmalloc, since vmalloc has to find free pages and set up the page tables so that the pages are mapped to a contiguous address block.
There is no difference in access time when accessing kmalloc vs. vmalloc allocated memory, except for the page faults mentioned in the document you linked to.
¹ With the exception of systems with more physical memory than fits in the virtual address space reserved for the kernel.

Related

Why is kmalloc() more efficient than vmalloc()?

I think kmalloc() allocates contiguous physical pages in the kernel because that part of the kernel's virtual address space maps directly onto physical memory, by simply adding an offset.
However, I still don't understand why it is more efficient than vmalloc().
It still needs to go through the page table (the kernel page table), right? The MMU is not disabled when the process switches into the kernel. So why does Linux directly map the kernel virtual space to physical memory? What is the benefit?
In include/asm-x86/page_32.h, there is:
#define __pa(x) ((unsigned long)(x)-PAGE_OFFSET)
#define __va(x) ((void *)((unsigned long)(x)+PAGE_OFFSET))
Why does the kernel need to calculate the physical address? It has to use the virtual address to access the memory anyway, right? I cannot figure out why the physical address is needed.
Your queries:
Why is kmalloc() more efficient than vmalloc()?
kmalloc() allocates a region of memory that is physically (and therefore also virtually) contiguous; the physical-to-virtual mapping is one-to-one, a fixed offset.
For vmalloc(), a page-table entry must be set up for each page, and the physical-to-virtual mapping is not contiguous.
vmalloc() is therefore often slower than kmalloc(), because it has to map every allocated page into a virtually contiguous range. kmalloc() never has to create new mappings.
Why does Linux directly map the kernel virtual space to physical memory?
One important reason is DMA (Direct Memory Access), which requires physically contiguous memory. When the kernel sets up a DMA transfer it must give the device physical addresses, so it needs a cheap way to translate a kernel virtual address into a physical one; with a direct (linear) mapping that translation is a single addition or subtraction.
Why does the kernel need to calculate the physical address? It has to use the virtual address to access the memory anyway, right?
To answer this you need to understand the difference between virtual and physical addresses. In short: the CPU issues virtual addresses, but every load and store is ultimately performed on physical memory (your RAM) after the MMU has translated the address. The kernel needs the physical address whenever it programs something that operates below the MMU, for example when it fills in page-table entries, or when it tells a DMA-capable device where a buffer lives in RAM.

High memory mappings in kernel virtual address space

Linear addresses beyond 896 MB correspond to the high memory region ZONE_HIGHMEM.
So the page allocator functions will not work on this region, since they return the linear addresses of directly mapped page frames, which exist only in ZONE_NORMAL and ZONE_DMA.
I am confused about these lines from Understanding the Linux Kernel:
What do they mean when they say "On 64-bit hardware platforms ZONE_HIGHMEM is always empty"?
What does this highlighted statement mean: "The allocation of high-memory page frames is done only through the alloc_pages() function. These functions do not return linear addresses, since they do not exist. Instead, the functions return the linear address of the page descriptor of the first allocated page frame. These linear addresses always exist, because all page descriptors are allocated in low memory once and forever during kernel initialization."
What are these page descriptors, and does the 896 MB region already hold the page descriptors of the entire RAM?
The x86-32 kernel needs high memory to access more than about 1 GB of physical memory: it is impossible to permanently map more than 2^32 addresses within a 32-bit address space, and with a 1G/3G kernel/user split only the kernel's 1 GB (roughly 896 MB after reserving the vmalloc area) can be permanently mapped.
The x86-64 kernel has no such limitation, as the amount of physically addressable memory (currently 256 TB) fits within its 64-bit address space and thus can always be permanently mapped.
High memory is a hack. Ideally you don't need it. Indeed, the point of x86-64 is to be able to directly address all the memory you could possibly want. Taken from https://www.quora.com/Linux-Kernel/What-is-the-difference-between-high-memory-and-normal-memory
I think "page descriptor" means struct page. And considering the size of struct page: yes, all of them can be stored in ZONE_NORMAL.

Is there a reuse of virtual memory addresses in linux?

I thought a little about virtual memory management, and came to the conclusion that there can be two types of memory fragmentation. The first happens on the physical-memory side, where pages cannot be freed because a few bytes of them are still in use. Mostly those last bytes will be freed sooner or later, and then the physical page becomes free again and is unmapped.
But what happens to the pointer (virtual address) returned by malloc()? Let's assume a 32-bit system. The program "randomly" allocates and frees memory, but never uses more than a few MB at once. Let's assume further that the program never frees memory in the order it was allocated, so the "top of heap" pointer can never be decreased, since a free never occurs at the end of the heap. I assume that malloc() then always has to map new memory at the end of the heap address space, which means the pointer value increases with every call.
Sooner or later the returned pointer would reach the highest possible address (e.g. 0xffffffff), and it would become impossible to allocate further memory even though the system has plenty of free pages, since most pages have already been freed. It would just be a matter of the highest possible pointer value.
To solve this, an algorithm would be needed that tracks unmapped address ranges and lets them grow as more memory is freed at the beginning or end of a range. Is there an algorithm like this implemented by malloc()?
I assume that malloc has to map the memory always to the end of the heap memory space.
This assumption is actually incorrect. Some implementations keep multiple pools from which different sizes of blocks are allocated. (For instance, one common approach is a slab allocator, which keeps a separate pool for each size of block that the allocator will return.)
In any case, yes — all meaningful implementations of malloc() will track memory that has been freed and will reuse it when possible.
I took a short look at the slab allocator. It seems to be more related to the page management used inside the kernel. My question concerns user space and the fact that whenever memory is allocated it needs an address within the calling process's heap address space. What happens when that address space runs out, as it can on a 32-bit system?
It is clear that the system does not lose the memory. What I mean is that there would be no address space left at which the memory could be mapped, even though all memory at lower addresses has already been freed and unmapped.

Preventing Linux kernel from taking allocated memory from a process

I want to allocate a large portion of memory using malloc() for an indefinite amount of time. I may not touch the memory for a long time, say 1 minute. How do I prevent the kernel from taking that memory away from the process?
I cannot reallocate that memory, because it is being used by another device that is outside of the kernel's control.
In Linux, you can allocate memory in user space, for example with malloc() or mmap(), pass it down to the kernel, and then, in the kernel, take references to the memory with get_user_pages(). This prevents the pages from going away, and also allows them to be accessed from any address space as struct page * references (requiring kmap and kunmap if CONFIG_HIGHMEM is in effect). These pages will not be physically contiguous, however, and they may not be in a range suitable for DMA.
Memory to be accessed by devices is usually allocated in the kernel (e.g. using kmalloc with GFP_DMA). For allocations larger than a page, kmalloc finds consecutive physical pages, too. Once obtained, kmalloc-ed memory can be mapped into user space with remap_pfn_range.

kmalloc() functionality in linux kernel

I came across, in the LDD book, the statement that using kmalloc we can allocate from high memory. I have one basic question here.
1) To my knowledge we can't access high memory directly from the kernel (unless it is mapped into kernel space through kmap()). And I didn't see any mapping area reserved for kmalloc(), though there is one for vmalloc(). So to which part of the kernel address space will kmalloc() map if the allocation comes from high memory?
This is on the x86 architecture, a 32-bit system.
My knowledge may be out of date but the stack is something like this:
kmalloc allocates physically contiguous memory by calling get_free_pages (this is what the acronym GFP stands for). The GFP_* flags passed to kmalloc end up in get_free_pages, which is the page allocator.
Since special handling is required for highmem pages, you won't get them unless you add the GFP_HIGHMEM flag to the request.
All memory in Linux is virtual (a generalization that is not exactly true and that is architecture-dependent, but let's go with it, until the next parenthesized statement in this paragraph). There is a range of memory, however, that is not subject to the virtualization in the sense of remapping of pages: it is just a linear mapping between virtual addresses and physical addresses. The memory allocated by get_free_pages is linearly mapped, except for high memory. (On some architectures, linear mappings are supported for memory ranges without the use of an MMU: it's just a simple arithmetic translation of logical addresses to physical: add a displacement. On other architectures, linear mappings are done with the MMU.)
Anyway, if you call get_free_pages (directly, or via kmalloc) to allocate two or more pages, it has to find physically contiguous ones.
Now virtual memory is also implemented on top of get_free_pages, because we can take a page allocated that way, and install it into a virtual address space.
This is how mmap works and everything else for user space. When a piece of virtual memory is committed (becomes backed by a physical page, on a page fault or whatever), a page comes from get_free_pages. Unless that page is highmem, it has a linear mapping so that it is visible in the kernel. Additionally, it is wired into the virtual address space for which the request is being made. Some kernel data structures keep track of this, and of course it is punched into the page tables so the MMU makes it happen.
vmalloc is similar in principle to mmap, but far simpler because it doesn't deal with multiple backends (devices in the filesystem with a mmap virtual function) and doesn't deal with issues like coalescing and splitting of mappings that mmap allows. The vmalloc area consists of a reserved range of virtual addresses visible only to the kernel (whose base address is architecture-dependent and can be tweaked by you at kernel compile time). The vmalloc allocator carves out this virtual space, and populates it with pages from get_free_pages. These need not be contiguous, and so can be obtained one at a time, and wired into the allocated virtual space.
Highmem pages are physical memory that is not addressable in the kernel's linear map representing physical memory. Highmem exists because the kernel's linear "window" on physical memory isn't always large enough to cover all of memory. (E.g. suppose you have a 1GB window, but 4GB of RAM.) So, for coverage of all of memory, there is in addition to the linear map some smaller "non-linear" map where pages are selectively made visible on a temporary basis using kmap and kunmap. Placement of a page into this view is considered the acquisition of a precious resource that must be used sparingly and released as soon as possible.
A highmem page can be installed into a virtual memory map just like any other page, and no special "highmem" handling is needed for that view of the page. Any map: that of a process, or the vmalloc range.
If you're dealing with some virtual memory that could be a mixture of highmem and non-highmem pages, which you have to view through the kernel's linear space, you have to be prepared to use the mapping functions.
