Is anonymous memory - i.e. program heap and stack - part of the page cache on Linux? The linked documentation of the kernel does not state that.
But the Wikipedia entry about Page Cache contains a graphic (look at the top right) which gives me the impression that malloc() allocates dynamic memory within the page cache.
Does that make sense? When mmap() is used to access files, it makes sense to go through the page cache. But does the same hold for anonymous memory, e.g. malloc() and anonymous mappings created through mmap()?
I would appreciate some explanation.
Thank you.
Edit 2021-03-14
I've decided it is best to ask the maintainers of the kernel's memory subsystem on their mailing list. Luckily Matthew Wilcox responded and helped me. Extract:
Anonymous memory is not handled by the page cache.
Anonymous pages are handled in a number of different ways -- they can be found on LRU lists (Least Recently Used) and they can be found through the page tables. Somewhat ad-hoc.
The Wikipedia diagram is wrong. And it contains further flaws.
If a system provides swap, and anonymous memory is swapped out, it enters the swap cache, not the page cache.
The discussion can be read here or here.
TLDR: No, except for anonymous memory with special filesystem backing (like IPC shmem).
Update: Corrected answer to incorporate new info from the kernel mailing list discussion with OP.
The page cache was originally meant to be an OS-level region of memory for fast lookup of disk-backed files; in its original form it was a buffer cache, meant to cache blocks from disk. The notion of a page cache came about later, in 1995, after Linux's inception, but the premise was similar, just built on a new abstraction: pages [1].
In fact, eventually the two caches became one: the page cache included the buffer cache, or rather, the buffer cache is the page cache [1, 2].
So what does go in the page cache? Aside from traditional disk-backed files, Linux, in an attempt to make the page cache as general-purpose as possible, has a few examples of page types that don't adhere to the traditional notion of disk-backed pages yet are still stored in the page cache. As mentioned, the buffer cache (which is the same as the page cache) is used to store disk-backed blocks of data. Blocks aren't necessarily the same size as pages; in fact, they can be smaller than pages [pg. 323 of 3]. In that case, a page in the buffer cache might consist of multiple blocks corresponding to non-contiguous regions on disk. I'm unclear whether each page in the buffer cache must then map one-to-one to a single file, or whether one page can consist of blocks from different files. Nonetheless, this is one page cache usage that doesn't adhere to the strictest definition of the original page cache.
Next is the swap cache. As Barmar mentioned in the comments, anonymous (non-file-backed) pages can be swapped out to disk. On the way to disk and back, pages are put in the swap cache. The swap cache reuses data structures similar to the page cache's, specifically the address_space struct, albeit with swap flags set and a few other differences [pg. 731 of 4; 5]. However, since the swap cache is considered separate from the page cache, anonymous pages in the swap cache are not considered to be "in the page cache."
Finally, the question of whether mmap/malloc allocate memory in the page cache. As discussed in [5], mmap typically uses memory that comes from the free page list, not the page cache (unless there are no free pages left, I assume). When mmap is used to map files for reading and writing, those pages do end up residing in the page cache. For anonymous memory, however, mmap'd/malloc'd pages do not normally reside in the page cache.
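To make the distinction concrete, here is a minimal user-space sketch of the two mmap() flavors. "data.bin" is just a placeholder file name, and the comments about page cache behavior reflect my understanding above rather than something the program verifies:

```
/* A sketch contrasting anonymous and file-backed mmap(). "data.bin" is a
 * placeholder: any existing file at least one page long will do. */
#define _DEFAULT_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long pagesz = sysconf(_SC_PAGESIZE);

    /* Anonymous mapping: no backing file; pages come from the free list. */
    char *anon = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (anon == MAP_FAILED) { perror("mmap anon"); return 1; }
    anon[0] = 'x';                          /* faults in a fresh zeroed page */

    /* File-backed mapping: the first read faults the page into the page cache. */
    int fd = open("data.bin", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }
    char *filep = mmap(NULL, pagesz, PROT_READ, MAP_PRIVATE, fd, 0);
    if (filep == MAP_FAILED) { perror("mmap file"); return 1; }
    printf("first byte: %c\n", filep[0]);   /* page now resides in the cache */

    munmap(anon, pagesz);
    munmap(filep, pagesz);
    close(fd);
    return 0;
}
```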
One exception to this is anonymous memory that has special filesystem backing. For instance, shared memory mmap'd between processes for IPC is backed by the RAM-based tmpfs [6]. This memory sits in the page cache but is anonymous because no disk file backs it [pg. 600 of 4].
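A minimal sketch of that exception: as I understand it, MAP_SHARED | MAP_ANONYMOUS mappings are backed internally by shmem/tmpfs, so their pages sit in the page cache even though no disk file exists:

```
/* A sketch of the tmpfs-backed exception: shared anonymous memory for IPC.
 * The parent observes the child's write because both map the same shmem pages. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED) { perror("mmap"); return 1; }

    if (fork() == 0) {                  /* child writes to the shared page */
        strcpy(shared, "hello from child");
        _exit(0);
    }
    wait(NULL);
    printf("%s\n", shared);             /* parent sees the child's write */
    munmap(shared, 4096);
    return 0;
}
```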
Sources:
[1] https://lwn.net/Articles/712467/
[2] https://www.thomas-krenn.com/en/wiki/Linux_Page_Cache_Basics
[3] https://www.doc-developpement-durable.org/file/Projets-informatiques/cours-&-manuels-informatiques/Linux/Linux%20Kernel%20Development,%203rd%20Edition.pdf
[4] https://doc.lagout.org/operating%20system%20/linux/Understanding%20Linux%20Kernel.pdf
[5] https://lore.kernel.org/linux-mm/20210315000738.GR2577561@casper.infradead.org/
[6] https://github.com/torvalds/linux/blob/master/Documentation/filesystems/tmpfs.rst
Related
In Linux, pages in memory have a PG_referenced bit which is set when there is a reference to the page. My question is: if there is a memory read/write to an address in a page and there is a cache hit for that address, will it count as a reference to that page?
Is PG_referenced what Linux calls the bit in the hardware page tables that the CPU updates when a page is accessed? Or is it tracking whether a page is referenced by another kernel data structure?
The CPU hardware sets a bit in the page-table entry on access to a virtual page, if it wasn't already set. (I assume most ISAs have functionality like this to help detect less recently used pages: the OS can clear the Accessed bit on some pages and later check which pages still haven't been accessed, without having to actually invalidate them and force a soft page fault on access.)
On x86, for example, the check (for whether to "trap" to an internal microcode assist that atomically sets the Accessed and/or Dirty bit in the PTE and the higher levels of the page directory) is based on the TLB entry caching the PTE.
This is unrelated to D-cache or I-cache hit or miss for the physical address.
Updating the Accessed and/or Dirty bits works even for pages set to be uncacheable, I think.
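If you want to poke at this from user space: per proc(5), writing "1" to /proc/self/clear_refs clears the PG_referenced and Accessed/Young bits for all of the process's pages, and the "Referenced:" fields in /proc/self/smaps report how much of each mapping is currently marked accessed. A hedged sketch (my reading of those interfaces, not something from this answer):

```
/* Clears the per-page referenced bits, touches memory, then prints the
 * "Referenced:" fields from smaps; mappings touched since the clear show
 * nonzero values. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static char buf[1 << 20];               /* 1 MiB we will touch */

int main(void)
{
    /* Clear the referenced/Accessed bits for all pages of this process. */
    int fd = open("/proc/self/clear_refs", O_WRONLY);
    if (fd < 0) { perror("open clear_refs"); return 1; }
    write(fd, "1", 1);
    close(fd);

    memset(buf, 1, sizeof(buf));        /* touch: CPU sets Accessed bits */

    /* Print how much of each mapping is now marked referenced. */
    FILE *f = fopen("/proc/self/smaps", "r");
    if (!f) { perror("fopen smaps"); return 1; }
    char line[256];
    while (fgets(line, sizeof(line), f))
        if (strncmp(line, "Referenced:", 11) == 0)
            fputs(line, stdout);
    fclose(f);
    return buf[0] - 1;                  /* keeps buf alive; exit status 0 */
}
```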
I am currently working on frontswap, which zswap uses to compress RAM pages and store them in RAM. I have one doubt regarding which pages it considers for this.
I read about frontswap at https://lwn.net/Articles/386090/ and at https://www.kernel.org/doc/Documentation/vm/frontswap.txt. It says that it handles swap pages, but it never clearly states whether those are anonymous pages, dirty pages, or both. From my understanding:
Anonymous pages are pages created when your program's space requirements grow. For example, suppose you declare a large matrix for some processing. When you allocate memory for this matrix, that memory corresponds to some RAM pages; we call these pages anonymous because they do not contain file-mapped data.
Dirty pages are pages that we load from secondary storage into RAM and then modify during the life of the process.
Please correct me if I am wrong about the above two definitions.
Frontswap is present as a hook in the swap_readpage() and swap_writepage() functions in page_io.c. So what I really want to know is: what kind of pages are passed through these function calls?
Hello everyone. I am stuck on the following question.
I am working on a hybrid storage system which uses an SSD as a cache layer for a hard disk. To this end, data read from the hard disk should be written to the SSD to speed up subsequent reads of that data. Since Linux caches data read from disk in the page cache, the write of the data to the SSD can be delayed; however, the pages caching the data may be freed, and accessing freed pages is not recommended. Here is the question: I have struct page pointers pointing to the pages to be written to the SSD. Is there any way to determine whether the page a pointer refers to is valid (by "valid" I mean the cached page can be safely written to the SSD)? What will happen if a freed page is accessed via the pointer? Is the data of the freed page the same as before it was freed?
Are you using the cleancache module? You should only get valid pages from it, and they should remain valid until your callback function finishes.
Isn't this a cleancache/frontswap reimplementation? (https://www.kernel.org/doc/Documentation/vm/cleancache.txt).
The benefit of the existing cleancache code is that it calls your code just before it frees a page, i.e. while the page still resides in RAM; when there is no space left in RAM for it, the kernel calls your code to back it up in tmem (transient memory).
Searching I also found an existing project that seems to do exactly this: http://bcache.evilpiepirate.org/:
Bcache is a Linux kernel block layer cache. It allows one or more fast disk drives such as flash-based solid state drives (SSDs) to act as a cache for one or more slower hard disk drives.
Bcache patches for the Linux kernel allow one to use SSDs to cache other block devices. It's analogous to L2Arc for ZFS, but Bcache also does writeback caching (besides just write through caching), and it's filesystem agnostic. It's designed to be switched on with a minimum of effort, and to work well without configuration on any setup. By default it won't cache sequential IO, just the random reads and writes that SSDs excel at. It's meant to be suitable for desktops, servers, high end storage arrays, and perhaps even embedded.
What you are trying to achieve looks like the following:
Before a page is evicted from the page cache, you want to cache it. This concept is called a victim cache; you can look for papers on it.
What you need is a way to "pin" the pages targeted for eviction for the duration of the IO. After the IO completes, you can free the page cache page.
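As a rough sketch of what "pinning" could look like in a kernel-module context (write_page_to_ssd() is a hypothetical backend, not a kernel API, and the exact locking rules depend on your kernel version):

```
/* Sketch: take a reference and the page lock so the page cannot be freed
 * or truncated while we copy it out. */
#include <linux/mm.h>
#include <linux/pagemap.h>

static int write_page_to_ssd(struct page *page);   /* hypothetical */

static int cache_victim_page(struct page *page)
{
    int ret;

    get_page(page);             /* elevated refcount: page can't be freed */
    lock_page(page);            /* serializes against truncation/reclaim */

    if (page->mapping)          /* still attached to its file? safe to copy */
        ret = write_page_to_ssd(page);
    else
        ret = -EAGAIN;          /* truncated while we waited for the lock */

    unlock_page(page);
    put_page(page);             /* drop the pin; eviction may proceed */
    return ret;
}
```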
But, this will delay the eviction, which is possibly needed during memory pressure to create more un-cached pages.
So, one possible solution is to start your caching algorithm a bit before the pagecache eviction starts.
A second possible solution is to set aside a pool of free pages and exchange the page being evicted from the page cache with a page from the free pool, caching the evicted page in the background. But you now need to synchronize with file block deletes, etc.
According to the mlock() man page:
All pages that contain a part of the specified address range are guaranteed to be resident in RAM when the call returns successfully; the pages are guaranteed to stay in RAM until later unlocked.
Does this also guarantee that the physical address of these pages is constant throughout their lifetime, or until unlocked?
If not (that is, if it can be moved by the memory manager - but still remain in RAM), is there anything that can be said about the new location, or the event when such change occur?
UPDATE:
Can anything be said about the coherency of the locked pages in RAM? If the CPU has a cache, then does mlock-ing guarantee RAM coherency with the cache (assuming write-back cache)?
No. Pages that have been mlocked are managed using the kernel's unevictable LRU list. As the name suggests (and mlock() guarantees) these pages cannot be evicted from RAM. However, the pages can be migrated from one physical page frame to another. Here is an excerpt from Unevictable LRU Infrastructure (formatting added for clarity):
MIGRATING MLOCKED PAGES
A page that is being migrated has been isolated from the LRU lists and is held locked across unmapping of the page, updating the page's address space entry and copying the contents and state, until the page table entry has been replaced with an entry that refers to the new page. Linux supports migration of mlocked pages and other unevictable pages. This involves simply moving the PG_mlocked and PG_unevictable states from the old page to the new page.
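To observe this from user space, here is a hedged sketch that mlocks a page and reads its physical frame number from /proc/self/pagemap. Seeing actual PFNs requires root (CAP_SYS_ADMIN) on recent kernels, and triggering a migration (e.g. via /proc/sys/vm/compact_memory) to watch the PFN change is left as an exercise:

```
/* mlock()s one page, then decodes its /proc/self/pagemap entry.
 * Bits 0-54 hold the PFN when bit 63 (present) is set; PFNs read as 0
 * without CAP_SYS_ADMIN on recent kernels. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long pagesz = sysconf(_SC_PAGESIZE);
    char *buf;
    if (posix_memalign((void **)&buf, pagesz, pagesz)) return 1;
    memset(buf, 1, pagesz);                         /* fault the page in */
    if (mlock(buf, pagesz)) { perror("mlock"); return 1; }

    int fd = open("/proc/self/pagemap", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    uint64_t entry;
    off_t off = (off_t)((uintptr_t)buf / pagesz) * sizeof(entry);
    if (pread(fd, &entry, sizeof(entry), off) != sizeof(entry)) return 1;
    close(fd);

    if (entry >> 63)                                /* present bit */
        printf("resident, PFN = 0x%llx\n",
               (unsigned long long)(entry & ((1ULL << 55) - 1)));
    else
        printf("not present\n");
    return 0;
}
```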
I've been reading up on Linux's "swappiness" tuneable, which controls how aggressive the kernel is about swapping applications' memory to disk when they're not being used. If you Google the term, you get a lot of pages like this discussing the pros and cons. In a nutshell, the argument goes like this:
If your swappiness is too low, inactive applications will hog all the system memory that other programs might want to use.
If your swappiness is too high, when you wake up those inactive applications, there's going to be a big delay as their state is read back off the disk.
This argument doesn't make sense to me. If I have an inactive application that's using a ton of memory, why doesn't the kernel page its memory to disk AND leave another copy of that data in-memory? This seems to give the best of both worlds: if another application needs that memory, it can immediately claim the physical RAM and start writing to it, since another copy of it is on disk and can be swapped back in when the inactive application is woken up. And when the original app wakes up, any of its pages that are still in RAM can be used as-is, without having to pull them off the disk.
Or am I missing something?
If I have an inactive application that's using a ton of memory, why doesn't the kernel page its memory to disk AND leave another copy of that data in-memory?
Let's say we did it. We wrote the page to disk but left it in memory. A while later another process needs memory, so we want to kick out the page from the first process.
We need to know with absolute certainty whether the first process has modified the page since it was written out to disk. If it has, we have to write it out again. The way we would track this is to take away the process's write permission to the page back when we first wrote it out to disk. If the process tries to write to the page again there will be a page fault. The kernel can note that the process has dirtied the page (and will therefore need to be written out again) before restoring the write permission and allowing the application to continue.
Therein lies the problem. Taking away write permission from the page is actually somewhat expensive, particularly in multiprocessor machines. It is important that all CPUs purge their cache of page translations to make sure they take away the write permission.
If the process does write to the page, taking a page fault is even more expensive. I'd presume that a non-trivial number of these pages would end up taking that fault, which eats into the gains we were looking for by leaving it in memory.
So is it worth doing? I honestly don't know. I'm just trying to explain why leaving the page in memory isn't so obvious a win as it sounds.
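For intuition, here is a user-space analogy of that bookkeeping using mprotect() and a SIGSEGV handler. The kernel does this with page table entries rather than signals, but the control flow is the same idea:

```
/* Removes write permission from a page, catches the resulting fault, marks
 * the page dirty, and restores write access. */
#define _DEFAULT_SOURCE
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static char *page;
static long pagesz;
static volatile sig_atomic_t dirtied;

static void on_segv(int sig, siginfo_t *si, void *uc)
{
    (void)sig; (void)uc;
    if (si->si_addr >= (void *)page && si->si_addr < (void *)(page + pagesz)) {
        dirtied = 1;                                     /* dirty again */
        mprotect(page, pagesz, PROT_READ | PROT_WRITE);  /* restore access */
    } else {
        _exit(1);                                        /* unrelated crash */
    }
}

int main(void)
{
    pagesz = sysconf(_SC_PAGESIZE);
    page = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    struct sigaction sa = { 0 };
    sa.sa_sigaction = on_segv;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);

    page[0] = 1;                        /* dirty the page */
    mprotect(page, pagesz, PROT_READ);  /* "wrote it to disk": revoke write */
    dirtied = 0;

    page[0] = 2;                        /* faults once; handler notes dirty */
    printf("dirtied = %d\n", dirtied);  /* prints 1 */
    return 0;
}
```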
(*) This whole thing is very similar to a mechanism called Copy-On-Write, which is used when a process fork()s. The child process is very likely going to execute just a few instructions and call exec(), so it would be silly to copy all of the parent's pages. Instead, the write permission is taken away and the child is simply allowed to run. Copy-On-Write is a win because the page fault is almost never taken: the child almost always calls exec() immediately.
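A tiny demonstration of those Copy-On-Write semantics (nothing here is kernel-specific; it just shows that parent and child diverge only on write):

```
/* After fork(), parent and child share all pages; the child's write
 * triggers a copy, so the parent's view is unchanged. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char *buf = malloc(4096);
    strcpy(buf, "original");

    pid_t pid = fork();
    if (pid == 0) {
        strcpy(buf, "child's copy");   /* write fault: kernel copies the page */
        printf("child : %s\n", buf);
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("parent: %s\n", buf);       /* still "original": pages diverged */
    free(buf);
    return 0;
}
```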
Even if you page the app's memory to disk and keep it in memory, you would still have to decide when an application should be considered "inactive", and that's what swappiness controls. Paging to disk is expensive in terms of IO and you don't want to do it too often. There is also another variable in this equation: the fact that Linux uses remaining memory as disk buffers/cache.
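For completeness, the knob itself is just a sysctl. A minimal sketch of setting it programmatically, equivalent to `sysctl vm.swappiness=1` (requires root):

```
/* Writes the vm.swappiness sysctl directly via procfs. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/sys/vm/swappiness", "w");
    if (!f) { perror("fopen"); return 1; }   /* needs root */
    fputs("1", f);
    fclose(f);
    return 0;
}
```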
According to this, that is exactly what Linux does.
I'm still trying to make sense of a lot of this, so any authoritative links would be appreciated.
The first thing the VM does is clean pages and move them to the clean list.
When cleaning anonymous memory (things which do not have an actual file backing store; you can see the segments in /proc/<pid>/maps which are anonymous and have no filesystem vnode storage behind them), the first thing the VM is going to do is take the "dirty" pages and "clean" them by writing the contents of each page out to swap. Then, when the VM has a shortage of completely free memory and is worried about its ability to grant new free pages, it can go through the list of 'clean' pages and, based on how recently they were used and what kind of memory they are, move those pages to the free list.
Once memory pages are placed on the free list, they are no longer associated with the contents they had before. If a program comes along and references the memory location the page was serving previously, the program will take a major fault, a (most likely completely different) page will be grabbed from the free list, and the data will be read into that page from disk. Once this is done, the page is actually still 'clean', since it has not been modified. If the VM chooses to reuse that page's slot on swap for a different page in RAM, or if the app writes to the page, it becomes 'dirty' again. And then the process begins again.
Also, swappiness is pretty horrible for server applications in a business/transactional/online/latency-sensitive environment. When I've got 16GB RAM boxes where I'm not running a lot of browsers and GUIs, I typically want all my apps nearly pinned in memory. The bulk of my RAM tends to be 8-10GB Java heaps that I NEVER want paged to disk, ever, and the cruft that is available consists of processes like mingetty (but even there, the glibc pages in those apps are shared by other apps and actually used, so even the RSS size of those useless processes is mostly shared, used pages). I normally don't see more than a few tens of MBs of the 16GB actually cleaned to swap. I would advise very, very low swappiness numbers, or zero swappiness, for servers: the unused pages should be a small fraction of the overall RAM, and trying to reclaim that relatively tiny amount of RAM for buffer cache risks swapping application pages and taking latency hits in the running app.