Linux huge page value in sysctl.conf - linux

Why do we configure the huge page value in Linux?
When do we configure the huge page value, and how do we calculate it?

Usually the huge page value is configured when large memory pages need to be allocated contiguously (in a sequence) in RAM.
The link below has an example which explains when and how:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Tuning_and_Optimizing_Red_Hat_Enterprise_Linux_for_Oracle_9i_and_10g_Databases/sect-Oracle_9i_and_10g_Tuning_Guide-Large_Memory_Optimization_Big_Pages_and_Huge_Pages-Sizing_Big_Pages_and_Huge_Pages.html
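To make the calculation concrete, here is a minimal Python sketch (my own illustration, not taken from the linked guide) that reads the huge page size from /proc/meminfo and derives a vm.nr_hugepages value; the 8 GiB target is a hypothetical stand-in for whatever shared-memory area (such as a database SGA) your application needs:

    import math

    TARGET_BYTES = 8 * 1024**3  # hypothetical target: 8 GiB of huge-page-backed memory

    def hugepage_size_bytes():
        """Read the huge page size (commonly 2 MiB on x86_64) from /proc/meminfo."""
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("Hugepagesize:"):
                    return int(line.split()[1]) * 1024  # the value is reported in kB
        raise RuntimeError("Hugepagesize not found in /proc/meminfo")

    page = hugepage_size_bytes()
    nr_hugepages = math.ceil(TARGET_BYTES / page)
    print(f"Huge page size: {page // 1024} kB")
    print(f"Suggested /etc/sysctl.conf entry:  vm.nr_hugepages = {nr_hugepages}")

With 2 MiB huge pages this prints vm.nr_hugepages = 4096 for the 8 GiB example.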
When you need to set the huge page value:
When applications require large blocks of memory for processing.
The translation lookaside buffer (TLB) is a caching mechanism for virtual-to-physical address mappings, used for quicker memory access. During memory management, mapping entries are placed in the TLB so that memory can be accessed quickly whenever required. (For more about the TLB, see https://en.wikipedia.org/wiki/Translation_lookaside_buffer)
The TLB has a fixed number of slots, so it is a scarce resource. When an application requires large blocks of memory, using huge pages means fewer entries are needed in the TLB, so the TLB is used much more effectively.
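To see why fewer TLB entries are needed, here is a small back-of-the-envelope Python sketch (the 1 GiB working-set size is just an example I chose):

    working_set = 1 * 1024**3  # 1 GiB, an arbitrary example working set

    for name, page_size in (("4 KiB pages", 4 * 1024), ("2 MiB huge pages", 2 * 1024**2)):
        mappings = working_set // page_size
        print(f"{name}: {mappings} mappings needed to cover the working set")

    # 4 KiB pages      -> 262144 mappings
    # 2 MiB huge pages -> 512 mappings
    # so huge pages need far fewer TLB entries for the same amount of memory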
If you want more in-depth information on huge pages and the TLB, please walk through the kernel documentation below, though it goes into considerable depth.
https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt

Related

Memory capacity saturation and minor page faults

In the USE Method: Linux Performance Checklist it is mentioned that:
The goal is a measure of memory capacity saturation - the degree to which a process is driving the system beyond its ability (and causing paging/swapping). [...] Another metric that may serve a similar goal is minor-fault rate by process, which could be watched from /proc/PID/stat.
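For reference, the counters that quote mentions can be read straight out of /proc/PID/stat; a minimal Python sketch (fields 10 and 12 of that file hold minflt and majflt, so sampling them periodically gives the per-process minor-fault rate):

    import os
    import sys

    def fault_counts(pid):
        """Return (minflt, majflt) for a process from /proc/<pid>/stat."""
        with open(f"/proc/{pid}/stat") as f:
            data = f.read()
        # The comm field (field 2) may contain spaces, so split after the final ')'
        rest = data.rsplit(")", 1)[1].split()
        # rest[0] is field 3 (state); minflt is field 10, majflt is field 12
        return int(rest[7]), int(rest[9])

    pid = int(sys.argv[1]) if len(sys.argv) > 1 else os.getpid()
    minflt, majflt = fault_counts(pid)
    print(f"pid {pid}: minor faults = {minflt}, major faults = {majflt}")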
I'm not sure I understand what minor-faults have to do with memory saturation.
Quoting Wikipedia for reference:
If the page is loaded in memory at the time the fault is generated, but is not marked in the memory management unit as being loaded in memory, then it is called a minor or soft page fault.
I think what the book is referring to is the following OS behaviour, which could make soft page faults increase with memory pressure. But there are other reasons for soft page faults (for example, allocating new pages with mmap(MAP_ANONYMOUS) and then freeing them again: every first touch of a new page will cost a soft page fault, although fault-around for a group of contiguous pages can reduce that to one fault per N pages, for some small N, when iterating through a new large allocation).
When approaching memory pressure limits, Linux (like many other OSes) will un-wire a page in the HW page tables to see if a soft page fault happens very soon. If not, it may actually evict that page from memory1.
But if it does take a soft page fault before being evicted, the kernel just has to wire it back into the page table, having saved a hard page fault (and the I/O to write it out in the first place).
Footnote 1: Writing it to disk if dirty (either to swap space, or to a file-backed mapping if not anonymous); otherwise just dropping it. The kernel could start this disk I/O while waiting to see if the page gets faulted back in; I don't know whether Linux does this or not.
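To see the "other reasons" case from above in action, here is a small Python sketch that first-touches the pages of a fresh anonymous mapping and reads the process's minor-fault counter before and after (using getrusage, so no external tools are needed):

    import mmap
    import resource

    PAGE = resource.getpagesize()
    N_PAGES = 1024                       # 4 MiB worth of pages on a 4 KiB-page system

    buf = mmap.mmap(-1, N_PAGES * PAGE)  # fresh anonymous mapping, not yet touched

    before = resource.getrusage(resource.RUSAGE_SELF).ru_minflt
    for i in range(N_PAGES):
        buf[i * PAGE] = 1                # first touch of each page
    after = resource.getrusage(resource.RUSAGE_SELF).ru_minflt

    print(f"minor faults during first touch: {after - before} for {N_PAGES} pages")
    buf.close()

Because of fault-around, the reported count can be noticeably smaller than N_PAGES; no major faults occur because nothing is read from disk.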

Virtually contiguous vs. physically contiguous memory

Is virtually contiguous memory also always physically contiguous? If not, how is virtually contiguous memory allocated and memory-mapped over physically non-contiguous RAM blocks? A detailed answer is appreciated.
Short answer: You need not care (unless you're a kernel/driver developer). It is all the same to you.
Longer answer: On the contrary, virtually contiguous memory is usually not physically contiguous (except in very small amounts, by coincidence, or shortly after the machine has booted). It doesn't need to be, however.
The only way of allocating larger amounts of physically contiguous RAM is by using large pages (since the memory within one page needs to be contiguous). It is, however, a useless endeavour, since there is no observable difference to your process between memory you think is contiguous and memory that actually is, and there are disadvantages to using large pages.
Memory mapping over physically non-contiguous RAM works in no particularly "special" way. It follows the same method that all memory management follows.
The OS divides virtual memory into "pages" and creates page table entries for your process. When you access memory at some location, either the corresponding page does not exist at all, or it exists and corresponds to a real page in RAM, or it exists but doesn't correspond to a real page in RAM.
If the page exists in RAM, nothing happens at all1. Otherwise a fault is generated and some operating system code is run. If it turns out the page doesn't exist at all (or does not have the correct access rights), your process is killed with a segmentation fault.
Otherwise, the OS chooses an arbitrary page that isn't used (or swaps out the one it thinks is the least important), and loads the data from disk into that page. In the case of a memory mapping, the data comes from the mapped file; otherwise it comes from swap (and for completely newly allocated memory, the zero page is copied). The OS then returns control back to your process. You never know this happened.
If you access another location in a "contiguous" (or so you think!) memory area which lies in a different page, the exact same procedure runs.
1 In reality, it is a little more complicated, since a page may exist in RAM but not exist "officially", being part of a list of pages that are to be recycled or such. But this gets too complicated.
No, it doesn't have to. Any page of virtual memory can be mapped to an arbitrary physical page. Therefore you can have adjacent pages of your virtual memory pointing to non-adjacent physical pages. This mapping is maintained by the OS and is used by the CPU's MMU.
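If you want to see this for yourself, the following Python sketch touches a few anonymous pages and reads their physical frame numbers from /proc/self/pagemap (bit 63 of each entry is the "present" flag, bits 0-54 the frame number); note that unprivileged reads report frame 0 on modern kernels, so run it as root to see real values:

    import ctypes
    import mmap
    import struct

    PAGE = mmap.PAGESIZE
    N_PAGES = 8

    buf = mmap.mmap(-1, N_PAGES * PAGE)
    for i in range(N_PAGES):
        buf[i * PAGE] = 1                       # touch each page so it is actually backed by RAM

    addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
    with open("/proc/self/pagemap", "rb") as pagemap:
        for i in range(N_PAGES):
            virtual_page = (addr + i * PAGE) // PAGE
            pagemap.seek(virtual_page * 8)      # one 64-bit entry per virtual page
            (entry,) = struct.unpack("<Q", pagemap.read(8))
            present = entry >> 63
            frame = entry & ((1 << 55) - 1)     # bits 0-54: physical frame number
            print(f"virtual page {i}: present={present} physical frame={frame}")

Adjacent virtual pages typically come back with scattered frame numbers, which is exactly the point of the answers above.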

Why do MongoDB's memory-mapped files cause programs like top to show larger numbers than normal?

I am trying to wrap my head around the internals of MongoDB, and I keep reading about this:
http://www.theroadtosiliconvalley.com/technology/mongodb-mongo-nosql-db/
Why does this happen?
The way memory-mapped files work is that addresses in memory are mapped byte for byte to a file on disk. This makes it really fast, but also really large. Imagine a file on disk for your data taking up that amount of memory.
Why it's awesome
In practice, this rocks because reading from and writing to memory directly, instead of issuing a system call (think context switch), is fast. Also, in practice, the fact that this huge memory-mapped chunk doesn't fit in your physical RAM is fine. Why? You only need the working set of data to fit in RAM, because unused pages are not loaded and are just kept on disk. If they are needed, a page fault happens and the page gets loaded. (I believe the portion that has been loaded is referred to as resident memory.)
Why it kind of sucks
Files mapped in memory need to be page-aligned, so if you don't use up the memory space exactly to a page boundary, you waste space (a small trade-off).
Summary (tl;dr)
It may look like it's taking up a lot of resources, because it's mapping the entirety of your data to memory addresses, but it doesn't really matter, as that data isn't actually all being held in RAM. Mongo will pull in data as it needs it and use memory effectively to maintain a performant working set.
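A quick way to convince yourself of this is to map a large sparse file and compare the virtual and resident sizes reported by the kernel; here is a rough Python sketch (the 1 GiB size and /tmp path are arbitrary choices for the demo):

    import mmap
    import os

    def vm_status():
        """Return (VmSize, VmRSS) in kB from /proc/self/status."""
        sizes = {}
        with open("/proc/self/status") as f:
            for line in f:
                if line.startswith(("VmSize:", "VmRSS:")):
                    sizes[line.split(":")[0]] = int(line.split()[1])
        return sizes["VmSize"], sizes["VmRSS"]

    SIZE = 1024**3                              # 1 GiB, created as a sparse file (no real data)
    path = "/tmp/mmap_demo.dat"
    with open(path, "wb") as f:
        f.truncate(SIZE)

    fd = os.open(path, os.O_RDWR)
    before = vm_status()
    mapping = mmap.mmap(fd, SIZE)               # map the whole file into the address space
    after = vm_status()

    print(f"before mapping: VmSize={before[0]} kB  VmRSS={before[1]} kB")
    print(f"after  mapping: VmSize={after[0]} kB  VmRSS={after[1]} kB")

    mapping.close()
    os.close(fd)
    os.unlink(path)

VmSize jumps by roughly a gigabyte while VmRSS barely moves, which is why top's virtual-size column looks alarming for an mmap-heavy process like mongod even though only the touched (resident) pages occupy RAM.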

What are the exact conditions based on which Linux swaps a process's memory from RAM to a swap file?

My server has 8 GB of RAM and 8 GB configured for the swap file. I have memory-intensive apps running. These apps have peak loads during which we find swap usage increase; approximately 1 GB of swap is used.
I have another server with 4 GB of RAM and 8 GB of swap, with similar memory-intensive apps running on it. But here swap usage is negligible, around 100 MB.
I was wondering what the exact conditions are, or what rough formula Linux uses, to decide when to swap out a process's memory from RAM to the swap file.
I know it's based on the swappiness factor. What else is it based on? Swap file size? Any pointers to Linux kernel documentation or source code explaining this would be great.
I've seen a lot of people posting subjective explanations of what this does. Here is hopefully a fuller answer.
In the split LRU used since Linux 2.6.28, swappiness is a multiplier used to modify the fraction that is calculated when determining the reclaim pressure built up in both LRUs.
So, for example, on a system with no free memory left, the value of the existing memory you have is measured based on how much memory is listed as 'Active' and how often pages are promoted to the active list after falling into the inactive list.
An LRU with many promotions/demotions of pages between the active and inactive lists is in heavy use.
Typically, file-backed storage is cheaper and safer to evict when you're running out of memory, so it is automatically given a modifier of 200 (making file-backed memory 200 times less valuable than swap-backed memory, which has a value of 0, when this fraction is multiplied).
What swappiness does is modify this value by deducting the swappiness number you gave (default 60) from file memory and adding the swappiness value you gave to anon memory. Thus the default swappiness leaves you with anonymous memory being 80 times more valuable than file memory (200-60 for file, 0+60 for anon). Thus, on a typical Linux system that has used up all its memory, the page cache would have to be 80 times more active than anonymous memory for anonymous memory to be swapped out in favour of page cache.
If you set swappiness to 100, anon gets a modifier of 100 and file memory a modifier of 100 (200 - 100), leaving both LRUs equally weighted. Thus, on a file-heavy system that wants page cache, provided the anon memory is not as active as the page cache, anon memory will be swapped to disk to make space for extra page cache.
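The arithmetic that answer describes is simple enough to write down; a small Python sketch of the weighting (file pages start at 200 and swappiness is subtracted, anonymous pages start at 0 and swappiness is added):

    def lru_weights(swappiness):
        """Relative reclaim weights for file-backed vs anonymous memory."""
        file_weight = 200 - swappiness
        anon_weight = swappiness
        return file_weight, anon_weight

    for s in (0, 60, 100):
        file_weight, anon_weight = lru_weights(s)
        print(f"swappiness={s:3}: file={file_weight:3}  anon={anon_weight:3}")

    # swappiness=0   -> file=200, anon=0   (avoid swapping anonymous memory)
    # swappiness=60  -> file=140, anon=60  (the default)
    # swappiness=100 -> file=100, anon=100 (both LRUs weighted equally)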
Linux (or any other OS) divides memory up into pages (typically 4 KB). Each of these pages represents a chunk of memory. Usage information is maintained for these pages, which basically contains info about whether the page is free or in use (part of some process), whether it has been accessed recently, what kind of data it contains (process data, executable code, etc.), the owner of the page, and so on. These pages can also be broadly divided into two categories: filesystem pages, or the page cache (in which all data read from/written to your filesystem resides), and pages belonging to processes.
When the system is running low on memory, the kernel starts swapping out pages based on their usage. Using a list of pages sorted with respect to recency of access is a common way of determining which pages can be swapped out (the Linux kernel has such a list too).
During swapping, the Linux kernel needs to decide what to trade off when nuking pages in memory and sending them to swap. If it swaps filesystem pages too aggressively, more reads are required from the filesystem to read those pages back when they are needed. However, if it swaps out process pages more aggressively, it can hurt interactivity, because when the user tries to use the swapped-out processes, they will have to be read back from disk. See a nice discussion here on this.
By setting swappiness = 0, you are telling the Linux kernel not to swap out pages belonging to processes. When setting swappiness = 100 instead, you tell the kernel to swap out pages belonging to processes more aggressively. To tune your system, try changing the swappiness parameter in steps of 10, monitoring performance and the pages being swapped in/out at each setting using the "vmstat" command. Keep the setting that gives you the best results. Remember to do this testing during peak usage hours. :)
For database applications, swappiness = 0 is generally recommended. (Even then, test different settings on your systems to arrive at a good value.)
References:
http://www.linuxvox.com/2009/10/what-is-the-linux-kernel-parameter-vm-swappiness/
http://www.pythian.com/news/1913/

Find out how many pages of memory a process uses on linux

I need to find out how many pages of memory a process allocates.
Each page is 4096 bytes, but I'm having some problems locating the correct value for the process's memory usage. When I look in gnome-system-monitor, there are a few values to choose from under the memory map.
Thanks.
The point of this is to divide the memory usage by the page count and verify the page size.
It's hard to figure out the exact amount of memory allocated: there are pages shared with other processes (read-only parts of libraries), never-used memory allocated by brk and anonymous mmap, mmapped files that are not fetched from disk completely because efficient processing algorithms touch only a small part of the file, swapped-out pages, dirty pages waiting to be written to disk, and so on.
If you want to deal with all this complexity and figure out the true count of pages, the detailed information is available in /proc/<pid>/smaps, and there are tools, like mem_usage.py or smem.pl (easily googlable), to turn it into a more-or-less usable summary.
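In the same spirit as those tools, here is a stripped-down Python sketch that sums the Rss lines in /proc/<pid>/smaps and converts the total into a page count (pass a PID as the argument; it defaults to the script's own process):

    import os
    import sys

    def resident_pages(pid):
        """Sum the Rss of every mapping in /proc/<pid>/smaps and return (kB, pages)."""
        page_kb = os.sysconf("SC_PAGE_SIZE") // 1024
        rss_kb = 0
        with open(f"/proc/{pid}/smaps") as f:
            for line in f:
                if line.startswith("Rss:"):
                    rss_kb += int(line.split()[1])   # values are reported in kB
        return rss_kb, rss_kb // page_kb

    pid = sys.argv[1] if len(sys.argv) > 1 else "self"
    rss_kb, pages = resident_pages(pid)
    print(f"resident set: {rss_kb} kB = {pages} pages of {os.sysconf('SC_PAGE_SIZE')} bytes")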
This would be the "Resident Set Size", assuming your process doesn't use swap.
Note that a process may allocate far more memory ("Virtual Memory Size"), but as long as it doesn't write to that memory, it is not backed by physical memory, be it in RAM or on disk.
Some system tools, like top, display a huge value for "swap" for each process. This is of course completely wrong; the value is the difference between VMS and RSS, which most likely consists of those unused but allocated memory pages.

Resources