If a process initially has a number of pages allocated to it in the heap, but a lot of the data in the pages has been deallocated, is there some sort of optimization that the OS does to consolidate the data into one page so that the other pages can be freed?
In general, nothing happens, the heap will continue to have "holes" in it.
Since the (virtual) memory addresses known by a process must remain valid, the operating system cannot perform "heap compaction" on its own. However, some managed runtimes, such as .NET, do perform compaction.
If you are using C or C++, all you can hope for by default is that malloc() will be able to reuse previously deallocated chunks. But if your usage pattern is "allocate a lot of small objects then deallocate half of them at random," the memory utilization will probably not decrease much from the peak.
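To make that concrete, here is a minimal sketch of that usage pattern (the object size and count are arbitrary):

```c
/* Sketch: many small allocations, about half freed at random.
 * The freed chunks become holes that malloc() can reuse, but the
 * process's footprint generally stays near its peak, because live
 * objects pin the pages around the holes. */
#include <stdlib.h>

int main(void)
{
    enum { N = 100000 };
    static void *p[N];           /* static: keeps the array off the stack */

    for (int i = 0; i < N; i++)
        p[i] = malloc(64);       /* many small objects */

    for (int i = 0; i < N; i++)
        if (rand() % 2) {        /* deallocate half of them at random */
            free(p[i]);
            p[i] = NULL;
        }

    /* At this point the heap is fragmented: plenty of free space, but
     * scattered, so little or none of it goes back to the OS. */

    for (int i = 0; i < N; i++)
        free(p[i]);              /* free(NULL) is a no-op */
    return 0;
}
```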
If a process initially has a number of pages allocated to it in the heap
A process will not initially have pages allocated in a heap.
is there some sort of optimization that the OS does to consolidate the data into one page so that the other pages can be freed
The operating system has no knowledge of user heaps. It allocates pages to the process. What that process does with those pages is up to it (i.e., use them for a heap, stack, code, etc.).
A process's heap manager can consolidate freed chunks of memory. When this occurs, it is normally done to fight heap fragmentation. However, I have never seen a heap manager on a paging system that unmaps pages once they are mapped by the operating system.
The heap of a process never has holes in it. The heap is part of the data segment allocated to a process, which grows dynamically upward toward the stack segment, basically by means of the sbrk(2) system call (which sets a new size for the data segment), so the heap is a contiguous range of allocated pages, at least in terms of virtual address space. malloc(3) never returns heap space (or part of it) to the system; see malloc(3) for details. While there are memory allocators that allow a process to have several heaps (by allocating new memory segments with the mmap(2) system call), the segments allocated by a memory allocator are likewise commonly never returned to the system.
What happens is that the memory allocator reuses the heap space obtained with sbrk(2) and mmap(2), managing the memory for reuse, but it is never returned to the system.
But don't worry, the system handles this in a reasonable and efficient way anyway.
This should not affect overall system management much, except that it consumes virtual address space, and page contents will probably end up on the swap device if you don't use them, until the process references them again and makes the system reload them from the swap device(s). If your process doesn't reuse the holes it creates in the heap, the most probable outcome is that the system moves those pages to the swap device and reuses the physical memory for other processes.
At this moment, I don't know whether the system optimizes swap allocation by not swapping out zeroed pages, as it does, for example, with the text segments of executables (those never go to a swap device, because their contents are already backed by the executable file; this was the reason you couldn't delete a running program's executable on ancient Unices, and the reason the sticky bit is no longer needed for frequently used programs), but I think it doesn't, the reason being that it is most improbable that the application will have zeroed the unused pages.
Be worried only if, say, you have a single process with a 15 GB heap and 90% of it is unused most of the time. Even then, think first about optimizing your allocation pattern, because a process that consumes 15 GB of heap while 90%+ of it sits unused most of the time looks like a poor design. If you have no other choice, simply provide enough swap space for your system to accommodate it.
Related
My understanding is that kmalloc() allocates from anonymous memory. Does this actually reserve the physical memory immediately or is that going to happen only when a write page fault happens?
kmalloc() does not usually allocate memory pages(1); it's more complicated than that. kmalloc() is used to request memory for small objects (smaller than the page size), and it manages those requests using already existing memory pages, similarly to what libc's malloc() does to manage the heap in userspace programs.
There are different allocators that can be used in the Linux kernel: SLAB, SLOB and SLUB. Those use different approaches for the same goal: managing allocation and deallocation of kernel memory for small objects at runtime. A call to kmalloc() may use any of the three depending on which one was configured into the kernel.
A call to kmalloc() does not usually reserve memory at all, but rather just manages already reserved memory. Therefore memory returned by kmalloc() does not require a subsequent page fault like a page requested through mmap() normally would.
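As a small illustration, here is a hypothetical kernel-module sketch of the kmalloc()/kfree() pattern (the module itself is made up; the slab API calls are the standard ones):

```c
/* Hypothetical module: kmalloc() returns an object carved out of an
 * already-mapped slab page, so it is usable immediately, without a
 * page fault on first access. */
#include <linux/module.h>
#include <linux/slab.h>

static char *buf;

static int __init demo_init(void)
{
    buf = kmalloc(64, GFP_KERNEL);  /* small object: served from a slab */
    if (!buf)
        return -ENOMEM;
    buf[0] = 42;                    /* usable right away */
    return 0;
}

static void __exit demo_exit(void)
{
    kfree(buf);                     /* returns the object to its slab */
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
```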
Copy on Write (CoW) is a different concept. Although it's still triggered through page faults, CoW is a mechanism used by the kernel to save space sharing existing mappings until they are modified. It is not what happens when a fault is triggered on a newly allocated memory page. A great example of CoW is what happens when the fork syscall is invoked: the process memory of the child process is not immediately duplicated, instead existing pages are marked as CoW, and the duplication only happens at the first attempt to write.
I believe this should clear any doubt you have. The short answer is that kmalloc() does not usually "reserve the physical memory immediately" because it merely allocates an object from already reserved memory.
(1) Unless the request is very large, in which case it falls back to actually allocating new memory through alloc_pages().
I'm writing a memory allocation routine, and it's currently running smoothly. I get my memory from the OS with mmap() in 4096-byte pages. When I start my memory allocator I allocate 1gig of virtual address space with mmap(), and then as allocations are made I divide it up into hunks according to the specifics of my allocation algorithm.
I feel safe allocating as much as a gig of memory on a whim because I know mmap() doesn't actually put pages into physical memory until I actually write to them.
Now, the program using my allocator might have a spurt where it needs a lot of memory, in which case the OS would eventually have to put a whole gig's worth of pages into physical RAM. The trouble is that the program might then go into a dormant period where it frees most of that gig and then uses only minimal amounts of memory. Yet all I really do inside my allocator's MyFree() function is flip a few bits of bookkeeping data which mark the previously used gig as free, and I know this doesn't cause the OS to remove those pages from physical memory.
I can't use something like munmap() to fix this problem, because the nature of the allocation algorithm is such that it requires a continuous region of memory without any holes in it. Basically I need a way to tell the OS "Listen, you can take these pages out of physical memory and clear them to 0, but please remap them on the fly when I need them again, as if they were freshly mmap()'d"
What would be the best way to go about this?
Actually, after writing this all up, I just realized that I can probably do an munmap() followed immediately by a fresh mmap(). Would that be the correct way to go about it? I get the sense that there's probably some more efficient way to do this.
You are looking for madvise(addr, length, MADV_DONTNEED). From the manpage:
MADV_DONTNEED: Do not expect access in the near future. (For the time being, the application is finished with the given range, so the kernel can free resources associated with it.) Subsequent accesses of pages in this range will succeed, but will result either in reloading of the memory contents from the underlying mapped file (see mmap(2)) or zero-fill-on-demand pages for mappings without an underlying file.
Note especially the language about how subsequent accesses will succeed but revert to zero-fill-on-demand (for mappings without an underlying file).
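A minimal sketch of how a free routine might apply this, assuming the freed range lies inside your big mmap()'d arena; the helper name and the hard-coded 4096-byte page size are assumptions:

```c
#include <stdint.h>
#include <sys/mman.h>

/* Hypothetical helper: tell the kernel it may reclaim the physical
 * pages fully covered by a freed range. The virtual mapping stays
 * valid; the next access faults in fresh zero-filled pages. */
static void release_pages(void *addr, size_t len)
{
    uintptr_t start = ((uintptr_t)addr + 4095) & ~(uintptr_t)4095; /* round up   */
    uintptr_t end   = ((uintptr_t)addr + len)  & ~(uintptr_t)4095; /* round down */
    if (end > start)
        madvise((void *)start, end - start, MADV_DONTNEED);
}
```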
Your thinking-out-loud alternative of an munmap followed immediately by another mmap will also work, but it risks kernel-side inefficiencies because the kernel is no longer tracking the allocation as a single contiguous region; if there are many such unmap-and-remap events, the kernel-side data structures might wind up quite bloated.
By the way, with this kind of allocator it's very important that you use MAP_NORESERVE for the initial allocation, and then touch each page as you allocate it, and trap any resulting SIGSEGV and fail the allocation. (And you'll need to document that your allocator installs a handler for SIGSEGV.) If you don't do this your application will not work on systems that have disabled memory overcommit. See the mmap manpage for more detail.
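For what it's worth, here is one hedged sketch of that probe-and-trap idea; the function names are illustrative, and a real allocator would need more care (threads, alternate signal stacks, and so on):

```c
#include <setjmp.h>
#include <signal.h>

static sigjmp_buf probe_env;

static void on_segv(int sig)
{
    (void)sig;
    siglongjmp(probe_env, 1);    /* back out of the failed touch */
}

/* Hypothetical helper: returns 0 if the page could be committed,
 * -1 if touching it raised SIGSEGV (no memory to back it). */
static int commit_page(volatile char *page)
{
    struct sigaction sa, old;
    sa.sa_handler = on_segv;
    sa.sa_flags = 0;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, &old);

    int rc = 0;
    if (sigsetjmp(probe_env, 1) == 0)
        *page = 0;               /* first touch commits the page, or faults */
    else
        rc = -1;                 /* commit failed: report allocation failure */

    sigaction(SIGSEGV, &old, NULL);
    return rc;
}
```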
A program is stored on flash/disk. For execution, the program is loaded into virtual memory and mapped to RAM by the virtual memory manager. During execution, the process is in RAM. So where does virtual memory exist (where does it hold the .text, .data, .stack, and .heap)?
Virtual memory is a view of the RAM, plus possibly some swap space, provided by a virtual memory manager. Modern OSes have virtual memory managers and provide virtual memory to processes, so that the executing program can behave as if it had a contiguous address space whose size is not limited by the actual RAM. The pages or blocks making up the virtual memory can be mapped anywhere in the RAM, so contiguous virtual pages need not be stored in contiguous RAM areas. They can also be swapped out to page space or swap space, waiting there until needed, whereupon they are read back by the OS and mapped to some RAM page.
When you say
During execution, the process is in RAM.
This is not entirely correct. Some or all memory pages that belong to the process may be swapped out, as explained.
One more word concerning the answers and comments that say that "virtual" means it doesn't exist. This makes no sense. On the contrary, according to Webster:
being such in essence or effect ...
Hence virtual memory is something (therefore, it exists!) that behaves as if it were memory.
Virtual memory is like an illusion of RAM. Through paging, it lets the operating system give processes more usable memory than the physical RAM alone provides.
Virtual memory means memory you can access with "normal" memory access methods, although it isn't clear where the data is actually stored.
It may be
actually in RAM
in a swap area
in another file (memory mapped file)
and access to it will be handled appropriately.
It is a layer of, well, virtualization so that you as a programmer don't have to worry about where the data is actually put.
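For example, this minimal sketch maps a (hypothetical) file and reads it through an ordinary pointer; whether a given byte currently lives in RAM or is still on disk is invisible to the code:

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.bin", O_RDONLY);  /* "data.bin" is made up */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0 || st.st_size == 0) { close(fd); return 1; }

    /* The file's contents become plain readable memory. */
    const char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    putchar(p[0]);                        /* an ordinary memory access */

    munmap((void *)p, st.st_size);
    close(fd);
    return 0;
}
```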
The original purpose was mainly to be able to provide processes with more memory than is physically present, extending it by means of swap space, but there is more to it:
The OS is free to use the RAM for whatever it deems necessary, e.g. caching. Under some circumstances, it may be more effective to use RAM for cache than for holding parts of a program which haven't been used for a long time.
Provide additional memory to a program when it requests it: if you call malloc(), the program's library may ask the OS to provide a chunk of memory which can be attached seamlessly to the address space.
Avoid stack overflow: if the stack grows larger and larger, the respective memory section may be transparently extended as well, so that the program won't have to worry about it.
A system can even do "overcommitment" of memory: if a process requests a large amount of memory, the OS may say "yes, ok", i.e. provide the memory to the program. In the first place that only means "allow the program to access a certain address space area"; the address space is not immediately backed by memory. Only when the program actually accesses this memory is the mapping made, and if that cannot be fulfilled, the program is killed by the Out Of Memory (OOM) killer (at least, under Linux).
All this works by page-wise (1 page = 4 KiB) assignment of physical memory to the program, viewed through the program's address space, in the amount and at the frequency needed.
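As a small demonstration of that on-demand behavior (a sketch; the 1 GiB size is arbitrary):

```c
/* Sketch: a large anonymous mapping succeeds immediately, but physical
 * pages are assigned only as each page is first touched. */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 1UL << 30;      /* 1 GiB of address space */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* At this point almost no physical RAM backs the mapping. Each
     * first write below triggers a page fault that maps one page. */
    for (size_t off = 0; off < len; off += 4096)
        p[off] = 1;

    munmap(p, len);
    return 0;
}
```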
How does libc communicate with the OS (e.g., a Linux kernel) to manage memory? Specifically, how does it allocate memory, and how does it release memory? Also, in what cases can it fail to allocate and deallocate, respectively?
That is a very general question, but I want to speak to the failure to allocate. It's important to realize that memory is actually allocated by the kernel upon first access. What you are doing when calling malloc/calloc/realloc is reserving some addresses inside the virtual address space of the process (libc does that via syscalls such as brk and mmap).
When I see malloc or similar fail (or libc's brk or mmap fail), it's usually because the process's virtual address space is exhausted. This happens when there is no contiguous block of free addresses and no room to expand an existing one. You can either exhaust all available space or hit the RLIMIT_AS limit. It's pretty common, especially on 32-bit systems when using multiple threads, because people sometimes forget that each thread needs its own stack. Stacks usually consume several megabytes, which means you can create only a few hundred threads before you run out of free address space. Perhaps an even more common cause of exhausted address space is memory leaks. Libc of course tries to reuse space on the heap (space obtained by the brk syscall) and tries to munmap unneeded mappings, but it can't reuse anything that is never deallocated.
A shortage of physical memory is not detectable from within a process (or from libc, which is part of the process) through a failed allocation. You can hit the overcommit limit, but that doesn't mean physical memory is all taken. When free physical memory runs low, the kernel invokes a special task called the OOM killer (Out Of Memory killer), which terminates some processes in order to free memory.
Regarding failure to deallocate, my guess is it doesn't happen unless you do something silly. I can imagine setting the program break (the end of the heap) below its original position (via the brk syscall). That is, of course, a recipe for disaster; hopefully libc won't do that, and it wouldn't make much sense anyway. But it could be seen as a failed deallocation. munmap can also fail if you supply a silly argument, but I can't think of a regular reason for it to fail. That doesn't mean one doesn't exist; we would have to dig deep into the glibc/kernel source code to find out.
1) how does it allocate memory
libc provides malloc() to C programs.
Normally, malloc() allocates memory from the heap, and adjusts the size of the heap as required, using sbrk(2). When allocating blocks of memory larger than MMAP_THRESHOLD bytes, the glibc malloc() implementation allocates the memory as a private anonymous mapping using mmap(2). MMAP_THRESHOLD is 128 kB by default, but is adjustable using mallopt(3). Allocations performed using mmap(2) are unaffected by the RLIMIT_DATA resource limit (see getrlimit(2)).
And this is from the sbrk man page:
sbrk - change data segment size
2) in what cases can it fail to allocate
Also from malloc
By default, Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non-NULL there is no guarantee that the memory really is available.
And from proc
/proc/sys/vm/overcommit_memory
This file contains the kernel virtual memory accounting mode. Values are:
0: heuristic overcommit (this is the default)
1: always overcommit, never check
2: always check, never overcommit
Mostly it uses the sbrk system call to adjust the size of the data segment, thereby reserving more memory for it to parcel out. Memory allocated that way is generally not released back to the operating system, because that is only possible when the free blocks are at the end of the data segment.
Larger blocks are sometimes allocated using mmap, and that memory can be released again with an munmap call.
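A minimal sketch of that sbrk mechanism (purely illustrative: calling sbrk directly in a program that also uses malloc is generally unsafe):

```c
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    void *before = sbrk(0);          /* current end of the data segment */
    if (sbrk(4096) == (void *)-1)    /* ask the kernel for one more page */
        return 1;
    void *after = sbrk(0);
    printf("program break moved from %p to %p\n", before, after);
    return 0;
}
```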
How does libc communicate with the OS (e.g., a Linux kernel) to manage memory?
Through system calls - this is a low-level API that the kernel provides.
Specifically, how does it allocate memory, and how does it release memory?
Unix-like systems provide the "sbrk" syscall.
Also, in what cases can it fail to allocate and deallocate, respectively?
Allocation can fail, for example, when there is not enough available memory. Deallocation should not fail.
From my understanding, when a process is executing it has some amount of memory at its disposal. As the stack grows it builds from one end of the process's memory (disregarding global variables that come before the stack), while the heap builds from the other end. If you keep adding to the stack or heap, eventually all the memory for the process will be used up.
How does the amount of memory the process is given get determined? I can only imagine it depends on a bunch of different variables, but an as-general-as-possible response would be great. If things have to get specific, I'm interested in linux processes written in C++.
On most platforms you will encounter, Linux runs with virtual memory enabled. This means that each process has its own virtual address space, the size of which is determined only by the hardware and the way the kernel has configured it.
For example, on the x86 architecture with a "3/1" split configuration, every userspace process has 3GB of address space available to it, within which the heap and stack are allocated. This is regardless of how much physical memory is available in the system. On the x86-64 architecture, 128TB of address space is typically available to each userspace process.
Physical memory is separately allocated to back that virtual memory. The amount of this available to a process depends on the configuration of the system, but in general it's simply supplied "on demand", limited mostly by how much physical memory and swap file space exists, and how much is currently in use for other purposes.
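You can inspect some of the limits involved with getrlimit(2); a minimal sketch (the printed labels are just for display):

```c
#include <stdio.h>
#include <sys/resource.h>

static void show(const char *name, int res)
{
    struct rlimit rl;
    if (getrlimit(res, &rl) == 0)
        printf("%-12s soft=%llu hard=%llu\n", name,
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);
}

int main(void)
{
    show("RLIMIT_AS", RLIMIT_AS);       /* virtual address space */
    show("RLIMIT_STACK", RLIMIT_STACK); /* main-thread stack size */
    show("RLIMIT_DATA", RLIMIT_DATA);   /* data segment size */
    return 0;
}
```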
The stack does not magically grow. Its size is static, and the size is determined at link time. So when you take up enough space on the stack, it overflows (stack overflow ;)
On the other hand, the heap area does 'magically' grow: whenever more memory is needed for the heap, the program asks the operating system for more.
EDIT: As Mat pointed out below, the stack actually can increase during runtime on modern operating systems.