What happens if memory is leaking? - visual-c++

What exactly is a memory leak?
And how will it affect the system the program is running on?

When your process allocates memory from the OS on an ongoing basis and never frees any of it, you will eventually be using more memory than the machine physically has. At that point, the OS will first page out to swap (which degrades performance) if it has any, and eventually your process will reach the point where the OS can no longer grant it more memory, because you have exhausted the maximum amount of addressable space (4 GB on a 32-bit OS).
There are basically two ways this can happen: you've allocated memory and then lost every pointer to it (it has become unreachable to your program), so you can no longer free it. That's what most people call a memory leak. Alternatively, you may simply be allocating memory and never freeing it because your program is lazy. That's not so much a leak, but in the end the problems you run into are the same.

A memory leak is when your code allocates memory and then loses track of it, including the ability to free it later.
In C, for example, this can be done with the simple sequence:
void *pointer = malloc (2718); // Alloc, store address in pointer.
pointer = malloc (31415); // And again.
free (pointer); // Only frees the second block.
The original block of memory is still allocated but, because pointer no longer points to it, you have no way to free it.
That sequence, on its own, isn't that bad (well, it is bad, but the effects may not be). It's usually when you do it repeatedly that problems occur. Such as in a loop, or in a function that's repeatedly called:
static char firstDigit (int val) {
    char *buff = malloc (100); // Allocates.
    if (val < 0)
        val = -val;
    sprintf (buff, "%d", val);
    return buff[0]; // But never frees.
}
Every time you call that function, you will leak the hundred bytes (plus any housekeeping information).
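For illustration, one way to plug that particular leak (a sketch of the obvious fix, not the only possible one) is to save the digit and release the buffer before returning:
static char firstDigit (int val) {
    char *buff = malloc (100); // Allocates, as before.
    char digit;
    if (val < 0)
        val = -val;
    sprintf (buff, "%d", val);
    digit = buff[0];           // Save the result first...
    free (buff);               // ...then release the buffer: nothing leaks.
    return digit;
}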
And, yes, memory leaks will affect other things. But the effects should be limited.
It will eventually affect the process that is leaking as it runs out of address space for allocating more objects. While that may not necessarily matter for short-lived processes, long-lived processes will eventually fail.
However, a decent operating system (and that includes Windows) will limit the resources that a single process can use, which will minimise the effects on other processes. Since modern environments disconnect virtual from physical memory, the only real effect that can be carried from process to process is if one tries to keep all its virtual memory resident in physical memory all the time, reducing the allocation of that physical memory to other processes.
But, even if a single process leaks gigabytes of memory, the memory itself won't be being used by the process (the crux of the leak is that the process has lost access to the memory). And, since it's not being used, the OS will almost certainly swap it out to disk and never have to bring it back into RAM again.
Of course, it uses up swap space and that may affect other processes but the amount of disk far outweighs the amount of physical RAM.

Your program will eventually crash. If it does not crash itself, it may cause other programs to crash because of the lack of memory.

When you leak memory, it means that you are dynamically creating objects but are not destroying them. If the leak is severe enough, your program will eventually run out of address space and future allocation attempts will fail (likely causing your application to terminate or crash, since if you are leaking memory, you probably aren't handling out of memory conditions very well either), or the OS will halt your process if it attempts to allocate too much memory.
Additionally, you have to remember that in C++, many objects have destructors: when you fail to destroy a dynamically allocated object, its destructor will not be called.

A memory leak is a situation where a program allocates dynamic memory and then loses all pointers to that memory, so it can neither address nor free it. The memory remains marked as allocated, so it will never be handed out again when the program requests more memory.
The program exhausts a limited resource at some rate. Depending on the amount of memory and the size of the swap file, this ends either with the program eventually getting a "can't allocate memory" indication, or with the operating system running out of both physical memory and swap so that any program gets a "can't allocate memory" indication. The latter can have serious consequences on some operating systems: we have sometimes seen Windows XP fall apart completely, with critical services malfunctioning severely, once extreme memory consumption in one program exhausted all memory. If that happens, the only way to fix the problem is to reboot the system.

Related

What happens to allocated pages that are mostly empty?

If a process initially has a number of pages allocated to it in the heap, but a lot of the data in the pages has been deallocated, is there some sort of optimization that the OS does to consolidate the data into one page so that the other pages can be freed?
In general, nothing happens; the heap will continue to have "holes" in it.
Since the (virtual) memory addresses known by a process must remain valid, the operating system cannot perform "heap compaction" on its own. However, some runtimes like .Net do it.
If you are using C or C++, all you can hope for by default is that malloc() will be able to reuse previously deallocated chunks. But if your usage pattern is "allocate a lot of small objects then deallocate half of them at random," the memory utilization will probably not decrease much from the peak.
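As a rough illustration of that pattern, here is a glibc-specific sketch (mallinfo2() and malloc_trim() are glibc extensions, so this assumes a reasonably recent glibc): it allocates many small blocks, frees every other one, and shows the heap arena staying large because the freed chunks are holes rather than returnable pages.
#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>

int main(void) {
    enum { N = 100000 };
    static char *blocks[N];

    for (int i = 0; i < N; i++)
        blocks[i] = malloc(64);          /* many small allocations */
    for (int i = 0; i < N; i += 2) {     /* free half of them, interleaved */
        free(blocks[i]);
        blocks[i] = NULL;
    }

    struct mallinfo2 mi = mallinfo2();   /* glibc >= 2.33 */
    printf("arena: %zu bytes, of which free: %zu bytes\n",
           mi.arena, mi.fordblks);       /* arena stays big: holes, not returned pages */

    malloc_trim(0);                      /* ask glibc to give back what it can; with
                                            interleaved holes that is usually very little */
    return 0;
}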
If a process initially has a number of pages allocated to it in the heap
A process will not initially have pages allocated in a heap.
is there some sort of optimization that the OS does to consolidate the data into one page so that the other pages can be freed
The operating system has no knowledge of user heaps. It allocates pages to the process. What that process does with those pages is up to it (i.e., use them for a heap, stack, code, etc.).
A process's heap manager can consolidate freed chunks of memory. When this occurs, it is normally done to fight heap fragmentation. However, I have never seen a heap manager on a paging system that unmaps pages once they are mapped by the operating system.
The heap of a process never has holes in it in the sense of unmapped gaps. Classically, the heap is part of the data segment allocated to a process, which grows dynamically upward toward the stack segment, basically via the sbrk(2) system call (which sets a new size for the data segment), so the heap is a contiguous range of allocated pages, at least in terms of virtual address space. malloc(3) never returns heap space (or part of it) to the system; see malloc(3) for details. There are memory allocators that allow a process to have several heaps (by allocating new memory segments with the mmap(2) system call), but the segments obtained by a memory allocator are also commonly never returned to the system.
What happens is that the memory allocator reuses the heap space obtained with sbrk(2) and mmap(2), managing it internally for reuse, but it is never returned to the system.
But don't worry: the system handles this reasonably well anyway.
It should not affect overall system behaviour much, except that it consumes virtual address space, and the contents of unused pages will probably end up on the swap device until the process references them again and makes the system reload them from swap. If your process doesn't reuse the holes it creates in the heap, the most likely outcome is that the system moves those pages to the swap device and reuses the physical memory for other processes.
At this moment, I don't know whether the system optimises swap allocation by not swapping out all-zero pages, as it does, for example, with the text segments of executables (those never go to a swap device, because their contents can always be re-read from the executable file itself; this is also why on ancient Unices you couldn't delete a running program's executable, and why the sticky bit is no longer needed for frequently used programs), but I think it doesn't, the reason being that it is most improbable that the unused pages were zeroed by the application.
You only need to worry if a single process on your system uses, say, 15 GB of heap and 90% of that is unused most of the time. Even then, think first about optimising your allocation strategy, because a process that consumes 15 GB of heap while 90%+ of it sits unused most of the time looks like a poor design. If you have no other choice, simply provide enough swap space for your system to accommodate that.

Can we use too many malloc and free in a C program?

Is it OK to call malloc and free very many times in a program?
I have a program that does a malloc and a free for each record. Although it sounds bad, does it cause a performance problem if I use malloc and free that heavily?
Most modern malloc(3) implementations work like a memory pool. Since most modern OSes treat memory with pages (usually 4KB size), a malloc will probably request at least 4KB from the OS.
Suppose you keep calling malloc with 32. In your first malloc, at least one new page is requested from the OS (via sbrk(2) on unix). The successive mallocs have nothing to do with the OS, they just return you the next free chunk of memory in the memory pool as long as memory is available. So, calling malloc many times is not a big deal, usually. The point here is that system calls (the communication between the user process and OS) are usually expensive and malloc tries its best to avoid as much as possible.
free is similar too. When you free memory, usually OS isn't notified about that. When a page is totally freed, the page may be returned to the OS. Some implementations do not return the page to the OS unless the process already holds many unused pages.
To sum it up, malloc and free are like generic memory managers working with arbitrary size. The problem you might face is that malloc is designed to work with arbitrary size allocations, which might be slower than a memory manager that's designed to work with fixed size allocations. If you're usually allocating the same types of memory, you might be better off with implementing your own memory pool. Another case would be that malloc calls involve locking/unlocking in most modern implementations to support multithreading. If you're working with a single thread, that might also be an overhead: another reason to implement your own memory pool.
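If you do go down that road, a fixed-size pool can be very small and simple. Here is a minimal, illustrative sketch of a free-list pool; the names pool_init, pool_alloc and pool_free are invented for this example and are not any standard API:
#include <stddef.h>
#include <stdlib.h>

typedef struct pool {
    void  *slab;       /* one big malloc'd block, carved into slots */
    void  *free_list;  /* singly linked list threaded through the free slots */
    size_t slot_size;
} pool;

static int pool_init(pool *p, size_t slot_size, size_t nslots) {
    if (slot_size < sizeof(void *))
        slot_size = sizeof(void *);
    p->slab = malloc(slot_size * nslots);
    if (p->slab == NULL)
        return -1;
    p->slot_size = slot_size;
    p->free_list = NULL;
    /* Thread a free list through the slab: each free slot stores the
       address of the next free slot in its first bytes. */
    for (size_t i = 0; i < nslots; i++) {
        void *slot = (char *)p->slab + i * slot_size;
        *(void **)slot = p->free_list;
        p->free_list = slot;
    }
    return 0;
}

static void *pool_alloc(pool *p) {
    void *slot = p->free_list;
    if (slot != NULL)
        p->free_list = *(void **)slot;   /* pop the head of the free list */
    return slot;                         /* NULL when the pool is exhausted */
}

static void pool_free(pool *p, void *slot) {
    *(void **)slot = p->free_list;       /* push the slot back onto the free list */
    p->free_list = slot;
}
Allocation and deallocation are each a couple of pointer operations with no searching and no locking, which is exactly why a pool can beat a general-purpose allocator for fixed-size records.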
You might also want to work with different malloc implementations, benchmark them and decide to go with either one. Starting with a clean implementation and stripping off unnecessary parts might also be a good idea here.
Yes and no. Large volumes of malloc/free can fragment the heap to the point where malloc can fail. It is less of an issue now that memory is pretty cheap.
There is some overhead in calling malloc, but not a lot. malloc basically has to go through the heap and find a block of memory that is unused and large enough to hold the number of bytes you asked for, mark that block as used, and return a pointer to it, asking the operating system to map in more pages only when the heap needs to grow.
It's a few steps, but really not a lot of work for your computer. The difference between using malloc to get memory and putting a variable on the stack is a handful of instructions plus, occasionally, a system call, and unless you're programming an embedded system you honestly shouldn't worry about it. You'll only take a real performance hit if you allocate so much memory that you actually run out of RAM, in which case the virtual memory manager will have to move some things into swap space to make room (on a typical desktop system with overcommit, malloc itself practically never fails)!
Freeing memory is even easier than allocating it, and in the end it's better to free what you allocate (future malloc calls will be faster, more memory will be available).
In short, use malloc to your heart's content! Decades of advances in technology have worked hard to earn you that right; there's no sense squandering it!
By definition 'too many' is 'too many'.
But more seriously, on most systems heap allocation is reasonably fast, because it's done a lot. Allocating space for a record each time it's processed doesn't sound bad.
The real answer is: write your program and measure its speed. Is it acceptable? If not, profile it and find where the bottlenecks are; my 10c says it won't be heap processing.
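In the spirit of "measure it", a throwaway micro-benchmark sketch like the one below can at least put a rough bound on the per-call cost; the numbers are entirely machine- and allocator-dependent, and profiling your real program is what actually matters:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    enum { ITERS = 1000000 };
    clock_t start = clock();
    for (int i = 0; i < ITERS; i++) {
        void *p = malloc(64);            /* same size every time: the fast path */
        if (p == NULL)
            return 1;
        free(p);
    }
    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
    printf("%d malloc/free pairs in %.3f s\n", ITERS, secs);
    return 0;
}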

vm/min_free_kbytes - Why Keep Minimum Reserved Memory?

According to this article:
/proc/sys/vm/min_free_kbytes: This controls the amount of memory that is kept free for use by special reserves including “atomic” allocations (those which cannot wait for reclaim)
My question is that what does it mean by "those which cannot wait for reclaim"? In other words, I would like to understand why there's a need to tell the system to always keep a certain minimum amount of memory free and under what circumstances will this memory be used? [It must be used by something; don't see the need otherwise]
My second question: does setting this value to something higher than 4 MB (its current value on my system) lead to better performance? We have a server that occasionally exhibits very poor shell performance (e.g. ls -l takes 10-15 seconds to execute) when certain processes get going; would setting this number higher lead to better shell performance?
(link is dead, looks like it's now here)
That text is referring to atomic allocations, which are requests for memory that must be satisfied without giving up control (i.e. the current thread can not be suspended). This happens most often in interrupt routines, but it applies to all cases where memory is needed while holding an essential lock. These allocations must be immediate, as you can't afford to wait for the swapper to free up memory.
See Linux-MM for a more thorough explanation, but here is the memory allocation process in short:
_alloc_pages first iterates over each memory zone looking for the first one that contains eligible free pages
_alloc_pages then wakes up the kswapd task [..to..] tap into the reserve memory pools maintained for each zone.
If the memory allocation still does not succeed, _alloc pages will either give up [..] In this process _alloc_pages executes a cond_resched() which may cause a sleep, which is why this branch is forbidden to allocations with GFP_ATOMIC.
min_free_kbytes is unlikely to help much with the described "ls -l takes 10-15 seconds to execute"; that is likely caused by general memory pressure and swapping rather than zone exhaustion. The min_free_kbytes setting only needs to allow enough free pages to handle immediate requests. As soon as normal operation is resumed, the swapper process can be run to rebalance the memory zones. The only time I've had to increase min_free_kbytes is after enabling jumbo frames on a network card that didn't support dma scattering.
To expand on your second question a bit, you will have better results tuning vm.swappiness and the dirty ratios mentioned in the linked article. However, be aware that optimizing for "ls -l" performance may cause other processes to become slower. Never optimize for a non-primary usecase.
All linux systems will attempt to make use of all physical memory available to the system, often through the creation of a filesystem buffer cache, which put simply is an I/O buffer to help improve system performance. Technically this memory is not in use, even though it is allocated for caching.
"wait for reclaim", in your question, refers to the process of reclaiming that cache memory that is "not in use" so that it can be allocated to a process. This is supposed to be transparent but in the real world there are many processes that do not wait for this memory to become available. Java is a good example, especially where a large minimum heap size has been set. The process tries to allocate the memory and if it is not instantly available in one large contiguous (atomic?) chunk, the process dies.
Reserving a certain amount of memory with min_free_kbytes allows this memory to be instantly available and reduces the memory pressure when new processes need to start, run and finish while there is a high memory load and a full buffer cache.
4MB does seem rather low because if the buffer cache is full, any process that wants an immediate allocation of more than 4MB will likely fail. The setting is very tunable and system-specific, but if you have a few GB of memory available it can't hurt to bump up the reserve memory to 128MB. I'm not sure what effect it will have on shell interactivity, but likely positive.
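If you just want to see what your system currently reserves, the value is an ordinary number in /proc. Here is a tiny sketch that reads it; changing it persistently would normally be done through sysctl or /etc/sysctl.conf rather than from a program:
#include <stdio.h>

int main(void) {
    FILE *f = fopen("/proc/sys/vm/min_free_kbytes", "r");
    if (f == NULL) {
        perror("open");
        return 1;
    }
    long kbytes;
    if (fscanf(f, "%ld", &kbytes) == 1)
        printf("min_free_kbytes = %ld (about %.1f MB reserved)\n",
               kbytes, kbytes / 1024.0);
    fclose(f);
    return 0;
}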
This memory is kept free from use by normal processes. As @Arno mentioned, the special code paths that can use it include interrupt routines, which must run immediately (since they are interrupts) and finish before other processes can run (atomically). That can include things like swapping memory out to disk when memory is full.
If memory fills up, a memory-management task runs to swap some memory out to disk so that pages can be freed for normal processes. But if vm.min_free_kbytes is too small for that task to run, the system locks up: the task must run first to free memory so that others can proceed, yet it is stuck because it does not have enough reserved memory (vm.min_free_kbytes) to do its job, resulting in a deadlock.
Also see:
https://www.linbit.com/en/kernel-min_free_kbytes/ and
https://askubuntu.com/questions/41778/computer-freezing-on-almost-full-ram-possibly-disk-cache-problem (where the memory management process has so little memory to work with it takes so long to swap little by little that it feels like a freeze.)

libc memory management

How does libc communicate with the OS (e.g., a Linux kernel) to manage memory? Specifically, how does it allocate memory, and how does it release memory? Also, in what cases can it fail to allocate and deallocate, respectively?
That is a very general question, but I want to speak to the failure to allocate. It's important to realize that memory is actually allocated by the kernel upon first access. What you are doing when calling malloc/calloc/realloc is reserving some addresses inside the virtual address space of the process (libc does that via syscalls such as brk and mmap).
When I see malloc or similar fail (or when libc sees brk or mmap fail), it's usually because the virtual address space of the process is exhausted. This happens when there is no contiguous block of free addresses and no room to expand an existing one. You can either exhaust all the space available or hit the RLIMIT_AS limit. It's pretty common, especially on 32-bit systems when using multiple threads, because people sometimes forget that each thread needs its own stack. Stacks usually consume several megabytes, which means you can create only a few hundred threads before you have no more free address space. An even more common reason for exhausted address space is a memory leak. Libc of course tries to reuse space on the heap (space obtained with the brk syscall) and tries to munmap unneeded mappings. However, it can't reuse something that was never deallocated.
A shortage of physical memory is not detectable from within a process (or from libc, which is part of the process) as an allocation failure. You can hit the overcommit limit, but that doesn't mean physical memory is all taken. When free physical memory runs low, the kernel invokes a special task called the OOM killer (Out Of Memory killer), which terminates some processes in order to free memory.
Regarding failure to deallocate, my guess is that it doesn't happen unless you do something silly. I can imagine setting the program break (end of heap) below its original position (via a brk syscall); that is, of course, a recipe for disaster, and hopefully libc never does it, since it doesn't make much sense, but it could be seen as a failed deallocation. munmap can also fail if you supply a bogus argument, but I can't think of a regular reason for it to fail. That doesn't mean one doesn't exist; we would have to dig deep into the source code of glibc and the kernel to find out.
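If you want to see the address-space flavour of failure deterministically, you can cap RLIMIT_AS yourself and watch malloc start returning NULL. This is a sketch for Linux; the exact point of failure depends on what the process already has mapped (libraries, stack, and so on):
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void) {
    /* Cap the address space at 256 MB so exhaustion is reproducible. */
    struct rlimit rl = { 256UL * 1024 * 1024, 256UL * 1024 * 1024 };
    if (setrlimit(RLIMIT_AS, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }

    size_t total = 0;
    while (malloc(1024 * 1024) != NULL)  /* reserve 1 MB at a time and never free it */
        total += 1024 * 1024;
    printf("malloc started failing after about %zu MB\n", total / (1024 * 1024));
    return 0;
}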
1) how does it allocate memory
libc provides malloc() to C programs.
Normally, malloc allocates memory from the heap, and adjusts the size of the heap as required, using sbrk(2). When allocating blocks of memory larger than MMAP_THRESHOLD bytes, the glibc malloc() implementation allocates the memory as a private anonymous mapping using mmap(2). MMAP_THRESHOLD is 128 kB by default, but is adjustable using mallopt(3). Allocations performed using mmap(2) are unaffected by the RLIMIT_DATA resource limit (see getrlimit(2)).
And this is about sbrk.
sbrk - change data segment size
2) in what cases can it fail to allocate
Also from malloc
By default, Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non-NULL there is no guarantee that the memory really is available.
And from proc
/proc/sys/vm/overcommit_memory
This file contains the kernel virtual memory accounting mode. Values are:
0: heuristic overcommit (this is the default)
1: always overcommit, never check
2: always check, never overcommit
Mostly it uses the sbrk system call to adjust the size of the data segment, thereby reserving more memory for it to parcel out. Memory allocated that way is generally not released back to the operating system, because that is only possible when the blocks available to be released are at the end of the data segment.
Larger blocks are sometimes allocated using mmap, and that memory can be released again with an munmap call.
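To make those two paths concrete, here is a small Linux/glibc-oriented sketch that exercises them directly, roughly the way malloc does underneath (sbrk for heap growth, an anonymous private mmap for large blocks). It illustrates the mechanism; it is not how you should allocate memory in a real program:
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void) {
    /* Heap path: move the program break up by one page, as sbrk-based malloc does. */
    void *old_brk = sbrk(0);
    if (sbrk(4096) == (void *) -1)
        perror("sbrk");
    printf("program break grew from %p\n", old_brk);

    /* mmap path: a private anonymous mapping, as used for large allocations. */
    size_t len = 256 * 1024;             /* above the default 128 kB MMAP_THRESHOLD */
    void *big = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (big == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    printf("large block at %p\n", big);
    munmap(big, len);                    /* this memory goes straight back to the kernel */
    return 0;
}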
How does libc communicate with the OS (e.g., a Linux kernel) to manage memory?
Through system calls - this is a low-level API that the kernel provides.
Specifically, how does it allocate memory, and how does it release memory?
Unix-like systems provide the "sbrk" syscall.
Also, in what cases can it fail to allocate and deallocate, respectively?
Allocation can fail, for example, when there is not enough available memory. Deallocation should not fail.

Does an Application memory leak cause an Operating System memory leak?

When we say a program leaks memory, say a new without a delete in C++, does it really leak? I mean, when the program ends, is that memory still allocated to some non-running program and unusable, or does the OS know what memory was requested by each program and release it when the program ends? If I run that program many times, will I run out of memory?
No, in all practical operating systems, when a program exits, all its resources are reclaimed by the OS. Memory leaks become a more serious issue in programs that might continue running for an extended time and/or functions that may be called often from the same program.
On operating systems with protected memory (Mac OS 10+, all Unix clones such as Linux, and NT-based Windows systems, meaning Windows 2000 and later), the memory gets released when the program ends.
If you run any program often enough without closing it in between (running more and more instances at the same time), you will eventually run out of memory, regardless of whether there is a memory leak or not, so that's also true of programs with memory leaks. Obviously, programs leaking memory will fill the memory faster than an identical program without memory leaks, but how many times you can run it without filling the memory depends much rather on how much memory that program needs for normal operation than whether there's a memory leak or not. That comparison is really not worth anything unless you are comparing two completely identical programs, one with a memory leak and one without.
Memory leaks become the most serious when you have a program running for a very long time. A classic example of this is server software, such as web servers. With games, spreadsheet programs or word processors, for instance, memory leaks aren't nearly as serious because you close those programs eventually, freeing up the memory. But of course memory leaks are nasty little beasts which should always be tackled as a matter of principle.
But as stated earlier, all modern operating systems release the memory when the program closes, so even with a memory leak, you won't fill up the memory if you're continuously opening and closing the program.
Leaked memory is reclaimed by the OS after the program has stopped.
That's why it isn't always a big problem with desktop applications, but it's a big problem with servers and services (which tend to run for a long time).
Let's look at the following scenario:
Program A asks the OS for memory.
The OS marks block X as being used by A and returns it to the program.
The program now holds a pointer to X.
The program returns the memory.
The OS marks the block as free. Using the block now results in an access violation.
Program A ends and all memory used by A is marked unused.
Nothing wrong with that.
But if the memory is allocated in a loop and the delete is forgotten, you run into real problems:
Program A asks the OS for memory.
The OS marks block X as being used by A and returns it to the program.
The program holds a pointer to X.
Go back to the first step, without ever freeing X.
If the OS runs out of memory, the program probably will crash.
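As a concrete sketch of that scenario (assuming a typical Linux-style system; touching each block is what forces physical memory to actually be committed):
#include <stdlib.h>

int main(void) {
    for (;;) {
        char *x = malloc(1024 * 1024);   /* ask the OS for a megabyte and keep the pointer */
        if (x == NULL)
            break;                       /* allocation is finally refused */
        x[0] = 1;                        /* touch it so a real page gets committed */
        /* loop around without free(x): the previous pointer is overwritten and lost */
    }
    return 0;                            /* on exit the OS reclaims everything anyway */
}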
No. Once the OS finishes closing the program, the memory comes back (given a reasonably modern OS). The problem is with long-running processes.
When the process ends, the memory gets cleared as well. The problem is that if a program leaks memory, it will request more and more memory from the OS as it runs, and can possibly crash the OS.
It's "leaking" more in the sense that the code itself no longer has any grip on that piece of memory.
The OS can release the memory when the program ends. If a leak exists in a program then it is just an issue whilst the program is running. This is a problem for long running programs such as server processes. Or for example, if your web browser had a memory leak and you kept it running for days then it would gradually consume more memory.
As far as I know, on most OSes, when a program is started it receives a defined segment of memory which will be completely released once the program ends.
Memory leaks are one of the main reason why garbage collector algorithms were invented since, once plugged into the runtime, they become responsible in reclaiming the memory that is no longer accessible by a program.
Memory leaks don't persist past end of execution so a "solution" to any memory leak is to simply end program execution. Obviously this is more of an issue on certain types of software. Having a database server which needs to go offline every 8 hours due to memory leaks is more of an issue than a video game which needs to be restarted after 8 hours of continual play.
The term "leak" refers to the fact that over time memory consumption will grow without any increased benefit. The "leaked" memory is memory neither used by the program nor usable by the OS (and other programs).
Sadly, memory leaks are very common in unmanaged code. I have had Firefox running for a couple of days now and its memory usage is 424 MB despite only having 4 tabs open. If I closed Firefox and re-opened the same tabs, memory usage would likely be under 100 MB. Thus 300+ MB has "leaked".
