Are memory leaks somehow worse in Windows than in Linux? - memory-leaks

I randomly stumbled upon The Art of C++, Chapter 2, while typing questions into Google. I read this passage on memory leaks:
"Memory leaks cannot occur in a garbage collection environment because the garbage collector ensures that unused objects are eventually freed.
Memory leaks are a particularly troublesome problem in Windows programming, where the
failure to release unused resources slowly degrades performance."
Why is the author emphasizing Windows programming here, as opposed to Linux or OS X?
Are memory leaks somehow worse on Windows than on Linux?

Related

Distinguish memory leak from memory fragmentation

I use the Linux command top to observe a running program, and I can see the memory used by that program increasing.
How can I figure out whether that symptom is caused by a memory leak or by memory fragmentation?
Well, you can't do it using the top command. The only way to detect memory leaks is with special debugging tools called memory debuggers. One example is Valgrind, but there are many others.
Another consideration is the programming language the program is written in. If it is a modern scripting language with a garbage collector, memory leaks are not possible at all (assuming the language's interpreter/compiler is not buggy).
It is mostly compiled, relatively low-level languages that are prone to memory leaks, such as C, C++, Pascal, assembly and similar.
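For instance, a deliberately broken C++ program like the sketch below would show up in Valgrind's leak summary when run as valgrind --leak-check=full ./a.out (a minimal illustration, not the asker's actual program):

```cpp
#include <cstdlib>

int main()
{
    // 1 KiB allocated but never freed: Valgrind reports this block as
    // "definitely lost" in its leak summary.
    void* leaked = std::malloc(1024);
    (void)leaked;              // pointer dropped, memory never released
    return EXIT_SUCCESS;
}
```

top, by contrast, only shows the process's total memory use; it cannot tell you which allocations were never freed.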

Performance issues in Linux Multi-heap Multi-thread application

We are porting a multi-process application to a multi-threaded architecture. We have the same application running on Windows and it is very performant.
On Linux we are using the pthread libraries. To avoid memory contention we use custom heaps, each thread having its own heap, implemented with mspace. However, this approach is causing a lot of performance overhead: the mspace memory allocations are much slower than the native malloc, and that is the bottleneck. We tried the Hoard allocator, but it was much worse.
The performance impact is due to:
1. CPU switching: I see more context switching among the LWPs than among the original processes.
2. Memory allocations: I see mspace taking about 7 to 8 times longer than normal malloc.
Is there any alternative way to achieve multiple heaps that is also performant on Linux?
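For reference, a minimal sketch of the per-thread heap pattern described above, using dlmalloc's mspace API (this assumes dlmalloc is built with MSPACES enabled and linked in; the prototypes are declared inline here rather than pulled from a particular header, and the sizes are illustrative):

```cpp
#include <pthread.h>
#include <cstdio>
#include <cstddef>

// dlmalloc mspace API (available when dlmalloc is compiled with MSPACES).
extern "C" {
    typedef void* mspace;
    mspace  create_mspace(size_t capacity, int locked);
    void*   mspace_malloc(mspace msp, size_t bytes);
    void    mspace_free(mspace msp, void* mem);
    size_t  destroy_mspace(mspace msp);
}

static void* worker(void*)
{
    // Each thread owns its own heap; locked = 0 because no other thread
    // ever allocates from or frees into this mspace.
    mspace heap = create_mspace(0 /* default capacity */, 0 /* no locking */);

    void* block = mspace_malloc(heap, 4096);
    std::printf("thread-local allocation at %p\n", block);
    mspace_free(heap, block);

    destroy_mspace(heap);      // releases the whole per-thread heap at once
    return nullptr;
}

int main()
{
    pthread_t threads[4];
    for (pthread_t& t : threads) pthread_create(&t, nullptr, worker, nullptr);
    for (pthread_t& t : threads) pthread_join(t, nullptr);
}
```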

Does an Application memory leak cause an Operating System memory leak?

When we say a program leaks memory, say a new without a delete in C++, does it really leak? I mean, when the program ends, is that memory still allocated to some non-running program and unusable, or does the OS know what memory was requested by each program and release it when the program ends? If I run that program a lot of times, will I run out of memory?
No. In all practical operating systems, when a program exits, all its resources are reclaimed by the OS. Memory leaks become a more serious issue in programs that might continue running for an extended time and/or in functions that may be called often within the same program.
On operating systems with protected memory (Mac OS X and later, all Unix clones such as Linux, and NT-based Windows systems, meaning Windows 2000 and later), the memory is released when the program ends.
If you run enough instances of any program at the same time without closing them in between, you will eventually run out of memory, whether the program leaks or not. Obviously, a program that leaks memory will fill the memory faster than an identical program without the leak, but how many instances you can run before filling memory depends much more on how much memory the program needs for normal operation than on whether it leaks. That comparison is really only worth anything between two completely identical programs, one with a memory leak and one without.
Memory leaks become most serious when you have a program running for a very long time. A classic example of this is server software, such as web servers. With games, spreadsheets or word processors, for instance, memory leaks aren't nearly as serious, because you close those programs eventually, freeing up the memory. But of course memory leaks are nasty little beasts which should always be tackled as a matter of principle.
But as stated earlier, all modern operating systems release the memory when the program closes, so even with a memory leak, you won't fill up the memory if you're continuously opening and closing the program.
Leaked memory is returned by the OS after execution has stopped.
That's why it isn't always a big problem with desktop applications, but it is a big problem with servers and services, which tend to run for a long time.
Let's look at the following scenario:
1. Program A asks the OS for memory.
2. The OS marks block X as being used by A and returns it to the program.
3. The program now holds a pointer to X.
4. The program returns the memory.
5. The OS marks the block as free. Using the block now results in an access violation.
6. Program A ends, and all memory used by A is marked unused.
Nothing wrong with that.
But if the memory is allocated in a loop and the delete is forgotten, you run into real problems:
1. Program A asks the OS for memory.
2. The OS marks block X as being used by A and returns it to the program.
3. The program now holds a pointer to X.
4. Goto 1.
If the OS runs out of memory, the program will probably crash.
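In C++ terms (the new-without-delete case from the question), a minimal sketch of the difference between those two scenarios:

```cpp
// Scenario 1: memory is handed back while the program runs.
void allocate_and_release()
{
    int* x = new int[1000];   // steps 1-3: the runtime hands the program a block
    // ... use x ...
    delete[] x;               // steps 4-5: the block is marked free again
}

// Scenario 2: the delete is forgotten inside a loop.
void leak_until_exit()
{
    for (int i = 0; i < 100000; ++i) {
        int* x = new int[1000];   // a new block every iteration...
        (void)x;                  // ...never deleted, so memory use keeps growing
    }
}                                 // only process exit returns it all to the OS
```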
No. Once the OS finishes closing the program, the memory comes back (given a reasonably modern OS). The problem is with long-running processes.
When the process ends, the memory gets cleared as well. The problem is that if a program leaks memory, it will request more and more memory from the OS as it runs, and can eventually exhaust the system's memory.
It's "leaking" more in the sense that the code itself no longer has any handle on that piece of memory.
The OS can release the memory when the program ends. If a leak exists in a program, it is only an issue while the program is running. This is a problem for long-running programs such as server processes. For example, if your web browser had a memory leak and you kept it running for days, it would gradually consume more and more memory.
As far as I know, on most operating systems a program receives a defined region of memory when it starts, and that memory is completely released once the program ends.
Memory leaks are one of the main reasons garbage collection algorithms were invented: once plugged into the runtime, they become responsible for reclaiming memory that is no longer reachable by the program.
Memory leaks don't persist past end of execution so a "solution" to any memory leak is to simply end program execution. Obviously this is more of an issue on certain types of software. Having a database server which needs to go offline every 8 hours due to memory leaks is more of an issue than a video game which needs to be restarted after 8 hours of continual play.
The term "leak" refers to the fact that over time memory consumption will grow without any increased benefit. The "leaked" memory is memory neither used by the program nor usable by the OS (and other programs).
Sadly, memory leaks are very common in unmanaged code. I have had Firefox running for a couple of days now and memory usage is 424 MB despite only having 4 tabs open. If I closed Firefox and re-opened the same tabs, memory usage would likely be under 100 MB. Thus 300+ MB has "leaked".

ARM/Linux memory leak: Can a user program retain memory after terminating?

I've got a memory leak somewhere, but it doesn't appear to be related to my program. I'm making this bold statement based on the fact that once my program terminates, whether by seg-faulting, exiting, or aborting, the memory isn't recovered. If my program were the culprit, I would assume the MMU would recover everything, but this doesn't appear to be the case.
My question is:
On a small Linux system (64 MB RAM) running a program that uses only stack memory and a few calls to malloc(), what causes should I look at to explain memory being run down, and staying down, once my program terminates?
A related question is here:
This all started when the code in question was redirecting its stdout and stderr to a file. After a few hours it aborted with a "Segmentation Fault". A quick (naive?) look at /proc/meminfo showed that there wasn't much available memory, so I assumed something was leaking.
It appears I don't have a memory leak (see here) but it does lead me to some new questions...
It turns out that writing to block devices can use quite a lot of physical memory; on my system there was only 64 MB, so writing hundreds of megabytes to a USB drive increased the cached, active and inactive memory pools quite a bit.
These memory pools are released back to the free memory pool as soon as the device is unmounted.
The exact cause of my segmentation fault remains a small mystery, but I know its occurrence can be reduced by understanding the virtual memory system better, particularly around the use of block devices.
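For reference, the pools mentioned above can be watched in /proc/meminfo; a minimal C++ sketch (MemFree, Cached, Active and Inactive are standard Linux field names, the filtering here is just illustrative):

```cpp
#include <fstream>
#include <iostream>
#include <string>

int main()
{
    std::ifstream meminfo("/proc/meminfo");
    std::string line;
    while (std::getline(meminfo, line)) {
        // Print only the pools discussed above; the values are in kB.
        if (line.rfind("MemFree:", 0) == 0 || line.rfind("Cached:", 0) == 0 ||
            line.rfind("Active:", 0) == 0  || line.rfind("Inactive:", 0) == 0) {
            std::cout << line << '\n';
        }
    }
}
```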

When a process exits, will the memory that's left undeleted be returned to the OS?

I am wondering: if I new some object but forget to delete it, will the leaked memory be returned to the OS when the process exits?
This isn't so much a C++ question as an operating system question.
All of the operating systems that I have knowledge of will reclaim conventional memory that had been allocated. That's because the allocation generally comes from a process's private address space, which is reclaimed on exit.
This may not be true for other resources such as shared memory. There are implementations that will not release shared memory segments unless you specifically mark them for deletion before your process exits (and, even then, they don't get deleted until everyone has detached).
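As a hedged illustration of that shared-memory caveat, a System V segment created as in the sketch below will outlive the process unless someone marks it for deletion with shmctl(IPC_RMID) (the key and size are illustrative):

```cpp
#include <sys/ipc.h>
#include <sys/shm.h>
#include <cstdio>

int main()
{
    // Create (or open) a 4 KiB System V shared memory segment.
    int shmid = shmget(0x1234, 4096, IPC_CREAT | 0600);
    if (shmid == -1) { std::perror("shmget"); return 1; }

    void* addr = shmat(shmid, nullptr, 0);
    if (addr == reinterpret_cast<void*>(-1)) { std::perror("shmat"); return 1; }

    // Without this call the segment survives process exit and still shows up
    // in `ipcs -m`; with it, the kernel removes the segment once the last
    // attached process detaches.
    shmctl(shmid, IPC_RMID, nullptr);

    shmdt(addr);
    return 0;
}
```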
For most modern operating systems (most flavors of Unix, and anything running in protected mode on x86), memory allocation occurs within a program's heap (through malloc in C or new/delete in C++). So when the program exits, that memory is released for use elsewhere.

Resources