My application is similar to this hypothetical program:
for (;;) {
    for (i = 0; i < 1000; i++) {
        p[i] = malloc(random_number_between_1000_and_100000());
        p[i][0] = 0; // update
    }
    for (i = 0; i < 1000; i++) {
        free(p[i]);
    }
}
It has no memory leaks, but on my system the memory consumption (the VSS column in top) grows without limit, for example to 300% of the available physical memory. Is this normal?
Update: the program now uses the memory for a while and then frees it. Does this make a difference?
The behavior is normal. Quoting man 3 malloc:
BUGS
By default, Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non-NULL there is no guarantee that the
memory really is available. This is a really bad bug. In case it turns out that the system is out of memory, one or more processes will be
killed by the infamous OOM killer. In case Linux is employed under circumstances where it would be less desirable to suddenly lose some randomly
picked processes, and moreover the kernel version is sufficiently recent, one can switch off this overcommitting behavior using a command like:
# echo 2 > /proc/sys/vm/overcommit_memory
See also the kernel Documentation directory, files vm/overcommit-accounting and sysctl/vm.txt.
You need to touch (read/write) the memory for the Linux kernel to actually reserve it.
Try adding
sbrk(-1);
at the end of each loop to see if it makes a difference.
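For illustration, here is a minimal sketch of that effect (Linux-only; it reads the VmRSS field of /proc/self/status as its measure of resident memory). Resident memory barely changes after malloc() and only grows once the pages are actually written:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Print the VmRSS line of /proc/self/status (Linux-specific). */
static void print_rss(const char *when) {
    char line[256];
    FILE *f = fopen("/proc/self/status", "r");
    if (!f) return;
    while (fgets(line, sizeof line, f))
        if (strncmp(line, "VmRSS:", 6) == 0)
            printf("%s: %s", when, line);
    fclose(f);
}

int main(void) {
    size_t size = 100 * 1024 * 1024;          /* 100 MiB */
    char *p = malloc(size);
    if (!p) return 1;

    print_rss("after malloc (untouched)");    /* RSS barely changes */
    memset(p, 1, size);                       /* touch every page */
    print_rss("after memset (touched)");      /* RSS grows by roughly 100 MiB */

    free(p);
    print_rss("after free");                  /* RSS may or may not shrink */
    return 0;
}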
free() only deallocates the memory within your process; it doesn't necessarily give it back to the OS.
The OS usually allocates all pages as copy-on-write clones of the "zero"
page, a fixed page filled with zeros. Reading from the pages
will return 0 as expected. As long as you only read, all references go
to the same physical page. Once you write a value, the COW is
broken and a real, physical page frame is allocated for you. This
means that as long as you don't write to the memory you can keep
allocating memory until the virtual address space runs out or
your page tables fill up all available memory.
As long as you don't touch those allocated chunks, the system will not really allocate them for you.
However, you can run out of addressable space, which is a limit the OS imposes on processes and is not necessarily the maximum you can address with the system's pointer type.
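A minimal sketch of that effect, assuming a 64-bit Linux box with overcommit enabled: it keeps reserving 1 GiB blocks without ever touching them and reports how much address space was handed out before malloc() finally gave up (the exact point depends on the overcommit and mmap settings):
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    size_t chunk = 1024UL * 1024 * 1024;   /* 1 GiB per request */
    size_t total = 0;
    void *p;

    /* Keep reserving address space without ever touching it. */
    while ((p = malloc(chunk)) != NULL)
        total += chunk;

    /* On an overcommitting system this is far more than physical RAM. */
    printf("reserved %zu GiB of untouched virtual memory\n", total / chunk);
    return 0;
}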
Related
On Linux, malloc behaves opportunistically, only backing virtual memory with real memory when it is first accessed. Would it be possible to modify calloc so that it also behaves this way (allocating and zeroing pages when they are first accessed)?
It is not a feature of malloc() that makes it "opportunistic". It's a feature of the kernel, with which malloc() has nothing to do whatsoever.
malloc() asks the kernel for a slab of memory every time it needs more memory to fulfill a request, and it's the kernel that says "Yeah, sure, you have it" every time without actually supplying memory. It is also the kernel that handles the subsequent page faults by supplying zeroed memory pages. Note that any memory the kernel supplies will already be zeroed out for security reasons, so it is equally well suited for malloc() and for calloc().
That is, unless the calloc() implementation spoils this by unconditionally zeroing out the pages itself (generating the page faults that prompt the kernel to actually supply memory), it will have the same "opportunistic" behavior as malloc().
Update:
The following program successfully allocates 1 TiB (!) on my machine, which has only 2 GiB of memory:
#include <stdlib.h>
#include <stdio.h>

int main() {
    size_t allocationCount = 1024, successfulAllocations = 0;
    char* allocations[allocationCount];

    for(int i = allocationCount; i--; ) {
        if((allocations[i] = calloc(1, 1024*1024*1024))) successfulAllocations++;
    }

    if(successfulAllocations == allocationCount) {
        printf("all %zu allocations were successful\n", successfulAllocations);
    } else {
        printf("there were %zu failed allocations\n", allocationCount - successfulAllocations);
    }
}
I think it's safe to say that at least the calloc() implementation on my box behaves "opportunistically".
From the related /proc/sys/vm/overcommit_memory section in proc:
The amount of memory presently allocated on the system. The committed memory is a sum of all of the memory which has been allocated by processes, even if it has not been "used" by them as of yet. A process which allocates 1GB of memory (using malloc(3) or similar), but only touches 300MB of that memory will only show up as using 300MB of memory even if it has the address space allocated for the entire 1GB. This 1GB is memory which has been "committed" to by the VM and can be used at any time by the allocating application. With strict overcommit enabled on the system (mode 2 /proc/sys/vm/overcommit_memory), allocations which would exceed the CommitLimit (detailed above) will not be permitted. This is useful if one needs to guarantee that processes will not fail due to lack of memory once that memory has been successfully allocated.
Though it is not said explicitly, I think "similar" here includes calloc and realloc. So calloc already behaves as opportunistically as malloc does.
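If you want to watch that accounting yourself, the kernel exposes the commit limit and the amount committed so far in /proc/meminfo; a minimal sketch that prints the two fields (CommitLimit and Committed_AS are the Linux field names):
#include <stdio.h>
#include <string.h>

/* Print the kernel's commit accounting from /proc/meminfo (Linux). */
int main(void) {
    char line[256];
    FILE *f = fopen("/proc/meminfo", "r");
    if (!f) return 1;

    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "CommitLimit:", 12) == 0 ||
            strncmp(line, "Committed_AS:", 13) == 0)
            fputs(line, stdout);   /* the limit vs. memory committed so far */
    }
    fclose(f);
    return 0;
}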
What exactly is a memory leak?
And how will it affect the system the program is running on?
When your process allocates memory from the OS on an ongoing basis and never frees any of it, you will eventually be using more memory than is physically present in the machine. At this point the OS will first swap pages out to disk (which degrades performance) if it has any swap space, and eventually your process will reach the point where the OS can no longer grant it more memory, because you've exceeded the maximum amount of addressable space (4 GB on a 32-bit OS).
There are basically two reasons this can happen. You've allocated memory and lost the pointer to it (it has become unreachable to your program), so you can no longer free it; that's what most people call a memory leak. Alternatively, you may just be allocating memory and never freeing it because your program is lazy. That's not so much a leak, but in the end the problems you run into are the same.
A memory leak is when your code allocates memory and then loses track of it, including the ability to free it later.
In C, for example, this can be done with the simple sequence:
void *pointer = malloc (2718); // Alloc, store address in pointer.
pointer = malloc (31415); // And again.
free (pointer); // Only frees the second block.
The original block of memory is still allocated but, because pointer no longer points to it, you have no way to free it.
That sequence, on its own, isn't that bad (well, it is bad, but the effects may not be noticeable). It's usually when you do it repeatedly that problems occur, such as in a loop, or in a function that's called over and over:
static char firstDigit (int val) {
    char *buff = malloc (100); // Allocates.
    if (val < 0)
        val = -val;
    sprintf (buff, "%d", val);
    return buff[0];            // But never frees.
}
Every time you call that function, you will leak the hundred bytes (plus any housekeeping information).
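For comparison, a minimal fix of that sketch keeps the character it needs and releases the buffer before returning (error checking omitted, as in the original):
#include <stdio.h>
#include <stdlib.h>

/* Same function, but the buffer is released before returning. */
static char firstDigit (int val) {
    char *buff = malloc (100);
    char result;

    if (val < 0)
        val = -val;
    sprintf (buff, "%d", val);
    result = buff[0];   /* keep what we need ...   */
    free (buff);        /* ... then free the block */
    return result;
}

int main(void) {
    printf("%c\n", firstDigit(-31415));   /* prints 3 */
    return 0;
}
Better still, this particular function doesn't need the heap at all; a char buffer on the stack would do.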
And, yes, memory leaks will affect other things. But the effects should be limited.
It will eventually affect the process that is leaking as it runs out of address space for allocating more objects. While that may not necessarily matter for short-lived processes, long-lived processes will eventually fail.
However, a decent operating system (and that includes Windows) will limit the resources that a single process can use, which will minimise the effects on other processes. Since modern environments disconnect virtual from physical memory, the only real effect that can be carried from process to process is if one tries to keep all its virtual memory resident in physical memory all the time, reducing the allocation of that physical memory to other processes.
But, even if a single process leaks gigabytes of memory, the memory itself won't actually be used by the process (the crux of the leak is that the process has lost access to it). And, since it's not being used, the OS will almost certainly swap it out to disk and never need to bring it back into RAM again.
Of course, it uses up swap space, and that may affect other processes, but the amount of disk space far outweighs the amount of physical RAM.
Your program will eventually crash. If it does not crash itself, it will cause other programs to crash through lack of memory.
When you leak memory, it means that you are dynamically creating objects but not destroying them. If the leak is severe enough, your program will eventually run out of address space and future allocation attempts will fail (likely causing your application to terminate or crash, since, if you are leaking memory, you probably aren't handling out-of-memory conditions very well either), or the OS will halt your process if it attempts to allocate too much memory.
Additionally, you have to remember that in C++, many objects have destructors: when you fail to destroy a dynamically allocated object, its destructor will not be called.
A memory leak is a situation in which a program allocates dynamic memory and then loses all pointers to that memory, so it can neither address nor free it. The memory remains marked as allocated, so it will never be returned when more memory is requested by the program.
The program will exhaust this limited resource at some rate. Depending on the amount of memory and the size of the swap file, this causes either the program itself eventually getting a "can't allocate memory" failure, or the operating system running out of both physical memory and swap, at which point any program may get that failure. The latter can have serious consequences on some operating systems: we sometimes see Windows XP completely falling apart, with critical services malfunctioning severely, once extreme memory consumption in one program exhausts all memory. If that happens, the only way to fix the problem is to reboot the system.
Dear all, I am using Red Hat Linux. How do I set the maximum memory for a particular process? For example, I want to limit the maximum memory usage of Eclipse alone. Is it possible to set a limit like this? Please give me some solutions.
ulimit -v 102400
eclipse
...gives Eclipse 100 MiB of virtual memory (ulimit -v takes its argument in KiB).
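Under the hood, ulimit -v sets the kernel's RLIMIT_AS (address space) limit. A minimal C sketch of imposing the same 100 MiB cap programmatically, for example in a small wrapper that would then exec the real program:
#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    /* Cap the virtual address space at 100 MiB (what `ulimit -v 102400` does). */
    struct rlimit lim = { 100UL * 1024 * 1024, 100UL * 1024 * 1024 };

    if (setrlimit(RLIMIT_AS, &lim) != 0) {
        perror("setrlimit");
        return 1;
    }
    /* From here on (and in any child we exec), mmap()/brk() requests beyond
       the cap fail with ENOMEM, so malloc() returns NULL instead of the
       process growing further. */
    return 0;
}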
You can't control memory usage directly; you can only control virtual memory size, not the amount of actual memory used, as that is extremely complicated (perhaps impossible) to determine for a single process on an operating system that supports virtual memory.
Not all memory used appears in the process's virtual address space at a given instant; for example, kernel usage and disk caching do not. A process can change which pages it has mapped in as often as it likes (e.g. via mmap()). Some of a process's address space is also mapped in but not actually used, or is shared with one or more other processes. This makes measuring per-process memory usage a fairly unachievable goal in practice.
And putting a cap on the VM size is not a good idea either, as that will result in the process being killed if it attempts to use more.
The right way of doing this in this case (for a Java process) is to set the maximum heap size (via the various well-documented JVM startup options). However, experience suggests that you should not set it to less than 1 GB.
Linux noob question:
If I have 500MB of RAM, and 500MB of swap space, can the OS and processes then use 1GB of memory?
In other words, is the total amount of memory available to programs and the OS the total of the physical memory size and swap size?
I'm trying to figure out which SNMP counters to query, but need to understand how Linux uses virtual memory a little better first.
Thanks
Actually, it IS essentially correct, but your "virtual" memory does NOT reside beside your "physical memory" (as Matthew Scharley stated).
Your "virtual memory" is an abstraction layer covering both "physical" (as in RAM) and "swap" (as in hard-disk, which is of course as much physical as RAM is) memory.
Virtual memory is in essence an abstraction layer. Your program always addresses a "virtual" address, which your OS translates to an address in RAM or on disk (which needs to be loaded into RAM first) depending on where the data resides. So your program never has to worry about lack of memory.
Nothing is ever quite so simple anymore...
Memory pages are lazily allocated. A process can malloc() a large quantity of memory and never use it. So on your 500MB_RAM + 500MB_SWAP system, I could -- at least in theory -- allocate 2 gig of memory off the heap and things will run merrily along until I try to use too much of that memory. (At which point whatever process couldn't acquire more memory pages gets nuked. Hopefully it's my process. But not always.)
Individual processes may be limited to 4 gig as a hard address limitation on 32-bit systems. Even when you have more than 4 gig of RAM in the machine and you're using that bizarre segmented 36-bit atrocity-from-hell addressing scheme, individual processes are still limited to only 4 gigs. Some of that 4 gigs has to go to shared libraries and program code, so you're down to 2-3 gigs of stack+heap as an ADDRESSING limitation.
You can mmap files in, effectively giving you more memory. It basically acts as extra swap: rather than loading a program's binary code and data into memory and then swapping it out to the swap file, the file is simply mmapped. As needed, pages are paged into RAM directly from the file.
You can get into some interesting stuff with sparse data and mmapped sparse files. I've seen X-windows claim enormous memory usage when in fact it was only using up a tiny bit.
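A minimal sketch of mapping a file this way (the path is just a placeholder for illustration); note that no RAM is consumed until pages of the mapping are actually touched:
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    /* "/tmp/example.dat" is just a placeholder path for the illustration. */
    int fd = open("/tmp/example.dat", O_RDONLY);
    struct stat st;
    if (fd < 0 || fstat(fd, &st) < 0) { perror("open/fstat"); return 1; }

    /* Map the whole file; no RAM is used until pages are actually read. */
    char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    printf("first byte: %d\n", data[0]);   /* faults in exactly one page */

    munmap(data, st.st_size);
    close(fd);
    return 0;
}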
BTW: "free" might help you. As might "cat /proc/meminfo" or the Vm lines in /proc/$PID/status. (Especially VmData and VmStk.) Or perhaps "ps up $PID"
Although mostly it's true, it's not entirely correct. For a particular process, the environment you run it in may limit the memory available to your process. Check the output of ulimit -v as well.
Yes, this is essentially correct. The actual numbers might be (very) marginally lower, but for all intents and purposes, if you have x physical memory and y virtual memory (swap in Linux), then you have x + y memory available to the operating system and any programs running under it.
Tools like 'ps' and 'top' report various kinds of memory usages, such as the VM size and the Resident Set Size. However, none of those are the "real" memory usage:
Program code is shared between multiple instances of the same program.
Shared library program code is shared between all processes that use that library.
Some apps fork off processes and share memory with them (e.g. via shared memory segments).
The virtual memory system makes the VM size report pretty much useless.
RSS is 0 when a process is swapped out, making it not very useful.
Etc etc.
I've found that the private dirty RSS, as reported by Linux, is the closest thing to the "real" memory usage. This can be obtained by summing all Private_Dirty values in /proc/somepid/smaps.
However, do other operating systems provide similar functionality? If not, what are the alternatives? In particular, I'm interested in FreeBSD and OS X.
On OS X, the Activity Monitor actually gives you a very good guess.
Private memory is for sure memory that is used only by your application. E.g. stack memory and all memory dynamically reserved using malloc() and comparable functions/methods (the alloc method for Objective-C) is private memory. If you fork, private memory will be shared with your child, but marked copy-on-write. That means as long as a page is not modified by either process (parent or child), it is shared between them. As soon as either process modifies any page, that page is copied before it is modified. Even while this memory is shared with fork children (and it can only be shared with fork children), it is still shown as "private" memory, because in the worst case every page of it will get modified (sooner or later) and then it is private to each process again.
Shared memory is memory that is either currently shared (the same pages are visible in the virtual address space of different processes) or likely to become shared in the future (e.g. read-only memory, since there is no reason not to share read-only memory). At least that's how I read the source code of some command line tools from Apple. So if you share memory between processes using mmap (or a comparable call that maps the same memory into multiple processes), this is shared memory. However, the executable code itself is also shared memory, since if another instance of your application is started there is no reason why it may not share the code already loaded in memory (executable code pages are read-only by default, unless you are running your app in a debugger). Thus shared memory is really memory used by your application, just like private memory, but it might additionally be shared with another process (or it might not, but why would it not count towards your application if it were shared?)
Real memory is the amount of RAM currently "assigned" to your process, no matter whether it is private or shared. This can be exactly the sum of private and shared, but usually it is not. Your process might have more memory assigned to it than it currently needs (this speeds up requests for more memory in the future), but that is no loss to the system. If another process needs memory and no free memory is available, then before the system starts swapping it will take that extra memory away from your process and assign it to another process (which is a fast and painless operation); therefore your next malloc call might be somewhat slower. Real memory can also be smaller than private plus shared memory; this is because when your process requests memory from the system, it only receives "virtual memory". This virtual memory is not linked to any real memory pages as long as you don't use it (so if you malloc 10 MB of memory and use only one byte of it, your process gets only a single page, 4096 bytes, of memory assigned; the rest is only assigned if you actually ever need it). Further, memory that is swapped out may not count towards real memory either (not sure about this), but it will count towards shared and private memory.
Virtual memory is the sum of all address blocks that are considered valid in your app's process space. These addresses might be linked to physical memory (which is again private or shared), or they might not be; in that case they will be linked to physical memory as soon as you use the address. Accessing memory addresses outside of the known addresses will cause a SIGBUS and your app will crash. When memory is swapped out, the virtual address space for this memory remains valid and accessing those addresses causes memory to be swapped back in.
Conclusion:
If your app does not explicitly or implicitly use shared memory, private memory is the amount of memory your app needs because of the stack size (or sizes, if multithreaded) and because of the malloc() calls you made for dynamic memory. You don't have to care much about shared or real memory in that case.
If your app uses shared memory, and this includes a graphical UI, where memory is shared between your application and the WindowServer for example, then you might have a look at shared memory as well. A very high shared memory number may mean you have too many graphical resources loaded in memory at the moment.
Real memory is of little interest for app development. If it is bigger than the sum of shared and private, then this means nothing other than that the system is lazy at taking memory away from your process. If it is smaller, then your process has requested more memory than it actually needed, which is not bad either, since as long as you don't use all of the requested memory, you are not "stealing" memory from the system. If it is much smaller than the sum of shared and private, you may want to consider requesting less memory where possible, as you are over-requesting a bit (again, this is not bad, but it tells me that your code is not optimized for minimal memory usage, and if it is cross-platform, other platforms may not have such sophisticated memory handling; so you may prefer to allocate many small blocks instead of a few big ones, for example, or free memory sooner, and so on).
If you are still not happy with all that information, you can get even more information. Open a terminal and run:
sudo vmmap <pid>
where <pid> is the process ID of your process. This will show you statistics for EVERY block of memory in your process space, with start and end addresses. It will also tell you where the memory came from (a mapped file? stack memory? malloc'ed memory? a __DATA or __TEXT section of your executable?), how big it is in KB, the access rights, and whether it is private, shared or copy-on-write. If it is mapped from a file, it will even give you the path to the file.
If you want only "actual" RAM usage, use
sudo vmmap -resident <pid>
Now it will show for every memory block how big the block is virtually and how much of it is actually present in physical memory.
At the end of each dump is also an overview table with the sums of different memory types. This table looks like this for Firefox right now on my system:
REGION TYPE [ VIRTUAL/RESIDENT]
=========== [ =======/========]
ATS (font support) [ 33.8M/ 2496K]
CG backing stores [ 5588K/ 5460K]
CG image [ 20K/ 20K]
CG raster data [ 576K/ 576K]
CG shared images [ 2572K/ 2404K]
Carbon [ 1516K/ 1516K]
CoreGraphics [ 8K/ 8K]
IOKit [ 256.0M/ 0K]
MALLOC [ 256.9M/ 247.2M]
Memory tag=240 [ 4K/ 4K]
Memory tag=242 [ 12K/ 12K]
Memory tag=243 [ 8K/ 8K]
Memory tag=249 [ 156K/ 76K]
STACK GUARD [ 101.2M/ 9908K]
Stack [ 14.0M/ 248K]
VM_ALLOCATE [ 25.9M/ 25.6M]
__DATA [ 6752K/ 3808K]
__DATA/__OBJC [ 28K/ 28K]
__IMAGE [ 1240K/ 112K]
__IMPORT [ 104K/ 104K]
__LINKEDIT [ 30.7M/ 3184K]
__OBJC [ 1388K/ 1336K]
__OBJC/__DATA [ 72K/ 72K]
__PAGEZERO [ 4K/ 0K]
__TEXT [ 108.6M/ 63.5M]
__UNICODE [ 536K/ 512K]
mapped file [ 118.8M/ 50.8M]
shared memory [ 300K/ 276K]
shared pmap [ 6396K/ 3120K]
What does this tell us? E.g. the Firefox binary and all the libraries it loads have 108 MB of data together in their __TEXT sections, but only 63 MB of those are currently resident in memory. The font support (ATS) needs 33 MB, but only about 2.5 MB are really in memory. It uses a bit over 5 MB of CG backing stores (CG = Core Graphics); those are most likely window contents, buttons, images and other data that is cached for fast drawing. It has requested 256 MB via malloc calls, and currently 247 MB of that are really mapped to memory pages. It has 14 MB reserved for stacks, but only 248 KB of stack space is actually in use right now.
vmmap also has a good summary above the table
ReadOnly portion of Libraries: Total=139.3M resident=66.6M(48%) swapped_out_or_unallocated=72.7M(52%)
Writable regions: Total=595.4M written=201.8M(34%) resident=283.1M(48%) swapped_out=0K(0%) unallocated=312.3M(52%)
And this shows an interesting aspect of OS X: for read-only memory that comes from libraries, it plays no role whether it is swapped out or simply unallocated; there is only resident and not resident. For writable memory this makes a difference (in my case 52% of all requested memory has never been used and is thus unallocated, and 0% of the memory has been swapped out to disk).
The reason for that is simple: read-only memory from mapped files is not swapped. If the memory is needed by the system, the current pages are simply dropped from the process, as the memory is already "swapped": it consists only of content mapped directly from files, and this content can be remapped whenever needed, as the files are still there. That way this memory doesn't waste space in the swap file either. Only writable memory must first be written to the swap file before it is dropped, as its content wasn't stored on disk before.
On Linux, you may want the PSS (proportional set size) numbers in /proc/self/smaps. A mapping's PSS is its RSS, with each shared page divided by the number of processes that are using it.
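If you want to sum those yourself, here is a minimal sketch for the current process (Pss: is the field name Linux uses in smaps; values are reported in kB):
#include <stdio.h>
#include <string.h>

/* Sum the Pss: lines of /proc/self/smaps (Linux; values are in kB). */
int main(void) {
    char line[256];
    long kb, total_kb = 0;
    FILE *f = fopen("/proc/self/smaps", "r");
    if (!f) return 1;

    while (fgets(line, sizeof line, f))
        if (sscanf(line, "Pss: %ld kB", &kb) == 1)
            total_kb += kb;

    fclose(f);
    printf("PSS: %ld kB\n", total_kb);
    return 0;
}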
Top knows how to do this. It shows VIRT, RES and SHR by default on Debian Linux. VIRT = SWAP + RES, and RES = CODE + DATA. SHR is the memory that may be shared with another process (a shared library or other memory).
Also, 'dirty' memory is merely RES memory that has been used and/or has not been swapped out.
It can be hard to tell, but the best way to understand is to look at a system that isn't swapping. Then RES - SHR is the process's exclusive memory. However, that's not a good way of looking at it, because you don't know that the memory in SHR is being used by another process; it may represent unwritten shared object pages that are only used by this process.
You really can't.
I mean, shared memory between processes... are you going to count it or not? If you don't count it, you are wrong: the sum of all processes' memory usage is not going to be the total memory usage. If you count it, you are going to count it twice: the sum isn't going to be correct either.
Me, I'm happy with RSS. And knowing you can't really rely on it completely...
You can get private dirty and private clean RSS from /proc/pid/smaps
Take a look at smem. It will give you PSS information
http://www.selenic.com/smem/
Reworked this to be much cleaner, to demonstrate some proper best practices in bash, and in particular to use awk instead of bc.
find /proc/ -maxdepth 1 -name '[0-9]*' -print0 | while read -r -d $'\0' pidpath; do
    [ -f "${pidpath}/smaps" ] || continue
    awk '!/^Private_Dirty:/ {next;}
         $3=="kB" {pd += $2 * (1024^1); next}
         $3=="mB" {pd += $2 * (1024^2); next}
         $3=="gB" {pd += $2 * (1024^3); next}
         $3=="tB" {pd += $2 * (1024^4); next}
         $3=="pB" {pd += $2 * (1024^5); next}
         {print "ERROR!! "$0 >"/dev/stderr"; exit(1)}
         END {printf("%10d: %d\n", '"${pidpath##*/}"', pd)}' "${pidpath}/smaps" || break
done
On a handy little container on my machine, with | sort -n -k 2 to sort the output, this looks like:
56: 106496
1: 147456
55: 155648
Use the mincore(2) system call. Quoting the man page:
DESCRIPTION
The mincore() system call determines whether each of the pages in the
region beginning at addr and continuing for len bytes is resident. The
status is returned in the vec array, one character per page. Each
character is either 0 if the page is not resident, or a combination of
the following flags (defined in <sys/mman.h>):
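A minimal sketch using the Linux flavour of the interface (unsigned char *vec; on the BSDs the vector is plain char *): it maps 64 anonymous pages, touches half of them, and asks mincore() how many are resident:
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    long pagesize = sysconf(_SC_PAGESIZE);
    size_t npages = 64;
    size_t len = npages * pagesize;

    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    memset(p, 1, len / 2);              /* touch only the first half */

    unsigned char vec[64];
    if (mincore(p, len, vec) != 0) { perror("mincore"); return 1; }

    size_t resident = 0;
    for (size_t i = 0; i < npages; i++)
        if (vec[i] & 1)                 /* low bit set: page is resident */
            resident++;

    printf("%zu of %zu pages resident\n", resident, npages);
    return 0;
}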
For a question that mentions FreeBSD, I'm surprised no one has written this yet:
If you want Linux-style /proc/PROCESSID/status output, do the following:
mount -t linprocfs none /proc
cat /proc/PROCESSID/status
At least in FreeBSD 7.0, this was not mounted by default (7.0 is a much older release, but for something this basic the answer was hidden in a mailing list!).
Check out the source code of gnome-system-monitor: it considers the memory "really used" by a process (info->mem) to be the sum of the X Server Memory (info->memxserver) and the Writable Memory (info->memwritable), where "Writable Memory" is the memory blocks that are marked as "Private_Dirty" in the /proc/PID/smaps file.
On systems other than Linux, the calculation is different, as the gnome-system-monitor code shows.
static void
get_process_memory_writable (ProcInfo *info)
{
    glibtop_proc_map buf;
    glibtop_map_entry *maps;

    maps = glibtop_get_proc_map(&buf, info->pid);

    gulong memwritable = 0;
    const unsigned number = buf.number;

    for (unsigned i = 0; i < number; ++i) {
#ifdef __linux__
        memwritable += maps[i].private_dirty;
#else
        if (maps[i].perm & GLIBTOP_MAP_PERM_WRITE)
            memwritable += maps[i].size;
#endif
    }

    info->memwritable = memwritable;

    g_free(maps);
}

static void
get_process_memory_info (ProcInfo *info)
{
    glibtop_proc_mem procmem;
    WnckResourceUsage xresources;

    wnck_pid_read_resource_usage (gdk_screen_get_display (gdk_screen_get_default ()),
                                  info->pid,
                                  &xresources);

    glibtop_get_proc_mem(&procmem, info->pid);

    info->vmsize    = procmem.vsize;
    info->memres    = procmem.resident;
    info->memshared = procmem.share;

    info->memxserver = xresources.total_bytes_estimate;

    get_process_memory_writable(info);

    // fake the smart memory column if writable is not available
    info->mem = info->memxserver + (info->memwritable ? info->memwritable : info->memres);
}