While running, is it possible to display the currently allocated buffers managed by LeakSanitizer? - memory-leaks

I have a very large program (okay, only 13,000 lines of code according to cloc) which leaks. I know because over time it uses more and more resident memory.
I have the sanitizer option turned on, but on a clean exit my C++ software properly cleans everything up as expected, so I don't see anything growing in the sanitizer output.
What would be useful in this case is a way to call a function which displays the (large) list of allocated buffers while the code is running. I could then diff two such outputs and see what was allocated anew. The leaked buffers will be in there...
At this point, though, I just don't see any header with sanitizer functions I could call to see such a list. Does it exist?

The LSan interface is available in sanitizer/lsan_interface.h but AFAIK it has no API to print allocation info. The best you can do is compile your code with ASan (which includes LSan as well) and use __asan_print_accumulated_stats to get basic allocation statistics:
$ cat tmp.c
#include <sanitizer/asan_interface.h>
#include <stdlib.h>

int main() {
  malloc(100);
  __asan_print_accumulated_stats();
  return 0;
}

$ gcc -fsanitize=address -g tmp.c && ./a.out
Stats: 0M malloced (0M for red zones) by 2 calls
Stats: 0M realloced by 0 calls
Stats: 0M freed by 0 calls
Stats: 0M really freed by 0 calls
Stats: 0M (0M-0M) mmaped; 5 maps, 0 unmaps
mallocs by size class: 7:1; 11:1;
Stats: malloc large: 0
Stats: StackDepot: 2 ids; 0M allocated
Stats: SizeClassAllocator64: 0M mapped in 256 allocations; remains 256
07 (112): mapped: 64K allocs: 128 frees: 0 inuse: 128 num_freed_chunks 457 avail: 585 rss: 4K releases: 0
11 (176): mapped: 64K allocs: 128 frees: 0 inuse: 128 num_freed_chunks 244 avail: 372 rss: 4K releases: 0
Stats: LargeMmapAllocator: allocated 0 times, remains 0 (0 K) max 0 M; by size logs:
=================================================================
==15060==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 100 byte(s) in 1 object(s) allocated from:
#0 0x7fdf2194fb40 in __interceptor_malloc (/usr/lib/x86_64-linux-gnu/libasan.so.4+0xdeb40)
#1 0x559ca08a7857 in main /home/yugr/tmp.c:5
#2 0x7fdf214a1bf6 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21bf6)
SUMMARY: AddressSanitizer: 100 byte(s) leaked in 1 allocation(s).
Unfortunately there is no way to print exact allocations.
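If the goal is to take two snapshots while the program keeps running and diff them, one workaround (a sketch, not an official LSan API; the signal choice and helper name are just for illustration) is to dump the accumulated stats on demand, for example from a signal handler. Note that __asan_print_accumulated_stats only prints aggregate counters, not a list of individual allocations, and it is not documented as async-signal-safe, so treat this strictly as a debugging hack:

#include <sanitizer/asan_interface.h>
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

/* Dump ASan's aggregate allocation statistics whenever the process
   receives SIGUSR1, so two dumps taken at different times can be
   diffed from the log. */
static void dump_stats(int sig) {
  (void)sig;
  __asan_print_accumulated_stats();
}

int main(void) {
  signal(SIGUSR1, dump_stats);
  malloc(100);   /* stand-in for the real long-running work */
  pause();       /* send SIGUSR1 (kill -USR1 <pid>) to print a snapshot */
  return 0;
}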

Related

Memory leaks in pthread even if the state is detached

I am learning pthreads programming.
I understood that a thread can be in one of two states:
1. Joinable
2. Detached
In the joinable case, we need to call pthread_join to free the resources (the stack), whereas in the detached case there is no need to call pthread_join and the resources are freed on thread exit.
I wrote a sample program to observe the behavior
#include <stdio.h>
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>   /* for sleep() */

void *threadFn(void *arg)
{
    pthread_detach(pthread_self());
    sleep(1);
    printf("Thread Fn\n");
    pthread_exit(NULL);
}

int main(int argc, char *argv[])
{
    pthread_t tid;
    int ret = pthread_create(&tid, NULL, threadFn, NULL);
    if (ret != 0) {
        perror("Thread Creation Error\n");
        exit(1);
    }
    printf("After thread created in Main\n");
    pthread_exit(NULL);
}
When I try to check for memory leaks with valgrind, it gives me a leak of 272 bytes. Can you show me why the leak is happening here?
$valgrind --leak-check=full ./app
==38649==
==38649== HEAP SUMMARY:
==38649== in use at exit: 272 bytes in 1 blocks
==38649== total heap usage: 7 allocs, 6 frees, 2,990 bytes allocated
==38649==
==38649== 272 bytes in 1 blocks are possibly lost in loss record 1 of 1
==38649== at 0x4C31B25: calloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==38649== by 0x40134A6: allocate_dtv (dl-tls.c:286)
==38649== by 0x40134A6: _dl_allocate_tls (dl-tls.c:530)
==38649== by 0x4E44227: allocate_stack (allocatestack.c:627)
==38649== by 0x4E44227: pthread_create@@GLIBC_2.2.5 (pthread_create.c:644)
==38649== by 0x108902: main (2.c:18)
==38649==
==38649== LEAK SUMMARY:
==38649== definitely lost: 0 bytes in 0 blocks
==38649== indirectly lost: 0 bytes in 0 blocks
==38649== possibly lost: 272 bytes in 1 blocks
==38649== still reachable: 0 bytes in 0 blocks
==38649== suppressed: 0 bytes in 0 blocks
==38649==
==38649== For counts of detected and suppressed errors, rerun with: -v
==38649== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
Your expectation is correct that there shouldn't be any leaks in the main thread once you call pthread_exit.
However, what you observe is a quirk of the implementation you're using (most likely glibc): the glibc pthreads implementation re-uses the initially allocated thread stacks, keeping them in a cache so that previously allocated stacks can be re-used whenever possible.
Valgrind simply reports what it "sees" (something was allocated but not de-allocated). But it's not a real leak, so you don't need to worry about it.
If you "reverse" the logic (the main thread exits as the last thread) then you wouldn't see this report, because the initially allocated stack space is properly freed by the main thread. Either way, this isn't a real leak and you can safely ignore it.
You can also set up a suppression file so that Valgrind doesn't complain about this (i.e., to tell Valgrind "I know this isn't a real leak, so don't report it"), such as:
{
   Pthread_Stack_Leaks_Ignore
   Memcheck:Leak
   fun:calloc
   fun:allocate_dtv
   fun:_dl_allocate_tls
   fun:allocate_stack
   fun:pthread_create*
}
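To use it, save the block above to a file (the name pthread.supp below is just an example) and pass it to Valgrind with --suppressions:
$ valgrind --leak-check=full --suppressions=pthread.supp ./app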

How can I reduce the virtual memory required by gccgo compiled executable?

When I compile this simple hello world example using gccgo, the resulting executable uses over 800 MiB of VmData. I would like to know why, and if there is anything I can do to lower that. The sleep is just to give me time to observe the memory usage.
The source:
package main

import (
    "fmt"
    "time"
)

func main() {
    fmt.Println("hello world")
    time.Sleep(1000000000 * 5)
}
The script I use to compile:
#!/bin/bash
TOOLCHAIN_PREFIX=i686-linux-gnu
OPTIMIZATION_FLAG="-O3"
CGO_ENABLED=1 \
CC=${TOOLCHAIN_PREFIX}-gcc-8 \
CXX=${TOOLCHAIN_PREFIX}-g++-8 \
AR=${TOOLCHAIN_PREFIX}-ar \
GCCGO=${TOOLCHAIN_PREFIX}-gccgo-8 \
CGO_CFLAGS="-g ${OPTIMIZATION_FLAG}" \
CGO_CPPFLAGS="" \
CGO_CXXFLAGS="-g ${OPTIMIZATION_FLAG}" \
CGO_FFLAGS="-g ${OPTIMIZATION_FLAG}" \
CGO_LDFLAGS="-g ${OPTIMIZATION_FLAG}" \
GOOS=linux \
GOARCH=386 \
go build -x \
-compiler=gccgo \
-gccgoflags=all="-static -g ${OPTIMIZATION_FLAG}" \
$1
The version of gccgo:
$ i686-linux-gnu-gccgo-8 --version
i686-linux-gnu-gccgo-8 (Ubuntu 8.2.0-1ubuntu2~18.04) 8.2.0
Copyright (C) 2018 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
The output from /proc/<pid>/status:
VmPeak: 811692 kB
VmSize: 811692 kB
VmLck: 0 kB
VmPin: 0 kB
VmHWM: 5796 kB
VmRSS: 5796 kB
VmData: 807196 kB
VmStk: 132 kB
VmExe: 2936 kB
VmLib: 0 kB
VmPTE: 52 kB
VmPMD: 0 kB
VmSwap: 0 kB
I ask because my device only has 512 MiB of RAM. I know that this is virtual memory but I would like to reduce or remove the overcommit if possible. It does not seem reasonable to me for a simple executable to require that much allocation.
I was able to locate where gccgo is asking for so much memory. It's in the libgo/go/runtime/malloc.go file in the mallocinit function:
// If we fail to allocate, try again with a smaller arena.
// This is necessary on Android L where we share a process
// with ART, which reserves virtual memory aggressively.
// In the worst case, fall back to a 0-sized initial arena,
// in the hope that subsequent reservations will succeed.
arenaSizes := [...]uintptr{
    512 << 20,
    256 << 20,
    128 << 20,
    0,
}

for _, arenaSize := range &arenaSizes {
    // SysReserve treats the address we ask for, end, as a hint,
    // not as an absolute requirement. If we ask for the end
    // of the data segment but the operating system requires
    // a little more space before we can start allocating, it will
    // give out a slightly higher pointer. Except QEMU, which
    // is buggy, as usual: it won't adjust the pointer upward.
    // So adjust it upward a little bit ourselves: 1/4 MB to get
    // away from the running binary image and then round up
    // to a MB boundary.
    p = round(getEnd()+(1<<18), 1<<20)
    pSize = bitmapSize + spansSize + arenaSize + _PageSize
    if p <= procBrk && procBrk < p+pSize {
        // Move the start above the brk,
        // leaving some room for future brk
        // expansion.
        p = round(procBrk+(1<<20), 1<<20)
    }
    p = uintptr(sysReserve(unsafe.Pointer(p), pSize, &reserved))
    if p != 0 {
        break
    }
}
if p == 0 {
    throw("runtime: cannot reserve arena virtual address space")
}
The interesting part is that it falls back to smaller arena sizes if larger ones fail. So limiting the virtual memory available to a go executable will actually limit how much it will successfully allocate.
I was able to use ulimit -v 327680 to limit the virtual memory to smaller numbers:
VmPeak: 300772 kB
VmSize: 300772 kB
VmLck: 0 kB
VmPin: 0 kB
VmHWM: 5712 kB
VmRSS: 5712 kB
VmData: 296276 kB
VmStk: 132 kB
VmExe: 2936 kB
VmLib: 0 kB
VmPTE: 56 kB
VmPMD: 0 kB
VmSwap: 0 kB
These are still big numbers, but they are the best a gccgo executable can achieve. So the answer to the question is: yes, you can reduce the VmData of a gccgo compiled executable, but you really shouldn't worry about it. (On a 64-bit machine gccgo tries to allocate 512 GB.)
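For reference, a sketch of how that limit can be applied in a shell session (the binary name hello is just an example):
$ ulimit -v 327680          # cap virtual memory of subsequent commands at ~320 MiB
$ ./hello &
$ grep Vm /proc/$!/status   # VmSize/VmData should now stay under the limit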
The likely cause is that you are linking libraries into the code. My guess is that you'd get a smaller logical address space if you explicitly linked against static libraries, so that only the minimum is added to your executable. In any event, there is minimal harm in having a large logical address space.

Can malloc_trim() release memory from the middle of the heap?

I am confused about the behaviour of malloc_trim as implemented in the glibc.
man malloc_trim
[...]
malloc_trim - release free memory from the top of the heap
[...]
This function cannot release free memory located at places other than the top of the heap.
When I now look up the source of malloc_trim() (in malloc/malloc.c) I see that it calls mtrim() which is utilizing madvise(x, MADV_DONTNEED) to release memory back to the operating system.
So I wonder if the man-page is wrong or if I misinterpret the source in malloc/malloc.c.
Can malloc_trim() release memory from the middle of the heap?
There are two usages of madvise with MADV_DONTNEED in glibc now: http://code.metager.de/source/search?q=MADV_DONTNEED&path=%2Fgnu%2Fglibc%2Fmalloc%2F&project=gnu
arena.c:643   __madvise ((char *) h + new_size, diff, MADV_DONTNEED);
malloc.c:4535 __madvise (paligned_mem, size & ~psm1, MADV_DONTNEED);
There was https://sourceware.org/git/?p=glibc.git;a=commit;f=malloc/malloc.c;h=68631c8eb92ff38d9da1ae34f6aa048539b199cc commit by Ulrich Drepper on 16 Dec 2007 (part of glibc 2.9 and newer):
malloc/malloc.c (public_mTRIm): Iterate over all arenas and call
mTRIm for all of them.
(mTRIm): Additionally iterate over all free blocks and use madvise
to free memory for all those blocks which contain at least one
memory page.
The mTRIm (now mtrim) implementation was changed. Unused parts of chunks that are aligned on a page boundary and are at least a page in size may now be marked as MADV_DONTNEED:
/* See whether the chunk contains at least one unused page.  */
char *paligned_mem = (char *) (((uintptr_t) p
                                + sizeof (struct malloc_chunk)
                                + psm1) & ~psm1);

assert ((char *) chunk2mem (p) + 4 * SIZE_SZ <= paligned_mem);
assert ((char *) p + size > paligned_mem);

/* This is the size we could potentially free.  */
size -= paligned_mem - (char *) p;

if (size > psm1)
  madvise (paligned_mem, size & ~psm1, MADV_DONTNEED);
The man page of malloc_trim is here: https://github.com/mkerrisk/man-pages/blob/master/man3/malloc_trim.3 and it was committed by kerrisk in 2012: https://github.com/mkerrisk/man-pages/commit/a15b0e60b297e29c825b7417582a33e6ca26bf65
As far as I can grep glibc's git, there are no man pages in glibc itself and no commit to the malloc_trim man page documenting this patch, so the best (and only) documentation of glibc malloc is its source code: https://sourceware.org/git/?p=glibc.git;a=blob;f=malloc/malloc.c
Additional functions:
malloc_trim(size_t pad);

/*
  malloc_trim(size_t pad);

  If possible, gives memory back to the system (via negative
  arguments to sbrk) if there is unused memory at the `high' end of
  the malloc pool. You can call this after freeing large blocks of
  memory to potentially reduce the system-level memory requirements
  of a program. However, it cannot guarantee to reduce memory. Under
  some allocation patterns, some large free blocks of memory will be
  locked between two used chunks, so they cannot be given back to
  the system.

  The `pad' argument to malloc_trim represents the amount of free
  trailing space to leave untrimmed. If this argument is zero,
  only the minimum amount of memory to maintain internal data
  structures will be left (one page or less). Non-zero arguments
  can be supplied to maintain enough trailing space to service
  future expected allocations without having to re-obtain memory
  from the system.

  Malloc_trim returns 1 if it actually released any memory, else 0.
  On systems that do not support "negative sbrks", it will always
  return 0.
*/
int __malloc_trim(size_t);
Freeing from the middle of a chunk is not documented as text in malloc/malloc.c (the malloc_trim description in the comment was not updated in 2007) and is not documented in the man-pages project. The man page from 2012 may be the first man page of the function, and it was not written by the glibc authors. The info page of glibc only mentions the M_TRIM_THRESHOLD of 128 KB:
https://www.gnu.org/software/libc/manual/html_node/Malloc-Tunable-Parameters.html#Malloc-Tunable-Parameters and it doesn't list the malloc_trim function https://www.gnu.org/software/libc/manual/html_node/Summary-of-Malloc.html#Summary-of-Malloc (it also doesn't document memusage/memusagestat/libmemusage.so).
You may ask Drepper and other glibc developers again, as you already did in https://sourceware.org/ml/libc-help/2015-02/msg00022.html "malloc_trim() behaviour", but there is still no reply from them. (Only incorrect answers from other users, like https://sourceware.org/ml/libc-help/2015-05/msg00007.html https://sourceware.org/ml/libc-help/2015-05/msg00008.html)
Or you may test malloc_trim with this simple C program (test_malloc_trim.c) and strace/ltrace:
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <malloc.h>

int main()
{
    int *m1, *m2, *m3, *m4;
    printf("%s\n", "Test started");
    m1 = (int*)malloc(20000);
    m2 = (int*)malloc(40000);
    m3 = (int*)malloc(80000);
    m4 = (int*)malloc(10000);
    printf("1:%p 2:%p 3:%p 4:%p\n", m1, m2, m3, m4);
    free(m2);
    malloc_trim(0); // 20000, 2000000
    sleep(1);
    free(m1);
    free(m3);
    free(m4);
    // malloc_stats(); malloc_info(0, stdout);
    return 0;
}
$ gcc test_malloc_trim.c -o test_malloc_trim && strace ./test_malloc_trim
write(1, "Test started\n", 13Test started
) = 13
brk(0) = 0xcca000
brk(0xcef000) = 0xcef000
write(1, "1:0xcca010 2:0xccee40 3:0xcd8a90"..., 441:0xcca010 2:0xccee40 3:0xcd8a90 4:0xcec320
) = 44
madvise(0xccf000, 36864, MADV_DONTNEED) = 0
rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0
rt_sigaction(SIGCHLD, NULL, {SIG_DFL, [], 0}, 8) = 0
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
nanosleep({1, 0}, 0x7ffffafbfff0) = 0
brk(0xceb000) = 0xceb000
So, there is a madvise with MADV_DONTNEED for 9 pages after the malloc_trim(0) call, when there was a hole of 40008 bytes in the middle of the heap (only the page-aligned interior of that hole can be given back: 9 pages of 4096 bytes = 36864 bytes, matching the madvise call in the trace).
... utilizing madvise(x, MADV_DONTNEED) to release memory back to the
operating system.
madvise(x, MADV_DONTNEED) does not release memory. man madvise:
MADV_DONTNEED
Do not expect access in the near future. (For the time being,
the application is finished with the given range, so the kernel
can free resources associated with it.) Subsequent accesses of
pages in this range will succeed, but will result either in
reloading of the memory contents from the underlying mapped file
(see mmap(2)) or zero-fill-on-demand pages for mappings without
an underlying file.
So, the usage of madvise(x, MADV_DONTNEED) does not contradict man malloc_trim's statement:
This function cannot release free memory located at places other than the top of the heap.
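To illustrate the semantics quoted above, here is a minimal sketch (not from the original answer) using an anonymous mapping: after MADV_DONTNEED the page contents are dropped and the RSS can shrink, but the mapping itself remains valid and subsequent reads return zero-filled pages.

#define _DEFAULT_SOURCE          /* for MAP_ANONYMOUS and madvise */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 4096;
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(p, "hello");
    madvise(p, len, MADV_DONTNEED);  /* page contents dropped, mapping kept */

    /* The address is still usable; the kernel supplies a fresh zero page. */
    printf("after MADV_DONTNEED: first byte = %d\n", p[0]);  /* prints 0 */

    munmap(p, len);
    return 0;
}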

Valgrind has reported "memory leak" when I compiled my program with "-pg" enabled?

I get a memory leak reported by Valgrind when I compile the following simple code with -pg enabled.
#include <iostream>
#include <boost/filesystem.hpp>
#define BOOST_FILESYSTEM_VERSION 3

using boost::filesystem::path;
using namespace std;

int main() {
    path ptDir;
    ptDir = "/home/foo/bar";
    if (true == is_directory((const path &)ptDir)) {
        cout << "ptDir: " << ptDir << endl;
    }
}
The full compile command was as follows.
g++ -pg -g test.cpp -lboost_system -lboost_filesystem
The command for running valgrind is:
valgrind --gen-suppressions=all --track-origins=yes --error-limit=no --leak-check=full --show-reachable=yes -v --trace-children=yes --track-fds=yes --log-file=vg.log ./a.out
Then valgrind gave me a memory leak error.
==9598== HEAP SUMMARY:
==9598== in use at exit: 4,228 bytes in 1 blocks
==9598== total heap usage: 136 allocs, 135 frees, 17,984 bytes allocated
==9598==
==9598== Searching for pointers to 1 not-freed blocks
==9598== Checked 130,088 bytes
==9598==
==9598== 4,228 bytes in 1 blocks are still reachable in loss record 1 of 1
==9598== at 0x402A5E6: calloc (in /usr/lib/valgrind/vgpreload_memcheck-x86-linux.so)
==9598== by 0x4260F44: monstartup (gmon.c:134)
==9598== by 0xBED6B636: ???
==9598== by 0x4E45504E: ???
==9598==
{
<insert_a_suppression_name_here>
Memcheck:Leak
fun:calloc
fun:monstartup
obj:*
obj:*
}
==9598== LEAK SUMMARY:
==9598== definitely lost: 0 bytes in 0 blocks
==9598== indirectly lost: 0 bytes in 0 blocks
==9598== possibly lost: 0 bytes in 0 blocks
==9598== still reachable: 4,228 bytes in 1 blocks
==9598== suppressed: 0 bytes in 0 blocks
==9598==
==9598== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
==9598== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
Is this correct? I'm using Ubuntu 12.04/ Kernel 3.2.0-32-generic
Valgrind FAQ:
"still reachable" means your program is probably ok -- it didn't free some
memory it could have. This is quite common and often reasonable. Don't use
--show-reachable=yes if you don't want to see these reports.
The fact that the allocation comes from:
by 0x4260F44: monstartup (gmon.c:134)
indicates that it's a side effect of -pg, which is nothing you can do anything about. My suggestion: don't mix -pg and valgrind.
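A minimal sketch of that suggestion, reusing the flags from the question (the output names are just examples): build one binary for profiling and a separate one, without -pg, for the Valgrind run.
$ g++ -pg -g test.cpp -lboost_system -lboost_filesystem -o app_prof   # for gprof
$ g++ -g test.cpp -lboost_system -lboost_filesystem -o app_vg         # for valgrind
$ valgrind --leak-check=full ./app_vg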

RES != CODE + DATA in the output information of the top command, why?

What 'man top' says is: RES = CODE + DATA
q: RES -- Resident size (kb)
The non-swapped physical memory a task has used.
RES = CODE + DATA.
r: CODE -- Code size (kb)
The amount of physical memory devoted to executable code, also known as the 'text resident set' size or TRS.
s: DATA -- Data+Stack size (kb)
The amount of physical memory devoted to other than executable code, also known as the 'data resident set' size or DRS.
But when I run 'top -p 4258', I get the following:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ CODE DATA COMMAND
258 root 16 0 3160 1796 1328 S 0.0 0.3 0:00.10 476 416 bash
1796 != 476+416
why?
ps:
linux distribution:
linux-iguu:~ # lsb_release -a
LSB Version: core-2.0-noarch:core-3.0-noarch:core-2.0-ia32:core-3.0-ia32:desktop-3.1-ia32:desktop-3.1-noarch:graphics-2.0-ia32:graphics-2.0-noarch:graphics-3.1-ia32:graphics-3.1-noarch
Distributor ID: SUSE LINUX
Description: SUSE Linux Enterprise Server 9 (i586)
Release: 9
Codename: n/a
kernel version:
linux-iguu:~ # uname -a
Linux linux-iguu 2.6.16.60-0.21-default #1 Tue May 6 12:41:02 UTC 2008 i686 i686 i386 GNU/Linux
I'll explain this with the help of an example of what happens when a program allocates and uses memory. Specifically, this program:
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <string.h>
int main(){
int *data, size, count, i;
printf( "fyi: your ints are %d bytes large\n", sizeof(int) );
printf( "Enter number of ints to malloc: " );
scanf( "%d", &size );
data = malloc( sizeof(int) * size );
if( !data ){
perror( "failed to malloc" );
exit( EXIT_FAILURE );
}
printf( "Enter number of ints to initialize: " );
scanf( "%d", &count );
for( i = 0; i < count; i++ ){
data[i] = 1337;
}
printf( "I'm going to hang out here until you hit <enter>" );
while( getchar() != '\n' );
while( getchar() != '\n' );
exit( EXIT_SUCCESS );
}
This is a simple program that asks you how many integers to allocate, allocates them, asks how many of those integers to initialize, and then initializes them. For a run where I allocate 1250000 integers and initialize 500000 of them:
$ ./a.out
fyi: your ints are 4 bytes large
Enter number of ints to malloc: 1250000
Enter number of ints to initialize: 500000
Top reports the following information:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ SWAP CODE DATA COMMAND
<program start>
11129 xxxxxxx 16 0 3628 408 336 S 0 0.0 0:00.00 3220 4 124 a.out
<allocate 1250000 ints>
11129 xxxxxxx 16 0 8512 476 392 S 0 0.0 0:00.00 8036 4 5008 a.out
<initialize 500000 ints>
11129 xxxxxxx 15 0 8512 2432 396 S 0 0.0 0:00.00 6080 4 5008 a.out
The relevant information is:
DATA CODE RES VIRT
before allocation: 124 4 408 3628
after 5MB allocation: 5008 4 476 8512
after 2MB initialization: 5008 4 2432 8512
After I malloc'd 5MB of data, both VIRT and DATA increased by ~5MB, but RES did not. RES did increase after I touched 2MB of the integers I allocated, but DATA and VIRT stayed the same.
VIRT is the total amount of virtual memory used by the process, including what is shared and what is over-committed. DATA is the amount of virtual memory used that isn't shared and that isn't code-text. I.e., it is the virtual stack and heap of the process. RES is not virtual: it is a measurement of how much memory the process is actually using at that specific time.
So in your case, the large inequality CODE + DATA < RES is most likely due to the shared libraries loaded by the process. In my example (and yours), SHR + CODE + DATA is a closer approximation to RES.
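One way to check this (a sketch, assuming the pmap utility is available on the system; the PID is the one from the question) is to look at the per-mapping resident sizes, where each loaded shared library contributes to RES:
$ pmap -x 4258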
Hope this helps.
There's a lot of hand-waving and voodoo associated with top and ps. There are many articles (rants?) online about the discrepancies. E.g., this and this.
This explanation was terrific for resolving some of my queries, thanks! Meanwhile, let me try to add what I have gathered while learning about Linux memory management. If I have misunderstood anything, please correct me!
Modern OS process concepts are based on virtual memory. The virtual memory system includes RAM plus swap, so I think most of the memory figures related to processes refer to virtual memory, except where noted otherwise.
Any virtual address (page) allocated to a process is in one of the following states:
a) allocated, but not yet mapped to any physical memory (something like COW)
b) allocated and mapped to physical memory
c) allocated and mapped to swap space.
The fields output by the top command:
a) VIRT -- all virtual memory that the process has the right to access, whether it is mapped to physical memory, mapped to swap, or not mapped at all.
b) RES -- the virtual addresses that are mapped to physical memory and still reside in RAM.
c) SWAP -- the virtual addresses that are mapped but have been swapped out to swap space.
d) SHR -- the shared memory available to the process.
e) CODE + DATA -- CODE can be in state b) or c), and DATA can be in any of the three states; there is also a field called USED covering states b) and c).
So the calculation might look like:
a) VIRT = RES (VM in memory) + SWAP (VM in swap) + VM unmapped (DATA, SHR?)
b) USED = RES + SWAP
c) SWAP = CODE (VM in swap) + DATA (VM in swap) + SHR (VM in swap?)
d) RES = CODE (VM in memory) + DATA (VM in memory) + SHR (VM in memory?)
At least the DATA segment still has a "DATA (VM unmapped)" part; this can be observed from the malloc example above. That's a little different from the man page of top, which says "DATA: The amount of physical memory devoted to other than executable code, also known as the Data Resident Set size or DRS". Thanks again.
So the amount of CODE + DATA + SHR is usually larger than RES, because at least the unmapped part of DATA is actually counted in "DATA", unlike what the man page claims.
Regards,
