Calculating the memory of a process using the proc file system - linux

I am writing a small process-monitor script in Perl that reads values from the proc file system. Right now I can fetch the number of threads, the process state, and the number of bytes read and written, using the /proc/[pid]/status and /proc/[pid]/io files. Now I want to calculate the memory usage of a process. After searching, I learned that memory usage is reported in /proc/[pid]/statm, but I still can't figure out which fields from that file are needed to calculate the memory usage. Can anyone help me with this? Thanks in advance.

You likely want resident or size. From the kernel.org documentation for /proc/[pid]/statm:

size        total program size (the whole program, including parts never swapped in)
resident    resident set size (the pages in RAM at the current moment; this does not include pages swapped out)
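For example, a minimal Perl sketch (the page size is queried via POSIX sysconf rather than assumed to be 4 KB) that reports both values for a given pid:

#!/usr/bin/perl
# Read the first two fields of /proc/<pid>/statm: total program size
# and resident set size, both counted in pages.
use strict;
use warnings;
use POSIX qw(sysconf _SC_PAGESIZE);

my $pid = shift or die "usage: $0 <pid>\n";
open my $fh, '<', "/proc/$pid/statm" or die "cannot open statm: $!";
my ($size, $resident) = split ' ', scalar <$fh>;
close $fh;

my $page_kb = sysconf(_SC_PAGESIZE) / 1024;  # usually 4
printf "size:     %d kB\n", $size     * $page_kb;
printf "resident: %d kB\n", $resident * $page_kb;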

It is extremely difficult to know what the "memory usage" of a process is. VM size and RSS are known, measurable values.
But what you probably want is something else. In practice, "VM size" seems too high and RSS often seems too low.
The main problems are:
Multiple processes can share the same pages. You can add up the RSS of all running processes, and end up with much more than the physical memory of your machine (this is before kernel data structures are counted)
Private pages belonging to the process can be swapped out. Or they might not be initialised yet. Do they count?
How exactly do you count memory-mapped file pages? Dirty ones? Clean ones? MAP_SHARED or MAP_PRIVATE ones?
So you really need to think about what counts as "memory usage".
It seems to me that logically:
Private pages which are not shared with any other processes (NB: private pages can STILL be copy-on-write!) must count even if swapped out
Shared pages should count divided by the number of processes sharing them, e.g. a page shared by two processes counts as half for each
File-backed pages which are resident can count in the same way
File-backed non-resident pages can be ignored
If the same page is mapped more than once into the address-space of the same process, it can be ignored the 2nd and subsequent time. This means that if proc 1 has page X mapped twice, and proc 2 has page X mapped once, they are both "charged" half a page.
I don't know of any utility which does this. It seems nontrivial though, and involves (at least) reading /proc/pid/pagemap and possibly some other /proc interfaces, some of which are root-only.
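Since this answer was written, the kernel has grown something close to that accounting: the Pss ("proportional set size") field of /proc/<pid>/smaps charges each resident page to a process divided by the number of processes sharing it. A minimal sketch that sums it (run as the process owner or root):

#!/usr/bin/perl
# Sum the Pss (proportional set size) entries in /proc/<pid>/smaps.
# Shared resident pages are charged fractionally to each sharer.
use strict;
use warnings;

my $pid = shift or die "usage: $0 <pid>\n";
open my $fh, '<', "/proc/$pid/smaps" or die "cannot open smaps: $!";
my $pss_kb = 0;
while (<$fh>) {
    $pss_kb += $1 if /^Pss:\s+(\d+)\s+kB/;
}
close $fh;
printf "PSS: %d kB\n", $pss_kb;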

Another (less simple, but more precise) possibility would be to parse the /proc/123/maps file, perhaps by using the pmap utility. It gives you information about the "virtual memory" (i.e. the address space of the process).
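If you'd rather parse it yourself, each line of /proc/<pid>/maps begins with the address range and permission flags. A small sketch that totals the address-space size per permission set (these are virtual sizes, not resident memory):

#!/usr/bin/perl
# Total the sizes of the mappings in /proc/<pid>/maps, grouped by
# their permission flags (e.g. r-xp for code, rw-p for data).
use strict;
use warnings;
no warnings 'portable';  # 64-bit addresses trip the 32-bit hex warning

my $pid = shift or die "usage: $0 <pid>\n";
open my $fh, '<', "/proc/$pid/maps" or die "cannot open maps: $!";
my %by_perm;
while (<$fh>) {
    my ($start, $end, $perm) = /^([0-9a-f]+)-([0-9a-f]+)\s+(\S+)/ or next;
    $by_perm{$perm} += hex($end) - hex($start);
}
close $fh;
printf "%s  %8d kB\n", $_, $by_perm{$_} / 1024 for sort keys %by_perm;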

Finding memory location in running application

In the good old DOS 5.0 days I used a resident program to find (and modify) the memory location of a program variable, typically lives or ammunition in games (yes, cheating). It took memory snapshots to disk and diffed them; you could also narrow the search using greater-than/less-than comparisons. Then it could pin the value in place, and so on.
If it's possible, how can I do something similar on current Linux (64-bit)? Is there such a tool? I was trying radare2 for tracking the calls, but the binary is stripped and I'm getting lost.
Thanks.
The memory of a Linux process can be examined and modified through the /proc/<PID>/mem pseudo-file. You can discover where the different parts of the process live by reading /proc/<PID>/maps and other similar files.
The problem you'll face is that many things have changed since the good old DOS days. Back then, with just a few tens of kilobytes for your program, global variables were the norm, and those are easy to find.
But now, with hundreds of megabytes, most programs will use dynamic memory, complex hierarchies of classes, virtual functions... and that will make your cheats a lot harder.
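As a concrete starting point, here is a sketch of the "peek" half: it seeks to an address inside /proc/<pid>/mem and dumps a few bytes. On modern kernels this requires ptrace permission for the target (same user with a permissive /proc/sys/kernel/yama/ptrace_scope, or root); opening the file '+<' instead would also let you poke modified values back:

#!/usr/bin/perl
# Dump <len> bytes at <hex-addr> from another process's memory via
# /proc/<pid>/mem. Pick a valid address from /proc/<pid>/maps first.
use strict;
use warnings;
no warnings 'portable';  # allow 64-bit hex addresses

my ($pid, $addr, $len) = @ARGV;
die "usage: $0 <pid> <hex-addr> <len>\n" unless defined $len;

open my $mem, '<', "/proc/$pid/mem" or die "cannot open mem: $!";
binmode $mem;
sysseek $mem, hex($addr), 0 or die "seek failed: $!";
defined sysread($mem, my $buf, $len) or die "read failed: $!";
print unpack('H*', $buf), "\n";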
Also, the pmap command will show you the memory regions (start addresses and sizes) used by the process.
For example, if I had a process with id 123:
pmap 123
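If your pmap (from procps) supports it, the extended format also breaks out resident and dirty sizes per mapping, which is usually more interesting than the raw address-space sizes:

pmap -x 123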

Difference between memory usage in WHM/cPanel and Linux

If I go to WHM and see my server's memory usage, it says that only 16% of memory is in use.
But when I connect to the server over SSH and run "free -m", it shows that 80% is in use. Why is that? I want to know the exact memory usage of each running application, like MySQL, Apache, etc.
How do I view that?
Thanks
As they say, "It's Complicated".
Linux uses unused memory for disk buffering and caching. It speeds things up. But you may need to look at the -/+ buffers/cache line of free.
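On older procps versions that line appears directly in the output (newer versions replace it with an "available" column):

free -m

The "used" figure on the -/+ buffers/cache line excludes buffers and cache, and is much closer to what applications actually consume than the raw "used" figure on the Mem: line.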
'ps' can show you, for any given process or for all processes, the %cpu, %mem, cumulative CPU time, rss (resident set size: the non-swapped physical memory a process is using), size (a very rough estimate of the swap space that would be required if the process were to dirty all writable pages and then be swapped out), vsize (the virtual memory usage of the entire process: vm_lib + vm_exe + vm_data + vm_stack), and much, much more.
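For example, a quick invocation pulling a few of those fields for a hypothetical pid 123:

ps -o pid,pcpu,pmem,rss,vsz,cputime,comm -p 123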
For any given process, you can cat /proc/$PID/status -- it's human readable -- and check out the VmSize, VmLck, VmRSS, VmData, VmStk, VmExe, VmLib, and VmPTE values, along with others...
But that's just for starters... Processes can allocate memory but not use it. (Memory can be allocated, but the memory pages are not created/issued until they're actually used. That whole on-demand thing.)
Processes can map in hardware space, showing up as using a large quantity of memory that's not actually coming from system RAM. (X servers are known to sometimes do this; it's some wonky stuff involving kernel drivers...)
There's the executable, which is usually a memory-mapped file, meaning that the parts paged in take up RAM, but when they're dropped from memory they never take up swap space (they can simply be re-read from the file on disk).
Processes can have other memory-mapped files...
There are shared libraries, where the same RAM pages are used by multiple programs concurrently.
So we have to ask, as far as memory goes, what exactly counts and what doesn't?

What does the Linux /proc/meminfo "Mapped" topic mean?

What does the Linux /proc/meminfo "Mapped" topic mean? I have seen several one-liners that tell me it is the "Total size of memory in kilobytes that is mapped by devices or libraries with mmap." But I have now spent almost twenty hours searching the 2.6.30.5 kernel source code trying to confirm this statement, and I have been unable to do so -- indeed I see some things which seem to conflict with it.
The "Mapped" count is held in global_page_state[NR_FILE_MAPPED]. The comment near the declaration of NR_FILE_MAPPED says: "Pagecache pages mapped into pagetables. Only modified from process context."
Aren't all of the pages referred to by meminfo's "Cached" topic file-backed? Doesn't that mean that all these pages must be "Mapped"? I've looked at a few dozen meminfo listings, from several different architectures, and always the "Mapped" value is much smaller than the "Cached" value.
At any given time most of memory is filled with executable images and shared libraries. Looking at /proc/pid/smaps, I see that all of these are mapped into VMAs. Are all of these mapped into memory using mmap()? If so, why is "Mapped" so small? If they aren't mapped into memory using mmap(), how do they get mapped? Calls on handle_mm_fault, which is called by get_user_pages and various architecture-dependent page-fault handlers, increment the "Mapped" count, and they seem to do so for any page associated with a VMA.
I've looked at the mmap() functions of a bunch of drivers. Many of these call vm_insert_page or remap_vmalloc_range to establish their mappings, and these functions do increment the "Mapped" count. But a good many other drivers seem to call remap_pfn_range, which, as far as I can tell, doesn't increment the "Mapped" count.
It's the other way around. Everything in Mapped is also in Cached - Mapped is pagecache data that's been mapped into process virtual memory space. Most pages in Cached are not mapped by processes.
The same page can be mapped into many different pagetables - it'll only count in Mapped once, though. So if you have 100 processes running, each with 2MB mapped from /lib/i686/cmov/libc-2.7.so, that'll still only add 2MB to Mapped.
I think the intention is that it counts the number of pages that are mapped from files. In my copy of the source (2.6.31), it is incremented in page_add_file_rmap, and decremented in page_remove_rmap if the page to be removed is not anonymously mapped. page_add_file_rmap is, for example, invoked in __do_fault, again in case the mapping is not anonymous.
So it looks all consistent to me...
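As a quick sanity check of that relationship, here is a minimal sketch that reads both fields from /proc/meminfo; Mapped should always come out smaller than Cached:

#!/usr/bin/perl
# Compare the Cached and Mapped fields of /proc/meminfo. Mapped
# (pagecache mapped into process address spaces) is a subset of Cached.
use strict;
use warnings;

my %mi;
open my $fh, '<', '/proc/meminfo' or die "cannot open meminfo: $!";
while (<$fh>) {
    $mi{$1} = $2 if /^(\w+):\s+(\d+)\s+kB/;
}
close $fh;
printf "Cached: %d kB  Mapped: %d kB (%.1f%% of Cached)\n",
    $mi{Cached}, $mi{Mapped}, 100 * $mi{Mapped} / $mi{Cached};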

What are the exact conditions based on which Linux swaps process(s) memory from RAM to a swap file?

My server has 8 GB of RAM and 8 GB configured as swap. I have memory-intensive apps running, and during their peak loads I see swap usage increase; approximately 1 GB of swap gets used.
I have another server with 4 GB of RAM and 8 GB of swap, running similar memory-intensive apps, but there swap usage is negligible: around 100 MB.
I was wondering what the exact conditions are, or a rough formula, based on which Linux swaps process memory from RAM to the swap file.
I know it's partly based on the swappiness factor. What else does it depend on? The swap file size? Any pointers to Linux kernel documentation or source code explaining this would be great.
I've seen a lot of people posting subjective explanations of what this does. Here is hopefully a fuller answer.
In the split-LRU reclaim code of post-2.6.28 Linux, swappiness is a multiplier used to adjust the fraction that determines how much reclaim pressure builds up on each of the two LRUs (file-backed and anonymous).
So, for example, on a system with no free memory left, the value of the existing memory is measured by how much of it is listed as 'Active' and how often pages are promoted back to active after falling onto the inactive list.
An LRU with many promotions/demotions of pages between active and inactive is in heavy use.
Typically file-backed storage is cheaper and safer to evict when you're running out of memory, so it is automatically given a weight of 200, against 0 for swap-backed (anonymous) memory, when the kernel multiplies this fraction; a higher weight means that list is reclaimed more aggressively.
What swappiness does is modify these weights: the swappiness value you set (default 60) is subtracted from the file weight and added to the anon weight. The default therefore weights file memory at 140 (200 - 60) and anonymous memory at 60 (0 + 60), so the kernel applies roughly 2.3 times as much reclaim pressure to the page cache as to anonymous memory. On a typical Linux system that has used up all its memory, the page cache has to be considerably more active than anonymous memory before anonymous pages are swapped out in its favour.
If you set swappiness to 100, anon gets a weight of 100 and file memory gets a weight of 100 (200 - 100), leaving both LRUs equally weighted. Thus, on a file-heavy system that benefits from page cache, anonymous memory that is less active than the page cache will be swapped to disk to make room for extra page cache.
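To experiment with this, swappiness can be read and changed at runtime (the sysctl change is temporary and needs root; persist it via /etc/sysctl.conf):

cat /proc/sys/vm/swappiness
sysctl -w vm.swappiness=10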
Linux (or any other OS) divides memory up into pages (typically 4 KB). Each of these pages represents a chunk of memory. Usage information is maintained for each page: whether it is free or in use (part of some process), whether it has been accessed recently, what kind of data it contains (process data, executable code, etc.), its owner, and so on. These pages can also be broadly divided into two categories: filesystem pages, i.e. the page cache (where all data read from or written to your filesystem resides), and pages belonging to processes.
When the system is running low on memory, the kernel starts evicting pages based on their usage. Using a list of pages sorted by recency of access is a common way of determining which pages can be evicted (the Linux kernel has such a list too).
During swapping, the Linux kernel needs to decide on a trade-off when removing pages from memory. If it evicts filesystem pages too aggressively, more reads are required to fetch those pages back from the filesystem when they are needed. However, if it swaps out process pages too aggressively it can hurt interactivity, because when the user returns to a swapped-out process, its pages have to be read back from disk. See the references below for a good discussion of this.
By setting swappiness = 0, you are telling the kernel to avoid swapping out pages belonging to processes for as long as possible. When setting swappiness = 100 instead, you tell the kernel to swap out process pages more aggressively. To tune your system, try changing the swappiness parameter in steps of 10, monitoring performance and pages swapped in/out at each setting using the "vmstat" command, and keep the setting that gives you the best results. Remember to do this testing during peak usage hours. :)
For database applications, swappiness = 0 is generally recommended. (Even then, test different settings on your systems to arrive at a good value.)
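For the monitoring step, an illustrative invocation: the si and so columns report pages swapped in and out during each 5-second interval:

vmstat 5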
References:
http://www.linuxvox.com/2009/10/what-is-the-linux-kernel-parameter-vm-swappiness/
http://www.pythian.com/news/1913/

Find out how many pages of memory a process uses on linux

I need to find out how many pages of memory a process allocates.
Each page is 4096 bytes, but I'm having trouble locating the correct value for the process's memory usage. When I look in the gnome-system-monitor there are a few values to choose from under Memory Maps.
Thanks.
The point of this is to divide the memory usage by the page count and verify the page size.
It's hard to figure out the exact amount of memory allocated: there are pages shared with other processes (read-only parts of libraries), never-used memory allocated by brk and anonymous mmap, mmapped files that are never fully fetched from disk because the program only touches a small part of them, swapped-out pages, dirty pages waiting to be written back to disk, and so on.
If you want to deal with all this complexity and figure out the true count of pages, the detailed information is available in /proc/<pid>/smaps, and there are tools, like mem_usage.py or smem.pl (easily googlable), to turn it into a more-or-less usable summary.
This would be the "Resident Set Size", assuming your process doesn't use swap.
Note that a process may allocate far more memory (the "Virtual Memory Size"), but as long as it doesn't write to that memory, it is not backed by physical storage, be it RAM or disk.
Some system tools, like top, display a huge value for "swap" for each process. This is of course completely wrong: the value is just the difference between VSZ and RSS, which mostly consists of those allocated-but-unused memory pages.
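To close the loop on "divide the memory usage by the page count and verify the page size": VmRSS in /proc/<pid>/status is reported in kB, while the resident field of /proc/<pid>/statm is reported in pages, so dividing one by the other recovers the page size. A minimal sketch:

#!/usr/bin/perl
# Verify the page size: VmRSS (kB, from status) divided by the
# resident field of statm (pages) should be ~4096 bytes on most systems.
use strict;
use warnings;

my $pid = shift or die "usage: $0 <pid>\n";

open my $st, '<', "/proc/$pid/status" or die "cannot open status: $!";
my ($rss_kb) = map { /^VmRSS:\s+(\d+)/ ? $1 : () } <$st>;
close $st;

open my $sm, '<', "/proc/$pid/statm" or die "cannot open statm: $!";
my (undef, $resident_pages) = split ' ', scalar <$sm>;
close $sm;

printf "page size ~= %d bytes\n", $rss_kb * 1024 / $resident_pages;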
