Shared memory marked as virtual memory? - linux

I run a program that allocates 64MB of shared memory for IPC. pmap shows that the 64MB chunk is allocated. However, "top" shows the RES memory of the process is only about 40MB! I conclude the shared memory is counted only as VIRT. But why? Linux still has more than 1GB of RAM available.

Have you actually used any of that 64MB yet? Linux defers allocation.
cf. Does malloc lazily create the backing pages for an allocation on Linux (and other platforms)?

Linux doesn't load all of the memory a process obtains into RAM; it maps pages into RAM only when your program actually touches that block of memory. Here "memory" means both private and shared memory.
I haven't done any experiments to verify the above myself, but I have seen it stated in many places (including on SO), and I do believe it. Just FYI.

Shared memory is, like most if not all of the memory userland programs deal with, virtual. Only active pages need to be mapped to physical (i.e. resident) memory. Doing otherwise would be a waste of resources.
The only exception is when the process specifically locks the pages in RAM with mlock.
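The deferred-allocation behaviour described above can be observed directly. Here is a minimal sketch in Python, assuming a Linux system where `/proc/self/statm` is available (its second field is the resident page count):

```python
import mmap
import os

PAGE = os.sysconf("SC_PAGE_SIZE")

def rss_bytes():
    # Second field of /proc/self/statm is the resident page count (Linux-specific).
    with open("/proc/self/statm") as f:
        return int(f.read().split()[1]) * PAGE

size = 64 * 1024 * 1024                  # 64 MiB, as in the question
before = rss_bytes()
buf = mmap.mmap(-1, size)                # anonymous mapping: VIRT grows, RES barely moves
mapped = rss_bytes()

for off in range(0, size, PAGE):         # touch every page: the kernel now commits RAM
    buf[off] = 1
touched = rss_bytes()

print(mapped - before, touched - before)
```

On a typical system the first delta is small while the second is close to 64 MiB, confirming that pages only become resident on first access.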

Related

Can shared memory be swapped out under memory pressure?

Under high memory pressure, if there are processes with massive shared memory, would Linux swap out these shared memory pages just like regular memory pages?
Does swapping behavior differ for shared memory in any way?
What about when hugepages are being used by the process?
Would that make any difference in swapping policy under memory pressure?

Will Linux programs crash if there is enough ram but not enough virtual memory?

For example, I run the top command and see that my application uses 1MB RES and 1000MB VIRT. Will this program crash if my system just has 128MB RAM and 512MB virtual memory?
If you have enough RAM, you don't actually need any virtual memory on disk. The only time it is used is when you run out of actual memory. The system writes some RAM pages to virtual memory (disk) and then uses that memory for something else, loading those pages back into actual memory only when they are needed (which may in turn require writing some other memory out to free space up). Depending on how the system is configured, it can tell you that it allocated memory for you, but if you never touched it, it never really allocated that memory (real or virtual).
So, if your program is actually using 1MB RES + 1000MB VIRT, it could not fit into less than 1001MB of memory (virtual plus real). But if the system over-promised the memory and never really allocated it for you, then your program can keep running until it actually uses enough memory to run out.

Find exact physical memory usage in Ubuntu/Linux

(I'm new to Linux)
Say I have 1300 MB of memory on an Ubuntu machine. The OS and other default programs consume 300 MB, and 1000 MB is free for my own applications.
I installed my application and configured it to use 700 MB of memory when it starts.
However, I couldn't verify its actual memory usage. I even disabled swap space.
The "VIRT" value shows a huge number, while "RES", "SHR" and "%MEM" show very small values.
It is difficult to find the actual physical memory usage, unlike the "Resource monitor" in Windows, which would say my application is using 700 MB of memory.
Is there any way to find the actual physical memory usage in Ubuntu/Linux?
TL;DR - Virtual memory is complicated.
The best measure of a Linux process's current usage of physical memory is RES.
The RES value represents the sum of all of the process's pages that are currently resident in physical memory. It includes resident code pages and resident data pages. It also includes shared pages (SHR) that are currently RAM resident, though these pages cannot be exclusively ascribed to *this* process.
The VIRT value is the sum of all notionally allocated pages for the process: it includes both pages that are currently RAM resident and pages that are currently swapped out to disk.
See https://stackoverflow.com/a/56351211/1184752 for another explanation.
Note that RES is giving you (roughly) instantaneous RAM usage. That is what you asked about ...
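The same numbers top reports can be read for any process from `/proc/<pid>/status`, where `VmRSS` corresponds to RES and `VmSize` to VIRT. A minimal Linux-specific sketch:

```python
def vm_status(pid="self"):
    """Return VmSize (VIRT) and VmRSS (RES) in KiB for a process (Linux-specific)."""
    stats = {}
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith(("VmSize:", "VmRSS:")):
                key, value = line.split(":")
                stats[key] = int(value.split()[0])   # values are reported in kB
    return stats

info = vm_status()
print(f"VIRT = {info['VmSize']} KiB, RES = {info['VmRSS']} KiB")
```

Running this inside your application (or with its pid) gives roughly the instantaneous RAM usage described above.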
The "actual" memory usage over time is more complicated, because the OS's virtual memory subsystem is typically swapping pages in and out according to demand. So, for example, some of your application's pages may not have been accessed recently, and the OS may then swap them out (to swap space) to free up RAM for other pages required by your application ... or something else.
The VIRT value, while actually representing virtual address space, is a good approximation of total (virtual) memory usage. However, it may be an over-estimate:
Some pages in a process's address space are shared between multiple processes. This includes read-only code segments, pages shared between parent and child processes between vfork and exec, and shared memory segments created using mmap.
Some pages may be set to have illegal access (e.g. for stack red-zones) and may not be backed by either RAM or swap device pages.
Some pages of the address space in certain states may not have been committed to either RAM or disk yet, depending on how the virtual memory system is implemented. (Consider the case where a process requests a huge memory segment and neither reads from it nor writes to it. It is possible that the virtual memory implementation will not allocate RAM pages until the first read or write of a page. And if lazy swap reservation is used, swap pages may not be committed either. But beware that lazy swap reservation can get you into trouble.)
VIRT can also be an under-estimate, because the OS usually reserves swap space for all pages, whether they are currently swapped in or swapped out. So if you count the RAM and swap versions of a given page as separate units of storage, VIRT usually underestimates the total storage used.
Finally, if your real goal is to limit your application to using at most 700 MB (of virtual address space), then you can use ulimit -v ... to do this. If the application tries to request memory beyond its limit, the request fails.
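What `ulimit -v` sets is RLIMIT_AS, which can also be done programmatically. A sketch of the failure mode, assuming a 64-bit Linux system (the 4 GiB cap and 8 GiB request are illustrative values only):

```python
import mmap
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_AS)
try:
    # Cap the process's virtual address space at 4 GiB (illustrative value).
    resource.setrlimit(resource.RLIMIT_AS, (4 * 1024**3, hard))
    try:
        huge = mmap.mmap(-1, 8 * 1024**3)   # an 8 GiB request must now fail...
        failed = False
    except (OSError, MemoryError):
        failed = True                       # ...with ENOMEM, rather than crashing
finally:
    # Restore the original limit so the rest of the process is unaffected.
    resource.setrlimit(resource.RLIMIT_AS, (soft, hard))

print("request beyond the limit failed:", failed)
```

As the answer says: the request beyond the limit fails; the application then has to handle (or not handle) the failed allocation.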

mprotect(addr, size, PROT_NONE) for guard pages and its memory consumption

I allocated some memory using memalign, and I set the last page as a guard page using mprotect(addr, size, PROT_NONE), so that page is inaccessible.
Does the inaccessible page consume physical memory? In my opinion, the kernel can offline the physical pages safely, right?
I also tried madvise(MADV_SOFT_OFFLINE) to manually offline the physical memory, but the call always fails.
Can anybody explain the internal behavior of the kernel with mprotect(PROT_NONE), and how to offline the physical memory to reduce physical memory consumption?
Linux applications use virtual memory. Only the kernel manages physical RAM; application code doesn't see physical RAM.
A segment protected with mprotect & PROT_NONE won't consume any RAM.
You should allocate your segment with mmap(2) (maybe you want MAP_NORESERVE). Mixing memalign with mprotect is likely to break libc invariants.
Read carefully madvise(2) man page. MADV_SOFT_OFFLINE may require a specially configured kernel.
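The mmap-plus-mprotect approach suggested above can be sketched in Python, using ctypes to call mprotect since Python's mmap module does not expose it (Linux-specific; actually touching the guard page would raise SIGSEGV, so the sketch only flips the protection):

```python
import ctypes
import mmap

libc = ctypes.CDLL("libc.so.6", use_errno=True)   # Linux-specific

PAGE = mmap.PAGESIZE
PROT_NONE = 0

# Four-page anonymous mapping; the last page will become the guard page.
buf = mmap.mmap(-1, 4 * PAGE)
addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))

# Revoke all access to the last page. Any later read or write of that page
# faults with SIGSEGV instead of silently overrunning the buffer; the
# PROT_NONE page itself consumes no RAM.
ret = libc.mprotect(ctypes.c_void_p(addr + 3 * PAGE), PAGE, PROT_NONE)
```

The first three pages remain readable and writable as usual; only the guard page traps.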

Allocating memory for process in linux

Dear all, I am using Red Hat Linux. How can I set a maximum amount of memory for a particular process? For example, I need to cap the memory usage of Eclipse alone. Is it possible to do this? Please suggest some solutions.
ulimit -v 102400
eclipse
...gives eclipse 100 MiB of virtual memory.
You can't control actual memory usage; you can only control virtual memory size. Knowing the amount of actual memory used by a single process is extremely complicated (perhaps impossible) on an operating system that supports virtual memory.
Not all memory used appears in the process's virtual address space at a given instant, for example kernel usage, and disc caching. A process can change which pages it has mapped in as often as it likes (e.g. via mmap() ). Some of a process's address space is also mapped in, but not actually used, or is shared with one or more other processes. This makes measuring per-process memory usage a fairly unachievable goal in practice.
And putting a cap on the VM size is not a good idea either, as the process will typically fail (or be killed) if it attempts to use more than the cap.
The right way of doing this in this case (for a Java process) is to set the heap maximum size (via various well-documented JVM startup options). However, experience suggests that you should not set it less than 1Gb.
