I need to make the process run in real time as much as possible.
All communication is done via shared memory - memory-mapped files - with no system calls at all; the process busy-waits on shared memory.
The process runs under real-time priority, and all memory is locked with mlockall(MCL_CURRENT|MCL_FUTURE), which succeeds; the process's ulimits are high enough for all of its memory to be locked.
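For reference, the setup described above amounts to roughly the following sketch (the SCHED_FIFO priority value of 80 is an arbitrary choice, not part of the original question):

    #include <sched.h>
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Pin every current and future page so nothing is paged out. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            perror("mlockall");
            return 1;
        }

        /* Switch to a real-time scheduling class. */
        struct sched_param sp = { .sched_priority = 80 };
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
            perror("sched_setscheduler");
            return 1;
        }

        /* ... busy-wait loop over the shared memory goes here ... */
        return 0;
    }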
When I run perf stat -p PID on it, I still see counts of minor page faults.
I tested this both with and without process affinity.
Question:
Is it possible to eliminate page faults entirely - even minor ones?
I solved this problem by switching from memory-mapped files to POSIX shared memory (shm_open) plus memory locking.
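A minimal sketch of that approach (the shared-memory name and size here are placeholders of mine):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    #define SHM_NAME "/rt_channel"   /* placeholder name */
    #define SHM_SIZE 4096

    int main(void)
    {
        /* POSIX shared memory object instead of a file-backed mapping. */
        int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600);
        if (fd < 0) { perror("shm_open"); return 1; }
        if (ftruncate(fd, SHM_SIZE) != 0) { perror("ftruncate"); return 1; }

        void *p = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* mlock faults the pages in now and pins them, so the
           busy-wait loop should take no faults later. */
        if (mlock(p, SHM_SIZE) != 0) { perror("mlock"); return 1; }

        /* ... busy-wait communication over p ... */
        return 0;
    }

On older glibc you may need to link with -lrt.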
If I understand the question correctly, avoiding minor page faults completely isn't possible. In most modern OSes, including Linux, the OS doesn't load all the text and data segments into memory when a program starts. It allocates internal data structures, and pages are faulted in when the text and data are actually needed: a page fault occurs, physical memory is made available to the process, and the page is brought in from the backing store. A minor page fault is precisely one that can be serviced without accessing the backing store, and avoiding even those entirely may not be possible.
Related
I know that when a program first starts, it has massive numbers of page faults in the beginning, since the code is not in memory and thus needs to be loaded from disk.
What happens when a program exits? Does the binary stay in memory? Would subsequent invocations of the program find that the code is already in memory and thus not have page faults (assuming nothing runs in between and pages stuff out to disk)?
It seems like the answer is no, from running some experiments on my Linux machine. I ran the same program over and over again and observed the same number of page faults every time. It's a relatively quiet machine, so I doubt stuff is getting paged out between invocations. So, why is that? Why doesn't the executable get to stay in memory?
There are two things to consider here:
1) The content of the executable file is likely kept in the OS cache (disk cache). While that data is still in the OS cache, every read for it will hit the cache and the OS will honor the request without needing to re-read the file from disk.
2) When a process exits, the OS unmaps every memory page mapped to a file and frees the process's memory (in general, it releases every resource allocated by the process, including other resources such as sockets and so on). Strictly speaking, the physical memory may be zeroed, but that is not strictly required (still, the security model of the OS may require zeroing any page that is no longer used - Windows NT, 2000, XP, etc. probably do that; see "Does Windows clear memory pages?"). Another invocation of the same executable creates a brand-new process which maps the same file into memory, but the first access to those pages will still trigger page faults because, in the end, it is a new process with a different memory mapping. So yes, the page faults occur, but they are a lot cheaper for the second instance of the same executable than for the first, since they are served from the page cache rather than from disk.
Of course, this is only about the read-only parts of the executable (the segments/modules containing the code and read-only data).
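One way to watch the fault counts from inside a process is getrusage(2); this little sketch (my own illustration, not from the original answer) prints them:

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rusage ru;
        getrusage(RUSAGE_SELF, &ru);
        /* ru_minflt: faults served from memory (page cache, zero pages);
           ru_majflt: faults that required reading from disk. */
        printf("minor faults: %ld, major faults: %ld\n",
               ru.ru_minflt, ru.ru_majflt);
        return 0;
    }

On a warm page cache, a second run typically shows the same minor-fault count but few or no major faults.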
One may consider another scenario: forking. In this case, every page is marked copy-on-write. When the first write occurs to such a page, a hardware exception is raised and intercepted by the OS memory manager. The OS determines whether the page in question may be written (e.g. whether it is the stack, the heap, or any writable page in general) and, if so, allocates new physical memory and copies the original content before allowing the process to modify the page, in order to preserve the original data for the other process. And yes, there is still another case - shared memory, where the very same physical memory is mapped into two or more processes. In that case the copy-on-write flag is, of course, not set on the memory pages.
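A tiny sketch of the copy-on-write behaviour just described (an illustration assuming nothing beyond standard fork() semantics):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        char *buf = malloc(4096);
        strcpy(buf, "parent data");

        pid_t pid = fork();
        if (pid == 0) {
            /* Child: the first write to this inherited page triggers
               the copy-on-write fault; the parent's copy is untouched. */
            strcpy(buf, "child data");
            printf("child sees:  %s\n", buf);
            _exit(0);
        }
        wait(NULL);
        printf("parent sees: %s\n", buf);  /* still "parent data" */
        free(buf);
        return 0;
    }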
Hope this clarifies what is going on with the memory pages.
What I strongly suspect is that parts - blobs of data - are not promptly erased from RAM unless actually running code requests more RAM. In that case, what probably happens is the OS reusing OS-dependent bits from RAM on the next execution; I think this is true for OS-initiated resources (probably not for all resources, but for some).
Actually, most of your questions are highly implementation-dependent. But for the most widely used OSes:
What happens when a program exits? Does the binary stay in memory?
Yes, but the memory blocks are marked as unused (and thus could be allocated to other processes).
Would subsequent invocations of the program find that the code is already in memory and thus not have page faults (assuming nothing runs in between and pages stuff out to disk)?
No, those blocks are considered empty. Some/all blocks might have been overwritten already.
Why doesn't the executable get to stay in memory?
Why would it stay? When a process is finished, all of its allocated resources are freed.
One of the reasons is that one generally wants to clear everything out on a subsequent invocation in case there was a problem in the previous one.
Plus, the writeable data must be moved out.
That said, some systems do have mechanisms for keeping executables and static data in memory (possibly not Linux). For example, the VMS operating system allows the system manager to install executables and shared libraries so that they remain in memory (paging allowed). The same mechanism can be used to create writable shared memory, allowing interprocess communication and allowing modifications to the memory to remain in memory (possibly paged out).
I'm trying to find any system functionality that would allow a process to allocate "temporary" memory - i.e. memory that is considered discardable by the process and can be taken away by the system when memory is needed, but that lets the process benefit from available memory when possible. In other words, the process tells the system it's OK to sacrifice the block of memory when the process is not using it. Freeing the block is also preferable to swapping it out (it's as expensive, or more expensive, to swap it out than to reconstitute its contents).
Systems (e.g. Linux) have such things in the kernel, like the F/S memory cache. I am looking for something like this, but available to user space.
I understand there are ways to do this from the program, but it's really more of a kernel job to deal with this. To some extent, I'm asking the kernel:
if you need to reduce my residency, or another process's, take these temporary pages away first
if you are taking these temporary pages away, don't swap them out, just unmap them
Specifically, I'm interested on a solution that would work on Linux, but would be interested to learn if any exist for any other O/S.
UPDATE
An example on how I expect this to work:
map a page (over swap). No different from what's available right now.
tell the kernel that the page is "temporary" (for the lack of a better name), meaning that if this page goes away, I don't want it paged in.
tell the kernel that I need the temporary page "back". If the page was unmapped since I marked it "temporary", I am told that happened. If it hasn't, then it starts behaving as a regular page.
Here are the problems with doing that on top of the existing MM:
To make pages not be paged in, I have to map them over nothing. But then they can get paged out at any time, without notice. Testing with mincore() doesn't guarantee that the page will still be there by the time mincore() returns. Using mlock() requires elevated privileges.
So, the closest I can get to this is by using mlock(), and anonymous pages. Following the expectations I outlined earlier, it would be:
map an anonymous, locked page. (MAP_ANON|MAP_LOCKED|MAP_NORESERVE). Stamp the page with magic.
to make the page "temporary", unlock it
when needing the page, lock it again. If the magic is there, it's my data, otherwise it's been lost, and I need to reconstitute it.
However, I don't really need the pages to be locked in RAM while I'm using them. Also, MAP_NORESERVE is problematic if memory is overcommitted.
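Spelled out, the three steps above would look roughly like this (the function names and the magic constant are my own, purely for illustration):

    #include <stdint.h>
    #include <sys/mman.h>

    #define PAGE  4096
    #define MAGIC 0xDEADBEEFCAFEF00DULL   /* arbitrary stamp */

    /* Map one locked anonymous page and stamp it. */
    static void *temp_page_create(void)
    {
        void *p = mmap(NULL, PAGE, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANON | MAP_LOCKED | MAP_NORESERVE,
                       -1, 0);
        if (p == MAP_FAILED)
            return NULL;
        *(uint64_t *)p = MAGIC;
        return p;
    }

    /* Mark the page "temporary": once unlocked, it may be reclaimed. */
    static void temp_page_release(void *p)
    {
        munlock(p, PAGE);
    }

    /* Take the page back. Returns 1 if the data survived, 0 if the
       stamp is gone and the contents must be reconstituted. */
    static int temp_page_reacquire(void *p)
    {
        if (mlock(p, PAGE) != 0)
            return 0;
        return *(uint64_t *)p == MAGIC;
    }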
This is what the VMware ESXi server, a.k.a. the Virtual Machine Monitor (VMM) layer, implements. It is used with virtual machines and is a way to reclaim memory from the virtual machine guests. Guests that have more memory allocated than they actually use are made to release/free it to the VMM so that it can assign it back to the virtual machine guests that are in need of it.
This technique of Memory Reclamation is mentioned in this paper: http://www.vmware.com/files/pdf/mem_mgmt_perf_vsphere5.pdf
Along similar lines, you could implement something similar in your kernel.
I'm not sure I understand exactly what you need. Remember that processes run in virtual memory (their address space is virtual), and that the kernel deals with virtual-to-physical address translation (using the MMU) and with paging. So a page fault can happen at any time. The kernel will choose to page in or page out at arbitrary moments - and will choose which page to swap (only the kernel cares about RAM, and it can page out any physical RAM page at will). Perhaps you want the kernel to tell you when a page has genuinely been discarded. How would the kernel take away temporary memory from your process without your process being notified? The kernel could take away and later give back some RAM... (so you want to know when the given-back memory is fresh)
You might use mmap(2) with MAP_NORESERVE first, then again (on the same memory range) with MAP_FIXED|MAP_PRIVATE. See also mincore(2) and mlock(2)
You can also later use madvise(2) with MADV_DONTNEED or MADV_WILLNEED, etc.
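For instance, MADV_DONTNEED on an anonymous mapping throws the contents away immediately, and the next access faults in a fresh zero-filled page (a small sketch of mine, not a complete solution to the question):

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 4096;
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANON, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        strcpy(p, "expensive-to-rebuild data");

        /* Discard the contents now; for anonymous mappings the next
           access faults in a fresh zero-filled page. */
        madvise(p, len, MADV_DONTNEED);

        printf("after MADV_DONTNEED: '%s'\n", p);  /* prints '' */
        return 0;
    }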
Perhaps you want to mmap some device like /dev/null, /dev/full, /dev/zero or (more likely) write your own kernel module providing a similar device.
GNU Hurd has an external pager mechanism... You cannot yet get exactly that on Linux. (Perhaps consider mmap on some FUSE-mounted file.)
I don't understand what you want to happen when the kernel pages out your memory, and what you want to happen when the kernel pages such a page back in because your process is accessing it. Do you want to get a zeroed page, or a SIGSEGV?
I wrote a daemon that starts some sort of self-healing procedure when the hard drive controller of a computer crashes. The program already works fine, but I have concerns that it (about 18 KB compiled) may not be fully loaded into RAM by the operating system, and that - when I'm really unlucky - some program pages will have to be loaded from disk exactly when the program has to become active and disk accesses are no longer possible.
After all, most of the time the program sits in an endless loop checking that everything is okay, and 95% of the program code is never used. So, I think, the kernel may optimize RAM usage by evicting unused program pages from RAM.
So, my question: does Linux load and keep all program code pages into memory, making it unnecessary to access the hard disk again to run the program code itself, once the program has started?
Technical details: Linux Kernel 2.6.36+, about 1 GB of RAM, Debian 5, no swap space active
I already learned that I can prevent swapping by calling mlockall(MCL_CURRENT | MCL_FUTURE), but I'm wondering whether I really need to update my machines.
No: the program's code pages are memory-mapped into the address space of the process, not much differently from any other mmap(), so if you don't access those pages for a long time, they can eventually be evicted from RAM. To avoid that, just use the mlockall() call.
From the mlockall manual:
mlockall() locks all pages mapped into the address space of the calling process. This includes the pages of the code, data and stack segment, as well as shared libraries, user space kernel data, shared memory, and memory-mapped files. All mapped pages are guaranteed to be resident in RAM when the call returns successfully; the pages are guaranteed to stay in RAM until later unlocked.
So, if locked, the pages will stay there. However, modifying a mounted hard disk partition is always a great risk, regardless of any kind of locks.
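A quick way to verify the lock actually took effect is the VmLck line in /proc/self/status (a sketch of mine, not from the manual):

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            perror("mlockall");
            return 1;
        }

        /* VmLck should now be roughly the size of everything mapped
           into the process, confirming the pages are pinned. */
        char line[256];
        FILE *f = fopen("/proc/self/status", "r");
        while (f && fgets(line, sizeof line, f))
            if (strncmp(line, "VmLck:", 6) == 0)
                fputs(line, stdout);
        if (f) fclose(f);
        return 0;
    }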
If a process runs right after another process quits (for example), the second process may get allocated some of the first process's physical pages. Is it possible that the second process could read some of the first process's data? (A question for Windows and/or Linux OSes.)
Most OSes with a security model (NT-based Windows, most Unixes, Mac OS...) will scrub pages of memory (usually by overwriting with zeros) for precisely this reason. Of course, within a single process you can reuse memory pages without scrubbing.
You can see this in Linux in do_anonymous_page (line 3143 of mm/memory.c in v3.6.6). When a write request comes for a page that has been mapped but not allocated, the kernel calls alloc_zeroed_user_highpage_movable to allocate a zeroed page.
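That zero-fill behaviour is easy to observe from user space (a trivial sketch):

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* A fresh anonymous page: the kernel hands it out zero-filled,
           regardless of which process used the physical frame before. */
        unsigned char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                MAP_PRIVATE | MAP_ANON, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        for (int i = 0; i < 4096; i++)
            if (p[i] != 0) { puts("nonzero byte found!"); return 1; }
        puts("page is all zeros");
        return 0;
    }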
Is there a way to detect thrashing programmatically?
Also, what would be the linux commands to detect which processes are thrashing?
I'm assuming that "thrashing" here refers to a situation where the active memory set of all processes is too big to fit into physical memory. In such a situation every context switch causes reading and writing to disk, and eventually the server may become so thrashed that a hardware reboot is the only way to regain control of the box.
There are global counters pswpin and pswpout in /proc/vmstat - if both of them increase during some short time interval, the box is probably experiencing thrashing problems.
At the process level it's non-trivial, AFAIK. /proc/$pid/status contains some useful stuff, but no swap-in/swap-out counters. From 2.6.34 there is a VmSwap entry (the total amount of swap used), and field #12 in /proc/$pid/stat is the number of major page faults. /proc/$pid/oom_score is also worth looking into. If VmSwap is increasing, and/or the number of major page faults is increasing, and/or oom_score is spectacularly high, then the process is likely to be causing thrashing.
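A minimal sketch of the global check (the counter names are from /proc/vmstat; the interval and threshold are arbitrary choices of mine):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Sum pswpin + pswpout from /proc/vmstat. */
    static long swap_activity(void)
    {
        char key[64];
        long val, total = 0;
        FILE *f = fopen("/proc/vmstat", "r");
        if (!f) return -1;
        while (fscanf(f, "%63s %ld", key, &val) == 2)
            if (!strcmp(key, "pswpin") || !strcmp(key, "pswpout"))
                total += val;
        fclose(f);
        return total;
    }

    int main(void)
    {
        long before = swap_activity();
        sleep(5);
        long delta = swap_activity() - before;
        /* The threshold is arbitrary; sustained nonzero deltas in both
           directions are the real signal. */
        printf("swap pages in+out over 5s: %ld%s\n",
               delta, delta > 100 ? "  (possible thrashing)" : "");
        return 0;
    }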
I wrote a script, thrash-protect - available at https://github.com/tobixen/thrash-protect - that attempts to figure out which processes are causing the thrashing and temporarily suspends them. It has worked out great for me and has saved me from quite a few server reboots.
Update: newer versions of the kernel have useful statistics under /proc/pressure. Also, a computer set up without swap will still start "thrashing" as memory fills up, since the lack of buffer cache space tends to cause excessive read operations from the disk.