I am using Lucene's MMapDirectory to keep index files in memory-mapped files. According to the output of the pmap command, only part of these files actually resides in physical memory. I would like to add more RAM to the server and make sure that all of the files are completely in memory. However, the extra RAM will simply be shared across the whole system (page cache and processes) rather than dedicated to these files. On a Linux system, can I reserve some physical memory specifically for these files (which are about 5GB in total)?
Thanks in advance for your answers.
I'm afraid there aren't any decent out-of-the-box solutions available at the moment.
You could use RAMDirectory, which is closest to what you are looking for, but it is not efficient (too much RAM gets allocated, GC slowness, etc.). There's LUCENE-3659 to improve this, but it's not ready yet.
You could look at ramfs/tmpfs, but both of them are volatile (the index goes away after switching the machine off), and you could also end up in situations where the same data held in RAM gets cached in RAM a second time.
More info:
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
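That said, if you are willing to work at the OS level outside Lucene, the usual way to keep a file's pages resident is to map it and pin the pages with mlock(2) (this is what vmtouch -l does). A minimal C sketch, assuming RLIMIT_MEMLOCK is large enough for your ~5GB of index files:

/* Sketch: pin a file's pages in RAM with mmap + mlock.
 * Assumes a high enough RLIMIT_MEMLOCK (or CAP_IPC_LOCK);
 * error handling kept minimal. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);

    void *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    if (mlock(p, st.st_size) != 0)          /* pin every page in RAM */
        perror("mlock");
    else
        printf("%lld bytes locked in RAM\n", (long long)st.st_size);

    pause();                                /* keep the mapping (and the lock) alive */
    return 0;
}

Since mlock on a shared file mapping pins the underlying page-cache pages, a pinning helper like this should also benefit the JVM process that maps the same files through MMapDirectory.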
In my understanding, mmap'ing a file that fits into RAM will be like having the file in memory.
Say that we have 16G of RAM, and we first mmap a 10G file that we use for a while. This should be fairly efficient in terms of access. If we then mmap a second 10G file, will that cause the first one to be swapped out? Or parts of it? If so, when will this happen? At the mmap call, or on accessing the memory area of the newly loaded file?
And if we want to access the memory of the pointer for the first file again, will that make it swap the file back in again? So, say we alternate reading between memory corresponding to the first file and the second file, will that lead to disastrous performance?
Lastly, if any of this is true, would it be better to mmap several smaller files?
As has been discussed, your file will be accessed in pages; on x86_64 (and IA32) architectures, a page is typically 4096 bytes. So very little, if any, of the file will be loaded at mmap time. The first time you access some page in either file, the access will generate a page fault and the kernel will load that part of your file. The kernel may prefetch pages, so more than one page may be loaded; whether it does this depends on your access pattern.
In general, your performance should be good if your working set fits in memory. That is, if you're only regularly accessing 3G of data across the two files, then as long as you have 3G of RAM available to your process, things should generally be fine.
On a 64-bit system there's no reason to split the files, and you'll be fine if the parts you need tend to fit in RAM.
Note that if you mmap an existing file, swap space will not be required to read that file back in; when a mapping is backed by a file on the filesystem, the kernel can re-read pages from that file rather than from swap. However, if you specify MAP_PRIVATE in your call to mmap, pages you modify become private copies that are never written back to the file, so swap space may be required to hold them.
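To see the lazy behaviour for yourself, here is a rough C sketch (data.bin is just a placeholder name) that uses mincore(2) to count how many pages of a mapping are resident before and after touching them. Right after mmap, usually few pages are resident (unless the file is already in the page cache); after touching, all of them are.

/* Sketch: observe demand paging on a mapped file with mincore(2). */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

static size_t resident_pages(void *addr, size_t len, size_t page)
{
    size_t n = (len + page - 1) / page, count = 0;
    unsigned char vec[n];
    mincore(addr, len, vec);                   /* one residency byte per page */
    for (size_t i = 0; i < n; i++)
        count += vec[i] & 1;
    return count;
}

int main(void)
{
    size_t page = sysconf(_SC_PAGESIZE);       /* typically 4096 */
    int fd = open("data.bin", O_RDONLY);       /* placeholder file name */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    printf("resident right after mmap: %zu pages\n",
           resident_pages(p, st.st_size, page));

    volatile char sum = 0;
    for (off_t off = 0; off < st.st_size; off += page)
        sum += p[off];                         /* touch one byte per page */

    printf("resident after touching:   %zu pages\n",
           resident_pages(p, st.st_size, page));

    munmap(p, st.st_size);
    close(fd);
    return 0;
}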
Your question does not have a definitive answer, as swapping in and out is handled by your kernel, and each kernel will have a different implementation (and Linux itself offers different profiles depending on your usage: RT, desktop, server…).
Generally speaking, though, whatever you load into memory is handled in pages, so your mmap'ed file is brought into (and dropped out of) RAM page by page, moving between the various levels of memory (CPU caches, RAM and, for dirty anonymous pages, swap).
So if you load two 10GB files, only parts of each will be resident in RAM at any given time; clean file-backed pages are simply evicted and re-read from the file rather than written to swap, and the kernel will try to keep in RAM the pages you're likely to use next.
What that means is that if you do truly random access to a few bytes of data in both files alternately, you should expect awful performance; if you access contiguous chunks sequentially from both files alternately, you should expect decent performance.
You can read more about kernel paging and virtual memory theory here:
https://0xax.gitbooks.io/linux-insides/content/Theory/Paging.html
https://en.wikipedia.org/wiki/Paging
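If you already know which of the two access patterns you'll have, you can tell the kernel with madvise(2), which steers the read-ahead behaviour described above. A rough C sketch, where segments.bin stands in for one of the 10GB files:

/* Sketch: hint the kernel about the access pattern before reading. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("segments.bin", O_RDONLY);   /* placeholder file name */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Sequential scan coming up: enable aggressive read-ahead.
     * For scattered lookups you would use MADV_RANDOM instead, and
     * MADV_WILLNEED asks the kernel to start faulting pages in now. */
    madvise(p, st.st_size, MADV_SEQUENTIAL);

    long long sum = 0;
    for (off_t i = 0; i < st.st_size; i++)
        sum += p[i];                           /* contiguous, prefetch-friendly */
    printf("checksum: %lld\n", sum);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}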
I want to get the map of allocated memory in RAM running on Linux.
I am looking to find the utilization of memory at a given time as well as specifics of allocation in terms of user process and kernel modules and kernel itself.
This is very hard to get right because of shared memory, caching and on-demand paging.
You can indeed use /proc/PID/maps, as the previous answer stated, to learn that glibc, as an example, occupies a certain range of virtual memory in a certain process, but part of that is shared between all processes mapping that library (the code section) and part isn't (the dynamic linker tables, for example). Part of this memory space might not be in RAM at all (paged out to disk), and copy-on-write semantics mean that the memory picture a moment later might be very different.
Add to that the sophisticated use Linux makes of caching (the page and buffer caches being the major ones), parts of which can be evicted at the kernel's whim (clean I/O cache pages) while other parts cannot (e.g. tmpfs pages), and it gets really hairy really quickly.
In short, there is no single good answer that gives a true view of what uses RAM, and for what, in a Linux system. The best answer I know of is pagemap and the related tools; read all about it here: http://lwn.net/Articles/230975/
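To give a flavour of the pagemap interface from that article: every virtual page of a process has one 64-bit entry in /proc/PID/pagemap, where bit 63 says the page is present in RAM and bit 62 says it is in swap. A rough self-inspection sketch (illustration only):

/* Sketch: check whether a given virtual address of the current process
 * is resident in RAM or in swap, via /proc/self/pagemap.
 * See http://lwn.net/Articles/230975/ for the full entry format. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);
    int some_variable = 42;                        /* something to look up */
    uintptr_t vaddr = (uintptr_t)&some_variable;

    int fd = open("/proc/self/pagemap", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    uint64_t entry;
    off_t offset = (vaddr / page) * sizeof(entry); /* one 8-byte entry per page */
    pread(fd, &entry, sizeof(entry), offset);

    printf("present in RAM: %d, in swap: %d\n",
           (int)((entry >> 63) & 1), (int)((entry >> 62) & 1));
    close(fd);
    return 0;
}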
You can find this out by checking every process's memory mapping
cat /proc/<PID>/maps
and for overall memory state
cat /proc/meminfo
If I go to WHM and see my server's memory usage, it says that only 16% of memory is in use.
But when I connect to the server using SSH and run the command "free -m", it shows that 80% is in use. Why is that? I want to know the exact memory usage of all running applications like MySQL, Apache, etc.
How do I view that?
Thanks
As they say, "It's Complicated".
Linux uses unused memory for disk buffering and caching, which speeds things up. So to see how much memory applications are really using, you may need to look at the -/+ buffers/cache line of free.
'ps' can show you, for any given process, or for all processes, the %cpu, %mem, cumulative cpu-time, rss (resident set size, the non-swapped physical memory that a process is using), size (very approximate amount of swap space that would be required if the process were to dirty all writable pages and then be swapped out), vsize (virtual memory usage of entire process (vm_lib + vm_exe + vm_data + vm_stack)), and much much more.
For any given process, you can cat /proc/$PID/status -- it's human readable -- and check out the VmSize, VmLck, VmRSS, VmData, VmStk, VmExe, VmLib, and VmPTE values, along with others...
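For example, a quick C sketch (just an illustration; the field names are exactly the ones listed above) that pulls VmSize and VmRSS out of /proc/self/status:

/* Sketch: print VmSize and VmRSS of the current process
 * by scanning /proc/self/status. Values are reported in kB. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/self/status", "r");
    if (!f) { perror("fopen"); return 1; }

    char line[256];
    while (fgets(line, sizeof(line), f)) {
        if (strncmp(line, "VmSize:", 7) == 0 ||
            strncmp(line, "VmRSS:", 6) == 0)
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}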
But that's just for starters... Processes can allocate memory but not use it. (Memory can be allocated, but the memory pages are not created/issued until they're actually used. That whole on-demand thing.)
Processes can map in hardware address space, showing up as using a large quantity of memory that's not actually coming from system RAM. (X servers are known to sometimes do this; it's some wonky stuff involving kernel drivers...)
There's the executable itself, which is usually a memory-mapped file. Meaning that the parts that are paged in take up RAM, but when they are paged out they never take up swap space, since the file itself is the backing store.
Processes can have other memory-mapped files...
There are shared libraries, where the same physical RAM pages are used by multiple programs concurrently.
So we have to ask, as far as memory goes, what exactly counts and what doesn't?
Linux noob question:
If I have 500MB of RAM, and 500MB of swap space, can the OS and processes then use 1GB of memory?
In other words, is the total amount of memory available to programs and the OS the total of the physical memory size and swap size?
I'm trying to figure out which SNMP counters to query, but need to understand how Linux uses virtual memory a little better first.
Thanks
Actually, it IS essentially correct, but your "virtual" memory does NOT reside beside your "physical memory" (as Matthew Scharley stated).
Your "virtual memory" is an abstraction layer covering both "physical" (as in RAM) and "swap" (as in hard-disk, which is of course as much physical as RAM is) memory.
Virtual memory is, in essence, an abstraction layer. Your program always addresses a "virtual" address, which your OS translates to an address in RAM or on disk (which needs to be loaded into RAM first) depending on where the data resides. So your program never has to worry about where the data physically lives.
Nothing is ever quite so simple anymore...
Memory pages are lazily allocated. A process can malloc() a large quantity of memory and never use it. So on your 500MB_RAM + 500MB_SWAP system, I could -- at least in theory -- allocate 2 gig of memory off the heap and things will run merrily along until I try to use too much of that memory. (At which point whatever process couldn't acquire more memory pages gets nuked. Hopefully it's my process. But not always.)
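A quick illustration of that lazy allocation (a sketch assuming default overcommit settings; the 2 gig figure is from the example above):

/* Sketch: malloc() succeeds long before physical pages are committed.
 * Watch VmRSS in /proc/<pid>/status to see RSS grow only as pages
 * are touched. Assumes default vm.overcommit_memory settings. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    size_t size = 2UL * 1024 * 1024 * 1024;    /* "allocate" 2 gig */
    char *p = malloc(size);
    if (!p) { perror("malloc"); return 1; }
    printf("malloc of 2 gig succeeded, pid %d; RSS is still tiny\n", (int)getpid());
    sleep(30);                                 /* time to inspect VmRSS */

    for (size_t i = 0; i < size; i += 4096)    /* now actually touch the pages */
        p[i] = 1;
    printf("pages touched; RSS now reflects real usage\n");
    sleep(30);

    free(p);
    return 0;
}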
Individual processes are limited to 4 gig as a hard address limitation on 32-bit systems. Even when you have more than 4 gig of RAM in the machine and you're using that bizarre segmented 36-bit atrocity-from-hell addressing scheme (PAE), individual processes are still limited to only 4 gigs. Some of that 4 gigs has to go to shared libraries and program code, so you're down to 2-3 gigs of stack + heap as an ADDRESSING limitation.
You can mmap files in, effectively giving you more memory. It basically acts as extra swap. I.e., rather than loading a program's binary code and data into memory and then swapping it out to the swapfile, the file is just mmapped. As needed, pages are paged into RAM directly from the file.
You can get into some interesting stuff with sparse data and mmapped sparse files. I've seen X-windows claim enormous memory usage when in fact it was only using up a tiny bit.
BTW: "free" might help you. As might "cat /proc/meminfo" or the Vm lines in /proc/$PID/status. (Especially VmData and VmStk.) Or perhaps "ps up $PID"
Although it's mostly true, it's not entirely correct. For a particular process, the environment you run it in may limit the memory available to that process. Check the output of ulimit -v as well.
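If you want to check that limit from inside a program rather than from the shell, ulimit -v corresponds to RLIMIT_AS; a small sketch:

/* Sketch: print the address-space limit that `ulimit -v` controls. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_AS, &rl) != 0) { perror("getrlimit"); return 1; }

    if (rl.rlim_cur == RLIM_INFINITY)
        printf("virtual memory: unlimited\n");
    else
        printf("virtual memory limit: %llu kB\n",
               (unsigned long long)rl.rlim_cur / 1024);
    return 0;
}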
Yes, this is essentially correct. The actual numbers might be (very) marginally lower, but for all intents and purposes, if you have x physical memory and y virtual memory (swap in Linux), then you have x + y memory available to the operating system and any programs running on it.
At work we have an application to play 2K (2048*1556px) OpenEXR film sequences. It works well, apart from with sequences that are over 3GB (quite common), when it has to unload old frames from memory, despite the fact that all machines have 8-16GB of memory (which is addressable via the Linux BIGMEM stuff).
The frames have to be cached in memory to play back in realtime. The OS is a several-year-old 32-bit Fedora distro (upgrading to 64-bit is not possible for the foreseeable future). The address-space limitation is 3GB per process.
Basically, is it possible to cache more than 3GB of data in memory, somehow? My initial idea was to spread the data between multiple processes, but I've no idea if this is possible..
One possibility may be to use mmap. You would map/unmap different parts of your data into the same virtual memory region. You could only have one set mapped at a time, but as long as there was enough physical memory, the data should stay resident.
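A rough sketch of that windowing idea, assuming the frames live in one big file (frames.dat and the 512MB window size are made-up values): map a window of the file, and when playback moves on, unmap it and map the next offset. The pages you have already read stay in the page cache after munmap (given enough free RAM), so remapping them later is cheap.

/* Sketch: keep only a sliding window of a huge file mapped at a time,
 * so a 32-bit process can work through far more than 3GB of frame data.
 * Compile with -D_FILE_OFFSET_BITS=64 so off_t can address offsets
 * beyond 2GB on a 32-bit system. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define WINDOW_BYTES (512UL * 1024 * 1024)     /* 512MB window, illustrative */

static void *win = NULL;

/* Map the window starting at byte `offset` (must be page-aligned). */
static void *map_window(int fd, off_t offset)
{
    if (win)
        munmap(win, WINDOW_BYTES);             /* drop the previous window */
    win = mmap(NULL, WINDOW_BYTES, PROT_READ, MAP_SHARED, fd, offset);
    return win == MAP_FAILED ? NULL : win;
}

int main(void)
{
    int fd = open("frames.dat", O_RDONLY);     /* placeholder file name */
    if (fd < 0) { perror("open"); return 1; }

    for (off_t off = 0; off < 4 * (off_t)WINDOW_BYTES; off += WINDOW_BYTES) {
        char *w = map_window(fd, off);
        if (!w) { perror("mmap"); break; }
        /* ... decode/display the frames that live in this window ... */
        printf("window at offset %lld mapped\n", (long long)off);
    }
    close(fd);
    return 0;
}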
How about creating a RAM drive and loading the file into that ... assuming the RAM drive supports the BIGMEM stuff for you.
You could use multiple processes: each process loads a view of the file as a shared memory segment, and the player process then maps the segments in turn as needed.
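A sketch of the loader side of that idea using POSIX shared memory (the segment name and chunk size are made up; on older glibc you need to link with -lrt):

/* Sketch: one loader process publishes a chunk of frames as a named
 * POSIX shared memory segment; the player opens and maps the same name. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define CHUNK_BYTES (1UL * 1024 * 1024 * 1024)  /* 1GB per loader process */

int main(void)
{
    /* Loader side: create the segment and fill it. */
    int fd = shm_open("/frames_chunk0", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    ftruncate(fd, CHUNK_BYTES);

    char *buf = mmap(NULL, CHUNK_BYTES, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    /* ... read decoded frames into buf ... */

    /* The player process would later do:
     *   int fd = shm_open("/frames_chunk0", O_RDONLY, 0);
     *   mmap(NULL, CHUNK_BYTES, PROT_READ, MAP_SHARED, fd, 0);
     * mapping each chunk in turn as playback reaches it. */
    pause();
    return 0;
}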
My, what an interesting problem :)
(EDIT: Oh, I just read Rob's ram drive post...I got all excited by the problem...but have a bit more to suggest, so I won't delete)
Would it be possible to...
setup a multi-gigabyte ram disk, and then
modify the program to do all its reading from the "disk"?
I'd guess the ram disk part is where all the problems would be, since the size of the ram disk would be OS- and filesystem-dependent. You might have to create multiple ram disks and have your code jump between them. Or maybe you could set up a RAID-0 stripe set over multiple ram disks. Or, if there are still OS limitations and you can afford to drop a couple grand (4k?), set up a hardware RAID-0 stripe set with some of those new blazing-fast solid state drives. Or...
Fun, fun, fun.
Be sure to follow up!
I assume you can modify the application. If so, the easiest thing would be to start the application several times (once for each 3GB chunk of video), have each one hold a chunk of video, and use another program to synchronize them so they each take control of the framebuffer (or other video output) in turn.
The synchronization is going to be a little messy, perhaps, but it can be simplified if each app has its own framebuffer and the sync program points the video controller to the correct framebuffer between frames when switching to the next app.
#dbr said:
There is a review machine with an absurd fiber-channel-RAID-array that can play 2K files direct from the array easily. The issue is with the artist-workstations, so it wouldn't be one $4000 RAID array, it'd be hundreds..
Well, if you can accept a limit of ~30GB, then maybe a single 36GB SSD drive would be enough? Those go for ~US$1k each, I think, and the data rates might be enough. That may well be cheaper than a pure RAM approach. There are smaller sizes available, too. If ~60GB is enough you could probably get away with a JBOD array of 2 for double the cost, and skip the RAID controller. Be sure only to look at the higher-end SSD options; the low end is filled with glorified memory sticks. :P