I am using Linux + PPC64 where the memory page size is 64KiB. If I were to make two separate 32KiB allocations from within the same process, would that take up a single page in memory or two? Thanks!
The kernel hands out memory in whole pages, so it will back any mapping request smaller than 64 KiB with a full 64 KiB page.
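For example, a minimal sketch (Linux, C) of the distinction: lengths passed to mmap() are rounded up to the page size, so each 32 KiB anonymous mapping consumes a full 64 KiB page of its own, while two 32 KiB malloc() calls are usually packed into the same heap pages by the C library allocator.

    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        long page = sysconf(_SC_PAGESIZE);     /* 65536 on a 64 KiB PPC64 system */

        /* Each 32 KiB request is backed by a whole page of its own. */
        void *a = mmap(NULL, 32 * 1024, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        void *b = mmap(NULL, 32 * 1024, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        printf("page size: %ld, mappings at %p and %p\n", page, a, b);
        munmap(a, 32 * 1024);
        munmap(b, 32 * 1024);
        return 0;
    }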
(I'm new to Linux)
Say I have 1300 MB of memory on an Ubuntu machine. The OS and other default programs consume 300 MB of memory, and 1000 MB is free for my own applications.
I installed my application and configured it to use 700 MB of memory when it starts.
However, I couldn't verify its actual memory usage, even after disabling swap space.
The "VIRT" column shows a huge value, while "RES", "SHR" and "%MEM" show very small values.
It is difficult to find the actual physical memory usage, the way "Resource Monitor" on Windows would tell me that my application is using 700 MB of memory.
Is there any way to find the actual physical memory usage on Ubuntu/Linux?
TL;DR - Virtual memory is complicated.
The best measure of a Linux process's current usage of physical memory is RES.
The RES value represents the sum of all of the process's pages that are currently resident in physical memory. It includes resident code pages and resident data pages. It also includes shared pages (SHR) that are currently RAM resident, though these pages cannot be exclusively ascribed to >>this<< process.
The VIRT value is actually the sum of all notionally allocated pages for the process, and it includes both pages that are currently RAM resident and pages that are currently swapped out to disk.
See https://stackoverflow.com/a/56351211/1184752 for another explanation.
Note that RES is giving you (roughly) instantaneous RAM usage. That is what you asked about ...
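If you want to read those numbers programmatically rather than from top, here is a minimal sketch (Linux, C) that pulls VmRSS (what top calls RES) and VmSize (VIRT) for the current process out of /proc/self/status:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        FILE *f = fopen("/proc/self/status", "r");
        char line[256];
        if (!f)
            return 1;
        /* VmSize corresponds to VIRT, VmRSS to RES in top. */
        while (fgets(line, sizeof line, f)) {
            if (strncmp(line, "VmSize:", 7) == 0 ||
                strncmp(line, "VmRSS:", 6) == 0)
                fputs(line, stdout);
        }
        fclose(f);
        return 0;
    }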
The "actual" memory usage over time is more complicated because the OS's virtual memory subsystem is typically be swapping pages in and out according to demand. So, for example, some of your application's pages may not have been accesses recently, and the OS may then swap them out (to swap space) to free up RAM for other pages required by your application ... or something else.
The VIRT value, while actually representing virtual address space, is a good approximation of total (virtual) memory usage. However, it may be an over-estimate:
Some pages in a process's address space are shared between multiple processes. These include read-only code segments, pages shared between parent and child processes between vfork and exec, and shared memory segments created using mmap.
Some pages may be set to have illegal access (e.g. for stack red-zones) and may not be backed by either RAM or swap device pages.
Some pages of the address space in certain states may not have been committed to either RAM or disk yet ... depending on how the virtual memory system is implemented. (Consider the case where a process requests a huge memory segment and neither reads from nor writes to it. It is possible that the virtual memory implementation will not allocate RAM pages until the first read or write in the page; the sketch below demonstrates this. And if lazy swap reservation is used, swap pages may not be committed either. But beware that you can get into trouble with lazy swap reservation.)
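Here is a minimal sketch (Linux, C, assuming a lazily allocating malloc such as glibc's) of that lazy behaviour: a large block is allocated and VmRSS (RES) is printed before and after touching it. The resident figure typically only grows once the pages are actually written.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Print the VmRSS line of /proc/self/status. */
    static void print_rss(const char *label) {
        FILE *f = fopen("/proc/self/status", "r");
        char line[256];
        while (f && fgets(line, sizeof line, f))
            if (strncmp(line, "VmRSS:", 6) == 0)
                printf("%s %s", label, line);
        if (f)
            fclose(f);
    }

    int main(void) {
        size_t sz = 512UL * 1024 * 1024;    /* 512 MB, hypothetical size */
        char *p = malloc(sz);
        if (!p)
            return 1;
        print_rss("after malloc:");         /* RES barely changes        */
        memset(p, 1, sz);                   /* fault every page in       */
        print_rss("after memset:");         /* RES grows by ~512 MB      */
        free(p);
        return 0;
    }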
VIRT can also be an under-estimate, because the OS usually reserves swap space for all pages ... whether they are currently swapped in or swapped out. So if you count the RAM and swap copies of a given page as separate units of storage, VIRT usually underestimates the total storage used.
Finally, if your real goal is to limit your application to using at most 700 MB (of virtual address space), then you can use ulimit -v ... to do this. If the application tries to request memory beyond its limit, the request fails.
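The same cap can also be set from inside the program with setrlimit(); a minimal sketch (Linux, C, assuming a 700 MB limit) of the equivalent of ulimit -v:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/resource.h>

    int main(void) {
        /* Equivalent of "ulimit -v 716800" (ulimit counts KiB, setrlimit bytes). */
        struct rlimit lim = { 700UL * 1024 * 1024, 700UL * 1024 * 1024 };
        if (setrlimit(RLIMIT_AS, &lim) != 0) {
            perror("setrlimit");
            return 1;
        }

        void *ok  = malloc(100UL * 1024 * 1024);   /* fits under the cap       */
        void *big = malloc(800UL * 1024 * 1024);   /* exceeds it, returns NULL */
        printf("100 MB: %p, 800 MB: %p\n", ok, big);
        free(ok);
        return 0;
    }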
It is known that the page size is 4 KB on x86. If we have 64 GB of RAM, then there are 16M page entries, which will cause too many TLB misses. On x86, we can enable PAE to access more than 4 GB of memory (and the page size could then be split to 2 MB per page?).
Hugetlbfs permits us to use huge pages to get a performance benefit (e.g. fewer TLB misses), but it has a lot of limitations:
You must use the shared memory interface to use hugetlbfs
Not all processes can use it
Reserving the memory may fail
So, if we could change the page size to 2 MB or 4 MB, then we could get the performance benefit.
I tried several ways to change it, but they all failed:
Compiling the kernel with CONFIG_HUGETLBFS: failed
Compiling the kernel with CONFIG_TRANSPARENT_HUGEPAGE and CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS: failed
Could somebody help me?
I have an application where I have to allocate quite a large memory space (hundreds of MB) on Windows, using operator new. The application is 32-bit (we don't use 64-bit for now, even on 64-bit systems) and I enabled the /LARGEADDRESSAWARE linker option to be able to use 4 GB of user address space.
Question: If I need to allocate, say, 450 MB of contiguous memory, does the virtual address space of the process need to have a large enough contiguous hole, and does the physical memory of the system additionally need to be unfragmented? I ask this because I can make sure that my application reserves a large enough contiguous space, but I don't know whether other applications on the system can affect me in this way. Do the OS page tables need to translate contiguous virtual addresses seen by the application into contiguous physical addresses?
If the memory is simply used in your software, then your 450 MB allocation will only need a hole of 450 MB in the virtual address space. It can be satisfied with pages from every corner of the memory system [as long as there is at least 450 MB available somewhere in the system - including swap space].
Your system will get a little bit better performance if the OS is able to allocate the pages in contiguous blocks of 2 MB apiece [using "large pages" of 2 MB at a time]. But the system will fall back to individual 4 KB pages if it needs to.
One of several benefits of a paged memory architecture is that any physical page can be placed at any virtual address. In some systems, for example the Xen virtualization manager in debug mode, pages are INTENTIONALLY allocated out of sequence, to make it easier to detect when the system makes assumptions about memory pages being contiguous.
You don't need to be concerned about contiguity of the physical memory. That's one thing that virtual to physical address translation helps you with. As long as you can reserve a chunk of the address space and back it with physical memory, wherever it happens to be, things are going to work.
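To make that concrete, a minimal sketch in plain C (using malloc rather than operator new, with a hypothetical 450 MB size): the allocation only needs one contiguous range of virtual addresses, and the physical pages that end up backing it when it is touched can be scattered anywhere in RAM.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        size_t sz = 450UL * 1024 * 1024;
        /* Succeeds as long as a 450 MB hole exists in this process's
         * *virtual* address space; the physical backing is assembled
         * page by page and need not be contiguous. */
        char *p = malloc(sz);
        if (!p) {
            fprintf(stderr, "no 450 MB virtual range available\n");
            return 1;
        }
        memset(p, 0, sz);   /* fault the pages in */
        printf("block spans virtual addresses %p .. %p\n",
               (void *)p, (void *)(p + sz));
        free(p);
        return 0;
    }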
Is the page size constant? To be more specific, getconf PAGE_SIZE gives 4096, fair enough. But can this change during a program's runtime? Or is it constant for the entire lifetime of the process? I.e., is it possible for a process to have 1024 AND 2048 AND 4096 page sizes? Let's just talk about virtual page sizes for now. But going further, is it possible for a virtual page to span a physical page of greater size?
It is possible for a process to use more than one page size. On newer kernels this may even happen without notice; see Andrea Arcangeli's transparent huge pages.
Other than that, you can request memory with a different (usually larger) page size via hugetlbfs.
The main reason for having big pages is performance: the TLB in the processor is very limited in size, and fewer but bigger pages mean more hits.
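A minimal sketch (Linux, C) of both routes: querying the base page size with sysconf(), hinting a region for transparent huge pages with madvise(MADV_HUGEPAGE), and requesting an explicit 2 MB huge-page mapping with MAP_HUGETLB (the latter assumes huge pages have been reserved, e.g. via /proc/sys/vm/nr_hugepages):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        /* The base page size of the process (e.g. 4096 on x86). */
        printf("base page size: %ld\n", sysconf(_SC_PAGESIZE));

        /* Transparent huge pages: hint that this anonymous region should
         * be backed by 2 MB pages where possible; the base page size of
         * the process does not change. */
        size_t len = 64UL * 1024 * 1024;
        void *thp = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (thp != MAP_FAILED && madvise(thp, len, MADV_HUGEPAGE) != 0)
            perror("madvise(MADV_HUGEPAGE)");

        /* hugetlbfs: request one explicit 2 MB huge page; this fails
         * unless huge pages have been reserved in advance. */
        size_t huge = 2UL * 1024 * 1024;
        void *p = mmap(NULL, huge, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (p == MAP_FAILED)
            perror("mmap(MAP_HUGETLB)");
        else
            munmap(p, huge);

        if (thp != MAP_FAILED)
            munmap(thp, len);
        return 0;
    }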
In the Linux kernel, mem_map is the array which holds all "struct page" descriptors. Those pages include the 128 MiB of memory in lowmem used for dynamically mapping highmem.
Since the lowmem size is 1 GiB, the mem_map array has only 1 GiB / 4 KiB = 256 Ki entries. If each entry is 32 bytes, then the mem_map array takes 8 MiB of memory. But if we could use mem_map to map all 4 GiB of physical memory (if we had that much physical memory available on x86-32), the mem_map array would occupy 32 MiB, which is not a lot of kernel memory (or am I wrong?).
So my question is: why do we need to use that 128 MiB of lowmem for indirect highmem mapping in the first place? Or, put another way, why not map all of that (up to) 4 GiB of physical memory (if available) into the kernel space directly?
Note: if my understanding of the kernel source above is wrong, please correct. Thanks!
Look Here: http://www.xml.com/ldd/chapter/book/ch13.html
Kernel low memory is the 'real' memory map, addressed with 32-bit pointers on x86.
Kernel high memory is the 'virtual' memory map, addressed with virtual structures on x86.
You don't want to map it all into the kernel address space, because you can't always address all of it, and you need most of your memory for virtual memory segments (virtual, page-mapped process space.)
At least, that's how I read it. Wow, that's a complicated question you asked.
To add to the confusion, chapter 13 talks about some PCI devices not being able to address the 32-bit space, which was the genesis of my previous comment:
On x86, some kernel memory usage is limited to the first gigabyte of memory because of DMA addressing concerns. I'm not 100% familiar with the topic, but there's a compatibility mode for DMA on the PCI bus. That may be what you are looking at.
3.6 GB is not the ceiling when using physical address extension, which is commonly needed on most modern x86 boards, especially with memory hotplug.
Or, put another way, why not map all of that (up to) 4 GiB of physical memory (if available) into the kernel space directly?
One reason is userspace: every userspace process has its own virtual address space. Suppose you have 4 GiB of RAM on x86. If we assume that the kernel owns 1 GiB of that (~800 MiB directly mapped + ~200 MiB for vmalloc), all the other ~3 GiB should be dynamically distributed between the processes running in user space. So how can you map your 4 GiB directly when you have several address spaces?
why do we need zone_highmem on x86?
The reason is the same. The kernel reserves only ~800 MiB for low mem. All other memory will be allocated and connected with a particular virtual address only on demand. For example, if you execute a binary, a new virtual address space will be created and some pages will be allocated for storing your binary's code and data (heap, stack, ...). So the key attribute of high mem is to serve dynamic memory allocation requests; you never know in advance what will be triggered by userspace...
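To illustrate what "not directly mapped" means in practice: a page that lives in ZONE_HIGHMEM has no permanent kernel mapping, so the kernel maps it into its own address space only temporarily when it needs to touch it. A minimal kernel-style sketch (assuming the classic 32-bit-era kmap()/kunmap() API):

    #include <linux/gfp.h>
    #include <linux/highmem.h>
    #include <linux/mm.h>
    #include <linux/string.h>

    static void touch_highmem_page(void)
    {
        /* Allocate a page that may come from ZONE_HIGHMEM. */
        struct page *page = alloc_page(GFP_HIGHUSER);
        if (!page)
            return;

        /* It has no permanent kernel mapping, so map it temporarily ... */
        void *vaddr = kmap(page);
        memset(vaddr, 0, PAGE_SIZE);

        /* ... then drop the temporary mapping and free the page. */
        kunmap(page);
        __free_page(page);
    }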