I'm testing the CPU's hardware prefetcher.
It is known that hardware prefetching does not cross a page boundary.
I want to make sure that my test works correctly.
Does anybody know how I can change the virtual page size in Linux?
On x86-64, the page sizes supported by the hardware are 4 KB, 2 MB, and (on CPUs with the pdpe1gb feature) 1 GB. 4 KB is used by default; for 2 MB pages, you can use Linux's hugetlb system to allocate them.
I know that some processors these days support 2 MB and 1 GB page sizes. Is it possible to compile the Linux kernel to natively use 2 MB pages instead of the standard 4 KB?
Thanks.
Well, I can say yes and no.
The base page size is fixed by the hardware. What you can do beyond that depends on your patience for the errors and issues you will encounter.
The page size is known and determined by the MMU hardware, so the operating system takes it into account. However, notice that some Linux systems (and hardware!) have hugetlbpage support, and Linux mmap(2) may accept MAP_HUGETLB (but your code should handle processors or kernels without huge-page support, e.g. by calling mmap again without MAP_HUGETLB when the first mmap with MAP_HUGETLB has failed).
You may find these links interesting:
https://www.cloudibee.com/linux-hugepages/
https://forums.opensuse.org/showthread.php/437078-changing-pagesize-kernel
https://linuxgazette.net/155/krishnakumar.html
https://lwn.net/Articles/375096/
I was reading that the number of virtual memory pages equals the number of physical memory frames, and that the frame size and the page size are equal; for example, on my 32-bit system the page size is 4096 bytes.
I was wondering: is there any way to change the page size or the frame size?
I am using Linux. I have searched a lot, and what I found is that we can increase the page size by switching to huge pages. Is there any other way to change (increase or decrease) the page size, or to set a page size of our choice?
(Not coding anything; this is a general question.)
In practice it is (nearly) impossible to "change" the memory page size, since the page size is known and determined by the MMU hardware, so the operating system takes it into account. However, notice that some Linux systems (and hardware!) have hugetlbpage support, and Linux mmap(2) may accept MAP_HUGETLB (but your code should handle processors or kernels without huge-page support, e.g. by calling mmap again without MAP_HUGETLB when the first mmap with MAP_HUGETLB has failed).
From what I read, on some Linux systems you can use hugetlbpage with various sizes. But the sysadmin can restrict it (and some kernels disable it), so your code should always be prepared for a mmap with MAP_HUGETLB to fail.
Even with those "huge pages", the page size is not arbitrary. Use sysconf(_SC_PAGE_SIZE) on POSIX systems to get the standard page size (it is usually 4 KB). See also sysconf(3).
AFAIK, even on systems with the hugetlbpage feature, mmap can be called without MAP_HUGETLB, and the page size (as reported by sysconf(_SC_PAGE_SIZE)) is still 4 KB. Perhaps some recent kernels with unusual configurations use huge pages everywhere, and IIRC some kernels might be configured with a larger base page size (I am not sure about that and might be wrong)...
I allocated some memory using memalign, and I set the last page as a guard page using mprotect(addr, size, PROT_NONE), so that page is inaccessible.
Does the inaccessible page consume physical memory? In my opinion, the kernel can offline the physical pages safely, right?
I also tried madvise(MADV_SOFT_OFFLINE) to manually offline the physical memory, but the call always fails.
Can anybody tell me the internal behavior of the kernel with mprotect(PROT_NONE), and how to offline the physical memory to reduce physical memory consumption?
Linux applications use virtual memory. Only the kernel manages physical RAM. Application code doesn't see the physical RAM.
A region protected with mprotect and PROT_NONE that was never written to won't consume any RAM. Note that mprotect alone does not free pages that were already touched; madvise(MADV_DONTNEED) is what releases them.
You should allocate your region with mmap(2) (maybe you want MAP_NORESERVE). Mixing memalign with mprotect will likely break libc invariants.
Read the madvise(2) man page carefully. MADV_SOFT_OFFLINE may require a specially configured kernel.
It is known that the page size is 4 KB on x86. If we have 64 GB of RAM, then there are 16M page entries, which causes too many TLB misses. On x86 we can enable PAE to access more than 4 GB of memory (and with PAE, large pages are 2 MB each).
Hugetlbfs lets us use huge pages for a performance benefit (e.g. fewer TLB misses), but there are a lot of limitations:
We must use a shared-memory interface to write to hugetlbfs
Not all processes can use it
Reserving memory may fail
So, if we could change the page size to 2 MB or 4 MB, we would get the performance benefit.
I tried several ways to change it, but all failed:
Compiling the kernel with CONFIG_HUGETLBFS: failed
Compiling the kernel with CONFIG_TRANSPARENT_HUGEPAGE and CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS: failed
Could somebody help me?
What is the limit on the virtual address space available to a process?
Is it
the 32-bit vs 64-bit address bus?
the 32-bit vs 64-bit processor?
the secondary storage available?
the maximum swap space configured?
Thanks in advance
Secondary storage / swap space have nothing to do with it, because pages can be mapped into your address space without being allocated. And the same page can be mapped at multiple virtual addresses. ([edit] This is the default behavior, but the vm.overcommit_memory sysctl setting can be used to prevent the mapping of VM pages for which there is no RAM or swap available. Do a search on that sysctl setting for more information.)
The CPU certainly puts an upper limit, and that is essentially the only limit on 64-bit systems. Although note that current x86_64 processors do not actually let you use the entire 64-bit space.
On 32-bit Linux, things get more complicated. The kernel reserves part of each process's 4 GB virtual space for itself: typically 1 GB (the 3G/1G split), though 2G/2G splits have also been used. (If memory serves, that is; I believe these are configurable when the kernel is compiled.) Whether you consider that space "available to a process" is a matter of semantics.
Linux also has a per-process resource limit RLIMIT_AS accessible via setrlimit and getrlimit.