Memory allocation during Linux booting?

I have tried to search for this topic on Google and on this site, but I can't find a proper answer.
I am trying to allocate a big contiguous block of memory (a few MB) at a set physical address during the Linux boot process, but I am still not clear on where I should place my call to alloc_bootmem. I am running Linux on an ARM processor.
AFAIK, one way is to create a driver which contains a call to alloc_bootmem and then compile that driver directly into the kernel.
Another method is to add the alloc_bootmem call somewhere in the Linux kernel source itself.
The last method that I think exists is to create a settings file (boot.rc? I'm not sure of the name) so that during boot Linux will reserve the memory I want allocated.
If there is a clear way to do this, or a link to an answer, I would really appreciate everyone's help. The basic question is: where should I call alloc_bootmem so that it works during boot?
Thanks,
Shahril

Take a look at http://lwn.net/Kernel/LDD3/ (chapter 8); it explains memory allocation during the early boot stages.
Further information about boot-time memory allocation can be found here:
https://www.kernel.org/doc/gorman/html/understand/understand022.html
The bootmem interface is used for allocating large memory chunks during system boot, and it deals in physical rather than virtual memory. Once boot finishes and the normal page allocator takes over, bootmem can no longer be used, AFAIK.
If you are looking for a large contiguous memory allocation you should probably use a different allocator; take a look at:
http://lwn.net/Articles/396702/
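As a rough illustration of the "add it to the kernel source" method, here is a minimal sketch, assuming an older kernel that still provides the bootmem API; the address, size, and function name are hypothetical, and such a call would live in the ARM board's early setup code. On newer kernels the equivalent early allocator is memblock (e.g. memblock_reserve()).

/* Hypothetical sketch for an older kernel with the bootmem API.
 * Called from early arch setup (e.g. the ARM machine init code),
 * before the buddy allocator takes over. The address and size are
 * made up and must match a real region of physical RAM. */
#include <linux/bootmem.h>
#include <linux/init.h>

#define MYDEV_PHYS_BASE 0x30000000UL   /* hypothetical physical address */
#define MYDEV_SIZE      (4UL << 20)    /* 4 MB */

static void __init mydev_reserve(void)
{
        /* Take the region out of the free pool so the kernel
         * never hands it out to anyone else. */
        reserve_bootmem(MYDEV_PHYS_BASE, MYDEV_SIZE, BOOTMEM_DEFAULT);
}

Note that reserve_bootmem() pins a region at a fixed address, while alloc_bootmem() lets the kernel pick the address; for a set physical address, reserving is the usual approach.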

Related

At boot, how does the Linux kernel allocate memory for its own memory allocator?

I'm building a small x86-64 kernel that I'm writing completely from scratch. I'm starting with a very simple memory allocator where I iterate over page structs to find a free page and break the loop when I find one. When I began writing the allocator, I stumbled upon an issue.
In my kernel, I managed to get a memory map of RAM using a UEFI call (GetMemoryMap()). By iterating over the memory map, I found that I would have around 1021 MB of usable memory (out of 1024 MB). I test my kernel on QEMU.
I read here and there that the Linux kernel holds a page struct for every page in the system. I'm guessing that the Linux kernel's memory allocator uses the page structs to determine which pages are free and which aren't. By using a suitable data structure and an efficient algorithm, it attempts to find a free page as fast as possible, either for itself or to hand to a user-mode process.
If my assumption is correct, the issue arises here. If the Linux kernel's memory allocator relies on the page struct to work, how does it allocate memory for the page structs themselves?
I thought of a simple algorithm where I simply start at the physical address of the first usable memory region. If this memory region has enough room for all the page structs (representing the whole of memory), I stop there. Otherwise, I use the second memory region, and so on. This seems quite simple, but I wanted to know how the Linux kernel handles the issue.
Even if my above assumption is wrong, the memory allocator probably requires some memory to work. At boot, how does the Linux kernel allocate memory for its own memory allocator?
I guess you can google some stuff about the memblock allocator, which manages physical memory during the kernel's boot phase.
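To make that concrete: memblock is itself a very simple early allocator, seeded directly from the firmware memory map, and Linux uses it to place things like the page-struct array before the buddy allocator exists. A from-scratch kernel can do the same with a tiny bump allocator; the sketch below uses a simplified, hypothetical memory-map entry type, not the real UEFI structures, and assumes physical addressing during early boot.

/* Hypothetical boot-time bump allocator, in the spirit of Linux's
 * memblock: carve early allocations (e.g. the page-struct array)
 * straight out of the firmware memory map, before any real
 * allocator exists. align must be a power of two. */
#include <stdint.h>
#include <stddef.h>

struct mem_region { uint64_t base, len; int usable; };

static struct mem_region *map;   /* filled in from GetMemoryMap() */
static size_t map_entries;
static uint64_t next_free;       /* bump pointer inside current region */
static size_t cur;               /* index of current region */

void *boot_alloc(uint64_t size, uint64_t align)
{
    for (; cur < map_entries; cur++, next_free = 0) {
        if (!map[cur].usable)
            continue;
        uint64_t start = next_free ? next_free : map[cur].base;
        start = (start + align - 1) & ~(align - 1);   /* align up */
        if (start + size <= map[cur].base + map[cur].len) {
            next_free = start + size;   /* bump past the allocation */
            return (void *)(uintptr_t)start;
        }
    }
    return NULL;   /* out of early memory */
}

Once the page-struct array has been carved out this way, the real allocator can take over; Linux's memblock similarly retires after handing its remaining free ranges to the buddy allocator.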

Large physically contiguous memory area

For my M.Sc. thesis, I have to reverse-engineer the hash function Intel uses inside its CPUs to spread data among Last Level Cache slices in Sandy Bridge and newer generations. To this aim, I am developing an application in Linux, which needs a physically contiguous memory area in order to make my tests. The idea is to read data from this area, so that they are cached, probe if older data have been evicted (through delay measures or LLC miss counters) in order to find colliding memory addresses and finally discover the hash function by comparing these colliding addresses.
The same procedure has already been used in Windows by a researcher, and proved to work.
To do this, I need to allocate an area that must be large (64 MB or more) and fully cacheable, i.e. without the DMA-friendly uncached mapping attributes. How can I perform this allocation?
To have a full control over the allocation (i.e., for it to be really physically contiguous), my idea was to write a Linux module, export a device and mmap() it from userspace, but I do not know how to allocate so much contiguous memory inside the kernel.
I have heard about the Linux Contiguous Memory Allocator (CMA), but I don't know how it works.
Applications don't see physical memory; a process has its own address space in virtual memory. Read about the MMU: what is contiguous in virtual space might not really be physically contiguous, and vice versa.
You might perhaps want to lock some memory using mlock(2)
But your application will be scheduled, and other processes (or scheduled tasks) will dirty your CPU cache. See also sched_setaffinity(2); a sketch combining it with mlock(2) follows below.
(and even kernel code might perhaps be preempted)
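A minimal userspace sketch of those two suggestions together, assuming a Linux system with glibc (buffer size and CPU number are arbitrary):

/* Lock a buffer into RAM with mlock(2) and pin the process to one
 * CPU with sched_setaffinity(2), per the suggestions above. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 64UL << 20;             /* 64 MB, as in the question */
    void *buf = malloc(len);
    if (!buf || mlock(buf, len) != 0) {  /* may need RLIMIT_MEMLOCK raised */
        perror("mlock");
        return 1;
    }

    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                    /* pin to CPU 0 */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    /* ... run the cache-probing experiments over buf here ... */
    return 0;
}

Note that mlock() only guarantees the pages stay resident in RAM; as said above, it does not make them physically contiguous.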
This page on Kernel Newbies has some ideas about memory allocation, but the maximum for get_free_pages looks like 8 MiB. (That is indeed a compile-time constraint: the limit comes from the kernel's MAX_ORDER constant.)
Since this would be all-custom, you could explore the mem= boot parameter of the Linux kernel. It limits the amount of memory the kernel manages, and you can party all over the remaining memory without anyone knowing. Heck, if you boot a busybox system, you could probably do mem=32M, but even mem=256M should work if you're not booting a GUI.
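If you go the mem= route, one common (and hacky) way to reach the hidden memory from userspace is to mmap /dev/mem at the hidden physical offset. This is only a sketch under assumptions: it needs root, a kernel that allows mapping ordinary RAM through /dev/mem (CONFIG_STRICT_DEVMEM often forbids this), and the addresses are hypothetical.

/* Sketch: reach RAM hidden from the kernel with mem=512M by mapping
 * /dev/mem at the first hidden physical address. The file is opened
 * without O_SYNC so the mapping can stay cacheable, which is what
 * the question requires. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>

#define HIDDEN_BASE (512UL << 20)   /* first byte above mem=512M */
#define HIDDEN_LEN  (64UL << 20)    /* map 64 MB of it */

int main(void)
{
    int fd = open("/dev/mem", O_RDWR);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    void *p = mmap(NULL, HIDDEN_LEN, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, HIDDEN_BASE);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* p now covers HIDDEN_LEN bytes that are physically contiguous,
     * because the kernel never manages that range. */
    return 0;
}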
You will also want to look into the Offline Scheduler. It "unplugs" a CPU from Linux so you can have full control over ALL code running on it. (Some parts of this are already in the mainline kernel, and maybe all of it is.)

RAM analysis on Linux

I want to get a map of allocated memory in RAM on a running Linux system.
I am looking to find the utilization of memory at a given time, as well as the specifics of allocation for user processes, kernel modules, and the kernel itself.
This is very, very hard to get right because of shared memory, caching, and demand paging.
You can indeed use /proc/PID/maps, as the previous answer stated, to learn that glibc, as an example, occupies a certain range of virtual memory in a certain process, but part of that is shared between all processes mapping that library (the code section) and part isn't (the dynamic linker tables, for example). Part of this memory space might not be in RAM at all (paged out to disk), and copy-on-write semantics mean that the memory picture at the next moment might be very different.
Add to that the sophisticated use Linux makes of caching (the page and buffer caches being the major ones), parts of which can be evicted at the kernel's whim (clean I/O cache pages) while some cannot (e.g. tmpfs pages), and it gets really hairy really quickly.
In short, there is no one good answer for getting a true view of what uses RAM, and for what, on a Linux system. The best answer I know of is pagemap and its related tools; read all about it here: http://lwn.net/Articles/230975/
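As a concrete starting point, here is a hedged sketch of reading /proc/self/pagemap to translate one virtual address to a physical frame number, following the format described in the article above (64-bit entries; bit 63 = page present, bits 0-54 = PFN). On modern kernels the PFN field reads back as zero without root.

/* Translate a virtual address of the calling process to a PFN via
 * /proc/self/pagemap. One 64-bit entry per virtual page. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long pagesize = sysconf(_SC_PAGESIZE);
    int dummy = 42;                        /* some page we own */
    uintptr_t vaddr = (uintptr_t)&dummy;

    int fd = open("/proc/self/pagemap", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    uint64_t entry;
    off_t offset = (vaddr / pagesize) * sizeof(entry);
    if (pread(fd, &entry, sizeof(entry), offset) != sizeof(entry)) {
        perror("pread");
        return 1;
    }

    if (entry & (1ULL << 63))              /* present bit */
        printf("vaddr %#lx -> PFN %#llx\n", (unsigned long)vaddr,
               (unsigned long long)(entry & ((1ULL << 55) - 1)));
    else
        puts("page not present");
    return 0;
}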
You can find it out by checking every process's memory mapping:
cat /proc/<PID>/maps
and for the overall memory state:
cat /proc/meminfo

Is it possible to allocate a certain sector of RAM under Linux?

I have recently discovered some faulty RAM, and despite having already found that out, I would like to try a much simpler approach: write a program that allocates the faulty regions of RAM and never releases them. It might not work well if they get allocated before the program runs, but it'd be much easier to reboot on failure than to build a kernel with patches.
So the question is:
How to write a program that would allocate given sectors (or the pages containing given sectors)
and, if possible, report whether it was successful.
This will be problematic. To understand why, you have to understand the relation between physical and virtual memory.
On any modern operating system, programs get a very large address space to themselves, with the remainder of the address space being used for the OS itself. Other programs are simply invisible: there's no address at which they can be found. How is this possible? Simple: processes use virtual addresses. A virtual address does not correspond directly to physical RAM. Instead, there's an address translation table, managed by the OS. When your process runs, the table only contains mappings for the RAM that's allocated to you.
Now, that implies that the OS decides what physical RAM is allocated to your program. It can (and will) change that at runtime. For instance, swapping is implemented using the same mechanism. When swapping out, a page of RAM is written to disk and its mapping deleted from the translation table. When you try to use the virtual address, the OS detects the missing mapping, restores the page from disk to RAM, and puts back a mapping. It's unlikely that you get back the same page of physical RAM, but the virtual address doesn't change during the whole swap-out/swap-in. So, even if you happened to allocate a page of bad memory, you couldn't keep it. Programs don't own RAM; they own a virtual address space.
Now, Linux does offer some specific kernel functions that allocate memory in a slightly different way, but it seems that you want to bypass the kernel entirely. You can find a much more detailed description in http://lwn.net/images/pdf/LDD3/ch08.pdf
Check out BadRAM: it seems to do exactly what you want.
Well, it's not an answer on how to write a program, but it fixes the issue without compiling a kernel:
Use the memmap= or mem= parameters:
http://gquigs.blogspot.com/2009/01/bad-memory-howto.html
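For example (addresses hypothetical), booting with memmap=16M$0x20000000 tells the kernel to treat the 16 MB of physical memory starting at 0x20000000 as reserved, so the faulty pages are never handed out; note that the $ usually needs escaping in the bootloader configuration.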
I will edit this answer when I get it running and give details.
The thing to do is to write your own kernel module, which can allocate memory at a given physical address, and make it non-swappable with mlock(2).
I've never tried it. No warranty.

Dynamic memory management under Linux

I know that under Windows there are API functions like GlobalAlloc(), which allocate memory and return a handle; the handle can then be locked to get a pointer, and unlocked again. While unlocked, the system can move the piece of memory around when it runs low on space, optimizing memory usage.
My question is: is there something similar under Linux, and if not, how does Linux optimize its memory usage?
Those Windows functions come from a time when all programs ran in the same address space, in real mode. Linux, and modern versions of Windows, run programs in separate address spaces, so the OS can move them around in RAM by remapping which physical address a particular virtual address resolves to in the page tables. There's no need to burden the programmer with such low-level details.
Even on Windows, it's no longer necessary to use such functions except when interacting with a small number of old APIs. I believe Raymond Chen's blog and book have some discussions of the topic if you are interested in more detail. Eg here's part 4 of a series on the history of GlobalLock.
Not sure what the Linux equivalent is, but in AT&T UNIX there are "scatter/gather" memory management functions in the memory manager of the core OS. In a virtual memory operating environment there are no absolute addresses, so applications have no equivalent function. The executable loader (which loads an executable file into memory, where it becomes a process) uses addresses handed out by the memory manager, all tracked as virtual memory blocks in its page table (which contains the physical memory addresses). The bottom line is that your application's physical memory layout is likely never linear or directly accessible.
