Limiting how much physical memory the kernel can manage with the "mem" environment variable - linux

I've inherited support for some Linux kernel drivers (an area in which my experience is very limited). My question is as follows. It's an embedded environment and the hardware has 512MB of physical memory. However, the boot parameters that are passed to the kernel limit the memory to 256MB by using the variable linuxMem=mem=256M. In my research of this environment variable, I am of the understanding that
this limits the amount of memory that the kernel can manage to 256MB.
Yet in some application code that runs on my target, I see an open of /dev/mem and a subsequent mmap of the returned file descriptor, where the offset parameter of the mmap call is in the upper 256MB of physical memory.
And things seem to be working fine. So my question is "why does it work if the kernel supposedly does not know about the upper 256MB?"

Strictly speaking, mem=256M is a kernel parameter, not an environment variable. This parameter only tells the kernel to use that much memory; it does not make the system completely blind to the physical memory installed in the machine. It can be used to simulate a system with less physical memory than is actually available, but it is not fully equivalent to opening up your box and pulling out one of the memory chips.
Looking at the docs for this parameter, you can explicitly see that addresses outside the limited range can still be used in some situations, which is why they recommend also using memmap= in some cases. So, you can't allocate memory for your app above the limit, but you can look at what is found at some physical address there, and it seems some device drivers make use of this possibility.
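To make this concrete, here is a minimal userspace sketch of what such application code typically looks like. The physical offset (0x10000000, i.e. 256MB) and the mapping size are assumptions for illustration; real code would use whatever address range the hardware or board design reserves.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Assumed for illustration: map 1MB of physical memory starting at the
       256MB boundary, i.e. just above what mem=256M lets the kernel manage. */
    off_t phys_addr = 0x10000000;
    size_t length = 0x100000;

    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) {
        perror("open /dev/mem");
        return 1;
    }

    void *virt = mmap(NULL, length, PROT_READ | PROT_WRITE, MAP_SHARED,
                      fd, phys_addr);
    if (virt == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    /* virt is a virtual address in this process; the kernel has set up page
       table entries pointing at the requested physical range, even though
       that range lies outside the memory it manages for normal allocations. */
    printf("physical 0x%lx mapped at %p\n", (unsigned long)phys_addr, virt);

    munmap(virt, length);
    close(fd);
    return 0;
}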

mmap() returns virtual addresses, not physical ones.
It's perfectly possible for a device to have only 64MB of memory and for mmap() to map something at a virtual address around 1GB.

Related

Large physically contiguous memory area

For my M.Sc. thesis, I have to reverse-engineer the hash function Intel uses inside its CPUs to spread data among Last Level Cache slices in Sandy Bridge and newer generations. To this aim, I am developing an application in Linux, which needs a physically contiguous memory area in order to make my tests. The idea is to read data from this area, so that they are cached, probe if older data have been evicted (through delay measures or LLC miss counters) in order to find colliding memory addresses and finally discover the hash function by comparing these colliding addresses.
The same procedure has already been used in Windows by a researcher, and proved to work.
To do this, I need to allocate an area that must be large (64 MB or more) and fully cacheable, so without DMA-friendly options in TLB. How can I perform this allocation?
To have a full control over the allocation (i.e., for it to be really physically contiguous), my idea was to write a Linux module, export a device and mmap() it from userspace, but I do not know how to allocate so much contiguous memory inside the kernel.
I heard about the Linux Contiguous Memory Allocator (CMA), but I don't know how it works.
Applications don't see physical memory; a process has some address space in virtual memory. Read about the MMU (what is contiguous in virtual space might not really be physically contiguous, and vice versa).
You might perhaps want to lock some memory using mlock(2)
But your application will be scheduled, and other processes (or scheduled tasks) would dirty your CPU cache. See also sched_setaffinity(2)
(and even kernel code might be perhaps preempted)
This page on Kernel Newbies has some ideas about memory allocation. But the max for get_free_pages looks like 8 MiB (perhaps that's a compile-time constraint?).
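For reference, a minimal kernel-side sketch of that approach, assuming the buddy allocator via __get_free_pages() (the function that page discusses). The helper name and the caller-supplied order are made up for illustration, and the ceiling mentioned above is a compile-time limit (MAX_ORDER).

#include <linux/gfp.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/mm.h>

/* Hypothetical helper: allocate 2^order physically contiguous pages.
   Requests above the MAX_ORDER compile-time limit simply fail, which is
   why this route cannot provide the 64 MB area the question asks for. */
static unsigned long alloc_contig_buffer(unsigned int order)
{
    unsigned long addr = __get_free_pages(GFP_KERNEL, order);

    if (!addr) {
        pr_err("order-%u contiguous allocation failed\n", order);
        return 0;
    }
    pr_info("allocated %lu KiB at physical 0x%llx\n",
            (unsigned long)((PAGE_SIZE << order) / 1024),
            (unsigned long long)virt_to_phys((void *)addr));
    return addr; /* release later with free_pages(addr, order) */
}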
Since this would be all-custom, you could explore the mem= boot parameter of the Linux kernel. This will limit the amount of memory the kernel uses, and you can party all over the remaining memory without anyone knowing (a module-side sketch of claiming that hidden memory follows below). Heck, if you boot up a busybox system, you could probably do mem=32M, but even mem=256M should work if you're not booting a GUI.
You will also want to look into the Offline Scheduler (and here). It "unplugs" the CPU from Linux so you can have full control over ALL code running on it. (Some parts of this are already in the mainline kernel, and maybe all of it is.)
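Following the mem= idea above, a rough module-side sketch of claiming the hidden memory. The base address and size are assumptions (a 512MB board booted with mem=256M, taking 64MB above the limit); memremap() with MEMREMAP_WB is used because the experiment needs a cacheable mapping, and the range is physically contiguous simply because nothing else touches it.

#include <linux/init.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/module.h>

/* Assumed layout for illustration: RAM above 256MB is invisible to the
   kernel because of mem=256M, so we can map 64MB of it for ourselves. */
#define HIDDEN_PHYS_BASE 0x10000000UL   /* 256MB */
#define HIDDEN_SIZE      (64UL << 20)   /* 64MB, contiguous by construction */

static void *hidden_buf;

static int __init hidden_init(void)
{
    /* MEMREMAP_WB requests a normal write-back (cacheable) mapping,
       which is what the cache-probing experiment needs. */
    hidden_buf = memremap(HIDDEN_PHYS_BASE, HIDDEN_SIZE, MEMREMAP_WB);
    if (!hidden_buf)
        return -ENOMEM;
    pr_info("mapped %lu MB of hidden RAM at %p\n", HIDDEN_SIZE >> 20, hidden_buf);
    return 0;
}

static void __exit hidden_exit(void)
{
    memunmap(hidden_buf);
}

module_init(hidden_init);
module_exit(hidden_exit);
MODULE_LICENSE("GPL");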

Is it possible to allocate a certain sector of RAM under Linux?

I have recently ended up with some faulty RAM and, despite having already found this out, I would like to try a much easier concept - write a program that would allocate the faulty regions of RAM and never release them. It might not work well if they get allocated before the program runs, but it would be much easier to reboot on failure than to build a kernel with patches.
So the question is:
How to write a program that would allocate given sectors (or pages containing given sectors)
and (if possible) report if it was successful.
This will be problematic. To understand why, you have to understand the relation between physical and virtual memory.
On any modern Operating System, programs will get a very large address space for themselves, with the remainder of the address space being used for the OS itself. Other programs are simply invisible: there's no address at which they're found. How is this possible? Simple: processes use virtual addresses. A virtual address does not correspond directly to physical RAM. Instead, there's an address translation table, managed by the OS. When your process runs, the table only contains mappings for RAM that's allocated to you.
Now, that implies that the OS decides what physical RAM is allocated to your program. It can (and will) change that at runtime. For instance, swapping is implemented using the same mechanism. When swapping out, a page of RAM is written to disk, and its mapping deleted from the translation table. When you try to use the virtual address, the OS detects the missing mapping, restores the page from disk to RAM, and puts back a mapping. It's unlikely that you get back the same page of physical RAM, but the virtual address doesn't change during the whole swap-out/swap-in. So, even if you happened to allocate a page of bad memory, you couldn't keep it. Programs don't own RAM, they own a virtual address space.
Now, Linux does offer some specific kernel functions that allocate memory in a slightly different way, but it seems that you want to bypass the kernel entirely. You can find a much more detailed description in http://lwn.net/images/pdf/LDD3/ch08.pdf
Check out BadRAM: it seems to do exactly what you want.
Well, it's not an answer on how to write a program, but it fixes the issue without compiling a kernel:
Use memmap or mem parameters:
http://gquigs.blogspot.com/2009/01/bad-memory-howto.html
I will edit this answer when I get it running and give details.
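In the meantime, a sketch of what that looks like on the kernel command line, with made-up addresses; substitute the regions your memory tester reports. Each memmap=nn$ss entry marks nn bytes starting at physical address ss as reserved, so the kernel never hands those pages out:

memmap=4K$0x7ddf3000 memmap=64K$0x12340000

Depending on the bootloader, the $ may need escaping (GRUB 2 treats it as variable expansion, so something like memmap=4K\$0x7ddf3000 in the config file).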
The thing to do is to write your own kernel module, which can allocate memory at a given physical address, and make it non-swappable with mlock(2).
I've never tried it. No warranty.
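For what it's worth, the mlock(2) half of that idea looks roughly like the sketch below. Note that mlock only pins whatever physical pages the buffer happens to receive; on its own it cannot target a specific (faulty) physical address, so the kernel-module half would still be needed for that.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t len = 1 << 20;   /* 1MB, arbitrary size for illustration */

    /* Get page-aligned anonymous memory and touch it so physical pages
       are actually allocated to back it. */
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    memset(buf, 0, len);

    /* Pin the pages so they are never swapped out. This does NOT let you
       choose which physical frames back the buffer. */
    if (mlock(buf, len) != 0) {
        perror("mlock");
        return 1;
    }

    puts("buffer locked in RAM; holding it until killed");
    pause();
    return 0;
}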

Accessing any memory locations under Linux 2.6.x

I'm using Slackware 12.2 on an x86 machine. I'm trying to debug/figure out things by dumping specific parts of memory. Unfortunately my knowledge of the Linux kernel is quite limited to what I need for programming/pentesting.
So here's my question: Is there a way to access any point in memory? I tried doing this with a char pointer so that it would only be a byte long. However, the program crashed and spat out something along the lines of: "can't access memory location". I was pointing at the 0x00000000 location, which is where the system stores its interrupt vectors (unless that has changed), which shouldn't really matter.
Now my understanding is that the kernel will allocate memory (data, stack, heap, etc.) to a program and that program will not be able to go anywhere else. So I was thinking of using NASM to tell the CPU to go directly fetch what I need, but I'm unsure if that would work (and I would need to figure out how to translate MASM to NASM).
Alright, well there's my long winded monologue. Essentially my question is: "Is there a way to achieve this?".
Anyway...
If your program is running in user-mode, then memory outside of your process memory won't be accessible, by hook or by crook. Using asm will not help, nor will any other method. This is simply impossible, and is a core security/stability feature of any OS that runs in protected mode (i.e. all of them, for the past 20+ years). Here's a brief overview of Linux kernel memory management.
The only way you can explore the entire memory space of your computer is by using a kernel debugger, which will allow you to access any physical address. However, even that won't let you look at the memory of every process at the same time, since some processes will have been swapped out of main memory. Furthermore, even in kernel mode, physical addresses are not necessarily the same as the addresses visible to the process.
Take a look at /dev/mem or /dev/kmem (man mem)
If you have root access you should be able to see your memory there. This is a mechanism used by kernel debuggers.
Note the warning: Examining and patching is likely to lead to unexpected results when read-only or write-only bits are present.
From the man page:
mem is a character device file that is an image of the main memory of the computer. It may be used, for example, to examine (and even patch) the system. Byte addresses in mem are interpreted as physical memory addresses. References to nonexistent locations cause errors to be returned.
...
The file kmem is the same as mem, except that the kernel virtual memory rather than physical memory is accessed.
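As a small sketch (run as root), reading a byte at an arbitrary physical address through /dev/mem could look like this; the address is just an example, and on kernels built with CONFIG_STRICT_DEVMEM only certain ranges are readable at all:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    off_t phys = 0x000f0000;    /* example: legacy BIOS ROM area */
    uint8_t value;

    int fd = open("/dev/mem", O_RDONLY);
    if (fd < 0) {
        perror("open /dev/mem");
        return 1;
    }

    /* Offsets into /dev/mem are interpreted as physical addresses. */
    if (pread(fd, &value, 1, phys) != 1) {
        perror("pread");
        close(fd);
        return 1;
    }
    printf("byte at physical 0x%lx = 0x%02x\n", (unsigned long)phys, value);

    close(fd);
    return 0;
}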

Dynamic memory management under Linux

I know that under Windows, there are API functions like GlobalAlloc() and such, which allocate memory and return a handle; this handle can then be locked and a pointer returned, then unlocked again. When unlocked, the system can move this piece of memory around when it runs low on space, optimising memory usage.
My question is that is there something similar under Linux, and if not, how does Linux optimize its memory usage?
Those Windows functions come from a time when all programs were running in the same address space in real mode. Linux, and modern versions of Windows, run programs in separate address spaces, so they can move them about in RAM by remapping what physical address a particular virtual address resolves to in the page tables. No need to burden the programmer with such low level details.
Even on Windows, it's no longer necessary to use such functions except when interacting with a small number of old APIs. I believe Raymond Chen's blog and book have some discussions of the topic if you are interested in more detail. Eg here's part 4 of a series on the history of GlobalLock.
Not sure what the Linux equivalent is, but in AT&T UNIX there are "scatter gather" memory management functions in the memory manager of the core OS. In a virtual memory operating environment there are no absolute addresses, so applications don't have an equivalent function. The executable object loader (which loads an executable file into memory where it becomes a process) uses memory addressing from the memory manager, all of which is kept track of in virtual memory blocks maintained in its page table (which contains the physical memory addresses). Bottom line is that your application's physical memory layout is likely in no way ever linear or directly accessible.

Force Linux to use only memory over 4G?

I have a Linux device driver that interfaces to a device that, in theory, can perform DMA using 64-bit addresses. I'd like to test to see that this actually works.
Is there a simple way that I can force a Linux machine not to use any memory below physical address 4G? It's OK if the kernel image is in low memory; I just want to be able to force a situation where I know all my dynamically allocated buffers, and any kernel or user buffers allocated for me are not addressable in 32 bits. This is a little brute force, but would be more comprehensive than anything else I can think of.
This should help me catch (1) hardware that wasn't configured correctly or loaded with the full address (or is just plain broken) as well as (2) accidental and unnecessary use of bounce buffers (because there's nowhere to bounce to).
clarification: I'm running x86_64, so I don't care about most of the old 32-bit addressing issues. I just want to test that a driver can correctly interface with multitudes of buffers using 64-bit physical addresses.
/usr/src/linux/Documentation/kernel-parameters.txt
memmap=exactmap [KNL,X86] Enable setting of an exact
E820 memory map, as specified by the user.
Such memmap=exactmap lines can be constructed based on
BIOS output or other requirements. See the memmap=nn@ss
option description.

memmap=nn[KMG]@ss[KMG]
[KNL] Force usage of a specific region of memory
Region of memory to be used, from ss to ss+nn.

memmap=nn[KMG]#ss[KMG]
[KNL,ACPI] Mark specific memory as ACPI data.
Region of memory to be used, from ss to ss+nn.

memmap=nn[KMG]$ss[KMG]
[KNL,ACPI] Mark specific memory as reserved.
Region of memory to be used, from ss to ss+nn.
Example: Exclude memory from 0x18690000-0x1869ffff
memmap=64K$0x18690000
or
memmap=0x10000$0x18690000
If you add memmap=4G$0 to the kernel's boot parameters, the lower 4GB of physical memory will no longer be accessible. Also, your system will no longer boot... but some variation hereof (memmap=3584M$512M?) may allow for enough memory below 4GB for the system to boot but not enough that your driver's DMA buffers will be allocated there.
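Once booted that way, the driver side can report where its buffers actually land. Below is a hedged sketch using the generic DMA API (dma_set_mask_and_coherent() and dma_alloc_coherent() are standard kernel interfaces); the device pointer and buffer size are placeholders.

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/sizes.h>

/* Sketch only: allocate a coherent DMA buffer and report its bus address,
   so you can confirm it really is above the 4GB boundary. */
static int check_dma_above_4g(struct device *dev)
{
    dma_addr_t dma_handle;
    size_t size = 1 << 20;   /* 1MB, placeholder size */
    void *cpu_addr;

    /* Declare that the hardware can address the full 64-bit space. */
    if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)))
        return -EIO;

    cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL);
    if (!cpu_addr)
        return -ENOMEM;

    dev_info(dev, "DMA buffer at bus address %pad (%s 4GB)\n", &dma_handle,
             (u64)dma_handle >= SZ_4G ? "above" : "below");

    dma_free_coherent(dev, size, cpu_addr, dma_handle);
    return 0;
}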
IIRC there's an option within the kernel configuration to use PAE extensions which will enable you to use more than 4GB (I am a bit rusty on the kernel config - the last kernel I recompiled was 2.6.4 - so please excuse my lack of recall). You do know how to trigger a kernel config:
make clean && make menuconfig
Hope this helps,
Best regards,
Tom.
