I was playing with some of the Linux boot parameters, trying to create a hole in system memory using the memmap option. I have a 6GB system, and the e820 map shows 0x100000-0xcf49d000 as usable memory. I decided to create a hole from 128MB to 1GB, mark it as reserved, and allow the system to use memory from 1GB-2GB.
In boot options I configured it as follows:
memmap=890M$128M memmap=1G#1G.
However, once the system boots up, the modified memory map is quite different from what I expected:
0000000000100000 - 0000000037a00000 (usable)
0000000040000000 - 0000000080000000 (usable)
What am I doing wrong?
I do know the kernel needs some low memory and I can't completely make a hole from 1MB to 1GB. That is why I thought of leaving 128MB for the initial boot sequence.
Thanks
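As a sanity check, the memory map the kernel actually adopted (including any memmap= edits) can be inspected after boot through standard procfs/sysfs interfaces on x86:

```shell
# Regions as the kernel finally reserved them ("System RAM" vs "reserved")
cat /proc/iomem
# The raw firmware-provided map, one directory per e820 entry (start/end/type)
ls /sys/firmware/memmap/
```

Comparing the two views makes it easy to see whether a memmap= option was parsed at all, or silently dropped.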
I'm working on a 32-bit embedded system with 1GB RAM that has an unusual configuration. There is a 32MB region of RAM (DDR) that is used by an external piece of hardware. This region has size and alignment requirements defined by the hardware, and its physical address and size are fixed at boot time before the Linux kernels execute. The system also has two independent CPUs (not 2 cores on the same CPU), each running Linux, that share the same 1GB of RAM and also need to simultaneously access the 32MB region.
I need to give each CPU an arbitrary amount of RAM to run Linux, and I also need to allow each CPU to access the 32MB region. I have got all of this working by specifying the memory map on the kernel command line (e.g. for CPU1 memmap=512M#0 and for CPU2 memmap=512M#512M) and also using memmap=32M$XXXX to reserve the 32MB region. When the kernels boot, I use ioremap to map the 32MB region into each kernel's virtual address space so it can access the region.
However, what I've found is that to maintain software compatibility with existing kernel drivers, I need the 32MB region to be mapped into both kernels' "logical" address space so that phys_to_virt and virt_to_phys will work. Of course, when using ioremap, the 32MB region maps into the vmalloc area, so phys_to_virt and virt_to_phys don't work as desired.
I'm not finding any way to accomplish what I need to do with existing command-line or build-configuration options.
At a high level, I think the kernels' page table entries will have no translation where the gap in the physical/logical memory map was specified (i.e. memmap=32M$XXXX), and what I need to do is make sure those page table entries have the same information as if the gap had never been specified. Or find a way to "reserve" a specific region of physical RAM from the kernel after it has mapped the logical address space into the page table. Either way, any manipulation needs to be done before the kernel starts using the memory.
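For reference, a minimal sketch of the ioremap approach described above (the physical address is hypothetical; this is kernel-module code in the style of the 3.x-era 32-bit ARM API, not tested here):

```c
#include <linux/io.h>
#include <linux/module.h>

#define SHARED_PHYS 0x20000000UL      /* hypothetical start of the 32MB region */
#define SHARED_SIZE (32UL << 20)

static void __iomem *shared;

static int __init shared_init(void)
{
    /* Maps into the vmalloc/ioremap area, NOT the direct (lowmem) mapping,
     * which is exactly why virt_to_phys()/phys_to_virt() are not valid
     * for this pointer. */
    shared = ioremap(SHARED_PHYS, SHARED_SIZE);
    if (!shared)
        return -ENOMEM;
    return 0;
}

static void __exit shared_exit(void)
{
    iounmap(shared);
}

module_init(shared_init);
module_exit(shared_exit);
MODULE_LICENSE("GPL");
```

The pointer returned by ioremap lives above the direct mapping, so any driver that round-trips addresses through virt_to_phys will compute garbage for it — which is the compatibility problem described above.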
The discussion below applies to 32-bit ARM Linux.
Suppose there is 512MB of physical RAM in my system. For common configurations, all of this 512MB will be mapped into the kernel's virtual space via the direct mapping (0xC0000000 to 0xE0000000).
The question is: the kernel itself only uses part of this RAM; most of it is allocated to user space. Why bother mapping all 512MB of physical RAM into the kernel's virtual space (0xC0000000 to 0xE0000000)? Why doesn't the kernel just map the part it needs for its own usage (say 64MB of RAM)?
If physical RAM is greater than 1GB, things get a little more complicated. Let's say the directly-mapped area is 768MB in size. The result would be 768MB out of 1GB being directly mapped into the kernel's virtual space. I guess the rest of the RAM (256MB) goes to two places: either the high memory area, or it is allocated by the kernel to user space. But I still don't see any advantage in mapping so much physical RAM into the kernel's virtual space.
Actually this question can be reduced to:
what are the drawbacks if the kernel only directly maps a small part of physical RAM (say 64MB out of 512MB)?
Before further discussion, it is beneficial to know that:
After the MMU is turned on, every address issued by the CPU is a virtual address.
If the kernel wants to access ANY address in RAM, a mapping must be set up before the actual access happens.
If the kernel only directly maps a small part of physical RAM, the cost is that every time the kernel needs to access any other part of RAM, it has to set up a temporary mapping before the access and tear that mapping down afterwards, which is tedious and inefficient.
If that mapping is set up in advance and is always there, it saves quite a lot of trouble for kernel.
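This temporary-mapping cost is visible in the kernel's own highmem API. A sketch (kernel code, assuming the classic single-argument kmap_atomic interface of that era):

```c
#include <linux/highmem.h>

/* Read one byte out of a page that may not be in the direct mapping.
 * For a lowmem page, page_address() would already be a valid pointer;
 * for a highmem page, a temporary mapping slot must be set up and
 * torn down around every single access. */
static unsigned char read_first_byte(struct page *page)
{
    unsigned char b;
    void *vaddr = kmap_atomic(page);   /* set up temporary mapping */

    b = *(unsigned char *)vaddr;
    kunmap_atomic(vaddr);              /* tear it down again */
    return b;
}
```

With the full direct mapping in place, the lowmem case skips all of this: the translation is a constant offset, computed with no page-table updates or TLB flushes.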
Can I allocate one large, guaranteed physically contiguous range of memory (100 MB, consecutive without breaks) on Linux, and if I can, then how can I do this?
I need to map this contiguous block of memory through a PCI-Express BAR from one CPU1 to another CPU2 located behind a PCIe Non-Transparent Bridge.
You don't allocate physical memory in user applications (physical memory only makes sense inside the kernel).
I don't understand whether you are coding a kernel module or some Linux application (e.g. a numerical finite-element code).
Inside applications, you can allocate virtual memory with e.g. mmap(2) (and then you can allocate a big contiguous segment of address space).
I guess that some GPU cards give access to a large amount of GPU memory through mmap, so I believe it is possible to do what you want.
You might be interested in the numa(7) man page. Probably the numa(3) library would give you what you want. Did you also consider Open MPI? See also msync(2) and mlock(2).
From user space there is no guarantee; it depends on your luck.
If you compile your driver into the kernel, you can use mmap and allocate the required amount of memory.
If you need it as storage, or for some other purpose not specific to a driver, then you should set the memmap parameter on the boot command line,
e.g. memmap=200M$1700M
This reserves 200MB of memory starting at address 1700M.
Later it can even be used as a filesystem ;)
I am trying to test Contiguous Memory Allocator for DMA mapping framework. I have compiled kernel 3.5.7 with CMA support, I know that it is experimental but it should work.
My goal is to allocate several 32MB physically contiguous memory chunks in kernel module for device without scatter/gather capability.
I am testing my system with test patch from Barry Song: http://thread.gmane.org/gmane.linux.kernel/1263136
But when I try to allocate memory with echo 1024 > /dev/cma_test, I get bash: echo: write error: No space left on device, and in dmesg: misc cma_test: no mem in CMA area.
What could be the problem? What am I missing? The system is freshly rebooted, and there should be at least 350MB of free contiguous memory, because the bigphysarea patch on kernel 3.2 was able to allocate that amount on a similar system.
Thank you for your time!
In the end I decided to use kernel 3.5 and the bigphysarea patch (from 3.2). It is easy and works like a charm.
CMA is a great option as well, but it is a bit harder to use and debug (CMA needs an actual device). I used up all my skills trying to find what the problem was; printk inside the kernel code was the only way to debug this one.
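For comparison, the intended way to get CMA-backed chunks of this size, once a real struct device is available, is the DMA API. A sketch (assumes a valid struct device *dev from a driver's probe; the config symbol wiring CMA into the DMA layer is CONFIG_CMA / CONFIG_DMA_CMA depending on kernel version):

```c
#include <linux/dma-mapping.h>

#define CHUNK_SIZE (32UL << 20)   /* 32MB, physically contiguous */

/* Allocations too large for the buddy allocator are satisfied from the
 * CMA area — provided enough of it was reserved at boot (cma= parameter
 * or the CMA size Kconfig option). */
static int alloc_chunk(struct device *dev)
{
    dma_addr_t handle;
    void *cpu_addr = dma_alloc_coherent(dev, CHUNK_SIZE, &handle, GFP_KERNEL);

    if (!cpu_addr)
        return -ENOMEM;   /* the "no mem in CMA area" situation above */

    /* ... use cpu_addr (kernel view) and handle (device view) ... */
    dma_free_coherent(dev, CHUNK_SIZE, cpu_addr, handle);
    return 0;
}
```

An undersized or unconfigured CMA reservation fails exactly as described in the question, which is one of the first things worth checking in the boot log.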
I have a Linux device driver that interfaces to a device that, in theory, can perform DMA using 64-bit addresses. I'd like to test to see that this actually works.
Is there a simple way that I can force a Linux machine not to use any memory below physical address 4G? It's OK if the kernel image is in low memory; I just want to be able to force a situation where I know all my dynamically allocated buffers, and any kernel or user buffers allocated for me are not addressable in 32 bits. This is a little brute force, but would be more comprehensive than anything else I can think of.
This should help me catch (1) hardware that wasn't configured correctly or loaded with the full address (or is just plain broken) as well as (2) accidental and unnecessary use of bounce buffers (because there's nowhere to bounce to).
Clarification: I'm running x86_64, so I don't care about most of the old 32-bit addressing issues. I just want to test that a driver can correctly interface with many buffers using 64-bit physical addresses.
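Complementing any boot-parameter approach, the driver side can assert 64-bit capability and log where buffers actually land. A sketch (dma_set_mask_and_coherent exists from roughly kernel 3.13 onward; on older kernels use dma_set_mask plus dma_set_coherent_mask):

```c
#include <linux/dma-mapping.h>

/* In the driver's probe path: advertise 64-bit DMA capability, then
 * check where a buffer actually lands. */
static int check_dma64(struct device *dev)
{
    dma_addr_t handle;
    void *buf;

    if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)))
        return -EIO;                     /* platform refused 64-bit DMA */

    buf = dma_alloc_coherent(dev, PAGE_SIZE, &handle, GFP_KERNEL);
    if (!buf)
        return -ENOMEM;

    /* With low memory hidden via memmap=, this should print a bus
     * address at or above 4GB. */
    dev_info(dev, "DMA buffer at bus address %pad\n", &handle);

    dma_free_coherent(dev, PAGE_SIZE, buf, handle);
    return 0;
}
```

Logging the bus address on every allocation makes it obvious whether a device that silently truncates to 32 bits is being handed addresses it cannot reach.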
/usr/src/linux/Documentation/kernel-parameters.txt
memmap=exactmap [KNL,X86] Enable setting of an exact
E820 memory map, as specified by the user.
Such memmap=exactmap lines can be constructed based on
BIOS output or other requirements. See the memmap=nn@ss
option description.
memmap=nn[KMG]@ss[KMG]
[KNL] Force usage of a specific region of memory
Region of memory to be used, from ss to ss+nn.
memmap=nn[KMG]#ss[KMG]
[KNL,ACPI] Mark specific memory as ACPI data.
Region of memory to be used, from ss to ss+nn.
memmap=nn[KMG]$ss[KMG]
[KNL,ACPI] Mark specific memory as reserved.
Region of memory to be used, from ss to ss+nn.
Example: Exclude memory from 0x18690000-0x1869ffff
memmap=64K$0x18690000
or
memmap=0x10000$0x18690000
If you add memmap=4G$0 to the kernel's boot parameters, the lower 4GB of physical memory will no longer be accessible. Also, your system will no longer boot... but some variation thereof (memmap=3584M$512M?) may allow for enough memory below 4GB for the system to boot, but not enough that your driver's DMA buffers will be allocated there.
IIRC there's an option within the kernel configuration to use PAE extensions, which will enable you to use more than 4GB (I am a bit rusty on the kernel config; the last kernel I recompiled was 2.6.4, so please excuse my lack of recall). You do know how to trigger a kernel config:
make clean && make menuconfig
Hope this helps,
Best regards,
Tom.