I am trying to reserve 10MB out of the 2GB onboard RAM on an embedded single-board computer that uses the Canyonlands (PowerPC-460EX) CPU. By reserve RAM, I mean block out a chunk of RAM that Linux will not touch so it will retain data on a warm-reboot. I am using the U-Boot bootloader, and I have tried the following methods:
1) Set mem=2038M in the bootargs environment variable
2) Set the 'pram' environment variable in U-Boot and then set mem=\${mem} in bootargs
Both methods failed to change the amount of RAM seen by Linux. I am looking at /proc/meminfo to figure out how much RAM Linux sees as available; in both cases it reports 2074876 kB (just under 2GB).
Any ideas?
I don't have enough points to comment, but here are some clues:
1) Check that your mem parameter is being passed correctly to the kernel. You should be able to do this by running cat /proc/cmdline after you boot, as suggested here (see the example below this list).
2) Try putting quotes around your parameters, like "mem=2038M".
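For example, a minimal check from the U-Boot prompt might look like the following (the console and root arguments are only placeholders for whatever your board already passes); after booting, the same string should appear in /proc/cmdline:

    => setenv bootargs console=ttyS0,115200 root=/dev/nfs rw mem=2038M
    => saveenv
    => boot

    # cat /proc/cmdline
    console=ttyS0,115200 root=/dev/nfs rw mem=2038M

If mem=2038M shows up there but MemTotal is still ~2GB, the argument is reaching the kernel and the limit is being ignored or overridden further along; if it is missing, something in the boot scripts is rebuilding bootargs without it.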
I want to get 10GB of contiguous physical memory to test my DMA-capable device (a new device).
Can someone explain how to get contiguous physical memory for my driver?
I also want to know how to map that memory to user space.
I am working on a 64-bit x86 Intel platform with the 2.6.32 and 3.10 Linux kernel versions. In that kernel version the Contiguous Memory Allocator (CMA) is not present. I have 16GB of RAM.
From Google I got some clues that the kernel command-line parameter mem=6G and ioremap can be used.
But I am not clear how exactly to do it.
Please explain.
How do I get the physical address map of my 64-bit x86 Intel platform using the Linux 2.6.32 kernel?
What kernel command-line parameter should I use?
Can I do ioremap on a RAM physical address? If yes, what are the steps?
After mapping that memory into kernel space, how do I map it to a user-space process?
I don't want to patch or recompile the Linux kernel.
Thanks in advance.
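For what it's worth, here is a rough sketch of how the mem= plus ioremap idea is usually wired up, not a definitive implementation. It assumes the RAM hidden by mem=6G begins at the 6GB physical boundary, which you would need to confirm against the e820 map or /proc/iomem, and the device name resvmem, the 16MB window size and all other identifiers are made up for the example:

    /* Sketch only: expose a chunk of RAM hidden by mem=6G to user space. */
    #include <linux/module.h>
    #include <linux/fs.h>
    #include <linux/io.h>
    #include <linux/mm.h>
    #include <linux/miscdevice.h>

    #define RESERVED_PHYS 0x180000000ULL   /* assumed start of hidden RAM (6GB) */
    #define RESERVED_SIZE (16UL << 20)     /* map only a 16MB window for the example */

    static void __iomem *resv_base;        /* kernel-side view of the region */

    static int resv_mmap(struct file *filp, struct vm_area_struct *vma)
    {
        unsigned long len = vma->vm_end - vma->vm_start;

        /* Reject mappings that run past the end of the reserved window. */
        if (vma->vm_pgoff + (len >> PAGE_SHIFT) > (RESERVED_SIZE >> PAGE_SHIFT))
            return -EINVAL;

        /* Hand the physical range straight to the calling process. */
        vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
        return remap_pfn_range(vma, vma->vm_start,
                               (unsigned long)(RESERVED_PHYS >> PAGE_SHIFT) + vma->vm_pgoff,
                               len, vma->vm_page_prot);
    }

    static const struct file_operations resv_fops = {
        .owner = THIS_MODULE,
        .mmap  = resv_mmap,
    };

    static struct miscdevice resv_dev = {
        .minor = MISC_DYNAMIC_MINOR,
        .name  = "resvmem",                /* appears as /dev/resvmem */
        .fops  = &resv_fops,
    };

    static int __init resv_init(void)
    {
        /* ioremap is possible here because the region lies outside the
         * memory the kernel manages (it was hidden by mem=6G). */
        resv_base = ioremap(RESERVED_PHYS, RESERVED_SIZE);
        if (!resv_base)
            return -ENOMEM;
        return misc_register(&resv_dev);
    }

    static void __exit resv_exit(void)
    {
        misc_deregister(&resv_dev);
        iounmap(resv_base);
    }

    module_init(resv_init);
    module_exit(resv_exit);
    MODULE_LICENSE("GPL");

A user-space process would then open /dev/resvmem and mmap() it to get at the memory. Whether ioremap of ordinary RAM is permitted varies with architecture and kernel version, so treat this as a starting point rather than a recipe.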
I was playing with some of the Linux boot parameters. I was trying to create a hole in system memory using the memmap option. I have a 6GB system and the e820 map shows 0x100000-0xcf49d000 as usable memory. I decided to create a hole from 128MB to 1GB, mark it as reserved, and allow the system to use memory from 1GB-2GB.
In boot options I configured it as follows:
memmap=890M$128M memmap=1G#1G.
However, once the system boots up, the modified memory map is quite different from what I expected.
0000000000100000 - 0000000037a00000 (usable)
0000000040000000 - 0000000080000000 (usable)
What must I be doing wrong?
I do know the kernel needs some low memory and I can't completely make a hole from 1MB to 1GB. That is why I thought of leaving 128MB for the initial boot sequence.
Thanks
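One thing worth double-checking, offered as a guess rather than a confirmed diagnosis: the kernel parameter documentation notes that some boot loaders (GRUB in particular) treat '$' as the start of a variable, so everything from the '$' onward can be silently dropped unless it is escaped in the config file. A kernel line in grub.conf would then look roughly like this (the kernel image name and root device are placeholders):

    kernel /vmlinuz-2.6.32 ro root=/dev/sda1 memmap=890M\$128M memmap=1G#1G

It may be a coincidence, but the first usable region in the map you posted ends at exactly 0x37a00000 (890MB), which is what you would expect if the parameter had been truncated to memmap=890M.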
We have Red Hat 6 servers with around 64GB of memory. We are planning to configure kdump and I am confused about the disk size I should set. Red Hat suggests it should be the memory size plus 2% more (that means around ~66GB of disk space). I need your suggestion on the best size I should define for kdump.
First, don't enable kdump unless Red Hat support tells you to. Kdumps don't really produce anything useful for most Linux 'customers'.
Second, kdump could (potentially) dump the entire contents of RAM into the dump file. If you have 64GB of RAM and it is full when the kdump is triggered, then yes, the space for your kdump file will need to be what Red Hat suggested. That said, most problems can be identified with partial kdumps. Red Hat support has even said before to perform 'head -c ' on the file before sending it in to reduce its size, usually trimming it down to the first 64MB (see the example below this answer).
Finally, remember to disable kdump after you have finished troubleshooting the issue. This isn't something you want running constantly on any system above a 'Development/Test' level. Most importantly, remember to clean this space out after a kdump has occurred.
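As a concrete illustration of that trimming step (the path below is just the usual default crash directory on RHEL 6, and 64M is simply the figure mentioned above; use whatever size support asks for):

    head -c 64M /var/crash/<timestamped-dir>/vmcore > vmcore-trimmed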
Unless the system has enough memory, the kdump crash recovery service will not be operational. For the information on minimum memory requirements, refer to the Required minimums section of the Red Hat Enterprise Linux Technology capabilities and limits comparison chart. When kdump is enabled, the minimum memory requirements increase by the amount of memory reserved for it. This value is determined by a user, and when the crashkernel=auto option is used, it defaults to 128 MB plus 64 MB for each TB of physical memory (that is, a total of 192 MB for a system with 1 TB of physical memory).
In Red Hat Enterprise Linux 6, the crashkernel=auto only reserves memory if the system has 4 GB of physical memory or more.
To configure the amount of memory that is reserved for the kdump kernel, as root, open the /boot/grub/grub.conf file in a text editor and add the crashkernel=<size>M (or crashkernel=auto) parameter to the list of kernel options.
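For instance, the kernel line in grub.conf might end up looking roughly like this (the kernel image name and root device below are placeholders, and crashkernel=auto could be used instead of a fixed size):

    kernel /vmlinuz-2.6.32-431.el6.x86_64 ro root=/dev/mapper/vg_root-lv_root rhgb quiet crashkernel=128M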
I had the same question before.
The amount of memory reserved for the kdump kernel can be estimated with the following scheme:
a base of 128MB is reserved, plus an additional 64MB for each TB of physical RAM present in the system. So, for example, if a system has 1TB of memory, 192MB (128MB + 64MB) will be reserved.
So I believe 128MB will be sufficient in your case.
You may want to have a look at this link: Configuring crashkernel on RHEL6.2 (and later) kernels.
I am trying to test the Contiguous Memory Allocator for the DMA mapping framework. I have compiled kernel 3.5.7 with CMA support; I know that it is experimental, but it should work.
My goal is to allocate several 32MB physically contiguous memory chunks in a kernel module, for a device without scatter/gather capability.
I am testing my system with the test patch from Barry Song: http://thread.gmane.org/gmane.linux.kernel/1263136
But when I try to allocate memory with echo 1024 > /dev/cma_test, I get bash: echo: write error: No space left on device, and in dmesg: misc cma_test: no mem in CMA area.
What could be the problem? What am I missing? The system is freshly rebooted and there should be at least 350MB of free contiguous memory, because the bigphysarea patch on kernel 3.2 was able to allocate that amount on a similar system.
Thank you for your time!
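For reference, a driver usually pulls physically contiguous chunks out of the CMA region through the DMA mapping API, roughly as in the sketch below. This is a simplified illustration rather than the test patch itself; the struct device is assumed to come from your own platform or PCI device's probe routine, and the chunk count is arbitrary. Allocations like this only succeed if the reserved CMA area (CONFIG_CMA_SIZE_MBYTES, or the cma= boot parameter) is at least as large as the request, so a too-small reserved area fails in exactly the 'no mem in CMA area' way:

    #include <linux/device.h>
    #include <linux/dma-mapping.h>
    #include <linux/gfp.h>

    #define CHUNK_SIZE  (32UL << 20)   /* 32MB per buffer */
    #define NUM_CHUNKS  4              /* arbitrary for the example */

    struct cma_chunk {
        void       *cpu_addr;          /* kernel virtual address */
        dma_addr_t  dma_addr;          /* bus address for the device */
    };

    static struct cma_chunk chunks[NUM_CHUNKS];

    /* Call from your driver's probe() with its struct device. */
    static int alloc_chunks(struct device *dev)
    {
        int i;

        for (i = 0; i < NUM_CHUNKS; i++) {
            chunks[i].cpu_addr = dma_alloc_coherent(dev, CHUNK_SIZE,
                                                    &chunks[i].dma_addr,
                                                    GFP_KERNEL);
            if (!chunks[i].cpu_addr) {
                /* Roll back anything already allocated. */
                while (--i >= 0)
                    dma_free_coherent(dev, CHUNK_SIZE,
                                      chunks[i].cpu_addr,
                                      chunks[i].dma_addr);
                return -ENOMEM;
            }
        }
        return 0;
    }

If your kernel config only reserves the default 16MB CMA area, bumping it on the kernel command line (for example cma=350M) would be the first thing to try.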
In the end I decided to use kernel 3.5 and the bigphysarea patch (from 3.2). It is easy and works like a charm.
CMA is a great option as well, but it is a bit harder to use and debug (CMA needs an actual device). I used up all my skills trying to find what the problem was; printk inside the kernel code was the only way to debug this one.
I've inherited support for some Linux kernel drivers (an area in which my experience is very limited). My question is as follows. It's an embedded environment and the hardware has 512MB of physical memory. However, the boot parameters that are passed to the kernel limit the memory to 256MB by using the variable linuxMem=mem=256M. In my research of this environment variable, I am of the understanding that
this limits the amount of memory that the kernel can manage to 256MB.
Yet in some application code that runs on my target, I see an open of /dev/mem and a subsequent mmap of the returned file descriptor, where the offset parameter of the mmap call is in the upper 256MB of physical memory.
And things seem to be working fine. So my question is "why does it work if the kernel supposedly does not know about the upper 256MB?"
Strictly speaking, mem=256M is a kernel parameter, not an environment variable. This parameter only tells the kernel to use so much memory, but it does not make the system completely blind to the physical chip installed in the machine. It can be used to simulate a system with less physical memory than is actually available, but it is not fully equivalent to opening up your box and pulling out one of the memory chips.
Looking at the docs for this parameter, you can explicitly see that addresses outside of the limited range can be used in some situations; that's why they recommend also using memmap= in some cases. So you can't allocate memory for your app above the limit, but you can look at what is found at some physical address, and it seems some device drivers make use of this possibility.
mmap() returns virtual addresses, not physical ones.
It's perfectly possible for a device to only have 64MB of memory and for mmap() to map something around 1GB.
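To make that concrete, here is a minimal user-space sketch of the pattern the question describes. The physical start address 0x10000000 (256MB) and the 1MB length are assumptions for illustration; the real offset is whatever physical address the application and hardware have agreed on, and CONFIG_STRICT_DEVMEM can forbid such mappings on some kernels:

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define RESERVED_PHYS 0x10000000UL   /* assumed start of the RAM above mem=256M */
    #define MAP_LEN       (1UL << 20)    /* map 1MB of it */

    int main(void)
    {
        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        if (fd < 0) {
            perror("open /dev/mem");
            return 1;
        }

        /* The offset passed to mmap() is a physical address; the pointer
         * that comes back is an ordinary virtual address in this process. */
        volatile uint8_t *p = mmap(NULL, MAP_LEN, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, RESERVED_PHYS);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        p[0] = 0xAB;                      /* read/write the hidden RAM directly */
        printf("first byte now reads 0x%02x\n", p[0]);

        munmap((void *)p, MAP_LEN);
        close(fd);
        return 0;
    }

Only the offset handed to mmap() is physical; the pointer the program uses is virtual, which is exactly the distinction the answer above is making.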