I'm running a program which allocates 8 MB stacks using mmap. While testing to see how many stacks I could allocate (aiming for 100,000), I see the virtual memory size rise quickly as expected, while the resident size stays small (less than 1 GB). The program then segfaults with Cannot allocate new fiber stack: Cannot allocate memory (Errno). Using gdb to rescue the segfault and then looking at htop, I have discovered this happens at around 256 GB of virtual memory.
I've tried using prlimit --as=unlimited --rss=unlimited --memlock=unlimited --data=unlimited when running the program, but it doesn't seem to make a difference.
Is there a way to increase this limit? Is it advisable to increase it? Is there a better way for Crystal to allocate stacks?
Maybe you're hitting the limit in /proc/sys/vm/max_map_count, which caps the number of memory mappings a single process may have. The default is around 65530. So it's likely not the amount of memory you want to allocate, but the number of separate mmap'd regions, that causes the Cannot allocate memory error. (If each fiber stack also gets a guard page, every stack costs two mappings, which would explain hitting the wall at roughly 32k stacks, i.e. around 256 GB of 8 MB stacks, rather than the full 65k.)
You can try to increase the maximum with:
sysctl -w vm.max_map_count=131070
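If you want to verify that it is the number of mappings rather than the total size that fails first, a small C sketch along these lines can reproduce the behaviour (the 8 MiB size and the guard page mirror what a fiber-stack allocator would plausibly do; they are assumptions, not taken from your program):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    const size_t stack_size = 8UL * 1024 * 1024;   /* 8 MiB, like a fiber stack */
    size_t count = 0;

    for (;;) {
        void *p = mmap(NULL, stack_size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (p == MAP_FAILED) {
            printf("mmap failed after %zu stacks: %s\n", count, strerror(errno));
            break;
        }
        /* PROT_NONE guard page at the bottom (assumes 4 KiB pages), as a stack
           allocator would use; it also splits the region into two VMAs, so each
           stack costs two entries towards vm.max_map_count. */
        if (mprotect(p, 4096, PROT_NONE) == -1) {
            printf("mprotect failed after %zu stacks: %s\n", count, strerror(errno));
            break;
        }
        count++;
    }
    return 0;
}

On a default configuration the mmap (or the mprotect that splits the region) typically starts failing with ENOMEM once the mapping count approaches vm.max_map_count, long before physical memory runs out.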
See also NPTL caps maximum threads at 65528?
I'd check your swap file size. If you are running out of swap, then all those parameter changes won't help until you fix that.
I'd recreate the failure and run free -h to see whether there is any unused swap. If it's all gone, you will need to increase your swap size.
How can I calculate the real memory usage of a single process? I am not talking about virtual memory, because that just keeps growing. For instance, there are proc files like smaps where you can get a process's mappings, but those values are virtual memory and only keep growing while the process runs. What I want is the real memory usage of a process: if you plot it over time, it should reflect both allocations and frees, so the plot should move up and down rather than being a line that only grows while the process runs.
So, how can I calculate the real memory usage? I would appreciate any helpful answer.
It's actually a somewhat complicated question. The two most common metrics for a program's memory usage at the OS level are the virtual size and the resident set size. (These show up in the output of ps aux as the VSZ and RSS columns.) Roughly speaking, they tell you how much address space the program has been assigned, versus how much of it is currently resident in physical memory.
Further complicating things, when you use malloc (or the C++ new operator) to allocate memory, it is handed out from a pool inside your process, which the allocator grows by occasionally requesting more memory from the operating system. When you free memory, it goes back into this pool but is typically not returned to the OS, so as your program allocates and frees memory you usually will not see its footprint go down again. (However, if it frees a lot of memory and then doesn't allocate any more, you may eventually see its RSS shrink.)
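To see this in practice, here is a small Linux-specific sketch that prints VmSize and VmRSS from /proc/self/status around a burst of small allocations. The sizes and counts are arbitrary, and the exact behaviour depends on the malloc implementation and its trim thresholds:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Print the VmSize and VmRSS lines from /proc/self/status. */
static void print_mem(const char *tag)
{
    char line[256];
    FILE *f = fopen("/proc/self/status", "r");
    if (!f)
        return;
    while (fgets(line, sizeof line, f))
        if (strncmp(line, "VmSize:", 7) == 0 || strncmp(line, "VmRSS:", 6) == 0)
            printf("%-12s %s", tag, line);
    fclose(f);
}

int main(void)
{
    enum { N = 100000, SZ = 1024 };
    static char *blocks[N];

    print_mem("start");
    for (int i = 0; i < N; i++) {
        blocks[i] = malloc(SZ);
        memset(blocks[i], 1, SZ);        /* touch the pages so they become resident */
    }
    print_mem("allocated");
    for (int i = 0; i < N; i += 2)       /* free every other block */
        free(blocks[i]);
    print_mem("half freed");             /* VmRSS usually stays put: the memory went
                                            back to malloc's pool, not to the kernel */
    return 0;
}

If you want an up-and-down plot of "real" usage over time, sampling VmRSS (or the Pss values in /proc/<pid>/smaps) is the usual practical proxy.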
I'm trying to simulate memory exhaustion. Here is what I'm thinking:
1. Turn off overcommitting.
2. Reduce the available heap so that memory exhaustion happens sooner.
3. Run the program under test.
My question is about step 2: is there a trick to reduce the amount of heap the kernel will hand out? I could probably write another program that allocates a large amount of RAM, but is there a smarter way?
You can set a maximum process memory size using the shell's ulimit builtin. The option in question is -v (virtual memory size), so for example to limit the process to a maximum of 2 GB you would do:
ulimit -v 2097152
Then you launch the process from that shell.
If you use the -H option to ulimit, it sets a hard limit, which cannot be increased again once it is set (only root can raise a hard limit); with -S you set only the soft limit, which can still be raised later, up to the hard limit.
If you want to control this from a program, you can use the setrlimit system call, along these lines:
#include <sys/types.h>
#include <sys/resource.h>
#include <stdio.h>

/* Soft limit of 2 GiB on the address space; hard limit left at infinity.
   The cast avoids overflowing int when multiplying the constants. */
struct rlimit the_limit = { (rlim_t)2097152 * 1024, RLIM_INFINITY };

if (setrlimit(RLIMIT_AS, &the_limit) == -1) {
    perror("setrlimit failed");
}
This sets the soft limit to 2 GB; you can also set the hard limit by replacing RLIM_INFINITY with a value. Please note, you can only raise the hard limit if you're root (lowering it is always allowed).
This limit applies to the process's total virtual address space, not just the memory usable as the heap.
The heap itself can be limited with the ulimit -d option (RLIMIT_DATA for setrlimit), which caps the data segment size: the region used for heap allocations via malloc as well as global/static data.
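If you prefer to set this up from inside the test harness rather than the shell, a sketch like the following shrinks RLIMIT_DATA before exercising the code under test. The 16 MiB figure is arbitrary, and note that kernels before 4.7 only count brk-based allocations against RLIMIT_DATA, while newer ones also count private anonymous mmap:

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
    /* Cap the data segment at 16 MiB (both soft and hard limit). */
    struct rlimit lim = { 16 * 1024 * 1024, 16 * 1024 * 1024 };
    if (setrlimit(RLIMIT_DATA, &lim) == -1) {
        perror("setrlimit(RLIMIT_DATA)");
        return 1;
    }

    /* Small allocations come from the brk heap, so they start failing once
       the data segment hits the limit (with a safety cap in case this kernel
       does not enforce the limit on all allocation paths). */
    size_t total = 0;
    while (total < 64UL * 1024 * 1024) {
        if (malloc(4096) == NULL)
            break;
        total += 4096;
    }
    printf("stopped after about %zu KiB of malloc()s\n", total / 1024);
    return 0;
}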
To simulate memory exhaustion, you can also use mmap to map large memory segments and then lock them in memory (see the mmap and mlock man pages on how to lock pages). Locked pages cannot be swapped out, so the memory genuinely available to everything else shrinks.
You may want to request several smaller segments (e.g. 256 KB each) if requests for large contiguous mmap segments fail. Moreover, if you want to go all the way, you may need to make the mmap'ing process immune to the Linux OOM killer (e.g. by writing -17 to /proc/<pid>/oom_adj on old kernels, or -1000 to /proc/<pid>/oom_score_adj on newer ones). Otherwise, when Linux sees that the system is running too low on free memory, it may select and kill the process doing the mmaps in an attempt to free up memory.
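A minimal sketch of that approach (the 256 KiB chunk size and the run-until-failure loop are arbitrary choices; you will need root or a raised RLIMIT_MEMLOCK for mlock to pin any serious amount of memory):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define CHUNK (256 * 1024)                 /* 256 KiB per mapping */

int main(void)
{
    size_t locked = 0;

    for (;;) {
        void *p = mmap(NULL, CHUNK, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            break;
        /* Lock the chunk so it cannot be swapped out, then touch it so it is
           actually backed by physical pages. */
        if (mlock(p, CHUNK) == -1) {
            munmap(p, CHUNK);
            break;
        }
        memset(p, 0xAA, CHUNK);
        locked += CHUNK;
    }
    printf("pinned roughly %zu MiB\n", locked / (1024 * 1024));
    pause();                               /* keep the memory locked until killed */
    return 0;
}

Run this first, then start the program under test; killing the pinning process releases the memory again.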
I have a process on Linux which runs out of memory very quickly, and I wonder whether the per-process maximum allowable virtual memory set by the system might be low; in that case the process will soon run out of memory irrespective of how much RAM/virtual memory is available.
What is the command to check the maximum memory allowed for a user process?
The command you are looking for is
ulimit -v
which shows the limit on the process's virtual address space (ulimit -a lists all the per-process limits).
Forgot to add: if you are on a 64-bit machine, it will probably show unlimited.
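If you would rather check the same limit from inside a program, getrlimit(2) reports it directly; a short sketch:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit lim;

    if (getrlimit(RLIMIT_AS, &lim) == -1) {
        perror("getrlimit");
        return 1;
    }
    if (lim.rlim_cur == RLIM_INFINITY)
        printf("virtual address space: unlimited\n");
    else
        printf("virtual address space soft limit: %llu bytes\n",
               (unsigned long long)lim.rlim_cur);
    return 0;
}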
I am trying to allocate a 5-page 800x600 frame buffer (roughly 5 MB), but during DRAM memory-map initialization dma_alloc_coherent() returns a NULL pointer, i.e. fails to allocate the buffer.
It used to work when I allocated only a 4-page frame buffer (4 MB). I have already tried setting CONSISTENT_DMA_SIZE to 8 MB, 10 MB, and 12 MB, but this doesn't seem to have any effect.
Is there any other setting I'm overlooking?
Thanks a lot,
nazekimi
P.S.
working on a Linux 2.6.10 Mobilinux kernel
The kernel does power-of-2 allocations, so a 5 MB request is rounded up to an 8 MB allocation. You probably need to increase CONSISTENT_DMA_SIZE even more.
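For reference, a sketch of the allocation itself; the device pointer fbdev and the 16-bits-per-pixel assumption are illustrative, not taken from your code:

#include <linux/dma-mapping.h>

/* Five 800x600 pages at 16 bpp is about 4.6 MB; the allocator rounds the
   request up to the next power-of-two order, i.e. 8 MB. */
#define FB_BYTES  (5 * 800 * 600 * 2)

dma_addr_t fb_bus;
void *fb_cpu = dma_alloc_coherent(fbdev, FB_BYTES, &fb_bus, GFP_KERNEL);
if (!fb_cpu)
    printk(KERN_ERR "frame buffer: dma_alloc_coherent failed\n");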
Thx,
Jeffrey
Dear all, I am using Red Hat Linux. How do I set a maximum amount of memory for a particular process? For example, I would like to cap the memory usage of Eclipse alone. Is that possible? Please give me some suggestions.
ulimit -v 102400
eclipse
...gives Eclipse 100 MiB of virtual address space.
You can't really control memory usage directly; you can only cap the virtual memory size, not the amount of actual memory used, as that is extremely complicated (perhaps impossible) to pin down for a single process on an operating system which supports virtual memory.
Not all memory used appears in the process's virtual address space at a given instant; think of kernel usage and the disk cache. A process can also change which pages it has mapped as often as it likes (e.g. via mmap()). Some of a process's address space is mapped but not actually used, or is shared with one or more other processes. This makes measuring per-process memory usage a fairly unachievable goal in practice.
And putting a cap on the virtual memory size is not a good idea either: when the process tries to use more, its allocations simply start failing, and most programs then crash or abort.
The right way of doing this in this case (for a Java process) is to set the maximum heap size via the JVM's well-documented startup options (-Xmx; for Eclipse these usually go in eclipse.ini or after -vmargs on the command line). However, experience suggests that you should not set it to less than 1 GB.