I'm trying to simulate memory exhaustion. Here is what I'm thinking:
1) Turn off overcommitting.
2) Reduce the available heap so that memory exhaustion happens sooner.
3) Run the program under test.
My question is about step 2: is there a trick to reduce the heap size that the kernel will allocate? I could probably write another program that allocates a large amount of RAM, but there might be a smarter way.
You can set a maximum process memory size using the shell's ulimit builtin (not a system call). The option in question is -v (max virtual memory size, in KiB), so for example to limit the process to a maximum of 2 GB you would do:
ulimit -v 2097152
Then you launch the process from that shell.
If you use the -H option to ulimit, it sets a hard limit, which cannot be raised again once set (only root can raise a hard limit).
If you want control from a program, you can use the setrlimit system call, in a manner like:
#include <stdio.h>
#include <sys/resource.h>

struct rlimit the_limit = { 2048UL * 1024 * 1024, RLIM_INFINITY };
if (setrlimit(RLIMIT_AS, &the_limit) == -1) {
    perror("setrlimit failed");
}
This sets the soft limit to 2 GB; you can set the hard limit as well by replacing the RLIM_INFINITY value. Please note that an unprivileged process can only lower its hard limit; only root can raise one.
This limit applies to the total amount of memory that can be used for the process, not just the memory that is usable as the heap.
The heap can be limited separately using the -d option; the equivalent resource name for the setrlimit call is RLIMIT_DATA. This limits the size of the data segment, which holds the heap as well as global/static variables. (Historically RLIMIT_DATA only covered brk-based allocations; since Linux 4.7 it also counts private data mappings created with mmap, which glibc's malloc uses for large requests.)
To simulate memory exhaustion, you can also use mmap to map large memory segments and "lock" them in memory; see the mmap and mlock man pages on how to lock pages. That way the pages will not get swapped out, and the available memory is genuinely reduced.
You may want to request several smaller segments (e.g. 256 KB) if requests for large contiguous mmap segments fail. Moreover, if you want to go all the way, you may need to make the mmap-ing process immune to the Linux "OOM killer" (by writing -17 to its /proc/<pid>/oom_adj, or -1000 to oom_score_adj on newer kernels). Otherwise, when Linux sees that the system is running too low on free memory, it may select and kill the process doing the mmaps in an attempt to free up memory.
Related
I'm running a program which allocates 8 MB stacks using mmap. While testing to see how many stacks I could allocate (aiming for 100,000), I see the virtual memory size rise quickly as expected, while the resident size stays small (less than 1 GB). The program then segfaults with Cannot allocate new fiber stack: Cannot allocate memory (Errno). Using gdb to rescue the segfault and then looking at htop, I have discovered this happens at around 256 GB of virtual memory.
I've tried using prlimit --as=unlimited --rss=unlimited --memlock=unlimited --data=unlimited when running the program, but it doesn't seem to make a difference.
Is there a way to increase this limit? Is it advisable to increase this limit? Is there a better way for crystal to allocate stacks?
Maybe you're hitting the maximum of /proc/sys/vm/max_map_count. This setting caps the number of memory mappings a process may have; the default value is 65530. So it's likely not the amount of memory you're requesting, but the number of mappings (each 8 MB stack, plus any guard page, is at least one separate mmap) that causes the Cannot allocate memory error.
You can try to increase the maximum with:
sysctl -w vm.max_map_count=131070
See also NPTL caps maximum threads at 65528?
I'd check your swap size. If you are running out of swap, then all those parameter changes won't help you until you fix that.
I'd recreate the failure and run free -h to see if there is any unused swap. If it's all gone, you will need to increase your swap size.
So when I execute a Linux command, say a cat command for this example, on a server with 128 GB of RAM, assume none of this RAM is currently in use and all of it is free. (I realize this will never happen, but that's why it's an example.)
1) Would this command then be executed with a heap space of all 128 GB? Or would it be up to the Linux distro I am using to decide how much heap space is allocated from the available 128 GB?
2) If so, is there another command-line argument that I can pass along with my cat command to reserve more heap space than the system standard?
EDIT: 3) Is there a way I can identify how much heap space will be allocated for my cat command (or any command), preferably a built-in command-line solution, not an external application? If it's not possible, please say so; I am just curious.
When you start a program, some amount of memory is allocated for it; however, that initial amount is not a limit on what the program is allowed to use. Whenever more memory is requested, it can be granted, up until the system has used up all of its memory.
It is up to the memory allocator (usually the one in libc) to determine how much heap is allocated. glibc's allocator can be tuned through environment variables (e.g. the MALLOC_* variables described in mallopt(3), or GLIBC_TUNABLES on newer versions), and replacement allocators have their own knobs.
can ftruncate be used to increase the size of shared memory block beyond the shared memory limit size given by sysconfig? How do I make it use swap in case physical memory runs out?
can ftruncate be used to increase the size of shared memory block ...
ftruncate() resizes a file. It does not resize a memory mapping of that file. So, the answer is no.
... beyond the shared memory limit size given by sysconfig?
That limit cannot be breached by an unprivileged process, though the root user can change it.
How do I make it use swap in case physical memory runs out?
Assuming it is a memory mapped file, one way is to only map parts of the file at a time, rather than the whole file. If a process uses more virtual memory than there is available physical memory the operating system is going to automatically use swapping to free some physical memory for you.
Dear all, I am using Red Hat Linux. How do I set a maximum amount of memory for a particular process? For example, I want to cap the memory usage of Eclipse alone. Is it possible to do this? Please give me some solutions.
ulimit -v 102400
eclipse
...gives eclipse at most 100 MiB of virtual memory (102400 KiB).
You can't directly control memory usage; you can only control virtual memory size. The amount of actual memory used is extremely complicated (perhaps impossible) to determine for a single process on an operating system that supports virtual memory.
Not all memory used appears in the process's virtual address space at a given instant, for example kernel usage, and disc caching. A process can change which pages it has mapped in as often as it likes (e.g. via mmap() ). Some of a process's address space is also mapped in, but not actually used, or is shared with one or more other processes. This makes measuring per-process memory usage a fairly unachievable goal in practice.
And putting a cap on the VM size is not a good idea either: once the cap is hit, further allocations simply fail, which many programs (the JVM included) treat as fatal.
The right way of doing this in this case (for a Java process) is to set the maximum heap size via the well-documented JVM startup options (-Xmx and friends). However, experience suggests that you should not set it to less than 1 GB.
My application is similar to this hypothetical program:
char *p[1000];
int i;
for (;;) {
    for (i = 0; i < 1000; i++) {
        p[i] = malloc(random_number_between_1000_and_100000());
        p[i][0] = 0; // update
    }
    for (i = 0; i < 1000; i++) {
        free(p[i]);
    }
}
It has no memory leaks, but on my system the memory consumption (the VIRT column in top) grows without limit, e.g. to 300% of available physical memory. Is this normal?
Update: the program uses the memory for a while and then frees it. Does that make a difference?
The behavior is normal. Quoting man 3 malloc:
BUGS
    By default, Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non-NULL there is no guarantee that the memory really is available. This is a really bad bug. In case it turns out that the system is out of memory, one or more processes will be killed by the infamous OOM killer. In case Linux is employed under circumstances where it would be less desirable to suddenly lose some randomly picked processes, and moreover the kernel version is sufficiently recent, one can switch off this overcommitting behavior using a command like:
    # echo 2 > /proc/sys/vm/overcommit_memory
See also the kernel Documentation directory, files vm/overcommit-accounting and sysctl/vm.txt.
You need to touch (read/write) the memory for the Linux kernel to actually reserve it.
Try to add
sbrk(-1);
at the end of each outer loop iteration to see if it makes a difference.
free() only returns memory to the allocator; the allocator does not necessarily give it back to the OS.
The OS usually allocates all pages as copy-on-write clones of the "zero page", a fixed page filled with zeros. Reading from these pages returns 0 as expected, and as long as you only read, all references go to the same physical page. Once you write a value, the COW is broken and a real physical page frame is allocated for you. This means that as long as you don't write to the memory, you can keep allocating until the virtual address space runs out or your page tables fill up all available memory.
As long as you don't touch those allocated chunks, the system will not really allocate them for you. However, you can still run out of addressable space, which is a limit the OS imposes on each process, and is not necessarily the maximum you could address with the system's pointer type.