How are RAM and heap space allocated for a Linux/Unix command? - linux

Say I execute a Linux command, a cat command for this example, on a server with 128 GB of RAM, and assume none of this RAM is currently in use; all of it is free. (I realize this will never happen, but that's why it's an example.)
1) Would this command then be executed with a heap space of all 128 GB? Or would it be up to the Linux distro I am using to decide how much heap space is allocated from the available 128 GB?
2) If so, is there another command line argument that I can pass along with my cat command to reserve more heap space than the system standard?
EDIT: 3) Is there a way I can identify how much heap space will be allocated for my cat command (or any command), preferably a built-in command-line solution, not an external application? If it's not possible please say so, I am just curious.

When you start a program, some amount of memory is allocated for the program; however, this amount of memory is not the limit to what the program is allowed to use. Whenever more memory is requested, it can be granted up until the system has used up all of the memory.

It would be up to the memory allocator (usually the one in libc) to determine how much heap is allocated. glibc's allocator has no environment variable that caps the heap size (its MALLOC_* variables tune behaviour rather than set a limit), but other replacements may.

Linux: manually reduce heap size

I'm trying to simulate memory exhaustion. So here is what I'm thinking:
turn off overcommitting.
reduce the available heap so that memory exhaustion happens sooner.
Run the program under test.
My question is w.r.t. step 2: is there a trick to reduce the heap size that the kernel will allocate? I could probably write another program that allocates a large amount of RAM, but there might be a smarter way?
You can set the maximum process memory size using the shell's ulimit builtin (a front end to the setrlimit system call). The option in question is -v (max virtual memory size), so for example to limit the process to a maximum of 2 GB you would do:
ulimit -v 2097152
Then you launch the process from that shell.
If you use the -H option to ulimit, it sets a hard limit, which an unprivileged process cannot raise again once it's set (root can still increase it).
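For example (the numbers are arbitrary; the soft limit must not exceed the hard limit, so set the soft limit first):

```shell
ulimit -S -v 2097152   # soft cap: 2 GiB of address space, may be raised later
ulimit -H -v 4194304   # hard cap: 4 GiB; an unprivileged shell cannot raise this again
ulimit -v              # prints the current soft limit in KB: 2097152
```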
If you want control from a program, you can use the setrlimit system call, in a manner like:
#include <stdio.h>
#include <sys/resource.h>

/* soft limit: 2 GB in bytes (note the cast: 2097152 * 1024 would
   overflow a 32-bit int); hard limit: left at infinity */
struct rlimit the_limit = { (rlim_t)2097152 * 1024, RLIM_INFINITY };
if (setrlimit(RLIMIT_AS, &the_limit) == -1) {
    perror("setrlimit failed");
}
This sets the soft limit to 2 GB; you can set the hard limit too by changing the RLIM_INFINITY value. Please note, you can only increase the hard limit if you're root.
This limit applies to the total amount of memory that can be used for the process, not just the memory that is usable as the heap.
The heap memory can be limited using the -d option. The equivalent name for the setrlimit call is RLIMIT_DATA. This limit applies to memory allocations only - e.g. malloc, mmap, static data.
If you use ulimit -d you can limit the data segment size, which is used for heap allocation (as well as global/static variables).
To simulate memory exhaustion, you can also use mmap to map large memory segments and make sure that you "lock" them in memory. See the mmap (and mlock) man pages on how to lock pages in memory. That way these pages will not get swapped out, and the available memory will be reduced.
You may want to request several small segments (e.g. 256 KB) if requests for large contiguous mmap segments fail. Moreover, if you want to go all the way, you may need to make your mmap process immune to the Linux "OOM killer" (by setting its OOM priority to -17 via /proc/<pid>/oom_adj, or -1000 via oom_score_adj on newer kernels). Otherwise, when Linux sees that the system is running too low on free memory, it could select and kill the process calling the mmaps in an attempt to free up memory.

Process killed due to too much memory?

I have an Ubuntu 12.10 (kernel 3.9.0-rc2) installation running on VMware. I've given it 512 MB RAM.
cat /proc/meminfo shows:
MemTotal:  507864 kB
MemFree:   440180 kB
I want to use the swap (for some reason) so I wrote a C program which allocates a 500MB array (using malloc()) and fills it with junk. However, the program gets killed before it can fill the entire array and a message "Killed" appears on the screen.
I wanted to ask if this is normal behavior and what is the reason behind this? In my opinion, the swap should be used because the free RAM is insufficient.
Edit: I did not mention that I have 1GB swap. cat /proc/swaps shows:
/dev/sda5 Size: 1046524 Used: 14672.
The "Used" amount increases when I run the memory-eating program. But as you can see, a lot of swap is left over. So why did the program have to be 'Killed'?
So I couldn't find a valid answer. I have a temporary solution:
I had modified the Virtual Machine settings to give 512MB RAM to the VM. Now I reverted back to 2GB and ran 5 parallel programs each consuming 500MB. Thankfully, all of them run and the swap gets used.
I just needed to use the swap for a project on swap management.
It also matters how your C program allocates the memory and which compiler flags you use. For example, statically allocating memory (such as double A[N][N]) behaves differently from allocating it dynamically (using malloc/calloc). Static allocations are limited by the compiler's memory model (small, medium, etc., which can often be specified). Perhaps a good starting point is:
http://en.wikipedia.org/wiki/C_dynamic_memory_allocation
Does this help?

process memory size solaris

Running a Perl script on a Solaris 10 machine. I know the RAM size is 25 GB. I have two queries.
Normally, how much RAM is a Solaris process allocated? Is there a default value assigned to any script or process, and where can it be set? How do I determine the maximum static array size I can have, and how much dynamic memory I can allocate? What command do I need to issue to find out what memory is allocated to a process on Solaris? Is it configurable?
When the script gives me an out-of-memory error, does it mean it used the entire RAM and virtual memory? Is there any way to know how memory was used when the script threw the out-of-memory error? What command do I need to issue to find this out on Solaris?
1) As much as it requests, up to the limit set by ulimit. Commands such as pmap and ps can show how much a process has allocated at any given time.
2) It can mean that it used all virtual memory, or that it hit the process limit, or that it's a 32-bit process and hit the 4 GB address space limit. Solaris Application Memory Management provides some more details.
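For example (these invocations work on both Solaris and Linux; $$ is the shell's own PID, used here as a stand-in for the Perl script's PID):

```shell
pmap -x $$ | tail -1   # summary line: total address space of the process
ps -o vsz,rss -p $$    # virtual size and resident set size, in KB
```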

Why the process is getting killed at 4GB?

I have written a program which works on a huge set of data. My CPU and OS (Ubuntu) are both 64-bit and I have 4 GB of RAM. Using "top" (the %MEM field), I saw that the process's memory consumption went up to around 87%, i.e. 3.4+ GB, and then it got killed.
I then checked how much memory a process can access using "ulimit -m", which comes out to be "unlimited".
Now, since both the OS and CPU are 64-bit and a swap partition also exists, the OS should have used virtual memory, i.e. [ >3.4 GB + y GB from swap space ] in total, and only if the process required more memory should it have been killed.
So, I have following ques:
How much physical memory can a process access, theoretically, on a 64-bit machine? My answer is 2^48 bytes.
If less than 2^48 bytes of physical memory exists, then OS should use virtual memory, correct?
If the answer to the above question is YES, then the OS should have used swap space as well; why did it kill the process without even using it? I don't think we have to use any specific system calls in our program to make this happen.
Please suggest.
It's not only the data size that could be the reason. For example, run ulimit -a and check the max stack size. Did you get a kill reason? Set 'ulimit -c 20000' to get a core file; it shows you the reason when you examine it with gdb.
Check with file and ldd that your executable is indeed 64 bits.
Check also the resource limits. From inside the process, you could use getrlimit system call (and setrlimit to change them, when possible). From a bash shell, try ulimit -a. From a zsh shell try limit.
Check also that your process indeed eats the memory you believe it does consume. If its pid is 1234 you could try pmap 1234. From inside the process you could read the /proc/self/maps or /proc/1234/maps (which you can read from a terminal). There is also the /proc/self/smaps or /proc/1234/smaps and /proc/self/status or /proc/1234/status and other files inside your /proc/self/ ...
Check with free that you have the memory (and the swap space) you believe you have. You can add some temporary swap space with swapon /tmp/someswapfile (and use mkswap to initialize it).
I was routinely able, a few months (and a couple of years) ago, to run a 7 GB process (a huge cc1 compilation) under GNU/Linux/Debian/Sid/AMD64, on a machine with 8 GB RAM.
And you could try with a tiny test program, which e.g. allocates with malloc several memory chunks of e.g. 32Mb each. Don't forget to write some bytes inside (at least at each megabyte).
standard C++ containers like std::map or std::vector are rumored to consume more memory than what we usually think.
Buy more RAM if needed. It is quite cheap these days.
Everything that is addressed has to fit within the addressable range, including your graphics adaptors, OS kernel, BIOS, etc., and the amount that can be addressed cannot be extended by swap either.
Also worth noting that the process itself needs to be 64-bit also. And some operating systems may become unstable and therefore kill the process if you're using excessive RAM with it.

Allocating memory for process in linux

Dear all, I am using Red Hat Linux. How do I set the maximum memory for a particular process? E.g. I have to set a maximum memory usage for Eclipse alone. Is it possible to allocate like this? Give me some solutions.
ulimit -v 102400
eclipse
...gives eclipse 100 MiB of virtual address space.
You can't control memory usage directly; you can only control virtual memory size. The amount of actual memory used is extremely complicated (perhaps impossible) to determine for a single process on an operating system which supports virtual memory.
Not all memory used appears in the process's virtual address space at a given instant, for example kernel usage, and disc caching. A process can change which pages it has mapped in as often as it likes (e.g. via mmap() ). Some of a process's address space is also mapped in, but not actually used, or is shared with one or more other processes. This makes measuring per-process memory usage a fairly unachievable goal in practice.
And putting a cap on the VM size is not a good idea either, as that will result in the process being killed if it attempts to use more.
The right way of doing this in this case (for a Java process) is to set the maximum heap size (via the well-documented JVM startup options). However, experience suggests that you should not set it to less than 1 GB.
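For Eclipse specifically, the JVM options can be passed on the command line after -vmargs (the flag names are standard HotSpot options; the exact values here are illustrative):

```shell
# -Xms: initial heap size, -Xmx: maximum heap size
eclipse -vmargs -Xms256m -Xmx1024m
```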
