I am trying to allocate memory using calloc(). The maximum size I can get is 1027 MB (not 1024 MB), as reported by the top command. ulimit -v is set to unlimited. The board is an i.MX6Q (ARM). How can I allocate more memory? Thank you!
If you're on a 32-bit architecture, you can theoretically use at most 4 GiB of virtual address space. Parts of it, however, are reserved for the kernel or taken up by library and program code, so it's quite possible you're left with less than 2 GiB for a single contiguous allocation.
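If you want to see where the ceiling actually sits on your board, a quick probe like the one below can help. This is just my own sketch (nothing from the question): it binary-searches for the largest single block calloc() will hand back, at 1 MiB resolution.

/* Probe the largest single calloc()-able block.  The exact ceiling depends
 * on address-space layout, fragmentation and overcommit settings. */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

int main(void)
{
    size_t lo = 0, hi = SIZE_MAX;          /* search range, in bytes */
    while (hi - lo > 1024 * 1024) {        /* stop at 1 MiB resolution */
        size_t mid = lo + (hi - lo) / 2;
        void *p = calloc(1, mid);          /* one element of 'mid' bytes */
        if (p) {
            free(p);
            lo = mid;                      /* worked, try bigger */
        } else {
            hi = mid;                      /* refused, try smaller */
        }
    }
    printf("largest single calloc: ~%zu MiB\n", lo / (1024 * 1024));
    return 0;
}

On 32-bit the limit is really about contiguous free address space, so several smaller blocks may give you more total memory than one big calloc().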
The address space on x86-64 is huge, even though only 48-bit virtual addresses are commonly used.
On 32-bit x86 machines it was pretty clear how much RAM the kernel took up. Generally around 1 GB of ZONE_NORMAL sat at the bottom of memory, while everything above that 1 GB in PHYSICAL (not virtual) addresses was ZONE_HIGHMEM (for user space). This would be a 3:1 split. Of course we can have configurations where the split is 1:3, 2:2, etc. (by changing VMSPLIT).
How much RAM is used for kernel space with 64-bit kernels?
I know PAGE_OFFSET is set to a value far above physically addressable memory on x86-64 (for both 48- and 57-bit virtual addresses). PAGE_OFFSET on x86-64 just describes the split in the virtual address space, not the physical one (with 48-bit addressing, PAGE_OFFSET is ffff888000000000).
So does kernel space take up 1 GB of memory? 2 GB? 3? Are there variables or macros that describe the size? Is it calculated?
Each user-space process can use its own 2^47 bytes (128 TiB) of virtual address space. Or more on a system with PML5 support.
The available physical RAM to back those pages is the total size of physical RAM, minus maybe 30 MiB or so that the kernel needs for its own code/data. (Not including the pagecache: Linux will use any spare pages as buffers and disk cache). This is mostly unrelated to virtual address-space limits.
The 1 GiB figure is how much virtual address space a 32-bit kernel used up, not how much physical RAM.
The address-space question mattered for how much memory a single process could use at the same time, but the kernel can still use all your RAM for caching file data, etc. Unless you're finding the 2^(48-1) or 2^(57-1) bytes of the low half virtual address-space range cramped, there's no equivalent problem.
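To see that address space and RAM really are separate resources, here is a small sketch of mine (the 1 TiB figure is arbitrary) that reserves a terabyte of virtual addresses on x86-64 without committing a single page of RAM:

/* Reserve 1 TiB of virtual address space without using physical memory.
 * PROT_NONE + MAP_NORESERVE only carves out addresses; no pages are
 * committed until the mapping is made accessible and touched. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = (size_t)1 << 40;                    /* 1 TiB */
    void *p = mmap(NULL, len, PROT_NONE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    printf("reserved 1 TiB of address space at %p\n", p);
    getchar();              /* pause: top/htop shows huge VIRT, tiny RES */
    munmap(p, len);
    return 0;
}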
See the kernel's Documentation/x86/x86-64/mm.txt for the x86-64 virtual memory map. Also see Why 4-level paging can only cover 64 TiB of physical address regarding x86-64 Linux not needing to do inconvenient HIGHMEM stuff - the entire high half of the virtual address space is reserved for the kernel, and it maps all the RAM because it's a kernel.
Virtual address space usage does indirectly set a 64 TiB limit on how much physical RAM the kernel can use, but if you have less than that there's no effect. Just like how a 32-bit kernel wasn't a problem if your machine had less than 1 or 2 GiB of RAM.
The amount of physical RAM actually reserved by the kernel depends on build options and modules, but might be something like 16 to 32 MiB.
Check dmesg output and look for a kernel log line like this one, which I found in an old boot log from an x86-64 5.16.3-arch1 kernel:
Memory: 32538176K/33352340K available (14344K kernel code, 2040K rwdata, 8996K rodata, 1652K init, 4336K bss, 813904K reserved, 0K cma-reserved)
Don't count the init (freed after boot) or reserved parts; I'm pretty sure Linux doesn't actually reserve ~800 MiB in a way that makes it unusable for anything else.
Also look for the later Freeing unused decrypted memory: 2036K / Freeing unused kernel image (initmem) memory: 1652K etc. (That's the same size as the init part listed earlier, which is why you don't have to count it.)
It might also dynamically allocate some memory during startup; that initial "Memory:" line is just the sum of the kernel's .text, .data, and .bss sections, i.e. its static code+data sizes.
On 64-bit systems, the only limitation is how much physical memory the kernel can use. The kernel will map all the available RAM, and user-space applications should be able to get access to as much of it as the kernel can provide while keeping enough for the kernel itself to operate.
I'm running a program which allocates 8 MB stacks using mmap. While testing to see how many stacks I could allocate (aiming for 100,000), I see the virtual memory size rise quickly as expected, while the resident size stays small (less than 1 GB). The program then segfaults with Cannot allocate new fiber stack: Cannot allocate memory (Errno). Using gdb to rescue the segfault and then looking at htop, I have discovered this happens at around 256 GB of virtual memory.
I've tried using prlimit --as=unlimited --rss=unlimited --memlock=unlimited --data=unlimited when running the program, but it doesn't seem to make a difference.
Is there a way to increase this limit? Is it advisable to increase this limit? Is there a better way for Crystal to allocate stacks?
You're probably hitting the limit in /proc/sys/vm/max_map_count. This setting caps the number of memory mappings (VMAs) a process can have; the default value is 65530. So it's likely not the total size of the memory you're mapping, but the number of separate mappings, that causes the Cannot allocate memory error.
You can try to increase the maximum with:
sysctl -w vm.max_map_count=131070
See also NPTL caps maximum threads at 65528?
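For illustration, here is a rough C reconstruction of the failing pattern (my own sketch, not the Crystal runtime's actual code). Each 8 MiB stack gets a guard page, so it costs two mappings, and with the default vm.max_map_count the loop dies after roughly 32k stacks - about 256 GB of virtual memory, which matches the failure point you observed.

/* Map 8 MiB "stacks", each with a guard page, until mmap() fails. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    const size_t stack_size = 8u * 1024 * 1024;      /* 8 MiB per stack */
    const long page = sysconf(_SC_PAGESIZE);
    size_t count = 0;

    for (;;) {
        void *p = mmap(NULL, stack_size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (p == MAP_FAILED) {
            printf("mmap failed after %zu stacks (%zu GiB virtual): %s\n",
                   count, count * stack_size >> 30, strerror(errno));
            break;
        }
        /* guard page at the bottom, like a real stack; this also keeps
         * each stack as its own mapping (two VMAs per stack) */
        mprotect(p, (size_t)page, PROT_NONE);
        count++;
    }
    return 0;
}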
I'd check your swap file size. If you are running out of swap, then all those parameter changes won't help you until you fix that.
I'd recreate the failure and run free -h to see if there is any unused swap. If it's all gone, you will need to increase your swap size.
From this post, I know that swap space is related to physical memory. So assume the physical memory and the swap space are both 4 GB. Theoretically, the address space of a 64-bit application is close to 2^64 bytes (of course, the kernel occupies some of it), but as I understand it, the actual memory the application can use is only 8 GB.
So my question is: for an application running on Unix/Linux, is the maximum memory it can use equal to (physical memory + swap space)?
This is a complicated question.
First of all, the theoretical virtual address space of a 64-bit system is 2^64 bytes. But in fact, neither the OS nor the CPU supports such a large virtual address space or that much physical RAM.
Current x86-64 CPUs (AMD64 and Intel's 64-bit chips) actually implement 48-bit virtual addresses, which covers 256 TiB of virtual address space; the physical address width is smaller, typically 40 to 52 bits depending on the CPU.
And Linux allows 128 TiB of virtual address space per process on x86-64, and can theoretically support 64 TiB of physical RAM.
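To put numbers on that: 2^48 bytes = 256 TiB of virtual address space in total, split in half so that each process gets the low 2^47 bytes = 128 TiB while the kernel gets the high half; the 64 TiB figure for physical RAM comes from the size of the kernel's direct mapping of all RAM with 4-level paging.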
To your question: in the ideal case, the maximum virtual address space a Linux process can use is just the Linux limit described above. Even if your system has run out of all its swap space and has only 100 MB of free RAM left, your process can still reserve that much virtual address space, although it cannot touch more of it than RAM + swap can back.
But your system may impose limits on virtual memory requests (malloc, which ends up calling the brk/sbrk or mmap syscalls). For example, Linux has vm.overcommit_memory and vm.overcommit_ratio settings that determine whether an allocation request in a process will be refused. See http://www.win.tue.nl/~aeb/linux/lk/lk-9.html.
However, virtual address space is not the same as real RAM + swap. As far as real RAM + swap is concerned, your view is right: a process will never use more real RAM + swap than your system has. But in most cases there are many processes on your system, so the RAM + swap your process can use is reduced accordingly. If all the physical RAM + swap is about to be exhausted, the OOM killer will choose some process to kill.
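As a sketch of how this plays out in practice (my own example, sizes are arbitrary): whether the large malloc below succeeds depends on vm.overcommit_memory - with 1 (always overcommit) it succeeds even far beyond RAM + swap, with 2 (strict accounting) it is refused, and the default heuristic may go either way - but in every case only the pages you actually touch consume RAM or swap.

/* Ask for far more virtual memory than RAM + swap, touch only a little. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t total = (size_t)64 << 30;                 /* request 64 GiB */
    char *p = malloc(total);
    if (!p) {
        perror("malloc");          /* refused by the overcommit policy */
        return 1;
    }
    memset(p, 0, (size_t)64 << 20);                  /* dirty only 64 MiB */
    puts("allocated 64 GiB virtual, touched 64 MiB");
    getchar();                     /* pause: VIRT is ~64 GiB, RES is ~64 MiB */
    free(p);
    return 0;
}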
I have a 64-bit Linux (SUSE 10) dual-processor machine. When I run my process it uses around 4 GB of virtual memory, but only 3 GB of that is resident; around 9 GB of RAM is still free. How can I get that remaining 1 GB into RAM as well? Why does it stay in swap space, and why can't the kernel load it into RAM when plenty of RAM is available?
Rahul
The kernel could load that data into memory. However, when the pages are not being used, it chooses to write them out to the swap file.
If you absolutely want the data in memory, you should either turn off all swap files (using swapoff(8)), or lock the specific pages into memory using mlock or mlockall.
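A minimal sketch of the mlockall route (note that locked memory counts against RLIMIT_MEMLOCK, so an unprivileged process may need ulimit -l raised, or CAP_IPC_LOCK):

/* Lock all current and future pages of this process into RAM. */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
        perror("mlockall");        /* usually EPERM or ENOMEM (limit hit) */
        return 1;
    }
    puts("all current and future pages are locked in RAM");
    /* ... run the real workload here ... */
    munlockall();
    return 0;
}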
I am trying to understand the memory usage of my embedded Linux system.
By using the top utility and the /proc/meminfo file I can see how much virtual memory a process is using, and how much physical memory is available to the system. But it seems that, for any given process, the virtual memory can be very much higher than the physical memory actually used. As this is an embedded system, swapping is disabled (SwapTotal = 0).
How is Linux calculating the free physical memory? It doesn't seem to account for everything allocated in the virtual memory space.
MemFree in /proc/meminfo is a count of how many pages are free in the buddy allocator. The buddy allocator is the fundamental physical-memory allocator in the kernel; however, there are a lot of ways pages can be returned to it in times of need - for example, freeing empty SLABs, discarding cache/buffer RAM (even if this means invalidating PTEs in a running process), or, as a last resort, swapping things out.
In fact, MemFree is generally controlled to be only 5-10% of total physical RAM, with any extra free RAM being co-opted into cache as time goes on. As such, MemFree alone is a very incomplete view of the overall memory situation.
As for the virtual memory (VSIZE) of a given process, this refers to the sum total of the sizes of all mapped memory segments in the process's address space. However, not all of these will be physically present - some are only paged in upon first access and so will not register as memory in use until actually used. The resident size (RSIZE) is a more accurate view, as it only counts pages that are mapped in right now - although even this may be misleading if a given page is mapped at multiple virtual addresses, which is very common when you consider multiple processes: shared libraries have the same physical RAM mapped into every process that uses that library.
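You can watch that difference from inside a process by reading /proc/self/status. A small sketch of mine (the 256 MiB mapping size is arbitrary):

/* Print VmSize (virtual) and VmRSS (resident) before and after mapping,
 * and then touching, a 256 MiB anonymous region. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

static void show_vm(const char *when)
{
    char line[256];
    FILE *f = fopen("/proc/self/status", "r");
    if (!f)
        return;
    while (fgets(line, sizeof line, f))
        if (!strncmp(line, "VmSize:", 7) || !strncmp(line, "VmRSS:", 6))
            printf("%-8s %s", when, line);
    fclose(f);
}

int main(void)
{
    show_vm("before");
    size_t len = (size_t)256 << 20;                  /* 256 MiB */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    show_vm("mapped");       /* VmSize jumps by 256 MiB, VmRSS barely moves */
    memset(p, 1, len);       /* actually touch the pages */
    show_vm("touched");      /* now VmRSS catches up */
    munmap(p, len);
    return 0;
}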
Try using htop. You may have to install it first (sudo apt-get install htop or yum install htop, whatever your distribution uses).
It will show you a more accurate representation of memory usage.
Basically, it comes down to "buffers/cache".
free -m
Look at the free column in the -/+ buffers/cache row; this is a more accurate representation of what is actually available.
             total       used       free     shared    buffers     cached
Mem:          3770       3586        183          0        112       1498
-/+ buffers/cache:        1976       1793
Swap:         7624        750       6874
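In this output the -/+ buffers/cache row is derived from the Mem row: used = 3586 - 112 - 1498 = 1976 MB and free = 183 + 112 + 1498 = 1793 MB. Memory held by buffers and cache is counted as available because the kernel will hand it back to applications on demand.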