Impact of Increasing GLIBC malloc() M_MMAP_THRESHOLD to 1GB

I am using glibc (version 2.21) on a system with large page sizes (2 MB and 64 MB). With these very large pages there is more fragmentation, so I increased M_MMAP_THRESHOLD to 32 MB using mallopt(), but there is still fragmentation. I now want to increase M_MMAP_THRESHOLD to 1 GB. Does this have any impact on the bin index calculation?

This question was answered on the libc-help list:
If you increase M_MMAP_THRESHOLD, you also have to increase the heap size to something like 32 GiB (HEAP_MAX_SIZE in malloc/arena.c). The default of 2 * DEFAULT_MMAP_THRESHOLD_MAX is probably too small (assuming that DEFAULT_MMAP_THRESHOLD_MAX will be 2 GiB). Otherwise you will have substantial fragmentation for allocation requests between 2 GiB and HEAP_MAX_SIZE.
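For reference, a minimal sketch of how the threshold would be raised at run time with mallopt() (the 1 GiB value is taken from the question; note that an explicit mallopt() call also disables glibc's dynamic threshold adjustment):

    #include <malloc.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Requests below the threshold are served from the heap/arena,
           requests at or above it go through mmap().  mallopt() returns
           a nonzero value on success. */
        if (mallopt(M_MMAP_THRESHOLD, 1024 * 1024 * 1024) == 0) {
            fprintf(stderr, "mallopt(M_MMAP_THRESHOLD) failed\n");
            return 1;
        }

        void *p = malloc(512UL * 1024 * 1024);  /* now served from an arena, not mmap() */
        free(p);
        return 0;
    }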

Related

How can I get the detail usage of physical memory in linux?

I have a Dell R740 server with 12 * 16G memory modules installed. In theory I should see 192G of total memory in Linux, but in fact the kernel only reports 187G. How can I check which devices eat the other 5G of memory?
As mentioned in the comments by Mr Pang, this answer is only valid for hard drives, not for RAM.
e.g. see Gigabyte or Gibibyte (1000 or 1024)? or https://www.linkedin.com/pulse/20140814132922-176099595-mb-vs-mib-gb-vs-gib/
I guess this is linked to the difference between GB and GiB:
1 GB = 1000 MB
1 GiB = 1024 MiB
When selling memory (or hard drives) it looks better to quote GB, but what we usually measure are GiB.
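As a worked illustration of the gap between the two units (purely arithmetic; whether this actually explains the missing 5G for RAM is disputed in the comment above):

    #include <stdio.h>

    int main(void)
    {
        /* 192 decimal gigabytes re-expressed in binary gibibytes. */
        double bytes = 192e9;   /* 192 GB = 192 * 10^9 bytes */
        printf("192 GB = %.1f GiB\n", bytes / (1024.0 * 1024.0 * 1024.0));  /* ~178.8 GiB */
        return 0;
    }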

Virtual memory limit on linux based ARM

I am trying to allocate memory using calloc(). The maximum size I can get is 1027 MB (not 1024 MB), as seen in the output of top. ulimit -v is set to unlimited. The board is an i.MX6Q (ARM). How can I allocate more memory? Thank you!
If you're on a 32-bit architecture, you can theoretically address at most 4 GiB of virtual memory. Parts of it, however, are either reserved for the kernel or used to hold library and program code, so it's quite possible you're left with less than 2 GiB for your own allocations.
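A rough way to see the limit empirically is to probe for the largest single allocation (an illustrative sketch only; the result depends on overcommit settings, address-space fragmentation and ulimit -v):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Walk down from 2 GiB in 1 MiB steps until calloc() succeeds. */
        for (size_t size = (size_t)1 << 31; size >= (1u << 20); size -= (1u << 20)) {
            void *p = calloc(1, size);
            if (p) {
                printf("largest single calloc: %zu MiB\n", size >> 20);
                free(p);
                return 0;
            }
        }
        puts("could not allocate even 1 MiB");
        return 0;
    }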

How is total memory in Java calculated

If I have 8GB RAM and I use the following on a 64-bit JVM
max heap size 6144MB
max perm gen space 2048MB
stack size 2MB
Q1: Is perm gen space allocated from the max heap, or is it separate?
Q2: If it is separate, will the JVM start with the above settings, or will it give an error because heap + permgen + stack + program data would exceed the total RAM?
First of all remember that the parameter you set with -Xmx (since that's the way I suppose you are setting your heap size) is the size of heap available to your Java code, not the amount of memory the JVM will consume. The difference comes from housekeeping structures that the JVM keeps (garbage collector structures, JIT overhead etc.), sometimes memory allocated by native code, buffers, and so on. The size of this additional memory depends on JVM version, the app you are running, and other factors, but I've seen JVMs allocate twice as much RAM as the heap size visible to the application. For the average case, I usually consider 50% to be a safe margin, with 20-30% acceptable. If you set your heap size to be close to amount of RAM in your machine, you will hit the swap and performance will suffer.
Now for the enumerated questions:
Perm gen is a separate space from the heap, at least in Oracle's JDK 6. It is separate because it follows completely different memory-management rules than the regular heap. By the way, 2 GB of permgen space is huge - are you sure you really need it?
Regarding the second question, see above. If this is Oracle's JDK, you are likely to run into trouble: perm and heap add up, and on top of that there will be additional memory, usually on the order of 20-50% of your 6 GB heap, so together the total will exceed your RAM. The setup may work at first, but once both the heap and the perm gen space come close to their configured limits, you could run out of memory.
Heap and permgen are different memory areas of the JVM, so with these settings you will be consuming virtually all the memory on the system. It is always better to leave about 20% of RAM free so the OS and other tasks can run properly.
Also, 2 GB for perm space is a huge figure. Have you looked at trimming your JARs so that only the relevant classes are present on the classpath?
This depends on the JVM and the version of the JVM.
In HotSpot Java 6, PermGen space is independent of the max heap size arguments (-Xmx and -Xms control only the Young/Old generation sizes). The PermGen space size is set with the -XX:PermSize and -XX:MaxPermSize options. See Java SE 6 HotSpot[tm] Virtual Machine Garbage Collection Tuning.
UPDATE: In HotSpot Java 8 there is no PermGen space anymore: class metadata lives in a native-memory area called Metaspace, while data that used to sit in PermGen (such as interned strings) now resides in the regular Young/Old generation heap.
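For concreteness, the settings from the question would look roughly like this on a HotSpot Java 6/7 command line (MyApp is a placeholder class name):

    java -Xmx6144m -XX:MaxPermSize=2048m -Xss2m MyApp

Here -Xmx sizes the Young/Old generation heap, -XX:MaxPermSize caps the separate permanent generation, and -Xss sets the per-thread stack size, which is why the three budgets add up rather than sharing one pool.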

On Linux we see the following: Physical, Real, Swap, Virtual Memory - which should we consider for sizing?

We use a tool (Whats Up Gold) to monitor memory usage on a Linux box.
We see memory usage (graphs) related to:
Physical, Real, Swap, Virtual Memory and ALL Memory (which is an average of all of these).
The 'ALL Memory' graph shows low memory usage, about 10%.
But Physical memory shows as 95% used.
Swap memory shows as 2% used.
So, do I need more memory on this Linux box?
In other words, should I go by:
the ALL Memory graph (which says the memory situation is good), OR
the Physical Memory graph (which says the memory situation is bad)?
Real and Physical
Physical memory is the amount of DRAM currently in use. Real memory shows how much DRAM your applications themselves are using; it is typically lower than physical memory. The difference is that Linux caches some disk data: whenever there is free memory, Linux uses it for caching. Do not worry: as your applications demand memory, they get the cached space back.
Swap and Virtual
Swap is additional space beyond your actual DRAM. This space is borrowed from disk, and once your applications fill up the entire DRAM, Linux moves some unused memory to swap so that all applications can stay alive. The total of swap and physical memory is the virtual memory.
Do you need extra memory?
In answer to your question, you need to check real memory. If your real memory is full, you need more RAM. Use the free command to check the amount of actually free memory. For example, on my system free says:
$ free
             total       used       free     shared    buffers     cached
Mem:      16324640    9314120    7010520          0     433096    8066048
-/+ buffers/cache:     814976   15509664
Swap:      2047992          0    2047992
Look at the -/+ buffers/cache line: as shown above, there are really about 15 GB of free DRAM (the second number on that line) on my system. Check this on your own system to find out whether you need more memory. The Mem, -/+ buffers/cache, and Swap lines correspond to physical, real, and swap memory, respectively.
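If you would rather read these numbers programmatically than parse free output, here is a small sketch using the sysinfo(2) syscall (illustrative only; note it reports buffers but not the page cache):

    #include <stdio.h>
    #include <sys/sysinfo.h>

    int main(void)
    {
        /* Physical RAM, buffers and swap, converted from mem_unit-sized blocks to MiB. */
        struct sysinfo si;
        if (sysinfo(&si) != 0) { perror("sysinfo"); return 1; }

        unsigned long long unit = si.mem_unit;
        printf("physical total: %llu MiB\n", si.totalram  * unit >> 20);
        printf("physical free : %llu MiB\n", si.freeram   * unit >> 20);
        printf("buffers       : %llu MiB\n", si.bufferram * unit >> 20);
        printf("swap total    : %llu MiB\n", si.totalswap * unit >> 20);
        printf("swap free     : %llu MiB\n", si.freeswap  * unit >> 20);
        return 0;
    }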
free -m
Regarding using the free tool to analyse a memory shortage on Linux, I have an opinion backed by experiments (practice):
~# free -m
              total        used        free      shared  buff/cache   available
Mem:           2000         164         144        1605        1691         103
You should add up 'used' + 'shared' and compare the sum with 'total'; the other columns mostly just confuse things.
I would say [ total - (used + shared) ] should always be at least 200 MB.
You can also get almost the same number by checking MemAvailable in /proc/meminfo:
# cat /proc/meminfo
MemAvailable: 107304 kB
MemAvailable is how much memory Linux thinks is really free right now, before active swapping kicks in.
So at this moment you can consume at most about 107304 kB; if you consume more, heavy swapping starts.
In my experience MemAvailable also correlates well with real behaviour.
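If you want the same MemAvailable figure from a program rather than the shell, a minimal sketch (MemAvailable requires kernel 3.14 or newer):

    #include <stdio.h>

    int main(void)
    {
        /* Scan /proc/meminfo for the MemAvailable line and print it in kB. */
        FILE *f = fopen("/proc/meminfo", "r");
        if (!f) { perror("fopen"); return 1; }

        char line[256];
        unsigned long kb;
        while (fgets(line, sizeof line, f)) {
            if (sscanf(line, "MemAvailable: %lu kB", &kb) == 1) {
                printf("MemAvailable: %lu kB\n", kb);
                fclose(f);
                return 0;
            }
        }
        fclose(f);
        fprintf(stderr, "MemAvailable not found (kernel older than 3.14?)\n");
        return 1;
    }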

Linux process memory consumption in bytes (not Kbytes)

In Linux, is there any way to check a process's memory usage measured in bytes (using top or ps, for example)? Not in kbytes, but bytes.
Thanks in advance!
Beyond the obvious answer of multiplying by 1024 (or 1000 if you want to be SI-correct)?
AFAIK top, ps etc. get their information by reading /proc/[PID]/status or something equivalent, which reports the figures in kB. So I'd guess the answer to your question is no. Not that a positive answer would be very useful: memory is allocated from the kernel at page-level granularity, and the smallest page size Linux supports is 4 KB, so you wouldn't get any more "resolution" by seeing the memory consumption in bytes.
multiply kbytes by 1024
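Along those lines, a minimal sketch that reads /proc/self/statm (which reports sizes in pages rather than kB) and converts them to bytes with the page size; it also shows why byte resolution adds no information beyond page granularity:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* /proc/[pid]/statm lists sizes in pages; the first two fields are
           total program size and resident set size. */
        long page = sysconf(_SC_PAGESIZE);
        FILE *f = fopen("/proc/self/statm", "r");
        if (!f) { perror("fopen"); return 1; }

        unsigned long vsize_pages, rss_pages;
        if (fscanf(f, "%lu %lu", &vsize_pages, &rss_pages) != 2) {
            fclose(f);
            return 1;
        }
        fclose(f);

        printf("VSZ: %lu bytes\n", vsize_pages * (unsigned long)page);
        printf("RSS: %lu bytes\n", rss_pages   * (unsigned long)page);
        return 0;
    }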
