Elastic memory configuration in Hazelcast 3.4.1 - hazelcast

Can we still configure elastic memory in Hazelcast 3.4.1?
I do see the NATIVE option for Map, enabling off-heap storage.
I'm not sure whether we can still use the properties hazelcast.elastic.memory.enabled and hazelcast.elastic.memory.total.size to use RAM directly instead of only the heap, because the Hazelcast 3.4.1 documentation says that it supports a High-Density Memory Store using JCache.
Is the High-Density Memory Store the second-generation implementation of elastic memory?
Thanks in advance
Dinesh

You're right, the High-Density Memory Store is the second-generation off-heap memory; we don't call it elastic memory anymore.
You can find the configuration for HD memory here and in the JCache section.
IMap API documentation for off-heap storage will be available as well; follow this issue.
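As a declarative sketch of the HD memory configuration mentioned above, a hazelcast.xml fragment along the lines of the 3.x reference manual could look like this (the size value and map name are illustrative examples, not recommendations):

```xml
<hazelcast>
    <!-- Enterprise feature: enables the High-Density (native) memory store -->
    <native-memory enabled="true" allocator-type="POOLED">
        <size value="2" unit="GIGABYTES"/>
    </native-memory>
    <!-- Store this map's entries off-heap instead of on the JVM heap -->
    <map name="example-map">
        <in-memory-format>NATIVE</in-memory-format>
    </map>
</hazelcast>
```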

Related

VoltDB cluster eating all RAM

I've set up a 3-machine VoltDB cluster with more or less default settings. However, there seems to be a constant problem with VoltDB eating up all of the RAM heap and not freeing it. The heap size is the recommended 2 GB.
Things that I think might be bad in my setup:
I've set 1 min async snapshots
Most of my queries are AdHoc
Even though it might not be ideal, I don't think it should lead to a problem where memory doesn't get freed.
I've set up my machines according to 2.3. Configure Memory Management.
On this image you can see sudden drops in memory usage; these are server shutdowns.
Heap filling warnings
DB Monitor, current state of leader server
I would also like to note that this server is not heavily loaded.
Sadly, I couldn't find anyone with a similar problem. Most of the advice was aimed at optimizing memory use or decreasing the amount of memory allocated to VoltDB; no one seems to have this memory-leak lookalike.

Ambari dashboard memory usage explanation for spark cluster

I am using Ambari to monitor my Spark cluster, and I'm a little confused by all the memory categories. Can somebody with expertise explain what these terms mean? Thanks in advance!
Here is a screen shot of the Ambari Memory Usage zoom out:
Basically, what do the Swap, Shared, Cache and Buffer memory usage categories stand for? (I think I understand Total well.)
There is nothing specific to Spark or Ambari here; these are basic Linux/Unix memory-management terms.
In short:
Swap is a part of memory written to disk. See Wikipedia and What is swap memory?.
Buffers and cache are used for caching filesystem metadata and file contents. See What is the difference between buffer vs cache memory in Linux? and Overview of memory management.
Shared memory is a part of virtual memory used for shared libraries and for memory shared between processes.
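To see these categories on a cluster node, the standard free tool reports the same numbers Ambari charts (the column names below are those of recent procps-ng versions; older versions list Buffers and Cached as separate columns):

```shell
# Print human-readable memory totals; each column maps to an Ambari category.
free -h
# total      -> all physical RAM
# shared     -> memory shared between processes (tmpfs, shm segments)
# buff/cache -> filesystem buffers and page cache, reclaimed under pressure
# Swap row   -> pages moved out to the swap device
```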

What is and how to control Memory Storage in Executors tab in web UI?

I use Spark 1.5.2 for a Spark Streaming application.
What is this Storage Memory in the Executors tab in web UI? How did it reach 530 MB? How can I change that value?
CAUTION: You are using the very, very old and currently unsupported Spark 1.5.2 (which I noticed after I had posted the answer), and my answer is about Spark 1.6+.
The tooltip of Storage Memory may say it all:
Memory used / total available memory for storage of data like RDD partitions cached in memory.
It is part of the Unified Memory Management feature that was introduced in SPARK-10000: Consolidate storage and execution memory management, which says (quoting verbatim):
Memory management in Spark is currently broken down into two disjoint regions: one for execution and one for storage. The sizes of these regions are statically configured and fixed for the duration of the application.
There are several limitations to this approach. It requires user expertise to avoid unnecessary spilling, and there are no sensible defaults that will work for all workloads. As a Spark user, I want Spark to manage the memory more intelligently so I do not need to worry about how to statically partition the execution (shuffle) memory fraction and cache memory fraction. More importantly, applications that do not use caching use only a small fraction of the heap space, resulting in suboptimal performance.
Instead, we should unify these two regions and let one borrow from another if possible.
Spark Properties
You can control the storage memory using the spark.driver.memory or spark.executor.memory Spark properties, which set up the entire memory space for a Spark application (the driver and executors, respectively), with the split between regions controlled by spark.memory.fraction and spark.memory.storageFraction.
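As a sketch, those properties could be set in spark-defaults.conf like this (the values are illustrative, not recommendations; note the defaults changed across Spark versions, e.g. spark.memory.fraction defaulted to 0.75 in 1.6 and was later lowered to 0.6):

```
# Heap size per executor (analogously, spark.driver.memory for the driver)
spark.executor.memory         2g
# Fraction of (heap - 300 MB) given to the unified execution+storage region
spark.memory.fraction         0.75
# Part of the unified region protected for storage (cached RDD partitions)
spark.memory.storageFraction  0.5
```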
You should consider watching the slides Memory Management in Apache Spark and the video Deep Dive: Apache Spark Memory Management, both by their author Andrew Or.
You may want to read how the Storage Memory values (in the web UI and internally) are calculated in How does web UI calculate Storage Memory (in Executors tab)?

High-Density Memory in Hazelcast (native memory size limit)

I am using Hazelcast 3.5.3 and 3.6 Enterprise to implement High-Density Memory, which first initializes pooled native memory and then off-heap memory. This is working fine and creates the native memory, which I can check in the Hazelcast Management Center.
My question is: how can we set an upper limit on native memory, so that if I start one or two Hazelcast instances they will not cross that upper limit when native memory is assigned?
Thanks in advance.
You can configure the native memory limit programmatically or declaratively.
See https://github.com/hazelcast/hazelcast-reference-manual/blob/master/src/Storage/ConfiguringHD.md
Note that maximum sizes are per Hazelcast instance, not per JVM process (there may be multiple Hazelcast instances in the same JVM process) nor per host (there may be multiple Hazelcast instances on the same host).
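Since the cap is per instance, a declarative sketch (illustrative size, element names per the 3.x reference manual linked above) looks like this; two instances started with this config in the same JVM could together allocate up to twice the configured size:

```xml
<native-memory enabled="true" allocator-type="POOLED">
    <!-- Upper limit of native memory for ONE HazelcastInstance -->
    <size value="2" unit="GIGABYTES"/>
</native-memory>
```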

Q: How to limit usage of virtual memory by Cassandra

I deployed a multi-node cluster with Apache Cassandra 2.0.13 on CentOS 7.0. I am using a heap size of 8 GB and a new-generation heap size of 2048 MB. The system shows 17 GB of memory used as cache.
How can I limit Cassandra's usage of virtual memory?
Virtual memory use is generally not a problem; it is not to be confused with actual RAM usage. You can find a good description of virtual memory here. Please elaborate further if you still think the reported virtual memory value could be a problem.
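To convince yourself that the large number is address space rather than RAM, compare VSZ (virtual size) with RSS (resident set) for the process. The pgrep pattern below is a guess at the Cassandra process name, so adjust it for your setup:

```shell
# VSZ counts every mapping (mmap'ed SSTables, thread stacks, reserved heap);
# RSS is what actually occupies RAM. For Cassandra you might use:
#   pid=$(pgrep -f CassandraDaemon)
pid=$$                      # stand-in PID so the snippet runs anywhere
ps -o vsz=,rss= -p "$pid"   # prints: <virtual KB> <resident KB>
```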
