Partition Count in Hazelcast

Can we set the partition count of a Hazelcast IMap equal to the number of nodes in the cluster?
What are the pitfalls?
I understand parallelism could be one.

With only a single partition per node, operations run with a parallelism of one on each member, so the CPU won't be utilized well.
If a new node is added, then it won't get assigned any partitions.
If a node crashes, one of the remaining nodes will have two partitions, hence double CPU and memory load.
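If you do want to tune it, here is a minimal sketch (Scala, using the Hazelcast Java API) of setting the partition count for an assumed 4-node cluster via the hazelcast.partition.count property; the default count is 271:

import com.hazelcast.config.Config
import com.hazelcast.core.Hazelcast

// Assumed 4-node cluster: one partition per member (default is 271).
val config = new Config()
config.setProperty("hazelcast.partition.count", "4")

val hz  = Hazelcast.newHazelcastInstance(config)
val map = hz.getMap[String, String]("example") // IMap spread over the 4 partitions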

Related

What is the relationship between a Node, Worker, Executor, Task and Partition

I am trying to understand the relationship between different components and elements in the Spark architecture but am unable to get a grip on it. Can someone please validate my assumptions and correct me where I am wrong.
My understanding - A node is the actual physical machine. One node can contain the main driver while others will contain the workers.
Q - Can a node have multiple drivers (if I have multiple applications)?
My understanding - A worker is a process within a node. There can be multiple workers within each node though not recommended.
My understanding - An executor is a sub-process(?) within a worker process. Each worker can have multiple executors.
Q. What metric determines the number of executors per worker?
Q. Is the idea of a JVM associated with an executor process or at a higher "worker" level?
Q. What is the relationship between a core and an executor?
Q - Can RAM and HD be allocated at an executor level?
For e.g., if I have a worker node with 100GB of RAM and 5 TB HD, can I allocate 20 GB RAM and 1 TB HDD per executor for that worker?
My understanding - A partition is a portion of the actual data. This split could happen using hashing, round robin or range.
Q - What determines the location of these data partitions?
For e.g., if I have a cluster with 2 nodes, 10 executors (5 executors in each node) and a dataframe with 20 partitions, I'm assuming I would have 2 partitions in each executor or is there a chance that partition distribution could be skewed? What would I need to do to ensure that all my partitions that have a certain partitioning key get co-located within the same worker so there is minimum network transfer when these partitions have to work together to, say, perform an aggregation or a join?
Q - What happens when a repartition() is performed. For e.g., if I have 20 partitions across 10 executors (say, 2 partitions in each) and I repartition(2). I will now have only 2 portions of data which I assume would be resting in a couple of executors. What happens to the remaining executors?
Assumption - A task is the lowest unit of work and performs the actual processing. The number of tasks depends on the number of partitions. So, if there are 20 partitions, I would have 20 tasks in each stage.
Q - Are these tasks performed by individual executors?
Q - If I have fewer executors (say, 10) than partitions (say, 20), does it mean that only 10 tasks will be executed in parallel at any point? Is the degree of parallelism constrained by the number of executors?
Thanks in advance!
My understanding - A node is the actual physical machine. One node can contain the main driver while others will contain the workers. (This is correct as a starting point.)
Q - Can a node have multiple drivers (if I have multiple applications)?
Yes, because the driver is just a process that gets created for the application you submit, and you can have multiple processes running on the same node.
My understanding - A worker is a process within a node. There can be multiple workers within each node though not recommended.
Your understanding here is not quite right: a worker is actually a node or machine. Whether you call it a worker or a worker node, both mean the same thing.
My understanding - An executor is a sub-process(?) within a worker process. Each worker can have multiple executors.
An executor is a process running on the worker node, and a single worker node can have multiple executors.
Q. What metric determines the number of executors per worker?
The configuration (number of cores and memory) of a worker node determines the maximum number of executors it can run.
Q. Is the idea of a JVM associated with an executor process or at a higher "worker" level?
It is associated with the executor process. A Spark executor is a single JVM instance on a node that serves a single Spark application.
Q. What is the relationship between a core and an executor?
The cores property controls the number of concurrent tasks an executor can run. For example, if you request 2 executors, each with 2 cores, you can run 4 tasks concurrently during your job execution.
Q - Can RAM and HD be allocated at an executor level?
For e.g., if I have a worker node with 100GB of RAM and 5 TB HD, can I allocate 20 GB RAM and 1 TB HDD per executor for that worker?
Generally Spark performs all its computation in memory. RAM is allocated at the executor level, whereas disk is allocated only at the worker node level. Spark spills data to disk only when it does not fit in memory.
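As a rough sketch of where these knobs live (values are illustrative, not a recommendation), executor count, cores and memory can be set when building the session:

import org.apache.spark.sql.SparkSession

// Illustrative sizing: 10 executors, 2 cores and 20 GB of heap each.
val spark = SparkSession.builder()
  .appName("executor-sizing-sketch")
  .config("spark.executor.instances", "10") // total executors across the cluster
  .config("spark.executor.cores", "2")      // concurrent tasks per executor
  .config("spark.executor.memory", "20g")   // RAM per executor
  .getOrCreate()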
My understanding - A partition is a portion of the actual data. This split could happen using hashing, round robin or range.
Q - What determines the location of these data partitions?
These partitions could be anywhere and in most cases are not equally distributed. It can happen that some executors do not hold a single partition while others hold more than two.
To get co-located partitions, i.e. partitions that hold the same keys, you would have to repartition the data on a specific column of your dataframe. Spark then partitions the data by the values of that column and ensures that equal column values end up in the same partition.
When you repartition the data to 2 partitions, Spark shuffles the data between all the executors and then breaks it into 2 partitions. That data could end up on any of the executors, and the remaining executors would be empty or idle in that case.
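A sketch of both cases, assuming a DataFrame df with a hypothetical key column "customer_id":

import org.apache.spark.sql.functions.col

// Hash-partition by the key column so rows with equal keys share a partition
// (this still triggers a full shuffle).
val byKey = df.repartition(col("customer_id"))

// Collapse to 2 partitions: after the shuffle only a couple of executors hold
// data for this DataFrame, and the rest stay idle in the following stage.
val twoParts = df.repartition(2)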
Assumption - A task is the lowest unit of work and performs the actual processing. The number of tasks depends on the number of partitions. So, if there are 20 partitions, I would have 20 tasks in each stage.
You would have 20 tasks for that specific stage, but it won't stay the same for all stages, because a new stage is created whenever a data shuffle has to happen. If no shuffle is needed for the code you have written, Spark creates a single stage with 20 tasks.
Q - Are these tasks performed by individual executors? Yes.
Q - If I have fewer executors (say, 10) than partitions (say, 20), does it mean that only 10 tasks will be executed in parallel at any point? Is the degree of parallelism constrained by the number of executors? Yes.
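A quick way to see the one-task-per-partition relationship, assuming an existing SparkContext named sc:

// 20 partitions -> 20 tasks per stage over this RDD; with 10 single-core
// executors, only 10 of those tasks run at any one time.
val rdd = sc.parallelize(1 to 1000, numSlices = 20)
println(rdd.getNumPartitions) // 20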

How to design a big scale VoltDB cluser with dozens of nodes and hundreds of partitions?

If I have 32 physical servers, each with a 32-core CPU and 128 GB of memory, I want to build a VoltDB cluster from all 32 servers with K-Safety=2 and 32 partitions on each server, so we would get a VoltDB cluster with 256 available partitions to store data.
It looks like there are too many partitions to split tables across, especially when some tables don't have a lot of records. But there will be too many copies of a table if we choose to make it a replicated table.
If we build a much smaller cluster with just a couple of servers from the beginning, the worry is that the cluster will have to scale out soon as the business grows. Actually, I don't know how VoltDB will reorganize data when a cluster expands horizontally to more nodes.
Do you have comments? Appreciated.
It is usually better to set sitesperhost to less than 32, so that some percentage of cores is free to run threads for subsystems like export or database replication, or to handle non-VoltDB processes. Typically somewhere from 8 to 24 is the optimal number.
VoltDB creates the logical partitions based on the sitesperhost, the number of hosts, and the kfactor. If you need to scale out later, you can add additional nodes to the cluster which will increase the number of partitions, and VoltDB will gradually and automatically rebalance data from existing partitions to the new ones. You must add multiple servers together if you have kfactor > 0. For kfactor=2, you would add servers in sets of 3 so that they provide their own redundancy for the new partitions.
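A back-of-the-envelope sketch of the resulting unique partition count, assuming the formula unique partitions = (hosts x sitesperhost) / (kfactor + 1) and taking sitesperhost = 24 from the range suggested above:

// Assumed cluster shape: 32 hosts, sitesperhost = 24, kfactor = 2.
val hosts        = 32
val sitesPerHost = 24
val kfactor      = 2

// Each unique partition is stored kfactor + 1 times across the cluster.
val uniquePartitions = (hosts * sitesPerHost) / (kfactor + 1) // = 256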
Your data is distributed across the logical partitions based on a hash of the partition key value of a record, or the corresponding input parameter for routing the execution of a procedure to a partition. In this way, the client application code does not need to be aware of the # of partitions. It doesn't matter so much which partition each record goes to, but you can assume that any records that share the same partition key value will be located in the same partition.
If you choose partition keys well, they should be columns with high cardinality, such as ID columns. This will evenly distribute the data and procedure execution work across the partitions.
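As a toy illustration of the routing idea (not VoltDB's actual hash function), records sharing a partition key value always map to the same partition id, so co-location falls out of the hash:

// Toy sketch only; VoltDB computes the real routing internally.
val partitionCount = 256
def routeToPartition(partitionKey: String): Int =
  Math.floorMod(partitionKey.hashCode, partitionCount)

// Both calls land on the same partition because the key value is identical.
routeToPartition("customer-42") == routeToPartition("customer-42") // true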
Typically a VoltDB cluster is sized based on the RAM requirements, rather than the need for performance, since the performance is very high on even a very small cluster.
You can contact VoltDB at info@voltdb.com or ask more questions at http://chat.voltdb.com if you'd like to get help with an evaluation or discuss cluster sizing and planning with an expert.
Disclaimer: I work for VoltDB.

Cassandra Read Timeouts on Specific Servers

We have a five node Cassandra cluster with replication factor 3. We are experiencing a lot of Read Timeouts in our application. When we checked tpstats on each Cassandra node, we see that three of the nodes have a lot of Read request drops and high CPU utilisation, whereas on the other two nodes Read request drops are zero and CPU utilisation is moderate. Note that the total number of Read requests is almost the same on all servers.
After taking thread dump we found out that the reason for high CPU utilisation is that Parallel GC is running a lot on the three nodes compared to the other two nodes, which is causing CPU utilisation to go high. What we are not able to understand is why GC should be running more on three nodes and less on two nodes, when the distribution of our partition key and our queries is almost uniform.
Cassandra version is 2.2.3.

Settle the right number of partition on RDD

I read some comments saying that a good number of partitions for an RDD is 2-3 times the number of cores. I have 8 nodes, each with two 12-core processors, so I have 192 cores. I set the partition count between 384 and 576, but it doesn't seem to work efficiently; I tried 8 partitions with the same result. Maybe I have to set other parameters so that my job works better on the cluster rather than on my machine. I should add that the file I analyse has about 150k lines.
val data = sc.textFile("/img.csv",384)
The main impact comes from specifying either too few partitions or far too many partitions.
Too few partitions: you will not utilize all of the cores available in the cluster.
Too many partitions: there will be excessive overhead in managing many small tasks.
Between the two, the first is far more damaging to performance. Scheduling too many small tasks has a relatively small impact for partition counts below 1000; if you have on the order of tens of thousands of partitions, then Spark gets very slow.
Now, considering your case, you are getting the same results from 8 and from 384-576 partitions. Generally the rule of thumb says:
NoOfPartitions = (NumberOfWorkerNodes*NoOfCoresPerWorkerNode)-1
As we know, each task is processed by a CPU core. So we should set the number of partitions to the total number of cores in the cluster minus 1 (reserved for the Application Master / driver). That way each core processes one partition at a time.
That means 191 partitions may improve performance; otherwise, the impact of setting fewer or more partitions is as explained at the beginning.
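Applying that rule of thumb to the cluster in the question (8 nodes x 24 cores), a sketch:

// 8 worker nodes x 24 cores each, minus 1 for the Application Master / driver.
val numPartitions = 8 * 24 - 1 // 191

// minPartitions is only a lower bound, but a small 150k-line file will
// typically be split into roughly the requested number of partitions.
val data = sc.textFile("/img.csv", minPartitions = numPartitions)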
Hope this helps!

Cassandra vnodes performance overhead and changing the number of vnodes

We have a test cluster of 4 nodes, and we've turned on vnodes. It seems that reads are somewhat slower than with the old method (initial_token). Is there some performance overhead to using vnodes? Do we have to increase/decrease the default num_tokens (256) if we only have 4 physical nodes?
Another scenario we would like to test is to change the num_tokens of the cluster on the fly. Is it possible, or do we have to recreate the whole cluster? If possible, how can we accomplish that?
We're using Cassandra 2.0.4.
It really depends on your application, but if you are running Spark queries on top of Cassandra, a high number of vnodes can significantly slow down your queries, by a factor of 2x to 5x or more. This is because Spark cannot subdivide queries across vnodes: each vnode results in one Spark partition, and a high number of partitions slows down low-latency queries.
The recommended number of vnodes is more like 16. In theory this lets you grow a two-node cluster to 32 nodes at most, which is more than enough of an expansion ratio for most folks.
