Cassandra - Parameter responsible for number of partitions

After going through multiple websites, I understand that the partition key in Cassandra identifies the node in the cluster where the data is stored. But I don't understand what parameter controls the number of partitions (the way a keyspace definition controls the replication factor), or whether Cassandra simply creates partitions based on Murmur3 without any way to specify the number of partitions explicitly.
Thanks in advance

Cassandra by default uses a partitioner based on the Murmur3 hash, which generates values in the range -2^63 to 2^63-1. Each node in the cluster is responsible for a particular range of hash values, and data whose partition key hashes into that range goes to that node (or nodes, depending on the replication factor). I recommend reading the documentation about Cassandra/DSE architecture - it will make things easier to understand.
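To make the token-range idea concrete, here is a minimal, self-contained sketch of how a token could be mapped to a node. It is not Cassandra's real implementation: the 4-node cluster, the single evenly sized range per node, and Scala's 32-bit MurmurHash3 standing in for Cassandra's 64-bit Murmur3 variant are all simplifications for illustration.

```scala
import scala.util.hashing.MurmurHash3

object TokenRoutingSketch extends App {
  // Illustrative stand-in: Scala's MurmurHash3 is 32-bit, while Cassandra's
  // Murmur3Partitioner produces a 64-bit token in [-2^63, 2^63 - 1].
  def token(partitionKey: String): Long = MurmurHash3.stringHash(partitionKey).toLong

  // Hypothetical 4-node cluster, each node owning one evenly sized,
  // contiguous slice of the full token ring.
  val numNodes = 4
  def ownerOf(t: Long): Int = {
    val ringSize = BigInt(2).pow(64)                 // total span of the ring
    val offset   = BigInt(t) - BigInt(Long.MinValue) // shift token into [0, 2^64)
    ((offset * numNodes) / ringSize).toInt           // index of the owning node
  }

  for (pk <- Seq("user-1", "user-2", "user-3"))
    println(s"$pk -> token ${token(pk)} -> node ${ownerOf(token(pk))}")
}
```

In a real cluster each node typically owns many small ranges (vnodes), and the replication factor determines how many additional nodes store a copy of each partition.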

Related

Cassandra partition technique

From my understanding, Apache Cassandra partitions each row in a table into a separate partition located on separate nodes. In that case, if we consider a table having millions of records or rows, Cassandra would partition the records across millions of nodes.
My doubt is: what if adequate nodes are not available to store each record, in the case of a table with millions of records which is continuously growing?
Your understanding is wrong. The three main keywords used in your question are partition, row and node. Now consider how they are defined:
Node represents the Cassandra process running on a virtual machine/bare metal/cloud instance.
Partition represents a logical entity which helps the Cassandra cluster know on which node the requested data resides. The primary key should be unique.
Row represents a record contained within a partition. A partition can contain millions of rows.
Based on your partition key, your Cassandra cluster identifies the node on which the data will reside. If you have three nodes, Cassandra takes the hash of your partition key, and based on that value the node where the data will be written is identified. So as you scale, the hash ranges are redistributed (and the partitions are redistributed along with them).
So even if you have millions of records, they can reside on a single node if your cluster has one node; and if you have multiple nodes, your data will be distributed almost equally among them.
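You can observe this key-to-node mapping directly through the driver's cluster metadata. Here is a hedged sketch with the DataStax Java driver 3.x from Scala; the contact point, the keyspace name "ks" and the text-typed partition key "some-key" are assumptions for illustration:

```scala
import java.nio.ByteBuffer
import java.nio.charset.StandardCharsets
import com.datastax.driver.core.Cluster
import scala.collection.JavaConverters._

object ReplicaLookup extends App {
  val cluster = Cluster.builder().addContactPoint("127.0.0.1").build() // assumed contact point
  try {
    // Serialize a text partition key the way Cassandra does: as UTF-8 bytes.
    val pk = ByteBuffer.wrap("some-key".getBytes(StandardCharsets.UTF_8))
    // Ask the cluster metadata which nodes own this key in keyspace "ks".
    val replicas = cluster.getMetadata.getReplicas("ks", pk).asScala
    replicas.foreach(host => println(s"replica: ${host.getAddress}"))
  } finally cluster.close()
}
```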

Cassandra Partition vs NoSql Partition

I've understood the difference between the Cassandra partition key, composite key and clustering key, but I'm not finding enough information to understand how partitioning is handled in Cassandra.
In Cassandra, is a range of partition keys stored on a node like a partition/shard? Is my understanding correct or not?
Does each partition key have a different file (at the system level) in the DB? If so, won't reads be slower?
If each partition key doesn't have a different file in the DB, how is it handled?
Data is stored in Cassandra in wide rows called partitions. Each row has a partition key used for identifying that partition. To distribute the data across the cluster, Cassandra uses partitioners, which basically compute hashes of the partition key; the data is then distributed across the cluster based on these values. The default partitioner in Cassandra is Murmur3Partitioner.
At the OS level, the data is stored in SSTable files. A partition can be spread across many SSTables. That's why you also need compaction, which is the process of consolidating SSTables so that your partitions won't be spread across a lot of them. Reducing the number of SSTables a partition is spread across also improves read time. It's worth noting that SSTables are immutable.
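To picture what a "wide row" looks like, here is a hedged sketch of a table definition executed through the DataStax Java driver from Scala; the keyspace, table and column names are invented for illustration. All readings for one sensor_id land in the same partition and are stored sorted by the clustering column ts:

```scala
import com.datastax.driver.core.Cluster

object WideRowTable extends App {
  val cluster = Cluster.builder().addContactPoint("127.0.0.1").build() // assumed contact point
  val session = cluster.connect()
  try {
    // sensor_id is the partition key: the partitioner hashes it to place the
    // whole partition on a node. ts is a clustering column: rows inside the
    // partition are stored sorted by it, which is what makes the "row" wide.
    session.execute(
      """CREATE TABLE IF NOT EXISTS ks.readings (
        |  sensor_id text,
        |  ts        timestamp,
        |  value     double,
        |  PRIMARY KEY ((sensor_id), ts)
        |)""".stripMargin)
  } finally cluster.close()
}
```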
I suggest reading this, especially "How Cassandra reads and writes data".

Cassandra Batch statement-Multiple tables

I want to use a batch statement to delete a row from 3 tables in my database, to ensure atomicity. The partition key is going to be the same in all 3 tables. In all the examples that I read about batch statements, all the queries were for a single table. In my case, is it a good idea to use batch statements, or should I avoid them?
I'm using Cassandra 3.11.2 and I execute my queries using the C++ driver.
Yes, you can use a batch to ensure atomicity. Single-partition batches are faster (same table and same partition key), but for a limited number of partitions (in your case, three) a batch is okay. Just don't use batches as a performance optimization (e.g. to reduce the number of requests); if you need atomicity, you can use them.
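The asker is on the C++ driver, but the shape of such a batch is easiest to sketch here with the DataStax Java driver 3.x from Scala; the keyspace, table names and key value are assumptions for illustration. All three deletes share the same partition key value, and the LOGGED type guarantees all-or-nothing application across the three tables:

```scala
import com.datastax.driver.core.{BatchStatement, Cluster, SimpleStatement}

object AtomicDelete extends App {
  val cluster = Cluster.builder().addContactPoint("127.0.0.1").build() // assumed contact point
  val session = cluster.connect("ks")                                  // assumed keyspace
  try {
    val pk = "some-key" // the shared partition key value (assumed)
    // A LOGGED batch: the batch log guarantees that all three deletes
    // eventually apply, or none of them do.
    val batch = new BatchStatement(BatchStatement.Type.LOGGED)
    batch.add(new SimpleStatement("DELETE FROM table1 WHERE pk = ?", pk))
    batch.add(new SimpleStatement("DELETE FROM table2 WHERE pk = ?", pk))
    batch.add(new SimpleStatement("DELETE FROM table3 WHERE pk = ?", pk))
    session.execute(batch)
  } finally cluster.close()
}
```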
You can check below links:
Cassandra batch query performance on tables having different partition keys
Cassandra batch query vs single insert performance
How single parition batch in cassandra function for multiple column update?
EDITED
In my case, the tables are different but the partition key is the same in all 3 tables. So is this a special case of a single-partition batch, or is it something entirely different?
For different tables, the partitions are also different, so this is a multi-partition batch. LOGGED batches are used to ensure atomicity across different partitions (different tables or different partition keys). UNLOGGED batches are used to ensure atomicity and isolation within a single-partition batch; if you use an UNLOGGED batch for a multi-partition batch, atomicity will not be ensured. The default is LOGGED, except for single-partition batches, where the default is UNLOGGED: a single-partition batch is treated as a single row mutation, and for a single row update there is no need for a LOGGED batch. To learn more about LOGGED and UNLOGGED batches, see the link I have shared below.
Multi-partition batches should only be used to achieve atomicity for a few writes on different tables. Apart from this, they should be avoided because they're too expensive.
Single-partition batches can be used to achieve atomicity and isolation. They're not much more expensive than normal writes.
But you can use a multi-partition LOGGED batch here, as the number of partitions involved is limited.
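For contrast, here is a minimal single-partition sketch (again Java driver 3.x from Scala; the table ks.events and its columns are invented): both statements hit the same partition of one table, so they apply as a single mutation and UNLOGGED is safe and cheap:

```scala
import com.datastax.driver.core.{BatchStatement, Session, SimpleStatement}

// Assumes a `session` connected as in the earlier sketch, and a table
// ks.events with PRIMARY KEY ((pk), seq). Both inserts target partition
// "k1", so the whole batch is one row mutation: UNLOGGED is appropriate.
def singlePartitionBatch(session: Session): Unit = {
  val batch = new BatchStatement(BatchStatement.Type.UNLOGGED)
  batch.add(new SimpleStatement(
    "INSERT INTO events (pk, seq, v) VALUES (?, ?, ?)", "k1", Int.box(1), "a"))
  batch.add(new SimpleStatement(
    "INSERT INTO events (pk, seq, v) VALUES (?, ?, ?)", "k1", Int.box(2), "b"))
  session.execute(batch)
}
```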
A very useful doc on batches, with all the details, is linked below. If you read it, all the confusion will be cleared up.
Cassandra - to BATCH or not to BATCH
Partition Key tokens vs row partition
Table partitions and partition key tokens are different things. The partition key is used to decide on which node the data resides: for the same partition key, the tokens are the same and the data therefore resides on the same node. For different partition keys, or for the same key in different tables, the writes are different row mutations. You cannot get data with one query for different partition keys, or from different tables, even for the same key; the coordinator node has to treat them as different requests or mutations and fetch the actual data from the replica nodes separately. This is the internal structure of how C* stores data.
Every table even has its own directory structure, making it clear that a partition from one table will never interact with a partition of another.
Does the same partition key in different cassandra tables add up to cell theoretical limit?
To know the details of how C* maps data, check this link:
Understanding How CQL3 Maps to Cassandra's Internal Data Structure
Yes, this is a good use-case for BATCH according to the Cassandra documentation.
See the "Note:" on https://docs.datastax.com/en/dse/6.0/cql/cql/cql_using/useBatchGoodExample.html
If there are two different tables in the same keyspace and the two tables have the same partition key, this scenario is considered a single partition batch. There will be a single mutation for each table. This happens because the two tables could have different columns, even though the keyspace and partition are the same. Batches allow a caller to bundle multiple operations into a single batch request. All the operations are performed by the same coordinator. The best use of a batch request is for a single partition in multiple tables in the same keyspace. Also, batches provide a guarantee that mutations will be applied in a particular order.
Specifically, if they have the same partition key, this will be considered a single-partition batch. Hence: "The best use of a batch request is for a single partition in multiple tables in the same keyspace."

Efficient Filtering on a huge data frame in Spark

I have a Cassandra table with 500 million rows. I would like to use Spark to filter based on a field which is a partition key in Cassandra.
Can you suggest the best possible/most efficient approach to filter in Spark/Spark SQL based on the list of keys, which is also pretty large?
Basically I need only those rows from the Cassandra table which are present in the list of keys.
We are using DSE and its features.
The approach I am using is taking a lot of time, roughly around an hour.
Have you checked repartitionByCassandraReplica and joinWithCassandraTable?
https://github.com/datastax/spark-cassandra-connector/blob/75719dfe0e175b3e0bb1c06127ad4e6930c73ece/doc/2_loading.md#performing-efficient-joins-with-cassandra-tables-since-12
joinWithCassandraTable utilizes the Java driver to execute a single query for every partition required by the source RDD, so no unneeded data will be requested or serialized. This means a join between any RDD and a Cassandra table can be performed without doing a full table scan. When performed between two Cassandra tables which share the same partition key, this will not require movement of data between machines. In all cases this method will use the source RDD's partitioning and placement for data locality.

The method repartitionByCassandraReplica can be used to relocate data in an RDD to match the replication strategy of a given table and keyspace. The method will look for partition key information in the given RDD and then use those values to determine which nodes in the cluster would be responsible for that data.
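Here is a hedged sketch of that pattern using the connector's Scala RDD API; the keyspace "ks", table "items" and partition key column "id" are assumptions for illustration:

```scala
import com.datastax.spark.connector._
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical key class matching the table's partition key column.
case class Key(id: String)

object FilterByKeys {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("filter-by-keys")
      .set("spark.cassandra.connection.host", "127.0.0.1") // assumed contact point
    val sc = new SparkContext(conf)

    val wantedKeys: Seq[String] = Seq("k1", "k2", "k3") // your (large) list of keys
    val keysRdd = sc.parallelize(wantedKeys).map(Key)

    val matching = keysRdd
      .repartitionByCassandraReplica("ks", "items") // co-locate keys with their replicas
      .joinWithCassandraTable("ks", "items")        // one targeted query per key, no full scan
    matching.collect().foreach(println)
  }
}
```

Each element of the result pairs a key with the matching Cassandra row, so only the requested partitions are ever read.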

how to achieve even distribution of partitions keys in Cassandra

We modeled our data in a Cassandra table with a partition key, let's say "pk". We have a total of 100 unique values for pk and our cluster size is 160. We are using the random partitioner. When we add data to Cassandra (with a replication factor of 3) for all 100 partitions, I noticed that those 100 partitions are not distributed evenly: one node has as many as 7 partitions, while a lot of nodes have only 1 or none. Given that we are using the random partitioner, I expected the distribution to be reasonably even. Because 7 partitions sit on the same node, that is creating a hot spot for us. Is there a better way to distribute partitions evenly?
Any input is appreciated.
Thanks
I suspect the problem is the low cardinality of your partition key. With only 100 possible values, it's not unexpected that several values end up hashing to the same nodes.
If you have 160 nodes, then only having 100 possible values for your partition key will mean you aren't using all 160 nodes effectively. An even distribution of data comes from inserting a lot of data with a high cardinality partition key.
So I'd suggest you figure out a way to increase the cardinality of your partition key. One way to do this is to use a compound partition key by including some part of your clustering columns or data fields into your partition key.
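As a hedged sketch of that technique (DataStax Java driver from Scala; all names invented for illustration): adding a bucket component to the partition key multiplies the number of distinct partitions the partitioner can spread across the ring.

```scala
import com.datastax.driver.core.Cluster

object BucketedKey extends App {
  val cluster = Cluster.builder().addContactPoint("127.0.0.1").build() // assumed contact point
  val session = cluster.connect()
  try {
    // Compound partition key (pk, bucket): with, say, 16 buckets the 100
    // distinct pk values become up to 1600 distinct partitions, giving the
    // partitioner far more tokens to spread across the 160 nodes.
    session.execute(
      """CREATE TABLE IF NOT EXISTS ks.data (
        |  pk      text,
        |  bucket  int,
        |  id      timeuuid,
        |  payload blob,
        |  PRIMARY KEY ((pk, bucket), id)
        |)""".stripMargin)
  } finally cluster.close()
}
```

The trade-off is that a read for one pk now has to query every bucket, so keep the bucket count small.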
You might also consider switching to the Murmur3Partitioner, which generally gives better performance and is the current default partitioner on the newest releases. But you'd still need to address the low cardinality problem.
