Hampered read latency of all column families in Cassandra

I was trying to read data from 100 partitions of one of the Cassandra column families from Spark; the partitions are 150-200 MB in size. When I run this Spark job against that column family, the read latency of all the other column families also suffers. What could be the possible reasons for this behavior, and how can it be resolved?

Related

Cassandra (DSE) - Need suggestion on using PER PARTITION LIMIT on huge data

I have a table with around 4M partitions, and each partition contains 4 rows, so the table holds about 16M rows in total (wide columns). Since our table is a time series, we only need the latest row (version) for each partition_key. I can achieve the desired result with the query below, but I'm worried it will add load on the cluster and be time-consuming. I'd like to know if there is a better way to achieve this, or if this is the only way.
SELECT some_value FROM some_table PER PARTITION LIMIT 1;
Using PER PARTITION LIMIT won't have a negative impact on performance. In fact, it's efficient for achieving what you need from each partition, since only the first row is returned and Cassandra doesn't have to iterate over the other rows in the partition. Cheers!
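As a minimal sketch of why this works (the table and column names here are assumptions, not from the original post): with a descending clustering column, the first row of each partition is the newest one, so PER PARTITION LIMIT 1 returns exactly the latest version per partition_key.
CREATE TABLE some_table (
    partition_key text,
    event_time timestamp,
    some_value text,
    PRIMARY KEY (partition_key, event_time)
) WITH CLUSTERING ORDER BY (event_time DESC);

-- Returns only the first (i.e. latest) row of every partition
SELECT partition_key, event_time, some_value
FROM some_table
PER PARTITION LIMIT 1;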

Cassandra - Composite Partition Keys and Performance

I am working on the keyspace and tables for a Cassandra environment. I understand the size limitations of Cassandra and how to deal with partition keys to keep things optimized. However, I am having a disagreement with a developer about how to handle the keys. Is there any downside to having a key that groups a large amount of data per partition rather than a small amount? For example,
I have 100k records. I can create a key that partitions them into groups of 10k records, or a key that partitions them into groups of 10 records (by day). So either I store 10k records in each of 10 partitions, or 10 records in each of 10,000 partitions.
Keep in mind that having more columns in the key requires you to specify those columns in your select statements, which sometimes isn't desired. The more partitions the better - whether by picking a better single column or having multiple columns.
Cassandra reads data via the partition key, and performance can be helped further by using clustering columns. If you have a large partition, the entire partition must be read (from memory and disk) and then merged to produce the output, which will definitely slow you down.
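A hedged sketch of the two approaches described above (table and column names are hypothetical): bucketing the partition key by day spreads the data over many small partitions instead of a few wide ones, at the cost of having to supply the day in every query.
-- Option 1: one wide partition per source (fewer, larger partitions)
CREATE TABLE events_by_source (
    source_id text,
    event_time timestamp,
    payload text,
    PRIMARY KEY (source_id, event_time)
);

-- Option 2: composite partition key with a day bucket (more, smaller partitions)
CREATE TABLE events_by_source_day (
    source_id text,
    day date,
    event_time timestamp,
    payload text,
    PRIMARY KEY ((source_id, day), event_time)
);

-- Option 2 requires both partition key columns in the WHERE clause
SELECT payload FROM events_by_source_day
WHERE source_id = 'sensor-1' AND day = '2020-01-15';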

maximum number of partitions limit per table in cassandra

I am new to Cassandra. As per my understanding, depending on the configured partitioner (Murmur3Partitioner or RandomPartitioner) there is a partition limit per table. If we configure a keyspace with Murmur3Partitioner, that would enforce a limit of 2^63 partitions per table. While inserting a row, if the new insertion tries to create a partition beyond that limit, the insertion would fail (i.e. if I get more than 2^63 unique combinations of row keys per table).
Can anyone please clarify whether my understanding of the partition limit on a column family is correct?
Also, as per my understanding, there is no way to increase the partition limit even by adding nodes to the cluster; please correct me if I am wrong.
The range of values for the Murmur3 partitioner is actually -2^63 to +2^63-1. That's a massive number. You aren't going to run out of values in any practical sense. No worries.
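If you want to see this for yourself, a quick check from cqlsh (assuming a default Murmur3Partitioner setup; some_table and some_key are hypothetical names) shows the configured partitioner and the signed 64-bit tokens it assigns to your partition keys.
-- Which partitioner the cluster is using
SELECT partitioner FROM system.local;

-- Tokens are signed 64-bit values in the range -2^63 .. 2^63-1
SELECT some_key, token(some_key) FROM some_table LIMIT 3;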

Cassandra schema design: should more columns go into partition vs. cluster?

In my case I have a table structure like this:
CREATE TABLE table_1 (
    entity_uuid text,
    fk1_uuid text,
    fk2_uuid text,
    int_timestamp bigint,
    cnt counter,
    PRIMARY KEY (entity_uuid, fk1_uuid, fk2_uuid, int_timestamp)
);
The text columns are made up of random strings. However, only entity_uuid is truly random and evenly distributed. fk1_uuid and fk2_uuid have much lower cardinality and may be sparse (sometimes fk1_uuid=null or fk2_uuid=null).
In this case, I can either define only entity_uuid as the partition key or entity_uuid, fk1_uuid, fk2_uuid combination as the partition key.
And this is a LOOKUP-type table, meaning we don't plan to do any aggregation or slicing-and-dicing on it. The rows will be rotated out, since we insert with a TTL defined for each row.
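For concreteness, a sketch of the two primary key layouts being weighed here (table names are made up; the columns follow the definition above):
-- Option A: only entity_uuid is the partition key;
-- fk1_uuid, fk2_uuid and int_timestamp become clustering columns
CREATE TABLE table_1_a (
    entity_uuid text,
    fk1_uuid text,
    fk2_uuid text,
    int_timestamp bigint,
    cnt counter,
    PRIMARY KEY (entity_uuid, fk1_uuid, fk2_uuid, int_timestamp)
);

-- Option B: the (entity_uuid, fk1_uuid, fk2_uuid) combination is the partition key;
-- int_timestamp is the only clustering column
CREATE TABLE table_1_b (
    entity_uuid text,
    fk1_uuid text,
    fk2_uuid text,
    int_timestamp bigint,
    cnt counter,
    PRIMARY KEY ((entity_uuid, fk1_uuid, fk2_uuid), int_timestamp)
);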
Can someone enlighten me:
1. What is the downside of having too many partition keys with very few rows in each? Is there a hit/cost at the storage-engine level?
2. My understanding is that clustering keys are ALWAYS sorted. Does that mean having text columns in the clustering key will always incur a tree-balancing cost?
3. Well, you can tell where my heart lies by now. However, when all the rows in a partition have TTL'd out, does that partition still live on, or is there a way for the DB engine to remove it as well?
Thanks,
Bing
The major and possibly most significant difference between having big partitions and small partitions is the ability to do range scans. If you want to be able to do scan queries like
SELECT * FROM table_1 WHERE entity_uuid = x AND fk1_uuid > something
Then you'll need to have the clustering column for performance, otherwise this query would be difficult (a multi-get at best, a full table scan at worst). I've never heard of any case where having too many partitions is a drag on performance, but having too wide a partition (i.e. lots of clustering column values) can cause issues once you get into the 1B+ cell range.
In terms of the cost of clustering, it is basically free at write time (the in-memory sort is very, very fast), but you can incur costs at read time as partitions become spread amongst various SSTables. Small partitions which are written once will not incur the merge penalty, since they will most likely exist in only one SSTable.
TTL'd partitions will be removed but be sure to read up on GC_GRACE_SECONDS to see how Cassandra actually deals with removing data.
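As a rough illustration of the TTL point (values and table name are assumptions; a regular text column is used here instead of the counter from the question, since TTLs apply to non-counter writes): expiry and tombstone garbage collection can be set at the table level, and rows can also be expired per write.
CREATE TABLE table_1_ttl (
    entity_uuid text,
    fk1_uuid text,
    fk2_uuid text,
    int_timestamp bigint,
    val text,
    PRIMARY KEY (entity_uuid, fk1_uuid, fk2_uuid, int_timestamp)
) WITH default_time_to_live = 86400    -- rows expire after 1 day
  AND gc_grace_seconds = 864000;       -- tombstones become purgeable after 10 days (the default)

-- Or set the TTL per write:
INSERT INTO table_1_ttl (entity_uuid, fk1_uuid, fk2_uuid, int_timestamp, val)
VALUES ('e1', 'f1', 'f2', 1577836800000, 'x') USING TTL 86400;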
TL;DR
Everything is dependent on your read/write pattern
No Range Scans? No need for clustering keys
Yes Range Scans? Clustering keys a must

Cassandra column family limitation

I am wondering, is there any limit on the number of column families in a keyspace?
And is there any limit on the number of rows in a column family?
Which schema is better: to create column families
CF1_customer1, CF2_customer1,
CF1_customer2, CF2_customer2,
or to have just CF1 and CF2 with customer_id as the primary key?
There are overheads per column family so you will find schema operations slow with more than a few hundred CFs. The only (practical) limit to the number of rows is your cluster size. People routinely use Cassandra with many billions of rows.
For your example, it is much better to have CF1 and CF2 with customer_id as the row key. This will be more efficient and scale better, because the data is partitioned according to the row key so different customers will be stored on different nodes.
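A minimal sketch of the recommended layout (the non-key column names are assumptions): a single pair of tables shared by all customers, with customer_id as the partition key, instead of one column family per customer.
-- Instead of CF1_customer1, CF1_customer2, ... one table serves all customers
CREATE TABLE cf1 (
    customer_id text,
    item_id text,
    data text,
    PRIMARY KEY (customer_id, item_id)
);

-- All rows for one customer live in that customer's partition
SELECT data FROM cf1 WHERE customer_id = 'customer1';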
