I understand that two tables with the same partition key columns and values generate the same token. Does that mean that all the cells of this partition in both tables are actually in the same partition? How does Cassandra store data internally?
For example:
CREATE TABLE table1 (emp_id int PRIMARY KEY, name text, role text);
CREATE TABLE table2 (emp_id int PRIMARY KEY, name text, role text);
INSERT INTO table1(emp_id, name, role) VALUES (1, 'sahil', 'MTS');
INSERT INTO table2(emp_id, name, role) VALUES (1, 'sahil', 'MTS');
SELECT token(emp_id) FROM table1 WHERE token(emp_id) = token(1);
 system.token(emp_id)
----------------------
 7447223576279188802
SELECT token(emp_id) FROM table2 WHERE token(emp_id) = token(1);
 system.token(emp_id)
----------------------
 7447223576279188802
For your example: because both tables have the same partition key, identical values map to the same token. On insert, the hash function is applied to the partition key value to determine which replicas will get the data. If you use the Murmur3 partitioner (the default), the token value is consistent: the same partition key definition and value always produce the same token. You can reference this page for more background:
https://docs.datastax.com/en/cassandra-oss/3.x/cassandra/architecture/archDataDistributeHashing.html
Rows (items of data) that belong to the same table and have the same partition key are said to be in the same partition. The most important consequence of being in the same partition is that the data is guaranteed to be co-located: handled by the same replica nodes and, in ScyllaDB, even by the same CPU. This allows a partition to be scanned efficiently: all of the partition's data can be read from a single node, and Cassandra doesn't need to go back and forth between replicas to read the various pieces of the partition and combine them. This is also what allows a node holding the partition's full data to keep it sorted by the clustering key: a process called compaction merges different pieces of a sorted partition (these are SSTables, or sorted string tables) into a bigger sorted partition.
When you have two different tables in the same keyspace and use the same partition key in both, they are not stored physically on disk together, because each table has its own set of SSTables (files on disk), so in that sense they are not "in the same partition". However, the co-location property I mentioned earlier still holds (if the two tables are in the same keyspace): two identically-keyed partitions in the two tables will be stored on exactly the same nodes. Why is this important or useful? Usually it isn't. One situation where this knowledge becomes useful is that it can sometimes be used to achieve an atomic batch write to both tables at once, exploiting the fact that all replicas will see both writes together, whereas two writes to two tables usually go to different nodes at different times.
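As a minimal sketch of that idea, using the two tables from the question, a logged batch groups the identically-keyed writes so all replicas apply them together:

-- Both rows hash to the same token, so the same replica set owns them
-- and the batch does not need to span multiple replica sets.
BEGIN BATCH
    INSERT INTO table1 (emp_id, name, role) VALUES (1, 'sahil', 'MTS');
    INSERT INTO table2 (emp_id, name, role) VALUES (1, 'sahil', 'MTS');
APPLY BATCH;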
I have a table that stores events
CREATE TABLE active_events (
event_id VARCHAR,
number VARCHAR,
....
start_time TIMESTAMP,
PRIMARY KEY (event_id, number)
);
Now, I want to select the event with the highest start_time. Is it possible? I've tried to create a secondary index, but with no success.
This is a query I've created
SELECT * FROM active_events ORDER BY start_time LIMIT 1;
But the error says ORDER BY is only supported when the partition key is restricted by an EQ or an IN.
Should I create some kind of materialized view? What should I do to execute my query?
This is an anti-pattern in Cassandra. To order the data you need to read all of it and find the highest value, and that requires scanning data on multiple nodes, which will be very slow.
A materialized view also won't help much, as ordering only exists inside an individual partition, so you would need to put all your data into a single partition, which could be huge and would leave the data imbalanced.
I can only think of the following workaround:
Have an additional table that has all the columns of the original table, but with a fake partition key and no clustering columns
Do inserts into that table in parallel with the normal inserts, but use a fixed value for the fake partition key, and explicitly set the record's write timestamp equal to start_time (don't forget to multiply by 1000, as write timestamps use microseconds). That record is then guaranteed to be the one with the highest timestamp, as Cassandra won't overwrite it with data carrying a lower timestamp. A sketch is shown after this list.
But this doesn't solve the data-skew problem, and all of this traffic will be handled by a fixed number of nodes equal to the RF.
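A minimal sketch of that workaround, assuming a hypothetical companion table named latest_event (the dummy column and all values are illustrative):

-- Fixed partition key, no clustering columns, so the table holds exactly
-- one row: the last-write-wins winner.
CREATE TABLE latest_event (
    dummy int,
    event_id varchar,
    number varchar,
    start_time timestamp,
    PRIMARY KEY (dummy)
);

-- Write with an explicit timestamp derived from start_time
-- (milliseconds * 1000, because write timestamps are in microseconds);
-- the row with the highest start_time survives.
INSERT INTO latest_event (dummy, event_id, number, start_time)
VALUES (0, 'evt-42', '123', '2020-09-23 10:00:00+0000')
USING TIMESTAMP 1600855200000000;

-- Reading the event with the highest start_time is a single-partition read:
SELECT * FROM latest_event WHERE dummy = 0;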
Another alternative - use another database.
This type of query isn't valid in big data because it requires a full table scan and doesn't scale. It works in traditional relational databases because the dataset is smaller. Imagine you had billions of partitions, each with thousands of rows, spread across hundreds of nodes. A full table scan in a large cluster would take a very long time if it were allowed.
The error:
ORDER BY is only supported when the partition key is restricted by an EQ or an IN
gets returned because you can only sort the results provided (a) the query is restricted to a partition key, and (b) the rows are ordered by a clustering column. You cannot sort the results based on a column that is not part of the clustering key. Cheers!
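For contrast, a query that satisfies both conditions against the active_events table above (partition key restricted by EQ, ordering on the clustering column; 'evt-42' is just a placeholder value):

-- Valid: the partition is fixed, and the ordering column is a clustering key.
SELECT * FROM active_events
WHERE event_id = 'evt-42'
ORDER BY number DESC
LIMIT 1;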
Let's say I have a table like this
CREATE TABLE request(
transaction_id text,
request_date timestamp,
data text,
PRIMARY KEY (transaction_id)
);
The transaction_id is unique, so as far as I understand each partition in this table would have exactly one row. I'm not sure whether this causes a performance issue at the OS level, perhaps because Cassandra creates a file for each partition, leaving the host OS with lots of files to manage. As a note, I'm not sure how Cassandra creates its files for its tables.
In this scenario I can find a request by its transaction_id like
select data from request where transaction_id = 'abc';
If the previous assumption is correct, would a different approach be something like the following?
CREATE TABLE request(
the_date date,
transaction_id text,
request_date timestamp,
data text,
PRIMARY KEY ((the_date), transaction_id)
);
The value of the_date would change every day, so a new partition would be created for each day.
In this scenario, the client would always have to know the_date, so I could find a request using the following query:
select data from request where the_date = '2020-09-23' and transaction_id = 'abc';
Thank you in advance for your kind help!
Cassandra doesn't create a separate file for each partition. One SSTable file may contain multiple partitions. Partitions that consist of only one row are often called "skinny rows". They aren't inherently bad, but may cause some performance issues:
to access such a partition you still need to read a whole block of compressed data (by default 64KB) and decompress it. If the access pattern is really random, such blocks get evicted from the file cache and need to be re-read from disk. In this case it may be useful to decrease the block size (see the sketch after this list)
if you have a lot of such partitions per table per node, this may heavily increase the size of the bloom filter, because each partition has a separate entry in it. I have seen customers with tens of gigabytes of memory allocated to the bloom filter alone because of skinny partitions
so it really depends on the amount of data, access patterns, etc. It could be good or bad depending on those factors.
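As a sketch of the block-size tuning mentioned in the first point, applied to the request table from the question (the 4KB value is only an illustrative choice, not a recommendation):

-- Smaller compression chunks mean less data decompressed per random read.
ALTER TABLE request
WITH compression = {'class': 'LZ4Compressor', 'chunk_length_in_kb': 4};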
If you have the date available and want to use it as part of the partition key, that may also be inadvisable: if you're writing and reading a lot of data on a given day, only some nodes will handle that load. These are so-called "hot partitions".
You may implement so-called bucketing, where you derive the partition key from the data. How exactly depends on the data available. For example, if you have a date plus a transaction ID as a string, you may build the partition key as date + first character of that string. This gives you N partition keys per day, distributed between nodes, eliminating the hot-partition problem.
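A minimal sketch of that bucketing idea (the bucket column and the rule for deriving it are illustrative assumptions):

-- One partition per (day, bucket); the client derives bucket from the
-- transaction ID (e.g. its first character) before writing or reading.
CREATE TABLE request_by_day (
    the_date date,
    bucket text,
    transaction_id text,
    request_date timestamp,
    data text,
    PRIMARY KEY ((the_date, bucket), transaction_id)
);

SELECT data FROM request_by_day
WHERE the_date = '2020-09-23' AND bucket = 'a' AND transaction_id = 'abc';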
See the corresponding best practices doc from DataStax about that topic.
Let me not get into the different types of keys, but let me mention and briefly explain the two keys you use in your question.
PRIMARY KEY
A row MUST have a unique primary key (which identifies the row with regard to equality). The primary key can be a collection of columns (as in your second example with (the_date), transaction_id) or just a single column (as in your first example with transaction_id). Either way, the important part is that the primary key must be unique in order to identify the row.
PARTITION KEY
The partition key is actually derived from the primary key. You can have a composite partition key (you used that syntax in your second example to force (the_date) to be the partition key; this is actually not necessary here, since by default the first column of the primary key becomes the partition key).
Cassandra uses the hashed value of the (combined) partition key values to determine on which node(s) the data is stored (or retrieved from when requesting data).
So the answer to your question is: it's totally OK to use transaction_id as the primary and partition key. And that is not bad practice; it's more or less common practice whenever your data has a unique identifier that can be stored in one row and fulfills your query needs.
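To make the distinction concrete, a few hypothetical key layouts (table and column names are made up for illustration):

-- Single-column primary key: id is both primary and partition key.
CREATE TABLE t1 (id text, data text, PRIMARY KEY (id));

-- Compound primary key: id is the partition key, ts a clustering column,
-- so rows are sorted by ts within each id partition.
CREATE TABLE t2 (id text, ts timestamp, data text, PRIMARY KEY (id, ts));

-- Composite partition key: (id, day) together determine the partition.
CREATE TABLE t3 (id text, day date, ts timestamp, data text,
                 PRIMARY KEY ((id, day), ts));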
More info:
Hashing Explained: Consistent hashing
Defining a basic primary key
Defining a multi-column partition key
I'm inspecting the output of sstabledump to gain a better understanding of the Cassandra data model, and I have some questions.
From the output of sstabledump it seems that:
a table is a list of partitions (split by partition key)
a partition is a list of rows (split according to clustering key)
a row is a map of key-value pairs, where the keys belong in a predefined list
Question 1: For each partition, as well as for each row inside a partition, there is a position key. What does this value correspond to? Physical storage details? And how exactly?
Question 2: Each row inside each partition has a "type" : "row" key-value pair. Could this type be anything else? If yes, what? If not:
why have a value that is always the same?
why is Cassandra classified as wide-column and other similar terms? It looks more like two-level row storage.
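For reference, this is roughly the shape of the sstabledump output in question. The snippet is hand-written and abbreviated for illustration; the real output carries more fields and real offsets:

[
  {
    "partition" : {
      "key" : [ "42" ],
      "position" : 0
    },
    "rows" : [
      {
        "type" : "row",
        "position" : 33,
        "clustering" : [ "2020-09-23 10:00:00.000Z" ],
        "cells" : [
          { "name" : "data", "value" : "abc", "tstamp" : "2020-09-23T10:00:00.000001Z" }
        ]
      }
    ]
  }
]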
The partition token is the Murmur3 hash of whatever you assigned as the partition key. Consistent hashing is used with that hash to determine which node in the cluster the partition and its replicas belong to. Within each partition, data is sorted by clustering key, and then by cell name within the row. This structure is used so that redundant things, such as the timestamps of a row inserted all at once, are stored only once, as a sequence of vint-encoded deltas from the partition level, to save space.
On disk, partitions are sorted in order of this hashed key. The position value in the output just refers to where in the SSTable's data file the partition or row is located (a decompressed byte offset). The type can also identify that spot as a static block, which is located at the beginning of a partition and holds any static cells, or as a range tombstone marker (bound start or end). Note that sstabledump sometimes repeats values in the JSON for readability even when they are not physically written on disk (e.g. repeated timestamps).
You can have many of these rows inside a partition; a common data model for time series, for example, is to use the timestamp as the clustering key, which makes very wide partitions with millions of rows. Pre-3.0, the data storage was closer to Bigtable's design. It was essentially a Map<byte[], SortedMap<byte[], Cell>> where the Comparator of the sorted map was changed based on the schema. It did not differentiate rows and columns within a partition, which led to massive amounts of redundant data, so it was redesigned to fit the query language better.
Some more references:
Explanation of the motivation for the 3.0 change by DataStax here
Blog post by TLP with a good, detailed explanation of the new disk format
CASSANDRA-8099
It is known that a Cassandra partition has a theoretical limit of 2 billion cells. But how does that work in a situation like this one below:
create table table1 (
some_id int PRIMARY KEY,
some_name text
);
create table table2 (
other_id int PRIMARY KEY,
other_name text
);
Assume we have 1 billion cells in partition (some_id = 1) on table1.
If we had another 1 billion cells in partition (other_id = 1) on table2, would those add up to the 2 billion theoretical limit?
In other words, are equal partition keys in different tables stored together?
Different tables have different partitions. This makes the structure of any particular partition homogeneous (it will always follow the prescribed schema of a single table), which allows for optimizations.
If you look at the storage engine under the hood you'll see that every table even has its own directory structure, making it clear that a partition from one table will never interact with a partition of another. (See /var/lib/cassandra/.)
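As an illustration, the on-disk layout looks roughly like this (the keyspace name, table IDs, and file names here are made up):

/var/lib/cassandra/data/my_keyspace/table1-1b255f4dfd6c11ebb2e1a1b2c3d4e5f6/nb-1-big-Data.db
/var/lib/cassandra/data/my_keyspace/table2-2c366e5e0e7d22fc83f2b2c3d4e5f6a7/nb-1-big-Data.db

Each table keeps its own SSTables, so partitions from table1 and table2 never share a file, and their cell counts never add up against a shared limit.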
With older versions of Cassandra, we were expected to build our own sorted rows using a special row of columns, because columns are stored sorted in Cassandra.
Is Cassandra 3.0 with CQL using the same concept when you create a PRIMARY KEY?
Say, for example, that I create a table like so:
CREATE TABLE my_table (
created_on timestamp,
...,
PRIMARY KEY (created_on)
);
Then I add various entries like so:
INSERT INTO my_table (created_on, ...) VALUES (1, ...);
...
INSERT INTO my_table (created_on, ...) VALUES (9, ...);
How does Cassandra manage the sort on the PRIMARY KEY? Will that happen on all nodes, or only on one set? (By a set I mean the replicas: if you have a cluster of 100 nodes with a replication factor of 4, would the primary key appear on 100 nodes, 25, or just 4? With older versions, it would only be on 4 nodes.)
In your case the primary key is the partition key, which used to be the row key. That means the data you are inserting will be present on 4 out of 100 nodes if the replication factor is set to 4.
In CQL you can add more columns to the primary key; these are called clustering keys. When querying C* with CQL, the result set might contain more than one row for a partition key. Those rows are logical and are stored in the partition whose partition key they share (but they vary in their clustering key values). The data in those logical rows is replicated as the partition is.
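A small sketch of that, with hypothetical names, where sensor_id is the partition key and reading_time the clustering key:

CREATE TABLE readings (
    sensor_id int,
    reading_time timestamp,
    value double,
    PRIMARY KEY (sensor_id, reading_time)
);

-- All rows for sensor_id = 7 live in one partition, sorted by reading_time,
-- and are replicated together on the same RF nodes.
SELECT * FROM readings WHERE sensor_id = 7;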
Have a look at the example for possible primary keys in the official documentation of the CREATE TABLE statement.
EDIT (row sorting):
C* keeps the partitions of a table in the order of their partition key values' hash codes. The ordering is therefore not straightforward, and the results of range queries by partition key values are not what you would expect them to be. But as partitions are in fact ordered, you can still do server-side pagination with the help of the token function.
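For instance, a sketch of token-based pagination over the my_table example (the literal 5 is just the last partition key seen on the previous page):

-- Fetch the next page of partitions in token order.
SELECT * FROM my_table
WHERE token(created_on) > token(5)
LIMIT 100;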
That said, you could employ the ByteOrderedPartitioner to achieve lexical ordering of your partitions. But it is very easy to create hotspots with that partitioner, and its use is generally discouraged.
The rows of a given partition are ordered by the actual values of their clustering keys. Range queries on those behave as you'd expect them to.