Querying Cassandra by a partial partition key

In Cassandra, I can create a composite partition key, separate from my clustering key:
CREATE TABLE footable (
column1 text,
column2 text,
column3 text,
column4 text,
PRIMARY KEY ((column1, column2))
)
As I understand it, querying by partition key is an extremely efficient (the most efficient?) method for retrieving data. What I don't know, however, is whether it's also efficient to query by only part of a composite partition key.
In MSSQL, this would be efficient, as long as components are included starting with the first (column1 instead of column2, in this example). Is this also the case in Cassandra? Is it highly efficient to query for rows based only on column1, here?

This is not the case in Cassandra, because it is not possible. Doing so will yield the following error:
Partition key part entity must be restricted since preceding part is
Check out this Cassandra 2014 SF Summit presentation from DataStax MVP Robbie Strickland titled "CQL Under the Hood." Slides 62-64 show that the complete partition key is used as the row key. With composite partition keys in Cassandra, you must query by all of the row key or none of it.
You can watch the complete presentation video here.

This is impossible in Cassandra because it would require a full table scan to resolve such a query. The location of the partition is determined by a hash of all members of the composite key, so supplying only part of the key is as good as supplying none of it. The only way to find the record would be to scan all keys and check whether they match.
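For illustration, here is a minimal sketch against the footable definition above (the values are made up, and the exact error text varies by Cassandra version):
-- Rejected: only part of the composite partition key is restricted
SELECT * FROM footable WHERE column1 = 'a';
-- Supported and efficient: the whole partition key is restricted
SELECT * FROM footable WHERE column1 = 'a' AND column2 = 'b';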

Related

Apache Cassandra stock data model design

I have a lot of data regarding stock prices and I want to try out Apache Cassandra for this purpose. But I'm not quite familiar with primary/partition/clustering keys.
My database columns would be:
Stock_Symbol
Price
Timestamp
My users will always filter on Stock_Symbol (where stock_symbol = XX) and then they might filter on a certain time range (greater/less than (or equal)). There will be around 30,000 stock symbols.
Also, what is the big difference when using another "filter", e.g. Exchange_ID (only two stock exchanges are available)?
Exchange_ID
Stock_Symbol
Price
Timestamp
So my users would first filter on the stock exchange (which is more or less a foreign key), then on the stock symbol (which is also more or less a foreign key). The data would be inserted/written in this order as well.
How do I have to choose the keys?
The Quick Answer
Based on your use-case and predicted query pattern, I would recommend one of the following for your table:
PRIMARY KEY (Stock_Symbol, Timestamp)
The partition key is made of Stock_Symbol, and Timestamp is the only clustering column. This will allow WHERE to be used with those two fields. If either is to be filtered on, filtering on Stock_Symbol will be required in the query, and it must come as the first condition in WHERE.
Or, for the second case you listed:
PRIMARY KEY ((Exchange_ID, Stock_Symbol), Timestamp)
The partition key is composed of Exchange_ID and Stock_Symbol, and Timestamp is the only clustering column. This will allow WHERE to be used with those three fields. If any of those three are to be filtered on, filtering on both Exchange_ID and Stock_Symbol will be required in the query, and they must come in that order as the first two conditions in WHERE.
See the last section of this answer for a few other variations that could also be applied based on your needs.
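As a rough sketch of what those two recommendations could look like in practice (the table names, column types, and sample values below are made up for illustration):
CREATE TABLE ticks_by_symbol (
    stock_symbol text,
    price decimal,
    ts timestamp,
    PRIMARY KEY (stock_symbol, ts)
);

-- Filter by symbol, optionally narrowing to a time range on the clustering column:
SELECT price, ts FROM ticks_by_symbol
WHERE stock_symbol = 'XX'
AND ts >= '2016-01-01' AND ts < '2016-02-01';

CREATE TABLE ticks_by_exchange_and_symbol (
    exchange_id int,
    stock_symbol text,
    price decimal,
    ts timestamp,
    PRIMARY KEY ((exchange_id, stock_symbol), ts)
);

-- Both partition key columns must be restricted before the clustering column:
SELECT price, ts FROM ticks_by_exchange_and_symbol
WHERE exchange_id = 1 AND stock_symbol = 'XX'
AND ts >= '2016-01-01';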
Long Answer & Explanation
Primary Keys, Partition Keys, and Clustering Columns
Primary keys in Cassandra, similar to their role in relational databases, serve to identify records and index them in order to access them quickly. However, due to the distributed nature of records in Cassandra, they serve a secondary purpose of also determining which node a given record should be stored on.
The primary key in a Cassandra table is further broken down into two parts: the partition key, which is mandatory and by default is the first column in the primary key, and optional clustering columns, which are all of the primary key fields that are not part of the partition key.
Here are some examples:
PRIMARY KEY (Exchange_ID)
Exchange_ID is the sole field in the primary key and is also the partition key. There are no additional clustering columns.
PRIMARY KEY (Exchange_ID, Timestamp, Stock_Symbol)
Exchange_ID, Timestamp, and Stock_Symbol together form a composite primary key. The partition key is Exchange_ID, and Timestamp and Stock_Symbol are both clustering columns.
PRIMARY KEY ((Exchange_ID, Timestamp), Stock_Symbol)
Exchange_ID, Timestamp, and Stock_Symbol together form a composite primary key. The extra parentheses group Exchange_ID and Timestamp into a single composite partition key, and Stock_Symbol is a clustering column.
PRIMARY KEY ((Exchange_ID, Timestamp))
Exchange_ID and Timestamp together form a composite primary key. The partition key is composed of both Exchange_ID and Timestamp. There are no clustering columns.
But What Do They Do?
Internally, the partitioning key is used to calculate a token, which determines on which node a record is stored. The clustering columns are not used in determining which node to store the record on, but they are used in determining order of how records are laid out within the node - this is important when querying a range of records. Records whose clustering columns are similar in value will be stored close to each other on the same node; they "cluster" together.
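For example, reusing the ticks_by_symbol sketch from the quick answer above (still hypothetical names), the on-disk clustering order is what makes "latest row in a partition" queries cheap:
-- One partition, rows already laid out in ts order; reverse it and take the newest:
SELECT price, ts FROM ticks_by_symbol
WHERE stock_symbol = 'XX'
ORDER BY ts DESC LIMIT 1;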
Filtering in Cassandra
Due to the distributed nature of Cassandra, fields can only be filtered on if they are indexed. This can be accomplished in a few ways, usually by being a part of the primary key or by having a secondary index on the field. Secondary indexes can cause performance issues according to DataStax Documentation, so it is typically recommended to capture your use-cases using the primary key if possible.
Any field in the primary key can have a WHERE clause applied to it (unlike unindexed fields which cannot be filtered on in the general case), but there are some stipulations:
Order Matters - The primary key fields in the WHERE clause must be in the order that they are defined; if you have a primary key of (field1, field2, field3), you cannot do WHERE field2 = 'value', but rather you must include the preceding fields as well: WHERE field1 = 'value' AND field2 = 'value'.
The Entire Partition Key Must Be Present - If applying a WHERE clause to the primary key, the entire partition key must be given so that the cluster can determine what node in the cluster the requested data is located in; if you have a primary key of ((field1, field2), field3), you cannot do WHERE field1 = 'value', but rather you must include the full partition key: WHERE field1 = 'value' AND field2 = 'value'.
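To make both rules concrete, here is a sketch with placeholder names, assuming PRIMARY KEY ((field1, field2), field3):
CREATE TABLE t (field1 text, field2 text, field3 text, val text, PRIMARY KEY ((field1, field2), field3));

-- OK: whole partition key restricted; the clustering column is optional
SELECT * FROM t WHERE field1 = 'a' AND field2 = 'b';
SELECT * FROM t WHERE field1 = 'a' AND field2 = 'b' AND field3 > 'c';

-- Rejected: partition key only partially restricted
SELECT * FROM t WHERE field1 = 'a';

-- Rejected (unless ALLOW FILTERING is added): clustering column without the partition key
SELECT * FROM t WHERE field3 = 'c';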
Applied to Your Use-Case
With the above info in mind, you can take the analysis of how users will query the database, as you've done, and use that information to design your data model, or more specifically in this case, the primary key of your table.
You mentioned that you will have about 30k unique values for Stock_Symbol and, further, that it will always be included in WHERE clauses. That sounds initially like a reasonable candidate for the partition key, as long as queries will include only a single value that they are searching for in Stock_Symbol (e.g. WHERE Stock_Symbol = 'value' as opposed to WHERE Stock_Symbol < 'value'). If a query is intended to return multiple records with multiple values in Stock_Symbol, there is a danger that the cluster will need to retrieve data from multiple nodes, which may result in performance penalties.
Further, if your users wish to filter on Timestamp, it should also be a part of the primary key, though wanting to filter on a range indicates to me that it probably shouldn't be a part of the partition key, so it would be a good candidate for a clustering column.
This brings me to my recommendation:
PRIMARY KEY (Stock_Symbol, Timestamp)
If it were important to distribute data based on both the Stock_Symbol and the Timestamp, you could introduce a pre-calculated time-bucketed field that is based on the time but with less cardinality, such as Day_Of_Week or Month or something like that:
PRIMARY KEY ((Stock_Symbol, Day_Of_Week), Timestamp)
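A sketch of that bucketed variant (Day_Of_Week here is an illustrative, application-computed bucket, which then has to be supplied at query time as well):
CREATE TABLE ticks_by_symbol_and_day (
    stock_symbol text,
    day_of_week int,         -- pre-computed bucket, e.g. 1-7
    price decimal,
    ts timestamp,
    PRIMARY KEY ((stock_symbol, day_of_week), ts)
);

-- The bucket is now part of the partition key, so it must appear in WHERE:
SELECT price, ts FROM ticks_by_symbol_and_day
WHERE stock_symbol = 'XX' AND day_of_week = 3 AND ts >= '2016-01-06';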
If you wanted to introduce another field to filtering, such as Exchange_ID, it could be a part of the partition key, which would mandate it being included in filters, or it could be a part of the clustering column, which would mean that it wouldn't be required unless subsequent fields in the primary key needed to be filtered on. As you mentioned that users will always filter by Exchange_ID and then by Stock_Symbol, it might make sense to do:
PRIMARY KEY ((Exchange_ID, Stock_Symbol), Timestamp)
Or to make it non-mandatory:
PRIMARY KEY (Stock_Symbol, Exchange_ID, Timestamp)

Cassandra: Is partition key also used in clustering?

Let's say I have a primary key like this: primary key (PK, CK).
Based on what I read (see refs), I think I can loosely describe the way Cassandra uses PK and CK as follows - PK will be used to decide which node(s) the data should go to and CK will be used for clustering (aka ordering) of data within that node.
Then, it seems PK is not used in clustering data within the node, and that sounds wrong. What if I have a simple primary key with just PK? Will Cassandra only distribute data across nodes and not order data within each node, since there is no clustering column?
refs:
https://docs.datastax.com/en/cql/3.1/cql/ddl/ddl_compound_keys_c.html
Difference between partition key, composite key and clustering key in Cassandra?
Then, it seems PK is not used in clustering data within the node, and that sounds wrong. What if I have a simple primary key with just PK? Will Cassandra only distribute data across nodes and not order data within each node, since there is no clustering column?
Good question. Let's try this out. I'll create a simple table and INSERT some data:
aploetz#cqlsh:stackoverflow> CREATE TABLE programs
(name text PRIMARY KEY, data text);
aploetz#cqlsh:stackoverflow> INSERT INTO programs (name) VALUES ('Tron');
aploetz#cqlsh:stackoverflow> INSERT INTO programs (name) VALUES ('Yori');
aploetz#cqlsh:stackoverflow> INSERT INTO programs (name) VALUES ('Quorra');
aploetz#cqlsh:stackoverflow> INSERT INTO programs (name) VALUES ('Clu');
aploetz#cqlsh:stackoverflow> INSERT INTO programs (name) VALUES ('Flynn');
aploetz#cqlsh:stackoverflow> INSERT INTO programs (name) VALUES ('Zuze');
Now, let's run a query that should answer your question:
aploetz#cqlsh:stackoverflow> SELECT name, token(name) FROM programs;
name | system.token(name)
--------+----------------------
Flynn | -1059892732813900311
Zuze | 1815531347795840810
Yori | 2854211700591734382
Quorra | 3079126743186967718
Tron | 6359222509420865788
Clu | 8304850648940574176
(6 rows)
As you can see, they are definitely not in order by name, which is the partition key and lone PRIMARY KEY. But, my query runs the token() function on name, which shows the hashed value of the partition key (name in this case). The results are ordered by that.
So to answer your question, Cassandra orders its partitions by the hashed value of the partition key. Note that this order is maintained throughout the cluster, not just on a single node. Therefore, results for an unbound query (not recommended to be run in a multi-node configuration) will be ordered by the hashed value of the partition key, regardless of the number of nodes in the cluster.
All data for a table will be written to the same SSTables, ordered by the (hashed) partition key. So yes, they are sorted.
I think what you're asking is why you can't use a partition key the same way you use a clustering key. For example, you can't do less than (<) or greater than (>) on a partition key. Since no single node has all the partition keys, this type of query would have to check with every node in your cluster to see whether it has any partition key that matches your query.
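For instance, against the programs table above (a sketch; the second query reuses the token() function shown earlier, which is the supported way to do range scans over partitions):
-- Rejected: range comparison directly on the partition key
SELECT * FROM programs WHERE name > 'Flynn';

-- Allowed: range over the hashed token of the partition key
SELECT * FROM programs WHERE token(name) > token('Flynn');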

How to get latest data from primary key of (user_id and date) in Cassandra [duplicate]


Cassandra Defining Primary key and alternatives

Here is a simple example of a user table in Cassandra. What is the best strategy for creating the primary key?
My requirements are
search by uuid
search by username
search by email
All the keys mentioned will be high-cardinality keys. Also, at any moment I will have only one of them available to search by.
PRIMARY KEY(uid,username,email)
What if I have only the username? Then the above primary key is not useful. I am not able to visualize a solution to achieve this using a compound primary key.
What are the other options? Should we go with a new table mapping username to uid, and then search the user table?
All the articles out there on the internet recommend not creating a secondary index on high-cardinality keys.
CREATE TABLE medicscity.user (
uid uuid,
fname text,
lname text,
user_id text,
email_id text,
password text,
city text,
state_id int,
country_id int,
dob timestamp,
zipcode text,
PRIMARY KEY (??)
)
How do we solve this kind of situation ?
Yes, you need to go with duplicate tables.
If you ever face a situation in Cassandra in which you have to query a table by column1, column2, or column3 independently, you will have to duplicate the tables.
Now, how much duplication you use is an individual choice.
In this example, you can either duplicate the table with its full data, or keep the full data only in the main table, with column1 as the partition key and column2 and column3 as the rest of the primary key.
Then create a new table with the same primary key columns but partitioned on column2, and another one partitioned on column3.
In that case you only duplicate the key rows, but you end up querying the data twice: once from the lookup table, and once from the full-fledged table.
Big data technology is there to speed up computation and let your system scale horizontally, and it comes at the expense of disk/storage. Just look at everything: even its foundation, the replication factor, duplicates data.
Your PRIMARY KEY (uuid, username, email) doesn't fit your requirement, because you can't filter on a clustering column without restricting the partition key, and you can't filter on the second clustering column without restricting the first clustering column.
For example, you cannot search for username without uuid in the WHERE clause, and you cannot search for email without both uuid and username.
All you need is the denormalization and duplicate data.
Denormalization and duplication of data is a fact of life with Cassandra. Don’t be afraid of it. Disk space is generally the cheapest resource (compared to CPU, memory, disk IOPs, or network), and Cassandra is architected around that fact. In order to get the most efficient reads, you often need to duplicate data.
In your case, you need to create 3 tables that hold the same columns (the data you want to get), but these 3 tables will have different PRIMARY KEYs: one with uuid as the PK, one with username as the PK, and one with email as the PK. :)
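A sketch of those three tables (the table names are made up, and only a few of the user columns are shown):
CREATE TABLE user_by_uid (
    uid uuid,
    user_id text,
    email_id text,
    fname text,
    lname text,
    -- ... remaining user columns ...
    PRIMARY KEY (uid)
);

CREATE TABLE user_by_username (
    user_id text,
    uid uuid,
    email_id text,
    fname text,
    lname text,
    PRIMARY KEY (user_id)
);

CREATE TABLE user_by_email (
    email_id text,
    uid uuid,
    user_id text,
    fname text,
    lname text,
    PRIMARY KEY (email_id)
);

-- Each lookup then reads exactly one partition:
SELECT * FROM user_by_username WHERE user_id = 'jdoe';
The application (or a logged batch) writes to all three tables on every insert or update, trading extra disk for fast single-partition reads.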

row key in cassandra table

I am new to Cassandra, and I am confused about the difference between the row key and the partition key in Cassandra.
I am creating a table like:
Create table events( day text, hour text, dip text, sip text, count counter,
primary key((day,hour), dip, sip));
As per my understanding, in the above table the day and hour columns form the partition key and the dip and sip columns form the clustering key.
My understanding is that the row key is nothing but the partition key, i.e. the day and hour columns form the row key.
Is my understanding correct? Can anyone clarify this?
Is my understanding correct? Can anyone clarify this?
Yes, your understanding is correct. The row key is the "old school" way of referring to a partition key. The partition key (as you probably understand) is the part of the CQL PRIMARY KEY which determines where the data is stored in the cluster. In your case, data within your partition keys will be sorted by dip and sip (your clustering keys).
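For example (made-up values), once the full partition key is restricted you can range over the clustering columns in their stored order:
-- One partition (day + hour); rows come back ordered by dip, then sip:
SELECT * FROM events
WHERE day = '2016-02-21' AND hour = '09' AND dip >= '10.0.0.0';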
You should give John Berryman's article Understanding How CQL3 Maps To Cassandra’s Internal Data Structure a read. It does a great job of explaining how your table structures map "under the hood."
