Misunderstanding on Composite Key for Cassandra

I have to test different data models for Cassandra. I'm thinking about using a composite key made of key1:key2 as the row key.
With this configuration, for example, I can query for all the rows that have a specific key1 value and any key2 value, but not the other way around (all the rows with a specific key2 value and any key1).
Is that right?
Thanks in advance,
Cesare

If you use the Order Preserving Partitioner (OPP), then yes, the keys will be stored sorted, and you can get slices over a range of keys, e.g. A:A to A:Z -- but not any:A to any:Z.
But OPP is not guaranteed to distribute the keys evenly across the nodes, and you can end up with "hot spots" of too many or too few keys on a node. You probably want to use the Random Partitioner (RP), which distributes keys across all nodes by storing them by hash.
However, since Columns are stored sorted, using Composite values can be pretty powerful for accessing ranges of data.
See this question for details on querying Composite columns using Hector.
If necessary, the column names could then be used as keys to do Multiget queries for additional lookups.
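To make that concrete, here is a minimal CQL 3 sketch (the table and column names are made up for the example). The partition key plays the role of the row key and the clustering column plays the role of the composite column name, so a slice over a range of second-component values within one key1 is cheap, while "any key1, specific key2" is not:

    -- Hypothetical table: one row per key1, columns sorted by key2
    CREATE TABLE example_by_key1 (
        key1  text,
        key2  text,
        value text,
        PRIMARY KEY (key1, key2)
    );

    -- Efficient: a single partition, contiguous slice of sorted columns
    SELECT * FROM example_by_key1
     WHERE key1 = 'A' AND key2 >= 'A' AND key2 <= 'Z';

    -- The reverse ("any key1, key2 = 'X'") would have to touch every partition.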

I hope these articles help you :)
http://pkghosh.wordpress.com/2011/03/02/cassandra-secondary-index-patterns/
http://www.datastax.com/docs/0.7/data_model/cfs_as_indexes
http://www.anuff.com/2011/02/indexing-in-cassandra.html
Also check out this question
Storing a list of values in Cassandra

Related

Cassandra - Same partition key in different tables - when it is right?

I modeled my Cassandra schema so that I have a couple of tables with the same partition key - a UUID.
Each table has its partition key and other columns representing the data for a specific query I would like to ask.
For example, table 1 has the UUID and a column for its status (no other clustering keys in this table), and table 2 contains the same UUID (also without clustering keys) but with different columns representing the data for this UUID.
Is this the right modeling? Is it wrong to duplicate the same partition key across tables so that each table holds the relevant columns for a specific use case? Or is it preferable to use only one table, query it, and pick out the relevant data for each use case in the code?
There's nothing wrong with this modeling. Whether it is better, or worse, than the obvious alternative of having just one table with both pieces of data, depends on your workload:
For example, if you commonly need to read both status and data columns of the same uuid, then these reads will be more efficient if both things are in the same table, which only needs to be looked up once. If you always read just one but not both, then reads will be more efficient from separate tables. Also, if this workload is not read-mostly but rather write-mostly, then writing to just one table instead of two will be more efficient.
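To illustrate the trade-off, here is a minimal sketch of the two alternatives in CQL (the table and column names are hypothetical):

    -- Alternative 1: two tables sharing the same partition key
    CREATE TABLE item_status (
        id     uuid PRIMARY KEY,
        status text
    );
    CREATE TABLE item_data (
        id   uuid PRIMARY KEY,
        data text
    );

    -- Alternative 2: one table holding both pieces of data;
    -- a single read serves a use case that needs both columns
    CREATE TABLE item (
        id     uuid PRIMARY KEY,
        status text,
        data   text
    );
    SELECT status, data FROM item WHERE id = 756716f7-2e54-4715-9f00-91dcbea6cf50;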

What is the cardinality of a partition key?

If I use a randomly generated unique id, is it correct that
the cardinality would be rather large?
If I have a key with low cardinality, like 5 category values that the partition key can take, and I want to distribute it, the recommended approach seems to be to make the partition key into a composite key.
But this requires that I specify all the parts of the composite key in my query to retrieve all records for that key.
Even then, the generated token might end up on the same node.
Is there any way to decide on the additional column for the composite key that would guarantee that the data is distributed?
The thing is that with Cassandra you actually want your partition keys to be "known", so that you can access the data when you need it. I'm not sure what you mean by large cardinality on the partition key; you would simply get a lot of partitions in the cluster, which is usually OK.
If you want to distribute the data around the cluster, you can use artificial columns. This approach is sometimes called bucketing. Basically, if you want to keep 100k+ (or, in newer versions, 1 million+) columns in a partition, it's a good idea to split that data into multiple partitions.
Some people simply use a trick: when they insert the data, they add an artificial bucket column to the partition key - say random(1-10) - and when they read the data back they either issue 10 queries or use an IN operator, then fetch the data and merge it on the client side. This approach also has the benefit of preventing "hot rows" in the cluster.
The chance that any given key lands on a particular node is more or less 1/NUM_NODES, so most of the time this is not something you should worry about too much - unless the number of partitions is smaller than the number of nodes in the cluster.
Basically there are two choices for the additional column: a random bucket (already described) or a function of the data itself. For example, with time-series data you might bucket by month: you can always calculate the month from the data you are about to insert and put the row into that bucket. When you retrieve the data, you know, say, that you are looking for something in May 2016, so you know which bucket to select.
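As an illustration of the bucketing trick, here is a sketch in CQL (the table and column names are assumptions made for the example). The artificial bucket column becomes part of a composite partition key; reads either query all buckets with IN or compute the bucket (e.g. the month) from the query itself:

    -- Hypothetical table with an artificial bucket in the partition key
    CREATE TABLE events_by_bucket (
        name   text,
        bucket int,            -- e.g. random(1-10), chosen at insert time
        ts     timestamp,
        data   text,
        PRIMARY KEY ((name, bucket), ts)
    );

    -- Read back all buckets with IN and merge the results client-side
    SELECT * FROM events_by_bucket
     WHERE name = 'clicks'
       AND bucket IN (1, 2, 3, 4, 5, 6, 7, 8, 9, 10);

For the time-series variant, the bucket column would simply hold the month (e.g. '2016-05') computed from the timestamp on insert, so the reader can derive the bucket instead of querying all of them.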

Cassandra 1.1 composite keys / columns and hierarchical queries

So far, this is what I understand of the current Cassandra architecture:
Super columns are not desirable any more due to performance issues.
Composite columns (actually keys) are a good choice for indexing hierarchical keys.
Composite columns store nested components in sorted order. There is no actual index.
I have some questions:
1) Is everything I stated correct?
2) Can composite columns efficiently process range queries per component (assuming logical usage)?
3) Are composite columns suited to extremely large numbers of rows while still yielding rapid query results (considering they are not an index per se)?
4) Can secondary indexes be created against composite columns? If yes, can range queries be efficiently performed?
Thanks in advance.
1) Yes.
2) Yes.
3) Yes, because they are sorted on write just like any other column.
4) Yes, secondary indexes can be created against composites as of 1.2. See this JIRA ticket.
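A small CQL 3 sketch of these points, under assumed table and column names:

    -- Compound primary key: day is the partition key,
    -- (hour, minute) are clustering components stored in sorted order
    CREATE TABLE events (
        day    text,
        hour   int,
        minute int,
        value  text,
        PRIMARY KEY (day, hour, minute)
    );

    -- Range query on a clustering component within one partition
    SELECT * FROM events
     WHERE day = '2013-01-01' AND hour >= 9 AND hour < 12;

    -- Secondary index on a regular column, alongside the composite key
    CREATE INDEX ON events (value);
    SELECT * FROM events WHERE value = 'x';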

select compositetype keys in cassandra

So I've defined a column family that uses composite ids for the row keys. Say the composite key is CompositeType(LongType, LongType). I've tested storing items with this type and that works fine, and SELECT works as expected too when I know the full key. But let's say I want all keys that have 0 as the first element and anything as the second. So far the only way I can see to perform this query is as follows:
If I want all keys that are 0:*, I do a CQL query for key >= 0:0 AND key < 1:0, which works as long as there is an order-preserving partitioner.
My questions are:
1) Is this odd syntax only because I'm using a CQL driver (the only option for Node.js aside from Thrift)?
2) Is there any inefficiency with this type of query? Essentially I'm using a composite key instead of super columns, since those aren't supported in CQL. I have no problem dealing with this logic in the code as long as there are no limitations to using it like this.
I would suggest you change your data model. Use the RandomPartitioner and use just the first component as the row key. Push the second component into the column names, that is, make your column names composites instead.
Since column names are always sorted, you can do easy slicing operations. For example:
a) When you know both components, do a get_slice on the row key (first component) and the first component of the composite column name.
b) When you know just the first component, fetch the complete row for that row key (first component).
This is the approach CQL3 takes when you ask it to create a table with multiple primary keys.
Your best option is to use CQL 3. This will let you use composites underneath to optimize your lookups while still allowing you to use the parts of the composite values as though they were separate columns. You're currently using composites in your row keys, and CQL 3 only supports composites in column names (so far), but that's probably ok. In many cases like this, shifting the compositing from the row key to the column name won't have an adverse effect on your performance or data distribution, but if your row keys aren't sufficiently selective, then it might.
Either way, though, you should be looking at CQL 3. CQL 2 is deprecated. I could tell you more about how to adapt your model for CQL 3 if I knew more about your situation.
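For reference, here is a minimal CQL 3 sketch of that remodel (table and column names are assumptions): the first component of the old row key becomes the partition key and the second becomes a clustering column, so both lookups above become simple slices under the RandomPartitioner:

    -- Hypothetical remodel
    CREATE TABLE pairs (
        part1 bigint,
        part2 bigint,
        value text,
        PRIMARY KEY (part1, part2)
    );

    -- "0:*": everything whose first component is 0 (one partition, columns sorted by part2)
    SELECT * FROM pairs WHERE part1 = 0;

    -- Both components known
    SELECT * FROM pairs WHERE part1 = 0 AND part2 = 42;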

How to choose Azure Table PartitionKey and RowKey for a table that already has a unique attribute

My entity is a key-value pair. 90% of the time I'll be retrieving the entity by its key, but 10% of the time I'll also do a reverse lookup, i.e. I'll search by value and get the key.
The key and the value are both guaranteed to be unique, and hence their combination is also guaranteed to be unique.
Is it correct to use the key as PartitionKey and the value as RowKey?
I believe this will also ensure that my data is perfectly load balanced between servers, since the PartitionKey is unique.
Are there any problems with the above decision?
Under any circumstance, is it practical to have a hard-coded partition key, i.e. all rows have the same partition key while keeping the RowKey unique?
Is it doable? Yes, but depending on the size of your data I'm not so sure it's a good idea. When you query on PartitionKey, Table Store can go directly to the exact partition and retrieve all your records. If you query on RowKey alone, Table Store has to check whether the row exists in every partition of the table. So if you have 1000 key-value pairs, searching by your key will read a single partition/row, while searching by your value alone will read all 1000 partitions!
I faced a similar problem and solved it in 2 ways:
1) Have 2 different tables: one with your key as PartitionKey, the other with your value as PartitionKey. Storage is cheap, so duplicating the data shouldn't cost much.
2) (What I finally did) If you're effectively returning single entities based on a unique key, just stick them in blobs (partitioned and pivoted as in point 1), because you don't need to traverse a table, so don't.
