Cassandra table-based query and primary key uniqueness

I have read here that for a table like:
CREATE TABLE user (
username text,
password text,
email text,
company text,
PRIMARY KEY (username)
);
We can create a table like:
CREATE TABLE user_by_company (
company text,
username text,
email text,
PRIMARY KEY (company)
);
In order to support querying by company. But what about primary key uniqueness for the second table?

Modify your table's PRIMARY KEY definition and add username as a clustering key:
CREATE TABLE user_by_company (
company text,
username text,
email text,
PRIMARY KEY (company,username)
);
That will enforce uniqueness, as well as return all usernames for a particular company. Additionally, your result set will be sorted in ascending order by username.
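For illustration, a sketch of the query that model supports (assuming the user_by_company definition above; 'Acme' is just a placeholder value):
-- returns every user in the 'Acme' partition, sorted ascending by username
SELECT username, email FROM user_by_company WHERE company = 'Acme';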
Data will be partitioned across nodes by company name. What if there are a lot of users from one company and fewer from another? The data will be partitioned in an unbalanced way.
That's the balance that you have to figure out on your own. A PRIMARY KEY definition in Cassandra is a give-and-take between data distribution and query flexibility. And unless the cardinality of company is very low (like single digits), you shouldn't have to worry about creating hot spots in your cluster.
Also, if one particular company gets too big, you can use a modeling technique known as "bucketing." If I were going to "bucket" your user_by_company table, I would first add a company_bucket column, and specify it as an additional (composite) partition key:
CREATE TABLE user_by_company (
company text,
company_bucket text,
username text,
email text,
PRIMARY KEY ((company,company_bucket),username)
);
As for what to put into that bucket, it's up to you. Maybe that particular company has East and West locations, so something like this might work:
INSERT INTO user_by_company (company,company_bucket,username,email)
VALUES ('Acme','West','Jayne','jcobb@serenity.com');
The drawback here is that you would then have to provide company_bucket whenever querying that table. But it is a solution that could help you if a company gets too big.
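For example, a query against the bucketed table would have to look something like this (a sketch; 'Acme' and 'West' are placeholder values):
-- both partition key components are required in the WHERE clause
SELECT username, email FROM user_by_company
WHERE company = 'Acme' AND company_bucket = 'West';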

I think there is a typo in the blog (the link you mentioned). You are right: with the table structure given for user_by_company, there will be an issue with uniqueness.
To support the typo theory:
In this case, creating a secondary index in the company field in the
user table could be a solution because it has much lower cardinality
than the user's email but let’s solve it with performance in mind.
Secondary indexes are always slower than dedicated table approach.
These are the lines from the blog about querying users by company.
If you were to define company as the primary key, or as part of the primary key, there would be no need to create a secondary index.

Related

How to create a Cassandra primary key in the correct way

I have the following table structure:
CREATE TABLE test_keyspace.persons (
id uuid,
country text,
city text,
address text,
phone_number text,
PRIMARY KEY (id, country, address)
);
My main scenario is to get person by id. But sometimes I want to get all cities inside country and all persons inside city as well.
I know that Cassandra must have at least one partition key and zero or more clustering keys, but I don't understand how to organize it to work most effectively (and generally work).
Can anybody give me advice?
So it sounds like you want to be able to query by both id and country. Typically in Cassandra, the way to build your data models is a "one table == one query" approach. In that case, you would have two tables, just keyed differently:
CREATE TABLE test_keyspace.persons_by_id (
id uuid,
country text,
city text,
address text,
phone_number text,
PRIMARY KEY (id));
TBH, you don't really need to cluster on country and address, unless a person can have multiple addresses. But a single-column PK is a completely legit approach.
For the second table:
CREATE TABLE test_keyspace.persons_by_country (
id uuid,
country text,
city text,
address text,
phone_number text,
PRIMARY KEY (country,city,id));
This will allow you to query by country, with persons grouped/sorted by city and sorted by id. In theory, you could also serve the query by id approach here, as long as you also had the country and city. But that might not be possible in your scenario.
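As a sketch of those reads (assuming the persons_by_country definition above; 'UK' and 'London' are placeholder values):
-- all persons in a country, grouped by city and sorted by id within each city
SELECT * FROM test_keyspace.persons_by_country WHERE country = 'UK';
-- narrowing to one city also works, since city is the first clustering column
SELECT * FROM test_keyspace.persons_by_country WHERE country = 'UK' AND city = 'London';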
Duplicating data in Cassandra (NoSQL) to help queries perform better is OK. The trick becomes keeping the tables in sync, but you can use the BATCH functionality to apply writes to both tables atomically.
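A minimal sketch of keeping the two tables in sync with a logged batch (the uuid and other values are placeholders):
BEGIN BATCH
INSERT INTO test_keyspace.persons_by_id (id, country, city, address, phone_number)
VALUES (1b8cf4f0-0000-1000-8000-000000000001, 'UK', 'London', '1 Example St', '555-0100');
INSERT INTO test_keyspace.persons_by_country (id, country, city, address, phone_number)
VALUES (1b8cf4f0-0000-1000-8000-000000000001, 'UK', 'London', '1 Example St', '555-0100');
APPLY BATCH;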
In case you haven't already, you might benefit from DataStax's (free) course on data modeling - Data Modeling with Apache Cassandra and DataStax Enterprise.

How is denormalization handled in Cassandra

What is the best approach to update table with duplicate data?
I have a table
CREATE TABLE users (
id text PRIMARY KEY,
email text,
description text,
salary int
)
I will delete, update, insert, etc. to this table. But I also have a requirement to be able to search by email and description. If I create new tables with new composite keys for email and description,
when I update my base table I do
insert into users (id, salary) values ('1', 500);
I do not have the required data to also update my secondary tables, since all the client has is id and salary. How is the second table updated?
Other workarounds and shortcomings
I could have created a materialized view, but since the base table has only one primary key column, I can only add one more column to the view's key. My search requirement needs more than one column.
I could create secondary indexes on the columns that will be searched on, but the performance for this would be bad since the columns I will be searching on (description, email, etc.) have high cardinality.
So, the "correct" way of doing this is to create 3 tables. salary_by_id, salary_by_email and salary_by_description.
CREATE TABLE salary_by_id (
id text PRIMARY KEY,
salary int
);
CREATE TABLE salary_by_email (
email text PRIMARY KEY,
salary int
);
CREATE TABLE salary_by_description (
description text,
id text,
salary int,
PRIMARY KEY (description, id)
);
The reason I added id to salary_by_description is that, from my own guessing, description won't be globally unique, so it has to have something else in its primary key.
Depending on the size of these tables, the last one might need something extra added to its partition key. And if needed, you can add id, email and description to the other tables.
Now, when inserting or deleting values, you need to do it in all 3 tables. If you use a driver, like the Java driver, that supports asynchronous calls, then this doesn't cost very much extra.
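As a CQL sketch, the same write fanned out to all 3 tables could look like this (the values are placeholders; a driver can issue these asynchronously, or you can wrap them in a logged batch):
INSERT INTO salary_by_id (id, salary) VALUES ('1', 500);
INSERT INTO salary_by_email (email, salary) VALUES ('jane@example.com', 500);
INSERT INTO salary_by_description (description, id, salary) VALUES ('senior engineer', '1', 500);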

Cassandra Defining Primary key and alternatives

Here is a simple example of a user table in Cassandra. What is the best strategy for creating a primary key?
My requirements are
search by uuid
search by username
search by email
All the keys mentioned will be high-cardinality keys. Also, at any moment I will have only one of them to search by.
PRIMARY KEY(uid,username,email)
What if I have only the username? Then the above primary key is not useful. I am not able to visualize a solution to achieve this using a compound primary key.
What are the other options? Should we go with a new table mapping username to uid, and then search the user table?
All the articles out there on the internet recommend not creating a secondary index for high-cardinality keys.
CREATE TABLE medicscity.user (
uid uuid,
fname text,
lname text,
user_id text,
email_id text,
password text,
city text,
state_id int,
country_id int,
dob timestamp,
zipcode text,
PRIMARY KEY (??)
)
How do we solve this kind of situation ?
Yes, you need to go with duplicate tables.
If you ever face a situation in Cassandra where you have to query a table based on column1, column2 or column3 independently, you will have to duplicate the tables.
Now, how much duplication you use is an individual choice.
Like, in this example, you can either duplicate the table with the full data.
Or, you can keep the full data only in the main table (with column1 as the partition key and column2, column3 as the rest of the primary key) and create slimmer lookup tables:
Create a new table with a primary key of column1, column2, column3 and the partition key on column2.
Another one with the same primary key and the partition key on column3.
So your duplicated data will only be the key row, but in this case you will end up querying data twice: once from the lookup table, and once from the full-fledged table.
Big data technology is there to speed up computation and let your system scale horizontally, and it comes at the expense of disk/storage. I mean, just look at everything: even the replication factor at its base duplicates data.
Your PRIMARY KEY (uuid, username, email) doesn't fit your requirement, because you can't search on a clustering column without supplying the partition key, and you can't search on the second clustering column without supplying the first clustering column.
e.g. you cannot search for username without uuid in the WHERE clause, and you cannot search for email without both uuid and username.
All you need is denormalization and duplicated data.
Denormalization and duplication of data is a fact of life with Cassandra. Don’t be afraid of it. Disk space is generally the cheapest resource (compared to CPU, memory, disk IOPs, or network), and Cassandra is architected around that fact. In order to get the most efficient reads, you often need to duplicate data.
In your case, you need to create 3 tables that hold the same columns (the data that you want to get), but these 3 tables will have different PRIMARY KEYs: one with uuid as the PK, one with username as the PK, and one with email as the PK. :)
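As a sketch based on your medicscity.user columns (assuming user_id holds the username and email_id the email; the table names are just suggestions):
CREATE TABLE medicscity.user_by_uid (
uid uuid,
user_id text,
email_id text,
fname text,
lname text,
-- remaining columns (password, city, state_id, country_id, dob, zipcode) as in your original table
PRIMARY KEY (uid)
);
CREATE TABLE medicscity.user_by_username (
user_id text,
uid uuid,
email_id text,
fname text,
lname text,
-- remaining columns as above
PRIMARY KEY (user_id)
);
CREATE TABLE medicscity.user_by_email (
email_id text,
uid uuid,
user_id text,
fname text,
lname text,
-- remaining columns as above
PRIMARY KEY (email_id)
);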

Data modeling easy table in Cassandra not working

I have to design a web page where a group leader can invite people to join his/her group. My requirements are really simple.
No sending duplicate emails out if a person was already contacted.
Show the group leader a list of invites sorted by invite date in ascending order.
Seems easy. I created this table.
CREATE TABLE invites (
email_address text,
invite_date timeuuid,
PRIMARY KEY (email_address, invite_date)
) WITH CLUSTERING ORDER BY (invite_date ASC);
Problem 1: LWT is no use with invite_date as a clustering column.
I figured I'd use LWT to ensure email_address is unique, only to find out IF NOT EXISTS only seems to work on the whole PRIMARY KEY, so LWT in C* does not work for me.
Problem 2: I cannot get an ordered list of invites back to save my life, even with invite_date as a clustering column.
If I take invite_date out, I cannot issue an ORDER BY in CQL. That said, having invite_date out of the PK lets me use LWT...
I can't even get a 2 column table to fulfill 2 easy requirements! Any help on data modeling design for this problem is much appreciated.
New Dec. 4, 2015:
In addition to the business requirements, a technical requirement I have is:
I want to make sure I model this correctly in Cassandra, so that it allows me to use CQL's LIMIT and the pagingState capabilities in the Java driver. This means I cannot just read all the rows in, sort them on the Java side, and return the results.
Problem 1:
I think that the easiest way to handle this might be to have two separate tables, one for emails_in_group and one for invites_by_group. This will allow each query to be fulfilled independently. The emails_in_group table would look something like this:
CREATE TABLE emails_in_group (
email_address text,
group_id text,
PRIMARY KEY (email_address , group_id));
Then this, combined with the table as defined in Problem 2 below, could be updated using a conditional batch statement as shown here:
http://docs.datastax.com/en/cql/3.1/cql/cql_using/use-batch-static.html
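For the uniqueness check itself, a minimal sketch against the emails_in_group table (the values are placeholders): IF NOT EXISTS rejects the write if that exact email_address/group_id row already exists, and the condition is evaluated within the email_address partition.
INSERT INTO emails_in_group (email_address, group_id)
VALUES ('jane@example.com', 'group-42')
IF NOT EXISTS;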
Problem 2:
So the basic problem here is that, as you have your data currently modeled, each email_address value will be in its own partition, and within that partition the invite_date values will be ordered. @bydsky is right when he said that you need to add something like a group_id to your table and make it the partition key portion of your primary key. If you do this and then add invite_date as a clustering column, all records for that group_id will be stored in the same partition and ORDER BY will work. ORDER BY only works within the same partition, not across partitions, which is what you were asking it to do.
CREATE TABLE invites_by_group (
group_id text,
email_address text,
invite_date timestamp,
PRIMARY KEY (group_id, invite_date));
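With that table, the group leader's listing query becomes a simple sketch (the group_id value is a placeholder; ascending invite_date is the default clustering order here, and LIMIT/paging work as usual):
SELECT email_address, invite_date
FROM invites_by_group
WHERE group_id = 'group-42'
LIMIT 20;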
I think you should use group_id and email as primary key.
For ordering, maybe you could do it in your application.
CREATE TABLE invites (
group_id text,
email_address text,
invite_date timestamp,
PRIMARY KEY (group_id, email_address) );
For Cassandra data modeling, the DS220 course is a good place to start.

Select Cassandra row key

What criteria should be considered when selecting a row key for a column family in Cassandra? I want to migrate a relational database which does not contain any primary keys. In that case, what would be the best row key selection?
Use natural keys that can be derived from the dataset if possible (e.g. phone_number for a phone book, user_name for a user table). If that's not possible, use a UUID.
There are many things to consider when choosing the primary key in Cassandra:
Understand the difference between the primary key and the partition key.
CREATE TABLE users (
user_name varchar PRIMARY KEY,
password varchar
);
In the above case the primary key and the partition key are the same.
CREATE TABLE users (
user_name varchar,
user_email varchar,
password varchar,
PRIMARY KEY (user_name, user_email)
);
Here the primary key is user_name and user_email together, whereas user_name alone is the partition key.
CREATE TABLE users (
user_name varchar,
user_email varchar,
password varchar,
PRIMARY KEY ((user_name, user_email))
);
Here the primary key and the partition key are both (user_name, user_email).
Carefully define your partition key. Partition keys are used for lookups by Cassandra, so you must define your partition key by looking at your SELECT queries.
Cassandra organizes data so that partition keys are used for lookups. Using the previous example:
For the first case:
user_name ---> email:password email:date_of_birth
ABC --> abc@gmail.com:abc123 abc@gmail.com:22/02/1950 abc@yahoo.com:def123 ...
In the second case:
user_name,email ---> password date_of_birth
ABC,abc@gmail.com --> abc123 22/02/1950
Making the partition key more complex by including more data ensures that you have many rows instead of a single row with many columns. It might be beneficial to balance the number of rows you induce against the number of columns each row might have; having an incredibly large number of small rows might not be too beneficial for reads.
Partition keys determine how data is distributed across nodes, so consider whether you have hot spots and decide whether you want to break the partitions down further.
Case 1:
All users named ABC will be in a single node
Case 2:
Users named ABC might or might not be on a single node, depending on the token generated from their user_name and email combination.
Your partition key(s) should be how you want to store the data and how you will always look it up. You can only retrieve data by partition key, so it's important to choose something that you will naturally look up (this is why sometimes data is denormalized in Cassandra by storing it in multiple tables that mimic materialized views).
The clustering column key(s), if any, are mostly useful if you sometimes want to retrieve all the data in a partition and sometimes only want some of it. This is great for things like timeseries data because you can cluster the data on a timeuuid, store it sorted, and then do efficient range queries over the data.
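As an illustrative sketch of that time series pattern (the table and column names are made up for this example):
CREATE TABLE sensor_readings (
sensor_id text,
reading_time timeuuid,
reading_value double,
PRIMARY KEY (sensor_id, reading_time)
) WITH CLUSTERING ORDER BY (reading_time DESC);
-- efficient range query within a single partition
SELECT reading_time, reading_value FROM sensor_readings
WHERE sensor_id = 'sensor-1'
AND reading_time > maxTimeuuid('2015-12-01 00:00:00')
AND reading_time < minTimeuuid('2015-12-05 00:00:00');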
