Data modelling conflicts in Cassandra

The schema I am using is as follows:
CREATE TABLE mytable(
id varchar,
date date,
name varchar,
PRIMARY KEY ((date), name, id)
) WITH CLUSTERING ORDER BY (name desc);
I have 2 queries for my use case:
Fetching all records for given name
Delete all records for given date.
Since we can't delete records without the partition key being specified, my partition key is fixed to date alone; no other column can be added to the partition key because I won't have anything except the date at deletion time.
But to fetch records by name, I need to use ALLOW FILTERING, since that query has to scan the whole table under the above schema, which causes performance issues.
Can you suggest a better way to avoid ALLOW FILTERING that is also compatible with deleting by date?

You could use indexes:
https://docs.datastax.com/en/cql/3.3/cql/cql_using/useSecondaryIndex.html
But you have to be careful: there can be issues depending on the size of the table. You should read this post for more information:
https://pantheon.io/blog/cassandra-scale-problem-secondary-indexes
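For the table in the question, a minimal sketch of that approach (the index name is arbitrary); with the index in place, the lookup by name no longer needs ALLOW FILTERING:
CREATE INDEX mytable_name_idx ON mytable (name);
SELECT * FROM mytable WHERE name = 'bob';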

You need an additional table to support your requirements.
Your main query is to retrieve the records given a name. For this, you should use mytable as follows (note the primary key):
CREATE TABLE mytable(
id varchar,
date date,
name varchar,
PRIMARY KEY ((name), date, id)
) WITH CLUSTERING ORDER BY (date desc);
This table will let you retrieve your data for a given name with (query 1):
SELECT * FROM mytable WHERE name='bob';
Now, you want to be able to delete by date. For this you would need the following additional table:
CREATE TABLE mytable_by_date(
id varchar,
date date,
name varchar,
PRIMARY KEY ((date), name, id)
) WITH CLUSTERING ORDER BY (name ASC);
This table will let you find the name (and id) for a given date with:
SELECT * from mytable_by_date WHERE date='your-date';
I don't know your business requirements, so this query might return 0, 1, or more results. Once you have them, you can issue the deletes against the first and second table (maybe using a logged batch for atomicity? See the sketch after the statements below.)
DELETE FROM mytable_by_date WHERE date='your-date' AND name='the-name' AND id='the-id';
DELETE FROM mytable WHERE name='the-name' AND ...
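A minimal sketch of such a logged batch, assuming the full clustering key (date, id) of mytable is also known at delete time:
BEGIN BATCH
DELETE FROM mytable_by_date WHERE date='your-date' AND name='the-name' AND id='the-id';
DELETE FROM mytable WHERE name='the-name' AND date='your-date' AND id='the-id';
APPLY BATCH;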
Overall, you might need to adjust according to your business requirements (is name unique, is uniqueness enforced by id, etc.).
Hope it helps!

Related

Delete whole row based on one of the clustering column values in Cassandra

Schema I am using is as follows:
CREATE TABLE mytable(
id int,
name varchar,
PRIMARY KEY ((id), name)
) WITH CLUSTERING ORDER BY (name desc);
I wanted to delete records with the following command:
DELETE FROM mytable WHERE name = 'Jhon';
But it gave this error:
[Invalid query] message="Some partition key parts are missing: name"
As I looked for the reason, I came to know that a delete is not possible with only clustering columns.
Then I tried
DELETE FROM mytable WHERE id IN (SELECT id FROM mytable WHERE name='Jhon') AND name = 'Jhon';
But obviously it did not work.
I then tried setting TTL to 0 to delete the row, but TTL can be set only for a particular column, not the entire row.
What are feasible alternates to perform this operation?
In Cassandra, you need to design your data model to support your query. When you query your data, you always have to provide the partition key (otherwise the query would be inefficient).
The problem is that you want to query your data without a partition key. You would need to denormalize your data to support this kind of request. For example, you could add an additional table, such as:
CREATE TABLE id_by_name(
name varchar,
id int,
PRIMARY KEY (name, id)
) WITH CLUSTERING ORDER BY (id desc);
Then, you would be able to do your delete with a few queries:
SELECT id FROM id_by_name WHERE name='John';
Let's assume this returns 4.
DELETE FROM mytable WHERE id=4;
DELETE FROM id_by_name WHERE name='John' and id=4;
You could try to leverage a materialized view (instead of maintaining id_by_name yourself), but materialized views are currently marked as unstable.
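For illustration, a sketch of such a view over mytable from the question (the view name is arbitrary):
CREATE MATERIALIZED VIEW mytable_by_name AS
SELECT id, name FROM mytable
WHERE name IS NOT NULL AND id IS NOT NULL
PRIMARY KEY (name, id);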
Now, there are still a few issues you need to address in your data model, in particular how you handle multiple users with the same name, etc.
You cannot delete by primary key if it is not complete. Primary key decisions drive sharding and load balancing. Cassandra can get complex if you are not used to thinking in columns.
I don't like the above answer: though it is good, it complicates your solution. If you are thinking relationally but getting lost in Cassandra, I suggest using something that simplifies and maps your thinking to relational views.

Cassandra Update query | timestamp column as clustering key

I have a table in cassandra with following schema:
CREATE TABLE user_album_entity (
userId text,
albumId text,
updateDateTimestamp timestamp,
albumName text,
description text,
PRIMARY KEY ((userId), updateDateTimestamp)
);
The query required to get data would have WHERE userId = xxx ORDER BY updateDateTimestamp; hence the schema has updateDateTimestamp as the clustering key.
The problem comes when updating a column of the table. The query is: update the album information for the user where user id = xxx. But per the specs, an update query would need the exact value of updateDateTimestamp, which in a real-world scenario an application would never send.
What is the answer to such problems, since I believe this is a very common use case where a select query requires ordering on a timestamp? Any help is much appreciated.
The problem is that your table structure allows the same album to have multiple records with the only difference being the timestamp (the clustering key).
Three possible solutions:
Remove the clustering key and sort your data at application level.
Remove the clustering key and add a Secondary Index to the timestamp field.
Remove the clustering key and create a Materialized View to perform the query.
If your use case is such that each partition will contain exactly one row, then you can model your table like:
CREATE TABLE user_album_entity (
userId text,
albumId text static,
updateDateTimestamp timestamp,
albumName text static,
description text static,
PRIMARY KEY ((userId), updateDateTimestamp)
);
Modelling the table this way enables the update to be done as follows:
UPDATE user_album_entity SET albumId = 'updatedAlbumId' WHERE userId = 'xyz';
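The read side then stays a plain partition lookup; since all album fields are static, they come back with the partition (values here are placeholders):
SELECT albumId, albumName, description, updateDateTimestamp FROM user_album_entity WHERE userId = 'xyz';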
Hope this helps.

Updating Primary Key value Cassandra

I have table with node_id, node_name and data. My requirement is to getByID and getByName. So I have made id and name as the primary keys. But I also need to sometimes update the name as well.
I know Cassandra does not allow updating primary key columns, or using a non-primary-key column in the WHERE clause.
How can I achieve this?
I did consider deleting the record first, and then inserting again with the same id and new name. But I read that this would create tombstones and affect the performance.
Use only node_id as the primary key. To implement getByName, create a materialized view (see: materialized views in Cassandra).
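A sketch, assuming a base table nodes(node_id, node_name, data) with node_id as its primary key (the table and view names here are hypothetical):
CREATE MATERIALIZED VIEW nodes_by_name AS
SELECT node_id, node_name, data FROM nodes
WHERE node_name IS NOT NULL AND node_id IS NOT NULL
PRIMARY KEY (node_name, node_id);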
create table users_by_id_name(
id int,
createdOn bigint, -- timestamp in millisec
name text,
age int,
primary key (id, name, createdOn)
) WITH CLUSTERING ORDER BY (name DESC, createdOn DESC);
Use above table definition to insert users.
Insert query --
insert into users_by_id_name (id,createdOn,name,age) values (1,100,'darthvedar',28);
To update the user, insert the row again with the same user id and an updated name and createdOn value.
insert into users_by_id_name (id,createdOn,name,age) values (1,200,'obi-wan-kenobi',28);
While selecting the user, use the queries below.
Select by user id --
select * from users_by_id_name where id=1 limit 1;
Select user by name -
select * from users_by_id_name where name='obi-wan-kenobi' ALLOW FILTERING;
Another way is to use a secondary index on the user name. The user name is not going to change too frequently, so a secondary index is also a good option.
Edit after comments -
If you have very frequent updates on user name, it would be better to use two different tables.
create table users_by_id(
id int,
name text,
age int,
primary key (id)
);
create table users_by_name(
id int,
name text,
age int,
primary key (name)
);
While inserting, write to both tables using a batch statement, as sketched below.
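A minimal sketch of that dual write (values are placeholders):
BEGIN BATCH
INSERT INTO users_by_id (id, name, age) VALUES (1, 'obi-wan-kenobi', 28);
INSERT INTO users_by_name (id, name, age) VALUES (1, 'obi-wan-kenobi', 28);
APPLY BATCH;
A name change would then, in the same batch, update users_by_id, delete the old users_by_name row, and insert the new one.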
Hope this will help.

Load data into Cassandra denormalized table

I understand that since Cassandra does not support joins, we sometimes need to create denormalized tables.
Given that I need to get the item name for each item within an order, given an order id, I create a table using:
CREATE TABLE order (
order_id int,
item_id int,
item_name,
primary key ((id), item_id)
);
I have two csv files to load data from, order.csv and item.csv, where order.csv contains order_id and item_id and item.csv contains item_id and item_name.
The question is how to load data from the csv files into the table I created. I insert data from the order file first, and it works fine. When I do an insertion of items, it throws an error saying the primary key is missing.
Any idea how I can insert data from different input files into the denormalized table? Thanks.
There is a typo in the definition of the primary key; it should be:
CREATE TABLE order (
order_id int,
item_id int,
item_name text,
primary key (order_id, item_id)
);
Are you using COPY to upload the data?
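If so, loading the order file might look like this (a sketch, assuming order.csv has a header row):
COPY order (order_id, item_id) FROM 'order.csv' WITH HEADER = true;
Note that item.csv cannot be loaded into this table directly, because its rows carry no order_id (hence the missing primary key error); the item names have to be merged with the order rows, e.g. in a preprocessing script, before loading.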
Regarding the denormalization, that depends on your use case. In a normalized schema you would usually have one table for orders and another for customers, and do a join in SQL to display information about the order and the customer at the same time; in a denormalized table you keep the order and the customer information in the same table, and the fields depend on how you are going to use them in the query.
As a rule of thumb, before creating the table, you first need to define the queries you are going to run.
Using a secondary index on your item_id should do the trick:
CREATE INDEX idx_item_id ON order (item_id);
Now you should be able to query like:
SELECT * FROM order WHERE item_id = ?;
Beware that indexes usually have a performance impact, so you can use the index to import your data and drop it when finished.
Please refer to the Cassandra Index Documentation for further information.

How to choose a proper table structure in Cassandra?

Suppose I have table with the following structure
create table tasks (
user_id uuid,
name text,
task_id uuid,
description text,
primary key ((user_id), name, task_id)
);
It allows me to get all tasks for a user sorted by name ascending. I also added task_id to the primary key to avoid upserts. The following query works:
select * from tasks where user_id = ?
as well as
select * from tasks where user_id = ? and name > ?
However, I cannot get a task with a specific task_id. For example, the following query fails
select * from tasks where user_id = ? and task_id = ?
with this error
PRIMARY KEY column "task_id" cannot be restricted as preceding column "name" is not restricted
It requires the name column to be specified, but at that moment I have only the task_id (from the URL, for example) and the user_id (from the session).
How should I create this table to perform both queries? Or do I need to create a separate table for the second case? What is the common pattern in this situation?
You could simply add one more redundant column taskId with the same value as task_id and create a secondary index on taskId.
Then you can query with user_id=? and taskId=?, as sketched below.
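A sketch of that approach against the tasks table above (the column and index names are illustrative):
ALTER TABLE tasks ADD taskid uuid;
CREATE INDEX tasks_taskid_idx ON tasks (taskid);
SELECT * FROM tasks WHERE user_id = ? AND taskid = ?;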
PRIMARY KEY column "task_id" cannot be restricted as preceding column "name" is not restricted
You are seeing this error because CQL does not permit queries to skip primary key components.
How should I create this table to perform both queries? Or I need create separate table for second case? What is the common pattern in this situation?
As you suspect, the typical way that problems like this are solved with Cassandra is that an additional table is created for each query. In this case, recreating the table with a PRIMARY KEY designed to match your additional query pattern would simply look like this:
create table tasks_by_user_and_task (
user_id uuid,
name text,
task_id uuid,
description text,
primary key ((user_id), task_id)
);
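With that table in place, the lookup becomes a straight primary key query:
SELECT * FROM tasks_by_user_and_task WHERE user_id = ? AND task_id = ?;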
You could simply add one more redundant column taskId with same value as task_id and create a secondary index on taskId.
While I am usually not a fan of secondary indexes, in this case it may perform ok. The reason is that you would still be restricting your query by partition key, which eliminates the need to examine additional nodes. The drawback (as Undefined_variable pointed out) is that you cannot create a secondary index on a primary key component, so you would need to duplicate that column (and apply the index to the non-primary-key column) to get that solution to work.
It might be a good idea to model and test both solutions for performance.
If you have the extra disk space, the best method would be to replicate the data in a second table; you should avoid using secondary indexes in production. Your application would, of course, need to write to both these tables (see the batch sketch after the definitions below). But Cassandra is darn good at making that efficient.
create table tasks_by_name (
user_id uuid,
name text,
task_id uuid,
description text,
primary key ((user_id), name, task_id)
);
create table tasks_by_id (
user_id uuid,
name text,
task_id uuid,
description text,
primary key ((user_id), task_id)
);
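A sketch of the dual write mentioned above, using a logged batch so the two tables stay in step (bind markers as in the queries above):
BEGIN BATCH
INSERT INTO tasks_by_name (user_id, name, task_id, description) VALUES (?, ?, ?, ?);
INSERT INTO tasks_by_id (user_id, name, task_id, description) VALUES (?, ?, ?, ?);
APPLY BATCH;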
