I tried reading up on DataStax blogs and documentation but could not find anything specific on this.
Is there a way to make two tables in Cassandra belong to the same partition?
For example:
CREATE TYPE addr (
street_address1 text,
city text,
state text,
country text,
zip_code text
);
CREATE TABLE foo (
account_id timeuuid,
data text,
site_id int,
PRIMARY KEY (account_id)
);
CREATE TABLE bar (
account_id timeuuid,
address_id int,
address frozen<addr>,
PRIMARY KEY (account_id, address_id)
);
Here I need to ensure that both of these tables/CFs will live on the same partition, so that for the same account_id both sets of data can be fetched from the same node.
Any pointers are highly appreciated.
Also, if someone has some experience in using UDTs (User Defined Types), I would like to understand how the backward compatibility would work. If I modify the "addr" UDT to have a couple more attributes (say, zip_code2 int and name text), how do the older rows that don't have these attributes work? Is it even compatible?
Thanks
If two tables share the same replication strategy and the same partition key, they will colocate their partitions. So as long as the two tables are in the same keyspace AND their partition keys match:
PRIMARY KEY (account_id) == PRIMARY KEY (account_id, address_id)
Any given account_id will be on (and replicated to) the same machines.
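As a quick sanity check (just a sketch; the ? bind markers stand in for a real account_id value), you can compare the partition token Cassandra computes for the same account_id in both tables - identical tokens mean identical replica placement:
SELECT token(account_id) FROM foo WHERE account_id = ?;
SELECT token(account_id) FROM bar WHERE account_id = ?;
-- both queries return the same token for a given account_id,
-- so the rows live on the same replica nodes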
Related
I have the following table structure:
CREATE TABLE test_keyspace.persons (
id uuid,
country text,
city text,
address text,
phone_number text,
PRIMARY KEY (id, country, address)
);
My main scenario is to get a person by id. But sometimes I want to get all cities inside a country and all persons inside a city as well.
I know that a Cassandra table must have at least one partition key and zero or more clustering keys, but I don't understand how to organize it to work most effectively (and to work at all).
Can anybody give me advice?
So it sounds like you want to be able to query by both id and country. Typically in Cassandra, the way to build your data models is a "one table == one query" approach. In that case, you would have two tables, just keyed differently:
CREATE TABLE test_keyspace.persons_by_id (
id uuid,
country text,
city text,
address text,
phone_number text,
PRIMARY KEY (id));
TBH, you don't really need to cluster on country and address, unless a person can have multiple addresses. But a single PK is a completely legit approach.
For the second table:
CREATE TABLE test_keyspace.persons_by_country (
id uuid,
country text,
city text,
address text,
phone_number text,
PRIMARY KEY (country,city,id));
This will allow you to query by country, with persons grouped/sorted by city and sorted by id. In theory, you could also serve the query by id approach here, as long as you also had the country and city. But that might not be possible in your scenario.
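For example (a quick sketch; 'US' and 'Austin' are just placeholder values), both of these queries are served by that one table:
SELECT * FROM test_keyspace.persons_by_country WHERE country = 'US';
SELECT * FROM test_keyspace.persons_by_country WHERE country = 'US' AND city = 'Austin';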
Duplicating data in Cassandra (NoSQL) to help queries perform better is ok. The trick becomes keeping the tables in-sync, but you can use the BATCH functionality to apply writes to both tables atomically.
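A minimal sketch of that (using bind markers as placeholders for the person's values) would be a logged batch covering both tables:
BEGIN BATCH
INSERT INTO test_keyspace.persons_by_id (id, country, city, address, phone_number) VALUES (?, ?, ?, ?, ?);
INSERT INTO test_keyspace.persons_by_country (id, country, city, address, phone_number) VALUES (?, ?, ?, ?, ?);
APPLY BATCH;
Keep in mind that a logged batch guarantees both writes will eventually be applied, not that readers see them at exactly the same instant.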
In case you haven't already, you might benefit from DataStax's (free) course on data modeling - Data Modeling with Apache Cassandra and DataStax Enterprise.
I have some data in Cassandra. Say
create table MyTable (
id text PRIMARY KEY,
data text,
updated_on timestamp
);
My application in addition to querying this data by primary key id, needs to query it by updated_on timestamp as well. To fulfil the query by time use case I have tried the following.
create table MyTable (
id text PRIMARY KEY,
data text,
updated_on timestamp,
updated_on_minute timestamp
);
A secondary index on the updated_on_minute field. As I understand, secondary indexes are not recommended for high cardinality cases (which is my case, because I could have a lot of data at the same minute mark). Moreover, my data gets frequently updated, which means updated_on_minute will keep changing.
A MaterializedView with updated_on_minute as the partition key and id as the clustering key. I am on Cassandra 3.9 and had just begun using these, but alas I found the release notes for 3.11.x (https://github.com/apache/cassandra/blob/cassandra-3.11/NEWS.txt), which declare them purely experimental and not meant for production clusters.
So then what are my options? Do I just need to maintain my own tables to track data that comes in timewise? Would love some input on this.
Thanks in advance.
As has always been the case, create an additional table to query by a different partition key.
In your case the table would be
create table MyTable_by_timestamp (
id text,
data text,
updated_on timestamp,
PRIMARY KEY (updated_on, id)
);
Write to both tables, mytable_by_timestamp and mytable_by_id. Use the corresponding table to READ from based on the partition key, either updated_on or id.
It's absolutely fine to duplicate data based on the use case (query) it's trying to solve.
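For example (a sketch, keeping the original MyTable for lookups by id):
select * from MyTable where id = ?;
select * from MyTable_by_timestamp where updated_on = ?;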
Edited:
In case there is a fear of huge partitions, you can always bucket into smaller partitions. For example, the table above could be broken down into:
create table MyTable_by_timestamp (
id text,
data text,
updated_on timestamp,
updated_min timestamp,
PRIMARY KEY (updated_min, id)
);
Here I have chosen every minute as the bucket size. Depending on how many updates you receive, you can change it to seconds (updated_sec) to reduce the partition size further.
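For example (a sketch; the timestamp is just a sample minute bucket), fetching everything updated within a given minute becomes a single-partition read:
select * from MyTable_by_timestamp where updated_min = '2020-01-01 10:15:00';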
Suppose I have table with the following structure
create table tasks (
user_id uuid,
name text,
task_id uuid,
description text,
primary key ((user_id), name, task_id)
);
It allows me to get all tasks for a user sorted by name ascending. Also, I added task_id to the primary key to avoid upserts. The following query works
select * from tasks where user_id = ?
as well as
select * from tasks where user_id = ? and name > ?
However, I cannot get a task with a specific task_id. For example, the following query fails
select * from tasks where user_id = ? and task_id = ?
with this error
PRIMARY KEY column "task_id" cannot be restricted as preceding column "name" is not restricted
It requires the name column to be specified, but at the moment I have only task_id (from the URL, for example) and user_id (from the session).
How should I create this table to support both queries? Or do I need to create a separate table for the second case? What is the common pattern in this situation?
You could simply add one more redundant column taskId with the same value as task_id and create a secondary index on taskId.
Then you can query with user_id = ? and taskId = ?.
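A minimal sketch of that idea (the taskId column and the index name are illustrative, not part of the original schema):
create table tasks (
user_id uuid,
name text,
task_id uuid,
taskId uuid,
description text,
primary key ((user_id), name, task_id)
);
create index tasks_taskid_idx on tasks (taskId);
select * from tasks where user_id = ? and taskId = ?;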
PRIMARY KEY column "task_id" cannot be restricted as preceding
column "name" is not restricted
You are seeing this error because CQL does not permit queries to skip primary key components.
How should I create this table to perform both queries? Or I need create separate table for second case? What is the common pattern in this situation?
As you suspect, the typical way that problems like this are solved with Cassandra is that an additional table is created for each query. In this case, recreating the table with a PRIMARY KEY designed to match your additional query pattern would simply look like this:
create table tasks_by_user_and_task (
user_id uuid,
name text,
task_id uuid,
description text,
primary key ((user_id), task_id)
);
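The id-based lookup then becomes a simple single-partition query (a sketch against the table above):
select * from tasks_by_user_and_task where user_id = ? and task_id = ?;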
You could simply add one more redundant column taskId with the same value as task_id and create a secondary index on taskId.
While I am usually not a fan of using secondary indexes, in this case it may perform ok. The reason is that you would still be restricting your query by partition key, which would eliminate the need to examine additional nodes. The drawback (as Undefined_variable pointed out) is that you cannot create a secondary index on a primary key component, so you would need to duplicate that column (and apply the index to the non-primary-key column) to get that solution to work.
It might be a good idea to model and test both solutions for performance.
If you have the extra disk space, the best method would be to replicate the data in a second table. You should avoid using secondary indexes in production. Your application would, of course, need to write to both these tables. But Cassandra is darn good at making that efficient.
create table tasks_by_name (
user_id uuid,
name text,
task_id uuid,
description text,
primary key ((user_id), name, task_id)
);
create table tasks_by_id (
user_id uuid,
name text,
task_id uuid,
description text,
primary key ((user_id), task_id)
);
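Each access path then reads from exactly one table (a sketch against the two tables above):
select * from tasks_by_name where user_id = ? and name > ?;
select * from tasks_by_id where user_id = ? and task_id = ?;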
I have a task to create a social feed (news feed). I think there is no need to explain the standard functionality - it works like Facebook's.
I chose Apache Cassandra as the solution and designed a column family Posts for storing information about users' posts:
CREATE TABLE Posts (
post_id uuid,
post_at timestamp,
user_id text,
name varchar,
category set<text>,
link varchar,
image set<varchar>,
video set<varchar>,
content map<text, text>,
private boolean,
PRIMARY KEY ((post_id, user_id), post_at)
)
WITH CLUSTERING ORDER BY (post_at DESC) COMPACT STORAGE;
The next table contains the ids of a user's posts:
CREATE TABLE posts_user (
post_id bigint,
post_at timestamp,
user_id bigint,
PRIMARY KEY ((post_id), post_at, user_id)
)
WITH CLUSTERING ORDER BY (post_at DESC) AND COMPACT STORAGE;
What do you think, is it good? What would you change in such a data model?
There are a couple of questions and a couple of improvements that jump out.
COMPACT STORAGE is deprecated now (if you want to take advantage of CQL 3 features). I do not think that you can create your table Posts as you have defined above since it uses CQL 3 features (collections) with COMPACT STORAGE as well as declaring more than one column that is not part of the primary key.
posts_user has completely different key types than Posts does. I am not clear on what the relationship between the two tables is, but I imagine that post_id is supposed to be consistent between them, whereas you have it as a uuid in one table and a bigint in the other. There are also discrepancies with the other fields.
Assuming post_id is unique and represents the id of an individual post, it is strange to have it as the first part of a compound primary key in the Posts table since if you know the post_id then you can already uniquely access the record. Furthermore, as it is part of the partition key it also prevents you from doing wider selects of multiple posts and taking advantage of your post_at ordering.
The common method to fix this is to create a dedicated index table to sort the data the way you want.
E.g.
CREATE TABLE posts (
id uuid,
created timestamp,
user_id uuid,
name text,
...
PRIMARY KEY (id)
);
CREATE TABLE posts_by_user_index (
user_id uuid,
post_id uuid,
post_at timestamp,
PRIMARY KEY (user_id, post_at, post_id)
) WITH CLUSTERING ORDER BY (post_at DESC);
Or more comprehensively:
CREATE TABLE posts_by_user_sort_index (
user_id uuid,
post_id uuid,
sort_field text,
sort_value text,
PRIMARY KEY ((user_id,sort_field),sort_value,post_id)
);
However, in your case if you only wish to select the data one way, then you can get away with using your posts table to do the sorting:
CREATE TABLE posts (
id uuid,
post_at timestamp,
user_id uuid,
name text,
...
PRIMARY KEY (user_id, post_at, id)
) WITH CLUSTERING ORDER BY (post_at DESC);
It will just make it more complicated if you wish to add additional indexes later since you will need to index each post not just by its post id, but by its user and post_at fields as well.
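For completeness (a sketch against the single-table variant above; the LIMIT is arbitrary), pulling a user's most recent posts is one single-partition read, and the CLUSTERING ORDER returns the newest posts first:
SELECT * FROM posts WHERE user_id = ? LIMIT 20;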
I am using Cassandra to store the big data.
In order to run several queries, I am afraid that I need to create different tables and insert almost the same data into these tables.
Will it affect the performance?
What I am doing right now is reading one record from the queue and pushing it into the five tables that I created in Cassandra.
If the records in one table take about 10 MB of space, will the five tables take 50 MB in total?
Is there a better way to store duplicated data in Cassandra?
Thank you.
If I create multiple primary keys, it always throws an exception.
That is expected; you can't have multiple primary keys. When you combine columns to make a primary key, it's called a composite (or compound) key. Example:
CREATE TABLE timeline (
user_id varchar,
tweet_id uuid,
author varchar,
body varchar,
PRIMARY KEY (user_id, tweet_id)); // PK consists of two columns!
For your data model I would try something along the lines of:
CREATE TABLE timeline (
user_id varchar,
module varchar,
host varchar,
body varchar,
PRIMARY KEY (user_id, module, host));
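A compound key like this lets you query at several levels of specificity, as long as you restrict the key columns in order (a sketch; the bind markers are placeholders):
SELECT * FROM timeline WHERE user_id = ?;
SELECT * FROM timeline WHERE user_id = ? AND module = ?;
SELECT * FROM timeline WHERE user_id = ? AND module = ? AND host = ?;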