I have a fairly simple data model: I am tracking events for users based on timestamps. I'm converting a JSON object which has this schema:
userID: {
  event: [
    { timestamp: data },
    { timestamp: data }
  ]
}
I have come up with two Cassandra schemas.
The first:
CREATE TABLE users ( guid uuid, date timestamp, events varchar, PRIMARY KEY(guid, date) );
The second:
CREATE TABLE users ( guid uuid PRIMARY KEY, date timestamp, events map<text, text> );
Either one would work, requiring the data to be a stringified JSON object. My query will be returning all data from a user in a given time range. Which model makes more sense, or is there a better way to go about this?
The second approach won't allow you to do queries by time range since you don't have date as a clustering column. So you might want to do this:
CREATE TABLE users (
guid uuid,
date timestamp,
events map<text, text>,
PRIMARY KEY(guid, date) );
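For reference, the time-range query you describe would then look roughly like this (the uuid and the dates are made-up values, not from the question):
-- all of a user's events in a given time window
SELECT date, events
FROM users
WHERE guid = 62c36092-82a1-3a00-93d1-46196ee77204
  AND date >= '2017-01-01'
  AND date < '2017-02-01';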
How you want to define the events field depends on what's in there and how you need to access it. If you access small parts of it often, you might want to break events in the map out into separate rows by making the event key another clustering column like this:
CREATE TABLE users (
guid uuid,
date timestamp,
event_type text,
event_value text,
PRIMARY KEY(guid, date, event_type) );
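With this layout, a query that pulls out a single event type looks roughly like the sketch below (the uuid, date and event type are made up). Note that because event_type comes after date in the clustering order, restricting it requires an exact date; a pure time-range query would simply drop the event_type condition.
-- one event type for one user at an exact timestamp
SELECT date, event_type, event_value
FROM users
WHERE guid = 62c36092-82a1-3a00-93d1-46196ee77204
  AND date = '2017-01-15 09:30:00'
  AND event_type = 'login';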
It's hard to give more specific advice since you didn't describe your use case in terms of what queries you want to run and the volume of data, number of users, etc.
As Jim said, the second schema does not allow querying on the timestamp, since it is not part of the primary key.
He suggested a valid solution, but I would also suggest using a timeuuid instead of a separate uuid and timestamp if you can (a timeuuid provides both an id and a timestamp at once); a sketch of that variant follows below. However, if you sometimes need to look up users by id alone, then Jim's solution is probably the best:
PRIMARY KEY(guid, date, event_type)
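One way to read the timeuuid suggestion, keeping guid as the partition key and replacing the separate date column with a timeuuid clustering column, is sketched below; the table and column names and the literals are illustrative, not taken from the question:
CREATE TABLE user_events (
guid uuid,
event_time timeuuid,
event_type text,
event_value text,
PRIMARY KEY (guid, event_time, event_type)
);

-- time-range queries use minTimeuuid/maxTimeuuid on the timeuuid column
SELECT dateOf(event_time), event_type, event_value
FROM user_events
WHERE guid = 62c36092-82a1-3a00-93d1-46196ee77204
  AND event_time > maxTimeuuid('2017-01-01')
  AND event_time < minTimeuuid('2017-02-01');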
I have this schema in cassandra:
create table if not exists
converstation_events(
timestamp timestamp,
sender_id bigint,
conversation_id bigint,
message_type varchar,
message text,
primary key ((conversation_id), sender_id, message_type, timestamp));
There is a message_type with the value conversation_ended. Is there a way to denormalise the data so I can query the conversations that have already ended?
I've thought about having an extra field that can be updated by a trigger when a conversation_ended message hits the system, does this make sense?
In Cassandra you need to model your data in a way that answers your questions. It's not like an RDBMS where you create your model first and then write your queries. So think backwards...
When you do a query in Cassandra (for the most part) you need to query by the partition key, and you can use your clustering key(s) to filter or select ranges.
Your converstation_events table will give you answers about a conversation, filtering by sender, type and time. Note that if you want to filter by time you must also include sender_id and message_type in the query.
But you want all conversations of a given type, so you'll need another table to answer this query. If you want all the conversations that are conversation_ended you could create a second table to map message type to conversation, like:
create table conversation_by_message_type (
message_type varchar,
conversation_id bigint,
timestamp timestamp,
primary key ((message_type), timestamp, conversation_id));
On the client side you'll have to add a record to conversation_by_message_type any time you insert a converstation_events row with a message_type that you might want to look up later. I included timestamp in this table so you can sort or filter by time, or by time and conversation_id.
To find all the ended conversations you could do queries like
<ids> = select conversation_id from conversation_by_message_type where message_type = 'conversation_ended'
select * from conversation_events where conversation_id IN (<ids>)
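A sketch of keeping the two tables in step on the write path is shown below; the ids, timestamp and message text are made up. A logged batch ensures that both inserts are eventually applied together:
-- denormalised write: one insert per table, same logical event
BEGIN BATCH
INSERT INTO converstation_events (conversation_id, sender_id, message_type, timestamp, message)
VALUES (123, 456, 'conversation_ended', '2017-03-01 18:00:00', 'Conversation closed');
INSERT INTO conversation_by_message_type (message_type, timestamp, conversation_id)
VALUES ('conversation_ended', '2017-03-01 18:00:00', 123);
APPLY BATCH;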
I have some data in Cassandra. Say
create table MyTable (
id text PRIMARY KEY,
data text,
updated_on timestamp
);
My application in addition to querying this data by primary key id, needs to query it by updated_on timestamp as well. To fulfil the query by time use case I have tried the following.
create table MyTable (
id text PRIMARY KEY,
data text,
updated_on timestamp,
updated_on_minute timestamp
);
One option is a secondary index on the updated_on_minute field. As I understand it, secondary indexes are not recommended for high-cardinality cases (which is my case, because I could have a lot of data at the same minute mark). Moreover, my data gets updated frequently, which means updated_on_minute will keep changing.
Another option is a materialized view with updated_on_minute as the partition key and id as the clustering key. I am on Cassandra 3.9 and had just begun using these, but then I found the release notes for 3.11.x (https://github.com/apache/cassandra/blob/cassandra-3.11/NEWS.txt), which declare them purely experimental and not meant for production clusters.
So then what are my options? Do I just need to maintain my own tables to track data that comes in timewise? Would love some input on this.
Thanks in advance.
As has always been the case, create an additional table to query by a different partition key.
In your case the table would be
create table MyTable_by_timestamp (
id text,
data text,
updated_on timestamp,
PRIMARY KEY (updated_on, id)
);
Write to both tables: MyTable_by_timestamp and the original MyTable keyed by id. Read from whichever table matches the key you have, either updated_on or id.
It's absolutely fine to duplicate data based on the use case (query) it's trying to solve.
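A minimal sketch of the dual write and the two read paths, assuming the original MyTable plus the MyTable_by_timestamp table above (the id, payload and timestamp are made up):
-- write the same record to both tables
INSERT INTO MyTable (id, data, updated_on)
VALUES ('abc123', 'some payload', '2017-06-01 10:15:30');
INSERT INTO MyTable_by_timestamp (id, data, updated_on)
VALUES ('abc123', 'some payload', '2017-06-01 10:15:30');

-- read from whichever table matches the key you have
SELECT data, updated_on FROM MyTable WHERE id = 'abc123';
SELECT id, data FROM MyTable_by_timestamp WHERE updated_on = '2017-06-01 10:15:30';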
Edited:
If you are worried about huge partitions, you can always bucket the data into smaller partitions. For example, the table above could be broken down into:
create table MyTable_by_timestamp (
id text,
data text,
updated_on timestamp,
updated_min timestamp,
PRIMARY KEY (updated_min, id)
);
Here I have chosen every minute as the bucket size. Depending on how many updates you receive, you can change it to seconds (updated_sec) to reduce the partition size further.
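A sketch of how the bucketing works in practice (the values are made up): the client truncates updated_on to the minute to compute updated_min, and reads then target one minute-sized partition at a time:
-- the bucket key is updated_on truncated to the minute
INSERT INTO MyTable_by_timestamp (id, data, updated_on, updated_min)
VALUES ('abc123', 'some payload', '2017-06-01 10:15:42', '2017-06-01 10:15:00');

-- reading one minute's worth of updates
SELECT id, data, updated_on
FROM MyTable_by_timestamp
WHERE updated_min = '2017-06-01 10:15:00';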
I have a table in cassandra with following schema:
CREATE TABLE user_album_entity (
userId text,
albumId text,
updateDateTimestamp timestamp,
albumName text,
description text,
PRIMARY KEY ((userId), updateDateTimestamp)
);
The query required to get the data would be: where userId = xxx order by updateDateTimestamp. Hence the schema has updateDateTimestamp as a clustering column.
The problem comes when updating a column of the table. The query is: update the album information for the user where userId = xxx. But as per the specs, for an update query I would need the exact value of updateDateTimestamp, which in a real-world scenario an application would never send.
What is the answer to such problems, since I believe this is a very common use case where the select query requires ordering on a timestamp? Any help is much appreciated.
The problem is that your table structure allows the same album to have multiple records with the only difference being the timestamp (the clustering key).
Three possible solutions:
Remove the clustering key and sort your data at application level.
Remove the clustering key and add a Secondary Index to the timestamp field.
Remove the clustering key and create a Materialized View to perform the query.
If your use case is such that each partition will contain exactly one row, then you can model your table like this:
CREATE TABLE user_album_entity (
userId text,
albumId text static,
updateDateTimestamp timestamp,
albumName text static,
description text static,
PRIMARY KEY ((userId), updateDateTimestamp)
);
Modelling the table this way enables the update query to be done as follows:
UPDATE user_album_entity SET albumId = 'updatedAlbumId' WHERE userId = 'xyz'
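For completeness, a sketch of the full write path under this single-row-per-partition model (the values are made up): the INSERT records a timestamped row, while the static columns carry the album data once per partition, so later updates need only the userId:
-- first write creates the partition and sets the static album columns
INSERT INTO user_album_entity (userId, updateDateTimestamp, albumId, albumName, description)
VALUES ('xyz', '2018-03-01 09:00:00', 'album-1', 'Holidays', 'Trip photos');

-- later updates touch only static columns, so no timestamp is required
UPDATE user_album_entity SET albumName = 'Holidays 2018' WHERE userId = 'xyz';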
Hope this helps.
How would I go about modelling the data if I have a web app for messaging and I expect the user either to see all the messages ordered by date, or to see the messages exchanged with a specific contact, again ordered by date?
Should I have two tables, called "global_inbox" and "contacts_inbox" where I would add each message to both?
For example:
CREATE TABLE global_inbox(user_id int, timestamp timestamp,
message text, PRIMARY KEY(user_id, timestamp));
CREATE TABLE inbox(user_id int, contact_id int,
timestamp timestamp, message text,
PRIMARY KEY(user_id, contact_id, timestamp));
This means that every message should be copied 4 times, 2 for sender and 2 for receiver. Does it sound reasonable?
Yes, it's reasonable.
You need some modifications.
Inbox table: If a user has many contacts and every contact sends messages, then a huge amount of data will be inserted into a single partition (user_id). So add contact_id to the partition key.
Updated schema:
CREATE TABLE inbox (
user_id int,
contact_id int,
timestamp timestamp,
message text,
PRIMARY KEY((user_id, contact_id), timestamp)
);
global_inbox table: Even though it's a global inbox, a huge amount of data can be inserted into a single partition (user_id). So add more columns to the partition key for better distribution.
Updated schema:
CREATE TABLE global_inbox (
user_id int,
year int,
month int,
timestamp timestamp,
message text,
PRIMARY KEY((user_id,year,month), timestamp)
);
Here you can also add week to the partition key if you have a huge amount of data in a single partition within a week, or remove month from the partition key if you think not much data will be inserted in a year.
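For reference, reading the bucketed global_inbox then means one query per (user_id, year, month) partition, roughly as below (the values are made up); to show a longer period the client issues one query per bucket and stitches the results together:
-- one month of a user's global inbox, newest first
SELECT timestamp, message
FROM global_inbox
WHERE user_id = 42 AND year = 2017 AND month = 5
ORDER BY timestamp DESC;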
In terms of query performance, yes, it sounds good to me. Apache Cassandra is really built for this kind of data modeling. We build tables to satisfy queries; this is the process called 'denormalization' in the Cassandra paradigm, and it improves query performance. You have duplicated data, but the main goal is to have fast queries.
I'm researching how to store logs in Cassandra.
The schema for the logs would be something like this.
EDIT: I've changed the schema in order to clarify things.
CREATE TABLE log_date (
userid bigint,
time timeuuid,
reason text,
item text,
price int,
count int,
PRIMARY KEY ((userid), time) - #1
PRIMARY KEY ((userid), time, reason, item, price, count) - #2
);
A new table will be created every day, so each table contains logs for only one day.
My querying condition is as follows.
Query all logs from a specific user on a specific day (date, not time).
So the reason, item, price, count will not be used as hints or conditions for queries at all.
My question is which PRIMARY KEY design suits this better.
EDIT: And the key here is that I want to store the logs in a schematic way.
If I choose #1, a great many columns would be created per log, and the possibility of having more values per log is very high. The schema above is just an example; a log can contain values like subreason, friendid and so on.
If I choose #2, one (very) composite column will be created per log, and so far I couldn't find any useful information about the overhead of composite columns.
Which one should I choose? Please help.
My advice is that neither of your two options seems ideal for your time series, and the fact that you're creating a table per day doesn't seem optimal either.
Instead I'd recommend creating a single table, partitioning by userid and day, and using a timeuuid as the clustering column for the event. An example of this would look like:
CREATE TABLE log_per_day (
userid bigint,
date text,
time timeuuid,
value text,
PRIMARY KEY ((userid, date), time)
)
This will allow you to have all of a user's events for a day in a single partition and lets you run your query per day per user.
Declaring time as a clustering column gives you a wide row where you can insert as many events as you need in a day.
So the row key is a composite of the userid plus the date as text, e.g.
insert into log_per_day (userid, date, time, value) values (1000,'2015-05-06',aTimeUUID1,'my value')
insert into log_per_day (userid, date, time, value) values (1000,'2015-05-06',aTimeUUID2,'my value2')
The two inserts above will land in the same partition, and therefore you will be able to read them in a single query.
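For reference, the per-user, per-day read then looks like this, reusing the values from the inserts above:
-- all events for user 1000 on 2015-05-06, ordered by time
select dateOf(time), value
from log_per_day
where userid = 1000 and date = '2015-05-06';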
Also, if you want more information about time series, I highly recommend checking out Getting Started with Time Series Data Modeling.
Hope it helps,
José Luis