I am trying to model my time-series events with C*.
However, I am not entirely sure this model is the most efficient for my use cases.
CREATE TABLE events_by_hour (
event_time timestamp,
date_hour text,
customer_id text,
PRIMARY KEY ((customer_id,date_hour),event_time)
); // I don't want to keep data older than 2 weeks, so every column will have a TTL of 2 weeks + delta
Also, there will be another column family to track row keys, because of the use case below:
CREATE TABLE customer_rowKeys (
date_hour text,
customer_id text,
PRIMARY KEY (customer_id)
); // This column family will also have a ttl of 2 weeks
My use case is that I want events from time t1 to t2, but only 100 at a time; in the next call I want the subsequent 100, and so on. I can make use of a slice query on the columns and keep passing the offset in subsequent calls (something like pagination). The problem I see with this is that, since each row contains an hour of data, for some customers a row could be very wide and for others it could be skinny. Now, if the time difference between t1 and t2 is a week, how do I proceed? Also, not every customer will have data for every hour, so there will be holes. In the worst case, assume every hour had only 1 event: I will end up making 100 calls.
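For reference, the kind of per-partition page query I have in mind looks something like this (the literal values are just examples; the application would remember the last event_time it saw and pass it back in):
SELECT event_time
FROM events_by_hour
WHERE customer_id = 'cust-1'
AND date_hour = '2014-01-07:10'
AND event_time > '2014-01-07 10:15:00'
LIMIT 100;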
How can the above use case be optimally handled in Cassandra?
Related
Obviously when dealing with time-series data which relates to some natural partition key like sensor id it can be used as a primary key. But what to do if we are interested in a global view and there is no natural candidate for the partition key? If we model the schema like this:
CREATE TABLE my_data(
year smallint,
day smallint,
date timestamp,
value text,
PRIMARY KEY ((year, day), date)
) WITH CLUSTERING ORDER BY (date DESC);
It will (probably) work just fine for most cases, provided we know which year and days to fetch.
But what if we don't care what day it is and just want to see the 50 most recent items? What if we then want to see the next 50 items? Is there a way to do this in Cassandra? What is the recommended way of doing it?
Keep a second table of the year/days. When reading, you can grab from it first. When adding to my_data, update that table as well, but keep a cache of the days already inserted so each app only tries the insert once per day - i.e. adding an extra key so you can have multiple streams, not just a single table per time series:
CREATE TABLE my_data (
key blob,
year smallint,
day smallint,
date timestamp,
value text,
PRIMARY KEY ((key, year, day), date)
) WITH CLUSTERING ORDER BY (date DESC);
CREATE TABLE my_data_keys (
key blob,
year smallint,
day smallint,
PRIMARY KEY ((key), year, day)
);
For inserts:
INSERT INTO my_data_keys (key, year, day) VALUES (0x01, 1, 2)
INSERT INTO my_data ...
Then keep an in-memory Set somewhere recording which key/year/day combinations you have already stored, so you don't need to insert them every time. To read the most recent:
SELECT year, day FROM my_data_keys WHERE key = 0x01;
The driver returns an iterator; for each element in it, make a query to my_data until 50 records are reached.
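As a minimal sketch of that read loop in CQL (using the my_data table above; the key, year and day literals are just placeholders, and the LIMIT would shrink as results accumulate across queries):
SELECT date, value
FROM my_data
WHERE key = 0x01 AND year = 1 AND day = 2
LIMIT 50;
Repeat this for each (year, day) pair returned by my_data_keys until 50 rows have been collected.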
If inserts are frequent enough, you can just work backwards from "today", issuing queries until you get 50 events. If the data is sparse, though, that can be a lot of wasted reads and the extra table works better.
We store events in multiple tables depending on category.
Each event has an id but contains multiple subelements.
We have a lookup table to find events using the subelement_id.
Each subelement can participate in at most 7 events.
Hence a partition will hold at most 7 rows.
We will have 30-50 billion rows in eventlookup over a period of 5 years.
CREATE TABLE eventlookup (
subelement_id text,
recordtime timeuuid,
event_id text,
PRIMARY KEY ((subelement_id), recordtime)
)
Problem: How do we delete old data once we reach the 5 (or some other number) year mark.
We want to purge the "tail" at some specific intervals, say every week or month.
Approaches investigated so far:
TTL of X years (performs well, but the TTL needs to be known beforehand, and it costs 8 extra bytes per column)
NO delete - simply ignore the problem (somebody else's problem :0)
Rate-limited single-row deletes (a complete table scan and potentially billions of delete statements)
Split the table into multiple tables -> "CREATE TABLE eventlookupYYYY". Once a year is no longer needed, simply drop it. (Problem: every read would potentially have to query all the tables.)
Are there any other approaches we can consider?
Is there a design decision we can make now (we are not in production yet) that will mitigate the future problem?
If it's worth the extra space, track your subelement_ids for ranges of recordtimes in a separate table / column family.
Then you can easily get the ids to delete for records of a specific age, if you do not want to set a TTL a priori.
But keep in mind to make this tracking distribute well: a single date as the key will generate hotspots in your cluster and very wide rows, so think about a partition key like (date, chunk), where in the past I have used a random number from 0-10 for chunk. Also you might look at TimeWindowCompactionStrategy - here is a blog post about it: http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html
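A possible shape for such a tracking table (the table and column names here are illustrative, not something prescribed above):
CREATE TABLE subelements_by_day (
date text, // e.g. '2018-01-15'
chunk int, // random 0-10, picked by the writer
subelement_id text,
PRIMARY KEY ((date, chunk), subelement_id)
);
To purge, read all chunks for the dates that have aged out, collect the subelement_ids, and issue the deletes against eventlookup.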
Your partition key is only subelement_id, so all the (at most 7) events for a subelement, across all recordtimes, will be in one partition.
Given your table structure, you need to know the subelement_id of your data just to fetch a single row. With that assumption, your table structure can be improved a bit by sorting your data by recordtime DESC:
CREATE TABLE eventlookup (
subelement_id text,
recordtime timeuuid,
eventtype int,
parentid text,
partition bigint,
event_id text,
PRIMARY KEY ((subelement_id), recordtime)
)
WITH CLUSTERING ORDER BY (recordtime DESC);
Now all of your data is in descending order and this will give you a big advantage.
Suppose that you have multiple years of data (e.g. from 2000 to 2018). Assuming you need to keep only the last 5 years, you'd fetch data with something like:
SELECT * FROM eventlookup WHERE subelement_id = 'mysub_id' AND recordtime >= minTimeuuid('2013-01-01');
This query is efficient because C* will retrieve your data and stop scanning the partition exactly where you want it to: 5 years ago. The big plus is that if you have tombstones beyond that point, they won't impact your reads at all. That means you can "safely" trim everything older than that point by issuing a delete with
WHERE subelement_id = 'mysub_id' AND recordtime < minTimeuuid('2013-01-01');
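Spelled out as a complete statement (just a sketch; range deletes on clustering columns need Cassandra 3.0 or later if I remember correctly, and the cutoff date is illustrative):
DELETE FROM eventlookup
WHERE subelement_id = 'mysub_id' AND recordtime < minTimeuuid('2013-01-01');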
Beware that this delete will create tombstones; they will be skipped by your reads, BUT they will be read during compactions, so keep that in mind.
Alternatively, you can simply skip the delete part if you don't need to reclaim the storage space; your system will always run smoothly because you will always retrieve your data efficiently.
Creating the following employee column family in Cassandra
Case 1:
CREATE TABLE employee (
name text,
designation text,
gender text,
created_by text,
created_date timestamp,
modified_by text,
modified_date timestamp,
PRIMARY KEY (name)
);
From the UI, if I wanted to get all employees, it is not possible to retrieve them. Is that true?
select * from employee; //not possible as it is partitioned by name
Case 2:
I was told to do it this way to retrieve all the employees.
We need to design this with a static key, to retrieve all the employees.
CREATE TABLE employee (
static_name text,
name text,
designation text,
gender text,
created_by text,
created_date timestamp,
modified_by text,
modified_date timestamp,
PRIMARY KEY (static_name,name)
);
static_name (i.e. "EMPLOYEE") will be the partition key and name will be the clustering key; the primary key is the combination of both static_name and name.
static_name -> every time you add an employee, insert it with the static value, i.e. EMPLOYEE.
Now you will be able to do a "select all employees" query:
//this will return you all the employees
select * from employee where static_name='EMPLOYEE';
Is this true? Can't we use Case 1 to return all the employees?
Both approaches are OK, with some catches.
Approach 1:
When you say UI, I guess you mean a simple select * ... It's correct that this won't really work out of the box if you want to get every single row out, especially if the data set is big. You could use pagination on the driver (I'm not 100% sure, since I haven't had a case to use it in a while), but when I needed to jump over all the partitions I would use the token function, i.e.:
select token(name), name from employee limit 1;
system.token(name) | name
----------------------+------
-8839064797231613815 | a
Now you take the token value from the result and put it into the next query. This has to be done by your program. After it fetches all the elements whose token is greater than that value, you would also need to start over for all tokens lower than -8839064797231613815.
select token(name), name from employee where token(name) > -8839064797231613815 limit 1;
system.token(name) | name
----------------------+------
-8198557465434950441 | c
Then I would wrap this in a loop until all the elements had been fetched. (I think this is also how the Spark Cassandra connector does it when retrieving wide rows from a cluster.)
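For completeness, the other half of the scan (tokens lower than the starting value) is a query like the one below, paged through the same way by adding a lower bound on token(name) as you go:
select token(name), name from employee where token(name) < -8839064797231613815 limit 1;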
The disadvantage of this model is that it has to go all over the cluster and is more suited to analytical workloads. Since you mentioned a UI, it would take the user too long to get the result, so I would advise against using Approach 1 for anything UI-related.
Approach 2:
The disadvantage of the second one is that it creates what is called a hot row: every update goes to a single partition, and most of the time that is a bad model.
The advantage is that you can easily paginate over the one partition and get your data out using the pagination functions built into the driver.
This would however behave just fine if you have a moderate load (tens or low hundreds of updates per second) and a relatively low number of users; for, say, 100,000 users this would work just fine. If your numbers are greater, you have to somehow split the users across multiple partitions so that the "load" gets distributed more evenly.
One possibility is to include a letter of the alphabet in "EMPLOYEE" ... so you would have "EMPLOYEE_A", "EMPLOYEE_B", etc. This would work relatively well, but is again not ideal because of the lexicographical distribution: some partitions may get relatively larger amounts of data.
Another approach would be to create artificial buckets: say, by design, there are 10 buckets, and when you insert into the "EMPLOYEE" partition you just append a random bucket number to the static prefix ("EMPLOYEE_1" and so on). When retrieving, you go over each specific partition until you exhaust the results, as sketched below.
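A sketch of what that looks like against the Case 2 table (the bucket count of 10 comes from the example above; the suffix is picked at random by the application on each insert):
// write side: append a random bucket number 0-9 to the static prefix
INSERT INTO employee (static_name, name, designation, gender)
VALUES ('EMPLOYEE_3', 'john', 'engineer', 'male');
// read side: query each bucket partition and merge the results client-side
SELECT * FROM employee WHERE static_name='EMPLOYEE_0';
SELECT * FROM employee WHERE static_name='EMPLOYEE_1';
// ... and so on up to EMPLOYEE_9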
I am trying to store & retrieve data in cassandra in the following way:
Storing Data:
I created the table in the following way:
CREATE TABLE mydata (
myKey TEXT,
datetime TIMESTAMP,
value TEXT,
PRIMARY KEY (myKey,datetime)
);
where I would store a value for every minute for the last 5 years. So it stores 1440 * 365 * 5 = 2,628,000 records/columns per row (myKey as the row key).
INSERT INTO mydata(myKey, datetime, value) VALUES ('1234ABCD','2013-04-03 07:01:00','72F');
INSERT INTO mydata(myKey, datetime, value) VALUES ('1234ABCD','2013-04-03 07:02:00','72F');
INSERT INTO mydata(myKey, datetime, value) VALUES ('1234ABCD','2013-04-03 07:03:00','72F');
.................
I am able to store the data and it all works fine. However, I would like to know whether this is an efficient way of storing data horizontally (2,628,000 values for each key, for 1 million such keys altogether).
Retrieving Data:
After storing the data in the above format, I am able to select data using a simple select query for a period.
Ex:
SELECT *
FROM mydata
WHERE myKey='1234ABCD' AND datetime > '2013-04-03 07:01:00' AND datetime < '2013-04-03 07:04:00';
The query works fine and I get the result as expected.
However my question is:
How can I select only the values at certain intervals? For example, if I query data for a day, I would get 1440 values (1 for every minute). I would like to get values at every 10-minute interval (the value at every 10th minute), limiting the number of values to 144.
Is there a way to query the table if we use the above storage strategy?
If not, what are possible options to meet my requirement of querying data at a specific interval like 1-min, 10-min, 1-hour, 1-day etc?
Appreciate any other suggestions.
No, it is not right; in the future you will face problems, because per row key we can only store 2 billion records/columns. Beyond that it will not give an error, but it will not store the data reliably either.
For your problem, split the timestamp column into year, month, day, and time, like 2016, 04, 04 and 15:03:00. Also put year, month, and day into the partition key.
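A sketch of the table this suggests (the table name and int column types are my own choice, not prescribed above):
CREATE TABLE mydata_by_day (
myKey TEXT,
year INT,
month INT,
day INT,
datetime TIMESTAMP,
value TEXT,
PRIMARY KEY ((myKey, year, month, day), datetime)
);
Each partition then holds at most one key's data for one day (1440 values), well below the per-partition limits.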
You definitely need to bound your partition with a modular version of the timestamp. But the granularity really depends on your reads.
If you are mainly going to read per day, then use something like this: PK((myKey, yyyymmdd), time).
If mainly by week: PK((myKey, yyyyww), time); or by month, etc.
The problem then is that if you want to read values for a whole year, you'd better have a partition per week or per month (or even per year would do, I think, if you don't do any deletes); your partition size needs to stay smaller than about 100 MB.
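As a concrete sketch of the per-week variant (yyyyww is an illustrative integer bucket such as 201514; reading a whole year then means querying roughly 52 such partitions):
CREATE TABLE mydata_by_week (
myKey TEXT,
yyyyww INT, // e.g. 201514 for week 14 of 2015
datetime TIMESTAMP,
value TEXT,
PRIMARY KEY ((myKey, yyyyww), datetime)
);
SELECT datetime, value FROM mydata_by_week WHERE myKey='1234ABCD' AND yyyyww=201514;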
I'm on my research for storing logs to Cassandra.
The schema for logs would be something like this.
EDIT: I've changed the schema in order to make some clarification.
CREATE TABLE log_date (
userid bigint,
time timeuuid,
reason text,
item text,
price int,
count int,
PRIMARY KEY ((userid), time) - #1
PRIMARY KEY ((userid), time, reason, item, price, count) - #2
);
A new table will be created each day.
So a table contains logs for only one day.
My querying condition is as follows.
Query all logs from a specific user on a specific day (date, not time).
So the reason, item, price, count will not be used as hints or conditions for queries at all.
My Question is which PRIMARY KEY design suits better.
EDIT: And the key here is that I want to store the logs in a schematic way.
If I choose #1, many columns will be created per log, and the possibility of having more values per log is very high. The schema above is just an example; a log can contain values like subreason, friendid and so on.
If I choose #2, one (very) composite column will be created per log, and so far I couldn't find any valuable information about the overhead of composite columns.
Which one should I choose? Please help.
My advice is that neither of your two options seems ideal for your time series, and the fact that you're creating a table per day doesn't seem optimal either.
Instead, I'd recommend creating a single table, partitioning by userid and day, and using a timeuuid as the clustering column for the event. An example of this would look like:
CREATE TABLE log_per_day (
userid bigint,
date text,
time timeuuid,
value text,
PRIMARY KEY ((userid, date), time)
)
This will allow you to have all events of a day in a single row and lets you do your query per day per user.
Declaring time as the clustering column allows you to have a wide row where you can insert as many events as you need in a day.
So the row key is a composite key of the userid plus the date as text, e.g.:
insert into log_per_day (userid, date, time, value) values (1000,'2015-05-06',aTimeUUID1,'my value')
insert into log_per_day (userid, date, time, value) values (1000,'2015-05-06',aTimeUUID2,'my value2')
The two inserts above will go to the same row and therefore you will be able to read them in a single query.
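For example, the single-partition read described above could be (the date literal matches the inserts):
SELECT time, value FROM log_per_day WHERE userid = 1000 AND date = '2015-05-06';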
Also, if you want more information about time series, I highly recommend checking out Getting Started with Time Series Data Modeling.
Hope it helps,
José Luis