Multiple columns in Cassandra tables

I am wondering what happens when there are multiple Non-PK columns in a table. I've read this example:
http://johnsanda.blogspot.co.uk/2012/10/why-i-am-ready-to-move-to-cql-for.html
which shows that with a single column:
CREATE TABLE raw_metrics (
    schedule_id int,
    time timestamp,
    value double,
    PRIMARY KEY (schedule_id, time)
);
We get:
row key columns...
123 1339707619:"value" | 1339707679:"value" | 1339707784:"value"
...
Now I wonder what happens when we have two columns:
CREATE TABLE raw_metrics (
    schedule_id int,
    time timestamp,
    value1 double,
    value2 int,
    PRIMARY KEY (schedule_id, time)
);
Are we going to end up with something like:
row key columns...
123 1339707619:"value1" | 1339707679:"value2" | 1339707784:"value2"
...
or rather:
row key columns...
123 1339707619:"value1":"value2" | 1339707679:"value1":"value2" | 1339707784:"value1":"value2"
...
etc. I guess what I am asking is whether this is going to be a sparse table, given that I only insert "value1" or "value2" at a time.
In such situations, if I want to store more columns (one per type, e.g. double, int, date, etc.), would it perhaps be better to have separate tables rather than storing everything in a single table?

This post might help in explaining what is happening when composite keys are created:
Cassandra Composite Columns - How are CompositeTypes chosen?
So essentially the table will look like this:
row key columns...
123 1339707619:"value1" | 1339707679:"value2" | 1339707784:"value2"
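Because Cassandra only materializes cells for columns that are actually written, inserting just one of the two values per timestamp leaves no placeholder cell for the other - the table is sparse in exactly the way you describe. A small illustrative sketch (timestamps and values made up):
INSERT INTO raw_metrics (schedule_id, time, value1) VALUES (123, '2012-06-14 21:00:19', 2.5);
INSERT INTO raw_metrics (schedule_id, time, value2) VALUES (123, '2012-06-14 21:01:19', 7);
The first insert creates only a time:"value1" cell and the second only a time:"value2" cell.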
See also this reference on secondary indexes:
http://wiki.apache.org/cassandra/SecondaryIndexes

Related

Schema for tick data on cassandra

Overview
I would like to determine the correct schema in Cassandra for financial tick data.
Data and schema
I have the following sample data in csv:
SYMBOL,DATE,TIME,PRICE,SIZE
A,2011-01-03,9:28:00,41.46,200
A,2011-01-03,9:28:00,41.46,100
A,2011-01-03,9:30:00,41.56,1300
A,2011-01-03,9:30:00,41.56,1300
A,2011-01-03,9:30:00,41.55,100
A,2011-01-03,9:30:19,41.55,200
A,2011-01-03,9:30:23,41.5169,100
A,2011-01-03,9:30:29,41.44,66534
A,2011-01-03,9:30:29,41.45,225
A,2011-01-03,9:30:30,41.44,100
A,2011-01-03,9:30:30,41.43,100
A,2011-01-03,9:30:30,41.49,100
A,2011-01-03,9:30:30,41.45,200
and I store into the following table:
CREATE TABLE tickdata (
    symbol text,
    date date,
    time time,
    price float,
    size int,
    PRIMARY KEY ((symbol,date),time)
);
This is a slice of a SELECT on the table:
symbol | date | time | price | size
--------+------------+--------------------+---------+-------
A | 2011-01-03 | 09:28:00.000000000 | 41.46 | 100
A | 2011-01-03 | 09:30:00.000000000 | 41.56 | 1300
A | 2011-01-03 | 09:30:19.000000000 | 41.55 | 200
A | 2011-01-03 | 09:30:23.000000000 | 41.5169 | 100
A | 2011-01-03 | 09:30:29.000000000 | 41.45 | 66534
Use case
The data will be written to Cassandra once, and mostly read with conditions on date and symbol, e.g. a set of symbols for a given time-period.
Questions
The tuple (symbol,date,time) is not a proper PRIMARY KEY, since my granularity is limited to seconds. Hence COPY FROM, for example, drops the second row of the csv during the import due to the repetition in the key. How can I preserve the record?
Assuming the PRIMARY KEY is unique, how can I avoid storing repeated values of SYMBOL and DATE? Or is partitioning taking care of that under the hood?
I was thinking to use the following schema:
CREATE TABLE tickdata (
    symbol text,
    date date,
    time blob,
    price blob,
    size blob,
    PRIMARY KEY ((symbol,date))
);
to store raw data. Is this the correct way to address the points above?
The data is NOT ordered according to the definition of the PRIMARY KEY when I SELECT it. Is that related to the non-uniqueness problem mentioned above?
Should I stick with my binary file-store which keeps a map of symbols and dates and loads the relevant files on request? This avoids repeating symbol and date for each row and is indifferent to limited granularity (repetition) of the timestamp.
The tuple (symbol,date,time) is not a proper PRIMARY KEY, since my
granularity is limited to seconds. Hence COPY FROM, for example, drops
the second row of the csv during the import due to the repetition in
the key. How can I preserve the record?
The primary key in your first table definition is ((symbol,date),time), NOT (symbol,date,time). The two are different in Cassandra.
((symbol,date),time) => will store all records for the same symbol (A) and date on one node. Records for the same symbol (A) but another date might go to another node.
The row key will be symbol+date.
Physical Data layout (example)
|A_2011-01-03||time1.price & time1.size||time2.price & time2.size|
|A_2011-01-04||time1.price & time1.size||time2.price & time2.size|
|B_2011-01-03||time1.price & time1.size||time2.price & time2.size|
|B_2011-01-04||time1.price & time1.size||time2.price & time2.size|
(symbol,date,time) => all records for the same symbol will reside on one node. This might result in very wide rows.
The row key will be symbol.
Physical Data layout (example)
|A||date1.time1.price & date1.time1.size||date1.time2.price & date1.time2.size||date2.time1.price & date2.time1.size||date2.time2.price & date2.time2.size|
|B||date1.time1.price & date1.time1.size||date1.time2.price & date1.time2.size||date2.time1.price & date2.time1.size||date2.time2.price & date2.time2.size|
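To make the difference concrete, two illustrative queries (assuming the respective schemas; values taken from the sample csv):
-- With PRIMARY KEY ((symbol,date),time): one symbol-day is one partition
SELECT * FROM tickdata WHERE symbol = 'A' AND date = '2011-01-03';
-- With PRIMARY KEY (symbol,date,time): date is a clustering column, so a
-- date range within one symbol's wide partition is allowed
SELECT * FROM tickdata WHERE symbol = 'A' AND date >= '2011-01-03' AND date <= '2011-01-07';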
To avoid dropping records, you can add one more column, like a uuid or timeuuid:
CREATE TABLE tickdata (
    symbol text,
    date date,
    time time,
    price float,
    size int,
    id timeuuid,
    PRIMARY KEY ((symbol,date),time,id)
);
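With that schema both duplicate csv rows survive, since now() generates a distinct timeuuid for each insert. A sketch using the first two sample rows:
INSERT INTO tickdata (symbol, date, time, price, size, id)
VALUES ('A', '2011-01-03', '09:28:00', 41.46, 200, now());
INSERT INTO tickdata (symbol, date, time, price, size, id)
VALUES ('A', '2011-01-03', '09:28:00', 41.46, 100, now());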
Assuming the PRIMARY KEY is unique, how can I avoid storing repeated
values of SYMBOL and DATE? Or is partitioning taking care of that
under the hood?
Based on the physical storage structure explained above, this is already taken care of.
The alternate schema you are talking about would have only one record per symbol and date. You would have to handle the blobs on the application side... which I think might be an overhead.
The data is NOT ordered according to the definition of the PRIMARY KEY
when I SELECT it. Is that related to the non-uniqueness problem
mentioned above?
By default, data is ordered by the clustering key in ascending order (in your case, time). You can change this by setting the CLUSTERING ORDER BY property of the table to descending.
Example:
CREATE TABLE tickdata (
    symbol text,
    date date,
    time time,
    price float,
    size int,
    id timeuuid,
    PRIMARY KEY ((symbol,date),time,id)
) WITH CLUSTERING ORDER BY (time DESC, id DESC);
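With this clustering order a plain SELECT returns the newest ticks first, no ORDER BY needed, for example:
SELECT time, price, size FROM tickdata
WHERE symbol = 'A' AND date = '2011-01-03'
LIMIT 5;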
Should I stick with my binary file-store which keeps a map of symbols
and dates and loads the relevant files on request? This avoids
repeating symbol and date for each row and is indifferent to limited
granularity (repetition) of the timestamp.
You can decide this on your own :)

Cassandra: Is there a limit to amount of data that a collection column can hold?

In the table below, what is the maximum size the phone_numbers column can accommodate?
Like normal columns, is it 2GB?
Or is it 64K*64K, as mentioned here?
CREATE TABLE d2.employee (
    id int PRIMARY KEY,
    doj timestamp,
    name text,
    phone_numbers map<text, text>
)
Collection types in Cassandra are represented as a set of distinct cells in the internal data model: you will have a cell for each key of your phone_numbers column. Therefore they are not normal columns, but a set of columns. You can verify this by executing the following command in cassandra-cli (1001 stands for a valid employee id):
use d2;
get employee[1001];
So the correct answer is your point 2: a collection is capped at 64K entries of at most 64K bytes each, not a single 2GB cell.
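To make the per-entry cells concrete, here is a small sketch (the numbers are made up); every key you add to the map becomes one more internal cell in the row:
INSERT INTO d2.employee (id, doj, name, phone_numbers)
VALUES (1001, '2015-01-05', 'Alice', {'home': '555-1234', 'mobile': '555-9876'});
-- Adding one more entry appends one more cell:
UPDATE d2.employee SET phone_numbers['work'] = '555-4321' WHERE id = 1001;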

Select 2000 most recent log entries in cassandra table using CQL (Latest version)

How do you query and filter by timeuuid, i.e. assuming you have a table with
create table mystuff(uuid timeuuid primary key, stuff text);
i.e. how do you do:
select uuid, unixTimestampOf(uuid), stuff
from mystuff
order by uuid desc
limit 2000
I also want to be able to fetch the next older 2000 and so on, but that's a different problem. The error is:
Bad Request: ORDER BY is only supported when the partition key is restricted by an EQ or an IN.
and just in case it matters, the real table is actually this:
CREATE TABLE audit_event (
    uuid timeuuid PRIMARY KEY,
    event_time bigint,
    ip text,
    level text,
    message text,
    person_uuid timeuuid
) WITH
    bloom_filter_fp_chance=0.010000 AND
    caching='KEYS_ONLY' AND
    comment='' AND
    dclocal_read_repair_chance=0.000000 AND
    gc_grace_seconds=864000 AND
    read_repair_chance=0.100000 AND
    replicate_on_write='true' AND
    populate_io_cache_on_flush='false' AND
    compaction={'class': 'SizeTieredCompactionStrategy'} AND
    compression={'sstable_compression': 'SnappyCompressor'};
I would recommend that you design your table a bit differently. It would be rather hard to achieve what you're asking for with your current design.
At the moment each of your entries in the audit_event table receives its own uuid as the row key, so internally Cassandra will create many short rows. Querying such rows is inefficient, and additionally they are ordered randomly (unless you use the ByteOrderedPartitioner, which you should avoid for good reasons).
However, Cassandra is pretty good at sorting columns. If (back to your example) you declared your table like this:
CREATE TABLE mystuff (
    yymmddhh varchar,
    created timeuuid,
    stuff text,
    PRIMARY KEY (yymmddhh, created)
);
Cassandra internally would create a row where the key is the hour of the day, the column names are the actual created timeuuids, and the data is the stuff. That makes it efficient to query.
Consider that you have the following data (to make it easier I won't go to 2k records, but the idea is the same):
insert into mystuff(yymmddhh, created, stuff) VALUES ('13081615', now(), '90');
insert into mystuff(yymmddhh, created, stuff) VALUES ('13081615', now(), '91');
insert into mystuff(yymmddhh, created, stuff) VALUES ('13081615', now(), '92');
insert into mystuff(yymmddhh, created, stuff) VALUES ('13081615', now(), '93');
insert into mystuff(yymmddhh, created, stuff) VALUES ('13081615', now(), '94');
insert into mystuff(yymmddhh, created, stuff) VALUES ('13081616', now(), '95');
insert into mystuff(yymmddhh, created, stuff) VALUES ('13081616', now(), '96');
insert into mystuff(yymmddhh, created, stuff) VALUES ('13081616', now(), '97');
insert into mystuff(yymmddhh, created, stuff) VALUES ('13081616', now(), '98');
Now let's say that we want to select the last two entries (assume for the moment that we know the "latest" row key is '13081616'). You can do it by executing a query like this:
SELECT * FROM mystuff WHERE yymmddhh = '13081616' ORDER BY created DESC LIMIT 2 ;
which should give you something like this:
yymmddhh | created | stuff
----------+--------------------------------------+-------
13081616 | 547fe280-067e-11e3-8751-97db6b0653ce | 98
13081616 | 547f4640-067e-11e3-8751-97db6b0653ce | 97
To get the next 2 rows you have to take the last value from the created column and use it in the next query:
SELECT * FROM mystuff WHERE yymmddhh = '13081616'
AND created < 547f4640-067e-11e3-8751-97db6b0653ce
ORDER BY created DESC LIMIT 2 ;
If you receive fewer rows than expected, you should change your row key to the previous hour.
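For example, assuming the previous bucket is '13081615':
SELECT * FROM mystuff WHERE yymmddhh = '13081615' ORDER BY created DESC LIMIT 2;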
Row key handling / calculation
For now I've assumed that we know the row key with which we want to query the data. If you log a lot of information I'd say that's not a problem - you can just take the current time and issue a query with the hour set to the hour we have now. If we run out of rows we can subtract one hour and issue another query.
However, if you don't know where your data lies, or if it's not distributed evenly, you can create a metadata table where you'd store the information about the row keys:
CREATE TABLE mystuff_metadata (
    yyyy varchar,
    yymmddhh varchar,
    PRIMARY KEY (yyyy, yymmddhh)
) WITH COMPACT STORAGE;
The row keys would be organized by a year, so to get the latest row key from the current year you'd have to issue a query:
SELECT yymmddhh
FROM mystuff_metadata where yyyy = '2013'
ORDER BY yymmddhh DESC LIMIT 1;
Your audit software would have to make an entry in that table on startup and then on each hour change (for example, just before inserting data into mystuff).
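That bookkeeping write is just one more insert, e.g. (bucket value illustrative):
INSERT INTO mystuff_metadata (yyyy, yymmddhh) VALUES ('2013', '13081616');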

Timestamp / date as key for cassandra column family / hector

I have to create and query a column family with a composite key of [timestamp,long]. Also,
while querying I want to fire a range query on the timestamp (like timestamp between xxx and yyy). Is this possible?
Currently I am doing something really funny (which I know is not correct). I create keys with the timestamp string for the given range and concatenate with the long,
like:
1254345345435-1234
3423432423432-1234
1231231231231-9999
and pass the set of keys to the Hector API. (So if I have a date range of 1 month and I want every-minute data, I create 30 * 24 * 60 * [number of secondary keys - long] keys.)
I can solve the concatenation issue with a composite key. But the query part is what I am trying to understand.
As far as I understand, since we are using the RandomPartitioner we cannot really query by range, as keys are stored as MD5 checksums. What's the ideal design for this kind of use case?
My schema and requirements are as follows (actual CQL):
CREATE TABLE report (
    ts timestamp,
    user_id bigint,
    svc1 bigint,
    svc2 bigint,
    svc3 bigint,
    PRIMARY KEY (ts, user_id)
);
SELECT * FROM report WHERE ts > 123445345435 AND ts < 32423423424 AND user_id IN (123, 567, 987);
You cannot do range queries on the first component of a composite key. Instead, you should write a sentinel value such as a daystamp (the unix epoch at midnight on the current day) as the key, then write a composite column as timestamp:long. This way you can provide the keys that comprise your range, and slice on the timestamp component of the composite column.
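A sketch of that idea in CQL3 terms (table and column names are illustrative; 1354492800 is midnight UTC on 2012-12-03): the daystamp is the partition key, and (ts, user_id) are clustering columns, so the timestamp range becomes a column slice within each day you query.
CREATE TABLE report_by_day (
    day bigint,        -- unix epoch at midnight: the sentinel row key
    ts timestamp,
    user_id bigint,
    svc1 bigint,
    svc2 bigint,
    svc3 bigint,
    PRIMARY KEY (day, ts, user_id)
);
-- One query per day bucket in the range, slicing on ts:
SELECT * FROM report_by_day
WHERE day = 1354492800
AND ts >= '2012-12-03 00:00:00' AND ts < '2012-12-04 00:00:00';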
Denormalize! You must model your schema in a manner that will enable the types of queries you wish to perform. We create a reverse (aka inverted, inverse) index for such scenarios.
CREATE TABLE report (
    KEY uuid PRIMARY KEY,
    svc1 bigint,
    svc2 bigint,
    svc3 bigint
);
CREATE TABLE ReportsByTime (
    KEY ascii PRIMARY KEY
) WITH default_validation=uuid AND comparator=uuid;
CREATE TABLE ReportsByUser (
    KEY bigint PRIMARY KEY
) WITH default_validation=uuid AND comparator=uuid;
See here for a nice explanation. What you are doing now is generating your own ascii key in the times table to enable yourself to perform the range slice query you want - it doesn't have to be ascii, though, just something you can use to programmatically generate your own slice keys.
You can use this approach to facilitate all of your queries; this likely isn't going to suit your application directly, but the idea is the same. You can squeeze more out of this by adding meaningful values to the column keys of each table above.
cqlsh:tester> select * from report;
KEY | svc1 | svc2 | svc3
--------------------------------------+------+------+------
1381b530-1dd2-11b2-0000-242d50cf1fb5 | 332 | 333 | 334
13818e20-1dd2-11b2-0000-242d50cf1fb5 | 222 | 223 | 224
13816710-1dd2-11b2-0000-242d50cf1fb5 | 112 | 113 | 114
cqlsh:tester> select * from times;
KEY,1212051037 | 13818e20-1dd2-11b2-0000-242d50cf1fb5,13818e20-1dd2-11b2-0000-242d50cf1fb5 | 1381b530-1dd2-11b2-0000-242d50cf1fb5,1381b530-1dd2-11b2-0000-242d50cf1fb5
KEY,1212051035 | 13816710-1dd2-11b2-0000-242d50cf1fb5,13816710-1dd2-11b2-0000-242d50cf1fb5 | 13818e20-1dd2-11b2-0000-242d50cf1fb5,13818e20-1dd2-11b2-0000-242d50cf1fb5
KEY,1212051036 | 13818e20-1dd2-11b2-0000-242d50cf1fb5,13818e20-1dd2-11b2-0000-242d50cf1fb5
cqlsh:tester> select * from users;
KEY | 13816710-1dd2-11b2-0000-242d50cf1fb5 | 13818e20-1dd2-11b2-0000-242d50cf1fb5
-------------+--------------------------------------+--------------------------------------
23123123231 | 13816710-1dd2-11b2-0000-242d50cf1fb5 | 13818e20-1dd2-11b2-0000-242d50cf1fb5
Why don't you use wide rows, where the key is the timestamp and the column names are the long values? Then you can pass multiple keys (timestamps) to getKeySlice and select multiple columns in withColumnSlice by their names (which are the ids).
As I don't know what your column names and values are, I feel this can help you. Can you provide more details of your column family definition?

Cassandra Composite Columns - How are CompositeTypes chosen?

I'm trying to understand the type used when I create composite columns.
I'm using CQL3 (via cqlsh) to create the CF and then the CLI to issue a describe command.
The Types in the Columns sorted by: ...CompositeType(Type1,Type2,...) are not the ones I'm expecting.
I'm using Cassandra 1.1.6.
CREATE TABLE CompKeyTest1 (
    KeyA int,
    KeyB int,
    KeyC int,
    MyData varchar,
    PRIMARY KEY (KeyA, KeyB, KeyC)
);
The returned CompositeType is
CompositeType(Int32,Int32,UTF8)
Shouldn't it be (Int32,Int32,Int32)?
CREATE TABLE CompKeyTest2 (
    KeyA int,
    KeyB varchar,
    KeyC int,
    MyData varchar,
    PRIMARY KEY (KeyA, KeyB, KeyC)
);
The returned CompositeType is
CompositeType(UTF8,Int32,UTF8)
Why isn't it the same as the types used when I define the table? I'm probably missing something basic in the type assignment...
Thanks!
The composite column name is composed of the values of primary keys 2...n and the name of the non-primary key column being saved.
(So if you have 5 non-key fields then you'll have five such columns, and their column names will differ only in the last composed value, which would be the non-key field name.)
So in both examples the composite column is made up of the values of KeyB, KeyC and the name of the column being stored ("MyData", in both cases). That's why you're seeing those CompositeTypes being returned.
(btw, the first key in the primary key is the partitioning key and its value is only used as the row key (if you're familiar with Cassandra under the covers). It is not used as part of any of the composite column names.)
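To make that concrete, a hypothetical insert and the internal cells it produces:
INSERT INTO CompKeyTest1 (KeyA, KeyB, KeyC, MyData) VALUES (1, 2, 3, 'hello');
-- Internally (CQL3 on Cassandra 1.1) the row key is KeyA's value (1), and one
-- composite column is created whose name is KeyB's value : KeyC's value : the
-- literal column name, i.e. 2:3:"MyData" -> 'hello', which is why the
-- comparator is CompositeType(Int32, Int32, UTF8).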
