I have been reading this section of the Cassandra docs and found the following a little puzzling:
Determine column overhead:
regular_total_column_size = column_name_size + column_value_size + 15
counter/expiring_total_column_size = column_name_size + column_value_size + 23
Every column in Cassandra incurs 15 bytes of overhead. Since each row in a table can have different column names as well as differing numbers of columns, metadata is stored for each column. For counter columns and expiring columns, you should add an additional 8 bytes (23 bytes total).
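If I apply the first formula to, say, the end_date column (an 8-byte name) holding a timestamp (an 8-byte value), I get 8 + 8 + 15 = 31 bytes of on-disk size for that single cell, if I read it correctly.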
The way I interpret the above for a CQL3 defined schema such as:
CREATE TABLE mykeyspace.mytable(
id text,
report_id text,
subset_id text,
report_date timestamp,
start_date timestamp,
end_date timestamp,
subset_descr text,
x int,
y double,
z int,
PRIMARY KEY (id, report_id, subset_id)
);
is that each row will contain the metadata for the column names, e.g., the strings report_date, start_date, end_date, etc., and their types, along with the data. However, it's not clear to me what it means that each row in a table can have different column names. This sounds wrong to me, given that the schema above is totally static; i.e., Cassandra 2.0 will most certainly complain if I try to write:
INSERT INTO mykeyspace.mytable (id, report_id , subset_id, x, y, z, w)
VALUES ( 'asd','qwe','rty',100,1.234,12, 123.123);
Bad Request: Unknown identifier w
Now it looks to me like column names are fixed given this table schema and thus the metadata should not need to be stored per each row. I am guessing either the phrasing in the documentation is outdated (it's the same as Cassandra 1.2) or I'm misunderstanding some core concept at work here.
Can anybody clarify? Bottom line: do I have to worry about the length of the names of my columns or not?
We have been playing it safe and using single-character names where possible (so the above columns would actually be i, r, s, dr, ds, de, sd, ...), but that is so unreadable for humans and can be confusing to work with.
The easiest way to figure out what is going on in situations like this is to check the sstable2json (cassandra/bin) representation of your data. This will show you what actually ends up being saved on disk.
Here is the example for your situation
[
{"key": "4b6579","columns": [
["rid1:ssid1:","",1401469033325000],
["rid1:ssid1:end_date","2004-10-03 00:00:00-0700",1401469033325000],
["rid1:ssid1:report_date","2004-10-03 00:00:00-0700",1401469033325000],
["rid1:ssid1:start_date","2004-10-03 00:00:00-0700",1401469033325000],
["rid1:ssid1:subset_descr","descr",1401469033325000],
["rid1:ssid1:x","1",1401469033325000],
["rid1:ssid1:y","5.5",1401469033325000],
["rid1:ssid1:z","1",1401469033325000],
["rid2:ssid2:","",1401469938599000],
["rid2:ssid2:end_date", "2004-10-03 00:00:00-0700",1401469938599000],
["rid2:ssid2:report_date","2004-10-03 00:00:00-0700",1401469938599000],
["rid2:ssid2:start_date","2004-10-03 00:00:00-0700",1401469938599000],
["rid2:ssid2:subset_descr","descr",1401469938599000],
["rid2:ssid2:x","1",1401469938599000],
["rid2:ssid2:y","5.5",1401469938599000],
["rid2:ssid2:z","1",1401469938599000]
]}
]
The value of the partition key is saved once per partition (per sstable), as you can see above; its column name doesn't matter at all since it is implicit given the table. The column names of the clustering columns are also not present, because in C* you aren't allowed to insert without specifying all parts of the key.
What's left, though, does have the column name. This is needed in case a partial update to a row is made, so that the update can be saved without the rest of the row's information. Imagine an update to a single field of a row: to indicate which field it is, C* currently uses the column name, but there are tickets to change this to a smaller representation.
https://issues.apache.org/jira/browse/CASSANDRA-4175
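To make the partial-update point concrete, an update like the following (values are just illustrative) writes only the single cell for x, which is why the column name has to be stored with that cell:
UPDATE mykeyspace.mytable
SET x = 42
WHERE id = 'Key' AND report_id = 'rid1' AND subset_id = 'ssid1';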
To generate this
cqlsh
CREATE TABLE mykeyspace.mytable( id text, report_id text, subset_id text, report_date timestamp, start_date timestamp, end_date timestamp, subset_descr text, x int, y double, z int, PRIMARY KEY (id, report_id, subset_id) );
INSERT INTO mykeyspace.mytable (id, report_id , subset_id , report_date , start_date , end_date , subset_descr ,x, y, z) VALUES ( 'Key', 'rid1','ssid1', '2004-10-03','2004-10-03','2004-10-03','descr',1,5.5,1);
INSERT INTO mykeyspace.mytable (id, report_id , subset_id , report_date , start_date , end_date , subset_descr ,x, y, z) VALUES ( 'Key', 'rid2','ssid2', '2004-10-03','2004-10-03','2004-10-03','descr',1,5.5,1);
exit;
nodetool flush
bin/sstable2json $DATA_DIR/mytable/mykeyspace-mytable-jb-1-Data.db
Related
I want to create a table with these columns: id1, id2, type, time, data, version.
The frequent queries are:
select * from table_name where id1 = ... and id2 =... and type = ...
select * from table_name where id1= ... and type = ... and time > ... and time < ...
I don't know how to set the primary key to make these queries fast.
As you have two different queries, you will likely need to have two different tables for them to perform well. This is not unusual for Cassandra data models. Keep in mind that for both of these, the PRIMARY KEY definition in Cassandra is largely dependent on the cardinalities and anticipated query patterns. As you have only provided the latter, you may need to make adjustments based on the cardinalities of id1, id2, and type.
select * from table_name where id1 = X and id2 = Y and type = Z;
So here I'm going to make an educated guess that id1 and id2 are nigh unique (high cardinality), as IDs usually are. I don't know how many types are available in your application, but as long as there aren't more than 10,000 this should work:
CREATE TABLE table_name_by_ids (
id1 TEXT,
id2 TEXT,
type TEXT,
time TIMESTAMP,
data TEXT,
version TEXT,
PRIMARY KEY ((id1,id2),type));
This will key your partitions on a joint hash of id1 and id2, sorting the rows inside by type (default ascending).
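The first query would then be issued against this table, for example (values illustrative):
SELECT * FROM table_name_by_ids
WHERE id1 = 'X' AND id2 = 'Y' AND type = 'Z';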
select * from table_name where id1= X and type = Z and time > A and time < B;
Likewise, the table to support this query will look like this:
CREATE TABLE table_name_by_id1_time (
id1 TEXT,
id2 TEXT,
type TEXT,
time TIMESTAMP,
data TEXT,
version TEXT,
PRIMARY KEY ((id1),type,time))
WITH CLUSTERING ORDER BY (type ASC, time DESC);
Again, this should work as long as you don't have more than several thousand type/time combinations.
One final adjustment I would make, though, concerns judging just how many type/time combinations you expect to have over the life of the application. If this data will grow over time, then the above will cause the partitions to grow to an unmaintainable point. To keep that from happening, I'd also recommend adding a time "bucket":
version TEXT,
month_bucket TEXT,
PRIMARY KEY ((id1,month_bucket),type,time))
WITH CLUSTERING ORDER BY (type ASC, time DESC);
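Putting that together, the full bucketed table definition would look something like this (a TEXT month_bucket is just one way to do it):
CREATE TABLE table_name_by_id1_time (
id1 TEXT,
id2 TEXT,
type TEXT,
time TIMESTAMP,
data TEXT,
version TEXT,
month_bucket TEXT,
PRIMARY KEY ((id1,month_bucket),type,time))
WITH CLUSTERING ORDER BY (type ASC, time DESC);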
Likewise for this, the query will need to be adjusted as well:
select * from table_name_by_id1_time
where id1= 'X' and type = 'Z'
and month_bucket='201910'
and time > '2019-10-07 00:00:00' and time < '2019-10-07 16:22:12';
Hope this helps.
How do I guarantee the atomicity of these two insertions?
Simply put, you can run the two INSERTs together in an atomic batch.
BEGIN BATCH
INSERT INTO table_name_by_ids (
id1, id2, type, time, data, version
) VALUES (
'X', 'Y', 'Z', '2019-10-07 12:00:01','stuff','1.0'
) ;
INSERT INTO table_name_by_id1_time (
id1, id2, type, time, data, version, month_bucket
) VALUES (
'X', 'Y', 'Z', '2019-10-07 12:00:01','stuff','1.0','201910'
);
APPLY BATCH;
For more info, check out the DataStax docs on atomic batches: https://docs.datastax.com/en/dse/6.7/cql/cql/cql_using/useBatchGoodExample.html
I am looking for a good way to store time-specific data in Cassandra.
Each entry can look like (start_time, value). Later, I would like to retrieve the current value.
The logic for retrieving the current value is as follows:
Find all rows with start_time<=current_time.
Then find the value with maximum start_time from the rows obtained in the first step.
PS: Edited the question to make it clearer.
The exact requirement is not possible as stated, but we can get close to it with one more column.
First, to be able to use the <= operator, your start_time column needs to be a clustering key of your table.
Then, you need a different partition key. You could choose a fixed value, but that could bring problems once the partition has too many rows. It's better to use something like the year or the month of start_time.
CREATE TABLE time_specific_table (
year bigint,
start_time timestamp,
value text,
PRIMARY KEY((year), start_time)
) WITH CLUSTERING ORDER BY (start_time DESC);
The problem is that when you query the table, you need to know the value of the partition key:
Find all rows with start_time<=current_time
SELECT * FROM time_specific_table
WHERE year = :year AND start_time <= :time;
select the value with maximum start_time
SELECT * FROM time_specific_table
WHERE year = :year LIMIT 1;
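When writing, the partition key is derived from start_time on the client side, for example (a minimal sketch with illustrative values):
INSERT INTO time_specific_table (year, start_time, value)
VALUES (2019, '2019-10-07 12:00:00', 'some value');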
Create two separate tables like below:
CREATE TABLE data (
start_time timestamp,
value int,
PRIMARY KEY(start_time, value)
);
CREATE TABLE current_value (
partition int PRIMARY KEY,
value int
);
Now you have to insert data into both tables; to insert data into the second table, use a static partition value like 1:
INSERT INTO current_value(partition, value) VALUES(1, 10);
Now, in the current_value table your data will be upserted, and you will get the latest value whenever you select from it.
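For completeness, here is a sketch of writing a sample row to the first table and reading the latest value back (values are illustrative):
INSERT INTO data (start_time, value) VALUES ('2019-10-07 12:00:00', 10);
SELECT value FROM current_value WHERE partition = 1;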
I want to store one value per second in a table. Therefore I tested two approaches against each other. If I have understood correctly, the data should be stored internally almost identically.
Wide-Row
CREATE TABLE timeseries (
id int,
date date,
timestamp timestamp,
value decimal,
PRIMARY KEY ((id, date), timestamp)
) WITH CLUSTERING ORDER BY (timestamp DESC) AND
compaction={'class': 'DateTieredCompactionStrategy'}
and compression = { 'sstable_compression' : 'DeflateCompressor' };
Skinny Row
CREATE TABLE timeseries(
id int,
date date,
"0" decimal, "1" decimal,"2" decimal, -- ... 86400 decimal values
-- each column index is the second of the day
PRIMARY KEY ((id, date))
)
Test:
10 different ids
1 million values (100,000 per id)
each value's timestamp is increased by one minute
In my test, the skinny-row approach for a sine function only consumes half of the storage for 1 million values. Even the random test shows a significant difference. Can somebody explain this behaviour?
The only difference between these schemas is the cell key.
A sample cell of the wide-row model:
["2017-06-09 15\\:05+0600:value","3",1496999149885944]
(cell key = clustering timestamp + column name, "3" = column value, 1496999149885944 = write timestamp)
And a sample cell of the skinny-row model:
["0","3",1497019292686908]
(cell key = column name, "3" = column value, 1497019292686908 = write timestamp)
You can clearly see that the wide-row model's cell key is the clustering timestamp plus the name of the value column, while the skinny-row model's cell key is only the column name.
The overhead of the wide-row model is therefore the timestamp (8 bytes) plus the size of the column name ("value"). You can keep the column name small and, instead of using a timestamp clustering column, use an int holding the second of the day, like your skinny-row column names. This will save more space.
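A minimal sketch of that adjustment (table and column names are just illustrative):
CREATE TABLE timeseries_by_second (
id int,
date date,
second int, -- second of the day, 0..86399, instead of a full timestamp
value decimal,
PRIMARY KEY ((id, date), second)
) WITH CLUSTERING ORDER BY (second DESC);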
In the table below, what is the maximum size the phone_numbers column can accommodate?
Like normal columns, is it 2 GB?
Or is it 64K*64K, as mentioned here?
CREATE TABLE d2.employee (
id int PRIMARY KEY,
doj timestamp,
name text,
phone_numbers map<text, text>
)
Collection types in Cassandra are represented as a set of distinct cells in the internal data model: you will have a cell for each key of your phone_numbers column. Therefore they are not normal columns, but a set of columns. You can verify this by executing the following command in cassandra-cli (1001 stands for a valid employee id):
use d2;
get employee[1001];
So the correct answer is your point 2 (64K*64K).
How in CQL3 do we do millions of columns? We have one special table where all rows are basically composite keys and very very wide.
I was reading this question that implied two ways
Does collections in CQL3 have certain limits?
Also, the types of our composite keys are String.bytes and ordered by String.
We have an exact matching table that is Decimal.bytes and ordered by decimal.
How would one handle this in CQL3?
thanks,
Dean
"oh, and part of my question was missing since SO formatted it out of the question. I was looking for Decimal.bytes and String.bytes as my composite key....there is no "value", just a col name and I want all columns were decimal > 10 and decimal < 20 so to speak and the column name = 10 occurs multiple times as in 10.a, 10.b 11.c, 11.d, 11.e"
CREATE TABLE widerow
(
row_key text, //whatever
column_composite1 decimal,
column_composite2 text,
PRIMARY KEY(row_key,column_composite1,column_composite2)
)
SELECT * FROM widerow WHERE row_key=...
AND column_composite1>=10.0
AND column_composite1<=20.0
In that case, you can query with range over column_composite1 and have for EACH column_composite1, different values of column_composite2 (10.a, 10.b 11.c, 11.d, 11.e...)
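For reference, rows in that layout would be written like this (values illustrative), each insert creating one cell in the same partition:
INSERT INTO widerow (row_key, column_composite1, column_composite2)
VALUES ('partition1', 10.5, 'a');
INSERT INTO widerow (row_key, column_composite1, column_composite2)
VALUES ('partition1', 11.2, 'c');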
"How do I get all the columns where row_composite1 > "a" and row_composite1 < "b" in that use case? ie. I dont' care about the second half of the composite name. "
2 possible solutions here
Make row_composite1 a composite component of column
Use OrderPreservingPartitioner (this is indeed strongly discouraged)
For solution 1
CREATE TABLE widerow
(
fake_row_key text, //whatever
column_composite1 text, // previously row_composite1
column_composite2 decimal,
column_composite3 text,
PRIMARY KEY(fake_row_key,column_composite1,column_composite2,column_composite3)
)
SELECT * FROM widerow WHERE fake_row_key=...
AND column_composite1>='a'
AND column_composite1<='b'
This modeling has some drawbacks, though. To be able to range query over the decimal values, you first need to provide column_composite1:
SELECT * FROM widerow WHERE fake_row_key=...
AND column_composite1='a'
AND column_composite2>=10.0
AND column_composite2<=20.0
CREATE TABLE widerow
(
row_composite1 text,
row_composite2 text,
column_name decimal,
value text,
PRIMARY KEY((row_composite1,row_composite2),column_name)
)
SELECT * FROM widerow WHERE row_composite1=...
AND row_composite2=...
AND column_name>=10.0
AND column_name<=20.0
ORDER BY column_name DESC