I want to create a table with these columns: id1, id2, type, time, data, version.
The frequent queries are:
select * from table_name where id1 = ... and id2 = ... and type = ...
select * from table_name where id1 = ... and type = ... and time > ... and time < ...
How should I set the primary key so that these queries are fast?
As you have two different queries, you will likely need to have two different tables for them to perform well. This is not unusual for Cassandra data models. Keep in mind that for both of these, the PRIMARY KEY definition in Cassandra is largely dependent on the cardinalities and anticipated query patterns. As you have only provided the latter, you may need to make adjustments based on the cardinalities of id1, id2, and type.
select * from table_name where id1 = X and id2 = Y and type = Z;
So here I'm going to make an educated guess that id1 and id2 are nigh unique (high cardinality), as IDs usually are. I don't know how many types are available in your application, but as long as there aren't more than 10,000 this should work:
CREATE TABLE table_name_by_ids (
id1 TEXT,
id2 TEXT,
type TEXT,
time TIMESTAMP,
data TEXT,
version TEXT,
PRIMARY KEY ((id1,id2),type));
This will key your partitions on a joint hash of id1 and id2, sorting the rows inside by type (default ascending).
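For completeness, the first query then runs against this new table; a sketch, with placeholder values:
select * from table_name_by_ids where id1 = 'X' and id2 = 'Y' and type = 'Z';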
select * from table_name where id1= X and type = Z and time > A and time < B;
Likewise, the table to support this query will look like this:
CREATE TABLE table_name_by_id1_time (
id1 TEXT,
id2 TEXT,
type TEXT,
time TIMESTAMP,
data TEXT,
version TEXT,
PRIMARY KEY ((id1),type,time))
WITH CLUSTERING ORDER BY (type ASC, time DESC);
Again, this should work as long as you don't have more than several thousand type/time combinations.
One final adjustment I would make, though, concerns judging just how many type/time combinations you expect to have over the life of the application. If this data grows over time, the above will cause the partitions to grow to an unmaintainable point. To keep that from happening, I'd also recommend adding a time "bucket":
CREATE TABLE table_name_by_id1_time (
...
version TEXT,
month_bucket TEXT,
PRIMARY KEY ((id1,month_bucket),type,time))
WITH CLUSTERING ORDER BY (type ASC, time DESC);
Likewise for this, the query will need to be adjusted as well:
select * from table_name_by_id1_time
where id1= 'X' and type = 'Z'
and month_bucket='201910'
and time > '2019-10-07 00:00:00' and time < '2019-10-07 16:22:12';
Hope this helps.
How do I guarantee the atomicity of these two insertions?
Simply put, you can run the two INSERTs together in an atomic (logged) batch. Note that a logged batch guarantees that all of its statements will eventually be applied, but it is not isolated: a concurrent reader may briefly see one write without the other.
BEGIN BATCH
INSERT INTO table_name_by_ids (
id1, id2, type, time, data, version
) VALUES (
'X', 'Y', 'Z', '2019-10-07 12:00:01','stuff','1.0'
) ;
INSERT INTO table_name_by_id1_time (
id1, id2, type, time, data, version, month_bucket
) VALUES (
'X', 'Y', 'Z', '2019-10-07 12:00:01','stuff','1.0','201910'
);
APPLY BATCH;
For more info, check out the DataStax docs on atomic batches: https://docs.datastax.com/en/dse/6.7/cql/cql/cql_using/useBatchGoodExample.html
Related
I am trying to figure out what advantages a compound partition key can provide. Look at the famous weather station example below.
CREATE TABLE temperature (
state text,
city text,
event_time timestamp,
temperature text,
PRIMARY KEY ((state, city),event_time)
);
Now, most of the time I query a single state for a set of cities and a range of dates. So the query is like:
SELECT * FROM temperature WHERE state = 'NY' AND city IN ('manhattan', 'brooklyn', 'queens') AND event_time > '2016-01-01';
Assume I have a large data set, in the sense that I have a few states (# < 1000), but for each state I have many, many cities (# > 100M). I replicate the data and distribute it across different nodes.
Question: can you compare the differences between
PRIMARY KEY ((state, city), event_time)
PRIMARY KEY ((city, state), event_time)
PRIMARY KEY (state, city, event_time)
PRIMARY KEY (zipcode, event_time)
Thank you!
Composite Key
PRIMARY KEY ((state, city), event_time)
PRIMARY KEY ((city, state), event_time)
These are functionally equivalent. The composite partition key will be the combined values of city and state; you will be unable to fully specify a partition without both portions. Within the partition, cells will be ordered by event_time. You will have #state * #city partitions:
[city, state] -> [event_time_0, event_time_1, event_time_2, event_time_3, ...]
You will be able to write queries like
SELECT * FROM TABLE WHERE CITY = X AND STATE = Y AND event_time (><=) SomeValue
Compound Keys
PRIMARY KEY (state, city,event_time)
One partition is made for every state. This is probably bad, since there are on the order of 100 states/provinces, which means you will have only a very small number of partitions. Data will be laid out within the partition by city and event_time:
[Illinois] --> [Chicago, 0], [Chicago, 1], [Peoria, 0], [Peoria, 1]
Queries will have to restrict city if they are also restricting event time.
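For example (a sketch, assuming a hypothetical table temperature_by_state defined with this compound key):
-- allowed: city is restricted before the range on event_time
SELECT * FROM temperature_by_state
WHERE state = 'NY' AND city = 'Albany' AND event_time > '2016-01-01';
-- rejected: city is skipped, so event_time cannot be restricted
SELECT * FROM temperature_by_state
WHERE state = 'NY' AND event_time > '2016-01-01';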
PRIMARY KEY (zipcode, event_time)
You will have up to 10k partitions, each of which will have a single cell for each event time.
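A query against that layout might look like this (a sketch; the table name temperature_by_zip is an assumption):
SELECT * FROM temperature_by_zip
WHERE zipcode = '10001' AND event_time > '2016-01-01';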
I have a column family defined like this:
CREATE TABLE sr_number_callrecord (
id int,
callerph text,
sr_number text,
callid text,
start_time text,
plan_id int,
PRIMARY KEY((sr_number), start_time, callerph)
);
I want to run queries like:
a) select * from dummy where sr_number='+919xxxx8383'
and start_time >='2014-12-02 08:23:18' limit 10;
b) select * from dummy where sr_number='+919xxxxxx83'
and start_time >='2014-12-02 08:23:18'
and callerph='+9120xxxxxxxx0' limit 10;
The first query works fine, but the second query gives an error like:
Bad Request: PRIMARY KEY column "callerph" cannot be restricted
(preceding column "start_time" is either not restricted or by a non-EQ
relation)
Since the first query returns results, I thought that in the second query I was just adding one more clustering key to filter the results further, so there would be fewer rows.
Just like you cannot skip PRIMARY KEY components, you may only use a non-equals operator on the last component that you query (which is why your first query works).
If you do need to serve both of the queries you have listed above, then you will need to have separate query tables for each. To serve the second query, a query table (with the same columns) will work if you define it with a PRIMARY KEY like this:
PRIMARY KEY((sr_number), callerph, start_time)
That way you are still specifying the parts of your PRIMARY KEY in order, and your non-equals condition is on the last PRIMARY KEY component.
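Spelled out, that query table might look like this (a sketch; the table name is an assumption, and the columns simply mirror your original definition):
CREATE TABLE sr_number_callrecord_by_caller (
id int,
callerph text,
sr_number text,
callid text,
start_time text,
plan_id int,
PRIMARY KEY((sr_number), callerph, start_time)
);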
There are certain restrictions on how the primary key columns may be used in the WHERE clause; see http://docs.datastax.com/en/cql/3.1/cql/cql_reference/select_r.html
One solution that will work in your situation is to change the order of clustering columns in the primary key
CREATE TABLE sr_number_callrecord (
id int,
callerph text,
sr_number text,
callid text,
start_time text,
plan_id int,
PRIMARY KEY((sr_number), callerph, start_time)
);
Now you can use a range query on the last column:
select * from sr_number_callrecord where sr_number = '1234' and callerph = '+91123' and start_time >= '1234';
I have been reading this section of the Cassandra docs and found the following a little puzzling:
Determine column overhead:
regular_total_column_size = column_name_size + column_value_size + 15
counter/expiring_total_column_size = column_name_size + column_value_size + 23
Every column in Cassandra incurs 15 bytes of overhead. Since each row in a table can have different column names as well as differing numbers of columns, metadata is stored for each column. For counter columns and expiring columns, you should add an additional 8 bytes (23 bytes total).
The way I interpret the above for a CQL3 defined schema such as:
CREATE TABLE mykeyspace.mytable(
id text,
report_id text,
subset_id text,
report_date timestamp,
start_date timestamp,
end_date timestamp,
subset_descr text,
x int,
y double,
z int,
PRIMARY KEY (id, report_id, subset_id)
);
is that each row will contain the metadata for the column names, e.g., the strings report_date, start_date, end_date, etc. and their type along with the data. However, it's not clear to me what it means that each row in a table can have different column names. This sounds wrong to me given the schema above is totally static, i.e., Cassandra 2.0 will most certainly complain if I try to write:
INSERT INTO mykeyspace.mytable (id, report_id , subset_id, x, y, z, w)
VALUES ( 'asd','qwe','rty',100,1.234,12, 123.123);
Bad Request: Unknown identifier w
Now it looks to me like column names are fixed given this table schema, and thus the metadata should not need to be stored for each row. I am guessing either the phrasing in the documentation is outdated (it's the same as for Cassandra 1.2) or I'm misunderstanding some core concept at work here.
Can anybody clarify? Bottom line: do I have to worry about the length of the names of my columns or not?
We have been playing it safe and used single-character names where possible (so the above columns would actually be i, r, s, dr, ds, de, sd, ...), but that makes the schema hard for humans to read and confusing to work with.
The easiest way to figure out what is going on in situations like this is to check the sstable2json (cassandra/bin) representation of your data. This will show you what actually ends up saved on disk.
Here is the example for your situation
[
{"key": "4b6579","columns": [
["rid1:ssid1:","",1401469033325000],
["rid1:ssid1:end_date","2004-10-03 00:00:00-0700",1401469033325000],
["rid1:ssid1:report_date","2004-10-03 00:00:00-0700",1401469033325000],
["rid1:ssid1:start_date","2004-10-03 00:00:00-0700",1401469033325000],
["rid1:ssid1:subset_descr","descr",1401469033325000],
["rid1:ssid1:x","1",1401469033325000],
["rid1:ssid1:y","5.5",1401469033325000],
["rid1:ssid1:z","1",1401469033325000],
["rid2:ssid2:","",1401469938599000],
["rid2:ssid2:end_date", "2004-10-03 00:00:00-0700",1401469938599000],
["rid2:ssid2:report_date","2004-10-03 00:00:00-0700",1401469938599000],
["rid2:ssid2:start_date","2004-10-03 00:00:00-0700",1401469938599000],
["rid2:ssid2:subset_descr","descr",1401469938599000],
["rid2:ssid2:x","1",1401469938599000],
["rid2:ssid2:y","5.5",1401469938599000],
["rid2:ssid2:z","1",1401469938599000]
]}
]
The value of the partition key is saved once per partition (per sstable), as you can see above; the column name in this case doesn't matter at all since it is implicit given the table. The column names for the clustering columns are also not present, because with C* you aren't allowed to insert without specifying all portions of the key.
What's left, though, does have the column name. This is needed in case a partial update to a row is made, so that it can be saved without the rest of the row information. You could imagine an update to a single column field in a row; to indicate which field this is, C* currently uses the column name, but there are tickets to change this to a smaller representation:
https://issues.apache.org/jira/browse/CASSANDRA-4175
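To make the partial-update case concrete, a statement like this sketch writes only the single cell for x, so that cell must carry its own column name and timestamp:
UPDATE mykeyspace.mytable SET x = 2
WHERE id = 'Key' AND report_id = 'rid1' AND subset_id = 'ssid1';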
To generate this
cqlsh
CREATE TABLE mykeyspace.mytable( id text, report_id text, subset_id text, report_date timestamp, start_date timestamp, end_date timestamp, subset_descr text, x int, y double, z int, PRIMARY KEY (id, report_id, subset_id) );
INSERT INTO mykeyspace.mytable (id, report_id , subset_id , report_date , start_date , end_date , subset_descr ,x, y, z) VALUES ( 'Key', 'rid1','ssid1', '2004-10-03','2004-10-03','2004-10-03','descr',1,5.5,1);
INSERT INTO mykeyspace.mytable (id, report_id , subset_id , report_date , start_date , end_date , subset_descr ,x, y, z) VALUES ( 'Key', 'rid2','ssid2', '2004-10-03','2004-10-03','2004-10-03','descr',1,5.5,1);
exit;
nodetool flush
bin/sstable2json $DATA_DIR/mytable/mykeyspace-mytable-jb-1-Data.db
I have a column family with a primary key definition like this:
...
PRIMARY KEY ((website_id, item_id), user_id, date)
which will be queried using queries such as:
SELECT * FROM myCF
WHERE website_id = 30 AND item_id = 10
AND user_id = 0 AND date > 'some_date' ;
However, I'd like to keep my column family ordered by date only, such that SELECT date FROM myCF; would return the most recent inserted date.
Due to the order of clustering columns, what I get is an order per user_id then per date.
If I change the primary key definition to:
PRIMARY KEY ((website_id, item_id), date, user_id)
I can no longer run the same query, as date must be restricted if user_id is.
I thought there might be some way to say:
...
PRIMARY KEY ((website_id, shop_id), store_id, date)
) WITH CLUSTERING ORDER BY (store_id RANDOMPLEASE, date DESC) ;
But it doesn't seem to exist. Worse, maybe this is completely stupid and I don't get why.
Is there any ways of achieving this? Am I missing something?
Many thanks!
Your query example restricts user_id, so that should work with the second table format. But if you are actually trying to run queries like:
SELECT * FROM myCF
WHERE website_id = 30 AND item_id = 10
AND date > 'some_date'
Then you need an additional table created to handle those queries; it would order only on date and not on user_id:
Create Table LookupByDate ... PRIMARY KEY ((website_id, item_id), date)
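Spelled out, that might look like the sketch below; the column types are assumptions, since the original definition wasn't shown:
CREATE TABLE LookupByDate (
website_id int,
item_id int,
user_id int,
date timestamp,
PRIMARY KEY ((website_id, item_id), date)
) WITH CLUSTERING ORDER BY (date DESC);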
In addition to your primary query, if all you are trying to get is "return the most recent inserted date", you may not need an additional table. You can use a static column to store the last update time per partition. See CASSANDRA-6561.
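A minimal sketch of that idea, assuming the table from the question (the column name last_update is made up):
ALTER TABLE myCF ADD last_update timestamp STATIC;
-- write it alongside each insert; the static cell is shared by the whole partition
UPDATE myCF SET last_update = '2016-01-02 00:00:00'
WHERE website_id = 30 AND item_id = 10;
-- the most recent inserted date for the partition
SELECT last_update FROM myCF WHERE website_id = 30 AND item_id = 10 LIMIT 1;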
It probably won't help your particular case (since I imagine your list of all users is unmanageably large), but if the condition on the first clustering column matches one of a relatively small set of values then you can use IN.
SELECT * FROM myCF
WHERE website_id = 30 AND item_id = 10
AND user_id IN ? AND date > 'some_date'
Don't use IN on the partition key because this will create an inefficient query that hits multiple nodes putting stress on the coordinator node. Instead, execute multiple asynchronous queries in parallel. But IN on a clustering column is absolutely fine.
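With concrete values bound in, that query looks like this (a sketch; the user IDs are placeholders):
SELECT * FROM myCF
WHERE website_id = 30 AND item_id = 10
AND user_id IN (1, 2, 3) AND date > 'some_date';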
How do we do millions of columns in CQL3? We have one special table where all rows are basically composite keys and very, very wide.
I was reading this question, which implied two ways:
Does collections in CQL3 have certain limits?
Also, the types of our composite keys are String.bytes and ordered by String.
We have an exact matching table that is Decimal.bytes and ordered by decimal.
How would one handle this in CQL3?
thanks,
Dean
"oh, and part of my question was missing since SO formatted it out of the question. I was looking for Decimal.bytes and String.bytes as my composite key....there is no "value", just a col name and I want all columns were decimal > 10 and decimal < 20 so to speak and the column name = 10 occurs multiple times as in 10.a, 10.b 11.c, 11.d, 11.e"
CREATE TABLE widerow
(
row_key text, //whatever
column_composite1 decimal,
column_composite2 text,
PRIMARY KEY(row_key,column_composite1,column_composite2)
)
SELECT * FROM widerow WHERE row_key=...
AND column_composite1>=10.0
AND column_composite1<=20.0
In that case, you can query with a range over column_composite1 and have, for EACH column_composite1, different values of column_composite2 (10.a, 10.b, 11.c, 11.d, 11.e, ...).
"How do I get all the columns where row_composite1 > "a" and row_composite1 < "b" in that use case? ie. I dont' care about the second half of the composite name. "
Two possible solutions here:
1. Make row_composite1 a component of the composite column (a clustering column)
2. Use OrderPreservingPartitioner (this is indeed strongly discouraged)
For solution 1
CREATE TABLE widerow
(
fake_row_key text, //whatever
column_composite1 text, // previously row_composite1
column_composite2 decimal,
column_composite3 text,
PRIMARY KEY(fake_row_key,column_composite1,column_composite2,column_composite3)
)
SELECT * FROM widerow WHERE fake_row_key=...
AND column_composite1>='a'
AND column_composite1<='b'
This modeling has some drawbacks, though. To be able to range query over the decimal values, you need to provide column_composite1 first:
SELECT * FROM widerow WHERE fake_row_key=...
AND column_composite1='a'
AND column_composite2>=10.0
AND column_composite2<=20.0
If your row key is itself a composite of two strings, as in your original description, the equivalent modeling uses a composite partition key:
CREATE TABLE widerow
(
row_composite1 text,
row_composite2 text,
column_name decimal,
value text,
PRIMARY KEY((row_composite1,row_composite2),column_name)
)
SELECT * FROM widerow WHERE row_composite1=...
AND row_composite2=...
AND column_name>=10.0
AND column_name<=20.0
ORDER BY column_name DESC