Cassandra wide row with every column as composite key

I have a wide-row table with columns
page_id int, user_id int, session_tid timeuuid and end_time timestamp,
with user_id as the partition key.
I need to run multiple queries against the table, some filtering on one column and some on another; it turns out I have cases with a where clause on every column.
Since Cassandra doesn't allow a where clause on a non-indexed, non-key column, is it OK to make all of the columns part of my composite key? (Currently all but the end_time column are already in the composite key, with user_id as the partition key.)

Making all columns part of the primary key will not let you apply where conditions to each column in the way you're thinking.
To take an easy example, if you create a primary key such as
PK(key1, key2, key3, key4)
you won't be able to perform a query like
select * from mytable where key2 = 'xyz';
The rule is that you must follow the order of the keys when building a multiple-where condition.
So valid queries with multiple where are the following:
select * from mytable where key1 = 'xyz' and key2 = 'abc';
select * from mytable where key1 = 'xyz' and key2 = 'abc' and key3 = 11;
select * from mytable where key1 = 'xyz' and key2 = 'abc' and key3 = 11 and key4 = 2014;
You can restrict keyN only if key1 through keyN-1 are also restricted by equality.
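Applied to the table from the question, a minimal sketch (the table name and the clustering order page_id, session_tid are assumptions, and minTimeuuid needs Cassandra 2.0+):
CREATE TABLE sessions (
    user_id int,
    page_id int,
    session_tid timeuuid,
    end_time timestamp,
    PRIMARY KEY ((user_id), page_id, session_tid)
);
-- valid: keys restricted in order, starting from the partition key
SELECT * FROM sessions WHERE user_id = 1;
SELECT * FROM sessions WHERE user_id = 1 AND page_id = 7;
SELECT * FROM sessions WHERE user_id = 1 AND page_id = 7
    AND session_tid > minTimeuuid('2015-01-01 00:00:00');
-- invalid: skips page_id, so Cassandra rejects it
-- SELECT * FROM sessions WHERE user_id = 1
--     AND session_tid > minTimeuuid('2015-01-01 00:00:00');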
HTH,
Carlo

Related

c* schema - denormalization vs materialized view

Schema is:
col0 int,
col1 text,
col2 text,
stamp timestamp,
somemap map<text, int>
I want to query for somemap
- using col0, col1 and a range of stamp
- using col0, col1, col2 and a range of stamp
I need every value of somemap for distinct col0, col1, col2, stamp to be present for either query (i.e. for the first query I want all the values of col2 to be there)
I've tried various combinations of columns for primary key but I can't find one that permits both types of queries.
I can denormalize this and create both types of tables:
- primary key ((col0, col1), stamp, col2)
- primary key ((col0, col1), col2, stamp)
What I'm hoping for is a way to use a materialized view to accomplish this.
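For what it's worth, a sketch of how that could look, assuming Cassandra 3.0+ materialized views (the names base and base_by_col2 are made up, and you should verify that your version allows carrying a map column into a view). The base table serves the first query and the view serves the second:
CREATE TABLE base (
    col0 int,
    col1 text,
    col2 text,
    stamp timestamp,
    somemap map<text, int>,
    PRIMARY KEY ((col0, col1), stamp, col2)
);
CREATE MATERIALIZED VIEW base_by_col2 AS
    SELECT * FROM base
    WHERE col0 IS NOT NULL AND col1 IS NOT NULL
      AND col2 IS NOT NULL AND stamp IS NOT NULL
    PRIMARY KEY ((col0, col1), col2, stamp);
-- first query: the base table, col2 left free, range of stamp
SELECT somemap FROM base
 WHERE col0 = 0 AND col1 = 'a'
   AND stamp >= '2015-01-01' AND stamp < '2015-02-01';
-- second query: the view, col2 pinned, then a range of stamp
SELECT somemap FROM base_by_col2
 WHERE col0 = 0 AND col1 = 'a' AND col2 = 'b'
   AND stamp >= '2015-01-01' AND stamp < '2015-02-01';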

How to do the query in Cassandra if I have two clustering keys in a column family

I have a column family created with syntax like this:
CREATE TABLE sr_number_callrecord (
id int,
callerph text,
sr_number text,
callid text,
start_time text,
plan_id int,
PRIMARY KEY((sr_number), start_time, callerph)
);
I want to do queries like:
a) select * from dummy where sr_number='+919xxxx8383'
and start_time >='2014-12-02 08:23:18' limit 10;
b) select * from dummy where sr_number='+919xxxxxx83'
and start_time >='2014-12-02 08:23:18'
and callerph='+9120xxxxxxxx0' limit 10;
The first query works fine but the second query gives an error like
Bad Request: PRIMARY KEY column "callerph" cannot be restricted
(preceding column "start_time" is either not restricted or by a non-EQ
relation)
Since the first query returns results, in the second query I am just adding one more clustering key to filter further, so fewer rows should come back.
Just like you cannot skip PRIMARY KEY components, you may only use a non-equals operator on the last component that you query (which is why your first query works).
If you do need to serve both of the queries you have listed above, then you will need to have separate query tables for each. To serve the second query, a query table (with the same columns) will work if you define it with a PRIMARY KEY like this:
PRIMARY KEY((sr_number), callerph, start_time)
That way you are still specifying the parts of your PRIMARY KEY in order, and your non-equals condition is on the last PRIMARY KEY component.
There are certain restrictions on how the primary key columns can be used in the where clause: http://docs.datastax.com/en/cql/3.1/cql/cql_reference/select_r.html
One solution that will work in your situation is to change the order of the clustering columns in the primary key:
CREATE TABLE sr_number_callrecord (
id int,
callerph text,
sr_number text,
callid text,
start_time text,
plan_id int,
PRIMARY KEY((sr_number), callerph, start_time)
);
Now you can use a range query on the last column:
select * from sr_number_callrecord where sr_number = '1234' and callerph = '+91123' and start_time >= '1234';

Cassandra queries performance, ranges

I'm quite new to Cassandra, and I was wondering if there would be any impact on performance if a query is issued with "date = '2015-01-01'" versus "date >= '2015-01-01' AND date <= '2015-01-01'".
The only reason I want to use ranges like that is that I need to make multiple queries and I want to have them prepared (as in prepared statements). This way the number of prepared statements is cut in half.
The keys used are ((key1, key2), date) and (key1, date, key2) in the two tables I want to use this. The query for the first table is similar to:
SELECT * FROM table1
WHERE key1 = val1
AND key2 = val2
AND date >= date1 AND date <= date2
For a PRIMARY KEY (key1, date, key2) that type of query just isn't possible. If you do, you'll see an error like this:
InvalidRequest: code=2200 [Invalid query] message="PRIMARY KEY column
"key2" cannot be restricted (preceding column "date" is either not
restricted or by a non-EQ relation)"
Cassandra won't allow you to filter by a PRIMARY KEY component if the preceding column(s) are filtered by anything other than the equals operator.
On the other hand, your queries for PRIMARY KEY ((key1, key2), date) will work and perform well. The reason is that Cassandra uses the clustering key(s) (date in this case) to specify the on-disk sort order of data within a partition. As you are specifying the partition keys (key1 and key2), your result set will be sorted by date, allowing Cassandra to satisfy your query with a sequential read from disk.
Just to test that out, I'll even run two queries on a table with a similar key, and turn tracing on:
SELECT * FROM log_date2 WHERE userid=1001
AND time > 32671010-f588-11e4-ade7-21b264d4c94d
AND time < a3e1f750-f588-11e4-ade7-21b264d4c94d;
Returns 1 row and completes in 4068 microseconds.
SELECT * FROM log_date2 WHERE userid=1001
AND time=74ad4f70-f588-11e4-ade7-21b264d4c94d;
Returns 1 row and completes in 4001 microseconds.
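As for halving the prepared statement count: with PRIMARY KEY ((key1, key2), date), a single prepared range statement can also serve the equality case by binding the same date to both bounds. A sketch (names follow the question; only the bind-twice trick is new):
SELECT * FROM table1
 WHERE key1 = ? AND key2 = ?
   AND date >= ? AND date <= ?;
-- range lookup:    bind (val1, val2, date1, date2)
-- equality lookup: bind (val1, val2, d, d), i.e. the same date twice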

will composite partition & compound key affect performance in cassandra?

Given below is the CQL for three tables.
All three have the same column structure, but they differ in how the PRIMARY KEY is defined.
tab1: all four keys form the composite partition key (no clustering columns)
CREATE TABLE tab1
(
key1 text,
key2 text,
key3 text,
key4 int,
data1 text,
data2 text,
data3 text,
PRIMARY KEY((key1,key2,key3,key4)));
tab2: (key1, key2) forms the composite partition key
CREATE TABLE tab2
(
key1 text,
key2 text,
key3 text,
key4 int,
data1 text,
data2 text,
data3 text,
PRIMARY KEY((key1,key2),key3,key4));
tab3: (key1, key2, key3) forms the composite partition key
CREATE TABLE tab3
(
key1 text,
key2 text,
key3 text,
key4 int,
data1 text,
data2 text,
data3 text,
PRIMARY KEY((key1,key2,key3),key4));
When querying, value1, value2 and value3 are known, and key4 is specified as a range.
Sample CQL query:
select data1,data2,data3 from tab3 where key1='value1' and key2='value2' and key3='value3' and key4 > 1000 and key4 < 1000000 ;
key4 may cover some 50,000 rows.
Which table design is better? Which design has better read/write performance?
If you need to support range queries over key4, then it needs to be a clustering column, so that rules out tab1. Since you're always specifying an exact value for key3, there's no need to make it a clustering column, so tab3 is a better choice than tab2. Leaving key3 in the partition key will partition your data more evenly around the cluster.
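To make the trade-off concrete, here is the same query against both remaining designs (both are valid CQL as sketched; the difference is purely in partition layout):
-- tab2: works, but every (key1, key2) partition stores all key3
-- values, so partitions are wider and load concentrates on fewer nodes
select data1,data2,data3 from tab2 where key1='value1' and key2='value2' and key3='value3' and key4 > 1000 and key4 < 1000000 ;
-- tab3: the same query reads one narrower (key1, key2, key3)
-- partition, and those partitions spread more evenly around the cluster
select data1,data2,data3 from tab3 where key1='value1' and key2='value2' and key3='value3' and key4 > 1000 and key4 < 1000000 ;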

Cassandra: Query with where clause containing greater-than or less-than (< and >)

I'm using Cassandra 1.1.2. I'm trying to convert an RDBMS application to Cassandra. In my RDBMS application I have the following table, called table1:
| Col1 | Col2 | Col3 | Col4 |
Col1: String (primary key)
Col2: String (primary key)
Col3: Bigint (index)
Col4: Bigint
This table contains over 200 million records. The most frequently used query is something like:
Select * from table1 where col3 < 100 and col3 > 50;
In Cassandra I used following statement to create the table:
create table table1 (primary_key varchar, col1 varchar,
col2 varchar, col3 bigint, col4 bigint, primary key (primary_key));
create index on table1(col3);
I changed the primary key to an extra column (I calculate the key inside my application).
After importing a few records I tried to execute the following cql:
select * from table1 where col3 < 100 and col3 > 50;
This result is:
Bad Request: No indexed columns present in by-columns clause with Equal operator
The query select col1,col2,col3,col4 from table1 where col3 = 67 works.
Googling suggests there is no way to execute that kind of query. Is that right? Any advice on how to achieve such a query?
Cassandra indexes don't actually support sequential access; see http://www.datastax.com/docs/1.1/ddl/indexes for a good quick explanation of where they are useful. But don't despair; the more classical way of using Cassandra (and many other NoSQL systems) is to denormalize, denormalize, denormalize.
It may be a good idea in your case to use the classic bucket-range pattern, which lets you use the recommended RandomPartitioner and keep your rows well distributed around your cluster, while still allowing sequential access to your values. The idea in this case is that you would make a second dynamic columnfamily mapping (bucketed and ordered) col3 values back to the related primary_key values. As an example, if your col3 values range from 0 to 10^9 and are fairly evenly distributed, you might want to put them in 1000 buckets of range 10^6 each (the best level of granularity will depend on the sort of queries you need, the sort of data you have, query round-trip time, etc). Example schema for cql3:
CREATE TABLE indexotron (
rangestart int,
col3val int,
table1key varchar,
PRIMARY KEY (rangestart, col3val, table1key)
);
When inserting into table1, you should insert a corresponding row into indexotron, with rangestart equal to col3val rounded down to its bucket boundary (for 10^6-wide buckets, rangestart = (col3val / 1000000) * 1000000). Then, when you need to enumerate all rows in table1 with col3 > X, you query up to 1000 buckets of indexotron, but all the col3vals within each bucket will be sorted. Using a bucket width of 1000 for readability, an example query to find all table1.primary_key values for which table1.col3 < 4021:
SELECT * FROM indexotron WHERE rangestart = 0 ORDER BY col3val;
SELECT * FROM indexotron WHERE rangestart = 1000 ORDER BY col3val;
SELECT * FROM indexotron WHERE rangestart = 2000 ORDER BY col3val;
SELECT * FROM indexotron WHERE rangestart = 3000 ORDER BY col3val;
SELECT * FROM indexotron WHERE rangestart = 4000 AND col3val < 4021 ORDER BY col3val;
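For completeness, a sketch of the paired write (column values are made up, and the 1000-wide bucket from the example above is assumed; a logged batch keeps the two inserts together):
BEGIN BATCH
  INSERT INTO table1 (primary_key, col1, col2, col3, col4)
    VALUES ('row42', 'a', 'b', 4020, 99);
  INSERT INTO indexotron (rangestart, col3val, table1key)
    VALUES (4000, 4020, 'row42');
APPLY BATCH;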
If col3 is always restricted to known small values/ranges, you may be able to get away with a simpler table that also maps back to the initial table, e.g.:
create table table2 (col3val int, table1key varchar,
primary key (col3val, table1key));
and use
insert into table2 (col3val, table1key) values (55, 'foreign_key');
insert into table2 (col3val, table1key) values (55, 'foreign_key3');
select * from table2 where col3val = 51;
select * from table2 where col3val = 52;
...
Or
select * from table2 where col3val in (51, 52, ...);
This may be OK if your ranges are not too large. (You could get a similar effect with your secondary index as well, but secondary indexes are generally not recommended for this.) You could also, in theory, parallelize these queries on the client side.
It seems the "Cassandra way" is to have some key like "userid" as the first part of all your queries, so you may need to rethink your data model. Then you can have queries like select * from table1 where userid='X' and col3val > 3, and it can work (assuming col3val is a clustering column).
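A minimal sketch of that shape (the table and column names here are hypothetical):
create table table1_by_user (userid varchar, col3val bigint,
table1key varchar, primary key (userid, col3val, table1key));
-- works because userid pins the partition and col3val is the
-- first clustering column:
select * from table1_by_user where userid = 'X' and col3val > 3;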
