I have this table
user_authentication_token (
    token_id uuid,
    user_id uuid,
    email text,
    expiration_time timestamp,
    is_sign_up boolean,
    PRIMARY KEY ((token_id, user_id, email, expiration_time))
)
and I have this data in the table
token_id | user_id | email | expiration_time | is_sign_up
--------------------------------------+--------------------------------------+-------------------------+---------------------------------+------------
98b0456a-05b2-4aca-a6c7-6a1f382e19aa | b284d51d-efbb-4204-b342-a2486029a5c5 | manu.chadha@hotmail.com | 2018-09-01 09:40:59.634000+0000 | True
But I am unable to query it.
select * from user_authentication_token where token_id=98b0456a-05b2-4aca-a6c7-6a1f382e19aa and user_id=b284d51d-efbb-4204-b342-a2486029a5c5 and email='manu.chadha@hotmail.com' and expiration_time='2018-09-01 09:40:59.634000+0000';
Error - InvalidRequest: Error from server: code=2200 [Invalid query] message="Unable to coerce '2018-09-01 09:40:59.634000+0000' to a formatted date (long)"
What am I doing wrong?
I inserted the row using the DataStax Java driver for Cassandra with the following code:
def insertValues(tableName: String, model: UserToken): Insert = {
  QueryBuilder.insertInto(tableName)
    .value("token_id", model.tokenId)
    .value("email", model.email)
    .value("user_id", model.userId)
    .value("expiration_time", model.expirationTime.getMillis()) // Joda-Time DateTime, stored as epoch milliseconds
    .value("is_sign_up", model.isSignUp)
    .ifNotExists()
}
Interestingly, the timestamp is stored as a long, but cqlsh displays it in a readable format. Do I need to convert it back into a long to query it, and if so, how?
Your timestamp should include nothing beyond milliseconds:
select * from user_authentication_token
where token_id=98b0456a-05b2-4aca-a6c7-6a1f382e19aa
and user_id=b284d51d-efbb-4204-b342-a2486029a5c5
and email='manu.chadha@hotmail.com'
and expiration_time='2018-09-01 09:40:59.634+0000'; // removed extra precision
Update: Added timezone offset as discovered by Manu Chadha below.
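Alternatively, a CQL timestamp column also accepts a plain integer literal interpreted as milliseconds since the Unix epoch, so you can pass the stored long directly. A sketch (1535794859634 is the epoch-millis value of the timestamp above, assuming UTC):
select * from user_authentication_token
where token_id=98b0456a-05b2-4aca-a6c7-6a1f382e19aa
and user_id=b284d51d-efbb-4204-b342-a2486029a5c5
and email='manu.chadha@hotmail.com'
and expiration_time=1535794859634;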
Related
I created the table with
CREATE TABLE comments_by_video (
    videoid uuid,
    commentid timeuuid,
    userid uuid,
    comment text,
    PRIMARY KEY(videoid, commentid));
and copied data into the table.
I executed the query below
SELECT comment FROM comments_by_video WHERE userid = 'with id 357c33b4-9054-a5e1- 8da8-d9e38294fac1';
and got this error.
InvalidRequest: Error from server: code=2200 [Invalid query] message="Invalid STRING constant (with id 357c33b4-9054-a5e1- 8da8-d9e38294fac1) for "userid" of type uuid"
PLEASE HELP!
First error - UUID literals are written as-is, without quotes (and without the "with id" text or the stray space inside the UUID): 357c33b4-9054-a5e1-8da8-d9e38294fac1
Second error - you have a condition on a field that isn't the partition key. That would require a full table scan and won't work at scale. In Cassandra the table structure is modeled around the queries, so you'll need a table with userid as its partition key, as sketched below.
I recommend reading the first chapters of this free book to understand how Cassandra works.
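A minimal sketch of that query-first modeling (the table name and the commentid type are assumptions, not from the original post):
CREATE TABLE comments_by_user (
    userid uuid,
    commentid timeuuid,
    comment text,
    PRIMARY KEY (userid, commentid));
SELECT comment FROM comments_by_user
WHERE userid = 357c33b4-9054-a5e1-8da8-d9e38294fac1;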
I was trying to run a CQL query with user input via dynamic forms in the Zeppelin tool:
%cassandra
SELECT ${Select Fields Type=uuid ,uuid | created_by | email_verify| username} FROM
${Select Table=keyspace.table_name}
${WHERE email_verify="true" } ${ORDER BY='updated_date' }LIMIT ${limit = 10};
While running this query I got this error:
line 4:0 mismatched input 'true' expecting EOF
(SELECT uuid FROM keyspace.table_name ["true"]...)
You need to move WHERE and ORDER BY out of the dynamic form declaration.
An input field declaration looks like this: ${field_name=default_value}. In your case, instead of a WHERE clause, you've declared a form field named WHERE email_verify.
It should be as follows (untested):
%cassandra
SELECT ${Select Fields Type=uuid ,uuid | created_by | email_verify| username} FROM
${Select Table=keyspace.table_name}
WHERE ${where_cond=email_verify='true'} ORDER BY ${order_by='updated_date'} LIMIT ${limit = 10};
Update: here is a working example for a table with the following structure:
CREATE TABLE test.scala_test2 (
id int,
c int,
t text,
tm timestamp,
PRIMARY KEY (id, c)
) WITH CLUSTERING ORDER BY (c ASC)
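Against that table, a paragraph along these lines should work (a sketch using the same dynamic-form syntax; the default values are only illustrative):
%cassandra
SELECT ${fields=*} FROM test.scala_test2
WHERE ${where_cond=id=1} ORDER BY ${order_by=c} LIMIT ${limit=10};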
I have the following table:
CREATE TABLE dove.backend_events (
log_time_local timeuuid,
username text,
log_type text,
log_time timestamp,
device_category text,
log text,
device_details frozen<map<text, text>>,
PRIMARY KEY (log_time_local, username, device_details)
);
I am running this query: SELECT * FROM dove.backend_events WHERE device_details['category'] = 'mobile' ALLOW FILTERING;
I am getting this error: InvalidRequest: code=2200 [Invalid query] message="Map-entry equality predicates on frozen map column device_details are not supported"
What is causing it, and how do I fix it? The error does not occur when device_details is not part of the primary key and is not frozen.
You could add an index on device_details instead of making it part of the primary key (and without freezing it):
CREATE TABLE dove.backend_events (
log_time_local timeuuid,
username text,
log_type text,
log_time timestamp,
device_category text,
log text,
device_details map<text, text>,
PRIMARY KEY (log_time_local, username)
);
CREATE INDEX device_details_index ON dove.backend_events (ENTRIES(device_details));
This way you could run your query efficiently and without having to use ALLOW FILTERING:
SELECT * FROM dove.backend_events WHERE device_details['category'] = 'mobile';
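For instance, a hypothetical row that the query above would then match (the values are illustrative; now() yields a timeuuid):
INSERT INTO dove.backend_events (log_time_local, username, device_details)
VALUES (now(), 'alice', {'category': 'mobile', 'os': 'android'});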
I am trying to create a simple table on Cassandra using cqlsh. The syntax is:
CREATE TABLE TEST(
timestamp timestamp,
system_id text,
hostname text,
cpu_pct float,
memory_used bigint,
PRIMARY_KEY(system_id, timestamp)
);
However, when I run this I get the error below. How do I fix it?
ErrorMessage code=2000 [Syntax error in CQL query] message="line 8:0 missing EOF at ')' (...,PRIMARY_KEY(system_id, timestamp)[)];)"
CREATE TABLE TEST(
timestamp timestamp,
system_id text,
hostname text,
cpu_pct float,
memory_used bigint,
PRIMARY KEY(system_id, timestamp)
);
See CQL CREATE TABLE Doc
You accidentally put an underscore between "PRIMARY" and "KEY" instead of a space.
Also, you might not want a column called "timestamp", since that is also a CQL type name; maybe call it "ts" or something similar, as shown below.
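For example, with both fixes applied (a sketch; "ts" is just an illustrative name):
CREATE TABLE TEST(
    ts timestamp,
    system_id text,
    hostname text,
    cpu_pct float,
    memory_used bigint,
    PRIMARY KEY(system_id, ts)
);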
PRIMARY_KEY() should be PRIMARY KEY().
I've created a table in the CQL3 console (no single primary key component is unique on its own; together they are):
CREATE TABLE aggregate_logs (
bpid varchar,
jid int,
month int,
year int,
value counter,
PRIMARY KEY (bpid, jid, month, year));
and have then been able to update and query it using:
UPDATE aggregate_logs SET value = value + 1 WHERE bpid='1' and jid=1 and month=1 and year=2000;
This works as expected. I wanted to do the same update in Hector (in Scala):
val aggregateMutator:Mutator[Composite] = HFactory.createMutator(keyspace, compositeSerializer)
val compKey = new Composite()
compKey.addComponent(bpid, stringSerializer)
compKey.addComponent(new Integer(jid), intSerializer)
compKey.addComponent(new Integer(month), intSerializer)
compKey.addComponent(new Integer(year), intSerializer)
aggregateMutator.incrementCounter(compKey, LogsAggregateFamily, "value", 1)
but I get an error with the message:
...HInvalidRequestException: InvalidRequestException(why:String didn't validate.)
Running the query directly from Hector with:
val query = new me.prettyprint.cassandra.model.CqlQuery(keyspace, compositeSerializer, stringSerializer, new IntegerSerializer())
query.setQuery("UPDATE aggregate_logs SET value = value + 1 WHERE 'bpid'=1 and jid=1 and month=1 and year=2000")
query.execute()
which gives me the error:
InvalidRequestException(why:line 1:59 mismatched input 'and' expecting EOF)
I've not seen any other examples which use a counter under a composite primary key. Is it even possible?
It's definitely possible using CQL directly (both via cqlsh and C++, at least):
cqlsh:goh_master> describe table daily_caps;
CREATE TABLE daily_caps (
    caps_type ascii,
    id ascii,
    value counter,
    PRIMARY KEY (caps_type, id)
) WITH COMPACT STORAGE AND
    comment='' AND
    caching='KEYS_ONLY' AND
    read_repair_chance=0.100000 AND
    gc_grace_seconds=864000 AND
    replicate_on_write='true' AND
    compaction_strategy_class='SizeTieredCompactionStrategy' AND
    compression_parameters:sstable_compression='SnappyCompressor';
cqlsh:goh_master> update daily_caps set value=value +1 where caps_type='xp' and id ='myid';
cqlsh:goh_master> select * from daily_caps;
caps_type | id | value
-----------+------+-------
xp | myid | 1
CQL3 and the Thrift API are not compatible, so a column family created with CQL3 cannot be accessed with Hector or another Thrift-based client. For more information see:
https://issues.apache.org/jira/browse/CASSANDRA-4377
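If you can move to a CQL-native client, the counter update from the question works as-is. A minimal sketch from Scala with the DataStax Java driver (the contact point, keyspace name, and driver choice are assumptions, not from the original post):
import com.datastax.driver.core.Cluster

// hypothetical contact point and keyspace; adjust for your cluster
val cluster = Cluster.builder().addContactPoint("127.0.0.1").build()
val session = cluster.connect("your_keyspace")

// counters under a composite primary key update fine through CQL
session.execute(
  "UPDATE aggregate_logs SET value = value + 1 " +
  "WHERE bpid = '1' AND jid = 1 AND month = 1 AND year = 2000")

cluster.close()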