Creating tables fails with inet, composite primary keys, and collections, even though the syntax appears correct. The error messages don't make sense for the composite primary key case (they complain about unmatched parentheses). I've since learned that inet only works in some cases, so set that aside.
Is there anything I'm doing wrong, or not understanding, about using CQL3 (interfaces or syntax)?
CREATE TABLE session (
'user_id' bigint,
'admin_id' bigint,
'session_id' varchar,
'cache' text ,
'created' timestamp ,
'hits' list<timestamp>,
'ip' inet ,
PRIMARY KEY ( 'session_id' , 'user_id' )
);
The following also fails:
CREATE TABLE 'session' (
'user_id' bigint,
'session_id' varchar,
PRIMARY KEY ( 'session_id' , 'user_id' )
);
This works:
CREATE TABLE 'session' (
'user_id' bigint,
'session_id' varchar PRIMARY KEY
);
The clue:
>help TYPES
CQL types recognized by this version of cqlsh:
ascii
bigint
blob
boolean
counter
decimal
double
float
int
text
timestamp
uuid
varchar
varint
DSE 3.0.x
[EDIT] It turns out DSE 3.0.x has Cassandra 1.1.x installed.
TL;DR: collections (part of CQL3) are not yet available in DSE 3.0.x.
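For reference, on a CQL3-capable Cassandra (1.2+) the original table can be created with unquoted identifiers; a sketch:

```
CREATE TABLE session (
    user_id bigint,
    admin_id bigint,
    session_id varchar,
    cache text,
    created timestamp,
    hits list<timestamp>,
    ip inet,
    PRIMARY KEY (session_id, user_id)
);
```

Note that in CQL3, identifiers are either left unquoted (case-insensitive) or double-quoted (case-sensitive); single quotes denote string literals, not names.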
Also worth noting, though unrelated to my issue: even in the DataStax Community edition, one needs to activate CQL3. The documentation says it should be activated by default in cqlsh:
http://www.datastax.com/docs/1.2/cql_cli/using_cql
"Activating CQL 3
You activate the CQL mode in one of these ways:
- Use the DataStax Java Driver to activate CQL through the native/binary protocol.
- Start cqlsh, a Python-based command-line client.
- Use the set_cql_version Thrift method.
- Specify the desired CQL mode in the connect() call to the Python driver:
connection = cql.connect('localhost:9160', cql_version='3.0')"
The documentation there was also incorrect; it should be:
con = cql.connect('localhost', cql_version='3.0.0')
Also, Enterprise OpsCenter doesn't yet support CQL 3 in DSE.
To start cqlsh in CQL 3 mode:
cqlsh --cqlversion=3
Related
How can I append a new element to a set that is inside a custom type in Cassandra?
The custom type is:
CREATE TYPE custom_type (
normal_type TEXT,
set_type Set<TEXT>
);
and the table to be updated is:
CREATE TABLE test_table (
id TEXT,
my_type FROZEN<custom_type>,
clustering_key TEXT,
PRIMARY KEY ((id),clustering_key)
);
I tried the query below, but it did not work:
@Query("update test_table set my_type.set_type = my_type.set_type + {'newelement'} where id=?1 and clustering_key=?2")
Any idea how to do that?
Using cqlsh 5.0.1 | Cassandra 3.11.1 | CQL spec 3.4.4.
When you declare a field frozen, the whole value is treated as a single unit (a blob), so you can't update parts of the field. The official documentation states:
When using the frozen keyword, you cannot update parts of a user-defined type value. The entire value must be overwritten. Cassandra treats the value of a frozen, user-defined type like a blob.
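Since the field is frozen, the practical workaround is to read the current value, merge the new element client-side, and write the whole value back. A sketch against the test_table above (the id, clustering key, and existing set members are placeholder assumptions):

```
-- Read the current value first, add 'newelement' to the set client-side, then
-- overwrite the entire frozen UDT value in one statement:
UPDATE test_table
SET my_type = { normal_type: 'normal', set_type: {'existing', 'newelement'} }
WHERE id = 'id1' AND clustering_key = 'ck1';
```

Note the read-modify-write is not atomic; concurrent writers can overwrite each other's changes.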
I am looking for an example of using "CassandraDataModel" for Mahout: which tables are needed, and how to import data into those tables from CSV files.
Posting this in case others face the same issue with the Windows version of Cassandra 3.9.0 (DataStax distribution).
The following tables need to be created through CQL:
CREATE TABLE "users" ("userID" bigint, "itemID" bigint, value float, PRIMARY KEY ("userID","itemID")) WITH COMPACT STORAGE;
CREATE TABLE "items" ("itemID" bigint, "userID" bigint, value float, PRIMARY KEY ("itemID","userID")) WITH COMPACT STORAGE;
CREATE TABLE "userIDs" ("ID" bigint, "userID" bigint, PRIMARY KEY ("ID","userID")) WITH COMPACT STORAGE;
CREATE TABLE "itemIDs" ("ID" bigint, "itemID" bigint, PRIMARY KEY ("ID","itemID")) WITH COMPACT STORAGE;
The following is the command format for importing data from CSV files into these four tables:
copy users from 'C:\Recommender_mv\RecommenderApp\data\movies.csv';
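The other three tables are loaded the same way; a sketch assuming similarly named CSV files (the filenames here are placeholders). Tables created with quoted mixed-case names must be quoted in the COPY command as well:

```
copy items from 'C:\Recommender_mv\RecommenderApp\data\items.csv';
copy "userIDs" from 'C:\Recommender_mv\RecommenderApp\data\userIDs.csv';
copy "itemIDs" from 'C:\Recommender_mv\RecommenderApp\data\itemIDs.csv';
```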
I am trying to create a user-defined type with Cassandra, using cqlsh and the example provided in the documentation (http://www.datastax.com/documentation/cql/3.1/cql/cql_reference/cqlRefcreateType.html):
CREATE TYPE address (
street text,
city text,
zip_code int,
phones set<text>
)
But this returns:
Bad Request: line 1:7 no viable alternative at input 'TYPE'
Using the help command, I found 'CREATE_TABLE_TYPES', but I could not get that to work either.
What is the correct syntax to get this to work?
Many thanks
Looks like you need to create a keyspace first:
cqlsh> CREATE KEYSPACE excelsior
... WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 3 };
cqlsh> use excelsior;
cqlsh:excelsior> CREATE TYPE address (
... street text,
... city text,
... zip_code int,
... phones set<text>
... );
I was also stuck with the same problem. User-defined types are not supported by versions earlier than 2.1; I am now using Cassandra 2.1.2. You can find this listed among the new features of Cassandra.
I have a table similar to this:
CREATE TABLE timeseries (
event_type text,
insertion_time timestamp,
event blob,
PRIMARY KEY (event_type, insertion_time)
)
and I am trying to do:
delete from timeseries where event_type='xxx' and insertion_time < '12345';
Bad Request: Invalid operator LT for PRIMARY KEY part insertion_time.
Does cassandra support these kind of deletes ?
Any help is appreciated.
No, range deletes are not supported.
For what is supported, see the CQL3 documentation:
http://cassandra.apache.org/doc/cql3/CQL.html
Note that range deletes (on the cluster key) are in the next major version (3.0-beta2). Patch got committed to trunk just recently: https://issues.apache.org/jira/browse/CASSANDRA-6237
Do a SELECT first, then delete each result.
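A sketch of that workaround against the timeseries table above (the timestamp values are placeholders):

```
-- 1. Find the rows in the range:
SELECT insertion_time FROM timeseries
WHERE event_type = 'xxx' AND insertion_time < '2013-10-01 00:00:00';

-- 2. For each returned insertion_time, issue an exact-match delete:
DELETE FROM timeseries
WHERE event_type = 'xxx' AND insertion_time = '2013-09-30 23:59:59';
```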
We have a CQL table that looks something like this:
CREATE table data (
occurday text,
seqnumber int,
occurtimems bigint,
unique bigint,
fields map<text, text>,
primary key ((occurday, seqnumber), occurtimems, unique)
)
I can query this table from cqlsh like this:
select * from data where seqnumber = 10 AND occurday = '2013-10-01';
This query works and returns the expected data.
If I execute this query as part of a LOAD from within Pig, however, things don't work.
-- Need to URL encode the query
data = LOAD 'cql://ks/data?where_clause=seqnumber%3D10%20AND%20occurday%3D%272013-10-01%27' USING CqlStorage();
gives
InvalidRequestException(why:seqnumber cannot be restricted by more than one relation if it includes an Equal)
at org.apache.cassandra.thrift.Cassandra$prepare_cql3_query_result.read(Cassandra.java:39567)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at org.apache.cassandra.thrift.Cassandra$Client.recv_prepare_cql3_query(Cassandra.java:1625)
at org.apache.cassandra.thrift.Cassandra$Client.prepare_cql3_query(Cassandra.java:1611)
at org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader$RowIterator.prepareQuery(CqlPagingRecordReader.java:591)
at org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader$RowIterator.executeQuery(CqlPagingRecordReader.java:621)
Shouldn't these behave the same? Why is the version through Pig failing where the straight cqlsh command works?
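For reference, the URL-encoded where_clause in the LOAD above can be produced with Python's standard library:

```python
from urllib.parse import quote

# The raw where clause, as it would be typed in cqlsh
where_clause = "seqnumber=10 AND occurday='2013-10-01'"

# Percent-encode it for use in the cql:// load URL
encoded = quote(where_clause)
print(encoded)  # seqnumber%3D10%20AND%20occurday%3D%272013-10-01%27

load_url = "cql://ks/data?where_clause=" + encoded
print(load_url)
```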
Hadoop is using CqlPagingRecordReader to try to load your data, which leads to queries that are not identical to what you entered. The paging record reader obtains small slices of Cassandra data at a time to avoid timeouts.
This means that your query is executed as
SELECT * FROM "data" WHERE token("occurday","seqnumber") > ? AND
token("occurday","seqnumber") <= ? AND occurday='A Great Day'
AND seqnumber=1 LIMIT 1000 ALLOW FILTERING
This is why you are seeing the repeated-key error. I'll submit a bug to the Cassandra project.
Jira:
https://issues.apache.org/jira/browse/CASSANDRA-6151