I have a Cassandra table "test_data_table" with columns of:
id, name, start_time, is_deleted
And id is the partition key.
I'm trying to use the code below to build a mapped select statement:
@Entity
@CqlName("test_data_table")
data class RtawData(
    @PartitionKey var id: String? = null,
    @ClusteringColumn var name: String? = null,
    var start_time: Long? = null,
    var is_deleted: Boolean? = null
)
But at compile time I get "Invalid CQL form [_deleted] needs double quotes".
Does Cassandra's @Entity annotation not match columns starting with "is_"?
I have three tables:
students
  id INT
  name VARCHAR

class
  id INT
  description VARCHAR

student_classes
  id INT
  student_id INT (FOREIGN KEY of students.id)
  class_id INT (FOREIGN KEY of class.id)
How can I return all the classes a student is not enrolled in, i.e. classes with no matching row in student_classes for that student?
I receive the student_id in request.params.student_id, and I tried something like:
async getAvailableClassesOfAStudent({ request }) {
  const classes = await Database
    .query()
    .select('class.*')
    .from('class')
    .leftJoin('student_classes', 'class.id', 'student_classes.class_id')
    .whereNotIn('student_classes.student_id', request.params.student_id)
  return classes
}
I'm getting:
select "class".* from "class" left join "student_classes" on
"class"."id" = "student_classes"."class_id" where
"student_classes"."student_id" not in $1 - syntax error at or near
"$1"
This might help: How to select all records from one table that do not exist in another table?
Something like the following, with the filter on student_id moved into the join condition so the anti-join is scoped to the requested student:
...
const classes = await Database
  .query()
  .select('class.*')
  .from('class')
  .leftJoin('student_classes', function () {
    this
      .on('class.id', 'student_classes.class_id')
      .andOn('student_classes.student_id', Database.raw('?', [request.params.student_id]))
  })
  .whereNull('student_classes.student_id')
...
Classes the student is not enrolled in come back from the left join with student_id as NULL, which is exactly what whereNull keeps.
I'm trying to read a few columns of a Cassandra table from Spark and put them in a case class. If I select all the columns my case class declares, it works; but I only want to fetch a few of them, and I don't want a specific case class for every combination of columns.
I tried overloading the constructor of the case class and also defining a regular class, but I couldn't get either to work.
// Doesn't work; expected, since there is no constructor matching the selected columns.
case class Father(idPadre: Int, name: String, lastName: String, children: Map[Int,Son], hobbies: Map[Int,Hobbie], lastUpdate: Date)

// Works, because it has the right constructor. I tried a companion object with additional constructors, but that didn't work either.
case class FatherEspecifica(idFather: Int, name: String, children: Map[Int,Son])

// Compilation problems, I don't know why.
class FatherClaseNormal(idFather: Int, name: String, lastName: String, children: Map[Int,Son], hobbies: Map[Int,Hobbie], lastUpdate: Date) {
  /**
   * A secondary constructor.
   */
  def this(nombre: String) {
    this(0, nombre, "", Map(), Map(), new Date())
    println("\nNo last name or age given.")
  }
}
// I want to fetch just a few columns without creating a case class per combination, mapping directly to a case class rather than working with CassandraRows.
val joinRdd = rddAvro.joinWithCassandraTable[FatherXXX]("poc_udt", "father", SomeColumns("id_father", "name", "children"))
CREATE TABLE IF NOT EXISTS poc_udt.father(
  id_father int PRIMARY KEY,
  name text,
  last_name text,
  children map<int,frozen<son>>,
  hobbies map<int,frozen<hobbie>>,
  last_update timestamp
)
When I use a normal class the error is:
Error:(57, 67) No RowReaderFactory can be found for this type
Error occurred in an application involving default arguments.
val joinRdd = rddAvro.joinWithCassandraTable[FatherClaseNormal]("poc_udt", "padre",SomeColumns("nombre"))
I have this table:
DomainId string HashKey
EmailId string RangeKey
I was wondering if it's possible to query this table with the HashKey only, like this:
var AWS = require("aws-sdk");
var client = new AWS.DynamoDB.DocumentClient();
var dm = 'infodinamica.cl';

// Set params
var params = {
  TableName: 'table-name',
  KeyConditionExpression: "DomainId = :dm",
  ExpressionAttributeValues: {
    ":dm": dm
  },
  Select: 'COUNT'
};

client.query(params, (err, data) => {
  if (err)
    console.log(JSON.stringify(err, null, 2));
  else
    console.log(JSON.stringify(data, null, 2));
});
P.S.: note that this table has both a HashKey and a RangeKey.
Yes, it is possible to query the data using the hash key only with the Query API.
Use the KeyConditionExpression parameter to provide a specific value
for the partition key. The Query operation will return all of the
items from the table or index with that partition key value. You can
optionally narrow the scope of the Query operation by specifying a
sort key value and a comparison operator in KeyConditionExpression.
You can use the ScanIndexForward parameter to get results in forward
or reverse order, by sort key.
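For example, here is a minimal sketch (reusing the question's table and attribute names) that narrows the same query with a sort-key condition and returns items in descending sort-key order; the begins_with prefix value is purely an illustrative assumption:

var AWS = require("aws-sdk");
var client = new AWS.DynamoDB.DocumentClient();

var params = {
  TableName: 'table-name',
  // partition key equality plus an optional sort-key condition
  KeyConditionExpression: "DomainId = :dm AND begins_with(EmailId, :prefix)",
  ExpressionAttributeValues: {
    ":dm": 'infodinamica.cl',
    ":prefix": 'info' // hypothetical email prefix, for illustration only
  },
  ScanIndexForward: false // descending order by the RangeKey (EmailId)
};

client.query(params, (err, data) => {
  if (err)
    console.log(JSON.stringify(err, null, 2));
  else
    console.log(JSON.stringify(data, null, 2));
});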
I am trying out the Cassandra Node.js driver and am stuck on a problem while inserting a record; it looks like the driver is not able to insert float values.
Problem: when passing an int value for insertion into the DB, the API gives the following error:
Debug: hapi, internal, implementation, error
ResponseError: Expected 4 or 0 byte int (8)
at FrameReader.readError (/home/gaurav/Gaurav-Drive/code/nodejsWorkspace/cassandraTest/node_modules/cassandra-driver/lib/readers.js:291:13)
at Parser.parseError (/home/gaurav/Gaurav-Drive/code/nodejsWorkspace/cassandraTest/node_modules/cassandra-driver/lib/streams.js:185:45)
at Parser.parseBody (/home/gaurav/Gaurav-Drive/code/nodejsWorkspace/cassandraTest/node_modules/cassandra-driver/lib/streams.js:167:19)
at Parser._transform (/home/gaurav/Gaurav-Drive/code/nodejsWorkspace/cassandraTest/node_modules/cassandra-driver/lib/streams.js:101:10)
at Parser.Transform._read (_stream_transform.js:179:10)
at Parser.Transform._write (_stream_transform.js:167:12)
at doWrite (_stream_writable.js:225:10)
at writeOrBuffer (_stream_writable.js:215:5)
at Parser.Writable.write (_stream_writable.js:182:11)
at write (_stream_readable.js:601:24)
I am trying to execute the following query from code:
INSERT INTO ragchews.user
(uid ,iid ,jid ,jpass ,rateCount ,numOfratedUser ,hndl ,interests ,locX ,locY ,city )
VALUES
('uid_1',{'iid1'},'jid_1','pass_1',25, 10, {'NEX1231'}, {'MUSIC'}, 21.321, 43.235, 'delhi');
The parameters passed to execute() are:
var params = [uid, iid, jid, jpass, rateCount, numOfratedUser, hndl, interest, locx, locy, city];
where
var locx = 32.09;
var locy = 54.90;
and the call to execute looks like:
var addUserQuery = 'INSERT INTO ragchews.user (uid ,iid ,jid ,jpass ,rateCount ,numOfratedUser ,hndl ,interests ,locX ,locY ,city) VALUES (?,?,?,?,?,?,?,?,?,?,?);';
var addUser = function(user, cb){
  console.log(user);
  client.execute(addUserQuery, user, function(err, result){
    if (err) {
      throw err;
    }
    cb(result);
  });
};
CREATE TABLE ragchews.user (
  uid varchar,
  iid set<varchar>,
  jid varchar,
  jpass varchar,
  rateCount int,
  numOfratedUser int,
  hndl set<varchar>,
  interests set<varchar>,
  locX float,
  locY float,
  city varchar,
  favorite map<varchar, varchar>,
  PRIMARY KEY (uid)
);
P.S.
Some observations while trying to understand the issue:
Since the problem seemed to be with float, I changed the type of locX and locY from float to int and re-ran the code. The same error persisted, so the problem is not specific to the float CQL type.
Next, I removed all numeric values from the INSERT query and inserted only non-numeric values. That attempt successfully wrote the row, so the problem now looks like it is associated with numeric types in general.
The following is taken verbatim from the Cassandra Node.js driver data type documentation:
When encoding data, on a normal execute with parameters, the driver tries to guess the target type based on the input type. Values of type Number will be encoded as double (as Number is double / IEEE 754 value).
Consider the following example:
var key = 1000;
client.execute('SELECT * FROM table1 where key = ?', [key], callback);
If the key column is of type int, the execution fails. There are two possible ways to avoid this type of problem:
Prepare the data (recommended) - prepare the query before execution
client.execute('SELECT * FROM table1 where key = ?', [key], { prepare : true }, callback);
Hinting the target types - hint that the first parameter is an integer:
client.execute('SELECT * FROM table1 where key = ?', [key], { hints : ['int'] }, callback);
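Applied to the addUser function from the question, a minimal sketch of the prepared variant (same query and callback shape; only the options argument changes):

var addUser = function(user, cb){
  console.log(user);
  // prepare: true makes the driver fetch column metadata first, so locX/locY
  // are encoded as float and rateCount/numOfratedUser as int
  client.execute(addUserQuery, user, { prepare: true }, function(err, result){
    if (err) {
      throw err;
    }
    cb(result);
  });
};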
If you are dealing with batch updates, this issue may also be of interest.
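For completeness, a minimal sketch of a batch against the user table above that relies on { prepare: true } for the same reason; the specific UPDATE statements are hypothetical, and newer driver versions also require a localDataCenter option when constructing the Client:

var cassandra = require('cassandra-driver');
var client = new cassandra.Client({ contactPoints: ['127.0.0.1'], keyspace: 'ragchews' });

var queries = [
  {
    query: 'UPDATE user SET locX = ?, locY = ? WHERE uid = ?',
    params: [32.09, 54.90, 'uid_1']
  },
  {
    query: 'UPDATE user SET rateCount = ? WHERE uid = ?',
    params: [26, 'uid_1']
  }
];

// With prepare: true each statement is prepared once, so the numeric
// JavaScript values are encoded with the correct CQL types instead of
// all being guessed as double.
client.batch(queries, { prepare: true }, function(err) {
  if (err) {
    console.log(err);
  } else {
    console.log('Batch executed');
  }
});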
I am trying out Cassandra and looking at ways to model our data in it. Below I describe our data store requirements along with my thoughts on how to model them in Cassandra. Please let me know whether this makes sense and suggest changes.
I searched the web quite a bit but didn't get a clear idea of how to model and index multi-valued columns, which seems like a fairly common requirement.
Any help would be greatly appreciated.
Our current data for each record:
{
  'id': <some uuid>,
  'title': text,
  'description': text,
  'images': [{'id': id1, 'caption': cap1}, {'id': id2, 'caption': cap2}, ...],
  'videos': ['video id1', 'video id2', ...],
  'keywords': ['keyword1', 'keyword2', ...],
  'updated_at': <timestamp>
}
Queries we would need:
Lookup by id
Lookup by images.id
Lookup by keyword
All records where updated_at > <some timestamp>
Our current model
Column Family: Article
id: uuid
title: varchar
description: varchar
images:
videos:
keywords:
updated_at:
updated_date: [e.g. '2013-05-06:02']
Column Family: Image-Article Index
{
  'id': <image id>,
  'article1 uuid': null,
  'article2 uuid': null,
  ...
}
Column Family: Keyword-Article Index
{
  'id': <keyword>,
  'article1 uuid': null,
  'article2 uuid': null,
  ...
}
Sample queries:
Lookup by id => straightforward
Lookup by images.id =>
  ids = select * from 'Image-Article Index' where id = <image id>
  select * from Article where id in (ids)
Lookup by keyword =>
  ids = select * from 'Keyword-Article Index' where id = <keyword>
  select * from Article where id in (ids)
All records where updated_at > <some timestamp>
Cassandra doesn't support range queries unless there is also an equality condition on one of the indexed columns, so:
extract the date and hour from the given timestamp;
for each date:hour bucket from the start to the current time:
  ids = select * from Article where updated_date = date:hour and updated_at > <some timestamp>
  select * from Article where id in (ids)
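For illustration, a minimal Node.js sketch of this bucketed scan, assuming the cassandra-driver package and columns named updated_date and updated_at as in the model above; the contact point and keyspace are placeholders, and bucket keys are derived in UTC here:

var cassandra = require('cassandra-driver');
var client = new cassandra.Client({ contactPoints: ['127.0.0.1'], keyspace: 'mykeyspace' });

// Generate 'YYYY-MM-DD:HH' bucket keys from `start` up to `end`
function hourBuckets(start, end) {
  var buckets = [];
  for (var t = new Date(start.getTime()); t <= end; t = new Date(t.getTime() + 3600 * 1000)) {
    var iso = t.toISOString();
    buckets.push(iso.slice(0, 10) + ':' + iso.slice(11, 13));
  }
  return buckets;
}

// Collect article ids bucket by bucket; the equality on updated_date is
// what makes the range predicate on updated_at legal.
function articlesUpdatedSince(since, cb) {
  var buckets = hourBuckets(since, new Date());
  var ids = [];
  (function next(i) {
    if (i === buckets.length) return cb(null, ids);
    client.execute(
      'SELECT id FROM article WHERE updated_date = ? AND updated_at > ?',
      [buckets[i], since],
      { prepare: true },
      function(err, result) {
        if (err) return cb(err);
        result.rows.forEach(function(row) { ids.push(row.id); });
        next(i + 1);
      });
  })(0);
}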