Retrieving data from composite key via Astyanax - Cassandra

I am very new to Cassandra and am using Astyanax.
CREATE TABLE employees (empID int, deptID int, first_name varchar,
last_name varchar, PRIMARY KEY (empID, deptID));
I want to get the result of this query:
select * from employees where empID =2 and deptID = 800;
public void read(Integer empID, Integer deptID) {
    OperationResult<ColumnList<String>> result;
    try {
        ColumnFamily<Integer, String> columnFamilies = ColumnFamily.newColumnFamily(
                "employees", IntegerSerializer.get(), StringSerializer.get());
        result = keyspace.prepareQuery(columnFamilies).getKey(empID).execute();
        ColumnList<String> cols = result.getResult();
        //Other stuff
    } catch (ConnectionException e) {
        //handle error
    }
}
How should I achieve this?

As far as I can find, there isn't a super clean way to do this. You have to execute a CQL query and then iterate through the rows. This code is taken from the Astyanax examples file:
public void read(int empId) {
    logger.debug("read()");
    try {
        OperationResult<CqlResult<Integer, String>> result = keyspace
                .prepareQuery(EMP_CF)
                .withCql(String.format("SELECT * FROM %s WHERE %s=%d;", EMP_CF_NAME, COL_NAME_EMPID, empId))
                .execute();
        for (Row<Integer, String> row : result.getResult().getRows()) {
            logger.debug("row: " + row.getKey() + "," + row); // why is rowKey null?
            ColumnList<String> cols = row.getColumns();
            logger.debug("emp");
            logger.debug("- emp id: " + cols.getIntegerValue(COL_NAME_EMPID, null));
            logger.debug("- dept: " + cols.getIntegerValue(COL_NAME_DEPTID, null));
            logger.debug("- firstName: " + cols.getStringValue(COL_NAME_FIRST_NAME, null));
            logger.debug("- lastName: " + cols.getStringValue(COL_NAME_LAST_NAME, null));
        }
    } catch (ConnectionException e) {
        logger.error("failed to read from C*", e);
        throw new RuntimeException("failed to read from C*", e);
    }
}
You just have to tune the CQL query to return what you want. This is a bit frustrating because, according to the documentation, you can do:
Column<String> result = keyspace.prepareQuery(CF_COUNTER1)
    .getKey(rowKey)
    .getColumn("Column1")
    .execute().getResult();
Long counterValue = result.getLongValue();
However, I don't know what rowKey is. I've posted a question about what rowKey can be; hopefully that will help.
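To address the original question directly: with CQL over Astyanax, you can put both components of the primary key in the WHERE clause. A minimal sketch of building that statement (the helper name `buildSelect` is mine; I'm assuming the unquoted identifiers are lowercased by CQL, as usual):

```java
public class EmpQuery {
    // Build the CQL for one (empID, deptID) pair. The table and column names
    // are assumed to match the schema in the question; CQL lowercases
    // unquoted identifiers, hence "empid"/"deptid".
    static String buildSelect(int empId, int deptId) {
        return String.format(
                "SELECT * FROM employees WHERE empid = %d AND deptid = %d;",
                empId, deptId);
    }

    public static void main(String[] args) {
        // prints: SELECT * FROM employees WHERE empid = 2 AND deptid = 800;
        System.out.println(buildSelect(2, 800));
        // The string is then handed to Astyanax exactly as in the answer above:
        // keyspace.prepareQuery(EMP_CF).withCql(buildSelect(2, 800)).execute();
    }
}
```

Iterating the result rows then works exactly as in the example above; the only change is the extra predicate on the clustering column.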

Related

Cannot insert into Cassandra table, getting SyntaxError

I have an assignment where I have to build a Cassandra database. I have connected Cassandra with IntelliJ; I'm writing in Java and the output is shown in the command line.
My keyspace farm_db contains a couple of tables into which I would like to insert data. I would like to insert data with two columns and a list, all in one row, into the table 'farmers'. This is a part of my database so far:
cqlsh:farm_db> use farm_db;
cqlsh:farm_db> Describe tables;
farmers foods_dairy_eggs foods_meat
foods_bread_cookies foods_fruit_vegetables
cqlsh:farm_db> select * from farmers;
farmer_id | delivery | the_farmer
-----------+----------+------------
This is what I'm trying to do:
[Picture of what I'm trying to do][1]
I need to insert the collection types 'list' and 'map' into 'farmers', but after a couple of failed attempts with that I tried using HashMap and ArrayList instead. I think this could work, but I seem to have an error in my syntax and I have no idea what the problem is:
Exception in thread "main" com.datastax.driver.core.exceptions.SyntaxError: line 1:31 mismatched input 'int' expecting ')' (INSERT INTO farmers (farmer_id [int]...)
Am I missing something or am I doing something wrong?
This is my code:
public class FarmersClass {
    public static String serverIP = "127.0.0.1";
    public static String keyspace = "";

    //Create db
    public void crateDatabase(String databaseName) {
        Cluster cluster = Cluster.builder()
                .addContactPoints(serverIP)
                .build();
        keyspace = databaseName;
        Session session = cluster.connect();
        String create_db_query = "CREATE KEYSPACE farm_db WITH replication "
                + "= {'class':'SimpleStrategy', 'replication_factor':1};";
        session.execute(create_db_query);
    }

    //Create table
    public void createFarmersTable() {
        Cluster cluster = Cluster.builder()
                .addContactPoints(serverIP)
                .build();
        Session session = cluster.connect("farm_db");
        String create_farm_table_query = "CREATE TABLE farmers(farmer_id int PRIMARY KEY, the_farmer Map <text, text>, delivery list<text>);";
        session.execute(create_farm_table_query);
    }

    //Insert data in table 'farmers'.
    public void insertFarmers(int id, HashMap<String, String> the_farmer, ArrayList<String> delivery) {
        Cluster cluster = Cluster.builder()
                .addContactPoints(serverIP)
                .build();
        Session session = cluster.connect("farm_db");
        String insert_query = "INSERT INTO farmers (farmer_id int PRIMARY KEY, the_farmer, delivery) values (" + id + "," + the_farmer + "," + delivery + ");";
        System.out.println(insert_query);
        session.execute(insert_query);
    }

    public static void main(String[] args) {
        FarmersClass farmersClass = new FarmersClass();
        // farmersClass.crateDatabase("farm_db");
        // farmersClass.createFarmersTable();

        //Collection type map
        HashMap<String, String> the_farmer = new HashMap<>();
        the_farmer.put("Name", "Ana Petersen");
        the_farmer.put("Farmhouse", "The great farmhouse");
        the_farmer.put("Foods", "Fruits & Vegetables");

        //Collection type list
        ArrayList<String> delivery = new ArrayList<String>();
        String delivery_1 = "Village 1";
        String delivery_2 = "Village 2";
        delivery.add(delivery_1);
        delivery.add(delivery_2);

        farmersClass.insertFarmers(1, the_farmer, delivery);
    }
}
The problem is the syntax of your CQL INSERT query:
String insert_query =
    "INSERT INTO farmers (farmer_id int PRIMARY KEY, the_farmer, delivery) "
    + "values (" + id + "," + the_farmer + "," + delivery + ");";
You've incorrectly added int PRIMARY KEY in the list of columns.
The correct format is:
INSERT INTO table_name (pk, col2, col3) VALUES ( ... )
For details and examples, see CQL INSERT. Cheers!
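A second problem lurks once the column list is fixed: concatenating the HashMap and ArrayList into the query calls their toString(), which yields {Name=Ana ...} and [Village 1, Village 2] rather than the CQL literals {'Name': 'Ana'} and ['Village 1', 'Village 2']. A sketch of hand-rolling those literals (the helper names are mine, not driver API; binding the collections through a prepared statement is the cleaner fix):

```java
import java.util.*;

public class CqlLiterals {
    // Quote a string as a CQL text literal, escaping embedded single quotes.
    static String cqlString(String s) {
        return "'" + s.replace("'", "''") + "'";
    }

    // Render a Java Map as a CQL map literal: {'k': 'v', ...}
    static String cqlMap(Map<String, String> m) {
        StringJoiner j = new StringJoiner(", ", "{", "}");
        for (Map.Entry<String, String> e : m.entrySet())
            j.add(cqlString(e.getKey()) + ": " + cqlString(e.getValue()));
        return j.toString();
    }

    // Render a Java List as a CQL list literal: ['a', 'b']
    static String cqlList(List<String> l) {
        StringJoiner j = new StringJoiner(", ", "[", "]");
        for (String s : l) j.add(cqlString(s));
        return j.toString();
    }

    public static void main(String[] args) {
        Map<String, String> farmer = new LinkedHashMap<>();
        farmer.put("Name", "Ana Petersen");
        List<String> delivery = Arrays.asList("Village 1", "Village 2");
        // prints: INSERT INTO farmers (farmer_id, the_farmer, delivery)
        //         VALUES (1, {'Name': 'Ana Petersen'}, ['Village 1', 'Village 2']);
        System.out.println("INSERT INTO farmers (farmer_id, the_farmer, delivery) VALUES (1, "
                + cqlMap(farmer) + ", " + cqlList(delivery) + ");");
    }
}
```

That said, string concatenation is fragile and open to injection; prepared statements, which let the driver serialize the Java collections for you, are the recommended route.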

How to fix [Error: While query - conversion error from string "pending" ] when trying to insert a new record into firebird database?

I've created a table with the following schema:
CREATE TABLE orders (
id INTEGER NOT NULL PRIMARY KEY,
status VARCHAR(30) NOT NULL CHECK(status IN('ordered', 'paid', 'pending', 'complete')),
order_time TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
delivery_id INTEGER,
client_id INTEGER,
operator_id INTEGER,
FOREIGN KEY(delivery_id)
REFERENCES delivery_info(id) ON DELETE CASCADE,
FOREIGN KEY(client_id) REFERENCES client(id) ON DELETE CASCADE,
FOREIGN KEY(operator_id) REFERENCES operator(id)
);
However, when I try to insert new data into the table using a Node.js application, I get the following error:
Console output
Generated query looks okay to me:
INSERT INTO orders VALUES (793771, 'pending', '1612387572153', 590931, 3923, 0);
Application code:
class Order {
    static DEFAULT_OPERATOR_ID = 0;

    constructor(id, status, time, delivery, client,
                operatorId = Order.DEFAULT_OPERATOR_ID) {
        this.id = id;
        this.status = status;
        this.time = time;
        this.delivery = delivery;
        this.client = client;
        this.operatorId = operatorId;
    }

    save() {
        console.log(this);
        const insertQuery = `
            INSERT INTO orders
            VALUES (${this.id}, '${this.status}', '${this.time}', ${this.delivery.id}, ${this.client.id}, ${this.operatorId});
        `;
        console.log(insertQuery);
        dbConnection.query(insertQuery, (error, result) => {
            if (error) throw error;
            console.log('Inserted new order');
        });
    }
}
The issue is fixed when I specify the column list during insertion, i.e. INSERT INTO orders (id, status, order_time, delivery_id, client_id, operator_id) VALUES (...). Thanks, @Mark Rotteveel.

Lose Properties when convert Cassandra column to java object

I use spring-data-cassandra-1.2.1.RELEASE to operate a Cassandra database. Things all went well, but recently I hit a problem when using this code to get data:
public UserInfoCassandra selectUserInfo(String passport) {
    Select select = QueryBuilder.select().from("userinfo");
    select.setConsistencyLevel(ConsistencyLevel.QUORUM);
    select.where(QueryBuilder.eq("passport", passport));
    UserInfoCassandra userinfo = operations.selectOne(select,
            UserInfoCassandra.class);
    return userinfo;
}
There are many properties in userinfo, but I only get two of them: passport and uid.
I debugged into the method and found that the data coming from the database is right, with all properties present, but when it is converted to a Java object some of them disappear. The converting code:
protected <T> T selectOne(Select query, CassandraConverterRowCallback<T> readRowCallback) {
    ResultSet resultSet = query(query);
    Iterator<Row> iterator = resultSet.iterator();
    if (iterator.hasNext()) {
        Row row = iterator.next();
        T result = readRowCallback.doWith(row);
        if (iterator.hasNext()) {
            throw new DuplicateKeyException("found two or more results in query " + query);
        }
        return result;
    }
    return null;
}
The row data is right, but the result is wrong. Who can help?
Most probably your entity class and its corresponding relational model are mismatched: the converter only populates properties it can match to a column (by name or mapping annotation) and silently skips the rest. Check that every property of UserInfoCassandra maps to a column name in the userinfo table.
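To see why a name mismatch loses data silently, here is a deliberately naive illustration (not Spring code; the mapper and entity below are mine) of name-based row-to-object mapping, where any column without an exactly matching field is simply dropped:

```java
import java.lang.reflect.Field;
import java.util.Map;

// Illustration only: a toy mapper that fills fields whose names exactly
// match column names, mimicking how a converter can "lose" properties.
public class NaiveMapper {
    public static <T> T map(Map<String, Object> row, Class<T> type) {
        try {
            T obj = type.getDeclaredConstructor().newInstance();
            for (Map.Entry<String, Object> e : row.entrySet()) {
                try {
                    Field f = type.getDeclaredField(e.getKey()); // exact name match only
                    f.setAccessible(true);
                    f.set(obj, e.getValue());
                } catch (NoSuchFieldException dropped) {
                    // No matching field: the column's value is silently lost.
                }
            }
            return obj;
        } catch (ReflectiveOperationException ex) {
            throw new RuntimeException(ex);
        }
    }

    public static class UserInfoCassandra {
        public String passport;
        public String uid;
        public String nickName; // but the column is probably named "nickname"
    }

    public static void main(String[] args) {
        Map<String, Object> row = Map.of("passport", "p1", "uid", "u1", "nickname", "bob");
        UserInfoCassandra u = map(row, UserInfoCassandra.class);
        System.out.println(u.passport + " " + u.uid + " " + u.nickName); // prints: p1 u1 null
    }
}
```

The real converter is more sophisticated (it honors @Column-style mapping annotations), but the failure mode is the same: properties it cannot match come back null without any error.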

cassandra trigger on composite blob key

I use Cassandra 2.1.9 and have a table like:
create table "Keyspace1"."Standard4" ( id blob, user_name blob, data blob, primary key(id, user_name));
I followed the post in Cassandra Sample Trigger Code to get the inserted values, and my trigger code looks like:
public class InvertedIndex implements ITrigger {
    private static final Logger logger = LoggerFactory.getLogger(InvertedIndex.class);

    public Collection augment(ByteBuffer key, ColumnFamily update) {
        CFMetaData cfm = update.metadata();
        ByteBuffer id_bb = key;
        String id_Value = new String(id_bb.array());

        Iterator col_itr = update.iterator();
        Cell username_col = (Cell) col_itr.next();
        ByteBuffer username_bb = CompositeType.extractComponent(username_col.name().collectionElement(), 0);
        String username_Value = new String(username_bb.array());

        Cell data_col = (Cell) col_itr.next();
        ByteBuffer data_bb = BytesType.instance.compose(data_col.value());
        String data_Value = new String(data_bb.array());

        logger.info(" id --> " + id_Value);
        logger.info(" username --> " + username_Value);
        logger.info(" data ---> " + data_Value);
        return null;
    }
}
I tried:
insert into "Keyspace1"."Standard4" (id, user_name, data) values (textAsBlob('id1'), textAsBlob('user_name1'), textAsBlob('data1'));
and got a runtime exception at ByteBuffer username_bb = CompositeType.extractComponent(username_col.name().collectionElement(), 0);
Caused by: java.lang.NullPointerException: null
at org.apache.cassandra.db.marshal.CompositeType.extractComponent(CompositeType.java:191) ~[apache-cassandra-2.1.9.jar:2.1.9]
at org.apache.cassandra.triggers.InvertedIndex.augment(InvertedIndex.java:52) ~[na:na]
at org.apache.cassandra.triggers.TriggerExecutor.executeInternal(TriggerExecutor.java:223) ~[apache-cassandra-2.1.9.jar:2.1.9]
... 17 common frames omitted
Can anybody tell me how to correct this?
You are trying to show all the inserted column names and values, right?
Here is the code:
@Override
public Collection<Mutation> augment(ByteBuffer key, ColumnFamily update) {
    CFMetaData cfm = update.metadata();
    System.out.println("key => " + ByteBufferUtil.toInt(key));
    for (Cell cell : update) {
        if (cell.value().remaining() > 0) {
            try {
                String name = cfm.comparator.getString(cell.name());
                String value = cfm.getValueValidator(cell.name()).getString(cell.value());
                System.out.println("Column Name => " + name + " Value => " + value);
            } catch (Exception e) {
                System.out.println("Exception : " + e.getMessage());
            }
        }
    }
    return null;
}

Cassandra insert after delete fails

In a specific case, deleting and then reinserting rows into a table does not work as intended when the replication factor is more than 1 (say 3).
I have tried using QUORUM for reads as well as writes, and a timestamp in the query, with no success.
If the replication factor is set to 1, all works well. I have 5 nodes in my cluster.
The issue becomes visible when reading after the delete and insert operations: the read does not return rows that should have been inserted.
Could someone please share their thoughts on this?
Edit:
Following is what I have tried from code
Database create script:
DROP KEYSPACE IF EXISTS consistency_check;
CREATE KEYSPACE IF NOT EXISTS consistency_check WITH replication = {'class': 'NetworkTopologyStrategy', 'Cassandra': 3} AND durable_writes = true;
USE consistency_check;
CREATE TABLE IF NOT EXISTS resource (
id uuid,
col1 text,
col2 text,
col3 text,
col4 text,
PRIMARY KEY(id, col1)
);
C# Unit test code:
public class Resource
{
    public Guid Id { get; set; }
    public string Col1 { get; set; }
    public string Col2 { get; set; }
    public string Col3 { get; set; }
    public string Col4 { get; set; }
}

public class MyMappings : Mappings
{
    public MyMappings()
    {
        For<Resource>()
            .TableName("resource")
            .PartitionKey(u => u.Id)
            .Column(u => u.Id, cm => cm.WithName("id"))
            .Column(u => u.Col1, cm => cm.WithName("col1"))
            .Column(u => u.Col2, cm => cm.WithName("col2"))
            .Column(u => u.Col3, cm => cm.WithName("col3"))
            .Column(u => u.Col4, cm => cm.WithName("col4"));
    }
}

//Following test fails always
[TestMethod]
public async Task DeleteInsert()
{
    var table = new Table<Resource>(_session);
    _session.Execute("truncate resource");
    for (int i = 0; i < 10; i++)
    {
        var id = Guid.NewGuid();
        await table.Insert(new Resource { Id = id, Col1 = id.ToString(), Col2 = id.ToString(), Col3 = id.ToString(), Col4 = id.ToString() }).ExecuteAsync().ConfigureAwait(false);
    }

    var data = (await table.ExecuteAsync().ConfigureAwait(false)).ToList();
    foreach (var datum in data)
    {
        await table.Where(e => e.Id == datum.Id).Delete().ExecuteAsync().ConfigureAwait(false);
    }
    foreach (var datum in data)
    {
        await table.Insert(datum).ExecuteAsync().ConfigureAwait(false);
    }

    var data1 = (await table.ExecuteAsync().ConfigureAwait(false)).ToList();
    Console.WriteLine("data length: {0}", data.Count);
    Console.WriteLine("data1 length: {0}", data1.Count);
    Assert.IsTrue(data.Count == 10);
    Assert.IsTrue(data1.Count == 10);
}
Where are you setting the consistency level? If you use QUORUM for both reads and writes, you shouldn't see this problem.
It looks like you're running the inserts and deletes asynchronously, so a delete could run before the corresponding insert. In that case the delete clause is not met and no rows are deleted.
I am not super familiar with the C# async/await construct, so I might be missing something.
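On the first point, the reason QUORUM on both sides matters is arithmetic: with RF = 3 a quorum is 2 replicas, and any read quorum must overlap any write quorum because R + W > RF. At consistency ONE (R = W = 1) that guarantee disappears, so a read can hit a replica the re-insert has not reached yet. A plain-Java sanity check of that condition (illustrative helper names, not driver code):

```java
public class QuorumCheck {
    // Quorum size for a given replication factor: majority of replicas.
    static int quorum(int rf) { return rf / 2 + 1; }

    // Strong consistency requires every read set to intersect every write set.
    static boolean overlaps(int readReplicas, int writeReplicas, int rf) {
        return readReplicas + writeReplicas > rf;
    }

    public static void main(String[] args) {
        int rf = 3;
        int q = quorum(rf);                      // 2
        System.out.println(overlaps(q, q, rf));  // true  : QUORUM/QUORUM is safe
        System.out.println(overlaps(1, 1, rf));  // false : ONE/ONE can miss the delete
    }
}
```

The consistency level can be set either per statement or as a cluster-wide default in the driver's query options; the key is that it must cover both the deletes/inserts and the subsequent read.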
