I'm trying to deserialize the commit log in Cassandra for a research project.
I have succeeded so far in deserializing the cell names and the cell values from the mutation entries in the commit log.
However, I am struggling to deserialize the primary key entries of the mutations since, by design, the cell values are empty for the primary keys. The closest I could get is to retrieve the partition key name from the column definition of the column family metadata. But I don't know how to get the actual value of the primary key?
Thanks
Below is my approach to deserialize the mutation:
// function in CommitLog.java
public ReplayPosition add(Mutation mutation)
{
    Collection<ColumnFamily> myCollection = mutation.getColumnFamilies();
    for (ColumnFamily cf : myCollection)
    {
        CFMetaData cfm = cf.metadata();
        // Retrieve the name of the partition key
        logger.info("partition key={}.", cfm.partitionKeyColumns().get(0).name.toString());
        for (Cell cell : cf)
        {
            // Retrieve the cell name
            String name = cfm.comparator.getString(cell.name());
            logger.info("name={}.", name);
            // Retrieve the cell value
            String value = cfm.getValueValidator(cell.name()).getString(cell.value());
            logger.info("value={}.", value);
        }
    }
    // ... the rest of the original add() logic, which produces the ReplayPosition, continues here
}
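For reference, the serialized partition key travels with the Mutation itself, so one way to render a readable value is to run it through the key validator, the same way the cell values are decoded above. This is only a sketch against the Cassandra 2.1-era APIs; in particular, mutation.key() returning the partition key ByteBuffer is an assumption to verify for your version:

// Sketch only: pair the partition key column name with the mutation's key bytes,
// rendered through the key validator (assumes Cassandra 2.1-era APIs).
for (ColumnFamily cf : mutation.getColumnFamilies())
{
    CFMetaData cfm = cf.metadata();
    String keyName  = cfm.partitionKeyColumns().get(0).name.toString();
    String keyValue = cfm.getKeyValidator().getString(mutation.key());
    logger.info("partition key {}={}.", keyName, keyValue);
}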
Related
I am migrating my database from MongoDB to DynamoDB. I have a problem with a delete operation on a table where labName is the partition key, serialNumber is the sort key, and each record also has a feedId. I want to delete all the records from the table where labName is a given value and feedId is NOT IN (an array of ids).
I am doing it in MongoDB with the code below.
Is there a way with BatchWriteItem where I can add a condition on feedId without providing the sort key?
let dbHandle = await getMongoDbHandle(dbName);
let query = {
    feedid: {$nin: feedObjectIds}
};
let output = await dbModule.removePromisify(dbHandle,
    dbModule.collectionNames.feeds, query);
While working with DynamoDB, you can perform conditional retrieval (GET) / deletion (DELETE) on records if and only if you provide all of the attributes of the primary key. For example:
For a Simple Primary Key, you only need to provide a value for the partition key.
For a Composite Primary Key, you must provide values for both the partition key and the sort key.
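Applied to the question above, that means an item cannot be deleted by labName alone; each DeleteRequest needs both labName and serialNumber. Below is only an illustrative sketch, not part of the original answer: it assumes the AWS SDK v2 DocumentClient, and the table name, function name, and client-side filtering are made up. The idea is to query the partition, drop the items whose feedId is in the list to keep, and batch-delete the rest with their full keys.

// Sketch only: attribute names (labName, serialNumber, feedId) come from the question,
// everything else is illustrative.
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

async function deleteByLabExceptFeeds(tableName, labName, feedIdsToKeep) {
    // 1. Fetch every item under the given partition key.
    const { Items = [] } = await docClient.query({
        TableName: tableName,
        KeyConditionExpression: 'labName = :lab',
        ExpressionAttributeValues: { ':lab': labName },
    }).promise();

    // 2. Keep only the items whose feedId is NOT IN the "keep" list.
    const toDelete = Items.filter(item => !feedIdsToKeep.includes(item.feedId));

    // 3. BatchWriteItem requires the full primary key (labName + serialNumber) per delete
    //    and accepts at most 25 requests per call.
    for (let i = 0; i < toDelete.length; i += 25) {
        const chunk = toDelete.slice(i, i + 25);
        await docClient.batchWrite({
            RequestItems: {
                [tableName]: chunk.map(item => ({
                    DeleteRequest: {
                        Key: { labName: item.labName, serialNumber: item.serialNumber },
                    },
                })),
            },
        }).promise();
    }
}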
I want to use Node.js with Sequelize and SQLite. I have the following model for users:
const User = sequelize.define('user', {
    rowid: {
        type: 'INTEGER',
        primaryKey: true,
    },
    // other properties
});
If I now execute const newUser = await User.create({ /* properties */ }), I would expect to be able to access the ID of the new user with newUser.rowid. But this only works if I add autoIncrement: true to the column definition. Following https://www.sqlite.org/autoinc.html, I would rather not do this. Are there any other possibilities?
Edit
As it turns out, this is only possible by creating the table without autoIncrement: true and only afterwards adding it to the column definition. The much more practical way is probably to just use autoincrement; the performance decrease won't matter for most small applications.
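For illustration, a minimal sketch of that pragmatic route (the model mirrors the question; the SQLite storage path and the extra name attribute are made up):

const { Sequelize, DataTypes } = require('sequelize');
const sequelize = new Sequelize({ dialect: 'sqlite', storage: 'db.sqlite' });

const User = sequelize.define('user', {
    rowid: {
        type: DataTypes.INTEGER,
        primaryKey: true,
        autoIncrement: true,   // lets Sequelize read the generated key back onto the instance
    },
    name: DataTypes.STRING,    // stand-in for the other properties
});

(async () => {
    await sequelize.sync();
    const newUser = await User.create({ name: 'alice' });
    console.log(newUser.rowid); // populated after create()
})();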
You should not have to use autoincrement to access rowid in the table User. I would expect to access it as User.rowid rather than newUser.rowid as in the example, since the table name is (apparently) User.
Also, from the SQLite docs:
if a rowid table has a primary key that consists of a single column and the declared type of that column is "INTEGER" in any mixture of upper and lower case, then the column becomes an alias for the rowid. Such a column is usually referred to as an "integer primary key". A PRIMARY KEY column only becomes an integer primary key if the declared type name is exactly "INTEGER".
And finally, you might consider a different name than rowid for the PK, since SQLite already has a rowid:
Except for WITHOUT ROWID tables, all rows within SQLite tables have a 64-bit signed integer key that uniquely identifies the row within its table. This integer is usually called the "rowid". The rowid value can be accessed using one of the special case-independent names "rowid", "oid", or "_rowid_" in place of a column name. If a table contains a user defined column named "rowid", "oid" or "_rowid_", then that name always refers to the explicitly declared column and cannot be used to retrieve the integer rowid value.
I am new to Cassandra triggers and still ramping up. I could find a way to extract the value for a given ByteBuffer key, but I do not know how to get the "name" of the actual primary key column.
public static String getKeyText(ColumnFamily columnFamily, ByteBuffer key) {
    CFMetaData cfm = columnFamily.metadata();
    String key_data = cfm.getKeyValidator().getString(key);
    return key_data;
}
Any idea on how to get just the key column name?
Any pointers are highly appreciated
Thanks
Not sure if this is what you mean, but you can get the names of the partition key columns from partitionKeyColumns() on the column family's metadata; the ColumnDefinitions have a name field that's readable. There may be more than one depending on the schema.
https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/config/CFMetaData.java#L797
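Putting the question and the answer together, a rough sketch (meant to sit in the same class as the snippet above, using only the Cassandra 2.1 APIs already referenced; the composite-key caveat in the comment is an assumption):

public static String describeKey(ColumnFamily columnFamily, ByteBuffer key) {
    CFMetaData cfm = columnFamily.metadata();
    // Name of the (first) partition key column, per the answer above.
    String keyName = cfm.partitionKeyColumns().get(0).name.toString();
    // Human-readable value of the key, as in the question's getKeyText().
    String keyValue = cfm.getKeyValidator().getString(key);
    // Note: with a composite partition key, partitionKeyColumns() returns several
    // ColumnDefinitions and the key validator is a composite type, so the value
    // would need to be split per component.
    return keyName + "=" + keyValue;
}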
So I have the following code used as a validation method:
if (TableQuery[UsersTable].filter(_.name === login).exists.run) {
  val id = TableQuery[UsersTable].filter(_.name === login).firstOption.get.id
  val name = TableQuery[UsersTable].filter(_.id === id).firstOption.get.name
}
In case you're wondering, I check .exists before querying the next two times because the login value can match two different columns in the database.
Anyway, I get [SlickException: Read NULL value (null) for ResultSet column Path s2._5] when attempting to get the id above, and I'm unsure why. There should be a result there, because the code has already validated that a matching row exists. No "id" column values are null.
How can I get this id value working correctly?
One of the involved columns is nullable but you didn't specify it as Option[...] in the Table.
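For illustration, a sketch of what that looks like in the table definition (Slick 2.x style, to match the .run / firstOption calls above; the H2 driver import and the column and table names are made up): a column that can be NULL in the database has to be declared as Option[...], otherwise reading a NULL throws exactly this SlickException.

import scala.slick.driver.H2Driver.simple._

case class User(id: Long, name: String, email: Option[String])

class UsersTable(tag: Tag) extends Table[User](tag, "users") {
  def id    = column[Long]("id", O.PrimaryKey, O.AutoInc)
  def name  = column[String]("name")
  // nullable in the database, therefore Option[String] here
  def email = column[Option[String]]("email")
  def * = (id, name, email) <> (User.tupled, User.unapply)
}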
In Redis, using a hash, I need to store a hash key with multiple fields and values.
I tried as below:
client.hmset("Table1", "Id", "9324324", "ReqNo", "23432", redis.print);
client.hmset("Table1", "Id", "9324325", "ReqNo", "23432", redis.print);
var arrrep = new Array();
client.hgetall("Table1", function(err, rep){
console.log(rep);
});
Output is: { Id: '9324325', ReqNo: '23432' }
I am getting only one value. How can I get all the fields and values in the hash key? Kindly let me know if I am doing something wrong. Thanks.
You are getting one value because you overwrite the previous value.
client.hmset("Table1", "Id", "9324324", "ReqNo", "23432", redis.print);
This adds Id, ReqNo to the Table1 hash object.
client.hmset("Table1", "Id", "9324325", "ReqNo", "23432", redis.print);
This overwrites Id and ReqNo for the Table1 hash object. At this point, you still only have two fields in the hash.
Actually, your problem comes from the fact that you are trying to map a relational database model onto Redis. You should not. With Redis, it is better to think in terms of data structures and access paths.
You need to store one hash object per record. For instance:
HMSET Id:9324324 ReqNo 23432 ... and some other properties ...
HMSET Id:9324325 ReqNo 23432 ... and some other properties ...
Then, you can use a set to store the IDs:
SADD Table1 9324324 9324325
Finally, to retrieve the ReqNo data associated with the Table1 collection:
SORT Table1 BY NOSORT GET # GET Id:*->ReqNo
If you also want to search for all the IDs associated with a given ReqNo, then you need another structure to support this access path:
SADD ReqNo:23432 9324324 9324325
So you can get the list of IDs for record 23432 by using:
SMEMBERS ReqNo:23432
In other words, do not try to transpose a relational model: just create your own data structures supporting your use cases.
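For completeness, here is how that model might look with the same node_redis client used in the question (a sketch only; key and field names follow the answer, error handling is omitted):

// One hash per record, keyed by the record ID.
client.hmset("Id:9324324", "ReqNo", "23432", redis.print);
client.hmset("Id:9324325", "ReqNo", "23432", redis.print);

// A set holding the IDs of the "Table1" collection, plus the reverse index by ReqNo.
client.sadd("Table1", "9324324", "9324325", redis.print);
client.sadd("ReqNo:23432", "9324324", "9324325", redis.print);

// Read everything back: fetch the IDs from the set, then each record's hash.
client.smembers("Table1", function (err, ids) {
    ids.forEach(function (id) {
        client.hgetall("Id:" + id, function (err, record) {
            console.log("Id:" + id, record);
        });
    });
});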