Get column names and their values in a Cassandra trigger - cassandra

I have this Cassandra trigger:
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;

import org.apache.cassandra.config.Schema;
import org.apache.cassandra.db.Mutation;
import org.apache.cassandra.db.RowUpdateBuilder;
import org.apache.cassandra.db.partitions.Partition;
import org.apache.cassandra.triggers.ITrigger;
import org.apache.cassandra.utils.FBUtilities;
import org.apache.cassandra.utils.UUIDGen;

public class AuditTrigger implements ITrigger
{
    private Properties properties = loadProperties();

    public Collection<Mutation> augment(Partition update)
    {
        String auditKeyspace = properties.getProperty("keyspace");
        String auditTable = properties.getProperty("table");

        RowUpdateBuilder audit = new RowUpdateBuilder(Schema.instance.getCFMetaData(auditKeyspace, auditTable),
                                                      FBUtilities.timestampMicros(),
                                                      UUIDGen.getTimeUUID());
        audit.add("keyspace_name", update.metadata().ksName);
        audit.add("table_name", update.metadata().cfName);
        audit.add("primary_key", update.metadata().getKeyValidator().getString(update.partitionKey().getKey()));

        return Collections.singletonList(audit.build());
    }
}
How can I get the inserted values and their column names?
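For reference, one way to get at the per-column data is to iterate the Partition passed into augment(). A rough sketch against the Cassandra 3.x internal API (classes from org.apache.cassandra.db.rows; these internals change between Cassandra versions, so treat this as an assumption, not a guaranteed recipe):

UnfilteredRowIterator it = update.unfilteredIterator();
while (it.hasNext())
{
    Unfiltered next = it.next();
    if (next.kind() != Unfiltered.Kind.ROW)
        continue;
    for (Cell cell : ((Row) next).cells())
    {
        // the column definition carries both the name and the type needed to decode the raw value
        String columnName = cell.column().name.toString();
        String value = cell.column().type.getString(cell.value());
        // e.g. add one audit mutation per cell here, or accumulate the pairs
    }
}

Each cell exposes its ColumnDefinition, so both the column name and a decoded value are available without knowing the updated table's schema up front.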

Related

How to persist PanacheEntity avoiding duplicate key exception?

I want to persist an entity, but skip it if it already exists in the datastore. Assume the name field is part of the primary key, and assume p1 already exists in the datastore: only p2 should be inserted, since inserting p1 again produces a duplicate key exception.
@Entity
public class PersonEntity extends PanacheEntity {

    String name;

    public PersonEntity() {
        // JPA requires a no-arg constructor
    }

    public PersonEntity(String name) {
        this.name = name;
    }

    public static Uni<PersonEntity> findByName(String name) {
        return find("name", name).firstResult();
    }
}
@QuarkusTest
public class PersonResourceTest {

    @Test
    @ReactiveTransactional
    void persistListOfPersons() {
        List<PersonEntity> persons = List.of(new PersonEntity("p1"), new PersonEntity("p2"));
        Predicate<PersonEntity> personExists = entity -> {
            // How to consume the Uni?
            Uni<PersonEntity> entityUni = PersonEntity.findByName(entity.name);
            // entityUni.onItem().ifNull().continueWith(???);
            // include entity in filtered stream
            // return true;
            // exclude entity from filtered stream
            return false;
        };
        List<PersonEntity> filteredPersons = persons.stream().filter(personExists).toList();
        PersonEntity.persist(filteredPersons);
    }
}
I can't produce a valid filter predicate. I need a boolean value somehow produced by the person query. But how?
This should serve as a minimal reproducible example.
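One possible direction (a sketch only, not a verified solution): with reactive Panache you cannot block inside a plain Predicate, so the existence check has to stay inside the Mutiny pipeline, for example by looking each entity up, dropping the ones that were found, and persisting the rest:

// Sketch: needs io.smallrye.mutiny.Multi and java.util.Objects;
// assumes PersonEntity.persist(Iterable<?>) returning Uni<Void>, as in reactive Panache.
Uni<Void> persisted = Multi.createFrom().iterable(persons)
        .onItem().transformToUniAndConcatenate(p ->
                PersonEntity.findByName(p.name)
                        .onItem().transform(existing -> existing == null ? p : null))
        .select().where(Objects::nonNull)   // drop the nulls, i.e. entities that already exist
        .collect().asList()
        .onItem().transformToUni(toInsert -> PersonEntity.persist(toInsert));

The test method would then return (or await) this Uni instead of calling persist on a pre-filtered list.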

How to use fieldset in lightning Component

I want to create a custom lightning component to create new Case records, and I need to use a field set to include fields in the component. I need this for only one object. I have never used field sets, so I don't have any idea about them. It would be really great if you could share some sample code or a link for this.
You can use this utility class.
This is the wrapper class that holds the meta info about the fields:
public with sharing class DataTableColumns {

    @AuraEnabled
    public String label { get; set; }

    @AuraEnabled
    public String fieldName { get; set; }

    @AuraEnabled
    public String type { get; set; }

    public DataTableColumns(String label, String fieldName, String type) {
        this.label = label;
        this.fieldName = fieldName;
        this.type = type;
    }
}
The FieldSetHelper class has a getColumns() method that returns a list of DataTableColumns wrappers containing the information about the field set's columns:
public with sharing class FieldSetHelper {

    /*
     * @param String strObjectName   : required. Object name to get the required field set from
     * @param String strFieldSetName : required. Field set name
     * @return List<DataTableColumns> : list of columns in the specified field set
     */
    public static List<DataTableColumns> getColumns(String strObjectName, String strFieldSetName) {

        Schema.SObjectType sObjectTypeObj = Schema.getGlobalDescribe().get(strObjectName);
        Schema.DescribeSObjectResult describeSObjectResultObj = sObjectTypeObj.getDescribe();
        Schema.FieldSet fieldSetObj = describeSObjectResultObj.FieldSets.getMap().get(strFieldSetName);

        List<DataTableColumns> lstDataColumns = new List<DataTableColumns>();
        for (Schema.FieldSetMember eachFieldSetMember : fieldSetObj.getFields()) {
            DataTableColumns dataColumns = new DataTableColumns(
                eachFieldSetMember.getLabel(),
                eachFieldSetMember.getFieldPath(),
                String.valueOf(eachFieldSetMember.getType()).toLowerCase());
            lstDataColumns.add(dataColumns);
        }
        return lstDataColumns;
    }
}
After you have all of that field set information, you can build the lightning component dynamically; a minimal entry point is sketched below.
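For instance, a thin Aura-enabled wrapper around the helper could hand the columns to the component (a sketch; the FieldSetController class name and getFieldSetColumns method name are assumptions, not part of the original answer):

public with sharing class FieldSetController {

    // Hypothetical entry point: the component's JavaScript controller would call this
    // and pass the result to something like lightning:datatable as its columns.
    @AuraEnabled
    public static List<DataTableColumns> getFieldSetColumns(String objectName, String fieldSetName) {
        return FieldSetHelper.getColumns(objectName, fieldSetName);
    }
}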

Cassandra Accessor / Mapper not mapping udt field

I am using the DataStax Cassandra Java driver 3.1.2. I have created the following table in Cassandra and inserted a record:
CREATE TYPE memory ( capacity text );
CREATE TABLE laptop ( id uuid PRIMARY KEY, model text, ram frozen<memory> );
SELECT * FROM laptop;
id | model | ram
--------------------------------------+---------------+-------------------
e55cba2b-0847-40d5-ad56-ae97e793dc3e | Dell Latitude | {capacity: '8gb'}
When I try to fetch the capacity field of the frozen memory type in Java using a Cassandra Accessor with the code below:
this.cluster = Cluster.builder().addContactPoint(node).withPort(port).build();
session = cluster.connect();
MappingManager manager = new MappingManager(session);
LaptopAccessor laptopAccessor = manager.createAccessor(LaptopAccessor.class);
Result<Laptop> cp = laptopAccessor.getOne(UUID.fromString("e55cba2b-0847-40d5-ad56-ae97e793dc3e"));
System.out.println(cp.one());
it returns null for the ram field itself; in fact the whole bean comes back empty:
id = null model = null ram = null
I was expecting the mapper to create a ram instance while mapping, map the capacity field into it, and return the Laptop bean.
I have the following Accessor interface:
@Accessor
interface LaptopAccessor {
    @Query("SELECT ram.capacity FROM user_info.laptop where id=?")
    Result<Laptop> getOne(UUID id);
}
I have the following Java beans for the above table:
@Table(keyspace = "user_info", name = "laptop")
public class Laptop {

    private UUID id;
    private String model;
    private Memory ram;

    @PartitionKey
    public UUID getId() {
        return id;
    }

    public void setId(UUID id) {
        this.id = id;
    }

    public String getModel() {
        return model;
    }

    public void setModel(String model) {
        this.model = model;
    }

    @Frozen
    public Memory getRam() {
        return ram;
    }

    public void setRam(Memory ram) {
        this.ram = ram;
    }

    @Override
    public String toString() {
        return "id = " + id + " model = " + model + " ram = " + ram;
    }
}
@UDT(keyspace = "user_info", name = "memory")
public class Memory {

    private String capacity;

    @Field
    public String getCapacity() {
        return capacity;
    }

    public void setCapacity(String capacity) {
        this.capacity = capacity;
    }

    @Override
    public String toString() {
        return "capacity = " + capacity;
    }
}
The code works fine when I change the query to retrieve the entire ram UDT. Could somebody please explain why the mapper doesn't work when I select a single field of the UDT in the query?
Doesn't Cassandra support this? Is there any workaround to fetch individual UDT fields?
I think the issue is the return type on your accessor:
@Accessor
interface LaptopAccessor {
    @Query("SELECT ram.capacity FROM user_info.laptop where id=?")
    Result<Laptop> getOne(UUID id);
}
Since your query only selects ram.capacity, all the driver gets back is a Row with a single String column named ram.capacity, which does not map to any field in Laptop.
Instead, since it looks like all you want is the 1 row matching that query, you could change your Accessor to:
@Accessor
interface LaptopAccessor {
    @Query("SELECT ram.capacity FROM user_info.laptop where id=?")
    ResultSet getOne(UUID id);
}
The accessor now returns a ResultSet for which you can call one().getString(0) to get the capacity back. It's not ideal if you don't want to deal with ResultSet directly, but works well.
You shouldn't really need the whole Laptop object anyway, since all you are requesting is a single field of a UDT, right?
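For example (a sketch assuming the same session and accessor setup as in the question):

ResultSet rs = laptopAccessor.getOne(UUID.fromString("e55cba2b-0847-40d5-ad56-ae97e793dc3e"));
Row row = rs.one();                       // null if no row matched the id
String capacity = (row != null) ? row.getString(0) : null;
System.out.println("capacity = " + capacity);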

Cassandra Query column mapping

I have a Cassandra table trans_by_date with columns origin, tran_date (and some other columns). When I try to run the code below, I get this error:
java.util.NoSuchElementException: Columns not found in table trans.trans_by_date: TRAN_DATE
The column does exist. Is there a syntax gotcha?
JavaRDD<TransByDate> transDateRDD = javaFunctions(sc)
        .cassandraTable("trans", "trans_by_date", CassandraJavaUtil.mapRowTo(TransByDate.class))
        .select(CassandraJavaUtil.column("origin"), CassandraJavaUtil.column("TRAN_DATE").as("transdate"));

public static class TransByDate implements Serializable {

    private String origin;
    private Date transdate;

    public String getOrigin() { return origin; }
    public void setOrigin(String id) { this.origin = id; }

    public Date getTransdate() { return transdate; }
    public void setTransdate(Date trans_date) { this.transdate = trans_date; }
}
Thanks
If you change CassandraJavaUtil.column("TRAN_DATE") to CassandraJavaUtil.column("tran_date"), i.e. only use lower-case column names, your code should work.
It seems that CassandraJavaUtil puts the column name in double quotes when building the select query, and quoted identifiers are case-sensitive in CQL.
See the following link for uppercase and lowercase handling in Cassandra:
https://docs.datastax.com/en/cql/3.3/cql/cql_reference/ucase-lcase_r.html
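Applied to the snippet above, the corrected select would be (same setup, only the column name changed to lower case):

JavaRDD<TransByDate> transDateRDD = javaFunctions(sc)
        .cassandraTable("trans", "trans_by_date", CassandraJavaUtil.mapRowTo(TransByDate.class))
        .select(CassandraJavaUtil.column("origin"), CassandraJavaUtil.column("tran_date").as("transdate"));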

How to execute ranged query in cassandra with astyanax and composite column

I am developing a blog using Cassandra and Astyanax. It is only an exercise, of course.
I have modelled the CF_POST_INFO column family in this way:
private static class PostAttribute {

    @Component(ordinal = 0)
    UUID postId;

    @Component(ordinal = 1)
    String category;

    @Component(ordinal = 2)
    String name;

    public PostAttribute() {}

    private PostAttribute(UUID postId, String category, String name) {
        this.postId = postId;
        this.category = category;
        this.name = name;
    }

    public static PostAttribute of(UUID postId, String category, String name) {
        return new PostAttribute(postId, category, name);
    }
}
private static AnnotatedCompositeSerializer<PostAttribute> postSerializer = new AnnotatedCompositeSerializer<>(PostAttribute.class);
private static final ColumnFamily<String, PostAttribute> CF_POST_INFO =
ColumnFamily.newColumnFamily("post_info", StringSerializer.get(), postSerializer);
And a post is saved in this way:
MutationBatch m = keyspace().prepareMutationBatch();

ColumnListMutation<PostAttribute> clm = m.withRow(CF_POST_INFO, "posts")
        .putColumn(PostAttribute.of(post.getId(), "author", "id"), post.getAuthor().getId().get())
        .putColumn(PostAttribute.of(post.getId(), "author", "name"), post.getAuthor().getName())
        .putColumn(PostAttribute.of(post.getId(), "meta", "title"), post.getTitle())
        .putColumn(PostAttribute.of(post.getId(), "meta", "pubDate"), post.getPublishingDate().toDate());

for (String tag : post.getTags()) {
    clm.putColumn(PostAttribute.of(post.getId(), "tags", tag), (String) null);
}

for (String category : post.getCategories()) {
    clm.putColumn(PostAttribute.of(post.getId(), "categories", category), (String) null);
}
The idea is to have rows act as time buckets (one row per month or year, for example).
Now, if I want to get the last 5 posts, for example, how can I do a range query for that? I can execute a range query based on the post id (UUID), but I don't know the available post ids without doing another query to get them. What are the Cassandra best practices here?
Any suggestion about the data model is welcome of course; I'm very new to Cassandra.
If your use case works the way I think it does, you could modify your PostAttribute so that the first component is a TimeUUID; that way you store the data as a time series and can easily pull the oldest or newest 5 using the standard techniques. Anyway, here's a sample of what it would look like to me, since you don't really need to make multiple columns if you're already using composites.
public class PostInfo {

    @Component(ordinal = 0)
    protected UUID timeUuid;

    @Component(ordinal = 1)
    protected UUID postId;

    @Component(ordinal = 2)
    protected String category;

    @Component(ordinal = 3)
    protected String name;

    @Component(ordinal = 4)
    protected UUID authorId;

    @Component(ordinal = 5)
    protected String authorName;

    @Component(ordinal = 6)
    protected String title;

    @Component(ordinal = 7)
    protected Date published;

    public PostInfo() {}

    private PostInfo(final UUID postId, final String category, final String name, final UUID authorId, final String authorName, final String title, final Date published) {
        this.timeUuid = TimeUUIDUtils.getUniqueTimeUUIDinMillis();
        this.postId = postId;
        this.category = category;
        this.name = name;
        this.authorId = authorId;
        this.authorName = authorName;
        this.title = title;
        this.published = published;
    }

    public static PostInfo of(final UUID postId, final String category, final String name, final UUID authorId, final String authorName, final String title, final Date published) {
        return new PostInfo(postId, category, name, authorId, authorName, title, published);
    }
}
private static AnnotatedCompositeSerializer<PostInfo> postInfoSerializer = new AnnotatedCompositeSerializer<>(PostInfo.class);
private static final ColumnFamily<String, PostInfo> CF_POSTS_TIMELINE =
ColumnFamily.newColumnFamily("post_info", StringSerializer.get(), postInfoSerializer);
You should save it like this:
MutationBatch m = keyspace().prepareMutationBatch();

ColumnListMutation<PostInfo> clm = m.withRow(CF_POSTS_TIMELINE, "all" /* or whatever makes sense for you, such as year or month */)
        .putColumn(PostInfo.of(post.getId(), post.getCategory(), post.getName(), post.getAuthor().getId(), post.getAuthor().getName(), post.getTitle(), post.getPublishedOn()), (String) null /* the column value can just be empty/null bytes */);

m.execute();
Then you could query like this:
OperationResult<ColumnList<PostInfo>> result = getKeyspace()
        .prepareQuery(CF_POSTS_TIMELINE)
        .getKey("all" /* or whatever makes sense, like month, year, etc. */)
        .withColumnRange(new RangeBuilder()
                .setLimit(5)
                .setReversed(true)
                .build())
        .execute();

ColumnList<PostInfo> columns = result.getResult();
for (Column<PostInfo> column : columns) {
    // do what you need here
}
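Inside that loop the post data lives in the composite column name itself; a rough sketch of reading it back (the direct field access assumes the calling code can see PostInfo's protected fields, otherwise add getters):

for (Column<PostInfo> column : columns) {
    PostInfo info = column.getName();   // the composite components carry the post data
    System.out.println(info.title + " by " + info.authorName + ", published " + info.published);
}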
