I am not able to map data that has a UDT type.
The table definition is the following.
CREATE TABLE IF NOT EXISTS members_data.Test(
priority int,
name text,
test_links list<frozen<TestLinks>>,
PRIMARY KEY(name)
);
The model is the following.
@JsonAutoDetect
@JsonSerialize
@Table(keyspace="members_data", caseSensitiveKeyspace=false, caseSensitiveTable=false, name="Test")
public class Test{
    @Column(name="name", caseSensitive=false)
    private String name;
    @Column(name="priority", caseSensitive=false)
    private int priority;
    @Frozen
    @Column(name="test_links", caseSensitive=false)
    private List<TestLinks> test_links;
    // getters and setters
}
@JsonAutoDetect
@JsonSerialize
@UDT(keyspace = "members_data", name = "Testlinks")
public class TestLinks {
    @Field(name = "test_link")
    private String test_link;
    @Field(name = "link_title")
    private String link_title;
    // getters and setters
}
The mapper usage:
MappingManager manager = new MappingManager(sessionManager.getSession());
manager.udtCodec(TestLinks.class);
Mapper<Test> mapper = manager.mapper(Test.class);
Result<Test> result = mapper.map(testResultSet);
Test test = result.one(); // the test object would be null here
The cassandra-driver-mapping version is 3.1.0.
The mapper is not throwing any error, but it is not mapping any data to the model either. Could someone tell me what the problem is?
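As a side note, one way to narrow this down (a rough diagnostic sketch, not a fix; it assumes testResultSet comes from a plain SELECT on members_data.Test) is to decode the UDT column straight from a raw Row with the registered codec, before any mapping happens:
// Diagnostic sketch: read one raw row and decode test_links via the registered UDT codec.
// Note that one() consumes a row, so run this against a separate ResultSet from the one you map.
Row row = testResultSet.one();
if (row != null) {
    List<TestLinks> links = row.getList("test_links", TestLinks.class);
    System.out.println("decoded links: " + links);
}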
I want to create a custom Lightning component to create new Case records, and I need to use a field set to include fields in the component. I need this for only one object. I have never used field sets, so I don't have any idea about them. It would be really great if you could share some sample code or a link for this.
You can use this utility class.
This is the wrapper class that holds the meta information about the fields:
public with sharing class DataTableColumns {
    @AuraEnabled
    public String label {get;set;}
    @AuraEnabled
    public String fieldName {get;set;}
    @AuraEnabled
    public String type {get;set;}

    public DataTableColumns(String label, String fieldName, String type){
        this.label = label;
        this.fieldName = fieldName;
        this.type = type;
    }
}
The class FieldSetHelper has a method getColumns() that returns the list of DataTableColumns wrappers containing the information about the field set columns:
public with sharing class FieldSetHelper {
    /*
     * @param String strObjectName : required. Object name to get the required field set from
     * @param String strFieldSetName : required. Field set name
     * @return List<DataTableColumns> list of columns in the specified field set
     */
    public static List<DataTableColumns> getColumns (String strObjectName, String strFieldSetName) {
        Schema.SObjectType SObjectTypeObj = Schema.getGlobalDescribe().get(strObjectName);
        Schema.DescribeSObjectResult DescribeSObjectResultObj = SObjectTypeObj.getDescribe();
        Schema.FieldSet fieldSetObj = DescribeSObjectResultObj.FieldSets.getMap().get(strFieldSetName);
        List<DataTableColumns> lstDataColumns = new List<DataTableColumns>();
        for (Schema.FieldSetMember eachFieldSetMember : fieldSetObj.getFields()) {
            String dataType = String.valueOf(eachFieldSetMember.getType()).toLowerCase();
            DataTableColumns datacolumns = new DataTableColumns(
                String.valueOf(eachFieldSetMember.getLabel()),
                String.valueOf(eachFieldSetMember.getFieldPath()),
                dataType);
            lstDataColumns.add(datacolumns);
        }
        return lstDataColumns;
    }
}
After you get all of that field set information, you can create the Lightning component dynamically.
Versions: Datastax Java driver 3.1.4, Cassandra 3.10
Consider the following table:
create table object_ta
(
objid bigint,
version_date timestamp,
objecttype ascii,
primary key (objid, version_date)
);
And a mapped class:
@Table(name = "object_ta")
public class ObjectTa
{
    @Column(name = "objid")
    private long objid;

    @Column(name = "version_date")
    private Instant versionDate;

    @Column(name = "objecttype")
    private String objectType;

    public ObjectTa()
    {
    }

    public ObjectTa(long objid)
    {
        this.objid = objid;
        this.versionDate = Instant.now();
    }

    public long getObjId()
    {
        return objid;
    }

    public void setObjId(long objid)
    {
        this.objid = objid;
    }

    public Instant getVersionDate()
    {
        return versionDate;
    }

    public void setVersionDate(Instant versionDate)
    {
        this.versionDate = versionDate;
    }

    public String getObjectType()
    {
        return objectType;
    }

    public void setObjectType(String objectType)
    {
        this.objectType = objectType;
    }
}
After creating a mapper for this class (mm is a MappingManager for the session on mykeyspace)
final Mapper<ObjectTa> mapper = mm.mapper(ObjectTa.class);
On calling
mapper.save(new ObjectTa(1));
I get
Query preparation failed: INSERT INTO mykeyspace.object_ta (objid,objid,version_date,objecttype) VALUES (?,?,?,?);:
com.datastax.driver.core.exceptions.InvalidQueryException: The column names contains duplicates
    at com.datastax.driver.core.Responses$Error.asException(Responses.java:136)
    at com.datastax.driver.core.SessionManager$4.apply(SessionManager.java:220)
    at com.datastax.driver.core.SessionManager$4.apply(SessionManager.java:196)
    at com.google.common.util.concurrent.Futures$ChainingListenableFuture.run(Futures.java:906)
    at com.google.common.util.concurrent.Futures$1$1.run(Futures.java:635)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
    at java.lang.Thread.run(Thread.java:745)
I am at a loss to understand why the duplicate objid is generated in the query.
Thank you in advance for pointers to the problem.
Clemens
I think it is because of the inconsistent use of case on the field name (objid) vs. the setters/getters (getObjId). If you rename getObjId and setObjId to getObjid and setObjid respectively, I believe it might work.
In a future release, the driver mapper will allow the user to be more explicit about whether setters/getters are used (JAVA-1310) and what the naming conventions are (JAVA-1316).
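For illustration, a minimal sketch of the renamed accessors (everything else in the class stays as above):
// With the accessor names matching the field's casing ("objid"), the mapper should
// see a single objid property rather than generating the column twice.
public long getObjid()
{
    return objid;
}

public void setObjid(long objid)
{
    this.objid = objid;
}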
I am using the DataStax Cassandra driver 3.1.2. I have created the following table in Cassandra and inserted a record.
CREATE TYPE memory ( capacity text );
create TABLE laptop ( id uuid primary key, model text, ram frozen<memory> );
select * from laptop ;
id | model | ram
--------------------------------------+---------------+-------------------
e55cba2b-0847-40d5-ad56-ae97e793dc3e | Dell Latitude | {capacity: '8gb'}
When I try to fetch the capacity field of the frozen type memory in Java using a Cassandra Accessor with the code below:
this.cluster = Cluster.builder().addContactPoint(node).withPort(port).build();
session = cluster.connect();
MappingManager manager = new MappingManager(session);
LaptopAccessor laptopAccessor = manager.createAccessor(LaptopAccessor.class);
Result<Laptop> cp = laptopAccessor.getOne(UUID.fromString("e55cba2b-0847-40d5-ad56-ae97e793dc3e"));
System.out.println(cp.one());
it shows that the ram field itself is null:
id = null model = null ram = null
I was expecting the mapper to create a ram instance while mapping, map the capacity field into it, and return the Laptop bean.
I have the following Accessor interface:
@Accessor
interface LaptopAccessor {
    @Query("SELECT ram.capacity FROM user_info.laptop where id=?")
    Result<Laptop> getOne(UUID id);
}
I have the following Java beans for the above table.
@Table(keyspace = "user_info", name = "laptop")
public class Laptop {
    private UUID id;
    private String model;
    private Memory ram;

    @PartitionKey
    public UUID getId() {
        return id;
    }
    public void setId(UUID id) {
        this.id = id;
    }
    public String getModel() {
        return model;
    }
    public void setModel(String model) {
        this.model = model;
    }
    @Frozen
    public Memory getRam() {
        return ram;
    }
    public void setRam(Memory ram) {
        this.ram = ram;
    }
    @Override
    public String toString() {
        return "id = " + id + " model = " + model + " ram = " + ram;
    }
}

@UDT(keyspace = "user_info", name = "memory")
public class Memory {
    private String capacity;

    @Field
    public String getCapacity() {
        return capacity;
    }
    public void setCapacity(String capacity) {
        this.capacity = capacity;
    }
    @Override
    public String toString() {
        return "capacity = " + capacity;
    }
}
The code works fine when I change the query to retrieve the entire ram UDT. Could somebody please tell me why the mapper doesn't work when I select a single field of the UDT in the query?
Doesn't Cassandra support this? Is there any workaround to fetch individual UDT fields?
I think the issue is the return type on your accessor:
@Accessor
interface LaptopAccessor {
    @Query("SELECT ram.capacity FROM user_info.laptop where id=?")
    Result<Laptop> getOne(UUID id);
}
Since your query only selects ram.capacity, all the driver gets back is a Row with a single String column named ram.capacity, which does not map to any field in Laptop.
Instead, since it looks like all you want is the one row matching that query, you could change your Accessor to:
@Accessor
interface LaptopAccessor {
    @Query("SELECT ram.capacity FROM user_info.laptop where id=?")
    ResultSet getOne(UUID id);
}
The accessor now returns a ResultSet, on which you can call one().getString(0) to get the capacity back. It's not ideal if you don't want to deal with ResultSet directly, but it works well.
You shouldn't really need the whole Laptop object anyway, since all you are requesting is a single field of a UDT, right?
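For completeness, a short usage sketch of the changed accessor (assuming the same MappingManager and id as in the question):
LaptopAccessor laptopAccessor = manager.createAccessor(LaptopAccessor.class);
ResultSet rs = laptopAccessor.getOne(UUID.fromString("e55cba2b-0847-40d5-ad56-ae97e793dc3e"));
Row row = rs.one();
// ram.capacity comes back as a single text column, so read it as a plain String
String capacity = (row == null) ? null : row.getString(0);
System.out.println("capacity = " + capacity);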
I have a User table and its corresponding POJO
@Table
public class User{
    @Column(name = "id")
    private String id;
    // lots of fields
    @Column(name = "address")
    @Frozen
    private Optional<Address> address;
    // getters and setters
}
@UDT
public class Address {
    @Field(name = "id")
    private String id;
    @Field(name = "country")
    private String country;
    @Field(name = "state")
    private String state;
    @Field(name = "district")
    private String district;
    @Field(name = "street")
    private String street;
    @Field(name = "city")
    private String city;
    @Field(name = "zip_code")
    private String zipCode;
    // getters and setters
}
I want to map the UDT "address" to an Optional<Address>.
Because I use "cassandra-driver-mapping:3.0.0-rc1" and "cassandra-driver-extras:3.0.0-rc1", there are lots of codecs I can use, for example the OptionalCodec.
I want to register it with the CodecRegistry and pass a TypeCodec to the OptionalCodec constructor.
But TypeCodec is an abstract class, so I can't instantiate it.
Does anyone have an idea how to instantiate an OptionalCodec?
Thank you, @Olivier Michallat. Your solution works!
But I was a little confused about how to register the OptionalCodec with the CodecRegistry.
You must initialize a session first.
Then pass the session to a MappingManager, get the correct TypeCodec, and register the codecs.
It's a little weird that you must initialize the session first in order to get the TypeCodec!?
Cluster cluster = Cluster.builder()
        .addContactPoints("127.0.0.1")
        .build();
Session session = cluster.connect(...);
cluster.getConfiguration()
        .getCodecRegistry()
        .register(new OptionalCodec(new MappingManager(session).udtCodec(Address.class)))
        .register(...);
// use the session to operate on the DB
The MappingManager has a method that will create the codec from the annotated class:
TypeCodec<Address> addressCodec = mappingManager.udtCodec(Address.class);
OptionalCodec<Address> optionalAddressCodec = new OptionalCodec(addressCodec);
codecRegistry.register(optionalAddressCodec);
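Once the codec is registered, the mapper should be able to read and write the Optional<Address> column like any other field. A minimal sketch (assumptions: the userId value is a placeholder, and id is taken to be the partition key of the User table):
// Assumes the OptionalCodec<Address> from above is already registered with the CodecRegistry.
Mapper<User> userMapper = mappingManager.mapper(User.class);
User user = userMapper.get(userId);              // userId: placeholder primary key value
Optional<Address> address = user.getAddress();   // empty Optional when the column is null
address.ifPresent(a -> System.out.println(a.getCity()));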
Not really an answer, but I hope it helps. I couldn't make Optional work with a UDT in Scala; however, List and Array work fine.
Here is a Scala solution for driver version 4.x:
val reg = session.getContext.getCodecRegistry
val yourTypeUdt: UserDefinedType = session.getMetadata.getKeyspace(keyspace).flatMap(_.getUserDefinedType("YOUR_TYPE")).get
val yourTypeCodec: TypeCodec[UdtValue] = reg.codecFor(yourTypeUdt)
reg.asInstanceOf[MutableCodecRegistry].register(TypeCodecs.listOf(yourTypeCodec))
Don't forget to use java.util.* collections instead of your normal Scala types.
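For Java users, roughly the same registration against driver 4.x looks like the sketch below (keyspace and "YOUR_TYPE" are placeholders carried over from the Scala snippet; the types come from com.datastax.oss.driver.api.core):
// Look up the UDT metadata, grab the driver's codec for it (which works on UdtValue),
// and register a list codec built on top of it.
CodecRegistry registry = session.getContext().getCodecRegistry();
UserDefinedType yourTypeUdt = session.getMetadata()
        .getKeyspace(keyspace)
        .flatMap(ks -> ks.getUserDefinedType("YOUR_TYPE"))
        .orElseThrow(() -> new IllegalStateException("UDT YOUR_TYPE not found"));
TypeCodec<UdtValue> udtCodec = registry.codecFor(yourTypeUdt);
((MutableCodecRegistry) registry).register(TypeCodecs.listOf(udtCodec));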
I can't make this query work:
Query query = eManager.createQuery("select c FROM News c WHERE c.NEWSID = :id",News.class);
return (News)query.setParameter("id", newsId).getSingleResult();
and I got this exception:
Exception Description: Problem compiling [select c FROM News c WHERE c.NEWSID = :id].
[27, 35] The state field path 'c.NEWSID' cannot be resolved to a valid type.] with root cause
Local Exception Stack:
Exception [EclipseLink-0] (Eclipse Persistence Services - 2.5.0.v20130507-3faac2b): org.eclipse.persistence.exceptions.JPQLException
Exception Description: Problem compiling [select c FROM News c WHERE c.NEWSID = :id].
Why does this happen?
The :id placeholder and the named parameter are identical.
EDIT:
My entity class:
@Entity
@Table(name="NEWS")
public class News implements Serializable{
    @Id
    @SequenceGenerator(name = "news_seq_gen", sequenceName = "news_seq")
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "news_seq_gen")
    private int newsId;
    private String newsTitle;
    private String newsBrief;
    private String newsContent;
    private Date newsDate;
    @Transient
    private boolean selected = false;
    // constructor and getters and setters
}
That happens because the News entity does not have a persistent attribute named NEWSID. Names of persistent attributes are case sensitive in JPQL queries and must be written with exactly the same case as they appear in the entity.
Because the entity has a persistent attribute named newsId, that is what should be used in the query instead of NEWSID:
select c FROM News c WHERE c.newsId = :id
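For example, with the corrected attribute name, the original call from the question becomes (a sketch reusing the question's eManager and newsId variables):
TypedQuery<News> query = eManager.createQuery(
        "select c FROM News c WHERE c.newsId = :id", News.class);
return query.setParameter("id", newsId).getSingleResult();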
The entity has a persistent attribute named newsId, but in the query you have used NEWSID. Try this instead:
select c FROM News c WHERE c.newsId = :id
My entity is:
@Entity
@Table(name = "TBL_PERSON_INFO")
public class Person implements Serializable {
    @Id
    @Column(name = "ID", nullable = false)
    private Integer id;
    @Column(name = "USER_ID", nullable = false)
    private Integer user_id;
    .
    .
    .
}
My query (JPQL) is:
String queryName = "from Person p where p.user_id = :user_id";
So I use it like this:
Object obj = null;
javax.persistence.Query query = em.createQuery(queryName);
query.setParameter("user_id", userId);
try {
    obj = query.getSingleResult();
}
catch (javax.persistence.NoResultException nre) {
    logger.error("javax.persistence.NoResultException: " + nre.getMessage());
}
catch (javax.persistence.NonUniqueResultException nure) {
    logger.error("javax.persistence.NonUniqueResultException: " + nure.getMessage());
}
catch (Exception e) {
    e.printStackTrace();
}
if (obj == null) {
    System.out.println("obj is null!");
    return null;
}
Person person = (Person) obj;
It works ;-)