Query Build using DataStax Driver - cassandra

I have my table as below,
create table contact(
    id uuid primary key,
    personName text,
    updatedTime timestamp
);
and am trying to execute the prepared statement below:
String query = "SELECT * FROM CONTACT WHERE personName IN (:personNameList) " +
"AND updatedTime > ':startTime' AND updatedTime < :endTime ALLOW FILTERING;";
SimpleStatement simpleStatement = SimpleStatement.builder(query)
.setConsistencyLevel(DefaultConsistencyLevel.QUORUM)
.build();
PreparedStatement preparedStatement = cqlSession.prepare(simpleStatement);
BoundStatement boundStatement = preparedStatement.bind();
List<String> personList = Arrays.asList("John", "Alex");
boundStatement.setString("startTime", "2020-08-16 14:44:32+0000"); // Issue with setting
boundStatement.setString("endTime", "2020-08-16 14:60:32+0000"); // Issue with setting
boundStatement.setList("personNameList", personList, String.class); // Codec not found for requested operation: [TEXT <-> java.util.List<java.lang.String>]
ResultSet execute = cqlSession.execute(boundStatement);
// List<Person> personList = // Mapping
As of driver 4.7.2, the object mapper works differently from what I'm used to, as far as I understand, and I couldn't find an answer on Google. Any suggestions?
<dependency>
<groupId>com.datastax.oss</groupId>
<artifactId>java-driver-mapper-processor</artifactId>
<version>4.7.2</version>
<scope>test</scope>
</dependency>

The object mapper has changed significantly in Java driver 4.x: it requires setting up a compile-time annotation processor to generate the auxiliary classes that are necessary for it to work. But you can still use it to convert a Row object into your POJO by declaring a method with the @GetEntity annotation in your DAO interface, like this:
@Dao
public interface PersonDao {
    @GetEntity
    Person asPerson(Row row);
}
But if you're going to use the object mapper, I would recommend using it for everything - in your case, you can declare a method with the @Query annotation and pass the parameters to bind.
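For instance, the query from the question could become a DAO method. This is only a sketch assuming the 4.x mapper dependency: the method name and the PagingIterable return type are illustrative, timestamps are bound as Instant rather than quoted strings, and whether IN on a non-key column is accepted depends on your Cassandra version.

```java
import java.time.Instant;
import java.util.List;

import com.datastax.oss.driver.api.core.PagingIterable;
import com.datastax.oss.driver.api.mapper.annotations.Dao;
import com.datastax.oss.driver.api.mapper.annotations.Query;

@Dao
public interface PersonDao {
    // Bind markers are filled from the method parameters; note that
    // :startTime is not quoted, and timestamps bind as Instant, not String.
    @Query("SELECT * FROM contact WHERE personName IN :personNames "
            + "AND updatedTime > :startTime AND updatedTime < :endTime ALLOW FILTERING")
    PagingIterable<Person> findByNamesBetween(List<String> personNames,
                                              Instant startTime,
                                              Instant endTime);
}
```

Person here would be the @Entity-annotated POJO; compiling with the -parameters flag lets the mapper match the method parameter names to the bind markers.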


Spring integration jdbc array update query spel expression

I'm trying to create an 'update-query' for my JdbcPollingChannelAdapter, given the following workflow:
Select 500 records of type A from the database
Update 1 row in another table with the values of the last record read (the one at position 500)
But I'm unable to sort it out; I've been trying to use Spring-EL to find the value.
By debugging, I reached the JdbcPollingChannelAdapter executeUpdateQuery method,
void executeUpdateQuery(Object obj) {
    SqlParameterSource updateParameterSource = this.sqlParameterSourceFactory.createParameterSource(obj);
    this.jdbcOperations.update(this.updateSql, updateParameterSource);
}
where Object obj is an ArrayList of the 500 records of type A.
This is my best attempt:
UPDATE LAST_EVENT_READ SET SEQUENCE=:#root[499].sequence, EVENT_DATE=:#[499].eventDate
Can anyone help me ?
P.S. Type A has sequence and eventDate attributes
I suggest using a custom SqlParameterSourceFactory instead of relying on SpEL:
public class CustomSqlParameterSourceFactory implements SqlParameterSourceFactory {

    @Override
    public SqlParameterSource createParameterSource(Object input) {
        List<?> objects = (List<?>) input;
        // Build the parameter source from the last record in the polled batch.
        return new BeanPropertySqlParameterSource(objects.get(objects.size() - 1));
    }
}
Inject this via JdbcPollingChannelAdapter.setUpdateSqlParameterSourceFactory() and then use plain property names in the UPDATE statement:
UPDATE LAST_EVENT_READ SET SEQUENCE=:sequence, EVENT_DATE=:eventDate
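The wiring might look roughly like this (a sketch only; the select query, the AdapterConfig class name, and the dataSource are placeholders, not from the question):

```java
import javax.sql.DataSource;

import org.springframework.integration.jdbc.JdbcPollingChannelAdapter;

public class AdapterConfig {

    JdbcPollingChannelAdapter lastEventAdapter(DataSource dataSource) {
        JdbcPollingChannelAdapter adapter =
                new JdbcPollingChannelAdapter(dataSource, "SELECT * FROM EVENTS");
        adapter.setUpdateSql(
                "UPDATE LAST_EVENT_READ SET SEQUENCE=:sequence, EVENT_DATE=:eventDate");
        // Replace the default SpEL-based parameter source factory with the custom one.
        adapter.setUpdateSqlParameterSourceFactory(new CustomSqlParameterSourceFactory());
        return adapter;
    }
}
```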

Servicestack - Ormlite - high volume data loading

I am running into some issues with ServiceStack's OrmLite in high-volume data loading scenarios.
Specifically:
1. I have a list of 1000 000 + entities
2. I would like to insert them into Db (using Sql Server) if record does not exist yet
Thus,
public class Entity
{
    [AutoIncrement]
    public int Id { get; set; }

    public string Name { get; set; }
    public string Address { get; set; }
}
Now for the import logic,
List<Entity> entities = oneMillionEntities.ToList();
foreach (var entity in entities)
{
    if (!db.Exists<Entity>(ar => ar.Address == entity.Address))
    {
        db.Save(entity);
    }
}
The issue is that the db is often still busy with the save action, so db.Exists does not always produce the correct result. What is the best way of handling these scenarios?
Try
// Prepare SqlExpression
var ev = Db.From<Entity>().Select(p => p.Address).GroupBy(p => p.Address);
// Execute SqlExpression and transform result to HashSet
var dbAddresses = Db.SqlList(ev).ToHashSet();
// Filter local entities and get only local entities with different addresses
var filteredEntities = oneMillionEntities.Where(p =>
!dbAddresses.Contains(p.Address));
// Bulk insert
db.InsertAll(filteredEntities.ToList());
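The same load-once / filter-locally / bulk-insert idea, reduced to plain Java collections (a hypothetical sketch, with an in-memory set standing in for the database query): the per-row Exists round trip is replaced by an O(1) HashSet lookup.

```java
import java.util.*;
import java.util.stream.*;

public class BulkInsertFilter {

    // Return only the addresses not already present in the database set.
    public static List<String> filterNew(Set<String> dbAddresses, List<String> incoming) {
        return incoming.stream()
                .filter(a -> !dbAddresses.contains(a)) // O(1) per lookup
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Stand-in for the one-time "SELECT Address ... GROUP BY Address" query.
        Set<String> dbAddresses = new HashSet<>(Arrays.asList("1 Main St", "2 Oak Ave"));
        List<String> incoming = Arrays.asList("1 Main St", "3 Elm Rd");

        // Only "3 Elm Rd" survives the filter and would go to the single bulk insert.
        System.out.println(filterNew(dbAddresses, incoming));
    }
}
```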

Using Custom Classes in Collection Types using DataStax's Java Driver

Is there a way to use our own types (classes we provide) in the collection types that we pass to the DataStax Java Driver through the BoundStatement.setSet(..) method, without using User Defined Types? Perhaps with the Object Mapping API?
And if so, when we create the table what should we use for the type in the SET (in other words: SET<????>)?
We're using the DataStax Java Driver v2.1.5 connected to DSE v4.6.0 (Cassandra 2.0.11.83).
Here's an example to illustrate:
CREATE TABLE teams (uid TEXT PRIMARY KEY, players SET<????>);
Our Type:
public class Player {
    private String name;

    public Player(String name) { this.name = name; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
Test:
public void testSets() {
    Set<Player> players = new HashSet<>();
    players.add(new Player("Nick"));
    players.add(new Player("Paul"));
    players.add(new Player("Scott"));

    PreparedStatement statement = session.prepare("INSERT INTO teams (uid, players) values (?,?);");
    BoundStatement boundStatement = new BoundStatement(statement);
    boundStatement.setString("uid", ...);
    boundStatement.setSet("players", players);
    session.execute(boundStatement);
}
Unfortunately you cannot do this with the driver or its object mapping API yet. There is an outstanding issue (JAVA-722), opened today, to handle this and other mapping-related inadequacies (related issues are linked there); it will be covered in the next release (2.1.6) of the driver.
It may also be worth taking a look at Achilles, which supports this through its Type Transformer.
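In the meantime, one common workaround (an illustrative sketch, not a driver or mapper API) is to declare the column as SET<text> and convert the custom type to its serialized form manually before binding:

```java
import java.util.*;

public class PlayerSetWorkaround {

    // The Player class from the question, reproduced to keep the example self-contained.
    public static class Player {
        private String name;
        public Player(String name) { this.name = name; }
        public String getName() { return name; }
    }

    // Convert the custom type to a driver-supported element type (text).
    public static Set<String> toNames(Set<Player> players) {
        Set<String> names = new HashSet<>();
        for (Player p : players) {
            names.add(p.getName());
        }
        return names;
    }

    public static void main(String[] args) {
        Set<Player> players = new HashSet<>();
        players.add(new Player("Nick"));
        players.add(new Player("Paul"));
        // The result can be bound with boundStatement.setSet("players", names).
        System.out.println(toNames(players));
    }
}
```

Reading rows back then requires the reverse conversion from Set<String> to Set<Player>.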

Hazelcast Aggregations API results in ClassCastException with Predicates

I'm using a Hazelcast IMap instance to hold objects like the following:
public class Report implements Portable, Comparable<Report>, Serializable
{
    private String id;
    private String name;
    private String sourceId;
    private Date timestamp;
    private Map<String, Object> payload;
    // ...
}
The IMap is keyed by the id, and I have also created an index on sourceId, as I need to query and aggregate based on that field.
IMap<String, Report> reportMap = hazelcast.getMap("reports");
reportMap.addIndex("sourceId", false);
I've been trying to use the Aggregations framework to count reports by sourceId. Attempt #1:
public static int reportCountforSource(String sourceId)
{
    EntryObject e = new PredicateBuilder().getEntryObject();
    Predicate<String, Report> predicate = e.get("sourceId").equal(sourceId);
    Supplier<String, Report, Object> supplier = Supplier.fromPredicate(predicate);
    Long count = reportMap.aggregate(supplier, Aggregations.count());
    return count.intValue();
}
This resulted in a ClassCastException being thrown by the Aggregations framework:
Caused by: java.lang.ClassCastException: com.hazelcast.mapreduce.aggregation.impl.SupplierConsumingMapper$SimpleEntry cannot be cast to com.hazelcast.query.impl.QueryableEntry
at com.hazelcast.query.Predicates$AbstractPredicate.readAttribute(Predicates.java:859)
at com.hazelcast.query.Predicates$EqualPredicate.apply(Predicates.java:779)
at com.hazelcast.mapreduce.aggregation.impl.PredicateSupplier.apply(PredicateSupplier.java:58)
at com.hazelcast.mapreduce.aggregation.impl.SupplierConsumingMapper.map(SupplierConsumingMapper.java:55)
at com.hazelcast.mapreduce.impl.task.KeyValueSourceMappingPhase.executeMappingPhase(KeyValueSourceMappingPhase.java:49)
I then changed to use Predicates instead of PredicateBuilder().getEntryObject() for Attempt #2:
public static int reportCountforSource(String sourceId)
{
    @SuppressWarnings("unchecked")
    Predicate<String, Report> predicate = Predicates.equal("sourceId", sourceId);
    Supplier<String, Report, Object> supplier = Supplier.fromPredicate(predicate);
    Long count = reportMap.aggregate(supplier, Aggregations.count());
    return count.intValue();
}
This resulted in the same ClassCastException.
Finally, I used a lambda to implement the Predicate interface in Attempt #3:
public static int reportCountforSource(String sourceId)
{
    Predicate<String, Report> predicate = (entry) -> entry.getValue().getSourceId().equals(sourceId);
    Supplier<String, Report, Object> supplier = Supplier.fromPredicate(predicate);
    Long count = reportMap.aggregate(supplier, Aggregations.count());
    return count.intValue();
}
This attempt finally works.
Question #1: Is this a bug in Hazelcast? It seems the Aggregations framework should support a Predicate constructed with either Predicates or PredicateBuilder. If not, a new type (e.g., AggregationPredicate) should be created to avoid this kind of confusion.
Question #2 (related to #1): Using the lambda Predicate results in the index I created not being used. Instead, each entry within the map is being deserialized to determine if it matches the Predicate, which slows things down quite a bit. Is there any way to create a Supplier from a Predicate that will use the index? (EDIT: I verified that each entry is being deserialized by putting a counter in the readPortable method).
This looks like a Hazelcast bug. I guess I never created a unit test for a Predicate built with PredicateBuilder. Can you please file an issue on GitHub?
Currently, indexes are not supported in map-reduce, whatever you try. The indexing system will be rewritten in the near future to also support non-primitive indexes, such as partial indexes.
Another thing that is not yet available is an optimized reader for Portable objects, which would prevent full deserialization.

ordering of Hashtable in J2ME

I have some data that I added to a Hashtable in a certain order, and now I want to get the data back in the same order I entered it.
What data type can I use?
Assuming your key is a String, you could prepend an ordering prefix to it and add a getter method for the sorted data. See the example below:
static int order;
Hashtable map = new Hashtable();

void put(String key, Object value) {
    // Zero-pad the counter so that lexicographic order matches insertion
    // order (otherwise "10" would sort before "2").
    String prefix = Integer.toString(order);
    while (prefix.length() < 6) {
        prefix = "0" + prefix;
    }
    map.put(prefix + key, value);
    order++;
}

Enumeration getSorted() {
    // Collect the keys and sort them with an insertion sort.
    Enumeration keys = map.keys();
    Vector sortedKeys = new Vector();
    while (keys.hasMoreElements()) {
        String key = (String) keys.nextElement();
        insertionSort(key, sortedKeys);
    }
    // Walk the sorted keys and collect the corresponding values.
    Vector sortedData = new Vector();
    keys = sortedKeys.elements();
    while (keys.hasMoreElements()) {
        String key = (String) keys.nextElement();
        sortedData.addElement(map.get(key));
    }
    return sortedData.elements();
}
You can find insertionSort algorithms at http://en.wikipedia.org/wiki/Insertion_sort
A Hashtable does not retain any ordering.
If you need insertion-order access, see whether LinkedHashMap is offered in Java ME.
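If LinkedHashMap is not available, a simple alternative (a sketch using only classes present in CLDC, such as Hashtable and Vector) is to record insertion order explicitly in a Vector of keys:

```java
import java.util.*;

public class OrderedTable {
    private final Hashtable map = new Hashtable();
    private final Vector keyOrder = new Vector(); // remembers insertion order

    public void put(Object key, Object value) {
        if (!map.containsKey(key)) {
            keyOrder.addElement(key); // record the key on first insertion only
        }
        map.put(key, value);
    }

    // Return the values in the order their keys were first inserted.
    public Vector valuesInOrder() {
        Vector values = new Vector();
        for (int i = 0; i < keyOrder.size(); i++) {
            values.addElement(map.get(keyOrder.elementAt(i)));
        }
        return values;
    }

    public static void main(String[] args) {
        OrderedTable t = new OrderedTable();
        t.put("b", "second");
        t.put("a", "first");
        System.out.println(t.valuesInOrder()); // insertion order, not hash order
    }
}
```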
You can take the source code of Java SE and make LinkedHashMap work in J2ME fairly easily by removing the generics (you may also need to do this on its parent classes and interfaces).
You can find LinkedHashMap for Java ME here
