Getting "unknown type for RetryPolicy" error after upgrading Spring Data Cassandra to 2.0.7.RELEASE - spring-data-cassandra

I updated these lines of code to support spring-data-cassandra-2.0.7.RELEASE:
CassandraOperations cOps = new CassandraTemplate(session);
From:
Insert insertStatement = (Insert)statement;
CqlTemplate.addWriteOptions(insertStatement, queryWriteOptions);
cOps.execute(insertStatement);
To:
Insert insertStatement = (Insert)statement;
insertStatement = QueryOptionsUtil.addWriteOptions(insertStatement, queryWriteOptions);
cOps.insert(insertStatement);
The above changes throw the error below:
Caused by: org.springframework.dao.InvalidDataAccessApiUsageException: Unknown type [interface com.datastax.driver.core.policies.RetryPolicy] for property [retryPolicy] in entity [com.datastax.driver.core.querybuilder.Insert]; only primitive types and Collections or Maps of primitive types are allowed
at org.springframework.data.cassandra.core.mapping.BasicCassandraPersistentProperty.getDataType(BasicCassandraPersistentProperty.java:170)
at org.springframework.data.cassandra.core.mapping.CassandraMappingContext.lambda$null$10(CassandraMappingContext.java:552)
at java.util.Optional.orElseGet(Optional.java:267)
at org.springframework.data.cassandra.core.mapping.CassandraMappingContext.lambda$getDataTypeWithUserTypeFactory$11(CassandraMappingContext.java:542)
at java.util.Optional.orElseGet(Optional.java:267)
at org.springframework.data.cassandra.core.mapping.CassandraMappingContext.getDataTypeWithUserTypeFactory(CassandraMappingContext.java:527)
at org.springframework.data.cassandra.core.mapping.CassandraMappingContext.getDataType(CassandraMappingContext.java:486)
at org.springframework.data.cassandra.core.convert.MappingCassandraConverter.getPropertyTargetType(MappingCassandraConverter.java:689)
at org.springframework.data.cassandra.core.convert.MappingCassandraConverter.lambda$getTargetType$0(MappingCassandraConverter.java:682)
at java.util.Optional.orElseGet(Optional.java:267)
at org.springframework.data.cassandra.core.convert.MappingCassandraConverter.getTargetType(MappingCassandraConverter.java:670)
at org.springframework.data.cassandra.core.convert.MappingCassandraConverter.getWriteValue(MappingCassandraConverter.java:711)
at org.springframework.data.cassandra.core.convert.MappingCassandraConverter.writeInsertFromWrapper(MappingCassandraConverter.java:403)
at org.springframework.data.cassandra.core.convert.MappingCassandraConverter.writeInsertFromObject(MappingCassandraConverter.java:360)
at org.springframework.data.cassandra.core.convert.MappingCassandraConverter.write(MappingCassandraConverter.java:345)
at org.springframework.data.cassandra.core.convert.MappingCassandraConverter.write(MappingCassandraConverter.java:320)
at org.springframework.data.cassandra.core.QueryUtils.createInsertQuery(QueryUtils.java:78)
at org.springframework.data.cassandra.core.CassandraTemplate.insert(CassandraTemplate.java:442)
at org.springframework.data.cassandra.core.CassandraTemplate.insert(CassandraTemplate.java:430)
The query passed as input is of type com.datastax.driver.core.querybuilder.Insert and contains:
INSERT INTO person (name,id,age) VALUES ('name01','123',23) USING TIMESTAMP 1528922717378000 AND TTL 60;
Query options containing a RetryPolicy and a consistency level are also passed.
The changes above follow the documentation, but they are not working. Can anyone let me know what is wrong here?
I'm using Spring Data Cassandra 2.0.7.RELEASE with Cassandra driver 3.5.0.

I was able to make it work using the change below:
cOps.getCqlOperations().execute(insertStatement);
How can I check whether the consistency level was actually applied?
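One possible way to check (a sketch, assuming the DataStax 3.x driver API and that QueryOptionsUtil.addWriteOptions copies the consistency level onto the statement; queryWriteOptions is the WriteOptions instance from the question):
Insert insertStatement = (Insert) statement;
insertStatement = QueryOptionsUtil.addWriteOptions(insertStatement, queryWriteOptions);
// com.datastax.driver.core.Statement#getConsistencyLevel() returns the per-statement
// consistency level, or null if none was set on the statement itself.
System.out.println("Consistency level on statement: " + insertStatement.getConsistencyLevel());
cOps.getCqlOperations().execute(insertStatement);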

For me, this works:
batchOps.insert(ImmutableSet.of(entity), insertOptions);
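For context, a sketch of how such options might be built (assuming the Spring Data Cassandra 2.x builder API; the option values are placeholders, and insertOptions corresponds to the variable in the line above):
// org.springframework.data.cassandra.core.cql.WriteOptions (Spring Data Cassandra 2.x)
WriteOptions insertOptions = WriteOptions.builder()
        .consistencyLevel(ConsistencyLevel.LOCAL_QUORUM)  // com.datastax.driver.core.ConsistencyLevel
        .retryPolicy(DefaultRetryPolicy.INSTANCE)         // com.datastax.driver.core.policies.DefaultRetryPolicy
        .ttl(60)
        .build();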

Related

EsHadoopIllegalArgumentException: invalid map received dynamic=strict errors on elasticsearch-hadoop

I'm trying with both the DataFrame API and the RDD API:
val map = collection.mutable.Map[String, String]()
map("es.nodes.wan.only") = "true"
map("es.port") = "reducted"
map("es.net.http.auth.user") = "reducted"
map("es.net.http.auth.pass") = "reducted"
map("es.net.ssl") = "true"
map("es.mapping.date.rich") = "false"
map("es.read.field.include") = "data_scope_id"
map("es.nodes") = "reducted"
val rdd = sc.esRDD("index name", map)
rdd.take(1)
But whatever I try, I get this error:
EsHadoopIllegalArgumentException: invalid map received dynamic=strict
I've tried limiting the fields being read with es.read.field.include, but even if I choose one field which I'm sure doesn't have any variant, I still get this error.
How can I work around this? I'd be glad for any advice.
Versions:
eshadoop-7.13.4
client Spark 3.1.2
Scala 2.12
Clarification
This is about reading from Elasticsearch in Spark, not indexing.
So if I understand correctly, your aim is to index the values in map to index name.
TL;DR
Update the mapping of your index to allow new fields to be indexed. As of now the value of dynamic is strict, which does not allow new fields and throws an exception.
PUT /index name/_mapping
{
  "dynamic": true
}
To understand
The issue is with the mapping of your index.
There is a setting called dynamic (https://www.elastic.co/guide/en/elasticsearch/reference/current/dynamic.html) on the mapping of your index.
I bet it is set to strict, which according to the doc:
If new fields are detected, an exception is thrown and the document is rejected. New fields must be explicitly added to the mapping.
So my understanding is that you have one or more fields that are new in your document.
Either:
Fix the document
Fix the mapping
Switch dynamic to true, false or runtime according to your needs

DataStax Cassandra driver seems to cache PreparedStatements

When my application runs for a long time, everything works well. But when I change a column's type from int to text (drop the table and recreate it), I get an exception:
com.datastax.oss.driver.api.core.type.codec.CodecNotFoundException: Codec not found for requested operation: [INT <-> java.lang.String]
at com.datastax.oss.driver.internal.core.type.codec.registry.CachingCodecRegistry.createCodec(CachingCodecRegistry.java:609)
at com.datastax.oss.driver.internal.core.type.codec.registry.DefaultCodecRegistry$1.load(DefaultCodecRegistry.java:95)
at com.datastax.oss.driver.internal.core.type.codec.registry.DefaultCodecRegistry$1.load(DefaultCodecRegistry.java:92)
at com.datastax.oss.driver.shaded.guava.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3527)
at com.datastax.oss.driver.shaded.guava.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2276)
at com.datastax.oss.driver.shaded.guava.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2154)
at com.datastax.oss.driver.shaded.guava.common.cache.LocalCache$Segment.get(LocalCache.java:2044)
at com.datastax.oss.driver.shaded.guava.common.cache.LocalCache.get(LocalCache.java:3951)
at com.datastax.oss.driver.shaded.guava.common.cache.LocalCache.getOrLoad(LocalCache.java:3973)
at com.datastax.oss.driver.shaded.guava.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4957)
at com.datastax.oss.driver.shaded.guava.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4963)
at com.datastax.oss.driver.internal.core.type.codec.registry.DefaultCodecRegistry.getCachedCodec(DefaultCodecRegistry.java:117)
at com.datastax.oss.driver.internal.core.type.codec.registry.CachingCodecRegistry.codecFor(CachingCodecRegistry.java:215)
at com.datastax.oss.driver.api.core.data.SettableByIndex.set(SettableByIndex.java:132)
at com.datastax.oss.driver.api.core.data.SettableByIndex.setString(SettableByIndex.java:338)
This exception appears occasionally. I'm using PreparedStatements to execute the queries, and I think they are cached by the DataStax driver.
I'm using AWS Keyspaces (Cassandra version 3.11.2) with DataStax driver 4.6.
Here is my application.conf:
datastax-java-driver {
  basic.request {
    timeout = 5 seconds
    consistency = LOCAL_ONE
  }
  advanced.connection {
    max-requests-per-connection = 1024
    pool {
      local.size = 1
      remote.size = 1
    }
  }
  advanced.reconnect-on-init = true
  advanced.reconnection-policy {
    class = ExponentialReconnectionPolicy
    base-delay = 1 second
    max-delay = 60 seconds
  }
  advanced.retry-policy {
    class = DefaultRetryPolicy
  }
  advanced.protocol {
    version = V4
  }
  advanced.heartbeat {
    interval = 30 seconds
    timeout = 1 second
  }
  advanced.session-leak.threshold = 8
  advanced.metadata.token-map.enabled = false
}
Yes, Java driver 4.x caches prepared statements - that's a difference from driver 3.x. From the documentation:
the session has a built-in cache, it’s OK to prepare the same string twice.
...
Note that caching is based on: the query string exactly as you provided it: the driver does not perform any kind of trimming or sanitizing.
I'm not 100% sure about the source code, but the relevant cache entries may not be cleared when the table is dropped. I suggest opening a JIRA against the Java driver. That said, such type changes are often not really recommended - it's better to introduce a new field with the new type, even if re-creating the table is possible.
That's correct. Prepared statements are cached -- it's the optimisation that makes prepared statements more efficient when they are reused, since they only need to be prepared once (the query doesn't need to be parsed again).
But I suspect that the underlying issue in your case is that your queries involve SELECT *. The best practice recommendation (regardless of the database you're using) is to explicitly enumerate the columns you are retrieving from the table.
In the prepared statement, each column is bound to a data type. When you alter the schema by adding/dropping columns, the order of the columns (and their data types) no longer matches the result set, so you end up in situations where the driver gets an int when it's expecting a text, or vice versa. Cheers!
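As an illustration, a sketch against the 4.x OSS driver (assuming a CqlSession named session; the table and column names come from the earlier example and are placeholders):
// Preparing with an explicit column list keeps the shape of the result set stable.
PreparedStatement ps = session.prepare(
        "SELECT id, name, age FROM person WHERE id = ?");
BoundStatement bound = ps.bind("123");
ResultSet rs = session.execute(bound);
// Note: if the table is dropped and recreated with different column types, the driver's
// cached PreparedStatement still refers to the old metadata, so the statement has to be
// prepared again (in practice, by restarting the application or creating a new session).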

Cassandra UDF with Java throwing error on usage of stream

I am trying to write a UDF in Cassandra using Java.
I have two lists of integers with a one-to-one mapping, i.e., the first item in the first list corresponds to the first item in the second list. This is how my data is stored in the Cassandra table, under two columns of list type.
My UDF
CREATE OR REPLACE FUNCTION mykeyspace.get_thread(list1 List<int>, list2 List<int>)
CALLED ON NULL INPUT
RETURNS List<int>
LANGUAGE java
AS '
Map<Integer, Integer> intermediatemap = new HashMap<Integer, Integer>();
for (int i = 0; i < list1.size(); i++) {
if (!intermediatemap.containsKey(i))
intermediatemap.put(list1.get(i), list2.get(i));
}
List<Integer> commentids = new ArrayList<Integer>();
intermediatemap.entrySet().stream()
.sorted(Map.Entry.<Integer, Integer>comparingByValue().reversed())
.forEachOrdered(x -> commentids.add(x.getKey()));
if (commentids.size() > 8)
return commentids.subList(0, 8);
else
return commentids;
';
This works fine as plain Java code. However, when I execute it in cqlsh I get an error saying:
InvalidRequest: Error from server: code=2200 [Invalid query] message="Java source compilation failed:
Line 8: The type java.util.stream.Stream cannot be resolved. It is indirectly referenced from required .class files
Line 8: The method stream() from the type Collection<Map.Entry<Integer,Integer>> refers to the missing type Stream
Is there a problem with the Cassandra version I am using? I am running Cassandra on my local Mac inside Docker. I have tried 3.11.2 and the image with the latest tag.
Also, UDFs are enabled and I am able to run simple UDFs.
You can only use classes that are explicitly defined in the allowed classes list inside Cassandra, and that are not in the disallowed list. If you look at the source code for Cassandra 3.11.2, you can see that java.util.stream is in the disallowed list, so you can't use it inside a UDF/UDA.
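As a sketch, the stream pipeline in the function body could be replaced with plain java.util calls like the following (this assumes intermediatemap has been built as in the question and only swaps out the stream-based sorting step; whether every java.util method is permitted by the UDF sandbox would still need to be verified):
// Sort the entries by value, descending, without using java.util.stream
List<Map.Entry<Integer, Integer>> entries =
        new ArrayList<Map.Entry<Integer, Integer>>(intermediatemap.entrySet());
Collections.sort(entries, Collections.reverseOrder(Map.Entry.<Integer, Integer>comparingByValue()));
List<Integer> commentids = new ArrayList<Integer>();
for (Map.Entry<Integer, Integer> e : entries) {
    commentids.add(e.getKey());
}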

Error binding OffsetDateTime [operator does not exist: timestamp with time zone <= character varying]

We are trying to execute DML which deletes records based on a ZonedDateTime. We are using the following code but running into an error.
dsl.execute ("delete from fieldhistory where createddate <= ? and object = ?", beforeDate.toOffsetDateTime(), objName)
where beforeDate is a ZonedDateTime and objName is a String.
We are getting the following error from Postgres.
org.jooq.exception.DataAccessException: SQL [delete from fieldhistory where createddate <= ? and object = ?]; ERROR: operator does not exist: timestamp with time zone <= character varying
Hint: No operator matches the given name and argument types. You might need to add explicit type casts.
Position: 56
at org.jooq_3.13.1.POSTGRES.debug(Unknown Source)
at org.jooq.impl.Tools.translate(Tools.java:2751)
at org.jooq.impl.DefaultExecuteContext.sqlException(DefaultExecuteContext.java:755)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:385)
at org.jooq.impl.DefaultDSLContext.execute(DefaultDSLContext.java:1144)
The question is, how do we bind a datetime value in jOOQ?
For historic reasons, jOOQ binds all JSR-310 times as strings, not as the relevant object type. This is because until recently, JDBC drivers did not support the JSR-310 types natively, and as such, using a string was not a bad default.
Unfortunately, this leads to type ambiguities, which you would not have if either:
jOOQ didn't bind a string
you were using the code generator and thus type safe DSL API methods
As a workaround, you can do a few things, including:
Casting your bind variable explicitly
dsl.execute("delete from fieldhistory where createddate <= ?::timestamptz and object = ?",
beforeDate.toOffsetDateTime(),
objName)
Using the DSL API
dsl.deleteFrom(FIELDHISTORY)
.where(FIELDHISTORY.CREATEDDATE.le(beforeDate.toOffsetDateTime()))
.and(FIELDHISTORY.OBJECT.eq(objName))
.execute();
By writing your own binding
You can write your own data type binding and attach that to generated code, or to your plain SQL query, in case of which you would be in control of how the bind variable is sent to the JDBC driver. See:
https://www.jooq.org/doc/latest/manual/sql-building/queryparts/custom-bindings/
For example:
DataType<OffsetDateTime> myType = SQLDataType.OFFSETDATETIME
.asConvertedDataType(new MyBinding());
dsl.execute ("delete from fieldhistory where createddate <= {0} and object = {1}",
val(beforeDate.toOffsetDateTime(), myType),
val(objName))
There will be a fix in the future for this, so this won't be necessary anymore: https://github.com/jOOQ/jOOQ/issues/9902

How to read the schema of a keyspace using Java?

I want to read the schema of a keyspace in Cassandra.
I know that in cassandra-cli we can execute the following command to get the schema:
show schema keyspace1;
But I want to read the schema from a remote machine using Java.
How can I solve this? Please help.
I solved this using the Thrift client:
KsDef keyspaceDefinition = _client.describe_keyspace(_keyspace);
List<CfDef> columnDefinition = keyspaceDefinition.getCf_defs();
Here the keyspace definition contains the whole schema details, so from that KsDef we can read whatever we want. In my case I want to read metadata, so I am reading column metadata from the above column definitions as shown below.
for(int i=0;i<columnDefinition.size();i++){
List<ColumnDef> columnMetadata = columnDefinition.get(i).getColumn_metadata();
for(int j=0;j<columnMetadata.size();j++){
columnfamilyNames.add(columnDefinition.get(i).getName());
columnNames.add(new String((columnMetadata.get(j).getName())));
validationClasses.add(columnMetadata.get(j).getValidation_class());
//ar.add(coldef.get(i).getName()+"\t"+bb_to_str(colmeta.get(j).getName())+"\t"+colmeta.get(j).getValidationClass());
}
}
Here columnfamilyNames, columnNames and validationClasses are ArrayLists.
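As an aside, the Thrift API is deprecated in newer Cassandra versions; the same information is exposed through the DataStax Java driver's schema metadata. A sketch (assuming driver 3.x; the contact point and keyspace name are placeholders):
// DataStax Java driver 3.x: read schema metadata for a keyspace
Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
KeyspaceMetadata keyspace = cluster.getMetadata().getKeyspace("keyspace1");
for (TableMetadata table : keyspace.getTables()) {
    for (ColumnMetadata column : table.getColumns()) {
        System.out.println(table.getName() + "\t" + column.getName() + "\t" + column.getType());
    }
}
// Or export the whole keyspace schema as a CQL string:
String schemaAsCql = keyspace.exportAsString();
cluster.close();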
