Error using CQL prepared statement in Cassandra (Astyanax)

I was experimenting with Astyanax write operations, and was using CQL for that.
If I simply use the query directly, I am able to get the result:
keyspace.prepareQuery(COLUMN_FAMILY)
        .withCql("insert into scan_request (customer_id, uuid, scan_type, status, content_size, request_time, request_content_hash) "
                + "values (" + dao.getCustomerId() + ", '" + dao.getUuId() + "', " + dao.getScanType() + ", "
                + dao.getStatus() + ", " + dao.getContentSize() + ", '2012-12-12 12:12:12', '" + dao.getRequestContentHash() + "');")
        .execute();
However, if I use a prepared statement to do the same:
this.getKeyspace()
.prepareQuery(COLUMN_FAMILY)
.withCql(INSERT_STATEMENT)
.asPreparedStatement()
.withIntegerValue(dao.getCustomerId())
.withStringValue(dao.getUuId())
.withIntegerValue(dao.getScanType())
.withIntegerValue(dao.getStatus())
.withIntegerValue(dao.getContentSize())
.withStringValue("'2012-12-12 12:12:12'")
.withStringValue(dao.getRequestContentHash())
.execute();
I get the below error:
Exception in thread "main" java.lang.RuntimeException: failed to write data to C*
at com.tools.dbaccess.cassandra.astyanax.AstyanaxClient.write(AstyanaxClient.java:155)
at com.tools.dbaccess.cassandra.astyanax.AstyanaxClient.main(AstyanaxClient.java:164)
Caused by: com.netflix.astyanax.connectionpool.exceptions.BadRequestException: BadRequestException: [host=localhost(127.0.0.1):9160, latency=11(11), attempts=1]InvalidRequestException(why:Expected 8 or 0 byte long for date (21))
at com.netflix.astyanax.thrift.ThriftConverter.ToConnectionPoolException(ThriftConverter.java:159)
at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:65)
at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:28)
at com.netflix.astyanax.thrift.ThriftSyncConnectionFactoryImpl$ThriftConnection.execute(ThriftSyncConnectionFactoryImpl.java:151)
at com.netflix.astyanax.connectionpool.impl.AbstractExecuteWithFailoverImpl.tryOperation(AbstractExecuteWithFailoverImpl.java:69)
at com.netflix.astyanax.connectionpool.impl.AbstractHostPartitionConnectionPool.executeWithFailover(AbstractHostPartitionConnectionPool.java:256)
at com.netflix.astyanax.thrift.AbstractThriftCqlQuery$3.execute(AbstractThriftCqlQuery.java:80)
at com.tools.dbaccess.cassandra.astyanax.AstyanaxClient.write(AstyanaxClient.java:144)
... 1 more
Caused by: InvalidRequestException(why:Expected 8 or 0 byte long for date (21))
at org.apache.cassandra.thrift.Cassandra$execute_prepared_cql3_query_result.read(Cassandra.java:41868)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at org.apache.cassandra.thrift.Cassandra$Client.recv_execute_prepared_cql3_query(Cassandra.java:1689)
at org.apache.cassandra.thrift.Cassandra$Client.execute_prepared_cql3_query(Cassandra.java:1674)
at com.netflix.astyanax.thrift.ThriftCql3Query.execute_prepared_cql_query(ThriftCql3Query.java:29)
at com.netflix.astyanax.thrift.AbstractThriftCqlQuery$3$1.internalExecute(AbstractThriftCqlQuery.java:92)
at com.netflix.astyanax.thrift.AbstractThriftCqlQuery$3$1.internalExecute(AbstractThriftCqlQuery.java:82)
at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:60)
I think it goes wrong when I try to store the timestamp into the table, but I didn't find anything adequate in the prepared statement API for storing a timestamp. The datatype of the field in the database is "timestamp".

I should have used
.withByteBufferValue(new Date(), DateSerializer.get()).
Or if you have a custom object to serialize, extend the AbstractSerializer class.
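For illustration, a minimal sketch of the corrected prepared statement, with the timestamp column bound through DateSerializer (com.netflix.astyanax.serializers.DateSerializer) rather than a quoted string; it reuses the INSERT_STATEMENT and dao from the question and is an assumption about how the fix slots in, not tested code:
this.getKeyspace()
    .prepareQuery(COLUMN_FAMILY)
    .withCql(INSERT_STATEMENT)
    .asPreparedStatement()
    .withIntegerValue(dao.getCustomerId())
    .withStringValue(dao.getUuId())
    .withIntegerValue(dao.getScanType())
    .withIntegerValue(dao.getStatus())
    .withIntegerValue(dao.getContentSize())
    // bind the timestamp column as a java.util.Date instead of a quoted string
    .withByteBufferValue(new Date(), DateSerializer.get())
    .withStringValue(dao.getRequestContentHash())
    .execute();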

Since what you get is a NullPointerException, have you checked whether some of your values are null?

Related

java.lang.ClassCastException: java.util.ArrayList cannot be cast to java.util.UUID exception with cassandra?

I have a Spring Boot Java application that talks to Cassandra.
However, one of my queries is failing.
public class ParameterisedListItemRepository {

    private final Session session;
    private PreparedStatement findByIds;

    public ParameterisedListItemRepository(Session session, Validator validator, ParameterisedListMsisdnRepository parameterisedListMsisdnRepository) {
        this.session = session;
        this.findByIds = session.prepare("SELECT * FROM mep_parameterisedListItem WHERE id IN ( :ids )");
    }

    public List<ParameterisedListItem> findAll(List<UUID> ids) {
        List<ParameterisedListItem> parameterisedListItemList = new ArrayList<>();
        BoundStatement stmt = this.findByIds.bind();
        stmt.setList("ids", ids);
        session.execute(stmt)
                .all()
                .stream()
                .map(parameterisedListItemMapper)
                .forEach(parameterisedListItemList::add);
        return parameterisedListItemList;
    }
}
The following is the stack trace:
java.lang.ClassCastException: java.util.ArrayList cannot be cast to java.util.UUID
at com.datastax.driver.core.TypeCodec$AbstractUUIDCodec.serialize(TypeCodec.java:1626)
at com.datastax.driver.core.AbstractData.setList(AbstractData.java:358)
at com.datastax.driver.core.AbstractData.setList(AbstractData.java:374)
at com.datastax.driver.core.BoundStatement.setList(BoundStatement.java:681)
at com.openmind.primecast.repository.ParameterisedListItemRepository.findAll(ParameterisedListItemRepository.java:128)
at com.openmind.primecast.repository.ParameterisedListItemRepository$$FastClassBySpringCGLIB$$46ffc15e.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:738)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:673)
at com.openmind.primecast.repository.ParameterisedListItemRepository$$EnhancerBySpringCGLIB$$b2db3c41.findAll(<generated>)
at com.openmind.primecast.service.impl.ParameterisedListItemServiceImpl.findByParameterisedList(ParameterisedListItemServiceImpl.java:102)
at com.openmind.primecast.web.rest.ParameterisedListItemResource.getParameterisedListItemsByParameterisedList(ParameterisedListItemResource.java:94)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Any idea what is going wrong? I know this query is the problem:
SELECT * FROM mep_parameterisedListItem WHERE id IN ( :ids )
Any idea how I can change the findAll function to achieve the query?
This is the table definition:
CREATE TABLE "Openmind".mep_parameterisedlistitem (
id uuid PRIMARY KEY,
data text,
msisdn text,
ordernumber int,
parameterisedlist uuid
) WITH COMPACT STORAGE;
Thank you.
Without knowing the table schema, my guess is that a change was made to the table so the schema no longer matches the bindings in the prepared statement.
A big part of the problem is your query with SELECT *. Our recommendation for best practice is to explicitly name all the columns you're retrieving from the table. By specifying the columns in your query, you avoid surprises when the table schema changes.
In this instance, either a new column was added or an old column was dropped. With the cached prepared statement, it was expecting one column type and got another -- the ArrayList doesn't match UUID.
The solution is to re-prepare the statement and name all the columns. Cheers!
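For illustration, a hedged sketch of a re-prepared statement that names the columns explicitly (column names taken from the table definition above). Note that the IN marker is written here without parentheses so that a whole List<UUID> can be bound to it; that detail is an assumption about the driver's bind semantics, not something stated in the answer:
// re-prepare with the columns named explicitly
PreparedStatement findByIds = session.prepare(
        "SELECT id, data, msisdn, ordernumber, parameterisedlist "
        + "FROM mep_parameterisedlistitem WHERE id IN :ids");

// bind the whole List<UUID> to the single IN marker
BoundStatement stmt = findByIds.bind().setList("ids", ids, UUID.class);
List<Row> rows = session.execute(stmt).all();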

Error binding OffsetDateTime [operator does not exist: timestamp with time zone <= character varying]

We are trying to execute DML which deletes records based on a ZonedDateTime. We are using the following code but running into an error.
dsl.execute ("delete from fieldhistory where createddate <= ? and object = ?", beforeDate.toOffsetDateTime(), objName)
where beforeDate is a ZonedDateTime and objName is a String.
We are getting the following error from Postgres:
org.jooq.exception.DataAccessException: SQL [delete from fieldhistory where createddate <= ? and object = ?]; ERROR: operator does not exist: timestamp with time zone <= character varying
Hint: No operator matches the given name and argument types. You might need to add explicit type casts.
Position: 56
at org.jooq_3.13.1.POSTGRES.debug(Unknown Source)
at org.jooq.impl.Tools.translate(Tools.java:2751)
at org.jooq.impl.DefaultExecuteContext.sqlException(DefaultExecuteContext.java:755)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:385)
at org.jooq.impl.DefaultDSLContext.execute(DefaultDSLContext.java:1144)
The question is: how do we bind a datetime value in jOOQ?
For historic reasons, jOOQ binds all JSR-310 times as strings, not as the relevant object type. This is because until recently, JDBC drivers did not support the JSR-310 types natively, and as such, using a string was not a bad default.
Unfortunately, this leads to type ambiguities, which you would not have if either:
jOOQ didn't bind a string
you were using the code generator and thus type safe DSL API methods
As a workaround, you can do a few things, including:
Casting your bind variable explicitly
dsl.execute("delete from fieldhistory where createddate <= ?::timestamptz and object = ?",
beforeDate.toOffsetDateTime(),
objName)
Using the DSL API
dsl.deleteFrom(FIELDHISTORY)
.where(FIELDHISTORY.CREATEDDATE.lt(beforeDate.toOffsetDateTime()))
.and(FIELDHISTORY.OBJECT.eq(objName))
.execute();
By writing your own binding
You can write your own data type binding and attach that to generated code, or to your plain SQL query, in case of which you would be in control of how the bind variable is sent to the JDBC driver. See:
https://www.jooq.org/doc/latest/manual/sql-building/queryparts/custom-bindings/
For example:
DataType<OffsetDateTime> myType = SQLDataType.OFFSETDATETIME
.asConvertedDataType(new MyBinding());
dsl.execute ("delete from fieldhistory where createddate <= {0} and object = {1}",
val(beforeDate.toOffsetDateTime(), myType),
val(objName))
There will be a fix in the future for this, so this won't be necessary anymore: https://github.com/jOOQ/jOOQ/issues/9902

I am getting TraceRetrievalException while reading data from Cassandra

I am trying to read a particular piece of data from the database, and I execute the query to get the result. I am running the code in a while (true) loop, so the operation is performed again and again.
In some of the iterations I get the result, but in some cases it throws com.datastax.driver.core.exceptions.TraceRetrievalException and I couldn't get the execution time for the query.
Cassandra version: 3.11.4, DataStax driver version: 3.7
Exception thrown:
com.datastax.driver.core.exceptions.TraceRetrievalException: Unable to retrieve complete query trace for id bf7902b0-5d26-11e9-befd-d97cd54dc732 after 5 tries
at com.datastax.driver.core.QueryTrace.doFetchTrace(QueryTrace.java:242)
at com.datastax.driver.core.QueryTrace.maybeFetchTrace(QueryTrace.java:176)
at com.datastax.driver.core.QueryTrace.getDurationMicros(QueryTrace.java:105)
at com.cassandra.datastax.DatastaxTestBootstrapper.lambda$0(DatastaxTestBootstrapper.java:227)
at java.util.Collections$SingletonList.forEach(Collections.java:4822)
at com.cassandra.datastax.DatastaxTestBootstrapper.readData(DatastaxTestBootstrapper.java:227)
at com.cassandra.datastax.MultipleThread.run(MultipleThread.java:13)
at java.lang.Thread.run(Thread.java:748)
Is there any way I can overcome this exception, or can I increase the number of retries to retrieve the complete query trace?
// query with tracing enabled
Statement statement1 = new SimpleStatement(
        "SELECT * FROM keyspace.table where key='405861500500033'").enableTracing();
ResultSet resultSet = session.execute(statement1);

// print the traced duration of each execution
resultSet.getAllExecutionInfo()
        .forEach(e -> System.out.println("time : " + e.getQueryTrace().getDurationMicros()));

Getting unknown type for RetryPolicy error after upgrading spring data cassandra to latest 2.0.7.RELEASE

I updated these lines of code to support spring-data-cassandra-2.0.7.RELEASE:
CassandraOperations cOps = new CassandraTemplate(session);
From:
Insert insertStatement = (Insert)statement;
CqlTemplate.addWriteOptions(insertStatement, queryWriteOptions);
cOps.execute(insertStatement);
To:
Insert insertStatement = (Insert) statement;
insertStatement = QueryOptionsUtil.addWriteOptions(insertStatement, queryWriteOptions);
cOps.insert(insertStatement);
The above changes are throwing the below error:
Caused by: org.springframework.dao.InvalidDataAccessApiUsageException: Unknown type [interface com.datastax.driver.core.policies.RetryPolicy] for property [retryPolicy] in entity [com.datastax.driver.core.querybuilder.Insert]; only primitive types and Collections or Maps of primitive types are allowed
at org.springframework.data.cassandra.core.mapping.BasicCassandraPersistentProperty.getDataType(BasicCassandraPersistentProperty.java:170)
at org.springframework.data.cassandra.core.mapping.CassandraMappingContext.lambda$null$10(CassandraMappingContext.java:552)
at java.util.Optional.orElseGet(Optional.java:267)
at org.springframework.data.cassandra.core.mapping.CassandraMappingContext.lambda$getDataTypeWithUserTypeFactory$11(CassandraMappingContext.java:542)
at java.util.Optional.orElseGet(Optional.java:267)
at org.springframework.data.cassandra.core.mapping.CassandraMappingContext.getDataTypeWithUserTypeFactory(CassandraMappingContext.java:527)
at org.springframework.data.cassandra.core.mapping.CassandraMappingContext.getDataType(CassandraMappingContext.java:486)
at org.springframework.data.cassandra.core.convert.MappingCassandraConverter.getPropertyTargetType(MappingCassandraConverter.java:689)
at org.springframework.data.cassandra.core.convert.MappingCassandraConverter.lambda$getTargetType$0(MappingCassandraConverter.java:682)
at java.util.Optional.orElseGet(Optional.java:267)
at org.springframework.data.cassandra.core.convert.MappingCassandraConverter.getTargetType(MappingCassandraConverter.java:670)
at org.springframework.data.cassandra.core.convert.MappingCassandraConverter.getWriteValue(MappingCassandraConverter.java:711)
at org.springframework.data.cassandra.core.convert.MappingCassandraConverter.writeInsertFromWrapper(MappingCassandraConverter.java:403)
at org.springframework.data.cassandra.core.convert.MappingCassandraConverter.writeInsertFromObject(MappingCassandraConverter.java:360)
at org.springframework.data.cassandra.core.convert.MappingCassandraConverter.write(MappingCassandraConverter.java:345)
at org.springframework.data.cassandra.core.convert.MappingCassandraConverter.write(MappingCassandraConverter.java:320)
at org.springframework.data.cassandra.core.QueryUtils.createInsertQuery(QueryUtils.java:78)
at org.springframework.data.cassandra.core.CassandraTemplate.insert(CassandraTemplate.java:442)
at org.springframework.data.cassandra.core.CassandraTemplate.insert(CassandraTemplate.java:430)
The query that is passed as input is of type com.datastax.driver.core.querybuilder.Insert, containing:
INSERT INTO person (name,id,age) VALUES ('name01','123',23) USING TIMESTAMP 1528922717378000 AND TTL 60;
The QueryOptions containing the RetryPolicy and consistency level are passed as well.
Based on the documentation I followed, the above changes are not working. Can anyone let me know what is wrong here?
I'm using Spring Data Cassandra 2.0.7.RELEASE with Cassandra driver 3.5.0.
I was able to get it working using the below change:
cOps.getCqlOperations().execute(insertStatement);
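For illustration, a minimal sketch of that workaround in context, reusing the statement and queryWriteOptions from the question (an assumption about how it slots into the original code, not tested):
Insert insertStatement = (Insert) statement;
// apply the write options to the statement, as in the original 'To:' snippet
insertStatement = QueryOptionsUtil.addWriteOptions(insertStatement, queryWriteOptions);
// execute through CqlOperations so the Insert is run as plain CQL
// instead of being mapped as an entity by CassandraTemplate.insert()
cOps.getCqlOperations().execute(insertStatement);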
How can I check whether the consistency level got applied?
For me, this works:
batchOps.insert(ImmutableSet.of(entity), insertOptions);

SchemaCrawler - Table name pattern can not be NULL or empty

I'm running the following command:
schemacrawler.cmd -server=mysql -database=prepaid -infolevel=minimum -command=list -loglevel=CONFIG -url=jdbc:mysql://127.0.0.1:3306/prepaid -u=root -schemas=prepaid
And I'm getting the following error:
Feb 22, 2017 5:11:48 PM us.fatehi.commandlineparser.CommandLineUtility logFullStackTrace
SEVERE: Exception retrieving table information: Table name pattern can not be NULL or empty.
schemacrawler.schemacrawler.SchemaCrawlerException: Exception retrieving table information: Table name pattern can not be NULL or empty.
at schemacrawler.crawl.SchemaCrawler.crawlTables(SchemaCrawler.java:739)
at schemacrawler.crawl.SchemaCrawler.crawl(SchemaCrawler.java:797)
at schemacrawler.tools.executable.BaseStagedExecutable.execute(BaseStagedExecutable.java:91)
at schemacrawler.tools.commandline.SchemaCrawlerCommandLine.execute(SchemaCrawlerCommandLine.java:129)
at schemacrawler.Main.main(Main.java:90)
Caused by: java.sql.SQLException: Table name pattern can not be NULL or empty.
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:545)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:513)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:505)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:479)
at com.mysql.cj.jdbc.DatabaseMetaData.getTables(DatabaseMetaData.java:3836)
at schemacrawler.crawl.TableRetriever.retrieveTables(TableRetriever.java:114)
at schemacrawler.crawl.SchemaCrawler.lambda$crawlTables$26(SchemaCrawler.java:570)
at schemacrawler.crawl.SchemaCrawler$$Lambda$41/1559122513.call(Unknown Source)
at sf.util.StopWatch.time(StopWatch.java:156)
at schemacrawler.crawl.SchemaCrawler.crawlTables(SchemaCrawler.java:567)
... 4 more
Please advise.
Please make sure to use the correct MySQL database connection URL, following the documentation on Driver/Datasource Class Names, URL Syntax and Configuration Properties for Connector/J. In particular, you need to set nullNamePatternMatchesAll=true like this:
schemacrawler.cmd -server=mysql -database=prepaid -infolevel=minimum -command=list -loglevel=CONFIG -url=jdbc:mysql://127.0.0.1:3306/prepaid?nullNamePatternMatchesAll=true -u=root -schemas=prepaid
Or, better, use SchemaCrawler's built-in support for MySQL, like this, which is much easier:
schemacrawler.cmd -server=mysql -host=127.0.0.1 -database=prepaid -infolevel=minimum -command=list -loglevel=CONFIG -u=root -schemas=prepaid
Sualeh Fatehi, SchemaCrawler
