SchemaCrawler: Table name pattern can not be NULL or empty

I'm running the following command:
schemacrawler.cmd -server=mysql -database=prepaid -infolevel=minimum -command=list -loglevel=CONFIG -url=jdbc:mysql://127.0.0.1:3306/prepaid -u=root -schemas=prepaid
And I'm getting the following error:
Feb 22, 2017 5:11:48 PM us.fatehi.commandlineparser.CommandLineUtility logFullStackTrace
SEVERE: Exception retrieving table information: Table name pattern can not be NULL or empty.
schemacrawler.schemacrawler.SchemaCrawlerException: Exception retrieving table information: Table name pattern can not be NULL or empty.
at schemacrawler.crawl.SchemaCrawler.crawlTables(SchemaCrawler.java:739)
at schemacrawler.crawl.SchemaCrawler.crawl(SchemaCrawler.java:797)
at schemacrawler.tools.executable.BaseStagedExecutable.execute(BaseStagedExecutable.java:91)
at schemacrawler.tools.commandline.SchemaCrawlerCommandLine.execute(SchemaCrawlerCommandLine.java:129)
at schemacrawler.Main.main(Main.java:90)
Caused by: java.sql.SQLException: Table name pattern can not be NULL or empty.
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:545)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:513)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:505)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:479)
at com.mysql.cj.jdbc.DatabaseMetaData.getTables(DatabaseMetaData.java:3836)
at schemacrawler.crawl.TableRetriever.retrieveTables(TableRetriever.java:114)
at schemacrawler.crawl.SchemaCrawler.lambda$crawlTables$26(SchemaCrawler.java:570)
at schemacrawler.crawl.SchemaCrawler$$Lambda$41/1559122513.call(Unknown Source)
at sf.util.StopWatch.time(StopWatch.java:156)
at schemacrawler.crawl.SchemaCrawler.crawlTables(SchemaCrawler.java:567)
... 4 more
=================
Please advise

Please make sure to use the correct MySQL database connection URL, following the documentation on Driver/Datasource Class Names, URL Syntax and Configuration Properties for Connector/J. In particular, you need to set nullNamePatternMatchesAll=true like this:
schemacrawler.cmd -server=mysql -database=prepaid -infolevel=minimum -command=list -loglevel=CONFIG -url=jdbc:mysql://127.0.0.1:3306/prepaid?nullNamePatternMatchesAll=true -u=root -schemas=prepaid
Or, better, use SchemaCrawler's built-in support for MySQL, like this, which is much easier:
schemacrawler.cmd -server=mysql -host=127.0.0.1 -database=prepaid -infolevel=minimum -command=list -loglevel=CONFIG -u=root -schemas=prepaid
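If you want to double-check the connection property outside SchemaCrawler, here is a minimal JDBC sketch (hypothetical class name; adjust the credentials) that makes the same DatabaseMetaData.getTables call with a null table name pattern that appears in the stack trace. Without nullNamePatternMatchesAll=true in the URL, this Connector/J version throws the same SQLException; with it, the call lists all tables:
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class NullPatternCheck {
    public static void main(String[] args) throws Exception {
        // The ?nullNamePatternMatchesAll=true parameter makes Connector/J treat a
        // null table name pattern as "match all tables" instead of throwing.
        String url = "jdbc:mysql://127.0.0.1:3306/prepaid?nullNamePatternMatchesAll=true";
        try (Connection connection = DriverManager.getConnection(url, "root", "password")) {
            DatabaseMetaData metaData = connection.getMetaData();
            // Same call pattern as TableRetriever.retrieveTables: null table name pattern.
            try (ResultSet tables = metaData.getTables(null, "prepaid", null, null)) {
                while (tables.next()) {
                    System.out.println(tables.getString("TABLE_NAME"));
                }
            }
        }
    }
}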
Sualeh Fatehi, SchemaCrawler

Related

What is the correct CSV format for tuples when loading data with DSBulk?

I recently started using Cassandra for a new project and am doing some load testing.
I have a scenario where I'm doing a dsbulk load using a CSV like this:
$ dsbulk load -url <csv path> -k <keyspace> -t <table> -h <host> -u <user> -p <password> -header true -cl LOCAL_QUORUM
My CSV file entries look like this:
userid birth_year created_at freq
1234 1990 2023-01-13T23:27:15.563Z {1234:{"(1, 2)": 1}}
Column types:
userid bigint PRIMARY KEY,
birth_year int,
created_at timestamp,
freq map<bigint, frozen<map<frozen<tuple<tinyint, smallint>>, smallint>>>
The issue is with the freq column: I've tried different ways of setting the value in the CSV, as shown below, but I'm not able to insert the row using dsbulk.
Let's say I set freq as {1234:{[1, 2]: 1}}; then I get:
com.datastax.oss.dsbulk.workflow.commons.schema.InvalidMappingException: Could not map field freq to variable freq; conversion from Java type java.lang.String to CQL type Map(BIGINT => Map(Tuple(TINYINT, SMALLINT) => SMALLINT, not frozen), not frozen) failed for raw value: {1234:{[1,2]: 1}}
Caused by: java.lang.IllegalArgumentException: Could not parse '{1234:{[1, 2]: 1}}' as Json
Caused by: com.fasterxml.jackson.core.JsonParseException: Unexpected character ('[' (code 91)): was expecting either valid name character (for unquoted name) or double-quote (for quoted) to start field name
at [Source: (String)"{1234:{[1, 2]: 1}}"; line: 1, column: 9]
If I set freq as {\"1234\":{\"[1, 2]\":1}},
java.lang.IllegalArgumentException: Expecting record to contain 4 fields but found 5.
If I set freq as {1234:{"[1, 2]": 1}} or {1234:{"(1, 2)": 1}},
Source: 1234,80,2023-01-13T23:27:15.563Z,"{1234:{""[1, 2]"": 1}}"
java.lang.IllegalArgumentException: Expecting record to contain 4 fields but found 5.
But with the COPY ... FROM command, the value {1234:{[1, 2]:1}} for freq inserts into the DB without any error, and the value in the DB looks like this: {1234: {(1, 2): 1}}
I guess the JSON parser is not accepting an array (tuple) as a key when I try it with dsbulk? Can someone advise what the issue is and how to fix it? Appreciate your help.
When loading data using the DataStax Bulk Loader (DSBulk), the CSV format for CQL tuple type is different from the format used by the COPY ... FROM command because DSBulk uses a different parser.
Formatting the CSV data is particularly challenging in your case because the column contains multiple nested CQL collections.
InvalidMappingException
The JSON parser used by DSBulk doesn't accept parentheses () when enclosing tuples. It also expects tuples to be enclosed in double quotes ("); otherwise you'll get errors like:
com.datastax.oss.dsbulk.workflow.commons.schema.InvalidMappingException: \
Could not map field ... to variable ...; \
conversion from Java type ... to CQL type ... failed for raw value: ...
...
Caused by: java.lang.IllegalArgumentException: Could not parse '...' as Json
...
Caused by: com.fasterxml.jackson.core.JsonParseException: \
Unexpected character ('(' (code 40)): was expecting either valid name character \
(for unquoted name) or double-quote (for quoted) to start field name
...
IllegalArgumentException
Since tuple values contain a comma (,) as a separator, DSBulk parses the rows incorrectly: it thinks each row contains more fields than expected and throws an IllegalArgumentException, for example:
java.lang.IllegalArgumentException: Expecting record to contain 2 fields but found 3.
Solution
Just to make it easier, here is the schema for the table I'm using as an example:
CREATE TABLE inttuples (
id int PRIMARY KEY,
inttuple map<frozen<tuple<tinyint, smallint>>, smallint>
)
In this example CSV file, I've used the pipe character (|) as a delimiter:
id|inttuple
1|{"[2,3]":4}
Here's another example that uses tabs as the delimiter:
id inttuple
1 {"[2,3]":4}
Note that you will need to specify the delimiter with either -delim '|' or -delim '\t' when running DSBulk. Cheers!
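For example, assuming the pipe-delimited file above is saved as /tmp/inttuples.csv and using placeholder keyspace and credentials, the load would look something like:
dsbulk load -url /tmp/inttuples.csv -k mykeyspace -t inttuples -h <host> -u <user> -p <password> -header true -delim '|'
Applying the same idea to your original freq column, the row should look something like this (an untested sketch based on the inttuples example above, since the pipe delimiter keeps the commas inside the JSON from being treated as field separators):
userid|birth_year|created_at|freq
1234|1990|2023-01-13T23:27:15.563Z|{1234:{"[1, 2]": 1}}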

Space in column name throws an exception when Parquet is used as the storage format

I am getting the error below while inserting data into a Parquet-format table with a column name that contains a space.
I am using the Hive client from a Cloudera distribution.
CREATE TABLE testColumNames(`First Name` string) stored as parquet;
insert into testColumNames select 'John Smith';
Is there any workaround to solve this issue? We got this error from Spark 2.3 code as well.
org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.IllegalArgumentException: field ended by ';': expected ';' but got 'name' at line 1: optional binary first name
at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:248)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketForFileIdx(FileSinkOperator.java:583)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketFiles(FileSinkOperator.java:527)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:636)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
at org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:98)
at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:157)
at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:497)
at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:170)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:459)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.IllegalArgumentException: field ended by ';': expected ';' but got 'name' at line 1: optional binary first name
at parquet.schema.MessageTypeParser.check(MessageTypeParser.java:212)
at parquet.schema.MessageTypeParser.addPrimitiveType(MessageTypeParser.java:185)
at parquet.schema.MessageTypeParser.addType(MessageTypeParser.java:111)
at parquet.schema.MessageTypeParser.addGroupTypeFields(MessageTypeParser.java:99)
at parquet.schema.MessageTypeParser.parse(MessageTypeParser.java:92)
at parquet.schema.MessageTypeParser.parseMessageType(MessageTypeParser.java:82)
at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.getSchema(DataWritableWriteSupport.java:43)
at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.init(DataWritableWriteSupport.java:48)
at parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:310)
at parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:287)
at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.<init>(ParquetRecordWriterWrapper.java:69)
at org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat.getParquerRecordWriterWrapper(MapredParquetOutputFormat.java:134)
at org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat.getHiveRecordWriter(MapredParquetOutputFormat.java:123)
at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getRecordWriter(HiveFileFormatUtils.java:260)
at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:245)
... 18 more
org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.IllegalArgumentException: field ended by ';': expected ';' but got 'name' at line 1: optional binary first name
at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:248)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketForFileIdx(FileSinkOperator.java:583)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketFiles(FileSinkOperator.java:527)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:974)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:598)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:610)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:610)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:610)
at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.close(ExecMapper.java:199)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:459)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.IllegalArgumentException: field ended by ';': expected ';' but got 'name' at line 1: optional binary first name
at parquet.schema.MessageTypeParser.check(MessageTypeParser.java:212)
at parquet.schema.MessageTypeParser.addPrimitiveType(MessageTypeParser.java:185)
at parquet.schema.MessageTypeParser.addType(MessageTypeParser.java:111)
at parquet.schema.MessageTypeParser.addGroupTypeFields(MessageTypeParser.java:99)
at parquet.schema.MessageTypeParser.parse(MessageTypeParser.java:92)
at parquet.schema.MessageTypeParser.parseMessageType(MessageTypeParser.java:82)
at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.getSchema(DataWritableWriteSupport.java:43)
at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.init(DataWritableWriteSupport.java:48)
at parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:310)
at parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:287)
at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.<init>(ParquetRecordWriterWrapper.java:69)
at org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat.getParquerRecordWriterWrapper(MapredParquetOutputFormat.java:134)
at org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat.getHiveRecordWriter(MapredParquetOutputFormat.java:123)
at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getRecordWriter(HiveFileFormatUtils.java:260)
at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:245)
... 16 more
Please refer to the URL below:
https://issues.apache.org/jira/browse/PARQUET-677
It seems this issue has not yet been resolved.
From the Hive documentation (https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL):
Table names and column names are case insensitive but SerDe and property names are case sensitive.
In Hive 0.12 and earlier, only alphanumeric and underscore characters are allowed in table and column names.
In Hive 0.13 and later, column names can contain any Unicode character (see HIVE-6013); however, dot (.) and colon (:) yield errors on querying, so they are disallowed in Hive 1.2.0 (see HIVE-10120). Any column name that is specified within backticks (`) is treated literally. Within a backtick string, use double backticks (``) to represent a backtick character. Backtick quotation also enables the use of reserved keywords for table and column identifiers.
To revert to pre-0.13.0 behavior and restrict column names to alphanumeric and underscore characters, set the configuration property hive.support.quoted.identifiers to none. In this configuration, backticked names are interpreted as regular expressions. For details, see Supporting Quoted Identifiers in Column Names.
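If the space in the name is really required, the backtick quoting described above looks like this in HiveQL. This is only a sketch: per PARQUET-677 the Parquet writer may still reject the space at insert time, so renaming the column (for example to first_name) remains the safer workaround.
-- Backticks let Hive accept the space in the identifier (Hive 0.13+):
CREATE TABLE testColumNames (`First Name` string) STORED AS PARQUET;

-- Safer alternative while PARQUET-677 is unresolved: avoid the space entirely.
CREATE TABLE testColumNames2 (first_name string) STORED AS PARQUET;
INSERT INTO testColumNames2 SELECT 'John Smith';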

Getting an "Unknown type [RetryPolicy]" error after upgrading Spring Data Cassandra to the latest 2.0.7.RELEASE

I updated these lines of code to support spring-data-cassandra-2.0.7.RELEASE:
CassandraOperations cOps = new CassandraTemplate(session);
From:
Insert insertStatement = (Insert)statement;
CqlTemplate.addWriteOptions(insertStatement, queryWriteOptions);
cOps.execute(insertStatement);
To:
Insert insertStatement = (Insert)statement;
insertStatement = QueryOptionsUtil.addWriteOptions(insertStatement, queryWriteOptions);
cOps.insert(insertStatement);
The above changes are throwing the error below:
Caused by: org.springframework.dao.InvalidDataAccessApiUsageException: Unknown type [interface com.datastax.driver.core.policies.RetryPolicy] for property [retryPolicy] in entity [com.datastax.driver.core.querybuilder.Insert]; only primitive types and Collections or Maps of primitive types are allowed
at org.springframework.data.cassandra.core.mapping.BasicCassandraPersistentProperty.getDataType(BasicCassandraPersistentProperty.java:170)
at org.springframework.data.cassandra.core.mapping.CassandraMappingContext.lambda$null$10(CassandraMappingContext.java:552)
at java.util.Optional.orElseGet(Optional.java:267)
at org.springframework.data.cassandra.core.mapping.CassandraMappingContext.lambda$getDataTypeWithUserTypeFactory$11(CassandraMappingContext.java:542)
at java.util.Optional.orElseGet(Optional.java:267)
at org.springframework.data.cassandra.core.mapping.CassandraMappingContext.getDataTypeWithUserTypeFactory(CassandraMappingContext.java:527)
at org.springframework.data.cassandra.core.mapping.CassandraMappingContext.getDataType(CassandraMappingContext.java:486)
at org.springframework.data.cassandra.core.convert.MappingCassandraConverter.getPropertyTargetType(MappingCassandraConverter.java:689)
at org.springframework.data.cassandra.core.convert.MappingCassandraConverter.lambda$getTargetType$0(MappingCassandraConverter.java:682)
at java.util.Optional.orElseGet(Optional.java:267)
at org.springframework.data.cassandra.core.convert.MappingCassandraConverter.getTargetType(MappingCassandraConverter.java:670)
at org.springframework.data.cassandra.core.convert.MappingCassandraConverter.getWriteValue(MappingCassandraConverter.java:711)
at org.springframework.data.cassandra.core.convert.MappingCassandraConverter.writeInsertFromWrapper(MappingCassandraConverter.java:403)
at org.springframework.data.cassandra.core.convert.MappingCassandraConverter.writeInsertFromObject(MappingCassandraConverter.java:360)
at org.springframework.data.cassandra.core.convert.MappingCassandraConverter.write(MappingCassandraConverter.java:345)
at org.springframework.data.cassandra.core.convert.MappingCassandraConverter.write(MappingCassandraConverter.java:320)
at org.springframework.data.cassandra.core.QueryUtils.createInsertQuery(QueryUtils.java:78)
at org.springframework.data.cassandra.core.CassandraTemplate.insert(CassandraTemplate.java:442)
at org.springframework.data.cassandra.core.CassandraTemplate.insert(CassandraTemplate.java:430)
The query that is passed as input is of type com.datastax.driver.core.querybuilder.Insert and contains:
INSERT INTO person (name,id,age) VALUES ('name01','123',23) USING TIMESTAMP 1528922717378000 AND TTL 60;
QueryOptions containing a RetryPolicy and a consistency level are also passed.
Based on the documentation I followed, the above changes are not working. Can anyone let me know what is wrong here?
I'm using Spring 2.0.7.RELEASE with Cassandra driver 3.5.0
I was able to make it work using the change below:
cOps.getCqlOperations().execute(insertStatement);
How can I check whether the consistency level got applied?
For me, this works:
batchOps.insert(ImmutableSet.of(entity), insertOptions);
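If you want to stay on the entity-based API instead of raw CQL, a sketch along these lines should work, assuming the 2.0.x InsertOptions builder exposes ttl, consistencyLevel and retryPolicy as described in the Spring Data Cassandra reference documentation (Person here is a hypothetical mapped @Table entity, and method names may differ slightly in your exact version):
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.policies.DefaultRetryPolicy;
import org.springframework.data.cassandra.core.InsertOptions;

// Build the write options once, then let the template create the INSERT statement
// from the mapped entity; do not pass a QueryBuilder Insert to cOps.insert().
InsertOptions insertOptions = InsertOptions.builder()
        .ttl(60)
        .consistencyLevel(ConsistencyLevel.LOCAL_QUORUM)
        .retryPolicy(DefaultRetryPolicy.INSTANCE)
        .build();

Person person = new Person("123", "name01", 23);   // hypothetical entity matching the person table
cOps.insert(person, insertOptions);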

Spark DataFrame: filter on a column value starting with a string

I have a requirement to filter a DataFrame on the condition that a column value should start with a predefined string.
I am trying the following:
val domainConfigJSON = sqlContext.read
.jdbc(url, "CONFIG", prop)
.select("DID", "CONF", "KEY").filter("key like 'config.*'")
And getting exception:
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException:
You have an error in your SQL syntax; check the manual that
corresponds to your MariaDB server version for the right syntax to use
near 'KEY = 'config.*'' at line 1
Using spark: 1.6.1
You can use the startsWith function present in the Column class:
myDataFrame.filter(col("columnName").startsWith("PREFIX"))
I used the same function but was getting errors, so I checked what the error was:
the method is startsWith(literal: String), with a capital W; a lowercase startswith() is not a member of Column and will not compile.
Ex: df.filter(col("ACCOUNT_NUMBER").startsWith("9"))
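Putting it together with the original JDBC read, a sketch for Spark 1.6 (assuming the standard org.apache.spark.sql.functions.col import):
import org.apache.spark.sql.functions.col

val domainConfigJSON = sqlContext.read
  .jdbc(url, "CONFIG", prop)
  .select("DID", "CONF", "KEY")
  .filter(col("KEY").startsWith("config."))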

Error using a CQL prepared statement in Cassandra (Astyanax)

I was experimenting with an Astyanax write operation, and was using cqlsh for that.
If I simply use the query, I am able to get the result:
keyspace.prepareQuery(COLUMN_FAMILY).withCql("insert into scan_request (customer_id, uuid, scan_type, status, content_size, request_time, request_content_hash) values ("+dao.getCustomerId()+","+"'"+dao.getUuId()+"',"+dao.getScanType()+","+dao.getStatus()+","+dao.getContentSize()+",'2012-12-12 12:12:12', '"+dao.getRequestContentHash()+"');")
.execute();
However, if I use a prepared statement to do the same:
this.getKeyspace()
.prepareQuery(COLUMN_FAMILY)
.withCql(INSERT_STATEMENT)
.asPreparedStatement()
.withIntegerValue(dao.getCustomerId())
.withStringValue(dao.getUuId())
.withIntegerValue(dao.getScanType())
.withIntegerValue(dao.getStatus())
.withIntegerValue(dao.getContentSize())
.withStringValue("'2012-12-12 12:12:12'")
.withStringValue(dao.getRequestContentHash())
.execute();
I get the error below:
Exception in thread "main" java.lang.RuntimeException: failed to write data to C*
at com.tools.dbaccess.cassandra.astyanax.AstyanaxClient.write(AstyanaxClient.java:155)
at com.tools.dbaccess.cassandra.astyanax.AstyanaxClient.main(AstyanaxClient.java:164)
Caused by: com.netflix.astyanax.connectionpool.exceptions.BadRequestException: BadRequestException: [host=localhost(127.0.0.1):9160, latency=11(11), attempts=1]InvalidRequestException(why:Expected 8 or 0 byte long for date (21))
at com.netflix.astyanax.thrift.ThriftConverter.ToConnectionPoolException(ThriftConverter.java:159)
at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:65)
at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:28)
at com.netflix.astyanax.thrift.ThriftSyncConnectionFactoryImpl$ThriftConnection.execute(ThriftSyncConnectionFactoryImpl.java:151)
at com.netflix.astyanax.connectionpool.impl.AbstractExecuteWithFailoverImpl.tryOperation(AbstractExecuteWithFailoverImpl.java:69)
at com.netflix.astyanax.connectionpool.impl.AbstractHostPartitionConnectionPool.executeWithFailover(AbstractHostPartitionConnectionPool.java:256)
at com.netflix.astyanax.thrift.AbstractThriftCqlQuery$3.execute(AbstractThriftCqlQuery.java:80)
at com.tools.dbaccess.cassandra.astyanax.AstyanaxClient.write(AstyanaxClient.java:144)
... 1 more
Caused by: InvalidRequestException(why:Expected 8 or 0 byte long for date (21))
at org.apache.cassandra.thrift.Cassandra$execute_prepared_cql3_query_result.read(Cassandra.java:41868)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at org.apache.cassandra.thrift.Cassandra$Client.recv_execute_prepared_cql3_query(Cassandra.java:1689)
at org.apache.cassandra.thrift.Cassandra$Client.execute_prepared_cql3_query(Cassandra.java:1674)
at com.netflix.astyanax.thrift.ThriftCql3Query.execute_prepared_cql_query(ThriftCql3Query.java:29)
at com.netflix.astyanax.thrift.AbstractThriftCqlQuery$3$1.internalExecute(AbstractThriftCqlQuery.java:92)
at com.netflix.astyanax.thrift.AbstractThriftCqlQuery$3$1.internalExecute(AbstractThriftCqlQuery.java:82)
at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:60)
I think it goes wrong when I try to store the timestamp into the table, but I didn't find anything adequate in the prepared statement API for storing a timestamp. The data type of the field in the database is "timestamp".
I should have used
.withByteBufferValue(new Date(), DateSerializer.get()).
Or if you have a custom object to serialize, extend the AbstractSerializer class.
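Putting that into the original prepared statement, a sketch (assuming java.util.Date and com.netflix.astyanax.serializers.DateSerializer):
import java.util.Date;
import com.netflix.astyanax.serializers.DateSerializer;

this.getKeyspace()
    .prepareQuery(COLUMN_FAMILY)
    .withCql(INSERT_STATEMENT)
    .asPreparedStatement()
    .withIntegerValue(dao.getCustomerId())
    .withStringValue(dao.getUuId())
    .withIntegerValue(dao.getScanType())
    .withIntegerValue(dao.getStatus())
    .withIntegerValue(dao.getContentSize())
    // Bind request_time as a real timestamp value instead of a quoted string,
    // so Cassandra receives the 8-byte long it expects.
    .withByteBufferValue(new Date(), DateSerializer.get())
    .withStringValue(dao.getRequestContentHash())
    .execute();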
Since what you get is a NullPointerException, have you checked that some of your values aren't null?
