Java equivalent of Spanner numeric data type - google-cloud-spanner

I am getting Caused by: java.lang.IllegalArgumentException: Invalid code: UNRECOGNIZED while trying to access a Spanner NUMERIC column from a Java application using a Spanner repository.
I have tried Long, Float and BigDecimal on the entity. Can anyone share their thoughts on this?

NUMERIC data types in Cloud Spanner should be used with BigDecimal.
Support for the NUMERIC data type was introduced in the Cloud Spanner Java client library in version 1.59.0. The most probable reason for this error is that you are using an older version of the client library. Your question mentions that you tried it 'on the entity', which might indicate that you are using Hibernate.
Could you please share more information about your specific situation, which should at least include:
Any relevant frameworks you are using (Hibernate, Spring, ...)
Versions of these frameworks
Version of the Cloud Spanner Java client library
Please also always include:
A snippet of your actual code, or otherwise a small code sample that reproduces the problem.
The entire (anonymized) stacktrace of your error, instead of only the error message.

NUMERIC maps to BigDecimal in Java. You can always check the Spring Data Spanner framework code, which contains all of the data type mappings between Java and Spanner:
SpannerTypeMapper.java
https://github.com/spring-cloud/spring-cloud-gcp/blob/master/spring-cloud-gcp-data-spanner/src/main/java/org/springframework/cloud/gcp/data/spanner/core/convert/SpannerTypeMapper.java
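For illustration, here is a minimal sketch of what such a mapping can look like with Spring Data Spanner; the entity, table and column names are made up, the point is only that the NUMERIC column is declared as java.math.BigDecimal:

import java.math.BigDecimal;

import org.springframework.cloud.gcp.data.spanner.core.mapping.Column;
import org.springframework.cloud.gcp.data.spanner.core.mapping.PrimaryKey;
import org.springframework.cloud.gcp.data.spanner.core.mapping.Table;

// Hypothetical entity: the Spanner NUMERIC column maps to BigDecimal.
@Table(name = "invoices")
public class Invoice {

    @PrimaryKey
    @Column(name = "invoice_id")
    private String invoiceId;

    @Column(name = "amount")   // NUMERIC column
    private BigDecimal amount;

    // getters and setters omitted
}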

This is caused by an older version of the Spanner client: the NUMERIC type was introduced later, so you may be passing a BigDecimal to a version whose TypeCode class does not yet include NUMERIC.
BigDecimal works perfectly fine with NUMERIC fields.
TYPE_CODE_UNSPECIFIED(0),
BOOL(1),
INT64(2),
FLOAT64(3),
TIMESTAMP(4),
DATE(5),
STRING(6),
BYTES(7),
ARRAY(8),
STRUCT(9),
NUMERIC(10)
Check whether your TypeCode class has all of these. If not, update the dependency and your issue should be resolved.
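As a quick check outside of any framework, here is a minimal sketch using the plain Java client (version 1.59.0 or later, where getBigDecimal is available); the table, column, and the databaseClient variable are assumptions:

import java.math.BigDecimal;

import com.google.cloud.spanner.ResultSet;
import com.google.cloud.spanner.Statement;

// Assumes an existing com.google.cloud.spanner.DatabaseClient named databaseClient.
try (ResultSet rs = databaseClient
        .singleUse()
        .executeQuery(Statement.of("SELECT amount FROM invoices"))) {
    while (rs.next()) {
        BigDecimal amount = rs.getBigDecimal("amount"); // NUMERIC -> BigDecimal
    }
}

If getBigDecimal does not exist in your version of the library, the dependency is too old for NUMERIC.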

Related

jooq timestamp arithmetic Cannot convert from Integer to LocalDateTime

With jooq 3.13.x, we are using
Field<Instant> midPointDueTime = TICKET.READY.plus(TICKET.DUE.minus(TICKET.READY).div(2));
where READY and DUE fields are of type java.time.Instant. They are DATETIME fields in the database (normally java.sql.Timestamp) but are converted to Instant with a javax.persistence.AttributeConverter. The database in question is Informix, but we are using the open source version of jooq for now with the DEFAULT dialect and trying to avoid cases where things would deviate from standard SQL syntax.
From that field declaration, jOOQ 3.13.x creates the following SQL snippet, which works as expected:
TICKET.READY + ((TICKET.DUE - TICKET.READY) / 2)
This is the expected DATETIME arithmetic. We are looking for a timestamp halfway between READY and DUE.
But jOOQ 3.14 and 3.15 both throw a runtime exception:
org.jooq.exception.DataTypeException: Cannot convert from 2 (class java.lang.Integer) to class java.time.LocalDateTime
No SQL is generated, so I don't think this is an Informix compatibility issue. The error happens before any SQL statement is logged.
Is this possibly a bug, or is there something else I can do to achieve the same date arithmetic result?
Regarding dialects
From the SQLDialect.DEFAULT javadoc:
This dialect is chosen in the absence of a more explicit dialect. It is not intended to be used with any actual database as it may combine dialect-specific things from various dialects.
The purpose of this dialect is to be used in cases where no other dialect is available, and debug log information is needed, e.g. when writing:
DSL.abs(1).toString(); // This uses DEFAULT to render the abs(1) function call
You shouldn't use this dialect in any production situation, its behaviour may change at any time for various reasons. It's certainly not integration tested with any RDBMS. Use the SQLDialect.INFORMIX dialect instead
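As an illustration of that advice, this is roughly what creating the context with an explicit dialect looks like. Note that SQLDialect.INFORMIX is part of jOOQ's commercial distributions rather than the open source artifact, so this sketch assumes one of those editions is on the classpath, and that an existing java.sql.Connection named connection is available:

import static org.jooq.impl.DSL.using;

import org.jooq.DSLContext;
import org.jooq.SQLDialect;

// Sketch only: pass an explicit dialect instead of relying on SQLDialect.DEFAULT.
DSLContext ctx = using(connection, SQLDialect.INFORMIX);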
After further research
Thanks to your detailed bug report here: https://github.com/jOOQ/jOOQ/issues/12544, it can be seen that this is a regression that will be fixed in jOOQ 3.16.0 and 3.15.4.

AWS Athena row cast fails when key is a reserved keyword despite double quotes

I'm working with data in AWS Athena, and I'm trying to match the structure of some input data. This involves a nested structure where "from" is a key. This consistently throws errors.
I've narrowed the issue down to the fact that Athena queries don't work when you try to use reserved keywords as keys in rows. The following examples demonstrate this behavior.
This simple case, SELECT CAST(ROW(1) AS ROW("from" INTEGER)), fails with the following error: GENERIC_INTERNAL_ERROR: Unable to create class com.facebook.presto.execution.TaskInfo from JSON response: [io.airlift.jaxrs.JsonMapperParsingException: Invalid json for Java type
This simple case runs successfully: SELECT CAST(ROW(1) AS ROW("work" INTEGER))
The Athena documentation says to enclose reserved keywords in double quotes to use them in SELECT statements, but the examples above show that queries still fail when using keywords as keys in rows.
I know I have other options, but this way is by far the most convenient. Is there a way to use reserved keywords in this scenario?
As Piotr mentions in a comment, this is a Presto bug, and given that it was reported just days ago, it's unlikely to be fixed in Athena anytime soon. When the bug is fixed in Presto it might find its way into Athena; I know the Athena team sometimes applies upstream patches even though Athena is based on an old version of Presto. This one might not be significant enough to appear on their radar, but opening a support ticket with AWS might help (be clear that you don't need a workaround and just want to report a bug, otherwise a support person will spend far too much time trying to help you turn things off and on again).

Google Datastore returns incomplete data via official client library for nodejs

Here is some information about the context of the problem I am facing:
we have semi-structured data (JSON from a Node.js backend) in Datastore.
after saving an entity,
and then getting a list of entities soon afterwards, and even a while later,
the returned data does not have one indexed property,
yet I can find the entity by that property's value.
I use Google Datastore via the Node.js client library, "@google-cloud/datastore": "^2.0.0".
How can this be possible? I understand that due to eventual consistency some updates can be incompletely written, etc. But why do I get the same inconsistency, a whole property missing from the entity, when it was saved e.g. an hour ago?
I have gone through the scenario multiple times for the same kind.
I do not have such issues with other kinds or with other properties of that kind.
How can I avoid this type of issue with Google Datastore?
An answer for anyone who may encounter such an issue.
We mostly do not use DTOs (data transfer objects) or any other wrappers for most of our kinds in this project, but for this one a DTO was used, mostly to make sure the result objects have default values for properties omitted/absent in the entity, which usually happens for entities created by an older version of the code.
After reviewing my own code more carefully, I found a piece of code that was out of sync with other related pieces of code: there was no line to copy this property from the entity to the DTO object.
Side note: this whole situation reminds me of the story/meme about a guy who claimed he had found a bug in the compiler just because he could not find the mistake he had made in his own code.

Upgrading calls to Datastax Java APIs that are gone in 3

There are a number of APIs that are gone in the DataStax 3.x driver. They were used in some 'framework'-level driver wrapper classes I have.
https://github.com/datastax/java-driver/tree/3.0/upgrade_guide
The upgrade guide offers no examples of how to replace calls to the removed APIs (the ones I care about, anyway). Here are several that are missing and that I need to replace while upgrading my code. Any ideas what has 'replaced' them?
DataType.serialize(Object value, ProtocolVersion protocolVersion)
DataType.deserialize(ByteBuffer bytes, ProtocolVersion protocolVersion)
DataType.asJavaClass()
DataType.Name.asJavaClass()
Any help on which APIs these calls should now be directed to would be appreciated.
Item #2 references the changes to DataTypes via custom codecs. A TypeCodec is no longer attached to a DataType, since in the 3.0 version of the driver you can define your own codecs for data types. Therefore these methods are no longer provided directly via DataType.
Custom codecs (JAVA-721) introduce several breaking changes and also modify a few runtime behaviors.
Here is a detailed list of breaking API changes:
...
DataType has no more references to TypeCodec, so most methods that dealt with serialization and deserialization of data types have been removed:
ByteBuffer serialize(Object value, ProtocolVersion protocolVersion)
Object deserialize(ByteBuffer bytes, ProtocolVersion protocolVersion)
Class asJavaClass()
The custom codecs documentation should provide the details you need: if you have the DataType, resolve the TypeCodec for it using CodecRegistry.codecFor, or use the TypeCodec static methods to resolve the default codecs. TypeCodec provides the methods you need, i.e.:
TypeCodec<Long> bigIntCodec = TypeCodec.bigint();
bigIntCodec.serialize(10L, protocolVersion);
bigIntCodec.deserialize(bytes, protocolVersion);
Class<?> clazz = bigIntCodec.getJavaType().getRawType();
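For completeness, here is a sketch of the same calls going through the CodecRegistry when you start from a DataType rather than from a known codec; the cluster variable is assumed to be your existing Cluster instance:

import java.nio.ByteBuffer;

import com.datastax.driver.core.CodecRegistry;
import com.datastax.driver.core.DataType;
import com.datastax.driver.core.ProtocolVersion;
import com.datastax.driver.core.TypeCodec;

CodecRegistry registry = cluster.getConfiguration().getCodecRegistry();
ProtocolVersion protocolVersion =
        cluster.getConfiguration().getProtocolOptions().getProtocolVersion();

// Replaces DataType.serialize(value, protocolVersion) and DataType.deserialize(bytes, protocolVersion)
TypeCodec<Object> codec = registry.codecFor(DataType.bigint());
ByteBuffer bytes = codec.serialize(10L, protocolVersion);
Object value = codec.deserialize(bytes, protocolVersion);

// Replaces DataType.asJavaClass()
Class<?> javaClass = codec.getJavaType().getRawType();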

JAXB Marshalling & Unmarshalling

I am trying to study how the REST service approach works using Jersey.
I have come up with 2 options when creating and accessing a REST service.
I have 6 parameters, which are all strings.
Pass the data as one long comma-separated string and split it on the server side.
Use JAXB and do marshalling and unmarshalling.
I can understand that the 1st option will be the fastest, but does anyone know how much faster it will be than the 2nd option, and is it a safe and efficient way to do this?
It would be nice if someone could mention any more options that are possible.
Thanks
You'll have to write your own MessageBodyReader/Writer if you want the comma-separated string. You'll also need to make sure the parameters themselves do not contain a comma, etc. Not that it would be a blocker, just noting it.
You can also use low-level JSON marshalling/unmarshalling using Jettison, which should also be pretty fast, or use Jackson. See the various JSON mapping options in the Jersey user guide.
Just for completeness, another option might be to use Form (which is essentially a map of String to List<String>). If you use that, there is no need for a special MessageBodyReader/Writer; Jersey will handle it for you. You just need to annotate your methods with @Produces/@Consumes("application/x-www-form-urlencoded").
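As a rough sketch of that last option, using the portable MultivaluedMap injection rather than the Jersey-specific Form class (resource path, method and field names are made up):

import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.MultivaluedMap;

// Jersey maps the urlencoded body to a MultivaluedMap without any custom MessageBodyReader.
@Path("/orders")
public class OrderResource {

    @POST
    @Consumes(MediaType.APPLICATION_FORM_URLENCODED)
    public String create(MultivaluedMap<String, String> form) {
        // Each of the six string parameters arrives as a named form field.
        return form.getFirst("customerId");
    }
}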
Note: I'm the EclipseLink JAXB (MOXy) lead and a member of the JAXB 2 (JSR-222) expert group.
Using a JAXB implementation with Jersey will give you the option of passing an XML or JSON message that will be easy for many clients to interact with. Inventing your own format for the sake of an unknown performance gain is most likely an unnecessary micro optimization.
Here is an example I put together using Jersey & MOXy in GlassFish:
Part 1 - The Database
Part 2 - Mapping the Database to JPA Entities
Part 3 - Mapping JPA entities to XML (using JAXB)
Part 4 - The RESTful Service
Part 5 - The Client
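To give a rough idea of what option 2 looks like in code (this is only a sketch; the class and field names are hypothetical, not taken from the linked example), the six string parameters become fields of one JAXB-annotated type that Jersey can marshal to XML, or to JSON given a suitable provider such as MOXy or Jackson:

import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;

// Hypothetical payload type for the six string parameters.
@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
public class OrderRequest {
    private String customerId;
    private String productCode;
    private String quantity;
    private String currency;
    private String priority;
    private String comment;

    // getters and setters omitted
}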