Converter issue with jOOQ when mapping nested records - jooq

I have two basically identical queries, one using multiset and the other using row, but they behave differently with regard to converting from SQL datatype to Java/Kotlin.
For example, here's the multiset:
multiset(
    select(
        CALENDAR_ENTRIES.ID,
        CALENDAR_ENTRIES.EVENT_DATE
    )
    .from(CALENDAR_ENTRIES)
    .where(cond)
).`as`("calendar_entry")
    .convertFrom { r: Result<Record2<Long, LocalDateTime>> ->
        r.map(
            Records.mapping { id: Long?, eventDate: LocalDateTime? ->
                CalendarEntry(id, eventDate!!)
            }
        )
    }
and here's the row:
row(
    CALENDAR_ENTRIES.ID,
    CALENDAR_ENTRIES.EVENT_DATE
)
    .mapping { id: Long?, eventDate: LocalDateTime ->
        CalendarEntry(id, eventDate)
    }
For argument's sake, assume the DB has one record, with eventDate being (as a String) '2022-02-21 09:30:00'.
With the first query, I successfully get my Field<List<CalendarEntry>> instance. With the second query (row()), I get "java.time.format.DateTimeParseException: Text '2022-02-21 09:30:00' could not be parsed at index 10".
It seems that jOOQ is able to correctly apply a converter for the multiset call, but not the row call. Am I doing something wrong?

I'm assuming you're using PostgreSQL.
You probably ran into this bug here: https://github.com/jOOQ/jOOQ/issues/13117. The difference in behaviour is explained by the fact that as of jOOQ 3.16:
MULTISET gets emulated using JSONB functionality
ROW gets
    emulated using JSONB functionality when nested within a MULTISET
    implemented natively using PostgreSQL ROW expressions when used at the top level
Bug https://github.com/jOOQ/jOOQ/issues/13117 seems to also affect top-level ROW types, not just UDTs. It has been fixed recently and might be backported to 3.16 and 3.15.

Related

TypeORM count grouping with different left joins each time

I am using NestJS with TypeORM and PostgreSQL. I have a queryBuilder which joins other tables based on the provided array of relations.
const query = this.createQueryBuilder('user');
if (relations.includes('relation1')) {
    query.leftJoinAndSelect('user.relation1', 'r1');
}
if (relations.includes('relation2')) {
    query.leftJoinAndSelect('user.relation2', 'r2');
}
if (relations.includes('relation3')) {
    query.leftJoinAndSelect('user.relation3', 'r3');
}
// 6 more relations
Following that I select a count on another table.
query
    .leftJoin('user.relation4', 'r4')
    .addSelect('COUNT(case when r4.value > 10 then r4.id end)', 'user_moreThan')
    .addSelect('COUNT(case when r4.value < 10 then r4.id end)', 'user_lessThan')
    .groupBy('user.id, r1.id, r2.id, r3.id ...')
And lastly I use one of the counts (depending on the request) for ordering the result with orderBy.
Now, of course, based on the relations parameter, the requirements for the groupBy query change. If I join all tables, TypeORM expects all of them to be present in groupBy.
I initially had the count query separated, but that was before I wanted to use the result for ordering.
Right now I plan to just build the groupBy string dynamically, but this approach somehow feels wrong, and I am wondering whether it is in fact the way to go or whether there is a better approach to achieving what I want.
You can add the group by clause conditionally:
if (relations.includes('relation1')) {
    query.addGroupBy('r1.id');
}
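One way to take this further is to drive both the joins and the GROUP BY from the same list of relation names, so they can never get out of sync. This is only a rough sketch, assuming the relation names and aliases from the question (relation1..relation3 joined as r1..r3, and relations being the array coming from the request):

const joinable = ['relation1', 'relation2', 'relation3']; // plus the 6 others

const query = this.createQueryBuilder('user')
    .leftJoin('user.relation4', 'r4')
    .addSelect('COUNT(case when r4.value > 10 then r4.id end)', 'user_moreThan')
    .addSelect('COUNT(case when r4.value < 10 then r4.id end)', 'user_lessThan')
    .groupBy('user.id');

joinable.forEach((relation, index) => {
    if (relations.includes(relation)) {
        const alias = `r${index + 1}`;
        // the join and its GROUP BY column are added together
        query.leftJoinAndSelect(`user.${relation}`, alias);
        query.addGroupBy(`${alias}.id`);
    }
});

The GROUP BY then always matches exactly the tables that were joined, and the count columns remain available for orderBy.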

What are the returned data types from a Knex select() statement?

Hi everyone,
I am currently using Knex.js for a project, and a question came up when I make a knex('table').select() call.
What are the returned types from the query? In particular, if I have a datetime column in my table, what is the return value for this field?
I believe the query will return a value of type string for this column. But is that the case for any database (I use SQLite3)? Is it possible that the query returns a Date value?
EXAMPLE:
The user table has this schema:
knex.schema.createTable('user', function (table) {
    table.increments('id');
    table.string('username', 256).notNullable().unique();
    table.timestamps(true, true);
})
Since I use SQLite3, table.timestamps(true, true); produces 2 datetime columns: created_at & modified_at.
When I make the query knex('user').select(), it returns an array of objects with the attributes: id, username, created_at, modified_at.
id is of type number
username is of type string
What will be the types of created_at & modified_at?
Will they always be of type string? If I use another database like PostgreSQL, these columns will have the timestamptz SQL type. Will the value returned by Knex also be a string then?
This is not in fact something that Knex is responsible for, but rather the underlying database library. So if you're using SQLite, it would be sqlite3. If you're using Postgres, pg is responsible, and you can find more detail in its documentation. Broadly, most libraries take the approach that types which have a direct JavaScript equivalent (booleans, strings, null, integers, etc.) are returned as those types; anything else is converted to a string.
Knex's job is to construct the SQL that those libraries use to talk to the database, and to receive the response that they return.
I believe it will be an object of strings or numbers.
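If you want to see exactly what your own setup returns, the simplest thing is to inspect a row, and on PostgreSQL you can additionally tell the pg driver how to parse a given SQL type. The following is only a rough sketch, assuming the user table from the question and that the knex, sqlite3 and pg packages are installed; the setTypeParser call is a pg-specific detail, not something Knex itself provides:

import knex from 'knex';
import { types } from 'pg';

// Only relevant when using the 'pg' client: override how timestamptz values
// are parsed (types.builtins.TIMESTAMPTZ is the timestamptz OID, 1184).
types.setTypeParser(types.builtins.TIMESTAMPTZ, (value) => value); // keep as string

const db = knex({
    client: 'sqlite3', // or 'pg' with a connection object
    connection: { filename: './dev.sqlite3' },
    useNullAsDefault: true,
});

async function inspectTypes(): Promise<void> {
    const rows = await db('user').select();
    for (const row of rows) {
        // With sqlite3 the datetime columns typically come back as plain strings;
        // other drivers may hand back Date objects instead.
        console.log(typeof row.created_at, row.created_at instanceof Date);
    }
}

inspectTypes().finally(() => db.destroy());

Running something like this against each database you care about answers the question more reliably than any general rule, since the parsing really is decided by sqlite3, pg, etc. rather than by Knex.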

DataStax Cassandra seems to cache prepared statements

When my application has been running for a long time, everything works fine. But when I change a column's type from int to text (drop table and recreate), I get an exception:
com.datastax.oss.driver.api.core.type.codec.CodecNotFoundException: Codec not found for requested operation: [INT <-> java.lang.String]
at com.datastax.oss.driver.internal.core.type.codec.registry.CachingCodecRegistry.createCodec(CachingCodecRegistry.java:609)
at com.datastax.oss.driver.internal.core.type.codec.registry.DefaultCodecRegistry$1.load(DefaultCodecRegistry.java:95)
at com.datastax.oss.driver.internal.core.type.codec.registry.DefaultCodecRegistry$1.load(DefaultCodecRegistry.java:92)
at com.datastax.oss.driver.shaded.guava.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3527)
at com.datastax.oss.driver.shaded.guava.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2276)
at com.datastax.oss.driver.shaded.guava.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2154)
at com.datastax.oss.driver.shaded.guava.common.cache.LocalCache$Segment.get(LocalCache.java:2044)
at com.datastax.oss.driver.shaded.guava.common.cache.LocalCache.get(LocalCache.java:3951)
at com.datastax.oss.driver.shaded.guava.common.cache.LocalCache.getOrLoad(LocalCache.java:3973)
at com.datastax.oss.driver.shaded.guava.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4957)
at com.datastax.oss.driver.shaded.guava.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4963)
at com.datastax.oss.driver.internal.core.type.codec.registry.DefaultCodecRegistry.getCachedCodec(DefaultCodecRegistry.java:117)
at com.datastax.oss.driver.internal.core.type.codec.registry.CachingCodecRegistry.codecFor(CachingCodecRegistry.java:215)
at com.datastax.oss.driver.api.core.data.SettableByIndex.set(SettableByIndex.java:132)
at com.datastax.oss.driver.api.core.data.SettableByIndex.setString(SettableByIndex.java:338)
This exception appears occasionally. I'm using PreparedStatement to execute the query; I think it is cached by the DataStax driver.
I'm using AWS Keyspaces (Cassandra version 3.11.2) and DataStax driver 4.6.
Here is my application.conf:
datastax-java-driver {
  basic.request {
    timeout = 5 seconds
    consistency = LOCAL_ONE
  }
  advanced.connection {
    max-requests-per-connection = 1024
    pool {
      local.size = 1
      remote.size = 1
    }
  }
  advanced.reconnect-on-init = true
  advanced.reconnection-policy {
    class = ExponentialReconnectionPolicy
    base-delay = 1 second
    max-delay = 60 seconds
  }
  advanced.retry-policy {
    class = DefaultRetryPolicy
  }
  advanced.protocol {
    version = V4
  }
  advanced.heartbeat {
    interval = 30 seconds
    timeout = 1 second
  }
  advanced.session-leak.threshold = 8
  advanced.metadata.token-map.enabled = false
}
Yes, Java driver 4.x caches prepared statements - that's a difference from driver 3.x. From the documentation:
the session has a built-in cache, it’s OK to prepare the same string twice.
...
Note that caching is based on: the query string exactly as you provided it: the driver does not perform any kind of trimming or sanitizing.
I'm not 100% sure about the source code, but the relevant entries in the cache may not be cleared on a table drop. I suggest opening a JIRA against the Java driver, although such type changes are often not really recommended - it's better to introduce a new field with the new type, even if it is possible to re-create the table.
That's correct. Prepared statements are cached -- it's the optimisation that makes prepared statements more efficient if they are reused since they only need to be prepared once (the query doesn't need to get parsed again).
But I suspect that the underlying issue in your case is that your queries involve SELECT *. The best-practice recommendation (regardless of the database you're using) is to explicitly enumerate the columns you are retrieving from the table.
In the prepared statement, each of the columns is bound to a data type. When you alter the schema by adding/dropping columns, the order of the columns (and their data types) no longer matches the data types of the result set, so you end up in situations where the driver gets an int when it's expecting a text, or vice versa. Cheers!

Scala Slick 2.0: updateAll equivalent to insertAll?

Looking for a way to do a batch update using Slick. Is there an updateAll equivalent to insertAll? Google research has failed me thus far.
I have a list of case classes with varying statuses, each having a different numeric value, so I cannot run the typical update query. At the same time, I want to avoid sending a separate update request per record, as there could be thousands of records I want to update at the same time.
Sorry to answer my own question, but what I ended up doing is just dropping down to JDBC and doing a batch update.
private def batchUpdateQuery = "update table set value = ? where id = ?"

/**
 * Dropping down to JDBC because Slick doesn't support this batched update.
 */
def batchUpdate(batch: List[MyCaseClass])(implicit subject: Subject, session: Session) = {
    val pstmt = session.conn.prepareStatement(batchUpdateQuery)
    batch map { myCaseClass =>
        pstmt.setString(1, myCaseClass.value)
        pstmt.setString(2, myCaseClass.id)
        pstmt.addBatch()
    }
    session.withTransaction {
        pstmt.executeBatch()
    }
}
It's not clear to me what you are trying to achieve. Insert and update are two different operations: for insert it makes sense to have a bulk function, but for update it doesn't, in my opinion. In fact, in SQL you can just write something like this:
UPDATE SomeTable
SET SomeColumn = SomeValue
WHERE AnotherColumn = AnotherValue
Which translates to update SomeColumn with the value SomeValue for all the rows which have AnotherColumn equal to AnotherValue.
In Slick this is a simple filter combined with map and update:
table
    .filter(_.someColumn === someValue)
    .map(_.fieldToUpdate)
    .update(newValue)
If instead you want to update the whole row just drop the map and pass a Row object to the update function.
Edit:
If you want to update different case classes, I'm led to think that these case classes are rows defined in your schema, and if that's the case you can pass them directly to the update function, since it is defined as:
def update(value: T)(implicit session: Backend#Session): Int
For the second problem I can't suggest a solution; looking at the JdbcInvokerComponent trait, it looks like the update function invokes the execute method immediately:
def update(value: T)(implicit session: Backend#Session): Int = session.withPreparedStatement(updateStatement) { st =>
st.clearParameters
val pp = new PositionedParameters(st)
converter.set(value, pp, true)
sres.setter(pp, param)
st.executeUpdate
}
Probably because you can actually run only one update query at a time per table, not multiple updates on multiple tables, as also stated in this SO question; but you can of course update multiple rows on the same table.

How to update multiple rows using Hector

Is there a way I can update multiple rows in a Cassandra database using a column family template, e.g. by supplying a list of keys?
Currently I am using a ColumnFamilyTemplate updater to loop through a list of keys and do an update for each row. I have seen queries like multigetSliceQuery, but I don't know their equivalent for doing updates.
There is no utility method in ColumnFamilyTemplate that allows you to just pass a list of keys with a list of mutations in one call.
You can implement your own using mutators.
This is the basic code for how to do it in Hector:
Set<String> keys = MY_KEYS;
Map<String, String> pairsOfNameValues = MY_MUTATION_BY_NAME_AND_VALUE;

Set<HColumn<String, String>> columns = new HashSet<HColumn<String, String>>();
for (Entry<String, String> pair : pairsOfNameValues.entrySet()) {
    columns.add(HFactory.createStringColumn(pair.getKey(), pair.getValue()));
}

Mutator<String> mutator = template.createMutator();
String columnFamilyName = template.getColumnFamily();
for (String key : keys) {
    for (HColumn<String, String> column : columns) {
        mutator.addInsertion(key, columnFamilyName, column);
    }
}
mutator.execute();
Well, it should look like that. This is an example for insertion; be sure to use the following methods for batch mutations:
mutator.addInsertion
mutator.addDeletion
mutator.addCounter
mutator.addCounterDeletion
since these ones will execute right away without waiting for mutator.execute():
mutator.incrementCounter
mutator.deleteCounter
mutator.insert
mutator.delete
As a last note: a mutator allows you to batch mutations on multiple rows and multiple column families at once, which is why I generally prefer to use mutators instead of CF templates. I do a lot of denormalization for functionalities that use the "push-on-write" pattern of NoSQL.
You can use a batch mutation to insert as much as you want (within thrift_max_message_length_in_mb). See http://hector-client.github.com/hector//source/content/API/core/1.0-1/me/prettyprint/cassandra/model/MutatorImpl.html.
