Does `ResultQuery#fetchAsync()` work with r2dbc in jOOQ 3.15.1?

I want to use non-blocking r2dbc without 'using a third party reactive stream API', and I currently have this working when I configure the DSLContext with JDBC (i.e. all the records are printed):
// appended to a jOOQ select query
.fetchAsync()
.thenApply{ it.map(mapping(::Film)) }
.whenComplete { result, _ -> println( result ) }
However, if I configure the DSLContext to use r2dbc (without any other changes), the println( result ) prints null :-(
I:
- am using Kotlin, but not bridging to coroutines yet ... only the above calls are involved
- am using io.r2dbc:r2dbc-mssql:0.8.6.RELEASE
- don't know if r2dbc is 'working' in any sense ... I'm relying on jOOQ to hit me with an exception if not ... I've not seen a single piece of data relayed by r2dbc at this stage.

As of jOOQ 3.15.1, this isn't possible yet; see https://github.com/jOOQ/jOOQ/issues/11717. It's likely this will be fixed in a 3.15.x patch release.
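In the meantime, if relaxing the 'no third party reactive stream API' constraint is acceptable, a jOOQ 3.15 ResultQuery whose Configuration wraps an R2DBC ConnectionFactory also implements org.reactivestreams.Publisher, so any Reactive Streams library can subscribe to it. A minimal sketch using Reactor, where the connection URL and the FILM table/column names are placeholders for your own schema:
import io.r2dbc.spi.ConnectionFactories;
import io.r2dbc.spi.ConnectionFactory;
import org.jooq.DSLContext;
import org.jooq.impl.DSL;
import reactor.core.publisher.Flux;

public class R2dbcFetchSketch {
    public static void main(String[] args) {
        // Placeholder URL; substitute your own MSSQL coordinates.
        ConnectionFactory connectionFactory =
                ConnectionFactories.get("r2dbc:mssql://localhost:1433/sakila");
        DSLContext ctx = DSL.using(connectionFactory);

        // ResultQuery is a Publisher<R> when backed by R2DBC, so Reactor
        // (or any other Reactive Streams implementation) can consume it:
        Flux.from(ctx.select(DSL.field("TITLE")).from(DSL.table("FILM")))
                .map(record -> record.get(0, String.class))
                .doOnNext(System.out::println)
                .blockLast();
    }
}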

Related

Parameterized queries in ArcGIS JavaScript API

I am trying to build a where clause for an ArcGIS layer query in the ArcGIS JavaScript API. All the examples I can find (including in the where documentation) encourage you to do things like this:
query.where = "NAME = '" + stateName + "'";
I find this surprising, because it seems like we are encouraged to write code that's susceptible to SQL injection. This is especially important to me because my "stateName" is entirely out of my control. It's raw user input.
It looks like the library supports query parameterization via the parameterValues property, however I can find no examples on how to use it, or how to format my query so that it uses parameter values. Not even the documentation contains any examples.
So how do we create properly parameterized queries?
Side note: I recognize it's probably the server administrator's job to prevent bad queries from causing harm, however half the reason I'm asking this is also purely to avoid bugs and improperly parsed queries.
According to ESRI, the parameterValues property is only for use with Query Layers. It's not for this context.
So the answer is, no, you can't use proper SQL parameterization when you're simply running a query on a layer.
The best way I have found to make your queries at least a little bit more bulletproof is to use code like this:
function sanitizeParameter(value) {
    // We need to escape single quotes to avoid messing up the SQL syntax,
    // so all ' characters need to become ''.
    return value.split("'").join("''");
}
query.where = "NAME = '" + sanitizeParameter(stateName) + "'";

Hazelcast Jet with an IMap source and OBJECT in-memory format

I have items in a Hazelcast IMap in OBJECT format, and I'm using a Jet aggregation operation with that IMap as a pipeline source. I was hoping, because of the OBJECT format, to avoid any serialisation/deserialisation of the items in my IMap during processing, the same way native Hazelcast entry processing and queries work. However, I can see that my items are in fact being serialised and then deserialised before being passed to my aggregator.
Is it possible to avoid the serialisation/deserialisation step when using Jet in this way? If so, how?
Unfortunately, the local map reader will always serialize/deserialize the entries. The only way I can think of to work around this is to use a custom source which reads map.localKeySet() and then use mapUsingIMap to join the values back in on those keys. The source would look like below:
BatchSource<Object> localKeys = SourceBuilder
        .batch("localKeys", c -> c.jetInstance().<Object, Object>getMap("map"))
        .<Object>fillBufferFn((map, buf) -> {
            // emit only the keys owned by this member; there is also a
            // localKeySet(predicate) overload if you need to filter
            for (Object key : map.localKeySet()) {
                buf.add(key);
            }
            buf.close(); // batch source: signal that there are no more items
        })
        .distributed(1)
        .build();
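To complete the picture, here is a sketch of the join step described above, using mapUsingIMap to look up the value for each key (method names per the Jet 4.x pipeline API; the map name "map" and the counting aggregation are placeholders for your real pipeline):
import com.hazelcast.jet.Util;
import com.hazelcast.jet.aggregate.AggregateOperations;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;

Pipeline p = Pipeline.create();
p.readFrom(localKeys) // the batch source built above
 // re-create each entry by looking up the value for its key
 .mapUsingIMap("map", key -> key, Util::entry)
 .aggregate(AggregateOperations.counting())
 .writeTo(Sinks.logger());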

How to get SAP CloudSdk BatchRequest not to ignore filter parameter on Batch Query?

We are currently struggling with the Batch Query, which seems to ignore the filter expressions on the S4 side because of wrong URL encoding.
/sap/opu/odata/sap/ZP2M_A_CONTRACT_SEARCH_HDR_CDS/ZP2M_A_CONTRACT_SEARCH_HDR?$filter=PurchaseContractID eq %274600002020%27&$select=*&$format=json
When executing the query using FluentHelperRead.execute(HttpClient), the returned list of entities contains the expected result with exactly one entity.
When executing the query as a Batch Query, the following request is logged to the console:
GET ZP2M_A_CONTRACT_SEARCH_HDR?%24filter%3DPurchaseContractID+eq+%25274600002020%2527%26%24select%3D*%26%24format%3Djson HTTP/1.1
The collected list from all batch result parts contains all entities.
It seems that the query URL is encoded in the wrong way and that S4 ignores the filter expressions when they are encoded like this, e.g. $filter is encoded to %24filter, which S4 ignores.
This seems to be a bug in the BatchRequestImpl.getRequest(ODataQueryImpl) method, where URL encoding is applied a second time to already encoded URL parts.
if (systemQuery.indexOf("$format=json&$count=true") != -1)
{
    systemQuery = systemQuery.substring(0, systemQuery.indexOf("$format=json&$count=true") - 1);
    keysUrl.append("/$count");
}
systemQuery = URLEncoder.encode(systemQuery, "UTF-8"); // this is the line that encodes the query a 2nd time
keysUrl.append("?");
The code line systemQuery = URLEncoder.encode(systemQuery, "UTF-8"); located at
  BatchRequestImpl (1.38.0) - line 295
  BatchRequestImpl (1.42.2) - line 307
encodes the systemQuery string again (including the already encoded parts of the FilterExpression).
When undoing the changes of this code line in the debugger and replacing the spaces with %20 or '+', the Batch Query looks like this:
GET ZP2M_A_CONTRACT_SEARCH_HDR?$filter=PurchaseContractID%20eq%20%274600002020%27&$select=*&$format=json HTTP/1.1
GET ZP2M_A_CONTRACT_SEARCH_HDR?$filter=PurchaseContractID+eq+%274600002020%27&$select=*&$format=json HTTP/1.1
and it returns the expected result (exactly 1 entity).
This wrong encoding appears when using these library versions:
- sdk-bom: 3.16.1, connectivity: 1.38.0
The issue appears in the newest SDK versions as well:
- sdk-bom: 3.21.0, connectivity: 1.39.0
- sdk-bom: 3.21.0, connectivity: 1.40.2 (connectivity JAR pinned to the newest version)
Debugging together with an ABAP/S4 colleague revealed that S4 only applies filter expressions if the keyword $filter is found in the request; %24filter%3D is ignored (which is why we get all entities when running the Batch Query).
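The double encoding is easy to reproduce in isolation. A minimal sketch (the single-encoded query string below is taken from the working request above):
import java.net.URLEncoder;

public class DoubleEncodingDemo {
    public static void main(String[] args) throws Exception {
        // The system query as it should be sent (already encoded once):
        String onceEncoded = "$filter=PurchaseContractID eq %274600002020%27&$select=*&$format=json";
        // Encoding it a second time mangles the OData system query options:
        String twiceEncoded = URLEncoder.encode(onceEncoded, "UTF-8");
        System.out.println(twiceEncoded);
        // prints: %24filter%3DPurchaseContractID+eq+%25274600002020%2527%26%24select%3D*%26%24format%3Djson
        // i.e. exactly the broken GET line logged for the Batch Query
    }
}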
My suggestion to solve it would be:
// decode the query first (to undo the extra encoding of the filter expression)
systemQuery = URLDecoder.decode(systemQuery, "UTF-8");
// then encode the query once
systemQuery = org.apache.commons.httpclient.util.URIUtil.encodeQuery(systemQuery, "UTF-8");
My code, showing how I am calling the batchRequest:
FluentHelperRead<?, MyEntity, ?> queryApi = myService.getAll... // with adding some filter expression
BatchRequestBuilder batchRequestBuilder = BatchRequestBuilder.withService(MyService.DEFAULT_SERVICE_PATH);
ODataQuery query = queryApi.toQuery();
batchRequestBuilder.addQueryRequest(query);
HttpClient httpClient = HttpClientAccessor.getHttpClient(DefaultErpHttpDestinationAccessor.get());
BatchRequest request = batchRequestBuilder.build();
BatchResult result = request.execute(httpClient);
// ... evaluate response
I think this is a general issue in the Cloud SDK. Would it be possible to get this fixed in the next Cloud SDK release?
Can you share your code for Batch request? Do you use BatchRequestImpl directly?
The thing is, the SAP Cloud SDK relies on some dependencies, one of which provides BatchRequestImpl; if it's called directly, the bug is on the dependency side. I have already asked them to investigate this double encoding issue. Unfortunately, we can't directly influence how fast it is resolved, and sometimes it takes longer than we'd like.
The good news: we're working on replacing this dependency with our own implementation to solve exactly this kind of problem. Batch support is work in progress and should be available in beta around the end of next month for OData V4, and hopefully around the same time for OData V2 (this is not a hard commitment and depends on other priorities).
From here we have to wait for whatever happens first:
- The bug is fixed on the dependency side
- Our internal OData client implementation is ready together with Batch
I hope this helps and explains the current solution path. If you share a bit about your deadlines and the potential impact, we'll be happy to take that into account.
This has been fixed within the dependency and as of version 3.25.0 the SAP Cloud SDK includes the fix.

Can't access delete method on Slick query

This is very very very frustrating. I have been trying to pick up Slick for a while, and obstacles just keep coming. The concept of Slick is really awesome, but it is very difficult to learn, and unlike Scala, it doesn't have "beginner", "intermediate", and "advanced" levels where people at all stages can use it easily.
I'm using Play-Slick (Slick 2.0.0) https://github.com/freekh/play-slick, following its Multi-DB cake example: https://github.com/freekh/play-slick/tree/master/samples/play-slick-cake-sample/app
First, for some reason ddl does not exist on TableQuery, despite the documentation's claim that "The TableQuery's ddl method creates DDL". This shows in the scaladoc: http://slick.typesafe.com/doc/2.0.0/api/#scala.slick.lifted.TableQuery - there is no ddl method there.
Second, my slick.lifted.Query has no delete method. It works fine with list, but not with delete.
val S3Files = TableQuery[S3Files]
S3Files.where(_.url === url).delete
This wouldn't work...then I tried:
val query = (for(s <- S3Files if s.url === url) yield s)
query.list //this works
query.delete //ehh?? can't find the method
val query2 = (for(s <- S3Files if s.url === url))
query2.delete //still won't work
Well...since Slick uses a very complicated (at least to newbies) implicit type conversion system, I don't really know what went wrong.
I tried it by simply adding
Cats.ddl.create
Cats.filter(_.name===cat.name).delete
to play-slick-cake-sample/app/controllers/Application.scala. Works fine for me.
Looks like you are using the wrong imports. Look at https://github.com/freekh/play-slick/blob/master/samples/play-slick-sample/app/controllers/Application.scala and mimic the imports.
I was on play-slick 0.8.1 and Slick 2.1.0 and had the same issue.
The reason delete is not available on the Query is that the play-slick Query does not contain an equivalent of the delete method from the plain Slick Query.
I solved this problem by switching to the original Slick driver:
//import play.api.db.slick.Config.driver.simple._ // play-slick extension driver
import scala.slick.driver.PostgresDriver.simple._ // original Slick driver

Incremental loading in Azure Mobile Services

Given the following code:
listView.ItemsSource =
    App.azureClient.GetTable<SomeTable>().ToIncrementalLoadingCollection();
We get incremental loading without further changes.
But what if we modify the read.js server-side script to, e.g., use mssql to query another table instead? What happens to the incremental loading? I'm assuming it breaks; if so, what's needed to support it again?
And what if the query used the untyped version instead, e.g.
App.azureClient.GetTable("SomeTable").ReadAsync(...)
Could incremental loading be somehow supported in this case, or must it be done "by hand" somehow?
Bonus points for insights on how Azure Mobile Services implements incremental loading between the server and the client.
The incremental loading collection works by sending the $top and $skip query parameters (those are also sent when you do a query by using the .Take and .Skip methods in the table). So if you want to modify the read script to do something other than the default behavior, while still maintaining the ability to use that table with an incremental loading collection, you need to take those values into account.
To do that, you can ask for the query components, which will contain the values, as shown below:
function read(query, user, request) {
    var queryComponents = query.getComponents();
    console.log('query components: ', queryComponents); // useful to see all information
    var top = queryComponents.take;
    var skip = queryComponents.skip;
    // do whatever you want with those values, then call request.respond(...)
}
The way it's implemented at the client is by using a class which implements the ISupportIncrementalLoading interface. You can see it (and the full source code for the client SDKs) in the GitHub repository, or more specifically the MobileServiceIncrementalLoadingCollection class (the method is added as an extension in the MobileServiceIncrementalLoadingCollectionExtensions class).
And the untyped table does not have that method - as you can see in the extension class, it's only added to the typed version of the table.
