How to get the SAP Cloud SDK BatchRequest not to ignore the filter parameter on a batch query? - sap-cloud-sdk

We are currently struggling with a batch query
that seems to ignore the filter expressions on the S/4 side, caused by wrong URL encoding.
/sap/opu/odata/sap/ZP2M_A_CONTRACT_SEARCH_HDR_CDS/ZP2M_A_CONTRACT_SEARCH_HDR?$filter=PurchaseContractID eq %274600002020%27&$select=*&$format=json
Executing the query using FluentHelperRead.execute(HttpClient), the returned list of entities contains the expected result with exactly one entity.
Executing the query as a batch query, the following request is logged to the console:
GET ZP2M_A_CONTRACT_SEARCH_HDR?%24filter%3DPurchaseContractID+eq+%25274600002020%2527%26%24select%3D*%26%24format%3Djson HTTP/1.1
In this case, the list collected from all batch result parts contains all entities (the filter is ignored).
It seems that the query URL is encoded the wrong way and that S/4 ignores the filter expressions when they are encoded like this:
e.g. $filter is encoded to %24filter, which S/4 ignores.
This seems to be a bug in the BatchRequestImpl.getRequest(ODataQueryImpl) method, where URL encoding is applied a second time to already encoded URL parts.
if (systemQuery.indexOf("$format=json&$count=true") != -1)
{
    systemQuery = systemQuery.substring(0, systemQuery.indexOf("$format=json&$count=true") - 1);
    keysUrl.append("/$count");
}
systemQuery = URLEncoder.encode(systemQuery, "UTF-8"); // this is the line that encodes the query a second time
keysUrl.append("?");
The line systemQuery = URLEncoder.encode(systemQuery, "UTF-8"); located at
  BatchRequestImpl (1.38.0) - line 295
  BatchRequestImpl (1.42.2) - line 307
encodes the systemQuery string a second time (including the already encoded parts of the FilterExpression).
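The double encoding can be reproduced in isolation. The following minimal, self-contained sketch (plain JDK, not SDK code; the Charset overload of URLEncoder.encode requires Java 10+) produces exactly the request line logged above:

import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class DoubleEncodingDemo {
    public static void main(String[] args) {
        // query string whose filter literals are already percent-encoded once
        String query = "$filter=PurchaseContractID eq %274600002020%27&$select=*&$format=json";
        // encoding the whole string again also escapes '$', '=', '&' and the
        // existing '%' signs: $filter becomes %24filter, %27 becomes %2527
        String doubleEncoded = URLEncoder.encode(query, StandardCharsets.UTF_8);
        System.out.println(doubleEncoded);
        // prints: %24filter%3DPurchaseContractID+eq+%25274600002020%2527%26%24select%3D*%26%24format%3Djson
    }
}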
When the effect of this line is undone in the debugger and the spaces are replaced by %20 or '+', the batch query looks like this:
GET ZP2M_A_CONTRACT_SEARCH_HDR?$filter=PurchaseContractID%20eq%20%274600002020%27&$select=*&$format=json HTTP/1.1
GET ZP2M_A_CONTRACT_SEARCH_HDR?$filter=PurchaseContractID+eq+%274600002020%27&$select=*&$format=json HTTP/1.1
and it returns the expected result (exactly one entity).
This wrong encoding occurs with these library versions:
sdk-bom: 3.16.1
connectivity: 1.38.0
The issue also occurs with the latest SDK versions:
sdk-bom: 3.21.0
connectivity: 1.39.0
It also occurs with the connectivity JAR in its latest version:
sdk-bom: 3.21.0
connectivity: 1.40.2
Debugging together with an ABAP/S4 colleague revealed that S/4 only applies filter expressions if the keyword $filter is found literally in the request;
%24filter%3D is ignored (which is why we get all entities when running the batch query).
My suggestion for solving it would be:
// decode the query first (to undo the encoding of the filter expression)
systemQuery = URLDecoder.decode(systemQuery, "UTF-8");
// then encode the query once
systemQuery = org.apache.commons.httpclient.util.URIUtil.encodeQuery(systemQuery, "UTF-8");
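As a self-contained illustration of this decode/re-encode round trip (a sketch only; it assumes commons-httpclient 3.x on the classpath, which provides URIUtil):

import java.net.URLDecoder;
import org.apache.commons.httpclient.util.URIUtil;

public class ReencodeDemo {
    public static void main(String[] args) throws Exception {
        String systemQuery = "$filter=PurchaseContractID eq %274600002020%27&$select=*&$format=json";
        // first reduce the already encoded filter literals to plain text
        String decoded = URLDecoder.decode(systemQuery, "UTF-8");
        // then encode once as a query string: spaces become %20, while reserved
        // query characters such as '$', '=' and '&' are left intact
        String encoded = URIUtil.encodeQuery(decoded, "UTF-8");
        System.out.println(encoded); // $filter stays literal, so S/4 applies the filter
    }
}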
This is how I am calling the batch request:
FluentHelperRead<?, MyEntity, ?> queryApi = myService.getAll... // with some filter expression added
BatchRequestBuilder batchRequestBuilder = BatchRequestBuilder.withService(MyService.DEFAULT_SERVICE_PATH);
ODataQuery query = queryApi.toQuery();
batchRequestBuilder.addQueryRequest(query);
HttpClient httpClient = HttpClientAccessor.getHttpClient(DefaultErpHttpDestinationAccessor.get());
BatchRequest request = batchRequestBuilder.build();
BatchResult result = request.execute(httpClient);
// ... evaluate the response
I think this is a general issue in the Cloud SDK.
Would it be possible to get this fixed in the next Cloud SDK release?

Can you share your code for the batch request? Do you use BatchRequestImpl directly?
The thing is that the SAP Cloud SDK relies on some dependencies, one of which provides BatchRequestImpl; if it is called directly, the bug is on the dependency side. I have already asked them to investigate this double-encoding issue. Unfortunately, we can't directly influence how fast it is resolved, and sometimes it takes longer than we'd like.
The good news is that we're working on replacing this dependency with our own implementation to solve exactly this kind of problem. Batch support is work in progress and should be available in beta around the end of next month for OData V4, and hopefully around the same time for OData V2 (this is not a hard commitment and depends on other priorities).
From here we have to wait for whichever happens first:
The bug is fixed on the dependency side
Our internal OData client implementation, including batch support, is ready
I hope this helps and explains the current solution path. If you share a bit about your deadlines and the potential impact, we'll be happy to take that into account.

This has been fixed within the dependency, and as of version 3.25.0 the SAP Cloud SDK includes the fix.

Related

Handling of etags in batch request using SAP Cloud SDK

I am trying to carry out a batch request including a create, an update, and a delete (all on different sales orders). As per this question here, which deals with something similar, I have done a GET for the items I want to update and delete before adding them to the batch request. I am using SalesOrder.builder() to prepare the SalesOrder I want to create.
final ErpHttpDestination destination = DestinationAccessor.getDestination(DESTINATION_NAME)
        .asHttp().decorate(DefaultErpHttpDestination::new);
final SalesOrderItem salesOrderItem1 = SalesOrderItem.builder()
        .material(material)
        .requestedQuantityUnit(requestedQuantityUnit)
        .build();
final SalesOrder salesOrder1 = SalesOrder.builder()
        .distributionChannel(distributionChannel)
        .salesOrderType(salesOrderType)
        .salesOrganization(salesOrganization)
        .organizationDivision(organizationDivision)
        .soldToParty(soldToParty)
        .item(salesOrderItem1)
        .build();
final SalesOrder orderToUpdate = new GetSingleSalesOrderCommand(orderToUpdateID, destination,
        new DefaultSalesOrderService()).execute();
orderToUpdate.setSoldToParty(updateSoldToParty);
final SalesOrder orderToDelete = new GetSingleSalesOrderCommand(orderToDeleteID, destination,
        new DefaultSalesOrderService()).execute();
SalesOrderServiceBatch service = new DefaultSalesOrderServiceBatch(
        new DefaultSalesOrderService());
BatchResponse bRes = service.beginChangeSet()
        .createSalesOrder(salesOrder1)
        .updateSalesOrder(orderToUpdate)
        .deleteSalesOrder(orderToDelete)
        .endChangeSet()
        .execute(destination);
I am then logging the BatchResponse and can see that I am getting a batch response failure:
eTag handling not supported for http method 'POST'
I have searched for this error but can't find any resolution to it. Any ideas?
Thanks.
UPDATE: After increasing the log level to DEBUG, I can see the batch request that is being sent, and there is an If-Match header being added to the create request. This doesn't make sense, as it can't match something that doesn't exist yet.
"msg":"--batch_123\r\nContent-Type: multipart/mixed;
boundary=changeset_(changeset number)\r\n\r\n--
changeset_(changeset number)\r\nContent-Type:
application/http\r\nContent-Transfer-Encoding: binary\r\n\r\nPOST
/sap/opu/odata/sap/API_SALES_ORDER_SRV/A_SalesOrder HTTP/1.1\r\nContent-
Length:
193\r\nIf-Match: W/\"datetimeoffset'2020-05-
01T11%3A51%3A16.8631720Z'\"\r\nAccept:
application/json;odata=verbose\r\nContent-Type:......
Then I get the error:
Inner Error:
"msg":"batch responseFailure(com.sap.cloud.sdk.odatav2.connectivity.ODataException: null: <?xml version=\"1.0\" encoding=\"utf-8\"?><error xmlns=\"http://schemas.microsoft.com/ado/2007/08/dataservices/metadata\"><code>/IWFND/CM_MGW/537</code><message xml:lang=\"en\">eTag handling not supported for http method 'POST'</message><innererror>...
However, what does work is wrapping each request in its own change set, e.g.
service
.beginChangeSet().createSalesOrder(order).endChangeSet()
.beginChangeSet().updateSalesOrder(orderToUpdate).endChangeSet()
.beginChangeSet().deleteSalesOrder(orderToDelete).endChangeSet()
.execute(destination);
Edit:
This is fixed as of version 3.25.0.
Initial Answer:
This seems to be a bug. I was able to reproduce it with a different service, and the behaviour is the same: the If-Match header is incorrectly applied to the POST operation as well.
When debugging, it seems that the request is built up correctly, with the header only present on update and delete. However, when the batch request is serialised, the header appears to be added to all requests.
So until this is fixed, the workaround is isolating these operations via change sets, as you already pointed out.
Looks like eTag handling is not supported by your endpoint.
You can do the following to omit the eTag headers:
orderToUpdate.setVersionIdentifier(null);
orderToDelete.setVersionIdentifier(null);
However, I'm not sure how 'POST' fits the error description, because update uses PATCH and delete uses DELETE. The only POST I would expect comes from create, but we do not add entity version identifier (eTag) headers in the OData create operation. If the same error still comes up, please try again without running createSalesOrder(salesOrder1).
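Putting the two suggestions together, a minimal sketch (reusing the objects from the question; a workaround, not a definitive fix) would be:

// clear the version identifiers so no If-Match header is sent ...
orderToUpdate.setVersionIdentifier(null);
orderToDelete.setVersionIdentifier(null);

// ... and isolate each operation in its own change set to work around
// the header being copied onto the POST during serialisation
BatchResponse bRes = new DefaultSalesOrderServiceBatch(new DefaultSalesOrderService())
        .beginChangeSet().createSalesOrder(salesOrder1).endChangeSet()
        .beginChangeSet().updateSalesOrder(orderToUpdate).endChangeSet()
        .beginChangeSet().deleteSalesOrder(orderToDelete).endChangeSet()
        .execute(destination);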

Google Cloud Datastore Cursor with google.cloud.ndb

I am working with Google Cloud Datastore using the latest google.cloud.ndb library.
I am trying to implement pagination using a Cursor with the following code, but it is not fetching the data correctly.
[1] To Fetch Data:
query_01 = MyModel.query()
f = query_01.fetch_page_async(limit=5)
This code works fine and fetches 5 entities from MyModel.
I want to implement pagination that can be integrated with a web frontend.
[2] To Fetch Next Set of Data
from google.cloud.ndb._datastore_query import Cursor
nextpage_value = "2"
nextcursor = Cursor(cursor=nextpage_value.encode()) # Converts to bytes
query_01 = MyModel.query()
f = query_01.fetch_page_async(limit=5, start_cursor=nextcursor)
[3] To Fetch Previous Set of Data
previouspage_value = "1"
prevcursor = Cursor(cursor=previouspage_value.encode())
query_01 = MyModel.query()
f = query_01.fetch_page_async(limit=5, start_cursor=prevcursor)
The [2] & [3] sets of code do not fetch paginated data, but returns results same as results of codebase [1].
Please note I'm working with Python 3 and using the
latest "google.cloud.ndb" Client library to interact with Datastore
I have referred to the following link https://github.com/googleapis/python-ndb
I am new to Google Cloud, and appreciate all the help I can get.
Firstly, it seems to me like you are expecting the wrong kind of pagination. You are trying to use numeric page values, whereas the Datastore cursor provides cursor-based pagination.
Instead of byte-encoded integer values (like 1 or 2), Datastore expects opaque tokens that look similar to this: 'CjsSNWoIb3Z5LXRlc3RyKQsSBFVzZXIYgICAgICAgAoMCxIIQ3ljbGVEYXkiCjIwMjAtMTAtMTYMGAAgAA=='
You can obtain such a cursor from the first call to the fetch_page() method, which returns a tuple (results, cursor, more), where results is a list of query results, cursor points just after the last result returned, and more indicates whether there are (likely) more results after that.
Secondly, you should be using fetch_page() instead of fetch_page_async(), since the latter does not return the cursors you need for pagination. Internally, fetch_page() calls fetch_page_async() to get your query results.
Thirdly and lastly, I am not entirely sure whether the "previous page" use-case is doable with the Datastore-provided pagination alone. You may need to implement that part yourself by storing some of the cursors, as sketched below.
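A minimal sketch of next-page pagination along those lines, assuming the MyModel class from the question and an active ndb client context (get_page is a hypothetical helper name):

from google.cloud.ndb._datastore_query import Cursor  # same import as in the question

def get_page(page_size=5, urlsafe_token=None):
    # pass the token returned by a previous call to fetch the next page
    start = Cursor(urlsafe=urlsafe_token) if urlsafe_token else None
    results, cursor, more = MyModel.query().fetch_page(page_size, start_cursor=start)
    # cursor.urlsafe() yields an opaque token that can be round-tripped
    # through a web frontend, e.g. as a query parameter
    next_token = cursor.urlsafe() if (cursor and more) else None
    return results, next_token

# first page:             results, token = get_page()
# next page, using token: results, token = get_page(urlsafe_token=token)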
I hope that helps and good luck!

Azure Function Route Parameter Reading: context.bindingData.paramName vs context.req.params.paramName

I have a route definition in function.json: entity/{paramName}
When I make a GET request: http://localhost:7071/api/entity/50043e-315
In context.bindingData.paramName I get a surprising 5.0043e-311, while context.req.params.paramName contains 50043e-315.
I noticed that here both ways of reading can be used, and here the same is implied (though the links are dead by now), while the example here mentions only context.bindingData.
Question: Which is preferable? And what is the difference?
I believe the problem is that somewhere (if I were to make a guess, here) the parameter is parsed as a double before being stored as binding data, whereas when fetching it from the request object it is taken directly from the URL as a string. Hence the difference: "50043e-315" parsed as a double is 5.0043e-311.
I believe there are only a few cases where this might happen, and this is one of them. See the sketch below.
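A minimal sketch (Node.js programming model with function.json, matching the question's route entity/{paramName}) that makes the difference visible:

module.exports = async function (context, req) {
    // bindingData coerces numeric-looking route segments:
    // "50043e-315" arrives here as the number 5.0043e-311
    context.log(typeof context.bindingData.paramName, context.bindingData.paramName);
    // req.params reads the raw URL segment, so the value stays the string "50043e-315"
    context.log(typeof req.params.paramName, req.params.paramName);
    // if the segment must be treated verbatim, prefer the request object
    context.res = { body: req.params.paramName };
};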

DocumentDB Replace not Working

I recently realized that DocumentDB supports stand-alone update operations via ReplaceDocumentAsync.
I've replaced the Upsert operation below with the Replace operation.
var result = _client
.UpsertDocumentAsync(_collectionUri, docObject)
.Result;
So this is now:
var result = _client
.ReplaceDocumentAsync(_collectionUri, docObject)
.Result;
However, now I get the exception:
Microsoft.Azure.Documents.BadRequestException : ResourceType Document is unexpected.
ActivityId: b1b2fd71-3029-4d0d-bd5d-87d8d0a2fc95
I have no idea why; upsert and replace are of the same vein, and the object is the same one that worked for upsert, so I would expect it to work without problems.
All help appreciated.
Thanks
Update: I have tried to implement this using the self-link approach, and it works for Replace, but the self-link does not work with Upsert. The behavior is quite confusing. I don't like that I have to build a self-link in code using string concatenation.
I'm afraid that building the self-link with string concatenation is your only option here, because ReplaceDocument(...) requires a link to the document, while you pass a link to the collection in your example. It won't extract the id and find the document for you, as you might wish.
The NPM module documentdb-utils has library functions for building these links, but it's just using string concatenation. I have seen an equivalent library for .NET, but I can't remember where; maybe it was in an Azure example or is even in the SDK by now.
You can build a document link for a replace using the UriFactory helper class:
var result = _client
.ReplaceDocumentAsync(UriFactory.CreateDocumentUri(databaseId, collectionId, docObject.Id), docObject)
.Result;
Unfortunately it's not very intuitive, as Larry has already pointed out, but a replace expects a document to already be there, while an upsert is what it says on the tin. Two different use-cases, I would say.
In order to update a document, you need to provide the collection URI. If you provide the document URI, it returns the following:
ResourceType Document is unexpected.
Maybe the _collectionUri is actually a document URI; the assignment should look like this:
_collectionUri = UriFactory.CreateDocumentCollectionUri(DatabaseName, CollectionName);

Incremental loading in Azure Mobile Services

Given the following code:
listView.ItemsSource =
    App.azureClient.GetTable<SomeTable>().ToIncrementalLoadingCollection();
We get incremental loading without further changes.
But what if we modify the read.js server-side script, e.g. to use mssql to query another table instead? What happens to the incremental loading? I'm assuming it breaks; if so, what's needed to support it again?
And what if the query used the untyped version instead, e.g.
App.azureClient.GetTable("SomeTable").ReadAsync(...)
Could incremental loading be somehow supported in this case, or must it be done "by hand" somehow?
Bonus points for insights on how Azure Mobile Services implements incremental loading between the server and the client.
The incremental loading collection works by sending the $top and $skip query parameters (those are also sent when you do a query by using the .Take and .Skip methods in the table). So if you want to modify the read script to do something other than the default behavior, while still maintaining the ability to use that table with an incremental loading collection, you need to take those values into account.
To do that, you can ask for the query components, which will contain the values, as shown below:
function read(query, user, request) {
    var queryComponents = query.getComponents();
    console.log('query components: ', queryComponents); // useful to see all information
    var top = queryComponents.take;
    var skip = queryComponents.skip;
    // do whatever you want with those values, then call request.respond(...)
}
The way it's implemented at the client is by using a class which implements the ISupportIncrementalLoading interface. You can see it (and the full source code for the client SDKs) in the GitHub repository, or more specifically the MobileServiceIncrementalLoadingCollection class (the method is added as an extension in the MobileServiceIncrementalLoadingCollectionExtensions class).
And the untyped table does not have that method; as you can see in the extension class, it is only added to the typed version of the table, so with the untyped table the paging has to be done by hand, as sketched below.
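A hedged sketch of that manual paging (assuming the .NET client, whose untyped ReadAsync accepts a raw OData query string; the names are illustrative and taken from the question):

// hypothetical manual paging over the untyped table (inside an async method);
// the incremental loading collection is only available for the typed table
var table = App.azureClient.GetTable("SomeTable");
int pageSize = 20;
int pageIndex = 0;
// $top/$skip are the same parameters the incremental loading collection sends
var page = await table.ReadAsync("$top=" + pageSize + "&$skip=" + (pageIndex * pageSize));
// 'page' is a JToken with one page of results; increment pageIndex to load more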
