Cache-control with Oracle ORDS

My DB gets new data only between 1pm and 2pm. Users jump back and forth, often looking at the same thing multiple times, so I want to use caching to cut network load. My response type is:
source_type_collection_item
Executes a SQL Query returning one row of data into an ORDS Standard JSON representation.
Available when the HTTP method is GET.
Result Format: JSON
My handler returns a single-row SYS_REFCURSOR response with multiple nested CURSORs:
FUNCTION my_func RETURN SYS_REFCURSOR
AS
    my_cursor SYS_REFCURSOR;
BEGIN
    OPEN my_cursor FOR
        SELECT value, CURSOR(SELECT value_2, CURSOR(SELECT ... FROM table)
        ...
I want to set differing Cache-Control values based on server time:
Request at 9:30am - cache 3.5 hrs
Request at 1:30pm - cache 0 mins
Request at 3:30pm - cache 21.5 hrs
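Computing the desired max-age is the easy part; it is just "seconds until the next 1pm, zero during the load window". A minimal sketch of that computation in Java (the class and method names are my own, not an ORDS or Tomcat API):

import java.time.Duration;
import java.time.LocalDateTime;
import java.time.LocalTime;

// Hypothetical helper: how many seconds a response may be cached,
// given that new data only arrives between 1pm and 2pm server time.
public class CacheWindow {
    public static long cacheSeconds(LocalDateTime now) {
        LocalTime loadStart = LocalTime.of(13, 0);
        LocalTime loadEnd = LocalTime.of(14, 0);
        LocalTime t = now.toLocalTime();
        // During the load window itself: do not cache at all.
        if (!t.isBefore(loadStart) && t.isBefore(loadEnd)) {
            return 0;
        }
        // Otherwise cache until the next 1pm (today if before 1pm, else tomorrow).
        LocalDateTime next = t.isBefore(loadStart)
                ? now.toLocalDate().atTime(loadStart)
                : now.toLocalDate().plusDays(1).atTime(loadStart);
        return Duration.between(now, next).getSeconds();
    }
}

For the three example requests this yields 12600, 0, and 77400 seconds respectively.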
The problem is that I can't figure out how to set caching at all for an ORDS endpoint serving HTTP GET. I have one working fine for an unrelated PL/SQL service
source_type_plsql:
Executes an anonymous PL/SQL block and transforms any OUT or IN/OUT parameters into a JSON representation.
Available only when the HTTP method is DELETE, PUT, or POST.
Result Format: JSON
using a defined parameter:
ords.define_parameter(
    p_module_name        => l_module_name,
    p_pattern            => l_pattern,
    p_method             => l_method,
    p_name               => 'Cache-Control',
    p_bind_variable_name => 'cache_control',
    p_source_type        => 'HEADER',
    p_param_type         => 'STRING',
    p_access_method      => 'OUT',
    p_comments           => '');
But this won't work for the GET case because:
HTTP GET is not supported by the PL/SQL source type
SYS_REFCURSOR is not supported as an ORDS PL/SQL OUT parameter
The only option I could figure out was a static cache setting in Tomcat rather than ORDS.
<filter>
    <filter-name>ExpiresFilter</filter-name>
    <filter-class>org.apache.catalina.filters.ExpiresFilter</filter-class>
    <init-param>
        <param-name>ExpiresByType application/json</param-name>
        <param-value>access plus 30 minutes</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>ExpiresFilter</filter-name>
    <url-pattern>/ords/*</url-pattern>
    <dispatcher>REQUEST</dispatcher>
</filter-mapping>
But that's undesirable for a number of reasons:
It applies to ALL services under ORDS, while caching is a bad idea for most other endpoints. I was unable to filter more precisely than this; from Tomcat's point of view ORDS seems to be a monolith that gets one cache-control setting.
It is a fixed value.
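If staying at the Tomcat layer is acceptable, both objections can in principle be addressed by replacing ExpiresFilter with a custom servlet filter: it can compute the max-age per request and can be mapped to an exact <url-pattern> instead of /ords/*. A minimal sketch, assuming the standard javax.servlet API and reusing the CacheWindow helper sketched above (the filter class name is my own):

import java.io.IOException;
import java.time.LocalDateTime;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Hypothetical filter: sets a time-dependent Cache-Control header.
// Map it in web.xml to the one endpoint that should be cached.
public class TimedCacheFilter implements Filter {

    @Override
    public void init(FilterConfig config) throws ServletException { }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        long maxAge = CacheWindow.cacheSeconds(LocalDateTime.now());
        ((HttpServletResponse) res).setHeader("Cache-Control",
                maxAge > 0 ? "max-age=" + maxAge : "no-cache");
        chain.doFilter(req, res);
    }

    @Override
    public void destroy() { }
}

The web.xml mapping would then name the specific endpoint path as its <url-pattern> rather than /ords/*.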

Related

How to get SAP CloudSdk BatchRequest not to ignore filter parameter on Batch Query?

We are currently struggling with Batch Query, which seems to ignore the filter expressions on the S4 side due to wrong URL encoding.
/sap/opu/odata/sap/ZP2M_A_CONTRACT_SEARCH_HDR_CDS/ZP2M_A_CONTRACT_SEARCH_HDR?$filter=PurchaseContractID eq %274600002020%27&$select=*&$format=json
Executing the query using FluentHelperRead.execute(HttpClient), the returned list of entities contains the expected result with exactly one entity.
Executing the query as Batch Query the following request is logged in console:
GET ZP2M_A_CONTRACT_SEARCH_HDR?%24filter%3DPurchaseContractID+eq+%25274600002020%2527%26%24select%3D*%26%24format%3Djson HTTP/1.1
The collected list from all batch result parts contains all entities.
It seems that the query URL is encoded the wrong way and that S4 ignores the filter expressions when they are encoded like this, e.g. $filter is encoded to %24filter, which is ignored by S4.
This seems to be a bug in the BatchRequestImpl.getRequest(ODataQueryImpl) method, where URL encoding is applied a second time to already encoded URL parts.
if (systemQuery.indexOf("$format=json&$count=true") != -1)
{
    systemQuery = systemQuery.substring(0, systemQuery.indexOf("$format=json&$count=true") - 1);
    keysUrl.append("/$count");
}
systemQuery = URLEncoder.encode(systemQuery, "UTF-8"); // this code line encodes the query a 2nd time
keysUrl.append("?");
The code line systemQuery = URLEncoder.encode(systemQuery, "UTF-8"); located in
  BatchRequestImpl (1.38.0) - line 295
  BatchRequestImpl (1.42.2) - line 307
encodes the systemQuery string again (including the already encoded parts of the FilterExpression).
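The effect is easy to reproduce in isolation with plain java.net.URLEncoder (a standalone demo, not SDK code):

import java.net.URLEncoder;

// Standalone demo of the double-encoding effect described above.
public class DoubleEncodingDemo {
    public static void main(String[] args) throws Exception {
        String query = "$filter=PurchaseContractID eq '4600002020'";
        String once = URLEncoder.encode(query, "UTF-8");
        String twice = URLEncoder.encode(once, "UTF-8");
        System.out.println(once);  // %24filter%3D... ($ becomes %24, = becomes %3D)
        System.out.println(twice); // %2524filter%253D... (the % signs are encoded again)
    }
}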
When undoing the changes of this code line in the debugger and replacing the spaces by %20 or '+', the Batch Query looks like this:
GET ZP2M_A_CONTRACT_SEARCH_HDR?$filter=PurchaseContractID%20eq%20%274600002020%27&$select=*&$format=json HTTP/1.1
GET ZP2M_A_CONTRACT_SEARCH_HDR?$filter=PurchaseContractID+eq+%274600002020%27&$select=*&$format=json HTTP/1.1
and it returns the expected result (exactly 1 entity).
This wrong encoding appears when using these library versions:
sdk-bom: 3.16.1
connectivity: 1.38.0
The issue appears in the newest SDK versions as well:
sdk-bom: 3.21.0
connectivity: 1.39.0
The issue appears with the connectivity JAR in the newest version too:
sdk-bom: 3.21.0
connectivity: 1.40.2
Debugging together with an ABAP/S4 colleague showed that S4 only applies filter expressions if the keyword $filter is found in the request; %24filter%3D is ignored (which is why we get all entities when running the Batch Query).
My suggestion to solve it would be:
// decode the query first (to undo the extra encoding of the filter expression)
systemQuery = URLDecoder.decode(systemQuery, "UTF-8");
// then encode the query once
systemQuery = org.apache.commons.httpclient.util.URIUtil.encodeQuery(systemQuery, "UTF-8");
My code, showing how I am calling the batch request:
FluentHelperRead<?, MyEntity, ?> queryApi = myService.getAll... // with adding some filter expression
BatchRequestBuilder batchRequestBuilder = BatchRequestBuilder.withService(MyService.DEFAULT_SERVICE_PATH);
ODataQuery query = queryApi.toQuery();
batchRequestBuilder.addQueryRequest(query);
HttpClient httpClient = HttpClientAccessor
.getHttpClient(DefaultErpHttpDestinationAccessor.get());
BatchRequest request = batchRequestBuilder.build();
BatchResult result = request.execute(httpClient);
// ... evaluate response
I think this is a general issue in the Cloud SDK. Would it be possible to get this fixed in the next Cloud SDK release?
Can you share your code for the Batch request? Do you use BatchRequestImpl directly?
The thing is that the SAP Cloud SDK relies on some dependencies, one of which introduces BatchRequestImpl; if it's called directly, the bug is on the dependency side. I have already informed them so they can investigate this double-encoding issue. Unfortunately, we can't directly influence how fast it is resolved, and sometimes it takes longer than we'd like.
The good news is that we're working on replacing this dependency with our own implementation to solve exactly this kind of problem. Batch support is work in progress and should be available in beta around the end of next month for OData V4, and hopefully around the same time for OData V2 (this is not a hard commitment and depends on other priorities).
From here we have to wait for whatever happens first:
The bug is fixed on the dependency side
Internal OData client implementation is ready together with Batch
I hope this helps and explains the current solution path. If you share a bit about your deadlines and the potential impact, we'll be happy to take that into consideration.
This has been fixed within the dependency, and as of version 3.25.0 the SAP Cloud SDK includes the fix.

Azure CosmosDB. Continuation token length in stored procedure

I have a REST API which is intended to query documents stored in CosmosDB with an OData-like syntax. I'm returning documents in chunks, i.e. I set $top=10 and get 10 documents back with a continuation token. This continuation token is returned from a stored procedure:
var accepted = collection.queryDocuments(collection.getSelfLink(),
    sql, requestOptions,
    function (err, documents, responseOptions) {
        // ...
        // put responseOptions.continuation into the response body
    });
The problem is that if the continuation token is long (e.g. 6k characters) and I pass it in the URL, the URL cannot be handled and I can't reach my endpoint (I get a 404). As far as I understand, the more complex the initial SQL query, the longer the continuation token, and its length cannot be configured.
Is there a workaround for that?
I don't think there is an out-of-the-box solution for this issue. What you can try is to implement a tiny-URL kind of framework at your service layer, as sketched below.
https://www.geeksforgeeks.org/how-to-design-a-tiny-url-or-url-shortener/
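For illustration, a minimal sketch of that idea: keep the long continuation token server-side and hand the client only a short key it can safely put in a URL. The class and method names are my own; a real service would use a shared store with a TTL instead of an in-memory map.

import java.security.SecureRandom;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical "tiny URL" store for CosmosDB continuation tokens.
public class ContinuationTokenStore {
    private final Map<String, String> tokens = new ConcurrentHashMap<>();
    private final SecureRandom random = new SecureRandom();

    // Returns a short opaque key the client can put in a URL.
    public String put(String continuationToken) {
        String key = Long.toHexString(random.nextLong());
        tokens.put(key, continuationToken);
        return key;
    }

    // Resolves the key back to the full token before querying CosmosDB.
    public String resolve(String key) {
        return tokens.remove(key);
    }
}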

Handling of etags in batch request using SAP Cloud SDK

I am trying to carry out a batch request including a create, an update and a delete (all different sales orders). As per this question here, which deals with something similar, I have done a GET for the items I want to update and delete before adding them to the batch request. I am using SalesOrder.builder() to prepare the SalesOrder I want to create.
final ErpHttpDestination destination = DestinationAccessor.getDestination(DESTINATION_NAME)
.asHttp().decorate(DefaultErpHttpDestination::new);
final SalesOrderItem salesOrderItem1 = SalesOrderItem.builder().material(material)
.requestedQuantityUnit(requestedQuantityUnit).build();
final SalesOrder salesOrder1 = SalesOrder.builder().distributionChannel(distributionChannel)
.salesOrderType(salesOrderType).salesOrganization(salesOrganization)
.organizationDivision(organizationDivision).soldToParty(soldToParty)
.item(salesOrderItem1).build();
final SalesOrder orderToUpdate = new GetSingleSalesOrderCommand(orderToUpdateID, destination,
new DefaultSalesOrderService()).execute();
orderToUpdate.setSoldToParty(updateSoldToParty);
final SalesOrder orderToDelete = new GetSingleSalesOrderCommand(orderToDeleteID, destination,
new DefaultSalesOrderService()).execute();
SalesOrderServiceBatch service = new DefaultSalesOrderServiceBatch(
new DefaultSalesOrderService());
BatchResponse bRes = service.beginChangeSet().createSalesOrder(salesOrder1).updateSalesOrder(orderToUpdate)
.deleteSalesOrder(orderToDelete).endChangeSet().execute(destination);
I am then logging the BatchResponse and see I am getting a Batch Response Failure:
eTag handling not supported for http method 'POST'
I have searched for this error but can't find any resolution to it. Any ideas?
Thanks.
UPDATE: Increasing the logging level to DEBUG, I can see the batch request that is being sent, and there is an If-Match header being added to the create request, which doesn't make sense as it can't match something that doesn't exist yet.
"msg":"--batch_123\r\nContent-Type: multipart/mixed;
boundary=changeset_(changeset number)\r\n\r\n--
changeset_(changeset number)\r\nContent-Type:
application/http\r\nContent-Transfer-Encoding: binary\r\n\r\nPOST
/sap/opu/odata/sap/API_SALES_ORDER_SRV/A_SalesOrder HTTP/1.1\r\nContent-
Length:
193\r\nIf-Match: W/\"datetimeoffset'2020-05-
01T11%3A51%3A16.8631720Z'\"\r\nAccept:
application/json;odata=verbose\r\nContent-Type:......
Then I get the error:
Inner Error:
"msg":"batch responseFailure(com.sap.cloud.sdk.odatav2.connectivity.ODataException: null: <?xml version=\"1.0\" encoding=\"utf-8\"?><error xmlns=\"http://schemas.microsoft.com/ado/2007/08/dataservices/metadata\"><code>/IWFND/CM_MGW/537</code><message xml:lang=\"en\">eTag handling not supported for http method 'POST'</message><innererror>...
However, what does work is wrapping each request in its own changeset, e.g.:
service
    .beginChangeSet().createSalesOrder(order).endChangeSet()
    .beginChangeSet().updateSalesOrder(orderToUpdate).endChangeSet()
    .beginChangeSet().deleteSalesOrder(orderToDelete).endChangeSet()
    .execute(destination);
Edit:
This is fixed as of version 3.25.0.
Initial Answer:
This seems to be a bug. I was able to reproduce this with a different service and the behaviour is the same: the If-Match header is incorrectly applied to the POST operation as well.
When debugging, it looks like the request is built up correctly, with the header only being present on update and delete. However, when the batch request is serialised to JSON, the header gets added to all requests.
So until this is fixed, the workaround is isolating these operations via change sets, as you already pointed out.
Looks like eTag handling is not supported for your endpoint.
Now you can do the following to omit eTag headers:
orderToUpdate.setVersionIdentifier(null);
orderToDelete.setVersionIdentifier(null);
However, I'm not sure how 'POST' fits the error description, because update uses PATCH and delete uses DELETE. The only POST I would expect comes from create, but we do not add headers for entity version identifiers (eTags) in OData create operations. If the same error still comes up, please try again without running createSalesOrder(salesOrder1).

Why are there hexadecimal numbers included in my view results from CouchDB?

Why are there hexadecimal numbers included in my view results from CouchDB? How can I get rid of them?
7f
{"total_rows":108,"offset":0,"rows":[
{"id":"5c718dbd01bc0cde8152e08ed6003405","key":"2013-03-19T22:43:27.2683661Z","value":0}
5b
,
...
{"id":"5c718dbd01bc0cde8152e08ed6037404","key":"2013-03-19T23:07:35.5972058Z","value":0}
5b
,
{"id":"5c718dbd01bc0cde8152e08ed60376e5","key":"2013-03-19T23:07:35.6062063Z","value":0}
4
]}
1
0
TL;DR
I am new to CouchDB and am investigating its use as a database for an event log. I have created a simple map function to view the event log by date:
function (doc)
{
    if (doc.DateTime)
    {
        emit(doc.DateTime, doc);
    }
}
When I use Fiddler to test this view with the following request:
GET http://localhost:5984/stuff/_design/eventlog/_view/datetime
Host: localhost:5984
User-Agent: Fiddler
The results returned include hexadecimal numbers that aren't part of the JSON structure, hence the JSON returned is invalid. Why are these hexadecimal numbers included in the results, and how can I get rid of them?
I am using Windows (x86) CouchDB version 1.2.1.
The weird hex numbers are used for so-called chunked transfer encoding. This is a way for HTTP responses to become available in a streaming format, instead of the client having to wait for the entire response to be ready. The hex numbers denote the length in bytes of the next chunk (e.g. the leading 7f announces a 127-byte chunk).
I think the use of chunking is independent of the request's Accept values, but I'm not sure.
To get a pure JSON result you must include the Accept: application/json header in your HTTP request.
If you omit the Accept header, CouchDB returns results in a manner that is more suitable for displaying nicely in web browsers. The results will be in a JSON format, but with a text/plain content type.
See Apache CouchDB 1.3 Manual Section 2.2.1. Request Headers.
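For illustration, a minimal Java client that requests the view from the question with an explicit Accept header (a sketch; the URL and view names are taken from the question, and any HTTP client that decodes chunked transfer encoding will behave the same way):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class CouchViewRequest {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:5984/stuff/_design/eventlog/_view/datetime");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Accept", "application/json");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            // HttpURLConnection transparently decodes chunked transfer
            // encoding, so the chunk-size lines never appear in the body.
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}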
The hexadecimal numbers are a result of Chunked transfer encoding.

What are all the ways CouchDB responses fail?

I'm building a Node.js application on the express.js framework with CouchDB as a database. I'm utilizing CouchDB's session API for maintaining session state, and various databases for different sections of data.
On essentially every request my application code makes a request to Couch, and if there's an error (with Node) I can respond appropriately, by logging the error and redirecting to a 404 page or something like that. But if I get a CouchDB error, Node wouldn't consider it an error; it would consider it data. Now that's totally fine with me, as long as CouchDB can only return this format:
{
    "error": "illegal_database_name",
    "reason": "Only lowercase characters (a-z), digits (0-9), and any of the characters _, $, (, ), +, -, and / are allowed. Must begin with a letter."
}
A JSON doc with two properties, error and reason. That's fine; I can parse it and return the appropriate message, quite gracefully actually.
BUT! Is that all I can expect from CouchDB, or is there another way Couch might fail that wouldn't yield a JSON doc with those two fields (properties)?
dscape's advice to rely on the response codes is correct, and in most situations you will get an object with error and reason. The bulk-document errors are the only place I can think of where neither of these will be true. If just one document fails you'll still get a 200, but you'll get the error/reason within the array element corresponding to the document that failed. See the docs for more info on that.
