Spring Data Cassandra API code samples - spring-data-cassandra

Is there any place where we can find code samples for the different features (at least for the most commonly used methods) of CassandraTemplate?

The best place I know of is the source code itself: CassandraOperations is the interface that CassandraTemplate implements.
You can also look at its test classes to see the methods in action.
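For illustration, here is a minimal sketch of a few commonly used CassandraTemplate operations. It assumes a recent Spring Data Cassandra version (2.x/3.x package layout), a hypothetical Person entity, and a template configured elsewhere (e.g. by Spring Boot auto-configuration):

import java.util.List;
import org.springframework.data.cassandra.core.CassandraTemplate;
import org.springframework.data.cassandra.core.mapping.PrimaryKey;
import org.springframework.data.cassandra.core.mapping.Table;
import org.springframework.data.cassandra.core.query.Criteria;
import org.springframework.data.cassandra.core.query.Query;

// Hypothetical entity mapped to the "person" table
@Table("person")
class Person {
    @PrimaryKey
    private String id;
    private String name;
    // getters/setters omitted
}

class CassandraTemplateExamples {
    void examples(CassandraTemplate template) {
        Person person = new Person();

        // persist a single entity
        template.insert(person);

        // query by criteria
        List<Person> byId = template.select(
                Query.query(Criteria.where("id").is("42")), Person.class);

        // raw CQL is supported as well
        Person one = template.selectOne(
                "SELECT * FROM person WHERE id = '42'", Person.class);

        // delete an entity
        template.delete(person);
    }
}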

Related

Keyset Pagination for Spring Data JDBC's Pageable

AFAIK, the Pageable class supports only LIMIT/OFFSET-based paging. While that is a fairly universal solution, it comes with some downsides, as outlined here: https://momjian.us/main/blogs/pgblog/2020.html#August_10_2020
Keyset pagination (aka the seek method or cursor-based pagination) has some benefits in terms of performance and behavior during concurrent data inserts and deletes. For details, see:
https://use-the-index-luke.com/no-offset
http://allyouneedisbackend.com/blog/2017/09/24/the-sql-i-love-part-1-scanning-large-table/
https://slack.engineering/evolving-api-pagination-at-slack-1c1f644f8e12
https://momjian.us/main/blogs/pgblog/2020.html#August_17_2020
So, are there any plans to support this pagination method, e.g. via a Pageable<KeyType> with a getKey() that then gets incorporated into the SQL's WHERE clause?
This possibility was discussed in the team, and while it is not considered urgent, it is something we would like to offer eventually.
The first step would be to provide support for this in Spring Data Commons, i.e. a persistence-store-independent API. The issue tracking this is DATACMNS-1729.
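Until such support lands, keyset pagination can be done by hand with an annotated query. Below is a minimal sketch using Spring Data JDBC's @Query, assuming a hypothetical topic table with a numeric id primary key as the key column and a database that supports LIMIT:

import java.util.List;
import org.springframework.data.annotation.Id;
import org.springframework.data.jdbc.repository.query.Query;
import org.springframework.data.repository.CrudRepository;
import org.springframework.data.repository.query.Param;

// Hypothetical entity mapped to the "topic" table
class Topic {
    @Id
    Long id;
    String title;
}

interface TopicRepository extends CrudRepository<Topic, Long> {

    // Seek/keyset pagination: continue after the last key seen on the previous
    // page instead of using OFFSET.
    @Query("SELECT * FROM topic WHERE id > :lastSeenId ORDER BY id LIMIT :pageSize")
    List<Topic> findNextPage(@Param("lastSeenId") long lastSeenId,
                             @Param("pageSize") int pageSize);
}

The caller keeps track of the id of the last row of the current page and passes it as lastSeenId when requesting the next page.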

How to override generic OData expand handling functionality

We are currently working on performance issues with the OData interface we provide, since the UI5 client issues a read request with multiple expand paths attached. Due to the generic handling of the request by the framework, this leads to additional processing per expand option, which we need to prevent.
Reading the blog post about this topic, there seems to be a way to override the generic handling:
https://blogs.sap.com/2018/03/19/sap-cloud-platform-sdk-for-service-development-create-odata-service-7-more-navigation-read-create-expand-sqo/
In this case it is us who need to decide if we can afford to rely on the FWK-functionality. Of course, such generic support cannot be performant. But for small amount of data it is just nice to get it for free.
Stay tuned to learn how to overwrite such generic FWK-functionality with own specific implementation.
However, there is no further blog post on this, and looking through the framework, my only idea for overriding this would be to configure and use my own com.sap.gateway.core.api.provider.data.IDataProvider implementation that handles the request in a custom way, although this would be an immense workaround.
So the question is: is there some leaner or easier approach to overriding this functionality that I have missed?
UPDATE:
I was able to create a custom data provider and register it with the RuntimeDelegate after servlet initialization. This custom data provider then checks for a custom annotation on the mapped method handler to see whether expand should be handled or not. If not, it just reads the entity but does not perform the generic expanded read. This works more or less fine, but what is of course missing is a way to pass the properties to be expanded in the ReadRequest. So far only a static implementation is possible, which solves our performance problem, but I would gladly take a hint if there is another, better solution for this...
At the time of this writing, no better approach exists.
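For what it is worth, the annotation part of the workaround described in the update above can stay in plain Java. The SAP-specific wiring (registering the custom data provider with the RuntimeDelegate) is left out here, and the annotation name is made up for illustration:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// Hypothetical marker annotation: handler methods carrying it opt out of the
// framework's generic $expand processing.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface SkipGenericExpand {
}

class ExpandPolicy {
    // Called from the custom data provider with the mapped handler method to
    // decide whether the generic expand handling should run.
    static boolean shouldSkipGenericExpand(Method handlerMethod) {
        return handlerMethod.isAnnotationPresent(SkipGenericExpand.class);
    }
}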

ArangoDB using Java API: when a graph is created, do all edges need to be defined already?

As far as I can tell, you must specify the edge definitions at creation time, and there does not seem to be a method for adding an edge definition later. But I also see examples, written in JavaScript (I think), where edge definitions can be added later. Am I right about this Java limitation, and does that suggest that JavaScript might be a better choice of programming language for interacting with ArangoDB?
EDIT: Could the edgeDefinitions Collection be added to after the graph is created?
EDIT: Seems to me that since the Java API is making REST calls, adding to the Collection later would not work at all.
It is possible to add an edge definition to an existing graph by using the method addEdgeDefinition of the ArangoDB-Java-Driver.
An example is listed in the Java Driver documentation.
Similarly, it is possible to replace or remove an edge definition via replaceEdgeDefinition/removeEdgeDefinition.
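A minimal sketch against the 6.x Java driver, assuming an already-created graph named myGraph in the database myDatabase and two existing vertex collections, persons and companies:

import com.arangodb.ArangoDB;
import com.arangodb.ArangoDatabase;
import com.arangodb.ArangoGraph;
import com.arangodb.entity.EdgeDefinition;

public class AddEdgeDefinitionLater {
    public static void main(String[] args) {
        // connect to a local ArangoDB instance with default settings
        ArangoDB arango = new ArangoDB.Builder().build();
        ArangoDatabase db = arango.db("myDatabase");
        ArangoGraph graph = db.graph("myGraph");

        // new edge collection "worksFor" connecting the two existing vertex collections
        EdgeDefinition edgeDefinition = new EdgeDefinition()
                .collection("worksFor")
                .from("persons")
                .to("companies");

        // add the edge definition to the graph after the graph has been created
        graph.addEdgeDefinition(edgeDefinition);

        arango.shutdown();
    }
}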

How to extend the Apache Spark API?

I've been tasked with figuring out how to extend Spark's API to include some custom hooks for another program, like IPython Notebook, to latch on to. I've already gone through the quick start guide, the cluster mode overview, the submitting-applications doc, and this Stack Overflow question. Everything I'm seeing indicates that, to get something to run in Spark, you need to use
spark-submit
to make it happen. As such, I whipped up some code that, vis-à-vis Spark, pulled the first ten rows of test data out of an Accumulo table I created. My team lead, however, is telling me to modify Spark itself. Is this the preferred way to accomplish the task I described? If so, why? What's the value proposition?
No details have been provided about what types of operations your application requires, so an answer here will need to remain general in nature.
The question of extending Spark itself may come down to:
Can I achieve the needs of the application by leveraging the existing methods within Spark(/SQL/Hive/Streaming)Context and RDD (/SchemaRDD/DStream/..)?
Additionally:
Is it possible to embed the required functionality inside the transformation methods of RDDs, either with custom code or by invoking third-party libraries?
The likely distinguishing factor here is whether the existing data access and shuffle/distribution structures support your needs. When it comes to data transformations, in most cases you should be able to embed the required logic within the methods of RDD.
So:
case class InputRecord(..)
case class OutputRecord(..)

def myTransformationLogic(inputRec: InputRecord): OutputRecord = {
  // put your biz rules/transforms here and return the resulting OutputRecord
  outputRec
}

val myData = sc.textFile("<hdfs path>").map { l => InputRecord.fromInputLine(l) }
val outputData = myData.map(myTransformationLogic)
outputData.saveAsTextFile("<hdfs path>")

How to get more ways of interrogating my data source from SubSonic

In the link below, LINQ to SQL produces a range of methods to get data back based on various real-world needs (e.g. recent topics).
How can I get SubSonic to produce a set of classes that would interrogate and return data from my data source in such a way? The classes I get really just provide CRUD operations.
Thanks
I don't see a link here, but in general I'd say you can use SubSonic 3 in much the same way you can LINQ to SQL. Here's some sample code:
http://subsonicproject.com/docs/Linq_Select_Queries
Other than that I'd need more specific examples.
The way I would do it with SubSonic is to create a database view that presents the data the way you want, and then retrieve it with SubSonic.
