Pagination using the custom payloads option with Java driver 3.1 and Cassandra

I am using Cassandra 3.7 and Java driver 3.1. I have written Java code that connects to Cassandra and fetches data from the database.
I found that pagination should be possible using the custom payloads option, per this URL: http://datastax.github.io/java-driver/manual/custom_payloads/
But I did not find any example of using custom payloads to achieve pagination.
Can anybody please let me know how to use this option?
Thanks in advance.

Custom payloads are not used for pagination. That page only mentions paging as a note on how custom payloads are handled during paging. Custom payloads are only used when you're integrating with a custom QueryHandler on the server side.
Paging is done implicitly by the driver. Please see the Java driver's paging documentation.
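For example, a minimal sketch of driver-side paging (the contact point, keyspace, and table are placeholders):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;
    import com.datastax.driver.core.Statement;

    public class PagingExample {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect()) {
                // Fetch rows from the server 100 at a time.
                Statement stmt = new SimpleStatement("SELECT * FROM my_ks.my_table")
                        .setFetchSize(100);
                ResultSet rs = session.execute(stmt);
                // Iteration transparently requests the next page whenever the
                // current one is exhausted; no custom payload is involved.
                for (Row row : rs) {
                    System.out.println(row);
                }
                // To resume paging across requests (e.g. in a stateless REST
                // service), save rs.getExecutionInfo().getPagingState() and
                // set it on the next statement with setPagingState().
            }
        }
    }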

Related

How can I write KSQL queries in nodejs?

Could you provide me with a code sample in Node.js that runs KSQL queries on Kafka?
For example, I would like to create and then retrieve a simple table.
As far as I know, there's no library that will do that for you. Have you looked at using the REST API for this?
https://docs.confluent.io/current/ksql/docs/developer-guide/api.html
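The request is just an HTTP POST to the KSQL server's /ksql endpoint with the statement in a small JSON envelope. A minimal sketch, shown in Java for concreteness (the same POST translates directly to Node's http module or fetch); localhost:8088 is the default server address and the table/statement is a placeholder:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class KsqlRestExample {
        public static void main(String[] args) throws Exception {
            // /ksql runs DDL/DML statements; /query streams SELECT results.
            String body = "{\"ksql\": \"CREATE TABLE users_by_region AS "
                    + "SELECT region, COUNT(*) FROM users GROUP BY region;\", "
                    + "\"streamsProperties\": {}}";
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8088/ksql"))
                    .header("Content-Type", "application/vnd.ksql.v1+json; charset=utf-8")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());
        }
    }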

DataStax Graph Native API vs Fluent API

I have tried both the native API and the fluent API for DataStax Graph in Java.
I found the fluent API more readable, since it resembles ordinary object-oriented Java.
The native API is less readable in Java, since strings are appended to build up the entire Gremlin script; on the plus side, a single call executes the whole script.
I want to know which API to go with when I need to add a large number of vertices and edges in one transaction, and what performance issues can occur in either case.
Going forward I would recommend using the fluent API over the string-based API. While we still support the string-based API in the DataStax drivers, most of our work and improvements will be on the fluent API.
The primary benefit of the fluent API is that you can use the Apache TinkerPop library directly to form traversals; it doesn't need to go through the Groovy scripting engine (as the string-based API does).
In terms of loading multiple vertices/edges in one transaction, you can do that with Apache TinkerPop, and it will be much more effective than the string-based API because none of it needs to be evaluated through the gremlin-groovy engine. Also, any future work around batching will likely be done in the fluent API (via Apache TinkerPop); see JAVA-1311 for more details.
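To make the contrast concrete, here is a rough sketch of the same insert both ways, assuming the DSE Java driver with the fluent graph module (graph name, labels, and properties are placeholders):

    import com.datastax.driver.dse.DseCluster;
    import com.datastax.driver.dse.DseSession;
    import com.datastax.driver.dse.graph.GraphOptions;
    import com.datastax.driver.dse.graph.SimpleGraphStatement;
    import com.datastax.dse.graph.api.DseGraph;
    import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversal;

    public class GraphApiComparison {
        public static void main(String[] args) {
            try (DseCluster cluster = DseCluster.builder()
                    .addContactPoint("127.0.0.1")
                    .withGraphOptions(new GraphOptions().setGraphName("demo"))
                    .build();
                 DseSession session = cluster.connect()) {
                // String-based API: the whole script is evaluated by the
                // gremlin-groovy engine on the server.
                session.executeGraph(new SimpleGraphStatement(
                        "g.addV('person').property('name', 'alice')"));

                // Fluent API: the traversal is built as TinkerPop objects and
                // converted to a statement, bypassing the groovy engine.
                GraphTraversal<?, ?> traversal =
                        DseGraph.traversal().addV("person").property("name", "bob");
                session.executeGraph(DseGraph.statementFromTraversal(traversal));
            }
        }
    }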

Does Hazelcast support JSON-based objects in cache?

As in the title: I can't find any good information on whether Hazelcast supports storing JSON objects as cache data. If so, does it allow querying such objects with JSONPath/JPQL/SQL-like expressions?
I can see that Apache Geode (GemFire) supports such functionality: http://geode-docs.cfapps.io/docs/developing/data_serialization/jsonformatter_pdxinstances.html and I am wondering if its big rival can do the same.
JSON support was added in Hazelcast 3.12: https://docs.hazelcast.org/docs/latest/manual/html-single/#querying-json-strings
You can check out the merged pull request: https://github.com/hazelcast/hazelcast/pull/14307
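A minimal sketch of how that looks in 3.12 (the map name and fields are made up for illustration): values are stored as HazelcastJsonValue, and attribute paths in predicates reach into the JSON document.

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.HazelcastJsonValue;
    import com.hazelcast.core.IMap;
    import com.hazelcast.query.Predicates;

    public class JsonQueryExample {
        public static void main(String[] args) {
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();
            IMap<Long, HazelcastJsonValue> users = hz.getMap("users");
            users.put(1L, new HazelcastJsonValue("{\"name\": \"alice\", \"age\": 35}"));
            users.put(2L, new HazelcastJsonValue("{\"name\": \"bob\", \"age\": 17}"));
            // Querying the "age" attribute inside the JSON documents.
            System.out.println(users.values(Predicates.greaterThan("age", 18)));
            hz.shutdown();
        }
    }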
In Hazelcast 3.6 you have custom extractors, so you could write a JSON extractor and then use it in queries. See https://github.com/hazelcast/hazelcast-code-samples/tree/master/distributed-map/custom-attributes for an example.
Hazelcast doesn't have out-of-the-box support for JSON.

Kdb+ drivers for NodeJS

I'd like to use NodeJS to implement a simple REST interface to pull data from Kdb+.
Does anyone know of any existing driver or a project to implement a Kdb+ driver for NodeJS?
Not that I know of... but it shouldn't be too hard, really. You could simply use the HTTP interface to make calls to kdb+ and serialize the result to JSON with Arthur's JSON implementation.
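A rough sketch of that approach, shown in Java for concreteness (the same GET is a one-liner from Node); it assumes a kdb+ process started with a listening port, e.g. q -p 5000:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class KdbHttpExample {
        public static void main(String[] args) throws Exception {
            // A GET of the form /?expression is handled by the q process's
            // .z.ph handler. The default handler renders the result as an
            // HTML table, so for a REST service you would override .z.ph to
            // emit JSON instead (with .j.j, or json.k -- Arthur's JSON
            // implementation -- on older kdb+ versions).
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:5000/?til%205"))
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());
        }
    }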
There is https://github.com/michaelwittig/node-q
I've used it in my project and it works great.

Presto API options other than JDBC

What options do I have, other than JDBC, for getting data from Hive to a user interface through the Presto query engine?
UI <--> Presto <--> Hive
The best interface for UI programming is the Presto REST interface. At Facebook we use this REST interface directly in PHP, Python and R for everything from graphical dashboards to statistical analysis. We are working on formal documentation for the REST interface, but for now the best documentation is here:
https://gist.github.com/electrum/7710544
BTW, the current JDBC driver is just a thin wrapper around the Presto REST interface and is really just a prototype. We are working on improving the driver for an internal project at FB, so expect it to become much better over the next few months.
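To give a feel for the protocol documented in that gist: you POST the SQL text to /v1/statement and then follow the nextUri links in each JSON response until the query finishes. A minimal sketch (the coordinator address, catalog, schema, and table are placeholders; a real client would use a JSON parser rather than the crude regex below):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class PrestoRestExample {
        private static final Pattern NEXT_URI = Pattern.compile("\"nextUri\":\"([^\"]+)\"");

        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            // Submit the query; Presto identifies the caller via X-Presto-* headers.
            HttpRequest post = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8080/v1/statement"))
                    .header("X-Presto-User", "demo")
                    .header("X-Presto-Catalog", "hive")
                    .header("X-Presto-Schema", "default")
                    .POST(HttpRequest.BodyPublishers.ofString("SELECT * FROM my_table LIMIT 10"))
                    .build();
            String body = client.send(post, HttpResponse.BodyHandlers.ofString()).body();
            System.out.println(body);

            // While the query is running, each response carries a nextUri;
            // keep GETting it (collecting the "data" rows) until it is gone.
            Matcher m = NEXT_URI.matcher(body);
            while (m.find()) {
                HttpRequest next = HttpRequest.newBuilder().uri(URI.create(m.group(1))).build();
                body = client.send(next, HttpResponse.BodyHandlers.ofString()).body();
                System.out.println(body);
                m = NEXT_URI.matcher(body);
            }
        }
    }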
If you are a Python user, there is a decent library, PyHive, from Dropbox. The PrestoDB site lists a collection of different Presto clients.
However, all of them are wrappers on top of the Presto REST API, with higher-level API support.
