Java API for handling collections in Cassandra CQL

I am looking for a Java API that can handle collections in Cassandra, i.e. one with methods to read/update/insert/delete collection column values such as list/set/map. I am using the Hector client now, and I did not find any methods that meet the above requirement. The API should also be able to handle mixed column types (for example, one column value can be utf8 and another can be a collection). Any example or tutorial would be appreciated as well.

C* collections are part of the CQL spec v3. The only Java driver I'm aware of that supports this spec completely is the open-source DataStax Java driver. The driver offers two ways of working with CQL statements: one based on Statements/PreparedStatements/etc. and one using a fluent API.
If you are using Cassandra 1.2.x, look for version 1.x of the driver. If you are on Cassandra 2.0.x, look for version 2.0 of the driver (currently RC2, soon to go final).
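For illustration, here is a minimal sketch of writing, updating and reading a collection column with the DataStax Java driver, assuming the 2.0 driver against Cassandra 2.0.x; the contact point, keyspace and table names are placeholders.

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

import java.util.HashSet;
import java.util.Set;

public class CollectionsSketch {
    public static void main(String[] args) {
        // Placeholder contact point and keyspace; adjust to your cluster.
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("ks");

        // Mixed column types: a text column plus a set<text> collection column.
        session.execute("CREATE TABLE IF NOT EXISTS users (id text PRIMARY KEY, name text, emails set<text>)");

        // Insert a row with a collection value via a prepared statement.
        Set<String> emails = new HashSet<String>();
        emails.add("alice@example.com");
        session.execute(session.prepare("INSERT INTO users (id, name, emails) VALUES (?, ?, ?)")
                               .bind("u1", "Alice", emails));

        // Update the collection in place (add an element to the set).
        session.execute("UPDATE users SET emails = emails + {'alice@work.example.com'} WHERE id = 'u1'");

        // Read the collection back as a typed java.util.Set.
        Row row = session.execute("SELECT name, emails FROM users WHERE id = 'u1'").one();
        System.out.println(row.getString("name") + " -> " + row.getSet("emails", String.class));

        cluster.close();
    }
}
```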

Related

Is the cassandra-driver-core dependency removed as part of the DSE Cassandra 5.x Java driver?

With DSE Cassandra 5.x, should cassandra-driver-core be excluded from the client code's dependencies due to deprecation? Should dse-java-driver-core be used instead?
I'm not 100% sure what you're referring to. I think the primary reason is the rework of authentication support, and other things that are specific to the DSE driver. The OSS driver supports only username/password authentication, while the DSE driver also supports Kerberos, plus mixed internal/external authentication schemes.
But you can safely replace cassandra-driver-core with dse-java-driver-core - the code is compatible, with the same Cluster/Session - as long as you don't need geo types, graph support, etc. Look here for the full list of differences.
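As a rough illustration of that compatibility, the sketch below uses only the shared com.datastax.driver.core Cluster/Session API, so it should compile unchanged whether cassandra-driver-core or dse-java-driver-core is on the classpath; the contact point is a placeholder.

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;

public class DriverSwapCheck {
    public static void main(String[] args) {
        // Same Cluster/Session code for either artifact; only the dependency coordinates change.
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {
            ResultSet rs = session.execute("SELECT release_version FROM system.local");
            System.out.println("Connected, server version: " + rs.one().getString("release_version"));
        }
    }
}
```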

Why was the Cassandra Context removed from DataStax Enterprise 4.7?

I learned from this link that the Cassandra context was removed in DataStax Enterprise 4.7. Does that mean it will be removed from the Spark Cassandra Connector? Also, what is the reason for removing it? Is it performance related?
Cassandra Context
The 'CassandraContext' object was DataStax-only and never existed in the Spark Cassandra Connector. It was basically a compiled mapping of Cassandra tables to Scala objects and case classes. It required compiling a new object every time the underlying Cassandra schema changed, and it created a divergence from the OSS Spark Cassandra Connector API. The additional performance cost of creating this object was seen as a waste of time compared to the limited convenience it offered. In addition, the code only worked in the Spark shell, so it was not suitable for prototyping code for standalone applications.
Edit: I was mistaken; the CassandraContext is a separate structure from the CassandraSQLContext. My memory was wrong.
The CassandraSQLContext's main purpose was to provide a persistent catalogue and automatic mapping to Cassandra tables from Spark when the system does not have a HiveMetastore present. When using the CassandraSQLContext, the user is limited to a tiny subset of ANSI SQL, as opposed to a HiveContext, which supports 99% of HiveQL. The code for the CassandraSQLContext is still present in the Connector, and you are still able to create a CassandraSQLContext in DSE.
In DataStax Enterprise there is already a HiveMetastore written to work with Cassandra. This custom metastore automatically registers all Cassandra tables, so the CassandraSQLContext was seen as redundant, confusing, and less featured than its Hive counterpart. To this end, it is recommended that all users use a HiveContext instead of the CassandraSQLContext, and we removed the automatic cc object from the shell.
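As a rough sketch of the recommended approach, assuming a DSE release with Spark 1.3+ (DataFrame API) and a hypothetical Cassandra table ks.users already registered by DSE's Cassandra-backed metastore:

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.hive.HiveContext;

public class HiveContextSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("hive-context-sketch");
        JavaSparkContext jsc = new JavaSparkContext(conf);

        // DSE's Cassandra-backed Hive metastore registers Cassandra tables automatically,
        // so they can be queried with HiveQL instead of the CassandraSQLContext subset.
        HiveContext hive = new HiveContext(jsc.sc());
        hive.sql("SELECT id, name FROM ks.users LIMIT 10").show();

        jsc.stop();
    }
}
```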

kundera.client.lookup.class options for Cassandra

While configuring Kundera for Cassandra, I notice there are 3 possible options for kundera.client.lookup.class, listed below:
com.impetus.client.cassandra.pelops.PelopsClientFactory
com.impetus.kundera.client.cassandra.dsdriver.DSClientFactory
com.impetus.client.cassandra.thrift.ThriftClientFactory
I am not sure of the pros and cons of the above 3, and hence not sure which one to use. Please help me decide.
I suggest you use com.impetus.client.cassandra.thrift.ThriftClientFactory. It is the implementation that uses just Cassandra's Thrift API.
PelopsClient is not in active development.
DSClient is built on top of the DataStax driver for Cassandra.
There is no real advantage of using either DSClient or ThriftClient.
After further research, I found the following:
Don't use PelopsClient, as it's not in active development, as mentioned by @karthik, but more importantly because of the issue reported here.
The DataStax driver is better than the Thrift client, as it overcomes a few limitations of Thrift and uses a different binary protocol specific to Cassandra, which gives better performance. Refer to Datastax java driver support for Cassandra using Kundera.
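Whichever factory you settle on, here is a minimal sketch of selecting it programmatically through the standard JPA bootstrap; the persistence unit name cassandra_pu is a placeholder, the same kundera.client.lookup.class property is more commonly set in persistence.xml, and whether externally passed properties are honored depends on your Kundera version.

```java
import java.util.HashMap;
import java.util.Map;

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class KunderaClientFactorySketch {
    public static void main(String[] args) {
        // Pick the DataStax-driver-based factory (one of the three options listed above).
        Map<String, String> props = new HashMap<String, String>();
        props.put("kundera.client.lookup.class",
                  "com.impetus.kundera.client.cassandra.dsdriver.DSClientFactory");

        // "cassandra_pu" is a hypothetical persistence unit defined in persistence.xml.
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("cassandra_pu", props);
        EntityManager em = emf.createEntityManager();

        // ... JPA operations against Cassandra via Kundera ...

        em.close();
        emf.close();
    }
}
```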

Spring Cassandra vs. Astyanax performance

I am trying to evaluate the performance of Astyanax and Spring Cassandra, so I wrote a program to measure insertion and read times. It turned out that with large data sets, Astyanax showed up to a 600 times faster insertion rate than Spring Cassandra. I believe Spring Cassandra uses the DataStax driver to communicate with Cassandra, while Astyanax uses Thrift. Can anyone with deeper knowledge of the Cassandra client APIs give me more information on their performance? Does anything look wrong in my analysis?
Astyanax and the Thrift protocol are deprecated in Cassandra. Netflix, who contributed Astyanax, has ceased all new development in favor of the DataStax Java driver.
SDC* uses the DataStax Java driver, which uses the latest protocol, and is very fast in the production environments I have deployed into.
Without your test code, it is impossible to say why you are seeing what you are seeing.
Are you testing reads or writes?
Are you using the spring-data-cassandra or spring-cql module?
Are you explicitly setting the ConsistencyLevel in your SDC* tests?
Which methods of the template or repository are you using for your test?
We can perform 10K writes per second PER NODE in a C* cluster using the DS java driver.
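For reference, here is a minimal sketch of the kind of write path that reaches those rates with the DataStax Java driver: prepared statements plus asynchronous execution, with an explicit consistency level so benchmark runs compare like for like. Assumes driver 2.1+; the contact point, keyspace and table are placeholders.

```java
import com.datastax.driver.core.BoundStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ResultSetFuture;
import com.datastax.driver.core.Session;

import java.util.ArrayList;
import java.util.List;

public class AsyncWriteSketch {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("ks")) {

            // Prepare once, bind many times, with an explicit consistency level.
            PreparedStatement insert = session
                    .prepare("INSERT INTO events (id, payload) VALUES (?, ?)")
                    .setConsistencyLevel(ConsistencyLevel.LOCAL_ONE);

            // Fire writes asynchronously; a real benchmark should bound in-flight requests.
            List<ResultSetFuture> futures = new ArrayList<ResultSetFuture>();
            for (int i = 0; i < 10000; i++) {
                BoundStatement bound = insert.bind("id-" + i, "payload-" + i);
                futures.add(session.executeAsync(bound));
            }

            // Wait for completion so throughput is measured over finished writes only.
            for (ResultSetFuture f : futures) {
                f.getUninterruptibly();
            }
        }
    }
}
```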

Differences between Hector Cassandra and JDBC

I'm currently starting a project that uses Apache Cassandra, so I'm interested in accessing my Cassandra database from Java. For that I'm using Hector. However, I have some doubts about the differences between access via Hector and via Cassandra JDBC (specifically this: https://code.google.com/a/apache-extras.org/p/cassandra-jdbc/).
I believe the following (although I'm not sure if I'm right):
one difference could be that they are APIs of different levels (I consider Hector to be a higher-level API than Cassandra JDBC)?
Cassandra JDBC uses CQL to access/modify the database, while Hector doesn't use CQL (it only uses the methods it provides for that).
I'd be thankful if someone could tell me whether I'm right or wrong in the lines above, and about other differences between the two (Hector and Cassandra JDBC).
Thanks in advance!
The official Cassandra Java driver (https://github.com/datastax/java-driver) is probably the best (IMHO, the only) choice for a new project, for several reasons:
New features
All other Cassandra clients (Hector, Astyanax, etc.) are based on the legacy Thrift RPC protocol. The RPC "one response per request" model has severe limitations; for example, it doesn't allow processing several requests at the same time on a single connection, or streaming large ResultSets.
So DataStax developed a new protocol that doesn't have the RPC limitations. The Thrift API won't be getting new features; it's only kept for backward compatibility. In contrast, the Java driver is actively developed to incorporate the new features of Cassandra 2.0, such as conditional updates and batching of prepared statements. An overview of the new features is here: http://www.datastax.com/dev/blog/cql-in-cassandra-2-0
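As an example of one of those features, here is a minimal sketch of a conditional update (lightweight transaction) issued through the Java driver, assuming Cassandra 2.0+ and the 2.0 driver; the contact point and schema are placeholders.

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class ConditionalUpdateSketch {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("ks");

        // Lightweight transaction: the insert only happens if the row does not already exist.
        Row row = session.execute(
                "INSERT INTO users (id, name) VALUES ('u1', 'Alice') IF NOT EXISTS").one();

        // Cassandra reports the outcome in the special [applied] column.
        System.out.println("applied = " + row.getBool("[applied]"));

        cluster.close();
    }
}
```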
Convenience
In the early Cassandra days (0.7), our company used an in-house low-level Thrift client. Later on we used Hector, Pelops and Astyanax in various projects. I can say that clients based on the Java driver look the simplest and cleanest to me.
Performance
We have done some performance testing of the Cassandra Java driver versus other clients. In most scenarios the performance is roughly the same. However, there are certain situations where the Cassandra Java driver significantly outperforms other clients due to its asynchronous nature.
By the way, there are a couple of related questions with excellent answers:
Advantages of using cql over thrift
Cassandra Client Java API's
EDIT: When I wrote this, I wasn't aware that Achilles (https://github.com/doanduyhai/Achilles), mentioned in another answer, has a CQL implementation that works via the Java driver. For the sake of completeness, I must say that Achilles' DAO on top of CQL might be (or might one day become) a viable alternative to plain CQL via the Java driver.
@mol
Why restrict yourself to Hector and cassandra-jdbc if you're starting a new project?
There are many other interesting choices:
Astyanax as Martin mentioned (Thrift & CQL3)
FireBrand (Thrift via Hector)
Achilles, which I've just developed (CQL3 & Cassandra 2.0 via Java Driver Core)
Java Driver Core for plain CQL3
Hector is indeed a higher-level API. Internally it uses Cassandra's Thrift API to execute its functions; it does not convert them to equivalent CQL calls. Its API also provides access to CQL, in which case it passes the CQL (via Thrift) to Cassandra's CQL APIs.
CQL in Cassandra is a SQL-like language that works via the Cassandra APIs, so it does not provide any capability beyond those APIs, but it does make them easier to use at times. If you are considering Hector, I would also look at Astyanax, which is a newer take on a high-level Java API for Cassandra.
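To illustrate that point, here is a rough sketch of running a CQL query through Hector's own API, which hands the CQL string to Cassandra over the Thrift transport; the cluster name, host:port and keyspace are placeholders, and the exact generics may vary by Hector version.

```java
import me.prettyprint.cassandra.model.CqlQuery;
import me.prettyprint.cassandra.model.CqlRows;
import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.query.QueryResult;

public class HectorCqlSketch {
    public static void main(String[] args) {
        // Hector still speaks Thrift, typically on port 9160.
        Cluster cluster = HFactory.getOrCreateCluster("Test Cluster", "127.0.0.1:9160");
        Keyspace keyspace = HFactory.createKeyspace("ks", cluster);

        // The CQL string is passed (via Thrift) to Cassandra's CQL API.
        CqlQuery<String, String, String> cqlQuery = new CqlQuery<String, String, String>(
                keyspace, StringSerializer.get(), StringSerializer.get(), StringSerializer.get());
        cqlQuery.setQuery("SELECT * FROM users");

        QueryResult<CqlRows<String, String, String>> result = cqlQuery.execute();
        System.out.println("rows returned: " + result.get().getCount());
    }
}
```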
Since you are starting a new project, it is best to start with CQL and the native Java driver:
http://www.datastax.com/documentation/developer/java-driver/1.0/webhelp/index.html#common/drivers/introduction/introArchOverview_c.html
Per DataStax, it is 10-15% faster than the Thrift APIs, as it uses the binary protocol.

Resources