How to create nodes and collections (table equivalent in MySQL) - Cassandra

I am new to Cassandra.
I have more than 10 years of experience working with MySQL and SQL Server, which makes the move to NoSQL databases harder for me.
As you know, SQL databases provide very user-friendly workbenches for working with them.
But the only thing I found for Cassandra was DataStax, and I am not even sure whether I can create nodes and collections in a visual way rather than from the command line. Is it possible to do such a thing in Cassandra?

You can use OpsCenter and DevCenter, both from DataStax. OpsCenter lets you do operations-centric tasks, while DevCenter lets you perform developer-centric tasks.
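Whichever tool you use, creating a keyspace (roughly a MySQL database/schema) and a table (the collection/table equivalent) comes down to a couple of CQL statements, which DevCenter simply runs for you. As a minimal sketch, assuming a placeholder keyspace `demo`, table `users`, and a local contact point, the programmatic equivalent using the DataStax Java driver (called from Scala here) might look like this:

```scala
import com.datastax.driver.core.Cluster

object CreateDemoSchema extends App {
  // Connect to a local node; replace the contact point with one of your own nodes.
  val cluster = Cluster.builder().addContactPoint("127.0.0.1").build()
  val session = cluster.connect()

  // A keyspace is roughly the equivalent of a MySQL database/schema.
  session.execute(
    """CREATE KEYSPACE IF NOT EXISTS demo
      |WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}""".stripMargin)

  // A table is the equivalent of a MySQL table.
  session.execute(
    """CREATE TABLE IF NOT EXISTS demo.users (
      |  user_id uuid PRIMARY KEY,
      |  name    text,
      |  email   text)""".stripMargin)

  cluster.close()
}
```

The same two CREATE statements can be typed interactively into DevCenter or cqlsh instead.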

Related

Does ScyllaDB have migration support to GKE similar to K8ssandra's Zero Downtime Migration feature?

We are trying to migrate our ScyllaDB cluster deployed on GCE machines to a GKE cluster in Google Cloud. We came across an approach used for Cassandra migration and would like to implement the same for our ScyllaDB migration. Below is the link; can you please suggest whether this is possible in Scylla, or whether Scylla hasn't introduced such a migration technique with the Scylla K8s operator?
https://k8ssandra.io/blog/tutorials/cassandra-database-migration-to-kubernetes-zero-downtime/
Adding a new "destination" DC to your existing "source" DC is a very common technique for migrating to a new DC:
1) Add the new "destination" DC.
2) Change the replication factor settings accordingly (a minimal sketch follows the links below).
3) Run nodetool rebuild to stream data from the "source" DC to the "destination" DC.
4) Run nodetool repair on the new DC.
5) Update your application clients to connect to the new DC once it is ready to serve (all data streamed and repaired).
6) Decommission the "old" (source) DC.
For the gory details, see:
https://docs.scylladb.com/stable/operating-scylla/procedures/cluster-management/add-dc-to-existing-dc.html
https://docs.scylladb.com/stable/operating-scylla/procedures/cluster-management/decommissioning-data-center.html
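For step 2, the replication change is a single ALTER KEYSPACE statement per keyspace you want replicated to the new DC. Below is a minimal, hypothetical sketch (the keyspace name, DC names, contact point, and replication factors are placeholders for your own) using the DataStax-compatible Java driver from Scala; the same statement can just as well be run from cqlsh:

```scala
import com.datastax.driver.core.Cluster

object ExtendReplicationToNewDc extends App {
  val cluster = Cluster.builder().addContactPoint("10.0.0.1").build()
  val session = cluster.connect()

  // Step 2: keep replicas in both the source and the destination DC.
  session.execute(
    """ALTER KEYSPACE my_keyspace WITH replication = {
      |  'class': 'NetworkTopologyStrategy',
      |  'DC_source': 3,
      |  'DC_destination': 3
      |}""".stripMargin)

  cluster.close()

  // Steps 3-4 are then run from the shell on each node of the new DC,
  // per the procedures linked above:
  //   nodetool rebuild  (naming the source DC, to stream the existing data over)
  //   nodetool repair   (to make sure the new DC is consistent)
}
```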
If you prefer to go the full-scan route (CQL reads on the source and CQL writes on the destination, with some ability for data manipulation and save points to resume from), then the Scylla Spark Migrator is a good option.
https://github.com/scylladb/scylla-code-samples/tree/master/spark-scylla-migrator-demo
You can also use the Scylla Spark Migrator to migrate Parquet files:
https://www.scylladb.com/2020/06/10/migrate-parquet-files-with-the-scylla-migrator/
Remember not to migrate materialized views (MVs); you can always re-create them from the base tables after the migration.
We use an Apache Spark-based Migrator: https://github.com/scylladb/scylla-migrator
Here's the blog we wrote on how to do this back in 2019: https://www.scylladb.com/2019/02/07/moving-from-cassandra-to-scylla-via-apache-spark-scylla-migrator/
Though in this case, you aren't moving from Cassandra to ScyllaDB; just moving from one ScyllaDB instance to another. If this makes sense to you, it should be straightforward. If you have questions, feel free to join our Slack community to get more interactive assistance:
http://slack.scylladb.com/

Quartz Scheduler with Cassandra NoSQL database for Cluster failover mechanism

I am using WebLogic 12c in a cluster environment,
and I am using Cassandra for database operations.
My requirement is to execute a batch job which picks up records from the DB, processes them, and uploads them to a web service.
For this I am looking at the Quartz JDBCJobStore implementation.
For a normal SQL database we can achieve this with the JDBC JobStore;
however, I am struggling to work out how to implement it on Cassandra.
I want to create a JDBCJobStore-style job store for a NoSQL database like Cassandra.
Any help would be great on this.
It would be helpful if some example implementation of quartz.properties and the table script were given.
Update: This delivers an answer for a SQL DB. I want the same for a NoSQL DB.

3 Cassandra nodes with one being a Spark master - to query geospatial or geographic data

I am looking for directions:
I have a Cassandra database with latitude & longitude data. I need to search for data within a radius, or within a box of coordinates, around a point. I am using a golang (gocql) client to query Cassandra.
I need some understanding regarding Spark and Cassandra, as this seems like the way to go.
Are the following assumptions correct? I have 2 Cassandra nodes (the data has a replication factor of 2).
Should I then install an extra node, install Spark on it, and then connect it to the other two existing Cassandra nodes containing the data (with the Spark Connector from DataStax)?
And do the two existing Cassandra nodes need to have Spark workers installed on them to work with the Spark master node?
When the Spark setup is in place, do you query (Scala) the existing data, then save the data onto the Spark node, and then query this with the golang (gocql) client?
Any directions are welcome.
Thanks in advance.
Geospatial searching is a pretty deep topic. If it's just searches you're after (not batch/analytics), I can tell you that you probably don't want to use Spark. Spark isn't very good at 'searching' for data, even when it's geospatial. The main reason is that Spark doesn't index data for efficient searches, and you'd have to create a job/context (unless you're using a job server) every time you want to do a search. That takes forever when you're thinking in terms of user-facing application time.
Solr, Elasticsearch, and DataStax Enterprise Search (disclaimer: I work for DataStax) are all capable of box and radius searches on Cassandra data, and do so in near real time.
To answer your original question though, if the bulk of your analytics in general come from Cassandra data, it may be a good idea to run Spark on the same nodes as Cassandra for data locality. The nice thing is that Spark scales quite nicely, so if you find Spark taking too many resources from Cassandra, you can simply scale out (both Cassandra and Spark).
Should I then install an extra node, install Spark on it, and then connect it to the other two existing Cassandra nodes containing the data (with the Spark Connector from DataStax)?
Spark is a cluster compute engine so it needs a cluster of nodes to work well. You'll need to install it on all nodes if you want it to be as efficient as possible.
And do the two existing Cassandra nodes need to have Spark workers installed on them to work with the Spark master node?
I don't think they 'have' to have them, but it's a good idea for locality. There's a really good video on academy.datastax.com that shows how the spark cassandra connector reads data from Cassandra to Spark. I think it will clear a lot of things up for you: https://academy.datastax.com/demos/how-spark-cassandra-connector-reads-data
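As a minimal sketch of what that read path looks like in code (the keyspace/table names and contact point below are placeholders), the connector exposes Cassandra tables as Spark RDDs:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import com.datastax.spark.connector._ // adds cassandraTable() and saveToCassandra()

object ReadFromCassandra extends App {
  // Point Spark at one of your Cassandra nodes; locality works best when
  // Spark workers run on the same machines as the Cassandra nodes.
  val conf = new SparkConf()
    .setAppName("read-example")
    .set("spark.cassandra.connection.host", "10.0.0.1")
  val sc = new SparkContext(conf)

  // Each Spark partition reads the token ranges owned by the node it runs on.
  val rows = sc.cassandraTable("geo", "locations")
  println(rows.count())

  sc.stop()
}
```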
When the Spark setup is in place, do you query (Scala) the existing data, then save the data onto the Spark node, and then query this with the golang (gocql) client?
The Spark Cassandra Connector can communicate with both Cassandra and Spark. There are methods, saveToCassandra() for example, that will write data back to Cassandra after your jobs are processed. Then you can use your client as you normally would.
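For instance (again with placeholder keyspace, table, and column names), a job might read raw coordinates, keep only the ones inside a rough bounding box, and write the result to a second table that your golang (gocql) client then queries directly:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import com.datastax.spark.connector._

object WriteBackToCassandra extends App {
  val conf = new SparkConf()
    .setAppName("write-back-example")
    .set("spark.cassandra.connection.host", "10.0.0.1")
  val sc = new SparkContext(conf)

  // Read the source table and keep only rows inside a rough bounding box...
  val inBox = sc.cassandraTable("geo", "locations")
    .filter { row =>
      val lat = row.getDouble("lat")
      val lon = row.getDouble("lon")
      lat > 55.5 && lat < 55.8 && lon > 12.4 && lon < 12.7
    }
    .map(row => (row.getUUID("id"), row.getDouble("lat"), row.getDouble("lon")))

  // ...and write the result back to a plain Cassandra table
  // that the golang (gocql) client can read with ordinary CQL.
  inBox.saveToCassandra("geo", "locations_in_box", SomeColumns("id", "lat", "lon"))

  sc.stop()
}
```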
There are some really good free Spark + Cassandra tutorials at academy.datastax.com. This is also a good place to start: http://rustyrazorblade.com/2015/01/introduction-to-spark-cassandra/

Using Spark in conjunction with Cassandra?

In our current infrastructure we use a Cassandra cluster as our backend database, and via Solr we use a web UI for our customers to perform read queries on our database as necessary.
I've been asked to look into Spark as something that we could implement in the future, but I'm having trouble understanding how it will improve what we currently do.
So my basic questions are:
1) Is Spark something that would replace Solr for querying the database, like when a user is looking something up on our site?
2) Just a general idea, what type of infrastructure would be necessary to improve our current situation (5 Cassandra nodes, all of which also run Solr).
In other words, would we simply be looking at building another cluster of just Spark nodes?
3) Can Spark nodes run on the same physical machine as Cassandra? I'm guessing it would be a bad idea due to memory constraints as my very basic understanding of Spark is that it does everything in memory.
4) Any good quick/basic resources I can use to start figuring out how Spark might benefit us? I have access to Datastax Academy courses so I'm going through those, just wondering if there is anything else to help with my research.
Basically once I figure out what it is, and more importantly how/if it is something we can use to our advantage I'll start playing with some test instances, but I should probably familiarize myself with the basics first.
1) No. Spark is a batch processing system and Solr is a live indexing solution. Latency on Solr is going to be sub-second, while Spark jobs are meant to take minutes (or more). There should really be no situation where Spark can be a drop-in replacement for Solr.
2) I generally recommend a second datacenter running both C* and Spark on the same machines. This will receive the data from the first datacenter via replication.
3) Spark does not do everything in memory. Depending on your use case, it can be a great idea to run it on the same machines as C*. This can allow for data locality when reading from C* and help significantly with table scan times. I usually also recommend colocating Spark executors and C* nodes (a small sketch follows point 4 below).
4) The DS Academy 320 course is probably the best resource out there at the moment. https://academy.datastax.com/courses/getting-started-apache-spark
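To make point 3 concrete, here is a hypothetical batch aggregation (the keyspace, table, and column names are made up) over a whole Cassandra table using the Spark Cassandra Connector's DataFrame API; it scans the table in bulk, which is exactly the kind of work that belongs in Spark rather than Solr:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object OrdersPerCustomer extends App {
  val spark = SparkSession.builder()
    .appName("orders-per-customer")
    .config("spark.cassandra.connection.host", "10.0.0.1")
    .getOrCreate()

  // Full-table scan of a Cassandra table -- fine for batch analytics,
  // but exactly what you would NOT do per user request (that's Solr's job).
  val orders = spark.read
    .format("org.apache.spark.sql.cassandra")
    .options(Map("keyspace" -> "shop", "table" -> "orders"))
    .load()

  orders.groupBy("customer_id")
    .agg(count("*").as("order_count"), sum("total").as("lifetime_value"))
    .show(20)

  spark.stop()
}
```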

Possibilities of Hadoop with MSSQL Reporting

I have been evaluating Hadoop on Azure HDInsight to find a big data solution for our reporting application. The key part of this technology evaluation is that I need to integrate with MSSQL Reporting Services, as that is what our application already uses. We are very short on developer resources, so the more I can make this into an engineering exercise the better. What I have tried so far:
Use an ODBC connection from MSSQL mapped to Hive on HDInsight.
Use an ODBC connection from MSSQL using HBase on HDInsight.
Use Spark SQL locally on the Azure HDInsight remote desktop.
What I have found is that HBase and Hive are far slower to use with our reports. For test data I used a table with 60k rows and found that the report on MSSQL ran in less than 10 seconds. I ran the query on the Hive query console and over the ODBC connection and found that it took over a minute to execute. Spark was faster (30 seconds), but there is no way to connect to it externally since ports cannot be opened on the HDInsight cluster.
Big data and Hadoop are all new to me. My question is, am I looking for Hadoop to do something it is not designed to do, and are there ways to make this faster? I have considered caching results and periodically refreshing them, but it sounds like a management nightmare. Kylin looks promising, but we are pretty married to Windows Azure, so I am not sure that is a viable solution.
Look at this documentation on optimizing Hive queries: https://azure.microsoft.com/en-us/documentation/articles/hdinsight-hadoop-optimize-hive-query/
Specifically look at ORC and using Tez. I would create a cluster that has Tez on by default and then store your data in ORC format. Your queries should be much more performant then.
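The Hive-side details are in the linked article. As a rough sketch of the ORC idea, shown here with Spark SQL rather than Hive DDL (since Spark is already on the cluster; the table and column names are made up), converting an existing text-backed table into an ORC-backed one is a single CTAS statement, and the reports then query the ORC copy:

```scala
import org.apache.spark.sql.SparkSession

object ConvertToOrc extends App {
  // Spark SQL with Hive support sees the same tables the Hive console does.
  val spark = SparkSession.builder()
    .appName("convert-to-orc")
    .enableHiveSupport()
    .getOrCreate()

  // Copy the existing (text/CSV-backed) table into an ORC-backed one.
  spark.sql("CREATE TABLE report_data_orc STORED AS ORC AS SELECT * FROM report_data")

  // Reports then hit the ORC table, which is columnar and much cheaper to scan.
  spark.sql("SELECT region, SUM(amount) FROM report_data_orc GROUP BY region").show()

  spark.stop()
}
```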
If going through Spark is fast enough, you should consider using the Microsoft Spark ODBC driver. I am using it, and while the performance is not comparable to what you'll get with MSSQL, other RDBMSs, or something like Elasticsearch, it does work pretty reliably.
