Insert Data using Spark in Cassandra

I am writing 1.2 billion rows of data (two columns) to Cassandra using Spark and the DataStax Spark connector. I have a two-DC setup and will be writing with LOCAL_QUORUM, with a replication factor of 3 in each DC. Will the other DC introduce any latency? What other things should I keep in mind while inserting the data? I have tested on a single DC and the results are satisfactory.

Writes will be sent to the other DC anyway, but because you're using LOCAL_QUORUM, Spark won't wait for confirmation from the nodes in that DC, so it shouldn't affect latency. The only thing I would monitor: if the other DC is far away and/or has a slow link, the nodes handling the writes may start to collect hints, and if that happens it may slightly affect performance, because hints need to be written and then replayed once the remote node is back.
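For reference, a minimal sketch of such a write with the Spark Cassandra Connector, assuming the connector is on the classpath; the contact point, DC name, keyspace, table, and column names are placeholders, and the exact property names can differ slightly between connector versions:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("bulk-insert")
  .config("spark.cassandra.connection.host", "10.0.0.1")            // contact point (placeholder)
  // Pin the driver to the local DC so coordinators are chosen there.
  // (Named spark.cassandra.connection.local_dc in connector 2.x, localDC in 3.x.)
  .config("spark.cassandra.connection.localDC", "DC1")
  // LOCAL_QUORUM: only wait for a quorum of replicas in the local DC.
  .config("spark.cassandra.output.consistency.level", "LOCAL_QUORUM")
  .getOrCreate()

import spark.implicits._

// Tiny two-column frame standing in for the real 1.2 billion-row source.
val df = Seq((1L, "a"), (2L, "b")).toDF("id", "value")

df.write
  .format("org.apache.spark.sql.cassandra")
  .options(Map("keyspace" -> "my_ks", "table" -> "my_table"))
  .mode("append")
  .save()
```

Beyond the consistency level, the connector's throughput knobs (rows per batch, concurrent writes) are what usually need tuning for a load of this size.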

Related

Data Inconsistency in Cassandra Cluster after migration of data to a new cluster

I see some data inconsistency after moving data to a new cluster.
The old cluster has 9 nodes in total, each with 2+ TB of data.
The new cluster has the same number of nodes and the same configuration.
Here is what I performed, in order:
Took a nodetool snapshot on the old cluster.
Copied the snapshot to the destination cluster.
Created the new keyspace on the destination cluster.
Loaded the data with the sstableloader utility.
Restarted all nodes.
After the transfer completed successfully, I ran a few queries to compare the old and new clusters and found that the new cluster is not consistent, although the data is properly distributed across the nodes (nodetool status).
The same query returns different result sets for some partitions: zero rows the first time, 100 rows the second time, then 200 rows, and eventually it becomes consistent for a few of the partitions, with the record count matching the old cluster.
A few partitions have no data in the new cluster, whereas the old cluster has data for them.
I tried running the queries in cqlsh with CONSISTENCY ALL, but the problem still exists.
Did I miss any important steps before or after the migration?
Is there a procedure to find the root cause of this?
I am currently running nodetool repair, but I doubt it will help, since I already tried CONSISTENCY ALL.
Your help is highly appreciated!
The fact that the results eventually become consistent indicates that the replicas are out of sync.
You can verify this by reviewing the logs around the time you were loading the data, particularly for dropped mutations. You can also check the output of nodetool netstats. If you're seeing blocking read repairs, that's another confirmation that the replicas are out of sync.
If you still have other partitions you can test, enable TRACING ON in cqlsh when you query with CONSISTENCY ALL. The trace output will show whether there are digest mismatches, which should also trigger read repairs. Cheers!
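If you'd rather script that check than run it interactively in cqlsh, a rough equivalent with the Java driver (4.x) called from Scala might look like the sketch below; the keyspace, table, and partition key value are placeholders:

```scala
import com.datastax.oss.driver.api.core.{ConsistencyLevel, CqlSession}
import com.datastax.oss.driver.api.core.cql.SimpleStatement
import scala.jdk.CollectionConverters._

// Connects to localhost by default; point it at a node of the new cluster.
val session = CqlSession.builder().build()

val stmt = SimpleStatement
  .newInstance("SELECT * FROM my_ks.my_table WHERE pk = ?", Int.box(42))
  .setConsistencyLevel(ConsistencyLevel.ALL) // force every replica to answer
  .setTracing(true)                          // same effect as TRACING ON in cqlsh

val rs = session.execute(stmt)

// Digest mismatch events in the trace indicate out-of-sync replicas
// and a blocking read repair being triggered.
rs.getExecutionInfo.getQueryTrace.getEvents.asScala
  .map(_.getActivity)
  .filter(_.toLowerCase.contains("mismatch"))
  .foreach(println)
```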
[EDIT] Based on your comments below, it sounds like you possibly did not load the snapshots from ALL the nodes in the source cluster with sstableloader. If you missed loading some SSTables into the target cluster, that would explain why data is missing.

Maintaining RF when a node fails

Does Cassandra maintain the RF when a node goes down? For example, if the number of nodes is 5 and the RF is 2, then when a single node goes down, does the remaining replica copy its data to some other node to maintain the RF of 2?
The DataStax documentation mentions that "If a node fails, the load is spread evenly across other nodes in the cluster". Does this mean that data is migrated when a node goes down? Is this a feature available only in DataStax's Cassandra and not in Apache Cassandra?
No. Instead, a "hint" is stored on the coordinator node and is eventually written to the node that owns the token range once that node comes back up. Whether the write succeeds depends on your consistency level: in the above example the write will succeed if you are writing with a consistency level of ONE.
If the node is down only for a short period, it will receive the data back via hints from the other nodes when it comes back. But if you decommission a node, its data is replicated to the other nodes, which take over the vacated token ranges (the same rebalancing happens when a node is added to the cluster).
Over time, the data in one replica can become inconsistent with the others, and the repair process helps Cassandra fix this: https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsRepairNodesTOC.html
This applies to Apache Cassandra as well.
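To make the consistency-level point concrete, here is a minimal sketch (Java driver 4.x from Scala, placeholder keyspace/table) of a write in the RF=2 scenario above with one replica down: at ONE it succeeds (the live replica acknowledges and a hint is stored for the dead one), while the same statement at TWO or ALL would fail with an UnavailableException:

```scala
import com.datastax.oss.driver.api.core.{ConsistencyLevel, CqlSession}
import com.datastax.oss.driver.api.core.cql.SimpleStatement

val session = CqlSession.builder().build()

val insert = SimpleStatement
  .newInstance("INSERT INTO my_ks.my_table (pk, value) VALUES (?, ?)", Int.box(1), "v")
  .setConsistencyLevel(ConsistencyLevel.ONE) // one acknowledgement is enough

// Succeeds as long as at least one replica for this partition is up;
// the coordinator stores a hint for the down replica.
session.execute(insert)
```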

How to speed up the node joining process in a Cassandra cluster

I have a cluster of 4 Cassandra nodes. I recently added a new node, but the data streaming is taking too long. Is there a way to make this process faster? Output of nodetool attached.
Less data per node. Your screenshot shows 80TB per node, which is insanely high.
The recommendation is 1 TB per node, 2 TB at most. The logic behind this is that bootstrap times get too high (as you have noticed). A good Cassandra ring should be able to recover rapidly from node failure. What happens if other nodes fail while the first one is rebuilding?
Keep in mind that the typical model for Cassandra is lots of smaller nodes, in contrast to SQL, where you would have a few really powerful servers (scale out vs. scale up).
So I would fix the problem by growing your cluster to 10x-20x the number of nodes.
https://groups.google.com/forum/m/#!topic/nosql-databases/FpcSJcN9Opw

Cassandra throughput decreases when moving from a "single data node" to a "two data node" Cassandra cluster

I have a single-data-node Cassandra cluster (version 3.11.2) and the Cassandra C++ driver (version 2.7). The single-node cluster holds 500,000 rows. I read asynchronously and push the data to a queue, where a scheduler picks it up and writes it asynchronously using the C++ driver. I have 10 application threads, 10 I/O threads, and 10 scheduler threads. I get a TPS of 38,000.
But when I ran the same workload against a "two data node" Cassandra cluster (both nodes on the same rack), reading and writing with consistency level TWO, my TPS dropped to 12,000.
Why does my performance degrade so much when all the configuration and the client binary are the same, and the only change is setting the read and write consistency to TWO?
What do I need to do to get a TPS of around 40,000? Do I need to add more data nodes?
The TWO consistency level means that when you read, you need to get data from two nodes, and this adds latency. The same holds for writes: when you write with TWO, two nodes must confirm that the data is written, which also adds latency.
I would recommend reading the section on consistency levels in the DSE Architecture guide (better, the whole guide) to get an understanding of how they work.
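One way to see how much of the drop comes from the consistency level alone is to rerun the same query at ONE and at TWO with everything else unchanged and compare latencies. A rough sketch with the Java driver (4.x) from Scala, with placeholder names; the C++ driver exposes an analogous per-statement consistency setting:

```scala
import com.datastax.oss.driver.api.core.{ConsistencyLevel, CqlSession}
import com.datastax.oss.driver.api.core.cql.SimpleStatement

val session = CqlSession.builder().build()

// Same query, two consistency levels: ONE waits for a single replica,
// TWO waits for acknowledgements from two replicas and therefore pays
// the latency of the slower of the two.
val readAtOne = SimpleStatement
  .newInstance("SELECT value FROM my_ks.my_table WHERE pk = ?", Int.box(1))
  .setConsistencyLevel(ConsistencyLevel.ONE)

val readAtTwo = readAtOne.setConsistencyLevel(ConsistencyLevel.TWO)

session.execute(readAtOne) // baseline latency
session.execute(readAtTwo) // expect higher latency on a two-node cluster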

Cassandra Fast Read Configuration

I have 4 Cassandra nodes with 1 seed in a single data center. I have about 5M records, which Cassandra takes around 4 minutes to read, whereas MySQL takes only 17 seconds. My guess is that there is something wrong with my configuration. Could anyone let me know which configuration attributes I have to check in cassandra.yaml?
You may be doing an apples to oranges comparison if you are reading all 5M records from one client.
With MySQL all the data is local and optimized for reads since data is updated in place.
Cassandra is distributed and optimized for writes. Writes are simple appends, but reads are expensive since all the appends need to be read and merged to get the current value of each column.
Since the data is distributed across multiple nodes, there is a lot of overhead of accessing and retrieving the data over the network.
If you were using Spark with Cassandra and loading the data into Spark workers in parallel without shuffling it across the network to a single client, then it would be a more similar comparison.
Cassandra is generally good at ingesting large amounts of data and then working on small slices of it (i.e. partitions) rather than doing table scan operations such as reading the entire table.
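To illustrate the Spark comparison above, here is a minimal sketch of a parallel full-table read with the Spark Cassandra Connector (placeholder keyspace, table, and contact point): each executor reads its own share of token ranges, so the rows are never funnelled through a single client:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("full-scan")
  .config("spark.cassandra.connection.host", "10.0.0.1") // contact point (placeholder)
  .getOrCreate()

// Each Spark task scans a subset of token ranges, ideally on a co-located node.
val df = spark.read
  .format("org.apache.spark.sql.cassandra")
  .options(Map("keyspace" -> "my_ks", "table" -> "my_table"))
  .load()

// The aggregation runs on the workers; only the small result reaches the driver.
println(df.count())
```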
