Cassandra consistency issue?

We have a Cassandra cluster in three different datacenters (DC1, DC2 and DC3), with 10 machines in each datacenter. We have a few tables in Cassandra, each holding fewer than 100 records.
What we are seeing is that some tables are out of sync between machines in DC3 as compared to DC1 or DC2 when we do a select count(*) on them.
For example, we ran select count(*) while connected to one Cassandra machine in the DC3 datacenter, and again on one machine in the DC1 datacenter, and the results were different.
root@machineA:/home/david/apache-cassandra/bin# python cqlsh dc3114.dc3.host.com
Connected to TestCluster at dc3114.dc3.host.com:9160.
[cqlsh 2.3.0 | Cassandra 1.2.9 | CQL spec 3.0.0 | Thrift protocol 19.36.0]
Use HELP for help.
cqlsh> use testingkeyspace ;
cqlsh:testingkeyspace> select count(*) from test_metadata ;
count
-------
12
cqlsh:testingkeyspace> exit
root@machineA:/home/david/apache-cassandra/bin# python cqlsh dc18b0c.dc1.host.com
Connected to TestCluster at dc18b0c.dc1.host.com:9160.
[cqlsh 2.3.0 | Cassandra 1.2.9 | CQL spec 3.0.0 | Thrift protocol 19.36.0]
Use HELP for help.
cqlsh> use testingkeyspace ;
cqlsh:testingkeyspace> select count(*) from test_metadata ;
count
-------
16
What could be the reason for this sync issue? Is this ever supposed to happen? Can anyone shed some light on this?
Our Java driver code and DataStax C++ driver code both use these tables with consistency level ONE.

What's your replication strategy? For cross-datacenter deployments, you should be looking at NetworkTopologyStrategy with a replication factor specified for each datacenter. Then during your queries, you can specify QUORUM, LOCAL_QUORUM, etc. However, think about this for a minute:
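For example (the replication factors here are illustrative; pick values that match your durability needs):

ALTER KEYSPACE testingkeyspace
WITH replication = {'class': 'NetworkTopologyStrategy',
                    'DC1': 3, 'DC2': 3, 'DC3': 3};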
You have a distributed cluster with multiple datacenters. If you want EACH_QUORUM, think about what you're asking Cassandra to do: for reads or writes, you ask it to reach a quorum in each datacenter separately before returning success. Think about the latencies involved, and about network connections going down.
For a write, the node the client contacts becomes the coordinator. It sends the write to the datacenter-local replicas and to one node in each remote datacenter; that recipient forwards it to its own local replicas. Once done, responses flow back, and when the initial coordinator receives enough of them, it returns. All is well. Slow, but well. Now, if a coordinator doesn't know that a node is down, it still sends it the write. The write completes when the node comes back up, but the client can get a write timeout (note: not a failure, the write will eventually succeed). This can happen more often between multiple datacenters.
You're looking to do count(*) queries. This is in general a terrible idea: it needs to hit every partition of the table. Cassandra likes queries that hit a single partition, or at least a small number of partitions (via an IN filter).
Think about what select count(*) does in a distributed system. What does the result even mean? The result can be stale an instant later; there may be another insert in some other datacenter while you're processing the result of the query.
If you're looking to do aggregations over many or all partitions, consider pairing Cassandra with Spark, rather than trying to do select count(*) across datacenters. And to go back to the earlier point, don't assume (or depend on) cross-datacenter immediate consistency. Embrace eventual consistency, and design your applications around that.
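As a rough sketch of that pairing with PySpark (this assumes the spark-cassandra-connector package is on the classpath; the host, keyspace, and table names are taken from the question above and purely illustrative):

from pyspark.sql import SparkSession

# Hypothetical Spark session pointed at one Cassandra datacenter
spark = (SparkSession.builder
         .appName("cassandra-aggregation")
         .config("spark.cassandra.connection.host", "dc18b0c.dc1.host.com")
         .getOrCreate())

# Load the table as a DataFrame; Spark scans the token ranges in parallel
df = (spark.read.format("org.apache.spark.sql.cassandra")
      .options(keyspace="testingkeyspace", table="test_metadata")
      .load())

print(df.count())  # the aggregation runs in Spark, not on a single coordinator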
Hope that helps.

Related point, you can query with different consistency levels from cqlsh. Just run:
CONSISTENCY EACH_QUORUM;
or
CONSISTENCY ALL;
etc.
The setting persists for the rest of your cqlsh session, or until you replace it with another CONSISTENCY statement.
EACH_QUORUM or ALL should guarantee you the same response regardless of your coordinator node, though performance will take a hit. See ashic's point on count(*) in general. If this is a common query, another option is to maintain the count in a separate table.
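For instance, a hypothetical counter table maintained alongside writes to test_metadata:

CREATE TABLE testingkeyspace.test_metadata_count (
    id    int PRIMARY KEY,
    total counter
);

-- Increment on every insert, decrement on every delete
UPDATE testingkeyspace.test_metadata_count SET total = total + 1 WHERE id = 0;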

Related

Cassandra Spark Connector: requirement failed, contact points contain multiple data centers

I have two Cassandra datacenters, with all servers in the same building, connected with a 10 Gbps network. The RF is 2 in each datacenter.
I need to ensure strong consistency inside my app, so I first planned to use QUORUM consistency (3 replicas out of 4 must respond) on both reads and writes. With that configuration, I can also tolerate the crash of a node in a particular datacenter.
So I set multiple contact points from multiple datacenters in my Spark connector, but the following error is immediately returned: requirement failed, contact points contain multiple data centers
So I looked at the documentation. It says:
Connections are never made to data centers other than the data center of spark.cassandra.connection.host [...]. This technique guarantees proper workload isolation so that a huge analytics job won't disturb the realtime part of the system.
Okay. So after reading that, I plan to switch to LOCAL_QUORUM (2 replicas out of 2 must respond) on writes and LOCAL_ONE on reads, to still get strong consistency, connecting by default to datacenter1.
The problem is still consistency, because Spark apps working on the second datacenter, datacenter2, don't have strong consistency on writes, because data are just asynchronously synchronized from datacenter1.
To avoid that, I can set the write consistency to EACH_QUORUM (equivalent to ALL with an RF of 2). But the problem in that case is that if a single node is unresponsive or down, no writes can be processed at all.
So my only option, to get both some fault tolerance AND strong consistency, is to raise my replication factor from 2 to 3 in each datacenter, then use EACH_QUORUM on writes and LOCAL_QUORUM on reads? Is that correct?
Thank you
This comment indicates there is some misunderstanding on your part:
... because data are just asynchronously synchronized from datacenter1.
so allow me to clarify.
The coordinator of a write request sends each mutation (INSERT, UPDATE, DELETE) to ALL replicas in ALL data centres in real time. It doesn't happen at some later point in time (e.g. 2 seconds, 10 seconds or 1 minute later); the write is sent to all DCs at the same time, without delay, regardless of whether you have a 1 Mbps or 10 Gbps link between DCs.
We also recommend a minimum of 3 replicas in each DC in production, as well as using LOCAL_QUORUM for both reads and writes. There are very limited edge cases where these recommendations do not apply.
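A minimal sketch of that recommendation with the DataStax Python driver (the contact point and keyspace are hypothetical):

from cassandra.cluster import Cluster, ExecutionProfile, EXEC_PROFILE_DEFAULT
from cassandra import ConsistencyLevel

# Default every request to LOCAL_QUORUM
profile = ExecutionProfile(consistency_level=ConsistencyLevel.LOCAL_QUORUM)
cluster = Cluster(["10.0.0.1"],
                  execution_profiles={EXEC_PROFILE_DEFAULT: profile})
session = cluster.connect("my_keyspace")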
The spark-cassandra-connector requires all contact points to belong to the same DC so that:
analytics workloads do not impact the performance of OLTP DCs (as you already pointed out), and
it can achieve data-locality for optimal performance where possible.
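In practice that means giving the connector contact points from a single DC, and setting its consistency properties if you need stronger guarantees. A rough sketch (the host name is hypothetical; check the property names against your connector version):

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         # Contact point(s) from ONE datacenter only
         .config("spark.cassandra.connection.host", "node1.datacenter1")
         # Local quorum for both reads and writes
         .config("spark.cassandra.input.consistency.level", "LOCAL_QUORUM")
         .config("spark.cassandra.output.consistency.level", "LOCAL_QUORUM")
         .getOrCreate())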

Apache Cassandra Reading explanation

I am currently managing a Percona XtraDB cluster composed of 5 nodes that handles millions of inserts every day. Write performance is very good, but reading is not so fast, especially when I request a big dataset.
The records inserted are sensor time series.
I would like to try Apache Cassandra to replace the Percona cluster, but I don't understand how data reading works. I am looking for something able to split a query across all the nodes and read in parallel from more than one node.
I know that Cassandra sharding can have shard replicas.
If I have 5 nodes and I set a replication factor of 5, will reading be 5x faster?
Cassandra read path
The read request initiated by a client is sent to a coordinator node, which asks the partitioner which replicas are responsible for the data and checks whether the consistency level can be met.
The coordinator checks whether it is itself responsible for the data. If yes, it satisfies the request. If not, it sends the request to the fastest-answering replica (determined using the dynamic snitch). A digest request is also sent to the other replicas.
The coordinator compares the returned data digests, and if all are the same and the consistency level has been met, the data is returned from the fastest-answering replica. If the digests are not the same, the coordinator issues read repair operations.
On each node a few steps are performed: check the row cache, check the memtables, check the SSTables. More information: How is data read? and ReadPathForUsers.
Load balancing queries
Since you have a replication factor equal to the number of nodes, each node will hold all of your data. So when a coordinator node receives a read query, it can satisfy it from itself (in particular, if you use a LOCAL_ONE consistency level, the request will be pretty fast).
The client drivers implement the load balancing policies, which means that on your client you can configure how the queries will be spread around the cluster. Some more reading: ClientRequestsRead
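For example, with the DataStax Python driver, a token-aware, DC-aware policy looks roughly like this (the contact point is hypothetical):

from cassandra.cluster import Cluster, ExecutionProfile, EXEC_PROFILE_DEFAULT
from cassandra.policies import TokenAwarePolicy, DCAwareRoundRobinPolicy

# Route each query straight to a replica owning the partition,
# preferring nodes in the local datacenter
profile = ExecutionProfile(
    load_balancing_policy=TokenAwarePolicy(DCAwareRoundRobinPolicy()))
cluster = Cluster(["127.0.0.1"],
                  execution_profiles={EXEC_PROFILE_DEFAULT: profile})
session = cluster.connect()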
If I have 5 nodes and I set a replication factor of 5, will reading be 5x faster?
No. It means you will have up to 5 copies of the data, ensuring that your query can be satisfied even when nodes are down. Cassandra does not divide up the work of a read; instead it tries to force you to design your data in a way that makes reads efficient and fast.
The best way to read from Cassandra is to make sure each query you generate hits a single Cassandra partition, which means the first part of a simple PRIMARY KEY (x, y, z), or the first bracket of a compound PRIMARY KEY ((x, y), z), is provided as a query parameter. An example follows below.
This goes back to the Cassandra table-design principle of designing tables around your query needs.
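For sensor time series like yours, a hypothetical table keyed this way might look like:

CREATE TABLE readings (
    sensor_id int,
    day       date,
    ts        timestamp,
    value     double,
    PRIMARY KEY ((sensor_id, day), ts)
);

-- Supplying the full partition key hits exactly one partition:
SELECT ts, value FROM readings WHERE sensor_id = 42 AND day = '2018-06-01';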
Replication is about copies of data and Partitioning is about distributing data.
https://docs.datastax.com/en/cassandra/3.0/cassandra/architecture/archPartitionerAbout.html
Some references about Cassandra data modelling:
https://www.datastax.com/dev/blog/the-most-important-thing-to-know-in-cassandra-data-modeling-the-primary-key
https://www.datastax.com/dev/blog/basic-rules-of-cassandra-data-modeling
It is recommended to keep partitions around 100 MB, but that is not compulsory.
You can use the cassandra-stress utility to get a report of how your reads and writes perform.
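For example (the node address is hypothetical; tune the operation counts to your hardware):

cassandra-stress write n=1000000 -node 127.0.0.1
cassandra-stress read n=1000000 -node 127.0.0.1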

Select All Performance in Cassandra

I'm currently using DB2 and planning to use Cassandra because, as I understand it, Cassandra has better read performance than an RDBMS.
Maybe this is a stupid question, but I ran an experiment comparing read performance between DB2 and Cassandra.
Testing with 5 million records and the same table schema.
With the query SELECT * FROM customer, DB2 takes 25-30 s and Cassandra takes 40-50 s.
But with a WHERE condition, SELECT * FROM customer WHERE cusId IN (100,200,300,400,500), DB2 takes 2-3 s and Cassandra takes 3-5 ms.
Why is Cassandra faster than DB2 with a WHERE condition? And so I can't tell which database is better using SELECT * FROM customer, right?
FYI:
Cassandra: RF=3 and CL=ONE, with 3 nodes, each node running on its own computer (VM, Ubuntu)
DB2: running on Windows
Table schema:
cusId int PRIMARY KEY, cusName varchar
If you look at the types of problems that Cassandra is good at solving, the reasons why unbound ("select all") queries suck become quite apparent.
Cassandra was designed to be a distributed database. In many Cassandra storage patterns, the number of nodes is greater than the replication factor (i.e., not all nodes contain all of the data). Therefore, limiting the number of network hops becomes essential to modeling high-performing queries. Cassandra performs very well with specific queries (which utilize the partition/clustering key structure), because it can quickly locate the node primarily responsible for the data.
Unbound queries (a.k.a. multi-key queries) incur extra network time because a coordinator node is required: one node acts as the coordinator, queries all other nodes, collates the data, and returns the result set. Specifying a WHERE clause (with at least a partition key) while using a "token aware" load balancing policy performs well for two reasons:
A coordinator node is not required.
The node primarily responsible for the range is queried, returning the result set in a single network hop.
tl;dr;
Querying Cassandra with an unbound query causes it to incur a lot of extra processing and network time that it normally wouldn't have to, had the query been specified with a WHERE clause.
Even for a troublesome query like a no-condition range query, 40-50 s is pretty extreme for C*. Is the coordinator hitting GCs while coordinating? Can you include the code used for your test?
When you do a select * over millions of records, the driver won't fetch them all at once; it will grab fetchSize rows at a time. If you just iterate through the results, the iterator will actually block, even if you used executeAsync initially. This means that every 10k (the default) records it will issue a new query that you block on. The serialized nature of this takes time from the network perspective alone. http://docs.datastax.com/en/developer/java-driver/3.1/manual/async/#async-paging explains how to do it in a non-blocking way. You can use this to kick off the next page fetch while processing the current one, which helps.
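That link covers the Java driver; the same pattern with the DataStax Python driver looks roughly like this (the contact point, keyspace, and process() are hypothetical):

from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

session = Cluster(["127.0.0.1"]).connect("my_keyspace")
query = SimpleStatement("SELECT * FROM customer", fetch_size=1000)
future = session.execute_async(query)

def handle_page(rows):
    for row in rows:
        process(row)  # your per-row work goes here
    if future.has_more_pages:
        # Kick off the next page fetch while this page is being processed
        future.start_fetching_next_page()

def handle_error(exc):
    print("Query failed: %s" % exc)

future.add_callbacks(handle_page, handle_error)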
Decreasing the limit or fetch size could also help, since the coordinator may walk token ranges one at a time (parallelism is possible here, but its heuristic is not perfect) until it has read enough. If it has to walk too many nodes to respond, it will be slow; this is why even empty tables can be very slow to run a select * on, as it may serially walk every replica set. With 256 vnodes this can be very bad.

Is it possible to read data only from a single node in a Cassandra cluster with a replication factor of 3?

I know that Cassandra has different read consistency levels, but I haven't seen a consistency level which allows reading data by key from only one node. I mean, if we have a cluster with a replication factor of 3, then we will always ask all nodes when we read. Even if we choose a consistency level of ONE, we still ask all nodes but wait for the first response from any node. That is why a read loads not just one node but 3 (4 with a coordinator node). I think we can't really improve read performance even if we set a bigger replication factor.
Is it possible to really read from only a single node?
Are you using a Token-Aware Load Balancing Policy?
If you are, and you are querying with a consistency of LOCAL_ONE/ONE, a read query should only contact a single node.
Give the article Ideology and Testing of a Resilient Driver a read. In it, you'll notice that using the TokenAwarePolicy has this effect:
"For cases with a single datacenter, the TokenAwarePolicy chooses the primary replica to be the chosen coordinator in hopes of cutting down latency by avoiding the typical coordinator-replica hop."
So here's what happens. Let's say that I have a table for keeping track of Kerbalnauts, and I want to get all data for "Bill." I would use a query like this:
SELECT * FROM kerbalnauts WHERE name='Bill';
The driver hashes my partition key value (name) to the token 4639906948852899531 (SELECT token(name) FROM kerbalnauts WHERE name='Bill'; returns that value). If I am working with a 6-node cluster, then my primary token ranges will look like this:
node start range end range
1) 9223372036854775808 to -9223372036854775808
2) -9223372036854775807 to -5534023222112865485
3) -5534023222112865484 to -1844674407370955162
4) -1844674407370955161 to 1844674407370955161
5) 1844674407370955162 to 5534023222112865484
6) 5534023222112865485 to 9223372036854775807
As node 5 is responsible for the token range containing the partition key "Bill," my query will be sent to node 5. As I am reading at a consistency of LOCAL_ONE, there will be no need for another node to be contacted, and the result will be returned to the client...having only hit a single node.
Note: Token ranges computed with:
python -c'print [str(((2**64 /5) * i) - 2**63) for i in range(6)]'
I mean if we have a cluster with a replication factor of 3 then we will always ask all nodes when we read
Wrong. With consistency level ONE, the coordinator picks the fastest node (the one with the lowest latency) to ask for data.
How does it know which replica is the fastest? By keeping internal latency stats for each node.
With a consistency level >= QUORUM, the coordinator asks for data from the fastest node and also asks for digests from the other replicas.
From the client side, if you choose the appropriate load balancing strategy (e.g. TokenAwarePolicy), the client will always contact the primary replica when using consistency level ONE.

Frequent rpc_timeouts of the query SELECT count(*) FROM Keyspace1.Standard1 limit 5; in cassandra

I have a 5-node Cassandra cluster with 3 nodes in a private DC and the other 2 on AWS.
Select * requests are timing out even when limited to 5 rows. I would understand if they timed out for large numbers, but timing out for single digits looks strange.
Has anyone observed this before?
NOTE: Queries with a WHERE clause behave normally.
There are a few possibilities:
1) Your servers are too busy / slow to reply to the query.
2) You're hitting a tombstone exception, which sometimes doesn't get reported properly. Check the log on the Cassandra server for the word 'tombstone' to be sure.
3) You're asking for too much data at once - less likely, given that it happens even with LIMIT 5.
I'm guessing it's #2. Look for tombstone warnings in your cassandra server logs. If that's the problem, you likely have a data model problem.
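For example, on a default package install (the log path may differ on your systems):

grep -i tombstone /var/log/cassandra/system.log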
Are the nodes on two different networks (you said private DC and AWS)? Check the communication between the nodes.
What consistency level are you using when querying? Try with a consistency of ONE and see the response, then check the communication between nodes (with a higher consistency level, Cassandra always checks the consistency of data with other nodes before responding with results).
Does your select have a WHERE clause, or is it a simple select *? If the latter, then retrieving data from different nodes over slow inter-node communication might again be the issue.
