Scylla, datastax-java-driver integration problem - cassandra

I have a three-node Scylla cluster and a keyspace with a replication factor of 3. I use datastax-java-driver version 3.6.0 and Scylla version 3.0.0. When I try to read my data with consistency level LOCAL_QUORUM, I get the error message below, which seems impossible to me: with LOCAL_QUORUM and a replication factor of 3, two nodes should be enough.
Is this a bug, or am I missing something?
com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout during read query at consistency LOCAL_QUORUM (3 responses were required but only 2 replica responded)

What happened here is that Scylla chose to do a probabilistic read repair and detected a mismatch before the CL was reached. At that point it started a repair between all three replicas and failed to read from all of them (either due to overload, or because one node crashed or was restarted while the operation was ongoing). You can disable probabilistic read repair to avoid this.
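If you want to go that route, probabilistic read repair is controlled by the read_repair_chance and dclocal_read_repair_chance table properties, which you can set to zero. A minimal sketch with the 3.6.0 driver, assuming my_keyspace.my_table is a placeholder for your own table and 127.0.0.1 for your contact point:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
Session session = cluster.connect();

// Disable probabilistic read repair on the affected table
// (my_keyspace.my_table is a placeholder).
session.execute("ALTER TABLE my_keyspace.my_table "
        + "WITH read_repair_chance = 0.0 AND dclocal_read_repair_chance = 0.0");

The same ALTER TABLE can of course be run from cqlsh instead of the driver.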

Related

Cassandra timeout during read query at consistency ALL

We use Cassandra 3.11.1, com.datastax.oss java-driver-core 4.13.0, and Java 13.
We have multiple read microservices that read data from Cassandra, but we got this error:
Cassandra timeout during read query at consistency ALL (8 responses were required but only 7 replica responded)
Our queries are mostly simple selects by primary key, and some of them even set the consistency level to LOCAL_QUORUM explicitly, so I wonder why we still ran into this issue. Sample queries:
QueryBuilder.selectFrom(Email.TABLE_NAME)
        .all()
        .whereColumn(Email.COLUMN_EMAIL_ADDRESS)
        .isEqualTo(bindMarker())
        .limit(bindMarker())
        .build()
        .setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM)

QueryBuilder.selectFrom(Account.TABLE_NAME)
        .columns(
                AccountDataObject.COLUMN_ACCOUNT_ID,
                AccountDataObject.COLUMN_EMAIL_ADDRESS)
        .whereColumn(Account.COLUMN_ACCOUNT_ID)
        .isEqualTo(bindMarker())
        .build()
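Note that the second statement above never sets a consistency level, so it falls back to whatever the driver profile is configured with. If the intent is LOCAL_QUORUM everywhere, one option is to set it once as the driver default rather than per statement. A rough sketch for driver 4.x; the contact point and datacenter name are placeholders:

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.config.DefaultDriverOption;
import com.datastax.oss.driver.api.core.config.DriverConfigLoader;
import java.net.InetSocketAddress;

DriverConfigLoader loader = DriverConfigLoader.programmaticBuilder()
        // Default consistency for every request that does not override it.
        .withString(DefaultDriverOption.REQUEST_CONSISTENCY, "LOCAL_QUORUM")
        .build();

CqlSession session = CqlSession.builder()
        .addContactPoint(new InetSocketAddress("127.0.0.1", 9042))
        .withLocalDatacenter("dc1")
        .withConfigLoader(loader)
        .build();

A per-statement setConsistencyLevel(...) still overrides this default when present.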

Cassandra timeout during read query at consistency LOCAL_QUORUM (2 responses were required but only 0 replica responded)

Cassandra timeout during read query at consistency LOCAL_QUORUM (2 responses were required but only 0 replica responded); nested exception is com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout during read query at consistency LOCAL_QUORUM (2 responses were required but only 0 replica responded)
I am getting the above error while executing a query against one of the Cassandra tables in my application. The table has three columns: promo, store, and upc. promo is the partition key (type = PrimaryKeyType.PARTITIONED), and both store and upc are clustering columns (type = PrimaryKeyType.CLUSTERED). I am getting this for only one promo. How can I resolve this?
The exception means that the nodes in your cluster were unresponsive. You need to investigate why. Start by reviewing the system.log and debug.log on the nodes.
In a lot of cases, this is caused by nodes being overloaded and GC pauses. Cheers!
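If the logs do not make the cause obvious, enabling query tracing on the slow request can show which replicas were involved and where the time went. A rough sketch with the 3.x driver, assuming an existing Session named session; the keyspace, table, and promo value are placeholders:

import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.QueryTrace;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

Statement stmt = new SimpleStatement(
        "SELECT * FROM my_keyspace.promo_table WHERE promo = ?", "PROMO-123")
        .setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM)
        .enableTracing();

ResultSet rs = session.execute(stmt);
QueryTrace trace = rs.getExecutionInfo().getQueryTrace();
System.out.printf("trace %s took %d us%n", trace.getTraceId(), trace.getDurationMicros());
for (QueryTrace.Event e : trace.getEvents()) {
    // Each event shows which replica did what and how long it took.
    System.out.printf("%s | %s | %d us%n",
            e.getSource(), e.getDescription(), e.getSourceElapsedMicros());
}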

JanusGraph Not enough replicas available for query at consistency QUORUM

We have a 7-node storage cluster with RF=4 and CL=ONE. JanusGraph has the following settings in its properties file:
storage.cql.replication-factor=4
storage.cql.read-consistency-level=ONE
storage.cql.write-consistency-level=ONE
log.tx.key-consistent=true
When we stopped 2 nodes (out of 7), JanusGraph started failing with the errors below:
gremlin-server.log:Caused by: java.util.concurrent.ExecutionException: com.datastax.driver.core.exceptions.UnavailableException: Not enough replicas available for query at consistency QUORUM (3 required but only 2 alive)
I tried log.tx.key-consistent=true, but it's not working.
Can you please assist here?
Obviously there is a quorum operation going on; it seems the configured CL=ONE was not enough to cover it.
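One way to confirm that from the application side is to inspect the UnavailableException itself, which reports the consistency level the coordinator actually used along with the required and alive replica counts. A small sketch against the 3.x driver that the stack trace shows here, assuming an existing session; the query is a placeholder:

import com.datastax.driver.core.exceptions.UnavailableException;

try {
    session.execute("SELECT * FROM my_keyspace.my_table");  // placeholder query
} catch (UnavailableException e) {
    // The exception carries the CL the coordinator used for this operation,
    // which in your error is QUORUM rather than the configured ONE.
    System.out.printf("cl=%s required=%d alive=%d%n",
            e.getConsistencyLevel(), e.getRequiredReplicas(), e.getAliveReplicas());
}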

Not enough replica available for query at consistency SERIAL (2 required but only 1 alive)

Experts,
I have the following configuration on a 3-node cluster (Cassandra 2.1):
- Replication factor of 2
- Consistency level ONE
- Driver consistency level SERIAL
- SimpleStrategy
- GossipingPropertyFileSnitch
With this configuration, if I bring down one node I get the following error:
Not enough replica available for query at consistency SERIAL (2 required but only 1 alive)
Data is distributed evenly across all the nodes, and nodetool status run on the two remaining Cassandra nodes correctly shows that one node is down.
With consistency ONE and 2 nodes up, why does it require both replica nodes to be up?
I also read the following about WRITE failures with SERIAL driver consistency:
If one of three nodes is down, the Paxos commit fails under the following conditions:
CQL query-configured consistency level of ALL
Driver-configured serial consistency level of SERIAL
Replication factor of 3
This works if I set the replication factor to 3. But I don't think there should be a need to do so.
Am I missing something here?
You have hit one of the hidden gems of the Paxos protocol in Cassandra. Under the hood, Paxos uses a QUORUM-like consistency level for its calls.
Note that it complains about SERIAL consistency level in your error message instead of the consistency level ONE that you have set. LWT ignores what normal consistency level is set in most cases. It follows either SERIAL or LOCAL_SERIAL consistency level, which maps almost directly to a QUORUM or a LOCAL_QUORUM of nodes.
A quorum of two replicas is two. Therefore you are getting this error message when one node is down.
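To make the arithmetic explicit, the quorum size is floor(RF/2) + 1, so with RF=2 a single down node already breaks SERIAL operations, while with RF=3 one node can be lost. A tiny illustration of the formula (not Cassandra code):

// Quorum size as Cassandra computes it: floor(RF / 2) + 1.
static int quorum(int replicationFactor) {
    return replicationFactor / 2 + 1;   // integer division is the floor here
}

// quorum(2) == 2  -> SERIAL with RF=2 needs both replicas alive
// quorum(3) == 2  -> SERIAL with RF=3 tolerates one replica down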

Consistency level in cassandra issue

Environment :
5 machines Cassandra 2.1.15 cluster.
RF = 3, CL = QUORUM
One machine went down for more than 3 hours, with no possibility of bringing it back.
We decided to remove the node and replace it.
The problem I saw is this:
I ran a heavy write load against the cluster:
cassandra-stress write n=50000000 cl=QUORUM -rate threads=1000 -node 192.168.0.171,192.168.0.177,192.168.0.178,192.168.0.179,192.168.0.220
At one point it gave me this error:
com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra timeout during write query at consistency QUORUM (3 replica were required but only 2 acknowledged the write)
To my knowledge, QUORUM = floor(RF/2) + 1 = 2, so 2 replicas should be enough.
Is this some kind of bug? Does it have any negative impact?
Are you certain that cassandra-stress is using your keyspace? If you have not configured it to do so, it is probably using its default keyspace, with as many replicas as there are nodes. Try using the -schema switch for cassandra-stress.
https://docs.datastax.com/en/cassandra/2.1/cassandra/tools/toolsCStress_t.html
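For example, something along these lines tells cassandra-stress to create and use a keyspace with RF=3 instead of its default (a sketch only; the keyspace name is an assumption, and you should match the replication settings of your real schema):

cassandra-stress write n=50000000 cl=QUORUM -schema "replication(factor=3)" keyspace=my_stress_ks -rate threads=1000 -node 192.168.0.171,192.168.0.177,192.168.0.178,192.168.0.179,192.168.0.220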
