From the documentation:
Using the nodetool repair -pr (--partitioner-range) option repairs only the primary range for that node; the other replicas for that range still have to perform the Merkle tree calculation, causing a validation compaction. Because all the replicas are compacting at the same time, all the nodes may be slow to respond for that portion of the data.
There is probably never a time when I can accept all nodes being slow for a certain portion of the data. But I wonder: why does it do that (or is there maybe just a mix-up with the "-par" option in the documentation?!), when plain nodetool repair seems to be smarter:
By default, the repair command takes a snapshot of each replica immediately and then sequentially repairs each replica from the snapshots. For example, if you have RF=3 and A, B and C represent the three replicas, this command takes a snapshot of each replica immediately and then sequentially repairs each replica from the snapshots (A<->B, A<->C, B<->C) instead of repairing A, B, and C all at once. This allows the dynamic snitch to maintain performance for your application via the other replicas, because at least one replica in the snapshot is not undergoing repair.
However, the DataStax blog addresses this issue:
This first phase can be intensive on disk I/O, however. You can mitigate this to some degree with compaction throttling (since this phase is what we call a validation compaction). Sometimes that isn't enough though, and some people try to mitigate this further by using the -pr (--partitioner-range) option to nodetool repair, which repairs only the primary range for that node. Unfortunately, the other replicas for that range will still have to perform the Merkle tree calculation, causing a validation compaction. This can be a problem, since all the replicas will be doing it at the same time, possibly making them all slow to respond for that portion of your data. Fortunately, there is a way around this by using the -snapshot option.
That could be nice, but actually there is no -snapshot option for nodetool repair (see the man page or the documentation). Has this option been removed?!
So overall,
I cannot use nodetool repair -pr, it seems, because I always need to keep the system at least responsive enough to read/write with consistency ONE, without significant delay. (Note: we have only one data center.) Or am I missing/misunderstanding something?
Why is nodetool repair smart, keeping one node responsive, while nodetool repair -pr makes all nodes slow for a portion of data?
Where is the -snapshot option: Has it been removed, never implemented, or does it now maybe automatically work like that, also when using nodetool repair -pr?
The blog below addresses these issues:
http://www.datastax.com/dev/blog/repair-in-cassandra
A simple nodetool repair will not only kick off a repair on the node itself but also on all the nodes that hold replicas of its ranges. While this is OK, it is very expensive and typically not an operation you'll carry out on a busy production system during peak times.
Consequently, nodetool repair -pr will carry out a repair of only the primary ranges on that node. You will need to run this on every node of the cluster, as the blog says. Customers with large production systems will typically use this in a rolling fashion across their cluster.
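For illustration, a minimal sketch of that rolling approach, assuming SSH access to each node; the host names and keyspace are placeholders:

# Hypothetical rolling primary-range repair, one node at a time
for host in node1 node2 node3; do
    ssh "$host" nodetool repair -pr my_keyspace
done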
On another note, DataStax OpsCenter offers the Repair Service, which runs smaller sub-range repairs all the time. So although you're always repairing, it's going on in the background continuously at a lower resource level.
As for the snapshots, running a regular repair will invoke a snapshot as you stated; you can also take a snapshot yourself using nodetool snapshot.
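For reference, a manual snapshot might look like this (the tag and keyspace names are just examples):

# Take a snapshot of a single keyspace with a custom tag
nodetool snapshot -t before_repair my_keyspace
# List existing snapshots
nodetool listsnapshots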
Hope this helps!
Related
We have a 13-node Cassandra cluster (version 3.10) with RF 2 and read/write consistency of 1.
This means that the cluster isn't fully consistent, but eventually consistent. We chose this setup to speed up the performance, and we can tolerate a few seconds of inconsistency.
The tables are set up with TWCS with read repair disabled, and we don't run full repairs on them.
However, we've discovered that some entries of the data are replicated only once, and not twice, which means that when the not-updated node is queried it fails to retrieve the data.
My first question is how could this happen? Shouldn't Cassandra replicate all the data?
Now, if we choose to perform repairs, they will create overlapping tombstones, so those won't be deleted when their time is up. I'm aware of the unchecked_tombstone_compaction property to ignore the overlap, but that feels like a bad approach. Any ideas?
So you've obviously made some deliberate choices regarding your client CL. You've opted to potentially sacrifice consistency for speed. You have achieved your goals, but you assumed that data would always make it to all of the other nodes in the cluster that it belongs to. There are no guarantees of that, as you have found out. How could that happen? There are multiple reasons, I'm sure, some of which include: network issues, hardware overload (I/O, CPU, etc., which can cause dropped mutations), Cassandra/DSE being unavailable for whatever reason, and so on.
If none of your nodes have been "off-line" for at least a few hours (whether it be DSE or the host being unavailable), I'm guessing your nodes are dropping mutations, and I would check two things:
1) nodetool tpstats
2) Look through your Cassandra logs
For DSE: cat /var/log/cassandra/system.log | grep -i mutation | grep -i drop (and debug.log as well)
I'm guessing you're probably dropping mutations, and the Cassandra logs and tpstats will record this (tpstats will only show counts since the last Cassandra/DSE restart). If you are dropping mutations, you'll have to try to understand why - typically some sort of load pressure is causing it.
I have scheduled 1-second vmstat output that spools to a log continuously, with log rotation, so I can go back and check a few things if our nodes start misbehaving. It could help.
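As an example of that kind of capture on Linux, assuming a vmstat that supports -t for timestamps (the log path and rotation setup are placeholders):

# Append timestamped 1-second vmstat samples to a log; rotate it with logrotate
vmstat -t 1 >> /var/log/vmstat-$(hostname).log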
That's where I would start. Either way, your decision to use read/write CL=1 has put you in this spot. You may want to reconsider that approach.
Consistency level 1 can sometimes create problems for many reasons: data not replicating properly across the cluster because of dropped mutations, cluster/node overload, high CPU, high I/O, or network problems. In those cases you can suffer data inconsistency, although read repair sometimes handles this if it is enabled. You can go with a manual repair to ensure consistency of the cluster, but in your case you can get some zombie data too.
I think, to avoid this kind of issue, you should consider a CL of at least QUORUM for writes, or you should run a manual repair within gc_grace_seconds (default is 10 days) for all the tables in the cluster.
Also, you can use incremental repair so that Cassandra runs repair in the background on chunks of data. For more details you can refer to the links below:
http://cassandra.apache.org/doc/latest/operating/repair.html or https://docs.datastax.com/en/archived/cassandra/3.0/cassandra/tools/toolsRepair.html
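For example (note that since Cassandra 2.2, a plain nodetool repair runs an incremental repair by default, and -full forces a full repair; the keyspace name is a placeholder):

# Incremental repair of a single keyspace (the default mode since 2.2)
nodetool repair my_keyspace
# Force a full repair instead
nodetool repair -full my_keyspace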
I am currently running Cassandra 3.0.9 in an 18-node configuration. We loaded quite a bit of data and are now running repairs against each node. My nodetool command is scripted to look like:
nodetool repair -j 4 -local -full
Using nodetool tpstats I see the 4 threads for repair, but they are repairing very slowly. I have thousands of repairs that are going to take weeks at this rate. The system log has repair items but also "Redistributing index summaries" listed as well. Is this what is causing my slowness? Is there a faster way to do this?
Repair can take a very long time, sometimes days, sometimes weeks. You might improve things with the following:
Run a primary partition range repair (-pr). This will repair only the primary partition range of each node, which, overall, will be faster (you still need to run a repair on each node, one at a time); see the sketch after this list.
Using -j is not necessarily a big winner. Sure, you will repair multiple tables at a time, but you put much more load on your cluster, which can hurt your latency.
You might want to prioritize repairing the keyspaces / tables that are most critical to your application.
Make sure you keep your node density reasonable: 1 to 2 TB per node.
Prioritize repairing the nodes that went down for more than 3 hours (assuming max_hint_window_in_ms is set to its default value).
Prioritize repairing the tables for which you create tombstones (DELETE statements).
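A minimal sketch of the first and third points combined, repairing the most critical keyspaces first with -pr, run on each node in turn (keyspace names are placeholders):

# Primary-range, full repair, most critical keyspaces first
for ks in critical_keyspace secondary_keyspace; do
    nodetool repair -pr -full "$ks"
done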
So I did something of a test run / disaster-recovery practice: deleting a table and restoring it in Cassandra via snapshot on a test cluster I have built.
This test cluster has four nodes, and I used the node-restart method: after truncating the table in question, all nodes were shut down, commitlog directories were cleared, and the current snapshot data was copied back into the table directory on each node. Afterwards, I brought each node back up. Then, following the documentation, I ran a repair on each node, followed by a refresh on each node.
My question is, why is it necessary for me to run a repair on each node afterwards, assuming none of the nodes were down except when I shut them down to perform the restore procedure? (In this test instance it was a small amount of data and took very little time to repair; if this happened in our production environment the repairs would take about 12 hours to perform, so this could be a HUGE issue for us in a disaster scenario.)
And I assume running the repair would be completely unnecessary on a single node instance, correct?
Just trying to figure out what the purpose of running the repair and subsequent refresh is.
What is repair?
Repair is one of Cassandra's main anti-entropy mechanisms. Essentially it ensures that all your nodes have the latest version of all the data. The reason it takes 12 hours (this is normal, by the way) is that it is an expensive operation -- I/O and CPU intensive -- to generate Merkle trees for all your data, compare them with Merkle trees from other nodes, and stream any missing / outdated data.
Why run a repair after restoring from snapshots?
Repair gives you a consistency baseline. For example: if the snapshots weren't taken at the exact same time, you have a chance of reading stale data if you're using CL ONE and hit a replica restored from the older snapshot. Repair ensures all your replicas are up to date with the latest data available.
tl;dr:
repairs would take about 12 hours to perform so this could be a HUGE issue for us in a disaster scenario
While your repair is running, you'll have some risk of reading stale data if your snapshots don't contain exactly the same data. If they are old snapshots, gc_grace may have already passed for some tombstones, giving you a higher risk of zombie data if tombstones aren't well propagated across your cluster.
Related side rant - When to run a repair?
The colloquial definition of the term repair seems to imply that your system is broken. We think "I have to run a repair? I must have done something wrong to get to this un-repaired state!" This is simply not true. Repair is a normal maintenance operation with Cassandra. In fact, you should be running repair at least every gc_grace seconds to ensure data consistency and avoid zombie data (or use the OpsCenter Repair Service).
In my opinion, we should have called it AntiEntropyMaintenance or CassandraOilChange or something rather than Repair : )
Do we also need to repair "SYSTEM" keyspaces and "OPSCENTER" keyspaces in Cassandra, along with the keyspaces we created?
The answer is no and maybe, respectively. Here's why:
System KS
The SYSTEM keyspace uses the local replication strategy, so there is no need or sense in repairing it -- remember, repair is an anti-entropy mechanism through which we ensure that multiple replicas on different nodes are holding the same, latest data. Because the local strategy means there is no replication, there is no need to build Merkle trees and compare them.
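You can confirm this from the command line; for instance, in Cassandra 3.x the replication settings live in system_schema.keyspaces (connection options omitted):

# Show each keyspace's replication strategy (system uses LocalStrategy)
cqlsh -e "SELECT keyspace_name, replication FROM system_schema.keyspaces;"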
OpsC KS
OpsCenter uses regular reads and writes into Cassandra to store information about your cluster health / statistics / etc. These will have multiple replicas, and it is possible that different nodes may get out of sync (say one node is down for some reason and exceeds the max hint window). In this case, you might see stale data if you're reading at CL ONE from that node, and a repair would be beneficial. OpsC tables also have a TTL -- so you could see zombie data if for some reason tombstones don't get propagated across the cluster. But the impact of stale data in your OpsCenter statistics will not make or break your business.
So if you have the system resources to run repairs (hopefully using the OpsC repair service) on the OpsC keyspace, it won't hurt and might keep you from seeing stale data, etc. But turning these off for the OpsC keyspace may free up some system resources for your regular workload.
I am developing an automated script for nodetool repair which would execute every weekend on all 6 Cassandra nodes. We have 3 in DC1 and 3 in DC2. I just want to understand the worst-case scenario. What would happen if connectivity between DC1 and DC2 is lost, or a couple of replicas go down, before or during a nodetool repair? It could be a network issue, a network upgrade (which usually happens on weekends), or something else. I understand that nodetool repair computes a Merkle tree for each range of data on that node and compares it with the versions on other replicas. So if there is no connectivity between replicas, how would nodetool repair behave? Will it really repair the nodes? Do I have to rerun nodetool repair after all nodes are up and connectivity is restored? Will there be any side effects of this event? I googled about it but couldn't find many details. Any insight would be helpful.
Thanks.
Let's say you are using vnodes, which by default means that each node has 256 ranges, but the idea is the same.
If the network problem happens after nodetool repair has already started, you will see in the logs that some ranges were successfully repaired and others weren't. The error will say that the range repair failed because a node is dead -- "192.168.1.1 is dead" or something like that.
If the network error happens before nodetool repair starts, all the ranges will fail with the same error.
In both cases you will need to run another nodetool repair after the network problem is solved.
I don't know the amount of data you have on those 6 nodes, but in my experience, if the cluster can handle it, it is better to run nodetool repair once a week, on a different day of the week for each node. For instance, you can repair node 1 on Sunday, node 2 on Monday, and so on. If you have a small amount of data, or the adds/updates during a day are not too many, you can even run a repair once a day. When you have an already-repaired cluster and you run nodetool repair more often, it will take much less time to finish; but again, if you have too much data in it, that may not be possible.
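As a sketch, that staggered schedule could be a per-node cron entry, with each node firing on a different weekday (the day-of-week field, user, keyspace, and log path are placeholders):

# /etc/cron.d/cassandra-repair on node 1 (0 = Sunday); use 1 on node 2, and so on
0 2 * * 0 cassandra nodetool repair my_keyspace >> /var/log/cassandra/repair.log 2>&1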
Regarding the side effects, you will only notice a difference in the data if you use consistency level 1: if you happen to run a query against an "unrepaired" node, the data may differ from the data on the "repaired" nodes. You can solve this by increasing the consistency level to 2, for instance; then again, if 2 nodes are "unrepaired" and the query you run is resolved using those 2 nodes, you will see a difference again. You have a trade-off here, since the best option to avoid this "difference" in the queries is to have consistency level = replication factor, which brings another problem: when 1 of the nodes is down the entire cluster is effectively down and you'll start receiving timeouts on your queries.
Hope it helps!
There are multiple repair options available; you can choose one depending upon your application usage. If you are using DSE Cassandra, then I would recommend scheduling the OpsCenter Repair Service, which does incremental repairs, giving it a duration of less than gc_grace_seconds.
The following are the different options for doing a repair:
Default (none): Will repair all 3 partition ranges owned by the node on which it was run: 1 primary and 2 replicas. A total of 5 nodes will be involved: 2 nodes will be fixing 1 partition range, 2 nodes will be fixing 2 partition ranges, and 1 node will be fixing 3 partition ranges.
-par: Will do the above operation in parallel.
-pr: Will fix only the primary partition range for the node on which it was run. If you are using a write consistency of EACH_QUORUM, then use the -local option as well to reduce cross-DC traffic.
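For example, the primary-range, local-DC combination that option describes might be invoked like this (the keyspace name is a placeholder):

# Repair only this node's primary ranges, restricted to the local data center
nodetool repair -pr -local my_keyspace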
I would suggest going with option 3 (-pr) if you are already live in production, to avoid any performance impact due to repair.
If you want to read about repair in more detail, please have a look here.