I am running C* 3.11.
What is the difference between the two commands below?
Here the -dc parameter is not working, while -local works fine.
Any reason?
$ nodetool repair -local -pr demo msisdn
Output: Repair completed successfully

$ nodetool repair -dc datacenter1 -pr demo msisdn
Output: error: Primary range repair should be performed on all nodes in the cluster.
This question was also asked on https://community.datastax.com/questions/11796/ and I'm re-posting my answer here.
The -local and -dc flags limit the replicas which are repaired to a specific data centre.
Unlike localised repairs, primary range repairs (with the -pr flag) are designed to repair all replicas in all data centres. You cannot use the -dc flag with -pr since they are mutually exclusive.
For more information, see Manual repairs in Cassandra. Cheers!
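For example (a minimal sketch based on the keyspace and table from the question), you would either run a primary range repair on every node in every data centre, or drop -pr when scoping the repair to one data centre:
# Run on every node in the cluster, one node at a time:
$ nodetool repair -pr demo msisdn
# Or scope the repair to a single data centre, without -pr:
$ nodetool repair -dc datacenter1 demo msisdn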
I am running a cluster with 1 datacenter (6 nodes) and Cassandra 3.11.0 installed on each node, with replication factor 2. I know nodetool repair -pr will carry out a repair of the primary ranges on that node. My question is: how is nodetool repair -pr -full different from nodetool repair -pr?
Which repair option should I use on a heavily loaded production system?
My question is: how is nodetool repair -pr -full different from nodetool repair -pr?
So a "full" repair means that all data between the source and target nodes will be verified and repaired. Essentially, it's the opposite of an "incremental" repair, where only a subset of data is repaired. These two options control how much data is repaired.
Once that is decided (incremental vs. full), -pr will run on that subset, and only then repairs the primary replicas.
Additionally, -full is the default for Cassandra 3, which would make -pr and -pr -full essentially the same.
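As a quick illustration (the keyspace name is just a placeholder), spelling out -full makes the intent explicit regardless of which mode your version defaults to:
# Full repair of only this node's primary ranges, stated explicitly:
$ nodetool repair -pr -full my_keyspace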
Which repair option should I use on a heavily loaded production system?
I'll second what Alex said, and also recommend Cassandra Reaper for this. It has the ability to schedule repairs for slower times, as well as allowing you to pause repairs which don't complete in time.
For production systems, it's better to use token range repair (using the -st/-et options) to limit the load on the nodes. Doing it manually can be tedious, but it can be automated with tools like Reaper, which track which token ranges have already been repaired and which have not.
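A rough sketch of a single subrange repair (the token values and keyspace name are made-up placeholders, not taken from the question):
# Full repair of just one slice of this node's token range:
$ nodetool repair -full -st 3074457345618258602 -et 6148914691236517204 keyspace_name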
It's recommended not to execute incremental repair with the -pr option: it will leave non-primary replicas unrepaired, which is not a good practice in the long run.
I have a Cassandra cluster with two datacenters.
In datacenter 2 I have a keyspace with replication factor 3.
I want to repair all keyspaces in datacenter 2.
I have tried to run:
nodetool repair --in-local-dc --full -j 4
But this command does not repair all keyspaces. Does anybody know if this is intended behaviour? The Cassandra logs do not indicate any problems.
So I have also had issues with multi-DC repairs when designating a source DC. I don't know if those DC-specific repair flags are buggy, but I have found that pretty much the best way to ensure that only specific nodes are involved in a repair is to specify each one.
nodetool repair keyspace_name -hosts 10.6.8.2 -hosts 10.6.8.3 -hosts 10.6.8.1 -hosts 10.6.8.5 -hosts 10.6.8.4 -hosts 10.1.3.1 -full
Note that my goal was to run this repair on 10.1.3.1 while SSH'd into it. The node you are running the repair on must also be specified with a -hosts flag. Also make sure that each node in the source DC is listed, otherwise you'll get errors about missing source token ranges.
Try that and see if that helps.
I am confused about how to repair the Cassandra cluster.
Does the command below repair all nodes of the local datacenter?
nodetool repair -local -j 4
So, I needn't run nodetool repair -pr -j 4 on all nodes one by one?
Running nodetool repair on a single node does the following: The node repairs all data that it holds by contacting other nodes which are also responsible for this data. It will not repair data which this node is not responsible for.
A very simple example without vnodes is this:
Consider you have three nodes (A, B and C) and your lowest replication factor is two. If you run repair on A, it will repair the data which A and B have in common, as well as the data which A and C have in common. However, it will not repair the data which is only stored on B and C.
In this example, running the repair on two nodes will ensure that you have repaired everything.
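A minimal sketch of that example (the node addresses and the demo keyspace are placeholders; nodetool's -h option points it at a specific node):
# Repair on A covers the ranges A shares with B and the ranges A shares with C:
$ nodetool -h nodeA repair demo
# Repair on B additionally covers the ranges only B and C share, so everything is now repaired:
$ nodetool -h nodeB repair demo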
The -pr and -local flags
The -pr flag changes this behavior even further. Instead of repairing AB and AC, you only repair the data which A is primarily responsible for. In this case you need to run repair on B and C separately as well. This ensures that you do not repair the same data twice.
-local will ensure that the repair only involves nodes in the same datacenter. If B were in a different datacenter, it would be ignored for the purposes of the repair.
I would recommend running -pr on each node, one by one, as sketched below. This avoids repairing the same data multiple times and breaks the work down into smaller, digestible chunks.
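A minimal sketch of that, assuming placeholder host names and that nodetool can reach each node:
# Primary-range repair, strictly one node at a time:
for host in node1 node2 node3; do
    nodetool -h "$host" repair -pr
done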
What is the right method to run nodetool repair command?
In a 3-node Cassandra cluster in single datacenter, should we run nodetool repair or nodetool repair -pr ?
As per the Apache Cassandra documentation, http://cassandra.apache.org/doc/latest/operating/repair.html:
"By default, repair will operate on all token ranges replicated by the node you’re running repair on, which will cause duplicate work if you run it on every node. The -pr flag will only repair the “primary” ranges on a node, so you can repair your entire cluster by running nodetool repair -pr on each node in a single datacenter."
Running "nodetool repair" takes more than 5 mins.But running "nodetool repair -pr" takes lesser time.So,I want to know if "nodetool repair -pr" is the correct choice for 3-node Cassandra cluster in single datacenter.
Please advice.
Notice that if you use -pr, you should not use -inc at the same time, because these two are not recommended together. So basically, as other people suggest: before 2.2, on each node you can just run nodetool repair -pr; whereas from 2.2 onwards, you'd better use nodetool repair -pr -full to suppress the incremental repair.
If you run repair periodically, the best way is to use the -pr option, repairing only the primary token ranges of each node. Done on every node, this repairs the whole ring.
But if you haven't run repair until now, the best approach is probably to run a full repair first and then maintenance repairs using the -pr option.
Also, note that the repair behavior changes depending on your Cassandra version. The default repair in 2.2 and later is incremental. If you want to trigger a full repair you have to explicitly use the -full option. For versions prior to 2.2, check the documentation.
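In command form (the keyspace name is just a placeholder), that works out to roughly:
# Cassandra before 2.2: repair is full by default
$ nodetool repair -pr my_keyspace
# Cassandra 2.2 and later: incremental is the default, so ask for a full repair explicitly
$ nodetool repair -pr -full my_keyspace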
The Cassandra documentation recommends running nodetool repair once within every gc_grace_seconds period (10 days by default), but the nodetool repair command takes a lot of time and resources.
Hence, to reduce time and resources, I followed the partitioner range repair mechanism (nodetool repair -pr) on each node of the datacenter, as mentioned in the Cassandra docs:
https://docs.datastax.com/en/cassandra/2.1/cassandra/operations/opsRepairNodesManualRepair.html
When I used nodetool repair -pr, the time taken was less compared to nodetool repair.
Hence, I am currently running the nodetool repair -pr command on each node of all my datacenters, one by one.
I want to know: can we run the nodetool repair -pr command on all the nodes in parallel?
(or)
Should I run the nodetool repair -pr command with a time gap between each node?
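For illustration only, the one-by-one approach currently being used could be scripted roughly as follows (the host names and SSH access are assumptions, not part of the question):
# Run nodetool repair -pr on each node in turn, waiting for each node to finish
# before starting the next, so only one node carries the repair load at a time:
for host in dc1-node1 dc1-node2 dc1-node3 dc2-node1 dc2-node2 dc2-node3; do
    echo "Repairing primary ranges on $host ..."
    ssh "$host" nodetool repair -pr
done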