I added a new node to an existing Cassandra cluster and waited for it to join. After that, I updated the replication factor and repaired each affected node, following "Updating the replication factor". But why does repairing a node take such a long time?
Repair time depends on the amount of data you have. As a very rough rule of thumb, repairing 100 GB of data takes around an hour, depending on your instances or servers and the load on your cluster. If you have large amounts of data, take into account that it may be hours before the repair actually finishes. It also depends on the Cassandra version you are using: some versions simply hang the repair process, so check system.log for more information. If you notice that the repair failed, you might want to consider upgrading Cassandra.
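If you want to watch a repair while it runs, following the node's log is the simplest option. A minimal sketch, assuming the default package log location (your path may differ):

tail -f /var/log/cassandra/system.log          # follow repair sessions live
grep -i repair /var/log/cassandra/system.log   # look for failed or hung sessions

nodetool netstats also shows any streaming that the repair has triggered.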
Related
We have a 14-node Cassandra cluster, v3.5.
Can someone enlighten me on compact & repair?
If I run it from one of the nodes, does it need to be run on all the nodes in the cluster?
nodetool compact
I see it is very slow; how often is it supposed to be run?
Same question regarding nodetool repair (all nodes or certain nodes in the cluster?):
nodetool repair or
nodetool repair -pr
How often are these supposed to be run?
Compactions are part of the normal operation of Cassandra nodes. They run automatically in the background (otherwise known as minor compactions) and get triggered by each table's defined compaction strategy based on any combination of configured thresholds and compaction sub-properties. This video extract from the DS201 Cassandra Foundations course at the DataStax Academy talks about compactions in more detail.
It is not necessary for an operator/administrator to manually kick off compactions with nodetool compact. In fact, it is not recommended to trigger user-defined compactions (otherwise known as major compactions) because they can create problems down the track like the one I explained in this post -- https://community.datastax.com/questions/6396/.
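If you want to see what compaction is doing on a node, nodetool compactionstats reports the pending and in-progress compaction tasks, which is a safer way to investigate slow compactions than triggering a major compaction by hand:

nodetool compactionstats    # pending task count and progress of running compactions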
Repairs, on the other hand, are something that needs to be managed by a cluster administrator. Since Cassandra has a distributed architecture, it is necessary to run repairs to keep the copies of the data consistent between replicas (Cassandra nodes).
Repairs need to be run at least once every gc_grace_seconds (configured per table). By default, GC grace is 10 days (864000 seconds) so most DB admins run a repair on each node once a week. This short video from the DS210 Cassandra Operations course provides a good overview of Cassandra repairs.
Running a partitioner range repair (with -pr flag) on a node repairs only the data that a node owns so it is necessary to run nodetool repair -pr on each node, one node at a time, until all nodes in the cluster have been repaired. This blog post by Jeremiah Jordan is a good explanation of why this is necessary.
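As a rough illustration, a rolling repair of a small cluster could look like the sketch below; the host names are placeholders and it assumes SSH access to each node (tools like Reaper automate the same pattern):

# Hypothetical host list; repairs run serially, one node at a time.
for host in node1 node2 node3; do
    ssh "$host" nodetool repair -pr
done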
If you're interested, datastax.com/dev has free resources for learning Cassandra. The Cassandra Fundamentals series in particular is a good place to start. It is a collection of short online tutorials where you can quickly learn the basic concepts for free. Cheers!
If I run it from one of the nodes, does it need to be run on all the nodes in the cluster? nodetool compact. I see it is very slow; how often is it supposed to be run?
You should generally not run the nodetool compact command. Compactions are meant to run automatically behind the scenes unless they have been disabled. Running compaction manually may create more problems and should be avoided in most cases; the automatic compactions running in the background should be able to handle your workload. If you feel your compactions are slow, you can tune them by looking at the compaction-related parameters (mostly concurrent_compactors and compaction_throughput_mb_per_sec).
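For instance, the compaction throughput throttle can be inspected and adjusted at runtime without a restart; the value below is purely illustrative:

nodetool getcompactionthroughput       # current throttle in MB/s
nodetool setcompactionthroughput 64    # hypothetical new limit; 0 disables throttling

Changing concurrent_compactors, by contrast, requires editing cassandra.yaml and restarting the node.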
Same question regarding nodetool repair (all nodes or certain nodes in the cluster?): nodetool repair or nodetool repair -pr. How often are these supposed to be run?
Repair is a maintenance task which should be run on all the nodes once in each gc_grace_seconds period. For example, the default gc_grace_seconds is 10 days, so repair needs to run on all the nodes once within that 10-day window, and you should schedule it to run regularly at that cadence. Regarding which option to use: if you are doing it yourself, run nodetool repair -pr on all the nodes, one by one.
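One common way to schedule this is a per-node cron entry, staggered so that different nodes repair on different days; the timing and log path below are purely illustrative:

# Hypothetical crontab entry: repair this node's primary ranges
# every Sunday at 02:00, comfortably inside a 10-day gc_grace_seconds.
0 2 * * 0  nodetool repair -pr >> /var/log/cassandra/repair.log 2>&1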
My team is using Apache Cassandra 3.0 (not DSE) for our 10-node cluster. We have one DC and all nodes are 1 TB each.
Right now all the nodes are around 300 GB occupied, and the RF is 2. We have not run an anti-entropy (manual) repair in a long time. The problem I am facing now is that I started a repair on one of the nodes and it is taking forever. Is that normal? Also, the repair failed once and I am noticing an increase in the disk space for that node; it is ~400 GB now. How can I fix this behavior?
Incremental repairs (the default) will not work well in this scenario. They are meant to be run from the beginning of a cluster's life so that each run never covers too much data. I would strongly recommend using subrange repairs; these can be a little difficult to drive by hand but can be automated with OpsCenter's Repair Service or Reaper.
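For a sense of what a subrange repair invocation looks like, here is a sketch; the token values and keyspace name are made up, and in practice a tool such as Reaper computes the ranges for you:

# Hypothetical subrange: repair only the tokens between -st and -et.
nodetool repair -full -st -9223372036854775808 -et -4611686018427387904 my_keyspace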
You can use nodetool repair -pr -full:
-pr restricts the node to repairing only the data ranges it owns;
-full disables incremental repair; as others have suggested, incremental repair is not a good fit here.
We are using Apache Cassandra v3.0.9 and have 3 DCs. We are experiencing continuous trouble while running nodetool repair, and most of the time the repair process causes big outages. Our three datacenters consist of 4, 4, and 15 nodes. The total data is around 200 GB at RF=3 and we are using LCS. The RAM is 16 GB, of which 6 GB is dedicated to heap. Most of the time when we try to run a full repair, it fails with long GC pauses and the node becoming unresponsive. Other than at repair time, our nodes are fine on heap and GC pauses are hardly 300 ms. I have the following doubts.
Is it still required to run a full repair before gc_grace_seconds elapses, or are incremental repairs alone good enough in Apache Cassandra v3.0.9?
Do I need to run incremental sequential repairs on every node of the cluster, on one node of each datacenter, or on just any one node of the whole cluster? One by one or concurrently?
What are the downsides of a repair failing because some nodes became unresponsive or died during the repair process? Are there any steps to take before starting another repair session?
What are the downsides of not scheduling repairs at all?
We started our Cassandra deployment straight away on version 3.0.9. Is the migration mentioned in the Apache Cassandra documentation still required?
Full repair is still needed. Incremental repair splits SSTables into "repaired" and "unrepaired" sets, and the "repaired" part will not be repaired again later, which is why incremental repair is more efficient. However, if there is data corruption in the "repaired" SSTables, only a full repair can fix it. Our experience is to run incremental repair every day and a full repair only once per grace period. Also, once you run incremental repairs, you can make the grace period longer.
Better to run incremental repair on each node, one by one; you can use a cron job or code a simple scheduler to do that (see the sketch after this list).
If a repair fails, just run it again; there are no side effects that I know of.
If you don't run repairs, your data consistency is at risk as time goes on. Cassandra takes the eventual-consistency approach, which means it doesn't guarantee strong consistency when you write data unless you explicitly request it. Repair is very important for guaranteeing that the replicas in the background are all kept up to date and consistent.
If you already run full repairs in your cluster, you shouldn't need to migrate explicitly.
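As a sketch of the schedule described above (the times and the once-a-month stand-in for "once per grace period" are assumptions to adapt per node):

# Hypothetical crontab for one node: daily incremental repair, plus a
# full repair of this node's primary ranges on the 1st of each month.
30 2 * * *  nodetool repair -pr          # incremental is the default in 2.2+
0  3 1 * *  nodetool repair -pr -full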
This question applies to Cassandra 2.2
I am embarrassed to say that I still do not understand when I should be running nodetool repair, or to be more precise, on which nodes.
So far, I understand that to ensure deletes are handled correctly I should be running a repair at a frequency that is less than GC_GRACE_SECONDS. So that's cool, got that bit.
Q. If I have a cluster of 9 nodes with a replication factor of 3, what type of repair do I run? More importantly, do I run the repair on every node, or just one node?
Q. If I have multiple data centers, does that change how I run repairs. Do I have to run them in each DC, or can it be coordinated from just one node in one DC?
I am hoping this is a trivial question and someone can just tell it how it is.
The nodetool repair command can be run on either a specified node or, if no node is specified, on all of the nodes responsible for that partition range. The node that initiates the repair becomes the coordinator node for the operation.
Run nodetool repair -pr on every node in the cluster to repair all data; otherwise, some ranges of data will not be repaired.
The nodetool repair -pr option is good for repairs across multiple datacenters.
Note: for Cassandra 2.2 and later, a recommended option for repairs across datacenters is -dcpar (--dc-parallel), which repairs the datacenters in parallel.
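Putting that together, a multi-datacenter repair of one node's primary ranges might look like this sketch; the keyspace name is a placeholder:

# Hypothetical invocation: repair this node's primary ranges,
# validating across all datacenters in parallel (Cassandra 2.2+).
nodetool repair -pr -dcpar my_keyspace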
Nodetool Repair
This is the recommendation from datastax.
Run repair frequently enough that every node is repaired before reaching the time specified in the gc_grace_seconds setting. Deleted data is properly handled in the cluster if this requirement is met.
I have a 15-node cluster with RF 3 (using vnodes). We are ingesting data into the 15 nodes from multiple clients. It turns out that one of the nodes has been down for a couple of days and is now almost 200 GB behind; the other nodes have approx. 380 GB each.
What sort of nodetool repair would you recommend here? I know that the repair operation is CPU intensive, and this might affect the rate at which the clients can ingest into the cluster. There seem to be several nodetool repair options, such as -snapshot, -par, etc., and I was wondering if any of these would better suit my current scenario.
I'm trying to run the repair with the least performance hit possible on the cluster.
Thanks,
mskh
Unless you have already taken a snapshot to repair from, the -snapshot option won't do you any good.
Do you have multiple datacenters? If so, you could do a nodetool repair -local, which would only repair your node from nodes in its local datacenter. This is a good way to repair a node without affecting overall cluster performance.
Otherwise, Rock's suggestion of repairing only the first partition range (in parallel) is worth trying as well.
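If you do have multiple datacenters, a local-only repair of the lagging node could look like the following sketch; the keyspace name is illustrative:

# Hypothetical invocation: repair using only replicas in this node's
# datacenter, which limits cross-DC streaming and load.
nodetool repair -local my_keyspace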
You can use nodetool repair -par on each node to ensure minimum impact on an online cluster.
Run nodetool cleanup once the repair is done.