Unable to start Cassandra: "node already exists"

I'm having trouble getting an existing Cassandra node to rejoin the cluster after a reboot (on a new virtual machine instance).
I had a running Cassandra cluster with 4 nodes, all in state "up and normal" according to nodetool status. The nodes run on virtual machines in Azure. I changed the instance type of the virtual machine for 10.0.0.6, which resulted in a reboot of that machine. The machine stayed on 10.0.0.6.
After the reboot I am unable to start Cassandra again. I am getting this exception:
INFO 22:39:07 Handshaking version with /10.0.0.4
INFO 22:39:07 Node /10.0.0.6 is now part of the cluster
INFO 22:39:07 Node /10.0.0.5 is now part of the cluster
INFO 22:39:07 Handshaking version with cassandraprd001/10.0.0.6
INFO 22:39:07 Node /10.0.0.9 is now part of the cluster
INFO 22:39:07 Handshaking version with /10.0.0.5
INFO 22:39:07 Node /10.0.0.4 is now part of the cluster
INFO 22:39:07 InetAddress /10.0.0.6 is now UP
INFO 22:39:07 Handshaking version with /10.0.0.9
INFO 22:39:07 InetAddress /10.0.0.4 is now UP
INFO 22:39:07 InetAddress /10.0.0.9 is now UP
INFO 22:39:07 InetAddress /10.0.0.5 is now UP
ERROR 22:39:08 Exception encountered during startup
java.lang.RuntimeException: A node with address cassandraprd001/10.0.0.6 already exists, cancelling join. Use cassandra.replace_address if you want to replace this node.
at org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:455) ~[apache-cassandra-2.1.0.jar:2.1.0]
at org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:667) ~[apache-cassandra-2.1.0.jar:2.1.0]
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:615) ~[apache-cassandra-2.1.0.jar:2.1.0]
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:509) ~[apache-cassandra-2.1.0.jar:2.1.0]
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:338) [apache-cassandra-2.1.0.jar:2.1.0]
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:457) [apache-cassandra-2.1.0.jar:2.1.0]
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:546) [apache-cassandra-2.1.0.jar:2.1.0]
java.lang.RuntimeException: A node with address cassandraprd001/10.0.0.6 already exists, cancelling join. Use cassandra.replace_address if you want to replace this node.
at org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:455)
at org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:667)
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:615)
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:509)
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:338)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:457)
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:546)
Exception encountered during startup: A node with address cassandraprd001/10.0.0.6 already exists, cancelling join. Use cassandra.replace_address if you want to replace this node.
INFO 22:39:08 Announcing shutdown
I am using Cassandra 2.1.0. I am not replacing a dead node - I am just trying to get the old node up and running again. According to nodetool status (run on the other nodes), all nodes are "up and normal" except 10.0.0.6, which is "down and normal".
How do I get this node up and running again?

First, on another node, use
nodetool status
The output lists the nodes in the cluster. Find the node with the IP that fails to start, take its host ID, and pass it to:
nodetool removenode <node_id>
Then start Cassandra on the failing node.
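For illustration, the sequence might look roughly like this (the host ID is made up; take the real one from the Host ID column of nodetool status, and the service command assumes a package install):
nodetool status
# example row for the failed node (hypothetical host ID):
# DN  10.0.0.6  256.3 KB  256  ?  c0e1a2b3-4d5e-6f70-8192-a3b4c5d6e7f8  rack1
nodetool removenode c0e1a2b3-4d5e-6f70-8192-a3b4c5d6e7f8
# then, back on 10.0.0.6:
sudo service cassandra start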

Quick answer: if the node's IP is 10.200.10.200,
add this
JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=10.200.10.200"
to the end of your
cassandra-env.sh
Don't forget to remove it once you're done.
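A rough sketch of the whole cycle, assuming a package install with cassandra-env.sh under /etc/cassandra (adjust the path for your layout):
echo 'JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=10.200.10.200"' | sudo tee -a /etc/cassandra/cassandra-env.sh
sudo service cassandra start
# wait until the node has finished bootstrapping and shows UN in nodetool status,
# then drop the line again so later restarts don't try to replace the node once more
sudo sed -i '/cassandra.replace_address/d' /etc/cassandra/cassandra-env.sh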

You can look at this blog post: http://blog.alteroot.org/articles/2014-03-12/replace-a-dead-node-in-cassandra.html.
It worked for me. This is a bug in Cassandra: if your node's host_id changed but it kept its old IP, Cassandra throws this exception.
If you use Cassandra 2.x.x, you modify cassandra/conf/cassandra-env.sh.
Finally, don't forget to REMOVE the modifications to cassandra-env.sh once the bootstrap has completed!

Related

cassandra service (3.11.5) stops automatically after it starts/restarts on AWS linux

I have a fresh installation of Cassandra on a new AWS Linux instance (t3.xlarge). When I run
sudo service cassandra start
or
sudo service cassandra restart
the service stops automatically after 1 or 2 seconds. I looked into the logs and found the following.
I'm not sure why; I haven't changed any snitch-related config and it has always been SimpleSnitch. I don't run multiple Cassandra instances, just a single one on one EC2 node.
Logs
INFO [main] 2020-02-12 17:40:50,833 ColumnFamilyStore.java:426 - Initializing system.schema_aggregates
INFO [main] 2020-02-12 17:40:50,836 ViewManager.java:137 - Not submitting build tasks for views in keyspace system as storage service is not initialized
INFO [main] 2020-02-12 17:40:51,094 ApproximateTime.java:44 - Scheduling approximate time-check task with a precision of 10 milliseconds
ERROR [main] 2020-02-12 17:40:51,137 CassandraDaemon.java:759 - Cannot start node if snitch's data center (datacenter1) differs from previous data center (dc1). Please fix the snitch configuration, decommission and rebootstrap this node or use the flag -Dcassandra.ignore_dc=true.
Installation steps
sudo curl -OL https://www.apache.org/dist/cassandra/redhat/311x/cassandra-3.11.5-1.noarch.rpm
sudo rpm -i cassandra-3.11.5-1.noarch.rpm
sudo pip install cassandra-driver
export CQLSH_NO_BUNDLED=true
sudo chkconfig --levels 3 cassandra on
The issue is in your log file:
ERROR [main] 2020-02-12 17:40:51,137 CassandraDaemon.java:759 - Cannot start node if snitch's data center (datacenter1) differs from previous data center (dc1). Please fix the snitch configuration, decommission and rebootstrap this node or use the flag -Dcassandra.ignore_dc=true.
It seems that you started the cluster, stopped it and renamed the datacenter from dc1 to datacenter1.
In order to fix it (a sketch of both options follows below):
If no data is stored yet, delete the data directories and start again.
If data is stored, rename the data center back to dc1 in the config.
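A minimal sketch of both options, assuming the default package layout (data under /var/lib/cassandra, config under /etc/cassandra/conf); adjust the paths for your install:
# option 1: nothing worth keeping - wipe the local state
sudo service cassandra stop
sudo rm -rf /var/lib/cassandra/data /var/lib/cassandra/commitlog /var/lib/cassandra/saved_caches /var/lib/cassandra/hints
sudo service cassandra start
# option 2: keep the data - put the data center name back to dc1
# (applies when the snitch reads cassandra-rackdc.properties, e.g. GossipingPropertyFileSnitch)
sudo sed -i 's/^dc=.*/dc=dc1/' /etc/cassandra/conf/cassandra-rackdc.properties
sudo service cassandra restart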
I had the same problem, where the cassandra service stopped immediately after it was started.
In the Cassandra configuration file located at /etc/cassandra/cassandra.yaml, change the cluster_name back to the previous one, like this:
...
# The name of the cluster. This is mainly used to prevent machines in
# one logical cluster from joining another.
cluster_name: 'dc1'
# This defines the number of tokens randomly assigned to this node on the ring
# The more tokens, relative to other nodes, the larger the proportion of data
...

Apache Cassandra 3.7 snitch issue cannot start data center

I am using Ubuntu 14.04 with Apache Cassandra 3.7. I am trying to start it but get the following error message:
ERROR [main] 2016-07-15 15:22:10,627 CassandraDaemon.java:731 - Cannot start node if snitch's data center (dc1) differs from previous data center (datacenter1). Please fix the snitch configuration, decommission and rebootstrap this node or use the flag -Dcassandra.ignore_dc=true.
I know I can set -Dcassandra.ignore_dc=true, but that is not a fix, it's a band-aid for development use only, and this is supposed to run in production. I tried clearing out all the files and folders in /var/lib/cassandra (every single file and folder), started Cassandra again, and still got the same error message... any other ideas?
Change the dc entry in the file
/etc/cassandra/cassandra-rackdc.properties
from dc1 to datacenter1
on all nodes,
and then do a rolling restart of the nodes.
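On each node, the relevant lines in /etc/cassandra/cassandra-rackdc.properties would then look roughly like this (the rack value here is only an example):
dc=datacenter1
rack=rack1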
If you have just switched to GossipingPropertyFileSnitch, start Cassandra with the option
-Dcassandra.ignore_dc=true
If it starts successfully, execute:
nodetool repair
nodetool cleanup
Afterwards, Cassandra should be able to start normally without the ignore option.
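One way to pass the flag without editing scripts permanently, assuming a tarball install started via bin/cassandra (with a package/service install, temporarily add it to JVM_OPTS in cassandra-env.sh instead and remove it afterwards):
bin/cassandra -Dcassandra.ignore_dc=true
# once the node is up, run the nodetool repair and cleanup steps above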
I faced this issue while upgrading Apache Cassandra from 3.11.1 to 3.11.4.
cassandra.yaml
old config: endpoint_snitch: GossipingPropertyFileSnitch
new config: endpoint_snitch: SimpleSnitch (I changed it back to GossipingPropertyFileSnitch)
cassandra-rackdc.properties
old config: dc=Dc1 rack=Rack1
new config: dc=dc rack=rack (I changed these back to Dc1 and Rack1)
This resolved my issue.
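In other words, the restored settings looked roughly like this (values taken from that setup; in the properties file they are written with '='):
# cassandra.yaml
endpoint_snitch: GossipingPropertyFileSnitch
# cassandra-rackdc.properties
dc=Dc1
rack=Rack1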

How to connect master and slaves in Apache-Spark? (Standalone Mode)

I'm using the Spark Standalone Mode tutorial page to install Spark in standalone mode.
1- I have started a master by:
./sbin/start-master.sh
2- I have started a worker by:
./bin/spark-class org.apache.spark.deploy.worker.Worker spark://ubuntu:7077
Note: spark://ubuntu:7077 is my master name, which I can see in the Master WebUI.
Problem: With the second command, a worker starts successfully, but it cannot associate with the master. It retries repeatedly and then gives this message:
15/02/08 11:30:04 WARN Remoting: Tried to associate with unreachable remote address [akka.tcp://sparkMaster@ubuntu:7077]. Address is now gated for 5000 ms, all messages to this address will be delivered to dead letters. Reason: Connection refused: ubuntu/127.0.1.1:7077
15/02/08 11:30:04 INFO RemoteActorRefProvider$RemoteDeadLetterActorRef: Message [org.apache.spark.deploy.DeployMessages$RegisterWorker] from Actor[akka://sparkWorker/user/Worker#-1296628173] to Actor[akka://sparkWorker/deadLetters] was not delivered. [20] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
15/02/08 11:31:15 ERROR Worker: All masters are unresponsive! Giving up.
What is the problem?
Thanks
I usually start from the spark-env.sh template and set the properties I need. For a simple cluster you need:
SPARK_MASTER_IP
Then create a file called "slaves" in the same directory as spark-env.sh and add the slaves' IPs, one per line. Make sure you can reach all slaves through ssh.
Finally, copy this configuration to every machine in your cluster. Then start the entire cluster by executing the start-all.sh script, and try spark-shell to check your configuration.
> sbin/start-all.sh
> bin/spark-shell
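A minimal sketch of that configuration (the IPs are placeholders):
# conf/spark-env.sh (copied from conf/spark-env.sh.template)
export SPARK_MASTER_IP=192.168.1.10
# conf/slaves - one worker IP or hostname per line
192.168.1.11
192.168.1.12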
You can set export SPARK_LOCAL_IP="Your-IP" in $SPARK_HOME/conf/spark-env.sh to set the IP address Spark binds to on this node.
In my case, using Spark 2.4.7 in standalone mode, I had created a passwordless ssh key using ssh-keygen, but was still asked for the worker's password when starting the cluster.
What I did was follow the instructions here
https://www.cyberciti.biz/faq/how-to-set-up-ssh-keys-on-linux-unix/
This line solved the problem:
ssh-copy-id -i $HOME/.ssh/id_rsa.pub user@server-ip
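For reference, the full sequence is usually something like this (user and host are placeholders):
ssh-keygen -t rsa -b 4096                              # generate a key pair if you don't have one
ssh-copy-id -i $HOME/.ssh/id_rsa.pub user@worker-host  # copy the public key to each worker
ssh user@worker-host 'echo ok'                         # should no longer prompt for a password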

Not able to upgrade from cassandra 2.0.3 to 2.0.9

I used the same steps I had previously asked about for upgrading Cassandra from 2.0.3 to 2.0.7 in order to upgrade to 2.0.9.
Regarding upgrade from 2.0.3 to 2.0.7
But I am facing some problems with gossip.
My scenario:
X is running 2.0.3
Y is now started on 2.0.9
I copied the data files from Y's 2.0.3 data directory to Y's 2.0.9 data directory and made the changes as specified in the link above.
Now, when I start 2.0.9, I get the following in the logs of 2.0.9:
INFO [HANDSHAKE-/172.20.9.23] 2014-08-19 14:37:28,606 OutboundTcpConnection.java (line 386) Handshaking version with /X
INFO [FlushWriter:1] 2014-08-19 14:37:28,626 Memtable.java (line 403) Completed flushing /home/sas/apache-cassandra-2.0.9/var/lib/cassandra/data/system/local/system-local-jb-564-Data.db (5279 bytes) for commitlog position ReplayPosition(segmentId=1408439241980, position=355252)
INFO [main] 2014-08-19 14:37:28,638 StorageService.java (line 1558) Node /Y state jump to normal
INFO [main] 2014-08-19 14:37:28,653 CassandraDaemon.java (line 543) Waiting for gossip to settle before accepting client requests...
INFO [main] 2014-08-19 14:37:36,654 CassandraDaemon.java (line 575) No gossip backlog; proceeding
It doesn't throw any error in the logs either. I can clearly see that the problem is with gossip, but I am not able to find out why.
nodetool status returns "?" for the load of X.
Y seems to be up on 2.0.9, but X is not working with Y.
Moreover, Y is not detected on X.
What am I doing wrong in the upgrade? Any idea how to make this work?

upgrade from cassandra 1.2.5 to 1.2.6 failing

I have a cluster running DataStax Cassandra 1.2.5 and it works fine. Because of a vnodes and leveled compaction strategy issue, I tried upgrading it to 1.2.6.
So the upgrade involved:
1 - stopping all the nodes
2 - deleting 1.2.5 rpm
3 - installing 1.2.6 rpm
4 - fixing cassandra.yaml
5 - starting cassandra.
Problem statement: the problem now is that all the nodes are up and running, but not in one cluster. Each node is operating in its own cluster, even though the seeds in the yaml point to the original seed.
nodetool status also shows just the one node (the node we are on).
The system log shows one error:
ERROR [WRITE-/10.93.3.46] 2013-10-21 19:43:29,101 CassandraDaemon.java (line 192)
Exception in thread Thread[WRITE-/10.10.10.10,5,main]
java.lang.NoClassDefFoundError: Could not initialize class org.xerial.snappy.Snappy
at org.xerial.snappy.SnappyOutputStream.<init>(SnappyOutputStream.java:79)
at org.xerial.snappy.SnappyOutputStream.<init>(SnappyOutputStream.java:66)
at org.apache.cassandra.net.OutboundTcpConnection.connect(OutboundTcpConnection.java:351)
at org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:143)
(10.10.10.10 is the seed IP)
Any help on how to get past this?
Try setting internode_compression to none. This will disable compression between nodes, which is failing because Snappy cannot initialize:
internode_compression: none
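internode_compression lives in cassandra.yaml, so the change would look something like this on each node (the path depends on your install), followed by a restart:
# /etc/cassandra/conf/cassandra.yaml
internode_compression: none
# then restart the node, e.g. sudo service cassandra restart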
