cassandra - Saved cluster name Test Cluster != configured name

How am I supposed to boot a new Cassandra node when I get this error?
INFO [SSTableBatchOpen:1] 2014-02-25 01:51:17,132 SSTableReader.java (line 223) Opening /var/lib/cassandra/data/system/local/system-local-jb-5 (5725 bytes)
ERROR [main] 2014-02-25 01:51:17,377 CassandraDaemon.java (line 237) Fatal exception during initialization
org.apache.cassandra.exceptions.ConfigurationException: Saved cluster name Test Cluster != configured name thisisstupid
at org.apache.cassandra.db.SystemKeyspace.checkHealth(SystemKeyspace.java:542)
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:233)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:462)
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:552)
Name of cluster in the cassandra.yaml file is:
cluster_name: 'thisisstupid'
How do I resolve this?

You can rename the cluster without deleting data by updating its name in the system.local table (but you have to do this on each node):
cqlsh> UPDATE system.local SET cluster_name = 'test' where key='local';
# flush the sstables to persist the update
$ ./nodetool flush
Finally you need to rename the cluster to the new name in cassandra.yaml (again, on each node).
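A minimal per-node sketch of the whole procedure (assuming a package layout with cassandra.yaml at /etc/cassandra/cassandra.yaml and the service managed via service; the new name is hypothetical):
# run on every node in the cluster
$ cqlsh -e "UPDATE system.local SET cluster_name = 'NewName' WHERE key='local';"
$ nodetool flush system    # persist the updated system.local row to disk
$ sudo sed -i "s/^cluster_name:.*/cluster_name: 'NewName'/" /etc/cassandra/cassandra.yaml
$ sudo service cassandra restart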

The UPDATE SET cluster_name commands above did not work for me. What did work was following the instructions from the DataStax documentation on Initializing a multiple node cluster:
sudo service cassandra stop
sudo rm -rf /var/lib/cassandra/data/system/*
sudo vi /etc/cassandra/cassandra.yaml, and set up the proper parameters (sketched below)
sudo service cassandra start
nodetool status
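For reference, a minimal sketch of those "proper parameters" in cassandra.yaml (all names and IPs here are hypothetical; adjust to your topology):
cluster_name: 'MyCluster'            # must be identical on every node
listen_address: 10.0.0.1             # this node's own IP
rpc_address: 10.0.0.1
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "10.0.0.1,10.0.0.2"    # same seed list on all nodes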
For a good cluster node setup, I found this blog post to be very useful.

I had the same issue with DataStax Enterprise 4.7. I resolved it with the instructions above:
Change the cluster name back to "Test Cluster" in cassandra.yaml so the node can start, then start the cluster and update the saved name:
cqlsh> UPDATE system.local SET cluster_name = 'Cassendra Cluster' where key='local';
cqlsh> exit;
$ nodetool flush system
Stop the cluster:
$ sudo service dse stop; sudo service datastax-agent stop
Edit the file:
$ sudo vi /etc/dse/cassandra/cassandra.yaml
cluster_name: 'Cassendra Cluster'
Start the cluster:
$ sudo service dse start; sudo service datastax-agent start
Check the installation log:
$ cat /var/log/cassandra/output.log
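To confirm the rename took effect, a quick check against that log (a sketch; path as above) is:
$ grep -i "Saved cluster name" /var/log/cassandra/output.log || echo "no cluster name mismatch logged"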

For Cassandra 3.0.5, I tried the answer by Lyuben Todorov but it was missing a key part:
$ ./nodetool flush
should be:
$ ./nodetool flush system
as described here.

For Cassandra 3.7 you should replace ./nodetool flush with ./nodetool flush system. Otherwise it won't work.

If cassandra is not running, you can wipe the data folder.
rm -rf /usr/lib/cassandra/data/*
This will clear the system tables which had the wrong cluster name.
When you restart Cassandra with the correct cluster name in your cassandra.yaml, everything will be regenerated and you should be good.
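A minimal sketch of that sequence, assuming the default package layout under /var/lib/cassandra (the pgrep guard is just a safety check that the process is really gone):
$ pgrep -f CassandraDaemon && echo "Cassandra is still running; stop it first" \
    || sudo rm -rf /var/lib/cassandra/data/*
$ sudo service cassandra start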

For Bitnami-certified Cassandra 3.11.5, here's the solution that worked for me:
Stop the Cassandra service
sudo /opt/bitnami/ctlscript.sh stop cassandra
Remove the system data
sudo rm -rf /opt/bitnami/cassandra/data/data/system/*
Start the Cassandra Service
sudo /opt/bitnami/ctlscript.sh start cassandra

Related

cassandra service (3.11.5) stops automatically after it starts/restarts on AWS Linux

The cassandra service (3.11.5) stops automatically after it starts or restarts on AWS Linux.
I have a fresh installation of Cassandra on a new AWS Linux instance (t3.xlarge), and after running
sudo service cassandra start
or
sudo service cassandra restart
the service stops automatically after 1 or 2 seconds. I looked into the logs and found the entries below.
I am not sure why; I haven't changed any snitch-related configs, and it has always been SimpleSnitch. I don't have multiple Cassandra instances, just a single one on EC2.
Logs
INFO [main] 2020-02-12 17:40:50,833 ColumnFamilyStore.java:426 - Initializing system.schema_aggregates
INFO [main] 2020-02-12 17:40:50,836 ViewManager.java:137 - Not submitting build tasks for views in keyspace system as storage service is not initialized
INFO [main] 2020-02-12 17:40:51,094 ApproximateTime.java:44 - Scheduling approximate time-check task with a precision of 10 milliseconds
ERROR [main] 2020-02-12 17:40:51,137 CassandraDaemon.java:759 - Cannot start node if snitch's data center (datacenter1) differs from previous data center (dc1). Please fix the snitch configuration, decommission and rebootstrap this node or use the flag -Dcassandra.ignore_dc=true.
Installation steps
sudo curl -OL https://www.apache.org/dist/cassandra/redhat/311x/cassandra-3.11.5-1.noarch.rpm
sudo rpm -i cassandra-3.11.5-1.noarch.rpm
sudo pip install cassandra-driver
export CQLSH_NO_BUNDLED=true
sudo chkconfig --level 3 cassandra on
The issue is in your log file:
ERROR [main] 2020-02-12 17:40:51,137 CassandraDaemon.java:759 - Cannot start node if snitch's data center (datacenter1) differs from previous data center (dc1). Please fix the snitch configuration, decommission and rebootstrap this node or use the flag -Dcassandra.ignore_dc=true.
It seems that you started the cluster, stopped it and renamed the datacenter from dc1 to datacenter1.
In order to fix it:
If no data is stored yet, delete the data directories.
If data is stored, rename the datacenter back to dc1 in the config.
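A minimal sketch of both options (paths assume the packaged layout; the sed edit only applies if the node uses GossipingPropertyFileSnitch):
# Option A: no data worth keeping, wipe and let the node re-bootstrap
$ sudo rm -rf /var/lib/cassandra/data/* /var/lib/cassandra/commitlog/* /var/lib/cassandra/saved_caches/*
# Option B: keep the data, put the old datacenter name back
$ sudo sed -i 's/^dc=.*/dc=dc1/' /etc/cassandra/cassandra-rackdc.properties
$ sudo service cassandra start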
I had the same problem, where the cassandra service immediately stopped after it was started.
In the cassandra configuration file located at /etc/cassandra/cassandra.yaml, change the cluster_name back to the previous one, like this:
...
# The name of the cluster. This is mainly used to prevent machines in
# one logical cluster from joining another.
cluster_name: 'dc1'
# This defines the number of tokens randomly assigned to this node on the ring
# The more tokens, relative to other nodes, the larger the proportion of data
...

Apache Cassandra 3.7 snitch issue cannot start data center

I am using Ubuntu 14.04 with Apache Cassandra 3.7. I am trying to start it but get the following error message:
ERROR [main] 2016-07-15 15:22:10,627 CassandraDaemon.java:731 - Cannot start node if snitch's data center (dc1) differs from previous data center (datacenter1). Please fix the snitch configuration, decommission and rebootstrap this node or use the flag -Dcassandra.ignore_dc=true.
I know I can set -Dcassandra.ignore_dc=true, BUT that is not a fix, it's a band-aid meant for development use only, and this is supposed to be in production. I tried to clear out all the files and folders in /var/lib/cassandra, I mean every single file and folder, started Apache Cassandra again, and still got the same error message... any other ideas?
Change the dc entry in /etc/cassandra/cassandra-rackdc.properties from dc1 to datacenter1 on all nodes, and then do a rolling restart of the nodes.
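For reference, a minimal cassandra-rackdc.properties after the change might look like this (the rack name here is hypothetical):
# /etc/cassandra/cassandra-rackdc.properties
dc=datacenter1
rack=rack1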
If you have just switched to GossipingPropertyFileSnitch, start Cassandra with the option
-Dcassandra.ignore_dc=true
If it starts successfully, execute:
nodetool repair
nodetool cleanup
Afterwards, Cassandra should be able to start normally without the ignore option.
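One way to pass the flag is via JVM_OPTS (a sketch, assuming the packaged cassandra-env.sh; remove the line again once the node has started cleanly):
# add near the end of /etc/cassandra/cassandra-env.sh
JVM_OPTS="$JVM_OPTS -Dcassandra.ignore_dc=true"
# then restart and repair:
$ sudo service cassandra restart
$ nodetool repair
$ nodetool cleanup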
I faced this issue while upgrading Apache Cassandra from 3.11.1 to 3.11.4.
cassandra.yaml
old config: endpoint_snitch: GossipingPropertyFileSnitch
new config: endpoint_snitch: SimpleSnitch (changed it back to GossipingPropertyFileSnitch)
cassandra-rackdc.properties
old config: dc=Dc1 rack=Rack1
new config: dc=dc rack=rack (changed these back to Dc1 and Rack1)
This resolved my issue.
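In other words, the restored configuration looked like this (a sketch assembled from the values above):
# cassandra.yaml
endpoint_snitch: GossipingPropertyFileSnitch
# cassandra-rackdc.properties
dc=Dc1
rack=Rack1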

Unable to run nodetool on remote and local server in cassandra

nodetool -h <ipaddress> -p 7199 status
Error connecting to remote Jmx agent!
java.rmi.NoSuchObjectException: no such object in table
I am getting the above error when I try to run nodetool status or any other nodetool command. Cassandra is running fine, and nodetool status on other nodes in the cluster shows this node in UN state. I tried to add the below entry in the cassandra-env.sh file but still got the same error:
JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname="
You have to use the node's listen_address as the nodetool host IP:
nodetool -h <listen_address> -p 7199 status
or if it is not working, try with sudo.
In the cassandra-env.sh file it is noted that JMX by default will only work from localhost. To run nodetool from a remote host you need to uncomment and provide values for the JMX parameters in that file.
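A minimal sketch of enabling remote JMX in cassandra-env.sh (the IP is hypothetical; disabling JMX authentication is only sensible on a trusted network):
# /etc/cassandra/cassandra-env.sh
LOCAL_JMX=no                         # allow JMX connections from other hosts
JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=10.0.0.5"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=false"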
Also try
$ grep Error /var/log/cassandra/cassandra.log
to see if it shows any errors regarding JMX connectivity.

Cassandra Cluster configuration

I am trying to configure two Windows servers on my network as a Cassandra cluster.
I did some reading on various sites and changed the below in cassandra.yaml:
after changing the default value of 127.0.0.1 to the actual IP, the Cassandra service is not starting.
I also added a mapping of the actual IP to localhost in the (Windows) hosts file.
After making the above change, the service comes up when I start it, then stops immediately.
The reason I am changing this IP is to make this a cluster with a two node setup.
Please let me know if I missed something.
Version: DataStax Community edition of Cassandra
Server: Windows
Thx
Muthu
Message from Cassandra.txt in logs dir:
ERROR [main] 2014-09-18 11:43:12,155 DatabaseDescriptor.java (line 116) Fatal configuration error
org.apache.cassandra.exceptions.ConfigurationException: Invalid yaml Caused by: Can't construct a java object for tag:yaml.org,2002:org.apache.cassandra.config.Config; exception=Cannot create property=seed_provider for JavaBean=org.apache.cassandra.config.Config#34e5190a; No suitable constructor with 2 arguments found for class org.apache.cassandra.config.SeedProviderDef in 'reader', line 8, column 1: cluster_name: 'Test Cluster'
If you want to create a Cassandra cluster you must have at least two nodes and configure /etc/cassandra/cassandra.yaml:
cassandra.yaml
cluster_name: 'Some Cluster Name'
listen_address: [Current IP]
rpc_address: [Current IP]
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "[Current IP], [Remote IP]"
Note: seeds must contain at least two IPs, and those nodes must be reachable from each other.
Clean and start Cassandra instance
sudo rm -rf /var/lib/cassandra/* /var/log/cassandra/*
Note: Cassandra instance must be killed before cleaning those folders.
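Once both nodes are back up, a quick sanity check from either node (a sketch) is:
$ nodetool status    # both nodes should eventually show as UN (Up/Normal)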

Node is unreachable in single node Cassandra installation

I have a problem with a single node Cassandra installation.
I can start it without any errors in the log.
I can create a keyspace, create tables, insert and delete data.
However, truncate is not working:
cqlsh> CREATE KEYSPACE mykeyspace WITH REPLICATION = {'class' : 'SimpleStrategy', 'replication_factor': 1};
cqlsh> use mykeyspace;
cqlsh:mykeyspace> create table test1 (num int, primary key (num));
cqlsh:mykeyspace> insert into test1 (num) values (12);
cqlsh:mykeyspace> select * from test1;
num
-----
12
(1 rows)
cqlsh:mykeyspace> truncate test1;
Unable to complete request: one or more nodes were unavailable.
Also, if I try to run nodetool describecluster, it doesn't return a complete response:
[XXXX#XXXX dsc-cassandra-2.0.6]$ ./bin/nodetool describecluster
Cluster Information:
Name: Test Cluster
Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
Schema versions:
UNREACHABLE: [127.0.0.1]
I'm using
Cassandra DSC 2.0.6.
Red Hat 5.8.
java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)
I get responses for ping 127.0.0.1 and ping localhost.
I checked all the ports that I am aware Cassandra may need (7000, 9160, 7199, 9042) using telnet, for example:
telnet 127.0.0.1 7199
telnet localhost 7199
I can connect to these ports.
I'm using the default cassandra.yaml. These are the lines where either an IP or a hostname shows up:
listen_address: localhost
rpc_address: localhost
seed_provider:
- class_name: org.apache.cassandra.locator.SimpleSeedProvider
parameters:
- seeds: "127.0.0.1"
I also looked into the source code. I believe the problem is somewhere near the method org.apache.cassandra.service.StorageProxyMBean.describeSchemaVersions(). Most likely I get no response to the SCHEMA_CHECK message.
I tried to enable TRACE logging in log4j for nodetool (conf/log4j-tools.properties) to get more information about the issue, but somehow log4j didn't start logging (it did create the file that I set in the appender, but the file was empty).
There must be something specific to this environment because I can't repeat this problem in any other environments. So I can't figure out what's causing it.
The problem was that Cassandra couldn't load snappy.
org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null
at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:239)
at org.xerial.snappy.Snappy.<clinit>(Snappy.java:48)
at org.xerial.snappy.SnappyOutputStream.<init>(SnappyOutputStream.java:79)
at org.xerial.snappy.SnappyOutputStream.<init>(SnappyOutputStream.java:66)
at org.apache.cassandra.net.OutboundTcpConnection.connect(OutboundTcpConnection.java:359)
at org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:150)
I turned off internode compression in cassandra.yaml:
internode_compression: none
Now both nodetool describecluster and truncate work.
I also found a similar post here: Cassandra Startup Error 1.2.6 on Linux x86_64
Since I can't install another glibc on this machine for the sake of testing, I downloaded snappy-java-1.0.4.1.jar and replaced libsnappyjava.so inside my snappy-java-1.0.5.jar.
With this jar I was able to run cassandra with
internode_compression: all
(I have glibc 2.5 installed)
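A hypothetical sketch of that jar swap (the path inside the jar is where snappy-java bundles its native libraries; the exact arch directory may vary by version):
# extract the older native lib with its full path, then update the newer jar in place
$ unzip snappy-java-1.0.4.1.jar org/xerial/snappy/native/Linux/x86_64/libsnappyjava.so
$ jar uf snappy-java-1.0.5.jar org/xerial/snappy/native/Linux/x86_64/libsnappyjava.so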
