Unable to run nodetool on remote and local server in Cassandra

nodetool -h <ipaddress> -p 7199 status
Error connecting to remote JMX agent!
java.rmi.NoSuchObjectException: no such object in table
I am getting the above error when I try to run nodetool status or any other nodetool command. Cassandra is running fine, and nodetool status on the other nodes in the cluster shows this node in UN state. I tried adding the entry below to the cassandra-env.sh file, but I still get the same error:
JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname="

You have to use your listen_address as the nodetool host IP:
nodetool -h <listen_address> -p 7199 status
If that does not work, try with sudo.

In the cassandra-env.sh file it is noted that, by default, JMX only works from localhost. To run nodetool from a remote host you need to uncomment and set the remote JMX parameters described in that file.
Also try
cat /var/log/cassandra/cassandra.log | grep Error
and see if it reports any errors regarding JMX connectivity.
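For reference, a minimal sketch of the cassandra-env.sh settings that usually need changing for remote JMX access (LOCAL_JMX and the jmxremote flags exist in the stock cassandra-env.sh; the address placeholder is an assumption):
LOCAL_JMX=no
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.port=7199"
# disables JMX authentication; only reasonable on a trusted network
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=false"
JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=<public ip of this node>"
Note that with authenticate=false there is no password check at all, which is why the default configuration restricts JMX to localhost.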

Related


Cassandra nodetool throws an error after updating OpenJDK
nodetool status
nodetool: Failed to connect to '127.0.0.1:7199' - URISyntaxException: 'Malformed IPv6 address at index 7: rmi://[127.0.0.1]:7199'.
This also affects the current official Docker Hub image (https://hub.docker.com/_/cassandra), version 3.11.12.
How can I fix this error?
There seems to be an issue with "improved" IPv6 address parsing in the latest JDK update.
The workaround is to use the IPv4-mapped IPv6 notation of localhost:
nodetool -h ::FFFF:127.0.0.1 status
You can upgrade to Apache Cassandra 3.11.13 or use this command:
nodetool -Dcom.sun.jndi.rmiURLParsing=legacy status
Another way is to add -Dcom.sun.jndi.rmiURLParsing=legacy to the JAVA_TOOL_OPTIONS environment variable.
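For example, a sketch for a shell session, or for the official Docker image mentioned above (the docker invocation itself is an assumption, not from the original report):
export JAVA_TOOL_OPTIONS="-Dcom.sun.jndi.rmiURLParsing=legacy"
nodetool status
# or bake it into the container environment:
docker run -d -e JAVA_TOOL_OPTIONS="-Dcom.sun.jndi.rmiURLParsing=legacy" cassandra:3.11.12
The JVM picks up JAVA_TOOL_OPTIONS automatically, so this applies the legacy parser to every Java tool started in that environment.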

MemSQL node failed to start

When I tried to start my cluster today, I got the following error:
MemSQL node CCBA806E30C9A8E4430ADFEAF5DF435ED91B8F7F failed to start:
Failed to connect to MemSQL node
CCBA806E30C9A8E4430ADFEAF5DF435ED91B8F7F: No error in tracelog
And the worst part is that I am not able to query anything. I get this error whenever I query:
ERROR 1777 (HY000): Partition memsqldb:0 has no master instance.
What is the exact problem here?
I tried running memsql-ops report:
memsql-ops report
2015-10-10 09:51:10: Jf3fc7e04 [INFO] Building a diagnostic report for agent A0cd4d79612c64a1298c66f7d346fe44a
2015-10-10 09:51:20: Jf3fc7e04 [WARNING] Could not get info for MemSQL node 378F1F82DE7DE6A3D4552A381F316399A579DF18: [Errno 2003] Can't connect to MySQL server on '192.168.104.184' (4)
I figured out that the problem was with IPs, as my IP at the office is different from the one at home. So I did the following:
iptables -t nat -I OUTPUT --dest 192.168.104.184 -j DNAT --to-dest 10.69.0.2
memsql-ops cluster-start
This solved the problem.
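For completeness, a sketch of how such a temporary rule can be checked and removed again (standard iptables usage, addresses as above):
# confirm the redirect is in place
iptables -t nat -L OUTPUT -n
# delete it once the original IP is reachable again (same rule, -D instead of -I)
iptables -t nat -D OUTPUT --dest 192.168.104.184 -j DNAT --to-dest 10.69.0.2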

Apache Cassandra - cqlsh operation timeout

I am trying to start cqlsh and this is what I get:
/bin$ ./cqlsh
Connection error: ('Unable to connect to any servers', {'127.0.0.1':
OperationTimedOut('errors=None, last_host=None',)})
I tried removing ~/.cassandra, but that did not work. I also compared cassandra.yaml with a version that worked.
Any ideas?
Posting on this old thread as a memo for others, as I couldn't manage to find any info on resolving these symptoms without debugging until now:
I got the same issue with a slow testing cluster, and I resolved it by setting a missing control_connection_timeout kwarg in the Cluster() init in the cqlsh.py file.
issue opened and patch proposal provided at https://issues.apache.org/jira/browse/CASSANDRA-10959
Depending on your version and configuration, check the values specified for listen_address and/or rpc_address in your cassandra.yaml. If they are defined to anything other than localhost, you will need to provide that address when connecting with cqlsh.
$ grep listen_address: /etc/cassandra/cassandra.yaml
listen_address: 210.156.89.15
$ cqlsh 210.156.89.15 -u aploetz -p aploetz
Connected to PermanentWaves at 210.156.89.15.
[cqlsh 5.0.1 | Cassandra 2.1.4 | CQL spec 3.2.0 | Native protocol v3]
Use HELP for help.
aploetz@cqlsh>
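The same check applies to rpc_address (same assumed file path as above):
$ grep rpc_address: /etc/cassandra/cassandra.yaml
Since cqlsh speaks the native protocol, rpc_address is the address that actually matters for this connection error; listen_address is used for inter-node traffic.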

Node is unreachable in single node Cassandra installation

I have a problem with a single node Cassandra installation.
I can start it without any errors in the log.
I can create a keyspace, create tables, insert and delete data.
However, truncate is not working:
cqlsh> CREATE KEYSPACE mykeyspace WITH REPLICATION = {'class' : 'SimpleStrategy', 'replication_factor': 1};
cqlsh> use mykeyspace;
cqlsh:mykeyspace> create table test1 (num int, primary key (num));
cqlsh:mykeyspace> insert into test1 (num) values (12);
cqlsh:mykeyspace> select * from test1;
num
-----
12
(1 rows)
cqlsh:mykeyspace> truncate test1;
Unable to complete request: one or more nodes were unavailable.
Also, if I try to run nodetool describecluster, it doesn't return a complete response:
[XXXX#XXXX dsc-cassandra-2.0.6]$ ./bin/nodetool describecluster
Cluster Information:
Name: Test Cluster
Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
Schema versions:
UNREACHABLE: [127.0.0.1]
I'm using
Cassandra DSC 2.0.6.
Red Hat 5.8.
java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)
I get responses for ping 127.0.0.1 and ping localhost
I checked all the ports that I am aware Cassandra may need (7000, 9160, 7199, 9042) using telnet, for example:
telnet 127.0.0.1 7199
telnet localhost 7199
I can connect to these ports.
I'm using the default cassandra.yaml. These are the lines where either an IP or a hostname shows up:
listen_address: localhost
rpc_address: localhost
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "127.0.0.1"
I also looked into the source code. I believe the problem is close to the method org.apache.cassandra.service.StorageProxyMBean.describeSchemaVersions(); most likely I get no response to the SCHEMA_CHECK message.
I tried to enable TRACE logging in log4j for nodetool (conf/log4j-tools.properties) to get more information about the issue, but somehow log4j didn't start logging (it did create the file that I set in the appender, but the file stayed empty).
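For reference, a minimal sketch of what conf/log4j-tools.properties would contain for that (standard log4j 1.x properties; the output path is an arbitrary choice):
log4j.rootLogger=TRACE,R
log4j.appender.R=org.apache.log4j.FileAppender
log4j.appender.R.File=/tmp/nodetool-trace.log
log4j.appender.R.layout=org.apache.log4j.PatternLayout
log4j.appender.R.layout.ConversionPattern=%5p %d{HH:mm:ss,SSS} %m%n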
There must be something specific to this environment because I can't repeat this problem in any other environments. So I can't figure out what's causing it.
The problem was that Cassandra couldn't load snappy:
org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null
at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:239)
at org.xerial.snappy.Snappy.<clinit>(Snappy.java:48)
at org.xerial.snappy.SnappyOutputStream.<init>(SnappyOutputStream.java:79)
at org.xerial.snappy.SnappyOutputStream.<init>(SnappyOutputStream.java:66)
at org.apache.cassandra.net.OutboundTcpConnection.connect(OutboundTcpConnection.java:359)
at org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:150)
I turned off compression in cassandra.yaml:
internode_compression: none
Now both nodetool describecluster and truncate work.
I also found a similar post here: Cassandra Startup Error 1.2.6 on Linux x86_64
Since I can't install another glibc on this machine just for the sake of testing, I downloaded snappy-java-1.0.4.1.jar and replaced the libsnappyjava.so in my snappy-java-1.0.5.jar with the one from it.
With this jar I was able to run Cassandra with
internode_compression: all
(I have glibc 2.5 installed.)
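A sketch of that jar swap with the jar tool (the entry path is where snappy-java keeps its Linux x86_64 native library; the Cassandra lib directory is a placeholder):
cd /path/to/cassandra/lib
# extract the older native library from the 1.0.4.1 jar
jar xf snappy-java-1.0.4.1.jar org/xerial/snappy/native/Linux/x86_64/libsnappyjava.so
# update the newer jar with it, overwriting the incompatible .so
jar uf snappy-java-1.0.5.jar org/xerial/snappy/native/Linux/x86_64/libsnappyjava.so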

Cassandra: Saved cluster name Test Cluster != configured name

How am I supposed to boot a new Cassandra node when I get this error?
INFO [SSTableBatchOpen:1] 2014-02-25 01:51:17,132 SSTableReader.java (line 223) Opening /var/lib/cassandra/data/system/local/system-local-jb-5 (5725 bytes)
ERROR [main] 2014-02-25 01:51:17,377 CassandraDaemon.java (line 237) Fatal exception during initialization
org.apache.cassandra.exceptions.ConfigurationException: Saved cluster name Test Cluster != configured name thisisstupid
at org.apache.cassandra.db.SystemKeyspace.checkHealth(SystemKeyspace.java:542)
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:233)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:462)
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:552)
The name of the cluster in the cassandra.yaml file is:
cluster_name: 'thisisstupid'
How do I resolve this?
You can rename the cluster without deleting data by updating its name in the system.local table (but you have to do this on each node...):
cqlsh> UPDATE system.local SET cluster_name = 'test' where key='local';
# flush the sstables to persist the update.
bash $ ./nodetool flush
Finally, you need to rename the cluster to the new name in cassandra.yaml (again, on each node).
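Before restarting with the edited cassandra.yaml, you can sanity-check the stored value:
cqlsh> SELECT cluster_name FROM system.local;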
The above commands with UPDATE ... SET cluster_name did not work for me. What did work was following the instructions from the DataStax documentation on initializing a multiple node cluster:
sudo service cassandra stop
sudo rm -rf /var/lib/cassandra/data/system/*
sudo vi /etc/cassandra/cassandra.yaml (set up the proper parameters)
sudo service cassandra start
nodetool status
For some good cluster node setups, I found this blog post to be very useful.
I had the same issue with DataStax Enterprise 4.7. I resolved it with the above instructions:
Change the cluster name back to "Test Cluster" in cassandra.yaml and start the cluster, then update the stored name:
cqlsh> UPDATE system.local SET cluster_name = 'Cassendra Cluster' where key='local';
cqlsh> exit;
$ nodetool flush system
Stop the cluster:
$ sudo service dse stop; sudo service datastax-agent stop
Edit the file:
$ sudo vi /etc/dse/cassandra/cassandra.yaml
cluster_name: 'Cassendra Cluster'
Start the cluster:
$ sudo service dse start; sudo service datastax-agent start
Check the installation log:
$ cat /var/log/cassandra/output.log
For Cassandra 3.0.5, I tried the answer by Lyuben Todorov but it was missing a key part:
bash $ ./nodetool flush
should be:
bash $ ./nodetool flush system
as in here
For Cassandra 3.7 you should replace the ./nodetool flush with ./nodetool flush system. Otherwise it won't work.
If Cassandra is not running, you can wipe the data folder:
rm -rf /usr/lib/cassandra/data/*
This will clear the system tables which had the wrong cluster name.
When you restart Cassandra with the correct cluster name in your cassandra.yaml, everything will be regenerated and you should be good.
For Bitnami-certified Cassandra 3.11.5
Here's the solution that worked for me:
Stop the Cassandra service
sudo /opt/bitnami/ctlscript.sh stop cassandra
Remove the system data
sudo rm -rf /opt/bitnami/cassandra/data/data/system/*
Start the Cassandra Service
sudo /opt/bitnami/ctlscript.sh start cassandra
