Cannot connect to Cassandra on localhost - cassandra

First time using Cassandra, I have attempted to configure the yaml file according to other related posts but have had no luck so far. Any idea how to do so on localhost?
The specified host(s) could not be reached.
All host(s) tried for query failed (tried: localhost/0:0:0:0:0:0:0:1:9042 (com.datastax.driver.core.TransportException: [localhost/0:0:0:0:0:0:0:1:9042] Cannot connect), localhost/127.0.0.1:9042 (com.datastax.driver.core.TransportException: [localhost/127.0.0.1:9042] Cannot connect))
[localhost/0:0:0:0:0:0:0:1:9042] Cannot connect
[localhost/127.0.0.1:9042] Cannot connect

Revert the changes you already made. To run Cassandra on localhost you don't need to change anything: it runs on localhost by default, so you don't need to touch cassandra.yaml. Read the documentation carefully.
Learn more about cassandra.yaml: https://docs.datastax.com/en/cassandra/2.1/cassandra/configuration/configCassandra_yaml_r.html
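To confirm that Cassandra is actually listening on the default localhost port, you can check with netstat (a quick sketch, assuming the default native transport port 9042):
sudo netstat -ntlp | grep 9042
If nothing is listed, the server is not running; check the logs under the Cassandra installation's logs directory.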

Maybe you can stop the Cassandra server and start it again in the foreground (-f keeps it attached to the terminal so you can watch the logs; -R allows running as root):
bin/cassandra -f -R
Further, this link helps in understanding the cassandra.yaml config parameters:
https://docs.datastax.com/en/cassandra/2.1/cassandra/configuration/configCassandra_yaml_r.html
You can also make sure that the ports are set correctly: native_transport_port (9042), native_transport_port_ssl (9142), storage_port (7000), rpc_port (9160) and the JMX port (7199).
Lastly, check the seeds list in cassandra.yaml:
- seeds: "127.0.0.1"
assuming you are working on a single-node Cassandra setup.
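As a quick way to verify those port and seed settings, you can grep the relevant keys out of cassandra.yaml (a sketch; the path to cassandra.yaml depends on your installation):
grep -E 'native_transport_port|storage_port|rpc_port|seeds' conf/cassandra.yaml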

Your DataStax Cassandra Community Server Windows service might be stopped. Start it and reconnect; I hope you get success.
If the service stops again while you are trying to start it, delete the logs folder (under the directory where DataStax is installed) and restart the service.
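On Windows you can find and restart the service from an elevated command prompt (a sketch; the exact service name below is an assumption and may differ in your installation, so look it up first):
sc query state= all | findstr /i cassandra
net stop "DataStax Cassandra Community Server"
net start "DataStax Cassandra Community Server"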

Related

Not able to login to cqlsh in datastax Cassandra cluster hosted in Google cloud platform

I have set up a 2-node Cassandra cluster in GCP, but the issue is that I am not able to get into cqlsh. I am getting the below error:
$ cqlsh
Connection error: ('Unable to connect to any servers', {'127.0.0.1':
First, as I mentioned to you in this answer, using cqlsh to connect to 127.0.0.1 simply will not work in a multi-node cluster. You will need to specify the IP address shown in the result of your nodetool status command.
Next, the second part of the error message should give a big clue on this one:
AuthenticationFailed('Remote end requires authentication'),)})
With authentication enabled, you will need to provide a valid username and password to log in. If you have not created any new users, then the username and password will both be defaulted to "cassandra."
$ cqlsh 10.138.0.3 -u cassandra -p cassandra
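Once you can log in, it is good practice to change the default password. A minimal sketch, assuming the same host and default credentials ('newpassword' is a placeholder):
cqlsh 10.138.0.3 -u cassandra -p cassandra -e "ALTER USER cassandra WITH PASSWORD 'newpassword';"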
Try this:
cqlsh 10.138.0.3 9042

running cassandra on a mesos cluster

I'm trying to deploy Cassandra on a small (test) Mesos cluster. I have one master node (say 10.10.10.1) and three worker nodes: 10.10.10.2-4.
On the official site of Apache Mesos there is a link to a Cassandra framework developed for Mesos (it is here: https://github.com/mesosphere/cassandra-mesos).
I'm following the tutorial that they provide there. In step 3 they are saying I should edit the conf/mesos.yaml file, specifically that I should set mesos.master.url so that it points to the master node (on which I also have the conf file).
The first thing I tried was just to replace localhost with the master node IP, so I had
mesos.master.url: 'zk://10.10.10.1:2181/mesos'
but when I then started the deployment script (by running bin/cassandra-mesos, as point 5 says I should) I get the following error:
2015-02-24 09:18:24,262:12041(0x7fad617fa700):ZOO_ERROR#handle_socket_error_msg#1697: Socket [10.10.10.1:2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
It keeps retrying and displays the same error until I terminate it.
I tried removing 'zk' or replacing it with 'mesos' in the URL, changing (or removing altogether) the port, and removing the 'mesos' word in the URL, but I keep getting the same error.
I also tried looking at how other frameworks do it (specifically spark, which I am hoping to deploy next) but didn't find anything helpful. Any ideas how to run it? Thanks!
The URL provided to mesos.master.url is passed directly to the underlying Mesos Native Java Library. The format listed in your example looks correct.
Next steps in debugging the connection issue would be to verify the IP address the ZooKeeper server has bound to. You can find out by running sudo netstat -ntplv | grep 2181 on the server that is running ZooKeeper.
I would expect to see something like the following:
tcp 0 0 0.0.0.0:2181 0.0.0.0:* LISTEN 3957/java
Another possibility could be that ZooKeeper is binding specifically to localhost:
tcp 0 0 127.0.0.1:2181 0.0.0.0:* LISTEN 3957/java
If ZooKeeper has bound to localhost, a client will only be able to connect to it with the URL zk://127.0.0.1:2181/mesos.
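In that case, one option is to bind ZooKeeper to all interfaces instead. A sketch, assuming a standard ZooKeeper layout with its config at conf/zoo.cfg:
clientPortAddress=0.0.0.0
clientPort=2181
Then restart ZooKeeper, for example with bin/zkServer.sh restart.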
A note about the future of the Cassandra Mesos Framework.
I am one of the developers working on rewriting the cassandra-mesos project to be more robust, stable and easier to run. The code in the current master (6aa82acfac) is end-of-life and will be replaced within the next couple of weeks with the code that is in the rewrite branch.
If you would like to try out the latest build of the rewrite branch, a marathon.json for running the framework can be found here. After downloading the marathon.json, update the values for MESOS_ZK and CASSANDRA_ZK (and any resource values you want to update), then POST the JSON to Marathon at /v2/apps.
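For example, the POST can be done with curl (a sketch; the Marathon host is a placeholder, and 8080 is Marathon's default port):
curl -X POST -H 'Content-Type: application/json' -d @marathon.json http://<marathon-host>:8080/v2/apps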
If you have one master and no zk, how about setting the mesos.master.url to 10.10.10.1:5050 (where 5050 is the default mesos-master port)?
If Ben is right and the Cassandra framework otherwise requires ZK for its own persistence/HA, then try disabling that feature if possible. Otherwise you may have to rip the ZK code out yourself and recompile, if you really want a ZK-free setup (consequently without any HA features).

Unable to connect to Cassandra in Presto

I have set up Cassandra, and I've created a keyspace ('mykeyspace') and a table in it. I started Cassandra as a service and added the cassandra.properties file, like this, in the Presto installation files:
connector.name=cassandra
cassandra.contact-points=localhost
cassandra.native-protocol-port=9142
cassandra.thrift-port=9160
After this, I issued this command in Presto, but I'm not sure if it is connecting to the Cassandra data:
./presto --server localhost:8080 --catalog cassandra --schema mykeyspace
Now, when I give the command 'show tables', I get this exception message:
All host(s) tried for query failed (tried: localhost/127.0.0.1 (com.datastax.driver.core.TransportException: [localhost/127.0.0.1] Cannot connect))
I have used cqlsh to view a created table in 'mykeyspace' in Cassandra, and hence I am sure that Cassandra is running.
I would really appreciate any help to clear this error.
If you have a default Cassandra installation, the default native protocol port is 9042. If that is the case, you can remove the cassandra.native-protocol-port and cassandra.thrift-port properties.
If you want to keep these ports, you can change the cassandra.yaml configuration file, specifically the native_transport_port property.
I hope this helps.
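Putting it together, a minimal catalog file for the default ports could look like this (a sketch; etc/catalog/cassandra.properties is the usual location in a standard Presto layout):
connector.name=cassandra
cassandra.contact-points=localhost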

I can not get Datastax DevCenter to connect to a remote database

When I try to set up a connection, I get the error
Unable to connect to 'Test Cluster': All host(s) tried for query failed
Unexpected error during transport initialization ... (host ip adresses) Channel has been closed
The remote database is on port 9161, which I added on the "Native protocol port" line.
Additionally, it has a username and password, which I also added in the setup. This is all on a 64-bit Windows machine.
I did have the same error!
You have to open cmd first, type "cd " followed by the path to your Cassandra installation, then type "cassandra" to run the Cassandra server.
Then try again in DevCenter with localhost and port 9042.
I hope it may help you! ^^
There is another common scenario where this might happen in case someone stumbles upon this thread in the future.
This type of thing typically happens when the host crashes unexpectedly, resulting in the corruption of the sstables or commitlog files.
This is why it is really important to use replication: when you get into this situation, you can run nodetool repair to repair the corrupted tables and data from other nodes.
If you are not fortunate enough to have replication configured, then you are in for some data loss. Clear the suspect file from \data\commitlogs, cry a little, and restart the node.
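For reference, a repair of a single keyspace can be run like this (a sketch; 'mykeyspace' is a placeholder for your own keyspace):
nodetool repair mykeyspace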

hbase on ec2 - dns record changed after restart

I am running a standalone Hadoop/HBase cluster on EC2.
Everything is running fine; however, when I restart, I get messages such as
2014-03-18 12:41:53,401 INFO org.apache.hadoop.ipc.HBaseRPC: Problem connecting to server: ip-10-73-158-244.eu-west-1.compute.internal/10.73.158.244:39781
and eventually HBase shuts down.
This seems to be caused by the fact that the DNS record no longer exists, but why does HBase look for the old record in the first place? My hbase-site.xml has:
<property>
<name>hbase.rootdir</name>
<value>hdfs://127.0.0.1:9000/hbase</value>
</property>
Any ideas how I can resolve this?
