Error while creating a new thrift connection to Cassandra - cassandra

I am trying to set up a KairosDB installation with Cassandra as the backend, but I am facing the following error:
[HThriftClient.java:152] - Creating a new thrift connection to localhost(127.0.0.1):9042
ERROR [HConnectionManager.java:418] - MARK HOST AS DOWN TRIGGERED for host localhost(127.0.0.1):9042
ERROR [HConnectionManager.java:422] - Pool state on shutdown: :{localhost(127.0.0.1):9042}; IsActive?: true; Active: 1; Blocked: 0; Idle: 15; NumBeforeExhausted: 49
[HConnectionManager.java:303] - Exception:
me.prettyprint.hector.api.exceptions.HectorTransportException: org.apache.thrift.transport.TTransportException: Read a negative frame size (-2080374784)!
at me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:39) ~[hector-core-1.1-4.jar:na]
I already checked the open Cassandra port and it is set to 9042. I also set start_rpc to true in the cassandra.yaml file. Any ideas for further troubleshooting?

Thrift connections to Cassandra use port 9160, not the native-protocol port 9042. Point your client at port 9160.
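If the client here is KairosDB, the Thrift target is set in kairosdb.properties; a minimal sketch (assuming the stock property name — check it against your KairosDB version):

```properties
# Point KairosDB's Cassandra datastore at the Thrift port (9160),
# not the native-protocol port (9042).
kairosdb.datastore.cassandra.host_list=localhost:9160
```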

What version of Cassandra are you using?
I think Thrift is disabled in newer versions of Cassandra; you can enable the protocol by modifying cassandra.yaml and restarting Cassandra (or possibly by using nodetool).

Related

zookeeper connection timing out, kafka-spark streaming

I'm trying some exercises with Spark streaming and Kafka. If I use the Kafka producer and consumer on the command line, I can publish and consume messages in Kafka. When I try to do the same using pyspark in a Jupyter notebook, I get a ZooKeeper connection timeout error.
Client session timed out, have not heard from server in 6004ms for sessionid 0x0, closing socket connection and attempting reconnect
[2017-08-04 15:49:37,494] INFO Initiating client connection, connectString=127.0.0.1:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@158da8e (org.apache.zookeeper.ZooKeeper)
[2017-08-04 15:49:37,524] INFO Waiting for keeper state SyncConnected (org.I0Itec.zkclient.ZkClient)
[2017-08-04 15:49:37,527] INFO Opening socket connection to server 127.0.0.1/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2017-08-04 15:49:37,533] WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
[2017-08-04 15:49:38,637] INFO Opening socket connection to server 127.0.0.1/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2017-08-04 15:49:38,639] WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
java.net.ConnectException: Connection refused
ZooKeeper has issues when using localhost (127.0.0.1), as described in https://issues.apache.org/jira/browse/ZOOKEEPER-1661?focusedCommentId=13599352
This little program explains the following things:
ZooKeeper does call InetAddress.getAllByName (see StaticHostProvider:60) on the connect string "localhost:2181" => as a result it gets 3 different addresses for localhost, which then get shuffled (Collections.shuffle(this.serverAddresses), StaticHostProvider:72).
Because of the (random) shuffling, the call to StaticHostProvider.next will sometimes return the fe80:0:0:0:0:0:0:1%1 address, which, as you can see from this small program, times out after 5s => this explains the randomness I am experiencing.
It really seems to me that what I am experiencing is a reverse DNS lookup issue with IPv6. Whether this reverse lookup is actually useful and required by ZooKeeper, I do not know. It did not behave this way in 3.3.3.
Solution: specify your ZooKeeper address as an FQDN and make sure the reverse lookup works, or use 0.0.0.0 instead of localhost.
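The resolution behaviour described above is easy to reproduce with a small stdlib-only sketch (the class and method names here are my own, for illustration):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Arrays;

// Demonstrates why "localhost" can resolve to several addresses (IPv4,
// IPv6, link-local), which ZooKeeper then shuffles between.
public class LocalhostResolution {
    static InetAddress[] resolveAll(String host) throws UnknownHostException {
        return InetAddress.getAllByName(host);
    }

    public static void main(String[] args) throws UnknownHostException {
        // On many machines this prints more than one entry,
        // e.g. /127.0.0.1 and /0:0:0:0:0:0:0:1
        System.out.println(Arrays.toString(resolveAll("localhost")));
    }
}
```

If more than one address comes back, whichever one the client happens to pick first decides whether the connection succeeds quickly or times out.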

could not connect to cassandra using the latest cql driver and default settings?

I am using the latest version of Cassandra (3.0.2) and the latest version of the DataStax Cassandra core Java driver (3.0.0). The settings in cassandra.yaml remain unchanged; I have not touched that file, so whatever the defaults are, they still apply. I keep hearing that rpc_address should be 0.0.0.0, but by default it is localhost, and broadcast_rpc_address is commented out (its example value is 1.2.3.4). I also don't understand why we need to set rpc_address at all, since I hear the latest version of Cassandra has moved away from RPC?!
Here is the snippet of the code
Cluster cassandra = Cluster.builder().addContactPoint("localhost").withPort(9042).build();
ListenableFuture<Session> session = cassandra.connectAsync("demo1");
......
Here is the error that I get when I turn on the DEBUG flag:
com.datastax.driver.NEW_NODE_DELAY_SECONDS is undefined, using default value 1
com.datastax.driver.NON_BLOCKING_EXECUTOR_SIZE is undefined, using default value 8
com.datastax.driver.NOTIF_LOCK_TIMEOUT_SECONDS is undefined, using default value 60
Starting new cluster with contact points [localhost/127.0.0.1:9042, localhost/0:0:0:0:0:0:0:1:9042]
log4j:WARN No appenders could be found for logger (io.netty.util.internal.logging.InternalLoggerFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
com.datastax.driver.FORCE_NIO is undefined, using default value false
Did not find Netty's native epoll transport in the classpath, defaulting to NIO.
[localhost/127.0.0.1:9042] preparing to open 1 new connections, total = 1
com.datastax.driver.DISABLE_COALESCING is undefined, using default value false
Connection[localhost/127.0.0.1:9042-1, inFlight=0, closed=false] Connection established, initializing transport
[localhost/127.0.0.1:9042] Connection[localhost/127.0.0.1:9042-1, inFlight=0, closed=false] Transport initialized, connection ready
[Control connection] Refreshing node list and token map
You listed localhost/0:0:0:0:0:0:0:1:9042 in your contact points, but it wasn't found in the control host's system.peers at startup
[Control connection] Refreshing schema
[Control connection] Refreshing node list and token map
[Control connection] established to localhost/127.0.0.1:9042
Using data-center name 'datacenter1' for DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct datacenter name with DCAwareRoundRobinPolicy constructor)
New Cassandra host localhost/127.0.0.1:9042 added
[localhost/127.0.0.1:9042] preparing to open 1 new connections, total = 2
Connection[localhost/127.0.0.1:9042-2, inFlight=0, closed=false] Connection established, initializing transport
[localhost/127.0.0.1:9042] Connection[localhost/127.0.0.1:9042-2, inFlight=0, closed=false] Transport initialized, connection ready
Created connection pool to host localhost/127.0.0.1:9042 (1 connections needed, 1 successfully opened)
Added connection pool for localhost/127.0.0.1:9042
Preparing query SELECT "hash" AS col1,"body" AS col2 FROM demo1.ents WHERE "hash"=?;
[1795897147-1] Doing retry 1 for query com.datastax.driver.core.Statement$1@19be093f at consistency null
[1795897147-1] Error querying localhost/127.0.0.1:9042 : com.datastax.driver.core.exceptions.OperationTimedOutException: [localhost/127.0.0.1] Timed out waiting for server response
error message -> All host(s) tried for query failed (tried: localhost/127.0.0.1:9042 (com.datastax.driver.core.exceptions.OperationTimedOutException: [localhost/127.0.0.1] Timed out waiting for server response))
error cause -> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: localhost/127.0.0.1:9042 (com.datastax.driver.core.exceptions.OperationTimedOutException: [localhost/127.0.0.1] Timed out waiting for server response))
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: localhost/127.0.0.1:9042 (com.datastax.driver.core.exceptions.OperationTimedOutException: [localhost/127.0.0.1] Timed out waiting for server response))
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:37)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.AbstractSession.prepare(AbstractSession.java:98)
at com.datastax.driver.mapping.Mapper.getPreparedQuery(Mapper.java:118)
at com.datastax.driver.mapping.Mapper.getPreparedQuery(Mapper.java:129)
at com.datastax.driver.mapping.Mapper.getQuery(Mapper.java:333)
at com.datastax.driver.mapping.Mapper.getQuery(Mapper.java:325)
at com.datastax.driver.mapping.Mapper.getAsync(Mapper.java:388)
Here is where the exception happens in the Cassandra driver:
stmt = session().prepare(queryString); // Mapper.java

public PreparedStatement prepare(String query) { // AbstractSession.java
    try {
        return Uninterruptibles.getUninterruptibly(prepareAsync(query));
    } catch (ExecutionException e) {
        throw DriverThrowables.propagateCause(e);
    }
}

Cassandra and defuncting connection

I've got a question about Cassandra; I haven't found any understandable answer yet...
I built a cluster of 3 nodes (RackInferringSnitch) on different VMs. I'm using DataStax's Java Driver to read and update my keyspace (with CSVs).
When one node is down (i.e. 10.10.6.172), I get this debug warning:
INFO 00:47:37,195 New Cassandra host /10.10.6.172:9042 added
INFO 00:47:37,246 New Cassandra host /10.10.6.122:9042 added
DEBUG 00:47:37,264 [Control connection] Refreshing schema
DEBUG 00:47:37,384 [Control connection] Successfully connected to /10.10.6.171:9042
DEBUG 00:47:37,391 Adding /10.10.6.172:9042 to list of queried hosts
DEBUG 00:47:37,395 Defuncting connection to /10.10.6.172:9042
com.datastax.driver.core.TransportException: [/10.10.6.172:9042] Channel has been closed
at com.datastax.driver.core.Connection$Dispatcher.channelClosed(Connection.java:621)
at
[...]
[...]
DEBUG 00:47:37,400 [/10.10.6.172:9042-1] Error connecting to /10.10.6.172:9042 (Connection refused: /10.10.6.172:9042)
DEBUG 00:47:37,407 Error creating pool to /10.10.6.172:9042 ([/10.10.6.172:9042] Cannot connect)
DEBUG 00:47:37,408 /10.10.6.172:9042 is down, scheduling connection retries
DEBUG 00:47:37,409 First reconnection scheduled in 1000ms
DEBUG 00:47:37,410 Adding /10.10.6.122:9042 to list of queried hosts
DEBUG 00:47:37,423 Adding /10.10.6.171:9042 to list of queried hosts
DEBUG 00:47:37,427 Adding /10.10.6.122:9042 to list of queried hosts
DEBUG 00:47:37,435 Shutting down pool
DEBUG 00:47:37,439 Adding /10.10.6.171:9042 to list of queried hosts
DEBUG 00:47:37,443 Shutting down pool
DEBUG 00:47:37,459 Connected to cluster: WormHole
I wanted to know if I need to handle this exception or whether it will be handled by itself (I mean, when the node comes back up, Cassandra will do the correct write if the batch was a write...).
EDIT : Current consistency level is ONE.
The DataStax driver keeps track of which nodes are available at all times and routes queries (load balancing) based on this information. The way it does this is based on your reconnection policy.
You will see debug-level messages when nodes are detected as down, etc. This is no cause for concern: the driver will re-route to other available nodes, and it will also re-try the downed nodes periodically to find out if they are back up. If you had a problem and the data was not getting saved to Cassandra, you would see timeout errors. No action is necessary in this case.
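If you want to tune how aggressively the driver retries downed nodes, the Cluster builder accepts a reconnection policy; a sketch against the 2.x/3.x driver API (the delay values are arbitrary examples, not recommendations):

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.ExponentialReconnectionPolicy;

// Retry a downed node after 1s, backing off exponentially up to 60s.
Cluster cluster = Cluster.builder()
        .addContactPoint("10.10.6.171")
        .withReconnectionPolicy(new ExponentialReconnectionPolicy(1000L, 60000L))
        .build();
```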

DataStax Devcenter fails to connect to the remote cassandra db

I've installed DataStax Cassandra and it is up and running on my remote machine. Now I am trying to connect via DataStax DevCenter, but it fails.
Before posting this question I've read identical here: DataStax Devcenter fails to connect to the remote cassandra database
I went to the cassandra.yaml conf file, but the start_native_transport: true option is not in my file. Where should I look for it?
Also I've changed rpc_address to: 0.0.0.0.
UPDATE:
If I add start_native_transport: true to my cassandra.yaml, it just crashes on Cassandra restart. Please refer to the log below:
ERROR 17:48:32,626 Fatal configuration error error
Can't construct a java object for tag:yaml.org,2002:org.apache.cassandra.config.Config; exception=Cannot create property=start_native_transport for JavaBean=org.apache.cassandra.config.Config@ef28a30; Unable to find property 'start_native_transport' on class: org.apache.cassandra.config.Config
in "<reader>", line 10, column 1:
cluster_name: 'Test Cluster'
^
at org.yaml.snakeyaml.constructor.Constructor$ConstructYamlObject.construct(Constructor.java:372)
at org.yaml.snakeyaml.constructor.BaseConstructor.constructObject(BaseConstructor.java:177)
at org.yaml.snakeyaml.constructor.BaseConstructor.constructDocument(BaseConstructor.java:136)
at org.yaml.snakeyaml.constructor.BaseConstructor.getSingleData(BaseConstructor.java:122)
at org.yaml.snakeyaml.Loader.load(Loader.java:52)
at org.yaml.snakeyaml.Yaml.load(Yaml.java:166)
at org.apache.cassandra.config.DatabaseDescriptor.loadYaml(DatabaseDescriptor.java:141)
at org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:116)
at org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:124)
at org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:389)
at org.apache.cassandra.thrift.CassandraDaemon.main(CassandraDaemon.java:107)
Caused by: org.yaml.snakeyaml.error.YAMLException: Cannot create property=start_native_transport for JavaBean=org.apache.cassandra.config.Config@ef28a30; Unable to find property 'start_native_transport' on class: org.apache.cassandra.config.Config
at org.yaml.snakeyaml.constructor.Constructor$ConstructMapping.constructJavaBean2ndStep(Constructor.java:305)
at org.yaml.snakeyaml.constructor.Constructor$ConstructMapping.construct(Constructor.java:184)
at org.yaml.snakeyaml.constructor.Constructor$ConstructYamlObject.construct(Constructor.java:370)
... 10 more
Caused by: org.yaml.snakeyaml.error.YAMLException: Unable to find property 'start_native_transport' on class: org.apache.cassandra.config.Config
at org.yaml.snakeyaml.constructor.Constructor$ConstructMapping.getProperty(Constructor.java:342)
at org.yaml.snakeyaml.constructor.Constructor$ConstructMapping.constructJavaBean2ndStep(Constructor.java:240)
... 12 more
null; Can't construct a java object for tag:yaml.org,2002:org.apache.cassandra.config.Config; exception=Cannot create property=start_native_transport for JavaBean=org.apache.cassandra.config.Config@ef28a30; Unable to find property 'start_native_transport' on class: org.apache.cassandra.config.Config
Invalid yaml; unable to start server. See log for stacktrace.
Thanks for any Help!
start_native_transport: true
should be there in cassandra.yaml; if it's not there, add it to cassandra.yaml and try again after restarting the Cassandra server.
What version of Cassandra are you using? DevCenter supports Cassandra versions >= 1.2
If you still see errors with the change in cassandra.yaml you can post a link to a Gist. But the YAML format is pretty simple so I think you'll figure it out.
If you read my previous answer you'll notice that it required rpc_address to be set to a value other than 0.0.0.0. Anyway, the latest version of DevCenter (1.1.1) will work even if all the nodes in your cluster have rpc_address set to 0.0.0.0 (as a side note, I don't think that's generally a good setting).
DevCenter.ini did not have the Java VM information.
Adding the VM info lines below helped resolve the connection issue:
-vm
C:\Program Files (x86)\JDK64\1.8.0.74\jre\bin\java.exe
NOTE: the line above should point to the appropriate java.exe from your JRE installation.

Error while connecting to Cassandra using Java Driver for Apache Cassandra 1.0 from com.example.cassandra

While connecting to a Cassandra cluster using the Java driver for Cassandra by DataStax, it throws the following error:
Exception in thread "main" com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: [/127.0.0.1])
Please suggest...
Thanks!
My Java code is like this:
package com.example.cassandra;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Host;
import com.datastax.driver.core.Metadata;

public class SimpleClient {
    private Cluster cluster;

    public void connect(String node) {
        cluster = Cluster.builder().addContactPoint(node).build();
        Metadata metadata = cluster.getMetadata();
        System.out.println(metadata.getClusterName());
    }

    public void close() {
        cluster.shutdown();
    }

    public static void main(String[] args) {
        SimpleClient client = new SimpleClient();
        client.connect("127.0.0.1");
        client.close();
    }
}
In my case, I ran into this issue because I used the default RPC port of 9160 when connecting. You can find the port for CQL in cassandra.yaml:
start_native_transport: true
# port for the CQL native transport to listen for clients on
native_transport_port: 9042
Once I changed the code to use port 9042, the connection attempt succeeded:
public BinaryDriverTest(String cassandraHost, int cassandraPort, String keyspaceName) {
    m_cassandraHost = cassandraHost;
    m_cassandraPort = cassandraPort;
    m_keyspaceName = keyspaceName;
    LOG.info("Connecting to {}:{}...", cassandraHost, cassandraPort);
    cluster = Cluster.builder().withPort(m_cassandraPort).addContactPoint(cassandraHost).build();
    session = cluster.connect(m_keyspaceName);
    LOG.info("Connected.");
}

public static void main(String[] args) {
    BinaryDriverTest bdt = new BinaryDriverTest("127.0.0.1", 9042, "Tutorial");
}
I had this issue and it was sorted by setting the ReadTimeout in SocketOptions:
Cluster cluster = Cluster.builder().addContactPoint("localhost").build();
cluster.getConfiguration().getSocketOptions().setReadTimeoutMillis(HIGHER_TIMEOUT);
Go to your Apache Cassandra conf directory and enable the binary protocol
Cassandra binary protocol
The Java driver uses the binary protocol that was introduced in Cassandra 1.2. It only works with a version of Cassandra greater than or equal to 1.2. Furthermore, the binary protocol server is not started with the default configuration file in Cassandra 1.2. You must edit the cassandra.yaml file for each node:
start_native_transport: true
Then restart the node.
I was also having the same problem. I installed Cassandra on a separate Linux PC and tried to connect from a Windows PC; the connection was not allowed.
But when I edited cassandra.yaml to set my Linux PC's IP address as rpc_address and restarted, it allowed me to connect successfully:
# The address or interface to bind the Thrift RPC service and native transport
# server to.
#
# Set rpc_address OR rpc_interface, not both. Interfaces must correspond
# to a single address, IP aliasing is not supported.
#
# Leaving rpc_address blank has the same effect as on listen_address
# (i.e. it will be based on the configured hostname of the node).
#
# Note that unlike listen_address, you can specify 0.0.0.0, but you must also
# set broadcast_rpc_address to a value other than 0.0.0.0.
#rpc_address: localhost
rpc_address: 192.168.0.10
Just posting this for people who might have the same problem I did when I got that error message. It turned out my complex dependency tree brought in an old version of com.google.collections, which broke the CQL driver. Removing this dependency and relying entirely on Guava solved my problem.
I was having the same issue testing a new cluster with one node. After removing this from the Cluster builder, I was able to connect:
.withLoadBalancingPolicy(new DCAwareRoundRobinPolicy("US_EAST"))
In my case this was a port issue, which I forgot to update:
Old RPC (Thrift) port: 9160
New binary (CQL native) port: 9042
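A quick stdlib-only way to check which of the two ports is actually listening (the helper class and method names are my own):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    static boolean isOpen(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("9160 (Thrift): " + isOpen("localhost", 9160, 500));
        System.out.println("9042 (CQL):    " + isOpen("localhost", 9042, 500));
    }
}
```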
I too encountered this problem, and it was caused by a simple error in the statement that was being submitted.
session.prepare(null);
Obviously, the error message is misleading.
Edit
/etc/cassandra/cassandra.yaml
and change rpc_address to 0.0.0.0, and set broadcast_rpc_address and listen_address to the IP address of the node.
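A sketch of the relevant cassandra.yaml lines (10.0.0.5 is a placeholder for your node's actual address):

```yaml
# Accept client connections on all interfaces...
rpc_address: 0.0.0.0
# ...but advertise a concrete address to drivers (required when
# rpc_address is 0.0.0.0).
broadcast_rpc_address: 10.0.0.5
listen_address: 10.0.0.5
```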
Assuming you have the default configuration in place, check driver version compatibility. Not all driver versions are compatible with all versions of Cassandra, though they claim backward compatibility. Please see the link below:
http://docs.datastax.com/en/developer/java-driver/3.1/manual/native_protocol/
I ran into a similar issue, and changing the driver version solved my problem.
Note: hopefully you are using Maven (or something similar) to resolve dependencies; otherwise you may have to download a lot of dependencies for the higher driver versions.
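With Maven, pinning the driver is a one-line change in the pom; a sketch (the version shown is an assumed example — pick one compatible with your server):

```xml
<dependency>
  <groupId>com.datastax.cassandra</groupId>
  <artifactId>cassandra-driver-core</artifactId>
  <version>3.1.4</version>
</dependency>
```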
Check the points below:
i) check the server IP
ii) check the listening port
iii) the DataStax client dependency must match the server version
Regarding the yaml file, the latest versions have the properties below enabled:
start_native_transport: true
native_transport_port: 9042
