DataStax DevCenter cannot create new connection in Windows - cassandra

When I try to test a new connection, it returns an error:
The specified host(s) could not be reached.
All host(s) tried for query failed (tried: /host_ip:9042 (com.datastax.driver.core.TransportException: [/host_ip:9042] Cannot connect))
[/host_ip:9042] Cannot connect
In my Windows firewall I have already created a rule for DevCenter, which allows DevCenter to communicate with the remote Cassandra server. I have no access to the Cassandra server, but it is configured correctly, which means the problem must be somewhere on my local computer.

This type of thing typically happens when the host crashes unexpectedly, resulting in corruption of the SSTables or commitlog files.
This is why it is really important to use replication: when you get into this situation, you can run nodetool repair to rebuild the corrupted tables from the data on the other nodes.
If you are not fortunate enough to have replication configured, then you are in for some data loss. Clear the suspect file from \data\commitlogs, cry a little and restart the node.
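Separately, to verify basic reachability of the node from the client machine, a minimal sketch with the DataStax Java driver (the same driver that produced the TransportException in the question) could look like the following; "host_ip" is just the placeholder from the error message and should be replaced with the real address:
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class ReachabilityCheck {
    public static void main(String[] args) {
        // "host_ip" is a placeholder: use the same address and port you entered in DevCenter.
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("host_ip")
                .withPort(9042)
                .build();
             Session session = cluster.connect()) {
            // If this prints, the node is reachable and the native protocol port is open.
            System.out.println("Connected to: " + cluster.getMetadata().getClusterName());
        }
    }
}
If this small program fails with the same TransportException, the problem is on the network path between your machine and the node rather than in DevCenter itself.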

Related

cqlsh "Unable to connect to any servers" on Windows installation

I installed Cassandra on Windows 10. When I try to run cqlsh from /bin/,
I get the following error:
Connection error: ('Unable to connect to any servers', {'127.0.0.1': \
error(10061, "Tried connecting to [('127.0.0.1', 9042)].
Last error: No connection could be made because the target machine \
actively refused it")})
I installed Cassandra from the official apache.org site and followed the guide at
https://phoenixnap.com/kb/install-cassandra-on-windows - everything looks fine according to that reference.
Can anyone help me solve this? Thanks in advance.
The error states that cqlsh can't connect to the local Cassandra instance. The default configuration in conf/cassandra.yaml is for Cassandra to listen for CQL clients on localhost (127.0.0.1) and CQL port 9042:
native_transport_port: 9042
rpc_address: localhost
Since you're getting a "connection refused" error, the most likely issue is that Cassandra is not running on your Windows machine. Check the Cassandra logs (usually in logs/system.log) for errors which would provide clues as to why Cassandra couldn't start.
As a side note, there is very limited Windows support in Cassandra 3.x and there are several known issues that will not be fixed due to limitations in the operating system.
Furthermore, Windows support has been completely dropped in Cassandra 4.0 due to lack of maintainers and testing (CASSANDRA-16171).
As a workaround, we recommend the following:
Deploy Cassandra in Docker
Deploy Cassandra in a VM using software like VirtualBox
Deploy K8ssandra.io
If you just want to build apps with Cassandra as a backend, Astra DB has a free tier that lets you launch a Cassandra cluster in a few clicks with no credit card required. Cheers!
Do you keep that terminal open so Cassandra is still running while you try to connect? Note that you have to launch cqlsh from a different terminal window.
Please check the steps again; most probably Cassandra simply isn't running. Pay special attention to step 4.

Cassandra NoHostAvailableException : All host(s) tried for query failed

We are running a Java microservice that uses a multi-node Cassandra cluster. While writing data, we randomly see the following error from different nodes:
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed
We have already verified that all nodes are up and running and are reachable from each other in the cluster.
Any pointers are highly appreciated.
Thanks.
The above error suggests that the driver was unable to connect to any host for the query. The following could be the reasons:
Cassandra node down - which you have verified is not the case for you.
Congestion due to high traffic, which causes nodes to be reported as down.
Intermittent network connectivity issues between the client and the nodes, which compel the driver to mark a host down (for the last two cases, see the sketch below).
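For the congestion and flaky-network cases, making the driver a bit more tolerant sometimes helps. A minimal sketch against the 3.x Java driver API; the contact points, timeouts and reconnection delay below are illustrative placeholders, not recommendations:
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SocketOptions;
import com.datastax.driver.core.policies.ConstantReconnectionPolicy;

public class TolerantClient {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder()
                // Placeholder contact points: list several nodes, not just one.
                .addContactPoints("10.0.0.1", "10.0.0.2", "10.0.0.3")
                .withSocketOptions(new SocketOptions()
                        .setConnectTimeoutMillis(10000)   // driver default is 5000 ms
                        .setReadTimeoutMillis(20000))     // driver default is 12000 ms
                // Keep retrying hosts that were marked down, every 5 seconds.
                .withReconnectionPolicy(new ConstantReconnectionPolicy(5000))
                .build();
             Session session = cluster.connect()) {
            System.out.println("Connected to " + cluster.getClusterName());
        }
    }
}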

Error while saving the Spark RDD to Cassandra

We are trying to save an RDD with close to 4 billion rows to Cassandra. Some of the data gets persisted, but for some partitions we see the errors below in the Spark logs.
We have already set these two properties for the Cassandra connector. Is there some other optimization we would need to do? Also, what are the recommended reader settings? We have left them at their defaults.
spark.cassandra.output.batch.size.rows=1
spark.cassandra.output.concurrent.writes=1
We are running spark-1.1.0 and spark-cassandra-connector-java_2.10 v 2.1.0
15/01/08 05:32:44 ERROR QueryExecutor: Failed to execute: com.datastax.driver.core.BoundStatement#3f480b4e
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /10.87.33.133:9042 (com.datastax.driver.core.exceptions.DriverException: Timed out waiting for server response))
at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:108)
at com.datastax.driver.core.RequestHandler$1.run(RequestHandler.java:179)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Thanks
Ankur
I've seen something similar in my four-node cluster. It seemed that if I specified EVERY Cassandra node name in the Spark settings, then it worked; however, if I only specified the seeds (two of the four were seeds), then I got the exact same issue. I haven't followed up on it, since specifying all four gets the job done (but I intend to at some point). I'm using hostnames rather than IPs for the seed values, and hostnames in the Spark Cassandra settings as well. I did hear it could be due to some Akka DNS issues. Maybe try using IP addresses through and through, or specifying all hosts; the latter has been working flawlessly for me.
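For what it's worth, a rough sketch of the "list every node" approach in a Java Spark job; the host names are placeholders, and this assumes your connector version accepts a comma-separated list for spark.cassandra.connection.host:
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class AllHostsJob {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("cassandra-writer")
                // List all four nodes explicitly instead of only the two seeds.
                .set("spark.cassandra.connection.host",
                     "cass-node1,cass-node2,cass-node3,cass-node4");
        JavaSparkContext sc = new JavaSparkContext(conf);
        // ... build and save the RDD here ...
        sc.stop();
    }
}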
I realized I was running the application with spark.cassandra.output.concurrent.writes=2. I changed it to 1 and there were no more exceptions. The exceptions occurred because Spark was producing data at a much higher rate than our Cassandra cluster could write, so changing the setting to 1 worked for us.
Thanks!!
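A sketch of how that throttling looks when set programmatically; the property names are taken from the question, and the values are what worked for the poster rather than general recommendations:
import org.apache.spark.SparkConf;

public class ThrottledWriteConf {
    public static SparkConf build() {
        return new SparkConf()
                .setAppName("cassandra-bulk-writer")
                // Write one row per batch and allow only one batch in flight,
                // so Spark cannot outrun the cluster's write capacity.
                .set("spark.cassandra.output.batch.size.rows", "1")
                .set("spark.cassandra.output.concurrent.writes", "1");
    }
}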

Cassandra hangs on arbitrary commands

We're hosting a Cassandra 2.0.2 cluster on AWS. We've recently started upgrading from regular to SSD drives by bootstrapping new nodes and decommissioning the old ones. That went fairly well, aside from two nodes hanging forever on decommission.
Now that the six new nodes are operational, we've noticed that some of our old tools, which use phpcassa, have stopped working. Nothing has changed with the security groups, all TCP/UDP ports are open, telnet can connect via 9160, and cqlsh can 'use' a cluster and select data. However, 'describe cluster' fails, and in cassandra-cli 'show keyspaces' also fails - and by fail, I mean it never exits back to the prompt and never returns any results. The queries work perfectly from the new nodes, but even the old nodes waiting to be decommissioned cannot perform them. The production system, which also uses phpcassa and does normal data requests, works fine.
All Cassandra nodes have the same config and the same versions, and were installed from the same package. All nodes were recently restarted due to a seed node change.
Versions:
Connected to ### at ####.compute-1.amazonaws.com:9160.
[cqlsh 4.1.0 | Cassandra 2.0.2 | CQL spec 3.1.1 | Thrift protocol 19.38.0]
I've run out of ideas. Any hints would be greatly appreciated.
Update:
After a bit of random investigating, here's a bit more detailed description.
If I run cassandra-cli locally on any node and do "show keyspaces", it works.
If I run cassandra-cli against a remote machine and do "show keyspaces", it hangs indefinitely.
If I cqlsh to a remote Cassandra and do a "describe keyspaces", it hangs; after Ctrl+C, repeating the same query gets an instant response.
If I cqlsh to a local Cassandra and do a "describe keyspaces", it works.
If I cqlsh to a local Cassandra and do a "select * from Keyspace limit x", it returns data up to a certain limit. I was able to return data with limit 760; limit 761 would fail.
If I set CONSISTENCY ALL and run the same select, it hangs.
If I run the query with tracing on, different machines return the data, though sometimes source_elapsed is "null".
Not to forget, applications querying the cluster do sometimes get results after several attempts.
Update 2
Further experimenting led to failed bootstrapping of two new nodes: one hung on bootstrap for 4 days and eventually failed, possibly due to a rolling restart, and the other simply failed after 2 days. Repairs wouldn't work and produced "Stream failed" errors, as well as "Exception in thread Thread[StorageServiceShutdownHook,5,main] java.lang.NullPointerException". Also, after executing a repair, we started getting "Read an invalid frame size of 0. Are you using TFramedTransport on the client side?".
Solution
Switching rpc_server_type from hsha to sync made all the problems go away (even though we had run with hsha for months without issues before this).
If someone also stumbles here:
http://planetcassandra.org/blog/post/hsha-thrift-server-corruption-cassandra-2-0-2-5/
In cassandra.yaml:
Switch rpc_server_type from hsha to sync.
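That is, the relevant line should read:
rpc_server_type: sync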

Cassandra DataStax Java driver cannot connect to server: "No handler set for stream"

If I create a new project like this:
cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
This code works.
But if I take all the jars from this project and move them into my own project, the code above doesn't work and it says:
13/07/01 16:27:16 ERROR core.Connection: [/127.0.0.1-1] No handler set for stream 1 (this is a bug, either of this driver or of Cassandra, you should report it)
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: [/127.0.0.1])
What version of Cassandra are you running? Have you enabled the native protocol in your cassandra.yaml?
In Cassandra 1.2.0-1.2.4 the native protocol was disabled by default, but in 1.2.5+ it's on by default.
See https://github.com/apache/cassandra/blob/cassandra-1.2.5/conf/cassandra.yaml#L335
That's the most common reason I've seen for not being able to connect with the driver.
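For reference, the corresponding settings in the 1.2.x cassandra.yaml, shown here with the values the driver needs in order to connect, are:
start_native_transport: true
native_transport_port: 9042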
