Cassandra: GossipingPropertyFileSnitch: NoHostAvailable exception while inserting into table

I have created a 2-node Cassandra cluster with the configurations below.
Node-1:
cassandra-topology.properties:
192.168.1.177=DC1:RAC1
192.168.1.134=DC2:RAC2
cassandra.yaml:
cluster_name: 'TestCluster'
num_tokens: 256
listen_address:
rpc_address: localhost
- seeds: "192.168.1.177,192.168.1.134"
endpoint_snitch: GossipingPropertyFileSnitch
Node-2:
cassandra-topology.properties:
192.168.1.177=DC1:RAC1
127.0.0.1=DC2:RAC2 # Also tried 192.168.1.134 ip
cassandra.yaml:
cluster_name: 'TestCluster'
num_tokens: 256
listen_address:
rpc_address: localhost
- seeds: "192.168.1.177"
endpoint_snitch: GossipingPropertyFileSnitch
I can see that both nodes are up and running using the 'nodetool status' command. The keyspace I have created is as below:
> CREATE KEYSPACE testReplication WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1' : '2', 'DC2' : '2'};
I can also create tables, which get replicated to both nodes, but when I try to 'INSERT' or 'SELECT' on the table, cqlsh gives 'NoHostAvailable:',
and system.log shows nothing about it.
Any help will be appreciated.
Thanks.

In cassandra.yaml on each node, set rpc_address: and listen_address: to that node's own IP, and use the same seeds: list on each node (DC1 and DC2). Also put the same configuration in cassandra-topology.properties on each node. So, for example, your Node-1 configuration will look like:
cluster_name: 'TestCluster'
num_tokens: 256
listen_address: 192.168.1.177
rpc_address: 192.168.1.177
- seeds: "192.168.1.177,192.168.1.134"
endpoint_snitch: GossipingPropertyFileSnitch
And for each node, cassandra-topology.properties:
192.168.1.177=DC1:RAC1
192.168.1.134=DC2:RAC2
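One additional note, not part of the original answer: with GossipingPropertyFileSnitch each node normally declares its own datacenter and rack in cassandra-rackdc.properties, and cassandra-topology.properties is only read as a fallback for compatibility with PropertyFileSnitch. A minimal sketch for the two nodes above:
Node-1 cassandra-rackdc.properties:
dc=DC1
rack=RAC1
Node-2 cassandra-rackdc.properties:
dc=DC2
rack=RAC2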

Related

Switching to GossipingPropertyFileSnitch Causes Connection Refusal

I have a brand new install of Cassandra 3.0.9 on CentOS 7.4.1708.
I am trying to change from the default SimpleSnitch to GossipingPropertyFileSnitch.
When I try to follow the steps on the DataStax website, it appears that I should only have to change the endpoint_snitch setting in the cassandra.yaml file. When I do that and restart the Cassandra service, I am no longer able to connect to Cassandra with nodetool or cqlsh (Connection Refused). Changing the setting back to SimpleSnitch does not restore the ability to connect. I am confused about which settings I am missing that cause this to happen. I'd like to know:
What am I missing in trying to move to GossipingPropertyFileSnitch?
Why does changing the setting back not revert things to the previous state where I could connect using nodetool or cqlsh?
I have two nodes in the cluster, both of which I want to act as seeds.
cassandra.yaml: - Node 1
cluster_name: '<My Cluster Name>'
- seeds: "<IP Add1>, <IP Add2>"
listen_address: <IP Add1>
rpc_address: <IP Add1>
endpoint_snitch: GossipingPropertyFileSnitch
#broadcast_address: 1.2.3.4
cassandra.yaml: - Node 2
cluster_name: '<My Cluster Name>'
- seeds: "<IP Add1>, <IP Add2>"
listen_address: <IP Add2>
rpc_address: <IP Add2>
endpoint_snitch: GossipingPropertyFileSnitch
#broadcast_address: 1.2.3.4
cassandra-rackdc.properties - Node 1:
dc=<DC1 Name>
rack=rack1
prefer_local=true
cassandra-rackdc.properties - Node 2:
dc=<DC1 Name>
rack=rack2
prefer_local=true
cassandra-topology.properties - Both Nodes:
<IP Add1>=<DC1 Name>:RAC1
<IP Add2>=<DC1 Name>:RAC2
In CQLSH - Both Nodes
UPDATE system.local SET cluster_name = '<My Cluster Name>' where key='local';
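A hedged note, not from the original post: when both nodetool and cqlsh are refused, the service is usually failing to start rather than rejecting clients, so the real error is normally in system.log. A sketch of where to look, assuming a package install with default log paths:
tail -n 100 /var/log/cassandra/system.log
# If the log complains that the snitch's data center differs from the one
# previously saved for this node (common when moving off SimpleSnitch's
# default "datacenter1"), Cassandra can be started once with the override
# flags while the topology is being corrected, e.g. in cassandra-env.sh:
# JVM_OPTS="$JVM_OPTS -Dcassandra.ignore_dc=true -Dcassandra.ignore_rack=true"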

Datastax service doesn't start after configuring cassandra.yaml for creating a cluster

I am having a problem configuring my Cassandra cluster in DSE 5.0. After I change /etc/dse/cassandra/cassandra.yaml, the dse service (sudo service dse start) doesn't start. I am a beginner, so I don't know what to do.
Node1:
cluster_name: 'MyCluster'
num_tokens: 256
seed_provider:
- class_name: org.apache.cassandra.locator.SimpleSeedProvider
parameters:
- seeds: "10.1.4.48,10.1.4.49"
listen_address: 10.1.4.48
broadcast_address: 10.1.4.48
rpc_address: 0.0.0.0
broadcast_rpc_address: 10.1.1.48
Node2:
cluster_name: 'MyCluster'
num_tokens: 256
seed_provider:
- class_name: org.apache.cassandra.locator.SimpleSeedProvider
parameters:
- seeds: "10.1.4.48,10.1.4.49"
listen_address: 10.1.4.49
broadcast_address: 10.1.4.49
rpc_address: 0.0.0.0
broadcast_rpc_address: 10.1.1.49
This is what I have changed in each of the two nodes that I want to put in the same cluster. Maybe I need to change another file also?
The YAML file format can be very fussy. I usually grab a vanilla cassandra.yaml from an install of the same version and run a diff.
You may well see some unexpected differences. The most common one is a missing space between the : and the <value>, so for example
listen_address:192.168.56.20
instead of
listen_address: 192.168.56.20
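A minimal sketch of that diff, assuming the DSE path from the question and that an unmodified copy of the file for the same version has been kept somewhere (the pristine path is illustrative):
# compare the edited file against an unmodified cassandra.yaml of the same version
diff /path/to/pristine/cassandra.yaml /etc/dse/cassandra/cassandra.yaml
# any hunk that only differs by a missing space after ':' is a likely culprit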

Cassandra is not starting

I am having trouble with a 3-node Cassandra cluster on AWS.
There is one seed node and two data nodes. The nodes crash
when they are launched and when I try to start them manually.
The same error message appears on all three nodes.
The Cassandra version is 2.0.9.
I have tried the following settings:
- class_name: org.apache.cassandra.locator.SimpleSeedProvider
parameters:
- seeds: "<seed.node.public.IP>"
rpc_address: <node.public.IP>
rpc_port: 9160
listen_address: (or with the node's public IP)
storage_port: 7000
endpoint_snitch: SimpleSnitch (and RackInferringSnitch as well).
The error message is
ERROR [main] 2014-09-29 08:59:45,241 CassandraDaemon.java (line 513) Exception encountered during startup
java.lang.RuntimeException: Unable to gossip with any seeds
at org.apache.cassandra.gms.Gossiper.doShadowRound(Gossiper.java:1200)
at org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:446)
at org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:657)
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:611)
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:504)
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)
Ports 7000, 7001, 7199, 8080, 9042, 9160, 61620 and 61621 are open within the Cluster's security group.
I have also read and tried the solutions given on the following links:
Cassandra Not Starting Up
Starting cassandra as a service does not work for 2.0.5, sudo cassandra -f works
Apache Cassandra: Unable to gossip with any seeds
Datastax Enterprise is crashing with Unable to gossip with any seeds error
https://github.com/Netflix/Priam/issues/313
Cassandra cannot bind to the public IP address in EC2.
Replacing it with the public DNS name or the private IP address
in listen_address, rpc_address and seeds solved the problem.
The public DNS name resolves to the private IP address, which is
the eth0 interface on EC2 instances, where Cassandra binds.
The working configuration is:
- class_name: org.apache.cassandra.locator.SimpleSeedProvider
parameters:
- seeds: "<seed.node.public.DNS>"
rpc_address: <node.public.DNS>
rpc_port: 9160
listen_address: (or with the node's public DNS)
storage_port: 7000
endpoint_snitch: SimpleSnitch (and RackInferringSnitch as well).
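As an aside, not from the original answer: when all nodes sit in the same EC2 region/VPC, an equivalent approach is to bind and gossip on the private IPs directly; broadcast_address with the public IP is only needed when nodes in other regions must reach this one. A sketch under that assumption:
- seeds: "<seed.node.private.IP>"
listen_address: <node.private.IP>
rpc_address: <node.private.IP>
# broadcast_address: <node.public.IP>   # only needed for cross-region clusters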

setting up cassandra multi node cluster: 'Nodes have the same token 0'

I'm trying to set up a Cassandra multi-node cluster on my computer just to test, but it doesn't seem to work... The Cassandra version is 1.1 and it runs on Ubuntu.
First of all, I've modified the cassandra.yaml file for each node as follows:
node0
initial_token: 0
seeds: "127.0.0.1"
listen_address: 127.0.0.1
rpc_address: 0.0.0.0
endpoint_snitch: RackInferringSnitch
node1
same as node0 except for:
initial_token: 28356863910078205288614550619314017621 (generated using the Cassandra token generator)
listen_address: 127.0.0.2
After that, I started the seed node 127.0.0.1 first and, once that node was up, I started the other node 127.0.0.2. I got the following:
[...]
INFO 06:09:27,146 Listening for thrift clients...
INFO 06:09:27,909 Node /127.0.0.1 is now part of the cluster
INFO 06:09:27,911 InetAddress /127.0.0.1 is now UP
INFO 06:09:27,913 Nodes /127.0.0.1 and /127.0.0.2 have the same token 0. Ignoring /127.0.0.1
Running nodetool -h localhost ring shows:
Address: 127.0.0.2
DC: datacenter1
Rack: rack1
Status: Up
State: Normal
Load: 11,21 KB
Owns: 100,00%
Token: 0
As you can see, only the information for the second node is shown, owning 100% of the ring. Indeed, its token is initialized to 0 instead of the value I defined in its cassandra.yaml file.
The gossip Info is:
/127.0.0.2
LOAD:25559.0
STATUS:NORMAL,0
SCHEMA:59adb24e-f3cd-3e02-97f0-5b395827453f
RELEASE_VERSION:1.1.6-SNAPSHOT
RPC_ADDRESS:0.0.0.0
/127.0.0.1
LOAD:29859.0
STATUS:NORMAL,0
SCHEMA:59adb24e-f3cd-3e02-97f0-5b395827453f
RELEASE_VERSION:1.1.6-SNAPSHOT
RPC_ADDRESS:0.0.0.0
Does anyone know what is happening and how I can fix it?
Thank you so much in advance!
initial_token is only checked at first startup, when it is written to a system table. Delete the system table files and restart.
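A minimal sketch of that fix, assuming the default data directory (adjust to your data_file_directories setting; for the two loopback nodes in this test, each instance needs its own data directory) and that this is a throwaway test cluster whose data can be discarded:
# stop the node, remove the locally saved system keyspace, then restart
sudo service cassandra stop
rm -rf /var/lib/cassandra/data/system
sudo service cassandra start
# on this first startup the node reads initial_token from cassandra.yaml again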

4 Node Cassandra Cluster, Each Has 50.00%

I am running a 4-node Cassandra cluster built on the Windows platform. When I run the
nodetool -h localhost ring
command on the seed node, I see each node in Up status and Normal state, owning 50.00%, where I expected to see 25.00%. Is it normal for each node to own 50.00%?
Here is the configuration of each node:
Node 1:
rpc_address: 0.0.0.0
initial_token: -9223372036854775808
listen_address: [IP Addres of the machine]
seeds: "[IP Addres of the seed machine (1. Node)]"
Node 2:
rpc_address: 0.0.0.0
initial_token: -4611686018427387904
listen_address: [IP Addres of the machine]
seeds: "[IP Addres of the seed machine (1. Node)]"
Node 3:
rpc_address: 0.0.0.0
initial_token: 0
listen_address: [IP Addres of the machine]
seeds: "[IP Addres of the seed machine (1. Node)]"
Node 4:
rpc_address: 0.0.0.0
initial_token: 4611686018427387904
listen_address: [IP Addres of the machine]
seeds: "[IP Addres of the seed machine (1. Node)]"
Initial tokens were calculated using this formula: (2^64 / 4) * [NodeIndex] - 2^63
Cassandra 1.2.1 is installed on each node. Any idea about the 50.00% value for each node?
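As a quick check, not part of the original post: with [NodeIndex] running from 0 to 3 (node number minus one), the formula does reproduce the four tokens listed above:
(2^64 / 4) * 0 - 2^63 = -9223372036854775808
(2^64 / 4) * 1 - 2^63 = -4611686018427387904
(2^64 / 4) * 2 - 2^63 = 0
(2^64 / 4) * 3 - 2^63 = 4611686018427387904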
You can't really say how much each node owns until you set up a replication factor and create a keyspace. Create a keyspace with replication_factor=1 and each node will own 25%.
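A minimal sketch of that check (the keyspace name is just an example, not from the original answer):
CREATE KEYSPACE ownership_test
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
Depending on the version, nodetool status (or nodetool ring) accepts the keyspace name to report effective ownership, which should then show 25.00% per node:
nodetool -h localhost status ownership_test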
Is your replication factor 2?
That would explain it - you have your data twice in the ring.
