4-Node Cassandra Cluster, Each Node Owns 50.00%

I am running a 4-node Cassandra cluster on the Windows platform. When I run
nodetool -h localhost ring
on the seed node, I see every node in Up status and Normal state, but each owns 50.00%, where I expected 25.00%. Is it normal for each node to own 50.00%?
Here is the configuration of each node:
Node 1
rpc_address: 0.0.0.0
initial_token: -9223372036854775808
listen_address: [IP address of the machine]
seeds: "[IP address of the seed machine (Node 1)]"
Node 2
rpc_address: 0.0.0.0
initial_token: -4611686018427387904
listen_address: [IP address of the machine]
seeds: "[IP address of the seed machine (Node 1)]"
Node 3
rpc_address: 0.0.0.0
initial_token: 0
listen_address: [IP address of the machine]
seeds: "[IP address of the seed machine (Node 1)]"
Node 4
rpc_address: 0.0.0.0
initial_token: 4611686018427387904
listen_address: [IP address of the machine]
seeds: "[IP address of the seed machine (Node 1)]"
The initial tokens were calculated using this formula: (2^64 / 4) * [NodeIndex] - 2^63
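For example, for NodeIndex = 1: (2^64 / 4) * 1 - 2^63 = 2^62 - 2^63 = -2^62 = -4611686018427387904, which matches Node 2 above. As a quick sketch, the whole set can be recomputed with a Python one-liner:
python -c "print([(2**64 // 4) * i - 2**63 for i in range(4)])"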
Cassandra 1.2.1 is installed on each node. Any idea about the 50.00% value for each node?

You can't really say how much each node owns until you set up a replication factor and create a keyspace. Create a keyspace with replication_factor = 1 and each node will own 25%.
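For example, in cqlsh (a minimal sketch; the keyspace name is hypothetical):
CREATE KEYSPACE ownership_test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
After that, nodetool -h localhost ring should report 25.00% for each node.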

Is your replication factor 2?
That would explain it - you have your data twice in the ring.
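To check, a quick sketch in cqlsh (the keyspace name is hypothetical; in Cassandra 1.2 the replication settings live in system.schema_keyspaces):
DESCRIBE KEYSPACE mykeyspace;
SELECT keyspace_name, strategy_options FROM system.schema_keyspaces;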

Related

Switching to GossipingPropertyFileSnitch Causes Connection Refusal

I have a brand new install of Cassandra 3.0.9 on CentOS 7.4.1708.
I am trying to change from the default SimpleSnitch to GossipingPropertyFileSnitch.
When I try to follow the steps on the DataStax website, it appears as though I should only have to change the endpoint_snitch setting in the cassandra.yaml file. When I do that and restart the cassandra service, I am no longer able to connect to Cassandra with nodetool or cqlsh (Connection refused). Changing the setting back to SimpleSnitch does not reverse this and restore the ability to connect. I am confused about which settings I am missing that are causing this. I'd like to know:
What am I missing in trying to move to the GossipingPropertyFileSnitch?
Why does changing the setting back not revert things to the previous state, where I could connect using nodetool or cqlsh?
I have two nodes in the cluster, both of which I want to act as seeds.
cassandra.yaml - Node 1:
cluster_name: '<My Cluster Name>'
- seeds: "<IP Add1>, <IP Add2>"
listen_address: <IP Add1>
rpc_address: <IP Add1>
endpoint_snitch: GossipingPropertyFileSnitch
#broadcast_address: 1.2.3.4
cassandra.yaml - Node 2:
cluster_name: '<My Cluster Name>'
- seeds: "<IP Add1>, <IP Add2>"
listen_address: <IP Add2>
rpc_address: <IP Add2>
endpoint_snitch: GossipingPropertyFileSnitch
#broadcast_address: 1.2.3.4
cassandra-rackdc.properties - Node 1:
dc=<DC1 Name>
rack=rack1
prefer_local=true
cassandra-rackdc.properties - Node 2:
dc=<DC1 Name>
rack=rack2
prefer_local=true
cassandra-topology.properties - Both Nodes:
<IP Add1>=<DC1 Name>:RAC1
<IP Add2>=<DC1 Name>:RAC2
In cqlsh - Both Nodes:
UPDATE system.local SET cluster_name = '<My Cluster Name>' where key='local';
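One way to confirm which snitch is actually active after a successful restart (a sketch, run from either node):
nodetool describecluster
The Snitch: line in its output should name GossipingPropertyFileSnitch once the change has taken effect.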

Cassandra: GossipingPropertyFileSnitch: NoHostAvailable exception while inserting into table

I have created a 2-node Cassandra cluster with the configurations below.
Node-1:
cassandra-topology.properties:
192.168.1.177=DC1:RAC1
192.168.1.134=DC2:RAC2
cassandra.yaml:
cluster_name: 'TestCluster'
num_tokens: 256
listen_address:
rpc_address: localhost
- seeds: "192.168.1.177,192.168.1.134"
endpoint_snitch: GossipingPropertyFileSnitch
Node-2:
cassandra-topology.properties:
192.168.1.177=DC1:RAC1
127.0.0.1=DC2:RAC2 # Also tried 192.168.1.134 ip
cassandra.yaml:
cluster_name: 'TestCluster'
num_tokens: 256
listen_address:
rpc_address: localhost
- seeds: "192.168.1.177"
endpoint_snitch: GossipingPropertyFileSnitch
I can see both nodes are up and running using the 'nodetool status' command. The keyspace I have created is as below:
> CREATE KEYSPACE testReplication WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1' : '2', 'DC2' : '2'};
I can also create tables, which get replicated to both nodes, but when I try to INSERT or SELECT on the table, cqlsh gives 'NoHostAvailable:',
and system.log does not show anything about it.
Any help will be appreciated.
Thanks.
In cassandra.yaml on each node, put your node's IP in rpc_address: and listen_address:, and use the same seeds: list on each node (DC1 and DC2). Likewise, put the same cassandra-topology.properties configuration on each node. So, for example, your Node-1 configuration will look like:
cluster_name: 'TestCluster'
num_tokens: 256
listen_address: 192.168.1.177
rpc_address: 192.168.1.177
- seeds: "192.168.1.177,192.168.1.134"
endpoint_snitch: GossipingPropertyFileSnitch
And for each node, cassandra-topology.properties:
192.168.1.177=DC1:RAC1
192.168.1.134=DC2:RAC2
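After restarting both nodes, a quick sanity check (a sketch):
nodetool status
The Datacenter: names in this output must match the 'DC1' and 'DC2' names used in the keyspace definition; a mismatch leaves the coordinator unable to find replicas, which shows up as NoHostAvailable.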

Datastax service doesn't start after configuring cassandra.yaml for creating a cluster

I am having a problem configuring my cluster in Cassandra in DSE 5.0. After I change /etc/dse/cassandra/cassandra.yaml, the dse service (sudo service dse start) doesn't start. I am a beginner, so I don't know what to do.
Node1:
cluster_name: 'MyCluster'
num_tokens: 256
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "10.1.4.48,10.1.4.49"
listen_address: 10.1.4.48
broadcast_address: 10.1.4.48
rpc_address: 0.0.0.0
broadcast_rpc_address: 10.1.1.48
Node2:
cluster_name: 'MyCluster'
num_tokens: 256
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "10.1.4.48,10.1.4.49"
listen_address: 10.1.4.49
broadcast_address: 10.1.4.49
rpc_address: 0.0.0.0
broadcast_rpc_address: 10.1.1.49
This is what I have changed in each of the two nodes that I want to put in the same cluster. Maybe I need to change another file also?
The YAML file format can be very fussy. I usually grab a vanilla cassandra.yaml from an install of the same version and run a diff against it.
You may well see some unexpected differences. The most common one is a missing space between the : and the <value>, so for example
listen_address:192.168.56.20
instead of
listen_address: 192.168.56.20
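A sketch of that check, assuming the pristine copy was saved to a hypothetical path /tmp/cassandra.yaml.vanilla:
diff /tmp/cassandra.yaml.vanilla /etc/dse/cassandra/cassandra.yaml
If PyYAML is installed, a quick parse can also catch outright syntax errors:
python -c "import yaml; yaml.safe_load(open('/etc/dse/cassandra/cassandra.yaml'))"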

Cassandra 3.4 on VirtualBox not starting

I am using Mac OS X. I created 3 VMs with VirtualBox and installed the CentOS 7 minimal version on each of them.
Then I installed Cassandra on each of the boxes. After installation it started fine, and the cqlsh and nodetool status commands worked.
But then, when I tried to link the nodes to each other by editing the cassandra.yaml file, it started showing
('Unable to connect to any servers', {'127.0.0.1': error(111, "Tried connecting to [('127.0.0.1', 9042)]. Last error: Connection refused")})
I've edited the cassandra.yaml file as follows:
cluster_name: 'Home Cluster'
num_tokens: 256
partitioner: org.apache.cassandra.dht.Murmur3Partitioner
- seeds: "192.168.56.102,192.168.56.103"
storage_port: 7000
listen_address: 192.168.56.102
rpc_address: 192.168.56.102
rpc_port: 9160
endpoint_snitch: SimpleSnitch
my /etc/hosts file contains:
192.168.56.102 node01
192.168.56.103 node02
192.168.56.104 node03
Please tell me what I'm doing wrong. My Cassandra cluster is not working.
Solution: I got the solution from AKKI. The problem was endpoint_snitch. I set endpoint_snitch: GossipingPropertyFileSnitch and that fixed it. My output is now as follows:
[root@dbnode2 ~]# nodetool status
Datacenter: dc1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 192.168.56.101 107.38 KB 256 62.5% 0526a2e1-e6ce-4bb4-abeb-b9e33f72510a rack1
UN 192.168.56.102 106.85 KB 256 73.0% 0b7b76c2-27e8-490f-8274-571d00e60c20 rack1
UN 192.168.56.103 83.1 KB 256 64.5% 6c8d80ec-adbb-4be1-b255-f7a0b63e95c2 rack1
I had faced a similar problem, and I tried the following solution:
In the cassandra.yaml file, check that you have
start_rpc: true
Changed my endpoint snitch to
endpoint_snitch: GossipingPropertyFileSnitch
Opened all the ports Cassandra uses on my CentOS, as listed below
Cassandra inter-node ports:
Port number  Description
7000         Cassandra inter-node cluster communication
7001         Cassandra SSL inter-node cluster communication
7199         Cassandra JMX monitoring port
Cassandra client ports:
Port number  Description
9042         Cassandra client port
9160         Cassandra client port (Thrift)
Commands to open ports on CentOS 7 (find the equivalent for your OS):
sudo firewall-cmd --zone=public --add-port=9042/tcp --permanent
sudo firewall-cmd --reload
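To open every port listed above in one go, a sketch for CentOS 7 with firewalld:
sudo firewall-cmd --zone=public --permanent --add-port=7000/tcp --add-port=7001/tcp --add-port=7199/tcp --add-port=9042/tcp --add-port=9160/tcp
sudo firewall-cmd --reload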
Then restart your systems.
Also, it seems that you are changing the cassandra.yaml file after starting Cassandra.
Make sure you edit the cassandra.yaml file on all nodes before starting Cassandra.
Also remember to start the seed node first, as sketched below.
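A sketch of that startup order (the service name may differ by install):
sudo service cassandra start    # on the seed node first
sudo service cassandra start    # then on each remaining node, one at a time, once the seed shows as UN in nodetool status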

setting up cassandra multi node cluster: 'Nodes have the same token 0'

I'm trying to set up a Cassandra multi-node cluster on my computer just to test, but it doesn't seem to work... The Cassandra version is 1.1 and it runs on Ubuntu.
First of all, I've modified the cassandra.yaml file for each node as follows:
node0
initial_token: 0
seeds: "127.0.0.1"
listen_address: 127.0.0.1
rpc_address: 0.0.0.0
endpoint_snitch: RackInferringSnitch
node1
Same as node0, except for:
initial_token: 28356863910078205288614550619314017621 (generated using the Cassandra token generator)
listen_address: 127.0.0.2
After that, I started the seed node 127.0.0.1 first and, once it was up, started the other node 127.0.0.2. I got the following:
[...]
INFO 06:09:27,146 Listening for thrift clients...
INFO 06:09:27,909 Node /127.0.0.1 is now part of the cluster
INFO 06:09:27,911 InetAddress /127.0.0.1 is now UP
INFO 06:09:27,913 Nodes /127.0.0.1 and /127.0.0.2 have the same token 0. Ignoring /127.0.0.1
Running nodetool -h localhost ring shows:
Address: 127.0.0.2
DC: datacenter1
Rack: rack1
Status: Up
State: Normal
Load: 11,21 KB
Owns: 100,00%
Token: 0
As you can see, only the information for the second node is shown, owning 100.00% of the ring. Indeed, its token is initialized to 0 instead of the value I defined in its cassandra.yaml file.
The gossip info is:
/127.0.0.2
LOAD:25559.0
STATUS:NORMAL,0
SCHEMA:59adb24e-f3cd-3e02-97f0-5b395827453f
RELEASE_VERSION:1.1.6-SNAPSHOT
RPC_ADDRESS:0.0.0.0
/127.0.0.1
LOAD:29859.0
STATUS:NORMAL,0
SCHEMA:59adb24e-f3cd-3e02-97f0-5b395827453f
RELEASE_VERSION:1.1.6-SNAPSHOT
RPC_ADDRESS:0.0.0.0
Does anyone know what is happening and how I can fix it?
Thank you so much in advance!!
initial_token is only read at the first startup, when it is written to a system table; after that, the stored value wins. Delete the system table files and restart, as sketched below.
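A minimal sketch, assuming the default data directory of a package install (paths vary by installation), run on the node with the wrong token:
sudo service cassandra stop
sudo rm -rf /var/lib/cassandra/data/system
sudo service cassandra start
Note that this wipes the node's locally stored cluster state, so only do it on a test node that holds no data you care about.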
