Build Cassandra Cluster - cassandra

I need to build a Cassandra cluster for my company. I am using apache-cassandra-2.1.12-bin.tar.gz, downloaded from the official website.
I have three machines:
192.168.0.210;
192.168.0.209;
192.168.0.208;
I changed cassandra.yaml on each one.
Step1: On 192.168.0.210:
listen_address: 192.168.0.210
seeds: 192.168.0.210
Step2: On 192.168.0.209:
listen_address: 192.168.0.209
seeds: 192.168.0.210
Step3: On 192.168.0.208:
listen_address: 192.168.0.208
seeds: 192.168.0.210
I searched online; some people also changed rpc_address, while others did not. When I changed rpc_address to 0.0.0.0 and ran ./cassandra, it showed:
Fatal configuration error
org.apache.cassandra.exceptions.ConfigurationException:
If rpc_address is set to a wildcard address (0.0.0.0), then you must set
broadcast_rpc_address to a value other than 0.0.0.0
So I changed broadcast_rpc_address to 1.2.3.4 and ran ./cassandra again; it showed:
ERROR 05:49:42 Fatal configuration error
org.apache.cassandra.exceptions.ConfigurationException: Invalid yaml
at org.apache.cassandra.config.YamlConfigurationLoader.loadConfig(YamlConfigurationLoader.java:120) ~[apache-cassandra-2.1.12.jar:2.1.12]
at org.apache.cassandra.config.YamlConfigurationLoader.loadConfig(YamlConfigurationLoader.java:84) ~[apache-cassandra-2.1.12.jar:2.1.12]
at org.apache.cassandra.config.DatabaseDescriptor.loadConfig(DatabaseDescriptor.java:161) ~[apache-cassandra-2.1.12.jar:2.1.12]
at org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:136) ~[apache-cassandra-2.1.12.jar:2.1.12]
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:168) [apache-cassandra-2.1.12.jar:2.1.12]
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:562) [apache-cassandra-2.1.12.jar:2.1.12]
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:651) [apache-cassandra-2.1.12.jar:2.1.12]
Caused by: org.yaml.snakeyaml.parser.ParserException: while parsing a block mapping; expected <block end>, but found BlockMappingStart; in 'reader', line 455, column 2:
broadcast_rpc_address: 1.2.3.4
^
at org.yaml.snakeyaml.parser.ParserImpl$ParseBlockMappingKey.produce(ParserImpl.java:570) ~[snakeyaml-1.11.jar:na]
at org.yaml.snakeyaml.parser.ParserImpl.peekEvent(ParserImpl.java:158) ~[snakeyaml-1.11.jar:na]
at org.yaml.snakeyaml.parser.ParserImpl.checkEvent(ParserImpl.java:143) ~[snakeyaml-1.11.jar:na]
at org.yaml.snakeyaml.composer.Composer.composeMappingNode(Composer.java:230) ~[snakeyaml-1.11.jar:na]
at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:159) ~[snakeyaml-1.11.jar:na]
at org.yaml.snakeyaml.composer.Composer.composeDocument(Composer.java:122) ~[snakeyaml-1.11.jar:na]
at org.yaml.snakeyaml.composer.Composer.getSingleNode(Composer.java:105) ~[snakeyaml-1.11.jar:na]
at org.yaml.snakeyaml.constructor.BaseConstructor.getSingleData(BaseConstructor.java:120) ~[snakeyaml-1.11.jar:na]
at org.yaml.snakeyaml.Yaml.loadFromReader(Yaml.java:481) ~[snakeyaml-1.11.jar:na]
at org.yaml.snakeyaml.Yaml.load(Yaml.java:412) ~[snakeyaml-1.11.jar:na]
at org.apache.cassandra.config.YamlConfigurationLoader.logConfig(YamlConfigurationLoader.java:126) ~[apache-cassandra-2.1.12.jar:2.1.12]
at org.apache.cassandra.config.YamlConfigurationLoader.loadConfig(YamlConfigurationLoader.java:104) ~[apache-cassandra-2.1.12.jar:2.1.12]
... 6 common frames omitted
Invalid yaml
Fatal configuration error; unable to start. See log for stacktrace.
So my questions:
1. Do I need to change rpc_address (some people do, while others don't)?
2. If yes, how do I handle broadcast_rpc_address?
3. Besides rpc_address/broadcast_rpc_address, what else do I need to do to build the Cassandra cluster?

rpc_address is the address or interface to bind the Thrift RPC service and native transport server to. You can leave it blank, in which case Cassandra uses the node's hostname. Setting it to 0.0.0.0 is not recommended unless the node is protected by something like a firewall; otherwise anyone could access Cassandra.
broadcast_rpc_address is the RPC address to broadcast to drivers and other Cassandra nodes. It cannot be set to 0.0.0.0, because drivers need a valid IP address to send requests to. If you set rpc_address to 0.0.0.0, you must set broadcast_rpc_address to the node's own IP; in your example, 192.168.0.208, 192.168.0.209, or 192.168.0.210.
For question 3, you just need to set cluster_name to the same value on all nodes.
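Putting it together, here is a minimal sketch of the relevant cassandra.yaml settings for 192.168.0.210 (assuming the stock 2.1.12 file; the other two nodes differ only in listen_address and broadcast_rpc_address). Note that all of these are top-level keys and must start in column 1; the ParserException above points at line 455, column 2, which usually means the new broadcast_rpc_address line was accidentally indented by a space:
cluster_name: 'Test Cluster'
listen_address: 192.168.0.210
rpc_address: 0.0.0.0
broadcast_rpc_address: 192.168.0.210
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "192.168.0.210"
cluster_name can be anything, as long as it is identical on all three nodes.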

For rpc_address, try using:
rpc_address: localhost
Here are answers to your questions:
1. Do I need to change rpc_address (some people do, while others don't)?
No, you don't need to unless you want your clients to connect to a different IP address than the actual IP address of the server (similar to a SQL Server alias, for example).
2. If yes, how do I handle broadcast_rpc_address?
If you do set rpc_address to 0.0.0.0, broadcast_rpc_address should be the node's public (routable) IP; it cannot itself be 0.0.0.0.
3. Besides rpc_address/broadcast_rpc_address, what else do I need to do to build the Cassandra cluster? Make sure the cluster name is the same on all nodes, and when setting up the cluster for the first time, the first node's seed should be the same as its listen IP; then for the second node the seed is the first node, and so on. You can verify the result as shown below.
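For a quick sanity check (a sketch; run from the bin directory of any node, after starting the seed node first):
./nodetool status
Each of the three nodes should be listed with status UN (Up/Normal) once it has joined.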

Related

Hazelcast dynamic member addition

We want to form a Hazelcast cluster at server restart: at server start, each server should find its IP address and be added as a member of the Hazelcast cluster.
Please provide any help on the approach.
The default configuration is multicast. If this is enabled by your network administrators, machines should find each other for you. But it may be deactivated, in which case you'd need another way.
An easy way is TCP discovery; the configuration (if you use YAML) looks something like:
hazelcast:
  network:
    join:
      multicast:
        enabled: false
      tcp-ip:
        enabled: true
        member-list:
          - 12.34.56.78:5701
          - 34.56.78.12:5701
          - 56.78.12.34:5701
When the process starts, it will try those addresses (skipping its own) to see if anything is there. If something is, the members will cluster together.
If you don't know your IPs in advance, you can pass them in as arguments and set them from Java:
import com.hazelcast.config.Config;
import com.hazelcast.config.JoinConfig;
import java.util.List;

Config config = new Config();
// Disable multicast and use explicit TCP/IP member discovery instead
JoinConfig joinConfig = config.getNetworkConfig().getJoin();
joinConfig.getMulticastConfig().setEnabled(false);
joinConfig.getTcpIpConfig()
        .setEnabled(true)
        .setMembers(List.of("12.34.56.78:5701", "34.56.78.12:5701"));
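Then start the member with that config; a minimal usage sketch, assuming a recent Hazelcast on the classpath and no other configuration sources:
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

// Starts this member and joins the cluster via the TCP/IP join config above
HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);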

Payara - Hazelcast cluster node picks the wrong network interface

When starting the Payara cluster, one of the nodes binds to the wrong IP address (the internal IP address of the Docker daemon installed locally on that node).
What is the proper way to tell a Payara cluster instance node which address it should bind to?
Node 1 log:
[2017-12-04T11:35:06.512+0800] [Payara 4.1] [INFO] [] [com.hazelcast.internal.cluster.impl.MulticastJoiner] [tid: _ThreadID=16 _ThreadName=RunLevelControllerThread-1512358500010] [timeMillis: 1512358506512] [levelValue: 800] [[
[172.17.0.1]:5900 [dev] [3.8.5]
Members [1] {
Member [172.17.0.1]:5900 - 9be6669e-b853-44c0-9656-8488d3e1031b this
}
]]
Node 2 log:
[2017-12-04T11:35:06.771+0800] [Payara 4.1] [INFO] [] [com.hazelcast.internal.cluster.impl.MulticastJoiner] [tid: _ThreadID=17 _ThreadName=RunLevelControllerThread-1512358500129] [timeMillis: 1512358506771] [levelValue: 800] [[
[10.4.0.86]:5900 [dev] [3.8.5]
Members [1] {
Member [10.4.0.86]:5900 - e3f9dd48-58b9-45f9-88fc-6b0feaedd78f this
}
]]
I have tested the cluster itself, and it works properly on machines with only one network interface (without Docker installed).
I have found issues related to my case but was not able to adapt their solutions to the Payara cluster setup:
Hazelcast cluster over AWS using Docker
Configuring a two node hazelcast cluster - avoiding multicast
That is, the suggestion to use the local property -Dhazelcast.local.localAddress=[yourCorrectIpGoesHere] works, but in a cluster environment with centralized management of the node configs, I do not see how to set different JVM properties for each of the nodes.
Submitting a custom hazelcast-config.xml via the "Override configuration file" option could be a way, but it means the full configuration has to be done via that file, which is not very handy to manage; still, it currently looks like the only option that could potentially help here.
Thanks!
Payara Server doesn't expose this configuration option directly. Using the system property hazelcast.local.localAddress is the preferred option. However, you shouldn't set it as a JVM option the way you did with
-Dhazelcast.local.localAddress=...
Instead, add the system property using the server page in the Admin Console: on the Properties tab, go to the System Properties tab and add a new property with the variable name hazelcast.local.localAddress and the override value set to the IP address of the interface you want Hazelcast to bind to.
This way the configuration is applied at runtime without any server restart, and it can also be propagated to other instances in the cluster if you set the property for the cluster instances as well; for those, instead of going to the server page, you go to the configuration of each instance and set the system property there.
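If you script your setup rather than click through the Admin Console, a rough command-line equivalent (a sketch using the standard asadmin create-system-properties subcommand; MyInstance and the IP are placeholders for your own instance name and interface address) would be:
asadmin create-system-properties --target MyInstance hazelcast.local.localAddress=10.4.0.86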

connecting socketcluster servers

I'm trying to implement this solution (on Win10 x64), but for some reason the SocketCluster nodes all refuse to communicate with each other.
So this is my current configuration:
1 StateServer [7777]
1 BrokerServer [8888]
2 SocketCluster servers running on ports [8000, 8001]
1 LoadBalancer [2000] to divide the traffic between the 2 nodes.
I ensured that both the State and Broker servers are listening:
TCP [::]:7777 [::]:0 LISTENING
TCP [::]:8888 [::]:0 LISTENING
From what I've understood so far, the BrokerServer along with the SocketCluster nodes should all connect to the StateServer(?)
I could successfully connect the BrokerServer to the StateServer, but whenever I try to connect any of the SocketCluster services, it reports 'Socket hung up' errors.
StateServer:
SC Cluster State Server is listening on port 7777
Server d08298c6-523f-4c1b-9fcc-efd4e92fab22 at address undefined on port 8888 joined the cluster
Client 10612bde-514f-40d3-9340-7179a1901376 at address undefined joined the cluster
Cluster state converged to active:["ws://[undefined]:8888"]
SocketCluster instance:
{ SocketProtocolError: Socket hung up
at Emitter.SCSocket._onSCClose (C:\Users\Alex\AppData\Roaming\npm\node_modules\sc-cluster-broker-client\node_modules\socketcluster-client\lib\scsocket.js:596:15)
at Emitter.<anonymous> (C:\Users\Alex\AppData\Roaming\npm\node_modules\sc-cluster-broker-client\node_modules\socketcluster-client\lib\scsocket.js:285:12)
at Emitter.emit (C:\Users\Alex\AppData\Roaming\npm\node_modules\sc-cluster-broker-client\node_modules\component-emitter\index.js:131:20)
at Emitter.SCEmitter.emit (C:\Users\Alex\AppData\Roaming\npm\node_modules\sc-cluster-broker-client\node_modules\sc-emitter\index.js:28:26)
at Emitter.SCTransport._onClose (C:\Users\Alex\AppData\Roaming\npm\node_modules\sc-cluster-broker-client\node_modules\socketcluster-client\lib\sctransport.js:175:30)
at WebSocket.wsSocket.onerror (C:\Users\Alex\AppData\Roaming\npm\node_modules\sc-cluster-broker-client\node_modules\socketcluster-client\lib\sctransport.js:104:12)
at WebSocket.onError (C:\Users\Alex\AppData\Roaming\npm\node_modules\sc-cluster-broker-client\node_modules\ws\lib\WebSocket.js:452:14)
at emitOne (events.js:96:13)
at WebSocket.emit (events.js:188:7)
at WebSocket.EventEmitter.emit (C:\Users\Alex\AppData\Roaming\npm\node_modules\socketcluster\node_modules\sc-domain\index.js:12:31)
name: 'SocketProtocolError',
message: 'Socket hung up',
code: 1006 }
Are you running those instances in Docker containers by any chance?
Based on the log output that you're getting from the state server (address undefined), it looks like the scc-state instance cannot figure out your instances' IP addresses. This can happen for several reasons. For example, running an instance inside a Docker container can obscure that instance's real IP address. It's also possible that running SCC on Windows could cause similar problems.
The solution to this problem is to set an SCC_INSTANCE_IP environment variable when launching each instance. This environment variable should hold the IP address of the instance, which other instances can use to connect to it (if using Docker, you can use the docker inspect command to find the private network IP address of a specific container).
SCC_INSTANCE_IP can be either a private IP address, public IP address or a hostname.
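For example, on Windows something like the following before launching each instance (a sketch; 192.168.0.50 and server.js are placeholders for the machine's real address and your launch script):
set SCC_INSTANCE_IP=192.168.0.50
node server.js
On Linux the equivalent one-liner would be SCC_INSTANCE_IP=192.168.0.50 node server.js.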
It turned out that scaling the cluster horizontally isn't working properly on Windows yet (as of the current version, v1.2.1): neither SocketCluster node communicates with the brokerServer, for some reason.

DataStax Devcenter fails to connect to the remote cassandra db

I've installed DataStax Cassandra and it is up and running on my remote machine. Now I am trying to connect via DataStax DevCenter, but it fails.
Before posting this question I've read identical here: DataStax Devcenter fails to connect to the remote cassandra database
I looked in the cassandra.yaml conf file, but the start_native_transport: true option is not in my file. Where should I look for it?
Also, I've changed rpc_address to 0.0.0.0.
UPDATE:
If I add start_native_transport: true to my cassandra.yaml, it just crashes on Cassandra restart. Please refer to the log below:
ERROR 17:48:32,626 Fatal configuration error error
Can't construct a java object for tag:yaml.org,2002:org.apache.cassandra.config.Config; exception=Cannot create property=start_native_transport for JavaBean=org.apache.cassandra.config.Config#ef28a30; Unable to find property 'start_native_transport' on class: org.apache.cassandra.config.Config
in "<reader>", line 10, column 1:
cluster_name: 'Test Cluster'
^
at org.yaml.snakeyaml.constructor.Constructor$ConstructYamlObject.construct(Constructor.java:372)
at org.yaml.snakeyaml.constructor.BaseConstructor.constructObject(BaseConstructor.java:177)
at org.yaml.snakeyaml.constructor.BaseConstructor.constructDocument(BaseConstructor.java:136)
at org.yaml.snakeyaml.constructor.BaseConstructor.getSingleData(BaseConstructor.java:122)
at org.yaml.snakeyaml.Loader.load(Loader.java:52)
at org.yaml.snakeyaml.Yaml.load(Yaml.java:166)
at org.apache.cassandra.config.DatabaseDescriptor.loadYaml(DatabaseDescriptor.java:141)
at org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:116)
at org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:124)
at org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:389)
at org.apache.cassandra.thrift.CassandraDaemon.main(CassandraDaemon.java:107)
Caused by: org.yaml.snakeyaml.error.YAMLException: Cannot create property=start_native_transport for JavaBean=org.apache.cassandra.config.Config#ef28a30; Unable to find property 'start_native_transport' on class: org.apache.cassandra.config.Config
at org.yaml.snakeyaml.constructor.Constructor$ConstructMapping.constructJavaBean2ndStep(Constructor.java:305)
at org.yaml.snakeyaml.constructor.Constructor$ConstructMapping.construct(Constructor.java:184)
at org.yaml.snakeyaml.constructor.Constructor$ConstructYamlObject.construct(Constructor.java:370)
... 10 more
Caused by: org.yaml.snakeyaml.error.YAMLException: Unable to find property 'start_native_transport' on class: org.apache.cassandra.config.Config
at org.yaml.snakeyaml.constructor.Constructor$ConstructMapping.getProperty(Constructor.java:342)
at org.yaml.snakeyaml.constructor.Constructor$ConstructMapping.constructJavaBean2ndStep(Constructor.java:240)
... 12 more
null; Can't construct a java object for tag:yaml.org,2002:org.apache.cassandra.config.Config; exception=Cannot create property=start_native_transport for JavaBean=org.apache.cassandra.config.Config#ef28a30; Unable to find property 'start_native_transport' on class: org.apache.cassandra.config.Config
Invalid yaml; unable to start server. See log for stacktrace.
Thanks for any help!
start_native_transport: true
should be in cassandra.yaml; if it's not there, add it to cassandra.yaml and try again after restarting the Cassandra server.
What version of Cassandra are you using? DevCenter supports Cassandra versions >= 1.2. The "Unable to find property 'start_native_transport'" error in your trace suggests the node is actually running a pre-1.2 version, whose Config class doesn't know that option at all.
If you still see errors after the change in cassandra.yaml, you can post a link to a Gist. But the YAML format is pretty simple, so I think you'll figure it out.
If you read my previous answer, you'll notice that it required rpc_address to be set to a value other than 0.0.0.0. Anyway, the latest version of DevCenter (1.1.1) will work even if all the nodes in your cluster have rpc_address set to 0.0.0.0 (as a side note, I don't think that's generally a good setting).
DevCenter.ini does not contain Java VM information by default.
Adding the VM info lines below helped resolve the connection issue.
-vm
C:\Program Files (x86)\JDK64\1.8.0.74\jre\bin\java.exe
NOTE: the path above should point to the java.exe of an appropriate JRE version.

Error while connecting to Cassandra using Java Driver for Apache Cassandra 1.0 from com.example.cassandra

While connecting to Cassandra using the Java driver for Cassandra by DataStax, it throws the following error:
Exception in thread "main" com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: [/127.0.0.1])
Please suggest...
Thanks!
My Java code is like this:
package com.example.cassandra;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Host;
import com.datastax.driver.core.Metadata;

public class SimpleClient {
    private Cluster cluster;

    public void connect(String node) {
        cluster = Cluster.builder().addContactPoint(node).build();
        Metadata metadata = cluster.getMetadata();
        System.out.println(metadata.getClusterName());
    }

    public void close() {
        cluster.shutdown();
    }

    public static void main(String[] args) {
        SimpleClient client = new SimpleClient();
        client.connect("127.0.0.1");
        client.close();
    }
}
In my case, I ran into this issue because I used the default RPC port of 9160 for the connection. You can find the port for the CQL native transport in cassandra.yaml:
start_native_transport: true
# port for the CQL native transport to listen for clients on
native_transport_port: 9042
Once I changed the code to use port 9042, the connection attempt succeeded:
public BinaryDriverTest(String cassandraHost, int cassandraPort, String keyspaceName) {
    m_cassandraHost = cassandraHost;
    m_cassandraPort = cassandraPort;
    m_keyspaceName = keyspaceName;
    LOG.info("Connecting to {}:{}...", cassandraHost, cassandraPort);
    cluster = Cluster.builder().withPort(m_cassandraPort).addContactPoint(cassandraHost).build();
    session = cluster.connect(m_keyspaceName);
    LOG.info("Connected.");
}

public static void main(String[] args) {
    BinaryDriverTest bdt = new BinaryDriverTest("127.0.0.1", 9042, "Tutorial");
}
I had this issue, and it was sorted out by setting the read timeout in SocketOptions:
Cluster cluster = Cluster.builder().addContactPoint("localhost").build();
cluster.getConfiguration().getSocketOptions().setReadTimeoutMillis(HIGHER_TIMEOUT);
Go to your Apache Cassandra conf directory and enable the binary protocol:
Cassandra binary protocol
The Java driver uses the binary protocol that was introduced in Cassandra 1.2, so it only works with versions of Cassandra greater than or equal to 1.2. Furthermore, the binary protocol server is not started with the default configuration file in Cassandra 1.2. You must edit the cassandra.yaml file for each node:
start_native_transport: true
Then restart the node.
I was also having the same problem. I had installed Cassandra on a separate Linux PC and tried to connect from a Windows PC, but was not allowed to create the connection.
When I edited cassandra.yaml, set the Linux PC's IP address as rpc_address, and restarted, it allowed me to connect successfully:
# The address or interface to bind the Thrift RPC service and native transport
# server to.
#
# Set rpc_address OR rpc_interface, not both. Interfaces must correspond
# to a single address, IP aliasing is not supported.
#
# Leaving rpc_address blank has the same effect as on listen_address
# (i.e. it will be based on the configured hostname of the node).
#
# Note that unlike listen_address, you can specify 0.0.0.0, but you must also
# set broadcast_rpc_address to a value other than 0.0.0.0.
#rpc_address: localhost
rpc_address: 192.168.0.10
Just posting this for people who might have the same problem I did when I got that error message. It turned out my complex dependency tree pulled in an old version of com.google.collections, which broke the CQL driver. Removing this dependency and relying entirely on Guava solved my problem.
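For reference, a sketch of what such an exclusion looks like in a Maven pom.xml (some.group/some-artifact is a placeholder for whichever of your dependencies drags the stale jar in):
<dependency>
    <groupId>some.group</groupId>
    <artifactId>some-artifact</artifactId>
    <version>1.0</version>
    <exclusions>
        <exclusion>
            <!-- exclude the stale collections jar so the driver uses Guava instead -->
            <groupId>com.google.collections</groupId>
            <artifactId>google-collections</artifactId>
        </exclusion>
    </exclusions>
</dependency>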
I was having the same issue while testing a new cluster with one node.
After removing this from the Cluster builder, I was able to connect:
.withLoadBalancingPolicy(new DCAwareRoundRobinPolicy("US_EAST"))
In my case this was a port issue, which I had forgotten to update:
the old Thrift RPC port is 9160;
the new binary protocol port is 9042.
I too encountered this problem, and it was caused by a simple error in the statement that was being submitted:
session.prepare(null);
Obviously, the error message is misleading.
Edit
/etc/cassandra/cassandra.yaml
and change rpc_address to 0.0.0.0, and set broadcast_rpc_address and listen_address to the node's IP address.
Assuming you have the default configuration in place, check driver version compatibility. Not all driver versions are compatible with all versions of Cassandra, though they claim backward compatibility. Please see the link below:
http://docs.datastax.com/en/developer/java-driver/3.1/manual/native_protocol/
I ran into a similar issue, and changing the driver version solved my problem.
Note: hopefully you are using Maven (or something similar) to resolve dependencies; otherwise you may have to download a lot of dependencies by hand for the higher driver versions.
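For reference, the DataStax Java driver's Maven coordinates look like this (the version is only an illustration; pick one that matches your cluster per the compatibility matrix linked above):
<dependency>
    <groupId>com.datastax.cassandra</groupId>
    <artifactId>cassandra-driver-core</artifactId>
    <version>3.1.4</version>
</dependency>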
Check the points below:
i) check the server IP
ii) check the listening port
iii) the DataStax client dependency must match the server version
As for the yaml file, recent versions have the properties below enabled:
start_native_transport: true
native_transport_port: 9042
