Cassandra create pool with hosts on different ports

I have 10 Cassandra nodes running on Kubernetes on my server and 1 contact point that exposes the service on port 10023.
However, when the DataStax driver tries to establish a connection with the other nodes of the cluster, it uses the exposed port instead of the default one and I get the following error:
com.datastax.driver.core.ConnectionException: [/10.210.1.53:10023] Pool was closed during initialization
Is there a way to expose a single contact point and have it communicate with the other nodes on the standard port (9042)?
I checked the DataStax documentation for anything related to this but didn't find much.
This is how I connect to the cluster:
Cluster.Builder builder = Cluster.builder();
builder.addContactPoints(address)
       .withPort(10023)
       .withCredentials(user, password)
       .withMaxSchemaAgreementWaitSeconds(600)
       .withSocketOptions(
               new SocketOptions()
                       .setConnectTimeoutMillis(Integer.valueOf(timeout))
                       .setReadTimeoutMillis(Integer.valueOf(timeout)));
Cluster cluster = builder.withoutJMXReporting().build();
Session session = cluster.connect();

After the driver contacts the first node, it fetches information about the cluster and uses that information from then on; it includes which ports Cassandra listens on.
To implement what you want, you need Cassandra itself to listen on the corresponding port - this is configured via the native_transport_port parameter in cassandra.yaml.
Also, by default the Cassandra driver will try to connect to all nodes in the cluster, because it uses the DCAware/TokenAware load balancing policy. If you want to use only one node, you need to use WhiteListPolicy instead of the default policy, but that is not optimal from a performance point of view (see the sketch below).
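For reference, a minimal sketch of the WhiteListPolicy approach with the 3.x driver, assuming address is the hostname string and 10023 the port from the question; this limits the driver to that one endpoint, with the trade-offs noted above:
import java.net.InetSocketAddress;
import java.util.Collections;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
import com.datastax.driver.core.policies.WhiteListPolicy;

// Only the single exposed endpoint is allowed as a coordinator;
// the driver still discovers the other nodes but never connects to them.
Cluster cluster = Cluster.builder()
        .addContactPoints(address)
        .withPort(10023)
        .withLoadBalancingPolicy(new WhiteListPolicy(
                DCAwareRoundRobinPolicy.builder().build(),
                Collections.singletonList(new InetSocketAddress(address, 10023))))
        .build();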
I would suggest rethinking how you expose Cassandra to clients.

Related

How do I enable cross-datacenter failover programmatically in the Cassandra Java driver v4?

In DataStax driver 3.x we have the DCAwareRoundRobinPolicy, which tries to connect to remote nodes if the nodes in the local datacenter fail. In DataStax driver 4.x we do not have that policy; the driver confines itself to the local datacenter. But in the DataStax docs it's mentioned that:
Cross-datacenter failover is enabled with the following configuration option:
datastax-java-driver.advanced.load-balancing-policy.dc-failover {
  max-nodes-per-remote-dc = 2
}
The driver will then attempt to open connections to nodes in the remote datacenter. But in the driver we specify only a single datacenter to connect to, as below:
CqlSession session = CqlSession.builder()
        .addContactPoint(new InetSocketAddress("1.2.3.4", 9042))
        .addContactPoint(new InetSocketAddress("5.6.7.8", 9042))
        .withLocalDatacenter("datacenter1")
        .build();
How is the connection to the remote datacenter handled? Please help.
The quick answer is that you don't do it programmatically -- the Java driver does it for you after you've enabled advanced.load-balancing-policy.dc-failover.
The contact points are just the initial hosts the driver "contacts" to discover the cluster topology. After it has collected metadata about the cluster, it knows about nodes in remote DCs.
Since you've configured max-nodes-per-remote-dc = 2, the driver will add 2 nodes from each remote DC to the end of the query plan. The nodes in the local DC will be listed first then followed by the remote nodes. If the driver can't contact the nodes in the local DC, then it will start contacting the remote nodes in the query plan, one node at a time until it runs out of nodes to contact.
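To make this concrete, here's a minimal sketch with the 4.x driver. The dc-failover block above normally lives in application.conf on the classpath, which the driver loads automatically; the programmatic equivalent below is my assumption based on the driver's config options (constant names taken from driver 4.10+, where DC failover was introduced):
import java.net.InetSocketAddress;
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.config.DefaultDriverOption;
import com.datastax.oss.driver.api.core.config.DriverConfigLoader;

// Allow up to 2 nodes per remote DC to be appended to each query plan.
DriverConfigLoader loader = DriverConfigLoader.programmaticBuilder()
        .withString(DefaultDriverOption.LOAD_BALANCING_LOCAL_DATACENTER, "datacenter1")
        .withInt(DefaultDriverOption.LOAD_BALANCING_DC_FAILOVER_MAX_NODES_PER_REMOTE_DC, 2)
        .build();

CqlSession session = CqlSession.builder()
        .addContactPoint(new InetSocketAddress("1.2.3.4", 9042))
        .withConfigLoader(loader)
        .build();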
But I have to reiterate what I said in your other question, we do not recommend enabling DC-failover. For anyone else coming across this answer, you've been warned. Cheers!

Apache Cassandra Server and Datastax Client - Changing IP Addresses

We are using the latest Apache Cassandra database server, and the Datastax Node.js client, running in the cloud.
When our Cassandra servers are rebuilt, they get new IP addresses. Then any running service clients can't find the new servers; evidently the client driver caches the IP addresses instead of resolving them via DNS.
Is there some way around this problem, other than shutting the client down and creating a new one in our services when we encounter an error accessing the database?
If you only have 1 server, there is nothing you can do.
Otherwise, when a node is rebuilt (if it is a single node in a cluster of many), it will advertise the new IP to the cluster and the cluster topology is updated. So the peers table will be updated and the driver can register this event (AFAIK).
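For what it's worth, the Java driver exposes these topology events through Host.StateListener (the Node.js driver has similar host events); a rough sketch, kept in Java to match the other examples on this page:
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Host;

// Log topology changes so the application notices when a rebuilt node
// comes back with a new IP address.
cluster.register(new Host.StateListener() {
    @Override public void onAdd(Host host)    { System.out.println("added: " + host); }
    @Override public void onUp(Host host)     { System.out.println("up: " + host); }
    @Override public void onDown(Host host)   { System.out.println("down: " + host); }
    @Override public void onRemove(Host host) { System.out.println("removed: " + host); }
    @Override public void onRegister(Cluster cluster)   { /* no-op */ }
    @Override public void onUnregister(Cluster cluster) { /* no-op */ }
});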
But why not use private static addresses for your Cassandra nodes?

Cassandra client port enable

How do I enable the Cassandra port so a BI application can connect? My setup is a Cassandra cluster of multiple nodes (192.xxx.xx.01, 192.xxx.xx.02, 192.xxx.xx.03). In this scenario, which node will act as the master / coordinator for my application?
I have configured listen_address, rpc_address, broadcast_rpc_address and seeds, and I opened both TCP ports 9042 and 9160.
version: 3.10
Kindly lead me in the right direction.
Cassandra uses a masterless architecture. All nodes are equal in Cassandra.
When you connect to one of the nodes, that node acts as the coordinator; any node can be the coordinator.
The coordinator is selected by the driver based on the policy you have set. Common policies are DCAwareRoundRobinPolicy and TokenAwarePolicy.
For DCAwareRoundRobinPolicy, the driver selects the coordinator node based on its round robin policy. See more here: http://docs.datastax.com/en/drivers/java/2.1/com/datastax/driver/core/policies/DCAwareRoundRobinPolicy.html
For TokenAwarePolicy, it selects a coordinator node that has the data being queried - to reduce "hops" and latency. More info: http://docs.datastax.com/en/drivers/java/2.1/com/datastax/driver/core/policies/TokenAwarePolicy.html
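A short sketch of wiring these policies up with the 3.x Java driver (the IPs are the masked placeholders from the question):
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
import com.datastax.driver.core.policies.TokenAwarePolicy;

// Token-aware routing on top of DC-aware round robin: the coordinator
// picked for each query is usually a replica that owns the data.
Cluster cluster = Cluster.builder()
        .addContactPoints("192.xxx.xx.01", "192.xxx.xx.02", "192.xxx.xx.03")
        .withPort(9042)
        .withLoadBalancingPolicy(
                new TokenAwarePolicy(DCAwareRoundRobinPolicy.builder().build()))
        .build();
Session session = cluster.connect();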
native_transport_port is 9042 by default and clients use native transport by default.
Hence your BI tool should connect to the Cassandra hosts on port 9042.

Will the DataStax Cluster class ever refresh IP address from the hostname given to builder.addContactPoint() if DNS changes?

I have a problem: once the host name is set, the cluster never updates its IP, even when DNS changes.
Or, what is the recommended way of making the application resilient to the fact that more nodes can be added to the DNS round robin and old nodes decommissioned?
I had the same thing with the Astyanax driver. From what I can tell, it works this way:
The DNS name is used only when the initial connection to the cluster is created. At this point the driver collects data about the cluster nodes. This information is kept as IP addresses, and DNS names are not used any more. Subsequent changes in the cluster topology are also propagated to the client using IP addresses.
So, when you add more nodes to the cluster, you actually do not have to assign domain names to them. Just adding a node to the cluster propagates its IP address to the cluster topology table, and this info is distributed among all cluster members and smart clients like the Java driver (some third-party clients might not have this info and will only use seed nodes to pass queries to).
When you decommission a node it works the same way: all cluster nodes and smart clients receive the information that the node with a particular IP is no longer in the cluster. It can even be the initial seed node.
The domain name only matters for clients that have not yet established a cluster connection.
In case you really need to switch IP you have to:
Join node with new IP
Decommission node with old IP
Assign DNS name to new IP

Cassandra big cluster configure the client connection

I've been looking at how to configure a client to connect to a Cassandra cluster.
Independent of clients like Pelops, Hector, etc., what is the best way to connect to a multi-node Cassandra cluster?
Passing the IP addresses as strings works fine, but what about the number of cluster nodes growing in the future? Does the client have to keep the full list of cluster node IPs in sync?
I don't know if this answers all your questions, but the growth of the cluster and the IPs the client knows about are not related.
I have a 5-node cluster, but the client(s) only know 2 IP addresses: the seeds. Since each machine in the cluster knows about the seeds (each cassandra.yaml contains the seeds' IP addresses), if a new machine is added, information about it comes "for free" on the client side.
Imagine a 5-node cluster with the following IPs:
192.168.1.1
192.168.1.2 (seed)
192.168.1.3
192.168.1.4 (seed)
192.168.1.5
e.g. when node .5 boots, it will contact the seeds (nodes .2 and .4) and receive back information about the whole cluster. If you add a new 192.168.1.6 it will behave exactly like .5 and ask the seeds about the cluster state. On the client side you don't have to change anything: you will just know that you now have 6 endpoints instead of 5.
ps: you don't necessarily have to connect to the seeds; you can connect to any node, since after having contacted the seeds each node knows the whole cluster topology
pps: it's your choice how many nodes to put in your "client known hosts"; you can also put all 5, but this won't change the fact that if a node is added you don't need to do anything on the client side
Regards,
Carlo
You will have an easier time letting the client track the state of each node. Smart clients will track endpoint state via the gossipinfo, which passes on new nodes as they appear in the cluster.
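A minimal sketch of the setup described above with the 3.x Java driver: only the two seeds from Carlo's example are listed as contact points, and the remaining nodes are discovered from the cluster metadata after the first connection.
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Host;
import com.datastax.driver.core.Session;

// Two seeds are enough; the driver learns about the other nodes on its own.
Cluster cluster = Cluster.builder()
        .addContactPoints("192.168.1.2", "192.168.1.4")
        .build();
Session session = cluster.connect();

// All five (or six, after a new node joins) hosts show up here.
for (Host host : cluster.getMetadata().getAllHosts()) {
    System.out.println(host.getAddress());
}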
