How do Cassandra Clients Learn About New Nodes?

If a Cassandra client is given a list of nodes at startup, how does it learn that other nodes exist?
For instance, a client knows a single node's IP and asks it for some data. That node acts as a coordinator, finding the other nodes in the cluster that hold the data the client requested, and returns the result. Is there any part of the Cassandra binary protocol that returns the external address of the node delegated to, along with the requested data?

Related

How is the coordinator node in cassandra determined by a client driver? [duplicate]

I don't understand the load balancing algorithm in Cassandra.
It seems that the TokenAwarePolicy can be used to route my request to the coordinator node holding the data. In particular, the documentation (https://docs.datastax.com/en/developer/java-driver/3.6/manual/load_balancing/) states that it works when the driver is able to automatically compute a routing key. If it can, I am routed to a coordinator node holding the data; if not, I am routed to another node. I can still specify the routing key myself if I really want to reach the data without any extra hop.
What does not make sense to me:
If the driver cannot calculate the routing key automatically, then why can the coordinator? Does it have more information than the client driver? Or does the coordinator node then ask every other node in the cluster on my behalf? That would not scale, right?
I thought that the gossip protocol is used to share the topology of the ring among all nodes (and the client driver). The client driver then has the complete ring structure and should be equal to any 'hop' node.
Load balancing makes sense to me when the client driver determines the N replicas holding the data and then prioritizes them (host distance, etc.), but it doesn't make sense to me when I reach a random node that is unlikely to have my data.
Token-aware load balancing happens only for statements that are able to carry routing information. For example, for prepared queries the driver receives information from the cluster about the fields in the query, including the partition key(s), so it's able to calculate the token for the data and select the node. You can also specify the routing key yourself, and the driver will send the request to the corresponding node.
It's all explained in the documentation:
For simple statements, routing information can never be computed automatically
For built statements, the keyspace is available if it was provided while building the query; the routing key is available only if the statement was built using the table metadata, and all components of the partition key appear in the query
For bound statements, the keyspace is always available; the routing key is only available if all components of the partition key are bound as variables
For batch statements, the routing information of each child statement is inspected; the first non-null keyspace is used as the keyspace of the batch, and the first non-null routing key as its routing key
When a statement doesn't have routing information, the request is sent to a node selected by the nested load balancing policy, and that coordinator node parses the statement, extracts the necessary information, calculates the token, and forwards the request to the correct node.
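As a minimal sketch of the difference (DataStax Java driver 3.x; the contact point, keyspace, and table below are hypothetical):

import java.nio.ByteBuffer;
import com.datastax.driver.core.*;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
import com.datastax.driver.core.policies.TokenAwarePolicy;

// Wrap the child policy in TokenAwarePolicy so the driver prefers a
// replica as coordinator whenever it can compute the token.
Cluster cluster = Cluster.builder()
    .addContactPoint("127.0.0.1")
    .withLoadBalancingPolicy(
        new TokenAwarePolicy(DCAwareRoundRobinPolicy.builder().build()))
    .build();
Session session = cluster.connect("my_keyspace");

// Bound statement: the prepared metadata tells the driver which variables
// form the partition key, so it computes the token and picks a replica.
PreparedStatement prepared = session.prepare("SELECT * FROM users WHERE id = ?");
session.execute(prepared.bind(42));

// Simple statement: no routing info is computed automatically, but we can
// supply the routing key ourselves to avoid the extra hop (a CQL int is
// serialized as a 4-byte big-endian value).
SimpleStatement simple = new SimpleStatement("SELECT * FROM users WHERE id = 42");
simple.setRoutingKey(ByteBuffer.allocate(4).putInt(0, 42));
session.execute(simple);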

Cassandra: how do nodes behave when a new node is joining the ring?

I'm a beginner with Cassandra. I want to understand how the two nodes involved (the streaming node and the joining node) behave when a new node joins an existing cluster. Can they still provide normal service to the outside?
Assuming service continues normally: say the joining node is nodeA and the node it fetches data from is nodeB, so nodeA streams data from nodeB. Suppose token range C is being transmitted from nodeB to nodeA, and at that moment new data falling into range C is inserted into the cluster. Is the new data written to nodeA or nodeB?
I'm using datastax community edition of cassandra, version 3.11.3.
thanks!
Sun, your question is a bit confusing, but what I make of it is that you want to understand the process of adding a new node to an existing cluster.
Adding a new node to an existing cluster requires setting cassandra.yaml properties for the new node's identification and communication.
Set the following properties in the cassandra.yaml and, depending on the snitch, the cassandra-topology.properties or cassandra-rackdc.properties configuration files:
auto_bootstrap - This property is not listed in the default cassandra.yaml configuration file, but it might have been added and set to false by other operations. If it is not defined in cassandra.yaml, Cassandra uses true as the default value. For this operation, search for this property in the cassandra.yaml file; if it is present, set it to true or delete it.
cluster_name - The name of the cluster the new node is joining.
listen_address/broadcast_address - Can usually be left blank. Otherwise, use IP address or host name that other Cassandra nodes use to connect to the new node.
endpoint_snitch - The snitch Cassandra uses for locating nodes and routing requests.
num_tokens - The number of vnodes to assign to the node. If the hardware capabilities vary among the nodes in your cluster, you can assign a proportional number of vnodes to the larger machines.
seeds - Determines which nodes the new node contacts to learn about the cluster and establish the gossip process. Make sure that the -seeds list includes the address of at least one node in the existing cluster.
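Putting those properties together, a minimal cassandra.yaml fragment for the joining node might look like the following (cluster name, addresses, and snitch are hypothetical values; the seeds list points at existing nodes, never at the new node itself):

cluster_name: 'MyCluster'
num_tokens: 256
listen_address: 10.0.0.5          # the new node's own address
endpoint_snitch: GossipingPropertyFileSnitch
auto_bootstrap: true              # make sure this is absent or true
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "10.0.0.1,10.0.0.2"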
When a new node joins a cluster with the topology defined, the seed nodes start gossiping with the new node; during this time it does not serve client requests directly. Once the gossip (and bootstrap streaming) completes, the new node is ready to take on its share of the actual data load.
Hope this helps in understanding the process.

How to handle read/write request in cassandra

I have a 5-node cluster with 2 Cassandra, 2 Solr, and 1 Hadoop node on EC2 with DSE 4.5.
My requirement is that I don't want to hard-code node IP addresses when reading from or writing to the cluster. I have to develop a web service through which requesters can send read/write requests to my cluster, and the web service has to do the following:
1) route read requests to the appropriate node.
2) route write requests to the appropriate node.
If there is a write request, it should be directed to a Cassandra node on the basis of the keyspace and replication factor. If it is a read request, the request should be routed to a Solr node (as I have done the indexing on Solr), and if it is an analytic query, the request should be routed to Hadoop.
And if any node goes down, the response should not be affected.
Apart from dedicated requests, is there any way to query the cluster?
By dedicated I mean giving a specific IP address for reads and writes.
Does any method or algorithm exist in DSE, or is there any tool available for this?
The Java driver should take care of all of that for you:
http://www.datastax.com/documentation/developer/java-driver/2.0/common/drivers/introduction/introArchOverview_c.html
For example:
Nodes discovery: the driver automatically discovers and uses all nodes of the Cassandra cluster, including newly bootstrapped ones
Configurable load balancing: the driver allows for custom routing and load balancing of queries to Cassandra nodes. Out of the box, round robin is provided with optional data-center awareness (only nodes from the local data-center are queried (and have connections maintained to)) and optional token awareness (that is, the ability to prefer a replica for the query as coordinator).
Transparent failover: if Cassandra nodes fail or become unreachable, the driver automatically and transparently tries other nodes and schedules reconnection to the dead nodes in the background.
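To see the node discovery in action, here is a short sketch with the Java driver (the contact point is a hypothetical address); after initialization, the cluster metadata lists every host the driver has discovered, not just the one that was configured:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Host;

// Only one contact point is hard-coded; the driver discovers the rest
// of the cluster through that node.
Cluster cluster = Cluster.builder()
    .addContactPoint("10.0.0.1")
    .build();
for (Host host : cluster.getMetadata().getAllHosts()) {
    System.out.printf("node %s, dc %s, rack %s%n",
        host.getAddress(), host.getDatacenter(), host.getRack());
}
cluster.close();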
On the Solr query side, you can use the SolrJ load balancer; you have to hard-wire the list of nodes to be used as coordinator nodes, but SolrJ will round-robin across them for you.

Cassandra Clustering Failure handling

How does Cassandra behave when the contact node is dead? I mean to say, Cassandra has a ring structure of "n" nodes; the client is going to access the first node, but it is dead (the first node is the one specified in the Java client).
I could not understand the failure handling. Could anyone help me?
If you only have one node, or only specify one node, and that node is down, then the client won't be able to connect (obviously); but usually a client library, like Hector, will allow you to specify a group of nodes and maintain a connection pool, connecting to whichever node is available.
The Hector documentation goes in to a little more detail, but the simplest way to specify multiple nodes is by passing a comma-separated list of hosts in the CassandraHostConfigurator when creating a cluster:
String hosts = "node1.example.com:9160, node2.example.com:9160, node3.example.com:9160";
Cluster cluster = HFactory.getOrCreateCluster(CLUSTER_NAME, new CassandraHostConfigurator(hosts));
It depends on the client; different clients do different things to deal with this, but with most drivers you can supply several contact points.
Astyanax uses token discovery to keep track of various nodes in the cluster. Full docs here.
The host supplier associates the connection pool with a dynamic host registry. The connection pool will frequently poll this supplier for the current list of hosts and update its internal host connection pools to account for new or removed hosts.
This is configured when you are setting up your context:
AstyanaxContext<Keyspace> context = new AstyanaxContext.Builder()
...
.withAstyanaxConfiguration(new AstyanaxConfigurationImpl()
.setDiscoveryType(NodeDiscoveryType.RING_DESCRIBE)
.setConnectionPoolType(ConnectionPoolType.TOKEN_AWARE)
)
...
context.start();
The DataStax Java driver handles failure well as long as you've supplied multiple contact points. If you have 3 nodes with the SimpleStrategy for replication and a replication factor of 3 (i.e., all data is present on all 3 nodes), then you can query as long as one of the 3 nodes is alive.
Cluster cluster = Cluster.builder()
.addContactPoint("127.0.0.1")
.addContactPoint("127.0.0.2")
.addContactPoint("127.0.0.3")
.build();
Session session = cluster.connect();

Cassandra seed nodes and clients connecting to nodes

I'm a little confused about Cassandra seed nodes and how clients are meant to connect to the cluster. I can't seem to find this bit of information in the documentation.
Do the clients only contain a list of the seed nodes, with each node delegating a new host for the client to connect to? Are seed nodes only really for node-to-node discovery, rather than special nodes for clients?
Should each client use a small sample of random nodes in the DC to connect to?
Or, should each client use all the nodes in the DC?
Answering my own question:
Seeds
From the FAQ:
Seeds are used during startup to discover the cluster.
Also from the DataStax documentation on "Gossip":
The seed node designation has no purpose other than bootstrapping the gossip process for new nodes joining the cluster. Seed nodes are not a single point of failure, nor do they have any other special purpose in cluster operations beyond the bootstrapping of nodes.
From these details it seems that a seed is nothing special to clients.
Clients
From the DataStax documentation on client requests:
All nodes in Cassandra are peers. A client read or write request can go to any node in the cluster. When a client connects to a node and issues a read or write request, that node serves as the coordinator for that particular client operation.

The job of the coordinator is to act as a proxy between the client application and the nodes (or replicas) that own the data being requested. The coordinator determines which nodes in the ring should get the request based on the cluster's configured partitioner and replica placement strategy.
I gather that the pool of nodes that a client connects to can just be a handful of (random?) nodes in the DC to allow for potential failures.
Seed nodes serve two purposes.
First, they act as a place for new nodes to announce themselves to a cluster. So, without at least one live seed node, no new nodes can join the cluster, because they have no idea how to contact non-seed nodes to get the cluster status.
Second, seed nodes act as gossip hot spots. Since nodes gossip more often with seeds than with non-seeds, the seeds tend to have more current information, and therefore the whole cluster has more current information. This is the reason you should not make all nodes seeds. Similarly, this is also why all nodes in a given data center should have the same list of seed nodes in their cassandra.yaml file. Typically, 3 seed nodes per data center is ideal.
The Cassandra client contact points simply provide the cluster topology to the client, after which the client may connect to any node in the cluster. As such, they are similar to seed nodes, and it makes sense to use the same nodes for both seeds and client contacts. However, you can safely configure as many client contact points as you like. The only other consideration is that the first node a client contacts sets its data center affinity, so you may wish to order your contact points to prefer a given data center (or pin the local data center explicitly, as sketched below).
For more details about contact points, see this question: Cassandra Java driver: how many contact points is reasonable?
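As a sketch of that last point (DataStax Java driver 3.x; the addresses and data center name are hypothetical), you can pin the local data center explicitly instead of relying on contact-point order:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;

// Pin the local data center explicitly; contact-point order then no
// longer matters for data-center affinity.
Cluster cluster = Cluster.builder()
    .addContactPoints("10.0.1.1", "10.0.1.2", "10.0.1.3")
    .withLoadBalancingPolicy(
        DCAwareRoundRobinPolicy.builder()
            .withLocalDc("DC1")
            .build())
    .build();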
Your answer is right. The only thing I would add is that it's recommended to use the same seed list (i.e., in your cassandra.yaml) across the cluster, as a "best practices" sort of thing. It helps gossip traffic propagate at nice, regular rates, since seeds are treated (very minimally) differently by the gossip code (see https://cwiki.apache.org/confluence/display/CASSANDRA2/ArchitectureGossip).
