Ability to write to a particular Cassandra node

Is it possible to write to a particular node using the DataStax driver?
For example, I have three nodes in datacenter 1 and three nodes in datacenter 2.
Existing behaviour
If I build the cluster with any one of them as a seed, all of the nodes will be detected by the DataStax Java driver. So, in this case, if I insert data using the driver, it will automatically choose one of the nodes and proceed with it as the coordinator (preferably in the local datacenter).
Requirement
I want a way to contact any node in datacenter 2 and hand the coordinator job over to one of the nodes in datacenter 2.
Why I need this
I am trying to use the trigger functionality from datacenter 2 alone. Since triggers are handled by the coordinator, I want the coordinator to be selected from datacenter 2 so that datacenter 1 doesn't have to do this work.

You may be able to use the DCAwareRoundRobinPolicy load balancing policy to achieve this by creating the policy such that DC2 is considered the "local" DC.
Cluster.Builder builder = Cluster.builder().withLoadBalancingPolicy(new DCAwareRoundRobinPolicy("dc2"));
In the above example, remote (non-DC2) nodes will be ignored.
There is also a new WhiteListPolicy in driver version 2.0.2 that wraps another load balancing policy and restricts the nodes to a specific list you provide.
Cluster.Builder builder = Cluster.builder().withLoadBalancingPolicy(new WhiteListPolicy(new DCAwareRoundRobinPolicy("dc2"), whiteList));
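For completeness, here is a rough sketch of how that white list might be built with the 2.x driver API; the addresses below are hypothetical placeholders for your DC2 nodes, and 9042 is the default native transport port:

import java.net.InetSocketAddress;
import java.util.Arrays;
import java.util.Collection;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
import com.datastax.driver.core.policies.WhiteListPolicy;

// Hypothetical DC2 node addresses - replace with your own.
Collection<InetSocketAddress> whiteList = Arrays.asList(
        new InetSocketAddress("10.0.2.1", 9042),
        new InetSocketAddress("10.0.2.2", 9042));

Cluster cluster = Cluster.builder()
        .addContactPoint("10.0.2.1")
        .withLoadBalancingPolicy(
                new WhiteListPolicy(new DCAwareRoundRobinPolicy("dc2"), whiteList))
        .build();

With this setup, only the listed DC2 nodes can ever be picked as coordinators.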

For multi-DC scenarios, Cassandra provides EACH_* and LOCAL_* consistency levels (e.g. EACH_QUORUM and LOCAL_QUORUM), where EACH_* requires the operation to be acknowledged in every DC and LOCAL_* only in the local one.
If I understood correctly, what you are trying to achieve is DC failover in your application. This is not a good practice. Let's assume your application is hosted in DC1 alongside Cassandra. If DC1 goes down, your entire application is unavailable. If DC2 goes down, your application can still write with a LOCAL consistency level, and Cassandra will replicate the changes when DC2 is back.
If you want to achieve HA, you need to deploy the application in each DC, use CL=LOCAL_*, and do failover at the DNS level (e.g. using AWS Route 53).
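As a minimal sketch with the older (2.x/3.x) Java driver API used elsewhere on this page - the keyspace/table names are made up and a connected session is assumed:

import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

// LOCAL_QUORUM only waits for a quorum of replicas in the local DC,
// so a full outage of the remote DC does not block writes.
Statement insert = new SimpleStatement(
        "INSERT INTO my_ks.my_table (id, val) VALUES (1, 'hello')")
        .setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
session.execute(insert);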
See data consistency docs and this blog post for more info about consistency levels for multiple DCs.

Related

How does peer-to-peer architecture work in Cassandra?

How does the peer-to-peer Cassandra architecture really work? I mean:
When a request hits the cluster, it must hit some machine based on an IP, right?
So which machine will it hit first? One of the nodes, or something in the cluster that is responsible for balancing and redirecting the request to the right node?
Could you describe what it is, and how this differs from the master/followers architecture?
For the purposes of my answer, I will use the Java driver as an example since it is the most popular.
When you connect to a cluster using one of the drivers, you need to configure it with details of your cluster, including:
Contact points - the entry point to your cluster which is a comma-separated list of IPs/hostnames for some of the nodes in your cluster.
Login credentials - username and password if authentication is enabled on your cluster.
SSL/TLS certificate and credentials - if encryption is enabled on your cluster.
When your application starts, a control connection is established with the first available node in the list of contact points. The driver uses this control connection for admin tasks such as:
get topology information about the cluster including node IPs, rack placement, network/DC information, etc
get schema information such as keyspaces and tables
subscribe to metadata changes including topology and schema updates
When you configure the driver with a load-balancing policy (LBP), the policy will determine which node the driver will pick as the coordinator for each and every single query. By default, the Java driver uses a load balancing policy which picks nodes in the local datacenter. If you don't specify which DC is local to the app, the driver will set the local DC to the DC of the first contact point.
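For illustration, with the 4.x Java driver the local DC is declared when building the session; the address and DC name here are hypothetical:

import java.net.InetSocketAddress;
import com.datastax.oss.driver.api.core.CqlSession;

// The driver will only pick coordinators from the DC declared as local.
CqlSession session = CqlSession.builder()
        .addContactPoint(new InetSocketAddress("10.0.1.1", 9042))
        .withLocalDatacenter("dc1")
        .build();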
Each time a driver executes a query, it generates a query plan or a list of nodes to contact. This list of nodes has the following characteristics:
A query plan is different for each query to balance the load across nodes in the cluster.
A query plan only lists available nodes and does not include nodes which are down or temporarily unavailable.
Nodes in the local DC are listed first and if the load-balancing policy allows it, remote nodes are included last.
The driver tries to contact each node in the query plan in the order they are listed. If the first node is available then the driver uses it as the coordinator. If the first node does not respond (for whatever reason), the driver tries the next node in the query plan and so on.
Finally, all nodes are equal in Cassandra. There is no active-passive, no leader-follower, no primary-secondary, and this makes Cassandra a true high-availability (HA) cluster with no single point of failure. Any node can do the work of any other node, and the load is distributed equally across all nodes by design.
If you're new to Cassandra, I recommend having a look at datastax.com/dev which has lots of free hands-on interactive learning resources. In particular, the Cassandra Fundamentals learning series lets you learn the basic concepts quickly.
For what it's worth, you can also use the Stargate.io data platform. It allows you to connect to a Cassandra cluster using APIs you're already familiar with. It is fully open-source so it's free to use. Here are links to the Stargate tutorials on datastax.com/dev: REST API, Document API, GraphQL API, and more recently gRPC API. Cheers!
Working with Cassandra, we have to remember two very important things: data is partitioned (split into chunks) and data is replicated (each chunk is stored on a few different servers). Partitioning is needed for scalability; replication serves high availability. Since Cassandra is designed to handle petabytes of data under huge pressure (dozens of millions of queries per second), and no single server can handle such load, each cluster server is responsible only for a range of the data, not for the whole dataset. A node storing the data needed for a particular query is called a "replica node". Note that different queries will have different replica nodes.
Together, this brings a few implications:
We have to reach multiple servers during a single query, to make sure the data is consistent (reads) or to write the data to all responsible servers (writes).
How do we know which node is right for that particular query? What happens if a query hits a "wrong" node? How do we configure the application so it sends queries to the replica nodes?
Funny enough, as a developer you have to do one and only one thing: understand partitions and partition keys, and then Cassandra will take care of all the potential issues. Simple as that. When you design a table, you declare its partition keys, and data placement is based on them - automagically. The other thing is that you always have to specify the partition keys when doing your queries. That's it, your job is done, get yourself some coffee!
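As an illustration (the keyspace, table, and column names here are made up, and a connected session is assumed), the partition key is whatever you put in the first component of the PRIMARY KEY, and it is all Cassandra needs for data placement:

// All orders of one user form a single partition, stored on the
// replicas that own the token of that user_id.
session.execute(
    "CREATE TABLE IF NOT EXISTS shop.orders ("
  + "  user_id  uuid,"
  + "  order_id timeuuid,"
  + "  total    decimal,"
  + "  PRIMARY KEY ((user_id), order_id))");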
Meanwhile, Cassandra starts her job. Cassandra nodes are smart: they know the data placement, they know which servers are responsible for the data you are writing, and they know the partitions - in Cassandra language, they are token-aware. It does not matter which server receives the query, as literally every server is able to answer it. Whichever node gets the request (it is called the query coordinator, because it coordinates the query operations) will find the replica nodes based on the placement of the partitions. The coordinator then executes the query, making the proper calls to the replicas - it knows which nodes to ask because you did your part of the job and specified the partition key value in the query, which is used for the routing.
In short, you can ask any of your cluster nodes to write/read your data; Cassandra is decentralized and you'll get it done. But how do we make it better and get directly to a replica, to avoid bothering nodes that don't store our data?
So which machine will it hit first?
The journey of a request starts much earlier than one might think - when your application starts, the Cassandra driver connects to the cluster and reads information about data placement: which partition is stored on which nodes. This means the driver knows which node has to be contacted for each query. You got it right: the driver is token-aware too!
Token-aware drivers understand data placement and will route a query to a proper replica node. To answer the question: under normal circumstances, your query will first hit one of the replica nodes; this node will read the answers from, or write the data to, the other replica nodes, and that's it - we are good. In some rare situations your query may hit a "wrong" non-replica server, but that doesn't really matter, as it will also do the job, just with a minor delay: for example, with a replication factor of 3 (three replicas), if your query lands on a "wrong" node, that node has to make network calls to all three replicas, whereas hitting the "right" one still requires two network operations. It's not a big deal, though, as all the operations are done in parallel.
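Continuing the hypothetical shop.orders table from above, all the query has to do is restrict the partition key; prepare/bind looks essentially the same in the 3.x and 4.x Java drivers (4.x imports shown):

import java.util.UUID;
import com.datastax.oss.driver.api.core.cql.PreparedStatement;

// Because user_id (the partition key) is bound, a token-aware driver
// hashes it and sends the request straight to one of its replicas.
UUID someUserId = UUID.randomUUID(); // stand-in for a real user id
PreparedStatement ps = session.prepare(
    "SELECT order_id, total FROM shop.orders WHERE user_id = ?");
session.execute(ps.bind(someUserId));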
How does this differ from the master/followers architecture?
With a leader/follower architecture, you can read from any server but you can write only to the leader server, which gives two issues:
Your app needs to know who the leader is (or you need to have a special proxy)
Single Point of Failure (SPoF) - if the leader is down, you can't write to the DB at all
With Cassandra's peer-to-peer architecture you can write to any of the cluster nodes, even if there are thousands of them. Of course, there is no SPoF.
P.S. Cassandra is an extremely powerful technology, but great power comes with great responsibility: it's quite complex, too. If you plan to work with it, you'd better invest some time in learning to use it properly. I suggest taking a developer path on academy.datastax.com (it's free!) or at least watching the DataStax "Intro to Cassandra" workshop.
It depends on the driver that you use to connect to the Cassandra™ cluster. Again, all nodes in the datacenter are one and the same. The driver will connect to any of the nodes in the local datacenter that you have provided in the driver configs, based on the contact points configuration (i.e. datastax-java-driver.basic.contact-points in the Java driver).
For example, the Java driver (and most drivers' logic will be the same) uses system.peers.rpc_address to connect to newly discovered nodes. For special network topologies, an address translation component can be plugged in via advanced.address-translator in the configuration. It is none by default; an EC2-specific translator is also available (for deployments that span multiple regions), or you can write your own.
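As a rough sketch of the "write your own" option with the 4.x Java driver (the mapping logic below is a made-up placeholder, and the driver is assumed to instantiate the translator reflectively with the context):

import java.net.InetSocketAddress;
import com.datastax.oss.driver.api.core.addresstranslation.AddressTranslator;
import com.datastax.oss.driver.api.core.context.DriverContext;

public class MyAddressTranslator implements AddressTranslator {

  // Constructor signature assumed by the driver's reflective loading.
  public MyAddressTranslator(DriverContext context) {}

  @Override
  public InetSocketAddress translate(InetSocketAddress address) {
    // Hypothetical mapping: swap a private address for a public hostname.
    return new InetSocketAddress("public-" + address.getHostString(), address.getPort());
  }

  @Override
  public void close() {}
}

// Registered via: datastax-java-driver.advanced.address-translator.class = "com.example.MyAddressTranslator"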
Each node in the Cassandra cluster is uniquely identified by an IP address that the driver will use to establish connections.
for contact points, these are provided as part of configuring the CqlSession object;
for other nodes, addresses will be discovered dynamically, either by inspecting system.peers on already connected nodes, or via push notifications received on the control connection when new nodes are discovered by gossip.
More info can be found in the Java driver documentation.
It seems you are asking how exactly Cassandra decides which node gets hit with data and which ones don't.
There are two sides to this: the client and the servers.
On the client
When a CQL connection is established, the client (if implemented in the client library and configured) usually also retrieves the topology from the cluster. The topology is the information about token ownership inside the ring, as well as information about quorums, etc.
So, thanks to the consistent hashing of primary keys in Cassandra, the client itself can already decide on the next request which node to contact for a certain piece of data. The client is aware of which node would be the right choice to contact.
But the client can still choose not to use this information and just send the request to any node of the ring - the nodes will then forward the request to the appropriate token owners (see the next section).
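With the 2.x/3.x Java driver used elsewhere on this page, this client-side behaviour is what TokenAwarePolicy makes explicit (in the 4.x driver, token awareness is built into the default policy); the contact point and DC name are placeholders:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
import com.datastax.driver.core.policies.TokenAwarePolicy;

// Route each request to a replica owning the partition's token,
// falling back to the DC-aware child policy's ordering.
Cluster cluster = Cluster.builder()
        .addContactPoint("10.0.1.1")
        .withLoadBalancingPolicy(
                new TokenAwarePolicy(new DCAwareRoundRobinPolicy("dc1")))
        .build();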
In the Cluster
The same applies to the nodes themselves. If a client sends a request to a node, it will simply look up the owner nodes in its topology table and forward the request to exactly the nodes that own this token.
It will always forward the request to all of them so the data stays consistent across the cluster. Depending on the consistency level and replication factor, it will return a success response to the client once the required number of replicas has acknowledged the operation (e.g. LOCAL_QUORUM with RF=3 will return a success response when 2 nodes acknowledge receipt while the 3rd node is still pending).
If a node is detected as down or can't be reached, the command that would have been sent to it is saved in the local hints table - a buffer that keeps all the operations that haven't been successfully delivered to other nodes.
You can read more on Hints in the Cassandra Docs
Compared to a leader/follower architecture, the Cassandra model is actually simpler and depends mostly on all involved nodes seeing all the mutation commands happening to the data they "own" via the tokens.

Add a datacenter with one node to back up an existing one

I already have a working datacenter with 3 nodes (replication factor 2). I want to add another datacenter with only one node, to hold a backup of all the data from the existing datacenter. The final solution:
dc1: 3 nodes (2 rf)
dc2: 1 node (1 rf)
My application would then connect only to dc1 nodes and send data there. If dc1 breaks down, I can recover the data from dc2, which is on another physical machine in a different location. I could also use dc2 for AI queries or some other tasks. I'm a newbie when it comes to Cassandra configuration, so I want to know whether I'm making some kind of mistake in my thinking. I'm planning on using these configuration docs to add the new DC: https://docs.datastax.com/en/cassandra-oss/3.0/cassandra/operations/opsAddDCToCluster.html
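For reference, a layout like this is normally declared on the keyspace (the keyspace name below is hypothetical, and the DC names must match what nodetool status reports):

// NetworkTopologyStrategy takes a replication factor per datacenter.
session.execute(
    "ALTER KEYSPACE my_ks WITH replication = "
  + "{'class': 'NetworkTopologyStrategy', 'dc1': 2, 'dc2': 1}");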
Is there anything more I should keep in mind to get this to work, or some easier solution to have a data backup?
Update: It won't only be a backup; we want to use this second DC for connecting the application as well, when dc1 is unavailable (e.g. during a power outage).
Update: dc2 is running. I had some problems with copying data from one DC to the other, and nodetool status didn't show 2 DCs, but after fixing the firewall rules for port 7000 I managed to connect both DCs and share data between them.
With this approach, your single node will receive a copy of every write, while each node in dc1 receives only two out of every three. It may also add load to the nodes in dc1, because they will need to collect hints, etc., when the node in dc2 is unavailable. If you need just a backup, set up something like Medusa and store the data in a cheap environment, like S3 - but of course, it will take time to restore if you lose the whole DC.
But in reality, you need to think about your high-availability strategy: what will happen to your clients if you lose the primary DC? Is it acceptable to wait for recovery, or do you really require full fault tolerance? I recommend reading the Designing Fault-Tolerant Applications with DataStax and Apache Cassandra™ whitepaper from DataStax - it explains the details of designing really fault-tolerant applications.

How does Cassandra improve performance by adding nodes?

I'm going to build an Apache Cassandra 3.11.x cluster with 44 nodes. Each application server will have one cluster node so that the application does reads/writes locally.
I have a couple of questions running through my mind; kindly answer if possible.
1. How many server IPs should be mentioned in the seed node parameter?
2. How does HA work when all the mentioned seed nodes go down?
3. What is the disadvantage of mentioning all the server IPs in the seed node parameter?
4. How does Cassandra scale with respect to data, other than the primary key and tunable consistency? As per my assumption, the replication factor can improve HA chances but not performance - so how will performance increase by adding more nodes?
5. Is there any sharding mechanism in Cassandra?
Answers are in order:
It's recommended to point to at least 2 nodes per DC.
The seed/contact node is used only for the initial bootstrap - when your program reaches any of the listed nodes, it "learns" the topology of the whole cluster, and from then on the driver listens for node status changes and adjusts its list of available hosts. So even if the seed node(s) go down after the connection is already established, the driver will be able to reach the other nodes.
It's usually harder to maintain - you need to keep the configuration parameters for your driver and the list of nodes in sync.
When you have RF > 1, Cassandra may read or write data from/to any replica. The consistency level regulates how many nodes should return an answer for a read or write operation. When you add a new node, the data is redistributed to it, and if you have correctly selected the partition key, the new node starts to receive requests in parallel with the old nodes.
The partition key is responsible for selecting the replica(s) that will hold the data associated with it - you can see a partition as a shard. But you need to be careful with the choice of partition key - it's easy to create partitions that are too big, or partitions that are "hot" (receiving most of the operations in the cluster - for example, if you use the date as the partition key and always write/read data for today).
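For example, instead of partitioning a time series by date alone (which makes "today" one hot partition), a common pattern is to add a source id and/or time bucket to the partition key - the schema below is a made-up sketch, assuming a connected session:

// The composite partition key spreads today's writes across sensors,
// so no single set of replicas takes all of the current traffic.
session.execute(
    "CREATE TABLE IF NOT EXISTS metrics.readings ("
  + "  sensor_id text,"
  + "  day       date,"
  + "  ts        timestamp,"
  + "  value     double,"
  + "  PRIMARY KEY ((sensor_id, day), ts))");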
P.S. I would recommend reading the DataStax Architecture guide - it contains a lot of information about Cassandra as well.

Local/Remote hosts in Cassandra

Given a cluster, how exactly are different nodes labelled as remote/local? Is it determined on a per-query basis?
Currently, I am thinking of it like this: for each query that the client sends to the Cassandra cluster, a coordinator node is selected (based on the load balancing policy). All nodes that belong to the same datacenter as the coordinator node are called the local nodes, and all the remaining nodes are the remote nodes for that query.
Is this correct?
Yes, that's correct from the coordinator's perspective. But there is also the driver's perspective - when you use a driver with a DC-aware policy, you specify which DC is local for you (in C++ via the cass_cluster_set_load_balance_dc_aware function), and this information is used by the driver to select the correct node (combined with other policies).

What is meant by a node in Cassandra?

I am new to Cassandra and I want to install it. So far I've read a small article on it.
But there is one thing that I do not understand: the meaning of 'node'.
Can anyone tell me what a 'node' is, what it is for, and how many nodes we can have in one cluster?
A node is the storage layer within a server.
Newer versions of Cassandra use virtual nodes, or vnodes. There are 256 vnodes per server by default (the num_tokens setting; note that the default was lowered to 16 in Cassandra 4.0).
A vnode is essentially the storage layer. To keep the terms straight:
machine: a physical server, EC2 instance, etc.
server: an installation of Cassandra. Each machine has one installation of Cassandra. The Cassandra server runs core processes such as the snitch, the partitioner, etc.
vnode: The storage layer in a Cassandra server; each server holds many vnodes (256 by default, as noted above).
Helpful tip:
Where you may get confused is that Cassandra terminology (in older blog posts, YouTube videos, and so on) has been used inconsistently. In older versions of Cassandra, each machine had one Cassandra server installed, and each server contained one node. Due to this 1-to-1-to-1 relationship between machine, server, and node in old versions of Cassandra, people used the terms machine, server, and node interchangeably.
Cassandra is a distributed database management system designed to handle large amounts of data across many commodity servers. Like all other distributed database systems, it provides high availability with no single point of failure.
You may get some ideas from the paragraph above. Generally, when we talk about Cassandra, we mean a Cassandra cluster, not a single machine. A node in a cluster is just a fully functional machine that is connected with the other nodes in the cluster through a fast internal network. All nodes work together so that even if one of them fails due to an unexpected error, the cluster as a whole can still provide service.
All nodes in a Cassandra cluster are the same. There is no concept of a master node or slave nodes. There are multiple reasons for this design, and you can Google them for more details if you want.
Theoretically, you can have as many nodes as you want in a Cassandra cluster. For example, Apple reported running 75,000 nodes at the Cassandra Summit in 2014.
Of course, you can try Cassandra with one machine. It still works with just one node in the cluster.
What is meant by a node in Cassandra?
A Cassandra node is a place where data is stored.
A datacenter is a collection of related nodes.
A cluster is a component that contains one or more datacenters.
In other words, a cluster is a collection of Cassandra nodes that communicate with each other to perform a set of operations.
In Cassandra, each node is independent and, at the same time, interconnected with the other nodes.
All the nodes in a cluster play the same role.
Every node in a cluster can accept read and write requests, regardless of where the data is actually located in the cluster.
In the case of failure of one node, Read/Write requests can be served from other nodes in the network.
If you're looking to understand Cassandra terminology, then the following post is a good reference:
http://exponential.io/blog/2015/01/08/cassandra-terminology/
