Running Cassandra with an incomplete ring

We have a total of four data centers with fully meshed communication, except two data centers are (and will be) unable to directly talk to each other due to network restrictions. Is there any way in Cassandra to deal with this type of situation by excluding connectivity between these sites, while replicating through the other data centers and still keeping the cluster consistent?
Thanks!

I actually found a page with the information:
http://www.datastax.com/documentation/opscenter/5.0/opsc/configure/opscConnectionConfig_r.html
[cassandra] auto_node_discovery
Enables or disables auto-discovery of nodes. When disabled, OpsCenter only attempts to contact nodes in the seed list, and will not auto-discover nodes. By default this is True.
That means you have to list every node manually, and remember to update the list as you add new nodes to your cluster.
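For reference, in the per-cluster configuration file this might look something like the sketch below (the IP addresses are placeholders; check the documentation linked above for the exact file location and options in your OpsCenter version):

```
[cassandra]
seed_hosts = 10.1.0.10, 10.2.0.10, 10.3.0.10
auto_node_discovery = False
```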

Related

Cassandra, Multiple endpoints in java drivers

If you provide multiple hosts in cassandra.hosts for resilience, in what sequence are they selected?
How would multiple hosts work in a LOCAL_QUORUM scenario? If the first host is unreachable, would the data center of the next host be treated as the local DC?
Thanks.
The list that you provide to a driver is just a list of contact points for the initial connection. After the connection happens, the driver discovers the topology of the cluster (it retrieves the list of all nodes, which data centers they belong to, etc.), and then that discovered list is used for all operations. In many implementations the initial list of nodes is shuffled before use - to avoid too many connections to the first node in the list, for example, when you have a lot of applications connecting to the same cluster.
For LOCAL_QUORUM, the full list of nodes in the given data center will be used, regardless of how many nodes you have provided to the driver.
Regarding the last question - the behaviour differs between driver versions. For example, in Java driver 3.x, the DC of the first contacted node was used as the "local DC" if it wasn't provided explicitly. In Java driver 4.x this behaviour has changed, and the driver requires that you provide the DC explicitly (although in the latest versions there is a way of restoring the old behaviour).
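For illustration, a minimal Java driver 4.x setup with multiple contact points and an explicitly declared local DC might look like the sketch below (the addresses and DC name are placeholders):

```java
import com.datastax.oss.driver.api.core.CqlSession;
import java.net.InetSocketAddress;

public class ContactPointsExample {
    public static void main(String[] args) {
        // The contact points are only used for the initial connection;
        // after that the driver discovers the full cluster topology itself.
        try (CqlSession session = CqlSession.builder()
                .addContactPoint(new InetSocketAddress("10.0.0.1", 9042))
                .addContactPoint(new InetSocketAddress("10.0.0.2", 9042))
                // In driver 4.x the local DC must be set explicitly;
                // it is no longer inferred from the first contact point.
                .withLocalDatacenter("dc1")
                .build()) {
            System.out.println("Nodes known to the driver: "
                    + session.getMetadata().getNodes().size());
        }
    }
}
```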

How to make Cassandra nodes have the same data?

I have two computers each one being a Cassandra node and they communicate well with each other.
From what I understand, Cassandra will replicate the data between them but will always query certain portions from only one of them.
I would like, though, to have the data copied to both so they hold the same data, but each one only reads from its local node. Is that possible?
The background reason is that the application on each node keeps generating and downloading a lot of data while both are also doing some very CPU-intensive tasks. What happens is that one node saves the data and suddenly can't find it anymore, because it has been saved on the other node, which is too busy to reply with that data.
Technically, you just need to change the replication factor to the number of nodes, and set your application to always read from the local node using a whitelist load-balancing mode. But it may not help you: if your nodes are very busy, replication of data from the other node may also lag, so the query will fail as well. Or the replication will add additional overhead, making the situation much worse.
You need to rethink your approach - typically you should separate application nodes from database nodes, so that application processes don't affect database processes.
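For completeness, here is a minimal sketch of the replication change suggested above (the keyspace and data center names are made up, and pinning reads to the local node is a separate, driver-specific load-balancing setting that is not shown here):

```java
import com.datastax.oss.driver.api.core.CqlSession;

public class ReplicateToAllNodes {
    public static void main(String[] args) {
        try (CqlSession session = CqlSession.builder()
                .withLocalDatacenter("datacenter1") // placeholder DC name
                .build()) {
            // With 2 nodes in the cluster, RF = 2 puts a full copy on each node.
            session.execute(
                "ALTER KEYSPACE my_keyspace WITH replication = "
                + "{'class': 'NetworkTopologyStrategy', 'datacenter1': 2}");
        }
    }
}
```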

How does peer to peer architecture work in Cassandra?

How does the peer-to-peer Cassandra architecture really work? I mean:
When a request hits the cluster, it must hit some machine based on an IP, right?
So which machine will it hit first: one of the nodes, or something in the cluster that is responsible for balancing and redirecting the request to the right node?
Could you describe how it works, and how does this differ from the Master/Followers architecture?
For the purposes of my answer, I will use the Java driver as an example since it is the most popular.
When you connect to a cluster using one of the drivers, you need to configure it with details of your cluster including:
Contact points - the entry point to your cluster which is a comma-separated list of IPs/hostnames for some of the nodes in your cluster.
Login credentials - username and password if authentication is enabled on your cluster.
SSL/TLS certificate and credentials - if encryption is enabled on your cluster.
When your application starts, a control connection is established with the first available node in the list of contact points. The driver uses this control connection for admin tasks such as:
get topology information about the cluster including node IPs, rack placement, network/DC information, etc
get schema information such as keyspaces and tables
subscribe to metadata changes including topology and schema updates
When you configure the driver with a load-balancing policy (LBP), the policy will determine which node the driver will pick as the coordinator for each and every single query. By default, the Java driver uses a load balancing policy which picks nodes in the local datacenter. If you don't specify which DC is local to the app, the driver will set the local DC to the DC of the first contact point.
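For the Java driver, these settings can also be provided declaratively; a sketch of an application.conf fragment might look like this (host names and DC name are illustrative):

```
datastax-java-driver {
  basic {
    contact-points = [ "node1.example.com:9042", "node2.example.com:9042" ]
    load-balancing-policy {
      # Explicitly declare which DC the application treats as "local".
      local-datacenter = DC1
    }
  }
}
```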
Each time a driver executes a query, it generates a query plan or a list of nodes to contact. This list of nodes has the following characteristics:
A query plan is different for each query to balance the load across nodes in the cluster.
A query plan only lists available nodes and does not include nodes which are down or temporarily unavailable.
Nodes in the local DC are listed first and if the load-balancing policy allows it, remote nodes are included last.
The driver tries to contact each node in the query plan in the order they are listed. If the first node is available then the driver uses it as the coordinator. If the first node does not respond (for whatever reason), the driver tries the next node in the query plan and so on.
Finally, all nodes are equal in Cassandra. There is no active-passive, no leader-follower, no primary-secondary and this makes Cassandra a truly high availability (HA) cluster with no single point-of-failure. Any node can do the work of any other node and the load is distributed equally to all nodes by design.
If you're new to Cassandra, I recommend having a look at datastax.com/dev which has lots of free hands-on interactive learning resources. In particular, the Cassandra Fundamentals learning series lets you learn the basic concepts quickly.
For what it's worth, you can also use the Stargate.io data platform. It allows you to connect to a Cassandra cluster using APIs you're already familiar with. It is fully open-source so it's free to use. Here are links to the Stargate tutorials on datastax.com/dev: REST API, Document API, GraphQL API, and more recently gRPC API. Cheers!
Working with Cassandra, we have to remember two very important things: data is partitioned (split into chunks) and data is replicated (each chunk is stored on a few different servers). Partitioning is needed for scalability, while replication serves high availability. Given that Cassandra is designed to handle petabytes of data under huge pressure (tens of millions of queries per second), and no single server is able to handle such a load, each cluster server is responsible only for a range of data, not for the whole dataset. A node storing the data you need for a particular query is called a "replica node". Notice that different queries will have different replica nodes.
Together, this brings a few implications:
We have to reach multiple servers during a single query: to make sure the data is consistent (reads) and to write the data to all responsible servers (writes).
How do we know which node is right for that particular query? What happens if a query hits a "wrong" node? How do we configure the application so it sends queries to the replica nodes?
Funnily enough, as a developer you have to do one and only one thing: understand partitions and partition keys, and Cassandra will take care of all the potential issues. Simple as that. When you design a table, you declare its partition key, and data placement will be based on that - automagically. Next, you always specify the partition key when running your queries. That's it, your job is done, get yourself some coffee!
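As a tiny illustration (the keyspace, table, and column names are invented), the partition key is declared in the table definition and then supplied in every query:

```cql
-- Data is placed on nodes by hashing the partition key (user_id).
CREATE TABLE shop.orders_by_user (
    user_id  uuid,
    order_id timeuuid,
    total    decimal,
    PRIMARY KEY ((user_id), order_id)
);

-- Always restrict queries by the partition key so they can be routed
-- straight to the replica nodes that own that partition.
SELECT order_id, total
FROM shop.orders_by_user
WHERE user_id = 5bacb5d2-6e0e-4f33-8a52-3d1f40b2a52b;
```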
Meanwhile, Cassandra starts her job. Cassandra nodes are smart: they know the data placement, they know which servers are responsible for the data you are writing, and they know the partitions - in Cassandra language this is called token-aware. It does not matter which server receives the query, as literally every server is able to answer it. Whichever node gets the request (it's called the query coordinator, because it coordinates the query operations) will find the replica nodes based on the placement of the partitions. With that, the query coordinator executes the query, making the proper calls to the replicas - the coordinator knows which nodes to ask because you did your part of the job and specified the partition key value in the query, which is used for the routing.
In short, you can ask any of your cluster nodes to write or read your data; Cassandra is decentralized and will get it done. But how do we make it better and go directly to a replica, to avoid bothering nodes that don't store our data?
So which machine will it hit first?
The journey of a request starts much earlier than you might think: when your application starts, the Cassandra driver connects to the cluster and reads information about data placement - which partition is stored on which nodes. That means the driver knows which node has to be contacted for different queries. You got it right: the driver is token-aware too!
Token-aware drivers understand data placement and will route a query to a proper replica node. To answer the question: under normal circumstances, your query will first hit one of the replica nodes; this node will get answers from, or write data to, the other replica nodes, and that's it. In some rare situations your query may hit a "wrong", non-replica server, but it doesn't really matter, as that node will also do the job, just with a minor delay - for example, if your replication factor is 3 (you have three replicas) and your query lands on a "wrong" node, it will have to ask all three replicas, while hitting the "right" one still requires two network operations. It's not a big deal, though, as all the operations are done in parallel.
how does this differ from the Master/Followers architecture
With a leader/follower architecture, you can read from any server but you can write only to the leader server, which creates two issues:
Your app needs to know who is the leader (or you need to have a special proxy)
Single Point of Failure (SPoF) - if the leader is down, you can't write to the DB at all
With Cassandra's peer-to-peer architecture you can write to any of the cluster nodes, even if there are thousands of them. Of course, there is no SPoF.
P.S. Cassandra is an extremely powerful technology, but great power comes with great responsibility: it's quite complex too. If you plan to work with it, you'd better invest some time in learning to use it properly. I suggest taking a Developer Path on academy.datastax.com (it's free!) or at least watching the DataStax "Intro to Cassandra" workshop.
It depends on the driver that you use to connect to the Cassandra™ cluster. Again, all nodes in the data center are one and the same. The driver connects to any of the nodes in the local data center that you have provided in the driver configs, based on the contact points configuration (i.e. datastax-java-driver.basic.contact-points in the Java driver).
For example, the Java driver (and most drivers' logic will be the same) uses the rpc_address column of system.peers to connect to newly discovered nodes. For special network topologies, an address translation component can be plugged in.
This is controlled by advanced.address-translator in the configuration: it is none by default; an EC2-specific translator is also available (for deployments that span multiple regions), or you can write your own.
Each node in the Cassandra cluster is uniquely identified by an IP address that the driver will use to establish connections.
for contact points, these are provided as part of configuring the CqlSession object;
for other nodes, addresses will be discovered dynamically, either by inspecting system.peers on already connected nodes, or via push notifications received on the control connection when new nodes are discovered by gossip.
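As an illustration of that discovery (the DC name is a placeholder), the Java driver exposes what it has learned through its cluster metadata:

```java
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.metadata.Node;

public class DiscoveredNodes {
    public static void main(String[] args) {
        try (CqlSession session = CqlSession.builder()
                .withLocalDatacenter("DC1") // placeholder DC name
                .build()) {
            // Every node beyond the contact points was learned from
            // system.peers and gossip-driven events on the control connection.
            for (Node node : session.getMetadata().getNodes().values()) {
                System.out.printf("%s  dc=%s  rack=%s  state=%s%n",
                        node.getEndPoint(), node.getDatacenter(),
                        node.getRack(), node.getState());
            }
        }
    }
}
```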
It seems you are asking how specifically Cassandra selects which node gets hit with data and which ones don't.
There are two sides to this: the client and the servers
On the client
When a CQL connection is established, the client (if this is implemented in the client library and configured) usually also retrieves the topology from the cluster. The topology is the information about token ownership inside the ring, as well as information about quorums, etc.
So, thanks to the consistent hashing of partition keys in Cassandra, the client itself can already decide, for the next request, which node to contact for a given piece of data. The client is aware of which node would be the right choice to contact.
But the client can still choose not to use this information and just send the request to any node of the ring - the nodes will then forward the requests to the appropriate token owners (see the next section).
In the Cluster
The same applies to the nodes themselves. If a client sends a request to a node, it will simply look up the owner nodes in its topology table and forward the request to exactly the nodes that own this token.
It will always forward the request to all of them so that the data stays consistent across the cluster. Depending on the consistency level, it will return a success response to the client once the required number of replicas has acknowledged the operation (e.g. LOCAL_QUORUM with RF=3 will return a success response when 2 nodes acknowledge receipt while the 3rd node is still pending).
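As an illustration (the keyspace and table are invented), the required number of acknowledgements is chosen per request through the consistency level:

```java
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.DefaultConsistencyLevel;
import com.datastax.oss.driver.api.core.cql.SimpleStatement;

public class QuorumWrite {
    public static void main(String[] args) {
        try (CqlSession session = CqlSession.builder()
                .withLocalDatacenter("DC1") // placeholder DC name
                .build()) {
            // With RF = 3, LOCAL_QUORUM succeeds once 2 of the 3 local replicas
            // acknowledge the write; the third replica catches up afterwards.
            SimpleStatement stmt = SimpleStatement
                    .newInstance("INSERT INTO my_ks.events (id, payload) "
                            + "VALUES (uuid(), 'hello')")
                    .setConsistencyLevel(DefaultConsistencyLevel.LOCAL_QUORUM);
            session.execute(stmt);
        }
    }
}
```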
If a node is detected as down or can't be reached the Command that would have been sent to the node is saved in the local hints table - a buffer that keeps all the operations that haven't been successfully sent to other nodes.
You can read more on Hints in the Cassandra Docs
Compared to a Leader/Follower architecture the Cassandra model is actually simpler and depends mostly on all involved nodes seeing all the mutation commands happening to the data they "own" via the tokens.

Cassandra: Snitch vs. Gossip

I can't understand the difference between Snitch and Gossip in Cassandra, and I can't find even one source which has discussed the subject, let alone providing a good answer. Seems to me that Snitch and Gossip are both inter-node communication protocols; so why do we need 2 of them?
I know that Gossip helps a node to get information from bootstrap nodes, but that doesn't really explain the difference since when a node starts, it needs to learn about the data centers and racks as well which is supposed to be the domain of the Snitch.
Gossip is a protocol, and the snitch is a component that utilizes it. The snitch is a little more than gossip: it has at least some heuristics, like identifying data centers or racks, while gossip is more like a convenient tool for obtaining this information. Almost all gossip does is spread around, following some rules, so that it covers all the necessary nodes and exchanges some technical data such as IPs, health, etc. The snitch then utilizes this information to do something more. One of its features is identifying different data centers by analyzing the received IPs. That information is then used by other components for further actions, such as replica placement. So this functionality was given a separate name to identify it; it's really all about layering the functionality.
Some relevant information also can be found here: https://books.google.ru/books?id=h36CCwAAQBAJ&pg=PT21&lpg=PT21&dq=snitch+gossip&source=bl&ots=fjxy_z78Gj&sig=KpqdkKaREIo2YAWyJj3yMZCyNn4&hl=ru&sa=X&ved=0ahUKEwiUktS8q8zWAhWIQZoKHTViD0U4ChDoAQhUMAc#v=onepage&q=snitch%20gossip&f=false
And here is a more detailed snitch definition (but for Scylla): https://github.com/scylladb/scylla/wiki/Snitches
Gossip is used to identify the state of machines (are they in the cluster, up/down/joining/leaving).
The snitches help map ownership to an actual machine, and route queries (given these 10 nodes in the cluster, which of the 10 own the data for a given key).
Different snitches can help assign data in different ways - the simple snitch just places all instances into datacenter1/rack1, and uses the simple distributed hashtable / naive partitioner placement. The property file snitch lets you create a file that has all of the instances, and maps the instance to a datacenter/rack, ensuring that replicas always exist on different racks (and datacenters, as defined by the replication strategy).
The gossiping-property-file-snitch and the ec2 snitches are somewhat like the property file snitch in that they're rack/topology aware, but they read the local instance topology information (either from a file or from the ec2 apis) and then gossip it to others, so each node is responsible for broadcasting its own topology information (through gossip).
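As a concrete sketch (the DC and rack names below are placeholders): the snitch is selected in cassandra.yaml, and with the gossiping property file snitch each node reads its own location from cassandra-rackdc.properties and gossips it to the rest of the cluster:

```
# cassandra.yaml
endpoint_snitch: GossipingPropertyFileSnitch

# cassandra-rackdc.properties (local to each node, broadcast via gossip)
dc=DC1
rack=RAC1
```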
Gossip is an epidemic protocol that spreads information through the cluster. It transmits cluster metadata, i.e. the state of the cluster.
The following information is shared as part of gossip:
Generation: when the node booted
Version: timestamp
Application state:
Status: Normal/Joining/Leaving
DC: data center location
Rack: rack number of this node
Schema: schema version on the node
Load: disk pressure on the node
Severity: the pressure on the system from the I/O standpoint
etc...
The snitch, in other words, helps map IPs to racks and data centers. It creates a topology by grouping nodes, which helps determine where data is read from. When a read request comes in, it reaches the coordinator node; the consistency level of the read request and the read_repair_chance for that column family decide how the snitch steps in. Only one node will send back the full requested data, and it is up to the snitch to determine which one.

What is meant by a node in cassandra?

I am new to Cassandra and I want to install it. So far I've read a small article on it.
But there is one thing that I do not understand, and that is the meaning of 'node'.
Can anyone tell me what a 'node' is, what it is for, and how many nodes we can have in one cluster?
A node is the storage layer within a server.
Newer versions of Cassandra use virtual nodes, or vnodes. There are 256 vnodes per server by default.
A vnode is essentially the storage layer.
machine: a physical server, EC2 instance, etc.
server: an installation of Cassandra. Each machine has one installation of Cassandra. The Cassandra server runs core processes such as the snitch, the partitioner, etc.
vnode: The storage layer in a Cassandra server. There are 256 vnodes per server by default.
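The vnode count per server is set in cassandra.yaml; 256 was the long-standing default (newer releases ship with a smaller default, so treat the value below as illustrative):

```
# cassandra.yaml
num_tokens: 256
```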
Helpful tip:
Where you will get confused is that Cassandra terminology (in older blog posts, YouTube videos, and so on) has been used inconsistently. In older versions of Cassandra, each machine had one Cassandra server installed, and each server contained one node. Because of this 1-to-1-to-1 relationship between machine, server and node in old versions of Cassandra, people used the terms machine, server and node interchangeably.
Cassandra is a distributed database management system designed to handle large amounts of data across many commodity servers. Like all other distributed database systems, it provides high availability with no single point of failure.
You may get some idea from the paragraph above. Generally, when we talk about Cassandra, we mean a Cassandra cluster, not a single machine. A node in a cluster is just a fully functional machine that is connected to the other nodes in the cluster through a fast internal network. All nodes work together to make sure that even if one of them fails due to an unexpected error, the cluster as a whole can still provide service.
All nodes in a Cassandra cluster are the same. There is no concept of master or slave nodes. There are multiple reasons for this design, and you can Google for more details if you want.
Theoretically, you can have as many nodes as you want in a Cassandra cluster. For example, Apple reported running 75,000 Cassandra nodes at the Cassandra Summit in 2014.
Of course, you can try Cassandra with one machine. It still works with just one node in the cluster.
What is meant by a node in cassandra?
A Cassandra node is a place where data is stored.
A data center is a collection of related nodes.
A cluster is a component which contains one or more data centers.
In other words, a cluster is a collection of Cassandra nodes which communicate with each other to perform a set of operations.
In Cassandra, each node is independent and at the same time interconnected to other nodes.
All the nodes in a cluster play the same role.
Every node in a cluster can accept read and write requests, regardless of where the data is actually located in the cluster.
In the case of failure of one node, Read/Write requests can be served from other nodes in the network.
If you're looking to understand Cassandra terminology, then the following post is a good reference:
http://exponential.io/blog/2015/01/08/cassandra-terminology/
