Hazelcast High Availability in case of a 3-node cluster

We are using Hazelcast IMDG as an in-memory data grid. Our cluster has three nodes, we have one sync backup, and the cluster is partition aware. In that case, I expect the distributed map to be spread across the 3 nodes (more or less) homogeneously. If a node breaks down, ownership of its partitions should be transferred to a healthy node (the one that holds the sync backup for the lost data). If there is a write request to this newly assigned owner, the same partition should be replicated synchronously to one of the surviving nodes. Does that mean that in case of a node failure, approximately one third of the distributed map has to be replicated, and that all reads are blocked during that replication? Is availability hit, when one of the three nodes is down with one sync backup, until roughly one third of the distribution is restored?

If a node goes down, the cluster will promote the backup replicas of its partitions to primary.
Migrations will then start to create new backups for these promoted partitions.
Please check the Data Partitioning section of the Hazelcast Reference Manual.
During migrations, read operations are not blocked.
Only write operations are blocked on the partition that is actively migrating.
Since the partitions are migrated one by one, the effect on availability is minimal.
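
For reference, here is a minimal sketch of the setup described in the question - three members and a map with one synchronous backup - assuming the Hazelcast 5.x Java API; the map name "orders" is hypothetical:

    import com.hazelcast.config.Config;
    import com.hazelcast.config.MapConfig;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.map.IMap;

    public class OneSyncBackupExample {
        public static void main(String[] args) {
            Config config = new Config();
            // One synchronous backup: each primary partition replica of this map is
            // copied to exactly one other member before a write is acknowledged.
            // (1 is also the Hazelcast default; it is set explicitly here for clarity.)
            config.addMapConfig(new MapConfig("orders")
                    .setBackupCount(1)        // sync backups
                    .setAsyncBackupCount(0)); // no async backups

            // Start (or join) a member; with three such members the partitions of
            // "orders" are spread roughly evenly across the cluster.
            HazelcastInstance member = Hazelcast.newHazelcastInstance(config);
            IMap<String, String> orders = member.getMap("orders");
            orders.put("order-1", "pending"); // returns only after the sync backup is written
        }
    }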

Related

What is the correct order to restart a cluster for a point-in-time restore?

I have a mixed-workload cluster across multiple datacenters. I have run the sstableloader command for the tables I want to restore, using the snapshots I had backed up. I have added the commit log files I had backed up from the archive to a restore directory on all nodes, and I have updated the commitlog_archiving.properties file with these configs.
What is the correct way and order to restart nodes of my cluster?
Do these considerations apply for restarting as well?
As a general rule, we recommend restarting the seed nodes in a DC first, before the other nodes, so that gossip propagation happens faster, particularly for larger clusters (arbitrarily, 15+ nodes). It is important to note that a restart is not required if you restored data using sstableloader.
If you are just performing a rolling restart, then the order of the DCs does not matter. But it does matter if you are starting up a cluster from a cold shutdown, meaning all nodes are down and the cluster is completely offline.
When starting from a cold shutdown, it is important to start with the "Analytics DC" (nodes running in Analytics mode, i.e. with Spark enabled) because it makes it easier to elect a Spark master. Assuming that replication for the Analytics keyspaces is configured with the recommended replication factor of 3, you will need to start 2 or 3 nodes, beginning with the seeds, ideally 1 minute apart, because the LeaderManager requires a quorum of nodes to elect a Spark master.
We recommend leaving DCs with nodes running in Search mode (with Solr enabled) until last, as a matter of convenience, so that all the other DCs are operational before the cluster starts accepting Search requests from the application(s). Cheers!
If you've done all of that, I don't think the order matters too much. You should restart your seed nodes first, though, so that the nodes in the cluster have a common entry point to find their way back in and rejoin correctly.

Cassandra repair after datacenter went down

I have a Cassandra database (version 3.11.2) running in AWS, with 2 datacenters - each in a different AWS region - and 3 nodes in each one.
The replication factor on all keyspaces is 3, so there is a full replica of the data on every node. The data size is about 10 GB per node.
All of our writes use LOCAL_QUORUM against one DC (let's call it DC1). The other DC is basically just a kind of backup for disaster recovery; in case the AWS region for DC1 becomes unavailable, we will redirect traffic to DC2.
My issue is that we had a network disconnection between the two DCs for several hours, and after several days we noticed that data is missing in DC2. This all makes sense, since the time the DCs were apart is longer than the hinted handoff window (3 hours). So we need to run a repair to bring DC2 back in sync with DC1.
I went over the Cassandra docs and read countless SO answers, and for the life of me I couldn't understand what the right repair to do is...
Do I need to issue a 'nodetool repair --full --sequential' from only one node? Do I need to run it on every node in the cluster? Or maybe it's better to run 'nodetool rebuild'?
Executing nodetool cleanup on the nodes in datacenter2 should be able to bring the data back in sync, but depending on the amount of data affected, this can be a task that takes time and resources. If datacenter2 is only used as a backup for disaster recovery purposes, it may be easier and quicker to back up the current DC1 cluster and restore it in the second datacenter (more information is available here).
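
As an illustration of the write pattern described in the question (all writes at LOCAL_QUORUM against DC1), here is a minimal sketch using the DataStax Java driver 3.x; the contact point address, keyspace, and table are hypothetical:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ConsistencyLevel;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;
    import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;

    public class LocalQuorumWrites {
        public static void main(String[] args) {
            // Route requests to the local datacenter (DC1) only; DC2 stays a passive
            // copy for disaster recovery and receives mutations via normal replication.
            Cluster cluster = Cluster.builder()
                    .addContactPoint("10.0.0.1") // hypothetical DC1 node
                    .withLoadBalancingPolicy(DCAwareRoundRobinPolicy.builder()
                            .withLocalDc("DC1")
                            .build())
                    .build();
            Session session = cluster.connect();

            // LOCAL_QUORUM: 2 of the 3 replicas in DC1 must acknowledge the write.
            SimpleStatement insert = new SimpleStatement(
                    "INSERT INTO app.events (id, payload) VALUES (?, ?)", "e1", "hello");
            insert.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
            session.execute(insert);

            cluster.close();
        }
    }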

How does Cassandra improve performance by adding nodes?

I'm going to build an Apache Cassandra 3.11.x cluster with 44 nodes. Each application server will have one cluster node, so that the application does reads/writes locally.
I have a couple of questions in mind; kindly answer them if possible.
1. How many server IPs should be mentioned in the seed node parameter?
2. How does HA work when all of the mentioned seed nodes go down?
3. What is the disadvantage of mentioning all the server IPs in the seed node parameter?
4. How does Cassandra scale with respect to data, other than the primary key and tunable consistency? My assumption is that the replication factor can improve HA but not performance - so how does performance increase by adding more nodes?
5. Is there any sharding mechanism in Cassandra?
Answers are in order:
It's recommended to point to at least 2 seed nodes per DC.
Seed/contact nodes are used only for the initial bootstrap - when your program reaches any of the listed nodes, it "learns" the topology of the whole cluster; the driver then listens for node status changes and adjusts its list of available hosts. So even if the seed node(s) go down after the connection is established, the driver will still be able to reach the other nodes (see the sketch after this answer).
It's usually harder to maintain - you need to keep the configuration parameters of your driver and the list of nodes in sync.
When you have RF > 1, Cassandra may read or write data from/to any replica. The consistency level regulates how many nodes must answer a read or write operation. When you add a new node, the data is redistributed to it, and if you have selected the partition key correctly, the new node starts receiving requests in parallel with the old nodes.
The partition key is responsible for selecting the replica(s) that will hold the data associated with it - you can think of it as a shard. But you need to be careful when choosing a partition key - it's easy to create partitions that are too big, or partitions that are "hot" (receiving most of the operations in the cluster - for example, if you're using the date as the partition key and always writing and reading today's data).
P.S. I would recommend reading the DataStax Architecture guide - it contains a lot of information about Cassandra as well.
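
A minimal sketch of points 1 and 2 above, using the DataStax Java driver 3.x (the addresses are hypothetical): a couple of contact points are enough to bootstrap, after which the driver discovers and tracks the rest of the 44 nodes on its own:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Host;
    import com.datastax.driver.core.Metadata;
    import com.datastax.driver.core.Session;

    public class ContactPointsExample {
        public static void main(String[] args) {
            // Two or three contact points are enough; the driver discovers the
            // remaining nodes from the cluster metadata after the first connection
            // and keeps listening for nodes joining or leaving.
            Cluster cluster = Cluster.builder()
                    .addContactPoints("10.0.0.1", "10.0.0.2") // hypothetical seeds
                    .build();
            Session session = cluster.connect();

            Metadata metadata = cluster.getMetadata();
            for (Host host : metadata.getAllHosts()) {
                System.out.printf("node %s in DC %s, rack %s%n",
                        host.getAddress(), host.getDatacenter(), host.getRack());
            }
            cluster.close();
        }
    }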

What happens when an entire Cassandra cluster goes down

I have a Cassandra cluster of 3 nodes with a replication factor of 2. What would happen if the entire Cassandra cluster goes down at the same time? How can reads and writes be managed in this situation, and what would be the best consistency level so that I can manage my Cassandra nodes for high availability? As of now I'm using QUORUM.
If your cluster is down on all nodes - it is down.
When you need HA, think of deploying more than one datacenter, so availability can be maintained even when an entire datacenter/rack goes down.
If you can live with stale data, you could use CL.ONE instead - you need only one node to respond.
More replicas also increase availability for CL.QUORUM - you need floor(RF/2)+1 of your replicas alive. With RF=2 that is 2/2+1 = 2, i.e. all your replicas need to be online. With RF=3 you still only need 2, since 3/2+1 = 2 with integer division - now you can have one node down (see the sketch after this answer).
As for your writes - all acknowledged writes will have been written to the commitlog on disk (if there is no caching issue on your disks) and will be restored when the nodes come back online. Of course there may be a race condition where the changes are written to disk but the ack never made it over the network.
Keep in mind to set up NTP!
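
To make the quorum arithmetic above explicit, a small sketch of the usual formula (integer division, as in the answer):

    public class QuorumMath {
        // QUORUM for a given replication factor: floor(RF / 2) + 1 replicas must respond.
        static int quorum(int rf) {
            return rf / 2 + 1;
        }

        public static void main(String[] args) {
            for (int rf = 1; rf <= 5; rf++) {
                int q = quorum(rf);
                System.out.printf("RF=%d -> QUORUM needs %d replicas, tolerates %d down%n",
                        rf, q, rf - q);
            }
            // RF=2 -> needs 2, tolerates 0 down; RF=3 -> needs 2, tolerates 1 down.
        }
    }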

Cassandra reads slow with multiple nodes

I have a three node Cassandra cluster with version 2.0.5.
RF=3 and all data is synced to all three nodes.
I read from cqlsh with Consistency=ONE.
When I bring down two of the nodes, my reads are twice as fast as when I have the entire cluster up.
Tracing from cqlsh shows that the slow down on the reads with a full cluster up occurs when a request is forwarded to other nodes.
All nodes are local to the same datacenter and there is no other activity on the system.
So, why are requests sometimes forwarded to other nodes?
Even for the exact same key, if I repeat the same query multiple times, I see that sometimes the query executes on the local node, and sometimes it gets forwarded and then becomes very slow.
Assuming that the cluster isn't overloaded, Cassandra should always prefer to do local reads when possible. Can you create a bug report at https://issues.apache.org/jira/browse/CASSANDRA?
This is due to read repair.
By default, read repair is applied to every read at consistency level QUORUM, and with a 10% chance for lower consistency levels; that's why at consistency level ONE you sometimes see more activity and sometimes less.
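
As a hedged illustration, the chance-based read repair settings are ordinary table options in Cassandra 2.x/3.x and can be changed per table; the keyspace and table names below are hypothetical, and lowering the chances is shown only to demonstrate where the extra cross-node reads in the trace come from:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class ReadRepairTuning {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder()
                    .addContactPoint("127.0.0.1") // hypothetical node
                    .build();
            Session session = cluster.connect();

            // Setting both chances to 0 disables probabilistic read repair for this
            // table, so a CL=ONE read is served by a single replica only.
            // (Hints and regular repairs still keep replicas consistent.)
            session.execute("ALTER TABLE app.events "
                    + "WITH read_repair_chance = 0.0 AND dclocal_read_repair_chance = 0.0");

            cluster.close();
        }
    }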
