Two datacenters with one node in each datacenter - Cassandra

In our organization, we are trying to run two Cassandra datacenters with only one node on each side. From a preliminary investigation I can see that replication is happening, but I want to know whether we can use this deployment in production. Will there be any performance issues with replication?
We have already set up the two datacenters with one node in each, and replication is working fine.
I want to know whether this kind of setup is recommended for a production deployment.

Not sure what your use case is, but in general you use multiple datacenters for a few reasons:
1) Disaster recovery (DR).
2) To run different kinds of load, like analytics or search.
3) To decrease latency if your users are spread across the world.
In general, a minimum of three nodes per datacenter is recommended in production. Again, it depends on the use case.
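For example, a keyspace replicated across two one-node datacenters would look roughly like this (a sketch only; the keyspace and datacenter names are placeholders and must match what your snitch reports):

    # Sketch only: keyspace and DC names below are placeholders.
    cqlsh -e "CREATE KEYSPACE IF NOT EXISTS app_data WITH replication =
        {'class': 'NetworkTopologyStrategy', 'DC1': 1, 'DC2': 1};"
    # With a single node per DC, RF is capped at 1 per DC, so losing either
    # node leaves that DC without a replica until it is back and repaired.
    nodetool status app_data   # confirm both DCs and the ownership per node

That is the main production concern with this layout: replication works, but there is no redundancy inside a DC, so any single node failure or maintenance window immediately reduces you to one copy of the data.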

Related

Does Pulsar support multiple BookKeeper replicas in different clusters?

I have a use case that requires data backup across multiple datacenters and needs strong consistency. Ideally, each segment would be replicated to three clusters located in three different datacenters. Pulsar supports using multiple clusters as one large bookie pool, but I didn't find how to configure the replicas in different clusters. Has anyone had a similar use case before? I think it should not be hard to do, considering Pulsar separates brokers from storage and supports replicas in different clusters.
It's possible to enable a region-aware placement policy for bookies (parameter bookkeeperClientRegionawarePolicyEnabled). You'll also need to configure each bookie's region with the admin command set-bookie-rack. This is not well documented in the Pulsar/BookKeeper docs. See this blog post for more details: https://techblog.cdiscount.com/ensure-cross-datacenter-guaranteed-message-delivery-and-resilience-with-apache-pulsar/
Beware that, due to the cross-region latency between the brokers and the bookies, the throughput will drop, but that can't really be helped if you need strong consistency even in the case of a region failure.
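Very roughly, the moving parts look like this (a sketch; the bookie address, group and region/rack values are placeholders, and the exact rack format expected by the region-aware policy is covered in the blog post above):

    # Sketch only: addresses, group and region/rack values are placeholders.
    # In the broker configuration, enable region-aware bookie placement:
    #   bookkeeperClientRegionawarePolicyEnabled=true
    # Then tag each bookie with its location via the admin CLI:
    pulsar-admin bookies set-bookie-rack \
        --bookie bookie1.dc-a.example.com:3181 \
        --hostname bookie1.dc-a.example.com \
        --group default \
        --rack dc-a/rack-1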

Add a datacenter with one node to back up an existing one

I already have a working datacenter with 3 nodes (replication factor 2). I want to add another datacenter with only one node to hold a backup of all the data from the existing datacenter. The final layout:
dc1: 3 nodes (RF 2)
dc2: 1 node (RF 1)
My application would then connect only to the dc1 nodes and send data there. If dc1 breaks down, I can recover the data from dc2, which is on another physical machine in a different location. I could also use dc2 for AI queries or some other task. I'm a newbie when it comes to Cassandra configuration, so I want to know whether I'm making some kind of mistake in my thinking. I'm planning to use this configuration guide to add the new DC: https://docs.datastax.com/en/cassandra-oss/3.0/cassandra/operations/opsAddDCToCluster.html
Is there anything more I should keep in mind to get this to work, or an easier solution for having a data backup?
Update: It won't only be a backup; we also want to connect the application to this second DC when dc1 is unavailable (e.g. a power outage).
Update: dc2 is running. I had some problems copying data from one DC to the other, and nodetool status didn't show both DCs, but after fixing the firewall rules for port 7000 I managed to connect both DCs and share data between them.
With this approach, your single dc2 node will receive every write and hold the full dataset, roughly 1.5 times the traffic and data of each dc1 node (which holds two-thirds of the data at RF 2). It may also add load to the nodes in dc1, because they will need to collect hints, etc., when the node in dc2 is unavailable. If you need just a backup, set up something like Medusa and store the data in a cheap environment such as S3 - but of course it will take time to restore if you lose the whole DC.
But in reality, you need to think about your high-availability strategy: what will happen to your clients if you lose the primary DC? Is it acceptable to wait until recovery, or do you really require full fault tolerance? I recommend reading the Designing Fault-Tolerant Applications with DataStax and Apache Cassandra™ whitepaper from DataStax - it explains the details of designing truly fault-tolerant applications.
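For completeness, the keyspace/streaming side of the procedure in the docs linked above looks roughly like this (a sketch; the keyspace name is a placeholder and the DC names must match your setup):

    # Sketch only: keyspace name is a placeholder; DC names must match nodetool status.
    cqlsh -e "ALTER KEYSPACE app_data WITH replication =
        {'class': 'NetworkTopologyStrategy', 'dc1': 2, 'dc2': 1};"
    # On the new dc2 node, stream the existing data over from dc1:
    nodetool rebuild -- dc1
    # Then verify that both DCs show up and own the expected ranges:
    nodetool status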

Three-node Cassandra cluster with all nodes configured in different racks in the same DC

I am a newbie with Cassandra.
In our production environment, a three-node Cassandra cluster is running and serving production traffic, but I have the doubts listed below:
1) All nodes are configured in different racks, i.e. rack 1, rack 2 and rack 3, in the same DC. Is that fine, or does this configuration have drawbacks?
2) We are using RF 2 and NetworkTopologyStrategy for all the keyspaces except the system tables, and these system tables are configured with RF 2 and SimpleStrategy. Is that fine, or does it need to be changed? Should we increase the replication factor of system_auth? Please let me know.
3) Now I want to add another node in the same DC. What is the best procedure for doing that without impacting live traffic?
The Cassandra version is Apache Cassandra 3.11.
Thanks in advance.
Ans 1) It is good to have Cassandra nodes in different racks for availability and fault tolerance.
Ans 2) You should increase the RF of system_auth so that you avoid cqlsh login issues from the other nodes (a sketch follows after this answer).
Ans 3) You can add a new node without affecting live traffic on the existing cluster. Please follow the procedure below:
http://cassandra.apache.org/doc/latest/operating/topo_changes.html
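A sketch of the system_auth change mentioned in Ans 2 (the DC name and RF are placeholders; a common choice is RF 3, or the number of nodes on very small clusters):

    # Sketch only: DC name and RF are placeholders.
    cqlsh -e "ALTER KEYSPACE system_auth WITH replication =
        {'class': 'NetworkTopologyStrategy', 'dc1': 3};"
    # Repair system_auth on each node so the new replicas receive the auth data:
    nodetool repair system_auth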
Cassandra is designed as a distributed system, and its architecture is specifically tailored for multi-datacenter deployment. It is robust and flexible enough that you can configure the cluster for optimal geographic distribution, and for redundancy, failover and disaster recovery.
Multi-datacenter deployments are excellent for global solutions where some applications operate in one region and others in another region, while still using a single Cassandra cluster that spans datacenters across regions.
Even for single-region applications, having multiple datacenters is the preferred option because it provides disaster recovery in case one datacenter goes down.
Ans 1) For a single-DC Cassandra cluster, the recommendation is to have 4 nodes with RF 3: rack 1 with 2 nodes and rack 2 with 2 nodes. Remember that nodes in the same rack usually have a faster network path than nodes in different racks, so with two nodes on the same rack, queries at LOCAL_QUORUM will be faster compared to queries on a cluster with all nodes in different racks.
If you are not concerned with query latency, putting all nodes in different racks (3 racks) will give better fault tolerance compared with a two-rack deployment. Having said that, it's always recommended to use a multi-DC deployment for a production cluster.
Ans 2) It's always recommended to increase the replication factor of the system_auth keyspace and change its replication class to NetworkTopologyStrategy. Please follow this documentation for more details: https://docs.datastax.com/en/security/6.0/security/secSystemKeyspace.html
Ans 3) Yes, you can add a new node to the existing cluster without impacting traffic. Please follow this documentation for more details: https://docs.datastax.com/en/archived/cassandra/3.0/cassandra/operations/opsAddNodeToCluster.html
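The add-node procedure from those docs is roughly as follows (a sketch; the seed addresses are placeholders):

    # Sketch only: seed addresses are placeholders.
    # cassandra.yaml on the new node:
    #   cluster_name: <same as the existing cluster>
    #   seeds: "10.0.0.1,10.0.0.2"   # existing nodes, not the new one
    #   endpoint_snitch: <same snitch as the existing nodes>
    # Start the new node (auto_bootstrap defaults to true, so it streams data
    # before serving reads), then watch the streaming progress:
    nodetool netstats
    # Once bootstrap completes, reclaim space on the original nodes:
    nodetool cleanup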

Practicality of having a single-node Cassandra multi-site cluster (3-way)

Is it possible to build a Cassandra cluster with a single-node DC plus two remote DCs that also have a single node each, assuming the replication factor needs to be 3 in this case? The remote DCs are in the same geographical area but not the same building, for HA. Or is there a hard rule that high availability and consistency require enough local nodes to achieve a local quorum?
Our setup is small compared to typical big-data deployments and is mostly used to store time-series data, with approximately 2000-3000 samples per second (on different keys).
Are there other implications besides reads/writes being slow due to the communication delay?
Disclaimer: I am new to Cassandra.
It turns out I want to deploy a similar setup: 3 nodes on AWS, each in its own AZ (but all in the same region). From what I read, this setup is just a single DC with 3 nodes.
You should use Ec2Snitch so Cassandra knows the region/AZ topology, which helps keep latency between your clients and the nodes low.
Using RF=3 provides the HA that you need, since every node has all the data.
Inter-AZ communication should be fairly fast; refer to this: http://highscalability.com/blog/2016/8/1/how-to-setup-a-highly-available-multi-az-cassandra-cluster-o.html
Because you'll be running in a single DC, LOCAL_QUORUM == QUORUM. So as long as you write at QUORUM (which requires 2 of the 3 nodes/AZs to be up), you'll be strongly consistent and highly available.
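Putting that together, the setup might look like this (a sketch; the keyspace name is a placeholder, and the DC name must match whatever Ec2Snitch reports for your region, e.g. us-east):

    # Sketch only: keyspace name is a placeholder; check the DC name with nodetool status.
    # In cassandra.yaml on every node:
    #   endpoint_snitch: Ec2Snitch
    nodetool status   # Ec2Snitch reports the region as the DC and the AZ as the rack
    cqlsh -e "CREATE KEYSPACE IF NOT EXISTS app_data WITH replication =
        {'class': 'NetworkTopologyStrategy', 'us-east': 3};"
    # With RF 3 in a single DC, QUORUM == LOCAL_QUORUM and needs 2 of the 3 AZs up.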

What's the best way to mirror a live Cassandra cluster for analytics tasks?

Assuming a live cluster with several DCs, what's the best way to set up some nodes that are dedicated to analytics queries?
The analytics nodes will be hosted in a separate (routed) network and must not write any data back to the production nodes. They also must not count towards any consistency level; this especially applies to EACH_QUORUM, which will be used for some writes. Analytics nodes may be offline at any time.
All solutions I've looked into seem to have their own drawbacks.
1) Take snapshots on production and transfer to independent analytics cluster
Significant update delay
IO intensive on either the network or disk (e.g. rsync)
Lots of duplicate data due to the different replication factors (3 on production vs. 1 on analytics)
Mismatches between SSTable token ranges and the cluster topology on the analytics cluster may require using sstableloader
2) Use write survey mode to establish read-only nodes
Not 100% sure how this could be done for setting up multiple survey nodes to cover the whole ring
Queries can only be executed against each node locally, as they cannot be part of a coordinated execution
3) Add regular DC dedicated for analytics
EACH_QUORUM writes will fail if the analytics DC is not available (see the sketch below)
Queries on production should not be served from analytics
Would require a way to prevent users on analytics from executing queries or updates on production
Any other options or existing tools that could be used?
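For reference, option 3 boils down to a keyspace change along these lines (a sketch; keyspace and DC names are placeholders), which is exactly where the EACH_QUORUM drawback comes from: once the analytics DC appears in the replication map, EACH_QUORUM writes also need a quorum of its replicas.

    # Sketch only: keyspace and DC names are placeholders.
    cqlsh -e "ALTER KEYSPACE app_data WITH replication =
        {'class': 'NetworkTopologyStrategy', 'prod_dc1': 3, 'prod_dc2': 3, 'analytics': 1};"
    # EACH_QUORUM now also requires a quorum in 'analytics' (1 of its 1 replica),
    # so any downtime of the analytics DC fails those writes.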
