In our organization, we are trying to run two Cassandra datacenters with only 1 node on each side. From a preliminary investigation, I see that replication is happening, but I want to know if we can use this deployment in production. Will there be any performance issues with replication?
We have already set up 2 datacenters with one node in each datacenter, and replication is working fine.
Want to know if this kind of setup is recommended for production deployment.
Not sure what your use case is.
But in general, multiple data centers are used for several reasons:
1) Disaster recovery (DR).
2) To run different kinds of workloads, such as analytics or search.
3) To decrease latency if your users are spread across the world.
In general, a minimum of three nodes per data center is recommended in production. Again, it depends on the use case.
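For context, the replication count per data center is declared on the keyspace, so a one-node DC caps the local replication factor at 1. Here is a minimal sketch using the Python driver (the addresses, DC names, and keyspace are placeholder assumptions) of how a production-style keyspace with 3 replicas per DC would be declared:

```python
from cassandra.cluster import Cluster

# Placeholder contact points; one node from each data center is enough to connect.
cluster = Cluster(["10.0.1.1", "10.0.2.1"])
session = cluster.connect()

# Replication is declared per data center. With only 1 node per DC, the per-DC
# replication factor is capped at 1, so losing that node loses the only local copy
# and LOCAL_QUORUM cannot tolerate any failure. With 3 nodes per DC you can use
# RF=3 in each DC and still serve LOCAL_QUORUM with one node down.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS app_data
    WITH replication = {
        'class': 'NetworkTopologyStrategy',
        'dc1': 3,
        'dc2': 3
    }
""")
```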
We plan to use TiDB for a distributed setup in Europe and Australia.
Does anyone have experience with such a distributed setup?
TiDB developer here.
Based on your description, this is a long-distance, cross-data-center scenario. In this kind of deployment, your read and write latency will depend heavily on the latency between your data centers.
A more reasonable deployment is: if your workload is mainly in Europe and you need strong consistency and high availability at the same time, choose two IDCs in Europe and one IDC in Australia to deploy TiDB, and deploy your application in Europe. Because a TiDB write requires a majority of replicas to be written successfully, in this scenario the write latency is:
latency = min(latency(IDC1, IDC2), latency(IDC2, IDC3), latency(IDC1, IDC3))
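As a rough illustration (the round-trip numbers below are assumptions, not measurements): if the two European IDCs are ~15 ms apart and Europe to Australia is ~250 ms, a majority write led from Europe only needs the nearer surviving replica to acknowledge, so it stays near the intra-Europe latency.

```python
# Back-of-envelope majority-write latency, following the formula above.
# The round-trip times are placeholder assumptions, not measurements.
latencies_ms = {
    ("EU-1", "EU-2"): 15.0,    # assumed intra-Europe round trip
    ("EU-1", "AU-1"): 250.0,   # assumed Europe <-> Australia round trip
    ("EU-2", "AU-1"): 250.0,
}

# With 3 replicas, a write commits once one other replica acknowledges,
# so the best case is the smallest inter-IDC round trip.
print(f"approximate majority-write latency: {min(latencies_ms.values())} ms")
```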
Here are some deployment suggestions and a comparison of the different scenarios:
1. 3-DC Deployment Solution
TiDB, TiKV and PD are distributed among 3 DCs.
Advantages:
All the replicas are distributed among 3 DCs. Even if one DC is down, the other 2 DCs will initiate a leader election and resume service within a reasonable amount of time (within 20 s in most cases), and no data is lost.
Disadvantages:
The performance is greatly limited by the network latency.
For writes, all the data has to be replicated to at least 2 DCs. Because TiDB uses 2-phase commit for writes, the write latency is at least twice the latency of the network between two DCs.
The read performance will also suffer if the leader is not in the same DC as the TiDB node with the read request.
Each TiDB transaction needs to obtain a Timestamp Oracle (TSO) timestamp from the PD leader. So if TiDB and the PD leader are not in the same DC, the performance of transactions will also be impacted by the network latency, because each transaction with a write request has to obtain the TSO twice.
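To make the cost concrete, here is a back-of-envelope model built only from the two points above (two commit phases across DCs plus two TSO fetches per write transaction); the round-trip times are placeholder assumptions, not measurements:

```python
# Rough per-write-transaction latency model for the 3-DC deployment.
# Replace the assumed round-trip times (ms) with your own measurements.
RTT_TIDB_TO_PD_LEADER_MS = 30.0     # TiDB <-> PD leader, paid per TSO request
RTT_LEADER_TO_NEAREST_DC_MS = 30.0  # leader DC <-> nearest follower DC

def estimated_write_latency_ms() -> float:
    tso_cost = 2 * RTT_TIDB_TO_PD_LEADER_MS        # TSO is fetched twice per write transaction
    commit_cost = 2 * RTT_LEADER_TO_NEAREST_DC_MS  # 2-phase commit: prewrite + commit
    return tso_cost + commit_cost

print(f"estimated write latency: {estimated_write_latency_ms():.0f} ms")
```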
Optimizations:
If not all three DCs need to serve the applications, you can dispatch all the requests to one DC and configure the scheduling policy to migrate all the TiKV Region leaders and the PD leader to that DC, as we have done in the following test. In this way, neither obtaining TSO nor reading TiKV Regions is impacted by the network latency between DCs. If this DC goes down, the PD leader and Region leaders will be automatically elected in the other surviving DCs, and you just need to switch the requests to a DC that is still online.
2. 3-DC in 2 cities Deployment Solution
This solution is similar to the previous 3-DC deployment solution and can be considered an optimization based on the business scenario. The difference is that the distance between the 2 DCs within the same city is short, so the latency between them is very low. In this case, we can dispatch the requests to the two DCs within the same city and configure the TiKV Region leaders and the PD leader to be in those 2 DCs.
Compared with the 3-DC deployment, the 3-DC in 2 cities deployment has the following advantages:
Better write performance
Better usage of the resources because 2 DCs can provide services to the applications.
Even if one DC is down, the TiDB cluster will be still available and no data is lost.
However, the disadvantage is that if the 2 DCs within the same city both go down, which is more likely than an outage of 2 DCs in 2 different cities, the TiDB cluster will become unavailable and some of the data will be lost.
3. 2-DC + Binlog Synchronization Deployment Solution
The 2-DC + Binlog synchronization solution is similar to the MySQL master-slave solution. Two complete TiDB clusters (each including TiDB, PD and TiKV) are deployed in 2 DCs, one acting as the Master and one as the Slave. Under normal circumstances, the Master DC handles all the requests, and the data written to the Master DC is asynchronously replicated to the Slave DC via Binlog.
If the Master DC goes down, the requests can be switched to the Slave cluster. Similar to MySQL, some data might be lost. But unlike MySQL, this solution can ensure high availability within the same DC: if some nodes within the DC are down, the online business won't be impacted and no manual effort is needed, because the cluster will automatically re-elect leaders to provide services.
Some of our production users also adopt the 2-DC multi-active solution, which means:
The application requests are separated and dispatched into 2 DCs.
Each DC has 1 cluster and each cluster has two databases: A Master database to serve part of the application requests and a Slave database to act as the backup of the other DC’s Master database. Data written into the Master database is synchronized via Binlog to the Slave database in the other DC, forming a loop of backup.
Please note that for the 2-DC + Binlog synchronization solution, data is asynchronously replicated via Binlog. If the network latency between the 2 DCs is too high, the data in the Slave cluster will fall far behind the Master cluster. If the Master cluster goes down, some data will be lost, and it cannot be guaranteed that the lost data is within 5 minutes' worth.
Overall analysis for HA and DR
For the 3-DC deployment solution and the 3-DC-in-2-cities solution, we can guarantee that the cluster will automatically recover with no human intervention needed, and that the data stays strongly consistent, even if any one of the 3 DCs goes down. All the scheduling policies are there to tune performance, but in case of an outage, availability is the top priority rather than performance.
For the 2-DC + Binlog synchronization solution, we can guarantee that the cluster will automatically recover with no human intervention needed, and that the data stays strongly consistent, even if some of the nodes within the Master cluster go down. When the entire Master cluster goes down, manual effort will be needed to switch to the Slave, and some data will be lost. The amount of lost data depends on the network latency and the network conditions.
Recommendations on how to achieve high performance
As described previously, in the 3-DC scenario, network latency is critical for performance. Due to the high latency, a transaction (10 reads and 1 write) will take about 100 ms, so a single thread can only reach about 10 TPS.
The following table shows the result of our Sysbench test (3 IDCs: 2 in US-West and 1 in US-East):
| threads | tps | qps |
|--------:|--------:|---------:|
| 1 | 9.43 | 122.64 |
| 4 | 36.38 | 472.95 |
| 16 | 134.57 | 1749.39 |
| 64 | 517.66 | 6729.55 |
| 256 | 1767.68 | 22979.87 |
| 512 | 2307.36 | 29995.71 |
| 1024 | 2406.29 | 31281.71 |
| 2048 | 2256.27 | 29331.45 |
This test uses the previously recommended deployment, which schedules the TiKV Region leaders within one DC; in addition, the priority of the PD leader is set with `pd-ctl member leader_priority pd1 2` so that the PD leader is located in the same DC as the TiKV Region leaders, avoiding the overly high network latency of obtaining TSO.
Based on this, we conclude that if you want more TPS, you should use more concurrency in your application.
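As a rough sanity check of that conclusion (the ~100 ms per-transaction latency is the figure estimated above; the rest is simple arithmetic):

```python
# Rough throughput model: with a fixed per-transaction latency, ideal TPS scales
# linearly with the number of concurrent client threads until the cluster saturates.
PER_TXN_LATENCY_S = 0.100  # ~100 ms per transaction, as estimated above

def ideal_tps(concurrency: int) -> float:
    # Each thread completes about 1 / latency transactions per second.
    return concurrency / PER_TXN_LATENCY_S

for threads in (1, 4, 16, 64):
    print(f"{threads:>2} threads -> ~{ideal_tps(threads):6.0f} TPS (ignoring saturation)")
```

The ideal numbers track the low-concurrency rows of the Sysbench table above (1 thread is roughly 10 TPS); at higher concurrency the measured TPS flattens out as the cluster saturates.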
We recommend the following solutions:
Add more TiDB instances with the same configuration for further testing to leverage TiDB’s scalability.
To avoid the performance penalty caused by cross-DC transaction commit requests, consider changing the workload from 10 reads + 1 write for each transaction to 100 reads + 10 writes for higher QPS.
For the question about HA: no manual operation is needed if the DC holding the leaders fails. Even though the leaders are scheduled into one DC, if that DC fails, the majority of the replicas still survive and will elect new leaders after the failure thanks to the Raft consensus algorithm. This process is automatic and requires no manual intervention. The service is still available and will not be impacted, apart from slight performance degradation.
We recently deployed microservices into production, and these microservices communicate with Cassandra nodes for reads/writes.
After deployment, we started noticing a sudden drop in CPU to 0 on all Cassandra nodes in the primary DC. This is happening at least once per day. Each time it happens, we see that 2 random nodes (in the SAME DC) are not reachable from each other ("nodetool describecluster"), and when we check "nodetool tpstats", these 2 nodes have a higher number of ACTIVE Native-Transport-Requests, between 100 and 200. These 2 nodes are also storing HINTS for each other, but when I run longer pings between them I don't see any packet loss. When we restart those 2 Cassandra nodes, the issue is fixed for the moment. This has been happening for 2 weeks.
We use Apache Cassandra 2.2.8.
The microservice logs also show read/write timeouts before the sudden drop in CPU on all Cassandra nodes.
You might be using a token-aware load balancing policy on the client and updating a single partition or token range heavily, in which case all the coordination load will be focused on a single replica set. You can change your application to use the RoundRobin (or DC-aware round robin) LoadBalancingPolicy, which will likely resolve it. If it does, you have a hotspot in your application and you should give some attention to your data model.
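For example, with the Python driver (the contact points, local DC name, and keyspace below are placeholders; the Java driver has equivalently named policies), dropping the token-aware wrapper looks roughly like this:

```python
from cassandra.cluster import Cluster, ExecutionProfile, EXEC_PROFILE_DEFAULT
from cassandra.policies import DCAwareRoundRobinPolicy, TokenAwarePolicy

# Typical hotspot-prone setup: token-aware routing sends every request for a hot
# partition straight to its replicas (shown only for contrast, not used below).
token_aware = TokenAwarePolicy(DCAwareRoundRobinPolicy(local_dc="DC1"))

# DC-aware round robin spreads coordination across all local nodes instead,
# which helps confirm whether a hot partition is the root cause.
round_robin_profile = ExecutionProfile(
    load_balancing_policy=DCAwareRoundRobinPolicy(local_dc="DC1")
)

cluster = Cluster(
    contact_points=["10.0.0.1", "10.0.0.2"],  # placeholder addresses
    execution_profiles={EXEC_PROFILE_DEFAULT: round_robin_profile},
)
session = cluster.connect("my_keyspace")  # placeholder keyspace
```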
It does look like a data model problem (hot partitions causing issues on specific replicas).
But in any case you might want to add the following to your cassandra-env.sh to see if it helps:
JVM_OPTS="$JVM_OPTS -Dcassandra.max_queued_native_transport_requests=1024"
More information about this here: https://issues.apache.org/jira/browse/CASSANDRA-11363
Is it possible to build a Cassandra cluster with a single-node DC plus 2 remote DCs which also have a single node each, assuming the replication factor is required to be 3 in this case? The remote DCs are in the same geographical area, but not in the same building, for HA. Or are there hard rules that high availability and consistency require a local quorum of nodes?
Our setup is small compared to typical big data deployments and is usually used to store time series data, with approximately 2000-3000 samples per second (on different keys).
Are there other implications besides reads/writes possibly being slow due to the communication delay?
Disclaimer: I am new to Cassandra.
It turns out I want to deploy a similar setup: 3 nodes on AWS, each in its own AZ (but all in the same region). From what I read, this setup is just a single DC with 3 nodes.
You need to use Ec2Snitch to reduce the latency between your clients and the nodes.
Using RF=3 provides you with the HA that you need, since every node has all the data.
Inter-AZ communication should be fairly fast. Refer to this: http://highscalability.com/blog/2016/8/1/how-to-setup-a-highly-available-multi-az-cassandra-cluster-o.html
Because you'll be running in a single DC, LOCAL_QUORUM == QUORUM. So as long as you write at QUORUM (which requires 2 of 3 nodes (AZs) to be up), you'll be strongly consistent and highly available.
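As a concrete example with the Python driver (the addresses, keyspace, and table are placeholders), writing at QUORUM looks roughly like this:

```python
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(["10.0.0.1", "10.0.0.2", "10.0.0.3"])  # placeholder node addresses
session = cluster.connect("metrics")                     # placeholder keyspace

# With RF=3, QUORUM needs 2 replicas to acknowledge, so one node (AZ) can be
# down while reads and writes at QUORUM remain consistent and available.
insert = SimpleStatement(
    "INSERT INTO samples (sensor_id, ts, value) VALUES (%s, %s, %s)",
    consistency_level=ConsistencyLevel.QUORUM,
)
session.execute(insert, ("sensor-1", 1700000000, 42.0))
```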
I have a Cassandra cluster with 3 nodes and a replication factor of 2. What would happen if the entire Cassandra cluster goes down at the same time? How can reads and writes be managed in this situation, and what would be the best consistency level so that I can keep my Cassandra nodes highly available? As of now I'm using QUORUM.
If your cluster is down on all nodes - it is down.
When you need HA, think of deploying more than one datacenter, so availability can be maintained even when an entire datacenter/rack goes down.
If you can live with stale data, you could use CL.ONE instead - you need only one node to respond.
More replicas also increase availability for CL.QUORUM - you need floor(RF/2)+1 of your replicas alive. With RF=2 that is 2/2+1 = 2, i.e. all your replicas need to be online. With RF=3 you still only need 2, as 3/2+1 = 2 - now you can have one node down.
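A tiny sketch of that arithmetic (integer division, as used above):

```python
def quorum(rf: int) -> int:
    # Replicas that must respond for CL.QUORUM: floor(RF / 2) + 1
    return rf // 2 + 1

for rf in (2, 3):
    print(f"RF={rf}: quorum={quorum(rf)}, replicas that may be down={rf - quorum(rf)}")
```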
As for your writes: all acknowledged writes will have been written to disk in the commitlog (if there is no caching issue on your disks) and will be restored when the nodes come back online. Of course, there may be a race condition where the changes are written to disk but not acknowledged over the network.
Keep in mind to set up NTP!