Distributed setup for TiDB across continental borders - tidb

We plan to use TiDB for a distributed setup in Europe and Australia.
Does anyone have experience with such a distributed setup?

TiDB developer here.
From your description, this is a long-distance, cross-data-center scenario. In such a deployment, you need to understand that your read and write latency will depend heavily on the latency between your data centers.
A more reasonable deployment is this: if your workload is mainly in Europe and you need strong consistency and high availability at the same time, you can choose two IDCs in Europe and one IDC in Australia to deploy TiDB, and your application should be deployed in Europe. Because a TiDB write has to succeed on a majority of the replicas, in this scenario the write latency is roughly:
latency = min(latency(IDC1, IDC2), latency(IDC2, IDC3), latency(IDC1, IDC3))
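For illustration only (the numbers below are assumptions, not measurements): if the round trip between the two European IDCs is about 20 ms and the round trip from Europe to Australia is about 250 ms, then with the Raft leaders kept in Europe a write commits as soon as the second European replica acknowledges it, so the write latency is roughly min(20, 250, 250) = 20 ms plus local processing time.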
Here are some deployment suggestions and a comparison for different scenarios:
1. 3-DC Deployment Solution
TiDB, TiKV and PD are distributed among 3 DCs.
Advantages:
All the replicas are distributed among the 3 DCs. Even if one DC is down, the other 2 DCs will initiate leader election and resume service within a reasonable amount of time (within 20 s in most cases), and no data is lost.
Disadvantages:
The performance is greatly limited by the network latency.
For writes, all the data has to be replicated to at least 2 DCs. Because TiDB uses 2-phase commit for writes, the write latency is at least twice the latency of the network between two DCs.
Read performance will also suffer if the Region leader is not in the same DC as the TiDB node issuing the read request.
Each TiDB transaction needs to obtain a timestamp from the TimeStamp Oracle (TSO) on the PD leader. So if TiDB and the PD leader are not in the same DC, transaction performance will also be impacted by the network latency, because each transaction that contains a write has to get a TSO twice.
Optimizations:
If not all three DCs need to serve the applications, you can dispatch all the requests to one DC and configure the scheduling policy to migrate all the TiKV Region leaders and the PD leader to that same DC, as we have done in the test below. In this way, neither obtaining TSO nor reading TiKV Regions is impacted by the network latency between DCs. If this DC goes down, the PD leader and Region leaders will be automatically elected in the other surviving DCs, and you just need to switch the requests to a DC that is still online.
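A rough sketch of what that scheduling could look like with pd-ctl is below; the PD member names, host names, and store IDs are placeholders, and the exact pd-ctl invocation differs a bit between TiDB versions, so treat this as an outline rather than a recipe:

```
# Prefer the PD member in the serving DC as PD leader (higher value = higher priority).
pd-ctl -u http://pd1.dc1.example:2379 member leader_priority pd1 5
pd-ctl -u http://pd1.dc1.example:2379 member leader_priority pd2 3
pd-ctl -u http://pd1.dc1.example:2379 member leader_priority pd3 1

# Evict TiKV Region leaders from the stores outside the serving DC.
# Store IDs 4 and 5 are hypothetical; list them first with: pd-ctl -u ... store
pd-ctl -u http://pd1.dc1.example:2379 scheduler add evict-leader-scheduler 4
pd-ctl -u http://pd1.dc1.example:2379 scheduler add evict-leader-scheduler 5
```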
2. 3-DC in 2 cities Deployment Solution
This solution is similar to the previous 3-DC deployment solution and can be considered an optimization based on the business scenario. The difference is that the distance between the 2 DCs within the same city is short, so the latency between them is very low. In this case, we can dispatch the requests to the two DCs within the same city and configure the TiKV Region leaders and the PD leader to be in those 2 DCs.
Compared with the 3-DC deployment, the 3-DC in 2 cities deployment has the following advantages:
Better write performance
Better usage of the resources because 2 DCs can provide services to the applications.
Even if one DC is down, the TiDB cluster will be still available and no data is lost.
However, the disadvantage is that if the 2 DCs within the same city both go down (the probability of which is higher than that of 2 DCs in 2 different cities going down), the TiDB cluster becomes unavailable and some of the data will be lost.
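For either 3-DC layout, PD can only place and schedule replicas in a DC-aware way if the TiKV stores carry location labels. A minimal sketch, with made-up label names, values, and addresses:

```
# Tell PD which label keys define the topology, from coarse to fine.
pd-ctl -u http://pd1.dc1.example:2379 config set location-labels zone,dc,host

# Start each TiKV instance with labels describing where it runs (one node shown).
tikv-server \
  --pd pd1.dc1.example:2379 \
  --addr 0.0.0.0:20160 \
  --data-dir /data/tikv \
  --labels zone=eu,dc=dc1,host=tikv-dc1-1
```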
3. 2-DC + Binlog Synchronization Deployment Solution
The 2-DC + Binlog synchronization solution is similar to the MySQL Master-Slave solution. Two complete TiDB clusters (each including TiDB, PD and TiKV) are deployed in 2 DCs; one acts as the Master and one as the Slave. Under normal circumstances, the Master DC handles all the requests, and the data written to the Master DC is asynchronously replicated to the Slave DC via Binlog.
If the Master DC goes down, the requests can be switched to the Slave cluster. As with MySQL, some data might be lost. But unlike MySQL, this solution can ensure high availability within the same DC: if some nodes within the DC go down, the online business won't be impacted and no manual effort is needed, because the cluster will automatically re-elect leaders to provide service.
Some of our production users also adopt the 2-DC multi-active solution, which means:
The application requests are separated and dispatched into 2 DCs.
Each DC has 1 cluster and each cluster has two databases: A Master database to serve part of the application requests and a Slave database to act as the backup of the other DC’s Master database. Data written into the Master database is synchronized via Binlog to the Slave database in the other DC, forming a loop of backup.
Please note that for the 2-DC + Binlog synchronization solution, data is asynchronously replicated via Binlog. If the network latency between the 2 DCs is too high, the data in the Slave cluster will fall far behind the Master cluster. If the Master cluster goes down, some data will be lost, and it cannot be guaranteed that the lost data stays within 5 minutes' worth.
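If you go the Binlog route, it is worth keeping an eye on the Pump and Drainer status so that replication lag is noticed early. A minimal sketch, assuming the binlogctl tool that ships with TiDB Binlog and a placeholder PD address:

```
# List registered Pump and Drainer instances and their current state.
binlogctl -pd-urls=http://pd1.master-dc.example:2379 -cmd pumps
binlogctl -pd-urls=http://pd1.master-dc.example:2379 -cmd drainers
```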
Overall analysis for HA and DR
For the 3-DC deployment solution and the 3-DC in 2 cities solution, we can guarantee that the cluster will recover automatically, that no human intervention is needed, and that the data remains strongly consistent even if any one of the 3 DCs goes down. All the scheduling policies exist to tune performance, but in case of an outage, availability is the top priority rather than performance.
For the 2-DC + Binlog synchronization solution, we can guarantee that the cluster will recover automatically, that no human intervention is needed, and that the data remains strongly consistent as long as only some of the nodes within the Master cluster go down. When the entire Master cluster goes down, manual effort will be needed to switch to the Slave, and some data will be lost. The amount of lost data depends on the network latency and the network conditions.
Recommendations on how to achieve high performance
As described previously, in the 3-DC scenario network latency is critical for performance. Due to the high latency, a transaction (10 reads and 1 write) will take about 100 ms, so a single thread can only reach about 10 TPS.
This table shows the result of our Sysbench test (3 IDCs: 2 in US-West and 1 in US-East):
| threads | tps | qps |
|--------:|--------:|---------:|
| 1 | 9.43 | 122.64 |
| 4 | 36.38 | 472.95 |
| 16 | 134.57 | 1749.39 |
| 64 | 517.66 | 6729.55 |
| 256 | 1767.68 | 22979.87 |
| 512 | 2307.36 | 29995.71 |
| 1024 | 2406.29 | 31281.71 |
| 2048 | 2256.27 | 29331.45 |
As in the previously recommended deployment, the TiKV Region leaders are scheduled into one DC; in addition, the priority of the PD leader is set with pd-ctl member leader_priority pd1 2 so that the PD leader is located in the same DC as the TiKV Region leaders, avoiding the overly high network latency of obtaining TSO.
Based on this, we conclude that if you want more TPS, you should use higher concurrency in your application.
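For reference, a test like the one above can be reproduced roughly as follows with sysbench 1.0's built-in OLTP workload; the connection parameters and table sizes are placeholders, and this is not necessarily the exact workload mix used in the table above:

```
# Prepare test tables through TiDB's MySQL protocol endpoint.
sysbench oltp_read_write \
  --db-driver=mysql \
  --mysql-host=tidb.dc1.example --mysql-port=4000 \
  --mysql-user=root --mysql-db=sbtest \
  --tables=16 --table-size=1000000 \
  prepare

# Run with increasing concurrency, e.g. 64 threads for 5 minutes.
sysbench oltp_read_write \
  --db-driver=mysql \
  --mysql-host=tidb.dc1.example --mysql-port=4000 \
  --mysql-user=root --mysql-db=sbtest \
  --tables=16 --table-size=1000000 \
  --threads=64 --time=300 --report-interval=10 \
  run
```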
We recommend the following solutions:
Add more TiDB instances with the same configuration for further testing to leverage TiDB’s scalability.
To avoid the performance penalty caused by cross-DC transaction commit requests, consider changing the workload from 10 reads + 1 write for each transaction to 100 reads + 10 writes for higher QPS.
For the question about HA: no manual operation is needed if the leader's DC fails. Even if the leaders are pinned to one DC and that DC fails, the majority of the replicas still survive and will elect new leaders after the failure, thanks to the Raft consensus algorithm. This process is automatic and requires no manual intervention. The service remains available, with only slight performance degradation.

Related

How to distribute yb-masters in multi-region deployment in YugabyteDB

[Question posted by a user on YugabyteDB Community Slack]
In terms of the number of yb-masters, there should be as many as the replication factor. My question is, is having masters and tservers running on all the nodes a bad policy?
And if we have a multi-DC deployment, should we have at least 1 master in each DC?
I guess the best is to place the yb-master leader in the DC that is going to carry the main workload (if there is one), right?
It's perfectly normal to colocate a yb-tserver and a yb-master on the same server. But in large deployments, it's better for them to be on separate servers to split the workloads (so heavy usage of the yb-tserver won't interfere with the yb-master).
And if you have a multi-DC deployment, then you should deploy one master in each region, so that you have region failover for the yb-masters too.
For YB to be usable, you need 2 out of 3 masters available, so with a 2-DC setup you indeed cannot build a topology that is always available, because you have to put 2 masters in one DC and 1 in the other. So the only solution for high availability is 3 DCs.
Do 3 DCs with the same number of nodes in each DC, so you will end up with a total of 3, 6, 9, etc. nodes. A master should be in each DC; otherwise you will again lose resilience.
I guess the best is to place the yb-master leader in the DC that is going to carry the main workload (if there is one), right?
In this case you can set 1 region/zone as the preferred one and the database will try to put leaders there automatically using set-preferred-zones https://docs.yugabyte.com/latest/admin/yb-admin/#set-preferred-zones
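A minimal sketch of that command; the master addresses and the cloud.region.zone value are placeholders for your own placement information:

```
# Ask the yb-masters to prefer placing tablet leaders in one zone.
yb-admin \
  -master_addresses master1.example:7100,master2.example:7100,master3.example:7100 \
  set_preferred_zones aws.eu-west-1.eu-west-1a
```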

Frequent Compaction of OpsCenter.rollup_state on all the nodes consuming CPU cycles

I am using DataStax Cassandra 4.8.16, with a cluster of 8 DCs and 5 nodes in each DC, running on VMs. For the last couple of weeks we have observed the performance issues below:
1) Increased drop counts on the VMs.
2) LOCAL_QUORUM not achieved for some write operations.
3) Frequent compactions of OpsCenter.rollup_state and system.hints visible in OpsCenter.
I would appreciate any help finding the root cause of this.
The presence of dropped mutations means the cluster is heavily overloaded. It could be an increase in the main load which, combined with the load from OpsCenter, overloads the system. You need to look at statistics on the number of requests, latencies, etc., per node and per table, to see where the increase happened. Please also check the I/O statistics on the machines (for example, with iostat): queue sizes, read/write latencies, etc.
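A sketch of the kind of checks meant here, run on each node; note that the per-table command is cfstats on the Cassandra 2.1 line that DSE 4.8 is based on (tablestats in newer releases):

```
# Dropped messages and thread pool backlog on this node.
nodetool tpstats

# Per-table request counts and latencies, to see which tables got hotter.
nodetool cfstats

# Coordinator-level read/write latency distribution.
nodetool proxyhistograms

# Disk queue sizes and read/write latencies, sampled every 5 seconds.
iostat -x 5
```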
It is also recommended to use a dedicated cluster to store OpsCenter metrics; it can be of smaller size and does not require an additional DSE license. As the OpsCenter documentation says:
Important: In production environments, DataStax strongly recommends storing data in a separate DataStax Enterprise cluster.
Regarding VMs: it is usually not the recommended setup, but this depends heavily on the underlying hardware: the number of CPUs, the RAM, and the disk subsystem.

Will CONSISTENCY TWO be affected by low remote DC latency if all local replicas are up?

Scenario: we have a DSE 5.0 cluster in AWS with 2 DCs, and a keyspace with 3 replicas in Australia and 3 replicas on the US West Coast. The app talks to DSE via the DSE Java driver.
For our users in Sydney, if we use LOCAL_QUORUM, response times as measured at the client are under 90 ms. This is good, but if 2 local replicas are too slow (which happened during a nasty repair caused by the analytics cluster), we are down.
If we use QUORUM, we can lose 2 nodes locally without going down, but our response times are over 450 ms at all times, because every read needs at least one answer from the remote DC.
My question is: will using CL TWO (which is enough for our case) suffer the same latency cost as QUORUM if all 3 of our local replicas are healthy and behaving?
Our end goal is to have low latency while still failing over automatically and eating the latency cost if the local DC fails.
If it makes any difference, we are using DCAwareRoundRobin in the driver.
The DCAwareRoundRobin policy provides round-robin queries over the nodes of the local data center. It also includes in the returned query plans a configurable number of hosts in the remote data centers, but those are always tried after the local nodes. In other words, this policy guarantees that no host in a remote data center will be queried unless no host in the local data center can be reached.
CONSISTENCY TWO returns the most recent data from two of the closest replicas (see the documentation on consistency in Cassandra).
To obtain minimal latency in Scylla/Cassandra in a multi-DC deployment, you need to use the locality aspect of the driver.
The challenge with CL=TWO is that it takes the closest responses from the nearest replicas based on your snitch configuration.
To my understanding, this means the coordinator's request is sent to replicas without the locality aspect, so you would be charged for the egress traffic on both sides of the pond: once for the request and once for the actual data traffic coming from the replicas.
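One way to settle this empirically is to trace a CL TWO read from a Sydney client and see which replicas actually get contacted. A sketch using cqlsh (host, keyspace, table, and key are placeholders; you can also run the same commands interactively in a cqlsh session):

```
cqlsh sydney-node.example -e "
  CONSISTENCY TWO;
  TRACING ON;
  SELECT * FROM my_ks.my_table WHERE id = 42;
"
# The trace output lists every node that handled the request; if only
# local-DC replicas appear, the read stayed in the local data center.
```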

Practicality of having a single-node Cassandra multi-site cluster (3-way)

Is it possible to build a Cassandra cluster with a single-node DC plus 2 remote DCs that also have a single node each, assuming the replication factor is required to be 3 in this case? The remote DCs are in the same geographical area, but not the same building, for HA. Or are there any hard rules that high availability and consistency require a local quorum of nodes?
Our setup is small compared to typical big data deployments and is mostly used to store time-series data at approximately 2000-3000 samples per second (on different keys).
Are there other implications besides reads/writes possibly being slow due to the communication delay?
Disclaimer: I am new to Cassandra.
It turns out I want to deploy a similar setup: 3 nodes on AWS, each in its own AZ (but all in the same region). From what I read, this setup is just a single DC with 3 nodes.
You need to use Ec2Snitch to reduce the latency between your clients and the nodes.
Using RF=3 provides you with the HA that you need, since every node has all the data.
Inter-AZ communication should be fairly fast. refer to this: http://highscalability.com/blog/2016/8/1/how-to-setup-a-highly-available-multi-az-cassandra-cluster-o.html
Because you'll be running in a single DC, LOCAL_QUORUM == QUORUM. So as long as you write at QUORUM (which requires 2 of the 3 nodes/AZs to be up), you'll be strongly consistent and highly available.
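For completeness, a sketch of the keyspace definitions for both layouts discussed here; the keyspace and DC names are made up and must match what your snitch reports:

```
# Single DC with 3 nodes/AZs and 3 replicas (the AWS setup described above).
cqlsh node1.example -e "
  CREATE KEYSPACE IF NOT EXISTS ts_data
  WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3};
"

# The original three-single-node-DC layout: one replica per DC, RF=3 overall.
cqlsh node1.example -e "
  CREATE KEYSPACE IF NOT EXISTS ts_data
  WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 1, 'dc2': 1, 'dc3': 1};
"
```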

What happens when an entire Cassandra cluster goes down

I have a Cassandra cluster with 3 nodes and a replication factor of 2. What would happen if the entire Cassandra cluster goes down at the same time? How can reads and writes be managed in this situation, and what would be the best consistency level so that I can manage my Cassandra nodes for high availability? As of now I'm using QUORUM.
If all nodes of your cluster are down, the cluster is down.
When you need HA, think of deploying more than one datacenter, so availability can be maintained even when an entire datacenter/rack goes down.
If you can live with stale data, you could use CL.ONE instead - you need only one node to respond.
More replicas also increase availability for CL.QUORUM: you need RF/2+1 of your replicas alive (integer division). With RF=2 that is 2/2+1 = 2, so all your replicas need to be online. With RF=3 you still only need 2, as 3/2+1 = 2, so now you can have one node down.
As for your writes: all acknowledged writes will be written to disk in the commitlog (if there is no caching issue on your disks) and replayed when the nodes come back online. Of course there may be a race condition where the changes are written to disk but not acknowledged over the network.
Remember to set up NTP!
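A small sketch of how to verify the commitlog durability settings and clock synchronization mentioned above; the paths assume a package install, so adjust them to your environment:

```
# Commitlog sync mode: 'periodic' (default, fsync roughly every 10 s) vs 'batch'.
grep -E 'commitlog_sync|commitlog_sync_period_in_ms' /etc/cassandra/cassandra.yaml

# Confirm the node's clock is actually NTP-synchronized.
ntpq -p        # classic ntpd
timedatectl    # systemd hosts report whether the clock is synchronized
```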
