[Question posted by a user on YugabyteDB Community Slack]
Is there a way to go from an RF1 -> RF3 -> RF5 cluster using the default yb-master + yb-tserver components (not using the yugabyted CLI)?
And if that is not possible, how ready is the yugabyted join that is in beta?
The yugabyted CLI doesn't let you change the RF natively. By design, when you start a 1-node cluster it is started as RF1. As you add the second and third nodes, the RF is automatically bumped to 3, and it remains 3 as you add more nodes.
To change the RF, you can use the yb-admin command modify_placement_info.
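As a rough sketch (the master addresses and the cloud.region.zone placement values below are placeholders for your own deployment), moving a 3-node cluster to RF3 could look like this:

yb-admin -master_addresses node1:7100,node2:7100,node3:7100 \
    modify_placement_info aws.us-west.us-west-2a,aws.us-west.us-west-2b,aws.us-west.us-west-2c 3

Keep in mind that the replication factor also applies to the yb-master quorum, so going from RF1 to RF3 generally means running three yb-master processes as well.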
Related
[Question posted by a user on YugabyteDB Community Slack]
Is it possible to have the primary data cluster and read replica be at different versions?
primary data cluster on 2.1 and replica cluster on 2.8
Ideally, no. During software upgrades, different nodes can temporarily be on different releases, but in a steady state the read-replica nodes should be on the same version as the primary cluster.
With xCluster, where the two clusters are really independent clusters linked with an async replication channel, they can be on different releases for extended periods of time.
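For reference, that replication channel is set up with yb-admin pointed at the target cluster; a hedged sketch, with the addresses, source universe UUID, and table IDs as placeholders:

yb-admin -master_addresses <target_master_addresses> \
    setup_universe_replication <source_universe_uuid> <source_master_addresses> <table_id_1>,<table_id_2>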
[Question posted by a user on YugabyteDB Community Slack]
I’m trying to understand failure tolerance in YugabyteDB.
My scenario is as follows:
The universe is set up with a primary data cluster and 1 read replica cluster, max_stale_read_bound_time_ms = 60.
And the primary data cluster got wiped out (lost all data).
Questions:
Would we be able to rebuild the primary data cluster with the read replica cluster?
Can the read replica cluster become the primary data cluster?
The answer is no to both.
xCluster is what you want to use for DR.
The design point for read replicas in YugabyteDB is not DR, but rather to bring data closer to where it is being read from. Also, read replicas have no yb-master of their own; without a yb-master, a read replica cluster cannot function as a cluster on its own.
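For context, applications use a read replica through follower reads; a minimal YSQL sketch, assuming a table named t (staleness is bounded by settings such as the max_stale_read_bound_time_ms flag mentioned above):

SET yb_read_from_followers = true;          -- allow reads to be served by followers/read replicas
SET default_transaction_read_only = true;   -- follower reads apply only to read-only transactions
SELECT * FROM t WHERE id = 1;               -- may be served by the closest replica, within the staleness bound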
[Question posted by a user on YugabyteDB Community Slack]
If I need to restore snapshots from one cluster (4 nodes) to another cluster (3 nodes), how will we do that as per the documentation?
Data of the 1st node should be restored on the 1st node of the other cluster, and similarly for 2 more nodes.
What will we do with the data of the remaining node of the 1st cluster?
Data is snapshotted/restored per tablet. In your case, the new cluster will distribute the tablets across fewer nodes, so each node will have more tablets. You just have to go over all the nodes and restore each tablet according to how the tablets were distributed when you imported the snapshot file. The number of nodes doesn't matter.
More details at: https://docs.yugabyte.com/latest/manage/backup-restore/snapshot-ysql/
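As a rough sketch of that flow (master addresses, snapshot IDs, file names, and the database name are placeholders; import_snapshot prints the old-to-new tablet mapping used for the copy step):

# On the source cluster:
yb-admin -master_addresses <source_masters> create_database_snapshot ysql.yugabyte
yb-admin -master_addresses <source_masters> export_snapshot <snapshot_id> cluster.snapshot
# On the target cluster:
yb-admin -master_addresses <target_masters> import_snapshot cluster.snapshot
# Copy each tablet's snapshot directory to whichever target node hosts the matching new tablet, then:
yb-admin -master_addresses <target_masters> restore_snapshot <new_snapshot_id>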
We are trying to add a node to an existing ring where security is enabled and the default cassandra user has been made non-super. We also altered the keyspace to NetworkTopologyStrategy with replication = number of nodes. The ring is currently on AWS.
Once the new node joins the cluster, the only user we see is the non-super cassandra user, so we are pretty much locked out of the cluster. However, once we remove the newly joined node, all the security we had before comes back.
Are there any best practices we need to follow to enable security in 3.9?
Thanks in advance for helping me out on this!
I have a Cassandra cluster (Datastax open source) and currently there is no authentication configured (i.e., it is using AllowAllAuthenticator), and I want to use PasswordAuthenticator. The official documentation says that I should follow these steps:
enable PasswordAuthenticator in cassandra.yaml,
restart the Cassandra node, which will create the system_auth keyspace,
change the system_auth replication factor,
create a new user and password.
However, this is a big problem for me because the cluster is used in production, so we cannot have any downtime. Between steps 2 and 4 no user has been configured yet, so even if the client supplies a username and password, the request would still be rejected, which is not ideal.
I looked into the Datastax Enterprise doc, and it has a TransitionalAuthenticator class, which would create the system_auth keyspace but without rejecting requests. I wonder if this class can be ported to the open source version? Or if there are other ways around this problem? Thanks
Update
This is the Cassandra version I'm using:
cqlsh 4.1.1 | Cassandra 2.0.9 | CQL spec 3.1.1 | Thrift protocol 19.39.0
You should be able to execute steps 2-4 with just one node and have zero downtime, assuming proper client configuration, replication, and cluster capacity. Then, it's just a rolling restart of the remaining nodes.
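A minimal sketch of steps 3-4, run from cqlsh as the default cassandra/cassandra superuser on that first re-enabled node (the data center name, replication factor, and the new user's name and password are placeholders to adapt):

ALTER KEYSPACE system_auth
  WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'DC1': 3};   -- step 3
CREATE USER app_user WITH PASSWORD 'change-me' NOSUPERUSER;            -- step 4
ALTER USER cassandra WITH PASSWORD 'new-strong-password';              -- lock down the default account

Running nodetool repair on system_auth after changing its replication factor is also commonly recommended, so the credentials are fully replicated before the rolling restart of the remaining nodes.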
Clients should be set up with credentials ahead of time, and they will start using them as nodes with the authenticator enabled come online (this behavior could depend on the driver -- try it out first).
You might be able to manually generate the schema and data for steps 3-4 before enabling the PasswordAuthenticator, but that shouldn't be necessary.
What are your concerns about downtime?