Changing tserver_master_addrs in YugabyteDB without restarting? - yugabytedb

[Question posted by a user on YugabyteDB Community Slack]
The docs on changing cluster config indicate that, when changing yb-master cluster membership, the yb-tserver flag tserver_master_addrs should be updated to reflect the latest list of nodes, but I don't see a clear indication of how best to do so. Is this only possible by terminating each yb-tserver process and rerunning it with a new command-line flag?

There is yb-ts-cli set_flag for updating flags at runtime.
In this case the tserver learns the new master addresses at runtime by itself; you just need to update the yb-tserver config so it has the new masters if it gets restarted.
Also, string flags in general are not safe to change at runtime because the updates are not thread-safe.
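A rough sketch of both steps, assuming placeholder host names and the default ports (9100 for the tserver RPC endpoint, 7100 for the masters); set_flag needs --force for flags that are not marked runtime-settable:
yb-ts-cli --server_address=tserver1:9100 set_flag --force tserver_master_addrs master1:7100,master2:7100,master3:7100
Then persist the same value in that tserver's flag file (or startup command line) so it survives a restart, e.g. --tserver_master_addrs=master1:7100,master2:7100,master3:7100, and repeat for each tserver.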

Related

Apache Ignite 2.9.0 Recover from lost partitions

We have set up an Apache Ignite 2.9.0 cluster with native persistence using Kubernetes in Azure with 4 nodes. To update some cache configuration, we restarted all the Ignite nodes. After the restart, running any SQL query on one particular table results in a restart of 2 Ignite nodes, and after that we see a lost partitions exception.
If we try to restart all nodes to recover from the lost partitions, things are fine until we run any SQL query on that table, after which 2 nodes restart and we get the lost partitions exception again.
Is there any way we can recover from the lost partitions and overcome this problem? We also wanted to understand why it's occurring; we could not find any logs related to it.
When all owners of a partition have left the grid, the partition is considered lost; you might think of this as a special internal marker. Depending on the PartitionLossPolicy, Ignite might ignore this fact and allow cache operations, or disallow them to protect data consistency.
If you use native persistence, then most likely there was no physical data loss, and all you need is to tell Ignite that you are aware of the situation, all data are in place, and it's safe to remove the "lost" mark from the partitions.
I think the simplest way to handle this would be to use the control script from within a pod:
control.sh --cache reset_lost_partitions cacheName1,cacheName2,...
More details:
https://ignite.apache.org/docs/latest/configuring-caches/partition-loss-policy#handling-partition-loss
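For example, from outside the cluster (the pod name, namespace, and Ignite install path below are assumptions; adjust them to your deployment):
kubectl exec -n ignite ignite-0 -- /opt/ignite/apache-ignite/bin/control.sh --cache reset_lost_partitions cacheName1,cacheName2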

Modify master config in YugaByteDb

I'm new to YugabyteDB and I tried to run a 3-node local cluster with ./bin/yb-ctl --rf 3 create. It works well, but YSQL is disabled by default (--enable_ysql=false). Is there any way to change the masters' configuration without running a new yb-master process?
Changing the yb-master config is done using the change_master_config subcommand of the yb-admin CLI. The arguments depend on the type of change. Direct link to the command with arguments:
https://docs.yugabyte.com/latest/admin/yb-admin/#change-master-config
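For example, adding a new master and then removing an old one could look roughly like this (host names are placeholders; 7100 is the default master RPC port):
yb-admin -master_addresses node1:7100,node2:7100,node3:7100 change_master_config ADD_SERVER node4 7100
yb-admin -master_addresses node1:7100,node2:7100,node3:7100,node4:7100 change_master_config REMOVE_SERVER node1 7100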
--enable_ysql=false is the default only in pre-2.0.3 versions. All later versions have it enabled by default.
Changing it at runtime is not possible, only at startup.

How to update configuration of a Cassandra cluster

I have a 3-node Cassandra cluster and I want to make some adjustments to cassandra.yaml.
My question is: how should I perform this? One node at a time, or is there a way to make it happen without shutting down nodes?
By the way, I am using Cassandra 2.2 and this is a production cluster.
There are multiple approaches here:
If you edit the cassandra.yaml file, you need to restart Cassandra to re-read the contents of that file. If you restart all nodes at once, your cluster will be unavailable. Restarting one node at a time is almost always safe (provided you have sane replication factors and consistency levels). If your cluster is configured to survive a rack or datacenter outage, then you can safely restart more nodes concurrently.
Many settings can be changed without a restart via JMX, though I don't have a documentation link handy. Changing a setting via JMX won't change cassandra.yaml, though, so you'll need to update the file as well or your config will revert to what's in the file when the node restarts.
If you're using DSE, OpsCenter's Lifecycle Manager feature makes updating configs a simple point-and-click affair (disclaimer: I'm biased, as I'm an LCM dev).
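If you go the file-edit route, the per-node sequence could look roughly like this, assuming a package install managed as a system service (service name and config path may differ in your environment):
edit /etc/cassandra/cassandra.yaml on the node,
run nodetool drain to flush memtables and stop the node from accepting traffic,
restart the service, e.g. sudo service cassandra restart (or systemctl restart cassandra),
then run nodetool status and wait for the node to show UN (Up/Normal) before moving to the next one.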

Enable Cassandra PasswordAuthenticator while the cluster is up

I have a Cassandra cluster (Datastax open source) and currently there is no authentication configured (i.e., it is using AllowAllAuthenticator), and I want to use PasswordAuthenticator. The official document says that I should follow these steps:
enable PasswordAuthenticator in cassandra.yaml,
restart the Cassandra node, which will create the system_auth keyspace,
change the system_auth replication factor,
create new user and password
However, this is a big problem for me because the cluster is used in production, so we cannot have any downtime. Between steps 2 and 4 no user has been configured yet, so even if the client supplies a username and password the request would still be rejected, which is not ideal.
I looked into the DataStax Enterprise docs, and they have a TransitionalAuthenticator class, which would create the system_auth keyspace without rejecting requests. I wonder if this class can be ported to the open source version, or if there are other ways around this problem? Thanks
Update
This is the Cassandra version I'm using:
cqlsh 4.1.1 | Cassandra 2.0.9 | CQL spec 3.1.1 | Thrift protocol 19.39.0
You should be able to execute steps 2-4 with just one node and have zero downtime, assuming proper client configuration, replication, and cluster capacity. Then it's just a rolling restart of the remaining nodes.
Clients should be set up with credentials ahead of time, and they will start using them as nodes with authentication enabled come online (this behavior could depend on the driver -- try it out first).
You might be able to manually generate the schema and data for steps 3-4 before engaging the PasswordAuthenticator, but that shouldn't be necessary.
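As a sketch of steps 3-4 against the first node you re-enable, using the default cassandra/cassandra superuser (the datacenter name, user name, and password below are placeholders):
cqlsh <node_address> -u cassandra -p cassandra
ALTER KEYSPACE system_auth WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'DC1': 3};
CREATE USER app_user WITH PASSWORD 'change_me' NOSUPERUSER;
Running nodetool repair system_auth on each node afterwards helps keep the auth data consistent while the rolling restart proceeds.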
What are your concerns about downtime?

Changing Snitch on live cluster in datastax 4.5

I have 8 nodes in one region and now I want to add a new node in another region. Presently I'm using Ec2Snitch; after adding the node in the new region I need to change the snitch on all nodes to Ec2MultiRegionSnitch.
Now my question is: will this change impact my currently running cluster? And what would be the best practice for doing this?
Thanks
You should do a rolling restart, changing to the EC2 multi-region snitch, before adding the new node. It should not impact your running cluster, though I would suggest you briefly bring up a test cluster to test making the change.
To perform a rolling restart from OpsCenter:
Click Nodes in the left pane.
From the Cluster Actions dropdown, select Restart in the contextual menu.
Set the amount of time to wait after restarting each node, select whether each node should be drained before stopping, and then click Restart Cluster.
See more details here:
http://www.datastax.com/documentation/opscenter/5.0/opsc/online_help/opscRestartingCluster_t.html
The DataStax documentation for switching snitches is also worth a read; I found it useful when I switched to the GossipingPropertyFileSnitch. I also had to edit cassandra-rackdc.properties on all nodes before doing the rolling restart.
Even though my topology didn't change, I followed the instructions in that documentation: stopped all the nodes, restarted them (starting with the seeds), then ran 'nodetool repair' and 'nodetool cleanup' on all nodes.
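For the GossipingPropertyFileSnitch case mentioned above, the cassandra-rackdc.properties edit is just a couple of lines per node, done before that node's restart (the datacenter and rack names below are placeholders):
dc=DC1
rack=RAC1
Once the whole cluster is back up, 'nodetool repair' and 'nodetool cleanup' are run on every node as described.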
