I am using DSE version 5.0.9 and OpsCenter version 6.0.8.
The OpsCenter restore process seems to fail when node-to-node encryption is enabled on the C* nodes. It works successfully when I disable TLS on all nodes and carry out the restore process.
Has anyone faced a similar issue, or is there a workaround to get the restore done successfully without disabling TLS?
Also, sstableloader seems to fail when node-to-node encryption is enabled.
Is the DSE restore process using sstableloader/OpsCenter not feasible with TLS enabled? Any opinions/comments would be appreciated. Thanks in advance.
DataStax OpsCenter engineer here.
This is a known issue and is tracked internally via the ticket IDs DSP-14202 and OPSC-12334; if you have a support contract or access to a sales engineer, they can check the status of these tickets for you. I'm not on the DSE team, but my sense is that progress has been made on this issue and that it should be addressed in the next round of patch releases for DSE.
In the meantime, I think you simply won't be able to use OpsCenter to perform your restores with this configuration. You'd have to either disable node-to-node encryption or do the restores outside of OpsCenter and pass in extra TLS options like:
JVM_OPTS="$JVM_OPTS -Dssl.keystore=$2 -Dssl.enabled=true";
JVM_OPTS="$JVM_OPTS -Dssl.keystore.password=$2";
JVM_OPTS="$JVM_OPTS -Dssl.truststore=$2 -Dssl.enabled=true";
JVM_OPTS="$JVM_OPTS -Dssl.truststore.password=$2";
I'm monitoring a DSE cluster and I see the following problem:
As you can see, it says that the Repair Service is currently failing, and this value keeps going up over time. Can someone explain to me what's happening here? In the OpsCenter logs I can only find this error:
Is this related to the problem?
I have checked the logs and the documentation.
In DSE there are two ways to perform anti-entropy repair:
Traditional Cassandra repair using the nodetool repair command
NodeSync, which is often faster and more intelligent (see this blog post for more details)
But you can't use traditional repair on tables where NodeSync is enabled. So you need to click the settings icon for the Repair Service and disable it for the keyspaces/tables that have NodeSync enabled.
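For reference, a rough sketch of how to check whether NodeSync is enabled on a table and how to turn it off via cqlsh (the keyspace/table names are placeholders; only disable NodeSync if you actually intend to repair that table traditionally instead):

# show the table definition; NodeSync-enabled tables include nodesync = {'enabled': 'true'}
cqlsh -e "DESCRIBE TABLE my_keyspace.my_table"

# disable NodeSync on the table
cqlsh -e "ALTER TABLE my_keyspace.my_table WITH nodesync = {'enabled': 'false'};"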
To add to Alex Ott's excellent response, NodeSync is a new feature in DataStax Enterprise which runs repairs continuously in the background using the same mechanism as read repairs, and it replaces traditional anti-entropy repairs.
The OpsCenter Repair Service will skip repairs on tables which have NodeSync enabled because it isn't possible to run traditional repairs on them as I've explained in this post -- https://community.datastax.com/questions/3879/.
If NodeSync was enabled on a table while a repair on that same table was already scheduled and running, it would explain why you're seeing error messages.
You can stop the errors from being generated by explicitly excluding the keyspace(s) or table(s) from subrange repairs with:
[repair_service]
ignore_keyspaces=ks_name_1,ks_name_2
ignore_tables=ks_name_3.table_name_1,ks_name_3.table_name_2
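If it helps, a sketch of applying that change, assuming a package install where the per-cluster OpsCenter configuration lives under /etc/opscenter/clusters/ (adjust the path and cluster name for your environment):

# add the [repair_service] ignore settings to the cluster-specific config file
sudo vi /etc/opscenter/clusters/<cluster_name>.conf

# restart opscenterd so the Repair Service picks up the change
sudo service opscenterd restart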
Is there a way to disable TDE once it has been enabled in DSE 6.7.7?
We have already followed the steps from https://docs.datastax.com/en/security/6.7/security/secEncryptEnable.html
But we would like to revert the key creation and disable TDE on the tables. What happens to the existing data once we disable TDE? Will we be able to query that data without any issues?
You can actually just ALTER TABLE to change the "WITH COMPRESSION" settings and remove the encryption that was previously configured.
After that, run "nodetool upgradesstables -a keyspace table".
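For illustration, a minimal sketch of what that could look like for a single table (the keyspace/table names and the compressor choice are placeholders; pick the compression settings you actually want going forward):

# switch the table back to a plain compressor, removing the encrypting class
cqlsh -e "ALTER TABLE my_keyspace.my_table WITH compression = {'class': 'LZ4Compressor'};"

# rewrite the existing SSTables so the data on disk is no longer encrypted
nodetool upgradesstables -a my_keyspace my_table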
I want to downgrade my DataStax Enterprise version from 5.0.15 to 4.8.16, so that I can document the rollback process in case of an emergency.
Please help me outline the process.
Before performing an upgrade it's always recommended to take a backup of the existing data; if something goes wrong, you can then restore the data from snapshots. The precise steps will depend on how you performed the backup - via OpsCenter, or with nodetool snapshot.
See the DSE upgrade guide for additional information about upgrading.
P.S. The DataStax support KB has a very good article on "manual" backup/restore - I recommend following it if you won't use OpsCenter for backups.
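As a rough sketch of the manual, snapshot-based approach (run on every node; the snapshot tag below is just an example name):

# take a named snapshot of all keyspaces before the upgrade
nodetool snapshot -t pre_upgrade_5_0_15

# snapshots are hard links under each table's data directory:
#   <data_dir>/<keyspace>/<table>/snapshots/pre_upgrade_5_0_15/
# copy those directories somewhere safe, then clean up when no longer needed
nodetool clearsnapshot -t pre_upgrade_5_0_15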
How do I set up a failover node for Cassandra OpsCenter? The OpsCenter data is stored on the OpsCenter node itself, so to set up a failover node I need to set up a second OpsCenter instance, separate from the current one, and sync the OpsCenter data and config files between the two.
The stomp_interface on the nodes in the cluster points towards OpsCenter_1; how will it change automatically to OpsCenter_2 when failover occurs?
There are steps in the DataStax documentation that cover this in detail. At a minimum:
Mirror the configuration directories stored on the OpsCenter primary to the OpsCenter backup using the method you prefer.
On the backup OpsCenter, in the failover directory, create a primary_opscenter_location configuration file that contains the IP address of the primary OpsCenter daemon to monitor.
The stomp_interface setting on the agents gets changed (and the address.yaml file updated) when failover occurs. This is why the documentation recommends making sure there is no third-party configuration management controlling that file.
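For reference, a minimal sketch of that second step, assuming a package install where the failover directory defaults to /var/lib/opscenter/failover and 10.0.0.10 stands in for the primary OpsCenter's IP:

# on the backup OpsCenter: point it at the primary it should monitor
echo "10.0.0.10" > /var/lib/opscenter/failover/primary_opscenter_location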
Three things:
If you have a firewall enabled, allow the corresponding ports to communicate (61620, 61621, 9160, 9042, 7199).
Always verify that Cassandra is up and running, so the agent can actually connect to something.
Stop the agent, check address.yaml again, then restart the agent.
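If you want to sanity-check the first two points from an agent node, something along these lines can help (the hostname is a placeholder; nc/netcat may need to be installed):

# can the agent reach opscenterd's stomp/agent ports?
nc -zv opscenter_host 61620
nc -zv opscenter_host 61621

# is the local Cassandra/DSE node listening for CQL and JMX?
nc -zv localhost 9042
nc -zv localhost 7199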
I have a Cassandra cluster (DataStax open source) and currently there is no authentication configured (i.e., it is using AllowAllAuthenticator), and I want to use PasswordAuthenticator. The official documentation says that I should follow these steps:
1. Enable PasswordAuthenticator in cassandra.yaml
2. Restart the Cassandra node, which will create the system_auth keyspace
3. Change the system_auth replication factor
4. Create a new user and password
However, this is a big problem for me because the cluster is used in production, so we cannot have any downtime. Between steps 2 and 4 no user has been configured yet, so even if the client supplies a username and password, the request would still be rejected, which is not ideal.
I looked into the DataStax Enterprise docs, and they describe a TransitionalAuthenticator class, which would create the system_auth keyspace without rejecting requests. I wonder if this class can be ported to the open source version, or if there are other ways around this problem? Thanks
Update
This is the Cassandra version I'm using:
cqlsh 4.1.1 | Cassandra 2.0.9 | CQL spec 3.1.1 | Thrift protocol 19.39.0
You should be able to execute steps 2-4 with just one node and have zero downtime, assuming proper client configuration, replication, and cluster capacity. Then, it's just a rolling restart of the remaining nodes.
Clients should be set up with credentials ahead of time, and they will start using them as nodes with authentication enabled come online (this behavior could depend on the driver -- try it out first).
You might be able to manually generate the schema and data for steps 3-4 before engaging the PasswordAuthenticator, but that shouldn't be necessary.
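For reference, a minimal sketch of steps 2-4 on that first node, assuming Cassandra 2.0 / cqlsh 4.1 as in the question (the data center name, replication factor, and application user credentials are placeholders):

# 2. enable the authenticator on one node and restart it
#      cassandra.yaml:  authenticator: PasswordAuthenticator
sudo service cassandra restart

# 3-4. log in with the default superuser, raise system_auth replication,
#      and create the application user your clients are configured with
cqlsh -u cassandra -p cassandra <<'EOF'
ALTER KEYSPACE system_auth
  WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3};
CREATE USER app_user WITH PASSWORD 'change_me' NOSUPERUSER;
EOF

# make sure the auth data is consistent across replicas
nodetool repair system_auth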
What are your concerns about downtime?