Generate keyspace system_schema in Cassandra

I ran the command nodetool resetlocalschema on my Cassandra node. It is a two-node cluster.
Now system_schema is unavailable when I run desc keyspaces.
How can I recreate system_schema?

The documentation for nodetool resetlocalschema indicates:
The node temporarily loses metadata about the tables on the node, but rewrites the information from another node.
When you restart the Cassandra service on the affected node, the issue will be fixed.
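If the schema does not resync on its own, restarting the node forces it to pull the schema from the other node again. A minimal sketch, assuming a package install managed by systemd (service name and paths may differ on your setup):

    # on the affected node: restart the Cassandra service
    sudo systemctl restart cassandra

    # once the node is back up, confirm both nodes report the same schema version
    nodetool describecluster

    # verify the keyspaces are visible again
    cqlsh -e "DESC KEYSPACES"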

Related

Cassandra Node details

I am new to Cassandra. I installed Apache Cassandra on Ubuntu 16.04. How do I find out how many nodes my cluster has? Is it only one node by default? What happens during the replication process? Please help, thank you.
Your new installation gives you one node of Cassandra, so at present you would have a single-node cluster.
Once Cassandra is up and running, you can display your cluster topology with the nodetool status command, which prints the cluster information.
http://cassandra.apache.org/doc/latest/tools/nodetool/status.html
With Cassandra, the data is replicated between nodes according to the replication strategies that you define; in the case of a single-node cluster, replication is not really applicable.
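For illustration, this is roughly what the command and its output look like on a single-node cluster (the host ID is shown as a placeholder; the exact columns vary by version):

    $ nodetool status
    Datacenter: datacenter1
    =======================
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address    Load       Tokens  Owns   Host ID     Rack
    UN  127.0.0.1  103.2 KiB  256     100%   <host-id>   rack1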

What are the reasons to restart a Cassandra cluster?

I have only one reason to restart the cluster, listed below.
All the nodes have the same hardware configuration.
1. When I update the cassandra.yaml file.
Are there other reasons?
What you are asking about is a rolling restart of a Cassandra cluster. There are many reasons to restart a Cassandra cluster; I'm just mentioning some below (a rough rolling-restart sequence is sketched after the list):
When you update any value in cassandra.yaml (as you mentioned above).
When a nodetool operation gets stuck. For example, if you run nodetool repair, cancel the command, and it keeps hanging in the background, you will not be able to start another nodetool repair.
When you are adding a new node to the cluster and streaming fails because of the nproc limit. The running nodes can be affected by this and hold on to that state.
When you don't want to use sstableloader and need to restore your data from snapshots. In that case you copy the snapshots into the data directory on each node and do a rolling restart.
When you are upgrading your Cassandra version.
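As mentioned above, a rolling restart is done one node at a time. A rough sketch of the per-node sequence (the service name depends on your installation):

    # repeat on one node at a time
    nodetool drain                       # flush memtables and stop accepting new writes
    sudo systemctl restart cassandra     # or: sudo service cassandra restart

    # wait until this node shows as UN (Up/Normal) before moving to the next one
    nodetool status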

Temporarily change multi node to single node

I have configured Cassandra 3.0.9 on 3 nodes, but I have to use only 1 node for some time. I have disconnected the other 2 nodes from the network and also removed their entries from cassandra.yaml and the rackdc and topology files.
When I check nodetool status it still shows both of the down nodes. When I try to execute any query in cqlsh it gives me the error below:
OperationTimedOut: errors={'127.0.0.1': 'Request timed out while waiting for schema agreement. See Session.execute_async and Cluster.max_schema_agreement_wait.'}, last_host=127.0.0.1
Warning: schema version mismatch detected; check the schema versions of your nodes in system.local and system.peers.
How can I resolve this?
That's not how you remove a node from a Cassandra cluster. In fact, what you're doing is quite dangerous. Typically, you'd use nodetool decommission. If your other two nodes are still intact and just offline, I suggest bringing them back online temporarily and letting decommission do its thing.
I'm also going to throw this out there: it's possible you're missing a good portion of your data with the steps above, unless all keyspaces had RF=3. Cassandra distributes data evenly between the nodes in a DC. The decommission step I mention above redistributes the data.
Now if you don't have the other 2 nodes available to run nodetool decommission, you may have to remove them with nodetool removenode and, in the worst case, nodetool assassinate.
Check these docs for reference and the full steps to removing a node: https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsAddingRemovingNodeTOC.html
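For reference, a sketch of the commands mentioned above (the host ID comes from nodetool status; assassinate is a last resort because it removes the node without re-streaming its data):

    # preferred: run on the node that is leaving, while it is still online
    nodetool decommission

    # if the node is already dead, run from a live node using the dead node's host ID
    nodetool status                      # note the Host ID of the DN (down) node
    nodetool removenode <host-id>

    # last resort if removenode hangs
    nodetool assassinate <ip-address>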

How to clean Cassandra cluster

I had set up a 50-node Apache Cassandra cluster.
I took one node and wanted to install DSE on it and make it a single-node DSE cluster.
I have removed /var/lib/cassandra and /var/log/cassandra.
I have truncated the system.peers table on the single node.
When I start DSE Cassandra on this node, I still see the remaining nodes handshaking and being added to this cluster.
What is the best way to completely remove any traces of the existing Cassandra cluster from this node?
You need to change the cluster_name directive in cassandra.yaml to a name different from the rest of the cluster.
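A fuller sketch of what that isolation could look like, assuming default package-install paths (adjust paths and service names for your environment):

    # stop the service on the node being repurposed
    sudo service dse stop

    # wipe the old cluster's on-disk state
    sudo rm -rf /var/lib/cassandra/data /var/lib/cassandra/commitlog /var/lib/cassandra/saved_caches

    # in cassandra.yaml on this node, set:
    #   cluster_name: <something different from the old cluster>
    #   seeds: only this node's own IP
    # and make sure the old cluster's nodes no longer list this node as a seed

    sudo service dse start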

Drop keyspace is not working

I'm trying to use DataStax Enterprise to deploy a database system.
My cluster:
2 DSE Cassandra nodes
2 DSE Solr nodes
I created a keyspace with cqlsh on one node, but I could not drop that keyspace from the other nodes in the cluster, only from the node that created it. Does anybody know why?
This sounds like https://issues.apache.org/jira/browse/CASSANDRA-5202
I have had to delete the data directory for the troubled keyspace on all nodes and restart DSE to fix it.
Do you have the output from cqlsh on the node that won't allow you to drop the keyspace?
What version of DSE are you using?
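Before retrying the drop, it is also worth checking that all nodes agree on a single schema version; if describecluster lists more than one version, resolve that first (for example with a rolling restart) and then drop the keyspace from any node. A minimal check, with my_keyspace as a placeholder name:

    nodetool describecluster             # all nodes should appear under one schema version

    cqlsh -e "DROP KEYSPACE my_keyspace;"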
