Drop keyspace is not working - Cassandra

I'm trying to use DataStax Enterprise to deploy a database system.
My cluster:
2 DSE Cassandra nodes
2 DSE Solr nodes
I created a keyspace via cqlsh on one node, but I cannot drop that keyspace from any other node in the cluster; it can only be dropped from the node that created it. Does anybody know why?

This sounds like https://issues.apache.org/jira/browse/CASSANDRA-5202
I have had to delete the data directory for the affected keyspace on all nodes and restart DSE to fix it.
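If you end up having to do that, here is a minimal sketch of the workaround, assuming a package install with the default data location (the service name and paths may differ on your system, and mykeyspace is a placeholder):

sudo service dse stop
sudo rm -rf /var/lib/cassandra/data/mykeyspace
sudo service dse start

Repeat this on every node in the cluster.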

Do you have the output from cqlsh on the node that won't allow you to drop the keyspace?
What version of DSE are you using?

Related

Cassandra Node details

I am new to Cassandra. I installed Apache Cassandra on Ubuntu 16.04. How do I find out how many nodes my cluster has? Is it only one node by default? What happens during the replication process? Please help, thank you.
Your new installation gives you one node of Cassandra, so at present you would have a single-node cluster.
Once Cassandra is up and running, you can display your cluster topology with the nodetool status command, which prints the cluster information.
http://cassandra.apache.org/doc/latest/tools/nodetool/status.html
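For example, on a fresh single-node installation the output looks roughly like this (the address, load, and host ID below are illustrative):

$ nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load       Tokens  Owns (effective)  Host ID        Rack
UN  127.0.0.1  103.5 KiB  256     100.0%            0b24b1fa-...   rack1

The UN prefix means the node is Up and in the Normal state.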
With Cassandra, the data is replicated between nodes according to the replication strategies that you define; in the case of a single-node cluster, replication is not really applicable.

Regenerate the system_schema keyspace in Cassandra

I ran nodetool resetlocalschema on my Cassandra node. It is a two-node cluster.
Now system_schema is unavailable when I do a DESC KEYSPACES.
How can I recreate system_schema?
The documentation for nodetool resetlocalschema indicates:
The node temporarily loses metadata about the tables on the node, but rewrites the information from another node.
When you restart the Cassandra service on the affected node, the issue will be fixed.
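A minimal sketch of that fix, assuming a systemd-managed installation (the service name may differ on your system):

sudo systemctl restart cassandra
nodetool describecluster
cqlsh -e 'DESC KEYSPACES;'

nodetool describecluster lets you verify that the schema versions agree across both nodes again, and system_schema should show up in DESC KEYSPACES afterwards.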

Restore snapshots from 3 node Cassandra cluster to a new 6 node cluster

I am new to Cassandra and would like some help restoring snapshots from a 3-node Cassandra cluster to a new 6-node cluster.
We have a few keyspaces and would like to copy the data from dev to production.
Thanks in advance.
The easiest way is to use the sstableloader tool that is bundled with Cassandra. You can find it in %installdir%/bin/sstableloader.
You will first need to re-create the schema on your new cluster:
dump the schema for the keyspace you want to transfer from your original cluster using cqlsh -e 'DESC KEYSPACE mykeyspace;' > mykeyspace.cql
load it into your new cluster using cqlsh -f mykeyspace.cql.
(optional) If your new cluster will have a different replication configuration, you'll need to modify it manually after loading the schema (ALTER KEYSPACE mykeyspace WITH REPLICATION = ...;), as in the example below.
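For instance, to switch mykeyspace to NetworkTopologyStrategy with three replicas in a datacenter named DC1 (the datacenter name and replication factor here are assumptions; match them to your own topology):

ALTER KEYSPACE mykeyspace WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'DC1': 3};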
Once that's done, you can start bulk-loading the SSTables from your keyspace snapshots into the new cluster:
sstableloader --nodes 10.0.0.1,10.0.0.2 -f /etc/cassandra/cassandra.yaml /path/to/mykeyspace/snapshot/
Note that this might take a while if you have a lot of data to load. You should also run a full repair on the new cluster afterwards to ensure that the replicas are properly distributed.

Migrate Datastax Enterprise Cassandra to Apache Cassandra

We are currently using DSE 4.8 and 5.12. We want to migrate to Apache Cassandra; since we don't use Spark or Search, we thought we could save some money by moving to Apache. Can this be achieved without downtime? I see that sstableloader works the other way around. Can anyone share the steps to follow to migrate from DSE to Apache Cassandra? Something like this, but from DSE to Apache:
https://support.datastax.com/hc/en-us/articles/204226209-Clarification-for-the-use-of-SSTABLELOADER
Figure out what version of Apache Cassandra is being run by DSE. Based on the DSE documentation, DSE 4.8.14 uses Apache Cassandra 2.1 and DSE 5.1 uses Apache Cassandra 3.11.
The simplest way to do this is to build another DC (a logical DC, in Cassandra terms) and add it to the existing cluster.
As usual, run nodetool rebuild {from-old-DC} on the new DC nodes and let Cassandra take care of streaming the data to the new Apache Cassandra nodes naturally.
Once data streaming is complete, switch the applications' local_dc to DC2 (the new DC), according to the LoadBalancingPolicy they use. Once the new DC starts taking traffic, shut down the nodes in the old DC (say, DC1) one by one.
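For example, on each node in the new DC, assuming the old datacenter is named DC1:

nodetool rebuild -- DC1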
Alter the dse_system and dse_security keyspaces so they stop using the DSE-only Everywhere replication strategy (see the sketch after this list).
On non-seed nodes, clean up the Cassandra data directory.
Turn on the replace option in cassandra-env.sh.
Start the instance.
Monitor the streaming process using the command 'nodetool netstats | grep Receiving'.
Change the seed node definition and do a rolling restart before finally migrating the previous seed nodes.
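A sketch of the keyspace alteration and the replace option above, with illustrative values (the datacenter name, replication factor, and IP address are assumptions):

ALTER KEYSPACE dse_system WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'DC1': 3};
ALTER KEYSPACE dse_security WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'DC1': 3};

And in cassandra-env.sh on the replacement node, before its first start:

JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=10.0.0.5"

where 10.0.0.5 stands for the address of the node being replaced.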

How to clean Cassandra cluster

I had set up a 50-node Apache Cassandra cluster.
I took one node and wanted to install DSE on it and make it a single-node DSE cluster.
I have removed /var/lib/cassandra and /var/log/cassandra.
I have truncated the system.peers table on the single node.
When I start DSE Cassandra on this node, I still see the remaining nodes doing handshakes and being added to this cluster.
What is the best way to completely remove any traces of the existing Cassandra cluster from this node?
You need to change the cluster_name directive in cassandra.yaml to a name different from the rest of the cluster.
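For example, in cassandra.yaml (the name itself is arbitrary; it just has to differ from the old cluster's name):

cluster_name: 'My DSE Cluster'

With the data directories already wiped, the node will bootstrap under the new name, and nodes from the old cluster will refuse to handshake with a node whose cluster_name does not match.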
