Does Cassandra's CCM tool only support one keyspace? - cassandra

I am working with a cluster I created with ccm. We are using 3 tables in each of 2 keyspaces, so 6 tables in total. The problem I was having is that it let me create one table in one keyspace and 2 in the other, but even after I removed my
IF NOT EXISTS
check, it would give me an error saying the table already exists. It seems that the create is ignoring the fact that these tables are supposed to be in 2 separate keyspaces.
These are the same CQL script files that we run against our dev cloud Cassandra cluster, so I know it's not an issue with the scripts. Besides, the create statements are pretty simple and straightforward.
So does CCM only support one keyspace? If so, that seems like a pretty big limitation and makes it much, much less useful, if we can even use it at all for our local dev and testing purposes.
Thanks!

The answer to your question is: no, CCM is not limited to a single keyspace.
CCM doesn't build any restrictions like that into the cluster at all. Under the covers it is just a set of Python scripts for configuring and launching a Cassandra cluster on a single machine.
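For example, here is a minimal sketch showing two keyspaces with identically named tables coexisting in a ccm cluster (the version number, keyspace and table names are just placeholders, not taken from your scripts):
ccm create demo -v 3.11.4 -n 3 -s    # create and start a 3-node local cluster
cqlsh 127.0.0.1 -e "CREATE KEYSPACE ks1 WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};"
cqlsh 127.0.0.1 -e "CREATE KEYSPACE ks2 WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};"
cqlsh 127.0.0.1 -e "CREATE TABLE ks1.users (id uuid PRIMARY KEY, name text);"
cqlsh 127.0.0.1 -e "CREATE TABLE ks2.users (id uuid PRIMARY KEY, name text);"    # same table name, different keyspace - no conflict
If a CREATE TABLE still reports that the table already exists, it is worth checking that each script either issues USE <keyspace> first or fully qualifies its table names.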

Related

How to add sstablesplit to an existing Cassandra cluster?

I am a beginner in Cassandra and currently have a small cluster with replication factor 3 and most parameters left at their defaults.
What I noticed the other day is that the SSTables have become absolutely massive (>1TB) and the logs are now starting to complain that they cannot perform a compaction anymore. I've looked into it and decided to switch to LeveledCompactionStrategy, and to perform an sstablesplit on my existing SSTables.
However, at that point I noticed that sstablesplit did not come with my installation of Cassandra. Is there a way of installing just that tool? All the guides I've seen talk about installing the entire DataStax stack, which would probably invalidate my existing cluster or require a great deal of reinstalling, which at the moment I cannot do. The Cassandra installation was not set up by me.
At the same time, LCS is complaining that it cannot perform re-compaction because it is trying to recompact all the SSTables at once, and since they now take up slightly more than 50% of the disk, it can't find enough free space to do so.
If sstablesplit is impossible (or inadvisable), is there any other way to resolve my issue of having several SSTables which are too massive to be re-compacted into more manageable chunks?
Thanks!
sstablesplit is part of the Cassandra codebase, so you can use it even without it being packaged. Putting the cassandra-all jar and the jars under lib/ on the classpath is all you need to run it; that is all the sstablesplit script does: https://github.com/apache/cassandra/blob/trunk/tools/bin/sstablesplit.
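As a rough sketch (the install path, SSTable path, and target size below are placeholders, and the node must be stopped before splitting), the same thing can be run by hand:
CLASSPATH="/path/to/cassandra/lib/*"    # directory containing cassandra-all and the other jars
java -cp "$CLASSPATH" org.apache.cassandra.tools.StandaloneSplitter \
    --no-snapshot -s 50 /path/to/sstables/*-Data.db    # split into ~50 MB chunks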
Is this in AWS or some cloud platform where you can get larger hosts temporarily? The easiest approach is to replace the hosts with new ones that have roughly 2x the disk space, migrate to LCS, then switch back to smaller hosts to keep costs down.
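For the strategy switch itself, the change is a single ALTER per table (the host, keyspace, table name, and target SSTable size here are only examples):
cqlsh my_host -e "ALTER TABLE my_ks.my_table WITH compaction = {'class': 'LeveledCompactionStrategy', 'sstable_size_in_mb': 160};"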

Unable to start DSE using SPARK_ENABLED=1

We are running 6 node cluster with:
HADOOP_ENABLED=0
SOLR_ENABLED=0
SPARK_ENABLED=0
CFS_ENABLED=0
Now, we would like to add Spark to all of them. It seems "adding" is not quite the right term, because if it really were just an addition it would not fail. Anyway, these are the steps we've done:
1. drained one of the nodes
2. changed /etc/default/dse to SPARK_ENABLED=1 and HADOOP_ENABLED=0
3. sudo service dse restart
And got the following in the log:
ERROR [main] 2016-05-17 11:51:12,739 CassandraDaemon.java:294 - Fatal exception during initialization
org.apache.cassandra.exceptions.ConfigurationException: Cannot start node if snitch's data center (Analytics) differs from previous data center (Cassandra). Please fix the snitch configuration, decommission and rebootstrap this node or use the flag -Dcassandra.ignore_dc=true.
There are two related questions that have already been answered:
Unable to start solr aspect of DSE search
Two node DSE spark cluster error setting up second node. Why?
Unfortunately, clearing the data on the node is not an option - why would I do that? I need the data to be intact.
Using "-Dcassandra.ignore_rack=true -Dcassandra.ignore_dc=true" is a bit scary in production. I don't understand why DSE wants to create another DC and why can't it just use the existing one?
I know that according to DataStax's docs one should partition the load using different DCs for different workloads. In our case we just want to run Spark jobs on the same nodes that Cassandra is running on, using the same DC.
Is that possible?
Thanks!
The other answers are correct. The error here is trying to warn you that you have previously identified this node as being in another DC. This means that it probably doesn't have the right data for any keyspaces that use NetworkTopologyStrategy. For example, if you had an NTS keyspace with only one replica in "Cassandra" and changed the DC to "Analytics", you could inadvertently lose all of the data.
This warning and the accompanying flag are telling you that you are doing something that you should not be doing in a production cluster.
The real solution to this is to explicitly name your DCs using GossipingPropertyFileSnitch rather than relying on DseSimpleSnitch, which names the DC based on the DSE workload.
In this case, switch to GossipingPropertyFileSnitch (GPFS) and set the DC name to Cassandra.
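A sketch of the relevant settings (file locations vary by DSE version and install method; these are just examples):
# set the snitch to GossipingPropertyFileSnitch (in cassandra.yaml, or wherever your DSE version configures the snitch):
#   endpoint_snitch: GossipingPropertyFileSnitch
# in cassandra-rackdc.properties on each node, keep the existing DC name:
#   dc=Cassandra
#   rack=rack1
# then restart DSE one node at a time and confirm:
sudo service dse restart
nodetool status    # each node should report under Datacenter: Cassandra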

How to change cassandra standalone mode to distributed

I have installed Cassandra 2.1 in standalone mode on two nodes separately.
Is there any way to change both to distributed mode, or to make both nodes part of one cluster?
Please help.
This is what you're looking for: https://docs.datastax.com/en/cassandra/2.1/cassandra/operations/ops_add_node_to_cluster_t.html
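Roughly, joining the two nodes comes down to making them agree in cassandra.yaml (the cluster name, seed IP, and snitch below are placeholders, not values from the guide):
# on both nodes, in cassandra.yaml:
#   cluster_name: 'MyCluster'               # must be identical on both nodes
#   seed_provider: ... seeds: "10.0.0.1"    # point both nodes at the same seed node
#   listen_address / rpc_address: this node's own IP
#   endpoint_snitch: GossipingPropertyFileSnitch
sudo service cassandra restart              # restart each node after editing
nodetool status                             # both nodes should show as UN in a single ring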
I also suggest taking a look at this hands on training course: https://academy.datastax.com/courses/ds201-cassandra-core-concepts
It's free and definitely worth your time if you're thinking about using Cassandra in production.

What is the best way to test Cassandra applications?

I am currently using Achilles Embedded to spin up a local, temporary Cassandra instance and test my functionality there. While this works to some extent, there seems to be a memory leak: the more tests I run, the more I see messages like PS Scavenge GC in xx ms, and my system slows to a crawl, even freezing the mouse pointer.
So, is there a better way to automatically spin up a small Cassandra instance to run my tests against?
The tool I use for quickly creating a local Cassandra cluster is the ccm (Cassandra Cluster Manager) utility. You can easily create a multi-node cluster on your local machine for any release. See more information here.
I believe some of the Cassandra developers use ccm for their development work, so ccm is kept up to date with the newest releases.
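A minimal sketch of the test lifecycle with ccm (the cluster name and version are just examples):
ccm create test_cluster -v 3.11.4 -n 1 -s    # one local node is usually enough for tests
# ... run the test suite against 127.0.0.1:9042 ...
ccm stop                                     # stop the current cluster
ccm remove                                   # delete its data directories when finished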
I agree, you can use CCM. If you have a test cluster, also try the cassandra-stress tool (either standalone or with a YAML profile). If I am understanding your question correctly, that will solve your problem.
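For example (the node address and counts are placeholders, and stress.yaml would be a profile you define yourself):
cassandra-stress write n=100000 -rate threads=50 -node 127.0.0.1
# or, with a user-defined schema profile:
cassandra-stress user profile=stress.yaml 'ops(insert=1)' n=100000 -node 127.0.0.1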

Set cluster name when using Cassandra CQL/JDBC driver

I'm using the Cassandra CQL/JDBC driver I got from Google Code, but it doesn't seem to let me provide a cluster name - is there a way?
I'm using cluster names to ensure I don't run commands against a live system; it has a different cluster name from my dev systems.
Edit: Just to clarify, I have two totally separate Cassandra clusters, one live and one for test. They have different cluster names to ensure that I don't accidentally run code meant for the test cluster against the live cluster. Therefore any client I use must let me set a cluster name. Hector does this.
There is no built-in protection for checking cluster names in Cassandra clients. The cluster name exists to ensure nodes from different clusters don't try to join together, not to ensure clients connect to the right cluster. It would be possible to add this check to a client (since the cluster name is exposed to clients), but I'm not aware of any clients that do so.
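For instance, a hypothetical pre-flight check in a wrapper script (the host and expected name are placeholders) could query system.local before running anything:
EXPECTED="DevCluster"
if ! cqlsh 10.0.0.5 -e "SELECT cluster_name FROM system.local;" | grep -q "$EXPECTED"; then
    echo "Refusing to run: not connected to $EXPECTED" >&2
    exit 1
fi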
I'd strongly recommend firewalling off your different environments to avoid this kind of mistake. If that isn't possible, you should choose different ports to avoid confusion. Change this with the 'rpc_port' setting in cassandra.yaml.
You'd have to mirror the data on two different clusters; you can't access the same cluster under different names.
To rename your cluster (from the default 'Test Cluster'), edit the Cassandra configuration file found at location/of/cassandra/conf/cassandra.yaml. It's the top line; if you need more details, look at the DataStax configuration documentation.
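For reference, a quick sketch (the path is a placeholder for your install location):
grep cluster_name /path/to/cassandra/conf/cassandra.yaml
# cluster_name: 'Test Cluster'    <- change this value, then restart the node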
