Cassandra - Every command issued in CQLSH throws errors

Cassandra is giving me a serious headache. Yesterday everything was running fine; then I dropped a table, ran a CQLSSTableWriter (which threw errors several times about my Lucene index not being on the classpath, or something along those lines), and now every command I issue in cqlsh throws errors.
CREATE KEYSPACE IF NOT EXISTS mydata WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'};
takes a while and then throws:
Warning: schema version mismatch detected, which might be caused by DOWN nodes;
if this is not the case, check the schema versions of your nodes in system.local and system.peers.
OperationTimedOut: errors={}, last_host=XXX.XXX.XXX.20
After that I create a new table, and it also throws the same error.
cqlsh:mydata> create table test (id text PRIMARY KEY, id2 text);
Warning: schema version mismatch detected, which might be caused by DOWN nodes; if this is not the case, check the schema versions of your nodes in system.local and system.peers.
OperationTimedOut: errors={}, last_host=XXX.XXX.XXX.20
last_host always shows the IP of the host I run cqlsh on. I have tried the same commands from different nodes too.
The keyspace and table are still being created, however! Since the error says something about mismatching schema versions, I made sure and ran:
nodetool describecluster
The output shows that all my nodes are on the same schema version; no schema mismatches. I also ran nodetool resetlocalschema earlier, without any luck.
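For reference, the schema versions that the warning points at can also be checked directly from cqlsh; a minimal sketch using the standard system tables (every row should report the same UUID when the cluster agrees on the schema):
SELECT schema_version FROM system.local;
SELECT peer, schema_version FROM system.peers;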
When I go ahead and insert some data into the newly created table, the following error arises. Note that the insert statement itself does not return an error.
cqlsh:mydata> insert into test(id, id2) values('test1', 'test2');
cqlsh:mydata> select * from mydata.test ;
Traceback (most recent call last):
File "/usr/bin/cqlsh.py", line 1314, in perform_simple_statement
result = future.result()
File "/usr/share/cassandra/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/cluster.py", line 3122, in result
raise self._final_exception
Unavailable: code=1000 [Unavailable exception] message="Cannot achieve consistency level ONE" info={'required_replicas': 1, 'alive_replicas': 0, 'consistency': 'ONE'}
Note that I have one datacenter and five nodes. I do not plan to use more than one datacenter in the future. [cqlsh 5.0.1 | Cassandra 3.0.8 | CQL spec 3.4.0 | Native protocol v4]
I have also restarted Cassandra multiple times. nodetool status shows that all nodes are up and running. Does anyone have a clue about what's going on?

I fixed this by...
dropping all tables in the keyspace
running alter keyspace mydata WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': '1'}; instead of SimpleStrategy
restarting the cassandra service on all nodes
recreating all tables
running nodetool repair
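For reference, the CQL side of the steps above boiled down to roughly this sketch (mydata, test and dc1 are just the names from my setup; the datacenter name has to match what nodetool status reports):
DROP TABLE mydata.test;
ALTER KEYSPACE mydata WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': '1'};
followed by restarting Cassandra on every node, recreating the tables, and running nodetool repair mydata.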
Now I am able to insert and query data again. To be honest, though, I'm still not quite sure what caused all of this.

Related

Cassandra drop keyspace, tables not working

I am getting started with Cassandra and have run into some problems. I create keyspaces and tables to play around with; if I drop them and then run DESCRIBE KEYSPACE, they still show up. Other times I drop them and Cassandra tells me they don't exist, but I can't create them either because it says they already exist.
Is there a way to clear that "cache" or something similar?
I would also like to know whether I can execute a .cql file from my computer through cqlsh.
[cqlsh 5.0.1 | Cassandra 3.11.0 | CQL spec 3.4.4 | Native protocol v4]
This may be due to the eventually consistent nature of Cassandra. If you are on a small test cluster and just playing around, you could try running CONSISTENCY ALL in cqlsh, which will force the nodes to become consistent.
You should always run DELETE or DROP commands manually with CONSISTENCY ALL so that the change is reflected on all nodes and DCs. You also need to wait a moment for it to replicate across the cluster. Once it has replicated you will no longer see the deleted data; otherwise you need to run a repair on the cluster.
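A rough sketch of that (the keyspace name is only a placeholder):
CONSISTENCY ALL;
DROP KEYSPACE my_demo_keyspace;
As for the second part of the question, a local .cql file can be executed either from the shell with cqlsh -f /path/to/script.cql or from inside cqlsh with SOURCE '/path/to/script.cql'.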

Lost data after running nodetool decommission

I have a 3 node cluster with 1 seed and nodes in different zones, all running in GCE with GoogleCloudSnitch.
I wanted to change the hardware on each node, so I started by adding a new seed in a different region, which joined the cluster perfectly. Then I ran "nodetool decommission" on a node and, once it finished and the node was down, I removed it; "nodetool status" showed it was no longer in the cluster. I did this for all nodes, and lastly for the "extra" seed in the different region, just to remove it and get back to a 3 node cluster.
We lost data! What could the problem be? I found a command, "nodetool rebuild", which I ran and actually got some data back. "nodetool cleanup" didn't help either. Should I have run "nodetool flush" prior to "decommission"?
At the time of running "decommission" most keyspaces had ..
{'class' : 'NetworkTopologyStrategy', 'europe-west1' : 2}
Should I have first altered the keyspaces to include the new region/datacenter, which would be 'europe-west3' : 1 since only one node exists in that datacenter? I also noted that some keyspaces in the cluster mistakenly had ..
{ 'class' : 'SimpleStrategy', 'replication_factor' : 1 }
Could this have caused the loss of data? It seems that the data was lost in the "SimpleStrategy keyspaces".
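A quick way to double-check which keyspaces use which strategy (system_schema.keyspaces is the built-in schema table in Cassandra 3.x):
SELECT keyspace_name, replication FROM system_schema.keyspaces;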
(Disclaimer: I'm a ScyllaDB employee)
Did you first add new nodes to replace the ones you were decommissioning, and configure the keyspace replication strategy accordingly? (You only mentioned the new seed node in your description; you did not say whether you did this for the other nodes.)
Your data loss could very well be a result of the following:
Not altering the keyspaces to include the new region/zone with the proper replication strategy and replication factor (a sketch of this follows below).
Keyspaces that were configured with SimpleStrategy (not network-aware) and a replication factor of 1. This means the data was stored on only one node, and once that node went down and was decommissioned, you basically lost the data.
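A sketch of what the keyspace alteration could have looked like before decommissioning (mykeyspace is a placeholder; the datacenter names are the GCE regions from the question and must match what the snitch reports):
ALTER KEYSPACE mykeyspace WITH replication = {'class': 'NetworkTopologyStrategy', 'europe-west1': 2, 'europe-west3': 1};
followed by running nodetool rebuild europe-west1 on the nodes of the new datacenter so that they stream the existing data.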
Did you by any chance take snapshots and store them outside your cluster? If you did, you could try to restore them.
I would highly recommend reviewing these procedures for a better understanding and for the proper way to perform what you intended:
http://docs.scylladb.com/procedures/add_dc_to_exist_dc/
http://docs.scylladb.com/procedures/replace_running_node/

Why reads fail with a cqlsh query when huge numbers of tombstones are present

I have a table with a huge number of tombstones. When I ran a Spark job (which reads) against that specific table, it returned results without any issues. But when I executed the same query using cqlsh, it gave me an error because of the huge number of tombstones present in that table.
Cassandra failure during read query at consistency ONE (1 replica needed but 0 replicas responded, 1 failed)
I know the tombstones should not be there, and I can run a repair to get rid of them, but apart from that, why did Spark succeed while cqlsh failed? They both use the same sessions and queries.
How does the spark-cassandra-connector work? Is it different from cqlsh?
Please let me know.
Thank you.
The Spark Cassandra Connector is different from cqlsh in a few ways:
It uses the Java driver, not the Python driver.
It has significantly more lenient retry policies.
It performs full table scans by breaking the request up into pieces (see the sketch below).
Any of these items could contribute to why the query works in the SCC and not in cqlsh.
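To illustrate the last point, a rough sketch of the kind of token-range sub-query the connector issues (keyspace, table and column names are just examples; the token bounds are arbitrary):
SELECT id, id2 FROM mykeyspace.mytable WHERE token(id) > -9223372036854775808 AND token(id) <= -9100000000000000000;
Each such slice reads far less data per request than a single unbounded SELECT *, and combined with the more lenient retry policy it is less likely that any one request hits a timeout or failure.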

Cassandra cluster simple query error

I'm learning Cassandra and I have a problem. I have a cluster with 2 computers (node A and node B). On one computer I can create new users and keyspaces, and on the other one I can use those users and keyspaces. But if I create a new table on either of these computers (inside Cassandra, in a keyspace), I can't see the new table with a simple query statement like SELECT * FROM table or SELECT * FROM keyspace.table. Cassandra displays this error: "ServerError: <ErrorMessage code=0000 [Server error] message='java.lang.AssertionError'>"
If I use nodetool status on node A (node + seed), it displays an error:
java.lang.RuntimeException: No nodes present in the cluster. Has this node finished starting up?
But if I use nodetool status on node B (node only), it displays one node: node B.
Keyspace statement:
CREATE KEYSPACE demo WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
Cassandra 3.2 is installed on Debian.
What can I do? Any ideas? I can't fix it.

Cassandra: Insert fails for consistency level "Quorum"

We get the error "cannot achieve consistency level QUORUM" (details below) in the following configuration:
Two datacenters with 6 nodes each, all nodes on the same rack.
It works when the CL is set to LOCAL_QUORUM.
Basically, as soon as we use a consistency level that requires cross-DC consistency, inserts fail. The "nodetool status" command shows that all 12 nodes are up and running.
What can be wrong?
Your help is much appreciated!
Thanks
Dimitry
Keyspace
CREATE KEYSPACE test6 WITH replication = {'class': 'NetworkTopologyStrategy', 'CentralUS': '3', 'EastUs': '3'} AND durable_writes = true;
Query
INSERT INTO glsitems (itemid,itemkey) VALUES('1', 'LL');
Error
cassandra-driver-2.7.2\cassandra\cluster.py", line 3347, in result
raise self._final_exception
Unavailable: code=1000 [Unavailable exception] message="Cannot achieve consistency level QUORUM" info={'required_replicas': 4, 'alive_replicas': 3, 'consistency': 'QUORUM'}
It could be that Cassandra thinks all nodes are in the same datacenter. In that case LOCAL_QUORUM would always work properly, but QUORUM would not.
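A quick check on the numbers (assuming QUORUM is computed over the sum of the replication factors of all datacenters): floor((3 + 3) / 2) + 1 = 4, which matches 'required_replicas': 4 in the error, while 'alive_replicas': 3 is exactly one datacenter's worth of replicas.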
Did you correctly configure the snitch?
Snitch – For multi-data center deployments, it is important to make sure the snitch has complete and accurate information about the network, either by automatic detection (RackInferringSnitch) or details specified in a properties file (PropertyFileSnitch).
You can find which snitch is used in the cassandra.yaml file, under the endpoint_snitch property.
See the DataStax documentation about the existing snitches for Cassandra 2.0.
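For reference, a minimal sketch of the relevant settings, assuming GossipingPropertyFileSnitch is used (one common choice; the datacenter names must match the ones in the keyspace definition, e.g. CentralUS and EastUs):
endpoint_snitch: GossipingPropertyFileSnitch in cassandra.yaml, and on every node a cassandra-rackdc.properties along the lines of
dc=CentralUS
rack=rack1
with the nodes in the other datacenter using dc=EastUs. Treat this only as an illustration; changing the snitch or datacenter names on a live cluster needs care.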
