Running queries on a MemSQL table returns the following error:
Error Code: 1777. Partition db_name:0 has no master instance.
I understand MemSQL Enterprise avoids this error in the first place; however, I have tried dropping the table, truncating data, and restarting the cluster several times, and I still get the same error. Any ideas how to fix it?
Does "SHOW LEAVES" show all leaf nodes as online? With redundancy 1 you need all leaves online to run queries. Run REMOVE LEAF if you want to permanently remove an offline leaf from the cluster.
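The checks above can be sketched as a session on the master aggregator. This is a hedged sketch: the leaf host/port are examples, and REBALANCE PARTITIONS is assumed here as the way to recreate the missing partitions after removing a dead leaf (at redundancy 1 the data on that leaf is lost either way):

```sql
SHOW LEAVES;                      -- every leaf should report State = 'online'
REMOVE LEAF 'leaf-host':3306;     -- only if that offline leaf is gone for good
REBALANCE PARTITIONS ON db_name;  -- recreate/redistribute partitions for the database
```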
I migrated my existing data from a 4-node Cassandra cluster (with RF=3) to Elassandra, and after putting in my mappings the whole data set got indexed into Elassandra. After indexing completed, all nodes showed consistent results in the /_cat/indices?v API. But as soon as I restart any node, the data on that node is reduced substantially, in index size as well as in the number of records. If I restart another node of the cluster, the problem shifts to that node and the previous node recovers automatically. For more details and the full use case, please refer to the issue I have created with Elassandra.
Upgrading to Elassandra v6.8.4.3 resolved the problem. Thanks!
After adding/removing tables and views in a keyspace I got problems with inconsistency and errors referring to previously deleted tables. We tried to restart the cluster nodes, only resulting in the nodes not starting due to java.lang.IllegalArgumentException: Unknown CF.
The current error is thrown from a view that refers to a non-existing table (the table does exist but has a new id). Is it possible to somehow fix this while Cassandra is not running?
It may be that you have a schema mismatch. First verify that you're running the same schema version everywhere with nodetool describecluster, and make sure all nodes are reachable.
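A quick way to check that is to count the distinct schema versions describecluster reports. This is a sketch that assumes the usual 2.x/3.x text output, where each version shows up as a "<uuid>: [ip, ...]" line under "Schema versions:"; anything other than 1 means the nodes disagree:

```shell
nodetool describecluster |
  awk '/Schema versions:/ {in_versions=1; next}
       in_versions && /:/ {n++}
       END {print n+0 " schema version(s)"}'
```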
The only other time I've seen something like this is when you have corrupt data on a node, in which case you'll want to nodetool removenode the appropriate node and provision a new one.
As an aside, MATERIALIZED VIEWS are deprecated in 3.11 and will not be supported going forward. I would suggest that you roll your own.
I have a six-node cluster running Cassandra 2.1.6. Yesterday I tried to drop a column family and received the message "Column family ID mismatch". I tried running nodetool repair, but after the repair was complete I got the same message. I then tried selecting from the column family but got the message "Column family not found". I ran the following query to get a list of all column families in my schema:
select columnfamily_name from system.schema_columnfamilies where keyspace_name = 'xxx';
At this point I received the message "Keyspace 'system' not found." I tried the command DESCRIBE KEYSPACES and, sure enough, system was not in the list of keyspaces.
I then tried nodetool resetlocalschema on one of the nodes missing the system keyspace, and when that failed to resolve the problem I tried nodetool rebuild, but got the same messages after the rebuild was complete. I tried stopping the nodes missing the system keyspace and restarting them; once the restart was completed, the system keyspace was back and I was able to execute the above query successfully. However, the table I had tried to drop previously was not listed, so I tried to recreate it and once again received the message Column family ID mismatch.
Finally, I shutdown the cluster and restarted it... and everything works as expected.
My questions are: How/why did the system keyspace disappear? What happened to the data being inserted into my column families while the system keyspace was missing from two of the six nodes? (My application didn't seem to have any problems.) Is there a way I can detect problems like this automatically, or do I have to check up on my keyspaces manually each day? Is there a way to fix the missing system keyspace and/or the Column family ID mismatch without restarting the entire cluster?
EDIT
As per Jim Meyers' suggestion, I queried the cf_id on each node of the cluster and confirmed that all nodes return the same value.
select cf_id from system.schema_columnfamilies where columnfamily_name = 'customer' allow filtering;
cf_id
--------------------------------------
cbb51b40-2b75-11e5-a578-798867d9971f
I then ran ls on my data directory and can see that there are multiple entries for a few of my tables:
customer-72bc62d0ff7611e4a5b53386c3f1c9f9
customer-cbb51b402b7511e5a578798867d9971f
My application dynamically creates tables at run time (always using IF NOT EXISTS); it seems likely that the application issued the same create table command on separate nodes at the same time, resulting in the schema mismatch.
Since I've restarted the cluster everything seems to be working fine.
Is it safe to delete the extra file?
i.e. customer-72bc62d0ff7611e4a5b53386c3f1c9f9
The cause of this problem is a CREATE TABLE statement collision. Do not generate tables dynamically from multiple clients, even with IF NOT EXISTS. The first thing you need to do is fix your code so that this does not happen. Just create your tables manually from cqlsh, allowing time for the schema to settle. Always wait for schema agreement when modifying the schema.
Here's the fix:
1) Change your code to not automatically re-create tables (even with IF NOT EXISTS).
2) Run a rolling restart to ensure the schema matches across nodes. Run nodetool describecluster around your cluster and check that there is only one schema version.
ON EACH NODE:
3) Check your filesystem and see if you have two directories for the table in question in the data directory.
IF THERE ARE TWO OR MORE DIRECTORIES:
4) Identify from system.schema_columnfamilies which cf ID is the "new" one (currently in use).
cqlsh -e "select * from system.schema_columnfamilies" | grep <table_name>
5) Move the data from the "old" one to the "new" one and remove the old directory.
6) If there are multiple "old" ones repeat 5 for every "old" directory.
7) Run nodetool refresh.
IF THERE IS ONLY ONE DIRECTORY:
No further action is needed.
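The per-node steps 3)-7) can be sketched as a small sh helper. This is an illustration, not a definitive tool: merge_old_table_dirs is a hypothetical function name, the keyspace path and table name below are the examples from this question, and the cf ID argument is the current cf_id with the dashes removed (the format Cassandra uses for directory names). Stop writes to the table while doing this, and run nodetool refresh afterwards.

```shell
# Move sstables from stale "<table>-<old_id>" directories into the current
# "<table>-<new_id>" directory, then remove the emptied stale directories.
merge_old_table_dirs() {   # usage: merge_old_table_dirs <keyspace_dir> <table> <current_cf_id_without_dashes>
  ks_dir=$1; table=$2; new_id=$3
  for d in "$ks_dir/$table"-*; do
    [ -d "$d" ] || continue                    # skip if nothing matched the glob
    case "$d" in *"$new_id") continue ;; esac  # skip the live directory
    mv "$d"/* "$ks_dir/$table-$new_id"/        # step 5: move old data into the new dir
    rmdir "$d"                                 # remove the now-empty old directory
  done
}

# Example invocation (names from this question; adjust to your cluster):
# merge_old_table_dirs /var/lib/cassandra/data/xxx customer cbb51b402b7511e5a578798867d9971f
# nodetool refresh xxx customer
```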
Futures
Schema collisions will continue to be an issue until CASSANDRA-9424 is resolved.
Here's an example of it occurring on Jira and being closed as not a problem: CASSANDRA-8387.
When you create a table in Cassandra it is assigned a unique id that should be the same on all nodes. Somehow it sounds like your table did not have the same id on all nodes. I'm not sure how that might happen, but maybe there was a glitch when the table was created and it was created multiple times, etc.
You should always use the IF NOT EXISTS clause when creating tables.
To check if your id's are consistent, try this on each node:
In cqlsh, run SELECT cf_id from system.schema_columnfamilies where columnfamily_name = 'yourtablename' allow filtering;
Look in the data directory under the keyspace name the table was created in. You should see a single directory for the table that looks like table_name-cf_id.
If things are correct you should see the same cf_id in all these places. If you see different ones, then somehow things got out of sync.
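The comparison is mechanical: the directory suffix is just the cf_id with the dashes stripped. A tiny sh helper (cfid_to_dirname is a hypothetical name introduced here) that turns a cf_id into the expected directory name:

```shell
# Build the expected data-directory name "<table>-<cf_id without dashes>"
cfid_to_dirname() {   # usage: cfid_to_dirname <table> <cf_id>
  printf '%s-%s\n' "$1" "$(printf '%s' "$2" | tr -d '-')"
}

# cfid_to_dirname customer cbb51b40-2b75-11e5-a578-798867d9971f
# -> customer-cbb51b402b7511e5a578798867d9971f
```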
As for the other symptoms, like the system keyspace disappearing, I don't have a suggestion other than that you hit some kind of bug in the software. If you get a lot of strange symptoms like this, then perhaps you have some kind of data corruption. You might want to think about backing up your data in case things go south and you need to rebuild the cluster.
I'm seeing some very strange effects after modifying column metadata in a columnfamily after executing the following CQL query: ALTER TABLE keyspace_name.table_name ADD column_name cql_type;
I have a cluster of 4 nodes on two data centers (Cassandra version 2.0.9). I also have two application servers talking to the Cassandra cluster via the datastax java driver (version 2.0.4).
After executing this kind of query I see no abnormal behaviour whatsoever (no exceptions detected at all), however long I wait. But once I restart my application on one of the servers I immediately start seeing errors on the other server. What I mean by errors is that after getting my data into a ResultSet, I try to deserialize it row by row and get 'null' values or values from other columns instead of the ones I expect. After restarting the second server (the one that is getting the errors) everything gets back to normal.
I've tried investigating both the logs of datastax-agent and cassandra on both the servers but there is nothing to be found.
Is there a 'proper procedure' to altering the columnfamily? Does anyone have any idea as to what may be the problem?
Thanks!
The Cassandra service on one of my nodes went down and we couldn't restart it because of some corruption in one of the tables. So we tried rebuilding the node by deleting all the data files and then starting the service; once it showed up in the ring we ran nodetool repair multiple times, but it hung, throwing the same error:
Caused by: org.apache.cassandra.io.compress.CorruptBlockException: (/var/lib/cassandra/data/profile/AttributeKey/profile-AttributeKey-ib-1848-Data.db): corruption detected, chunk at 1177104 of length 11576.
This occurs after 6 GB of data is recovered. Also, my replication factor is 3, so the same data is fine on the other 2 nodes.
I am a little new to Cassandra and am not sure what I am missing; has anybody seen this issue with repair? I have also tried scrubbing, but it failed because of the corruption.
Please help.
rm /var/lib/cassandra/data/profile/AttributeKey/profile-AttributeKey-ib-1848-* and restart.
Scrub should not fail, please open a ticket to fix that at https://issues.apache.org/jira/browse/CASSANDRA.
First use nodetool scrub. If that does not fix it, then shut down the node and run sstablescrub [yourkeyspace] [table]; that will remove the corrupted sstables which nodetool scrub could not handle. Then run a repair and you will be able to figure out the issue.