Cassandra startup problem: Attempt to assign id to existing column family

I'm using Cassandra 0.7.4 on CentOS 5.5 x86_64 with 64-bit JDK 1.6.0_24.
When I restart it, it throws:
ERROR 11:37:32,009 Exception encountered during startup.
java.io.IOError: org.apache.cassandra.config.ConfigurationException: Attempt to assign id to existing column family.
at org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:476)
at org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:138)
at org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:314)
at org.apache.cassandra.thrift.CassandraDaemon.main(CassandraDaemon.java:79)
Caused by: org.apache.cassandra.config.ConfigurationException: Attempt to assign id to existing column family.
at org.apache.cassandra.config.CFMetaData.map(CFMetaData.java:223)
at org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:472)
... 3 more
I tried to locate the problem: when I delete the files of the system keyspace, it restarts successfully!
So I think this problem is caused by the system keyspace, specifically the CF schema.
Then I built a new test environment and found that the problem is caused by this operation:
update keyspace system with replication_factor=3;
But now, how can I repair it?
There is a lot of data on this cluster, and I cannot afford to lose it.
I have already run update keyspace system with replication_factor=1;, but the problem still exists.
I tried using nodetool repair, both before and after a flush, with no effect.
How can I restart Cassandra without losing data? Who can help me?

You should never modify the system keyspace unless you really, really know what you are doing. (If you have to ask, you don't. :)
So, the answer is: don't do that.
To recover, you should set initial_token in cassandra.yaml to your node's current token (which you can see with "nodetool ring"), then delete the system keyspace and restart. Then you'll need to recreate your column family definitions, but your data will not be affected.
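A sketch of that recovery, assuming a default 0.7-era install with data in /var/lib/cassandra/data (the paths and the live-node hostname are assumptions; adjust to your setup):
# 1. Record the node's current token (nodetool ring can be run against any live node)
nodetool -h some_live_node ring
# 2. In cassandra.yaml on the broken node, pin its ring position:
#      initial_token: <token recorded above>
# 3. With Cassandra stopped, delete only the system keyspace data:
rm -rf /var/lib/cassandra/data/system
# 4. Start Cassandra, then recreate the keyspace/column family definitions;
#    the application data files are left untouched.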

Related

Cassandra not starting due to unknown Column Family

After adding/removing tables and views in a keyspace, I got problems with inconsistency and errors referring to previously deleted tables. We tried to restart the cluster nodes, only resulting in the nodes not starting due to java.lang.IllegalArgumentException: Unknown CF.
The current error is thrown from a view that refers to a non-existing table (the table does exist but has a new id). Is it possible to somehow fix this while Cassandra is not running?
It may be that you have a schema mismatch. First verify that you're running the same schema version on all nodes with nodetool describecluster, and make sure all nodes are reachable.
The only other time I've seen something like this is when you have corrupt data on a node, in which case you'll want to nodetool removenode the appropriate node and provision a new one.
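For reference, a rough sketch of those two checks (<host-id> is a placeholder):
# every node should report the same, single schema version
nodetool describecluster
# if one node holds corrupt data, take it out by Host ID and replace it
nodetool status                 # lists each node's Host ID
nodetool removenode <host-id>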
As an aside, materialized views are deprecated in 3.11 and will not be supported going forward. I would suggest that you roll your own.

Loading Cassandra data with SStableloader from different Cassandra cluster

I have two different independent machines running Cassandra and I want to migrate the data from one machine to the other.
Thus, I first took a snapshot of my Cassandra cluster on machine 1 according to the DataStax documentation.
Then I moved the data to machine 2, where I'm trying to import it with sstableloader.
As a note: the keyspace (open_weather) and table name (raw_weather_data) on machine 2 have been created and are the same as on machine 1.
The command I'm using looks as follows:
bin/sstableloader -d localhost "path_to_snapshot"/open_weather/raw_weather_data
And then I get the following error:
Established connection to initial hosts
Opening sstables and calculating sections to stream
For input string: "CompressionInfo.db"
java.lang.NumberFormatException: For input string: "CompressionInfo.db"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:580)
at java.lang.Integer.parseInt(Integer.java:615)
at org.apache.cassandra.io.sstable.Descriptor.fromFilename(Descriptor.java:276)
at org.apache.cassandra.io.sstable.Descriptor.fromFilename(Descriptor.java:235)
at org.apache.cassandra.io.sstable.Component.fromFilename(Component.java:120)
at org.apache.cassandra.io.sstable.SSTable.tryComponentFromFilename(SSTable.java:160)
at org.apache.cassandra.io.sstable.SSTableLoader$1.accept(SSTableLoader.java:84)
at java.io.File.list(File.java:1161)
at org.apache.cassandra.io.sstable.SSTableLoader.openSSTables(SSTableLoader.java:78)
at org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:162)
at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:106)
Unfortunately I have no idea why.
I'm not sure if it is related to the issue, but somehow on machine 1 my *.db files are named rather "strangely" compared to the *.db files I already have on machine 2.
*.db files from machine 1:
la-53-big-CompressionInfo.db
la-53-big-Data.db
...
la-54-big-CompressionInfo.db
...
*.db files from machine 2:
open_weather-raw_weather_data-ka-5-CompressionInfo.db
open_weather-raw_weather_data-ka-5-Data.db
What am I missing? Any help would be highly appreciated. I'm also open to any other suggestions. The COPY command will most probably not work, since it is limited to 99999999 rows as far as I know.
P.S. I didn't want to create an overly huge post, but if you need any further information to help me out, just let me know.
EDIT:
Note that I'm using Cassandra in stand-alone mode.
EDIT2:
After installing the same version 2.1.4 on my destination machine (machine 2), I still get the same errors. With sstableloader I still get the above-mentioned error, and when copying the files manually (as described by LHWizard), I still get empty tables after starting Cassandra again and performing a SELECT.
Regarding the initial tokens, I get a huge list of tokens if I run nodetool ring on machine 1. I'm not sure what to do with those.
Your data is already in the form of a snapshot (or backup). What I have done in the past is the following:
1. Install the same version of Cassandra on the restore node.
2. Edit cassandra.yaml on the restore node - make sure that cluster_name and snitch are the same.
3. Edit the seeds: list and any other properties that were altered in the original node.
4. Get the schema from the original node using cqlsh DESC KEYSPACE.
5. Start Cassandra on the restore node and import the schema.
(Steps 6 & 7 may not be completely necessary, but this is what I do.)
6. Stop Cassandra and delete the contents of the /var/lib/cassandra/data/, commitlog/, and saved_caches/ folders.
7. Restart Cassandra on the restore node to recreate the correct folders, then stop it.
8. Copy the contents of the snapshots folder to each corresponding table folder on the restore node (a sketch of this step follows below), then start Cassandra. You probably want to run nodetool repair.
You don't really need to bulk import the data; it's already in the correct format if you are using the same version of Cassandra, although you didn't specify that in your original question.
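A minimal sketch of that final copy step, assuming a default data directory and the keyspace/table from the question (your snapshot path and the cf-id suffix on the table directory will differ):
# copy the snapshot's sstables into the table directory Cassandra created
cp "path_to_snapshot"/open_weather/raw_weather_data/* \
   /var/lib/cassandra/data/open_weather/raw_weather_data-*/
# load the new sstables without a full restart
nodetool refresh open_weather raw_weather_data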

Cassandra 2.1 system schema missing

I have a six node cluster running cassandra 2.1.6. Yesterday I tried to drop a column family and received the message "Column family ID mismatch". I tried running nodetool repair but after repair was complete I got the same message. I then tried selecting from the column family but got the message "Column family not found". I ran the following query to get a list of all column families in my schema
select columnfamily_name from system.schema_columnfamilies where keyspace_name = 'xxx';
At this point I received the message
"Keyspace 'system' not found." I tried the command describe keyspaces and sure enough system was not in the list of keyspaces.
I then tried nodetool resetlocalschema on one of the nodes missing the system keyspace, and when that failed to resolve the problem I tried nodetool rebuild, but got the same messages after the rebuild was complete. I tried stopping the nodes missing the system keyspace and restarting them; once the restart was completed, the system keyspace was back and I was able to execute the above query successfully. However, the table I had tried to drop previously was not listed, so I tried to recreate it and once again received the message Column family ID mismatch.
Finally, I shutdown the cluster and restarted it... and everything works as expected.
My questions are: How/why did the system keyspace disappear? What happened to the data being inserted into my column families while the system keyspace was missing from two of the six nodes? (my application didn't seem to have any problems) Is there a way I can detect problems like this automatically or do I have to manually check up on my keyspaces each day? Is there a way to fix the missing system keyspace and/or the Column family ID mismatch without restarting the entire cluster?
EDIT
As per Jim Meyers' suggestion, I queried the cf_id on each node of the cluster and confirmed that all nodes return the same value.
select cf_id from system.schema_columnfamilies where columnfamily_name = 'customer' allow filtering;
cf_id
--------------------------------------
cbb51b40-2b75-11e5-a578-798867d9971f
I then ran ls on my data directory and can see that there are multiple entries for a few of my tables
customer-72bc62d0ff7611e4a5b53386c3f1c9f9
customer-cbb51b402b7511e5a578798867d9971f
My application dynamically creates tables at run time (always using IF NOT EXISTS); it seems likely that the application issued the same CREATE TABLE command on separate nodes at the same time, resulting in the schema mismatch.
Since I've restarted the cluster everything seems to be working fine.
Is it safe to delete the extra directory?
i.e. customer-72bc62d0ff7611e4a5b53386c3f1c9f9
1. The cause of this problem is a CREATE TABLE statement collision. Do not generate tables dynamically from multiple clients, even with IF NOT EXISTS. The first thing you need to do is fix your code so that this does not happen. Just create your tables manually from cqlsh, allowing time for the schema to settle. Always wait for schema agreement when modifying schema (see the check below).
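One quick way to check schema agreement from cqlsh on a 2.1 cluster is to compare the schema_version that each node reports (a minimal sketch; run it on every node):
cqlsh -e "SELECT schema_version FROM system.local;"
cqlsh -e "SELECT peer, schema_version FROM system.peers;"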
2. Here's the fix:
1) Change your code to not automatically re-create tables (even with IF NOT EXISTS).
2) Run a rolling restart to ensure the schema matches across nodes. Run nodetool describecluster around your cluster and check that there is only one schema version.
ON EACH NODE:
3) Check your filesystem and see if you have two directories for the table in question in the data directory.
IF THERE ARE TWO OR MORE DIRECTORIES:
4) Identify from system.schema_columnfamilies which cf ID is the "new" one (currently in use):
cqlsh -e "select * from system.schema_columnfamilies" | grep <table_name>
5) Move the data from the "old" directory to the "new" one and remove the old directory (see the sketch after these steps).
6) If there are multiple "old" directories, repeat step 5 for each of them.
7) Run nodetool refresh.
IF THERE IS ONLY ONE DIRECTORY:
No further action is needed.
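A shell sketch of steps 5-7, using the customer table and directory names from the question (the keyspace name mykeyspace is a placeholder, and the cf IDs will differ on your cluster):
cd /var/lib/cassandra/data/mykeyspace
# move the sstables from the stale directory into the live one
mv customer-72bc62d0ff7611e4a5b53386c3f1c9f9/* \
   customer-cbb51b402b7511e5a578798867d9971f/
rmdir customer-72bc62d0ff7611e4a5b53386c3f1c9f9
# make Cassandra pick up the newly placed sstables
nodetool refresh mykeyspace customer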
Futures
Schema collisions will continue to be an issue until CASSANDRA-9424 is resolved.
Here's an example of it occurring in Jira that was closed as not a problem: CASSANDRA-8387.
When you create a table in Cassandra it is assigned a unique id that should be the same on all nodes. Somehow it sounds like your table did not have the same id on all nodes. I'm not sure how that might happen, but maybe there was a glitch when the table was created and it was created multiple times, etc.
You should always use the IF NOT EXISTS clause when creating tables.
To check if your id's are consistent, try this on each node:
In cqlsh, run "SELECT cf_id FROM system.schema_columnfamilies WHERE columnfamily_name = 'yourtablename' ALLOW FILTERING;"
Look in the data directory under the keyspace name the table was created in. You should see a single directory for the table that looks like table_name-cf_id.
If things are correct you should see the same cf_id in all these places. If you see different ones, then somehow things got out of sync.
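A minimal sketch of that cross-check, with the customer table from the question and an assumed keyspace name of mykeyspace:
cqlsh -e "SELECT cf_id FROM system.schema_columnfamilies WHERE columnfamily_name = 'customer' ALLOW FILTERING;"
# the directory suffix should be the cf_id above with the dashes removed
ls -d /var/lib/cassandra/data/mykeyspace/customer-*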
As for the other symptoms like the system keyspace disappearing, I don't have a suggestion other than that you hit some kind of bug in the software. If you get a lot of strange symptoms like this, then perhaps you have some kind of data corruption. You might want to think about backing up your data in case things go south and you need to rebuild the cluster.

Altering a column family in cassandra in a multiple node topology

I'm having the following issue when trying to alter a Cassandra table:
I'm altering the table straightforwardly:
ALTER TABLE posts ADD is_black BOOLEAN;
In a single-node environment, both on an EC2 server and on localhost, everything works perfectly - select, delete, and so on.
But when I alter the table on a cluster with 3 nodes, things get messy.
When I perform
select().all().from(tableName).where..
I'm getting the following exception:
java.lang.IllegalArgumentException: is_black is not a column defined in this metadata
at com.datastax.driver.core.ColumnDefinitions.getAllIdx(ColumnDefinitions.java:273)
at com.datastax.driver.core.ColumnDefinitions.getFirstIdx(ColumnDefinitions.java:279)
at com.datastax.driver.core.ArrayBackedRow.getIndexOf(ArrayBackedRow.java:69)
at com.datastax.driver.core.AbstractGettableData.getString(AbstractGettableData.java:137)
Apparently I'm not the only one who's having this behaviour:
reference
P.S. - Dropping and recreating the keyspace is not a possibility for me, since I cannot delete the data contained in the table.
The bug was resolved :-)
The issue was that the DataStax driver maintains an in-memory cache containing the configuration of each node; this cache wasn't updated when I altered the table, since I used cqlsh instead of their SDK.
After restarting all the nodes, the in-memory cache was dropped and the bug was resolved.
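For what it's worth, a rough sketch of that rolling restart (host names and the service name are placeholders; adapt to your install):
for host in node1 node2 node3; do
  ssh "$host" sudo service cassandra restart
  # wait until 'nodetool status' shows the node as UN before moving on
  sleep 60
done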

Cassandra hot keyspace structure change

I'm currently running a 12-node Cassandra cluster storing 4TB of data, with a replication factor set to 3. For the needs of an application update, we need to change the configuration of our keyspace, and we'd like to avoid any downtime if possible.
I read on a mailing list that the best way to do it is to:
Kill cassandra process on one server of the cluster
Start it again, wait for the commit log to be written on the disk, and kill it again
Make the modifications in the storage.xml file
Rename or delete files in the data directories according to the changes we made
Start cassandra
Goto 1 with next server on the list
My questions would be:
Did I understand the process well?
Is there any risk of data corruption?
During the process, there will be servers with different versions of the storage.xml file in the same cluster, same keyspace. Is that a problem?
Same question as above if we not only add, rename, and remove column families, but also change the CompareWith parameter or transform an existing column family into a super column family. Or do we need to change the name?
Thank you for your answers. It's the first time I'll do this, and I'm a little bit scared.
Your list looks like the one in http://wiki.apache.org/cassandra/FAQ#modify_cf_config, so it should be accurate...
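For what it's worth, a hedged per-node sketch of the procedure above (the service name and the storage.xml path are assumptions for a 0.6/0.7-era install; adapt as needed):
sudo service cassandra stop                 # 1. kill the process
sudo service cassandra start && sleep 30    # 2. let the commit log be written out
sudo service cassandra stop
sudo vi /etc/cassandra/storage.xml          # 3. apply the keyspace changes
# 4. rename/delete the affected files in the data directories
sudo service cassandra start                # 5/6. start, then repeat on the next node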
