In my Cassandra cluster, a few of the nodes produce the following error when I trigger compaction. Is there any solution for this?
Caused by: java.io.IOError: org.apache.cassandra.db.ColumnSerializer$CorruptColumnException: invalid column name length 0
at org.apache.cassandra.io.sstable.SSTableIdentityIterator.next(SSTableIdentityIterator.java:179)
at org.apache.cassandra.io.sstable.SSTableIdentityIterator.next(SSTableIdentityIterator.java:42)
at org.apache.commons.collections.iterators.CollatingIterator.set(CollatingIterator.java:284)
at org.apache.commons.collections.iterators.CollatingIterator.least(CollatingIterator.java:326)
at org.apache.commons.collections.iterators.CollatingIterator.next(CollatingIterator.java:230)
at org.apache.cassandra.utils.ReducingIterator.computeNext(ReducingIterator.java:69)
at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
at com.google.common.collect.Iterators$7.computeNext(Iterators.java:614)
at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
at org.apache.cassandra.db.ColumnIndexer.serializeInternal(ColumnIndexer.java:76)
at org.apache.cassandra.db.ColumnIndexer.serialize(ColumnIndexer.java:50)
at org.apache.cassandra.db.compaction.LazilyCompactedRow.<init>(LazilyCompactedRow.java:86)
at org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:138)
at org.apache.cassandra.db.compaction.CompactionIterator.getReduced(CompactionIterator.java:123)
at org.apache.cassandra.db.compaction.CompactionIterator.getReduced(CompactionIterator.java:43)
at org.apache.cassandra.utils.ReducingIterator.computeNext(ReducingIterator.java:74)
at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
at org.apache.commons.collections.iterators.FilterIterator.setNextObject(FilterIterator.java:183)
at org.apache.commons.collections.iterators.FilterIterator.hasNext(FilterIterator.java:94)
at org.apache.cassandra.db.compaction.CompactionManager.doCompactionWithoutSizeEstimation(CompactionManager.java:569)
at org.apache.cassandra.db.compaction.CompactionManager.doCompaction(CompactionManager.java:506)
at org.apache.cassandra.db.compaction.CompactionManager$4.call(CompactionManager.java:319)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
... 3 more
Caused by: org.apache.cassandra.db.ColumnSerializer$CorruptColumnException: invalid column name length 0
at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:89)
at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:36)
at org.apache.cassandra.io.sstable.SSTableIdentityIterator.next(SSTableIdentityIterator.java:172)
Thanks
Manish
Run nodetool scrub, upgrade to the latest Cassandra version in your release series, and check for bad memory.
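For example, on each affected node (the keyspace and column family names here are placeholders), something like:
nodetool -h <host> scrub <keyspace> <column_family>
Scrub rewrites the sstables and skips rows it cannot read, so it is worth running a repair afterwards to restore any discarded data from the replicas.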
I am trying to query Pinot table data using Presto; below are my configuration details.
Pinot is started on one of the SIT servers, i.e. 10.184.160.52:
Controller: 10.184.160.52:9000
Server: 10.184.160.52:7000
Broker: 10.184.160.52:8000
Presto is on a different server, 10.184.160.53. Ports are open between these two servers.
I created a pinot.properties file at presto/etc/catalog/pinot.properties:
connector.name=pinot
pinot.controller-urls=Controller_Host:9000
bin/launcher run ---> loaded the Pinot catalog.
Started Presto with the Pinot catalog:
./presto --server 10.184.160.53:8080 --catalog pinot
show catalogs; (able to see my catalog)
pinot
show schemas; (able to see the schema also)
presto> show schemas;
Schema
--------------------
default
presto> use default;
USE
presto:default> show tables; (able to see Pinot tables)
Table
------------------------------
test
test2
test3
(3 rows)
Query 20210519_124218_00061_vcz4u, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0:00 [3 rows, 98B] [10 rows/s, 340B/s]
But when I run select * from test; it fails with a "broker not found" error:
presto:default> select * from test;
Query 20210519_124230_00062_vcz4u failed: No valid brokers found for test
Complete Presto Logs:
Error Code PINOT_UNABLE_TO_FIND_BROKER (84213767)
Stack Trace
io.prestosql.pinot.PinotException: No valid brokers found for test
at io.prestosql.pinot.client.PinotClient.getBrokerHost(PinotClient.java:285)
at io.prestosql.pinot.client.PinotClient.sendHttpGetToBrokerJson(PinotClient.java:185)
at io.prestosql.pinot.client.PinotClient.getRoutingTableForTable(PinotClient.java:302)
at io.prestosql.pinot.PinotSplitManager.generateSplitsForSegmentBasedScan(PinotSplitManager.java:72)
at io.prestosql.pinot.PinotSplitManager.getSplits(PinotSplitManager.java:167)
at io.prestosql.split.SplitManager.getSplits(SplitManager.java:87)
at io.prestosql.sql.planner.DistributedExecutionPlanner$Visitor.visitScanAndFilter(DistributedExecutionPlanner.java:203)
at io.prestosql.sql.planner.DistributedExecutionPlanner$Visitor.visitTableScan(DistributedExecutionPlanner.java:185)
at io.prestosql.sql.planner.DistributedExecutionPlanner$Visitor.visitTableScan(DistributedExecutionPlanner.java:156)
at io.prestosql.sql.planner.plan.TableScanNode.accept(TableScanNode.java:143)
at io.prestosql.sql.planner.DistributedExecutionPlanner.doPlan(DistributedExecutionPlanner.java:124)
at io.prestosql.sql.planner.DistributedExecutionPlanner.doPlan(DistributedExecutionPlanner.java:131)
at io.prestosql.sql.planner.DistributedExecutionPlanner.plan(DistributedExecutionPlanner.java:101)
at io.prestosql.execution.SqlQueryExecution.planDistribution(SqlQueryExecution.java:470)
at io.prestosql.execution.SqlQueryExecution.start(SqlQueryExecution.java:386)
at io.prestosql.execution.SqlQueryManager.createQuery(SqlQueryManager.java:237)
at io.prestosql.dispatcher.LocalDispatchQuery.lambda$startExecution$7(LocalDispatchQuery.java:143)
at io.prestosql.$gen.Presto_350____20210519_105836_2.run(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
I am not able to understand what is happening here, or why this issue appears only for the select statement. It looks like the Pinot broker is not accepting queries. Could someone kindly suggest what the issue is?
Update: this is because the connector does not support mixed-case table names (mixed-case column names are supported). There is a pull request to add support for mixed-case table names: https://github.com/trinodb/trino/pull/7630
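To illustrate (table name hypothetical): if the table was created in Pinot as Test, Presto lowercases the unquoted identifier and presumably asks the controller for the brokers of test, which finds nothing and surfaces as PINOT_UNABLE_TO_FIND_BROKER. Until that pull request lands, a workaround is to give Pinot tables all-lowercase names so both sides agree on the identifier.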
I am trying to load sstables into Cassandra using the sstableloader utility, but I am getting the following error (java.lang.IllegalArgumentException):
java.lang.RuntimeException: Could not retrieve endpoint ranges:
at org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:338)
at org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:156)
at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:106)
Caused by: java.lang.IllegalArgumentException
at java.nio.Buffer.limit(Buffer.java:275)
at org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:543)
at org.apache.cassandra.serializers.CollectionSerializer.readValue(CollectionSerializer.java:124)
at org.apache.cassandra.serializers.MapSerializer.deserializeForNativeProtocol(MapSerializer.java:101)
at org.apache.cassandra.serializers.MapSerializer.deserializeForNativeProtocol(MapSerializer.java:30)
at org.apache.cassandra.serializers.CollectionSerializer.deserialize(CollectionSerializer.java:50)
at org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:68)
at org.apache.cassandra.cql3.UntypedResultSet$Row.getMap(UntypedResultSet.java:287)
at org.apache.cassandra.config.CFMetaData.fromSchemaNoTriggers(CFMetaData.java:1824)
at org.apache.cassandra.config.CFMetaData.fromThriftCqlRow(CFMetaData.java:1117)
at org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:330)
... 2 more
The command I am using to load the sstables is:
$ bin/sstableloader -d nodename -u username -pw password path/to/sstable/keyspacename/tablename
This was working a few days back. I am not sure what changed or how to debug it.
I am using DataStax.
I am loading the sstables from a node which is already in the cluster, i.e. my source and destination node are the same.
Has someone seen this error before?
Cassandra version: 2.1
Any help is appreciated.
The exception in the stack trace comes from this piece of code:
if (version >= Server.VERSION_3)
{
    int size = input.getInt();
    if (size < 0)
        return null;

    return ByteBufferUtil.readBytes(input, size); // HERE !
}
I'm wondering whether you're loading sstables that were generated by Cassandra 2.1 or by an older version, because the issue seems to be at the byte-encoding level.
There is also a possibility that your sstables are corrupted.
How did you get those sstables? From a copy of another Cassandra instance? Generated by CQLSSTableWriter?
I had this problem again, so I debugged it a little to find the root cause. The problem is that if at any point you have altered your Cassandra table by dropping a column, it triggers a bug in sstableloader. That is why dropping the table and creating it again works.
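A minimal sketch of that workaround in cqlsh (keyspace, table, and columns here are hypothetical):
cqlsh> DESCRIBE TABLE ks.tbl;    -- capture the current schema first
cqlsh> DROP TABLE ks.tbl;
cqlsh> CREATE TABLE ks.tbl (id text PRIMARY KEY, val text);
Then re-run sstableloader against the freshly created table; recreating it clears the dropped-column metadata that appears to trip up the loader.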
I'm experiencing this error while trying to query Cassandra using the cassandra-jdbc (1.1.3) driver.
Caused by: org.apache.thrift.transport.TTransportException: Read a negative frame size (-2147418110)!
at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:133)
at org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at org.apache.thrift.protocol.TBinaryProtocol.readStringBody(TBinaryProtocol.java:354)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:215)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
at org.apache.cassandra.thrift.Cassandra$Client.recv_prepare_cql_query(Cassandra.java:1438)
at org.apache.cassandra.thrift.Cassandra$Client.prepare_cql_query(Cassandra.java:1424)
at org.apache.cassandra.cql.jdbc.CassandraConnection.prepare(CassandraConnection.java:438)
at org.apache.cassandra.cql.jdbc.CassandraConnection.prepare(CassandraConnection.java:452)
at org.apache.cassandra.cql.jdbc.CassandraPreparedStatement.<init>(CassandraPreparedStatement.java:85)
... 79 more
This is my sample code snippet:
statement = connection.prepareStatement(SELECT_CQL); // prepare the parameterized CQL query
statement.setString(1, ID);                          // bind the key parameter
resultSet = statement.executeQuery();
I'm supposing you were trying to connect to Cassandra via JDBC on the CQL native-protocol port (9042).
I was able to connect by enabling Thrift with
nodetool enablethrift
and then connecting to port 9160 (or whichever you might have overridden in conf/cassandra.yaml). Hope this helps.
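For reference, a minimal sketch of that connection (host and keyspace are placeholders; this assumes the cassandra-jdbc driver class and URL format):
Class.forName("org.apache.cassandra.cql.jdbc.CassandraDriver");
Connection connection = DriverManager.getConnection("jdbc:cassandra://<host>:9160/<keyspace>");
A Thrift framed transport pointed at 9042 would read native-protocol bytes as a frame length, which is consistent with the negative frame size in the stack trace.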
I am trying to create an external table in Hive as shown on page 88 of the DataStax Enterprise 3.1 documentation.
The statement is below, together with the error message.
What am I doing wrong?
Regards Hans-Peter
hive> create external table testext (m string, n string, o string, p string)
> STORED BY 'org.apache.hadoop.hive.cassandra.cql3.CqlStorageHandler'
> TBLPROPERTIES ( "cassandra.ks.name" = "cql3ks",
> "cassandra.cf.name" = "test",
> "cassandra.cql3.type" = "text, text, text, text");
FAILED: Error in metadata:
com.datastax.bdp.hadoop.hive.metastore.CassandraHiveMetaStoreException:
There was a problem with the Cassandra Hive MetaStore: Problem finding unmapped
keyspaces
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
2013-10-15 12:47:36,657 WARN conf.HiveConf (HiveConf.java:<clinit>(63)) - DEPRECATED: Ignoring hive-default.xml found on the CLASSPATH at /etc/dse/hive/hive-default.xml
2013-10-15 12:48:41,003 WARN config.DatabaseDescriptor (DatabaseDescriptor.java:loadYaml(253)) - Please rename 'authority' to 'authorizer' in cassandra.yaml
2013-10-15 12:48:42,988 ERROR exec.Task (SessionState.java:printError(400)) - FAILED: Error in metadata: com.datastax.bdp.hadoop.hive.metastore.CassandraHiveMetaStoreException: There was a problem with the Cassandra Hive MetaStore: Problem finding unmapped keyspaces
org.apache.hadoop.hive.ql.metadata.HiveException: com.datastax.bdp.hadoop.hive.metastore.CassandraHiveMetaStoreException: There was a problem with the Cassandra Hive MetaStore: Problem finding unmapped keyspaces
at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:544)
at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:3305)
at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:242)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:134)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1326)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1118)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:951)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:258)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:215)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:406)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:689)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:557)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: com.datastax.bdp.hadoop.hive.metastore.CassandraHiveMetaStoreException: There was a problem with the Cassandra Hive MetaStore: Problem finding unmapped keyspaces
at com.datastax.bdp.hadoop.hive.metastore.SchemaManagerService.createKeyspaceSchemasIfNeeded(SchemaManagerService.java:230)
at com.datastax.bdp.hadoop.hive.metastore.CassandraHiveMetaStore.setConf(CassandraHiveMetaStore.java:112)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
at org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
at org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:346)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:333)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:371)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:278)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:248)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:114)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2092)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2102)
at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:538)
... 17 more
Caused by: com.datastax.bdp.hadoop.hive.metastore.CassandraHiveMetaStoreException: There was a problem with the Cassandra Hive MetaStore: There was a problem retrieving column families for keyspace demo
at com.datastax.bdp.hadoop.hive.metastore.SchemaManagerService.createUnmappedTables(SchemaManagerService.java:277)
at com.datastax.bdp.hadoop.hive.metastore.CassandraHiveMetaStore.getDatabase(CassandraHiveMetaStore.java:148)
at com.datastax.bdp.hadoop.hive.metastore.CassandraHiveMetaStore.getDatabase(CassandraHiveMetaStore.java:136)
at com.datastax.bdp.hadoop.hive.metastore.SchemaManagerService.isKeyspaceMapped(SchemaManagerService.java:186)
at com.datastax.bdp.hadoop.hive.metastore.SchemaManagerService.finUnmappedKeyspaces(SchemaManagerService.java:137)
at com.datastax.bdp.hadoop.hive.metastore.SchemaManagerService.createKeyspaceSchemasIfNeeded(SchemaManagerService.java:224)
... 31 more
Caused by: com.datastax.bdp.hadoop.hive.metastore.CassandraHiveMetaStoreException: There was a problem with the Cassandra Hive MetaStore: Problem creating column mappingsorg.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type)
at com.datastax.bdp.hadoop.hive.metastore.SchemaManagerService.buildTable(SchemaManagerService.java:481)
at com.datastax.bdp.hadoop.hive.metastore.SchemaManagerService.createUnmappedTables(SchemaManagerService.java:254)
... 36 more
Caused by: java.lang.IllegalArgumentException
at java.nio.Buffer.limit(Buffer.java:247)
at org.apache.cassandra.db.marshal.AbstractCompositeType.getBytes(AbstractCompositeType.java:51)
at org.apache.cassandra.db.marshal.AbstractCompositeType.getWithShortLength(AbstractCompositeType.java:60)
at org.apache.cassandra.db.marshal.AbstractCompositeType.getString(AbstractCompositeType.java:226)
at com.datastax.bdp.hadoop.hive.metastore.SchemaManagerService.addTypeToStorageDescriptor(SchemaManagerService.java:846)
at com.datastax.bdp.hadoop.hive.metastore.SchemaManagerService.buildColumnMappings(SchemaManagerService.java:546)
at com.datastax.bdp.hadoop.hive.metastore.SchemaManagerService.buildTable(SchemaManagerService.java:460)
... 37 more
2013-10-15 12:48:42,990 ERROR ql.Driver (SessionState.java:printError(400)) - FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
I am not sure what the actual problem is, but I ran into this while creating a normal table in Hive.
I started Hive with sudo access, and can now run queries as expected.
$ sudo bin/dse hive
So something that worked for me was to totally wipe out the HiveMetaStore keyspace in Cassandra and recreate just the keyspace with the NetworkTopologyStrategy replication strategy. I made sure to add the Analytics datacenter to the new keyspace as well (note the double quotes, which preserve the mixed-case keyspace name), so it looked something like:
CREATE KEYSPACE "HiveMetaStore" WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'Analytics' : 2};
I then restarted DSE on my analytics nodes; they correctly created the MetaStore table within the HiveMetaStore keyspace and everything started working again!
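A quick way to sanity-check the keyspace before restarting (the datacenter name comes from your snitch configuration):
cqlsh> DESCRIBE KEYSPACE "HiveMetaStore";
The output should show NetworkTopologyStrategy with the Analytics datacenter in its replication options.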
First, I read this.
I cannot get Cassandra up and running again.
I am using Hector as my client to connect to an instance of Cassandra 0.8.2 and load my schema. Through Hector, I am using two different classes to create two different column families, Articles and TagsArticlesCF.
In the main class, I create the column families named "Articles" and "TagsArticlesCF" like this:
public static void main(String[] args) {
    cluster = HFactory.getOrCreateCluster("test cluster", "xxx.xxx.xxx.xxx:9160");
    newKeyspaceDef = HFactory.createKeyspaceDefinition(keyspaceName);
    if ((cluster.describeKeyspace(keyspaceName)) == null) {
        createSchema();
    }
    Keyspace ksp = HFactory.createKeyspace(keyspaceName, cluster);
    Articles art = new Articles(cluster, newKeyspaceDef, ksp);
    TagsArticlesCF tags = new TagsArticlesCF(cluster, newKeyspaceDef, ksp);
}
Here is an example of what my column families look like/ how they are created:
public Articles(Cluster cluster, KeyspaceDefinition ksp, Keyspace ksp2) {
    BasicColumnFamilyDefinition bcfDef = new BasicColumnFamilyDefinition();
    bcfDef.setName("Articles");
    bcfDef.setKeyspaceName("test3");
    bcfDef.setDefaultValidationClass(ComparatorType.UTF8TYPE.getClassName());
    bcfDef.setKeyValidationClass(ComparatorType.UTF8TYPE.getClassName());
    bcfDef.setComparatorType(ComparatorType.UTF8TYPE);
    ColumnFamilyDefinition cfDef = new ThriftCfDef(bcfDef);
    BasicColumnDefinition columnDefinition = new BasicColumnDefinition();
    columnDefinition.setName(StringSerializer.get().toByteBuffer("title"));
    columnDefinition.setIndexType(ColumnIndexType.KEYS);
    columnDefinition.setValidationClass(ComparatorType.UTF8TYPE.getClassName());
    cfDef.addColumnDefinition(columnDefinition);
...
I am trying to add a full schema into Cassandra that will support the queries I plan to execute on the loaded data. I ran the main method a few times to load the new column families into the database. After running the main method several times and adjusting a few things (checking whether the column family was already in the KeyspaceDefinition), the running instance of Cassandra went down.
I am curious about a few things when using Hector/Java:
I plan to have 10 or so column families with different columns (to support different queries). Is it best practice to organize my classes so that I have a class for each column family?
What exactly is the difference between a KeyspaceDefinition and a Keyspace? Why is the distinction made?
We tried to bring up a new instance of Cassandra, and here is what we ran into. I am trying to better understand what's going on, so any comments and help to avoid these types of errors would be greatly appreciated:
[root@appscluster1 bin]# ./cassandra -p cassandra.pid
[root@appscluster1 bin]# INFO 10:52:36,437 Logging initialized
INFO 10:52:36,484 JVM vendor/version: Java HotSpot(TM) 64-Bit Server VM/1.6.0_25
INFO 10:52:36,485 Heap size: 1046937600/1046937600
INFO 10:52:36,490 JNA not found. Native methods will be disabled.
INFO 10:52:36,526 Loading settings from file:/opt/cassandra/apache-cassandra-0.8.2/conf/cassandra.yaml
[root@appscluster1 bin]# INFO 10:52:36,872 DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
INFO 10:52:37,346 Global memtable threshold is enabled at 332MB
INFO 10:52:37,348 Removing compacted SSTable files (see http://wiki.apache.org/cassandra/MemtableSSTable)
INFO 10:52:37,497 Removing compacted SSTable files (see http://wiki.apache.org/cassandra/MemtableSSTable)
INFO 10:52:37,617 Removing compacted SSTable files (see http://wiki.apache.org/cassandra/MemtableSSTable)
INFO 10:52:37,984 Removing compacted SSTable files (see http://wiki.apache.org/cassandra/MemtableSSTable)
INFO 10:52:38,252 Removing compacted SSTable files (see http://wiki.apache.org/cassandra/MemtableSSTable)
INFO 10:52:38,259 Removing compacted SSTable files (see http://wiki.apache.org/cassandra/MemtableSSTable)
INFO 10:52:38,545 Opening /opt/cassandra/persist8/data/system/IndexInfo-g-73
INFO 10:52:38,661 Opening /opt/cassandra/persist8/data/system/Schema-g-169
INFO 10:52:38,685 Opening /opt/cassandra/persist8/data/system/Schema-g-170
INFO 10:52:38,730 Opening /opt/cassandra/persist8/data/system/Schema-g-171
INFO 10:52:38,751 Opening /opt/cassandra/persist8/data/system/Migrations-g-171
INFO 10:52:38,763 Opening /opt/cassandra/persist8/data/system/Migrations-g-170
INFO 10:52:38,776 Opening /opt/cassandra/persist8/data/system/Migrations-g-169
INFO 10:52:38,795 Opening /opt/cassandra/persist8/data/system/LocationInfo-g-2
INFO 10:52:38,827 Opening /opt/cassandra/persist8/data/system/LocationInfo-g-1
INFO 10:52:39,048 Loading schema version ec437ac0-d28a-11e0-0000-c4ffed3367ff
INFO 10:52:39,645 Removing compacted SSTable files (see http://wiki.apache.org/cassandra/MemtableSSTable)
INFO 10:52:39,663 Removing compacted SSTable files (see http://wiki.apache.org/cassandra/MemtableSSTable)
... (more of same)...
INFO 10:52:40,463 Removing compacted SSTable files (see http://wiki.apache.org/cassandra/MemtableSSTable)
INFO 10:52:41,390 Opening /opt/cassandra/persist8/data/test3/Articles-g-367
ERROR 10:52:41,392 Missing sstable component in /opt/cassandra/persist8/data/test3/Articles-g-367=[Index.db, Data.db]; skipped because of /opt/cassandra/persist8/data/test3/Articles-g-367-Index.db (No such file or directory)
INFO 10:52:41,863 Removing compacted SSTable files (see http://wiki.apache.org/cassandra/MemtableSSTable)
INFO 10:52:41,865 Removing compacted SSTable files (see http://wiki.apache.org/cassandra/MemtableSSTable)
... (more of same) ...
INFO 10:52:41,892 Removing compacted SSTable files (see http://wiki.apache.org/cassandra/MemtableSSTable)
ERROR 10:52:41,898 Exception encountered during startup.
java.lang.RuntimeException: javax.management.InstanceAlreadyExistsException: org.apache.cassandra.db:type=ColumnFamilies,keyspace=test3,columnfamily=TagsArticlesCF
at org.apache.cassandra.db.ColumnFamilyStore.<init>(ColumnFamilyStore.java:315)
at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:466)
at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:436)
at org.apache.cassandra.db.Table.initCf(Table.java:369)
at org.apache.cassandra.db.Table.<init>(Table.java:306)
at org.apache.cassandra.db.Table.open(Table.java:111)
at org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:187)
at org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:341)
at org.apache.cassandra.thrift.CassandraDaemon.main(CassandraDaemon.java:80)
Caused by: javax.management.InstanceAlreadyExistsException: org.apache.cassandra.db:type=ColumnFamilies,keyspace=test3,columnfamily=TagsArticlesCF
at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:453)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1484)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:963)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:917)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:312)
at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:482)
at org.apache.cassandra.db.ColumnFamilyStore.<init>(ColumnFamilyStore.java:311)
... 8 more
Exception encountered during startup.
java.lang.RuntimeException: javax.management.InstanceAlreadyExistsException: org.apache.cassandra.db:type=ColumnFamilies,keyspace=test3,columnfamily=TagsArticlesCF
at org.apache.cassandra.db.ColumnFamilyStore.<init>(ColumnFamilyStore.java:315)
at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:466)
at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:436)
at org.apache.cassandra.db.Table.initCf(Table.java:369)
at org.apache.cassandra.db.Table.<init>(Table.java:306)
at org.apache.cassandra.db.Table.open(Table.java:111)
at org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:187)
at org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:341)
at org.apache.cassandra.thrift.CassandraDaemon.main(CassandraDaemon.java:80)
Caused by: javax.management.InstanceAlreadyExistsException: org.apache.cassandra.db:type=ColumnFamilies,keyspace=test3,columnfamily=TagsArticlesCF
at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:453)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1484)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:963)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:917)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:312)
at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:482)
at org.apache.cassandra.db.ColumnFamilyStore.<init>(ColumnFamilyStore.java:311)
... 8 more
[root@appscluster1 bin]#
Thanks!
How are you sending the keyspace definition to the cluster?
Take a look at the methods in the following test case:
https://github.com/rantav/hector/blob/master/core/src/test/java/me/prettyprint/cassandra/service/CassandraClusterTest.java#L115-189
If a keyspace or column family already exists, you should be able to catch an IllegalArgumentException.
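A minimal sketch of that guard, reusing the question's variables (assuming Hector's addKeyspace API):
KeyspaceDefinition existing = cluster.describeKeyspace(keyspaceName);
if (existing == null) {
    try {
        cluster.addKeyspace(newKeyspaceDef); // create only when absent
    } catch (IllegalArgumentException e) {
        // the keyspace (or one of its column families) was already defined
    }
}
Guarding every addKeyspace/addColumnFamily call this way keeps repeated runs of main() from submitting duplicate definitions, which may be what produced the duplicate TagsArticlesCF registration above.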