Cassandra startup issue

I cannot get Cassandra up and running again.
I am using Hector as my client to connect to an instance of Cassandra 0.8.2 and load my schema. Through Hector, I am using two different classes to create two different column families: Articles and TagsArticlesCF.
Through the main class, I create column families named "Articles" and "TagsArticlesCF" like this:
public static void main(String[] args) {
    cluster = HFactory.getOrCreateCluster("test cluster", "xxx.xxx.xxx.xxx:9160");
    newKeyspaceDef = HFactory.createKeyspaceDefinition(keyspaceName);
    if (cluster.describeKeyspace(keyspaceName) == null) {
        createSchema();
    }
    Keyspace ksp = HFactory.createKeyspace(keyspaceName, cluster);
    Articles art = new Articles(cluster, newKeyspaceDef, ksp);
    TagsArticlesCF tags = new TagsArticlesCF(cluster, newKeyspaceDef, ksp);
}
Here is an example of how my column families are created:
public Articles(Cluster cluster, KeyspaceDefinition ksp, Keyspace ksp2) {
    BasicColumnFamilyDefinition bcfDef = new BasicColumnFamilyDefinition();
    bcfDef.setName("Articles");
    bcfDef.setKeyspaceName("test3");
    bcfDef.setDefaultValidationClass(ComparatorType.UTF8TYPE.getClassName());
    bcfDef.setKeyValidationClass(ComparatorType.UTF8TYPE.getClassName());
    bcfDef.setComparatorType(ComparatorType.UTF8TYPE);

    ColumnFamilyDefinition cfDef = new ThriftCfDef(bcfDef);

    BasicColumnDefinition columnDefinition = new BasicColumnDefinition();
    columnDefinition.setName(StringSerializer.get().toByteBuffer("title"));
    columnDefinition.setIndexType(ColumnIndexType.KEYS);
    columnDefinition.setValidationClass(ComparatorType.UTF8TYPE.getClassName());
    cfDef.addColumnDefinition(columnDefinition);
...
I am trying to add a full schema into Cassandra that will support the queries I plan to execute on the loaded data. I ran the main method a few times to load the new column families into the database. After running the main method several times and adjusting a few things (such as checking whether a column family was already in the KeyspaceDefinition, as sketched below), the running instance of Cassandra went down.
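For reference, here is a minimal sketch of that existence check using Hector's API (assuming Hector 0.8.x; the column family name is illustrative):
KeyspaceDefinition existingKsp = cluster.describeKeyspace(keyspaceName);
boolean cfExists = false;
if (existingKsp != null) {
    for (ColumnFamilyDefinition cf : existingKsp.getCfDefs()) {
        if (cf.getName().equals("Articles")) {
            cfExists = true; // the column family is already defined
        }
    }
}
if (!cfExists) {
    cluster.addColumnFamily(cfDef); // only create it when it is absent
}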
I am curious about a few things using Hector/java:
I plan to have 10 or so column families with different columns (to support different queries). Is it best practice to organize my classes so that I have a class for each column family?
What exactly is the difference between a KeyspaceDefinition & a Keyspace? Why is the distinction made?
We tried to bring up a new instance of Cassandra, and here is what we ran into. I am trying to better understand what's going on, so any comments and help to avoid these types of errors would be greatly appreciated:
[root@appscluster1 bin]# ./cassandra -p cassandra.pid
[root@appscluster1 bin]# INFO 10:52:36,437 Logging initialized
INFO 10:52:36,484 JVM vendor/version: Java HotSpot(TM) 64-Bit Server VM/1.6.0_25
INFO 10:52:36,485 Heap size: 1046937600/1046937600
INFO 10:52:36,490 JNA not found. Native methods will be disabled.
INFO 10:52:36,526 Loading settings from file:/opt/cassandra/apache-cassandra-0.8.2/conf/cassandra.yaml
[root@appscluster1 bin]# INFO 10:52:36,872 DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
INFO 10:52:37,346 Global memtable threshold is enabled at 332MB
INFO 10:52:37,348 Removing compacted SSTable files (see http://wiki.apache.org/cassandra/MemtableSSTable)
INFO 10:52:37,497 Removing compacted SSTable files (see http://wiki.apache.org/cassandra/MemtableSSTable)
INFO 10:52:37,617 Removing compacted SSTable files (see http://wiki.apache.org/cassandra/MemtableSSTable)
INFO 10:52:37,984 Removing compacted SSTable files (see http://wiki.apache.org/cassandra/MemtableSSTable)
INFO 10:52:38,252 Removing compacted SSTable files (see http://wiki.apache.org/cassandra/MemtableSSTable)
INFO 10:52:38,259 Removing compacted SSTable files (see http://wiki.apache.org/cassandra/MemtableSSTable)
INFO 10:52:38,545 Opening /opt/cassandra/persist8/data/system/IndexInfo-g-73
INFO 10:52:38,661 Opening /opt/cassandra/persist8/data/system/Schema-g-169
INFO 10:52:38,685 Opening /opt/cassandra/persist8/data/system/Schema-g-170
INFO 10:52:38,730 Opening /opt/cassandra/persist8/data/system/Schema-g-171
INFO 10:52:38,751 Opening /opt/cassandra/persist8/data/system/Migrations-g-171
INFO 10:52:38,763 Opening /opt/cassandra/persist8/data/system/Migrations-g-170
INFO 10:52:38,776 Opening /opt/cassandra/persist8/data/system/Migrations-g-169
INFO 10:52:38,795 Opening /opt/cassandra/persist8/data/system/LocationInfo-g-2
INFO 10:52:38,827 Opening /opt/cassandra/persist8/data/system/LocationInfo-g-1
INFO 10:52:39,048 Loading schema version ec437ac0-d28a-11e0-0000-c4ffed3367ff
INFO 10:52:39,645 Removing compacted SSTable files (see http://wiki.apache.org/cassandra/MemtableSSTable)
INFO 10:52:39,663 Removing compacted SSTable files (see http://wiki.apache.org/cassandra/MemtableSSTable)
... (more of same)...
INFO 10:52:40,463 Removing compacted SSTable files (see http://wiki.apache.org/cassandra/MemtableSSTable)
INFO 10:52:41,390 Opening /opt/cassandra/persist8/data/test3/Articles-g-367
ERROR 10:52:41,392 Missing sstable component in /opt/cassandra/persist8/data/test3/Articles-g-367=[Index.db, Data.db]; skipped because of /opt/cassandra/persist8/data/test3/Articles-g-367-Index.db (No such file or directory)
INFO 10:52:41,863 Removing compacted SSTable files (see http://wiki.apache.org/cassandra/MemtableSSTable)
INFO 10:52:41,865 Removing compacted SSTable files (see http://wiki.apache.org/cassandra/MemtableSSTable)
... (more of same) ...
INFO 10:52:41,892 Removing compacted SSTable files (see http://wiki.apache.org/cassandra/MemtableSSTable)
ERROR 10:52:41,898 Exception encountered during startup.
java.lang.RuntimeException: javax.management.InstanceAlreadyExistsException: org.apache.cassandra.db:type=ColumnFamilies,keyspace=test3,columnfamily=TagsArticlesCF
at org.apache.cassandra.db.ColumnFamilyStore.<init>(ColumnFamilyStore.java:315)
at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:466)
at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:436)
at org.apache.cassandra.db.Table.initCf(Table.java:369)
at org.apache.cassandra.db.Table.<init>(Table.java:306)
at org.apache.cassandra.db.Table.open(Table.java:111)
at org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:187)
at org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:341)
at org.apache.cassandra.thrift.CassandraDaemon.main(CassandraDaemon.java:80)
Caused by: javax.management.InstanceAlreadyExistsException: org.apache.cassandra.db:type=ColumnFamilies,keyspace=test3,columnfamily=TagsArticlesCF
at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:453)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1484)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:963)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:917)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:312)
at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:482)
at org.apache.cassandra.db.ColumnFamilyStore.<init>(ColumnFamilyStore.java:311)
... 8 more
... (same exception repeated) ...
[root@appscluster1 bin]#
Thanks!

How are you sending the keyspace definition to the cluster?
Take a look at the methods in the following test case:
https://github.com/rantav/hector/blob/master/core/src/test/java/me/prettyprint/cassandra/service/CassandraClusterTest.java#L115-189
If a keyspace or column family already exists, you should be able to catch an IllegalArgumentException; see the sketch below.
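For example, a hedged sketch of guarding schema creation this way (Hector 0.8.x; cfDef is the definition built in the question):
try {
    cluster.addColumnFamily(cfDef);
} catch (IllegalArgumentException e) {
    // The column family (or keyspace) already exists; skip creation.
}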

Related

Spark not able to write into a new hive table in partitioned and append mode

I created a new table in Hive in partitioned and ORC format.
When I write into this table using Spark (append, ORC, and partitioned mode), it fails with the exception:
org.apache.spark.sql.AnalysisException: The format of the existing table test.table1 is `HiveFileFormat`. It doesn't match the specified format `OrcFileFormat`.;
If I change the format from "orc" to "hive" while writing, it still fails, with an exception saying that Spark is not able to understand the underlying structure of the table.
So this issue is happening because Spark is not able to write into a Hive table in append mode, because it can't create a new table. I am able to do overwrite successfully because Spark creates the table again.
But my use case is to write in append mode from the start. insertInto also does not work, specifically for partitioned tables. I am pretty much blocked by my use case. Any help would be great.
Edit 1:
Working on an HDP 3.1.0 environment.
Spark version is 2.3.2.
Hive version is 3.1.0.
Edit 2:
// Reading the table
val inputdf = spark.sql("select id,code,amount from t1")
// Writing into the table
inputdf.write.mode(SaveMode.Append).partitionBy("code").format("orc").saveAsTable("test.t2")
Edit 3: Using insertInto():
val df2 = spark.sql("select id,code,amount from t1")
df2.write.format("orc").mode("append").insertInto("test.t2")
I get the following warnings:
20/05/17 19:15:12 WARN SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
20/05/17 19:15:12 WARN SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
20/05/17 19:15:13 WARN AcidUtils: Cannot get ACID state for test.t1 from null
20/05/17 19:15:13 WARN AcidUtils: Cannot get ACID state for test.t1 from null
20/05/17 19:15:13 WARN HiveMetastoreCatalog: Unable to infer schema for table test.t1 from file format ORC (inference mode: INFER_AND_SAVE). Using metastore schema.
If I rerun the insertInto command, I get the following exception:
20/05/17 19:16:37 ERROR Hive: MetaException(message:The transaction for alter partition did not commit successfully.)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$alter_partitions_req_result$alter_partitions_req_resultStandardScheme.read(ThriftHiveMetastore.java)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$alter_partitions_req_result$alter_partitions_req_resultStandardScheme.read(ThriftHiveMetastore.java)
Error in the Hive metastore logs:
2020-05-17T21:17:43,891 INFO [pool-8-thread-198]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(907)) - 163: alter_partitions : tbl=hive.test.t1
2020-05-17T21:17:43,891 INFO [pool-8-thread-198]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(349)) - ugi=X#A.ORG ip=10.10.1.36 cmd=alter_partitions : tbl=hive.test.t1
2020-05-17T21:17:43,891 INFO [pool-8-thread-198]: metastore.HiveMetaStore (HiveMetaStore.java:alter_partitions_with_environment_context(5119)) - New partition values:[BR]
2020-05-17T21:17:43,913 ERROR [pool-8-thread-198]: metastore.ObjectStore (ObjectStore.java:alterPartitions(4397)) - Alter failed
org.apache.hadoop.hive.metastore.api.MetaException: Cannot change stats state for a transactional table without providing the transactional write state for verification (new write ID -1, valid write IDs null; current state null; new state {}
I was able to resolve the issue by using external tables in my use case. There is currently an open issue in Spark related to the ACID properties of Hive. Once I create the Hive table in external mode, I am able to do append operations on partitioned and non-partitioned tables, as sketched below.
https://issues.apache.org/jira/browse/SPARK-15348
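For illustration, a minimal sketch of the external-table workaround using the Spark Java API (the table name, columns, and HDFS location are hypothetical; assumes Spark 2.3 with Hive support):
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class ExternalTableAppend {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("external-table-append")
                .enableHiveSupport()
                .getOrCreate();
        // Create the target as EXTERNAL so appends avoid the managed-table ACID path.
        spark.sql("CREATE EXTERNAL TABLE IF NOT EXISTS test.t2 (id INT, amount DOUBLE) "
                + "PARTITIONED BY (code STRING) STORED AS ORC "
                + "LOCATION '/data/external/test/t2'");
        // Dynamic partitioning is needed when appending by partition column.
        spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict");
        Dataset<Row> inputdf = spark.sql("SELECT id, code, amount FROM test.t1");
        // insertInto is position-based: the partition column (code) must come last.
        inputdf.select("id", "amount", "code")
               .write().format("orc").mode("append").insertInto("test.t2");
    }
}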

Could not retrieve endpoint ranges: java.lang.IllegalArgumentException

I am trying to load sstables into Cassandra using the sstableloader utility, but I am getting the following error:
java.lang.RuntimeException: Could not retrieve endpoint ranges:
at org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:338)
at org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:156)
at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:106)
Caused by: java.lang.IllegalArgumentException
at java.nio.Buffer.limit(Buffer.java:275)
at org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:543)
at org.apache.cassandra.serializers.CollectionSerializer.readValue(CollectionSerializer.java:124)
at org.apache.cassandra.serializers.MapSerializer.deserializeForNativeProtocol(MapSerializer.java:101)
at org.apache.cassandra.serializers.MapSerializer.deserializeForNativeProtocol(MapSerializer.java:30)
at org.apache.cassandra.serializers.CollectionSerializer.deserialize(CollectionSerializer.java:50)
at org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:68)
at org.apache.cassandra.cql3.UntypedResultSet$Row.getMap(UntypedResultSet.java:287)
at org.apache.cassandra.config.CFMetaData.fromSchemaNoTriggers(CFMetaData.java:1824)
at org.apache.cassandra.config.CFMetaData.fromThriftCqlRow(CFMetaData.java:1117)
at org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:330)
... 2 more
the command I am using to load the sstable is
$bin/sstableloader -d nodename -u username -pw password path/to/sstable/keyspacename/tablename
This was working a few days back. I am not sure what changed or how to debug it.
I am using DataStax.
I am loading the sstables from a node which is in the cluster, i.e. my source and destination nodes are the same.
Has anyone seen this error before?
Cassandra version: 2.1
Any help is appreciated.
The exception in the stack trace comes from this piece of code:
if (version >= Server.VERSION_3)
{
    int size = input.getInt();
    if (size < 0)
        return null;

    return ByteBufferUtil.readBytes(input, size); // HERE !
}
I'm wondering if you're loading sstables that have been generated by Cassandra 2.1 or an older version, because the issue seems to be at the byte-encoding level.
There is also a possibility that your SSTables are corrupted.
How did you get those sstables? From a copy of another Cassandra instance? Generated by CQLSSTableWriter?
I had this problem again, so I debugged it a little to find the root cause. The problem is that if at any time you have altered your Cassandra table by dropping a column, it triggers a bug in sstableloader. That's why dropping the table and creating it again works, as illustrated below.
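To illustrate that workaround (keyspace, table, and columns are placeholders): recreate the table so its schema no longer carries the dropped column, then rerun the loader:
cqlsh> DROP TABLE keyspacename.tablename;
cqlsh> CREATE TABLE keyspacename.tablename (id uuid PRIMARY KEY, value text);
$ bin/sstableloader -d nodename -u username -pw password path/to/sstable/keyspacename/tablename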

How to delete graph in Titan with Cassandra storage backend?

I use Titan 0.4.0 (All), running Rexster in shared VM mode on Ubuntu 12.04.
How could I properly delete a graph in Titan which is using the Cassandra storage backend?
I have tried TitanCleanup.clear(graph), but it does not delete everything; the indices are still there. My real issue is that I have an index which I don't want (it crashes every query), but as I understand Titan's documentation, it is impossible to remove an index once it is created.
You can clear all the edges/vertices with:
g.V.remove()
but as you have found, that won't clear the types/indices previously created. The cleanest option would be to just delete the Cassandra data directory.
If you are executing the delete via a unit test you might try to do this as part of your test setup:
this.config = new BaseConfiguration() {{
    addProperty("storage.backend", "berkeleyje")
    addProperty("storage.directory", "/tmp/titan-schema-test")
}}
GraphDatabaseConfiguration graphconfig = new GraphDatabaseConfiguration(config)
graphconfig.getBackend().clearStorage()
g = (StandardTitanGraph) TitanFactory.open(config)
Be sure to call g.shutdown() in your test teardown method.
Just to update this answer: with Titan 1.0.0 this can be done programmatically in Java with:
TitanGraph graph = TitanFactory.open(config);
graph.close();
TitanCleanup.clear(graph);
For the continuation of Titan, called JanusGraph, the command is JanusGraphFactory.clear(graph), but it is soon to be JanusGraphCleanup.clear(graph); see the sketch below.
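A hedged Java sketch of the JanusGraphCleanup variant mentioned above (the properties file path is illustrative):
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;
import org.janusgraph.core.util.JanusGraphCleanup;

JanusGraph graph = JanusGraphFactory.open("conf/janusgraph-cassandra.properties");
graph.close();                  // the graph must be closed before clearing
JanusGraphCleanup.clear(graph); // wipes the data in the storage backend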
As was mentioned in one of the comments on the earlier answer, DROPping the titan keyspace using cqlsh should do it:
cqlsh> DROP KEYSPACE titan;
The name of the keyspace Titan uses is set with the storage.cassandra.keyspace configuration option. You can change it to whatever name you want that is acceptable to Cassandra.
storage.cassandra.keyspace=hello_titan
When Cassandra is starting up, it prints out the keyspace's name as follows:
INFO 19:50:32 Create new Keyspace: KSMetaData{name=hello_titan,
strategyClass=SimpleStrategy, strategyOptions={replication_factor=1},
cfMetaData={}, durableWrites=true,
userTypes=org.apache.cassandra.config.UTMetaData@767d6a9f}
In 0.9.0-M1, the name appears in Titan's log in DEBUG (set log4j.rootLogger=DEBUG, stdout in conf/log4j-server.properties):
[DEBUG] AstyanaxStoreManager - Found keyspace titan
or the following when it doesn't:
[DEBUG] AstyanaxStoreManager - Creating keyspace titan...
[DEBUG] AstyanaxStoreManager - Created keyspace titan

Hive error finding unmapped keyspaces

I am trying to create an external table in Hive as shown on page 88 of the DataStax Enterprise 3.1 documentation.
The statement is below, together with the error message.
What am I doing wrong?
Regards, Hans-Peter
hive> create external table testext (m string, n string, o string, p string)
> STORED BY 'org.apache.hadoop.hive.cassandra.cql3.CqlStorageHandler'
> TBLPROPERTIES ( "cassandra.ks.name" = "cql3ks",
> "cassandra.cf.name" = "test",
> "cassandra.cql3.type" = "text, text, text, text");
FAILED: Error in metadata:
com.datastax.bdp.hadoop.hive.metastore.CassandraHiveMetaStoreException:
There was a problem with the Cassandra Hive MetaStore: Problem finding unmapped
keyspaces
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
2013-10-15 12:47:36,657 WARN conf.HiveConf (HiveConf.java:<clinit>(63)) - DEPRECATED: Ignoring hive-default.xml found on the CLASSPATH at /etc/dse/hive/hive-default.xml
2013-10-15 12:48:41,003 WARN config.DatabaseDescriptor (DatabaseDescriptor.java:loadYaml(253)) - Please rename 'authority' to 'authorizer' in cassandra.yaml
2013-10-15 12:48:42,988 ERROR exec.Task (SessionState.java:printError(400)) - FAILED: Error in metadata: com.datastax.bdp.hadoop.hive.metastore.CassandraHiveMetaStoreException: There was a problem with the Cassandra Hive MetaStore: Problem finding unmapped keyspaces
org.apache.hadoop.hive.ql.metadata.HiveException: com.datastax.bdp.hadoop.hive.metastore.CassandraHiveMetaStoreException: There was a problem with the Cassandra Hive MetaStore: Problem finding unmapped keyspaces
at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:544)
at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:3305)
at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:242)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:134)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1326)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1118)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:951)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:258)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:215)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:406)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:689)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:557)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: com.datastax.bdp.hadoop.hive.metastore.CassandraHiveMetaStoreException: There was a problem with the Cassandra Hive MetaStore: Problem finding unmapped keyspaces
at com.datastax.bdp.hadoop.hive.metastore.SchemaManagerService.createKeyspaceSchemasIfNeeded(SchemaManagerService.java:230)
at com.datastax.bdp.hadoop.hive.metastore.CassandraHiveMetaStore.setConf(CassandraHiveMetaStore.java:112)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
at org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
at org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:346)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:333)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:371)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:278)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:248)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:114)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2092)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2102)
at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:538)
... 17 more
Caused by: com.datastax.bdp.hadoop.hive.metastore.CassandraHiveMetaStoreException: There was a problem with the Cassandra Hive MetaStore: There was a problem retrieving column families for keyspace demo
at com.datastax.bdp.hadoop.hive.metastore.SchemaManagerService.createUnmappedTables(SchemaManagerService.java:277)
at com.datastax.bdp.hadoop.hive.metastore.CassandraHiveMetaStore.getDatabase(CassandraHiveMetaStore.java:148)
at com.datastax.bdp.hadoop.hive.metastore.CassandraHiveMetaStore.getDatabase(CassandraHiveMetaStore.java:136)
at com.datastax.bdp.hadoop.hive.metastore.SchemaManagerService.isKeyspaceMapped(SchemaManagerService.java:186)
at com.datastax.bdp.hadoop.hive.metastore.SchemaManagerService.finUnmappedKeyspaces(SchemaManagerService.java:137)
at com.datastax.bdp.hadoop.hive.metastore.SchemaManagerService.createKeyspaceSchemasIfNeeded(SchemaManagerService.java:224)
... 31 more
Caused by: com.datastax.bdp.hadoop.hive.metastore.CassandraHiveMetaStoreException: There was a problem with the Cassandra Hive MetaStore: Problem creating column mappingsorg.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type)
at com.datastax.bdp.hadoop.hive.metastore.SchemaManagerService.buildTable(SchemaManagerService.java:481)
at com.datastax.bdp.hadoop.hive.metastore.SchemaManagerService.createUnmappedTables(SchemaManagerService.java:254)
... 36 more
Caused by: java.lang.IllegalArgumentException
at java.nio.Buffer.limit(Buffer.java:247)
at org.apache.cassandra.db.marshal.AbstractCompositeType.getBytes(AbstractCompositeType.java:51)
at org.apache.cassandra.db.marshal.AbstractCompositeType.getWithShortLength(AbstractCompositeType.java:60)
at org.apache.cassandra.db.marshal.AbstractCompositeType.getString(AbstractCompositeType.java:226)
at com.datastax.bdp.hadoop.hive.metastore.SchemaManagerService.addTypeToStorageDescriptor(SchemaManagerService.java:846)
at com.datastax.bdp.hadoop.hive.metastore.SchemaManagerService.buildColumnMappings(SchemaManagerService.java:546)
at com.datastax.bdp.hadoop.hive.metastore.SchemaManagerService.buildTable(SchemaManagerService.java:460)
... 37 more
2013-10-15 12:48:42,990 ERROR ql.Driver (SessionState.java:printError(400)) - FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
I am not sure what the actual problem was, but I ran into this while creating a normal table in Hive.
I started Hive with sudo access, and I can now run queries as expected.
$ sudo bin/dse hive
So something that worked for me was to totally wipe out the HiveMetaStore keyspace in Cassandra and recreate just the keyspace with the NetworkTopologyStrategy replication strategy. I made sure to add the Analytics datacenter to the new keyspace as well, so it looked something like:
CREATE KEYSPACE HiveMetaStore WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'Analytics' : 2};
I then restarted DSE on my analytics nodes and they correctly created the MetaStore table within the HiveMetaStore keyspace and everything started working again!

CQL3 select succeeds for some keys, but fails for others

I have a pretty simple CQL3 table:
CREATE TABLE user_team_scores (
    username text,
    team_slug text,
    score double,
    PRIMARY KEY (username, team_slug)
) WITH
    comment='' AND
    caching='KEYS_ONLY' AND
    read_repair_chance=0.100000 AND
    gc_grace_seconds=864000 AND
    replicate_on_write='true' AND
    compaction_strategy_class='SizeTieredCompactionStrategy' AND
    compression_parameters:sstable_compression='SnappyCompressor';
I can run queries like:
select * from user_team_scores where username='paulingalls';
and it succeeds for some usernames, but fails for others with a timeout.
I'm running a 3-node cluster, and on the node that I assume has the token range for the failing username, I see the following stack trace in the logs:
ERROR [ReadStage:66211] 2013-01-03 00:11:26,169 AbstractCassandraDaemon.java (line 135) Exception in thread Thread[ReadStage:66211,5,main]
java.lang.AssertionError: DecoratedKey(82585475460624048733030438888619591812, 001373616e2d6672616e636973636f2d343965727300000573636f726500) != DecoratedKey(45868142482903708675972202481337602533, 7061756c696e67616c6c73) in /mnt/datadrive/lib/cassandra/fanzo/user_team_scores/fanzo-user_team_scores-hf-1-Data.db
at org.apache.cassandra.db.columniterator.SSTableSliceIterator.<init>(SSTableSliceIterator.java:60)
at org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:67)
at org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:79)
at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:256)
at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:64)
at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1345)
at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1207)
at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1142)
at org.apache.cassandra.db.Table.getRow(Table.java:378)
at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:69)
at org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:51)
at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Googling around, I saw some mention of potential issues with caching, so here is the log for the cache setup on DB startup.
INFO [main] 2012-12-01 04:07:53,190 DatabaseDescriptor.java (line 124) Loading settings from file:/home/fanzo/apache-cassandra-1.1.6/conf/cassandra.yaml
INFO [main] 2012-12-01 04:07:53,398 DatabaseDescriptor.java (line 183) DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
INFO [main] 2012-12-01 04:07:53,756 DatabaseDescriptor.java (line 249) Global memtable threshold is enabled at 676MB
INFO [main] 2012-12-01 04:07:54,335 CacheService.java (line 96) Initializing key cache with capacity of 100 MBs.
INFO [main] 2012-12-01 04:07:54,352 CacheService.java (line 107) Scheduling key cache save to each 14400 seconds (going to save all keys).
INFO [main] 2012-12-01 04:07:54,354 CacheService.java (line 121) Initializing row cache with capacity of 0 MBs and provider org.apache.cassandra.cache.SerializingCacheProvider
INFO [main] 2012-12-01 04:07:54,360 CacheService.java (line 133) Scheduling row cache save to each 0 seconds (going to save all keys).
I'm wondering if there is a way to fix this, or if I'm hosed....
Thanks!
Paul
Well, I'm not sure what the cause of the problem was, but running
nodetool invalidatekeycache
nodetool invalidaterowcache
made the problem go away. I'm guessing there is a bug somewhere in the caching code.
Hopefully this will help others...
It's https://issues.apache.org/jira/browse/CASSANDRA-4687, and it is not fixed as of 1.2.0.
