Cassandra 3.11.0
running on Ubuntu 16.04.4 with JRE 8
I am trying to create a table with a uuid field as follows:
CREATE TABLE IF NOT EXISTS policy (
    tenant_id text,
    policy_id uuid,
    name text,
    enabled boolean,
    creation_time bigint,
    PRIMARY KEY (tenant_id, name)
);
After executing this, the schema shows policy_id created as timeuuid instead of uuid.
I saw a similar issue: "creating uuid type field in Cassandra table using CassandraAdminOperations.createTable".
But is that applicable to cqlsh?
I dropped the table/keyspace and tried again, but no luck.
This is an intermittent issue.
I am getting the following exception:
Exception while loading CQL script:
com.datastax.driver.core.exceptions.InvalidQueryException: Type error: cannot assign result of function system.uuid (type uuid) to id (type timeuuid)
at com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:50)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:68)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:43)
I have also seen the following in the Cassandra logs:
org.apache.cassandra.exceptions.ConfigurationException: Column family ID mismatch (found e686a660-8994-11e9-984c-2767f9f5fd28; expected e5d72c80-8994-11e9-b706-831d59206120)
at org.apache.cassandra.config.CFMetaData.validateCompatibility(CFMetaData.java:808) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.config.CFMetaData.apply(CFMetaData.java:770) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.config.Schema.updateTable(Schema.java:621) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.schema.SchemaKeyspace.updateKeyspace(SchemaKeyspace.java:1430) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.schema.SchemaKeyspace.mergeSchema(SchemaKeyspace.java:1386) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.schema.SchemaKeyspace.mergeSchemaAndAnnounceVersion(SchemaKeyspace.java:1336) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.service.MigrationTask$1.response(MigrationTask.java:91) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.net.ResponseVerbHandler.doVerb(ResponseVerbHandler.java:53) [apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:66) [apache-cassandra-3.11.0.jar:3.11.0]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_131]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_131]
at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81) [apache-cassandra-3.11.0.jar:3.11.0]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_131]
policy_id should have the datatype uuid, but it has timeuuid.
Is any additional keyword required in the CQL statement to create the table?
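For reference, a minimal sketch of how the type the cluster actually recorded for policy_id could be read back through the DataStax Java driver 3.x; the contact point and keyspace name below are placeholders, not from the original setup:

import com.datastax.driver.core.Cluster
import scala.collection.JavaConverters._

object CheckPolicyIdType {
  def main(args: Array[String]): Unit = {
    // Placeholder contact point and keyspace; adjust to the actual cluster.
    val cluster = Cluster.builder().addContactPoint("127.0.0.1").build()
    try {
      val session = cluster.connect()
      // system_schema.columns holds the type Cassandra actually stored.
      val rows = session.execute(
        "SELECT column_name, type FROM system_schema.columns " +
          "WHERE keyspace_name = 'my_keyspace' AND table_name = 'policy'")
      rows.iterator().asScala.foreach { row =>
        val name = row.getString("column_name")
        val tpe  = row.getString("type")
        println(s"$name -> $tpe")
      }
    } finally {
      cluster.close()
    }
  }
}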
Related
First, create a dataset with the spark-sql command:
spark.sql("select id ,a.userid,regexp_replace(b.tradeno,',','|') as TradeNo
,Amount ,TradeType ,TxTypeId
,regexp_replace(title,',','|') as title
,status ,tradetime ,TradeStatus
,regexp_replace(otherside,',','') as otherside
from
(
select userid
from tableA
where daykey='2018-10-30'
group by userid
) a
left join tableb b
on a.userid=b.userid
where b.userid is not null")
The result is:
dataset: org.apache.spark.sql.DataFrame = [id: bigint, userid: int ... 9 more fields]
Then, export the dataset as CSV with the command:
dataset.coalesce(40).write.option("delimiter", ",").option("charset", "utf-8").csv("/binlog_test/mycsv.excel")
As the Spark task runs, the following error occurs:
Driver stacktrace:
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1430)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1417)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1417)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:797)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:797)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:797)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1645)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1600)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1589)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:623)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1930)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1943)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1963)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply$mcV$sp(FileFormatWriter.scala:127)
... 69 more
Caused by: java.lang.IllegalArgumentException: Field "id" does not exist.
at org.apache.spark.sql.types.StructType$$anonfun$fieldIndex$1.apply(StructType.scala:290)
at org.apache.spark.sql.types.StructType$$anonfun$fieldIndex$1.apply(StructType.scala:290)
at scala.collection.MapLike$class.getOrElse(MapLike.scala:128)
at scala.collection.AbstractMap.getOrElse(Map.scala:59)
at org.apache.spark.sql.types.StructType.fieldIndex(StructType.scala:289)
at org.apache.spark.sql.hive.orc.OrcRelation$$anonfun$6.apply(OrcFileFormat.scala:308)
at org.apache.spark.sql.hive.orc.OrcRelation$$anonfun$6.apply(OrcFileFormat.scala:308)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at org.apache.spark.sql.types.StructType.foreach(StructType.scala:96)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at org.apache.spark.sql.types.StructType.map(StructType.scala:96)
at org.apache.spark.sql.hive.orc.OrcRelation$.setRequiredColumns(OrcFileFormat.scala:308)
at org.apache.spark.sql.hive.orc.OrcFileFormat$$anonfun$buildReader$2.apply(OrcFileFormat.scala:140)
at org.apache.spark.sql.hive.orc.OrcFileFormat$$anonfun$buildReader$2.apply(OrcFileFormat.scala:129)
at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:138)
at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:122)
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:168)
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:109)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:377)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:126)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
However, when I execute the join directly in Hive, create a new table from the join result, and then export it with the spark-sql command above, everything works fine.
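One way to narrow this down might be to compare the schema Spark resolves for the query with the schema physically present in tableb's ORC files, since the stack trace shows the failure inside OrcFileFormat while the required columns (including id) are being mapped onto the file schema. A rough sketch, assuming the same spark-shell session; the warehouse path below is only a placeholder:

// Schema Spark resolved for the joined dataset.
dataset.printSchema()

// Schema actually stored in tableb's ORC files; the path is a placeholder,
// the real location is shown by: spark.sql("DESCRIBE FORMATTED tableb")
val physical = spark.read.orc("/user/hive/warehouse/tableb")
physical.printSchema()

// If the two disagree (for example, id missing from older files), the
// metastore schema and the file schema have drifted apart, which would
// match the "Field \"id\" does not exist" error above.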
I am running the code below. In the Failure case I print the exception stack trace as INFO in the YARN log. The SQL has a deliberate syntax error so that an exception will be generated, but when I look at the YARN log it shows something unexpected: "Cannot call methods on a stopped SparkContext". I need help with this, in case I am doing something wrong.
Code snippet:
var ret: String = Try {
  DbUtil.dropTable("cls_mkt_tracker_split_rownum", batchDatabase)
  SparkEnvironment.hiveContext.sql(
    s"""CREATE TABLE ${batchDatabase}.CLS_MKT_TRACKER_SPLIT_ROWNUM
        AS SELECT ROW_NUMBER() OVER(PARTITION BY XREF_IMS_PAT_NBR, MOLECULE ORER BY IMS_DSPNSD_DT) AS ROWNUM, *
        FROM ${batchDatabase}.CLS_MKT_TRACKER_SPLIT
     """)  // "ORER BY" is the deliberate syntax error mentioned above
  true
} match {
  case Success(b: Boolean)   => ""
  case Failure(t: Throwable) =>
    logger.info("I am in failure" + t.getMessage + t.getStackTraceString); "failure return"
}
YARN log:
16/12/13 11:19:42 INFO SessionState: No Tez session required at this point. hive.execution.engine=mr.
16/12/13 11:19:43 INFO DateAdjustment: I am in failureCannot call methods on a stopped SparkContext.
This stopped SparkContext was created at:
org.apache.spark.SparkContext.<init>(SparkContext.scala:83)
SparkEnvironment$.<init>(SparkEnvironment.scala:12)
SparkEnvironment$.<clinit>(SparkEnvironment.scala)
DbUtil$.dropTable(DbUtil.scala:8)
DateAdjustment$$anonfun$1.apply$mcZ$sp(DateAdjustment.scala:126)
DateAdjustment$$anonfun$1.apply(DateAdjustment.scala:125)
DateAdjustment$$anonfun$1.apply(DateAdjustment.scala:125)
scala.util.Try$.apply(Try.scala:161)
DateAdjustment$delayedInit$body.apply(DateAdjustment.scala:125)
scala.Function0$class.apply$mcV$sp(Function0.scala:40)
scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
scala.App$$anonfun$main$1.apply(App.scala:71)
scala.App$$anonfun$main$1.apply(App.scala:71)
scala.collection.immutable.List.foreach(List.scala:318)
scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:32)
scala.App$class.main(App.scala:71)
DateAdjustment$.main(DateAdjustment.scala:14)
DateAdjustment.main(DateAdjustment.scala)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
The currently active SparkContext was created at:
(No active SparkContext.)
org.apache.spark.SparkContext.org$apache$spark$SparkContext$$assertNotStopped(SparkContext.scala:107)
I investigated and found that at one place in my application sc.stop() had been called, and that was causing the issue.
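For anyone hitting the same message: it only means something called sc.stop() earlier in the same JVM, so every later Spark call fails with this error instead of the real SQL ParseException. A minimal sketch of one way to structure the job so stop() can only run after all statements have executed; the object name and app name here are illustrative, not from the original code:

import org.apache.spark.{SparkConf, SparkContext}

object BatchJob {
  def main(args: Array[String]): Unit = {
    // Create the context once; nothing else in the job is allowed to stop it.
    val sc = new SparkContext(new SparkConf().setAppName("date-adjustment"))
    try {
      // ... all Try { sql(...) } blocks and other work go here ...
    } finally {
      sc.stop() // the only place stop() is called, after all work is done
    }
  }
}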
In one of our single-node Cassandra deployments, there is this table schema:
CREATE TABLE CTS_SVC_PT_INT_READ (
svc_pt_id bigint,
meas_type_id bigint,
value double,
flags bigint,
read_time timestamp,
last_upd_time timestamp,
PRIMARY KEY (svc_pt_id, meas_type_id, read_time)
) WITH CLUSTERING ORDER BY (meas_type_id ASC, read_time DESC)
AND compaction = {
'class': 'org.apache.cassandra.db.compaction.DateTieredCompactionStrategy',
'timestamp_resolution': 'MILLISECONDS',
'base_time_seconds': '3600',
'max_sstable_age_days': '365'
};
When querying select distinct svc_pt_id from cts.CTS_SVC_PT_INT_READ through the Java client, it fails with the exception:
select distinct svc_pt_id from cts.CTS_SVC_PT_INT_READ com.datastax.driver.core.exceptions.ReadFailureException: Cassandra failure during read query at consistency LOCAL_ONE (1 responses were required but only 0 replica responded, 1 failed)
java.util.concurrent.ExecutionException: com.datastax.driver.core.exceptions.ReadFailureException: Cassandra failure during read query at consistency LOCAL_ONE (1 responses were required but only 0 replica responded, 1 failed)
at com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299)
at com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116)
at rx.internal.operators.OnSubscribeToObservableFuture$ToObservableFuture.call(OnSubscribeToObservableFuture.java:74)
at rx.internal.operators.OnSubscribeToObservableFuture$ToObservableFuture.call(OnSubscribeToObservableFuture.java:43)
at rx.Observable.unsafeSubscribe(Observable.java:8314)
at rx.internal.operators.OperatorSubscribeOn$1.call(OperatorSubscribeOn.java:94)
at rx.internal.schedulers.ScheduledAction.run(ScheduledAction.java:55)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.datastax.driver.core.exceptions.ReadFailureException: Cassandra failure during read query at consistency LOCAL_ONE (1 responses were required but only 0 replica responded, 1 failed)
at com.datastax.driver.core.exceptions.ReadFailureException.copy(ReadFailureException.java:95)
at com.datastax.driver.core.Responses$Error.asException(Responses.java:128)
at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:179)
at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:184)
at com.datastax.driver.core.RequestHandler.access$2500(RequestHandler.java:43)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.setFinalResult(RequestHandler.java:798)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:617)
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1005)
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:928)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:276)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:354)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
... 1 more
Caused by: com.datastax.driver.core.exceptions.ReadFailureException: Cassandra failure during read query at consistency LOCAL_ONE (1 responses were required but only 0 replica responded, 1 failed)
at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:76)
at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:37)
at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:266)
at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:246)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
... 15 more
I see the same error if I issue this CQL command through cqlsh. Is it due to a read timeout or something else?
"ReadFailureException: Cassandra failure ... 0 replica responded, 1 failed)" indicates a failure, not a timeout. You might learn something further by looking at the cassandra log on the server.
I am getting the error below when using JDBC to write a DataFrame to a Hive table. HiveServer2 is running and I can connect to it using Beeline. The Hive logs show the same error message below, and I do not see any additional information in the log.
The table was created in advance using:
CREATE TABLE btiflag_htbl (
`clfentid` CHAR(15),
`clfcust` CHAR(15),
`clfflag` CHAR(2),
`clfcrtdat` CHAR(10),
`clfdtaid` INT,
`indgencod` CHAR(6),
`age` INT,
`indmarsts` CHAR(6),
`indnatcod` CHAR(6),
`quarter` CHAR(8),
`year` INT,
`month` INT)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
STORED AS PARQUET
Write DataFrame to Hive:
val prop = new Properties()
prop.put("user", "user1")
prop.put("password", "pwd12")
prop.put("driver", "org.apache.hive.jdbc.HiveDriver")
val dfWriter = btiflagDF.write.mode(SaveMode.Overwrite)
dfWriter.jdbc("jdbc:hive2://192.168.0.10:10000/database_hive", "btiflag_htbl", prop)
Error:
Exception in thread "main" org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: ParseException line 1:36 cannot recognize input near 'INTEGER' ',' 'CLFCUST' in column type
at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:264)
at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:250)
at org.apache.hive.jdbc.HiveStatement.runAsyncOnServer(HiveStatement.java:309)
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:250)
at org.apache.hive.jdbc.HiveStatement.executeUpdate(HiveStatement.java:448)
at org.apache.spark.sql.DataFrameWriter.jdbc(DataFrameWriter.scala:302)
at com.efx.btiflag_analytics$.main(btiflag_analytics.scala:81)
at com.efx.btiflag_analytics.main(btiflag_analytics.scala)
Caused by: org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: ParseException line 1:36 cannot recognize input near 'INTEGER' ',' 'CLFCUST' in column type
at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:315)
at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:112)
at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:181)
at org.apache.hive.service.cli.operation.Operation.run(Operation.java:257)
at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:388)
at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:375)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78)
at org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36)
at org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59)
at com.sun.proxy.$Proxy20.executeStatementAsync(Unknown Source)
at org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:274)
at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:486)
at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1313)
at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1298)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.RuntimeException: org.apache.hadoop.hive.ql.parse.ParseException:line 1:36 cannot recognize input near 'INTEGER' ',' 'CLFCUST' in column type
at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:205)
at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:396)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:308)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1122)
at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1116)
at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:110)
... 27 more
Any help will be appreciated.
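For what it's worth, the ParseException near 'INTEGER' suggests that the DDL DataFrameWriter.jdbc generates in Overwrite mode (using generic JDBC type names such as INTEGER) is being rejected by this HiveServer2, whose parser expects INT. One commonly used alternative, sketched here under the assumption that the job can be built with Hive support and pointed at the same metastore, is to write through Spark's own Hive integration instead of the Hive JDBC driver:

import org.apache.spark.sql.{DataFrame, SaveMode}

// Assumes the SparkSession is created with Hive support, e.g.:
//   SparkSession.builder().appName("btiflag_analytics").enableHiveSupport().getOrCreate()
object HiveWriteSketch {
  // Writes into the pre-created table without any JDBC-generated DDL.
  def writeToHive(btiflagDF: DataFrame): Unit = {
    btiflagDF.write
      .mode(SaveMode.Overwrite)
      // insertInto is position-based: the DataFrame columns must match the
      // column order of database_hive.btiflag_htbl created above.
      .insertInto("database_hive.btiflag_htbl")
  }
}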
I'm trying to load data with sstableloader in Cassandra 2.0.7.
The terminal shows 100% progress. I check netstats with nodetool netstats.
It shows:
Mode: NORMAL
Bulk Load 21d7d610-a5f2-11e5-baa7-8fc95be03ac4
/10.19.150.70
Receiving 4 files, 138895248 bytes total
/root/data/whatyyun/metadata/whatyyun-metadata-tmp-jb-8-Data.db 67039680/67039680 bytes(100%) received from /10.19.150.70
/root/data/whatyyun/metadata/whatyyun-metadata-tmp-jb-10-Data.db 3074549/3074549 bytes(100%) received from /10.19.150.70
/root/data/whatyyun/metadata/whatyyun-metadata-tmp-jb-9-Data.db 43581052/43581052 bytes(100%) received from /10.19.150.70
/root/data/whatyyun/metadata/whatyyun-metadata-tmp-jb-7-Data.db 25199967/25199967 bytes(100%) received from /10.19.150.70
Read Repair Statistics:
Attempted: 0
Mismatch (Blocking): 0
Mismatch (Background): 0
Pool Name Active Pending Completed
Commands n/a 0 0
Responses n/a 0 11671
The sstableloader hangs for hours. When I check the log, there is an error that may be related:
ERROR [CompactionExecutor:7] 2015-12-19 09:45:53,811 CassandraDaemon.java (line 198) Exception in thread Thread[CompactionExecutor:7,1,main]
java.lang.IndexOutOfBoundsException
at java.nio.Buffer.checkIndex(Buffer.java:532)
at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:139)
at org.apache.cassandra.db.marshal.TimeUUIDType.compareTimestampBytes(TimeUUIDType.java:62)
at org.apache.cassandra.db.marshal.TimeUUIDType.compare(TimeUUIDType.java:51)
at org.apache.cassandra.db.marshal.TimeUUIDType.compare(TimeUUIDType.java:31)
at org.apache.cassandra.dht.LocalToken.compareTo(LocalToken.java:44)
at org.apache.cassandra.db.DecoratedKey.compareTo(DecoratedKey.java:85)
at org.apache.cassandra.db.DecoratedKey.compareTo(DecoratedKey.java:36)
at java.util.concurrent.ConcurrentSkipListMap.findNode(ConcurrentSkipListMap.java:804)
at java.util.concurrent.ConcurrentSkipListMap.doGet(ConcurrentSkipListMap.java:828)
at java.util.concurrent.ConcurrentSkipListMap.get(ConcurrentSkipListMap.java:1626)
at org.apache.cassandra.db.Memtable.resolve(Memtable.java:215)
at org.apache.cassandra.db.Memtable.put(Memtable.java:173)
at org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:893)
at org.apache.cassandra.db.index.AbstractSimplePerColumnSecondaryIndex.insert(AbstractSimplePerColumnSecondaryIndex.java:107)
at org.apache.cassandra.db.index.SecondaryIndexManager.indexRow(SecondaryIndexManager.java:441)
at org.apache.cassandra.db.Keyspace.indexRow(Keyspace.java:407)
at org.apache.cassandra.db.index.SecondaryIndexBuilder.build(SecondaryIndexBuilder.java:62)
at org.apache.cassandra.db.compaction.CompactionManager$9.run(CompactionManager.java:833)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
ERROR [NonPeriodicTasks:1] 2015-12-19 09:45:53,812 CassandraDaemon.java (line 198) Exception in thread Thread[NonPeriodicTasks:1,5,main]
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.IndexOutOfBoundsException
at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:413)
at org.apache.cassandra.db.index.SecondaryIndexManager.maybeBuildSecondaryIndexes(SecondaryIndexManager.java:142)
at org.apache.cassandra.streaming.StreamReceiveTask$OnCompletionRunnable.run(StreamReceiveTask.java:113)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: java.lang.IndexOutOfBoundsException
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:188)
at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:409)
... 9 more
Caused by: java.lang.IndexOutOfBoundsException
at java.nio.Buffer.checkIndex(Buffer.java:532)
at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:139)
at org.apache.cassandra.db.marshal.TimeUUIDType.compareTimestampBytes(TimeUUIDType.java:62)
at org.apache.cassandra.db.marshal.TimeUUIDType.compare(TimeUUIDType.java:51)
at org.apache.cassandra.db.marshal.TimeUUIDType.compare(TimeUUIDType.java:31)
at org.apache.cassandra.dht.LocalToken.compareTo(LocalToken.java:44)
at org.apache.cassandra.db.DecoratedKey.compareTo(DecoratedKey.java:85)
at org.apache.cassandra.db.DecoratedKey.compareTo(DecoratedKey.java:36)
at java.util.concurrent.ConcurrentSkipListMap.findNode(ConcurrentSkipListMap.java:804)
at java.util.concurrent.ConcurrentSkipListMap.doGet(ConcurrentSkipListMap.java:828)
at java.util.concurrent.ConcurrentSkipListMap.get(ConcurrentSkipListMap.java:1626)
at org.apache.cassandra.db.Memtable.resolve(Memtable.java:215)
at org.apache.cassandra.db.Memtable.put(Memtable.java:173)
at org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:893)
at org.apache.cassandra.db.index.AbstractSimplePerColumnSecondaryIndex.insert(AbstractSimplePerColumnSecondaryIndex.java:107)
at org.apache.cassandra.db.index.SecondaryIndexManager.indexRow(SecondaryIndexManager.java:441)
at org.apache.cassandra.db.Keyspace.indexRow(Keyspace.java:407)
at org.apache.cassandra.db.index.SecondaryIndexBuilder.build(SecondaryIndexBuilder.java:62)
at org.apache.cassandra.db.compaction.CompactionManager$9.run(CompactionManager.java:833)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
... 3 more
The schema of the table is as follows:
CREATE TABLE metadata (
userid timeuuid,
dirname text,
basename text,
ctime timestamp,
fileid timeuuid,
imagefileid timeuuid,
imagefilesize int,
mtime timestamp,
nodetype int,
showname text,
size bigint,
timelong text,
PRIMARY KEY (userid, dirname, basename, ctime)
) WITH
bloom_filter_fp_chance=0.010000 AND
caching='KEYS_ONLY' AND
comment='' AND
dclocal_read_repair_chance=0.000000 AND
gc_grace_seconds=864000 AND
index_interval=128 AND
read_repair_chance=0.100000 AND
replicate_on_write='true' AND
populate_io_cache_on_flush='false' AND
default_time_to_live=0 AND
speculative_retry='99.0PERCENTILE' AND
memtable_flush_period_in_ms=0 AND
compaction={'class': 'SizeTieredCompactionStrategy'} AND
compression={'sstable_compression': 'LZ4Compressor'};
CREATE INDEX idx_fileid ON metadata (fileid);
CREATE INDEX idx_nodetype ON metadata (nodetype);
Can I kill the sstableloader process safely? Has this bulk load finished?
You should try to increase the heap for sstableloader:
vim $(which sstableloader)
# original line, kept commented out for reference:
#"$JAVA" $JAVA_AGENT -ea -cp "$CLASSPATH" $JVM_OPTS -Xmx$MAX_HEAP_SIZE \
"$JAVA" $JAVA_AGENT -ea -cp "$CLASSPATH" $JVM_OPTS -XX:+UseG1GC -Xmx10G -Xms10G -XX:+UseTLAB -XX:+ResizeTLAB \
    -Dcassandra.storagedir="$cassandra_storagedir" \
    -Dlogback.configurationFile=logback-tools.xml \
    org.apache.cassandra.tools.BulkLoader "$@"
I hope that solves your issue.
Your node is probably running out of resources, perhaps due to heavy load or another process.
Try restarting Cassandra on the nodes running under high load and see if that helps.
From the above error it seems you are hitting a resource crunch, so you need to tune the memory settings, namely Xms and Xmx (the minimum and maximum heap sizes), in the cassandra-env.sh file for older versions of Cassandra (i.e. 2.x).
After tuning the above, restart your node/cluster and try loading again.