JNA link issue while starting Cassandra on RHEL 6.5 - Linux

I am trying to set up Cassandra on my RHEL 6.5 server. When I start Cassandra, I get an ERROR related to JNA; the exception says the class was not found. However, I see in the logs that the JNA jar is added to the classpath. I tried both apache-cassandra-3.0.0 and apache-cassandra-2.2.3 and get the same exception with both. The JNA jar is available in $CASSANDRA_HOME/lib and also in /usr/share/java; the installed JNA version is 4.0.0. Any help is appreciated. The startup logs follow -
INFO 05:57:57 Classpath: /home/cassandra-new/apache-cassandra-2.2.3/conf:/home/cassandra-new/apache-cassandra-2.2.3/build/classes/main:/home/cassandra-new/apache-cassandra-2.2.3/build/classes/thrift:/home/cassandra-new/apache-cassandra-2.2.3/lib/airline-0.6.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/antlr-runtime-3.5.2.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/apache-cassandra-2.2.3.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/apache-cassandra-clientutil-2.2.3.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/apache-cassandra-thrift-2.2.3.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/cassandra-driver-core-2.2.0-rc2-SNAPSHOT-20150617-shaded.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/commons-cli-1.1.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/commons-codec-1.2.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/commons-lang3-3.1.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/commons-math3-3.2.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/compress-lzf-0.8.4.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/concurrentlinkedhashmap-lru-1.4.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/crc32ex-0.1.1.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/disruptor-3.0.1.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/ecj-4.4.2.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/guava-16.0.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/high-scale-lib-1.0.6.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/jackson-core-asl-1.9.2.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/jackson-mapper-asl-1.9.2.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/jamm-0.3.0.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/javax.inject.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/jbcrypt-0.3m.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/jcl-over-slf4j-1.7.7.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/jna-4.0.0.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/joda-time-2.4.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/json-simple-1.1.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/libthrift-0.9.2.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/log4j-over-slf4j-1.7.7.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/logback-classic-1.1.3.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/logback-core-1.1.3.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/lz4-1.3.0.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/metrics-core-3.1.0.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/metrics-logback-3.1.0.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/netty-all-4.0.23.Final.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/ohc-core-0.3.4.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/ohc-core-j8-0.3.4.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/reporter-config3-3.0.0.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/reporter-config-base-3.0.0.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/sigar-1.6.4.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/slf4j-api-1.7.7.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/snakeyaml-1.11.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/snappy-java-1.1.1.7.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/ST4-4.0.8.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/stream-2.5.2.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/super-csv-2.1.0.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/thrift-server-0.3.7.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/jsr223//.jar:/home/cassandra-new/apache-cassandra-2.2.3/lib/jamm-0.3.0.jar
WARN 05:57:57 JNA link failure, one or more native method will be unavailable.
WARN 05:57:57 JMX is not enabled to receive remote connections. Please see cassandra-env.sh for more info.
INFO 05:57:57 Initializing SIGAR library
WARN 05:57:57 Cassandra server running in degraded mode. Is swap disabled? : false, Address space adequate? : true, nofile limit adequate? : false, nproc limit adequate? : true
INFO 05:57:58 Initializing system.sstable_activity
INFO 05:57:58 Initializing system.hints
INFO 05:57:58 Initializing system.compaction_history
INFO 05:57:58 Initializing system.peers
INFO 05:57:58 Initializing system.schema_columnfamilies
INFO 05:57:59 Initializing system.schema_functions
INFO 05:57:59 Initializing system.IndexInfo
INFO 05:57:59 Initializing system.schema_columns
INFO 05:57:59 Initializing system.schema_triggers
INFO 05:57:59 Initializing system.local
INFO 05:57:59 Initializing system.schema_usertypes
INFO 05:57:59 Initializing system.batchlog
INFO 05:57:59 Initializing system.available_ranges
INFO 05:57:59 Initializing system.schema_aggregates
INFO 05:57:59 Initializing system.paxos
INFO 05:57:59 Initializing system.peer_events
INFO 05:57:59 Initializing system.size_estimates
INFO 05:57:59 Initializing system.compactions_in_progress
INFO 05:57:59 Initializing system.schema_keyspaces
INFO 05:57:59 Initializing system.range_xfers
ERROR 05:57:59 Exception in thread Thread[MemtableFlushWriter:1,5,main]
java.lang.NoClassDefFoundError: Could not initialize class com.sun.jna.Native
at org.apache.cassandra.utils.memory.MemoryUtil.allocate(MemoryUtil.java:82) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.util.Memory.&lt;init&gt;(Memory.java:74) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.util.SafeMemory.&lt;init&gt;(SafeMemory.java:32) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.compress.CompressionMetadata$Writer.&lt;init&gt;(CompressionMetadata.java:274) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.compress.CompressionMetadata$Writer.open(CompressionMetadata.java:288) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.compress.CompressedSequentialWriter.&lt;init&gt;(CompressedSequentialWriter.java:73) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.util.SequentialWriter.open(SequentialWriter.java:168) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.sstable.format.big.BigTableWriter.&lt;init&gt;(BigTableWriter.java:75) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.sstable.format.big.BigFormat$WriterFactory.open(BigFormat.java:107) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.sstable.format.SSTableWriter.create(SSTableWriter.java:84) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.db.Memtable$FlushRunnable.createFlushWriter(Memtable.java:424) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:367) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.db.Memtable$FlushRunnable.runMayThrow(Memtable.java:352) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[apache-cassandra-2.2.3.jar:2.2.3]
at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297) ~[guava-16.0.jar:na]
at org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1134) ~[apache-cassandra-2.2.3.jar:2.2.3]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_65]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[na:1.8.0_65]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_65]

I went through the code in CLibrary.java and found the following code, where the exception is caught -
catch (UnsatisfiedLinkError e)
{
    logger.warn("JNA link failure, one or more native method will be unavailable.");
    logger.trace("JNA link failure details: {}", e.getMessage());
}
I restarted Cassandra after changing the log level in conf/logback.xml to TRACE, to print that extra detail -
<logger name="org.apache.cassandra" level="TRACE"/>
I could now see the real issue -
/tmp/jna-3506402/jna6068045839690239595.tmp: failed to map segment from shared object: Operation not permitted
This issue is caused by the noexec mount flag on the /tmp folder.
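You can confirm the mount flag before changing anything (the output shown here is illustrative):
mount | grep /tmp
tmpfs on /tmp type tmpfs (rw,noexec,nosuid,nodev)
JNA extracts its native library into java.io.tmpdir and then loads it, and mapping the library executable fails with "Operation not permitted" on a filesystem mounted noexec.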
I then decided to change the temp directory by overriding java.io.tmpdir with the option:
-Djava.io.tmpdir=/home/cassandra/tmp
That fixed the issue.
To make the fix permanent, I added the following statement to the conf/cassandra-env.sh file -
JVM_OPTS="$JVM_OPTS -Djava.io.tmpdir=/home/cassandra/tmp"
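Before restarting, make sure the new directory exists and is writable by the user running Cassandra (the user/group below are an assumption; adjust them to your setup):
mkdir -p /home/cassandra/tmp
chown cassandra:cassandra /home/cassandra/tmp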

Related

Databricks 6.1 no database named global_temp error when initializing metastore connection

When initializing the Hive metastore connection (saving a DataFrame as a table for the first time) on a cluster running Databricks 6.1 (includes Apache Spark 2.4.4, Scala 2.11) on Azure, I can see the health check for the database global_temp failing with the error:
20/02/18 12:11:17 INFO HiveUtils: Initializing HiveMetastoreConnection version 0.13.0 using file:
...
20/02/18 12:11:21 INFO HiveMetaStore: 0: get_database: global_temp
20/02/18 12:11:21 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_database: global_temp
20/02/18 12:11:21 ERROR RetryingHMSHandler: NoSuchObjectException(message:There is no database named global_temp)
at org.apache.hadoop.hive.metastore.ObjectStore.getMDatabase(ObjectStore.java:487)
at org.apache.hadoop.hive.metastore.ObjectStore.getDatabase(ObjectStore.java:498)
...
at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:430)
...
at py4j.GatewayConnection.run(GatewayConnection.java:251)
at java.lang.Thread.run(Thread.java:748)
This doesn't cause the Python script to fail, but it pollutes the logs.
Shouldn't the global_temp database be created automatically?
Can the check be switched off, or the error suppressed?

How to understand the YARN appattempt log and diagnostics?

I am running a Spark application on a YARN cluster (on AWS EMR). The application seems to have been killed and I want to find the cause. I am trying to understand the YARN info given in the following screen.
The diagnostic line in the screen seems to show that YARN killed the app because of the memory limit:
Diagnostics: Container [pid=1540,containerID=container_1488651686158_0012_02_000001] is running beyond physical memory limits. Current usage: 1.6 GB of 1.4 GB physical memory used; 3.6 GB of 6.9 GB virtual memory used. Killing container.
However, the appattempt log shows a completely different exception, something related to IO/network. My question is: should I trust the diagnostic in the screen or the appattempt log? Did the IO exception cause the kill, or did running out of memory cause the IO exception in the appattempt log? Is there another log/diagnostic I should look at? Thanks.
17/03/04 21:59:02 ERROR Utils: Uncaught exception in thread task-result-getter-0
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:202)
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:218)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190)
at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
at scala.concurrent.Await$.result(package.scala:190)
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:190)
at org.apache.spark.network.BlockTransferService.fetchBlockSync(BlockTransferService.scala:104)
at org.apache.spark.storage.BlockManager.getRemoteBytes(BlockManager.scala:579)
at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply$mcV$sp(TaskResultGetter.scala:82)
at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply(TaskResultGetter.scala:63)
at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply(TaskResultGetter.scala:63)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1951)
at org.apache.spark.scheduler.TaskResultGetter$$anon$3.run(TaskResultGetter.scala:62)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Exception in thread "task-result-getter-0" java.lang.Error: java.lang.InterruptedException
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1148)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:202)
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:218)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190)
at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
at scala.concurrent.Await$.result(package.scala:190)
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:190)
at org.apache.spark.network.BlockTransferService.fetchBlockSync(BlockTransferService.scala:104)
at org.apache.spark.storage.BlockManager.getRemoteBytes(BlockManager.scala:579)
at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply$mcV$sp(TaskResultGetter.scala:82)
at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply(TaskResultGetter.scala:63)
at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply(TaskResultGetter.scala:63)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1951)
at org.apache.spark.scheduler.TaskResultGetter$$anon$3.run(TaskResultGetter.scala:62)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
... 2 more
17/03/04 21:59:02 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/03/04 21:59:02 ERROR TransportResponseHandler: Still have 1 requests outstanding when connection from ip-172-31-9-207.ec2.internal/172.31.9.207:38437 is closed
17/03/04 21:59:02 INFO RetryingBlockFetcher: Retrying fetch (1/3) for 1 outstanding blocks after 5000 ms
17/03/04 21:59:02 ERROR DiskBlockManager: Exception while deleting local spark dir: /mnt/yarn/usercache/hadoop/appcache/application_1488651686158_0012/blockmgr-941a13d8-1b31-4347-bdec-180125b6f4ca
java.io.IOException: Failed to delete: /mnt/yarn/usercache/hadoop/appcache/application_1488651686158_0012/blockmgr-941a13d8-1b31-4347-bdec-180125b6f4ca
at org.apache.spark.util.Utils$.deleteRecursively(Utils.scala:1010)
at org.apache.spark.storage.DiskBlockManager$$anonfun$org$apache$spark$storage$DiskBlockManager$$doStop$1.apply(DiskBlockManager.scala:169)
at org.apache.spark.storage.DiskBlockManager$$anonfun$org$apache$spark$storage$DiskBlockManager$$doStop$1.apply(DiskBlockManager.scala:165)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at org.apache.spark.storage.DiskBlockManager.org$apache$spark$storage$DiskBlockManager$$doStop(DiskBlockManager.scala:165)
at org.apache.spark.storage.DiskBlockManager.stop(DiskBlockManager.scala:160)
at org.apache.spark.storage.BlockManager.stop(BlockManager.scala:1361)
at org.apache.spark.SparkEnv.stop(SparkEnv.scala:89)
at org.apache.spark.SparkContext$$anonfun$stop$11.apply$mcV$sp(SparkContext.scala:1842)
at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1283)
at org.apache.spark.SparkContext.stop(SparkContext.scala:1841)
at org.apache.spark.SparkContext$$anonfun$2.apply$mcV$sp(SparkContext.scala:581)
at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:216)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1951)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
at scala.util.Try$.apply(Try.scala:192)
at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
17/03/04 21:59:02 INFO MemoryStore: MemoryStore cleared
17/03/04 21:59:02 INFO BlockManager: BlockManager stopped
17/03/04 21:59:02 INFO BlockManagerMaster: BlockManagerMaster stopped
17/03/04 21:59:02 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/03/04 21:59:02 ERROR Utils: Uncaught exception in thread Thread-3
java.lang.NoClassDefFoundError: Could not initialize class java.nio.file.FileSystems$DefaultFileSystemHolder
at java.nio.file.FileSystems.getDefault(FileSystems.java:176)
at java.nio.file.Paths.get(Paths.java:138)
at org.apache.spark.util.Utils$.isSymlink(Utils.scala:1021)
at org.apache.spark.util.Utils$.deleteRecursively(Utils.scala:991)
at org.apache.spark.SparkEnv.stop(SparkEnv.scala:102)
at org.apache.spark.SparkContext$$anonfun$stop$11.apply$mcV$sp(SparkContext.scala:1842)
at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1283)
at org.apache.spark.SparkContext.stop(SparkContext.scala:1841)
at org.apache.spark.SparkContext$$anonfun$2.apply$mcV$sp(SparkContext.scala:581)
at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:216)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1951)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
at scala.util.Try$.apply(Try.scala:192)
at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
17/03/04 21:59:02 WARN ShutdownHookManager: ShutdownHook '$anon$2' failed, java.lang.NoClassDefFoundError: Could not initialize class java.nio.file.FileSystems$DefaultFileSystemHolder
java.lang.NoClassDefFoundError: Could not initialize class java.nio.file.FileSystems$DefaultFileSystemHolder
at java.nio.file.FileSystems.getDefault(FileSystems.java:176)
at java.nio.file.Paths.get(Paths.java:138)
at org.apache.spark.util.Utils$.isSymlink(Utils.scala:1021)
at org.apache.spark.util.Utils$.deleteRecursively(Utils.scala:991)
at org.apache.spark.SparkEnv.stop(SparkEnv.scala:102)
at org.apache.spark.SparkContext$$anonfun$stop$11.apply$mcV$sp(SparkContext.scala:1842)
at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1283)
at org.apache.spark.SparkContext.stop(SparkContext.scala:1841)
at org.apache.spark.SparkContext$$anonfun$2.apply$mcV$sp(SparkContext.scala:581)
at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:216)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1951)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
at scala.util.Try$.apply(Try.scala:192)
at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
The information in your screenshot is the most relevant: your ApplicationMaster container ran out of memory. You need to increase yarn.app.mapreduce.am.resource.mb, which is set in mapred-site.xml. I recommend a value of 2000, since that will usually accommodate running Spark and MapReduce applications at scale.
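For reference, a sketch of that change (the file location is typical for EMR and may differ on your cluster): add the following property to mapred-site.xml on the master node, then restart the YARN services.
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>2000</value>
</property>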
The container was killed (memory exceeded the physical memory limit), so any attempt to reach this container fails.
YARN is fine for an overall view of the process, but you should prefer the Spark history server to analyse your job in more depth (check for unbalanced memory in the Spark history).
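If you want the full picture rather than the UI snippet, you can also pull the aggregated container logs once the application has finished (the application ID here is taken from the log lines above):
yarn logs -applicationId application_1488651686158_0012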

Spark UI's kill is not killing Driver

I am trying to kill my Spark-Kafka streaming job from the Spark UI. It is able to kill the application, but the driver is still running.
Can anyone help me with this? I am fine with my other streaming jobs; only one of the streaming jobs gives this problem every time.
I can't kill the driver through the command line or the Spark UI. The Spark Master is alive.
The output I collected from the logs is -
16/10/25 03:14:25 INFO BlockManagerMaster: Removed 0 successfully in removeExecutor
16/10/25 03:14:25 INFO SparkUI: Stopped Spark web UI at http://***:4040
16/10/25 03:14:25 INFO SparkDeploySchedulerBackend: Shutting down all executors
16/10/25 03:14:25 INFO SparkDeploySchedulerBackend: Asking each executor to shut down
16/10/25 03:14:35 INFO AppClient: Stop request to Master timed out; it may already be shut down.
16/10/25 03:14:35 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/10/25 03:14:35 INFO MemoryStore: MemoryStore cleared
16/10/25 03:14:35 INFO BlockManager: BlockManager stopped
16/10/25 03:14:35 INFO BlockManagerMaster: BlockManagerMaster stopped
16/10/25 03:14:35 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/10/25 03:14:35 INFO SparkContext: Successfully stopped SparkContext
16/10/25 03:14:35 ERROR Inbox: Ignoring error
org.apache.spark.SparkException: Exiting due to error from cluster scheduler: Master removed our application: KILLED
at org.apache.spark.scheduler.TaskSchedulerImpl.error(TaskSchedulerImpl.scala:438)
at org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend.dead(SparkDeploySchedulerBackend.scala:124)
at org.apache.spark.deploy.client.AppClient$ClientEndpoint.markDead(AppClient.scala:264)
at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anonfun$receive$1.applyOrElse(AppClient.scala:172)
at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:116)
at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204)
at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
16/10/25 03:14:35 WARN NettyRpcEnv: Ignored message: true
16/10/25 03:14:35 WARN AppClient$ClientEndpoint: Connection to master:7077 failed; waiting for master to reconnect...
16/10/25 03:14:35 WARN AppClient$ClientEndpoint: Connection to master:7077 failed; waiting for master to reconnect...
Get the running driver ID from the Spark UI, and hit the POST REST call (against the Spark master's REST port, typically 6066) to kill the pipeline. I have tested it with Spark 1.6.1:
curl -X POST http://localhost:6066/v1/submissions/kill/driverId
Hope it helps...
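For example, with an illustrative driver ID (standalone driver IDs look like driver-&lt;timestamp&gt;-&lt;seq&gt;; read the real one off the Spark UI or the master log):
curl -X POST http://localhost:6066/v1/submissions/kill/driver-20161025031425-0001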

Cassandra: Exiting due to error while processing commit log during initialization

I was loading a large CSV into Cassandra using cassandra-loader.
The VM ran out of disk space during this process and crashed. I allocated more disk space to the VM and tried starting Cassandra, but it refused to start due to problems with the SSTables and the commit log.
I could not run nodetool repair, as it only works when the node is online.
I ran sstablescrub, which took about an hour to finish, so I thought it might have fixed the problem.
But I still get this error in system.log:
ERROR [SSTableBatchOpen:4] 2015-10-23 18:57:45,035 SSTableReader.java:506 - Corrupt sstable /var/lib/cassandra/data/keyspace1/location-777a33d0772911e597a98b820c5778a4/la-1709-big=[TOC.txt, CompressionInfo.db, Statistics.db, Digest.adler32, Data.db, Index.db, Filter.db]; skipping table
org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
at org.apache.cassandra.io.compress.CompressionMetadata.<init>(CompressionMetadata.java:125) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:86) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:142) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:101) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:187) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:179) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:703) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:664) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:458) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:363) ~[apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.io.sstable.format.SSTableReader$4.run(SSTableReader.java:501) ~[apache-cassandra-2.2.3.jar:2.2.3]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_60]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_60]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_60]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_60]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
Caused by: java.io.EOFException: null
at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:340) ~[na:1.8.0_60]
at java.io.DataInputStream.readUTF(DataInputStream.java:589) ~[na:1.8.0_60]
at java.io.DataInputStream.readUTF(DataInputStream.java:564) ~[na:1.8.0_60]
at org.apache.cassandra.io.compress.CompressionMetadata.<init>(CompressionMetadata.java:96) ~[apache-cassandra-2.2.3.jar:2.2.3]
... 15 common frames omitted
INFO [main] 2015-10-23 18:57:45,193 ColumnFamilyStore.java:382 - Initializing system_auth.role_permissions
INFO [main] 2015-10-23 18:57:45,201 ColumnFamilyStore.java:382 - Initializing system_auth.resource_role_permissons_index
INFO [main] 2015-10-23 18:57:45,213 ColumnFamilyStore.java:382 - Initializing system_auth.roles
INFO [main] 2015-10-23 18:57:45,233 ColumnFamilyStore.java:382 - Initializing system_auth.role_members
INFO [main] 2015-10-23 18:57:45,240 ColumnFamilyStore.java:382 - Initializing system_traces.sessions
INFO [main] 2015-10-23 18:57:45,252 ColumnFamilyStore.java:382 - Initializing system_traces.events
INFO [main] 2015-10-23 18:57:45,265 ColumnFamilyStore.java:382 - Initializing simplex.songs
INFO [main] 2015-10-23 18:57:45,276 ColumnFamilyStore.java:382 - Initializing simplex.playlists
INFO [pool-2-thread-1] 2015-10-23 18:57:45,289 AutoSavingCache.java:187 - reading saved cache /var/lib/cassandra/saved_caches/KeyCache-ca.db
INFO [pool-2-thread-1] 2015-10-23 18:57:45,313 AutoSavingCache.java:163 - Completed loading (25 ms; 36 keys) KeyCache cache
INFO [main] 2015-10-23 18:57:45,351 CommitLog.java:168 - Replaying /var/lib/cassandra/commitlog/CommitLog-5-1445578022702.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022703.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022704.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022705.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022706.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022707.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022708.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022709.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022710.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022712.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022713.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022714.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022715.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022716.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022717.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022719.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022720.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022721.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022722.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022723.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022724.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022725.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022727.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022728.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022730.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022731.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022732.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022733.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022734.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022736.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022738.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022740.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022741.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022743.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022744.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022745.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022746.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022748.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022749.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022750.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022751.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022752.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022753.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022755.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022756.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022758.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022759.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022760.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022761.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022763.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022764.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022765.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022766.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022767.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022768.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022769.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022770.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022771.log, 
/var/lib/cassandra/commitlog/CommitLog-5-1445578022772.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022773.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022774.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022775.log, /var/lib/cassandra/commitlog/CommitLog-5-1445578022776.log, /var/lib/cassandra/commitlog/CommitLog-5-1445588991268.log, /var/lib/cassandra/commitlog/CommitLog-5-1445589094722.log, /var/lib/cassandra/commitlog/CommitLog-5-1445589149527.log, /var/lib/cassandra/commitlog/CommitLog-5-1445595828633.log, /var/lib/cassandra/commitlog/CommitLog-5-1445595898055.log, /var/lib/cassandra/commitlog/CommitLog-5-1445596033717.log, /var/lib/cassandra/commitlog/CommitLog-5-1445596400441.log, /var/lib/cassandra/commitlog/CommitLog-5-1445596601854.log, /var/lib/cassandra/commitlog/CommitLog-5-1445598032544.log, /var/lib/cassandra/commitlog/CommitLog-5-1445598758663.log, /var/lib/cassandra/commitlog/CommitLog-5-1445601112953.log, /var/lib/cassandra/commitlog/CommitLog-5-1445601937334.log, /var/lib/cassandra/commitlog/CommitLog-5-1445601985416.log, /var/lib/cassandra/commitlog/CommitLog-5-1445604504389.log, /var/lib/cassandra/commitlog/CommitLog-5-1445606516196.log
ERROR [main] 2015-10-23 18:59:05,091 JVMStabilityInspector.java:78 - Exiting due to error while processing commit log during initialization.
org.apache.cassandra.db.commitlog.CommitLogReplayer$CommitLogReplayException: Mutation checksum failure at 4110758 in CommitLog-5-1445578022776.log
at org.apache.cassandra.db.commitlog.CommitLogReplayer.handleReplayError(CommitLogReplayer.java:622) [apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.db.commitlog.CommitLogReplayer.replaySyncSection(CommitLogReplayer.java:492) [apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:388) [apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:147) [apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:189) [apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:169) [apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:273) [apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:513) [apache-cassandra-2.2.3.jar:2.2.3]
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:622) [apache-cassandra-2.2.3.jar:2.2.3]
Exiting due to error while processing commit log during initialization.
How do I fix this? This is test data, so I am okay with losing it. How would one handle this in production to avoid data loss?
I tried setting disk_failure_policy: ignore so that I could run nodetool repair once the server is up, but the server does not start even with this setting.
I am operating a single node and the replication factor is 1. Would having more nodes and a replication factor greater than 1 have enabled me to fix an issue like this without data loss?
I am using Cassandra 2.2.3.
Since you don't care about the data, removing the files from the commit log directory (/var/lib/cassandra/commitlog in your logs) should be the easiest solution.
Having some replication would surely help you fix this without data loss, but it would come at a price.
Suppose that despite all your efforts you cannot manage to recover your corrupted sstable, so you decide to remove it from your file system in order to start Cassandra again. If you do not have replication, your data is lost. But if you have replication on the cluster, you can fetch the data from the other nodes. That is what nodetool repair does!
So nodetool repair does not repair corrupted sstables. Basically, nodetool repair compares tables from node to node to find missing or inconsistent data and then repairs it. You can find more information on how it works here.
However, nodetool repair is very expensive: it takes a long time and uses a lot of CPU, disk and network. There is a good post about repair benefits and drawbacks.
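For reference, a typical invocation once the node is back online (the keyspace name is taken from the logs above; -pr limits the repair to this node's primary ranges):
nodetool repair -pr keyspace1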
This is how I fixed the problem with commit logs.
You should only do this if you don't care about preserving the state of your commit logs.
First I try to restart Cassandra using
sudo systemctl restart cassandra
Then I check
systemctl status cassandra
and see that the status is 'exited', so there is a problem. I check the Cassandra logs using
sudo less /var/log/cassandra/system.log
and see org.apache.cassandra.db.commitlog.CommitLogReplayer$CommitLogReplayException: Could not read commit log descriptor in file /var/lib/cassandra/commitlog/CommitLog-6-1498210233635.log
Because I don't care about preserving the state of Cassandra, I delete all of the commit logs and it now boots up fine:
sudo rm /var/lib/cassandra/commitlog/CommitLog*
sudo systemctl restart cassandra
systemctl status cassandra (should confirm that it is now running)
Simply go into the commit log directory of Cassandra and delete the log files.
It works fine...

Error creating transactional connection factory when running a Spark on Hive project in IDEA

I am trying to set up a development environment for a Spark Streaming project which requires writing data into Hive.
I have a cluster with 1 master, 2 slaves and 1 development machine (coding in IntelliJ IDEA 14).
Within the Spark shell, everything seems to work fine and I am able to store data into the default database in Hive via Spark 1.5 using DataFrame.write.insertInto("testtable").
However, when creating a Scala project in IDEA and running it against the same cluster with the same settings, an error is thrown while creating the transactional connection factory for the metastore database, which is supposed to be "metastore_db" in MySQL.
Here's the hive-site.xml:
<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://10.1.50.73:9083</value>
  </property>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://10.1.50.73:3306/metastore_db?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>sa</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>huanhuan</value>
  </property>
</configuration>
The machine on which I was running IDEA can remotely log in to MySQL and Hive to create tables, so there should be no problem with permissions.
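A quick way to double-check that from the IDEA machine, using the values from the hive-site.xml above (assuming the mysql client is installed):
mysql -h 10.1.50.73 -u sa -p metastore_db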
Here's the log4j output:
> /home/stdevelop/jdk1.7.0_79/bin/java -Didea.launcher.port=7536 -Didea.launcher.bin.path=/home/stdevelop/idea/bin -Dfile.encoding=UTF-8 -classpath /home/stdevelop/jdk1.7.0_79/jre/lib/plugin.jar:/home/stdevelop/jdk1.7.0_79/jre/lib/deploy.jar:/home/stdevelop/jdk1.7.0_79/jre/lib/jfxrt.jar:/home/stdevelop/jdk1.7.0_79/jre/lib/charsets.jar:/home/stdevelop/jdk1.7.0_79/jre/lib/javaws.jar:/home/stdevelop/jdk1.7.0_79/jre/lib/jfr.jar:/home/stdevelop/jdk1.7.0_79/jre/lib/jce.jar:/home/stdevelop/jdk1.7.0_79/jre/lib/jsse.jar:/home/stdevelop/jdk1.7.0_79/jre/lib/rt.jar:/home/stdevelop/jdk1.7.0_79/jre/lib/resources.jar:/home/stdevelop/jdk1.7.0_79/jre/lib/management-agent.jar:/home/stdevelop/jdk1.7.0_79/jre/lib/ext/zipfs.jar:/home/stdevelop/jdk1.7.0_79/jre/lib/ext/sunec.jar:/home/stdevelop/jdk1.7.0_79/jre/lib/ext/sunpkcs11.jar:/home/stdevelop/jdk1.7.0_79/jre/lib/ext/sunjce_provider.jar:/home/stdevelop/jdk1.7.0_79/jre/lib/ext/localedata.jar:/home/stdevelop/jdk1.7.0_79/jre/lib/ext/dnsns.jar:/home/stdevelop/IdeaProjects/StreamingIntoHive/target/scala-2.10/classes:/root/.sbt/boot/scala-2.10.4/lib/scala-library.jar:/home/stdevelop/SparkDll/spark-assembly-1.5.0-hadoop2.5.2.jar:/home/stdevelop/SparkDll/datanucleus-api-jdo-3.2.6.jar:/home/stdevelop/SparkDll/datanucleus-core-3.2.10.jar:/home/stdevelop/SparkDll/datanucleus-rdbms-3.2.9.jar:/home/stdevelop/SparkDll/mysql-connector-java-5.1.35-bin.jar:/home/stdevelop/idea/lib/idea_rt.jar com.intellij.rt.execution.application.AppMain StreamingIntoHive 10.1.50.68 8080
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/09/22 19:43:18 INFO SparkContext: Running Spark version 1.5.0
15/09/22 19:43:21 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/09/22 19:43:22 INFO SecurityManager: Changing view acls to: root
15/09/22 19:43:22 INFO SecurityManager: Changing modify acls to: root
15/09/22 19:43:22 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
15/09/22 19:43:26 INFO Slf4jLogger: Slf4jLogger started
15/09/22 19:43:26 INFO Remoting: Starting remoting
15/09/22 19:43:26 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver#10.1.50.68:58070]
15/09/22 19:43:26 INFO Utils: Successfully started service 'sparkDriver' on port 58070.
15/09/22 19:43:26 INFO SparkEnv: Registering MapOutputTracker
15/09/22 19:43:26 INFO SparkEnv: Registering BlockManagerMaster
15/09/22 19:43:26 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-e7fdc896-ebd2-4faa-a9fe-e61bd93a9db4
15/09/22 19:43:26 INFO MemoryStore: MemoryStore started with capacity 797.6 MB
15/09/22 19:43:27 INFO HttpFileServer: HTTP File server directory is /tmp/spark-fb07a3ad-8077-49a8-bcaf-12254cc90282/httpd-0bb434c9-1418-49b6-a514-90e27cb80ab1
15/09/22 19:43:27 INFO HttpServer: Starting HTTP Server
15/09/22 19:43:27 INFO Utils: Successfully started service 'HTTP file server' on port 38865.
15/09/22 19:43:27 INFO SparkEnv: Registering OutputCommitCoordinator
15/09/22 19:43:29 INFO Utils: Successfully started service 'SparkUI' on port 4040.
15/09/22 19:43:29 INFO SparkUI: Started SparkUI at http://10.1.50.68:4040
15/09/22 19:43:29 INFO SparkContext: Added JAR /home/stdevelop/SparkDll/mysql-connector-java-5.1.35-bin.jar at http://10.1.50.68:38865/jars/mysql-connector-java-5.1.35-bin.jar with timestamp 1442922209496
15/09/22 19:43:29 INFO SparkContext: Added JAR /home/stdevelop/SparkDll/datanucleus-api-jdo-3.2.6.jar at http://10.1.50.68:38865/jars/datanucleus-api-jdo-3.2.6.jar with timestamp 1442922209498
15/09/22 19:43:29 INFO SparkContext: Added JAR /home/stdevelop/SparkDll/datanucleus-rdbms-3.2.9.jar at http://10.1.50.68:38865/jars/datanucleus-rdbms-3.2.9.jar with timestamp 1442922209534
15/09/22 19:43:29 INFO SparkContext: Added JAR /home/stdevelop/SparkDll/datanucleus-core-3.2.10.jar at http://10.1.50.68:38865/jars/datanucleus-core-3.2.10.jar with timestamp 1442922209564
15/09/22 19:43:30 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
15/09/22 19:43:30 INFO AppClient$ClientEndpoint: Connecting to master spark://10.1.50.71:7077...
15/09/22 19:43:32 INFO SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20150922062654-0004
15/09/22 19:43:32 INFO AppClient$ClientEndpoint: Executor added: app-20150922062654-0004/0 on worker-20150921191458-10.1.50.71-44716 (10.1.50.71:44716) with 1 cores
15/09/22 19:43:32 INFO SparkDeploySchedulerBackend: Granted executor ID app-20150922062654-0004/0 on hostPort 10.1.50.71:44716 with 1 cores, 1024.0 MB RAM
15/09/22 19:43:32 INFO AppClient$ClientEndpoint: Executor added: app-20150922062654-0004/1 on worker-20150921191456-10.1.50.73-36446 (10.1.50.73:36446) with 1 cores
15/09/22 19:43:32 INFO SparkDeploySchedulerBackend: Granted executor ID app-20150922062654-0004/1 on hostPort 10.1.50.73:36446 with 1 cores, 1024.0 MB RAM
15/09/22 19:43:32 INFO AppClient$ClientEndpoint: Executor added: app-20150922062654-0004/2 on worker-20150921191456-10.1.50.72-53999 (10.1.50.72:53999) with 1 cores
15/09/22 19:43:32 INFO SparkDeploySchedulerBackend: Granted executor ID app-20150922062654-0004/2 on hostPort 10.1.50.72:53999 with 1 cores, 1024.0 MB RAM
15/09/22 19:43:32 INFO AppClient$ClientEndpoint: Executor updated: app-20150922062654-0004/1 is now LOADING
15/09/22 19:43:32 INFO AppClient$ClientEndpoint: Executor updated: app-20150922062654-0004/0 is now LOADING
15/09/22 19:43:32 INFO AppClient$ClientEndpoint: Executor updated: app-20150922062654-0004/2 is now LOADING
15/09/22 19:43:32 INFO AppClient$ClientEndpoint: Executor updated: app-20150922062654-0004/0 is now RUNNING
15/09/22 19:43:32 INFO AppClient$ClientEndpoint: Executor updated: app-20150922062654-0004/1 is now RUNNING
15/09/22 19:43:32 INFO AppClient$ClientEndpoint: Executor updated: app-20150922062654-0004/2 is now RUNNING
15/09/22 19:43:33 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 60161.
15/09/22 19:43:33 INFO NettyBlockTransferService: Server created on 60161
15/09/22 19:43:33 INFO BlockManagerMaster: Trying to register BlockManager
15/09/22 19:43:33 INFO BlockManagerMasterEndpoint: Registering block manager 10.1.50.68:60161 with 797.6 MB RAM, BlockManagerId(driver, 10.1.50.68, 60161)
15/09/22 19:43:33 INFO BlockManagerMaster: Registered BlockManager
15/09/22 19:43:34 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
15/09/22 19:43:35 INFO SparkContext: Added JAR /home/stdevelop/Builds/streamingintohive.jar at http://10.1.50.68:38865/jars/streamingintohive.jar with timestamp 1442922215169
15/09/22 19:43:39 INFO SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor#10.1.50.72:40110/user/Executor#-132020084]) with ID 2
15/09/22 19:43:39 INFO SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor#10.1.50.71:38248/user/Executor#-1615730727]) with ID 0
15/09/22 19:43:40 INFO BlockManagerMasterEndpoint: Registering block manager 10.1.50.72:37819 with 534.5 MB RAM, BlockManagerId(2, 10.1.50.72, 37819)
15/09/22 19:43:40 INFO BlockManagerMasterEndpoint: Registering block manager 10.1.50.71:48028 with 534.5 MB RAM, BlockManagerId(0, 10.1.50.71, 48028)
15/09/22 19:43:42 INFO HiveContext: Initializing execution hive, version 1.2.1
15/09/22 19:43:42 INFO ClientWrapper: Inspected Hadoop version: 2.5.2
15/09/22 19:43:42 INFO ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.5.2
15/09/22 19:43:42 INFO SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor#10.1.50.73:56385/user/Executor#1871695565]) with ID 1
15/09/22 19:43:43 INFO BlockManagerMasterEndpoint: Registering block manager 10.1.50.73:43643 with 534.5 MB RAM, BlockManagerId(1, 10.1.50.73, 43643)
15/09/22 19:43:45 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
15/09/22 19:43:45 INFO ObjectStore: ObjectStore, initialize called
15/09/22 19:43:47 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored
15/09/22 19:43:47 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
15/09/22 19:43:47 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
15/09/22 19:43:48 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
15/09/22 19:43:58 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
15/09/22 19:44:03 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
15/09/22 19:44:03 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
15/09/22 19:44:10 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
15/09/22 19:44:10 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
15/09/22 19:44:12 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
15/09/22 19:44:12 INFO ObjectStore: Initialized ObjectStore
15/09/22 19:44:13 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
15/09/22 19:44:14 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
15/09/22 19:44:15 INFO HiveMetaStore: Added admin role in metastore
15/09/22 19:44:15 INFO HiveMetaStore: Added public role in metastore
15/09/22 19:44:16 INFO HiveMetaStore: No user is added in admin role, since config is empty
15/09/22 19:44:16 INFO HiveMetaStore: 0: get_all_databases
15/09/22 19:44:16 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_all_databases
15/09/22 19:44:17 INFO HiveMetaStore: 0: get_functions: db=default pat=*
15/09/22 19:44:17 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_functions: db=default pat=*
15/09/22 19:44:17 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
15/09/22 19:44:18 INFO SessionState: Created local directory: /tmp/9ee94679-df51-46bc-bf6f-66b19f053823_resources
15/09/22 19:44:18 INFO SessionState: Created HDFS directory: /tmp/hive/root/9ee94679-df51-46bc-bf6f-66b19f053823
15/09/22 19:44:18 INFO SessionState: Created local directory: /tmp/root/9ee94679-df51-46bc-bf6f-66b19f053823
15/09/22 19:44:18 INFO SessionState: Created HDFS directory: /tmp/hive/root/9ee94679-df51-46bc-bf6f-66b19f053823/_tmp_space.db
15/09/22 19:44:19 INFO HiveContext: default warehouse location is /user/hive/warehouse
15/09/22 19:44:19 INFO HiveContext: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
15/09/22 19:44:19 INFO ClientWrapper: Inspected Hadoop version: 2.5.2
15/09/22 19:44:19 INFO ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.5.2
15/09/22 19:44:22 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/09/22 19:44:22 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
15/09/22 19:44:22 INFO ObjectStore: ObjectStore, initialize called
15/09/22 19:44:23 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored
15/09/22 19:44:23 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
15/09/22 19:44:23 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
15/09/22 19:44:25 WARN HiveMetaStore: Retrying creating default database after error: Error creating transactional connection factory
javax.jdo.JDOFatalInternalException: Error creating transactional connection factory
at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:587)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:788)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:333)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:202)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at javax.jdo.JDOHelper$16.run(JDOHelper.java:1965)
at java.security.AccessController.doPrivileged(Native Method)
at javax.jdo.JDOHelper.invoke(JDOHelper.java:1960)
at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1166)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:365)
at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:394)
at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:291)
at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:258)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:57)
at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:66)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:593)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:571)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:620)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:461)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:66)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5762)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:199)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3005)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3024)
at org.apache.hadoop.hive.ql.metadata.Hive.getAllDatabases(Hive.java:1234)
at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:174)
at org.apache.hadoop.hive.ql.metadata.Hive.<clinit>(Hive.java:166)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:503)
at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:171)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.spark.sql.hive.client.IsolatedClientLoader.liftedTree1$1(IsolatedClientLoader.scala:183)
at org.apache.spark.sql.hive.client.IsolatedClientLoader.<init>(IsolatedClientLoader.scala:179)
at org.apache.spark.sql.hive.HiveContext.metadataHive$lzycompute(HiveContext.scala:227)
at org.apache.spark.sql.hive.HiveContext.metadataHive(HiveContext.scala:186)
at org.apache.spark.sql.hive.HiveContext.setConf(HiveContext.scala:393)
at org.apache.spark.sql.hive.HiveContext.defaultOverrides(HiveContext.scala:175)
at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:178)
at StreamingIntoHive$.main(StreamingIntoHive.scala:42)
at StreamingIntoHive.main(StreamingIntoHive.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
NestedThrowablesStackTrace:
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)
at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:325)
at org.datanucleus.store.AbstractStoreManager.registerConnectionFactory(AbstractStoreManager.java:282)
at org.datanucleus.store.AbstractStoreManager.<init>(AbstractStoreManager.java:240)
at org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:286)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)
at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:301)
at org.datanucleus.NucleusContext.createStoreManagerForProperties(NucleusContext.java:1187)
at org.datanucleus.NucleusContext.initialise(NucleusContext.java:356)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:775)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:333)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:202)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at javax.jdo.JDOHelper$16.run(JDOHelper.java:1965)
at java.security.AccessController.doPrivileged(Native Method)
at javax.jdo.JDOHelper.invoke(JDOHelper.java:1960)
at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1166)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:365)
at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:394)
at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:291)
at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:258)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:57)
at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:66)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:593)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:571)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:620)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:461)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:66)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5762)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:199)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3005)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3024)
at org.apache.hadoop.hive.ql.metadata.Hive.getAllDatabases(Hive.java:1234)
at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:174)
at org.apache.hadoop.hive.ql.metadata.Hive.<clinit>(Hive.java:166)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:503)
at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:171)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.spark.sql.hive.client.IsolatedClientLoader.liftedTree1$1(IsolatedClientLoader.scala:183)
at org.apache.spark.sql.hive.client.IsolatedClientLoader.<init>(IsolatedClientLoader.scala:179)
at org.apache.spark.sql.hive.HiveContext.metadataHive$lzycompute(HiveContext.scala:227)
at org.apache.spark.sql.hive.HiveContext.metadataHive(HiveContext.scala:186)
at org.apache.spark.sql.hive.HiveContext.setConf(HiveContext.scala:393)
at org.apache.spark.sql.hive.HiveContext.defaultOverrides(HiveContext.scala:175)
at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:178)
at StreamingIntoHive$.main(StreamingIntoHive.scala:42)
at StreamingIntoHive.main(StreamingIntoHive.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
Caused by: java.lang.OutOfMemoryError: PermGen space
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:165)
at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:153)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at org.datanucleus.store.rdbms.connectionpool.DBCPConnectionPoolFactory.createConnectionPool(DBCPConnectionPoolFactory.java:59)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:238)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl.initialiseDataSources(ConnectionFactoryImpl.java:131)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl.<init>(ConnectionFactoryImpl.java:85)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)
at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:325)
at org.datanucleus.store.AbstractStoreManager.registerConnectionFactory(AbstractStoreManager.java:282)
at org.datanucleus.store.AbstractStoreManager.<init>(AbstractStoreManager.java:240)
………………
………………
Process finished with exit code 1
===============
Can anyone help me find out the reason?
Thanks.
I believe adding a HiveContext may help, if it is not already in the code. Note that the value must be created before its members can be imported (the original ordering would not compile):
import org.apache.spark.sql.hive.HiveContext

val hiveContext = new HiveContext(sc) // sc is an existing SparkContext
import hiveContext.implicits._
import hiveContext.sql
A HiveContext adds support for finding tables in the metastore and for writing queries in HiveQL. Users who do not have an existing Hive deployment can still create a HiveContext; when it is not configured via hive-site.xml, the context automatically creates metastore_db and a warehouse directory in the current directory. -- from the Spark documentation
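For example, once the context exists you can run HiveQL directly against the metastore. A minimal usage sketch (the table and column names below are hypothetical, not from this question):
// Hypothetical sketch: create a Hive table and query it via HiveQL.
hiveContext.sql("CREATE TABLE IF NOT EXISTS events (id INT, payload STRING)")
hiveContext.sql("SELECT COUNT(*) FROM events").show()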
The issue was resolved as follows:
I had created metastore_db manually so that, if Spark connected to MySQL, it would pass the direct-SQL DbType test. However, it did not pass the MySQL direct-SQL test, so Spark fell back to Derby as the default metastore database, as shown in the log4j output. This means Spark was not actually connected to the metastore_db in MySQL even though hive-site.xml was correctly configured. I found a solution for this, described in the question comments. Hope it helps.
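For reference, a minimal hive-site.xml pointing the metastore at MySQL looks roughly like the sketch below. The host, database name, and credentials are placeholders, not values from this setup; the property names are the standard Hive metastore JDO options.
<configuration>
  <!-- JDBC URL of the MySQL database backing the Hive metastore (placeholder host/db) -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/metastore_db?createDatabaseIfNotExist=true</value>
  </property>
  <!-- MySQL JDBC driver class; the connector jar must be on the classpath -->
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <!-- Placeholder credentials -->
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hiveuser</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hivepassword</value>
  </property>
</configuration>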
If you are using a MySQL database, specify the MySQL driver path in the spark-env.sh file.
Sample code:
export SPARK_CLASSPATH="/home/${USER}/work/apache-hive-2.0.0-bin/lib/mysql-connector-java-5.1.38-bin.jar"
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export SPARK_LOG_DIR=/home/hadoop/work/spark-2.0.0-bin-hadoop2.7/logs
export SPARK_WORKER_DIR=/home/hadoop/work/spark-2.0.0-bin-hadoop2.7/logs/worker_dir
More info: https://www.youtube.com/watch?v=y3E8tUFhBt0
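Note also that the root cause in the trace above is java.lang.OutOfMemoryError: PermGen space, so on Java 7 it may additionally be necessary to enlarge the driver's permanent generation. A hedged sketch (the 256m value is an arbitrary starting point, not a tuned recommendation):
# Pass a larger PermGen to the Spark driver at submit time (Java 7 flag):
spark-submit --driver-java-options "-XX:MaxPermSize=256m" ...

# Or set it persistently in conf/spark-defaults.conf:
# spark.driver.extraJavaOptions  -XX:MaxPermSize=256m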
