SonarQube: java.lang.IllegalStateException: Webapp did not start: the SonarQube server shut down automatically after I started it - memory-leaks

I was trying to deploy SonarQube on a remote Ubuntu machine. I started the server, and the status check reported 'SonarQube is running', but after a few minutes the server shut down on its own. I got this exception:
2016.11.08 16:41:53 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [Abandoned connection cleanup thread] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Object.wait(Native Method)
java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:143)
com.mysql.jdbc.AbandonedConnectionCleanupThread.run(AbandonedConnectionCleanupThread.java:43)
2016.11.08 16:41:53 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [Timer-0] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Object.wait(Native Method)
java.util.TimerThread.mainLoop(Timer.java:552)
java.util.TimerThread.run(Timer.java:505)
2016.11.08 16:41:53 INFO web[][o.a.c.h.Http11NioProtocol] Starting ProtocolHandler ["http-nio-127.0.0.1-9000"]
2016.11.08 16:41:53 INFO web[][o.s.s.a.TomcatAccessLog] Web server is started
2016.11.08 16:41:53 INFO web[][o.s.s.a.EmbeddedTomcat] HTTP connector enabled on port 9000
2016.11.08 16:41:53 WARN web[][o.s.p.ProcessEntryPoint] Fail to start web
java.lang.IllegalStateException: Webapp did not start
at org.sonar.server.app.EmbeddedTomcat.isUp(EmbeddedTomcat.java:84) ~[sonar-server-6.1.jar:na]
at org.sonar.server.app.WebServer.isUp(WebServer.java:46) [sonar-server-6.1.jar:na]
at org.sonar.process.ProcessEntryPoint.launch(ProcessEntryPoint.java:105) ~[sonar-process-6.1.jar:na]
at org.sonar.server.app.WebServer.main(WebServer.java:67) [sonar-server-6.1.jar:na]
2016.11.08 16:41:53 INFO web[][o.a.c.h.Http11NioProtocol] Pausing ProtocolHandler ["http-nio-127.0.0.1-9000"]
2016.11.08 16:41:54 INFO web[][o.a.c.h.Http11NioProtocol] Stopping ProtocolHandler ["http-nio-127.0.0.1-9000"]
2016.11.08 16:41:54 INFO web[][o.a.c.h.Http11NioProtocol] Destroying ProtocolHandler ["http-nio-127.0.0.1-9000"]
2016.11.08 16:41:54 INFO web[][o.s.s.a.TomcatAccessLog] Web server is stopped
2016.11.08 16:41:54 INFO app[][o.s.p.m.Monitor] Process[es] is stopping
2016.11.08 16:41:55 INFO es[][o.s.p.StopWatcher] Stopping process
2016.11.08 16:41:55 INFO es[][o.elasticsearch.node] [sonarqube] stopping ...
2016.11.08 16:41:55 INFO es[][o.elasticsearch.node] [sonarqube] stopped
2016.11.08 16:41:55 INFO es[][o.elasticsearch.node] [sonarqube] closing ...
2016.11.08 16:41:55 INFO es[][o.elasticsearch.node] [sonarqube] closed
Below are my versions:
Java: JDK 1.8.0_111
Ubuntu: Ubuntu 14.04
MySQL: 5.6.34
SonarQube: 6.1
Below are the full logs:
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
WrapperSimpleApp: Unable to locate the class org.sonar.application.App: java.lang.UnsupportedClassVersionError: org/sonar/application/App : Unsupported major.minor version 52.0
WrapperSimpleApp Usage:
java org.tanukisoftware.wrapper.WrapperSimpleApp {app_class} [app_arguments]
Where:
app_class: The fully qualified class name of the application to run.
app_arguments: The arguments that would normally be passed to the
application.
<-- Wrapper Stopped
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
WrapperSimpleApp: Unable to locate the class org.sonar.application.App: java.lang.UnsupportedClassVersionError: org/sonar/application/App : Unsupported major.minor version 52.0
WrapperSimpleApp Usage:
java org.tanukisoftware.wrapper.WrapperSimpleApp {app_class} [app_arguments]
Where:
app_class: The fully qualified class name of the application to run.
app_arguments: The arguments that would normally be passed to the
application.
<-- Wrapper Stopped
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
WrapperSimpleApp: Unable to locate the class org.sonar.application.App: java.lang.UnsupportedClassVersionError: org/sonar/application/App : Unsupported major.minor version 52.0
"sonar.log" 21127L, 1883274C 1,1 Top
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
2016.11.08 21:14:58 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [Abandoned connection cleanup thread] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Object.wait(Native Method)
java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:143)
com.mysql.jdbc.AbandonedConnectionCleanupThread.run(AbandonedConnectionCleanupThread.java:43)
2016.11.08 21:14:58 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [Timer-0] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Object.wait(Native Method)
java.util.TimerThread.mainLoop(Timer.java:552)
java.util.TimerThread.run(Timer.java:505)
2016.11.08 21:14:58 INFO web[][o.a.c.h.Http11NioProtocol] Starting ProtocolHandler ["http-nio-127.0.0.1-9000"]
2016.11.08 21:14:58 INFO web[][o.s.s.a.TomcatAccessLog] Web server is started
2016.11.08 21:14:58 INFO web[][o.s.s.a.EmbeddedTomcat] HTTP connector enabled on port 9000
2016.11.08 21:14:58 WARN web[][o.s.p.ProcessEntryPoint] Fail to start web
java.lang.IllegalStateException: Webapp did not start
at org.sonar.server.app.EmbeddedTomcat.isUp(EmbeddedTomcat.java:84) ~[sonar-server-6.1.jar:na]
at org.sonar.server.app.WebServer.isUp(WebServer.java:46) [sonar-server-6.1.jar:na]
at org.sonar.process.ProcessEntryPoint.launch(ProcessEntryPoint.java:105) ~[sonar-process-6.1.jar:na]
at org.sonar.server.app.WebServer.main(WebServer.java:67) [sonar-server-6.1.jar:na]
2016.11.08 21:14:58 INFO web[][o.a.c.h.Http11NioProtocol] Pausing ProtocolHandler ["http-nio-127.0.0.1-9000"]
2016.11.08 21:14:59 INFO web[][o.a.c.h.Http11NioProtocol] Stopping ProtocolHandler ["http-nio-127.0.0.1-9000"]
2016.11.08 21:14:59 INFO web[][o.a.c.h.Http11NioProtocol] Destroying ProtocolHandler ["http-nio-127.0.0.1-9000"]
2016.11.08 21:14:59 INFO web[][o.s.s.a.TomcatAccessLog] Web server is stopped
2016.11.08 21:14:59 INFO app[][o.s.p.m.Monitor] Process[es] is stopping
2016.11.08 21:15:00 INFO es[][o.s.p.StopWatcher] Stopping process
2016.11.08 21:15:00 INFO es[][o.elasticsearch.node] [sonarqube] stopping ...
2016.11.08 21:15:00 INFO es[][o.elasticsearch.node] [sonarqube] stopped
2016.11.08 21:15:00 INFO es[][o.elasticsearch.node] [sonarqube] closing ...
2016.11.08 21:15:00 INFO es[][o.elasticsearch.node] [sonarqube] closed
2016.11.08 21:15:00 INFO app[][o.s.p.m.Monitor] Process[es] is stopped

The well-known message "Unsupported major.minor version 52.0" means that Java 8 is not being used, whereas it is required.
You should check that JDK 1.8.0_111 is the Java found in PATH. If not, an alternative is to edit the property wrapper.java.command in conf/wrapper.conf so that it points to the Java 8 executable.
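For example (a sketch; the JDK path below is taken from the launch logs above and may differ on your machine):
# Check which Java the account running SonarQube sees
java -version   # should report 1.8.x
# If it does not, point the wrapper at JDK 8 explicitly in conf/wrapper.conf:
wrapper.java.command=/usr/lib/jvm/java-8-openjdk-amd64/bin/java
After changing wrapper.conf, restart SonarQube so the wrapper relaunches with the new JVM.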

Related

Spark Job fails after Cloudera upgrade to 5.16.1

I have a very simple example Spark job, which computes 2+2, compiled against Spark 1.6.
I'm performing the spark-submit in the following way:
spark-submit --master yarn --deploy-mode cluster --executor-memory 2G --driver-memory 1G --conf spark.yarn.jar=hdfs:/user/bigdata-app-xxx-yyy/diy/lib/spark-assembly-1.6.0-hadoop2.6.0.jar --queue root.xxxyyy --num-executors 4 --principal bigdata-app-xxx-yyy@kontosa.COM --keytab /clf/hadoop/conf/keytabs/bigdata-app-xxx-yyy.keytab --class com.vanilla.meir.Main hdfs:/user/bigdata-app-xxx-yyy/xxx/lib/spark-hello-world.jar
The job is submitted, but it fails with the following exception:
19/12/08 07:15:37 INFO storage.MemoryStore: MemoryStore started with capacity 457.9 MB
19/12/08 07:15:37 INFO spark.SparkEnv: Registering OutputCommitCoordinator
19/12/08 07:15:37 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
19/12/08 07:15:37 INFO util.Utils: Successfully started service 'SparkUI' on port 35371.
19/12/08 07:15:37 INFO ui.SparkUI: Started SparkUI at http://10.204.152.26:35371
19/12/08 07:15:37 INFO cluster.YarnClusterScheduler: Created YarnClusterScheduler
19/12/08 07:15:37 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 43674.
19/12/08 07:15:37 INFO netty.NettyBlockTransferService: Server created on 43674
19/12/08 07:15:37 INFO storage.BlockManager: external shuffle service port = 7337
19/12/08 07:15:37 INFO storage.BlockManagerMaster: Trying to register BlockManager
19/12/08 07:15:37 INFO storage.BlockManagerMasterEndpoint: Registering block manager 10.204.152.26:43674 with 457.9 MB RAM, BlockManagerId(driver, 10.204.152.26, 43674)
19/12/08 07:15:37 INFO storage.BlockManagerMaster: Registered BlockManager
19/12/08 07:15:37 INFO scheduler.EventLoggingListener: Logging events to hdfs://Titan/user/spark/applicationHistory/application_1564355610025_265304_1
19/12/08 07:15:37 WARN spark.SparkContext: Dynamic Allocation and num executors both set, thus dynamic allocation disabled.
19/12/08 07:15:37 INFO ui.SparkUI: Stopped Spark web UI at http://10.204.152.26:35371
19/12/08 07:15:37 INFO cluster.YarnClusterSchedulerBackend: Shutting down all executors
19/12/08 07:15:37 INFO cluster.YarnClusterSchedulerBackend: Asking each executor to shut down
19/12/08 07:15:38 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
19/12/08 07:15:38 INFO storage.MemoryStore: MemoryStore cleared
19/12/08 07:15:38 INFO storage.BlockManager: BlockManager stopped
19/12/08 07:15:38 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
19/12/08 07:15:38 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
19/12/08 07:15:38 INFO spark.SparkContext: Successfully stopped SparkContext
19/12/08 07:15:38 ERROR spark.SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: Exception when registering SparkListener
at org.apache.spark.SparkContext.setupAndStartListenerBus(SparkContext.scala:2155)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:578)
at com.vanilla.meir.Main$.main(Main.scala:16)
at com.vanilla.meir.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:542)
Caused by: java.lang.ClassNotFoundException: com.cloudera.spark.lineage.ClouderaNavigatorListener
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.util.Utils$.classForName(Utils.scala:174)
at org.apache.spark.SparkContext$$anonfun$setupAndStartListenerBus$1.apply(SparkContext.scala:2123)
at org.apache.spark.SparkContext$$anonfun$setupAndStartListenerBus$1.apply(SparkContext.scala:2120)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
at org.apache.spark.SparkContext.setupAndStartListenerBus(SparkContext.scala:2120)
... 8 more
19/12/08 07:15:38 INFO spark.SparkContext: SparkContext already stopped.
19/12/08 07:15:38 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
19/12/08 07:15:38 ERROR yarn.ApplicationMaster: User class threw exception: org.apache.spark.SparkException: Exception when registering SparkListener
org.apache.spark.SparkException: Exception when registering SparkListener
at org.apache.spark.SparkContext.setupAndStartListenerBus(SparkContext.scala:2155)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:578)
at com.vanilla.meir.Main$.main(Main.scala:16)
at com.vanilla.meir.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:542)
Caused by: java.lang.ClassNotFoundException: com.cloudera.spark.lineage.ClouderaNavigatorListener
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.util.Utils$.classForName(Utils.scala:174)
at org.apache.spark.SparkContext$$anonfun$setupAndStartListenerBus$1.apply(SparkContext.scala:2123)
at org.apache.spark.SparkContext$$anonfun$setupAndStartListenerBus$1.apply(SparkContext.scala:2120)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
at org.apache.spark.SparkContext.setupAndStartListenerBus(SparkContext.scala:2120)
... 8 more
19/12/08 07:15:38 INFO yarn.ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: org.apache.spark.SparkException: Exception when registering SparkListener)
19/12/08 07:15:38 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
19/12/08 07:15:38 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
19/12/08 07:15:46 ERROR yarn.ApplicationMaster: SparkContext did not initialize after waiting for 100000 ms. Please check earlier log output for errors. Failing the application.
19/12/08 07:15:46 INFO util.ShutdownHookManager: Shutdown hook called
It used to work on the previous release and ran successfully on Spark 1.5.2, but recompiling the code for the new Spark version brings this exception.
Can someone help?
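The ClassNotFoundException suggests that the cluster-side spark-defaults.conf registers Cloudera's lineage listener through spark.extraListeners, while the vanilla upstream assembly supplied via spark.yarn.jar does not contain that class. A hedged workaround, assuming the job does not need Cloudera Navigator lineage, is to blank that property at submit time (or point spark.yarn.jar at the CDH-shipped assembly instead):
# Sketch: override the cluster default so no lineage listener is registered
spark-submit --master yarn --deploy-mode cluster \
  --conf spark.extraListeners= \
  --class com.vanilla.meir.Main \
  hdfs:/user/bigdata-app-xxx-yyy/xxx/lib/spark-hello-world.jar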

SPARK Error: java.lang.UnsatisfiedLinkError: /tmp/snappy-1.0.4.1-libsnappyjava [duplicate]

This question already has answers here:
UnsatisfiedLinkError: no snappyjava in java.library.path when running Spark MLLib Unit test within Intellij
(4 answers)
UnsatisfiedLinkError: /tmp/snappy-1.1.4-libsnappyjava.so Error loading shared library ld-linux-x86-64.so.2: No such file or directory
(8 answers)
spark returns error libsnappyjava.so: failed to map segment from shared object: Operation not permitted
(2 answers)
Closed 3 years ago.
I am running CDH 5.16 standalone singlenode in a RHEL 7 Server.
I have written a simple Spark job that reads a text file from HDFS and loads it as a Parquet file into a separate location in HDFS. But whenever I run this code on the server (I am using SBT to build the jar and deploying it to the cluster with spark-submit), the following error is thrown:
19/06/07 12:56:04 INFO spark.SparkContext: Running Spark version 1.6.0
19/06/07 12:56:04 INFO spark.SecurityManager: Changing view acls to: ak_bng
19/06/07 12:56:04 INFO spark.SecurityManager: Changing modify acls to: ak_bng
19/06/07 12:56:04 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ak_bng); users with modify permissions: Set(ak_bng)
19/06/07 12:56:05 INFO util.Utils: Successfully started service 'sparkDriver' on port 44220.
19/06/07 12:56:05 INFO slf4j.Slf4jLogger: Slf4jLogger started
19/06/07 12:56:05 INFO Remoting: Starting remoting
19/06/07 12:56:05 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@10.188.223.5:36304]
19/06/07 12:56:05 INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkDriverActorSystem@10.188.223.5:36304]
19/06/07 12:56:05 INFO util.Utils: Successfully started service 'sparkDriverActorSystem' on port 36304.
19/06/07 12:56:05 INFO spark.SparkEnv: Registering MapOutputTracker
19/06/07 12:56:05 INFO spark.SparkEnv: Registering BlockManagerMaster
19/06/07 12:56:05 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-c38a27e3-c483-4f56-ab7f-56e4c1be0832
19/06/07 12:56:05 INFO storage.MemoryStore: MemoryStore started with capacity 530.0 MB
19/06/07 12:56:06 INFO spark.SparkEnv: Registering OutputCommitCoordinator
19/06/07 12:56:06 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
19/06/07 12:56:06 INFO ui.SparkUI: Started SparkUI at http://10.188.223.5:4040
19/06/07 12:56:06 INFO spark.SparkContext: Added JAR file:/home/ak_bng/spark_jars/Simple_Project-assembly-1.0.jar at spark://10.188.223.5:44220/jars/Simple_Project-assembly-1.0.jar with timestamp 1559892366578
19/06/07 12:56:06 INFO executor.Executor: Starting executor ID driver on host localhost
19/06/07 12:56:06 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 46170.
19/06/07 12:56:06 INFO netty.NettyBlockTransferService: Server created on 46170
19/06/07 12:56:06 INFO storage.BlockManager: external shuffle service port = 7337
19/06/07 12:56:06 INFO storage.BlockManagerMaster: Trying to register BlockManager
19/06/07 12:56:06 INFO storage.BlockManagerMasterEndpoint: Registering block manager localhost:46170 with 530.0 MB RAM, BlockManagerId(driver, localhost, 46170)
19/06/07 12:56:06 INFO storage.BlockManagerMaster: Registered BlockManager
19/06/07 12:56:07 INFO scheduler.EventLoggingListener: Logging events to hdfs://indelsrv185.in.kworld.kpmg.com:8020/user/spark/applicationHistory/local-1559892366602
19/06/07 12:56:07 INFO spark.SparkContext: Registered listener com.cloudera.spark.lineage.ClouderaNavigatorListener
19/06/07 12:56:08 INFO parquet.ParquetRelation: Listing hdfs://10.188.223.5:8020/user/ak_bng/products on driver
19/06/07 12:56:08 INFO parquet.ParquetRelation: Listing hdfs://10.188.223.5:8020/user/ak_bng/categories on driver
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:312)
at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:219)
at org.xerial.snappy.Snappy.<clinit>(Snappy.java:44)
at org.apache.spark.io.SnappyCompressionCodec$.liftedTree1$1(CompressionCodec.scala:169)
at org.apache.spark.io.SnappyCompressionCodec$.org$apache$spark$io$SnappyCompressionCodec$$version$lzycompute(CompressionCodec.scala:168)
at org.apache.spark.io.SnappyCompressionCodec$.org$apache$spark$io$SnappyCompressionCodec$$version(CompressionCodec.scala:168)
at org.apache.spark.io.SnappyCompressionCodec.<init>(CompressionCodec.scala:152)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:65)
at org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:74)
at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:81)
at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:63)
at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1334)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$.apply(DataSourceStrategy.scala:126)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:59)
at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:48)
at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:46)
at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:53)
at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:53)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$toString$5.apply(QueryExecution.scala:81)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$toString$5.apply(QueryExecution.scala:81)
at org.apache.spark.sql.execution.QueryExecution.stringOrError(QueryExecution.scala:61)
at org.apache.spark.sql.execution.QueryExecution.toString(QueryExecution.scala:81)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:50)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation.run(InsertIntoHadoopFsRelation.scala:106)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:56)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:56)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:256)
at org.apache.spark.sql.DataFrameWriter.dataSource$lzycompute$1(DataFrameWriter.scala:181)
at org.apache.spark.sql.DataFrameWriter.org$apache$spark$sql$DataFrameWriter$$dataSource$1(DataFrameWriter.scala:181)
at org.apache.spark.sql.DataFrameWriter$$anonfun$save$1.apply$mcV$sp(DataFrameWriter.scala:188)
at org.apache.spark.sql.DataFrameWriter.executeAndCallQEListener(DataFrameWriter.scala:154)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:188)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:172)
at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:370)
at SimpleApp$.main(SimpleApp.scala:169)
at SimpleApp.main(SimpleApp.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:730)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.UnsatisfiedLinkError: /tmp/snappy-1.0.4.1-libsnappyjava.so: /tmp/snappy-1.0.4.1-libsnappyjava.so: failed to map segment from shared object: Operation not permitted
at java.lang.ClassLoader$NativeLibrary.load(Native Method)
at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941)
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824)
at java.lang.Runtime.load0(Runtime.java:809)
at java.lang.System.load(System.java:1086)
at org.xerial.snappy.SnappyNativeLoader.load(SnappyNativeLoader.java:39)
... 65 more
Exception in thread "main" java.lang.IllegalArgumentException: java.lang.NoClassDefFoundError: Could not initialize class org.xerial.snappy.Snappy
at org.apache.spark.io.SnappyCompressionCodec$.liftedTree1$1(CompressionCodec.scala:171)
at org.apache.spark.io.SnappyCompressionCodec$.org$apache$spark$io$SnappyCompressionCodec$$version$lzycompute(CompressionCodec.scala:168)
at org.apache.spark.io.SnappyCompressionCodec$.org$apache$spark$io$SnappyCompressionCodec$$version(CompressionCodec.scala:168)
at org.apache.spark.io.SnappyCompressionCodec.<init>(CompressionCodec.scala:152)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:65)
at org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:74)
at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:81)
at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:63)
at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1334)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$.apply(DataSourceStrategy.scala:126)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:59)
at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:48)
at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:46)
at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:53)
at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:53)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:51)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation.run(InsertIntoHadoopFsRelation.scala:106)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:56)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:56)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:256)
at org.apache.spark.sql.DataFrameWriter.dataSource$lzycompute$1(DataFrameWriter.scala:181)
at org.apache.spark.sql.DataFrameWriter.org$apache$spark$sql$DataFrameWriter$$dataSource$1(DataFrameWriter.scala:181)
at org.apache.spark.sql.DataFrameWriter$$anonfun$save$1.apply$mcV$sp(DataFrameWriter.scala:188)
at org.apache.spark.sql.DataFrameWriter.executeAndCallQEListener(DataFrameWriter.scala:154)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:188)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:172)
at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:370)
at SimpleApp$.main(SimpleApp.scala:169)
at SimpleApp.main(SimpleApp.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:730)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.xerial.snappy.Snappy
at org.apache.spark.io.SnappyCompressionCodec$.liftedTree1$1(CompressionCodec.scala:169)
... 53 more
19/06/07 12:56:08 INFO spark.SparkContext: Invoking stop() from shutdown hook
19/06/07 12:56:08 INFO ui.SparkUI: Stopped Spark web UI at http://10.188.223.5:4040
19/06/07 12:56:08 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
19/06/07 12:56:09 INFO storage.MemoryStore: MemoryStore cleared
19/06/07 12:56:09 INFO storage.BlockManager: BlockManager stopped
19/06/07 12:56:09 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
19/06/07 12:56:09 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
19/06/07 12:56:09 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
19/06/07 12:56:09 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
19/06/07 12:56:09 INFO Remoting: Remoting shut down
19/06/07 12:56:09 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
19/06/07 12:56:09 INFO spark.SparkContext: Successfully stopped SparkContext
19/06/07 12:56:09 INFO util.ShutdownHookManager: Shutdown hook called
19/06/07 12:56:09 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-111712ef-39a8-41b6-bf6d-5d317d954fa1
Spark submit command:
spark-submit --class SimpleApp --master local[8] /home/ak_bng/spark_jars/Simple_Project-assembly-1.0.jar
I went through a few links to resolve this issue (Snappy Compression not working due to tmp folder privileges, Apache Spark - Parquet / Snappy compression error), but none of them really provided a solution.
I had run Spark on HDFS (a separate installation) successfully, without any errors, before. The problem started once CDH was installed.
I am new to setting up a cluster and don't quite understand what the issue is here or how to resolve it. Can someone please shed some light on this?
I am using:
CDH 5.16
Spark 1.6.0
Server OS: RHEL 7
Hadoop 2.6
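The "failed to map segment from shared object: Operation not permitted" part usually means the JVM extracted libsnappyjava.so into a directory mounted noexec (often /tmp on hardened RHEL hosts). A sketch of one workaround, assuming a directory on an exec-allowed filesystem such as /home/ak_bng/tmp (a hypothetical path):
# Check whether /tmp is mounted noexec
mount | grep ' /tmp '
# Redirect the JVM temp dir and snappy-java's extraction dir elsewhere
mkdir -p /home/ak_bng/tmp
spark-submit --class SimpleApp --master local[8] \
  --driver-java-options "-Djava.io.tmpdir=/home/ak_bng/tmp -Dorg.xerial.snappy.tempdir=/home/ak_bng/tmp" \
  /home/ak_bng/spark_jars/Simple_Project-assembly-1.0.jar
Remounting /tmp with exec, where policy allows it, would avoid the extra flags.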

Docker memory leak with sonarqube

I am trying to run a Docker container that contains SonarQube.
After building the container, I ran the command below. For the first few moments it looks fine (I can see an Up status in docker ps -a), but then it exits automatically.
The command I typed is:
docker run -d --name sonarqube \
  -p 9000:9000 -p 9092:9092 \
  -e SONARQUBE_JDBC_USERNAME=sonar \
  -e SONARQUBE_JDBC_PASSWORD=sonar \
  -e SONARQUBE_JDBC_URL="jdbc:mysql://111.222.33.444:3306/sonar?characterEncoding=utf8&useUnicode=true&rewriteBatchedStatements=true" \
  sonarqube
And the failure log follows:
2017.04.21 06:39:37 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Pip the Troll][[timer]]] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Thread.sleep(Native Method)
org.elasticsearch.threadpool.ThreadPool$EstimatedTimeThread.run(ThreadPool.java:747)
2017.04.21 06:39:37 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Pip the Troll][scheduler][T#1]] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
2017.04.21 06:39:37 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Pip the Troll][transport_client_worker][T#1]{New I/O worker #1}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
2017.04.21 06:39:37 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Pip the Troll][transport_client_worker][T#2]{New I/O worker #2}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
2017.04.21 06:39:37 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Pip the Troll][transport_client_worker][T#3]{New I/O worker #3}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
After all this, it seems the main cause of the automatic shutdown is a memory leak. How can I fix this?
FYI, without the JDBC settings it works fine.
================= EDIT ==================
Maybe I should provide more information.
When I run docker run and immediately docker logs sonarqube, the log looks like this:
[root@DCSF-DEV08 ice]# docker logs sonarqube
01:00:46.930 [main] WARN org.sonar.application.JdbcSettings - JDBC URL is recommended to have the property 'useConfigs=maxPerformance'
2017.04.24 01:00:47 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/temp
2017.04.24 01:00:47 INFO app[][o.s.p.m.JavaProcessLauncher] Launch process[es]: /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Djava.awt.headless=true -Xmx1G -Xms256m -Xss256k -Djna.nosys=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=/opt/sonarqube/temp -javaagent:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/management-agent.jar -cp ./lib/common/*:./lib/search/* org.sonar.search.SearchServer /opt/sonarqube/temp/sq-process7897977644818879465properties
2017.04.24 01:00:47 INFO es[][o.s.p.ProcessEntryPoint] Starting es
2017.04.24 01:00:47 INFO es[][o.s.s.EsSettings] Elasticsearch listening on /127.0.0.1:9001
2017.04.24 01:00:47 INFO es[][o.elasticsearch.node] [sonarqube] version[2.4.4], pid[45], build[fcbb46d/2017-01-03T11:33:16Z]
2017.04.24 01:00:47 INFO es[][o.elasticsearch.node] [sonarqube] initializing ...
2017.04.24 01:00:47 INFO es[][o.e.plugins] [sonarqube] modules [], plugins [], sites []
2017.04.24 01:00:47 INFO es[][o.elasticsearch.env] [sonarqube] using [1] data paths, mounts [[/opt/sonarqube/data (/dev/mapper/centos-root)]], net usable_space [7gb], net total_space [49.9gb], spins? [possibly], types [xfs]
2017.04.24 01:00:47 INFO es[][o.elasticsearch.env] [sonarqube] heap size [989.8mb], compressed ordinary object pointers [true]
2017.04.24 01:00:49 INFO es[][o.elasticsearch.node] [sonarqube] initialized
2017.04.24 01:00:49 INFO es[][o.elasticsearch.node] [sonarqube] starting ...
2017.04.24 01:00:49 INFO es[][o.e.transport] [sonarqube] publish_address {127.0.0.1:9001}, bound_addresses {127.0.0.1:9001}
2017.04.24 01:00:49 INFO es[][o.e.discovery] [sonarqube] sonarqube/GPO7RRqHR8a8tfu1KfgVtw
But after a few seconds the error happens and the container exits. The first error is related to Elasticsearch:
2017.04.24 01:00:52 INFO es[][o.elasticsearch.node] [sonarqube] started
2017.04.24 01:00:52 INFO es[][o.e.gateway] [sonarqube] recovered [0] indices into cluster_state
2017.04.24 01:00:52 INFO app[][o.s.p.m.Monitor] Process[es] is up
2017.04.24 01:00:52 INFO app[][o.s.p.m.JavaProcessLauncher] Launch process[web]: /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=file:/dev/./urandom -Djava.io.tmpdir=/opt/sonarqube/temp -javaagent:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/management-agent.jar -cp ./lib/common/*:./lib/server/*:/opt/sonarqube/lib/jdbc/mysql/mysql-connector-java-5.1.39.jar org.sonar.server.app.WebServer /opt/sonarqube/temp/sq-process8670082336569494309properties
2017.04.24 01:00:53 INFO web[][o.s.p.ProcessEntryPoint] Starting web
2017.04.24 01:00:53 INFO web[][o.a.t.u.n.NioSelectorPool] Using a shared selector for servlet write/read
2017.04.24 01:00:54 INFO web[][o.e.plugins] [Immortus] modules [], plugins [], sites []
2017.04.24 01:00:54 INFO web[][o.s.s.e.EsClientProvider] Connected to local Elasticsearch: [127.0.0.1:9001]
2017.04.24 01:00:54 INFO web[][o.s.s.p.LogServerVersion] SonarQube Server / 6.3.0.19869 / 43ea4f4c43aa89d4c435017f86d0da254e115e6b
2017.04.24 01:00:54 INFO web[][o.sonar.db.Database] Create JDBC data source for jdbc:mysql://125.131.88.156:3306/sonar?characterEncoding=utf8&useUnicode=true&rewriteBatchedStatements=true
2017.04.24 01:00:55 ERROR web[][o.a.c.c.C.[.[.[/]] Exception sending context initialized event to listener instance of class org.sonar.server.platform.web.PlatformServletContextListener
org.sonar.api.utils.MessageException: Unsupported mysql version: 5.5. Minimal supported version is 5.6.
2017.04.24 01:00:55 ERROR web[][o.a.c.c.StandardContext] One or more listeners failed to start. Full details will be found in the appropriate container log file
2017.04.24 01:00:55 ERROR web[][o.a.c.c.StandardContext] Context [] startup failed due to previous errors
2017.04.24 01:00:55 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Immortus][[timer]]] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Thread.sleep(Native Method)
org.elasticsearch.threadpool.ThreadPool$EstimatedTimeThread.run(ThreadPool.java:747)
2017.04.24 01:00:55 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Immortus][scheduler][T#1]] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
2017.04.24 01:00:55 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Immortus][transport_client_worker][T#1]{New I/O worker #1}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
I think the JDBC URL is fine; I can access the database with sqllite.
Thanks for the answers.
I have changed my database from MariaDB 10.1.22 to MySQL 5.7. There are some posts about this problem, but it does not seem to be solved yet. For now, SonarQube cannot be used with some versions of MariaDB.
Possibly a database connectivity issue. Check your address for typos and make sure your credentials are valid.
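A quick way to check both, and the version mismatch shown in the log above, from the Docker host (a sketch; substitute your real host and credentials):
# Confirm the server is reachable and see which version it reports;
# SonarQube here requires MySQL >= 5.6, and MariaDB 10.x announces
# itself to MySQL connectors with a 5.5-compatible version string
mysql -h 111.222.33.444 -P 3306 -u sonar -p -e "SELECT VERSION();"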

Starting SonarQube on my remote Linux Ubuntu machine

I'm trying to start SonarQube on my Linux machine, but I'm getting this error:
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
2017.03.28 20:00:24 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonar/temp
2017.03.28 20:00:24 INFO app[][o.s.p.m.JavaProcessLauncher] Launch process[es]: /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Djava.awt.headless=true -Xmx1G -Xms256m -Xss256k -Djna.nosys=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=/opt/sonar/temp -javaagent:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/management-agent.jar -cp ./lib/common/*:./lib/search/* org.sonar.search.SearchServer /opt/sonar/temp/sq-process4414918112205828016properties
2017.03.28 20:00:25 INFO es[][o.s.p.ProcessEntryPoint] Starting es
2017.03.28 20:00:25 INFO es[][o.s.s.EsSettings] Elasticsearch listening on /127.0.0.1:9001
2017.03.28 20:00:26 INFO es[][o.elasticsearch.node] [sonarqube] version[2.3.3], pid[4801], build[218bdf1/2016-05-17T15:40:04Z]
2017.03.28 20:00:26 INFO es[][o.elasticsearch.node] [sonarqube] initializing ...
2017.03.28 20:00:26 INFO es[][o.e.plugins] [sonarqube] modules [], plugins [], sites []
2017.03.28 20:00:26 INFO es[][o.elasticsearch.env] [sonarqube] using [1] data paths, mounts [[/ (/dev/root)]], net usable_space [12.5gb], net total_space [19.4gb], spins? [possibly], types [ext4]
2017.03.28 20:00:26 INFO es[][o.elasticsearch.env] [sonarqube] heap size [1015.6mb], compressed ordinary object pointers [true]
2017.03.28 20:00:30 INFO es[][o.elasticsearch.node] [sonarqube] initialized
2017.03.28 20:00:30 INFO es[][o.elasticsearch.node] [sonarqube] starting ...
2017.03.28 20:00:30 INFO es[][o.e.transport] [sonarqube] publish_address {127.0.0.1:9001}, bound_addresses {127.0.0.1:9001}
2017.03.28 20:00:30 INFO es[][o.e.discovery] [sonarqube] sonarqube/upk_GmJgQPmBqNGykii2jw
2017.03.28 20:00:33 INFO es[][o.e.cluster.service] [sonarqube] new_master {sonarqube}{upk_GmJgQPmBqNGykii2jw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube, master=true}, reason: zen-disco-join(elected_as_master, [0] joins received)
2017.03.28 20:00:33 INFO es[][o.elasticsearch.node] [sonarqube] started
2017.03.28 20:00:33 INFO es[][o.e.gateway] [sonarqube] recovered [0] indices into cluster_state
2017.03.28 20:00:33 INFO app[][o.s.p.m.Monitor] Process[es] is up
2017.03.28 20:00:33 INFO app[][o.s.p.m.JavaProcessLauncher] Launch process[web]: /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.management.enabled=false -Djruby.compile.invokedynamic=false -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=/opt/sonar/temp -javaagent:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/management-agent.jar -cp ./lib/common/*:./lib/server/*:/opt/sonar/lib/jdbc/mysql/mysql-connector-java-5.1.39.jar org.sonar.server.app.WebServer /opt/sonar/temp/sq-process3450868362392661343properties
2017.03.28 20:00:34 INFO web[][o.s.p.ProcessEntryPoint] Starting web
2017.03.28 20:00:35 INFO web[][o.s.s.a.TomcatContexts] Webapp directory: /opt/sonar/web
2017.03.28 20:00:35 INFO web[][o.a.c.h.Http11NioProtocol] Initializing ProtocolHandler ["http-nio-127.0.0.1-9000"]
2017.03.28 20:00:35 INFO web[][o.a.t.u.n.NioSelectorPool] Using a shared selector for servlet write/read
2017.03.28 20:00:36 INFO web[][o.e.plugins] [Eternal Brain] modules [], plugins [], sites []
2017.03.28 20:00:38 INFO web[][o.s.s.e.EsClientProvider] Connected to local Elasticsearch: [127.0.0.1:9001]
2017.03.28 20:00:38 INFO web[][o.s.s.p.LogServerVersion] SonarQube Server / 6.1 / dc148a71a1c184ccad588b66251980c994879dff
2017.03.28 20:00:38 INFO web[][o.sonar.db.Database] Create JDBC data source for jdbc:mysql://localhost:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance
Tue Mar 28 20:00:38 UTC 2017 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
Tue Mar 28 20:00:38 UTC 2017 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
2017.03.28 20:00:40 ERROR web[][o.a.c.c.C.[.[.[/sonar]] Exception sending context initialized event to listener instance of class org.sonar.server.platform.web.PlatformServletContextListener
org.sonar.api.utils.MessageException: Database was upgraded to a more recent of SonarQube. Backup must probably be restored or db settings are incorrect.
2017.03.28 20:00:40 ERROR web[][o.a.c.c.StandardContext] One or more listeners failed to start. Full details will be found in the appropriate container log file
2017.03.28 20:00:40 ERROR web[][o.a.c.c.StandardContext] Context [/sonar] startup failed due to previous errors
2017.03.28 20:00:40 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [sonar] appears to have started a thread named [elasticsearch[Eternal Brain][[timer]]] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Thread.sleep(Native Method)
org.elasticsearch.threadpool.ThreadPool$EstimatedTimeThread.run(ThreadPool.java:719)
2017.03.28 20:00:40 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [sonar] appears to have started a thread named [elasticsearch[Eternal Brain][scheduler][T#1]] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
2017.03.28 20:00:40 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [sonar] appears to have started a thread named [elasticsearch[Eternal Brain][transport_client_worker][T#1]{New I/O worker #1}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
2017.03.28 20:00:40 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [sonar] appears to have started a thread named [elasticsearch[Eternal Brain][transport_client_worker][T#2]{New I/O worker #2}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
2017.03.28 20:00:40 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [sonar] appears to have started a thread named [elasticsearch[Eternal Brain][transport_client_boss][T#1]{New I/O boss #3}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
2017.03.28 20:00:40 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [sonar] appears to have started a thread named [elasticsearch[Eternal Brain][transport_client_timer][T#1]{Hashed wheel timer #1}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Thread.sleep(Native Method)
org.jboss.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:445)
org.jboss.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:364)
org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
java.lang.Thread.run(Thread.java:745)
2017.03.28 20:00:40 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [sonar] appears to have started a thread named [elasticsearch[Eternal Brain][generic][T#1]] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
2017.03.28 20:00:40 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [sonar] appears to have started a thread named [Abandoned connection cleanup thread] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Object.wait(Native Method)
java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:143)
com.mysql.jdbc.AbandonedConnectionCleanupThread.run(AbandonedConnectionCleanupThread.java:43)
2017.03.28 20:00:40 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [sonar] appears to have started a thread named [Timer-0] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Object.wait(Native Method)
java.util.TimerThread.mainLoop(Timer.java:552)
java.util.TimerThread.run(Timer.java:505)
2017.03.28 20:00:40 INFO web[][o.a.c.h.Http11NioProtocol] Starting ProtocolHandler ["http-nio-127.0.0.1-9000"]
2017.03.28 20:00:40 INFO web[][o.s.s.a.TomcatAccessLog] Web server is started
2017.03.28 20:00:40 INFO web[][o.s.s.a.EmbeddedTomcat] HTTP connector enabled on port 9000
2017.03.28 20:00:40 WARN web[][o.s.p.ProcessEntryPoint] Fail to start web
java.lang.IllegalStateException: Webapp did not start
at org.sonar.server.app.EmbeddedTomcat.isUp(EmbeddedTomcat.java:84) ~[sonar-server-6.1.jar:na]
at org.sonar.server.app.WebServer.isUp(WebServer.java:46) [sonar-server-6.1.jar:na]
at org.sonar.process.ProcessEntryPoint.launch(ProcessEntryPoint.java:105) ~[sonar-process-6.1.jar:na]
at org.sonar.server.app.WebServer.main(WebServer.java:67) [sonar-server-6.1.jar:na]
2017.03.28 20:00:40 INFO web[][o.a.c.h.Http11NioProtocol] Pausing ProtocolHandler ["http-nio-127.0.0.1-9000"]
2017.03.28 20:00:41 INFO web[][o.a.c.h.Http11NioProtocol] Stopping ProtocolHandler ["http-nio-127.0.0.1-9000"]
2017.03.28 20:00:42 INFO web[][o.a.c.h.Http11NioProtocol] Destroying ProtocolHandler ["http-nio-127.0.0.1-9000"]
2017.03.28 20:00:42 INFO web[][o.s.s.a.TomcatAccessLog] Web server is stopped
2017.03.28 20:00:42 INFO app[][o.s.p.m.Monitor] Process[es] is stopping
2017.03.28 20:00:42 INFO es[][o.s.p.StopWatcher] Stopping process
2017.03.28 20:00:42 INFO es[][o.elasticsearch.node] [sonarqube] stopping ...
2017.03.28 20:00:42 INFO es[][o.elasticsearch.node] [sonarqube] stopped
2017.03.28 20:00:42 INFO es[][o.elasticsearch.node] [sonarqube] closing ...
2017.03.28 20:00:42 INFO es[][o.elasticsearch.node] [sonarqube] closed
2017.03.28 20:00:43 INFO app[][o.s.p.m.Monitor] Process[es] is stopped
<-- Wrapper Stopped
The log suggests a memory leak, but I don't think that is the root cause.
I read in some posts that this can happen when Java isn't recognized.
The error "Database was upgraded to a more recent of SonarQube. Backup must probably be restored or db settings are incorrect." means that the DB schema has already been used by another version of SonarQube (> 6.1 in your case).
You should re-create the schema or install the correct version of SonarQube.
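If you go the re-create route, a minimal sketch for MySQL (assuming the database and user are both named sonar, as in a typical setup — adjust to whatever your sonar.properties points at):
-- WARNING: this destroys all existing SonarQube data
DROP DATABASE sonar;
CREATE DATABASE sonar CHARACTER SET utf8 COLLATE utf8_general_ci;
GRANT ALL PRIVILEGES ON sonar.* TO 'sonar'@'localhost';
FLUSH PRIVILEGES;
Then restart SonarQube so it can re-install its schema from scratch.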

spark-cassandra java.lang.NoClassDefFoundError: com/datastax/spark/connector/japi/CassandraJavaUtil

16/04/26 16:58:46 DEBUG ProtobufRpcEngine: Call: complete took 3ms
Exception in thread "main" java.lang.NoClassDefFoundError: com/datastax/spark/connector/japi/CassandraJavaUtil
at com.baitic.mcava.lecturahdfssaveincassandra.TratamientoCSV.main(TratamientoCSV.java:123)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: com.datastax.spark.connector.japi.CassandraJavaUtil
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 10 more
16/04/26 16:58:46 INFO SparkContext: Invoking stop() from shutdown hook
16/04/26 16:58:46 INFO SparkUI: Stopped Spark web UI at http://10.128.0.5:4040
16/04/26 16:58:46 INFO SparkDeploySchedulerBackend: Shutting down all executors
16/04/26 16:58:46 INFO SparkDeploySchedulerBackend: Asking each executor to shut down
16/04/26 16:58:46 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/04/26 16:58:46 INFO MemoryStore: MemoryStore cleared
16/04/26 16:58:46 INFO BlockManager: BlockManager stopped
16/04/26 16:58:46 INFO BlockManagerMaster: BlockManagerMaster stopped
16/04/26 16:58:46 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/04/26 16:58:46 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/04/26 16:58:46 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/04/26 16:58:46 INFO SparkContext: Successfully stopped SparkContext
16/04/26 16:58:46 INFO ShutdownHookManager: Shutdown hook called
16/04/26 16:58:46 INFO ShutdownHookManager: Deleting directory /srv/spark/tmp/spark-2bf57fa2-a2d5-4f8a-980c-994e56b61c44
16/04/26 16:58:46 DEBUG Client: stopping client from cache: org.apache.hadoop.ipc.Client#3fb9a67f
16/04/26 16:58:46 DEBUG Client: removing client from cache: org.apache.hadoop.ipc.Client#3fb9a67f
16/04/26 16:58:46 DEBUG Client: stopping actual client because no more references remain: org.apache.hadoop.ipc.Client#3fb9a67f
16/04/26 16:58:46 DEBUG Client: Stopping client
16/04/26 16:58:46 DEBUG Client: IPC Client (2107841088) connection to mcava-master/10.128.0.5:54310 from baiticpruebas2: closed
16/04/26 16:58:46 DEBUG Client: IPC Client (2107841088) connection to mcava-master/10.128.0.5:54310 from baiticpruebas2: stopped, remaining connections 0
16/04/26 16:58:46 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
I wrote this simple code:
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

// Input file plus the application and connector jars, all hosted on HDFS
String pathDatos = "hdfs://mcava-master:54310/srv/hadoop/data/spark/DatosApp/medidasSensorTratadas.txt";
String jarPath = "hdfs://mcava-master:54310/srv/hadoop/data/spark/original-LecturaHDFSsaveInCassandra-1.0-SNAPSHOT.jar";
String jar = "hdfs://mcava-master:54310/srv/hadoop/data/spark/spark-cassandra-connector-assembly-1.6.0-M1-4-g6f01cfe.jar";
String jar2 = "hdfs://mcava-master:54310/srv/hadoop/data/spark/spark-cassandra-connector-java-assembly-1.6.0-M1-4-g6f01cfe.jar";
String[] jars = { jarPath, jar2, jar };

SparkConf conf = new SparkConf()
        .setAppName("TratamientoCSV")
        .setJars(jars); // ship the jars to the executors
conf.set("spark.cassandra.connection.host", "10.128.0.5");
conf.set("spark.kryoserializer.buffer.max", "512");
conf.set("spark.kryoserializer.buffer", "256");

JavaSparkContext sc = new JavaSparkContext(conf);
JavaRDD<String> input = sc.textFile(pathDatos); // read the CSV lines from HDFS
I also put the path to the Cassandra connector in spark-defaults.conf:
spark.driver.extraClassPath hdfs://mcava-master:54310/srv/hadoop/data/spark/spark-cassandra-connector-java-assembly-1.6.0-M1-4-g6f01cfe.jar
spark.executor.extraClassPath hdfs://mcava-master:54310/srv/hadoop/data/spark/spark-cassandra-connector-java-assembly-1.6.0-M1-4-g6f01cfe.jar
I also passed the --jars flag with the path to the driver jar, but I always get the same error, and I don't understand why.
I'm working on Google Compute Engine.
Try adding the package when you submit your app:
$SPARK_HOME/bin/spark-submit --packages datastax:spark-cassandra-connector:1.6.0-M2-s_2.11 ....
I added this argument to solve the problem: --packages datastax:spark-cassandra-connector:1.6.0-M2-s_2.10.
At least for the 3.0+ Spark Cassandra Connector, the official assembly jar works well for me; it has all the necessary dependencies.
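For example, a minimal sketch of that approach (the version and Scala suffix here are illustrative — check Maven Central for the current assembly artifact):
$SPARK_HOME/bin/spark-submit \
  --packages com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.1.0 \
  --class com.baitic.mcava.lecturahdfssaveincassandra.TratamientoCSV \
  original-LecturaHDFSsaveInCassandra-1.0-SNAPSHOT.jar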
I solved the problem: I built a fat jar with all the dependencies, and then it is not necessary to reference the Cassandra connector at all, only the fat jar itself.
I used Spark in my Java program and had the same issue.
The problem was that I hadn't included spark-cassandra-connector in the Maven dependencies of my project:
<dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector_2.11</artifactId>
    <version>2.0.7</version> <!-- Check the current version in the Maven repo -->
</dependency>
After that I built a fat jar with all my dependencies, and it worked!
Maybe it will help someone.
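For completeness, a minimal sketch of a Maven Shade plugin configuration that produces such a fat jar (the plugin version is illustrative — use the current release):
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>3.2.4</version> <!-- illustrative; check for the current release -->
            <executions>
                <execution>
                    <!-- bind the shade goal to the package phase -->
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
After mvn package, the shaded jar in target/ contains the connector classes, so no extra --jars or --packages flags are needed at submit time.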

Resources