I'm trying to start SonarQube on my Linux machine, but I'm getting this error:
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
2017.03.28 20:00:24 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonar/temp
2017.03.28 20:00:24 INFO app[][o.s.p.m.JavaProcessLauncher] Launch process[es]: /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Djava.awt.headless=true -Xmx1G -Xms256m -Xss256k -Djna.nosys=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=/opt/sonar/temp -javaagent:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/management-agent.jar -cp ./lib/common/*:./lib/search/* org.sonar.search.SearchServer /opt/sonar/temp/sq-process4414918112205828016properties
2017.03.28 20:00:25 INFO es[][o.s.p.ProcessEntryPoint] Starting es
2017.03.28 20:00:25 INFO es[][o.s.s.EsSettings] Elasticsearch listening on /127.0.0.1:9001
2017.03.28 20:00:26 INFO es[][o.elasticsearch.node] [sonarqube] version[2.3.3], pid[4801], build[218bdf1/2016-05-17T15:40:04Z]
2017.03.28 20:00:26 INFO es[][o.elasticsearch.node] [sonarqube] initializing ...
2017.03.28 20:00:26 INFO es[][o.e.plugins] [sonarqube] modules [], plugins [], sites []
2017.03.28 20:00:26 INFO es[][o.elasticsearch.env] [sonarqube] using [1] data paths, mounts [[/ (/dev/root)]], net usable_space [12.5gb], net total_space [19.4gb], spins? [possibly], types [ext4]
2017.03.28 20:00:26 INFO es[][o.elasticsearch.env] [sonarqube] heap size [1015.6mb], compressed ordinary object pointers [true]
2017.03.28 20:00:30 INFO es[][o.elasticsearch.node] [sonarqube] initialized
2017.03.28 20:00:30 INFO es[][o.elasticsearch.node] [sonarqube] starting ...
2017.03.28 20:00:30 INFO es[][o.e.transport] [sonarqube] publish_address {127.0.0.1:9001}, bound_addresses {127.0.0.1:9001}
2017.03.28 20:00:30 INFO es[][o.e.discovery] [sonarqube] sonarqube/upk_GmJgQPmBqNGykii2jw
2017.03.28 20:00:33 INFO es[][o.e.cluster.service] [sonarqube] new_master {sonarqube}{upk_GmJgQPmBqNGykii2jw}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube, master=true}, reason: zen-disco-join(elected_as_master, [0] joins received)
2017.03.28 20:00:33 INFO es[][o.elasticsearch.node] [sonarqube] started
2017.03.28 20:00:33 INFO es[][o.e.gateway] [sonarqube] recovered [0] indices into cluster_state
2017.03.28 20:00:33 INFO app[][o.s.p.m.Monitor] Process[es] is up
2017.03.28 20:00:33 INFO app[][o.s.p.m.JavaProcessLauncher] Launch process[web]: /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.management.enabled=false -Djruby.compile.invokedynamic=false -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=/opt/sonar/temp -javaagent:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/management-agent.jar -cp ./lib/common/*:./lib/server/*:/opt/sonar/lib/jdbc/mysql/mysql-connector-java-5.1.39.jar org.sonar.server.app.WebServer /opt/sonar/temp/sq-process3450868362392661343properties
2017.03.28 20:00:34 INFO web[][o.s.p.ProcessEntryPoint] Starting web
2017.03.28 20:00:35 INFO web[][o.s.s.a.TomcatContexts] Webapp directory: /opt/sonar/web
2017.03.28 20:00:35 INFO web[][o.a.c.h.Http11NioProtocol] Initializing ProtocolHandler ["http-nio-127.0.0.1-9000"]
2017.03.28 20:00:35 INFO web[][o.a.t.u.n.NioSelectorPool] Using a shared selector for servlet write/read
2017.03.28 20:00:36 INFO web[][o.e.plugins] [Eternal Brain] modules [], plugins [], sites []
2017.03.28 20:00:38 INFO web[][o.s.s.e.EsClientProvider] Connected to local Elasticsearch: [127.0.0.1:9001]
2017.03.28 20:00:38 INFO web[][o.s.s.p.LogServerVersion] SonarQube Server / 6.1 / dc148a71a1c184ccad588b66251980c994879dff
2017.03.28 20:00:38 INFO web[][o.sonar.db.Database] Create JDBC data source for jdbc:mysql://localhost:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance
Tue Mar 28 20:00:38 UTC 2017 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
Tue Mar 28 20:00:38 UTC 2017 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
2017.03.28 20:00:40 ERROR web[][o.a.c.c.C.[.[.[/sonar]] Exception sending context initialized event to listener instance of class org.sonar.server.platform.web.PlatformServletContextListener
org.sonar.api.utils.MessageException: Database was upgraded to a more recent of SonarQube. Backup must probably be restored or db settings are incorrect.
2017.03.28 20:00:40 ERROR web[][o.a.c.c.StandardContext] One or more listeners failed to start. Full details will be found in the appropriate container log file
2017.03.28 20:00:40 ERROR web[][o.a.c.c.StandardContext] Context [/sonar] startup failed due to previous errors
2017.03.28 20:00:40 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [sonar] appears to have started a thread named [elasticsearch[Eternal Brain][[timer]]] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Thread.sleep(Native Method)
org.elasticsearch.threadpool.ThreadPool$EstimatedTimeThread.run(ThreadPool.java:719)
2017.03.28 20:00:40 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [sonar] appears to have started a thread named [elasticsearch[Eternal Brain][scheduler][T#1]] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
2017.03.28 20:00:40 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [sonar] appears to have started a thread named [elasticsearch[Eternal Brain][transport_client_worker][T#1]{New I/O worker #1}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
2017.03.28 20:00:40 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [sonar] appears to have started a thread named [elasticsearch[Eternal Brain][transport_client_worker][T#2]{New I/O worker #2}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
2017.03.28 20:00:40 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [sonar] appears to have started a thread named [elasticsearch[Eternal Brain][transport_client_boss][T#1]{New I/O boss #3}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
2017.03.28 20:00:40 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [sonar] appears to have started a thread named [elasticsearch[Eternal Brain][transport_client_timer][T#1]{Hashed wheel timer #1}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Thread.sleep(Native Method)
org.jboss.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:445)
org.jboss.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:364)
org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
java.lang.Thread.run(Thread.java:745)
2017.03.28 20:00:40 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [sonar] appears to have started a thread named [elasticsearch[Eternal Brain][generic][T#1]] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
2017.03.28 20:00:40 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [sonar] appears to have started a thread named [Abandoned connection cleanup thread] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Object.wait(Native Method)
java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:143)
com.mysql.jdbc.AbandonedConnectionCleanupThread.run(AbandonedConnectionCleanupThread.java:43)
2017.03.28 20:00:40 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [sonar] appears to have started a thread named [Timer-0] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Object.wait(Native Method)
java.util.TimerThread.mainLoop(Timer.java:552)
java.util.TimerThread.run(Timer.java:505)
2017.03.28 20:00:40 INFO web[][o.a.c.h.Http11NioProtocol] Starting ProtocolHandler ["http-nio-127.0.0.1-9000"]
2017.03.28 20:00:40 INFO web[][o.s.s.a.TomcatAccessLog] Web server is started
2017.03.28 20:00:40 INFO web[][o.s.s.a.EmbeddedTomcat] HTTP connector enabled on port 9000
2017.03.28 20:00:40 WARN web[][o.s.p.ProcessEntryPoint] Fail to start web
java.lang.IllegalStateException: Webapp did not start
at org.sonar.server.app.EmbeddedTomcat.isUp(EmbeddedTomcat.java:84) ~[sonar-server-6.1.jar:na]
at org.sonar.server.app.WebServer.isUp(WebServer.java:46) [sonar-server-6.1.jar:na]
at org.sonar.process.ProcessEntryPoint.launch(ProcessEntryPoint.java:105) ~[sonar-process-6.1.jar:na]
at org.sonar.server.app.WebServer.main(WebServer.java:67) [sonar-server-6.1.jar:na]
2017.03.28 20:00:40 INFO web[][o.a.c.h.Http11NioProtocol] Pausing ProtocolHandler ["http-nio-127.0.0.1-9000"]
2017.03.28 20:00:41 INFO web[][o.a.c.h.Http11NioProtocol] Stopping ProtocolHandler ["http-nio-127.0.0.1-9000"]
2017.03.28 20:00:42 INFO web[][o.a.c.h.Http11NioProtocol] Destroying ProtocolHandler ["http-nio-127.0.0.1-9000"]
2017.03.28 20:00:42 INFO web[][o.s.s.a.TomcatAccessLog] Web server is stopped
2017.03.28 20:00:42 INFO app[][o.s.p.m.Monitor] Process[es] is stopping
2017.03.28 20:00:42 INFO es[][o.s.p.StopWatcher] Stopping process
2017.03.28 20:00:42 INFO es[][o.elasticsearch.node] [sonarqube] stopping ...
2017.03.28 20:00:42 INFO es[][o.elasticsearch.node] [sonarqube] stopped
2017.03.28 20:00:42 INFO es[][o.elasticsearch.node] [sonarqube] closing ...
2017.03.28 20:00:42 INFO es[][o.elasticsearch.node] [sonarqube] closed
2017.03.28 20:00:43 INFO app[][o.s.p.m.Monitor] Process[es] is stopped
<-- Wrapper Stopped
The log shows me what looks like a memory leak, but I don't think that is the actual cause of the error.
I read in some posts that it happens because Java isn't recognized.
The error "Database was upgraded to a more recent of SonarQube. Backup must probably be restored or db settings are incorrect." means that the DB schema has already been used by another version of SonarQube (> 6.1 in your case).
You should re-create the schema or install the correct version of SonarQube.
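If you go the re-create route, here is a minimal sketch, assuming the default sonar/sonar credentials (they are not shown in your log) and that you accept losing all existing analysis data:

mysql -u root -p <<'SQL'
-- WARNING: this permanently deletes all existing SonarQube data
DROP DATABASE IF EXISTS sonar;
CREATE DATABASE sonar CHARACTER SET utf8 COLLATE utf8_general_ci;
GRANT ALL PRIVILEGES ON sonar.* TO 'sonar'@'localhost' IDENTIFIED BY 'sonar';
FLUSH PRIVILEGES;
SQL

Then restart SonarQube (e.g. /opt/sonar/bin/linux-x86-64/sonar.sh restart) so it populates the fresh schema on first start.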
I am running CDH 5.16 standalone single-node on a RHEL 7 server.
I have written simple Spark code that reads a text file from HDFS and loads it as a Parquet file in a separate location in HDFS. But whenever I run this code on the server (I use SBT to build the jar and deploy it to the cluster using spark-submit), the following error is thrown:
19/06/07 12:56:04 INFO spark.SparkContext: Running Spark version 1.6.0
19/06/07 12:56:04 INFO spark.SecurityManager: Changing view acls to: ak_bng
19/06/07 12:56:04 INFO spark.SecurityManager: Changing modify acls to: ak_bng
19/06/07 12:56:04 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ak_bng); users with modify permissions: Set(ak_bng)
19/06/07 12:56:05 INFO util.Utils: Successfully started service 'sparkDriver' on port 44220.
19/06/07 12:56:05 INFO slf4j.Slf4jLogger: Slf4jLogger started
19/06/07 12:56:05 INFO Remoting: Starting remoting
19/06/07 12:56:05 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@10.188.223.5:36304]
19/06/07 12:56:05 INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkDriverActorSystem@10.188.223.5:36304]
19/06/07 12:56:05 INFO util.Utils: Successfully started service 'sparkDriverActorSystem' on port 36304.
19/06/07 12:56:05 INFO spark.SparkEnv: Registering MapOutputTracker
19/06/07 12:56:05 INFO spark.SparkEnv: Registering BlockManagerMaster
19/06/07 12:56:05 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-c38a27e3-c483-4f56-ab7f-56e4c1be0832
19/06/07 12:56:05 INFO storage.MemoryStore: MemoryStore started with capacity 530.0 MB
19/06/07 12:56:06 INFO spark.SparkEnv: Registering OutputCommitCoordinator
19/06/07 12:56:06 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
19/06/07 12:56:06 INFO ui.SparkUI: Started SparkUI at http://10.188.223.5:4040
19/06/07 12:56:06 INFO spark.SparkContext: Added JAR file:/home/ak_bng/spark_jars/Simple_Project-assembly-1.0.jar at spark://10.188.223.5:44220/jars/Simple_Project-assembly-1.0.jar with timestamp 1559892366578
19/06/07 12:56:06 INFO executor.Executor: Starting executor ID driver on host localhost
19/06/07 12:56:06 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 46170.
19/06/07 12:56:06 INFO netty.NettyBlockTransferService: Server created on 46170
19/06/07 12:56:06 INFO storage.BlockManager: external shuffle service port = 7337
19/06/07 12:56:06 INFO storage.BlockManagerMaster: Trying to register BlockManager
19/06/07 12:56:06 INFO storage.BlockManagerMasterEndpoint: Registering block manager localhost:46170 with 530.0 MB RAM, BlockManagerId(driver, localhost, 46170)
19/06/07 12:56:06 INFO storage.BlockManagerMaster: Registered BlockManager
19/06/07 12:56:07 INFO scheduler.EventLoggingListener: Logging events to hdfs://indelsrv185.in.kworld.kpmg.com:8020/user/spark/applicationHistory/local-1559892366602
19/06/07 12:56:07 INFO spark.SparkContext: Registered listener com.cloudera.spark.lineage.ClouderaNavigatorListener
19/06/07 12:56:08 INFO parquet.ParquetRelation: Listing hdfs://10.188.223.5:8020/user/ak_bng/products on driver
19/06/07 12:56:08 INFO parquet.ParquetRelation: Listing hdfs://10.188.223.5:8020/user/ak_bng/categories on driver
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:312)
at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:219)
at org.xerial.snappy.Snappy.<clinit>(Snappy.java:44)
at org.apache.spark.io.SnappyCompressionCodec$.liftedTree1$1(CompressionCodec.scala:169)
at org.apache.spark.io.SnappyCompressionCodec$.org$apache$spark$io$SnappyCompressionCodec$$version$lzycompute(CompressionCodec.scala:168)
at org.apache.spark.io.SnappyCompressionCodec$.org$apache$spark$io$SnappyCompressionCodec$$version(CompressionCodec.scala:168)
at org.apache.spark.io.SnappyCompressionCodec.<init>(CompressionCodec.scala:152)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:65)
at org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:74)
at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:81)
at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:63)
at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1334)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$.apply(DataSourceStrategy.scala:126)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:59)
at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:48)
at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:46)
at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:53)
at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:53)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$toString$5.apply(QueryExecution.scala:81)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$toString$5.apply(QueryExecution.scala:81)
at org.apache.spark.sql.execution.QueryExecution.stringOrError(QueryExecution.scala:61)
at org.apache.spark.sql.execution.QueryExecution.toString(QueryExecution.scala:81)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:50)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation.run(InsertIntoHadoopFsRelation.scala:106)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:56)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:56)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:256)
at org.apache.spark.sql.DataFrameWriter.dataSource$lzycompute$1(DataFrameWriter.scala:181)
at org.apache.spark.sql.DataFrameWriter.org$apache$spark$sql$DataFrameWriter$$dataSource$1(DataFrameWriter.scala:181)
at org.apache.spark.sql.DataFrameWriter$$anonfun$save$1.apply$mcV$sp(DataFrameWriter.scala:188)
at org.apache.spark.sql.DataFrameWriter.executeAndCallQEListener(DataFrameWriter.scala:154)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:188)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:172)
at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:370)
at SimpleApp$.main(SimpleApp.scala:169)
at SimpleApp.main(SimpleApp.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:730)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.UnsatisfiedLinkError: /tmp/snappy-1.0.4.1-libsnappyjava.so: /tmp/snappy-1.0.4.1-libsnappyjava.so: failed to map segment from shared object: Operation not permitted
at java.lang.ClassLoader$NativeLibrary.load(Native Method)
at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941)
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824)
at java.lang.Runtime.load0(Runtime.java:809)
at java.lang.System.load(System.java:1086)
at org.xerial.snappy.SnappyNativeLoader.load(SnappyNativeLoader.java:39)
... 65 more
Exception in thread "main" java.lang.IllegalArgumentException: java.lang.NoClassDefFoundError: Could not initialize class org.xerial.snappy.Snappy
at org.apache.spark.io.SnappyCompressionCodec$.liftedTree1$1(CompressionCodec.scala:171)
at org.apache.spark.io.SnappyCompressionCodec$.org$apache$spark$io$SnappyCompressionCodec$$version$lzycompute(CompressionCodec.scala:168)
at org.apache.spark.io.SnappyCompressionCodec$.org$apache$spark$io$SnappyCompressionCodec$$version(CompressionCodec.scala:168)
at org.apache.spark.io.SnappyCompressionCodec.<init>(CompressionCodec.scala:152)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:65)
at org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:74)
at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:81)
at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:63)
at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1334)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$.apply(DataSourceStrategy.scala:126)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:59)
at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:48)
at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:46)
at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:53)
at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:53)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:51)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation.run(InsertIntoHadoopFsRelation.scala:106)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:56)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:56)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:256)
at org.apache.spark.sql.DataFrameWriter.dataSource$lzycompute$1(DataFrameWriter.scala:181)
at org.apache.spark.sql.DataFrameWriter.org$apache$spark$sql$DataFrameWriter$$dataSource$1(DataFrameWriter.scala:181)
at org.apache.spark.sql.DataFrameWriter$$anonfun$save$1.apply$mcV$sp(DataFrameWriter.scala:188)
at org.apache.spark.sql.DataFrameWriter.executeAndCallQEListener(DataFrameWriter.scala:154)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:188)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:172)
at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:370)
at SimpleApp$.main(SimpleApp.scala:169)
at SimpleApp.main(SimpleApp.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:730)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.xerial.snappy.Snappy
at org.apache.spark.io.SnappyCompressionCodec$.liftedTree1$1(CompressionCodec.scala:169)
... 53 more
19/06/07 12:56:08 INFO spark.SparkContext: Invoking stop() from shutdown hook
19/06/07 12:56:08 INFO ui.SparkUI: Stopped Spark web UI at http://10.188.223.5:4040
19/06/07 12:56:08 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
19/06/07 12:56:09 INFO storage.MemoryStore: MemoryStore cleared
19/06/07 12:56:09 INFO storage.BlockManager: BlockManager stopped
19/06/07 12:56:09 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
19/06/07 12:56:09 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
19/06/07 12:56:09 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
19/06/07 12:56:09 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
19/06/07 12:56:09 INFO Remoting: Remoting shut down
19/06/07 12:56:09 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
19/06/07 12:56:09 INFO spark.SparkContext: Successfully stopped SparkContext
19/06/07 12:56:09 INFO util.ShutdownHookManager: Shutdown hook called
19/06/07 12:56:09 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-111712ef-39a8-41b6-bf6d-5d317d954fa1
The spark-submit command:
spark-submit --class SimpleApp --master local[8] /home/ak_bng/spark_jars/Simple_Project-assembly-1.0.jar
I went through a few links to resolve this issue (Snappy Compression not working due to tmp folder privileges, Apache Spark - Parquet / Snappy compression error) but none of them really provided a solution.
I had run Spark on HDFS (a separate installation) successfully, without any errors, before. The problem started once CDH was installed.
I am new to setting up clusters and don't quite understand what the issue is here or how to resolve it. Can someone please shed some light on this?
I am using:
CDH 5.16
Spark 1.6.0
Server OS: RHEL 7
Hadoop 2.6
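"failed to map segment from shared object: Operation not permitted" often means the JVM cannot map an executable page from /tmp, typically because /tmp is mounted noexec. A commonly suggested workaround sketch, assuming that is the cause here (the alternate temp directory is an illustrative choice, not something from the original setup):

# Check whether /tmp is a separate mount carrying the noexec flag:
mount | grep /tmp

# If so, point the JVM and snappy-java at an exec-permitted directory
# (/home/ak_bng/tmp is an assumed location; any exec-permitted dir works):
mkdir -p /home/ak_bng/tmp
spark-submit --class SimpleApp --master local[8] \
  --driver-java-options "-Djava.io.tmpdir=/home/ak_bng/tmp -Dorg.xerial.snappy.tempdir=/home/ak_bng/tmp" \
  /home/ak_bng/spark_jars/Simple_Project-assembly-1.0.jar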
I am trying to run a Docker container which contains SonarQube.
After building the container, I ran the command below to start it. For the first few moments it looks fine (I can see an Up status in docker ps -a), but then it exits automatically.
The command I typed:
docker run -d --name sonarqube \
-p 9000:9000 -p 9092:9092 \
-e SONARQUBE_JDBC_USERNAME=sonar \
-e SONARQUBE_JDBC_PASSWORD=sonar \
-e SONARQUBE_JDBC_URL="jdbc:mysql://111.222.33.444:3306/sonar?characterEncoding=utf8&useUnicode=true&rewriteBatchedStatements=true" \
sonarqube
And the following is the failure log:
2017.04.21 06:39:37 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Pip the Troll][[timer]]] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Thread.sleep(Native Method)
org.elasticsearch.threadpool.ThreadPool$EstimatedTimeThread.run(ThreadPool.java:747)
2017.04.21 06:39:37 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Pip the Troll][scheduler][T#1]] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
2017.04.21 06:39:37 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Pip the Troll][transport_client_worker][T#1]{New I/O worker #1}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
2017.04.21 06:39:37 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Pip the Troll][transport_client_worker][T#2]{New I/O worker #2}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
2017.04.21 06:39:37 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Pip the Troll][transport_client_worker][T#3]{New I/O worker #3}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
After all, it seems the main cause of the automatic shutdown is a memory leak. How can I fix this?
FYI, without the JDBC information, it works fine.
================= EDIT ==================
Maybe I should provide more information to help fix this.
When I run docker run and then immediately docker logs sonarqube, the log looks like this:
[root@DCSF-DEV08 ice]# docker logs sonarqube
01:00:46.930 [main] WARN org.sonar.application.JdbcSettings - JDBC URL is recommended to have the property 'useConfigs=maxPerformance'
2017.04.24 01:00:47 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/temp
2017.04.24 01:00:47 INFO app[][o.s.p.m.JavaProcessLauncher] Launch process[es]: /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Djava.awt.headless=true -Xmx1G -Xms256m -Xss256k -Djna.nosys=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=/opt/sonarqube/temp -javaagent:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/management-agent.jar -cp ./lib/common/*:./lib/search/* org.sonar.search.SearchServer /opt/sonarqube/temp/sq-process7897977644818879465properties
2017.04.24 01:00:47 INFO es[][o.s.p.ProcessEntryPoint] Starting es
2017.04.24 01:00:47 INFO es[][o.s.s.EsSettings] Elasticsearch listening on /127.0.0.1:9001
2017.04.24 01:00:47 INFO es[][o.elasticsearch.node] [sonarqube] version[2.4.4], pid[45], build[fcbb46d/2017-01-03T11:33:16Z]
2017.04.24 01:00:47 INFO es[][o.elasticsearch.node] [sonarqube] initializing ...
2017.04.24 01:00:47 INFO es[][o.e.plugins] [sonarqube] modules [], plugins [], sites []
2017.04.24 01:00:47 INFO es[][o.elasticsearch.env] [sonarqube] using [1] data paths, mounts [[/opt/sonarqube/data (/dev/mapper/centos-root)]], net usable_space [7gb], net total_space [49.9gb], spins? [possibly], types [xfs]
2017.04.24 01:00:47 INFO es[][o.elasticsearch.env] [sonarqube] heap size [989.8mb], compressed ordinary object pointers [true]
2017.04.24 01:00:49 INFO es[][o.elasticsearch.node] [sonarqube] initialized
2017.04.24 01:00:49 INFO es[][o.elasticsearch.node] [sonarqube] starting ...
2017.04.24 01:00:49 INFO es[][o.e.transport] [sonarqube] publish_address {127.0.0.1:9001}, bound_addresses {127.0.0.1:9001}
2017.04.24 01:00:49 INFO es[][o.e.discovery] [sonarqube] sonarqube/GPO7RRqHR8a8tfu1KfgVtw
But after a few seconds, the error happens and the container exits. The first error is something related to Elasticsearch.
2017.04.24 01:00:52 INFO es[][o.elasticsearch.node] [sonarqube] started
2017.04.24 01:00:52 INFO es[][o.e.gateway] [sonarqube] recovered [0] indices into cluster_state
2017.04.24 01:00:52 INFO app[][o.s.p.m.Monitor] Process[es] is up
2017.04.24 01:00:52 INFO app[][o.s.p.m.JavaProcessLauncher] Launch process[web]: /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=file:/dev/./urandom -Djava.io.tmpdir=/opt/sonarqube/temp -javaagent:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/management-agent.jar -cp ./lib/common/*:./lib/server/*:/opt/sonarqube/lib/jdbc/mysql/mysql-connector-java-5.1.39.jar org.sonar.server.app.WebServer /opt/sonarqube/temp/sq-process8670082336569494309properties
2017.04.24 01:00:53 INFO web[][o.s.p.ProcessEntryPoint] Starting web
2017.04.24 01:00:53 INFO web[][o.a.t.u.n.NioSelectorPool] Using a shared selector for servlet write/read
2017.04.24 01:00:54 INFO web[][o.e.plugins] [Immortus] modules [], plugins [], sites []
2017.04.24 01:00:54 INFO web[][o.s.s.e.EsClientProvider] Connected to local Elasticsearch: [127.0.0.1:9001]
2017.04.24 01:00:54 INFO web[][o.s.s.p.LogServerVersion] SonarQube Server / 6.3.0.19869 / 43ea4f4c43aa89d4c435017f86d0da254e115e6b
2017.04.24 01:00:54 INFO web[][o.sonar.db.Database] Create JDBC data source for jdbc:mysql://125.131.88.156:3306/sonar?characterEncoding=utf8&useUnicode=true&rewriteBatchedStatements=true
2017.04.24 01:00:55 ERROR web[][o.a.c.c.C.[.[.[/]] Exception sending context initialized event to listener instance of class org.sonar.server.platform.web.PlatformServletContextListener
org.sonar.api.utils.MessageException: Unsupported mysql version: 5.5. Minimal supported version is 5.6.
2017.04.24 01:00:55 ERROR web[][o.a.c.c.StandardContext] One or more listeners failed to start. Full details will be found in the appropriate container log file
2017.04.24 01:00:55 ERROR web[][o.a.c.c.StandardContext] Context [] startup failed due to previous errors
2017.04.24 01:00:55 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Immortus][[timer]]] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Thread.sleep(Native Method)
org.elasticsearch.threadpool.ThreadPool$EstimatedTimeThread.run(ThreadPool.java:747)
2017.04.24 01:00:55 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Immortus][scheduler][T#1]] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
2017.04.24 01:00:55 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [elasticsearch[Immortus][transport_client_worker][T#1]{New I/O worker #1}] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
org.jboss.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:68)
org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(AbstractNioSelector.java:434)
org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:212)
org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
I think the JDBC URL looks fine. I can access the database with sqllite.
Thanks for answering my questions about these errors.
I have changed my database from MariaDB 10.1.22 to MySQL 5.7. There are some posts about this problem, but it does not seem to be solved yet. For now, SonarQube cannot be used with some versions of MariaDB.
Possibly a database connectivity issue. Check your address for typos and make sure your credentials are valid.
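A quick sketch for testing both, using the address and credentials from the docker run command above; it also prints the version string the server actually reports, which matters here because MariaDB 10.x identifies itself with a '5.5.5-' prefix, and that would explain the "Unsupported mysql version: 5.5" error:

# From the Docker host; prints the server version string on success:
mysql -h 111.222.33.444 -P 3306 -u sonar -psonar -e "SELECT VERSION();"

# Or from a throwaway container if no mysql client is installed locally:
docker run --rm mysql:5.7 mysql -h 111.222.33.444 -P 3306 -u sonar -psonar -e "SELECT VERSION();"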
I am running a Spark application on a YARN cluster (on AWS EMR). The application seems to be killed and I want to find the cause. I am trying to understand the YARN info given in the following screen.
The diagnostic line in the screen seems to show that YARN killed the app because of the memory limit:
Diagnostics: Container [pid=1540,containerID=container_1488651686158_0012_02_000001] is running beyond physical memory limits. Current usage: 1.6 GB of 1.4 GB physical memory used; 3.6 GB of 6.9 GB virtual memory used. Killing container.
However, the appattempt log shows a completely different exception, something related to IO/network. My question is: should I trust the diagnostic in the screen or the appattempt log? Is the IO exception causing the kill, or did running out of memory cause the IO exception in the appattempt log? Is there another log/diagnostic I should look at? Thanks.
17/03/04 21:59:02 ERROR Utils: Uncaught exception in thread task-result-getter-0
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:202)
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:218)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190)
at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
at scala.concurrent.Await$.result(package.scala:190)
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:190)
at org.apache.spark.network.BlockTransferService.fetchBlockSync(BlockTransferService.scala:104)
at org.apache.spark.storage.BlockManager.getRemoteBytes(BlockManager.scala:579)
at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply$mcV$sp(TaskResultGetter.scala:82)
at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply(TaskResultGetter.scala:63)
at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply(TaskResultGetter.scala:63)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1951)
at org.apache.spark.scheduler.TaskResultGetter$$anon$3.run(TaskResultGetter.scala:62)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Exception in thread "task-result-getter-0" java.lang.Error: java.lang.InterruptedException
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1148)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:202)
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:218)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190)
at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
at scala.concurrent.Await$.result(package.scala:190)
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:190)
at org.apache.spark.network.BlockTransferService.fetchBlockSync(BlockTransferService.scala:104)
at org.apache.spark.storage.BlockManager.getRemoteBytes(BlockManager.scala:579)
at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply$mcV$sp(TaskResultGetter.scala:82)
at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply(TaskResultGetter.scala:63)
at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply(TaskResultGetter.scala:63)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1951)
at org.apache.spark.scheduler.TaskResultGetter$$anon$3.run(TaskResultGetter.scala:62)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
... 2 more
17/03/04 21:59:02 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/03/04 21:59:02 ERROR TransportResponseHandler: Still have 1 requests outstanding when connection from ip-172-31-9-207.ec2.internal/172.31.9.207:38437 is closed
17/03/04 21:59:02 INFO RetryingBlockFetcher: Retrying fetch (1/3) for 1 outstanding blocks after 5000 ms
17/03/04 21:59:02 ERROR DiskBlockManager: Exception while deleting local spark dir: /mnt/yarn/usercache/hadoop/appcache/application_1488651686158_0012/blockmgr-941a13d8-1b31-4347-bdec-180125b6f4ca
java.io.IOException: Failed to delete: /mnt/yarn/usercache/hadoop/appcache/application_1488651686158_0012/blockmgr-941a13d8-1b31-4347-bdec-180125b6f4ca
at org.apache.spark.util.Utils$.deleteRecursively(Utils.scala:1010)
at org.apache.spark.storage.DiskBlockManager$$anonfun$org$apache$spark$storage$DiskBlockManager$$doStop$1.apply(DiskBlockManager.scala:169)
at org.apache.spark.storage.DiskBlockManager$$anonfun$org$apache$spark$storage$DiskBlockManager$$doStop$1.apply(DiskBlockManager.scala:165)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at org.apache.spark.storage.DiskBlockManager.org$apache$spark$storage$DiskBlockManager$$doStop(DiskBlockManager.scala:165)
at org.apache.spark.storage.DiskBlockManager.stop(DiskBlockManager.scala:160)
at org.apache.spark.storage.BlockManager.stop(BlockManager.scala:1361)
at org.apache.spark.SparkEnv.stop(SparkEnv.scala:89)
at org.apache.spark.SparkContext$$anonfun$stop$11.apply$mcV$sp(SparkContext.scala:1842)
at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1283)
at org.apache.spark.SparkContext.stop(SparkContext.scala:1841)
at org.apache.spark.SparkContext$$anonfun$2.apply$mcV$sp(SparkContext.scala:581)
at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:216)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1951)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
at scala.util.Try$.apply(Try.scala:192)
at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
17/03/04 21:59:02 INFO MemoryStore: MemoryStore cleared
17/03/04 21:59:02 INFO BlockManager: BlockManager stopped
17/03/04 21:59:02 INFO BlockManagerMaster: BlockManagerMaster stopped
17/03/04 21:59:02 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/03/04 21:59:02 ERROR Utils: Uncaught exception in thread Thread-3
java.lang.NoClassDefFoundError: Could not initialize class java.nio.file.FileSystems$DefaultFileSystemHolder
at java.nio.file.FileSystems.getDefault(FileSystems.java:176)
at java.nio.file.Paths.get(Paths.java:138)
at org.apache.spark.util.Utils$.isSymlink(Utils.scala:1021)
at org.apache.spark.util.Utils$.deleteRecursively(Utils.scala:991)
at org.apache.spark.SparkEnv.stop(SparkEnv.scala:102)
at org.apache.spark.SparkContext$$anonfun$stop$11.apply$mcV$sp(SparkContext.scala:1842)
at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1283)
at org.apache.spark.SparkContext.stop(SparkContext.scala:1841)
at org.apache.spark.SparkContext$$anonfun$2.apply$mcV$sp(SparkContext.scala:581)
at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:216)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1951)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
at scala.util.Try$.apply(Try.scala:192)
at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
17/03/04 21:59:02 WARN ShutdownHookManager: ShutdownHook '$anon$2' failed, java.lang.NoClassDefFoundError: Could not initialize class java.nio.file.FileSystems$DefaultFileSystemHolder
java.lang.NoClassDefFoundError: Could not initialize class java.nio.file.FileSystems$DefaultFileSystemHolder
at java.nio.file.FileSystems.getDefault(FileSystems.java:176)
at java.nio.file.Paths.get(Paths.java:138)
at org.apache.spark.util.Utils$.isSymlink(Utils.scala:1021)
at org.apache.spark.util.Utils$.deleteRecursively(Utils.scala:991)
at org.apache.spark.SparkEnv.stop(SparkEnv.scala:102)
at org.apache.spark.SparkContext$$anonfun$stop$11.apply$mcV$sp(SparkContext.scala:1842)
at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1283)
at org.apache.spark.SparkContext.stop(SparkContext.scala:1841)
at org.apache.spark.SparkContext$$anonfun$2.apply$mcV$sp(SparkContext.scala:581)
at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:216)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1951)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
at scala.util.Try$.apply(Try.scala:192)
at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
The information in your screenshot is the most relevant: your ApplicationMaster container ran out of memory. You need to increase yarn.app.mapreduce.am.resource.mb, which is set in mapred-site.xml. I recommend a value of 2000, since that will usually accommodate running Spark and MapReduce applications at scale.
The container was killed (its memory exceeded the physical memory limits), so any attempt to reach that container afterwards fails.
YARN is fine for an overall view of the process, but you should prefer the Spark History Server to analyse your job in more depth (check for unbalanced memory usage there).
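For reference, a sketch of what that entry would look like in mapred-site.xml (the value is in MB; 2000 is the suggested starting point, so tune it to your workload):
<property>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>2000</value>
</property>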
I was trying to deploy SonarQube on a remote Ubuntu machine. I started the server, and the status check reported 'SonarQube is running'. But after a few minutes, the server shut down on its own. I got this exception:
2016.11.08 16:41:53 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [Abandoned connection cleanup thread] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Object.wait(Native Method)
java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:143)
com.mysql.jdbc.AbandonedConnectionCleanupThread.run(AbandonedConnectionCleanupThread.java:43)
2016.11.08 16:41:53 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [Timer-0] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Object.wait(Native Method)
java.util.TimerThread.mainLoop(Timer.java:552)
java.util.TimerThread.run(Timer.java:505)
2016.11.08 16:41:53 INFO web[][o.a.c.h.Http11NioProtocol] Starting ProtocolHandler ["http-nio-127.0.0.1-9000"]
2016.11.08 16:41:53 INFO web[][o.s.s.a.TomcatAccessLog] Web server is started
2016.11.08 16:41:53 INFO web[][o.s.s.a.EmbeddedTomcat] HTTP connector enabled on port 9000
2016.11.08 16:41:53 WARN web[][o.s.p.ProcessEntryPoint] Fail to start web
java.lang.IllegalStateException: Webapp did not start
at org.sonar.server.app.EmbeddedTomcat.isUp(EmbeddedTomcat.java:84) ~[sonar-server-6.1.jar:na]
at org.sonar.server.app.WebServer.isUp(WebServer.java:46) [sonar-server-6.1.jar:na]
at org.sonar.process.ProcessEntryPoint.launch(ProcessEntryPoint.java:105) ~[sonar-process-6.1.jar:na]
at org.sonar.server.app.WebServer.main(WebServer.java:67) [sonar-server-6.1.jar:na]
2016.11.08 16:41:53 INFO web[][o.a.c.h.Http11NioProtocol] Pausing ProtocolHandler ["http-nio-127.0.0.1-9000"]
2016.11.08 16:41:54 INFO web[][o.a.c.h.Http11NioProtocol] Stopping ProtocolHandler ["http-nio-127.0.0.1-9000"]
2016.11.08 16:41:54 INFO web[][o.a.c.h.Http11NioProtocol] Destroying ProtocolHandler ["http-nio-127.0.0.1-9000"]
2016.11.08 16:41:54 INFO web[][o.s.s.a.TomcatAccessLog] Web server is stopped
2016.11.08 16:41:54 INFO app[][o.s.p.m.Monitor] Process[es] is stopping
2016.11.08 16:41:55 INFO es[][o.s.p.StopWatcher] Stopping process
2016.11.08 16:41:55 INFO es[][o.elasticsearch.node] [sonarqube] stopping ...
2016.11.08 16:41:55 INFO es[][o.elasticsearch.node] [sonarqube] stopped
2016.11.08 16:41:55 INFO es[][o.elasticsearch.node] [sonarqube] closing ...
2016.11.08 16:41:55 INFO es[][o.elasticsearch.node] [sonarqube] closed
Below are my versions:
Java: JDK 1.8.0_111
Ubuntu: Ubuntu 14.04
MySQL: 5.6.34
SonarQube: 6.1
Below are the full logs:
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
WrapperSimpleApp: Unable to locate the class org.sonar.application.App: java.lang.UnsupportedClassVersionError: org/sonar/application/App : Unsupported major.minor version 52.0
WrapperSimpleApp Usage:
java org.tanukisoftware.wrapper.WrapperSimpleApp {app_class} [app_arguments]
Where:
app_class: The fully qualified class name of the application to run.
app_arguments: The arguments that would normally be passed to the
application.
<-- Wrapper Stopped
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
WrapperSimpleApp: Unable to locate the class org.sonar.application.App: java.lang.UnsupportedClassVersionError: org/sonar/application/App : Unsupported major.minor version 52.0
WrapperSimpleApp Usage:
java org.tanukisoftware.wrapper.WrapperSimpleApp {app_class} [app_arguments]
Where:
app_class: The fully qualified class name of the application to run.
app_arguments: The arguments that would normally be passed to the
application.
<-- Wrapper Stopped
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
WrapperSimpleApp: Unable to locate the class org.sonar.application.App: java.lang.UnsupportedClassVersionError: org/sonar/application/App : Unsupported major.minor version 52.0
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
2016.11.08 21:14:58 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [Abandoned connection cleanup thread] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Object.wait(Native Method)
java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:143)
com.mysql.jdbc.AbandonedConnectionCleanupThread.run(AbandonedConnectionCleanupThread.java:43)
2016.11.08 21:14:58 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [Timer-0] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Object.wait(Native Method)
java.util.TimerThread.mainLoop(Timer.java:552)
java.util.TimerThread.run(Timer.java:505)
2016.11.08 21:14:58 INFO web[][o.a.c.h.Http11NioProtocol] Starting ProtocolHandler ["http-nio-127.0.0.1-9000"]
2016.11.08 21:14:58 INFO web[][o.s.s.a.TomcatAccessLog] Web server is started
2016.11.08 21:14:58 INFO web[][o.s.s.a.EmbeddedTomcat] HTTP connector enabled on port 9000
2016.11.08 21:14:58 WARN web[][o.s.p.ProcessEntryPoint] Fail to start web
java.lang.IllegalStateException: Webapp did not start
at org.sonar.server.app.EmbeddedTomcat.isUp(EmbeddedTomcat.java:84) ~[sonar-server-6.1.jar:na]
at org.sonar.server.app.WebServer.isUp(WebServer.java:46) [sonar-server-6.1.jar:na]
at org.sonar.process.ProcessEntryPoint.launch(ProcessEntryPoint.java:105) ~[sonar-process-6.1.jar:na]
at org.sonar.server.app.WebServer.main(WebServer.java:67) [sonar-server-6.1.jar:na]
2016.11.08 21:14:58 INFO web[][o.a.c.h.Http11NioProtocol] Pausing ProtocolHandler ["http-nio-127.0.0.1-9000"]
2016.11.08 21:14:59 INFO web[][o.a.c.h.Http11NioProtocol] Stopping ProtocolHandler ["http-nio-127.0.0.1-9000"]
2016.11.08 21:14:59 INFO web[][o.a.c.h.Http11NioProtocol] Destroying ProtocolHandler ["http-nio-127.0.0.1-9000"]
2016.11.08 21:14:59 INFO web[][o.s.s.a.TomcatAccessLog] Web server is stopped
2016.11.08 21:14:59 INFO app[][o.s.p.m.Monitor] Process[es] is stopping
2016.11.08 21:15:00 INFO es[][o.s.p.StopWatcher] Stopping process
2016.11.08 21:15:00 INFO es[][o.elasticsearch.node] [sonarqube] stopping ...
2016.11.08 21:15:00 INFO es[][o.elasticsearch.node] [sonarqube] stopped
2016.11.08 21:15:00 INFO es[][o.elasticsearch.node] [sonarqube] closing ...
2016.11.08 21:15:00 INFO es[][o.elasticsearch.node] [sonarqube] closed
2016.11.08 21:15:00 INFO app[][o.s.p.m.Monitor] Process[es] is stopped
The famous message "Unsupported major.minor version 52.0" means that the JVM running SonarQube is older than Java 8, which is required (class file version 52.0 corresponds to Java 8).
You should check that JDK 1.8.0_111 is the Java found on PATH. If not, an alternative is to edit the property wrapper.java.command in conf/wrapper.conf so that it points to the Java 8 binary.
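A minimal sketch of both checks (the JDK path below is an assumption for a standard Ubuntu OpenJDK 8 package; substitute the actual location of your Java 8 install):
# Check which java the service picks up first from PATH:
which java
java -version          # should report something like "1.8.0_111"
# Alternatively, point the wrapper at Java 8 explicitly in conf/wrapper.conf:
wrapper.java.command=/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java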
I am trying to submit an application to Spark using the submit script. I have 3 nodes: 1 master and 2 workers. I have configured /etc/hosts:
127.0.0.1 localhost.localdomain localhost
10.90.110.173 spark1
10.90.110.177 spark2
10.90.110.179 spark3
I also added these lines to my conf/spark-env.sh:
export SPARK_MASTER_IP=127.0.0.1
export SPARK_LOCAL_IP=127.0.0.127
And this is what my submit script looks like:
$SPARK_HOME/bin/spark-submit --class "SimpleApp" \
--master spark://spark1:7077 \
$(find target -iname "*.jar")
I get an error message saying it cannot connect to my master:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/10/08 20:03:03 INFO SparkContext: Running Spark version 1.6.1
16/10/08 20:03:03 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/10/08 20:03:03 INFO SecurityManager: Changing view acls to: root
16/10/08 20:03:03 INFO SecurityManager: Changing modify acls to: root
16/10/08 20:03:03 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/10/08 20:03:04 INFO Utils: Successfully started service 'sparkDriver' on port 35920.
16/10/08 20:03:04 INFO Slf4jLogger: Slf4jLogger started
16/10/08 20:03:04 INFO Remoting: Starting remoting
16/10/08 20:03:04 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem#127.0.0.127:36246]
16/10/08 20:03:04 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 36246.
16/10/08 20:03:04 INFO SparkEnv: Registering MapOutputTracker
16/10/08 20:03:04 INFO SparkEnv: Registering BlockManagerMaster
16/10/08 20:03:04 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-0ed2efc4-9df9-4099-9a4b-efa69e8b40f3
16/10/08 20:03:04 INFO MemoryStore: MemoryStore started with capacity 511.1 MB
16/10/08 20:03:04 INFO SparkEnv: Registering OutputCommitCoordinator
16/10/08 20:03:04 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/10/08 20:03:04 INFO SparkUI: Started SparkUI at http://127.0.0.127:4040
16/10/08 20:03:04 INFO HttpFileServer: HTTP File server directory is /tmp/spark-04551a15-1f26-4dc9-9a5e-ff637a6ac1bc/httpd-ebc736d7-339e-4253-ba71-5be690d2fb65
16/10/08 20:03:04 INFO HttpServer: Starting HTTP Server
16/10/08 20:03:04 INFO Utils: Successfully started service 'HTTP file server' on port 46354.
16/10/08 20:03:04 INFO SparkContext: Added JAR file:/usr/local/spark/simpleapp/target/scala-2.10/simple-project_2.10-1.0.jar at http://127.0.0.127:46354/jars/simple-project_2.10-1.0.jar with timestamp 1475974984849
16/10/08 20:03:04 INFO AppClient$ClientEndpoint: Connecting to master spark://spark1:7077...
16/10/08 20:03:05 WARN AppClient$ClientEndpoint: Failed to connect to master spark1:7077
java.io.IOException: Failed to connect to spark1/10.90.110.173:7077
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:216)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:167)
at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:200)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:187)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:183)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused: spark1/10.90.110.173:7077
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:224)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:289)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
... 1 more
16/10/08 20:03:24 INFO AppClient$ClientEndpoint: Connecting to master spark://spark1:7077...
16/10/08 20:03:24 WARN AppClient$ClientEndpoint: Failed to connect to master spark1:7077
java.io.IOException: Failed to connect to spark1/10.90.110.173:7077
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:216)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:167)
at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:200)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:187)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:183)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused: spark1/10.90.110.173:7077
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:224)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:289)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
... 1 more
16/10/08 20:03:44 INFO AppClient$ClientEndpoint: Connecting to master spark://spark1:7077...
16/10/08 20:03:44 INFO AppClient$ClientEndpoint: Connecting to master spark://spark1:7077...
16/10/08 20:03:44 WARN AppClient$ClientEndpoint: Failed to connect to master spark1:7077
java.io.IOException: Failed to connect to spark1/10.90.110.173:7077
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:216)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:167)
at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:200)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:187)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:183)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused: spark1/10.90.110.173:7077
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:224)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:289)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
... 1 more
16/10/08 20:04:04 INFO AppClient$ClientEndpoint: Connecting to master spark://spark1:7077...
16/10/08 20:04:04 INFO AppClient$ClientEndpoint: Connecting to master spark://spark1:7077...
16/10/08 20:04:04 WARN SparkDeploySchedulerBackend: Application ID is not initialized yet.
16/10/08 20:04:04 ERROR SparkDeploySchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
16/10/08 20:04:04 WARN AppClient$ClientEndpoint: Failed to connect to master spark1:7077
java.io.IOException: Failed to connect to spark1/10.90.110.173:7077
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:216)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:167)
at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:200)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:187)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:183)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused: spark1/10.90.110.173:7077
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:224)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:289)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
... 1 more
16/10/08 20:04:04 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 43668.
16/10/08 20:04:04 INFO NettyBlockTransferService: Server created on 43668
16/10/08 20:04:04 INFO BlockManagerMaster: Trying to register BlockManager
16/10/08 20:04:04 INFO BlockManagerMasterEndpoint: Registering block manager 127.0.0.127:43668 with 511.1 MB RAM, BlockManagerId(driver, 127.0.0.127, 43668)
16/10/08 20:04:04 INFO BlockManagerMaster: Registered BlockManager
16/10/08 20:04:05 INFO SparkUI: Stopped Spark web UI at http://127.0.0.127:4040
16/10/08 20:04:05 INFO SparkDeploySchedulerBackend: Shutting down all executors
16/10/08 20:04:05 INFO SparkDeploySchedulerBackend: Asking each executor to shut down
16/10/08 20:04:05 WARN AppClient$ClientEndpoint: Drop UnregisterApplication(null) because has not yet connected to master
16/10/08 20:04:05 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[appclient-registration-retry-thread,5,main]
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1039)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:208)
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:218)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
at scala.concurrent.Await$.result(package.scala:107)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
at org.apache.spark.deploy.client.AppClient.stop(AppClient.scala:290)
at org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend.org$apache$spark$scheduler$cluster$SparkDeploySchedulerBackend$$stop(SparkDeploySchedulerBackend.scala:198)
at org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend.stop(SparkDeploySchedulerBackend.scala:101)
at org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:446)
at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:1582)
at org.apache.spark.SparkContext$$anonfun$stop$9.apply$mcV$sp(SparkContext.scala:1740)
at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1229)
at org.apache.spark.SparkContext.stop(SparkContext.scala:1739)
at org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend.dead(SparkDeploySchedulerBackend.scala:127)
at org.apache.spark.deploy.client.AppClient$ClientEndpoint.markDead(AppClient.scala:264)
at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2$$anonfun$run$1.apply$mcV$sp(AppClient.scala:134)
at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:1163)
at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2.run(AppClient.scala:129)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
16/10/08 20:04:05 INFO DiskBlockManager: Shutdown hook called
16/10/08 20:04:05 INFO ShutdownHookManager: Shutdown hook called
16/10/08 20:04:05 INFO ShutdownHookManager: Deleting directory /tmp/spark-04551a15-1f26-4dc9-9a5e-ff637a6ac1bc/userFiles-6d5eb5da-4b3a-4ecd-a5e2-c1799fcc5abb
16/10/08 20:04:05 INFO ShutdownHookManager: Deleting directory /tmp/spark-04551a15-1f26-4dc9-9a5e-ff637a6ac1bc/httpd-ebc736d7-339e-4253-ba71-5be690d2fb65
16/10/08 20:04:05 INFO ShutdownHookManager: Deleting directory /tmp/spark-04551a15-1f26-4dc9-9a5e-ff637a6ac1bc
Is there a list of things I can check when I run into this kind of problem? I have searched many other articles, but none of them seem to address my problem.
Any help would be appreciated!
Thanks!
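A minimal checklist sketch, assuming a standalone master started with sbin/start-master.sh on spark1: "Connection refused" means nothing was listening on spark1:7077 from the driver's point of view, and with SPARK_MASTER_IP=127.0.0.1 the master binds to loopback only, so connections to 10.90.110.173:7077 will be refused.
# On spark1: confirm the master is listening, and on which address.
# 127.0.0.1:7077 would be unreachable from the other hosts.
netstat -tlnp | grep 7077
# In conf/spark-env.sh on spark1, bind the master to a reachable address,
# and remove SPARK_LOCAL_IP=127.0.0.127 so the driver advertises an
# address the workers can actually reach:
export SPARK_MASTER_IP=spark1
# Restart the master, then test reachability from the submitting machine:
$SPARK_HOME/sbin/stop-master.sh && $SPARK_HOME/sbin/start-master.sh
nc -vz spark1 7077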