SonarQube 6.7.1 failing to start on Linux while trying to find temp/sq-process7192036004631233695properties

I am trying to start the new SonarQube LTS 6.7.1 on Red Hat Enterprise Linux 7 (x64 architecture) and it is failing.
The logs show:
2017.12.22 07:52:27 INFO app[][o.e.p.PluginsService] no modules loaded
2017.12.22 07:52:27 INFO app[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
2017.12.22 07:52:35 INFO app[][o.s.a.SchedulerImpl] Process[es] is up
2017.12.22 07:52:35 INFO app[][o.s.a.p.ProcessLauncherImpl] Launch process[[key='web', ipcIndex=2, logFilenamePrefix=web]] from [/opt/sonarqube]: /opt/IBM/WebSphere/AppServer/java_1.8_64/jre/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djava.io.tmpdir=/opt/sonarqube/temp -Xms512m -XX:+HeapDumpOnOutOfMemoryError -server -cp ./lib/common/*:./lib/server/*:/opt/sonarqube/lib/jdbc/postgresql/postgresql-42.1.4.jar org.sonar.server.app.WebServer /opt/sonarqube/temp/sq-process7192036004631233695properties
2017.12.22 07:52:42 INFO app[][o.s.a.SchedulerImpl] Process [web] is stopped
2017.12.22 07:52:42 WARN app[][o.s.a.p.AbstractProcessMonitor] Process exited with exit value [es]: 143
2017.12.22 07:52:42 INFO app[][o.s.a.SchedulerImpl] Process [es] is stopped
2017.12.22 07:52:42 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
I believe the problem is that the server is trying to find the file /opt/sonarqube/temp/sq-process7192036004631233695properties. This path looks incorrect.
Please share any workaround for this problem.
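For reference, a minimal first check here (assuming the default /opt/sonarqube layout and a dedicated service user, called sonar in this sketch) would be to confirm that the temp directory is writable by that user, and then to read the web process log, since web is the process that stopped:

ls -ld /opt/sonarqube/temp
sudo -u sonar touch /opt/sonarqube/temp/write-test && rm /opt/sonarqube/temp/write-test
tail -n 100 /opt/sonarqube/logs/web.log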

Related

SonarQube wrapper stopped on Windows with no indicator

I have installed this version of OpenJDK by downloading the zip and following this post to set the JAVA_HOME and PATH Windows environment variables.
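For reference, a rough sketch of what that looks like from PowerShell (the C:\jdk-11 path matches the JVM home shown in the logs below; adjust it to your install location, and note that machine-level variables need an elevated shell):

[Environment]::SetEnvironmentVariable("JAVA_HOME", "C:\jdk-11", "Machine")
[Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\jdk-11\bin", "Machine")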
Executing java -version gives me this:
C:\git\sonarqube-8.6.0.39681\bin\windows-x86-64> java -version
openjdk version "11" 2018-09-25
OpenJDK Runtime Environment 18.9 (build 11+28)
OpenJDK 64-Bit Server VM 18.9 (build 11+28, mixed mode)
C:\git\sonarqube-8.6.0.39681\bin\windows-x86-64>
I then downloaded SonarQube from here and, using PowerShell, navigated to the \sonarqube-8.6.0.39681\bin\windows-x86-64 folder to execute the .\StartSonar.bat file.
The output I see is this:
C:\git\sonarqube-8.6.0.39681\bin\windows-x86-64> .\StartSonar.bat
wrapper | --> Wrapper Started as Console
wrapper | Launching a JVM...
jvm 1 | Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
jvm 1 | Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
jvm 1 |
jvm 1 | 2021.01.19 11:47:42 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory C:\git\sonarqube-8.6.0.39681\temp
jvm 1 | 2021.01.19 11:47:42 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on [HTTP: 127.0.0.1:9001, TCP: 127.0.0.1:51631]
jvm 1 | 2021.01.19 11:47:43 INFO app[][o.s.a.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [C:\git\sonarqube-8.6.0.39681\elasticsearch]: C:\jdk-11\bin\java -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.io.tmpdir=C:\git\sonarqube-8.6.0.39681\temp -XX:ErrorFile=../logs/es_hs_err_pid%p.log -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dio.netty.allocator.numDirectArenas=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.locale.providers=COMPAT -Xmx512m -Xms512m -XX:MaxDirectMemorySize=256m -XX:+HeapDumpOnOutOfMemoryError -Delasticsearch -Des.path.home=C:\git\sonarqube-8.6.0.39681\elasticsearch -Des.path.conf=C:\git\sonarqube-8.6.0.39681\temp\conf\es -cp lib/* org.elasticsearch.bootstrap.Elasticsearch
jvm 1 | 2021.01.19 11:47:43 INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
jvm 1 | OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
jvm 1 | 2021.01.19 11:47:52 INFO app[][o.s.a.SchedulerImpl] Process[es] is up
jvm 1 | 2021.01.19 11:47:52 INFO app[][o.s.a.ProcessLauncherImpl] Launch process[[key='web', ipcIndex=2, logFilenamePrefix=web]] from [C:\git\sonarqube-8.6.0.39681]: C:\jdk-11\bin\java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djava.io.tmpdir=C:\git\sonarqube-8.6.0.39681\temp -XX:-OmitStackTraceInFastThrow --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.rmi/sun.rmi.transport=ALL-UNNAMED -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Dhttp.nonProxyHosts=localhost|127.*|[::1] -cp ./lib/common/*;C:\git\sonarqube-8.6.0.39681\lib\jdbc\h2\h2-1.4.199.jar org.sonar.server.app.WebServer C:\git\sonarqube-8.6.0.39681\temp\sq-process18025213426507358055properties
jvm 1 | 2021.01.19 11:47:53 INFO app[][o.s.a.SchedulerImpl] Process[web] is stopped
jvm 1 | 2021.01.19 11:47:53 INFO app[][o.s.a.SchedulerImpl] Process[es] is stopped
jvm 1 | 2021.01.19 11:47:53 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
wrapper | <-- Wrapper Stopped
C:\git\sonarqube-8.6.0.39681\bin\windows-x86-64>
So I checked \sonarqube-8.6.0.39681\logs\web.log and saw this:
2021.01.19 11:42:21 INFO web[][o.s.p.ProcessEntryPoint] Starting web
2021.01.19 11:42:22 ERROR web[][o.s.s.a.EmbeddedTomcat] Fail to start web server
org.apache.catalina.LifecycleException: Failed to initialize connector [Connector[HTTP/1.1-9000]]
at org.apache.catalina.core.StandardService.initInternal(StandardService.java:559)
at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:136)
at org.apache.catalina.core.StandardServer.initInternal(StandardServer.java:848)
at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:136)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:173)
at org.apache.catalina.startup.Tomcat.start(Tomcat.java:440)
at org.sonar.server.app.EmbeddedTomcat.start(EmbeddedTomcat.java:66)
at org.sonar.server.app.WebServer.start(WebServer.java:52)
at org.sonar.process.ProcessEntryPoint.launch(ProcessEntryPoint.java:97)
at org.sonar.process.ProcessEntryPoint.launch(ProcessEntryPoint.java:81)
at org.sonar.server.app.WebServer.main(WebServer.java:99)
2021.01.19 11:42:22 WARN web[][o.s.p.ProcessEntryPoint] Fail to start web
java.lang.RuntimeException: org.apache.catalina.LifecycleException: Failed to initialize connector [Connector[HTTP/1.1-9000]]
at com.google.common.base.Throwables.propagate(Throwables.java:241)
at org.sonar.server.app.EmbeddedTomcat.start(EmbeddedTomcat.java:70)
at org.sonar.server.app.WebServer.start(WebServer.java:52)
at org.sonar.process.ProcessEntryPoint.launch(ProcessEntryPoint.java:97)
at org.sonar.process.ProcessEntryPoint.launch(ProcessEntryPoint.java:81)
at org.sonar.server.app.WebServer.main(WebServer.java:99)
Caused by: org.apache.catalina.LifecycleException: Failed to initialize connector [Connector[HTTP/1.1-9000]]
at org.apache.catalina.core.StandardService.initInternal(StandardService.java:559)
at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:136)
at org.apache.catalina.core.StandardServer.initInternal(StandardServer.java:848)
at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:136)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:173)
at org.apache.catalina.startup.Tomcat.start(Tomcat.java:440)
at org.sonar.server.app.EmbeddedTomcat.start(EmbeddedTomcat.java:66)
... 4 common frames omitted
2021.01.19 11:42:22 INFO web[][o.s.p.ProcessEntryPoint] Hard stopping process
2021.01.19 11:47:53 INFO web[][o.s.p.ProcessEntryPoint] Starting web
2021.01.19 11:47:53 ERROR web[][o.s.s.a.EmbeddedTomcat] Fail to start web server
org.apache.catalina.LifecycleException: Failed to initialize connector [Connector[HTTP/1.1-9000]]
at org.apache.catalina.core.StandardService.initInternal(StandardService.java:559)
at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:136)
at org.apache.catalina.core.StandardServer.initInternal(StandardServer.java:848)
at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:136)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:173)
at org.apache.catalina.startup.Tomcat.start(Tomcat.java:440)
at org.sonar.server.app.EmbeddedTomcat.start(EmbeddedTomcat.java:66)
at org.sonar.server.app.WebServer.start(WebServer.java:52)
at org.sonar.process.ProcessEntryPoint.launch(ProcessEntryPoint.java:97)
at org.sonar.process.ProcessEntryPoint.launch(ProcessEntryPoint.java:81)
at org.sonar.server.app.WebServer.main(WebServer.java:99)
2021.01.19 11:47:53 WARN web[][o.s.p.ProcessEntryPoint] Fail to start web
java.lang.RuntimeException: org.apache.catalina.LifecycleException: Failed to initialize connector [Connector[HTTP/1.1-9000]]
at com.google.common.base.Throwables.propagate(Throwables.java:241)
at org.sonar.server.app.EmbeddedTomcat.start(EmbeddedTomcat.java:70)
at org.sonar.server.app.WebServer.start(WebServer.java:52)
at org.sonar.process.ProcessEntryPoint.launch(ProcessEntryPoint.java:97)
at org.sonar.process.ProcessEntryPoint.launch(ProcessEntryPoint.java:81)
at org.sonar.server.app.WebServer.main(WebServer.java:99)
Caused by: org.apache.catalina.LifecycleException: Failed to initialize connector [Connector[HTTP/1.1-9000]]
at org.apache.catalina.core.StandardService.initInternal(StandardService.java:559)
at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:136)
at org.apache.catalina.core.StandardServer.initInternal(StandardServer.java:848)
at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:136)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:173)
at org.apache.catalina.startup.Tomcat.start(Tomcat.java:440)
at org.sonar.server.app.EmbeddedTomcat.start(EmbeddedTomcat.java:66)
... 4 common frames omitted
2021.01.19 11:47:53 INFO web[][o.s.p.ProcessEntryPoint] Hard stopping process
The \sonarqube-8.6.0.39681\logs\es.log file shows this:
2021.01.19 11:42:13 INFO es[][o.e.n.Node] version[7.9.3], pid[17976], build[unknown/unknown/c4138e51121ef06a6404866cddc601906fe5c868/2020-10-16T10:36:16.141335Z], OS[Windows 10/10.0/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/11/11+28]
2021.01.19 11:42:13 INFO es[][o.e.n.Node] JVM home [C:\jdk-11]
2021.01.19 11:42:13 INFO es[][o.e.n.Node] JVM arguments [-XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.io.tmpdir=C:\git\sonarqube-8.6.0.39681\temp, -XX:ErrorFile=../logs/es_hs_err_pid%p.log, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=COMPAT, -Xmx512m, -Xms512m, -XX:MaxDirectMemorySize=256m, -XX:+HeapDumpOnOutOfMemoryError, -Delasticsearch, -Des.path.home=C:\git\sonarqube-8.6.0.39681\elasticsearch, -Des.path.conf=C:\git\sonarqube-8.6.0.39681\temp\conf\es]
2021.01.19 11:42:14 INFO es[][o.e.p.PluginsService] loaded module [analysis-common]
2021.01.19 11:42:14 INFO es[][o.e.p.PluginsService] loaded module [lang-painless]
2021.01.19 11:42:14 INFO es[][o.e.p.PluginsService] loaded module [parent-join]
2021.01.19 11:42:14 INFO es[][o.e.p.PluginsService] loaded module [percolator]
2021.01.19 11:42:14 INFO es[][o.e.p.PluginsService] loaded module [transport-netty4]
2021.01.19 11:42:14 INFO es[][o.e.p.PluginsService] no plugins loaded
2021.01.19 11:42:14 INFO es[][o.e.e.NodeEnvironment] using [1] data paths, mounts [[OSDisk (C:)]], net usable_space [79.2gb], net total_space [475.3gb], types [NTFS]
2021.01.19 11:42:14 INFO es[][o.e.e.NodeEnvironment] heap size [494.9mb], compressed ordinary object pointers [true]
2021.01.19 11:42:14 WARN es[][o.e.d.c.s.Settings] [node.master] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.
2021.01.19 11:42:14 WARN es[][o.e.d.c.s.Settings] [node.data] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.
2021.01.19 11:42:14 INFO es[][o.e.n.Node] node name [sonarqube], node ID [rvPV04lsS-S2EVg8okPbPg], cluster name [sonarqube]
2021.01.19 11:42:17 WARN es[][o.e.d.c.r.OperationRouting] searches will not be routed based on awareness attributes starting in version 8.0.0; to opt into this behaviour now please set the system property [es.search.ignore_awareness_attributes] to [true]
2021.01.19 11:42:18 INFO es[][o.e.t.NettyAllocator] creating NettyAllocator with the following configs: [name=unpooled, factors={es.unsafe.use_unpooled_allocator=false, g1gc_enabled=false, g1gc_region_size=0b, heap_size=494.9mb}]
2021.01.19 11:42:18 INFO es[][o.e.d.DiscoveryModule] using discovery type [zen] and seed hosts providers [settings]
2021.01.19 11:42:18 WARN es[][o.e.g.DanglingIndicesState] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
2021.01.19 11:42:18 INFO es[][o.e.n.Node] initialized
2021.01.19 11:42:18 INFO es[][o.e.n.Node] starting ...
2021.01.19 11:42:19 INFO es[][o.e.t.TransportService] publish_address {127.0.0.1:51407}, bound_addresses {127.0.0.1:51407}
2021.01.19 11:42:20 INFO es[][o.e.c.c.Coordinator] cluster UUID [KN6nD07MQgSuD1bgIEWZDw]
2021.01.19 11:42:20 INFO es[][o.e.c.s.MasterService] elected-as-master ([1] nodes joined)[{sonarqube}{rvPV04lsS-S2EVg8okPbPg}{QKaKD--HSEOvHtjU_xyTnw}{127.0.0.1}{127.0.0.1:51407}{dimr}{rack_id=sonarqube} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 3, version: 5, delta: master node changed {previous [], current [{sonarqube}{rvPV04lsS-S2EVg8okPbPg}{QKaKD--HSEOvHtjU_xyTnw}{127.0.0.1}{127.0.0.1:51407}{dimr}{rack_id=sonarqube}]}
2021.01.19 11:42:20 INFO es[][o.e.c.s.ClusterApplierService] master node changed {previous [], current [{sonarqube}{rvPV04lsS-S2EVg8okPbPg}{QKaKD--HSEOvHtjU_xyTnw}{127.0.0.1}{127.0.0.1:51407}{dimr}{rack_id=sonarqube}]}, term: 3, version: 5, reason: Publication{term=3, version=5}
2021.01.19 11:42:20 INFO es[][o.e.h.AbstractHttpServerTransport] publish_address {127.0.0.1:9001}, bound_addresses {127.0.0.1:9001}
2021.01.19 11:42:20 INFO es[][o.e.n.Node] started
2021.01.19 11:42:20 INFO es[][o.e.g.GatewayService] recovered [0] indices into cluster_state
2021.01.19 11:47:45 INFO es[][o.e.n.Node] version[7.9.3], pid[8664], build[unknown/unknown/c4138e51121ef06a6404866cddc601906fe5c868/2020-10-16T10:36:16.141335Z], OS[Windows 10/10.0/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/11/11+28]
2021.01.19 11:47:45 INFO es[][o.e.n.Node] JVM home [C:\jdk-11]
2021.01.19 11:47:45 INFO es[][o.e.n.Node] JVM arguments [-XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.io.tmpdir=C:\git\sonarqube-8.6.0.39681\temp, -XX:ErrorFile=../logs/es_hs_err_pid%p.log, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=COMPAT, -Xmx512m, -Xms512m, -XX:MaxDirectMemorySize=256m, -XX:+HeapDumpOnOutOfMemoryError, -Delasticsearch, -Des.path.home=C:\git\sonarqube-8.6.0.39681\elasticsearch, -Des.path.conf=C:\git\sonarqube-8.6.0.39681\temp\conf\es]
2021.01.19 11:47:46 INFO es[][o.e.p.PluginsService] loaded module [analysis-common]
2021.01.19 11:47:46 INFO es[][o.e.p.PluginsService] loaded module [lang-painless]
2021.01.19 11:47:46 INFO es[][o.e.p.PluginsService] loaded module [parent-join]
2021.01.19 11:47:46 INFO es[][o.e.p.PluginsService] loaded module [percolator]
2021.01.19 11:47:46 INFO es[][o.e.p.PluginsService] loaded module [transport-netty4]
2021.01.19 11:47:46 INFO es[][o.e.p.PluginsService] no plugins loaded
2021.01.19 11:47:46 INFO es[][o.e.e.NodeEnvironment] using [1] data paths, mounts [[OSDisk (C:)]], net usable_space [79.2gb], net total_space [475.3gb], types [NTFS]
2021.01.19 11:47:46 INFO es[][o.e.e.NodeEnvironment] heap size [494.9mb], compressed ordinary object pointers [true]
2021.01.19 11:47:46 WARN es[][o.e.d.c.s.Settings] [node.master] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.
2021.01.19 11:47:46 WARN es[][o.e.d.c.s.Settings] [node.data] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.
2021.01.19 11:47:46 INFO es[][o.e.n.Node] node name [sonarqube], node ID [rvPV04lsS-S2EVg8okPbPg], cluster name [sonarqube]
2021.01.19 11:47:48 WARN es[][o.e.d.c.r.OperationRouting] searches will not be routed based on awareness attributes starting in version 8.0.0; to opt into this behaviour now please set the system property [es.search.ignore_awareness_attributes] to [true]
2021.01.19 11:47:49 INFO es[][o.e.t.NettyAllocator] creating NettyAllocator with the following configs: [name=unpooled, factors={es.unsafe.use_unpooled_allocator=false, g1gc_enabled=false, g1gc_region_size=0b, heap_size=494.9mb}]
2021.01.19 11:47:49 INFO es[][o.e.d.DiscoveryModule] using discovery type [zen] and seed hosts providers [settings]
2021.01.19 11:47:50 WARN es[][o.e.g.DanglingIndicesState] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
2021.01.19 11:47:50 INFO es[][o.e.n.Node] initialized
2021.01.19 11:47:50 INFO es[][o.e.n.Node] starting ...
2021.01.19 11:47:51 INFO es[][o.e.t.TransportService] publish_address {127.0.0.1:51631}, bound_addresses {127.0.0.1:51631}
2021.01.19 11:47:51 INFO es[][o.e.c.c.Coordinator] cluster UUID [KN6nD07MQgSuD1bgIEWZDw]
2021.01.19 11:47:51 INFO es[][o.e.c.s.MasterService] elected-as-master ([1] nodes joined)[{sonarqube}{rvPV04lsS-S2EVg8okPbPg}{AQEp76FURcuprB2K6s4biQ}{127.0.0.1}{127.0.0.1:51631}{dimr}{rack_id=sonarqube} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 4, version: 7, delta: master node changed {previous [], current [{sonarqube}{rvPV04lsS-S2EVg8okPbPg}{AQEp76FURcuprB2K6s4biQ}{127.0.0.1}{127.0.0.1:51631}{dimr}{rack_id=sonarqube}]}
2021.01.19 11:47:51 INFO es[][o.e.c.s.ClusterApplierService] master node changed {previous [], current [{sonarqube}{rvPV04lsS-S2EVg8okPbPg}{AQEp76FURcuprB2K6s4biQ}{127.0.0.1}{127.0.0.1:51631}{dimr}{rack_id=sonarqube}]}, term: 4, version: 7, reason: Publication{term=4, version=7}
2021.01.19 11:47:51 INFO es[][o.e.h.AbstractHttpServerTransport] publish_address {127.0.0.1:9001}, bound_addresses {127.0.0.1:9001}
2021.01.19 11:47:51 INFO es[][o.e.n.Node] started
2021.01.19 11:47:51 INFO es[][o.e.g.GatewayService] recovered [0] indices into cluster_state
What do I do to run SonarQube?
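For context on the recurring error above: "Failed to initialize connector [Connector[HTTP/1.1-9000]]" usually means something else is already listening on port 9000. A quick check from PowerShell:

netstat -ano | findstr ":9000"

If the port is taken, either stop the process holding it or point SonarQube at a free port via sonar.web.port in conf\sonar.properties, e.g. sonar.web.port=9123 (9123 being an arbitrary example).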

How do I use the portable runner and spark-submit to submit Beam's wordcount Python example to a remote Spark cluster on EMR running YARN?

I am trying to submit Beam's wordcount Python example to a remote Spark cluster on EMR running YARN as its resource manager. According to the Spark documentation this needs to be done using the portable runner.
Following the portable runner instructions, I have started the job service endpoint, and it appears to start correctly:
$ docker run --net=host apache/beam_spark_job_server:latest --spark-master-url=spark://*.***.***.***:7077
20/08/31 12:13:08 INFO org.apache.beam.runners.jobsubmission.JobServerDriver: ArtifactStagingService started on localhost:8098
20/08/31 12:13:08 INFO org.apache.beam.runners.jobsubmission.JobServerDriver: Java ExpansionService started on localhost:8097
20/08/31 12:13:08 INFO org.apache.beam.runners.jobsubmission.JobServerDriver: JobService started on localhost:8099
20/08/31 12:13:08 INFO org.apache.beam.runners.jobsubmission.JobServerDriver: Job server now running, terminate with Ctrl+C
Now I try to submit the job using spark-submit; the input is a plain-text version of Sherlock Holmes:
$ spark-submit --master=yarn --deploy-mode=cluster wordcount.py --input data/sherlock.txt --output output --runner=PortableRunner --job_endpoint=localhost:8099 --environment_type=DOCKER --environment_config=apachebeam/python3.7_sdk
20/08/31 12:19:39 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/08/31 12:19:40 INFO RMProxy: Connecting to ResourceManager at ip-***-**-**-***.ec2.internal/***.**.**.***:8032
20/08/31 12:19:40 INFO Client: Requesting a new application from cluster with 2 NodeManagers
20/08/31 12:19:40 INFO Configuration: resource-types.xml not found
20/08/31 12:19:40 INFO ResourceUtils: Unable to find 'resource-types.xml'.
20/08/31 12:19:40 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (6144 MB per container)
20/08/31 12:19:40 INFO Client: Will allocate AM container, with 2432 MB memory including 384 MB overhead
20/08/31 12:19:40 INFO Client: Setting up container launch context for our AM
Exception in thread "main" java.lang.IllegalArgumentException: requirement failed: /usr/lib/spark/python/lib/pyspark.zip not found; cannot run pyspark application in YARN mode.
at scala.Predef$.require(Predef.scala:281)
at org.apache.spark.deploy.yarn.Client.$anonfun$findPySparkArchives$2(Client.scala:1167)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.deploy.yarn.Client.findPySparkArchives(Client.scala:1163)
at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:858)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:178)
at org.apache.spark.deploy.yarn.Client.run(Client.scala:1134)
at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1526)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:853)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:928)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:937)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
20/08/31 12:19:40 INFO ShutdownHookManager: Shutdown hook called
20/08/31 12:19:40 INFO ShutdownHookManager: Deleting directory /tmp/spark-ee751413-e29d-4b1f-8a16-fb8650b1ca10
It appears to want PySpark to be installed. I am fairly new to submitting Beam jobs to a Spark cluster; is there a reason why PySpark would need to be installed when submitting a Beam job? I have a feeling my spark-submit command is wrong, but I am having a hard time finding more concrete documentation on how to do what I am trying to do.
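For comparison, the portable-runner instructions launch the Python pipeline directly with python rather than wrapping it in spark-submit; the job service started above then hands the work to Spark. A sketch reusing the same flags as the command above:

$ python wordcount.py --input data/sherlock.txt --output output --runner=PortableRunner --job_endpoint=localhost:8099 --environment_type=DOCKER --environment_config=apachebeam/python3.7_sdk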

SonarQube: java.lang.IllegalStateException: Webapp did not start at..: the SonarQube server automatically shut down after I started it

I was trying to deploy SonarQube on a remote Ubuntu machine. I started the server, and the status info was 'SonarQube is running'. But after a few minutes, the server automatically shut down. I got this exception:
2016.11.08 16:41:53 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [Abandoned connection cleanup thread] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Object.wait(Native Method)
java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:143)
com.mysql.jdbc.AbandonedConnectionCleanupThread.run(AbandonedConnectionCleanupThread.java:43)
2016.11.08 16:41:53 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [Timer-0] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Object.wait(Native Method)
java.util.TimerThread.mainLoop(Timer.java:552)
java.util.TimerThread.run(Timer.java:505)
2016.11.08 16:41:53 INFO web[][o.a.c.h.Http11NioProtocol] Starting ProtocolHandler ["http-nio-127.0.0.1-9000"]
2016.11.08 16:41:53 INFO web[][o.s.s.a.TomcatAccessLog] Web server is started
2016.11.08 16:41:53 INFO web[][o.s.s.a.EmbeddedTomcat] HTTP connector enabled on port 9000
2016.11.08 16:41:53 WARN web[][o.s.p.ProcessEntryPoint] Fail to start web
java.lang.IllegalStateException: Webapp did not start
at org.sonar.server.app.EmbeddedTomcat.isUp(EmbeddedTomcat.java:84) ~[sonar-server-6.1.jar:na]
at org.sonar.server.app.WebServer.isUp(WebServer.java:46) [sonar-server-6.1.jar:na]
at org.sonar.process.ProcessEntryPoint.launch(ProcessEntryPoint.java:105) ~[sonar-process-6.1.jar:na]
at org.sonar.server.app.WebServer.main(WebServer.java:67) [sonar-server-6.1.jar:na]
2016.11.08 16:41:53 INFO web[][o.a.c.h.Http11NioProtocol] Pausing ProtocolHandler ["http-nio-127.0.0.1-9000"]
2016.11.08 16:41:54 INFO web[][o.a.c.h.Http11NioProtocol] Stopping ProtocolHandler ["http-nio-127.0.0.1-9000"]
2016.11.08 16:41:54 INFO web[][o.a.c.h.Http11NioProtocol] Destroying ProtocolHandler ["http-nio-127.0.0.1-9000"]
2016.11.08 16:41:54 INFO web[][o.s.s.a.TomcatAccessLog] Web server is stopped
2016.11.08 16:41:54 INFO app[][o.s.p.m.Monitor] Process[es] is stopping
2016.11.08 16:41:55 INFO es[][o.s.p.StopWatcher] Stopping process
2016.11.08 16:41:55 INFO es[][o.elasticsearch.node] [sonarqube] stopping ...
2016.11.08 16:41:55 INFO es[][o.elasticsearch.node] [sonarqube] stopped
2016.11.08 16:41:55 INFO es[][o.elasticsearch.node] [sonarqube] closing ...
2016.11.08 16:41:55 INFO es[][o.elasticsearch.node] [sonarqube] closed
Below are my versions:
Java: JDK 1.8.0_111
Ubuntu: Ubuntu 14.04
MySQL: 5.6.34
SonarQube: 6.1
Below are the full logs:
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
WrapperSimpleApp: Unable to locate the class org.sonar.application.App: java.lang.UnsupportedClassVersionError: org/sonar/application/App : Unsupported major.minor version 52.0
WrapperSimpleApp Usage:
java org.tanukisoftware.wrapper.WrapperSimpleApp {app_class} [app_arguments]
Where:
app_class: The fully qualified class name of the application to run.
app_arguments: The arguments that would normally be passed to the
application.
<-- Wrapper Stopped
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
WrapperSimpleApp: Unable to locate the class org.sonar.application.App: java.lang.UnsupportedClassVersionError: org/sonar/application/App : Unsupported major.minor version 52.0
WrapperSimpleApp Usage:
java org.tanukisoftware.wrapper.WrapperSimpleApp {app_class} [app_arguments]
Where:
app_class: The fully qualified class name of the application to run.
app_arguments: The arguments that would normally be passed to the
application.
<-- Wrapper Stopped
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
WrapperSimpleApp: Unable to locate the class org.sonar.application.App: java.lang.UnsupportedClassVersionError: org/sonar/application/App : Unsupported major.minor version 52.0
"sonar.log" 21127L, 1883274C 1,1 Top
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
2016.11.08 21:14:58 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [Abandoned connection cleanup thread] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Object.wait(Native Method)
java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:143)
com.mysql.jdbc.AbandonedConnectionCleanupThread.run(AbandonedConnectionCleanupThread.java:43)
2016.11.08 21:14:58 WARN web[][o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [Timer-0] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Object.wait(Native Method)
java.util.TimerThread.mainLoop(Timer.java:552)
java.util.TimerThread.run(Timer.java:505)
2016.11.08 21:14:58 INFO web[][o.a.c.h.Http11NioProtocol] Starting ProtocolHandler ["http-nio-127.0.0.1-9000"]
2016.11.08 21:14:58 INFO web[][o.s.s.a.TomcatAccessLog] Web server is started
2016.11.08 21:14:58 INFO web[][o.s.s.a.EmbeddedTomcat] HTTP connector enabled on port 9000
2016.11.08 21:14:58 WARN web[][o.s.p.ProcessEntryPoint] Fail to start web
java.lang.IllegalStateException: Webapp did not start
at org.sonar.server.app.EmbeddedTomcat.isUp(EmbeddedTomcat.java:84) ~[sonar-server-6.1.jar:na]
at org.sonar.server.app.WebServer.isUp(WebServer.java:46) [sonar-server-6.1.jar:na]
at org.sonar.process.ProcessEntryPoint.launch(ProcessEntryPoint.java:105) ~[sonar-process-6.1.jar:na]
at org.sonar.server.app.WebServer.main(WebServer.java:67) [sonar-server-6.1.jar:na]
2016.11.08 21:14:58 INFO web[][o.a.c.h.Http11NioProtocol] Pausing ProtocolHandler ["http-nio-127.0.0.1-9000"]
2016.11.08 21:14:59 INFO web[][o.a.c.h.Http11NioProtocol] Stopping ProtocolHandler ["http-nio-127.0.0.1-9000"]
2016.11.08 21:14:59 INFO web[][o.a.c.h.Http11NioProtocol] Destroying ProtocolHandler ["http-nio-127.0.0.1-9000"]
2016.11.08 21:14:59 INFO web[][o.s.s.a.TomcatAccessLog] Web server is stopped
2016.11.08 21:14:59 INFO app[][o.s.p.m.Monitor] Process[es] is stopping
2016.11.08 21:15:00 INFO es[][o.s.p.StopWatcher] Stopping process
2016.11.08 21:15:00 INFO es[][o.elasticsearch.node] [sonarqube] stopping ...
2016.11.08 21:15:00 INFO es[][o.elasticsearch.node] [sonarqube] stopped
2016.11.08 21:15:00 INFO es[][o.elasticsearch.node] [sonarqube] closing ...
2016.11.08 21:15:00 INFO es[][o.elasticsearch.node] [sonarqube] closed
2016.11.08 21:15:00 INFO app[][o.s.p.m.Monitor] Process[es] is stopped
The famous message "Unsupported major.minor version 52.0" means that Java 8 is required but is not being used.
You should check that JDK 1.8.0_111 is in PATH. If not, an alternative is to edit the property wrapper.java.command in conf/wrapper.conf.
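A sketch of that edit in conf/wrapper.conf (the JDK path is an example; point it at your actual Java 8 binary):

wrapper.java.command=/usr/lib/jvm/jdk1.8.0_111/bin/java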

Spark worker cannot connect to master

While starting the worker node I get the following error:
Spark Command: /usr/lib/jvm/default-java/bin/java -cp /home/ubuntu/spark-1.5.1-bin-hadoop2.6/sbin/../conf/:/home/ubuntu/spark-1.5.1-bin-hadoop2.6/lib/spark-assembly-1.5.1-hadoop2.6.0.jar:/home/ubuntu/spark-1.5.1-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar:/home/ubuntu/spark-1.5.1-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/home/ubuntu/spark-1.5.1-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar -Xms1g -Xmx1g -XX:MaxPermSize=256m org.apache.spark.deploy.worker.Worker --webui-port 8081 spark://ip-1-70-44-5:7077
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/10/16 19:19:10 INFO Worker: Registered signal handlers for [TERM, HUP, INT]
15/10/16 19:19:11 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/10/16 19:19:11 INFO SecurityManager: Changing view acls to: ubuntu
15/10/16 19:19:11 INFO SecurityManager: Changing modify acls to: ubuntu
15/10/16 19:19:11 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ubuntu); users with modify permissions: Set(ubuntu)
15/10/16 19:19:12 INFO Slf4jLogger: Slf4jLogger started
15/10/16 19:19:12 INFO Remoting: Starting remoting
15/10/16 19:19:12 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkWorker@1.70.44.4:55126]
15/10/16 19:19:12 INFO Utils: Successfully started service 'sparkWorker' on port 55126.
15/10/16 19:19:12 INFO Worker: Starting Spark worker 1.70.44.4:55126 with 2 cores, 2.9 GB RAM
15/10/16 19:19:12 INFO Worker: Running Spark version 1.5.1
15/10/16 19:19:12 INFO Worker: Spark home: /home/ubuntu/spark-1.5.1-bin-hadoop2.6
15/10/16 19:19:12 INFO Utils: Successfully started service 'WorkerUI' on port 8081.
15/10/16 19:19:12 INFO WorkerWebUI: Started WorkerWebUI at http://1.70.44.4:8081
15/10/16 19:19:12 INFO Worker: Connecting to master ip-1-70-44-5:7077...
15/10/16 19:19:24 INFO Worker: Retrying connection to master (attempt # 1)
15/10/16 19:19:24 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[sparkWorker-akka.actor.default-dispatcher-5,5,main]
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@1c5651e9 rejected from java.util.concurrent.ThreadPoolExecutor@671ba687[Running, pool size = 1, active threads = 0, queued tasks = 0, completed tasks = 0]
at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:110)
at org.apache.spark.deploy.worker.Worker$$anonfun$org$apache$spark$deploy$worker$Worker$$tryRegisterAllMasters$1.apply(Worker.scala:211)
at org.apache.spark.deploy.worker.Worker$$anonfun$org$apache$spark$deploy$worker$Worker$$tryRegisterAllMasters$1.apply(Worker.scala:210)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
at org.apache.spark.deploy.worker.Worker.org$apache$spark$deploy$worker$Worker$$tryRegisterAllMasters(Worker.scala:210)
at org.apache.spark.deploy.worker.Worker$$anonfun$org$apache$spark$deploy$worker$Worker$$reregisterWithMaster$1.apply$mcV$sp(Worker.scala:288)
at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:1119)
at org.apache.spark.deploy.worker.Worker.org$apache$spark$deploy$worker$Worker$$reregisterWithMaster(Worker.scala:234)
at org.apache.spark.deploy.worker.Worker$$anonfun$receive$1.applyOrElse(Worker.scala:521)
at org.apache.spark.rpc.akka.AkkaRpcEnv.org$apache$spark$rpc$akka$AkkaRpcEnv$$processMessage(AkkaRpcEnv.scala:177)
at org.apache.spark.rpc.akka.AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1$$anonfun$receiveWithLogging$1$$anonfun$applyOrElse$4.apply$mcV$sp(AkkaRpcEnv.scala:126)
at org.apache.spark.rpc.akka.AkkaRpcEnv.org$apache$spark$rpc$akka$AkkaRpcEnv$$safelyCall(AkkaRpcEnv.scala:197)
at org.apache.spark.rpc.akka.AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1$$anonfun$receiveWithLogging$1.applyOrElse(AkkaRpcEnv.scala:125)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:59)
at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:42)
at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
at org.apache.spark.util.ActorLogReceive$$anon$1.applyOrElse(ActorLogReceive.scala:42)
at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
at org.apache.spark.rpc.akka.AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1.aroundReceive(AkkaRpcEnv.scala:92)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
at akka.actor.ActorCell.invoke(ActorCell.scala:487)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
at akka.dispatch.Mailbox.run(Mailbox.scala:220)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
15/10/16 19:19:24 INFO ShutdownHookManager: Shutdown hook called
I have added the hostnames to the conf/slaves file. I don't know which environment variables to set in spark-env.sh, so right now it's not being used.
Any pointers to the solution?
Also, if I should use spark-env.sh, which environment variables should I set?
Setup details:
2 Ubuntu 14 machines with 2 cores each.
Please advise.
Thanks.
So, after some tinkering around I found that the slave was not able to communicate with the master on the given port. I changed the security access rules and enabled all TCP traffic on all ports. This solved the problem.
To check if the port is open:
telnet master.ip master.port
The default port is 7077.
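If telnet is not installed, netcat performs the same check (same placeholders as above):

nc -zv master.ip 7077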
My spark-env.sh :
export SPARK_WORKER_INSTANCES=2
export SPARK_MASTER_IP=<ip address>
I'm afraid your hostname may be invalid to Spark, and you have to change your spark-env.sh.
You can set the variable SPARK_MASTER_IP to the real IP of the master, instead of its hostname.
e.g.
export SPARK_MASTER_IP=1.70.44.5
INSTEAD OF
export SPARK_MASTER_IP=ip-1-70-44-5
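After editing spark-env.sh, restart the daemons from the Spark home directory so the new address takes effect (the standard standalone scripts):

./sbin/stop-all.sh
./sbin/start-all.sh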

SPARK + Standalone Cluster: Cannot start worker from another machine

I've been setting up a Spark standalone cluster following this link. I have 2 machines; the first one (ubuntu0) serves as both the master and a worker, and the second one (ubuntu1) is just a worker. Password-less SSH has already been properly configured for both machines and was tested by doing SSH manually on both sides.
Now when I tried ./start-all.sh, both the master and the worker on the master machine (ubuntu0) were started properly. This is signified by (1) the WebUI being accessible (localhost:8081 in my case) and (2) the worker being registered/displayed on the WebUI.
However, the other worker, on the second machine (ubuntu1), was not started. The error displayed was:
ubuntu1: ssh: connect to host ubuntu1 port 22: Connection timed out
Now this is quite weird, given that I've already configured password-less SSH on both sides. So I accessed the second machine and tried to start the worker manually using these commands:
./spark-class org.apache.spark.deploy.worker.Worker spark://ubuntu0:7707
./spark-class org.apache.spark.deploy.worker.Worker spark://<ip>:7707
However, below is the result:
14/05/23 13:49:08 INFO Utils: Using Spark's default log4j profile:
org/apache/spark/log4j-defaults.properties
14/05/23 13:49:08 WARN Utils: Your hostname, ubuntu1 resolves to a loopback address:
127.0.1.1; using 192.168.122.1 instead (on interface virbr0)
14/05/23 13:49:08 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
14/05/23 13:49:09 INFO Slf4jLogger: Slf4jLogger started
14/05/23 13:49:09 INFO Remoting: Starting remoting
14/05/23 13:49:09 INFO Remoting: Remoting started; listening on addresses :
[akka.tcp://sparkWorker@ubuntu1.local:42739]
14/05/23 13:49:09 INFO Worker: Starting Spark worker ubuntu1.local:42739 with 8 cores,
4.8 GB RAM
14/05/23 13:49:09 INFO Worker: Spark home: /home/ubuntu1/jaysonp/spark/spark-0.9.1
14/05/23 13:49:09 INFO WorkerWebUI: Started Worker web UI at http://ubuntu1.local:8081
14/05/23 13:49:09 INFO Worker: Connecting to master spark://ubuntu0:7707...
14/05/23 13:49:29 INFO Worker: Connecting to master spark://ubuntu0:7707...
14/05/23 13:49:49 INFO Worker: Connecting to master spark://ubuntu0:7707...
14/05/23 13:50:09 ERROR Worker: All masters are unresponsive! Giving up.
Below are the contents of my master and slave\worker spark-env.sh:
SPARK_MASTER_IP=192.168.3.222
STANDALONE_SPARK_MASTER_HOST=`hostname -f`
How should I resolve this? Thanks in advance!
For those still encountering errors when starting workers on different machines, I just want to share that using IP addresses in conf/slaves worked for me.
Hope this helps!
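A sketch of that file using the addresses from this thread (the ubuntu1 address is an assumption; ubuntu0 at 192.168.3.222 also runs a worker in this setup):

# conf/slaves
192.168.3.222
192.168.3.223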
I had similar issues today running Spark 1.5.1 on RHEL 6.7.
I have 2 machines, their hostnames being
- master.domain.com
- slave.domain.com
I installed a standalone version of Spark (pre-built against Hadoop 2.6) and my Oracle jdk-8u66.
Spark download:
wget http://d3kbcqa49mib13.cloudfront.net/spark-1.5.1-bin-hadoop2.6.tgz
Java download:
wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u66-b17/jdk-8u66-linux-x64.tar.gz"
After Spark and Java were unpacked in my home directory, I did the following:
on 'master.domain.com' I ran:
./sbin/start-master.sh
The web UI became available at http://master.domain.com:8080 (no slave running).
On 'slave.domain.com' I tried:
./sbin/start-slave.sh spark://master.domain.com:7077
which FAILED as follows:
Spark Command: /root/java/bin/java -cp /root/spark-1.5.1-bin-hadoop2.6/sbin/../conf/:/root/spark-1.5.1-bin-hadoop2.6/lib/spark-assembly-1.5.1-hadoop2.6.0.jar:/root/spark-1.5.1-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/root/spark-1.5.1-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar:/root/spark-1.5.1-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar -Xms1g -Xmx1g org.apache.spark.deploy.worker.Worker --webui-port 8081 spark://master.domain.com:7077
========================================
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/11/06 11:03:51 INFO Worker: Registered signal handlers for [TERM, HUP, INT]
15/11/06 11:03:51 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/11/06 11:03:51 INFO SecurityManager: Changing view acls to: root
15/11/06 11:03:51 INFO SecurityManager: Changing modify acls to: root
15/11/06 11:03:51 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
15/11/06 11:03:52 INFO Slf4jLogger: Slf4jLogger started
15/11/06 11:03:52 INFO Remoting: Starting remoting
15/11/06 11:03:52 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkWorker@10.80.70.38:50573]
15/11/06 11:03:52 INFO Utils: Successfully started service 'sparkWorker' on port 50573.
15/11/06 11:03:52 INFO Worker: Starting Spark worker 10.80.70.38:50573 with 8 cores, 6.7 GB RAM
15/11/06 11:03:52 INFO Worker: Running Spark version 1.5.1
15/11/06 11:03:52 INFO Worker: Spark home: /root/spark-1.5.1-bin-hadoop2.6
15/11/06 11:03:53 INFO Utils: Successfully started service 'WorkerUI' on port 8081.
15/11/06 11:03:53 INFO WorkerWebUI: Started WorkerWebUI at http://10.80.70.38:8081
15/11/06 11:03:53 INFO Worker: Connecting to master master.domain.com:7077...
15/11/06 11:04:05 INFO Worker: Retrying connection to master (attempt # 1)
15/11/06 11:04:05 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[sparkWorker-akka.actor.default-dispatcher-4,5,main]
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@48711bf5 rejected from java.util.concurrent.ThreadPoolExecutor@14db705b[Running, pool size = 1, active threads = 0, queued tasks = 0, completed tasks = 1]
at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047)
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
at org.apache.spark.deploy.worker.Worker$$anonfun$org$apache$spark$deploy$worker$Worker$$tryRegisterAllMasters$1.apply(Worker.scala:211)
at org.apache.spark.deploy.worker.Worker$$anonfun$org$apache$spark$deploy$worker$Worker$$tryRegisterAllMasters$1.apply(Worker.scala:210)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
at org.apache.spark.deploy.worker.Worker.org$apache$spark$deploy$worker$Worker$$tryRegisterAllMasters(Worker.scala:210)
at org.apache.spark.deploy.worker.Worker$$anonfun$org$apache$spark$deploy$worker$Worker$$reregisterWithMaster$1.apply$mcV$sp(Worker.scala:288)
at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:1119)
at org.apache.spark.deploy.worker.Worker.org$apache$spark$deploy$worker$Worker$$reregisterWithMaster(Worker.scala:234)
at org.apache.spark.deploy.worker.Worker$$anonfun$receive$1.applyOrElse(Worker.scala:521)
at org.apache.spark.rpc.akka.AkkaRpcEnv.org$apache$spark$rpc$akka$AkkaRpcEnv$$processMessage(AkkaRpcEnv.scala:177)
at org.apache.spark.rpc.akka.AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1$$anonfun$receiveWithLogging$1$$anonfun$applyOrElse$4.apply$mcV$sp(AkkaRpcEnv.scala:126)
at org.apache.spark.rpc.akka.AkkaRpcEnv.org$apache$spark$rpc$akka$AkkaRpcEnv$$safelyCall(AkkaRpcEnv.scala:197)
at org.apache.spark.rpc.akka.AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1$$anonfun$receiveWithLogging$1.applyOrElse(AkkaRpcEnv.scala:125)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:59)
at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:42)
at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
at org.apache.spark.util.ActorLogReceive$$anon$1.applyOrElse(ActorLogReceive.scala:42)
at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
at org.apache.spark.rpc.akka.AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1.aroundReceive(AkkaRpcEnv.scala:92)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
at akka.actor.ActorCell.invoke(ActorCell.scala:487)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
at akka.dispatch.Mailbox.run(Mailbox.scala:220)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
15/11/06 11:04:05 INFO ShutdownHookManager: Shutdown hook called
start-slave spark://<master-IP>:7077 also FAILED as above.
start-slave spark://master:7077 WORKED, and the worker shows up in the master web UI:
Spark Command: /root/java/bin/java -cp /root/spark-1.5.1-bin-hadoop2.6/sbin/../conf/:/root/spark-1.5.1-bin-hadoop2.6/lib/spark-assembly-1.5.1-hadoop2.6.0.jar:/root/spark-1.5.1-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/root/spark-1.5.1-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar:/root/spark-1.5.1-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar -Xms1g -Xmx1g org.apache.spark.deploy.worker.Worker --webui-port 8081 spark://master:7077
========================================
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/11/06 11:08:15 INFO Worker: Registered signal handlers for [TERM, HUP, INT]
15/11/06 11:08:15 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/11/06 11:08:15 INFO SecurityManager: Changing view acls to: root
15/11/06 11:08:15 INFO SecurityManager: Changing modify acls to: root
15/11/06 11:08:15 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
15/11/06 11:08:16 INFO Slf4jLogger: Slf4jLogger started
15/11/06 11:08:16 INFO Remoting: Starting remoting
15/11/06 11:08:17 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkWorker@10.80.70.38:40780]
15/11/06 11:08:17 INFO Utils: Successfully started service 'sparkWorker' on port 40780.
15/11/06 11:08:17 INFO Worker: Starting Spark worker 10.80.70.38:40780 with 8 cores, 6.7 GB RAM
15/11/06 11:08:17 INFO Worker: Running Spark version 1.5.1
15/11/06 11:08:17 INFO Worker: Spark home: /root/spark-1.5.1-bin-hadoop2.6
15/11/06 11:08:17 INFO Utils: Successfully started service 'WorkerUI' on port 8081.
15/11/06 11:08:17 INFO WorkerWebUI: Started WorkerWebUI at http://10.80.70.38:8081
15/11/06 11:08:17 INFO Worker: Connecting to master master:7077...
15/11/06 11:08:17 INFO Worker: Successfully registered with master spark://master:7077
Note: I haven't added any extra config in conf/spark-env.sh
Note 2: when looking at the master web UI, the Spark master URL shown at the top is actually the one that worked for me, so when in doubt just use that one.
I hope this helps ;)
Using hostnames in conf/slaves worked well for me.
Here are some steps I would take:
Check the SSH public key
scp /etc/spark/conf.dist/spark-env.sh to your workers
My relevant settings in spark-env.sh:
export STANDALONE_SPARK_MASTER_HOST=`hostname`
export SPARK_MASTER_IP=$STANDALONE_SPARK_MASTER_HOST
I guess you missed something in your configuration; that's what I gathered from your log.
Check your /etc/hosts: make sure ubuntu1 is in your master's host list and that its IP matches the slave's IP address.
Add export SPARK_LOCAL_IP='ubuntu1' to the spark-env.sh file on your slave.
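A concrete sketch of that /etc/hosts check on both machines, using the addresses visible in this thread (the ubuntu1 address is an assumption; the point is that neither hostname should resolve to 127.0.1.1):

192.168.3.222   ubuntu0
192.168.3.223   ubuntu1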
