HiveServer2 cannot run SQL on Spark on YARN

Here are my versions:
Hive: 1.2
Hadoop: CDH5.3
Spark: 1.4.1
Hive on Spark works for me from the Hive client, but after I started HiveServer2 and ran a SQL statement through Beeline, it failed.
The error is:
2015-11-29 21:49:42,786 INFO [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 15/11/29 21:49:42 INFO spark.SparkContext: Added JAR file:/root/cdh/apache-hive-1.2.1-bin/lib/hive-exec-1.2.1.jar at http://10.96.30.51:10318/jars/hive-exec-1.2.1.jar with timestamp 1448804982784
2015-11-29 21:49:43,336 INFO [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 15/11/29 21:49:43 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm297
2015-11-29 21:49:43,356 INFO [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 15/11/29 21:49:43 INFO retry.RetryInvocationHandler: Exception while invoking getClusterMetrics of class ApplicationClientProtocolPBClientImpl over rm297 after 1 fail over attempts. Trying to fail over immediately.
2015-11-29 21:49:43,357 INFO [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 15/11/29 21:49:43 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm280
2015-11-29 21:49:43,359 INFO [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 15/11/29 21:49:43 INFO retry.RetryInvocationHandler: Exception while invoking getClusterMetrics of class ApplicationClientProtocolPBClientImpl over rm280 after 2 fail over attempts. Trying to fail over after sleeping for 477ms.
2015-11-29 21:49:43,359 INFO [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - java.net.ConnectException: Call From hd-master-001/10.96.30.51 to hd-master-001:8032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
2015-11-29 21:49:43,359 INFO [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
In my YARN HA setup, hd-master-002 is the active ResourceManager and hd-master-001 is the standby. Port 8032 on hd-master-001 is not open, so of course a connection error occurs on any attempt to connect to hd-master-001:8032.
But why does it try to connect to the standby ResourceManager?
If I use the Hive client shell with Spark on YARN, everything works.
PS: I didn't rebuild the Spark assembly jar without Hive; I only removed 'org.apache.hive' and 'org.apache.hadoop.hive' from the built assembly jar. But I do not think that is the problem, because the Hive client on Spark on YARN works.
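For what it's worth, the failover messages themselves look like normal ResourceManager HA behavior: the YARN client walks the configured RM IDs (rm297, then rm280 in the log) until one answers as active, and a ConnectException against the standby is expected noise; the real question is why the client never reaches the active RM. One quick check, assuming HiveServer2 sees the same yarn-site.xml as the Hive client (the RM IDs below are taken from the log above):
# ask each ResourceManager ID for its HA state; one should report "active"
yarn rmadmin -getServiceState rm297
yarn rmadmin -getServiceState rm280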

Related

Apache Spark worker: Failed to connect to master master:7077 in Google Cloud cluster

I am trying to start an Apache Spark cluster in Google Cloud with 1 master and 4 workers. However, when I run start-all.sh, it shows all nodes starting without any error.
sparkuser@master:/opt/spark/logs$ start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /opt/spark/logs/spark-sparkuser-org.apache.spark.deploy.master.Master-1-master.out
worker1: starting org.apache.spark.deploy.worker.Worker, logging to /opt/spark/logs/spark-sparkuser-org.apache.spark.deploy.worker.Worker-1-worker1.out
worker3: starting org.apache.spark.deploy.worker.Worker, logging to /opt/spark/logs/spark-sparkuser-org.apache.spark.deploy.worker.Worker-1-worker3.out
worker4: starting org.apache.spark.deploy.worker.Worker, logging to /opt/spark/logs/spark-sparkuser-org.apache.spark.deploy.worker.Worker-1-worker4.out
worker2: starting org.apache.spark.deploy.worker.Worker, logging to /opt/spark/logs/spark-sparkuser-org.apache.spark.deploy.worker.Worker-1-worker2.out
sparkuser@master:/opt/spark/logs$
When I check the log files, the master is running, but none of the workers are. What could be the issue?
spark-env.sh for workers
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_202
export SPARK_HOME=/opt/spark/
export SPARK_MASTER_HOST="master"
export SPARK_LOCAL_IP="127.0.0.1"
export SPARK_MASTER_WEBUI_PORT=8080
export SPARK_LOG_DIR=/opt/spark/logs
spark-env.sh for master
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_202
export SPARK_HOME=/opt/spark/
export SPARK_MASTER_HOST="master"
export SPARK_LOCAL_IP="127.0.0.1"
export SPARK_MASTER_WEBUI_PORT=8080
export SPARK_LOG_DIR=/opt/spark/logs
The error in the worker's log file:
Spark Command: /usr/lib/jvm/jdk1.8.0_202/bin/java -cp /opt/spark/conf/:/opt/spark/jars/* -Xmx1g org.apache.spark.deploy.worker.Worker --webui-port 8081 spark://master:7077
========================================
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
22/11/07 10:20:13 INFO Worker: Started daemon with process name: 18908@worker2
22/11/07 10:20:13 INFO SignalUtils: Registering signal handler for TERM
22/11/07 10:20:13 INFO SignalUtils: Registering signal handler for HUP
22/11/07 10:20:13 INFO SignalUtils: Registering signal handler for INT
22/11/07 10:20:13 WARN Utils: Your hostname, worker2 resolves to a loopback address: 127.0.0.1; using 10.178.0.5 instead (on interface ens4)
22/11/07 10:20:13 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
22/11/07 10:20:14 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
22/11/07 10:20:14 INFO SecurityManager: Changing view acls to: sparkuser
22/11/07 10:20:14 INFO SecurityManager: Changing modify acls to: sparkuser
22/11/07 10:20:14 INFO SecurityManager: Changing view acls groups to:
22/11/07 10:20:14 INFO SecurityManager: Changing modify acls groups to:
22/11/07 10:20:14 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(sparkuser); groups with view permissions: Set(); users with modify permissions: Set(sparkuser); groups with modify permissions: Set()
22/11/07 10:20:14 INFO Utils: Successfully started service 'sparkWorker' on port 46335.
22/11/07 10:20:14 INFO Worker: Worker decommissioning not enabled.
22/11/07 10:20:15 INFO Worker: Starting Spark worker 10.178.0.5:46335 with 2 cores, 6.8 GiB RAM
22/11/07 10:20:15 INFO Worker: Running Spark version 3.2.2
22/11/07 10:20:15 INFO Worker: Spark home: /opt/spark
22/11/07 10:20:15 INFO ResourceUtils: ==============================================================
22/11/07 10:20:15 INFO ResourceUtils: No custom resources configured for spark.worker.
22/11/07 10:20:15 INFO ResourceUtils: ==============================================================
22/11/07 10:20:15 INFO Utils: Successfully started service 'WorkerUI' on port 8081.
22/11/07 10:20:15 INFO WorkerWebUI: Bound WorkerWebUI to 0.0.0.0, and started at http://worker2.c.apache-spark-project-363713.internal:8081
22/11/07 10:20:15 INFO Worker: Connecting to master master:7077...
22/11/07 10:20:24 INFO Worker: Retrying connection to master (attempt # 1)
22/11/07 10:20:24 INFO Worker: Connecting to master master:7077...
22/11/07 10:20:33 INFO Worker: Retrying connection to master (attempt # 2)
22/11/07 10:20:33 INFO Worker: Connecting to master master:7077...
22/11/07 10:20:42 INFO Worker: Retrying connection to master (attempt # 3)
22/11/07 10:20:42 INFO Worker: Connecting to master master:7077...
22/11/07 10:20:51 INFO Worker: Retrying connection to master (attempt # 4)
22/11/07 10:20:51 INFO Worker: Connecting to master master:7077...
22/11/07 10:21:00 INFO Worker: Retrying connection to master (attempt # 5)
22/11/07 10:21:00 INFO Worker: Connecting to master master:7077...
22/11/07 10:21:09 INFO Worker: Retrying connection to master (attempt # 6)
22/11/07 10:21:09 INFO Worker: Connecting to master master:7077...
22/11/07 10:22:00 INFO Worker: Retrying connection to master (attempt # 7)
22/11/07 10:22:00 INFO Worker: Connecting to master master:7077...
22/11/07 10:22:15 ERROR RpcOutboxMessage: Ask terminated before connecting successfully
22/11/07 10:22:15 WARN NettyRpcEnv: Ignored failure: java.io.IOException: Connecting to master/35.216.27.9:7077 timed out (120000 ms)
22/11/07 10:22:15 WARN Worker: Failed to connect to master master:7077
org.apache.spark.SparkException: Exception thrown in awaitResult:
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:301)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:102)
at org.apache.spark.rpc.RpcEnv.setupEndpointRef(RpcEnv.scala:110)
at org.apache.spark.deploy.worker.Worker$$anon$1.run(Worker.scala:311)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Connecting to master/35.216.27.9:7077 timed out (120000 ms)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:285)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:218)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:230)
at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:204)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:202)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:198)
... 4 more
22/11/07 10:22:51 INFO Worker: Retrying connection to master (attempt # 8)
[attempts 8 through 16 elided: each repeats "Connecting to master master:7077...", and every third attempt ends with the identical 120000 ms timeout stack trace shown above]
22/11/07 10:30:30 ERROR Worker: All masters are unresponsive! Giving up.
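The WARN at 10:20:13 ("Your hostname, worker2 resolves to a loopback address") is the strongest hint here: SPARK_LOCAL_IP is pinned to 127.0.0.1 on every node, and the workers end up chasing the master's external address 35.216.27.9, where port 7077 is evidently not reachable. A minimal sketch of a spark-env.sh that avoids this, assuming "master" resolves to the master's internal address on all nodes (the addresses in the comments are illustrative):
# spark-env.sh (sketch) - used on the master and on each worker
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_202
export SPARK_HOME=/opt/spark/
export SPARK_MASTER_HOST="master"     # must resolve to the master's internal IP
# either omit SPARK_LOCAL_IP entirely, or set it per node to that node's own
# internal address, never 127.0.0.1:
# export SPARK_LOCAL_IP="10.178.0.5"  # e.g. worker2, per the log above
export SPARK_MASTER_WEBUI_PORT=8080
export SPARK_LOG_DIR=/opt/spark/logs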

How do I use the portable runner and spark-submit to submit Beam's wordcount Python example to a remote Spark cluster on EMR running YARN?

I am trying to submit Beam's wordcount Python example to a remote Spark cluster on EMR running YARN as its resource manager. According to the Spark documentation, this needs to be done using the portable runner.
Following the portable runner instructions, I have started the job service endpoint, and it appears to start correctly:
$ docker run --net=host apache/beam_spark_job_server:latest --spark-master-url=spark://*.***.***.***:7077
20/08/31 12:13:08 INFO org.apache.beam.runners.jobsubmission.JobServerDriver: ArtifactStagingService started on localhost:8098
20/08/31 12:13:08 INFO org.apache.beam.runners.jobsubmission.JobServerDriver: Java ExpansionService started on localhost:8097
20/08/31 12:13:08 INFO org.apache.beam.runners.jobsubmission.JobServerDriver: JobService started on localhost:8099
20/08/31 12:13:08 INFO org.apache.beam.runners.jobsubmission.JobServerDriver: Job server now running, terminate with Ctrl+C
Now I try to submit the job using spark-submit; the input is a plain-text version of Sherlock Holmes:
$ spark-submit --master=yarn --deploy-mode=cluster wordcount.py --input data/sherlock.txt --output output --runner=PortableRunner --job_endpoint=localhost:8099 --environment_type=DOCKER --environment_config=apachebeam/python3.7_sdk
20/08/31 12:19:39 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/08/31 12:19:40 INFO RMProxy: Connecting to ResourceManager at ip-***-**-**-***.ec2.internal/***.**.**.***:8032
20/08/31 12:19:40 INFO Client: Requesting a new application from cluster with 2 NodeManagers
20/08/31 12:19:40 INFO Configuration: resource-types.xml not found
20/08/31 12:19:40 INFO ResourceUtils: Unable to find 'resource-types.xml'.
20/08/31 12:19:40 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (6144 MB per container)
20/08/31 12:19:40 INFO Client: Will allocate AM container, with 2432 MB memory including 384 MB overhead
20/08/31 12:19:40 INFO Client: Setting up container launch context for our AM
Exception in thread "main" java.lang.IllegalArgumentException: requirement failed: /usr/lib/spark/python/lib/pyspark.zip not found; cannot run pyspark application in YARN mode.
at scala.Predef$.require(Predef.scala:281)
at org.apache.spark.deploy.yarn.Client.$anonfun$findPySparkArchives$2(Client.scala:1167)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.deploy.yarn.Client.findPySparkArchives(Client.scala:1163)
at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:858)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:178)
at org.apache.spark.deploy.yarn.Client.run(Client.scala:1134)
at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1526)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:853)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:928)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:937)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
20/08/31 12:19:40 INFO ShutdownHookManager: Shutdown hook called
20/08/31 12:19:40 INFO ShutdownHookManager: Deleting directory /tmp/spark-ee751413-e29d-4b1f-8a16-fb8650b1ca10
It appears to want PySpark to be installed. I am fairly new to submitting Beam jobs to a Spark cluster; is there a reason why PySpark would need to be installed when submitting a Beam job? I have a feeling my spark-submit command is wrong, but I am having a hard time finding more concrete documentation on how to do what I am trying to do.
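For what it's worth, the pyspark.zip requirement comes from spark-submit itself: handed a .py file, it treats wordcount.py as a plain PySpark application and insists on staging PySpark in YARN mode. With the portable runner, the pattern the Beam docs describe is to run the pipeline as an ordinary Python program and let the job server hand the translated pipeline to Spark; a sketch reusing the exact flags from the command above:
# sketch: run the Beam pipeline directly (no spark-submit); the job server
# listening on localhost:8099 forwards the work to the Spark cluster
python wordcount.py \
  --input data/sherlock.txt \
  --output output \
  --runner=PortableRunner \
  --job_endpoint=localhost:8099 \
  --environment_type=DOCKER \
  --environment_config=apachebeam/python3.7_sdk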

DataStax Spark: Job failed on Zeppelin

I have set up DataStax Enterprise on three nodes in the local network.
Two nodes are Debian servers, where I used the apt package manager for installation. The last node is an iMac, where I used the .dmg package.
Node #1:
OS: Debian GNU/Linux 8.10 (jessie)
Local IP: 172.16.21.18
DataStax Enterprise: 5.1.7
Node #2:
OS: Ubuntu 16.04.3 LTS
Local IP: 172.16.21.25
DataStax Enterprise: 5.1.7
Node #3:
OS: macOS 10.13.2
Local IP: 192.168.1.108
DataStax Enterprise: 5.1.7
All nodes are up and running in Analytics and Search mode ($ dse cassandra -k -s).
Now I'm trying to connect to the Spark cluster using Apache Zeppelin 0.7.3. Apache Zeppelin is installed and configured on Node #1.
I followed these instructions for the configuration. Below you can see some basic changes in the config files:
zeppelin-0.7.3-bin-all/conf/zeppelin-env.sh
[..]
export MASTER=spark://172.16.21.18:7077 # Spark master url. eg. spark://master_addr:7077. Leave empty if you want to use local mode.
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export DSE_HOME=/usr
[..]
zeppelin-0.7.3-bin-all/bin/interpreter.sh
[..]
# set spark related env variables
if [[ "${INTERPRETER_ID}" == "spark" ]]; then
if [[ -n "${SPARK_HOME}" ]]; then
export SPARK_SUBMIT="${DSE_HOME}/bin/dse spark-submit"
[..]
Zeppelin Spark interpreter:
The Zeppelin CQL interpreter works perfectly with Apache Cassandra, but when I try to use the Spark interpreter to execute some queries, I get this error:
%spark
val results = spark.sql("SELECT * from keyspace.table")
java.lang.NullPointerException
at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:38)
at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:33)
[..]
Complete Zeppelin log file:
INFO [2018-02-21 04:25:36,185] ({Thread-0} RemoteInterpreterServer.java[run]:97) - Starting remote interpreter server on port 52127
INFO [2018-02-21 04:25:36,562] ({pool-1-thread-3} RemoteInterpreterServer.java[createInterpreter]:198) - Instantiate interpreter org.apache.zeppelin.spark.SparkInterpreter
INFO [2018-02-21 04:25:36,589] ({pool-1-thread-3} RemoteInterpreterServer.java[createInterpreter]:198) - Instantiate interpreter org.apache.zeppelin.spark.SparkSqlInterpreter
INFO [2018-02-21 04:25:36,601] ({pool-1-thread-3} RemoteInterpreterServer.java[createInterpreter]:198) - Instantiate interpreter org.apache.zeppelin.spark.DepInterpreter
INFO [2018-02-21 04:25:36,619] ({pool-1-thread-3} RemoteInterpreterServer.java[createInterpreter]:198) - Instantiate interpreter org.apache.zeppelin.spark.PySparkInterpreter
INFO [2018-02-21 04:25:36,622] ({pool-1-thread-3} RemoteInterpreterServer.java[createInterpreter]:198) - Instantiate interpreter org.apache.zeppelin.spark.SparkRInterpreter
INFO [2018-02-21 04:25:36,683] ({pool-2-thread-2} SchedulerFactory.java[jobStarted]:131) - Job remoteInterpretJob_1519205136682 started by scheduler org.apache.zeppelin.spark.SparkInterpreter269729544
INFO [2018-02-21 04:25:40,733] ({pool-2-thread-2} SparkInterpreter.java[createSparkSession]:318) - ------ Create new SparkContext spark://172.16.21.18:7077 -------
WARN [2018-02-21 04:25:40,740] ({pool-2-thread-2} SparkInterpreter.java[setupConfForSparkR]:577) - sparkr.zip is not found, sparkr may not work.
INFO [2018-02-21 04:25:40,786] ({pool-2-thread-2} Logging.scala[logInfo]:54) - Running Spark version 2.1.0
WARN [2018-02-21 04:25:41,760] ({pool-2-thread-2} NativeCodeLoader.java[<clinit>]:62) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
WARN [2018-02-21 04:25:41,958] ({pool-2-thread-2} Logging.scala[logWarning]:66) -
SPARK_CLASSPATH was detected (set to ':/home/cassandra/zeppelin-0.7.3-bin-all/interpreter/spark/dep/*:/home/cassandra/zeppelin-0.7.3-bin-all/interpreter/spark/*:/home/cassandra/zeppelin-0.7.3-bin-all/lib/interpreter/*:').
This is deprecated in Spark 1.0+.
Please instead use:
- ./spark-submit with --driver-class-path to augment the driver classpath
- spark.executor.extraClassPath to augment the executor classpath
WARN [2018-02-21 04:25:41,959] ({pool-2-thread-2} Logging.scala[logWarning]:66) - Setting 'spark.executor.extraClassPath' to ':/home/cassandra/zeppelin-0.7.3-bin-all/interpreter/spark/dep/*:/home/cassandra/zeppelin-0.7.3-bin-all/interpreter/spark/*:/home/cassandra/zeppelin-0.7.3-bin-all/lib/interpreter/*:' as a work-around.
WARN [2018-02-21 04:25:41,960] ({pool-2-thread-2} Logging.scala[logWarning]:66) - Setting 'spark.driver.extraClassPath' to ':/home/cassandra/zeppelin-0.7.3-bin-all/interpreter/spark/dep/*:/home/cassandra/zeppelin-0.7.3-bin-all/interpreter/spark/*:/home/cassandra/zeppelin-0.7.3-bin-all/lib/interpreter/*:' as a work-around.
WARN [2018-02-21 04:25:41,986] ({pool-2-thread-2} Logging.scala[logWarning]:66) - Your hostname, XPLAIN005 resolves to a loopback address: 127.0.1.1; using 172.16.21.18 instead (on interface eth0)
WARN [2018-02-21 04:25:41,987] ({pool-2-thread-2} Logging.scala[logWarning]:66) - Set SPARK_LOCAL_IP if you need to bind to another address
INFO [2018-02-21 04:25:42,017] ({pool-2-thread-2} Logging.scala[logInfo]:54) - Changing view acls to: cassandra
INFO [2018-02-21 04:25:42,017] ({pool-2-thread-2} Logging.scala[logInfo]:54) - Changing modify acls to: cassandra
INFO [2018-02-21 04:25:42,018] ({pool-2-thread-2} Logging.scala[logInfo]:54) - Changing view acls groups to:
INFO [2018-02-21 04:25:42,019] ({pool-2-thread-2} Logging.scala[logInfo]:54) - Changing modify acls groups to:
INFO [2018-02-21 04:25:42,019] ({pool-2-thread-2} Logging.scala[logInfo]:54) - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(cassandra); groups with view permissions: Set(); users with modify permissions: Set(cassandra); groups with modify permissions: Set()
INFO [2018-02-21 04:25:42,417] ({pool-2-thread-2} Logging.scala[logInfo]:54) - Successfully started service 'sparkDriver' on port 51240.
INFO [2018-02-21 04:25:42,445] ({pool-2-thread-2} Logging.scala[logInfo]:54) - Registering MapOutputTracker
INFO [2018-02-21 04:25:42,476] ({pool-2-thread-2} Logging.scala[logInfo]:54) - Registering BlockManagerMaster
INFO [2018-02-21 04:25:42,481] ({pool-2-thread-2} Logging.scala[logInfo]:54) - Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
INFO [2018-02-21 04:25:42,482] ({pool-2-thread-2} Logging.scala[logInfo]:54) - BlockManagerMasterEndpoint up
INFO [2018-02-21 04:25:42,507] ({pool-2-thread-2} Logging.scala[logInfo]:54) - Created local directory at /tmp/blockmgr-797ea400-69f1-4228-a6da-fe424edce8d4
INFO [2018-02-21 04:25:42,524] ({pool-2-thread-2} Logging.scala[logInfo]:54) - MemoryStore started with capacity 408.9 MB
INFO [2018-02-21 04:25:42,591] ({pool-2-thread-2} Logging.scala[logInfo]:54) - Registering OutputCommitCoordinator
INFO [2018-02-21 04:25:42,700] ({pool-2-thread-2} Log.java[initialized]:186) - Logging initialized @6930ms
INFO [2018-02-21 04:25:42,864] ({pool-2-thread-2} Server.java[doStart]:327) - jetty-9.2.z-SNAPSHOT
INFO [2018-02-21 04:25:42,902] ({pool-2-thread-2} ContextHandler.java[doStart]:744) - Started o.s.j.s.ServletContextHandler@2cbd702d{/jobs,null,AVAILABLE}
INFO [2018-02-21 04:25:42,903] ({pool-2-thread-2} ContextHandler.java[doStart]:744) - Started o.s.j.s.ServletContextHandler@240b993c{/jobs/json,null,AVAILABLE}
INFO [2018-02-21 04:25:42,903] ({pool-2-thread-2} ContextHandler.java[doStart]:744) - Started o.s.j.s.ServletContextHandler@5b7d8292{/jobs/job,null,AVAILABLE}
INFO [2018-02-21 04:25:42,908] ({pool-2-thread-2} ContextHandler.java[doStart]:744) - Started o.s.j.s.ServletContextHandler@4c2353ff{/jobs/job/json,null,AVAILABLE}
INFO [2018-02-21 04:25:42,909] ({pool-2-thread-2} ContextHandler.java[doStart]:744) - Started o.s.j.s.ServletContextHandler@bd87e4e{/stages,null,AVAILABLE}
INFO [2018-02-21 04:25:42,910] ({pool-2-thread-2} ContextHandler.java[doStart]:744) - Started o.s.j.s.ServletContextHandler@73e2d470{/stages/json,null,AVAILABLE}
INFO [2018-02-21 04:25:42,917] ({pool-2-thread-2} ContextHandler.java[doStart]:744) - Started o.s.j.s.ServletContextHandler@44bca18c{/stages/stage,null,AVAILABLE}
INFO [2018-02-21 04:25:42,918] ({pool-2-thread-2} ContextHandler.java[doStart]:744) - Started o.s.j.s.ServletContextHandler@1256be4f{/stages/stage/json,null,AVAILABLE}
INFO [2018-02-21 04:25:42,919] ({pool-2-thread-2} ContextHandler.java[doStart]:744) - Started o.s.j.s.ServletContextHandler@5a349845{/stages/pool,null,AVAILABLE}
INFO [2018-02-21 04:25:42,919] ({pool-2-thread-2} ContextHandler.java[doStart]:744) - Started o.s.j.s.ServletContextHandler@3f108627{/stages/pool/json,null,AVAILABLE}
INFO [2018-02-21 04:25:42,926] ({pool-2-thread-2} ContextHandler.java[doStart]:744) - Started o.s.j.s.ServletContextHandler@1e01f088{/storage,null,AVAILABLE}
INFO [2018-02-21 04:25:42,927] ({pool-2-thread-2} ContextHandler.java[doStart]:744) - Started o.s.j.s.ServletContextHandler@390281c1{/storage/json,null,AVAILABLE}
INFO [2018-02-21 04:25:42,927] ({pool-2-thread-2} ContextHandler.java[doStart]:744) - Started o.s.j.s.ServletContextHandler@470ac014{/storage/rdd,null,AVAILABLE}
INFO [2018-02-21 04:25:42,927] ({pool-2-thread-2} ContextHandler.java[doStart]:744) - Started o.s.j.s.ServletContextHandler@7c90476c{/storage/rdd/json,null,AVAILABLE}
INFO [2018-02-21 04:25:42,928] ({pool-2-thread-2} ContextHandler.java[doStart]:744) - Started o.s.j.s.ServletContextHandler@6d847dc6{/environment,null,AVAILABLE}
INFO [2018-02-21 04:25:42,936] ({pool-2-thread-2} ContextHandler.java[doStart]:744) - Started o.s.j.s.ServletContextHandler@40a5e53e{/environment/json,null,AVAILABLE}
INFO [2018-02-21 04:25:42,937] ({pool-2-thread-2} ContextHandler.java[doStart]:744) - Started o.s.j.s.ServletContextHandler@513e975e{/executors,null,AVAILABLE}
INFO [2018-02-21 04:25:42,937] ({pool-2-thread-2} ContextHandler.java[doStart]:744) - Started o.s.j.s.ServletContextHandler@2f6b1132{/executors/json,null,AVAILABLE}
INFO [2018-02-21 04:25:42,938] ({pool-2-thread-2} ContextHandler.java[doStart]:744) - Started o.s.j.s.ServletContextHandler@61cf2354{/executors/threadDump,null,AVAILABLE}
INFO [2018-02-21 04:25:42,939] ({pool-2-thread-2} ContextHandler.java[doStart]:744) - Started o.s.j.s.ServletContextHandler@eacb646{/executors/threadDump/json,null,AVAILABLE}
INFO [2018-02-21 04:25:42,951] ({pool-2-thread-2} ContextHandler.java[doStart]:744) - Started o.s.j.s.ServletContextHandler@2b8d44aa{/static,null,AVAILABLE}
INFO [2018-02-21 04:25:42,953] ({pool-2-thread-2} ContextHandler.java[doStart]:744) - Started o.s.j.s.ServletContextHandler@5c982268{/,null,AVAILABLE}
INFO [2018-02-21 04:25:42,954] ({pool-2-thread-2} ContextHandler.java[doStart]:744) - Started o.s.j.s.ServletContextHandler@44556f2c{/api,null,AVAILABLE}
INFO [2018-02-21 04:25:42,955] ({pool-2-thread-2} ContextHandler.java[doStart]:744) - Started o.s.j.s.ServletContextHandler@2fa0ef66{/jobs/job/kill,null,AVAILABLE}
INFO [2018-02-21 04:25:42,955] ({pool-2-thread-2} ContextHandler.java[doStart]:744) - Started o.s.j.s.ServletContextHandler@6e49562c{/stages/stage/kill,null,AVAILABLE}
INFO [2018-02-21 04:25:42,970] ({pool-2-thread-2} AbstractConnector.java[doStart]:266) - Started ServerConnector@53405611{HTTP/1.1}{0.0.0.0:4040}
INFO [2018-02-21 04:25:42,971] ({pool-2-thread-2} Server.java[doStart]:379) - Started @7201ms
INFO [2018-02-21 04:25:42,971] ({pool-2-thread-2} Logging.scala[logInfo]:54) - Successfully started service 'SparkUI' on port 4040.
INFO [2018-02-21 04:25:42,974] ({pool-2-thread-2} Logging.scala[logInfo]:54) - Bound SparkUI to 0.0.0.0, and started at http://172.16.21.18:4040
INFO [2018-02-21 04:25:43,214] ({pool-2-thread-2} Logging.scala[logInfo]:54) - Added file file:/home/cassandra/zeppelin-0.7.3-bin-all/interpreter/spark/pyspark/pyspark.zip at spark://172.16.21.18:51240/files/pyspark.zip with timestamp 1519205143214
INFO [2018-02-21 04:25:43,217] ({pool-2-thread-2} Logging.scala[logInfo]:54) - Copying /home/cassandra/zeppelin-0.7.3-bin-all/interpreter/spark/pyspark/pyspark.zip to /tmp/spark-2e9292e3-8c4d-445a-92f0-7d54188818db/userFiles-4e8301a5-91bc-4753-8436-6cced0bdc5c5/pyspark.zip
INFO [2018-02-21 04:25:43,226] ({pool-2-thread-2} Logging.scala[logInfo]:54) - Added file file:/home/cassandra/zeppelin-0.7.3-bin-all/interpreter/spark/pyspark/py4j-0.10.4-src.zip at spark://172.16.21.18:51240/files/py4j-0.10.4-src.zip with timestamp 1519205143226
INFO [2018-02-21 04:25:43,227] ({pool-2-thread-2} Logging.scala[logInfo]:54) - Copying /home/cassandra/zeppelin-0.7.3-bin-all/interpreter/spark/pyspark/py4j-0.10.4-src.zip to /tmp/spark-2e9292e3-8c4d-445a-92f0-7d54188818db/userFiles-4e8301a5-91bc-4753-8436-6cced0bdc5c5/py4j-0.10.4-src.zip
INFO [2018-02-21 04:25:43,279] ({pool-2-thread-2} Logging.scala[logInfo]:54) - Created default pool default, schedulingMode: FIFO, minShare: 0, weight: 1
INFO [2018-02-21 04:25:43,325] ({appclient-register-master-threadpool-0} Logging.scala[logInfo]:54) - Connecting to master spark://172.16.21.18:7077...
INFO [2018-02-21 04:25:43,391] ({netty-rpc-connection-0} TransportClientFactory.java[createClient]:250) - Successfully created connection to /172.16.21.18:7077 after 33 ms (0 ms spent in bootstraps)
INFO [2018-02-21 04:26:03,326] ({appclient-register-master-threadpool-0} Logging.scala[logInfo]:54) - Connecting to master spark://172.16.21.18:7077...
INFO [2018-02-21 04:26:23,326] ({appclient-register-master-threadpool-0} Logging.scala[logInfo]:54) - Connecting to master spark://172.16.21.18:7077...
ERROR [2018-02-21 04:26:43,328] ({appclient-registration-retry-thread} Logging.scala[logError]:70) - Application has been killed. Reason: All masters are unresponsive! Giving up.
WARN [2018-02-21 04:26:43,328] ({pool-2-thread-2} Logging.scala[logWarning]:66) - Application ID is not initialized yet.
INFO [2018-02-21 04:26:43,336] ({stop-spark-context} AbstractConnector.java[doStop]:306) - Stopped ServerConnector@53405611{HTTP/1.1}{0.0.0.0:4040}
INFO [2018-02-21 04:26:43,339] ({pool-2-thread-2} Logging.scala[logInfo]:54) - Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 40068.
INFO [2018-02-21 04:26:43,498] ({pool-2-thread-2} Logging.scala[logInfo]:54) - Server created on 172.16.21.18:40068
INFO [2018-02-21 04:26:43,499] ({stop-spark-context} ContextHandler.java[doStop]:865) - Stopped o.s.j.s.ServletContextHandler@6e49562c{/stages/stage/kill,null,UNAVAILABLE}
INFO [2018-02-21 04:26:43,500] ({stop-spark-context} ContextHandler.java[doStop]:865) - Stopped o.s.j.s.ServletContextHandler@2fa0ef66{/jobs/job/kill,null,UNAVAILABLE}
INFO [2018-02-21 04:26:43,501] ({stop-spark-context} ContextHandler.java[doStop]:865) - Stopped o.s.j.s.ServletContextHandler@44556f2c{/api,null,UNAVAILABLE}
INFO [2018-02-21 04:26:43,501] ({stop-spark-context} ContextHandler.java[doStop]:865) - Stopped o.s.j.s.ServletContextHandler@5c982268{/,null,UNAVAILABLE}
INFO [2018-02-21 04:26:43,505] ({stop-spark-context} ContextHandler.java[doStop]:865) - Stopped o.s.j.s.ServletContextHandler@2b8d44aa{/static,null,UNAVAILABLE}
INFO [2018-02-21 04:26:43,506] ({stop-spark-context} ContextHandler.java[doStop]:865) - Stopped o.s.j.s.ServletContextHandler@eacb646{/executors/threadDump/json,null,UNAVAILABLE}
INFO [2018-02-21 04:26:43,507] ({stop-spark-context} ContextHandler.java[doStop]:865) - Stopped o.s.j.s.ServletContextHandler@61cf2354{/executors/threadDump,null,UNAVAILABLE}
INFO [2018-02-21 04:26:43,508] ({pool-2-thread-2} Logging.scala[logInfo]:54) - Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
INFO [2018-02-21 04:26:43,508] ({stop-spark-context} ContextHandler.java[doStop]:865) - Stopped o.s.j.s.ServletContextHandler@2f6b1132{/executors/json,null,UNAVAILABLE}
INFO [2018-02-21 04:26:43,509] ({stop-spark-context} ContextHandler.java[doStop]:865) - Stopped o.s.j.s.ServletContextHandler@513e975e{/executors,null,UNAVAILABLE}
INFO [2018-02-21 04:26:43,510] ({stop-spark-context} ContextHandler.java[doStop]:865) - Stopped o.s.j.s.ServletContextHandler@40a5e53e{/environment/json,null,UNAVAILABLE}
INFO [2018-02-21 04:26:43,511] ({stop-spark-context} ContextHandler.java[doStop]:865) - Stopped o.s.j.s.ServletContextHandler@6d847dc6{/environment,null,UNAVAILABLE}
INFO [2018-02-21 04:26:43,511] ({stop-spark-context} ContextHandler.java[doStop]:865) - Stopped o.s.j.s.ServletContextHandler@7c90476c{/storage/rdd/json,null,UNAVAILABLE}
INFO [2018-02-21 04:26:43,512] ({stop-spark-context} ContextHandler.java[doStop]:865) - Stopped o.s.j.s.ServletContextHandler@470ac014{/storage/rdd,null,UNAVAILABLE}
INFO [2018-02-21 04:26:43,513] ({stop-spark-context} ContextHandler.java[doStop]:865) - Stopped o.s.j.s.ServletContextHandler@390281c1{/storage/json,null,UNAVAILABLE}
INFO [2018-02-21 04:26:43,513] ({stop-spark-context} ContextHandler.java[doStop]:865) - Stopped o.s.j.s.ServletContextHandler@1e01f088{/storage,null,UNAVAILABLE}
INFO [2018-02-21 04:26:43,513] ({stop-spark-context} ContextHandler.java[doStop]:865) - Stopped o.s.j.s.ServletContextHandler@3f108627{/stages/pool/json,null,UNAVAILABLE}
INFO [2018-02-21 04:26:43,514] ({pool-2-thread-2} Logging.scala[logInfo]:54) - Registering BlockManager BlockManagerId(driver, 172.16.21.18, 40068, None)
INFO [2018-02-21 04:26:43,514] ({stop-spark-context} ContextHandler.java[doStop]:865) - Stopped o.s.j.s.ServletContextHandler@5a349845{/stages/pool,null,UNAVAILABLE}
INFO [2018-02-21 04:26:43,515] ({stop-spark-context} ContextHandler.java[doStop]:865) - Stopped o.s.j.s.ServletContextHandler@1256be4f{/stages/stage/json,null,UNAVAILABLE}
INFO [2018-02-21 04:26:43,515] ({stop-spark-context} ContextHandler.java[doStop]:865) - Stopped o.s.j.s.ServletContextHandler@44bca18c{/stages/stage,null,UNAVAILABLE}
INFO [2018-02-21 04:26:43,516] ({stop-spark-context} ContextHandler.java[doStop]:865) - Stopped o.s.j.s.ServletContextHandler@73e2d470{/stages/json,null,UNAVAILABLE}
INFO [2018-02-21 04:26:43,516] ({stop-spark-context} ContextHandler.java[doStop]:865) - Stopped o.s.j.s.ServletContextHandler@bd87e4e{/stages,null,UNAVAILABLE}
INFO [2018-02-21 04:26:43,517] ({stop-spark-context} ContextHandler.java[doStop]:865) - Stopped o.s.j.s.ServletContextHandler@4c2353ff{/jobs/job/json,null,UNAVAILABLE}
INFO [2018-02-21 04:26:43,517] ({stop-spark-context} ContextHandler.java[doStop]:865) - Stopped o.s.j.s.ServletContextHandler@5b7d8292{/jobs/job,null,UNAVAILABLE}
INFO [2018-02-21 04:26:43,518] ({stop-spark-context} ContextHandler.java[doStop]:865) - Stopped o.s.j.s.ServletContextHandler@240b993c{/jobs/json,null,UNAVAILABLE}
INFO [2018-02-21 04:26:43,518] ({stop-spark-context} ContextHandler.java[doStop]:865) - Stopped o.s.j.s.ServletContextHandler@2cbd702d{/jobs,null,UNAVAILABLE}
INFO [2018-02-21 04:26:43,521] ({dispatcher-event-loop-0} Logging.scala[logInfo]:54) - Registering block manager 172.16.21.18:40068 with 408.9 MB RAM, BlockManagerId(driver, 172.16.21.18, 40068, None)
INFO [2018-02-21 04:26:43,522] ({stop-spark-context} Logging.scala[logInfo]:54) - Stopped Spark web UI at http://172.16.21.18:4040
INFO [2018-02-21 04:26:43,526] ({pool-2-thread-2} Logging.scala[logInfo]:54) - Registered BlockManager BlockManagerId(driver, 172.16.21.18, 40068, None)
INFO [2018-02-21 04:26:43,527] ({pool-2-thread-2} Logging.scala[logInfo]:54) - Initialized BlockManager: BlockManagerId(driver, 172.16.21.18, 40068, None)
INFO [2018-02-21 04:26:43,530] ({stop-spark-context} Logging.scala[logInfo]:54) - Shutting down all executors
INFO [2018-02-21 04:26:43,546] ({dispatcher-event-loop-1} Logging.scala[logInfo]:54) - Asking each executor to shut down
WARN [2018-02-21 04:26:43,561] ({dispatcher-event-loop-0} Logging.scala[logWarning]:66) - Drop UnregisterApplication(null) because has not yet connected to master
INFO [2018-02-21 04:26:43,583] ({dispatcher-event-loop-2} Logging.scala[logInfo]:54) - MapOutputTrackerMasterEndpoint stopped!
INFO [2018-02-21 04:26:43,596] ({stop-spark-context} Logging.scala[logInfo]:54) - MemoryStore cleared
INFO [2018-02-21 04:26:43,597] ({stop-spark-context} Logging.scala[logInfo]:54) - BlockManager stopped
INFO [2018-02-21 04:26:43,605] ({stop-spark-context} Logging.scala[logInfo]:54) - BlockManagerMaster stopped
INFO [2018-02-21 04:26:43,608] ({dispatcher-event-loop-1} Logging.scala[logInfo]:54) - OutputCommitCoordinator stopped!
ERROR [2018-02-21 04:26:43,748] ({pool-2-thread-2} Logging.scala[logError]:91) - Error initializing SparkContext.
java.lang.IllegalArgumentException: requirement failed: Can only call getServletHandlers on a running MetricsSystem
at scala.Predef$.require(Predef.scala:224)
at org.apache.spark.metrics.MetricsSystem.getServletHandlers(MetricsSystem.scala:91)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:524)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2313)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:868)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:860)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:860)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:38)
at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:33)
at org.apache.zeppelin.spark.SparkInterpreter.createSparkSession(SparkInterpreter.java:378)
at org.apache.zeppelin.spark.SparkInterpreter.getSparkSession(SparkInterpreter.java:233)
at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:841)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:70)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:491)
at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
INFO [2018-02-21 04:26:43,751] ({pool-2-thread-2} Logging.scala[logInfo]:54) - SparkContext already stopped.
ERROR [2018-02-21 04:26:43,751] ({pool-2-thread-2} Utils.java[invokeMethod]:40) -
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:38)
at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:33)
at org.apache.zeppelin.spark.SparkInterpreter.createSparkSession(SparkInterpreter.java:378)
at org.apache.zeppelin.spark.SparkInterpreter.getSparkSession(SparkInterpreter.java:233)
at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:841)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:70)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:491)
at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalArgumentException: requirement failed: Can only call getServletHandlers on a running MetricsSystem
at scala.Predef$.require(Predef.scala:224)
at org.apache.spark.metrics.MetricsSystem.getServletHandlers(MetricsSystem.scala:91)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:524)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2313)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:868)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:860)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:860)
... 20 more
INFO [2018-02-21 04:26:43,752] ({stop-spark-context} Logging.scala[logInfo]:54) - Successfully stopped SparkContext
INFO [2018-02-21 04:26:43,752] ({pool-2-thread-2} SparkInterpreter.java[createSparkSession]:379) - Created Spark session
ERROR [2018-02-21 04:26:43,753] ({pool-2-thread-2} Job.java[run]:181) - Job failed
java.lang.NullPointerException
at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:38)
at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:33)
at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext_2(SparkInterpreter.java:398)
at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext(SparkInterpreter.java:387)
at org.apache.zeppelin.spark.SparkInterpreter.getSparkContext(SparkInterpreter.java:146)
at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:843)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:70)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:491)
at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
INFO [2018-02-21 04:26:43,759] ({pool-2-thread-2} SchedulerFactory.java[jobFinished]:137) - Job remoteInterpretJob_1519205136682 finished by scheduler org.apache.zeppelin.spark.SparkInterpreter269729544
What do you think?
UPDATE:
All nodes upgraded to DataStax Enterprise 5.1.7.
With DSE 5.1, any reference to the Spark master should look like this example:
export MASTER=dse://1.20.300.10
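Applied to this cluster, that would mean pointing zeppelin-env.sh at a DSE Analytics node with the dse:// scheme instead of spark://172.16.21.18:7077, for example (a sketch using Node #1's address from the question):
# zeppelin-0.7.3-bin-all/conf/zeppelin-env.sh (sketch)
export MASTER=dse://172.16.21.18   # dse:// lets DSE locate the actual Spark master
export DSE_HOME=/usr
export JAVA_HOME=/usr/lib/jvm/java-8-oracle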
ERROR [2018-02-21 04:26:43,328] ({appclient-registration-retry-thread} Logging.scala[logError]:70) - Application has been killed. Reason: All masters are unresponsive! Giving up.
WARN [2018-02-21 04:26:43,328] ({pool-2-thread-2} Logging.scala[logWarning]:66) - Application ID is not initialized yet.
It seems the app was killed. Could you check the logs on the Spark master?

ERROR yarn.ApplicationMaster: Uncaught exception: java.util.concurrent.TimeoutException: Futures timed out after 100000 milliseconds [duplicate]

This question already has answers here:
Why does join fail with "java.util.concurrent.TimeoutException: Futures timed out after [300 seconds]"?
(4 answers)
Closed 4 years ago.
I have this problem in my Spark application (Spark 1.6, Scala 2.10):
17/10/23 14:32:15 ERROR yarn.ApplicationMaster: Uncaught exception:
java.util.concurrent.TimeoutException: Futures timed out after [100000 milliseconds]
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
at scala.concurrent.Await$.result(package.scala:107)
at org.apache.spark.deploy.yarn.ApplicationMaster.runDriver(ApplicationMaster.scala:342)
at org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:197)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$main$1.apply$mcV$sp(ApplicationMaster.scala:680)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:69)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:68)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:68)
at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:678)
at org.apache.spark.deploy.yarn.ApplicationMaster.main(ApplicationMaster.scala)
17/10/23 14:32:15 INFO yarn.ApplicationMaster: Final app status: FAILED, exitCode: 10, (reason: Uncaught exception: java.util.concurrent.TimeoutException: Futures timed out after [100000 milliseconds])
17/10/23 14:32:15 INFO spark.SparkContext: Invoking stop() from shutdown hook
17/10/23 14:32:15 INFO ui.SparkUI: Stopped Spark web UI at http://180.21.232.30:43576
17/10/23 14:32:15 INFO scheduler.DAGScheduler: ShuffleMapStage 27 (show at Linkage.scala:282) failed in 24.519 s due to Stage cancelled because SparkContext was shut down
17/10/23 14:32:15 SparkListenerJobEnd(18,1508761935656,JobFailed(org.apache.spark.SparkException: Job 18 cancelled because SparkContext was shut down))
17/10/23 14:32:15 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/10/23 14:32:15 INFO storage.MemoryStore: MemoryStore cleared
17/10/23 14:32:15 INFO storage.BlockManager: BlockManager stopped
17/10/23 14:32:15 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
17/10/23 14:32:15 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
17/10/23 14:32:15 INFO util.ShutdownHookManager: Shutdown hook called
I read articles about this problem and tried to modify the following parameters, without result:
--conf spark.yarn.am.waitTime=6000s
--conf spark.sql.broadcastTimeout=6000
--conf spark.network.timeout=600
Best Regards
Please remove the setMaster('local') call from the code, because Spark uses the YARN cluster manager by default on EMR.
If you are trying to run your Spark job on YARN in client or cluster mode, don't forget to remove the master configuration .master("local[n]") from your code.
To submit a Spark job on YARN, you need to pass --master yarn --deploy-mode cluster (or client).
Having the master set to local was causing the repeated timeout exception.
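A minimal sketch of the submit-time equivalent (the class and jar names here are placeholders, not from the question):
# sketch: let spark-submit choose the master instead of hard-coding
# .setMaster("local[n]") inside the application
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.example.MyApp \
  myapp.jar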

spark-cassandra java.lang.NoClassDefFoundError: com/datastax/spark/connector/japi/CassandraJavaUtil

16/04/26 16:58:46 DEBUG ProtobufRpcEngine: Call: complete took 3ms
Exception in thread "main" java.lang.NoClassDefFoundError: com/datastax/spark/connector/japi/CassandraJavaUtil
at com.baitic.mcava.lecturahdfssaveincassandra.TratamientoCSV.main(TratamientoCSV.java:123)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: com.datastax.spark.connector.japi.CassandraJavaUtil
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 10 more
16/04/26 16:58:46 INFO SparkContext: Invoking stop() from shutdown hook
16/04/26 16:58:46 INFO SparkUI: Stopped Spark web UI at http://10.128.0.5:4040
16/04/26 16:58:46 INFO SparkDeploySchedulerBackend: Shutting down all executors
16/04/26 16:58:46 INFO SparkDeploySchedulerBackend: Asking each executor to shut down
16/04/26 16:58:46 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/04/26 16:58:46 INFO MemoryStore: MemoryStore cleared
16/04/26 16:58:46 INFO BlockManager: BlockManager stopped
16/04/26 16:58:46 INFO BlockManagerMaster: BlockManagerMaster stopped
16/04/26 16:58:46 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/04/26 16:58:46 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/04/26 16:58:46 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/04/26 16:58:46 INFO SparkContext: Successfully stopped SparkContext
16/04/26 16:58:46 INFO ShutdownHookManager: Shutdown hook called
16/04/26 16:58:46 INFO ShutdownHookManager: Deleting directory /srv/spark/tmp/spark-2bf57fa2-a2d5-4f8a-980c-994e56b61c44
16/04/26 16:58:46 DEBUG Client: stopping client from cache: org.apache.hadoop.ipc.Client@3fb9a67f
16/04/26 16:58:46 DEBUG Client: removing client from cache: org.apache.hadoop.ipc.Client@3fb9a67f
16/04/26 16:58:46 DEBUG Client: stopping actual client because no more references remain: org.apache.hadoop.ipc.Client@3fb9a67f
16/04/26 16:58:46 DEBUG Client: Stopping client
16/04/26 16:58:46 DEBUG Client: IPC Client (2107841088) connection to mcava-master/10.128.0.5:54310 from baiticpruebas2: closed
16/04/26 16:58:46 DEBUG Client: IPC Client (2107841088) connection to mcava-master/10.128.0.5:54310 from baiticpruebas2: stopped, remaining connections 0
16/04/26 16:58:46 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
I wrote this simple code:
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

// input data on HDFS, plus the jars that must be shipped to the executors
String pathDatos = "hdfs://mcava-master:54310/srv/hadoop/data/spark/DatosApp/medidasSensorTratadas.txt";
String jarPath = "hdfs://mcava-master:54310/srv/hadoop/data/spark/original-LecturaHDFSsaveInCassandra-1.0-SNAPSHOT.jar";
String jar = "hdfs://mcava-master:54310/srv/hadoop/data/spark/spark-cassandra-connector-assembly-1.6.0-M1-4-g6f01cfe.jar";
String jar2 = "hdfs://mcava-master:54310/srv/hadoop/data/spark/spark-cassandra-connector-java-assembly-1.6.0-M1-4-g6f01cfe.jar";
String[] jars = new String[] { jarPath, jar2, jar };

SparkConf conf = new SparkConf().setAppName("TratamientoCSV").setJars(jars);
conf.set("spark.cassandra.connection.host", "10.128.0.5");
conf.set("spark.kryoserializer.buffer.max", "512");
conf.set("spark.kryoserializer.buffer", "256");

JavaSparkContext sc = new JavaSparkContext(conf);
JavaRDD<String> input = sc.textFile(pathDatos);
I also put the path to the Cassandra driver in spark-defaults.conf:
spark.driver.extraClassPath hdfs://mcava-master:54310/srv/hadoop/data/spark/spark-cassandra-connector-java-assembly-1.6.0-M1-4-g6f01cfe.jar
spark.executor.extraClassPath hdfs://mcava-master:54310/srv/hadoop/data/spark/spark-cassandra-connector-java-assembly-1.6.0-M1-4-g6f01cfe.jar
I also passed the --jars flag pointing at the driver jar, but I always get the same error, and I do not understand why.
I am working on Google Compute Engine.
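One detail worth flagging here: spark.driver.extraClassPath and spark.executor.extraClassPath are plain JVM classpath entries, and a JVM classpath cannot load classes from hdfs:// URLs, so the two spark-defaults.conf lines above likely do nothing for the driver. A sketch with node-local copies instead (the /srv/spark/jars path is illustrative, not from the question):
# spark-defaults.conf (sketch): extraClassPath must name files on each
# machine's local filesystem, not hdfs:// URLs
spark.driver.extraClassPath /srv/spark/jars/spark-cassandra-connector-java-assembly-1.6.0-M1-4-g6f01cfe.jar
spark.executor.extraClassPath /srv/spark/jars/spark-cassandra-connector-java-assembly-1.6.0-M1-4-g6f01cfe.jar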
Try adding the package when you submit your app:
$SPARK_HOME/bin/spark-submit --packages datastax:spark-cassandra-connector:1.6.0-M2-s_2.11 ....
I added this argument to solve the problem: --packages datastax:spark-cassandra-connector:1.6.0-M2-s_2.10.
At least for the 3.0+ Spark Cassandra Connector, the official assembly jar works well for me; it has all the necessary dependencies.
I solved the problem: I made a fat jar with all the dependencies, so it is not necessary to reference the Cassandra connector separately, only the fat jar itself.
I used Spark in my Java program and had the same issue.
The problem was that I didn't include spark-cassandra-connector in my project's Maven dependencies.
<dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector_2.11</artifactId>
    <version>2.0.7</version> <!-- Check actual version in Maven repo -->
</dependency>
After that I built a fat jar with all my dependencies, and it worked!
Maybe it will help someone.
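For reference, a sketch of that shaded-jar route against the names in this question (assuming the Maven build configures a shade or assembly plugin; the main class comes from the stack trace above). Note that if maven-shade-plugin is used, the original-*.jar it leaves behind is the unshaded jar, so the jarPath in the question was likely pointing at the jar without the connector baked in:
# build one fat jar with the Cassandra connector inside, then submit just
# that jar - no --jars, setJars, or extraClassPath entries needed
mvn package
spark-submit \
  --class com.baitic.mcava.lecturahdfssaveincassandra.TratamientoCSV \
  target/LecturaHDFSsaveInCassandra-1.0-SNAPSHOT.jar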
