Livy PySpark Python Session Error in Jupyter with Spark Magic - ERROR repl.PythonInterpreter: Process has died with 1

I'm running a Spark v2.0.0 YARN cluster, with Livy running beside the Spark master.
I have set up a Jupyter Python 3 notebook, installed Spark Magic, and followed the necessary instructions to connect Spark Magic to Livy, but when I create my session I get an error message from the notebook.
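For reference, the endpoint is the only non-default part of my sparkmagic setup; the relevant fragment of ~/.sparkmagic/config.json (trimmed to the fields that matter here, so treat it as a sketch rather than my full config) is:
{
  "kernel_python_credentials": {
    "username": "",
    "password": "",
    "url": "http://spark-master:8998"
  }
}
This is what the notebook shows: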
Added endpoint http://spark-master:8998
Starting Spark application
ID YARN Application ID Kind State Spark UI Driver log Current session?
0 None pyspark idle ✔
---------------------------------------------------------------------------
LivyUnexpectedStatusException Traceback (most recent call last)
/opt/conda/lib/python3.5/site-packages/hdijupyterutils/ipywidgetfactory.py in submit_clicked(self, button)
63
64 def submit_clicked(self, button):
---> 65 self.parent_widget.run()
/opt/conda/lib/python3.5/site-packages/sparkmagic/controllerwidget/createsessionwidget.py in run(self)
56
57 try:
---> 58 self.spark_controller.add_session(alias, endpoint, skip, properties)
59 except ValueError as e:
60 self.ipython_display.send_error("""Could not add session with
/opt/conda/lib/python3.5/site-packages/sparkmagic/livyclientlib/sparkcontroller.py in add_session(self, name, endpoint, skip_if_exists, properties)
79 session = self._livy_session(http_client, properties, self.ipython_display)
80 self.session_manager.add_session(name, session)
---> 81 session.start()
82
83 def get_session_id_for_client(self, name):
/opt/conda/lib/python3.5/site-packages/sparkmagic/livyclientlib/livysession.py in start(self)
148 else:
149 command = Command("sqlContext")
--> 150 (success, out) = command.execute(self)
151 if success:
152 self.ipython_display.writeln(u"SparkContext available as 'sc'.")
/opt/conda/lib/python3.5/site-packages/sparkmagic/livyclientlib/command.py in execute(self, session)
29 statement_id = -1
30 try:
---> 31 session.wait_for_idle()
32 data = {u"code": self.code}
33 response = session.http_client.post_statement(session.id, data)
/opt/conda/lib/python3.5/site-packages/sparkmagic/livyclientlib/livysession.py in wait_for_idle(self, seconds_to_wait)
238 .format(self.id, self.status)
239 self.logger.error(error)
--> 240 raise LivyUnexpectedStatusException(u'{} See logs:\n{}'.format(error, self.get_logs()))
241
242 if seconds_to_wait <= 0.0:
LivyUnexpectedStatusException: Session 0 unexpectedly reached final status 'error'. See logs:
Here is the error I get in the Livy logs when creating a new session in the Manage Spark section of Jupyter:
17/02/10 13:06:08 INFO StateStore$: Using BlackholeStateStore for recovery.
17/02/10 13:06:08 INFO BatchSessionManager: Recovered 0 batch sessions. Next session id: 0
17/02/10 13:06:08 INFO InteractiveSessionManager: Recovered 0 interactive sessions. Next session id: 0
17/02/10 13:06:08 INFO InteractiveSessionManager: Heartbeat watchdog thread started.
17/02/10 13:06:08 INFO WebServer: Starting server on http://spark-master:8998
17/02/10 13:06:34 INFO InteractiveSession$: Creating LivyClient for sessionId: 0
17/02/10 13:06:34 WARN RSCConf: Your hostname, spark-master, resolves to a loopback address, but we couldn't find any external IP address!
17/02/10 13:06:34 WARN RSCConf: Set livy.rsc.rpc.server.address if you need to bind to another address.
17/02/10 13:06:35 INFO InteractiveSessionManager: Registering new session 0
17/02/10 13:06:35 INFO ContextLauncher: 17/02/10 13:06:35 INFO driver.RSCDriver: Starting RPC server...
17/02/10 13:06:35 INFO ContextLauncher: 17/02/10 13:06:35 WARN rsc.RSCConf: Set livy.rsc.rpc.server.address if you need to bind to another address.
17/02/10 13:06:35 INFO ContextLauncher: 17/02/10 13:06:35 INFO driver.RSCDriver: Received job request 3ca8a52b-8dd5-41f0-8151-a8201d72d422
17/02/10 13:06:35 INFO ContextLauncher: 17/02/10 13:06:35 INFO driver.RSCDriver: SparkContext not yet up, queueing job request.
17/02/10 13:06:36 INFO ContextLauncher: Setting default log level to "WARN".
17/02/10 13:06:36 INFO ContextLauncher: To adjust logging level use sc.setLogLevel(newLevel).
17/02/10 13:06:36 INFO ContextLauncher: 17/02/10 13:06:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/02/10 13:06:37 INFO ContextLauncher: 17/02/10 13:06:37 ERROR repl.PythonInterpreter: Process has died with 1
17/02/10 13:06:37 INFO RSCClient: Received result for 3ca8a52b-8dd5-41f0-8151-a8201d72d422
I'm unable to put my finger on the exact issue or fix. I can create a session successfully if I set the session language to Scala; the error only occurs when the session language is Python. If anyone knows how to get a livy-repl PySpark session working in Jupyter, please let me know!
UPDATE
Livy still fails to create a PySpark session.
curl -v -X POST --data '{"kind": "pyspark"}' -H "Content-Type: application/json" example.com/sessions
The session state goes straight from "starting" to "failed". The YARN logs on the Resource Manager show the following right before the Livy session fails:
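To watch the transition I poll the session resource with a small script (a minimal sketch using the requests library; example.com is the same placeholder host as in the curl above):
import json
import time
import requests

# Poll the Livy session created above until it leaves the 'starting' state;
# the final JSON shows the failed state and any log lines Livy kept.
while True:
    session = requests.get("http://example.com/sessions/0").json()
    print(json.dumps(session, indent=2))
    if session["state"] not in ("not_started", "starting"):
        break
    time.sleep(2)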
To adjust logging level use sc.setLogLevel(newLevel).
17/02/26 05:02:25 WARN rsc.RSCConf: Your hostname, yarn-slave1, resolves to a loopback address, but we couldn't find any external IP address!
17/02/26 05:02:25 WARN rsc.RSCConf: Set livy.rsc.rpc.server.address if you need to bind to another address.
17/02/26 05:02:32 ERROR repl.PythonInterpreter: Process has died with 1
17/02/26 05:02:33 WARN yarn.YarnAllocator: Container marked as failed: container_1488085279373_0001_01_000002 on host: yarn-slave1. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1488085279373_0001_01_000002
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
at org.apache.hadoop.util.Shell.run(Shell.java:479)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
17/02/26 05:02:33 WARN yarn.ApplicationMaster: Reporter thread fails 1 time(s) in a row.
java.lang.IllegalStateException: RpcEnv already stopped.
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:159)
at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131)
at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:185)
at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:508)
at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1$$anonfun$apply$7.apply(YarnAllocator.scala:531)
at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1$$anonfun$apply$7.apply(YarnAllocator.scala:512)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1.apply(YarnAllocator.scala:512)
at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1.apply(YarnAllocator.scala:442)
at scala.collection.Iterator$class.foreach(Iterator.scala:742)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at org.apache.spark.deploy.yarn.YarnAllocator.processCompletedContainers(YarnAllocator.scala:442)
at org.apache.spark.deploy.yarn.YarnAllocator.allocateResources(YarnAllocator.scala:242)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$1.run(ApplicationMaster.scala:372)
17/02/26 05:02:40 WARN yarn.YarnAllocator: Container marked as failed: container_1488085279373_0001_01_000005 on host: yarn-slave1. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1488085279373_0001_01_000005
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
at org.apache.hadoop.util.Shell.run(Shell.java:479)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
17/02/26 05:02:40 WARN yarn.ApplicationMaster: Reporter thread fails 1 time(s) in a row.
java.lang.IllegalStateException: RpcEnv already stopped.
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:159)
at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131)
at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:185)
at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:508)
at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1$$anonfun$apply$7.apply(YarnAllocator.scala:531)
at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1$$anonfun$apply$7.apply(YarnAllocator.scala:512)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1.apply(YarnAllocator.scala:512)
at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1.apply(YarnAllocator.scala:442)
at scala.collection.Iterator$class.foreach(Iterator.scala:742)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at org.apache.spark.deploy.yarn.YarnAllocator.processCompletedContainers(YarnAllocator.scala:442)
at org.apache.spark.deploy.yarn.YarnAllocator.allocateResources(YarnAllocator.scala:242)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$1.run(ApplicationMaster.scala:372)
17/02/26 05:02:47 WARN yarn.YarnAllocator: Container marked as failed: container_1488085279373_0001_01_000006 on host: yarn-slave1. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1488085279373_0001_01_000006
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
at org.apache.hadoop.util.Shell.run(Shell.java:479)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
17/02/26 05:02:47 WARN yarn.ApplicationMaster: Reporter thread fails 1 time(s) in a row.
java.lang.IllegalStateException: RpcEnv already stopped.
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:159)
at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131)
at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:185)
at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:508)
at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1$$anonfun$apply$7.apply(YarnAllocator.scala:531)
at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1$$anonfun$apply$7.apply(YarnAllocator.scala:512)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1.apply(YarnAllocator.scala:512)
at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1.apply(YarnAllocator.scala:442)
at scala.collection.Iterator$class.foreach(Iterator.scala:742)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at org.apache.spark.deploy.yarn.YarnAllocator.processCompletedContainers(YarnAllocator.scala:442)
at org.apache.spark.deploy.yarn.YarnAllocator.allocateResources(YarnAllocator.scala:242)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$1.run(ApplicationMaster.scala:372)
17/02/26 05:02:53 WARN yarn.YarnAllocator: Container marked as failed: container_1488085279373_0001_01_000007 on host: yarn-slave1. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1488085279373_0001_01_000007
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
at org.apache.hadoop.util.Shell.run(Shell.java:479)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
17/02/26 05:02:53 WARN yarn.ApplicationMaster: Reporter thread fails 1 time(s) in a row.
java.lang.IllegalStateException: RpcEnv already stopped.
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:159)
at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131)
at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:185)
at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:508)
at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1$$anonfun$apply$7.apply(YarnAllocator.scala:531)
at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1$$anonfun$apply$7.apply(YarnAllocator.scala:512)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1.apply(YarnAllocator.scala:512)
at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1.apply(YarnAllocator.scala:442)
at scala.collection.Iterator$class.foreach(Iterator.scala:742)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at org.apache.spark.deploy.yarn.YarnAllocator.processCompletedContainers(YarnAllocator.scala:442)
at org.apache.spark.deploy.yarn.YarnAllocator.allocateResources(YarnAllocator.scala:242)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$1.run(ApplicationMaster.scala:372)
spark-defaults.conf
spark.yarn.appMasterEnv.PYSPARK_PYTHON python2
core-site.xml
<property>
  <name>hadoop.proxyuser.livy.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.livy.hosts</name>
  <value>*</value>
</property>
livy.conf
livy.server.host = 0.0.0.0
livy.server.port = 8998
livy.spark.master = yarn
livy.spark.deployMode = cluster
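For what it's worth, spark-defaults.conf pins PYSPARK_PYTHON to python2, and an interpreter that is missing or broken on a node is a common way for the repl process to die with exit code 1. A quick way to confirm which interpreter the executors actually launch (a minimal sketch, run via spark-submit since the Livy PySpark session itself never comes up):
import sys
from pyspark import SparkConf, SparkContext

# Print the interpreter on the driver and on each executor; every path
# printed here must exist on every YARN node.
conf = SparkConf().setAppName("python-check")
sc = SparkContext(conf=conf)
print("driver python:", sys.executable)
print("executor pythons:",
      sc.parallelize(range(4), 4).map(lambda _: sys.executable).distinct().collect())
sc.stop()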

I was able to reproduce this issue.
The problem seems to be that Spark 2.0.0 and Livy ship incompatible PySpark versions. If you update Spark to the most recent version (currently 2.1.0), the PySpark versions can communicate and the Spark session is created without a hitch.

I faced a similar issue even with Spark 2.1.1 and Livy: the Livy session status went from "starting" to "error". It turned out I was using Java 7, while Livy and Spark need Java 8. Switching to Java 8 solved my issue.

I was facing a similar issue. It turned out the culprit was the Livy version: when I replaced Cloudera Livy with the Apache livy-0.6.0-incubating release, the problem was solved and I was able to create a pyspark-kind session on Livy.

Related

When running "local-cluster" model in Apache Spark, how to prevent executor from dissociating prematurely?

I have a Spark application that should be tested in both local mode & local-cluster mode, using scalatest.
The local-cluster mode is submitted using the method described in:
How to scala-test a Spark program under "local-cluster" mode?
The tests run successfully, but when terminating the test I get the following error in the log:
22/05/16 17:45:25 ERROR TaskSchedulerImpl: Lost executor 0 on 172.16.224.18: Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues. Check driver logs for WARN messages.
22/05/16 17:45:25 ERROR Worker: Failed to launch executor app-20220516174449-0000/2 for Test.
java.lang.IllegalStateException: Shutdown hooks cannot be modified during shutdown.
at org.apache.spark.util.SparkShutdownHookManager.add(ShutdownHookManager.scala:195)
at org.apache.spark.util.ShutdownHookManager$.addShutdownHook(ShutdownHookManager.scala:153)
at org.apache.spark.util.ShutdownHookManager$.addShutdownHook(ShutdownHookManager.scala:142)
at org.apache.spark.deploy.worker.ExecutorRunner.start(ExecutorRunner.scala:77)
at org.apache.spark.deploy.worker.Worker$$anonfun$receive$1.applyOrElse(Worker.scala:547)
at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:117)
at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:215)
at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:102)
at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:221)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
22/05/16 17:45:25 ERROR Worker: Failed to launch executor app-20220516174449-0000/3 for Test.
java.lang.IllegalStateException: Shutdown hooks cannot be modified during shutdown.
at org.apache.spark.util.SparkShutdownHookManager.add(ShutdownHookManager.scala:195)
at org.apache.spark.util.ShutdownHookManager$.addShutdownHook(ShutdownHookManager.scala:153)
at org.apache.spark.util.ShutdownHookManager$.addShutdownHook(ShutdownHookManager.scala:142)
at org.apache.spark.deploy.worker.ExecutorRunner.start(ExecutorRunner.scala:77)
at org.apache.spark.deploy.worker.Worker$$anonfun$receive$1.applyOrElse(Worker.scala:547)
at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:117)
at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:215)
at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:102)
at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:221)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
22/05/16 17:45:25 ERROR Worker: Failed to launch executor app-20220516174449-0000/4 for Test.
java.lang.IllegalStateException: Shutdown hooks cannot be modified during shutdown.
at org.apache.spark.util.SparkShutdownHookManager.add(ShutdownHookManager.scala:195)
at org.apache.spark.util.ShutdownHookManager$.addShutdownHook(ShutdownHookManager.scala:153)
at org.apache.spark.util.ShutdownHookManager$.addShutdownHook(ShutdownHookManager.scala:142)
at org.apache.spark.deploy.worker.ExecutorRunner.start(ExecutorRunner.scala:77)
at org.apache.spark.deploy.worker.Worker$$anonfun$receive$1.applyOrElse(Worker.scala:547)
at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:117)
at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:215)
at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:102)
at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:221)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
22/05/16 17:45:25 ERROR Worker: Failed to launch executor app-20220516174449-0000/5 for Test.
java.lang.IllegalStateException: Shutdown hooks cannot be modified during shutdown.
at org.apache.spark.util.SparkShutdownHookManager.add(ShutdownHookManager.scala:195)
at org.apache.spark.util.ShutdownHookManager$.addShutdownHook(ShutdownHookManager.scala:153)
at org.apache.spark.util.ShutdownHookManager$.addShutdownHook(ShutdownHookManager.scala:142)
at org.apache.spark.deploy.worker.ExecutorRunner.start(ExecutorRunner.scala:77)
at org.apache.spark.deploy.worker.Worker$$anonfun$receive$1.applyOrElse(Worker.scala:547)
at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:117)
at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:215)
at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:102)
at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dis
...
It turns out executor 0 was dropped before the SparkContext was stopped; this triggered a violent self-healing reaction from the Spark master, which repeatedly tries to launch new executors to compensate for the loss. How do I prevent this from happening?
Spark attempts to recover from failed tasks by running them again. To avoid this, you can set the following properties to 1:
spark.task.maxFailures (default is 4)
spark.stage.maxConsecutiveAttempts (default is 4)
These properties can be set in $SPARK_HOME/conf/spark-defaults.conf or given as options to spark-submit:
spark-submit --conf spark.task.maxFailures=1 --conf spark.stage.maxConsecutiveAttempts=1
or in the Spark context/session configuration before starting the session.
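In PySpark, for example, that would look roughly like this (a sketch; the Scala tests in the question would use the same config keys on their builder):
from pyspark.sql import SparkSession

# Fail fast instead of retrying: tasks from a lost executor are not re-attempted.
spark = (SparkSession.builder
         .appName("Test")
         .config("spark.task.maxFailures", "1")
         .config("spark.stage.maxConsecutiveAttempts", "1")
         .getOrCreate())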
EDIT:
It looks like your executors are lost due to insufficient memory. You could try to increase:
spark.executor.memory
spark.executor.memoryOverhead
spark.memory.offHeap.size (with spark.memory.offHeap.enabled=true)
(see Spark configuration)
The maximum memory size of the container running an executor is determined by the sum of spark.executor.memoryOverhead, spark.executor.memory, spark.memory.offHeap.size and spark.executor.pyspark.memory.
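For example, with spark.executor.memory=4g, a default overhead of max(384 MiB, 0.10 × 4096 MiB) = 410 MiB, and no off-heap or PySpark memory configured, each executor container request comes to roughly 4.4 GiB.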

My PySpark Jobs Run Fine in Local Mode, But Fail in Cluster Mode - SOLVED

I have a four node Hadoop/Spark cluster running in AWS. I can submit and run jobs perfectly in local mode:
spark-submit --master local[*] myscript.py
But when I attempt to run the script in cluster mode, it fails. I'm just trying the cluster equivalent of "hello world":
spark-submit spark-yarn.py
Where the script is this one that was recommended:
from pyspark import SparkConf
from pyspark import SparkContext

conf = SparkConf()
conf.setMaster('yarn')
conf.setAppName('spark-yarn')
sc = SparkContext(conf=conf)

def mod(x):
    import numpy as np
    return (x, np.mod(x, 2))

rdd = sc.parallelize(range(1000)).map(mod).take(10)
print(rdd)
I've spent days looking at every log I can find and reading everything I can online, but nothing has helped me get to the root of why it's not working. Before I tear down all the servers and start over, I'm hoping someone can point me in the right direction to get this working.
Here's the output in the terminal:
20/02/25 12:59:51 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/02/25 13:00:11 ERROR YarnClientSchedulerBackend: YARN application has exited unexpectedly with state FAILED! Check the YARN application logs for more details.
20/02/25 13:00:11 ERROR YarnClientSchedulerBackend: Diagnostics message: Application application_1582603840719_0002 failed 2 times due to AM Container for appattempt_1582603840719_0002_000002 exited with exitCode: -103
Failing this attempt.Diagnostics: [2020-02-25 13:00:11.601]Container [pid=3124,containerID=container_1582603840719_0002_02_000001] is running beyond virtual memory limits. Current usage: 328.7 MB of 1 GB physical memory used; 2.2 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1582603840719_0002_02_000001 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 3128 3124 3124 3124 (java) 504 34 2359349248 83396 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -server -Xmx512m -Djava.io.tmpdir=/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1582603840719_0002/container_1582603840719_0002_02_000001/tmp -Dspark.yarn.app.container.log.dir=/home/ubuntu/server/hadoop-2.9.2/logs/userlogs/application_1582603840719_0002/container_1582603840719_0002_02_000001 org.apache.spark.deploy.yarn.ExecutorLauncher --arg ip-172-31-7-96.ec2.internal:43275 --properties-file /tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1582603840719_0002/container_1582603840719_0002_02_000001/__spark_conf__/__spark_conf__.properties
|- 3124 3122 3124 3124 (bash) 0 0 13635584 760 /bin/bash -c /usr/lib/jvm/java-8-openjdk-amd64/bin/java -server -Xmx512m -Djava.io.tmpdir=/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1582603840719_0002/container_1582603840719_0002_02_000001/tmp -Dspark.yarn.app.container.log.dir=/home/ubuntu/server/hadoop-2.9.2/logs/userlogs/application_1582603840719_0002/container_1582603840719_0002_02_000001 org.apache.spark.deploy.yarn.ExecutorLauncher --arg 'ip-172-31-7-96.ec2.internal:43275' --properties-file /tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1582603840719_0002/container_1582603840719_0002_02_000001/__spark_conf__/__spark_conf__.properties 1> /home/ubuntu/server/hadoop-2.9.2/logs/userlogs/application_1582603840719_0002/container_1582603840719_0002_02_000001/stdout 2> /home/ubuntu/server/hadoop-2.9.2/logs/userlogs/application_1582603840719_0002/container_1582603840719_0002_02_000001/stderr
[2020-02-25 13:00:11.651]Container killed on request. Exit code is 143
[2020-02-25 13:00:11.658]Container exited with a non-zero exit code 143.
For more detailed output, check the application tracking page: http://ec2-34-200-223-235.compute-1.amazonaws.com:8088/cluster/app/application_1582603840719_0002 Then click on links to logs of each attempt.
. Failing the application.
20/02/25 13:00:11 ERROR TransportClient: Failed to send RPC RPC 6867152665638655473 to /172.31.9.94:57526: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
20/02/25 13:00:11 ERROR YarnSchedulerBackend$YarnSchedulerEndpoint: Sending RequestExecutors(0,0,Map(),Set()) to AM was unsuccessful
java.io.IOException: Failed to send RPC RPC 6867152665638655473 to /172.31.9.94:57526: java.nio.channels.ClosedChannelException
at org.apache.spark.network.client.TransportClient$RpcChannelListener.handleFailure(TransportClient.java:362)
at org.apache.spark.network.client.TransportClient$StdChannelListener.operationComplete(TransportClient.java:339)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:122)
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:987)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:869)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1316)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:738)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:730)
at io.netty.channel.AbstractChannelHandlerContext.access$1900(AbstractChannelHandlerContext.java:38)
at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:1081)
at io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:1128)
at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:1070)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
20/02/25 13:00:11 ERROR Utils: Uncaught exception in thread YARN application state monitor
org.apache.spark.SparkException: Exception thrown in awaitResult:
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:226)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.requestTotalExecutors(CoarseGrainedSchedulerBackend.scala:574)
at org.apache.spark.scheduler.cluster.YarnSchedulerBackend.stop(YarnSchedulerBackend.scala:98)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.stop(YarnClientSchedulerBackend.scala:164)
at org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:653)
at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:2042)
at org.apache.spark.SparkContext$$anonfun$stop$6.apply$mcV$sp(SparkContext.scala:1949)
at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1340)
at org.apache.spark.SparkContext.stop(SparkContext.scala:1948)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend$MonitorThread.run(YarnClientSchedulerBackend.scala:121)
Caused by: java.io.IOException: Failed to send RPC RPC 6867152665638655473 to /172.31.9.94:57526: java.nio.channels.ClosedChannelException
at org.apache.spark.network.client.TransportClient$RpcChannelListener.handleFailure(TransportClient.java:362)
at org.apache.spark.network.client.TransportClient$StdChannelListener.operationComplete(TransportClient.java:339)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:122)
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:987)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:869)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1316)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:738)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:730)
at io.netty.channel.AbstractChannelHandlerContext.access$1900(AbstractChannelHandlerContext.java:38)
at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:1081)
at io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:1128)
at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:1070)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
20/02/25 13:00:12 ERROR SparkContext: Error initializing SparkContext.
java.lang.IllegalStateException: Spark context stopped while waiting for backend
at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:818)
at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:196)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:560)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:238)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
Traceback (most recent call last):
File "/home/ubuntu/server/spark-yarn.py", line 7, in <module>
sc = SparkContext(conf=conf)
File "/home/ubuntu/server/spark-2.4.4-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/context.py", line 136, in __init__
File "/home/ubuntu/server/spark-2.4.4-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/context.py", line 198, in _do_init
File "/home/ubuntu/server/spark-2.4.4-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/context.py", line 306, in _initialize_context
File "/home/ubuntu/server/spark-2.4.4-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1525, in __call__
File "/home/ubuntu/server/spark-2.4.4-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
: java.lang.IllegalStateException: Spark context stopped while waiting for backend
at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:818)
at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:196)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:560)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:238)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
If I change 'spark' to 'spark-client' as the master, it gives a slightly different error:
20/02/25 13:07:31 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/02/25 13:07:46 ERROR TransportClient: Failed to send RPC RPC 5381013595535555066 to /172.31.5.228:39748: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
20/02/25 13:07:46 ERROR YarnScheduler: Lost executor 1 on ip-172-31-5-228.ec2.internal: Slave lost
20/02/25 13:07:51 ERROR YarnClientSchedulerBackend: YARN application has exited unexpectedly with state FAILED! Check the YARN application logs for more details.
20/02/25 13:07:51 ERROR YarnClientSchedulerBackend: Diagnostics message: Application application_1582603840719_0003 failed 2 times due to AM Container for appattempt_1582603840719_0003_000002 exited with exitCode: -103
Failing this attempt.Diagnostics: [2020-02-25 13:07:51.067]Container [pid=3223,containerID=container_1582603840719_0003_02_000001] is running beyond virtual memory limits. Current usage: 320.8 MB of 1 GB physical memory used; 2.2 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1582603840719_0003_02_000001 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 3227 3223 3223 3223 (java) 489 32 2355855360 81352 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -server -Xmx512m -Djava.io.tmpdir=/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1582603840719_0003/container_1582603840719_0003_02_000001/tmp -Dspark.yarn.app.container.log.dir=/home/ubuntu/server/hadoop-2.9.2/logs/userlogs/application_1582603840719_0003/container_1582603840719_0003_02_000001 org.apache.spark.deploy.yarn.ExecutorLauncher --arg ip-172-31-7-96.ec2.internal:40963 --properties-file /tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1582603840719_0003/container_1582603840719_0003_02_000001/__spark_conf__/__spark_conf__.properties
|- 3223 3221 3223 3223 (bash) 0 0 13635584 767 /bin/bash -c /usr/lib/jvm/java-8-openjdk-amd64/bin/java -server -Xmx512m -Djava.io.tmpdir=/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1582603840719_0003/container_1582603840719_0003_02_000001/tmp -Dspark.yarn.app.container.log.dir=/home/ubuntu/server/hadoop-2.9.2/logs/userlogs/application_1582603840719_0003/container_1582603840719_0003_02_000001 org.apache.spark.deploy.yarn.ExecutorLauncher --arg 'ip-172-31-7-96.ec2.internal:40963' --properties-file /tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1582603840719_0003/container_1582603840719_0003_02_000001/__spark_conf__/__spark_conf__.properties 1> /home/ubuntu/server/hadoop-2.9.2/logs/userlogs/application_1582603840719_0003/container_1582603840719_0003_02_000001/stdout 2> /home/ubuntu/server/hadoop-2.9.2/logs/userlogs/application_1582603840719_0003/container_1582603840719_0003_02_000001/stderr
[2020-02-25 13:07:51.089]Container killed on request. Exit code is 143
[2020-02-25 13:07:51.090]Container exited with a non-zero exit code 143.
For more detailed output, check the application tracking page: http://ec2-34-200-223-235.compute-1.amazonaws.com:8088/cluster/app/application_1582603840719_0003 Then click on links to logs of each attempt.
. Failing the application.
20/02/25 13:07:51 ERROR SparkContext: Error initializing SparkContext.
java.lang.IllegalStateException: Spark context stopped while waiting for backend
at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:818)
at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:196)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:560)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:238)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
20/02/25 13:07:52 ERROR TransportClient: Failed to send RPC RPC 8397804982944513692 to /172.31.0.102:39468: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
20/02/25 13:07:52 ERROR YarnSchedulerBackend$YarnSchedulerEndpoint: Sending RequestExecutors(0,0,Map(),Set()) to AM was unsuccessful
java.io.IOException: Failed to send RPC RPC 8397804982944513692 to /172.31.0.102:39468: java.nio.channels.ClosedChannelException
at org.apache.spark.network.client.TransportClient$RpcChannelListener.handleFailure(TransportClient.java:362)
at org.apache.spark.network.client.TransportClient$StdChannelListener.operationComplete(TransportClient.java:339)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481)
at io.netty.util.concurrent.DefaultPromise.access$000(DefaultPromise.java:34)
at io.netty.util.concurrent.DefaultPromise$1.run(DefaultPromise.java:431)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
20/02/25 13:07:52 ERROR Utils: Uncaught exception in thread YARN application state monitor
org.apache.spark.SparkException: Exception thrown in awaitResult:
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:226)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.requestTotalExecutors(CoarseGrainedSchedulerBackend.scala:574)
at org.apache.spark.scheduler.cluster.YarnSchedulerBackend.stop(YarnSchedulerBackend.scala:98)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.stop(YarnClientSchedulerBackend.scala:164)
at org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:653)
at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:2042)
at org.apache.spark.SparkContext$$anonfun$stop$6.apply$mcV$sp(SparkContext.scala:1949)
at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1340)
at org.apache.spark.SparkContext.stop(SparkContext.scala:1948)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend$MonitorThread.run(YarnClientSchedulerBackend.scala:121)
Caused by: java.io.IOException: Failed to send RPC RPC 8397804982944513692 to /172.31.0.102:39468: java.nio.channels.ClosedChannelException
at org.apache.spark.network.client.TransportClient$RpcChannelListener.handleFailure(TransportClient.java:362)
at org.apache.spark.network.client.TransportClient$StdChannelListener.operationComplete(TransportClient.java:339)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481)
at io.netty.util.concurrent.DefaultPromise.access$000(DefaultPromise.java:34)
at io.netty.util.concurrent.DefaultPromise$1.run(DefaultPromise.java:431)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
Traceback (most recent call last):
File "/home/ubuntu/server/spark-yarn.py", line 7, in <module>
sc = SparkContext(conf=conf)
File "/home/ubuntu/server/spark-2.4.4-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/context.py", line 136, in __init__
File "/home/ubuntu/server/spark-2.4.4-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/context.py", line 198, in _do_init
File "/home/ubuntu/server/spark-2.4.4-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/context.py", line 306, in _initialize_context
File "/home/ubuntu/server/spark-2.4.4-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1525, in __call__
File "/home/ubuntu/server/spark-2.4.4-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
: java.lang.IllegalStateException: Spark context stopped while waiting for backend
at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:818)
at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:196)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:560)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:238)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
It mentions checking the logs at:
http://ec2-34-200-223-235.compute-1.amazonaws.com:8088/cluster/app/application_1582603840719_0003
But clicking on any of the log links on that page gives an error:
Firefox can’t establish a connection to the server at ip-172-31-0-102.ec2.internal:8042.
(That's probably unrelated.)
Grepping for warnings, I see the following:
2020-02-25 13:07:38,904 WARN org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The specific max attempts: 0 for application: 3 is invalid, because it is out of the range [1, 2]. Use the global max attempts instead.
2020-02-25 13:07:51,241 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=ubuntu OPERATION=Application Finished - Failed TARGET=RMAppManager RESULT=FAILURE DESCRIPTION=App failed with state: FAILED PERMISSIONS=Application application_1582603840719_0003 failed 2 times due to AM Container for appattempt_1582603840719_0003_000002 exited with exitCode: -103
2020-02-25 13:07:40,367 WARN org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Checkpoint done. New Image Size: 12086
No errors were generated.
Querying YARN from the command line shows me the jobs:
ubuntu@ip-172-31-7-96:~/server$ yarn application -list -appStates ALL
20/02/25 13:31:44 INFO client.RMProxy: Connecting to ResourceManager at ec2-34-200-223-235.compute-1.amazonaws.com/172.31.7.96:8032
Total number of applications (application-types: [], states: [NEW, NEW_SAVING, SUBMITTED, ACCEPTED, RUNNING, FINISHED, FAILED, KILLED] and tags: []):3
Application-Id Application-Name Application-Type User Queue State Final-State Progress Tracking-URL
application_1582603840719_0001 spark-yarn SPARK ubuntu default FINISHED UNDEFINED 100% N/A
But asking for the logs fails:
ubuntu@ip-172-31-7-96:~/server$ yarn logs -applicationId application_1582603840719_0001
20/02/25 13:32:48 INFO client.RMProxy: Connecting to ResourceManager at ec2-34-200-223-235.compute-1.amazonaws.com/172.31.7.96:8032
fs.AbstractFileSystem.ec2-34-200-223-235.compute-1.amazonaws.com.impl=null: No AbstractFileSystem configured for scheme: ec2-34-200-223-235.compute-1.amazonaws.com
Can not find any log file matching the pattern: [ALL] for the application: application_1582603840719_0001
Can not find the logs for the application: application_1582603840719_0001 with the appOwner: ubuntu
Again, if someone could direct me to the next troubleshooting steps, I'd really appreciate it. I've spent days on this and don't seem to be making progress.
Two things ended up solving this issue:
First, I added the following properties to the yarn-site.xml file on all nodes:
<property>
  <name>yarn.nodemanager.pmem-check-enabled</name>
  <value>false</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
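(For context on why the first change helps: YARN's virtual memory limit is physical memory multiplied by yarn.nodemanager.vmem-pmem-ratio, which defaults to 2.1; that is exactly the "2.2 GB of 2.1 GB virtual memory used" ceiling in the diagnostics above, so disabling the checks stops YARN from killing containers that cross it.)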
Next, I changed my spark-submit command to include the following options, giving the driver and executors more memory:
spark-submit --master yarn \
--deploy-mode client \
--driver-memory 6g \
--executor-memory 6g \
--executor-cores 2 \
--num-executors 10 \
my_app.py

Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher

I have submitted this job in YARN mode, but I get the following error.
I added the Spark jars from my local installation; the Spark jars can also be placed in a world-readable location on HDFS, which allows YARN to cache them on the nodes. I also added YARN_CONF_DIR and HADOOP_CONF_DIR to my .bashrc.
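For reference, pointing YARN at the jars explicitly is usually done in spark-defaults.conf with something like the following (the HDFS path here is hypothetical):
spark.yarn.jars hdfs:///spark/jars/*.jar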
ERROR:
Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher
9068548 [bioingine-management-service-akka.actor.default-dispatcher-14] INFO org.apache.spark.deploy.yarn.Client - Application report for application_1531990849146_0010 (state: FAILED)
9068548 [bioingine-management-service-akka.actor.default-dispatcher-14] INFO org.apache.spark.deploy.yarn.Client -
client token: N/A
diagnostics: Application application_1531990849146_0010 failed 2 times due to AM Container for appattempt_1531990849146_0010_000002 exited with exitCode: 1
Failing this attempt.Diagnostics: [2018-07-19 11:56:58.484]Exception from container-launch.
Container id: container_1531990849146_0010_02_000001
Exit code: 1
[2018-07-19 11:56:58.484]
[2018-07-19 11:56:58.486]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher
[2018-07-19 11:56:58.486]
[2018-07-19 11:56:58.486]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher
[2018-07-19 11:56:58.486]
For more detailed output, check the application tracking page: http://localhost:8088/cluster/app/application_1531990849146_0010 Then click on links to logs of each attempt.
. Failing the application.
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1532001413903
final status: FAILED
tracking URL: http://localhost:8088/cluster/app/application_1531990849146_0010
user: root
9068611 [bioingine-management-service-akka.actor.default-dispatcher-14] INFO org.apache.spark.deploy.yarn.Client - Deleted staging directory file:/root/.sparkStaging/application_1531990849146_0010
9068612 [bioingine-management-service-akka.actor.default-dispatcher-14] ERROR org.apache.spark.SparkContext - Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:85)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:173)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:509)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2509)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:909)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:901)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:901)
at com.bioingine.smash.management.services.SmashExtractorService.getSparkSession(SmashExtractorService.scala:79)
at com.bioingine.smash.management.services.SmashExtractorService.getFileHeaders(SmashExtractorService.scala:83)
at com.bioingine.smash.management.services.SmashService$$anonfun$getcolumnHeaders$1.apply(SmashService.scala:90)
at com.bioingine.smash.management.services.SmashService$$anonfun$getcolumnHeaders$1.apply(SmashService.scala:90)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:39)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:415)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
9068618 [bioingine-management-service-akka.actor.default-dispatcher-14] INFO o.s.jetty.server.AbstractConnector - Stopped Spark@2152c728{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
9068619 [bioingine-management-service-akka.actor.default-dispatcher-14] INFO org.apache.spark.ui.SparkUI - Stopped Spark web UI at http://localhost:4040
9068620 [dispatcher-event-loop-16] WARN o.a.s.s.c.YarnSchedulerBackend$YarnSchedulerEndpoint - Attempted to request executors before the AM has registered!
9068621 [bioingine-management-service-akka.actor.default-dispatcher-14] INFO o.a.s.s.c.YarnClientSchedulerBackend - Shutting down all executors
9068621 [dispatcher-event-loop-17] INFO o.a.s.s.c.YarnSchedulerBackend$YarnDriverEndpoint - Asking each executor to shut down
9068622 [bioingine-management-service-akka.actor.default-dispatcher-14] INFO o.a.s.s.c.SchedulerExtensionServices - Stopping SchedulerExtensionServices
(serviceOption=None,
services=List(),
started=false)
9068622 [bioingine-management-service-akka.actor.default-dispatcher-14] INFO o.a.s.s.c.YarnClientSchedulerBackend - Stopped
9068623 [dispatcher-event-loop-20] INFO o.a.s.MapOutputTrackerMasterEndpoint - MapOutputTrackerMasterEndpoint stopped!
9068624 [bioingine-management-service-akka.actor.default-dispatcher-14] ERROR org.apache.spark.util.Utils - Uncaught exception in thread bioingine-management-service-akka.actor.default-dispatcher-14
java.lang.NullPointerException: null
at org.apache.spark.network.shuffle.ExternalShuffleClient.close(ExternalShuffleClient.java:141)
at org.apache.spark.storage.BlockManager.stop(BlockManager.scala:1485)
at org.apache.spark.SparkEnv.stop(SparkEnv.scala:90)
at org.apache.spark.SparkContext$$anonfun$stop$11.apply$mcV$sp(SparkContext.scala:1937)
at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1317)
at org.apache.spark.SparkContext.stop(SparkContext.scala:1936)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:587)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2509)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:909)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:901)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:901)
at com.bioingine.smash.management.services.SmashExtractorService.getSparkSession(SmashExtractorService.scala:79)
at com.bioingine.smash.management.services.SmashExtractorService.getFileHeaders(SmashExtractorService.scala:83)
at com.bioingine.smash.management.services.SmashService$$anonfun$getcolumnHeaders$1.apply(SmashService.scala:90)
at com.bioingine.smash.management.services.SmashService$$anonfun$getcolumnHeaders$1.apply(SmashService.scala:90)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:39)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:415)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
9068624 [bioingine-management-service-akka.actor.default-dispatcher-14] INFO org.apache.spark.SparkContext - Successfully stopped SparkContext
9068627 [bioingine-management-service-akka.actor.default-dispatcher-15] ERROR akka.actor.ActorSystemImpl - Error during processing of request: 'Yarn application has already ended! It might have been killed or unable to launch application master.'. Completing with 500 Internal Server Error response. To change default exception handling behavior, provide a custom ExceptionHandler.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:85)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:173)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:509)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2509)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:909)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:901)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:901)
at com.bioingine.smash.management.services.SmashExtractorService.getSparkSession(SmashExtractorService.scala:79)
at com.bioingine.smash.management.services.SmashExtractorService.getFileHeaders(SmashExtractorService.scala:83)
at com.bioingine.smash.management.services.SmashService$$anonfun$getcolumnHeaders$1.apply(SmashService.scala:90)
at com.bioingine.smash.management.services.SmashService$$anonfun$getcolumnHeaders$1.apply(SmashService.scala:90)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:39)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:415)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
**SparkSession configuration**
val conf = new SparkConf()
  .setMaster("yarn")
  .setAppName("Test")
  .set("spark.executor.memory", "3g")
  .set("spark.ui.enabled", "true")
  .set("spark.driver.memory", "9g")
  .set("spark.default.parallelism", "10")
  .set("spark.executor.cores", "3")
  .set("spark.cores.max", "9")
  .set("spark.memory.offHeap.enabled", "true")
  .set("spark.memory.offHeap.size", "6g")
  .set("spark.yarn.am.memory", "2g")
  .set("spark.yarn.am.cores", "2")
  .set("spark.yarn.archive", "hdfs://localhost:9000/user/spark/share/lib/spark2-hdp-yarn-archive.tar.gz")
  .set("spark.yarn.jars", "hdfs://localhost:9000/user/spark/share/lib/spark-yarn_2.11-2.2.0.jar")
**We added the configuration below**
1. These entries are in $SPARK_HOME/conf/spark-defaults.conf
spark.driver.extraJavaOptions -Dhdp.version=2.9.0
spark.yarn.am.extraJavaOptions -Dhdp.version=2.9.0
log4j.rootCategory=WARN, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
2. yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle,spark_shuffle</value>
<description>shuffle service that needs to be set for Map Reduce to run</description>
</property>
<property>
<name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
<value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>
<property>
<name>yarn.nodemanager.pmem-check-enabled</name>
<value>false</value>
</property>
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
</property>
<property>
<name>yarn.application.classpath</name>
<value>/usr/share/hadoop/etc/hadoop,/usr/share/hadoop/*,/usr/share/hadoop/lib/*,/usr/share/hadoop/share/hadoop/common/*,/usr/share/hadoop/share/hadoop/common/lib/*,/usr/share/hadoop/share/hadoop/hdfs/*,/usr/share/hadoop/share/hadoop/hdfs/lib/*,/usr/share/hadoop/share/hadoop/mapreduce/*,/usr/share/hadoop/share/hadoop/mapreduce/lib/*,/usr/share/hadoop/share/hadoop/tools/lib/*,/usr/share/hadoop/share/hadoop/yarn/*,/usr/share/hadoop/share/hadoop/yarn/lib/*,/usr/share/spark/jars/spark-yarn_2.11-2.2.0.jar</value>
</property>
</configuration>
3. spark-env.sh
export HADOOP_CONF_DIR=/home/hadoop/hadoop/etc/hadoop
export SPARK_HOME=/home/hadoop/spark
SPARK_DIST_CLASSPATH="/usr/share/spark/jars/*"
4. .bashrc
export JAVA_HOME="/usr/lib/jvm/java-8-openjdk-amd64/"
export SBT_OPTS="-Xms16G -Xmx16G"
export HADOOP_INSTALL=/usr/share/hadoop
export HADOOP_CONF_DIR=/usr/share/hadoop/etc/hadoop/
export YARN_CONF_DIR=/usr/share/hadoop/etc/hadoop/
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export SPARK_CLASSPATH="/usr/share/spark/jars/*"
export SPARK_HOME="/usr/share/spark/"
export PATH=$PATH:$SPARK_HOME
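With settings spread across four files, it helps to confirm what the driver actually picks up. A quick check from spark-shell (a sketch that assumes the shell's predefined sc):
// Print the effective settings most relevant to the failure above.
sc.getConf.getAll
  .filter { case (k, _) => k.startsWith("spark.yarn") || k.startsWith("spark.memory") }
  .sortBy(_._1)
  .foreach { case (k, v) => println(s"$k = $v") }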

Spark Streaming: java.io.InvalidClassException: scala.concurrent.duration.Duration; local class incompatible: stream classdesc serialVersionUID

I am running a streaming job on a CDH cluster and getting an error. The CDH Spark version is 1.2.0-cdh5.3.8, but I need Spark 2.1.0, so I downloaded Apache Spark and built it myself (Spark version: 2.1.0-cdh5.3.8, Hadoop version: 2.5.0-cdh5.3.8).
The error message is below:
17/04/14 18:12:34 ERROR server.TransportRequestHandler: Error while invoking RpcHandler#receive() on RPC id 4724089633860239943
java.io.InvalidClassException: scala.concurrent.duration.Duration; local class incompatible: stream classdesc serialVersionUID = -7521802526148376080, local class serialVersionUID = -2941674837829752814
at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:617)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1622)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1622)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:108)
at org.apache.spark.rpc.netty.NettyRpcEnv$$anonfun$deserialize$1$$anonfun$apply$1.apply(NettyRpcEnv.scala:259)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
at org.apache.spark.rpc.netty.NettyRpcEnv.deserialize(NettyRpcEnv.scala:308)
at org.apache.spark.rpc.netty.NettyRpcEnv$$anonfun$deserialize$1.apply(NettyRpcEnv.scala:258)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
at org.apache.spark.rpc.netty.NettyRpcEnv.deserialize(NettyRpcEnv.scala:257)
at org.apache.spark.rpc.netty.NettyRpcHandler.internalReceive(NettyRpcEnv.scala:582)
at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:567)
at org.apache.spark.network.server.TransportRequestHandler.processRpcRequest(TransportRequestHandler.java:159)
at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:107)
at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:119)
at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:367)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:353)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:346)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:367)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:353)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:346)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:367)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:353)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:346)
at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:85)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:367)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:353)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:346)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:367)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:353)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:652)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:575)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:489)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:451)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:140)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Thread.java:745)
17/04/14 18:12:35 INFO impl.AMRMClientImpl: Received new token for : ztdm006:8041
17/04/14 18:12:35 INFO yarn.YarnAllocator: Received 1 containers from YARN, launching executors on 0 of them.
17/04/14 18:12:35 INFO yarn.YarnAllocator: Completed container container_1488960736410_229415_01_000004 on host: ztdm009 (state: COMPLETE, exit status: 1)
17/04/14 18:12:35 WARN yarn.YarnAllocator: Container marked as failed: container_1488960736410_229415_01_000004 on host: ztdm009. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1488960736410_229415_01_000004
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:543)
at org.apache.hadoop.util.Shell.run(Shell.java:460)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:707)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:197)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
17/04/14 18:12:35 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Container marked as failed: container_1488960736410_229415_01_000004 on host: ztdm009. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1488960736410_229415_01_000004
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:543)
at org.apache.hadoop.util.Shell.run(Shell.java:460)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:707)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:197)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
17/04/14 18:12:35 INFO yarn.YarnAllocator: Completed container container_1488960736410_229415_01_000005 on host: ztdm010 (state: COMPLETE, exit status: 1)
17/04/14 18:12:35 WARN yarn.YarnAllocator: Container marked as failed: container_1488960736410_229415_01_000005 on host: ztdm010. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1488960736410_229415_01_000005
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:543)
at org.apache.hadoop.util.Shell.run(Shell.java:460)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:707)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:197)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
17/04/14 18:12:35 INFO storage.BlockManagerMaster: Removal of executor 3 requested
17/04/14 18:12:35 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Container marked as failed: container_1488960736410_229415_01_000005 on host: ztdm010. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1488960736410_229415_01_000005
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:543)
at org.apache.hadoop.util.Shell.run(Shell.java:460)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:707)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:197)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
17/04/14 18:12:35 INFO storage.BlockManagerMasterEndpoint: Trying to remove executor 3 from BlockManagerMaster.
17/04/14 18:12:35 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Asked to remove non-existent executor 3
17/04/14 18:12:35 INFO storage.BlockManagerMasterEndpoint: Trying to remove executor 4 from BlockManagerMaster.
17/04/14 18:12:35 INFO storage.BlockManagerMaster: Removal of executor 4 requested
17/04/14 18:12:35 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Asked to remove non-existent executor 4
17/04/14 18:12:38 INFO yarn.ApplicationMaster: Final app status: FAILED, exitCode: 11, (reason: Max number of executor failures (3) reached)
17/04/14 18:12:38 INFO storage.DiskBlockManager: Shutdown hook called
17/04/14 18:12:38 INFO util.ShutdownHookManager: Shutdown hook called
17/04/14 18:12:38 INFO util.ShutdownHookManager: Deleting directory /mnt/disk2/yarn/nm/usercache/efinance/appcache/application_1488960736410_229415/spark-1f1b6198-961b-418d-9274-5f35f8e67829
17/04/14 18:12:38 INFO util.ShutdownHookManager: Deleting directory /mnt/disk6/yarn/nm/usercache/efinance/appcache/application_1488960736410_229415/spark-fdce09f8-8677-45b2-9ce4-ac7134ab63b0
17/04/14 18:12:38 INFO util.ShutdownHookManager: Deleting directory /mnt/disk4/yarn/nm/usercache/efinance/appcache/application_1488960736410_229415/spark-10c52d9a-a76b-465f-82d5-42eba9c89c86
17/04/14 18:12:38 INFO util.ShutdownHookManager: Deleting directory /mnt/disk5/yarn/nm/usercache/efinance/appcache/application_1488960736410_229415/spark-4c492882-813a-4c2b-a041-ae69aba7ce00
17/04/14 18:12:38 INFO util.ShutdownHookManager: Deleting directory /mnt/disk3/yarn/nm/usercache/efinance/appcache/application_1488960736410_229415/spark-1d9e6c60-fc33-45c3-8552-55cbe4266931
17/04/14 18:12:38 INFO util.ShutdownHookManager: Deleting directory /mnt/disk1/yarn/nm/usercache/efinance/appcache/application_1488960736410_229415/spark-b6aa5cf9-042a-4804-9472-9bcddde2814e/userFiles-b259c206-d618-4d54-8630-824d955d0be4
17/04/14 18:12:38 INFO util.ShutdownHookManager: Deleting directory /mnt/disk1/yarn/nm/usercache/efinance/appcache/application_1488960736410_229415/spark-b6aa5cf9-042a-4804-9472-9bcddde2814e
The reason might be that the Scala classes bundled into my application jar conflicted with the Scala jar on the cluster. When I deleted the scala directory from the jar built by my compiled Maven project, the issue was fixed.
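A quick way to confirm that kind of Scala-version mismatch before rebuilding is to compare the runtime each side loads (a diagnostic sketch, not from the original job; it assumes a job that still starts and a live SparkContext named sc):
// Compare the Scala runtime the driver loaded with the one each executor
// loads from its own classpath; any difference can produce
// serialVersionUID conflicts like the one above.
println(s"driver Scala:   ${scala.util.Properties.versionString}")
sc.parallelize(1 to 100, 10)
  .map(_ => scala.util.Properties.versionString)
  .distinct()
  .collect()
  .foreach(v => println(s"executor Scala: $v"))
Marking scala-library as provided in the build is the usual way to keep those classes out of the assembled jar in the first place, rather than deleting them by hand.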

Spark Jobs crashing with ExitCodeException exitCode=15

I am running a very long Spark job, which crashes with the following error:
Application application_1456200816465_347125 failed 2 times due to AM Container for appattempt_1456200816465_347125_000002 exited with exitCode: 15
For more detailed output, check application tracking page:http://foo.com:8088/proxy/application_1456200816465_347125/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_e24_1456200816465_347125_02_000001
Exit code: 15
Stack trace: ExitCodeException exitCode=15:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 15
Failing this attempt. Failing the application.
When I click on the link provided in the error message above, it shows me:
java.io.IOException: Target log file already exists (hdfs://nameservice1/user/spark/applicationHistory/application_1456200816465_347125)
at org.apache.spark.scheduler.EventLoggingListener.stop(EventLoggingListener.scala:201)
at org.apache.spark.SparkContext$$anonfun$stop$5.apply(SparkContext.scala:1394)
at org.apache.spark.SparkContext$$anonfun$stop$5.apply(SparkContext.scala:1394)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.SparkContext.stop(SparkContext.scala:1394)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$3.run(ApplicationMaster.scala:107)
at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
If I restart the job, it works fine for an hour or so and then fails again with this error. Note that hdfs://nameservice1/user/spark/applicationHistory/application_1456200816465_347125 is a system-generated path; this folder has nothing to do with my application.
I searched the internet, and many people got this error because they were setting the master to local in their code (see the contrast sketch after the command below). This is how I initialize my Spark context:
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

val conf = new SparkConf().setAppName("Foo")
val context = new SparkContext(conf)
context.hadoopConfiguration.set("mapreduce.input.fileinputformat.input.dir.recursive", "true")
val sc = new SQLContext(context)
and I run my Spark job like this:
sudo -u web nohup spark-submit --class com.abhi.Foo --master yarn-cluster \
Foo-assembly-1.0.jar "2015-03-18" "2015-03-30" > fn_output.txt 2> fn_error.txt &
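For contrast, a minimal sketch of the anti-pattern those reports describe (hard-coding the master), which the initialization above avoids; a master set programmatically takes precedence over the --master flag passed to spark-submit:
import org.apache.spark.{SparkConf, SparkContext}

// Anti-pattern (illustrative only): setMaster here would override
// --master yarn-cluster and run the driver locally instead.
val badConf = new SparkConf().setAppName("Foo").setMaster("local[*]")
val badContext = new SparkContext(badConf)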

Resources