Spark: Error writing stream to file /XXX/stderr java.io.IOException: Stream closed

My code is as follows:
import org.apache.spark.{SparkConf, SparkContext}

object WordCount extends App {
  val conf = new SparkConf().setAppName("WordCount").setMaster("spark://sparkmaster:7077")
  val sc = new SparkContext(conf)
  sc.addJar("/home/found/FromWindows/testsSpark/out/artifacts/untitled_jar/untitled.jar")
  val file = sc.textFile("hdfs://192.168.1.101:9000/Texts.txt")
  val count = file.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
  val res = count.collect()
  res.foreach(println(_))
}
When I run it in local mode, everything is OK. However, when I run it on the cluster, it crashes. I got the following error messages:
1. Error message from the IntelliJ console:
16/04/24 19:17:34 WARN TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1, 192.168.1.101): java.lang.ClassNotFoundException: org.klordy.test.WordCount$$anonfun$2
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
...
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
16/04/24 19:17:34 INFO TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0) on executor 192.168.1.101: java.lang.ClassNotFoundException (org.klordy.test.WordCount$$anonfun$2) [duplicate 1]
16/04/24 19:17:34 INFO TaskSetManager: Starting task 0.1 in stage 0.0 (TID 2, 192.168.1.101, ANY, 2133 bytes)
16/04/24 19:17:34 INFO TaskSetManager: Starting task 1.1 in stage 0.0 (TID 3, 192.168.1.101, ANY, 2133 bytes)
16/04/24 19:17:34 INFO TaskSetManager: Lost task 1.1 in stage 0.0 (TID 3) on executor 192.168.1.101: java.lang.ClassNotFoundException (org.klordy.test.WordCount$$anonfun$2) [duplicate 2]
16/04/24 19:17:34 INFO TaskSetManager: Starting task 1.2 in stage 0.0 (TID 4, 192.168.1.101, ANY, 2133 bytes)
16/04/24 19:17:34 ERROR TaskSetManager: Task 1 in stage 0.0 failed 4 times; aborting job
16/04/24 19:17:34 INFO TaskSchedulerImpl: Cancelling stage 0
16/04/24 19:17:34 INFO TaskSchedulerImpl: Stage 0 was cancelled
16/04/24 19:17:34 INFO DAGScheduler: ShuffleMapStage 0 (map at WordCount.scala:17) failed in 2.904 s
16/04/24 19:17:34 INFO DAGScheduler: Job 0 failed: collect at WordCount.scala:18, took 3.103414 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 0.0 failed 4 times, most recent failure: Lost task 1.3 in stage 0.0 (TID 6, 192.168.1.101): java.lang.ClassNotFoundException: org.klordy.test.WordCount$$anonfun$2
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
...
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
Caused by: java.lang.ClassNotFoundException: org.klordy.test.WordCount$$anonfun$2
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
...
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
16/04/24 19:17:34 INFO SparkContext: Invoking stop() from shutdown hook
16/04/24 19:17:34 INFO TaskSetManager: Lost task 0.3 in stage 0.0 (TID 7) on executor 192.168.1.101: java.lang.ClassNotFoundException (org.klordy.test.WordCount$$anonfun$2) [duplicate 7]
16/04/24 19:17:34 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
16/04/24 19:17:34 INFO SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor#192.168.1.117:45350/user/Executor#2034500302]) with ID 0
16/04/24 19:17:34 INFO SparkUI: Stopped Spark web UI at http://192.168.1.101:4040
16/04/24 19:17:34 INFO DAGScheduler: Stopping DAGScheduler
16/04/24 19:17:34 INFO SparkDeploySchedulerBackend: Shutting down all executors
16/04/24 19:17:34 INFO SparkDeploySchedulerBackend: Asking each executor to shut down
16/04/24 19:17:34 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
2. Messages from the Spark worker logs:
16/04/24 19:17:27 INFO Worker: Asked to launch executor app-20160424191727-0005/1 for WordCount
16/04/24 19:17:27 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] handled message (1.327376 ms) AkkaMessage(LaunchExecutor(spark://sparkworker:7077,app-20160424191727-0005,1,ApplicationDescription(WordCount),2,1024),false) from Actor[akka://sparkWorker/deadLetters]
16/04/24 19:17:27 INFO SecurityManager: Changing view acls to: root
16/04/24 19:17:27 INFO SecurityManager: Changing modify acls to: root
16/04/24 19:17:27 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/04/24 19:17:27 DEBUG SSLOptions: No SSL protocol specified
16/04/24 19:17:27 DEBUG SSLOptions: No SSL protocol specified
16/04/24 19:17:27 DEBUG SSLOptions: No SSL protocol specified
16/04/24 19:17:27 DEBUG SecurityManager: SSLConfiguration for file server: SSLOptions{enabled=false, keyStore=None, keyStorePassword=None, trustStore=None, trustStorePassword=None, protocol=None, enabledAlgorithms=Set()}
16/04/24 19:17:27 DEBUG SecurityManager: SSLConfiguration for Akka: SSLOptions{enabled=false, keyStore=None, keyStorePassword=None, trustStore=None, trustStorePassword=None, protocol=None, enabledAlgorithms=Set()}
16/04/24 19:17:27 INFO ExecutorRunner: Launch command: "/usr/java/jdk1.7.0_51/bin/java" "-cp" "/usr/local/Spark/spark-1.5.1-bin-hadoop2.4/sbin/../conf/:/usr/local/Spark/spark-1.5.1-bin-hadoop2.4/lib/spark-assembly-1.5.1-hadoop2.4.0.jar:/usr/local/Spark/spark-1.5.1-bin-hadoop2.4/lib/datanucleus-api-jdo-3.2.6.jar:/usr/local/Spark/spark-1.5.1-bin-hadoop2.4/lib/datanucleus-core-3.2.10.jar:/usr/local/Spark/spark-1.5.1-bin-hadoop2.4/lib/datanucleus-rdbms-3.2.9.jar:/usr/local/Spark/hadoop-2.5.2/etc/hadoop/" "-Xms1024M" "-Xmx1024M" "-Dspark.driver.port=49473" "-XX:MaxPermSize=256m" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "--driver-url" "akka.tcp://sparkDriver#192.168.1.101:49473/user/CoarseGrainedScheduler" "--executor-id" "1" "--hostname" "192.168.1.101" "--cores" "2" "--app-id" "app-20160424191727-0005" "--worker-url" "akka.tcp://sparkWorker#192.168.1.101:49838/user/Worker"
16/04/24 19:17:28 DEBUG FileAppender: Started appending thread
16/04/24 19:17:28 DEBUG FileAppender: Opened file /usr/local/Spark/spark-1.5.1-bin-hadoop2.4/work/app-20160424191727-0005/1/stdout
16/04/24 19:17:28 DEBUG FileAppender: Started appending thread
16/04/24 19:17:28 DEBUG FileAppender: Opened file /usr/local/Spark/spark-1.5.1-bin-hadoop2.4/work/app-20160424191727-0005/1/stderr
16/04/24 19:17:34 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: Received RPC message: AkkaMessage(KillExecutor(spark://sparkworker:7077,app-20160424191727-0005,1),false)
16/04/24 19:17:34 INFO Worker: Asked to kill executor app-20160424191727-0005/1
16/04/24 19:17:34 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] handled message (0.172782 ms) AkkaMessage(KillExecutor(spark://sparkworker:7077,app-20160424191727-0005,1),false) from Actor[akka://sparkWorker/deadLetters]
16/04/24 19:17:34 INFO ExecutorRunner: Runner thread for executor app-20160424191727-0005/1 interrupted
16/04/24 19:17:34 INFO ExecutorRunner: Killing process!
16/04/24 19:17:34 ERROR FileAppender: Error writing stream to file /usr/local/Spark/spark-1.5.1-bin-hadoop2.4/work/app-20160424191727-0005/1/stderr
java.io.IOException: Stream closed
at java.io.BufferedInputStream.getBufIfOpen(BufferedInputStream.java:162)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:272)
at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
at java.io.FilterInputStream.read(FilterInputStream.java:107)
at org.apache.spark.util.logging.FileAppender.appendStreamToFile(FileAppender.scala:70)
at org.apache.spark.util.logging.FileAppender$$anon$1$$anonfun$run$1.apply$mcV$sp(FileAppender.scala:39)
at org.apache.spark.util.logging.FileAppender$$anon$1$$anonfun$run$1.apply(FileAppender.scala:39)
at org.apache.spark.util.logging.FileAppender$$anon$1$$anonfun$run$1.apply(FileAppender.scala:39)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1699)
at org.apache.spark.util.logging.FileAppender$$anon$1.run(FileAppender.scala:38)
16/04/24 19:17:34 DEBUG FileAppender: Closed file /usr/local/Spark/spark-1.5.1-bin-hadoop2.4/work/app-20160424191727-0005/1/stderr
16/04/24 19:17:34 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] received message AkkaMessage(ApplicationFinished(app-20160424191727-0005),false) from Actor[akka://sparkWorker/deadLetters]
16/04/24 19:17:34 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: Received RPC message: AkkaMessage(ApplicationFinished(app-20160424191727-0005),false)
16/04/24 19:17:34 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] handled message (0.186628 ms) AkkaMessage(ApplicationFinished(app-20160424191727-0005),false) from Actor[akka://sparkWorker/deadLetters]
16/04/24 19:17:35 DEBUG FileAppender: Closed file /usr/local/Spark/spark-1.5.1-bin-hadoop2.4/work/app-20160424191727-0005/1/stdout
16/04/24 19:17:35 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] received message AkkaMessage(ExecutorStateChanged(app-20160424191727-0005,1,KILLED,None,Some(143)),false) from Actor[akka://sparkWorker/deadLetters]
16/04/24 19:17:35 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: Received RPC message: AkkaMessage(ExecutorStateChanged(app-20160424191727-0005,1,KILLED,None,Some(143)),false)
16/04/24 19:17:35 INFO Worker: Executor app-20160424191727-0005/1 finished with state KILLED exitStatus 143
16/04/24 19:17:35 INFO Worker: Cleaning up local directories for application app-20160424191727-0005
I'm really puzzled by this. Does anyone have a good idea for solving it?
My cluster environment is: Spark 1.5.1 / Hadoop 2.5.2 / Scala 2.10.4 / JDK 1.7.0_51.

My guess is that sc.addJar("/home/found/FromWindows/testsSpark/out/artifacts/untitled_jar/untitled.jar") fails on your executors, because that path/jar does not exist on the executor nodes. Try putting the jar into HDFS and using an HDFS path to specify it, so that your executors have access to it.
Or, even better, use the --jars command-line option of spark-submit and specify your jar there; then you don't have to change your code and recompile whenever the jar changes.
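For illustration only (the class name is taken from the error log, the paths from the question, and the HDFS location is just an example), the two options could look roughly like this:
Option 1 - keep sc.addJar, but point it at a location every executor can read, e.g. HDFS:
sc.addJar("hdfs://192.168.1.101:9000/jars/untitled.jar")
Option 2 - drop sc.addJar and let spark-submit ship the application jar itself (any extra dependency jars would go in --jars):
spark-submit \
  --master spark://sparkmaster:7077 \
  --class org.klordy.test.WordCount \
  /home/found/FromWindows/testsSpark/out/artifacts/untitled_jar/untitled.jar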

The problem has been solved. The main cause was that my project had pulled in two versions of Scala (one extra version downloaded by sbt). Thanks to everyone who helped me solve this problem.
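For anyone hitting the same error, a minimal build.sbt sketch (an assumption of what the fix looks like, not the asker's actual file) that pins a single Scala version matching the cluster and marks Spark as provided, so that no second Scala or Spark copy ends up in the application jar:
scalaVersion := "2.10.4"
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.5.1" % "provided"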

Related

Permission denied error when setting up local Spark instance and running pyspark

I am setting up a local Spark instance on Windows to use with PySpark as described in this guide (but with spark-3.0.0 / hadoop 2.7 instead): https://phoenixnap.com/kb/install-spark-on-windows-10.
I can start up Spark with:
C:\Spark\spark-3.0.0-bin-hadoop2.7\bin>spark-shell.cmd
and connect to it with http://localhost:4040/ in my browser (I see the Spark GUI).
But when I run the Python PySpark example with
C:\Spark\spark-3.0.0-bin-hadoop2.7\examples>run-example SparkPi
it throws a Permission Denied error, as in this trace:
21/03/08 10:51:03 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
21/03/08 10:51:04 INFO SparkContext: Running Spark version 3.0.0
21/03/08 10:51:04 INFO ResourceUtils: ==============================================================
21/03/08 10:51:04 INFO ResourceUtils: Resources for spark.driver:
21/03/08 10:51:04 INFO ResourceUtils: ==============================================================
21/03/08 10:51:04 INFO SparkContext: Submitted application: Spark Pi
21/03/08 10:51:04 INFO SecurityManager: Changing view acls to: #####
21/03/08 10:51:04 INFO SecurityManager: Changing modify acls to: #####
21/03/08 10:51:04 INFO SecurityManager: Changing view acls groups to:
21/03/08 10:51:04 INFO SecurityManager: Changing modify acls groups to:
21/03/08 10:51:04 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(#####); groups with view permissions: Set(); users with modify permissions: Set(#####); groups with modify permissions: Set()
21/03/08 10:51:05 INFO Utils: Successfully started service 'sparkDriver' on port 63213.
21/03/08 10:51:05 INFO SparkEnv: Registering MapOutputTracker
21/03/08 10:51:05 INFO SparkEnv: Registering BlockManagerMaster
21/03/08 10:51:05 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
21/03/08 10:51:05 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
21/03/08 10:51:05 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
21/03/08 10:51:05 INFO DiskBlockManager: Created local directory at C:\Users\#####\AppData\Local\Temp\blockmgr-dce03954-27a7-484d-8e54-f552b21433f7
21/03/08 10:51:05 INFO MemoryStore: MemoryStore started with capacity 366.3 MiB
21/03/08 10:51:05 INFO SparkEnv: Registering OutputCommitCoordinator
21/03/08 10:51:05 INFO Utils: Successfully started service 'SparkUI' on port 4040.
21/03/08 10:51:05 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://WORKSTATION.DOMAIN.EXT:4040
21/03/08 10:51:05 INFO SparkContext: Added JAR file:///C:/Spark/spark-3.0.0-bin-hadoop2.7/examples/jars/scopt_2.12-3.7.1.jar at spark://WORKSTATION.DOMAIN.EXT:63213/jars/scopt_2.12-3.7.1.jar with timestamp 1615197065578
21/03/08 10:51:05 INFO SparkContext: Added JAR file:///C:/Spark/spark-3.0.0-bin-hadoop2.7/examples/jars/spark-examples_2.12-3.0.0.jar at spark://WORKSTATION.DOMAIN.EXT:63213/jars/spark-examples_2.12-3.0.0.jar with timestamp 1615197065579
21/03/08 10:51:05 INFO Executor: Starting executor ID driver on host WORKSTATION.DOMAIN.EXT
21/03/08 10:51:05 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 63260.
21/03/08 10:51:05 INFO NettyBlockTransferService: Server created on WORKSTATION.DOMAIN.EXT:63260
21/03/08 10:51:05 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
21/03/08 10:51:05 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, NLLR4000250910.solon.prd, 63260, None)
21/03/08 10:51:05 INFO BlockManagerMasterEndpoint: Registering block manager NLLR4000250910.solon.prd:63260 with 366.3 MiB RAM, BlockManagerId(driver, WORKSTATION.DOMAIN.EXT, 63260, None)
21/03/08 10:51:05 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, NLLR4000250910.solon.prd, 63260, None)
21/03/08 10:51:05 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, NLLR4000250910.solon.prd, 63260, None)
21/03/08 10:51:06 INFO SparkContext: Starting job: reduce at SparkPi.scala:38
21/03/08 10:51:06 INFO DAGScheduler: Got job 0 (reduce at SparkPi.scala:38) with 2 output partitions
21/03/08 10:51:06 INFO DAGScheduler: Final stage: ResultStage 0 (reduce at SparkPi.scala:38)
21/03/08 10:51:06 INFO DAGScheduler: Parents of final stage: List()
21/03/08 10:51:06 INFO DAGScheduler: Missing parents: List()
21/03/08 10:51:06 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34), which has no missing parents
21/03/08 10:51:06 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 3.1 KiB, free 366.3 MiB)
21/03/08 10:51:06 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1816.0 B, free 366.3 MiB)
21/03/08 10:51:06 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on WORKSTATION.DOMAIN.EXT:63260 (size: 1816.0 B, free: 366.3 MiB)
21/03/08 10:51:06 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1200
21/03/08 10:51:06 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34) (first 15 tasks are for partitions Vector(0, 1))
21/03/08 10:51:06 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
21/03/08 10:51:06 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, WORKSTATION.DOMAIN.EXT, executor driver, partition 0, PROCESS_LOCAL, 7393 bytes)
21/03/08 10:51:06 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, WORKSTATION.DOMAIN.EXT, executor driver, partition 1, PROCESS_LOCAL, 7393 bytes)
21/03/08 10:51:06 INFO Executor: Running task 1.0 in stage 0.0 (TID 1)
21/03/08 10:51:06 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
21/03/08 10:51:06 INFO Executor: Fetching spark://WORKSTATION.DOMAIN.EXT:63213/jars/spark-examples_2.12-3.0.0.jar with timestamp 1615197065579
21/03/08 10:51:06 ERROR Utils: Aborting task
java.io.IOException: Failed to connect to WORKSTATION.DOMAIN.EXT/192.168.#.#:63213
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:253)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:195)
at org.apache.spark.rpc.netty.NettyRpcEnv.downloadClient(NettyRpcEnv.scala:392)
at org.apache.spark.rpc.netty.NettyRpcEnv.$anonfun$openChannel$4(NettyRpcEnv.scala:360)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1411)
at org.apache.spark.rpc.netty.NettyRpcEnv.openChannel(NettyRpcEnv.scala:359)
at org.apache.spark.util.Utils$.doFetchFile(Utils.scala:719)
at org.apache.spark.util.Utils$.fetchFile(Utils.scala:535)
at org.apache.spark.executor.Executor.$anonfun$updateDependencies$7(Executor.scala:869)
at org.apache.spark.executor.Executor.$anonfun$updateDependencies$7$adapted(Executor.scala:860)
at scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:877)
at scala.collection.mutable.HashMap.$anonfun$foreach$1(HashMap.scala:149)
at scala.collection.mutable.HashTable.foreachEntry(HashTable.scala:237)
at scala.collection.mutable.HashTable.foreachEntry$(HashTable.scala:230)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:44)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:149)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:876)
at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$updateDependencies(Executor.scala:860)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:404)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: io.netty.channel.AbstractChannel$AnnotatedSocketException: Permission denied: no further information: WORKSTATION.DOMAIN.EXT/192.168.#.#:63213
Caused by: java.net.SocketException: Permission denied: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:702)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Unknown Source)
[snip]
When I run it on a different machine with a seemingly identical configuration, it works fine; there I get this trace at the point where the exception is thrown above:
[snip]
21/03/08 08:00:22 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
21/03/08 08:00:22 INFO Executor: Running task 1.0 in stage 0.0 (TID 1)
21/03/08 08:00:22 INFO Executor: Fetching spark://WORKSTATION.DOMAIN.EXT:63646/jars/spark-examples_2.12-3.0.0.jar with timestamp 1615186820489
21/03/08 08:00:22 INFO TransportClientFactory: Successfully created connection to WORKSTATION.DOMAIN.EXT/10.121.#.#:63646 after 86 ms (0 ms spent in bootstraps)
21/03/08 08:00:22 INFO Utils: Fetching spark://WORKSTATION.DOMAIN.EXT:63646/jars/spark-examples_2.12-3.0.0.jar to C:\Users\#####\AppData\Local\Temp\spark-54a13d9f-9064-4f34-ba81-af49b18d9a0c\userFiles-24c3eabc-02a4-4aca-8abb-424431c6442f\fetchFileTemp5258763437798623210.tmp
21/03/08 08:00:24 INFO Executor: Adding file:/C:/Users/#####/AppData/Local/Temp/spark-54a13d9f-9064-4f34-ba81-af49b18d9a0c/userFiles-24c3eabc-02a4-4aca-8abb-424431c6442f/spark-examples_2.12-3.0.0.jar to class loader
[snip]
At first it seemed like a firewall issue, but adding the executing java.exe as an exception to the firewall didn't solve it.
Does anyone know what I should try next to get this issue resolved?
Finally I was able to solve it by setting SPARK_LOCAL_IP to localhost in my environment variables: go to your Windows environment variables and set SPARK_LOCAL_IP=localhost.
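For example, from a Command Prompt you can set it for the current session, and with setx (my assumption of the persistent equivalent; re-open the prompt afterwards) for future sessions:
set SPARK_LOCAL_IP=localhost
setx SPARK_LOCAL_IP localhost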

FileNotFoundException on submitting Spark Jobs to remote

I've created an environment with 3 Docker containers: one for Airflow, using the puckel/docker-airflow image with Spark and Hadoop additionally installed. The other two containers act as the Spark master and worker (created from the gettyimages/spark Docker image). All 3 containers are connected via a bridge network, so they are all able to communicate with each other.
What I'm trying to do next is submit a Spark job from the Airflow container to the Spark cluster (master).
As an initial example, I'm using the wordcount sample script. I created a sample.txt file in the Airflow container at the path /usr/local/airflow/sample.txt. I've bashed into the Airflow container and I'm using the command below to run wordcount.py on the Spark master, whose IP I found by inspecting the bridge network.
spark-submit --master spark://ipaddress:7077 --files usr/local/airflow/sample.txt /opt/spark-2.4.1/examples/src/main/python/wordcount.py sample.txt
After submitting the script, I can see from the logs that a connection has been established with the master (from the Airflow container) and that the file specified by --files has been copied to the master and worker, but then it just errors out with:
java.io.FileNotFoundException: File file:/usr/local/airflow/sample.txt does not exist
As per my understanding (which could be wrong), when we specify files to copy using --files, we can access them directly via the file name (sample.txt in my case). So what I'm trying to figure out is: if the job has been submitted and the file has been copied, why is it still searching in the location file:/usr/local/airflow/sample.txt? How do I make it refer to the correct path?
I apologize, as this question has been asked a couple of times before, but I've read all the related questions on Stack Overflow and I'm still unable to resolve this. I'd really appreciate your help.
Thanks.
The full log is below:
user#machine:/usr/local/airflow# spark-submit --master spark://172.22.0.2:7077 --files sample.txt /opt/spark-2.4.1/examples/src/main/python/wordcount.py ./sample.txt
20/07/25 03:23:34 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/07/25 03:23:35 INFO SparkContext: Running Spark version 2.4.1
20/07/25 03:23:35 INFO SparkContext: Submitted application: PythonWordCount
20/07/25 03:23:35 INFO SecurityManager: Changing view acls to: root
20/07/25 03:23:35 INFO SecurityManager: Changing modify acls to: root
20/07/25 03:23:35 INFO SecurityManager: Changing view acls groups to:
20/07/25 03:23:35 INFO SecurityManager: Changing modify acls groups to:
20/07/25 03:23:35 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
20/07/25 03:23:35 INFO Utils: Successfully started service 'sparkDriver' on port 33457.
20/07/25 03:23:35 INFO SparkEnv: Registering MapOutputTracker
20/07/25 03:23:36 INFO SparkEnv: Registering BlockManagerMaster
20/07/25 03:23:36 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/07/25 03:23:36 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/07/25 03:23:36 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-dd1957de-6907-484d-a3d8-2b3b88e0c7ca
20/07/25 03:23:36 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
20/07/25 03:23:36 INFO SparkEnv: Registering OutputCommitCoordinator
20/07/25 03:23:36 INFO Utils: Successfully started service 'SparkUI' on port 4040.
20/07/25 03:23:36 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://0508a77fcaad:4040
20/07/25 03:23:37 INFO SparkContext: Added file file:///usr/local/airflow/sample.txt at spark://0508a77fcaad:33457/files/sample.txt with timestamp 1595647417081
20/07/25 03:23:37 INFO Utils: Copying /usr/local/airflow/sample.txt to /tmp/spark-f9dfe6ee-22d7-4747-beab-9450fc1afce0/userFiles-74f8cfe4-8a19-4d2e-8fa1-1f0bd1f0ef12/sample.txt
20/07/25 03:23:37 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://172.22.0.2:7077...
20/07/25 03:23:37 INFO TransportClientFactory: Successfully created connection to /172.22.0.2:7077 after 32 ms (0 ms spent in bootstraps)
20/07/25 03:23:38 INFO StandaloneSchedulerBackend: Connected to Spark cluster with app ID app-20200725032338-0003
20/07/25 03:23:38 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 45057.
20/07/25 03:23:38 INFO NettyBlockTransferService: Server created on 0508a77fcaad:45057
20/07/25 03:23:38 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
20/07/25 03:23:38 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20200725032338-0003/0 on worker-20200725025003-172.22.0.4-8881 (172.22.0.4:8881) with 2 core(s)
20/07/25 03:23:38 INFO StandaloneSchedulerBackend: Granted executor ID app-20200725032338-0003/0 on hostPort 172.22.0.4:8881 with 2 core(s), 1024.0 MB RAM
20/07/25 03:23:38 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 0508a77fcaad, 45057, None)
20/07/25 03:23:38 INFO BlockManagerMasterEndpoint: Registering block manager 0508a77fcaad:45057 with 366.3 MB RAM, BlockManagerId(driver, 0508a77fcaad, 45057, None)
20/07/25 03:23:38 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 0508a77fcaad, 45057, None)
20/07/25 03:23:38 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 0508a77fcaad, 45057, None)
20/07/25 03:23:38 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200725032338-0003/0 is now RUNNING
20/07/25 03:23:38 INFO StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
20/07/25 03:23:38 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/usr/local/airflow/spark-warehouse').
20/07/25 03:23:38 INFO SharedState: Warehouse path is 'file:/usr/local/airflow/spark-warehouse'.
20/07/25 03:23:40 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
20/07/25 03:23:47 INFO FileSourceStrategy: Pruning directories with:
20/07/25 03:23:47 INFO FileSourceStrategy: Post-Scan Filters:
20/07/25 03:23:47 INFO FileSourceStrategy: Output Data Schema: struct<value: string>
20/07/25 03:23:47 INFO FileSourceScanExec: Pushed Filters:
20/07/25 03:23:51 INFO CodeGenerator: Code generated in 2187.926234 ms
20/07/25 03:23:53 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 220.9 KB, free 366.1 MB)
20/07/25 03:23:55 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 20.8 KB, free 366.1 MB)
20/07/25 03:23:55 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 0508a77fcaad:45057 (size: 20.8 KB, free: 366.3 MB)
20/07/25 03:23:55 INFO SparkContext: Created broadcast 0 from javaToPython at NativeMethodAccessorImpl.java:0
20/07/25 03:23:55 INFO FileSourceScanExec: Planning scan with bin packing, max size: 4194304 bytes, open cost is considered as scanning 4194304 bytes.
20/07/25 03:23:57 INFO SparkContext: Starting job: collect at /opt/spark-2.4.1/examples/src/main/python/wordcount.py:40
20/07/25 03:23:58 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (172.22.0.4:59324) with ID 0
20/07/25 03:23:58 INFO DAGScheduler: Registering RDD 5 (reduceByKey at /opt/spark-2.4.1/examples/src/main/python/wordcount.py:39)
20/07/25 03:23:58 INFO DAGScheduler: Got job 0 (collect at /opt/spark-2.4.1/examples/src/main/python/wordcount.py:40) with 1 output partitions
20/07/25 03:23:58 INFO DAGScheduler: Final stage: ResultStage 1 (collect at /opt/spark-2.4.1/examples/src/main/python/wordcount.py:40)
20/07/25 03:23:58 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 0)
20/07/25 03:23:58 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 0)
20/07/25 03:23:58 INFO DAGScheduler: Submitting ShuffleMapStage 0 (PairwiseRDD[5] at reduceByKey at /opt/spark-2.4.1/examples/src/main/python/wordcount.py:39), which has no missing parents
20/07/25 03:23:58 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 15.2 KB, free 366.0 MB)
20/07/25 03:23:58 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 9.1 KB, free 366.0 MB)
20/07/25 03:23:58 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 0508a77fcaad:45057 (size: 9.1 KB, free: 366.3 MB)
20/07/25 03:23:58 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1161
20/07/25 03:23:58 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 0 (PairwiseRDD[5] at reduceByKey at /opt/spark-2.4.1/examples/src/main/python/wordcount.py:39) (first 15 tasks are for partitions Vector(0))
20/07/25 03:23:58 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
20/07/25 03:23:58 INFO BlockManagerMasterEndpoint: Registering block manager 172.22.0.4:45435 with 366.3 MB RAM, BlockManagerId(0, 172.22.0.4, 45435, None)
20/07/25 03:23:58 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 172.22.0.4, executor 0, partition 0, PROCESS_LOCAL, 8307 bytes)
20/07/25 03:24:03 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 172.22.0.4:45435 (size: 9.1 KB, free: 366.3 MB)
20/07/25 03:24:09 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 172.22.0.4:45435 (size: 20.8 KB, free: 366.3 MB)
20/07/25 03:24:11 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, 172.22.0.4, executor 0): java.io.FileNotFoundException: File file:/usr/local/airflow/sample.txt does not exist
It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:127)
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:177)
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:101)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.hasNext(SerDeUtil.scala:153)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.foreach(SerDeUtil.scala:148)
at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:224)
at org.apache.spark.api.python.PythonRunner$$anon$2.writeIteratorToStream(PythonRunner.scala:557)
at org.apache.spark.api.python.BasePythonRunner$WriterThread$$anonfun$run$1.apply(PythonRunner.scala:345)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1945)
at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:194)
20/07/25 03:24:11 INFO TaskSetManager: Starting task 0.1 in stage 0.0 (TID 1, 172.22.0.4, executor 0, partition 0, PROCESS_LOCAL, 8307 bytes)
20/07/25 03:24:11 INFO TaskSetManager: Lost task 0.1 in stage 0.0 (TID 1) on 172.22.0.4, executor 0: java.io.FileNotFoundException (File file:/usr/local/airflow/sample.txt does not exist
It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.) [duplicate 1]
20/07/25 03:24:11 INFO TaskSetManager: Starting task 0.2 in stage 0.0 (TID 2, 172.22.0.4, executor 0, partition 0, PROCESS_LOCAL, 8307 bytes)
20/07/25 03:24:12 INFO TaskSetManager: Lost task 0.2 in stage 0.0 (TID 2) on 172.22.0.4, executor 0: java.io.FileNotFoundException (File file:/usr/local/airflow/sample.txt does not exist
It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.) [duplicate 2]
20/07/25 03:24:12 INFO TaskSetManager: Starting task 0.3 in stage 0.0 (TID 3, 172.22.0.4, executor 0, partition 0, PROCESS_LOCAL, 8307 bytes)
20/07/25 03:24:12 INFO TaskSetManager: Lost task 0.3 in stage 0.0 (TID 3) on 172.22.0.4, executor 0: java.io.FileNotFoundException (File file:/usr/local/airflow/sample.txt does not exist
It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.) [duplicate 3]
20/07/25 03:24:12 ERROR TaskSetManager: Task 0 in stage 0.0 failed 4 times; aborting job
20/07/25 03:24:12 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
20/07/25 03:24:12 INFO TaskSchedulerImpl: Cancelling stage 0
20/07/25 03:24:12 INFO TaskSchedulerImpl: Killing all running tasks in stage 0: Stage cancelled
20/07/25 03:24:12 INFO DAGScheduler: ShuffleMapStage 0 (reduceByKey at /opt/spark-2.4.1/examples/src/main/python/wordcount.py:39) failed in 13.690 s due to Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, 172.22.0.4, executor 0): java.io.FileNotFoundException: File file:/usr/local/airflow/sample.txt does not exist
It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:127)
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:177)
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:101)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.hasNext(SerDeUtil.scala:153)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.foreach(SerDeUtil.scala:148)
at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:224)
at org.apache.spark.api.python.PythonRunner$$anon$2.writeIteratorToStream(PythonRunner.scala:557)
at org.apache.spark.api.python.BasePythonRunner$WriterThread$$anonfun$run$1.apply(PythonRunner.scala:345)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1945)
at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:194)
Driver stacktrace:
20/07/25 03:24:12 INFO DAGScheduler: Job 0 failed: collect at /opt/spark-2.4.1/examples/src/main/python/wordcount.py:40, took 14.579961 s
Traceback (most recent call last):
File "/opt/spark-2.4.1/examples/src/main/python/wordcount.py", line 40, in <module>
output = counts.collect()
File "/opt/spark-2.4.1/python/lib/pyspark.zip/pyspark/rdd.py", line 816, in collect
File "/opt/spark-2.4.1/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
File "/opt/spark-2.4.1/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
File "/opt/spark-2.4.1/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, 172.22.0.4, executor 0): java.io.FileNotFoundException: File file:/usr/local/airflow/sample.txt does not exist
It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:127)
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:177)
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:101)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.hasNext(SerDeUtil.scala:153)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.foreach(SerDeUtil.scala:148)
at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:224)
at org.apache.spark.api.python.PythonRunner$$anon$2.writeIteratorToStream(PythonRunner.scala:557)
at org.apache.spark.api.python.BasePythonRunner$WriterThread$$anonfun$run$1.apply(PythonRunner.scala:345)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1945)
at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:194)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1889)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1877)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1876)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1876)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2110)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2059)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2048)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:945)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.collect(RDD.scala:944)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.FileNotFoundException: File file:/usr/local/airflow/sample.txt does not exist
It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:127)
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:177)
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:101)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.hasNext(SerDeUtil.scala:153)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.foreach(SerDeUtil.scala:148)
at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:224)
at org.apache.spark.api.python.PythonRunner$$anon$2.writeIteratorToStream(PythonRunner.scala:557)
at org.apache.spark.api.python.BasePythonRunner$WriterThread$$anonfun$run$1.apply(PythonRunner.scala:345)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1945)
at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:194)
20/07/25 03:24:13 INFO SparkContext: Invoking stop() from shutdown hook
20/07/25 03:24:13 INFO SparkUI: Stopped Spark web UI at http://0508a77fcaad:4040
20/07/25 03:24:13 INFO StandaloneSchedulerBackend: Shutting down all executors
20/07/25 03:24:13 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asking each executor to shut down
20/07/25 03:24:16 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/07/25 03:24:16 INFO MemoryStore: MemoryStore cleared
20/07/25 03:24:16 INFO BlockManager: BlockManager stopped
20/07/25 03:24:16 INFO BlockManagerMaster: BlockManagerMaster stopped
20/07/25 03:24:16 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
20/07/25 03:24:16 INFO SparkContext: Successfully stopped SparkContext
20/07/25 03:24:16 INFO ShutdownHookManager: Shutdown hook called
20/07/25 03:24:16 INFO ShutdownHookManager: Deleting directory /tmp/spark-2dfb2222-d56c-4ee1-ab62-86e71e5e751b
20/07/25 03:24:16 INFO ShutdownHookManager: Deleting directory /tmp/spark-f9dfe6ee-22d7-4747-beab-9450fc1afce0
20/07/25 03:24:16 INFO ShutdownHookManager: Deleting directory /tmp/spark-f9dfe6ee-22d7-4747-beab-9450fc1afce0/pyspark-2ee74d07-6606-4edc-8420-fe46212c50e5
Change your spark-submit command as below when submitting your Spark job (add --deploy-mode cluster if you want to pass just the file name to wordcount.py):
spark-submit \
  --master spark://ipaddress:7077 \
  --deploy-mode cluster \
  --files /usr/local/airflow/sample.txt \
  /opt/spark-2.4.1/examples/src/main/python/wordcount.py sample.txt
OR
spark-submit \
--master spark://ipaddress:7077 \
/opt/spark-2.4.1/examples/src/main/python/wordcount.py /usr/local/airflow/sample.txt

Optimization of Spark cluster configuration (low-spec virtual machine cluster)

When I use Spark's standalone mode to process large datasets, the log says:
ERROR TaskSchedulerImpl:70 - Lost executor 1 on : Executor heartbeat timed out after 381181 ms
I searched the internet, and people say I should set a parameter with spark-submit:
[hadoop@Master spark2.4.0]$ bin/spark-submit --master spark://master:7077 --conf spark.worker.timeout 10000000 --py-files id.py id.py --name id
Error message in log:
Error: Invalid argument to --conf: spark.worker.timeout
Questions:
How do I set the timeout parameter?
Update: thanks to meniluca's answer - I had missed the '=' symbol in the command.
After adjusting the timeout, the log displays:
2019-12-05 19:42:27 WARN Utils:87 - Suppressing exception in finally: broken pipe (Write failed)
java.net.SocketException: broken pipe (Write failed)
2019-12-05 21:13:09 INFO SparkContext:54 - Invoking stop() from shutdown hook
Exception in thread "serve-DataFrame" java.net.SocketException: Connection reset
Suppressed: java.net.SocketException: broken pipe (Write failed)
Then I changed the SSH configuration, adding the following to ~/.ssh/config:
ServerAliveInterval 60
The error still exists. I then tried to increase the driver memory; the error still exists and shows that the connection is disconnected:
[hadoop@Master spark2.4.0]$ bin/spark-submit --master spark://master:7077 --conf spark.worker.timeout=10000000 --driver-memory 1g --py-files id.py id.py --name id
2019-12-06 10:38:49 INFO ContextCleaner:54 - Cleaned accumulator 374
Exception in thread "serve-DataFrame" java.net.SocketException: broken pipe (Write failed)
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at java.io.FilterOutputStream.write(FilterOutputStream.java:97)
at org.apache.spark.api.python.PythonRDD$.org$apache$spark$api$python$PythonRDD$$write$1(PythonRDD.scala:212)
at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:224)
at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:224)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.foreach(SerDeUtil.scala:148)
at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:224)
at org.apache.spark.api.python.PythonRDD$$anonfun$serveIterator$1.apply(PythonRDD.scala:413)
at org.apache.spark.api.python.PythonRDD$$anonfun$serveIterator$1.apply(PythonRDD.scala:412)
at org.apache.spark.api.python.PythonRDD$$anonfun$6$$anonfun$apply$1.apply$mcV$sp(PythonRDD.scala:435)
at org.apache.spark.api.python.PythonRDD$$anonfun$6$$anonfun$apply$1.apply(PythonRDD.scala:435)
at org.apache.spark.api.python.PythonRDD$$anonfun$6$$anonfun$apply$1.apply(PythonRDD.scala:435)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.api.python.PythonRDD$$anonfun$6.apply(PythonRDD.scala:436)
at org.apache.spark.api.python.PythonRDD$$anonfun$6.apply(PythonRDD.scala:432)
at org.apache.spark.api.python.PythonServer$$anon$1.run(PythonRDD.scala:862)
2019-12-06 11:06:12 WARN HeartbeatReceiver:66 - Removing executor 1 with no recent heartbeats: 149103 ms exceeds timeout 120000 ms
2019-12-06 11:06:12 ERROR TaskSchedulerImpl:70 - Lost executor 1 on 219.226.109.129: Executor heartbeat timed out after 149103 ms
2019-12-06 11:06:13 INFO SparkContext:54 - Invoking stop() from shutdown hook
2019-12-06 11:06:13 INFO DAGScheduler:54 - Executor lost: 1 (epoch 6)
2019-12-06 11:06:13 WARN HeartbeatReceiver:66 - Removing executor 0 with no recent heartbeats: 155761 ms exceeds timeout 120000 ms
2019-12-06 11:06:13 ERROR TaskSchedulerImpl:70 - Lost executor 0 on 219.226.109.131: Executor heartbeat timed out after 155761 ms
2019-12-06 11:06:13 INFO StandaloneSchedulerBackend:54 - Requesting to kill executor(s) 1
2019-12-06 11:06:13 INFO BlockManagerMasterEndpoint:54 - Trying to remove executor 1 from BlockManagerMaster.
2019-12-06 11:06:13 INFO BlockManagerMasterEndpoint:54 - Removing block manager BlockManagerId(1, 219.226.109.129, 42501, None)
2019-12-06 11:06:13 INFO BlockManagerMaster:54 - Removed 1 successfully in removeExecutor
2019-12-06 11:06:13 INFO DAGScheduler:54 - Shuffle files lost for executor: 1 (epoch 6)
2019-12-06 11:06:13 INFO StandaloneSchedulerBackend:54 - Actual list of executor(s) to be killed is 1
2019-12-06 11:06:13 INFO DAGScheduler:54 - Host added was in lost list earlier: 219.226.109.129
2019-12-06 11:06:13 INFO DAGScheduler:54 - Executor lost: 0 (epoch 7)
2019-12-06 11:06:13 INFO AbstractConnector:318 - Stopped Spark#490228e{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
2019-12-06 11:06:13 INFO BlockManagerMasterEndpoint:54 - Trying to remove executor 0 from BlockManagerMaster.
2019-12-06 11:06:13 INFO BlockManagerMasterEndpoint:54 - Removing block manager BlockManagerId(0, 219.226.109.131, 42164, None)
2019-12-06 11:06:13 INFO BlockManagerMaster:54 - Removed 0 successfully in removeExecutor
2019-12-06 11:06:13 INFO DAGScheduler:54 - Shuffle files lost for executor: 0 (epoch 7)
2019-12-06 11:06:13 INFO DAGScheduler:54 - Host added was in lost list earlier: 219.226.109.131
2019-12-06 11:06:13 INFO SparkUI:54 - Stopped Spark web UI at http://Master:4040
2019-12-06 11:06:13 INFO BlockManagerMasterEndpoint:54 - Registering block manager 219.226.109.129:42501 with 413.9 MB RAM, BlockManagerId(1, 219.226.109.129, 42501, None)
2019-12-06 11:06:13 INFO BlockManagerMasterEndpoint:54 - Registering block manager 219.226.109.131:42164 with 413.9 MB RAM, BlockManagerId(0, 219.226.109.131, 42164, None)
2019-12-06 11:06:14 INFO StandaloneSchedulerBackend:54 - Shutting down all executors
2019-12-06 11:06:14 INFO CoarseGrainedSchedulerBackend$DriverEndpoint:54 - Asking each executor to shut down
2019-12-06 11:06:14 INFO BlockManagerInfo:54 - Added broadcast_15_piece0 in memory on 219.226.109.129:42501 (size: 21.1 KB, free: 413.9 MB)
2019-12-06 11:06:15 INFO MapOutputTrackerMasterEndpoint:54 - MapOutputTrackerMasterEndpoint stopped!
2019-12-06 11:06:15 INFO BlockManagerInfo:54 - Added broadcast_15_piece0 in memory on 219.226.109.131:42164 (size: 21.1 KB, free: 413.9 MB)
2019-12-06 11:06:16 INFO MemoryStore:54 - MemoryStore cleared
2019-12-06 11:06:16 INFO BlockManager:54 - BlockManager stopped
2019-12-06 11:06:16 INFO BlockManagerMaster:54 - BlockManagerMaster stopped
2019-12-06 11:06:17 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint:54 - OutputCommitCoordinator stopped!
2019-12-06 11:06:17 ERROR TransportResponseHandler:144 - Still have 1 requests outstanding when connection from Master/219.226.109.130:7077 is closed
2019-12-06 11:06:17 INFO SparkContext:54 - Successfully stopped SparkContext
2019-12-06 11:06:17 INFO ShutdownHookManager:54 - Shutdown hook called
2019-12-06 11:06:17 INFO ShutdownHookManager:54 - Deleting directory /tmp/spark-e2a29bac-7277-4476-ad23-315a27e9ccf0
2019-12-06 11:06:17 INFO ShutdownHookManager:54 - Deleting directory /tmp/localPyFiles-dd95954c-2e77-41ca-969d-a201269f5b5b
2019-12-06 11:06:17 INFO ShutdownHookManager:54 - Deleting directory /tmp/spark-bcd56b4a-fb32-4b58-a1d5-71abc5218d32
2019-12-06 11:06:17 INFO ShutdownHookManager:54 - Deleting directory /tmp/spark-e2a29bac-7277-4476-ad23-315a27e9ccf0/pyspark-d04b799f-a116-44d5-b6a5-811cc8c03743
Questions:
Is SSH related to the broken pipe?
Is increasing the driver memory helpful for this problem?
The configuration posts I see on the internet are all for much bigger setups. Since I built the cluster on virtual machines on my own computer (the master has two cores, the slave has one core), how should I adjust the configuration?
Please try with:
--conf spark.worker.timeout=10000000
You are missing the equals character between the configuration name and value.
java.net.SocketException: broken pipe (Write failed) occurs when something is wrong with the port being accessed.
I suggest you change the master's port (its web UI is at 8080 by default). The port can be changed either in the configuration file or via command-line options when starting the master:
sbin/start-master.sh
The same can be tried for the worker node as well, if the above does not fix the issue. To see which ports are being used, you can run:
sudo netstat -ltup
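For reference, a sketch of the command-line form (the port numbers here are only examples); the standalone scripts accept --port and --webui-port flags:
sbin/start-master.sh --port 7077 --webui-port 8081
sbin/start-slave.sh spark://master:7077 --webui-port 8082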

Apache Spark Multi Node Clustering - java.io.FileNotFoundException

I am a newbie to Apache Spark and cluster computing. I implemented Spark in standalone mode (master and worker on the same machine) and it worked fine for me.
Then I downloaded the pre-built version of Spark, followed these instructions, and placed it on every node of my cluster: http://spark.apache.org/docs/latest/spark-standalone.html#installing-spark-standalone-to-a-cluster.
My master node has the IP address 172.17.0.224 and my slave nodes have the IP addresses 172.17.0.221, 172.17.0.222 and 172.17.0.223.
I edited the slaves and spark-env.sh files to add the IP addresses of my slaves and the IP address of my master, respectively (roughly as shown below).
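For reference, the two files look roughly like this (SPARK_MASTER_IP being the variable used by the Spark 1.5 standalone scripts):
conf/slaves:
172.17.0.221
172.17.0.222
172.17.0.223
conf/spark-env.sh:
export SPARK_MASTER_IP=172.17.0.224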
I started the master node with start-master.sh and the slave nodes with start-slaves.sh; everything worked fine.
I submitted my spark-job using the command spark-submit --class "Rice" --master spark://172.17.0.224:7077 cs453project/target/scala-2.11/simple-project_2.11-1.0.jar cs453project/input.txt cs453project/ouput2 cs453project/ouput3.
These are the error messages I got:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/11/25 11:22:27 INFO SparkContext: Running Spark version 1.5.2
15/11/25 11:22:27 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/11/25 11:22:28 WARN Utils: Your hostname, node04 resolves to a loopback address: 127.0.1.1; using 172.17.0.224 instead (on interface eth0)
15/11/25 11:22:28 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
15/11/25 11:22:28 INFO SecurityManager: Changing view acls to: ujjwal
15/11/25 11:22:28 INFO SecurityManager: Changing modify acls to: ujjwal
15/11/25 11:22:28 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ujjwal); users with modify permissions: Set(ujjwal)
15/11/25 11:22:28 INFO Slf4jLogger: Slf4jLogger started
15/11/25 11:22:28 INFO Remoting: Starting remoting
15/11/25 11:22:28 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver#172.17.0.224:58478]
15/11/25 11:22:28 INFO Utils: Successfully started service 'sparkDriver' on port 58478.
15/11/25 11:22:28 INFO SparkEnv: Registering MapOutputTracker
15/11/25 11:22:28 INFO SparkEnv: Registering BlockManagerMaster
15/11/25 11:22:28 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-bc18e422-d334-4fe5-9663-9439620ec054
15/11/25 11:22:28 INFO MemoryStore: MemoryStore started with capacity 530.3 MB
15/11/25 11:22:29 INFO HttpFileServer: HTTP File server directory is /tmp/spark-7c6e0ad4-52ae-4f5a-9aaa-6ad9fbf48685/httpd-13d8dd4d-6ff1-450d-baac-f2702c7a4e5b
15/11/25 11:22:29 INFO HttpServer: Starting HTTP Server
15/11/25 11:22:29 INFO Utils: Successfully started service 'HTTP file server' on port 49496.
15/11/25 11:22:29 INFO SparkEnv: Registering OutputCommitCoordinator
15/11/25 11:22:29 INFO Utils: Successfully started service 'SparkUI' on port 4040.
15/11/25 11:22:29 INFO SparkUI: Started SparkUI at http://172.17.0.224:4040
15/11/25 11:22:29 INFO SparkContext: Added JAR file:/home/ujjwal/cs453project/target/scala-2.11/simple-project_2.11-1.0.jar at http://172.17.0.224:49496/jars/simple-project_2.11-1.0.jar with timestamp 1448479349380
15/11/25 11:22:29 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
15/11/25 11:22:29 INFO AppClient$ClientEndpoint: Connecting to master spark://172.17.0.224:7077...
15/11/25 11:22:29 INFO SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20151125112229-0001
15/11/25 11:22:29 INFO AppClient$ClientEndpoint: Executor added: app-20151125112229-0001/0 on worker-20151125095922-172.17.0.221-33366 (172.17.0.221:33366) with 2 cores
15/11/25 11:22:29 INFO SparkDeploySchedulerBackend: Granted executor ID app-20151125112229-0001/0 on hostPort 172.17.0.221:33366 with 2 cores, 1024.0 MB RAM
15/11/25 11:22:29 INFO AppClient$ClientEndpoint: Executor updated: app-20151125112229-0001/0 is now LOADING
15/11/25 11:22:29 INFO AppClient$ClientEndpoint: Executor updated: app-20151125112229-0001/0 is now RUNNING
15/11/25 11:22:29 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 47843.
15/11/25 11:22:29 INFO NettyBlockTransferService: Server created on 47843
15/11/25 11:22:29 INFO BlockManagerMaster: Trying to register BlockManager
15/11/25 11:22:29 INFO BlockManagerMasterEndpoint: Registering block manager 172.17.0.224:47843 with 530.3 MB RAM, BlockManagerId(driver, 172.17.0.224, 47843)
15/11/25 11:22:29 INFO BlockManagerMaster: Registered BlockManager
15/11/25 11:22:29 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
15/11/25 11:22:30 INFO MemoryStore: ensureFreeSpace(157248) called with curMem=0, maxMem=556038881
15/11/25 11:22:30 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 153.6 KB, free 530.1 MB)
15/11/25 11:22:30 INFO MemoryStore: ensureFreeSpace(14276) called with curMem=157248, maxMem=556038881
15/11/25 11:22:30 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 13.9 KB, free 530.1 MB)
15/11/25 11:22:30 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 172.17.0.224:47843 (size: 13.9 KB, free: 530.3 MB)
15/11/25 11:22:30 INFO SparkContext: Created broadcast 0 from textFile at build.scala:11
15/11/25 11:22:30 INFO FileInputFormat: Total input paths to process : 1
15/11/25 11:22:30 INFO SparkContext: Starting job: count at build.scala:13
15/11/25 11:22:30 INFO DAGScheduler: Got job 0 (count at build.scala:13) with 108 output partitions
15/11/25 11:22:30 INFO DAGScheduler: Final stage: ResultStage 0(count at build.scala:13)
15/11/25 11:22:30 INFO DAGScheduler: Parents of final stage: List()
15/11/25 11:22:30 INFO DAGScheduler: Missing parents: List()
15/11/25 11:22:30 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[4] at map at build.scala:12), which has no missing parents
15/11/25 11:22:30 INFO MemoryStore: ensureFreeSpace(3424) called with curMem=171524, maxMem=556038881
15/11/25 11:22:30 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 3.3 KB, free 530.1 MB)
15/11/25 11:22:30 INFO MemoryStore: ensureFreeSpace(1934) called with curMem=174948, maxMem=556038881
15/11/25 11:22:30 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 1934.0 B, free 530.1 MB)
15/11/25 11:22:30 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 172.17.0.224:47843 (size: 1934.0 B, free: 530.3 MB)
15/11/25 11:22:30 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:861
15/11/25 11:22:30 INFO DAGScheduler: Submitting 108 missing tasks from ResultStage 0 (MapPartitionsRDD[4] at map at build.scala:12)
15/11/25 11:22:30 INFO TaskSchedulerImpl: Adding task set 0.0 with 108 tasks
15/11/25 11:22:31 INFO SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor#172.17.0.221:55861/user/Executor#-498212581]) with ID 0
15/11/25 11:22:32 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 172.17.0.221, PROCESS_LOCAL, 2217 bytes)
15/11/25 11:22:32 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, 172.17.0.221, PROCESS_LOCAL, 2217 bytes)
15/11/25 11:22:32 INFO BlockManagerMasterEndpoint: Registering block manager 172.17.0.221:49642 with 530.3 MB RAM, BlockManagerId(0, 172.17.0.221, 49642)
15/11/25 11:22:32 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 172.17.0.221:49642 (size: 1934.0 B, free: 530.3 MB)
15/11/25 11:22:32 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 172.17.0.221:49642 (size: 13.9 KB, free: 530.3 MB)
15/11/25 11:22:32 INFO TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, 172.17.0.221, PROCESS_LOCAL, 2217 bytes)
15/11/25 11:22:32 INFO TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, 172.17.0.221, PROCESS_LOCAL, 2217 bytes)
15/11/25 11:22:32 INFO TaskSetManager: Starting task 4.0 in stage 0.0 (TID 4, 172.17.0.221, PROCESS_LOCAL, 2217 bytes)
15/11/25 11:22:32 WARN TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1, 172.17.0.221): java.io.FileNotFoundException: File file:/home/ujjwal/cs453project/input.txt does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:534)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:747)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:524)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:409)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:140)
at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:341)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:766)
at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:108)
at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:239)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:216)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
15/11/25 11:22:32 INFO TaskSetManager: Lost task 3.0 in stage 0.0 (TID 3) on executor 172.17.0.221: java.io.FileNotFoundException (File file:/home/ujjwal/cs453project/input.txt does not exist) [duplicate 1]
15/11/25 11:22:32 INFO TaskSetManager: Starting task 3.1 in stage 0.0 (TID 5, 172.17.0.221, PROCESS_LOCAL, 2217 bytes)
15/11/25 11:22:32 INFO TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0) on executor 172.17.0.221: java.io.FileNotFoundException (File file:/home/ujjwal/cs453project/input.txt does not exist) [duplicate 2]
15/11/25 11:22:32 INFO TaskSetManager: Starting task 0.1 in stage 0.0 (TID 6, 172.17.0.221, PROCESS_LOCAL, 2217 bytes)
15/11/25 11:22:32 INFO TaskSetManager: Lost task 2.0 in stage 0.0 (TID 2) on executor 172.17.0.221: java.io.FileNotFoundException (File file:/home/ujjwal/cs453project/input.txt does not exist) [duplicate 3]
15/11/25 11:22:32 INFO TaskSetManager: Lost task 4.0 in stage 0.0 (TID 4) on executor 172.17.0.221: java.io.FileNotFoundException (File file:/home/ujjwal/cs453project/input.txt does not exist) [duplicate 4]
15/11/25 11:22:32 INFO TaskSetManager: Starting task 4.1 in stage 0.0 (TID 7, 172.17.0.221, PROCESS_LOCAL, 2217 bytes)
15/11/25 11:22:32 INFO TaskSetManager: Lost task 3.1 in stage 0.0 (TID 5) on executor 172.17.0.221: java.io.FileNotFoundException (File file:/home/ujjwal/cs453project/input.txt does not exist) [duplicate 5]
15/11/25 11:22:32 INFO TaskSetManager: Starting task 3.2 in stage 0.0 (TID 8, 172.17.0.221, PROCESS_LOCAL, 2217 bytes)
15/11/25 11:22:32 INFO TaskSetManager: Lost task 0.1 in stage 0.0 (TID 6) on executor 172.17.0.221: java.io.FileNotFoundException (File file:/home/ujjwal/cs453project/input.txt does not exist) [duplicate 6]
15/11/25 11:22:32 INFO TaskSetManager: Starting task 0.2 in stage 0.0 (TID 9, 172.17.0.221, PROCESS_LOCAL, 2217 bytes)
15/11/25 11:22:32 INFO TaskSetManager: Lost task 3.2 in stage 0.0 (TID 8) on executor 172.17.0.221: java.io.FileNotFoundException (File file:/home/ujjwal/cs453project/input.txt does not exist) [duplicate 7]
15/11/25 11:22:32 INFO TaskSetManager: Starting task 3.3 in stage 0.0 (TID 10, 172.17.0.221, PROCESS_LOCAL, 2217 bytes)
15/11/25 11:22:32 INFO TaskSetManager: Lost task 4.1 in stage 0.0 (TID 7) on executor 172.17.0.221: java.io.FileNotFoundException (File file:/home/ujjwal/cs453project/input.txt does not exist) [duplicate 8]
15/11/25 11:22:32 INFO TaskSetManager: Starting task 4.2 in stage 0.0 (TID 11, 172.17.0.221, PROCESS_LOCAL, 2217 bytes)
15/11/25 11:22:32 INFO TaskSetManager: Lost task 0.2 in stage 0.0 (TID 9) on executor 172.17.0.221: java.io.FileNotFoundException (File file:/home/ujjwal/cs453project/input.txt does not exist) [duplicate 9]
15/11/25 11:22:32 INFO TaskSetManager: Starting task 0.3 in stage 0.0 (TID 12, 172.17.0.221, PROCESS_LOCAL, 2217 bytes)
15/11/25 11:22:32 INFO TaskSetManager: Lost task 3.3 in stage 0.0 (TID 10) on executor 172.17.0.221: java.io.FileNotFoundException (File file:/home/ujjwal/cs453project/input.txt does not exist) [duplicate 10]
15/11/25 11:22:32 ERROR TaskSetManager: Task 3 in stage 0.0 failed 4 times; aborting job
15/11/25 11:22:32 INFO TaskSchedulerImpl: Cancelling stage 0
15/11/25 11:22:32 INFO TaskSchedulerImpl: Stage 0 was cancelled
15/11/25 11:22:32 INFO DAGScheduler: ResultStage 0 (count at build.scala:13) failed in 2.216 s
15/11/25 11:22:32 INFO TaskSetManager: Lost task 4.2 in stage 0.0 (TID 11) on executor 172.17.0.221: java.io.FileNotFoundException (File file:/home/ujjwal/cs453project/input.txt does not exist) [duplicate 11]
15/11/25 11:22:32 INFO DAGScheduler: Job 0 failed: count at build.scala:13, took 2.373631 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in stage 0.0 failed 4 times, most recent failure: Lost task 3.3 in stage 0.0 (TID 10, 172.17.0.221): java.io.FileNotFoundException: File file:/home/ujjwal/cs453project/input.txt does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:534)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:747)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:524)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:409)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:140)
at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:341)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:766)
at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:108)
at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:239)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:216)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1270)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1496)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1447)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1824)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1837)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1850)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1921)
at org.apache.spark.rdd.RDD.count(RDD.scala:1125)
at Rice$.main(build.scala:13)
at Rice.main(build.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:674)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.FileNotFoundException: File file:/home/ujjwal/cs453project/input.txt does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:534)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:747)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:524)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:409)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:140)
at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:341)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:766)
at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:108)
at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:239)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:216)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
15/11/25 11:22:32 INFO TaskSetManager: Lost task 0.3 in stage 0.0 (TID 12) on executor 172.17.0.221: java.io.FileNotFoundException (File file:/home/ujjwal/cs453project/input.txt does not exist) [duplicate 12]
15/11/25 11:22:32 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
15/11/25 11:22:32 INFO SparkContext: Invoking stop() from shutdown hook
15/11/25 11:22:33 INFO SparkUI: Stopped Spark web UI at http://172.17.0.224:4040
15/11/25 11:22:33 INFO DAGScheduler: Stopping DAGScheduler
15/11/25 11:22:33 INFO SparkDeploySchedulerBackend: Shutting down all executors
15/11/25 11:22:33 INFO SparkDeploySchedulerBackend: Asking each executor to shut down
15/11/25 11:22:33 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
15/11/25 11:22:33 INFO MemoryStore: MemoryStore cleared
15/11/25 11:22:33 INFO BlockManager: BlockManager stopped
15/11/25 11:22:33 INFO BlockManagerMaster: BlockManagerMaster stopped
15/11/25 11:22:33 INFO SparkContext: Successfully stopped SparkContext
15/11/25 11:22:33 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
15/11/25 11:22:33 INFO ShutdownHookManager: Shutdown hook called
15/11/25 11:22:33 INFO ShutdownHookManager: Deleting directory /tmp/spark-7c6e0ad4-52ae-4f5a-9aaa-6ad9fbf48685
Could you please help me understand how I can solve my problem? Thanks!
The path you used is probably only local to the driver. You have to use a path that is accessible to all of the workers. The driver does not send the actual data to the workers - that would, unfortunately, be slow. The workers will try to read the data using the path you gave them; in this case, they fail because they don't have the file locally.
@user3180835, as suggested by @Mike Park: in my case, after I copied the file from the local Linux file system to HDFS, it started working.
hdfs dfs -cp file:///<path_to_local_file> /<hdfs_file_dir>
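A minimal Scala sketch of the same idea, assuming the input has been copied into HDFS as above; the namenode host/port and the HDFS path here are placeholders, so adjust them to your cluster:
import org.apache.spark.{SparkConf, SparkContext}

object Rice {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("Rice")
    val sc = new SparkContext(conf)
    // Point at a location every worker can resolve (HDFS), not at the driver's local disk.
    val lines = sc.textFile("hdfs://172.17.0.224:9000/user/ujjwal/input.txt")
    println("line count: " + lines.count())
    sc.stop()
  }
}
Alternatively, if you want to keep a local path, the same file must exist at that exact path on every worker node.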

Spark: EOFException when reading from HDFS

I just started playing with Spark, so I ran the SimpleApp program from the tutorial (https://spark.apache.org/docs/1.0.0/quick-start.html), which works fine.
However, if I change the file location from local to HDFS, I get an EOFException.
Some searching online suggests this error is caused by Hadoop version conflicts. I made the suggested modification in my sbt file, but I still get the same error.
I am using CDH 5.1; the code and full error log are below. Any help is greatly appreciated.
Thanks
Scala:
/* SimpleApp.scala */
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf

object SimpleApp {
  def main(args: Array[String]) {
    val logFile = "hdfs://plogs001.sjc.domain.com:8020/tmp/data.txt" // Should be some file on your system
    val conf = new SparkConf()
      .setMaster("spark://plogs004.sjc.domain.com:7077")
      .setAppName("SimpleApp")
      .set("spark.executor.memory", "1g")
    val sc = new SparkContext(conf)
    //val logFile = "/tmp/data.txt" // Should be some file on your system
    //val conf = new SparkConf().setAppName("Simple Application")
    //val sc = new SparkContext(conf)
    val logData = sc.textFile(logFile, 2).cache()
    val numAs = logData.filter(line => line.contains("a")).count()
    val numBs = logData.filter(line => line.contains("b")).count()
    println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))
  }
}
SBT:
name := "Simple Project"
version := "1.0"
scalaVersion := "2.10.4"
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.0.0"
libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.3.0-cdh5.1.0"
resolvers += "Akka Repository" at "http://repo.akka.io/releases/"
resolvers += "Cloudera Repository" at "https://repository.cloudera.com/artifactory/cloudera-repos/"
Error Log:
[hdfs#plogs001 test1]$ spark-submit --class SimpleApp --master spark://spark#plogs004.sjc.domain.com:7077 target/scala-2.10/simple-project_2.10-1.0.jar
14/09/09 16:56:41 INFO spark.SecurityManager: Changing view acls to: hdfs
14/09/09 16:56:41 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hdfs)
14/09/09 16:56:41 INFO slf4j.Slf4jLogger: Slf4jLogger started
14/09/09 16:56:41 INFO Remoting: Starting remoting
14/09/09 16:56:41 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://spark#plogs001.sjc.domain.com:34607]
14/09/09 16:56:41 INFO Remoting: Remoting now listens on addresses: [akka.tcp://spark#plogs001.sjc.domain.com:34607]
14/09/09 16:56:41 INFO spark.SparkEnv: Registering MapOutputTracker
14/09/09 16:56:41 INFO spark.SparkEnv: Registering BlockManagerMaster
14/09/09 16:56:41 INFO storage.DiskBlockManager: Created local directory at /tmp/spark-local-20140909165641-375e
14/09/09 16:56:41 INFO storage.MemoryStore: MemoryStore started with capacity 294.9 MB.
14/09/09 16:56:41 INFO network.ConnectionManager: Bound socket to port 40833 with id = ConnectionManagerId(plogs001.sjc.domain.com,40833)
14/09/09 16:56:41 INFO storage.BlockManagerMaster: Trying to register BlockManager
14/09/09 16:56:41 INFO storage.BlockManagerInfo: Registering block manager plogs001.sjc.domain.com:40833 with 294.9 MB RAM
14/09/09 16:56:41 INFO storage.BlockManagerMaster: Registered BlockManager
14/09/09 16:56:41 INFO spark.HttpServer: Starting HTTP Server
14/09/09 16:56:42 INFO server.Server: jetty-8.y.z-SNAPSHOT
14/09/09 16:56:42 INFO server.AbstractConnector: Started SocketConnector#0.0.0.0:47419
14/09/09 16:56:42 INFO broadcast.HttpBroadcast: Broadcast server started at http://172.16.30.161:47419
14/09/09 16:56:42 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-7026d0b6-777e-4dd3-9bbb-e79d7487e7d7
14/09/09 16:56:42 INFO spark.HttpServer: Starting HTTP Server
14/09/09 16:56:42 INFO server.Server: jetty-8.y.z-SNAPSHOT
14/09/09 16:56:42 INFO server.AbstractConnector: Started SocketConnector#0.0.0.0:42388
14/09/09 16:56:42 INFO server.Server: jetty-8.y.z-SNAPSHOT
14/09/09 16:56:42 INFO server.AbstractConnector: Started SelectChannelConnector#0.0.0.0:4040
14/09/09 16:56:42 INFO ui.SparkUI: Started SparkUI at http://plogs001.sjc.domain.com:4040
14/09/09 16:56:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/09/09 16:56:42 INFO spark.SparkContext: Added JAR file:/home/hdfs/kent/test1/target/scala-2.10/simple-project_2.10-1.0.jar at http://172.16.30.161:42388/jars/simple-project_2.10-1.0.jar with timestamp 1410307002737
14/09/09 16:56:42 INFO client.AppClient$ClientActor: Connecting to master spark://plogs004.sjc.domain.com:7077...
14/09/09 16:56:42 INFO storage.MemoryStore: ensureFreeSpace(155704) called with curMem=0, maxMem=309225062
14/09/09 16:56:42 INFO storage.MemoryStore: Block broadcast_0 stored as values to memory (estimated size 152.1 KB, free 294.8 MB)
14/09/09 16:56:42 INFO cluster.SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20140909165642-0041
14/09/09 16:56:42 INFO client.AppClient$ClientActor: Executor added: app-20140909165642-0041/0 on worker-20140902113555-plogs005.sjc.domain.com-7078 (plogs005.sjc.domain.com:7078) with 24 cores
14/09/09 16:56:42 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20140909165642-0041/0 on hostPort plogs005.sjc.domain.com:7078 with 24 cores, 1024.0 MB RAM
14/09/09 16:56:42 INFO client.AppClient$ClientActor: Executor added: app-20140909165642-0041/1 on worker-20140902113555-plogs006.sjc.domain.com-7078 (plogs006.sjc.domain.com:7078) with 24 cores
14/09/09 16:56:42 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20140909165642-0041/1 on hostPort plogs006.sjc.domain.com:7078 with 24 cores, 1024.0 MB RAM
14/09/09 16:56:42 INFO client.AppClient$ClientActor: Executor added: app-20140909165642-0041/2 on worker-20140902113556-plogs004.sjc.domain.com-7078 (plogs004.sjc.domain.com:7078) with 24 cores
14/09/09 16:56:42 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20140909165642-0041/2 on hostPort plogs004.sjc.domain.com:7078 with 24 cores, 1024.0 MB RAM
14/09/09 16:56:42 INFO client.AppClient$ClientActor: Executor updated: app-20140909165642-0041/2 is now RUNNING
14/09/09 16:56:42 INFO client.AppClient$ClientActor: Executor updated: app-20140909165642-0041/1 is now RUNNING
14/09/09 16:56:42 INFO client.AppClient$ClientActor: Executor updated: app-20140909165642-0041/0 is now RUNNING
14/09/09 16:56:43 INFO mapred.FileInputFormat: Total input paths to process : 1
14/09/09 16:56:43 INFO spark.SparkContext: Starting job: count at SimpleApp.scala:22
14/09/09 16:56:43 INFO scheduler.DAGScheduler: Got job 0 (count at SimpleApp.scala:22) with 2 output partitions (allowLocal=false)
14/09/09 16:56:43 INFO scheduler.DAGScheduler: Final stage: Stage 0(count at SimpleApp.scala:22)
14/09/09 16:56:43 INFO scheduler.DAGScheduler: Parents of final stage: List()
14/09/09 16:56:43 INFO scheduler.DAGScheduler: Missing parents: List()
14/09/09 16:56:43 INFO scheduler.DAGScheduler: Submitting Stage 0 (FilteredRDD[2] at filter at SimpleApp.scala:22), which has no missing parents
14/09/09 16:56:43 INFO scheduler.DAGScheduler: Submitting 2 missing tasks from Stage 0 (FilteredRDD[2] at filter at SimpleApp.scala:22)
14/09/09 16:56:43 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
14/09/09 16:56:44 INFO cluster.SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor#plogs005.sjc.domain.com:59110/user/Executor#181141295] with ID 0
14/09/09 16:56:44 INFO scheduler.TaskSetManager: Starting task 0.0:0 as TID 0 on executor 0: plogs005.sjc.domain.com (PROCESS_LOCAL)
14/09/09 16:56:44 INFO scheduler.TaskSetManager: Serialized task 0.0:0 as 1915 bytes in 2 ms
14/09/09 16:56:44 INFO scheduler.TaskSetManager: Starting task 0.0:1 as TID 1 on executor 0: plogs005.sjc.domain.com (PROCESS_LOCAL)
14/09/09 16:56:44 INFO scheduler.TaskSetManager: Serialized task 0.0:1 as 1915 bytes in 0 ms
14/09/09 16:56:44 INFO cluster.SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor#plogs006.sjc.domain.com:45192/user/Executor#2003979349] with ID 1
14/09/09 16:56:44 INFO cluster.SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor#plogs004.sjc.domain.com:46711/user/Executor#-1654256828] with ID 2
14/09/09 16:56:44 INFO storage.BlockManagerInfo: Registering block manager plogs005.sjc.domain.com:36798 with 589.2 MB RAM
14/09/09 16:56:44 INFO storage.BlockManagerInfo: Registering block manager plogs004.sjc.domain.com:40459 with 589.2 MB RAM
14/09/09 16:56:44 INFO storage.BlockManagerInfo: Registering block manager plogs006.sjc.domain.com:54696 with 589.2 MB RAM
14/09/09 16:56:45 WARN scheduler.TaskSetManager: Lost TID 0 (task 0.0:0)
14/09/09 16:56:45 WARN scheduler.TaskSetManager: Loss was due to java.io.EOFException
java.io.EOFException
at java.io.ObjectInputStream$BlockDataInputStream.readFully(ObjectInputStream.java:2744)
at java.io.ObjectInputStream.readFully(ObjectInputStream.java:1032)
at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:68)
at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:106)
at org.apache.hadoop.io.UTF8.readChars(UTF8.java:260)
at org.apache.hadoop.io.UTF8.readString(UTF8.java:252)
at org.apache.hadoop.mapred.FileSplit.readFields(FileSplit.java:87)
at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:285)
at org.apache.hadoop.io.ObjectWritable.readFields(ObjectWritable.java:77)
at org.apache.spark.SerializableWritable.readObject(SerializableWritable.scala:42)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
at org.apache.spark.scheduler.ResultTask.readExternal(ResultTask.scala:147)
at java.io.ObjectInputStream.readExternalData(ObjectInputStream.java:1837)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1796)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:63)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:169)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
14/09/09 16:56:45 WARN scheduler.TaskSetManager: Lost TID 1 (task 0.0:1)
14/09/09 16:56:45 INFO scheduler.TaskSetManager: Loss was due to java.io.EOFException [duplicate 1]
14/09/09 16:56:45 INFO scheduler.TaskSetManager: Starting task 0.0:1 as TID 2 on executor 2: plogs004.sjc.domain.com (NODE_LOCAL)
14/09/09 16:56:45 INFO scheduler.TaskSetManager: Serialized task 0.0:1 as 1915 bytes in 1 ms
14/09/09 16:56:45 INFO scheduler.TaskSetManager: Starting task 0.0:0 as TID 3 on executor 1: plogs006.sjc.domain.com (NODE_LOCAL)
14/09/09 16:56:45 INFO scheduler.TaskSetManager: Serialized task 0.0:0 as 1915 bytes in 0 ms
14/09/09 16:56:45 WARN scheduler.TaskSetManager: Lost TID 3 (task 0.0:0)
14/09/09 16:56:45 INFO scheduler.TaskSetManager: Loss was due to java.io.EOFException [duplicate 2]
14/09/09 16:56:45 INFO scheduler.TaskSetManager: Starting task 0.0:0 as TID 4 on executor 2: plogs004.sjc.domain.com (NODE_LOCAL)
14/09/09 16:56:45 INFO scheduler.TaskSetManager: Serialized task 0.0:0 as 1915 bytes in 1 ms
14/09/09 16:56:45 WARN scheduler.TaskSetManager: Lost TID 2 (task 0.0:1)
14/09/09 16:56:45 INFO scheduler.TaskSetManager: Loss was due to java.io.EOFException [duplicate 3]
14/09/09 16:56:45 INFO scheduler.TaskSetManager: Starting task 0.0:1 as TID 5 on executor 2: plogs004.sjc.domain.com (NODE_LOCAL)
14/09/09 16:56:45 INFO scheduler.TaskSetManager: Serialized task 0.0:1 as 1915 bytes in 0 ms
14/09/09 16:56:45 WARN scheduler.TaskSetManager: Lost TID 4 (task 0.0:0)
14/09/09 16:56:45 INFO scheduler.TaskSetManager: Loss was due to java.io.EOFException [duplicate 4]
14/09/09 16:56:45 INFO scheduler.TaskSetManager: Starting task 0.0:0 as TID 6 on executor 2: plogs004.sjc.domain.com (NODE_LOCAL)
14/09/09 16:56:45 INFO scheduler.TaskSetManager: Serialized task 0.0:0 as 1915 bytes in 0 ms
14/09/09 16:56:45 WARN scheduler.TaskSetManager: Lost TID 5 (task 0.0:1)
14/09/09 16:56:45 INFO scheduler.TaskSetManager: Loss was due to java.io.EOFException [duplicate 5]
14/09/09 16:56:45 INFO scheduler.TaskSetManager: Starting task 0.0:1 as TID 7 on executor 0: plogs005.sjc.domain.com (NODE_LOCAL)
14/09/09 16:56:45 INFO scheduler.TaskSetManager: Serialized task 0.0:1 as 1915 bytes in 0 ms
14/09/09 16:56:45 WARN scheduler.TaskSetManager: Lost TID 6 (task 0.0:0)
14/09/09 16:56:45 INFO scheduler.TaskSetManager: Loss was due to java.io.EOFException [duplicate 6]
14/09/09 16:56:45 ERROR scheduler.TaskSetManager: Task 0.0:0 failed 4 times; aborting job
14/09/09 16:56:45 INFO scheduler.DAGScheduler: Failed to run count at SimpleApp.scala:22
Exception in thread "main" 14/09/09 16:56:45 INFO scheduler.TaskSchedulerImpl: Cancelling stage 0
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0.0:0 failed 4 times, most recent failure: Exception failure in TID 6 on host plogs004.sjc.domain.com: java.io.EOFException
java.io.ObjectInputStream$BlockDataInputStream.readFully(ObjectInputStream.java:2744)
java.io.ObjectInputStream.readFully(ObjectInputStream.java:1032)
org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:68)
org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:106)
org.apache.hadoop.io.UTF8.readChars(UTF8.java:260)
org.apache.hadoop.io.UTF8.readString(UTF8.java:252)
org.apache.hadoop.mapred.FileSplit.readFields(FileSplit.java:87)
org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:285)
org.apache.hadoop.io.ObjectWritable.readFields(ObjectWritable.java:77)
org.apache.spark.SerializableWritable.readObject(SerializableWritable.scala:42)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:606)
java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)
java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
org.apache.spark.scheduler.ResultTask.readExternal(ResultTask.scala:147)
java.io.ObjectInputStream.readExternalData(ObjectInputStream.java:1837)
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1796)
java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:63)
org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:85)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:169)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1033)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1017)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1015)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1015)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:633)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:633)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:633)
at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1207)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
at akka.actor.ActorCell.invoke(ActorCell.scala:456)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
14/09/09 16:56:45 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
14/09/09 16:56:45 INFO scheduler.TaskSchedulerImpl: Stage 0 was cancelled
14/09/09 16:56:45 INFO scheduler.TaskSetManager: Loss was due to java.io.EOFException [duplicate 7]
14/09/09 16:56:45 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool

Resources