Spark: why are tasks assigned to only one worker? - apache-spark

I'm new to Apache Spark and trying to run a simple program on my cluster. The problem is that the driver allocates all tasks to one worker.
I am running a Spark standalone cluster on two machines:
1 - runs the master and a worker with 4 cores: 1 core for the master, 3 for the worker. IP: 192.168.1.101
2 - runs only a worker with 4 cores: all 4 for the worker. IP: 192.168.1.104
This is the code:
public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("spark-project");
    JavaSparkContext sc = new JavaSparkContext(conf);

    // Give the executors a moment to register before the first job is submitted (see note below).
    try {
        Thread.sleep(5000);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }

    // Read the input file into 7 partitions.
    JavaRDD<String> lines = sc.textFile("/Datasets/somefile.txt", 7);
    System.out.println(lines.partitions().size());

    // Count the lines with an accumulator.
    Accumulator<Integer> sum = sc.accumulator(0);
    JavaRDD<Integer> numbers = lines.map(line -> 1);
    System.out.println(numbers.partitions().size());

    numbers.foreach(num -> System.out.println(num));
    numbers.foreach(num -> sum.add(num));
    System.out.println(sum.value());

    sc.close();
}
Note: I added the Thread.sleep() call because I tried the suggestion from this issue: https://issues.apache.org/jira/browse/SPARK-3100
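If the intent of the sleep is just to give the executors time to register before the first job is submitted, Spark also has scheduler settings for that. A sketch (untested here, and an assumption about the intent of the sleep):

SparkConf conf = new SparkConf()
        .setAppName("spark-project")
        // Wait until all of the requested executor resources have registered...
        .set("spark.scheduler.minRegisteredResourcesRatio", "1.0")
        // ...or until this much time has passed, before scheduling the first tasks.
        .set("spark.scheduler.maxRegisteredResourcesWaitingTime", "60s");
JavaSparkContext sc = new JavaSparkContext(conf);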
I used the submit script:
bin/spark-submit --class spark.Main --master spark://192.168.1.101:7077 --deploy-mode cluster /home/sparkUser/JarsOfSpark/JarForSpark.jar
This is the result I got from the driver's stdout:
7
7
50144
logs from the master:
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/01/15 19:22:14 INFO SecurityManager: Changing view acls to: sparkUser
16/01/15 19:22:14 INFO SecurityManager: Changing modify acls to: sparkUser
16/01/15 19:22:14 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(sparkUser); users with modify permissions: Set(sparkUser)
16/01/15 19:22:24 INFO Slf4jLogger: Slf4jLogger started
16/01/15 19:22:24 INFO Utils: Successfully started service 'Driver' on port 46546.
16/01/15 19:22:24 INFO WorkerWatcher: Connecting to worker akka.tcp://sparkWorker#192.168.1.101:43150/user/Worker
16/01/15 19:22:24 INFO SparkContext: Running Spark version 1.4.1
16/01/15 19:22:24 INFO SecurityManager: Changing view acls to: sparkUser
16/01/15 19:22:24 INFO SecurityManager: Changing modify acls to: sparkUser
16/01/15 19:22:24 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(sparkUser); users with modify permissions: Set(sparkUser)
16/01/15 19:22:24 INFO WorkerWatcher: Successfully connected to akka.tcp://sparkWorker#192.168.1.101:43150/user/Worker
16/01/15 19:22:25 INFO Slf4jLogger: Slf4jLogger started
16/01/15 19:22:25 INFO Utils: Successfully started service 'sparkDriver' on port 38186.
16/01/15 19:22:25 INFO SparkEnv: Registering MapOutputTracker
16/01/15 19:22:25 INFO SparkEnv: Registering BlockManagerMaster
16/01/15 19:22:25 INFO DiskBlockManager: Created local directory at /tmp/spark-ef3b8193-e086-4764-993c-0a40534052c1/blockmgr-e80c1c60-fe19-4be1-b3f9-259b3f1031a0
16/01/15 19:22:25 INFO MemoryStore: MemoryStore started with capacity 265.1 MB
16/01/15 19:22:25 INFO HttpFileServer: HTTP File server directory is /tmp/spark-ef3b8193-e086-4764-993c-0a40534052c1/httpd-e05a5a70-dbf3-4055-b6ab-7efa22dfa4d2
16/01/15 19:22:25 INFO HttpServer: Starting HTTP Server
16/01/15 19:22:25 INFO Utils: Successfully started service 'HTTP file server' on port 34728.
16/01/15 19:22:25 INFO SparkEnv: Registering OutputCommitCoordinator
16/01/15 19:22:35 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/01/15 19:22:35 INFO SparkUI: Started SparkUI at http://192.168.1.101:4040
16/01/15 19:22:35 INFO SparkContext: Added JAR file:/home/sparkUser/JarsOfSpark/JarForSpark.jar at http://192.168.1.101:34728/jars/JarForSpark.jar with timestamp 1452878555317
16/01/15 19:22:35 INFO AppClient$ClientActor: Connecting to master akka.tcp://sparkMaster#192.168.1.101:7077/user/Master...
16/01/15 19:22:35 INFO SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20160115192235-0016
16/01/15 19:22:35 INFO AppClient$ClientActor: Executor added: app-20160115192235-0016/0 on worker-20160115181337-192.168.1.104-50099 (192.168.1.104:50099) with 4 cores
16/01/15 19:22:35 INFO SparkDeploySchedulerBackend: Granted executor ID app-20160115192235-0016/0 on hostPort 192.168.1.104:50099 with 4 cores, 512.0 MB RAM
16/01/15 19:22:35 INFO AppClient$ClientActor: Executor added: app-20160115192235-0016/1 on worker-20160115125104-192.168.1.101-43150 (192.168.1.101:43150) with 3 cores
16/01/15 19:22:35 INFO SparkDeploySchedulerBackend: Granted executor ID app-20160115192235-0016/1 on hostPort 192.168.1.101:43150 with 3 cores, 512.0 MB RAM
16/01/15 19:22:35 INFO AppClient$ClientActor: Executor updated: app-20160115192235-0016/1 is now LOADING
16/01/15 19:22:35 INFO AppClient$ClientActor: Executor updated: app-20160115192235-0016/0 is now LOADING
16/01/15 19:22:35 INFO AppClient$ClientActor: Executor updated: app-20160115192235-0016/0 is now RUNNING
16/01/15 19:22:35 INFO AppClient$ClientActor: Executor updated: app-20160115192235-0016/1 is now RUNNING
16/01/15 19:22:35 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 33359.
16/01/15 19:22:35 INFO NettyBlockTransferService: Server created on 33359
16/01/15 19:22:35 INFO BlockManagerMaster: Trying to register BlockManager
16/01/15 19:22:35 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.101:33359 with 265.1 MB RAM, BlockManagerId(driver, 192.168.1.101, 33359)
16/01/15 19:22:35 INFO BlockManagerMaster: Registered BlockManager
16/01/15 19:22:35 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
16/01/15 19:22:38 INFO SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor#192.168.1.104:49573/user/Executor#1472403765]) with ID 0
16/01/15 19:22:39 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.104:33856 with 265.1 MB RAM, BlockManagerId(0, 192.168.1.104, 33856)
16/01/15 19:22:40 INFO MemoryStore: ensureFreeSpace(130448) called with curMem=0, maxMem=278019440
16/01/15 19:22:40 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 127.4 KB, free 265.0 MB)
16/01/15 19:22:40 INFO MemoryStore: ensureFreeSpace(14257) called with curMem=130448, maxMem=278019440
16/01/15 19:22:40 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 13.9 KB, free 265.0 MB)
16/01/15 19:22:40 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.1.101:33359 (size: 13.9 KB, free: 265.1 MB)
16/01/15 19:22:40 INFO SparkContext: Created broadcast 0 from textFile at Main.java:25
16/01/15 19:22:41 INFO FileInputFormat: Total input paths to process : 1
16/01/15 19:22:41 INFO SparkContext: Starting job: foreach at Main.java:33
16/01/15 19:22:41 INFO DAGScheduler: Got job 0 (foreach at Main.java:33) with 7 output partitions (allowLocal=false)
16/01/15 19:22:41 INFO DAGScheduler: Final stage: ResultStage 0(foreach at Main.java:33)
16/01/15 19:22:41 INFO DAGScheduler: Parents of final stage: List()
16/01/15 19:22:41 INFO DAGScheduler: Missing parents: List()
16/01/15 19:22:41 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[2] at map at Main.java:30), which has no missing parents
16/01/15 19:22:41 INFO MemoryStore: ensureFreeSpace(4400) called with curMem=144705, maxMem=278019440
16/01/15 19:22:41 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 4.3 KB, free 265.0 MB)
16/01/15 19:22:41 INFO MemoryStore: ensureFreeSpace(2538) called with curMem=149105, maxMem=278019440
16/01/15 19:22:41 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.5 KB, free 265.0 MB)
16/01/15 19:22:41 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.1.101:33359 (size: 2.5 KB, free: 265.1 MB)
16/01/15 19:22:41 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:874
16/01/15 19:22:41 INFO DAGScheduler: Submitting 7 missing tasks from ResultStage 0 (MapPartitionsRDD[2] at map at Main.java:30)
16/01/15 19:22:41 INFO TaskSchedulerImpl: Adding task set 0.0 with 7 tasks
16/01/15 19:22:41 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 192.168.1.104, PROCESS_LOCAL, 1495 bytes)
16/01/15 19:22:41 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, 192.168.1.104, PROCESS_LOCAL, 1495 bytes)
16/01/15 19:22:41 INFO TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, 192.168.1.104, PROCESS_LOCAL, 1495 bytes)
16/01/15 19:22:41 INFO TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, 192.168.1.104, PROCESS_LOCAL, 1495 bytes)
16/01/15 19:22:41 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.1.104:33856 (size: 2.5 KB, free: 265.1 MB)
16/01/15 19:22:42 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.1.104:33856 (size: 13.9 KB, free: 265.1 MB)
16/01/15 19:22:43 INFO TaskSetManager: Starting task 4.0 in stage 0.0 (TID 4, 192.168.1.104, PROCESS_LOCAL, 1495 bytes)
16/01/15 19:22:43 INFO TaskSetManager: Starting task 5.0 in stage 0.0 (TID 5, 192.168.1.104, PROCESS_LOCAL, 1495 bytes)
16/01/15 19:22:43 INFO TaskSetManager: Starting task 6.0 in stage 0.0 (TID 6, 192.168.1.104, PROCESS_LOCAL, 1495 bytes)
16/01/15 19:22:43 INFO TaskSetManager: Finished task 3.0 in stage 0.0 (TID 3) in 2017 ms on 192.168.1.104 (1/7)
16/01/15 19:22:43 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 2036 ms on 192.168.1.104 (2/7)
16/01/15 19:22:43 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 2027 ms on 192.168.1.104 (3/7)
16/01/15 19:22:43 INFO TaskSetManager: Finished task 2.0 in stage 0.0 (TID 2) in 2027 ms on 192.168.1.104 (4/7)
16/01/15 19:22:43 INFO TaskSetManager: Finished task 4.0 in stage 0.0 (TID 4) in 143 ms on 192.168.1.104 (5/7)
16/01/15 19:22:43 INFO TaskSetManager: Finished task 5.0 in stage 0.0 (TID 5) in 199 ms on 192.168.1.104 (6/7)
16/01/15 19:22:43 INFO TaskSetManager: Finished task 6.0 in stage 0.0 (TID 6) in 206 ms on 192.168.1.104 (7/7)
16/01/15 19:22:43 INFO DAGScheduler: ResultStage 0 (foreach at Main.java:33) finished in 2.218 s
16/01/15 19:22:43 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
16/01/15 19:22:43 INFO DAGScheduler: Job 0 finished: foreach at Main.java:33, took 2.289399 s
16/01/15 19:22:43 INFO SparkContext: Starting job: foreach at Main.java:34
16/01/15 19:22:43 INFO DAGScheduler: Got job 1 (foreach at Main.java:34) with 7 output partitions (allowLocal=false)
16/01/15 19:22:43 INFO DAGScheduler: Final stage: ResultStage 1(foreach at Main.java:34)
16/01/15 19:22:43 INFO DAGScheduler: Parents of final stage: List()
16/01/15 19:22:43 INFO DAGScheduler: Missing parents: List()
16/01/15 19:22:43 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[2] at map at Main.java:30), which has no missing parents
16/01/15 19:22:43 INFO MemoryStore: ensureFreeSpace(4824) called with curMem=151643, maxMem=278019440
16/01/15 19:22:43 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 4.7 KB, free 265.0 MB)
16/01/15 19:22:43 INFO MemoryStore: ensureFreeSpace(2761) called with curMem=156467, maxMem=278019440
16/01/15 19:22:43 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 2.7 KB, free 265.0 MB)
16/01/15 19:22:43 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 192.168.1.101:33359 (size: 2.7 KB, free: 265.1 MB)
16/01/15 19:22:43 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:874
16/01/15 19:22:43 INFO DAGScheduler: Submitting 7 missing tasks from ResultStage 1 (MapPartitionsRDD[2] at map at Main.java:30)
16/01/15 19:22:43 INFO TaskSchedulerImpl: Adding task set 1.0 with 7 tasks
16/01/15 19:22:43 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 7, 192.168.1.104, PROCESS_LOCAL, 1495 bytes)
16/01/15 19:22:43 INFO TaskSetManager: Starting task 1.0 in stage 1.0 (TID 8, 192.168.1.104, PROCESS_LOCAL, 1495 bytes)
16/01/15 19:22:43 INFO TaskSetManager: Starting task 2.0 in stage 1.0 (TID 9, 192.168.1.104, PROCESS_LOCAL, 1495 bytes)
16/01/15 19:22:43 INFO TaskSetManager: Starting task 3.0 in stage 1.0 (TID 10, 192.168.1.104, PROCESS_LOCAL, 1495 bytes)
16/01/15 19:22:43 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 192.168.1.104:33856 (size: 2.7 KB, free: 265.1 MB)
16/01/15 19:22:43 INFO TaskSetManager: Starting task 4.0 in stage 1.0 (TID 11, 192.168.1.104, PROCESS_LOCAL, 1495 bytes)
16/01/15 19:22:43 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 7) in 106 ms on 192.168.1.104 (1/7)
16/01/15 19:22:43 INFO TaskSetManager: Finished task 1.0 in stage 1.0 (TID 8) in 125 ms on 192.168.1.104 (2/7)
16/01/15 19:22:43 INFO TaskSetManager: Starting task 5.0 in stage 1.0 (TID 12, 192.168.1.104, PROCESS_LOCAL, 1495 bytes)
16/01/15 19:22:43 INFO TaskSetManager: Starting task 6.0 in stage 1.0 (TID 13, 192.168.1.104, PROCESS_LOCAL, 1495 bytes)
16/01/15 19:22:43 INFO TaskSetManager: Finished task 2.0 in stage 1.0 (TID 9) in 131 ms on 192.168.1.104 (3/7)
16/01/15 19:22:43 INFO TaskSetManager: Finished task 3.0 in stage 1.0 (TID 10) in 133 ms on 192.168.1.104 (4/7)
16/01/15 19:22:43 INFO TaskSetManager: Finished task 5.0 in stage 1.0 (TID 12) in 32 ms on 192.168.1.104 (5/7)
16/01/15 19:22:43 INFO TaskSetManager: Finished task 4.0 in stage 1.0 (TID 11) in 61 ms on 192.168.1.104 (6/7)
16/01/15 19:22:43 INFO TaskSetManager: Finished task 6.0 in stage 1.0 (TID 13) in 34 ms on 192.168.1.104 (7/7)
16/01/15 19:22:43 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
16/01/15 19:22:43 INFO DAGScheduler: ResultStage 1 (foreach at Main.java:34) finished in 0.165 s
16/01/15 19:22:43 INFO DAGScheduler: Job 1 finished: foreach at Main.java:34, took 0.177378 s
16/01/15 19:22:43 INFO SparkUI: Stopped Spark web UI at http://192.168.1.101:4040
16/01/15 19:22:43 INFO DAGScheduler: Stopping DAGScheduler
16/01/15 19:22:43 INFO SparkDeploySchedulerBackend: Shutting down all executors
16/01/15 19:22:43 INFO SparkDeploySchedulerBackend: Asking each executor to shut down
16/01/15 19:22:43 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/01/15 19:22:43 INFO Utils: path = /tmp/spark-ef3b8193-e086-4764-993c-0a40534052c1/blockmgr-e80c1c60-fe19-4be1-b3f9-259b3f1031a0, already present as root for deletion.
16/01/15 19:22:43 INFO MemoryStore: MemoryStore cleared
16/01/15 19:22:43 INFO BlockManager: BlockManager stopped
16/01/15 19:22:43 INFO BlockManagerMaster: BlockManagerMaster stopped
16/01/15 19:22:43 INFO SparkContext: Successfully stopped SparkContext
16/01/15 19:22:43 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/01/15 19:22:43 INFO Utils: Shutdown hook called
16/01/15 19:22:43 INFO Utils: Deleting directory /tmp/spark-ef3b8193-e086-4764-993c-0a40534052c1
logs from worker 192.168.1.101:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/01/15 18:14:15 INFO CoarseGrainedExecutorBackend: Registered signal handlers for [TERM, HUP, INT]
16/01/15 18:14:15 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/01/15 18:14:15 INFO SecurityManager: Changing view acls to: sparkUser
16/01/15 18:14:15 INFO SecurityManager: Changing modify acls to: sparkUser
16/01/15 18:14:15 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(sparkUser); users with modify permissions: Set(sparkUser)
logs from worker 192.168.1.104:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/01/15 19:23:23 INFO CoarseGrainedExecutorBackend: Registered signal handlers for [TERM, HUP, INT]
16/01/15 19:23:24 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/01/15 19:23:24 INFO SecurityManager: Changing view acls to: root,sparkUser
16/01/15 19:23:24 INFO SecurityManager: Changing modify acls to: root,sparkUser
16/01/15 19:23:24 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root, sparkUser); users with modify permissions: Set(root, sparkUser)
16/01/15 19:23:25 INFO Slf4jLogger: Slf4jLogger started
16/01/15 19:23:25 INFO Remoting: Starting remoting
16/01/15 19:23:25 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://driverPropsFetcher#192.168.1.104:43937]
16/01/15 19:23:25 INFO Utils: Successfully started service 'driverPropsFetcher' on port 43937.
16/01/15 19:23:26 INFO SecurityManager: Changing view acls to: root,sparkUser
16/01/15 19:23:26 INFO SecurityManager: Changing modify acls to: root,sparkUser
16/01/15 19:23:26 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root, sparkUser); users with modify permissions: Set(root, sparkUser)
16/01/15 19:23:26 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/01/15 19:23:26 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/01/15 19:23:26 INFO Slf4jLogger: Slf4jLogger started
16/01/15 19:23:26 INFO Remoting: Starting remoting
16/01/15 19:23:26 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkExecutor#192.168.1.104:49573]
16/01/15 19:23:26 INFO Utils: Successfully started service 'sparkExecutor' on port 49573.
16/01/15 19:23:26 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
16/01/15 19:23:26 INFO DiskBlockManager: Created local directory at /tmp/spark-6ffb215c-7267-4a93-a766-2486d2331f6b/executor-146bfe64-d7e8-4da4-9144-8003754f0b5b/blockmgr-41031d8c-b069-4147-90c9-2237baed04f1
16/01/15 19:23:26 INFO MemoryStore: MemoryStore started with capacity 265.1 MB
16/01/15 19:23:26 INFO CoarseGrainedExecutorBackend: Connecting to driver: akka.tcp://sparkDriver#192.168.1.101:38186/user/CoarseGrainedScheduler
16/01/15 19:23:26 INFO WorkerWatcher: Connecting to worker akka.tcp://sparkWorker#192.168.1.104:50099/user/Worker
16/01/15 19:23:26 INFO WorkerWatcher: Successfully connected to akka.tcp://sparkWorker#192.168.1.104:50099/user/Worker
16/01/15 19:23:26 INFO CoarseGrainedExecutorBackend: Successfully registered with driver
16/01/15 19:23:26 INFO Executor: Starting executor ID 0 on host 192.168.1.104
16/01/15 19:23:26 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 33856.
16/01/15 19:23:26 INFO NettyBlockTransferService: Server created on 33856
16/01/15 19:23:26 INFO BlockManagerMaster: Trying to register BlockManager
16/01/15 19:23:26 INFO BlockManagerMaster: Registered BlockManager
16/01/15 19:23:29 INFO CoarseGrainedExecutorBackend: Got assigned task 0
16/01/15 19:23:29 INFO CoarseGrainedExecutorBackend: Got assigned task 1
16/01/15 19:23:29 INFO CoarseGrainedExecutorBackend: Got assigned task 2
16/01/15 19:23:29 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
16/01/15 19:23:29 INFO Executor: Running task 2.0 in stage 0.0 (TID 2)
16/01/15 19:23:29 INFO CoarseGrainedExecutorBackend: Got assigned task 3
16/01/15 19:23:29 INFO Executor: Running task 1.0 in stage 0.0 (TID 1)
16/01/15 19:23:29 INFO Executor: Running task 3.0 in stage 0.0 (TID 3)
16/01/15 19:23:29 INFO Executor: Fetching http://192.168.1.101:34728/jars/JarForSpark.jar with timestamp 1452878555317
16/01/15 19:23:29 INFO Utils: Fetching http://192.168.1.101:34728/jars/JarForSpark.jar to /tmp/spark-6ffb215c-7267-4a93-a766-2486d2331f6b/executor-146bfe64-d7e8-4da4-9144-8003754f0b5b/fetchFileTemp1585609242243689070.tmp
16/01/15 19:23:29 INFO Utils: Copying /tmp/spark-6ffb215c-7267-4a93-a766-2486d2331f6b/executor-146bfe64-d7e8-4da4-9144-8003754f0b5b/3339800781452878555317_cache to /home/sparkUser2/Programs/spark-1.4.1-bin-hadoop2.6/work/app-20160115192235-0016/0/./JarForSpark.jar
16/01/15 19:23:29 INFO Executor: Adding file:/home/sparkUser2/Programs/spark-1.4.1-bin-hadoop2.6/work/app-20160115192235-0016/0/./JarForSpark.jar to class loader
16/01/15 19:23:29 INFO TorrentBroadcast: Started reading broadcast variable 1
16/01/15 19:23:29 INFO MemoryStore: ensureFreeSpace(2538) called with curMem=0, maxMem=278019440
16/01/15 19:23:29 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.5 KB, free 265.1 MB)
16/01/15 19:23:29 INFO TorrentBroadcast: Reading broadcast variable 1 took 273 ms
16/01/15 19:23:29 INFO MemoryStore: ensureFreeSpace(4400) called with curMem=2538, maxMem=278019440
16/01/15 19:23:29 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 4.3 KB, free 265.1 MB)
16/01/15 19:23:29 INFO HadoopRDD: Input split: file:/Datasets/somefile.txt:161772+161772
16/01/15 19:23:29 INFO HadoopRDD: Input split: file:/Datasets/somefile.txt:323544+161772
16/01/15 19:23:29 INFO HadoopRDD: Input split: file:/Datasets/somefile.txt:0+161772
16/01/15 19:23:29 INFO HadoopRDD: Input split: file:/Datasets/somefile.txt:485316+161772
16/01/15 19:23:29 INFO TorrentBroadcast: Started reading broadcast variable 0
16/01/15 19:23:29 INFO MemoryStore: ensureFreeSpace(14257) called with curMem=6938, maxMem=278019440
16/01/15 19:23:29 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 13.9 KB, free 265.1 MB)
16/01/15 19:23:30 INFO TorrentBroadcast: Reading broadcast variable 0 took 66 ms
16/01/15 19:23:30 INFO MemoryStore: ensureFreeSpace(188976) called with curMem=21195, maxMem=278019440
16/01/15 19:23:30 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 184.5 KB, free 264.9 MB)
16/01/15 19:23:30 INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
16/01/15 19:23:30 INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
16/01/15 19:23:30 INFO deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
16/01/15 19:23:30 INFO deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
16/01/15 19:23:30 INFO deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
16/01/15 19:23:30 INFO Executor: Finished task 3.0 in stage 0.0 (TID 3). 1796 bytes result sent to driver
16/01/15 19:23:30 INFO Executor: Finished task 2.0 in stage 0.0 (TID 2). 1796 bytes result sent to driver
16/01/15 19:23:30 INFO Executor: Finished task 1.0 in stage 0.0 (TID 1). 1796 bytes result sent to driver
16/01/15 19:23:30 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1796 bytes result sent to driver
16/01/15 19:23:31 INFO CoarseGrainedExecutorBackend: Got assigned task 4
16/01/15 19:23:31 INFO Executor: Running task 4.0 in stage 0.0 (TID 4)
16/01/15 19:23:31 INFO HadoopRDD: Input split: file:/Datasets/somefile.txt:647088+161772
16/01/15 19:23:31 INFO CoarseGrainedExecutorBackend: Got assigned task 5
16/01/15 19:23:31 INFO Executor: Running task 5.0 in stage 0.0 (TID 5)
16/01/15 19:23:31 INFO CoarseGrainedExecutorBackend: Got assigned task 6
16/01/15 19:23:31 INFO HadoopRDD: Input split: file:/Datasets/somefile.txt:808860+161772
16/01/15 19:23:31 INFO Executor: Running task 6.0 in stage 0.0 (TID 6)
16/01/15 19:23:31 INFO HadoopRDD: Input split: file:/Datasets/somefile.txt:970632+161773
16/01/15 19:23:31 INFO Executor: Finished task 4.0 in stage 0.0 (TID 4). 1796 bytes result sent to driver
16/01/15 19:23:31 INFO Executor: Finished task 5.0 in stage 0.0 (TID 5). 1796 bytes result sent to driver
16/01/15 19:23:31 INFO Executor: Finished task 6.0 in stage 0.0 (TID 6). 1796 bytes result sent to driver
16/01/15 19:23:31 INFO CoarseGrainedExecutorBackend: Got assigned task 7
16/01/15 19:23:31 INFO Executor: Running task 0.0 in stage 1.0 (TID 7)
16/01/15 19:23:31 INFO TorrentBroadcast: Started reading broadcast variable 2
16/01/15 19:23:31 INFO CoarseGrainedExecutorBackend: Got assigned task 8
16/01/15 19:23:31 INFO Executor: Running task 1.0 in stage 1.0 (TID 8)
16/01/15 19:23:31 INFO CoarseGrainedExecutorBackend: Got assigned task 9
16/01/15 19:23:31 INFO Executor: Running task 2.0 in stage 1.0 (TID 9)
16/01/15 19:23:31 INFO CoarseGrainedExecutorBackend: Got assigned task 10
16/01/15 19:23:31 INFO Executor: Running task 3.0 in stage 1.0 (TID 10)
16/01/15 19:23:31 INFO MemoryStore: ensureFreeSpace(2761) called with curMem=210171, maxMem=278019440
16/01/15 19:23:31 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 2.7 KB, free 264.9 MB)
16/01/15 19:23:31 INFO TorrentBroadcast: Reading broadcast variable 2 took 42 ms
16/01/15 19:23:31 INFO MemoryStore: ensureFreeSpace(4824) called with curMem=212932, maxMem=278019440
16/01/15 19:23:31 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 4.7 KB, free 264.9 MB)
16/01/15 19:23:31 INFO HadoopRDD: Input split: file:/Datasets/somefile.txt:0+161772
16/01/15 19:23:31 INFO HadoopRDD: Input split: file:/Datasets/somefile.txt:161772+161772
16/01/15 19:23:31 INFO HadoopRDD: Input split: file:/Datasets/somefile.txt:323544+161772
16/01/15 19:23:31 INFO HadoopRDD: Input split: file:/Datasets/somefile.txt:485316+161772
16/01/15 19:23:31 INFO Executor: Finished task 0.0 in stage 1.0 (TID 7). 1814 bytes result sent to driver
16/01/15 19:23:31 INFO Executor: Finished task 1.0 in stage 1.0 (TID 8). 1814 bytes result sent to driver
16/01/15 19:23:31 INFO Executor: Finished task 2.0 in stage 1.0 (TID 9). 1814 bytes result sent to driver
16/01/15 19:23:31 INFO Executor: Finished task 3.0 in stage 1.0 (TID 10). 1814 bytes result sent to driver
16/01/15 19:23:31 INFO CoarseGrainedExecutorBackend: Got assigned task 11
16/01/15 19:23:31 INFO Executor: Running task 4.0 in stage 1.0 (TID 11)
16/01/15 19:23:31 INFO HadoopRDD: Input split: file:/Datasets/somefile.txt:647088+161772
16/01/15 19:23:31 INFO CoarseGrainedExecutorBackend: Got assigned task 12
16/01/15 19:23:31 INFO Executor: Running task 5.0 in stage 1.0 (TID 12)
16/01/15 19:23:31 INFO CoarseGrainedExecutorBackend: Got assigned task 13
16/01/15 19:23:31 INFO Executor: Running task 6.0 in stage 1.0 (TID 13)
16/01/15 19:23:31 INFO HadoopRDD: Input split: file:/Datasets/somefile.txt:808860+161772
16/01/15 19:23:31 INFO HadoopRDD: Input split: file:/Datasets/somefile.txt:970632+161773
16/01/15 19:23:31 INFO Executor: Finished task 5.0 in stage 1.0 (TID 12). 1814 bytes result sent to driver
16/01/15 19:23:31 INFO Executor: Finished task 4.0 in stage 1.0 (TID 11). 1814 bytes result sent to driver
16/01/15 19:23:31 INFO Executor: Finished task 6.0 in stage 1.0 (TID 13). 1814 bytes result sent to driver
16/01/15 19:23:31 INFO CoarseGrainedExecutorBackend: Driver commanded a shutdown
I also tried stopping one of the workers to see what happens, and the program completed successfully on the other worker.
I also looked at this post, but unfortunately it didn't solve my problem:
Why my tasks only be done in one worker in Spark cluster
Appreciate your help!

It is because of data locality - "how close data is to the code processing it".
Spark tries to schedule each task at the best available locality level.
By default Spark tries "PROCESS_LOCAL" first and only falls back to lower levels (NODE_LOCAL, RACK_LOCAL, ANY) if none of the CPUs at the preferred level free up within a certain time interval.
The default wait time before falling back to a lower level is 3s (see the spark.locality.wait parameter).
Looking at the logs, all your tasks finish within about 3 seconds, so the wait never expires and the remaining tasks keep going to the same executor.
16/01/15 19:22:41 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 192.168.1.104, PROCESS_LOCAL, 1495 bytes)
16/01/15 19:22:41 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, 192.168.1.104, PROCESS_LOCAL, 1495 bytes)
16/01/15 19:22:41 INFO TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, 192.168.1.104, PROCESS_LOCAL, 1495 bytes)
16/01/15 19:22:41 INFO TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, 192.168.1.104, PROCESS_LOCAL, 1495 bytes)
16/01/15 19:22:41 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.1.104:33856 (size: 2.5 KB, free: 265.1 MB)
16/01/15 19:22:42 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.1.104:33856 (size: 13.9 KB, free: 265.1 MB)
16/01/15 19:22:43 INFO TaskSetManager: Starting task 4.0 in stage 0.0 (TID 4, 192.168.1.104, PROCESS_LOCAL, 1495 bytes)
16/01/15 19:22:43 INFO TaskSetManager: Starting task 5.0 in stage 0.0 (TID 5, 192.168.1.104, PROCESS_LOCAL, 1495 bytes)
16/01/15 19:22:43 INFO TaskSetManager: Starting task 6.0 in stage 0.0 (TID 6, 192.168.1.104, PROCESS_LOCAL, 1495 bytes)
16/01/15 19:22:43 INFO TaskSetManager: Finished task 3.0 in stage 0.0 (TID 3) in 2017 ms on 192.168.1.104 (1/7)
16/01/15 19:22:43 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 2036 ms on 192.168.1.104 (2/7)
16/01/15 19:22:43 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 2027 ms on 192.168.1.104 (3/7)
16/01/15 19:22:43 INFO TaskSetManager: Finished task 2.0 in stage 0.0 (TID 2) in 2027 ms on 192.168.1.104 (4/7)
16/01/15 19:22:43 INFO TaskSetManager: Finished task 4.0 in stage 0.0 (TID 4) in 143 ms on 192.168.1.104 (5/7)
16/01/15 19:22:43 INFO TaskSetManager: Finished task 5.0 in stage 0.0 (TID 5) in 199 ms on 192.168.1.104 (6/7)
16/01/15 19:22:43 INFO TaskSetManager: Finished task 6.0 in stage 0.0 (TID 6) in 206 ms on 192.168.1.104 (7/7)
16/01/15 19:22:43 INFO DAGScheduler: ResultStage 0 (foreach at Main.java:33) finished in 2.218 s
16/01/15 19:22:43 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
16/01/15 19:22:43 INFO DAGScheduler: Job 0 finished: foreach at Main.java:33, took 2.289399 s
I would suggest trying with larger files (in the GBs), where each task takes some time to produce its result, so the work gets spread across both workers.
For more information on data locality, please read the "Data Locality" section of the Spark tuning guide.
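If you want to see the tasks spread out even on a small input, one option (a sketch, not from the original answer) is to shorten or disable the locality wait so the scheduler stops waiting for a slot at the preferred locality level and hands tasks to the other executor right away:

SparkConf conf = new SparkConf()
        .setAppName("spark-project")
        // How long the scheduler waits for a free slot at the preferred locality
        // level before falling back to a lower level; "0s" disables the wait.
        .set("spark.locality.wait", "0s");
JavaSparkContext sc = new JavaSparkContext(conf);

The same setting can also be passed to spark-submit with --conf spark.locality.wait=0s.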

Related

"Application attempt...doesn't exist in ApplicationMasterService cacheā€ cause? (Pregel: maxIterations impact on cluster for non-convergent algorithm)

I've tried to run my own Pregel method on a relatively small graph (250k vertices, 1.5M edges). The algorithm I use is most likely non-convergent, meaning that in most cases the maxIterations setting acts as a hard stop that ends the computation.
I'm using AWS EMR with Apache Spark and m5.2xlarge instances for all nodes, with EMR-managed scaling. Initially the cluster runs 1 master and 4 worker nodes, with expansion up to 8 workers max.
With the same cluster setup, I increased maxIterations gradually from 100 to 500 in steps of 100 [100, 200, 300, 400, 500]. I assumed that a setup sufficient for 100 iterations would also be sufficient for any other number, because unused memory would be freed up along the way.
However, when I ran the set of jobs with maxIterations increasing from 100 to 500, all jobs with maxIterations > 100 were terminated due to a step error. I checked the Spark logs for issues and this is what I got:
log start
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/mnt1/yarn/usercache/hadoop/filecache/10/__spark_libs__364046395941885636.zip/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
21/02/13 21:23:24 INFO SignalUtils: Registered signal handler for TERM
21/02/13 21:23:24 INFO SignalUtils: Registered signal handler for HUP
21/02/13 21:23:24 INFO SignalUtils: Registered signal handler for INT
21/02/13 21:23:24 INFO SecurityManager: Changing view acls to: yarn,hadoop
21/02/13 21:23:24 INFO SecurityManager: Changing modify acls to: yarn,hadoop
21/02/13 21:23:24 INFO SecurityManager: Changing view acls groups to:
21/02/13 21:23:24 INFO SecurityManager: Changing modify acls groups to:
21/02/13 21:23:24 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hadoop); groups with view permissions: Set(); users with modify permissions: Set(yarn, hadoop); groups with modify permissions: Set()
21/02/13 21:23:24 WARN Configuration: __spark_hadoop_conf__.xml:an attempt to override final parameter: fs.s3.buffer.dir; Ignoring.
21/02/13 21:23:24 WARN Configuration: __spark_hadoop_conf__.xml:an attempt to override final parameter: fs.s3.buffer.dir; Ignoring.
21/02/13 21:23:24 WARN Configuration: __spark_hadoop_conf__.xml:an attempt to override final parameter: yarn.nodemanager.local-dirs; Ignoring.
21/02/13 21:23:24 WARN Configuration: __spark_hadoop_conf__.xml:an attempt to override final parameter: fs.s3.buffer.dir; Ignoring.
21/02/13 21:23:24 WARN Configuration: __spark_hadoop_conf__.xml:an attempt to override final parameter: yarn.nodemanager.local-dirs; Ignoring.
21/02/13 21:23:24 WARN Configuration: __spark_hadoop_conf__.xml:an attempt to override final parameter: fs.s3.buffer.dir; Ignoring.
21/02/13 21:23:24 WARN Configuration: __spark_hadoop_conf__.xml:an attempt to override final parameter: yarn.nodemanager.local-dirs; Ignoring.
21/02/13 21:23:24 INFO ApplicationMaster: Preparing Local resources
21/02/13 21:23:25 WARN Configuration: __spark_hadoop_conf__.xml:an attempt to override final parameter: fs.s3.buffer.dir; Ignoring.
21/02/13 21:23:25 WARN Configuration: __spark_hadoop_conf__.xml:an attempt to override final parameter: yarn.nodemanager.local-dirs; Ignoring.
21/02/13 21:23:25 INFO ApplicationMaster: ApplicationAttemptId: appattempt_1613251201422_0001_000001
21/02/13 21:23:25 INFO ApplicationMaster: Starting the user application in a separate Thread
21/02/13 21:23:25 INFO ApplicationMaster: Waiting for spark context initialization...
21/02/13 21:23:25 INFO SparkContext: Running Spark version 2.4.7-amzn-0
21/02/13 21:23:25 INFO SparkContext: Submitted application: Read JDBC Datasites2
21/02/13 21:23:25 INFO SecurityManager: Changing view acls to: yarn,hadoop
21/02/13 21:23:25 INFO SecurityManager: Changing modify acls to: yarn,hadoop
21/02/13 21:23:25 INFO SecurityManager: Changing view acls groups to:
21/02/13 21:23:25 INFO SecurityManager: Changing modify acls groups to:
21/02/13 21:23:25 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hadoop); groups with view permissions: Set(); users with modify permissions: Set(yarn, hadoop); groups with modify permissions: Set()
21/02/13 21:23:25 WARN Configuration: __spark_hadoop_conf__.xml:an attempt to override final parameter: fs.s3.buffer.dir; Ignoring.
21/02/13 21:23:25 WARN Configuration: __spark_hadoop_conf__.xml:an attempt to override final parameter: yarn.nodemanager.local-dirs; Ignoring.
21/02/13 21:23:25 INFO Utils: Successfully started service 'sparkDriver' on port 41117.
21/02/13 21:23:25 INFO SparkEnv: Registering MapOutputTracker
21/02/13 21:23:25 INFO SparkEnv: Registering BlockManagerMaster
21/02/13 21:23:25 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
21/02/13 21:23:25 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
21/02/13 21:23:25 INFO DiskBlockManager: Created local directory at /mnt/yarn/usercache/hadoop/appcache/application_1613251201422_0001/blockmgr-bc544c91-1a59-41f3-890f-faaa392bea09
21/02/13 21:23:25 INFO DiskBlockManager: Created local directory at /mnt1/yarn/usercache/hadoop/appcache/application_1613251201422_0001/blockmgr-14e3f36f-6d3f-4ffe-a28c-fa3f81f0c5c9
21/02/13 21:23:26 INFO MemoryStore: MemoryStore started with capacity 1008.9 MB
21/02/13 21:23:26 WARN Configuration: __spark_hadoop_conf__.xml:an attempt to override final parameter: fs.s3.buffer.dir; Ignoring.
21/02/13 21:23:26 WARN Configuration: __spark_hadoop_conf__.xml:an attempt to override final parameter: yarn.nodemanager.local-dirs; Ignoring.
21/02/13 21:23:26 WARN Configuration: __spark_hadoop_conf__.xml:an attempt to override final parameter: fs.s3.buffer.dir; Ignoring.
21/02/13 21:23:26 WARN Configuration: __spark_hadoop_conf__.xml:an attempt to override final parameter: yarn.nodemanager.local-dirs; Ignoring.
21/02/13 21:23:26 INFO SparkEnv: Registering OutputCommitCoordinator
21/02/13 21:23:26 INFO JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /jobs, /jobs/json, /jobs/job, /jobs/job/json, /stages, /stages/json, /stages/stage, /stages/stage/json, /stages/pool, /stages/pool/json, /storage, /storage/json, /storage/rdd, /storage/rdd/json, /environment, /environment/json, /executors, /executors/json, /executors/threadDump, /executors/threadDump/json, /static, /, /api, /jobs/job/kill, /stages/stage/kill.
21/02/13 21:23:26 INFO Utils: Successfully started service 'SparkUI' on port 43659.
21/02/13 21:23:26 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://ip-172-31-21-88.ec2.internal:43659
21/02/13 21:23:26 INFO YarnClusterScheduler: Created YarnClusterScheduler
21/02/13 21:23:26 INFO SchedulerExtensionServices: Starting Yarn extension services with app application_1613251201422_0001 and attemptId Some(appattempt_1613251201422_0001_000001)
21/02/13 21:23:26 INFO Utils: Using initial executors = 100, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
21/02/13 21:23:26 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 34665.
21/02/13 21:23:26 INFO Utils: Using initial executors = 100, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
21/02/13 21:23:26 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
21/02/13 21:23:26 WARN Configuration: __spark_hadoop_conf__.xml:an attempt to override final parameter: fs.s3.buffer.dir; Ignoring.
21/02/13 21:23:26 WARN Configuration: __spark_hadoop_conf__.xml:an attempt to override final parameter: yarn.nodemanager.local-dirs; Ignoring.
21/02/13 21:23:27 INFO RMProxy: Connecting to ResourceManager at ip-172-31-29-
command:
LD_LIBRARY_PATH=\"/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:$LD_LIBRARY_PATH\" \
{{JAVA_HOME}}/bin/java \
-server \
-Xmx4743m \
'-verbose:gc' \
'-XX:+PrintGCDetails' \
'-XX:+PrintGCDateStamps' \
'-XX:OnOutOfMemoryError=kill -9 %p' \
'-XX:+UseParallelGC' \
'-XX:InitiatingHeapOccupancyPercent=70' \
-Djava.io.tmpdir={{PWD}}/tmp \
'-Dspark.history.ui.port=18080' \
'-Dspark.ui.port=0' \
'-Dspark.driver.port=41117' \
-Dspark.yarn.app.container.log.dir=<LOG_DIR> \
org.apache.spark.executor.CoarseGrainedExecutorBackend \
--driver-url \
spark://CoarseGrainedScheduler#ip-172-31-21-88.ec2.internal:41117 \
--executor-id \
<executorId> \
--hostname \
<hostname> \
--cores \
2 \
--app-id \
application_1613251201422_0001 \
--user-class-path \
file:$PWD/__app__.jar \
1><LOG_DIR>/stdout \
2><LOG_DIR>/stderr
resources:
__app__.jar -> resource { scheme: "hdfs" host: "ip-172-31-29-153.ec2.internal" port: 8020 file: "/user/hadoop/.sparkStaging/application_1613251201422_0001/force-pregel.jar" } size: 27378 timestamp: 1613251399566 type: FILE visibility: PRIVATE
__spark_libs__ -> resource { scheme: "hdfs" host: "ip-172-31-29-153.ec2.internal" port: 8020 file: "/user/hadoop/.sparkStaging/application_1613251201422_0001/__spark_libs__364046395941885636.zip" } size: 239655683 timestamp: 1613251397751 type: ARCHIVE visibility: PRIVATE
__spark_conf__ -> resource { scheme: "hdfs" host: "ip-172-31-29-153.ec2.internal" port: 8020 file: "/user/hadoop/.sparkStaging/application_1613251201422_0001/__spark_conf__.zip" } size: 274365 timestamp: 1613251399776 type: ARCHIVE visibility: PRIVATE
hive-site.xml -> resource { scheme: "hdfs" host: "ip-172-31-29-153.ec2.internal" port: 8020 file: "/user/hadoop/.sparkStaging/application_1613251201422_0001/hive-site.xml" } size: 2137 timestamp: 1613251399631 type: FILE visibility: PRIVATE
===============================================================================
21/02/13 21:23:27 INFO Configuration: resource-types.xml not found
21/02/13 21:23:27 INFO ResourceUtils: Unable to find 'resource-types.xml'.
21/02/13 21:23:27 INFO ResourceUtils: Adding resource type - name = memory-mb, units = Mi, type = COUNTABLE
21/02/13 21:23:27 INFO ResourceUtils: Adding resource type - name = vcores, units = , type = COUNTABLE
21/02/13 21:23:27 INFO Utils: Using initial executors = 100, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
21/02/13 21:23:27 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark://YarnAM#ip-172-31-21-88.ec2.internal:41117)
21/02/13 21:23:27 INFO YarnAllocator: Will request up to 100 executor container(s), each with <memory:5632, max memory:2147483647, vCores:2, max vCores:2147483647>
21/02/13 21:23:27 INFO YarnAllocator: Submitted 100 unlocalized container requests.
21/02/13 21:23:27 INFO ApplicationMaster: Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /SQL/json.
21/02/13 21:23:27 INFO JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /SQL/execution.
21/02/13 21:23:27 INFO JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /SQL/execution/json.
21/02/13 21:23:27 INFO JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /static/sql.
21/02/13 21:23:27 INFO YarnAllocator: Allocated container container_1613251201422_0001_01_000002 on host ip-172-31-21-88.ec2.internal for executor with ID 1 with resources <memory:5632, max memory:12288, vCores:1, max vCores:8>
21/02/13 21:23:27 INFO YarnAllocator: Launching executor with 4742m of heap (plus 890m overhead) and 2 cores
21/02/13 21:23:27 INFO YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them.
21/02/13 21:23:28 INFO YarnAllocator: Allocated container container_1613251201422_0001_01_000004 on host ip-172-31-25-102.ec2.internal for executor with ID 2 with resources <memory:11264, vCores:2>
21/02/13 21:23:28 INFO YarnAllocator: Launching executor with 9485m of heap (plus 1779m overhead) and 4 cores
21/02/13 21:23:28 INFO YarnAllocator: Allocated container container_1613251201422_0001_01_000006 on host ip-172-31-28-143.ec2.internal for executor with ID 3 with resources <memory:11264, vCores:2>
21/02/13 21:23:28 INFO YarnAllocator: Launching executor with 9485m of heap (plus 1779m overhead) and 4 cores
21/02/13 21:23:28 INFO YarnAllocator: Received 2 containers from YARN, launching executors on 2 of them.
30 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (172.31.21.88:53634) with ID 1
21/02/13 21:23:30 INFO ExecutorAllocationManager: New executor 1 has registered (new total is 1)
21/02/13 21:23:30 INFO BlockManagerMasterEndpoint: Registering block manager ip-172-31-21-88.ec2.internal:45667 with 2.3 GB RAM, BlockManagerId(1, ip-172-31-21-88.ec2.internal, 45667, None)
Then approximately 2 MB of similar output follows, and then it finishes with:
21/02/13 21:28:25 INFO TaskSetManager: Finished task 199.0 in stage 37207.0 (TID 93528) in 8 ms on ip-172-31-25-102.ec2.internal (executor 2) (158/200)
21/02/13 21:28:25 INFO BlockManagerInfo: Added rdd_2738_31 in memory on ip-172-31-21-88.ec2.internal:45667 (size: 252.3 KB, free: 2.1 GB)
21/02/13 21:28:25 ERROR ApplicationMaster: Exception from Reporter thread.
org.apache.hadoop.yarn.exceptions.ApplicationAttemptNotFoundException: Application attempt appattempt_1613251201422_0001_000001 doesn't exist in ApplicationMasterService cache.
at org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:353)
at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:60)
at org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:99)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1003)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:931)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2854)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
at org.apache.hadoop.yarn.ipc.RPCUtil.instantiateYarnException(RPCUtil.java:75)
at org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:116)
at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.allocate(ApplicationMasterProtocolPBClientImpl.java:79)
at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy23.allocate(Unknown Source)
at org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl.allocate(AMRMClientImpl.java:300)
at org.apache.spark.deploy.yarn.YarnAllocator.allocateResources(YarnAllocator.scala:279)
at org.apache.spark.deploy.yarn.ApplicationMaster.org$apache$spark$deploy$yarn$ApplicationMaster$$allocationThreadImpl(ApplicationMaster.scala:541)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$1.run(ApplicationMaster.scala:607)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.yarn.exceptions.ApplicationAttemptNotFoundException): Application attempt appattempt_1613251201422_0001_000001 doesn't exist in ApplicationMasterService cache.
at org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:353)
at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:60)
at org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:99)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1003)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:931)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2854)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1549)
at org.apache.hadoop.ipc.Client.call(Client.java:1495)
at org.apache.hadoop.ipc.Client.call(Client.java:1394)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
at com.sun.proxy.$Proxy22.allocate(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.allocate(ApplicationMasterProtocolPBClientImpl.java:77)
... 13 more
21/02/13 21:28:25 INFO BlockManagerInfo: Added rdd_2738_30 in memory on ip-172-31-21-88.ec2.internal:45667 (size: 244.8 KB, free: 2.1 GB)
21/02/13 21:28:25 INFO TaskSetManager: Starting task 40.0 in stage 37207.0 (TID 93533, ip-172-31-21-88.ec2.internal, executor 1, partition 40, PROCESS_LOCAL, 19161 bytes)
21/02/13 21:28:25 INFO TaskSetManager: Finished task 31.0 in stage 37207.0 (TID 93532) in 16 ms on ip-172-31-21-88.ec2.internal (executor 1) (162/200)
21/02/13 21:28:25 INFO ApplicationMaster: Final app status: FAILED, exitCode: 12, (reason: Application attempt appattempt_1613251201422_0001_000001 doesn't exist in ApplicationMasterService cache.
at org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:353)
at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:60)
at org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:99)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1003)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:931)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2854)
)
21/02/13 21:28:25 INFO TaskSetManager: Starting task 41.0 in stage 37207.0 (TID 93534, ip-172-31-21-88.ec2.internal, executor 1, partition 41, PROCESS_LOCAL, 19161 bytes)
21/02/13 21:28:25 INFO TaskSetManager: Finished task 30.0 in stage 37207.0 (TID 93531) in 22 ms on ip-172-31-21-88.ec2.internal (executor 1) (163/200)
21/02/13 21:28:25 INFO BlockManagerInfo: Added rdd_2738_40 in memory on ip-172-31-21-88.ec2.internal:45667 (size: 234.2 KB, free: 2.1 GB)
21/02/13 21:28:25 INFO TaskSetManager: Starting task 48.0 in stage 37207.0 (TID 93535, ip-172-31-21-88.ec2.internal, executor 1, partition 48, PROCESS_LOCAL, 19161 bytes)
21/02/13 21:28:25 INFO TaskSetManager: Finished task 40.0 in stage 37207.0 (TID 93533) in 17 ms on ip-172-31-21-88.ec2.internal (executor 1) (164/200)
21/02/13 21:28:25 INFO BlockManagerInfo: Added rdd_2738_41 in memory on ip-172-31-21-88.ec2.internal:45667 (size: 233.4 KB, free: 2.1 GB)
21/02/13 21:28:25 INFO TaskSetManager: Starting task 51.0 in stage 37207.0 (TID 93536, ip-172-31-21-88.ec2.internal, executor 1, partition 51, PROCESS_LOCAL, 19161 bytes)
21/02/13 21:28:25 INFO TaskSetManager: Finished task 41.0 in stage 37207.0 (TID 93534) in 15 ms on ip-172-31-21-88.ec2.internal (executor 1) (165/200)
21/02/13 21:28:25 INFO BlockManagerInfo: Added rdd_2738_48 in memory on ip-172-31-21-88.ec2.internal:45667 (size: 235.1 KB, free: 2.1 GB)
21/02/13 21:28:25 INFO TaskSetManager: Starting task 57.0 in stage 37207.0 (TID 93537, ip-172-31-21-88.ec2.internal, executor 1, partition 57, PROCESS_LOCAL, 19161 bytes)
21/02/13 21:28:25 INFO TaskSetManager: Finished task 48.0 in stage 37207.0 (TID 93535) in 11 ms on ip-172-31-21-88.ec2.internal (executor 1) (166/200)
21/02/13 21:28:25 INFO BlockManagerInfo: Added rdd_2738_57 in memory on ip-172-31-21-88.ec2.internal:45667 (size: 232.2 KB, free: 2.1 GB)
21/02/13 21:28:25 INFO BlockManagerInfo: Added rdd_2738_51 in memory on ip-172-31-21-88.ec2.internal:45667 (size: 244.2 KB, free: 2.1 GB)
21/02/13 21:28:25 INFO TaskSetManager: Starting task 61.0 in stage 37207.0 (TID 93538, ip-172-31-21-88.ec2.internal, executor 1, partition 61, PROCESS_LOCAL, 19161 bytes)
21/02/13 21:28:25 INFO TaskSetManager: Finished task 57.0 in stage 37207.0 (TID 93537) in 10 ms on ip-172-31-21-88.ec2.internal (executor 1) (167/200)
21/02/13 21:28:25 INFO TaskSetManager: Starting task 63.0 in stage 37207.0 (TID 93539, ip-172-31-21-88.ec2.internal, executor 1, partition 63, PROCESS_LOCAL, 19161 bytes)
21/02/13 21:28:25 INFO TaskSetManager: Finished task 51.0 in stage 37207.0 (TID 93536) in 17 ms on ip-172-31-21-88.ec2.internal (executor 1) (168/200)
21/02/13 21:28:25 INFO BlockManagerInfo: Added rdd_2738_61 in memory on ip-172-31-21-88.ec2.internal:45667 (size: 228.6 KB, free: 2.1 GB)
21/02/13 21:28:25 INFO TaskSetManager: Starting task 67.0 in stage 37207.0 (TID 93540, ip-172-31-21-88.ec2.internal, executor 1, partition 67, PROCESS_LOCAL, 19161 bytes)
21/02/13 21:28:25 INFO TaskSetManager: Finished task 61.0 in stage 37207.0 (TID 93538) in 10 ms on ip-172-31-21-88.ec2.internal (executor 1) (169/200)
21/02/13 21:28:25 INFO BlockManagerInfo: Added rdd_2738_63 in memory on ip-172-31-21-88.ec2.internal:45667 (size: 238.3 KB, free: 2.1 GB)
21/02/13 21:28:25 INFO TaskSetManager: Starting task 71.0 in stage 37207.0 (TID 93541, ip-172-31-21-88.ec2.internal, executor 1, partition 71, PROCESS_LOCAL, 19161 bytes)
21/02/13 21:28:25 INFO TaskSetManager: Finished task 63.0 in stage 37207.0 (TID 93539) in 14 ms on ip-172-31-21-88.ec2.internal (executor 1) (170/200)
21/02/13 21:28:25 INFO BlockManagerInfo: Added rdd_2738_67 in memory on ip-172-31-21-88.ec2.internal:45667 (size: 247.2 KB, free: 2.1 GB)
21/02/13 21:28:25 INFO BlockManagerInfo: Added rdd_2738_71 in memory on ip-172-31-21-88.ec2.internal:45667 (size: 243.6 KB, free: 2.1 GB)
21/02/13 21:28:25 INFO TaskSetManager: Starting task 77.0 in stage 37207.0 (TID 93542, ip-172-31-21-88.ec2.internal, executor 1, partition 77, PROCESS_LOCAL, 19161 bytes)
21/02/13 21:28:25 INFO TaskSetManager: Finished task 67.0 in stage 37207.0 (TID 93540) in 18 ms on ip-172-31-21-88.ec2.internal (executor 1) (171/200)
21/02/13 21:28:25 INFO TaskSetManager: Starting task 79.0 in stage 37207.0 (TID 93543, ip-172-31-21-88.ec2.internal, executor 1, partition 79, PROCESS_LOCAL, 19161 bytes)
21/02/13 21:28:25 INFO TaskSetManager: Finished task 71.0 in stage 37207.0 (TID 93541) in 12 ms on ip-172-31-21-88.ec2.internal (executor 1) (172/200)
21/02/13 21:28:25 INFO BlockManagerInfo: Added rdd_2738_79 in memory on ip-172-31-21-88.ec2.internal:45667 (size: 253.6 KB, free: 2.1 GB)
21/02/13 21:28:25 INFO BlockManagerInfo: Added rdd_2738_77 in memory on ip-172-31-21-88.ec2.internal:45667 (size: 222.5 KB, free: 2.1 GB)
21/02/13 21:28:25 INFO TaskSetManager: Starting task 86.0 in stage 37207.0 (TID 93544, ip-172-31-21-88.ec2.internal, executor 1, partition 86, PROCESS_LOCAL, 19161 bytes)
21/02/13 21:28:25 INFO TaskSetManager: Finished task 79.0 in stage 37207.0 (TID 93543) in 12 ms on ip-172-31-21-88.ec2.internal (executor 1) (173/200)
21/02/13 21:28:25 INFO TaskSetManager: Starting task 87.0 in stage 37207.0 (TID 93545, ip-172-31-21-88.ec2.internal, executor 1, partition 87, PROCESS_LOCAL, 19161 bytes)
21/02/13 21:28:25 INFO TaskSetManager: Finished task 77.0 in stage 37207.0 (TID 93542) in 14 ms on ip-172-31-21-88.ec2.internal (executor 1) (174/200)
21/02/13 21:28:25 INFO BlockManagerInfo: Added rdd_2738_86 in memory on ip-172-31-21-88.ec2.internal:45667 (size: 254.5 KB, free: 2.1 GB)
21/02/13 21:28:25 INFO BlockManagerInfo: Added rdd_2738_87 in memory on ip-172-31-21-88.ec2.internal:45667 (size: 267.1 KB, free: 2.1 GB)
Am I correct that Pregel fails to finish 200 or more iterations because of an OutOfMemory error on some of the cluster nodes?
If so, how does Pregel work such that 100 iterations do not cause it but 200 or 300 do? My understanding before this issue was that Pregel, like many other iterative approaches, only 'stores' the previous and current iteration values and results: the values change from iteration to iteration, but their quantity does not increase, meaning it is still a graph with 250k vertices and 1.5M edges, and only the messages for the current iteration are added to the heap.
Throughout the log I was not able to find any information about low memory, and as shown above, there are gigabytes of it available on each node before it terminates.

Permission denied error when setting up local Spark instance and running pyspark

I am setting up a local Spark instance on Windows to use with PySpark as described in this guide (but with spark-3.0.0 / hadoop 2.7 instead): https://phoenixnap.com/kb/install-spark-on-windows-10.
I can start up Spark with:
C:\Spark\spark-3.0.0-bin-hadoop2.7\bin>spark-shell.cmd
and connect to it with http://localhost:4040/ in my browser (I see the Spark GUI).
But when I run the Python pyspark example with
C:\Spark\spark-3.0.0-bin-hadoop2.7\examples>run-example SparkPi
it throws a Permission Denied error, as in this trace:
21/03/08 10:51:03 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
21/03/08 10:51:04 INFO SparkContext: Running Spark version 3.0.0
21/03/08 10:51:04 INFO ResourceUtils: ==============================================================
21/03/08 10:51:04 INFO ResourceUtils: Resources for spark.driver:
21/03/08 10:51:04 INFO ResourceUtils: ==============================================================
21/03/08 10:51:04 INFO SparkContext: Submitted application: Spark Pi
21/03/08 10:51:04 INFO SecurityManager: Changing view acls to: #####
21/03/08 10:51:04 INFO SecurityManager: Changing modify acls to: #####
21/03/08 10:51:04 INFO SecurityManager: Changing view acls groups to:
21/03/08 10:51:04 INFO SecurityManager: Changing modify acls groups to:
21/03/08 10:51:04 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(#####); groups with view permissions: Set(); users with modify permissions: Set(#####); groups with modify permissions: Set()
21/03/08 10:51:05 INFO Utils: Successfully started service 'sparkDriver' on port 63213.
21/03/08 10:51:05 INFO SparkEnv: Registering MapOutputTracker
21/03/08 10:51:05 INFO SparkEnv: Registering BlockManagerMaster
21/03/08 10:51:05 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
21/03/08 10:51:05 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
21/03/08 10:51:05 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
21/03/08 10:51:05 INFO DiskBlockManager: Created local directory at C:\Users\#####\AppData\Local\Temp\blockmgr-dce03954-27a7-484d-8e54-f552b21433f7
21/03/08 10:51:05 INFO MemoryStore: MemoryStore started with capacity 366.3 MiB
21/03/08 10:51:05 INFO SparkEnv: Registering OutputCommitCoordinator
21/03/08 10:51:05 INFO Utils: Successfully started service 'SparkUI' on port 4040.
21/03/08 10:51:05 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://WORKSTATION.DOMAIN.EXT:4040
21/03/08 10:51:05 INFO SparkContext: Added JAR file:///C:/Spark/spark-3.0.0-bin-hadoop2.7/examples/jars/scopt_2.12-3.7.1.jar at spark://WORKSTATION.DOMAIN.EXT:63213/jars/scopt_2.12-3.7.1.jar with timestamp 1615197065578
21/03/08 10:51:05 INFO SparkContext: Added JAR file:///C:/Spark/spark-3.0.0-bin-hadoop2.7/examples/jars/spark-examples_2.12-3.0.0.jar at spark://WORKSTATION.DOMAIN.EXT:63213/jars/spark-examples_2.12-3.0.0.jar with timestamp 1615197065579
21/03/08 10:51:05 INFO Executor: Starting executor ID driver on host WORKSTATION.DOMAIN.EXT
21/03/08 10:51:05 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 63260.
21/03/08 10:51:05 INFO NettyBlockTransferService: Server created on WORKSTATION.DOMAIN.EXT:63260
21/03/08 10:51:05 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
21/03/08 10:51:05 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, NLLR4000250910.solon.prd, 63260, None)
21/03/08 10:51:05 INFO BlockManagerMasterEndpoint: Registering block manager NLLR4000250910.solon.prd:63260 with 366.3 MiB RAM, BlockManagerId(driver, WORKSTATION.DOMAIN.EXT, 63260, None)
21/03/08 10:51:05 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, NLLR4000250910.solon.prd, 63260, None)
21/03/08 10:51:05 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, NLLR4000250910.solon.prd, 63260, None)
21/03/08 10:51:06 INFO SparkContext: Starting job: reduce at SparkPi.scala:38
21/03/08 10:51:06 INFO DAGScheduler: Got job 0 (reduce at SparkPi.scala:38) with 2 output partitions
21/03/08 10:51:06 INFO DAGScheduler: Final stage: ResultStage 0 (reduce at SparkPi.scala:38)
21/03/08 10:51:06 INFO DAGScheduler: Parents of final stage: List()
21/03/08 10:51:06 INFO DAGScheduler: Missing parents: List()
21/03/08 10:51:06 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34), which has no missing parents
21/03/08 10:51:06 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 3.1 KiB, free 366.3 MiB)
21/03/08 10:51:06 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1816.0 B, free 366.3 MiB)
21/03/08 10:51:06 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on WORKSTATION.DOMAIN.EXT:63260 (size: 1816.0 B, free: 366.3 MiB)
21/03/08 10:51:06 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1200
21/03/08 10:51:06 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34) (first 15 tasks are for partitions Vector(0, 1))
21/03/08 10:51:06 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
21/03/08 10:51:06 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, WORKSTATION.DOMAIN.EXT, executor driver, partition 0, PROCESS_LOCAL, 7393 bytes)
21/03/08 10:51:06 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, WORKSTATION.DOMAIN.EXT, executor driver, partition 1, PROCESS_LOCAL, 7393 bytes)
21/03/08 10:51:06 INFO Executor: Running task 1.0 in stage 0.0 (TID 1)
21/03/08 10:51:06 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
21/03/08 10:51:06 INFO Executor: Fetching spark://WORKSTATION.DOMAIN.EXT:63213/jars/spark-examples_2.12-3.0.0.jar with timestamp 1615197065579
21/03/08 10:51:06 ERROR Utils: Aborting task
java.io.IOException: Failed to connect to WORKSTATION.DOMAIN.EXT/192.168.#.#:63213
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:253)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:195)
at org.apache.spark.rpc.netty.NettyRpcEnv.downloadClient(NettyRpcEnv.scala:392)
at org.apache.spark.rpc.netty.NettyRpcEnv.$anonfun$openChannel$4(NettyRpcEnv.scala:360)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1411)
at org.apache.spark.rpc.netty.NettyRpcEnv.openChannel(NettyRpcEnv.scala:359)
at org.apache.spark.util.Utils$.doFetchFile(Utils.scala:719)
at org.apache.spark.util.Utils$.fetchFile(Utils.scala:535)
at org.apache.spark.executor.Executor.$anonfun$updateDependencies$7(Executor.scala:869)
at org.apache.spark.executor.Executor.$anonfun$updateDependencies$7$adapted(Executor.scala:860)
at scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:877)
at scala.collection.mutable.HashMap.$anonfun$foreach$1(HashMap.scala:149)
at scala.collection.mutable.HashTable.foreachEntry(HashTable.scala:237)
at scala.collection.mutable.HashTable.foreachEntry$(HashTable.scala:230)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:44)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:149)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:876)
at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$updateDependencies(Executor.scala:860)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:404)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: io.netty.channel.AbstractChannel$AnnotatedSocketException: Permission denied: no further information: WORKSTATION.DOMAIN.EXT/192.168.#.#:63213
Caused by: java.net.SocketException: Permission denied: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:702)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Unknown Source)
[snip]
When running it on a different machine with a seemingly identical config, where it works fine, I get this trace at the point where the exception is thrown in the other trace:
[snip]
21/03/08 08:00:22 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
21/03/08 08:00:22 INFO Executor: Running task 1.0 in stage 0.0 (TID 1)
21/03/08 08:00:22 INFO Executor: Fetching spark://WORKSTATION.DOMAIN.EXT:63646/jars/spark-examples_2.12-3.0.0.jar with timestamp 1615186820489
21/03/08 08:00:22 INFO TransportClientFactory: Successfully created connection to WORKSTATION.DOMAIN.EXT/10.121.#.#:63646 after 86 ms (0 ms spent in bootstraps)
21/03/08 08:00:22 INFO Utils: Fetching spark://WORKSTATION.DOMAIN.EXT:63646/jars/spark-examples_2.12-3.0.0.jar to C:\Users\#####\AppData\Local\Temp\spark-54a13d9f-9064-4f34-ba81-af49b18d9a0c\userFiles-24c3eabc-02a4-4aca-8abb-424431c6442f\fetchFileTemp5258763437798623210.tmp
21/03/08 08:00:24 INFO Executor: Adding file:/C:/Users/#####/AppData/Local/Temp/spark-54a13d9f-9064-4f34-ba81-af49b18d9a0c/userFiles-24c3eabc-02a4-4aca-8abb-424431c6442f/spark-examples_2.12-3.0.0.jar to class loader
[snip]
At first it seemed like a firewall issue, but adding the executing java.exe as an exception to the firewall didn't solve it.
Does anyone know what I should try next to get this issue resolved?
Finally I was able to solve it by setting SPARK_LOCAL_IP to localhost in my environment variables: go to your Windows environment variables and set SPARK_LOCAL_IP=localhost.
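For example, assuming the standard Windows cmd syntax, it can be set per session or persisted for new sessions:
:: current cmd session only
set SPARK_LOCAL_IP=localhost
:: persist for future sessions
setx SPARK_LOCAL_IP localhost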

Why would Spark executors be removed (with "ExecutorAllocationManager: Request to remove executorIds" in the logs)?

I'm trying to execute a Spark job on an AWS cluster of 6 c4.2xlarge nodes, and I don't know why Spark is killing the executors...
Any help will be appreciated.
Here is the spark-submit command:
. /usr/bin/spark-submit --packages="com.databricks:spark-avro_2.11:3.2.0" --jars RedshiftJDBC42-1.2.1.1001.jar --deploy-mode client --master yarn --num-executors 12 --executor-cores 3 --executor-memory 7G --driver-memory 7g --py-files dependencies.zip iface_extractions.py 2016-10-01 > output.log
At this line it starts to remove executors:
17/05/25 14:42:50 INFO ExecutorAllocationManager: Request to remove executorIds: 5, 3
Output spark-submit log:
Ivy Default Cache set to: /home/hadoop/.ivy2/cache
The jars for the packages stored in: /home/hadoop/.ivy2/jars
:: loading settings :: url = jar:file:/usr/lib/spark/jars/ivy-2.4.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
com.databricks#spark-avro_2.11 added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent;1.0
confs: [default]
found com.databricks#spark-avro_2.11;3.2.0 in central
found org.slf4j#slf4j-api;1.7.5 in central
found org.apache.avro#avro;1.7.6 in central
found org.codehaus.jackson#jackson-core-asl;1.9.13 in central
found org.codehaus.jackson#jackson-mapper-asl;1.9.13 in central
found com.thoughtworks.paranamer#paranamer;2.3 in central
found org.xerial.snappy#snappy-java;1.0.5 in central
found org.apache.commons#commons-compress;1.4.1 in central
found org.tukaani#xz;1.0 in central
:: resolution report :: resolve 284ms :: artifacts dl 8ms
:: modules in use:
com.databricks#spark-avro_2.11;3.2.0 from central in [default]
com.thoughtworks.paranamer#paranamer;2.3 from central in [default]
org.apache.avro#avro;1.7.6 from central in [default]
org.apache.commons#commons-compress;1.4.1 from central in [default]
org.codehaus.jackson#jackson-core-asl;1.9.13 from central in [default]
org.codehaus.jackson#jackson-mapper-asl;1.9.13 from central in [default]
org.slf4j#slf4j-api;1.7.5 from central in [default]
org.tukaani#xz;1.0 from central in [default]
org.xerial.snappy#snappy-java;1.0.5 from central in [default]
:: evicted modules:
org.slf4j#slf4j-api;1.6.4 by [org.slf4j#slf4j-api;1.7.5] in [default]
---------------------------------------------------------------------
| | modules || artifacts |
| conf | number| search|dwnlded|evicted|| number|dwnlded|
---------------------------------------------------------------------
| default | 10 | 0 | 0 | 1 || 9 | 0 |
---------------------------------------------------------------------
:: retrieving :: org.apache.spark#spark-submit-parent
confs: [default]
0 artifacts copied, 9 already retrieved (0kB/8ms)
17/05/25 14:41:37 INFO SparkContext: Running Spark version 2.1.0
17/05/25 14:41:38 INFO SecurityManager: Changing view acls to: hadoop
17/05/25 14:41:38 INFO SecurityManager: Changing modify acls to: hadoop
17/05/25 14:41:38 INFO SecurityManager: Changing view acls groups to:
17/05/25 14:41:38 INFO SecurityManager: Changing modify acls groups to:
17/05/25 14:41:38 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); groups with view permissions: Set(); users with modify permissions: Set(hadoop); groups with modify permissions: Set()
17/05/25 14:41:38 INFO Utils: Successfully started service 'sparkDriver' on port 37132.
17/05/25 14:41:38 INFO SparkEnv: Registering MapOutputTracker
17/05/25 14:41:38 INFO SparkEnv: Registering BlockManagerMaster
17/05/25 14:41:38 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/05/25 14:41:38 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/05/25 14:41:38 INFO DiskBlockManager: Created local directory at /mnt/tmp/blockmgr-e368a261-c1a1-49e7-8533-8081896a45e4
17/05/25 14:41:38 INFO MemoryStore: MemoryStore started with capacity 4.0 GB
17/05/25 14:41:38 INFO SparkEnv: Registering OutputCommitCoordinator
17/05/25 14:41:39 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/05/25 14:41:39 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.185.53.161:4040
17/05/25 14:41:39 INFO Utils: Using initial executors = 12, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
17/05/25 14:41:39 INFO RMProxy: Connecting to ResourceManager at ip-10-185-53-161.eu-west-1.compute.internal/10.185.53.161:8032
17/05/25 14:41:39 INFO Client: Requesting a new application from cluster with 5 NodeManagers
17/05/25 14:41:40 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (11520 MB per container)
17/05/25 14:41:40 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
17/05/25 14:41:40 INFO Client: Setting up container launch context for our AM
17/05/25 14:41:40 INFO Client: Setting up the launch environment for our AM container
17/05/25 14:41:40 INFO Client: Preparing resources for our AM container
17/05/25 14:41:40 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
17/05/25 14:41:42 INFO Client: Uploading resource file:/mnt/tmp/spark-4f534fa1-c377-4113-9c86-96d5cdab4cb5/__spark_libs__6500399427935716229.zip -> hdfs://ip-10-185-53-161.eu-west-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1495720658394_0004/__spark_libs__6500399427935716229.zip
17/05/25 14:41:43 INFO Client: Uploading resource file:/home/hadoop/RedshiftJDBC42-1.2.1.1001.jar -> hdfs://ip-10-185-53-161.eu-west-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1495720658394_0004/RedshiftJDBC42-1.2.1.1001.jar
17/05/25 14:41:43 INFO Client: Uploading resource file:/home/hadoop/.ivy2/jars/com.databricks_spark-avro_2.11-3.2.0.jar -> hdfs://ip-10-185-53-161.eu-west-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1495720658394_0004/com.databricks_spark-avro_2.11-3.2.0.jar
17/05/25 14:41:43 INFO Client: Uploading resource file:/home/hadoop/.ivy2/jars/org.slf4j_slf4j-api-1.7.5.jar -> hdfs://ip-10-185-53-161.eu-west-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1495720658394_0004/org.slf4j_slf4j-api-1.7.5.jar
17/05/25 14:41:43 INFO Client: Uploading resource file:/home/hadoop/.ivy2/jars/org.apache.avro_avro-1.7.6.jar -> hdfs://ip-10-185-53-161.eu-west-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1495720658394_0004/org.apache.avro_avro-1.7.6.jar
17/05/25 14:41:43 INFO Client: Uploading resource file:/home/hadoop/.ivy2/jars/org.codehaus.jackson_jackson-core-asl-1.9.13.jar -> hdfs://ip-10-185-53-161.eu-west-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1495720658394_0004/org.codehaus.jackson_jackson-core-asl-1.9.13.jar
17/05/25 14:41:43 INFO Client: Uploading resource file:/home/hadoop/.ivy2/jars/org.codehaus.jackson_jackson-mapper-asl-1.9.13.jar -> hdfs://ip-10-185-53-161.eu-west-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1495720658394_0004/org.codehaus.jackson_jackson-mapper-asl-1.9.13.jar
17/05/25 14:41:43 INFO Client: Uploading resource file:/home/hadoop/.ivy2/jars/com.thoughtworks.paranamer_paranamer-2.3.jar -> hdfs://ip-10-185-53-161.eu-west-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1495720658394_0004/com.thoughtworks.paranamer_paranamer-2.3.jar
17/05/25 14:41:43 INFO Client: Uploading resource file:/home/hadoop/.ivy2/jars/org.xerial.snappy_snappy-java-1.0.5.jar -> hdfs://ip-10-185-53-161.eu-west-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1495720658394_0004/org.xerial.snappy_snappy-java-1.0.5.jar
17/05/25 14:41:43 INFO Client: Uploading resource file:/home/hadoop/.ivy2/jars/org.apache.commons_commons-compress-1.4.1.jar -> hdfs://ip-10-185-53-161.eu-west-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1495720658394_0004/org.apache.commons_commons-compress-1.4.1.jar
17/05/25 14:41:43 INFO Client: Uploading resource file:/home/hadoop/.ivy2/jars/org.tukaani_xz-1.0.jar -> hdfs://ip-10-185-53-161.eu-west-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1495720658394_0004/org.tukaani_xz-1.0.jar
17/05/25 14:41:43 INFO Client: Uploading resource file:/etc/spark/conf/hive-site.xml -> hdfs://ip-10-185-53-161.eu-west-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1495720658394_0004/hive-site.xml
17/05/25 14:41:43 INFO Client: Uploading resource file:/usr/lib/spark/python/lib/pyspark.zip -> hdfs://ip-10-185-53-161.eu-west-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1495720658394_0004/pyspark.zip
17/05/25 14:41:43 INFO Client: Uploading resource file:/usr/lib/spark/python/lib/py4j-0.10.4-src.zip -> hdfs://ip-10-185-53-161.eu-west-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1495720658394_0004/py4j-0.10.4-src.zip
17/05/25 14:41:43 INFO Client: Uploading resource file:/home/hadoop/dependencies.zip -> hdfs://ip-10-185-53-161.eu-west-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1495720658394_0004/dependencies.zip
17/05/25 14:41:43 WARN Client: Same path resource file:/home/hadoop/.ivy2/jars/com.databricks_spark-avro_2.11-3.2.0.jar added multiple times to distributed cache.
17/05/25 14:41:43 WARN Client: Same path resource file:/home/hadoop/.ivy2/jars/org.slf4j_slf4j-api-1.7.5.jar added multiple times to distributed cache.
17/05/25 14:41:43 WARN Client: Same path resource file:/home/hadoop/.ivy2/jars/org.apache.avro_avro-1.7.6.jar added multiple times to distributed cache.
17/05/25 14:41:43 WARN Client: Same path resource file:/home/hadoop/.ivy2/jars/org.codehaus.jackson_jackson-core-asl-1.9.13.jar added multiple times to distributed cache.
17/05/25 14:41:43 WARN Client: Same path resource file:/home/hadoop/.ivy2/jars/org.codehaus.jackson_jackson-mapper-asl-1.9.13.jar added multiple times to distributed cache.
17/05/25 14:41:43 WARN Client: Same path resource file:/home/hadoop/.ivy2/jars/com.thoughtworks.paranamer_paranamer-2.3.jar added multiple times to distributed cache.
17/05/25 14:41:43 WARN Client: Same path resource file:/home/hadoop/.ivy2/jars/org.xerial.snappy_snappy-java-1.0.5.jar added multiple times to distributed cache.
17/05/25 14:41:43 WARN Client: Same path resource file:/home/hadoop/.ivy2/jars/org.apache.commons_commons-compress-1.4.1.jar added multiple times to distributed cache.
17/05/25 14:41:43 WARN Client: Same path resource file:/home/hadoop/.ivy2/jars/org.tukaani_xz-1.0.jar added multiple times to distributed cache.
17/05/25 14:41:43 INFO Client: Uploading resource file:/mnt/tmp/spark-4f534fa1-c377-4113-9c86-96d5cdab4cb5/__spark_conf__1516567354161750682.zip -> hdfs://ip-10-185-53-161.eu-west-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1495720658394_0004/__spark_conf__.zip
17/05/25 14:41:43 INFO SecurityManager: Changing view acls to: hadoop
17/05/25 14:41:43 INFO SecurityManager: Changing modify acls to: hadoop
17/05/25 14:41:43 INFO SecurityManager: Changing view acls groups to:
17/05/25 14:41:43 INFO SecurityManager: Changing modify acls groups to:
17/05/25 14:41:43 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); groups with view permissions: Set(); users with modify permissions: Set(hadoop); groups with modify permissions: Set()
17/05/25 14:41:43 INFO Client: Submitting application application_1495720658394_0004 to ResourceManager
17/05/25 14:41:43 INFO YarnClientImpl: Submitted application application_1495720658394_0004
17/05/25 14:41:43 INFO SchedulerExtensionServices: Starting Yarn extension services with app application_1495720658394_0004 and attemptId None
17/05/25 14:41:44 INFO Client: Application report for application_1495720658394_0004 (state: ACCEPTED)
17/05/25 14:41:44 INFO Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1495723303463
final status: UNDEFINED
tracking URL: http://ip-10-185-53-161.eu-west-1.compute.internal:20888/proxy/application_1495720658394_0004/
user: hadoop
17/05/25 14:41:45 INFO Client: Application report for application_1495720658394_0004 (state: ACCEPTED)
17/05/25 14:41:46 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(null)
17/05/25 14:41:46 INFO Client: Application report for application_1495720658394_0004 (state: ACCEPTED)
17/05/25 14:41:46 INFO YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> ip-10-185-53-161.eu-west-1.compute.internal, PROXY_URI_BASES -> http://ip-10-185-53-161.eu-west-1.compute.internal:20888/proxy/application_1495720658394_0004), /proxy/application_1495720658394_0004
17/05/25 14:41:46 INFO JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
17/05/25 14:41:47 INFO Client: Application report for application_1495720658394_0004 (state: RUNNING)
17/05/25 14:41:47 INFO Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: 10.185.52.31
ApplicationMaster RPC port: 0
queue: default
start time: 1495723303463
final status: UNDEFINED
tracking URL: http://ip-10-185-53-161.eu-west-1.compute.internal:20888/proxy/application_1495720658394_0004/
user: hadoop
17/05/25 14:41:47 INFO YarnClientSchedulerBackend: Application application_1495720658394_0004 has started running.
17/05/25 14:41:47 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 37860.
17/05/25 14:41:47 INFO NettyBlockTransferService: Server created on 10.185.53.161:37860
17/05/25 14:41:47 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/05/25 14:41:47 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.185.53.161, 37860, None)
17/05/25 14:41:47 INFO BlockManagerMasterEndpoint: Registering block manager 10.185.53.161:37860 with 4.0 GB RAM, BlockManagerId(driver, 10.185.53.161, 37860, None)
17/05/25 14:41:47 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.185.53.161, 37860, None)
17/05/25 14:41:47 INFO BlockManager: external shuffle service port = 7337
17/05/25 14:41:47 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.185.53.161, 37860, None)
17/05/25 14:41:47 INFO EventLoggingListener: Logging events to hdfs:///var/log/spark/apps/application_1495720658394_0004
17/05/25 14:41:47 INFO Utils: Using initial executors = 12, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
17/05/25 14:41:50 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(null) (10.185.52.31:57406) with ID 5
17/05/25 14:41:50 INFO ExecutorAllocationManager: New executor 5 has registered (new total is 1)
17/05/25 14:41:50 INFO BlockManagerMasterEndpoint: Registering block manager ip-10-185-52-31.eu-west-1.compute.internal:38781 with 4.0 GB RAM, BlockManagerId(5, ip-10-185-52-31.eu-west-1.compute.internal, 38781, None)
17/05/25 14:41:50 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(null) (10.185.53.45:40096) with ID 3
17/05/25 14:41:50 INFO ExecutorAllocationManager: New executor 3 has registered (new total is 2)
17/05/25 14:41:50 INFO BlockManagerMasterEndpoint: Registering block manager ip-10-185-53-45.eu-west-1.compute.internal:43702 with 4.0 GB RAM, BlockManagerId(3, ip-10-185-53-45.eu-west-1.compute.internal, 43702, None)
17/05/25 14:41:50 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(null) (10.185.53.135:42390) with ID 2
17/05/25 14:41:50 INFO ExecutorAllocationManager: New executor 2 has registered (new total is 3)
17/05/25 14:41:50 INFO BlockManagerMasterEndpoint: Registering block manager ip-10-185-53-135.eu-west-1.compute.internal:41552 with 4.0 GB RAM, BlockManagerId(2, ip-10-185-53-135.eu-west-1.compute.internal, 41552, None)
17/05/25 14:41:50 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(null) (10.185.53.10:60612) with ID 1
17/05/25 14:41:50 INFO ExecutorAllocationManager: New executor 1 has registered (new total is 4)
17/05/25 14:41:50 INFO BlockManagerMasterEndpoint: Registering block manager ip-10-185-53-10.eu-west-1.compute.internal:33391 with 4.0 GB RAM, BlockManagerId(1, ip-10-185-53-10.eu-west-1.compute.internal, 33391, None)
17/05/25 14:41:50 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(null) (10.185.53.68:57424) with ID 4
17/05/25 14:41:50 INFO ExecutorAllocationManager: New executor 4 has registered (new total is 5)
17/05/25 14:41:50 INFO BlockManagerMasterEndpoint: Registering block manager ip-10-185-53-68.eu-west-1.compute.internal:34222 with 4.0 GB RAM, BlockManagerId(4, ip-10-185-53-68.eu-west-1.compute.internal, 34222, None)
17/05/25 14:42:09 INFO YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
17/05/25 14:42:09 INFO SharedState: Warehouse path is 'hdfs:///user/spark/warehouse'.
17/05/25 14:42:10 WARN Utils: Truncated the string representation of a plan since it was too large. This behavior can be adjusted by setting 'spark.debug.maxToStringFields' in SparkEnv.conf.
17/05/25 14:42:11 INFO CodeGenerator: Code generated in 170.416763 ms
17/05/25 14:42:11 INFO SparkContext: Starting job: collect at /home/hadoop/iface_extractions/select_fields.py:90
17/05/25 14:42:11 INFO DAGScheduler: Got job 0 (collect at /home/hadoop/iface_extractions/select_fields.py:90) with 1 output partitions
17/05/25 14:42:11 INFO DAGScheduler: Final stage: ResultStage 0 (collect at /home/hadoop/iface_extractions/select_fields.py:90)
17/05/25 14:42:11 INFO DAGScheduler: Parents of final stage: List()
17/05/25 14:42:11 INFO DAGScheduler: Missing parents: List()
17/05/25 14:42:11 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[2] at collect at /home/hadoop/iface_extractions/select_fields.py:90), which has no missing parents
17/05/25 14:42:11 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 7.5 KB, free 4.0 GB)
17/05/25 14:42:11 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 4.1 KB, free 4.0 GB)
17/05/25 14:42:11 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.185.53.161:37860 (size: 4.1 KB, free: 4.0 GB)
17/05/25 14:42:11 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:996
17/05/25 14:42:11 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[2] at collect at /home/hadoop/iface_extractions/select_fields.py:90)
17/05/25 14:42:11 INFO YarnScheduler: Adding task set 0.0 with 1 tasks
17/05/25 14:42:11 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, ip-10-185-53-135.eu-west-1.compute.internal, executor 2, partition 0, PROCESS_LOCAL, 5899 bytes)
17/05/25 14:42:11 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on ip-10-185-53-135.eu-west-1.compute.internal:41552 (size: 4.1 KB, free: 4.0 GB)
17/05/25 14:42:12 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 1101 ms on ip-10-185-53-135.eu-west-1.compute.internal (executor 2) (1/1)
17/05/25 14:42:12 INFO YarnScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool
17/05/25 14:42:12 INFO DAGScheduler: ResultStage 0 (collect at /home/hadoop/iface_extractions/select_fields.py:90) finished in 1.109 s
17/05/25 14:42:12 INFO DAGScheduler: Job 0 finished: collect at /home/hadoop/iface_extractions/select_fields.py:90, took 1.290037 s
17/05/25 14:42:12 INFO BlockManagerInfo: Removed broadcast_0_piece0 on 10.185.53.161:37860 in memory (size: 4.1 KB, free: 4.0 GB)
17/05/25 14:42:12 INFO SparkContext: Starting job: collect at /home/hadoop/iface_extractions/select_fields.py:91
17/05/25 14:42:12 INFO BlockManagerInfo: Removed broadcast_0_piece0 on ip-10-185-53-135.eu-west-1.compute.internal:41552 in memory (size: 4.1 KB, free: 4.0 GB)
17/05/25 14:42:12 INFO DAGScheduler: Got job 1 (collect at /home/hadoop/iface_extractions/select_fields.py:91) with 1 output partitions
17/05/25 14:42:12 INFO DAGScheduler: Final stage: ResultStage 1 (collect at /home/hadoop/iface_extractions/select_fields.py:91)
17/05/25 14:42:12 INFO DAGScheduler: Parents of final stage: List()
17/05/25 14:42:12 INFO DAGScheduler: Missing parents: List()
17/05/25 14:42:12 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[5] at collect at /home/hadoop/iface_extractions/select_fields.py:91), which has no missing parents
17/05/25 14:42:12 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 7.5 KB, free 4.0 GB)
17/05/25 14:42:12 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 4.1 KB, free 4.0 GB)
17/05/25 14:42:12 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 10.185.53.161:37860 (size: 4.1 KB, free: 4.0 GB)
17/05/25 14:42:12 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:996
17/05/25 14:42:12 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[5] at collect at /home/hadoop/iface_extractions/select_fields.py:91)
17/05/25 14:42:12 INFO YarnScheduler: Adding task set 1.0 with 1 tasks
17/05/25 14:42:12 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, ip-10-185-53-68.eu-west-1.compute.internal, executor 4, partition 0, PROCESS_LOCAL, 5900 bytes)
17/05/25 14:42:13 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on ip-10-185-53-68.eu-west-1.compute.internal:34222 (size: 4.1 KB, free: 4.0 GB)
17/05/25 14:42:14 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 1047 ms on ip-10-185-53-68.eu-west-1.compute.internal (executor 4) (1/1)
17/05/25 14:42:14 INFO YarnScheduler: Removed TaskSet 1.0, whose tasks have all completed, from pool
17/05/25 14:42:14 INFO DAGScheduler: ResultStage 1 (collect at /home/hadoop/iface_extractions/select_fields.py:91) finished in 1.047 s
17/05/25 14:42:14 INFO DAGScheduler: Job 1 finished: collect at /home/hadoop/iface_extractions/select_fields.py:91, took 1.054768 s
17/05/25 14:42:14 INFO CodeGenerator: Code generated in 13.109425 ms
17/05/25 14:42:14 INFO CodeGenerator: Code generated in 12.568665 ms
17/05/25 14:42:14 INFO CodeGenerator: Code generated in 11.257538 ms
17/05/25 14:42:14 INFO BlockManagerInfo: Removed broadcast_1_piece0 on 10.185.53.161:37860 in memory (size: 4.1 KB, free: 4.0 GB)
17/05/25 14:42:14 INFO BlockManagerInfo: Removed broadcast_1_piece0 on ip-10-185-53-68.eu-west-1.compute.internal:34222 in memory (size: 4.1 KB, free: 4.0 GB)
17/05/25 14:42:14 INFO CodeGenerator: Code generated in 11.563958 ms
17/05/25 14:42:14 INFO CodeGenerator: Code generated in 18.189301 ms
17/05/25 14:42:14 INFO CodeGenerator: Code generated in 13.490762 ms
17/05/25 14:42:14 INFO CodeGenerator: Code generated in 15.156166 ms
17/05/25 14:42:50 INFO ExecutorAllocationManager: Request to remove executorIds: 5, 3
17/05/25 14:42:50 INFO YarnClientSchedulerBackend: Requesting to kill executor(s) 5, 3
17/05/25 14:42:50 INFO YarnClientSchedulerBackend: Actual list of executor(s) to be killed is 5, 3
17/05/25 14:42:50 INFO ExecutorAllocationManager: Removing executor 5 because it has been idle for 60 seconds (new desired total will be 4)
17/05/25 14:42:50 INFO ExecutorAllocationManager: Removing executor 3 because it has been idle for 60 seconds (new desired total will be 3)
17/05/25 14:42:50 INFO ExecutorAllocationManager: Request to remove executorIds: 1
17/05/25 14:42:50 INFO YarnClientSchedulerBackend: Requesting to kill executor(s) 1
17/05/25 14:42:50 INFO YarnClientSchedulerBackend: Actual list of executor(s) to be killed is 1
17/05/25 14:42:50 INFO ExecutorAllocationManager: Removing executor 1 because it has been idle for 60 seconds (new desired total will be 2)
17/05/25 14:42:50 INFO YarnSchedulerBackend$YarnDriverEndpoint: Disabling executor 5.
17/05/25 14:42:50 INFO DAGScheduler: Executor lost: 5 (epoch 0)
17/05/25 14:42:50 INFO BlockManagerMasterEndpoint: Trying to remove executor 5 from BlockManagerMaster.
17/05/25 14:42:50 INFO BlockManagerMasterEndpoint: Removing block manager BlockManagerId(5, ip-10-185-52-31.eu-west-1.compute.internal, 38781, None)
17/05/25 14:42:50 INFO BlockManagerMaster: Removed 5 successfully in removeExecutor
17/05/25 14:42:50 INFO YarnScheduler: Executor 5 on ip-10-185-52-31.eu-west-1.compute.internal killed by driver.
17/05/25 14:42:50 INFO ExecutorAllocationManager: Existing executor 5 has been removed (new total is 4)
17/05/25 14:42:51 INFO YarnSchedulerBackend$YarnDriverEndpoint: Disabling executor 1.
17/05/25 14:42:51 INFO DAGScheduler: Executor lost: 1 (epoch 0)
17/05/25 14:42:51 INFO BlockManagerMasterEndpoint: Trying to remove executor 1 from BlockManagerMaster.
17/05/25 14:42:51 INFO BlockManagerMasterEndpoint: Removing block manager BlockManagerId(1, ip-10-185-53-10.eu-west-1.compute.internal, 33391, None)
17/05/25 14:42:51 INFO BlockManagerMaster: Removed 1 successfully in removeExecutor
17/05/25 14:42:51 INFO YarnScheduler: Executor 1 on ip-10-185-53-10.eu-west-1.compute.internal killed by driver.
17/05/25 14:42:51 INFO ExecutorAllocationManager: Existing executor 1 has been removed (new total is 3)
17/05/25 14:42:51 INFO YarnSchedulerBackend$YarnDriverEndpoint: Disabling executor 3.
17/05/25 14:42:51 INFO DAGScheduler: Executor lost: 3 (epoch 0)
17/05/25 14:42:51 INFO BlockManagerMasterEndpoint: Trying to remove executor 3 from BlockManagerMaster.
17/05/25 14:42:51 INFO BlockManagerMasterEndpoint: Removing block manager BlockManagerId(3, ip-10-185-53-45.eu-west-1.compute.internal, 43702, None)
17/05/25 14:42:51 INFO BlockManagerMaster: Removed 3 successfully in removeExecutor
17/05/25 14:42:51 INFO YarnScheduler: Executor 3 on ip-10-185-53-45.eu-west-1.compute.internal killed by driver.
17/05/25 14:42:51 INFO ExecutorAllocationManager: Existing executor 3 has been removed (new total is 2)
17/05/25 14:43:12 INFO ExecutorAllocationManager: Request to remove executorIds: 2
17/05/25 14:43:12 INFO YarnClientSchedulerBackend: Requesting to kill executor(s) 2
17/05/25 14:43:12 INFO YarnClientSchedulerBackend: Actual list of executor(s) to be killed is 2
17/05/25 14:43:12 INFO ExecutorAllocationManager: Removing executor 2 because it has been idle for 60 seconds (new desired total will be 1)
17/05/25 14:43:13 INFO YarnSchedulerBackend$YarnDriverEndpoint: Disabling executor 2.
17/05/25 14:43:13 INFO DAGScheduler: Executor lost: 2 (epoch 0)
17/05/25 14:43:13 INFO BlockManagerMasterEndpoint: Trying to remove executor 2 from BlockManagerMaster.
17/05/25 14:43:13 INFO BlockManagerMasterEndpoint: Removing block manager BlockManagerId(2, ip-10-185-53-135.eu-west-1.compute.internal, 41552, None)
17/05/25 14:43:13 INFO BlockManagerMaster: Removed 2 successfully in removeExecutor
17/05/25 14:43:13 INFO YarnScheduler: Executor 2 on ip-10-185-53-135.eu-west-1.compute.internal killed by driver.
17/05/25 14:43:13 INFO ExecutorAllocationManager: Existing executor 2 has been removed (new total is 1)
17/05/25 14:43:14 INFO ExecutorAllocationManager: Request to remove executorIds: 4
17/05/25 14:43:14 INFO YarnClientSchedulerBackend: Requesting to kill executor(s) 4
17/05/25 14:43:14 INFO YarnClientSchedulerBackend: Actual list of executor(s) to be killed is 4
17/05/25 14:43:14 INFO ExecutorAllocationManager: Removing executor 4 because it has been idle for 60 seconds (new desired total will be 0)
17/05/25 14:43:17 INFO YarnSchedulerBackend$YarnDriverEndpoint: Disabling executor 4.
17/05/25 14:43:17 INFO DAGScheduler: Executor lost: 4 (epoch 0)
17/05/25 14:43:17 INFO BlockManagerMasterEndpoint: Trying to remove executor 4 from BlockManagerMaster.
17/05/25 14:43:17 INFO BlockManagerMasterEndpoint: Removing block manager BlockManagerId(4, ip-10-185-53-68.eu-west-1.compute.internal, 34222, None)
17/05/25 14:43:17 INFO BlockManagerMaster: Removed 4 successfully in removeExecutor
17/05/25 14:43:17 INFO YarnScheduler: Executor 4 on ip-10-185-53-68.eu-west-1.compute.internal killed by driver.
17/05/25 14:43:17 INFO ExecutorAllocationManager: Existing executor 4 has been removed (new total is 0)
My guess is that you've got Dynamic Resource Allocation enabled in your Spark configuration.
Spark provides a mechanism to dynamically adjust the resources your application occupies based on the workload. This means that your application may give resources back to the cluster if they are no longer used and request them again later when there is demand. This feature is particularly useful if multiple applications share resources in your Spark cluster.
This feature is disabled by default and available on all coarse-grained cluster managers, i.e. standalone mode, YARN mode, and Mesos coarse-grained mode.
I highlighted the relevant part that says it is disabled by default and hence I can only guess that it was enabled.
From ExecutorAllocationManager:
An agent that dynamically allocates and removes executors based on the workload.
With that said, I'd use the web UI and check whether the spark.dynamicAllocation.enabled property is set to true.
There are two requirements for using this feature (Dynamic Resource Allocation). First, your application must set spark.dynamicAllocation.enabled to true. Second, you must set up an external shuffle service on each worker node in the same cluster and set spark.shuffle.service.enabled to true in your application.
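As a sketch, the corresponding spark-defaults.conf entries might look like this (values illustrative; the 60-second idle timeout matches the "idle for 60 seconds" messages in the log above and is also the default):
spark.dynamicAllocation.enabled              true
spark.shuffle.service.enabled                true
spark.dynamicAllocation.executorIdleTimeout  60s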
This is the line that prints out the INFO message:
logInfo("Request to remove executorIds: " + executors.mkString(", "))
You can also kill executors using SparkContext.killExecutors, which gives a Spark developer a way to kill executors explicitly.
killExecutors(executorIds: Seq[String]): Boolean
Request that the cluster manager kill the specified executors.
There are actually two of these kill methods (one taking a single executor id and one taking a list), and they are very helpful for demo purposes, as you can easily show how executors come and go.
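A minimal sketch of how that could look from the driver, assuming a live SparkContext named sc; the executor ids are placeholders taken from the log above:
// Developer API: request that the cluster manager kill specific executors.
val removedBoth: Boolean = sc.killExecutors(Seq("5", "3"))
// Single-executor variant of the same request.
val removedOne: Boolean = sc.killExecutor("1")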

Scala application using spark cassandra connector hangs

I am developing a test application in IntelliJ using Scala and the Spark Cassandra connector. Here is my build.sbt:
scalaVersion := "2.11.8"
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.6.1"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "1.6.1"
libraryDependencies += "com.datastax.spark" %% "spark-cassandra-connector" % "1.6.0-M2"
I have created a Cassandra cluster with 4 nodes using ccm. The keyspace I created has a replication factor of 3. Here is the code in my Scala app:
import org.apache.spark.{SparkConf, SparkContext}
import com.datastax.spark.connector._

val conf = new SparkConf()
.setMaster("local[*]")
.setAppName("SparkCassandra")
//set Cassandra host address as your local address
.set("spark.cassandra.connection.host", "127.0.0.1")
val sc = new SparkContext(conf)
val rdd = sc.cassandraTable("excelsior", "emp")
val total = rdd.count()
println(total)
println("exiting now:")
sc.stop()
But the Spark job hangs at the following line:
CassandraConnector: Disconnected from Cassandra cluster: cluster4nodes
Only 3 tasks out of 4 are completed. Here is the full log:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/07/08 14:23:26 INFO SparkContext: Running Spark version 1.6.0
16/07/08 14:23:26 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/07/08 14:23:27 WARN Utils: Your hostname, renuka-Inspiron-3542 resolves to a loopback address: 127.0.1.1; using 192.168.1.189 instead (on interface wlan0)
16/07/08 14:23:27 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
16/07/08 14:23:27 INFO SecurityManager: Changing view acls to: renuka
16/07/08 14:23:27 INFO SecurityManager: Changing modify acls to: renuka
16/07/08 14:23:27 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(renuka); users with modify permissions: Set(renuka)
16/07/08 14:23:28 INFO Utils: Successfully started service 'sparkDriver' on port 41027.
16/07/08 14:23:28 INFO Slf4jLogger: Slf4jLogger started
16/07/08 14:23:28 INFO Remoting: Starting remoting
16/07/08 14:23:28 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem#192.168.1.189:46329]
16/07/08 14:23:28 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 46329.
16/07/08 14:23:29 INFO SparkEnv: Registering MapOutputTracker
16/07/08 14:23:29 INFO SparkEnv: Registering BlockManagerMaster
16/07/08 14:23:29 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-d1db2ea9-3fba-4b64-ad2b-80deddd3f05a
16/07/08 14:23:29 INFO MemoryStore: MemoryStore started with capacity 1091.3 MB
16/07/08 14:23:29 INFO SparkEnv: Registering OutputCommitCoordinator
16/07/08 14:23:29 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/07/08 14:23:29 INFO SparkUI: Started SparkUI at http://192.168.1.189:4040
16/07/08 14:23:30 INFO Executor: Starting executor ID driver on host localhost
16/07/08 14:23:30 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 40956.
16/07/08 14:23:30 INFO NettyBlockTransferService: Server created on 40956
16/07/08 14:23:30 INFO BlockManagerMaster: Trying to register BlockManager
16/07/08 14:23:30 INFO BlockManagerMasterEndpoint: Registering block manager localhost:40956 with 1091.3 MB RAM, BlockManagerId(driver, localhost, 40956)
16/07/08 14:23:30 INFO BlockManagerMaster: Registered BlockManager
16/07/08 14:23:30 INFO NettyUtil: Found Netty's native epoll transport in the classpath, using it
16/07/08 14:23:31 INFO Cluster: New Cassandra host /127.0.0.1:9042 added
16/07/08 14:23:31 INFO Cluster: New Cassandra host /127.0.0.2:9042 added
16/07/08 14:23:31 INFO LocalNodeFirstLoadBalancingPolicy: Added host 127.0.0.2 (datacenter1)
16/07/08 14:23:31 INFO Cluster: New Cassandra host /127.0.0.3:9042 added
16/07/08 14:23:31 INFO LocalNodeFirstLoadBalancingPolicy: Added host 127.0.0.3 (datacenter1)
16/07/08 14:23:31 INFO Cluster: New Cassandra host /127.0.0.4:9042 added
16/07/08 14:23:31 INFO LocalNodeFirstLoadBalancingPolicy: Added host 127.0.0.4 (datacenter1)
16/07/08 14:23:31 INFO CassandraConnector: Connected to Cassandra cluster: cluster4nodes
16/07/08 14:23:31 INFO SparkContext: Starting job: count at hello.scala:36
16/07/08 14:23:32 INFO DAGScheduler: Got job 0 (count at hello.scala:36) with 4 output partitions
16/07/08 14:23:32 INFO DAGScheduler: Final stage: ResultStage 0 (count at hello.scala:36)
16/07/08 14:23:32 INFO DAGScheduler: Parents of final stage: List()
16/07/08 14:23:32 INFO DAGScheduler: Missing parents: List()
16/07/08 14:23:32 INFO DAGScheduler: Submitting ResultStage 0 (CassandraTableScanRDD[0] at RDD at CassandraRDD.scala:18), which has no missing parents
16/07/08 14:23:32 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 7.2 KB, free 7.2 KB)
16/07/08 14:23:32 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 3.7 KB, free 10.9 KB)
16/07/08 14:23:32 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:40956 (size: 3.7 KB, free: 1091.2 MB)
16/07/08 14:23:32 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1006
16/07/08 14:23:32 INFO DAGScheduler: Submitting 4 missing tasks from ResultStage 0 (CassandraTableScanRDD[0] at RDD at CassandraRDD.scala:18)
16/07/08 14:23:32 INFO TaskSchedulerImpl: Adding task set 0.0 with 4 tasks
16/07/08 14:23:32 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, partition 0,NODE_LOCAL, 3530 bytes)
16/07/08 14:23:32 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, localhost, partition 1,NODE_LOCAL, 3530 bytes)
16/07/08 14:23:32 INFO TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, localhost, partition 2,NODE_LOCAL, 3530 bytes)
16/07/08 14:23:32 INFO Executor: Running task 2.0 in stage 0.0 (TID 2)
16/07/08 14:23:32 INFO Executor: Running task 1.0 in stage 0.0 (TID 1)
16/07/08 14:23:32 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
16/07/08 14:23:34 INFO Executor: Finished task 1.0 in stage 0.0 (TID 1). 2082 bytes result sent to driver
16/07/08 14:23:34 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 2082 bytes result sent to driver
16/07/08 14:23:34 INFO Executor: Finished task 2.0 in stage 0.0 (TID 2). 2082 bytes result sent to driver
16/07/08 14:23:34 INFO TaskSetManager: Finished task 2.0 in stage 0.0 (TID 2) in 2089 ms on localhost (1/4)
16/07/08 14:23:34 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 2179 ms on localhost (2/4)
16/07/08 14:23:34 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 2117 ms on localhost (3/4)
16/07/08 14:23:41 INFO CassandraConnector: Disconnected from Cassandra cluster: cluster4nodes
If I create a keyspace with replication factor 4 on the 4-node cluster, the app works fine and never hangs. Am I missing anything in the configuration? Thanks in advance.

Spark metrics on wordcount example

I read the Metrics section on the Spark website. I want to try it on the wordcount example, but I can't make it work.
spark/conf/metrics.properties:
# Enable CsvSink for all instances
*.sink.csv.class=org.apache.spark.metrics.sink.CsvSink
# Polling period for CsvSink
*.sink.csv.period=1
*.sink.csv.unit=seconds
# Polling directory for CsvSink
*.sink.csv.directory=/home/spark/Documents/test/
# Worker instance overlap polling period
worker.sink.csv.period=1
worker.sink.csv.unit=seconds
# Enable jvm source for instance master, worker, driver and executor
master.source.jvm.class=org.apache.spark.metrics.source.JvmSource
worker.source.jvm.class=org.apache.spark.metrics.source.JvmSource
driver.source.jvm.class=org.apache.spark.metrics.source.JvmSource
executor.source.jvm.class=org.apache.spark.metrics.source.JvmSource
I run my app in local mode, as in the documentation:
$SPARK_HOME/bin/spark-submit --class "SimpleApp" --master local[4] target/scala-2.10/simple-project_2.10-1.0.jar
I checked /home/spark/Documents/test/ and it is empty.
What did I miss?
Shell:
$SPARK_HOME/bin/spark-submit --class "SimpleApp" --master local[4] --conf spark.metrics.conf=/home/spark/development/spark/conf/metrics.properties target/scala-2.10/simple-project_2.10-1.0.jar
Spark assembly has been built with Hive, including Datanucleus jars on classpath
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
INFO SparkContext: Running Spark version 1.3.0
WARN Utils: Your hostname, cv-local resolves to a loopback address: 127.0.1.1; using 192.168.1.64 instead (on interface eth0)
WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
INFO SecurityManager: Changing view acls to: spark
INFO SecurityManager: Changing modify acls to: spark
INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(spark); users with modify permissions: Set(spark)
INFO Slf4jLogger: Slf4jLogger started
INFO Remoting: Starting remoting
INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver#cv-local.local:35895]
INFO Utils: Successfully started service 'sparkDriver' on port 35895.
INFO SparkEnv: Registering MapOutputTracker
INFO SparkEnv: Registering BlockManagerMaster
INFO DiskBlockManager: Created local directory at /tmp/spark-447d56c9-cfe5-4f9d-9e0a-6bb476ddede6/blockmgr-4eaa04f4-b4b2-4b05-ba0e-fd1aeb92b289
INFO MemoryStore: MemoryStore started with capacity 265.4 MB
INFO HttpFileServer: HTTP File server directory is /tmp/spark-fae11cd2-937e-4be3-a273-be8b4c4847df/httpd-ca163445-6fff-45e4-9c69-35edcea83b68
INFO HttpServer: Starting HTTP Server
INFO Utils: Successfully started service 'HTTP file server' on port 52828.
INFO SparkEnv: Registering OutputCommitCoordinator
INFO Utils: Successfully started service 'SparkUI' on port 4040.
INFO SparkUI: Started SparkUI at http://cv-local.local:4040
INFO SparkContext: Added JAR file:/home/spark/workspace/IdeaProjects/wordcount/target/scala-2.10/simple-project_2.10-1.0.jar at http://192.168.1.64:52828/jars/simple-project_2.10-1.0.jar with timestamp 1444049152348
INFO Executor: Starting executor ID <driver> on host localhost
INFO AkkaUtils: Connecting to HeartbeatReceiver: akka.tcp://sparkDriver#cv-local.local:35895/user/HeartbeatReceiver
INFO NettyBlockTransferService: Server created on 60320
INFO BlockManagerMaster: Trying to register BlockManager
INFO BlockManagerMasterActor: Registering block manager localhost:60320 with 265.4 MB RAM, BlockManagerId(<driver>, localhost, 60320)
INFO BlockManagerMaster: Registered BlockManager
INFO MemoryStore: ensureFreeSpace(34046) called with curMem=0, maxMem=278302556
INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 33.2 KB, free 265.4 MB)
INFO MemoryStore: ensureFreeSpace(5221) called with curMem=34046, maxMem=278302556
INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 5.1 KB, free 265.4 MB)
INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:60320 (size: 5.1 KB, free: 265.4 MB)
INFO BlockManagerMaster: Updated info of block broadcast_0_piece0
INFO SparkContext: Created broadcast 0 from textFile at SimpleApp.scala:11
WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
WARN LoadSnappy: Snappy native library not loaded
INFO FileInputFormat: Total input paths to process : 1
INFO SparkContext: Starting job: count at SimpleApp.scala:12
INFO DAGScheduler: Got job 0 (count at SimpleApp.scala:12) with 2 output partitions (allowLocal=false)
INFO DAGScheduler: Final stage: Stage 0(count at SimpleApp.scala:12)
INFO DAGScheduler: Parents of final stage: List()
INFO DAGScheduler: Missing parents: List()
INFO DAGScheduler: Submitting Stage 0 (MapPartitionsRDD[2] at filter at SimpleApp.scala:12), which has no missing parents
INFO MemoryStore: ensureFreeSpace(2848) called with curMem=39267, maxMem=278302556
INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 2.8 KB, free 265.4 MB)
INFO MemoryStore: ensureFreeSpace(2056) called with curMem=42115, maxMem=278302556
INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.0 KB, free 265.4 MB)
INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on localhost:60320 (size: 2.0 KB, free: 265.4 MB)
INFO BlockManagerMaster: Updated info of block broadcast_1_piece0
INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:839
INFO DAGScheduler: Submitting 2 missing tasks from Stage 0 (MapPartitionsRDD[2] at filter at SimpleApp.scala:12)
INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, PROCESS_LOCAL, 1391 bytes)
INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, localhost, PROCESS_LOCAL, 1391 bytes)
INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
INFO Executor: Running task 1.0 in stage 0.0 (TID 1)
INFO Executor: Fetching http://192.168.1.64:52828/jars/simple-project_2.10-1.0.jar with timestamp 1444049152348
INFO Utils: Fetching http://192.168.1.64:52828/jars/simple-project_2.10-1.0.jar to /tmp/spark-cab5a940-e2a4-4caf-8549-71e1518271f1/userFiles-c73172c2-7af6-4861-a945-b183edbbafa1/fetchFileTemp4229868141058449157.tmp
INFO Executor: Adding file:/tmp/spark-cab5a940-e2a4-4caf-8549-71e1518271f1/userFiles-c73172c2-7af6-4861-a945-b183edbbafa1/simple-project_2.10-1.0.jar to class loader
INFO CacheManager: Partition rdd_1_1 not found, computing it
INFO CacheManager: Partition rdd_1_0 not found, computing it
INFO HadoopRDD: Input split: file:/home/spark/development/spark/conf/metrics.properties:2659+2659
INFO HadoopRDD: Input split: file:/home/spark/development/spark/conf/metrics.properties:0+2659
INFO MemoryStore: ensureFreeSpace(7840) called with curMem=44171, maxMem=278302556
INFO MemoryStore: Block rdd_1_0 stored as values in memory (estimated size 7.7 KB, free 265.4 MB)
INFO BlockManagerInfo: Added rdd_1_0 in memory on localhost:60320 (size: 7.7 KB, free: 265.4 MB)
INFO BlockManagerMaster: Updated info of block rdd_1_0
INFO MemoryStore: ensureFreeSpace(8648) called with curMem=52011, maxMem=278302556
INFO MemoryStore: Block rdd_1_1 stored as values in memory (estimated size 8.4 KB, free 265.4 MB)
INFO BlockManagerInfo: Added rdd_1_1 in memory on localhost:60320 (size: 8.4 KB, free: 265.4 MB)
INFO BlockManagerMaster: Updated info of block rdd_1_1
INFO Executor: Finished task 1.0 in stage 0.0 (TID 1). 2399 bytes result sent to driver
INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 2399 bytes result sent to driver
INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 139 ms on localhost (1/2)
INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 133 ms on localhost (2/2)
INFO DAGScheduler: Stage 0 (count at SimpleApp.scala:12) finished in 0.151 s
INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
INFO DAGScheduler: Job 0 finished: count at SimpleApp.scala:12, took 0.225939 s
INFO SparkContext: Starting job: count at SimpleApp.scala:13
INFO DAGScheduler: Got job 1 (count at SimpleApp.scala:13) with 2 output partitions (allowLocal=false)
INFO DAGScheduler: Final stage: Stage 1(count at SimpleApp.scala:13)
INFO DAGScheduler: Parents of final stage: List()
INFO DAGScheduler: Missing parents: List()
INFO DAGScheduler: Submitting Stage 1 (MapPartitionsRDD[3] at filter at SimpleApp.scala:13), which has no missing parents
INFO MemoryStore: ensureFreeSpace(2848) called with curMem=60659, maxMem=278302556
INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 2.8 KB, free 265.3 MB)
INFO MemoryStore: ensureFreeSpace(2056) called with curMem=63507, maxMem=278302556
INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 2.0 KB, free 265.3 MB)
INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on localhost:60320 (size: 2.0 KB, free: 265.4 MB)
INFO BlockManagerMaster: Updated info of block broadcast_2_piece0
INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:839
INFO DAGScheduler: Submitting 2 missing tasks from Stage 1 (MapPartitionsRDD[3] at filter at SimpleApp.scala:13)
INFO TaskSchedulerImpl: Adding task set 1.0 with 2 tasks
INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 2, localhost, PROCESS_LOCAL, 1391 bytes)
INFO TaskSetManager: Starting task 1.0 in stage 1.0 (TID 3, localhost, PROCESS_LOCAL, 1391 bytes)
INFO Executor: Running task 0.0 in stage 1.0 (TID 2)
INFO Executor: Running task 1.0 in stage 1.0 (TID 3)
INFO BlockManager: Found block rdd_1_0 locally
INFO Executor: Finished task 0.0 in stage 1.0 (TID 2). 1830 bytes result sent to driver
INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 2) in 9 ms on localhost (1/2)
INFO BlockManager: Found block rdd_1_1 locally
INFO Executor: Finished task 1.0 in stage 1.0 (TID 3). 1830 bytes result sent to driver
INFO TaskSetManager: Finished task 1.0 in stage 1.0 (TID 3) in 10 ms on localhost (2/2)
INFO DAGScheduler: Stage 1 (count at SimpleApp.scala:13) finished in 0.011 s
INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
INFO DAGScheduler: Job 1 finished: count at SimpleApp.scala:13, took 0.024084 s
Lines with a: 5, Lines with b: 12
I made it work by specifying the path to the metrics file in the spark-submit command:
--files=/yourPath/metrics.properties --conf spark.metrics.conf=./metrics.properties
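For example, combined with the submit command used above (substituting the metrics.properties path from this setup for /yourPath):
$SPARK_HOME/bin/spark-submit --class "SimpleApp" --master local[4] \
  --files /home/spark/development/spark/conf/metrics.properties \
  --conf spark.metrics.conf=./metrics.properties \
  target/scala-2.10/simple-project_2.10-1.0.jar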
