Spark: Jobs are not assigned

I've deployed a Spark cluster on Kubernetes. Here is the master web UI:
I'm trying to submit the SparkPi example using:
$ ./spark-submit \
--class org.apache.spark.examples.SparkPi \
--master spark://spark-cluster-ra-iot-dev.si-origin-cluster.t-systems.es:32316 \
--num-executors 1 \
--driver-memory 512m \
--executor-memory 512m \
--executor-cores 1 \
../examples/jars/spark-examples_2.11-2.4.5.jar 10
The job reaches the Spark cluster:
Nevertheless, I'm getting messages like:
WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
It seems like the SparkPi application is scheduled but never executed...
Here is the complete log:
./spark-submit --class org.apache.spark.examples.SparkPi --master spark://spark-cluster-ra-iot-dev.si-origin-cluster.t-systems.es:32316 --num-executors 1 --driver-memory 512m --executor-memory 512m --executor-cores 1 ../examples/jars/spark-examples_2.11-2.4.5.jar 10
20/06/09 10:52:57 WARN Utils: Your hostname, psgd resolves to a loopback address: 127.0.1.1; using 10.0.2.15 instead (on interface enp0s3)
20/06/09 10:52:57 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
20/06/09 10:52:57 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/06/09 10:52:58 INFO SparkContext: Running Spark version 2.4.5
20/06/09 10:52:58 INFO SparkContext: Submitted application: Spark Pi
20/06/09 10:52:58 INFO SecurityManager: Changing view acls to: jeusdi
20/06/09 10:52:58 INFO SecurityManager: Changing modify acls to: jeusdi
20/06/09 10:52:58 INFO SecurityManager: Changing view acls groups to:
20/06/09 10:52:58 INFO SecurityManager: Changing modify acls groups to:
20/06/09 10:52:58 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(jeusdi); groups with view permissions: Set(); users with modify permissions: Set(jeusdi); groups with modify permissions: Set()
20/06/09 10:52:59 INFO Utils: Successfully started service 'sparkDriver' on port 42943.
20/06/09 10:52:59 INFO SparkEnv: Registering MapOutputTracker
20/06/09 10:52:59 INFO SparkEnv: Registering BlockManagerMaster
20/06/09 10:52:59 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/06/09 10:52:59 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/06/09 10:52:59 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-b6c54054-c94b-42c7-b85f-a4e30be4b659
20/06/09 10:52:59 INFO MemoryStore: MemoryStore started with capacity 117.0 MB
20/06/09 10:52:59 INFO SparkEnv: Registering OutputCommitCoordinator
20/06/09 10:52:59 INFO Utils: Successfully started service 'SparkUI' on port 4040.
20/06/09 10:53:00 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.0.2.15:4040
20/06/09 10:53:00 INFO SparkContext: Added JAR file:/home/jeusdi/projects/workarea/valladolid/spark-2.4.5-bin-hadoop2.7/bin/../examples/jars/spark-examples_2.11-2.4.5.jar at spark://10.0.2.15:42943/jars/spark-examples_2.11-2.4.5.jar with timestamp 1591692780146
20/06/09 10:53:00 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://spark-cluster-ra-iot-dev.si-origin-cluster.t-systems.es:32316...
20/06/09 10:53:00 INFO TransportClientFactory: Successfully created connection to spark-cluster-ra-iot-dev.si-origin-cluster.t-systems.es/10.49.160.69:32316 after 152 ms (0 ms spent in bootstraps)
20/06/09 10:53:01 INFO StandaloneSchedulerBackend: Connected to Spark cluster with app ID app-20200609085300-0002
20/06/09 10:53:01 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20200609085300-0002/0 on worker-20200609084543-10.129.3.127-45867 (10.129.3.127:45867) with 1 core(s)
20/06/09 10:53:01 INFO StandaloneSchedulerBackend: Granted executor ID app-20200609085300-0002/0 on hostPort 10.129.3.127:45867 with 1 core(s), 512.0 MB RAM
20/06/09 10:53:01 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20200609085300-0002/1 on worker-20200609084543-10.129.3.127-45867 (10.129.3.127:45867) with 1 core(s)
20/06/09 10:53:01 INFO StandaloneSchedulerBackend: Granted executor ID app-20200609085300-0002/1 on hostPort 10.129.3.127:45867 with 1 core(s), 512.0 MB RAM
20/06/09 10:53:01 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20200609085300-0002/2 on worker-20200609084543-10.129.3.127-45867 (10.129.3.127:45867) with 1 core(s)
20/06/09 10:53:01 INFO StandaloneSchedulerBackend: Granted executor ID app-20200609085300-0002/2 on hostPort 10.129.3.127:45867 with 1 core(s), 512.0 MB RAM
20/06/09 10:53:01 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20200609085300-0002/3 on worker-20200609084543-10.129.3.127-45867 (10.129.3.127:45867) with 1 core(s)
20/06/09 10:53:01 INFO StandaloneSchedulerBackend: Granted executor ID app-20200609085300-0002/3 on hostPort 10.129.3.127:45867 with 1 core(s), 512.0 MB RAM
20/06/09 10:53:01 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20200609085300-0002/4 on worker-20200609084543-10.129.3.127-45867 (10.129.3.127:45867) with 1 core(s)
20/06/09 10:53:01 INFO StandaloneSchedulerBackend: Granted executor ID app-20200609085300-0002/4 on hostPort 10.129.3.127:45867 with 1 core(s), 512.0 MB RAM
20/06/09 10:53:01 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 33755.
20/06/09 10:53:01 INFO NettyBlockTransferService: Server created on 10.0.2.15:33755
20/06/09 10:53:01 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20200609085300-0002/5 on worker-20200609084509-10.128.3.197-41600 (10.128.3.197:41600) with 1 core(s)
20/06/09 10:53:01 INFO StandaloneSchedulerBackend: Granted executor ID app-20200609085300-0002/5 on hostPort 10.128.3.197:41600 with 1 core(s), 512.0 MB RAM
20/06/09 10:53:01 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20200609085300-0002/6 on worker-20200609084509-10.128.3.197-41600 (10.128.3.197:41600) with 1 core(s)
20/06/09 10:53:01 INFO StandaloneSchedulerBackend: Granted executor ID app-20200609085300-0002/6 on hostPort 10.128.3.197:41600 with 1 core(s), 512.0 MB RAM
20/06/09 10:53:01 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20200609085300-0002/7 on worker-20200609084509-10.128.3.197-41600 (10.128.3.197:41600) with 1 core(s)
20/06/09 10:53:01 INFO StandaloneSchedulerBackend: Granted executor ID app-20200609085300-0002/7 on hostPort 10.128.3.197:41600 with 1 core(s), 512.0 MB RAM
20/06/09 10:53:01 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
20/06/09 10:53:01 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20200609085300-0002/8 on worker-20200609084509-10.128.3.197-41600 (10.128.3.197:41600) with 1 core(s)
20/06/09 10:53:01 INFO StandaloneSchedulerBackend: Granted executor ID app-20200609085300-0002/8 on hostPort 10.128.3.197:41600 with 1 core(s), 512.0 MB RAM
20/06/09 10:53:01 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20200609085300-0002/9 on worker-20200609084509-10.128.3.197-41600 (10.128.3.197:41600) with 1 core(s)
20/06/09 10:53:01 INFO StandaloneSchedulerBackend: Granted executor ID app-20200609085300-0002/9 on hostPort 10.128.3.197:41600 with 1 core(s), 512.0 MB RAM
20/06/09 10:53:01 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20200609085300-0002/10 on worker-20200609084426-10.131.1.27-46041 (10.131.1.27:46041) with 1 core(s)
20/06/09 10:53:01 INFO StandaloneSchedulerBackend: Granted executor ID app-20200609085300-0002/10 on hostPort 10.131.1.27:46041 with 1 core(s), 512.0 MB RAM
20/06/09 10:53:01 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20200609085300-0002/11 on worker-20200609084426-10.131.1.27-46041 (10.131.1.27:46041) with 1 core(s)
20/06/09 10:53:01 INFO StandaloneSchedulerBackend: Granted executor ID app-20200609085300-0002/11 on hostPort 10.131.1.27:46041 with 1 core(s), 512.0 MB RAM
20/06/09 10:53:01 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20200609085300-0002/12 on worker-20200609084426-10.131.1.27-46041 (10.131.1.27:46041) with 1 core(s)
20/06/09 10:53:01 INFO StandaloneSchedulerBackend: Granted executor ID app-20200609085300-0002/12 on hostPort 10.131.1.27:46041 with 1 core(s), 512.0 MB RAM
20/06/09 10:53:01 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20200609085300-0002/13 on worker-20200609084426-10.131.1.27-46041 (10.131.1.27:46041) with 1 core(s)
20/06/09 10:53:01 INFO StandaloneSchedulerBackend: Granted executor ID app-20200609085300-0002/13 on hostPort 10.131.1.27:46041 with 1 core(s), 512.0 MB RAM
20/06/09 10:53:01 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20200609085300-0002/14 on worker-20200609084426-10.131.1.27-46041 (10.131.1.27:46041) with 1 core(s)
20/06/09 10:53:01 INFO StandaloneSchedulerBackend: Granted executor ID app-20200609085300-0002/14 on hostPort 10.131.1.27:46041 with 1 core(s), 512.0 MB RAM
20/06/09 10:53:01 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200609085300-0002/5 is now RUNNING
20/06/09 10:53:01 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200609085300-0002/6 is now RUNNING
20/06/09 10:53:01 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200609085300-0002/7 is now RUNNING
20/06/09 10:53:01 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200609085300-0002/8 is now RUNNING
20/06/09 10:53:01 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.0.2.15, 33755, None)
20/06/09 10:53:01 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200609085300-0002/9 is now RUNNING
20/06/09 10:53:01 INFO BlockManagerMasterEndpoint: Registering block manager 10.0.2.15:33755 with 117.0 MB RAM, BlockManagerId(driver, 10.0.2.15, 33755, None)
20/06/09 10:53:01 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200609085300-0002/10 is now RUNNING
20/06/09 10:53:01 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200609085300-0002/11 is now RUNNING
20/06/09 10:53:01 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.0.2.15, 33755, None)
20/06/09 10:53:01 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.0.2.15, 33755, None)
20/06/09 10:53:01 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200609085300-0002/12 is now RUNNING
20/06/09 10:53:01 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200609085300-0002/13 is now RUNNING
20/06/09 10:53:01 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200609085300-0002/14 is now RUNNING
20/06/09 10:53:01 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200609085300-0002/0 is now RUNNING
20/06/09 10:53:01 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200609085300-0002/1 is now RUNNING
20/06/09 10:53:01 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200609085300-0002/2 is now RUNNING
20/06/09 10:53:01 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200609085300-0002/3 is now RUNNING
20/06/09 10:53:01 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200609085300-0002/4 is now RUNNING
20/06/09 10:53:01 INFO StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
20/06/09 10:53:02 INFO SparkContext: Starting job: reduce at SparkPi.scala:38
20/06/09 10:53:02 INFO DAGScheduler: Got job 0 (reduce at SparkPi.scala:38) with 10 output partitions
20/06/09 10:53:02 INFO DAGScheduler: Final stage: ResultStage 0 (reduce at SparkPi.scala:38)
20/06/09 10:53:02 INFO DAGScheduler: Parents of final stage: List()
20/06/09 10:53:02 INFO DAGScheduler: Missing parents: List()
20/06/09 10:53:02 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34), which has no missing parents
20/06/09 10:53:03 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 2.0 KB, free 117.0 MB)
20/06/09 10:53:03 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1381.0 B, free 117.0 MB)
20/06/09 10:53:03 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.0.2.15:33755 (size: 1381.0 B, free: 117.0 MB)
20/06/09 10:53:03 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1163
20/06/09 10:53:03 INFO DAGScheduler: Submitting 10 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4, 5, 6, 7, 8, 9))
20/06/09 10:53:03 INFO TaskSchedulerImpl: Adding task set 0.0 with 10 tasks
20/06/09 10:53:18 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
20/06/09 10:53:33 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
20/06/09 10:53:48 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
20/06/09 10:54:03 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
20/06/09 10:54:18 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
20/06/09 10:54:33 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
20/06/09 10:54:48 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
20/06/09 10:55:03 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200609085300-0002/13 is now EXITED (Command exited with code 1)
20/06/09 10:55:03 INFO StandaloneSchedulerBackend: Executor app-20200609085300-0002/13 removed: Command exited with code 1
20/06/09 10:55:03 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20200609085300-0002/15 on worker-20200609084426-10.131.1.27-46041 (10.131.1.27:46041) with 1 core(s)
20/06/09 10:55:03 INFO StandaloneSchedulerBackend: Granted executor ID app-20200609085300-0002/15 on hostPort 10.131.1.27:46041 with 1 core(s), 512.0 MB RAM
20/06/09 10:55:03 INFO BlockManagerMasterEndpoint: Trying to remove executor 13 from BlockManagerMaster.
20/06/09 10:55:03 INFO BlockManagerMaster: Removal of executor 13 requested
20/06/09 10:55:03 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asked to remove non-existent executor 13
20/06/09 10:55:03 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200609085300-0002/15 is now RUNNING
20/06/09 10:55:03 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
20/06/09 10:55:03 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200609085300-0002/12 is now EXITED (Command exited with code 1)
20/06/09 10:55:03 INFO StandaloneSchedulerBackend: Executor app-20200609085300-0002/12 removed: Command exited with code 1
20/06/09 10:55:03 INFO BlockManagerMaster: Removal of executor 12 requested
20/06/09 10:55:03 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asked to remove non-existent executor 12
20/06/09 10:55:03 INFO BlockManagerMasterEndpoint: Trying to remove executor 12 from BlockManagerMaster.
20/06/09 10:55:03 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20200609085300-0002/16 on worker-20200609084426-10.131.1.27-46041 (10.131.1.27:46041) with 1 core(s)
20/06/09 10:55:03 INFO StandaloneSchedulerBackend: Granted executor ID app-20200609085300-0002/16 on hostPort 10.131.1.27:46041 with 1 core(s), 512.0 MB RAM
20/06/09 10:55:03 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200609085300-0002/16 is now RUNNING
20/06/09 10:55:03 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200609085300-0002/14 is now EXITED (Command exited with code 1)
20/06/09 10:55:03 INFO StandaloneSchedulerBackend: Executor app-20200609085300-0002/14 removed: Command exited with code 1
20/06/09 10:55:03 INFO BlockManagerMaster: Removal of executor 14 requested
20/06/09 10:55:03 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asked to remove non-existent executor 14
20/06/09 10:55:03 INFO BlockManagerMasterEndpoint: Trying to remove executor 14 from BlockManagerMaster.
20/06/09 10:55:03 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20200609085300-0002/17 on worker-20200609084426-10.131.1.27-46041 (10.131.1.27:46041) with 1 core(s)
20/06/09 10:55:03 INFO StandaloneSchedulerBackend: Granted executor ID app-20200609085300-0002/17 on hostPort 10.131.1.27:46041 with 1 core(s), 512.0 MB RAM
20/06/09 10:55:03 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200609085300-0002/17 is now RUNNING
20/06/09 10:55:03 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200609085300-0002/10 is now EXITED (Command exited with code 1)
20/06/09 10:55:03 INFO StandaloneSchedulerBackend: Executor app-20200609085300-0002/10 removed: Command exited with code 1
20/06/09 10:55:03 INFO BlockManagerMaster: Removal of executor 10 requested
20/06/09 10:55:03 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asked to remove non-existent executor 10
20/06/09 10:55:03 INFO BlockManagerMasterEndpoint: Trying to remove executor 10 from BlockManagerMaster.
20/06/09 10:55:03 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20200609085300-0002/18 on worker-20200609084426-10.131.1.27-46041 (10.131.1.27:46041) with 1 core(s)
20/06/09 10:55:03 INFO StandaloneSchedulerBackend: Granted executor ID app-20200609085300-0002/18 on hostPort 10.131.1.27:46041 with 1 core(s), 512.0 MB RAM
20/06/09 10:55:03 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200609085300-0002/18 is now RUNNING
20/06/09 10:55:03 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200609085300-0002/8 is now EXITED (Command exited with code 1)
20/06/09 10:55:03 INFO StandaloneSchedulerBackend: Executor app-20200609085300-0002/8 removed: Command exited with code 1
20/06/09 10:55:03 INFO BlockManagerMaster: Removal of executor 8 requested
20/06/09 10:55:03 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asked to remove non-existent executor 8
20/06/09 10:55:03 INFO BlockManagerMasterEndpoint: Trying to remove executor 8 from BlockManagerMaster.
20/06/09 10:55:03 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20200609085300-0002/19 on worker-20200609084509-10.128.3.197-41600 (10.128.3.197:41600) with 1 core(s)
20/06/09 10:55:03 INFO StandaloneSchedulerBackend: Granted executor ID app-20200609085300-0002/19 on hostPort 10.128.3.197:41600 with 1 core(s), 512.0 MB RAM
20/06/09 10:55:03 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200609085300-0002/11 is now EXITED (Command exited with code 1)
20/06/09 10:55:03 INFO StandaloneSchedulerBackend: Executor app-20200609085300-0002/11 removed: Command exited with code 1
20/06/09 10:55:03 INFO BlockManagerMasterEndpoint: Trying to remove executor 11 from BlockManagerMaster.
20/06/09 10:55:03 INFO BlockManagerMaster: Removal of executor 11 requested
20/06/09 10:55:03 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asked to remove non-existent executor 11
20/06/09 10:55:03 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20200609085300-0002/20 on worker-20200609084426-10.131.1.27-46041 (10.131.1.27:46041) with 1 core(s)
20/06/09 10:55:03 INFO StandaloneSchedulerBackend: Granted executor ID app-20200609085300-0002/20 on hostPort 10.131.1.27:46041 with 1 core(s), 512.0 MB RAM
20/06/09 10:55:03 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200609085300-0002/19 is now RUNNING
20/06/09 10:55:03 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200609085300-0002/20 is now RUNNING
20/06/09 10:55:03 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200609085300-0002/7 is now EXITED (Command exited with code 1)
20/06/09 10:55:03 INFO StandaloneSchedulerBackend: Executor app-20200609085300-0002/7 removed: Command exited with code 1
20/06/09 10:55:03 INFO BlockManagerMaster: Removal of executor 7 requested
20/06/09 10:55:03 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asked to remove non-existent executor 7
20/06/09 10:55:03 INFO BlockManagerMasterEndpoint: Trying to remove executor 7 from BlockManagerMaster.
20/06/09 10:55:03 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20200609085300-0002/21 on worker-20200609084509-10.128.3.197-41600 (10.128.3.197:41600) with 1 core(s)
20/06/09 10:55:03 INFO StandaloneSchedulerBackend: Granted executor ID app-20200609085300-0002/21 on hostPort 10.128.3.197:41600 with 1 core(s), 512.0 MB RAM
20/06/09 10:55:03 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200609085300-0002/21 is now RUNNING
20/06/09 10:55:03 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200609085300-0002/9 is now EXITED (Command exited with code 1)
20/06/09 10:55:03 INFO StandaloneSchedulerBackend: Executor app-20200609085300-0002/9 removed: Command exited with code 1
20/06/09 10:55:03 INFO BlockManagerMaster: Removal of executor 9 requested
20/06/09 10:55:03 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asked to remove non-existent executor 9
20/06/09 10:55:03 INFO BlockManagerMasterEndpoint: Trying to remove executor 9 from BlockManagerMaster.
20/06/09 10:55:03 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20200609085300-0002/22 on worker-20200609084509-10.128.3.197-41600 (10.128.3.197:41600) with 1 core(s)
...
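From the log it looks like the driver binds to 10.0.2.15 (a NAT address; see the loopback warning at the top), the master keeps granting executors, and every executor eventually exits with code 1 and gets replaced. My guess, which I cannot confirm from the driver log alone, is that the worker pods inside Kubernetes cannot connect back to the driver. A sketch of what I could try, pinning the driver to an address and ports reachable from inside the cluster (the hostname and port numbers are placeholders; spark.driver.host, spark.driver.port and spark.driver.blockManager.port are standard Spark properties):
# Sketch only: advertise an address/ports the worker pods can actually reach
# (my-laptop.example.com and the port numbers are placeholders, not values from my setup)
./spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://spark-cluster-ra-iot-dev.si-origin-cluster.t-systems.es:32316 \
  --conf spark.driver.host=my-laptop.example.com \
  --conf spark.driver.port=7078 \
  --conf spark.driver.blockManager.port=7079 \
  --num-executors 1 \
  --driver-memory 512m \
  --executor-memory 512m \
  --executor-cores 1 \
  ../examples/jars/spark-examples_2.11-2.4.5.jar 10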

Related

Can the driver program and cluster manager (resource manager) be on the same machine in Spark standalone mode?

I'm running spark-submit from the same machine as the Spark master, using the following command:
./bin/spark-submit --master spark://ip:port --deploy-mode "client" test.py
My application runs forever with the following kind of output:
22/11/18 13:17:37 INFO BlockManagerMaster: Removal of executor 8 requested
22/11/18 13:17:37 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asked to remove non-existent executor 8
22/11/18 13:17:37 INFO StandaloneSchedulerBackend: Granted executor ID app-20221118131723-0008/10 on hostPort 192.168.210.94:37443 with 2 core(s), 1024.0 MiB RAM
22/11/18 13:17:37 INFO BlockManagerMasterEndpoint: Trying to remove executor 8 from BlockManagerMaster.
22/11/18 13:17:37 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20221118131723-0008/10 is now RUNNING
22/11/18 13:17:38 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20221118131723-0008/9 is now EXITED (Command exited with code 1)
22/11/18 13:17:38 INFO StandaloneSchedulerBackend: Executor app-20221118131723-0008/9 removed: Command exited with code 1
22/11/18 13:17:38 INFO BlockManagerMaster: Removal of executor 9 requested
22/11/18 13:17:38 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asked to remove non-existent executor 9
22/11/18 13:17:38 INFO BlockManagerMasterEndpoint: Trying to remove executor 9 from BlockManagerMaster.
22/11/18 13:17:38 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20221118131723-0008/11 on worker-20221118111836-192.168.210.82-46395 (192.168.210.82:4639
But when I run it from other nodes, my application runs successfully. What could be the reason?
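Since it only fails when submitted from the master machine, one hedged thing worth checking is which address the driver advertises from that host; Spark's own startup warning points at the SPARK_LOCAL_IP environment variable for this. A sketch (the IP is a placeholder for an address of the master host that the workers can route to):
# Placeholder IP: pick an address of this host that the workers can reach
export SPARK_LOCAL_IP=192.168.210.10
./bin/spark-submit --master spark://ip:port --deploy-mode "client" test.py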

How to deploy Spark on KubeEdge?

I tried to use k8s deployment mode to deploy spark-2.4.3 on KubeEdge 1.1.0 but failed (Docker version 19.03.4, k8s version 1.16.1).
SPARK_DRIVER_BIND_ADDRESS=10.4.20.34
SPARK_IMAGE=spark:2.4.3
SPARK_MASTER="k8s://http://127.0.0.1:8080"
CMD=(
"$SPARK_HOME/bin/spark-submit"
--conf "spark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS"
--conf "spark.kubernetes.container.image=${SPARK_IMAGE}"
--conf "spark.executor.instances=1"
--conf "spark.kubernetes.executor.limit.cores=1"
--deploy-mode client
--master ${SPARK_MASTER}
--name spark-pi
--class org.apache.spark.examples.SparkPi
--driver-memory 1G
--executor-memory 1G
--num-executors 1
--executor-cores 1
file://${PWD}/spark-examples_2.11-2.4.3.jar
)
"${CMD[@]}"
Node status is normal.
kubectl get nodes
NAME STATUS ROLES AGE VERSION
edge-node-001 Ready edge 6d1h v1.15.3-kubeedge-v1.1.0-beta.0.178+c6a5aa738261e7-dirty
ubuntu-ms-7b89 Ready master 6d4h v1.16.1
But I got some errors:
19/11/17 21:45:12 INFO k8s.ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
19/11/17 21:45:12 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 46571.
19/11/17 21:45:12 INFO netty.NettyBlockTransferService: Server created on 10.4.20.34:46571
19/11/17 21:45:12 INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
19/11/17 21:45:12 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.4.20.34, 46571, None)
19/11/17 21:45:12 INFO storage.BlockManagerMasterEndpoint: Registering block manager 10.4.20.34:46571 with 366.3 MB RAM, BlockManagerId(driver, 10.4.20.34, 46571, None)
19/11/17 21:45:12 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.4.20.34, 46571, None)
19/11/17 21:45:12 INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.4.20.34, 46571, None)
19/11/17 21:45:12 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler#451882b2{/metrics/json,null,AVAILABLE,#Spark}
19/11/17 21:45:42 INFO k8s.KubernetesClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
19/11/17 21:45:42 INFO spark.SparkContext: Starting job: reduce at SparkPi.scala:38
19/11/17 21:45:42 INFO scheduler.DAGScheduler: Got job 0 (reduce at SparkPi.scala:38) with 2 output partitions
19/11/17 21:45:42 INFO scheduler.DAGScheduler: Final stage: ResultStage 0 (reduce at SparkPi.scala:38)
19/11/17 21:45:42 INFO scheduler.DAGScheduler: Parents of final stage: List()
19/11/17 21:45:42 INFO scheduler.DAGScheduler: Missing parents: List()
19/11/17 21:45:42 INFO scheduler.DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34), which has no missing parents
19/11/17 21:45:42 INFO memory.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1936.0 B, free 366.3 MB)
19/11/17 21:45:42 INFO memory.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1256.0 B, free 366.3 MB)
19/11/17 21:45:42 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.4.20.34:46571 (size: 1256.0 B, free: 366.3 MB)
19/11/17 21:45:42 INFO spark.SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1161
19/11/17 21:45:42 INFO scheduler.DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34) (first 15 tasks are for partitions Vector(0, 1))
19/11/17 21:45:42 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
19/11/17 21:45:57 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
19/11/17 21:46:12 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
19/11/17 21:46:27 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
19/11/17 21:46:42 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
19/11/17 21:46:57 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
19/11/17 21:47:12 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
Is it possible to deploy Spark on KubeEdge in Kubernetes deployment mode, or should I try standalone deployment mode?
I'm confused.
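When this warning shows up under the Kubernetes scheduler, a reasonable first step is to inspect the executor pods directly; a diagnostic sketch (spark-role is a label Spark on Kubernetes applies to its pods, and the pod name is illustrative):
kubectl get pods -l spark-role=executor     # are executor pods Running, Pending or in Error?
kubectl describe pod <executor-pod-name>    # scheduling events, image pull problems, node placement
kubectl logs <executor-pod-name>            # the executor's own reason for exiting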

Spark Standalone Mode, application runs, but executor is killed with exitStatus 1

I am new to Apache Spark and was trying to run the example Pi calculation application on my local Spark setup (using a standalone cluster).
The Master, Slave, and Driver are all running on my local machine.
What I notice is that Pi is calculated successfully; however, in the slave logs I see that the Worker/Executor is killed with exitStatus 1.
I do not see any errors/exceptions logged to the console otherwise.
I tried finding help on similar issues, but most of the search hits refer to exitStatus 137 etc. (e.g. Spark application kills executor).
I have failed miserably to understand why the Worker is being killed instead of completing execution with the 'EXITED' state. I think it's related to how I am executing the app, but I'm not clear on what I'm doing wrong.
Can someone guide me on identifying the root cause?
Given below is the code I am using for the Pi calculation, followed by the logs of the master, slave, and driver respectively.
PI Calculation Application
package sparky

import org.apache.spark.scheduler._
import org.apache.spark.sql.SparkSession

import scala.math.random

object Application {
  def runSpark(args: Array[String]): Unit = {
    val spark = SparkSession
      .builder
      .appName("Spark Pi")
      .getOrCreate()
    spark.sparkContext.addSparkListener(new MyListener())

    val slices = if (args.length > 0) args(0).toInt else 2
    val n = math.min(100000L * slices, Int.MaxValue).toInt // avoid overflow
    val count = spark.sparkContext.parallelize(1 until n, slices).map { i =>
      val x = random * 2 - 1
      val y = random * 2 - 1
      if (x * x + y * y <= 1) 1 else 0
    }.reduce(_ + _)
    println("Pi is roughly " + 4.0 * count / (n - 1))

    spark.stop()
  }

  def main(args: Array[String]) = {
    Application.runSpark(args)
  }
}
Master Console Output
C:\Servers\apache-spark\2.2.0\bin
λ start-master.cmd -h 0.0.0.0
C:\Platforms\Java\jdk1.8.0_65\bin\java -cp "C:\Servers\apache-spark\2.2.0\bin\..\conf\;C:\Servers\apache-spark\2.2.0\bin\..\jars\*" -Xmx1g org.apache.spark.deploy.master.Master
18/01/25 09:01:30,099 INFO Master: Started daemon with process name: 14900#somemachine
18/01/25 09:01:30,580 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/01/25 09:01:30,680 INFO SecurityManager: Changing view acls to: someuser
18/01/25 09:01:30,681 INFO SecurityManager: Changing modify acls to: someuser
18/01/25 09:01:30,682 INFO SecurityManager: Changing view acls groups to:
18/01/25 09:01:30,683 INFO SecurityManager: Changing modify acls groups to:
18/01/25 09:01:30,684 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(someuser); groups with view permissions: Set(); users with modify permissions: Set(someuser); groups with modify permissions: Set()
18/01/25 09:01:31,711 INFO Utils: Successfully started service 'sparkMaster' on port 7077.
18/01/25 09:01:31,829 INFO Master: Starting Spark master at spark://0.0.0.0:7077
18/01/25 09:01:31,833 INFO Master: Running Spark version 2.2.0
18/01/25 09:01:31,903 INFO log: Logging initialized #2692ms
18/01/25 09:01:31,960 INFO Server: jetty-9.3.z-SNAPSHOT
18/01/25 09:01:32,025 INFO Server: Started #2816ms
18/01/25 09:01:32,057 INFO AbstractConnector: Started ServerConnector#106ca013{HTTP/1.1,[http/1.1]}{0.0.0.0:8080}
18/01/25 09:01:32,058 INFO Utils: Successfully started service 'MasterUI' on port 8080.
18/01/25 09:01:32,087 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#41cc88b{/app,null,AVAILABLE,#Spark}
18/01/25 09:01:32,088 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#1c63bda6{/app/json,null,AVAILABLE,#Spark}
18/01/25 09:01:32,089 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#45ae273f{/,null,AVAILABLE,#Spark}
18/01/25 09:01:32,090 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#7a319c60{/json,null,AVAILABLE,#Spark}
18/01/25 09:01:32,098 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#23510beb{/static,null,AVAILABLE,#Spark}
18/01/25 09:01:32,099 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#462c632c{/app/kill,null,AVAILABLE,#Spark}
18/01/25 09:01:32,101 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#436ef27b{/driver/kill,null,AVAILABLE,#Spark}
18/01/25 09:01:32,104 INFO MasterWebUI: Bound MasterWebUI to 0.0.0.0, and started at http://192.168.56.1:8080
18/01/25 09:01:32,119 INFO Server: jetty-9.3.z-SNAPSHOT
18/01/25 09:01:32,130 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#6f7d1cba{/,null,AVAILABLE}
18/01/25 09:01:32,134 INFO AbstractConnector: Started ServerConnector#3f9e9637{HTTP/1.1,[http/1.1]}{0.0.0.0:6066}
18/01/25 09:01:32,134 INFO Server: Started #2925ms
18/01/25 09:01:32,134 INFO Utils: Successfully started service on port 6066.
18/01/25 09:01:32,135 INFO StandaloneRestServer: Started REST server for submitting applications on port 6066
18/01/25 09:01:32,358 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#7b3e5adb{/metrics/master/json,null,AVAILABLE,#Spark}
18/01/25 09:01:32,362 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#139cbe00{/metrics/applications/json,null,AVAILABLE,#Spark}
18/01/25 09:01:32,399 INFO Master: I have been elected leader! New state: ALIVE
18/01/25 09:01:41,225 INFO Master: Registering worker 192.168.56.1:48591 with 4 cores, 14.4 GB RAM
18/01/25 09:01:53,510 INFO Master: Registering app Spark Pi
18/01/25 09:01:53,515 INFO Master: Registered app Spark Pi with ID app-20180125090153-0000
18/01/25 09:01:53,569 INFO Master: Launching executor app-20180125090153-0000/0 on worker worker-20180125090140-192.168.56.1-48591
18/01/25 09:02:00,262 INFO Master: Received unregister request from application app-20180125090153-0000
18/01/25 09:02:00,269 INFO Master: Removing app app-20180125090153-0000
18/01/25 09:02:00,323 WARN Master: Got status update for unknown executor app-20180125090153-0000/0
18/01/25 09:02:00,338 INFO Master: 127.0.0.1:48625 got disassociated, removing it.
18/01/25 09:02:00,345 INFO Master: 192.168.56.1:48620 got disassociated, removing it.
Slave Console Output
C:\Servers\apache-spark\2.2.0\bin
λ start-slave.cmd -h 0.0.0.0
C:\Platforms\Java\jdk1.8.0_65\bin\java -cp "C:\Servers\apache-spark\2.2.0\bin\..\conf\;C:\Servers\apache-spark\2.2.0\bin\..\jars\*" -Xmx1g org.apache.spark.deploy.worker.Worker spark://0.0.0.0:7077
18/01/25 09:01:38,054 INFO Worker: Started daemon with process name: 14532#somemachine
18/01/25 09:01:38,546 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/01/25 09:01:38,644 INFO SecurityManager: Changing view acls to: someuser
18/01/25 09:01:38,645 INFO SecurityManager: Changing modify acls to: someuser
18/01/25 09:01:38,646 INFO SecurityManager: Changing view acls groups to:
18/01/25 09:01:38,647 INFO SecurityManager: Changing modify acls groups to:
18/01/25 09:01:38,648 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(someuser); groups with view permissions: Set(); users with modify permissions: Set(someuser); groups with modify permissions: Set()
18/01/25 09:01:39,655 INFO Utils: Successfully started service 'sparkWorker' on port 48591.
18/01/25 09:01:40,521 INFO Worker: Starting Spark worker 192.168.56.1:48591 with 4 cores, 14.4 GB RAM
18/01/25 09:01:40,526 INFO Worker: Running Spark version 2.2.0
18/01/25 09:01:40,527 INFO Worker: Spark home: C:\Servers\apache-spark\2.2.0\bin\..
18/01/25 09:01:40,586 INFO log: Logging initialized #3430ms
18/01/25 09:01:40,636 INFO Server: jetty-9.3.z-SNAPSHOT
18/01/25 09:01:40,657 INFO Server: Started #3503ms
18/01/25 09:01:40,787 WARN Utils: Service 'WorkerUI' could not bind on port 8081. Attempting port 8082.
18/01/25 09:01:40,797 INFO AbstractConnector: Started ServerConnector#24c54ec4{HTTP/1.1,[http/1.1]}{0.0.0.0:8082}
18/01/25 09:01:40,797 INFO Utils: Successfully started service 'WorkerUI' on port 8082.
18/01/25 09:01:40,832 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#6e86345{/logPage,null,AVAILABLE,#Spark}
18/01/25 09:01:40,833 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#43dbfd42{/logPage/json,null,AVAILABLE,#Spark}
18/01/25 09:01:40,834 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#768b7729{/,null,AVAILABLE,#Spark}
18/01/25 09:01:40,836 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#382e7183{/json,null,AVAILABLE,#Spark}
18/01/25 09:01:40,844 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#459d7b70{/static,null,AVAILABLE,#Spark}
18/01/25 09:01:40,845 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#5bf4fc9c{/log,null,AVAILABLE,#Spark}
18/01/25 09:01:40,849 INFO WorkerWebUI: Bound WorkerWebUI to 0.0.0.0, and started at http://192.168.56.1:8082
18/01/25 09:01:40,853 INFO Worker: Connecting to master 0.0.0.0:7077...
18/01/25 09:01:40,885 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#4e93ba9d{/metrics/json,null,AVAILABLE,#Spark}
18/01/25 09:01:40,971 INFO TransportClientFactory: Successfully created connection to /0.0.0.0:7077 after 82 ms (0 ms spent in bootstraps)
18/01/25 09:01:41,246 INFO Worker: Successfully registered with master spark://0.0.0.0:7077
18/01/25 09:01:53,621 INFO Worker: Asked to launch executor app-20180125090153-0000/0 for Spark Pi
18/01/25 09:01:53,661 INFO SecurityManager: Changing view acls to: someuser
18/01/25 09:01:53,663 INFO SecurityManager: Changing modify acls to: someuser
18/01/25 09:01:53,664 INFO SecurityManager: Changing view acls groups to:
18/01/25 09:01:53,668 INFO SecurityManager: Changing modify acls groups to:
18/01/25 09:01:53,669 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(someuser); groups with view permissions: Set(); users with modify permissions: Set(someuser); groups with modify permissions: Set()
18/01/25 09:01:53,695 INFO ExecutorRunner: Launch command: "C:\Platforms\Java\jdk1.8.0_65\bin\java" "-cp" "C:\Servers\apache-spark\2.2.0\bin\..\conf\;C:\Servers\apache-spark\2.2.0\bin\..\jars\*" "-Xmx1024M" "-Dspark.driver.port=48620" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "--driver-url" "spark://CoarseGrainedScheduler#192.168.56.1:48620" "--executor-id" "0" "--hostname" "192.168.56.1" "--cores" "4" "--app-id" "app-20180125090153-0000" "--worker-url" "spark://Worker#192.168.56.1:48591"
18/01/25 09:02:00,297 INFO Worker: Asked to kill executor app-20180125090153-0000/0
18/01/25 09:02:00,303 INFO ExecutorRunner: Runner thread for executor app-20180125090153-0000/0 interrupted
18/01/25 09:02:00,305 INFO ExecutorRunner: Killing process!
18/01/25 09:02:00,323 INFO Worker: Executor app-20180125090153-0000/0 finished with state KILLED exitStatus 1
18/01/25 09:02:00,336 INFO ExternalShuffleBlockResolver: Application app-20180125090153-0000 removed, cleanupLocalDirs = true
18/01/25 09:02:00,340 INFO Worker: Cleaning up local directories for application app-20180125090153-0000
Driver Console Output
9:01:47 AM: Executing task 'submitToSpark'...
C:\Applications\scala\sparky\app\build\libs\sparky-app-0.0.1.jar
:app:compileJava NO-SOURCE
:app:compileScala UP-TO-DATE
:app:processResources NO-SOURCE
:app:classes UP-TO-DATE
:app:jar UP-TO-DATE
:runner:submitToSpark
C:\Platforms\Java\jdk1.8.0_65\bin\java -cp "C:\Servers\apache-spark\2.2.0\bin\..\conf\;C:\Servers\apache-spark\2.2.0\bin\..\jars\*" -Xmx1g org.apache.spark.deploy.SparkSubmit --master spark://localhost:7077 C:\Applications\scala\sparky\app\build\libs\sparky-app-0.0.1.jar
18/01/25 09:01:51,111 INFO SparkContext: Running Spark version 2.2.0
18/01/25 09:01:51,465 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/01/25 09:01:51,677 INFO SparkContext: Submitted application: Spark Pi
18/01/25 09:01:51,711 INFO SecurityManager: Changing view acls to: someuser
18/01/25 09:01:51,712 INFO SecurityManager: Changing modify acls to: someuser
18/01/25 09:01:51,712 INFO SecurityManager: Changing view acls groups to:
18/01/25 09:01:51,713 INFO SecurityManager: Changing modify acls groups to:
18/01/25 09:01:51,714 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(someuser); groups with view permissions: Set(); users with modify permissions: Set(someuser); groups with modify permissions: Set()
18/01/25 09:01:52,639 INFO Utils: Successfully started service 'sparkDriver' on port 48620.
18/01/25 09:01:52,669 INFO SparkEnv: Registering MapOutputTracker
18/01/25 09:01:52,695 INFO SparkEnv: Registering BlockManagerMaster
18/01/25 09:01:52,699 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
18/01/25 09:01:52,700 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
18/01/25 09:01:52,712 INFO DiskBlockManager: Created local directory at C:\Users\someuser\AppData\Local\Temp\blockmgr-f9908c61-a91a-43d5-8d24-e0fd86d55d1c
18/01/25 09:01:52,740 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
18/01/25 09:01:52,808 INFO SparkEnv: Registering OutputCommitCoordinator
18/01/25 09:01:52,924 INFO log: Logging initialized #3539ms
18/01/25 09:01:53,009 INFO Server: jetty-9.3.z-SNAPSHOT
18/01/25 09:01:53,038 INFO Server: Started #3654ms
18/01/25 09:01:53,067 INFO AbstractConnector: Started ServerConnector#21a5fd96{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
18/01/25 09:01:53,067 INFO Utils: Successfully started service 'SparkUI' on port 4040.
18/01/25 09:01:53,099 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#40bffbca{/jobs,null,AVAILABLE,#Spark}
18/01/25 09:01:53,100 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#6c4f9535{/jobs/json,null,AVAILABLE,#Spark}
18/01/25 09:01:53,100 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#30c31dd7{/jobs/job,null,AVAILABLE,#Spark}
18/01/25 09:01:53,101 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#c1fca1e{/jobs/job/json,null,AVAILABLE,#Spark}
18/01/25 09:01:53,102 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#344344fa{/stages,null,AVAILABLE,#Spark}
18/01/25 09:01:53,103 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#70e659aa{/stages/json,null,AVAILABLE,#Spark}
18/01/25 09:01:53,103 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#285f09de{/stages/stage,null,AVAILABLE,#Spark}
18/01/25 09:01:53,105 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#48e64352{/stages/stage/json,null,AVAILABLE,#Spark}
18/01/25 09:01:53,106 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#4362d7df{/stages/pool,null,AVAILABLE,#Spark}
18/01/25 09:01:53,106 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#1c25b8a7{/stages/pool/json,null,AVAILABLE,#Spark}
18/01/25 09:01:53,107 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#750fe12e{/storage,null,AVAILABLE,#Spark}
18/01/25 09:01:53,108 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#3e587920{/storage/json,null,AVAILABLE,#Spark}
18/01/25 09:01:53,108 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#24f43aa3{/storage/rdd,null,AVAILABLE,#Spark}
18/01/25 09:01:53,109 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#1e11bc55{/storage/rdd/json,null,AVAILABLE,#Spark}
18/01/25 09:01:53,110 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#70e0accd{/environment,null,AVAILABLE,#Spark}
18/01/25 09:01:53,112 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#6ab72419{/environment/json,null,AVAILABLE,#Spark}
18/01/25 09:01:53,112 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#4fdfa676{/executors,null,AVAILABLE,#Spark}
18/01/25 09:01:53,113 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#5be82d43{/executors/json,null,AVAILABLE,#Spark}
18/01/25 09:01:53,114 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#345e5a17{/executors/threadDump,null,AVAILABLE,#Spark}
18/01/25 09:01:53,115 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#443dbe42{/executors/threadDump/json,null,AVAILABLE,#Spark}
18/01/25 09:01:53,125 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#1734f68{/static,null,AVAILABLE,#Spark}
18/01/25 09:01:53,125 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#31c269fd{/,null,AVAILABLE,#Spark}
18/01/25 09:01:53,127 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#47747fb9{/api,null,AVAILABLE,#Spark}
18/01/25 09:01:53,128 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#70eecdc2{/jobs/job/kill,null,AVAILABLE,#Spark}
18/01/25 09:01:53,129 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#7db0565c{/stages/stage/kill,null,AVAILABLE,#Spark}
18/01/25 09:01:53,133 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.56.1:4040
18/01/25 09:01:53,174 INFO SparkContext: Added JAR file:/C:/Applications/scala/sparky/app/build/libs/sparky-app-0.0.1.jar at spark://192.168.56.1:48620/jars/sparky-app-0.0.1.jar with timestamp 1516888913174
18/01/25 09:01:53,318 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://localhost:7077...
18/01/25 09:01:53,389 INFO TransportClientFactory: Successfully created connection to localhost/127.0.0.1:7077 after 42 ms (0 ms spent in bootstraps)
18/01/25 09:01:53,554 INFO StandaloneSchedulerBackend: Connected to Spark cluster with app ID app-20180125090153-0000
18/01/25 09:01:53,577 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 48642.
18/01/25 09:01:53,578 INFO NettyBlockTransferService: Server created on 192.168.56.1:48642
18/01/25 09:01:53,582 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
18/01/25 09:01:53,590 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.56.1, 48642, None)
18/01/25 09:01:53,595 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.56.1:48642 with 366.3 MB RAM, BlockManagerId(driver, 192.168.56.1, 48642, None)
18/01/25 09:01:53,600 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.56.1, 48642, None)
18/01/25 09:01:53,601 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.56.1, 48642, None)
18/01/25 09:01:53,667 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20180125090153-0000/0 on worker-20180125090140-192.168.56.1-48591 (192.168.56.1:48591) with 4 cores
18/01/25 09:01:53,668 INFO StandaloneSchedulerBackend: Granted executor ID app-20180125090153-0000/0 on hostPort 192.168.56.1:48591 with 4 cores, 1024.0 MB RAM
18/01/25 09:01:53,901 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#74fef3f7{/metrics/json,null,AVAILABLE,#Spark}
18/01/25 09:01:55,026 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20180125090153-0000/0 is now RUNNING
18/01/25 09:01:55,096 INFO EventLoggingListener: Logging events to file:///C:/Dustbin/spark-events/app-20180125090153-0000
18/01/25 09:01:55,127 INFO StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
18/01/25 09:01:55,218 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/C:/Applications/scala/sparky/runner/spark-warehouse/').
18/01/25 09:01:55,219 INFO SharedState: Warehouse path is 'file:/C:/Applications/scala/sparky/runner/spark-warehouse/'.
18/01/25 09:01:55,228 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#50a691d3{/SQL,null,AVAILABLE,#Spark}
18/01/25 09:01:55,228 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#3b95d13c{/SQL/json,null,AVAILABLE,#Spark}
18/01/25 09:01:55,229 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#54d901aa{/SQL/execution,null,AVAILABLE,#Spark}
18/01/25 09:01:55,230 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#573284a5{/SQL/execution/json,null,AVAILABLE,#Spark}
18/01/25 09:01:55,233 INFO ContextHandler: Started o.s.j.s.ServletContextHandler#507b79f7{/static/sql,null,AVAILABLE,#Spark}
18/01/25 09:01:56,232 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
18/01/25 09:01:56,609 INFO SparkContext: Starting job: reduce at Application.scala:29
18/01/25 09:01:56,636 INFO DAGScheduler: Got job 0 (reduce at Application.scala:29) with 2 output partitions
18/01/25 09:01:56,637 INFO DAGScheduler: Final stage: ResultStage 0 (reduce at Application.scala:29)
18/01/25 09:01:56,638 INFO DAGScheduler: Parents of final stage: List()
18/01/25 09:01:56,640 INFO DAGScheduler: Missing parents: List()
18/01/25 09:01:56,654 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at Application.scala:25), which has no missing parents
18/01/25 09:01:56,815 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1800.0 B, free 366.3 MB)
18/01/25 09:01:56,980 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1168.0 B, free 366.3 MB)
18/01/25 09:01:56,984 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.56.1:48642 (size: 1168.0 B, free: 366.3 MB)
18/01/25 09:01:56,988 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1006
18/01/25 09:01:57,016 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at Application.scala:25) (first 15 tasks are for partitions Vector(0, 1))
18/01/25 09:01:57,018 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
18/01/25 09:01:58,617 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (192.168.56.1:48660) with ID 0
18/01/25 09:01:58,661 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 192.168.56.1, executor 0, partition 0, PROCESS_LOCAL, 4829 bytes)
18/01/25 09:01:58,665 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, 192.168.56.1, executor 0, partition 1, PROCESS_LOCAL, 4829 bytes)
18/01/25 09:01:59,242 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.56.1:48678 with 366.3 MB RAM, BlockManagerId(0, 192.168.56.1, 48678, None)
18/01/25 09:01:59,819 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.56.1:48678 (size: 1168.0 B, free: 366.3 MB)
18/01/25 09:02:00,139 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 1500 ms on 192.168.56.1 (executor 0) (1/2)
18/01/25 09:02:00,142 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 1478 ms on 192.168.56.1 (executor 0) (2/2)
18/01/25 09:02:00,143 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
18/01/25 09:02:00,150 INFO DAGScheduler: ResultStage 0 (reduce at Application.scala:29) finished in 3.109 s
18/01/25 09:02:00,156 INFO DAGScheduler: Job 0 finished: reduce at Application.scala:29, took 3.546255 s
Pi is roughly 3.1363756818784094
18/01/25 09:02:00,168 INFO AbstractConnector: Stopped Spark#21a5fd96{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
18/01/25 09:02:00,170 INFO SparkUI: Stopped Spark web UI at http://192.168.56.1:4040
18/01/25 09:02:00,247 INFO StandaloneSchedulerBackend: Shutting down all executors
18/01/25 09:02:00,249 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asking each executor to shut down
18/01/25 09:02:00,269 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
18/01/25 09:02:00,300 INFO MemoryStore: MemoryStore cleared
18/01/25 09:02:00,301 INFO BlockManager: BlockManager stopped
18/01/25 09:02:00,321 INFO BlockManagerMaster: BlockManagerMaster stopped
18/01/25 09:02:00,328 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
18/01/25 09:02:00,353 INFO SparkContext: Successfully stopped SparkContext
2018-01-25 09:02:00.353
18/01/25 09:02:00,358 INFO ShutdownHookManager: Shutdown hook called
18/01/25 09:02:00,360 INFO ShutdownHookManager: Deleting directory C:\Users\someuser\AppData\Local\Temp\spark-ac6369a0-abb8-476e-a527-91e0a8011302
BUILD SUCCESSFUL in 13s
3 actionable tasks: 1 executed, 2 up-to-date
9:02:01 AM: Task execution finished 'submitToSpark'.

Why would Spark executors be removed (with "ExecutorAllocationManager: Request to remove executorIds" in the logs)?

I'm trying to execute a Spark job on an AWS cluster of 6 c4.2xlarge nodes, and I don't know why Spark is killing the executors...
Any help will be appreciated.
Here is the spark-submit command:
. /usr/bin/spark-submit --packages="com.databricks:spark-avro_2.11:3.2.0" --jars RedshiftJDBC42-1.2.1.1001.jar --deploy-mode client --master yarn --num-executors 12 --executor-cores 3 --executor-memory 7G --driver-memory 7g --py-files dependencies.zip iface_extractions.py 2016-10-01 > output.log
At this line it starts to remove executors:
17/05/25 14:42:50 INFO ExecutorAllocationManager: Request to remove executorIds: 5, 3
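A likely factor, judging from a line further down in the log ("Using initial executors = 12, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances"), is that dynamic allocation is enabled, in which case releasing idle executors is expected behavior rather than an error. A hedged sketch of the same command with dynamic allocation disabled (spark.dynamicAllocation.enabled, minExecutors and executorIdleTimeout are standard Spark settings):
# Sketch: same command, but keep the 12 requested executors for the whole job
. /usr/bin/spark-submit --packages="com.databricks:spark-avro_2.11:3.2.0" \
  --jars RedshiftJDBC42-1.2.1.1001.jar --deploy-mode client --master yarn \
  --conf spark.dynamicAllocation.enabled=false \
  --num-executors 12 --executor-cores 3 --executor-memory 7G --driver-memory 7g \
  --py-files dependencies.zip iface_extractions.py 2016-10-01 > output.log
# Alternatively, keep dynamic allocation but raise the floor and idle timeout:
#   --conf spark.dynamicAllocation.minExecutors=12
#   --conf spark.dynamicAllocation.executorIdleTimeout=300s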
Output spark-submit log:
Ivy Default Cache set to: /home/hadoop/.ivy2/cache
The jars for the packages stored in: /home/hadoop/.ivy2/jars
:: loading settings :: url = jar:file:/usr/lib/spark/jars/ivy-2.4.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
com.databricks#spark-avro_2.11 added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent;1.0
confs: [default]
found com.databricks#spark-avro_2.11;3.2.0 in central
found org.slf4j#slf4j-api;1.7.5 in central
found org.apache.avro#avro;1.7.6 in central
found org.codehaus.jackson#jackson-core-asl;1.9.13 in central
found org.codehaus.jackson#jackson-mapper-asl;1.9.13 in central
found com.thoughtworks.paranamer#paranamer;2.3 in central
found org.xerial.snappy#snappy-java;1.0.5 in central
found org.apache.commons#commons-compress;1.4.1 in central
found org.tukaani#xz;1.0 in central
:: resolution report :: resolve 284ms :: artifacts dl 8ms
:: modules in use:
com.databricks#spark-avro_2.11;3.2.0 from central in [default]
com.thoughtworks.paranamer#paranamer;2.3 from central in [default]
org.apache.avro#avro;1.7.6 from central in [default]
org.apache.commons#commons-compress;1.4.1 from central in [default]
org.codehaus.jackson#jackson-core-asl;1.9.13 from central in [default]
org.codehaus.jackson#jackson-mapper-asl;1.9.13 from central in [default]
org.slf4j#slf4j-api;1.7.5 from central in [default]
org.tukaani#xz;1.0 from central in [default]
org.xerial.snappy#snappy-java;1.0.5 from central in [default]
:: evicted modules:
org.slf4j#slf4j-api;1.6.4 by [org.slf4j#slf4j-api;1.7.5] in [default]
---------------------------------------------------------------------
| | modules || artifacts |
| conf | number| search|dwnlded|evicted|| number|dwnlded|
---------------------------------------------------------------------
| default | 10 | 0 | 0 | 1 || 9 | 0 |
---------------------------------------------------------------------
:: retrieving :: org.apache.spark#spark-submit-parent
confs: [default]
0 artifacts copied, 9 already retrieved (0kB/8ms)
17/05/25 14:41:37 INFO SparkContext: Running Spark version 2.1.0
17/05/25 14:41:38 INFO SecurityManager: Changing view acls to: hadoop
17/05/25 14:41:38 INFO SecurityManager: Changing modify acls to: hadoop
17/05/25 14:41:38 INFO SecurityManager: Changing view acls groups to:
17/05/25 14:41:38 INFO SecurityManager: Changing modify acls groups to:
17/05/25 14:41:38 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); groups with view permissions: Set(); users with modify permissions: Set(hadoop); groups with modify permissions: Set()
17/05/25 14:41:38 INFO Utils: Successfully started service 'sparkDriver' on port 37132.
17/05/25 14:41:38 INFO SparkEnv: Registering MapOutputTracker
17/05/25 14:41:38 INFO SparkEnv: Registering BlockManagerMaster
17/05/25 14:41:38 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/05/25 14:41:38 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/05/25 14:41:38 INFO DiskBlockManager: Created local directory at /mnt/tmp/blockmgr-e368a261-c1a1-49e7-8533-8081896a45e4
17/05/25 14:41:38 INFO MemoryStore: MemoryStore started with capacity 4.0 GB
17/05/25 14:41:38 INFO SparkEnv: Registering OutputCommitCoordinator
17/05/25 14:41:39 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/05/25 14:41:39 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.185.53.161:4040
17/05/25 14:41:39 INFO Utils: Using initial executors = 12, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
17/05/25 14:41:39 INFO RMProxy: Connecting to ResourceManager at ip-10-185-53-161.eu-west-1.compute.internal/10.185.53.161:8032
17/05/25 14:41:39 INFO Client: Requesting a new application from cluster with 5 NodeManagers
17/05/25 14:41:40 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (11520 MB per container)
17/05/25 14:41:40 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
17/05/25 14:41:40 INFO Client: Setting up container launch context for our AM
17/05/25 14:41:40 INFO Client: Setting up the launch environment for our AM container
17/05/25 14:41:40 INFO Client: Preparing resources for our AM container
17/05/25 14:41:40 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
17/05/25 14:41:42 INFO Client: Uploading resource file:/mnt/tmp/spark-4f534fa1-c377-4113-9c86-96d5cdab4cb5/__spark_libs__6500399427935716229.zip -> hdfs://ip-10-185-53-161.eu-west-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1495720658394_0004/__spark_libs__6500399427935716229.zip
17/05/25 14:41:43 INFO Client: Uploading resource file:/home/hadoop/RedshiftJDBC42-1.2.1.1001.jar -> hdfs://ip-10-185-53-161.eu-west-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1495720658394_0004/RedshiftJDBC42-1.2.1.1001.jar
17/05/25 14:41:43 INFO Client: Uploading resource file:/home/hadoop/.ivy2/jars/com.databricks_spark-avro_2.11-3.2.0.jar -> hdfs://ip-10-185-53-161.eu-west-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1495720658394_0004/com.databricks_spark-avro_2.11-3.2.0.jar
17/05/25 14:41:43 INFO Client: Uploading resource file:/home/hadoop/.ivy2/jars/org.slf4j_slf4j-api-1.7.5.jar -> hdfs://ip-10-185-53-161.eu-west-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1495720658394_0004/org.slf4j_slf4j-api-1.7.5.jar
17/05/25 14:41:43 INFO Client: Uploading resource file:/home/hadoop/.ivy2/jars/org.apache.avro_avro-1.7.6.jar -> hdfs://ip-10-185-53-161.eu-west-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1495720658394_0004/org.apache.avro_avro-1.7.6.jar
17/05/25 14:41:43 INFO Client: Uploading resource file:/home/hadoop/.ivy2/jars/org.codehaus.jackson_jackson-core-asl-1.9.13.jar -> hdfs://ip-10-185-53-161.eu-west-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1495720658394_0004/org.codehaus.jackson_jackson-core-asl-1.9.13.jar
17/05/25 14:41:43 INFO Client: Uploading resource file:/home/hadoop/.ivy2/jars/org.codehaus.jackson_jackson-mapper-asl-1.9.13.jar -> hdfs://ip-10-185-53-161.eu-west-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1495720658394_0004/org.codehaus.jackson_jackson-mapper-asl-1.9.13.jar
17/05/25 14:41:43 INFO Client: Uploading resource file:/home/hadoop/.ivy2/jars/com.thoughtworks.paranamer_paranamer-2.3.jar -> hdfs://ip-10-185-53-161.eu-west-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1495720658394_0004/com.thoughtworks.paranamer_paranamer-2.3.jar
17/05/25 14:41:43 INFO Client: Uploading resource file:/home/hadoop/.ivy2/jars/org.xerial.snappy_snappy-java-1.0.5.jar -> hdfs://ip-10-185-53-161.eu-west-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1495720658394_0004/org.xerial.snappy_snappy-java-1.0.5.jar
17/05/25 14:41:43 INFO Client: Uploading resource file:/home/hadoop/.ivy2/jars/org.apache.commons_commons-compress-1.4.1.jar -> hdfs://ip-10-185-53-161.eu-west-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1495720658394_0004/org.apache.commons_commons-compress-1.4.1.jar
17/05/25 14:41:43 INFO Client: Uploading resource file:/home/hadoop/.ivy2/jars/org.tukaani_xz-1.0.jar -> hdfs://ip-10-185-53-161.eu-west-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1495720658394_0004/org.tukaani_xz-1.0.jar
17/05/25 14:41:43 INFO Client: Uploading resource file:/etc/spark/conf/hive-site.xml -> hdfs://ip-10-185-53-161.eu-west-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1495720658394_0004/hive-site.xml
17/05/25 14:41:43 INFO Client: Uploading resource file:/usr/lib/spark/python/lib/pyspark.zip -> hdfs://ip-10-185-53-161.eu-west-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1495720658394_0004/pyspark.zip
17/05/25 14:41:43 INFO Client: Uploading resource file:/usr/lib/spark/python/lib/py4j-0.10.4-src.zip -> hdfs://ip-10-185-53-161.eu-west-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1495720658394_0004/py4j-0.10.4-src.zip
17/05/25 14:41:43 INFO Client: Uploading resource file:/home/hadoop/dependencies.zip -> hdfs://ip-10-185-53-161.eu-west-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1495720658394_0004/dependencies.zip
17/05/25 14:41:43 WARN Client: Same path resource file:/home/hadoop/.ivy2/jars/com.databricks_spark-avro_2.11-3.2.0.jar added multiple times to distributed cache.
17/05/25 14:41:43 WARN Client: Same path resource file:/home/hadoop/.ivy2/jars/org.slf4j_slf4j-api-1.7.5.jar added multiple times to distributed cache.
17/05/25 14:41:43 WARN Client: Same path resource file:/home/hadoop/.ivy2/jars/org.apache.avro_avro-1.7.6.jar added multiple times to distributed cache.
17/05/25 14:41:43 WARN Client: Same path resource file:/home/hadoop/.ivy2/jars/org.codehaus.jackson_jackson-core-asl-1.9.13.jar added multiple times to distributed cache.
17/05/25 14:41:43 WARN Client: Same path resource file:/home/hadoop/.ivy2/jars/org.codehaus.jackson_jackson-mapper-asl-1.9.13.jar added multiple times to distributed cache.
17/05/25 14:41:43 WARN Client: Same path resource file:/home/hadoop/.ivy2/jars/com.thoughtworks.paranamer_paranamer-2.3.jar added multiple times to distributed cache.
17/05/25 14:41:43 WARN Client: Same path resource file:/home/hadoop/.ivy2/jars/org.xerial.snappy_snappy-java-1.0.5.jar added multiple times to distributed cache.
17/05/25 14:41:43 WARN Client: Same path resource file:/home/hadoop/.ivy2/jars/org.apache.commons_commons-compress-1.4.1.jar added multiple times to distributed cache.
17/05/25 14:41:43 WARN Client: Same path resource file:/home/hadoop/.ivy2/jars/org.tukaani_xz-1.0.jar added multiple times to distributed cache.
17/05/25 14:41:43 INFO Client: Uploading resource file:/mnt/tmp/spark-4f534fa1-c377-4113-9c86-96d5cdab4cb5/__spark_conf__1516567354161750682.zip -> hdfs://ip-10-185-53-161.eu-west-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1495720658394_0004/__spark_conf__.zip
17/05/25 14:41:43 INFO SecurityManager: Changing view acls to: hadoop
17/05/25 14:41:43 INFO SecurityManager: Changing modify acls to: hadoop
17/05/25 14:41:43 INFO SecurityManager: Changing view acls groups to:
17/05/25 14:41:43 INFO SecurityManager: Changing modify acls groups to:
17/05/25 14:41:43 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); groups with view permissions: Set(); users with modify permissions: Set(hadoop); groups with modify permissions: Set()
17/05/25 14:41:43 INFO Client: Submitting application application_1495720658394_0004 to ResourceManager
17/05/25 14:41:43 INFO YarnClientImpl: Submitted application application_1495720658394_0004
17/05/25 14:41:43 INFO SchedulerExtensionServices: Starting Yarn extension services with app application_1495720658394_0004 and attemptId None
17/05/25 14:41:44 INFO Client: Application report for application_1495720658394_0004 (state: ACCEPTED)
17/05/25 14:41:44 INFO Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1495723303463
final status: UNDEFINED
tracking URL: http://ip-10-185-53-161.eu-west-1.compute.internal:20888/proxy/application_1495720658394_0004/
user: hadoop
17/05/25 14:41:45 INFO Client: Application report for application_1495720658394_0004 (state: ACCEPTED)
17/05/25 14:41:46 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(null)
17/05/25 14:41:46 INFO Client: Application report for application_1495720658394_0004 (state: ACCEPTED)
17/05/25 14:41:46 INFO YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> ip-10-185-53-161.eu-west-1.compute.internal, PROXY_URI_BASES -> http://ip-10-185-53-161.eu-west-1.compute.internal:20888/proxy/application_1495720658394_0004), /proxy/application_1495720658394_0004
17/05/25 14:41:46 INFO JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
17/05/25 14:41:47 INFO Client: Application report for application_1495720658394_0004 (state: RUNNING)
17/05/25 14:41:47 INFO Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: 10.185.52.31
ApplicationMaster RPC port: 0
queue: default
start time: 1495723303463
final status: UNDEFINED
tracking URL: http://ip-10-185-53-161.eu-west-1.compute.internal:20888/proxy/application_1495720658394_0004/
user: hadoop
17/05/25 14:41:47 INFO YarnClientSchedulerBackend: Application application_1495720658394_0004 has started running.
17/05/25 14:41:47 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 37860.
17/05/25 14:41:47 INFO NettyBlockTransferService: Server created on 10.185.53.161:37860
17/05/25 14:41:47 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/05/25 14:41:47 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.185.53.161, 37860, None)
17/05/25 14:41:47 INFO BlockManagerMasterEndpoint: Registering block manager 10.185.53.161:37860 with 4.0 GB RAM, BlockManagerId(driver, 10.185.53.161, 37860, None)
17/05/25 14:41:47 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.185.53.161, 37860, None)
17/05/25 14:41:47 INFO BlockManager: external shuffle service port = 7337
17/05/25 14:41:47 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.185.53.161, 37860, None)
17/05/25 14:41:47 INFO EventLoggingListener: Logging events to hdfs:///var/log/spark/apps/application_1495720658394_0004
17/05/25 14:41:47 INFO Utils: Using initial executors = 12, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
17/05/25 14:41:50 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(null) (10.185.52.31:57406) with ID 5
17/05/25 14:41:50 INFO ExecutorAllocationManager: New executor 5 has registered (new total is 1)
17/05/25 14:41:50 INFO BlockManagerMasterEndpoint: Registering block manager ip-10-185-52-31.eu-west-1.compute.internal:38781 with 4.0 GB RAM, BlockManagerId(5, ip-10-185-52-31.eu-west-1.compute.internal, 38781, None)
17/05/25 14:41:50 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(null) (10.185.53.45:40096) with ID 3
17/05/25 14:41:50 INFO ExecutorAllocationManager: New executor 3 has registered (new total is 2)
17/05/25 14:41:50 INFO BlockManagerMasterEndpoint: Registering block manager ip-10-185-53-45.eu-west-1.compute.internal:43702 with 4.0 GB RAM, BlockManagerId(3, ip-10-185-53-45.eu-west-1.compute.internal, 43702, None)
17/05/25 14:41:50 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(null) (10.185.53.135:42390) with ID 2
17/05/25 14:41:50 INFO ExecutorAllocationManager: New executor 2 has registered (new total is 3)
17/05/25 14:41:50 INFO BlockManagerMasterEndpoint: Registering block manager ip-10-185-53-135.eu-west-1.compute.internal:41552 with 4.0 GB RAM, BlockManagerId(2, ip-10-185-53-135.eu-west-1.compute.internal, 41552, None)
17/05/25 14:41:50 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(null) (10.185.53.10:60612) with ID 1
17/05/25 14:41:50 INFO ExecutorAllocationManager: New executor 1 has registered (new total is 4)
17/05/25 14:41:50 INFO BlockManagerMasterEndpoint: Registering block manager ip-10-185-53-10.eu-west-1.compute.internal:33391 with 4.0 GB RAM, BlockManagerId(1, ip-10-185-53-10.eu-west-1.compute.internal, 33391, None)
17/05/25 14:41:50 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(null) (10.185.53.68:57424) with ID 4
17/05/25 14:41:50 INFO ExecutorAllocationManager: New executor 4 has registered (new total is 5)
17/05/25 14:41:50 INFO BlockManagerMasterEndpoint: Registering block manager ip-10-185-53-68.eu-west-1.compute.internal:34222 with 4.0 GB RAM, BlockManagerId(4, ip-10-185-53-68.eu-west-1.compute.internal, 34222, None)
17/05/25 14:42:09 INFO YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
17/05/25 14:42:09 INFO SharedState: Warehouse path is 'hdfs:///user/spark/warehouse'.
17/05/25 14:42:10 WARN Utils: Truncated the string representation of a plan since it was too large. This behavior can be adjusted by setting 'spark.debug.maxToStringFields' in SparkEnv.conf.
17/05/25 14:42:11 INFO CodeGenerator: Code generated in 170.416763 ms
17/05/25 14:42:11 INFO SparkContext: Starting job: collect at /home/hadoop/iface_extractions/select_fields.py:90
17/05/25 14:42:11 INFO DAGScheduler: Got job 0 (collect at /home/hadoop/iface_extractions/select_fields.py:90) with 1 output partitions
17/05/25 14:42:11 INFO DAGScheduler: Final stage: ResultStage 0 (collect at /home/hadoop/iface_extractions/select_fields.py:90)
17/05/25 14:42:11 INFO DAGScheduler: Parents of final stage: List()
17/05/25 14:42:11 INFO DAGScheduler: Missing parents: List()
17/05/25 14:42:11 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[2] at collect at /home/hadoop/iface_extractions/select_fields.py:90), which has no missing parents
17/05/25 14:42:11 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 7.5 KB, free 4.0 GB)
17/05/25 14:42:11 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 4.1 KB, free 4.0 GB)
17/05/25 14:42:11 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.185.53.161:37860 (size: 4.1 KB, free: 4.0 GB)
17/05/25 14:42:11 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:996
17/05/25 14:42:11 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[2] at collect at /home/hadoop/iface_extractions/select_fields.py:90)
17/05/25 14:42:11 INFO YarnScheduler: Adding task set 0.0 with 1 tasks
17/05/25 14:42:11 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, ip-10-185-53-135.eu-west-1.compute.internal, executor 2, partition 0, PROCESS_LOCAL, 5899 bytes)
17/05/25 14:42:11 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on ip-10-185-53-135.eu-west-1.compute.internal:41552 (size: 4.1 KB, free: 4.0 GB)
17/05/25 14:42:12 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 1101 ms on ip-10-185-53-135.eu-west-1.compute.internal (executor 2) (1/1)
17/05/25 14:42:12 INFO YarnScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool
17/05/25 14:42:12 INFO DAGScheduler: ResultStage 0 (collect at /home/hadoop/iface_extractions/select_fields.py:90) finished in 1.109 s
17/05/25 14:42:12 INFO DAGScheduler: Job 0 finished: collect at /home/hadoop/iface_extractions/select_fields.py:90, took 1.290037 s
17/05/25 14:42:12 INFO BlockManagerInfo: Removed broadcast_0_piece0 on 10.185.53.161:37860 in memory (size: 4.1 KB, free: 4.0 GB)
17/05/25 14:42:12 INFO SparkContext: Starting job: collect at /home/hadoop/iface_extractions/select_fields.py:91
17/05/25 14:42:12 INFO BlockManagerInfo: Removed broadcast_0_piece0 on ip-10-185-53-135.eu-west-1.compute.internal:41552 in memory (size: 4.1 KB, free: 4.0 GB)
17/05/25 14:42:12 INFO DAGScheduler: Got job 1 (collect at /home/hadoop/iface_extractions/select_fields.py:91) with 1 output partitions
17/05/25 14:42:12 INFO DAGScheduler: Final stage: ResultStage 1 (collect at /home/hadoop/iface_extractions/select_fields.py:91)
17/05/25 14:42:12 INFO DAGScheduler: Parents of final stage: List()
17/05/25 14:42:12 INFO DAGScheduler: Missing parents: List()
17/05/25 14:42:12 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[5] at collect at /home/hadoop/iface_extractions/select_fields.py:91), which has no missing parents
17/05/25 14:42:12 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 7.5 KB, free 4.0 GB)
17/05/25 14:42:12 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 4.1 KB, free 4.0 GB)
17/05/25 14:42:12 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 10.185.53.161:37860 (size: 4.1 KB, free: 4.0 GB)
17/05/25 14:42:12 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:996
17/05/25 14:42:12 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[5] at collect at /home/hadoop/iface_extractions/select_fields.py:91)
17/05/25 14:42:12 INFO YarnScheduler: Adding task set 1.0 with 1 tasks
17/05/25 14:42:12 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, ip-10-185-53-68.eu-west-1.compute.internal, executor 4, partition 0, PROCESS_LOCAL, 5900 bytes)
17/05/25 14:42:13 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on ip-10-185-53-68.eu-west-1.compute.internal:34222 (size: 4.1 KB, free: 4.0 GB)
17/05/25 14:42:14 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 1047 ms on ip-10-185-53-68.eu-west-1.compute.internal (executor 4) (1/1)
17/05/25 14:42:14 INFO YarnScheduler: Removed TaskSet 1.0, whose tasks have all completed, from pool
17/05/25 14:42:14 INFO DAGScheduler: ResultStage 1 (collect at /home/hadoop/iface_extractions/select_fields.py:91) finished in 1.047 s
17/05/25 14:42:14 INFO DAGScheduler: Job 1 finished: collect at /home/hadoop/iface_extractions/select_fields.py:91, took 1.054768 s
17/05/25 14:42:14 INFO CodeGenerator: Code generated in 13.109425 ms
17/05/25 14:42:14 INFO CodeGenerator: Code generated in 12.568665 ms
17/05/25 14:42:14 INFO CodeGenerator: Code generated in 11.257538 ms
17/05/25 14:42:14 INFO BlockManagerInfo: Removed broadcast_1_piece0 on 10.185.53.161:37860 in memory (size: 4.1 KB, free: 4.0 GB)
17/05/25 14:42:14 INFO BlockManagerInfo: Removed broadcast_1_piece0 on ip-10-185-53-68.eu-west-1.compute.internal:34222 in memory (size: 4.1 KB, free: 4.0 GB)
17/05/25 14:42:14 INFO CodeGenerator: Code generated in 11.563958 ms
17/05/25 14:42:14 INFO CodeGenerator: Code generated in 18.189301 ms
17/05/25 14:42:14 INFO CodeGenerator: Code generated in 13.490762 ms
17/05/25 14:42:14 INFO CodeGenerator: Code generated in 15.156166 ms
17/05/25 14:42:50 INFO ExecutorAllocationManager: Request to remove executorIds: 5, 3
17/05/25 14:42:50 INFO YarnClientSchedulerBackend: Requesting to kill executor(s) 5, 3
17/05/25 14:42:50 INFO YarnClientSchedulerBackend: Actual list of executor(s) to be killed is 5, 3
17/05/25 14:42:50 INFO ExecutorAllocationManager: Removing executor 5 because it has been idle for 60 seconds (new desired total will be 4)
17/05/25 14:42:50 INFO ExecutorAllocationManager: Removing executor 3 because it has been idle for 60 seconds (new desired total will be 3)
17/05/25 14:42:50 INFO ExecutorAllocationManager: Request to remove executorIds: 1
17/05/25 14:42:50 INFO YarnClientSchedulerBackend: Requesting to kill executor(s) 1
17/05/25 14:42:50 INFO YarnClientSchedulerBackend: Actual list of executor(s) to be killed is 1
17/05/25 14:42:50 INFO ExecutorAllocationManager: Removing executor 1 because it has been idle for 60 seconds (new desired total will be 2)
17/05/25 14:42:50 INFO YarnSchedulerBackend$YarnDriverEndpoint: Disabling executor 5.
17/05/25 14:42:50 INFO DAGScheduler: Executor lost: 5 (epoch 0)
17/05/25 14:42:50 INFO BlockManagerMasterEndpoint: Trying to remove executor 5 from BlockManagerMaster.
17/05/25 14:42:50 INFO BlockManagerMasterEndpoint: Removing block manager BlockManagerId(5, ip-10-185-52-31.eu-west-1.compute.internal, 38781, None)
17/05/25 14:42:50 INFO BlockManagerMaster: Removed 5 successfully in removeExecutor
17/05/25 14:42:50 INFO YarnScheduler: Executor 5 on ip-10-185-52-31.eu-west-1.compute.internal killed by driver.
17/05/25 14:42:50 INFO ExecutorAllocationManager: Existing executor 5 has been removed (new total is 4)
17/05/25 14:42:51 INFO YarnSchedulerBackend$YarnDriverEndpoint: Disabling executor 1.
17/05/25 14:42:51 INFO DAGScheduler: Executor lost: 1 (epoch 0)
17/05/25 14:42:51 INFO BlockManagerMasterEndpoint: Trying to remove executor 1 from BlockManagerMaster.
17/05/25 14:42:51 INFO BlockManagerMasterEndpoint: Removing block manager BlockManagerId(1, ip-10-185-53-10.eu-west-1.compute.internal, 33391, None)
17/05/25 14:42:51 INFO BlockManagerMaster: Removed 1 successfully in removeExecutor
17/05/25 14:42:51 INFO YarnScheduler: Executor 1 on ip-10-185-53-10.eu-west-1.compute.internal killed by driver.
17/05/25 14:42:51 INFO ExecutorAllocationManager: Existing executor 1 has been removed (new total is 3)
17/05/25 14:42:51 INFO YarnSchedulerBackend$YarnDriverEndpoint: Disabling executor 3.
17/05/25 14:42:51 INFO DAGScheduler: Executor lost: 3 (epoch 0)
17/05/25 14:42:51 INFO BlockManagerMasterEndpoint: Trying to remove executor 3 from BlockManagerMaster.
17/05/25 14:42:51 INFO BlockManagerMasterEndpoint: Removing block manager BlockManagerId(3, ip-10-185-53-45.eu-west-1.compute.internal, 43702, None)
17/05/25 14:42:51 INFO BlockManagerMaster: Removed 3 successfully in removeExecutor
17/05/25 14:42:51 INFO YarnScheduler: Executor 3 on ip-10-185-53-45.eu-west-1.compute.internal killed by driver.
17/05/25 14:42:51 INFO ExecutorAllocationManager: Existing executor 3 has been removed (new total is 2)
17/05/25 14:43:12 INFO ExecutorAllocationManager: Request to remove executorIds: 2
17/05/25 14:43:12 INFO YarnClientSchedulerBackend: Requesting to kill executor(s) 2
17/05/25 14:43:12 INFO YarnClientSchedulerBackend: Actual list of executor(s) to be killed is 2
17/05/25 14:43:12 INFO ExecutorAllocationManager: Removing executor 2 because it has been idle for 60 seconds (new desired total will be 1)
17/05/25 14:43:13 INFO YarnSchedulerBackend$YarnDriverEndpoint: Disabling executor 2.
17/05/25 14:43:13 INFO DAGScheduler: Executor lost: 2 (epoch 0)
17/05/25 14:43:13 INFO BlockManagerMasterEndpoint: Trying to remove executor 2 from BlockManagerMaster.
17/05/25 14:43:13 INFO BlockManagerMasterEndpoint: Removing block manager BlockManagerId(2, ip-10-185-53-135.eu-west-1.compute.internal, 41552, None)
17/05/25 14:43:13 INFO BlockManagerMaster: Removed 2 successfully in removeExecutor
17/05/25 14:43:13 INFO YarnScheduler: Executor 2 on ip-10-185-53-135.eu-west-1.compute.internal killed by driver.
17/05/25 14:43:13 INFO ExecutorAllocationManager: Existing executor 2 has been removed (new total is 1)
17/05/25 14:43:14 INFO ExecutorAllocationManager: Request to remove executorIds: 4
17/05/25 14:43:14 INFO YarnClientSchedulerBackend: Requesting to kill executor(s) 4
17/05/25 14:43:14 INFO YarnClientSchedulerBackend: Actual list of executor(s) to be killed is 4
17/05/25 14:43:14 INFO ExecutorAllocationManager: Removing executor 4 because it has been idle for 60 seconds (new desired total will be 0)
17/05/25 14:43:17 INFO YarnSchedulerBackend$YarnDriverEndpoint: Disabling executor 4.
17/05/25 14:43:17 INFO DAGScheduler: Executor lost: 4 (epoch 0)
17/05/25 14:43:17 INFO BlockManagerMasterEndpoint: Trying to remove executor 4 from BlockManagerMaster.
17/05/25 14:43:17 INFO BlockManagerMasterEndpoint: Removing block manager BlockManagerId(4, ip-10-185-53-68.eu-west-1.compute.internal, 34222, None)
17/05/25 14:43:17 INFO BlockManagerMaster: Removed 4 successfully in removeExecutor
17/05/25 14:43:17 INFO YarnScheduler: Executor 4 on ip-10-185-53-68.eu-west-1.compute.internal killed by driver.
17/05/25 14:43:17 INFO ExecutorAllocationManager: Existing executor 4 has been removed (new total is 0)
My guess is that you've got Dynamic Resource Allocation enabled in your Spark configuration.
Spark provides a mechanism to dynamically adjust the resources your application occupies based on the workload. This means that your application may give resources back to the cluster if they are no longer used and request them again later when there is demand. This feature is particularly useful if multiple applications share resources in your Spark cluster.
This feature is disabled by default and available on all coarse-grained cluster managers, i.e. standalone mode, YARN mode, and Mesos coarse-grained mode.
The relevant part is that the feature is disabled by default, so I can only guess that it was enabled in your setup.
From ExecutorAllocationManager:
An agent that dynamically allocates and removes executors based on the workload.
With that said, I'd open the web UI (the Environment tab) and check whether the spark.dynamicAllocation.enabled property is set to true.
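If you prefer to check it from the driver rather than the UI, here is a minimal PySpark sketch; the property names are the documented ones, while the getOrCreate call and the fallback values are only illustrative:
from pyspark import SparkContext

sc = SparkContext.getOrCreate()
# Prints 'true' if dynamic allocation was enabled for this application,
# otherwise falls back to the defaults shown here.
print(sc.getConf().get("spark.dynamicAllocation.enabled", "false"))
print(sc.getConf().get("spark.dynamicAllocation.executorIdleTimeout", "60s"))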
There are two requirements for using this feature (Dynamic Resource Allocation). First, your application must set spark.dynamicAllocation.enabled to true. Second, you must set up an external shuffle service on each worker node in the same cluster and set spark.shuffle.service.enabled to true in your application.
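If dynamic allocation is indeed what is releasing your idle executors, here is a hedged sketch of two ways you could change that when building the SparkConf; the property names come from the Spark docs, while the app name and the values are just examples:
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("iface_extractions")
        # Option 1: turn dynamic allocation off and keep a fixed executor count
        .set("spark.dynamicAllocation.enabled", "false")
        .set("spark.executor.instances", "12")
        # Option 2 (instead of option 1): keep dynamic allocation on, but let
        # executors stay idle longer before they are given back to YARN
        # .set("spark.dynamicAllocation.executorIdleTimeout", "300s")
        )

sc = SparkContext(conf=conf)
The same properties can also be passed on the command line as --conf key=value arguments to spark-submit.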
This is the line that prints out the INFO message:
logInfo("Request to remove executorIds: " + executors.mkString(", "))
You can also kill executors yourself with SparkContext.killExecutors, which gives a Spark developer a way to remove executors programmatically.
killExecutors(executorIds: Seq[String]): Boolean Request that the cluster manager kill the specified executors.
There are actually two such killExecutors methods, and they are very handy for demo purposes, since you can easily show how executors come and go.

Spark/Python - Slave in Cluster is not used

I'm new to Spark. I have a master (192.168.33.10) and a slave (192.168.33.12) set up as a cluster locally, and I wrote the following script to demonstrate that both the master and the slave run get_ip_wrap() on their own machine.
However, when I run it with ./bin/spark-submit ip.py, I only see 192.168.33.10 in the output; I was expecting 192.168.33.12 to appear as well.
I have also included the output logs for my master and worker below.
import socket
import fcntl
import struct
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession

def get_ip_address(ifname):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    return socket.inet_ntoa(fcntl.ioctl(
        s.fileno(),
        0x8915,  # SIOCGIFADDR
        struct.pack('256s', ifname[:15])
    )[20:24])

def get_ip_wrap(num):
    return get_ip_address('eth1')

#spark = SparkSession\
#    .builder\
#    .appName("PythonALS")\
#    .getOrCreate()
#sc = spark.sparkContext

conf = SparkConf().setAppName('appName').setMaster('spark://vagrant-ubuntu-trusty-64:7077')
sc = SparkContext(conf=conf)

data = [x for x in range(0, 50)]
distData = sc.parallelize(data)
result = distData.map(get_ip_wrap)
print result.collect()
vagrant@vagrant-ubuntu-trusty-64:~/spark-2.1.1-bin-hadoop2.7$ ./sbin/start-master.sh
starting org.apache.spark.deploy.master.Master, logging to /home/vagrant/spark-2.1.1-bin-hadoop2.7/logs/spark-vagrant-org.apache.spark.deploy.master.Master-1-vagrant-ubuntu-trusty-64.out
vagrant@vagrant-ubuntu-trusty-64:~/spark-2.1.1-bin-hadoop2.7$
vagrant@vagrant-ubuntu-trusty-64:~/spark-2.1.1-bin-hadoop2.7$ ./sbin/start-slave.sh spark://vagrant-ubuntu-trusty-64:7077
starting org.apache.spark.deploy.worker.Worker, logging to /home/vagrant/spark-2.1.1-bin-hadoop2.7/logs/spark-vagrant-org.apache.spark.deploy.worker.Worker-1-vagrant-ubuntu-trusty-64.out
vagrant@vagrant-ubuntu-trusty-64:~/spark-2.1.1-bin-hadoop2.7$
vagrant@vagrant-ubuntu-trusty-64:~/spark-2.1.1-bin-hadoop2.7$
vagrant@vagrant-ubuntu-trusty-64:~/spark-2.1.1-bin-hadoop2.7$
vagrant@vagrant-ubuntu-trusty-64:~/spark-2.1.1-bin-hadoop2.7$ ./bin/spark-submit ip.py
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/05/27 17:08:09 INFO SparkContext: Running Spark version 2.1.1
17/05/27 17:08:09 WARN SparkContext: Support for Java 7 is deprecated as of Spark 2.0.0
17/05/27 17:08:10 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/05/27 17:08:10 INFO SecurityManager: Changing view acls to: vagrant
17/05/27 17:08:10 INFO SecurityManager: Changing modify acls to: vagrant
17/05/27 17:08:10 INFO SecurityManager: Changing view acls groups to:
17/05/27 17:08:10 INFO SecurityManager: Changing modify acls groups to:
17/05/27 17:08:10 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(vagrant); groups with view permissions: Set(); users with modify permissions: Set(vagrant); groups with modify permissions: Set()
17/05/27 17:08:10 INFO Utils: Successfully started service 'sparkDriver' on port 59290.
17/05/27 17:08:10 INFO SparkEnv: Registering MapOutputTracker
17/05/27 17:08:10 INFO SparkEnv: Registering BlockManagerMaster
17/05/27 17:08:10 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/05/27 17:08:10 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/05/27 17:08:10 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-ad008702-6e92-4e60-ab27-a582b1ba9fb9
17/05/27 17:08:10 INFO MemoryStore: MemoryStore started with capacity 413.9 MB
17/05/27 17:08:11 INFO SparkEnv: Registering OutputCommitCoordinator
17/05/27 17:08:11 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
17/05/27 17:08:11 WARN Utils: Service 'SparkUI' could not bind on port 4041. Attempting port 4042.
17/05/27 17:08:11 INFO Utils: Successfully started service 'SparkUI' on port 4042.
17/05/27 17:08:11 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.0.2.15:4042
17/05/27 17:08:11 INFO SparkContext: Added file file:/home/vagrant/spark-2.1.1-bin-hadoop2.7/ip.py at spark://10.0.2.15:59290/files/ip.py with timestamp 1495904891756
17/05/27 17:08:11 INFO Utils: Copying /home/vagrant/spark-2.1.1-bin-hadoop2.7/ip.py to /tmp/spark-5400808c-1304-404d-ae53-dc6cdb14694f/userFiles-dc94d72e-15d3-4d84-87b9-27e87dcb0f6a/ip.py
17/05/27 17:08:11 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://vagrant-ubuntu-trusty-64:7077...
17/05/27 17:08:11 INFO TransportClientFactory: Successfully created connection to vagrant-ubuntu-trusty-64/10.0.2.15:7077 after 20 ms (0 ms spent in bootstraps)
17/05/27 17:08:12 INFO StandaloneSchedulerBackend: Connected to Spark cluster with app ID app-20170527170812-0000
17/05/27 17:08:12 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 53124.
17/05/27 17:08:12 INFO NettyBlockTransferService: Server created on 10.0.2.15:53124
17/05/27 17:08:12 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/05/27 17:08:12 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.0.2.15, 53124, None)
17/05/27 17:08:12 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20170527170812-0000/0 on worker-20170527170800-10.0.2.15-54829 (10.0.2.15:54829) with 1 cores
17/05/27 17:08:12 INFO StandaloneSchedulerBackend: Granted executor ID app-20170527170812-0000/0 on hostPort 10.0.2.15:54829 with 1 cores, 1024.0 MB RAM
17/05/27 17:08:12 INFO BlockManagerMasterEndpoint: Registering block manager 10.0.2.15:53124 with 413.9 MB RAM, BlockManagerId(driver, 10.0.2.15, 53124, None)
17/05/27 17:08:12 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.0.2.15, 53124, None)
17/05/27 17:08:12 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.0.2.15, 53124, None)
17/05/27 17:08:12 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20170527170812-0000/0 is now RUNNING
17/05/27 17:08:12 INFO StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
17/05/27 17:08:13 INFO SparkContext: Starting job: collect at /home/vagrant/spark-2.1.1-bin-hadoop2.7/ip.py:31
17/05/27 17:08:13 INFO DAGScheduler: Got job 0 (collect at /home/vagrant/spark-2.1.1-bin-hadoop2.7/ip.py:31) with 2 output partitions
17/05/27 17:08:13 INFO DAGScheduler: Final stage: ResultStage 0 (collect at /home/vagrant/spark-2.1.1-bin-hadoop2.7/ip.py:31)
17/05/27 17:08:13 INFO DAGScheduler: Parents of final stage: List()
17/05/27 17:08:13 INFO DAGScheduler: Missing parents: List()
17/05/27 17:08:13 INFO DAGScheduler: Submitting ResultStage 0 (PythonRDD[1] at collect at /home/vagrant/spark-2.1.1-bin-hadoop2.7/ip.py:31), which has no missing parents
17/05/27 17:08:13 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 4.1 KB, free 413.9 MB)
17/05/27 17:08:13 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 2.8 KB, free 413.9 MB)
17/05/27 17:08:13 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.0.2.15:53124 (size: 2.8 KB, free: 413.9 MB)
17/05/27 17:08:13 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:996
17/05/27 17:08:13 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (PythonRDD[1] at collect at /home/vagrant/spark-2.1.1-bin-hadoop2.7/ip.py:31)
17/05/27 17:08:13 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
17/05/27 17:08:15 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(null) (10.0.2.15:40762) with ID 0
17/05/27 17:08:15 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 10.0.2.15, executor 0, partition 0, PROCESS_LOCAL, 6136 bytes)
17/05/27 17:08:15 INFO BlockManagerMasterEndpoint: Registering block manager 10.0.2.15:33949 with 413.9 MB RAM, BlockManagerId(0, 10.0.2.15, 33949, None)
17/05/27 17:08:15 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.0.2.15:33949 (size: 2.8 KB, free: 413.9 MB)
17/05/27 17:08:16 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, 10.0.2.15, executor 0, partition 1, PROCESS_LOCAL, 6136 bytes)
17/05/27 17:08:16 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 1050 ms on 10.0.2.15 (executor 0) (1/2)
17/05/27 17:08:16 INFO DAGScheduler: ResultStage 0 (collect at /home/vagrant/spark-2.1.1-bin-hadoop2.7/ip.py:31) finished in 2.504 s
17/05/27 17:08:16 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 119 ms on 10.0.2.15 (executor 0) (2/2)
17/05/27 17:08:16 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
17/05/27 17:08:16 INFO DAGScheduler: Job 0 finished: collect at /home/vagrant/spark-2.1.1-bin-hadoop2.7/ip.py:31, took 2.981746 s
['192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10', '192.168.33.10']
17/05/27 17:08:16 INFO SparkContext: Invoking stop() from shutdown hook
17/05/27 17:08:16 INFO SparkUI: Stopped Spark web UI at http://10.0.2.15:4042
17/05/27 17:08:16 INFO StandaloneSchedulerBackend: Shutting down all executors
17/05/27 17:08:16 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asking each executor to shut down
17/05/27 17:08:16 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/05/27 17:08:16 INFO MemoryStore: MemoryStore cleared
17/05/27 17:08:16 INFO BlockManager: BlockManager stopped
17/05/27 17:08:16 INFO BlockManagerMaster: BlockManagerMaster stopped
17/05/27 17:08:16 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/05/27 17:08:16 INFO SparkContext: Successfully stopped SparkContext
17/05/27 17:08:16 INFO ShutdownHookManager: Shutdown hook called
17/05/27 17:08:16 INFO ShutdownHookManager: Deleting directory /tmp/spark-5400808c-1304-404d-ae53-dc6cdb14694f/pyspark-021d6ed2-91d0-481b-b528-108581abe66c
17/05/27 17:08:16 INFO ShutdownHookManager: Deleting directory /tmp/spark-5400808c-1304-404d-ae53-dc6cdb14694f
vagrant@vagrant-ubuntu-trusty-64:~/spark-2.1.1-bin-hadoop2.7$
vagrant@vagrant-ubuntu-trusty-64:~/spark-2.1.1-bin-hadoop2.7$
vagrant@vagrant-ubuntu-trusty-64:~/spark-2.1.1-bin-hadoop2.7$
vagrant@vagrant-ubuntu-trusty-64:~/spark-2.1.1-bin-hadoop2.7$ cat /home/vagrant/spark-2.1.1-bin-hadoop2.7/logs/spark-vagrant-org.apache.spark.deploy.master.Master-1-vagrant-ubuntu-trusty-64.out
Spark Command: /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java -cp /home/vagrant/spark-2.1.1-bin-hadoop2.7/conf/:/home/vagrant/spark-2.1.1-bin-hadoop2.7/jars/* -Xmx1g -XX:MaxPermSize=256m org.apache.spark.deploy.master.Master --host vagrant-ubuntu-trusty-64 --port 7077 --webui-port 8080
========================================
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/05/27 17:07:44 INFO Master: Started daemon with process name: 9384@vagrant-ubuntu-trusty-64
17/05/27 17:07:44 INFO SignalUtils: Registered signal handler for TERM
17/05/27 17:07:44 INFO SignalUtils: Registered signal handler for HUP
17/05/27 17:07:44 INFO SignalUtils: Registered signal handler for INT
17/05/27 17:07:44 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/05/27 17:07:45 INFO SecurityManager: Changing view acls to: vagrant
17/05/27 17:07:45 INFO SecurityManager: Changing modify acls to: vagrant
17/05/27 17:07:45 INFO SecurityManager: Changing view acls groups to:
17/05/27 17:07:45 INFO SecurityManager: Changing modify acls groups to:
17/05/27 17:07:45 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(vagrant); groups with view permissions: Set(); users with modify permissions: Set(vagrant); groups with modify permissions: Set()
17/05/27 17:07:45 INFO Utils: Successfully started service 'sparkMaster' on port 7077.
17/05/27 17:07:45 INFO Master: Starting Spark master at spark://vagrant-ubuntu-trusty-64:7077
17/05/27 17:07:45 INFO Master: Running Spark version 2.1.1
17/05/27 17:07:45 INFO Utils: Successfully started service 'MasterUI' on port 8080.
17/05/27 17:07:45 INFO MasterWebUI: Bound MasterWebUI to 0.0.0.0, and started at http://10.0.2.15:8080
17/05/27 17:07:45 INFO Utils: Successfully started service on port 6066.
17/05/27 17:07:45 INFO StandaloneRestServer: Started REST server for submitting applications on port 6066
17/05/27 17:07:46 INFO Master: I have been elected leader! New state: ALIVE
17/05/27 17:08:00 INFO Master: Registering worker 10.0.2.15:54829 with 1 cores, 2.8 GB RAM
17/05/27 17:08:12 INFO Master: Registering app appName
17/05/27 17:08:12 INFO Master: Registered app appName with ID app-20170527170812-0000
17/05/27 17:08:12 INFO Master: Launching executor app-20170527170812-0000/0 on worker worker-20170527170800-10.0.2.15-54829
17/05/27 17:08:16 INFO Master: Received unregister request from application app-20170527170812-0000
17/05/27 17:08:16 INFO Master: Removing app app-20170527170812-0000
17/05/27 17:08:16 INFO Master: 10.0.2.15:51703 got disassociated, removing it.
17/05/27 17:08:16 INFO Master: 10.0.2.15:59290 got disassociated, removing it.
17/05/27 17:08:16 WARN Master: Got status update for unknown executor app-20170527170812-0000/0
vagrant@vagrant-ubuntu-trusty-64:~/spark-2.1.1-bin-hadoop2.7$

Resources