spark-submit yarn-client run failed - apache-spark

I'm using yarn-client mode to run a Spark program on a Spark-on-YARN environment I built myself.
The script is:
./bin/spark-submit --class WordCountTest \
--master yarn-client \
--num-executors 1 \
--executor-cores 1 \
--queue root.hadoop \
/root/Desktop/test2.jar \
10
When I run it, I get the following exception:
15/05/12 17:42:01 INFO spark.SparkContext: Running Spark version 1.3.1
15/05/12 17:42:01 WARN spark.SparkConf:
SPARK_CLASSPATH was detected (set to ':/usr/local/hadoop/hadoop-2.5.2/share/hadoop/common/hadoop-lzo-0.4.20-SNAPSHOT.jar').
This is deprecated in Spark 1.0+.
Please instead use:
- ./spark-submit with --driver-class-path to augment the driver classpath
- spark.executor.extraClassPath to augment the executor classpath
15/05/12 17:42:01 WARN spark.SparkConf: Setting 'spark.executor.extraClassPath' to ':/usr/local/hadoop/hadoop-2.5.2/share/hadoop/common/hadoop-lzo-0.4.20-SNAPSHOT.jar' as a work-around.
15/05/12 17:42:01 WARN spark.SparkConf: Setting 'spark.driver.extraClassPath' to ':/usr/local/hadoop/hadoop-2.5.2/share/hadoop/common/hadoop-lzo-0.4.20-SNAPSHOT.jar' as a work-around.
15/05/12 17:42:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/05/12 17:42:02 INFO spark.SecurityManager: Changing view acls to: root
15/05/12 17:42:02 INFO spark.SecurityManager: Changing modify acls to: root
15/05/12 17:42:02 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
15/05/12 17:42:02 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/05/12 17:42:02 INFO Remoting: Starting remoting
15/05/12 17:42:03 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@master:49338]
15/05/12 17:42:03 INFO util.Utils: Successfully started service 'sparkDriver' on port 49338.
15/05/12 17:42:03 INFO spark.SparkEnv: Registering MapOutputTracker
15/05/12 17:42:03 INFO spark.SparkEnv: Registering BlockManagerMaster
15/05/12 17:42:03 INFO storage.DiskBlockManager: Created local directory at /tmp/spark-57f5fb29-784d-4730-92b8-c2e8be97c038/blockmgr-752988bc-b2d0-42f7-891d-5d3edbb4526d
15/05/12 17:42:03 INFO storage.MemoryStore: MemoryStore started with capacity 267.3 MB
15/05/12 17:42:04 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-2f2a46eb-9259-4c6e-b9af-7159efb0b3e9/httpd-3c50fe1e-430e-4077-9cd0-58246e182d98
15/05/12 17:42:04 INFO spark.HttpServer: Starting HTTP Server
15/05/12 17:42:04 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/05/12 17:42:04 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:41749
15/05/12 17:42:04 INFO util.Utils: Successfully started service 'HTTP file server' on port 41749.
15/05/12 17:42:04 INFO spark.SparkEnv: Registering OutputCommitCoordinator
15/05/12 17:42:05 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/05/12 17:42:05 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
15/05/12 17:42:05 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
15/05/12 17:42:05 INFO ui.SparkUI: Started SparkUI at http://master:4040
15/05/12 17:42:05 INFO spark.SparkContext: Added JAR file:/root/Desktop/test2.jar at http://192.168.147.201:41749/jars/test2.jar with timestamp 1431423725289
15/05/12 17:42:05 WARN cluster.YarnClientSchedulerBackend: NOTE: SPARK_WORKER_MEMORY is deprecated. Use SPARK_EXECUTOR_MEMORY or --executor-memory through spark-submit instead.
15/05/12 17:42:06 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.147.201:8032
15/05/12 17:42:06 INFO yarn.Client: Requesting a new application from cluster with 2 NodeManagers
15/05/12 17:42:06 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
15/05/12 17:42:06 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
15/05/12 17:42:06 INFO yarn.Client: Setting up container launch context for our AM
15/05/12 17:42:06 INFO yarn.Client: Preparing resources for our AM container
15/05/12 17:42:07 WARN yarn.Client: SPARK_JAR detected in the system environment. This variable has been deprecated in favor of the spark.yarn.jar configuration variable.
15/05/12 17:42:07 INFO yarn.Client: Uploading resource file:/usr/local/spark/spark-1.3.1-bin-hadoop2.5.0-cdh5.3.2/lib/spark-assembly-1.3.1-hadoop2.5.0-cdh5.3.2.jar -> hdfs://master:9000/user/root/.sparkStaging/application_1431423592173_0003/spark-assembly-1.3.1-hadoop2.5.0-cdh5.3.2.jar
15/05/12 17:42:11 INFO yarn.Client: Setting up the launch environment for our AM container
15/05/12 17:42:11 WARN yarn.Client: SPARK_JAR detected in the system environment. This variable has been deprecated in favor of the spark.yarn.jar configuration variable.
15/05/12 17:42:11 INFO spark.SecurityManager: Changing view acls to: root
15/05/12 17:42:11 INFO spark.SecurityManager: Changing modify acls to: root
15/05/12 17:42:11 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
15/05/12 17:42:11 INFO yarn.Client: Submitting application 3 to ResourceManager
15/05/12 17:42:11 INFO impl.YarnClientImpl: Submitted application application_1431423592173_0003
15/05/12 17:42:12 INFO yarn.Client: Application report for application_1431423592173_0003 (state: FAILED)
15/05/12 17:42:12 INFO yarn.Client:
client token: N/A
diagnostics: Application application_1431423592173_0003 submitted by user root to unknown queue: root.hadoop
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: root.hadoop
start time: 1431423731271
final status: FAILED
tracking URL: N/A
user: root
Exception in thread "main" org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:113)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:59)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:141)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:381)
at WordCountTest$.main(WordCountTest.scala:14)
at WordCountTest.main(WordCountTest.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
My code is very simple, just the following:
import org.apache.log4j.{Level, Logger}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object WordCountTest {
  def main(args: Array[String]): Unit = {
    Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
    Logger.getLogger("org.eclipse.jetty.server").setLevel(Level.OFF)

    val sparkConf = new SparkConf().setAppName("WordCountTest Prog")
    val sc = new SparkContext(sparkConf)
    val sqlContext = new SQLContext(sc)

    val file = sc.textFile("/data/test/pom.xml")
    val counts = file.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
    println(counts)  // note: this prints the RDD's toString, not its contents
    counts.saveAsTextFile("/data/test/pom_count.txt")
  }
}
I've been debugging this problem for 2 days. Help! Thanks.

Try changing the queue name to hadoop. The diagnostics show the ResourceManager rejected the submission with "unknown queue: root.hadoop", so the --queue value must be a queue name the scheduler actually defines.

In my case, changing “--queue thequeue” to “--queue default” made it work.
When running:
./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster --driver-memory 4g --executor-memory 2g --executor-cores 1 --queue thequeue lib/spark-examples*.jar 10
the error above was reported; changing “--queue thequeue” to “--queue default” was all that was needed.
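Assuming the cluster defines a default queue (check the ResourceManager UI to be sure), only the --queue value of the question's submit command needs to change. A small Python sketch that builds the corrected argument list, so the changed flag is explicit; the actual subprocess call is commented out because it requires the cluster's edge node:

```python
import shlex
import subprocess

# The original submit command with only the queue changed; every other
# argument is copied from the question. Assumes a "default" queue exists.
cmd = [
    "./bin/spark-submit",
    "--class", "WordCountTest",
    "--master", "yarn-client",
    "--num-executors", "1",
    "--executor-cores", "1",
    "--queue", "default",          # was: root.hadoop (unknown to the scheduler)
    "/root/Desktop/test2.jar",
    "10",
]
print(shlex.join(cmd))             # inspect the final command line before running
# subprocess.run(cmd, check=True)  # uncomment on a machine with Spark installed
```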

Related

ERROR SparkContext: Failed to add None to Spark environment

I first submit a Spark job like this from a pyspark file:
os.system(f'spark-submit --master local --jars ./examples/lib/app.jar app.py')
Then in the submitted app.py file, I create a new SparkSession like this:
spark = SparkSession.builder.appName(appName) \
.config('spark.jars') \
.getOrCreate()
Error message:
23/01/17 11:02:52 INFO SparkContext: Running Spark version 3.3.0
23/01/17 11:02:52 INFO ResourceUtils: ==============================================================
23/01/17 11:02:52 INFO ResourceUtils: No custom resources configured for spark.driver.
23/01/17 11:02:52 INFO ResourceUtils: ==============================================================
23/01/17 11:02:52 INFO SparkContext: Submitted application: symbolic_test
23/01/17 11:02:52 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
23/01/17 11:02:52 INFO ResourceProfile: Limiting resource is cpu
23/01/17 11:02:53 INFO ResourceProfileManager: Added ResourceProfile id: 0
23/01/17 11:02:53 INFO SecurityManager: Changing view acls to: annie
23/01/17 11:02:53 INFO SecurityManager: Changing modify acls to: annie
23/01/17 11:02:53 INFO SecurityManager: Changing view acls groups to:
23/01/17 11:02:53 INFO SecurityManager: Changing modify acls groups to:
23/01/17 11:02:53 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(annie); groups with view permissions: Set(); users with modify permissions: Set(annie); groups with modify permissions: Set()
23/01/17 11:02:53 INFO Utils: Successfully started service 'sparkDriver' on port 42141.
23/01/17 11:02:53 INFO SparkEnv: Registering MapOutputTracker
23/01/17 11:02:53 INFO SparkEnv: Registering BlockManagerMaster
23/01/17 11:02:53 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
23/01/17 11:02:53 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
23/01/17 11:02:53 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
23/01/17 11:02:53 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-e4cc3b01-a6d5-4454-ad2d-4d0f42066479
23/01/17 11:02:53 INFO MemoryStore: MemoryStore started with capacity 434.4 MiB
23/01/17 11:02:53 INFO SparkEnv: Registering OutputCommitCoordinator
23/01/17 11:02:53 INFO Utils: Successfully started service 'SparkUI' on port 4040.
23/01/17 11:02:53 ERROR SparkContext: Failed to add None to Spark environment
java.io.FileNotFoundException: Jar /home/annie/exampleApp/example/None not found
at org.apache.spark.SparkContext.addLocalJarFile$1(SparkContext.scala:1949)
at org.apache.spark.SparkContext.addJar(SparkContext.scala:2004)
at org.apache.spark.SparkContext.$anonfun$new$12(SparkContext.scala:507)
at org.apache.spark.SparkContext.$anonfun$new$12$adapted(SparkContext.scala:507)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:507)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:238)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
at java.base/java.lang.Thread.run(Thread.java:829)
When creating the Spark session through pyspark I get the above error messages, which only arise when I add .config('spark.jars').
I've set my $SPARK_HOME variable correctly...
Any help will be appreciated!
If your code sample is accurate, you are not assigning any value to the spark.jars key while creating the Spark session, so pyspark stores the string "None" as the jar path (hence "Jar .../None not found"). Assigning the jar path as the value should resolve the error:
SparkSession.builder.appName(appName) \
    .config('spark.jars', './examples/lib/app.jar') \
    .getOrCreate()
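A minimal sketch of the same idea with an up-front check, using the jar path from the question's submit line; validating the path first turns the confusing "Jar .../None not found" startup failure into an immediate, readable error (the SparkSession lines are commented out so this stands alone without pyspark):

```python
import os

jar_path = "./examples/lib/app.jar"  # path taken from the question's submit line

def validated_jar(path: str) -> str:
    # Fail fast with a clear message instead of letting SparkContext
    # raise FileNotFoundException for a bad or missing jar path.
    if not os.path.isfile(path):
        raise FileNotFoundError(f"spark.jars target not found: {path}")
    return path

# spark = SparkSession.builder.appName(appName) \
#     .config("spark.jars", validated_jar(jar_path)) \
#     .getOrCreate()
```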

How to run spark-submit in virtualenv for pyspark?

Is there a way to run spark-submit (Spark v2.3.2 from HDP 3.1.0) while in a virtualenv? I have a Python file that uses python3 (and some specific libs) in a virtualenv (to isolate lib versions from the rest of the system). I would like to run this file with /bin/spark-submit, but attempting to do so I get...
[me@airflowetl tests]$ source ../venv/bin/activate; /bin/spark-submit sparksubmit.test.py
File "/bin/hdp-select", line 255
print "ERROR: Invalid package - " + name
^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print("ERROR: Invalid package - " + name)?
ls: cannot access /usr/hdp//hadoop/lib: No such file or directory
Exception in thread "main" java.lang.IllegalStateException: hdp.version is not set while running Spark under HDP, please set through HDP_VERSION in spark-env.sh or add a java-opts file in conf with -Dhdp.version=xxx
at org.apache.spark.launcher.Main.main(Main.java:118)
I also tried...
(venv) [me@airflowetl tests]$ export HADOOP_CONF_DIR=/etc/hadoop/conf; spark-submit --master yarn --deploy-mode cluster sparksubmit.test.py
19/12/12 13:50:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
19/12/12 13:50:20 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
Exception in thread "main" java.lang.NoClassDefFoundError: com/sun/jersey/api/client/config/ClientConfig
at org.apache.hadoop.yarn.client.api.TimelineClient.createTimelineClient(TimelineClient.java:55)
....
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: com.sun.jersey.api.client.config.ClientConfig
...or (following https://www.hackingnote.com/en/spark/trouble-shooting/NoClassDefFoundError-ClientConfig)...
(venv) [airflow@airflowetl tests]$ spark-submit --master yarn --deploy-mode client --conf spark.hadoop.yarn.timeline-service.enabled=false sparksubmit.test.py
19/12/12 15:22:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
19/12/12 15:22:49 INFO spark.SparkContext: Running Spark version 2.4.4
19/12/12 15:22:49 INFO spark.SparkContext: Submitted application: hph_etl_TEST
19/12/12 15:22:49 INFO spark.SecurityManager: Changing view acls to: airflow
19/12/12 15:22:49 INFO spark.SecurityManager: Changing modify acls to: airflow
19/12/12 15:22:49 INFO spark.SecurityManager: Changing view acls groups to:
19/12/12 15:22:49 INFO spark.SecurityManager: Changing modify acls groups to:
19/12/12 15:22:49 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(airflow); groups with view permissions: Set(); users with modify permissions: Set(airflow); groups with modify permissions: Set()
19/12/12 15:22:49 INFO util.Utils: Successfully started service 'sparkDriver' on port 45232.
19/12/12 15:22:50 INFO spark.SparkEnv: Registering MapOutputTracker
19/12/12 15:22:50 INFO spark.SparkEnv: Registering BlockManagerMaster
19/12/12 15:22:50 INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
19/12/12 15:22:50 INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
19/12/12 15:22:50 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-320366b6-609a-497b-ac40-119d11682044
19/12/12 15:22:50 INFO memory.MemoryStore: MemoryStore started with capacity 366.3 MB
19/12/12 15:22:50 INFO spark.SparkEnv: Registering OutputCommitCoordinator
19/12/12 15:22:50 INFO util.log: Logging initialized @2663ms
19/12/12 15:22:50 INFO server.Server: jetty-9.3.z-SNAPSHOT, build timestamp: unknown, git hash: unknown
19/12/12 15:22:50 INFO server.Server: Started @2763ms
19/12/12 15:22:50 INFO server.AbstractConnector: Started ServerConnector@50a3c656{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
19/12/12 15:22:50 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
19/12/12 15:22:50 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@306c15f1{/jobs,null,AVAILABLE,@Spark}
19/12/12 15:22:50 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2b566f8d{/jobs/json,null,AVAILABLE,@Spark}
19/12/12 15:22:50 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1b5ef515{/jobs/job,null,AVAILABLE,@Spark}
19/12/12 15:22:50 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@59f7a5e2{/jobs/job/json,null,AVAILABLE,@Spark}
19/12/12 15:22:50 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@41c58356{/stages,null,AVAILABLE,@Spark}
19/12/12 15:22:50 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2d5f2026{/stages/json,null,AVAILABLE,@Spark}
19/12/12 15:22:50 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@324ca89a{/stages/stage,null,AVAILABLE,@Spark}
19/12/12 15:22:50 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6f487c61{/stages/stage/json,null,AVAILABLE,@Spark}
19/12/12 15:22:50 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3897116a{/stages/pool,null,AVAILABLE,@Spark}
19/12/12 15:22:50 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@68ab090f{/stages/pool/json,null,AVAILABLE,@Spark}
19/12/12 15:22:50 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@42ea3278{/storage,null,AVAILABLE,@Spark}
19/12/12 15:22:50 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6eedf530{/storage/json,null,AVAILABLE,@Spark}
19/12/12 15:22:50 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6e71a5c6{/storage/rdd,null,AVAILABLE,@Spark}
19/12/12 15:22:50 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5e222a76{/storage/rdd/json,null,AVAILABLE,@Spark}
19/12/12 15:22:50 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4dc8aa38{/environment,null,AVAILABLE,@Spark}
19/12/12 15:22:50 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4c8d82c4{/environment/json,null,AVAILABLE,@Spark}
19/12/12 15:22:50 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2fb15106{/executors,null,AVAILABLE,@Spark}
19/12/12 15:22:50 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@608faf1c{/executors/json,null,AVAILABLE,@Spark}
19/12/12 15:22:50 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@689e405f{/executors/threadDump,null,AVAILABLE,@Spark}
19/12/12 15:22:50 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@48a5742a{/executors/threadDump/json,null,AVAILABLE,@Spark}
19/12/12 15:22:50 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6db93559{/static,null,AVAILABLE,@Spark}
19/12/12 15:22:50 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4d7ed508{/,null,AVAILABLE,@Spark}
19/12/12 15:22:50 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5510f12d{/api,null,AVAILABLE,@Spark}
19/12/12 15:22:50 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6d87de7{/jobs/job/kill,null,AVAILABLE,@Spark}
19/12/12 15:22:50 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@62595660{/stages/stage/kill,null,AVAILABLE,@Spark}
19/12/12 15:22:50 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://airflowetl.local:4040
19/12/12 15:22:51 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
19/12/12 15:22:51 INFO client.RMProxy: Connecting to ResourceManager at hw001.local/172.18.4.46:8050
19/12/12 15:22:51 INFO yarn.Client: Requesting a new application from cluster with 4 NodeManagers
19/12/12 15:22:51 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (15360 MB per container)
19/12/12 15:22:51 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
19/12/12 15:22:51 INFO yarn.Client: Setting up container launch context for our AM
19/12/12 15:22:51 INFO yarn.Client: Setting up the launch environment for our AM container
19/12/12 15:22:51 INFO yarn.Client: Preparing resources for our AM container
19/12/12 15:22:51 WARN yarn.Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
19/12/12 15:22:53 INFO yarn.Client: Uploading resource file:/tmp/spark-4e600acd-2d34-4271-b01c-25f312906f93/__spark_libs__8368679994314392346.zip -> hdfs://hw001.local:8020/user/airflow/.sparkStaging/application_1572898343646_0029/__spark_libs__8368679994314392346.zip
19/12/12 15:22:54 INFO yarn.Client: Uploading resource file:/home/airflow/projects/hph_etl_airflow/venv/lib/python3.6/site-packages/pyspark/python/lib/pyspark.zip -> hdfs://hw001.local:8020/user/airflow/.sparkStaging/application_1572898343646_0029/pyspark.zip
19/12/12 15:22:55 INFO yarn.Client: Uploading resource file:/home/airflow/projects/hph_etl_airflow/venv/lib/python3.6/site-packages/pyspark/python/lib/py4j-0.10.7-src.zip -> hdfs://hw001.local:8020/user/airflow/.sparkStaging/application_1572898343646_0029/py4j-0.10.7-src.zip
19/12/12 15:22:55 INFO yarn.Client: Uploading resource file:/tmp/spark-4e600acd-2d34-4271-b01c-25f312906f93/__spark_conf__5403285055443058510.zip -> hdfs://hw001.local:8020/user/airflow/.sparkStaging/application_1572898343646_0029/__spark_conf__.zip
19/12/12 15:22:55 INFO spark.SecurityManager: Changing view acls to: airflow
19/12/12 15:22:55 INFO spark.SecurityManager: Changing modify acls to: airflow
19/12/12 15:22:55 INFO spark.SecurityManager: Changing view acls groups to:
19/12/12 15:22:55 INFO spark.SecurityManager: Changing modify acls groups to:
19/12/12 15:22:55 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(airflow); groups with view permissions: Set(); users with modify permissions: Set(airflow); groups with modify permissions: Set()
19/12/12 15:22:56 INFO yarn.Client: Submitting application application_1572898343646_0029 to ResourceManager
19/12/12 15:22:56 INFO impl.YarnClientImpl: Submitted application application_1572898343646_0029
19/12/12 15:22:56 INFO cluster.SchedulerExtensionServices: Starting Yarn extension services with app application_1572898343646_0029 and attemptId None
19/12/12 15:22:57 INFO yarn.Client: Application report for application_1572898343646_0029 (state: ACCEPTED)
19/12/12 15:22:57 INFO yarn.Client:
client token: N/A
diagnostics: AM container is launched, waiting for AM container to Register with RM
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1576200176385
final status: UNDEFINED
tracking URL: http://hw001.local:8088/proxy/application_1572898343646_0029/
user: airflow
19/12/12 15:22:58 INFO yarn.Client: Application report for application_1572898343646_0029 (state: FAILED)
19/12/12 15:22:58 INFO yarn.Client:
client token: N/A
diagnostics: Application application_1572898343646_0029 failed 2 times due to AM Container for appattempt_1572898343646_0029_000002 exited with exitCode: 1
Failing this attempt.Diagnostics: [2019-12-12 15:22:58.214]Exception from container-launch.
Container id: container_e02_1572898343646_0029_02_000001
Exit code: 1
[2019-12-12 15:22:58.215]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
/hadoop/yarn/local/usercache/airflow/appcache/application_1572898343646_0029/container_e02_1572898343646_0029_02_000001/launch_container.sh: line 38: $PWD:$PWD/__spark_conf__:$PWD/__spark_libs__/*:$HADOOP_CONF_DIR:/usr/hdp/3.1.0.0-78/hadoop/*:/usr/hdp/3.1.0.0-78/hadoop/lib/*:/usr/hdp/current/hadoop-hdfs-client/*:/usr/hdp/current/hadoop-hdfs-client/lib/*:/usr/hdp/current/hadoop-yarn-client/*:/usr/hdp/current/hadoop-yarn-client/lib/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:$PWD/mr-framework/hadoop/share/hadoop/tools/lib/*:/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar:/etc/hadoop/conf/secure:$PWD/__spark_conf__/__hadoop_conf__: bad substitution
....
Not sure what to make of this or how to proceed further; I did not fully understand the error message even after googling it.
Does anyone with more experience have further debugging tips or fixes?
spark-submit is a bash script and runs Java classes, so using a virtualenv wouldn't necessarily help (although you can see in the logs that files were uploaded from the environment).
The first error is because hdp-select requires Python 2, but it looks like it ran with Python 3 (probably due to your venv).
If you want to carry your Python environment to the executors and driver, you'd probably want to use the --py-files option instead, or set up the same Python environment on each Spark node.
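One hedged way to follow the --py-files suggestion: bundle the venv's pure-Python site-packages into a zip (the site-packages path is inferred from the upload log above; note that compiled extensions such as numpy will not survive zipping and still need a matching environment on each node):

```python
import pathlib
import zipfile

def bundle_site_packages(site_packages: str, out_zip: str) -> str:
    """Zip pure-Python modules so executors can import them via --py-files."""
    root = pathlib.Path(site_packages)
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in root.rglob("*.py"):
            # Keep paths relative to site-packages so imports resolve as-is.
            zf.write(path, path.relative_to(root))
    return out_zip

# bundle_site_packages("venv/lib/python3.6/site-packages", "deps.zip")
# Then: spark-submit --master yarn --deploy-mode cluster --py-files deps.zip sparksubmit.test.py
```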
Also, you seem to have Spark 2.4.4, not 2.3.2 as you say, which could explain the NoClassDefFoundError if you're mixing Spark versions (in particular, pyspark from pip doesn't download any scheduler-specific packages, like the YARN timeline client).
That said, the submission itself went through, and you can find the real exception at the tracking URL:
http://hw001.local:8088/proxy/application_1572898343646_0029

Spark on yarn runs indefinitely

I had Spark (2.2 on Hadoop 2.7) jobs running and had to restart the sparkmaster machine. Now Spark jobs on YARN get submitted and Accepted and run, but never end.
Cluster: 1 + 3 nodes. The ResourceManager and NameNode run on the sparkmaster node; a NodeManager and DataNode run on each of the 3 worker nodes.
Executor Log:
/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
17/12/15 08:58:02 INFO executor.CoarseGrainedExecutorBackend: Started daemon with process name: 130256@cassandralake1node3.localdomain
17/12/15 08:58:02 INFO util.SignalUtils: Registered signal handler for TERM
17/12/15 08:58:02 INFO util.SignalUtils: Registered signal handler for HUP
17/12/15 08:58:02 INFO util.SignalUtils: Registered signal handler for INT
17/12/15 08:58:03 WARN util.Utils: Your hostname, cassandralake1node3.localdomain resolves to a loopback address: 127.0.0.1; using 10.204.211.105 instead (on interface em1)
17/12/15 08:58:03 WARN util.Utils: Set SPARK_LOCAL_IP if you need to bind to another address
17/12/15 08:58:03 INFO spark.SecurityManager: Changing view acls to: root
17/12/15 08:58:03 INFO spark.SecurityManager: Changing modify acls to: root
17/12/15 08:58:03 INFO spark.SecurityManager: Changing view acls groups to:
17/12/15 08:58:03 INFO spark.SecurityManager: Changing modify acls groups to:
17/12/15 08:58:03 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
17/12/15 08:58:03 INFO client.TransportClientFactory: Successfully created connection to /10.204.211.105:40866 after 85 ms (0 ms spent in bootstraps)
17/12/15 08:58:04 INFO spark.SecurityManager: Changing view acls to: root
17/12/15 08:58:04 INFO spark.SecurityManager: Changing modify acls to: root
17/12/15 08:58:04 INFO spark.SecurityManager: Changing view acls groups to:
17/12/15 08:58:04 INFO spark.SecurityManager: Changing modify acls groups to:
17/12/15 08:58:04 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
17/12/15 08:58:04 INFO client.TransportClientFactory: Successfully created connection to /10.204.211.105:40866 after 1 ms (0 ms spent in bootstraps)
17/12/15 08:58:04 INFO storage.DiskBlockManager: Created local directory at /tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1513329182871_0010/blockmgr-15ae52df-c267-427e-b8f1-ef1c84059740
17/12/15 08:58:04 INFO memory.MemoryStore: MemoryStore started with capacity 1311.0 MB
17/12/15 08:58:04 INFO executor.CoarseGrainedExecutorBackend: Connecting to driver: spark://CoarseGrainedScheduler@10.204.211.105:40866
17/12/15 08:58:04 INFO executor.CoarseGrainedExecutorBackend: Successfully registered with driver
17/12/15 08:58:04 INFO executor.Executor: Starting executor ID 1 on host cassandranode3
17/12/15 08:58:04 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 35983.
17/12/15 08:58:04 INFO netty.NettyBlockTransferService: Server created on cassandranode3:35983
17/12/15 08:58:04 INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/12/15 08:58:04 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(1, cassandranode3, 35983, None)
17/12/15 08:58:04 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(1, cassandranode3, 35983, None)
17/12/15 08:58:04 INFO storage.BlockManager: external shuffle service port = 7337
17/12/15 08:58:04 INFO storage.BlockManager: Registering executor with local external shuffle service.
17/12/15 08:58:04 INFO client.TransportClientFactory: Successfully created connection to cassandranode3/10.204.211.105:7337 after 1 ms (0 ms spent in bootstraps)
17/12/15 08:58:04 INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(1, cassandranode3, 35983, None)
Driver Log:
O util.Utils: Using initial executors = 2, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
17/12/15 09:50:06 INFO yarn.YarnAllocator: Will request 2 executor container(s), each with 1 core(s) and 3072 MB memory (including 1024 MB of overhead)
17/12/15 09:50:06 INFO yarn.YarnAllocator: Submitted 2 unlocalized container requests.
17/12/15 09:50:06 INFO yarn.ApplicationMaster: Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals
17/12/15 09:50:07 INFO impl.AMRMClientImpl: Received new token for : cassandranode2:38628
17/12/15 09:50:07 INFO impl.AMRMClientImpl: Received new token for : cassandranode3:39212
17/12/15 09:50:07 INFO yarn.YarnAllocator: Launching container container_1513329182871_0011_01_000002 on host cassandranode2 for executor with ID 1
17/12/15 09:50:07 INFO yarn.YarnAllocator: Launching container container_1513329182871_0011_01_000003 on host cassandranode3 for executor with ID 2
17/12/15 09:50:07 INFO yarn.YarnAllocator: Received 2 containers from YARN, launching executors on 2 of them.
17/12/15 09:50:07 INFO impl.ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
17/12/15 09:50:07 INFO impl.ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
17/12/15 09:50:07 INFO impl.ContainerManagementProtocolProxy: Opening proxy : cassandranode3:39212
17/12/15 09:50:07 INFO impl.ContainerManagementProtocolProxy: Opening proxy : cassandranode2:38628
17/12/15 09:50:09 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.204.211.105:47622) with ID 2
17/12/15 09:50:09 INFO spark.ExecutorAllocationManager: New executor 2 has registered (new total is 1)
17/12/15 09:50:09 INFO storage.BlockManagerMasterEndpoint: Registering block manager cassandranode3:33779 with 1311.0 MB RAM, BlockManagerId(2, cassandranode3, 33779, None)
17/12/15 09:50:11 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.204.211.103:43578) with ID 1
17/12/15 09:50:11 INFO spark.ExecutorAllocationManager: New executor 1 has registered (new total is 2)
17/12/15 09:50:11 INFO storage.BlockManagerMasterEndpoint: Registering block manager cassandranode2:37931 with 1311.0 MB RAM, BlockManagerId(1, cassandranode2, 37931, None)
17/12/15 09:50:11 INFO cluster.YarnClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
17/12/15 09:50:11 INFO cluster.YarnClusterScheduler: YarnClusterScheduler.postStartHook done
17/12/15 09:50:11 INFO internal.SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1513329182871_0011/container_1513329182871_0011_01_000001/spark-warehouse').
17/12/15 09:50:11 INFO internal.SharedState: Warehouse path is 'file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1513329182871_0011/container_1513329182871_0011_01_000001/spark-warehouse'.
17/12/15 09:50:11 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler#e087bd4{/SQL,null,AVAILABLE,#Spark}
17/12/15 09:50:11 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler#c93af1f{/SQL/json,null,AVAILABLE,#Spark}
17/12/15 09:50:11 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler#53fd3a5d{/SQL/execution,null,AVAILABLE,#Spark}
17/12/15 09:50:11 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler#7dcd6778{/SQL/execution/json,null,AVAILABLE,#Spark}
17/12/15 09:50:11 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler#3a25ecc9{/static/sql,null,AVAILABLE,#Spark}
17/12/15 09:50:12 INFO state.StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
17/12/15 09:51:09 INFO spark.ExecutorAllocationManager: Request to remove executorIds: 2
17/12/15 09:51:11 INFO spark.ExecutorAllocationManager: Request to remove executorIds: 1
spark-defaults.conf
spark.master yarn
spark.eventLog.enabled true
spark.eventLog.dir file:///home/sparkeventlogs
spark.serializer org.apache.spark.serializer.KryoSerializer
spark.driver.memory 5g
spark.driver.cores 1
spark.yarn.am.memory 2048m
spark.yarn.am.cores 1
spark.submit.deployMode cluster
spark.dynamicAllocation.enabled true
spark.shuffle.service.enabled true
spark.driver.maxResultSize 20g
spark.jars.packages datastax:spark-cassandra-connector:2.0.5-s_2.11
spark.cassandra.connection.host 10.204.211.101,10.204.211.103,10.204.211.105
spark.executor.extraJavaOptions -XX:+UseG1GC -XX:+PrintGCDetails -XX:+PrintGCDateStamps
spark.driver.extraJavaOptions -Dhdp.version=2.7.4
spark.cassandra.read.timeout_ms 180000
spark.yarn.stagingDir hdfs:///tmp
spark.network.timeout 2400
spark.yarn.driver.memoryOverhead 2048
spark.yarn.executor.memoryOverhead 1024
yarn.resourcemanager.app.timeout.minutes -1
spark.yarn.submit.waitAppCompletion true
spark.sql.inMemoryColumnarStorage.compressed true
spark.sql.inMemoryColumnarStorage.batchSize 10000
Spark Submit command:
spark-submit --class com.swcassandrautil.popstatsclone.popihits --master yarn --deploy-mode cluster --executor-cores 1 --executor-memory 2g --conf spark.dynamicAllocation.initialExecutors=2 --conf spark.dynamicAllocation.maxExecutors=8 --conf spark.dynamicAllocation.minExecutors=2 --conf spark.memory.fraction=0.75 --conf spark.memory.storageFraction=0.75 /scala/statscloneihits/target/scala-2.11/popstatscloneihits_2.11-1.0.jar "/mnt/data/tmp/xyz*" "\t";
I'd appreciate your input.
Thanks
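One detail worth noting from the driver log: both executors registered at 09:50:09–09:50:11 and were requested for removal at 09:51:09–09:51:11, i.e. almost exactly 60 seconds later, which matches the default `spark.dynamicAllocation.executorIdleTimeout` of 60s. If executors are being reclaimed before tasks land on them, a hedged sketch of the same submit command with a longer idle timeout (the 300s value here is illustrative, not a recommendation):

```shell
# Sketch only: raise the dynamic-allocation idle timeout so executors
# are not reclaimed 60s after registering (60s is the default for
# spark.dynamicAllocation.executorIdleTimeout).
spark-submit --class com.swcassandrautil.popstatsclone.popihits \
  --master yarn --deploy-mode cluster \
  --executor-cores 1 --executor-memory 2g \
  --conf spark.dynamicAllocation.initialExecutors=2 \
  --conf spark.dynamicAllocation.minExecutors=2 \
  --conf spark.dynamicAllocation.maxExecutors=8 \
  --conf spark.dynamicAllocation.executorIdleTimeout=300s \
  /scala/statscloneihits/target/scala-2.11/popstatscloneihits_2.11-1.0.jar \
  "/mnt/data/tmp/xyz*" "\t"
```

This only applies if the job genuinely has no tasks scheduled during that first minute; if tasks are pending and executors are still removed, the problem lies elsewhere.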

Kafka message consumption with spark

I am using the HDP-2.3 sandbox to consume Kafka messages via a Spark submit job.
I put some messages into Kafka as follows:
kafka-console-producer.sh --broker-list sandbox.hortonworks.com:6667 --topic webevent
OR
kafka-console-producer.sh --broker-list sandbox.hortonworks.com:6667 --topic test --new-producer < myfile.txt
Now I need to consume the above messages from a Spark job, as shown below:
./bin/spark-submit --master spark://192.168.255.150:7077 --executor-memory 512m --class org.apache.spark.examples.streaming.JavaDirectKafkaWordCount lib/spark-examples-1.4.1-hadoop2.4.0.jar 192.168.255.150:2181 webevent 10
where 2181 is the ZooKeeper port.
I am getting the error shown below (please guide me on how to consume the messages from Kafka):
16/05/02 15:21:30 INFO SparkContext: Running Spark version 1.3.1
16/05/02 15:21:30 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/05/02 15:21:31 INFO SecurityManager: Changing view acls to: root
16/05/02 15:21:31 INFO SecurityManager: Changing modify acls to: root
16/05/02 15:21:31 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/05/02 15:21:31 INFO Slf4jLogger: Slf4jLogger started
16/05/02 15:21:31 INFO Remoting: Starting remoting
16/05/02 15:21:32 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver#sandbox.hortonworks.com:53950]
16/05/02 15:21:32 INFO Utils: Successfully started service 'sparkDriver' on port 53950.
16/05/02 15:21:32 INFO SparkEnv: Registering MapOutputTracker
16/05/02 15:21:32 INFO SparkEnv: Registering BlockManagerMaster
16/05/02 15:21:32 INFO DiskBlockManager: Created local directory at /tmp/spark-c70b08b9-41a3-42c8-9d83-bc4258e299c6/blockmgr-c2d86de6-34a7-497c-8018-d3437a100e87
16/05/02 15:21:32 INFO MemoryStore: MemoryStore started with capacity 265.4 MB
16/05/02 15:21:32 INFO HttpFileServer: HTTP File server directory is /tmp/spark-a8f7ade9-292c-42c4-9e54-43b3b3495b0c/httpd-65d36d04-1e2a-4e69-8d20-295465100070
16/05/02 15:21:32 INFO HttpServer: Starting HTTP Server
16/05/02 15:21:32 INFO Server: jetty-8.y.z-SNAPSHOT
16/05/02 15:21:32 INFO AbstractConnector: Started SocketConnector#0.0.0.0:37014
16/05/02 15:21:32 INFO Utils: Successfully started service 'HTTP file server' on port 37014.
16/05/02 15:21:32 INFO SparkEnv: Registering OutputCommitCoordinator
16/05/02 15:21:32 INFO Server: jetty-8.y.z-SNAPSHOT
16/05/02 15:21:32 INFO AbstractConnector: Started SelectChannelConnector#0.0.0.0:4040
16/05/02 15:21:32 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/05/02 15:21:32 INFO SparkUI: Started SparkUI at http://sandbox.hortonworks.com:4040
16/05/02 15:21:33 INFO SparkContext: Added JAR file:/usr/hdp/2.3.0.0-2130/spark/lib/spark-examples-1.4.1-hadoop2.4.0.jar at http://192.168.255.150:37014/jars/spark-examples-1.4.1-hadoop2.4.0.jar with timestamp 1462202493866
16/05/02 15:21:34 INFO AppClient$ClientActor: Connecting to master akka.tcp://sparkMaster#192.168.255.150:7077/user/Master...
16/05/02 15:21:34 INFO SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20160502152134-0000
16/05/02 15:21:34 INFO AppClient$ClientActor: Executor added: app-20160502152134-0000/0 on worker-20160502150437-sandbox.hortonworks.com-36920 (sandbox.hortonworks.com:36920) with 1 cores
16/05/02 15:21:34 INFO SparkDeploySchedulerBackend: Granted executor ID app-20160502152134-0000/0 on hostPort sandbox.hortonworks.com:36920 with 1 cores, 512.0 MB RAM
16/05/02 15:21:34 INFO AppClient$ClientActor: Executor updated: app-20160502152134-0000/0 is now RUNNING
16/05/02 15:21:34 INFO AppClient$ClientActor: Executor updated: app-20160502152134-0000/0 is now LOADING
16/05/02 15:21:34 INFO NettyBlockTransferService: Server created on 43440
16/05/02 15:21:34 INFO BlockManagerMaster: Trying to register BlockManager
16/05/02 15:21:34 INFO BlockManagerMasterActor: Registering block manager sandbox.hortonworks.com:43440 with 265.4 MB RAM, BlockManagerId(<driver>, sandbox.hortonworks.com, 43440)
16/05/02 15:21:34 INFO BlockManagerMaster: Registered BlockManager
16/05/02 15:21:35 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
16/05/02 15:21:35 INFO VerifiableProperties: Verifying properties
16/05/02 15:21:35 INFO VerifiableProperties: Property group.id is overridden to
16/05/02 15:21:35 INFO VerifiableProperties: Property zookeeper.connect is overridden to
16/05/02 15:21:35 INFO SimpleConsumer: Reconnect due to socket error: java.io.EOFException: Received -1 when reading from channel, socket has likely been closed.
Error: application failed with exception
org.apache.spark.SparkException: java.io.EOFException: Received -1 when reading from channel, socket has likely been closed.
at org.apache.spark.streaming.kafka.KafkaUtils$$anonfun$createDirectStream$2.apply(KafkaUtils.scala:416)
at org.apache.spark.streaming.kafka.KafkaUtils$$anonfun$createDirectStream$2.apply(KafkaUtils.scala:416)
at scala.util.Either.fold(Either.scala:97)
at org.apache.spark.streaming.kafka.KafkaUtils$.createDirectStream(KafkaUtils.scala:415)
at org.apache.spark.streaming.kafka.KafkaUtils$.createDirectStream(KafkaUtils.scala:532)
at org.apache.spark.streaming.kafka.KafkaUtils.createDirectStream(KafkaUtils.scala)
at org.apache.spark.examples.streaming.JavaDirectKafkaWordCount.main(JavaDirectKafkaWordCount.java:71)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:577)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:174)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:197)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:112)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
OR
When I use this:
./bin/spark-submit --master spark://192.168.255.150:7077 --executor-memory 512m --class org.apache.spark.examples.streaming.JavaDirectKafkaWordCount lib/spark-examples-1.4.1-hadoop2.4.0.jar 192.168.255.150:6667 webevent 10
where 6667 is Kafka's broker (message-producing) port, I get this error:
16/05/02 15:27:26 INFO SimpleConsumer: Reconnect due to socket error: java.nio.channels.ClosedChannelException
Error: application failed with exception
org.apache.spark.SparkException: java.nio.channels.ClosedChannelException
at org.apache.spark.streaming.kafka.KafkaUtils$$anonfun$createDirectStream$2.apply(KafkaUtils.scala:416)
at org.apache.spark.streaming.kafka.KafkaUtils$$anonfun$createDirectStream$2.apply(KafkaUtils.scala:416)
I don't know if this helps:
./bin/spark-submit --class consumer.kafka.client.Consumer --master spark://192.168.255.150:7077 --executor-memory 1G lib/kafka-spark-consumer-1.0.6.jar 10
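For context on the two failures above: `KafkaUtils.createDirectStream` (which `JavaDirectKafkaWordCount` uses) talks to the Kafka brokers directly and never goes through ZooKeeper, so passing `192.168.255.150:2181` can only fail. The example's usage is `<brokers> <topics>`, so the trailing `10` argument is also unexpected. A hedged corrected invocation; the use of the sandbox hostname instead of the raw IP is an assumption, based on the broker's advertised hostname often being what matters on the Hortonworks sandbox:

```shell
# Sketch: pass the Kafka broker list (the same host:port the console
# producer used), not the ZooKeeper quorum, and only <brokers> <topics>.
./bin/spark-submit --master spark://192.168.255.150:7077 \
  --executor-memory 512m \
  --class org.apache.spark.examples.streaming.JavaDirectKafkaWordCount \
  lib/spark-examples-1.4.1-hadoop2.4.0.jar \
  sandbox.hortonworks.com:6667 webevent
```

If the broker hostname variant still throws `ClosedChannelException`, it is worth checking that `advertised.host.name` in the broker config resolves from where the executors run.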

Submitting a job to Apache Spark Error

I have the following settings for my Apache Spark instance that runs locally on my machine:
export SPARK_HOME=/Users/joe/Softwares/apache/spark/spark-1.6.0-bin-hadoop2.6
export SPARK_MASTER_IP=127.0.0.1
export SPARK_MASTER_PORT=7077
export SPARK_MASTER_WEBUI_PORT=8080
export SPARK_LOCAL_DIRS=$SPARK_HOME/work
export SPARK_WORKER_CORES=1
export SPARK_WORKER_MEMORY=1G
export SPARK_EXECUTOR_INSTANCES=2
export SPARK_DAEMON_MEMORY=384m
I have a Spark Streaming consumer that I would like to submit to Spark. This streaming consumer is just a jar file that I submit like this:
$SPARK_HOME/bin/spark-submit --class com.my.job.MetricsConsumer --master spark://127.0.0.1:7077 /Users/joe/Sandbox/jaguar/spark-kafka-consumer/target/scala-2.11/spark-kafka-consumer-0.1.0-SNAPAHOT.jar
I get the following error:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/01/13 10:30:06 INFO SparkContext: Running Spark version 1.6.0
16/01/13 10:30:06 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/01/13 10:30:06 INFO SecurityManager: Changing view acls to: joe
16/01/13 10:30:06 INFO SecurityManager: Changing modify acls to: joe
16/01/13 10:30:06 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(joe); users with modify permissions: Set(joe)
16/01/13 10:30:07 INFO Utils: Successfully started service 'sparkDriver' on port 65528.
16/01/13 10:30:07 INFO Slf4jLogger: Slf4jLogger started
16/01/13 10:30:08 INFO Remoting: Starting remoting
16/01/13 10:30:08 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem#172.22.0.104:65529]
16/01/13 10:30:08 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 65529.
16/01/13 10:30:08 INFO SparkEnv: Registering MapOutputTracker
16/01/13 10:30:08 INFO SparkEnv: Registering BlockManagerMaster
16/01/13 10:30:08 INFO DiskBlockManager: Created local directory at /Users/joe/Softwares/apache/spark/spark-1.6.0-bin-hadoop2.6/work/blockmgr-cee3388d-ecfc-42a7-a76c-8738401db0c9
16/01/13 10:30:08 INFO MemoryStore: MemoryStore started with capacity 511.1 MB
16/01/13 10:30:08 INFO SparkEnv: Registering OutputCommitCoordinator
16/01/13 10:30:08 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/01/13 10:30:08 INFO SparkUI: Started SparkUI at http://172.22.0.104:4040
16/01/13 10:30:08 INFO HttpFileServer: HTTP File server directory is /Users/joe/Softwares/apache/spark/spark-1.6.0-bin-hadoop2.6/work/spark-10d7d880-7d1d-4234-88d4-d80558c8051a/httpd-40f80936-7508-4b6c-bb90-411aa37d7e93
16/01/13 10:30:08 INFO HttpServer: Starting HTTP Server
16/01/13 10:30:08 INFO Utils: Successfully started service 'HTTP file server' on port 65530.
16/01/13 10:30:09 INFO SparkContext: Added JAR file:/Users/joe/Sandbox/jaguar/spark-kafka-consumer/target/scala-2.11/spark-kafka-consumer-0.1.0-SNAPAHOT.jar at http://172.22.0.104:65530/jars/spark-kafka-consumer-0.1.0-SNAPAHOT.jar with timestamp 1452677409966
16/01/13 10:30:10 INFO AppClient$ClientEndpoint: Connecting to master spark://myhost:7077...
16/01/13 10:30:10 WARN AppClient$ClientEndpoint: Failed to connect to master myhost:7077
java.io.IOException: Failed to connect to myhost:7077
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:216)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:167)
at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:200)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:187)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:183)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Net.java:101)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622)
at io.netty.channel.socket.nio.NioSocketChannel.doConnect(NioSocketChannel.java:209)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.connect(AbstractNioChannel.java:207)
at io.netty.channel.DefaultChannelPipeline$HeadContext.connect(DefaultChannelPipeline.java:1097)
at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:471)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:456)
at io.netty.channel.ChannelOutboundHandlerAdapter.connect(ChannelOutboundHandlerAdapter.java:47)
at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:471)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:456)
at io.netty.channel.ChannelDuplexHandler.connect(ChannelDuplexHandler.java:50)
at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:471)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:456)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:438)
at io.netty.channel.DefaultChannelPipeline.connect(DefaultChannelPipeline.java:908)
at io.netty.channel.AbstractChannel.connect(AbstractChannel.java:203)
at io.netty.bootstrap.Bootstrap$2.run(Bootstrap.java:166)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
... 1 more
I have checked my firewall settings and everything seems to be OK. Why would I get this error? Any ideas?
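The `java.nio.channels.UnresolvedAddressException` beneath "Failed to connect to myhost:7077" means the driver tried to resolve the literal hostname `myhost`, not the `127.0.0.1` passed on the command line. A plausible cause (an assumption, since the application code isn't shown) is a hardcoded `setMaster("spark://myhost:7077")` inside the jar or a stray entry in a config file, which overrides the CLI. A quick diagnostic sketch:

```shell
# Does "myhost" resolve at all on this machine?
getent hosts myhost || echo "myhost does not resolve"

# Is "myhost" leaking in from spark-defaults.conf / spark-env.sh?
grep -R "myhost" "$SPARK_HOME/conf" 2>/dev/null

# If the jar hardcodes the master, prefer removing setMaster(...) from
# the code and supplying it only on the command line:
$SPARK_HOME/bin/spark-submit --class com.my.job.MetricsConsumer \
  --master spark://127.0.0.1:7077 \
  /Users/joe/Sandbox/jaguar/spark-kafka-consumer/target/scala-2.11/spark-kafka-consumer-0.1.0-SNAPAHOT.jar
```

In general, a master URL set programmatically in `SparkConf` wins over `--master`, so the safest pattern is to leave the master out of the code entirely for jobs meant to be submitted with `spark-submit`.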

Resources