Zeppelin throws java.lang.OutOfMemoryError: Java heap space - apache-spark

I am trying to use Zeppelin with the following code:
import java.net.URL
import java.nio.charset.Charset
import org.apache.commons.io.IOUtils

val dataText = sc.parallelize(IOUtils.toString(new URL("http://XXX.XX.XXX.121:8090/my_data.txt"), Charset.forName("utf8")).split("\n"))

case class Data(id: String, time: Long, value1: Double, value2: Int, mode: Int)

val dat = dataText
  .map(s => s.split("\t"))
  .filter(s => s(0) != "Header:")
  .map(s => Data(s(0), s(1).toLong, s(2).toDouble, s(3).toInt, s(4).toInt))
  .toDF()
dat.registerTempTable("mydatatable")
This keeps throwing the following error:
java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:2367)
at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:130)
at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:114)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:535)
at java.lang.StringBuilder.append(StringBuilder.java:204)
at org.apache.commons.io.output.StringBuilderWriter.write(StringBuilderWriter.java:138)
at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:2002)
at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1980)
at org.apache.commons.io.IOUtils.copy(IOUtils.java:1957)
at org.apache.commons.io.IOUtils.copy(IOUtils.java:1907)
at org.apache.commons.io.IOUtils.toString(IOUtils.java:778)
at org.apache.commons.io.IOUtils.toString(IOUtils.java:896)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:38)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:43)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:45)
at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:47)
at $iwC$$iwC$$iwC$$iwC.<init>(<console>:49)
at $iwC$$iwC$$iwC.<init>(<console>:51)
at $iwC$$iwC.<init>(<console>:53)
at $iwC.<init>(<console>:55)
at <init>(<console>:57)
at .<init>(<console>:61)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
I have already set the following in zeppelin-env.sh:
export ZEPPELIN_JAVA_OPTS="-Dhdp.version=2.3.0.0-2557 -Dspark.executor.memory=4g"
Any idea what I may be missing? The file I am parsing, my_data.txt, is about 200 MB.
BTW, I am using the Hortonworks Sandbox, if that matters.
EDIT 1
Here is my zeppelin-env.sh
export HADOOP_CONF_DIR=/etc/hadoop/conf
export ZEPPELIN_PORT=9995
export ZEPPELIN_JAVA_OPTS="-Dhdp.version=2.3.0.0-2557 -Dspark.executor.memory=4g"
export SPARK_SUBMIT_OPTIONS="--driver-java-options -Xmx4g"
export ZEPPELIN_INT_MEM="-Xmx4g"
export SPARK_HOME=/usr/hdp/2.3.0.0-2557/spark
Regards
Kiran

Can you try increasing the memory in SPARK_SUBMIT_OPTIONS in conf/zeppelin-env.sh:
export SPARK_SUBMIT_OPTIONS="--driver-java-options -Xmx20g"
This thread may help
http://apache-zeppelin-users-incubating-mailing-list.75479.x6.nabble.com/Can-not-configure-driver-memory-size-td1513.html
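One more note: -Dspark.executor.memory in ZEPPELIN_JAVA_OPTS only raises executor memory, while the stack trace above shows the OutOfMemoryError happening on the driver, inside IOUtils.toString. As a sketch of a memory-friendlier load, assuming the file can first be copied to HDFS (the hdfs:/// path below is hypothetical), you can let Spark stream the file into partitions instead of materialising the whole 200 MB string in the driver:
import sqlContext.implicits._

case class Data(id: String, time: Long, value1: Double, value2: Int, mode: Int)

// sc.textFile reads the file line by line on the executors, so no single JVM
// ever holds the whole file as one String the way IOUtils.toString does.
val dataText = sc.textFile("hdfs:///tmp/my_data.txt")
val dat = dataText
  .map(_.split("\t"))
  .filter(s => s(0) != "Header:")
  .map(s => Data(s(0), s(1).toLong, s(2).toDouble, s(3).toInt, s(4).toInt))
  .toDF()
dat.registerTempTable("mydatatable")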

Increasing the memory for the following zeppelin-env.sh variable did the trick for me. The default is 1 GB heap / 0.5 GB PermGen; I increased it to 10 GB / 5 GB:
export ZEPPELIN_MEM="-Xmx10024m -XX:MaxPermSize=5120m"
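To confirm the new options were actually picked up after restarting Zeppelin, one quick check (assuming a standard JDK with jps on the PATH) is to print the JVM arguments of the running daemon:
# Shows the -Xmx / -XX:MaxPermSize flags the Zeppelin JVM was started with.
jps -v | grep -i zeppelin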

I was getting the error below while trying to bring up the Zeppelin notebook:
INFO [2021-05-04 15:16:22,015] ({main} Folder.java[addNote]:185) - Add note 2G7CAFXX7 to folder /
INFO [2021-05-04 15:16:22,016] ({main} Notebook.java[<init>]:127) - Notebook indexing started...
WARN [2021-05-04 15:16:32,045] ({main} ContextHandler.java[log]:2355) - unavailable
MultiException stack 1 of 1
java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.store.RAMFile.newBuffer(RAMFile.java:80)
at org.apache.lucene.store.RAMFile.addBuffer(RAMFile.java:53)
To resolve this issue, I tuned the ZEPPELIN_MEM parameter in the zeppelin-env.sh file like this:
export ZEPPELIN_MEM="-Xmx5024m -XX:MaxPermSize=5120m"
Then restart Zeppelin:
sudo systemctl stop zeppelin; sudo systemctl start zeppelin
Result
INFO [2021-05-04 18:51:02,939] ({main} Folder.java[addNote]:185) - Add note 2G7CAFXX7 to folder /
INFO [2021-05-04 18:51:02,940] ({main} Notebook.java[<init>]:127) - Notebook indexing started...
INFO [2021-05-04 18:51:05,793] ({main} LuceneSearch.java[addIndexDocs]:305) - Indexing 905 notebooks took 2853ms
INFO [2021-05-04 18:51:05,793] ({main} Notebook.java[<init>]:129) - Notebook indexing finished: 905 indexed in -2s
INFO [2021-05-04 18:51:05,795] ({main} Helium.java[loadConf]:103) - Add helium local registry /usr/lib/zeppelin/helium
INFO [2021-05-04 18:51:05,797] ({main} Helium.java[loadConf]:100) - Add helium
INFO [2021-05-04 18:51:06,631] ({main} Server.java[doStart]:407) - Started #131632ms
INFO [2021-05-04 18:51:06,631] ({main} ZeppelinServer.java[main]:249) - Done, zeppelin server started

The only thing that worked for me (using Spark 2) was to add this to conf/zeppelin-env.sh:
export SPARK_SUBMIT_OPTIONS="... --driver-memory 4g ..."
Then restart the Zeppelin interpreter (in Zeppelin for Spark 2, click the settings button at the top right, then click the Interpreter link, scroll down, and click the Restart button in the Spark section).

Related

spark-submit on local Hadoop-Yarn setup, fails with Stdout path must be absolute error

I have installed the latest Hadoop and Spark versions on my Windows machine.
I am trying to launch one of the provided examples, but it fails and I have no idea what the diagnostic means. It seems related to stdout, but I can't figure out the root cause.
I launch the following command:
spark-submit --master yarn --class org.apache.spark.examples.JavaSparkPi C:\spark-3.0.1-bin-hadoop3.2\examples\jars\spark-examples_2.12-3.0.1.jar 100
And the exception I have is:
21/01/25 10:53:53 WARN MetricsSystem: Stopping a MetricsSystem that is not running
21/01/25 10:53:53 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
21/01/25 10:53:53 INFO SparkContext: Successfully stopped SparkContext
Exception in thread "main" org.apache.spark.SparkException: Application application_1611568137841_0002 failed 2 times due to AM Container for appattempt_1611568137841_0002_000002 exited with exitCode: -1
Failing this attempt.Diagnostics:
[2021-01-25 10:53:53.381] Stdout path must be absolute
For more detailed output, check the application tracking page: http://xxxx-PC:8088/cluster/app/application_1611568137841_0002 Then click on links to logs of each attempt.
. Failing the application.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:95)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:201)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:555)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2574)
at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:934)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:928)
at org.apache.spark.examples.JavaSparkPi.main(JavaSparkPi.java:37)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:928)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1007)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1016)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
21/01/25 10:53:53 INFO ShutdownHookManager: Shutdown hook called
21/01/25 10:53:53 INFO ShutdownHookManager: Deleting directory C:\Users\xxx\AppData\Local\Temp\spark-b28ecb32-5e3f-4d6a-973a-c03a7aae0da9
21/01/25 10:53:53 INFO ShutdownHookManager: Deleting directory C:\Users/xxx\AppData\Local\Temp\spark-3665ba77-d2aa-424a-9f75-e772bb5b9104
As for the diagnostics:
Diagnostics:
Application application_1611562870926_0004 failed 2 times due to AM Container for appattempt_1611562870926_0004_000002 exited with exitCode: -1
Failing this attempt.Diagnostics: [2021-01-25 10:29:19.734]Stdout path must be absolute
For more detailed output, check the application tracking page: http://****-PC:8088/cluster/app/application_1611562870926_0004 Then click on links to logs of each attempt.
. Failing the application.
Thank you!
So I am not sure of the root cause yet; it's probably because I run under Windows and some default property was wrong for YARN.
When I added the following two properties to yarn-site.xml, it worked fine:
<property>
  <name>yarn.nodemanager.log-dirs</name>
  <value>/tmp</value>
</property>
<property>
  <name>yarn.log.dir</name>
  <value>/tmp</value>
</property>
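A hedged follow-up: edited yarn-site.xml values only take effect after YARN is restarted. On a manual Windows install that is usually something like the following (the exact scripts depend on your setup):
%HADOOP_HOME%\sbin\stop-yarn.cmd
%HADOOP_HOME%\sbin\start-yarn.cmd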
Hope it helps someone in the future!

Sqoop Error while run via Spark

When I run this import via the sqoop command, it works:
sqoop import --connect "jdbc:sqlserver://myhost:port;databaseName=DBNAME" \
--username MYUSER -P \
--compress --compression-codec snappy \
--as-parquetfile \
--table MYTABLE \
--warehouse-dir /user/myuser/test1/ \
--m 1
Then I created the Spark Scala code below. But when I execute the project using spark-submit, it does not work:
import org.apache.hadoop.util.ToolRunner
import org.apache.sqoop.Sqoop
import org.apache.sqoop.SqoopOptions
import org.apache.sqoop.SqoopOptions.FileLayout
import org.apache.sqoop.tool.ImportTool

val sqoop_options: SqoopOptions = new SqoopOptions()
sqoop_options.setConnectString("jdbc:sqlserver://myhost:port;databaseName=DBNAME")
sqoop_options.setTableName("MYTABLE")
sqoop_options.setUsername("MYUSER")
sqoop_options.setPassword("password")
sqoop_options.setNumMappers(1)
sqoop_options.setTargetDir("/user/myuser/test1/")
sqoop_options.setFileLayout(FileLayout.ParquetFile)
sqoop_options.setCompressionCodec("org.apache.hadoop.io.compress.SnappyCodec")

val importTool = new ImportTool
val sqoop = new Sqoop(importTool, conf, sqoop_options) // conf is a Hadoop Configuration created elsewhere
val retCode = ToolRunner.run(sqoop, null)
It returns a driver-not-found error, even though I run it on the same cluster.
I already put the appropriate library in the /var/lib/sqoop directory, which is why the sqoop command runs well. But does it refer to a different library path when I run it via spark-submit?
Detailed error log:
/opt/cloudera/parcels/CDH-5.8.2-1.cdh5.8.2.p0.3/lib/spark/conf/spark-env.sh: line 75: spark.driver.extraClassPath=.:/etc/hbase/conf:/opt/cloudera/parcels/CDH/lib/hbase/hbase-common.jar:/opt/cloudera/parcels/CDH/lib/hbase/hbase-client.jar://opt/cloudera/parcels/CDH/lib/hbase/hbase-server.jar:/opt/cloudera/parcels/CDH/lib/hbase/hbase-protocol.jar:/opt/cloudera/parcels/CDH/lib/hbase/lib/guava-12.0.1.jar:/opt/cloudera/parcels/CDH/lib/hbase/lib/htrace-core.jar:/opt/cloudera/parcels/CDH/lib/hbase/lib/zookeeper.jar:/opt/cloudera/parcels/CDH/lib/hbase/lib/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH/lib/hbase/hbase-hadoop2-compat.jar:/opt/cloudera/parcels/CDH/lib/hbase/hbase-hadoop-compat.jar:/opt/cloudera/parcels/CDH/lib/hbase/lib/metrics-core-2.2.0.jar:/opt/cloudera/parcels/CDH/lib/hbase/hbase-spark.jar:/opt/cloudera/parcels/CDH/lib/hive/lib/hive-hbase-handler.jar: No such file or directory
/opt/cloudera/parcels/CDH-5.8.2-1.cdh5.8.2.p0.3/lib/spark/conf/spark-env.sh: line 77: spark.executor.extraClassPath=.:/opt/cloudera/parcels/CDH/lib/hbase/hbase-common.jar:/opt/cloudera/parcels/CDH/lib/hbase/hbase-client.jar://opt/cloudera/parcels/CDH/lib/hbase/hbase-server.jar:/opt/cloudera/parcels/CDH/lib/hbase/hbase-protocol.jar:/opt/cloudera/parcels/CDH/lib/hbase/lib/guava-12.0.1.jar:/opt/cloudera/parcels/CDH/lib/hbase/lib/htrace-core.jar:/opt/cloudera/parcels/CDH/lib/hbase/lib/zookeeper.jar:/opt/cloudera/parcels/CDH/lib/hbase/lib/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH/lib/hbase/hbase-hadoop2-compat.jar:/opt/cloudera/parcels/CDH/lib/hbase/hbase-hadoop-compat.jar:/opt/cloudera/parcels/CDH/lib/hbase/lib/metrics-core-2.2.0.jar:/opt/cloudera/parcels/CDH/lib/hbase/hbase-spark.jar:/opt/cloudera/parcels/CDH/lib/hive/lib/hive-hbase-handler.jar: No such file or directory
2018-03-09 13:59:37,332 INFO [main] security.UserGroupInformation: Login successful for user myuser using keytab file myuser.keytab
2018-03-09 13:59:37,371 INFO [main] sqoop.Sqoop: Running Sqoop version: 1.4.6
2018-03-09 13:59:37,426 WARN [main] sqoop.ConnFactory: $SQOOP_CONF_DIR has not been set in the environment. Cannot check for additional configuration.
2018-03-09 13:59:37,478 INFO [main] manager.SqlManager: Using default fetchSize of 1000
2018-03-09 13:59:37,479 INFO [main] tool.CodeGenTool: Beginning code generation
2018-03-09 13:59:37,479 INFO [main] tool.CodeGenTool: Will generate java class as codegen_MYTABLE
Exception in thread "main" java.lang.RuntimeException: Could not load db driver class: com.microsoft.sqlserver.jdbc.SQLServerDriver
at org.apache.sqoop.manager.SqlManager.makeConnection(SqlManager.java:856)
at org.apache.sqoop.manager.GenericJdbcManager.getConnection(GenericJdbcManager.java:52)
at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:744)
at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:767)
at org.apache.sqoop.manager.SqlManager.getColumnInfoForRawQuery(SqlManager.java:270)
at org.apache.sqoop.manager.SqlManager.getColumnTypesForRawQuery(SqlManager.java:241)
at org.apache.sqoop.manager.SqlManager.getColumnTypes(SqlManager.java:227)
at org.apache.sqoop.manager.ConnManager.getColumnTypes(ConnManager.java:295)
at org.apache.sqoop.orm.ClassWriter.getColumnTypes(ClassWriter.java:1833)
at org.apache.sqoop.orm.ClassWriter.generate(ClassWriter.java:1645)
at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:107)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:478)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at com.test.spark.sqoop.SqoopExample$.importSQLToHDFS(SqoopExample.scala:56)
at com.test.spark.sqoop.SqoopExample$.main(SqoopExample.scala:18)
at com.test.spark.sqoop.SqoopExample.main(SqoopExample.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Now my error is:
spark-submit --files kafka-jaas.conf,ampuser.keytab --conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=kafka-jaas.conf" --driver-java-options "-Djava.security.auth.login.config=kafka-jaas.conf" --conf spark.driver.extraClassPath=/var/lib/sqoop/sqljdbc4.jar:/opt/cloudera/parcels/CDH/lib/sqoop/lib/*,/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/* --class com.danamon.spark.sqoop.SqoopExample --deploy-mode client --master yarn kafka-streaming-0.0.1-SNAPSHOT-jar-with-dependencies.jar
18/03/13 20:54:51 INFO security.UserGroupInformation: Login successful for user ampuser using keytab file ampuser.keytab
18/03/13 20:54:51 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
18/03/13 20:54:51 WARN sqoop.ConnFactory: $SQOOP_CONF_DIR has not been set in the environment. Cannot check for additional configuration.
18/03/13 20:54:51 INFO manager.SqlManager: Using default fetchSize of 1000
18/03/13 20:54:51 INFO tool.CodeGenTool: Beginning code generation
18/03/13 20:54:51 INFO tool.CodeGenTool: Will generate java class as codegen_BD_AC_ACCT_PREFERENCES
18/03/13 20:54:52 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM [BD_AC_ACCT_PREFERENCES] AS t WHERE 1=0
18/03/13 20:54:52 INFO orm.CompilationManager: $HADOOP_MAPRED_HOME is not set
Note: /tmp/sqoop-ampuser/compile/95e3ef854d67b50d8ef72955151dc846/codegen_BD_AC_ACCT_PREFERENCES.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
18/03/13 20:54:54 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-ampuser/compile/95e3ef854d67b50d8ef72955151dc846/codegen_BD_AC_ACCT_PREFERENCES.jar
18/03/13 20:54:54 INFO mapreduce.ImportJobBase: Beginning import of BD_AC_ACCT_PREFERENCES
18/03/13 20:54:54 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
Exception in thread "main" java.lang.NoClassDefFoundError: org/kitesdk/data/mapreduce/DatasetKeyOutputFormat
at org.apache.sqoop.mapreduce.DataDrivenImportJob.getOutputFormatClass(DataDrivenImportJob.java:190)
at org.apache.sqoop.mapreduce.ImportJobBase.configureOutputFormat(ImportJobBase.java:94)
at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:259)
at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:673)
at org.apache.sqoop.manager.SQLServerManager.importTable(SQLServerManager.java:163)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:497)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at com.danamon.spark.sqoop.SqoopExample$.importSQLToHDFS(SqoopExample.scala:57)
at com.danamon.spark.sqoop.SqoopExample$.main(SqoopExample.scala:18)
at com.danamon.spark.sqoop.SqoopExample.main(SqoopExample.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: org.kitesdk.data.mapreduce.DatasetKeyOutputFormat
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 22 more
Is this caused by my Cloudera installation not being configured properly? Or maybe HADOOP_HOME, MAPRED_HOME, etc. are not set properly?
Should I create a new question for this?
You need to set HADOOP_MAPRED_HOME to $HADOOP_HOME in your ~/.bashrc:
sudo nano ~/.bashrc
then add this line:
export HADOOP_MAPRED_HOME=$HADOOP_HOME
Save the file, and then run this command:
source ~/.bashrc
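As for the later NoClassDefFoundError: org/kitesdk/data/mapreduce/DatasetKeyOutputFormat: Sqoop 1.4.6's Parquet import is backed by the Kite SDK, so the Kite jars also have to reach the driver's classpath when launching through spark-submit. A sketch of one way to do it (the jar names and paths below are assumptions; match them to the versioned jars shipped in your CDH parcel):
spark-submit \
  --jars /opt/cloudera/parcels/CDH/lib/sqoop/lib/kite-data-core.jar,/opt/cloudera/parcels/CDH/lib/sqoop/lib/kite-data-mapreduce.jar,/opt/cloudera/parcels/CDH/lib/sqoop/lib/kite-hadoop-compat.jar \
  --class com.danamon.spark.sqoop.SqoopExample \
  --deploy-mode client --master yarn \
  kafka-streaming-0.0.1-SNAPSHOT-jar-with-dependencies.jar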

Spark runs in local but can't find file when running in YARN

I've been trying to submit a simple Python script to run in a cluster with YARN. When I execute the job locally there's no problem and everything works fine, but when I run it on the cluster it fails.
I executed the submit with the following command:
spark-submit --master yarn --deploy-mode cluster test.py
The error log I'm receiving is the following:
17/11/07 13:02:48 INFO yarn.Client: Application report for application_1510046813642_0010 (state: ACCEPTED)
17/11/07 13:02:49 INFO yarn.Client: Application report for application_1510046813642_0010 (state: ACCEPTED)
17/11/07 13:02:50 INFO yarn.Client: Application report for application_1510046813642_0010 (state: FAILED)
17/11/07 13:02:50 INFO yarn.Client:
client token: N/A
diagnostics: Application application_1510046813642_0010 failed 2 times due to AM Container for appattempt_1510046813642_0010_000002 exited with exitCode: -1000
For more detailed output, check application tracking page:http://myserver:8088/proxy/application_1510046813642_0010/Then, click on links to logs of each attempt.
Diagnostics: File does not exist: hdfs://myserver:8020/user/josholsan/.sparkStaging/application_1510046813642_0010/test.py
java.io.FileNotFoundException: File does not exist: hdfs://myserver:8020/user/josholsan/.sparkStaging/application_1510046813642_0010/test.py
at org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1266)
at org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1258)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1258)
at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:251)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:61)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:357)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Failing this attempt. Failing the application.
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: root.users.josholsan
start time: 1510056155796
final status: FAILED
tracking URL: http://myserver:8088/cluster/app/application_1510046813642_0010
user: josholsan
Exception in thread "main" org.apache.spark.SparkException: Application application_1510046813642_0010 finished with failed status
at org.apache.spark.deploy.yarn.Client.run(Client.scala:1025)
at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1072)
at org.apache.spark.deploy.yarn.Client.main(Client.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:730)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
17/11/07 13:02:50 INFO util.ShutdownHookManager: Shutdown hook called
17/11/07 13:02:50 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-5cc8bf5e-216b-4d9e-b66d-9dc01a94e851
I paid special attention to this line:
Diagnostics: File does not exist: hdfs://myserver:8020/user/josholsan/.sparkStaging/application_1510046813642_0010/test.py
I don't know why it can't find test.py; I also tried putting it in HDFS under the directory of the user executing the job: /user/josholsan/
To finish my post, I would also like to share my test.py script:
from pyspark import SparkContext
file="/user/josholsan/concepts_copy.csv"
sc = SparkContext("local","Test app")
textFile = sc.textFile(file).cache()
linesWithOMOP=textFile.filter(lambda line: "OMOP" in line).count()
linesWithICD=textFile.filter(lambda line: "ICD" in line).count()
print("Lines with OMOP: %i, lines with ICD9: %i" % (linesWithOMOP,linesWithICD))
Could the error also be in here?
sc = SparkContext("local","Test app")
Thank you so much for your help in advance.
Transferred from the comments section:
sc = SparkContext("local","Test app"): having "local" here will override any command-line settings (see the corrected sketch after this list); from the docs:
Any values specified as flags or in the properties file will be passed on to the application and merged with those specified through SparkConf. Properties set directly on the SparkConf take highest precedence, then flags passed to spark-submit or spark-shell, then options in the spark-defaults.conf file.
The test.py file must be placed somewhere where it is visible throughout the whole cluster. E.g. spark-submit --master yarn --deploy-mode cluster http://somewhere/accessible/to/master/and/workers/test.py
Any additional files and resources can be specified using the --py-files argument (tested in mesos, not in yarn unfortunately), e.g. --py-files http://somewhere/accessible/to/all/extra_python_code_my_code_uses.zip
Edit: as @desertnaut commented, this argument should be used before the script to be executed.
yarn logs -applicationId <app ID> will give you the output of your submitted job.
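Putting point 1 into practice, a minimal corrected sketch of test.py, letting spark-submit's --master flag decide where the job runs:
from pyspark import SparkContext

# No hard-coded "local" master here, so spark-submit --master yarn takes effect.
sc = SparkContext(appName="Test app")
textFile = sc.textFile("/user/josholsan/concepts_copy.csv").cache()
linesWithOMOP = textFile.filter(lambda line: "OMOP" in line).count()
linesWithICD = textFile.filter(lambda line: "ICD" in line).count()
print("Lines with OMOP: %i, lines with ICD9: %i" % (linesWithOMOP, linesWithICD))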
Hope this helps, good luck!

AWS EMR using spark steps in cluster mode. Application application_ finished with failed status

I'm trying to launch a cluster using the AWS CLI. I use the following command:
aws emr create-cluster --name "Config1" --release-label emr-5.0.0 --applications Name=Spark --use-default-role --log-uri 's3://aws-logs-813591802533-us-west-2/elasticmapreduce/' --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m1.medium InstanceGroupType=CORE,InstanceCount=2,InstanceType=m1.medium
The cluster is created successfully. Then I add a step with this command:
aws emr add-steps --cluster-id ID_CLUSTER --region us-west-2 --steps Name=SparkSubmit,Jar="command-runner.jar",Args=[spark-submit,--deploy-mode,cluster,--master,yarn,--executor-memory,1G,--class,Traccia2014,s3://tracceale/params/scalaProgram.jar,s3://tracceale/params/configS3.txt,30,300,2,"s3a://tracceale/Tempi1"],ActionOnFailure=CONTINUE
After some time, the step fails. This is the log file:
17/02/22 11:00:07 INFO RMProxy: Connecting to ResourceManager at ip-172-31-31-190.us-west-2.compute.internal/172.31.31.190:8032
17/02/22 11:00:08 INFO Client: Requesting a new application from cluster with 2 NodeManagers
17/02/22 11:00:08 INFO Client: Verifying our application has not requested
Exception in thread "main" org.apache.spark.SparkException: Application application_1487760984275_0001 finished with failed status
at org.apache.spark.deploy.yarn.Client.run(Client.scala:1132)
at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1175)
at org.apache.spark.deploy.yarn.Client.main(Client.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:729)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
17/02/22 11:01:02 INFO ShutdownHookManager: Shutdown hook called
17/02/22 11:01:02 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-27baeaa9-8b3a-4ae6-97d0-abc1d3762c86
Command exiting with ret '1'
Locally (on the Hortonworks HDP 2.5 Sandbox) I run:
./spark-submit --class Traccia2014 --master local[*] --executor-memory 2G /usr/hdp/current/spark2-client/ScalaProjects/ScripRapportoBatch2.1/target/scala-2.11/traccia-22-ottobre_2.11-1.0.jar "/home/tracce/configHDFS.txt" 30 300 3
and everything works fine.
I've already read something related to my problem, but I can't figure it out.
UPDATE
Looking into the Application Master, I get this error:
17/02/22 15:29:54 ERROR ApplicationMaster: User class threw exception: java.io.FileNotFoundException: s3:/tracceale/params/configS3.txt (No such file or directory)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at scala.io.Source$.fromFile(Source.scala:91)
at scala.io.Source$.fromFile(Source.scala:76)
at scala.io.Source$.fromFile(Source.scala:54)
at Traccia2014$.main(Rapporto.scala:40)
at Traccia2014.main(Rapporto.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:627)
17/02/22 15:29:55 INFO ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: java.io.FileNotFoundException: s3:/tracceale/params/configS3.txt (No such file or directory))
I pass the path mentioned, "s3://tracceale/params/configS3.txt", from S3 to the 'fromFile' function like this:
for(line <- scala.io.Source.fromFile(logFile).getLines())
How could I solve it? Thanks in advance.
Because you are using cluster deploy mode, the logs you have included are not useful at all. They just say that the application failed but not why it failed. To figure out why it failed, you at least need to look at the Application Master logs, since that is where the Spark driver runs in cluster deploy mode, and it will probably give a better hint as to why the application failed.
Since you have configured your cluster with a --log-uri, you will find the logs for the Application Master underneath s3://aws-logs-813591802533-us-west-2/elasticmapreduce/<CLUSTER ID>/containers/<YARN Application ID>/ where the YARN Application ID is (based on the logs you included above) application_1487760984275_0001, and the container ID should be something like container_1487760984275_0001_01_000001. (The first container for an application is the Application Master.)
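For example, you could list that container's logs directly with the AWS CLI; a sketch (substitute your cluster ID; the exact key layout can vary by EMR release):
# Lists the Application Master's container logs for the failed application.
aws s3 ls s3://aws-logs-813591802533-us-west-2/elasticmapreduce/<CLUSTER ID>/containers/application_1487760984275_0001/container_1487760984275_0001_01_000001/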
What you have there is a URL to an object store, reachable from the Hadoop filesystem APIs, and a stack trace coming from java.io.File, which can't read it because it doesn't refer to anything on the local disk. Use an operation such as SparkContext.hadoopRDD() to convert the path into an RDD.
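Concretely, a minimal sketch of the idea using sc.textFile, a convenience wrapper over the same Hadoop input-format machinery, which understands the s3:// URL on EMR (collecting to the driver is fine here because the config file is small):
// Read the config file through Hadoop's filesystem layer instead of java.io.
val configLines = sc.textFile("s3://tracceale/params/configS3.txt").collect()
for (line <- configLines) {
  // process each config line as scala.io.Source.fromFile did before
}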
The file may simply be missing from that location; you might be able to see it after SSHing into the EMR cluster, but the step command still can't find it on its own and throws that file-not-found exception.
In this scenario, what I did is:
Step 1: Checked that the file exists in the project directory we copied to EMR. For example, mine was in `/usr/local/project_folder/`.
Step 2: Copied the script you're expecting to run on the EMR cluster. For example, I copied from `/usr/local/project_folder/script_name.sh` to `/home/hadoop/`.
Step 3: Executed the script from /home/hadoop/ by passing the absolute path to command-runner.jar:
command-runner.jar bash /home/hadoop/script_name.sh
That got my script running. Hope this is helpful to someone.

running spark on yarn as client

I'm trying to run a Spark job on YARN using:
./bin/spark-submit --class "KafkaToMaprfs" --master yarn --deploy-mode client /home/mapr/kafkaToMaprfs/target/scala-2.10/KafkaToMaprfs.jar
But I am facing this error:
/opt/mapr/hadoop/hadoop-2.7.0
17/01/03 11:19:26 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/01/03 11:19:38 ERROR SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:124)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:64)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
at KafkaToMaprfs$.main(KafkaToMaprfs.scala:61)
at KafkaToMaprfs.main(KafkaToMaprfs.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:752)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
17/01/03 11:19:39 WARN MetricsSystem: Stopping a MetricsSystem that is not running
Exception in thread "main" org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:124)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:64)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
at KafkaToMaprfs$.main(KafkaToMaprfs.scala:61)
at KafkaToMaprfs.main(KafkaToMaprfs.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:752)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
I have a multi-node cluster, and I'm deploying the application from a remote node.
I'm using Spark 1.6.1 and Hadoop 2.7.x.
I didn't set up the cluster myself, so I can't find where the mistake lies.
Can anyone please help me fix this?
In my case I'm using the MapR distribution, and I hadn't configured the environment.
So I dug down into all the conf folders and made changes in the files below.
1. In spark-env.sh, make sure these values are set right (a filled-in example follows this list):
export SPARK_LOG_DIR=
export SPARK_PID_DIR=
export HADOOP_HOME=
export HADOOP_CONF_DIR=
export JAVA_HOME=
export SPARK_SUBMIT_OPTIONS=
2. In yarn-env.sh, make sure YARN_CONF_DIR and JAVA_HOME are set with correct values.
3. In spark-defaults.conf, set spark.driver.extraClassPath and the value for HADOOP_CONF_DIR.
4. Set HADOOP_CONF_DIR and JAVA_HOME in $SPARK_HOME/conf/spark-env.sh:
export HADOOP_CONF_DIR=/opt/mapr/hadoop/hadoop-2.7.0/etc/hadoop
export JAVA_HOME=
5. Spark assembly jar: copy the following JAR file from the local file system to a world-readable location on MapR-FS, substituting your Spark version and the specific JAR file name:
/opt/mapr/spark/spark-<version>/lib/spark-assembly-<version>-hadoop-<version>-mapr-<version>.jar
Now I'm able to run my Spark application as YARN-CLIENT smoothly using spark-submit.
These are the basic essentials to make Spark connect with YARN.
Correct me if I missed anything else.
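For reference, a hedged example of what the spark-env.sh template in step 1 might look like once filled in; every path below is an assumption to adapt to your own install:
# Hypothetical values for a MapR 5.x / Spark 1.6.1 layout; adjust to your paths.
export SPARK_LOG_DIR=/opt/mapr/spark/spark-1.6.1/logs
export SPARK_PID_DIR=/opt/mapr/spark/spark-1.6.1/pid
export HADOOP_HOME=/opt/mapr/hadoop/hadoop-2.7.0
export HADOOP_CONF_DIR=/opt/mapr/hadoop/hadoop-2.7.0/etc/hadoop
export JAVA_HOME=/usr/java/jdk1.8.0_102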
