I am trying to set up SPARK2 on my Cloudera cluster. For that, I have JDK 1.8.
I have installed Scala 2.11.8 using the RPM file.
I have downloaded and extracted Spark 2.2.0 into my home directory: /home/cloudera.
I made changes to the PATH variable in .bashrc as below:
But when I try to execute spark-shell from the home directory /home/cloudera, it says "no such file or directory", as can be seen below:
[cloudera@quickstart ~]$ spark-shell
/home/cloudera/spark/bin/spark-class: line 71: /usr/java/jdk1.7.0_67-cloudera/bin/java: No such file or directory
[cloudera@quickstart ~]$
Could anyone let me know how I can fix the problem and configure it properly?
Java/JVM applications (and spark-shell in particular) use the java binary to launch themselves. Therefore they need to know where it is located, which is usually done via the JAVA_HOME environment variable.
In your case it is not set explicitly, so the value from Cloudera's default Java distribution is used (even though it points to a location that no longer exists).
You need to set JAVA_HOME to point to the correct Java distribution directory for the user under which you want to launch spark-shell and other applications.
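For example, a minimal sketch of what that could look like in the cloudera user's ~/.bashrc (the JDK path below is an assumption; point it at wherever your JDK 1.8 is actually installed):
# assumed JDK 1.8 location, adjust to your actual install
export JAVA_HOME=/usr/java/jdk1.8.0_131
export PATH=$JAVA_HOME/bin:$PATH
After running source ~/.bashrc, java -version should report 1.8 and spark-shell should stop looking for the old JDK 1.7 path.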
I tried installing Apache Spark on my 64-bit Windows 7 machine.
I used the guides -
Installing Spark on Windows 10
How to run Apache Spark on Windows 7
Installing Apache Spark on Windows 7 environment
This is what I did -
Install Scala
Set environment variable SCALA_HOME and add %SCALA_HOME%\bin to Path
Result: scala command works on command prompt
Unpack pre-built Spark
Set environment variable SPARK_HOME and add %SPARK_HOME%\bin to Path
Download winutils.exe
Place winutils.exe under C:/hadoop/bin
Set environment variable HADOOP_HOME and add %HADOOP_HOME%\bin to Path
I already have JDK 8 installed.
Now, the problem is, when I run spark-shell from C:/spark-2.1.1-bin-hadoop2.7/bin, I get this -
"C:\Program Files\Java\jdk1.8.0_131\bin\java" -cp "C:\spark-2.1.1-bin-hadoop2.7\conf\;C:\spark-2.1.1-bin-hadoop2.7\jars\*" "-Dscala.usejavacp=true" -Xmx1g org spark.repl.Main --name "Spark shell" spark-shell
Is it an error? Am I doing something wrong?
Thanks!
I had the same issue when trying to install Spark locally on Windows 7. Please make sure the paths below are correct and I am sure it will work for you.
Create the JAVA_HOME variable: C:\Program Files\Java\jdk1.8.0_181 (the JDK root, not its bin subfolder, since \bin is appended in the next step)
Add the following part to your path: ;%JAVA_HOME%\bin
Create the SPARK_HOME variable: C:\spark-2.3.0-bin-hadoop2.7 (again the root folder, not its bin subfolder)
Add the following part to your path: ;%SPARK_HOME%\bin
The most important part: the Hadoop path should be the bin folder that holds winutils.exe, as follows: C:\Hadoop\bin. Make sure winutils.exe is located inside this path.
Create HADOOP_HOME Variable: C:\Hadoop
Add the following part to your path: ;%HADOOP_HOME%\bin
Now you can open cmd, run spark-shell, and it will work.
I am a newbie in the Hadoop and the Spark domain. For a tutorial, I want to add some data to Hadoop and query it in Spark. So, I installed Hadoop standalone by following this and I downloaded the Spark version that does not include Hadoop. But I got an error like this. I tried setting the classpath to the Hadoop folder that I installed. The classpath was like this:
SPARK_DIST_CLASSPATH=%HADOOP_HOME%\share\hadoop\tools\lib\*
Apart from this, I traced the Spark sources and found the reference to the SPARK_DIST_CLASSPATH environment variable there. I still got the error and, eventually, I installed the Spark build that includes Hadoop. I am curious whether there are other constraints I am missing.
There are no real differences between standalone Hadoop and the Hadoop bundled with Spark. To use Spark, you need the Hadoop API, at least for I/O.
The error you are reporting is:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/fs/FSDataInputStream
It usually means you are not setting the path correctly.
The missing class is located in a jar, something like hadoop-common-{version}.jar.
I see, on my computer, that this class is located in:
${HADOOP_HOME}/share/hadoop/common/hadoop-common-2.7.3.jar
Please check that all the Hadoop environment variables are set correctly. The most important is HADOOP_HOME; as you can see below, on Linux (and most likely on Windows) all the other variables depend on it.
When you use the startup scripts, they set even more environment variables that depend on HADOOP_HOME:
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export HADOOP_ROOT_LOGGER=INFO,console
export HADOOP_SECURITY_LOGGER=INFO,NullAppender
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_PREFIX=$HADOOP_HOME
export HADOOP_LIBEXEC_DIR=$HADOOP_HOME/libexec
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native:$JAVA_LIBRARY_PATH
export HADOOP_YARN_HOME=$HADOOP_HOME
To find out what the Hadoop classpath is, run:
$ hadoop classpath
/opt/hadoop/etc/hadoop:/opt/hadoop/share/hadoop/common/lib/*:/opt/hadoop/share/hadoop/common/*:/opt/hadoop/share/hadoop/hdfs:/opt/hadoop/share/hadoop/hdfs/lib/*:/opt/hadoop/share/hadoop/hdfs/*:/opt/hadoop/share/hadoop/yarn/lib/*:/opt/hadoop/share/hadoop/yarn/*:/opt/hadoop/share/hadoop/mapreduce/lib/*:/opt/hadoop/share/hadoop/mapreduce/*:/opt/hadoop/contrib/capacity-scheduler/*.jar
and SPARK_DIST_CLASSPATH has to be set to this value (see https://spark.apache.org/docs/latest/hadoop-provided.html). Setting the value by hand is not a good idea.
Most likely you are using the wrong paths.
Be sure everything on Hadoop is working before firing up Spark.
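For example, a quick sanity check before launching spark-shell (just a sketch, assuming the hadoop binary is already on your PATH):
# verify Hadoop itself runs and can print its classpath
hadoop version
hadoop classpath
# in conf/spark-env.sh, derive SPARK_DIST_CLASSPATH from Hadoop instead of hard-coding it
export SPARK_DIST_CLASSPATH=$(hadoop classpath)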
I am installing Apache Spark on Linux. I already have Java, Scala and Spark downloaded, and they are all in the Downloads folder inside the home folder with the path /home/alex/Downloads/X where X=scala, java, spark; literally, that's what the folders are called.
I got Scala to work, but when I try to run Spark by typing ./bin/spark-shell it says:
/home/alex/Downloads/spark/bin/saprk-class: line 100: /usr/bin/java/bin/java: Not a directory
I have already included the file path by editing the bashrc with sudo gedit ~/.bashrc:
# JAVA
export JAVA_HOME=/home/alex/Downloads/java
export PATH=$PATH:$JAVA_HOME/bin
# scala
export SCALA_HOME=/home/alex/Downloads/scala
export PATH=$PATH:$SCALA_HOME/bin
# spark
export SPARK_HOME=/home/alex/Downloads/spark
export PATH=$PATH:$SPARK_HOME/bin
When I try to type sbt/sbt package in the spark folder it says no such file or directory found. What should I do from here?
It seems you have a few issues: your JAVA_HOME does not point to a directory with Java, and when running sbt in Spark you should run ./sbt/sbt (or in newer versions ./build/sbt). While you can download Java and Scala by hand, you may find that your system packages are sufficient (make sure to get JDK 7 or later).
Furthermore, after installing the system packages as Holden points out, on Linux you may use the whereis command to make sure of the right paths.
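For example, just a sketch of the idea (the paths it prints will differ on your machine):
# show where the system thinks java lives
whereis java
# resolve symlinks to the real JDK; JAVA_HOME should be the directory above bin/java
readlink -f "$(which java)"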
Finally, the following link may prove useful:
http://www.tutorialspoint.com/apache_spark/apache_spark_installation.htm
Hope this helps.
Note: it looks like there may be a configuration issue, a misspelling, in the directory name:
/home/alex/Downloads/spark/bin/saprk-class: line 100: /usr/bin/java/bin/java: Not a directory
saprk-class
That could be a configuration issue only, but it's worth checking whether it is called spark-class elsewhere to see if the misspelling is causing related issues.
I've downloaded the prebuilt version of Spark 1.4.0 without Hadoop (with user-provided Hadoop). When I ran the spark-shell command, I got this error:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/fs/FSDataInputStream
        at org.apache.spark.deploy.SparkSubmitArguments$$anonfun$mergeDefaultSparkProperties$1.apply(SparkSubmitArguments.scala:111)
        at org.apache.spark.deploy.SparkSubmitArguments$$anonfun$mergeDefaultSparkProperties$1.apply(SparkSubmitArguments.scala:111)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.deploy.SparkSubmitArguments.mergeDefaultSparkProperties(SparkSubmitArguments.scala:111)
        at org.apache.spark.deploy.SparkSubmitArguments.<init>(SparkSubmitArguments.scala:97)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:106)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.FSDataInputStream
        at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
        ... 7 more
I've searched on the Internet; it is said that HADOOP_HOME has not been set in spark-env.cmd. But I cannot find spark-env.cmd in the Spark installation folder.
I've traced the spark-shell command and it seems that there is no HADOOP_CONFIG in there. I've tried adding HADOOP_HOME as an environment variable, but it still gives the same exception.
Actually, I'm not really using Hadoop. I downloaded Hadoop as a workaround, as suggested in this question.
I am using Windows 8 and Scala 2.10.
Any help will be appreciated. Thanks.
The "without Hadoop" in the Spark's build name is misleading: it means the build is not tied to a specific Hadoop distribution, not that it is meant to run without it: the user should indicate where to find Hadoop (see https://spark.apache.org/docs/latest/hadoop-provided.html)
One clean way to fix this issue is to:
Obtain Hadoop Windows binaries. Ideally build them, but this is painful (for some hints see: Hadoop on Windows Building/ Installation Error). Otherwise Google some up, for instance currently you can download 2.6.0 from here: http://www.barik.net/archive/2015/01/19/172716/
Create a spark-env.cmd file looking like this (modify Hadoop path to match your installation):
@echo off
set HADOOP_HOME=D:\Utils\hadoop-2.7.1
set PATH=%HADOOP_HOME%\bin;%PATH%
set SPARK_DIST_CLASSPATH=<paste here the output of %HADOOP_HOME%\bin\hadoop classpath>
Put this spark-env.cmd either in a conf folder located at the same level as your Spark base folder (which may look weird), or in a folder indicated by the SPARK_CONF_DIR environment variable.
I had the same problem, in fact it's mentioned on the Getting started page of Spark how to handle it:
### in conf/spark-env.sh ###
# If 'hadoop' binary is on your PATH
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
# With explicit path to 'hadoop' binary
export SPARK_DIST_CLASSPATH=$(/path/to/hadoop/bin/hadoop classpath)
# Passing a Hadoop configuration directory
export SPARK_DIST_CLASSPATH=$(hadoop --config /path/to/configs classpath)
If you want to use your own Hadoop, follow one of the 3 options above and copy the corresponding line into your spark-env.sh file:
1- if you have the hadoop binary on your PATH
2- if you want to point to the hadoop binary explicitly
3- if you want to point to a Hadoop configuration folder
http://spark.apache.org/docs/latest/hadoop-provided.html
I too had the issue,
export SPARK_DIST_CLASSPATH=`hadoop classpath`
resolved the issue.
I ran into the same error when trying to get familiar with Spark. My understanding of the error message is that while Spark doesn't need a Hadoop cluster to run, it does need some of the Hadoop classes. Since I was just playing around with Spark and didn't care which version of the Hadoop libraries was used, I just downloaded a Spark binary pre-built with a version of Hadoop (2.6) and things started working fine.
Linux:
export SPARK_DIST_CLASSPATH="$HADOOP_HOME/etc/hadoop/*:$HADOOP_HOME/share/hadoop/common/lib/*:$HADOOP_HOME/share/hadoop/common/*:$HADOOP_HOME/share/hadoop/hdfs/*:$HADOOP_HOME/share/hadoop/hdfs/lib/*:$HADOOP_HOME/share/hadoop/yarn/lib/*:$HADOOP_HOME/share/hadoop/yarn/*:$HADOOP_HOME/share/hadoop/mapreduce/lib/*:$HADOOP_HOME/share/hadoop/mapreduce/*:$HADOOP_HOME/share/hadoop/tools/lib/*"
Windows:
set SPARK_DIST_CLASSPATH=%HADOOP_HOME%\etc\hadoop\*;%HADOOP_HOME%\share\hadoop\common\lib\*;%HADOOP_HOME%\share\hadoop\common\*;%HADOOP_HOME%\share\hadoop\hdfs\*;%HADOOP_HOME%\share\hadoop\hdfs\lib\*;%HADOOP_HOME%\share\hadoop\yarn\lib\*;%HADOOP_HOME%\share\hadoop\yarn\*;%HADOOP_HOME%\share\hadoop\mapreduce\lib\*;%HADOOP_HOME%\share\hadoop\mapreduce\*;%HADOOP_HOME%\share\hadoop\tools\lib\*
Go into SPARK_HOME -> conf.
Copy the spark-env.sh.template file and rename it to spark-env.sh.
Inside this file you can set the parameters for Spark.
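For example, a minimal sketch of those steps on Linux (assuming the hadoop binary is on your PATH and SPARK_HOME points at your Spark folder):
cd $SPARK_HOME/conf
cp spark-env.sh.template spark-env.sh
# e.g. have Spark pick up the Hadoop jars dynamically
echo 'export SPARK_DIST_CLASSPATH=$(hadoop classpath)' >> spark-env.sh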
Run the command below from your package directory just before running spark-submit:
export SPARK_DIST_CLASSPATH=`hadoop classpath`
I finally found a solution that removes the exception.
In spark-class2.cmd, add:
set HADOOP_CLASS1=%HADOOP_HOME%\share\hadoop\common\*
set HADOOP_CLASS2=%HADOOP_HOME%\share\hadoop\common\lib\*
set HADOOP_CLASS3=%HADOOP_HOME%\share\hadoop\mapreduce\*
set HADOOP_CLASS4=%HADOOP_HOME%\share\hadoop\mapreduce\lib\*
set HADOOP_CLASS5=%HADOOP_HOME%\share\hadoop\yarn\*
set HADOOP_CLASS6=%HADOOP_HOME%\share\hadoop\yarn\lib\*
set HADOOP_CLASS7=%HADOOP_HOME%\share\hadoop\hdfs\*
set HADOOP_CLASS8=%HADOOP_HOME%\share\hadoop\hdfs\lib\*
set CLASSPATH=%HADOOP_CLASS1%;%HADOOP_CLASS2%;%HADOOP_CLASS3%;%HADOOP_CLASS4%;%HADOOP_CLASS5%;%HADOOP_CLASS6%;%HADOOP_CLASS7%;%HADOOP_CLASS8%;%LAUNCH_CLASSPATH%
Then, change :
"%RUNNER%" -cp %CLASSPATH%;%LAUNCH_CLASSPATH% org.apache.spark.launcher.Main %* > %LAUNCHER_OUTPUT%
to :
"%RUNNER%" -Dhadoop.home.dir=*hadoop-installation-folder* -cp %CLASSPATH% %JAVA_OPTS% %*
It works fine for me, but I'm not sure this is the best solution.
You should add these jars to your code:
commons-cli-1.2.jar
hadoop-common-2.7.2.jar
Thank you so much. That worked great, but I had to add the spark jars to the classpath as well:
;c:\spark\lib\*
Also, the last line of the cmd file is missing the word "echo"; so it should say:
echo %SPARK_CMD%
I had the same issue ...
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/fs/FSDataInputStream
        at org.apache.spark.deploy.SparkSubmitArguments$$anonfun$mergeDefaultSparkProperties$1.apply(SparkSubmitArguments.scala:111)...
Then I realized that I had installed the Spark version without Hadoop. Once I installed the "with Hadoop" version, the problem went away.
In my case:
Running a Spark job locally differs from running it on a cluster. On a cluster you might have a different dependency/context to follow, so in your pom.xml you might have dependencies declared with the provided scope.
When running locally, you don't need those dependencies to be marked as provided; remove (or comment out) the provided scope and rebuild.
I encountered the same error. I wanted to install Spark on my Windows PC and therefore downloaded the "without Hadoop" version of Spark, but it turns out you need the Hadoop libraries! So download a Spark version that is pre-built with Hadoop and set the environment variables.
I got this error because the file was copied from Windows.
Resolve it using
dos2unix file_name
I think you need the spark-core Maven dependency. It worked fine for me.
I used:
export SPARK_HOME=/opt/cloudera/parcels/SPARK2/lib/spark2
export HADOOP_MAPRED_HOME=/opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce
It works for me!
I added hadoop-client-runtime-3.3.2.jar to my user library.