Can we run Spark on Mesos using the precompiled hadoop-spark package?

I have a Mesos Cluster on which I want to run Spark jobs.
I have downloaded the precompiled Spark package, and I can use spark-shell simply by decompressing the archive.
So far, I haven't managed to run spark jobs on the Mesos Cluster.
First question: do I need to build Spark from source to get it working on Mesos? And is this precompiled package usable only for Spark on YARN/Hadoop?
Second question: can anyone recommend the best way to build Spark? I have found several ways, such as:
sbt clean assembly
./build/mvn -Pmesos -DskipTests clean package
./build/sbt package
I don't know which one to use, and whether they are all correct or not.
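For what it's worth: the prebuilt Hadoop packages of recent Spark versions generally do ship the Mesos integration, so rebuilding from source is usually unnecessary. A sketch of checking for it and launching against a Mesos master (the master address and paths below are placeholders, not values from the question):

```shell
# Check whether the prebuilt package ships Mesos support
ls "$SPARK_HOME"/jars | grep -i mesos

# If it does, point the shell at the Mesos master directly.
# The ZooKeeper addresses and the executor URI are placeholders.
export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so
"$SPARK_HOME"/bin/spark-shell \
  --master mesos://zk://zk1:2181,zk2:2181/mesos \
  --conf spark.executor.uri=hdfs:///spark/spark-2.3.1-bin-hadoop2.7.tgz
```

If the grep finds no Mesos jar, the documented build command with the -Pmesos profile (listed above) is the way to produce one.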

Related

How to use different Spark version (Spark 2.4) on YARN cluster deployed with Spark 2.1?

I have a Hortonworks yarn cluster with Spark 2.1.
However I want to run my application with spark 2.3+ (because an essential third-party ML library in use needs it).
Do we have to use spark-submit from the Spark 2.1 version or we have to submit job to yarn using Java or Scala with a FAT jar? Is this even possible? What about Hadoop libraries?
On a Hortonworks cluster, running a custom Spark version in YARN client/cluster mode requires the following steps:
1. Download the prebuilt Spark archive with the appropriate Hadoop version.
2. Extract it into a Spark folder, e.g. /home/centos/spark/spark-2.3.1-bin-hadoop2.7/.
3. Copy the jersey-bundle 1.19.1 jar into Spark's jars folder.
4. Create a zip file (spark-jars.zip) containing all the jars in Spark's jars folder.
5. Put this spark-jars.zip file in a world-accessible HDFS location, e.g. hdfs dfs -put spark-jars.zip /user/centos/data/spark/
6. Get the HDP version (hdp-select status hadoop-client); example output: hadoop-client - 3.0.1.0-187. Use this HDP version in the export commands below:
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-/usr/hdp/3.0.1.0-187/hadoop/conf}
export HADOOP_HOME=${HADOOP_HOME:-/usr/hdp/3.0.1.0-187/hadoop}
export SPARK_HOME=/home/centos/spark/spark-2.3.1-bin-hadoop2.7/
7. Edit the spark-defaults.conf file in $SPARK_HOME/conf and add the following entries:
spark.driver.extraJavaOptions -Dhdp.version=3.0.1.0-187
spark.yarn.am.extraJavaOptions -Dhdp.version=3.0.1.0-187
8. Create a java-opts file in $SPARK_HOME/conf containing the same HDP version:
-Dhdp.version=3.0.1.0-187
9. Export the native library path:
export LD_LIBRARY_PATH=/usr/hdp/3.0.1.0-187/hadoop/lib/native:/usr/hdp/3.0.1.0-187/hadoop/lib/native/Linux-amd64-64
10. Launch against YARN, pointing spark.yarn.archive at the uploaded zip:
spark-shell --master yarn --deploy-mode client --conf spark.yarn.archive=hdfs:///user/centos/data/spark/spark-jars.zip
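The zip-and-upload portion of the steps above can be sketched as shell commands (paths are the example values used throughout this answer):

```shell
# Bundle all of Spark's jars; they must sit at the root of the zip
# for spark.yarn.archive to pick them up
cd /home/centos/spark/spark-2.3.1-bin-hadoop2.7/jars
zip -q /tmp/spark-jars.zip ./*.jar

# Publish the archive to a world-readable HDFS location
hdfs dfs -put /tmp/spark-jars.zip /user/centos/data/spark/
```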
I assume you use sbt as the build tool in your project. The project itself could use Java or Scala. I also think the answer would be much the same if you used Gradle or Maven; only the plugins would differ. The idea is the same.
You have to use an assembly plugin (e.g. sbt-assembly) that bundles all non-Provided dependencies, including Apache Spark, into a so-called fat jar or uber jar.
If the custom Apache Spark version is part of the application jar, that version will be used no matter which spark-submit you use for deployment. The trick is to arrange the classloading so that the jars and classes of your choice are loaded rather than spark-submit's (and hence whatever is installed on the cluster).
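A minimal build.sbt sketch of this approach; all names and versions are illustrative, not taken from the question:

```scala
// project/plugins.sbt would need the sbt-assembly plugin, e.g.:
// addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.10")

name := "my-spark-app"
scalaVersion := "2.11.12"

// Deliberately NOT marked "% Provided": Spark 2.3.x itself goes into
// the uber-jar so it wins over the cluster's older Spark
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.3.1"

// Spark's transitive jars overlap, so a merge strategy is required
assemblyMergeStrategy in assembly := {
  case PathList("META-INF", "services", xs @ _*) => MergeStrategy.concat
  case PathList("META-INF", xs @ _*)             => MergeStrategy.discard
  case _                                         => MergeStrategy.first
}
```

Running sbt assembly then produces a single jar containing both your application and the bundled Spark version.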

Compiling Spark program: no 'lib' directory

I am going through the tutorial:
https://www.tutorialspoint.com/apache_spark/apache_spark_deployment.htm
When I got to the Step 2: Compile program section, I got stuck because there is no lib folder in the Spark directory.
Where is the lib folder? How can I compile the program?
I looked in the jars folder, but there is no file named spark-assembly-1.4.0-hadoop2.6.0.jar.
I am sorry I am not answering your question directly, but I want to guide you toward a more convenient development process for Spark applications.
When you are developing a Spark application on your local computer, you should use sbt (the Scala build tool). Once you are done writing code, compile it with sbt (by running sbt assembly). sbt will produce a 'fat jar' archive that already contains all the required dependencies for the job. Then upload the jar to the Spark cluster (for example, using the spark-submit script).
There is no reason to install sbt on your cluster, because it is needed only for compilation.
You should check out the starter project that I created for you.
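The workflow described above, sketched as shell commands (the class name, master URL, and jar names are placeholders):

```shell
# Compile and bundle locally; requires the sbt-assembly plugin
# declared in project/plugins.sbt
sbt assembly

# Submit the resulting fat jar to the cluster
spark-submit \
  --class com.example.MyJob \
  --master spark://master-host:7077 \
  target/scala-2.11/my-job-assembly-0.1.jar
```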

The conflict between running spark-sql and Hive on Spark

According to the official documentation, if you want to run Hive on the Spark engine, you need to rebuild Spark without Hive. But when I run spark-sql in the same environment, I get an error saying that I must rebuild Spark with -Phive.
As a workaround, I downloaded spark-hive-thriftserver_2.11-2.2.1.jar and spark-hive_2.11-2.0.0.jar from the Maven repository and put them in ${SPARK_HOME}/jars/, but this approach does not seem to work.
Thanks.

How to run edited Spark MLlib code in a Cloudera-managed cluster

I have a 3-node cluster managed by Cloudera Manager. I have edited the BlockMatrix.scala file in the Spark MLlib source code and packaged it using the
mvn -DskipTests package
command. This created a new jar file. Now I want to run this newly created MLlib jar on my cluster. What is the way to do that?
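One way to try this (a sketch; the class name, paths, and version are placeholders) is to ship the rebuilt MLlib jar with the job and ask Spark to prefer it over the cluster's built-in copy via the userClassPathFirst settings:

```shell
# Ship the rebuilt MLlib jar and load it ahead of the cluster's copy.
# All paths, the class name, and the version are placeholders.
spark-submit \
  --class com.example.MyBlockMatrixApp \
  --master yarn \
  --deploy-mode cluster \
  --jars /path/to/spark-mllib_2.11-2.x.x.jar \
  --conf spark.driver.userClassPathFirst=true \
  --conf spark.executor.userClassPathFirst=true \
  my-app.jar
```

Note that userClassPathFirst is marked experimental in the Spark documentation and can surface classpath conflicts of its own.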

Argument list too long when running Spark on YARN

I am trying to migrate our application to Spark running on YARN. I use a command line such as spark-submit --master yarn --deploy-mode cluster --jars ${my_jars} ...
But YARN throws exceptions with the following log:
Container id: container_1462875359170_0171_01_000002
Exit code: 1
Exception message: .../launch_container.sh: line 4145: /bin/bash: Argument list too long
I think the reason may be that we have too many jars (684 jars, separated by commas) specified by the option --jars ${my_jars}. My question is: what is the graceful way to specify all our jars? Or how can we avoid this YARN error?
Check if you can use spark.driver.extraClassPath and spark.executor.extraClassPath instead (see the Spark documentation):
spark.driver.extraClassPath /fullpath/firs.jar:/fullpath/second.jar
spark.executor.extraClassPath /fullpath/firs.jar:/fullpath/second.jar
Just found this thread: spark-submit-add-multiple-jars-in-classpath.
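With hundreds of jars, building that colon-separated list by hand is impractical. A small helper can generate it from a directory (the directory path in the usage sketch is a placeholder, and the jars must exist at the same path on every node):

```shell
# Join every jar in a directory into one colon-separated classpath string,
# suitable for spark.driver.extraClassPath / spark.executor.extraClassPath
jars_to_classpath() {
  ls "$1"/*.jar 2>/dev/null | paste -sd: -
}

# Usage sketch (placeholder paths):
#   spark-submit --master yarn --deploy-mode cluster \
#     --conf spark.driver.extraClassPath="$(jars_to_classpath /path/to/deps)" \
#     --conf spark.executor.extraClassPath="$(jars_to_classpath /path/to/deps)" \
#     my-app.jar
```

Unlike --jars, extraClassPath does not upload anything; it only prepends local paths to the classpath, which is why it sidesteps the over-long launcher command line.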
I'd try these two things:
Build a fat jar for the spark-submit application, or
Build a thin jar with Maven and install the unavailable jars into the Maven repo on the cluster, so they are available to load at runtime.
Try sbt-assembly, which packages all your classes and dependency classes into an uber jar.
It is very easy and comfortable to use, but you have to take care of two things:
version conflicts
the jar will be somewhat large
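On the jar-size caveat: a common mitigation is to mark Spark itself as Provided, since the cluster already supplies it at runtime. A build.sbt fragment sketch (the Spark version and the example bundled dependency are illustrative):

```scala
libraryDependencies ++= Seq(
  // Supplied by the cluster at runtime, so excluded from the uber jar
  "org.apache.spark" %% "spark-core" % "2.2.1" % Provided,
  "org.apache.spark" %% "spark-sql"  % "2.2.1" % Provided,
  // Application-only dependencies still get bundled
  "com.typesafe"      % "config"     % "1.3.3"
)
```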
