What's the difference between an HDInsight Hadoop cluster and an HDInsight Spark cluster?

What's the difference between an HDInsight Hadoop cluster and an HDInsight Spark cluster? I have seen that pyspark is available even on a Hadoop cluster. Is the difference with respect to the cluster-management layer, i.e. does a Hadoop cluster imply YARN as the cluster management layer while a Spark cluster implies Spark standalone (or Mesos?) as the cluster management layer?
If that is the case, I believe we can still run Spark on a Hadoop cluster, with Spark running on top of YARN.

HDInsight Spark uses YARN as its cluster management layer, just as Hadoop does. The binaries on the cluster are the same.
The differences between HDInsight Spark and Hadoop clusters are the following:
1) Optimal configurations:
A Spark cluster is tuned and configured for Spark workloads. For example, we have pre-configured Spark clusters to use SSDs and to adjust executor memory size based on the machine's resources, so customers get a better out-of-box experience than with the Spark default configuration (a way to inspect these settings is sketched below).
2) Service setups:
Spark clusters also run Spark-related services, including Livy, Jupyter, and the Spark Thrift Server.
3) Workload quality: We test Spark workloads on Spark clusters prior to every release to ensure quality of service.
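For readers who want to see what those tuned defaults actually are on their own cluster, here is a minimal PySpark sketch for dumping the effective configuration. The key prefixes filtered on are illustrative; values not set explicitly fall back to Spark's built-in defaults and will simply be absent from the output.

```python
# Minimal sketch: inspect the effective Spark configuration on a cluster,
# e.g. to compare an HDInsight Spark cluster's tuned defaults with a plain
# Hadoop cluster's. Assumes a working PySpark environment.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("config-check").getOrCreate()

# Only explicitly-set keys appear here; anything unset falls back to
# Spark's built-in defaults and will be missing from the list.
for key, value in sorted(spark.sparkContext.getConf().getAll()):
    if key.startswith(("spark.executor", "spark.driver", "spark.local")):
        print(key, "=", value)
```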

The bits are the same, as you noticed. The difference is the set of services and Ambari components that run by default (on a Spark cluster you get the additional Spark Thrift Server, Livy, and Jupyter) and the set of configurations for those services. So while you technically can run Spark jobs on YARN on a Hadoop cluster, it's not recommended: some configs may not be set to optimal values. The other way around is more reliable: create a Spark cluster and run Hadoop jobs on it.
Maxim (HDInsight Spark PM)
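To illustrate the point that the bits are identical and only the defaults differ, here is a hedged sketch of pointing a PySpark session at YARN explicitly, as you would on either cluster type. The memory and core values are placeholders, not HDInsight's tuned settings.

```python
# Sketch: run Spark on YARN from PySpark. The resource values below are
# illustrative placeholders, not HDInsight's tuned defaults.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("spark-on-yarn-demo")
    .master("yarn")                          # use YARN as the cluster manager
    .config("spark.executor.memory", "4g")   # placeholder; tune to your nodes
    .config("spark.executor.cores", "2")     # placeholder
    .getOrCreate()
)

print(spark.range(1000).count())  # trivial job to confirm executors launch
```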

Related

Spark Standalone vs YARN

What features of YARN make it better than Spark standalone mode for a multi-tenant cluster running only Spark applications? Maybe besides authentication.
There are a lot of answers on Google, and most of them sound wrong to me, so I'm not sure where the truth is.
For example:
DZone, Deep Dive Into Spark Cluster Management
Standalone is good for small Spark clusters, but it is not good for bigger clusters (there is an overhead of running Spark daemons, master + slave, on cluster nodes)
But other cluster managers also require running agents on cluster nodes. YARN's slaves, for example, are called NodeManagers, and they may consume even more memory than Spark's slaves (the Spark default is 1 GB).
This answer
The Spark standalone mode requires each application to run an executor on every node in the cluster; whereas with YARN, you choose the number of executors to use
This goes against Spark Standalone # executor/cores control, which shows how you can specify the amount of resources consumed in standalone mode.
Spark Standalone Mode documentation
The standalone cluster mode currently only supports a simple FIFO scheduler across applications.
This contradicts the fact that standalone mode can use dynamic allocation, where you can specify spark.dynamicAllocation.minExecutors & spark.dynamicAllocation.maxExecutors (sketched below, after this question). I also haven't found any note saying that standalone mode doesn't support the FairScheduler.
This answer
YARN directly handles rack and machine locality
How can YARN know anything about data locality in my job? Suppose I'm storing file locations in AWS Glue (used by EMR as the Hive metastore), and inside the Spark job I'm querying some-db.some-table. How can YARN know which executor is better for the job assignment?
UPD: I found another mention of YARN and data locality: https://jaceklaskowski.gitbooks.io/mastering-apache-spark/spark-data-locality.html. It still doesn't matter in the case of S3, for example.
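For reference, the standalone-mode resource controls the question points to look roughly like this. The master URL and all numeric limits are placeholders, and note that dynamic allocation also requires the external shuffle service to be enabled on the workers.

```python
# Sketch: capped resources plus dynamic allocation in standalone mode.
# "spark://master-host:7077" and all numeric limits are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("standalone-dynamic-allocation")
    .master("spark://master-host:7077")                  # placeholder master URL
    .config("spark.executor.cores", "2")                 # cores per executor
    .config("spark.cores.max", "8")                      # cap total cores per app
    .config("spark.dynamicAllocation.enabled", "true")
    .config("spark.dynamicAllocation.minExecutors", "1")
    .config("spark.dynamicAllocation.maxExecutors", "4")
    .config("spark.shuffle.service.enabled", "true")     # needed for dynamic allocation
    .getOrCreate()
)
```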

Mesos Configuration with existing Apache Spark standalone cluster

I am a beginner with Apache Spark!
I have set up a Spark standalone cluster using 4 PCs.
I want to use Mesos with my existing Spark standalone cluster, but I read that I need to install Mesos first and then configure Spark.
I have also seen the Spark documentation on setting up with Mesos, but it was not helpful for me.
So how do I configure Mesos with an existing Spark standalone cluster?
Mesos is an alternative cluster manager to the standalone Spark manager. You don't use it with the standalone manager; you use it instead of it.
To create a Mesos cluster, follow https://mesos.apache.org/gettingstarted/
Make sure the Mesos native library is available on the machines you use to submit jobs.
For cluster mode, start the Mesos dispatcher (sbin/start-mesos-dispatcher.sh).
Submit the application using the Mesos master URI (client mode) or the dispatcher URI (cluster mode), as in the sketch below.
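As a hedged illustration of the last step, client-mode submission differs from standalone only in the master URI. "mesos-master:5050" is a placeholder for your Mesos master; cluster mode would target the dispatcher URI instead, via spark-submit.

```python
# Sketch: client-mode Spark on Mesos. Only the master URI changes compared
# to standalone; "mesos://mesos-master:5050" is a placeholder.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("spark-on-mesos")
    .master("mesos://mesos-master:5050")  # placeholder Mesos master URI
    .getOrCreate()
)

print(spark.range(100).count())  # smoke test that Mesos offers are accepted
```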

Is it worth deploying Spark on YARN if I have no other cluster software?

I have a Spark cluster running in standalone mode. I am currently executing code from a Jupyter notebook calling pyspark. Is there a benefit to using YARN as the cluster manager, assuming that the machines are not doing anything else?
Would I get better performance using YARN? If so, why?
Many thanks,
John
I'd say YES, considering these points.
Why Run on YARN?
Using YARN as Spark’s cluster manager confers a few benefits over Spark standalone:
1) You can take advantage of all the features of YARN schedulers for categorizing, isolating, and prioritizing workloads (see the queue sketch after this answer).
2) Spark standalone mode also requires workers for slave activity, and those workers cannot run non-Spark applications, whereas with YARN each workload is isolated in containers, so adopting another compute framework becomes a code change instead of an infrastructure + code change. The cluster can therefore be shared among different frameworks.
3) YARN is the only cluster manager for Spark that supports security. With YARN, Spark can run against Kerberized Hadoop clusters and uses secure authentication between its processes.
4) YARN allows you to dynamically share and centrally configure the same pool of cluster resources between all frameworks that run on YARN. You can throw your entire cluster at a MapReduce job, then use some of it on an Impala query and the rest on a Spark application, without any changes in configuration.
I would say points 1, 2, and 3 apply to the scenario you mention, but not point 4, since we assumed no other frameworks are going to use the cluster.
Source
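As a small illustration of point 1, a Spark application can target a specific YARN scheduler queue so workloads are categorized and prioritized. The queue name used here is hypothetical and would have to exist in your YARN scheduler configuration.

```python
# Sketch: submit to a specific YARN scheduler queue for workload isolation.
# The queue name "analytics" is hypothetical; it must be defined in your
# YARN capacity/fair scheduler configuration.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("yarn-queue-demo")
    .master("yarn")
    .config("spark.yarn.queue", "analytics")  # placeholder queue name
    .getOrCreate()
)
```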

Apache Spark and Mesos running on a single node

I am interested in testing Spark running on Mesos. I created a Hadoop 2.6.0 single-node cluster in VirtualBox and installed Spark on it. I can successfully process files in HDFS using Spark.
Then I installed Mesos Master and Slave on the same node. I tried to run Spark as a framework in Mesos using these instructions. I get the following error with Spark:
WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
The Spark shell is successfully registered as a framework in Mesos. Is there anything wrong with using a single-node setup, or do I need to add more Spark worker nodes?
I am very new to Spark and my aim is to just test Spark, HDFS, and Mesos.
If you have allocated enough resources for the Spark slaves, the cause might be a firewall blocking the communication. Take a look at my other answer:
Apache Spark on Mesos: Initial job has not accepted any resources
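One common fix worth sketching: shrink the application's resource requests so that offers from a single node can satisfy them, since oversized defaults are a frequent cause of this warning. The values below are examples, not recommendations.

```python
# Sketch: keep resource requests small enough for a single-node Mesos setup.
# Oversized requests are a common cause of "Initial job has not accepted
# any resources". All values below are illustrative.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("mesos-single-node")
    .master("mesos://localhost:5050")        # single-node Mesos master
    .config("spark.executor.memory", "1g")   # keep below the node's free memory
    .config("spark.cores.max", "2")          # keep below the node's free cores
    .getOrCreate()
)
```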

YARN vs the Spark processing engine for a real-time application?

I understand YARN and Spark, but I want to know when I need to use YARN versus the Spark processing engine. What case studies would help me identify the difference between YARN and Spark?
You cannot compare YARN and Spark directly per se. YARN is a distributed container manager, like Mesos for example, whereas Spark is a data processing tool. Spark can run on YARN, the same way Hadoop MapReduce can run on YARN. It just happens that Hadoop MapReduce is a feature that ships with YARN, while Spark is not.
If you mean comparing Map Reduce and Spark, I suggest reading this other answer.
Apache Spark can run on YARN, on Mesos, or in standalone mode.
Spark in standalone mode means that all resource management and job scheduling are handled by Spark itself.
Spark on YARN: YARN is a resource manager introduced in MRv2 that supports not only native Hadoop but also Spark, Kafka, Elasticsearch, and other custom applications.
Spark on Mesos: Spark also supports Mesos, which is another type of resource manager.
Advantages of Spark on YARN
YARN allows you to dynamically share and centrally configure the same pool of cluster resources between all frameworks that run on YARN.
YARN schedulers can be used for Spark jobs, and only with YARN can Spark run against Kerberized Hadoop clusters, using secure authentication between its processes.
Links for more documentation on YARN and Spark.
We can conclude with this: if you want to build a small and simple cluster independent of everything else, go for standalone mode. If you want to use an existing Hadoop cluster, go for YARN or Mesos.
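To underline that conclusion, here is a minimal sketch showing that the application code stays the same across all three cluster managers and only the master URL changes. The host names are placeholders.

```python
# Sketch: the same application runs on any of the three cluster managers;
# only the master URL differs. Host names below are placeholders.
from pyspark.sql import SparkSession

MASTERS = {
    "standalone": "spark://master-host:7077",
    "yarn": "yarn",
    "mesos": "mesos://mesos-master:5050",
}

spark = (
    SparkSession.builder
    .appName("portable-app")
    .master(MASTERS["yarn"])  # swap the key to switch cluster managers
    .getOrCreate()
)
```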
