We are planning to deploy all Spark batch and streaming jobs on Kubernetes (as cluster services). I want to know whether we can deploy all Spark jobs using Spark Operator manifest files on our production Kubernetes cluster.
You can take a look at https://cloud.google.com/dataproc/docs/concepts/jobs/dataproc-gke, which describes running a Dataproc cluster on GKE. It is a managed service from Google Cloud.
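If you go the Spark Operator route instead, here is a minimal sketch of what one job submission could look like, building the operator's SparkApplication custom resource as a dict and applying it with the official kubernetes Python client. The namespace, image, main class, and jar path are placeholder assumptions, not values from the question.

```python
# Minimal sketch: submit one SparkApplication to the Kubernetes Spark Operator.
# Namespace, image, main class, and jar path below are placeholders.
from kubernetes import client, config

spark_app = {
    "apiVersion": "sparkoperator.k8s.io/v1beta2",
    "kind": "SparkApplication",
    "metadata": {"name": "daily-report", "namespace": "spark-jobs"},
    "spec": {
        "type": "Scala",
        "mode": "cluster",
        "image": "my-registry/spark:3.5.0",                     # placeholder image
        "mainClass": "com.example.DailyReport",                 # placeholder class
        "mainApplicationFile": "local:///opt/spark/jars/daily-report.jar",
        "sparkVersion": "3.5.0",
        "restartPolicy": {"type": "Never"},
        "driver": {"cores": 1, "memory": "1g", "serviceAccount": "spark"},
        "executor": {"cores": 1, "instances": 2, "memory": "2g"},
    },
}

config.load_kube_config()                                        # or load_incluster_config() inside the cluster
client.CustomObjectsApi().create_namespaced_custom_object(
    group="sparkoperator.k8s.io",
    version="v1beta2",
    namespace="spark-jobs",
    plural="sparkapplications",
    body=spark_app,
)
```

In practice each batch or streaming job would get its own manifest like this, and the operator takes care of creating the driver and executor pods.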
Related
I want to submit a Spark job to an Azure HDInsight cluster from Airflow. I don't want to use Livy, because it doesn't accumulate logs in Airflow. Is it possible to submit the job remotely? SSH is one option, but if the job is long-running it might break the connection. Is there any other option?
Note - the Airflow cluster is remote; it is not colocated with the Spark cluster.
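One possible workaround for the logging concern (a sketch under assumptions, not a definitive answer): still submit through Livy's REST API, but have the Airflow task itself poll the batch log endpoint so the driver log lines do end up in the Airflow task log. The Livy endpoint, jar path, and class name below are placeholders.

```python
# Sketch: submit a batch to Livy and tail its log from the calling (Airflow) task.
# Endpoint, jar path, and class name are placeholders for your environment.
import time
import requests

LIVY = "http://<hdinsight-livy-endpoint>:8998"   # placeholder endpoint

def submit_and_tail():
    batch = requests.post(
        f"{LIVY}/batches",
        json={"file": "wasbs:///jars/my-job.jar", "className": "com.example.MyJob"},
    ).json()
    batch_id, offset = batch["id"], 0
    while True:
        state = requests.get(f"{LIVY}/batches/{batch_id}/state").json()["state"]
        log = requests.get(
            f"{LIVY}/batches/{batch_id}/log", params={"from": offset, "size": 100}
        ).json()
        for line in log["log"]:
            print(line)                          # printed lines land in the Airflow task log
        offset = log["from"] + len(log["log"])
        if state in ("success", "dead", "killed"):
            return state
        time.sleep(30)
```

Wrapped in a PythonOperator, this avoids holding an SSH connection open for the lifetime of a long-running job.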
I have created a small application that submits a Spark job at certain intervals and creates some analytical reports. These jobs can read data from a local filesystem or a distributed filesystem (the fs could be HDFS, ADLS, or WASB). Can I run this application on an Azure Databricks cluster?
The application works fine on an HDInsight cluster because I was able to access the nodes: I kept my deployable jar in one location and started it with a start script; similarly, I could stop it using a stop script that I prepared.
One thing I found is that Azure Databricks has its own file system, ADFS. I can add support for this file system, but will I then be able to deploy and run my application as I did on the HDInsight cluster? If not, is there a way to submit jobs to an Azure Databricks cluster from an edge node, my HDInsight cluster, or any other on-prem cluster?
Have you looked at Jobs? https://docs.databricks.com/user-guide/jobs.html. You can submit jars to be run via spark-submit just as on HDInsight.
The Databricks file system is DBFS; ABFS is used for Azure Data Lake. You should not need to modify your application for these, as the file paths are handled by Databricks.
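To make the Jobs route concrete, here is a hedged sketch of a one-off run of an existing jar via the Databricks Runs Submit REST API, which plays the role spark-submit played on HDInsight. The workspace URL, token, jar path, node type, and runtime version are placeholders.

```python
# Sketch: run an existing jar on Azure Databricks as a one-off job run.
# Workspace URL, token, jar path, node type, and runtime version are placeholders.
import requests

HOST = "https://<your-workspace>.azuredatabricks.net"    # placeholder workspace URL
TOKEN = "<personal-access-token>"                         # placeholder token

payload = {
    "run_name": "analytical-report",
    "new_cluster": {
        "spark_version": "13.3.x-scala2.12",              # example runtime label
        "node_type_id": "Standard_DS3_v2",                # example Azure VM type
        "num_workers": 2,
    },
    "libraries": [{"jar": "dbfs:/jars/report-app.jar"}],  # placeholder jar uploaded to DBFS
    "spark_jar_task": {"main_class_name": "com.example.ReportJob"},
}

resp = requests.post(
    f"{HOST}/api/2.0/jobs/runs/submit",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
)
print(resp.json())   # returns a run_id you can poll with /api/2.0/jobs/runs/get
```

Because this is just an HTTP call, it can be made from an edge node, an HDInsight cluster, or any on-prem machine with network access to the workspace.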
Has anyone tried using Azure Databricks as the Spark cluster for CDAP job processing? The CDAP documentation details how to add it to Azure HDInsight, but is there a way to configure CDAP to point to a Databricks Spark cluster, and is it even possible? Or does this kind of integration need a specific Databricks client connector jar? If anyone has any insights, that would be helpful.
There is no out-of-the-box support for Databricks Spark on Azure. That said, you can develop a new cloud runtime that is capable of submitting jobs to a Databricks Spark cluster. There are examples of how to write a runtime extension for Cloud Dataproc and EMR that can serve as a reference.
Is it possible to deploy Spark code in the Azure cloud without the YARN component? Thanks in advance.
Yes, you can deploy an Apache Spark cluster in Azure HDInsight without YARN.
Spark clusters in HDInsight include the following components that are available on the clusters by default.
1) Spark Core. Includes Spark Core, Spark SQL, Spark Streaming APIs, GraphX, and MLlib.
2) Anaconda
3) Livy
4) Jupyter Notebook
5) Zeppelin Notebook
Spark clusters on HDInsight also provide an ODBC driver for connectivity from BI tools such as Microsoft Power BI and Tableau.
Refer to the following sites for more information:
Create an Apache Spark cluster in Azure HDInsight
Introduction to Spark on HDInsight
I don't think it is possible to deploy an HDInsight cluster without YARN. Refer to the HDInsight documentation:
https://learn.microsoft.com/en-sg/azure/hdinsight/hdinsight-hadoop-introduction
https://learn.microsoft.com/en-sg/azure/hdinsight/hdinsight-component-versioning
YARN is the resource manager for Hadoop. Is there any particular reason you would not want to use YARN while working with an HDInsight Spark cluster?
If you want to use standalone mode, you can change the master URL when submitting the job with the spark-submit command (see the sketch after the links below).
I have some examples in my repo of spark-submit both in local mode and on an HDInsight cluster:
https://github.com/NileshGule/learning-spark
You can refer to
local mode: https://github.com/NileshGule/learning-spark/blob/master/src/main/java/com/nileshgule/movielens/MovieLens.md
HDInsight Spark cluster: https://github.com/NileshGule/learning-spark/blob/master/Azure.md
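To illustrate the point about the master URL, here is a minimal PySpark sketch: the same application can target local mode, a standalone master, or YARN just by changing the master setting, which is what spark-submit's --master flag controls on the command line. The standalone host name is a placeholder.

```python
# Sketch: the same PySpark application pointed at different masters.
# The standalone master host below is a placeholder.
from pyspark.sql import SparkSession

master = "local[*]"                              # local mode
# master = "spark://<standalone-master>:7077"    # standalone cluster (placeholder host)
# master = "yarn"                                # YARN, e.g. on an HDInsight cluster

spark = (
    SparkSession.builder
    .appName("movielens-example")
    .master(master)
    .getOrCreate()
)

spark.range(10).show()   # trivial job to confirm the session is working
spark.stop()
```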
I am new to AWS EC2 and need to know how I can submit my Spark job to a Spark cluster running on AWS EC2. In Azure we can submit the job directly from IntelliJ IDEA with the Azure plugin.
You can submit a Spark job easily with the spark-submit command. Refer to http://spark.apache.org/docs/latest/submitting-applications.html
Options:
1) Log in to the master or another driver gateway node and use spark-submit to submit the job through YARN, Mesos, etc.
2) Use spark-submit in cluster deploy mode from any machine with sufficient ports and firewall access (this may require configuration, such as client config files from Cloudera Manager for a CDH cluster); a sketch follows this list.
3) Use a server setup like Livy (open source through Cloudera; Microsoft Azure HDInsight uses and contributes to it) or maybe the Thrift server. Livy (livy.io) is a nice, simple REST service that also has language APIs for Scala/Java to make it extra easy to submit jobs (and run interactive persisted sessions!).
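As a sketch of option 2, here is how a remote machine with a Spark distribution and network access to the EC2 cluster might invoke spark-submit in cluster deploy mode; the master URL, main class, and jar location are placeholders, and for a YARN-based cluster (e.g. EMR) you would use --master yarn with the cluster's Hadoop client configs on the submitting machine instead.

```python
# Sketch: build and run a spark-submit command in cluster deploy mode from a remote machine.
# Master URL, main class, and jar location are placeholders.
import subprocess

cmd = [
    "spark-submit",
    "--master", "spark://<ec2-master-host>:7077",   # placeholder standalone master on EC2
    "--deploy-mode", "cluster",
    "--class", "com.example.MyJob",                  # placeholder main class
    "s3a://my-bucket/jars/my-job.jar",               # placeholder jar location
    "arg1", "arg2",
]
subprocess.run(cmd, check=True)
```

In cluster deploy mode the driver runs inside the cluster, so the job keeps running even if the submitting machine disconnects.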