How to run Cloud Dataflow pipelines using Spark runner? - apache-spark

I have read that Google Cloud Dataflow pipelines, which are based on the Apache Beam SDK, can be run with Spark or Flink.
I have some Dataflow pipelines currently running on GCP using the default Cloud Dataflow runner, and I want to run them with the Spark runner, but I don't know how.
Is there any documentation or guide about how to do this? Any pointers will help.
Thanks.

I'll assume you're using Java, but the equivalent process applies to Python.
You need to migrate your pipeline to use the Apache Beam SDK, replacing your Google Dataflow SDK dependency with:
<dependency>
  <groupId>org.apache.beam</groupId>
  <artifactId>beam-sdks-java-core</artifactId>
  <version>2.4.0</version>
</dependency>
Then add the dependency for the runner you wish to use:
<dependency>
  <groupId>org.apache.beam</groupId>
  <artifactId>beam-runners-spark</artifactId>
  <version>2.4.0</version>
</dependency>
Then pass --runner=SparkRunner when submitting the pipeline to specify that this runner should be used.
See https://beam.apache.org/documentation/runners/capability-matrix/ for the full list of runners and comparison of their capabilities.
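For reference, here is a minimal sketch of what runner selection looks like inside a pipeline, shown in Python for brevity (the Java flow follows the same idea: keep the pipeline code runner-agnostic and pick the runner on the command line). Spark runner support for the Python SDK depends on your Beam version, so treat the SparkRunner flag below as something to verify against the capability matrix.

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def main(argv=None):
    # The runner is chosen at submission time, e.g.:
    #   python my_pipeline.py --runner=DirectRunner
    #   python my_pipeline.py --runner=SparkRunner   (if your Beam version supports it for Python)
    options = PipelineOptions(argv)
    with beam.Pipeline(options=options) as p:
        (p
         | "Create" >> beam.Create(["hello", "beam"])
         | "Print" >> beam.Map(print))

if __name__ == "__main__":
    main()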

Thanks to multiple tutorials and documentation scattered across the web, I was finally able to form a coherent picture of how to use the Spark runner with any Beam SDK-based pipeline.
I have documented entire process here for future reference: http://opreview.blogspot.com/2018/07/running-apache-beam-pipeline-using.html.

Related

Can I use multiple runners in a single Apache Beam pipeline?

I am new to Apache Beam. So far, my understanding is that Apache Beam is essentially a tool for ETL processing, and that runners can be thought of as the collection of CPU, memory, and storage that executes a pipeline.
My question is: can I use two or more types of runners in a single Beam Python pipeline?
For example, one runner for Dataflow, another for Spark, and a third for the DirectRunner?
You can take your Beam pipeline and submit it to run on different runners.
You cannot make different runners work together (e.g. a pipeline that runs partially on Dataflow and partially on Spark).
Instead, you can write a pipeline that sometimes runs fully on Dataflow and sometimes runs fully on Spark.
Let me know if I should clarify further.
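For illustration, a minimal sketch of that idea: one pipeline definition, submitted fully to one runner at a time by switching the pipeline options (the project, region, and bucket below are placeholders).

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def build_and_run(options):
    # The pipeline itself is runner-agnostic.
    with beam.Pipeline(options=options) as p:
        (p
         | beam.Create([1, 2, 3])
         | beam.Map(lambda x: x * 10)
         | beam.Map(print))

# Run entirely on the local DirectRunner:
build_and_run(PipelineOptions(["--runner=DirectRunner"]))

# Run the same code entirely on Dataflow (placeholder project and bucket):
build_and_run(PipelineOptions([
    "--runner=DataflowRunner",
    "--project=my-gcp-project",
    "--region=us-central1",
    "--temp_location=gs://my-bucket/tmp",
]))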

Airflow Livy job submission on AWS EMR

We are using Airflow to schedule our jobs on EMR, and we now want to use Apache Livy to submit Spark jobs via Airflow.
I need more guidance on the following:
Which Airflow-Livy operator should we use for Python 3+ PySpark and Scala jobs?
I have seen these:
https://github.com/rssanders3/airflow-spark-operator-plugin
and
https://github.com/panovvv/airflow-livy-operators
I'd like to know which stable Airflow-Livy operator people are using in production, ideally on an AWS stack.
A step-by-step installation guide for the integration would also help.
I would recommend using LivyOperator from https://github.com/apache/airflow/blob/master/airflow/providers/apache/livy/operators/livy.py
Currently it is only available in master, but you could copy the code and use it as a custom operator until we backport all the new operators for the Airflow 1.10.* series.
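A minimal sketch of how that might look, assuming the Livy provider package (or its backport for Airflow 1.10.x) is installed, that a "livy_default" connection points at the Livy endpoint on the EMR master, and that the DAG name, task name, and S3 path are placeholders:

from datetime import datetime

from airflow import DAG
from airflow.providers.apache.livy.operators.livy import LivyOperator

with DAG(
    dag_id="emr_livy_example",                  # placeholder DAG name
    start_date=datetime(2020, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    submit_pyspark = LivyOperator(
        task_id="submit_pyspark_job",
        file="s3://my-bucket/jobs/etl_job.py",  # placeholder PySpark script
        args=["--env", "prod"],
        num_executors=2,
        executor_memory="2g",
        livy_conn_id="livy_default",
        polling_interval=30,                    # poll Livy until the batch finishes
    )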

Apache Zeppelin Notebook Deployment

I'm new to ETL development with PySpark and I've been writing my scripts as paragraphs in Apache Zeppelin notebooks. I'm curious what the typical deployment flow looks like: how do you convert your code from a Zeppelin notebook into your ETL pipeline?
Thanks!
Well, that heavily depends on the sort of ETL you're doing.
If you want to keep the scripts in the notebooks and just need to orchestrate their execution, you have a couple of options:
Use Zeppelin's built-in scheduler
Use cron to launch your notebooks via curl commands and Zeppelin's REST API
But if you already have an up-and-running workflow management tool like Apache Airflow, you can add tasks that launch the aforementioned curl commands to trigger the notebooks (with Airflow, you can use BashOperator or PythonOperator). Keep in mind that you'll need some workarounds to run different notes sequentially (one approach is sketched below).
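As an example of such a workaround, here is a hedged sketch of a small Python helper that triggers a note through Zeppelin's REST API and polls until it finishes, so several notes can be chained from cron or an Airflow PythonOperator. The server URL and note IDs are placeholders, and the endpoint paths should be checked against your Zeppelin version's REST API documentation.

import time
import requests

ZEPPELIN_URL = "http://zeppelin-host:8080"  # placeholder server address

def run_note_and_wait(note_id, poll_seconds=30, timeout_seconds=3600):
    # Start all paragraphs of the note (asynchronous), then poll until none are running or pending.
    resp = requests.post(f"{ZEPPELIN_URL}/api/notebook/job/{note_id}")
    resp.raise_for_status()

    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        status = requests.get(f"{ZEPPELIN_URL}/api/notebook/job/{note_id}").json()
        paragraphs = status.get("body", [])
        if paragraphs and all(p.get("status") not in ("RUNNING", "PENDING") for p in paragraphs):
            return paragraphs
        time.sleep(poll_seconds)
    raise TimeoutError("Note %s did not finish in time" % note_id)

# Run two notes one after another (note IDs are placeholders):
for note in ["2EXAMPLE01", "2EXAMPLE02"]:
    run_note_and_wait(note)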
One major tech company that's betting heavily on notebooks is Netflix (you can take a look at this), and they have developed a set of tools to improve the efficiency of notebook-based ETL pipelines, such as Commuter and Papermill. They're more into Jupyter, so Zeppelin compatibility is not provided yet, but the core concepts should be the same when working with Zeppelin.
For more on Netflix' notebook-based pipelines, you can refer to this article shared on their tech blog.

Are there any runners supported for Apache Beam Python besides Google Cloud Dataflow?

I have been building Python pipelines using Google Cloud Dataflow and Apache Beam for about a year. I am leaving the Google Cloud environment for a university cluster, which has Spark installed. It looks like the Spark runner is only for Java (https://beam.apache.org/documentation/runners/spark/). Are there any suggestions on how to run Python Apache Beam pipelines outside of Cloud Dataflow?
As of right now, this is not yet possible. However, portability across runners and languages is the highest priority and the most active area of development in Beam, and I think the portable Flink runner is very close to being able to run simple pipelines in Python, with portable Spark runner development to commence soon (and share lots of code with Flink). Stay tuned and follow the dev@ mailing list!

Dataflow SparkPipelineRunner - any available examples?

Does anybody have a working example of using the Cloudera SparkPipelineRunner to execute (on a cluster) a pipeline written using the Dataflow SDK?
I can't see any in the Dataflow or Spark-Dataflow github repos.
We're trying to evaluate if running our pipelines on a Spark cluster will give us any performance gains over running them on the GCP Dataflow service.
There are examples for using the Beam Spark Runner at the Beam site: https://beam.apache.org/documentation/runners/spark/.
The dependency you want is:
<dependency>
  <groupId>org.apache.beam</groupId>
  <artifactId>beam-runners-spark</artifactId>
  <version>0.3.0-incubating</version>
</dependency>
To run against a standalone cluster, simply run:
spark-submit --class com.beam.examples.BeamPipeline --master spark://HOST:PORT target/beam-examples-1.0.0-shaded.jar --runner=SparkRunner
