Spark: does the JDBC read all happen on the driver?

I have Spark reading from a JDBC source (Oracle). I specify lowerBound, upperBound, numPartitions, and partitionColumn, but looking at the web UI all the reading happens on the driver, not on the workers/executors. Is that expected?

In the Spark framework, in general, whatever code you write inside a transformation such as map, flatMap, etc. is executed on the executors. To invoke a transformation you need an RDD, which is created from the dataset you are trying to compute on. To materialize the RDD you need to invoke an action, so that the transformations are applied to the data.
I believe in your case you have written a Spark application that reads the JDBC data itself, rather than through Spark's data source API. If that is the case, it will all be executed on the driver, not on the executors.
If you have not already, try creating a DataFrame using the JDBC DataFrameReader API.
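For reference, a partitioned JDBC read through the DataFrame API looks roughly like this (a sketch only: the URL, table, credentials, and bounds are illustrative placeholders, and a SparkSession named `spark` is assumed):

```scala
// Illustrative placeholders throughout; with these options Spark issues
// numPartitions range queries, one per executor task, instead of a single
// driver-side read.
val df = spark.read
  .format("jdbc")
  .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCL")  // placeholder URL
  .option("dbtable", "SCHEMA.MY_TABLE")                   // placeholder table
  .option("user", "scott")
  .option("password", "tiger")
  .option("partitionColumn", "ID")  // must be a numeric, date, or timestamp column
  .option("lowerBound", "1")
  .option("upperBound", "1000000")
  .option("numPartitions", "8")     // 8 concurrent executor-side queries
  .load()
```

Note that defining the DataFrame is lazy; the partitioned read only runs on the executors once an action is invoked.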

Related

Is there a way to read data without SQL in Spark?

I am a beginner in Spark and was given an assignment to read data from a CSV file and perform some queries on the data using Spark Core.
However, every online resource I search uses some form of SQL from the pyspark.sql module.
Is there any way to read data and perform queries (select, count, group by) using only Spark Core?
The core abstraction of Spark Core is the RDD. Here you can find more information and examples of processing text files.
That said, it is good practice to use Spark DataFrames instead of Spark RDDs.
Spark DataFrames use the Catalyst optimizer, which automatically rewrites your code internally in the best way to improve performance.
https://blog.bi-geek.com/en/spark-sql-optimizador-catalyst/
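As a minimal RDD-only sketch (assuming a spark-shell SparkContext named `sc` and a hypothetical people.csv with name,city,amount lines):

```scala
// Read and split the CSV with plain RDD operations (crude split: no quoting).
val lines = sc.textFile("people.csv")        // hypothetical input path
val rows  = lines.map(_.split(","))

// "select name": project one column
val names = rows.map(r => r(0))

// "count": an action, which triggers the computation
val total = rows.count()

// "group by city, sum(amount)": key by city, then reduceByKey
val totalsByCity = rows
  .map(r => (r(1), r(2).toDouble))
  .reduceByKey(_ + _)
```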

Spark: How to write files to s3/hdfs from each executor

I have a use case where I am running some modeling code on each executor and want to store the result in S3/HDFS immediately, rather than waiting for all the executors to finish their tasks.
The DataFrame write API works in the fashion you intend to use here: if you write the DataFrame to HDFS, the executors write the data into files independently, rather than bringing it all to the driver and then performing the write.
Refer to this link for further reading on the topic.
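As an illustration (assuming `modelResults` is the DataFrame produced by the modeling code): each executor writes its own partition as a part-file directly to the target file system, and nothing is collected on the driver first. The local path here is a placeholder; in production it would be an s3a:// or hdfs:// URI.

```scala
// Each task writes its partition's part-file in parallel on the executors.
modelResults.write
  .mode("overwrite")
  .parquet("file:///tmp/model-results")  // placeholder; e.g. s3a://bucket/path
```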

How to use driver to load data and executors for processing and writing?

I would like to use Spark structured streaming to watch a drop location that exists on the driver only. I do this with
val trackerData = spark.readStream.text(sourcePath)
After that I would like to parse, filter, and map incoming data and write it out to elastic.
This works well, except that it only works when spark.master is set to e.g. local[*]. When it is set to yarn, no files are found, even when the deployment mode is set to client.
I thought that reading data from the local driver node was achieved by setting the deployment mode to client, with the actual processing and writing done within the Spark cluster.
How could I change my code to use the driver for reading in and the cluster for processing and writing?
What you want is possible, but not recommended in Spark Structured Streaming in particular and in Apache Spark in general.
The main motivation of Apache Spark is to bring the computation to the data, not the opposite, since Spark is meant to process petabytes of data that a single JVM (the driver's) would not be able to handle.
The driver's "job" (no pun intended) is to convert an RDD lineage (a DAG of transformations) into tasks that know how to load the data. Tasks are executed on Spark executors (in most cases), and that is where data processing happens.
There are some ways to do the reading on the driver and the processing on the executors, and among them the most "lucrative" would be to use broadcast variables.
Broadcast variables allow the programmer to keep a read-only variable cached on each machine rather than shipping a copy of it with tasks. They can be used, for example, to give every node a copy of a large input dataset in an efficient manner. Spark also attempts to distribute broadcast variables using efficient broadcast algorithms to reduce communication cost.
One idea that comes to mind is that you could "hack" Spark Structured Streaming and write your own streaming sink that would do the collect, or whatever else you need. That could make the processing local.
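A sketch of the broadcast-variable route (the path, parsing, and sample records here are illustrative placeholders; a SparkSession `spark` is assumed):

```scala
import scala.io.Source

// Read the small driver-local file on the driver only...
val localLines = Source.fromFile("/driver/only/path/data.txt").getLines().toVector

// ...then broadcast it, so every executor caches a read-only copy.
val bcast = spark.sparkContext.broadcast(localLines)

// Executor-side processing can now reference the driver-read data.
val enriched = spark.sparkContext
  .parallelize(Seq("record-1", "record-2"))
  .map(record => s"$record matched ${bcast.value.size} lookup lines")
```

This only fits data small enough to hold in each executor's memory, and it is a one-shot read, not the continuous file discovery a streaming source would give you.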

Does the repartition in repartitionAndSortWithinPartitions happen on the driver or on a worker?

I am trying to understand repartitionAndSortWithinPartitions in Spark Streaming: does the repartition happen on the driver or on the workers? If it happens on the driver, does each worker then wait for all the data to arrive before sorting?
Like any other transformation, it is handled by the executors. Data is not passed via the driver. In other words, this is the standard shuffle mechanism, and there is nothing streaming-specific here.
The destination of each record is determined by:
Its key.
Partitioner used for a given shuffle.
Number of partitions.
and data is passed directly between executor nodes.
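As an illustration (assuming a spark-shell SparkContext `sc`):

```scala
import org.apache.spark.HashPartitioner

// A pair RDD: the key decides the destination, the HashPartitioner and the
// number of partitions (2) decide how keys map to partitions.
val pairs = sc.parallelize(Seq(("b", 2), ("a", 1), ("c", 3), ("a", 0)))

// One shuffle: records travel executor-to-executor, and each reduce-side task
// sorts only its own partition's records; no task waits for "all the data".
val result = pairs.repartitionAndSortWithinPartitions(new HashPartitioner(2))
```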
From the comments it looks like you are more interested in the Spark Streaming architecture. If that is the case, you should take a look at Diving into Apache Spark Streaming's Execution Model. To give you an overview, there are two different types of streams:
Receiver-based, with a receiver node per stream.
Direct (without receivers), where only metadata is assigned to executors and the data is fetched directly by them.

Spark SQL: how does it map to RDD operations?

When I learn spark SQL, I have a question in my mind:
As stated, the result of SQL execution is a SchemaRDD, but what happens behind the scenes? How many transformations or actions are invoked in the optimized execution plan, and are they equivalent to hand-written plain-RDD code?
If we write the code by hand instead of using SQL, it may generate some intermediate RDDs, e.g. a series of map() and filter() operations on the source RDD. But the SQL version would not generate intermediate RDDs, correct?
Depending on the SQL content, the generated JVM bytecode also involves partitioning and shuffling, correct? But without intermediate RDDs, how can Spark schedule and execute them on the worker machines?
In fact, I still cannot understand the relationship between Spark SQL and Spark Core. How do they interact with each other?
To understand how SparkSQL or the dataframe/dataset DSL maps to RDD operations, look at the physical plan Spark generates using explain.
sql(/* your SQL here */).explain
myDataframe.explain
At the very core of Spark, RDD[_] is the underlying datatype that is manipulated using distributed operations. In Spark versions <= 1.6.x DataFrame is RDD[Row] and Dataset is separate. In Spark versions >= 2.x DataFrame becomes Dataset[Row]. That doesn't change the fact that underneath it all Spark uses RDD operations.
For a deeper dive into understanding Spark execution, read Understanding Spark Through Visualization.
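For example (assuming a spark-shell SparkSession `spark`):

```scala
import spark.implicits._

val df = Seq(("a", 1), ("b", 2), ("a", 3)).toDF("key", "value")

// Prints the physical plan: HashAggregate stages separated by an Exchange
// (a shuffle), i.e. the RDD-level equivalent of a map-side combine followed
// by a reduceByKey in hand-written RDD code.
df.groupBy("key").sum("value").explain()
```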
