Difference between reduce and reduceByKey in Apache Spark - apache-spark

What is the difference between reduce and reduceByKey in Apache Spark in terms of their functionality?
Why is reduceByKey a transformation while reduce is an action?

This is close to a duplicate of my answer explaining reduceByKey, but I will elaborate on the specific part that makes the two different. Refer to that answer for more detail on the internals of reduceByKey.
Basically, reduce must pull the entire dataset down into a single location because it reduces to one final value. reduceByKey, on the other hand, produces one value for each key. And since this reduction can be run on each machine locally first, the result can remain an RDD and have further transformations applied to it.
Note, however, that there is a reduceByKeyLocally you can use to automatically pull the resulting Map down to a single location as well.
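To make that concrete, here is a minimal sketch (the sample data and variable names are just for illustration, and a running SparkContext `sc` as in spark-shell is assumed):
val words = sc.parallelize(Seq("a", "b", "a", "c", "b", "a"))

// reduce is an action: it collapses the whole RDD to a single value
// and returns it to the driver.
val totalChars: Int = words.map(_.length).reduce(_ + _)    // 6

// reduceByKey is a transformation: it produces one value per key
// and the result is still an RDD that can be transformed further.
val counts = words.map(w => (w, 1)).reduceByKey(_ + _)     // RDD[(String, Int)]
val frequent = counts.filter { case (_, n) => n > 1 }      // still distributed

// reduceByKeyLocally returns a Map[K, V] to the driver instead of an RDD.
val localCounts: scala.collection.Map[String, Int] =
  words.map(w => (w, 1)).reduceByKeyLocally(_ + _)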

Please go through the official documentation.
reduce is an action that aggregates the elements of the dataset using a function func (which takes two arguments and returns one); it can be used on plain, single-value RDDs.
reduceByKey, when called on a dataset of (K, V) pairs, returns a dataset of (K, V) pairs where the values for each key are aggregated using the given reduce function func, which must be of type (V, V) => V.

This is what the API docs (Qt Assistant) say:
reduce(f): Reduces the elements of this RDD using the specified
commutative and associative binary operator. Currently reduces
partitions locally.
reduceByKey(func, numPartitions=None, partitionFunc=) :
Merge the values for each key using an associative and commutative reduce
function.

Related

reduce, reduceByKey, reduceGroups in Spark or Flink

reduce: the function takes an accumulated value and the next value to compute some aggregation.
reduceByKey: the same operation, but applied per specified key.
reduceGroups: applies the specified operation to grouped data.
I don't know how memory is managed for these operations. For example, how is data handled when using the reduce function (e.g. is all the data loaded into memory?)? I want to know how data is managed for reduce operations, and also what the difference is between these operations with respect to data management.
reduceByKey is one of the cheaper operations in Spark, since the main thing it does is group data with the same key onto the same node; the only cost per tuple is reading it and deciding where it should be grouped.
This means that the plain reduce, in contrast to reduceByKey or reduceGroups, is more expensive, because Spark has no key to group by and must combine all the tuples together.
Reduce can also ignore a tuple if it does not meet any requirement.
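Since reduceGroups lives on the Dataset API rather than on RDDs, here is a minimal sketch of how the three calls differ (the Sale class and the data are made up for illustration):
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("reduceGroupsSketch").getOrCreate()
import spark.implicits._

case class Sale(shop: String, amount: Int)

val sales = Seq(Sale("a", 10), Sale("a", 5), Sale("b", 7)).toDS()

// reduce: collapses the whole Dataset to a single value on the driver.
val total = sales.map(_.amount).reduce(_ + _)                    // 22

// reduceGroups: groups by a key and reduces each group, keeping a Dataset.
val perShop = sales
  .groupByKey(_.shop)
  .reduceGroups((a, b) => Sale(a.shop, a.amount + b.amount))     // Dataset[(String, Sale)]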

Spark: aggregate versus map and reduce

I'm learning Spark and starting to understand how Spark distributes the data and combines the results.
I came to the conclusion that using the operation map followed by reduce has an advantage over using just the operation aggregate. This is (at least I believe so) because aggregate uses a sequential operation, which hurts parallelism, while map and reduce can benefit from full parallelism.
So when I have a choice, isn't it better to use map and reduce rather than aggregate? Are there cases where aggregate is preferred? Or maybe cases where aggregate can't be replaced by the combination of map and reduce?
As an example - I want to find the string with the max length:
val z = sc.parallelize(List("123","12","345","4567"))
// instead of this aggregate ....
z.aggregate(0)((x, y) => math.max(x, y.length), (x, y) => math.max(x, y))
// .... shouldn't I rather use this map - reduce combination ?
z.map(_.length).reduce((x, y) => math.max(x, y))
A little example can be better than a long explanation.
Imagine you have a class Toto with an age field. You have many Toto instances and you want to compute the sum of the ages of all of them.
final case class Toto(age: Int)
val rdd = sc.parallelize(0 until n).map(Toto(_))  // n is some record count assumed defined
// map/reduce style
val sum1 = rdd
  // O(n) operations to go through every Toto's age
  .map(_.age)
  // another O(n) to access the data, then O(n) operations to sum the n values
  .reduce(_ + _)
// You get the result with 2 passes over your data plus O(n) additions
// aggregate style
val sum2 = rdd.aggregate(0)((agg, e) => agg + e.age, _ + _)
// With one pass over the data and O(n) additions you obtain the same result
It's a bit more complicated if you take into account accesses as well as operations.
aggregate accesses each element and then folds its age into the accumulator, which represents O(2n) operations: O(n) accesses plus O(n) additions, plus a negligible merge operation between partition aggregates.
On the other side, with the map/reduce style, the map represents O(n) accesses, then another O(n) accesses to the data to reduce them, with an overhead of O(n) addition operations, for a total of O(3n) operations.
Not forgetting that Spark is lazy and all of your transformations are only evaluated when triggered by a final action.
I presume that using aggregate will save some operations and thus improve application running time. But depending on what you're doing, it could be more useful for readability to express a map followed by a reduce, compared to an aggregate or combineByKey (a generalization of aggregateByKey). So I suppose it depends on which goals you want to reach for the use case at hand.
I believe I can partially answer my own question. I was wrongly assuming that, because a sequential operation is used, aggregate might be hurt in its parallelism. The data can still be partitioned and the sequential op executed on each chunk. This doesn't seem any less performant than the map operation. So the question that remains is: why would you use aggregate as opposed to the map-reduce combination?
The aggregate operation allows you to specify a combiner function (to reduce the amount of data sent through the shuffle), which is different from the reducer; with the map-reduce combination the same function is used to combine and reduce. I know I used old MapReduce terminology, but conceptually all shared-nothing, shuffle-based frameworks do this, and if you google for "mapreduce combiner" you will find a lot of explanations of the concept.
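A small sketch of that point, computing an average, where the per-element function (seqOp) and the merge function (combOp) are genuinely different (the data is made up, and a running SparkContext `sc` is assumed):
val ages = sc.parallelize(Seq(31, 22, 45, 18, 27))

// aggregate lets seqOp and combOp differ:
//   seqOp  folds an element into a (sum, count) accumulator,
//   combOp merges two (sum, count) accumulators from different partitions.
val (sum, count) = ages.aggregate((0, 0))(
  (acc, age) => (acc._1 + age, acc._2 + 1),
  (a, b)     => (a._1 + b._1, a._2 + b._2)
)
val average = sum.toDouble / count

// With plain map + reduce the same function has to serve both roles,
// so you first map every element into accumulator form.
val (sum2, count2) = ages
  .map(age => (age, 1))
  .reduce((a, b) => (a._1 + b._1, a._2 + b._2))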

Why does collect_list in Spark not use partial aggregation

I recently played around with UDAFs and looked into the source code of the built-in aggregation function collect_list. I was surprised to see that collect_list does not have a merge method implemented, although I think this is really straightforward (just concatenate two Arrays). Code taken from org.apache.spark.sql.catalyst.expressions.aggregate.collect.Collect:
override def merge(buffer: InternalRow, input: InternalRow): Unit = {
  sys.error("Collect cannot be used in partial aggregations.")
}
This is no longer the case, as of SPARK-1893, but I'd assume that the initial design had mostly collect_list in mind.
Because collect_list is logically equivalent to groupByKey, the motivation is exactly the same: to avoid long GC pauses. In particular, map-side combine in groupByKey was disabled with SPARK-772:
Map side combine in group by key case does not reduce the amount of data shuffled. Instead, it forces a lot more objects to go into old gen, and leads to worse GC.
So, to address your comment:
I think this is really straightforward (just concatenate two Arrays).
It might be simple, but it doesn't add much value (unless there is another reducing operation on top of it), and sequence concatenation is expensive.
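For context, a minimal usage sketch of collect_list (column names and data are made up); like groupByKey, it simply gathers every value per key, so a map-side combine could only concatenate lists and would not shrink the data before the shuffle:
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.collect_list

val spark = SparkSession.builder.appName("collectListSketch").getOrCreate()
import spark.implicits._

val df = Seq(("a", 1), ("a", 2), ("b", 3)).toDF("key", "value")

// Every value for a key ends up in the resulting array.
val lists = df.groupBy("key").agg(collect_list("value").as("values"))
// Roughly (row order may vary):
// +---+------+
// |key|values|
// +---+------+
// |a  |[1, 2]|
// |b  |[3]   |
// +---+------+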

Apache Spark Transformations: groupByKey vs reduceByKey vs aggregateByKey

These three Apache Spark transformations are a little confusing. Is there any way I can determine when to use which one and when to avoid one?
I think the official guide explains it well enough.
I will highlight the differences (assume you have an RDD of type (K, V)):
1. if you need to keep the values, then use groupByKey
2. if you do not need to keep the values, but you need some aggregated info about each group (the items of the original RDD which have the same K), you have two choices: reduceByKey or aggregateByKey (reduceByKey is a particular case of aggregateByKey)
2.1 if you can provide an operation which takes (V, V) as input and returns V, so that all the values of a group can be reduced to one single value of the same type, then use reduceByKey. As a result you will have an RDD of the same (K, V) type.
2.2 if you cannot provide this aggregation operation, then use aggregateByKey. That happens when you reduce the values to another type, so you will have (K, V2) as a result (see the sketch below).
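A minimal sketch of 2.1 versus 2.2 (the data is made up, and a running SparkContext `sc` is assumed):
val pairs = sc.parallelize(Seq(("a", 1), ("a", 2), ("b", 3)))

// 2.1: (V, V) => V, the result stays (K, V) = (String, Int)
val sums = pairs.reduceByKey(_ + _)                    // ("a", 3), ("b", 3)

// 2.2: values are reduced to another type, the result is (K, V2) = (String, Set[Int])
val uniques = pairs.aggregateByKey(Set.empty[Int])(
  (set, v)  => set + v,       // fold a value into the partition-local Set
  (s1, s2)  => s1 ++ s2       // merge Sets coming from different partitions
)                                                      // ("a", Set(1, 2)), ("b", Set(3))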
In addition to Hlib's answer, I would like to add a few more points.
groupByKey() just groups your dataset based on a key.
reduceByKey() is something like grouping + aggregation. We can say reduceByKey() is equivalent to dataset.group(...).reduce(...).
aggregateByKey() is logically the same as reduceByKey(), but it lets you return the result in a different type. In other words, it lets you have input of type x and an aggregated result of type y. For example (1,2),(1,4) as input and (1,"six") as output.
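A small sketch of the three calls on that (1,2),(1,4) example; in the aggregateByKey case the result is simply the values joined into a String rather than a spelled-out "six", which is enough to show the Int-to-String type change (a running SparkContext `sc` is assumed):
val input = sc.parallelize(Seq((1, 2), (1, 4)))

// groupByKey: just grouping, the values are kept as-is.
val grouped = input.groupByKey()                       // (1, Iterable(2, 4))

// reduceByKey: grouping + aggregation, the value type stays Int.
val reduced = input.reduceByKey(_ + _)                 // (1, 6)

// aggregateByKey: input values are Int, the aggregated result is a String.
val aggregated = input.aggregateByKey("")(
  (acc, v) => if (acc.isEmpty) v.toString else acc + "," + v,
  (a, b)   => Seq(a, b).filter(_.nonEmpty).mkString(",")
)                                                      // (1, "2,4")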

treeReduce vs reduceByKey in Spark

I saw the following post a little bit back: Understanding TreeReduce in Spark
I am still trying to understand exactly when to use treeReduce vs reduceByKey. I think we can use a universal example like word count to help me further understand what is going on.
Does it always make sense to use reduceByKey in a word count?
Or is there a particular size of data when treeReduce makes more sense?
Are there particular cases or rules of thumbs when treeReduce is the better option?
Also, this may already be answered above for reduceByKey, but does anything change with reduceByKeyLocally and treeReduce?
How do I appropriately determine depth?
Edit: So, playing in spark-shell, I think I fundamentally don't understand the concept of treeReduce, but hopefully an example and those questions help.
res2: Array[(String, Int)] = Array((D,1), (18964,1), (D,1), (1,1), ("",1), ("",1), ("",1), ("",1), ("",1), (1,1))
scala> val reduce = input.reduceByKey(_+_)
reduce: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[11] at reduceByKey at <console>:25
scala> val tree = input.treeReduce(_+_, 2)
<console>:25: error: type mismatch;
found : (String, Int)
required: String
val tree = input.treeReduce(_+_, 2)
There is a fundamental difference between the two: reduceByKey is only available on key-value pair RDDs, while treeReduce is a generalization of the reduce operation on any RDD. reduceByKey is used for implementing treeReduce, but they are not related in any other sense.
reduceByKey performs the reduction per key, resulting in an RDD; it is not an "action" in the RDD sense but a transformation that returns a ShuffledRDD. This is equivalent to groupByKey followed by a map that does a key-wise reduction (see the discussions of why using groupByKey is inefficient).
On the other hand, treeAggregate is a generalization of the reduce function, inspired by AllReduce. It is an "action" in the Spark sense, returning the result on the master node. As explained in the link posted in your question, after performing the local reduce operation, reduce performs the rest of the computation on the master, which can be very burdensome (especially in machine learning, when the reduce function results in large vectors or matrices). Instead, treeReduce performs the reduction in parallel using reduceByKey (this is done by creating a key-value pair RDD on the fly, with the keys determined by the depth of the tree; see the implementation).
So, to answer your first two questions: you have to use reduceByKey for word count, since you are interested in getting a per-word count and treeReduce is not appropriate here. The other two questions are not related to this topic.
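A small sketch tying this back to the word-count example (the sample data is made up, and a running SparkContext `sc` is assumed); note that treeReduce needs a function of type (T, T) => T for the RDD's element type, which is why the input.treeReduce(_+_, 2) call in the question fails on an RDD[(String, Int)]:
val words = sc.parallelize(Seq("a", "b", "a", "c", "b", "a"))

// Per-word counts: reduceByKey, a transformation returning an RDD.
val wordCounts = words.map(w => (w, 1)).reduceByKey(_ + _)

// A single global value (total number of words): reduce or treeReduce,
// both actions returning the result to the driver.
val total     = words.map(_ => 1).reduce(_ + _)
val totalTree = words.map(_ => 1).treeReduce(_ + _, depth = 2)

// treeReduce on the pair RDD would need (T, T) => T with T = (String, Int), e.g.:
val merged = wordCounts.treeReduce((a, b) => (a._1 + b._1, a._2 + b._2))
// which concatenates keys and sums counts -- rarely what you actually want.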
