Suppose I have an RDD (50M records/day) which I want to summarize in several different ways.
The RDD records are 4-tuples: (keep, foo, bar, baz).
keep - boolean
foo, bar, baz - 0/1 int
I want to count, for each of foo, bar, and baz, how many records with that field set to 1 are kept and how many are dropped, i.e., I have to do the following for foo (and the same for bar and baz):
import operator

rdd.filter(lambda rec: rec[1] == 1) \
   .map(lambda rec: (rec[0], 1)) \
   .reduceByKey(operator.add)
which would return (after collect) a list like [(True,40000000),(False,10000000)].
The question is: is there an easy way to avoid scanning rdd 3 times (once for each of foo, bar, baz)?
What I mean is not a way to rewrite the above code to handle all 3 fields, but telling spark to process all 3 pipelines in a single pass.
It's possible to execute the three pipelines in parallel by submitting the job with different threads, but this will pass through the RDD three times and require up to 3x more resources on the cluster.
It's possible to get the job done in one pass by rewriting the job to handle all counts at once - the answer regarding aggregate is an option. Splitting the data into pairs (keep, foo), (keep, bar), (keep, baz) would be another.
It's not possible to get the job done in one pass without any code changes, as there would not be a way for Spark to know that those jobs relate to the same dataset. At most, the speed of subsequent jobs after the first one could be improved by caching the initial rdd with rdd.cache before the .filter().map().reduce() steps; this will still pass through the RDD 3 times, but the 2nd and 3rd time will be potentially a lot faster if all data fits in the memory of the cluster:
rdd.cache
// the first action executed on one of these pipelines will populate the cache and keep the rdd data in memory
val foo = rdd.filter(fooFilter).map(fooMap).reduceByKey(???)
// subsequent operations will execute faster as the rdd is now available in mem
val bar = rdd.filter(barFilter).map(barMap).reduceByKey(???)
val baz = rdd.filter(bazFilter).map(bazMap).reduceByKey(???)
If I were doing this, I would create pairs of the relevant data and count them in a single pass:
// We split the initial tuple into pairs keyed by the data type ("foo", "bar", "baz") and the keep information. dataPairs will contain data like: (("bar",true),1), (("foo",false),1)
val dataPairs = rdd.flatMap{case (keep, foo, bar, baz) =>
def condPair(name:String, x:Int):Option[((String,Boolean), Int)] = if (x==1) Some(((name,keep),x)) else None
Seq(condPair("foo",foo), condPair("bar",bar), condPair("baz",baz)).flatten
}
val totals = dataPairs.reduceByKey(_ + _)
This is easy and will pass over the data only once, but it requires rewriting the code. I'd say it scores 66.66% in answering the question.
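For completeness, a small hedged usage sketch of the totals produced above (the names counts and fooKept are just for the example); the result is tiny, at most six keys, so it can safely be collected to the driver:
val counts = totals.collectAsMap()                 // Map[(String, Boolean), Int]
val fooKept = counts.getOrElse(("foo", true), 0)   // records with foo == 1 that were kept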
If I'm reading your question correctly, you want RDD.aggregate.
val zeroValue = (0L, 0L, 0L, 0L, 0L, 0L) // tfoo, tbar, tbaz, ffoo, fbar, fbaz
rdd.aggregate(zeroValue)(
(prior, current) => if (current._1) {
(prior._1 + current._2, prior._2 + current._3, prior._3 + current._4,
prior._4, prior._5, prior._6)
} else {
(prior._1, prior._2, prior._3,
prior._4 + current._2, prior._5 + current._3, prior._6 + current._4)
},
(left, right) =>
(left._1 + right._1,
left._2 + right._2,
left._3 + right._3,
left._4 + right._4,
left._5 + right._5,
left._6 + right._6)
)
Aggregate is conceptually like the reduce function on a list, but RDDs aren't lists; they're distributed. So you provide two function arguments: one that operates on each partition, and one that combines the results of processing the partitions.
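For readability, here is a hedged, equivalent restatement of the aggregate above using pattern matching instead of positional tuple accessors (assuming, as above, that rdd holds (Boolean, Int, Int, Int) tuples); the behaviour is intended to be identical:
val zero = (0L, 0L, 0L, 0L, 0L, 0L) // tfoo, tbar, tbaz, ffoo, fbar, fbaz
val summary = rdd.aggregate(zero)(
  // seqOp: fold one record into the per-partition accumulator
  { case ((tf, tb, tz, ff, fb, fz), (keep, foo, bar, baz)) =>
      if (keep) (tf + foo, tb + bar, tz + baz, ff, fb, fz)
      else      (tf, tb, tz, ff + foo, fb + bar, fz + baz) },
  // combOp: merge the accumulators of two partitions
  { case ((a1, a2, a3, a4, a5, a6), (b1, b2, b3, b4, b5, b6)) =>
      (a1 + b1, a2 + b2, a3 + b3, a4 + b4, a5 + b5, a6 + b6) }
)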
Recently, an interviewer asked me how we can prevent the laziness of Apache Spark transformations. I know that we can persist and cache RDD datasets, but in case of failure they are recomputed from the parent.
Can anyone explain whether there is any function to stop the laziness of a Spark transformation?
By design, Spark transformations are lazy, and you must use an action in order to retrieve a concrete value out of them.
For example, the following transformations will always remain lazy:
JavaRDD<String> lines = sc.textFile("data.txt");
JavaRDD<Integer> lineLengths = lines.map(s -> s.length());
Functions like map return RDDs, and you can only turn those RDDs into real values by performing actions, such as reduce:
int totalLength = lineLengths.reduce((a, b) -> a + b);
There is no flag that will make map return a concrete value (for example, a list of integers).
The bottom line is that you can use collect or any other Spark action to 'prevent the laziness' of a transformation:
JavaRDD<String> lines = sc.textFile("data.txt");
JavaRDD<Integer> lineLengths = lines.map(s -> s.length());
List<Integer> collectedLengths = lineLengths.collect();
Remember, though, that using collect on a large dataset is probably a very bad practice, as it can make your driver run out of memory.
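As a hedged illustration (in Scala for brevity, assuming a SparkContext named sc): any action forces evaluation, and actions that keep only a small result on the driver are usually safer than collect on large datasets.
val lineLengths = sc.textFile("data.txt").map(_.length)   // still lazy
val total    = lineLengths.reduce(_ + _)                   // forces evaluation, returns a single Int
val howMany  = lineLengths.count()                         // forces evaluation, returns a single Long
val firstTen = lineLengths.take(10)                        // only 10 elements reach the driver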
I know the map function can do something like this:
val a = 5
rdd.map(data => data + a)
Is it possible for the variable a to be dynamic?
For example, the value of a ranges from 1 to 5, so a = 1, 2, 3, 4, 5.
When I call the map function, can it execute in a distributed way like this:
data + 1
data + 2
data + 3
data + 4
data + 5
If I'm understanding your question correctly, it doesn't make sense from a Spark perspective. What you're asking for makes sense in a non-distributed, sequential processing environment (where a different function can be deterministically applied to each datum). However, Spark applies transformations across distributed datasets, and the function applied by a given transformation is identical for every element.
One way to achieve what you are trying to do is to use some inherent quality of the input when transforming your data. This way, even though your transformation function is identical, the arguments provided to it allow it to behave like (what you described as) a "dynamic variable". In your example, the zipWithIndex() function can suffice. It is important to note, though, that if ordering is not guaranteed, the indexes are subject to change on each run of the transformation.
scala> val rdd = sc.parallelize(Array(1,1,1,1,1,1))
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>:12
scala> val newRDD = rdd.zipWithIndex().map { case (elem, idx) => elem + idx }
...
scala> newRDD.take(6)
...
res0: Array[Long] = Array(1, 2, 3, 4, 5, 6)
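A hedged variation on the sketch above: if, as in the question, a must stay within 1 to 5, the partition index can be folded back into that range with a modulo; the caveat about ordering still applies.
val bounded = rdd.zipWithIndex().map { case (elem, idx) => elem + (idx % 5) + 1 }
// each element gets one of the offsets 1..5, chosen by its (order-dependent) index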
I'm looking for a way to split an RDD into two or more RDDs. The closest I've seen is Scala Spark: Split collection into several RDD? which is still a single RDD.
If you're familiar with SAS, something like this:
data work.split1 work.split2;
    set work.preSplit;
    if (condition1) then output work.split1;
    else if (condition2) then output work.split2;
run;
which resulted in two distinct data sets. It would have to be immediately persisted to get the results I intend...
It is not possible to yield multiple RDDs from a single transformation*. If you want to split a RDD you have to apply a filter for each split condition. For example:
def even(x): return x % 2 == 0
def odd(x): return not even(x)
rdd = sc.parallelize(range(20))
rdd_odd, rdd_even = (rdd.filter(f) for f in (odd, even))
If you have only a binary condition and computation is expensive you may prefer something like this:
kv_rdd = rdd.map(lambda x: (x, odd(x)))
kv_rdd.cache()
rdd_odd = kv_rdd.filter(lambda kv: kv[1]).keys()
rdd_even = kv_rdd.filter(lambda kv: not kv[1]).keys()
It means only a single predicate computation per element, but it requires an additional pass over all the data.
It is important to note that as long as the input RDD is properly cached and there are no additional assumptions regarding data distribution, there is no significant difference in time complexity between a repeated filter and a for-loop with nested if-else.
With N elements and M conditions, the number of operations you have to perform is clearly proportional to N times M. In the case of the for-loop it should be closer to (N + MN) / 2, and a repeated filter is exactly NM, but at the end of the day it is nothing other than O(NM). You can see my discussion** with Jason Lenderman for some pros and cons.
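To make the "repeated filter" strategy concrete, here is a hedged sketch (in Scala for consistency with the other examples in this thread, and assuming rdd is an RDD[Int]): one filter, and therefore one pass over the cached data, per predicate.
val predicates: Seq[Int => Boolean] = Seq(_ % 2 == 0, _ % 2 != 0)
val cached = rdd.cache()                              // avoid recomputing the parent M times
val splits = predicates.map(p => cached.filter(p))    // Seq[RDD[Int]], one RDD per condition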
At a very high level you should consider two things.
First, Spark transformations are lazy: until you execute an action, your RDD is not materialized.
Why does it matter? Going back to my example:
rdd_odd, rdd_even = (rdd.filter(f) for f in (odd, even))
If later I decide that I need only rdd_odd then there is no reason to materialize rdd_even.
If you take a look at your SAS example, to compute work.split2 you need to materialize both the input data and work.split1.
Second, RDDs provide a declarative API. When you use filter or map, it is completely up to the Spark engine how the operation is performed. As long as the functions passed to transformations are side-effect free, this creates multiple possibilities for optimizing the whole pipeline.
At the end of the day this case is not special enough to justify its own transformation.
This map-with-filter pattern is actually used in core Spark. See my answer to How does Spark's RDD.randomSplit actually split the RDD and the relevant part of the randomSplit method.
If the only goal is to achieve a split of the input, it is possible to use the partitionBy clause of DataFrameWriter with the text output format:
def makePairs(row: T): (String, String) = ???
data
.map(makePairs).toDF("key", "value")
.write.partitionBy($"key").format("text").save(...)
* There are only 3 basic types of transformations in Spark:
RDD[T] => RDD[T]
RDD[T] => RDD[U]
(RDD[T], RDD[U]) => RDD[W]
where T, U, W can be either atomic types or products / tuples (K, V). Any other operation has to be expressed using some combination of the above. You can check the original RDD paper for more details.
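As a tiny hedged illustration of the three shapes listed above (the value names and the RDD[Int] called nums are assumptions for the example):
import org.apache.spark.rdd.RDD
val sameType: RDD[Int]           = nums.filter(_ > 0)     // RDD[T] => RDD[T]
val newType:  RDD[String]        = nums.map(_.toString)   // RDD[T] => RDD[U]
val combined: RDD[(Int, String)] = nums.zip(newType)      // (RDD[T], RDD[U]) => RDD[W]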
** https://chat.stackoverflow.com/rooms/91928/discussion-between-zero323-and-jason-lenderman
*** See also Scala Spark: Split collection into several RDD?
As other posters mentioned above, there is no single, native RDD transform that splits RDDs, but here are some "multiplex" operations that can efficiently emulate a wide variety of "splitting" on RDDs, without reading multiple times:
http://silex.freevariable.com/latest/api/#com.redhat.et.silex.rdd.multiplex.MuxRDDFunctions
Some methods specific to random splitting:
http://silex.freevariable.com/latest/api/#com.redhat.et.silex.sample.split.SplitSampleRDDFunctions
Methods are available from open source silex project:
https://github.com/willb/silex
A blog post explaining how they work:
http://erikerlandson.github.io/blog/2016/02/08/efficient-multiplexing-for-spark-rdds/
def muxPartitions[U: ClassTag](n: Int, f: (Int, Iterator[T]) => Seq[U],
  persist: StorageLevel): Seq[RDD[U]] = {
  // Compute all n per-partition results in a single pass and persist them.
  val mux = self.mapPartitionsWithIndex { case (id, itr) =>
    Iterator.single(f(id, itr))
  }.persist(persist)
  // The j-th output RDD simply picks the j-th result out of every partition.
  Vector.tabulate(n) { j => mux.mapPartitions { itr => Iterator.single(itr.next()(j)) } }
}

def flatMuxPartitions[U: ClassTag](n: Int, f: (Int, Iterator[T]) => Seq[TraversableOnce[U]],
  persist: StorageLevel): Seq[RDD[U]] = {
  val mux = self.mapPartitionsWithIndex { case (id, itr) =>
    Iterator.single(f(id, itr))
  }.persist(persist)
  // Same idea, but each per-partition result is itself a collection that is
  // flattened back into the j-th output RDD.
  Vector.tabulate(n) { j => mux.mapPartitions { itr => itr.next()(j).toIterator } }
}
As mentioned elsewhere, these methods do involve a trade-off of memory for speed, because they operate by computing entire partition results "eagerly" instead of "lazily." Therefore, it is possible for these methods to run into memory problems on large partitions, where more traditional lazy transforms will not.
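A hedged usage sketch of flatMuxPartitions, assuming the silex implicits that add these methods to RDDs are in scope and that rdd is an RDD[Int]: a two-way even/odd split computed in a single pass over each partition.
import org.apache.spark.rdd.RDD
import org.apache.spark.storage.StorageLevel
import scala.collection.mutable.ArrayBuffer

val parts: Seq[RDD[Int]] = rdd.flatMuxPartitions(2, (_: Int, itr: Iterator[Int]) => {
  val buckets = Vector.fill(2)(ArrayBuffer.empty[Int])
  itr.foreach(x => buckets(x % 2) += x)   // bucket 0: even, bucket 1: odd
  buckets
}, StorageLevel.MEMORY_ONLY)
val Seq(evens, odds) = parts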
One way is to use a custom partitioner to partition the data depending upon your filter condition. This can be achieved by extending Partitioner and implementing something similar to the RangePartitioner.
mapPartitions can then be used to construct multiple RDDs from the partitioned RDD without reading all the data.
import org.apache.spark.TaskContext

val filtered = partitioned.mapPartitions { iter =>
  new Iterator[Int]() {
    override def hasNext: Boolean = {
      // Only the partitions we want to keep pass their elements through.
      if (rangeOfPartitionsToKeep.contains(TaskContext.get().partitionId)) {
        iter.hasNext
      } else {
        false
      }
    }
    override def next(): Int = iter.next()
  }
}
Just be aware that the number of partitions in the filtered RDDs will be the same as in the partitioned RDD, so a coalesce should be used to reduce this and remove the empty partitions.
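A one-line hedged follow-up on that point, assuming rangeOfPartitionsToKeep is a collection with a size:
val compacted = filtered.coalesce(rangeOfPartitionsToKeep.size)  // drop the now-empty partitions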
If you split an RDD using the randomSplit API call, you get back an array of RDDs.
If you want 5 RDDs returned, pass in 5 weight values.
e.g.
val sourceRDD = sc.parallelize(1 to 100, 4)
val seedValue = 5
val splitRDD = sourceRDD.randomSplit(Array(1.0,1.0,1.0,1.0,1.0), seedValue)
splitRDD(1).collect()
res7: Array[Int] = Array(1, 6, 11, 12, 20, 29, 40, 62, 64, 75, 77, 83, 94, 96, 100)
I am looking for a Spark RDD operation like top or takeOrdered, but that returns another RDD, not an Array, that is, does not collect the full result to RAM.
It can be a sequence of operations, but ideally, in no step trying to collect the full result into the memory of a single node.
Let's say you want to have the top 50% of an RDD.
def top50(rdd: RDD[(Double, String)]) = {
val sorted = rdd.sortByKey(ascending = false)
val partitions = sorted.partitions.size
// Throw away the contents of the lower partitions.
sorted.mapPartitionsWithIndex { (pid, it) =>
if (pid <= partitions / 2) it else Iterator.empty
}
}
This is an approximation — you may get more or less than 50%. You could do better but it would cost an extra evaluation of the RDD. For the use cases I have in mind this would not be worth it.
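For illustration, here is one hedged sketch of the "do better" variant alluded to above (the name top50Exact is just for the example): an extra pass counts the records per partition, and the boundary partition is then trimmed so that roughly exactly half of the records survive.
def top50Exact(rdd: RDD[(Double, String)]): RDD[(Double, String)] = {
  val sorted = rdd.sortByKey(ascending = false)
  // The extra evaluation: how many records live in each partition.
  val counts = sorted.mapPartitions(it => Iterator.single(it.size)).collect()
  val target = counts.map(_.toLong).sum / 2
  // How many records each partition should contribute to the result.
  val keep = counts.scanLeft(0L)(_ + _).zip(counts).map {
    case (before, cnt) => math.max(0L, math.min(cnt.toLong, target - before))
  }
  sorted.mapPartitionsWithIndex { (pid, it) => it.take(keep(pid).toInt) }
}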
Take a look at
https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/mllib/rdd/MLPairRDDFunctions.scala
import org.apache.spark.mllib.rdd.MLPairRDDFunctions._
val rdd: RDD[(String, Int)] = ??? // the String is the key, the Int is the value
val topByKey: RDD[(String, Array[Int])] = rdd.topByKey(n)
Or use aggregate with BoundedPriorityQueue.
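Note that Spark's BoundedPriorityQueue is private[spark], so here is a hedged sketch of the same idea using aggregateByKey and a plain mutable PriorityQueue kept bounded by hand (the name topNByKey and the String/Int types are just for the example):
import org.apache.spark.rdd.RDD
import scala.collection.mutable

def topNByKey(rdd: RDD[(String, Int)], n: Int): RDD[(String, Array[Int])] = {
  // Add a value and evict the smallest element once the queue exceeds n,
  // so each queue only ever holds the n largest values seen so far.
  def add(q: mutable.PriorityQueue[Int], v: Int): mutable.PriorityQueue[Int] = {
    q.enqueue(v)
    while (q.size > n) q.dequeue()
    q
  }
  rdd.aggregateByKey(mutable.PriorityQueue.empty[Int](Ordering[Int].reverse))(
    add,
    (a, b) => { b.foreach(v => add(a, v)); a }
  ).mapValues(_.toArray.sorted(Ordering[Int].reverse))
}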
I was wondering whether, when calling reduceByKey in Apache Spark Streaming, the order of the records in the stream is guaranteed. Basically, part of the computation I do has to get the last value.
Here's an example:
JavaPairDStream<String, Double> pairs; // ...
pairs.reduceByKey(new Function2<Double, Double, Double>() {
    @Override
    public Double call(Double first, Double second) throws Exception {
        return second; // intended to keep the "last" value
    }
});
No, it isn't. The intention of MapReduce is to parallelize tasks, and once tasks are parallelized you cannot guarantee order. Earlier results might get shuffled on the way to the reduce processor. Note that the reducer won't wait for all results to arrive; it just grabs two values and starts reducing.
As the Spark programming guide notes, once created, a distributed dataset (distData) can be operated on in parallel; for example, we might call distData.reduce((a, b) => a + b) to add up the elements of the array, with no guarantee about the order in which elements are combined.
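Given that ordering is not guaranteed, here is a hedged Scala sketch of one way the "get the last value" requirement from the question can still be met deterministically: carry an event timestamp with each value and keep the value with the largest timestamp, rather than relying on arrival order (how stamped is built is up to you).
import org.apache.spark.streaming.dstream.DStream

val stamped: DStream[(String, (Long, Double))] = ???  // (key, (eventTime, value))
val latest = stamped.reduceByKey((a, b) => if (a._1 >= b._1) a else b)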