Spark RDD: find ratio for key-value pairs - apache-spark

My rdd contains key-value pairs such as this:
(key1, 5),
(key2, 10),
(key3, 20),
I want to perform a map operation that associates each key with its respective ratio of the total across the entire RDD, like this:
(key1, 5/35),
(key2, 10/35),
(key3, 20/35),
I am struggling to find a way to do this with the standard functions; any help would be appreciated.

You can calculate the sum and divide each value by the sum:
from operator import add
rdd = sc.parallelize([('key1', 5), ('key2', 10), ('key3', 20)])
total = rdd.values().reduce(add)
rdd2 = rdd.mapValues(lambda x: x/total)
rdd2.collect()
# [('key1', 0.14285714285714285), ('key2', 0.2857142857142857), ('key3', 0.5714285714285714)]
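Note that this assumes Python 3 (or from __future__ import division); on Python 2 the integer division x/total would truncate to 0. A small variation (my own assumption, not part of the original answer) forces float division:
# Python 2 variant: cast explicitly so the ratio isn't truncated to 0
rdd2 = rdd.mapValues(lambda x: float(x) / total)
rdd2.collect()
# [('key1', 0.14285714285714285), ('key2', 0.2857142857142857), ('key3', 0.5714285714285714)]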
In Scala it would be
val rdd = sc.parallelize(List(("key1", 5), ("key2", 10), ("key3", 20)))
val total = rdd.values.reduce(_+_)
val rdd2 = rdd.mapValues(1.0*_/total)
rdd2.collect
// Array[(String, Double)] = Array((key1,0.14285714285714285), (key2,0.2857142857142857), (key3,0.5714285714285714))

Related

PySpark sort values

I have this data:
[(u'ab', u'cd'),
(u'ef', u'gh'),
(u'cd', u'ab'),
(u'ab', u'gh'),
(u'ab', u'cd')]
I would like to do a map-reduce on this data to find out how often the same pairs appear.
As a result I get:
[((u'ab', u'cd'), 2),
((u'cd', u'ab'), 1),
((u'ab', u'gh'), 1),
((u'ef', u'gh'), 1)]
As you can see it is not quite right, as (u'ab', u'cd') should be 3 instead of 2, because (u'cd', u'ab') is the same pair.
My question is: how can I make the program count (u'cd', u'ab') and (u'ab', u'cd') as the same pair? I was thinking about sorting the values in each row but could not find a solution for this.
You can sort the elements of each pair and then use reduceByKey to count the pairs:
rdd1 = rdd.map(lambda x: (tuple(sorted(x)), 1)) \
          .reduceByKey(lambda a, b: a + b)
rdd1.collect()
# [(('ab', 'gh'), 1), (('ef', 'gh'), 1), (('ab', 'cd'), 3)]
You can key by the sorted element, and count by key:
result = rdd.keyBy(lambda x: tuple(sorted(x))).countByKey()
print(result)
# defaultdict(<class 'int'>, {('ab', 'cd'): 3, ('ef', 'gh'): 1, ('ab', 'gh'): 1})
To convert the result into a list, you can do:
result2 = sorted(result.items())
print(result2)
# [(('ab', 'cd'), 3), (('ab', 'gh'), 1), (('ef', 'gh'), 1)]

Efficient way to reduceByKey ignoring keys not in another RDD?

I have a large collection of data in PySpark. The format is key-value pairs, and I need to do a reduceByKey operation, but ignoring all data whose key isn't in an RDD of 'interesting' keys that I also have.
I found a solution on SO that uses the subtractByKey operation to achieve this. It works on smaller data, but crashes due to low memory on my cluster. I have not been able to fix this by tweaking the settings, so I'm hoping there's a more efficient solution.
Here's my solution that works on smaller datasets:
# The keys I'm interested in
edges = sc.parallelize([("a", "b"), ("b", "c"), ("a", "d")])
# Data containing both interesting and uninteresting stuff
data1 = sc.parallelize([(("a", "b"), [42]), (("a", "c"), [60]), (("a", "d"), [13, 37])])
data2 = sc.parallelize([(("a", "b"), [43]), (("b", "c"), [23, 24]), (("a", "c"), [13, 37])])
all_data = [data1, data2]
mask = edges.map(lambda t: (tuple(t), None))
rdds = []
for datum in all_data:
    combined = datum.reduceByKey(lambda a, b: a + b)
    unwanted = combined.subtractByKey(mask)
    wanted = combined.subtractByKey(unwanted)
    rdds.append(wanted)
edge_alltimes = sc.union(rdds).reduceByKey(lambda a,b: a+b)
edge_alltimes.collect()
As desired, this outputs [(('a', 'd'), [13, 37]), (('a', 'b'), [42, 43]), (('b', 'c'), [23, 24])]
(i.e. data for the 'interesting' key tuples have been combined and the rest has been dropped).
The reason I have the data in several RDDs is to mimic behavior on my cluster where I can't load all the data simultaneously due to its size.
Any help would be great.
Here is an example with join. A small drawback is that you need pair RDDs before the join and you need to strip the extra data after it.
import org.apache.spark.{SparkConf, SparkContext}

object Main {
  val conf = new SparkConf().setAppName("myapp").setMaster("local[*]")
  val sc = new SparkContext(conf)

  def main(args: Array[String]): Unit = {
    val goodKeys = sc.parallelize(Seq(1, 2))
    val allData = sc.parallelize(Seq((1, "a"), (2, "b"), (3, "c")))
    val goodPairs = goodKeys.map(v => (v, 0))
    val goodData = allData.join(goodPairs).mapValues(p => p._1)
    goodData.collect().foreach(println)
  }
}
Output:
(1,a)
(2,b)
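Since the question is in PySpark, here is a rough, untested translation of the same join idea, reusing the edges and all_data RDDs defined in the question (my own sketch, not the answerer's code):
good_pairs = edges.map(lambda t: (tuple(t), None))   # key the interesting edges with a dummy value
filtered = [d.reduceByKey(lambda a, b: a + b)        # combine within each dataset
             .join(good_pairs)                       # inner join drops uninteresting keys
             .mapValues(lambda v: v[0])              # strip the dummy None value
            for d in all_data]
edge_alltimes = sc.union(filtered).reduceByKey(lambda a, b: a + b)
# Expected (order may vary): [(('a', 'b'), [42, 43]), (('a', 'd'), [13, 37]), (('b', 'c'), [23, 24])]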

Add incrementing variable in RDD

Assuming that I have the following RDD:
test1 = (('trial1',[1,2]),('trial2',[3,4]))
test1RDD = sc.parallelize(test1)
How can I create the following rdd:
((1,'trial1',[1,2]),(2,'trial2',[3,4]))
I tried with accumulators but it doesn't work, as accumulators cannot be accessed in tasks:
def increm(keyvalue):
    global acc
    acc += 1
    return (acc.value, keyvalue[0], keyvalue[1])

acc = sc.accumulator(0)
test1RDD.map(lambda x: increm(x)).collect()
Any idea how this can be done?
You can use zipWithIndex
zipWithIndex()
Zips this RDD with its element indices.
The ordering is first based on the partition index and then the
ordering of items within each partition. So the first item in the
first partition gets index 0, and the last item in the last partition
receives the largest index.
This method needs to trigger a spark job when this RDD contains more
than one partitions.
>>> sc.parallelize(["a", "b", "c", "d"], 3).zipWithIndex().collect()
[('a', 0), ('b', 1), ('c', 2), ('d', 3)]
and then use map to move the index to the front in the new RDD.
This is untested, as I don't have an environment at hand:
test1 = (('trial1',[1,2]),('trial2',[3,4]))
test1RDD = sc.parallelize(test1)
test1RDD.zipWithIndex().map(lambda x : (x[1],x[0]))
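The snippet above produces a 0-based index nested as ((key, value), index); if you need exactly the shape in the question (a 1-based index in a flat tuple), an untested follow-up sketch could be:
test1RDD.zipWithIndex() \
    .map(lambda kv: (kv[1] + 1, kv[0][0], kv[0][1])) \
    .collect()
# [(1, 'trial1', [1, 2]), (2, 'trial2', [3, 4])]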

Spark Multiple Joins

Using the Spark context, I would like to perform multiple joins between
RDDs, where the number of RDDs to be joined is dynamic.
I would like the result to be unfolded, for example:
val rdd1 = sc.parallelize(List((1,1.0),(11,11.0), (111,111.0)))
val rdd2 = sc.parallelize(List((1,2.0),(11,12.0), (111,112.0)))
val rdd3 = sc.parallelize(List((1,3.0),(11,13.0), (111,113.0)))
val rdd4 = sc.parallelize(List((1,4.0),(11,14.0), (111,114.0)))
val rdd11 = rdd1.join(rdd2).join(rdd3).join(rdd4)
.foreach(println)
generates the following output:
(11,(((11.0,12.0),13.0),14.0))
(111,(((111.0,112.0),113.0),114.0))
(1,(((1.0,2.0),3.0),4.0))
I would like to:
Unfold the values, e.g. the first line should read:
(11, 11.0, 12.0, 13.0, 14.0).
Do it dynamically, so that it can work on any number of RDDs to be joined.
Any ideas would be appreciated,
Eli.
Instead of using join, I would use union followed by groupByKey to achieve what you desire.
Here is what I would do -
val emptyRdd = sc.emptyRDD[(Int, Double)]
val listRdds = List(rdd1, rdd2, rdd3, rdd4) // satisfy your dynamic number of rdds
val unioned = listRdds.fold(emptyRdd)(_.union(_))
val grouped = unioned.groupByKey
grouped.collect().foreach(println(_))
This yields the result:
(1,CompactBuffer(1.0, 2.0, 3.0, 4.0))
(11,CompactBuffer(11.0, 12.0, 13.0, 14.0))
(111,CompactBuffer(111.0, 112.0, 113.0, 114.0))
Updated:
If you would still like to use join, this is how to do it with a somewhat more involved foldLeft -
val joined = listRdds match {
  case head :: tail =>
    tail.foldLeft(head.mapValues(Array(_)))(_.join(_).mapValues {
      case (arr: Array[Double], d: Double) => arr :+ d
    })
  case Nil => sc.emptyRDD[(Int, Array[Double])]
}
And joined.collect will yield
res14: Array[(Int, Array[Double])] = Array((1,Array(1.0, 2.0, 3.0, 4.0)), (11,Array(11.0, 12.0, 13.0, 14.0)), (111,Array(111.0, 112.0, 113.0, 114.0)))
Others with this problem may find groupWith helpful. From the docs:
>>> w = sc.parallelize([("a", 5), ("b", 6)])
>>> x = sc.parallelize([("a", 1), ("b", 4)])
>>> y = sc.parallelize([("a", 2)])
>>> z = sc.parallelize([("b", 42)])
>>> [(x, tuple(map(list, y))) for x, y in sorted(list(w.groupWith(x, y, z).collect()))]
[('a', ([5], [1], [2], [])), ('b', ([6], [4], [], [42]))]
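To get the "unfolded" rows asked for in the question, e.g. (11, 11.0, 12.0, 13.0, 14.0), here is a PySpark sketch of my own (untested; it assumes rdd1..rdd4 are recreated as PySpark pair RDDs, and the value order after the shuffle is not guaranteed, so sort the values if order matters):
rdds = [rdd1, rdd2, rdd3, rdd4]   # your dynamic list of pair RDDs
unfolded = sc.union(rdds) \
             .groupByKey() \
             .map(lambda kv: (kv[0],) + tuple(kv[1]))   # prepend the key, then unfold the values
unfolded.collect()
# e.g. [(1, 1.0, 2.0, 3.0, 4.0), (11, 11.0, 12.0, 13.0, 14.0), (111, 111.0, 112.0, 113.0, 114.0)]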

What is the difference between map and flatMap and a good use case for each?

Can someone explain to me the difference between map and flatMap and what is a good use case for each?
What does "flatten the results" mean?
What is it good for?
Here is an example of the difference, as a spark-shell session:
First, some data - two lines of text:
val rdd = sc.parallelize(Seq("Roses are red", "Violets are blue")) // lines
rdd.collect
res0: Array[String] = Array("Roses are red", "Violets are blue")
Now, map transforms an RDD of length N into another RDD of length N.
For example, it maps from two lines into two line-lengths:
rdd.map(_.length).collect
res1: Array[Int] = Array(13, 16)
But flatMap (loosely speaking) transforms an RDD of length N into a collection of N collections, then flattens these into a single RDD of results.
rdd.flatMap(_.split(" ")).collect
res2: Array[String] = Array("Roses", "are", "red", "Violets", "are", "blue")
We have multiple words per line, and multiple lines, but we end up with a single output array of words.
Just to illustrate that, flatMapping from a collection of lines to a collection of words looks like:
["aa bb cc", "", "dd"] => [["aa","bb","cc"],[],["dd"]] => ["aa","bb","cc","dd"]
The input and output RDDs will therefore typically be of different sizes for flatMap.
If we had tried to use map with our split function, we'd have ended up with nested structures (an RDD of arrays of words, with type RDD[Array[String]]) because we have to have exactly one result per input:
rdd.map(_.split(" ")).collect
res3: Array[Array[String]] = Array(
Array(Roses, are, red),
Array(Violets, are, blue)
)
Finally, one useful special case is mapping with a function which might not return an answer, and so returns an Option. We can use flatMap to filter out the elements that return None and extract the values from those that return a Some:
val rdd = sc.parallelize(Seq(1,2,3,4))
def myfn(x: Int): Option[Int] = if (x <= 2) Some(x * 10) else None
rdd.flatMap(myfn).collect
res3: Array[Int] = Array(10,20)
(noting here that an Option behaves rather like a list that has either one element, or zero elements)
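The same trick works in PySpark by returning an empty or one-element list instead of an Option (my own sketch, mirroring the Scala example above):
rdd = sc.parallelize([1, 2, 3, 4])
rdd.flatMap(lambda x: [x * 10] if x <= 2 else []).collect()   # empty lists are dropped in the flattening
# [10, 20]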
Generally we use the word-count example in Hadoop. I will take the same use case, apply both map and flatMap, and we will see the difference in how each processes the data.
Below is the sample data file.
hadoop is fast
hive is sql on hdfs
spark is superfast
spark is awesome
The above file will be parsed using map and flatMap.
Using map
>>> wc = data.map(lambda line:line.split(" "));
>>> wc.collect()
[u'hadoop is fast', u'hive is sql on hdfs', u'spark is superfast', u'spark is awesome']
Input has 4 lines and output size is 4 as well, i.e., N elements ==> N elements.
Using flatMap
>>> fm = data.flatMap(lambda line:line.split(" "));
>>> fm.collect()
[u'hadoop', u'is', u'fast', u'hive', u'is', u'sql', u'on', u'hdfs', u'spark', u'is', u'superfast', u'spark', u'is', u'awesome']
The output is different from map.
Let's assign 1 as value for each key to get the word count.
fm: RDD created by using flatMap
wc: RDD created using map
>>> fm.map(lambda word : (word,1)).collect()
[(u'hadoop', 1), (u'is', 1), (u'fast', 1), (u'hive', 1), (u'is', 1), (u'sql', 1), (u'on', 1), (u'hdfs', 1), (u'spark', 1), (u'is', 1), (u'superfast', 1), (u'spark', 1), (u'is', 1), (u'awesome', 1)]
Whereas flatMap on RDD wc will give the below undesired output:
>>> wc.flatMap(lambda word : (word,1)).collect()
[[u'hadoop', u'is', u'fast'], 1, [u'hive', u'is', u'sql', u'on', u'hdfs'], 1, [u'spark', u'is', u'superfast'], 1, [u'spark', u'is', u'awesome'], 1]
You can't get the word count if map is used instead of flatMap.
As per the definition, the difference between map and flatMap is:
map: It returns a new RDD by applying a given function to each element
of the RDD. The function in map returns only one item.
flatMap: Similar to map, it returns a new RDD by applying a function
to each element of the RDD, but the output is flattened.
It boils down to your initial question: what do you mean by flattening?
When you use flatMap, a "multi-dimensional" collection becomes a "one-dimensional" collection.
val array1d = Array ("1,2,3", "4,5,6", "7,8,9")
//array1d is an array of strings
val array2d = array1d.map(x => x.split(","))
//array2d will be : Array( Array(1,2,3), Array(4,5,6), Array(7,8,9) )
val flatArray = array1d.flatMap(x => x.split(","))
//flatArray will be : Array (1,2,3,4,5,6,7,8,9)
You want to use flatMap when your map function would create multi-layered structures, but all you want is a simple, flat, one-dimensional structure with all the internal groupings removed.
All the examples above are good. Here is a summary (source courtesy: DataFlair Spark training).
Map: map is a transformation operation in Apache Spark. It applies to each element of the RDD and returns the result as a new RDD. In the map operation, the developer can define custom business logic; the same logic is applied to all elements of the RDD.
The Spark RDD map function takes one element as input, processes it according to custom code (specified by the developer), and returns exactly one element at a time. map transforms an RDD of length N into another RDD of length N; the input and output RDDs will typically have the same number of records.
Example of map using Scala:
val x = spark.sparkContext.parallelize(List("spark", "map", "example", "sample", "example"), 3)
val y = x.map(x => (x, 1))
y.collect
// res0: Array[(String, Int)] =
// Array((spark,1), (map,1), (example,1), (sample,1), (example,1))
// rdd y can be rewritten with shorter syntax in Scala as
val y = x.map((_, 1))
y.collect
// res1: Array[(String, Int)] =
// Array((spark,1), (map,1), (example,1), (sample,1), (example,1))
// Another example: making a tuple with a string and its length
val y = x.map(x => (x, x.length))
y.collect
// res3: Array[(String, Int)] =
// Array((spark,5), (map,3), (example,7), (sample,6), (example,7))
FlatMap:
flatMap is also a transformation operation. It applies to each element of the RDD and returns the result as a new RDD. It is similar to map, but flatMap allows returning 0, 1, or more elements from the function. In the flatMap operation, the developer can define custom business logic; the same logic is applied to all elements of the RDD.
What does "flatten the results" mean?
A flatMap function takes one element as input, processes it according to custom code (specified by the developer), and returns 0 or more elements at a time. flatMap() transforms an RDD of length N into another RDD of length M.
Example of flatMap using Scala:
val x = spark.sparkContext.parallelize(List("spark flatmap example", "sample example"), 2)
// the map operation will return an Array of Arrays in the following case: check the type of res0
val y = x.map(x => x.split(" ")) // split(" ") returns an array of words
y.collect
// res0: Array[Array[String]] =
// Array(Array(spark, flatmap, example), Array(sample, example))
// the flatMap operation will return an Array of words in the following case: check the type of res1
val y = x.flatMap(x => x.split(" "))
y.collect
//res1: Array[String] =
// Array(spark, flatmap, example, sample, example)
// RDD y can be rewritten with shorter syntax in Scala as
val y = x.flatMap(_.split(" "))
y.collect
//res2: Array[String] =
// Array(spark, flatmap, example, sample, example)
If you are asking about the difference between RDD.map and RDD.flatMap in Spark: map transforms an RDD of size N into another one of size N, e.g.
myRDD.map(x => x * 2)
for example, if myRDD is composed of Doubles.
flatMap, on the other hand, can transform the RDD into another one of a different size, e.g.:
myRDD.flatMap(x => Seq(2 * x, 3 * x))
which will return an RDD of size 2*N, or
myRDD.flatMap(x => if (x < 10) Seq(2 * x, 3 * x) else Seq(x))
Use test.md as an example:
➜ spark-1.6.1 cat test.md
This is the first line;
This is the second line;
This is the last line.
scala> val textFile = sc.textFile("test.md")
scala> textFile.map(line => line.split(" ")).count()
res2: Long = 3
scala> textFile.flatMap(line => line.split(" ")).count()
res3: Long = 15
scala> textFile.map(line => line.split(" ")).collect()
res0: Array[Array[String]] = Array(Array(This, is, the, first, line;), Array(This, is, the, second, line;), Array(This, is, the, last, line.))
scala> textFile.flatMap(line => line.split(" ")).collect()
res1: Array[String] = Array(This, is, the, first, line;, This, is, the, second, line;, This, is, the, last, line.)
If you use the map method, count() gives you the number of lines in test.md; with the flatMap method, it gives you the number of words.
Both map and flatMap return a new RDD. map is often used for one-to-one transformations, while flatMap is often used for things like splitting lines into words.
map and flatMap are similar, in the sense that they take an element from the input RDD and apply a function to it. The way they differ is that the function in map returns only one element, while the function in flatMap can return a list of elements (0 or more) as an iterator.
Also, the output of flatMap is flattened. Although the function in flatMap returns a list of elements, flatMap returns an RDD which contains all the elements from those lists in a flat way (not as lists).
map returns an RDD with the same number of elements, while flatMap may not.
An example use case for flatMap: filtering out missing or incorrect data.
An example use case for map: the wide variety of cases where the number of input and output elements is the same.
number.csv
1
2
3
-
4
-
5
map.py adds all the numbers in number.csv.
from operator import add

def f(row):
    try:
        return float(row)
    except Exception:
        return 0

rdd = sc.textFile('number.csv').map(f)
print(rdd.count())      # 7
print(rdd.reduce(add))  # 15.0
flatMap.py uses flatMap to filter out the missing data before the addition. Fewer numbers are added compared to the previous version.
from operator import add

def f(row):
    try:
        return [float(row)]
    except Exception:
        return []

rdd = sc.textFile('number.csv').flatMap(f)
print(rdd.count())      # 5
print(rdd.reduce(add))  # 15.0
The difference can be seen from the sample PySpark code below:
rdd = sc.parallelize([2, 3, 4])
rdd.flatMap(lambda x: range(1, x)).collect()
Output:
[1, 1, 2, 1, 2, 3]
rdd.map(lambda x: range(1, x)).collect()
Output:
[[1], [1, 2], [1, 2, 3]]
map: It returns a new RDD by applying a function to each element of the RDD. The function in map can return only one item.
flatMap: Similar to map, it returns a new RDD by applying a function to each element of the RDD, but the output is flattened.
Also, the function in flatMap can return a list of elements (0 or more).
For Example:
sc.parallelize([3,4,5]).map(lambda x: range(1,x)).collect()
Output: [[1, 2], [1, 2, 3], [1, 2, 3, 4]]
sc.parallelize([3,4,5]).flatMap(lambda x: range(1,x)).collect()
Output (notice the output is flattened into a single list): [1, 2, 1, 2, 3, 1, 2, 3, 4]
Source: https://www.linkedin.com/pulse/difference-between-map-flatmap-transformations-spark-pyspark-pandey/
RDD.map returns one output element per input element, so here it yields an array of arrays.
RDD.flatMap flattens the results and returns all elements in a single array.
Let's assume we have the following text in a file text.txt:
Spark is an expressive framework
This text is to understand map and flatMap functions of Spark RDD
Using map
val text=sc.textFile("text.txt").map(_.split(" ")).collect
output:
text: Array[Array[String]] = Array(Array(Spark, is, an, expressive, framework), Array(This, text, is, to, understand, map, and, flatMap, functions, of, Spark, RDD))
Using flatMap
val text=sc.textFile("text.txt").flatMap(_.split(" ")).collect
output:
text: Array[String] = Array(Spark, is, an, expressive, framework, This, text, is, to, understand, map, and, flatMap, functions, of, Spark, RDD)
flatMap and map both transform the collection.
Difference:
map(func)
Return a new distributed dataset formed by passing each element of the source through a function func.
flatMap(func)
Similar to map, but each input item can be mapped to 0 or more output items (so func should return a Seq rather than a single item).
The transformation function:
map: One element in -> one element out.
flatMap: One element in -> 0 or more elements out (a collection).
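A tiny PySpark illustration of that one-line summary (my own, not from the answer above):
lines = sc.parallelize(["a b", "c"])
lines.map(lambda s: s.split(" ")).collect()      # [['a', 'b'], ['c']]  -> one output per input
lines.flatMap(lambda s: s.split(" ")).collect()  # ['a', 'b', 'c']      -> outputs flattened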
For all those who wanted something PySpark related:
Example transformation: flatMap
>>> a="hello what are you doing"
>>> a.split()
['hello', 'what', 'are', 'you', 'doing']
>>> b=["hello what are you doing","this is rak"]
>>> b.split()
Traceback (most recent call last):
File "", line 1, in
AttributeError: 'list' object has no attribute 'split'
>>> rline=sc.parallelize(b)
>>> type(rline)
<class 'pyspark.rdd.RDD'>
>>> def fwords(x):
... return x.split()
>>> rword=rline.map(fwords)
>>> rword.collect()
[['hello', 'what', 'are', 'you', 'doing'], ['this', 'is', 'rak']]
>>> rwordflat=rline.flatMap(fwords)
>>> rwordflat.collect()
['hello', 'what', 'are', 'you', 'doing', 'this', 'is', 'rak']
Hope it helps :)
map:
A higher-order method that takes a function as input and applies it to each element in the source RDD.
flatMap:
A higher-order method and transformation operation that takes an input function, applies it to each element, and flattens the results.
http://commandstech.com/difference-between-map-and-flatmap-in-spark-what-is-map-and-flatmap-with-examples/
map
Return a new RDD by applying a function to each element of this RDD.
>>> rdd = sc.parallelize([2, 3, 4])
>>> sorted(rdd.map(lambda x: [(x, x), (x, x)]).collect())
[[(2, 2), (2, 2)], [(3, 3), (3, 3)], [(4, 4), (4, 4)]]
flatMap
Return a new RDD by first applying a function to all elements of this RDD, and then flattening the results.
Here a transformation of one element into many elements is possible.
>>> rdd = sc.parallelize([2, 3, 4])
>>> sorted(rdd.flatMap(lambda x: [(x, x), (x, x)]).collect())
[(2, 2), (2, 2), (3, 3), (3, 3), (4, 4), (4, 4)]
map(func): Returns a new distributed dataset formed by passing each element of the source through the declared function func, so map() produces a single output item per input,
whereas
flatMap(func): Similar to map, but each input item can be mapped to 0 or more output items, so func should return a sequence rather than a single item.
Difference in output of map and flatMap:
1. flatMap
val a = sc.parallelize(1 to 10, 5)
a.flatMap(1 to _).collect()
Output:
1, 1, 2, 1, 2, 3, 1, 2, 3, 4, 1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 5, 6, 7, 1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4, 5, 6, 7, 8, 9, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
2. map:
val a = sc.parallelize(List("dog", "salmon", "salmon", "rat", "elephant"), 3)
val b = a.map(_.length).collect()
Output:
3 6 6 3 8
