How to use sortBy in Java in Spark - apache-spark

I have two RDDs that I would like to merge, and I have the following question:
I tried merging them with union, but union does not sort at all, and I don't know how to use sortBy here:
List<Integer> data1 = Arrays.asList(1, 3, 5);
List<Integer> data2 = Arrays.asList(2, 4, 6, 8);
JavaRDD<Integer> rdd1 = sc.parallelize(data1);
JavaRDD<Integer> rdd2 = sc.parallelize(data2);
JavaRDD<Integer> rdd = rdd1.union(rdd2);
rdd.sortBy(w->w._1, false); //compile error
Another question: is there a good way to return the merged list sorted?

Try this:
List<Integer> data1 = Arrays.asList(1, 3, 5);
List<Integer> data2 = Arrays.asList(2, 4, 6, 8);
JavaRDD<Integer> rdd1 = sc.parallelize(data1);
JavaRDD<Integer> rdd2 = sc.parallelize(data2);
JavaRDD<Integer> rdd = rdd1.union(rdd2);
int noofpartitions = 1;
JavaRDD<Integer> rddSorted = rdd.sortBy(f -> f, true, noofpartitions);
rddSorted.collect().forEach(f -> System.out.println(f));
sortBy takes three parameters: (1) the key function, (2) a boolean, where true means ascending order and false means descending order, and (3) the number of partitions.
It will print:
1
2
3
4
5
6
8
The approach you have used is the right one. See this for more details: How to merge two presorted rdds in spark?
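For completeness, the descending order that the original snippet attempted is the same call with the ascending flag set to false. A minimal sketch, in Scala rather than the Java of the question, with illustrative variable names:
// Minimal sketch: union two RDDs of plain Ints and sort them in descending order.
// sortBy takes the key function, the ascending flag, and the number of partitions.
val rdd1 = sc.parallelize(Seq(1, 3, 5))
val rdd2 = sc.parallelize(Seq(2, 4, 6, 8))
val sortedDesc = rdd1.union(rdd2).sortBy(x => x, ascending = false, numPartitions = 1)
// sortedDesc.collect() => Array(8, 6, 5, 4, 3, 2, 1)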

Related

treeAggregate use case explanation

I am trying to understand treeAggregate, but there aren't enough examples online.
So does the following code merge the elements of each partition and then call makeSummary, do the same in parallel for every partition (combining the results and summarizing them again), and then, with depth set to (let's say) 5, is this repeated 5 times?
The result I want is to keep summarizing the arrays until I end up with a single one.
val summary = input.transform(rdd => {
  rdd.treeAggregate(initialSet)(addToSet, mergePartitionSets, 5)
  // this returns Array[Double] not rdd but still
})

val initialSet = Array.empty[Double]

def addToSet = (s: Array[Double], v: (Int, Array[Double])) => {
  val p = s ++ v._2
  val ret = makeSummary(p, 10000)
  ret
}

val mergePartitionSets = (p1: Array[Double], p2: Array[Double]) => {
  val p = p1 ++ p2
  val ret = makeSummary(p, 10000)
  ret
}

// makeSummary selects half of the points of p randomly
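For reference (no answer is posted here): treeAggregate works like aggregate, but it merges the per-partition results through a multi-level tree instead of sending them all to the driver at once; depth is the suggested number of levels in that tree (default 2), not a repetition count. A minimal Scala sketch with a plain sum, just to illustrate the signature:
// Minimal sketch of treeAggregate: seqOp folds elements inside a partition,
// combOp merges partial results pairwise, and depth controls how many tree
// levels are used for that merge; it does not rerun the aggregation depth times.
val nums = sc.parallelize(1 to 100, numSlices = 8)
val total = nums.treeAggregate(0)(
  (acc, x) => acc + x, // seqOp within each partition
  (a, b) => a + b,     // combOp across partitions
  depth = 2
)
// total == 5050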

Does Spark's unpersist() have different strategies?

I just did some experiments on Spark's unpersist() and I am confused about what it actually does. I googled a lot, and almost everyone says unpersist() will immediately evict the RDD from the executor's memory. But in this test we can see that is not always true. See the simple test below:
private static int base = 0;

public static Integer[] getInts() {
    Integer[] res = new Integer[5];
    for (int i = 0; i < 5; i++) {
        res[i] = base++;
    }
    System.out.println("number generated:" + res[0] + " to " + res[4] + "---------------------------------");
    return res;
}

public static void main(String[] args) {
    SparkSession sparkSession = SparkSession.builder().appName("spark test").getOrCreate();
    JavaSparkContext spark = new JavaSparkContext(sparkSession.sparkContext());

    JavaRDD<Integer> first = spark.parallelize(Arrays.asList(getInts()));
    System.out.println("first: " + Arrays.toString(first.collect().toArray())); // action
    first.unpersist();
    System.out.println("first is unpersisted");

    System.out.println("compute second ========================");
    JavaRDD<Integer> second = first.map(i -> {
        System.out.println("double " + i);
        return i * 2;
    }).cache(); // transform
    System.out.println("second: " + Arrays.toString(second.collect().toArray())); // action
    second.unpersist();

    System.out.println("compute third ========================");
    JavaRDD<Integer> third = second.map(i -> i + 100); // transform
    System.out.println("third: " + Arrays.toString(third.collect().toArray())); // action
}
the output is:
number generated:0 to 4---------------------------------
first: [0, 1, 2, 3, 4]
first is unpersisted
compute second ========================
double 0
double 1
double 2
double 3
double 4
second: [0, 2, 4, 6, 8]
compute third ========================
double 0
double 1
double 2
double 3
double 4
third: [100, 102, 104, 106, 108]
As we can see, calling unpersist() on 'first' has no effect: it is not recomputed.
But calling unpersist() on 'second' does trigger recomputation.
Can anyone help me figure out why unpersist() on 'first' does not trigger recomputation? If I want to force 'first' to be evicted from memory, how should I do it? Is there anything special about RDDs created with the parallelize or textFile() APIs?
Thanks!
This behavior has nothing to do with caching and unpersisting. In fact first is not even persisted, although it wouldn't make much difference here.
When you parallelize, you pass a local, non-distributed object. parallelize takes its argument by value, and its life cycle is completely outside Spark's scope. As a result, Spark has no reason to recompute it at all once the ParallelCollectionRDD has been initialized. If you want to distribute a different collection, just create a new RDD.
It is also worth noting that unpersist can be called in both blocking and non-blocking mode, depending on the blocking argument.
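For illustration, a minimal Scala sketch of both points (the variable names are hypothetical): unpersist accepts a blocking flag, and a fresh parallelize call is how you get a different distributed collection:
// Sketch: blocking vs. non-blocking unpersist, plus creating a new RDD
// instead of expecting Spark to recompute a parallelized local collection.
val cached = sc.parallelize(1 to 5).cache()
cached.count()                        // materializes the cache
cached.unpersist(blocking = true)     // waits until the cached blocks are removed
// cached.unpersist(blocking = false) // returns immediately; removal is asynchronous
val other = sc.parallelize(Seq(10, 20, 30)) // a new, independent RDD from a different collection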

I want to collect the data frame column values in an array/list to conduct some computations. Is it possible?

I am loading data from Phoenix through this:
val tableDF = sqlContext.phoenixTableAsDataFrame("Hbtable", Array("ID", "distance"), conf = configuration)
and want to carry out the following computation on the column values distance:
val list = Array(10, 20, 30, 40, 10, 20, 0, 10, 20, 30, 40, 50, 60) // list of values from the column distance
val first = list(0)
val last = list(list.length - 1)
var m = 0
for (a <- 0 to list.length - 2) {
  if (list(a + 1) < list(a) && list(a + 1) >= 0) {
    m = m + list(a)
  }
}
val totalDist = (m + last - first)
You can do something like this. It returns an Array[Any]:
val array = df.select("distance").rdd.map(r => r(0)).collect()
If you want the proper data type, you can use the following instead. It returns an Array[Int]:
val array = df.select("distance").rdd.map(r => r(0).asInstanceOf[Int]).collect()
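As an alternative sketch, assuming the distance column is an integer type, you can also read the values straight out of the collected Rows and feed them into the computation from the question:
// Sketch: collect the column as Ints via Row.getInt (assumes an IntegerType column).
val list = tableDF.select("distance").collect().map(_.getInt(0))
// 'list' can then replace the hard-coded Array(10, 20, 30, ...) used above.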

Get all possible sums from a list of numbers

Let's say I have a list of numbers: 2, 2, 5, 7
Now the result of the algorithm should contain all possible sums.
In this case: 2+2, 2+5, 5+7, 2+2+5, 2+2+5+7, 2+5+7, 5+7
I'd like to achieve this by using Dynamic Programming. I tried using a matrix but so far I have not found a way to get all the possibilities.
Based on the question, I think that the answer posted by AT-2016 is correct, and there is no solution that can exploit the concept of dynamic programming to reduce the complexity.
Here is how you can exploit dynamic programming to solve a similar question that asks to return the sum of all possible subsequence sums.
Consider the array {2, 2, 5, 7}: The different possible subsequences are:
{2},{2},{5},{7},{2,5},{2,5},{5,7},{2,5,7},{2,5,7},{2,2,5,7},{2,2},{2,7},{2,7},{2,2,7},{2,2,5}
So, the question is to find the sum of all these elements from all these subsequences. Dynamic Programming comes to the rescue!!
Arrange the subsequences based on the ending element of each subsequence:
subsequences ending with the first element: {2}
subsequences ending with the second element: {2}, {2,2}
subsequences ending with the third element: {5},{2,5},{2,5},{2,2,5}
subsequences ending with the fourth element: {7},{5,7},{2,7},{2,7},{2,2,7},{2,5,7},{2,5,7},{2,2,5,7}.
Here is the code snippet:
The array 's[]' accumulates the sums for positions 1, 2, 3, 4 individually; that is, s[2] is the sum of all subsequences ending with the third element. The array 'dp[]' accumulates the overall sum so far.
// array[] holds the input and n is its length; use a wide type to avoid overflow.
long[] s = new long[n];   // s[i]: sum over all subsequences ending at index i
long[] dp = new long[n];  // dp[i]: sum over all subsequences ending at or before index i
s[0] = array[0];
dp[0] = s[0];
long k = 2;               // k = 2^i, the number of subsequences ending at index i
for (int i = 1; i < n; i++) {
    s[i] = dp[i - 1] + k * array[i]; // every earlier subsequence (or the empty one) can precede array[i]
    dp[i] = dp[i - 1] + s[i];
    k = k * 2;
}
return dp[n - 1];
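A quick Scala sketch of the same recurrence, with the example array as a check (the function name is illustrative):
// Same recurrence in Scala; for Array(2, 2, 5, 7) it returns 128,
// which equals 2^(n-1) * (2 + 2 + 5 + 7), the brute-force total.
def sumOfAllSubsequenceSums(array: Array[Long]): Long = {
  val n = array.length
  val s = Array.ofDim[Long](n)   // s(i): sum of subsequence sums ending at i
  val dp = Array.ofDim[Long](n)  // dp(i): running total over all subsequences so far
  s(0) = array(0)
  dp(0) = s(0)
  var k = 2L                     // 2^i subsequences end at index i
  for (i <- 1 until n) {
    s(i) = dp(i - 1) + k * array(i)
    dp(i) = dp(i - 1) + s(i)
    k *= 2
  }
  dp(n - 1)
}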
Here is a C# approach that uses bitmasks over the array to enumerate the possible sums:
static void Main(string[] args)
{
    // Set up array of integers
    int[] items = { 2, 2, 5, 7 };

    // Figure out how many bitmasks are needed.
    // 4 bits have a maximum value of 15, so we need 15 masks.
    // Calculated as: (2 ^ ItemCount) - 1
    int len = items.Length;
    int calcs = (int)Math.Pow(2, len) - 1;

    // Create array of bitmasks. Each item in the array represents a unique combination from our items array
    string[] masks = Enumerable.Range(1, calcs).Select(i => Convert.ToString(i, 2).PadLeft(len, '0')).ToArray();

    // Spit out the corresponding calculation for each bitmask
    foreach (string m in masks)
    {
        // Get the items from the array that correspond to the on bits in the mask
        int[] incl = items.Where((c, i) => m[i] == '1').ToArray();

        // Write out the mask, calculation and resulting sum
        Console.WriteLine(
            "[{0}] {1} = {2}",
            m,
            String.Join("+", incl.Select(c => c.ToString()).ToArray()),
            incl.Sum()
        );
    }
    Console.ReadKey();
}
Partial output:
[0001] 7 = 7
[0010] 5 = 5
[0011] 5 + 7 = 12
[0100] 2 = 2
This is not an answer to the question, because it does not demonstrate the application of dynamic programming. Rather, it notes that this problem involves multisets, for which facilities are available in SymPy.
>>> from sympy.utilities.iterables import multiset_combinations
>>> numbers = [2,2,5,7]
>>> sums = [ ]
>>> for n in range(2, 1 + len(numbers)):
...     for item in multiset_combinations([2,2,5,7], n):
...         item
...         added = sum(item)
...         if not added in sums:
...             sums.append(added)
...
[2, 2]
[2, 5]
[2, 7]
[5, 7]
[2, 2, 5]
[2, 2, 7]
[2, 5, 7]
[2, 2, 5, 7]
>>> sums.sort()
>>> sums
[4, 7, 9, 11, 12, 14, 16]
I have a solution that can print a list of all possible subset sums.
It's not dynamic programming (DP), but this solution is faster than the DP approach.
#include <iostream>
#include <vector>
#include <bitset>
using namespace std;
typedef long long ll;

void solve() {
    ll i, j, n;
    cin >> n;
    vector<int> arr(n);
    const int maxPossibleSum = 1000000;
    for (i = 0; i < n; i++) {
        cin >> arr[i];
    }
    bitset<maxPossibleSum> b; // b[s] == 1 iff some subset sums to s
    b[0] = 1;                 // the empty subset
    for (i = 0; i < n; i++) {
        b |= b << arr[i];     // adding arr[i] shifts every reachable sum up by arr[i]
    }
    for (i = 0; i < maxPossibleSum; i++) {
        if (b[i])
            cout << i << endl;
    }
}

int main() {
    solve();
    return 0;
}
Input:
First line has the number of elements N in the array.
The next line contains N space-separated array elements.
4
2 2 5 7
----------
Output:
0
2
4
5
7
9
11
12
14
16
The time complexity of this solution is O(N * maxPossibleSum/32)
The space complexity of this solution is O(maxPossibleSum/8)
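The same shifted-OR idea can also be expressed without a fixed-size bitset by folding a set of reachable sums; a minimal Scala sketch (names are illustrative):
// Sketch: start from {0} (the empty subset) and, for each number,
// add it to every sum reached so far. This mirrors the b |= b << arr[i] trick.
val nums = List(2, 2, 5, 7)
val sums = nums.foldLeft(Set(0))((reached, x) => reached ++ reached.map(_ + x))
// sums.toList.sorted => List(0, 2, 4, 5, 7, 9, 11, 12, 14, 16)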

Using PartitionBy to split and efficiently compute RDD groups by Key

I've implemented a solution to group an RDD[K, V] by key and to compute data for each group (K, RDD[V]), using partitionBy and a Partitioner. Nevertheless, I'm not sure it is really efficient, and I'd like to have your point of view.
Here is a sample case: given a list of [K: Int, V: Int], compute the mean of the Vs for each group of K, knowing that it should be distributed and that the V values may be very numerous. That should give:
List[K, V] => (K, mean(V))
The simple Partitioner class:
class MyPartitioner(maxKey: Int) extends Partitioner {
  def numPartitions = maxKey

  def getPartition(key: Any): Int = key match {
    case i: Int if i < maxKey => i
  }
}
The partition code:
val l = List((1, 1), (1, 8), (1, 30), (2, 4), (2, 5), (3, 7))
val rdd = sc.parallelize(l)
val p = rdd.partitionBy(new MyPartitioner(4)).cache()

p.foreachPartition(x => {
  try {
    val r = sc.parallelize(x.toList)
    val id = r.first() // get the K partition id
    val v = r.map(x => x._2)
    println(id._1 + "->" + mean(v))
  } catch {
    case e: UnsupportedOperationException => 0
  }
})
The output is:
1->13, 2->4, 3->7
My questions are:
What really happens when partitionBy is called? (Sorry, I didn't find enough specs on it.)
Is it really efficient to map by partition, knowing that in my production case there would not be many keys (around 50) but very many values per key (around 1 million)?
What is the cost of parallelize(x.toList)? Is it a reasonable thing to do? (I need an RDD as input to mean().)
How would you do it yourself?
Regards
Your code should not work. You cannot pass the SparkContext object to the executors. (It's not Serializable.) Also I don't see why you would need to.
To calculate the mean, you need to calculate the sum and the count and take their ratio. The default partitioner will do fine.
def meanByKey(rdd: RDD[(Int, Int)]): RDD[(Int, Double)] = {
  case class SumCount(sum: Double, count: Double)

  val sumCounts = rdd.aggregateByKey(SumCount(0.0, 0.0))(
    (sc, v) => SumCount(sc.sum + v, sc.count + 1.0),
    (sc1, sc2) => SumCount(sc1.sum + sc2.sum, sc1.count + sc2.count))

  sumCounts.mapValues(sc => sc.sum / sc.count) // keep the key, divide sum by count
}
This is an efficient single-pass calculation that generalizes well.
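A usage sketch with the sample data from the question (the order of the collected pairs may vary):
// Hypothetical session: applying meanByKey to the question's sample list.
val l = List((1, 1), (1, 8), (1, 30), (2, 4), (2, 5), (3, 7))
val means = meanByKey(sc.parallelize(l))
means.collect().foreach(println)
// (1,13.0)
// (2,4.5)
// (3,7.0)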
