Spark custom aggregation : collect_list+UDF vs UDAF - apache-spark

I often need to perform custom aggregations on DataFrames in Spark 2.1, and I have used these two approaches:
Using groupby/collect_list to get all the values in a single row, then apply an UDF to aggregate the values
Writing a custom UDAF (User defined aggregate function)
I generally prefer the first option as it's easier to implement and more readable than a UDAF. I would assume the first option to be slower, because more data is sent around the network (there is no partial aggregation), but my experience shows that UDAFs are generally slower. Why is that?
Concrete example: Calculating histograms:
Data is in a hive table (1E6 random double values)
val df = spark.table("testtable")
def roundToMultiple(d:Double,multiple:Double) = Math.round(d/multiple)*multiple
UDF approach:
val udf_histo = udf((xs:Seq[Double]) => xs.groupBy(x => roundToMultiple(x,0.25)).mapValues(_.size))
df.groupBy().agg(collect_list($"x").as("xs")).select(udf_histo($"xs")).show(false)
+--------------------------------------------------------------------------------+
|UDF(xs) |
+--------------------------------------------------------------------------------+
|Map(0.0 -> 125122, 1.0 -> 124772, 0.75 -> 250819, 0.5 -> 248696, 0.25 -> 250591)|
+--------------------------------------------------------------------------------+
UDAF approach:
import org.apache.spark.sql.Row
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
import org.apache.spark.sql.types._
import scala.collection.mutable
class HistoUDAF(binWidth: Double) extends UserDefinedAggregateFunction {

  override def inputSchema: StructType =
    StructType(
      StructField("value", DoubleType) :: Nil
    )

  override def bufferSchema: StructType =
    new StructType()
      .add("histo", MapType(DoubleType, IntegerType))

  override def deterministic: Boolean = true

  override def dataType: DataType = MapType(DoubleType, IntegerType)

  override def initialize(buffer: MutableAggregationBuffer): Unit = {
    buffer(0) = Map[Double, Int]()
  }

  private def mergeMaps(a: Map[Double, Int], b: Map[Double, Int]) = {
    a ++ b.map { case (k, v) => k -> (v + a.getOrElse(k, 0)) }
  }

  override def update(buffer: MutableAggregationBuffer, input: Row): Unit = {
    val oldBuffer = buffer.getAs[Map[Double, Int]](0)
    val newInput = Map(roundToMultiple(input.getDouble(0), binWidth) -> 1)
    buffer(0) = mergeMaps(oldBuffer, newInput)
  }

  override def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit = {
    val a = buffer1.getAs[Map[Double, Int]](0)
    val b = buffer2.getAs[Map[Double, Int]](0)
    buffer1(0) = mergeMaps(a, b)
  }

  override def evaluate(buffer: Row): Any = {
    buffer.getAs[Map[Double, Int]](0)
  }
}
val histo = new HistoUDAF(0.25)
df.groupBy().agg(histo($"x")).show(false)
+--------------------------------------------------------------------------------+
|histoudaf(x) |
+--------------------------------------------------------------------------------+
|Map(0.0 -> 125122, 1.0 -> 124772, 0.75 -> 250819, 0.5 -> 248696, 0.25 -> 250591)|
+--------------------------------------------------------------------------------+
My tests show that the collect_list/UDF approach is about 2 times faster than the UDAF approach. Is this a general rule, or are there cases where a UDAF is really much faster and the rather awkward implementation is justified?

A UDAF is slower because it deserializes/serializes the aggregation buffer from/to Spark's internal format on each update, i.e. on each row, which is quite expensive. Instead you should use an Aggregator (in fact, UserDefinedAggregateFunction has been deprecated since Spark 3.0).
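For reference, a minimal sketch of the Aggregator alternative for the histogram above, assuming the Spark 2.x typed aggregation API (names like HistoAgg are mine, not from the original post); roundToMultiple and the bin width are reused from the question:
import org.apache.spark.sql.{Encoder, TypedColumn}
import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder
import org.apache.spark.sql.expressions.Aggregator

// Hypothetical sketch: a typed Aggregator building the same histogram, with partial aggregation.
class HistoAgg(binWidth: Double) extends Aggregator[Double, Map[Double, Int], Map[Double, Int]] {
  def zero: Map[Double, Int] = Map.empty
  def reduce(histo: Map[Double, Int], x: Double): Map[Double, Int] = {
    val bin = roundToMultiple(x, binWidth)
    histo + (bin -> (histo.getOrElse(bin, 0) + 1))
  }
  def merge(a: Map[Double, Int], b: Map[Double, Int]): Map[Double, Int] =
    a ++ b.map { case (k, v) => k -> (v + a.getOrElse(k, 0)) }
  def finish(histo: Map[Double, Int]): Map[Double, Int] = histo
  def bufferEncoder: Encoder[Map[Double, Int]] = ExpressionEncoder()
  def outputEncoder: Encoder[Map[Double, Int]] = ExpressionEncoder()
}

// Typed usage (works in Spark 2.x); in Spark 3.0+ you can also register it for DataFrames
// via org.apache.spark.sql.functions.udaf(new HistoAgg(0.25)).
val histoCol: TypedColumn[Double, Map[Double, Int]] = new HistoAgg(0.25).toColumn
df.select($"x".as[Double]).select(histoCol).show(false)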

Related

How to generate large word count file in Spark?

I want to generate a 10-million-line word count file for a performance test (each line has the same sentence), but I have no idea how to code it.
Can you give me some example code that saves the file directly to HDFS?
You can try something like this.
Generate one column with values from 1 to 100k and another with values from 1 to 100, then explode both of them with explode(column).
You can't generate a single column with 10 million values because the Kryo buffer would throw an error.
I don't know if this is the best-performing way to do it, but it is the fastest way I can think of right now.
import org.apache.spark.sql.functions.{explode, lit, udf}
import spark.implicits._

val generateList = udf((s: Int) => {
  val buf = scala.collection.mutable.ArrayBuffer.empty[Int]
  for (i <- 1 to s) {
    buf += i
  }
  buf
})

val someDF = Seq(
  ("Lorem ipsum dolor sit amet, consectetur adipiscing elit.")
).toDF("sentence")

val someDfWithMilColumn = someDF.withColumn("genColumn1", generateList(lit(100000)))
  .withColumn("genColumn2", generateList(lit(100)))

// Explode the first generated column to get 100k rows ...
val someDfWithMilColumn100k = someDfWithMilColumn
  .withColumn("expl_val", explode($"genColumn1")).drop("expl_val", "genColumn1")
// ... then explode the second one to multiply by 100, giving 10 million rows.
val someDfWithMilColumn10mil = someDfWithMilColumn100k
  .withColumn("expl_val2", explode($"genColumn2")).drop("genColumn2", "expl_val2")

someDfWithMilColumn10mil.write.parquet(path)
You can do it by cross-joining the two DataFrames as below; the code explanation is inline.
import org.apache.spark.sql.SaveMode

object GenerateTenMils {

  def main(args: Array[String]): Unit = {
    val spark = Constant.getSparkSess
    spark.conf.set("spark.sql.crossJoin.enabled", "true") // Enable cross join
    import spark.implicits._

    //Create a DF with your sentence
    val df = List("each line has the same sentence").toDF

    //Create another Dataset with 10000000 records
    spark.range(10000000)
      .join(df) // Cross join the dataframes
      .coalesce(1) // Output to a single file
      .drop("id") // Drop the extra column
      .write
      .mode(SaveMode.Overwrite)
      .text("src/main/resources/tenMils") // Write as a text file
  }
}
You could follow this approach: tail recursion to generate the list of objects and the DataFrames, and union to generate the big DataFrame.
import org.apache.spark.sql.{DataFrame, SparkSession}
import scala.annotation.tailrec

val spark = SparkSession
  .builder()
  .appName("TenMillionsRows")
  .master("local[*]")
  .config("spark.sql.shuffle.partitions", "4") //Change to a more reasonable default number of partitions for our data
  .config("spark.app.id", "TenMillionsRows") // To silence Metrics warning
  .getOrCreate()

val sc = spark.sparkContext
import spark.implicits._

val sentence = "hope for the best but prepare for the worst"

/**
  * Returns a list of num copies of sentence
  * @param sentence
  * @param num
  * @return
  */
def getList(sentence: String, num: Int): List[String] = {
  @tailrec
  def loop(st: String, n: Int, acc: List[String]): List[String] = {
    n match {
      case 0 => acc
      case _ => loop(st, n - 1, st :: acc)
    }
  }
  loop(sentence, num, List())
}

/**
  * Returns a DataFrame that is the union of num DataFrames
  * @param lst
  * @param num
  * @return
  */
def getDataFrame(lst: List[String], num: Int): DataFrame = {
  @tailrec
  def loop(ls: List[String], n: Int, acc: DataFrame): DataFrame = {
    n match {
      case 0 => acc
      case _ => loop(ls, n - 1, acc.union(sc.parallelize(ls).toDF("sentence")))
    }
  }
  loop(lst, num, sc.parallelize(List(sentence)).toDF("sentence"))
}

val lSentence = getList(sentence, 100000)
val dfs = getDataFrame(lSentence, 100)
println(dfs.count())
// output: 10000001
dfs.write.orc("path_to_hdfs") // write the dataframe to an ORC file
// you can also save the file as parquet, txt, json .......
// with dataframe.write
Hope this helps.

Compounding in Spark

I have a dataframe of this format
Date | Return
01/01/2015 0.0
02/02/2015 -0.02
03/02/2015 0.05
04/02/2015 0.07
I would like to do compounding and add a column which will hold the compounded return. The compounded return is calculated as:
1 for the 1st row;
Compounded(i) = (1 + Return(i)) * Compounded(i-1) for the following rows.
So my df will finally be
Date | Return | Compounded
01/01/2015 0.0 1.0
02/02/2015 -0.02 1.0*(1-0.02)=0.98
03/02/2015 0.05 0.98*(1+0.05)=1.029
04/02/2015 0.07 1.029*(1+0.07)=1.10103
Answers in Java will be highly appreciated.
You can also create a custom aggregate function and use it in a window function.
Something like this (written freeform, so there may well be some mistakes):
package com.myuadfs

import org.apache.spark.sql.Row
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
import org.apache.spark.sql.types._

class MyUDAF() extends UserDefinedAggregateFunction {

  def inputSchema: StructType = StructType(Array(StructField("Return", DoubleType)))

  def bufferSchema = StructType(Array(StructField("compounded", DoubleType)))

  def dataType: DataType = DoubleType

  def deterministic = true

  def initialize(buffer: MutableAggregationBuffer) = {
    buffer(0) = 1.0 // set compounded to 1
  }

  def update(buffer: MutableAggregationBuffer, input: Row) = {
    buffer(0) = buffer.getDouble(0) * (input.getDouble(0) + 1)
  }

  // This generally merges two aggregation buffers. It would not have worked
  // properly had you been using this as a regular aggregate, but since you
  // are planning to use it only inside a window, merge should not be called
  // at all.
  def merge(buffer1: MutableAggregationBuffer, buffer2: Row) = {
    buffer1(0) = buffer1.getDouble(0) + buffer2.getDouble(0)
  }

  def evaluate(buffer: Row) = {
    buffer.getDouble(0)
  }
}
Now you can use this inside a window function. Something like this:
import org.apache.spark.sql.expressions.Window

val myUdaf = new MyUDAF()
val windowSpec = Window.orderBy("Date")
val newDF = df.withColumn("compounded", myUdaf(df("Return")).over(windowSpec))
Note that this has the limitation that the entire calculation must fit in a single partition, so with very large data you would have a problem. That said, this kind of operation is nominally performed after some partitioning by key (e.g. add a partitionBy to the window), in which case only a single key's rows need to fit in a partition.
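A hedged sketch of that variant, assuming a grouping column (here called "portfolio", which is not part of the question's data) so that each key's rows stay within a partition:
import org.apache.spark.sql.expressions.Window

// "portfolio" is an assumed grouping column for illustration only.
val partitionedWindow = Window.partitionBy("portfolio").orderBy("Date")
val compoundingUdaf = new MyUDAF()
val compoundedDF = df.withColumn("compounded", compoundingUdaf(df("Return")).over(partitionedWindow))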
First, we define a function f(line) (suggest a better name, please!!) to process the lines.
def f(line):
    global firstLine
    global last_compounded
    if line[0] == 'Date':
        firstLine = True
        return (line[0], line[1], 'Compounded')
    else:
        firstLine = False
    if firstLine:
        last_compounded = 1
        firstLine = False
    else:
        last_compounded = (1 + float(line[1])) * last_compounded
    return (line[0], line[1], last_compounded)
Using two global variables (could this be improved?), we keep track of the Compounded(i-1) value and of whether we are processing the first line.
With your data in some_file, a solution could be:
rdd = sc.textFile('some_file').map(lambda l: l.split())
r1 = rdd.map(lambda l: f(l))
rdd.collect()
[[u'Date', u'Return'], [u'01/01/2015', u'0.0'], [u'02/02/2015', u'-0.02'], [u'03/02/2015', u'0.05'], [u'04/02/2015', u'0.07']]
r1.collect()
[(u'Date', u'Return', 'Compounded'), (u'01/01/2015', u'0.0', 1.0), (u'02/02/2015', u'-0.02', 0.98), (u'03/02/2015', u'0.05', 1.05), (u'04/02/2015', u'0.07', 1.1235000000000002)]

SparkSQL Aggregator: MissingRequirementError

I am trying to use Apache Spark 2.0's Datasets:
import org.apache.spark.sql.expressions.Aggregator
import org.apache.spark.sql.Encoder
import spark.implicits._
case class C1(f1: String, f2: String, f3: String, f4: String, f5: Double)
val teams = Seq(
  C1("hash1", "NLC", "Cubs", "2016-01-23", 3253.21),
  C1("hash1", "NLC", "Cubs", "2014-01-23", 353.88),
  C1("hash3", "NLW", "Dodgers", "2013-08-15", 4322.12),
  C1("hash4", "NLE", "Red Sox", "2010-03-14", 10283.72)
).toDS()

val c1Agg = new Aggregator[C1, Seq[C1], Seq[C1]] with Serializable {
  def zero: Seq[C1] = Seq.empty[C1] //Nil
  def reduce(b: Seq[C1], a: C1): Seq[C1] = b :+ a
  def merge(b1: Seq[C1], b2: Seq[C1]): Seq[C1] = b1 ++ b2
  def finish(r: Seq[C1]): Seq[C1] = r
  override def bufferEncoder: Encoder[Seq[C1]] = newProductSeqEncoder[C1]
  override def outputEncoder: Encoder[Seq[C1]] = newProductSeqEncoder[C1]
}.toColumn
val g_c1 = teams.groupByKey(_.f1).agg(c1Agg).collect
But then when I run it I got the following error message:
scala.reflect.internal.MissingRequirementError: class lineb4c2bb72bf6e417e9975d1a65602aec912.$read in JavaMirror with sun.misc.Launcher$AppClassLoader@14dad5dc of type class sun.misc.Launcher$AppClassLoader with class path [OMITTED] not found
I am assuming the configuration is correct because I am running under Databricks community cloud.
I was finally able to make it work by using ExpressionEncoder() rather than newProductSeqEncoder[C1] in the bufferEncoder and outputEncoder definitions.
(Not sure why the previous code did not work though.)
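For concreteness, a sketch of the fix described above (only the two encoder members change; everything else stays as in the original snippet):
import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder

// Inside the anonymous Aggregator, replace the encoders:
override def bufferEncoder: Encoder[Seq[C1]] = ExpressionEncoder()
override def outputEncoder: Encoder[Seq[C1]] = ExpressionEncoder()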

Writing an efficient aggregation function for Spark SQL

I wrote an aggregation function which returns a range-encoded representation of a column of Long data. I ran it on a 1 GB Parquet file which has 50 columns. My cluster has 55 executors with 4 cores per node. The run time is around 5 minutes even after caching the dataframe. Is there any way to run this query in a more efficient manner?
Here is the UDAF -
import org.apache.spark.sql.Row
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
import org.apache.spark.sql.types._
import scala.collection.mutable.{ArrayBuffer, WrappedArray}

class Concat extends UserDefinedAggregateFunction {

  def inputSchema: org.apache.spark.sql.types.StructType =
    StructType(StructField("value", LongType) :: Nil)

  def bufferSchema: StructType = StructType(
    StructField("concatenation", ArrayType(LongType, false)) :: Nil
  )

  def dataType: DataType = ArrayType(LongType, false)

  def deterministic: Boolean = true

  def initialize(buffer: MutableAggregationBuffer): Unit = {
    buffer.update(0, new ArrayBuffer[Long]())
  }

  def update(buffer: MutableAggregationBuffer, input: Row): Unit = {
    val l = buffer.getAs[ArrayBuffer[Long]](0).toBuffer.asInstanceOf[ArrayBuffer[Long]]
    val v = input.getAs[Long](0)
    val n = l.size
    if (n < 2) {
      l += v
      l += 0L
    } else {
      val x1 = l(n - 2)
      val x2 = l(n - 1)
      if (x1 - 1 == v) {
        l(n - 2) = v
        l(n - 1) = x2 + 1
      } else if (x1 + x2 + 1 == v) {
        l(n - 1) = x2 + 1
      } else {
        l += v
        l += 0L
      }
    }
    buffer.update(0, l)
  }

  def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit = {
    val a = buffer1.getAs[WrappedArray[Long]](0)
    val b = buffer2.getAs[WrappedArray[Long]](0)
    buffer1.update(0, a ++ b)
  }

  def evaluate(buffer: Row): Any = {
    buffer.getSeq(0)
  }
}
Here is how I am running the query -
val concat = new Concat
sqlContext.udf.register("lcon", concat)
val df=sqlContext.read.parquet("file_url")
df.cache
df.registerTempTable("agg11")
val results=sqlContext.sql("SELECT lcon(Id) from agg11 WHERE Status IN (1) AND Device IN (1,4) AND Medium IN (1)").collect

How to compute percentiles in Apache Spark

I have an RDD of integers (i.e. RDD[Int]) and what I would like to do is to compute the following percentiles: [0th, 10th, 20th, ..., 90th, 100th]. What is the most efficient way to do that?
You can:
Sort the dataset via rdd.sortBy()
Compute the size of the dataset via rdd.count()
Zip with index to facilitate percentile retrieval
Retrieve the desired percentile via rdd.lookup(), e.g. for the 10th percentile rdd.lookup((0.1 * size).toLong)
To compute the median and the 99th percentile:
getPercentiles(rdd, new double[]{0.5, 0.99}, size, numPartitions);
In Java 8:
public static double[] getPercentiles(JavaRDD<Double> rdd, double[] percentiles, long rddSize, int numPartitions) {
  double[] values = new double[percentiles.length];

  JavaRDD<Double> sorted = rdd.sortBy((Double d) -> d, true, numPartitions);
  JavaPairRDD<Long, Double> indexed = sorted.zipWithIndex().mapToPair((Tuple2<Double, Long> t) -> t.swap());

  for (int i = 0; i < percentiles.length; i++) {
    double percentile = percentiles[i];
    long id = (long) (rddSize * percentile);
    values[i] = indexed.lookup(id).get(0);
  }

  return values;
}
Note that this requires sorting the dataset, O(n log n), which can be expensive on large datasets.
The other answer suggesting simply computing a histogram would not compute the percentiles correctly. Here is a counterexample: a dataset composed of 100 numbers, 99 of them being 0 and one being 1. You end up with all 99 zeros in the first bin and the 1 in the last bin, with 8 empty bins in the middle.
How about t-digest?
https://github.com/tdunning/t-digest
A new data structure for accurate on-line accumulation of rank-based statistics such as quantiles and trimmed means. The t-digest algorithm is also very parallel friendly making it useful in map-reduce and parallel streaming applications.
The t-digest construction algorithm uses a variant of 1-dimensional k-means clustering to produce a data structure that is related to the Q-digest. This t-digest data structure can be used to estimate quantiles or compute other rank statistics. The advantage of the t-digest over the Q-digest is that the t-digest can handle floating point values while the Q-digest is limited to integers. With small changes, the t-digest can handle any values from any ordered set that has something akin to a mean. The accuracy of quantile estimates produced by t-digests can be orders of magnitude more accurate than those produced by Q-digests in spite of the fact that t-digests are more compact when stored on disk.
In summary, the particularly interesting characteristics of the t-digest are that it
has smaller summaries than Q-digest
works on doubles as well as integers.
provides part per million accuracy for extreme quantiles and typically <1000 ppm accuracy for middle quantiles
is fast
is very simple
has a reference implementation that has > 90% test coverage
can be used with map-reduce very easily because digests can be merged
It should be fairly easy to use the reference Java implementation from Spark.
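As a hedged illustration (the exact t-digest API names such as createMergingDigest, add, and quantile are assumptions based on the library's documentation, not taken from this answer, and the digests must be serializable for the reduce), one way to wire it into Spark is to build one digest per partition and merge them:
import com.tdunning.math.stats.TDigest
import org.apache.spark.rdd.RDD

// rdd: RDD[Double]; a compression of ~100 is a common trade-off between digest size and accuracy.
def approxPercentiles(rdd: RDD[Double], quantiles: Seq[Double], compression: Double = 100): Seq[Double] = {
  val digest = rdd
    .mapPartitions { values =>
      val d = TDigest.createMergingDigest(compression) // build a partial digest per partition
      values.foreach(v => d.add(v))
      Iterator(d)
    }
    .reduce { (a, b) => a.add(b); a }                  // merge the partial digests
  quantiles.map(q => digest.quantile(q))
}

// e.g. approxPercentiles(rdd.map(_.toDouble), Seq(0.0, 0.1, 0.2, 0.5, 0.9, 1.0))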
I discovered this gist
https://gist.github.com/felixcheung/92ae74bc349ea83a9e29
that contains the following function:
/**
  * compute percentile from an unsorted Spark RDD
  * @param data: input data set of Long integers
  * @param tile: percentile to compute (eg. 85 percentile)
  * @return value of input data at the specified percentile
  */
def computePercentile(data: RDD[Long], tile: Double): Double = {
  // NIST method; data to be sorted in ascending order
  val r = data.sortBy(x => x)
  val c = r.count()
  if (c == 1) r.first()
  else {
    val n = (tile / 100d) * (c + 1d)
    val k = math.floor(n).toLong
    val d = n - k
    if (k <= 0) r.first()
    else {
      val index = r.zipWithIndex().map(_.swap)
      val last = c
      if (k >= c) {
        index.lookup(last - 1).head
      } else {
        index.lookup(k - 1).head + d * (index.lookup(k).head - index.lookup(k - 1).head)
      }
    }
  }
}
If you don't mind converting your RDD to a DataFrame, and using a Hive UDAF, you can use percentile. Assuming you've loaded HiveContext hiveContext into scope:
hiveContext.sql("SELECT percentile(x, array(0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9)) FROM yourDataFrame")
I found out about this Hive UDAF in this answer.
Here is my Python implementation on Spark for calculating the percentile for a RDD containing values of interest.
def percentile_threshold(ardd, percentile):
    assert percentile > 0 and percentile <= 100, "percentile should be larger than 0 and smaller or equal to 100"
    return ardd.sortBy(lambda x: x).zipWithIndex().map(lambda x: (x[1], x[0])) \
        .lookup(np.ceil(ardd.count() / 100 * percentile - 1))[0]
# Now test it out
import numpy as np
randlist = range(1,10001)
np.random.shuffle(randlist)
ardd = sc.parallelize(randlist)
print percentile_threshold(ardd,0.001)
print percentile_threshold(ardd,1)
print percentile_threshold(ardd,60.11)
print percentile_threshold(ardd,99)
print percentile_threshold(ardd,99.999)
print percentile_threshold(ardd,100)
# output:
# 1
# 100
# 6011
# 9900
# 10000
# 10000
Separately, I defined the following function to get the 10th to 100th percentile.
def get_percentiles(rdd, stepsize=10):
    percentiles = []
    rddcount100 = rdd.count() / 100
    sortedrdd = rdd.sortBy(lambda x: x).zipWithIndex().map(lambda x: (x[1], x[0]))

    for p in range(0, 101, stepsize):
        if p == 0:
            pass
            # I am not aware of a formal definition of the 0th percentile,
            # you can put a placeholder like this if you want
            # percentiles.append(sortedrdd.lookup(0)[0] - 1)
        elif p == 100:
            percentiles.append(sortedrdd.lookup(np.ceil(rddcount100 * 100 - 1))[0])
        else:
            pv = sortedrdd.lookup(np.ceil(rddcount100 * p) - 1)[0]
            percentiles.append(pv)

    return percentiles

randlist = range(1, 10001)
np.random.shuffle(randlist)
ardd = sc.parallelize(randlist)
get_percentiles(ardd, 10)
# [1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000, 10000]
Convert your RDD into an RDD of Double, and then use the .histogram(10) action. See the DoubleRDD ScalaDoc.
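A small sketch of that, assuming rdd: RDD[Int]:
// histogram(10) returns the bucket boundaries (11 values for 10 equal-width buckets)
// and the count of elements falling into each bucket.
val doubles = rdd.map(_.toDouble)
val (buckets, counts) = doubles.histogram(10)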
If N percent is small, like 10 or 20%, then I would do the following (see the sketch after this list):
1. Compute the size of the dataset via rdd.count(), or skip it if you already know it and take it as an argument.
2. Rather than sorting the whole dataset, find the top(N) from each partition. For that, work out what N% of rdd.count() is, then sort the partitions and take the top(N) from each. Now you have a much smaller dataset to sort.
3. rdd.sortBy
4. zipWithIndex
5. filter (index < topN)
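A minimal sketch of steps 3 to 5 (skipping the per-partition top(N) optimization), assuming rdd: RDD[Int] and that we want the cut-off for the top 10%, i.e. the 90th percentile:
val total = rdd.count()
val topN = (total * 0.10).toLong                     // number of elements in the top 10%

// Sort descending, index, and keep only the top N; the smallest of these is the 90th percentile.
val top10pct = rdd.sortBy(identity, ascending = false)
  .zipWithIndex()
  .filter { case (_, idx) => idx < topN }
  .keys

val percentile90 = top10pct.min()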
Based on the answer given here to Median UDAF in Spark/Scala, I used a UDAF to compute percentiles over Spark windows (Spark 2.1).
First, an abstract generic UDAF that can be reused for other aggregations:
import org.apache.spark.sql.Row
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
import org.apache.spark.sql.types._
import scala.collection.mutable
import scala.collection.mutable.ArrayBuffer

abstract class GenericUDAF extends UserDefinedAggregateFunction {

  def inputSchema: StructType =
    StructType(StructField("value", DoubleType) :: Nil)

  def bufferSchema: StructType = StructType(
    StructField("window_list", ArrayType(DoubleType, false)) :: Nil
  )

  def deterministic: Boolean = true

  def initialize(buffer: MutableAggregationBuffer): Unit = {
    buffer(0) = new ArrayBuffer[Double]()
  }

  def update(buffer: MutableAggregationBuffer, input: org.apache.spark.sql.Row): Unit = {
    var bufferVal = buffer.getAs[mutable.WrappedArray[Double]](0).toBuffer
    bufferVal += input.getAs[Double](0)
    buffer(0) = bufferVal
  }

  def merge(buffer1: MutableAggregationBuffer, buffer2: org.apache.spark.sql.Row): Unit = {
    buffer1(0) = buffer1.getAs[ArrayBuffer[Double]](0) ++ buffer2.getAs[ArrayBuffer[Double]](0)
  }

  def dataType: DataType

  def evaluate(buffer: Row): Any
}
Then the percentile UDAF, customized for deciles:
import org.apache.spark.sql.Row
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
import org.apache.spark.sql.types._
import scala.collection.mutable
import scala.collection.mutable.ArrayBuffer

class DecilesUDAF extends GenericUDAF {

  override def dataType: DataType = ArrayType(DoubleType, false)

  override def evaluate(buffer: Row): Any = {
    val sortedWindow = buffer.getAs[mutable.WrappedArray[Double]](0).sorted.toBuffer
    val windowSize = sortedWindow.size
    if (windowSize == 0) return null
    if (windowSize == 1) return (0 to 10).map(_ => sortedWindow.head).toArray
    (0 to 10).map(i => sortedWindow(Math.min(windowSize - 1, i * windowSize / 10))).toArray
  }
}
The UDAF is then instantiated and called over a partitioned and ordered window:
val deciles = new DecilesUDAF()
df.withColumn("mt_deciles", deciles(col("mt")).over(myWindow))
You can then split the resulting array into multiple columns with getItem:
def splitToColumns(size: Int, splitCol: String)(df: DataFrame) = {
  (0 to size).foldLeft(df) {
    case (df_arg, i) => df_arg.withColumn("mt_decile_" + i, col(splitCol).getItem(i))
  }
}
df.transform(splitToColumns(10, "mt_deciles"))
A UDAF is slower than native Spark functions, but as long as each grouped bag or each window is relatively small and fits into a single executor, it should be fine. The main advantage is using Spark's parallelism.
With little effort, this code could be extended to n-quantiles (see the sketch below).
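For example, a hypothetical n-quantiles variant (not part of the original answer) only needs a parametrized evaluate, reusing GenericUDAF from above:
class QuantilesUDAF(n: Int) extends GenericUDAF {

  override def dataType: DataType = ArrayType(DoubleType, false)

  override def evaluate(buffer: Row): Any = {
    val sortedWindow = buffer.getAs[mutable.WrappedArray[Double]](0).sorted
    val windowSize = sortedWindow.size
    if (windowSize == 0) return null
    // n+1 cut points: minimum, the n-1 inner quantiles, and (approximately) the maximum.
    (0 to n).map(i => sortedWindow(Math.min(windowSize - 1, i * windowSize / n))).toArray
  }
}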
I tested the code using this function:
def testDecilesUDAF = {
  val window = W.partitionBy("user")
  val deciles = new DecilesUDAF()

  val schema = StructType(StructField("mt", DoubleType) :: StructField("user", StringType) :: Nil)
  val rows1 = (1 to 20).map(i => Row(i.toDouble, "a"))
  val rows2 = (21 to 40).map(i => Row(i.toDouble, "b"))
  val df = spark.createDataFrame(spark.sparkContext.makeRDD[Row](rows1 ++ rows2), schema)

  df.withColumn("deciles", deciles(col("mt")).over(window))
    .transform(splitToColumns(10, "deciles"))
    .drop("deciles")
    .show(100, truncate = false)
}
First 3 lines of output:
+----+----+-----------+-----------+-----------+-----------+-----------+-----------+-----------+-----------+-----------+-----------+------------+
|mt |user|mt_decile_0|mt_decile_1|mt_decile_2|mt_decile_3|mt_decile_4|mt_decile_5|mt_decile_6|mt_decile_7|mt_decile_8|mt_decile_9|mt_decile_10|
+----+----+-----------+-----------+-----------+-----------+-----------+-----------+-----------+-----------+-----------+-----------+------------+
|21.0|b |21.0 |23.0 |25.0 |27.0 |29.0 |31.0 |33.0 |35.0 |37.0 |39.0 |40.0 |
|22.0|b |21.0 |23.0 |25.0 |27.0 |29.0 |31.0 |33.0 |35.0 |37.0 |39.0 |40.0 |
|23.0|b |21.0 |23.0 |25.0 |27.0 |29.0 |31.0 |33.0 |35.0 |37.0 |39.0 |40.0 |
Another alternative is to use top and last on an RDD of Double. For example, val percentile_99th_value = scores.top((count/100).toInt).last.
This method is better suited to individual percentiles.
Here is my easy approach:
val percentiles = Array(0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1)
val accuracy = 1000000
df.stat.approxQuantile("score", percentiles, 1.0/accuracy)
output:
scala> df.stat.approxQuantile("score", percentiles, 1.0/accuracy)
res88: Array[Double] = Array(0.011044141836464405, 0.02022990956902504, 0.0317261666059494, 0.04638145491480827, 0.06498630344867706, 0.0892181545495987, 0.12161539494991302, 0.16825592517852783, 0.24740923941135406, 0.9188197255134583)
accuracy: the accuracy parameter (default: 10000) is a positive numeric literal which controls approximation accuracy at the cost of memory. A higher value of accuracy yields better accuracy; 1.0/accuracy is the relative error of the approximation.
