Spark.ml DataFrame containing SparseVector - apache-spark

I have a spark.ml DataFrame with many columns, each of which contains a SparseVector per row. I would like to apply MultivariateStatisticalSummary.colStats to each column; the signature of colStats is:
def colStats(X: RDD[Vector]): MultivariateStatisticalSummary
which seems perfect... except that I can't seem to select a column from that DataFrame and get it as an RDD[Vector]. Here is my attempt:
val df: DataFrame = data.select(shardId)
val col = df.as[(org.apache.spark.mllib.linalg.Vector)].rdd
val s: MultivariateStatisticalSummary = Statistics.colStats(col)
which doesn't compile with the message (in Scala):
Unable to find encoder for type stored in a Dataset. Primitive types (Int, String, etc) and Product types (case classes) are supported by importing sqlContext.implicits._ Support for serializing other types will be added in future releases.
val col = df.as[(org.apache.spark.mllib.linalg.Vector)].rdd
I also tried:
val df = data.select(shardId)
val col: RDD[Vector] = df.map(x => x.asInstanceOf[org.apache.spark.mllib.linalg.Vector])
val s: MultivariateStatisticalSummary = Statistics.colStats(col)
which fails at runtime with error:
java.lang.ClassCastException: org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema cannot be cast to org.apache.spark.mllib.linalg.Vector
How can I bridge the gap between DataFrame and colStats?

I found the answer after all:
val df = data.select(shardId)
val col: RDD[Vector] = df.map { _.get(0).asInstanceOf[org.apache.spark.mllib.linalg.Vector] }
val s: MultivariateStatisticalSummary = Statistics.colStats(col)
The trick was only to extract the first element of each row before casting it.
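For reference, the same extraction can also be written with Row.getAs, and wrapped in a small helper if several columns need the same treatment (a minimal sketch; summarize and colName are hypothetical names, not from the original):
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.mllib.stat.{MultivariateStatisticalSummary, Statistics}
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.DataFrame
// Hypothetical helper: summarize one Vector-typed column of a DataFrame.
def summarize(data: DataFrame, colName: String): MultivariateStatisticalSummary = {
  val vectors: RDD[Vector] = data.select(colName).rdd.map(_.getAs[Vector](0))
  Statistics.colStats(vectors)
}
val s = summarize(data, shardId)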

Related

Technique for joining with spark dataframe w/ custom partitioner works w/ python, but not scala?

I recently read an article that described how to custom-partition a dataframe
[ https://dataninjago.com/2019/06/01/create-custom-partitioner-for-spark-dataframe/ ] in which the author illustrated the technique in Python. I use Scala, and the technique looked like a good way to address issues of skew, so I tried something similar. What I found was that when one does the following:
- create 2 data frames, D1, D2
- convert D1, D2 to 2 pair RDDs R1, R2 (where the key is the key you want to join on)
- repartition R1, R2 with a custom partitioner 'C', where 'C' has 2 partitions (p-0, p-1) and puts everything in p-1 except keys == 'a'
- join R1, R2 as R3
- OBSERVE that:
  - the partitioner for R3 is 'C' (same as for R1, R2)
  - when printing the contents of each partition of R3, all entries except the one keyed by 'a' are in p-1
- set D1' <- R1.toDF
- set D2' <- R2.toDF
We note the following results:
0) The join of D1' and D2' produces the expected results (good)
1) The partitioners for D1' and D2' are None -- not Some(C), as was the case with RDDs R1/R2 (bad)
2) The contents of the glom'd underlying RDDs of D1' and D2' did not have everything (except key 'a') piled up in partition 1 as expected (bad)
So I came away with the following conclusion, which will work for me practically... but it really irks me that I could not get the behavior described in the article, which used Python:
When one needs to use custom partitioning with DataFrames in Scala, one must drop into RDDs, do the join or whatever operation on the RDDs, then convert back to a dataframe. You can't apply the custom partitioner, then convert back to a dataframe, do your operations, and expect the custom partitioning to work.
Now... I am hoping I am wrong! Perhaps someone with more expertise in Spark internals can guide me here. I have written a little program (below) to illustrate the results. Thanks in advance if you can set me straight.
UPDATE
In addition to the Spark code which illustrates the problem, I also tried a simplified version of what the original article presented in Python. The conversions below create a dataframe, extract its underlying RDD and repartition it, then recover the dataframe and verify that the partitioner is lost.
Python snippet illustrating problem
from pyspark.sql.types import IntegerType
mylist = [1, 2, 3, 4]
df = spark.createDataFrame(mylist, IntegerType())
def travelGroupPartitioner(key):
    return 0
dfRDD = df.rdd.map(lambda x: (x[0], x))
dfRDD2 = dfRDD.partitionBy(8, travelGroupPartitioner)
# this line uses the approach of the original article and maps to only the value,
# but map doesn't guarantee preserving the partitioner, so I tried without the
# map below...
df2 = spark.createDataFrame(dfRDD2.map(lambda x: x[1]))
print(df2.rdd.partitioner)  # prints None
# create a dataframe from the partitioned RDD _without_ the map,
# and we _still_ lose the partitioner
df3 = spark.createDataFrame(dfRDD2)
print(df3.rdd.partitioner)  # prints None
Scala snippet illustrating problem
object Question extends App {
  val conf =
    new SparkConf().setAppName("blah")
      .setMaster("local").set("spark.sql.shuffle.partitions", "2")
  val sparkSession = SparkSession.builder.config(conf).getOrCreate()
  val spark = sparkSession
  import spark.implicits._
  sparkSession.sparkContext.setLogLevel("ERROR")
  class CustomPartitioner(num: Int) extends Partitioner {
    def numPartitions: Int = num
    def getPartition(key: Any): Int = if (key.toString == "a") 0 else 1
  }
  case class Emp(name: String, deptId: String)
  case class Dept(deptId: String, name: String)
  val value: RDD[Emp] = spark.sparkContext.parallelize(
    Seq(
      Emp("anne", "a"),
      Emp("dave", "d"),
      Emp("claire", "c"),
      Emp("roy", "r"),
      Emp("bob", "b"),
      Emp("zelda", "z"),
      Emp("moe", "m")
    )
  )
  val employee: Dataset[Emp] = value.toDS()
  val department: Dataset[Dept] = spark.sparkContext.parallelize(
    Seq(
      Dept("a", "ant dept"),
      Dept("d", "duck dept"),
      Dept("c", "cat dept"),
      Dept("r", "rabbit dept"),
      Dept("b", "badger dept"),
      Dept("z", "zebra dept"),
      Dept("m", "mouse dept")
    )
  ).toDS()
  val dumbPartitioner: Partitioner = new CustomPartitioner(2)
  // Convert to-be-joined dataframes to custom repartition RDDs [ custom partitioner: cp ]
  val deptPairRdd: RDD[(String, Dept)] = department.rdd.map { dept => (dept.deptId, dept) }
  val empPairRdd: RDD[(String, Emp)] = employee.rdd.map { emp: Emp => (emp.deptId, emp) }
  val cpEmpRdd: RDD[(String, Emp)] = empPairRdd.partitionBy(dumbPartitioner)
  val cpDeptRdd: RDD[(String, Dept)] = deptPairRdd.partitionBy(dumbPartitioner)
  assert(cpEmpRdd.partitioner.get == dumbPartitioner)
  assert(cpDeptRdd.partitioner.get == dumbPartitioner)
  // Here we join using RDDs and ensure that the resultant rdd is partitioned so most things end up in partition 1
  val joined: RDD[(String, (Emp, Dept))] = cpEmpRdd.join(cpDeptRdd)
  val reso: Array[(Array[(String, (Emp, Dept))], Int)] = joined.glom().collect().zipWithIndex
  reso.foreach((item: Tuple2[Array[(String, (Emp, Dept))], Int]) =>
    println(s"array size: ${item._2}. contents: ${item._1.toList}"))
  System.out.println("partitioner of RDD created by joining 2 RDD's w/ custom partitioner: " + joined.partitioner)
  assert(joined.partitioner.contains(dumbPartitioner))
  val recoveredDeptDF: DataFrame = deptPairRdd.toDF
  val recoveredEmpDF: DataFrame = empPairRdd.toDF
  System.out.println(
    "partitioner for DF recovered from custom partitioned RDD (not as expected!):" +
      recoveredDeptDF.rdd.partitioner)
  val joinedDf = recoveredEmpDF.join(recoveredDeptDF, "_1")
  println("printing results of joining the 2 dataframes we 'recovered' from the custom partitioned RDDS (looks good)")
  joinedDf.show()
  println("PRINTING partitions of joined DF does not match the glom'd results we got from underlying RDDs")
  joinedDf.rdd.glom().collect()
    .zipWithIndex.foreach { item: Tuple2[Any, Int] =>
      val asList = item._1.asInstanceOf[Array[org.apache.spark.sql.Row]].toList
      println(s"array size: ${item._2}. contents: $asList")
    }
  assert(joinedDf.rdd.partitioner.contains(dumbPartitioner)) // this will fail ;^(
}
Check out my new library, which adds a partitionBy method at the Dataset/DataFrame API level.
Taking your Emp and Dept objects as an example:
class DeptByIdPartitioner extends TypedPartitioner[Dept] {
  override def getPartitionIdx(value: Dept): Int = if (value.deptId.startsWith("a")) 0 else 1
  override def numPartitions: Int = 2
  override def partitionKeys: Option[Set[PartitionKey]] = Some(Set(("deptId", StringType)))
}
class EmpByDepIdPartitioner extends TypedPartitioner[Emp] {
  override def getPartitionIdx(value: Emp): Int = if (value.deptId.startsWith("a")) 0 else 1
  override def numPartitions: Int = 2
  override def partitionKeys: Option[Set[PartitionKey]] = Some(Set(("deptId", StringType)))
}
Note that we are extending TypedPartitioner.
It is compile-time safe: you won't be able to repartition a dataset of Persons with an Emp partitioner.
val spark = SparkBuilder.getSpark()
import org.apache.spark.sql.exchange.implicits._ //<-- additional import
import spark.implicits._
val deptPartitioned = department.repartitionBy(new DeptByIdPartitioner)
val empPartitioned = employee.repartitionBy(new EmpByDepIdPartitioner)
Let's check how our data is partitioned:
Dept dataset:
Partition N 0
: List([a,ant dept])
Partition N 1
: List([d,duck dept], [c,cat dept], [r,rabbit dept], [b,badger dept], [z,zebra dept], [m,mouse dept])
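For reference, the listing above can be produced with the same glom-based inspection used for the join below (a small sketch; deptPartitioned is the dataset repartitioned above):
deptPartitioned.rdd.glom().collect().zipWithIndex.foreach { case (rows, idx) =>
  println(s"Partition N $idx")
  println(s"\t: ${rows.toList}")
}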
If we join two datasets repartitioned by the same key, Catalyst will properly recognize this:
val joined = deptPartitioned.join(empPartitioned, "deptId")
println("Joined:")
val result: Array[(Int, Array[Row])] = joined.rdd.glom().collect().zipWithIndex.map(_.swap)
for (elem <- result) {
  println(s"Partition N ${elem._1}")
  println(s"\t: ${elem._2.toList}")
}
Partition N 0
: List([a,ant dept,anne])
Partition N 1
: List([b,badger dept,bob], [c,cat dept,claire], [d,duck dept,dave], [m,mouse dept,moe], [r,rabbit dept,roy], [z,zebra dept,zelda])
What version of Spark are you using? If it's 2.x or above, it's recommended to use the DataFrame/Dataset API instead of RDDs.
It's much easier to work with the mentioned API than with RDDs, and it performs much better on later versions of Spark.
You may find the link below useful for how to join DFs:
How to join two dataframes in Scala and select on few columns from the dataframes by their index?
Once you get your joined DataFrame, you can use the link below for partitioning by column values, which I assume you're trying to achieve:
Partition a spark dataframe based on column value?

How to iterate through dataframe without converting to dataset in spark?

I have a dataframe through which I want to iterate, but I don't want to convert it to a dataset.
We have to convert this Spark Scala code to PySpark, and PySpark does not support datasets.
I have tried the following code by converting to a dataset:
data in file:
abc,a
mno,b
pqr,a
xyz,b
val a = sc.textFile("<path>")
//creating dataframe with column AA,BB
val b = a.map(x => x.split(",")).map(x =>(x(0).toString,x(1).toString)).toDF("AA","BB")
b.registerTempTable("test")
case class T(AA:String, BB: String)
//creating dataset from dataframe
val d = b.as[T].collect
d.foreach { x =>
  var m = spark.sql(s"select * from test where BB = '${x.BB}'")
  m.show()
}
Without converting to a dataset, i.e. with
val d = b.collect
d.foreach { x =>
  var m = spark.sql(s"select * from test where BB = '${x.BB}'")
  m.show()
}
it gives the error:
error: value BB is not a member of org.apache.spark.sql.Row
You cannot loop over a DataFrame the way the above code does. Use the DataFrame's rdd.collect to loop over it.
import spark.implicits._
val df = Seq(("abc","a"), ("mno","b"), ("pqr","a"),("xyz","b")).toDF("AA", "BB")
df.registerTempTable("test")
df.rdd.collect.foreach(x => {
  val BBvalue = x.mkString(",").split(",")(1)
  var m = spark.sql(s"select * from test where BB = '$BBvalue'")
  m.show()
})
Inside the loop I used mkString to convert an RDD row to a string, then split the column values on the comma and used the column index to access the value. For example, in the above code I used (1), which refers to column BB, the second column (index 1).
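If you prefer not to go through mkString and split, the same value can be read by name with Row.getAs (a sketch against the same df and test table as above):
df.rdd.collect.foreach(row => {
  val BBvalue = row.getAs[String]("BB")
  spark.sql(s"select * from test where BB = '$BBvalue'").show()
})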
Please let me know if you have any questions.

Syntax for using toEpochDate with a Dataframe with Spark Scala - elegantly

The following is nice and easy with an RDD in terms of epochDate derivation:
val rdd2 = rdd.map(x => (x._1, x._2, x._3,
LocalDate.parse(x._2.toString).toEpochDay, LocalDate.parse(x._3.toString).toEpochDay))
The RDD fields are all of String type, and the desired result is obtained; for example:
...(Mike,2018-09-25,2018-09-30,17799,17804), ...
Trying to do the same when the date is a String in the DF appears too tricky for me, and I would like to see something elegant, if possible. Something like this and its variations do not work.
val df2 = df.withColumn("s", $"start".LocalDate.parse.toString.toEpochDay)
I get:
notebook:50: error: value LocalDate is not a member of org.apache.spark.sql.ColumnName
I understand the error, but what is an elegant way of doing the conversion?
You can define to_epoch_day as datediff since the beginning of the epoch:
import org.apache.spark.sql.functions.{datediff, lit, to_date}
import org.apache.spark.sql.Column
def to_epoch_day(c: Column) = datediff(c, to_date(lit("1970-01-01")))
and apply it directly on a Column:
df.withColumn("s", to_epoch_day(to_date($"start")))
As long as the string format complies with ISO 8601, you could even skip the date conversion (it will be done implicitly by datediff):
df.withColumn("s", to_epoch_day($"start"))
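Applied to the original example, both epoch-day columns can be derived the same way (a sketch; the column name end is an assumption based on the question's tuple, only start appears in the question):
val df2 = df
  .withColumn("startEpochDay", to_epoch_day($"start"))
  .withColumn("endEpochDay", to_epoch_day($"end"))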
$"start" is of type ColumnName not String.
You will need to define a UDF
Example below:
scala> import java.time._
import java.time._
scala> def toEpochDay(s: String) = LocalDate.parse(s).toEpochDay
toEpochDay: (s: String)Long
scala> val toEpochDayUdf = udf(toEpochDay(_: String))
toEpochDayUdf: org.apache.spark.sql.expressions.UserDefinedFunction = UserDefinedFunction(<function1>,LongType,Some(List(StringType)))
scala> val df = List("2018-10-28").toDF
df: org.apache.spark.sql.DataFrame = [value: string]
scala> df.withColumn("s", toEpochDayUdf($"value")).collect
res0: Array[org.apache.spark.sql.Row] = Array([2018-10-28,17832])

Spark ML - StringType is not supported [duplicate]

I have a DataFrame with some categorical string values (e.g. uuid|url|browser).
I would like to convert them to doubles in order to run an ML algorithm that accepts a double matrix.
As a conversion method I used StringIndexer (Spark 1.4), which maps my string values to double values, so I defined a function like this:
def str(arg: String, df: DataFrame): DataFrame = {
  val indexer = new StringIndexer().setInputCol(arg).setOutputCol(arg + "_index")
  val newDF = indexer.fit(df).transform(df)
  newDF
}
Now the issue is that I would like to iterate over each column of the df, call this function, and add (or convert) the original string column into the parsed double column, so the result would be:
Initial df:
[String: uuid|String: url| String: browser]
Final df:
[String: uuid|Double: uuid_index|String: url|Double: url_index|String: browser|Double: Browser_index]
Thanks in advance
You can simply foldLeft over the Array of columns:
val transformed: DataFrame = df.columns.foldLeft(df)((df, arg) => str(arg, df))
Still, I would argue that it is not a good approach. Since str discards the StringIndexerModel, it cannot be used when you get new data. Because of that, I would recommend using a Pipeline:
import org.apache.spark.ml.Pipeline
val transformers: Array[org.apache.spark.ml.PipelineStage] = df.columns.map(
  cname => new StringIndexer()
    .setInputCol(cname)
    .setOutputCol(s"${cname}_index")
)
// Add the rest of your pipeline like VectorAssembler and algorithm
val stages: Array[org.apache.spark.ml.PipelineStage] = transformers ++ ???
val pipeline = new Pipeline().setStages(stages)
val model = pipeline.fit(df)
model.transform(df)
VectorAssembler can be included like this:
val assembler = new VectorAssembler()
  .setInputCols(df.columns.map(cname => s"${cname}_index"))
  .setOutputCol("features")
val stages = transformers :+ assembler
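For illustration only, the pipeline could then be completed with whatever estimator you intend to train, for example a LogisticRegression stage (the algorithm choice and the label column are assumptions, not part of the original answer):
import org.apache.spark.ml.classification.LogisticRegression
// Hypothetical final stage; assumes the data also has a "label" column.
val lr = new LogisticRegression()
  .setFeaturesCol("features")
  .setLabelCol("label")
val stages: Array[org.apache.spark.ml.PipelineStage] = transformers :+ assembler :+ lr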
You could also use RFormula, which is less customizable, but much more concise:
import org.apache.spark.ml.feature.RFormula
val rf = new RFormula().setFormula(" ~ uuid + url + browser - 1")
val rfModel = rf.fit(dataset)
rfModel.transform(dataset)

How to convert row rdd to typed rdd

Is it possible to convert a Row RDD to a typed RDD? In the code below, can I convert the Row JavaRDD to a Counter-typed JavaRDD?
Code:
JavaRDD<Counter> rdd = sc.parallelize(counters);
Dataset<Counter> ds = sqlContext.createDataset(rdd.rdd(), encoder);
DataFrame df = ds.toDF();
df.show();
df.write().parquet(path);
DataFrame newDataDF = sqlContext.read().parquet(path);
newDataDF.toJavaRDD(); // This gives a row type rdd
In Scala:
case class A(countId: Long, bytes: Array[Byte], blist: List[B])
case class B(id: String, count: Long)
val b1 = B("a", 1L)
val b2 = B("b", 2L)
val a1 = A(1L, Array(1.toByte, 2.toByte), List(b1, b2))
val rdd = sc.parallelize(List(a1))
val dataSet: Dataset[A] = sqlContext.createDataset(rdd)
val df = dataSet.toDF()
// this shows fine; the last entry is for List[B], in which the string is stored as null:
// |1|[01 02]| [[null,3984726108...|]
df.show
df.write.parquet(path)
val roundTripRDD = sqlContext.read.parquet(path).as[A].rdd
// throws the error below when show is run on the df
roundTripRDD.toDF.show
assertEquals(roundTripRDD, rdd)
The show call fails with:
Caused by: org.codehaus.commons.compiler.CompileException: File 'generated.java',
Line 300, Column 68:
No applicable constructor/method found for actual parameters
"long, byte[], scala.collection.Seq"; candidates are:
"test.data.A(long, byte[], scala.collection.immutable.List)"
Do I need to provide some kind of constructor for the case class?
Try:
sqlContext.read().parquet(path).as(encoder).rdd().toJavaRDD();
