Spark custom estimator including persistence - apache-spark

I want to develop a custom estimator for Spark that also handles persistence via the great Pipeline API. But as How to Roll a Custom Estimator in PySpark mllib put it, there is not a lot of documentation out there (yet).
I have some data cleansing code written in Spark and would like to wrap it in a custom estimator. It includes some NA substitutions, column deletions, filtering and basic feature generation (e.g. birthdate to age).
transformSchema will use the case class of the dataset: ScalaReflection.schemaFor[MyClass].dataType.asInstanceOf[StructType]
fit will only fit, e.g., the mean age to use as NA substitute.
What is still pretty unclear to me:
transform in the custom pipeline model will be used to apply the "fitted" estimator to new data. Is this correct? If yes, how should I transfer the fitted values, e.g. the mean age from above, into the model?
How to handle persistence? I found some generic loadImpl method within private Spark components, but am unsure how to transfer my own parameters, e.g. the mean age, into the MLReader / MLWriter which are used for serialization.
It would be great if you could help me with a custom estimator - especially with the persistence part.

First of all I believe you're mixing a bit two different things:
Estimators - which represent stages that can be fitted. The Estimator's fit method takes a Dataset and returns a Transformer (model).
Transformers - which represent stages that can transform data.
When you fit a Pipeline it fits all Estimators and returns a PipelineModel. The PipelineModel can transform data by sequentially calling transform on all Transformers in the model.
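To make that distinction concrete, here is a tiny, hedged sketch with two built-in stages (the column names and the choice of stages are assumptions for illustration, not taken from the question):

import org.apache.spark.ml.{Pipeline, PipelineStage}
import org.apache.spark.ml.feature.{StringIndexer, VectorAssembler}

// StringIndexer is an Estimator (it must see the data to build its index),
// VectorAssembler is a plain Transformer (no fitting needed).
val pipeline = new Pipeline().setStages(Array[PipelineStage](
  new StringIndexer().setInputCol("category").setOutputCol("categoryIndex"),
  new VectorAssembler().setInputCols(Array("categoryIndex", "age")).setOutputCol("features")
))

val model = pipeline.fit(trainingDF)   // fits every Estimator, returns a PipelineModel
val result = model.transform(testDF)   // calls transform on every stage of the model in order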
how should I transfer the fitted values
There is no single answer to this question. In general you have two options:
Pass parameters of the fitted model as the arguments of the Transformer.
Make parameters of the fitted model Params of the Transformer.
The first approach is typically used by the built-in Transformers, but the second one should work in some simple cases.
how to handle persistence
If the Transformer is defined only by its Params you can extend DefaultParamsWritable and provide a companion object extending DefaultParamsReadable.
If you use more complex arguments you should extend MLWritable and implement an MLWriter that makes sense for your data. There are multiple examples in the Spark source which show how to implement data and metadata reading / writing (a minimal sketch follows the list below).
If you're looking for an easy-to-comprehend example, take a look at CountVectorizer(Model), where:
Estimator and Transformer share common Params.
Model vocabulary is a constructor argument, model parameters are inherited from the parent.
Metadata (parameters) is written and read using DefaultParamsWriter / DefaultParamsReader.
Custom implementation handles data (vocabulary) writing and reading.
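To make the second case more concrete, here is a minimal, untested sketch of a model whose fitted state (a hypothetical meanAge) is a constructor argument rather than a Param, loosely following the CountVectorizerModel pattern. Note that DefaultParamsWriter.saveMetadata / DefaultParamsReader.loadMetadata are package-private to org.apache.spark.ml, so code like this usually has to live under that package (or write the metadata JSON itself):

import org.apache.hadoop.fs.Path
import org.apache.spark.ml.Model
import org.apache.spark.ml.param.ParamMap
import org.apache.spark.ml.util._
import org.apache.spark.sql.{DataFrame, Dataset}
import org.apache.spark.sql.types.StructType

// Fitted state (meanAge) is a constructor argument, not a Param
class MeanAgeModel(override val uid: String, val meanAge: Double)
    extends Model[MeanAgeModel] with MLWritable {

  override def copy(extra: ParamMap): MeanAgeModel =
    copyValues(new MeanAgeModel(uid, meanAge), extra).setParent(parent)

  override def transformSchema(schema: StructType): StructType = schema

  override def transform(dataset: Dataset[_]): DataFrame =
    dataset.na.fill(meanAge, Seq("age")) // placeholder transformation

  override def write: MLWriter = new MeanAgeModel.MeanAgeModelWriter(this)
}

object MeanAgeModel extends MLReadable[MeanAgeModel] {

  private class MeanAgeModelWriter(instance: MeanAgeModel) extends MLWriter {
    override protected def saveImpl(path: String): Unit = {
      // uid and Params go to path/metadata, the fitted value to path/data
      DefaultParamsWriter.saveMetadata(instance, path, sc)
      val dataPath = new Path(path, "data").toString
      sparkSession.createDataFrame(Seq(Tuple1(instance.meanAge)))
        .toDF("meanAge").write.parquet(dataPath)
    }
  }

  private class MeanAgeModelReader extends MLReader[MeanAgeModel] {
    override def load(path: String): MeanAgeModel = {
      val metadata = DefaultParamsReader.loadMetadata(path, sc)
      val dataPath = new Path(path, "data").toString
      val meanAge = sparkSession.read.parquet(dataPath)
        .select("meanAge").head().getDouble(0)
      new MeanAgeModel(metadata.uid, meanAge) // restore Params here if the model had any
    }
  }

  override def read: MLReader[MeanAgeModel] = new MeanAgeModelReader
}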

The following uses the Scala API but you can easily refactor it to Python if you really want to...
First things first:
Estimator: implements .fit() that returns a Transformer
Transformer: implements .transform() and manipulates the DataFrame
Serialization/Deserialization: Do your best to use built-in Params and leverage the simple DefaultParamsWritable trait + a companion object extending DefaultParamsReadable[T]. In other words: stay away from MLReader / MLWriter and keep your code simple.
Parameter passing: Use a common trait extending Params and share it between your Estimator and your Model (a.k.a. Transformer).
Skeleton code:
import org.apache.spark.ml.{Estimator, Model}
import org.apache.spark.ml.param.{DoubleArrayParam, ParamMap, Params, StringArrayParam}
import org.apache.spark.ml.util.{DefaultParamsReadable, DefaultParamsWritable, Identifiable}
import org.apache.spark.sql.{DataFrame, Dataset}
import org.apache.spark.sql.types.StructType

// Common Parameters shared by the Estimator and the Model
trait MyCommonParams extends Params {
  final val inputCols: StringArrayParam = // usage: new MyMeanValueStuff().setInputCols(...)
    new StringArrayParam(this, "inputCols", "doc...")
  def setInputCols(value: Array[String]): this.type = set(inputCols, value)
  def getInputCols: Array[String] = $(inputCols)

  final val meanValues: DoubleArrayParam =
    new DoubleArrayParam(this, "meanValues", "doc...")
  def setMeanValues(value: Array[Double]): this.type = set(meanValues, value)
  def getMeanValues: Array[Double] = $(meanValues)
}

// Estimator
class MyMeanValueStuff(override val uid: String) extends Estimator[MyMeanValueStuffModel]
    with DefaultParamsWritable // enables serialization of MyCommonParams
    with MyCommonParams {
  def this() = this(Identifiable.randomUID("myMeanValueStuff")) // convenience constructor

  override def copy(extra: ParamMap): Estimator[MyMeanValueStuffModel] = defaultCopy(extra) // default
  override def transformSchema(schema: StructType): StructType = schema // no changes
  override def fit(dataset: Dataset[_]): MyMeanValueStuffModel = {
    // your logic here: compute the per-column means. I can't do all the work for you! ;)
    val computedMeans: Array[Double] = ???
    this.setMeanValues(computedMeans)
    copyValues(new MyMeanValueStuffModel(uid + "_model").setParent(this))
  }
}
// Companion object enables deserialization of MyCommonParams
object MyMeanValueStuff extends DefaultParamsReadable[MyMeanValueStuff]

// Model (Transformer)
class MyMeanValueStuffModel(override val uid: String) extends Model[MyMeanValueStuffModel]
    with DefaultParamsWritable // enables serialization of MyCommonParams
    with MyCommonParams {
  override def copy(extra: ParamMap): MyMeanValueStuffModel = defaultCopy(extra) // default
  override def transformSchema(schema: StructType): StructType = schema // no changes
  override def transform(dataset: Dataset[_]): DataFrame = {
    // your logic here: zip inputCols and meanValues, toMap, replace nulls with na.fill
    // you have access to both inputCols and meanValues here!
    ???
  }
}
// Companion object enables deserialization of MyCommonParams
object MyMeanValueStuffModel extends DefaultParamsReadable[MyMeanValueStuffModel]
With the code above you can Serialize/Deserialize a Pipeline containing a MyMeanValueStuff stage.
Want to look at a really simple implementation of an Estimator? MinMaxScaler! (My example is actually simpler though...)
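A hedged usage sketch (assuming the skeleton above compiles and df is a DataFrame with the relevant columns); persistence then works like for any built-in stage:

import org.apache.spark.ml.{Pipeline, PipelineModel, PipelineStage}

val pipeline = new Pipeline().setStages(Array[PipelineStage](
  new MyMeanValueStuff().setInputCols(Array("age"))
))

val model = pipeline.fit(df)
model.write.overwrite().save("/tmp/my-pipeline-model")

val restored = PipelineModel.load("/tmp/my-pipeline-model")
restored.transform(df).show()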

Related

define transform primitive "weekend" and define features using data frames

I was trying to create a baseline model with only one transform primitive. So I'm defining entities and relationships, e.g.:
trans_primitives = [IsWeekend]
features = ft.dfs(entities=entities,
                  relationships=relationships,
                  target_entity="trips",
                  trans_primitives=trans_primitives,
                  agg_primitives=[],
                  ignore_variables={"trips": ["pickup_latitude", "pickup_longitude",
                                              "dropoff_latitude", "dropoff_longitude"]},
                  features_only=True)
The error is: time index not found in dataframe.

Apache Spark: pass Column as Transformer parameter

I defined a pipeline Transformer like this:
class MyTransformer(condition: Column) extends SparkTransformer {
  override def transform(dataset: Dataset[_]): DataFrame = {...}
}
which is then used in a pipeline:
val pipeline = new Pipeline()
pipeline.setStages(Array(new MyTransformer(col("test").equalTo(lit("value")))))
pipeline.fit(df).transform(mydf)
In my transformer, I want to apply a transformation only on rows that verify the condition.
It results in a serialization issue:
Serialization stack:
- object not serializable (class: org.apache.spark.sql.Column, value: (test = value))
- field (class: my.project.MyTransformer, name: condition, type: class org.apache.spark.sql.Column)
- ...
In my understanding, the Transformers are serialized and dispatched to executors, so every parameter should be serializable.
How can I bypass it? Is there a workaround?
Thx.
This question seems a bit old...
I don't know if my (untested) idea matches your needs.
A solution could be to use a SQL expression (a String instance):
val pipeline = new Pipeline()
pipeline.setStages(Array(new MyTransformer("test = 'value'")))
pipeline.fit(df).transform(mydf)
and to use functions.expr() to convert the expression String into a Column instance in the Transformer.transform method.
This way, the condition is serializable, and the non-serializable objects are only created when needed, on the executors.
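A minimal, untested sketch of that idea (the class name, uid prefix and the filter call are placeholders; substitute your actual conditional transformation):

import org.apache.spark.ml.Transformer
import org.apache.spark.ml.param.ParamMap
import org.apache.spark.ml.util.Identifiable
import org.apache.spark.sql.{DataFrame, Dataset}
import org.apache.spark.sql.functions.expr
import org.apache.spark.sql.types.StructType

// The condition is kept as a plain String (serializable); the Column is only
// built inside transform(), so it is never captured in a serialized closure.
class MyTransformer(condition: String, override val uid: String) extends Transformer {
  def this(condition: String) = this(condition, Identifiable.randomUID("myTransformer"))

  override def transform(dataset: Dataset[_]): DataFrame = {
    val cond = expr(condition)    // e.g. expr("test = 'value'")
    dataset.toDF().filter(cond)   // placeholder for your conditional logic
  }

  override def transformSchema(schema: StructType): StructType = schema
  override def copy(extra: ParamMap): MyTransformer = new MyTransformer(condition, uid)
}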

Spark function aliases - performant udfs

Context
In many of the SQL queries I write, I find myself combining Spark's predefined functions in exactly the same way, which often results in verbose and duplicated code; my developer instinct is to refactor it.
So, my question is this: is there some way to define some kind of alias for function combinations without resorting to UDFs (which are to be avoided for performance reasons), the goal being to make the code clearer and cleaner? Essentially, what I want is something like UDFs but without the performance penalty. Also, these functions MUST be callable from within a Spark SQL query usable in spark.sql calls.
Example
For example, let's say my business logic is to reverse some string and hash it like this (please note that the function combination here is irrelevant; what is important is that it is some combination of existing pre-defined Spark functions, possibly many of them):
SELECT
sha1(reverse(person.name)),
sha1(reverse(person.some_information)),
sha1(reverse(person.some_other_information))
...
FROM person
Is there a way of declaring a business function without paying the performance price of using a udf, allowing the code just above to be rewritten as:
SELECT
business(person.name),
business(person.some_information),
business(person.some_other_information)
...
FROM person
I have searched around quite a bit in the Spark documentation and on this website and have not found a way of achieving this, which is pretty weird to me because it looks like a pretty natural need, and I don't understand why you should necessarily pay the black-box price of defining and calling a UDF.
Is there a way of declaring a business function without paying the performance price of using a udf
You don't have to use a udf; you can extend the Expression class or, for the simplest operations, UnaryExpression. Then you only have to implement a handful of methods, and you're done. It is natively integrated into Spark, and in addition it lets you take advantage of features such as code generation.
In your case, adding the business function is pretty straightforward:
import org.apache.spark.sql.Column
import org.apache.spark.sql.functions.{reverse, sha1}

def business(column: Column): Column = {
  sha1(reverse(column))
}
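Because business just composes built-in Column expressions, Catalyst optimizes it like a hand-written sha1(reverse(...)) call; there is no UDF black box. A hypothetical usage with the DataFrame API (assuming a df with a name column):

import org.apache.spark.sql.functions.col

df.select(business(col("name")).as("hashed_name")).show(false)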
MUST be callable from within a spark-sql query usable in spark.sql calls
This is more tricky but achievable.
You need to create a custom function registrar:
import scala.collection.mutable

import org.apache.spark.sql.{Column, SparkSession}
import org.apache.spark.sql.catalyst.FunctionIdentifier
import org.apache.spark.sql.catalyst.expressions.Expression

object FunctionAliasRegistrar {

  val funcs: mutable.Map[String, Seq[Column] => Column] = mutable.Map.empty

  def add(name: String, builder: Seq[Column] => Column): this.type = {
    funcs += name -> builder
    this
  }

  def registerAll(spark: SparkSession): Unit = {
    funcs.foreach { case (alias, builder) =>
      def b(children: Seq[Expression]) =
        builder.apply(children.map(expr => new Column(expr))).expr
      spark.sessionState.functionRegistry.registerFunction(FunctionIdentifier(alias), b)
    }
  }
}
Then you can use it as follows:
import org.apache.spark.sql.functions.{lower, reverse, upper}

FunctionAliasRegistrar
  .add("business1", child => lower(reverse(child.head)))
  .add("business2", child => upper(reverse(child.head)))
  .registerAll(spark)

dataset.createTempView("data")

spark.sql(
  """
    | SELECT business1(name), business2(name) FROM data
    |""".stripMargin)
  .show(false)
Output:
+--------------------+--------------------+
|lower(reverse(name))|upper(reverse(name))|
+--------------------+--------------------+
|sined |SINED |
|taram |TARAM |
|1taram |1TARAM |
|2taram |2TARAM |
+--------------------+--------------------+
Hope this helps.

How to specify Encoder when mapping a Spark Dataset from one type to another?

I have a Spark dataset of the following type:
org.apache.spark.sql.Dataset[Array[Double]]
I want to map the array to a Vector so that I can use it as the input dataset for ml.clustering.KMeans.fit(...). So I try to do something like this:
val featureVectors = vectors.map(r => Vectors.dense(r))
But this fails with the following error:
error: Unable to find encoder for type stored in a Dataset. Primitive types (Int, String, etc) and Product types (case classes) are supported by importing spark.implicits._ Support for serializing other types will be added in future releases.
I guess I need to specify an encoder for the map operation, but I am struggling to find a way to do it. Any ideas?
You need the encoder to be available as implicit evidence:
def map[U : Encoder](func: T => U): Dataset[U]
breaks down to:
def map[U](func: T => U)(implicit evidence$1: Encoder[U]): Dataset[U]
So, you need to pass it in or have it available implicitly.
That said, I do not believe that Vector is supported as of yet, so you might have to drop to a DataFrame.
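For completeness, a hedged sketch of that DataFrame route (assuming Spark ML's org.apache.spark.ml.linalg vectors and that vectors is the Dataset[Array[Double]] from the question):

import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.functions.{col, udf}

// Wrap each Array[Double] in an ML Vector; VectorUDT is a supported column type,
// so no Encoder for Vector is needed on the DataFrame side.
val toVector = udf((xs: Seq[Double]) => Vectors.dense(xs.toArray))

val featureVectors = vectors.toDF("values")
  .select(toVector(col("values")).as("features"))

// featureVectors can now be fed to new KMeans().setK(...).fit(featureVectors)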

Text Classification using Spark ML

I have a free text description based on which I need to perform a classification. For example, the description can be that of an incident. Based on the description of the incident, I need to predict the risk associated with the event. For example: "A murder in town" - this description is a candidate for "high" risk.
I tried logistic regression but realized that currently there is support only for binary classification. For multi-class classification (there are only three possible values) based on a free text description, what would be the most suitable algorithm? (Linear Regression or Naive Bayes?)
Since you are using Spark, I assume you have big data. I am no expert, but after reading your answer, I would like to make some points.
Create the Training (80%) and Testing Data Sets (20%)
I would partition my data into Training (60-70%), Testing (15-20%) and Evaluation (15-20%) sets...
The idea is that you can fine-tune your classification algorithm on the Training set, but what we really want with classification tasks is to have them classify unseen data. So fine-tune your algorithm with the Testing set, and when you are done, use the Evaluation set to get a real understanding of how things work!
Stop words
If your data are articles from newspapers and such, I personally haven't seen any significant improvement from using more sophisticated stop-word removal approaches...
But that's just a personal observation; if I were you, I wouldn't focus on that step.
Term Frequency
How about using Term Frequency-Inverse Document Frequency (TF-IDF) term weighting instead? You may want to read: How can I create a TF-IDF for Text Classification using Spark?
I would try both and compare!
Multinomial
Do you have any particular reason to try the multinomial distribution? If not, note that when n is 1 and k is 2 the multinomial distribution is the Bernoulli distribution, as stated in Wikipedia, and Bernoulli is supported.
Try both and compare ( this is something you have to get used to, if you wish to make your model better! :) )
I also see that apache-spark-mllib offers Random Forests, which might be worth a read, at least! ;)
If your data is not that big, I would also try Support Vector Machines (SVMs) from scikit-learn, which however supports Python, so you would have to switch to PySpark or plain Python, abandoning Spark. BTW, if you are actually going for sklearn, this might come in handy: How to split into train, test and evaluation sets in sklearn?, since Pandas plays nicely with sklearn.
Hope this helps!
Off-topic:
This is really not the way to ask a question on Stack Overflow. Read How to ask a good question?
Personally, if I were you, I would do all the things you have done in your answer first, and then post a question, summarizing my approach.
As for the bounty, you may want to read: How does the Bounty System work?
This is how I solved the above problem. Though the prediction accuracy is not bad, the model has to be tuned further for better results.
Experts, please reply if you find anything wrong.
My input data frame has two columns "Text" and "RiskClassification"
Below is the sequence of steps to predict using Naive Bayes in Java.
Add a new column "label" to the input dataframe. This column will basically encode the risk classification, like below:
sqlContext.udf().register("myUDF", new UDF1<String, Integer>() {
    @Override
    public Integer call(String input) throws Exception {
        if ("LOW".equals(input))
            return 1;
        if ("MEDIUM".equals(input))
            return 2;
        if ("HIGH".equals(input))
            return 3;
        return 0;
    }
}, DataTypes.IntegerType);

samplingData = samplingData.withColumn("label", functions.callUDF("myUDF", samplingData.col("riskClassification")));
Create the Training (80%) and Testing (20%) data sets. For example:
DataFrame lowRisk = samplingData.filter(samplingData.col("label").equalTo(1));
DataFrame lowRiskTraining = lowRisk.sample(false, 0.8);
Union all the dataframes to build the complete training data.
Building the test data is slightly tricky: the test data should contain all the data which is not present in the training data.
Start the transformation of the training data and build the model.
Tokenize the text column in the training data set:
Tokenizer tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words");
DataFrame tokenized = tokenizer.transform(trainingRiskData);
Remove stop words. (Here you can also do advanced operations like lemmatization, stemming, POS tagging etc. using the Stanford NLP library.)
StopWordsRemover remover = new StopWordsRemover().setInputCol("words").setOutputCol("filtered");
DataFrame stopWordsRemoved = remover.transform(tokenized);
Compute Term Frequency using HashingTF. CountVectorizer is another way to do this
int numFeatures = 20;
HashingTF hashingTF = new HashingTF().setInputCol("filtered").setOutputCol("rawFeatures")
        .setNumFeatures(numFeatures);
DataFrame rawFeaturizedData = hashingTF.transform(stopWordsRemoved);
IDF idf = new IDF().setInputCol("rawFeatures").setOutputCol("features");
IDFModel idfModel = idf.fit(rawFeaturizedData);
DataFrame featurizedData = idfModel.transform(rawFeaturizedData);
Convert the featurized input into a JavaRDD<LabeledPoint>. Naive Bayes works on LabeledPoint:
JavaRDD<LabeledPoint> labelledJavaRDD = featurizedData.select("label", "features").toJavaRDD()
        .map(new Function<Row, LabeledPoint>() {
            @Override
            public LabeledPoint call(Row arg0) throws Exception {
                LabeledPoint labeledPoint = new LabeledPoint(new Double(arg0.get(0).toString()),
                        (org.apache.spark.mllib.linalg.Vector) arg0.get(1));
                return labeledPoint;
            }
        });
Build the model
NaiveBayes naiveBayes = new NaiveBayes(1.0, "multinomial");
NaiveBayesModel naiveBayesModel = naiveBayes.train(labelledJavaRDD.rdd(), 1.0);
Run all the above transformations on the test data as well.
Loop through the test data frame and perform the actions below.
Create a LabeledPoint using the "label" and "features" in the test data frame.
For example, if the test data frame has the label and features in the third and seventh columns:
LabeledPoint labeledPoint = new LabeledPoint(new Double(dataFrameRow.get(3).toString()),
        (org.apache.spark.mllib.linalg.Vector) dataFrameRow.get(7));
Use the Prediction Model to predict the label
double predictedLabel = naiveBayesModel.predict(labeledPoint.features());
Add the predicted label also as a column to the test data frame.
Now test data frame has the expected label and the predicted label.
You can export the test data to CSV and do the analysis there, or you can compute the accuracy programmatically as well.
