I created a model using H2O's Sparkling Water, and now I'd like to apply it to a huge Spark DataFrame (populated with sparse vectors). I use Python, pyspark, and pysparkling. Basically I need to run a map job with the model.predict() function inside, but copying the data into an H2O context is a huge overhead and not an option. What I think I'll do is extract the POJO (Java class) model from the H2O model and use it to do the map over the DataFrame. My questions are:
Is there a better way?
How do I write a PySpark wrapper for a Java class of which I intend to use only one method, .score(double[] data, double[] result)?
How can I maximally reuse the wrappers from the Spark ML library?
Thank you!
In this case, you can:
1) use the h2o.predict(H2OFrame) method to generate predictions, but you need to transform the RDD into an H2OFrame first. It is not a perfect solution... however, for some cases it can provide a reasonable one (a rough pysparkling sketch of this route is shown at the end of this answer).
2) switch to the JVM and call it directly via Spark's Py4J gateway
This is not a fully working solution right now, since the score0 method needs to accept non-primitive types on the H2O side and also needs to be visible (right now it is protected), but at least it shows the idea:
model = sc._jvm.water.DKV.getGet("deeplearning.model")  # fetch the model from H2O's DKV
double_class = sc._jvm.double
row = sc._gateway.new_array(double_class, nfeatures)    # Java double[] holding one row of features
row[0] = ...
...
row[nfeatures-1] = ...
prediction = model.score0(row)
I created a JIRA improvement for this case: https://0xdata.atlassian.net/browse/PUBDEV-2726
However, a workaround is to create a Java wrapper around the model which would expose the right shape of the score0 function:
class ModelWrapper extends Model {
    public double[] score(double[] row) {
        return score0(row);  // delegate to the protected score0
    }
}
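From PySpark, such a wrapper could then be reached through the same Py4J gateway. A rough, driver-side sketch (the package name com.example, the constructor taking the model key, and the feature count are made up for illustration; the compiled wrapper has to be on the JVM classpath):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
sc = spark.sparkContext
jvm = sc._jvm

# Build a Java double[] holding one row of features
n_features = 4                                        # example value
row = sc._gateway.new_array(jvm.double, n_features)
for i in range(n_features):
    row[i] = 0.0                                      # fill in real feature values

# Call the hypothetical wrapper, which delegates to the protected score0
wrapper = jvm.com.example.ModelWrapper("deeplearning.model")
prediction = wrapper.score(row)                       # returns a Java double[] proxy
print(list(prediction))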
Please also see hex.ModelUtils: https://github.com/h2oai/sparkling-water/blob/master/core/src/main/scala/hex/ModelUtils.scala
(again, you can call these methods directly via the Py4J gateway exposed by Spark)
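Going back to option 1), a rough pysparkling sketch of the copy-into-H2O route might look like the following (method names vary a bit across Sparkling Water versions, e.g. as_h2o_frame/as_spark_frame in older pysparkling releases vs. asH2OFrame/asSparkFrame in newer ones; spark_df and model stand for your own DataFrame and H2O model):

from pysparkling import H2OContext

hc = H2OContext.getOrCreate(spark)           # or H2OContext.getOrCreate(sc) on old versions
h2o_frame = hc.as_h2o_frame(spark_df)        # expensive copy: Spark DataFrame -> H2OFrame
predictions = model.predict(h2o_frame)       # score with the H2O model
result_df = hc.as_spark_frame(predictions)   # back to a Spark DataFrame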
Related
I've come across an odd behavior of Apache Spark.
The problem is that I get a wrong JSON representation of my source dataset when I use the toJson() method.
To explain the problem in more detail, imagine I have a typed dataset with these fields:
SomeObject
(
adtp
date
deviceType
...
)
Then I want to map the elements of this dataset to JSON using the toJson() method (for storing the objects in a Kafka topic).
But Spark converts these objects into their JSON representation incorrectly.
You can see this behaviour in the before/after listings below:
Before using toJson(), the object values were:
SomeObject
(
adtp=1
date="2019-04-24"
deviceType="Mobile"
...
)
After using toJson(), the values of the object are:
SomeObject
(
adtp=10
date="Mobile"
deviceType=""
...
)
Can you help me with this sort of problem? I tried to debug the Spark job, but it's not an easy task (I'm not an expert in Scala).
Finally I found the cause of the problem. I have some JOINs in my data transformations, and then I make my dataset typed (using as(...)).
But the problem is that Spark doesn't change the internal schema of the dataset after typing.
And these schemas (that of the source dataset and that of the data model class) may differ, not only in which columns are present but also in their order.
So when it comes to converting the source dataset to a dataset of JSON strings, Spark just takes the schema left over from the JOINs and uses it for the JSON conversion. This is the cause of the wrong toJson() output.
So the solution is quite simple: just use one of the dataset transformation functions (map(...), for example) to explicitly rebuild the dataset schema. In my case it looks pretty awful, but the most important thing is that it works:
.as(Encoders.bean(SomeObject.class))
.map(
    (MapFunction<SomeObject, SomeObject>) obj -> obj,
    Encoders.bean(SomeObject.class)
);
There is also a ticket on this problem: SPARK-17694.
I'm currently working with PySpark. There is no map function on DataFrame, and one has to go to the RDD for a map function. In Scala there is a map on DataFrame; is there any reason for this?
Dataset.map is not part of the DataFrame (Dataset[Row]) API. It transforms strongly typed Dataset[T] into strongly typed Dataset[U]:
def map[U](func: (T) ⇒ U)(implicit arg0: Encoder[U]): Dataset[U]
and there is simply no place for Python in the strongly typed Dataset world. In general, Datasets are native JVM objects (unlike the RDD, there is no Python-specific implementation) which depend heavily on the rich Scala type system (even the Java API is severely limited). Even if Python implemented some variant of the Encoder API, the data would still have to be converted to an RDD for computations.
In contrast, Python implements its own map-like mechanism with vectorized UDFs, introduced in Spark 2.3. They are focused on a high-performance serde implementation coupled with the Pandas API.
That includes both typical UDFs (in particular the SCALAR and SCALAR_ITER variants) as well as map-like variants: GROUPED_MAP and MAP_ITER, applied through GroupedData.apply and DataFrame.mapInPandas (Spark >= 3.0.0) respectively.
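For illustration, a minimal PySpark sketch of those pandas-based alternatives (assuming Spark >= 3.0 for the type-hint style and for mapInPandas; the DataFrame and column names are made up):

import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql.functions import pandas_udf

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1, 2.0), (2, 3.0)], ["id", "x"])

# SCALAR pandas_udf: element-wise, vectorized over Arrow batches
@pandas_udf("double")
def plus_one(x: pd.Series) -> pd.Series:
    return x + 1.0

df.withColumn("x_plus_one", plus_one("x")).show()

# mapInPandas: the closest analogue of a plain map over whole rows;
# the function receives an iterator of pandas DataFrames, one per batch
def double_x(batches):
    for pdf in batches:
        pdf["x"] = pdf["x"] * 2
        yield pdf

df.mapInPandas(double_x, schema=df.schema).show()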
The Java DSE GraphFrame API does not fully support going from GraphTraversal to DataFrame.
The following GraphTraversal-to-DataFrame conversion is possible:
gf().E().df()
However, this does not work:
gf().E().hasLabel("foo").df()
This is because hasLabel() returns a GraphTraversal instead of a com.datastax.bdp.graph.spark.graphframe.DseGraphTraversal, and GraphTraversal does not have the df() method.
This should be possible per the docs:
To finish a traversal and return to the DataFrame API instead of list
or iterator use the .df() method:
graph.V().df()
I'm using dse-graph-frames:5.1.4 along with dse-byos_2.11:5.1.4.
Is this expected? All I really want is to do some graph traversal and convert it into a DataFrame.
It works in Scala as-is; in Java you need to add a cast to DseGraphTraversal:
((DseGraphTraversal)gf.E().hasLabel("foo")).df()
I gave a longer answer here: Iterating a GraphTraversal with GraphFrame causes UnsupportedOperationException Row to Vertex conversion.
From the Dataset and RDD documentation:
DataSet:
A Dataset is a strongly typed collection of domain-specific objects
that can be transformed in parallel using functional or relational
operations. Each dataset also has an untyped view called a DataFrame,
which is a Dataset of Row
RDD:
RDD represents an immutable, partitioned collection of elements that
can be operated on in parallel
Also, this is said about the difference between them:
The major difference is, dataset is collection of domain specific
objects where as RDD is collection of any object. Domain object part
of definition signifies the schema part of dataset. So dataset API is
always strongly typed and optimized using schema where RDD is not.
I have two questions here:
What does it mean that a dataset is a collection of domain-specific objects while an RDD is a collection of any object? Given a case class Person, I thought Dataset[Person] and RDD[Person] are both collections of domain-specific objects.
Why is it said that the Dataset API is always strongly typed and optimized using a schema while the RDD is not? I thought RDD[Person] is also strongly typed.
A strongly typed Dataset (not DataFrame) is a collection of record types (Scala Products) which are mapped to an internal storage format using so-called Encoders, while an RDD can store arbitrary serializable objects (Serializable or Kryo-serializable). Therefore, as a container, an RDD is much more generic than a Dataset.
The following:
"So dataset API is always strongly typed (...) where RDD is not."
is utterly absurd and shows that you shouldn't trust everything you find on the Internet. In general, the Dataset API has significantly weaker type protections than the RDD. This is particularly obvious when working with Dataset[Row], but it applies to any Dataset.
Consider, for example, the following:
import org.apache.spark.sql.functions.array
import spark.implicits._  // assumes a SparkSession named spark, as in spark-shell

case class FooBar(id: Int, foos: Seq[Int])

Seq[(Integer, Integer)]((1, null))
  .toDF.select($"_1" as "id", array($"_2") as "foos")
  .as[FooBar]
which compiles and is accepted by as[FooBar] even though a null has been smuggled into the Seq[Int] field, clearly breaking type safety.