Implementing a Spark SQL UserDefinedAggregateFunction that performs multiple passes over a column - apache-spark

I've been experimenting with the UserDefinedAggregateFunction class to write aggregate functions for use in Spark SQL.
It works well for implementing single-pass operations like sum(), avg() etc., but is there a trick you can use to perform multiple passes over a column?
For example, calculating variance using the naive approach, i.e. a first pass that calculates the column mean and then a second pass that uses this value to calculate the variance. I know there are single-pass algorithms for this that give good approximations (and that is in fact what Spark implements); I was just using this as an example of a two-pass operation.
It would be nice to be able to do the following,
spark.sql("SELECT product, MultiPassAgg(price) FROM products GROUP BY product")
I appreciate that I can do this kind of thing using Dataset / DataFrame operations in stages etc., but I was just looking for a clean approach as illustrated in the SQL above.
Any ideas or suggestions?

This should be possible, though the following suggestion could potentially use a large amount of memory if a large number of rows belong to any given group.
In the implementation of your UserDefinedAggregateFunction, set up the bufferSchema with a StructField whose DataType is a collection (such as ArrayType), to act as an internal collection of the inputs provided via update.
Then, in update you append each input to your collection, and in merge you concatenate all of the collections into a single collection. This leaves the full group available for use in evaluate.
Finally, during evaluate you can operate across the entire collection of rows in any way you see fit.
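For illustration, here is a minimal sketch of such a UDAF in Scala, computing the naive two-pass variance from the question (the class and column names are just examples):
import org.apache.spark.sql.Row
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
import org.apache.spark.sql.types._

class MultiPassAgg extends UserDefinedAggregateFunction {
  def inputSchema: StructType = StructType(StructField("value", DoubleType) :: Nil)
  // A single ArrayType buffer field that accumulates every input value for the group.
  def bufferSchema: StructType = StructType(StructField("values", ArrayType(DoubleType)) :: Nil)
  def dataType: DataType = DoubleType
  def deterministic: Boolean = true

  def initialize(buffer: MutableAggregationBuffer): Unit =
    buffer(0) = Seq.empty[Double]

  // Collect each incoming value into the buffer's collection.
  def update(buffer: MutableAggregationBuffer, input: Row): Unit =
    if (!input.isNullAt(0)) buffer(0) = buffer.getSeq[Double](0) :+ input.getDouble(0)

  // Concatenate the partial collections produced on different partitions.
  def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit =
    buffer1(0) = buffer1.getSeq[Double](0) ++ buffer2.getSeq[Double](0)

  // With the whole group in hand, make as many passes as needed.
  def evaluate(buffer: Row): Double = {
    val values = buffer.getSeq[Double](0)
    val mean = values.sum / values.size                         // pass 1: mean
    values.map(v => (v - mean) * (v - mean)).sum / values.size  // pass 2: variance
  }
}
After spark.udf.register("MultiPassAgg", new MultiPassAgg), the query from the question should work as written; the trade-off is that every value of a group is kept in memory until evaluate runs.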

Related

Speeding up python load times for read only dictionary?

I am currently working on a bioinformatics project that involves a dictionary of roughly 10 million unique keys, each of which returns a subset of categorical strings.
I currently unpickle the dictionary object, but my main issue is that unpickling takes a very long time. I also need to iterate through a file, generating a set of keys (~200) for each row, look up those keys, append the resulting lists to a list of lists, and then flatten that list to generate a Counter of value frequencies for each row. I have heard that a SQL-database-like structure would end up trading load time for lookup time.
The file containing the keys typically has about 100k rows, so this was my best solution; however, it seems that even on faster PCs with more RAM, more cores, and NVMe storage, the time spent loading the database is extremely slow.
I was wondering which direction (a different database structure, alternatives to pickle such as shelve or marshal, parallelizing the code with multiprocessing) would provide an overall speed-up (through faster loading, faster lookups, or both) for my code.
Specifically: I need to create a database of the form key (a DNA sub-sequence) -> value [A, B, C, Y, Z], on the order of 1e6-1e7 entries.
When used, this database is loaded, and then, given a query file (1e6 DNA sequences to query), a lookup of all the sub-sequences in each sequence is performed as follows.
For each query:
slice the sequence into subsequences.
Look up each subsequence and return the list of categoricals for each subsequence
Aggregate lists using collections.Counter
I was wondering how to either:
Speed up the loading time of the database, either through a better data structure, or some optimization
Generally improve the speed of the run itself (querying subsequences)
I'm not sure there is a single right answer here, since there are trade-offs, but two options come to mind:
1st: consider using pandas.DataFrame for the data structure.
It allows serialization/deserialization to many formats (I believe CSV should be the fastest, but I would also give SQL a try). As for query time, it should be much faster than a dict for complex queries.
2nd: a key-value store, such as MongoDB, which has map-reduce and other fancy query capabilities; in this case the data is always available without loading times.

Export spark feature transformation pipeline to a file

PMML, Mleap and PFA currently only support row-based transformations. None of them supports frame-based transformations like aggregates, groupBy or join. What is the recommended way to export a Spark pipeline consisting of these operations?
I see 2 options with respect to Mleap:
1) implement dataframe-based transformers and the SQLTransformer-Mleap equivalent. This solution seems to be conceptually the best (since you can always encapsulate such transformations in a pipeline element), but it is also a lot of work, to be honest. See https://github.com/combust/mleap/issues/126
2) extend the DefaultMleapFrame with the respective operations you want to perform, and then apply the required actions to the data handed to the REST server within a modified MleapServing subproject.
I actually went with 2) and added implode, explode and join as methods to the DefaultMleapFrame, and also a HashIndexedMleapFrame that allows for fast joins. I did not implement groupby and agg, but in Scala this is relatively easy to accomplish.
PMML and PFA are standards for representing machine learning models, not data processing pipelines. A machine learning model takes in a data record, performs some computation on it, and emits an output data record. So by definition, you are working with a single isolated data record, not a collection/frame/matrix of data records.
If you need to represent complete data processing pipelines (where the ML model is just part of the workflow), then you need to look for other/combined standards. Perhaps SQL paired with PMML would be a good choice. The idea is that you want to perform data aggregation outside of the ML model, not inside it (e.g. a SQL database will be much better at it than any PMML or PFA runtime).

how to make different sum over the same line in Spark

I have a Spark dataframe with some numeric columns.
I would like to perform several aggregation operations on these columns, creating a new column for each function, some of which may also be user defined.
The easy solution would be using dataframe and withColumn. For instance, if I wanted to calculate the mean (by hand) and the function my_function on fields field_1 and field_2, I would do:
df = df.withColumn("mean", (df["field_1"] + df["field_2"]) / 2)
df = df.withColumn("foo", my_function(df["field_1"], df["field_2"]))
My doubt is about efficiency. Each of the two functions above scans the whole dataframe, while a smarter approach would calculate both results using one single scan.
Any hint on how to do that?
Thanks
Mauro
TL;DR You're trying to solve a problem which doesn't exist.
SQL transformations are lazy and declarative. A series of operations is converted into a logical execution plan, and then into a physical execution plan. At the first stage the Spark optimizer has the freedom to reorder, combine or even remove any part of the plan. You have to, however, distinguish between two cases:
Python UDFs.
SQL expressions.
The first requires a separate conversion to a Python RDD and cannot be combined with native processing. The second one is processed natively using generated code.
Once you request the results, the physical plan is converted into stages and executed.
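As a quick way to see this for yourself (sketched here in Scala; the same idea applies to the PySpark code above as long as my_function is a SQL expression rather than a Python UDF), inspect the physical plan:
import org.apache.spark.sql.functions.col

val result = df
  .withColumn("mean", (col("field_1") + col("field_2")) / 2)
  .withColumn("foo", col("field_1") * col("field_2"))  // stand-in for my_function as a SQL expression

result.explain()  // both columns come out of a single Project over a single scan of the source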

Apply a custom function to a spark dataframe group

I have a very big table of time series data that have these columns:
Timestamp
LicensePlate
UberRide#
Speed
Each collection of LicensePlate/UberRide data should be processed considering the whole set of data. In other words, I do not need to process the data row by row, but rather all rows grouped by (LicensePlate/UberRide) together.
I am planning to use Spark with the DataFrame API, but I am confused about how I can perform a custom calculation over a grouped Spark dataframe.
What I need to do is:
Get all data
Group by some columns
For each dataframe group, apply a function f(x), returning a custom object per group
Get the results by applying g(x), returning a single custom object
How can I do steps 3 and 4? Any hints on which Spark API (dataframe, dataset, rdd, maybe pandas...) I should use?
What you are looking for has existed since Spark 2.3: Pandas vectorized UDFs. They allow you to group a DataFrame and apply custom transformations with pandas, distributed across the groups:
df.groupBy("groupColumn").apply(myCustomPandasTransformation)
It is very easy to use, so I will just put a link to Databricks' presentation of pandas UDFs.
However, I don't know of an equally practical way to make grouped transformations in Scala yet, so any additional advice is welcome.
EDIT: in Scala, you have been able to achieve the same thing since earlier versions of Spark, using Dataset's groupByKey + mapGroups/flatMapGroups.
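Roughly, and assuming a made-up Ride case class and input path matching the columns in the question, that route looks like this:
import spark.implicits._

case class Ride(licensePlate: String, uberRide: Int, timestamp: Long, speed: Double)

val rides = spark.read.parquet("/path/to/rides").as[Ride]  // hypothetical source

val perRide = rides
  .groupByKey(r => (r.licensePlate, r.uberRide))
  .mapGroups { case ((plate, ride), rows) =>
    // `rows` iterates over the whole group, so the custom f(x) sees all of it at once
    val speeds = rows.map(_.speed).toVector
    (plate, ride, speeds.max, speeds.sum / speeds.size)
  }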
While Spark provides some ways to integrate with Pandas, it doesn't make Pandas distributed. So whatever you do with Pandas in Spark is simply a local operation (either on the driver, or on an executor when used inside transformations).
If you're looking for a distributed system with Pandas-like API you should take a look at dask.
You can define User Defined Aggregate Functions or Aggregators to process grouped Datasets, but this part of the API is directly accessible only in Scala. It is not that hard to write a Python wrapper once you create one.
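As a rough illustration of the Aggregator route (reusing the hypothetical Ride type from the sketch above; all names are made up):
import org.apache.spark.sql.{Encoder, Encoders}
import org.apache.spark.sql.expressions.Aggregator

case class SpeedStats(sum: Double, count: Long)

// Averages the speed field of the hypothetical Ride records.
object AvgSpeed extends Aggregator[Ride, SpeedStats, Double] {
  def zero: SpeedStats = SpeedStats(0.0, 0L)
  def reduce(b: SpeedStats, r: Ride): SpeedStats = SpeedStats(b.sum + r.speed, b.count + 1)
  def merge(a: SpeedStats, b: SpeedStats): SpeedStats = SpeedStats(a.sum + b.sum, a.count + b.count)
  def finish(b: SpeedStats): Double = b.sum / b.count
  def bufferEncoder: Encoder[SpeedStats] = Encoders.product[SpeedStats]
  def outputEncoder: Encoder[Double] = Encoders.scalaDouble
}

val avgSpeedPerPlate = rides.groupByKey(_.licensePlate).agg(AvgSpeed.toColumn)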
The RDD API provides a number of functions which can be used to perform operations on groups, starting with the low-level repartition / repartitionAndSortWithinPartitions and ending with a number of *byKey methods (combineByKey, groupByKey, reduceByKey, etc.).
Which one is applicable in your case depends on the properties of the function you want to apply (is it associative and commutative, can it work on streams, does it expect a specific order?).
The most general but inefficient approach can be summarized as follows:
h(rdd.keyBy(f).groupByKey().mapValues(g).collect())
where f maps from a value to a key, g corresponds to the per-group aggregation and h is a final merge. Most of the time you can do much better than that, so it should be used only as a last resort.
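As a concrete, deliberately naive instantiation of that pattern on the hypothetical Ride data, computing a per-plate median speed:
// f: key by license plate; g: median speed for the group; h: collect into a Map on the driver.
val medianSpeedByPlate: Map[String, Double] =
  rides.rdd
    .keyBy(_.licensePlate)
    .groupByKey()
    .mapValues { group =>
      val speeds = group.map(_.speed).toVector.sorted
      speeds(speeds.size / 2)
    }
    .collect()
    .toMap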
Relatively complex logic can be expressed using DataFrames / Spark SQL and window functions.
See also Applying UDFs on GroupedData in PySpark (with functioning python example)

Avoid the use of Java data structures in Apache Spark to avoid copying the data

I have a MySQL database with a single table containing about 100 million records (~25GB, ~5 columns). Using Apache Spark, I extract this data via a JDBC connector and store it in a DataFrame.
From here, I do some pre-processing of the data (e.g. replacing the NULL values), so I absolutely need to go through each record.
Then I would like to perform dimensionality reduction and feature selection (e.g. using PCA), perform clustering (e.g. K-Means) and later on do the testing of the model on new data.
I have implemented this in Spark's Java API, but it is too slow (for my purposes) since I do a lot of copying of the data from a DataFrame to a java.util.Vector and java.util.List (to be able to iterate over all records and do the pre-processing), and later back to a DataFrame (since PCA in Spark expects a DataFrame as input).
I have tried extracting information from the database into a org.apache.spark.sql.Column but cannot find a way to iterate over it.
I also tried avoiding the use of Java data structures (such as List and Vector) by using the org.apache.spark.mllib.linalg.{DenseVector, SparseVector}, but cannot get that to work either.
Finally, I also considered using JavaRDD (by creating it from a DataFrame and a custom schema), but couldn't work it out entirely.
After a lengthy description, my question is: is there a way to do all steps mentioned in the first paragraph, without copying all the data into a Java data structure?
Maybe one of the options I tried could actually work, but I just can't seem to find out how, as the docs and literature on Spark are a bit scarce.
From the wording of your question, it seems there is some confusion about the stages of Spark processing.
First, we tell Spark what to do by specifying inputs and transformations. At this point, the only things that are known are (a) the number of partitions at various stages of processing and (b) the schema of the data. org.apache.spark.sql.Column is used at this stage to identify the metadata associated with a column. However, it doesn't contain any of the data. In fact, there is no data at all at this stage.
Second, we tell Spark to execute an action on a dataframe/dataset. This is what kicks off processing. The input is read and flows through the various transformations and into the final action operation, be it collect or save or something else.
So, that explains why you cannot "extract information from the database into" a Column.
As for the core of your question, it's hard to comment without seeing your code and knowing exactly what it is you are trying to accomplish, but it is safe to say that this much migrating between types is a bad idea.
Here are a couple of questions that might help guide you to a better outcome:
Why can't you perform the data transformations you need by operating directly on the Row instances?
Would it be convenient to wrap some of your transformation code into a UDF or UDAF? (A rough sketch follows.)
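For instance, the NULL-replacement pre-processing from the question can usually stay inside the DataFrame API, either with the built-in na functions or with a UDF that runs inside Spark's own pass over the data (the column names below are hypothetical):
import org.apache.spark.sql.functions.{col, udf}

// Simple defaults for NULLs, expressed against the DataFrame itself; no copying into Java collections.
val filled = df.na.fill(Map("price" -> 0.0, "category" -> "unknown"))

// More custom per-row logic as a UDF, still executed inside Spark's plan.
val normalize = udf((s: String) => Option(s).map(_.trim.toLowerCase).getOrElse("unknown"))
val cleaned = filled.withColumn("category", normalize(col("category")))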
Hope this helps.
