pySpark: is it possible to groupBy() with one single node per group? - apache-spark

I'm using pySpark to compute per-group matrices. It looks like the computation would be faster if Spark stored any given group's rows on one single node, so Spark could compute each matrix locally. I'm afraid inter-node cooperation could take much longer.
Do map() and groupBy() usually achieve this kind of thing? Should I try to specify it as an option, if possible?
NB: computing the matrices involves a distance between each row and the previous one, within each (sorted) group.

It seems Spark will do that by default.
See here: http://backtobazics.com/big-data/spark/apache-spark-groupby-example/

I guess you're asking about mapPartitions(): the operation then happens locally within each partition.
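One way to read that suggestion, as a minimal sketch: assume an RDD of (group_key, row) pairs, repartition it by key so each group's rows are colocated, then do the per-group work inside mapPartitions. The partition count and compute_matrix are hypothetical placeholders, not anything from the question.
from collections import defaultdict

def per_partition_matrices(rows):
    # after partitionBy, every row sharing a key sits in the same partition,
    # so each group can be assembled and processed locally
    groups = defaultdict(list)
    for key, row in rows:
        groups[key].append(row)
    for key, group_rows in groups.items():
        yield key, compute_matrix(group_rows)   # compute_matrix is hypothetical (sort here if order matters)

matrices = (rdd.partitionBy(200)                # colocate each key's rows; 200 is arbitrary
               .mapPartitions(per_partition_matrices))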

Related

Spark intersection implementation

How does Spark implement the intersection method? Does it require the two RDDs to be colocated on a single machine?
From here it says that it uses hash tables, which is a bit odd since that's probably not scalable; sorting both RDDs and then comparing them item by item might have provided a more scalable solution.
Any thoughts on the subject are welcome.
It definitely doesn't need the RDDs to be colocated on a single machine. You can just look at the code for the details; it looks like it uses a cogroup.
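A hedged sketch of that cogroup idea (not Spark's actual source): map each element to a key, cogroup the two RDDs so only matching keys meet on the same node, and keep the keys present on both sides.
a = sc.parallelize([1, 2, 3, 4])
b = sc.parallelize([3, 4, 5])

common = (a.map(lambda v: (v, None))
           .cogroup(b.map(lambda v: (v, None)))
           .filter(lambda kv: len(kv[1][0]) > 0 and len(kv[1][1]) > 0)  # present in both RDDs
           .keys())

print(sorted(common.collect()))  # [3, 4]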

How to compute different sums over the same rows in Spark

I have a Spark dataframe with some numeric columns.
I would like to perform several aggregation operations on these columns, creating a new column for each function; some of the functions may also be user defined.
The easy solution would be using dataframe and withColumn. For instance, if I wanted to calculate the mean (by hand) and the function my_function on fields field_1 and field_2 I would do:
df = df.withColumn("mean", (df["field_1"] + df["field_2"]) / 2)
df = df.withColumn("foo", my_function(df["field_1"], df["field_2"]))
My doubt is about efficiency. Each of the two calls above appears to scan the whole dataset, while a smarter approach would calculate both results in a single scan.
Any hint on how to do that?
Thanks
Mauro
TL;DR You're trying to solve a problem which doesn't exist.
SQL transformations are lazy and declarative. A series of operations is converted into a logical execution plan, and then into a physical execution plan. In the first stage the Spark optimizer is free to reorder, combine, or even remove any part of the plan. You have, however, to distinguish between two cases:
Python UDFs.
SQL expressions.
The first requires a separate conversion to a Python RDD and cannot be combined with native processing. The second is processed natively using generated code.
Once you request the results, the physical plan is converted into stages and executed.
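You can check this yourself by inspecting the plan. In the minimal sketch below (the column names are just the ones from the question, the data is made up), both derived columns end up in a single Project over one scan, not one pass per column.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1.0, 2.0), (3.0, 4.0)], ["field_1", "field_2"])

df = df.withColumn("mean", (df["field_1"] + df["field_2"]) / 2)
df = df.withColumn("total", df["field_1"] + df["field_2"])

# the physical plan shows a single Project over one scan
df.explain()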

Apply a custom function to a spark dataframe group

I have a very big table of time series data that have these columns:
Timestamp
LicensePlate
UberRide#
Speed
Each collection of LicensePlate/UberRide data should be processed considering the whole set of data. In other words, I do not need to process the data row by row, but all rows grouped by (LicensePlate/UberRide) together.
I am planning to use Spark with the dataframe API, but I am confused about how I can perform a custom calculation over a grouped Spark dataframe.
What I need to do is:
Get all data
Group by some columns
For each Spark dataframe group, apply a function f(x), returning a custom object per group
Get the results by applying g(x), returning a single custom object
How can I do steps 3 and 4? Any hints on which Spark API (dataframe, dataset, RDD, maybe pandas...) I should use?
What you are looking for has existed since Spark 2.3: pandas vectorized UDFs. They let you group a DataFrame and apply custom transformations with pandas, distributed across the groups:
df.groupBy("groupColumn").apply(myCustomPandasTransformation)
It is very easy to use, so I will just put a link to Databricks' presentation of pandas UDFs.
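For completeness, a minimal sketch using the Spark 2.3 grouped-map syntax, with the LicensePlate/Speed columns from the question; the per-group aggregation itself is just an illustrative assumption.
import pandas as pd
from pyspark.sql.functions import pandas_udf, PandasUDFType

@pandas_udf("LicensePlate string, avg_speed double", PandasUDFType.GROUPED_MAP)
def avg_speed(pdf):
    # pdf is a pandas DataFrame holding every row of one group
    return pd.DataFrame({"LicensePlate": [pdf["LicensePlate"].iloc[0]],
                         "avg_speed": [pdf["Speed"].mean()]})

result = df.groupBy("LicensePlate").apply(avg_speed)
In Spark 3.x the same thing is spelled df.groupBy(...).applyInPandas(func, schema).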
However, I don't know such a practical way to make grouped transformations in Scala yet, so any additional advice is welcome.
EDIT: in Scala you have been able to achieve the same thing since earlier versions of Spark, using Dataset's groupByKey + mapGroups/flatMapGroups.
While Spark provides some ways to integrate with Pandas, it doesn't make Pandas distributed. So whatever you do with Pandas in Spark is simply a local operation (either on the driver, or on an executor when used inside transformations).
If you're looking for a distributed system with Pandas-like API you should take a look at dask.
You can define User Defined Aggregate Functions or Aggregators to process grouped Datasets, but this part of the API is directly accessible only in Scala. It is not that hard to write a Python wrapper once you have created one.
RDD API provides a number of functions which can be used to perform operations in groups starting with low level repartition / repartitionAndSortWithinPartitions and ending with a number of *byKey methods (combineByKey, groupByKey, reduceByKey, etc.).
Which one is applicable in your case depends on the properties of the function you want to apply (is it associative and commutative, can it work on streams, does it expect a specific order?).
The most general but inefficient approach can be summarized as follows:
h(rdd.keyBy(f).groupByKey().mapValues(g).collect())
where f maps from value to key, g corresponds to the per-group aggregation and h is a final merge. Most of the time you can do much better than that, so it should be used only as a last resort.
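As a concrete (hypothetical) instance of this pattern, using the ride data from the question: f extracts the license plate, g averages the speeds within a group, and h just turns the collected pairs into a dict on the driver.
rides = sc.parallelize([("ABC123", 30.0), ("ABC123", 50.0), ("XYZ789", 40.0)])

f = lambda r: r[0]              # value -> key (license plate)

def g(rows):                    # per-group aggregation: mean speed
    speeds = [speed for _, speed in rows]
    return sum(speeds) / len(speeds)

h = dict                        # final merge on the driver

result = h(rides.keyBy(f).groupByKey().mapValues(g).collect())
# {'ABC123': 40.0, 'XYZ789': 40.0}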
Relatively complex logic can be expressed using DataFrames / Spark SQL and window functions.
See also Applying UDFs on GroupedData in PySpark (with functioning python example)

Implementing a Spark SQL UserDefinedAggregateFunction that performs multiple passes over a column

I've been experimenting with the UserDefinedAggregateFunction class to write aggregate functions for use in Spark SQL.
It works well for implementing single pass operations like sum(), avg() etc., but is there a trick you can use to perform multiple passes over a column?
For example, calculating variance using the naive approach, i.e. a first pass that calculates the column mean and a second pass that uses this value to calculate the variance. I know that there are single-pass algorithms for doing this that give good approximations (as in fact implemented by Spark); I was just using this as an example of a two-pass operation.
It would be nice to be able to do the following,
spark.sql("SELECT product, MultiPassAgg(price) FROM products GROUP BY product")
I appreciate that I can do this kind of thing using Dataset / DataFrame operations in stages, etc., but I was just looking for a clean approach as illustrated in the SQL above.
Any ideas or suggestions?
This should be possible, though the following suggestion could potentially use a large amount of memory if a large number of rows are involved in any given partition.
In the implementation of your UserDefinedAggregateFunction, set up the bufferSchema having a StructField that includes a DataType that is a collection (such as ArrayType) to act as an internal collection of inputs provided via update.
Then, in update you append each input to your collection, and in merge you combine all of the collections into a single collection. This gives you the group's full set of rows to work with in evaluate.
Finally, during evaluate you can operate across the entire collection of rows in any way you see fit.

How to score all user-product combinations in Spark MatrixFactorizationModel?

Given a MatrixFactorizationModel what would be the most efficient way to return the full matrix of user-product predictions (in practice, filtered by some threshold to maintain sparsity)?
Via the current API, one could pass a cartesian product of users and products to the predict function, but it seems to me that this will do a lot of extra processing.
Would accessing the private userFeatures, productFeatures be the correct approach, and if so, is there a good way to take advantage of other aspects of the framework to distribute this computation in an efficient way? Specifically, is there an easy way to do better than multiplying all pairs of userFeature, productFeature "by hand"?
Spark 1.1 has a recommendProducts method that can be mapped to each user ID. This is better than nothing but not really optimized for recommending to all users.
I would double-check that you really mean to make recommendations for everyone; at scale, this is inherently a big slow operation. Consider predicting for users that have been recently active only.
Otherwise, yes, your best bet is to create your own method. The cartesian join of the feature RDDs is probably too slow, as it shuffles so many copies of the feature vectors. Choose the larger of the user / product feature sets and map over that; in each worker, hold the other feature set in memory. If this isn't feasible, you can make it more complex and map several times against subsets of the smaller RDD held in memory.
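A hedged sketch of that idea, assuming the product feature set is the smaller one and fits in executor memory; the 0.5 threshold is just an illustrative placeholder for the sparsity filter mentioned in the question.
import numpy as np

product_features = model.productFeatures().collect()        # [(productId, features), ...]
bc_products = sc.broadcast([(pid, np.array(f)) for pid, f in product_features])

def score_user(user):
    user_id, features = user
    u = np.array(features)
    for pid, p in bc_products.value:
        score = float(u.dot(p))
        if score > 0.5:                                      # keep only scores above a threshold (assumption)
            yield user_id, pid, score

scores = model.userFeatures().flatMap(score_user)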
As of Spark 2.2, recommendProductsForUsers(num) would be the method.
Recommends the top "num" number of products for all users. The number of recommendations returned per user may be less than "num".
https://spark.apache.org/docs/2.2.0/api/python/pyspark.mllib.html
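For example, a minimal sketch; the ratings, rank and top-N count are illustrative assumptions.
from pyspark.mllib.recommendation import ALS, Rating

ratings = sc.parallelize([Rating(1, 10, 5.0), Rating(1, 20, 3.0), Rating(2, 10, 4.0)])
model = ALS.train(ratings, rank=10, iterations=5)

# RDD of (userId, (Rating, ...)) with at most 3 recommendations per user
top_per_user = model.recommendProductsForUsers(3)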
