I have a query running in Presto which has an array_intersect condition. It takes around 5 hours to run. If I remove the array_intersect, it takes less than an hour.
CARDINALITY(ARRAY_INTERSECT(links, ARRAY['504949547', '504949616', '515604515', '515604526', '515604527', '515604528'])) > 0
Can anyone please let me know how to improve the performance? I have to get it under 5 minutes.
I have tried enabling spill to disk but it didn't help. The input data size is around 1 TB.
Thanks
array_intersect materializes the result (the intersection), whereas the only thing you are checking is the membership of certain predefined elements.
In this case I'd recommend using any_match instead.
any_match(links, e -> e IN ('504949547', '504949616', ...))
If you're using a Presto version that doesn't have any_match, you can use reduce:
reduce(
links, -- array to reduce
false, -- initial state
(s, e) -> s OR e IN ('504949547', '504949616', ...), -- reduction function
s -> s) -- output function
I have tried enabling spill to disk but it didn't help.
Note: In Presto, spill is supported for certain operators (most joins, aggregations, Order By, Window functions). It is not applicable to scalar functions operating on ARRAYs. Also, you should not expect spill to increase performance; it can only reduce the memory footprint, at the cost of performance.
Let's say I have two DataFrames, A and B, that I want to join using an inner join. Each one has 100 columns and billions of rows.
If in my use case I'm only interested in 10 columns of A and 4 columns of B, does Spark do the optimization for me and shuffle only 14 columns, or will it be shuffling everything and then selecting the 14 columns?
Query 1 :
A_select = A.select("{10 columns}").as("A")
B_select = B.select("{4 columns}").as("B")
result = A_select.join(B_select, $"A.id" === $"B.id")
Query 2 :
A.join(B, $"A.id"==$"B.id").select("{14 columns}")
Are Query 1 and Query 2 equivalent in terms of behavior, execution time, and data shuffling?
Thanks in advance for your answers.
Yes, Spark will handle the optimization for you. Due to its lazy evaluation behaviour, only the required attributes will be selected from the DataFrames (A and B).
You can use the explain function to view the logical/physical plan:
result.explain()
Both queries will produce the same physical plan, hence execution time and data shuffling will be the same.
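For a quick check, here's a minimal sketch with toy stand-ins for A and B (the real tables aren't shown in the question) that builds both formulations and prints their plans:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("pruning-check").getOrCreate()
import spark.implicits._

// Toy stand-ins for A and B (the real tables have ~100 columns each).
val A = Seq((1, "x", "y")).toDF("id", "a1", "a2")
val B = Seq((1, "p", "q")).toDF("id", "b1", "b2")

// Query 1: project first, then join.
val q1 = A.select("id", "a1").as("A")
  .join(B.select("id", "b1").as("B"), $"A.id" === $"B.id")

// Query 2: join first, then project. Catalyst's column pruning pushes the
// projection below the join, so both plans read and shuffle the same columns.
val q2 = A.as("A").join(B.as("B"), $"A.id" === $"B.id")
  .select($"A.id", $"A.a1", $"B.b1")

q1.explain()
q2.explain()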
Reference - Pyspark documentation for explain function.
I have a Spark DataFrame where all fields are integer type. I need to count how many individual cells are greater than 0.
I am running locally and have a DataFrame with 17,000 rows and 450 columns.
I have tried two methods, both yielding slow results:
Version 1:
(for (c <- df.columns) yield df.where(s"$c > 0").count).sum
Version 2:
df.columns.map(c => df.filter(df(c) > 0).count)
This calculation takes 80 seconds of wall-clock time. With Python Pandas, it takes a fraction of a second. I am aware that for small data sets and local operation, Python may perform better, but this seems extreme.
Trying to make a Spark-to-Spark comparison, I find that running MLlib's PCA algorithm on the same data (converted to a RowMatrix) takes less than 2 seconds!
Is there a more efficient implementation I should be using?
If not, how is the seemingly much more complex PCA calculation so much faster?
What to do
import org.apache.spark.sql.functions.{col, count, when}
df.select(df.columns map (c => count(when(col(c) > 0, 1)) as c): _*)
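If what you ultimately need is a single grand total of cells > 0 rather than one count per column, a small follow-up sketch (with a toy stand-in for df, since the real data isn't shown) could look like this:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, count, when}

val spark = SparkSession.builder.appName("count-positive-cells").getOrCreate()
import spark.implicits._

// Toy stand-in for the 17,000 x 450 integer DataFrame from the question.
val df = Seq((1, 0, 3), (0, 2, 0), (4, 5, 0)).toDF("a", "b", "c")

// One count per column, computed in a single scan, then summed on the driver
// (the returned row holds just one Long per column).
val perColumn = df.select(df.columns.map(c => count(when(col(c) > 0, 1)).as(c)): _*)
val totalPositiveCells = perColumn.first().toSeq.map(_.asInstanceOf[Long]).sum
// totalPositiveCells == 5 for the toy data above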
Why
Both of your attempts create a number of jobs proportional to the number of columns. Computing the execution plan and scheduling the jobs alone are expensive and add significant overhead depending on the amount of data.
Furthermore, data might be loaded from disk and/or parsed each time a job is executed, unless the data is fully cached with a significant memory safety margin that ensures the cached data will not be evicted.
This means that in the worst-case scenario the nested-loop-like structure you use can be roughly quadratic in terms of the number of columns.
The code shown above handles all columns at the same time, requiring only a single data scan.
The problem with your approach is that the file is scanned for every column (unless you have cached it in memory). The fastest way with a single FileScan should be:
import org.apache.spark.sql.functions.{explode,array}
val cnt: Long = df
.select(
explode(
array(df.columns.head,df.columns.tail:_*)
).as("cell")
)
.where($"cell">0).count
Still, I think it will be slower than Pandas, as Spark has a certain overhead due to its parallelization engine.
I'm joining 2 datasets using Apache Spark ML LSH's approxSimilarityJoin method, but I'm seeing some strange behaviour.
After the (inner) join the dataset is a bit skewed; however, every time, one or more tasks take an inordinate amount of time to complete.
The median is 6 ms per task (I'm running it on a smaller source dataset to test), but one task takes 10 minutes. It hardly uses any CPU cycles; it actually joins data, but it is so, so slow.
The next slowest task runs in 14s, has 4x more records & actually spills to disk.
The join itself is an inner join between the two datasets on pos & hashValue (minhash), in accordance with the minhash specification, plus a UDF to calculate the Jaccard distance between match pairs.
Explode the hashtables:
modelDataset.select(
struct(col("*")).as(inputName), posexplode(col($(outputCol))).as(explodeCols))
Jaccard distance function:
override protected[ml] def keyDistance(x: Vector, y: Vector): Double = {
val xSet = x.toSparse.indices.toSet
val ySet = y.toSparse.indices.toSet
val intersectionSize = xSet.intersect(ySet).size.toDouble
val unionSize = xSet.size + ySet.size - intersectionSize
assert(unionSize > 0, "The union of two input sets must have at least 1 elements")
1 - intersectionSize / unionSize
}
Join of processed datasets :
// Do a hash join on where the exploded hash values are equal.
val joinedDataset = explodedA.join(explodedB, explodeCols)
.drop(explodeCols: _*).distinct()
// Add a new column to store the distance of the two rows.
val distUDF = udf((x: Vector, y: Vector) => keyDistance(x, y), DataTypes.DoubleType)
val joinedDatasetWithDist = joinedDataset.select(col("*"),
distUDF(col(s"$leftColName.${$(inputCol)}"), col(s"$rightColName.${$(inputCol)}")).as(distCol)
)
// Filter the joined datasets where the distance are smaller than the threshold.
joinedDatasetWithDist.filter(col(distCol) < threshold)
I've tried combinations of caching, repartitioning and even enabling spark.speculation, all to no avail.
The data consists of shingled address text that has to be matched:
53536, Evansville, WI => 53, 35, 36, ev, va, an, ns, vi, il, ll, le, wi
will have a short distance with records where there is a typo in the city or zip.
This gives pretty accurate results, but may be the cause of the join skew.
My question is:
What may cause this discrepancy? (One task taking very, very long, even though it has fewer records)
How can I prevent this skew in minhash without losing accuracy?
Is there a better way to do this at scale? (I can't Jaro-Winkler / Levenshtein compare millions of records against all records in the location dataset)
It might be a bit late, but I will post my answer here anyway to help others out. I recently had similar issues with matching misspelled company names (All executors dead MinHash LSH PySpark approxSimilarityJoin self-join on EMR cluster). Someone helped me out by suggesting taking NGrams to reduce the data skew. It helped me a lot. You could also try using e.g. 3-grams or 4-grams.
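For example, here is a rough sketch of building character 3-gram shingles with plain Spark SQL functions; the charNGrams helper and the address column name are illustrative, not taken from the original code:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{lit, udf}

val spark = SparkSession.builder.appName("char-ngrams").getOrCreate()
import spark.implicits._

// Character n-gram "shingles" of an address string; longer shingles are rarer,
// so the MinHash buckets are spread more evenly and the join skew shrinks.
val charNGrams = udf { (s: String, n: Int) =>
  val cleaned = s.toLowerCase.replaceAll("[^a-z0-9]", "")
  if (cleaned.length <= n) List(cleaned) else cleaned.sliding(n).toList
}

val addresses = Seq("53536, Evansville, WI").toDF("address")
addresses.select(charNGrams($"address", lit(3)).as("shingles")).show(false)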
I don’t know how dirty the data is, but you could try to make use of states. It reduces the number of possible matches substantially already.
What really helped me improve the accuracy of the matches is to postprocess the connected components (groups of connected matches made by MinHashLSH) by running a label propagation algorithm on each component. This also allows you to increase N (of the NGrams), thereby mitigating the problem of skewed data, and to set the Jaccard distance parameter in approxSimilarityJoin less tightly, with label propagation cleaning up the extra false matches afterwards.
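If it helps, here is a rough GraphX sketch of that label propagation postprocessing step; the edge list is made up for illustration, whereas in practice it would be the match pairs coming out of approxSimilarityJoin:

import org.apache.spark.sql.SparkSession
import org.apache.spark.graphx.{Edge, Graph}
import org.apache.spark.graphx.lib.LabelPropagation

val spark = SparkSession.builder.appName("lpa-postprocess").getOrCreate()

// Each edge stands for one match pair produced by approxSimilarityJoin.
val matchPairs = spark.sparkContext.parallelize(Seq(
  Edge(1L, 2L, 1.0), Edge(2L, 3L, 1.0), Edge(10L, 11L, 1.0)))

val graph = Graph.fromEdges(matchPairs, defaultValue = 0L)

// Label propagation assigns a community label to every vertex; vertices that end
// up with the same label form the tighter clusters used to split big components.
val communities = LabelPropagation.run(graph, maxSteps = 5).vertices
communities.collect().foreach(println)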
Finally, I am currently looking into using skipgrams to match it. I found that in some cases it works better and reduces the data skew somewhat.
I have a data set of ~8 GB with ~10 million rows (about 10 columns) and wanted to prove the point that SparkR could outperform SQL. To the contrary, I see extremely poor performance from SparkR compared with SQL.
My code simply loads the file from S3 and then runs gapply, where my groupings will typically consist of 1-15 rows -- so 10 million rows divided by 15 gives a lot of groups. Am I forcing too much shuffling and serialization/deserialization? Is that why things run so slowly?
For purposes of illustrating that my build_transition function is not the performance bottleneck, I created a trivial version called build_transition2 as shown below, which returns dummy information with what should be constant execution time per group.
Anything fundamental or obvious with my solution formulation?
build_transition2 <- function(key, x) {
patient_id <- integer()
seq_val <- integer()
patient_id <- append(patient_id, as.integer(1234))
seq_val <- append(seq_val, as.integer(5678))
y <- data.frame(patient_id,
seq_val,
stringsAsFactors = FALSE
)
}
dat_spark <- read.df("s3n://my-awss3/data/myfile.csv", "csv", header = "true", inferSchema = "true", na.strings = "NA")
schema <- structType(structField("patient_ID","integer"),
structField("sequence","integer")
)
result <- gapply(dat_spark, "patient_encrypted_id", build_transition2, schema)
and wanted to prove the point that SparkR could outperform SQL.
That's just not the case. The overhead of indirection caused by the guest language:
Internal Catalyst format
External Java type
Sending data to R
....
Sending data back to JVM
Converting to Catalyst format
is huge.
On top of that, gapply is basically an example of group-by-key - something that we normally avoid in Spark.
Overall, gapply should be used if, and only if, the business logic cannot be expressed using standard SQL functions. It is definitely not a way to optimize your code under normal circumstances (there might be border cases where it is faster, but in general any special logic, if required, will benefit more from native JVM execution with a Scala UDF, UDAF, Aggregator, or reduceGroups / mapGroups).
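For contrast, here is a small Scala sketch of the kind of native, built-in aggregation that last point refers to; the aggregation itself is made up, and only the grouping column name comes from the question:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, collect_list, min}

val spark = SparkSession.builder.appName("gapply-alternative").getOrCreate()
import spark.implicits._

// Toy stand-in for the real dataset.
val dat = Seq(("a", 3), ("a", 1), ("b", 7)).toDF("patient_encrypted_id", "sequence")

// Per-group logic expressed with built-in functions stays entirely inside the JVM,
// avoiding the per-group JVM <-> R serialization round trip that makes gapply slow.
val result = dat
  .groupBy(col("patient_encrypted_id"))
  .agg(min(col("sequence")).as("min_sequence"),
       collect_list(col("sequence")).as("sequences"))

result.show()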
I'm working on an optimization problem that involves minimizing an expensive map operation over a collection of objects.
The naive solution would be something like
rdd.map(expensive).min()
However, the map function returns values that are guaranteed to be >= 0. So, if any single result is 0, I can take that as the answer and do not need to compute the rest of the map operations.
Is there an idiomatic way to do this using Spark?
Is there an idiomatic way to do this using Spark?
No. If you're concerned with low level optimizations like this one, then Spark is not the best option. It doesn't mean it is completely impossible.
You can, for example, try something like this:
rdd.cache()
(min_value, ) = rdd.filter(lambda x: x == 0).take(1) or [rdd.min()]
rdd.unpersist()
Or short-circuit the partitions:
def min_part(xs):
    min_ = None
    for x in xs:
        min_ = min(x, min_) if min_ is not None else x
        if x == 0:
            return [0]
    return [min_] if min_ is not None else []

rdd.mapPartitions(min_part).min()
Both will usually execute more than required, each giving a slightly different performance profile, but they can skip evaluating some records. With rare zeros the first one might be better.
You can even listen to accumulator updates and use sc.cancelJobGroup once a 0 is seen. Here is one example of a similar approach: Is there a way to stream results to driver without waiting for all partitions to complete execution?
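If you want to go that route, here is a rough Scala sketch of the accumulator + cancelJobGroup idea; the RDD, the expensive stand-in, and the job-group name are all made up for illustration, and accumulator updates only become visible on the driver as tasks complete:

import org.apache.spark.SparkException
import org.apache.spark.sql.SparkSession
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

val spark = SparkSession.builder.appName("early-cancel-min").getOrCreate()
val sc = spark.sparkContext

val rdd = sc.parallelize(1 to 1000000, numSlices = 100)
def expensive(x: Int): Int = x % 97            // stand-in for the real function

val zerosSeen = sc.longAccumulator("zeros-seen")
sc.setJobGroup("min-job", "min with early cancellation")

// Watcher: once any completed task has reported a zero, cancel the rest of the
// job group. (A production version would also stop this loop when the job ends.)
Future {
  while (zerosSeen.sum == 0L) Thread.sleep(200)
  sc.cancelJobGroup("min-job")
}

val computed =
  try Some(rdd.map { x =>
    val v = expensive(x)
    if (v == 0) zerosSeen.add(1)
    v
  }.min())
  catch { case _: SparkException => None }     // thrown when the group is cancelled

val answer = if (zerosSeen.sum > 0) 0 else computed.get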
If "expensive" is really expensive, maybe you can write the result of "expensive" to, say, SQL (Or any other storage available to all the workers).
Then in the beginning of "expensive" check the number currently stored, if it is zero return zero from "expensive" without performing the expensive part.
You can also do this localy for each worker which will save you a lot of time but won't be as "global".
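A minimal sketch of that local, per-worker variant; expensive here is just a stand-in for the real function, and the flag lives in a per-JVM object:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("local-short-circuit").getOrCreate()
val rdd = spark.sparkContext.parallelize(1 to 1000000)

def expensive(x: Int): Int = x % 97      // stand-in for the real costly function

// As a top-level object in a compiled job, each executor JVM gets its own copy,
// so this flag short-circuits locally per worker rather than globally.
object ZeroSeen {
  @volatile var found = false
}

val result = rdd.map { x =>
  if (ZeroSeen.found) 0                  // a zero was already seen on this executor
  else {
    val v = expensive(x)
    if (v == 0) ZeroSeen.found = true
    v
  }
}.min()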