I'm using Spark 2.1.1 (pyspark), doing a groupby followed by an approx_count_distinct aggregation on a DataFrame with about 1.4 billion rows. The groupby operation results in about 6 million groups to perform the approx_count_distinct operation on. The expected distinct counts for the groups range from single-digits to the millions.
Here is the code snippet I'm using, with column 'item_id' containing the ID of items, and 'user_id' containing the ID of users. I want to count the distinct users associated with each item.
>>> distinct_counts_df = data_df.groupby(['item_id']).agg(approx_count_distinct(data_df.user_id).alias('distinct_count'))
In the resulting DataFrame, I'm getting about 16,000 items with a count of 0:
>>> distinct_counts_df.filter(distinct_counts_df.distinct_count == 0).count()
16032
When I checked the actual distinct count for a few of these items, I got numbers between 20 and 60. Is this a known issue with the accuracy of the HLL approximate counting algorithm or is this a bug?
I am not sure where the actual problem lies, but since approx_count_distinct relies on approximation (https://stackoverflow.com/a/40889920/7045987), HLL may well be the issue.
You can try this:
There is a parameter 'rsd' which you can pass to approx_count_distinct and which determines the error margin. If rsd = 0, it will give you accurate results, although the time increases significantly, and in that case countDistinct becomes a better option. Nevertheless, you can try decreasing rsd to, say, 0.008 at the cost of increased time. This may help to give slightly more accurate results.
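For illustration, here is a minimal sketch of both options, reusing the data_df and column names from the question (the rsd value is just an example):

from pyspark.sql.functions import approx_count_distinct, countDistinct

# Tighter error margin for the approximate count (the default rsd is 0.05).
approx_df = data_df.groupby('item_id').agg(
    approx_count_distinct('user_id', rsd=0.008).alias('distinct_count'))

# Exact counts, at a noticeably higher cost in shuffle and compute time.
exact_df = data_df.groupby('item_id').agg(
    countDistinct('user_id').alias('distinct_count'))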
I have a case where I am trying to write some results to S3 using a DataFrame write with the below query, where input_table_1 is about 13 GB and input_table_2 is about 1 MB.
input_table_1 has columns account and membership, and
input_table_2 has columns role, id, membership_id, quantity, start_date
SELECT
/*+ BROADCASTJOIN(input_table_2) */
account,
role,
id,
quantity,
cast(start_date AS string) AS start_date
FROM
input_table_1
INNER JOIN
input_table_2
ON array_contains(input_table_1.membership, input_table_2.membership_id)
where the membership column is an array containing a list of membership_ids.
Writing this dataset with the Spark DataFrame writer generates around 1.1 TiB of data in S3, with around 700 billion records.
We identified that there are duplicates and used dataframe.distinct.write.parquet("s3path") to remove them. The record count was reduced to almost a third of the previous total, around 200 billion rows, but we observed that the output size in S3 is now 17.2 TiB.
I am very confused about how this can happen.
I have used the following spark conf settings
spark.sql.shuffle.partitions=20000
I have tried to do a coalesce and write to s3 but it did not work.
Please suggest whether this is expected and what can be done.
There's two sides to this:
1) Physical translation of distinct in Spark
The Spark catalyst optimiser turns a distinct operation into an aggregation by means of the ReplaceDeduplicateWithAggregate rule (Note: in the execution plan distinct is named Deduplicate).
This basically means df.distinct() on all columns is translated into a groupBy on all columns with an empty aggregation:
df.groupBy(df.columns:_*).agg(Map.empty).
Spark uses a HashPartitioner when shuffling data for a groupBy on respective columns. Since the groupBy clause in your case contains all columns (well, implicitly, but it does), you're more or less randomly shuffling data to different nodes in the cluster.
Increasing spark.sql.shuffle.partitions in this case is not going to help.
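If you want to see this rewrite yourself, a quick sketch (df stands for any DataFrame):

# explain(True) prints the logical plans, where distinct shows up as Deduplicate
# and is then rewritten into an Aggregate over all columns, plus the physical
# plan with its hash-partitioned exchange.
df.distinct().explain(True)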
Now on to the 2nd side, why does this affect the size of your parquet files so much?
2) Compression in parquet files
Parquet is a columnar format, that is to say your data is organised in columns rather than row by row. This allows for powerful compression if data is adequately laid out and ordered. E.g. if a column contains the same value for a number of consecutive rows, it is enough to write that value just once and make a note of the number of repetitions (a strategy called run length encoding). But Parquet also uses various other compression strategies.
Unfortunately, in your case the data ends up laid out pretty randomly after the shuffle that removes duplicates. The original partitioning of input_table_1 was a much better fit.
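As a rough, self-contained illustration (not your data; spark, the paths and the row count here are placeholders), writing the same values sorted versus randomly ordered can produce very different parquet sizes:

from pyspark.sql import functions as F

demo = spark.range(10000000).withColumn("k", (F.col("id") % 100).cast("int"))

# Long runs of identical values: run length / dictionary encoding works well.
demo.select("k").orderBy("k").write.mode("overwrite").parquet("/tmp/k_sorted")

# The same values in random order: far fewer runs, noticeably larger files.
demo.select("k").orderBy(F.rand()).write.mode("overwrite").parquet("/tmp/k_shuffled")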
Solutions
There's no single answer for how to solve this, but here are a few pointers I'd suggest looking into next:
What's causing the duplicates? Could these be removed upstream? Or is there a problem with the join condition causing duplicates?
A simple solution is to just repartition the dataset after distinct to match the partitioning of your input data. Adding a secondary sort (sortWithinPartitions) is likely to give you even better compression; see the sketch after these pointers. However, this comes at the cost of an additional shuffle!
As #matt-andruff pointed out below, you can also achieve this in SQL using cluster by. Obviously, that also requires you to move the distinct keyword into your SQL statement.
Write your own deduplication algorithm as a Spark Aggregator and group / shuffle the data just once, in a meaningful way.
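Here is the sketch promised above for the repartition-plus-sort option; the partition count and the sort keys (account, membership_id) are assumptions based on the question, not a prescription:

# Deduplicate, then restore a layout that compresses well in parquet.
deduped = df.distinct() \
    .repartition(2000, 'account') \
    .sortWithinPartitions('account', 'membership_id')

deduped.write.parquet("s3path")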
I have a pretty straightforward pyspark SQL application (spark 2.4.4, EMR 5.29) that reads a dataframe of the schema topic, year, count:
df.show()
+--------+----+------+
| topic|year| count|
+--------+----+------+
|covid-19|2017|606498|
|covid-19|2016|454678|
|covid-19|2011| 10517|
|covid-19|2008| 6193|
|covid-19|2015|510391|
|covid-19|2013| 29551|
I then need to sort by year and collect the counts into a list so that they are in ascending order by year:
df.orderBy('year').groupBy('topic').agg(collect_list('count').alias('counts'))
The issue is, since I order by year, the number of partitions used for this stage is the number of years in my dataset. I thus get a crazy bottleneck stage where 15 out of 300 executors are utilised, leading to obvious memory spills and disk spills, eventually failing the stage due to no space left on device for the overpopulated partitions.
Even more interesting is that I found a way to circumvent this which intuitively appears to be much less efficient, but actually does work, since no bottlenecks are created:
df.groupBy('topic').pivot('year', values=list(range(START, FINISH))).agg(first('count')) \
.select('topic', array([col(str(c)) for c in range(START, FINISH)]).alias('counts'))
This leads to my desired output, which is an array of counts sorted by year.
Anyone with an explanation or idea why this happens, or how best to prevent this?
I found this answer and this JIRA, where it is basically suggested to 'add noise' to the sort key to avoid these skew-related issues.
I think it is worth mentioning that the pivot method is a better solution than adding noise, at least to my knowledge, whenever sorting by a column that has a small range of values. I would appreciate any info on this and alternative implementations.
Range partitioning is what Spark uses under the hood for sorting and ordering.
From the docs it is clear that the number of partitions that will contain the ranges of data for the subsequent sort (via mapPartitions) is based on sampling the existing partitions, prior to computing a heuristically optimal number of partitions for these computed ranges.
These range partitions may decrease the overall number of partitions, since a range must be contained within a single partition for the order by / sort to work with that mapPartitions-style approach.
This:
df.repartitionByRange(100, 'some_col1', 'some_colN')...
can help, or so I suspect, if you order by more columns. But here that appears not to be the case, based on your DF.
The question has nothing to do with pyspark, BTW.
Interesting point, but explainable: the reduced number of partitions (based on year) each need to hold more data for the collect_list, since there are obviously far more topics than years.
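If you want to see the skew described above, a rough diagnostic sketch (column names follow the question; note this triggers a full job):

# Rows per partition after the range-partitioned sort: with only a handful of
# distinct years, a few partitions end up holding almost all of the data.
sizes = df.orderBy('year').rdd.glom().map(len).collect()
print(sorted(sizes, reverse=True)[:10])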
I have a large pandas DataFrame consisting of some 100k rows and ~100 columns with different dtypes and arbitrary content.
I need to assert that it does not contain a certain value, let's say -1.
Using assert( not (any(test1.isin([-1]).sum()>0))) results in processing time of some seconds.
Any idea how to speed it up?
Just to make a full answer out of my comment:
With -1 not in test1.values you can check if -1 is in your DataFrame.
Regarding performance, this still needs to check every single value, which in your case is
10^5 * 10^2 = 10^7 values.
You only save the cost of the summation and the additional comparison of those results.
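A minimal sketch comparing the two checks, with a made-up DataFrame standing in for test1:

import numpy as np
import pandas as pd

# Stand-in for test1: ~100k rows x 100 columns that do not contain -1.
test1 = pd.DataFrame(np.random.randint(0, 10, size=(100000, 100)))

# Original check: per-column isin, summed, compared, then reduced with any().
assert not any(test1.isin([-1]).sum() > 0)

# Suggested check: a single membership test on the underlying ndarray.
assert -1 not in test1.values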
I am trying to work with a large dataset, but just play around with a small part of it. Each operation takes a long time, and I want to look at the head or limit of the dataframe.
So, for example, I call a UDF (user defined function) to add a column, but I only care to do so on the first, say, 10 rows.
sum_cols = F.udf(lambda x:x[0] + x[1], IntegerType())
df_with_sum = df.limit(10).withColumn('C',sum_cols(F.array('A','B')))
However, this still seems to take just as long as it would if I did not use limit.
If you work with 10 rows first, I think it is better to create a new df and cache it:
df2 = df.limit(10).cache()
df_with_sum = df2.withColumn('C',sum_cols(F.array('A','B')))
limit will first try to get the required data from a single partition. If it does not get all of the data from that one partition, it will fetch the remaining data from the next partition.
So please check how many partitions you have by using df.rdd.getNumPartitions().
To prove this, I would suggest you first coalesce your df to one partition and then do a limit. You can see that this time the limit is faster, as it is filtering data from a single partition.
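A minimal sketch of that check, reusing the df from the question:

# How many partitions does the source DataFrame have?
print(df.rdd.getNumPartitions())

# Squeeze everything into one partition before limiting, to see whether the
# limit itself or the scan behind it is the slow part.
df.coalesce(1).limit(10).count()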
I have a very wide dataframe > 10,000 columns and I need to compute the percent of nulls in each. Right now I am doing:
threshold=0.9
for c in df_a.columns[:]:
    if df_a[df_a[c].isNull()].count() >= (df_a.count()*threshold):
        # print(c)
        df_a = df_a.drop(c)
Of course this is a slow process and crashes on occasion. Is there a more efficient method I am missing?
Thanks!
There are a few strategies you can take, depending on the size of the dataframe. The code looks good to me: you need to go through each column and count the number of null values.
One strategy is to cache the input dataframe, which will enable faster filtering. This, however, only works if the dataframe is not huge.
Also
df_a=df_a.drop(c)
I am a little skeptical about this, as it changes the dataframe inside the loop. It is better to collect the mostly-null column names and drop them from the dataframe later, in a separate step.
If the dataframe is huge and you can't cache it completely, you can split it into finite, manageable groups of columns: take, say, 100 columns at a time, cache that smaller dataframe, and run the analysis in a loop.
In that case you might want to keep the list of already-analyzed columns separate from the yet-to-be-analyzed columns. That way, even if the job fails, you can resume the analysis from the remaining columns.
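A minimal sketch of that chunked approach, reusing df_a and threshold from the question; the chunk size and the null-count expression are illustrative:

from pyspark.sql import functions as F

chunk_size = 100
cols_to_drop = []
total_rows = df_a.count()

for i in range(0, len(df_a.columns), chunk_size):
    chunk = df_a.columns[i:i + chunk_size]
    small_df = df_a.select(chunk).cache()
    # Count nulls per column in this chunk with a single aggregation.
    null_counts = small_df.select(
        [F.count(F.when(F.col(c).isNull(), 1)).alias(c) for c in chunk]
    ).collect()[0].asDict()
    cols_to_drop += [c for c, n in null_counts.items() if n >= total_rows * threshold]
    small_df.unpersist()

# Drop all of the mostly-null columns at the end, in one step.
df_a = df_a.drop(*cols_to_drop)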
You should avoid iterating when using pyspark, since it does not distribute the computations anymore.
Using count on a column will compute the count of non-null elements.
threshold = 0.9
import pyspark.sql.functions as psf
count_df = df_a \
    .agg(*([psf.count("*").alias("count")] + [psf.count(c).alias(c) for c in df_a.columns])) \
    .toPandas().transpose()
The first element is the number of rows in the dataframe:
total_count = count_df.iloc[0, 0]
kept_cols = count_df[count_df[0] > (1 - threshold)*total_count].iloc[1:,:]
df_a.select(list(kept_cols.index))