Pyspark parquet file sizes are drastically different - apache-spark

I use PySpark to process a fixed set of data records on a daily basis and store them as 16 parquet files in a Hive table, using the date as the partition. In theory, the number of records every day should be on the same order of magnitude, about 1.2 billion rows, and it is indeed on the same order.
When I look at the parquet files, every parquet file for each day is around 86MB, as for 2019-09-04.
But one thing I noticed that is very strange: for 2019-08-03 the files are 10x larger than the files for other dates, yet the number of records seems to be more or less the same. I am confused and cannot come up with a reason for it. If you have any idea why, please share it with me. Thank you.
I've just realised that the way I saved the data for 2019-08-03 is as follows:
cols = sparkSession \
.sql("SELECT * FROM {} LIMIT 1".format(table_name)).columns
df.select(cols).write.insertInto(table_name, overwrite=True)
For other days
insertSQL = """
INSERT OVERWRITE TABLE {}
PARTITION(crawled_at_ds = '{}')
SELECT column1, column2, column3, column4
FROM calendarCrawlsDF
"""
sparkSession.sql(
insertSQL.format(table_name,
calendarCrawlsDF.take(1)[0]["crawled_at_ds"]))
For 2019-08-03, I used the DataFrame insertInto method. For the other days, I used sparkSession.sql to execute INSERT OVERWRITE TABLE.
Could this be the reason?
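For anyone comparing, here is a hedged sketch of how the per-day file sizes can be listed directly from PySpark via the Hadoop FileSystem API. The warehouse path below is an assumption, not taken from the question; only the partition column name crawled_at_ds comes from the code above.
from py4j.java_gateway import java_import

# Hypothetical location of one day's partition; substitute the real table path.
partition_path = "hdfs:///user/hive/warehouse/table_name/crawled_at_ds=2019-08-03"

jvm = spark._jvm
java_import(jvm, "org.apache.hadoop.fs.Path")
conf = spark._jsc.hadoopConfiguration()
path = jvm.Path(partition_path)
fs = path.getFileSystem(conf)

# Print each parquet file in the partition and its size in MB.
for status in fs.listStatus(path):
    print(status.getPath().getName(), round(status.getLen() / 1024 / 1024, 1), "MB")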

Related

Spark Job stuck writing dataframe to partitioned Delta table

Running Databricks to read CSV files and then saving them as a partitioned Delta table.
The file has 179619219 records in total. The data is being split on column A (8419 unique values), year (10 years), and month.
df.write.partitionBy("A","year","month").format("delta") \
.mode("append").save(path)
The job gets stuck on the write step and aborts after running for 5-6 hours.
This is a very bad partitioning schema. You simply have too many unique values for column A, and the additional partitioning creates even more partitions. Spark will need to create at least 90k partitions, which requires creating separate (small) files, etc., and small files hurt performance.
For non-Delta tables, partitioning is primarily used to perform data skipping when reading data. For Delta Lake tables, partitioning may not be as important, as Delta on Databricks includes data skipping, you can apply ZORDER, etc.
I would recommend using a different partitioning schema, for example year + month only, and running OPTIMIZE with ZORDER on column A after the data is written. This will lead to only a few partitions with bigger files.
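A hedged sketch of that suggestion, reusing df, path and the column names from the question (the OPTIMIZE/ZORDER syntax is Databricks Delta):
# Partition only by year and month, not by the high-cardinality column A.
df.write.partitionBy("year", "month").format("delta") \
    .mode("append").save(path)

# After writing, compact the files and co-locate rows on column A.
spark.sql("OPTIMIZE delta.`{}` ZORDER BY (A)".format(path))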

Avoiding use of SELECT in WHERE

I have an input file on HDFS in CSV format with the following columns: date, time, public_ip.
Using this, I need to filter data from quite a big table (~100M rows daily). The table has roughly the following structure:
CREATE TABLE big_table (
`user_id` int,
`ip` string,
`timestamp_from` timestamp,
`timestamp_to` timestamp)
PARTITIONED BY (`PARTITION_DATE` string)
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat';
I need to read the CSV data and then filter big_table, checking which user_ids have been using the desired IP address in the selected period.
I tried using Spark SQL with different joins, without much luck. Whatever I do, Spark is simply not "smart" enough to limit the big table per partition. I also tried WHERE PARTITION_DATE IN (SELECT DISTINCT date FROM csv_file), but this was also quite slow.
CSV should have up to 20 different days or so.
Here's my solution - I ended up picking up distinct days and using this as a string:
spark.sql("select date from csv_file group by date").createOrReplaceTempView("csv_file_uniq_date")
val partitions=spark.sql("select * from csv_file_uniq_date").collect.mkString(sep=",").replaceAll("[\\[\\]]","")
spark.sql("select user_id, timestamp_from, timestamp_to from big_table where partition_date in (" + partitions + ") group by user_id, timestamp_from, timestamp_to").write.csv("output.csv")
Now, this does the job - I cut the tasks from hundreds of thousands to thousands, but I feel quite unhappy with the implementation. Could someone point me in the right direction? How can I avoid pulling this out as a string of comma-separated partition values?
Using spark 2.2
Cheers!
What you expect is called Dynamic Partition Pruning, by which Spark is smart enough to resolve the partitions to filter directly from the join condition.
This feature is available from Spark 3.0, alongside the Adaptive Query Execution improvements.
It is controlled by spark.sql.optimizer.dynamicPartitionPruning.enabled, which is enabled by default.
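For example, on Spark 3.x a plain join on the partition column can let dynamic partition pruning limit the scan of big_table to the dates present in the CSV. A minimal sketch, assuming the CSV is read with a header (the path is a placeholder):
from pyspark.sql.functions import col

# Read the CSV of (date, time, public_ip) lookups; path and header option are assumptions.
lookups = spark.read.csv("hdfs:///path/to/input.csv", header=True)

# Join on the partition column and the IP so only matching partitions are scanned.
matches = (spark.table("big_table")
           .join(lookups,
                 (col("partition_date") == col("date")) &
                 (col("ip") == col("public_ip")))
           .select("user_id", "timestamp_from", "timestamp_to")
           .distinct())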

Spark find max of date partitioned column

I have a parquet partitioned in the following way:
data
/batch_date=2020-01-20
/batch_date=2020-01-21
/batch_date=2020-01-22
/batch_date=2020-01-23
/batch_date=2020-01-24
Here batch_date, which is the partition column, is of date type.
I want to read only the data from the latest date partition, but as a consumer I don't know what the latest value is.
I could use a simple group by something like
df.groupby().agg(max(col('batch_date'))).first()
While this would work, it's a very inefficient way since it involves a groupBy over the full dataset.
I want to know if we can query the latest partition in a more efficient way.
Thanks.
Doing the method suggested by @pasha701 would involve loading the entire Spark dataframe with all the batch_date partitions and then finding the max of that. I think the author is asking for a way to directly find the max partition date and load only that.
One way is to use hdfs or s3fs to load the contents of the S3 path as a list, find the max partition, and then load only that. That would be more efficient.
Assuming you are using the AWS S3 layout, something like this:
import s3fs

fs = s3fs.S3FileSystem(anon=False)
inpath = "s3://bucket_path/data/"

# Collect the date part of every batch_date=... directory name.
datelist = []
for path in fs.ls(inpath):
    datelist.append(path.split('=')[1])

maxpart = max(datelist)
df = spark.read.parquet("s3://bucket_path/data/batch_date=" + maxpart)
This does all the work with plain lists, without loading any data into Spark until it finds the partition you want to load.
Function "max" can be used without "groupBy":
df.select(max("batch_date"))
Use SHOW PARTITIONS to get all partitions of the table:
show partitions TABLENAME
The output will be like:
pt=2012.07.28.08/is_complete=1
pt=2012.07.28.09/is_complete=1
We can get data from a specific partition using the query below:
select * from TABLENAME where pt='2012.07.28.10' and is_complete='1' limit 1;
Additional filters or a group by can also be applied on it.
This worked for me in PySpark v2.4.3. First extract the partitions (this is for a table partitioned on a single date column; I haven't tried it when a table has more than one partition column):
df_partitions = spark.sql("show partitions database.dataframe")
"show partitions" returns a dataframe with a single column called 'partition', with values like partitioned_col=2022-10-31. Now we create a 'value' column extracting just the date part as a string, convert it to a date, and take the max:
from pyspark.sql.functions import split, to_date

date_filter = df_partitions.withColumn('value', to_date(split('partition', '=')[1], 'yyyy-MM-dd')).agg({"value": "max"}).first()[0]
date_filter now contains the maximum date from the partitions and can be used in a where clause pulling from the same table.
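For instance, a sketch using the same hypothetical table and partition column names as above:
from pyspark.sql.functions import col

# Read only the rows of the latest partition, using the max date found above.
df_latest = spark.table("database.dataframe").where(col("partitioned_col") == date_filter)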

PySpark: how to read in partitioning columns when reading parquet

I have data stored in parquet files and a Hive table partitioned by year, month, and day. Thus, each parquet file is stored in a /table_name/year/month/day/ folder.
I want to read in data for only some of the partitions. I have a list of paths to individual partitions as follows:
paths_to_files = ['hdfs://data/table_name/2018/10/29',
'hdfs://data/table_name/2018/10/30']
And then try to do something like:
df = sqlContext.read.format("parquet").load(paths_to_files)
However, my data then does not include the information about year, month and day, as this is not part of the data per se; rather, the information is stored in the path to the file.
I could use the SQL context and send a Hive query with a SELECT statement and a WHERE clause on the year, month and day columns to select only the data from the partitions I am interested in. However, I'd rather avoid constructing a SQL query in Python, as I am very lazy and don't like reading SQL.
I have two questions:
1. What is the optimal way (performance-wise) to read in the data stored as parquet, where the information about year, month and day is not present in the parquet file but only in the path to the file (send a Hive query using sqlContext.sql('...'), use read.parquet, anything really)?
2. Can I somehow extract the partitioning columns when using the approach I outlined above?
Reading the direct file paths to the parent directory of the year partitions should be enough for a dataframe to determine that there are partitions under it. However, it wouldn't know what to name the partition columns without a directory structure like /year=2018/month=10, for example.
Therefore, if you have Hive, going via the metastore is better, because the partitions are named there, Hive stores extra useful information about your table, and you're not reliant on knowing the direct path to the files on disk from the Spark code.
Not sure why you think you need to read/write SQL, though.
Use the DataFrame API instead, e.g.
df = spark.table("table_name")
df_2018 = df.filter(df['year'] == 2018)
df_2018.show()
Your data isn't stored in a way that is optimal for parquet, so you'd have to load the files one by one and add the dates yourself.
Alternatively, you can move the files to a directory structure fit for parquet partition discovery
(e.g. .../table/year=2018/month=10/day=29/file.parquet).
Then you can read the parent directory (table) and filter on year, month, and day; Spark will only read the relevant directories, and you'd also get these as columns in your dataframe.
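A minimal sketch of the first option, loading the listed paths one by one and tagging each with the date parts taken from its path (reuses paths_to_files from the question; assumes a SparkSession named spark):
from functools import reduce
from pyspark.sql.functions import lit

dfs = []
for path in paths_to_files:
    # The last three path segments are year/month/day, e.g. .../2018/10/29
    year, month, day = path.rstrip('/').split('/')[-3:]
    dfs.append(spark.read.parquet(path)
               .withColumn('year', lit(int(year)))
               .withColumn('month', lit(int(month)))
               .withColumn('day', lit(int(day))))

df = reduce(lambda a, b: a.unionByName(b), dfs)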

Repartition to avoid large number of small files

Currently I have an ETL job that reads a few tables, performs certain transformations, and writes them back to the daily table.
I use the following query in Spark SQL:
"INSERT INTO dbname.tablename PARTITION(year_month)
SELECT * from Spark_temp_table "
The target table into which all these records are inserted is partitioned at a year x month level. The records generated on a daily basis are not that many, hence the partitioning at the year x month level.
However, when I check the partition, it has a small ~50MB file for each day my code runs (the code has to run daily), and eventually I will end up with around 30 files in my partition totalling ~1500MB.
I want to know if there is a way for me to create just one file (or maybe 2-3 files, per block size restrictions) in one partition while I append my records on a daily basis.
The way I think I can do it is to read everything from the concerned partition into my Spark dataframe, append the latest records, and repartition it before writing back. How do I ensure I only read data from the concerned partition, and that only that partition is overwritten with a smaller number of files?
You can use the DISTRIBUTE BY clause to control how the records are distributed to files inside each partition.
To have a single file per partition, use DISTRIBUTE BY year, month,
and to have three files per partition, use DISTRIBUTE BY year, month, day % 3.
The full query:
INSERT INTO dbname.tablename
PARTITION(year_month)
SELECT * from Spark_temp_table
DISTRIBUTE BY year, month, day % 3
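Roughly the same thing can be expressed with the DataFrame API, where DISTRIBUTE BY corresponds to repartition by expressions. A sketch, assuming the staged data is a DataFrame df with year, month and day columns (df and those column names are assumptions; the table name comes from the question):
from pyspark.sql.functions import col

# Shuffle into (at most) three output files per year/month partition before inserting.
(df.repartition(col("year"), col("month"), col("day") % 3)
   .write
   .insertInto("dbname.tablename"))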
