Selectively overwrite partitions in Delta Live Table pipeline - databricks

I have a relatively big table that is overwritten in a DLT pipeline. It is partitioned by date, and in most cases I change only a small portion of the data (the last couple of partitions). Is it possible to selectively overwrite only the specified partitions?
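For context, outside of a DLT pipeline a plain Delta table write does support selective overwrites via the replaceWhere option. A minimal sketch, assuming a hypothetical date-partitioned Delta table at target_path and a source_view holding the recomputed recent data (both names are placeholders):

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Placeholder path and cutoff; only partitions with date >= cutoff are rewritten.
target_path = "/mnt/delta/my_table"
cutoff = "2024-01-01"

# Recompute only the recent slice from the source data.
updated = spark.table("source_view").filter(F.col("date") >= cutoff)

# replaceWhere overwrites only the rows matching the predicate
# (i.e. the last few date partitions); all other partitions are left untouched.
(updated.write
    .format("delta")
    .mode("overwrite")
    .option("replaceWhere", f"date >= '{cutoff}'")
    .save(target_path))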

Related

Partition strategy for hive

I have a monthly Spark job that processes data and saves it into Hive/Impala tables (the file storage format is Parquet). The granularity of the table is daily, but the source data for this job also arrives from a monthly job.
I'm trying to work out how best to partition the table. I'm thinking of partitioning it by a month key. Does anyone see any problems with this approach, or have other suggestions? Thanks.

Performance of pyspark + hive when a table has many partition columns

I am trying to understand the performance impact of the partitioning scheme when Spark is used to query a Hive table. As an example:
Table 1 has 3 partition columns, and data is stored in paths like
year=2021/month=01/day=01/...data...
Table 2 has 1 partition column
date=20210101/...data...
Anecdotally I have found that queries on the second type of table are faster, but I don't know why. I'd like to understand this so I know how to design the partitioning of larger tables that could have many more partitions.
Queries being tested:
select * from table limit 1
I realize this won't benefit from any kind of query pruning.
The above is meant as an example query to demonstrate what I am trying to understand, but in case the details are important:
This is using S3, not HDFS
The data in the table is very small, and there are not a large number of partitions
The time for running the query on the first table is ~2 minutes, and ~10 seconds on the second
Data is stored as parquet
Setting aside all the other factors you did not mention (storage type, configuration, cluster capacity, the number of files in each case), your partitioning schema does not correspond to the use case.
A partitioning schema should be chosen based on how the data will be selected, how it will be written, or both. In your case, partitioning by year, month and day separately is over-partitioning. Partitions in Hive are hierarchical folders, and all of them have to be traversed (even if only metadata is used) to determine the data paths; with a single date partition, only one directory level is read. The two additional folder levels (year + month + day instead of date) do not help with partition pruning, because the columns are related and are always used together in the WHERE clause.
Also, partition pruning probably does not work at all with 3 partition columns and a predicate like this: where date = concat(year, month, day)
Use EXPLAIN to check it, and compare with a predicate like this: where year='some year' and month='some month' and day='some day'
If most of your queries also filter on one more column in the WHERE clause, say category, which does not correlate with date, and the data is big, then additionally partitioning by it makes sense; you will then benefit from partition pruning.
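A quick way to compare the two predicate shapes is to look at the query plans; a minimal PySpark sketch, assuming a hypothetical Hive table table1 partitioned by year/month/day, and interpreting the first predicate as a literal compared against the concatenated partition columns:

from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Predicate over an expression on the partition columns: pruning is unlikely,
# since the filter cannot be pushed down to the partition metadata.
spark.sql("""
    EXPLAIN EXTENDED
    SELECT * FROM table1
    WHERE concat(year, month, day) = '20210101'
""").show(truncate=False)

# Direct equality on each partition column: the plan should show the
# partition filters being pushed down (look at PartitionFilters).
spark.sql("""
    EXPLAIN EXTENDED
    SELECT * FROM table1
    WHERE year = '2021' AND month = '01' AND day = '01'
""").show(truncate=False)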

Spark Job stuck writing dataframe to partitioned Delta table

Running Databricks to read CSV files and then saving them as a partitioned Delta table.
The file contains 179,619,219 records in total. It is being split on column A (8,419 unique values), year (10 years), and month.
df.write.partitionBy("A", "year", "month").format("delta") \
    .mode("append").save(path)
The job gets stuck on the write step and aborts after running for 5-6 hours.
This is a very bad partitioning schema. You simply have too many unique values for column A, and the additional partitioning creates even more partitions: 8,419 values of A × 10 years × 12 months is on the order of a million potential partitions, each requiring its own set of separate (small) files. And small files hurt performance.
For non-Delta tables, partitioning is primarily used to perform data skipping when reading. For Delta Lake tables, partitioning may be less important, since Delta on Databricks includes features like data skipping, and you can apply Z-Ordering, etc.
I would recommend a different partitioning schema, for example year + month only, and running OPTIMIZE with ZORDER on column A after the data is written. This will result in only a few partitions with bigger files.
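A minimal sketch of that recommendation, assuming the same df and path as in the question and that spark is available for the OPTIMIZE step (the OPTIMIZE ... ZORDER BY syntax shown is the Databricks/Delta SQL form):

# Partition only by year and month; rely on data skipping / Z-Ordering for column A.
(df.write
    .partitionBy("year", "month")
    .format("delta")
    .mode("append")
    .save(path))

# Compact the small files and co-locate rows by A within each partition.
spark.sql(f"OPTIMIZE delta.`{path}` ZORDER BY (A)")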

How to merge partitions in HDFS?

Assume I have a partitioned table in HDFS that gets new data all the time. New data is partitioned by day by default, while all of the older files are partitioned by month. How can I merge partitions so that, in this example, all the daily partitions from the last month are combined into a single monthly partition? Is there a way to repartition only some of a table's partitions? I'd like to repartition only some of my partitions, so that only partitions that are small enough get merged.
Also, is it even possible to merge partitions, or should I read them, delete them, and write them back as one partition? I'm thinking of something like concatenating the files.
I'd like to know the best way to merge only some partitions of a table.
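One way to realize the read-and-rewrite approach the question sketches, as a minimal PySpark sketch, assuming a hypothetical Parquet layout with daily directories day=YYYY-MM-DD being merged into a monthly directory (all paths are placeholders):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Placeholder paths: all daily partitions for one month, and the target monthly partition.
daily_glob = "hdfs:///warehouse/my_table/day=2024-05-*"
monthly_path = "hdfs:///warehouse/my_table/month=2024-05"

# Read the small daily partitions and write them back as a single file
# (or a few files, via a larger coalesce value) under one monthly partition.
(spark.read.parquet(daily_glob)
    .coalesce(1)
    .write.mode("overwrite")
    .parquet(monthly_path))

# Once the monthly write succeeds, the daily directories can be deleted
# (and, if the table is registered in a metastore, its partitions updated).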

Repartition to avoid large number of small files

Currently I have an ETL job that reads a few tables, performs certain transformations, and writes them back to the daily table.
I use the following query in Spark SQL:
"INSERT INTO dbname.tablename PARTITION(year_month)
SELECT * from Spark_temp_table "
The target table into which all these records are inserted is partitioned at the year × month level. The records generated on a daily basis are not that many, hence the partitioning at the year × month level.
However, when I check the partition, it has a small ~50 MB file for each day my code runs (the code has to run daily), and eventually I will end up with around 30 files in my partition, totalling ~1500 MB.
I want to know if there is a way for me to create just one file (or maybe 2-3 files, per block size restrictions) in one partition while appending my records on a daily basis.
The way I think I can do it is to read everything from the partition concerned into my Spark DataFrame, append the latest records, and repartition it before writing back. How do I ensure that I only read data from the partition concerned, and that only that partition is overwritten with a smaller number of files?
You can use the DISTRIBUTE BY clause to control how the records are distributed across files inside each partition.
To have a single file per partition, you can use DISTRIBUTE BY year, month
And to have 3 files per partition, you can use DISTRIBUTE BY year, month, day % 3
The full query:
INSERT INTO dbname.tablename
PARTITION(year_month)
SELECT * from Spark_temp_table
DISTRIBUTE BY year, month, day % 3
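If you prefer the DataFrame API, roughly the same effect can be obtained by repartitioning on the same expressions before writing; a minimal sketch, assuming a hypothetical DataFrame df with year, month and day columns targeting the same partitioned table:

from pyspark.sql import functions as F

# Hash-repartition by the same expressions as DISTRIBUTE BY, so each
# table partition ends up with at most about 3 output files.
(df.repartition(F.col("year"), F.col("month"), F.col("day") % 3)
    .write
    .mode("append")
    .insertInto("dbname.tablename"))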
