I have transaction information for individual transactions (e.g. customer code, product, product group, price, etc.).
I currently have this partitioned into Parquet files, one partition per year_month.
This is very effective when reporting aggregates over product groups and the like.
However, if I want to retrieve information for a specific customer across months, it is not very effective/fast.
I tried partitioning by year_month and customer_code, but then there is a lot of disk I/O, since each partition is now a single customer code with one line of data in it.
Is there a way to increase performance by, say, putting 10,000 customers in one partition? Or by rolling over to the next group once a Parquet file reaches about 64 MB?
Given that Spark has access to the min/max values per attribute stored in each Parquet file, I expect performance to improve, but I am too new to Spark/Parquet to really know whether this reasoning is correct and technically possible. (Of course I could create customer code groups myself and use them in queries too, but I was hoping something more automatic is possible.)
Thanks,
G.
If you order the data in each file by customer code and configure Spark to use Parquet predicate pushdown (enabled by default), then a full scan filtered by customer code will be faster.
Internally, a Parquet file stores column min/max values for each page and row group. Filtering by a value can efficiently skip reading pages and row groups based on these statistics.
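For illustration, here is a minimal PySpark sketch of that layout, assuming a transactions DataFrame with year_month and customer_code columns; the paths and column names are placeholders:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical source data with year_month and customer_code columns
transactions = spark.read.parquet("s3://bucket/transactions_raw/")

(transactions
    .repartition("year_month")              # group each month into one Spark partition
    .sortWithinPartitions("customer_code")  # order rows by customer inside each output file
    .write
    .partitionBy("year_month")
    .parquet("s3://bucket/transactions_sorted/"))

# With spark.sql.parquet.filterPushdown left at its default (true), a filter on
# customer_code can skip row groups whose min/max range excludes the value.
df = spark.read.parquet("s3://bucket/transactions_sorted/")
df.filter(df.customer_code == "C12345").show()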
I have a Spark Job that reads data from S3. I apply some transformations and write 2 datasets back to S3. Each write action is treated as a separate job.
Question: Does Spark guarantee that I read the data in the same order each time? For example, if I apply the function:
.withColumn('id', f.monotonically_increasing_id())
Will the id column have the same values for the same records each time?
You state very little, but the following is easily testable and should serve as a guideline:
If you re-read the same files with the same content, you will get the same blocks/partitions again, and hence the same ids from f.monotonically_increasing_id().
If the total number of rows differs on the successive read(s), with different partitioning applied before this function, then you will typically get different ids.
If you have more data the second time around and apply coalesce(1), then the prior entries will still have the same ids and the newer rows will get other ids. A less-than-realistic scenario, of course.
Blocks for files at rest remain static (in general) on HDFS, so partitions 0..N will be the same upon reading from rest. Otherwise zipWithIndex would not be usable either.
I would never rely on the same data being in the same place when read twice unless there were no updates (you could cache as well).
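If you want to verify this for your own data, a small sketch along these lines should do; the path and the some_key column are placeholders for whatever uniquely identifies your records:

import pyspark.sql.functions as f
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

def with_ids(path):
    # Same read + id assignment applied twice to the unchanged files
    return (spark.read.parquet(path)
            .withColumn("id", f.monotonically_increasing_id()))

df1 = with_ids("s3://bucket/input/")
df2 = with_ids("s3://bucket/input/")

# If the partitioning is identical on both reads, this anti-join is empty,
# i.e. every (id, some_key) pair from the first read reappears in the second.
mismatches = (df1.select("id", "some_key")
              .join(df2.select("id", "some_key"), ["id", "some_key"], "left_anti"))
print(mismatches.count())  # 0 => same ids for the same records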
I'm storing daily reports per client for query with Athena.
At first I thought I'd use a client=c_1/month=12/day=01/ or client=c2/date=2020-12-01/ folder structure, and run MSCK REPAIR TABLE daily to make the new day partition available for query.
Then I realized there's the $path special column, so if I store files as 2020-12-01.csv I could run a query with WHERE "$path" LIKE '%12-01%', thus saving a partition and the need to detect/add it daily.
I can see this having an impact on performance if there were a lot of daily data,
but in my case the day partition will include one file at most, so the partition is mostly there to have a field to query on, not to reduce the query dataset.
Any other downside?
When using the $path column, the entire table (or partition) location needs to be listed.
If you have a large number of objects in S3, this listing can become a bottleneck. Partitions avoid this problem.
Of course, having a large number of partitions is also a problem.
I don't know the cardinality of the client column, so it's hard to tell how many partitions to expect with this approach.
Currently Athena does not apply any optimisations for $path, which means that there is no meaningful difference between WHERE "$path" LIKE '%12-01%' and WHERE "date" = '2020-12-01' (assuming you have a column date which contains the same date as the file name). Your data probably already has a date or datetime column, and your queries will be more readable using it rather than $path.
You are definitely on the right track questioning whether or not you need the date part of your current partitioning scheme. There are lots of different considerations when partitioning data sets, and it's not easy to always say what is right without analysing the situation in detail.
I would recommend having some kind of time-based partition key. Otherwise you will have no way to limit the amount of data read by queries, and they will become slower and more expensive as time goes on. Partitioning on date is probably too fine-grained for your use case, but perhaps year or month would work.
However, if there will only be data for a client for a short time (less than one thousand files in total, the size of one S3 listing page), or queries always read all the data for a client, you don't need a time-based partition key.
To do a deeper analysis of how to partition your data, I would need to know more about the types of queries you will be running, how the data is updated, how much data the files are expected to contain, and how much difference there will be from client to client.
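To make the month-level suggestion concrete, here is a rough sketch of what uploading a daily report into such a layout might look like with boto3; the bucket name, prefix, and client id are made up:

import boto3

s3 = boto3.client("s3")

client_id = "c_1"          # hypothetical client
report_date = "2020-12-01"
month = report_date[:7]    # "2020-12"

# Hive-style key layout partitioned by client and month
key = f"reports/client={client_id}/month={month}/{report_date}.csv"
with open(f"{report_date}.csv", "rb") as body:
    s3.put_object(Bucket="my-reports-bucket", Key=key, Body=body)

# An Athena table partitioned by (client, month) then only lists objects under
# the matching prefixes when a query filters on those columns, instead of
# listing the whole table location as $path filtering requires.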
I am trying to utilize Spark bucketing on a key-value table that is frequently joined on the key column by batch applications. The table is partitioned by a timestamp column, and new data arrives periodically and is added in a new timestamp partition. Nothing new here.
I thought it was an ideal use case for Spark bucketing, but some limitations seem to be fatal when the table is incremental:
An incremental table forces multiple files per bucket, forcing Spark to sort every bucket upon join even though every file is sorted locally. Some JIRAs suggest that this is a conscious design choice that is not going to change any time soon. This is understandable, too, as there could be thousands of locally sorted files in each bucket, and iterating concurrently over so many files does not seem like a good idea either.
The bottom line is that sorting cannot be avoided.
Upon map-side join, every bucket is handled by a single task. When the table is incremented, every such task consumes more and more data as more partitions (increments) are included in the join. Empirically, this ultimately failed with OOM regardless of the configured memory settings. To my understanding, even if the failures can be avoided, this design does not scale at all. It imposes an impossible trade-off when deciding on the number of buckets - aiming for a long-term table results in lots of small files during every increment.
This gives the immediate impression that bucketing should not be used with incremental tables. I wonder if anyone has a better opinion on that, or maybe I am missing some basics here.
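For reference, a hedged sketch of the setup being described, with made-up table and column names (kv, key, ts); it reproduces the layout, not a fix for the limitations above:

from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Hypothetical increment with key, value and ts columns
increment = spark.read.parquet("s3://bucket/incoming/2023-01-01/")

(increment
    .write
    .mode("append")
    .partitionBy("ts")      # one partition per increment
    .bucketBy(256, "key")   # same bucket count on every write
    .sortBy("key")          # each file is locally sorted by the join key
    .saveAsTable("kv"))     # bucketing requires saveAsTable, not a plain path write

# Every append adds new files to every bucket, so Spark still re-sorts the
# buckets at join time - the first limitation described above.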
I find that by default, Spark seems to write many small parquet files. I think it may be better if I use partitioning to reduce this?
But how do I choose a partition key? For example, for a users dataset which I frequently query by ID, do I partition by id? But then I am thinking, will it create 1 parquet file per user in that case?
What if I frequently query by 2 keys, but only 1 or the other, not both at the same time - is it useful to partition by both keys? For example, let's say I usually query by id and country; do I use partitionBy('id', 'country')?
If there is no specific pattern in which I query the data but I want to limit the number of files, do I use repartition then?
Partitioning creates a subdirectory for each value of the partition field, so if you are filtering by that field, instead of reading every file it will read only the files in the appropriate subdirectory.
You should partition when your data is too large and you usually work with a subset of the data at a time.
You should partition by a field that you both need to filter by frequently and that has low cardinality, i.e. one that will create a relatively small number of directories with a relatively large amount of data in each directory.
You don't want to partition by a unique id, for example. It would create lots of directories with only one row per directory; this is very inefficient the moment you need to select more than one id.
Some typical partition fields could be dates if you are working with time series (daily dumps of data, for instance), geographies (country, branches, ...) or taxonomies (types of object, manufacturer, etc.).
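A minimal sketch of that advice in PySpark, assuming a users DataFrame with id and country columns; the paths and column names are made up:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical dataset with id, country, ... columns
users = spark.read.parquet("s3://bucket/users_raw/")

(users
    .repartition("country")   # keeps small files down: roughly one file per country value
    .write
    .partitionBy("country")   # low-cardinality field you frequently filter by
    .parquet("s3://bucket/users_by_country/"))

# A query filtering on the partition column only reads that subdirectory:
spark.read.parquet("s3://bucket/users_by_country/").filter("country = 'DE'").count()

# If there is no dominant filter column and you just want fewer files,
# control the output file count directly instead of partitioning:
users.repartition(32).write.parquet("s3://bucket/users_flat/")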
I have data that I would like to do a lot of analytic queries on and I'm trying to figure out if there is a mechanism I can use to store it so that Spark can efficiently do joins on it. I have a solution using RedShift, but would ideally prefer to have something that is based on files in S3 instead of having a whole RedShift cluster up 24/7.
Introduction to the data
This is a simplified example. We have 2 initial CSV files.
Person records
Event records
The two tables are linked via the person_id field. person_id is unique in the Person table. Events have a many-to-one relationship with person.
The goal
I'd like to understand how to set up the data so I can efficiently perform the following query. I will need to perform many queries like this (all queries are evaluated on a per person basis):
The query is to produce a data frame with 4 columns, and 1 row for every person.
person_id - person_id for each person in the data set
age - "age" field from the person record
cost - The sum of the "cost" field for all event records for that person where "date" is during the month of 6/2013
All the solutions I currently have for this problem in Spark involve reshuffling all the data, which ends up making the process slow for large amounts of data (hundreds of millions of people). I am happy with a solution that requires me to reshuffle the data and write it to a different format once, if that can then speed up later queries.
The solution using RedShift
I can accomplish this solution using RedShift in a fairly straightforward way:
Both files are loaded in as RedShift tables, with DISTKEY person_id, SORTKEY person_id. This distributes the data so that all the data for a person is on a single node. The following query will produce the desired data frame:
select person_id, age, e.cost from person
left join (select person_id, sum(cost) as cost from events
where date between '2013-06-01' and '2013-06-30'
group by person_id) as e using (person_id)
The solution using Spark/Parquet
I have thought of several potential ways to handle this in Spark, but none accomplishes what I need. My ideas and the issues are listed below:
Spark Dataset write 'bucketBy' - Read the CSV files and then rewrite them out as parquet files using "bucketBy". Queries on these parquet files could then be very fast. This would produce a data setup similar to RedShift, but bucketBy is only supported when writing to a table via saveAsTable, not when writing plain parquet files.
Spark parquet partitioning - Parquet does support partitioning. Because parquet creates a separate set of files for each partition key value, you have to create a computed column to partition on, using a hash of person_id as the partition key (a rough sketch follows this list). However, when you later join these tables in Spark on "partition_key" and "person_id", the query plan still does a full hash partition. So this approach is no better than just reading the CSVs and shuffling every time.
Stored in some other data format besides parquet - I am open to this, but don't know of another data source that will work.
Using a compound record format - Parquet supports hierarchical data formats, so one can pre-join both tables into a hierarchical record (where a person record has an "events" field which is an array of struct elements) and then do processing on that. Given such a hierarchical record, there are two approaches to processing it:
Use explode to create separate records - With this approach you explode array fields into full rows, use standard data frame operations to do analytics, and then join them back to the main table. Unfortunately, I've been unable to get this approach to compile into efficient queries.
Use UDFs to perform operations on subrecords - This preserves the structure and executes without shuffles, but it is an awkward and verbose way to program. It also requires lots of UDFs, which aren't great for performance (although they beat large-scale shuffling of data).
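The partitioning idea from the second bullet, sketched in PySpark; the paths and the bucket count of 256 are arbitrary:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

person = spark.read.csv("s3://bucket/person.csv", header=True)  # hypothetical paths
events = spark.read.csv("s3://bucket/events.csv", header=True)

n_buckets = 256  # arbitrary choice

def with_partition_key(df):
    # pmod keeps the result non-negative so it yields clean partition values
    return df.withColumn("partition_key",
                         F.expr(f"pmod(hash(person_id), {n_buckets})"))

with_partition_key(person).write.partitionBy("partition_key").parquet("s3://bucket/person_pk/")
with_partition_key(events).write.partitionBy("partition_key").parquet("s3://bucket/events_pk/")

# Joining the two outputs on ("partition_key", "person_id") still shows a full
# hash-partition exchange in the plan, which is the drawback noted above.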
For my use cases, Spark has advantages over RedShift which aren't obvious in this simple example, so I'd prefer to do this with Spark. Please let me know if I am missing something and there is a good approach to this.
Edited per comment.
Assumptions:
Using parquet
Here's what I would try:
val eventAgg = spark.sql("""select person_id, sum(cost) as cost
from events
where date between '2013-06-01' and '2013-06-30'
group by person_id""")
eventAgg.cache.count
val personDF = spark.sql("""SELECT person_id, age from person""")
personDF.cache.count // cache is less important here, so feel free to omit
eventAgg.join(personDF, Seq("person_id"), "left")  // Seq(...) join form works across Spark versions
I just did this with some of my data and here's how it went (9 node/140 vCPUs cluster, ~600GB RAM):
27,000,000,000 "events" (aggregated to 14,331,487 "people")
64,000,000 "people" (~20 columns)
aggregated events building and caching took ~3 min
people caching took ~30 seconds (pulling from network, not parquet)
left joining took several seconds
Not caching the "people" led to the join taking a few seconds longer. Then forcing Spark to broadcast the couple hundred MB of aggregated events made the join take under 1 second.
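That broadcast step could look something like this; the sketch is in PySpark rather than Scala, reuses the eventAgg/personDF names from the answer above, and puts the small aggregated side on the right of the left join, since that is the side Spark can broadcast in a left outer join:

from pyspark.sql.functions import broadcast

# Broadcasting the small aggregated side avoids shuffling the large person table
result = personDF.join(broadcast(eventAgg), "person_id", "left")
result.count()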