Get uncompressed size of the dataset on HDFS after being read by Spark - apache-spark

I am trying to improve the performance of my Spark application. To this end, I am trying to determine the optimal number of shuffle partitions for a dataset. I read from multiple sources that each partition should be about 128 MB.
So, if I have a 1 GB file, I'll need around 8 partitions. But my question is: how do I find the file size? I know I can find the file size on HDFS using the following:
hdfs dfs -du -s {data_path}
But from what I understand, this is the compressed size, and the actual size of the file is different (Spark uses a compression codec when writing Parquet files, Snappy by default). This leads me to two questions:
How do I find the actual uncompressed size of the file?
What should the number of shuffle partitions be based on: the compressed size or the actual size?

Shuffle partitions are independent of the data size.
The data is uncompressed and then shuffled based on the number of shuffle partitions (using a hash partitioner, range partitioner, etc.).
Generally, the shuffle partitions are tuned:
1. To increase the parallelism available in the reducer stage.
2. To reduce the amount of data processed per shuffle partition (if we observe spills or if the reduce stage is memory intensive).
I read from multiple sources that each partition should be about 128 MB.
This is applicable only to mapper stages. The split sizes in the mapper are computed based on the size of the compressed data. You can tune the size of the mapper splits using spark.sql.files.maxPartitionBytes.
The shuffle partitions (configured using spark.sql.shuffle.partitions, defaulting to 200) relate to reducer stages.
In short, compression comes into play only in mapper stages and not reducer stages.
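For reference, here is a minimal sketch in Scala of the two settings mentioned above (the dataset path is hypothetical, and the Catalyst size estimate shown at the end is derived from the on-disk size for file sources in Spark 2.3+, so treat it as a rough approximation rather than the true decompressed size):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("partition-tuning-sketch").getOrCreate()

    // Mapper side: cap the bytes packed into each input split at read time.
    spark.conf.set("spark.sql.files.maxPartitionBytes", 128L * 1024 * 1024) // 128 MB

    // Reducer side: number of partitions produced by shuffles (joins, aggregations, ...).
    spark.conf.set("spark.sql.shuffle.partitions", 200L)

    // Hypothetical dataset path.
    val df = spark.read.parquet("hdfs:///data/events")

    // Number of mapper-side (input) partitions actually created.
    println(s"Input partitions: ${df.rdd.getNumPartitions}")

    // Catalyst's size estimate for the relation, in bytes. For file-based sources this
    // is based on the on-disk (compressed) size, not the decompressed size.
    println(s"Estimated size: ${df.queryExecution.optimizedPlan.stats.sizeInBytes} bytes")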

Related

How does Spark repartitioning work w.r.t. the input file partitioning?

I have 2 questions:
Can we set fewer partitions in a call to coalesce than the number of HDFS blocks? E.g., suppose I have a 1 GB file and the HDFS block size is 128 MB; can I do coalesce(1)?
As we know, input files on HDFS are physically split on the basis of block size. Does Spark further split the data (physically) when we repartition, or change parallelism?
E.g., suppose I have a 1 GB file and the HDFS block size is 128 MB. Can I do coalesce(1)?
Yes, you can coalesce to a single partition and write that out as one file to an external file system (at least with EMRFS).
Does Spark further split the data (physically) when we repartition or change parallelism?
repartition slices the data into partitions independently of the partitioning of the original input files.
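A minimal sketch of both operations, with hypothetical paths: coalesce(1) merges the existing input splits (about 8 for a 1 GB file with 128 MB blocks) into one partition and hence one output file, while repartition reshuffles into whatever partition count you ask for, independent of the input blocks.

    // Assumes an existing SparkSession `spark`; paths are hypothetical.
    val df = spark.read.parquet("hdfs:///data/one_gb_dataset")

    // coalesce(1): merge the existing partitions into one without a full shuffle,
    // so the write produces a single output file.
    df.coalesce(1).write.mode("overwrite").parquet("s3://my-bucket/single-file-output")

    // repartition(25): full shuffle into 25 partitions regardless of how many
    // HDFS blocks the input had, so the write produces 25 output files.
    df.repartition(25).write.mode("overwrite").parquet("hdfs:///data/repartitioned-output")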

Spark 2.0+: spark.sql.files.maxPartitionBytes is not working?

My understanding is that spark.sql.files.maxPartitionBytes is used to control the partition size when Spark reads data from HDFS.
However, I used Spark SQL to read the data for a specific date from HDFS. It contains 768 files; the largest file is 4.7 GB and the smallest is 17.8 MB.
The HDFS block size is 128 MB.
The value of spark.sql.files.maxPartitionBytes is 128 MB.
I expected that Spark would split a large file into several partitions, with each partition no larger than 128 MB. However, it doesn't work like that.
I know we can use repartition(), but it is an expensive operation.
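For reference, the setting is normally applied before the read is planned, and its effect can be inspected through the number of input partitions. A minimal sketch, with a hypothetical path:

    // Assumes an existing SparkSession `spark`. The setting must be in place
    // before the DataFrame is planned.
    spark.conf.set("spark.sql.files.maxPartitionBytes", 128L * 1024 * 1024) // 128 MB

    // Hypothetical path for one day of data.
    val df = spark.read.parquet("hdfs:///warehouse/events/dt=2019-06-01")

    // With splittable files (such as Parquet) read through the file-based data source,
    // each input partition should cover roughly 128 MB of file data at most.
    println(s"Input partitions: ${df.rdd.getNumPartitions}")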

Optimal file size and parquet block size

I have around 100 GB of data per day, which I write to S3 using Spark. The write format is Parquet, and the application that writes this data runs on Spark 2.3.
The 100 GB data is further partitioned, where the largest partition is 30 GB. For this case, let's just consider that 30 GB partition.
We are planning to migrate this whole dataset and rewrite it to S3 using Spark 2.4. Initially, we didn't decide on the file size or block size when writing to S3. Now that we are going to rewrite everything, we want to take the optimal file size and Parquet block size into consideration.
1. What is the optimal file size for writing to S3 in Parquet?
2. Can we write one 30 GB file with a Parquet block size of 512 MB? How will reading work in this case?
3. Same as #2, but with a Parquet block size of 1 GB?
Before talking about the parquet side of the equation, one thing to consider is how the data will be used after you save it to parquet.
If it's going to be read/processed often, you may want to consider what are the access patterns and decide to partition it accordingly.
One common pattern is partitioning by date, because most of our queries have a time range.
Partitioning your data appropriately will have a much bigger impact on performance when using that data after it is written.
Now, onto Parquet: the rule of thumb is for the Parquet block size to be roughly the block size of the underlying file system. That matters when you're using HDFS, but it doesn't matter much when you're using S3.
Again, the consideration for the Parquet block size is how you're reading the data.
Since a Parquet block basically has to be reconstructed in memory, the larger it is, the more memory is needed downstream. You will also need fewer workers, so if your downstream workers have plenty of memory, you can use larger Parquet blocks, as that will be slightly more efficient.
However, for better scalability, it's usually better having several smaller objects - especially according to some partitioning scheme - versus one large object, which may act as a performance bottleneck, depending on your use case.
To sum it up:
a larger Parquet block size means a slightly smaller file size (since compression works better on larger blocks) but a larger memory footprint when serializing/deserializing
the optimal file size depends on your setup
if you store 30 GB with a 512 MB Parquet block size, then since Parquet is a splittable format and Spark relies on Hadoop's getSplits(), the first stage of your Spark job will have 60 tasks. They will use byte-range fetches to get different parts of the same S3 object in parallel. However, you'll get better performance if you break it down into several smaller (preferably partitioned) S3 objects, since they can be written in parallel (one large file has to be written sequentially) and will also most likely have better reading performance when accessed by a large number of readers.
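As an illustration of the trade-off above, one way to control the Parquet block (row group) size at write time is to set parquet.block.size on the Hadoop configuration before writing. This is only a sketch; the bucket names, sizes, and partition count are hypothetical:

    // Assumes an existing SparkSession `spark`. Hypothetical scenario: rewrite the
    // 30 GB partition with 512 MB Parquet row groups, split across several smaller
    // S3 objects rather than one large file.
    spark.sparkContext.hadoopConfiguration.setInt("parquet.block.size", 512 * 1024 * 1024)

    val df = spark.read.parquet("s3://source-bucket/data/partition=big")

    // Roughly 60 output files of ~512 MB each (for ~30 GB of data), written in
    // parallel, assuming the data is fairly evenly distributed.
    df.repartition(60)
      .write
      .mode("overwrite")
      .parquet("s3://dest-bucket/data/partition=big")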

Maximum size of rows in Spark jobs using Avro/Parquet

I am planning to use Spark to process data where each individual element/row in the RDD or DataFrame may occasionally be large (up to several GB).
The data will probably be stored in Avro files in HDFS.
Obviously, each executor must have enough RAM to hold one of these "fat rows" in memory, and some to spare.
But are there other limitations on row size for Spark/HDFS or for the common serialisation formats (Avro, Parquet, Sequence File...)? For example, can individual entries/rows in these formats be much larger than the HDFS block size?
I am aware of published limitations for HBase and Cassandra, but not Spark...
There are currently some fundamental limitations related to block size, both for partitions in use and for shuffle blocks - both are limited to 2GB, which is the maximum size of a ByteBuffer (because it takes an int index, so is limited to Integer.MAX_VALUE bytes).
The maximum size of an individual row will normally need to be much smaller than the maximum block size, because each partition will normally contain many rows, and the largest rows might not be evenly distributed among partitions - if by chance a partition contains an unusually large number of big rows, this may push it over the 2GB limit, crashing the job.
See:
Why does Spark RDD partition has 2GB limit for HDFS?
Related Jira tickets for these Spark issues:
https://issues.apache.org/jira/browse/SPARK-1476
https://issues.apache.org/jira/browse/SPARK-5928
https://issues.apache.org/jira/browse/SPARK-6235
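As a back-of-the-envelope sketch (all numbers below are hypothetical), a common way to stay well under the 2 GB partition limit is to derive the partition count from an estimate of the data size and a conservative per-partition target:

    // Hypothetical estimates: keep each partition far below the 2 GB ByteBuffer limit.
    val estimatedTotalBytes = 500L * 1024 * 1024 * 1024  // e.g. ~500 GB of uncompressed data
    val targetPartitionBytes = 256L * 1024 * 1024         // aim for ~256 MB per partition

    val numPartitions = math.ceil(estimatedTotalBytes.toDouble / targetPartitionBytes).toInt

    // Assumes an existing DataFrame `df`.
    val rebalanced = df.repartition(numPartitions)
    println(s"Repartitioned into $numPartitions partitions")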

Spark DataFrames with Parquet and Partitioning

I have not been able to find much information on this topic, but let's say we use a DataFrame to read in a Parquet file that spans 10 blocks; Spark will naturally create 10 partitions. But when the DataFrame reads in the file to process it, won't it be processing a large data-to-partition ratio? If it were processing the file uncompressed, the data would be much larger, meaning the partitions would be larger as well.
So let me clarify with compressed Parquet (these numbers are not fully accurate):
1 GB Parquet = 5 blocks = 5 partitions, which might decompress to 5 GB, making it 25 blocks/25 partitions. But unless you repartition the 1 GB Parquet file, you will be stuck with just 5 partitions when optimally it would be 25 partitions. Or is my logic wrong?
Would it make sense to repartition to increase speed? Or am I thinking about this wrong? Can anyone shed some light on this?
Assumptions:
1 block = 1 partition for Spark
1 core operates on 1 partition
A Spark DataFrame doesn't load Parquet files into memory; it uses the Hadoop/HDFS API to read them during each operation. So the optimal number of partitions depends on the HDFS block size (which is different from the Parquet block size!).
A Spark 1.5 DataFrame partitions a Parquet file as follows:
1 partition per HDFS block
If the HDFS block size is less than the Parquet block size configured in Spark, one partition will be created for multiple HDFS blocks, such that the total size of the partition is no less than the Parquet block size
I saw the other answer, but I thought I could clarify more on this. If you are reading Parquet from a POSIX filesystem, then you can increase the parallelism of the reads just by having more workers in Spark.
But in order to control how data is balanced across the workers, you can use the hierarchical (partitioned) directory structure of the Parquet files and later point the workers to different partitions or parts of the Parquet dataset. This gives you control over how much data goes to each worker, based on the domain of your dataset (useful if handing an equal batch of data to every worker is not efficient).
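A sketch of this "point the workers at parts of the Parquet layout" idea, assuming a dataset partitioned by a date column (the layout and paths are hypothetical): you can read only the directories you need, or read everything and raise the parallelism beyond the number of input blocks.

    // Assumes an existing SparkSession `spark`.
    // Hypothetical layout: hdfs:///data/events/date=2021-01-01/part-*.parquet, ...

    // Read only one partition directory, so each worker sees less data.
    val oneDay = spark.read.parquet("hdfs:///data/events/date=2021-01-01")

    // Or read everything and raise parallelism explicitly, e.g. from the 5 partitions
    // created from the compressed file's blocks up to 25.
    val all = spark.read.parquet("hdfs:///data/events")
    println(s"Partitions from input blocks: ${all.rdd.getNumPartitions}")
    val rebalanced = all.repartition(25)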
