I am reading a CSV with 600 records using Spark 2.4.2. The last 100 records contain large data.
I am running into the following problem:
ERROR Job aborted due to stage failure:
Task 1 in stage 0.0 failed 4 times, most recent failure:
Lost task 1.3 in stage 0.0 (TID 5, 10.244.5.133, executor 3):
org.apache.spark.SparkException: Kryo serialization failed: Buffer overflow. Available: 0, required: 47094.
To avoid this, increase spark.kryoserializer.buffer.max value.
I have increased spark.kryoserializer.buffer.max to 2g (the maximum allowed setting) and the Spark driver memory to 1g, and was able to process a few more records, but it still cannot process all the records in the CSV.
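For reference, these settings were applied roughly like this (a sketch in Scala; in practice the same keys can be passed as --conf flags, and driver memory only takes effect if set before the driver JVM starts):
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("csv-ingest")  // hypothetical app name
  .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .config("spark.kryoserializer.buffer.max", "2047m")  // the cap is just below 2 GiB
  .config("spark.driver.memory", "1g")                 // only effective if set before the driver starts
  .getOrCreate()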
I have tried paging through the 600 records, e.g. with 6 partitions I can process 100 records per partition, but since the last 100 records are huge, the buffer overflow still occurs.
In this case the last 100 records are large, but they could just as well be the first 100, or the records between 300 and 400. Unless I sample the data beforehand to get an idea of the skew, I cannot optimize the processing approach.
Is there a reason why spark.kryoserializer.buffer.max is not allowed to go beyond 2g?
Maybe I can increase the number of partitions and decrease the records read per partition? Is it possible to use compression?
Appreciate any thoughts.
Kryo buffers are backed by byte arrays, and primitive arrays can only be
up to 2GB in size.
Please refer to the below link for further details.
https://github.com/apache/spark/commit/49d2ec63eccec8a3a78b15b583c36f84310fc6f0
Please increase the partition number since you cannot optimize the processing approach.
What do you have in those records such that a single one blows the Kryo buffer?
In general leaving the partitions at default 200 should always be a good starting point. Don't reduce it to 6.
It looks like a single record (line) blows the limit.
There are a number of options for reading in the CSV data; you can try the csv options.
If there is a single line that translates into a 2 GB buffer overflow, I would think about parsing the file differently.
The csv reader also ignores/skips some text in the files (no serialization) if you give it a schema.
If you remove the columns that are so huge from the schema, it may read in the data easily.
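For example, a sketch of that idea with hypothetical column names: declare an explicit schema and select only the small columns right away so the CSV parser can prune the huge one (CSV column pruning is on by default in Spark 2.4):
import org.apache.spark.sql.types._

// Hypothetical schema: "payload" stands in for the huge column that is not needed
val schema = new StructType()
  .add("id", LongType)
  .add("name", StringType)
  .add("payload", StringType)

val df = spark.read
  .schema(schema)
  .csv("/path/to/input.csv")  // hypothetical path
  .select("id", "name")       // the parser can skip materializing "payload"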
In my Spark job, the results I am sending to the driver are barely a few KBs. I still got the below exception despite spark.driver.maxResultSize being set to 4 GB:
ERROR TaskSetManager: Total size of serialized results of 3021102 tasks (4.0 GB) is bigger than spark.driver.maxResultSize (4.0 GB)
Do Spark accumulators or anything else contribute to the memory usage accounted for by spark.driver.maxResultSize? Is there official documentation/code I can refer to to learn more about this?
More details about the code/execution:
There are 3 million tasks
Each task reads 50 files from S3 and re-writes them back to S3 post-transformation
Tasks return the prefix of the S3 files along with some metadata, which is collected at the driver for saving to a file. This data is < 50 MB
This issue has been fixed here: the cause is that when Spark calculates the result size, it also counts the metadata (like metrics) in the task binary result sent back to the driver. Therefore, if you have a huge number of tasks but collect almost nothing (in terms of real data), you might still hit the error.
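If upgrading to a version with that fix is not an option, one workaround (a sketch, assuming the collected data itself really is tiny, as described) is to reduce the number of tasks whose results are sent back, e.g. by coalescing before the final collect:
// Hypothetical: resultsDF holds the S3 prefixes plus metadata produced by the tasks.
// With ~3 million tasks, the per-task result metadata alone can exceed
// spark.driver.maxResultSize, so shrink the number of result-producing tasks first.
val collected = resultsDF
  .coalesce(1000)  // illustrative; far fewer task results are returned to the driver
  .collect()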
I want to process several independent CSV files of similar sizes (100 MB) in parallel with PySpark.
I'm running PySpark on a single machine:
spark.driver.memory 20g
spark.executor.memory 2g
local[1]
File content:
type (has the same value within each csv), timestamp, price
First I tested it on one csv (note I used 35 different window functions):
from pyspark.sql import functions as f
from pyspark.sql.window import Window

logData = spark.read.csv("TypeA.csv", header=False, schema=schema)
# Compute moving averages: 35 different window sizes (i = 1..35)
for i in range(1, 36):
    w = Window.partitionBy("type").orderBy(f.col("timestamp").cast("long")).rangeBetween(-24*7*3600 * i, 0)
    logData = logData.withColumn("moving_avg_{}".format(i), f.avg("price").over(w))
# Some other simple operations... no aggregation, no sort
logData.write.parquet("res.pr")
This works great. However, I had two issues with scaling this job:
When I tried to increase the number of window functions to 50, the job OOMs. I'm not sure why PySpark doesn't spill to disk in this case, since the window functions are independent of each other.
When I tried to run the job on 2 CSV files, it also OOMs. It is also not clear why the data is not spilled to disk, since the window functions are essentially partitioned by CSV file, so they are independent.
The question is why PySpark doesn't spill to disk in these two cases to prevent OOM, or how can I hint Spark to do it?
If your machine cannot run all of these at once, you can process them in sequence and write out the data of each batch of files before loading the next batch.
I'm not sure if this is what you mean, but you can try to hint Spark to write some of the data to disk instead of keeping it all in RAM with:
from pyspark import StorageLevel
df.persist(StorageLevel.MEMORY_AND_DISK)
Update us if it helps.
In theory, you could process all these 600 files on one single machine. Spark should spill to disk when memory is not enough. But there are some points to consider:
The logic involves window aggregation, which results in a heavy shuffle operation. You need to check whether the OOM happened in the map or the reduce phase. The map phase processes each partition of a file and then writes shuffle output to files. The reduce phase then needs to fetch all this shuffle output from all the map tasks. Clearly, in your case, you can't keep all map tasks running at once.
So it's highly likely that the OOM happened in the map phase. If this is the case, it means the memory per core can't hold one single partition of a file. Please be aware that Spark does a rough estimation of memory usage and then spills if it thinks it should. As the estimation is not accurate, OOM is still possible. You can tune the partition size with the config below:
spark.sql.files.maxPartitionBytes (default 128MB)
Usually, 128 MB of input needs about 2 GB of heap, i.e. roughly 4 GB of total executor memory, since the execution memory on the executor JVM heap (roughly 0.5 of total executor memory) is approximately (total executor memory - executor.memoryOverhead (default 10%)) * spark.memory.fraction (default 0.6).
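For example, a sketch (in Scala, but the conf key is the same from PySpark) of lowering the input partition size so each map task holds less data; the value is illustrative:
// Smaller input splits -> more, lighter map tasks (64 MB here is illustrative)
spark.conf.set("spark.sql.files.maxPartitionBytes", 64L * 1024 * 1024)
val logData = spark.read.csv("/path/to/TypeA.csv")  // plus your schema/options
logData.rdd.getNumPartitions  // verify the resulting number of input partitions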
You could post all your configs from the Spark UI for further investigation.
I tried looking through the various posts but did not get an answer. Let's say my Spark job has 1000 input partitions but I only have 8 executor cores. The job has 2 stages. Can someone help me understand exactly how Spark processes this? If you can help answer the questions below, I'd really appreciate it.
As there are only 8 executor cores, will Spark process Stage 1 of my job 8 partitions at a time?
If the above is true, after the first set of 8 partitions is processed, where is this data stored while Spark is running the second set of 8 partitions?
If I don't have any wide transformations, will this cause a spill to disk?
For a Spark job, what is the optimal file size? I mean, is Spark better at processing 1 MB files with 1000 Spark partitions, or say a 10 MB file with 100 Spark partitions?
Sorry if these questions are vague. This is not a real use case, but as I am learning about Spark, I am trying to understand the internal details of how the different partitions get processed.
Thank You!
Spark will run all tasks for the first stage before starting the second. This does not mean that it will start 8 partitions, wait for them all to complete, and then start another 8 partitions. Instead, each time an executor finishes a partition, it will start another partition from the first stage until all partitions from the first stage have been started; then Spark will wait until all tasks in the first stage are complete before starting the second stage.
The data is stored in memory, or, if not enough memory is available, spilled to disk on the executor. Whether a spill happens depends on exactly how much memory is available and how much intermediate data results.
The optimal file size varies and is best measured, but some key factors to consider:
The total number of files limits total parallelism, so should be greater than the number of cores.
The amount of memory used processing a partition should be less than the amount available to the executor (~4 GB for AWS Glue).
There is overhead per file read, so you don't want too many small files.
I would be inclined towards 10MB files or larger if you only have 8 cores.
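As a rough sketch of the last two points (paths and numbers are hypothetical): read the many small files, then reduce the partition count to a small multiple of the 8 cores before writing, so per-file overhead and tiny partitions don't dominate:
// Hypothetical input: thousands of ~1 MB files under one prefix
val df = spark.read.csv("s3://my-bucket/small-files/")

// Collapse to a small multiple of the available cores (8) without a full shuffle
df.coalesce(32).write.parquet("s3://my-bucket/compacted/")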
I have a dataframe with roughly 200-600 GB of data that I am reading, manipulating, and then writing to CSV using the spark shell (Scala) on an Elastic MapReduce (EMR) cluster; the Spark write to CSV fails even after 8 hours.
Here's how I'm writing to CSV:
result.persist.coalesce(20000).write.option("delimiter",",").csv("s3://bucket-name/results")
The result variable is created through a mix of columns from some other dataframes:
var result=sources.join(destinations, Seq("source_d","destination_d")).select("source_i","destination_i")
Now, I am able to read the csv data it is based on in roughly 22 minutes. In this same program, I'm also able to write another (smaller) dataframe to csv in 8 minutes. However, for this result dataframe it takes 8+ hours and still fails ... saying one of the connections was closed.
I'm also running this job on 13 x c4.8xlarge instances on EC2, with 36 cores each and 60 GB of RAM, so I thought I'd have the capacity to write to CSV, especially after 8 hours.
Many stages required retries or had failed tasks and I can't figure out what I'm doing wrong or why it's taking so long. I can see from the Spark UI that it never even got to the write CSV stage and was busy with persist stages, but without the persist function it was still failing after 8 hours. Any ideas? Help is greatly appreciated!
Update:
I've ran the following command to repartition the result variable into 66K partitions:
val r2 = result.repartition(66000) // confirmed with r2.rdd.getNumPartitions
r2.write.option("delimiter",",").csv("s3://s3-bucket/results")
However, even after several hours, the jobs are still failing. What am I doing wrong still?
Note: I'm running the Spark shell via spark-shell --master yarn --driver-memory 50G
Update 2:
I've tried running the write with a persist first:
import org.apache.spark.storage.StorageLevel
r2.persist(StorageLevel.MEMORY_AND_DISK)
But many stages failed, returning: Job aborted due to stage failure: ShuffleMapStage 10 (persist at <console>:36) has failed the maximum allowable number of times: 4. Most recent failure reason: org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle 3, or saying Connection from ip-172-31-48-180.ec2.internal/172.31.48.180:7337 closed
(Screenshots: Executors page; Spark web UI page for a node returning a shuffle error; Spark web UI page for a node returning an EC2 connection-closed error; overall Job Summary page.)
I can see from the Spark UI that it never even got to the write CSV stage and was busy with persist stages, but without the persist function it was still failing after 8 hours. Any ideas?
It is a FetchFailedException, i.e. a failure to fetch a shuffle block.
Since you are able to deal with small files and it only fails with the huge data...
I strongly suspect there are not enough partitions.
The first thing is to verify/print sources.rdd.getNumPartitions, destinations.rdd.getNumPartitions, and result.rdd.getNumPartitions.
You need to repartition after the data is loaded in order to distribute the data (via a shuffle) to the other nodes in the cluster. This will give you the parallelism that you need for faster processing without failures.
Furthermore, to verify the other configurations applied, print all the configs like this and adjust them to the correct values as needed:
sc.getConf.getAll
Also have a look at SPARK-5928 and Spark-TaskRunner-FetchFailedException. Possible reasons: OOM or container memory limits.
Repartition both sources and destinations before joining, with a number of partitions such that each partition would be 10 MB - 128 MB (try to tune); there is no need to make it 20000 (IMHO that's too many).
Then join on those two columns and write, without repartitioning (i.e. the output partitioning should be the same as the repartitioning before the join).
If you still have trouble, try doing the same thing after converting both dataframes to RDDs (there are some differences between the APIs, especially regarding repartitioning, key-value RDDs, etc.).
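A sketch of that suggestion (the partition count is illustrative and would need tuning so each partition lands in the 10 MB - 128 MB range):
import org.apache.spark.sql.functions.col

// Repartition by the join keys so matching rows are co-located before the join
val numParts = 4000  // illustrative; tune for ~10-128 MB per partition
val s = sources.repartition(numParts, col("source_d"), col("destination_d"))
val d = destinations.repartition(numParts, col("source_d"), col("destination_d"))

val result = s.join(d, Seq("source_d", "destination_d"))
  .select("source_i", "destination_i")

// Write without further repartitioning/coalescing; output partitioning follows the join
result.write.option("delimiter", ",").csv("s3://bucket-name/results")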
I'm working on a solution where the driver program will read an XML file, take an HDFS file path from it, and that file will be read inside a map operation. I have a few questions here.
The map operation will be performed in containers (containers are allocated when the job starts).
What if the single input file is greater than an executor? Since the file is not read in the driver program, it cannot allocate more resources? Or will the application master get more memory from the resource manager?
Any help is highly appreciated.
What if the single input file is greater than an executor?
As the file is in HDFS, Spark will create 1 partition for 1 block in HDFS. Every partition will be processed on a worker.
If the file has more blocks than can be computed at once, Spark makes sure the pending partitions are computed once resources are free (after completing the transformations within a stage).
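For instance, you can see the block-to-partition mapping right after loading (a sketch with a hypothetical HDFS path):
// One HDFS block (e.g. 128 MB) typically becomes one partition
val lines = sc.textFile("hdfs:///data/large_input.txt")
println(lines.getNumPartitions)  // roughly fileSize / blockSize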
The loaded file appears as an RDD. An RDD is a combination of pieces, so-called partitions, which reside across the cluster. Reading the file is not a problem, but after transformations it can throw an OOM exception depending on executor memory limitations, because there can be shuffle operations that require transferring partitions to one place. By default, executor memory is set to 512 MB, but for processing large amounts of data you should set a custom memory parameter.
Spark reserves parts of that memory for cached data storage and for temporary shuffle data. Set the heap for these with the parameters spark.storage.memoryFraction (default 0.6) and spark.shuffle.memoryFraction (default 0.2). Because these parts of the heap can grow before Spark can measure and limit them, two additional safety parameters must be set: spark.storage.safetyFraction (default 0.9) and spark.shuffle.safetyFraction (default 0.8). Safety parameters lower the memory fraction by the amount specified. The actual part of the heap used for storage by default is 0.6 × 0.9 (safety fraction times the storage memory fraction), which equals 54%. Similarly, the part of the heap used for shuffle data is 0.2 × 0.8 (safety fraction times the shuffle memory fraction), which equals 16%. You then have 30% of the heap reserved for other Java objects and resources needed to run tasks. You should, however, count on only 20%.
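A sketch of setting those knobs on a SparkConf (note: these fraction parameters belong to the legacy memory model described above; Spark 1.6+ replaced them with spark.memory.fraction and spark.memory.storageFraction):
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.executor.memory", "4g")          // illustrative; raise from the old 512 MB default
  .set("spark.storage.memoryFraction", "0.6")  // 0.6 * 0.9 safety = ~54% of heap for storage
  .set("spark.storage.safetyFraction", "0.9")
  .set("spark.shuffle.memoryFraction", "0.2")  // 0.2 * 0.8 safety = ~16% of heap for shuffle
  .set("spark.shuffle.safetyFraction", "0.8")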