Spark - What could be the ideal broadcast table size

We use broadcast as one of the join optimization techniques in Spark. Could you please help me understand the points below?
1) Should the broadcast table size always be less than the driver memory?
In this case, suppose my broadcast table size is 4 GB but the driver memory is 3 GB. Can I increase the driver memory to 6 GB and broadcast the 4 GB table?
2) What is the maximum driver memory I can provide? Is there any limit?
I think it totally depends on what we bring back to the driver (broadcast, collect, etc.).
3) I heard we can broadcast only up to 2 GB of data, because Java serialization supports only up to 2 GB. Is that true?
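Not an answer to the size questions themselves, but for reference, a minimal PySpark sketch of the two knobs usually involved when broadcasting a table of a few GB: the automatic broadcast threshold and an explicit broadcast hint. The table names, paths, and the 6g / 4 GB values are assumptions matching the scenario above, not recommended settings.

from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

# In practice spark.driver.memory is passed at submit time (e.g. --driver-memory 6g);
# setting it in the builder only works for a freshly started session.
spark = (SparkSession.builder
         .appName("broadcast-sketch")
         .config("spark.driver.memory", "6g")  # assumed: sized above the ~4 GB broadcast table
         .config("spark.sql.autoBroadcastJoinThreshold", str(4 * 1024 * 1024 * 1024))  # bytes
         .getOrCreate())

fact_df = spark.read.parquet("/path/to/fact")  # hypothetical inputs
dim_df = spark.read.parquet("/path/to/dim")    # the ~4 GB table to broadcast

# The explicit hint broadcasts dim_df regardless of the automatic threshold.
joined = fact_df.join(broadcast(dim_df), "join_key")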

Related

Spark - What could be the ideal parameters to process 300 GB of data in Spark

To process 300 GB of data, could you please provide the following figures?
The job reads the data, builds a DataFrame, applies some filters and aggregations, and writes the data out.
I have a good cluster with 640 GB RAM and 160 cores (10 nodes, each with 64 GB RAM and 16 cores).
Ideal executor memory (when not using any cache, without any spill)
Ideal executor memory (when using cache)
Ideal driver memory (not doing any collect)
Ideal number of cores (cores for parallelism; let me know how much data each core should handle)
Best partition/block size (I think the default is 64 MB, but what is best: 128 MB or 256 MB?)
Number of shuffle partitions (default 200)
Note: Always think from the perspective of processing large data with an optimized solution.
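No single answer exists without profiling the job, but a commonly cited rule-of-thumb sizing for a 10-node, 16-core, 64 GB cluster looks roughly like the sketch below. Every number is an assumption derived from that rule (leave one core and ~1 GB per node for the OS, about five cores per executor, ~10% off-heap overhead), not a measured optimum.

# Rule-of-thumb sketch for 10 nodes x 16 cores x 64 GB (assumed, not benchmarked):
#   1 core and ~1 GB per node reserved for OS/daemons  -> 15 usable cores, ~63 GB per node
#   ~5 cores per executor                              -> 3 executors per node, 30 in total
#   63 GB / 3 executors ~= 21 GB, minus ~10% overhead  -> ~19 GB heap per executor
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("sizing-sketch")
         .config("spark.executor.instances", "30")
         .config("spark.executor.cores", "5")
         .config("spark.executor.memory", "19g")
         .config("spark.executor.memoryOverhead", "2g")
         .config("spark.driver.memory", "4g")            # modest, since nothing is collected
         .config("spark.sql.shuffle.partitions", "600")  # a few times the total core count; tune per job
         .getOrCreate())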

How to support a Spark broadcast join for a 5 GB table

I am doing a broadcast join in Spark via a broadcast hint. My broadcast table is 5 GB, with 224M records and 2 columns. As per Spark, broadcast joins are supported up to 8 GB or 500M records.
But in my case, when it goes beyond 30M records, broadcast issues occur and sometimes RPC timeout issues.
My driver memory is 8 GB and executor memory is 8 GB. Also, the driver's max result size is 4 GB. What additional settings can I apply to support broadcasting this 5 GB table?
My nodes have enough bandwidth, and I tried with 12 GB driver memory as well, but it still doesn't work.
Please let me know if there is any other Spark setting I can apply to make it work.
Thanks
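Without knowing the exact failure, the settings below are the ones most often implicated in large-broadcast timeouts. The values are illustrative assumptions for the 5 GB case described above, not a verified fix, and driver/executor memory themselves are normally passed at submit time.

from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = (SparkSession.builder
         .appName("large-broadcast-sketch")
         .config("spark.sql.broadcastTimeout", "1200")          # seconds; default 300, raised for a slow 5 GB build side
         .config("spark.driver.maxResultSize", "6g")            # the collected build side must fit under this cap
         .config("spark.sql.autoBroadcastJoinThreshold", "-1")  # disable auto-broadcast; rely only on the explicit hint
         .getOrCreate())

big_df = spark.read.parquet("/path/to/big")        # hypothetical inputs
lookup_df = spark.read.parquet("/path/to/lookup")  # the ~5 GB, 2-column table

result = big_df.join(broadcast(lookup_df), "key")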

pyspark java.lang.OutOfMemoryError: GC overhead limit exceeded

I'm trying to process 10 GB of data using Spark and it is giving me this error:
java.lang.OutOfMemoryError: GC overhead limit exceeded
Laptop configuration: 4 CPUs, 8 logical cores, 8 GB RAM
Spark configuration while submitting the Spark job:
spark = SparkSession.builder.master('local[6]').config("spark.ui.port", "4041").appName('test').getOrCreate()
spark.conf.set("spark.executor.instances", 1)
spark.conf.set("spark.executor.cores", 5)
After searching the internet about this error, I have a few questions.
If answered, that would be a great help.
1) Spark is an in-memory computing engine; for processing 10 GB of data, should the system have 10+ GB of RAM? Does Spark load the 10 GB of data into 10+ GB of RAM and then do the job?
2) If point 1 is correct, how are big companies processing hundreds of TBs of data? Are they clustering multiple systems to form 100+ TB of RAM and then processing 100 TB of data?
3) Is there no other way to process 50 GB of data with 8 GB RAM and 8 cores by setting proper Spark configurations? If there is, what is the way, and what should the Spark configuration be?
4) What should the ideal Spark configuration be for processing 8 GB of data if the system has 8 GB RAM and 8 cores?
Spark configuration to be defined in the Spark config:
spark = SparkSession.builder.master('local[?]').config("spark.ui.port", "4041").appName('test').getOrCreate()
spark.conf.set("spark.executor.instances", ?)
spark.conf.set("spark.executor.cores", ?)
spark.executor.cores = ?
spark.executor.memory = ?
spark.yarn.executor.memoryOverhead = ?
spark.driver.memory = ?
spark.driver.cores = ?
spark.executor.instances = ?
Number of core instances = ?
spark.default.parallelism = ?
I hope the following helps, if not clarifies everything.
1) Spark is an in-memory computing engine; for processing 10 GB of data, should the system have 10+ GB of RAM? Does Spark load the 10 GB of data into 10+ GB of RAM and then do the job?
Spark, being an in-memory computation engine, takes its input/source from an underlying data lake or distributed storage system. The 10 GB file will be broken into smaller blocks (128 MB or 256 MB block size for a Hadoop-based data lake), and the Spark driver will get many executors/cores to read them from the cluster's worker nodes. If you try to load 10 GB of data on a laptop or a single node, it will certainly go out of memory. It has to load all the data either on one machine or across many worker nodes before it is processed.
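As a small illustration of the block-to-partition split described above (the path and file are hypothetical):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partition-count-sketch").getOrCreate()

df = spark.read.text("hdfs:///data/big_10gb_file.txt")  # hypothetical 10 GB file on HDFS
print(df.rdd.getNumPartitions())  # roughly file size / 128 MB with default settings, so ~80 partitions
# Each partition is read by one task on an executor core, which is why the work is
# spread across the cluster's worker nodes rather than loaded on a single machine.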
2) If point 1 is correct, how are big companies processing hundreds of TBs of data? Are they clustering multiple systems to form 100+ TB of RAM and then processing 100 TB of data?
Large data processing projects design the storage and access layer with many design patterns. They don't simply dump GBs or TBs of data into a file system like HDFS. They use partitions (e.g., sales transaction data partitioned by month/week/day), and for structured data there are different file formats available (especially columnar ones) which help to load only those columns required for processing. So the right file format, partitioning, and compaction are the key attributes for large files.
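A short sketch of that partitioning and column-pruning idea; the table layout, paths, and column names are made up for illustration:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("layout-sketch").getOrCreate()

# Tiny stand-in for a large sales table (hypothetical columns).
sales_df = spark.createDataFrame(
    [(2023, 12, "s1", 10.0), (2023, 12, "s2", 20.0)],
    ["year", "month", "store_id", "amount"])

# Write columnar data partitioned by date so later jobs read only the slices they need.
sales_df.write.mode("overwrite").partitionBy("year", "month").parquet("/tmp/lake/sales")

# Partition pruning (the filter on year/month) plus column pruning (the select)
# means only a fraction of the stored bytes is actually scanned.
subset = (spark.read.parquet("/tmp/lake/sales")
          .where("year = 2023 AND month = 12")
          .select("store_id", "amount"))
subset.show()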
3) Is there no other way to process 50 GB of data with 8 GB RAM and 8 cores by setting proper Spark configurations? If there is, what is the way, and what should the Spark configuration be?
Very unlikely if there is no partitioning, but there are ways. It also depends on what kind of file it is. You can create a custom streaming file reader that reads a logical block and processes it. However, enterprises don't read 50 GB as one single-unit file. Even if you load a 10 GB Excel file on your machine via an Office tool, it will go out of memory.
4) What should the ideal Spark configuration be for processing 8 GB of data if the system has 8 GB RAM and 8 cores?
Leave 1 core and 1-2 GB for the OS and use the rest for your processing. Depending on what kind of transformation is being performed, you have to decide the memory for the driver and worker processes. Your driver should have 2 GB of RAM. But a laptop is primarily a playground to explore code syntax and is not suitable for large data sets. It is better to build your logic with dataframe.sample() and then push the code to a bigger machine to generate the output.
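A possible starting point for that 8 GB / 8-core laptop, following the advice above. The values are assumptions, and note that in local mode the driver and executor share one JVM, so the spark.executor.* settings from the question have no effect:

from pyspark.sql import SparkSession

# local[7]: leave one core for the OS. Driver memory must be set before the JVM starts,
# so pass it at launch (e.g. spark-submit --driver-memory 6g) or in the builder of a fresh session.
spark = (SparkSession.builder
         .master("local[7]")
         .appName("laptop-sketch")
         .config("spark.driver.memory", "6g")           # ~6 GB for Spark, ~2 GB left for the OS
         .config("spark.sql.shuffle.partitions", "16")  # far fewer than the 200 default for a single machine
         .getOrCreate())

# Develop against a sample, then move the full job to a bigger machine.
df = spark.read.parquet("/path/to/8gb_dataset")  # hypothetical input
dev_df = df.sample(fraction=0.05, seed=42)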

Will Spark load data into memory if the data is 10 GB and RAM is 1 GB?

If I have a cluster of 5 nodes, each node having 1 GB of RAM, and my data file is 10 GB distributed across all 5 nodes, say 2 GB on each node, and I trigger
val rdd = sc.textFile("filepath")
rdd.collect
will Spark load the data into RAM, and how will Spark deal with this scenario?
Will it refuse straight away, or will it process it?
Let's understand the question first, #intellect_dp. You have a cluster of 5 nodes (here by "node" I assume a machine, which generally includes a hard disk, RAM, a 4-core CPU, etc.); each node has 1 GB of RAM, and you have a 10 GB data file distributed in such a manner that 2 GB of data resides on the hard disk of each node. Let's assume that you are using HDFS and that your block size on each node is 2 GB.
Now let's break this down:
Each block size = 2 GB
RAM size of each node = 1 GB
Due to lazy evaluation in Spark, only when an action API is triggered will it load your data into RAM and execute it further.
Here you are saying that you are using "collect" as the action API. Now the problem is that the RAM size is less than your block size, and if you process it with all of Spark's default configuration (1 block = 1 partition), and considering that no further nodes are going to be added, then it will give you an out-of-memory exception.
Now the question: is there any way Spark can handle this kind of large data with the given hardware provisioning?
Answer: yes. First you need to set the default minimum number of partitions:
val rdd = sc.textFile("filepath",n)
Here n is the minimum number of partitions per block. Now, as we have only 1 GB of RAM, we need to keep each partition under 1 GB, so let's say we take n = 4.
Now, as your block size is 2 GB and the minimum number of partitions per block is 4:
each partition size will be 2 GB / 4 = 500 MB;
Spark will process this first 500 MB and convert it into an RDD partition; when the next 500 MB chunk comes, the first partition will get spilled to the hard disk (given that you have set the storage level to MEMORY_AND_DISK).
In this way it will process your whole 10 GB data file with the given cluster hardware configuration.
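In PySpark terms, the approach described above might look roughly like the sketch below; the path is hypothetical and n = 4 is just the example value from the answer:

from pyspark import SparkContext, StorageLevel

sc = SparkContext.getOrCreate()

# Ask for more input partitions so each slice is a fraction of a block (~500 MB here).
rdd = sc.textFile("hdfs:///data/10gb_file.txt", minPartitions=4)

# MEMORY_AND_DISK lets partitions that do not fit in RAM spill to local disk instead of failing.
rdd.persist(StorageLevel.MEMORY_AND_DISK)

print(rdd.count())  # an action that streams through the partitions task by task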
Now, I personally would not recommend the given hardware provisioning for such a case;
it will certainly process the data, but there are a few disadvantages:
firstly, it will involve many I/O operations, making the whole process very slow;
secondly, if any lag occurs in reading from or writing to the hard disk, your whole job can fail, and you will get frustrated with such a hardware configuration. In addition, you will never be sure that Spark will be able to process your data and give results as the data grows.
So try to keep I/O operations to a minimum, and
utilize Spark's in-memory computation power, with the addition of a few more resources, for faster performance.
When you use collect, all the data sent back is collected as an array only on the driver node.
From this point on, distribution, Spark, and the other nodes don't play a part. You can think of it as a pure Java application on a single machine.
You can set the driver's memory with spark.driver.memory and ask for 10G.
From that moment, if you do not have enough memory for the array, you will probably get an OutOfMemory exception.
On the other hand, if we do so, performance will be impacted; we will not get the speed we want.
Also, Spark only stores results in RDDs, so I can say the result would not be the complete data; in the worst case, if we are doing select * from tablename, it will give the data in chunks, whatever it can afford...
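A small sketch of the collect-to-driver point above: take and toLocalIterator keep the driver's footprint bounded, whereas collect must hold the entire result as one array (the path is hypothetical).

from pyspark import SparkContext

sc = SparkContext.getOrCreate()
rdd = sc.textFile("hdfs:///data/10gb_file.txt")  # hypothetical 10 GB file

preview = rdd.take(20)  # brings only 20 records to the driver

for line in rdd.toLocalIterator():  # streams one partition at a time to the driver
    pass  # process each record without materialising the whole dataset

# rdd.collect()  # would try to build the entire 10 GB as a single array on the driver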

Spark 2.x - Shuffle on "small" data crashes "big" executors

My (Py)Spark 2.1.1 app consists of two executors with 5 cores and 30 GB of heap (spark.executor.memory) each. I have 3.2 GB of data persisted in memory (deserialized), spread over a dozen partitions and shared between my two executors (1.9 GB + 1.3 GB). I then want to repartition this data by calling repartition('myCol') on my persisted dataframe, with myCol having only three keys with a 60-20-20 distribution. I then want to write the repartitioned data into (3) .parquet files. As expected, this transformation triggers a full shuffle of the data.
First question: In the Spark UI, Shuffle Write amounts to 5.9 GB. Why is this amount much higher than the size of the persisted data? Is it the format Spark uses to write shuffle files to disk (text strings?)? Replication?
Second question: My executors keep dying with error messages such as org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle or ExecutorLostFailure (executor 2 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 32.0 GB of 32 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead. spark.yarn.executor.memoryOverhead is already set to 2g, but I must confess I don't really get how this parameter should help in that context. But the main question is: how can shuffling 3 GB of data OOM a 30 GB executor?
I changed a few parameters based on the understanding I have of Spark (with limited success, obviously): I set spark.memory.fraction to 0.9 and spark.memory.storageFraction to 0.0.
Many thanks in advance for any help; this situation is so frustrating.
PS: Maybe once the issue is solved I can redesign my app with less memory per executor. It currently feels like a terrible waste of resources to me.
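For context, a minimal PySpark reconstruction of the pipeline and configuration described above; the dataframe name, paths, and the raised overhead value are assumptions, not a confirmed fix. With only three skewed keys, repartition('myCol') funnels each key's rows through a single shuffle task, which is where the per-task memory pressure concentrates.

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("shuffle-sketch")
         .config("spark.executor.instances", "2")
         .config("spark.executor.cores", "5")
         .config("spark.executor.memory", "30g")
         .config("spark.yarn.executor.memoryOverhead", "4g")  # raised from 2g; YARN kills the container when heap + overhead is exceeded
         .getOrCreate())

df = spark.read.parquet("/path/to/input").persist()  # the ~3.2 GB dataset, hypothetical path

# Three keys with a 60-20-20 split means three non-empty shuffle partitions,
# so each output file is written by one task holding that key's full share.
df.repartition("myCol").write.parquet("/path/to/output")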
