Cannot scale Spark job execution time - apache-spark

I built an ETL pipeline to process terabytes of data. To achieve that goal, I set up a Spark cluster (Scala) and a MinIO server for object storage.
I can process and save 200 gigabytes in roughly 30 minutes using 10 virtual machines for Spark processing.
The issue is that I am not able to scale: if I double the number of Spark virtual machines, the processing time does not change.
I need some guidance to identify the bottleneck.
ARCHITECTURE SUMMARY
I use virtual machines set up on-premises with VMware ESXi 6. The physical machines (which host the VMs) are on a 1 GB network.
There is no overcommitment of vCPU or RAM.
Spark VMs: 16 vCPU, 64 GB RAM
MinIO (storage): 16 vCPU, 64 GB RAM, configured using RAID0
SOME DETAILS ABOUT DATA PROCESSING
The process is straightforward (a minimal code sketch follows the list):
Read data from 2 sources on MinIO,
Make a union of the data from the two sources,
Filter out empty values in a column of the resulting dataset,
Apply 2 groupBy operations on that column (I save intermediate values after the first groupBy),
Union the dataset obtained after the groupBy operations with the empty-value rows,
Save the whole result back to MinIO.
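For reference, here is a minimal PySpark sketch of those steps (the actual job is written in Scala, but the DataFrame API is equivalent; the bucket paths, the column name "key", and the aggregations are placeholders rather than the real ones):
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# 1-2. Read the two sources from MinIO (S3A endpoint assumed to be configured) and union them.
src1 = spark.read.parquet("s3a://bucket/source1/")
src2 = spark.read.parquet("s3a://bucket/source2/")
combined = src1.unionByName(src2)

# 3. Split the rows whose key column is empty/null from the rest.
is_empty = F.col("key").isNull() | (F.col("key") == "")
non_empty = combined.filter(~is_empty)
empty = combined.filter(is_empty)

# 4. Two groupBy passes on that column; the intermediate result is saved.
first_agg = non_empty.groupBy("key").agg(F.count("*").alias("cnt"))
first_agg.write.mode("overwrite").parquet("s3a://bucket/intermediate/")
second_agg = first_agg.groupBy("key").agg(F.sum("cnt").alias("total"))

# 5-6. Union the aggregated rows back with the empty-key rows and save to MinIO
#      (allowMissingColumns requires Spark 3.1+).
result = second_agg.unionByName(empty.select("key"), allowMissingColumns=True)
result.write.mode("overwrite").parquet("s3a://bucket/output/")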
MY SPARK CONFIGURATION
Spark is deployed in Client mode.
mode = "client"
network.timeout = 1800001
rpc.askTimeout = 1800000
default.parallelism = 320
sql.shuffle.partitions = 320
sql.adaptive.coalescePartitions.enabled = true
sql.adaptive.enabled = true
sql.files.maxRecordsPerFile = "100000" // spark.write
sql.files.maxPartitionBytes = "31457280" // spark.read
sql.adaptive.advisoryPartitionSizeInBytes = "30m"
sql.objectHashAggregate.sortBased.fallbackThreshold = -1
memory.fraction = 0.85
# executor configs
executor.cores = 4
executor.memory = 12g
executor.memoryOverhead = 3g
total.executor.cores = 160
executor.instances = 40
executor.heartbeatInterval = 1800000

Based on the infrastructure you described, in order to scale the Spark jobs you also need to scale the data backend. This is similar to an app that writes to a traditional database: when the number of requests increases, adding more front-end workers won't improve performance until the database is also scaled to match the additional load. You have to consider all the bottlenecks in your infrastructure.
So in this case, if you are increasing your Spark VM count, you also need to scale the MinIO backend accordingly to get the performance increase.

You cannot just scale Spark to scale your performance; you first need to check whether MinIO is deployed at the scale you require.
From what I can gather, you have deployed MinIO on only a single node, and there is only so much a single storage node can do to keep up as you scale your clients.
Just scaling the clients is not sufficient.

Related

Spark configuration based on my data size

I know there's a way to configure a Spark application based on your cluster resources ("executor memory", "number of executors", and "executor cores"). I'm wondering whether there is a way to do it based on the input data size.
What would happen if the input data size does not fit into all the partitions?
Example:
Data input size = 200 GB
Number of partitions in cluster = 100
Size of partitions = 128 MB
Total size that the partitions could handle = 100 * 128 MB = 12.8 GB
What about the rest of the data (~187 GB)?
I guess Spark will wait for resources to become free, since it is designed to process data in batches. Is this a correct assumption?
Thanks in advance.
For best performance, I recommend not setting spark.executor.cores; you want one executor per worker. Also, give spark.executor.memory roughly 70% of the worker's memory. Finally, if you want runtime statistics to influence the number of partitions, use Spark 3, which comes with Adaptive Query Execution (AQE). With AQE, Spark will dynamically coalesce shuffle partitions, so you can set the partition count to an arbitrarily large number, such as:
spark.sql.shuffle.partitions=<number of cores * 50>
Then just let AQE do its thing. You can read more about it here:
https://www.databricks.com/blog/2020/05/29/adaptive-query-execution-speeding-up-spark-sql-at-runtime.html
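For example, a minimal PySpark sketch of that setup (assuming a 160-core cluster like the one in the first question; the values are illustrative, not a recommendation):
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("aqe-sketch")
    .config("spark.sql.adaptive.enabled", "true")                     # turn on AQE (Spark 3.x)
    .config("spark.sql.adaptive.coalescePartitions.enabled", "true")  # let AQE merge small shuffle partitions
    .config("spark.sql.shuffle.partitions", str(160 * 50))            # deliberately large; AQE coalesces at runtime
    .getOrCreate()
)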
There are 2 aspects to your question: the first is the storage of this data, and the second is the execution.
With regards to storage, when you say "size of partitions = 128 MB", I assume you use HDFS to store this data and 128 MB is your default block size. HDFS itself internally decides how to split the 200 GB file and stores it in chunks not exceeding 128 MB, and your HDFS cluster should have more than 200 GB * replication factor of combined storage to persist this data.
Coming to the Spark execution part of the question, once you define spark.default.parallelism=100, Spark will use this value as the default level of parallelism when performing certain operations (like joins, etc.). Note that the amount of data processed by each task is not affected by the block size (128 MB) in any way, which means each task will work on roughly 200 GB / 100 = 2 GB of data (provided the executor memory is sufficient for the operation being performed). If there isn't enough capacity in the Spark cluster to run 100 tasks in parallel, it will launch as many as it can and process the rest in waves as resources become available.
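To make that arithmetic concrete, a small PySpark sketch (the input path is hypothetical; the 2 GB-per-task figure is simply 200 GB divided by 100 parallel tasks, as described above):
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .config("spark.default.parallelism", "100")     # default parallelism for RDD operations
    .config("spark.sql.shuffle.partitions", "100")  # partitions used after DataFrame shuffles
    .getOrCreate()
)

df = spark.read.parquet("hdfs:///data/input-200gb/")   # hypothetical 200 GB input
print(df.rdd.getNumPartitions())       # read partitions follow the file splits/block size, not the settings above
grouped = df.groupBy("some_key").count()
print(grouped.rdd.getNumPartitions())  # shuffle output uses the 100 partitions configured above (with AQE off)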

In GCP Dataproc, what is the maximum number of worker nodes we can use in a cluster?

I am about to train on 5 million rows of data containing 7 categorical (string) variables, but will soon train on 31 million rows.
I am wondering what the maximum number of worker nodes in a cluster is, because even if I type something like 2,000,000, it doesn't show any indication of an error.
Another question: what would be the best way to determine how many worker nodes are needed?
Thank you in advance!
Max cluster size
Dataproc does not limit the number of nodes in a cluster, but other software can have limitations. For example, it's known that there are YARN deployments with around 10k nodes, so going above that may not work for the Spark on YARN that Dataproc runs.
You also need to take into account GCE limitations such as the various quotas (CPU, RAM, disk, external IPs, etc.) and QPS limits, and make sure that you have enough of these for such a large cluster.
I think that 1k nodes is a reasonable size to start from for a large Dataproc cluster if you need it, and you can upscale it further to add more nodes as necessary after cluster creation.
Cluster size estimation
You should determine how many nodes you need based on your workload and the VM size that you want to use. For your use case, it seems you need to find a guide on how to estimate cluster size for ML training.
Alternatively, you can just do a binary search until you are satisfied with the training time. For example, you can start from a 500-node cluster of 8-core machines, and if the training time is too long, increase the cluster size to 600-750 nodes and see whether the training time decreases as you expect. You can repeat this until you are satisfied with the training time or until it no longer scales/improves.
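A rough sketch of that search loop in Python (run_training is a hypothetical stand-in for resizing the Dataproc cluster and launching the training job; the numbers are placeholders):
def run_training(num_nodes):
    # Hypothetical stand-in: resize the cluster, run the training job, and
    # return the wall-clock training time in minutes. The toy formula below
    # only lets the loop run; replace it with a real training run.
    return 9000.0 / num_nodes

target_minutes = 10.0        # acceptable training time (placeholder)
num_nodes = 500              # starting cluster size from the answer above
previous_time = float("inf")

while True:
    elapsed = run_training(num_nodes)
    print(f"{num_nodes} nodes -> {elapsed:.1f} minutes")
    if elapsed <= target_minutes:
        break                                 # fast enough, stop here
    if elapsed >= previous_time * 0.95:
        break                                 # adding nodes no longer helps
    previous_time = elapsed
    num_nodes = int(num_nodes * 1.25)         # e.g. 500 -> 625 -> 781 -> ...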

pyspark java.lang.OutOfMemoryError: GC overhead limit exceeded

I'm trying to process 10 GB of data using Spark and it is giving me this error:
java.lang.OutOfMemoryError: GC overhead limit exceeded
Laptop configuration: 4 CPUs, 8 logical cores, 8 GB RAM
Spark configuration when submitting the Spark job:
spark = SparkSession.builder.master('local[6]').config("spark.ui.port", "4041").appName('test').getOrCreate()
spark.conf.set("spark.executor.instances", 1)
spark.conf.set("spark.executor.cores", 5)
After searching the internet about this error, I have a few questions.
If answered, that would be a great help.
1) Spark is an in-memory computing engine, so for processing 10 GB of data the system should have 10+ GB of RAM. Does Spark load the 10 GB of data into 10+ GB of RAM and then do the job?
2) If point 1 is correct, how are big companies processing hundreds of TBs of data? Are they processing 100 TB of data by clustering multiple systems to form 100+ TB of RAM and then processing the 100 TB?
3) Is there no other way to process 50 GB of data with 8 GB RAM and 8 cores by setting proper Spark configurations? If there is, what is the way and what should the Spark configuration be?
4) What should the ideal Spark configuration be for processing 8 GB of data if the system has 8 GB RAM and 8 cores?
The Spark configuration to be defined in the Spark config:
spark = SparkSession.builder.master('local[?]').config("spark.ui.port", "4041").appName('test').getOrCreate()
spark.conf.set("spark.executor.instances", ?)
spark.conf.set("spark.executor.cores", ?)
spark.executors.cores = ?
spark.executors.memory = ?
spark.yarn.executor.memoryOverhead =?
spark.driver.memory =?
spark.driver.cores =?
spark.executor.instances =?
No.of core instances =?
spark.default.parallelism =?
I hope the following will help, if not clarify everything.
1) Spark is an in-memory computing engine; for processing 10 GB of data, the system should have 10+ GB of RAM. Does Spark load the 10 GB of data into 10+ GB of RAM and then do the job?
Spark, being an in-memory computation engine, takes its input from an underlying data lake or distributed storage system. The 10 GB file will be broken into smaller blocks (128 MB or 256 MB block size for a Hadoop-based data lake), and the Spark driver will get many executors/cores to read them from the cluster's worker nodes. If you try to load 10 GB of data on a laptop or a single node, it will certainly go out of memory. The data has to be loaded either on one machine or across many worker nodes before it is processed.
2) If point 1 is correct, how are big companies processing hundreds of TBs of data? Are they clustering multiple systems to form 100+ TB of RAM and then processing 100 TB of data?
Large data processing projects design the storage and access layer with a lot of design patterns. They simply don't dump GBs or TBs of data into a file system like HDFS as-is. They use partitions (for example, sales transaction data partitioned by month/week/day), and for structured data there are different file formats available (especially columnar ones) which help to load only the columns that are required for processing. So the right file format, partitioning, and compaction are the key attributes for large files.
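For example (a minimal PySpark sketch; the paths, table layout, and column names are made up for illustration), writing sales data as Parquet partitioned by year/month lets a downstream query read only the partitions and columns it actually needs:
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partitioning-sketch").getOrCreate()

sales = spark.read.json("hdfs:///raw/sales/")   # hypothetical raw input with year/month columns

# Columnar format + partitioning: readers can skip irrelevant partitions and columns.
sales.write.mode("overwrite").partitionBy("year", "month").parquet("hdfs:///curated/sales/")

# A later job only touches January's files and two columns, not the whole dataset.
jan = (spark.read.parquet("hdfs:///curated/sales/")
       .filter("year = 2020 AND month = 1")
       .select("customer_id", "amount"))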
3) Is there no other way to process 50 GB of data with 8 GB RAM and 8 cores by setting proper Spark configurations? If there is, what is the way and what should the Spark configuration be?
Very unlikely if there is no partitioning, but there are ways. It also depends on what kind of file it is. You can create a custom streaming file reader that reads a logical block at a time and processes it. However, an enterprise doesn't read 50 GB of one file as one single unit. Even if you load a 10 GB Excel file on your machine via an Office tool, it will go out of memory.
4) What should the ideal Spark configuration be for processing 8 GB of data if the system has 8 GB RAM and 8 cores?
Leave 1 core and 1-2 GB for the OS and use the rest for your processing. Then, depending on what kind of transformation is being performed, you have to decide the memory for the driver and worker processes. Your driver should have around 2 GB of RAM. But a laptop is primarily a playground to explore code syntax and is not suitable for a large data set. It is better to build your logic with dataframe.sample() and then push the code to a bigger machine to generate the output.
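A minimal sketch of that advice for the 8-core / 8 GB laptop in the question (the exact values are judgment calls, not hard rules, and the input path is hypothetical):
from pyspark.sql import SparkSession

# Assumes the script is launched with something like:
#   spark-submit --driver-memory 6g my_job.py
# (driver memory has to be fixed before the JVM starts, so pass it at submit time;
#  in local mode the driver does all the work, so spark.executor.* settings are irrelevant)
spark = (
    SparkSession.builder
    .master("local[7]")                            # leave ~1 core for the OS
    .config("spark.sql.shuffle.partitions", "56")  # small-ish tasks, ~8x the core count
    .appName("laptop-sketch")
    .getOrCreate()
)

df = spark.read.csv("data.csv", header=True)       # hypothetical input
dev_df = df.sample(fraction=0.01, seed=42)         # build and test the logic on a 1% sample
dev_df.groupBy("some_column").count().show()       # run the full job later on a bigger machine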

Improving SQL Query using Spark Multi Clusters

I was experimenting to see whether Spark with multiple workers can improve a slow SQL query. I created two workers under the master, and they are running on a local Spark Standalone cluster. Yes, I did halve the memory and the number of cores to create the workers on the local machine. I specified partitions for sqlContext using partitionColumn, lowerBound, upperBound, and numPartitions, so that tasks (or partitions) can be distributed over the workers. I specified them as below (partitionColumn is unique):
df = sqlContext.read.format("jdbc").options(
    url="jdbc:sqlserver://localhost;databasename=AdventureWorks2014;integratedSecurity=true;",
    driver="com.microsoft.sqlserver.jdbc.SQLServerDriver",
    dbtable=query,
    partitionColumn="RowId",
    lowerBound=1,
    upperBound=10000000,
    numPartitions=4).load()
I ran my script against the master after specifying the options, but I couldn't get any performance improvement compared to running Spark without the cluster. I know I should not have halved the memory, for the integrity of the experiment. But I would like to know whether that might be the cause, or the reason if it's not. Any thoughts are welcome. Many thanks.
There are multiple factors at play here, though the weight of each can differ on a case-by-case basis.
As nicely pointed out by mtoto, increasing the number of workers on a single machine is unlikely to bring any performance gains.
Multiple workers on a single machine have access to the same fixed pool of resources. Since a worker doesn't participate in the processing itself, you just use a higher fraction of this pool for management.
There are legitimate cases where we prefer a higher number of executor JVMs, but that is not the same as increasing the number of workers (the former is an application resource, the latter a cluster resource).
It is not clear whether you use the same number of cores for the baseline and the multi-worker configuration; nevertheless, cores are not the only resource you have to consider when working with Spark. Typical Spark jobs are IO bound (mostly network and disk). Increasing the number of threads on a single node, without making sure that there is sufficient disk and network capacity, will just make them wait for the data.
Increasing cores alone is useful only for jobs which are CPU bound (and these will typically scale better on a single machine).
Fiddling with Spark resources won't help if an external resource cannot keep up with the requests. A high number of concurrent batch reads from a single non-replicated database will just throttle the server.
In this particular case you make it even worse by running the database server on the same node as Spark. This has some advantages (all traffic can go through loopback), but unless the database and Spark use different sets of disks, they'll be competing over disk IO (and other resources as well).
Note:
It is not clear what the query is, but if it is slow when executed directly against the database, fetching it from Spark will make it even slower. You should probably take a closer look at the query and/or the database structure and configuration first.
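For example (a hedged sketch, not part of the original answer; the Sales table and its columns are hypothetical), one way to keep the heavy lifting in the database is to push a pre-filtered subquery through the JDBC dbtable option, so Spark only partitions and fetches the reduced result:
# sqlContext as in the question above; the subquery must be aliased.
pushdown_query = "(SELECT RowId, CustomerId, Amount FROM Sales WHERE Amount > 0) AS sales_filtered"

df = sqlContext.read.format("jdbc").options(
    url="jdbc:sqlserver://localhost;databasename=AdventureWorks2014;integratedSecurity=true;",
    driver="com.microsoft.sqlserver.jdbc.SQLServerDriver",
    dbtable=pushdown_query,
    partitionColumn="RowId",
    lowerBound=1,
    upperBound=10000000,
    numPartitions=4).load()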

What performance parameters to set for Spark Scala code to run on YARN using spark-submit?

My use case is to merge two tables, where one table contains 30 million records with 200 columns and the other contains 1 million records with 200 columns. I am using a broadcast join for the small table. I am loading both tables as DataFrames from Hive managed tables on HDFS.
I need the values to set for driver memory, executor memory, and the other related parameters for this use case.
I have this hardware configuration for my YARN cluster:
Spark version: 2.0.0
HDP version: 2.5.3.0-37
1) YARN clients: 20
2) Max virtual cores allocated per container (yarn.scheduler.maximum-allocation-vcores): 19
3) Max memory allocated per YARN container: 216 GB
4) Cluster memory available: 3.1 TB
Any other info you need I can provide for this cluster.
I have to decrease the time it takes to complete this process.
I have been using some configuration, but I think it's wrong; it took 4.5 minutes to complete, and I think Spark is capable of decreasing this time.
There are mainly two things to look at when you want to speed up your Spark application.
Caching/persistence:
This is not a direct way to speed up the processing. It is useful when you have multiple actions (reduce, join, etc.) and you want to avoid re-computation of the RDDs (for example, in the case of failures) and hence decrease the application run time.
Increasing the parallelism:
This is the actual solution to speed up your Spark application. It can be achieved by increasing the number of partitions. Depending on the use case, you might have to increase the partitions:
Whenever you create your dataframes/rdds: This is the better way to increase the partitions as you don't have to trigger a costly shuffle operation to increase the partitions.
By calling repartition: This will trigger a shuffle operation.
Note: Once you increase the number of partitions, also increase the number of executors (possibly a very large number of small containers with a few vcores and a few GBs of memory each).
Increasing the parallelism inside each executor
By adding more cores to each executor, you can increase the parallelism at the partition level. This will also speed up the processing.
To get a better understanding of these configurations, please refer to this post.
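A hedged PySpark sketch that puts the broadcast join from the question together with the partitioning advice above (the table names and join key are placeholders; adjust spark.sql.autoBroadcastJoinThreshold if the 1M-row table is too large for the default limit):
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = (
    SparkSession.builder
    .appName("broadcast-join-sketch")
    .enableHiveSupport()
    .getOrCreate()
)

big = spark.table("db.big_table")      # ~30M rows x 200 cols (placeholder name)
small = spark.table("db.small_table")  # ~1M rows x 200 cols (placeholder name)

# Repartition the big side to raise parallelism (this triggers a shuffle),
# and broadcast the small side so the 30M-row table is never shuffled for the join.
joined = (big.repartition(400, "join_key")
             .join(broadcast(small), on="join_key", how="inner"))

joined.write.mode("overwrite").saveAsTable("db.merged_table")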
