Improving SQL Query using Spark Multi Clusters - apache-spark

I was experimenting to see whether Spark with multiple workers can speed up a slow SQL query. I created two workers for the master, and they are running on a local Spark Standalone cluster. Yes, I did halve the memory and the number of cores to create the workers on the local machine. I specified partitions for sqlContext, using partitionColumn, lowerBound, upperBound and numPartitions, so that tasks (or partitions) can be distributed over the workers. I specified them as below (partitionColumn is unique):
df = sqlContext.read.format("jdbc").options(
    url="jdbc:sqlserver://localhost;databasename=AdventureWorks2014;integratedSecurity=true;",
    driver="com.microsoft.sqlserver.jdbc.SQLServerDriver",
    dbtable=query,
    partitionColumn="RowId",
    lowerBound=1,
    upperBound=10000000,
    numPartitions=4).load()
I ran my script against the master after specifying these options, but I couldn't get any performance improvement compared with running Spark without the cluster. I know I should not have halved the memory, for the integrity of the experiment. But I would like to know whether that might be the cause, or, if it is not, what other reason there could be. Any thoughts are welcome. Many thanks.

There are multiple factors which play a role here, though the weights of each of these can differ on a case by case basis.
As nicely pointed out by mtoto, increasing the number of workers on a single machine is unlikely to bring any performance gain.
Multiple workers on a single machine share the same fixed pool of resources. Since a worker doesn't participate in the processing itself, you just spend a larger fraction of that pool on management.
There are legitimate cases where a higher number of executor JVMs is preferable, but that is not the same as increasing the number of workers (the former is an application resource, the latter a cluster resource).
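For illustration, here is a minimal sketch (Spark Standalone assumed, values illustrative) of how the number of executor JVMs is controlled as an application resource: on a standalone cluster it is roughly spark.cores.max divided by spark.executor.cores.
from pyspark.sql import SparkSession

# Minimal sketch; master URL and values are illustrative.
# Number of executor JVMs ~= spark.cores.max / spark.executor.cores (here: 4).
spark = (SparkSession.builder
         .master("spark://localhost:7077")
         .config("spark.cores.max", "8")        # total cores this application may use
         .config("spark.executor.cores", "2")   # cores per executor JVM
         .config("spark.executor.memory", "4g")
         .getOrCreate())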
It is not clear whether you used the same number of cores for the baseline and the multi-worker configuration; in any case, cores are not the only resource you have to consider when working with Spark. Typical Spark jobs are IO bound (mostly network and disk). Increasing the number of threads on a single node, without making sure there is sufficient disk and network bandwidth, will just make them wait for the data.
Increasing cores alone is useful only for jobs which are CPU bound (and these will typically scale better on a single machine).
Fiddling with Spark resources won't help you if an external resource cannot keep up with the requests. A high number of concurrent batch reads from a single non-replicated database will just throttle the server.
In this particular case you make it even worse by running the database server on the same node as Spark. This has some advantages (all traffic can go through loopback), but unless the database and Spark use different sets of disks, they'll be competing over disk IO (and other resources as well).
Note:
It is not clear what the query is, but if it is slow when executed directly against the database, fetching it through Spark will make it even slower. You should probably take a closer look at the query and/or the database structure and configuration first.

Related

How do you determine shuffle partitions for Spark application?

I am new to Spark, so I am following this amazing tutorial from sparkbyexamples.com, and while reading I found this section:
Shuffle partition size & Performance
Based on your dataset size, number of cores, and memory, PySpark shuffling can benefit or harm your jobs. When you are dealing with a small amount of data, you should typically reduce the number of shuffle partitions; otherwise you will end up with many partitioned files with few records in each partition, which results in running many tasks with little data to process.
On the other hand, when you have too much data and too few partitions, you end up with fewer, longer-running tasks, and sometimes you may also get an out-of-memory error.
Getting the right size for the shuffle partitions is always tricky and takes many runs with different values to achieve the optimized number. This is one of the key properties to look at when you have performance issues in PySpark jobs.
Can someone help me understand how you determine how many shuffle partitions you will need for your job?
As you quoted, it’s tricky, but this is my strategy:
If you’re using “static allocation”, meaning you tell Spark how many executors you want to allocate for the job, then it’s easy: the number of partitions can be executors * cores per executor * factor. factor = 1 means each executor will handle 1 job, factor = 2 means each executor will handle 2 jobs, and so on (see the sketch after this answer).
If you’re using “dynamic allocation”, then it’s trickier. You can read the long description here: https://databricks.com/blog/2021/03/17/advertising-fraud-detection-at-scale-at-t-mobile.html. The general idea is that you need to answer many questions, like: what is the size of your data (how big in terms of gigabytes), what its structure looks like (how many files, how many folders, how many rows, etc.), how you will read it (from HDFS, from Hive, or over JDBC), how many resources you have (cores, executors, memory), … Then you run and benchmark over and over to find the sweet spot that is “just right” for your circumstances.
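For the static case, a minimal sketch of that rule of thumb (the numbers are illustrative, and an existing SparkSession named spark is assumed):
# 10 executors x 4 cores each x factor of 2 -> 80 shuffle partitions
num_executors = 10
cores_per_executor = 4
factor = 2
shuffle_partitions = num_executors * cores_per_executor * factor  # 80

# spark.sql.shuffle.partitions controls how many partitions are used after shuffles
spark.conf.set("spark.sql.shuffle.partitions", str(shuffle_partitions))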
Update #1:
So what is the general industry practice: will a company simply use the first tactic and allocate more hardware, or will they use dynamic allocation?
Usually, if you have an on-premise Hadoop environment, you can choose between static (the default mode) and dynamic allocation (the advanced mode). I often start with dynamic because I have no idea how big the data and its transformations are, so sticking with dynamic gives me the flexibility to expand my work without thinking too much about Spark configuration. But you can also start with static if you want to; nothing prevents you from doing so.
Eventually, when it comes to productionizing the process, you can again choose between static (very stable, but consumes more resources) and dynamic (less stable, i.e. it fails sometimes due to resource allocation, but saves resources).
Finally, most Hadoop cloud solutions (like Databricks) come with dynamic allocation by default, which is less costly.
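For reference, a hedged sketch of the main knobs involved when enabling dynamic allocation (the property names are standard Spark configs; the values are illustrative):
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .config("spark.dynamicAllocation.enabled", "true")
         .config("spark.dynamicAllocation.minExecutors", "2")
         .config("spark.dynamicAllocation.maxExecutors", "50")
         # an external shuffle service (or shuffle tracking on Spark 3+) is needed
         # so executors can be released without losing shuffle data
         .config("spark.shuffle.service.enabled", "true")
         .getOrCreate())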

Why caching small Spark RDDs takes big memory allocation in Yarn?

The cached RDDs (8 in total) are not big, only around 30G; however, the Hadoop UI shows the Spark application taking a lot of memory (with no active jobs running), i.e. 1.4T. Why so much?
Why does it show around 100 executors (here, i.e. vCores) even when there are no active jobs running?
Also, if the cached RDDs are stored across 100 executors, are those executors reserved so that no other Spark apps can use them for running tasks any more? To rephrase the question: will holding a little memory (.cache) in the executors prevent other Spark apps from leveraging their idle computing resources?
Is there any potential Spark config / zeppelin config that can cause this phenomenon?
UPDATE 1
After checking the Spark conf (in Zeppelin), it seems there is a default setting (configured by the administrator) of spark.executor.memory=10G, which is probably the reason.
However, here's a new question: is it possible to keep only the memory needed for the cached RDDs in each executor and release the rest, instead of always holding the initially set spark.executor.memory=10G?
Perhaps you can try to repartition(n) your RDD to a smaller number of partitions, n < 100, before caching. A ~30GB RDD would probably fit into the storage memory of ten 10GB executors. A good overview of Spark memory management can be found here. This way, only those executors that hold cached blocks will be "pinned" to your application, while the rest can be reclaimed by YARN via Spark dynamic allocation after spark.dynamicAllocation.executorIdleTimeout (default 60s).
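A minimal sketch of that suggestion (the RDD name and the target partition count are illustrative):
# Shrink to 10 partitions so the cached blocks land on fewer executors,
# then trigger an action to materialize the cache.
small = big_rdd.repartition(10).cache()
small.count()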
Q: Is it possible to keep only the memory needed for the cached RDDs in each executor and release the rest, instead of always holding the initially set spark.executor.memory=10G?
When Spark uses YARN as its execution engine, YARN allocates containers of a size specified by the application -- at least spark.executor.memory + spark.executor.memoryOverhead, but possibly even bigger in the case of PySpark -- for all the executors. How much memory Spark actually uses inside a container becomes irrelevant, since the resources allocated to a container are considered off-limits to other YARN applications.
Spark assumes that your data is equally distributed across all the executors and tasks. That's the reason why you set memory per task. So to make Spark consume less memory, your data has to be evenly distributed:
If you are reading from Parquet files or CSVs, make sure that they have similar sizes. Running repartition() causes shuffling, which, if the data is heavily skewed, may cause other problems if the executors don't have enough resources.
Caching won't help to release memory on the executors, because it doesn't redistribute the data.
Can you please check, under "Event Timeline" on the Stages tab, how big the green bars are? Normally that is tied to the data distribution, so it is a way to see how much data is loaded (proportionally) onto every task and how much work each one is doing. As all tasks have the same memory assigned, you can see graphically whether resources are being wasted (when there are mostly tiny bars and a few big bars). A sample of wasted resources can be seen in the image below.
There are different ways to create evenly distributed files for your process. I'll mention some possibilities, but there are certainly more:
Using Hive and the DISTRIBUTE BY clause: you need to use a field that is well balanced in order to create as many files (and of the proper size) as expected.
If the process creating those files is a Spark process reading from a DB, try to create as many connections as the files you need, and use a proper field to populate the Spark partitions. That is achieved, as explained here and here, with the partitionColumn, lowerBound, upperBound and numPartitions properties.
Repartition may work, but also check whether coalesce makes sense in your process, or in the previous one generating the files you are reading from (see the sketch below).
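A hedged sketch of the DISTRIBUTE BY and repartition/coalesce options (the table, column and path names are made up):
# Hive-style: distribute rows by a well-balanced key to control how they map to files
spark.sql("""
    INSERT OVERWRITE TABLE target_table
    SELECT * FROM source_table
    DISTRIBUTE BY balanced_key
""")

# DataFrame-style: repartition (full shuffle) vs coalesce (merges partitions, no shuffle)
df.repartition(40, "balanced_key").write.mode("overwrite").parquet("/tmp/out_repartitioned")
df.coalesce(40).write.mode("overwrite").parquet("/tmp/out_coalesced")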

Deadlock when many spark jobs are concurrently scheduled

Using Spark 2.4.4 running in YARN cluster mode with the Spark FIFO scheduler.
I'm submitting multiple spark dataframe operations (i.e. writing data to S3) using a thread pool executor with a variable number of threads. This works fine if I have ~10 threads, but if I use hundreds of threads, there appears to be a deadlock, with no jobs being scheduled according to the Spark UI.
What factors control how many jobs can be scheduled concurrently? Driver resources (e.g. memory/cores)? Some other spark configuration settings?
EDIT:
Here's a brief synopsis of my code
ExecutorService pool = Executors.newFixedThreadPool(nThreads);
ExecutorCompletionService<Void> ecs = new ExecutorCompletionService<>(pool);

Dataset<Row> aHugeDf = spark.read().json(hundredsOfPaths);

List<Future<Void>> futures = listOfSeveralHundredThings
    .stream()
    .map(aThing -> ecs.submit(() -> {
        aHugeDf
            .filter(col("some_column").equalTo(aThing))
            .write()
            .format("org.apache.hudi")
            .options(writeOptions)
            .save(outputPathFor(aThing));
        return null;
    }))
    .collect(Collectors.toList());

IntStream.range(0, futures.size()).forEach(i -> {
    try {
        ecs.poll(30, TimeUnit.MINUTES);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
});
pool.shutdownNow();
At some point, as nThreads increases, spark no longer seems to be scheduling any jobs as evidenced by:
ecs.poll(...) timing out eventually
The Spark UI jobs tab showing no active jobs
The Spark UI executors tab showing no active tasks for any executor
The Spark UI SQL tab showing nThreads running queries with no running job IDs
My execution environment is
AWS EMR 5.28.1
Spark 2.4.4
Master node = m5.4xlarge
Core nodes = 3x rd5.24xlarge
spark.driver.cores=24
spark.driver.memory=32g
spark.executor.memory=21g
spark.scheduler.mode=FIFO
If possible, write the output of the jobs to the AWS Elastic MapReduce (EMR) HDFS (to leverage the almost instantaneous renames and better file IO of local HDFS) and add a distcp step to move the files to S3, to save yourself all the trouble of handling the innards of an object store trying to be a filesystem. Writing to local HDFS will also allow you to enable speculation to control runaway tasks without falling into the deadlock traps associated with DirectOutputCommitter.
If you must use S3 as the output directory, ensure that the following Spark configurations are set:
spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version 2
spark.speculation false
Note: DirectParquetOutputCommitter was removed in Spark 2.0 due to the chance of data loss. Unfortunately, until we have improved consistency from S3A, we have to work with the workarounds. Things are improving with Hadoop 2.8.
Avoid key names in lexicographic order. One could use hashing/random prefixes or reverse date-time to get around this. The trick is to name your keys hierarchically, putting the most common things you filter by on the left side of your key. And never have underscores in bucket names, due to DNS issues.
Enabling fs.s3a.fast.upload uploads parts of a single file to Amazon S3 in parallel.
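A hedged sketch of applying these settings on the session (the first two are the configurations listed above; fs.s3a.fast.upload is routed through the spark.hadoop.* namespace):
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .config("spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version", "2")
         .config("spark.speculation", "false")
         .config("spark.hadoop.fs.s3a.fast.upload", "true")
         .getOrCreate())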
Refer to these articles for more detail:
Setting spark.speculation in Spark 2.1.0 while writing to s3
https://medium.com/@subhojit20_27731/apache-spark-and-amazon-s3-gotchas-and-best-practices-a767242f3d98
IMO you're likely approaching this problem wrong. Unless you can guarantee that the number of tasks per job is very low, you're likely not going to get much performance improvement by parallelizing 100s of jobs at once. Your cluster can only support 300 tasks at once; assuming you're using the default parallelism of 200, that's only 1.5 jobs. I'd suggest rewriting your code to cap the maximum number of concurrent queries at 10 (see the sketch after this answer). I highly suspect that you have 300 queries with only a single task out of several hundred actually running. Most OLTP data processing systems intentionally have a fairly low level of concurrent queries compared to more traditional RDS systems, for this reason.
also
Apache Hudi has a default parallelism of several hundred FYI.
Why don't you just partition based on your filter column?
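A hedged, PySpark-flavoured sketch of the capping idea referenced above (the question's code is Java, but the pattern is the same; df, write_options, output_path_for and list_of_things are placeholders):
from concurrent.futures import ThreadPoolExecutor
from pyspark.sql.functions import col

def write_one(thing):
    (df.filter(col("some_column") == thing)
       .write.format("org.apache.hudi")
       .options(**write_options)        # placeholder dict of Hudi options
       .save(output_path_for(thing)))   # placeholder path helper

# Cap concurrent Spark jobs at 10 instead of hundreds
with ThreadPoolExecutor(max_workers=10) as pool:
    futures = [pool.submit(write_one, t) for t in list_of_things]
    for f in futures:
        f.result()   # re-raises any exception from the worker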
I would start by eliminating possible causes. Are you sure it's Spark that is not able to submit many jobs? Is it Spark or is it YARN? If it is the latter, you might need to play with the YARN scheduler settings. Could it be something to do with the ExecutorService implementation that may have some limitation for the scale you are trying to achieve? Could it be Hudi? With that snippet, it's hard to determine.
How does the problem manifest itself, other than no jobs starting up? Do you see any metrics/monitoring on the cluster, or any logs that point to the problem as you describe it?
If it has to do with scaling, is it possible for you to autoscale with EMR Flex and see if that works for you?
How many executor cores?
Looking into these might help you narrow down or perhaps confirm the issue - unless you have already looked into these things.
(I meant to add this as comment rather than answer but text too long for comment)
Using threads or thread pools is always problematic and error prone.
I had a similar problem processing Spark jobs in an Internet of Things application. I resolved it using fair scheduling.
Suggestions :
Use fair scheduling (fairscheduler.xml) instead of the YARN capacity scheduler.
How? By using dedicated resource pools, one per module (see the configuration sketch after these suggestions); when used, the pools will be visible in the Spark UI.
Check that the unit of parallelism (the number of partitions) is correct for the data frames you use, by looking at the Spark admin UI. This is Spark's native way of using parallelism.
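A hedged sketch of wiring up fair scheduling with a dedicated pool per module (the allocation file path and pool name are illustrative; the pools themselves are defined in fairscheduler.xml):
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .config("spark.scheduler.mode", "FAIR")
         .config("spark.scheduler.allocation.file", "/path/to/fairscheduler.xml")
         .getOrCreate())

# Jobs submitted from this thread go to the dedicated pool defined in the XML file
spark.sparkContext.setLocalProperty("spark.scheduler.pool", "module_a_pool")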

Distributing partitions across cluster

In Apache Spark one is allowed to load datasets from many different sources. According to my understanding, the computing nodes of a Spark cluster can be different from those used by Hadoop to store the data (am I right?). What's more, we can even load a local file into a Spark job. Here is the main question: even if we use the same computers for HDFS and Spark, will Spark, during the creation of an RDD, always shuffle all the data? Or will Spark just try to load the data in a way that takes advantage of the already existing data locality?
You can use HDFS as the common underlying storage for both MapReduce (Hadoop) and Spark engines, and use a cluster manager like YARN to perform resource management. Spark will try to take advantage of data locality, and execute tasks as close as possible to the data.
This is how it works: If data is available on a node to process, but the CPU is not free, Spark will wait for a certain amount of time (determined by the configuration parameter: spark.locality.wait seconds, default is 3 seconds) for the CPU to become available.
If CPU is still not free after the configured time has passed, Spark will switch the task to a lower locality level. It will then again wait for spark.locality.wait seconds and if a timeout occurs again, it will switch to a yet lower locality level.
The locality levels are defined as below, in order from closest to data, to farthest from data (https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.scheduler.TaskLocality$):
PROCESS_LOCAL (data is in the same JVM as the running code)
NODE_LOCAL (data is on the same node)
NO_PREF (data is accessed equally quickly from anywhere and has no locality preference)
RACK_LOCAL (data is on the same rack of servers)
ANY (data is elsewhere on the network and not in the same rack)
The waiting time for each locality level can also be configured individually. For longer jobs, the wait time can be increased beyond the default of 3 seconds, since the CPU might be tied up longer.
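A hedged sketch of those settings (the values are illustrative):
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .config("spark.locality.wait", "10s")          # global fallback wait
         .config("spark.locality.wait.process", "10s")  # PROCESS_LOCAL -> NODE_LOCAL
         .config("spark.locality.wait.node", "10s")     # NODE_LOCAL -> RACK_LOCAL
         .config("spark.locality.wait.rack", "10s")     # RACK_LOCAL -> ANY
         .getOrCreate())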

Apache Spark - restricting per stage concurrent thread count

I want to control the number of concurrent partitions being processed at every stage in my Spark RDD. .repartition(...) is not the solution, as that just modifies the overall number of partitions during a stage, not how many are being processed.
Normally you restrict the number of concurrent partitions at the start by using the --executor-cores and --num-executors parameters. And that is not exact, as processing stages can be staggered etc.
The main thing I want to accomplish is a dataload process from a database with certain resource restrictions (concurrent connections) - but I do not want those database resource restrictions to dictate the concurrency of the rest of my spark process or RDD. I also do not want to force very huge partitions at the beginning of the process that will have to be further split and redistributed.
It seems like a reasonable thing to expect, but at first glance not something that can be accomplished within the Spark API.
Example (some pseudo code)
rdd = pseudoReadFromJDBC(partitions = 500,parallelism=10)
.repartition(100)
.parallelism(50)
.operatorOnRDD();
So in this case, in the first stage, I would split the data retrieved from the JDBC query into 500 smaller datasets. However, I would restrict Spark to running only 10 of them simultaneously, so that at most 10 JDBC connections are open at once. The other partitions would just queue up.
Then in the second stage, I might repartition, but more importantly, I want to choose a higher degree of actual parallelism, because I am no longer restricted by the database allowing only a limited number of simultaneous connections.
That is what I mean with changing it on a per stage basis.
There is one parameter, spark.default.parallelism. You can try changing its value (a sketch follows).
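A hedged sketch of that suggestion, together with a common workaround that reuses the JDBC options discussed earlier in this thread (the URL, table and column names are placeholders): the JDBC numPartitions option caps the number of concurrent connections for the read stage, and repartition() then raises the partition count for later stages.
from pyspark.sql import SparkSession

# spark.default.parallelism only takes effect if set before the SparkContext starts
spark = (SparkSession.builder
         .config("spark.default.parallelism", "50")
         .getOrCreate())

# Workaround for the connection limit: read with only 10 JDBC partitions
# (at most 10 concurrent connections), then repartition for downstream stages.
df = (spark.read.format("jdbc")
      .option("url", "jdbc:sqlserver://localhost;databasename=SomeDb")  # placeholder
      .option("dbtable", "some_table")                                  # placeholder
      .option("partitionColumn", "RowId")
      .option("lowerBound", 1)
      .option("upperBound", 10000000)
      .option("numPartitions", 10)       # caps concurrent DB connections
      .load()
      .repartition(100))                 # higher parallelism for later stages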
