Single worker node stalls job - apache-spark

I have a Spark submit job (PySpark) that works properly 90% of the time, but the other 10% it stalls on a specific host. Tasks that take seconds to complete on other hosts grind to a halt on a host I can identify via the Spark UI. In such cases I end up killing the process and re-running. I am wondering what my options are to mitigate this issue.
My infrastructure is a standalone Spark 2.1 cluster on AWS EC2 instances. I have considered speculative execution, but my process writes to S3 and I've been advised that enabling speculative execution for processes that persist to S3 is a bad idea. Any suggestions are welcome.

Stalling at 90% is not unusual if your data is skewed, i.e. a few partitions hold a disproportionately large amount of data, which can lead to a lot of GC and OOM errors.
In that case, repartitioning the data, e.g. via the RangePartitioner, would be a solution.
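A minimal PySpark sketch of the idea, with a hypothetical input path and column names (user_id, value); key salting is shown here as a Spark 2.1-compatible alternative to the RangePartitioner, and range repartitioning is noted for newer versions:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("skew-mitigation").getOrCreate()
df = spark.read.parquet("s3a://my-bucket/input/")  # hypothetical path

# Option 1: add a random salt to the hot key before the wide operation so
# its rows spread across many partitions, then aggregate a second time.
salted = df.withColumn("salt", (F.rand() * 32).cast("int"))
partial = salted.groupBy("user_id", "salt").agg(F.sum("value").alias("partial_sum"))
result = partial.groupBy("user_id").agg(F.sum("partial_sum").alias("total"))

# Option 2 (Spark 2.4+ in PySpark): range-partition on the key so each
# partition receives a similar number of rows, akin to a RangePartitioner.
# evened = df.repartitionByRange(200, "user_id")
```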

Related

best way to run 300+ concurrent spark jobs in Dataproc?

I have a Dataproc cluster with 2 worker nodes (n1s2). There is an external server which submits around 360 spark jobs within an hour (with a couple of minutes spacing between each submission). The first job completes successfully but the subsequent ones get stuck and do not proceed at all.
Each job crunches some timeseries numbers and writes to Cassandra. And the time taken is usually 3-6 minutes when the cluster is completely free.
I feel this can be solved by just scaling up the cluster, but that would become very costly for me.
What would be the other options to best solve this use case?
Running 300+ concurrent jobs on a cluster with 2 worker nodes doesn't sound feasible. You want to first estimate how much resource (CPU, memory, disk) each job needs and then plan the cluster size accordingly. YARN metrics like available CPU, available memory, and especially pending memory are helpful for identifying whether the cluster is simply short on resources.
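A hedged sketch for pulling those metrics from the YARN ResourceManager REST API (the ResourceManager host name is a placeholder, and field availability varies by Hadoop version):

```python
import requests

# Hypothetical ResourceManager address; on Dataproc this is the master node.
RM_URL = "http://cluster-m:8088/ws/v1/cluster/metrics"

metrics = requests.get(RM_URL).json()["clusterMetrics"]

# .get() is used because some fields (e.g. pendingMB) are not present in
# every Hadoop version.
for key in ("availableMB", "allocatedMB", "totalMB",
            "availableVirtualCores", "appsPending",
            "containersPending", "pendingMB"):
    print(key, metrics.get(key))
```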

How is abnormal Driver termination handled for a Spark App in Yarn cluster mode

We're using AWS EMR for our Spark jobs. All our jobs are submitted in YARN cluster mode, so the driver runs on one of the cluster nodes. We use an on-demand node for the master and spot instances for the core nodes. Now, although we almost always choose instances with a < 5% interruption rate, it sometimes happens that a significant fraction of our cluster nodes gets terminated prematurely (probably because of higher demand).
So, I was wondering, in the above situation, what happens if a node containing the driver process goes down? Is there any chance of recovery for the spark job in that case? Or is the job gone forever?
The Spark driver is a single point of failure because it holds all cluster state for the running App.
In practice, non-ephemeral storage can be used for checkpointing batch Apps after expensive transformations. That said, trying to restart after such a situation can be done, but when I looked into it, it is quite difficult to say the least. I asked such a question under my name some time ago; you can find it. I am quite technical, but my feeling was: gosh, what a lot of hard work.
So recovery means rolling your own solution, or accepting a re-run. Since I last evaluated EMR I see that the driver can run on the Master and that the Master can be failed over, but that is not the same thing as far as I can see, nor what you want. A sketch of the checkpointing idea follows.
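A minimal sketch of checkpointing to durable storage, assuming a hypothetical S3 bucket and input path and a Spark version where DataFrame.checkpoint is available:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("checkpoint-example").getOrCreate()

# Hypothetical durable location; HDFS works too, S3 costs a full extra write.
spark.sparkContext.setCheckpointDir("s3a://my-bucket/checkpoints/")

# Hypothetical expensive transformation chain.
expensive = spark.read.parquet("s3a://my-bucket/raw/").groupBy("key").count()

# Materialize the result and truncate the lineage, so a re-submitted job can
# restart from the checkpoint instead of recomputing everything
# (eager=True triggers the computation immediately).
expensive = expensive.checkpoint(eager=True)
```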
EMR uses YARN node labels for CORE nodes: your Spark driver / Application Master only gets created on CORE nodes, and HDFS also resides on CORE nodes only.
So, to handle your situation in the best way, you may consider using both CORE and TASK groups.
What you can do to tackle this (a sketch follows the documentation link below):
MASTER: On-demand.
CORE: On-demand. Minimum number of instances can be 1.
TASK: Spot with autoscaling and a minimal EBS volume. Minimum number of instances can be 0 in this case.
This will reduce your cost and also ensure that the node containing the driver process never goes down.
https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-master-core-task-nodes.html
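A hedged boto3 sketch of that instance-group layout (the cluster name, release label, region, instance types, counts, and IAM role names are illustrative assumptions; the TASK-group autoscaling policy is omitted):

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")  # region is an assumption

response = emr.run_job_flow(
    Name="spark-on-demand-core-spot-task",   # illustrative name
    ReleaseLabel="emr-5.30.0",               # illustrative release
    Applications=[{"Name": "Spark"}],
    ServiceRole="EMR_DefaultRole",           # default EMR roles assumed
    JobFlowRole="EMR_EC2_DefaultRole",
    Instances={
        "KeepJobFlowAliveWhenNoSteps": True,
        "InstanceGroups": [
            # Driver / Application Master and HDFS stay on reliable nodes.
            {"Name": "Master", "InstanceRole": "MASTER",
             "Market": "ON_DEMAND", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "Core", "InstanceRole": "CORE",
             "Market": "ON_DEMAND", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            # Cheap, interruptible capacity that only runs executors.
            {"Name": "Task", "InstanceRole": "TASK",
             "Market": "SPOT", "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
    },
)
print(response["JobFlowId"])
```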

Avoid CPU pegging on Spark Standalone

I have a daily pipeline running on Spark Standalone 2.1. It's deployed on AWS EC2 and uses S3 for its persistence layer. For the most part, the pipeline runs without a hitch, but occasionally the job hangs on a single worker node during a reduceByKey operation. When I log into the worker, I notice that the CPU (as seen via top) is pegged at 100%. My remedy so far is to reboot the worker node so that Spark re-assigns the task, and the job proceeds fine from there.
I would like to be able to mitigate this issue. I gather that I could prevent CPU pegging by switching to YARN as my cluster manager, but I wonder whether I could configure Spark Standalone to prevent it, perhaps by limiting the number of cores that get assigned to the Spark job? Any suggestions would be greatly appreciated.
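For reference, a hedged sketch of the core-capping configuration the question asks about, using Spark's standard standalone-mode settings (the master URL and the numbers are illustrative assumptions):

```python
from pyspark.sql import SparkSession

# Cap how many cores this standalone application may claim, so a runaway
# task cannot peg every core on a worker.
spark = (
    SparkSession.builder
    .appName("capped-cores")
    .master("spark://master-host:7077")   # hypothetical master URL
    .config("spark.cores.max", "16")      # total cores across the cluster
    .config("spark.executor.cores", "2")  # cores per executor
    .config("spark.task.cpus", "1")       # cores reserved per task
    .getOrCreate()
)
```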

Do multiple spark applications running on yarn have any impact on each other?

Do multiple spark jobs running on yarn have any impact on each other?
e.g. if the traffic on one streaming job increases too much, does it have any effect on the second job? Will it slow it down, or are there other consequences?
I have enough resources for both of the applications to run concurrently.
Yes, they do. Depending on how your scheduler is set up (static vs. dynamic allocation), they either share just the network bandwidth (important for shuffles) and disk throughput (important for shuffles and for reading or writing data locally or on HDFS), or they also share memory and CPUs if dynamic allocation is enabled. Still, running your two jobs in parallel as opposed to sequentially will be beneficial on average, because the network and disk resources are not being used constantly. How much this matters mostly depends on the amount of shuffling your jobs need.
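If dynamic allocation is what you are running, a minimal sketch of the relevant settings (the executor bounds are illustrative assumptions; the external shuffle service must be enabled on the NodeManagers):

```python
from pyspark.sql import SparkSession

# With dynamic allocation on YARN, each job grows and shrinks its executor
# count, so concurrent jobs trade CPU and memory with each other.
spark = (
    SparkSession.builder
    .appName("dynamic-allocation-job")
    .config("spark.dynamicAllocation.enabled", "true")
    .config("spark.shuffle.service.enabled", "true")
    .config("spark.dynamicAllocation.minExecutors", "1")
    .config("spark.dynamicAllocation.maxExecutors", "10")
    .getOrCreate()
)
```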

Spark Standalone cluster master web UI inaccessible after an application finishes

I have a Spark application that finishes without error, but once it's done, has saved all of its outputs, and the process terminates, the Spark standalone cluster master process becomes a CPU hog, using 16 CPUs full-time for hours, and the web UI becomes unresponsive. I have no idea what it could be doing; is there some complicated clean-up step?
Some more details:
I've got a Spark standalone cluster (27 workers/nodes) that I've been successfully submitting jobs to for a while. I recently scaled up the size of my applications, the largest now takes 3.5 hours using 100 cores over 27 workers, and each worker has ~dozens of GB of shuffle read/write over the course of the job. Otherwise, the application is no different than the smaller jobs that have run successfully before.
This is a known issue with Spark's standalone cluster, and is caused by the massive event log created by large applications. You can read more at the issue tracking link below.
https://issues.apache.org/jira/browse/SPARK-12299
At the current time, the best work-around is to disable event logging for large jobs.
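A minimal sketch of that work-around (the application name is illustrative):

```python
from pyspark.sql import SparkSession

# Turn event logging off for the largest jobs so the standalone master does
# not have to chew through a huge event log after the application finishes.
spark = (
    SparkSession.builder
    .appName("big-job-no-eventlog")
    .config("spark.eventLog.enabled", "false")
    .getOrCreate()
)
# Equivalent at submit time, if the job is launched via spark-submit:
#   spark-submit --conf spark.eventLog.enabled=false your_job.py
```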
