Spark Streaming on Kubernetes - Executor Pods not Restarted/Rescheduled

We are running DStream applications on a Kubernetes cluster using the Spark Operator (Spark 2.4.7). Sometimes, for various reasons (OOMs, Kubernetes node restarts), executor pods get lost. While Spark usually notices this and schedules a replacement executor, eventually (after a week or more) most of the applications end up in a state where the executors are no longer rescheduled and the application keeps running with fewer executors than requested. In the Spark UI those "forever lost" executors are shown as healthy, but obviously they aren't fetching any data from Kafka. The only way to make sure the application works as expected is to recreate the SparkApplication CRD, which is effectively a hard restart.
You can find the restartPolicy section of the SparkApplication CRD below; a possible driver-side workaround is sketched after it:
restartPolicy:
  onFailureRetries: 100
  onFailureRetryInterval: 20
  onSubmissionFailureRetries: 5
  onSubmissionFailureRetryInterval: 30
  type: Always
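For illustration only, here is a minimal sketch of one hedged workaround, assuming a deliberate driver exit is acceptable: a SparkListener tracks live executors, and a watchdog exits the driver when the count stays below the requested number, so that the operator's restartPolicy (type: Always) resubmits the application. The class name, threshold, and exit code are illustrative, not part of the original setup.

import java.util.concurrent.atomic.AtomicInteger
import org.apache.spark.SparkContext
import org.apache.spark.scheduler.{SparkListener, SparkListenerExecutorAdded, SparkListenerExecutorRemoved}

// Hypothetical watchdog: counts executor add/remove events and fails fast when
// executors are lost and not replaced, so the operator restarts the application.
class ExecutorWatchdog(sc: SparkContext, expectedExecutors: Int) extends SparkListener {
  private val live = new AtomicInteger(0)
  @volatile private var reachedExpected = false

  override def onExecutorAdded(event: SparkListenerExecutorAdded): Unit =
    if (live.incrementAndGet() >= expectedExecutors) reachedExpected = true

  override def onExecutorRemoved(event: SparkListenerExecutorRemoved): Unit =
    live.decrementAndGet()

  // Call this periodically from a daemon thread in the streaming driver.
  def checkOrDie(): Unit =
    if (reachedExpected && live.get() < expectedExecutors) {
      // Executors were lost and never rescheduled; exit so restartPolicy kicks in.
      sc.stop()
      sys.exit(1)
    }
}

// Registration (sketch): sc.addSparkListener(new ExecutorWatchdog(sc, expectedExecutors = 4))

This does not address the root cause of the allocator going quiet; it only automates the hard restart that currently has to be done by hand.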

Related

How to run multiple spark jobs on k8s cluster with simple scheduler

My main intent is to get Spark (3.3) running on k8s with HDFS.
I went through the Spark website and got the SparkPi example running on a k8s cluster via the spark-submit command. I read that if we submit multiple jobs to a k8s cluster, k8s may end up starving all the pods -- meaning there is no queueing in place and no scheduler (like YARN) that keeps a check on resources and arranges the tasks across the nodes.
So, my question is: what is the simplest way to write a scheduler in k8s? I read about Volcano -- but it's not yet GA. I read about gang scheduling and YuniKorn -- but I don't see much community support.

Monitor Spark with Prometheus when Spark clusters are spun up just when needed

We run Spark over Kubernetes and we spin up a Spark driver and executors for many of our tasks (not a Spark task). After the task is finished we spin the cluster (on Kubernetes) down and spin up another one when needed (there can be a lot running simultaneously).
The problem I have is that I can't monitor it with Prometheus, because I do not have a driver that is always "alive" from which I can pull information on the executors.
Is there a solution for that kind of architecture?

Spark Job on Kubernetes Under Resource Starvation Waits Indefinitely For SPARK_MIN_EXECUTORS

I am using Spark 3.0.1 and working on a Spark deployment on Kubernetes, where Kubernetes acts as the cluster manager and the job is submitted in client mode. If the cluster does not have sufficient resources (CPU/memory) for the minimum number of executors, the executors stay in a Pending state indefinitely until resources become free.
Suppose the cluster configuration is:
total memory = 204Gi
used memory = 200Gi
free memory = 4Gi
spark.executor.memory=10g
spark.dynamicAllocation.minExecutors=4
spark.dynamicAllocation.maxExecutors=8
Here the job should not be submitted, since fewer executors can be allocated than minExecutors.
How can the driver abort the job in this scenario?
Firstly, I would like to mention that Spark dynamic allocation is not supported for Kubernetes yet (as of version 3.0.1); it is in the pipeline for a future release (Link).
As for the requirement you have posted, you could address it by running a resource-monitor code snippet before the job is initialized and terminating the initialization pod itself with an error, as sketched below.
If you want to run this from the CLI, you could use kubectl describe nodes or the kube-capacity utility to monitor the resources.
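For reference, a rough sketch of such a pre-flight check, assuming the fabric8 kubernetes-client is available on the classpath; the required executor count and memory mirror the example configuration above, and summing node "allocatable" memory is only an approximation of free memory, since the requests of already-running pods are not subtracted.

import io.fabric8.kubernetes.api.model.Quantity
import io.fabric8.kubernetes.client.DefaultKubernetesClient
import scala.collection.JavaConverters._

object PreflightResourceCheck {
  def main(args: Array[String]): Unit = {
    val requiredExecutors   = 4                         // spark.dynamicAllocation.minExecutors
    val executorMemoryBytes = 10L * 1024 * 1024 * 1024  // spark.executor.memory=10g

    val client = new DefaultKubernetesClient()          // in-cluster config or ~/.kube/config
    try {
      // Sum allocatable memory across all nodes (capacity approximation, not free memory).
      val allocatableBytes = client.nodes().list().getItems.asScala
        .map(n => Quantity.getAmountInBytes(n.getStatus.getAllocatable.get("memory")).longValue())
        .sum

      if (allocatableBytes < requiredExecutors.toLong * executorMemoryBytes) {
        System.err.println(s"Not enough memory for $requiredExecutors executors; aborting submission.")
        sys.exit(1)                                     // non-zero exit fails the init pod
      }
    } finally {
      client.close()
    }
  }
}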

Spark Executor Pods in Pending State on Kubernetes Deployment

I deployed a simple Spark application on Kubernetes with the following configuration:
spark.executor.instances=2;spark.executor.memory=8g;
spark.dynamicAllocation.enabled=true;spark.dynamicAllocation.shuffleTracking.enabled=true;
spark.executor.cores=2;spark.dynamicAllocation.minExecutors=2;spark.dynamicAllocation.maxExecutors=2;
The memory requirements of the executor pods are more than what is available on the Kubernetes cluster, and because of this the executor pods always stay in the Pending state, as shown below.
$ kubectl get all
NAME                                        READY   STATUS    RESTARTS   AGE
pod/spark-k8sdemo-6e66d576f655b1f5-exec-1   0/1     Pending   0          10m
pod/spark-k8sdemo-6e66d576f655b1f5-exec-2   0/1     Pending   0          10m
pod/spark-master-6d9bc767c6-qsk8c           1/1     Running   0          10m
I know the reason is the non-availability of resources, as shown by the kubectl describe command:
$ kubectl describe pod/spark-k8sdemo-6e66d576f655b1f5-exec-1
Events:
Type     Reason            Age                 From               Message
----     ------            ----                ----               -------
Warning  FailedScheduling  28s (x12 over 12m)  default-scheduler  0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.
Meanwhile, the driver pod keeps waiting forever for the executor pods to get enough resources, as below.
$ kubectl logs pod/spark-master-6d9bc767c6-qsk8c
21/01/12 11:36:46 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
21/01/12 11:37:01 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
21/01/12 11:37:16 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
Now my question is whether there is some way to make the driver wait only for some time or number of retries, and, if the executors still don't get resources, have the driver pod die on its own with a proper message/log, e.g. "application aborted as there were no resources in the cluster".
I went through all the Spark configurations for the above requirement but couldn't find any. In YARN we have spark.yarn.maxAppAttempts, but nothing similar was found for Kubernetes.
If no such configuration is available in Spark, is there a way to achieve the same in the Kubernetes pod definition?
This is Apache Spark 3.0.1 here. No idea if things get any different in the upcoming 3.1.1.
tl;dr I don't think there's built-in support for "make the driver wait only for some time/retries, and if the executors still don't get resources, the driver pod should auto-die with a proper message/log".
My very basic understanding of Spark on Kubernetes lets me claim that there is no such feature to "auto-die" the driver pod when there are no resources for executor pods.
There is podCreationTimeout (based on the spark.kubernetes.allocation.batch.delay configuration property) and the spark.kubernetes.executor.deleteOnTermination configuration property, which make Spark on Kubernetes delete executor pods that were requested but never created, but that's not really what you want.
Dynamic Allocation of Executors could make things a bit more complex, but it does not really matter in this case.
A workaround could be to use spark-submit --status to request status of a Spark application and check whether it's up and running or not and --kill it after a certain time threshold (you could achieve a similar thing using kubectl directly too).
Just an FYI, and to make things a bit more interesting: you should also review two other Spark on Kubernetes-specific configuration properties:
spark.kubernetes.driver.request.cores
spark.kubernetes.executor.request.cores
There could be others.
After a lot of digging, we were finally able to use a SparkListener to check whether the Spark application has started and a sufficient number of executors have registered. If this condition is met, we proceed with the Spark jobs; otherwise we return with a warning stating that there are not enough resources in the Kubernetes cluster to run that Spark job. A rough sketch of the idea follows.
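This is a minimal sketch of that idea, not the exact code referred to above; the required executor count and timeout are illustrative.

import java.util.concurrent.{CountDownLatch, TimeUnit}
import org.apache.spark.SparkContext
import org.apache.spark.scheduler.{SparkListener, SparkListenerExecutorAdded}

object ExecutorRegistrationCheck {
  // Returns true if `required` executors register within `timeoutSeconds`.
  // Register the listener right after creating the SparkContext, before running any jobs.
  def waitForExecutors(sc: SparkContext, required: Int, timeoutSeconds: Long): Boolean = {
    val latch = new CountDownLatch(required)
    sc.addSparkListener(new SparkListener {
      override def onExecutorAdded(event: SparkListenerExecutorAdded): Unit = latch.countDown()
    })
    latch.await(timeoutSeconds, TimeUnit.SECONDS)
  }
}

// Usage in the driver (sketch):
// if (!ExecutorRegistrationCheck.waitForExecutors(sc, required = 2, timeoutSeconds = 120)) {
//   println("WARN: not enough resources in the Kubernetes cluster to run this Spark job")
//   sc.stop()
//   sys.exit(1)
// }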
Is there a way in the Kubernetes pod definition to achieve the same?
You could use an init container in your Spark driver pod that confirms the Spark executor pods are available.
The init container could be a simple shell script. The logic could be written so it only tries a limited number of times and/or times out.
In a pod definition, all init containers must succeed before any other containers in the pod are started.

Should slave nodes be launched/started separately on Amazon EMR server?

I have just launched an Amazon Elastic MapReduce cluster, after hitting java.lang.OutOfMemoryError: Java heap space while fetching 120 million rows from a database in PySpark. I have 1 master and 2 slave nodes running, each with 4 cores and 8G RAM.
I am trying to load a massive dataset from a MySQL database (containing approx. 120M rows). The query loads fine, but when I do a df.show() operation or when I try to perform operations on the Spark dataframe, I get errors like:
org.apache.spark.SparkException: Job 0 cancelled because SparkContext was shut down
Task 0 in stage 0.0 failed 1 times; aborting job
java.lang.OutOfMemoryError: GC overhead limit exceeded
My questions are -
When I SSH into the Amazon EMR server and do htop, I see that 5GB out of 8GB is already in use. Why is this?
On the Amazon EMR portal, I can see that the master and slave servers are running. I'm not sure if the slave servers are being used or if it's just the master doing all the work. Do I have to separately launch or "start" the 2 slave nodes, or does Spark do that automatically? If yes, how do I do this?
If you are running Spark in local mode (local[*]) from the master, then it will only use the master node.
How are you submitting the Spark job?
Use YARN cluster or client mode when submitting the Spark job to use the resources efficiently (see the sketch below).
Read more on YARN cluster vs client.
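For illustration, a sketch of the difference in the driver code; on EMR the master is usually set via spark-submit --master yarn rather than hard-coded, and the application name below is made up.

import org.apache.spark.sql.SparkSession

// local[*] keeps the driver and all tasks on the master node only;
// "yarn" lets YARN place executors on the core/task (slave) nodes as well.
val spark = SparkSession.builder()
  .appName("mysql-load")        // illustrative name
  // .master("local[*]")        // everything runs on the master node
  .master("yarn")               // executors distributed across the cluster by YARN
  .getOrCreate()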
The master node runs all the other services like Hive, MySQL, etc. Those services may be taking 5GB of RAM if you aren't using standalone mode.
In the YARN UI (http://<master-public-dns>:8088) you can check in more detail what other containers are running.
You can check where your Spark driver and executors are running in the Spark UI (http://<master-public-dns>:18080). Select your job and go to the Executors section; there you will find the machine IP of each executor.
Enable Ganglia in EMR, or check the CloudWatch EC2 metrics, to see each machine's utilization.
Spark doesn't start or terminate nodes.
If you want to scale your cluster depending on job load, apply an autoscaling policy to the CORE or TASK instance group.
But you need at least 1 CORE node always running.
