Spark Pi Applications running on Idle Dataproc Cluster - apache-spark

I am seeing a lot of Spark Pi application runs on my idle Dataproc cluster. They happen every 5 minutes, on different workers and on the master node. The cluster is linked to a Persistent History Server (PHS), so these Spark Pi jobs end up spamming the PHS Spark UI.
Why are Spark Pi jobs running on the cluster, and is there a way to disable them?

Related

How to run multiple spark jobs on k8s cluster with simple scheduler

My main intent is to get Spark (3.3) running on k8s with HDFS.
I went through the Spark website and got the Spark Pi program running on a k8s cluster via a spark-submit command. I read that if we submit multiple jobs to a k8s cluster, Kubernetes may end up starving pods -- meaning there is no queueing in place and no scheduler (like YARN) that keeps a check on resources and arranges the tasks across the nodes.
So, my question is: what is the simplest way to get a scheduler in place on k8s? I read about Volcano -- but it's not yet GA. I read about gang scheduling and YuniKorn -- but I don't see much community support.
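For what it's worth, Spark 3.3 ships pluggable Kubernetes scheduler support, so pod scheduling can be handed to a batch scheduler such as Volcano (or YuniKorn) instead of the default kube-scheduler, which gives you queueing and gang scheduling without writing a scheduler yourself. Below is a minimal sketch of the submission side using the SparkLauncher API -- the master URL, container image and jar path are placeholders, and it assumes a Spark build that includes the Volcano module plus Volcano installed on the cluster:

```scala
import org.apache.spark.launcher.SparkLauncher

object SubmitPiWithVolcano {
  def main(args: Array[String]): Unit = {
    // Programmatic equivalent of spark-submit; values below are placeholders.
    val handle = new SparkLauncher()
      .setMaster("k8s://https://kubernetes.default.svc:443")
      .setDeployMode("cluster")
      .setMainClass("org.apache.spark.examples.SparkPi")
      .setAppResource("local:///opt/spark/examples/jars/spark-examples_2.12-3.3.0.jar")
      .setConf("spark.kubernetes.container.image", "my-registry/spark:3.3.0")
      .setConf("spark.executor.instances", "2")
      // Spark 3.3+: let a batch scheduler (Volcano here) place the pods so
      // concurrent jobs are queued instead of starving one another.
      .setConf("spark.kubernetes.scheduler.name", "volcano")
      .setConf("spark.kubernetes.driver.pod.featureSteps",
        "org.apache.spark.deploy.k8s.features.VolcanoFeatureStep")
      .setConf("spark.kubernetes.executor.pod.featureSteps",
        "org.apache.spark.deploy.k8s.features.VolcanoFeatureStep")
      .startApplication()

    // Wait for the application to reach a terminal state.
    while (!handle.getState.isFinal) Thread.sleep(5000)
  }
}
```

The same configuration keys can of course be passed directly as --conf flags to spark-submit instead.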

Monitor Spark with Prometheus when Spark clusters are spun up only when needed

We run Spark over Kubernetes and spin up a Spark driver and executors for many of our tasks (which are not Spark tasks themselves). After a task finishes we spin the cluster (on Kubernetes) down and spin up another one when needed (there can be a lot running simultaneously).
The problem is that I can't monitor this with Prometheus, because I do not have a driver that is always "alive" from which I can pull information about the executors.
Is there a solution for that kind of architecture?
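Since the drivers here are short-lived, one hedged option (not from the question) is to let each driver expose its own metrics in Prometheus text format and have Prometheus discover the driver pods dynamically, rather than relying on a permanently running scrape target. A minimal sketch of the relevant configuration, assuming Spark 3.0+ on Kubernetes; the app name and annotation scheme are illustrative, and in cluster mode these keys would be passed as --conf flags at submit time so the driver pod is created with them:

```scala
import org.apache.spark.sql.SparkSession

// The conf keys are the interesting part; in cluster mode they would be passed
// as --conf flags on spark-submit so that the driver pod is created with them.
val spark = SparkSession.builder()
  .appName("ephemeral-task")   // placeholder name
  // Spark 3.0+: serve driver/executor metrics in Prometheus text format from
  // the driver UI (port 4040 by default).
  .config("spark.ui.prometheus.enabled", "true")
  .config("spark.metrics.conf.*.sink.prometheusServlet.class",
    "org.apache.spark.metrics.sink.PrometheusServlet")
  .config("spark.metrics.conf.*.sink.prometheusServlet.path", "/metrics/prometheus")
  // Conventional Prometheus pod-discovery annotations; they only help if your
  // Prometheus is configured for annotation-based scraping of short-lived pods.
  .config("spark.kubernetes.driver.annotation.prometheus.io/scrape", "true")
  .config("spark.kubernetes.driver.annotation.prometheus.io/port", "4040")
  .config("spark.kubernetes.driver.annotation.prometheus.io/path", "/metrics/prometheus")
  .getOrCreate()
```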

Unable to SSH into EMR Cluster

I use Airflow to submit multiple hourly Spark jobs to an EMR cluster. In one hour I can have upwards of 30 spark-submits.
The EMR cluster is 1 master node and 4 core nodes, all c4.4xlarge.
My spark-submits use --master yarn and --deploy-mode client.
Every hour, multiple Airflow DAGs SSH into the EMR cluster and spark-submit their jobs. Most of the jobs are small and finish within a few minutes, except for a few that take 10-15 minutes.
I have been hitting a recurring error logged by Airflow, and once one task receives it, it cascades to the rest of them:
airflow.exceptions.AirflowException: SSH operator error: No existing session
This means Airflow was unable to SSH into the cluster. I even tried to SSH in from my own computer and it just hangs. Is it possible there are too many Spark tasks running? I wouldn't think so, because my cluster is pretty big for the jobs I have to run.

Role of master in Spark standalone cluster

In a Spark standalone cluster, what exactly is the role of the master (a node started with the start_master.sh script)?
I understand that it is the node that receives the jobs from the submit-job.sh script, but what is its role when processing a job?
I see in the web UI that it always hands the job over to a slave (a node started with start_slave.sh) and does not participate in the processing. Am I right? In that case, should I also run the start_slave.sh script on the same machine as the master to take advantage of its resources (CPU and memory)?
Thanks in advance.
Spark runs in the following cluster modes:
Local
Standalone
Mesos
YARN
These are the cluster modes that offer resources to Spark applications.
Spark standalone mode is a master-slave architecture: we have a Spark Master and Spark Workers. The Spark Master runs on one of the cluster nodes, and the Spark Workers run on the slave nodes of the cluster.
The Spark Master (often called the standalone Master) is the resource manager for the Spark standalone cluster; it allocates resources (CPU, memory, disk, etc.) among the Spark applications. Those resources are used to run the Spark driver and executors.
Spark Workers report to the Spark Master about the resources available on their slave nodes.
Spark standalone comes with its own resource manager. Think of the Spark Master/Worker as YARN's ResourceManager/NodeManager.
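To make that concrete, here is a minimal sketch of an application talking to a standalone Master (the host name is a placeholder; 7077 is the default Master port). The Master only brokers resources; the driver runs wherever you launch this program, and the executors run on Workers. And to the second part of the question: yes, you can also start a Worker (start_slave.sh) on the same machine as the Master if you want its CPU and memory used for executors.

```scala
import org.apache.spark.sql.SparkSession

object StandaloneExample {
  def main(args: Array[String]): Unit = {
    // "master-host" is a placeholder for the machine where start_master.sh ran.
    val spark = SparkSession.builder()
      .appName("standalone-example")
      .master("spark://master-host:7077")    // resource requests go to the Master
      .config("spark.executor.memory", "2g") // granted by the Master, run on Workers
      .config("spark.cores.max", "4")
      .getOrCreate()

    // The driver runs in this JVM; the Master only asks Workers to launch
    // executors, which then register back with this driver and do the work.
    println(spark.sparkContext.parallelize(1 to 100).sum())
    spark.stop()
  }
}
```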

running multiple Spark jobs on a Mesos cluster

I would like to run multiple Spark jobs on my Mesos cluster and have all of them share the same Spark framework. Is this possible?
I have tried running the MesosClusterDispatcher and having the Spark jobs connect to the dispatcher, but each Spark job launches its own "Spark framework" (I have tried both client mode and cluster mode).
Is this the expected behaviour?
Is it possible to share the same spark-framework among multiple spark jobs?
It is normal and it is the expected behaviour.
In Mesos, as far as I know, the MesosClusterDispatcher is in charge of allocating resources for your Spark driver, which then acts as a framework. Once the Spark driver has been allocated, it is responsible for talking to Mesos and accepting the offers used to allocate the executors where tasks will be executed.
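What you can share is a single long-lived driver: jobs submitted from multiple threads of the same SparkContext all run inside that one framework. A minimal sketch, assuming your workloads can be routed through one driver application (the ZooKeeper master URL and the toy jobs are placeholders):

```scala
import org.apache.spark.sql.SparkSession
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration.Duration

object SharedFrameworkDriver {
  def main(args: Array[String]): Unit = {
    // Placeholder master URL; the key point is that this single driver
    // registers with Mesos as exactly one framework.
    val spark = SparkSession.builder()
      .appName("shared-framework")
      .master("mesos://zk://zk-host:2181/mesos")
      .config("spark.scheduler.mode", "FAIR") // fair-share the concurrent jobs below
      .getOrCreate()

    implicit val ec: ExecutionContext = ExecutionContext.global

    // Each Future triggers a separate Spark job, but all of them reuse the
    // same SparkContext, i.e. the same framework registration with Mesos.
    val jobs = Seq(
      Future(spark.sparkContext.parallelize(1 to 1000000).sum()),
      Future(spark.sparkContext.parallelize(1 to 1000000).count().toDouble)
    )
    println(Await.result(Future.sequence(jobs), Duration.Inf))

    spark.stop()
  }
}
```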
