Worker Nodes not being used in GCE - apache-spark

While running my spark jobs on google-cloud-dataproc, I notice that only the master node is being utilized and the CPU utilization of all the worker nodes is nearly zero (0.8 percent or so). I have used both the GUI and the console to run the code. Do you know of any specific reason that could be causing this, and how can I make full use of the worker nodes?
I submit the jobs in the following manner:
gcloud dataproc jobs submit spark --properties spark.executor.cores=10 --cluster cluster-663c --class ComputeMST --jars gs://kslc/ComputeMST.jar --files gs://kslc/SIFT_full.txt -- SIFT_full.txt gs://kslc/SIFT_full.txt 5.0 12
while (true) {
    level_counter++;
    if (level_counter > (number_of_levels - 1)) break;
    System.out.println("LEVEL = " + level_counter);
    JavaPairRDD<ArrayList<Integer>, epsNet> distributed_msts_logn1 = distributed_msts_logn.mapToPair(new next_level());
    JavaPairRDD<ArrayList<Integer>, epsNet> distributed_msts_next_level = distributed_msts_logn1.reduceByKey(new union_eps_nets());
    den = den / 2;
    distributed_msts_logn = distributed_msts_next_level.mapValues(new unit_step_logn(den, level_counter));
}
JavaRDD<epsNet> epsNetsRDDlogn = distributed_msts_logn.values();
List<epsNet> epsNetslogn = epsNetsRDDlogn.collect();
Above is the code I am trying to run.

You are doing a collect() in your driver program. What are you trying to achieve? Doing a collect will definitely hammer your master node's resources, since the driver will be collecting all the results there. Generally you want to ingest data into Spark (using read or parallelize on the Spark context), do the map-reduce transformations in memory, and then take the data out of the Spark world (for example, by writing a Parquet file to HDFS) instead of doing anything collect-related.
Also, verify via the Spark UI that you actually got all the executors you asked for, with the requested cores and memory.
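For illustration only, here is a minimal PySpark sketch of that pattern (not the actual MST job): the heavy work stays on the executors and the result is written to GCS, so nothing large is collected back to the driver on the master node. The app name, the word-count transformation, and the output path are all hypothetical.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("no-collect-sketch").getOrCreate()
sc = spark.sparkContext

# Ingest data into Spark; the file is split into partitions across executors.
data = sc.textFile("gs://kslc/SIFT_full.txt")

# In-memory map-reduce (here just a word count for illustration); runs on executors.
result = (data.flatMap(lambda line: line.split())
              .map(lambda word: (word, 1))
              .reduceByKey(lambda a, b: a + b))

# Write the result out from the executors instead of collect()ing it to the driver.
result.saveAsTextFile("gs://kslc/output/")   # hypothetical output location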

Related

Dataproc Didn't Process Big Data in Parallel Using pyspark

I launched a Dataproc cluster in GCP with one master node and 3 worker nodes. Every node has 8 vCPUs and 30 GB of memory.
I developed a PySpark job which reads one CSV file from GCS. The CSV file is about 30 GB in size.
df_raw = (
    spark
    .read
    .schema(schema)
    .option('header', 'true')
    .option('quote', '"')
    .option('multiline', 'true')
    .csv(infile)
)
df_raw = df_raw.repartition(20, "Product")
print(df_raw.rdd.getNumPartitions())
Here is how I submitted the PySpark job to Dataproc:
gcloud dataproc jobs submit pyspark gs://<my-gcs-bucket>/<my-program>.py \
--cluster=${CLUSTER} \
--region=${REGION}
I got a partition count of only 1.
I attached the node usage image here for your reference.
It seems only one vCore on one worker node was used.
How can I make this run in parallel with multiple partitions, using all nodes and more vCores?
I tried repartitioning to 20, but it still only used one vCore on one worker node, as below:
The default partition count in PySpark is 200, so I was surprised to see that Dataproc didn't use all the available resources for this kind of task.
This isn't a Dataproc issue, but a pure Spark/PySpark one.
In order to parallelize your data, it needs to be split into multiple partitions - a number larger than the number of executors (total worker cores) you have, e.g. roughly 2x or 3x that number.
There are various ways to do this e.g.:
Split the data into files or folders, parallelize the list of files/folders, and work on each one (or use a database that already does this and keeps the partitioning when Spark reads it).
Repartition your data after you get a Spark DataFrame, e.g. read the number of executors, multiply it by N, and repartition to that many partitions (a fuller sketch follows below). When you do this, you must choose columns which divide your data well, i.e. into many parts rather than only a few - for example by day or by customer ID, not by a status ID.
df = df.repartition(num_partitions, 'partition_by_col1', 'partition_by_col2')
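As a rough sketch of that repartition approach, assuming a SparkSession already exists; the input path is a placeholder, the "Product" column is taken from the question, and N = 3 is an arbitrary multiplier:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
sc = spark.sparkContext

# On YARN, defaultParallelism is normally the total number of executor cores.
total_cores = sc.defaultParallelism
num_partitions = total_cores * 3   # N = 3; 2-4x the total cores is a common rule of thumb

df = spark.read.option('header', 'true').csv('gs://<my-gcs-bucket>/<input>.csv')   # placeholder path
df = df.repartition(num_partitions, 'Product')   # partition by a column that splits the data well
print(df.rdd.getNumPartitions())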
The code runs on the master node and the parallel stages are distributed amongst the worker nodes, e.g.
df = (
    df.withColumn(...).select(...)...
    .write(...)
)
Since Spark functions are lazy, they only run when you reach a step like write or collect which causes the DF to be evaluated.
You might want to try increasing the number of executors by passing Spark configuration via the --properties flag of the Dataproc command line. So something like:
gcloud dataproc jobs submit pyspark gs://<my-gcs-bucket>/<my-program>.py \
--cluster=${CLUSTER} \
--region=${REGION} \
--properties=spark.executor.instances=5

How Apache Spark collects and coordinates the results from executors

Posting this question to learn how Apache Spark collects and coordinates the results from executors.
Suppose I'm running a job with 3 executors. My DataFrame is partitioned and distributed across these 3 executors.
So now, when I execute a count() or collect() action on the DataFrame, how will Spark coordinate the results from these 3 executors?
val prods = spark.read.format("csv").option("header", "true").load("testFile.csv")
prods.count(); // How spark collect data from three executors? Who will coordinate the result from different executors and give it to driver?
When you do spark-submit, you specify the master; the client program (driver) starts running on YARN if yarn is specified as master, or locally if local is specified. https://spark.apache.org/docs/latest/submitting-applications.html
Since you have added the yarn tag to the question, I am assuming you mean a YARN master URL. YARN launches the client program (driver) on one of the nodes of the cluster and registers and assigns workers (executors) to the driver so that tasks can be executed on each node. Each transformation/action runs in parallel on the worker nodes (executors). Once each node completes its work, it returns its results to the driver program.
OK, which part is not clear to you?
Let me make it generic: the client/driver program launches and asks the master (local/standalone master/YARN, aka the cluster manager) for the resources it needs to perform its tasks, i.e. to allocate workers to the driver. The cluster manager in return allocates workers, launches executors on the worker nodes, and tells the client program that it can use those workers to do its job. The data is divided across the worker nodes and the parallel tasks/transformations run there. Once collect() or count() is called (assuming this is the final part of the job), each executor returns its result back to the driver.
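To make that flow concrete, here is a small sketch (in PySpark for brevity; the CSV name is the one from the question) that mimics what count() does internally: each task counts the rows of its own partition on an executor, and the driver only receives and sums the small per-partition results.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("count-coordination-sketch").getOrCreate()

prods = spark.read.format("csv").option("header", "true").load("testFile.csv")

# Each element of this list is produced by a task running on an executor.
per_partition_counts = prods.rdd.mapPartitions(
    lambda rows: [sum(1 for _ in rows)]
).collect()   # only the tiny partial counts travel back to the driver

# The final aggregation of the partial results happens on the driver.
print(per_partition_counts, sum(per_partition_counts))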

Need help in understanding pyspark execution on yarn as Master

I already have some picture of the YARN architecture as well as the Spark architecture. But when I try to understand them together (that is, what happens when a Spark job runs on YARN as master) on a Hadoop cluster, I get into some confusion. So first I will describe my understanding with the example below, and then I will come to my confusions.
Say I have a file "orderitems" stored on HDFS with some replication factor.
Now I am processing the data by reading this file into a Spark RDD (say, for calculating order revenue).
I have written the code and configured the spark-submit as given below:
spark-submit \
--master yarn \
--conf spark.ui.port=21888 \
--num-executors 2 \
--executor-memory 512M \
src/main/python/order_revenue.py
Let's assume that I have created the RDD with 5 partitions and I have executed it in yarn-client mode.
Now, as per my understanding, once I submit the Spark job on YARN:
The request goes to the Application Manager, which is a component of the Resource Manager.
The Application Manager will find one Node Manager and ask it to launch a container.
This is the first container of the application and we will call it the Application Master.
The Application Master takes over the responsibility of executing and monitoring the job.
Since I have submitted in client mode, the driver program will run on my edge node/gateway node.
I have provided num-executors as 2 and executor memory as 512 MB.
Also, I have provided the number of partitions for the RDD as 5, which means it will create 5 partitions of the data read and distribute them over 5 nodes.
Now here are my few confusions about this:
I have read in the user guide that the partitions of an RDD will be distributed to different nodes. Are these nodes the same as the 'Data Nodes' of the HDFS cluster? I mean, there are 5 partitions here - does this mean they sit on 5 data nodes?
I have mentioned num-executors as 2, so these 5 partitions of data will use 2 executors (CPUs). So my next question is, from where will these 2 executors (CPUs) be picked? I mean, the 5 partitions are on 5 nodes, right - so are these 2 executors also on some of those nodes?
The scheduler is responsible for allocating resources to the various running applications, subject to constraints of capacities, queues etc. Also, a container is a Linux control group, a Linux kernel feature that allows users to allocate CPU, memory, disk I/O and bandwidth to a user process. So my final question is: are containers actually provided by the "scheduler"?
I am confused here. I have referred to the architecture, release documents and some videos and got mixed up.
Expecting helping hands here.
To answer your questions first:
1) Very simply, an executor is Spark's worker and the driver is the manager, and they have nothing to do with Hadoop nodes. Think of the executors as processing units (say 2 here): repartition(5) divides the data into 5 chunks, and those chunks are divided amongst the 2 executors on some basis to be processed. Repartitioning data does not create nodes.
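A tiny PySpark sketch of that point (the input numbers are made up): 5 partitions simply become 5 tasks per stage, which the 2 executors pick up as their cores become free.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partition-vs-executor-sketch").getOrCreate()
sc = spark.sparkContext

rdd = sc.parallelize(range(100)).repartition(5)

print(rdd.getNumPartitions())                    # 5 partitions = 5 tasks per stage
# glom() turns each partition into a list so its size can be inspected;
# each of these tasks runs on whichever of the 2 executors has a free core.
print([len(part) for part in rdd.glom().collect()])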
Diagrams (not reproduced here): Spark cluster architecture, Spark on YARN client mode, and Spark on YARN cluster mode.
For other details you can read the blog posts https://sujithjay.com/2018/07/24/Understanding-Apache-Spark-on-YARN/
and https://0x0fff.com/spark-architecture/

How does Spark Streaming schedule map tasks between driver and executor?

I use Apache Spark 2.1 and Apache Kafka 0.9.
I have a Spark Streaming application that runs with 20 executors and reads from a Kafka topic that has 20 partitions. This Spark application performs only map and flatMap operations.
Here is what the Spark application does:
Create a direct stream from Kafka with an interval of 15 seconds
Perform data validations
Execute transformations using Drools, which are map-only; no reduce transformations
Write to HBase using check-and-put
I wonder, if executors and partitions are mapped 1:1, will every executor perform the above steps and write to HBase independently, or will data be shuffled across multiple executors with coordination happening between the driver and the executors?
Spark jobs submit tasks that can only be executed on executors. In other words, executors are the only place where tasks are executed; the driver's job is to coordinate the tasks and schedule them accordingly.
With that said, I'd say the following is true:
will every executor independently perform above steps and write to HBase independently
By the way, the answer is independent of which Spark version is in use. It has always been like this (and I don't see any reason why it would or even should change).
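A minimal PySpark sketch of that "each executor handles its own partitions independently" pattern, using foreachPartition. In the real application the partition handler would open an HBase connection and issue the check-and-put calls; here the body is a counting stub so the sketch stays runnable, and the 20-way parallelize stands in for the 20 Kafka partitions.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("independent-write-sketch").getOrCreate()
sc = spark.sparkContext

def write_partition(records):
    # Runs entirely inside one executor, with no shuffle and nothing sent back
    # to the driver; in the real job this is where the HBase check-and-put
    # calls for this partition's records would go.
    written = sum(1 for _ in records)
    print("partition handled independently, records:", written)

rdd = sc.parallelize(range(1000), 20)   # stand-in for the 20 Kafka partitions
rdd.foreachPartition(write_partition)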

Measure runtime of algorithm on a spark cluster

How do I measure the runtime of an algorithm in spark, especially on a cluster? I am interested in measuring the time from when the spark job is submitted to the cluster to when the submitted job has completed.
If it is important, I am mainly interested in machine learning algorithms using dataframes.
In my experience a reasonable approach is to measure the time from the submission of the job to its completion, as seen from the driver. This is achieved by surrounding the Spark action with timestamps:
val myRdd = sc.textFile("hdfs://foo/bar/..")
val startt = System.currentTimeMillis
val cnt = myRdd.count() // Or any other "action" such as take(), save(), etc
val elapsed = System.currentTimeMillis - startt
Notice that the initial sc.textFile() is lazy - i.e. it does not cause the Spark driver to submit a job to the cluster. Therefore it is not really important whether you include it in the timing or not.
A consideration for the results: the approach above is susceptible to variance due to existing load on the Spark scheduler and cluster. A more precise approach would have the Spark job write System.currentTimeMillis from inside its closure (executed on the worker nodes) to an accumulator at the beginning of its processing. This would remove the scheduling latency from the calculation.
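A sketch of that idea, in PySpark rather than Scala and with a hypothetical input path: a custom accumulator records the earliest wall-clock time at which any task actually started processing, so the driver-side scheduling delay can be subtracted out.

import time
from pyspark import AccumulatorParam
from pyspark.sql import SparkSession

class MinAccumulatorParam(AccumulatorParam):
    # Accumulator that keeps the smallest value it has seen.
    def zero(self, initial_value):
        return initial_value
    def addInPlace(self, v1, v2):
        return min(v1, v2)

spark = SparkSession.builder.appName("timing-sketch").getOrCreate()
sc = spark.sparkContext

# Start at "infinity" so the first real timestamp always wins.
first_task_start = sc.accumulator(float("inf"), MinAccumulatorParam())

def tag_start_time(rows):
    # Executed on the worker nodes: record when this partition starts processing.
    first_task_start.add(time.time())
    for row in rows:
        yield row

rdd = sc.textFile("hdfs://foo/bar/data.txt")   # hypothetical input path
count = rdd.mapPartitions(tag_start_time).count()
finished = time.time()

# Elapsed processing time, excluding the time the job spent waiting to be scheduled.
print("processing time (s):", finished - first_task_start.value)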
To calculate the runtime of an algorithm, follow this procedure:
1. Set up a single/multi-node cluster.
2. Make a folder and save your algorithm in that folder (e.g. myalgo.scala/java/python).
3. Build it using sbt (you can follow this link to build your program: https://www.youtube.com/watch?v=1BeTWT8ADfE).
4. Run this command: SPARK_HOME$ /bin/spark-submit --class "class name" --master "spark master URL" "target jar file path" "arguments if any"
For example: spark-submit --class "GroupByTest" --master spark://BD:7077 /home/negi/sparksample/target/scala-2.11/spark-sample_2.11-1.0.jar
After this, refresh your web UI (e.g. localhost:8080) and you will find all the information there about your executed program, including its run time.
