My Code:
scala> val records = List( "CHN|2", "CHN|3" , "BNG|2","BNG|65")
records: List[String] = List(CHN|2, CHN|3, BNG|2, BNG|65)
scala> val recordsRDD = sc.parallelize(records)
recordsRDD: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[119] at parallelize at <console>:23
scala> val mapRDD = recordsRDD.map(elem => elem.split("\\|"))
mapRDD: org.apache.spark.rdd.RDD[Array[String]] = MapPartitionsRDD[120] at map at <console>:25
scala> val keyvalueRDD = mapRDD.map(elem => (elem(0),elem(1)))
keyvalueRDD: org.apache.spark.rdd.RDD[(String, String)] = MapPartitionsRDD[121] at map at <console>:27
scala> keyvalueRDD.count
res12: Long = 4
As you can see above, there are 3 RDDs created.
My question is: when does the DAG get created, and what does a DAG contain?
Does it get created when we create an RDD using any transformation?
or
Is it created when we call an action on an existing RDD, and Spark then automatically launches that DAG?
Basically, I want to know what happens internally when an RDD gets created.
The DAG is created when a job is executed (that is, when you call an action), and it contains all the dependencies required to build the distributed tasks.
The DAG itself is not executed. Based on the DAG, Spark determines the tasks, which are distributed to the workers and executed.
An RDD alone defines only its lineage, by recursively traversing its dependencies.
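For instance, with the RDDs defined in the question, the lineage already exists before any action is called and can be inspected with toDebugString (exact RDD ids, partition counts and console line numbers will vary per session):
```
scala> keyvalueRDD.toDebugString
// expected shape of the output:
// (N) MapPartitionsRDD[..] at map at <console>:..               <- keyvalueRDD
//  |  MapPartitionsRDD[..] at map at <console>:..               <- mapRDD
//  |  ParallelCollectionRDD[..] at parallelize at <console>:..  <- recordsRDD
```
Nothing is computed at this point; only when an action such as count is called does Spark turn this lineage into a DAG of stages and schedule tasks.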
Related
In my Spark application, I see the same task getting executed in multiple stages, even though these statements have been defined only once in the code. Moreover, the same tasks in different stages take different times to execute. I understand that when an RDD is lost, its lineage is used to recompute it. How can I find out whether that is what is happening here, given that the same phenomenon was seen in all runs of this application? Can someone please explain what is happening, and under what conditions a task can get scheduled in multiple stages?
The code very much looks like the following:
val events = getEventsDF()
events.cache()
metricCounter.inc("scec", events.count())
val scEvents = events.filter(_.totalChunks == 1)
.repartition(NUM_PARTITIONS, lit(col("eventId")))
val sortedEvents = events.filter(e => e.totalChunks > 1 && e.totalChunks <= maxNumberOfChunks)
.map(PartitionUtil.createKeyValueTuple)
.rdd
.repartitionAndSortWithinPartitions(new EventDataPartitioner(NUM_PARTITIONS))
val largeEvents = events.filter(_.totalChunks > maxNumberOfChunks).count()
val mcEvents = sortedEvents.mapPartitionsWithIndex[CFEventLog](
(index: Int, iter: Iterator[Tuple2]) => doSomething())
val mcEventsDF = session.sqlContext.createDataset[CFEventLog](mcEvents)
metricCounter.inc("mcec", mcEventsDF.count())
val currentDf = scEvents.unionByName(mcEventsDF)
val distinctDateHour = currentDf.select(col("eventDate"), col("eventHour"))
.distinct
.collect
val prevEventsDF = getAnotherDF(distinctDateHour)
val finalDf = currentDf.unionByName(prevEventsDF).dropDuplicates(Seq("eventId"))
finalDf
.write.mode(SaveMode.Overwrite)
.partitionBy("event_date", "event_hour")
.saveAsTable("table")
val finalEventsCount = finalDf.count()
Does every count() action result in re-execution of the RDD transformations that precede it?
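To make the question concrete, here is a minimal sketch (assuming a SparkSession named spark; this is not the code above) of how cache() interacts with repeated actions:
```
import org.apache.spark.sql.functions.col

// hypothetical DataFrame standing in for getEventsDF()
val events = spark.range(0, 1000000).toDF("id")
  .withColumn("bucket", col("id") % 100)

events.cache()   // lazy: only marks the plan for caching, nothing is computed yet
events.count()   // first action: runs the lineage and materializes the cache
events.count()   // later actions read the cached data instead of re-running
                 // the transformations that produced it
```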
Thanks,
Devj
Referring to https://spark.apache.org/docs/1.6.2/programming-guide.html#performance-impact
Shuffle also generates a large number of intermediate files on disk. As of Spark 1.3, these files are preserved until the corresponding RDDs are no longer used and are garbage collected. This is done so the shuffle files don’t need to be re-created if the lineage is re-computed
I understand why these files will be retained. However, I can't figure out whether these intermediate files are shared between jobs.
My experiments show that these shuffle files are NOT shared between jobs. Can anyone confirm?
The scenario I am talking about:
```
val rdd1 = sc.text...
val rdd2 = sc.text...
val rdd3 = rdd1.join(rdd2)
// at this point shuffle takes place
//Now, if I do this again:
val rdd4 = rdd1.join(rdd2)
// will the shuffle files be reused? I think I've got the answer, which is no, since the RDDs do not share the lineage
```
Between jobs - yes. That's the whole purpose of preserving shuffle files (see What does "Stage Skipped" mean in Apache Spark web UI?). Consider the following session transcript:
scala> val rdd1 = sc.parallelize(Seq((1, None), (2, None)), 4)
rdd1: org.apache.spark.rdd.RDD[(Int, None.type)] = ParallelCollectionRDD[0] at parallelize at <console>:24
scala> val rdd2 = sc.parallelize(Seq((1, None), (2, None)), 4)
rdd2: org.apache.spark.rdd.RDD[(Int, None.type)] = ParallelCollectionRDD[1] at parallelize at <console>:24
scala> val rdd3 = rdd1.join(rdd2)
rdd3: org.apache.spark.rdd.RDD[(Int, (None.type, None.type))] = MapPartitionsRDD[4] at join at <console>:27
scala> rdd3.count // First job
res0: Long = 2
scala> rdd3.foreach(_ => ()) // Second job
and the corresponding state of the Spark UI.
Between applications - no. Shuffle files are discarded when SparkContext is closed.
The shuffle files are meant for the stages within a job. Other jobs won't be able to use these shuffle files. So, AFAIK, no, shuffle files cannot be shared between jobs.
I am learning Apache Spark and trying to get the lineage graph of the RDDs.
But I could not find out when a particular lineage is created.
Also, where can I find the lineage of an RDD?
RDD lineage is the logical execution plan of a distributed computation that is created and expanded every time you apply a transformation to any RDD.
Note the word "logical", not "physical"; the physical plan is what comes into play after you've executed an action.
Quoting Mastering Apache Spark 2 gitbook:
RDD Lineage (aka RDD operator graph or RDD dependency graph) is a graph of all the parent RDDs of a RDD. It is built as a result of applying transformations to the RDD and creates a logical execution plan.
An RDD lineage graph is hence a graph of the transformations that need to be executed after an action has been called.
Any RDD has an RDD lineage, even if that lineage is just a single node, i.e. the RDD itself. That's because an RDD may or may not be the result of a series of transformations (and applying no transformations is a "zero-effect" transformation :)).
You can check out the RDD lineage of an RDD using RDD.toDebugString:
toDebugString: String A description of this RDD and its recursive dependencies for debugging.
val nums = sc.parallelize(0 to 9)
scala> nums.toDebugString
res0: String = (8) ParallelCollectionRDD[0] at parallelize at <console>:24 []
val doubles = nums.map(_ * 2)
scala> doubles.toDebugString
res1: String =
(8) MapPartitionsRDD[1] at map at <console>:25 []
| ParallelCollectionRDD[0] at parallelize at <console>:24 []
val groups = doubles.groupBy(_ < 10)
scala> groups.toDebugString
res2: String =
(8) ShuffledRDD[3] at groupBy at <console>:25 []
+-(8) MapPartitionsRDD[2] at groupBy at <console>:25 []
| MapPartitionsRDD[1] at map at <console>:25 []
| ParallelCollectionRDD[0] at parallelize at <console>:24 []
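Besides toDebugString, you can walk the same lineage programmatically: every RDD exposes its parents through RDD.dependencies, which is exactly the recursive structure toDebugString prints. A small sketch (the lineage helper below is not a Spark API, just an illustration):
```
import org.apache.spark.rdd.RDD

// recursively collect this RDD and all of its ancestors
def lineage(rdd: RDD[_]): Seq[RDD[_]] =
  rdd +: rdd.dependencies.flatMap(dep => lineage(dep.rdd))

lineage(groups).foreach(println)
// prints ShuffledRDD[3], MapPartitionsRDD[2], MapPartitionsRDD[1] and
// ParallelCollectionRDD[0] -- the same nodes shown by groups.toDebugString
```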
I would like some clarification about the DAG behaviour, and about how exactly the following job is handled:
val rdd = sc.parallelize(List(1 to 10).flatMap(x=>x).zipWithIndex,3)
.partitionBy(new HashPartitioner(4))
val rdd1 = sc.parallelize(List(1 to 10).flatMap(x=>x).zipWithIndex,2)
.partitionBy(new HashPartitioner(3))
val rdd2 = rdd.join(rdd1)
rdd2.collect()
This is the related rdd2.toDebugString:
(4) MapPartitionsRDD[6] at join at IntegrationStatusJob.scala:92 []
| MapPartitionsRDD[5] at join at IntegrationStatusJob.scala:92 []
| CoGroupedRDD[4] at join at IntegrationStatusJob.scala:92 []
| ShuffledRDD[1] at partitionBy at IntegrationStatusJob.scala:90 []
+-(3) ParallelCollectionRDD[0] at parallelize at IntegrationStatusJob.scala:90 []
+-(3) ShuffledRDD[3] at partitionBy at IntegrationStatusJob.scala:91 []
+-(2) ParallelCollectionRDD[2] at parallelize at IntegrationStatusJob.scala:91 []
This is the Spark UI image:
Looking at the toDebugString and at the Spark UI, if I understood correctly, in order to perform the join the DAG looks at which partitioner should be used, and because both RDDs are hash-partitioned, it chooses the partitioner with the greater number of partitions, i.e. rdd's partitioner.
Now, from the Spark UI, it seems that rdd's partitionBy and the join are performed in the same stage, so under these conditions, will the shuffle needed to perform the join be done on just one side? By "one side" I mean that only rdd1 will be shuffled, not both.
Is my assumption correct?
You're right. If the two RDDs are partitioned with different partitioners, Spark will pick one as the reference and repartition / shuffle only the other one.
If both have the same partitioner, there is no need for a shuffle.
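A minimal sketch of both cases with hypothetical pair RDDs (not the asker's job):
```
import org.apache.spark.HashPartitioner

val left  = sc.parallelize((1 to 10).map(i => (i, i))).partitionBy(new HashPartitioner(4))
val right = sc.parallelize((1 to 10).map(i => (i, i * 10))).partitionBy(new HashPartitioner(4))

// same partitioner and same number of partitions on both sides:
// the join adds no ShuffledRDD of its own; the only shuffles in the
// lineage come from the two partitionBy calls
println(left.join(right).toDebugString)

val other = sc.parallelize((1 to 10).map(i => (i, i))).partitionBy(new HashPartitioner(3))

// different partitioners: the 4-partition side is kept as the reference
// and only the 3-partition side is re-shuffled for the join
println(left.join(other).toDebugString)
```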
I have a Spark app that looks like this:
val conf = new SparkConf().setAppName("MyApp")
val sc = new SparkContext(conf)
val rdd1 = ...
rdd1.saveAsNewAPIHadoopDataset(output1)
val rdd2 = ...
rdd2.saveAsNewAPIHadoopDataset(output2)
val rdd3 = ...
rdd3.saveAsNewAPIHadoopDataset(output3)
Each call to saveAsNewAPIHadoopDataset blocks until it completes, and while some of my workers are doing IO, it would be nice if the job could continue running the next stages.
I tried to wrap each computation in a Future {} and await on all of them at the end, but ran into this issue: https://issues.apache.org/jira/browse/SPARK-13631
Is there a way in Spark to save to a Hadoop dataset such that the other stages get queued as well? FWIW, the Hadoop output configuration is the BigQuery connector (https://cloud.google.com/hadoop/bigquery-connector).
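For reference, this is roughly the Future-based pattern described above that runs into SPARK-13631 (rdd1/output1 etc. are the placeholders from the code above); it is shown only to make the question concrete, not as a recommended workaround:
```
import scala.concurrent.{Await, Future}
import scala.concurrent.duration.Duration
import scala.concurrent.ExecutionContext.Implicits.global

// submit the three save jobs concurrently so the scheduler can interleave
// their stages, then wait for all of them at the end
val saves = Seq(
  Future { rdd1.saveAsNewAPIHadoopDataset(output1) },
  Future { rdd2.saveAsNewAPIHadoopDataset(output2) },
  Future { rdd3.saveAsNewAPIHadoopDataset(output3) }
)
Await.result(Future.sequence(saves), Duration.Inf)
```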