Spark: Poor performance on distributed system. How to improve? - apache-spark

I wrote a simple Spark program and want to deploy it to the distributed servers. It is pretty simple:
obtain data -> arrange data -> train a model -> reapply it to see the training result.
The input data is just 10K rows, with 3 features.
I first ran it on my local machine, using "local[*]". It takes only about 3 minutes.
When I deploy it to a cluster, it runs extremely slowly: half an hour and still not finished. It becomes very slow at the training stage.
I am curious whether I did something wrong. Please help me check. I use Spark 1.6.1.
I submit:
spark-submit --packages com.databricks:spark-csv_2.11:1.5.0 orderprediction_2.11-1.0.jar --driver-cores 1 --driver-memory 4g --executor-cores 8 --executor-memory 4g
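Side note: spark-submit treats everything after the application JAR as arguments to the application itself, so in the command as written the --driver-cores, --driver-memory, --executor-cores and --executor-memory flags are passed to the program rather than to spark-submit. A sketch of the same command with the flags moved before the JAR:
spark-submit \
  --packages com.databricks:spark-csv_2.11:1.5.0 \
  --driver-cores 1 \
  --driver-memory 4g \
  --executor-cores 8 \
  --executor-memory 4g \
  orderprediction_2.11-1.0.jar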
The code is here:
import org.apache.log4j.{Level, Logger}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.regression.RandomForestRegressor
import org.apache.spark.sql.types.DataTypes

def main(args: Array[String]) {
  // Set the log level to only print errors
  Logger.getLogger("org").setLevel(Level.ERROR)

  val conf = new SparkConf()
    .setAppName("My Prediction")
    //.setMaster("local[*]")
  val sc = new SparkContext(conf)
  val sqlContext = new org.apache.spark.sql.SQLContext(sc)

  // Load the tab-separated input with a header row and an inferred schema
  val data = sqlContext.read
    .option("header", "true")
    .option("delimiter", "\t")
    .format("com.databricks.spark.csv")
    .option("inferSchema", "true")
    .load("mydata.txt")
  data.printSchema()
  data.show()

  val dataDF = data.toDF().filter("clicks >= 10")
  dataDF.show()

  // Assemble the three feature columns into a single vector column
  val assembler = new VectorAssembler()
    .setInputCols(Array("feature1", "feature2", "feature3"))
    .setOutputCol("features")
  val trainset = assembler.transform(dataDF).select("target", "features")
  trainset.printSchema()

  val trainset2 = trainset.withColumnRenamed("target", "label")
  trainset2.printSchema()
  val trainset3 = trainset2.withColumn("label", trainset2.col("label").cast(DataTypes.DoubleType))
  trainset3.cache() // cache data into memory
  trainset3.printSchema()
  trainset3.show()

  // Train a RandomForest model.
  println("training Random Forest")
  val rf = new RandomForestRegressor()
    .setLabelCol("label")
    .setFeaturesCol("features")
    .setNumTrees(1000)
  val rfmodel = rf.fit(trainset3)

  println("prediction")
  val result = rfmodel.transform(trainset3)
  result.show()
}
Update: After investigation, I found it is stuck at
collectAsMap at RandomForest.scala:525
It has already spent 1.1 hours on this line and is still not finished. The data, I believe, is only a few megabytes.

You are building a RandomForest out of 1000 trees, which means training 1000 tree instances.
In the code, collectAsMap is the first action, while all the rest are transformations (which are lazily evaluated). So when you see it hanging at that line, it is because all the maps, flatMaps, filters, groupBys, etc. are being evaluated there.
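As a rough illustration (not part of the original answer, and the parameter values are only assumptions): a much smaller forest with bounded depth is usually enough for a 10K-row, 3-feature dataset, and keeps the fit in the range of minutes:
// Illustrative only: a smaller forest with bounded depth.
// The exact values (50 trees, depth 10) are assumptions, not recommendations from the answer.
val rfSmall = new RandomForestRegressor()
  .setLabelCol("label")
  .setFeaturesCol("features")
  .setNumTrees(50)   // far fewer trees than 1000
  .setMaxDepth(10)   // limit tree depth
  .setMaxBins(32)    // default binning for continuous features

val Array(train, test) = trainset3.randomSplit(Array(0.8, 0.2), seed = 42L)
val model = rfSmall.fit(train)
model.transform(test).select("label", "prediction").show(10)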

Related

What if OOM happens for the complete output mode in spark structured streaming

I am new to Spark Structured Streaming and still learning it.
I have the following code, which uses complete as the output mode:
import java.util.Date
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.Trigger
import org.apache.spark.sql.types.StructType
object StreamingWordCount {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession
      .builder
      .appName("StreamingWordCount")
      .config("spark.sql.shuffle.partitions", 1)
      .master("local[2]")
      .getOrCreate()
    import spark.implicits._

    val lines = spark
      .readStream
      .schema(new StructType().add("value", "string"))
      .option("maxFilesPerTrigger", 1)
      .text("file:///" + data_path)
      .as[String]

    val wordCounts = lines.flatMap(_.split(" ")).groupBy("value").count()

    val query = wordCounts.writeStream
      .queryName("t")
      .outputMode("complete")
      .format("memory")
      .start()

    while (true) {
      spark.sql("select * from t").show(truncate = false)
      println(new Date())
      Thread.sleep(1000)
    }
    query.awaitTermination()
  }
}
A quick question: over time the Spark runtime accumulates more and more word-and-count state, so an OOM should happen at some point.
How is this kind of scenario handled in practice?
The memory sink should be used only for debugging purposes on low data volumes, as the entire output is collected and stored in the driver's memory as an in-memory table.
So if an OOM error occurs, the driver will crash and all the state maintained in the driver's memory will be lost.
The same applies to the console sink as well.
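One common pattern for keeping state bounded in production (a minimal sketch of my own, not from the answer; the source and column names are assumptions) is to aggregate on an event-time window with a watermark and write with the update output mode to a durable sink, so that state older than the watermark can be dropped:
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object BoundedWordCount {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("BoundedWordCount")
      .master("local[2]")
      .getOrCreate()
    import spark.implicits._

    // Hypothetical source: the built-in "rate" source, which emits (timestamp, value) rows.
    val events = spark.readStream
      .format("rate")
      .load()
      .select($"timestamp".as("ts"), $"value".cast("string").as("word"))

    val counts = events
      .withWatermark("ts", "10 minutes")              // state older than the watermark can be dropped
      .groupBy(window($"ts", "5 minutes"), $"word")
      .count()

    // "update" mode emits only changed rows, so no complete result table is kept for the sink.
    val query = counts.writeStream
      .outputMode("update")
      .format("console")
      .start()

    query.awaitTermination()
  }
}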

In Spark, caching a DataFrame influences execution time of previous stages?

I am running a Spark (2.0.1) job with multiple stages. I noticed that when I insert a cache() in one of the later stages, it changes the execution time of earlier stages. Why? I've never encountered such a case in the literature when reading about caching.
Here is my DAG with cache():
And here is my DAG without cache(). All remaining code is the same.
I have a cache() after a sort merge join in Stage 10. If the cache() is used in Stage 10, then Stage 8 takes nearly twice as long (20 min vs 11 min) as when there is no cache() in Stage 10. Why?
My Stage 8 contains two broadcast joins with small DataFrames and a shuffle on a large DataFrame in preparation for the merge join. Stages 8 and 9 are independent and operate on two different DataFrames.
Let me know if you need more details to answer this question.
UPDATE 8/2/2018
Here are the details of my Spark script:
I am running my job on a cluster via spark-submit. Here is my spark session.
val spark = SparkSession.builder
  .appName("myJob")
  .config("spark.executor.cores", 5)
  .config("spark.driver.memory", "300g")
  .config("spark.executor.memory", "15g")
  .getOrCreate()
This creates a job with 21 executors with 5 CPU cores each.
Load 4 DataFrames from parquet files:
val dfT = spark.read.format("parquet").load(filePath1) // 3 TB in 3185 partitions
val dfO = spark.read.format("parquet").load(filePath2) // ~700 MB
val dfF = spark.read.format("parquet").load(filePath3) // ~800 MB
val dfP = spark.read.format("parquet").load(filePath4) // 38 GB
Preprocessing on each of the DataFrames consists of column selection, dropDuplicates, and possibly a filter, like this:
val dfT1 = dfT.filter(...)
val dfO1 = dfO.select(columnsToSelect2).dropDuplicates(Array("someColumn2"))
val dfF1 = dfF.select(columnsToSelect3).dropDuplicates(Array("someColumn3"))
val dfP1 = dfP.select(columnsToSelect4).dropDuplicates(Array("someColumn4"))
Then I left-broadcast-join the first three DataFrames together:
val dfTO = dfT1.join(broadcast(dfO1), Seq("someColumn5"), "left_outer")
val dfTOF = dfTO.join(broadcast(dfF1), Seq("someColumn6"), "left_outer")
Since dfP1 is large, it needs a merge join, which I can't afford to do yet. I need to limit the size of dfTOF first. To do that I add a new timestamp column with withColumn and a UDF that transforms a string into a timestamp:
val dfTOF1 = dfTOF.withColumn("TransactionTimestamp", myStringToTimestampUDF)
Next I filter on the new timestamp column:
val dfTrain = dfTOF1.filter(dfTOF1("TransactionTimestamp").between("2016-01-01 00:00:00+000", "2016-05-30 00:00:00+000"))
Now I am joining the last DataFrame:
val dfTrain2 = dfTrain.join(dfP1, Seq("someColumn7"), "left_outer")
And lastly, the column selection with the cache() that is puzzling me:
val dfTrain3 = dfTrain.select("columnsToSelect5").cache()
dfTrain3.agg(sum(col("someColumn7"))).show()
It looks like the cache() is useless here, but there will be some further processing and modelling of the DataFrame, and the cache() will be necessary.
Should I give more details? Would you like to see the execution plan for dfTrain3?
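For completeness, a small sketch of how one can inspect what the optimizer plans with and without the cache (my own illustration, reusing the placeholder names from the question); the cached plan will contain InMemoryRelation / InMemoryTableScan nodes marking the cache boundary:
// Physical plan without the cache:
val uncached = dfTrain.select("columnsToSelect5")
uncached.agg(sum(col("someColumn7"))).explain(true)

// Physical plan with the cache; look for InMemoryRelation / InMemoryTableScan nodes.
val cached = dfTrain.select("columnsToSelect5").cache()
cached.agg(sum(col("someColumn7"))).explain(true)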

Apache Spark or Spark-Cassandra-Connector doesn't look like it is reading multiple partitions in parallel?

Apache Spark or the Spark-Cassandra-Connector doesn't look like it is reading multiple partitions in parallel.
Here is my code, using spark-shell:
import org.apache.spark.sql._
import org.apache.spark.sql.types.StringType
spark.sql("""CREATE TEMPORARY VIEW hello USING org.apache.spark.sql.cassandra OPTIONS (table "hello", keyspace "db", cluster "Test Cluster", pushdown "true")""")
val df = spark.sql("SELECT test from hello")
val df2 = df.select(df("test").cast(StringType).as("test"))
val rdd = df2.rdd.map { case Row(j: String) => j }
val df4 = spark.read.json(rdd) // This line takes forever
I have about 700 million rows, each row about 1 KB, and this line
val df4 = spark.read.json(rdd) takes forever, as I get the following output:
[Stage 1:==========> (4866 + 24) / 25256]
so at this rate it will probably take roughly 3 hours.
I measured the network throughput of the Spark worker nodes using iftop and it is about 75 MB/s (megabytes per second), which is pretty good, but I am not sure whether it is reading partitions in parallel. Any ideas on how to make it faster?
Here is my DAG.
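As an aside (my own note, not an answer from the page): spark.read.json(rdd) makes an extra full pass over the data just to infer the schema. Supplying the schema explicitly skips that pass; a minimal sketch, assuming the JSON structure is known (the field names below are made up):
import org.apache.spark.sql.types._

// Hypothetical schema for the JSON documents stored in the "test" column.
val jsonSchema = StructType(Seq(
  StructField("id", StringType),
  StructField("amount", DoubleType)
))

// Supplying the schema skips the full inference scan that spark.read.json(rdd) performs.
val df4 = spark.read.schema(jsonSchema).json(rdd)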

Spark streaming: batch interval vs window

I have a Spark Streaming application which consumes Kafka messages, and I want to process all messages that arrived in the last 10 minutes together.
It looks like there are two approaches to get the job done:
val ssc = new StreamingContext(new SparkConf(), Minutes(10))
val dstream = ....
and
val ssc = new StreamingContext(new SparkConf(), Seconds(1))
val dstream = ....
dstream.window(Minutes(10), Minutes(10))
and I just want to clarify whether there is any performance difference between them.

Saving multiple hadoop datasets concurrently in Spark

I have a Spark app that looks like this:
val conf = new SparkConf().setAppName("MyApp")
val sc = new SparkContext(conf)
val rdd1 = ...
rdd1.saveAsNewAPIHadoopDataset(output1)
val rdd2 = ...
rdd2.saveAsNewAPIHadoopDataset(output2)
val rdd3 = ...
rdd3.saveAsNewAPIHadoopDataset(output3)
The calls to saveAsNewAPIHadoopDataset are blocking; while some of my workers are doing IO, it would be nice if the job continued running the next stages.
I tried to wrap each computation in a Future {} and await on all of them at the end, but ran into this issue: https://issues.apache.org/jira/browse/SPARK-13631
Is there a way in Spark to save to a Hadoop dataset that doesn't block the next stages from running? FWIW, the Hadoop output configuration is the BigQuery connector (https://cloud.google.com/hadoop/bigquery-connector).
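For reference, a minimal sketch of the Future-wrapping pattern described above, using a dedicated thread pool so the three save jobs are submitted to Spark concurrently (this is the approach that hit SPARK-13631 for the asker, so it is shown only to make the setup concrete; rdd1..rdd3 and output1..output3 are the names from the snippet):
import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration.Duration

// Dedicated thread pool so the three save jobs can run as concurrent Spark jobs.
implicit val ec: ExecutionContext =
  ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(3))

val saves = Seq(
  Future { rdd1.saveAsNewAPIHadoopDataset(output1) },
  Future { rdd2.saveAsNewAPIHadoopDataset(output2) },
  Future { rdd3.saveAsNewAPIHadoopDataset(output3) }
)

// Block until all three Hadoop datasets have been written.
Await.result(Future.sequence(saves), Duration.Inf)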
