Spark Performance issue - Writing partitions to S3 as individual files

I'm running a Spark job whose task is to scan a large file and split it into smaller files. The file is in JSON Lines format and I'm trying to partition it by a certain column (id) and save each partition as a separate file to S3. The file size is about 12 GB, but there are about 500,000 distinct values of id. The query is taking almost 15 hours. What can I do to improve performance? Is Spark a poor choice for such a task? Please note that I do have the liberty of making sure that the source has a fixed number of rows per id.
import sys
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from pyspark.sql import functions as F
from pyspark.sql.types import *
from pyspark.sql.functions import *
from pyspark.sql.window import Window
from awsglue.utils import getResolvedOptions
from awsglue.transforms import *
from pyspark.sql.functions import udf, substring, instr, locate
from datetime import datetime, timedelta
sc = SparkContext.getOrCreate()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
# Get parameters that were passed to the job
args = getResolvedOptions(sys.argv, ['INPUT_FOLDER', 'OUTPUT_FOLDER', 'ID_TYPE', 'DATASET_DATE'])
id_type = args["ID_TYPE"]
output_folder = "{}/{}/{}".format(args["OUTPUT_FOLDER"], id_type, args["DATASET_DATE"])
input_folder = "{}/{}/{}".format(args["INPUT_FOLDER"], id_type, args["DATASET_DATE"])
INS_SCHEMA = StructType([
    StructField("camera_capture_timestamp", StringType(), True),
    StructField(id_type, StringType(), True),
    StructField("image_uri", StringType(), True)
])
data = spark.read.format("json").load(input_folder, schema=INS_SCHEMA)
data = data.withColumn("fnsku_1", F.col("fnsku"))
data.coalesce(1).write.partitionBy(["fnsku_1"]).mode('append').json(output_folder)
I have tried repartition instead of coalesce too.
I'm using AWS Glue.

Please consider the following as one of the possible options. It would be awesome to see if it helped :)
First, if you coalesce, as @Lamanus said in the comments, you reduce the number of partitions and hence also the number of writer tasks, and therefore shuffle all the data to 1 task. It can be the first factor to improve.
To overcome the issue, i.e. write a file per partition and keep the parallelization level, you can change the logic to the following one:
import org.apache.spark.sql.SparkSession

object TestSoAnswer extends App {

  private val testSparkSession = SparkSession.builder()
    .appName("Demo groupBy and partitionBy").master("local[*]")
    .getOrCreate()
  import testSparkSession.implicits._

  // Input dataset with 5 partitions
  val dataset = testSparkSession.sparkContext.parallelize(Seq(
    TestData("a", 0), TestData("a", 1), TestData("b", 0), TestData("b", 1),
    TestData("c", 1), TestData("c", 2)
  ), 5).toDF("letter", "number")

  dataset.as[TestData].groupByKey(row => row.letter)
    .flatMapGroups {
      case (_, values) => values
    }.write.partitionBy("letter").mode("append").json("/tmp/test-parallel-write")
}

case class TestData(letter: String, number: Int)
How does it work?
First, the code performs a shuffle to collect all rows related to a specific key (the same key as used for the partitioning) into the same partitions. That way, it performs the write for all the rows belonging to a key at once. Some time ago I wrote a blog post about the partitionBy method. Roughly, internally it will sort the records in the given partition and later write them one by one into the file.
That way we get a plan like the one below, where only one shuffle, i.e. the time-consuming operation, is present:
== Physical Plan ==
*(2) SerializeFromObject [staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, knownnotnull(assertnotnull(input[0, TestData, true])).letter, true, false) AS letter#22, knownnotnull(assertnotnull(input[0, TestData, true])).number AS number#23]
+- MapGroups TestSoAnswer$$$Lambda$1236/295519299#55c50f52, value#18.toString, newInstance(class TestData), [value#18], [letter#3, number#4], obj#21: TestData
+- *(1) Sort [value#18 ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(value#18, 200), true, [id=#15]
+- AppendColumnsWithObject TestSoAnswer$$$Lambda$1234/1747367695#6df11e91, [staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, knownnotnull(assertnotnull(input[0, TestData, true])).letter, true, false) AS letter#3, knownnotnull(assertnotnull(input[0, TestData, true])).number AS number#4], [staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, input[0, java.lang.String, true], true, false) AS value#18]
+- Scan[obj#2]
The output of TestSoAnswer executed twice looks like this:
test-parallel-write % ls
_SUCCESS letter=a letter=b letter=c
test-parallel-write % ls letter=a
part-00170-68245d8b-b155-40ca-9b5c-d9fb746ac76c.c000.json part-00170-cd90d64f-43c6-4582-aae6-fe443b6617f4.c000.json
test-parallel-write % ls letter=b
part-00161-68245d8b-b155-40ca-9b5c-d9fb746ac76c.c000.json part-00161-cd90d64f-43c6-4582-aae6-fe443b6617f4.c000.json
test-parallel-write % ls letter=c
part-00122-68245d8b-b155-40ca-9b5c-d9fb746ac76c.c000.json part-00122-cd90d64f-43c6-4582-aae6-fe443b6617f4.c000.json
You can also control the number of records written per file with the spark.sql.files.maxRecordsPerFile configuration.
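For instance, in PySpark terms, a minimal sketch (the limit, the dataframe name df and the path are placeholders; the same setting is also available as a writer option):
# Cap output files at roughly 100 000 records each.
(df.write
   .option("maxRecordsPerFile", 100000)
   .partitionBy("letter")
   .mode("append")
   .json("/tmp/test-parallel-write"))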
Edit: I didn't see @mazaneicha's comment, but indeed, you can try with repartition("partitioning column")! It's even clearer than the grouping expression.
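Applied to the original PySpark/Glue job, that repartition suggestion could look roughly like the sketch below; it reuses the variable and column names from the question and is untested, so treat it as an illustration rather than a drop-in fix.
data = spark.read.format("json").load(input_folder, schema=INS_SCHEMA)
data = data.withColumn("fnsku_1", F.col("fnsku"))
# Repartition by the partitioning column so that each writer task owns whole ids,
# then let partitionBy split the output into one directory per id.
data.repartition("fnsku_1") \
    .write.partitionBy("fnsku_1") \
    .mode("append") \
    .json(output_folder)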
Best,
Bartosz.

If you're not going to use Spark for anything other than splitting the file into smaller versions of itself, then I would say Spark is a poor choice. You'd be better off doing this within AWS, following an approach such as the one given in this Stack Overflow post.
Assuming you have an EC2 instance available, you'd run something like this:
aws s3 cp s3://input_folder/12GB.json - | split -l 1000 - output.
aws s3 cp . s3://output_folder/ --recursive --exclude "*" --include "output.*"
If you're looking to do some further processing of the data in Spark, you're going to want to repartition the data into chunks between 128 MB and 1 GB. With the default (snappy) compression, you typically end up with about 20% of the original file size, i.e. roughly 12/5 ≈ 2.4 GB in your case. That works out to somewhere between ~3 partitions (at 1 GB each) and ~20 partitions (at 128 MB each), so:
data = spark.read.format("json").load(input_folder, schema=INS_SCHEMA)
dataPart = data.repartition(12)
This is not actually a particularly large data set for Spark and should not be this cumbersome to deal with.
Saving as Parquet gives you a good recovery point, and re-reading the data will be very fast. The total file size will be about 2.5 GB.
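A minimal sketch of that intermediate checkpoint, assuming a staging prefix of your choosing (the path below is hypothetical):
# Persist a compact, splittable Parquet copy first, then continue from it.
dataPart.write.mode("overwrite").parquet(output_folder + "/_staging")
data = spark.read.parquet(output_folder + "/_staging")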

Related

Does Spark SQL optimize lower() on both sides?

Say I have this pseudo code in Spark SQL, where t1 is a temp view built off of partitioned Parquet files in HDFS and t2 is a small lookup file used to filter the said temp view:
select t1.*
from t1
where exists (select *
              from t2
              where t1.id = t2.id
                and lower(t1.col) like lower(t2.pattern)) -- to mimic ilike functionality
Will the optimizer treat lower(t1.col) like lower(t2.pattern) as a case-insensitive match? Or will it run transformations on these columns before performing the match?
I don't have access to the DAG to see what exactly happens behind the scenes so I am asking here to see if this is a known/documented optimization trick.
I tried to reproduce that case using Scala and then called explain() to get the physical plan (I'm pretty sure SQL and Scala will produce the same physical plan because behind the scenes it's the same optimizer, Catalyst).
import spark.implicits._
val df1 = spark.sparkContext.parallelize(Seq(("Car", "car"), ("bike", "Rocket"), ("Bus", "BUS"), ("Auto", "Machine") )).toDF("c1", "c2")
df1.filter(lower(col("c1")).equalTo(lower(col("c2")))).explain()
== Physical Plan ==
*(1) Project [_1#3 AS c1#8, _2#4 AS c2#9]
+- *(1) Filter ((isnotnull(_1#3) AND isnotnull(_2#4)) AND (lower(_1#3) = lower(_2#4)))
+- *(1) SerializeFromObject [staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, knownnotnull(assertnotnull(input[0, scala.Tuple2, true]))._1, true, false, true) AS _1#3, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, knownnotnull(assertnotnull(input[0, scala.Tuple2, true]))._2, true, false, true) AS _2#4]
+- Scan[obj#2]
As you can see in the physical plan, it will call lower each time to compare the two values: lower(_1#3) = lower(_2#4).
By the way, I tried the same thing by joining two dataframes and then filtering on lower, and I got the same result.
I hope this answers your question.
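As a side note, you don't need the Spark UI to check this for the SQL version: you can call explain() on the query itself. A hedged sketch in PySpark, assuming t1 and t2 are registered as temp views as described in the question:
spark.sql("""
    select t1.*
    from t1
    where exists (select *
                  from t2
                  where t1.id = t2.id
                    and lower(t1.col) like lower(t2.pattern))
""").explain()  # prints the physical plan, including the lower(...) calls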

How to perform parallel computation on Spark Dataframe by row?

I have a collection of 300,000 points and I would like to compute the distance between each pair of them.
    id   x   y
0    0   1   0
1    1  28  76
…
Thus I do a Cartesian product between those points and filter so that I keep only one combination of each pair of points. Indeed, for my purpose the distance between points (0, 1) is the same as between (1, 0).
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType
import math
@udf(returnType=IntegerType())
def compute_distance(x1, y1, x2, y2):
    # Euclidean distance, truncated to an int to match the declared return type
    return int(math.sqrt(math.pow(x1 - x2, 2) + math.pow(y1 - y2, 2)))
columns = ['id','x', 'y']
data = [(0, 1, 0), (1, 28,76), (2, 33,42)]
spark = SparkSession\
.builder \
.appName('distance computation') \
.config('spark.sql.execution.arrow.pyspark.enabled', 'true') \
.config('spark.executor.memory', '2g') \
.master('local[20]') \
.getOrCreate()
rdd = spark.sparkContext.parallelize(data)
df = rdd.toDF(columns)
result = df.alias('a')\
.join(df.alias('b'),
F.array(*['a.id']) < F.array(*['b.id']))\
.withColumn('distance', compute_distance(F.col('a.x'), F.col('a.y'), F.col('b.x'), F.col('b.y')))
result.write.parquet('distance-between-points')
While that seems to work, the CPU usage for my latest task (parquet at NativeMethodAccessorImpl.java:0) did not go above 100%. Also, it took about a day to complete.
I would like to know if the withColumn operation is performed on multiple executors in order to achieve parallelism?
Is there a way to split the data in order to compute distance by batch and to store the result in one or multiple Parquet files?
Thanks for your insight.
I would like to know if the withColumn operation is performed on multiple executors in order to achieve parallelism?
Yes, assuming a correctly configured cluster, the dataframe will be partitioned across your cluster and the executors will work through the partitions in parallel running your UDF.
Is there a way to split the data in order to compute the distance in batches, in parallel, and to store the results in one or multiple Parquet files?
By default, the resulting dataframe will be partitioned across the cluster and written out as one Parquet file per partition. You can change that by repartitioning if required, but that will result in a shuffle and take longer.
I recommend the 'Level of Parallelism' section in the Learning Spark book for further reading.
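One extra note beyond the original answer: since the session config in the question already enables Arrow, a vectorized pandas_udf would usually be much faster than the row-at-a-time UDF. A minimal sketch (Spark 3.x type-hint style; treat it as an illustration, not a drop-in replacement):
import pandas as pd
from pyspark.sql import functions as F
from pyspark.sql.functions import pandas_udf

@pandas_udf("double")
def compute_distance_vec(x1: pd.Series, y1: pd.Series,
                         x2: pd.Series, y2: pd.Series) -> pd.Series:
    # Vectorized Euclidean distance computed over whole Arrow batches.
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

result = df.alias('a') \
    .join(df.alias('b'), F.col('a.id') < F.col('b.id')) \
    .withColumn('distance', compute_distance_vec(
        F.col('a.x'), F.col('a.y'), F.col('b.x'), F.col('b.y')))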

Cross Join in Apache Spark with dataset is very slow

I have posted this question on the Spark user forum but received no response, so I'm asking it here again.
We have a use case where we need to do a Cartesian join, and for some reason we are not able to get it to work with the Dataset API.
We have two datasets:
one dataset with 2 string columns, say c1 and c2. It is a small dataset with ~1 million records. The two columns are both strings of 32 characters, so it should be less than 500 MB.
We broadcast this dataset.
the other dataset is a little bigger, with ~10 million records
val ds1 = spark.read.format("csv").option("header", "true").load(<s3-location>).select("c1", "c2")
ds1.count
val ds2 = spark.read.format("csv").load(<s3-location>).toDF("c11", "c12", "c13", "c14", "c15", "ts")
ds2.count
ds2.crossJoin(broadcast(ds1)).filter($"c1" <= $"c11" && $"c11" <= $"c2").count
If I implement it using the RDD API, where I broadcast the data in ds1 and then filter the data in ds2, it works fine.
I have confirmed the broadcast is successful.
2019-02-14 23:11:55 INFO CodeGenerator:54 - Code generated in 10.469136 ms
2019-02-14 23:11:55 INFO TorrentBroadcast:54 - Started reading broadcast variable 29
2019-02-14 23:11:55 INFO TorrentBroadcast:54 - Reading broadcast variable 29 took 6 ms
2019-02-14 23:11:56 INFO CodeGenerator:54 - Code generated in 11.280087 ms
Query Plan:
== Physical Plan ==
BroadcastNestedLoopJoin BuildRight, Cross, ((c1#68 <= c11#13) && (c11#13 <= c2#69))
:- *Project []
: +- *Filter isnotnull(_c0#0)
: +- *FileScan csv [_c0#0,_c1#1,_c2#2,_c3#3,_c4#4,_c5#5] Batched: false, Format: CSV, Location: InMemoryFileIndex[], PartitionFilters: [], PushedFilters: [IsNotNull(_c0)], ReadSchema: struct<_c0:string,_c1:string,_c2:string,_c3:string,_c4:string,_c5:string>
+- BroadcastExchange IdentityBroadcastMode
+- *Project [c1#68, c2#69]
+- *Filter (isnotnull(c1#68) && isnotnull(c2#69))
+- *FileScan csv [c1#68,c2#69] Batched: false, Format: CSV, Location: InMemoryFileIndex[], PartitionFilters: [], PushedFilters: [IsNotNull(c1), IsNotNull(c2)], ReadSchema: struct
then the stage does not progress.
I updated the code to broadcast ds1 and then perform the join in mapPartitions over ds2.
val ranges = spark.read.format("csv").option("header", "true").load(<s3-location>).select("c1", "c2").collect
val rangesBC = sc.broadcast(ranges)
then used this rangesBC in the mapPartitions method to identify the range each row in ds2 belongs to, and this job completes in 3 hours, while the other job does not complete even after 24 hours. This kind of implies that the query optimizer is not doing what I want it to do.
What am I doing wrong? Any pointers will be helpful. Thank you!
I have run into this issue recently and found that Spark has a strange partitioning behavior when cross joining large dataframes. If your input dataframes contain a few million records, then the cross-joined dataframe has a number of partitions equal to the product of the input dataframes' partitions, that is
Partitions of crossJoinDF = (Partitions of ds1) * (Partitions of ds2).
If ds1 and ds2 each contain a few hundred partitions, then the cross-joined dataframe would have partitions in the range of ~10,000. These are way too many partitions, which results in excessive overhead in managing many small tasks, making any computation (in your case, the filter) on the cross-joined dataframe very slow to run.
So how do you make the computation faster? First check if this is indeed the issue for your problem:
scala> val crossJoinDF = ds2.crossJoin(ds1)
// This should return immediately because of Spark's lazy evaluation
scala> val crossJoinDFPartitions = crossJoinDF.rdd.partitions.size
Check the number of partitions of the cross-joined dataframe. If crossJoinDFPartitions > 10,000, then you do indeed have the same issue, i.e. the cross-joined dataframe has way too many partitions.
To make your operations on the cross-joined dataframe faster, reduce the number of partitions of the input DataFrames. For example:
scala> val ds1 = ds1.repartition(40)
scala> ds1.rdd.partitions.size
res80: Int = 40
scala> val ds2 = ds2.repartition(40)
scala> ds2.rdd.partitions.size
res81: Int = 40
scala> val crossJoinDF = ds1.crossJoin(ds2)
scala> crossJoinDF.rdd.partitions.size
res82: Int = 1600
scala> crossJoinDF.count()
The count() action should result in the execution of the cross join. The count should now return in a reasonable amount of time. The exact number of partitions you choose would depend on the number of cores available in your cluster.
The key takeaway here is to make sure that your cross-joined dataframe has a reasonable number of partitions (<< 10,000). You might also find this post useful, which explains this issue in more detail.
I do not know if you are on bare metal, or on AWS with spot, on-demand or dedicated instances, or on VMs with Azure, et al. My take:
Appreciate that 10M x 1M is a lot of work, even if the .filter is applied to the resulting cross join. It will take some time. What were your expectations?
Spark is all about scaling in a linear way in general.
Data centers with VMs do not have dedicated hardware and hence do not have the fastest performance.
Then:
I ran 10M x 100K on Databricks Community Edition in a simulated set-up with 0.86 cores and 6 GB for the Driver. That ran in 17 mins.
I ran the 10M x 1M from your example on a 4-node AWS EMR non-dedicated cluster (with some EMR oddities like reserving the Driver on a valuable instance!) and it took 3 hours for partial completion.
So, to answer your question:
- You did nothing wrong.
- You just need more resources allowing more parallelisation.
- I did add some explicit partitioning, as you can see.

pyspark df.count() taking a very long time (or not working at all)

I have the following code that is simply doing some joins and then outputting the data:
from pyspark.sql.functions import udf, struct
from pyspark import SparkContext
from pyspark.sql import SparkSession
from pyspark import SparkConf
from pyspark.sql.functions import broadcast
conf = SparkConf()
conf.set('spark.logConf', 'true')
spark = SparkSession \
.builder \
.config(conf=conf) \
.appName("Generate Parameters") \
.getOrCreate()
spark.sparkContext.setLogLevel("OFF")
df1 = spark.read.parquet("/location/mydata")
df1 = df1.select([c for c in df1.columns if c in ['sender','receiver','ccc','cc','pr']])
df2 = spark.read.csv("/location/mydata2")
cond1 = [(df1.sender == df2._c1) | (df1.receiver == df2._c1)]
df3 = df1.join(broadcast(df2), cond1)
df3 = df3.select([c for c in df3.columns if c in['sender','receiver','ccc','cc','pr']])
df1 is 1,862,412,799 rows and df2 is 8679 rows
when I then call;
df3.count()
It just seems to sit there with the following
[Stage 33:> (0 + 200) / 200]
Assumptions for this answer:
df1 is the dataframe containing 1,862,412,799 rows.
df2 is the dataframe containing 8679 rows.
df1.count() returns a value quickly (as per your comment)
There may be three areas where the slowdown is occurring:
The imbalance of data sizes (1,862,412,799 vs 8679):
Although Spark is amazing at handling large quantities of data, it doesn't deal well with very small sets. If not specifically set, Spark attempts to partition your data into multiple parts, and on small files this number can be excessively high in comparison to the actual amount of data each part holds. I recommend trying the following and seeing if it improves speed.
df2 = spark.read.csv("/location/mydata2")
df2 = df2.repartition(2)
Note: The number 2 here is just an estimate, based on how many partitions would suit the number of rows in that set.
Broadcast Cost:
The delay in the count may be due to the actual broadcast step. Your data is being collected and copied to every node within your cluster before the join, and all of this happens once count() is called. Depending on your infrastructure, this could take some time. If the above repartition doesn't work, try removing the broadcast call. If that ends up being the source of the delay, it may be good to confirm that there are no bottlenecks within your cluster, or whether the broadcast is necessary at all.
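For reference, dropping the hint is a one-line change to the code from the question:
# Same join condition, but let Spark pick the join strategy itself.
df3 = df1.join(df2, cond1)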
Unexpected Merge Explosion
I am not implying that this is the issue, but it is always good to check that the merge condition you have set is not creating unexpected duplicates. It is possible that this is happening and creating the slowdown you are experiencing when actioning the processing of df3.
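A quick, rough way to check for such an explosion, reusing the column name from the join condition in the question:
# If any lookup key appears many times in df2, the join multiplies rows in df3.
df2.groupBy('_c1').count().filter('count > 1').orderBy('count', ascending=False).show()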

Spark: does DataFrameWriter have to be a blocking step?

I have data partitioned by a column (say, id) and I have this dataset saved some place. Every now and then, I get a smaller incremental dataset with the same structure and I essentially have to upsert my existing data based on my id with a date column deciding which record is the newest. (I don't write it in the same place, I save the whole new blob some place else.)
There are two ways I've been doing this: either grouping in a window and taking the row with the highest date, or via dropDuplicates, relying on the fact that my data is ordered. (I'd rather use the former, but I've been trying various things.)
The one big issue is that each id group is not negligible (a few gigabytes), so I was hoping Spark (with n workers) would understand that since I'm reading id-partitioned data and writing id-partitioned data, it would process n ids at once and continually write them to my storage, taking new ids as it's finished with the previous ones.
Unfortunately, what seems to be happening, is that Spark processes all my id groups in one big job (and spills to disk, naturally) before writing anything to disk. It gets really really slow.
The question is thus: Is there a way to force Spark to process these groups and write them as soon as they're ready? Again, they are partitioned, so no other task will affect my partition.
Here's a bit of code that reproduces the problem:
# generate dummy data first
import random
from typing import List
from datetime import datetime, timedelta
from pyspark.sql.functions import desc, col, row_number
from pyspark.sql.window import Window
from pyspark.sql.dataframe import DataFrame
def gen_data(n: int) -> List[tuple]:
    names = 'foo, bar, baz, bak'.split(', ')
    return [(random.randint(1, 25), random.choice(names), datetime.today() - timedelta(days=random.randint(1, 100)))
            for j in range(n)]

def get_df(n: int) -> DataFrame:
    return spark.createDataFrame(gen_data(n), ['id', 'name', 'date'])
n = 10_000
df = get_df(n)
dd = get_df(n*10)
df.write.mode('overwrite').partitionBy('id').parquet('outputs/first')
dd.write.mode('overwrite').partitionBy('id').parquet('outputs/second')
d1 and d2 are both partitioned by id and so is the resulting dataset, but it's not reflected in the plan:
w = Window().partitionBy('id').orderBy(desc('date'))
d1 = spark.read.parquet('outputs/first')
d2 = spark.read.parquet('outputs/second')
d1.union(d2).\
withColumn('rn', row_number().over(w)).filter(col('rn') == 1).drop('rn').\
write.mode('overwrite').partitionBy('id').parquet('outputs/window')
I also tried to explicitly state the partition key (otherwise the code is the same):
d1 = spark.read.parquet('outputs/first').repartition('id')
d2 = spark.read.parquet('outputs/second').repartition('id')
d1.union(d2).\
withColumn('rn', row_number().over(w)).filter(col('rn') == 1).drop('rn').\
write.mode('overwrite').partitionBy('id').parquet('outputs/window2')
Here's the same using dropDuplicates:
d1 = spark.read.parquet('outputs/first')
d2 = spark.read.parquet('outputs/second')
d1.union(d2).\
dropDuplicates(subset=['id']).\
write.mode('overwrite').partitionBy('id').parquet('outputs/window3')
I also tried emphasising that my union is still partitioned using something like this, but again to no avail:
df.union(d2).repartition('id').\
    withColumn...
I could list all partitions (ids), load them one by one while leveraging partition pruning, deduplicating and writing. But that seems like extra boilerplate that shouldn't be necessary. Or is it possible to do this via foreach?
Update (2018-03-27):
Turns out, the information about partitioning is indeed present in the window functionality in one way or another, because when I filter at the very end, partition pruning on the inputs does take place:
d1 = spark.read.parquet('outputs/first')
d2 = spark.read.parquet('outputs/second')
w = Window().partitionBy('id', 'name').orderBy(desc('date'))
d1.union(d2).withColumn('rn', row_number().over(w)).filter(col('rn') == 1).filter(col('id') == 12).explain(True)
Results in
== Physical Plan ==
*(4) Filter (isnotnull(rn#387) && (rn#387 = 1))
+- Window [row_number() windowspecdefinition(id#187, name#185, date#186 DESC NULLS LAST, specifiedwindowframe(RowFrame, unboundedpreceding$(), currentrow$())) AS rn#387], [id#187, name#185], [date#186 DESC NULLS LAST]
+- *(3) Sort [id#187 ASC NULLS FIRST, name#185 ASC NULLS FIRST, date#186 DESC NULLS LAST], false, 0
+- Exchange hashpartitioning(id#187, name#185, 200)
+- Union
:- *(1) FileScan parquet [name#185,date#186,id#187] Batched: true, Format: Parquet, Location: InMemoryFileIndex[file:/.../spark_perf_partitions/outputs..., PartitionCount: 1, PartitionFilters: [isnotnull(id#187), (id#187 = 12)], PushedFilters: [], ReadSchema: struct<name:string,date:timestamp>
+- *(2) FileScan parquet [name#191,date#192,id#193] Batched: true, Format: Parquet, Location: InMemoryFileIndex[file:/.../spark_perf_partitions/outputs..., PartitionCount: 1, PartitionFilters: [isnotnull(id#193), (id#193 = 12)], PushedFilters: [], ReadSchema: struct<name:string,date:timestamp>
So it indeed only reads two partitions, one from each input. So I could, instead of looping, just run the code with one filter at a time (the filter sitting between the window function and the .write). Tedious and not very practical, but potentially faster than spilling everything to disk.
Yes, this is exactly how Spark partitioning works. It computes the whole lineage and then writes in a partitioned form to disk. There are several advantages to that. One of the important reasons is parallel writes: when the computation is done, Spark can write all the partitions in parallel to disk. This significantly improves performance.
If you want to write as and when the data is ready, you might as well filter the dataframe by the different ids, run the processing in a loop, and write each result. However, in my experience this approach requires several iterations over the same dataframe, resulting in a huge performance loss.
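For completeness, a rough sketch of that per-id loop, reusing the names from the code above; the output path and the append write mode are assumptions, and each iteration benefits from partition pruning on the inputs:
# Deduplicate and write one id at a time instead of one big job.
ids = [r.id for r in d1.select('id').union(d2.select('id')).distinct().collect()]
for i in ids:
    (d1.union(d2)
       .filter(col('id') == i)
       .withColumn('rn', row_number().over(w))
       .filter(col('rn') == 1).drop('rn')
       .write.mode('append').partitionBy('id')
       .parquet('outputs/looped'))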
