The following code is pretty slow.
Is there a way of creating multiple columns at once over the same window, so Spark does not need to partition and order the data multiple times?
from pyspark.sql import Window
from pyspark.sql import functions as F

w = Window.partitionBy("k").orderBy("t")
df = df.withColumn("a", F.last("a", True).over(w))
df = df.withColumn("b", F.last("b", True).over(w))
df = df.withColumn("c", F.last("c", True).over(w))
...
I'm not sure that Spark partitions and reorders the data several times, since you use the same window definition consecutively. However, .select is usually a better alternative than chained .withColumn calls.
df = df.select(
    "*",
    F.last("a", True).over(w).alias("a"),
    F.last("b", True).over(w).alias("b"),
    F.last("c", True).over(w).alias("c"),
)
To find out whether partitioning and ordering are done several times, you need to analyse the output of df.explain().
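For example (a sketch reusing the w and df from the question; the alias names are just illustrative), a single Window node fed by one Exchange/Sort in the physical plan means the data is partitioned and ordered only once:
df.select(
    "*",
    F.last("a", True).over(w).alias("a_filled"),
    F.last("b", True).over(w).alias("b_filled"),
).explain()
# look for one "Window [...]" operator preceded by a single Exchange/Sort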
You don't have to generate one column at a time; use a list comprehension. Code below:
new=['a','b','c']
df = df.select(
    "*", *[F.last(x, True).over(w).alias(x) for x in new]
)
I'm trying to merge historical and incremental data. As part of the incremental data, I'm getting deletes. Below is the case.
historical data - 100 records ( 20 columns, id is the key column)
incremental data - 10 records ( 20 columns, id is the key column)
Out of the 10 records in the incremental data, only 5 will match the historical data.
Now I want 100 records in the final dataframe, of which 95 records belong to the historical data and 5 records belong to the incremental data (wherever the id column matches).
Update timestamp field is available in both historical and incremental data.
Below is the approach I tried.
DF1 - Historical Data
DF2 - Incremental Delete Dataset
DF3 = DF1 LEFTANTIJOIN DF2
DF4 = DF2 INNERJOIN DF1
DF5 = DF3 UNION DF4
However, I observed that this has a lot of performance issues, as I'm running these joins on billions of records. Is there a better way to do this?
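For reference, the steps above in PySpark look roughly like this (a sketch; the frame and column names are assumed from the description):
# DF3: historical rows whose id is NOT present in the incremental set
df3 = df1.join(df2.select("id"), "id", "left_anti")
# DF4: incremental rows whose id IS present in the historical data (keeps only df2's columns)
df4 = df2.join(df1.select("id"), "id", "inner")
# DF5: the merged result (95 historical + 5 incremental in the example); unionByName needs Spark 2.3+
df5 = df3.unionByName(df4)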
You can use the cogroup operator combined with a user-defined function to construct the different variations of the join.
Suppose we have these two RDDs as an example :
visits = sc.parallelize([("h", "1.2.3.4"), ("a", "3.4.5.6"), ("h","1.3.3.1")] )
pageNames = sc.parallelize([("h", "Home"), ("a", "About"), ("o", "Other")])
cg = visits.cogroup(pageNames).map(lambda x :(x[0], ( list(x[1][0]), list(x[1][1]))))
You can implement an inner join as follows:
innerjoin = cg.flatMap(lambda x: J(x))
Where J is defined as follows:
def J(x):
    j = []
    k = x[0]
    if x[1][0] != [] and x[1][1] != []:
        for l in x[1][0]:
            for r in x[1][1]:
                j.append((k, (l, r)))
    return j
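With the sample RDDs above, collecting the inner join gives the three matching pairs (ordering may vary):
print(innerjoin.collect())
# e.g. [('h', ('1.2.3.4', 'Home')), ('h', ('1.3.3.1', 'Home')), ('a', ('3.4.5.6', 'About'))]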
For a right outer join, for example, you just need to change the J function to an roJ function defined as follows:
def roJ(x):
    j = []
    k = x[0]
    if x[1][0] != [] and x[1][1] != []:
        for l in x[1][0]:
            for r in x[1][1]:
                j.append((k, (l, r)))
    elif x[1][1] != []:
        for r in x[1][1]:
            j.append((k, (None, r)))
    return j
And call it like so:
rightouterjoin = cg.flatMap(lambda x: roJ(x))
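With the same sample data, the right outer join also keeps the unmatched page 'o' (ordering may vary):
print(rightouterjoin.collect())
# e.g. [('h', ('1.2.3.4', 'Home')), ('h', ('1.3.3.1', 'Home')),
#       ('a', ('3.4.5.6', 'About')), ('o', (None, 'Other'))]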
And so on for any other type of join you'd wish to implement.
Performance issues are not just related to the size of your data. They depend on many other parameters, such as the keys you use for partitioning, the sizes of your partitioned files, and the cluster configuration your job is running on. I would recommend going through the official documentation on tuning Spark jobs and making the necessary changes:
https://spark.apache.org/docs/latest/tuning.html
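For illustration only (the frame names and numbers below are assumptions, not taken from the question), typical first knobs are the shuffle partition count, pre-partitioning the large side by the join key, and broadcasting the small side if it fits in memory:
from pyspark.sql import functions as F

spark.conf.set("spark.sql.shuffle.partitions", "2000")       # size this to your data and cluster
historical_df = historical_df.repartition("id")              # partition the big side by the join key
result = historical_df.join(F.broadcast(incr_df), "id", "left_anti")  # hint: broadcast the small side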
Below is the approach I used.
historical_data.as("a").join(
    incr_data.as("b"),
    $"a.id" === $"b.id", "full")
  .select(historical_data.columns.map(f => expr(s"""case when a.id = b.id then b.${f} else a.${f} end as $f""")): _*)
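For readers working in PySpark, the same full-outer-join idea might look roughly like this (a sketch; the frame names are taken from the snippet above):
from pyspark.sql import functions as F

a = historical_data.alias("a")
b = incr_data.alias("b")
merged = a.join(b, F.col("a.id") == F.col("b.id"), "full").select(
    [F.when(F.col("a.id") == F.col("b.id"), F.col("b." + c))
      .otherwise(F.col("a." + c))
      .alias(c)
     for c in historical_data.columns]
)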
I have a dataframe with time-series data and I am trying to add a lot of moving average columns to it with different windows of various ranges. When I do this column by column, results are pretty slow.
I have tried to just pile the withColumn calls until I have all of them.
Pseudo code:
from pyspark.sql import Window
from pyspark.sql.functions import col
import pyspark.sql.functions as pysparkSqlFunctions

## working from a data frame with 12 columns:
## - key as a String
## - time as a DateTime
## - col_{1:10} as numeric values
window_1h = Window.partitionBy("key") \
                  .orderBy(col("time").cast("long")) \
                  .rangeBetween(-3600, 0)

window_2h = Window.partitionBy("key") \
                  .orderBy(col("time").cast("long")) \
                  .rangeBetween(-7200, 0)
df = df.withColumn("col1_1h", pysparkSqlFunctions.avg("col_1").over(window_1h))
df = df.withColumn("col1_2h", pysparkSqlFunctions.avg("col_1").over(window_2h))
df = df.withColumn("col2_1h", pysparkSqlFunctions.avg("col_2").over(window_1h))
df = df.withColumn("col2_2h", pysparkSqlFunctions.avg("col_2").over(window_2h))
What I would like is the ability to add all 4 columns (or many more) in one call, hopefully traversing the data only once for better performance.
I prefer to import the functions library as F, as it looks neater and it is the standard alias used in the official Spark documentation.
The star string, '*', captures all the current columns within the dataframe. Alternatively, you could replace the star string with *df.columns; here the star unpacks the list into separate arguments for the select method.
from pyspark.sql import functions as F
df = df.select(
    "*",
    F.avg("col_1").over(window_1h).alias("col1_1h"),
    F.avg("col_1").over(window_2h).alias("col1_2h"),
    F.avg("col_2").over(window_1h).alias("col2_1h"),
    F.avg("col_2").over(window_2h).alias("col2_2h"),
)
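With many columns and windows, the same pattern can be generated rather than written out by hand (a sketch, assuming the two windows defined in the question and columns named col_1 ... col_10):
windows = {"1h": window_1h, "2h": window_2h}
df = df.select(
    "*",
    *[F.avg(f"col_{i}").over(win).alias(f"col{i}_{label}")
      for i in range(1, 11)
      for label, win in windows.items()],
)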
I have the following code:
val df_in = sqlcontext.read.json(jsonFile) // the file resides in hdfs
//some operations in here to create df as df_in with two more columns "terms1" and "terms2"
val intersectUDF = udf( (seq1: Seq[String], seq2: Seq[String]) => { seq1 intersect seq2 } ) // intersection of two sequences
val symmDiffUDF = udf( (seq1: Seq[String], seq2: Seq[String]) => { (seq1 diff seq2) ++ (seq2 diff seq1) } ) // symmetric difference of two sequences
val df1 = (df.withColumn("termsInt", intersectUDF(df("terms1"), df("terms2")))
             .withColumn("termsDiff", symmDiffUDF(df("terms1"), df("terms2")))
.where( size(col("termsInt")) >0 && size(col("termsDiff")) > 0 && size(col("termsDiff")) <= 2 )
.cache()
) // add the intersection and difference columns and filter the resulting DF
df1.show()
df1.count()
The app works properly and fast up to the show(), but the count() step creates 40000 tasks.
My understanding is that df1.show() should trigger the full creation of df1, so df1.count() should be very fast. What am I missing here? Why is count() that slow?
Thank you very much in advance,
Roxana
show is indeed an action, but it is smart enough to know when it doesn't have to run everything. If you had an orderBy it would take very long too, but in this case all your operations are map operations, so there's no need to compute the whole final table. However, count needs to physically go through the whole table in order to count it, and that's why it's taking so long. You could test what I'm saying by adding an orderBy to df1's definition - then it should take long.
EDIT: Also, the 40k tasks are likely due to the number of partitions your DF is split into. Try df1.repartition(<a sensible number here, depending on cluster and DF size>) and then run count again.
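A quick way to check and adjust this (a sketch; 200 is just an illustrative number):
print(df1.rdd.getNumPartitions())   # likely close to the 40000 tasks you are seeing
df1 = df1.repartition(200)          # pick a value suited to your cluster and data size
df1.count()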
show() by default displays only 20 rows. If the first partition returns more than 20 rows, the remaining partitions are not executed at all.
Note that show has several variations. If you ask for many more rows, for example show(1000000), more (or all) partitions have to be executed and it may take much more time. So show() equals show(20), which is a partial action.
I have two pyspark dataframes with the same number of rows, but they don't have any common column. So I am adding a new column to both of them using monotonically_increasing_id() as
from pyspark.sql.functions import monotonically_increasing_id as mi
id=mi()
df1 = df1.withColumn("match_id", id)
cont_data = cont_data.withColumn("match_id", id)
cont_data = cont_data.join(df1,df1.match_id==cont_data.match_id, 'inner').drop(df1.match_id)
But after the join, the resulting data frame has fewer rows.
What am I missing here? Thanks
You just don't. This is not an applicable use case for monotonically_increasing_id, which is by definition non-deterministic. Instead (see the sketch after this list):
convert to RDD
zipWithIndex
convert back to DataFrame
join
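A minimal sketch of that recipe (the match_id name is taken from the question; rows are paired purely by position, which is what the original code assumes anyway):
def with_index(df, name="match_id"):
    # pair every row with a consecutive index and rebuild the DataFrame
    return (df.rdd
              .zipWithIndex()
              .map(lambda pair: pair[0] + (pair[1],))
              .toDF(df.columns + [name]))

df1_idx = with_index(df1)
cont_idx = with_index(cont_data)
result = cont_idx.join(df1_idx, "match_id", "inner").drop("match_id")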
You can generate the ids with monotonically_increasing_id, save the file to disk, and then read it back in, and only then do whatever joining process you need. I would only suggest this approach if you just need to generate the ids once. At that point they can be used for joining, but for the reasons mentioned above, this is hacky and not a good solution for anything that runs regularly.
If you want to get an incremental number on both dataframes and then join, you can generate a consecutive number with monotonically and windowing with the following code:
from pyspark.sql import Window
from pyspark.sql.functions import monotonically_increasing_id, row_number, col as scol

df1 = df1.withColumn("monotonically_increasing_id", monotonically_increasing_id())
window = Window.orderBy(scol('monotonically_increasing_id'))
df1 = df1.withColumn("match_id", row_number().over(window))
df1 = df1.drop("monotonically_increasing_id")
cont_data = cont_data.withColumn("monotonically_increasing_id",monotonically_increasing_id())
window = Window.orderBy(scol('monotonically_increasing_id'))
cont_data = cont_data.withColumn("match_id", row_number().over(window))
cont_data = cont_data.drop("monotonically_increasing_id")
cont_data = cont_data.join(df1,df1.match_id==cont_data.match_id, 'inner').drop(df1.match_id)
Warning: it may move all the data to a single partition! So it may be better to separate the match_id into a different dataframe together with the monotonically_increasing_id, generate the consecutive incremental number there, and then join it back to the data.
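One way to read that suggestion (a sketch; only the narrow one-column helper frame goes through the single-partition window sort):
df1 = df1.withColumn("mono_id", monotonically_increasing_id())
ids = df1.select("mono_id")                                            # narrow helper frame
ids = ids.withColumn("match_id", row_number().over(Window.orderBy("mono_id")))
df1 = df1.join(ids, "mono_id").drop("mono_id")                         # attach the consecutive id back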
I'm trying to use DataFrame.subtract() in Spark 1.6.1 to remove rows from a dataframe based on a column from another dataframe. Let's use an example:
from pyspark.sql import Row
df1 = sqlContext.createDataFrame([
Row(name='Alice', age=2),
Row(name='Bob', age=1),
]).alias('df1')
df2 = sqlContext.createDataFrame([
Row(name='Bob'),
])
df1_with_df2 = df1.join(df2, 'name').select('df1.*')
df1_without_df2 = df1.subtract(df1_with_df2)
Since I want all rows from df1 which don't have name='Bob', I expect Row(age=2, name='Alice'). But I also get Bob back:
print(df1_without_df2.collect())
# [Row(age='1', name='Bob'), Row(age='2', name='Alice')]
After various experiments to get down to this MCVE, I found out that the issue is with the age key. If I omit it:
df1_noage = sqlContext.createDataFrame([
Row(name='Alice'),
Row(name='Bob'),
]).alias('df1_noage')
df1_noage_with_df2 = df1_noage.join(df2, 'name').select('df1_noage.*')
df1_noage_without_df2 = df1_noage.subtract(df1_noage_with_df2)
print(df1_noage_without_df2.collect())
# [Row(name='Alice')]
Then I only get Alice as expected. The weirdest observation I made is that it's possible to add keys, as long as they're after (in the lexicographical order sense) the key I use in the join:
df1_zage = sqlContext.createDataFrame([
Row(zage=2, name='Alice'),
Row(zage=1, name='Bob'),
]).alias('df1_zage')
df1_zage_with_df2 = df1_zage.join(df2, 'name').select('df1_zage.*')
df1_zage_without_df2 = df1_zage.subtract(df1_zage_with_df2)
print(df1_zage_without_df2.collect())
# [Row(name='Alice', zage=2)]
I correctly get Alice (with her zage)! In my real examples, I'm interested in all columns, not only the ones that are after name.
Well, there are some bugs here (the first issue looks related to the same problem as SPARK-6231), and a JIRA ticket looks like a good idea, but SUBTRACT / EXCEPT is not the right choice for partial matches.
Instead, as of Spark 2.0, you can use anti-join:
df1.join(df1_with_df2, ["name"], "leftanti").show()
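With the example frames above, only Alice survives (a quick usage check):
df1.join(df1_with_df2, ["name"], "leftanti").collect()
# roughly: [Row(age=2, name='Alice')]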
In 1.6 you can do pretty much the same thing with a standard outer join:
import pyspark.sql.functions as F
ref = df1_with_df2.select("name").alias("ref")
(df1
    .join(ref, ref.name == df1.name, "leftouter")
    .filter(F.isnull("ref.name"))
    .drop(F.col("ref.name")))