PySpark: Time since previous True - apache-spark

I have a Spark dataframe, like so:
# For sake of simplicity only one id is shown, but there are multiple objects
+---+-------------------+------+
| id| timstm|signal|
+---+-------------------+------+
| X1|2022-07-01 00:00:00| null|
| X1|2022-07-02 00:00:00| true|
| X1|2022-07-03 00:00:00| null|
| X1|2022-07-05 00:00:00| null|
| X1|2022-07-09 00:00:00| true|
+---+-------------------+------+
And I want to create a new column that contains the time (in days) since the signal column was last true:
+---+-------------------+------+---------+
| id| timstm|signal|time_diff|
+---+-------------------+------+---------+
| X1|2022-07-01 00:00:00| null| null|
| X1|2022-07-02 00:00:00| true| 0.0|
| X1|2022-07-03 00:00:00| null| 1.0|
| X1|2022-07-05 00:00:00| null| 3.0|
| X1|2022-07-09 00:00:00| true| 0.0|
+---+-------------------+------+---------+
Any ideas on how to approach this? My intuition is to somehow use window and filter to achieve this, but I'm not sure how.

So this logic is a bit hard to express in native PySpark. It is easier to express as a Pandas UDF. I will use the Fugue library to bring the Python/Pandas code to a Pandas UDF; if you don't want to use Fugue, you can still turn it into a Pandas UDF yourself, it just takes a lot more code.
Setup
Here I am just creating the DataFrame from the example. It starts as a Pandas DataFrame; we will convert it to Spark and run the solution on Spark later.
I suggest filling the nulls with False in the original DataFrame. This is because the Pandas code uses a group-by, and NULL values are dropped by default in a Pandas groupby. Filling the NULLs with False makes it work properly (and I think it also makes the conversion between Spark and Pandas easier).
import pandas as pd

df = pd.DataFrame({"id": ["X1"] * 5,
                   "timestm": ["2022-07-01", "2022-07-02", "2022-07-03", "2022-07-05", "2022-07-09"],
                   "signal": [None, True, None, None, True]})
df['timestm'] = pd.to_datetime(df['timestm'])
df['signal'] = df['signal'].fillna(False)
Solution 1
When we use a Pandas UDF, the important piece is that the function is applied per Spark partition, so the function only needs to be able to handle a single id. We then partition the Spark DataFrame by id and run the function on each group.
Also be aware that row order may not be guaranteed, so we sort the data by time as the first step. The Pandas code itself is really just taken from another post here and modified.
def process(df: pd.DataFrame) -> pd.DataFrame:
    df = df.sort_values('timestm')
    # Days elapsed between consecutive rows
    df['days_since_last_event'] = df['timestm'].diff().apply(lambda x: x.days)
    # Accumulate the day differences within each stretch following a True signal
    df.loc[:, 'days_since_last_event'] = df.groupby(df['signal'].shift().cumsum())['days_since_last_event'].cumsum()
    # Rows where the signal fired are 0 days since the last event
    df.loc[df['signal'] == True, 'days_since_last_event'] = 0
    return df

process(df)
This will give us:
id timestm signal days_since_last_event
X1 2022-07-01 False NaN
X1 2022-07-02 True 0.0
X1 2022-07-03 False 1.0
X1 2022-07-05 False 3.0
X1 2022-07-09 True 0.0
Which looks right. Now we can bring it to Spark using Fugue with minimal additional code. This will partition the data by id and run the function on each partition. A schema is required for Pandas UDFs, so Fugue needs it as well, but it uses a simpler way to define it.
import fugue.api as fa
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
sdf = spark.createDataFrame(df)
out = fa.transform(sdf, process, schema="*, days_since_last_event:int", partition={"by": "id"})
# out is a Spark DataFrame because a Spark DataFrame was passed in
out.show()
which gives us:
+---+-------------------+------+---------------------+
| id| timestm|signal|days_since_last_event|
+---+-------------------+------+---------------------+
| X1|2022-07-01 00:00:00| false| null|
| X1|2022-07-02 00:00:00| true| 0|
| X1|2022-07-03 00:00:00| false| 1|
| X1|2022-07-05 00:00:00| false| 3|
| X1|2022-07-09 00:00:00| true| 0|
+---+-------------------+------+---------------------+
Note that you need to define the partitioning when running on the full data.
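For reference, the same per-id logic can also run on plain PySpark without Fugue via Spark's built-in applyInPandas (Spark 3.0+). This is only a sketch reusing the process function and the sdf Spark DataFrame from above; the schema string is an assumption based on the example columns (days_since_last_event is declared as double because the first row of each id is NaN):
# Hedged sketch: per-id Pandas processing with the native applyInPandas API,
# reusing `process` and `sdf` from above; column types are assumptions.
out_native = sdf.groupBy("id").applyInPandas(
    process,
    schema="id string, timestm timestamp, signal boolean, days_since_last_event double",
)
out_native.show()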

Related

Can you tell Spark to calculate `when` function's arguments lazily? [duplicate]

I want to create a new boolean column in my dataframe that derives its value from the evaluation of two conditional statements on other columns in the same dataframe:
columns = ["id", "color_one", "color_two"]
data = spark.createDataFrame([(1, "blue", "red"), (2, "red", None)]).toDF(*columns)
data = data.withColumn('is_red', data.color_one.contains("red") | data.color_two.contains("red"))
This works fine unless either color_one or color_two is NULL in a row. In cases like these, is_red is also set to NULL for that row instead of true or false:
+-------+----------+------------+-------+
|id |color_one |color_two |is_red |
+-------+----------+------------+-------+
| 1| blue| red| true|
| 2| red| NULL| NULL|
+-------+----------+------------+-------+
This means that PySpark is evaluating all of the clauses of the conditional statement rather than exiting early (via short-circuit evaluation) if the first condition happens to be true (like in row 2 of my example above).
Does PySpark support the short-circuit evaluation of conditional statements?
In the meantime, here is a workaround I have come up with to null-check each column:
from pyspark.sql import functions as F
color_one_is_null = data.color_one.isNull()
color_two_is_null = data.color_two.isNull()
data = data.withColumn('is_red', F.when(color_two_is_null, data.color_one.contains("red"))
                                  .otherwise(F.when(color_one_is_null, data.color_two.contains("red"))
                                  .otherwise(F.when(color_one_is_null & color_two_is_null, F.lit(False))
                                  .otherwise(data.color_one.contains("red") | data.color_two.contains("red"))))
       )
I don't think Spark supports short-circuit evaluation on conditionals, as stated in the Databricks documentation (https://docs.databricks.com/spark/latest/spark-sql/udf-python.html):
Spark SQL (including SQL and the DataFrame and Dataset API) does not guarantee the order of evaluation of subexpressions. In particular, the inputs of an operator or function are not necessarily evaluated left-to-right or in any other fixed order. For example, logical AND and OR expressions do not have left-to-right “short-circuiting” semantics.
Another alternative is to create an array of color_one and color_two, then evaluate whether the array contains 'red' using the SQL EXISTS higher-order function (the output below is shown on a larger sample than the two rows above):
data = data.withColumn('is_red', F.expr("EXISTS(array(color_one, color_two), x -> x = 'red')"))
data.show()
+---+---------+---------+------+
| id|color_one|color_two|is_red|
+---+---------+---------+------+
| 1| blue| red| true|
| 2| red| null| true|
| 3| null| green| false|
| 4| yellow| null| false|
| 5| null| red| true|
| 6| null| null| false|
+---+---------+---------+------+

How to return the latest rows per group in pyspark structured streaming

I have a stream which I read in pyspark using spark.readStream.format('delta'). The data consists of multiple columns including a type, date and value column.
Example DataFrame:
+----+----------+-----+
|type|      date|value|
+----+----------+-----+
|   1|2020-01-21|    6|
|   1|2020-01-16|    5|
|   2|2020-01-20|    8|
|   2|2020-01-15|    4|
+----+----------+-----+
I would like to create a DataFrame that keeps track of the latest state per type. One of the easiest methods when working on static (batch) data is to use window functions, but using windows on non-timestamp columns is not supported in streaming. Another option would look like
stream.groupby('type').agg(last('date'), last('value')).writeStream
but I think Spark cannot guarantee the ordering here, and using orderBy before the aggregation is also not supported in structured streaming.
Do you have any suggestions on how to approach this challenge?
Simply use the to_timestamp() function (which can be imported via from pyspark.sql.functions import *) on the date column so that you can use the window function.
e.g.
from pyspark.sql.functions import *

df = spark.createDataFrame(
    data=[("1", "2020-01-21")],
    schema=["id", "input_timestamp"])
df.printSchema()

# convert the string to a timestamp (this step is implied by the output below)
df = df.withColumn("timestamp", to_timestamp("input_timestamp"))
df.show(truncate=False)
+---+---------------+-------------------+
|id |input_timestamp|timestamp |
+---+---------------+-------------------+
|1 |2020-01-21 |2020-01-21 00:00:00|
+---+---------------+-------------------+
"but using windows on non-timestamp columns is not supported"
are you saying this from stream point of view, because same i am able to do.
Here is the solution to your problem.
from pyspark.sql import functions as F
from pyspark.sql.window import Window

windowSpec = Window.partitionBy("type").orderBy("date")
df1 = df.withColumn("rank", F.rank().over(windowSpec))
df1.show()
+----+----------+-----+----+
|type| date|value|rank|
+----+----------+-----+----+
| 1|2020-01-16| 5| 1|
| 1|2020-01-21| 6| 2|
| 2|2020-01-15| 4| 1|
| 2|2020-01-20| 8| 2|
+----+----------+-----+----+
w = Window.partitionBy('type')
df1.withColumn('maxB', F.max('rank').over(w)).where(F.col('rank') == F.col('maxB')).drop('maxB').show()
+----+----------+-----+----+
|type| date|value|rank|
+----+----------+-----+----+
| 1|2020-01-21| 6| 2|
| 2|2020-01-20| 8| 2|
+----+----------+-----+----+
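As a side note, the two window steps above can be collapsed into a single pass by ranking in descending date order and keeping rank 1. This is just a sketch of the same batch technique and does not lift the streaming restriction the question mentions:
from pyspark.sql import functions as F
from pyspark.sql.window import Window

# Rank each type's rows by date descending and keep only the newest one
w_desc = Window.partitionBy("type").orderBy(F.col("date").desc())
latest = df.withColumn("rn", F.row_number().over(w_desc)).where("rn = 1").drop("rn")
latest.show()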

Python Spark join two dataframes and fill column

I have two dataframes that need to be joined in a particular way I am struggling with.
dataframe 1:
+--------------------+---------+----------------+
| asset_domain| eid| oid|
+--------------------+---------+----------------+
| test-domain...| 126656| 126656|
| nebraska.aaa.com| 335660| 335660|
| netflix.com| 460| 460|
+--------------------+---------+----------------+
dataframe 2:
+--------------------+--------------------+---------+--------------+----+----+------------+
| asset| asset_domain|dns_count| ip| ev|post|form_present|
+--------------------+--------------------+---------+--------------+----+----+------------+
| sub1.test-domain...| test-domain...| 6354| 11.11.111.111| 1| 1| null|
| netflix.com| netflix.com| 3836| 22.22.222.222|null|null| null|
+--------------------+--------------------+---------+--------------+----+----+------------+
desired result:
+--------------------+---------+-------------+----+----+------------+---------+----------------+
| asset|dns_count| ip| ev|post|form_present| eid| oid|
+--------------------+---------+-------------+----+----+------------+---------+----------------+
| netflix.com| 3836|22.22.222.222|null|null| null| 460| 460|
| sub1.test-domain...| 5924|111.11.111.11| 1| 1| null| 126656| 126656|
| nebraska.aaa.com| null| null|null|null| null| 335660| 335660|
+--------------------+---------+-------------+----+----+------------+---------+----------------+
Basically – it should join df1 and df2 on asset_domain but if that doesn't exist in df2, then the resulting asset should be the asset_domain from df1.
I tried df = df2.join(df1, ["asset_domain"], "right").drop("asset_domain") but that obviously leaves null in the asset column for nebraska.aaa.com since it does not have a matching domain in df2. How do I go about adding those to the asset column for this particular case?
You can use the coalesce function after the join to create the asset column.
from pyspark.sql.functions import coalesce

df2.join(df1, ["asset_domain"], "right") \
   .select(coalesce("asset", "asset_domain").alias("asset"),
           "dns_count", "ip", "ev", "post", "form_present", "eid", "oid") \
   .orderBy("asset").show()
#+----------------+---------+-------------+----+----+------------+------+------+
#| asset|dns_count| ip| ev|post|form_present| eid| oid|
#+----------------+---------+-------------+----+----+------------+------+------+
#|nebraska.aaa.com| null| null|null|null| null|335660|335660|
#| netflix.com| 3836|22.22.222.222|null|null| None| 460| 460|
#|sub1.test-domain| 6354|11.11.111.111| 1| 1| null|126656|126656|
#+----------------+---------+-------------+----+----+------------+------+------+
After the join you can use the isNull() function:
import pyspark.sql.functions as F

tst1 = sqlContext.createDataFrame([('netflix', 1), ('amazon', 2)], schema=("asset_domain", 'xtra1'))
tst2 = sqlContext.createDataFrame([('netflix', 'yahoo', 1), ('amazon', 'yahoo', 2), ('flipkart', None, 2)], schema=("asset_domain", "asset", 'xtra'))
tst_j = tst1.join(tst2, on='asset_domain', how='right')
tst_res = tst_j.withColumn("asset", F.when(F.col('asset').isNull(), F.col('asset_domain')).otherwise(F.col('asset')))
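Applied to the question's dataframes, the same pattern would look roughly like this (df1/df2 and the column names are taken from the question; treat it as a sketch rather than tested code):
import pyspark.sql.functions as F

# Right join on asset_domain, then fall back to asset_domain when asset is null
joined = df2.join(df1, on="asset_domain", how="right")
result = joined.withColumn(
    "asset",
    F.when(F.col("asset").isNull(), F.col("asset_domain")).otherwise(F.col("asset"))
).drop("asset_domain")
result.show()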

Apache Spark: Get the first and last row of each partition

I would like to get the first and last row of each partition in spark (I'm using pyspark). How do I go about this?
In my code I repartition my dataset based on a key column using:
mydf.repartition(keyColumn).sortWithinPartitions(sortKey)
Is there a way to get the first row and last row for each partition?
Thanks
I would highly advise against working with partitions directly. Spark does a lot of DAG optimisation, so when you try executing specific functionality on each partition, all your assumptions about the partitions and their distribution might be completely false.
You do, however, seem to have a keyColumn and a sortKey, so I'd suggest doing the following:
import pyspark
import pyspark.sql.functions as f

w_asc = pyspark.sql.Window.partitionBy(keyColumn).orderBy(f.asc(sortKey))
w_desc = pyspark.sql.Window.partitionBy(keyColumn).orderBy(f.desc(sortKey))

res_df = mydf. \
    withColumn("rn_asc", f.row_number().over(w_asc)). \
    withColumn("rn_desc", f.row_number().over(w_desc)). \
    where("rn_asc = 1 or rn_desc = 1")
The resulting dataframe will have 2 additional columns, where rn_asc=1 indicates the first row and rn_desc=1 indicates the last row.
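If you then want the first and last rows as separate dataframes, a simple follow-up on the res_df above could be:
first_rows = res_df.where("rn_asc = 1").drop("rn_asc", "rn_desc")
last_rows = res_df.where("rn_desc = 1").drop("rn_asc", "rn_desc")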
Scala: I think repartition is not by some key column; it takes an integer for how many partitions you want to set. I made a way to select the first and last row by using Spark's Window function.
First, this is my test data.
+---+-----+
| id|value|
+---+-----+
| 1| 1|
| 1| 2|
| 1| 3|
| 1| 4|
| 2| 1|
| 2| 2|
| 2| 3|
| 3| 1|
| 3| 3|
| 3| 5|
+---+-----+
Then, I use the Window function twice, because I cannot easily know the last row, but the reverse order is quite easy.
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

val a = Window.partitionBy("id").orderBy("value")
val d = Window.partitionBy("id").orderBy(col("value").desc)
val df = spark.read.option("header", "true").csv("test.csv")

df.withColumn("marker", when(rank.over(a) === 1, "Y").otherwise("N"))
  .withColumn("marker", when(rank.over(d) === 1, "Y").otherwise(col("marker")))
  .filter(col("marker") === "Y")
  .drop("marker").show
The final result is then,
+---+-----+
| id|value|
+---+-----+
| 3| 5|
| 3| 1|
| 1| 4|
| 1| 1|
| 2| 3|
| 2| 1|
+---+-----+
Here is another approach using mapPartitions from RDD API. We iterate over the elements of each partition until we reach the end. I would expect this iteration to be very fast since we skip all the elements of the partition except the two edges. Here is the code:
df = spark.createDataFrame([
    ["Tom", "a"],
    ["Dick", "b"],
    ["Harry", "c"],
    ["Elvis", "d"],
    ["Elton", "e"],
    ["Sandra", "f"]
], ["name", "toy"])

def get_first_last(it):
    try:
        first = last = next(it)
    except StopIteration:
        return []  # empty partition, nothing to return
    for last in it:
        pass
    # Attention: if first equals last by reference return only one!
    if first is last:
        return [first]
    return [first, last]

# coalesce here is just for demonstration
first_last_rdd = df.coalesce(2).rdd.mapPartitions(get_first_last)
spark.createDataFrame(first_last_rdd, ["name", "toy"]).show()
# +------+---+
# | name|toy|
# +------+---+
# | Tom| a|
# | Harry| c|
# | Elvis| d|
# |Sandra| f|
# +------+---+
PS: Odd positions will contain a partition's first element and even positions its last. Also note that the number of results will be (numPartitions * 2) - numPartitionsWithOneItem, which I expect to be relatively small, so you shouldn't worry about the cost of the extra createDataFrame call.

PySpark: Randomize rows in dataframe

I have a dataframe and I want to randomize rows in the dataframe. I tried sampling the data by giving a fraction of 1, which didn't work (interestingly this works in Pandas).
It works in Pandas because taking a sample on a local system is typically done by shuffling the data. Spark, on the other hand, avoids shuffling by performing linear scans over the data. This means that sampling in Spark only randomizes which rows end up in the sample, not their order.
You can order DataFrame by a column of random numbers:
from pyspark.sql.functions import rand
df = sc.parallelize(range(20)).map(lambda x: (x, )).toDF(["x"])
df.orderBy(rand()).show(3)
## +---+
## | x|
## +---+
## | 2|
## | 7|
## | 14|
## +---+
## only showing top 3 rows
but it is:
expensive - because it requires a full shuffle, which is something you typically want to avoid.
suspicious - because the order of values in a DataFrame is not something you can really depend on in non-trivial cases, and since a DataFrame doesn't support indexing, it is relatively useless without collecting.
This code works for me without any RDD operations:
import pyspark.sql.functions as F
df = df.select("*").orderBy(F.rand())
Here is a more elaborate example:
import pandas as pd
import pyspark.sql.functions as F

# Example: create a DataFrame for the example
pandas_df = pd.DataFrame(([1, 2], [3, 1], [4, 2], [7, 2], [32, 7], [123, 3]), columns=["id", "col1"])
df = sqlContext.createDataFrame(pandas_df)
df = df.select("*").orderBy(F.rand())
df.show()
+---+----+
| id|col1|
+---+----+
| 1| 2|
| 3| 1|
| 4| 2|
| 7| 2|
| 32| 7|
|123| 3|
+---+----+
df.select("*").orderBy(F.rand()).show()
+---+----+
| id|col1|
+---+----+
| 7| 2|
|123| 3|
| 3| 1|
| 4| 2|
| 32| 7|
| 1| 2|
+---+----+
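If you need the shuffled order to be reproducible between runs, note that F.rand accepts an optional seed. A small aside on top of the answer above, not part of the original:
import pyspark.sql.functions as F

# Seeded shuffle: the order is repeatable as long as the input partitioning stays the same
df.orderBy(F.rand(seed=42)).show()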
