Grouping consecutive rows where date difference is 1 day - apache-spark

I have managed to get the dataframe with these columns:
+----------+----------+--------+
|      date| next_date|datediff|
+----------+----------+--------+
|2020-09-25|2020-09-30|       5|
|2020-09-30|2020-10-01|       1|
|2020-10-01|2020-10-02|       1|
|2020-10-02|2020-10-03|       1|
|2020-10-03|2020-10-04|       1|
|2020-10-09|2020-11-23|      45|
|2020-11-23|2020-11-24|       1|
|2020-11-24|2020-11-25|       1|
|2020-11-25|2020-11-26|       1|
+----------+----------+--------+
I got the 'group' column by doing these commands:
from pyspark.sql import functions as F, Window

w1 = Window.orderBy("date")
df_dates = df_dates.withColumn(
    "dateChange",
    (F.col("datediff") != F.lit(1)).cast("int")
)\
.fillna(
    0,
    subset=["dateChange"]
)\
.withColumn(
    "indicator",
    (~(F.col("dateChange") == 0)).cast("int")
)\
.withColumn(
    "group",
    F.sum(F.col("indicator")).over(w1.rangeBetween(Window.unboundedPreceding, 0))
)
and finally got these groupings:
+----------+----------+--------+----------+---------+-----+
|      date| next_date|datediff|dateChange|indicator|group|
+----------+----------+--------+----------+---------+-----+
|2020-09-25|2020-09-30|       5|         1|        1|    1|
|2020-09-30|2020-10-01|       1|         0|        0|    1|
|2020-10-01|2020-10-02|       1|         0|        0|    1|
|2020-10-02|2020-10-03|       1|         0|        0|    1|
|2020-10-03|2020-10-04|       1|         0|        0|    1|
|2020-10-09|2020-11-23|      45|         1|        1|    2|
|2020-11-23|2020-11-24|       1|         0|        0|    2|
|2020-11-24|2020-11-25|       1|         0|        0|    2|
|2020-11-25|2020-11-26|       1|         0|        0|    2|
+----------+----------+--------+----------+---------+-----+
However, the first row should have its own group. The second row should be group 2 (all incremented by 1).
Then I do the aggregation:
df_dates.groupBy("group")\
    .agg(
        F.min("next_date").alias("start_time"),
        F.max("next_date").alias("end_time")
    )\
    .drop("group")\
    .show()
+----------+----------+
|start_time| end_time|
+----------+----------+
|2020-09-30|2020-10-04|
|2020-11-23|2020-11-26|
+----------+----------+
But I am missing the first group, which is just 2020-09-25.
The aim of all this is to get the ranges of consecutive dates so that I can combine HDFS folders holding consecutive dates into the same partition.

The example data, rewritten as a Python script:
from pyspark.sql import functions as F, Window as W

df = spark.createDataFrame(
    [('2020-09-25', '2020-09-30', 5),
     ('2020-09-30', '2020-10-01', 1),
     ('2020-10-01', '2020-10-02', 1),
     ('2020-10-02', '2020-10-03', 1),
     ('2020-10-03', '2020-10-04', 1),
     ('2020-10-09', '2020-11-23', 45),
     ('2020-11-23', '2020-11-24', 1),
     ('2020-11-24', '2020-11-25', 1),
     ('2020-11-25', '2020-11-26', 1)],
    ["date", "next_date", "datediff"])
The following creates groups using window functions lag and sum:
w = W.orderBy("date")
# _flg flags the rows where a new subgroup must start
df = df.withColumn("_flg", F.coalesce(F.when(F.col("datediff") != F.lag("datediff").over(w), 1), F.lit(0)))
df = df.withColumn("_grp", F.sum("_flg").over(w))
df.show()
# +----------+----------+--------+----+----+
# | date| next_date|datediff|_flg|_grp|
# +----------+----------+--------+----+----+
# |2020-09-25|2020-09-30| 5| 0| 0|
# |2020-09-30|2020-10-01| 1| 1| 1|
# |2020-10-01|2020-10-02| 1| 0| 1|
# |2020-10-02|2020-10-03| 1| 0| 1|
# |2020-10-03|2020-10-04| 1| 0| 1|
# |2020-10-09|2020-11-23| 45| 1| 2|
# |2020-11-23|2020-11-24| 1| 1| 3|
# |2020-11-24|2020-11-25| 1| 0| 3|
# |2020-11-25|2020-11-26| 1| 0| 3|
# +----------+----------+--------+----+----+
Lastly, group by the created "_grp" column (plus any other columns, where applicable):
df = (df
      .groupBy("_grp")
      .agg(
          F.min("date").alias("start_time"),
          F.max("next_date").alias("end_time")
      )
      .drop("_grp")
)
df.show()
# +----------+----------+
# |start_time| end_time|
# +----------+----------+
# |2020-09-25|2020-09-30|
# |2020-09-30|2020-10-04|
# |2020-10-09|2020-11-23|
# |2020-11-23|2020-11-26|
# +----------+----------+
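Not part of the answer above, but a common alternative for this kind of problem is the "date minus row_number" trick: within an unbroken run of consecutive calendar dates, the date minus the row number is constant, so it can serve directly as the group key. A minimal sketch under that assumption, reusing the df defined above:
from pyspark.sql import functions as F, Window as W

w = W.orderBy("date")

# Dates in an unbroken run of calendar days share the same anchor value
ranges = (
    df.withColumn("rn", F.row_number().over(w))
      .withColumn("anchor", F.expr("date_sub(to_date(date), rn)"))
      .groupBy("anchor")
      .agg(F.min("date").alias("start_time"),
           F.max("next_date").alias("end_time"))
      .drop("anchor")
)
ranges.show()
This should give the same four start/end ranges as shown above, without needing the datediff column, since any break in the date sequence changes the anchor value.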

Related

Check if a column is consecutive with groupby in pyspark

I have a pyspark dataframe that looks like this:
import pandas as pd
foo = pd.DataFrame({'group': ['a','a','a','b','b','c','c','c'], 'value': [1,2,3,4,5,2,4,5]})
I would like to create a new binary column is_consecutive that indicates if the values in the value column are consecutive by group.
The output should look like this:
foo = pd.DataFrame({'group': ['a','a','a','b','b','c','c','c'], 'value': [1,2,3,4,5,2,4,5],
'is_consecutive': [1,1,1,1,1,0,0,0]})
How could I do that in pyspark?
You can use lag to compare values with the previous row and check if they are consecutive, then use min to determine whether all rows are consecutive in a given group.
from pyspark.sql import functions as F, Window

df2 = df.withColumn(
    'consecutive',
    F.coalesce(
        (F.col('value') - F.lag('value').over(Window.partitionBy('group').orderBy('value'))) == 1,
        F.lit(True)
    ).cast('int')
).withColumn(
    'all_consecutive',
    F.min('consecutive').over(Window.partitionBy('group'))
)
df2.show()
+-----+-----+-----------+---------------+
|group|value|consecutive|all_consecutive|
+-----+-----+-----------+---------------+
| c| 2| 1| 0|
| c| 4| 0| 0|
| c| 5| 1| 0|
| b| 4| 1| 1|
| b| 5| 1| 1|
| a| 1| 1| 1|
| a| 2| 1| 1|
| a| 3| 1| 1|
+-----+-----+-----------+---------------+
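If you need the exact output shape from the question (just group, value and a single is_consecutive column), a small follow-up select on df2 could look like this; the renaming is my own choice, not part of the original answer:
# Keep only the requested columns, renaming the helper column
result = df2.select(
    "group",
    "value",
    F.col("all_consecutive").alias("is_consecutive"),
)
result.show()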
You can use lead, subtract the existing value from it, then take the max over the window; once done, return 0 if the max is greater than 1, else return 1.
w = Window.partitionBy("group").orderBy(F.monotonically_increasing_id())

(foo.withColumn("Diff", F.lead("value").over(w) - F.col("value"))
    .withColumn("is_consecutive", F.when(F.max("Diff").over(w) > 1, 0).otherwise(1))
    .drop("Diff")).show()
+-----+-----+--------------+
|group|value|is_consecutive|
+-----+-----+--------------+
| a| 1| 1|
| a| 2| 1|
| a| 3| 1|
| b| 4| 1|
| b| 5| 1|
| c| 2| 0|
| c| 4| 0|
| c| 5| 0|
+-----+-----+--------------+

Is it possible to filter columns by the sum of their values in Spark?

I'm loading a sparse table using PySpark, and I want to remove all columns where the sum of values in the column is below a threshold.
For example, the sums of the columns of the following table:
+---+---+---+---+---+---+
| a| b| c| d| e| f|
+---+---+---+---+---+---+
| 1| 0| 1| 1| 0| 0|
| 1| 1| 0| 0| 0| 0|
| 1| 0| 0| 1| 1| 1|
| 1| 0| 0| 1| 1| 1|
| 1| 1| 0| 0| 1| 0|
| 0| 0| 1| 0| 1| 0|
+---+---+---+---+---+---+
are 5, 2, 2, 3, 4 and 2, respectively. Filtering for all columns with sum >= 3 should output this table:
+---+---+---+
| a| d| e|
+---+---+---+
| 1| 1| 0|
| 1| 0| 0|
| 1| 1| 1|
| 1| 1| 1|
| 1| 0| 1|
| 0| 0| 1|
+---+---+---+
I tried many different solutions without success. df.groupBy().sum() gives me the sums of the column values, so I'm searching for a way to filter those against the threshold and keep only the qualifying columns from the original dataframe.
As there are not just 6 but a couple of thousand columns, I'm looking for a scalable solution where I don't have to type in every column name. Thanks for the help!
You can do this with a collect (or a first) step.
from pyspark.sql import functions as F
sum_result = df.groupBy().agg(*(F.sum(col).alias(col) for col in df.columns)).first()
filtered_df = df.select(
    *(col for col, value in sum_result.asDict().items() if value >= 3)
)
filtered_df.show()
+---+---+---+
| a| d| e|
+---+---+---+
| 1| 1| 0|
| 1| 0| 0|
| 1| 1| 1|
| 1| 1| 1|
| 1| 0| 1|
| 0| 0| 1|
+---+---+---+
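If you need this for several wide tables, the same idea can be wrapped in a small helper with the threshold as a parameter. This is my own sketch, not part of the answer; the `is not None` check is an extra guard in case a column is entirely null:
from pyspark.sql import functions as F

def keep_columns_by_sum(df, threshold):
    # One pass over the data: compute the sum of every column
    sums = df.groupBy().agg(*(F.sum(c).alias(c) for c in df.columns)).first()
    # Keep only the columns whose sum reaches the threshold
    keep = [c for c, v in sums.asDict().items() if v is not None and v >= threshold]
    return df.select(*keep)

keep_columns_by_sum(df, 3).show()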

Get all possible combinations recursively in an RDD in pyspark

I have made this algorithm, but with higher numbers it looks like it doesn't work or is very slow. It will run on a big data cluster (Cloudera), so I think I have to move the function into PySpark. Any tips on how to improve it, please?
import pandas as pd
import itertools as itts

number_list = [10953, 10423, 10053]

def reducer(nums):
    def ranges(n):
        print(n)
        return range(n, -1, -1)
    num_list = list(map(ranges, nums))
    return list(itts.product(*num_list))

data = pd.DataFrame(reducer(number_list))
print(data)
You can use crossJoin with DataFrame:
Here we have a simple example trying to compute the cross-product of three arrays,
i.e. [1,0], [2,1,0], [3,2,1,0]. Their cross-product should have 2*3*4 = 24 elements.
The code below shows how to achieve this.
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('test').getOrCreate()
df1 = spark.createDataFrame([(1,),(0,)], ['v1'])
df2 = spark.createDataFrame([(2,), (1,),(0,)], ['v2'])
df3 = spark.createDataFrame([(3,), (2,),(1,),(0,)], ['v3'])
df1.show()
df2.show()
df3.show()
+---+
| v1|
+---+
| 1|
| 0|
+---+
+---+
| v2|
+---+
| 2|
| 1|
| 0|
+---+
+---+
| v3|
+---+
| 3|
| 2|
| 1|
| 0|
+---+
df = df1.crossJoin(df2).crossJoin(df3)
print('----------- Total rows: ', df.count())
df.show(30)
----------- Total rows: 24
+---+---+---+
| v1| v2| v3|
+---+---+---+
| 1| 2| 3|
| 1| 2| 2|
| 1| 2| 1|
| 1| 2| 0|
| 1| 1| 3|
| 1| 1| 2|
| 1| 1| 1|
| 1| 1| 0|
| 1| 0| 3|
| 1| 0| 2|
| 1| 0| 1|
| 1| 0| 0|
| 0| 2| 3|
| 0| 2| 2|
| 0| 2| 1|
| 0| 2| 0|
| 0| 1| 3|
| 0| 1| 2|
| 0| 1| 1|
| 0| 1| 0|
| 0| 0| 3|
| 0| 0| 2|
| 0| 0| 1|
| 0| 0| 0|
+---+---+---+
Your computation is pretty big:
(10953+1)*(10423+1)*(10053+1) = 1,148,010,922,784, which is about 1 trillion rows. I would suggest increasing the numbers slowly; Spark is not as fast as you might think once table joins are involved.
Also, try using broadcast on all your initial DataFrames, i.e. df1, df2, and df3, and see if it helps; a sketch is shown below.
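For reference, the broadcast hint would look roughly like this; whether it actually helps at this scale is something you would need to measure:
from pyspark.sql.functions import broadcast

# Hint Spark to broadcast the small inputs so the cross joins avoid shuffling them
df = df1.crossJoin(broadcast(df2)).crossJoin(broadcast(df3))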

Replacing all column values using Window operation?

Hi, I created a DataFrame like below:
df = sc.parallelize([
    (1, 3),
    (2, 3),
    (3, 2),
    (4, 2),
    (1, 3)
]).toDF(["id", 't'])
It shows like this:
+---+---+
| id| t|
+---+---+
| 1| 3|
| 2| 3|
| 3| 2|
| 4| 2|
| 1| 3|
+---+---+
My main aim is to replace every repeated value in each column with the number of times it is repeated.
So I have tried the following code, but it is not working as expected:
from pyspark.sql import Window
from pyspark.sql.functions import col, count

column_list = ["id", 't']
w = Window.partitionBy(column_list)
dfmax = df.select(*((count(col(c)).over(w)).alias(c) for c in df.columns))
dfmax.show()
+---+---+
| id| t|
+---+---+
| 2| 2|
| 2| 2|
| 1| 1|
| 1| 1|
| 1| 1|
+---+---+
My expected output would be:
+---+---+
| id| t|
+---+---+
| 2| 3|
| 1| 3|
| 1| 1|
| 1| 1|
| 2| 3|
+---+---+
If I understand you correctly, what you're looking for is simply:
df.select(*[count(c).over(Window.partitionBy(c)).alias(c) for c in df.columns]).show()
#+---+---+
#| id| t|
#+---+---+
#| 2| 3|
#| 2| 3|
#| 1| 2|
#| 1| 3|
#| 1| 2|
#+---+---+
The difference between this and what you posted is that we only partition by one column at a time.
Remember that DataFrames are unordered. If you wanted to maintain your row order, you could add an ordering column using pyspark.sql.functions.monotonically_increasing_id():
from pyspark.sql.functions import monotonically_increasing_id

df.withColumn("order", monotonically_increasing_id())\
    .select(*[count(c).over(Window.partitionBy(c)).alias(c) for c in df.columns])\
    .sort("order")\
    .drop("order")\
    .show()
#+---+---+
#| id| t|
#+---+---+
#| 2| 3|
#| 1| 3|
#| 1| 2|
#| 1| 2|
#| 2| 3|
#+---+---+

[Py]Spark SQL: Constrain each frame of a Window using the frame's input row

I would like to constrain what rows in a Window frame are used by the aggregate function based on the current input row. For example, given a DataFrame df and a Window w, I want to be able to do something like:
df2 = df.withColumn("foo", first(col("bar").filter(...)).over(w))
where .filter would remove rows from the current Window frame based on the frame's input row.
My specific use case is as follows: Given a DataFrame df
+-----+--+--+
|group|n1|n2|
+-----+--+--+
| 1| 1| 6|
| 1| 0| 3|
| 1| 2| 2|
| 1| 3| 5|
| 2| 0| 5|
| 2| 0| 7|
| 2| 3| 2|
| 2| 5| 9|
+-----+--+--+
window
w = Window.partitionBy("group")\
    .orderBy("n1", "n2")\
    .rowsBetween(Window.currentRow + 1, Window.unboundedFollowing)
and some positive Long i, how would you find the first row (fr) in each input row r's frame such that r.n1 < fr.n1, r.n2 < fr.n2, and max(fr.n1 - r.n1, fr.n2 - r.n2) < i? The value returned can be either fr.n1 or fr's row index in df. So, for i = 6, the output for the example df would be
+-----+--+--+-----+
|group|n1|n2|fr.n1|
+-----+--+--+-----+
| 1| 1| 6| null|
| 1| 0| 3| 1|
| 1| 2| 2| 3|
| 1| 3| 5| null|
| 2| 0| 5| 5|
| 2| 0| 7| 5|
| 2| 3| 2| null|
| 2| 5| 9| null|
+-----+--+--+-----+
I've been studying the Spark API and looking at examples of Window, first, and when, but I can't seem to piece it together. Is this even possible with Window and aggregate functions or am I completely off the mark?
You won't be able to do it with just window functions and aggregations, you'll need a self join:
For the join:
df = sc.parallelize([[1, 1, 6], [1, 0, 3], [1, 2, 2], [1, 3, 5],
                     [2, 0, 5], [2, 0, 7], [2, 3, 2], [2, 5, 9]]).toDF(["group", "n1", "n2"])
import pyspark.sql.functions as psf

i = 6  # the threshold value used in the question's example

df_r = df.select([df[c].alias("r_" + c) for c in df.columns])
df_join = df_r\
    .join(df, (df_r.r_group == df.group)
          & (df_r.r_n1 < df.n1)
          & (df_r.r_n2 < df.n2)
          & (psf.greatest(df.n1 - df_r.r_n1, df.n2 - df_r.r_n2) < i), "leftouter")\
    .drop("group")
Now we can apply the window function to only keep the first row:
from pyspark.sql import Window

w = Window.partitionBy("r_group", "r_n1", "r_n2").orderBy("n1", "n2")
res = df_join\
    .withColumn("rn", psf.row_number().over(w))\
    .filter("rn = 1").drop("rn")
res.show()
+-------+----+----+----+----+
|r_group|r_n1|r_n2| n1| n2|
+-------+----+----+----+----+
| 1| 0| 3| 1| 6|
| 1| 1| 6|null|null|
| 1| 2| 2| 3| 5|
| 1| 3| 5|null|null|
| 2| 0| 5| 5| 9|
| 2| 0| 7| 5| 9|
| 2| 3| 2|null|null|
| 2| 5| 9|null|null|
+-------+----+----+----+----+
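If you want the columns named as in the question's expected output, you can finish with a rename; the aliases below are my own choice:
res.select(
    psf.col("r_group").alias("group"),
    psf.col("r_n1").alias("n1"),
    psf.col("r_n2").alias("n2"),
    psf.col("n1").alias("fr_n1"),
).show()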
