update pyspark data frame column based on another column - apache-spark

Below is a data frame in pyspark. I want to update the column val in the data frame based on the values in the tests column.
df.show()
+---------+----+---+
| tests| val|asd|
+---------+----+---+
| test1| Y| 1|
| test2| N| 2|
| test2| Y| 1|
| test1| N| 2|
| test1| N| 3|
| test3| N| 4|
| test4| Y| 5|
+---------+----+---+
I want to update the values so that when any row of a given test has val Y, all rows of that particular test are updated to Y; otherwise the values stay as they are.
Basically, I want the data frame to look like below.
result_df.show()
+---------+----+---+
| tests| val|asd|
+---------+----+---+
| test1| Y| 1|
| test2| Y| 2|
| test2| Y| 1|
| test1| Y| 2|
| test1| Y| 3|
| test3| N| 4|
| test4| Y| 5|
+---------+----+---+
What should I do to achieve that?

Use the max window function with selectExpr:
df.selectExpr(
    'tests', 'max(val) over (partition by tests) as val', 'asd'
).show()
+-----+---+---+
|tests|val|asd|
+-----+---+---+
|test4| Y| 5|
|test3| N| 4|
|test1| Y| 1|
|test1| Y| 2|
|test1| Y| 3|
|test2| Y| 2|
|test2| Y| 1|
+-----+---+---+
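
The same result can be obtained with the DataFrame API; a minimal sketch (not part of the original answer), relying on the fact that the string 'Y' sorts after 'N', so max over the partition returns 'Y' whenever the group contains one:

from pyspark.sql import functions as F, Window

df.withColumn('val', F.max('val').over(Window.partitionBy('tests'))).show()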

Here is a solution.
First we find out for each test whether it has val Y.
import pyspark.sql.functions as sf
by_test = df.groupBy('tests').agg(sf.sum((sf.col('val') == 'Y').cast('int')).alias('HasY'))
by_test.show()
+-----+----+
|tests|HasY|
+-----+----+
|test4| 1|
|test3| 0|
|test1| 1|
|test2| 1|
+-----+----+
Join back to the original dataframe:
df = df.join(by_test, on='tests')
df.show()
+-----+---+---+----+
|tests|val|asd|HasY|
+-----+---+---+----+
|test4| Y| 5| 1|
|test3| N| 4| 0|
|test1| Y| 1| 1|
|test1| N| 2| 1|
|test1| N| 3| 1|
|test2| N| 2| 1|
|test2| Y| 1| 1|
+-----+---+---+----+
Create a new column with the same name using when/otherwise
df = df.withColumn('val', sf.when(sf.col('HasY') > 0, 'Y').otherwise(sf.col('val')))
df = df.drop('HasY')
df.show()
+-----+---+---+
|tests|val|asd|
+-----+---+---+
|test4| Y| 5|
|test3| N| 4|
|test1| Y| 1|
|test1| Y| 2|
|test1| Y| 3|
|test2| Y| 2|
|test2| Y| 1|
+-----+---+---+
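
Optionally, since by_test is typically small after the groupBy, a broadcast hint can keep the join cheap (a minor variation of the join step above, not part of the original steps):

import pyspark.sql.functions as sf

df = df.join(sf.broadcast(by_test), on='tests')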

Related

Conditions in Spark window function

I have a dataframe like
+---+---+---+---+
| q| w| e| r|
+---+---+---+---+
| a| 1| 20| y|
| a| 2| 22| z|
| b| 3| 10| y|
| b| 4| 12| y|
+---+---+---+---+
I want to mark the rows with the minimum e and r = z. If there are no rows which have r = z, I want the row with the minimum e, even if r = y.
Essentially, something like
+---+---+---+---+---+
| q| w| e| r| t|
+---+---+---+---+---+
| a| 1| 20| y| 0|
| a| 2| 22| z| 1|
| b| 3| 10| y| 1|
| b| 4| 12| y| 0|
+---+---+---+---+---+
I can do it using a number of joins, but that would be too expensive.
So I was looking for a window-based solution.
You can calculate the minimum per group once for rows with r = z and then for all rows within a group. The first non-null value can then be compared to e:
from pyspark.sql import functions as F
from pyspark.sql import Window
df = ...
w = Window.partitionBy("q")
#When ordering is not defined, an unbounded window frame is used by default.
df.withColumn("min_e_with_r_eq_z", F.expr("min(case when r='z' then e else null end)").over(w)) \
.withColumn("min_e_overall", F.min("e").over(w)) \
.withColumn("t", F.coalesce("min_e_with_r_eq_z","min_e_overall") == F.col("e")) \
.orderBy("w") \
.show()
Output:
+---+---+---+---+-----------------+-------------+-----+
| q| w| e| r|min_e_with_r_eq_z|min_e_overall| t|
+---+---+---+---+-----------------+-------------+-----+
| a| 1| 20| y| 22| 20|false|
| a| 2| 22| z| 22| 20| true|
| b| 3| 10| y| null| 10| true|
| b| 4| 12| y| null| 10|false|
+---+---+---+---+-----------------+-------------+-----+
Note: I assume that q is the grouping column for the window.
You can assign row numbers based on whether r = z and the value of column e:
from pyspark.sql import functions as F, Window
df2 = df.withColumn(
    't',
    F.when(
        F.row_number().over(
            Window.partitionBy('q')
                  .orderBy((F.col('r') == 'z').desc(), 'e')
        ) == 1,
        1
    ).otherwise(0)
)
df2.show()
+---+---+---+---+---+
| q| w| e| r| t|
+---+---+---+---+---+
| a| 2| 22| z| 1|
| a| 1| 20| y| 0|
| b| 3| 10| y| 1|
| b| 4| 12| y| 0|
+---+---+---+---+---+
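
Note that row_number marks exactly one row per group even if several rows tie on the minimum e. If all tied rows should get t = 1, here is a sketch of the same idea with rank (tied rows share the same rank):

from pyspark.sql import functions as F, Window

df2 = df.withColumn(
    't',
    F.when(
        F.rank().over(
            Window.partitionBy('q')
                  .orderBy((F.col('r') == 'z').desc(), 'e')
        ) == 1,
        1
    ).otherwise(0)
)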
Adding the Spark Scala version of @werner's accepted answer:
val w = Window.partitionBy("q")
df.withColumn("min_e_with_r_eq_z", min(when($"r" === "z", $"e").otherwise(null)).over(w))
  .withColumn("min_e_overall", min("e").over(w))
  .withColumn("t", coalesce($"min_e_with_r_eq_z", $"min_e_overall") === $"e")
  .orderBy("w")
  .show()

Drop function doesn't work properly after joining same columns of Dataframe

I am facing this same issue while joining two data frames, A and B.
For ex:
c = df_a.join(df_b, [df_a.col1 == df_b.col1], how="left").drop(df_b.col1)
When I try to drop the duplicate column as above, the query doesn't drop col1 of df_b. Instead, when I try to drop col1 of df_a, it is able to drop that one.
Could anyone please explain this?
Note: I tried the same in my project, which has more than 200 columns, and it shows the same problem. Sometimes the drop function works properly when we have a few columns, but not when we have more.
Drop function not working after left outer join in pyspark
A function to drop duplicate columns after a merge:
def dropDupeDfCols(df):
    newcols = []
    dupcols = []
    # Record the first occurrence of each column name; remember the positions of duplicates.
    for i in range(len(df.columns)):
        if df.columns[i] not in newcols:
            newcols.append(df.columns[i])
        else:
            dupcols.append(i)
    # Temporarily rename every column to its positional index so duplicates can be dropped by name.
    df = df.toDF(*[str(i) for i in range(len(df.columns))])
    for dupcol in dupcols:
        df = df.drop(str(dupcol))
    # Restore the original (now unique) column names.
    return df.toDF(*newcols)
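
As a side note, the duplicate key column can often be avoided entirely by passing the join key by name rather than as a column expression; Spark then keeps a single col1 in the result. A minimal sketch, assuming both frames have a col1 column:

c = df_a.join(df_b, on="col1", how="left")   # the result contains only one col1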
I faced some similar issues recently. Let me show them below with your case.
I am creating two dataframes with the same data
scala> val df_a = Seq((1, 2, "as"), (2,3,"ds"), (3,4,"ew"), (4, 1, "re"), (3,1,"ht")).toDF("a", "b", "c")
df_a: org.apache.spark.sql.DataFrame = [a: int, b: int ... 1 more field]
scala> val df_b = Seq((1, 2, "as"), (2,3,"ds"), (3,4,"ew"), (4, 1, "re"), (3,1,"ht")).toDF("a", "b", "c")
df_b: org.apache.spark.sql.DataFrame = [a: int, b: int ... 1 more field]
Joining them
scala> val df = df_a.join(df_b, df_a("b") === df_b("a"), "leftouter")
df: org.apache.spark.sql.DataFrame = [a: int, b: int ... 4 more fields]
scala> df.show
+---+---+---+---+---+---+
| a| b| c| a| b| c|
+---+---+---+---+---+---+
| 1| 2| as| 2| 3| ds|
| 2| 3| ds| 3| 1| ht|
| 2| 3| ds| 3| 4| ew|
| 3| 4| ew| 4| 1| re|
| 4| 1| re| 1| 2| as|
| 3| 1| ht| 1| 2| as|
+---+---+---+---+---+---+
Let's drop a column that is not present in the above dataframe (the drop call itself is not shown here); the output is simply the unchanged dataframe:
+---+---+---+---+---+---+
| a| b| c| a| b| c|
+---+---+---+---+---+---+
| 1| 2| as| 2| 3| ds|
| 2| 3| ds| 3| 1| ht|
| 2| 3| ds| 3| 4| ew|
| 3| 4| ew| 4| 1| re|
| 4| 1| re| 1| 2| as|
| 3| 1| ht| 1| 2| as|
+---+---+---+---+---+---+
Ideally we would expect Spark to throw an error, but it executes successfully.
Now, if you drop a column from the above dataframe
scala> df.drop("a").show
+---+---+---+---+
| b| c| b| c|
+---+---+---+---+
| 2| as| 3| ds|
| 3| ds| 1| ht|
| 3| ds| 4| ew|
| 4| ew| 1| re|
| 1| re| 2| as|
| 1| ht| 2| as|
+---+---+---+---+
It drops all the columns with the provided column name from the input dataframe.
If you want to drop specific columns, it should be done as below:
scala> df.drop(df_a("a")).show()
+---+---+---+---+---+
| b| c| a| b| c|
+---+---+---+---+---+
| 2| as| 2| 3| ds|
| 3| ds| 3| 1| ht|
| 3| ds| 3| 4| ew|
| 4| ew| 4| 1| re|
| 1| re| 1| 2| as|
| 1| ht| 1| 2| as|
+---+---+---+---+---+
I don't think Spark accepts the input as given by you (see below):
scala> df.drop(df_a.a).show()
<console>:30: error: value a is not a member of org.apache.spark.sql.DataFrame
df.drop(df_a.a).show()
^
scala> df.drop(df_a."a").show()
<console>:1: error: identifier expected but string literal found.
df.drop(df_a."a").show()
^
If you provide the input to drop as below, it executes but has no impact:
scala> df.drop("df_a.a").show
+---+---+---+---+---+---+
| a| b| c| a| b| c|
+---+---+---+---+---+---+
| 1| 2| as| 2| 3| ds|
| 2| 3| ds| 3| 1| ht|
| 2| 3| ds| 3| 4| ew|
| 3| 4| ew| 4| 1| re|
| 4| 1| re| 1| 2| as|
| 3| 1| ht| 1| 2| as|
+---+---+---+---+---+---+
The reason is that Spark interprets "df_a.a" as a nested column. As that column is not present, ideally it should have thrown an error, but as explained above, it just executes.
Hope this helps!
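
For the PySpark case in the question, a reliable way to disambiguate is to alias both frames and select with the alias prefix; a sketch using the question's col1 (any other column names here are hypothetical):

from pyspark.sql import functions as F

a = df_a.alias("a")
b = df_b.alias("b")
joined = a.join(b, F.col("a.col1") == F.col("b.col1"), "left")
result = joined.select("a.*")   # keep only df_a's columns
# or cherry-pick, e.g. joined.select("a.col1", F.col("b.col1").alias("col1_b"))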

Replacing all column values using Window operation?

Hi, the data frame is created like below.
df = sc.parallelize([
    (1, 3),
    (2, 3),
    (3, 2),
    (4, 2),
    (1, 3)
]).toDF(["id", 't'])
It shows like below:
+---+---+
| id| t|
+---+---+
| 1| 3|
| 2| 3|
| 3| 2|
| 4| 2|
| 1| 3|
+---+---+
My main aim is to replace each repeated value in every column with the number of times it is repeated.
So I have tried the following code, but it is not working as expected.
from pyspark.sql.functions import col, count
from pyspark.sql import Window

column_list = ["id", 't']
w = Window.partitionBy(column_list)
dfmax = df.select(*((count(col(c)).over(w)).alias(c) for c in df.columns))
dfmax.show()
+---+---+
| id| t|
+---+---+
| 2| 2|
| 2| 2|
| 1| 1|
| 1| 1|
| 1| 1|
+---+---+
My expected output would be:
+---+---+
| id| t|
+---+---+
| 2| 3|
| 1| 3|
| 1| 1|
| 1| 1|
| 2| 3|
+---+---+
If I understand you correctly, what you're looking for is simply:
df.select(*[count(c).over(Window.partitionBy(c)).alias(c) for c in df.columns]).show()
#+---+---+
#| id| t|
#+---+---+
#| 2| 3|
#| 2| 3|
#| 1| 2|
#| 1| 3|
#| 1| 2|
#+---+---+
The difference between this and what you posted is that we only partition by one column at a time.
Remember that DataFrames are unordered. If you wanted to maintain your row order, you could add an ordering column using pyspark.sql.functions.monotonically_increasing_id():
from pyspark.sql.functions import monotonically_increasing_id

df.withColumn("order", monotonically_increasing_id())\
    .select(*[count(c).over(Window.partitionBy(c)).alias(c) for c in df.columns])\
    .sort("order")\
    .drop("order")\
    .show()
#+---+---+
#| id| t|
#+---+---+
#| 2| 3|
#| 1| 3|
#| 1| 2|
#| 1| 2|
#| 2| 3|
#+---+---+
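
If you would rather keep the original values and add the counts as extra columns instead of overwriting them, the same window counts can be selected alongside the originals (a sketch; the _count suffix is just a naming choice):

from pyspark.sql.functions import count
from pyspark.sql import Window

df.select(
    "*",
    *[count(c).over(Window.partitionBy(c)).alias(c + "_count") for c in df.columns]
).show()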

pyspark sqlfunction expr function not working as expected?

pyspark sqlfunction expr not working as expected.
My test1.txt contains:
101|10|4
101|12|1
101|13|3
101|14|2
My test2.txt contains:
101|10|4
101|11|1
101|13|3
101|14|2
I have created two dataframes from the above data with the code below:
df3 = spark.createDataFrame(sc.textFile("C://Users//cravi//Desktop//test1.txt").map( lambda x: x.split("|")[:3]),["cid","pid","pr"])
df4 = spark.createDataFrame(sc.textFile("C://Users//cravi//Desktop//test2.txt").map( lambda x: x.split("|")[:3]),["cid","pid","p"])
df5 = df4.withColumnRenamed("p", "p") \
    .join(df3.withColumnRenamed("pr", "Pr"), ["cid", "pid"], "outer") \
    .na.fill(0)
tt = df5.withColumn('flag', sf.expr("case when p>0 and pr=='null' then 'N' \
                                          when p=0 and Pr>0 then 'D' \
                                          when p=Pr then 'R' \
                                          else 'U' end"))
tt.show()
I am getting output like below
+---+---+----+----+----+
|cid|pid| p| Pr|flag|
+---+---+----+----+----+
|101| 14| 2| 2| R|
|101| 10| 4| 4| R|
|101| 11| 1|null| U|
|101| 12|null| 1| U|
|101| 13| 3| 3| R|
+---+---+----+----+----+
pyspark sqlfunction expr is not working as expected.
If p and Pr are the same, my flag should be 'R'.
If p has some value and Pr is null, my flag should be 'N'.
If p is null and Pr has some value, my flag should be 'D'.
In any other case, my flag should be 'U'.
In this case the expected output is:
+---+---+----+----+----+
|cid|pid| p| Pr|flag|
+---+---+----+----+----+
|101| 14| 2| 2| R|
|101| 10| 4| 4| R|
|101| 11| 1|null| N|
|101| 12|null| 1| D|
|101| 13| 3| 3| R|
+---+---+----+----+----+
The built-in isNull and isNotNull functions should solve your issue; they can be used in the query as:
tt=df5.withColumn('flag', sf.expr("case when isNotNull(`p`) and isNull(`pr`) then 'N'\
when isNull(`p`) and isNotNull(`Pr`) then 'D'\
when p=Pr then 'R'\
else 'U' end"))
Thus you should get
+---+---+----+----+----+
|cid|pid| p| Pr|flag|
+---+---+----+----+----+
|101| 14| 2| 2| R|
|101| 10| 4| 4| R|
|101| 11| 1|null| N|
|101| 12|null| 1| D|
|101| 13| 3| 3| R|
+---+---+----+----+----+
Note: na.fill(0) is useless as it is not applied since the columns are StringType()
I hope the answer is helpful
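
For reference, the same logic can also be written with the DataFrame API instead of expr; a sketch equivalent to the case expression above:

import pyspark.sql.functions as sf

tt = df5.withColumn(
    'flag',
    sf.when(sf.col('p').isNotNull() & sf.col('Pr').isNull(), 'N')
      .when(sf.col('p').isNull() & sf.col('Pr').isNotNull(), 'D')
      .when(sf.col('p') == sf.col('Pr'), 'R')
      .otherwise('U')
)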

Joining two data frames and result data frames contain non duplicate items in PySpark?

I have created two data frames by executing the commands below. I want to join the two data frames so that the result data frame contains the non-duplicate items from both, in PySpark.
df1 = sc.parallelize([
    ("a", 1, 1),
    ("b", 2, 2),
    ("d", 4, 2),
    ("e", 4, 1),
    ("c", 3, 4)]).toDF(['SID', 'SSection', 'SRank'])
df1.show()
+---+--------+-----+
|SID|SSection|SRank|
+---+--------+-----+
| a| 1| 1|
| b| 2| 2|
| d| 4| 2|
| e| 4| 1|
| c| 3| 4|
+---+--------+-----+
df2 is
df2 = sc.parallelize([
    ("a", 2, 1),
    ("b", 2, 3),
    ("f", 4, 2),
    ("e", 4, 1),
    ("c", 3, 4)]).toDF(['SID', 'SSection', 'SRank'])
+---+--------+-----+
|SID|SSection|SRank|
+---+--------+-----+
| a| 2| 1|
| b| 2| 3|
| f| 4| 2|
| e| 4| 1|
| c| 3| 4|
+---+--------+-----+
I want to join the above two tables like below:
+---+--------+----------+----------+
|SID|SSection|test1SRank|test2SRank|
+---+--------+----------+----------+
| f| 4| 0| 2|
| e| 4| 1| 1|
| d| 4| 2| 0|
| c| 3| 4| 4|
| b| 2| 2| 3|
| a| 1| 1| 0|
| a| 2| 0| 1|
+---+--------+----------+----------+
Doesn't look like something that can be achieved with a single join. Here's a solution involving multiple joins:
from pyspark.sql.functions import col

d1 = df1.unionAll(df2).select("SID", "SSection").distinct()
t1 = d1.join(df1, ["SID", "SSection"], "leftOuter").select(d1.SID, d1.SSection, col("SRank").alias("test1Srank"))
t2 = d1.join(df2, ["SID", "SSection"], "leftOuter").select(d1.SID, d1.SSection, col("SRank").alias("test2Srank"))
t1.join(t2, ["SID", "SSection"]).na.fill(0).show()
+---+--------+----------+----------+
|SID|SSection|test1Srank|test2Srank|
+---+--------+----------+----------+
| b| 2| 2| 3|
| c| 3| 4| 4|
| d| 4| 2| 0|
| e| 4| 1| 1|
| f| 4| 0| 2|
| a| 1| 1| 0|
| a| 2| 0| 1|
+---+--------+----------+----------+
You can simply rename the SRank columns, use an outer join, and apply the na.fill function:
df1.withColumnRenamed("SRank", "test1SRank").join(df2.withColumnRenamed("SRank", "test2SRank"), ["SID", "SSection"], "outer").na.fill(0)
