I have a DataFrame (df), and within it a column user_id:
df = sc.parallelize([(1, "not_set"),
                     (2, "user_001"),
                     (3, "user_002"),
                     (4, "n/a"),
                     (5, "N/A"),
                     (6, "userid_not_set"),
                     (7, "user_003"),
                     (8, "user_004")]).toDF(["key", "user_id"])
df:
+---+--------------+
|key| user_id|
+---+--------------+
| 1| not_set|
| 2| user_001|
| 3| user_002|
| 4| n/a|
| 5| N/A|
| 6|userid_not_set|
| 7| user_003|
| 8| user_004|
+---+--------------+
I would like to replace the following values: not_set, n/a, N/A and userid_not_set with null.
It would be good if I could add any new values to a list and they too could be changed.
I am currently using a CASE statement within spark.sql to perform this and would like to change it to PySpark.
None inside the when() function corresponds to null. If you want to fill in something other than null, put that value in its place.
from pyspark.sql.functions import col, when

df = df.withColumn(
    "user_id",
    when(
        col("user_id").isin('not_set', 'n/a', 'N/A', 'userid_not_set'),
        None
    ).otherwise(col("user_id"))
)
df.show()
+---+--------+
|key| user_id|
+---+--------+
| 1| null|
| 2|user_001|
| 3|user_002|
| 4| null|
| 5| null|
| 6| null|
| 7|user_003|
| 8|user_004|
+---+--------+
You can use the in-built when function, which is the equivalent of a case expression.
from pyspark.sql import functions as f
df.select(df.key, f.when(df.user_id.isin(['not_set', 'n/a', 'N/A', 'userid_not_set']), None).otherwise(df.user_id)).show()
The values to replace can also be stored in a list and referenced:
val_list = ['not_set', 'n/a', 'N/A', 'userid_not_set']
df.select(df.key, f.when(df.user_id.isin(val_list), None).otherwise(df.user_id)).show()
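Since 'n/a' and 'N/A' differ only in case, a possible simplification (not in the original answer) is to lowercase the column before matching, so the list only needs lowercase entries:
from pyspark.sql import functions as f

val_list = ['not_set', 'n/a', 'userid_not_set']  # lowercase variants only
df.select(
    df.key,
    f.when(f.lower(df.user_id).isin(val_list), None)
     .otherwise(df.user_id)
     .alias('user_id')
).show()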
Please find below a few approaches. I am assuming that all the legitimate user IDs start with "user_". Please try the code below.
from pyspark.sql.functions import *
df.withColumn(
    "user_id",
    when(col("user_id").startswith("user_"), col("user_id")).otherwise(None)
).show()
Another one.
cond = """case when user_id in ('not_set', 'n/a', 'N/A', 'userid_not_set') then null
else user_id
end"""
df.withColumn("ID", expr(cond)).show()
Another one.
cond = """case when user_id like 'user_%' then user_id
else null
end"""
df.withColumn("ID", expr(cond)).show()
Another one.
df.withColumn(
    "user_id",
    when(col("user_id").rlike("user_"), col("user_id")).otherwise(None)
).show()
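Note that rlike matches the pattern anywhere in the string, so anchoring it makes the check equivalent to startswith; a minimal sketch of that variant:
from pyspark.sql.functions import col, when

df.withColumn(
    "user_id",
    when(col("user_id").rlike("^user_"), col("user_id")).otherwise(None)
).show()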
A related question:
In Scala Spark we can filter rows where column A is not equal to column B in the same DataFrame, as:
df.filter(col("A") =!= col("B"))
How can we do the same in PySpark?
I have tried different options like
df.filter(~(df["A"] == df["B"])) and the != operator, but got errors.
Take a look at this snippet:
df = spark.createDataFrame([(1, 2), (1, 1)], "id: int, val: int")
df.show()
+---+---+
| id|val|
+---+---+
| 1| 2|
| 1| 1|
+---+---+
from pyspark.sql.functions import col
df.filter(col("id") != col("val")).show()
+---+---+
| id|val|
+---+---+
| 1| 2|
+---+---+
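For what it is worth, the negated-equality form the question tried also works in PySpark; a minimal check on the same example:
df.filter(~(df["id"] == df["val"])).show()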
The code below works fine, but if any one of the five columns SAL1, SAL2, SAL3, SAL4, SAL5 is NULL, the corresponding TOTAL_SALARY comes out as NULL.
It looks like some null handling or a Spark UDF is needed; could you please help with that?
input:
NO NAME ADDR SAL1 SAL2 SAL3 SAL4 SAL5
1 ABC IND 100 200 300 null 400
2 XYZ USA 200 333 209 232 444
The second record's sum comes out fine, but for the first record the output is null because of the null in SAL4.
from pyspark.shell import spark
from pyspark.sql import functions as F
from pyspark.sql.types import StringType
sc = spark.sparkContext
df = spark.read.option("header","true").option("delimiter", ",").csv("C:\\TEST.txt")
df.createOrReplaceTempView("table1")
df1 = spark.sql( "select * from table1" )
df2 = df1.groupBy('NO', 'NAME', 'ADDR').agg(F.sum(df1.SAL1 + df1.SAL2 + df1.SAL3 + df1.SAL4 + df1.SAL5).alias("TOTAL_SALARY"))
df2.show()
Thanks in advance
Just put a na.fill(0) in your code. This would replace the NULL values with 0 and you should be able to perform the operation.
So your last line should look like:
df2 = df1.na.fill(0).groupBy('NO', 'NAME', 'ADDR').agg(F.sum(df1.SAL1 + df1.SAL2 + df1.SAL3 + df1.SAL4 + df1.SAL5).alias("TOTAL_SALARY"))
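A minimal variant of the same idea that avoids referencing df1's columns on the filled DataFrame (it assumes the SAL columns are numeric, e.g. the file was read with inferSchema):
from pyspark.sql import functions as F

df2 = (df1.na.fill(0)
          .groupBy('NO', 'NAME', 'ADDR')
          .agg(F.sum(F.col('SAL1') + F.col('SAL2') + F.col('SAL3')
                     + F.col('SAL4') + F.col('SAL5')).alias('TOTAL_SALARY')))
df2.show()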
It also seems that the sum function should be able to handle Null values correctly. I just tested the following code:
from pyspark.sql.functions import col, sum

df_new = spark.createDataFrame([
    (1, 4), (2, None), (3, None), (4, None),
    (5, 5), (6, None), (7, None), (1, 4), (2, 8), (3, 9), (4, 1), (1, 2), (2, 1), (3, 3), (4, 7),
], ("customer_id", "balance"))
df_new.groupBy("customer_id").agg(sum(col("balance"))).show()
df_new.na.fill(0).groupBy("customer_id").agg(sum(col("balance"))).show()
Output:
+-----------+------------+
|customer_id|sum(balance)|
+-----------+------------+
| 7| null|
| 6| null|
| 5| 5|
| 1| 10|
| 3| 12|
| 2| 9|
| 4| 8|
+-----------+------------+
+-----------+------------+
|customer_id|sum(balance)|
+-----------+------------+
| 7| 0|
| 6| 0|
| 5| 5|
| 1| 10|
| 3| 12|
| 2| 9|
| 4| 8|
+-----------+------------+
Version 1 only yields NULL when all values in the group are NULL.
Version 2 returns 0 instead, since all NULL values were replaced with 0s.
The code below checks all five SAL fields: if a field is null it is replaced with 0, otherwise the original value is kept.
from pyspark.sql.functions import when, lit

df1 = df.withColumn("SAL1", when(df.SAL1.isNull(), lit(0)).otherwise(df.SAL1))\
        .withColumn("SAL2", when(df.SAL2.isNull(), lit(0)).otherwise(df.SAL2))\
        .withColumn("SAL3", when(df.SAL3.isNull(), lit(0)).otherwise(df.SAL3))\
        .withColumn("SAL4", when(df.SAL4.isNull(), lit(0)).otherwise(df.SAL4))\
        .withColumn("SAL5", when(df.SAL5.isNull(), lit(0)).otherwise(df.SAL5))
df1 has fields id and json; df2 has fields id and json.
df1.count() => 1200; df2.count() => 20
df1 has all the rows. df2 has an incremental update with just 20 rows.
My goal is to update df1 with the values from df2. All the ids of df2 are in df1, but df2 has updated values (in the json field) for those same ids.
The resulting df should have all the values from df1 and the updated values from df2.
What is the best way to do this, with the least number of joins and filters?
Thanks!
You can achieve this using one left join.
Create Example DataFrames
Using the sample data provided by Shankar Koirala in his answer.
data1 = [
(1, "a"),
(2, "b"),
(3, "c")
]
df1 = sqlCtx.createDataFrame(data1, ["id", "value"])
data2 = [
(1, "x"),
(2, "y")
]
df2 = sqlCtx.createDataFrame(data2, ["id", "value"])
Do a left join
Join the two DataFrames using a left join on the id column. This will keep all of the rows in the left DataFrame. For the rows in the right DataFrame that don't have a matching id, the value will be null.
import pyspark.sql.functions as f
df1.alias('l').join(df2.alias('r'), on='id', how='left')\
.select(
'id',
f.col('l.value').alias('left_value'),
f.col('r.value').alias('right_value')
)\
.show()
#+---+----------+-----------+
#| id|left_value|right_value|
#+---+----------+-----------+
#| 1| a| x|
#| 3| c| null|
#| 2| b| y|
#+---+----------+-----------+
Select the desired data
We will use the fact that the unmatched ids have a null to select the final columns. Use pyspark.sql.functions.when() to use the right value if it is not null, otherwise keep the left value.
df1.alias('l').join(df2.alias('r'), on='id', how='left')\
.select(
'id',
f.when(
~f.isnull(f.col('r.value')),
f.col('r.value')
).otherwise(f.col('l.value')).alias('value')
)\
.show()
#+---+-----+
#| id|value|
#+---+-----+
#| 1| x|
#| 3| c|
#| 2| y|
#+---+-----+
You can sort this output if you want the ids in order.
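As a more compact alternative (not part of the original answer), f.coalesce expresses the same "right value if present, else left value" logic:
import pyspark.sql.functions as f

df1.alias('l').join(df2.alias('r'), on='id', how='left')\
    .select('id', f.coalesce(f.col('r.value'), f.col('l.value')).alias('value'))\
    .show()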
Using pyspark-sql
You can do the same thing using a pyspark-sql query:
df1.registerTempTable('df1')
df2.registerTempTable('df2')
query = """SELECT l.id,
CASE WHEN r.value IS NOT NULL THEN r.value ELSE l.value END AS value
FROM df1 l LEFT JOIN df2 r ON l.id = r.id"""
sqlCtx.sql(query.replace("\n", "")).show()
#+---+-----+
#| id|value|
#+---+-----+
#| 1| x|
#| 3| c|
#| 2| y|
#+---+-----+
I would like to provide a slightly more general solution. What happens if the input data has 100 columns instead of 2? We would spend too much time making a coalesce of those 100 columns to keep the values on the right side of the left join.
Another way to solve this problem would be to "delete" the updated rows from the original df and finally make a union with the updated rows.
data_original = spark.createDataFrame([
(1, "a"),
(2, "b"),
(3, "c")
], ("id", "value"))
data_updated = spark.createDataFrame([
(1, "x"),
(2, "y")
], ("id", "value"))
data_original.show()
+---+-----+
| id|value|
+---+-----+
| 1| a|
| 2| b|
| 3| c|
+---+-----+
data_updated.show()
+---+-----+
| id|value|
+---+-----+
| 1| x|
| 2| y|
+---+-----+
data_original.createOrReplaceTempView("data_original")
data_updated.createOrReplaceTempView("data_updated")
src_data_except_updated = spark.sql("SELECT * FROM data_original WHERE id not in (1,2)")
result_data = src_data_except_updated.union(data_updated)
result_data.show()
+---+-----+
| id|value|
+---+-----+
| 3| c|
| 1| x|
| 2| y|
+---+-----+
Notice that the query
SELECT * FROM data_original WHERE id not in (1,2)
could be generated automatically:
ids_collect = spark.sql("SELECT id FROM data_updated").collect()
ids_list = [f"{x.id}" for x in ids_collect]
ids_str = ",".join(ids_list)
query_get_all_except = f"SELECT * FROM data_original WHERE id not in ({ids_str})"
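A possible alternative that avoids collecting the ids to the driver (not part of the original answer) is an anti join followed by the same union:
result_data = (data_original.join(data_updated, on="id", how="left_anti")
                            .union(data_updated))
result_data.show()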
I have a Spark sql dataframe, consisting of an ID column and n "data" columns, i.e.
id | dat1 | dat2 | ... | datn
The id column is uniquely determined, whereas, looking at dat1 ... datn, there may be duplicates.
My goal is to find the ids of those duplicates.
My approach so far:
get the duplicate rows using groupBy:
dup_df = df.groupBy(df.columns[1:]).count().filter('count > 1')
join the dup_df with the entire df to get the duplicate rows including id:
df.join(dup_df, df.columns[1:])
I am quite certain that this is basically correct; it fails because the dat1 ... datn columns contain null values.
To do the join on null values, I found e.g. this SO post. But this would require constructing a huge "string join condition".
Thus my questions:
Is there a simple / more generic / more pythonic way to do joins on null values?
Or, even better, is there another (easier, more beautiful, ...) method to get the desired ids?
BTW: I am using Spark 2.1.0 and Python 3.5.3
If the number of ids per group is relatively small, you can groupBy and collect_list. Required imports:
from pyspark.sql.functions import collect_list, size
example data:
df = sc.parallelize([
(1, "a", "b", 3),
(2, None, "f", None),
(3, "g", "h", 4),
(4, None, "f", None),
(5, "a", "b", 3)
]).toDF(["id"])
query:
(df
.groupBy(df.columns[1:])
.agg(collect_list("id").alias("ids"))
.where(size("ids") > 1))
and the result:
+----+---+----+------+
| _2| _3| _4| ids|
+----+---+----+------+
|null| f|null|[2, 4]|
| a| b| 3|[1, 5]|
+----+---+----+------+
You can apply explode (or use a udf) to the ids column to get an output equivalent to the one returned from the join.
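A minimal sketch of that step, assuming the grouped result above is assigned to a variable, say grouped:
from pyspark.sql.functions import collect_list, explode, size

grouped = (df
    .groupBy(df.columns[1:])
    .agg(collect_list("id").alias("ids"))
    .where(size("ids") > 1))

# one output row per duplicate id
grouped.withColumn("id", explode("ids")).drop("ids").show()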
You can also identify groups using the minimal id per group. A few additional imports:
from pyspark.sql.window import Window
from pyspark.sql.functions import col, count, min
window definition:
w = Window.partitionBy(df.columns[1:])
query:
(df
.select(
"*",
count("*").over(w).alias("_cnt"),
min("id").over(w).alias("group"))
.where(col("_cnt") > 1))
and the result:
+---+----+---+----+----+-----+
| id| _2| _3| _4|_cnt|group|
+---+----+---+----+----+-----+
| 2|null| f|null| 2| 2|
| 4|null| f|null| 2| 2|
| 1| a| b| 3| 2| 1|
| 5| a| b| 3| 2| 1|
+---+----+---+----+----+-----+
You can further use the group column for a self join.
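For that self join (or the original join on columns that may contain nulls), a null-safe condition can be built programmatically; a minimal sketch, assuming Spark 2.3+ where Column.eqNullSafe is available:
from functools import reduce
from pyspark.sql.functions import col

data_cols = df.columns[1:]
dup_df = df.groupBy(data_cols).count().where(col("count") > 1)

l, r = df.alias("l"), dup_df.alias("r")
cond = reduce(
    lambda a, b: a & b,
    # null-safe equality on every data column
    [col("l." + c).eqNullSafe(col("r." + c)) for c in data_cols]
)
l.join(r, cond).select("l.*").show()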
I am working on a PySpark DataFrame with n columns. I have a set of m columns (m < n), and my task is to take, for each row, the maximum value across those columns.
For example:
Input: PySpark DataFrame containing :
col_1 = [1,2,3], col_2 = [2,1,4], col_3 = [3,2,5]
Output:
col_4 = max(col1, col_2, col_3) = [3,2,5]
There is something similar in pandas as explained in this question.
Is there any way of doing this in PySpark, or should I convert my PySpark df to a Pandas df and then perform the operations?
You can reduce using SQL expressions over a list of columns:
from pyspark.sql.functions import max as max_, col, when
from functools import reduce
def row_max(*cols):
    return reduce(
        lambda x, y: when(x > y, x).otherwise(y),
        [col(c) if isinstance(c, str) else c for c in cols]
    )
df = (sc.parallelize([(1, 2, 3), (2, 1, 2), (3, 4, 5)])
.toDF(["a", "b", "c"]))
df.select(row_max("a", "b", "c").alias("max"))
Spark 1.5+ also provides least and greatest:
from pyspark.sql.functions import greatest
df.select(greatest("a", "b", "c"))
If you want to keep the name of the max column, you can use structs:
from pyspark.sql.functions import struct, lit
def row_max_with_name(*cols):
    cols_ = [struct(col(c).alias("value"), lit(c).alias("col")) for c in cols]
    return greatest(*cols_).alias("greatest({0})".format(",".join(cols)))
maxs = df.select(row_max_with_name("a", "b", "c").alias("maxs"))
Finally, you can use the above to find and select the "top" column, i.e. the column that most often holds the row maximum:
from pyspark.sql.functions import max
((_, c), ) = (maxs
.groupBy(col("maxs")["col"].alias("col"))
.count()
.agg(max(struct(col("count"), col("col"))))
.first())
df.select(c)
We can use greatest
Creating DataFrame
df = spark.createDataFrame(
[[1,2,3], [2,1,2], [3,4,5]],
['col_1','col_2','col_3']
)
df.show()
+-----+-----+-----+
|col_1|col_2|col_3|
+-----+-----+-----+
| 1| 2| 3|
| 2| 1| 2|
| 3| 4| 5|
+-----+-----+-----+
Solution
from pyspark.sql.functions import greatest
df2 = df.withColumn('max_by_rows', greatest('col_1', 'col_2', 'col_3'))
#Only if you need col
#from pyspark.sql.functions import col
#df2 = df.withColumn('max', greatest(col('col_1'), col('col_2'), col('col_3')))
df2.show()
+-----+-----+-----+-----------+
|col_1|col_2|col_3|max_by_rows|
+-----+-----+-----+-----------+
| 1| 2| 3| 3|
| 2| 1| 2| 2|
| 3| 4| 5| 5|
+-----+-----+-----+-----------+
For the row-wise minimum, you can similarly use the built-in least:
from pyspark.sql.functions import least, col
df = df.withColumn('min', least(col('c1'), col('c2'), col('c3')))
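One behavioural note (not from the original answer): greatest and least skip NULLs and return NULL only when every input is NULL, which matters if the columns can contain missing values. A quick check:
from pyspark.sql.functions import greatest, lit

spark.range(1).select(
    greatest(lit(None).cast("int"), lit(3), lit(5)).alias("g")
).show()
# expected: 5, since the NULL input is ignored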
Another simple way of doing it. Let us say that the df below is your DataFrame:
df = sc.parallelize([(10, 10, 1 ), (200, 2, 20), (3, 30, 300), (400, 40, 4)]).toDF(["c1", "c2", "c3"])
df.show()
+---+---+---+
| c1| c2| c3|
+---+---+---+
| 10| 10| 1|
|200| 2| 20|
| 3| 30|300|
|400| 40| 4|
+---+---+---+
You can process the above df as below to get the desired result (each column's minimum along with its name):
from pyspark.sql.functions import lit, min
df.select( lit('c1').alias('cn1'), min(df.c1).alias('c1'),
lit('c2').alias('cn2'), min(df.c2).alias('c2'),
lit('c3').alias('cn3'), min(df.c3).alias('c3')
)\
.rdd.flatMap(lambda r: [ (r.cn1, r.c1), (r.cn2, r.c2), (r.cn3, r.c3)])\
.toDF(['Columnn', 'Min']).show()
+-------+---+
|Columnn|Min|
+-------+---+
| c1| 3|
| c2| 2|
| c3| 1|
+-------+---+
Scala solution (note that x.min below compares the values as strings):
val df = sc.parallelize(Seq((10, 10, 1), (200, 2, 20), (3, 30, 300), (400, 40, 4))).toDF("c1", "c2", "c3")
df.rdd.map(row => List[String](row(0).toString, row(1).toString, row(2).toString)).map(x => (x(0), x(1), x(2), x.min)).toDF("c1", "c2", "c3", "min").show
+---+---+---+---+
| c1| c2| c3|min|
+---+---+---+---+
| 10| 10| 1| 1|
|200| 2| 20| 2|
| 3| 30|300| 3|
|400| 40| 4| 4|
+---+---+---+---+