I have two dataframes, df1 and df2, that look something like this:
import pandas as pd
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("someAppname").getOrCreate()
df1 = spark.createDataFrame(pd.DataFrame({"entity_nm": ["Joe B", "Donald", "Barack Obama"]}))
df2 = spark.createDataFrame(pd.DataFrame({"aliases": ["Joe Biden; Biden Joe", "Donald Trump; Donald J. Trump", "Barack Obama", "Joe Burrow"], "id": [1, 2, 3, 4]}))
I want to join df2 to df1 based on a string contains match. It works when I do it like this:
df_joined = df1.join(df2, df2.aliases.contains(df1.entity_nm), how="left")
That join gives me my desired result:
+------------+--------------------+---+
|   entity_nm|             aliases| id|
+------------+--------------------+---+
|       Joe B|Joe Biden; Biden Joe|  1|
|       Joe B|          Joe Burrow|  4|
|      Donald|Donald Trump; Don...|  2|
|Barack Obama|        Barack Obama|  3|
+------------+--------------------+---+
The problem: when I try this with 60k entity names in df1 and around 6 million aliases in df2, it runs forever and at some point the Spark session just crashes with memory errors. I'm pretty sure my approach is very naive and far from optimized.
I've read this blog post, which suggests using a UDF, but I don't have any Scala knowledge and struggle to understand and recreate it in PySpark.
Any suggestions or help on how to optimize my approach? I need to do tasks like this a lot, so any help would be greatly appreciated.
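One common way to speed up this kind of fuzzy join is blocking: join on a cheap exact key first, and only evaluate the expensive contains() condition on the resulting candidate pairs. The sketch below is only a hypothetical illustration of that idea (it is not taken from the post or its answers), and the block_key / df1_k / df2_k names are made up for the example. It assumes the first word of entity_nm also appears as a word in any matching alias, which holds for the sample data but would need to be checked against the real data:

import pyspark.sql.functions as F

# Blocking key: the first word of the entity name. Under the stated assumption,
# any alias row that does not contain this word can never be a contains() match.
df1_k = df1.withColumn("block_key", F.split("entity_nm", "\\s+").getItem(0))
df2_k = df2.withColumn("block_key", F.explode(F.split("aliases", "[;\\s]+")))

# Cheap equi-join on the blocking key, then verify the expensive condition
# on the much smaller candidate set.
candidates = df1_k.join(df2_k, "block_key", "inner")
df_joined = (candidates
             .filter(F.col("aliases").contains(F.col("entity_nm")))
             .drop("block_key")
             .dropDuplicates(["entity_nm", "id"]))

Entities with no match drop out of the inner join; if the left-join behaviour matters, they can be added back afterwards, e.g. with a left anti join against the result.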
I am using PySpark and have obtained the table below, table_1:
+--------+------------------+---------------+----------+-----+
|  Number|       Institution|       datetime|      date| time|
+--------+------------------+---------------+----------+-----+
|AE19075B|               ABC| 7/20/2019 7:45|07/20/2019| 7:45|
|AE11688U|               CBT|2/11/2019 20:31|02/11/2019|20:31|
+--------+------------------+---------------+----------+-----+
I would like to add a column to table_1 containing the time lagged by 15 minutes, like this:
+--------+------------------+---------------+----------+-----+-----+
|  Number|       Institution|       datetime|      date| time| lag1|
+--------+------------------+---------------+----------+-----+-----+
|AE19075B|               ABC| 7/20/2019 7:45|07/20/2019| 7:45| 7:30|
|AE11688U|               CBT|2/11/2019 20:31|02/11/2019|20:31|20:16|
+--------+------------------+---------------+----------+-----+-----+
from datetime import datetime, timedelta
table_2 = table_1.withColumn('lag1', (datetime.strptime(table_1['time'], '%H:%M') - timedelta(minutes=15)).strftime('%H:%M'))
The code above works when applied to a string, but I have no idea why it cannot be applied to the table in this case. It raises the error 'TypeError: strptime() argument 1 must be str, not Column'. Is there any method to obtain a string from a column in PySpark? Thanks!
You can't use Python functions directly on Spark dataframe columns. You can use Spark SQL functions instead, as shown below:
import pyspark.sql.functions as F
df2 = df.withColumn(
'lag1',
F.expr("date_format(to_timestamp(time, 'H:m') - interval 15 minute, 'H:m')")
)
df2.show()
+--------+-----------+---------------+----------+-----+-----+
|  Number|Institution|       datetime|      date| time| lag1|
+--------+-----------+---------------+----------+-----+-----+
|AE19075B|        ABC| 7/20/2019 7:45|07/20/2019| 7:45| 7:30|
|AE11688U|        CBT|2/11/2019 20:31|02/11/2019|20:31|20:16|
+--------+-----------+---------------+----------+-----+-----+
Alternatively, you can call the Python function as a UDF (but the performance should be worse than calling Spark SQL functions directly):
import pyspark.sql.functions as F
from datetime import datetime, timedelta
lag = F.udf(lambda t: (datetime.strptime(t, '%H:%M') - timedelta(minutes=15)).strftime('%H:%M'))
df2 = df.withColumn('lag1', lag('time'))
df2.show()
+--------+-----------+---------------+----------+-----+-----+
|  Number|Institution|       datetime|      date| time| lag1|
+--------+-----------+---------------+----------+-----+-----+
|AE19075B|        ABC| 7/20/2019 7:45|07/20/2019| 7:45|07:30|
|AE11688U|        CBT|2/11/2019 20:31|02/11/2019|20:31|20:16|
+--------+-----------+---------------+----------+-----+-----+
I have this command, which I repeat for every column in my dataframe, to round to 2 decimal places:
data = data.withColumn("columnName1", func.round(data["columnName1"], 2))
I have no idea how to round the whole DataFrame with one command (rather than each column separately). Could somebody help me, please? I don't want to repeat the same command 50 times with a different column name.
There is no single function or command that rounds every column at once, but you can iterate over the columns.
+-----+-----+
| col1| col2|
+-----+-----+
|1.111|2.222|
+-----+-----+
from pyspark.sql.functions import round

df = spark.read.option("header", "true").option("inferSchema", "true").csv("test.csv")
for c in df.columns:
    df = df.withColumn(c, round(c, 2))
df.show()
+----+----+
|col1|col2|
+----+----+
|1.11|2.22|
+----+----+
To avoid touching non-floating-point columns, round only the double and float columns:
import pyspark.sql.functions as F

for c_name, c_type in df.dtypes:
    if c_type in ('double', 'float'):
        df = df.withColumn(c_name, F.round(c_name, 2))
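If you would rather avoid the explicit loop, the same rounding can be done in a single select with a list comprehension; this is just an alternative sketch of the loop above, not a separate Spark feature:

import pyspark.sql.functions as F

# Rebuild the DataFrame in one pass: round the floating-point columns,
# pass every other column through unchanged.
df = df.select([
    F.round(F.col(c), 2).alias(c) if t in ('double', 'float') else F.col(c)
    for c, t in df.dtypes
])

A single select builds the whole projection at once instead of chaining one withColumn call per column.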
I have a dataframe as below:
+----------+----------+--------+
|     FNAME|     LNAME|     AGE|
+----------+----------+--------+
|      EARL|     JONES|      35|
|      MARK|      WOOD|      20|
+----------+----------+--------+
I am trying to add a new VALUE column to this dataframe, which should look like this:
+----------+----------+--------+-------------------------------------------+
|     FNAME|     LNAME|     AGE|                                      VALUE|
+----------+----------+--------+-------------------------------------------+
|      EARL|     JONES|      35|{"FNAME":"EARL","LNAME":"JONES","AGE":"35"}|
|      MARK|      WOOD|      20| {"FNAME":"MARK","LNAME":"WOOD","AGE":"20"}|
+----------+----------+--------+-------------------------------------------+
I am not able to achieve this using withColumn or any JSON function.
Any head start would be appreciated.
Spark: 2.3
Python: 3.7.x
Please consider using the SQL function to_json, which you can find in org.apache.spark.sql.functions.
Here's the solution:
df.withColumn("VALUE", to_json(struct($"FNAME", $"LNAME", $"AGE")))
You can also avoid specifying the column names explicitly, as follows:
df.withColumn("VALUE", to_json(struct(df.columns.map(col): _*)))
PS: the code I provided is written in Scala, but the logic is the same in Python; you just have to use the Spark SQL function, which is available in both programming languages.
I hope it helps.
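For reference, a rough PySpark equivalent of the first Scala snippet (column names taken from the question) might look like this:

import pyspark.sql.functions as F

# Build a struct from the named columns and serialize it to a JSON string
df = df.withColumn("VALUE", F.to_json(F.struct("FNAME", "LNAME", "AGE")))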
Scala solution:
val df2 = df.select(
  to_json(
    map_from_arrays(lit(df.columns), array('*))
  ).as("value")
)
Python solution (I don't know how to do it for n columns like in Scala, because map_from_arrays is not available in PySpark before 2.4):
import pyspark.sql.functions as f
df.select(f.to_json(
f.create_map(f.lit("FNAME"), df.FNAME, f.lit("LNAME"), df.LNAME, f.lit("AGE"), df.AGE)
).alias("value")
).show(truncate=False)
Output:
+-------------------------------------------+
|value                                      |
+-------------------------------------------+
|{"FNAME":"EARL","LNAME":"JONES","AGE":"35"}|
|{"FNAME":"MARK","LNAME":"WOOD","AGE":"20"} |
+-------------------------------------------+
Achieved using:
df.withColumn("VALUE", to_json(struct([df[x] for x in df.columns])))
Let's say I have a rather large dataset in the following form:
data = sc.parallelize([('Foo',41,'US',3),
('Foo',39,'UK',1),
('Bar',57,'CA',2),
('Bar',72,'CA',2),
('Baz',22,'US',6),
('Baz',36,'US',6)])
What I would like to do is remove duplicate rows based on the values of the first, third and fourth columns only.
Removing entirely duplicate rows is straightforward:
data = data.distinct()
and either row 5 or row 6 will be removed
But how do I remove duplicate rows based on columns 1, 3 and 4 only? I.e., remove either one of these:
('Baz',22,'US',6)
('Baz',36,'US',6)
In pandas, this could be done by specifying columns with .drop_duplicates(). How can I achieve the same in Spark/PySpark?
PySpark does include a dropDuplicates() method, which was introduced in 1.4: https://spark.apache.org/docs/3.1.2/api/python/reference/api/pyspark.sql.DataFrame.dropDuplicates.html
>>> from pyspark.sql import Row
>>> df = sc.parallelize([ \
... Row(name='Alice', age=5, height=80), \
... Row(name='Alice', age=5, height=80), \
... Row(name='Alice', age=10, height=80)]).toDF()
>>> df.dropDuplicates().show()
+---+------+-----+
|age|height| name|
+---+------+-----+
|  5|    80|Alice|
| 10|    80|Alice|
+---+------+-----+
>>> df.dropDuplicates(['name', 'height']).show()
+---+------+-----+
|age|height| name|
+---+------+-----+
|  5|    80|Alice|
+---+------+-----+
From your question, it is unclear as to which columns you want to use to determine duplicates. The general idea behind the solution is to create a key based on the values of the columns that identify duplicates. Then, you can use the reduceByKey or reduce operations to eliminate duplicates.
Here is some code to get you started:
def get_key(x):
    return "{0}{1}{2}".format(x[0], x[2], x[3])

m = data.map(lambda x: (get_key(x), x))
Now, you have a key-value RDD that is keyed by columns 1, 3 and 4.
The next step would be either a reduceByKey or groupByKey and filter.
This would eliminate duplicates.
r = m.reduceByKey(lambda x, y: x)
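To get plain rows back out of the reduced RDD, drop the synthetic key again; a small continuation of the snippet above, reusing its variable names (the deduped name is just illustrative):

# r holds (key, row) pairs after the reduce; keep only the rows.
deduped = r.values()
deduped.collect()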
I know you already accepted the other answer, but if you want to do this as a
DataFrame, just use groupBy and agg. Assuming you had a DF already created (with columns named "col1", "col2", etc) you could do:
myDF.groupBy($"col1", $"col3", $"col4").agg($"col1", max($"col2"), $"col3", $"col4")
Note that in this case, I chose the Max of col2, but you could do avg, min, etc.
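Since the question is about PySpark, a rough equivalent of the Scala line above (same assumed column names, again taking the max of col2) might look like this:

import pyspark.sql.functions as F

# Group on the columns that define a duplicate and pick one value per group
deduped = myDF.groupBy("col1", "col3", "col4").agg(F.max("col2").alias("col2"))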
Agree with David. To add on, we may not always want to groupBy all columns other than the column(s) in the aggregate function, i.e., we may want to remove duplicates purely based on a subset of columns while retaining all columns of the original dataframe. In that case, the better way is to use the dropDuplicates DataFrame API, available since Spark 1.4.0.
For reference, see: https://spark.apache.org/docs/1.4.0/api/scala/index.html#org.apache.spark.sql.DataFrame
I used the built-in function dropDuplicates(). The Scala code is given below:
val data = sc.parallelize(List(("Foo",41,"US",3),
("Foo",39,"UK",1),
("Bar",57,"CA",2),
("Bar",72,"CA",2),
("Baz",22,"US",6),
("Baz",36,"US",6))).toDF("x","y","z","count")
data.dropDuplicates(Array("x","count")).show()
Output:
+---+---+---+-----+
|  x|  y|  z|count|
+---+---+---+-----+
|Baz| 22| US|    6|
|Foo| 39| UK|    1|
|Foo| 41| US|    3|
|Bar| 57| CA|    2|
+---+---+---+-----+
The program below will help you drop duplicates on the whole DataFrame, or, if you want to drop duplicates based on certain columns, you can do that too:
import org.apache.spark.sql.SparkSession
object DropDuplicates {
  def main(args: Array[String]) {
    val spark =
      SparkSession.builder()
        .appName("DataFrame-DropDuplicates")
        .master("local[4]")
        .getOrCreate()

    import spark.implicits._

    // create an RDD of tuples with some data
    val custs = Seq(
      (1, "Widget Co", 120000.00, 0.00, "AZ"),
      (2, "Acme Widgets", 410500.00, 500.00, "CA"),
      (3, "Widgetry", 410500.00, 200.00, "CA"),
      (4, "Widgets R Us", 410500.00, 0.0, "CA"),
      (3, "Widgetry", 410500.00, 200.00, "CA"),
      (5, "Ye Olde Widgete", 500.00, 0.0, "MA"),
      (6, "Widget Co", 12000.00, 10.00, "AZ")
    )
    val customerRows = spark.sparkContext.parallelize(custs, 4)

    // convert RDD of tuples to DataFrame by supplying column names
    val customerDF = customerRows.toDF("id", "name", "sales", "discount", "state")

    println("*** Here's the whole DataFrame with duplicates")
    customerDF.printSchema()
    customerDF.show()

    // drop fully identical rows
    val withoutDuplicates = customerDF.dropDuplicates()
    println("*** Now without duplicates")
    withoutDuplicates.show()

    val withoutPartials = customerDF.dropDuplicates(Seq("name", "state"))
    println("*** Now without partial duplicates too")
    withoutPartials.show()
  }
}
This is my df: it contains 4 twice, so dropDuplicates will remove the repeated value.
scala> df.show
+-----+
|value|
+-----+
|    1|
|    4|
|    3|
|    5|
|    4|
|   18|
+-----+
scala> val newdf = df.dropDuplicates
scala> newdf.show
+-----+
|value|
+-----+
|    1|
|    3|
|    5|
|    4|
|   18|
+-----+