How to capture frequency of words after group by with pyspark - apache-spark

I have tabular data with keys and values, and the keys are not unique.
For example:
+-----+------+
| key | value|
+-----+------+
| 1 | the |
| 2 | i |
| 1 | me |
| 1 | me |
| 2 | book |
| 1 |table |
+-----+------+
Now assume this table is distributed across the different nodes in spark cluster.
How do I use pyspark to calculate the frequencies of the words with respect to the different keys? For instance, in the above example I wish to output:
+-----+------+-------------+
| key | value| frequencies |
+-----+------+-------------+
| 1 | the | 1/4 |
| 2 | i | 1/2 |
| 1 | me | 2/4 |
| 2 | book | 1/2 |
| 1 |table | 1/4 |
+-----+------+-------------+

Not sure if you can combine multi-level operations with DFs, but doing it in 2 steps and leaving concat to you, this works:
# Running in Databricks, not all stuff required
# You may want to do to upper or lowercase for better results.
from pyspark.sql import Row
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
from pyspark.sql.types import *
data = [("1", "the"), ("2", "I"), ("1", "me"),
("1", "me"), ("2", "book"), ("1", "table")]
rdd = sc.parallelize(data)
someschema = rdd.map(lambda x: Row(c1=x[0], c2=x[1]))
df = sqlContext.createDataFrame(someschema)
df1 = df.groupBy("c1", "c2") \
.count()
df2 = df1.groupBy('c1') \
.sum('count')
df3 = df1.join(df2,'c1')
df3.show()
returns:
+---+-----+-----+----------+
| c1| c2|count|sum(count)|
+---+-----+-----+----------+
| 1|table| 1| 4|
| 1| the| 1| 4|
| 1| me| 2| 4|
| 2| I| 1| 2|
| 2| book| 1| 2|
+---+-----+-----+----------+
You can reformat the last 2 cols, but I am curious whether it can all be done in 1 go. In normal SQL I suspect we would use inline views and combine them.
This works across the cluster as standard, which is what Spark is generally all about; the groupBy takes care of the distribution.
minor edit
As it is rather hot outside, I looked into this in a little more depth. This is a good overview: http://stevendavistechnotes.blogspot.com/2018/06/apache-spark-bi-level-aggregation.html. After reading it and experimenting, I could not get it any more elegant; reducing to 5 rows of output all in 1 go appears not to be possible.
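For completeness, a small sketch (reusing df3 from above, nothing new required) that concatenates the last two columns into the requested fraction string:

import pyspark.sql.functions as F

# Sketch only: turn count and sum(count) into a "count/total" string, e.g. "2/4".
df4 = df3.withColumn("frequencies",
                     F.concat_ws("/", df3["count"].cast("string"),
                                 df3["sum(count)"].cast("string"))) \
         .drop("count", "sum(count)")
df4.show()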

Another viable option is window functions.
First, compute the number of occurrences per (key, value) pair and per key. Then add another column with the fraction (note that the fractions come out reduced):
from pyspark.sql import Row
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
from pyspark.sql.window import Window
from pyspark.sql.types import *
from fractions import Fraction
from pyspark.sql.functions import udf
@udf(StringType())
def getFraction(value_occurrence, key_occurrence):
    # return the reduced fraction as a string, e.g. 2 and 4 -> "1/2"
    return str(Fraction(value_occurrence, key_occurrence))

schema = StructType([StructField("key", IntegerType(), True),
                     StructField("value", StringType(), True)])
data = [(1, "the"), (2, "I"), (1, "me"),
        (1, "me"), (2, "book"), (1, "table")]

spark = SparkSession.builder.appName('myPython').getOrCreate()
input_df = spark.createDataFrame(data, schema)

(input_df
 .withColumn("key_occurrence",
             F.count(F.lit(1)).over(Window.partitionBy(F.col("key"))))
 .withColumn("value_occurrence",
             F.count(F.lit(1)).over(Window.partitionBy(F.col("value"), F.col("key"))))
 .withColumn("frequency", getFraction(F.col("value_occurrence"), F.col("key_occurrence")))
 .dropDuplicates()
 .show())

Related

Mapping key and list of values to key value using pyspark

I have a dataset which consists of two columns, C1 and C2. The columns are associated in a many-to-many relation.
What I would like to do is find, for each C2 value, the C1 value which has the most associations with C2 values overall.
For example:
C1 | C2
1 | 2
1 | 5
1 | 9
2 | 9
2 | 8
We can see here that 1 is matched to 3 values of C2 while 2 is matched to 2, so I would like as output:
Out1 |Out2| matches
2 | 1 | 3
5 | 1 | 3
9 | 1 | 3 (1 wins because 3>2)
8 | 2 | 2
What I have done so far is:
dataset = sc.textFile("...") \
    .map(lambda line: (line.split(",")[0], [line.split(",")[1]])) \
    .reduceByKey(lambda x, y: x + y)
What this does is gather, for each C1 value, all the C2 matches; the length of this list is our desired matches column. What I would like now is to somehow use each value in this list as a new key and have a mapping like:
(Key ,Value_list[value1,value2,...]) -->(value1 , key ),(value2 , key)...
How could this be done using spark? Any advice would be really helpful.
Thanks in advance!
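For reference, the re-keying described above corresponds to an RDD flatMap; a rough sketch, assuming the dataset RDD built above:

# Sketch only: invert (c1, [c2, c2, ...]) into (c2, (c1, matches)) pairs,
# then keep, per c2, the c1 with the most matches.
inverted = dataset.flatMap(lambda kv: [(v, (kv[0], len(kv[1]))) for v in kv[1]])
best = inverted.reduceByKey(lambda a, b: a if a[1] >= b[1] else b)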
The dataframe API is perhaps easier for this kind of task. You can group by C1, get the count, then group by C2, and get the value of C1 that corresponds to the highest number of matches.
import pyspark.sql.functions as F

df = spark.read.csv('file.csv', header=True, inferSchema=True)

df2 = (df.groupBy('C1')
         .count()
         .join(df, 'C1')
         .groupBy(F.col('C2').alias('Out1'))
         .agg(
             F.max(
                 F.struct(F.col('count').alias('matches'), F.col('C1').alias('Out2'))
             ).alias('c')
         )
         .select('Out1', 'c.Out2', 'c.matches')
         .orderBy('Out1')
      )
df2.show()
+----+----+-------+
|Out1|Out2|matches|
+----+----+-------+
| 2| 1| 3|
| 5| 1| 3|
| 8| 2| 2|
| 9| 1| 3|
+----+----+-------+
We can get the desired result easily using the DataFrame API.
from pyspark.sql import *
import pyspark.sql.functions as fun
from pyspark.sql.window import Window

spark = SparkSession.builder.master("local[*]").getOrCreate()

# preparing sample dataframe
data = [(1, 2), (1, 5), (1, 9), (2, 9), (2, 8)]
schema = ["c1", "c2"]
df = spark.createDataFrame(data, schema)

# count matches per c1 with a window, then pick, per c2, the c1 with the most matches
# (max over a struct keeps matches and c1 together, so they stay consistent)
output = df.withColumn("matches", fun.count("c1").over(Window.partitionBy("c1"))) \
    .groupby(fun.col("c2").alias("out1")) \
    .agg(fun.max(fun.struct("matches", "c1")).alias("m")) \
    .select("out1", fun.col("m.c1").alias("out2"), "m.matches")
output.show()
# output
+----+----+-------+
|out1|out2|matches|
+----+----+-------+
| 9| 1| 3|
| 5| 1| 3|
| 8| 2| 2|
| 2| 1| 3|
+----+----+-------+

How to remove several rows in a Spark Dataframe based on the position (not value)?

I want to do some data preprocessing using pyspark and want to remove data at the beginning and end of the data in a dataframe. Let's say I want the first 30% and the last 30% of the data to be removed. I have only found possibilities based on values, using where, or for finding the first and the last row, but not for removing several rows by position. Here is the basic example so far, with no solution:
import pandas as pd
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("foo").getOrCreate()
cut_factor_start = 0.3 # factor to cut the beginning of the data
cut_factor_stop = 1-cut_factor_start # factor to cut the end of the data
# create pandas dataframe
df = pd.DataFrame({'part':['foo','foo','foo','foo','foo', 'foo'], 'values':[9,1,2,2,6,9]})
# convert to spark dataframe
df = spark.createDataFrame(df)
df.show()
+----+------+
|part|values|
+----+------+
| foo| 9|
| foo| 1|
| foo| 2|
| foo| 2|
| foo| 6|
| foo| 9|
+----+------+
df_length = df.count()
print('length of df: ' + str(df_length))
cut_start = round(df_length * cut_factor_start)
print('start position to cut: ' + str(cut_start))
cut_stop = round(df_length * cut_factor_stop)
print('stop position to cut: ' + str(cut_stop))
length of df: 6
start position to cut: 2
stop position to cut: 4
What I want, based on the calculations:
+----+------+
|part|values|
+----+------+
| foo| 1|
| foo| 2|
| foo| 2|
+----+------+
Another way is using between after assigning a row_number:
import pyspark.sql.functions as F
from pyspark.sql import Window
rnum = F.row_number().over(Window.orderBy(F.lit(0)))

output = (df.withColumn('Rnum', rnum)
            .filter(F.col("Rnum").between(cut_start, cut_stop))
            .drop('Rnum'))
output.show()
+----+------+
|part|values|
+----+------+
| foo| 1|
| foo| 2|
| foo| 2|
+----+------+
In Scala, a unique "id" column can be added and then the "limit" and "except" functions used:
val dfWithIds = df.withColumn("uniqueId", monotonically_increasing_id())

dfWithIds
  .limit(stopPositionToCut)
  .except(dfWithIds.limit(startPositionToCut - 1))
  .drop("uniqueId")

Get the distinct elements of a column grouped by another column on a PySpark Dataframe

I have a pyspark DF of ids and purchases which I'm trying to transform for use with FP-growth.
Currently I have multiple rows for a given id, with each row only relating to a single purchase.
I'd like to transform this dataframe to a form where there are two columns, one for id (with a single row per id) and the second column containing a list of distinct purchases for that id.
I've tried to use a User Defined Function (UDF) to map the distinct purchases onto the distinct ids, but I get "py4j.Py4JException: Method __getstate__([]) does not exist". Thanks to @Mithril,
I see that "You can't use sparkSession object, spark.DataFrame object or other Spark distributed objects in udf and pandas_udf, because they are unpickled."
So I've implemented the TERRIBLE approach below (which will work but is not scalable):
#Lets create some fake transactions
customers = [1,2,3,1,1]
purschases = ['cake','tea','beer','fruit','cake']
# Lets create a spark DF to capture the transactions
transactions = zip(customers,purschases)
spk_df_1 = spark.createDataFrame(list(transactions) , ["id", "item"])
# Lets have a look at the resulting spark dataframe
spk_df_1.show()
# Lets capture the ids and list of their distinct pruschases in a
# list of tuples
purschases_lst = []
nums1 = []
import pandas as pd
import pyspark.sql.functions as f

# register the dataframe so it can be queried with SQL
spk_df_1.createOrReplaceTempView("TBLdf")

# for each distinct id lets get the list of their distinct purschases
for id in spark.sql("SELECT distinct(id) FROM TBLdf").rdd.map(lambda row: row[0]).collect():
    purschase = spk_df_1.filter(f.col("id") == id).select("item").distinct().rdd.map(lambda row: row[0]).collect()
    nums1.append((id, purschase))
# Lets see what our list of transaction tuples looks like
print(nums1)
print("\n")
# lets turn the list of transaction tuples into a pandas dataframe
df_pd = pd.DataFrame(nums1)
# Finally lets turn our pandas dataframe into a pyspark Dataframe
df2 = spark.createDataFrame(df_pd)
df2.show()
Output:
+---+-----+
| id| item|
+---+-----+
| 1| cake|
| 2| tea|
| 3| beer|
| 1|fruit|
| 1| cake|
+---+-----+
[(1, ['fruit', 'cake']), (3, ['beer']), (2, ['tea'])]
+---+-------------+
| 0| 1|
+---+-------------+
| 1|[fruit, cake]|
| 3| [beer]|
| 2| [tea]|
+---+-------------+
If anybody has any suggestions I'd greatly appreciate it.
That is a task for collect_set, which creates a set of items without duplicates:
import pyspark.sql.functions as F
#Lets create some fake transactions
customers = [1,2,3,1,1]
purschases = ['cake','tea','beer','fruit','cake']
# Lets create a spark DF to capture the transactions
transactions = zip(customers,purschases)
spk_df_1 = spark.createDataFrame(list(transactions) , ["id", "item"])
spk_df_1.show()
spk_df_1.groupby('id').agg(F.collect_set('item')).show()
Output:
+---+-----+
| id| item|
+---+-----+
| 1| cake|
| 2| tea|
| 3| beer|
| 1|fruit|
| 1| cake|
+---+-----+
+---+-----------------+
| id|collect_set(item)|
+---+-----------------+
| 1| [fruit, cake]|
| 3| [beer]|
| 2| [tea]|
+---+-----------------+
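Since the end goal is FP-growth, the grouped column can be fed straight into pyspark.ml.fpm; a sketch with made-up thresholds:

from pyspark.ml.fpm import FPGrowth

baskets = spk_df_1.groupby('id').agg(F.collect_set('item').alias('items'))

# minSupport / minConfidence values here are placeholders, not recommendations.
fp = FPGrowth(itemsCol='items', minSupport=0.2, minConfidence=0.5)
model = fp.fit(baskets)
model.freqItemsets.show()
model.associationRules.show()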

Parsing through rows and isolating student records from Spark Dataframe

My student database has multiple records for each student in the table Student.
I am reading the data into a Spark dataframe and then want to iterate through the dataframe, isolate the records for each student, and do some processing on each student's records.
My code so far:
from pyspark.sql import SparkSession

spark_session = SparkSession \
    .builder \
    .appName("app") \
    .config("spark.jars.packages", "org.apache.hadoop:hadoop-aws:2.7.2") \
    .getOrCreate()

class_3A = spark_session.sql("SQL")

for row in class_3A:
    # for each student
    # Print Name, Age and Subject Marks
How do I do this?
Another approach would be to use SparkSQL
>>> df = spark.createDataFrame([('Ankit',25),('Jalfaizy',22),('Suresh',20),('Bala',26)],['name','age'])
>>> df.show()
+--------+---+
| name|age|
+--------+---+
| Ankit| 25|
|Jalfaizy| 22|
| Suresh| 20|
| Bala| 26|
+--------+---+
>>> df.where('age > 20').show()
+--------+---+
| name|age|
+--------+---+
| Ankit| 25|
|Jalfaizy| 22|
| Bala| 26|
+--------+---+
>>> from pyspark.sql.functions import *
>>> df.select('name', col('age') + 100).show()
+--------+-----------+
| name|(age + 100)|
+--------+-----------+
| Ankit| 125|
|Jalfaizy| 122|
| Suresh| 120|
| Bala| 126|
+--------+-----------+
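The same operations can also be written as literal SQL against a temporary view; a sketch on the same df:

>>> df.createOrReplaceTempView("students")
>>> spark.sql("SELECT * FROM students WHERE age > 20").show()
>>> spark.sql("SELECT name, age + 100 FROM students").show()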
Imperative approach (in addition to Bala's SQL approach):
class_3A = spark_session.sql("SQL")

def process_student(student_row):
    # Do something with student_row
    return processed_student_row

# "isolate records for each student"
# Each student record will be passed to the process_student function for processing.
# Results will be accumulated to a new RDD - result_rdd
# (DataFrames have no map in PySpark, so go through .rdd)
result_rdd = class_3A.rdd.map(process_student)

# If you don't care about results and just want to do some processing:
class_3A.foreach(process_student)
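As a concrete illustration only (the columns name, age and marks and the pass threshold are assumed, not taken from the question):

from pyspark.sql import Row

def process_student(student_row):
    # Example processing: flag whether the student passed (threshold is made up).
    return Row(name=student_row['name'],
               age=student_row['age'],
               passed=student_row['marks'] >= 40)

result_df = class_3A.rdd.map(process_student).toDF()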
You can loop through each record in a dataframe and access it by column name:
from pyspark.sql import Row
from pyspark.sql.functions import *
l = [('Ankit',25),('Jalfaizy',22),('Suresh',20),('Bala',26)]
rdd = sc.parallelize(l)
people = rdd.map(lambda x: Row(name=x[0], age=int(x[1])))
schemaPeople = spark.createDataFrame(people)
schemaPeople.show(10, False)
for row in schemaPeople.rdd.collect():
    print("Hi " + str(row.name) + " your age is : " + str(row.age))
This will produce an output as below
+---+--------+
|age|name |
+---+--------+
|25 |Ankit |
|22 |Jalfaizy|
|20 |Suresh |
|26 |Bala |
+---+--------+
Hi Ankit your age is : 25
Hi Jalfaizy your age is : 22
Hi Suresh your age is : 20
Hi Bala your age is : 26
So you can do your processing or some logic that you need to perform on each record of your dataframe.
Not sure if I understand the question right, but if you want to perform operations on rows based on any column, you can do that using dataframe functions. Example:
from pyspark.sql import SparkSession
import pyspark.sql.functions as f
from pyspark.sql import Window
sc = SparkSession.builder.appName("example") \
    .config("spark.driver.memory", "1g") \
    .config("spark.executor.cores", 2) \
    .config("spark.cores.max", 4).getOrCreate()

# inferSchema so that marks is read as a number rather than a string
df1 = sc.read.format("csv").option("header", "true").option("inferSchema", "true").load("test.csv")

w = Window.partitionBy("student_id")
df2 = df1.groupBy("student_id").agg(f.sum(df1["marks"]).alias("total"))
df3 = df1.withColumn("max_marks_inanysub", f.max(df1["marks"]).over(w))
df3 = df3.filter(df3["marks"] == df3["max_marks_inanysub"])
df1.show()
df3.show()
sample data
student_id,subject,marks
1,maths,3
1,science,6
2,maths,4
2,science,7
output
+----------+-------+-----+
|student_id|subject|marks|
+----------+-------+-----+
| 1| maths| 3|
| 1|science| 6|
| 2| maths| 4|
| 2|science| 7|
+----------+-------+-----+
+----------+-------+-----+------------------+
|student_id|subject|marks|max_marks_inanysub|
+----------+-------+-----+------------------+
| 1|science| 6| 6|
| 2|science| 7| 7|
+----------+-------+-----+------------------+
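If "isolating the records for each student" simply means gathering them onto one row, collect_list does that without any driver-side loop; a sketch on the same df1:

df4 = df1.groupBy("student_id").agg(
    f.collect_list(f.struct("subject", "marks")).alias("records"),
    f.sum("marks").alias("total"))
df4.show(truncate=False)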

Spark filter multiple group of rows to a single row

I am trying to achieve the following.
Let's say I have a dataframe with the following columns:
id | name | alias
-------------------
1 | abc | short
1 | abc | ailas-long-1
1 | abc | another-long-alias
2 | xyz | short_alias
2 | xyz | same_length
3 | def | alias_1
I want to group by id and name and select the shortest alias.
The output I am expecting is
id | name | alias
-------------------
1 | abc | short
2 | xyz | short_alias
3 | def | alias_1
I can achieve this using window and row_number; is there any other efficient method to get the same result? In general, the third-column filter condition can be anything; in this case it is the length of the field.
Any help would be much appreciated.
Thank you.
All you need to do is use the inbuilt length function and use it in a window function, as below:
from pyspark.sql import functions as f
from pyspark.sql import Window
windowSpec = Window.partitionBy('id', 'name').orderBy('length')
df.withColumn('length', f.length('alias')) \
  .withColumn('length', f.row_number().over(windowSpec)) \
  .filter(f.col('length') == 1) \
  .drop('length') \
  .show(truncate=False)
which should give you
+---+----+-----------+
|id |name|alias |
+---+----+-----------+
|3 |def |alias_1 |
|1 |abc |short |
|2 |xyz |short_alias|
+---+----+-----------+
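As another route without a window function, a min over a struct gives the same result in a single aggregation; a sketch on the same df:

from pyspark.sql import functions as f

# Sketch only: structs compare field by field, so the smallest length wins
# and its alias comes along in the same struct.
shortest = (df.groupBy('id', 'name')
              .agg(f.min(f.struct(f.length('alias').alias('len'), f.col('alias'))).alias('m'))
              .select('id', 'name', 'm.alias'))
shortest.show(truncate=False)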
A solution without a window (not very pretty..) plus what is, in my opinion, the easiest option, an RDD solution:
from pyspark.sql import functions as F
from pyspark.sql import HiveContext
hiveCtx = HiveContext(sc)
rdd = sc.parallelize([(1 , "abc" , "short-alias"),
(1 , "abc" , "short"),
(1 , "abc" , "ailas-long-1"),
(1 , "abc" , "another-long-alias"),
(2 , "xyz" , "same_length"),
(2 , "xyz" , "same_length1"),
(3 , "def" , "short_alias") ])
df = hiveCtx.createDataFrame(rdd, ["id", "name", "alias"])

len_df = df.groupBy(["id", "name"]).agg(F.min(F.length("alias")).alias("alias_len"))
df = df.withColumn("alias_len", F.length("alias"))
cond = ["alias_len", "id", "name"]
df.join(len_df, cond).show()

print(rdd.map(lambda x: ((x[0], x[1]), x[2]))
         .reduceByKey(lambda x, y: x if len(x) < len(y) else y).collect())
Output:
+---------+---+----+-----------+
|alias_len| id|name| alias|
+---------+---+----+-----------+
| 11| 3| def|short_alias|
| 11| 2| xyz|same_length|
| 5| 1| abc| short|
+---------+---+----+-----------+
[((2, 'xyz'), 'same_length'), ((3, 'def'), 'short_alias'), ((1, 'abc'), 'short')]
