how to get the k-largest element and index in a pyspark dataframe array - apache-spark

I have the following dataframe in pyspark:
+------------------------------------------------------------+
|probability |
+------------------------------------------------------------+
|[0.27047928569511825,0.5312608102025099,0.19825990410237174]|
|[0.06711381377029987,0.8775456658890036,0.05534052034069637]|
|[0.10847074295048188,0.04602848157663474,0.8455007754728833]|
+------------------------------------------------------------+
and I want to get the largest and second-largest values along with their indexes:
+------------------------------------------------------------+------------------+-------+-------------------+-------+
|probability                                                 |largest_1         |index_1|largest_2          |index_2|
+------------------------------------------------------------+------------------+-------+-------------------+-------+
|[0.27047928569511825,0.5312608102025099,0.19825990410237174]|0.5312608102025099|1      |0.27047928569511825|0      |
|[0.06711381377029987,0.8775456658890036,0.05534052034069637]|0.8775456658890036|1      |0.06711381377029987|0      |
|[0.10847074295048188,0.04602848157663474,0.8455007754728833]|0.8455007754728833|2      |0.10847074295048188|0      |
+------------------------------------------------------------+------------------+-------+-------------------+-------+

Here is one way using transform (requires Spark 2.4+): convert the array of doubles into an array of structs holding each value and its index in the original array, sort_array in descending order, then take the first N:
from pyspark.sql.functions import expr

df.withColumn('d', expr('sort_array(transform(probability, (x, i) -> (x as val, i as idx)), False)')) \
  .selectExpr(
      'probability',
      'd[0].val as largest_1',
      'd[0].idx as index_1',
      'd[1].val as largest_2',
      'd[1].idx as index_2'
  ).show(truncate=False)
+--------------------------------------------------------------+------------------+-------+-------------------+-------+
|probability |largest_1 |index_1|largest_2 |index_2|
+--------------------------------------------------------------+------------------+-------+-------------------+-------+
|[0.27047928569511825, 0.5312608102025099, 0.19825990410237174]|0.5312608102025099|1 |0.27047928569511825|0 |
|[0.06711381377029987, 0.8775456658890036, 0.05534052034069637]|0.8775456658890036|1 |0.06711381377029987|0 |
|[0.10847074295048188, 0.04602848157663474, 0.8455007754728833]|0.8455007754728833|2 |0.10847074295048188|0 |
+--------------------------------------------------------------+------------------+-------+-------------------+-------+
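For intuition, the same pair-with-index, sort-descending, take-top-N idea can be sketched in plain Python (not Spark code, just the underlying logic):

```python
def top_n_with_index(values, n=2):
    # Pair each value with its index, sort by value descending, keep the first n
    pairs = sorted(enumerate(values), key=lambda p: p[1], reverse=True)
    return [(val, idx) for idx, val in pairs[:n]]

top_n_with_index([0.27047928569511825, 0.5312608102025099, 0.19825990410237174])
# [(0.5312608102025099, 1), (0.27047928569511825, 0)]
```

This is exactly what the transform expression builds on the executors, one struct array per row.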

From Spark 2.4+
You can use the array_sort and array_position built-in functions for this case.
Example:
df = spark.sql("""select array(0.27047928569511825,0.5312608102025099,0.19825990410237174) probability
            union select array(0.06711381377029987,0.8775456658890036,0.05534052034069637) probability
            union select array(0.10847074295048188,0.04602848157663474,0.8455007754728833) probability""")
#DataFrame[probability: array<decimal(17,17)>]
#sample data
df.show(10,False)
#+---------------------------------------------------------------+
#|probability |
#+---------------------------------------------------------------+
#|[0.06711381377029987, 0.87754566588900360, 0.05534052034069637]|
#|[0.27047928569511825, 0.53126081020250990, 0.19825990410237174]|
#|[0.10847074295048188, 0.04602848157663474, 0.84550077547288330]|
#+---------------------------------------------------------------+
df.withColumn("sort_arr",array_sort(col("probability"))).\
withColumn("largest_1",element_at(col("sort_arr"),-1)).\
withColumn("largest_2",element_at(col("sort_arr"),-2)).\
selectExpr("*","array_position(probability,largest_1) -1 index_1","array_position(probability,largest_2) -1 index_2").\
drop("sort_arr").\
show(10,False)
#+---------------------------------------------------------------+-------------------+-------------------+-------+-------+
#|probability |largest_1 |largest_2 |index_1|index_2|
#+---------------------------------------------------------------+-------------------+-------------------+-------+-------+
#|[0.06711381377029987, 0.87754566588900360, 0.05534052034069637]|0.87754566588900360|0.06711381377029987|1 |0 |
#|[0.27047928569511825, 0.53126081020250990, 0.19825990410237174]|0.53126081020250990|0.27047928569511825|1 |0 |
#|[0.10847074295048188, 0.04602848157663474, 0.84550077547288330]|0.84550077547288330|0.10847074295048188|2 |0 |
#+---------------------------------------------------------------+-------------------+-------------------+-------+-------+
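One caveat with the array_position approach, illustrated in plain Python since list.index behaves the same way: it returns the first match, so tied values collapse to the same index (the struct-based transform approach does not have this problem):

```python
probs = [0.4, 0.4, 0.2]
sorted_desc = sorted(probs, reverse=True)          # [0.4, 0.4, 0.2]
largest_1, largest_2 = sorted_desc[0], sorted_desc[1]

# Like array_position, list.index reports the FIRST occurrence
index_1 = probs.index(largest_1)  # 0
index_2 = probs.index(largest_2)  # also 0 -- the duplicate at index 1 is never reported
```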

Related

How to filter text after some stop word?

I have a text. From each line I want to filter out everything after some stop word. For example:
stop_words = ['with', 'is', '/']
One of the rows is:
senior manager with experience
I want to remove everything after with (including with), so the output is:
senior manager
I am working with big data in Spark with Python.
You can find the location of the stop words using instr, and get a substring up to that location.
import pyspark.sql.functions as F

stop_words = ['with', 'is', '/']

df = spark.createDataFrame([
    ['senior manager with experience'],
    ['is good'],
    ['xxx//'],
    ['other text']
]).toDF('col')

df.show(truncate=False)
+------------------------------+
|col                           |
+------------------------------+
|senior manager with experience|
|is good                       |
|xxx//                         |
|other text                    |
+------------------------------+

df2 = df.withColumn('idx',
    F.coalesce(
        # Get the smallest index of a stop word in the string
        F.least(*[F.when(F.instr('col', s) != 0, F.instr('col', s)) for s in stop_words]),
        # If no stop words are found, keep the whole string
        F.length('col') + 1)
).selectExpr('trim(substring(col, 1, idx - 1)) col')
df2.show()
+--------------+
| col|
+--------------+
|senior manager|
| |
| xxx|
| other text|
+--------------+
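The least/coalesce logic above translates to plain Python roughly like this (a sketch mirroring instr's 1-based positions; note that 'is' matches as a substring anywhere, which is why 'is good' becomes empty):

```python
stop_words = ['with', 'is', '/']

def truncate_at_stop_word(text, stop_words):
    # 1-based positions like Spark's instr(); only stop words that occur count
    positions = [text.find(s) + 1 for s in stop_words if s in text]
    # No stop word found -> position past the end, i.e. keep the whole string
    idx = min(positions) if positions else len(text) + 1
    return text[:idx - 1].strip()

truncate_at_stop_word('senior manager with experience', stop_words)  # 'senior manager'
truncate_at_stop_word('other text', stop_words)                      # 'other text'
```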
You can use a udf to get the index of the first occurrence of a stop word in col, then another udf to take the substring (Scala):
val stop_words = List("with", "is", "/")

val df = List("senior manager with experience", "is good", "xxx//", "other text").toDF("col")

val index_udf = udf((col_value: String) => {
  val result = for (elem <- stop_words; if col_value.contains(elem)) yield col_value.indexOf(elem)
  if (result.isEmpty) col_value.length else result.min
})

val substr_udf = udf((elem: String, index: Int) => elem.substring(0, index))

val df3 = df.withColumn("index", index_udf($"col"))
  .withColumn("substr_message", substr_udf($"col", $"index"))
  .select($"substr_message")
  .withColumnRenamed("substr_message", "col")

df3.show()
+---------------+
| col|
+---------------+
|senior manager |
| |
| xxx|
| other text|
+---------------+

Trouble splitting a column into more columns on Pyspark

I'm having trouble splitting a dataframe's column into more columns in PySpark:
I have a list of lists and I want to transform it into a dataframe, each value in one column.
What I have tried:
I created a dataframe from this list:
[['COL-4560', 'COL-9655', 'NWG-0610', 'D81-3754'],
['DLL-7760', 'NAT-9885', 'PED-0550', 'MAR-0004', 'LLL-5554']]
Using this code:
from pyspark.sql import Row
R = Row('col1', 'col2')
# use enumerate to add the ID column
df_from_list = spark.createDataFrame([R(i, x) for i, x in enumerate(recs_list)])
The result I got is:
+----+--------------------+
|col1| col2|
+----+--------------------+
| 0|[COL-4560, COL-96...|
| 1|[DLL-7760, NAT-98...|
+----+--------------------+
I want to separate the values by comma into columns, so I tried:
from pyspark.sql import functions as F
df2 = df_from_list.select('col1', F.split('col2', ', ').alias('col2'))
# If you don't know the number of columns:
df_sizes = df2.select(F.size('col2').alias('col2'))
df_max = df_sizes.agg(F.max('col2'))
nb_columns = df_max.collect()[0][0]
df_result = df2.select('col1', *[df2['col2'][i] for i in range(nb_columns)])
df_result.show()
But I get an error on this line df2 = df_from_list.select('col1', F.split('col2', ', ').alias('col2')):
AnalysisException: cannot resolve 'split(`col2`, ', ', -1)' due to data type mismatch: argument 1 requires string type, however, '`col2`' is of array<string> type.;;
My ideal final output would be like this:
+----------+----------+----------+----------+----------+
| SKU | REC_01 | REC_02 | REC_03 | REC_04 |
+----------+----------+----------+----------+----------+
| COL-4560 | COL-9655 | NWG-0610 | D81-3754 | null |
| DLL-7760 | NAT-9885 | PED-0550 | MAR-0004 | LLL-5554 |
+----------+----------+----------+----------+----------+
Some rows may have four values, but some may have more or fewer; I don't know the exact number of columns the final dataframe will have.
Does anyone have any idea of what is happening? Thank you very much in advance.
In df_from_list the col2 column is already of array type, so there is no need to split it (split works on string type, but here we have an array type).
Here are the steps that will work for you.
recs_list = [['COL-4560', 'COL-9655', 'NWG-0610', 'D81-3754'],
             ['DLL-7760', 'NAT-9885', 'PED-0550', 'MAR-0004', 'LLL-5554']]

from pyspark.sql import Row
R = Row('col1', 'col2')
# use enumerate to add the ID column
df_from_list = spark.createDataFrame([R(i, x) for i, x in enumerate(recs_list)])

from pyspark.sql import functions as F
df2 = df_from_list

# If you don't know the number of columns:
df_sizes = df2.select(F.size('col2').alias('col2'))
df_max = df_sizes.agg(F.max('col2'))
nb_columns = df_max.collect()[0][0]

cols = ['SKU', 'REC_01', 'REC_02', 'REC_03', 'REC_04']
df_result = df2.select(*[df2['col2'][i] for i in range(nb_columns)]).toDF(*cols)
df_result.show()
#+--------+--------+--------+--------+--------+
#| SKU| REC_01| REC_02| REC_03| REC_04|
#+--------+--------+--------+--------+--------+
#|COL-4560|COL-9655|NWG-0610|D81-3754| null|
#|DLL-7760|NAT-9885|PED-0550|MAR-0004|LLL-5554|
#+--------+--------+--------+--------+--------+
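If the number of REC_ columns is not known up front, the hard-coded cols list can itself be generated from nb_columns (the 'REC_{:02d}' name pattern is just an assumption matching the desired output):

```python
nb_columns = 5  # stands in for df_max.collect()[0][0]

# First array slot becomes SKU, the rest become REC_01, REC_02, ...
cols = ['SKU'] + ['REC_{:02d}'.format(i) for i in range(1, nb_columns)]
# ['SKU', 'REC_01', 'REC_02', 'REC_03', 'REC_04']
```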

PySpark ETL Update Dataframe

I have two PySpark dataframes and I want to update the "target" dataframe with the "staging" one according to the key...
What is the best optimized way to do this in PySpark?
target
+---+-----------------------+------+------+
|key|updated_timestamp |field0|field1|
+---+-----------------------+------+------+
|005|2019-10-26 21:02:30.638|cdao |coaame|
|001|2019-10-22 13:02:30.638|aaaaaa|fsdc |
|002|2019-12-22 11:42:30.638|stfi |? |
|004|2019-10-21 14:02:30.638|ct |ome |
|003|2019-10-24 21:02:30.638|io |me |
+---+-----------------------+------+------+
staging
+---+-----------------------+----------+---------+
|key|updated_timestamp |field0 |field1 |
+---+-----------------------+----------+---------+
|006|2020-03-06 01:42:30.638|new record|xxaaame |
|005|2019-10-29 09:42:30.638|cwwwwdao |coaaaaame|
|004|2019-10-29 21:03:35.638|cwwwwdao |coaaaaame|
+---+-----------------------+----------+---------+
output dataframe
+---+-----------------------+----------+---------+
|key|updated_timestamp |field0 |field1 |
+---+-----------------------+----------+---------+
|005|2019-10-29 09:42:30.638|cwwwwdao |coaaaaame|
|001|2019-10-22 13:02:30.638|aaaaaa |fsdc |
|002|2019-12-22 11:42:30.638|stfi |? |
|004|2019-10-29 21:03:35.638|cwwwwdao |coaaaaame|
|003|2019-10-24 21:02:30.638|io |me |
|006|2020-03-06 01:42:30.638|new record|xxaaame |
+---+-----------------------+----------+---------+
There are several ways to achieve that. Here is one using a full outer join:
from pyspark.sql import functions as F

output = staging.join(
    target,
    on='key',
    how='full'
).select(
    *(
        F.coalesce(staging[col], target[col]).alias(col)
        for col in staging.columns
    )
)
This works only if the updated value is not NULL.
Another solution uses union with a left anti join:
output = staging.union(
    target.join(
        staging,
        on="key",
        how="left_anti"
    )
)
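Both variants implement the same keyed upsert. In plain Python dictionary terms the semantics are (a sketch, with one value per key for brevity):

```python
target  = {'005': 'cdao', '001': 'aaaaaa'}
staging = {'005': 'cwwwwdao', '006': 'new record'}

# staging wins on key collisions, untouched target rows survive,
# and brand-new staging keys are appended -- same as the join/union above
output = {**target, **staging}
# {'005': 'cwwwwdao', '001': 'aaaaaa', '006': 'new record'}
```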

how to execute several functions on each column dynamically?

I am using spark-sql-2.4.1v with Java 8.
I have the following scenario:
val df = Seq(
  ("0.9192019", "0.1992019", "0.9955999"),
  ("0.9292018", "0.2992019", "0.99662018"),
  ("0.9392017", "0.3992019", "0.99772000")).toDF("item1_value", "item2_value", "item3_value")
  .withColumn("item1_value", $"item1_value".cast(DoubleType))
  .withColumn("item2_value", $"item2_value".cast(DoubleType))
  .withColumn("item3_value", $"item3_value".cast(DoubleType))

df.show(20)
I need the expected output to be something like this:
-----------------------------------------------------------------------------------
col_name | sum_of_column | avg_of_column | vari_of_column
-----------------------------------------------------------------------------------
"item1_value" | sum("item1_value") | avg("item1_value") | variance("item1_value")
"item2_value" | sum("item2_value") | avg("item2_value") | variance("item2_value")
"item3_value" | sum("item3_value") | avg("item3_value") | variance("item3_value")
----------------------------------------------------------------------------------
How to achieve this dynamically? Tomorrow I may have more columns.
This is sample code that can achieve this. You can make the column list dynamic and add more functions if needed.
import org.apache.spark.sql.types._
import org.apache.spark.sql.functions._
import org.apache.spark.sql.Column

val df = Seq(
  ("0.9192019", "0.1992019", "0.9955999"),
  ("0.9292018", "0.2992019", "0.99662018"),
  ("0.9392017", "0.3992019", "0.99772000"))
  .toDF("item1_value", "item2_value", "item3_value")
  .withColumn("item1_value", $"item1_value".cast(DoubleType))
  .withColumn("item2_value", $"item2_value".cast(DoubleType))
  .withColumn("item3_value", $"item3_value".cast(DoubleType))

val aggregateColumns = Seq("item1_value", "item2_value", "item3_value")

val aggDFs = aggregateColumns.map { c =>
  df.groupBy().agg(lit(c).as("col_name"), sum(c).as("sum_of_column"), avg(c).as("avg_of_column"), variance(c).as("var_of_column"))
}

val combinedDF = aggDFs.reduce(_ union _)
This returns the following output:
scala> df.show(10,false)
+-----------+-----------+-----------+
|item1_value|item2_value|item3_value|
+-----------+-----------+-----------+
|0.9192019 |0.1992019 |0.9955999 |
|0.9292018 |0.2992019 |0.99662018 |
|0.9392017 |0.3992019 |0.99772 |
+-----------+-----------+-----------+
scala> combinedDF.show(10,false)
+-----------+------------------+------------------+---------------------+
|col_name |sum_of_column |avg_of_column |var_of_column |
+-----------+------------------+------------------+---------------------+
|item1_value|2.7876054 |0.9292018 |9.999800000999957E-5 |
|item2_value|0.8976057000000001|0.2992019 |0.010000000000000002 |
|item3_value|2.9899400800000002|0.9966466933333334|1.1242332201333484E-6|
+-----------+------------------+------------------+---------------------+
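As a sanity check on those numbers, note that Spark's variance is the sample variance, which matches statistics.variance in Python's standard library. A plain-Python mirror of the per-column aggregation (not Spark code, just verifying the arithmetic):

```python
from statistics import mean, variance

columns = {
    'item1_value': [0.9192019, 0.9292018, 0.9392017],
    'item2_value': [0.1992019, 0.2992019, 0.3992019],
    'item3_value': [0.9955999, 0.99662018, 0.99772000],
}

# One output row per column: (name, sum, avg, sample variance)
rows = [(name, sum(vals), mean(vals), variance(vals))
        for name, vals in columns.items()]
```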

Printing a list of dictionaries as a table

How can I format the data below into tabular form using Python? Is there any way to print/write the data as per the expected format?
[{"itemcode":null,"productname":"PKS543452","value_2018":null},
{"itemcode":null,"productname":"JHBG6%&9","value_2018":null},
{"itemcode":null,"productname":"VATER3456","value_2018":null},
{"itemcode":null,"productname":"ACDFER3434","value_2018":null}]
Expected output:
|itemcode | Productname | Value_2018 |
|null |PKS543452|null|
|null |JHBG6%&9|null|
|null |VATER3456|null|
|null |ACDFER3434|null|
You can use pandas to generate a dataframe from the list of dictionaries:
import pandas as pd

null = "null"  # placeholder so the pasted JSON-style literal is valid Python
lst = [{"itemcode": null, "productname": "PKS543452", "value_2018": null},
       {"itemcode": null, "productname": "JHBG6%&9", "value_2018": null},
       {"itemcode": null, "productname": "VATER3456", "value_2018": null},
       {"itemcode": null, "productname": "ACDFER3434", "value_2018": null}]

df = pd.DataFrame.from_dict(lst)
print(df)
Output:
itemcode productname value_2018
0 null PKS543452 null
1 null JHBG6%&9 null
2 null VATER3456 null
3 null ACDFER3434 null
This makes it easy to manipulate data in the table later on. Otherwise, you can print your desired output using built-in string methods:
col_names = '|' + ' | '.join(lst[0].keys()) + '|'
print(col_names)
for dic in lst:
    row = '|' + ' | '.join(dic.values()) + '|'
    print(row)
Output:
|itemcode | productname | value_2018|
|null | PKS543452 | null|
|null | JHBG6%&9 | null|
|null | VATER3456 | null|
|null | ACDFER3434 | null|
You can try this as well (without using pandas). Every line in the code is commented, so don't forget to read the comments.
Note: the list/array you pasted is either the result of json.dumps() (in Python, text) or a copied API response (JSON).
null comes from JavaScript, and the pasted list/array is not a valid Python list, but it can be treated as text and converted back to a Python list using json.loads(). In that case, null is converted to None.
That's why, to produce the wanted output, we need a check like "null" if d[key] is None else d[key].
import json
# `null` is used in JavaScript (JSON is JavaScript), so I considered it as string
json_text = """[{"itemcode":null,"productname":"PKS543452","value_2018":null},
{"itemcode":null,"productname":"JHBG6%&9","value_2018":null},
{"itemcode":null,"productname":"VATER3456","value_2018":null},
{"itemcode":null,"productname":"ACDFER3434","value_2018":null}]"""
# Will contain the rows (text)
texts = []
# Converting to original list object, `null`(JavaScript) will transform to `None`(Python)
l = json.loads(json_text)
# Obtain the keys once, so every dictionary in the list is read in the same key order
# and related data lines up even if columns end up in a different position.
# If you wish you can hard-code the keys; here I get them via l[0].keys()
keys = l[0].keys()
# Form header and add to `texts` list
header = '|' + ' | '.join(keys) + " |"
texts.append(header)
# Form body (rows) and append to `texts` list
rows = ['| ' + "|".join(["null" if d[key] is None else d[key] for key in keys]) + "|" for d in l]
texts.extend(rows)
# Print all rows (including header) separated by newline '\n'
answer = '\n'.join(texts)
print(answer)
Output
|itemcode | productname | value_2018 |
| null|PKS543452|null|
| null|JHBG6%&9|null|
| null|VATER3456|null|
| null|ACDFER3434|null|
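If aligned columns matter (as in the expected output), the cell widths can be computed first; a small sketch without pandas, using hard-coded sample rows:

```python
header = ["itemcode", "productname", "value_2018"]
rows = [["null", "PKS543452", "null"],
        ["null", "JHBG6%&9", "null"]]

# Width of each column = widest cell in that column (header included)
widths = [max(len(str(c)) for c in col) for col in zip(header, *rows)]
lines = [" | ".join(str(c).ljust(w) for c, w in zip(row, widths))
         for row in [header] + rows]
print("\n".join(lines))
```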
