Redistribute elements of an array into multiple columns in a dataframe - apache-spark

I intend to create a dataframe from an array, with the elements of the array distributed sequentially across multiple columns.
For example:
var A = Array(1,2,4,21,2,4,34,2,24,2,4,24,5,8,4,2,1,1)
var B = sc.parallelize(A.grouped(3).toList).map(Tuple1(_)).toDF("values")
Above results in
+-----------+
|     values|
+-----------+
|  [1, 2, 4]|
| [21, 2, 4]|
|[34, 2, 24]|
| [2, 4, 24]|
|  [5, 8, 4]|
|  [2, 1, 1]|
+-----------+
But I need these 3 elements in 3 different columns.
Please suggest a solution that does not hard-code the number of elements (3 here).

The basic problem is that you are creating a Tuple1, which is a single element. Had you used x => Tuple3(x(0), x(1), x(2)), that would have solved it for the case of 3.
If you do not want to hardcode, you can do something like this:
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.udf
import spark.implicits._ // for the $"..." column syntax

def addColumns(num: Int, origDF: DataFrame): DataFrame = {
  var df = origDF
  // add one column per array position: col_0 ... col_(num - 1)
  for (x <- 0 until num) {
    df = df.withColumn(s"col_$x", udf((y: Seq[Int]) => y(x))($"values"))
  }
  df
}
This will extract the relevant elements into their own columns (you might also want to drop the original values column afterwards).
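For comparison only (this sketch is not part of the original answer), the same reshaping can be done in PySpark without a UDF by indexing into the array column; spark is assumed to be an existing SparkSession:
from pyspark.sql import functions as F

A = [1, 2, 4, 21, 2, 4, 34, 2, 24, 2, 4, 24, 5, 8, 4, 2, 1, 1]
B = spark.createDataFrame([(A[i:i + 3],) for i in range(0, len(A), 3)], ["values"])
num = 3
# values[i] (getItem) pulls out one element per column, so no UDF and no loop over withColumn is needed
B.select(*[F.col("values")[i].alias(f"col_{i}") for i in range(num)]).show()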

Related

Get unique elements for every array-based row

I have a dataset which looks somewhat like this:
idx | attributes
--------------------------
101 | ['a','b','c']
102 | ['a','b','d']
103 | ['b','c']
104 | ['c','e','f']
105 | ['a','b','c']
106 | ['c','g','h']
107 | ['b','d']
108 | ['d','g','i']
I wish to transform the above dataframe into something like this:
idx | attributes
--------------------------
101 | [0,1,2]
102 | [0,1,3]
103 | [1,2]
104 | [2,4,5]
105 | [0,1,2]
106 | [2,6,7]
107 | [1,3]
108 | [3,6,8]
Here, 'a' is replaced by 0, 'b' is replaced by 1, and so on. Essentially, I wish to find all unique elements and assign them numbers so that integer operations can be performed on them. My current approach uses RDDs to maintain a single set and loops across the rows, but it is highly memory- and time-intensive. Is there any other method for this in PySpark?
Thanks in advance
Annotated code
from pyspark.sql import functions as F
from pyspark.ml.feature import StringIndexer
# Explode the dataframe by `attributes`
df1 = df.selectExpr('idx', "explode(attributes) as attributes")
# Create a StringIndexer to encode the labels
idx = StringIndexer(inputCol='attributes', outputCol='encoded', stringOrderType='alphabetAsc')
df1 = idx.fit(df1).transform(df1)
# group the encoded column by idx and aggregate using `collect_list`
df1 = df1.groupBy('idx').agg(F.collect_list(F.col('encoded').cast('int')).alias('attributes'))
Result
df1.show()
+---+----------+
|idx|attributes|
+---+----------+
|101| [0, 1, 2]|
|102| [0, 1, 3]|
|103| [1, 2]|
|104| [2, 4, 5]|
|105| [0, 1, 2]|
|106| [2, 6, 7]|
|107| [1, 3]|
|108| [3, 6, 8]|
+---+----------+
This can be done in Spark 2.4+ as a one-liner. In Spark 3.0+ it can also be done without expr (a sketch of that is shown after the explanation below).
from pyspark.sql.functions import expr

df = spark.createDataFrame(data=[(101, ['a','b','c']),
                                 (102, ['a','b','d']),
                                 (103, ['b','c']),
                                 (104, ['c','e','f']),
                                 (105, ['a','b','c']),
                                 (106, ['c','g','h']),
                                 (107, ['b','d']),
                                 (108, ['d','g','i'])], schema=["idx", "attributes"])
df.select(df.idx, expr("transform(attributes, x -> ascii(x) - 96)").alias("attributes")).show()
+---+----------+
|idx|attributes|
+---+----------+
|101| [1, 2, 3]|
|102| [1, 2, 4]|
|103| [2, 3]|
|104| [3, 5, 6]|
|105| [1, 2, 3]|
|106| [3, 7, 8]|
|107| [2, 4]|
|108| [4, 7, 9]|
+---+----------+
The tricky bit: expr("transform(attributes, x -> ascii(x) - 96)")
expr says that the string should be parsed as a SQL expression.
transform takes an array column and applies a function to each element of the array; x is the lambda parameter bound to each element, -> starts the lambda body, and the closing parenthesis ends it.
ascii(x) - 96 converts each letter to an integer: ascii('a') is 97, so 'a' becomes 1, 'b' becomes 2, and so on.
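As a sketch of the "without expr" variant mentioned above (assuming pyspark.sql.functions.transform is available, i.e. PySpark 3.1+), the same expression can be written with a Python lambda:
from pyspark.sql import functions as F

# apply a Column expression to every element of the array column; no SQL string needed
df.select(df.idx, F.transform("attributes", lambda x: F.ascii(x) - 96).alias("attributes")).show()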
If you are concerned about performance, you may compare the explain plan for my answer with the one for the other answer provided so far:
df1.groupBy('idx').agg(collect_list(col('encoded').cast('int')).alias('attributes')).explain()
== Physical Plan ==
ObjectHashAggregate(keys=[idx#24L], functions=[collect_list(cast(encoded#140 as int), 0, 0)])
+- Exchange hashpartitioning(idx#24L, 200)
+- ObjectHashAggregate(keys=[idx#24L], functions=[partial_collect_list(cast(encoded#140 as int), 0, 0)])
+- *(1) Project [idx#24L, UDF(attributes#132) AS encoded#140]
+- Generate explode(attributes#25), [idx#24L], false, [attributes#132]
+- Scan ExistingRDD[idx#24L,attributes#25]
My answer:
df.select(df.idx, expr("transform( attributes, x -> ascii(x)-96)").alias("attributes") ).explain()
== Physical Plan ==
Project [idx#24L, transform(attributes#25, lambdafunction((ascii(lambda x#128) - 96), lambda x#128, false)) AS attributes#127]
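Note the difference: the transform version compiles to a single Project over the scan, with no Exchange (shuffle) and no aggregation, whereas the explode/StringIndexer/groupBy version needs a hash-partitioning exchange and two aggregation passes.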

get distinct count from an array of each rows using pyspark

I am looking for the distinct count of the array in each row of a PySpark dataframe:
input:
col1
[1,1,1]
[3,4,5]
[1,2,1,2]
output:
1
3
2
I used the code below, but it gives me the length of the array instead:
output:
3
3
4
Please help me understand how I can achieve this using a PySpark dataframe in Python.
from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType
slen = udf(lambda s: len(s), IntegerType())
count = df.withColumn("Count", slen(df.col1))
count.show()
Thanks in advance!
For Spark 2.4+ you can use array_distinct and then take the size of the result to get the count of distinct values in each array. A UDF will be very slow and inefficient for big data; always try to use Spark's built-in functions.
https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.array_distinct
(welcome to SO)
df.show()
+------------+
| col1|
+------------+
| [1, 1, 1]|
| [3, 4, 5]|
|[1, 2, 1, 2]|
+------------+
df.withColumn("count", F.size(F.array_distinct("col1"))).show()
+------------+-----+
| col1|count|
+------------+-----+
| [1, 1, 1]| 1|
| [3, 4, 5]| 3|
|[1, 2, 1, 2]| 2|
+------------+-----+
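If you are on a version older than 2.4, where array_distinct is not available, a minimal fix of the UDF from the question (counting the set of elements rather than the list length) gives the same result, with the performance caveat above; this is a sketch rather than part of the original answer:
from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType

# len(set(s)) counts distinct elements instead of all elements
distinct_len = udf(lambda s: len(set(s)), IntegerType())
df.withColumn("count", distinct_len("col1")).show()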

Keeping identifier in exceptAll in PySpark

I was curious whether there is an easy way to keep an identifying ID when using exceptAll in PySpark. For example, suppose I have two dataframes (DF1, DF2), both with an ID column and another column "A". I want to keep the rows in DF1 whose value of "A" is not in DF2, so essentially I am trying to keep the identifier alongside the usual output of exceptAll.
Cheers!
You are probably looking for a leftanti join in Spark:
df1 = spark.createDataFrame([
    [1, "Dog"],
    [2, "Cat"],
    [3, "Dog"]
], ["id", "A"])
df2 = spark.createDataFrame([
    [4, "Dog"],
    [5, "Elmo"]
], ["id", "A"])
df1.join(df2, ["A"], "leftanti").show()
# +---+---+
# | A| id|
# +---+---+
# |Cat| 2|
# +---+---+
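The leftanti join keeps only df1's columns and returns every df1 row whose A value has no match in df2, so the identifier comes along for free.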
PySpark's DataFrame method subtract should give you what you want. See Spark: subtract two DataFrames for more details.
Using exceptAll will not give the results you wanted, as it will retain the second dog entry in df1 because exceptAll keeps duplicates.
Given your dataframes:
df1 = spark.createDataFrame([{'id': 1, 'A': 'dog'},
                             {'id': 2, 'A': 'cat'},
                             {'id': 3, 'A': 'dog'}])
df2 = spark.createDataFrame([{'id': 4, 'A': 'dog'},
                             {'id': 5, 'A': 'elmo'}])
Use subtract on the column of interest (i.e. A), then join the result back to the original dataframe to get the remaining columns (i.e. id).
except_df = df1.select('A').subtract(df2.select('A'))
except_df.join(df1, on='A').show()
+---+---+
| A| id|
+---+---+
|cat| 2|
+---+---+
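To see the duplicate-retention behaviour of exceptAll mentioned above, compare what it returns on the A column alone; a quick sketch using the same df1/df2 as above (requires Spark 2.4+):
df1.select('A').exceptAll(df2.select('A')).show()
# +---+
# |  A|
# +---+
# |dog|
# |cat|
# +---+
# (row order may vary) one 'dog' survives because df1 contains two and df2 subtracts only one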

Add different arrays from numpy to each row of dataframe

I have a SparkSQL dataframe and a 2D numpy matrix. They have the same number of rows. I intend to add the rows of the numpy matrix as a new array column in the existing PySpark dataframe, so that the list added to each dataframe row is different.
For example, the PySpark dataframe is like this
| Id | Name |
| ------ | ------ |
| 1 | Bob |
| 2 | Alice |
| 3 | Mike |
And the numpy matrix is like this
[[2, 3, 5]
[5, 2, 6]
[1, 4, 7]]
The resulting expected dataframe should be like this
| Id | Name | customized_list
| ------ | ------ | ---------------
| 1 | Bob | [2, 3, 5]
| 2 | Alice | [5, 2, 6]
| 3 | Mike | [1, 4, 7]
The Id column corresponds to the order of the entries in the numpy matrix.
I wonder whether there is any efficient way to implement this?
Create a DataFrame from your numpy matrix and add an Id column to indicate the row number. Then you can join to your original PySpark DataFrame on the Id column.
import numpy as np
a = np.array([[2, 3, 5], [5, 2, 6], [1, 4, 7]])
list_df = spark.createDataFrame(enumerate(a.tolist(), start=1), ["Id", "customized_list"])
list_df.show()
#+---+---------------+
#| Id|customized_list|
#+---+---------------+
#| 1| [2, 3, 5]|
#| 2| [5, 2, 6]|
#| 3| [1, 4, 7]|
#+---+---------------+
Here I used enumerate(..., start=1) to add the row number.
Now just do an inner join:
df.join(list_df, on="Id", how="inner").show()
#+---+-----+---------------+
#| Id| Name|customized_list|
#+---+-----+---------------+
#| 1| Bob| [2, 3, 5]|
#| 3| Mike| [1, 4, 7]|
#| 2|Alice| [5, 2, 6]|
#+---+-----+---------------+
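Note that a join does not preserve row order (the output above comes back as 1, 3, 2); if the original order matters, sort after the join:
df.join(list_df, on="Id", how="inner").orderBy("Id").show()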

How to get the minimum of nested lists in PySpark

See the following data frame for example,
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('test').getOrCreate()
df = spark.createDataFrame([[[1, 2, 3, 4]],[[0, 2, 4]],[[]],[[3]]])
df.show()
Then we have
+------------+
| _1|
+------------+
|[1, 2, 3, 4]|
| [0, 2, 4]|
| []|
| [3]|
+------------+
Then I want to find the minimum of each list; use -1 in case of empty list. I tried the following, which does not work.
import pyspark.sql.functions as F
sim_col = F.col('_1')
df.withColumn('min_turn_sim', F.when(F.size(sim_col)==0, -1.0).otherwise(F.min(sim_col))).show()
The error is:
AnalysisException: "cannot resolve 'CASE WHEN (_1 IS NULL) THEN -1.0D ELSE min(_1) END' due to data type mismatch: THEN and ELSE expressions should all be same type or coercible to a common type;;\n'Aggregate [_1#404, CASE WHEN isnull(_1#404) THEN -1.0 ELSE min(_1#404) END AS min_turn_sim#411]\n+- LogicalRDD [_1#404], false\n"
The size function works; I don't understand why min does not.
df.withColumn('min_turn_sim', F.when(F.size(sim_col)==0, -1.0).otherwise(F.size(sim_col))).show()
+------------+------------+
| _1|min_turn_sim|
+------------+------------+
|[1, 2, 3, 4]| 4.0|
| [0, 2, 4]| 3.0|
| []| -1.0|
| [3]| 1.0|
+------------+------------+
min is an aggregate function: it operates on columns, not values. Therefore min(sim_col) means the minimum array value across all rows in scope, according to array ordering, not the minimum value within each row.
To find a minimum for each row you'll need a non-aggregate function. In the latest Spark versions (2.4.0 and later) this would be array_min (similarly array_max to get the maximum value):
df.withColumn("min_turn_sim", F.coalesce(F.array_min(sim_col), F.lit(-1)))
Earlier versions will require a UDF:
@F.udf("long")
def long_array_min(xs):
    return min(xs) if xs else -1

df.withColumn("min_turn_sim", F.coalesce(long_array_min(sim_col), F.lit(-1)))
