Data schema:
root
|-- id: string (nullable = true)
|-- col1: string (nullable = true)
|-- col2: string (nullable = true)
|id|col1 |col2 |
|1 |["x","y","z"]|[123,"null","null"]|
From the above data I want to filter rows where "x" exists in col1 and pick the corresponding value for "x" from col2.
(col1 and col2 are ordered together: if "x" is at index 2 in col1, then its value is at index 2 in col2.)
Result (col1 and col2 need to remain array type):
|id |col1 |col2 |
|1 |["x"]|[123]|
If "x" is not present in col1, the result should look like:
|id| col1 |col2 |
|1 |["null"] |["null"]|
I tried:
val df1 = df.withColumn("result",when($"col1".contains("x"),"X").otherwise("null"))
The trick is to transform your data from dumb string columns into a more usable data structure. Once col1 and col2 are rebuilt as arrays (or as a map, as your desired output suggests they should be), you can use Spark's built-in functions rather than a messy UDF as suggested by #baitmbarek.
To start, use trim and split to convert col1 and col2 to arrays:
scala> val df = Seq(
| ("1", """["x","y","z"]""","""[123,"null","null"]"""),
| ("2", """["a","y","z"]""","""[123,"null","null"]""")
| ).toDF("id","col1","col2")
df: org.apache.spark.sql.DataFrame = [id: string, col1: string ... 1 more field]
scala> val df_array = df.withColumn("col1", split(trim($"col1", "[\"]"), "\"?,\"?"))
.withColumn("col2", split(trim($"col2", "[\"]"), "\"?,\"?"))
df_array: org.apache.spark.sql.DataFrame = [id: string, col1: array<string> ... 1 more field]
scala> df_array.show(false)
+---+---------+-----------------+
|id |col1 |col2 |
+---+---------+-----------------+
|1 |[x, y, z]|[123, null, null]|
|2 |[a, y, z]|[123, null, null]|
+---+---------+-----------------+
scala> df_array.printSchema
root
|-- id: string (nullable = true)
|-- col1: array (nullable = true)
| |-- element: string (containsNull = true)
|-- col2: array (nullable = true)
| |-- element: string (containsNull = true)
From here, you should be able to achieve what you want using array_position to find the index of 'x' (if any) in col1 and retrieve the matching data from col2 (a rough sketch of that route follows the map version below). However, converting the two arrays into a map first should make it clearer to understand what your code is doing:
scala> val df_map = df_array.select(
$"id",
map_from_entries(arrays_zip($"col1", $"col2")).as("col_map")
)
df_map: org.apache.spark.sql.DataFrame = [id: string, col_map: map<string,string>]
scala> df_map.show(false)
+---+--------------------------------+
|id |col_map |
+---+--------------------------------+
|1 |[x -> 123, y -> null, z -> null]|
|2 |[a -> 123, y -> null, z -> null]|
+---+--------------------------------+
scala> val df_final = df_map.select(
$"id",
when(isnull(element_at($"col_map", "x")),
array(lit("null")))
.otherwise(
array(lit("x")))
.as("col1"),
when(isnull(element_at($"col_map", "x")),
array(lit("null")))
.otherwise(
array(element_at($"col_map", "x")))
.as("col2")
)
df_final: org.apache.spark.sql.DataFrame = [id: string, col1: array<string> ... 1 more field]
scala> df_final.show
+---+------+------+
| id| col1| col2|
+---+------+------+
| 1| [x]| [123]|
| 2|[null]|[null]|
+---+------+------+
scala> df_final.printSchema
root
|-- id: string (nullable = true)
|-- col1: array (nullable = false)
| |-- element: string (containsNull = false)
|-- col2: array (nullable = false)
| |-- element: string (containsNull = true)
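For completeness, here is a rough sketch of the array_position route mentioned above; it assumes Spark 2.4+ (for element_at) and should give the same shape of result as the map version:
import org.apache.spark.sql.functions._

// 1-based index of "x" in col1, 0 if "x" is absent
val pos = array_position($"col1", "x")

val df_pos = df_array.select(
  $"id",
  when(pos > 0, array(lit("x"))).otherwise(array(lit("null"))).as("col1"),
  when(pos > 0, array(element_at($"col2", pos.cast("int"))))
    .otherwise(array(lit("null"))).as("col2")
)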
After running the ALS algorithm in PySpark over a dataset, I have ended up with a final dataframe which looks like the following.
The recommendation column is an array type; now I want to split this column so that my final dataframe looks like this.
Can anyone suggest which PySpark function can be used to build this dataframe?
Schema of the dataframe:
root
|-- person: string (nullable = false)
|-- recommendation: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- ID: string (nullable = true)
| | |-- rating: float (nullable = true)
Assuming ID doesn't duplicate in each array, you can try the following:
import pyspark.sql.functions as f
df.withColumn('recommendation', f.explode('recommendation'))\
.withColumn('ID', f.col('recommendation').getItem('ID'))\
.withColumn('rating', f.col('recommendation').getItem('rating'))\
.groupby('person')\
.pivot('ID')\
.agg(f.first('rating')).show()
+------+---+---+---+
|person| a| b| c|
+------+---+---+---+
| xyz|0.4|0.3|0.3|
| abc|0.5|0.3|0.2|
| def|0.3|0.2|0.5|
+------+---+---+---+
Or transform via the RDD API (note that Row needs to be imported):
from pyspark.sql import Row

df.rdd.map(lambda r: Row(
    person=r.person, **{s.ID: s.rating for s in r.recommendation})
).toDF().show()
+------+-------------------+-------------------+-------------------+
|person| a| b| c|
+------+-------------------+-------------------+-------------------+
| abc| 0.5|0.30000001192092896|0.20000000298023224|
| def|0.30000001192092896|0.20000000298023224| 0.5|
| xyz| 0.4000000059604645|0.30000001192092896|0.30000001192092896|
+------+-------------------+-------------------+-------------------+
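As a side note, if the set of recommended IDs is known up front you can pass it to pivot, which skips the extra job Spark otherwise runs to discover the distinct pivot values. A sketch, assuming the IDs are exactly 'a', 'b' and 'c' as in the example:
import pyspark.sql.functions as f

(df.withColumn('recommendation', f.explode('recommendation'))
   .withColumn('ID', f.col('recommendation').getItem('ID'))
   .withColumn('rating', f.col('recommendation').getItem('rating'))
   .groupby('person')
   .pivot('ID', ['a', 'b', 'c'])  # explicit pivot values (assumed here)
   .agg(f.first('rating'))
   .show())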
I have an input PySpark df:
+---------+-------+--------+----------+----------+
|timestamp|user_id|results |event_name|product_id|
+---------+-------+--------+----------+----------+
|1000 |user_1 |result 1|Click |1 |
|1001 |user_1 |result 1|View |1 |
|1002 |user_1 |result 2|Click |3 |
|1003 |user_1 |result 2|View |4 |
|1004 |user_1 |result 2|View |5 |
+---------+-------+--------+----------+----------+
root
|-- timestamp: timestamp (nullable = true)
|-- user_id: string (nullable = true)
|-- results: string (nullable = true)
|-- event_name: string (nullable = true)
|-- product_id: string (nullable = true)
I'd like to convert this to the following, making sure that I keep unique combinations of user_id and results, and aggregate the product_ids into arrays based on the given event_name, like this:
+-------+--------+---------------+---------------+
|user_id|results |product_clicked|products_viewed|
+-------+--------+---------------+---------------+
|user_1 |result 1|[1] |[1] |
|user_1 |result 2|[4,5] |[3] |
+-------+--------+---------------+---------------+
root
|-- user_id: string (nullable = true)
|-- results: string (nullable = true)
|-- product_clicked: array (nullable = true)
| |-- element: string (containsNull = true)
|-- products_viewed: array (nullable = true)
| |-- element: string (containsNull = true)
I have looked into pivot; it's close, but I do not need its aggregation part. Instead I need array creation in columns derived from the event_name column, and I cannot figure out how to do it.
NOTE: The order in the product_clicked and product_viewed columns above is important and is based on the timestamp column of the input dataframe.
You can use collect_list during the pivot aggregation:
import pyspark.sql.functions as F
df2 = (df.groupBy('user_id', 'results')
.pivot('event_name')
.agg(F.collect_list('product_id'))
.selectExpr("user_id", "results", "Click as product_clicked", "View as product_viewed")
)
df2.show()
+-------+-------+---------------+--------------+
|user_id|results|product_clicked|product_viewed|
+-------+-------+---------------+--------------+
| user_1|result2| [3]| [4, 5]|
| user_1|result1| [1]| [1]|
+-------+-------+---------------+--------------+
To ensure ordering, you can collect a list of structs containing the timestamp, sort the list, and transform the list to only keep the product_id:
df2 = (df.groupBy('user_id', 'results')
.pivot('event_name')
.agg(F.sort_array(F.collect_list(F.struct('timestamp', 'product_id'))))
.selectExpr("user_id", "results", "transform(Click, x -> x.product_id) as product_clicked", "transform(View, x -> x.product_id) as product_viewed")
)
df2.show()
+-------+-------+---------------+--------------+
|user_id|results|product_clicked|product_viewed|
+-------+-------+---------------+--------------+
| user_1|result2| [3]| [4, 5]|
| user_1|result1| [1]| [1]|
+-------+-------+---------------+--------------+
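As a side note, on Spark 3.1+ the final transform step can stay in the Python API instead of selectExpr. A sketch, assuming the pivoted columns come out named Click and View as above:
import pyspark.sql.functions as F

df2 = (df.groupBy('user_id', 'results')
         .pivot('event_name')
         .agg(F.sort_array(F.collect_list(F.struct('timestamp', 'product_id'))))
         .select('user_id', 'results',
                 F.transform('Click', lambda x: x.product_id).alias('product_clicked'),
                 F.transform('View', lambda x: x.product_id).alias('product_viewed')))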
Dataframe schema:
root
|-- ID: decimal(15,0) (nullable = true)
|-- COL1: array (nullable = true)
| |-- element: string (containsNull = true)
|-- COL2: array (nullable = true)
| |-- element: string (containsNull = true)
|-- COL3: array (nullable = true)
| |-- element: string (containsNull = true)
Sample data:
+--------------------+--------------------+--------------------+
| COL1 | COL2 | COL3 |
+--------------------+--------------------+--------------------+
|[A, B, C, A] |[101, 102, 103, 104]|[P, Q, R, S] |
+--------------------+--------------------+--------------------+
I want to apply nested conditions on array elements.
For example,
Find COL3 elements where COL1 elements are A and COL2 elements are even.
Expected Output : [S]
I looked at various functions, e.g. array_position, but it returns only the first occurrence.
Is there any straightforward way, or do I have to explode the arrays?
Assuming your condition applies to array elements with the same index, it is possible to filter arrays with lambda functions in SQL since Spark 2.4.0, but this is still not exposed via the other language APIs and you need to use expr(). You simply zip the three arrays and then filter the resulting array of structs:
scala> df.show()
+---+------------+--------------------+------------+
| ID| COL1| COL2| COL3|
+---+------------+--------------------+------------+
| 1|[A, B, C, A]|[101, 102, 103, 104]|[P, Q, R, S]|
+---+------------+--------------------+------------+
scala> df.select($"ID", expr(s"""
| filter(
| arrays_zip(COL1, COL2, COL3),
| e -> e.COL1 == "A" AND CAST(e.COL2 AS integer) % 2 == 0
| ).COL3 AS result
| """)).show()
+---+------+
| ID|result|
+---+------+
| 1| [S]|
+---+------+
Since this uses expr() to supply an SQL expression as a column, it also works with PySpark:
>>> from pyspark.sql.functions import expr
>>> df.select(df.ID, expr("""
... filter(
... arrays_zip(COL1, COL2, COL3),
... e -> e.COL1 == "A" AND CAST(e.COL2 AS integer) % 2 == 0
... ).COL3 AS result
... """)).show()
+---+------+
| ID|result|
+---+------+
| 1| [S]|
+---+------+
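For reference, Spark 3.1+ exposes filter and the lambda syntax directly in the Python API, so the same logic can be written without expr(). A sketch of the equivalent PySpark code:
from pyspark.sql import functions as F

result = df.select(
    'ID',
    F.filter(
        F.arrays_zip('COL1', 'COL2', 'COL3'),
        lambda e: (e.COL1 == 'A') & (e.COL2.cast('int') % 2 == 0)
    ).COL3.alias('result')
)
result.show()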
I have one or more CSV files that I have to merge in PySpark:
file 1:
c1,c2,c3
1,3,4
file 2:
c4,c5,c6
4,5,6
file 3:
c1,c2
7,8
I need to merge the files so that the outcome will be:
c1,c2,c3,c4,c5,c6
1,3,4,null,null,null
null,null,null,4,5,6
7,8,null,null,null,null
I have tried:
loading all the files from a folder using the load method:
spark.read.format("csv").option("header","true")
and then merging the files using merge.
Both approaches used just one of the files' schemas:
c1,c2,c3
1,3,4
4,5,6
7,8
Read all the files (f1, f2, f3) and merge their column names. Then, for each file, find the missing columns and generate them with lit(null). Finally, union all the DataFrames, selecting the column names in a consistent order. Here is the Scala solution.
import org.apache.spark.sql.functions.lit

val f1 = spark.read.format("csv").option("inferSchema","true").option("header","true").load("in/f1.csv")
val f2 = spark.read.format("csv").option("inferSchema","true").option("header","true").load("in/f2.csv")
val f3 = spark.read.format("csv").option("inferSchema","true").option("header","true").load("in/f3.csv")
val fall = f1.columns.union(f2.columns).union(f3.columns).distinct
val f1c = fall.diff(f1.columns)
val f1a = f1c.foldLeft(f1)( (acc,r) => acc.withColumn(r,lit(null)) )
val f2c = fall.diff(f2.columns)
val f2a = f2c.foldLeft(f2)( (acc,r) => acc.withColumn(r,lit(null)) )
val f3c = fall.diff(f3.columns)
val f3a = f3c.foldLeft(f3)( (acc,r) => acc.withColumn(r,lit(null)) )
val result = f1a.select(fall.head,fall.tail:_*).union(f2a.select(fall.head,fall.tail:_*)).union(f3a.select(fall.head,fall.tail:_*))
result.printSchema
result.show(false)
Results:
root
|-- c1: integer (nullable = true)
|-- c2: integer (nullable = true)
|-- c3: integer (nullable = true)
|-- c4: integer (nullable = true)
|-- c5: integer (nullable = true)
|-- c6: integer (nullable = true)
+----+----+----+----+----+----+
|c1 |c2 |c3 |c4 |c5 |c6 |
+----+----+----+----+----+----+
|1 |3 |4 |null|null|null|
|null|null|null|4 |5 |6 |
|7 |8 |null|null|null|null|
+----+----+----+----+----+----+
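Since the question mentions PySpark: on Spark 3.1+ the same result can also be obtained with unionByName and allowMissingColumns=True, which fills the absent columns with nulls. A sketch, assuming the same file paths:
f1 = spark.read.option("header", "true").option("inferSchema", "true").csv("in/f1.csv")
f2 = spark.read.option("header", "true").option("inferSchema", "true").csv("in/f2.csv")
f3 = spark.read.option("header", "true").option("inferSchema", "true").csv("in/f3.csv")

result = (f1.unionByName(f2, allowMissingColumns=True)
            .unionByName(f3, allowMissingColumns=True))
result.show()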
I'm trying to display a distinct count of a couple of different columns in a Spark dataframe, and also the record count after grouping by the first column.
So if I had col1, col2, and col3, I want to groupBy col1, and then display a distinct count of col2 and also a distinct count of col3.
Then, I would like to display the record count after that same groupBy of col1.
And finally, I'd like to do all of this in one agg statement.
Any ideas?
Below is the code you are looking for:
df.groupBy("COL1").agg(countDistinct("COL2"),countDistinct("COL3"),count($"*")).show
=======Tested Below============
scala> val lst = List(("a","x","d"),("b","D","s"),("ss","kk","ll"),("a","y","e"),("b","c","y"),("a","x","y"));
lst: List[(String, String, String)] = List((a,x,d), (b,D,s), (ss,kk,ll), (a,y,e), (b,c,y), (a,x,y))
scala> val rdd=sc.makeRDD(lst);
rdd: org.apache.spark.rdd.RDD[(String, String, String)] = ParallelCollectionRDD[7] at makeRDD at <console>:26
scala> val df = rdd.toDF("COL1","COL2","COL3");
df: org.apache.spark.sql.DataFrame = [COL1: string, COL2: string ... 1 more field]
scala> df.printSchema
root
|-- COL1: string (nullable = true)
|-- COL2: string (nullable = true)
|-- COL3: string (nullable = true)
scala> df.groupBy("COL1").agg(countDistinct("COL2"),countDistinct("COL3"),count($"*")).show
+----+--------------------+--------------------+--------+
|COL1|count(DISTINCT COL2)|count(DISTINCT COL3)|count(1)|
+----+--------------------+--------------------+--------+
| ss| 1| 1| 1|
| b| 2| 2| 2|
| a| 2| 3| 3|
+----+--------------------+--------------------+--------+
scala>
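For anyone doing the same in PySpark, the equivalent aggregation would look like this (a sketch, assuming a DataFrame df with the same columns):
from pyspark.sql import functions as F

df.groupBy("COL1").agg(
    F.countDistinct("COL2"),
    F.countDistinct("COL3"),
    F.count("*")
).show()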