Compare two large dataframes using pyspark - python-3.x

I am currently working on a data migration assignment, trying to compare two dataframes from two different databases using pyspark to find the differences between them and record the results in a csv file as part of data validation. I am looking for a performance-efficient solution, for two reasons: the dataframes are large, and the table keys are unknown.
# Approach 1 - not sure about the performance, and it is case-sensitive
df1.subtract(df2)

# Approach 2 - creating a row hash for each row in the dataframe
from pyspark.sql import Row

piperdd = df1.rdd.map(lambda x: hash(x))
r = Row("h_cd")
df1_new = piperdd.map(r).toDF()
The problem I am facing with approach 2 is that the final dataframe (df1_new) contains only the hash column (h_cd), but I need all the columns of dataframe1 (df1) together with the hash column (h_cd), since I need to report the row differences in a csv file. Please help.

Have a try with the DataFrame API; it should be more concise.
from pyspark.sql.functions import hash  # note: this shadows Python's built-in hash()

df1 = spark.createDataFrame([(a, a*2, a+3) for a in range(10)], "A B C".split(' '))
#df1.show()
df1.withColumn('hash_value', hash('A', 'B', 'C')).show()
+---+---+---+-----------+
| A| B| C| hash_value|
+---+---+---+-----------+
| 0| 0| 3| 1074520899|
| 1| 2| 4|-2073566230|
| 2| 4| 5| 2060637564|
| 3| 6| 6|-1286214988|
| 4| 8| 7|-1485932991|
| 5| 10| 8| 2099126539|
| 6| 12| 9| -558961891|
| 7| 14| 10| 1692668950|
| 8| 16| 11| 708810699|
| 9| 18| 12| -11251958|
+---+---+---+-----------+
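
Coming back to the original goal of reporting differences: a minimal sketch of how such a hash column could be used to compare the two dataframes, assuming both share the same schema and column order (the output paths below are illustrative):

from pyspark.sql import functions as F

# hash across all columns of each dataframe
df1_h = df1.withColumn("h_cd", F.hash(*df1.columns))
df2_h = df2.withColumn("h_cd", F.hash(*df2.columns))

# rows whose hash has no match on the other side
only_in_df1 = df1_h.join(df2_h.select("h_cd"), on="h_cd", how="left_anti")
only_in_df2 = df2_h.join(df1_h.select("h_cd"), on="h_cd", how="left_anti")

only_in_df1.write.mode("overwrite").csv("/tmp/diff_only_in_df1")  # hypothetical output paths
only_in_df2.write.mode("overwrite").csv("/tmp/diff_only_in_df2")

Keep in mind that hash() can collide, so for strict validation comparing on the full rows (for example with subtract) is the safer check.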

Related

how does sortWithinPartitions sort?

After applying sortWithinPartitions to a df and writing the output to a table I'm getting a result I'm not sure how to interpret.
df
.select($"type", $"id", $"time")
.sortWithinPartitions($"type", $"id", $"time")
The result file looks somewhat like this:
1 a 5
2 b 1
1 a 6
2 b 2
1 a 7
2 b 3
1 a 8
2 b 4
It's not actually random, but neither is it sorted like I would expect it to be. Namely, first by type, then id, then time.
If I try to use a repartition before sorting, then I get the result I want. But for some reason the files weigh 5 times more (100 GB vs 20 GB).
I'm writing to a Hive ORC table with compression set to snappy.
Does anyone know why it's sorted like this and why a repartition gets the right order, but a larger size?
Using spark 2.2.
The documentation of sortWithinPartitions states:
Returns a new Dataset with each partition sorted by the given expressions
The easiest way to think of this function is to imagine a fourth column (the partition id) that is used as the primary sorting criterion. The function spark_partition_id() returns the partition id.
For example, if you have just one large partition (something that you as a Spark user would never do!), sortWithinPartitions works like a normal sort:
df.repartition(1)
.sortWithinPartitions("type","id","time")
.withColumn("partition", spark_partition_id())
.show();
prints
+----+---+----+---------+
|type| id|time|partition|
+----+---+----+---------+
| 1| a| 5| 0|
| 1| a| 6| 0|
| 1| a| 7| 0|
| 1| a| 8| 0|
| 2| b| 1| 0|
| 2| b| 2| 0|
| 2| b| 3| 0|
| 2| b| 4| 0|
+----+---+----+---------+
If there are more partitions, the results are only sorted within each partition:
df.repartition(4)
.sortWithinPartitions("type","id","time")
.withColumn("partition", spark_partition_id())
.show();
prints
+----+---+----+---------+
|type| id|time|partition|
+----+---+----+---------+
| 2| b| 1| 0|
| 2| b| 3| 0|
| 1| a| 5| 1|
| 1| a| 6| 1|
| 1| a| 8| 2|
| 2| b| 2| 2|
| 1| a| 7| 3|
| 2| b| 4| 3|
+----+---+----+---------+
Why would one use sortWithinPartitions instead of sort? sortWithinPartitions does not trigger a shuffle, as the data is only moved within the executors. sort, however, will trigger a shuffle. Therefore sortWithinPartitions executes faster. If the data is partitioned by a meaningful column, sorting within each partition might be enough.
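
To see the shuffle difference yourself, a quick check of the physical plans (shown here in PySpark for consistency with the rest of the page; the Scala API behaves the same, and df is assumed to have the columns above):

# sort() adds an Exchange (range partitioning, i.e. a shuffle) to the physical plan
df.sort("type", "id", "time").explain()

# sortWithinPartitions() only adds a per-partition Sort, with no Exchange
df.sortWithinPartitions("type", "id", "time").explain()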

aggregate function in lit() of pyspark along with withColumn

I have a column Quantity in a dataframe. I want to add a new column to this dataframe with each record having min("Quantity"). I am trying to use lit() in pyspark, something like below:
df.withColumn("min_quant", lit(min(col("Quantity")))).show()
This results in the error below:
grouping expressions sequence is empty, and `InvoiceNo` is not an aggregate function.
Wrap (min(`Quantity`) AS `min_quant`) in windowing function(s) or wrap
This is working:
df.withColumn("min_quant", lit(2)).show().
But, in place of 2 here, I want min(Quantity). Am I missing something?
Please try using a window function, as min() needs an aggregation.
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.min
val windowSpec = Window.orderBy("InvoiceNo")
df.withColumn("min_quant", min("Quantity").over(windowSpec)).show()
Sample Result:
+---------+----+--------+---------+
|InvoiceNo|name|Quantity|min_quant|
+---------+----+--------+---------+
| 1| ABC| 19| 1|
| 1| ABC| 1| 1|
| 1| ABC| 8| 1|
| 1| ABC| 389| 1|
| 1| ABC| 196| 1|
| 2| CBD| 10| 1|
| 2| CBD| 946| 1|
| 3| XYZ| 3| 1|
+---------+----+--------+---------+
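
Since the question is in PySpark, a rough equivalent of the Scala snippet above (same column names assumed):

from pyspark.sql import Window
from pyspark.sql import functions as F

window_spec = Window.orderBy("InvoiceNo")
df.withColumn("min_quant", F.min("Quantity").over(window_spec)).show()

If what you actually want is a single global minimum over the whole dataframe, another option is a cross join with df.agg(F.min("Quantity")) instead of a window.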

Skewed By in Spark

I have a dataset that I want to partition by a particular key (clientID), but some clients produce far, far more data than others. There's a feature in Hive called "List Bucketing", invoked with "skewed by", specifically to deal with this situation.
However, I cannot find any indication that Spark supports this feature, or how (if it does support it) to make use of it.
Is there a Spark feature that is the equivalent? Or, does Spark have some other set of features by which this behavior can be replicated?
(As a bonus - and a requirement for my actual use case - does your suggested method work with Amazon Athena?)
As far as I know, there is no such out-of-the-box tool in Spark. In the case of skewed data, a very common approach is to add an artificial column to further bucketize the data.
Let's say you want to partition by column "y", but the data is very skewed, like in this toy example (one partition with 5 rows, the others with only one row each):
import spark.implicits._ // for the 'id column syntax
import org.apache.spark.sql.functions.{when, floor, rand}
val df = spark.range(8).withColumn("y", when('id < 5, 0).otherwise('id))
df.show()
+---+---+
| id| y|
+---+---+
| 0| 0|
| 1| 0|
| 2| 0|
| 3| 0|
| 4| 0|
| 5| 5|
| 6| 6|
| 7| 7|
+---+---+
Now let's add an artificial random column and write the dataframe.
val maxNbOfBuckets = 3
val part_df = df.withColumn("r", floor(rand() * maxNbOfBuckets))
part_df.show
+---+---+---+
| id| y| r|
+---+---+---+
| 0| 0| 2|
| 1| 0| 2|
| 2| 0| 0|
| 3| 0| 0|
| 4| 0| 1|
| 5| 5| 2|
| 6| 6| 2|
| 7| 7| 1|
+---+---+---+
// and writing. We divided the partition with 5 elements into 3 partitions.
part_df.write.partitionBy("y", "r").csv("...")
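
The same salting idea in PySpark, for reference (the bucket count and output path are made up):

from pyspark.sql import functions as F

max_nb_of_buckets = 3
part_df = df.withColumn("r", F.floor(F.rand() * max_nb_of_buckets))
part_df.write.partitionBy("y", "r").csv("/tmp/skew_demo")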

How to filter a Spark DataFrame based on chained conditions? [duplicate]

I'm trying to filter a Spark DataFrame that resembles this one:
+-----+---+-----+-----+-----+-----+-------+
| name|age|key_1|key_2|key_3|key_4|country|
+-----+---+-----+-----+-----+-----+-------+
| abc| 20| 1| 1| 1| 1| USA|
| def| 12| 2| 2| 3| 2| China|
| ghi| 40| 3| 3| 3| 3| India|
| jkl| 39| 4| 1| 4| 4| UK|
+-----+---+-----+-----+-----+-----+-------+
Basically, what I want to achieve is to find out which rows have mismatching keys; in this case I want to get a new dataframe with the second and the fourth rows.
I tried with
val unmatching = df.filter(df.col("key_1").notEqual(df.col("key_2")).notEqual(df.col("key_3")).notEqual(df.col("key_4")))
and what I get is a shorter dataset than the original, but one in which the keys still seem to be equal.
Find out the matching rows, then use except():
val matching = ...
val unmatching = df.except(matching)
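
One way to build the matching set, sketched here in PySpark for consistency with the rest of the page (the Scala version is analogous):

from pyspark.sql import functions as F

# rows where all four keys agree
matching = df.filter(
    (F.col("key_1") == F.col("key_2")) &
    (F.col("key_2") == F.col("key_3")) &
    (F.col("key_3") == F.col("key_4"))
)
unmatching = df.subtract(matching)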

Accessing a count value from a dataframe in pyspark

I hope you can help.
I have this dataframe, and I want to select, for example, the count where prediction == 4.
Code:
the_counts=df.select('prediction').groupby('prediction').count()
the_counts.show()
+----------+-----+
|prediction|count|
+----------+-----+
| 1| 8|
| 6| 14|
| 5| 5|
| 4| 8|
| 8| 5|
| 0| 6|
+----------+-----+
So I can assign that value to a variable, as this will be within a loop that runs many iterations.
I managed this, but only by creating a different dataframe and then turning that dataframe into a number.
dfva = the_counts.select('count').filter(the_counts.prediction ==6)
dfva.show()
+-----+
|count|
+-----+
| 14|
+-----+
Is there a way to access the number directly, without so many steps? Or what is the most efficient way to do it?
This is Python 3.x and Spark 2.1.
Thank you very much
You can use the first() method to take the value directly:
>>> dfva = the_counts.filter(the_counts['prediction'] == 6).first()['count']
>>> type(dfva)
<type 'int'>
>>> print(dfva)
14
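
One caveat: if the requested prediction value is not present, first() returns None, so inside a loop a guarded version might look like this:

row = the_counts.filter(the_counts['prediction'] == 6).first()
count_value = row['count'] if row is not None else 0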
