I am trying to compare two DataFrames with the same schema (in Spark 1.6.0, using Scala) to determine which rows in the newer table have been added (i.e. are not present in the older table).
I need to do this by ID (i.e. by examining a single column, not the whole row, to see what is new). Some rows may have changed between the versions: they have the same id in both, but the other columns differ. I do not want these in the output, so I cannot simply subtract the two versions.
Based on various suggestions, I am doing a left-outer join on the chosen ID column, then selecting rows with nulls in columns from the right side of the join (indicating that they were not present in the older version of the table):
def diffBy(field: String, newer: DataFrame, older: DataFrame): DataFrame = {
  newer.join(older, newer(field) === older(field), "left_outer")
    .where(older(field).isNull)
    // TODO select only the leftmost columns, removing the null columns from older
}
However, this does not work (row 3 exists only in the newer version, so it should be in the output):
scala> newer.show
+---+-------+
| id| value|
+---+-------+
| 3| three|
| 2|two-new|
+---+-------+
scala> older.show
+---+-------+
| id| value|
+---+-------+
| 1| one|
| 2|two-old|
+---+-------+
scala> diffBy("id", newer, older).show
+---+-----+---+-----+
| id|value| id|value|
+---+-----+---+-----+
+---+-----+---+-----+
The join is working as expected:
scala> val joined = newer.join(older, newer("id") === older("id"), "left_outer")
scala> joined.show
+---+-------+----+-------+
| id| value| id| value|
+---+-------+----+-------+
| 2|two-new| 2|two-old|
| 3| three|null| null|
+---+-------+----+-------+
So the problem is in the selection of the column for filtering.
joined.where(older("id").isNull).show
+---+-----+---+-----+
| id|value| id|value|
+---+-----+---+-----+
+---+-----+---+-----+
Perhaps it is due to the duplicate id column names in the join? But if I use the value column (which is also duplicated) instead to detect nulls, it works as expected:
joined.where(older("value").isNull).show
+---+-----+----+-----+
| id|value| id|value|
+---+-----+----+-----+
| 3|three|null| null|
+---+-----+----+-----+
What is going on here - and why is the behaviour different for id and value?
You can solve the problem using Spark's "leftanti" join type. It behaves like MINUS in Oracle SQL: it keeps only the rows from the left side that have no match on the right.
val joined = newer.join(older, newer("id") === older("id"), "leftanti")
This will select only the columns from newer.
I have found a solution to my problem, though not an explanation for why it occurs.
It seems to be necessary to create an alias in order to refer unambiguously to the rightmost id column, and then use a textual WHERE clause so that I can substitute in the qualified column name from the variable field:
newer.join(older.as("o"), newer(field) === older(field), "left_outer")
  .where(s"o.$field IS NULL")
I've come across something strange recently in Spark. As far as I understand, given the column-based storage method of Spark DataFrames, the order of the columns really doesn't have any meaning; they're like keys in a dictionary.
During a df.union(df2), does the order of the columns matter? I would've assumed that it shouldn't, but according to the wisdom of the SQL forums it does.
So we have df1:
+---+----+
|  a|   b|
+---+----+
|  1| asd|
|  2|asda|
|  3| f1f|
+---+----+
And df2:
+----+---+
|   b|  a|
+----+---+
| asd|  1|
|asda|  2|
| f1f|  3|
+----+---+
The result of df1.union(df2):
+----+----+
|   a|   b|
+----+----+
|   1| asd|
|   2|asda|
|   3| f1f|
| asd|   1|
|asda|   2|
| f1f|   3|
+----+----+
It looks like the schema from df1 was used, but the data appears to have been appended by column position from the original DataFrames.
Obviously the solution would be to do df1.union(df2.select(df1.columns))
But the main question is, why does it do this? Is it simply because it's part of pyspark.sql, or is there some underlying data architecture in Spark that I've goofed up in understanding?
Code to create the test set, if anyone wants to try:
import pandas as pd

d1 = {'a': [1, 2, 3], 'b': ['asd', 'asda', 'f1f']}
d2 = {'b': ['asd', 'asda', 'f1f'], 'a': [1, 2, 3]}
pdf1 = pd.DataFrame(d1)
pdf2 = pd.DataFrame(d2)
df1 = spark.createDataFrame(pdf1)
df2 = spark.createDataFrame(pdf2)
test = df1.union(df2)
The Spark union is implemented according to standard SQL and therefore resolves the columns by position. This is also stated by the API documentation:
Return a new DataFrame containing union of rows in this and another frame.
This is equivalent to UNION ALL in SQL. To do a SQL-style set union (that does deduplication of elements), use this function followed by a distinct.
Also as standard in SQL, this function resolves columns by position (not by name).
Since Spark 2.3 you can use unionByName to union two DataFrames, with the columns resolved by name.
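For illustration, a minimal PySpark sketch of the difference; the DataFrame construction here is my own, not the asker's exact setup, and unionByName needs Spark 2.3+:
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df1 = spark.createDataFrame([(1, 'asd'), (2, 'asda'), (3, 'f1f')], ['a', 'b'])
df2 = spark.createDataFrame([('asd', 1), ('asda', 2), ('f1f', 3)], ['b', 'a'])

# plain union resolves by position, so df2's 'b' strings would land under df1's 'a' column
df1.union(df2.select(df1.columns)).show()  # workaround: reorder the columns first
df1.unionByName(df2).show()                # Spark 2.3+: resolve columns by name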
In Spark, union is not done on the metadata of the columns, and the data is not shuffled around the way you might think. Rather, union is done on column positions: if you are unioning 2 DFs, both must have the same number of columns, and you have to take the positions of your columns into consideration before doing the union. Unlike SQL in Oracle or other RDBMSs, the underlying files in Spark are just physical files. Hope that answers your question.
I have two DataFrames, A and B. Each has a column called 'names', and this column is of type ArrayType(StringType()).
Now I want to left join A and B on the condition that A['names'] and B['names'] have common elements.
Here is an example:
A:
+---------------+
| names|
+---------------+
|['Mike','Jack']|
| ['Peter']|
+---------------+
B:
+---------------+
| names|
+---------------+
|['John','Mike']|
| null|
+---------------+
After the left join, I should have:
+---------------+---------------+
| A_names| B_names|
+---------------+---------------+
|['Mike','Jack']|['John','Mike']|
| ['Peter']| null|
+---------------+---------------+
In your case you have to explode the values: explode will produce one row per value in your arrays, and then you can join the DataFrames and reduce the final result back to your desired format.
In the code example, I exploded the names and joined the DataFrames on the newly created column (single_names). Finally the result is grouped by "A_names" to remove the duplicates produced by the explode.
For the group by aggregate function, you can use pyspark.sql.functions.first(), with the parameter ignorenulls set to True.
from pyspark.sql.functions import col, explode, first

test_list = [['Mike', 'Jack']], [['Peter']]
test_df = spark.createDataFrame(test_list, ["names"])

test_list2 = [['John', 'Mike']], [['Kate']]
test_df2 = spark.createDataFrame(test_list2, ["names"])

# explode B's arrays: one row per single name, keeping the original array as B_names
test_df2 = test_df2.select(
    col("names").alias("B_names"),
    explode("names").alias("single_names")
)

# explode A's arrays the same way, left join on the single names,
# then group back to one row per original A array
test_df.select(col("names").alias("A_names"), explode("names").alias("single_names"))\
    .join(test_df2, on="single_names", how="left")\
    .groupBy("A_names").agg(first("B_names", ignorenulls=True).alias("B_names")).show()
Result:
+------------+------------+
| A_names| B_names|
+------------+------------+
|[Mike, Jack]|[John, Mike]|
| [Peter]| null|
+------------+------------+
I have one dataframe with two columns:
+--------+-----+
|    col1| col2|
+--------+-----+
|      22| 12.2|
|       1|  2.1|
|       5| 52.1|
|       2| 62.9|
|      77| 33.3|
+--------+-----+
I would like to create a new DataFrame which takes only the rows where
"value of col1" > "value of col2"
Just as a note, col1 has long type and col2 has double type.
The result should be like this:
+--------+----+
|    col1|col2|
+--------+----+
|      22|12.2|
|      77|33.3|
+--------+----+
I think the best way would be to simply use "filter".
df_filtered = df.filter(df.col1 > df.col2)
df_filtered.show()

+--------+----+
|    col1|col2|
+--------+----+
|      22|12.2|
|      77|33.3|
+--------+----+
Another possible way is to use the where function of a DataFrame.
For example this:
val output = df.where("col1>col2")
will give you the expected result:
+----+----+
|col1|col2|
+----+----+
| 22|12.2|
| 77|33.3|
+----+----+
The best way to keep rows based on a condition is to use filter, as mentioned by others.
To answer the question as stated in the title, one option to remove rows based on a condition is to use a left_anti join in PySpark.
For example, to delete all rows with col1 > col2, use:
rows_to_delete = df.filter(df.col1 > df.col2)
df_with_rows_deleted = df.join(rows_to_delete, on=[key_column], how='left_anti')
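Since the example DataFrame here has only col1 and col2 and no dedicated key column, one possibility (my own assumption, not something stated in the question) is to anti-join on every column:
df_with_rows_deleted = df.join(rows_to_delete, on=df.columns, how='left_anti')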
You can use sqlContext to simplify the challenge.
First register the DataFrame as a temp table, for example:
df.createOrReplaceTempView("tbl1")
Then run the SQL, like:
sqlContext.sql("select * from tbl1 where col1 > col2")
I'm working with Spark 2.2.0.
I have a DataFrame holding more than 20 columns. In the example below, PERIOD is a week number and TYPE is a type of store (Hypermarket or Supermarket).
table.show(10)
+--------------------+-------------------+-----------------+
| PERIOD| TYPE| etc......
+--------------------+-------------------+-----------------+
| W1| HM|
| W2| SM|
| W3| HM|
etc...
I want to do a simple groupBy (here with PySpark, but Scala or Spark SQL gives the same results):
from pyspark.sql.functions import countDistinct

total_stores = table.groupby("PERIOD", "TYPE").agg(countDistinct("STORE_DESC"))
total_stores2 = total_stores.withColumnRenamed("count(DISTINCT STORE_DESC)", "NB STORES (TOTAL)")
total_stores2.show(10)
+--------------------+-------------------+-----------------+
| PERIOD| TYPE|NB STORES (TOTAL)|
+--------------------+-------------------+-----------------+
|CMA BORGO -SANTA ...| BORGO| 1|
| C ATHIS MONS| ATHIS MONS CEDEX| 1|
| CMA BOSC LE HARD| BOSC LE HARD| 1|
The problem is not in the calculation: the columns got mixed up. PERIOD contains store names, TYPE contains cities, etc.
I have no clue why. Everything else works fine.
I want to perform a subtract between 2 DataFrames in PySpark. The challenge is that I have to ignore some columns while subtracting. But the end DataFrame should have all the columns, including the ignored ones.
Here is an example:
from pyspark.sql import Row

userLeft = sc.parallelize([
    Row(id=u'1',
        first_name=u'Steve',
        last_name=u'Kent',
        email=u's.kent#email.com',
        date1=u'2017-02-08'),
    Row(id=u'2',
        first_name=u'Margaret',
        last_name=u'Peace',
        email=u'marge.peace#email.com',
        date1=u'2017-02-09'),
    Row(id=u'3',
        first_name=None,
        last_name=u'hh',
        email=u'marge.hh#email.com',
        date1=u'2017-02-10')
]).toDF()

userRight = sc.parallelize([
    Row(id=u'2',
        first_name=u'Margaret',
        last_name=u'Peace',
        email=u'marge.peace#email.com',
        date1=u'2017-02-11'),
    Row(id=u'3',
        first_name=None,
        last_name=u'hh',
        email=u'marge.hh#email.com',
        date1=u'2017-02-12')
]).toDF()
Expected:
ActiveDF = userLeft.subtract(userRight), but ignoring the "date1" column while subtracting.
The end result should look something like this, including the "date1" column:
+----------+--------------------+----------+---+---------+
| date1| email|first_name| id|last_name|
+----------+--------------------+----------+---+---------+
|2017-02-08| s.kent#email.com| Steve| 1| Kent|
+----------+--------------------+----------+---+---------+
It seems you need an anti-join:
userLeft.join(userRight, ["id"], "leftanti").show()
+----------+----------------+----------+---+---------+
| date1| email|first_name| id|last_name|
+----------+----------------+----------+---+---------+
|2017-02-08|s.kent#email.com| Steve| 1| Kent|
+----------+----------------+----------+---+---------+
You can also use a full join and only keep null values:
import pyspark.sql.functions as psf

userLeft.join(
    userRight,
    [c for c in userLeft.columns if c != "date1"],
    "full"
).filter(psf.isnull(userLeft.date1) | psf.isnull(userRight.date1)).show()
+------------------+----------+---+---------+----------+----------+
| email|first_name| id|last_name| date1| date1|
+------------------+----------+---+---------+----------+----------+
|marge.hh#email.com| null| 3| hh|2017-02-10| null|
|marge.hh#email.com| null| 3| hh| null|2017-02-12|
| s.kent#email.com| Steve| 1| Kent|2017-02-08| null|
+------------------+----------+---+---------+----------+----------+
If you want to use joins, whether leftanti or full, you'll need to find default values for the nulls in your joining columns (I think we discussed it in a previous thread).
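A minimal sketch of that idea, where the sentinel value is my own assumption and has to be something that cannot occur in the real data:
SENTINEL = "__NULL__"  # hypothetical placeholder for nulls in the join keys
join_cols = [c for c in userLeft.columns if c != "date1"]

# fill the nulls in the join columns so rows like id 3 (first_name=None) can still match
userLeft.fillna(SENTINEL, subset=join_cols) \
    .join(userRight.fillna(SENTINEL, subset=join_cols), join_cols, "leftanti") \
    .show()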
You can also just drop the column that bothers you, then subtract and join:
df = userLeft.drop("date1").subtract(userRight.drop("date1"))
userLeft.join(df, df.columns).show()
+----------------+----------+---+---------+----------+
| email|first_name| id|last_name| date1|
+----------------+----------+---+---------+----------+
|s.kent#email.com| Steve| 1| Kent|2017-02-08|
+----------------+----------+---+---------+----------+