Get first and last item without using two joins - apache-spark

Currently I have two datasets, one parent and one child. The child dataset contains a "parentId" column that links it to the parent table. The child dataset holds data about a person's actions, and the parent table holds data about the person. I want to get a dataset containing the person's info together with his first and last action.
The datasets look like this:
Parent:
id | name | gender
111| Alex | Male
222| Alice| Female
Child:
parentId | time | Action
111 | 12:01| Walk
111 | 12:03| Run
222 | 12:04| Walk
111 | 12:05| Jump
111 | 12:06| Run
The dataset I want to produce is:
id | name | gender | firstAction | lastAction |
111| Alex | Male | Walk | Run |
222| Alice| Female | Walk | Walk |
Currently I can achieve this using two window functions, something like:
WindowSpec w1 = Window.partitionBy("parentId").orderBy(col("time").asc())
WindowSpec w2 = Window.partitionBy("parentId").orderBy(col("time").desc())
and apply them to the child table using row_number().over(), like:
child.withColumn("rank1", row_number().over(w1))
     .withColumn("rank2", row_number().over(w2))
The issue is that later, when I need to join with the parent table, I have to join twice: once on parentId=id && rank1=1, and again on parentId=id && rank2=1.
I wonder if there is a way to join only once, which would be much more efficient.
Or have I used the window functions incorrectly and there is a better way to do this?
Thanks

You could join first and then use groupBy instead of window functions. This could work (not tested, as no programmatic dataframe is provided):
parent
  .join(child, $"parentId" === $"id")
  .groupBy($"parentId", $"name", $"gender")
  .agg(
    min(struct($"time", $"action")).as("firstAction"),
    max(struct($"time", $"action")).as("lastAction")
  )
  .select(
    $"parentId",
    $"name",
    $"gender",
    $"firstAction.action".as("firstAction"),
    $"lastAction.action".as("lastAction")
  )
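This works because min and max on a struct compare its fields from left to right, so with time as the first field they pick the row with the earliest/latest time; the matching action is then pulled back out of the struct in the final select.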

Related

Efficiently update rows of a postgres table from another table in another database based on a condition in a common column

I have two pandas DataFrames:
df1 from database A with connection parameters {"host":"hostname_a","port": "5432", "dbname":"database_a", "user": "user_a", "password": "secret_a"}. The column key is the primary key.
df1:
| | key | create_date | update_date |
|---:|------:|:-------------|:--------------|
| 0 | 57247 | 1976-07-29 | 2018-01-21 |
| 1 | 57248 | | 2018-01-21 |
| 2 | 57249 | 1992-12-22 | 2016-01-31 |
| 3 | 57250 | | 2015-01-21 |
| 4 | 57251 | 1991-12-23 | 2015-01-21 |
| 5 | 57262 | | 2015-01-21 |
| 6 | 57263 | | 2014-01-21 |
df2 from database B with connection parameters {"host": "hostname_b","port": "5433", "dbname":"database_b", "user": "user_b", "password": "secret_b"}. The column id is the primary key (these values are originally the same as the ones in the column key of df1; it's only a renaming of the primary key column of df1).
df2:
| | id | create_date | update_date | user |
|---:|------:|:-------------|:--------------|:------|
| 0 | 57247 | 1976-07-29 | 2018-01-21 | |
| 1 | 57248 | | 2018-01-21 | |
| 2 | 57249 | 1992-12-24 | 2020-10-11 | klm |
| 3 | 57250 | 2001-07-14 | 2019-21-11 | ptl |
| 4 | 57251 | 1991-12-23 | 2015-01-21 | |
| 5 | 57262 | | 2015-01-21 | |
| 6 | 57263 | | 2014-01-21 | |
Notice that rows 2 and 3 in df2 have more recent update_date values (2020-10-11 and 2019-21-11 respectively) than their counterparts in df1 (where id = key), because their create_date values have been modified (by the given users).
I would like to update the rows of df1 (concretely, the create_date and update_date values) where update_date in df2 is more recent than its original value in df1 (for the same primary keys).
This is how I'm tackling this for the moment, using sqlalchemy and psycopg2 + the .to_sql() method of pandas' DataFrame:
import psycopg2
from sqlalchemy import create_engine

# "creator" must be a callable that returns a new DBAPI connection
engine = create_engine(
    'postgresql+psycopg2://',
    creator=lambda: psycopg2.connect(**database_parameters_dictionary),
)

df1.update(df2)  # 1) maybe there is something better to do here?

with engine.connect() as connection:
    df1.to_sql(
        name="database_table_name",
        con=connection,
        schema="public",
        if_exists="replace",  # 2) maybe there is also something better to do here?
        index=True,
    )
The problem I have is that, according to the documentation, the if_exists argument can only do three things:
if_exists{‘fail’, ‘replace’, ‘append’}, default ‘fail’
Therefore, to update these two rows, I have to:
1) use the .update() method on df1 with df2 as an argument, together with
2) replacing the whole table inside the .to_sql() method, which means "drop + recreate".
As the tables are really large (more than 500'000 entries), I have the feeling that this will need a lot of unnecessary work!
How could I efficiently update only those two newly updated rows? Do I have to generate some custom SQL queries that compare the dates for each row and only take the ones that have really changed? Here again, I have the intuition that looping through all rows to compare the update dates will take "a lot" of time. What is the most efficient way to do this? (It would have been easier in pure SQL if the two tables were on the same host/database, but unfortunately that's not the case.)
Pandas can't do partial updates of a table, no. There is a longstanding open bug for supporting sub-whole-table-granularity updates in .to_sql(), but you can see from the discussion there that it's a very complex feature to support in the general case.
However, limiting it to just your situation, I think there's a reasonable approach you could take.
Instead of using df1.update(df2), put together an expression that yields only the changed records with their new values (I don't use pandas often so I don't know this offhand); then iterate over the resulting dataframe and build the UPDATE statements yourself (or with the SQLAlchemy expression layer, if you're using that). Then, use the connection to DB A to issue all the UPDATEs as one transaction. With an indexed PK, it should be as fast as this would ever be expected to be.
BTW, I don't think df1.update(df2) is exactly correct - from my reading, that would update all rows with any differing fields, not just those where update_date in df2 is more recent than in df1. But it's a moot point if update_date in df2 is only ever more recent than in df1.
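Roughly, the approach could look like this (an untested sketch; the merge/filter step is just one way to isolate the changed rows, and the table/column names are taken from the example above):
import psycopg2

# keep only the rows of df2 whose update_date is newer than the matching row in df1
# (assumes update_date compares correctly, i.e. datetime values or ISO-formatted strings)
merged = df1.merge(df2, left_on="key", right_on="id", suffixes=("_old", "_new"))
changed = merged[merged["update_date_new"] > merged["update_date_old"]]

# issue all UPDATEs against database A in a single transaction
conn_a = psycopg2.connect(**database_parameters_dictionary)
with conn_a:
    with conn_a.cursor() as cur:
        for row in changed.itertuples():
            cur.execute(
                "UPDATE database_table_name "
                "SET create_date = %s, update_date = %s "
                "WHERE key = %s",
                (row.create_date_new, row.update_date_new, row.key),
            )
# the "with conn_a" block commits the whole batch on success
With an index on key (it is the primary key), each UPDATE is a cheap lookup, so even a few hundred changed rows should be fast.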

Generate unique ID in a mutable pyspark data frame

I want to generate sequential unique ids for a data frame that is subject to change. When I say change, it means that more rows will be added tomorrow after I generate the ids today. When more rows are added, I want to look up the id column that holds the generated ids and continue incrementing for the newly added data.
+-------+--------------------+-------------+
|deal_id| deal_name|Unique_id |
+-------+--------------------+-------------+
| 613760|ABCDEFGHI | 1|
| 613740|TEST123 | 2|
| 598946|OMG | 3|
Say I get more data tomorrow: I want to append it to this data frame, and the unique id should continue at 4 and go on.
+-------+--------------------+-------------+
|deal_id| deal_name|Unique_id |
+-------+--------------------+-------------+
| 613760|ABCDEFGHI | 1|
| 613740|TEST123 | 2|
| 598946|OMG | 3|
| 591234|OM21 | 4|
| 988217|Otres | 5|
.
.
.
Code Snippet
deals_df_final = deals_df.withColumn("Unique_id",F.monotonically_increasing_id())
But this didn't give sequential ids.
I can try row_number or RDD zipWithIndex, but it looks like the dataframe will be immutable.
Any help please? I want to be able to generate and also increment the id as and when data is added.
Very brief note if it helps - I had the same problem, and the 2nd example in this post helped me: https://kb.databricks.com/sql/gen-unique-increasing-values.html
My current in-progress code:
from pyspark.sql import (
    SparkSession,
    functions as F,
    window as W,
)
df_with_increasing_id = df.withColumn("monotonically_increasing_id", F.monotonically_increasing_id())
window = W.Window.orderBy(F.col('monotonically_increasing_id'))
df_with_consecutive_increasing_id = df_with_increasing_id.withColumn('increasing_id', F.row_number().over(window))
df = df_with_consecutive_increasing_id.drop('monotonically_increasing_id')
# now find the maximum value in the `increasing_id` column in the current dataframe before appending new data
previous_max_id = df.agg({'increasing_id': 'max'}).collect()[0]
previous_max_id = previous_max_id['max(increasing_id)']
# CREATE NEW ROW HERE
# and then create new ids (same way as creating them originally)
# then union or vertically concatenate it with the old dataframe to get the combined one
df.withColumn("cnsecutiv_increase", F.col("increasing_id") + F.lit(previous_max_id)).show()
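To make the append step concrete, here is an untested sketch of how tomorrow's batch could get its ids (new_df is a hypothetical dataframe holding only the newly added rows; it reuses the names from the snippet above):
# give the new rows their own consecutive ids, shifted past the existing maximum
new_with_mono = new_df.withColumn("monotonically_increasing_id", F.monotonically_increasing_id())
new_window = W.Window.orderBy(F.col("monotonically_increasing_id"))
new_with_ids = (
    new_with_mono
    .withColumn("increasing_id", F.row_number().over(new_window) + F.lit(previous_max_id))
    .drop("monotonically_increasing_id")
)

# combine old and new; increasing_id now runs 1..N across both
combined_df = df.unionByName(new_with_ids)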

Performance: Group by a subset of previous grouping columns

I have a DataFrame with two categorical columns, similar to the following example:
+----+-------+-------+
| ID | Cat A | Cat B |
+----+-------+-------+
| 1 | A | B |
| 2 | B | C |
| 5 | A | B |
| 7 | B | C |
| 8 | A | C |
+----+-------+-------+
I have some processing to do that needs two steps: The first one needs the data to be grouped by both categorical columns. In the example, it would generate the following DataFrame:
+-------+-------+-----+
| Cat A | Cat B | Cnt |
+-------+-------+-----+
| A | B | 2 |
| B | C | 2 |
| A | C | 1 |
+-------+-------+-----+
Then, the next step consists of grouping only by Cat A, to calculate a new aggregation, for example:
+-----+-----+
| Cat | Cnt |
+-----+-----+
| A | 3 |
| B | 2 |
+-----+-----+
Now come the questions:
In my solution, I create the intermediate dataframe by doing
val df2 = df.groupBy("catA", "catB").agg(...)
and then I aggregate this df2 to get the last one:
val df3 = df2.groupBy("catA").agg(...)
I assume this is more efficient than aggregating the first DF again. Is that a good assumption? Or does it make no difference?
Are there any suggestions of a more efficient way to achieve the same results?
Generally speaking it looks like a good approach and should be more efficient than aggregating the data twice. Since shuffle files are implicitly cached, at least part of the work should be performed only once. So when you call an action on df2 and subsequently on df3, you should see that the stages corresponding to df2 have been skipped. Also, the partial structure enforced by the first shuffle may reduce the memory requirements for the aggregation buffer during the second agg.
Unfortunately DataFrame aggregations, unlike RDD aggregations, cannot use a custom partitioner. This means that you cannot compute both data frames using a single shuffle based on the value of catA, and the second aggregation will require a separate exchange (hash partitioning). I doubt it justifies switching to RDDs.
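For illustration, a quick PySpark sketch of the same two-step aggregation (the Scala API behaves the same way; the agg expressions here are just placeholders for whatever you actually compute):
from pyspark.sql import functions as F

# first aggregation: full shuffle by (catA, catB)
df2 = df.groupBy("catA", "catB").agg(F.count(F.lit(1)).alias("cnt"))
df2.show()

# second aggregation reuses df2; because its shuffle output is kept around,
# an action on df3 should only add the extra exchange on catA
df3 = df2.groupBy("catA").agg(F.sum("cnt").alias("cnt"))
df3.show()
You can confirm the reuse in the Spark UI: the stages that computed df2 show up as skipped when df3 is evaluated.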

Core Data - fetch distinct records with all properties

Considering I have Core Data objects stored like this:
|Name | ActionType | Content | Date |
|-----|------------|---------|-----------|
|Abe | Create | "Hello" | 2014-07-01|
|Cat | Create | "Well" | 2014-07-01|
|Abe | Create | "Hi" | 2014-07-02|
|Bob | Edit | "Yo" | 2014-07-03|
|Cat | Delete | "What" | 2014-07-04|
|Abe | Edit | "Haha" | 2014-07-05|
I would like to get the last action of each user, so the results would be
|Abe | Edit | "Haha" | 2014-07-05|
|Cat | Delete | "What" | 2014-07-04|
|Bob | Edit | "Yo" | 2014-07-03|
Does anyone know how to do that with an NSFetchRequest? From what I've gathered so far, if you want to use "group by", you can only retrieve the values in the group-by clause (it will return "Abe, Cat, Bob" without the rest of the data in the Core Data object). Similarly with "returnsDistinctResults", it will not return the whole object.
I have a feeling that Core Data is not equipped for this; any help & hints would be appreciated!
Core Data is an object graph, not a database. Core Data itself has no concept of uniqueness; it's up to you to implement that in your application. This is most typically done using the find-or-create pattern, which helps you prevent duplicate objects from being stored.
That said, you CAN return distinct results from Core Data using the NSDictionaryResultType. This will not prevent duplicates from being stored, but can be used to return distinct results from a fetch. There is an example of this in the programming guide. You can give this request all properties for a given entity by working with the NSEntityDescription of the managed object you are fetching.
For getting the object with the "last" timestamp for each, you actually want to get the object with the maximum value for that key path. That can be done a number of ways - a subquery, key path operators, expressions, etc.

SpecFlow - Is it possible to reuse test data within feature file?

Is there any way to reuse data in SpecFlow feature files?
E.g. I have two scenarios, which both use the same data table:
Scenario: Some scenario 1
Given I have a data table
| Field Name | Value |
| Name | "Tom" |
| Age | 16 |
When ...
Scenario: Some scenario 2
Given I have a data table
| Field Name | Value |
| Name | "Tom" |
| Age | 16 |
And I have another data table
| Field Name | Value |
| Brand | "Volvo" |
| City | "London" |
When ...
In these simple examples the tables are small and not a big problem; however, in my case the tables have 20+ rows and will be used in at least 5 tests each.
I'd imagine something like this:
Having data table "Employee"
| Field Name | Value |
| Name | "Tom" |
| Age | 16 |
Scenario: Some scenario 1
Given I have a data table "Employee"
When ...
Scenario: Some scenario 2
Given I have a data table "Employee"
And I have another data table
| Field Name | Value |
| Brand | "Volvo" |
| City | "London" |
When ...
I couldn't find anything like this in the SpecFlow documentation. The only suggestion for sharing data was to put it into *.cs files. However, I can't do that because the Feature Files will be used by non-technical people.
The Background is the place for common data like this until the data gets too large and your Background section ends up spanning several pages. It sounds like that might be the case for you.
You mention the tables having 20+ rows each and having several data tables like this. That would be a lot of Background for readers to wade through before they get to the Scenarios. Is there another way you could describe the data? When I had tables of data like this in the past, I put the details into a fixtures class in the automation code and then described just the important aspects in the Feature file.
Assuming for the sake of an example that "Tom" is a potential car buyer and you're running some sort of car showroom then his data table might include:
| Field | Value |
| Name | Tom |
| Age | 16 |
| Address | .... |
| Phone Number | .... |
| Fav Colour | Red |
| Country | UK |
Your Scenario 2 might be "Under 18s shouldn't be able to buy a car" (in the UK at least). Given that scenario we don't care about Tom's address or phone number, only his age. We could write that scenario as:
Scenario: Under 18s shouldn't be able to buy a car
Given there is a customer "Tom" who is under 18
When he tries to buy a car
Then I should politely refuse
Instead of keeping that table of Tom's details in the Feature file, we just reference the significant parts. When the Given step runs, the automation can look up "Tom" from our fixtures. The step references his age so that a) it's clear to the reader of the Feature file who Tom is, and b) we make sure the fixture data is still valid.
A reader of that scenario will immediately understand what's important about Tom (he's 16), and they don't have to continuously cross-reference between the Scenario and the Background. Other Scenarios can also use Tom, and if they are interested in other aspects of his information (e.g. address) they can specify the relevant information: Given there is a customer "Tom" who lives at 10 Downing Street.
Which approach is best depends how much of this data you've got. If it's a small number of fields across a couple of tables then put it in the Background, but once it gets to be 10+ fields or large numbers of tables (presumably we have many potential customers) then I'd suggest moving it outside the Feature file and just describing the relevant information in each Scenario.
Yes, you can use a Background, e.g. from https://github.com/cucumber/cucumber/wiki/Background
Background:
Given I have a data table "Employee"
| Field Name | Value |
| Name | "Tom" |
| Age | 16 |
Scenario: Some scenario 1
When ...
Scenario: Some scenario 2
Given I have another data table
| Field Name | Value |
| Brand | "Volvo" |
| City | "London" |
If you're ever unsure, I find http://www.specflow.org/documentation/Using-Gherkin-Language-in-SpecFlow/ a great resource.
