Spark Dataframe complex ordering - apache-spark

I have an event log dataset, like this:
| patient | timestamp | event_st | extra_info |
|--------:|---------------|----------------|:-----------|
| 1 | 1/1/2018 2:30 | urg_admission | x |
| 1 | 1/1/2018 3:00 | urg_discharge | x |
| 1 | 1/1/2018 | hosp_admission | y |
| 1 | 1/10/2018 | hosp_discharge | y |
I want to order all rows by patient and timestamp, but unfortunately, depending on the event type (event_st), the timestamp may have minute or day granularity.
So, the solution I would use in C++ would be to define a custom < operator that uses event_st as a discriminator when the time granularity differs. For example, with the data shown, events with the hosp_ prefix would always be ordered after events with the urg_ prefix when they fall on the same day.
Is there any equivalent approach using the DataFrame API or other Spark APIs?
Thank you very much.

One option is to first normalize all the timestamps to a standard form, such as ddMMYY or epoch time. The simplest way is to use a UDF.
For example, if you convert all timestamps to epoch time, your code would look like this:
def convertTimestamp(timeStamp: String, event_st: String): Long = {
  if (event_st == "urg_admission") {
    // ... add conversion logic for minute-granularity timestamps
  } else if (event_st == "hosp_admission") {
    // ... add conversion logic for day-granularity timestamps
  }
  // ...
}
val df = spark.read.json("/path/to/log/dataset") // I am assuming JSON format
spark.udf.register("convertTimestamp", convertTimestamp _)
df.createOrReplaceTempView("logdataset")
val df_normalized = spark.sql("select logdataset.*, convertTimestamp(timestamp, event_st) as normalized_timestamp from logdataset")
After this, you can use the normalized dataset for subsequent operations.
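If you'd rather stay in the DataFrame API, the same idea can be expressed with built-in column expressions instead of a UDF. Below is a minimal sketch, assuming the urg_/hosp_ prefix is the only discriminator you need and that the date part of timestamp parses with the M/d/yyyy pattern; the event_day and event_rank column names are just illustrative:
import org.apache.spark.sql.functions._

// Rank events so that, within the same day, urg_ events (0) sort before hosp_ events (1).
val eventRank = when(col("event_st").startsWith("urg_"), lit(0))
  .when(col("event_st").startsWith("hosp_"), lit(1))
  .otherwise(lit(2))

val ordered = df
  .withColumn("event_day", to_date(split(col("timestamp"), " ").getItem(0), "M/d/yyyy"))
  .withColumn("event_rank", eventRank)
  // The trailing sort on the raw timestamp string only breaks ties between events
  // of the same type; parse it into a proper timestamp if you need exact ordering there.
  .orderBy(col("patient"), col("event_day"), col("event_rank"), col("timestamp"))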

Related

Efficiently update rows of a postgres table from another table in another database based on a condition in a common column

I have two pandas DataFrames:
df1 from database A with connection parameters {"host":"hostname_a","port": "5432", "dbname":"database_a", "user": "user_a", "password": "secret_a"}. The column key is the primary key.
df1:
| | key | create_date | update_date |
|---:|------:|:-------------|:--------------|
| 0 | 57247 | 1976-07-29 | 2018-01-21 |
| 1 | 57248 | | 2018-01-21 |
| 2 | 57249 | 1992-12-22 | 2016-01-31 |
| 3 | 57250 | | 2015-01-21 |
| 4 | 57251 | 1991-12-23 | 2015-01-21 |
| 5 | 57262 | | 2015-01-21 |
| 6 | 57263 | | 2014-01-21 |
df2 from database B with connection parameters {"host": "hostname_b","port": "5433", "dbname":"database_b", "user": "user_b", "password": "secret_b"}. The column id is the primary key (these values are originally the same as the ones in the key column of df1; it's only a renaming of df1's primary key column).
df2:
| | id | create_date | update_date | user |
|---:|------:|:-------------|:--------------|:------|
| 0 | 57247 | 1976-07-29 | 2018-01-21 | |
| 1 | 57248 | | 2018-01-21 | |
| 2 | 57249 | 1992-12-24 | 2020-10-11 | klm |
| 3 | 57250 | 2001-07-14 | 2019-21-11 | ptl |
| 4 | 57251 | 1991-12-23 | 2015-01-21 | |
| 5 | 57262 | | 2015-01-21 | |
| 6 | 57263 | | 2014-01-21 | |
Notice that row[2] and row[3] in df2 have more recent update_date values (2020-10-11 and 2019-21-11 respectively) than their counterparts in df1 (where id = key), because their create_date values have been modified (by the given users).
I would like to update the rows of df1 (concretely, the create_date and update_date values) where update_date in df2 is more recent than its original value in df1 (for the same primary keys).
This is how I'm tackling this for the moment, using sqlalchemy and psycopg2 + the .to_sql() method of pandas' DataFrame:
import psycopg2
from sqlalchemy import create_engine

connector = psycopg2.connect(**database_parameters_dictionary)
# creator expects a callable that returns a DBAPI connection
engine = create_engine('postgresql+psycopg2://', creator=lambda: connector)
df1.update(df2)  # 1) maybe there is something better to do here?
with engine.connect() as connection:
    df1.to_sql(
        name="database_table_name",
        con=connection,
        schema="public",
        if_exists="replace",  # 2) maybe there is also something better to do here?
        index=True
    )
The problem I have is that, according to the documentation, the if_exists argument can only do three things:
if_exists{‘fail’, ‘replace’, ‘append’}, default ‘fail’
Therefore, to update these two rows, I have to:
1) use the .update() method on df1 with df2 as an argument, together with
2) replace the whole table via the .to_sql() method, which means "drop + recreate".
As the tables are really large (more than 500'000 entries), I have the feeling that this will involve a lot of unnecessary work!
How could I efficiently update only those two newly updated rows? Do I have to generate custom SQL queries that compare the dates for each row and only take the ones that have really changed? But here again, I have the intuition that looping through all rows to compare the update dates will take "a lot" of time. What is the most efficient way to do that? (It would have been easier in pure SQL if the two tables were on the same host/database, but unfortunately that's not the case.)
Pandas can't do partial updates of a table, no. There is a longstanding open bug for supporting sub-whole-table-granularity updates in .to_sql(), but you can see from the discussion there that it's a very complex feature to support in the general case.
However, limiting it to just your situation, I think there's a reasonable approach you could take.
Instead of using df1.update(df2), put together an expression that yields only the changed records with their new values (I don't use pandas often so I don't know this offhand); then iterate over the resulting dataframe and build the UPDATE statements yourself (or with the SQLAlchemy expression layer, if you're using that). Then, use the connection to DB A to issue all the UPDATEs as one transaction. With an indexed PK, it should be as fast as this would ever be expected to be.
BTW, I don't think df1.update(df2) is exactly correct - from my reading, that would update all rows with any differing fields, not just those where update_date in df2 is more recent. But it's a moot point if update_date in df2 is only ever more recent than in df1.

How do I make my many-join / many-union datasets compute faster?

I have a series of ~30 datasets that all need to be joined together for making a wide final table. This final table takes ~5 years of individual tables (one table per year) and unions them together, then joins this full history with the full history of other tables (similarly unioned) to make a big, historical, wide table.
The layout of these per-year tables is as follows:
table_type_1:
| primary_key | year |
|-------------|------|
| key_1 | 0 |
| key_2 | 0 |
| key_3 | 0 |
With other year tables like this:
table_type_1:
| primary_key | year |
|-------------|------|
| key_1 | 1 |
| key_2 | 1 |
These are then unioned together to create:
table_type_1:
| primary_key | year |
|-------------|------|
| key_1 | 0 |
| key_2 | 0 |
| key_3 | 0 |
| key_1 | 1 |
| key_2 | 1 |
Similarly, a second type of table when unioned results in the following:
table_type_2:
| primary_key | year |
|-------------|------|
| key_1 | 0 |
| key_2 | 0 |
| key_3 | 0 |
| key_1 | 1 |
| key_2 | 1 |
I now want to join table_type_1 with table_type_2 on primary_key and year to yield a much wider table. I notice that this final join takes a very long time and shuffles a lot of data.
How can I make this faster?
You can use bucketing on the per-year tables over the primary_key and year columns into the exact same number of buckets to avoid an expensive exchange when computing the final join.
- output: table_type_1_year_0
  input: raw_table_type_1_year_0
  hive_partitioning: none
  bucketing: BUCKET_COUNT by (PRIMARY_KEY, YEAR)
- output: table_type_1_year_1
  input: raw_table_type_1_year_1
  hive_partitioning: none
  bucketing: BUCKET_COUNT by (PRIMARY_KEY, YEAR)
...
- output: table_type_2_year_0
  input: raw_table_type_2_year_0
  hive_partitioning: none
  bucketing: BUCKET_COUNT by (PRIMARY_KEY, YEAR)
- output: table_type_2_year_1
  input: raw_table_type_2_year_1
  hive_partitioning: none
  bucketing: BUCKET_COUNT by (PRIMARY_KEY, YEAR)
...
- output: all_tables
  input:
    - table_type_1_year_0
    - table_type_1_year_1
    ...
    - table_type_2_year_0
    - table_type_2_year_1
    ...
  hive_partitioning: none
  bucketing: BUCKET_COUNT by (PRIMARY_KEY, YEAR)
Note: when picking the BUCKET_COUNT value, it's important to understand that it should be optimized for the final all_tables output, not for the intermediate tables. This likely means you will end up with quite small files for the intermediate tables, but that is probably inconsequential compared to the efficiency gains on the all_tables output, since you won't have to compute a massive exchange when joining everything up; your buckets will be pre-computed and you can simply SortMergeJoin on the input files.
For an explicit example of how to write a transform that writes out a specified number of buckets, my answer over here is probably useful.
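If you are working directly against the Spark API rather than a transform spec, a minimal sketch of the bucketed write could look like this; the table names, the bucket count of 512, and the lowercase column names are illustrative, and it assumes the unioned per-type tables are saved as managed tables:
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().enableHiveSupport().getOrCreate()

// Illustrative bucket count; size it for the final joined output, not the intermediates.
val bucketCount = 512

// Write each unioned table bucketed and sorted by the join keys,
// so the final join can be a SortMergeJoin without an exchange.
spark.table("table_type_1_all_years")
  .write
  .bucketBy(bucketCount, "primary_key", "year")
  .sortBy("primary_key", "year")
  .saveAsTable("table_type_1_bucketed")

spark.table("table_type_2_all_years")
  .write
  .bucketBy(bucketCount, "primary_key", "year")
  .sortBy("primary_key", "year")
  .saveAsTable("table_type_2_bucketed")

// Both sides share the same bucket spec on the join keys, so no shuffle is needed here.
val allTables = spark.table("table_type_1_bucketed")
  .join(spark.table("table_type_2_bucketed"), Seq("primary_key", "year"))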
What I advise is: first union the small datasets, then broadcast the result of that union; Spark will ship that dataset to its different nodes, which reduces the number of shuffles. Unions in Spark are well optimized, so what you should focus on is the overall process: select only the columns you need from the beginning, and avoid costly operations before the union (such as groupByKey, etc.), because Spark will execute them when it runs the final plan. I also advise avoiding Hive, because it uses the MapReduce strategy, which is not worthwhile compared to Spark SQL. You can use this example of a function (just change the key); use Scala if you can, since it interacts directly with Spark:
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{broadcast, col}

def map_To_cells(df1: DataFrame, df2: DataFrame): DataFrame = {
  // Rename the join key on the small side to avoid ambiguous column references.
  val df0 = df2.withColumn("key0", col("key")).drop("key")
  df1.as("main").join(
    broadcast(df0),              // hint Spark to broadcast the small dataset
    df0("key0") <=> df1("key")   // null-safe equality on the join key
  ).select("main.*")             // select only the columns you need
}

Is spark smart enough to avoid redundant values while performing aggregation?

I have the following Dataset
case class Department(deptId:String,locations:Seq[String])
// using spark 2.0.2
// I have a Dataset `ds` of type Department
+-------+--------------------+
|deptId | locations |
+-------+--------------------+
| d1|[delhi,kerala] |
| d1|[] |
| dp2|[] |
| dp2|[hyderabad] |
+-------+--------------------+
I intended to convert it to
// Dataset `result` of type Department itself
+-------+--------------------+
|deptId | locations |
+-------+--------------------+
| d1|[delhi,kerala] |
| dp2|[hyderabad] |
+-------+--------------------+
I do the following
import org.apache.spark.sql.functions.{collect_list, udf}

val flatten = udf((xs: Seq[Seq[String]]) => xs.flatten)
val result = ds.groupBy("deptId")
  .agg(flatten(collect_list("locations")).as("locations"))
My question is: is Spark smart enough not to shuffle around the empty locations, i.e. []?
PS: I am not sure if this is a stupid question.
Yes and no:
Yes - collect_list performs map-side aggregation, so if there are multiple values per grouping key, data will be merged before shuffle.
No - because an empty list is not the same as missing data. If that's not the desired behavior, you should filter the data first:
ds.filter(size($"locations") > 0).groupBy("deptId").agg(...)
but keep in mind that this will yield a different result if a given deptId has only empty arrays.
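For completeness, a sketch combining that filter with the aggregation from the question; it assumes spark.implicits._ is in scope for the $ syntax, and the UDF is renamed flattenLocations only to keep the snippet self-contained:
import org.apache.spark.sql.functions.{collect_list, size, udf}

// Drop rows whose locations array is empty before grouping,
// then flatten the collected list of arrays as in the question.
val flattenLocations = udf((xs: Seq[Seq[String]]) => xs.flatten)

val result = ds
  .filter(size($"locations") > 0)
  .groupBy("deptId")
  .agg(flattenLocations(collect_list("locations")).as("locations"))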

Conditional Explode in Spark Structured Streaming / Spark SQL

I'm trying to do a conditional explode in Spark Structured Streaming.
For instance, my streaming dataframe looks as follows (I'm totally making the data up here). I want to explode the employees array into separate rows of single-element arrays when contingent = 1. When contingent = 0, I need to leave the array as is.
|----------------|---------------------|------------------|
| Dept ID | Employees | Contingent |
|----------------|---------------------|------------------|
| 1 | ["John", "Jane"] | 1 |
|----------------|---------------------|------------------|
| 4 | ["Amy", "James"] | 0 |
|----------------|---------------------|------------------|
| 2 | ["David"] | 1 |
|----------------|---------------------|------------------|
So, my output should look like this (I do not need to display the contingent column):
|----------------|---------------------|
| Dept ID | Employees |
|----------------|---------------------|
| 1 | ["John"] |
|----------------|---------------------|
| 1 | ["Jane"] |
|----------------|---------------------|
| 4 | ["Amy", "James"] |
|----------------|---------------------|
| 2 | ["David"] |
|----------------|---------------------|
There are a couple of challenges I'm currently facing:
exploding arrays conditionally
exploding arrays into arrays (rather than into strings, in this case)
In Hive, there was a concept of UDTF (user-defined table functions) that would allow me to do this. Wondering if there is anything comparable to it?
Use flatMap to explode and specify whatever condition you want.
import spark.implicits._

case class Department(Dept_ID: String, Employees: Array[String], Contingent: Int)
case class DepartmentExp(Dept_ID: String, Employees: Array[String])

val ds = df.as[Department]
ds.flatMap(dept => {
  if (dept.Contingent == 1) {
    // one output row per employee, each wrapped in a single-element array
    dept.Employees.map(emp => DepartmentExp(dept.Dept_ID, Array(emp)))
  } else {
    // keep the array as is, in a single output row
    Array(DepartmentExp(dept.Dept_ID, dept.Employees))
  }
}).as[DepartmentExp]

Performance: Group by a subset of previous grouping columns

I have a DataFrame with two categorical columns, similar to the following example:
+----+-------+-------+
| ID | Cat A | Cat B |
+----+-------+-------+
| 1 | A | B |
| 2 | B | C |
| 5 | A | B |
| 7 | B | C |
| 8 | A | C |
+----+-------+-------+
I have some processing to do that needs two steps: The first one needs the data to be grouped by both categorical columns. In the example, it would generate the following DataFrame:
+-------+-------+-----+
| Cat A | Cat B | Cnt |
+-------+-------+-----+
| A | B | 2 |
| B | C | 2 |
| A | C | 1 |
+-------+-------+-----+
Then, the next step consists of grouping only by Cat A to calculate a new aggregation, for example:
+-----+-----+
| Cat | Cnt |
+-----+-----+
| A | 3 |
| B | 2 |
+-----+-----+
Now come the questions:
In my solution, I create the intermediate dataframe by doing
val df2 = df.groupBy("catA", "catB").agg(...)
and then I aggregate this df2 to get the last one:
val df3 = df2.groupBy("catA").agg(...)
I assume it is more efficient than aggregating the first DF again. Is that a good assumption? Or does it make no difference?
Are there any suggestions of a more efficient way to achieve the same results?
Generally speaking, it looks like a good approach and should be more efficient than aggregating the data twice. Since shuffle files are implicitly cached, at least part of the work should be performed only once. So when you call an action on df2 and subsequently on df3, you should see that the stages corresponding to df2 have been skipped. Also, the partial structure enforced by the first shuffle may reduce the memory requirements for the aggregation buffer during the second agg.
Unfortunately, DataFrame aggregations, unlike RDD aggregations, cannot use a custom partitioner. This means you cannot compute both data frames using a single shuffle based on the value of catA, so the second aggregation will require a separate exchange with hash partitioning. I doubt that justifies switching to RDDs.
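For reference, a minimal sketch of the two-step approach discussed above; the count and sum aggregations are illustrative stand-ins for the agg(...) calls in the question:
import org.apache.spark.sql.functions.{count, sum}

// Step 1: aggregate by both categorical columns.
val df2 = df.groupBy("catA", "catB").agg(count("*").as("cnt"))

// Step 2: reuse the intermediate result and aggregate by catA only.
// Once df2 has been materialized, its shuffle output is implicitly cached,
// so the stages that produced it are skipped when df3 is computed.
val df3 = df2.groupBy("catA").agg(sum("cnt").as("cnt"))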
