I have calculated the average temperature for two cities grouped by season, but I'm having trouble getting the difference between avg(TemperatureF) for City A vs. City B. Here is an example of what my Spark Scala DataFrame looks like:
City   Season   avg(TemperatureF)
A      Fall     52
A      Spring   50
A      Summer   72
A      Winter   25
B      Fall     49
B      Spring   44
B      Summer   69
B      Winter   22
You may use the pivot function as follows:
import pyspark.sql.functions as f

# rename the aggregated column first so it is easier to reference
df.withColumnRenamed('avg(TemperatureF)', 'avg') \
    .groupBy('Season').pivot('City').agg(f.first('avg')) \
    .withColumn('diff', f.expr('A - B')) \
    .show()
+------+---+---+----+
|Season| A| B|diff|
+------+---+---+----+
|Spring| 50| 44| 6.0|
|Summer| 72| 69| 3.0|
| Fall| 52| 49| 3.0|
|Winter| 25| 22| 3.0|
+------+---+---+----+
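To reproduce the example end-to-end, here is a minimal sketch that builds the input DataFrame (the column names, including avg(TemperatureF), are taken from the question's table):
df = spark.createDataFrame(
    [('A', 'Fall', 52.0), ('A', 'Spring', 50.0), ('A', 'Summer', 72.0), ('A', 'Winter', 25.0),
     ('B', 'Fall', 49.0), ('B', 'Spring', 44.0), ('B', 'Summer', 69.0), ('B', 'Winter', 22.0)],
    ['City', 'Season', 'avg(TemperatureF)'])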
Is it possible to get a sample n rows of a query with a where clause?
I tried to use TABLESAMPLE as shown below, but I ended up only getting records from the first partition ('2021-09-14').
select * from (select * from table where ts in ('2021-09-14', '2021-09-15')) tablesample (100 rows)
You can use monotonically_increasing_id() or rand() to generate an additional column that can be used to order your dataset and produce the sampling field.
Both of these functions can be used together or individually.
Furthermore, you can use a LIMIT clause to sample your required N records.
NOTE - orderBy would be a costly operation
Data Preparation
from pyspark.sql import functions as F

# 'sql' below refers to the SparkSession
input_str = """
1 2/12/2019 114 2
2 3/5/2019 116 1
3 3/3/2019 120 6
4 3/4/2019 321 10
6 6/5/2019 116 1
7 6/3/2019 116 1
8 10/1/2019 120 3
9 10/1/2019 120 3
10 10/1/2020 120 3
11 10/1/2020 120 3
12 10/1/2020 120 3
13 10/1/2022 120 3
14 10/1/2021 120 3
15 10/6/2019 120 3
""".split()
input_values = list(map(lambda x: x.strip() if x.strip() != 'null' else None, input_str))
cols = list(map(lambda x: x.strip() if x.strip() != 'null' else None, "shipment_id ship_date customer_id quantity".split()))
n = len(input_values)
input_list = [tuple(input_values[i:i+4]) for i in range(0,n,4)]
sparkDF = sql.createDataFrame(input_list, cols)
sparkDF = sparkDF.withColumn('ship_date',F.to_date(F.col('ship_date'),'d/M/yyyy'))
sparkDF.show()
+-----------+----------+-----------+--------+
|shipment_id| ship_date|customer_id|quantity|
+-----------+----------+-----------+--------+
| 1|2019-12-02| 114| 2|
| 2|2019-05-03| 116| 1|
| 3|2019-03-03| 120| 6|
| 4|2019-04-03| 321| 10|
| 6|2019-05-06| 116| 1|
| 7|2019-03-06| 116| 1|
| 8|2019-01-10| 120| 3|
| 9|2019-01-10| 120| 3|
| 10|2020-01-10| 120| 3|
| 11|2020-01-10| 120| 3|
| 12|2020-01-10| 120| 3|
| 13|2022-01-10| 120| 3|
| 14|2021-01-10| 120| 3|
| 15|2019-06-10| 120| 3|
+-----------+----------+-----------+--------+
Order By - Monotonically Increasing ID & Rand
sparkDF.createOrReplaceTempView("shipment_table")
sql.sql("""
SELECT
*
FROM (
SELECT
*
,monotonically_increasing_id() as increasing_id
,RAND(10) as random_order
FROM shipment_table
WHERE ship_date BETWEEN '2019-01-01' AND '2019-12-31'
ORDER BY monotonically_increasing_id() DESC ,RAND(10) DESC
LIMIT 5
)
""").show()
+-----------+----------+-----------+--------+-------------+-------------------+
|shipment_id| ship_date|customer_id|quantity|increasing_id| random_order|
+-----------+----------+-----------+--------+-------------+-------------------+
| 15|2019-06-10| 120| 3| 8589934593|0.11682250456449328|
| 9|2019-01-10| 120| 3| 8589934592|0.03422639313807285|
| 8|2019-01-10| 120| 3| 6| 0.8078688178371882|
| 7|2019-03-06| 116| 1| 5|0.36664222617947817|
| 6|2019-05-06| 116| 1| 4| 0.2093704977577|
+-----------+----------+-----------+--------+-------------+-------------------+
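The same idea can also be expressed with the DataFrame API; a sketch reusing sparkDF from above:
import pyspark.sql.functions as F

sparkDF \
    .filter(F.col('ship_date').between('2019-01-01', '2019-12-31')) \
    .orderBy(F.monotonically_increasing_id().desc(), F.rand(10).desc()) \
    .limit(5) \
    .show()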
If you are using a Dataset, there is built-in functionality for this, as outlined in the documentation:
sample(withReplacement: Boolean, fraction: Double): Dataset[T]
Returns a new Dataset by sampling a fraction of rows, using a random seed.
withReplacement: Sample with replacement or not.
fraction: Fraction of rows to generate, range [0.0, 1.0].
Since: 1.6.0
Note: This is NOT guaranteed to provide exactly the fraction of the total count of the given Dataset.
To use this you'd filter your dataset against whatever criteria you're looking for, then sample the result. If you need an exact number of rows rather than a fraction you can follow the call to sample with limit(n) where n is the number of rows to return.
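For example, a PySpark sketch of filter-then-sample against the original question's table (the DataFrame name df, the fraction and the seed are placeholders; the Scala Dataset API is analogous):
import pyspark.sql.functions as F

sampled = df \
    .filter(F.col('ts').isin('2021-09-14', '2021-09-15')) \
    .sample(withReplacement=False, fraction=0.1, seed=42) \
    .limit(100)  # only if you need an exact number of rows rather than a fraction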
I have product, brand, and percentage columns. For each row, I want to calculate the sum of the percentage column over the rows above the current row, both for rows with a different brand than the current row and for rows with the same brand as the current row. How can I do it in PySpark or using spark.sql?
sample data:
import pandas as pd

df = pd.DataFrame({'a': ['a1', 'a2', 'a3', 'a4', 'a5', 'a6'],
                   'brand': ['b1', 'b2', 'b1', 'b3', 'b2', 'b1'],
                   'pct': [40, 30, 10, 8, 7, 5]})
df = spark.createDataFrame(df)
What I am looking for:
product brand pct pct_same_brand pct_different_brand
a1 b1 40 null null
a2 b2 30 null 40
a3 b1 10 40 30
a4 b3 8 null 80
a5 b2 7 30 58
a6 b1 5 50 45
This is what I have tried:
df.createOrReplaceTempView('tmp')
spark.sql("""
select *, sum(pct * (select case when n1.brand = n2.brand then 1 else 0 end
from tmp n1)) over(order by pct desc rows between UNBOUNDED PRECEDING and 1
preceding)
from tmp n2
""").show()
You can get the pct_different_brand column by subtracting the partitioned rolling sum (i.e. the pct_same_brand column) from the total rolling sum:
from pyspark.sql import functions as F, Window
df2 = df.withColumn(
'pct_same_brand',
F.sum('pct').over(
Window.partitionBy('brand')
.orderBy(F.desc('pct'))
.rowsBetween(Window.unboundedPreceding, -1)
)
).withColumn(
'pct_different_brand',
F.sum('pct').over(
Window.orderBy(F.desc('pct'))
.rowsBetween(Window.unboundedPreceding, -1)
) - F.coalesce(F.col('pct_same_brand'), F.lit(0))
)
df2.show()
+---+-----+---+--------------+-------------------+
| a|brand|pct|pct_same_brand|pct_different_brand|
+---+-----+---+--------------+-------------------+
| a1| b1| 40| null| null|
| a2| b2| 30| null| 40|
| a3| b1| 10| 40| 30|
| a4| b3| 8| null| 80|
| a5| b2| 7| 30| 58|
| a6| b1| 5| 50| 45|
+---+-----+---+--------------+-------------------+
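If you would rather use spark.sql, here is a sketch of an equivalent query (reusing the tmp view registered in the question):
spark.sql("""
    SELECT *,
        SUM(pct) OVER (PARTITION BY brand ORDER BY pct DESC
                       ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) AS pct_same_brand,
        SUM(pct) OVER (ORDER BY pct DESC
                       ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING)
          - COALESCE(SUM(pct) OVER (PARTITION BY brand ORDER BY pct DESC
                       ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING), 0) AS pct_different_brand
    FROM tmp
""").show()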
I have a table like the one below:
id week count
A100 201008 2
A100 201009 9
A100 201010 16
A100 201011 23
A100 201012 30
A100 201013 36
A100 201015 43
A100 201017 50
A100 201018 57
A100 201019 63
A100 201023 70
A100 201024 82
A100 201025 88
A100 201026 95
A100 201027 102
Here, we can see that the following weeks are missing:
First, 201014 is missing.
Second, 201016 is missing.
Third, weeks 201020, 201021 and 201022 are missing.
My requirement is that whenever weeks are missing, we need to show the count of the previous week.
In this case the output should be:
id week count
A100 201008 2
A100 201009 9
A100 201010 16
A100 201011 23
A100 201012 30
A100 201013 36
A100 201014 36
A100 201015 43
A100 201016 43
A100 201017 50
A100 201018 57
A100 201019 63
A100 201020 63
A100 201021 63
A100 201022 63
A100 201023 70
A100 201024 82
A100 201025 88
A100 201026 95
A100 201027 102
How can I achieve this using Hive/PySpark?
Although this answer is in Scala, the Python version will look almost the same and can be easily converted (a PySpark sketch is included after the final output below).
Step 1:
Find the rows that has missing week(s) value prior to it.
Sample Input:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._
//sample input
val input = sc.parallelize(List(("A100",201008,2), ("A100",201009,9),("A100",201014,4), ("A100",201016,45))).toDF("id","week","count")
scala> input.show
+----+------+-----+
| id| week|count|
+----+------+-----+
|A100|201008| 2|
|A100|201009| 9|
|A100|201014| 4| //missing 4 rows
|A100|201016| 45| //missing 1 row
+----+------+-----+
To find it, we can use the .lead() function on week and compute the difference between leadWeek and week. The difference should not be > 1; if it is, there are missing rows between this row and the next.
val diffDF = input
.withColumn("leadWeek", lead($"week", 1).over(Window.partitionBy($"id").orderBy($"week"))) // partitioning by id & computing lead()
.withColumn("diff", ($"leadWeek" - $"week") -1) // finding difference between leadWeek & week
scala> diffDF.show
+----+------+-----+--------+----+
| id| week|count|leadWeek|diff|
+----+------+-----+--------+----+
|A100|201008| 2| 201009| 0| // diff -> 0 represents that no rows needs to be added
|A100|201009| 9| 201014| 4| // diff -> 4 represents 4 rows are to be added after this row.
|A100|201014| 4| 201016| 1| // diff -> 1 represents 1 row to be added after this row.
|A100|201016| 45| null|null|
+----+------+-----+--------+----+
Step 2:
If the diff is >= 1: create and add n rows (InputWithDiff, see the case class below), where n is the diff value, incrementing the week value accordingly, and return the newly created rows along with the original row.
If the diff is 0, no additional computation is required; return the original row as it is.
Convert diffDF to a Dataset for ease of computation.
case class InputWithDiff(id: Option[String], week: Option[Int], count: Option[Int], leadWeek: Option[Int], diff: Option[Int])
val diffDS = diffDF.as[InputWithDiff]
val output = diffDS.flatMap(x => {
val diff = x.diff.getOrElse(0)
diff match {
case n if n >= 1 => x :: (1 to diff).map(y => InputWithDiff(x.id, Some(x.week.get + y), x.count,x.leadWeek, x.diff)).toList // create and append new Rows
case _ => List(x) // return as it is
}
}).drop("leadWeek", "diff").toDF // drop unnecessary columns & convert to DF
final output:
scala> output.show
+----+------+-----+
| id| week|count|
+----+------+-----+
|A100|201008| 2|
|A100|201009| 9|
|A100|201010| 9|
|A100|201011| 9|
|A100|201012| 9|
|A100|201013| 9|
|A100|201014| 4|
|A100|201015| 4|
|A100|201016| 45|
+----+------+-----+
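For reference, a PySpark sketch of the same lead-and-expand approach (not part of the original answer; it needs Spark 2.4+ for sequence(), uses input_df as a hypothetical name for the same input, and like the Scala version it relies on simple week arithmetic, i.e. no year boundaries):
from pyspark.sql import functions as F, Window

w = Window.partitionBy('id').orderBy('week')
output = (input_df
    .withColumn('leadWeek', F.lead('week', 1).over(w))
    # expand each row into itself plus any missing weeks before the next row
    .withColumn('week', F.explode(F.sequence(
        F.col('week'),
        F.coalesce(F.col('leadWeek') - 1, F.col('week')))))
    .drop('leadWeek'))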
PySpark solution
Sample Data
df = spark.createDataFrame([(1,201901,10),
(1,201903,9),
(1,201904,21),
(1,201906,42),
(1,201909,3),
(1,201912,56)
],['id','weeknum','val'])
df.show()
+---+-------+---+
| id|weeknum|val|
+---+-------+---+
| 1| 201901| 10|
| 1| 201903| 9|
| 1| 201904| 21|
| 1| 201906| 42|
| 1| 201909| 3|
| 1| 201912| 56|
+---+-------+---+
1) The basic idea is to create a combination of all ids and weeks (from the minimum observed week to the maximum) with a cross join.
from pyspark.sql.functions import min,max,sum,when
from pyspark.sql import Window
min_max_week = df.agg(min(df.weeknum),max(df.weeknum)).collect()
#Generate all weeks using range
all_weeks = spark.range(min_max_week[0][0],min_max_week[0][1]+1)
all_weeks = all_weeks.withColumnRenamed('id','weekno')
#all_weeks.show()
id_all_weeks = df.select(df.id).distinct().crossJoin(all_weeks).withColumnRenamed('id','aid')
#id_all_weeks.show()
2) Thereafter, left joining the original dataframe onto these combinations helps identify the missing values.
res = id_all_weeks.join(df,(df.id == id_all_weeks.aid) & (df.weeknum == id_all_weeks.weekno),'left')
res.show()
+---+------+----+-------+----+
|aid|weekno| id|weeknum| val|
+---+------+----+-------+----+
| 1|201911|null| null|null|
| 1|201905|null| null|null|
| 1|201903| 1| 201903| 9|
| 1|201904| 1| 201904| 21|
| 1|201901| 1| 201901| 10|
| 1|201906| 1| 201906| 42|
| 1|201908|null| null|null|
| 1|201910|null| null|null|
| 1|201912| 1| 201912| 56|
| 1|201907|null| null|null|
| 1|201902|null| null|null|
| 1|201909| 1| 201909| 3|
+---+------+----+-------+----+
3) Then, use a combination of window functions: sum -> to assign groups, and max -> to fill in the missing values once the groups are assigned.
w1 = Window.partitionBy(res.aid).orderBy(res.weekno)
groups = res.withColumn("grp",sum(when(res.id.isNull(),0).otherwise(1)).over(w1))
w2 = Window.partitionBy(groups.aid,groups.grp)
missing_values_filled = groups.withColumn('filled',max(groups.val).over(w2)) #select required columns as needed
missing_values_filled.show()
+---+------+----+-------+----+---+------+
|aid|weekno| id|weeknum| val|grp|filled|
+---+------+----+-------+----+---+------+
| 1|201901| 1| 201901| 10| 1| 10|
| 1|201902|null| null|null| 1| 10|
| 1|201903| 1| 201903| 9| 2| 9|
| 1|201904| 1| 201904| 21| 3| 21|
| 1|201905|null| null|null| 3| 21|
| 1|201906| 1| 201906| 42| 4| 42|
| 1|201907|null| null|null| 4| 42|
| 1|201908|null| null|null| 4| 42|
| 1|201909| 1| 201909| 3| 5| 3|
| 1|201910|null| null|null| 5| 3|
| 1|201911|null| null|null| 5| 3|
| 1|201912| 1| 201912| 56| 6| 56|
+---+------+----+-------+----+---+------+
Hive Query with the same logic as described above (assuming a table with all weeks can be created)
select id,weeknum,max(val) over(partition by id,grp) as val
from (select i.id
,w.weeknum
,t.val
,sum(case when t.id is null then 0 else 1 end) over(partition by i.id order by w.weeknum) as grp
from (select distinct id from tbl) i
cross join weeks_table w
left join tbl t on t.id = i.id and w.weeknum = t.weeknum
) t
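If this runs through Spark SQL rather than Hive, the assumed weeks_table (and tbl) can be materialised from the sample data as temp views; a sketch reusing the min/max imported above (only valid when the min and max weeks fall in the same year, since week numbers such as 201852 and 201901 are not consecutive integers):
lo, hi = df.agg(min(df.weeknum), max(df.weeknum)).first()
spark.range(lo, hi + 1).withColumnRenamed('id', 'weeknum').createOrReplaceTempView('weeks_table')
df.createOrReplaceTempView('tbl')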
I am converting SQL code into PySpark.
The SQL code uses ROLLUP to sum up the count for each state.
I am trying to do the same thing in PySpark, but I don't know how to get the total count row.
I have a table with state, city, and count; I want to add a total count for each state at the end of each state's section.
This is a sample input:
State City Count
WA Seattle 10
WA Tacoma 11
MA Boston 11
MA Cambridge 3
MA Quincy 5
This is my desired output:
State City Count
WA Seattle 10
WA Tacoma 11
WA Total 21
MA Boston 11
MA Cambridge 3
MA Quincy 5
MA Total 19
I don't know how to add the total count in between states.
I did try rollup; here is my code:
df2 = df.rollup('STATE').count()
and the result shows up like this:
State Count
WA 21
MA 19
But I want the Total after each state.
Since you want the Total as a new row inside your DataFrame, one option is to union the results of the groupBy() and sort by ["State", "City", "Count"] (to ensure that the "Total" row displays last in each group):
import pyspark.sql.functions as f
df.union(
df.groupBy("State")\
.agg(f.sum("Count").alias("Count"))\
.select("State", f.lit("Total").alias("City"), "Count")
).sort("State", "City", "Count").show()
#+-----+---------+-----+
#|State| City|Count|
#+-----+---------+-----+
#| MA| Boston| 11|
#| MA|Cambridge| 3|
#| MA| Quincy| 5|
#| MA| Total| 19|
#| WA| Seattle| 10|
#| WA| Tacoma| 11|
#| WA| Total| 21|
#+-----+---------+-----+
Either use rollup directly:
df.rollup("State", "City").agg(f.sum("Count").alias("Count"))
or just register the table:
df.createOrReplaceTempView("df")
and apply your current SQL query with
spark.sql("...")
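For example, a sketch of what that SQL could look like with ROLLUP (column names are taken from the question; COALESCE assumes City has no genuine NULL values, and the HAVING clause drops the grand-total row):
spark.sql("""
    SELECT State,
           COALESCE(City, 'Total') AS City,
           SUM(Count) AS Count
    FROM df
    GROUP BY State, City WITH ROLLUP
    HAVING State IS NOT NULL
    ORDER BY State, City
""").show()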
I have a dataframe in pyspark. Here is what it looks like:
+---------+---------+
|timestamp| price |
+---------+---------+
|670098928| 50 |
|670098930| 53 |
|670098934| 55 |
+---------+---------+
I want to fill in the gaps in timestamp with the previous state, so that I can get a perfect set to calculate time weighted averages. Here is what the output should be like -
+---------+---------+
|timestamp| price |
+---------+---------+
|670098928| 50 |
|670098929| 50 |
|670098930| 53 |
|670098931| 53 |
|670098932| 53 |
|670098933| 53 |
|670098934| 55 |
+---------+---------+
Eventually, I want to persist this new dataframe on disk and visualize my analysis.
How do I do this in pyspark? (For simplicity sake, I have just kept 2 columns. My actual dataframe has 89 columns with ~670 million records before filling the gaps.)
You can generate timestamp ranges, flatten them with explode, and then aggregate the rows:
import pyspark.sql.functions as func
from pyspark.sql.types import IntegerType, ArrayType
a=sc.parallelize([[670098928, 50],[670098930, 53], [670098934, 55]])\
.toDF(['timestamp','price'])
f = func.udf(lambda x: list(range(x, x + 5)), ArrayType(IntegerType()))  # wrap in list() for Python 3; fixed 5-step window for this example
a.withColumn('timestamp',f(a.timestamp))\
.withColumn('timestamp',func.explode(func.col('timestamp')))\
.groupBy('timestamp')\
.agg(func.max(func.col('price')))\
.show()
+---------+----------+
|timestamp|max(price)|
+---------+----------+
|670098928| 50|
|670098929| 50|
|670098930| 53|
|670098931| 53|
|670098932| 53|
|670098933| 53|
|670098934| 55|
|670098935| 55|
|670098936| 55|
|670098937| 55|
|670098938| 55|
+---------+----------+
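A UDF-free alternative sketch using lead() and sequence() (Spark 2.4+), which fills up to the next observed timestamp instead of relying on the hard-coded +5 window; on a large DataFrame you would partition the window by a key column rather than sorting globally:
from pyspark.sql import functions as F, Window

w = Window.orderBy('timestamp')  # global sort; partition by a key column on real data
filled = (a
    .withColumn('next_ts', F.lead('timestamp').over(w))
    # expand each row into itself plus every missing timestamp before the next row
    .withColumn('timestamp', F.explode(F.sequence(
        F.col('timestamp'),
        F.coalesce(F.col('next_ts') - 1, F.col('timestamp')))))
    .drop('next_ts'))
filled.show()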