PySpark: extracting exactly 4 consecutive numeric digits from a column and returning them in a new column - apache-spark

I am very new to using PySpark and have no idea how to do this.
I am trying to extract a four-digit year from a title column.
Some values in the title column are:
Under Ground2(1990)
Waterword(1995)
Incredible
Skate (1991) board
That girl 2002”
I am trying to get:
1990
1995
1991
2002
This is what I have tried:
import pyspark.sql.functions as F
from pyspark.sql.functions import regexp_replace

movies_DF = movies_DF.withColumn('title', regexp_replace(movies_DF.title, r"\(", ""))
movies_DF = movies_DF.withColumn('title', regexp_replace(movies_DF.title, r"\)", ""))
movies_DF = movies_DF.withColumn('yearOfRelease', F.expr('substring(title, -4)'))
My output column then has:
1990
1995
board
2002”
dible

Use the regexp_extract function:
from pyspark.sql.functions import regexp_extract, col
df = df.withColumn('Year', regexp_extract(col('Title'), r'\((\d{4})\)$', 1))
df.show()
+-------------------+----+
| Title|Year|
+-------------------+----+
|Under Ground2(1990)|1990|
| Waterword(1995)|1995|
+-------------------+----+
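If the year is not always wrapped in parentheses at the end of the title (e.g. "Skate (1991) board" or "That girl 2002"), a slightly more general pattern that captures any standalone run of exactly four digits should work. This is a minimal sketch, assuming each title contains at most one four-digit year:
from pyspark.sql.functions import regexp_extract, col
# (?<!\d) and (?!\d) ensure the match is exactly four digits, not part of a longer number
df = df.withColumn('Year', regexp_extract(col('Title'), r'(?<!\d)(\d{4})(?!\d)', 1))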

Related

How to select last value in a pySpark DataFrame based on a datetime column

I have a DataFrame df structured as follows:
date_time         id  value
2020-12-06 17:00  A   10
2020-12-06 17:05  A   18
2020-12-06 17:00  B   20
2020-12-06 17:05  B   28
2020-12-06 17:00  C   30
2020-12-06 17:05  C   38
I have to select only the most recent row for each id into a DataFrame named df_last.
This is a solution that works:
from pyspark.sql import functions as F
from pyspark.sql.window import Window

df_rows = df.withColumn(
    'row_num',
    F.row_number().over(Window.partitionBy('id').orderBy(F.desc('date_time'))) - 1
)
df_last = df_rows.filter(F.col('row_num') == 0)
I wonder if there is a simpler/cleaner solution.
That's pretty much the way to do it. Just one minor improvement that can be made: there is no need to subtract 1 from the row number:
from pyspark.sql import functions as F
from pyspark.sql.window import Window
df_rows = df.withColumn(
'row_num',
F.row_number().over(Window.partitionBy('id').orderBy(F.desc('date_time')))
)
df_last = df_rows.filter('row_num = 1')
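If you want to avoid the helper column entirely, a hedged alternative sketch is to pick, per id, the struct with the maximum date_time (this assumes date_time sorts chronologically, e.g. it is a timestamp or an ISO-formatted string):
from pyspark.sql import functions as F
# max over a struct compares fields left to right, so this keeps the row with the latest date_time
df_last = (df
    .groupBy('id')
    .agg(F.max(F.struct('date_time', 'value')).alias('last'))
    .select('id', 'last.date_time', 'last.value'))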

Change the bar item name in Pandas

I have a test Excel file like:
df = pd.DataFrame({'name': list('abcdefg'),
                   'age': [10, 20, 5, 23, 58, 4, 6]})
print(df)
  name  age
0    a   10
1    b   20
2    c    5
3    d   23
4    e   58
5    f    4
6    g    6
I use Pandas and matplotlib to read and plot it:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import os
excel_file = 'test.xlsx'
df = pd.read_excel(excel_file, sheet_name=0)
df.plot(kind="bar")
plt.show()
The resulting plot uses the index number as the item name. How can I change it to the name stored in the name column?
You can specify columns for x and y values in plot.bar:
df.plot(x='name', y='age', kind="bar")
Or create a Series first with DataFrame.set_index and select the age column:
df.set_index('name')['age'].plot(kind="bar")
#if multiple columns
#df.set_index('name').plot(kind="bar")
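For completeness, a minimal end-to-end sketch combining this with the original read_excel call (assuming test.xlsx contains the name and age columns shown above):
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_excel('test.xlsx', sheet_name=0)
# plot names on the x-axis instead of the default integer index
df.plot(x='name', y='age', kind='bar')
plt.show()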

Python Spark: How to join 2 datasets containing >2 elements for each tuple

I'm trying to join data from these two datasets, based on the common "stock" key
stock  sector
GOOG   Tech

stock  date  volume
GOOG   2015  5759725
The join method should join these together; however, the resulting RDD I got is of the form:
GOOG, (Tech, 2015)
I'm trying to obtain:
(Tech, 2015) 5759725
Additionally, how do I go about reducing the results by the keys (e.g. (Tech, 2015)) in order to obtain a numerical summation for each sector and year?
from pyspark.sql.functions import struct, col, sum

# sample data
df1 = sc.parallelize([['GOOG', 'Tech'],
                      ['AAPL', 'Tech'],
                      ['XOM', 'Oil']]).toDF(["stock", "sector"])
df2 = sc.parallelize([['GOOG', '2015', '5759725'],
                      ['AAPL', '2015', '123'],
                      ['XOM', '2015', '234'],
                      ['XOM', '2016', '789']]).toDF(["stock", "date", "volume"])

# final output
df = df1.join(df2, ['stock'], 'inner') \
    .withColumn('sector_year', struct(col('sector'), col('date'))) \
    .drop('stock', 'sector', 'date')
df.show()

# numerical summation for each sector and year
df.groupBy('sector_year').agg(sum('volume')).show()
Output is:
+-------+-----------+
| volume|sector_year|
+-------+-----------+
| 123|[Tech,2015]|
| 234| [Oil,2015]|
| 789| [Oil,2016]|
|5759725|[Tech,2015]|
+-------+-----------+
+-----------+-----------+
|sector_year|sum(volume)|
+-----------+-----------+
|[Tech,2015]| 5759848.0|
| [Oil,2015]| 234.0|
| [Oil,2016]| 789.0|
+-----------+-----------+
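Since the question is phrased in terms of RDDs, here is a hedged RDD-only sketch of the same join plus summation (assuming sc is an active SparkContext and the volumes are numeric):
sectors = sc.parallelize([("GOOG", "Tech"), ("AAPL", "Tech"), ("XOM", "Oil")])
volumes = sc.parallelize([("GOOG", ("2015", 5759725)),
                          ("AAPL", ("2015", 123)),
                          ("XOM", ("2015", 234)),
                          ("XOM", ("2016", 789))])
# join on the stock key -> (stock, (sector, (date, volume)))
joined = sectors.join(volumes)
# re-key by (sector, date) and sum the volumes
totals = (joined
          .map(lambda kv: ((kv[1][0], kv[1][1][0]), kv[1][1][1]))
          .reduceByKey(lambda a, b: a + b))
totals.collect()  # e.g. [(('Tech', '2015'), 5759848), (('Oil', '2015'), 234), (('Oil', '2016'), 789)]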

pyspark: rolling average using timeseries data

I have a dataset consisting of a timestamp column and a dollars column. I would like to find the average number of dollars per week ending at the timestamp of each row. I was initially looking at the pyspark.sql.functions.window function, but that bins the data by week.
Here's an example:
%pyspark
import datetime
from pyspark.sql import functions as F

df1 = sc.parallelize([(17, "2017-03-11T15:27:18+00:00"),
                      (13, "2017-03-11T12:27:18+00:00"),
                      (21, "2017-03-17T11:27:18+00:00")]).toDF(["dollars", "datestring"])
df2 = df1.withColumn('timestampGMT', df1.datestring.cast('timestamp'))

w = df2.groupBy(F.window("timestampGMT", "7 days")).agg(F.avg("dollars").alias('avg'))
w.select(w.window.start.cast("string").alias("start"),
         w.window.end.cast("string").alias("end"),
         "avg").collect()
This results in two records:
| start                 | end                   | avg  |
|-----------------------|-----------------------|------|
| '2017-03-16 00:00:00' | '2017-03-23 00:00:00' | 21.0 |
| '2017-03-09 00:00:00' | '2017-03-16 00:00:00' | 15.0 |
The window function binned the time series data rather than performing a rolling average.
Is there a way to perform a rolling average where I'll get back a weekly average for each row with a time period ending at the timestampGMT of the row?
EDIT:
Zhang's answer below is close to what I want, but not exactly what I'd like to see.
Here's a better example to show what I'm trying to get at:
%pyspark
from pyspark.sql import functions as F
from pyspark.sql.window import Window

df = spark.createDataFrame([(17, "2017-03-10T15:27:18+00:00"),
                            (13, "2017-03-15T12:27:18+00:00"),
                            (25, "2017-03-18T11:27:18+00:00")],
                           ["dollars", "timestampGMT"])
df = df.withColumn('timestampGMT', df.timestampGMT.cast('timestamp'))
df = df.withColumn('rolling_average',
                   F.avg("dollars").over(Window.partitionBy(F.window("timestampGMT", "7 days"))))
This results in the following dataframe:
dollars timestampGMT rolling_average
25 2017-03-18 11:27:18.0 25
17 2017-03-10 15:27:18.0 15
13 2017-03-15 12:27:18.0 15
I'd like the average to be over the week preceding the date in the timestampGMT column, which would result in this:
dollars timestampGMT rolling_average
17 2017-03-10 15:27:18.0 17
13 2017-03-15 12:27:18.0 15
25 2017-03-18 11:27:18.0 19
In the above results, the rolling_average for 2017-03-10 is 17, since there are no preceding records. The rolling_average for 2017-03-15 is 15 because it is averaging the 13 from 2017-03-15 and the 17 from 2017-03-10, which falls within the preceding 7-day window. The rolling average for 2017-03-18 is 19 because it is averaging the 25 from 2017-03-18 and the 13 from 2017-03-15, which falls within the preceding 7-day window; it does not include the 17 from 2017-03-10 because that does not fall within the preceding 7-day window.
Is there a way to do this rather than the binning window where the weekly windows don't overlap?
I figured out the correct way to calculate a moving/rolling average using this Stack Overflow answer:
Spark Window Functions - rangeBetween dates
The basic idea is to convert your timestamp column to seconds, and then you can use the rangeBetween function in the pyspark.sql.Window class to include the correct rows in your window.
Here's the solved example:
%pyspark
from pyspark.sql import functions as F
from pyspark.sql.window import Window
#function to calculate number of seconds from number of days
days = lambda i: i * 86400
df = spark.createDataFrame([(17, "2017-03-10T15:27:18+00:00"),
(13, "2017-03-15T12:27:18+00:00"),
(25, "2017-03-18T11:27:18+00:00")],
["dollars", "timestampGMT"])
df = df.withColumn('timestampGMT', df.timestampGMT.cast('timestamp'))
#create window by casting timestamp to long (number of seconds)
w = (Window.orderBy(F.col("timestampGMT").cast('long')).rangeBetween(-days(7), 0))
df = df.withColumn('rolling_average', F.avg("dollars").over(w))
This results in the exact column of rolling averages that I was looking for:
dollars timestampGMT rolling_average
17 2017-03-10 15:27:18.0 17.0
13 2017-03-15 12:27:18.0 15.0
25 2017-03-18 11:27:18.0 19.0
I will add a variation which I personally found very useful. I hope someone else finds it useful as well:
If you want to group by a column and then calculate the moving average within the respective groups:
Example dataframe:
from pyspark.sql.window import Window
from pyspark.sql import functions as F

df = spark.createDataFrame([("tshilidzi", 17.00, "2018-03-10T15:27:18+00:00"),
                            ("tshilidzi", 13.00, "2018-03-11T12:27:18+00:00"),
                            ("tshilidzi", 25.00, "2018-03-12T11:27:18+00:00"),
                            ("thabo", 20.00, "2018-03-13T15:27:18+00:00"),
                            ("thabo", 56.00, "2018-03-14T12:27:18+00:00"),
                            ("thabo", 99.00, "2018-03-15T11:27:18+00:00"),
                            ("tshilidzi", 156.00, "2019-03-22T11:27:18+00:00"),
                            ("thabo", 122.00, "2018-03-31T11:27:18+00:00"),
                            ("tshilidzi", 7000.00, "2019-04-15T11:27:18+00:00"),
                            ("ash", 9999.00, "2018-04-16T11:27:18+00:00")],
                           ["name", "dollars", "timestampGMT"])
# we need timestampGMT as a timestamp so it can be cast to seconds for the window frame
df = df.withColumn('timestampGMT', df.timestampGMT.cast('timestamp'))
df.show(10000, False)
Output:
+---------+-------+---------------------+
|name |dollars|timestampGMT |
+---------+-------+---------------------+
|tshilidzi|17.0 |2018-03-10 17:27:18.0|
|tshilidzi|13.0 |2018-03-11 14:27:18.0|
|tshilidzi|25.0 |2018-03-12 13:27:18.0|
|thabo |20.0 |2018-03-13 17:27:18.0|
|thabo |56.0 |2018-03-14 14:27:18.0|
|thabo |99.0 |2018-03-15 13:27:18.0|
|tshilidzi|156.0 |2019-03-22 13:27:18.0|
|thabo |122.0 |2018-03-31 13:27:18.0|
|tshilidzi|7000.0 |2019-04-15 13:27:18.0|
|ash |9999.0 |2018-04-16 13:27:18.0|
+---------+-------+---------------------+
To calculate the moving average based on the name and still maintain all rows:
# create the window by casting the timestamp to long (number of seconds)
days = lambda i: i * 86400
w = (Window
     .partitionBy(F.col("name"))
     .orderBy(F.col("timestampGMT").cast('long'))
     .rangeBetween(-days(7), 0))
df2 = df.withColumn('rolling_average', F.avg("dollars").over(w))
df2.show(100, False)
Output:
+---------+-------+---------------------+------------------+
|name |dollars|timestampGMT |rolling_average |
+---------+-------+---------------------+------------------+
|ash |9999.0 |2018-04-16 13:27:18.0|9999.0 |
|tshilidzi|17.0 |2018-03-10 17:27:18.0|17.0 |
|tshilidzi|13.0 |2018-03-11 14:27:18.0|15.0 |
|tshilidzi|25.0 |2018-03-12 13:27:18.0|18.333333333333332|
|tshilidzi|156.0 |2019-03-22 13:27:18.0|156.0 |
|tshilidzi|7000.0 |2019-04-15 13:27:18.0|7000.0 |
|thabo |20.0 |2018-03-13 17:27:18.0|20.0 |
|thabo |56.0 |2018-03-14 14:27:18.0|38.0 |
|thabo |99.0 |2018-03-15 13:27:18.0|58.333333333333336|
|thabo |122.0 |2018-03-31 13:27:18.0|122.0 |
+---------+-------+---------------------+------------------+
It's worth noting that if you don't care about the exact dates, but only want the average over the last few rows, you can use the rowsBetween function instead:
w = Window.orderBy('timestampGMT').rowsBetween(-7, 0)
df = df.withColumn('rolling_average', F.avg('dollars').over(w))
Since you order by the dates, it will take the current row and the 7 preceding rows, and you save all the casting.
Do you mean this:
from pyspark.sql import functions as F
from pyspark.sql.window import Window

df = spark.createDataFrame([(17, "2017-03-11T15:27:18+00:00"),
                            (13, "2017-03-11T12:27:18+00:00"),
                            (21, "2017-03-17T11:27:18+00:00")],
                           ["dollars", "timestampGMT"])
df = df.withColumn('timestampGMT', df.timestampGMT.cast('timestamp'))
df = df.withColumn('rolling_average',
                   F.avg("dollars").over(Window.partitionBy(F.window("timestampGMT", "7 days"))))
Output:
+-------+-------------------+---------------+
|dollars|timestampGMT |rolling_average|
+-------+-------------------+---------------+
|21 |2017-03-17 19:27:18|21.0 |
|17 |2017-03-11 23:27:18|15.0 |
|13 |2017-03-11 20:27:18|15.0 |
+-------+-------------------+---------------+

How to stop months being ordered alphabetically in pandas pivot table

[Plot: heatmap with months ordered alphabetically]
How can I stop pandas converting my chronologically-ordered data in a CSV into alphabetical order (like in my current plot)? This is the code I am using:
import pandas as pd
import seaborn as sns

df = pd.read_csv("C:/Users/Paul/Desktop/calendar.csv")
df2 = df.pivot(index="Month", columns="Year", values="hPM2.5")
ax = sns.heatmap(df2, annot=True, fmt="d")
I think you can use an ordered categorical:
import pandas as pd
import seaborn as sns

df = pd.DataFrame({'Month': ['January', 'February', 'September'],
                   'Year': [2015, 2015, 2016],
                   'hPM2.5': [7, 8, 9]})
print(df)
       Month  Year  hPM2.5
0    January  2015       7
1   February  2015       8
2  September  2016       9

cats = ['January', 'February', 'March', 'April', 'May', 'June',
        'July', 'August', 'September', 'October', 'November', 'December']
df['Month'] = df['Month'].astype(pd.CategoricalDtype(categories=cats, ordered=True))

df2 = df.pivot(index="Month", columns="Year", values="hPM2.5")
sns.heatmap(df2, annot=True)
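A hedged alternative sketch, if you would rather not change the Month dtype: pivot on the plain strings and then reindex the rows with the month list (months with no data will appear as empty rows in the heatmap):
df2 = df.pivot(index="Month", columns="Year", values="hPM2.5").reindex(cats)
sns.heatmap(df2, annot=True)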
