Python: Merge Pandas Data Frame with Pivot Table - python-3.x

I have the two following dataframes that I want to merge.
df1:
id time station
0 a 22.08.2017 12:00:00 A1
1 b 22.08.2017 12:00:00 A3
2 a 22.08.2017 13:00:00 A2
...
pivot:
station              A1  A2  A3
time
22.08.2017 12:00:00  10  12  11
22.08.2017 13:00:00   9   7   3
22.08.2017 14:00:00   2   3   4
22.08.2017 15:00:00   3   2   7
...
it should look like:
merge:
id time station value
0 a 22.08.2017 12:00:00 A1 10
1 b 22.08.2017 12:00:00 A3 11
2 a 22.08.2017 13:00:00 A2 7
...
Now I want to add a column to the data frame with the corresponding value from the pivot table. I could not figure out how to include the column labels in the merge.
I constructed something like this, but it does not work:
merge = pd.merge(df1, pivot, how="left", left_on=["time", "station"], right_on=["station", pivot.columns])
Any help?
EDIT:
As advised, instead of the pivot table I tried to use the following data:
df2:
time station value
22.08.2017 12:00:00 A1 10
22.08.2017 12:00:00 A2 12
22.08.2017 12:00:00 A3 11
...
22.08.2017 13:00:00 A1 9
22.08.2017 13:00:00 A2 7
22.08.2017 13:00:00 A3 3
The table contains about 1,300 different stations for every timestamp. All in all I have more than 115,000,000 rows. My df1 has 5,000,000 rows.
Now I tried to merge df1.head(100) and df2, but in the result all values are NaN. For the merge I used this:
merge = pd.merge(df1.head(100), df2, how="left", on=["time", "station"])
Another problem is that this merge already takes a few minutes, so I expect processing the whole df1 to take several days.

I guess you built the dataframe pivot using either pivot or pivot_table in pandas. If you can perform the merge on the dataframe you had before pivoting, it should work just fine.
Otherwise you will have to reverse the pivot using melt before merging:
melt = pd.concat([pivot[['time']], pivot[['A1']].melt()], axis=1)
melt = pd.concat([melt, pd.concat([pivot[['time']], pivot[['A2']].melt()], axis=1)])
melt = pd.concat([melt, pd.concat([pivot[['time']], pivot[['A3']].melt()], axis=1)])
melt.columns = ['time', 'station', 'value']
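With roughly 1,300 station columns (as in your edit), building the long table column by column will not scale. pandas' own melt can un-pivot every station at once; this is a minimal sketch assuming time is a regular column of pivot (call pivot.reset_index() first if it is the index):
# un-pivot all station columns at once into (time, station, value) rows
melt = pivot.melt(id_vars='time', var_name='station', value_name='value')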
Then just perform the merge as you expected:
df1.merge(melt, on=['time', 'station'])
id time station value
0 a time1 A1 10
1 b time1 A3 11
2 a time2 A2 7
EDIT:
If your dataframes are as big as in your edit, you indeed have to perform the merge on chunks of them, and it helps to chunk both dataframes.
First, sort df1 so that each chunk only covers a narrow range of time values:
df1.sort_values('time',inplace = True)
Then take a chunk of df1, restrict df2 to the time range covered by that chunk (so you are sure it contains every row you might need), and merge the two chunks:
chunk1 = df1.head(100)
chunk2 = df2.loc[df2.time.between(chunk1.time.min(),chunk1.time.max())]
merge = chunk1.merge(chunk2,on = ['time','station'],how = 'left')
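To process all of df1 this way, a minimal sketch of the full loop could look like the following (assuming df1 is already sorted by time; the chunk size of 100,000 is only an illustrative value):
chunk_size = 100_000
pieces = []
for start in range(0, len(df1), chunk_size):
    chunk1 = df1.iloc[start:start + chunk_size]
    # keep only the part of df2 that can possibly match this chunk
    chunk2 = df2.loc[df2.time.between(chunk1.time.min(), chunk1.time.max())]
    pieces.append(chunk1.merge(chunk2, on=['time', 'station'], how='left'))
merge = pd.concat(pieces, ignore_index=True)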

Related

How to reformat time series to fill in missing entries with NaNs?

I have a problem that involves converting time series from one
representation to another. Each item in the time series has
attributes "time", "id", and "value" (think of it as a measurement
at "time" for sensor "id"). I'm storing all the items in a
Pandas dataframe with columns named by the attributes.
The set of "time"s is a small set of integers (say, 32),
but some of the "id"s are missing "time"s/"value"s. What I want to
construct is an output dataframe with the form:
id time0 time1 ... timeN
val0 val1 ... valN
where the missing "value"s are represented by NaNs.
For example, suppose the input looks like the following:
time id value
0 0 13
2 0 15
3 0 20
2 1 10
3 1 12
Then, assuming the set of possible times is 0, 1, 2, and 3, the desired output is:
id time0 time1 time2 time3
0 13 NaN 15 20
1 NaN NaN 10 12
I'm looking for a Pythonic way to do this since there are several
million rows in the input and around 1/4 million groups.
You can transform your table with a pivot. If you need to handle duplicate values for index/column pairs, you can use the more general pivot_table.
For your example, the simple pivot is sufficient:
>>> df = df.pivot(index="id", columns="time", values="value")
time 0 2 3
id
0 13.0 15.0 20.0
1 NaN 10.0 12.0
To get the exact result from your question, you could reindex the columns to fill in the empty values, and rename the column index like this:
# add missing time columns, fill with NaNs
df = df.reindex(range(df.columns.max() + 1), axis=1)
# name them "time#"
df.columns = "time" + df.columns.astype(str)
# remove the column index name "time"
df = df.rename_axis(None, axis=1)
Final df:
time0 time1 time2 time3
id
0 13.0 NaN 15.0 20.0
1 NaN NaN 10.0 12.0
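Putting those steps together on the sample input from the question, a self-contained sketch (the input dataframe is reconstructed only for illustration):
import pandas as pd

df = pd.DataFrame({'time': [0, 2, 3, 2, 3],
                   'id': [0, 0, 0, 1, 1],
                   'value': [13, 15, 20, 10, 12]})

out = df.pivot(index='id', columns='time', values='value')
out = out.reindex(range(out.columns.max() + 1), axis=1)   # add missing time columns
out.columns = 'time' + out.columns.astype(str)            # name them "time#"
out = out.rename_axis(None, axis=1)                       # drop the column index name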

Pandas : Finding correct time window

I have a pandas dataframe which gets updated every hour with the latest hourly data. I have to filter out IDs based upon a threshold, i.e. PR_Rate > 50 and CNT_12571 < 30, for 3 consecutive hours within a lookback period of 5 hours. I was using the statements below to accomplish this:
df_thld = df[(df['Date'] > df['Date'].max() - pd.Timedelta(hours=5)) & (df.PR_Rate > 50) & (df.CNT_12571 < 30)]
df_thld.loc[:, 'HR_CNT'] = df_thld.groupby('ID')['Date'].nunique().to_frame('HR_CNT').reset_index()
df_thld[df_thld['HR_CNT'] > 3]
The problem with this approach is that, since the lookback period is 5 hours, HR_CNT can count non-consecutive hours that breach the criteria.
My dataset is as below:
DataFrame
Date IDs CT_12571 PR_Rate
16/06/2021 10:00 A1 15 50.487
16/06/2021 11:00 A1 31 40.806
16/06/2021 12:00 A1 25 52.302
16/06/2021 13:00 A1 13 61.45
16/06/2021 14:00 A1 7 73.805
In the above DataFrame, the threshold was not breached at 11:00, yet my approach counts 10:00, 12:00 and 13:00 as the three breaching hours, instead of the consecutive hours 12:00, 13:00 and 14:00 as required. Each ID may or may not breach this criteria within a single day. Any idea how I can fix this issue?
Please excuse me if I have misinterpreted your problem. As I understand the issue, you have a dataframe which is updated hourly; an example of this dataframe is illustrated below as df. From this dataframe, you want to keep only those rows which satisfy the following two conditions:
PR_Rate > 50 and CNT_12571 < 30
If and only if the threshold is surpassed for three consecutive hours
Given these assumptions, I would proceed as follows:
df:
Date IDs CT_1257 PR_Rate
0 2021-06-16 10:00:00 A1 15 50.487
1 2021-06-16 12:00:00 A1 31 40.806
2 2021-06-16 14:00:00 A1 25 52.302
3 2021-06-16 15:00:00 A1 13 61.450
4 2021-06-16 16:00:00 A1 7 73.805
Note that in this dataframe, the only time frame which satisfies the above conditions is the set of entries for 14:00, 15:00 and 16:00.
def filterFrame(df, dur, pr_threshold, ct_threshold):
    # keep only the rows that breach both thresholds
    ff = df[(df['CT_1257'] < ct_threshold) & (df['PR_Rate'] > pr_threshold)].reset_index()
    # rolling count of breaching rows within the trailing `dur`-hour window
    ml = list(ff.rolling(f'{dur}h', on='Date').count()['IDs'])
    r = len(ml) - 1
    rows = []
    # walk backwards, collecting the rows of every window holding at least `dur` breaches
    while r >= 0:
        if int(ml[r]) < dur:
            r -= 1
        else:
            k = int(ml[r])
            for i in range(k):
                rows.append(r - i)
            r -= k
    rows = rows[::-1]
    return ff.filter(items=rows, axis=0).reset_index()
Running filterFrame(df, 3, 50, 30) yields:
level_0 index Date IDs CT_1257 PR_Rate
0 1 2 2021-06-16 14:00:00 A1 25 52.302
1 2 3 2021-06-16 15:00:00 A1 13 61.450
2 3 4 2021-06-16 16:00:00 A1 7 73.805
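For reference, an alternative sketch (not from the answer above) flags breaching rows and keeps only runs of at least three consecutive hourly breaches per ID; it assumes Date is already a datetime column and uses the same thresholds (PR_Rate > 50, CT_1257 < 30):
df = df.sort_values(['IDs', 'Date'])
breach = (df['CT_1257'] < 30) & (df['PR_Rate'] > 50)
# a new run starts when the breach flag flips, the ID changes,
# or the gap to the previous row is not exactly one hour
new_run = ((breach != breach.shift())
           | (df['IDs'] != df['IDs'].shift())
           | (df['Date'].diff() != pd.Timedelta(hours=1)))
run_len = breach.groupby(new_run.cumsum()).transform('size')
result = df[breach & (run_len >= 3)]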

How to calculate 6 months moving average from daily data using pyspark

I am trying to calculate the moving average of price over the last six months in PySpark.
Currently my table has a 6-month lagged date column.
id dates lagged_6month price
1 2017-06-02 2016-12-02 14.8
1 2017-08-09 2017-02-09 16.65
2 2017-08-16 2017-02-16 16
2 2018-05-14 2017-11-14 21.05
3 2017-09-01 2017-03-01 16.75
Desired Results
id dates avg6mprice
1 2017-06-02 20.6
1 2017-08-09 21.5
2 2017-08-16 16.25
2 2018-05-14 25.05
3 2017-09-01 17.75
Sample code
from pyspark.sql.functions import col
from pyspark.sql import functions as F
from pyspark.sql.window import Window

df = sqlContext.table("price_table")
w = Window.partitionBy([col('id')]).rangeBetween(col('dates'), col('lagged_6month'))
rangeBetween does not seem to accept columns as arguments in the window function.
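One common workaround (a sketch, not from the original post) is to order the window by the date cast to unix seconds, so the six-month lookback can be expressed as a fixed number of seconds, approximated here as 182 days:
from pyspark.sql import functions as F
from pyspark.sql.window import Window

def days(d):
    return d * 86400  # seconds in d days

# order by the date as unix seconds so rangeBetween can take numeric offsets
w = (Window.partitionBy('id')
           .orderBy(F.col('dates').cast('timestamp').cast('long'))
           .rangeBetween(-days(182), 0))
df = df.withColumn('avg6mprice', F.avg('price').over(w))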

manipulating pandas dataframe - conditional

I have a pandas dataframe that looks like this:
ID Date Event_Type
1 01/01/2019 A
1 01/01/2019 B
2 02/01/2019 A
3 02/01/2019 A
I want to be left with:
ID Date
1 01/01/2019
2 02/01/2019
3 02/01/2019
Where my condition is:
If the ID is the same AND the dates are within 2 days of each other then drop one of the rows.
If however the dates are more than 2 days apart then keep both rows.
How do I do this?
I believe you first need to convert the values to datetimes with to_datetime, then compute the per-group diff and keep rows where the diff is missing (the first row of each group, via isnull()) or greater than the timedelta threshold:
df['Date'] = pd.to_datetime(df['Date'], format='%d/%m/%Y')
s = df.groupby('ID')['Date'].diff()
df = df[(s.isnull() | (s > pd.Timedelta(2, 'd')))]
print (df)
ID Date Event_Type
0 1 2019-01-01 A
2 2 2019-02-01 A
3 3 2019-02-01 A
Check the solution with different data:
print (df)
ID Date Event_Type
0 1 01/01/2019 A
1 1 04/01/2019 B <-difference 3 days
2 2 02/01/2019 A
3 3 02/01/2019 A
df['Date'] = pd.to_datetime(df['Date'], format='%d/%m/%Y')
s = df.groupby('ID')['Date'].diff()
df = df[(s.isnull() | (s > pd.Timedelta(2, 'd')))]
print (df)
ID Date Event_Type
0 1 2019-01-01 A
1 1 2019-01-04 B
2 2 2019-01-02 A
3 3 2019-01-02 A

Create a pandas column based on a lookup value from another dataframe

I have a pandas dataframe that has some data values by hour (which is also the index of this lookup dataframe). The dataframe looks like this:
In [1] print (df_lookup)
Out[1] 0 1.109248
1 1.102435
2 1.085014
3 1.073487
4 1.079385
5 1.088759
6 1.044708
7 0.902482
8 0.852348
9 0.995912
10 1.031643
11 1.023458
12 1.006961
...
23 0.889541
I want to use the values from this lookup dataframe as multipliers to create a column in another dataframe, which has a datetime index.
The dataframe looks like this:
In [2] print (df)
Out[2]
Date_Label ID data-1 data-2 data-3
2015-08-09 00:00:00 1 2513.0 2502 NaN
2015-08-09 00:00:00 1 2113.0 2102 NaN
2015-08-09 01:00:00 2 2006.0 1988 NaN
2015-08-09 02:00:00 3 2016.0 2003 NaN
...
2018-07-19 23:00:00 33 3216.0 333 NaN
I want to calculate the data-3 column from the data-2 column, where the weight given to the data-2 column depends on the corresponding value in df_lookup. I get the desired values by looping over the index as follows, but that is too slow:
for idx in df.index:
    df.loc[idx, 'data-3'] = df.loc[idx, 'data-2'] * df_lookup.at[idx.hour]
Is there a faster way someone could suggest?
Using .loc
df['data-2']*df_lookup.loc[df.index.hour].values
Out[275]:
Date_Label
2015-08-09 00:00:00 2775.338496
2015-08-09 00:00:00 2331.639296
2015-08-09 01:00:00 2191.640780
2015-08-09 02:00:00 2173.283042
Name: data-2, dtype: float64
#df['data-3']=df['data-2']*df_lookup.loc[df.index.hour].values
I'd probably try doing a join.
# Fix column name
df_lookup.columns = ['multiplier']
# Get hour index
df['hour'] = df.index.hour
# Join
df = df.join(df_lookup, how='left', on=['hour'])
df['data-3'] = df['data-2'] * df['multiplier']
df = df.drop(['multiplier', 'hour'], axis=1)
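For completeness, a shorter variant (not from either answer; it assumes df_lookup has been given the single column name multiplier as above) maps each timestamp's hour straight onto the lookup values, avoiding the temporary hour column:
# look up the multiplier for each row's hour, then multiply element-wise
df['data-3'] = df['data-2'] * df.index.hour.map(df_lookup['multiplier']).to_numpy()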
