Pandas Version: 1.5.3
Python Version: 3.9.13
I'm trying to resample a pandas dataframe of some time series data which is divided by id. I've seen a million examples of this online, and frankly, have used this technique myself many times. However, for some reason, I have a dataframe that is taking an extremely long time to resample my data, despite being a rather reasonable amount of rows (~250k).
The structure is very simple:
item_id  date        value
1        2023-01-01  1
1        2023-01-03  3
1        2023-01-05  5
2        2023-01-01  1
2        2023-01-03  3
2        2023-01-05  5
I've oversimplified, but this is the core idea. What I want to do is resample this, grouped by item_id, with a frequency of 'per day'. The resulting table should look like this after resampling...
item_id  date        value
1        2023-01-01  1
1        2023-01-02  NaN
1        2023-01-03  3
1        2023-01-04  NaN
1        2023-01-05  5
2        2023-01-01  1
2        2023-01-02  NaN
2        2023-01-03  3
2        2023-01-04  NaN
2        2023-01-05  5
Keep in mind, I'm looking for the resampling min/max to be within each item_id group, so for the sake of this question, I'm not looking for the stack/unstack method (however, that method does execute almost instantly on this dataset).
Assuming df is my dataframe before resampling, this is the Python code I would use...
# First clean types / set index
df = df.astype({
    'item_id': 'str',
    'value': 'int'
})
df['date'] = pd.to_datetime(df['date'], format='%Y-%m-%d')
df = df.set_index('date')

# Resample
df = df.groupby('item_id').resample('D').value.mean()
If I run this code on ~250k rows, it takes approximately 12min to execute. That's so long I MUST assume that I'm doing something wrong here... but for the life of me I cannot see it. Any suggestions?
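For what it's worth, one thing that may be worth timing is avoiding DataFrameGroupBy.resample altogether and doing the daily fill manually per group. This is only a sketch using the same column names as above; the speedup is an assumption to verify on your data, not a guarantee:

import pandas as pd

# Sketch: manual daily fill per item, avoiding groupby().resample().
# Assumes df is indexed by 'date' with columns 'item_id' and 'value',
# exactly as produced by the cleaning snippet above.
def to_daily(s):
    # Collapse any duplicate days with mean(), matching resample('D').mean()
    s = s.groupby(s.index.floor('D')).mean()
    # Reindex to a full daily range within this item's min/max dates
    full = pd.date_range(s.index.min(), s.index.max(), freq='D')
    return s.reindex(full)

result = df.groupby('item_id')['value'].apply(to_daily)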
I have the time series data given below:
date        product  price  amount
11/01/2019  A        10     20
11/02/2019  A        10     20
11/03/2019  A        25     15
11/04/2019  C        40     50
11/05/2019  C        50     60
The data is actually high-dimensional; I have included a simplified version with just two columns, price and amount. I am trying to transform it into relative changes based on the time index, as illustrated below:
date        product  price  amount
11/01/2019  A        NaN    NaN
11/02/2019  A        0      0
11/03/2019  A        15     -5
11/04/2019  C        NaN    NaN
11/05/2019  C        10     10
I am trying to get the relative change of each product over time. If a previous date does not exist for a given product, the value should be NaN.
Is there a function to do this?
Group by product and use .diff()
df[["price", "amount"]] = df.groupby("product")[["price", "amount"]].diff()
Output:
        date product  price  amount
0 2019-11-01       A    NaN     NaN
1 2019-11-02       A    0.0     0.0
2 2019-11-03       A   15.0    -5.0
3 2019-11-04       C    NaN     NaN
4 2019-11-05       C   10.0    10.0
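As a side note, if "relative change" ever means percentage change rather than absolute difference, the same grouped pattern works with pct_change (a sketch reusing the column names above):

# Sketch: per-product percentage change instead of absolute difference
df[["price", "amount"]] = df.groupby("product")[["price", "amount"]].pct_change()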
I'm trying to gather all dates between 06-01-2020 and 06-30-2020 based on the forecast date, which can be 06-08-2020, 06-20-2020, or 06-24-2020. The problem is that I'm only grabbing the dates associated with the forecast date 06-24-2020. I need the most recent row for every date, so if, say, 06-03-2020 occurs with the forecast date 06-08-2020 but not with 06-20-2020, I still need the row associated with that earlier forecast date. Here's the code I am currently using:
df = df[df['Forecast Date'].isin([max(df['Forecast Date'])])]
It's producing this:
            Date Media Granularity Forecast Date
5668  2020-06-25               NaN    2020-06-24
5669  2020-06-26               NaN    2020-06-24
5670  2020-06-27               NaN    2020-06-24
5671  2020-06-28               NaN    2020-06-24
5672  2020-06-29               NaN    2020-06-24
5673  2020-06-30               NaN    2020-06-24
With a length of 6 (len(df[df['Forecast Date'].isin([max(df['Forecast Date'])])])). It needs to have a length of 30, one row for each unique date. It is only grabbing the rows where the Forecast Date equals the overall max, 06-24-2020.
I'm thinking it's something along the lines of df.sort_values(df[['Date', 'Forecast Date']]).drop_duplicates(df['Date'], keep='last'), but that gives me a KeyError.
It was easy, but not what I expected:
df = df.sort_values(by=['Date', 'Forecast Date']).drop_duplicates(subset=['Date'], keep='last')
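For reference, here is a sketch of an equivalent approach (assuming 'Forecast Date' is a proper datetime column) that picks the row with the latest forecast for each date directly:

# Sketch: for each Date, keep the row whose Forecast Date is latest
latest = df.groupby('Date')['Forecast Date'].idxmax()
df = df.loc[latest].sort_values('Date')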
I have a pandas dataframe with some data values by hour (the hour is also the index of this lookup dataframe). The lookup dataframe looks like this:
In [1]: print(df_lookup)
Out[1]:
0     1.109248
1     1.102435
2     1.085014
3     1.073487
4     1.079385
5     1.088759
6     1.044708
7     0.902482
8     0.852348
9     0.995912
10    1.031643
11    1.023458
12    1.006961
...
23    0.889541
I want to multiply values from this lookup dataframe with a column of another dataframe, which has a datetime index, to create a new column.
The dataframe looks like this:
In [2]: print(df)
Out[2]:
Date_Label           ID  data-1  data-2  data-3
2015-08-09 00:00:00   1  2513.0    2502     NaN
2015-08-09 00:00:00   1  2113.0    2102     NaN
2015-08-09 01:00:00   2  2006.0    1988     NaN
2015-08-09 02:00:00   3  2016.0    2003     NaN
...
2018-07-19 23:00:00  33  3216.0     333     NaN
I want to calculate the data-3 column from the data-2 column, where the weight given to data-2 depends on the corresponding hour's value in df_lookup. I get the desired values by looping over the index as follows, but that is too slow:
for idx in df.index:
    df.loc[idx, 'data-3'] = df.loc[idx, 'data-2'] * df_lookup.at[idx.hour]
Is there a faster way someone could suggest?
Using .loc: index df_lookup by the hour of df's DatetimeIndex; .values drops the lookup's index so the multiplication is positional rather than label-aligned.
df['data-2']*df_lookup.loc[df.index.hour].values
Out[275]:
Date_Label
2015-08-09 00:00:00    2775.338496
2015-08-09 00:00:00    2331.639296
2015-08-09 01:00:00    2191.640780
2015-08-09 02:00:00    2173.283042
Name: data-2, dtype: float64
# To assign it back:
df['data-3'] = df['data-2'] * df_lookup.loc[df.index.hour].values
I'd probably try doing a join.
# Fix column name
df_lookup.columns = ['multiplier']
# Get hour index
df['hour'] = df.index.hour
# Join
df = df.join(df_lookup, how='left', on=['hour'])
df['data-3'] = df['data-2'] * df['multiplier']
df = df.drop(['multiplier', 'hour'], axis=1)
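A third option that may be worth trying (a sketch that reuses the 'multiplier' column name introduced above) maps each timestamp's hour to its multiplier directly, avoiding both the temporary column and the join:

# Sketch: map each row's hour straight to its multiplier
# (assumes df_lookup has the single column 'multiplier', as above)
hour_mult = df.index.hour.map(df_lookup['multiplier'])
df['data-3'] = df['data-2'] * hour_mult.values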
I have a pandas dataframe with the following structure:
ID  date        m_1   m_2
1   2016-01-03  10    3.4
    2016-02-07  11    3.3
    2016-02-07  10.4  2.8
2   2016-01-01  10.9  2.5
    2016-02-04  12    2.3
    2016-02-04  11    2.7
    2016-02-04  12.1  2.1
ID and date together form a MultiIndex. The data represent measurements made by some sensors (two sensors in the example). The sensors sometimes record several measurements per day (as shown in the example).
My questions are:
How can I resample this so I have one row per day per sensor, but with one column for the mean, another for the max, another for the min, etc.?
How can I "align" (maybe this is not the correct word) the two time series, so both begin and end at the same time (from 2016-01-01 to 2016-02-07), adding the missing days with NAs?
You can use groupby with DataFrameGroupBy.resample, aggregate with a dict of functions first, and then reindex by MultiIndex.from_product:
df = df.reset_index(level=0).groupby('ID').resample('D').agg({'m_1':'mean', 'm_2':'max'})
df = df.reindex(pd.MultiIndex.from_product(df.index.levels, names = df.index.names))
#alternative for adding missing start and end datetimes
#df = df.unstack().stack(dropna=False)
print (df.head())
               m_2   m_1
ID date
1  2016-01-01  NaN   NaN
   2016-01-02  NaN   NaN
   2016-01-03  3.4  10.0
   2016-01-04  NaN   NaN
   2016-01-05  NaN   NaN
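To get several statistics per measurement, as asked in question 1 ("one column with the mean, another with the max, another with min, etc"), the same pattern takes a list of functions per column. A sketch, starting again from the original frame, with the MultiIndex columns flattened:

# Sketch: mean, max and min for every measurement column
stats = (df.reset_index(level=0)
           .groupby('ID')
           .resample('D')
           .agg({'m_1': ['mean', 'max', 'min'], 'm_2': ['mean', 'max', 'min']}))
# Flatten the resulting MultiIndex columns, e.g. m_1_mean, m_1_max, ...
stats.columns = ['_'.join(col) for col in stats.columns]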
For a PeriodIndex in the second level, use set_levels with to_period:
df.index = df.index.set_levels(df.index.get_level_values('date').to_period('d'), level=1)
print (df.index.get_level_values('date'))
PeriodIndex(['2016-01-01', '2016-01-02', '2016-01-03', '2016-01-04',
             '2016-01-05', '2016-01-06', '2016-01-07', '2016-01-08',
             '2016-01-09', '2016-01-10', '2016-01-11', '2016-01-12',
             '2016-01-13', '2016-01-14', '2016-01-15', '2016-01-16',
             '2016-01-17', '2016-01-18', '2016-01-19', '2016-01-20',
             '2016-01-21', '2016-01-22', '2016-01-23', '2016-01-24',
             '2016-01-25', '2016-01-26', '2016-01-27', '2016-01-28',
             '2016-01-29', '2016-01-30', '2016-01-31', '2016-02-01',
             '2016-02-02', '2016-02-03', '2016-02-04', '2016-02-05',
             '2016-02-06', '2016-02-07', '2016-01-01', '2016-01-02',
             '2016-01-03', '2016-01-04', '2016-01-05', '2016-01-06',
             '2016-01-07', '2016-01-08', '2016-01-09', '2016-01-10',
             '2016-01-11', '2016-01-12', '2016-01-13', '2016-01-14',
             '2016-01-15', '2016-01-16', '2016-01-17', '2016-01-18',
             '2016-01-19', '2016-01-20', '2016-01-21', '2016-01-22',
             '2016-01-23', '2016-01-24', '2016-01-25', '2016-01-26',
             '2016-01-27', '2016-01-28', '2016-01-29', '2016-01-30',
             '2016-01-31', '2016-02-01', '2016-02-02', '2016-02-03',
             '2016-02-04', '2016-02-05', '2016-02-06', '2016-02-07'],
            dtype='period[D]', name='date', freq='D')