Count the number of dates falling between two dates - python-3.x

I have a data set like this:
ID date value_1 value_2 tech start_date last_date
ab 2017-06-01 3476.44 324 A 2015-05-04 2018-06-01
ab 2017-07-01 3556.65 332 A 2016-06-07 2018-07-01
ab 2017-08-01 3552.65 120 B 2016-01-08 2018-01-01
ab 2017-09-01 3201.66 987 C 2015-04-08 2018-04-01
bc 2017-10-01 3059.02 652 C 2015-06-09 2018-03-01
bc 2017-11-01 2853.37 345 C 2018-01-01 2018-08-01
bc 2017-12-01 2871.29 554 C 2015-10-01 2018-01-01
I want to keep the ID and the tech fixed and count how many of the dates fall between start_date and last_date.
Like:
ID count
ab 4
ab 4
ab 4
ab 4
bc 2
bc 2
bc 2
I built a function to do the count and then apply it with a group by:
def count_c(data):
    d = {}
    d['count'] = np.sum(
        [(x > data['start_date']) & (x < data['last_date']) for x in data['date']])
    return pd.Series(d, index=['count'])

df_model1 = flag.groupby('date').apply(count_c)

Quite simple, actually: instead of using a function, use the datetime library and subtract each date.
import pandas as pd
import numpy as np
from datetime import datetime
df = pd.DataFrame(columns=['ID', 'date', 'value_1', 'value_2', 'tech', 'start_date', 'last_date'])  # Your DataFrame

days_list = []
for i, row in df.iterrows():
    s_date = datetime.strptime(row['start_date'], '%m/%d/%y')
    e_date = datetime.strptime(row['last_date'], '%m/%d/%y')
    days = abs((e_date - s_date).days)
    days_list.append(days)
days_list = np.array(days_list)
df['Days'] = days_list
EDIT: The solution below now counts the number of rows whose date lies between the start_date and last_date columns.
def dates(df):
    """
    :param df: DataFrame with 'date', 'start_date' and 'last_date' columns (strings, mm/dd/yy)
    :return: number of rows whose date lies between start_date and last_date
    """
    n = 0
    for _, ro in df.iterrows():
        y = datetime.strptime(ro['start_date'], '%m/%d/%y')
        t = datetime.strptime(ro['last_date'], '%m/%d/%y')
        d = datetime.strptime(ro['date'], '%m/%d/%y')
        if y < d < t:
            n += 1
    return n

print(dates(df))
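For the original per-ID question, here is a more vectorized sketch. It reconstructs the sample frame from the question by hand and assumes the date columns are in the '%Y-%m-%d' format shown there:
import pandas as pd

# Reconstruction of the sample data from the question (column names assumed as shown there).
df = pd.DataFrame({
    'ID': ['ab', 'ab', 'ab', 'ab', 'bc', 'bc', 'bc'],
    'date': ['2017-06-01', '2017-07-01', '2017-08-01', '2017-09-01',
             '2017-10-01', '2017-11-01', '2017-12-01'],
    'start_date': ['2015-05-04', '2016-06-07', '2016-01-08', '2015-04-08',
                   '2015-06-09', '2018-01-01', '2015-10-01'],
    'last_date': ['2018-06-01', '2018-07-01', '2018-01-01', '2018-04-01',
                  '2018-03-01', '2018-08-01', '2018-01-01'],
})
for col in ['date', 'start_date', 'last_date']:
    df[col] = pd.to_datetime(df[col])

# Row-wise flag: does this row's date fall inside its own [start_date, last_date] window?
inside = df['date'].between(df['start_date'], df['last_date'])

# Broadcast the per-ID count of in-window rows back onto every row of that ID.
df['count'] = inside.groupby(df['ID']).transform('sum')
print(df[['ID', 'count']])   # ab -> 4, bc -> 2, matching the expected output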

Related

How to get the top values within each group?

I am new to Pandas and I have a dataset that looks something like this.
s_name Time p_name qty
A 12/01/2019 ABC 1
A 12/01/2019 ABC 1
A 12/01/2019 DEF 2
A 12/01/2019 DEF 2
A 12/01/2019 FGH 0
B 13/02/2019 ABC 3
B 13/02/2019 DEF 1
B 13/02/2019 DEF 1
B 13/03/2019 ABC 3
B 13/03/2019 FGH 0
I am trying to group by s_name and find the sum of the qty of each unique p_name in a month, but only display the p_names with the top 2 quantities. Below is an example of how I want the final output to look.
s_name Time p_name qty
A 01 DEF 4
A 01 ABC 2
B 02 ABC 3
B 02 DEF 2
B 03 ABC 2
B 03 FGH 0
Do you have any ideas? I have been stuck on this for quite a while, so any help is appreciated.
Create a month column using dt, group by s_name and month, then apply a function to each group: group its rows by p_name, sum qty, sort_values descending and keep only the first two rows with head:
df.Time = pd.to_datetime(df.Time, format='%d/%m/%Y')
df['month'] = df.Time.dt.month
df_f = df.groupby(['s_name', 'month']).apply(
    lambda g: g.groupby('p_name').qty.sum()
               .sort_values(ascending=False).head(2)
).reset_index()
df_f
# s_name month p_name qty
# 0 A 1 DEF 4
# 1 A 1 ABC 2
# 2 B 2 ABC 3
# 3 B 2 DEF 2
# 4 B 3 ABC 3
# 5 B 3 FGH 0
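If you also want the month zero-padded like in the desired output ('01', '02', ...), a small follow-up sketch:
# Zero-pad the integer month so it displays as '01', '02', ...
df_f['month'] = df_f['month'].astype(str).str.zfill(2)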
I am new to Pandas myself. I am going to attempt to answer your question.
See this code.
from io import StringIO

import pandas as pd

columns = "s_name Time p_name qty"

# Create dataframe from text.
df = pd.read_csv(
    StringIO(
        f"""{columns}
A 12/01/2019 ABC 1
A 12/01/2019 ABC 1
A 12/01/2019 DEF 2
A 12/01/2019 DEF 2
A 12/01/2019 FGH 0
B 13/02/2019 ABC 3
B 13/02/2019 DEF 1
B 13/02/2019 DEF 1
B 13/03/2019 ABC 3
B 13/03/2019 FGH 0"""
    ),
    sep=" ",
)

S_NAME, TIME, P_NAME, QTY = columns.split()
MONTH = "month"

# Convert the TIME col to datetime types.
df.Time = pd.to_datetime(df.Time, dayfirst=True)
# Create a month column with zfilled strings.
df[MONTH] = df.Time.apply(lambda x: str(x.month).zfill(2))

# Group
group = df.groupby(by=[S_NAME, P_NAME, MONTH])
gdf = (
    group.sum()
    .sort_index()
    .sort_values(by=[S_NAME, MONTH, QTY], ascending=False)
    .reset_index()
)
gdf.groupby([S_NAME, MONTH]).head(2).sort_values(by=[S_NAME, MONTH]).reset_index()
Is this the result you expected?
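For what it's worth, here is a more compact sketch of the same idea, reusing the column-name constants defined above:
# Sum qty per (s_name, month, p_name), then keep the two largest sums per (s_name, month).
sums = df.groupby([S_NAME, MONTH, P_NAME], as_index=False)[QTY].sum()
top2 = (sums.sort_values(QTY, ascending=False)
            .groupby([S_NAME, MONTH])
            .head(2)
            .sort_values([S_NAME, MONTH]))
print(top2)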

pandas time difference between rows based on conditions

So, I have a dataframe like this:
d = {'id': ['a', 'a', 'b', 'b'],
     'map': ['aa', 'ab', 'ba', 'bb'],
     'timestamp': ['2019-01-01 00:00:00+00:00',
                   '2019-01-01 06:00:00+00:00',
                   '2019-05-01 12:00:00+00:00',
                   '2019-06-01 18:00:00+00:00']}
df = pd.DataFrame(data=d)
id map timestamp
0 a aa 2019-01-01 00:00:00+00:00
1 a ab 2019-01-01 06:00:00+00:00
2 b ba 2019-05-01 12:00:00+00:00
3 b bb 2019-06-01 18:00:00+00:00
For each value in id, I'd like to calculate the time difference (i.e. the difference between the min and max timestamp) across its map values. E.g. for id = a (map aa to ab) the difference is 6 hours.
Appreciate any help.
Use:
df['timestamp'] = pd.to_datetime(df['timestamp'])
df1 = df.groupby('id')['timestamp'].agg(['max','min'])
s = df1['max'].sub(df1['min']).dt.total_seconds().div(3600)
print (s)
id
a 6.0
b 750.0
dtype: float64
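An equivalent sketch of the same per-id calculation, in case you prefer a single agg call with a lambda:
# Hours between the earliest and latest timestamp of each id.
hours = (df.groupby('id')['timestamp']
           .agg(lambda s: (s.max() - s.min()).total_seconds() / 3600))
print(hours)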

how to take only the maximum date value if there are two dates in a week in a dataframe

I have a dataframe called Data
Date Value Frequency
06/01/2020 256 A
07/01/2020 235 A
14/01/2020 85 Q
16/01/2020 625 Q
22/01/2020 125 Q
Here it is observed that 06/01/2020 and 07/01/2020 are in the same week (Monday and Tuesday).
Therefore I want to take the maximum date from each week.
My final dataframe should look like this:
Date Value Frequency
07/01/2020 235 A
16/01/2020 625 Q
22/01/2020 125 Q
I want the maximum date from each week, as shown in my final dataframe example.
I am new to Python and I haven't been able to find an answer for this. Please help.
First convert the column to datetimes with to_datetime, create week labels with Series.dt.strftime, use DataFrameGroupBy.idxmax to find the row with the maximum datetime per week, and finally select those rows with DataFrame.loc:
df['Date'] = pd.to_datetime(df['Date'], dayfirst=True)
print (df['Date'].dt.strftime('%Y-%U'))
0 2020-01
1 2020-01
2 2020-02
3 2020-02
4 2020-03
Name: Date, dtype: object
df = df.loc[df.groupby(df['Date'].dt.strftime('%Y-%U'))['Date'].idxmax()]
print (df)
Date Value Frequency
1 2020-01-07 235 A
3 2020-01-16 625 Q
4 2020-01-22 125 Q
If the format of the datetimes cannot be changed:
d = pd.to_datetime(df['Date'], dayfirst=True)
df = df.loc[d.groupby(d.dt.strftime('%Y-%U')).idxmax()]
print (df)
Date Value Frequency
1 07/01/2020 235 A
3 16/01/2020 625 Q
4 22/01/2020 125 Q
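A related sketch, in case you would rather group by ISO calendar weeks than by the Sunday-based strftime('%Y-%U') key. It parallels the second variant above (original string dates kept) and assumes pandas >= 1.1 for Series.dt.isocalendar(); note the two conventions can place boundary dates in different weeks:
d = pd.to_datetime(df['Date'], dayfirst=True)
iso = d.dt.isocalendar()   # DataFrame with year / week / day columns
df = df.loc[d.groupby([iso['year'], iso['week']]).idxmax()]
print (df)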

roll off profile stacking data frames

I have a dataframe that looks like:
import pandas as pd
import datetime as dt
df= pd.DataFrame({'date':['2017-12-31','2017-12-31'],'type':['Asset','Liab'],'Amount':[100,-100],'Maturity Date':['2019-01-02','2018-01-01']})
df
I am trying to build a roll-off profile by checking if the 'Maturity Date' is greater than a 'date' in the future. I am trying to achieve something like:
#First Month
df1=df[df['Maturity Date']>'2018-01-31']
df1['date']='2018-01-31'
#Second Month
df2=df[df['Maturity Date']>'2018-02-28']
df2['date']='2018-02-28'
#third Month
df3=df[df['Maturity Date']>'2018-03-31']
df3['date']='2018-03-31'
#first quarter
qf1=df[df['Maturity Date']>'2018-06-30']
qf1['date']='2018-06-30'
#concatenate
df=pd.concat([df,df1,df2,df3,qf1])
df
I was wondering if there is a way to allow an arbitrarily long list of dates without repeating code.
I think you need numpy.tile to repeat the indices and assign the new column, then filter by boolean indexing and sort with sort_values:
import numpy as np

d = '2017-12-31'
df['Maturity Date'] = pd.to_datetime(df['Maturity Date'])

# generate the first month ends and the next quarter end
c1 = pd.date_range(d, periods=4, freq='M')
c2 = pd.date_range(c1[-1], periods=2, freq='Q')
# join together
c = c1.union(c2[1:])

# repeat rows by indexing the repeated index
df1 = df.loc[np.tile(df.index, len(c))].copy()
# assign column by datetimes
df1['date'] = np.repeat(c, len(df))
# filter by boolean indexing
df1 = df1[df1['Maturity Date'] > df1['date']]
print (df1)
Amount Maturity Date date type
0 100 2019-01-02 2017-12-31 Asset
1 -100 2018-01-01 2017-12-31 Liab
0 100 2019-01-02 2018-01-31 Asset
0 100 2019-01-02 2018-02-28 Asset
0 100 2019-01-02 2018-03-31 Asset
0 100 2019-01-02 2018-06-30 Asset
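A small follow-up on the same idea: if your pandas is recent enough (>= 1.2 for how='cross'), the tile/repeat pair can be replaced with a cross join against the report dates. This is only a sketch reusing the c index built above:
# Cross join every row of df with every report date in c, then keep the rows
# whose Maturity Date is still in the future at that report date.
dates_df = pd.DataFrame({'date': c})
df1 = df.drop(columns='date').merge(dates_df, how='cross')
df1 = df1[df1['Maturity Date'] > df1['date']]
print (df1)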
You could use a nifty tool in the Pandas arsenal called
pd.merge_asof. It
works similarly to pd.merge, except that it matches on "nearest" keys rather
than equal keys. Furthermore, you can tell pd.merge_asof to look for nearest
keys in only the backward or forward direction.
To make things interesting (and help check that things are working properly), let's add another row to df:
df = pd.DataFrame({'date': ['2017-12-31', '2017-12-31'], 'type': ['Asset', 'Asset'],
                   'Amount': [100, 200], 'Maturity Date': ['2019-01-02', '2018-03-15']})
for col in ['date', 'Maturity Date']:
    df[col] = pd.to_datetime(df[col])
df = df.sort_values(by='Maturity Date')
print(df)
# Amount Maturity Date date type
# 1 200 2018-03-15 2017-12-31 Asset
# 0 100 2019-01-02 2017-12-31 Asset
Now define some new dates:
dates = (pd.date_range('2018-01-31', periods=3, freq='M')
           .union(pd.date_range('2018-01-1', periods=2, freq='Q')))
result = pd.DataFrame({'date': dates})
# date
# 0 2018-01-31
# 1 2018-02-28
# 2 2018-03-31
# 3 2018-06-30
Now we can merge rows, matching nearest dates from result with Maturity Dates from df:
result = pd.merge_asof(result, df.drop('date', axis=1),
                       left_on='date', right_on='Maturity Date', direction='forward')
In this case we want to "match" dates with Maturity Dates which are greater
so we use direction='forward'.
Putting it all together:
import pandas as pd

df = pd.DataFrame({'date': ['2017-12-31', '2017-12-31'], 'type': ['Asset', 'Asset'],
                   'Amount': [100, 200], 'Maturity Date': ['2019-01-02', '2018-03-15']})
for col in ['date', 'Maturity Date']:
    df[col] = pd.to_datetime(df[col])
df = df.sort_values(by='Maturity Date')

dates = (pd.date_range('2018-01-31', periods=3, freq='M')
           .union(pd.date_range('2018-01-1', periods=2, freq='Q')))
result = pd.DataFrame({'date': dates})
result = pd.merge_asof(result, df.drop('date', axis=1),
                       left_on='date', right_on='Maturity Date', direction='forward')
result = pd.concat([df, result], axis=0)
result = result.sort_values(by=['Maturity Date', 'date'])
print(result)
yields
Amount Maturity Date date type
1 200 2018-03-15 2017-12-31 Asset
0 200 2018-03-15 2018-01-31 Asset
1 200 2018-03-15 2018-02-28 Asset
0 100 2019-01-02 2017-12-31 Asset
2 100 2019-01-02 2018-03-31 Asset
3 100 2019-01-02 2018-06-30 Asset

Python correlation matrix 3d dataframe

I have in SQL Server a historical return table by date and asset Id like this:
[Date] [Asset] [1DRet]
jan asset1 0.52
jan asset2 0.12
jan asset3 0.07
feb asset1 0.41
feb asset2 0.33
feb asset3 0.21
...
So I need to calculate the correlation matrix for a given date range for all asset combinations: A1,A2; A1,A3; A2,A3.
I'm using pandas, and in my SQL SELECT WHERE clause I'm filtering the date range and ordering by date.
I'm trying to do it using pandas df.corr(), numpy.corrcoef and SciPy, but I'm not able to do it for my n-variable dataframe.
I have seen some examples, but they are always for a dataframe with one asset per column and one row per day.
This is the code block where I'm doing it:
qryRet = "Select * from IndexesValue where Date > '20100901' and Date < '20150901' order by Date"
result = conn.execute(qryRet)
df = pd.DataFrame(data=list(result),columns=result.keys())
df1d = df[['Date','Id_RiskFactor','1DReturn']]
corr = df1d.set_index(['Date','Id_RiskFactor']).unstack().corr()
corr.columns = corr.columns.droplevel()
corr.index = corr.columns.tolist()
corr.index.name = 'symbol_1'
corr.columns.name = 'symbol_2'
print(corr)
conn.close()
For it I'm receiving this message:
corr.columns = corr.columns.droplevel()
AttributeError: 'Index' object has no attribute 'droplevel'
print(df1d.head())
Date Id_RiskFactor 1DReturn
0 2010-09-02 149 0E-12
1 2010-09-02 150 -0.004242875148
2 2010-09-02 33 0.000590000011
3 2010-09-02 28 0.000099999997
4 2010-09-02 34 -0.000010000000
print(df.head())
Date Id_RiskFactor Value 1DReturn 5DReturn
0 2010-09-02 149 0.040096000000 0E-12 0E-12
1 2010-09-02 150 1.736700000000 -0.004242875148 -0.013014321215
2 2010-09-02 33 2.283000000000 0.000590000011 0.001260000048
3 2010-09-02 28 2.113000000000 0.000099999997 0.000469999999
4 2010-09-02 34 0.615000000000 -0.000010000000 0.000079999998
print(corr.columns)
Index([], dtype='object')
Create a sample DataFrame:
import pandas as pd
import numpy as np
df = pd.DataFrame({'daily_return': np.random.random(15),
                   'symbol': ['A'] * 5 + ['B'] * 5 + ['C'] * 5,
                   'date': np.tile(pd.date_range('1-1-2015', periods=5), 3)})
>>> df
daily_return date symbol
0 0.011467 2015-01-01 A
1 0.613518 2015-01-02 A
2 0.334343 2015-01-03 A
3 0.371809 2015-01-04 A
4 0.169016 2015-01-05 A
5 0.431729 2015-01-01 B
6 0.474905 2015-01-02 B
7 0.372366 2015-01-03 B
8 0.801619 2015-01-04 B
9 0.505487 2015-01-05 B
10 0.946504 2015-01-01 C
11 0.337204 2015-01-02 C
12 0.798704 2015-01-03 C
13 0.311597 2015-01-04 C
14 0.545215 2015-01-05 C
I'll assume you've already filtered your DataFrame for the relevant dates. You then want a pivot table where you have unique dates as your index and your symbols as separate columns, with daily returns as the values. Finally, you call corr() on the result.
corr = df.set_index(['date','symbol']).unstack().corr()
corr.columns = corr.columns.droplevel()
corr.index = corr.columns.tolist()
corr.index.name = 'symbol_1'
corr.columns.name = 'symbol_2'
>>> corr
symbol_2 A B C
symbol_1
A 1.000000 0.188065 -0.745115
B 0.188065 1.000000 -0.688808
C -0.745115 -0.688808 1.000000
You can select the subset of your DataFrame based on dates as follows:
start_date = pd.Timestamp('2015-1-4')
end_date = pd.Timestamp('2015-1-5')
>>> df.loc[df.date.between(start_date, end_date), :]
daily_return date symbol
3 0.371809 2015-01-04 A
4 0.169016 2015-01-05 A
8 0.801619 2015-01-04 B
9 0.505487 2015-01-05 B
13 0.311597 2015-01-04 C
14 0.545215 2015-01-05 C
If you want to flatten your correlation matrix:
corr.stack().reset_index()
symbol_1 symbol_2 0
0 A A 1.000000
1 A B 0.188065
2 A C -0.745115
3 B A 0.188065
4 B B 1.000000
5 B C -0.688808
6 C A -0.745115
7 C B -0.688808
8 C C 1.000000
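A small follow-up sketch if you want the flattened output with a named value column and without the self-correlations on the diagonal:
flat = corr.stack().reset_index(name='correlation')
flat = flat[flat['symbol_1'] != flat['symbol_2']]
print(flat)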
