pandas time difference between rows based on conditions - python-3.x

So, I have a dataframe like this:
d = {'id': ['a','a','b','b'],
     'map': ['aa','ab','ba','bb'],
     'timestamp': ['2019-01-01 00:00:00+00:00',
                   '2019-01-01 06:00:00+00:00',
                   '2019-05-01 12:00:00+00:00',
                   '2019-06-01 18:00:00+00:00']}
df = pd.DataFrame(data=d)
id map timestamp
0 a aa 2019-01-01 00:00:00+00:00
1 a ab 2019-01-01 06:00:00+00:00
2 b ba 2019-05-01 12:00:00+00:00
3 b bb 2019-06-01 18:00:00+00:00
For each value in id, I'd like to calculate the time difference (i.e. the difference between the min and max timestamp) across its map values. E.g. for id = a (map aa to ab) the difference is 6 hours.
Appreciate any help

Use:
# parse the timestamps, take max and min per id, and convert the difference to hours
df['timestamp'] = pd.to_datetime(df['timestamp'])
df1 = df.groupby('id')['timestamp'].agg(['max','min'])
s = df1['max'].sub(df1['min']).dt.total_seconds().div(3600)
print(s)
id
a 6.0
b 750.0
dtype: float64
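The same result can also be written as a single aggregation; a small, equivalent variant of the answer above (still assuming timestamp has been parsed with pd.to_datetime):
s = (df.groupby('id')['timestamp']
       .agg(lambda x: (x.max() - x.min()).total_seconds() / 3600))
print(s)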

Related

Drop groups (by multiple columns) if specific values do not exist in another column in Pandas

How can I drop the whole group (grouped by city and district) if the date value 2018/11/1 does not exist in it, given the following dataframe:
city district date value
0 a c 2018/9/1 12
1 a c 2018/10/1 4
2 a c 2018/11/1 5
3 b d 2018/9/1 3
4 b d 2018/10/1 7
The expected result will look like this:
city district date value
0 a c 2018/9/1 12
1 a c 2018/10/1 4
2 a c 2018/11/1 5
Thank you!
Create a helper column with DataFrame.assign, compare the dates, then test whether at least one value per group is True using GroupBy.transform('any'), and filter with boolean indexing:
mask = (df.assign(new=df['date'].eq('2018/11/1'))
          .groupby(['city','district'])['new'].transform('any'))
df = df[mask]
print(df)
city district date value
0 a c 2018/9/1 12
1 a c 2018/10/1 4
2 a c 2018/11/1 5
If you get an error because of missing values in mask, one possible idea is to replace the missing values in the columns used for grouping:
mask = (df.assign(new=df['date'].eq('2018/11/1'),
                  city=df['city'].fillna(-1),
                  district=df['district'].fillna(-1))
          .groupby(['city','district'])['new'].transform('any'))
df = df[mask]
print(df)
city district date value
0 a c 2018/9/1 12
1 a c 2018/10/1 4
2 a c 2018/11/1 5
Another idea is to add back any missing index values with reindex and also replace missing values with False:
mask = (df.assign(new=df['date'].eq('2018/11/1'))
          .groupby(['city','district'])['new'].transform('any'))
df = df[mask.reindex(df.index, fill_value=False).fillna(False)]
print(df)
city district date value
0 a c 2018/9/1 12
1 a c 2018/10/1 4
2 a c 2018/11/1 5
There's a special GroupBy.filter() method for this. Assuming date is already datetime:
filter_date = pd.Timestamp('2018-11-01').date()
df = df.groupby(['city', 'district']).filter(lambda x: (x['date'].dt.date == filter_date).any())
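For completeness, a self-contained sketch of the filter approach on the sample data above (parsing date with pd.to_datetime is an assumption here, since the question shows plain strings):
import pandas as pd

df = pd.DataFrame({'city': ['a', 'a', 'a', 'b', 'b'],
                   'district': ['c', 'c', 'c', 'd', 'd'],
                   'date': ['2018/9/1', '2018/10/1', '2018/11/1', '2018/9/1', '2018/10/1'],
                   'value': [12, 4, 5, 3, 7]})
df['date'] = pd.to_datetime(df['date'])

filter_date = pd.Timestamp('2018-11-01').date()
kept = df.groupby(['city', 'district']).filter(
    lambda g: (g['date'].dt.date == filter_date).any())
print(kept)  # only the (a, c) group survives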

How to convert column into row?

Assume I have two rows where the values are the same for most of the columns, but not all. I would like to group these two rows into one wherever the values are the same, and where the values differ, create an extra column for the second value and name it e.g. 'column1'.
Step 1: Assuming the columns that have the same value in both rows are 'a', 'b', 'c' and the columns with different values are 'd', 'e', 'f', I group by 'a', 'b', 'c' and then unstack 'd', 'e', 'f'.
Step 2: Then I drop the levels and rename the columns to 'a','b','c','d','d1','e','e1','f','f1'.
But in my actual case I have 500+ columns and a million rows, and I don't know how to extend this to 500+ columns, given these constraints:
1) I don't know which columns will have the same values
2) Nor which columns will have different values that need to be converted into new columns after grouping by the columns that share the same value
df.groupby(['a','b','c'])['d','e','f'].apply(lambda x: pd.DataFrame(x.values)).unstack().reset_index()
df.columns = df.columns.droplevel()
df.columns = ['a','b','c','d','d1','e','e1','f','f1']
To be clearer, the code below creates the sample dataframe and the expected output:
df = pd.DataFrame({'Cust_id': [100, 100, 101, 101, 102, 103, 104, 104],
                   'gender': ['M', 'M', 'F', 'F', 'M', 'F', 'F', 'F'],
                   'Date': ['01/01/2019', '02/01/2019', '01/01/2019', '01/01/2019',
                            '03/01/2019', '04/01/2019', '03/01/2019', '03/01/2019'],
                   'Product': ['a', 'a', 'b', 'c', 'd', 'd', 'e', 'e']})

expected_output = pd.DataFrame({'Cust_id': [100, 101, 102, 103, 104],
                                'gender': ['M', 'F', 'M', 'F', 'F'],
                                'Date': ['01/01/2019', '01/01/2019', '03/01/2019', '04/01/2019', '03/01/2019'],
                                'Date1': ['02/01/2019', 'NA', 'NA', 'NA', 'NA'],
                                'Product': ['a', 'b', 'd', 'd', 'e'],
                                'Product1': ['NA', 'c', 'NA', 'NA', 'NA']})
You may do the following to get expected_output from df:
s = df.groupby('Cust_id').cumcount().astype(str).replace('0', '')
df1 = df.pivot_table(index=['Cust_id', 'gender'], columns=s, values=['Date', 'Product'], aggfunc='first')
df1.columns = df1.columns.map(''.join)
Out[57]:
Date Date1 Product Product1
Cust_id gender
100 M 01/01/2019 02/01/2019 a a
101 F 01/01/2019 01/01/2019 b c
102 M 03/01/2019 NaN d NaN
103 F 04/01/2019 NaN d NaN
104 F 03/01/2019 03/01/2019 e e
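For orientation, the helper s from the first line marks repeated rows of the same customer with an incrementing suffix; for the sample df above it comes out as:
s = df.groupby('Cust_id').cumcount().astype(str).replace('0', '')
print(s.tolist())
# ['', '1', '', '1', '', '', '', '1']  -> '' for the first row per customer, '1' for the second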
Next, replace values that duplicate the value in the previous column with 'NA':
df_expected = df1.where(df1.ne(df1.shift(axis=1)), 'NA').reset_index()
Out[72]:
Cust_id gender Date Date1 Product Product1
0 100 M 01/01/2019 02/01/2019 a NA
1 101 F 01/01/2019 NA b c
2 102 M 03/01/2019 NA d NA
3 103 F 04/01/2019 NA d NA
4 104 F 03/01/2019 NA e NA
You can try this code - it could be a little cleaner but I think it does the job
df = pd.DataFrame({'a': [100, 100], 'b': ['tue', 'tue'], 'c': ['yes', 'yes'],
                   'd': ['ok', 'not ok'], 'e': ['ok', 'maybe'], 'f': [55, 66]})

df_transformed = pd.DataFrame()
for column in df.columns:
    col_vals = df.groupby(column)['b'].count().index.values
    for ix, col_val in enumerate(col_vals):
        temp_df = pd.DataFrame({column + str(ix): [col_val]})
        df_transformed = pd.concat([df_transformed, temp_df], axis=1)
Output for df_transformed
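(Running the snippet above on the sample df gives a single-row frame of suffixed columns, roughly:)
    a0   b0   c0      d0  d1     e0  e1  f0  f1
0  100  tue  yes  not ok  ok  maybe  ok  55  66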

Count the number of dates falling between two dates

I have a data set like this:
ID date value_1 value_2 tech start_date last_date
ab 2017-06-01 3476.44 324 A 2015-05-04 2018-06-01
ab 2017-07-01 3556.65 332 A 2016-06-07 2018-07-01
ab 2017-08-01 3552.65 120 B 2016-01-08 2018-01-01
ab 2017-09-01 3201.66 987 C 2015-04-08 2018-04-01
bc 2017-10-01 3059.02 652 C 2015-06-09 2018-03-01
bc 2017-11-01 2853.37 345 C 2018-01-01 2018-08-01
bc 2017-12-01 2871.29 554 C 2015-10-01 2018-01-01
I want to keep the ID and the tech fixed and count how many of the dates fall between start_date and last_date.
Like:
ID count
ab 4
ab 4
ab 4
ab 4
bc 2
bc 2
bc 2
I built a function to do the count and then I apply it with a group by:
def count_c(data):
    d = {}
    d['count'] = np.sum(
        [(x > data['start_date']) & (x < data['last_date']) for x in data['date']])
    return pd.Series(d, index=['count'])

df_model1 = flag.groupby('date').apply(count_c)
Quite simple actually: instead of using a function, use the datetime library and subtract each date.
import pandas as pd
import numpy as np
from datetime import datetime
df = pd.DataFrame(columns=['ID', 'date', 'value_1', 'value_2', 'tech', 'start_date', 'last_date']) # Your DataFrame
days_list = []
EDIT: The solution now counts the number of rows falling between the start_date and last_date columns.
for i, row in df.iterrows():
    s_date = datetime.strptime(row['start_date'], '%m/%d/%y')
    e_date = datetime.strptime(row['last_date'], '%m/%d/%y')
    days = abs((e_date - s_date).days)
    days_list.append(days)

days_list = np.array(days_list)
df['Days'] = days_list
def dates(df):
    """
    :param df: DataFrame with 'date', 'start_date' and 'last_date' columns (str, mm/dd/yy)
    :return: number of rows whose date falls between start_date and last_date
    """
    n = 0
    for _, ro in df.iterrows():
        y = datetime.strptime(ro['start_date'], '%m/%d/%y')
        t = datetime.strptime(ro['last_date'], '%m/%d/%y')
        d = datetime.strptime(ro['date'], '%m/%d/%y')
        if y < d < t:
            n += 1
    return n

print(dates(df))
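A vectorized pandas alternative is also possible; a minimal sketch, assuming date, start_date and last_date are already parsed as datetimes and that the count is taken per ID group (an assumption read off the expected output, which it reproduces: 4 for ab and 2 for bc):
import pandas as pd

# assume df holds the columns shown in the question as datetime64
in_range = df['date'].between(df['start_date'], df['last_date'])  # True where date lies inside the window
df['count'] = in_range.groupby(df['ID']).transform('sum')         # per-ID count, broadcast back to every row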

roll off profile stacking data frames

I have a dataframe that looks like:
import pandas as pd
import datetime as dt
df= pd.DataFrame({'date':['2017-12-31','2017-12-31'],'type':['Asset','Liab'],'Amount':[100,-100],'Maturity Date':['2019-01-02','2018-01-01']})
df
I am trying to build a roll-off profile by checking if the 'Maturity Date' is greater than a 'date' in the future. I am trying to achieve something like:
#First Month
df1=df[df['Maturity Date']>'2018-01-31']
df1['date']='2018-01-31'
#Second Month
df2=df[df['Maturity Date']>'2018-02-28']
df2['date']='2018-02-28'
#third Month
df3=df[df['Maturity Date']>'2018-03-31']
df3['date']='2018-03-31'
#first quarter
qf1=df[df['Maturity Date']>'2018-06-30']
qf1['date']='2018-06-30'
#concatenate
df=pd.concat([df,df1,df2,df3,qf1])
df
I was wondering if there is a way to allow an arbitrarily large number of dates without repeating code.
I think you need numpy.tile to repeat the index, then assign the new date column and finally filter by boolean indexing:
import numpy as np

d = '2017-12-31'
df['Maturity Date'] = pd.to_datetime(df['Maturity Date'])
#generate the first month ends and the next quarter ends
c1 = pd.date_range(d, periods=4, freq='M')
c2 = pd.date_range(c1[-1], periods=2, freq='Q')
#join them together
c = c1.union(c2[1:])
#repeat rows by indexing with the tiled index
df1 = df.loc[np.tile(df.index, len(c))].copy()
#assign the date column from the repeated datetimes
df1['date'] = np.repeat(c, len(df))
#filter by boolean indexing
df1 = df1[df1['Maturity Date'] > df1['date']]
print(df1)
Amount Maturity Date date type
0 100 2019-01-02 2017-12-31 Asset
1 -100 2018-01-01 2017-12-31 Liab
0 100 2019-01-02 2018-01-31 Asset
0 100 2019-01-02 2018-02-28 Asset
0 100 2019-01-02 2018-03-31 Asset
0 100 2019-01-02 2018-06-30 Asset
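To stretch this to an arbitrarily long horizon, only the two date_range calls need to change; for example (the 12-month/4-quarter horizon below is an illustrative assumption, not part of the question):
# 12 month-ends after the start date, then 4 further quarter-ends
c1 = pd.date_range(d, periods=13, freq='M')
c2 = pd.date_range(c1[-1], periods=5, freq='Q')
c = c1.union(c2[1:])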
You could use a nifty tool in the Pandas arsenal called pd.merge_asof. It works similarly to pd.merge, except that it matches on "nearest" keys rather than equal keys. Furthermore, you can tell pd.merge_asof to look for nearest keys in only the backward or forward direction.
To make things interesting (and help check that things are working properly), let's add another row to df:
df = pd.DataFrame({'date': ['2017-12-31', '2017-12-31'],
                   'type': ['Asset', 'Asset'],
                   'Amount': [100, 200],
                   'Maturity Date': ['2019-01-02', '2018-03-15']})
for col in ['date', 'Maturity Date']:
    df[col] = pd.to_datetime(df[col])
df = df.sort_values(by='Maturity Date')
print(df)
# Amount Maturity Date date type
# 1 200 2018-03-15 2017-12-31 Asset
# 0 100 2019-01-02 2017-12-31 Asset
Now define some new dates:
dates = (pd.date_range('2018-01-31', periods=3, freq='M')
           .union(pd.date_range('2018-01-1', periods=2, freq='Q')))
result = pd.DataFrame({'date': dates})
# date
# 0 2018-01-31
# 1 2018-02-28
# 2 2018-03-31
# 3 2018-06-30
Now we can merge rows, matching nearest dates from result with Maturity Dates from df:
result = pd.merge_asof(result, df.drop('date', axis=1),
                       left_on='date', right_on='Maturity Date', direction='forward')
In this case we want to "match" each date with a Maturity Date which is greater, so we use direction='forward'.
Putting it all together:
import pandas as pd

df = pd.DataFrame({'date': ['2017-12-31', '2017-12-31'],
                   'type': ['Asset', 'Asset'],
                   'Amount': [100, 200],
                   'Maturity Date': ['2019-01-02', '2018-03-15']})
for col in ['date', 'Maturity Date']:
    df[col] = pd.to_datetime(df[col])
df = df.sort_values(by='Maturity Date')

dates = (pd.date_range('2018-01-31', periods=3, freq='M')
           .union(pd.date_range('2018-01-1', periods=2, freq='Q')))
result = pd.DataFrame({'date': dates})
result = pd.merge_asof(result, df.drop('date', axis=1),
                       left_on='date', right_on='Maturity Date', direction='forward')

result = pd.concat([df, result], axis=0)
result = result.sort_values(by=['Maturity Date', 'date'])
print(result)
yields
Amount Maturity Date date type
1 200 2018-03-15 2017-12-31 Asset
0 200 2018-03-15 2018-01-31 Asset
1 200 2018-03-15 2018-02-28 Asset
0 100 2019-01-02 2017-12-31 Asset
2 100 2019-01-02 2018-03-31 Asset
3 100 2019-01-02 2018-06-30 Asset

How do I reformat dates in a CSV to just show MM/YYYY

Using Python 3 and Pandas, I've been spending an embarrassing amount of time trying to figure out how to take a column of dates from a CSV and make a new column with just MM/YYYY or YYYY/MM/01.
The data looks like Col1 but I am trying to produce Col2:
Col1 Col2
2/12/2017 2/1/2017
2/16/2017 2/1/2017
2/28/2017 2/1/2017
3/2/2017 3/1/2017
3/13/2017 3/1/2017
I am able to parse the year and month out:
df['Month'] = pd.DatetimeIndex(df['File_Processed_Date']).month
df['Year'] = pd.DatetimeIndex(df['File_Processed_Date']).year
df['Period'] = df['Month'] + '/' + df['Year']
That last line is wrong. Is there a clever python way to just show 2/2017?
I get the error: "TypeError: ufunc 'add' did not contain a loop with signature matching types dtype('
Update, answer by piRsquared:
d = pd.to_datetime(df.File_Processed_Date)
df['Period'] = d.dt.strftime('%m/1/%Y')
This will create a pandas column in a dataframe that converts Col1 into Col2 successfully. Thanks!
Let d be just 'Col1' converted to Timestamp:
d = pd.to_datetime(df.Col1)
then
d.dt.strftime('%m/1/%Y')
0 02/1/2017
1 02/1/2017
2 02/1/2017
3 03/1/2017
4 03/1/2017
Name: Col1, dtype: object
d.dt.strftime('%m/%Y')
0 02/2017
1 02/2017
2 02/2017
3 03/2017
4 03/2017
Name: Col1, dtype: object
d.dt.strftime('%Y/%m/01')
0 2017/02/01
1 2017/02/01
2 2017/02/01
3 2017/03/01
4 2017/03/01
Name: Col1, dtype: object
d - pd.offsets.MonthBegin()
0 2017-02-01
1 2017-02-01
2 2017-02-01
3 2017-03-01
4 2017-03-01
Name: Col1, dtype: datetime64[ns]
The function you are looking for is strftime.
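Putting that together for the Col2 format asked about, a minimal sketch using the sample values from the question:
import pandas as pd

df = pd.DataFrame({'Col1': ['2/12/2017', '2/16/2017', '2/28/2017', '3/2/2017', '3/13/2017']})
d = pd.to_datetime(df['Col1'])
df['Period'] = d.dt.strftime('%m/1/%Y')   # e.g. '02/1/2017', matching Col2 up to the leading zero
print(df)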
