Subtracting two clock times in pandas dataframe - python-3.x

I am trying to subtract two columns of a pandas DataFrame which contain normal clock times as strings, but somehow I am getting stuck.
I have tried converting each column to datetime with pd.to_datetime, but the subtraction still doesn't work.
import pandas as pd
df = pd.DataFrame()
df['A'] = ["12:30","5:30"]
df['B'] = ["19:30","9:30"]
df['A'] = pd.to_datetime(df['A']).dt.time
df['B'] = pd.to_datetime(df['B']).dt.time
df['time_diff'] = df['B'] - df['A']
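# the line above raises TypeError: unsupported operand type(s) for -: 'datetime.time' and 'datetime.time'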
I am expecting the actual time difference between two clock times.

You should use to_timedelta:
df['A'] = pd.to_timedelta(df['A']+':00')
df['B'] = pd.to_timedelta(df['B']+':00')
df['time_diff'] = df['B'] - df['A']
df
Out[21]:
A B time_diff
0 12:30:00 19:30:00 07:00:00
1 05:30:00 09:30:00 04:00:00
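If you then need the difference as a plain number of hours rather than a timedelta, total_seconds on the .dt accessor works (a minimal sketch):
df['time_diff'].dt.total_seconds() / 3600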

I tried the following method, which also worked for me. Divide the seconds by 60 to get minutes, or by 3600 to get hours.
df = pd.DataFrame()
df['A'] = ["12:30","5:30"]
df['B'] = ["19:30","9:30"]
df['time_diff_minutes'] = (pd.to_datetime(df['B']) -
pd.to_datetime(df['A'])).astype('timedelta64[s]')/60
df['time_diff_hours'] = df['time_diff_minutes']/60
df
Out[161]:
A B time_diff_minutes time_diff_hours
0 12:30 19:30 420.0 7.0
1 5:30 9:30 240.0 4.0
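Note: the behavior of astype('timedelta64[s]') has changed across pandas versions (older releases returned float seconds). A portable spelling of the same computation, assuming any reasonably recent pandas, uses .dt.total_seconds():
df['time_diff_minutes'] = (pd.to_datetime(df['B']) - pd.to_datetime(df['A'])).dt.total_seconds() / 60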

Related

Pandas [.dt] property vs to_datetime

The question is intended to build an understandable grasp of the subtle differences between .dt and pd.to_datetime.
I want to understand which method is suited/preferred, whether one can be used as a de facto standard, and what other differences exist between the two.
values = {'date_time': ['20190902093000', '20190913093000', '20190921200000']}
df = pd.DataFrame(values, columns = ['date_time'])
df['date_time'] = pd.to_datetime(df['date_time'], format='%Y%m%d%H%M%S')
>>> df
date_time
0 2019-09-02 09:30:00
1 2019-09-13 09:30:00
2 2019-09-21 20:00:00
Using .dt
df['date'] = df['date_time'].dt.date
>>> df
date_time date
0 2019-09-02 09:30:00 2019-09-02
1 2019-09-13 09:30:00 2019-09-13
2 2019-09-21 20:00:00 2019-09-21
>>> df.dtypes
date_time datetime64[ns]
date object
dtype: object
>>> df.date.values
array([datetime.date(2019, 9, 2), datetime.date(2019, 9, 13),
datetime.date(2019, 9, 21)], dtype=object)
Using .dt, even though the elements are individually datetime, the column is inferred as object in the DataFrame. That is sometimes what you want, but mostly it causes a lot of problems down the line, and an implicit conversion becomes inevitable.
Using pd.to_datetime
df['date_to_datetime'] = pd.to_datetime(df['date'],format='%Y-%m-%d')
>>> df.dtypes
date_time datetime64[ns]
date object
date_to_datetime datetime64[ns]
>>> df.date_to_datetime.values
array(['2019-09-02T00:00:00.000000000', '2019-09-13T00:00:00.000000000',
'2019-09-21T00:00:00.000000000'], dtype='datetime64[ns]')
Using pd.to_datetime natively returns a datetime64[ns] array, and the DataFrame infers the same dtype, which in my experience is consistent and widely used when dealing with dates in pandas.
I'm aware that a native Date dtype does not exist in pandas and that dates are wrapped in datetime64[ns].
The two concepts are quite different.
pandas.to_datetime() is a function that can take a variety of inputs and convert them to a pandas datetime index. For example:
dates = pd.to_datetime([1610290846000000000, '2020-01-11', 'Jan 12 2020 2pm'])
print(dates)
# DatetimeIndex(['2021-01-10 15:00:46', '2020-01-11 00:00:00',
# '2020-01-12 14:00:00'],
# dtype='datetime64[ns]', freq=None)
pandas.Series.dt is an interface on a pandas series that gives you convenient access to operations on data stored as a pandas datetime. For example:
x = pd.Series(dates)
print(x.dt.date)
# 0 2021-01-10
# 1 2020-01-11
# 2 2020-01-12
# dtype: object
print(x.dt.hour)
# 0 15
# 1 0
# 2 14
# dtype: int64
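A practical consequence of the difference (a minimal sketch): .dt is only available once the series actually holds datetime values, so pd.to_datetime is the entry point and .dt is the accessor you use afterwards.
s = pd.Series(['2020-01-11', '2020-01-12'])
# s.dt.date here would raise AttributeError: Can only use .dt accessor with datetimelike values
s = pd.to_datetime(s)
print(s.dt.year)
# 0    2020
# 1    2020
# dtype: int64 (or int32, depending on pandas version)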

Converting timedeltas to integers for consecutive time points in pandas

Suppose I have the dataframe
import pandas as pd
df = pd.DataFrame({"Time": ['2010-01-01', '2010-01-02', '2010-01-03', '2010-01-04']})
print(df)
Time
0 2010-01-01
1 2010-01-02
2 2010-01-03
3 2010-01-04
If I want to calculate the time from the lowest time point for each time in the dataframe, I can use the apply function like
df['Time'] = pd.to_datetime(df['Time'])
df.sort_values('Time', inplace=True)  # sort_values needs a column to sort by
df['Time'] = df['Time'].apply(lambda x: (x - df['Time'].iloc[0]).days)
print(df)
Time
0 0
1 1
2 2
3 3
Is there a function in Pandas that does this already?
I would recommend not using apply; vectorized subtraction does the same thing:
(df.Time-df.Time.iloc[0]).dt.days
0 0
1 1
2 2
3 3
Name: Time, dtype: int64
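If the frame isn't already sorted, subtracting the minimum gives the same result without a sort (a small sketch):
(df.Time - df.Time.min()).dt.days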

Handling dates with mix of two and four digit years in python

I have a DataFrame df with two date columns:
A B
5/4/2018 8/4/2018
24/5/15 26/5/15
21/7/16 22/7/16
3/7/2015 5/7/2015
1/7/2016 1/7/2016
I want to calculate the difference of days for each row.
for example:
A B C
5/4/2018 8/4/2018 3
24/5/15 26/5/15 2
I have tried to convert the columns to datetime using pd.to_datetime, but I am getting the error "ValueError: unconverted data remains: 18".
I tried the following code:
import datetime as dt
df['A'] = pd.to_datetime(df['A'], format = "%d/%m/%y").datetime.datetime.strftime("%Y-%m-%d")
df['B'] = pd.to_datetime(df['B'], format = "%d/%m/%y").datetime.datetime.strftime("%Y-%m-%d")
df['C'] = (df['B'] - df['A']).dt.days
Note: using Python 3.7.
Try:
df['A'] = pd.to_datetime(df['A'], dayfirst=True)
df['B'] = pd.to_datetime(df['B'], dayfirst=True)
df['C'] = (df['B'] - df['A']).dt.days
Output:
A B C
0 2018-04-05 2018-04-08 3
1 2015-05-24 2015-05-26 2
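Side note, an assumption about newer environments: pandas 2.x is stricter when rows mix formats and may need format='mixed' so each element is parsed individually:
df['A'] = pd.to_datetime(df['A'], dayfirst=True, format='mixed')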

Indicate whether datetime of row is in a daterange

I'm trying to get dummy variables for holidays in a dataset. I have a couple of date ranges (pd.date_range()) with holidays and a dataframe to which I would like to append a dummy indicating whether the datetime of each row falls in one of the specified holiday ranges.
Small example:
import numpy as np
import pandas as pd

ChristmasBreak = list(pd.date_range('2014-12-20', '2015-01-04').date)
dates = pd.date_range('2015-01-03', '2015-01-06', freq='H')
d = {'Date': dates, 'Number': np.random.rand(len(dates))}
df = pd.DataFrame(data=d)
df.set_index('Date', inplace=True)
for i, row in df.iterrows():
    if i in ChristmasBreak:
        df.loc[i, 'Christmas'] = 1
The if branch is never entered, so matching the dates doesn't work. Is there any way to do this? Alternative methods of getting dummies for this case are welcome as well!
First, don't use iterrows, because it is really slow.
Better to use dt.date with Series.isin, then convert the boolean mask to integers (True becomes 1):
df = pd.DataFrame(data=d)
df['Christmas'] = df['Date'].dt.date.isin(ChristmasBreak).astype(int)
Or use between:
df['Christmas'] = df['Date'].between('2014-12-20', '2015-01-04').astype(int)
If you want to compare with a DatetimeIndex:
df = pd.DataFrame(data=d)
df.set_index('Date', inplace=True)
df['Christmas'] = df.index.date.isin(ChristmasBreak).astype(int)
df['Christmas'] = ((df.index > '2014-12-20') & (df.index < '2015-01-04')).astype(int)
Sample:
ChristmasBreak = pd.date_range('2014-12-20','2015-01-04').date
dates = pd.date_range('2014-12-19 20:00', '2014-12-20 05:00', freq='H')
d = {'Date': dates, 'Number': np.random.randint(10, size=len(dates))}
df = pd.DataFrame(data=d)
df['Christmas'] = df['Date'].dt.date.isin(ChristmasBreak).astype(int)
print (df)
Date Number Christmas
0 2014-12-19 20:00:00 6 0
1 2014-12-19 21:00:00 7 0
2 2014-12-19 22:00:00 0 0
3 2014-12-19 23:00:00 9 0
4 2014-12-20 00:00:00 1 1
5 2014-12-20 01:00:00 3 1
6 2014-12-20 02:00:00 1 1
7 2014-12-20 03:00:00 8 1
8 2014-12-20 04:00:00 2 1
9 2014-12-20 05:00:00 1 1
This should do what you want, provided the index holds plain dates (with an hourly DatetimeIndex, only the midnight timestamps would match):
df['Christmas'] = df.index.isin(ChristmasBreak).astype(int)
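For an hourly index, matching at date granularity needs the .date view; numpy's isin works on the ndarray it returns (a sketch):
import numpy as np
df['Christmas'] = np.isin(df.index.date, ChristmasBreak).astype(int)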

Populating pandas column based on moving date range (efficiently)

I have 2 pandas dataframes, one of them contains dates with measurements, and the other contains dates with an event ID.
df1
from datetime import datetime as dt
from datetime import timedelta
import pandas as pd
import numpy as np
today = dt.now()
ndays = 10
df1 = pd.DataFrame({'Date': [today + timedelta(days = x) for x in range(ndays)], 'measurement': pd.Series(np.random.randint(1, high = 10, size = ndays))})
df1.Date = df1.Date.dt.date
Date measurement
2018-01-10 8
2018-01-11 2
2018-01-12 7
2018-01-13 3
2018-01-14 1
2018-01-15 1
2018-01-16 6
2018-01-17 9
2018-01-18 8
2018-01-19 4
df2
df2 = pd.DataFrame({'Date': ['2018-01-11', '2018-01-14', '2018-01-16', '2018-01-19'], 'event_id': ['event_a', 'event_b', 'event_c', 'event_d']})
df2.Date = pd.to_datetime(df2.Date, format = '%Y-%m-%d')
df2.Date = df2.Date.dt.date
Date event_id
2018-01-11 event_a
2018-01-14 event_b
2018-01-16 event_c
2018-01-19 event_d
I want to give the dates in df1 an event_id from df2, but only if the date falls between two event dates. The resulting dataframe would look something like:
df3
today = dt.now()
ndays = 10
df3 = pd.DataFrame({'Date': [today + timedelta(days = x) for x in range(ndays)], 'measurement': pd.Series(np.random.randint(1, high = 10, size = ndays)), 'event_id': ['event_a', 'event_a', 'event_b', 'event_b', 'event_b', 'event_c', 'event_c', 'event_d', 'event_d', 'event_d']})
df3.Date = df3.Date.dt.date
Date event_id measurement
2018-01-10 event_a 4
2018-01-11 event_a 2
2018-01-12 event_b 1
2018-01-13 event_b 5
2018-01-14 event_b 5
2018-01-15 event_c 4
2018-01-16 event_c 6
2018-01-17 event_d 6
2018-01-18 event_d 9
2018-01-19 event_d 6
The code I use to achieve this is:
n = 1
while n <= len(list(df2.Date)) - 1:
    for date in list(df1.Date):
        if date <= df2.iloc[n].Date and (date > df2.iloc[n-1].Date):
            df1.loc[df1.Date == date, 'event_id'] = df2.iloc[n].event_id
    n += 1
The dataset that I am working with is significantly larger than this (a few million rows) and this method runs far too long. Is there a more efficient way to accomplish this?
So there are quite a few things to improve performance.
The first question I have is: does it have to be a pandas frame to begin with? Meaning, can't df1 and df2 just be lists of tuples or lists of lists?
The thing is that pandas adds a significant overhead when accessing items but especially when setting values individually.
Pandas excels when it comes to vectorized operations but I don't see an efficient alternative right now (maybe someone comes up with such an answer, that would be ideal).
Now what I'd do is:
Convert your df1 and df2 to records, e.g. d1 = df1.to_records(); what you get is an array of tuples with basically the same structure as the dataframe.
Run your algorithm, but instead of operating on pandas dataframes, operate on the arrays of tuples d1 and d2.
Use a third list of tuples d3 where you store the newly created data (each tuple is a row).
If you want, you can then convert d3 back to a pandas dataframe:
df3 = pd.DataFrame.from_records(d3, **myKwArgs)
This will speed up your code significantly, I'd assume by more than 100-1000%. It does increase memory usage though, so if you are low on memory try to avoid pandas dataframes altogether, or dereference the unused frames df1 and df2 once you have used them to create the records (and if you run into problems, call gc manually).
EDIT: Here a version of your code using the procedure above:
# assumes d1 = df1.to_records(index=False) and d2 = df2.to_records(index=False)
d3 = []
n = 1
while n < len(d2):  # the original `while n < range(len(d2))` compared an int to a range
    for i in range(len(d1)):
        date = d1[i][0]
        if date <= d2[n][0] and date > d2[n-1][0]:
            d3.append((date, d2[n][1], d1[i][1]))
    n += 1
You can try the df.apply() method to achieve this; refer to pandas.DataFrame.apply. I think my code will work faster than yours.
My approach:
Merge the two dataframes df1 and df2 into a new one, df3:
df3 = pd.merge(df1, df2, on='Date', how='outer')
Sort df3 by date to make it easy to traverse:
df3['Date'] = pd.to_datetime(df3.Date)
df3 = df3.sort_values(by='Date')  # sort_values returns a new frame unless inplace=True
Create a set_event_date() method to apply to each row in df3:
new_event_id = np.nan
def set_event_date(row):
    global new_event_id
    if row.event_id is not np.nan:
        new_event_id = row.event_id
    return new_event_id
Apply set_event_date() to each row in df3:
df3['new_event_id'] = df3.apply(set_event_date,axis=1)
Final Output will be:
Date Measurement New_event_id
0 2018-01-11 2 event_a
1 2018-01-12 1 event_a
2 2018-01-13 3 event_a
3 2018-01-14 6 event_b
4 2018-01-15 3 event_b
5 2018-01-16 5 event_c
6 2018-01-17 7 event_c
7 2018-01-18 9 event_c
8 2018-01-19 7 event_d
9 2018-01-20 4 event_d
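The global-variable apply is effectively a forward fill of the last seen event_id; the same result can be had vectorized (a sketch, assuming df3 is sorted by Date):
df3['new_event_id'] = df3['event_id'].ffill()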
Let me know once you have tried my solution and whether it runs faster than yours. Thanks.
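For completeness, pandas also ships pd.merge_asof, which does this kind of as-of interval matching in a single vectorized call — a sketch, assuming the Date columns are datetime64 and both frames are sorted by Date:
import pandas as pd

df1['Date'] = pd.to_datetime(df1['Date'])
df2['Date'] = pd.to_datetime(df2['Date'])
# match each df1 row to the nearest df2 event on or after its date
df3 = pd.merge_asof(df1.sort_values('Date'), df2.sort_values('Date'),
                    on='Date', direction='forward')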
