Check whether a certain datetime value is missing in a given period - python-3.x

I have a df with DateTime index as follows:
DateTime
2017-01-02 15:00:00
2017-01-02 16:00:00
2017-01-02 18:00:00
....
....
2019-12-07 22:00:00
2019-12-07 23:00:00
Now, I want to know whether any time is missing within the 1-hour interval. For instance, the 3rd reading skips an hour (we went from 16:00 straight to 18:00), so one reading is missing. Is it possible to detect this?

Create a date_range from the minimal to the maximal datetime and filter out the existing values with Index.isin, using ~ to invert the boolean mask:
print (df)
DateTime
0 2017-01-02 15:00:00
1 2017-01-02 16:00:00
2 2017-01-02 18:00:00
r = pd.date_range(df['DateTime'].min(), df['DateTime'].max(), freq='H')
print (r)
DatetimeIndex(['2017-01-02 15:00:00', '2017-01-02 16:00:00',
               '2017-01-02 17:00:00', '2017-01-02 18:00:00'],
              dtype='datetime64[ns]', freq='H')
out = r[~r.isin(df['DateTime'])]
print (out)
DatetimeIndex(['2017-01-02 17:00:00'], dtype='datetime64[ns]', freq='H')
Another idea is to create a DatetimeIndex from a helper column, change the frequency with Series.asfreq, and select the index values where the helper is missing:
s = df[['DateTime']].assign(val=1).set_index('DateTime')['val'].asfreq('H')
print (s)
DateTime
2017-01-02 15:00:00 1.0
2017-01-02 16:00:00 1.0
2017-01-02 17:00:00 NaN
2017-01-02 18:00:00 1.0
Freq: H, Name: val, dtype: float64
out = s.index[s.isna()]
print (out)
DatetimeIndex(['2017-01-02 17:00:00'], dtype='datetime64[ns]', name='DateTime', freq='H')
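For completeness, the same missing values can be had in one step with Index.difference (a small sketch reusing r and df from above), which returns the sorted values of r that are absent from the column:
out = r.difference(df['DateTime'])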

Is it safe to assume that the datetime format will always be the same? If yes, why don't you extract the "hour" values from your respective timestamps and compare them to the interval you desire, e.g.:
import re

# store some datetime values for show
datetimes = [
    "2017-01-02 15:00:00",
    "2017-01-02 16:00:00",
    "2017-01-02 18:00:00",
    "2019-12-07 22:00:00",
    "2019-12-07 23:00:00"
]
# extract the hour value via regex (the first match is always the hour in this format)
findHour = re.compile(r"\d{2}(?=:)")
prevx = findHour.findall(datetimes[0])[0]
# simple comparison: compare to the previous value, calculate the difference,
# then set the previous value to the current value
for x in datetimes[1:]:
    cmp = findHour.findall(x)[0]
    diff = int(cmp) - int(prevx)
    if diff > 1:
        print("Missing timestamp(s) between {} and {} hours!".format(prevx, cmp))
    prevx = cmp
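Note that comparing only the hour digits misses gaps that span midnight or whole days. A sketch of a more robust variant (reusing the datetimes list above) parses the full timestamps and compares consecutive differences as timedeltas:
from datetime import datetime, timedelta

parsed = [datetime.strptime(t, "%Y-%m-%d %H:%M:%S") for t in datetimes]

# any consecutive pair more than one hour apart marks a gap,
# regardless of day boundaries
for prev, cur in zip(parsed, parsed[1:]):
    if cur - prev > timedelta(hours=1):
        print("Missing timestamp(s) between {} and {}".format(prev, cur))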

Related

Iterate over unique date and hour in the pandas dataframe to run a function

Hi, I am currently running a for loop over the unique dates in the dataframe and passing each subset to a function.
However, what I want is to iterate over the unique date and hour combinations (e.g. 2020-12-18 15:00, 2020-12-18 16:00) in my dataframe. Is there any possible way to do this?
This is my code and a sample of my dataframe.
for day in df['DateTime'].dt.day.unique():
    testdf = df[df['DateTime'].dt.day == day]
    testdf.set_index('DateTimeStarted', inplace=True)
    output = mk.original_test(testdf, alpha=0.05)
    output_df = pd.DataFrame(output).T
    output_df.rename({0:"Trend", 1:"h", 2:"p", 3:"z", 4:"Tau", 5:"s", 6:"var_s", 7:"slope", 8:"intercept"}, axis=1, inplace=True)
    result_df = result_df.append(output_df)
DateTime Values
0 2020-12-18 15:00:00 554.0
1 2020-12-18 15:00:00 594.0
2 2020-12-18 15:00:00 513.0
3 2020-12-18 16:00:00 651.0
4 2020-12-18 16:00:00 593.0
5 2020-12-18 17:00:00 521.0
6 2020-12-18 17:00:00 539.0
7 2020-12-18 17:00:00 534.0
8 2020-12-18 18:00:00 562.0
9 2020-12-19 08:00:00 511.0
10 2020-12-19 09:00:00 512.0
11 2020-12-19 09:00:00 584.0
12 2020-12-19 09:00:00 597.0
13 2020-12-22 09:00:00 585.0
14 2020-12-22 09:00:00 620.0
15 2020-12-22 09:00:00 593.0
You can use groupby if you need to process every unique datetime in the DataFrame:
for day, testdf in df.groupby('DateTime'):
    testdf.set_index('DateTimeStarted', inplace=True)
    output = mk.original_test(testdf, alpha=0.05)
    output_df = pd.DataFrame(output).T
    output_df.rename({0:"Trend", 1:"h", 2:"p", 3:"z", 4:"Tau", 5:"s", 6:"var_s", 7:"slope", 8:"intercept"}, axis=1, inplace=True)
    result_df = result_df.append(output_df)
EDIT: If you need to filter only some dates from a list, use:
for date in ['2020-12-18 15:00', '2020-12-18 16:00']:
    testdf = df[df['DateTime'] == date]
    testdf.set_index('DateTimeStarted', inplace=True)
    output = mk.original_test(testdf, alpha=0.05)
    output_df = pd.DataFrame(output).T
    output_df.rename({0:"Trend", 1:"h", 2:"p", 3:"z", 4:"Tau", 5:"s", 6:"var_s", 7:"slope", 8:"intercept"}, axis=1, inplace=True)
    result_df = result_df.append(output_df)
EDIT1: To process every unique datetime explicitly, without groupby:
# .unique() avoids re-processing duplicated timestamps
for date in df['DateTime'].unique():
    testdf = df[df['DateTime'] == date]
    testdf.set_index('DateTimeStarted', inplace=True)
    output = mk.original_test(testdf, alpha=0.05)
    output_df = pd.DataFrame(output).T
    output_df.rename({0:"Trend", 1:"h", 2:"p", 3:"z", 4:"Tau", 5:"s", 6:"var_s", 7:"slope", 8:"intercept"}, axis=1, inplace=True)
    result_df = result_df.append(output_df)
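Note that DataFrame.append was deprecated in pandas 1.4 and removed in 2.0, so the accumulation pattern above fails on current pandas. A minimal sketch of the usual replacement collects the per-group frames in a list and concatenates once:
results = []
for day, testdf in df.groupby('DateTime'):
    testdf = testdf.set_index('DateTimeStarted')
    output = mk.original_test(testdf, alpha=0.05)
    output_df = pd.DataFrame(output).T
    output_df.columns = ["Trend", "h", "p", "z", "Tau", "s", "var_s", "slope", "intercept"]
    results.append(output_df)
result_df = pd.concat(results, ignore_index=True)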

How to read in an unusual date/time format

I have a small df with a date/time column using a format I have never seen.
Pandas reads it in as an object even if I use parse_dates, and to_datetime() chokes on it.
The dates in the column are formatted as such:
2019/12/29 GMT+8 18:00
2019/12/15 GMT+8 05:00
I think the best approach is to use a date parsing pattern, something like this:
from datetime import datetime

dateparse = lambda x: datetime.strptime(x, '%Y-%m-%d %H:%M:%S')
df = pd.read_csv(infile, parse_dates=['datetime'], date_parser=dateparse)
But I simply do not know how to approach this format.
The datetime format code for a UTC offset, %z, is very specific (see the strftime() and strptime() Format Codes in the Python docs): the offset must be a + or - sign followed by a zero-padded HH:MM, e.g. +08:00, -08:00, +10:00 or -10:00. Use str.zfill to backfill the 0s between the sign and the digits.
import pandas as pd
# sample data
df = pd.DataFrame({'datetime': ['2019/12/29 GMT+8 18:00', '2019/12/15 GMT+8 05:00', '2019/12/15 GMT+10 05:00', '2019/12/15 GMT-10 05:00']})
# display(df)
datetime
2019/12/29 GMT+8 18:00
2019/12/15 GMT+8 05:00
2019/12/15 GMT+10 05:00
2019/12/15 GMT-10 05:00
# fix the format: '2019/12/29 GMT+8 18:00' -> '2019/12/2918:00+08:00'
# str.zfill is sign-aware, so '+8'.zfill(3) gives '+08'
df.datetime = df.datetime.str.split(' ').apply(lambda x: x[0] + x[2] + x[1][3:].zfill(3) + ':00')
# convert to a utc datetime
df.datetime = pd.to_datetime(df.datetime, format='%Y/%m/%d%H:%M%z', utc=True)
# display(df)
datetime
2019-12-29 10:00:00+00:00
2019-12-14 21:00:00+00:00
2019-12-14 19:00:00+00:00
2019-12-15 15:00:00+00:00
print(df.info())
[out]:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4 entries, 0 to 3
Data columns (total 1 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 datetime 4 non-null datetime64[ns, UTC]
dtypes: datetime64[ns, UTC](1)
memory usage: 160.0 bytes
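Starting from the raw strings, the reformatting step can also be sketched as a single regex replacement; the pattern moves the offset to the end of the string so %z can consume it:
# rewrite 'GMT+8' style offsets as trailing '+08:00' / '-10:00'
fixed = df['datetime'].str.replace(
    r'\sGMT([+-]\d{1,2})\s(\d{2}:\d{2})',
    lambda m: ' {} {:+03d}:00'.format(m.group(2), int(m.group(1))),
    regex=True)
df['utc'] = pd.to_datetime(fixed, format='%Y/%m/%d %H:%M %z', utc=True)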
You could pass the custom format with GMT+8 in the middle and then subtract eight hours with timedelta(hours=8):
import pandas as pd
from datetime import timedelta

# assumes df['Date'] holds the raw strings, e.g. '2019/12/29 GMT+8 18:00'
df['Date'] = pd.to_datetime(df['Date'], format='%Y/%m/%d GMT+8 %H:%M') - timedelta(hours=8)
df
Date
0 2019-12-29 10:00:00
1 2019-12-14 21:00:00

Read a date-time column into a pandas dataframe and retain the seconds information

My csv file.
Timestamp
---------------------
1/4/2019 2:00:09 PM
1/4/2019 2:00:18 PM
I have a column of date-time information in a csv file. I want to read it into a pandas dataframe as a timestamp column, retaining the seconds information.
Effort 1:
I tried
def dateparse(timestamp):
    return pd.datetime.strptime(timestamp, '%m/%d/%Y %H:%M:%S ')

df = pd.read_csv('file_name.csv', parse_dates=['Timestamp'], date_parser=dateparse)
Above rounds off the seconds to something like
1/4/2019 2:00:00
Effort 2:
I thought of reading the entire file line by line and later converting it into a dataframe.
with open('file name.csv') as f:
    for line in f:
        print(line)
But here again the seconds information is rounded off.
edit 1:
The seconds info is truncated even when I open this csv file in editors like Sublime.
For me it works if you omit date_parser=dateparse:
import pandas as pd
from io import StringIO

temp = u"""Timestamp1
1/4/2019 2:00:09 PM
1/4/2019 2:00:18 PM"""
# after testing, replace 'StringIO(temp)' with 'filename.csv'
df = pd.read_csv(StringIO(temp), parse_dates=['Timestamp1'])
print (df)
Timestamp1
0 2019-01-04 14:00:09
1 2019-01-04 14:00:18
print (df.dtypes)
Timestamp1 datetime64[ns]
dtype: object
EDIT1:
The datetime format must be corrected to 12-hour time with %I and %p:
import pandas as pd
from io import StringIO
from datetime import datetime

def dateparse(timestamp):
    return datetime.strptime(timestamp, '%m/%d/%Y %I:%M:%S %p')

temp = u"""Timestamp1
1/4/2019 2:00:09 AM
1/4/2019 2:00:09 PM
1/4/2019 2:00:18 PM"""
# after testing, replace 'StringIO(temp)' with 'filename.csv'
df = pd.read_csv(StringIO(temp), parse_dates=['Timestamp1'], date_parser=dateparse)
print (df)
Timestamp1
0 2019-01-04 02:00:09
1 2019-01-04 14:00:09
2 2019-01-04 14:00:18
print (df.dtypes)
Timestamp1 datetime64[ns]
dtype: object
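On pandas 2.0 and later, date_parser is deprecated in favor of the date_format argument, so the same parse can be sketched without a helper function:
df = pd.read_csv(StringIO(temp), parse_dates=['Timestamp1'],
                 date_format='%m/%d/%Y %I:%M:%S %p')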
EDIT2: Reading your actual file confirms the seconds are already missing from the data itself:
df = pd.read_csv('send1.csv', parse_dates=['Timestamp'])
print (df)
Timestamp
0 2019-01-04 14:00:00
1 2019-01-04 14:00:00
2 2019-01-04 14:00:00
3 2019-01-04 14:00:00
4 2019-01-04 14:00:00
5 2019-01-04 14:00:00

Create a pandas column based on a lookup value from another dataframe

I have a pandas dataframe that has some data values by hour (which is also the index of this lookup dataframe). The dataframe looks like this:
In [1] print (df_lookup)
Out[1] 0 1.109248
1 1.102435
2 1.085014
3 1.073487
4 1.079385
5 1.088759
6 1.044708
7 0.902482
8 0.852348
9 0.995912
10 1.031643
11 1.023458
12 1.006961
...
23 0.889541
I want to multiply the values from this lookup dataframe to create a column of another dataframe, which has datetime as index.
The dataframe looks like this:
In [2] print (df)
Out[2]
Date_Label ID data-1 data-2 data-3
2015-08-09 00:00:00 1 2513.0 2502 NaN
2015-08-09 00:00:00 1 2113.0 2102 NaN
2015-08-09 01:00:00 2 2006.0 1988 NaN
2015-08-09 02:00:00 3 2016.0 2003 NaN
...
2018-07-19 23:00:00 33 3216.0 333 NaN
I want to calculate the data-3 column from the data-2 column, where the weight given to the 'data-2' column depends on the corresponding value in df_lookup. I get the desired values by looping over the index as follows, but that is too slow:
for idx in df.index:
    df.loc[idx, 'data-3'] = df.loc[idx, 'data-2'] * df_lookup.at[idx.hour]
Is there a faster way someone could suggest?
Using .loc
df['data-2']*df_lookup.loc[df.index.hour].values
Out[275]:
Date_Label
2015-08-09 00:00:00 2775.338496
2015-08-09 00:00:00 2331.639296
2015-08-09 01:00:00 2191.640780
2015-08-09 02:00:00 2173.283042
Name: data-2, dtype: float64
#df['data-3']=df['data-2']*df_lookup.loc[df.index.hour].values
I'd probably try doing a join.
# Fix column name
df_lookup.columns = ['multiplier']
# Get hour index
df['hour'] = df.index.hour
# Join
df = df.join(df_lookup, how='left', on=['hour'])
df['data-3'] = df['data-2'] * df['multiplier']
df = df.drop(['multiplier', 'hour'], axis=1)
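For reference, a minimal self-contained sketch of the join approach with made-up numbers (the frames and column names are assumptions based on the question):
import numpy as np
import pandas as pd

# hypothetical lookup: one multiplier per hour of the day (index 0..23)
df_lookup = pd.DataFrame({'multiplier': np.linspace(0.9, 1.1, 24)})

# hypothetical hourly data with a datetime index
idx = pd.date_range('2015-08-09', periods=5, freq='H')
df = pd.DataFrame({'data-2': [2502, 2102, 1988, 2003, 333]}, index=idx)

df['hour'] = df.index.hour
df = df.join(df_lookup, on='hour')
df['data-3'] = df['data-2'] * df['multiplier']
df = df.drop(['multiplier', 'hour'], axis=1)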

Convert datetime object to date and datetime2 to time then combine to single column

I have a dataset where the transaction date is stored as YYYY-MM-DD 00:00:00 and the transaction time is stored as 1900-01-01 HH:MM:SS.
I need to truncate these timestamps and then either leave them as is or combine them into a single timestamp. I've tried several methods, and all continue to return the full timestamp. Thoughts?
Use split and pd.to_datetime:
df = pd.DataFrame({'TransDate':['2015-01-01 00:00:00','2015-01-02 00:00:00','2015-01-03 00:00:00'],
                   'TransTime':['1900-01-01 07:00:00','1900-01-01 08:30:00','1900-01-01 09:45:15']})
df['Date'] = (pd.to_datetime(df['TransDate'].str.split().str[0] +
                             ' ' +
                             df['TransTime'].str.split().str[1]))
Output:
TransDate TransTime Date
0 2015-01-01 00:00:00 1900-01-01 07:00:00 2015-01-01 07:00:00
1 2015-01-02 00:00:00 1900-01-01 08:30:00 2015-01-02 08:30:00
2 2015-01-03 00:00:00 1900-01-01 09:45:15 2015-01-03 09:45:15
print(df.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 3 entries, 0 to 2
Data columns (total 3 columns):
TransDate 3 non-null object
TransTime 3 non-null object
Date 3 non-null datetime64[ns]
dtypes: datetime64[ns](1), object(2)
memory usage: 152.0+ bytes
None
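If the two columns are already datetime64 rather than strings, a sketch of the equivalent combination keeps the normalized date and adds the time of day back as a timedelta:
trans_date = pd.to_datetime(df['TransDate'])
trans_time = pd.to_datetime(df['TransTime'])

# normalize() floors to midnight; subtracting it from TransTime leaves
# the time of day as a timedelta that can be added onto the date
df['Date'] = trans_date.dt.normalize() + (trans_time - trans_time.dt.normalize())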
