Get the last date before an nth date for each month in Python - python-3.x

I am using a CSV with an accumulative number that changes daily.
Day Accumulative Number
0 9/1/2020 100
1 11/1/2020 102
2 18/1/2020 98
3 11/2/2020 105
4 24/2/2020 95
5 6/3/2020 120
6 13/3/2020 100
I am now trying to find the best way to aggregate it and compare the monthly results before a specific date. I want to check the balance on the 11th of each month, but for some months there is no activity on that specific day. As a result, I am trying to get the latest day before the 12th of each month. So, the above would be:
Day Accumulative Number
0 11/1/2020 102
1 11/2/2020 105
2 6/3/2020 120
What I managed to do so far is to just get the latest day of each month:
import pandas as pd
from datetime import datetime

dateparse = lambda x: datetime.strptime(x, "%d/%m/%Y")
df = pd.read_csv("Accumulative.csv", quotechar="'", usecols=["Day", "Accumulative Number"], index_col=False, parse_dates=["Day"], date_parser=dateparse, na_values=['.', '??'])
df.index = df['Day']
grouped = df.groupby(pd.Grouper(freq='M')).sum()
print (df.groupby(df.index.month).apply(lambda x: x.iloc[-1]))
which returns:
Day Accumulative Number
1 2020-01-18 98
2 2020-02-24 95
3 2020-03-13 100
Is there a way to achieve this in pandas/Python, or do I have to use SQL logic in my script? Is there an easier way I am missing to get the "balance" as of the 11th day of each month?
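As a side note, the date_parser argument used above is deprecated in recent pandas (2.0+); a sketch of an equivalent load that parses the dates after reading, assuming the same file and column names:
import pandas as pd

df = pd.read_csv("Accumulative.csv", quotechar="'", usecols=["Day", "Accumulative Number"], na_values=['.', '??'])
# parse the day-first dates explicitly
df["Day"] = pd.to_datetime(df["Day"], format="%d/%m/%Y")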

You can sort by Day, mask the rows whose day falls before the 12th, then take the last masked row per month with groupby:
n = 12
df = df.sort_values('Day')
m = df.Day.dt.day < n
df_sub = df[m].groupby(df.Day.dt.strftime('%Y-%m')).tail(1).copy()
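df_sub should then match the three expected rows from the question (11/1, 11/2 and 6/3 with 102, 105 and 120); the trailing .copy() just avoids a SettingWithCopyWarning if df_sub is modified later.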

You can try filtering the dataframe where the day is less than 12, then take the last row of each group (grouped by year and month):
df['Day'] = pd.to_datetime(df['Day'],dayfirst=True)
(df[df['Day'].dt.day.lt(12)]
.groupby([df['Day'].dt.year,df['Day'].dt.month],sort=False).last()
.reset_index(drop=True))
Day Accumulative Number
0 2020-01-11 102
1 2020-02-11 105
2 2020-03-06 120
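Note that groupby(...).last() takes the last row in row order within each group, so this assumes the frame is already sorted by date within each month (as the sample is); if not, sort first:
df = df.sort_values('Day')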

I would try:
# convert to datetime type:
df['Day'] = pd.to_datetime(df['Day'], dayfirst=True)
# select days before the 12th
new_df = df[df['Day'].dt.day < 12]
# select the last day in each month
new_df.loc[~new_df['Day'].dt.to_period('M').duplicated(keep='last')]
Output:
Day Accumulative Number
1 2020-01-11 102
3 2020-02-11 105
5 2020-03-06 120
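Using dt.to_period('M') for the duplicate check keeps different years apart, unlike the df.index.month grouping in the question's attempt, which would merge January 2020 with January 2021.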

Here's another way, by expanding the date range (assuming df2 starts as a copy of the frame with the value column shortened to 'Number'):
# copy the frame, shortening the value column name
df2 = df.rename(columns={'Accumulative Number': 'Number'})
# set as datetime
df2['Day'] = pd.to_datetime(df2['Day'], dayfirst=True)
# set as index
df2 = df2.set_index('Day')
# make a list of all dates
dates = pd.date_range(start=df2.index.min(), end=df2.index.max(), freq='1D')
# add dates
df2 = df2.reindex(dates)
# replace NA with forward fill
df2['Number'] = df2['Number'].ffill()
# filter to get output
df2 = df2[df2.index.day == 11].reset_index().rename(columns={'index': 'Date'})
print(df2)
Date Number
0 2020-01-11 102.0
1 2020-02-11 105.0
2 2020-03-11 120.0
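Note this variant reports the forward-filled balance as of the 11th itself, which is why March shows 2020-03-11 / 120.0 rather than the 2020-03-06 row from the question's expected output; if the original activity date is needed, the filter-and-take-last approaches above preserve it.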

Related

Pandas : Finding correct time window

I have a pandas dataframe which gets updated every hour with the latest hourly data. I have to filter out IDs based upon a threshold, i.e. PR_Rate > 50 and CNT_12571 < 30, for 3 consecutive hours within a lookback period of 5 hours. I was using the statements below to accomplish this:
df_thld = df[(df['Date'] > df['Date'].max() - pd.Timedelta(hours=5)) & (df.PR_Rate > 50) & (df.CNT_12571 < 30)]
hr_cnt = df_thld.groupby('ID')['Date'].nunique().to_frame('HR_CNT').reset_index()
hr_cnt[hr_cnt['HR_CNT'] >= 3]
The problem with this approach is that, since the lookback period is 5 hours, HR_CNT can count non-consecutive hours as breaching the criteria.
My dataset is as below:
DataFrame
Date IDs CT_12571 PR_Rate
16/06/2021 10:00 A1 15 50.487
16/06/2021 11:00 A1 31 40.806
16/06/2021 12:00 A1 25 52.302
16/06/2021 13:00 A1 13 61.45
16/06/2021 14:00 A1 7 73.805
In the above dataframe, the threshold was not breached at 11:00, but this count picks up 10:00, 12:00 and 13:00 as the hours that breached the threshold, instead of 12:00, 13:00 and 14:00 as required. Each ID may or may not breach this criteria in a single day. Any idea how I can fix this issue?
Please excuse me if I have misinterpreted your problem. As I understand the issue, you have a dataframe which is updated hourly; an example of this dataframe is illustrated below as df. From this dataframe, you want to filter only those rows which satisfy the following two conditions:
PR_Rate > 50 and CNT_12571 < 30
If and only if the threshold is surpassed for three consecutive hours
Given these assumptions, I would proceed as follows:
df:
Date IDs CT_1257 PR_Rate
0 2021-06-16 10:00:00 A1 15 50.487
1 2021-06-16 12:00:00 A1 31 40.806
2 2021-06-16 14:00:00 A1 25 52.302
3 2021-06-16 15:00:00 A1 13 61.450
4 2021-06-16 16:00:00 A1 7 73.805
Note that in this dataframe, the only time frame which satisfies the above conditions is the run of entries for 14:00, 15:00 and 16:00.
def filterFrame(df, dur, pr_threshold, ct_threshold):
    # keep only the rows breaching both thresholds
    ff = df[(df['CT_1257'] < ct_threshold) & (df['PR_Rate'] > pr_threshold)].reset_index()
    # for each breaching row, count breaching rows in the trailing dur-hour window
    ml = list(ff.rolling(f'{dur}h', on='Date').count()['IDs'])
    r = len(ml) - 1
    rows = []
    # walk backwards, collecting full runs of dur consecutive breaching hours
    while r >= 0:
        if int(ml[r]) < dur:
            r -= 1
        else:
            k = int(ml[r])
            for i in range(k):
                rows.append(r - i)
            r -= k
    rows = rows[::-1]
    return ff.filter(items=rows, axis=0).reset_index()
Running filterFrame(df, 3, 50, 30) yields:
level_0 index Date IDs CT_1257 PR_Rate
0 1 2 2021-06-16 14:00:00 A1 25 52.302
1 2 3 2021-06-16 15:00:00 A1 13 61.450
2 3 4 2021-06-16 16:00:00 A1 7 73.805
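If the hourly rows per ID are gap-free and sorted by Date, a vectorized alternative is run-length labeling: label each run of consecutive breaching rows and keep the runs of at least 3 (a sketch, using the column names from the example above):
mask = (df['PR_Rate'] > 50) & (df['CT_1257'] < 30)
# a new run starts wherever the mask value changes
run_id = (mask != mask.shift()).cumsum()
# length of the run each row belongs to, computed per ID
run_len = mask.groupby([df['IDs'], run_id]).transform('size')
result = df[mask & (run_len >= 3)]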

Pandas changing dates near each other

I have a pandas dataframe with dates and users which looks like this:
date = ['1/2/2020','1/9/2020','1/10/2020','1/17/2020','1/18/2020','1/24/2020','1/25/2020','5/17/2019','5/18/2019','5/24/2019','5/29/2019']
user =['A','B','C','B','A','A','B','C','A','A','B']
df = pd.DataFrame(data={"Date":date, "User":user})
I am trying to find all dates that are next to each other (e.g. Jan 1 and Jan 2) and convert them to a single date, so both entries become the lower of the two. The number of entries is over a million. This data is created from scan results that trigger nightly and sometimes flow over into the next day.
Update-
I want to consolidate the date of the scan so that I can show the visualization properly, as right now the results have more entries on the day the scan starts but very few entries for the day the scan overflowed into. There is a primary date and time stored, so I am not losing the data. The user column comes from scanning a file with all the usernames, and the date stores the date when it was scanned.
So far I was able to read the dataframe and then sort it based on the date to have the entries one after the other.
The output should look like the following -
Is there a Pythonic way of doing this?
One issue to consider is the case of multiple consecutive days and how you want to handle these. The following code sets the day to the first of the consecutive days in each block:
import pandas as pd
from datetime import timedelta
# prepend two dates to show multiple consecutive days "use-case"
date = ['12/31/2019','1/1/2020','1/2/2020','1/9/2020','1/10/2020','1/17/2020','1/18/2020','1/24/2020','1/25/2020','5/17/2019','5/18/2019','5/24/2019','5/29/2019']
user = ['Z','Z','A','B','C','B','A','A','B','C','A','A','B']
df = pd.DataFrame(data={"Date":date, "User":user})
# first convert to datetime to allow date operations
df.Date = pd.to_datetime(df.Date)
# check if the date is one day after the row before (by shifting the Date column)
df['isConsecutive'] = (df.Date == df.Date.shift()+pd.DateOffset(1))
# get number of consecutive days in each block
df['numConsecutive'] = df.isConsecutive.groupby((~df.isConsecutive).cumsum()).cumsum()
# convert to timedelta
df.numConsecutive = df.numConsecutive.apply(lambda x: timedelta(days=x))
# take this as the difference from Date
df['NewDate'] = df.Date - df.numConsecutive
print(df)
This returns:
Date User isConsecutive numConsecutive NewDate
0 2019-12-31 Z False 0 days 2019-12-31
1 2020-01-01 Z True 1 days 2019-12-31
2 2020-01-02 A True 2 days 2019-12-31
3 2020-01-09 B False 0 days 2020-01-09
4 2020-01-10 C True 1 days 2020-01-09
5 2020-01-17 B False 0 days 2020-01-17
6 2020-01-18 A True 1 days 2020-01-17
7 2020-01-24 A False 0 days 2020-01-24
8 2020-01-25 B True 1 days 2020-01-24
9 2019-05-17 C False 0 days 2019-05-17
10 2019-05-18 A True 1 days 2019-05-17
11 2019-05-24 A False 0 days 2019-05-24
12 2019-05-29 B False 0 days 2019-05-29
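An equivalent, more direct way to get NewDate (a sketch, assuming consecutive days appear adjacently as above) is to label each block of consecutive days and take each block's minimum:
# a new block starts unless the row is exactly one day after the previous row
block = (df.Date.diff() != pd.Timedelta(days=1)).cumsum()
df['NewDate'] = df.groupby(block).Date.transform('min')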

How to take only the maximum date value if there are two dates in a week in a dataframe

I have a dataframe called Data:
Date Value Frequency
06/01/2020 256 A
07/01/2020 235 A
14/01/2020 85 Q
16/01/2020 625 Q
22/01/2020 125 Q
Here it is observed that 06/01/2020 and 07/01/2020 are in the same week, i.e. Monday and Tuesday.
Therefore I want to take the maximum date from each week.
My final dataframe should look like this:
Date Value Frequency
07/01/2020 235 A
16/01/2020 625 Q
22/01/2020 125 Q
I want the maximum date from each week, as shown in my final dataframe example.
I am new to Python and have been searching for an answer to this without success. Please help.
First convert the column to datetimes with to_datetime, build the week key with Series.dt.strftime, use DataFrameGroupBy.idxmax to find the row with the maximum datetime per week, and finally select the rows with DataFrame.loc:
df['Date'] = pd.to_datetime(df['Date'], dayfirst=True)
print (df['Date'].dt.strftime('%Y-%U'))
0 2020-01
1 2020-01
2 2020-02
3 2020-02
4 2020-03
Name: Date, dtype: object
df = df.loc[df.groupby(df['Date'].dt.strftime('%Y-%U'))['Date'].idxmax()]
print (df)
Date Value Frequency
1 2020-01-07 235 A
3 2020-01-16 625 Q
4 2020-01-22 125 Q
If the format of the datetimes cannot be changed:
d = pd.to_datetime(df['Date'], dayfirst=True)
df = df.loc[d.groupby(d.dt.strftime('%Y-%U')).idxmax()]
print (df)
Date Value Frequency
1 07/01/2020 235 A
3 16/01/2020 625 Q
4 22/01/2020 125 Q
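Note that %U counts weeks as starting on Sunday; if ISO weeks (starting on Monday) are wanted instead, a sketch using Series.dt.isocalendar():
d = pd.to_datetime(df['Date'], dayfirst=True)
iso = d.dt.isocalendar()
df = df.loc[d.groupby([iso.year, iso.week]).idxmax()]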

How to remove initial rows in a dataframe in Python

I have 4 dataframes with weekly sales values for a year for 4 products. Some of the initial rows are 0 as there were no sales; there are some other 0 values in between the weeks as well.
I want to remove those initial 0 values, keeping the in-between 0s.
For example
Week Sales(prod 1)
1 0
2 0
3 100
4 120
5 55
6 0
7 60
Week Sales(prod 2)
1 0
2 0
3 0
4 120
5 0
6 30
7 60
I want to remove rows 1 and 2 from the first table and rows 1, 2 and 3 from the second.
A few assumptions based on your example dataframe:
The DataFrame is created using pandas
Weeks always start at 1
Only the initial weeks with 0 sales will be removed
Solution:
Python libraries Required
- pandas, more_itertools
Example DataFrame (df):
Week Sales
1 0
2 0
3 0
4 120
5 0
6 30
7 60
Python Code:
import pandas as pd
import more_itertools as mit
filter_col = 'Sales'
filter_val = 0
## function which returns the indexes of the initial zero-sales weeks
def return_initial_week_index_with_zero_sales(df, filter_col, filter_val):
    index_wzs = [False]
    # only proceed if the very first week has 0 sales
    if df[filter_col].iloc[0] == filter_val:
        index_list = df[df[filter_col] == filter_val].index.tolist()
        # group zero-sales weeks into consecutive runs; the first run is the initial block
        index_wzs = [list(group) for group in mit.consecutive_groups(index_list)]
    return index_wzs[0]
## calling the above function and removing those indexes from the dataframe
df = df.set_index('Week')
weeks_to_be_removed = return_initial_week_index_with_zero_sales(df,filter_col,filter_val)
if weeks_to_be_removed:
print('Initial weeks with 0 sales are {}'.format(weeks_to_be_removed))
df = df.drop(index=weeks_to_be_removed)
else:
print('No initial week has 0 sales')
df.reset_index(inplace=True)
Result: df
Week Sales
4 120
5 0
6 30
7 60
I hope this helps; you can modify the function as per your requirements.
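As a side note, if pulling in more_itertools is undesirable, the leading zeros can also be dropped with a pandas-only cumulative mask (a sketch on the same example frame):
# True from the first non-zero sale onward, so in-between zeros are kept
df = df[df['Sales'].ne(0).cummax()]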

Replacing the day number for each value in a data frame column

I'm trying to replace the day number values within a datetime column using the values from another column.
This is my dataframe:
ID Code Day_to_replace Base_date
0 123 403 28 22/02/2013
1 456 402 21 22/03/2011
2 789 401 14 01/05/2017
and this is what I want to end up with:
ID Code Day_to_replace Base_date New_Date
0 123 403 28 22/02/2013 28/02/2013
1 456 402 21 22/03/2011 21/03/2011
2 789 401 14 01/05/2017 14/05/2017
I can do this using a static value but can't work out how to use a value from another column to apply to each record.
newdf['New_Date'] = newdf['Base_Date'].apply(lambda x: x.replace(day=1))
Thanks
First convert values to datetimes:
df['Base_date'] = pd.to_datetime(df['Base_date'], format='%d/%m/%Y')
Use DataFrame.apply with axis=1 to loop over the rows:
df['New_Date'] = df.apply(lambda x: x['Base_date'].replace(day=x['Day_to_replace']), axis=1)
Or convert the datetimes to month periods and back to get the first day of each month, then add day timedeltas built by to_timedelta from Day_to_replace minus 1:
df['New_Date'] = (df['Base_date'].dt.to_period('m').dt.to_timestamp() +
pd.to_timedelta(df['Day_to_replace'].sub(1), unit='d'))
Or convert values to strings, add days and convert to datetimes:
df['New_Date'] = pd.to_datetime(df['Base_date'].dt.strftime('%Y-%m-') +
df['Day_to_replace'].astype(str))
print (df)
ID Code Day_to_replace Base_date New_Date
0 123 403 28 2013-02-22 2013-02-28
1 456 402 21 2011-03-22 2011-03-21
2 789 401 14 2017-05-01 2017-05-14
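One caveat not covered by the question's sample (an assumption worth guarding against): Timestamp.replace raises ValueError when Day_to_replace exceeds the length of the month, e.g. day 30 in February. A sketch that clips to the month's last day:
df['New_Date'] = df.apply(lambda x: x['Base_date'].replace(day=int(min(x['Day_to_replace'], x['Base_date'].days_in_month))), axis=1)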
