I have a pandas dataframe which gets updated every hour with the latest hourly data. I have to filter out IDs that breach a threshold, i.e. PR_Rate > 50 and CNT_12571 < 30, for 3 consecutive hours within a lookback period of 5 hours. I was using the statements below to accomplish this:
df_thld = df[(df['Date'] > df['Date'].max() - pd.Timedelta(hours=5)) & (df.PR_Rate > 50) & (df.CNT_12571 < 30)]
df_thld.loc[:, 'HR_CNT'] = df_thld.groupby('ID')['Date'].transform('nunique')
df_thld[df_thld['HR_CNT'] >= 3]
The problem with this approach is that, since the lookback period is 5 hours, HR_CNT can count non-consecutive hours that breach the criteria.
My dataset is as below:
DataFrame
Date IDs CT_12571 PR_Rate
16/06/2021 10:00 A1 15 50.487
16/06/2021 11:00 A1 31 40.806
16/06/2021 12:00 A1 25 52.302
16/06/2021 13:00 A1 13 61.45
16/06/2021 14:00 A1 7 73.805
In the above DataFrame, the threshold was not breached at 11:00, but the count picks up 10:00, 12:00 and 13:00 as the hours that breached the threshold instead of 12:00, 13:00 and 14:00 as required. Each ID may or may not breach this criteria in a single day. Any idea how I can fix this issue?
Please excuse me if I have misinterpreted your problem. As I understand the issue, you have a dataframe which is updated hourly. An example of this dataframe is illustrated below as df. From this dataframe, you want to keep only those rows which satisfy the following two conditions:
PR_Rate > 50 and CNT_12571 < 30
If and only if the threshold is surpassed for three consecutive hours
Given these assumptions, I would proceed as follows:
df:
Date IDs CT_1257 PR_Rate
0 2021-06-16 10:00:00 A1 15 50.487
1 2021-06-16 12:00:00 A1 31 40.806
2 2021-06-16 14:00:00 A1 25 52.302
3 2021-06-16 15:00:00 A1 13 61.450
4 2021-06-16 16:00:00 A1 7 73.805
Note that in this dataframe, the only time frame which satisfies the above conditions is the set of entries for 14:00, 15:00 and 16:00.
def filterFrame(df, dur, pr_threshold, ct_threshold):
    # keep only the rows that breach both thresholds
    ff = df[(df['CT_1257'] < ct_threshold) & (df['PR_Rate'] > pr_threshold)].reset_index()
    # for every breaching row, count the breaching rows inside the trailing dur-hour window
    ml = list(ff.rolling(f'{dur}h', on='Date').count()['IDs'])
    r = len(ml) - 1
    rows = []
    # walk backwards; whenever a window holds dur rows, keep that whole run
    while r >= 0:
        if int(ml[r]) < dur:
            r -= 1
        else:
            k = int(ml[r])
            for i in range(k):
                rows.append(r - i)
            r -= k
    rows = rows[::-1]
    return ff.filter(items=rows, axis=0).reset_index()
Running filterFrame(df, 3, 50, 30) yields:
level_0 index Date IDs CT_1257 PR_Rate
0 1 2 2021-06-16 14:00:00 A1 25 52.302
1 2 3 2021-06-16 15:00:00 A1 13 61.450
2 3 4 2021-06-16 16:00:00 A1 7 73.805
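The function above treats the frame as a single series of hourly rows; if several IDs report within the same window, a per-ID variant along the following lines might help. This is only a sketch under my own assumptions: the column names follow the example df above, each ID has at most one row per hour, and the rolling-flag approach is mine rather than part of the question.
import pandas as pd

def consecutive_breach(df, window_hours=5, run_length=3, pr_threshold=50, ct_threshold=30):
    # restrict to the lookback window that ends at the latest timestamp
    recent = df[df['Date'] > df['Date'].max() - pd.Timedelta(hours=window_hours)].copy()
    recent = recent.sort_values(['IDs', 'Date'])
    # flag rows breaching both thresholds
    breach = ((recent['PR_Rate'] > pr_threshold) & (recent['CT_1257'] < ct_threshold)).astype(int)
    # rolling sum of the flag over run_length rows within each ID; it equals
    # run_length only when the last run_length hourly rows all breached
    runs = (breach.groupby(recent['IDs'])
                  .rolling(run_length).sum()
                  .reset_index(level=0, drop=True)
                  .reindex(recent.index))
    flagged = recent.loc[runs == run_length, 'IDs'].unique()
    return recent[recent['IDs'].isin(flagged)]
The result keeps every in-window row for the IDs that had a qualifying run; the final selection can be tightened if only the breaching rows themselves are wanted.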
Related
I am using a csv with an accumulative number that changes daily.
Day Accumulative Number
0 9/1/2020 100
1 11/1/2020 102
2 18/1/2020 98
3 11/2/2020 105
4 24/2/2020 95
5 6/3/2020 120
6 13/3/2020 100
I am now trying to find the best way to aggregate it and compare the monthly results before a specific date. I want to check the balance on the 11th of each month, but for some months there is no activity on that specific day. As a result, I am trying to get the latest day before the 12th of each month. So, the above would become:
Day Accumulative Number
0 11/1/2020 102
1 11/2/2020 105
2 6/3/2020 120
What I managed to do so far is to just get the latest day of each month:
from datetime import datetime

dateparse = lambda x: datetime.strptime(x, "%d/%m/%Y")
df = pd.read_csv("Accumulative.csv", quotechar="'", usecols=["Day", "Accumulative Number"], index_col=False, parse_dates=["Day"], date_parser=dateparse, na_values=['.', '??'])
df.index = df['Day']
grouped = df.groupby(pd.Grouper(freq='M')).sum()
print (df.groupby(df.index.month).apply(lambda x: x.iloc[-1]))
which returns:
Day Accumulative Number
1 2020-01-18 98
2 2020-02-24 95
3 2020-03-13 100
Is there a way to achieve this in pandas/Python, or do I have to use SQL logic in my script? Is there an easier way I am missing in order to get the "balance" as of the 11th day of each month?
You can do a groupby with factorize:
n = 12
df = df.sort_values('Day')
m = df.groupby(df.Day.dt.strftime('%Y-%m')).Day.transform(lambda x: x.factorize()[0]) == n
df_sub = df[m].copy()
You can try filtering the dataframe to rows where the day is less than 12, then take the last of each group (grouped by year and month):
df['Day'] = pd.to_datetime(df['Day'], dayfirst=True)
(df[df['Day'].dt.day.lt(12)]
   .groupby([df['Day'].dt.year, df['Day'].dt.month], sort=False).last()
   .reset_index(drop=True))
Day Accumulative_Number
0 2020-01-11 102
1 2020-02-11 105
2 2020-03-06 120
I would try:
# convert to datetime type:
df['Day'] = pd.to_datetime(df['Day'], dayfirst=True)
# select day before the 12th
new_df = df[df['Day'].dt.day < 12]
# select the last day in each month
new_df.loc[~new_df['Day'].dt.to_period('M').duplicated(keep='last')]
Output:
Day Accumulative Number
1 2020-01-11 102
3 2020-02-11 105
5 2020-03-06 120
Here's another way, by expanding the date range:
# set as datetime
df2['Day'] = pd.to_datetime(df2['Day'], dayfirst=True)
# set as index
df2 = df2.set_index('Day')
# make a list of all dates
dates = pd.date_range(start=df2.index.min(), end=df2.index.max(), freq='1D')
# add dates
df2 = df2.reindex(dates)
# replace NA with forward fill
df2['Number'] = df2['Number'].ffill()
# filter to get output
df2 = df2[df2.index.day == 11].reset_index().rename(columns={'index': 'Date'})
print(df2)
Date Number
0 2020-01-11 102.0
1 2020-02-11 105.0
2 2020-03-11 120.0
I have a dataframe called Data:
Date Value Frequency
06/01/2020 256 A
07/01/2020 235 A
14/01/2020 85 Q
16/01/2020 625 Q
22/01/2020 125 Q
Here it is observed that 06/01/2020 and 07/01/2020 are in the same week, i.e. Monday and Tuesday.
Therefore I want to take the maximum date from each week.
My final dataframe should look like this:
Date Value Frequency
07/01/2020 235 A
16/01/2020 625 Q
22/01/2020 125 Q
I want the maximum date from each week, as shown in my final dataframe example.
I am new to Python and I have been searching for an answer to this without finding one so far. Please help.
First convert the column to datetimes with to_datetime, group the rows by week using Series.dt.strftime, get the row with the maximum datetime per week with DataFrameGroupBy.idxmax, and finally select those rows with DataFrame.loc:
df['Date'] = pd.to_datetime(df['Date'], dayfirst=True)
print (df['Date'].dt.strftime('%Y-%U'))
0 2020-01
1 2020-01
2 2020-02
3 2020-02
4 2020-03
Name: Date, dtype: object
df = df.loc[df.groupby(df['Date'].dt.strftime('%Y-%U'))['Date'].idxmax()]
print (df)
Date Value Frequency
1 2020-01-07 235 A
3 2020-01-16 625 Q
4 2020-01-22 125 Q
If format of datetimes cannot be changed:
d = pd.to_datetime(df['Date'], dayfirst=True)
df = df.loc[d.groupby(d.dt.strftime('%Y-%U')).idxmax()]
print (df)
Date Value Frequency
1 07/01/2020 235 A
3 16/01/2020 625 Q
4 22/01/2020 125 Q
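Note that the %U format numbers weeks starting on Sunday. If ISO weeks (starting on Monday) are wanted instead, a variant along these lines should work (a sketch, assuming pandas 1.1+ for Series.dt.isocalendar):
d = pd.to_datetime(df['Date'], dayfirst=True)
iso = d.dt.isocalendar()  # DataFrame with year / week / day columns
df = df.loc[d.groupby([iso['year'], iso['week']]).idxmax()]
For the sample data both conventions give the same grouping, since 06/01/2020 and 07/01/2020 are a Monday and a Tuesday of the same week.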
I have a very large CSV file, like the one given below, which I opened as a dataframe using pandas. I want to extract data from multiple columns for different date ranges.
I want to select from one particular date and hour to another for the last 3 column values. The slicing options I tried and found by googling were for a single column.
date heure PM10 NO2 O3
0 01/01/2016 1 27 22 36
1 01/01/2016 2 25 29 27
2 01/01/2016 3 26 47 10
3 01/01/2016 4 16 40 13
4 01/01/2016 5 15 34 13
5 02/01/2016 1 15 34 13
6 02/01/2016 2 15 34 13
Target output, taking data from one particular date and hour to another:
3 01/01/2016 4 16
4 01/01/2016 5 15
Thank you. The data set is obviously much bigger than the sample shown.
You can do this:
df_selected = df[(df.date >= "01/01/2016") &
                 (df['heure'] >= 4) &
                 (df.date < "02/01/2016") &
                 (df['heure'] < 6)
                 ].iloc[:, :3]  # first three columns
Alternatively, for the column selection you can use .loc[:, ['name', 'of', 'columns']], or .iloc[:, -n:] for the last n columns.
Be careful with date: the comparison above is a plain string comparison, and with day-first dates like these the lexicographic order does not match the chronological order, so you may have to convert the column first using df['date'] = pd.to_datetime(df.date, dayfirst=True).
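For example, a datetime-based selection might look like this (a sketch; the exact bounds and the PM10 column are only illustrative):
import pandas as pd

df['date'] = pd.to_datetime(df['date'], dayfirst=True)
start, end = pd.Timestamp('2016-01-01'), pd.Timestamp('2016-01-02')
# rows from the start date up to (but excluding) the end date, hours 4 and 5
mask = df['date'].ge(start) & df['date'].lt(end) & df['heure'].between(4, 5)
df_selected = df.loc[mask, ['date', 'heure', 'PM10']]
This keeps the window selection correct regardless of how the dates happen to sort as strings.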
I have the two following dataframes that I want to merge.
df1:
id time station
0 a 22.08.2017 12:00:00 A1
1 b 22.08.2017 12:00:00 A3
2 a 22.08.2017 13:00:00 A2
...
pivot:
station A1 A2 A3
0 time
1 22.08.2017 12:00:00 10 12 11
2 22.08.2017 13:00:00 9 7 3
3 22.08.2017 14:00:00 2 3 4
4 22.08.2017 15:00:00 3 2 7
...
it should look like:
merge:
id time station value
0 a 22.08.2017 12:00:00 A1 10
1 b 22.08.2017 12:00:00 A3 11
2 a 22.08.2017 13:00:00 A2 7
...
Now I want to add a column to the data frame with the right value from the pivot table. I failed at including the column labels in the merge.
I constructed something like this, but it does not work:
merge = pd.merge(df1, pivot, how="left", left_on=["time", "station"], right_on=["station", pivot.columns])
Any help?
EDIT:
As advised, instead of the pivot table I tried to use the following data:
df2:
time station value
22.08.2017 12:00:00 A1 10
22.08.2017 12:00:00 A2 12
22.08.2017 12:00:00 A3 11
...
22.08.2017 13:00:00 A1 9
22.08.2017 13:00:00 A2 7
22.08.2017 13:00:00 A3 3
The table contains about 1,300 different stations for every timestamp. All in all I have more than 115,000,000 rows. My df1 has 5,000,000 rows.
I tried to merge df1.head(100) and df2, but in the result all values are NaN. For the merge I used this:
merge = pd.merge(df1.head(100), df2, how="left", on=["time", "station"])
Another problem is that the merge takes a few minutes so that I expect the whole df1 will take several days.
I guess you got the dataframe pivot using either pivot or pivot_table in pandas; if you can perform the merge using the dataframe you had before the pivot, it should work just fine.
Otherwise you will have to reverse the pivot using melt before merging:
melt = pd.concat([pivot[['time']], pivot[['A1']].melt()], axis=1)
melt = pd.concat([melt, pd.concat([pivot[['time']], pivot[['A2']].melt()], axis=1)])
melt = pd.concat([melt, pd.concat([pivot[['time']], pivot[['A3']].melt()], axis=1)])
melt.columns = ['time', 'station', 'value']
Then just perform the merge as you expected:
my_df.merge(melt, on=['time', 'station'])
id time station value
0 a time1 A1 10
1 b time1 A3 11
2 a time2 A2 7
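A more compact way to undo the pivot, assuming time is a regular column of pivot rather than its index (a sketch):
melt = pivot.melt(id_vars='time', var_name='station', value_name='value')
merge = my_df.merge(melt, on=['time', 'station'], how='left')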
EDIT:
If your dataframes are as big as in your edit, you indeed have to perform the merge on chunks of them. You could try chunking both of your dataframes.
First, sort your df1 so that each chunk only contains close values of time:
df1.sort_values('time', inplace=True)
Then chunk it, chunk the second dataframe so that you are sure to have all the rows you might need, and merge those chunks:
chunk1 = df1.head(100)
chunk2 = df2.loc[df2.time.between(chunk1.time.min(), chunk1.time.max())]
merge = chunk1.merge(chunk2, on=['time', 'station'], how='left')
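To process all of df1 this way, one could loop over fixed-size chunks and concatenate the partial merges (a sketch; the chunk size of 100,000 is arbitrary):
import pandas as pd

chunk_size = 100_000
pieces = []
for start in range(0, len(df1), chunk_size):
    chunk1 = df1.iloc[start:start + chunk_size]
    # restrict df2 to the time range covered by this chunk before merging
    chunk2 = df2.loc[df2.time.between(chunk1.time.min(), chunk1.time.max())]
    pieces.append(chunk1.merge(chunk2, on=['time', 'station'], how='left'))
merge = pd.concat(pieces, ignore_index=True)
Because df1 is sorted by time, each chunk covers a narrow time range, so the matching slice of df2 stays small.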
I want to count how many vehicles are delayed by more than 4 minutes on a given day within a given departure window (let's assume from 00:00 to 05:00).
This is a sample of the data:
A B C D
1 Line Day Departure Delayed (sec)
2 11 Weekday 02:30:00 120
3 11 Weekday 03:40:00 500
4 22 Weekday 01:45:00 10
5 44 Weekday 06:44:00 1000
6 55 Weekday 04:35:00 145
7 111 Saturday 14:40:00 450
8 111 Saturday 04:20:00 300
9 111 Saturday 20:20:00 220
10 111 Saturday 07:00:00 125
11 333 Sunday 09:15:00 700
I used a "TÆL.HVISER" function (Danish) or COUNT.IFS function to count the vehicles:
=TÆL.HVISER(A2:A11;"11";B2:B11;"Weekday";C2:C11;00:00:00>C2:C11>05:00:00;D2:D11;">240")
But it is not working. When I split the restriction into four separate restrictions, each of them works individually, but the combination does not.
I've laid out your data according to how I read your sample formula.
The EN-US formula in G4 is,
=COUNTIFS($A$2:$A$11, G$3, $B$2:$B$11, $F4, $C$2:$C$11, ">="&TIME(0, 0, 0), $C$2:$C$11, "<="&TIME(5, 0, 0), $D$2:$D$11, ">="&240)
Fill both right and down. I've used the TIME function so that a) real times can be referenced and b) it is easier to change to new values.
The TÆL.HVISER (COUNTIFS) function
The TID (TIME) function
The problem is the part
00:00:00>C2:C11>05:00:00
COUNTIFS does not accept a chained comparison like this. If you change it to two separate criteria like this
C2:C11;">00:00:00";C2:C11;"<05:00:00"
it will work. Here is the full formula:
=COUNTIFS(A2:A11;"11";B2:B11;"Weekday";C2:C11;">00:00:00";C2:C11;"<05:00:00";D2:D11;">240")