How to Vectorise index insertion in Python? - python-3.x

I have a problem with a very large data set that I can finish in Excel in less than a minute, but it takes far too long in Python.
Objective: To give each row an ID based on information in column X and Y of the data set.
In Excel:
Initialize counter to 1.
For each row i:
    If both X = 0 and Y = 0, set row ID = counter, then counter += 1.
    Else set row ID = the ID of the previous row.
Next i
My pandas DataFrame is large, and doing this in a Python for loop takes more than an hour. I don't know how to vectorise the problem to avoid the loop.
Hope someone can help me.
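For reference, the loop being described would look roughly like this (a sketch, assuming a DataFrame df whose columns are literally named X and Y):
import pandas as pd

# slow row-by-row version of the Excel logic (illustration only)
counter = 1
ids = []
for x, y in zip(df['X'], df['Y']):
    if x == 0 and y == 0:
        ids.append(counter)
        counter += 1
    else:
        # carry the previous row's ID forward; the first row falls back to 1,
        # matching the fillna(1) in the answer below
        ids.append(ids[-1] if ids else 1)
df['row_id'] = ids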

To find an efficient Pandas solution, you should rephrase your problem. Your counter is essentially the number of preceding all-zero rows, plus 1:
import pandas as pd

df = pd.DataFrame({'X': [0, 2, 1, 0, 0, 1, 2, 0],
                   'Y': [0, 2, 1, 3, 0, 0, 1, 2]})
# count the all-zero rows seen so far, shift so the current row is excluded,
# then start the counter at 1 (the first row has no predecessor, hence fillna)
df['counter'] = (((df.X == 0) & (df.Y == 0)).cumsum().shift() + 1)\
                .fillna(1).astype(int)
# X Y counter
#0 0 0 1
#1 2 2 2
#2 1 1 2
#3 0 3 2
#4 0 0 2
#5 1 0 3
#6 2 1 3
#7 0 2 3
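If more columns ever have to be all zero at once, an equivalent mask scales better, and shift(fill_value=0) avoids the fillna/astype round-trip (a sketch of the same idea):
mask = df[['X', 'Y']].eq(0).all(axis=1)                   # True where every listed column is 0
df['counter'] = mask.cumsum().shift(fill_value=0).add(1)  # stays integer throughout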

Related

Compare current value with n values above and below on Pandas DataFrame

I have this df:
x
0 2
1 2
2 2
3 1
4 1
5 2
6 2
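For reference, that frame can be built like this (reconstructed from the values shown, so the snippets below are runnable):
import pandas as pd

df = pd.DataFrame({'x': [2, 2, 2, 1, 1, 2, 2]})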
I need to compare the current value in column x with the n previous and n next values based on a defined condition; if the condition is met at least q times, add 1 in a new column, otherwise add 0.
For instance, say n is 2, q is 3 and the condition is current_value <= value / 2. In this case the code performs 7 comparisons:
1st comparison: compare current_value = 2 to the previous n = 2 numbers (there are none, since it is the first value in the column), then to the next n = 2 values (both are 2, so 2 <= 2/2 fails for both). 0 conditions are met; since 0 < q = 3, the code adds 0 to the new column.
2nd comparison: compare current_value = 2 to the previous n = 2 numbers (there is just one number above, and 2 <= 2/2 fails), then to the next n = 2 values (a 2 and then a 1; 2 <= 2/2 and 2 <= 1/2 both fail). 0 conditions are met; since 0 < q = 3, the code adds 0.
3rd comparison: again 0 conditions are met, so the code adds 0.
4th comparison: compare current_value = 1 to the previous n = 2 numbers (two 2s above, and 1 <= 2/2 holds for both), then to the next n = 2 values (a 1 and then a 2; 1 <= 1/2 fails but 1 <= 2/2 holds, so one more match). 3 conditions are met; since 3 >= q = 3, the code adds 1.
5th comparison: 3 conditions are met; since 3 >= q = 3, the code adds 1.
6th comparison: 0 conditions are met, so the code adds 0.
7th comparison: 0 conditions are met, so the code adds 0.
Desired result:
x comparison
0 2 0
1 2 0
2 2 0
3 1 1
4 1 1
5 2 0
6 2 0
I was thinking of using something like the shift function, but I'm not sure how to implement it. Any help?
I suggest using numpy here, to benefit from its sliding window view:
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view as swv

n = 2
q = 3

# convert to a float numpy array (float so the edges can be padded with NaN)
a = df['x'].astype(float).to_numpy()

# pad with NaN, build sliding windows of width 2*n+1, drop the central column;
# NaN neighbours compare as False, so the edges never count as matches
neighbours = swv(np.pad(a, n, constant_values=np.nan), 2*n + 1)[:, np.r_[:n, n+1:2*n + 1]]

# divide the neighbours by 2, compare to the original value, count matches per row
count = (a[:, None] <= neighbours / 2).sum(axis=1)
# array([0, 0, 0, 3, 3, 0, 0])

# compare the number of matches to q
df['comparison'] = (count >= q).astype(int)
print(df)
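Continuing from the block above, printing the intermediate neighbours array shows what the window trick produces before the comparison (NaN marks positions beyond the array edges):
print(neighbours)
# [[nan nan  2.  2.]
#  [nan  2.  2.  1.]
#  [ 2.  2.  1.  1.]
#  [ 2.  2.  1.  2.]
#  [ 2.  1.  2.  2.]
#  [ 1.  1.  2. nan]
#  [ 1.  2. nan nan]]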
An alternative with only pandas requires computing two rolling windows (one forward, one backward), as it's not trivial to access the current index in a centred rolling window with min_periods=1:
n = 2
q = 3

# count matches among the n previous values (the window's last element is the current one)
s1 = df['x'].rolling(n + 1, min_periods=2).apply(lambda x: sum(x.iloc[-1] <= x.iloc[:-1] / 2))
# the same on the reversed series counts matches among the n next values
s2 = df.loc[::-1, 'x'].rolling(n + 1, min_periods=2).apply(lambda x: sum(x.iloc[-1] <= x.iloc[:-1] / 2))

df['comparison'] = s1.add(s2, fill_value=0).ge(q).astype(int)
Output:
x comparison
0 2 0
1 2 0
2 2 0
3 1 1
4 1 1
5 2 0
6 2 0
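If the rolling version is too slow on a large frame, raw=True hands each window to the callable as a plain numpy array instead of a Series, which typically speeds up apply considerably (a sketch of the same computation; the speedup is an expectation, not a measurement):
f = lambda a: (a[-1] <= a[:-1] / 2).sum()  # a is a numpy array when raw=True
s1 = df['x'].rolling(n + 1, min_periods=2).apply(f, raw=True)
s2 = df.loc[::-1, 'x'].rolling(n + 1, min_periods=2).apply(f, raw=True)
df['comparison'] = s1.add(s2, fill_value=0).ge(q).astype(int)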

How to add a field in Python that returns a value based on other columns?

I want to create a new field in Python based on the values of other fields. For example, column Pop2020 has values ranging from 0-10000. If a value is between 0-200, I want the new field to say "0-200"; if it is between 201-500, return "201-500", and so on. I tried the statement below, but it only returns True or False values. I thought maybe I should use the append function, but I'm not sure what would work to get what I need.
excl['TestFlag'] = np.where((excl['Pop2020'] > 0) & (excl['Pop2020'] < 201), True, False)
I hope the following example may help you. You can use pandas.Series.apply, which takes a callable and applies it element-wise to a column:
import pandas as pd

def f(x):
    if 10 < x:
        return 'a'
    elif 5 < x <= 10:
        return 'b'
    else:
        return 'c'

df = pd.DataFrame({"num1": [1, 10, 11, 2, 6, 4, 8], "num2": [0, 1, 0, 1, 2, 0, 7]})
df["new_col"] = df["num1"].apply(f)
# num1 num2 new_col
#0 1 0 c
#1 10 1 b
#2 11 0 a
#3 2 1 c
#4 6 2 b
#5 4 0 c
#6 8 7 b
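For range labels like those in the question, pandas.cut is a natural fit: it bins a numeric column and labels the bins in one call. A sketch, assuming the column name Pop2020 and a couple of illustrative bin edges:
import pandas as pd

bins = [0, 200, 500, 10000]                 # extend with the real edges
labels = ['0-200', '201-500', '501-10000']  # one label per bin
excl['TestFlag'] = pd.cut(excl['Pop2020'], bins=bins, labels=labels, include_lowest=True)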

How to extract data from a data frame when the value of a column changes

I want to extract part of the data frame when the value changes from 0 to 1.
logic1: when the value changes from 0 to 1, start saving data until the value changes back to 0 (also keep the points just before and just after the block of 1s).
logic2: when the value changes from 0 to 1, start saving data until the value changes back to 0 (don't keep the points before and after the block of 1s).
Only save data the first time the flag changes from 0 to 1; if the value later changes from 0 to 1 again, do nothing.
df=pd.DataFrame({'value':[3,4,7,8,11,1,15,20,15,16,87],'flag':[0,0,0,1,1,1,0,0,1,1,0]})
Desired output for logic1:
df_out_1 = pd.DataFrame({'value': [7, 8, 11, 1, 15]})
Desired output for logic2:
df_out_2 = pd.DataFrame({'value': [8, 11, 1]})
The idea is to label consecutive runs of 0s and 1s in s, keep only the runs of 1s, and select the first of them by comparing against the minimal group label:
df = df.reset_index(drop=True)
# give each consecutive run of equal flag values its own group number
s = df['flag'].ne(df['flag'].shift()).cumsum()
# keep only the first run where flag == 1 (the smallest group number among the 1s)
m = s.eq(s[df['flag'].eq(1)].min())
df2 = df.loc[m, ['value']]
print (df2)
value
3 8
4 11
5 1
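Continuing from the snippet above, the group labels make it clear why the mask picks exactly those rows:
print(s.tolist())
# [1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 5]  <- the first run of 1s is group 2 (rows 3-5)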
For the first output, shift the selected index by +1 and by -1 and take the union, so the rows immediately before and after the block are included as well (this relies on the default RangeIndex):
df1 = df.loc[(df2.index + 1).union(df2.index - 1), ['value']]
print (df1)
value
2 7
3 8
4 11
5 1
6 15

Removing outliers using percentiles in a pandas dataframe groupby

I have dataframe df
Transportation_Mode time_delta trip_id segmentid Vincenty_distance velocity acceleration jerk
walk 1 1 1 1.551676553 1.551676553 0.550163852 -1.017629555
walk 1 1 1 1.70920675 1.70920675 0.16257622 -0.39166534
walk 1 1 1 1.871782971 1.871782971 -0.22908912 -0.734438511
walk 12 1 1 23.16466284 1.93038857 0.324972586 -0.331839143
walk 1 1 1 5.830059603 5.830059603 -3.657097132 2.614438854
bus 1 16 5 8.418372046 8.418372046 -7.259019484 7.40735053
bus 23 16 5 26.66510892 1.159352562 0.148331046 -0.036318522
bus 1 16 5 4.570966614 4.570966614 -0.68699497 -0.889126918
I want to remove outlier values within each group of Transportation_Mode based on percentile values [0.05,0.95]
My problem is similar to discussion Remove outliers in Pandas dataframe with groupby
The code I write is
res = df.groupby("Transportation_Mode")["Vincenty_distance"].quantile([0.05, 0.95]).unstack(level=1)
df.loc[ (res.loc[ df.Transportation_Mode, 0.05] < df.Vincenty_distance.values) & (df.Vincenty_distance.values < res.loc[df.Transportation_Mode, 0.95]) ]
but I get the error, ValueError: cannot reindex from a duplicate axis. I don't know where I am wrong here.
Complete input data is available at the link https://drive.google.com/file/d/1JjvS7igTmrtLA4E5Rs5D6tsdAXqzpYqX/view?usp=sharing
Actually, if we look at
(res.loc[df.Transportation_Mode, 0.05] < df.Vincenty_distance.values) & (df.Vincenty_distance.values < res.loc[df.Transportation_Mode, 0.95])
it already returns a boolean Series that could select rows in the original df. The problem is that its index comes from the (duplicated) Transportation_Mode labels, so aligning it with df's index raises the duplicate-axis error. Adding .values drops that mismatched index before the mask is passed to df.loc[]. The following should work:
df.loc[((res.loc[df.Transportation_Mode, 0.05] < df.Vincenty_distance.values) & (df.Vincenty_distance.values < res.loc[df.Transportation_Mode, 0.95])).values]
Alternatively, use map to build bound Series with the same size and index as the original DataFrame, so plain boolean filtering is possible:
# per-row lower and upper bounds, looked up from the group quantiles in res
m1 = df.Transportation_Mode.map(res[0.05]) < df.Vincenty_distance
m2 = df.Vincenty_distance < df.Transportation_Mode.map(res[0.95])
df = df[m1 & m2]
print (df)
Transportation_Mode time_delta trip_id segmentid Vincenty_distance \
1 walk 1 1 1 1.709207
2 walk 1 1 1 1.871783
4 walk 1 1 1 5.830060
5 bus 1 16 5 8.418372
velocity acceleration jerk
1 1.709207 0.162576 -0.391665
2 1.871783 -0.229089 -0.734439
4 5.830060 -3.657097 2.614439
5 8.418372 -7.259019 7.407351
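A third option sidesteps the alignment problem entirely: groupby.transform returns Series already aligned with df, so the bounds can be compared directly (a sketch; between with inclusive='neither' keeps the strict inequalities and requires pandas >= 1.3):
g = df.groupby('Transportation_Mode')['Vincenty_distance']
lo = g.transform(lambda s: s.quantile(0.05))  # per-row 5th-percentile bound
hi = g.transform(lambda s: s.quantile(0.95))  # per-row 95th-percentile bound
df = df[df['Vincenty_distance'].between(lo, hi, inclusive='neither')]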

Index Value of Last Matching Row in a Python Pandas DataFrame

I have a dataframe with a value of either 0 or 1 in "C2" and either 0 or 1 in "C1". I would like to add a column containing, for each row where C2 = 1, the index of the last row where C1 = 1. This might be easier to see than to read:
d = {'C1' : pd.Series([1, 0, 1,0,0], index=[1,2,3,4,5]),'C2' : pd.Series([0, 0,0,1,1], index=[1,2,3,4,5])}
df = pd.DataFrame(d)
print(df)
C1 C2
1 1 0
2 0 0
3 1 0
4 0 1
5 0 1
#I've left out my attempts as they don't even get close
df['C3'] = IF C2 = 1: Call Function that gives Index Value of last place where C1 = 1 Else 0 End
This would result in this result set:
C1 C2 C3
1 1 0 0
2 0 0 0
3 1 0 0
4 0 1 3
5 0 1 3
I was trying to get a function to do this, as there are roughly 2 million rows in my data set but only ~10k where C2 = 1.
Thank you in advance for any help, I really appreciate it - I only started programming with Python a few weeks ago.
It is not so straightforward; you have to chain a few steps to get this result. The key here is ffill (forward filling, also available through fillna), which propagates the last observation forwards.
It is often the case that pandas methods do more than one thing, which makes it very hard to figure out which methods to use for what.
So let me talk you through this code.
First we need to set C3 to NaN, otherwise we cannot forward-fill later.
Then we set C3 to the index, but only where C1 == 1 (the mask does this).
After this we can use ffill to propagate the last observation forwards.
Then we have to mask away all the values where C2 == 0, the same way we set the index earlier, with a mask.
import numpy as np

df['C3'] = np.nan                           # pd.np was removed in modern pandas
mask = df['C1'] == 1
df.loc[mask, 'C3'] = df.index[mask].copy()  # .loc[mask, 'C3'] avoids chained assignment
df['C3'] = df['C3'].ffill()                 # fillna(method='ffill') is deprecated
mask = df['C2'] == 0
df.loc[mask, 'C3'] = 0
df['C3'] = df['C3'].astype(int)             # safe here: every row received a value
df
C1 C2 C3
1 1 0 0
2 0 0 0
3 1 0 0
4 0 1 3
5 0 1 3
EDIT:
Added a .copy() to the index, otherwise we overwrite it and the index gets all full of zeroes.
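A compact variant of the same idea, keeping the index only where C1 == 1, forward-filling, and then zeroing where C2 == 0, might look like this (a sketch; it assumes, as in the example, that some C1 == 1 row precedes the first C2 == 1 row, so no NaN survives the fill):
import numpy as np

s = df.index.to_series().where(df['C1'] == 1).ffill()  # last index at which C1 == 1
df['C3'] = np.where(df['C2'] == 1, s, 0).astype(int)   # keep it only where C2 == 1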
