Pandas rolling idxmax for True/False rows? - python-3.x

I'm keeping score in a True/False column when determining whether some signal is below the background level, so for example
sig  bg  is_below
  5   3  False
  5   3  False
  5   3  False
  2   3  True   # "False positive"
  4   3  False
  4   3  False
  0   3  True   # Signal is dead and not returning from this point onwards
  0   3  True
  0   3  True
  0   3  True
  0   3  True
But as shown above, noise sometimes generates "false positives", and smoothing the data doesn't remove the bigger spikes without oversmoothing the smaller features. I'm sure there's a proper mathematical way to handle this, but that would probably be overkill in both effort and computational cost.
Instead, how do I determine the index of the first True in a run where True appears, e.g., 3 times in a row?

Okay, so I just remembered that True/False could just as easily be interpreted as 1/0, and so a rolling median, e.g.
scipy.signal.medfilt(df["is_below"], kernel_size = 5).argmax()
would return the index of the first window like [False, False, True, True, True] it encounters, since the median of [0, 0, 1, 1, 1] is 1 and a kernel size of 5 is the smallest that requires 3 Trues before the median flips.
I don't know if there is an even better way, but given that I have 100s of datapoints in my timeseries, the returned argmax index is accurate enough for my application.
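For reference, a pandas-only alternative along the same lines (a sketch, not from the original post: it counts the Trues in a rolling window; first_run_start and n are names chosen here for illustration):
import pandas as pd

def first_run_start(is_below: pd.Series, n: int = 3):
    """Index of the first row that starts a run of at least n consecutive Trues."""
    run = is_below.astype(int).rolling(n).sum()   # Trues counted as 1s over the last n rows
    hits = run[run == n]                          # windows made up entirely of Trues
    if hits.empty:
        return None
    return hits.index[0] - (n - 1)                # assumes a default RangeIndex; step back to the run start

# first_run_start(df["is_below"]) -> 6 for the example data above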

If your data is in a pandas dataframe (say called df), you can do it by creating a boolean series b which is True at a row only when that row and the next two rows are True in df.is_below.
b = ((df.is_below == True) & (df.is_below.shift(-1) == True) & (df.is_below.shift(-2) == True))
Here, df.is_below.shift(-1) shifts the whole column up by 1, so at each row we are comparing against the next row's value (and similarly shift(-2) looks two rows ahead).
Full code below:
import pandas as pd
# Create dataframe
df = pd.DataFrame()
sig = [5, 5, 5, 2, 4, 4, 0, 0, 0, 0, 0]
df['sig'] = sig
df['bg'] = [3] * len(sig)
df['is_below'] = df.sig < df.bg
# Find index of first consecutive three True in df.is_below
b = ((df.is_below == True) & (df.is_below.shift(-1) == True) & (df.is_below.shift(-2) == True))
idx = df.index[b][0] # first index where three Trues are in a row
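A quick way to convince yourself of the shift direction, using the df built above (outputs shown as comments):
print(df.is_below.tolist())
# [False, False, False, True, False, False, True, True, True, True, True]
print(df.is_below.shift(-1).tolist())
# [False, False, True, False, False, True, True, True, True, True, nan]
print(idx)
# 6 -> the first row of the run of three consecutive Trues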

Related

how to get row index of a Pandas dataframe from a regex match

This question has been asked but I didn't find the answers complete. I have a dataframe that has unnecessary values in the first row and I want to find the row index of the animals:
df = pd.DataFrame({'a': ['apple', 'rhino', 'gray', 'horn'],
                   'b': ['honey', 'elephant', 'gray', 'trunk'],
                   'c': ['cheese', 'lion', 'beige', 'mane']})
a b c
0 apple honey cheese
1 rhino elephant lion
2 gray gray beige
3 horn trunk mane
ani_pat = r"rhino|zebra|lion"
That means I want to find "1" - the row index that matches the pattern. One solution I saw here was like this; applying to my problem...
import re

def findIdx(df, pattern):
    return df.apply(lambda x: x.str.match(pattern, flags=re.IGNORECASE)).values.nonzero()
animal = findIdx(df, ani_pat)
print(animal)
(array([1, 1], dtype=int64), array([0, 2], dtype=int64))
That output is a tuple of NumPy arrays. I've got the basics of NumPy and Pandas, but I'm not sure what to do with this or how it relates to the df above.
I altered that lambda expression like this:
df.apply(lambda x: x.str.match(ani_pat, flags=re.IGNORECASE))
a b c
0 False False False
1 True False True
2 False False False
3 False False False
That makes a little more sense, but I'm still trying to get the row index of the True values. How can I do that?
We can select from that boolean filter the DataFrame index entries for rows that have any True value in them:
idx = df.index[
    df.apply(lambda x: x.str.match(ani_pat, flags=re.IGNORECASE)).any(axis=1)
]
idx:
Int64Index([1], dtype='int64')
any on axis 1 will take the boolean DataFrame and reduce it to a single dimension based on the contents of the rows.
Before any:
a b c
0 False False False
1 True False True
2 False False False
3 False False False
After any:
0 False
1 True
2 False
3 False
dtype: bool
We can then use these boolean values as a mask for index (selecting indexes which have a True value):
Int64Index([1], dtype='int64')
If needed we can use tolist to get a list instead:
idx = df.index[
    df.apply(lambda x: x.str.match(ani_pat, flags=re.IGNORECASE)).any(axis=1)
].tolist()
idx:
[1]
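For completeness, the tuple from the original findIdx can also be mapped back to row indices directly; a small sketch, assuming the animal variable from the question:
import numpy as np

rows, cols = animal          # (row positions, column positions) of every matching cell
matched_rows = np.unique(rows)
print(matched_rows)          # [1]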

Delete dataframe rows based upon two dependent conditions

I have a fairly large dataframe (a few hundred columns) and I want to perform the following operation on it. I am using a toy dataframe below with a simple condition to illustrate what I need.
For every row:
Condition #1:
Check two of the columns for a value of zero (0). The condition is True if either column has a value of zero (0); if it is True, keep the row and move on to the next.
If Condition #1 is False (no zeros in either column 1 or 4)
Check all remaining columns in the row.
If any of the remaining columns has a value of zero, drop the row.
I would like the filtered dataframe returned as a new, separate dataframe.
My code so far:
# https://codereview.stackexchange.com/questions/185389/dropping-rows-from-a-pandas-dataframe-where-some-of-the-columns-have-value-0/185390
# https://thispointer.com/python-pandas-how-to-drop-rows-in-dataframe-by-conditions-on-column-values/
# https://stackoverflow.com/questions/29763620/how-to-select-all-columns-except-one-column-in-pandas
import pandas as pd
df = pd.DataFrame({'Col1': [7, 6, 0, 1, 8],
                   'Col2': [0.5, 0.5, 0, 0, 7],
                   'Col3': [0, 0, 3, 3, 6],
                   'Col4': [7, 0, 6, 4, 5]})
print(df)
print()
exclude = ['Col1', 'Col4']
all_but_1_and_4 = df[df.columns.difference(exclude)] # Filter out columns 1 and 4
print(all_but_1_and_4)
print()
def delete_rows(row):
    if row['Col1'] == 0 or row['Col4'] == 0: # Is the value in either Col1 or Col4 zero(0)
        skip = True # If it is, keep the row
        if not skip: # If not, check the second condition
            is_zero = all_but_1_and_4.apply(lambda x: 0 in x.values, axis=1).any() # Are any values in the remaining columns zero(0)
            if is_zero: # If any of the remaining columns has a value of zero(0)
                pass
                # drop the row being analyzed # Drop the row.
new_df = df.apply(delete_rows, axis=1)
print(new_df)
I don't know how to actually drop the row if both of my conditions are met.
In my toy dataframe, rows 1, 2 and 4 should be kept, 0 and 3 dropped.
I do not want to manually check all columns for step 2 because there are several hundred. That is why I filtered using .difference().
What I will do
s1=df[exclude].eq(0).any(1)
s2=df[df.columns.difference(exclude)].eq(0).any(1)
~(~s1&s2) #s1 | ~s2
Out[97]:
0 False
1 True
2 True
3 False
4 True
dtype: bool
yourdf=df[s1 | ~s2].copy()
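Note that .any(1) passes the axis positionally; newer pandas versions want it as a keyword, so an equivalent spelling of the same logic is:
s1 = df[exclude].eq(0).any(axis=1)
s2 = df[df.columns.difference(exclude)].eq(0).any(axis=1)
yourdf = df[s1 | ~s2].copy()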
WeNYoBen's answer is excellent, so I will only show the mistakes in your code:
The condition in the following if statement will never be fulfilled:
    skip = True   # If it is, keep the row
    if not skip:  # If not, check the second condition
You probably wanted to unindent the following rows, i.e. something like
    skip = True   # If it is, keep the row
if not skip:      # If not, check the second condition
which is the same as a simple else:, without the need for skip = True:
else:             # If not, check the second condition
The condition in the following if statement will always be fulfilled if at least one value anywhere in your whole table is zero (so not only in the current row, as you supposed):
is_zero = all_but_1_and_4.apply(lambda x: 0 in x.values, axis=1).any() # Are any values in the remaining columns zero(0)
if is_zero: # If any of the remaining columns has a value of zero(0)
because all_but_1_and_4.apply(lambda x: 0 in x.values, axis=1) is a series of True / False values - one for every row in the all_but_1_and_4 table. So after applying the .any() method to it you receive what I said.
Note:
Your approach is not bad, you may add a variable dropThisRow in your function, set it to True or False depending on conditions, and return it.
Then you may use your function to make the True / False series and use it for creating your target table:
dropRows = df.apply(delete_rows, axis=1) # True/False for dropping/keeping - for every row
new_df = df[~dropRows] # Select only rows with False
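For concreteness, a possible shape for that function (a sketch only; dropThisRow is the name suggested in the note above, and the column handling assumes the toy frame from the question):
def delete_rows(row):
    if row['Col1'] == 0 or row['Col4'] == 0:                # condition #1 met: keep the row
        dropThisRow = False
    else:                                                   # condition #2: any other column zero?
        dropThisRow = (row.drop(['Col1', 'Col4']) == 0).any()
    return dropThisRow

dropRows = df.apply(delete_rows, axis=1).astype(bool)       # astype(bool) guards against an object-dtype result
new_df = df[~dropRows]                                      # keeps rows 1, 2 and 4 of the toy frame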

Looking for NaN values in a specific column in df [duplicate]

Now I know how to check the dataframe for specific values across multiple columns. However, I can't seem to work out how to carry out an if statement based on a boolean response.
For example:
Walk directories using os.walk and read in a specific file into a dataframe.
for root, dirs, files in os.walk(main):
    filters = '*specificfile.csv'
    for filename in fnmatch.filter(files, filters):
        df = pd.read_csv(os.path.join(root, filename), error_bad_lines=False)
Now I am checking that dataframe across multiple columns. The first value is the column name (column1), and the next value is the specific value I am looking for in that column (banana). I am then checking another column (column2) for a specific value (green). If both of these are true I want to carry out a specific task, and if false I want to do something else.
so something like:
if (df['column1']=='banana') & (df['colour']=='green'):
    do something
else:
    do something
If you want to check whether any row of the DataFrame meets your conditions, you can use .any() along with your condition. Example -
if ((df['column1']=='banana') & (df['colour']=='green')).any():
Example -
In [16]: df
Out[16]:
A B
0 1 2
1 3 4
2 5 6
In [17]: ((df['A']==1) & (df['B'] == 2)).any()
Out[17]: True
This is because your condition - ((df['column1']=='banana') & (df['colour']=='green')) - returns a Series of True/False values.
This is because in pandas, when you compare a series against a scalar value, each row of the series is compared against that scalar, and the result is a series of True/False values indicating the outcome for each row. Example -
In [19]: (df['A']==1)
Out[19]:
0 True
1 False
2 False
Name: A, dtype: bool
In [20]: (df['B'] == 2)
Out[20]:
0 True
1 False
2 False
Name: B, dtype: bool
And the & performs an element-wise and of the two series. Example -
In [18]: ((df['A']==1) & (df['B'] == 2))
Out[18]:
0 True
1 False
2 False
dtype: bool
Now, to check if any of the values from this series is True, you can use .any(); to check if all the values in the series are True, you can use .all().
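Putting that back into the original if/else, a sketch with the column names from the question (the branch bodies are placeholders):
mask = (df['column1'] == 'banana') & (df['colour'] == 'green')
if mask.any():
    matching = df[mask]   # the rows that met both conditions, if you need them
    # do something
else:
    # do something else
    pass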

how to get a kind of "maximum" in a matrix, efficiently

I have the following problem: I have a matrix opened with the pandas module, where each cell holds a number between -1 and 1. What I want to find is the maximum "possible" value in each row that is not also the maximum value of another row.
If, for example, two rows have their maximum value in the same column, I compare both values and keep the bigger one; for the row whose maximum is smaller, I take its second-largest value instead (and repeat the same analysis again and again).
To explain myself better, consider my code:
import numpy as np
import pandas as pd
matrix = pd.read_csv("matrix.csv")
# this matrix has an id (or name) for each column
# ... and the first column has the id of each row
results = pd.DataFrame(np.empty((len(matrix), 3), dtype=pd.Timestamp), columns=['id1', 'id2', 'max_pos'])
l = len(matrix.columns)  # number of columns
next = 1
while next == 1:
    next = 0
    for i in range(0, len(matrix)):
        max_column = str(0)
        for j in range(1, l):  # 1 because the first column is an id
            if matrix[max_column][i] < matrix[str(j)][i]:
                max_column = str(j)
        results['id1'][i] = str(i)  # I could put here also matrix['0'][i]
        results['id2'][i] = max_column
        results['max_pos'][i] = matrix[max_column][i]
    for i in range(0, len(results)):  # now I will check if two or more rows have the same max column
        for ii in range(0, len(results)):
            # if two rows have their max in the same column, I keep the one with the biggest
            # ... max value and change the other to "-1" to iterate again
            if (results['id2'][i] == results['id2'][ii]) and (results['max_pos'][i] < results['max_pos'][ii]):
                matrix[results['id2'][i]][i] = -1
                next = 1
Putting an example:
#consider
pd.DataFrame({'a':[1, 2, 5, 0], 'b':[4, 5, 1, 0], 'c':[3, 3, 4, 2], 'd':[1, 0, 0, 1]})
a b c d
0 1 4 3 1
1 2 5 3 0
2 5 1 4 0
3 0 0 2 1
# at the first iteration I will have the following result
0 b 4 # this means that the row 0 has its maximum at column 'b' and its value is 4
1 b 5
2 a 5
3 c 2
#the problem is that column b is the maximum of row 0 and 1, but I know that the maximum of row 1 is bigger than row 0, so I take the second maximum of row 0, then:
0 c 3
1 b 5
2 a 5
3 c 2
# now I've solved the problem for rows 0 and 1, but column c is now the maximum of rows 0 and 3, so I compare them and take the second maximum in row 3
0 c 3
1 b 5
2 a 5
3 d 1
# now I'm done. In the case that two rows have the same column as maximum and also the same value, nothing happens and I keep those values.
#what if the matrix would be
pd.DataFrame({'a':[1, 2, 5, 0], 'b':[5, 5, 1, 0], 'c':[3, 3, 4, 2], 'd':[1, 0, 0, 1]})
a b c d
0 1 5 3 1
1 2 5 3 0
2 5 1 4 0
3 0 0 2 1
# then, at the first iteration the result will be:
0 b 5
1 b 5
2 a 5
3 c 2
#then, given that the max value of row 0 and 1 is at the same column, I should compare the maximum values
# ... but in this case the values are the same (both are 5), this would be the end of iterating
# ... because I can't choose between row 0 and 1 and the other rows have their maximum at different columns...
This code works perfectly for me on, say, a 100x100 matrix. But if the matrix grows to 50,000x50,000, the code takes far too long to finish. I know my code is probably the most inefficient way to do this, but I don't know how to deal with it.
I have been reading about threads in Python, but spawning 50,000 threads doesn't help because my computer doesn't use more CPU. I also tried functions such as .max(), but I'm not able to get the column of the max and compare it with the other maxima.
If anyone could help me or give me a piece of advice to make this more efficient, I would be very grateful.
Going to need more information on this. What are you trying to accomplish here?
This will help you get some of the way, but in order to fully achieve what you're doing I need more context.
We'll import numpy, random, and Counter from collections:
import numpy as np
import random
from collections import Counter
We'll create a random 50k x 50k matrix of numbers between -10M and +10M
mat = np.random.randint(-10000000,10000000,(50000,50000))
Now to get the maximums for each row we can just do the following list comprehension:
maximums = [max(mat[x,:]) for x in range(len(mat))]
Now we want to find out which ones are not maximums in any other rows. We can use Counter on our maximums list to find out how many of each there are. Counter returns a counter object that is like a dictionary with the maximum as the key, and the # of times it appears as the value.
We then do a dictionary comprehension keeping only the entries whose count is 1; that gives us the maximums that show up only once. We use .keys() to grab the numbers themselves and then turn the result into a list.
c = Counter(maximums)
{9999117: 15,
9998584: 2,
9998352: 2,
9999226: 22,
9999697: 59,
9999534: 32,
9998775: 8,
9999288: 18,
9998956: 9,
9998119: 1,
...}
k = list( {x: c[x] for x in c if c[x] == 1}.keys() )
[9998253,
9998139,
9998091,
9997788,
9998166,
9998552,
9997711,
9998230,
9998000,
...]
Lastly, we can do the following list comprehension over the original maximums list to get the indices of the rows whose maximum is unique.
indices = [i for i, x in enumerate(maximums) if x in k]
Depending on what else you're looking to do we can go from here.
It's not the speediest program, but finding the maximums, the counter, and the indices takes 182 seconds on a 50,000 by 50,000 matrix that is already loaded.
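If the matrix fits in memory, the same three steps can also be vectorised with NumPy; a sketch (not from the original answer, assuming mat as created above):
import numpy as np

maximums = mat.max(axis=1)                               # row maxima in one vectorised call
values, counts = np.unique(maximums, return_counts=True)
unique_values = values[counts == 1]                      # maxima that occur in exactly one row
indices = np.where(np.isin(maximums, unique_values))[0]  # rows whose maximum appears nowhere else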

Conditional column selection in pandas

I want to select columns from a DataFrame according to a particular condition. I know it can be done with a loop, but my df is very large so efficiency is crucial. The condition for column selection is having either only non-nan entries or a sequence of only nans followed by a sequence of only non-nan entries.
Here is an example. Consider the following DataFrame:
pd.DataFrame([[1, np.nan, 2, np.nan], [2, np.nan, 5, np.nan], [4, 8, np.nan, 1], [3, 2, np.nan, 2], [3, 2, 5, np.nan]])
0 1 2 3
0 1 NaN 2.0 NaN
1 2 NaN 5.0 NaN
2 4 8.0 NaN 1.0
3 3 2.0 NaN 2.0
4 3 2.0 5.0 NaN
From it, I would like to select only columns 0 and 1. Any advice on how to do this efficiently without looping?
logic
count the nulls in each column. If the only nulls are at the beginning, then the number of nulls in the column should be equal to the position of the first valid index.
get the first valid index
slice the index by the null count and compare against the first valid indices. If they are equal, then that's a good column.
cnull = df.isnull().sum()
fvald = df.apply(pd.Series.first_valid_index)
cols = df.index[cnull] == fvald
df.loc[:, cols]
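For the example frame in the question, the two quantities line up only for columns 0 and 1 (a quick check, assuming cnull and fvald from the snippet above):
print(cnull.tolist())   # [0, 2, 2, 3]  -> nulls per column
print(fvald.tolist())   # [0, 2, 0, 2]  -> first valid index per column
# df.index[cnull] == fvald -> [True, True, False, False], so columns 0 and 1 survive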
Edited with speed improvements
old answer
def pir1(df):
    cnull = df.isnull().sum()
    fvald = df.apply(pd.Series.first_valid_index)
    cols = df.index[cnull] == fvald
    return df.loc[:, cols]
much faster answer using same logic
def pir2(df):
    nulls = np.isnan(df.values)
    null_count = nulls.sum(0)
    first_valid = nulls.argmin(0)
    null_on_top = null_count == first_valid
    filtered_data = df.values[:, null_on_top]
    filtered_columns = df.columns.values[null_on_top]
    return pd.DataFrame(filtered_data, df.index, filtered_columns)
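A quick sanity check of pir2 against the example frame from the question (columns 0 and 1 should be the only survivors):
import numpy as np
import pandas as pd

df = pd.DataFrame([[1, np.nan, 2, np.nan], [2, np.nan, 5, np.nan],
                   [4, 8, np.nan, 1], [3, 2, np.nan, 2], [3, 2, 5, np.nan]])
print(pir2(df).columns.tolist())   # [0, 1]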
Consider a DF as shown which has Nans in various possible locations:
1. Both sides Nans present:
Create a mask by replacing all nans with 0's and finite values with 1's:
mask = np.where(np.isnan(df), 0, 1)
Take its element-wise difference down each column, then take the absolute value. The logic here is that whenever a column contains three distinct values (namely -1, 1, 0), there is a break in the sequence and that column must be discarded.
The idea is then to take the sum and keep the columns wherever the sum is less than 2 (after taking the absolute value we are left with 1, 1 and 0). In the extreme case the sum is 2, and those columns are certainly disjoint and must be discarded.
criteria = pd.DataFrame(mask, columns=df.columns).diff(1).abs().sum().lt(2)
Finally, use this condition to select the desired columns, leaving only columns with NaNs in one portion and finite values in the other.
df.loc[:, criteria]
2. Nans present on top:
mask = np.where(np.isnan(df), 0, 1)
criteria = pd.DataFrame(mask, columns=df.columns).diff(1).ne(-1).all()
df.loc[:, criteria]
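A quick check of this criterion on the example frame from the question (not part of the original answer); again only columns 0 and 1 survive:
import numpy as np
import pandas as pd

df = pd.DataFrame([[1, np.nan, 2, np.nan], [2, np.nan, 5, np.nan],
                   [4, 8, np.nan, 1], [3, 2, np.nan, 2], [3, 2, 5, np.nan]])
mask = np.where(np.isnan(df), 0, 1)
criteria = pd.DataFrame(mask, columns=df.columns).diff(1).ne(-1).all()
print(df.loc[:, criteria].columns.tolist())   # [0, 1]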
