Pandas: how to drop NaN values using subset with a MultiIndex DataFrame? - python-3.x

I have a DataFrame with MultiIndex columns.
From this DataFrame I need to remove the rows with NaN values in a subset of columns.
I am trying to use the subset option of pd.dropna, but I cannot find a way to specify the subset of columns. I have tried using pd.IndexSlice, but this does not work.
In the example below I need to get rid of the last row.
import pandas as pd
# ---
a = [1, 1, 2, 2, 3, 3]
b = ["a", "b", "a", "b", "a", "b"]
col = pd.MultiIndex.from_arrays([a[:], b[:]])
val = [
    [1, 2, 3, 4, 5, 6],
    [None, None, 1, 2, 3, 4],
    [None, 1, 2, 3, 4, 5],
    [None, None, 5, 3, 3, 2],
    [None, None, None, None, 5, 7],
]
# ---
df = pd.DataFrame(val, columns=col)
# ---
print(df)
# ---
idx = pd.IndexSlice
df.dropna(axis=0, how="all", subset=idx[1:2, :])
# ---
print(df)
Using the thresh option is an alternative, but if possible I would like to use subset and how='all'.
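For reference, a hedged sketch of the thresh route on this example: thresh counts the non-NaN values required to keep a row, and thresh=3 happens to drop only the last row here (it has just two non-NaN values), but the threshold is tied to this particular data rather than to the column subset.
# Keep rows with at least 3 non-NaN values; only the last row has fewer.
# Note: thresh=3 is specific to this example data.
df.dropna(axis=0, thresh=3)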

When dealing with MultiIndex columns, each column can be specified as a tuple:
In [67]: df.dropna(axis=0, how="all", subset=[(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b')])
Out[67]:
     1         2       3
     a    b    a    b  a  b
0  1.0  2.0  3.0  4.0  5  6
1  NaN  NaN  1.0  2.0  3  4
2  NaN  1.0  2.0  3.0  4  5
3  NaN  NaN  5.0  3.0  3  2
Or, to select all columns whose first level equals 1 or 2 you could use:
In [69]: df.dropna(axis=0, how="all", subset=df.loc[[], [1,2]].columns)
Out[69]:
     1         2       3
     a    b    a    b  a  b
0  1.0  2.0  3.0  4.0  5  6
1  NaN  NaN  1.0  2.0  3  4
2  NaN  1.0  2.0  3.0  4  5
3  NaN  NaN  5.0  3.0  3  2
df[[1,2]].columns also works, but it builds a (possibly large) intermediate DataFrame first. df.loc[[], [1,2]].columns is more memory-efficient, since its intermediate DataFrame selects zero rows and is therefore empty.
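As a quick sanity check (a sketch on the example df), both expressions yield the same column index:
# Both select the columns under top levels 1 and 2; the .loc[[], ...]
# version slices zero rows first, so no data is actually copied.
cols_a = df[[1, 2]].columns
cols_b = df.loc[[], [1, 2]].columns
print(cols_a.equals(cols_b))  # True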

If you want to apply the dropna to the columns whose first level is 1 or 2, you can do it as follows:
cols = [(c0, c1) for (c0, c1) in df.columns if c0 in [1, 2]]
df.dropna(axis=0, how="all", subset=cols)
If applied to your data, it results in:
Out[446]:
     1         2       3
     a    b    a    b  a  b
0  1.0  2.0  3.0  4.0  5  6
1  NaN  NaN  1.0  2.0  3  4
2  NaN  1.0  2.0  3.0  4  5
3  NaN  NaN  5.0  3.0  3  2
As you can see, the last row (index=4) is gone, because all columns under 1 and 2 were NaN in that row. If you instead want to remove every row in which any NaN occurred in these columns, you need:
df.dropna(axis=0, how="any", subset=cols)
Which results in:
Out[447]:
     1         2       3
     a    b    a    b  a  b
0  1.0  2.0  3.0  4.0  5  6
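A third way to build the subset, sketched here as an alternative to the comprehension above, is to filter the columns index directly with Index.get_level_values:
# Select all columns whose first (outermost) level is 1 or 2.
cols = df.columns[df.columns.get_level_values(0).isin([1, 2])]
df.dropna(axis=0, how="all", subset=cols)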

Related

Apply np.where or np.select to multiple column pairs

Given a DataFrame df as follows:
import pandas as pd
import numpy as np
data = [[1, 'A1', 'A1'], [2, 'A2', 'B2', 1, 1], [3, 'B3', 'B3', 3, 2], [4, None, None]]
df = pd.DataFrame(data, columns=['id', 'v1', 'v2', 'v3', 'v4'])
print(df)
Out:
   id    v1    v2   v3   v4
0   1    A1    A1  NaN  NaN
1   2    A2    B2  1.0  1.0
2   3    B3    B3  3.0  2.0
3   4  None  None  NaN  NaN
Let's say I need to check whether multiple column pairs have identical content or the same values:
col_pair = {'v1': 'v2', 'v3': 'v4'}
If I don't want to repeat np.where multiple times as follows, but instead apply col_pair (or some other solution), how could I achieve that? Thanks.
df['v1_v2'] = np.where(df['v1'] == df['v2'], 1, 0)
df['v3_v4'] = np.where(df['v3'] == df['v4'], 1, 0)
The expected result:
   id    v1    v2   v3   v4  v1_v2  v3_v4
0   1    A1    A1  NaN  NaN      1    NaN
1   2    A2    B2  1.0  1.0      0      1
2   3    B3    B3  3.0  2.0      1      0
3   4  None  None  NaN  NaN    NaN    NaN
You also need to test whether both values of each key-value pair are missing, using DataFrame.isna with DataFrame.all, and pass the conditions to numpy.select:
for k, v in col_pair.items():
    df[f'{k}_{v}'] = np.select([df[[k, v]].isna().all(axis=1),
                                df[k] == df[v]], [None, 1], default=0)
Out:
   id    v1    v2   v3   v4  v1_v2  v3_v4
0   1    A1    A1  NaN  NaN      1   None
1   2    A2    B2  1.0  1.0      0      1
2   3    B3    B3  3.0  2.0      1      0
3   4  None  None  NaN  NaN   None   None
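Note that using None as one of the select choices makes the new columns object dtype. If you prefer a nullable integer dtype instead, a hedged variation (assuming a pandas version with the Int64 extension type) is:
# Same logic with pd.NA and a nullable integer dtype instead of object.
for k, v in col_pair.items():
    eq = (df[k] == df[v]).astype('Int64')         # 1/0 comparison result
    both_missing = df[[k, v]].isna().all(axis=1)  # True where both are NaN
    df[f'{k}_{v}'] = eq.mask(both_missing)        # those positions become <NA>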

Pandas groupby with dropna set to False generating wrong output

In the following snippet:
import pandas as pd
import numpy as np
df = pd.DataFrame(
    {
        "a": [1, 2, 3, 4, 5, 6, 7, 8, 9],
        "b": [1, np.nan, 1, np.nan, 2, 1, 2, np.nan, 1]
    }
)
df_again = df.groupby("b", dropna=False).apply(lambda x: x)
I was expecting df and df_again to be identical. But they are not:
df
   a    b
0  1  1.0
1  2  NaN
2  3  1.0
3  4  NaN
4  5  2.0
5  6  1.0
6  7  2.0
7  8  NaN
8  9  1.0
df_again
   a    b
0  1  1.0
2  3  1.0
4  5  2.0
5  6  1.0
6  7  2.0
8  9  1.0
Now, if I tweak the lambda expression slightly to "see" what is going on, with
df.groupby("b", dropna=False).apply(lambda x: print(x)), I can see that the portion of df where b was NaN was in fact processed.
What am I missing here?
(Using pandas 1.3.1 and numpy 1.20.3)
It's because None and None are the same thing:
>>> None == None
True
>>>
You have to use np.nan:
>>> np.nan == np.nan
False
>>>
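As an aside, since NaN is not equal to itself, the reliable way to test for missing values is pd.isna, which treats both None and np.nan as missing:
>>> pd.isna(None), pd.isna(np.nan)
(True, True)
>>>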
So try this:
df = pd.DataFrame(
    {
        "a": [1, 2, 3, 4, 5, 6, 7, 8, 9],
        "b": [1, np.nan, 1, np.nan, 2, 1, 2, np.nan, 1]
    }
)
df_again = df.groupby("b", dropna=False).apply(lambda x: x)
Now df and df_again are the same:
>>> df
   a    b
0  1  1.0
1  2  NaN
2  3  1.0
3  4  NaN
4  5  2.0
5  6  1.0
6  7  2.0
7  8  NaN
8  9  1.0
>>> df_again
   a    b
0  1  1.0
1  2  NaN
2  3  1.0
3  4  NaN
4  5  2.0
5  6  1.0
6  7  2.0
7  8  NaN
8  9  1.0
>>> df.equals(df_again)
True
>>>
This was a bug introduced in pandas 1.2.0 as described here and was solved here.

Fill values after condition with NaN

I have a df like this:
import numpy as np
import pandas as pd

df = pd.DataFrame(
    [
        ['A', 1],
        ['A', 1],
        ['A', 1],
        ['B', 2],
        ['B', 0],
        ['A', 0],
        ['A', 1],
        ['B', 1],
        ['B', 0]
    ], columns=['key', 'val'])
df
print:
  key  val
0   A    1
1   A    1
2   A    1
3   B    2
4   B    0
5   A    0
6   A    1
7   B    1
8   B    0
I want to fill the val column with NaN from the first 2 onward (in the example, all values in the val column from rows 3 to 8 are replaced with NaN).
I tried this:
df['val'] = np.where(df['val'].shift(-1) == 2, np.nan, df['val'])
and iterating over rows like this:
for row in df.iterrows():
    df['val'] = np.where(df['val'].shift(-1) == 2, np.nan, df['val'])
but I can't get it to fill NaN forward.
You can use boolean indexing with cummax to fill the NaN values:
df.loc[df['val'].eq(2).cummax(), 'val'] = np.nan
Alternatively you can also use Series.mask:
df['val'] = df['val'].mask(lambda x: x.eq(2).cummax())
  key  val
0   A  1.0
1   A  1.0
2   A  1.0
3   B  NaN
4   B  NaN
5   A  NaN
6   A  NaN
7   B  NaN
8   B  NaN
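To see why this works, here are the intermediate steps on the original df (a sketch): eq(2) flags the first 2, and cummax propagates True forward from there:
# Boolean mask of the condition, then its cumulative maximum:
df['val'].eq(2)           # False for rows 0-2, True at row 3, False after
df['val'].eq(2).cummax()  # False for rows 0-2, True from row 3 onward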
You can try:
ind = df.loc[df['val'] == 2].index
df.iloc[ind[0]:, 1] = np.nan
Once you get the index with df.index[df.val.shift(-1).eq(2)].item(), you can use slicing:
idx = df.index[df.val.shift(-1).eq(2)].item()
df.iloc[idx:, 1] = np.nan
df
  key  val
0   A  1.0
1   A  1.0
2   A  NaN
3   B  NaN
4   B  NaN
5   A  NaN
6   A  NaN
7   B  NaN
8   B  NaN
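One caveat: .item() raises a ValueError if the mask matches more than one position (i.e. if 2 occurs several times). A hedged guard is to take the first match explicitly:
# Take the first row whose next value is 2, even if there are several.
matches = df.index[df.val.shift(-1).eq(2)]
df.iloc[matches[0]:, 1] = np.nan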

How do I get nlargest rows without the sorting?

I need to extract the n-smallest rows of a pandas df, but it is very important to me to maintain the original order of rows.
code example:
import pandas as pd
df = pd.DataFrame({
    'a': [1, 10, 8, 11, -1],
    'b': list('abdce'),
    'c': [1.0, 2.0, 1.5, 3.0, 4.0]})
df.nsmallest(3, 'a')
Gives:
    a  b    c
4  -1  e  4.0
0   1  a  1.0
2   8  d  1.5
I need:
    a  b    c
0   1  a  1.0
2   8  d  1.5
4  -1  e  4.0
Any ideas how to do that?
PS! In my real example, the index is not sorted/sortable, as the labels are strings (names).
The simplest approach, assuming the index was sorted to begin with:
df.nsmallest(3, 'a').sort_index()
    a  b    c
0   1  a  1.0
2   8  d  1.5
4  -1  e  4.0
Alternatively, use np.argpartition with iloc.
This does not depend on the index being sortable:
import numpy as np

df.iloc[np.sort(df.a.values.argpartition(3)[:3])]
    a  b    c
0   1  a  1.0
2   8  d  1.5
4  -1  e  4.0
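Another option that keeps the original row order, sketched here under the assumption that the index labels are unique: build a boolean mask from the nsmallest result, which also works with string indices:
# Boolean indexing preserves the original row order of df.
df[df.index.isin(df.nsmallest(3, 'a').index)]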

How to replace selected rows of pandas dataframe with a np array, sequentially?

I have a pandas dataframe
     A  B  C
0  NaN  2  6
1  3.0  4  0
2  NaN  0  4
3  NaN  1  2
where I have a column A that has NaN values in some rows (not necessarily consecutive).
I want to replace these values not with a constant value (which pd.fillna does), but rather with the values from a numpy array.
So the desired outcome is:
     A  B  C
0  1.0  2  6
1  3.0  4  0
2  5.0  0  4
3  7.0  1  2
I'm not sure the .replace method will help here either, since it seems to replace value <-> value via a dictionary, whereas here I want to sequentially change each NaN to its corresponding value (by position) in the np array.
I tried the following MWE:
import numpy as np
import pandas as pd

huh = pd.DataFrame([[np.nan, 2, 6],
                    [3, 4, 0],
                    [np.nan, 0, 4],
                    [np.nan, 1, 2]],
                   columns=list('ABC'))
huh.A[huh.A.isnull()] = np.array([1, 5, 7])  # what I want to do, but this triggers a warning
which produces the warning
SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
I read the docs but I can't understand how to do this with .loc.
How do I do this properly, preferably without a for loop?
Other info:
The number of elements in the np array will always match the number of NaN in the dataframe, so your answer does not need to check for this.
You are really close; you need DataFrame.loc to avoid the chained assignment:
huh.loc[huh.A.isnull(), 'A'] = np.array([1,5,7])
print(huh)
     A  B  C
0  1.0  2  6
1  3.0  4  0
2  5.0  0  4
3  7.0  1  2
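Outside the guarantee stated in the question, this positional assignment requires the array length to match the number of True values in the mask; otherwise .loc raises a ValueError. A quick guard (a sketch):
mask = huh.A.isnull()
vals = np.array([1, 5, 7])
assert mask.sum() == len(vals)  # lengths must match, or .loc raises
huh.loc[mask, 'A'] = vals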
zip
This approach also accounts for uneven lengths:
m = huh.A.isna()
a = np.array([1, 5, 7])
s = pd.Series(dict(zip(huh.index[m], a)))
huh.fillna({'A': s})
     A  B  C
0  1.0  2  6
1  3.0  4  0
2  5.0  0  4
3  7.0  1  2
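For example (a sketch, starting again from the original huh): with a shorter array, the zip simply stops early and the remaining NaN stay in place:
# Only two replacement values: the last NaN in column A is left as-is.
m = huh.A.isna()
s = pd.Series(dict(zip(huh.index[m], np.array([1, 5]))))
huh.fillna({'A': s})
     A  B  C
0  1.0  2  6
1  3.0  4  0
2  5.0  0  4
3  NaN  1  2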
