I have a dataframe with columns age, date and location.
I would like to count how many rows are empty across ALL columns (not just some of them, but all of them at the same time). I have the following code; each line works on its own, but how do I say age AND date AND location isnull?
df['age'].isnull().sum()
df['date'].isnull().sum()
df['location'].isnull().sum()
I would also like to return a DataFrame after removing the rows with missing values in ALL three of these columns, so something like the following lines, but combined into one statement:
df.mask(row['location'].isnull())
df[np.isfinite(df['age'])]
df[np.isfinite(df['date'])]
You can basically use your approach, just without selecting the individual columns:
df.isnull().sum().sum()
The first .sum() returns the NaN count per column (a Series), and the second .sum() adds those up into the total number of NaN values in the whole frame.
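If what you actually need is the number of rows where age, date and location are all null at the same time, you can combine the per-column checks with all(axis=1); a minimal sketch using the column names from your question:
# count rows in which age, date and location are all null
df[['age', 'date', 'location']].isnull().all(axis=1).sum()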
Similar to Vaishali's answer, you can use df.dropna() to drop every row that contains a NaN or None and return only your cleaned DataFrame.
In [45]: df = pd.DataFrame({'age': [1, 2, 3, np.NaN, 4, None], 'date': [1, 2, 3, 4, None, 5], 'location': ['a', 'b', 'c', None, 'e', 'f']})
In [46]: df
Out[46]:
   age  date location
0  1.0   1.0        a
1  2.0   2.0        b
2  3.0   3.0        c
3  NaN   4.0     None
4  4.0   NaN        e
5  NaN   5.0        f
In [47]: df.isnull().sum().sum()
Out[47]: 4
In [48]: df.dropna()
Out[48]:
   age  date location
0  1.0   1.0        a
1  2.0   2.0        b
2  3.0   3.0        c
You can find the number of rows in which all values are NaN with
len(df) - len(df.dropna(how='all'))
and drop those rows with
df = df.dropna(how='all')
This drops only the rows in which every column is NaN.
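If your DataFrame has more columns than these three, the subset parameter restricts the check to them; a sketch using the column names from the question:
# keep a row unless age, date and location are all missing
df = df.dropna(how='all', subset=['age', 'date', 'location'])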
I have a Pandas DataFrame with two columns of "complementary" data. For any given row, there are 3 possibilities:
1) Column A has a non-null value, and column B has a null value, NaN, that I want to replace with the non-null value from column A.
2) Column A has a null value, NaN, that I want to replace with the non-null value from column B.
3) Both columns A and B have null values, NaN, which means I'll keep NaN as the value for that row.
Here's a simplified version of my DataFrame:
df1 = pd.DataFrame({'A': ['keep1', np.nan, np.nan, 'keep4', np.nan],
                    'B': [np.nan, 'keep2', np.nan, np.nan, np.nan]})
I was thinking that as an intermediate step, I'd create a new column C with the entries I need:
df2 = pd.DataFrame({'A': ['keep1', np.nan, np.nan, 'keep4', np.nan],
                    'B': [np.nan, 'keep2', np.nan, np.nan, np.nan],
                    'C': ['keep1', 'keep2', np.nan, 'keep4', np.nan]})
Then I'd drop the first two columns, A and B:
df_final = df2.drop(['A', 'B'], axis=1)
My actual DataFrame has hundreds of rows, and I've tried several approaches (boolean filters, looping through the DataFrame using iterrows, using DataFrame.where()) without success. I'd think this would be a simple problem, but I'm not seeing it. Any help is appreciated.
Thanks
You can use combine_first() to fill the gaps in A from B:
df1['C'] = df1['A'].combine_first(df1['B'])
#0 keep1
#1 keep2
#2 NaN
#3 keep4
#4 NaN
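If, as in the question, you only want the combined column afterwards, you can then drop A and B; a small sketch:
df_final = df1.drop(['A', 'B'], axis=1)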
Use Series.fillna to replace missing values in A with the values from B:
df1['C'] = df1.A.fillna(df1.B)
print (df1)
       A      B      C
0  keep1    NaN  keep1
1    NaN  keep2  keep2
2    NaN    NaN    NaN
3  keep4    NaN  keep4
4    NaN    NaN    NaN
To avoid a separate drop, it is possible to use DataFrame.pop to extract the columns:
df1['C'] = df1.pop('A').fillna(df1.pop('B'))
print (df1)
       C
0  keep1
1  keep2
2    NaN
3  keep4
4    NaN
I have a pandas 0.24.2 DataFrame on Python 3.7.x, as below. I want to drop_duplicates() rows with the same Name based on conditional logic. A similar question can be found here: Pandas - Conditional drop duplicates, but it gets more complicated in my case.
import pandas as pd
import numpy as np
df = pd.DataFrame({
    'Id': [1, 2, 3, 4, 5, 6],
    'Name': ['A', 'B', 'C', 'A', 'B', 'C'],
    'Value1': [1, np.NaN, 0, np.NaN, 1, np.NaN],
    'Value2': [np.NaN, 0, np.NaN, 1, np.NaN, 0],
    'Value3': [np.NaN, 0, np.NaN, 1, np.NaN, np.NaN]
})
How is it possible to:
Drop duplicates for records with the same 'Name', keeping the one that has fewer NaNs?
If they have the same number of NaNs, keep the one that does NOT have a NaN in 'Value1'?
The desired output would be:
   Id Name  Value1  Value2  Value3
2   2    B     NaN       0       0
3   3    C       0     NaN     NaN
4   4    A     NaN       1       1
The idea is to create helper columns for both conditions, sort, and then remove duplicates:
df1 = df.assign(count=df.isna().sum(axis=1),
                count_val1=df['Value1'].isna().view('i1'))
df2 = (df1.sort_values(['count', 'count_val1'])[df.columns]
          .drop_duplicates('Name')
          .sort_index())
print (df2)
   Id Name  Value1  Value2  Value3
1   2    B     NaN     0.0     0.0
2   3    C     0.0     NaN     NaN
3   4    A     NaN     1.0     1.0
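Note that Series.view is deprecated in recent pandas releases; astype should produce the same integer helper column there. A sketch of that substitution:
df1 = df.assign(count=df.isna().sum(axis=1),
                count_val1=df['Value1'].isna().astype('int8'))  # 1 where Value1 is NaN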
Here is a different solution. The goal is to create two helper columns that decide which of the duplicate rows will be deleted.
First, we create the columns.
df['count_nan'] = df.isnull().sum(axis=1)

Value1_nan = []
for row in df['Value1']:
    if pd.isna(row):          # flag rows whose Value1 is missing
        Value1_nan.append(1)
    else:
        Value1_nan.append(0)
df['Value1_nan'] = Value1_nan
We then sort the rows within each Name so that the row with the fewest NaNs comes first.
df.sort_values(by=['Name', 'count_nan', 'Value1_nan'], inplace=True, ascending=[True, True, True])
Finally, we keep only the "first" row for each duplicated Name and drop the rest: the row with the fewest NaNs, with ties broken in favour of the row whose Value1 is not NaN.
df = df.drop_duplicates(subset=['Name'], keep='first')
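The two helper columns are no longer needed afterwards and can be dropped; a small sketch:
df = df.drop(columns=['count_nan', 'Value1_nan'])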
I have a data frame with multi-index columns.
From this data frame I need to remove the rows with NaN values in a subset of columns.
I am trying to use the subset option of DataFrame.dropna, but I cannot find a way to specify the subset of columns. I have tried using pd.IndexSlice, but this does not work.
In the example below I need to get rid of the last row.
import pandas as pd
# ---
a = [1, 1, 2, 2, 3, 3]
b = ["a", "b", "a", "b", "a", "b"]
col = pd.MultiIndex.from_arrays([a[:], b[:]])
val = [
    [1, 2, 3, 4, 5, 6],
    [None, None, 1, 2, 3, 4],
    [None, 1, 2, 3, 4, 5],
    [None, None, 5, 3, 3, 2],
    [None, None, None, None, 5, 7],
]
# ---
df = pd.DataFrame(val, columns=col)
# ---
print(df)
# ---
idx = pd.IndexSlice
df.dropna(axis=0, how="all", subset=idx[1:2, :])
# ---
print(df)
Using the thresh option is an alternative, but if possible I would like to use subset and how='all'.
When dealing with a MultiIndex, each column of the MultiIndex can be specified as a tuple:
In [67]: df.dropna(axis=0, how="all", subset=[(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b')])
Out[67]:
     1         2       3
     a    b    a    b  a  b
0  1.0  2.0  3.0  4.0  5  6
1  NaN  NaN  1.0  2.0  3  4
2  NaN  1.0  2.0  3.0  4  5
3  NaN  NaN  5.0  3.0  3  2
Or, to select all columns whose first level equals 1 or 2 you could use:
In [69]: df.dropna(axis=0, how="all", subset=df.loc[[], [1,2]].columns)
Out[69]:
     1         2       3
     a    b    a    b  a  b
0  1.0  2.0  3.0  4.0  5  6
1  NaN  NaN  1.0  2.0  3  4
2  NaN  1.0  2.0  3.0  4  5
3  NaN  NaN  5.0  3.0  3  2
df[[1,2]].columns also works, but this returns a (possibly large) intermediate DataFrame. df.loc[[], [1,2]].columns is more memory-efficient since its intermediate DataFrame is empty.
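Another way to avoid building any intermediate DataFrame is to select the matching columns straight from the column index; a sketch that should be equivalent:
cols = df.columns[df.columns.get_level_values(0).isin([1, 2])]
df.dropna(axis=0, how="all", subset=cols)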
If you want to apply the dropna to the columns whose first level is 1 or 2, you can do it as follows:
cols= [(c0, c1) for (c0, c1) in df.columns if c0 in [1,2]]
df.dropna(axis=0, how="all", subset=cols)
If applied to your data, it results in:
Out[446]:
     1         2       3
     a    b    a    b  a  b
0  1.0  2.0  3.0  4.0  5  6
1  NaN  NaN  1.0  2.0  3  4
2  NaN  1.0  2.0  3.0  4  5
3  NaN  NaN  5.0  3.0  3  2
As you can see, the last row (index=4) is gone, because all the columns under 1 and 2 were NaN in that row. If you instead want to remove every row in which any of these columns is NaN, you need:
df.dropna(axis=0, how="any", subset=cols)
Which results in:
Out[447]:
     1         2       3
     a    b    a    b  a  b
0  1.0  2.0  3.0  4.0  5  6
I am trying to read in a DataFrame, and the latitude and longitude values don't look right. This is not just a few rows; it affects an entire DataFrame with more than 100k rows.
screenshot of dataframe
How do you handle such data?
It looks like your source could be using 99999 instead of NaN. I'd replace these with NaN (missing):
In [11]: df = pd.DataFrame([[1, 99999.0], [2, 4]], columns=['A', 'B'])
In [12]: df[['B']] = df[['B']].replace(99999., np.nan)
In [13]: df
Out[13]:
   A    B
0  1  NaN
1  2  4.0
i.e.
df[['Latitude', 'Longitude']] = df[['Latitude', 'Longitude']].replace(99999., np.nan)
Note: in general a blanket replace like this could clobber values that are legitimately 99999, but valid latitudes and longitudes never come anywhere near that, so nothing real is lost here.
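You could also guard against any out-of-range value rather than only the 99999 sentinel; a sketch, assuming the columns are named Latitude and Longitude:
# values outside the valid coordinate ranges become NaN
df['Latitude'] = df['Latitude'].where(df['Latitude'].between(-90, 90))
df['Longitude'] = df['Longitude'].where(df['Longitude'].between(-180, 180))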
I have two dataframes like the following examples:
import pandas as pd
import numpy as np
df = pd.DataFrame({'a': ['20', '50', '100'], 'b': [1, np.nan, 1],
                   'c': [np.nan, 1, 1]})
df_id = pd.DataFrame({'b': ['50', '4954', '93920', '20'],
                      'c': ['123', '100', '6', np.nan]})
print(df)
     a    b    c
0   20  1.0  NaN
1   50  NaN  1.0
2  100  1.0  1.0
print(df_id)
       b    c
0     50  123
1   4954  100
2  93920    6
3     20  NaN
For each identifier in df['a'], I want to null the value in df['b'] if there is no matching identifier in any row in df_id['b']. I want to do the same for column df['c'].
My desired result is as follows:
result = pd.DataFrame({'a': ['20', '50', '100'], 'b': [1, np.nan, np.nan],
                       'c': [np.nan, np.nan, 1]})
print(result)
     a    b    c
0   20  1.0  NaN
1   50  NaN  NaN   # df_id['c'] did not contain '50'
2  100  NaN  1.0   # df_id['b'] did not contain '100'
My attempt to do this is here:
for i, letter in enumerate(['b', 'c']):
    df[letter] = (df.apply(lambda x: x[letter] if x['a']
                           .isin(df_id[letter].tolist()) else np.nan, axis=1))
The error I get:
AttributeError: ("'str' object has no attribute 'isin'", 'occurred at index 0')
This is in Python 3.5.2, pandas version 0.20.1.
You can solve your problem using this instead:
for letter in ['b', 'c']:  # enumerate isn't needed here; keep it if the rest of your code relies on the index
    df[letter] = df.apply(lambda row: row[letter] if row['a'] in df_id[letter].tolist() else np.nan, axis=1)
Just replace isin with in.
The problem is that when you use apply on df with axis=1, x represents a single row, so x['a'] is a scalar value, not a Series.
isin is a method of Series and other list-like structures, so calling it on a scalar raises the error; using the plain in operator to check whether that element is in the list works instead.
Hope that was helpful. If you have any questions please ask.
Adapting a hard-to-find answer from Pandas New Column Calculation Based on Existing Columns Values:
for i, letter in enumerate(['b', 'c']):
    mask = df['a'].isin(df_id[letter])
    name = letter + '_new'
    # for some reason, df[letter] = df.loc[mask, letter] does not work
    df.loc[mask, name] = df.loc[mask, letter]
    df[letter] = df[name]
    del df[name]
This isn't pretty, but seems to work.
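The same masking idea can be written more compactly with Series.where, which keeps a value where the condition is True and fills in NaN elsewhere; a sketch:
for letter in ['b', 'c']:
    df[letter] = df[letter].where(df['a'].isin(df_id[letter]))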
If you have a bigger DataFrame and performance matters to you, you can first build a mask DataFrame and then apply it to your DataFrame.
First create the mask:
mask = df_id.apply(lambda x: df['a'].isin(x))
       b      c
0   True  False
1   True  False
2  False   True
This can be applied to the original dataframe:
df.iloc[:,1:] = df.iloc[:,1:].mask(~mask, np.nan)
     a    b    c
0   20  1.0  NaN
1   50  NaN  NaN
2  100  NaN  1.0