Comparing equality of groupby objects - python-3.x

Say we have dataframe one df1 and dataframe two df2.
import pandas as pd
dict1= {'group':['A','A','B','C','C','C'],'col2':[1,7,4,2,1,0],'col3':[1,1,3,4,5,3]}
df1 = pd.DataFrame(data=dict1).set_index('group')
dict2 = {'group':['A','A','B','C','C','C'],'col2':[1,7,400,2,1,0],'col3':[1,1,3,4,5,3500]}
df2 = pd.DataFrame(data=dict2).set_index('group')
df1
col2 col3
group
A 1 1
A 7 1
B 4 3
C 2 4
C 1 5
C 0 3
df2
col2 col3
group
A 1 1
A 7 1
B 400 3
C 2 4
C 1 5
C 0 3500
In pandas it is easy to compare the equality of these two dataframes with df1.equals(df2); in this case it returns False.
However, we can see that some of the groups (A in the given toy example) are equal and some are not (groups B and C). I want to check for equality between these groups, in other words, check the equality between the sub-dataframes with index A, B, etc.
Here is my attempt. We wish to group the data
g1 = df1.groupby('group')
g2 = df2.groupby('group')
Naively trying g1.equals(g2) gives the error Cannot access callable attribute 'equals' of 'DataFrameGroupBy' objects, try using the 'apply' method.
However, if we try
g1.apply(lambda x: x.equals(g2))
We get a series
group
A False
B False
C False
dtype: bool
However, the first entry should be True, since group A is equal between the two dataframes.
I can see that I could laboriously construct nested loops to do this, but that's slow. I feel there should be a way to do this in pandas without using loops. Am I misusing the apply method?

You can call get_group on g2 to retrieve the group to compare, and you can access the group name using the .name attribute:
In[316]:
g1.apply(lambda x: x.equals(g2.get_group(x.name)))
Out[316]:
group
A True
B False
C False
dtype: bool
EDIT
To handle non-existent groups:
In[320]:
g1.apply(lambda x: x.equals(g2.get_group(x.name)) if x.name in g2.groups else False)
Out[320]:
group
A True
B False
C False
dtype: bool
Example:
In[323]:
dict1 = {'group':['A','A','B','C','C','C','D'],'col2':[1,7,4,2,1,0,-1],'col3':[1,1,3,4,5,3,-1]}
df1 = pd.DataFrame(data=dict1).set_index('group')
g1 = df1.groupby('group')
g1.apply(lambda x: x.equals(g2.get_group(x.name)) if x.name in g2.groups else False)
Out[323]:
group
A True
B False
C False
D False
dtype: bool
Here .groups returns a dict of the groups; the keys are the group names/labels, so we can test for existence using x.name in g2.groups and modify the lambda to handle non-existent groups.
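If the two dataframes are known to have the same shape and identically labeled indexes (as in the toy example above), a loop-free alternative is to compare them element-wise and aggregate per group. This is only a sketch and relies on that alignment assumption:
# assumes df1 and df2 have the same shape and identically labeled indexes
(df1 == df2).all(axis=1).groupby(level='group').all()
group
A True
B False
C False
dtype: bool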

Related

Pandas dataframe deduplicate rows with column logic

I have a pandas dataframe with about 100 million rows. I am interested in deduplicating it but have some criteria that I haven't been able to find documentation for.
I would like to deduplicate the dataframe, ignoring one column that will differ. If that row is a duplicate, except for that column, I would like to only keep the row that has a specific string, say X.
Sample dataframe:
import pandas as pd
df = pd.DataFrame(columns=["A", "B", "C"],
                  data=[[1, 2, "00X"],
                        [1, 3, "010"],
                        [1, 2, "002"]])
Desired output:
>>> df_dedup
A B C
0 1 2 00X
1 1 3 010
So, alternatively stated, row index 2 would be removed because row index 0 has the same information in columns A and B, and an X in column C.
As this data is slightly large, I hope to avoid iterating over rows, if possible. Ignore Index is the closest thing I've found to the built-in drop_duplicates().
If there is no X in column C then the row should require that C is identical to be deduplicated.
In the case in which there are matching A and B in a row, but have multiple versions of having an X in C, the following would be expected.
df = pd.DataFrame(columns=["A", "B", "C"],
                  data=[[1, 2, "0X0"],
                        [1, 2, "X00"],
                        [1, 2, "0X0"]])
Output should be:
>>> df_dedup
A B C
0 1 2 0X0
1 1 2 X00
Use DataFrame.duplicated on columns A and B to create a boolean mask m1 marking rows where the values in columns A and B are not duplicated, then use Series.str.contains + Series.duplicated on column C to create a boolean mask m2 marking rows where C contains the string X and is not duplicated. Finally, use these masks to filter the rows of df.
m1 = ~df[['A', 'B']].duplicated()
m2 = df['C'].str.contains('X') & ~df['C'].duplicated()
df = df[m1 | m2]
Result:
#1
A B C
0 1 2 00X
1 1 3 010
#2
A B C
0 1 2 0X0
1 1 2 X00
Does the column "C" always have X as the last character of each value? You could try creating a column D with 1 if column C has an X, or 0 if it does not. Then sort the values using sort_values and finally use drop_duplicates with keep='last'.
import pandas as pd
df = pd.DataFrame(columns=["A", "B", "C"],
                  data=[[1, 2, "00X"],
                        [1, 3, "010"],
                        [1, 2, "002"]])
df['D'] = 0
df.loc[df['C'].str[-1] == 'X', 'D'] = 1
df.sort_values(by=['D'], inplace=True)
df.drop_duplicates(subset=['A', 'B'], keep='last', inplace=True)
This is assuming you also want to drop duplicates in case there is no X in the 'C' column among the duplicates of columns A and B.
Here is another approach. I left 'count' (a helper column) in for transparency.
# use df as defined above
# count the A,B pairs
df['count'] = df.groupby(['A', 'B']).transform('count').squeeze()
m1 = (df['count'] == 1)
m2 = (df['count'] > 1) & df['C'].str.contains('X') # could be .endswith('X')
print(df.loc[m1 | m2]) # apply masks m1, m2
A B C count
0 1 2 00X 2
1 1 3 010 1
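A variation on the same count-based idea (a sketch, not part of the original answer) uses transform('size'), so the helper column and squeeze() are not needed:
# count the A,B pairs without adding a helper column
counts = df.groupby(['A', 'B'])['C'].transform('size')
keep = (counts == 1) | df['C'].str.contains('X')
df_dedup = df.loc[keep, ['A', 'B', 'C']]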

conditionally multiply values in DataFrame row

Here is an example DataFrame:
df = pd.DataFrame([[1,0.5,-0.3],[0,-4,7],[1,0.12,-.06]], columns=['condition','value1','value2'])
I would like to apply a function which multiplies the values ('value1' and 'value2') in each row by 100 if the value in the 'condition' column of that row is equal to 1; otherwise, the row is left as is.
Presumably some usage of .apply with a lambda function would work here, but I am not able to get the syntax right, e.g.
df.apply(lambda x: 100*x if x['condition'] == 1, axis=1)
will not work
the desired output after applying this operation would be:
condition value1 value2
0 1 50.0 -30.0
1 0 -4.0 7.0
2 1 12.0 -6.0
As simple as
df.loc[df.condition==1,'value1':]*=100
import numpy as np
df['value1'] = np.where(df['condition']==1, df['value1']*100, df['value1'])
df['value2'] = np.where(df['condition']==1, df['value2']*100, df['value2'])
In case of multiple columns:
# create a list of columns you want to apply the condition to
columns_list = ['value1','value2']
for i in columns_list:
    df[i] = np.where(df['condition']==1, df[i]*100, df[i])
Use df.loc[] with the condition, filtered to the list of cols to operate on, then multiply:
l=['value1','value2'] #list of cols to operate on
df.loc[df.condition.eq(1),l]=df.mul(100)
#if condition is just 0 and 1 -> df.loc[df.condition.astype(bool),l]=df.mul(100)
print(df)
Another solution using df.mask() using same list of cols as above:
df[l]=df[l].mask(df.condition.eq(1),df[l]*100)
print(df)
condition value1 value2
0 1 50.0 -30.0
1 0 -4.0 7.0
2 1 12.0 -6.0
np.where works like a vectorized if/else: use a mask to filter, and where the mask is True it chooses the second argument, where False the third.
import numpy as np
value_cols = ['value1','value2']
mask = (df.condition == 1)
df[value_cols] = np.where(mask.to_numpy()[:, None], df[value_cols].mul(100), df[value_cols])
If you have multiple value columns such as value1, value2, and so on, use
value_cols = df.filter(regex=r'value\d').columns
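Putting the two ideas together, a short sketch (assuming the value columns really are named value1, value2, ...) that selects them by regex and multiplies in place with .loc:
value_cols = df.filter(regex=r'value\d').columns
df.loc[df['condition'] == 1, value_cols] *= 100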

Looking for NaN values in a specific column in df [duplicate]

Now I know how to check the dataframe for specific values across multiple columns. However, I can't seem to work out how to carry out an if statement based on a boolean response.
For example:
Walk directories using os.walk and read in a specific file into a dataframe.
for root, dirs, files in os.walk(main):
    filters = '*specificfile.csv'
    for filename in fnmatch.filter(files, filters):
        df = pd.read_csv(os.path.join(root, filename), error_bad_lines=False)
Now I am checking that dataframe across multiple columns: the first value being the column name (column1), the next value being the specific value I am looking for in that column (banana). I am then checking another column (column2) for a specific value (green). If both of these are true I want to carry out a specific task, however if it is false I want to do something else.
so something like:
if (df['column1']=='banana') & (df['colour']=='green'):
    do something
else:
    do something
If you want to check if any row of the DataFrame meets your conditions you can use .any() along with your condition. Example -
if ((df['column1']=='banana') & (df['colour']=='green')).any():
Example -
In [16]: df
Out[16]:
A B
0 1 2
1 3 4
2 5 6
In [17]: ((df['A']==1) & (df['B'] == 2)).any()
Out[17]: True
This is because your condition - ((df['column1']=='banana') & (df['colour']=='green')) - returns a Series of True/False values.
This is because in pandas, when you compare a Series against a scalar value, each row of that Series is compared against the scalar, and the result is a Series of True/False values indicating the outcome of the comparison for each row. Example -
In [19]: (df['A']==1)
Out[19]:
0 True
1 False
2 False
Name: A, dtype: bool
In [20]: (df['B'] == 2)
Out[20]:
0 True
1 False
2 False
Name: B, dtype: bool
And the & does a row-wise AND of the two series. Example -
In [18]: ((df['A']==1) & (df['B'] == 2))
Out[18]:
0 True
1 False
2 False
dtype: bool
Now to check if any of the values from this series is True, you can use .any(); to check if all the values in the series are True, you can use .all().
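Tying this back to the original if/else, a sketch using the asker's column names; do_something and do_something_else are placeholders for the two branches:
mask = (df['column1'] == 'banana') & (df['colour'] == 'green')
if mask.any():           # True if at least one row matches both conditions
    do_something()       # placeholder
else:
    do_something_else()  # placeholder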

pandas create a Boolean column for a df based on one condition on a column of another df

I have two dfs, A and B. A is like,
date id
2017-10-31 1
2017-11-01 2
2017-08-01 3
B is like,
type id
1 1
2 2
3 3
I would like to create a new boolean column has_b for A, set to True if its corresponding row in B (A joins B on id) does not have type == 1 and its date is more than 90 days old compared to datetime.utcnow(); and False otherwise. Here is my solution:
from datetime import datetime

B = B[B['type'] != 1]
A['has_b'] = A.merge(B[['id', 'type']], how='left', on='id')['date'].apply(lambda x: datetime.utcnow().day - x.day > 90)
A['has_b'].fillna(value=False, inplace=True)
I expect to see A result in:
date id has_b
2017-10-31 1 False
2017-11-01 2 False
2017-08-01 3 True
I am wondering if there is a better way to do this, in terms of more concise and efficient code.
First merge A and B on id -
i = A.merge(B, on='id')
Now, compute has_b -
x = i.type.ne(1)
y = (pd.to_datetime('today') - i.date).dt.days.gt(90)
i['has_b'] = (x & y)
Merge back i and A -
C = A.merge(i[['id', 'has_b']], on='id')
C
date id has_b
0 2017-10-31 1 False
1 2017-11-01 2 False
2 2017-08-01 3 True
Details
x will return a boolean mask for the first condition.
i.type.ne(1)
0 False
1 True
2 True
Name: type, dtype: bool
y will return a boolean mask for the second condition. Use to_datetime('today') to get the current date, subtract this from the date column, and access the days component with dt.days.
(pd.to_datetime('today') - i.date).dt.days.gt(90)
0 False
1 False
2 True
Name: date, dtype: bool
In case A's and B's IDs do not align, you may need a left merge instead of an inner merge for the last step -
C = A.merge(i[['id', 'has_b']], on='id', how='left')
C's has_b column will contain NaNs in this case.
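For reference, a self-contained sketch of this approach; the date strings are assumed to be parseable by pd.to_datetime, and the resulting booleans depend on the day you run it:
import pandas as pd

A = pd.DataFrame({'date': pd.to_datetime(['2017-10-31', '2017-11-01', '2017-08-01']),
                  'id': [1, 2, 3]})
B = pd.DataFrame({'type': [1, 2, 3], 'id': [1, 2, 3]})

i = A.merge(B, on='id')
i['has_b'] = i['type'].ne(1) & (pd.to_datetime('today') - i['date']).dt.days.gt(90)
C = A.merge(i[['id', 'has_b']], on='id', how='left')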

finding values in pandas series - Python3

I have this excruciatingly annoying problem (I'm quite new to Python):
df = pd.DataFrame({'col1':['1','2','3','4']})
col1=df['col1']
Why does col1[1] in col1 return False?
To check values, use boolean indexing:
#get value where index is 1
print (col1[1])
2
#more common with loc
print (col1.loc[1])
2
print (col1 == '2')
0 False
1 True
2 False
3 False
Name: col1, dtype: bool
And if you need to get the matching rows:
print (col1[col1 == '2'])
1 2
Name: col1, dtype: object
To check multiple values (logical or):
print (col1.isin(['2', '4']))
0 False
1 True
2 False
3 True
Name: col1, dtype: bool
print (col1[col1.isin(['2', '4'])])
1 2
3 4
Name: col1, dtype: object
And from the docs, on using in to test membership:
Using the Python in operator on a Series tests for membership in the index, not membership among the values.
If this behavior is surprising, keep in mind that using in on a Python dictionary tests keys, not values, and Series are dict-like. To test for membership in the values, use the method isin():
For DataFrames, likewise, in applies to the column axis, testing for membership in the list of column names.
#1 is in index
print (1 in col1)
True
#5 is not in index
print (5 in col1)
False
#string 2 is not in index
print ('2' in col1)
False
#number 2 is in index
print (2 in col1)
True
Here you are trying to find the string '2' among the index values:
print (col1[1])
2
print (type(col1[1]))
<class 'str'>
print (col1[1] in col1)
False
I might be missing something, and this is years later, but as I read the question, you are trying to get the in keyword to work on your pandas Series? So you probably want to do:
col1[1] in col1.values
Because, as mentioned above, pandas looks through the index, and you need to specifically ask it to look at the values of the series, not the index.
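A quick check of the difference, using the same col1 as above:
print(col1[1] in col1)         # False - in tests the index labels
print(col1[1] in col1.values)  # True - membership among the values
print(1 in col1)               # True - 1 is an index label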
