Pandas dataframe deduplicate rows with column logic - python-3.x

I have a pandas dataframe with about 100 million rows. I am interested in deduplicating it but have some criteria that I haven't been able to find documentation for.
I would like to deduplicate the dataframe, ignoring one column that will differ. If a row is a duplicate except for that column, I would like to keep only the row whose value in that column contains a specific string, say X.
Sample dataframe:
import pandas as pd
df = pd.DataFrame(columns=["A", "B", "C"],
                  data=[[1, 2, "00X"],
                        [1, 3, "010"],
                        [1, 2, "002"]])
Desired output:
>>> df_dedup
A B C
0 1 2 00X
1 1 3 010
So, alternatively stated, row index 2 would be removed because row index 0 carries the same information in columns A and B, and has an X in column C.
As this data is slightly large, I hope to avoid iterating over rows if possible. The ignore_index option of the built-in drop_duplicates() is the closest thing I've found.
If there is no X in column C, then a row should only be deduplicated when C is identical as well.
In the case where rows match on A and B but there are multiple distinct versions containing an X in C, the following would be expected.
df = pd.DataFrame(columns=["A", "B", "C"],
                  data=[[1, 2, "0X0"],
                        [1, 2, "X00"],
                        [1, 2, "0X0"]])
Output should be:
>>> df_dedup
A B C
0 1 2 0X0
1 1 2 X00

Use DataFrame.duplicated on columns A and B to create a boolean mask m1 marking rows whose values in columns A and B are not duplicated. Then use Series.str.contains plus Series.duplicated on column C to create a second mask m2 marking rows where C contains the string X and is itself not duplicated. Finally, use these masks to filter the rows of df.
m1 = ~df[['A', 'B']].duplicated()
m2 = df['C'].str.contains('X') & ~df['C'].duplicated()
df = df[m1 | m2]
Result:
#1
A B C
0 1 2 00X
1 1 3 010
#2
A B C
0 1 2 0X0
1 1 2 X00
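For reference, a self-contained version of this approach on the first sample (a sketch that simply combines the question's frame with the masks above):
import pandas as pd

df = pd.DataFrame(columns=["A", "B", "C"],
                  data=[[1, 2, "00X"],
                        [1, 3, "010"],
                        [1, 2, "002"]])

m1 = ~df[['A', 'B']].duplicated()                       # first occurrence of each (A, B) pair
m2 = df['C'].str.contains('X') & ~df['C'].duplicated()  # C contains X and C itself is not duplicated
df_dedup = df[m1 | m2]
print(df_dedup)
#    A  B    C
# 0  1  2  00X
# 1  1  3  010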

Does the column "C" always have X as the last character of each value? You could try creating a column D with 1 if column C has an X or 0 if it does not. Then just sort the values using sort_values and finally use drop_duplicates with keep='last'
import pandas as pd
df = pd.DataFrame(columns=["A", "B", "C"],
                  data=[[1, 2, "00X"],
                        [1, 3, "010"],
                        [1, 2, "002"]])
df['D'] = 0
df.loc[df['C'].str[-1] == 'X', 'D'] = 1
df.sort_values(by=['D'], inplace=True)
df.drop_duplicates(subset=['A', 'B'], keep='last', inplace=True)
This assumes you also want to drop duplicates when there is no X in the 'C' column among the rows that duplicate columns A and B.
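With the sample above, this keeps (1, 3, "010") and (1, 2, "00X"). If the helper column and the original row order are not wanted, they can be cleaned up afterwards (a small sketch, assuming the df from the snippet above):
df_dedup = df.drop(columns='D').sort_index()
print(df_dedup)
#    A  B    C
# 0  1  2  00X
# 1  1  3  010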

Here is another approach. I left 'count' (a helper column) in for transparency.
# use df as defined above
# count the A,B pairs
df['count'] = df.groupby(['A', 'B']).transform('count').squeeze()
m1 = (df['count'] == 1)
m2 = (df['count'] > 1) & df['C'].str.contains('X') # could be .endswith('X')
print(df.loc[m1 | m2]) # apply masks m1, m2
A B C count
0 1 2 00X 2
1 1 3 010 1
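If the helper column is not needed afterwards, it can be dropped when applying the masks (a small sketch, reusing m1 and m2 from above):
result = df.loc[m1 | m2].drop(columns='count')
print(result)
#    A  B    C
# 0  1  2  00X
# 1  1  3  010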

Related

creating a new column from an existing categorical column in python

I have a data frame like this:
ID col
1 a
2 b
3 c
4 d
I want to create a new column so that if col is a or c, the new column will give Y, otherwise N.
So, it will look like the following:
ID col col1
1 a Y
2 b N
3 c Y
4 d N
I am working in python3.
Try this code, simple and effective:
import pandas as pd
df = pd.DataFrame({'ID': [1, 2, 3, 4],
                   'Col': ['a', 'b', 'c', 'd']})
df['Col_2'] = df.apply(lambda row: 'Y' if (row.Col == 'a' or row.Col == 'c') else 'N', axis=1)
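For larger frames, a vectorized alternative that avoids the row-wise apply (a sketch using numpy.where and Series.isin, with the column names from the snippet above):
import numpy as np
df['Col_2'] = np.where(df['Col'].isin(['a', 'c']), 'Y', 'N')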

I want to count the occurrence of duplicate values in a column in a dataframe and update the count in a new column in python

Example: Let's say I have a df
Id
A
B
C
A
A
B
It should look like:
Id count
A. 1
B. 1
C. 1
A. 2
A. 3
B. 2
Note: I've tried using the for loop method and the while loop option; it works for small datasets but takes a lot of time for large datasets.
for i in df:
    for j in df:
        if i == j:
            count += 1
You can groupby with cumcount, like this:
df['counts'] = df.groupby('Id', sort=False).cumcount() + 1
df
Id counts
0 A 1
1 B 1
2 C 1
3 A 2
4 A 3
5 B 2
If you only need the total number of occurrences of each value rather than a running count, pivot_table with aggfunc='size' gives the group sizes (using the Id column from the example):
dups_values = df.pivot_table(index=['Id'], aggfunc='size')
print(dups_values)
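For the sample Id column above, this should print a single total per value rather than a running count:
Id
A    3
B    2
C    1
dtype: int64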

Remove duplicated permuted rows in Pandas

I have one Pandas DF with three columns like below:
City1 City2 Totalamount
0 A B 1000
1 A C 2000
2 B A 1000
3 B C 500
4 C A 2000
5 C B 500
I want to delete the duplicated rows where (City1, City2) = (City2, City1). The result should be:
City1 City2 Totalamount
0 A B 1000
1 A C 2000
2 B C 500
I tried
res=DFname.drop(DFname[(DFname.City1,DFname.City2) == (DFname.City2,DFname.City1)].index)
but it's giving an error.
Could you please help?
Thanks
You sort first, then drop the duplicates:
import numpy as np
cols = ['City1', 'City2']
df[cols] = np.sort(df[cols].values, axis=1)
df = df.drop_duplicates()
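Why this works: np.sort orders each city pair within its row, so a permuted pair like (B, A) becomes (A, B) and collides with its counterpart when drop_duplicates runs. A sketch of the intermediate frame for the sample above, just before drop_duplicates:
  City1 City2  Totalamount
0     A     B         1000
1     A     C         2000
2     A     B         1000
3     B     C          500
4     A     C         2000
5     B     C          500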
If the entire dataframe follows the pattern you show in your sample, where:
- all rows are duplicated like (A, B) and (B, A),
- there are no unpaired entries, and
- City1 and City2 are always different (no instances of (A, A)),
then you can simply do
df = df[df['City1'] < df['City2']]
If the sample is not representative of your whole dataframe, please include a sample that is.

Search for value in all DataFrame columns (except first column !) and add new column with matching column name

I'd like to do a search on all columns (except the first column !) of a DataFrame and add a new column (like 'Column_Match') with the name of the matching column.
I tried something like this:
df.apply(lambda row: row.astype(str).str.contains('my_keyword').any(), axis=1)
But it's not excluding the first column and I don't know how to return and add the column name.
Any help much appreciated !
If you want the column name of the first matched value per row, add an extra column for rows where no match exists (with DataFrame.assign; it is always True, so rows with no real match fall back to it) and take the name of the first True column with DataFrame.idxmax:
import pandas as pd
df = pd.DataFrame({
    'B': [4, 5, 4, 5, 5, 4],
    'A': list('abcdef'),
    'C': list('akabbe'),
    'F': list('eakbbb')
})
f = lambda row: row.astype(str).str.contains('e')
df['new'] = df.iloc[:,1:].apply(f, axis=1).assign(missing=True).idxmax(axis=1)
print (df)
B A C F new
0 4 a a e F
1 5 b k a missing
2 4 c a k missing
3 5 d b b missing
4 5 e b b A
5 4 f e b C
If you need the names of all columns with matched values, create a boolean DataFrame and take its dot product with the column names via DataFrame.dot, then strip the trailing separator with Series.str.rstrip:
f = lambda row: row.astype(str).str.contains('a')
df1 = df.iloc[:,1:].apply(f, axis=1)
df['new'] = df1.dot(df.columns[1:] + ', ').str.rstrip(', ').replace('', 'missing')
print (df)
B A C F new
0 4 a a e A, C
1 5 b k a F
2 4 c a k C
3 5 d b b missing
4 5 e b b missing
5 4 f e b missing

Accumulate column through pandas

I have multiple tab delimited files, all having the same entries. I intend to read each file and choose the first column as the index. My final table will have the first column as index, mapped against the last column from all the files. For this, I wrote some pandas code, but it is not great. Is there an alternate way to do this?
import pandas as pd
df1 = pd.read_csv("FB_test.tsv",sep='\t')
df1_idx = df1.set_index('target_id')
df1_idx.drop(df1_idx[['length','eff_length','est_counts']],inplace=True, axis=1)
print(df1_idx)
df2 = pd.read_csv("Myc_test.tsv",sep='\t')
df2_idx = df2.set_index('target_id')
df2_idx.drop(df2_idx[['length','eff_length','est_counts']],inplace=True, axis=1)
print(df2_idx)
frames = [df1_idx, df2_idx]
results = pd.concat(frames, axis=1)
results
The output it generated was,
tpm
target_id
A 0
B 0
C 0
D 0
E 0
tpm
target_id
A 1
B 1
C 1
D 1
E 1
Out[18]:
target_id tpm tpm
A 0 1
B 0 1
C 0 1
D 0 1
E 0 1
How can I loop it so that I read each file and achieve this same output?
Thanks,
AP
I think you can use the parameters index_col and usecols in read_csv with a list comprehension. But that gives duplicate column names (which is a problem for selecting), so it is better to add the keys parameter to concat - after flattening the resulting MultiIndex you get nice unique column names:
files = ["FB_test.tsv", "Myc_test.tsv"]
dfs = [pd.read_csv(f,sep='\t', index_col=['target_id'], usecols=['target_id','tpm'])
for f in files]
results = pd.concat(dfs, axis=1, keys=('a','b'))
results.columns = results.columns.map('_'.join)
results = results.reset_index()
print (results)
target_id a_tpm b_tpm
0 A 0 1
1 B 0 1
2 C 0 1
3 D 0 1
4 E 0 1
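If there are more than two files, the keys can be derived from the file names instead of hard-coded letters, so the columns stay self-describing (a sketch assuming the same file layout; splitting the extension off the file name is just one way to build the keys):
import os
import pandas as pd

files = ["FB_test.tsv", "Myc_test.tsv"]
keys = [os.path.splitext(f)[0] for f in files]   # 'FB_test', 'Myc_test'
dfs = [pd.read_csv(f, sep='\t', index_col=['target_id'], usecols=['target_id', 'tpm'])
       for f in files]
results = pd.concat(dfs, axis=1, keys=keys)
results.columns = results.columns.map('_'.join)  # 'FB_test_tpm', 'Myc_test_tpm'
results = results.reset_index()
print(results)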
To clean the code and use a looping mechanism, you can put both your file names and the columns you are dropping in two separate lists, and then use list comprehension on the file names to import each dataset. Subsequently, you concatenate the output of the list comprehension into one dataframe:
import pandas as pd
drop_cols = ['length','eff_length','est_counts']
filenames = ["FB_test.tsv", "Myc_test.tsv"]
results = pd.concat(
    [pd.read_csv(filename, sep='\t').set_index('target_id').drop(drop_cols, axis=1)
     for filename in filenames],
    axis=1)
I hope this helps.
