Python pandas apply: how to replace a function with a lambda function? - python-3.x

I have a dataframe and the function that I would like to apply:
>>> import pandas as pd
>>> import numpy as np
>>> df = pd.DataFrame({
...     'A': ['A1', 'A2', 'A3'],
...     'B': ['B1', 'B2', 'B3'],
...     'format_str': [None, np.nan, 'A = {A}, B = {B}']
... })
>>> df
    A   B        format_str
0  A1  B1              None
1  A2  B2               NaN
2  A3  B3  A = {A}, B = {B}
>>> def gen_format_str(ser):
...     if pd.isna(ser.format_str):
...         return ser.A
...     else:
...         # return ser.format_str.format(A=ser.A, B=ser.B)
...         return ser.format_str.format(**ser)
...
>>> df['new_field'] = df.apply(
...     gen_format_str, axis=1
... )
>>> df
    A   B        format_str       new_field
0  A1  B1              None              A1
1  A2  B2               NaN              A2
2  A3  B3  A = {A}, B = {B}  A = A3, B = B3
>>>
Everything works as it should, but I would like to use a lambda function instead of gen_format_str.
I tried different approaches, but none of them worked.
How can I implement the same functionality as gen_format_str using a lambda function in the apply method?
Regards.

This seems to do the job:
df['new_field'] = df.apply(
    lambda ser: ser.A if pd.isna(ser.format_str) else ser.format_str.format(**ser),
    axis=1
)
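For this row-wise pattern, a plain list comprehension over the zipped columns is a common apply-free alternative; the following is a sketch of the same logic, not part of the answer above:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'A': ['A1', 'A2', 'A3'],
    'B': ['B1', 'B2', 'B3'],
    'format_str': [None, np.nan, 'A = {A}, B = {B}'],
})

# Same branch as gen_format_str, but without building a Series per row
df['new_field'] = [
    a if pd.isna(f) else f.format(A=a, B=b)
    for a, b, f in zip(df['A'], df['B'], df['format_str'])
]
```

On larger frames this usually runs noticeably faster than apply(axis=1), since apply constructs a fresh Series for every row before calling the function.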

Related

Create a new column using if/else statements from lists

I need to create a new column C using if and else statements from the A and B columns, as in the example below.
The code below returns nothing;
can anybody point me to the correct version?
import numpy as np
import pandas as pd
a = np.arange(10)
b = [0.1,0.3,0.1, 0.2, 0.5, 0.4,0.7,0.56,
0.78, 0.45]
df= pd.DataFrame(data=b, columns=['B'])
df2= pd.DataFrame(data=a, columns=['A'])
A = df2['A']
B = df['B']
print (A, B)
def comma(A, B, c):
    if B >= 0.1 and B < 0.4:
        c = B * 2
    else:
        c = B * A
    print(c)
If you consider a dataframe with two columns 'A' and 'B', you can use the apply function to return a new column based on your conditions:
data = np.random.rand(10, 2)
df = pd.DataFrame(data=data, columns=['A', 'B'])
def cdt(x):
    if x['B'] >= 0.1 and x['B'] < 0.4:
        return 2 * x['B']
    return x['B'] * x['A']

df['C'] = df.apply(cdt, axis=1)
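For simple conditions like this, an apply-free alternative is np.where, which evaluates the condition on whole columns at once; the following is a sketch along those lines, not part of the original answer:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1.0, 2.0, 3.0], 'B': [0.1, 0.5, 0.39]})

# Vectorized equivalent of cdt: one boolean mask, no per-row Python call
cond = (df['B'] >= 0.1) & (df['B'] < 0.4)
df['C'] = np.where(cond, 2 * df['B'], df['B'] * df['A'])
```

np.where picks the second argument where the mask is True and the third where it is False, so the whole column is computed in one pass.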

How to iterate over dataframes and append data with combined names

I have this problem to solve. It is a continuation of a previous question (How to iterate over pandas df with a def function variable function) and the given answer worked perfectly, but now I have to append all the data into a two-column dataframe (Adduct_name and mass).
This is from the previous question:
My goal: I have to calculate the "adducts" for a given "Compound". Both represent numbers, but for each "Compound" there are 46 different "Adducts".
Each adduct is calculated as follows:
Adduct = Exact_mass * M / Charge + Adduct_mass
where Exact_mass is a number, M and Charge are integers (1, 2, 3, etc.) depending on the type of adduct, and Adduct_mass is a number (positive or negative) specific to each adduct.
My data: 2 data frames. One with the Adduct names, M, Charge, and Adduct_mass. The other one corresponds to the Compound_name and Exact_mass of the compounds I want to iterate over (I just put in a small data set).
Adducts: df_al
import pandas as pd
data = [["M+3H", 3, 1, 1.007276], ["M+3Na", 3, 1, 22.989],
        ["M+H", 1, 1, 1.007276], ["2M+H", 1, 2, 1.007276],
        ["M-3H", 3, 1, -1.007276]]
df_al = pd.DataFrame(data, columns=["Ion_name", "Charge", "M", "Adduct_mass"])
Compounds: df
import pandas as pd
data1 = [[1, "C3H64O7", 596.465179], [2, "C30H42O7", 514.293038],
         [4, "C44H56O8", 712.397498], [4, "C24H32O6S", 448.191949],
         [5, "C20H28O3", 316.203834]]
df = pd.DataFrame(data1, columns=["CdId", "Formula", "exact_mass"])
The solution to this problem was:
df_name = df_al["Ion_name"]
df_mass = df_al["Adduct_mass"]
df_div = df_al["Charge"]
df_M = df_al["M"]
# Defining the general function
def Adduct(x, i):
    return x * df_M[i] / df_div[i] + df_mass[i]

# Applying the general function for indices 0 to 4
for i in range(5):
    df[df_name.loc[i]] = df['exact_mass'].map(lambda x: Adduct(x, i))
Output
  Name  exact_mass        M+3H       M+3Na         M+H         2M+H        M-3H
0    a  596.465179  199.829002  221.810726  597.472455  1193.937634  197.814450
1    b  514.293038  172.438289  194.420013  515.300314  1029.593352  170.423737
2    c  712.397498  238.473109  260.454833  713.404774  1425.802272  236.458557
3    d  448.191949  150.404592  172.386316  449.199225   897.391174  148.390040
4    e  316.203834  106.408554  128.390278  317.211110   633.414944  104.39400
Those are the right calculations, but now I need a file where:
- only 2 columns exist (Name and Mass)
- all the different adducts are appended one after another
Desired output:
Name     Mass
a_M+3H   199.829002
a_M+3Na  221.810726
a_M+H    597.472455
a_2M+H   1193.937634
a_M-3H   197.814450
b_M+3H   172.438289
.
.
.
c_M+3H
and so on.
I also need to combine the name of the respective compound with the ion form (M+3H, M+H, etc.).
At this point I have no code for that.
I would appreciate any advice, and a better approach from the beginning.
This part is an update of the question above:
Is it possible to obtain an output like this one:
Name     Mass         RT
a_M+3H   199.829002   1
a_M+3Na  221.810726   1
a_M+H    597.472455   1
a_2M+H   1193.937634  1
a_M-3H   197.814450   1
b_M+3H   172.438289   3
.
.
.
c_M+3H                2
The RT value is the same for all forms of a compound; in this example RT for a = 1, b = 3, c = 2, etc.
Is it possible to incorporate (keep) this column from the data set df (which I update here below)? As you can see, df has more columns, like "Formula" and "RT", which disappear after the calculations.
import pandas as pd
data1 = [["a", "C3H64O7", 596.465179, 1], ["b", "C30H42O7", 514.293038, 3],
         ["c", "C44H56O8", 712.397498, 2], ["d", "C24H32O6S", 448.191949, 4],
         ["e", "C20H28O3", 316.203834, 1.5]]
df = pd.DataFrame(data1, columns=["Name", "Formula", "exact_mass", "RT"])
Part three! (sorry, and thank you)
This is a trial I did on a small data set (df) using the code below, with the same df_al as above.
Code:
# Defining variables for the calculation
df_name = df_al["Ion_name"]
df_mass = df_al["Adduct_mass"]
df_div = df_al["Charge"]
df_M = df_al["M"]
df_ID = df["Name"]

# Defining the RT dictionary
RT = dict(zip(df["Name"], df["RT"]))

# Removing the RT column
df = df.drop(columns=["RT"])

# Defining the general function
def Adduct(x, i):
    return x * df_M[i] / df_div[i] + df_mass[i]

# Applying the general function for all 47 adducts (indices 0 to 46)
for i in range(47):
    df[df_name.loc[i]] = df['exact_mass'].map(lambda x: Adduct(x, i))
df
# Melting
df = pd.melt(df, id_vars=['Name'], var_name="Adduct", value_name="Exact_mass",
             value_vars=[x for x in df.columns if 'Name' not in x and 'exact' not in x])
df['name'] = df.apply(lambda x: x[0] + "_" + x[1], axis=1)
df['RT'] = df.Name.apply(lambda x: RT[x[0]] if x[0] in RT else np.nan)
del df['Name']
del df['Adduct']
df['RT'] = df.name.apply(lambda x: RT[x[0]] if x[0] in RT else np.nan)
df
Why NaN?
Here is how I would go about it; pandas.melt comes to the rescue:
import pandas as pd
import numpy as np
from io import StringIO
s = StringIO('''
Name exact_mass M+3H M+3Na M+H 2M+H M-3H
0 a 596.465179 199.829002 221.810726 597.472455 1193.937634 197.814450
1 b 514.293038 172.438289 194.420013 515.300314 1029.593352 170.423737
2 c 712.397498 238.473109 260.454833 713.404774 1425.802272 236.458557
3 d 448.191949 150.404592 172.386316 449.199225 897.391174 148.390040
4 e 316.203834 106.408554 128.390278 317.211110 633.414944 104.39400
''')
df = pd.read_csv(s, sep=r"\s+")
df = pd.melt(df, id_vars=['Name'],
             value_vars=[x for x in df.columns if 'Name' not in x and 'exact' not in x])
df['name'] = df['Name'] + "_" + df['variable']
del df['Name']
del df['variable']
RT = {'a': 1, 'b': 2, 'c': 3, 'd': 5, 'e': 1.5}
df['RT'] = df.name.apply(lambda x: RT[x[0]] if x[0] in RT else np.nan)
df
df
The output is a long-format frame with one row per compound/adduct pair (columns value, name and RT).
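An alternative sketch that skips the wide intermediate frame entirely: build the full compound x adduct table with a cross join (merge(how="cross"), available in pandas >= 1.2) and compute all masses in one vectorized expression. RT survives automatically because it is never dropped. This is an assumption-laden variation on the accepted approach, shown here on a two-compound subset:

```python
import pandas as pd

df_al = pd.DataFrame(
    [["M+3H", 3, 1, 1.007276], ["M+3Na", 3, 1, 22.989],
     ["M+H", 1, 1, 1.007276], ["2M+H", 1, 2, 1.007276],
     ["M-3H", 3, 1, -1.007276]],
    columns=["Ion_name", "Charge", "M", "Adduct_mass"])
df = pd.DataFrame([["a", 596.465179, 1], ["b", 514.293038, 3]],
                  columns=["Name", "exact_mass", "RT"])

# Every compound paired with every adduct (2 x 5 = 10 rows)
pairs = df.merge(df_al, how="cross")
pairs["Mass"] = (pairs["exact_mass"] * pairs["M"] / pairs["Charge"]
                 + pairs["Adduct_mass"])
pairs["Name"] = pairs["Name"] + "_" + pairs["Ion_name"]
out = pairs[["Name", "Mass", "RT"]]
```

No melt and no column bookkeeping are needed, because the result is built directly in the desired long format.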

How to make a column with lists from columns of list elements in a pandas dataframe?

I have a pandas dataframe like
test = pd.DataFrame([[['P', 'N'], ['Z', 'P']], [['N', 'N'], ['Z', 'P']]],
                    columns=['c1', 'c2'])
I want to add another column c3 to test whose elements are
['PZ', 'NP']
['NZ', 'NP']
How can I do this?
Use assign:
df = test.assign(c3=[[x[0] + y[0], x[1] + y[1]] for x, y in test.values.tolist()])
Or:
df = test.assign(c3=list(map(list, zip(test.c1.str[0] + test.c2.str[0],
                                       test.c1.str[1] + test.c2.str[1]))))
print(df)
       c1      c2        c3
0  [P, N]  [Z, P]  [PZ, NP]
1  [N, N]  [Z, P]  [NZ, NP]
print([[x[0] + y[0], x[1] + y[1]] for x, y in test.values.tolist()])
[['PZ', 'NP'], ['NZ', 'NP']]
print(list(map(list, zip(test.c1.str[0] + test.c2.str[0],
                         test.c1.str[1] + test.c2.str[1]))))
[['PZ', 'NP'], ['NZ', 'NP']]
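Both one-liners hard-code positions 0 and 1; if the lists can have any length, zipping the paired elements generalizes the same idea. A sketch:

```python
import pandas as pd

test = pd.DataFrame([[['P', 'N'], ['Z', 'P']],
                     [['N', 'N'], ['Z', 'P']]],
                    columns=['c1', 'c2'])

# Concatenate the i-th elements of each row's two lists, whatever the length
test['c3'] = [[x + y for x, y in zip(l1, l2)]
              for l1, l2 in zip(test['c1'], test['c2'])]
```

zip stops at the shorter list, so rows with unequal list lengths are truncated rather than raising.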

Pandas - Fastest way indexing with 2 dataframes

I am developing software in Python 3 with the pandas library.
Time is very important, but memory not so much.
For better visualization I am using the names a and b with few values, although there are many more:
a -> 50000 rows
b -> 5000 rows
I need to select from dataframes a and b (using multiple conditions)
import numpy as np
a = pd.DataFrame({
    'a1': ['x', 'y', 'z'],
    'a2': [1, 2, 3],
    'a3': [3.14, 2.73, -23.00],
    'a4': [np.nan, np.nan, np.nan]
})
a
  a1  a2     a3  a4
0  x   1   3.14 NaN
1  y   2   2.73 NaN
2  z   3 -23.00 NaN
b = pd.DataFrame({
    'b1': ['x', 'y', 'z', 'k', 'l'],
    'b2': [2018, 2019, 2020, 2015, 2012]
})
b
  b1    b2
0  x  2018
1  y  2019
2  z  2020
3  k  2015
4  l  2012
So far my code is like this:
for index, row in a.iterrows():
    try:
        # create a key
        a1 = row["a1"]
        mask = b.loc[(b['b1'] == a1) & (b['b2'] != 2019)]
        # check if exists
        if len(mask.index) != 0:  # not empty
            a.loc[[index], ['a4']] = mask.iloc[0]['b2']
    except KeyError:  # not found
        pass
But as you can see, I'm using iterrows, which is very slow compared to other methods, and I'm modifying the DataFrame while iterating over it, which is not recommended.
Could you help me find a better way? The result should look like this:
a
  a1  a2     a3    a4
0  x   1   3.14  2018
1  y   2   2.73   NaN
2  z   3 -23.00  2020
I tried things like the line below, but I didn't manage to make it work.
a.loc[ (a['a1'] == b['b1']) , 'a4'] = b.loc[b['b2'] != 2019]
*the real code has more conditions
Thanks!
EDIT
I benchmarked using iterrows, merge, and set_index/loc. Here is the code:
import timeit
import numpy as np
import pandas as pd

def f_iterrows():
    for index, row in a.iterrows():
        try:
            # create a key
            a1 = row["a1"]
            a3 = row["a3"]
            mask = b.loc[(b['b1'] == a1) & (b['b2'] != 2019)]
            # check if exists
            if len(mask.index) != 0:  # not empty
                a.loc[[index], ['a4']] = mask.iloc[0]['b2']
        except KeyError:  # not found
            pass

def f_merge():
    a.merge(b[b.b2 != 2019], left_on='a1', right_on='b1', how='left') \
     .drop(['a4', 'b1'], axis=1).rename(columns={'b2': 'a4'})

def f_lock():
    df1 = a.set_index('a1')
    df2 = b.set_index('b1')
    df1.loc[:, 'a4'] = df2.b2[df2.b2 != 2019]

# variables for testing
number_rows = 100
number_iter = 100

a = pd.DataFrame({
    'a1': ['x', 'y', 'z'] * number_rows,
    'a2': [1, 2, 3] * number_rows,
    'a3': [3.14, 2.73, -23.00] * number_rows,
    'a4': [np.nan, np.nan, np.nan] * number_rows
})
b = pd.DataFrame({
    'b1': ['x', 'y', 'z', 'k', 'l'] * number_rows,
    'b2': [2018, 2019, 2020, 2015, 2012] * number_rows
})
print('For: %s s' % str(timeit.timeit(f_iterrows, number=number_iter)))
print('Merge: %s s' % str(timeit.timeit(f_merge, number=number_iter)))
print('Loc: %s s' % str(timeit.timeit(f_iterrows, number=number_iter)))
They all worked :) and the run times are:
For: 277.9994369489998 s
Loc: 274.04929955067564 s
Merge: 2.195712725706926 s
So far Merge is the fastest.
If another option appears I will update here, thanks again.
IIUC:
a.merge(b[b.b2 != 2019], left_on='a1', right_on='b1', how='left') \
 .drop(['a4', 'b1'], axis=1).rename(columns={'b2': 'a4'})
Out[263]:
  a1  a2     a3      a4
0  x   1   3.14  2018.0
1  y   2   2.73     NaN
2  z   3 -23.00  2020.0
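Another option in the same spirit, sketched here rather than taken from the answer above: turn the filtered b into a lookup Series and map it onto a1. The drop_duplicates call is an assumption-guard against repeated b1 keys, which would otherwise make the lookup ambiguous:

```python
import numpy as np
import pandas as pd

a = pd.DataFrame({'a1': ['x', 'y', 'z'],
                  'a2': [1, 2, 3],
                  'a3': [3.14, 2.73, -23.00],
                  'a4': [np.nan, np.nan, np.nan]})
b = pd.DataFrame({'b1': ['x', 'y', 'z', 'k', 'l'],
                  'b2': [2018, 2019, 2020, 2015, 2012]})

# b1 -> b2 lookup built from the rows that pass the filter
lookup = b.loc[b['b2'] != 2019].drop_duplicates('b1').set_index('b1')['b2']
a['a4'] = a['a1'].map(lookup)
```

Unlike merge, this writes the result into a in place and leaves unmatched keys ('y' here, filtered out by the 2019 condition) as NaN.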

pandas.isnull() not working on decimal type?

Am I missing something, or is there an issue with pandas.isnull()?
>>> import pandas as pd
>>> import decimal
>>> d = decimal.Decimal('NaN')
>>> d
Decimal('NaN')
>>> pd.isnull(d)
False
>>> f = float('NaN')
>>> f
nan
>>> pd.isnull(f)
True
>>> pd.isnull(float(d))
True
The problem is that I have a dataframe with decimal.Decimal values in it, and df.dropna() doesn't remove the NaNs for this reason...
Yes, this isn't supported; you can use the property that NaN does not equal itself, which still holds for Decimal types:
In [20]:
import pandas as pd
import decimal
d = decimal.Decimal('NaN')
df = pd.DataFrame({'a':[d]})
df
Out[20]:
a
0 NaN
In [21]:
df['a'].apply(lambda x: x != x)
Out[21]:
0 True
Name: a, dtype: bool
So you can do:
In [26]:
df = pd.DataFrame({'a':[d,1,2,3]})
df[df['a'].apply(lambda x: x == x)]
Out[26]:
a
1 1
2 2
3 3
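Building on the self-inequality trick, a dropna substitute for a Decimal column can be sketched like this (assuming a pandas version where isnull does not recognize Decimal('NaN')):

```python
import decimal
import pandas as pd

d = decimal.Decimal('NaN')
df = pd.DataFrame({'a': [d, decimal.Decimal('1'), decimal.Decimal('2')]})

# Decimal NaN follows IEEE semantics: it never equals itself,
# so (x == x) is True exactly for the non-NaN entries
clean = df[df['a'].apply(lambda x: x == x)]
```

Alternatively, converting the column with df['a'].astype(float) makes pd.isnull and dropna work directly, at the cost of losing Decimal precision.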
