How to find the first occurrence of the sum closest to a given value - python-3.x

I have an array like the one below, with columns ['item', 'Space', 'rem_spc']:
array([['Pineapple', 0.5, 0.5],
       ['Mango', 0.75, 0.25],
       ['Apple', 0.375, 0.625],
       ['Melons', 0.25, 0.75],
       ['Grape', 0.125, 0.875]], dtype=object)
I need to convert this array to a DataFrame and add a new column ['nxt_item'], generated for the first row alone (here, for Pineapple), with this condition: find the first combination of other items whose 'Space' values sum to Pineapple's 'rem_spc'.
Expected Output:
item       Space  rem_spc  nxt_item
Pineapple  0.5    0.5      {Apple, Grape}   # 0.5 = 0.375 + 0.125
Mango      0.75   0.25
Apple      0.375  0.625
Melons     0.25   0.75
Grape      0.125  0.875
Thanks!

A possible solution (another would be using binary linear programming): enumerate all 0/1 inclusion vectors over the remaining rows and pick the one whose 'Space' sum is closest to Pineapple's 'rem_spc':
import numpy as np
import pandas as pd
from itertools import product

# df is the DataFrame built from the array above
n = len(df) - 1
a = np.array(list(product([0, 1], repeat=n)))  # all 2**n inclusion vectors

df['nxt_item'] = np.nan
best = np.argmin(np.abs(df.iloc[0, 2] -
                        np.sum(a * df['Space'][1:].values, axis=1)))
mask = [False] + a[best, :].astype(bool).tolist()
df.loc[0, 'nxt_item'] = '{' + ', '.join(df.loc[mask, 'item']) + '}'
Output:
item Space rem_spc nxt_item
0 Pineapple 0.5 0.5 {Apple, Grape}
1 Mango 0.75 0.25 NaN
2 Apple 0.375 0.625 NaN
3 Melons 0.25 0.75 NaN
4 Grape 0.125 0.875 NaN
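A simpler (though still exponential) alternative sketch, not taken from the answer above: scan subsets of the remaining items in increasing size with itertools.combinations and stop at the first whose 'Space' values sum to the target, using math.isclose to sidestep float-equality pitfalls. Column names follow the question.

```python
from itertools import combinations
import math

import numpy as np
import pandas as pd

arr = np.array([['Pineapple', 0.5, 0.5],
                ['Mango', 0.75, 0.25],
                ['Apple', 0.375, 0.625],
                ['Melons', 0.25, 0.75],
                ['Grape', 0.125, 0.875]], dtype=object)
df = pd.DataFrame(arr, columns=['item', 'Space', 'rem_spc'])

target = df.loc[0, 'rem_spc']        # 0.5 for Pineapple
others = df.iloc[1:]

match = None
for r in range(1, len(others) + 1):  # smallest subsets first
    for combo in combinations(others.itertuples(index=False), r):
        if math.isclose(sum(row.Space for row in combo), target):
            match = {row.item for row in combo}
            break
    if match is not None:
        break

# Build the column as object dtype so the string assignment is clean
nxt = np.full(len(df), np.nan, dtype=object)
if match is not None:
    nxt[0] = '{' + ', '.join(sorted(match)) + '}'
df['nxt_item'] = nxt
```

Because subsets are tried in increasing size, this returns a smallest matching combination rather than just the first one in enumeration order.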

Related

Vertically combine one column to another and fill other columns values in Pandas

If I have an extra column 2019/2/1 in the following dataframe:
date,type,ratio,2019/2/1
2019/1/1,food,0.4,0.3
2019/1/1,vegetables,0.2,0.6
2019/1/1,toy,0.1,0.5
How could I vertically append 2019/2/1 to ratio?
The expected result will look like this:
date type ratio
0 2019/1/1 food 0.4
1 2019/1/1 vegetables 0.2
2 2019/1/1 toy 0.1
3 2019/2/1 food 0.3
4 2019/2/1 vegetables 0.6
5 2019/2/1 toy 0.5
The first idea is to rename the column ratio before melt (drop('date', 1) with a positional axis is deprecated, so use columns=):
df1 = (df.rename(columns={'ratio': '2019/1/1'})
         .drop(columns='date')
         .melt('type', value_name='ratio', var_name='date'))
print(df1)
type date ratio
0 food 2019/1/1 0.4
1 vegetables 2019/1/1 0.2
2 toy 2019/1/1 0.1
3 food 2019/2/1 0.3
4 vegetables 2019/2/1 0.6
5 toy 2019/2/1 0.5
Another is to parse the melted column names as datetimes into the date column after melt. Note that value_name must not clash with the existing ratio column, so melt to a temporary name and rename afterwards:
df['date'] = pd.to_datetime(df['date'])
df2 = df.melt(['date', 'type'], value_name='val')
df2['date'] = pd.to_datetime(df2.pop('variable'), errors='coerce').fillna(df2['date'])
df2 = df2.rename(columns={'val': 'ratio'})
print(df2)
date type ratio
0 2019-01-01 food 0.4
1 2019-01-01 vegetables 0.2
2 2019-01-01 toy 0.1
3 2019-02-01 food 0.3
4 2019-02-01 vegetables 0.6
5 2019-02-01 toy 0.5
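A third, hedged alternative (not in the answers above): build the 2019/2/1 rows explicitly and stack them with pd.concat, which keeps the row order of the expected output without any renaming.

```python
import pandas as pd

df = pd.DataFrame({'date': ['2019/1/1'] * 3,
                   'type': ['food', 'vegetables', 'toy'],
                   'ratio': [0.4, 0.2, 0.1],
                   '2019/2/1': [0.3, 0.6, 0.5]})

# Rows for the extra column: same types, date taken from the column name
new = df[['type']].assign(date='2019/2/1', ratio=df['2019/2/1'])
out = pd.concat([df[['date', 'type', 'ratio']],
                 new[['date', 'type', 'ratio']]],
                ignore_index=True)
```

This generalizes to several extra date columns by looping the assign/concat step over each column name.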

Creating a new column in a dataframe based on conditions

For the dataframe df:
dummy_data1 = {'category': ['White', 'Black', 'Hispanic', 'White'],
               'Pop': ['75', '85', '90', '100'],
               'White_ratio': [0.6, 0.4, 0.7, 0.35],
               'Black_ratio': [0.3, 0.2, 0.1, 0.45],
               'Hispanic_ratio': [0.1, 0.4, 0.2, 0.20]}
df = pd.DataFrame(dummy_data1, columns=['category', 'Pop', 'White_ratio', 'Black_ratio', 'Hispanic_ratio'])
I want to add a new column, 'pop_n', by first checking the category and then multiplying the value in 'Pop' by the corresponding ratio column. For the first row, the category is 'White', so it should multiply 75 by 0.6 and put 45 in the pop_n column.
I thought about writing something like:
df['pop_n'] = (df['Pop'] * df['White_ratio']).where(df['category'] == 'White')
This works, but only for one category.
I would appreciate any help with this.
Thanks.
Using DataFrame.filter and DataFrame.lookup:
First we use filter to get the columns with 'ratio' in the name, then split and keep only the word before the underscore. Finally we use lookup to match the category values to these columns.
df['Pop'] = df['Pop'].astype(int)  # 'Pop' holds strings in the sample data
df2 = df.filter(like='ratio').rename(columns=lambda x: x.split('_')[0])
df['pop_n'] = df2.lookup(df.index, df['category']) * df['Pop']
category Pop White_ratio Black_ratio Hispanic_ratio pop_n
0 White 75 0.60 0.30 0.1 45.0
1 Black 85 0.40 0.20 0.4 17.0
2 Hispanic 90 0.70 0.10 0.2 18.0
3 White 100 0.35 0.45 0.2 35.0
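Note that DataFrame.lookup was deprecated in pandas 1.2 and removed in 2.0. On newer pandas, the same row-wise pick can be sketched with NumPy fancy indexing (same logic as the answer, just a different lookup mechanism):

```python
import numpy as np
import pandas as pd

dummy_data1 = {'category': ['White', 'Black', 'Hispanic', 'White'],
               'Pop': ['75', '85', '90', '100'],
               'White_ratio': [0.6, 0.4, 0.7, 0.35],
               'Black_ratio': [0.3, 0.2, 0.1, 0.45],
               'Hispanic_ratio': [0.1, 0.4, 0.2, 0.20]}
df = pd.DataFrame(dummy_data1)

# Ratio columns, renamed to the bare category names
ratios = df.filter(like='ratio').rename(columns=lambda x: x.split('_')[0])

# For each row, pick the column whose name matches that row's category
rows = np.arange(len(df))
cols = ratios.columns.get_indexer(df['category'])
df['pop_n'] = ratios.to_numpy()[rows, cols] * df['Pop'].astype(int)
```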
Locate the columns that have underscores in their names:
to_rename = {x: x.split('_')[0] for x in df if '_' in x}
Find the matching factors (keep only the stacked entries whose column name equals the row's category):
stack = df.rename(columns=to_rename).set_index('category').stack()
factors = stack[[idx[0] == idx[1] for idx in stack.index]].reset_index(drop=True)
Multiply the original data by the factors:
df['pop_n'] = df['Pop'].astype(int) * factors
# category Pop White_ratio Black_ratio Hispanic_ratio pop_n
#0 White 75 0.60 0.30 0.1 45
#1 Black 85 0.40 0.20 0.4 17
#2 Hispanic 90 0.70 0.10 0.2 18
#3 White 100 0.35 0.45 0.2 35

Split pandas columns into two with column MultiIndex

I need to split DataFrame columns into two and add an additional value to the new column. The twist is that I need to lift the original column names up one level and add two new column names.
Given a DataFrame h:
>>> import pandas as pd
>>> h = pd.DataFrame({'a': [0.6, 0.4, 0.1], 'b': [0.2, 0.4, 0.7]})
>>> h
a b
0 0.6 0.2
1 0.4 0.4
2 0.1 0.7
I need to lift the original column names up one level and add two new column names. The result should look like this:
>>> # some stuff...
a b
expected received expected received
0 0.6 1 0.2 1
1 0.4 1 0.4 1
2 0.1 1 0.7 1
I've tried this:
>>> h['a1'] = [1, 1, 1]
>>> h['b1'] = [1, 1, 1]
>>> t = [('f', 'expected'),('f', 'received'), ('g', 'expected'), ('g', 'received')]
>>> h.columns = pd.MultiIndex.from_tuples(t)
>>> h
f g
expected received expected received
0 0.6 0.2 1 1
1 0.4 0.4 1 1
2 0.1 0.7 1 1
This just renames the columns but does not align them properly. I think the issue is there's no link between a1 and b1 to the expected and received columns.
How do I lift the original column names up one level and add two new column names?
Use concat with keys, then swaplevel:
h1 = h.copy()
h1[:] = 1
pd.concat([h, h1], keys=['expected', 'received'], axis=1)\
  .swaplevel(0, 1, axis=1)\
  .sort_index(level=0, axis=1)
Out[233]:
a b
expected received expected received
0 0.6 1.0 0.2 1.0
1 0.4 1.0 0.4 1.0
2 0.1 1.0 0.7 1.0
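An alternative sketch (not from the answer above): build the target MultiIndex columns directly with pd.MultiIndex.from_product and fill each 'received' slot with 1. This also keeps the 1s as integers rather than the floats produced by h1[:] = 1.

```python
import pandas as pd

h = pd.DataFrame({'a': [0.6, 0.4, 0.1], 'b': [0.2, 0.4, 0.7]})

# Desired column layout: (a, expected), (a, received), (b, expected), (b, received)
cols = pd.MultiIndex.from_product([h.columns, ['expected', 'received']])
out = pd.DataFrame(index=h.index, columns=cols)
for c in h.columns:
    out[(c, 'expected')] = h[c]
    out[(c, 'received')] = 1
```

from_product guarantees the sublevel order, so no swaplevel/sort_index step is needed afterwards.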

Python Life Expectancy

Trying to use pandas to calculate life expectancy with complex equations.
Multiplying or dividing column by column is not difficult to do.
My data is:
   A     b
1  0.99  1000
2  0.95  = 0.99 * 1000 = 990
3  0.93  = 0.95 * 990
Column A is populated, and column b has only the initial 1000.
Each b value is the previous A times the previous b (b2 = A1 * b1).
I tried the shift function but only got a result for b2 and the rest zeros. Any help please, thanks, mazin.
IIUC, if you're starting with:
>>> df
A b
0 0.99 1000.0
1 0.95 NaN
2 0.93 NaN
Then you can do:
df.loc[df.b.isnull(),'b'] = (df.A.cumprod()*1000).shift()
>>> df
A b
0 0.99 1000.0
1 0.95 990.0
2 0.93 940.5
Or more generally:
df['b'] = (df.A.cumprod()*df.b.iloc[0]).shift().fillna(df.b.iloc[0])
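The general formula works because the recurrence b[n] = A[n-1] * b[n-1] unrolls to b[n] = b[0] * A[0] * ... * A[n-1], which is exactly what cumprod() followed by shift() computes. A minimal check with the question's data:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [0.99, 0.95, 0.93],
                   'b': [1000.0, np.nan, np.nan]})

# b[0] = 1000, b[1] = 0.99 * 1000 = 990, b[2] = 0.95 * 990 = 940.5
df['b'] = (df.A.cumprod() * df.b.iloc[0]).shift().fillna(df.b.iloc[0])
```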

Replace values in pandas column based on nan in another column

For pairs of columns, I want to replace the values of the second column with NaN if the value in the first is NaN.
I have tried, without success:
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': ['r', np.nan, np.nan, 's'],
                   'b': [0.5, 0.5, 0.2, 0.02],
                   'c': ['n', 'r', np.nan, 's'],
                   'd': [1, 0.5, 0.2, 0.05]})

listA = ['a', 'c']
listB = ['b', 'd']
for color, ratio in zip(listA, listB):
    df.loc[df[color].isnull(), ratio] == np.nan
df remains unchanged.
Another test using a function also failed:
def Test(df):
    if df[color] == np.nan:
        return df[ratio] == np.nan
    else:
        return

for color, ratio in zip(listA, listB):
    df[ratio] = df.apply(Test, axis=1)
Thanks
It seems you have a typo; change == (comparison) to = (assignment):
for color, ratio in zip(listA, listB):
    df.loc[df[color].isnull(), ratio] = np.nan
print(df)
a b c d
0 r 0.50 n 1.00
1 NaN NaN r 0.50
2 NaN NaN NaN NaN
3 s 0.02 s 0.05
Another solution uses mask, which by default replaces the True positions of the condition with NaN:
for color, ratio in zip(listA, listB):
    df[ratio] = df[ratio].mask(df[color].isnull())
print(df)
a b c d
0 r 0.50 n 1.00
1 NaN NaN r 0.50
2 NaN NaN NaN NaN
3 s 0.02 s 0.05
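If listA and listB are long, the loop can also be collapsed into one vectorized call. This sketch (an assumption, not from the answers above) masks all of listB at once using the null pattern of listA, passed positionally as a NumPy array so the differing column labels don't interfere with alignment:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': ['r', np.nan, np.nan, 's'],
                   'b': [0.5, 0.5, 0.2, 0.02],
                   'c': ['n', 'r', np.nan, 's'],
                   'd': [1, 0.5, 0.2, 0.05]})
listA, listB = ['a', 'c'], ['b', 'd']

# Boolean ndarray of shape (rows, len(listA)); mask applies it
# column-by-column to listB because labels are bypassed
df[listB] = df[listB].mask(df[listA].isna().to_numpy())
```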
