Lookup Pandas Dataframe comparing different size data frames - python-3.x
I have two pandas df that look like this
df1
Amount Price
0 5 50
1 10 53
2 15 55
3 30 50
4 45 61
df2
Used amount
0 4.5
1 1.2
2 6.2
3 4.1
4 25.6
5 31
6 19
7 15
I am trying to insert a new column in df2 that provides the price from df1. df1 and df2 have different sizes; df1 is smaller.
I am expecting something like this
df3
Used amount price
0 4.5 50
1 1.2 50
2 6.2 53
3 4.1 50
4 25.6 50
5 31 61
6 19 50
7 15 55
I am thinking of solving this with something like this function:
def price_function(key, table):
    used_amount_df2 = (row[0] for row in df1)
    price = filter(lambda x: x < key, used_amount_df2)
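The rule implied by the expected output is "take the Price of the smallest Amount greater than or equal to the Used amount". Before reaching for pandas, that logic can be sketched in plain Python; price_lookup is a hypothetical helper name, and the sketch assumes df1 is sorted by Amount, as it is here:

```python
import pandas as pd

df1 = pd.DataFrame({'Amount': [5, 10, 15, 30, 45],
                    'Price': [50, 53, 55, 50, 61]})
df2 = pd.DataFrame({'Used amount': [4.5, 1.2, 6.2, 4.1, 25.6, 31, 19, 15]})

def price_lookup(used, table):
    # walk the sorted Amounts and return the Price of the
    # first one that is >= the used amount
    for amount, price in zip(table['Amount'], table['Price']):
        if used <= amount:
            return price
    return None  # used amount exceeds every Amount in the table

df2['price'] = [price_lookup(u, df1) for u in df2['Used amount']]
print(df2['price'].tolist())  # [50, 50, 53, 50, 50, 61, 50, 55]
```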
Here is my own solution
1st approach:
from itertools import product
import pandas as pd
# flatten the column name so attribute access works
df2 = df2.rename(columns={'Used amount': 'Usedamount'}).reset_index()
DF = pd.DataFrame(list(product(df2.Usedamount, df1.Amount)), columns=['l1', 'l2'])
DF['DIFF'] = DF.l1 - DF.l2
DF = DF.loc[DF.DIFF <= 0]
DF = DF.sort_values(['l1', 'DIFF'], ascending=[True, False]).drop_duplicates(['l1'], keep='first')
df1.merge(DF, left_on='Amount', right_on='l2', how='left')\
   .merge(df2, left_on='l1', right_on='Usedamount', how='right')\
   .loc[:, ['index', 'Usedamount', 'Price']].set_index('index').sort_index()
Out[185]:
Usedamount Price
index
0 4.5 50
1 1.2 50
2 6.2 53
3 4.1 50
4 25.6 50
5 31.0 61
6 19.0 50
7 15.0 55
2nd approach, using pd.merge_asof (I recommend this one):
df2 = df2.rename(columns={'Used amount': 'Amount'}).sort_values('Amount')
df2=df2.reset_index()
pd.merge_asof(df2,df1,on='Amount',allow_exact_matches=True,direction='forward')\
.set_index('index').sort_index()
Out[206]:
Amount Price
index
0 4.5 50
1 1.2 50
2 6.2 53
3 4.1 50
4 25.6 50
5 31.0 61
6 19.0 50
7 15.0 55
Using pd.IntervalIndex you can do the lookup with map:
In [468]: df1.index = pd.IntervalIndex.from_arrays(df1.Amount.shift().fillna(0),df1.Amount)
In [469]: df1
Out[469]:
Amount Price
(0.0, 5.0] 5 50
(5.0, 10.0] 10 53
(10.0, 15.0] 15 55
(15.0, 30.0] 30 50
(30.0, 45.0] 45 61
In [470]: df2['price'] = df2['Used amount'].map(df1.Price)
In [471]: df2
Out[471]:
Used amount price
0 4.5 50
1 1.2 50
2 6.2 53
3 4.1 50
4 25.6 50
5 31.0 61
6 19.0 50
7 15.0 55
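A self-contained version of the IntervalIndex lookup might look like this (closed='right' is the default, so an exact boundary value such as 15 falls in (10, 15], matching the expected output):

```python
import pandas as pd

df1 = pd.DataFrame({'Amount': [5, 10, 15, 30, 45],
                    'Price': [50, 53, 55, 50, 61]})
df2 = pd.DataFrame({'Used amount': [4.5, 1.2, 6.2, 4.1, 25.6, 31, 19, 15]})

# index df1 by the interval (previous Amount, current Amount]
df1.index = pd.IntervalIndex.from_arrays(df1['Amount'].shift().fillna(0),
                                         df1['Amount'])
# map finds the interval containing each Used amount
df2['price'] = df2['Used amount'].map(df1['Price'])
print(df2['price'].tolist())  # [50, 50, 53, 50, 50, 61, 50, 55]
```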
You can use cut or searchsorted to create bins.
Notice: the index of df1 has to be the default - 0, 1, 2, ...
#create default index if necessary
df1 = df1.reset_index(drop=True)
#create bins
bins = [0] + df1['Amount'].tolist()
#get index values of df1 by values of Used amount
a = pd.cut(df2['Used amount'], bins=bins, labels=df1.index)
#assign output
df2['price'] = df1['Price'].values[a]
print (df2)
Used amount price
0 4.5 50
1 1.2 50
2 6.2 53
3 4.1 50
4 25.6 50
5 31.0 61
6 19.0 50
7 15.0 55
a = df1['Amount'].searchsorted(df2['Used amount'])
df2['price'] = df1['Price'].values[a]
print (df2)
Used amount price
0 4.5 50
1 1.2 50
2 6.2 53
3 4.1 50
4 25.6 50
5 31.0 61
6 19.0 50
7 15.0 55
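One caveat with the searchsorted route: a Used amount larger than every Amount yields position len(df1), which would raise an IndexError on lookup. Clipping the positions is one possible hedge; the fall-back-to-last-Price behaviour here is an assumption, not something the question specifies:

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'Amount': [5, 10, 15, 30, 45],
                    'Price': [50, 53, 55, 50, 61]})
df2 = pd.DataFrame({'Used amount': [4.5, 50.0]})  # 50.0 exceeds every Amount

pos = df1['Amount'].searchsorted(df2['Used amount'])  # positions [0, 5]
pos = np.clip(pos, 0, len(df1) - 1)                   # clamp 5 -> 4
df2['price'] = df1['Price'].values[pos]
print(df2['price'].tolist())  # [50, 61]
```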
You can use pd.DataFrame.reindex with method=bfill
df1.set_index('Amount').reindex(df2['Used amount'], method='bfill')
Price
Used amount
4.5 50
1.2 50
6.2 53
4.1 50
25.6 50
31.0 61
19.0 50
15.0 55
To add that to a new column we can use
join
df2.join(
df1.set_index('Amount').reindex(df2['Used amount'], method='bfill'),
on='Used amount'
)
Used amount Price
0 4.5 50
1 1.2 50
2 6.2 53
3 4.1 50
4 25.6 50
5 31.0 61
6 19.0 50
7 15.0 55
Or assign
df2.assign(
Price=df1.set_index('Amount').reindex(df2['Used amount'], method='bfill').values)
Used amount Price
0 4.5 50
1 1.2 50
2 6.2 53
3 4.1 50
4 25.6 50
5 31.0 61
6 19.0 50
7 15.0 55
Related
column multiplication based on a mapping
I have the following two dataframes. The first one maps some nodes to an area number and the maximum electric load of that node.

bus = pd.DataFrame(data={'Node':[101, 102, 103, 104, 105], 'Area':[1, 1, 2, 2, 3], 'Load':[10, 15, 12, 20, 25]})

which gives us:

Node Area Load
0 101 1 10
1 102 1 15
2 103 2 12
3 104 2 20
4 105 3 25

The second dataframe shows the total electric load of each area over a time period (from hour 0 to 5). The column names are the areas, matching the column Area in dataframe bus.

load = pd.DataFrame(data={1:[20, 18, 17, 19, 22, 25], 2:[23, 25, 24, 27, 30, 32], 3:[10, 14, 19, 25, 22, 20]})

which gives us:

1 2 3
0 20 23 10
1 18 25 14
2 17 24 19
3 19 27 25
4 22 30 22
5 25 32 20

I would like to have a dataframe that shows the electric load of each bus over the 6 hours. Assumption: the share of the load over time is the same as the share of the maximum load shown in bus; e.g., bus 101 carries 10/(10+15) = 0.4 of the electric load of area 1, therefore, to calculate its hourly load, 10/(10+15) should be multiplied by the column corresponding to area 1 in load. The desired output should be of the following format:

101 102 103 104 105
0 8 12 8.625 14.375 10
1 7.2 10.8 9.375 15.625 14
2 6.8 10.2 9 15 19
3 7.6 11.4 10.125 16.875 25
4 8.8 13.2 11.25 18.75 22
5 10 15 12 20 20

For column 101, we have 0.4 multiplied by column 1 of load. Any help is greatly appreciated.
One option is to get the Load divided by the area sum, then pivot, get the index matching for both load and bus, before multiplying on the matching levels:

(bus.assign(Load = bus.Load.div(bus.groupby('Area').Load.transform('sum')))
 .pivot(None, ['Area', 'Node'], 'Load')
 .reindex(load.index)
 .ffill() # get the data spread into all rows
 .bfill()
 .mul(load, level=0)
 .droplevel(0, 1)
 .rename_axis(columns=None)
)

101 102 103 104 105
0 8.0 12.0 8.625 14.375 10.0
1 7.2 10.8 9.375 15.625 14.0
2 6.8 10.2 9.000 15.000 19.0
3 7.6 11.4 10.125 16.875 25.0
4 8.8 13.2 11.250 18.750 22.0
5 10.0 15.0 12.000 20.000 20.0
You can calculate the ratio in bus, transpose load, merge the two and multiply the ratio by the load; here goes:

bus['area_sum'] = bus.groupby('Area')['Load'].transform('sum')
bus['node_ratio'] = bus['Load'] / bus['area_sum']
full_data = bus.merge(load.T.reset_index(), left_on='Area', right_on='index')
result = pd.DataFrame([full_data['node_ratio'] * full_data[x] for x in range(6)])
result.columns = full_data['Node'].values

result:

101 102 103 104 105
0 8 12 8.625 14.375 10
1 7.2 10.8 9.375 15.625 14
2 6.8 10.2 9 15 19
3 7.6 11.4 10.125 16.875 25
4 8.8 13.2 11.25 18.75 22
5 10 15 12 20 20
Pandas: Combine pandas columns that have the same column name
If we have the following df,

df
A A B B B
0 10 2 0 3 3
1 20 4 19 21 36
2 30 20 24 24 12
3 40 10 39 23 46

How can I combine the content of the columns with the same names? e.g.

A B
0 10 0
1 20 19
2 30 24
3 40 39
4 2 3
5 4 21
6 20 24
7 10 23
8 Na 3
9 Na 36
10 Na 12
11 Na 46

I tried groupby and merge, and neither does this job. Any help is appreciated.
If column names are duplicated you can use DataFrame.melt with concat:

df = pd.concat([df['A'].melt()['value'], df['B'].melt()['value']], axis=1, keys=['A','B'])
print (df)
A B
0 10.0 0
1 20.0 19
2 30.0 24
3 40.0 39
4 2.0 3
5 4.0 21
6 20.0 24
7 10.0 23
8 NaN 3
9 NaN 36
10 NaN 12
11 NaN 46

EDIT: a general solution that does not hard-code the column names:

uniq = df.columns.unique()
df = pd.concat([df[c].melt()['value'] for c in uniq], axis=1, keys=uniq)
print (df)
A B
0 10.0 0
1 20.0 19
2 30.0 24
3 40.0 39
4 2.0 3
5 4.0 21
6 20.0 24
7 10.0 23
8 NaN 3
9 NaN 36
10 NaN 12
11 NaN 46
How to do subtraction in the cells of columns in Python
I have this dataframe (df) in Python:

Cumulative sales
0 12
1 28
2 56
3 87

I want to create a new column in which I would have the number of new sales (N - (N-1)) as below:

Cumulative sales New Sales
0 12 12
1 28 16
2 56 28
3 87 31
You can do

df['new sale'] = df.Cumulativesales.diff().fillna(df.Cumulativesales)
df
Cumulativesales new sale
0 12 12.0
1 28 16.0
2 56 28.0
3 87 31.0
Do this:

df['New_sales'] = df['Cumlative_sales'].diff()
df.fillna(df.iloc[0]['Cumlative_sales'], inplace=True)
print(df)

Output:

Cumlative_sales New_sales
0 12 12.0
1 28 16.0
2 56 28.0
3 87 31.0
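Both answers return floats because diff leaves a NaN in the first row. If integer output matters, one option (a sketch, assuming the column only ever holds whole numbers) is to cast after filling:

```python
import pandas as pd

df = pd.DataFrame({'Cumulative sales': [12, 28, 56, 87]})
# fill the first-row NaN with the first cumulative value, then cast
df['New Sales'] = (df['Cumulative sales'].diff()
                     .fillna(df['Cumulative sales'])
                     .astype(int))
print(df['New Sales'].tolist())  # [12, 16, 28, 31]
```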
Pandas JOIN/MERGE/CONCAT Data Frame On Specific Indices
I want to join two data frames on specific indices as per the map (dictionary) I have created. What is an efficient way to do this?

Data:

df = pd.DataFrame({"a":[10, 34, 24, 40, 56, 44], "b":[95, 63, 74, 85, 56, 43]})
print(df)
a b
0 10 95
1 34 63
2 24 74
3 40 85
4 56 56
5 44 43

df1 = pd.DataFrame({"c":[1, 2, 3, 4], "d":[5, 6, 7, 8]})
print(df1)
c d
0 1 5
1 2 6
2 3 7
3 4 8

d = {
    (1, 0): 0.67,
    (1, 2): 0.9,
    (2, 1): 0.2,
    (2, 3): 0.34,
    (4, 0): 0.7,
    (4, 2): 0.5
}

Desired output:

a b c d ratio
0 34 63 1 5 0.67
1 34 63 3 7 0.9
...
5 56 56 3 7 0.5

I'm able to achieve this, but it takes a lot of time since my original data frames' map has about 4.7M rows to map. I'd love to know if there is a way to MERGE, JOIN or CONCAT these data frames on different indices.

My approach:

matched_rows = []
for key in d.keys():
    s = df.iloc[key[0]].tolist() + df1.iloc[key[1]].tolist() + [d[key]]
    matched_rows.append(s)
df_matched = pd.DataFrame(matched_rows, columns=df.columns.tolist() + df1.columns.tolist() + ['ratio'])

I would highly appreciate your help. Thanks a lot in advance.
Create a Series and then a DataFrame from the dictionary, DataFrame.join both, and last remove the first 2 columns by position:

df = (pd.Series(d).reset_index(name='ratio')
        .join(df, on='level_0')
        .join(df1, on='level_1')
        .iloc[:, 2:])
print (df)
ratio a b c d
0 0.67 34 63 1 5
1 0.90 34 63 3 7
2 0.20 24 74 2 6
3 0.34 24 74 4 8
4 0.70 56 56 1 5
5 0.50 56 56 3 7

And then if necessary reorder columns:

df = df[df.columns[1:].tolist() + df.columns[:1].tolist()]
print (df)
a b c d ratio
0 34 63 1 5 0.67
1 34 63 3 7 0.90
2 24 74 2 6 0.20
3 24 74 4 8 0.34
4 56 56 1 5 0.70
5 56 56 3 7 0.50
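For reference, a self-contained run of the Series-from-dict approach, with the data copied from the question:

```python
import pandas as pd

df = pd.DataFrame({"a": [10, 34, 24, 40, 56, 44],
                   "b": [95, 63, 74, 85, 56, 43]})
df1 = pd.DataFrame({"c": [1, 2, 3, 4], "d": [5, 6, 7, 8]})
d = {(1, 0): 0.67, (1, 2): 0.9, (2, 1): 0.2,
     (2, 3): 0.34, (4, 0): 0.7, (4, 2): 0.5}

# the tuple keys become a two-level index; each level joins one frame
out = (pd.Series(d).reset_index(name='ratio')
         .join(df, on='level_0')
         .join(df1, on='level_1')
         .iloc[:, 2:])
print(out['ratio'].tolist())  # [0.67, 0.9, 0.2, 0.34, 0.7, 0.5]
```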
why am I getting a too many indexers error?
cars_df = pd.DataFrame((car.iloc[:[1,3,4,6]].values), columns = ['mpg', 'dip', 'hp', 'wt'])
car_t = car.iloc[:9].values
target_names = [0,1]
car_df['group'] = pd.series(car_t, dtypre='category')
sb.pairplot(cars_df)

I have tried using .iloc(axis=0)[xxxx] and making a slice into a list and a tuple. No dice. Any thoughts? I am trying to make a scatter plot from a lynda.com video, but in the video the host is using .ix, which is deprecated, so I am using .iloc[].

car = a dataframe; a few lines of data:

"Car_name","mpg","cyl","disp","hp","drat","wt","qsec","vs","am","gear","carb"
"Mazda RX4",21,6,160,110,3.9,2.62,16.46,0,1,4,4
"Mazda RX4 Wag",21,6,160,110,3.9,2.875,17.02,0,1,4,4
"Datsun 710",22.8,4,108,93,3.85,2.32,18.61,1,1,4,1
"Hornet 4 Drive",21.4,6,258,110,3.08,3.215,19.44,1,0,3,1
"Hornet Sportabout",18.7,8,360,175,3.15,3.44,17.02,0,0,3,2
"Valiant",18.1,6,225,105,2.76,3.46,20.22,1,0,3,1
"Duster 360",14.3,8,360,245,3.21,3.57,15.84,0,0,3,4
"Merc 240D",24.4,4,146.7,62,3.69,3.19,20,1,0,4,2
"Merc 230",22.8,4,140.8,95,3.92,3.15,22.9,1,0,4,2
"Merc 280",19.2,6,167.6,123,3.92,3.44,18.3,1,0,4,4
"Merc 280C",17.8,6,167.6,123,3.92,3.44,18.9,1,0,4,4
"Merc 450SE",16.4,8,275.8,180,3.07,4.07,17.4,0,0,3,3
I think you want to select multiple columns by iloc:

cars_df = car.iloc[:, [1,3,4,6]]
print (cars_df)
mpg disp hp wt
0 21.0 160.0 110 2.620
1 21.0 160.0 110 2.875
2 22.8 108.0 93 2.320
3 21.4 258.0 110 3.215
4 18.7 360.0 175 3.440
5 18.1 225.0 105 3.460
6 14.3 360.0 245 3.570
7 24.4 146.7 62 3.190
8 22.8 140.8 95 3.150
9 19.2 167.6 123 3.440
10 17.8 167.6 123 3.440
11 16.4 275.8 180 4.070

sb.pairplot(cars_df)

I am not 100% sure about the rest of the code, but it seems you need:

#select also the 9. column
cars_df = car.iloc[:, [1,3,4,6,9]]
#rename the 9. column
cars_df = cars_df.rename(columns={'am':'group'})
#convert it to categorical
cars_df['group'] = pd.Categorical(cars_df['group'])
print (cars_df)
mpg disp hp wt group
0 21.0 160.0 110 2.620 1
1 21.0 160.0 110 2.875 1
2 22.8 108.0 93 2.320 1
3 21.4 258.0 110 3.215 0
4 18.7 360.0 175 3.440 0
5 18.1 225.0 105 3.460 0
6 14.3 360.0 245 3.570 0
7 24.4 146.7 62 3.190 0
8 22.8 140.8 95 3.150 0
9 19.2 167.6 123 3.440 0
10 17.8 167.6 123 3.440 0
11 16.4 275.8 180 4.070 0

#add parameter hue for different levels of a categorical variable
sb.pairplot(cars_df, hue='group')