Aggregate the given data frame based on the specific conditions in pandas - python-3.x

I have a df as shown below
df:
ID Number_of_Cars Age_in_days Total_amount Total_N Type
1 2 100 10000 100 A
2 5 10 1000 2 B
3 1 1000 1000 200 B
4 1 20 0 0 C
5 3 1000 100000 20 A
6 6 100 10000 20 C
7 4 200 10000 200 A
from the above df I would like to prepare df1 as shown below
df1:
ID Avg_Monthly_Amount Avg_Monthly_N Type
1 3000 30 A
2 3000 6 B
3 30 6 B
4 0 0 C
5 3000 0.6 A
6 3000 6 C
7 1500 30 A
Explanation:
Avg_Monthly_Amount = average monthly amount (Total_amount / Age_in_days * 30)
Avg_Monthly_N = average monthly N (Total_N / Age_in_days * 30)
To prepare df1, I tried the code below:
df['Avg_Monthly_Amount'] = df['Total_amount'] / df['Age_in_days'] * 30
df['Avg_Monthly_N'] = df['Total_N'] / df['Age_in_days'] * 30
From df and df1 (or from df alone) I would like to prepare the dataframe df2 shown below.
I could not write proper code to generate df2.
Explanation:
Aggregate above number at Type level
Example:
There are 3 customers (ID = 1, 5, 7) with Type = A, hence for Type = A, Number_Of_Type = 3
Avg_Cars for Type = A, is (2+3+4)/3 = 3
Avg_age_in_years for Type = A is ((100+1000+200)/3)/365
Avg_amount_monthly for Type = A is the mean of Avg_Monthly_Amount for Type = A in df1
Avg_N_monthly for Type = A is the mean of Avg_Monthly_N for Type = A in df1
Final expected output (df2)
Type Number_Of_Type Avg_Cars Avg_age_in_years Avg_amount_monthly Avg_N_monthly
A 3 3 1.19 2500 20.2
B 2 3 1.38 1515 6
C 2 3.5 0.16 1500 3

You don't need to prepare a separate DataFrame df1 from your original DataFrame df.
Your dataframe df:
ID Number_of_Cars Age_in_days Total_amount Total_N Type
1 2 100 10000 100 A
2 5 10 1000 2 B
3 1 1000 1000 200 B
4 1 20 0 0 C
5 3 1000 100000 20 A
6 6 100 10000 20 C
7 4 200 10000 200 A
After you have created/imported df:
df['Avg_Monthly_Amount'] = df['Total_amount'] / df['Age_in_days'] * 30
df['Avg_Monthly_N'] = df['Total_N'] / df['Age_in_days'] * 30
df['Age_in_year'] = df['Age_in_days'] / 365
Then:
df2 = (df.groupby('Type')
         .agg({'Type': 'count', 'Number_of_Cars': 'mean', 'Age_in_year': 'mean',
               'Avg_Monthly_Amount': 'mean', 'Avg_Monthly_N': 'mean'})
         .rename(columns={'Type': 'Number_Of_Type'}))
Now if you print df2 (or simply display it, if you are using a Jupyter notebook) you get your desired output.
Output:-
Number_Of_Type Number_of_Cars Age_in_year Avg_Monthly_Amount Avg_Monthly_N
Type
A 3 3.0 1.187215 2500.0 20.2
B 2 3.0 1.383562 1515.0 6.0
C 2 3.5 0.164384 1500.0 3.0
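If you are on pandas 0.25 or newer, named aggregation gives the same result with the desired output column names in a single step. A minimal sketch, assuming the same df with the Avg_Monthly_Amount, Avg_Monthly_N and Age_in_year columns already added:
# Named aggregation: output name on the left, (column, function) on the right.
# Counting 'ID' avoids aggregating the grouping key itself.
df2 = (df.groupby('Type')
         .agg(Number_Of_Type=('ID', 'count'),
              Avg_Cars=('Number_of_Cars', 'mean'),
              Avg_age_in_years=('Age_in_year', 'mean'),
              Avg_amount_monthly=('Avg_Monthly_Amount', 'mean'),
              Avg_N_monthly=('Avg_Monthly_N', 'mean'))
         .reset_index())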

Related

Grouping several dataframe columns based on another columns values

I have this dataframe:
refid col2 price1 factor1 price2 factor2 price3 factor3
0 1 a 200 1 180 3 150 10
1 2 b 500 1 450 3 400 10
2 3 c 700 1 620 2 550 5
And I need to get this output:
refid col2 price factor
0 1 a 200 1
1 1 b 500 1
2 1 c 700 1
3 2 a 180 3
4 2 b 450 3
5 2 c 620 2
6 3 a 150 10
7 3 b 400 10
8 3 c 550 5
Right now I'm trying to use the df.melt method, but I can't get it to work. This is the code and the current result:
df2_melt = df2.melt(id_vars=["refid", "col2"],
                    value_vars=["price1", "price2", "price3",
                                "factor1", "factor2", "factor3"],
                    var_name="Price",
                    value_name="factor")
refid col2 Price factor
0 1 a price1 200
1 2 b price1 500
2 3 c price1 700
3 1 a price2 180
4 2 b price2 450
5 3 c price2 620
6 1 a price3 150
7 2 b price3 400
8 3 c price3 550
9 1 a factor1 1
10 2 b factor1 1
11 3 c factor1 1
12 1 a factor2 3
13 2 b factor2 3
14 3 c factor2 2
15 1 a factor3 10
16 2 b factor3 10
17 3 c factor3 5
Since you have a wide DataFrame with common prefixes, you can use wide_to_long:
out = pd.wide_to_long(df, stubnames=['price', 'factor'],
                      i=["refid", "col2"], j='num').droplevel(-1).reset_index()
Output:
refid col2 price factor
0 1 a 200 1
1 1 a 180 3
2 1 a 150 10
3 2 b 500 1
4 2 b 450 3
5 2 b 400 10
6 3 c 700 1
7 3 c 620 2
8 3 c 550 5
Note that your expected output has an error where factors don't align with refids.
You can melt two times and then concat them:
import pandas as pd

df = pd.DataFrame({'refid': [1, 2, 3], 'col2': ['a', 'b', 'c'],
                   'price1': [200, 500, 700], 'factor1': [1, 1, 1],
                   'price2': [180, 450, 620], 'factor2': [3, 3, 2],
                   'price3': [150, 400, 550], 'factor3': [10, 10, 5]})
prices = [c for c in df if c.startswith('price')]
factors = [c for c in df if c.startswith('factor')]
df1 = pd.melt(df, id_vars=["refid","col2"], value_vars=prices, value_name='price').drop('variable', axis=1)
df2 = pd.melt(df, id_vars=["refid","col2"], value_vars=factors, value_name='factor').drop('variable', axis=1)
df3 = pd.concat([df1, df2['factor']], axis=1).reset_index(drop=True)
print(df3)
Here is the output:
refid col2 price factor
0 1 a 200 1
1 2 b 500 1
2 3 c 700 1
3 1 a 180 3
4 2 b 450 3
5 3 c 620 2
6 1 a 150 10
7 2 b 400 10
8 3 c 550 5
One option is pivot_longer from pyjanitor:
# pip install pyjanitor
import janitor
import pandas as pd
(df
 .pivot_longer(
     index=['refid', 'col2'],
     names_to='.value',
     names_pattern=r"(.+)\d",
     sort_by_appearance=True)
)
refid col2 price factor
0 1 a 200 1
1 1 a 180 3
2 1 a 150 10
3 2 b 500 1
4 2 b 450 3
5 2 b 400 10
6 3 c 700 1
7 3 c 620 2
8 3 c 550 5
The idea for this particular reshape is that whatever group in the regular expression is paired with the .value stays as the column header.
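If pyjanitor is not available, the same '.value' idea can be approximated in plain pandas by turning the column names into a (stub, number) MultiIndex and stacking the number level. A rough sketch, assuming the df defined above and that every reshaped column name ends in a single digit:
import pandas as pd
# Split 'price1' into ('price', '1') etc., then stack the digit level;
# the stubs stay behind as column headers, like pivot_longer's .value.
tmp = df.set_index(['refid', 'col2'])
tmp.columns = pd.MultiIndex.from_tuples(
    [(name.rstrip('0123456789'), name[-1]) for name in tmp.columns])
out = tmp.stack(level=1).droplevel(-1).reset_index()
# The column order may come out as factor, price (stack sorts the labels).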

Merge pandas data frame based on specific conditions

I have a df as shown below
df1:
ID Job Salary
1 A 100
2 B 200
3 B 20
4 C 150
5 A 500
6 A 600
7 A 200
8 B 150
df2:
ID Type Status Age
1 2 P 23
2 1 P 28
8 1 F 33
4 3 P 48
14 1 F 23
11 2 P 28
16 2 F 23
41 3 P 38
df3:
ID T_Type Amount
1 K 20
2 L -50
1 K 30
3 K 5
1 K 100
2 L -50
1 L -30
25 K 500
1 K 20
4 L -80
19 K 30
2 K -5
Explanation about the data:
ID is the primary key of df1.
ID is the primary key of df2.
df3 does not have any primary key.
From the above, I would like to prepare the dfs below.
1. IDs which are in df1 and df2.
Expected output1:
ID Job Salary
1 A 100
2 B 200
4 C 150
8 B 150
2. IDs which are there in df1 and not in df2
output2:
ID Job Salary
3 B 20
5 A 500
6 A 600
7 A 200
3. IDs which are there in df1 and df3
output3:
ID Job Salary
1 A 100
2 B 200
3 B 20
4 C 150
4. IDs which are there in df1 and not in df3.
output4:
ID Job Salary
5 A 500
6 A 600
7 A 200
8 B 150
>>> # 1. IDs which are in df1 and df2.
>>> df1[df1['ID'].isin(df2['ID'])]
ID Job Salary
0 1 A 100
1 2 B 200
3 4 C 150
7 8 B 150
>>> # 2. IDs which are there in df1 and not in df2
>>> df1[~df1['ID'].isin(df2['ID'])]
ID Job Salary
2 3 B 20
4 5 A 500
5 6 A 600
6 7 A 200
>>> # 3. IDs which are there in df1 and df3
>>> df1[df1['ID'].isin(df3['ID'])]
ID Job Salary
0 1 A 100
1 2 B 200
2 3 B 20
3 4 C 150
>>> # 4. IDs which are there in df1 and not in df3.
>>> df1[~df1['ID'].isin(df3['ID'])]
ID Job Salary
4 5 A 500
5 6 A 600
6 7 A 200
7 8 B 150
Actually, your expected results aren't merges at all, but rather
selections, based on whether df1.ID is (or is not) in the ID column
of the other DataFrame.
To get your expected results, run the following commands:
result_1 = df1[df1.ID.isin(df2.ID)]
result_2 = df1[~df1.ID.isin(df2.ID)]
result_3 = df1[df1.ID.isin(df3.ID)]
result_4 = df1[~df1.ID.isin(df3.ID)]
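For completeness, since the question title mentions merging: the same selections can be expressed with merge, although isin is simpler here. A sketch for cases 1 and 4, deduplicating df3's repeated IDs first so the merge does not multiply rows:
# Case 1: an inner merge on ID keeps only df1 rows whose ID appears in df2.
output1 = df1.merge(df2[['ID']], on='ID')
# Case 4: a left merge with indicator=True marks rows with no match in df3;
# keeping the 'left_only' rows gives the anti-join.
output4 = (df1.merge(df3[['ID']].drop_duplicates(), on='ID',
                     how='left', indicator=True)
              .query("_merge == 'left_only'")
              .drop(columns='_merge'))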

Subset and Loop to create a new column [duplicate]

With the DataFrame below as an example,
In [83]:
df = pd.DataFrame({'A':[1,1,2,2],'B':[1,2,1,2],'values':np.arange(10,30,5)})
df
Out[83]:
A B values
0 1 1 10
1 1 2 15
2 2 1 20
3 2 2 25
What would be a simple way to generate a new column containing some aggregation of the data over one of the columns?
For example, if I sum values over items in A
In [84]:
df.groupby('A').sum()['values']
Out[84]:
A
1 25
2 45
Name: values
How can I get
A B values sum_values_A
0 1 1 10 25
1 1 2 15 25
2 2 1 20 45
3 2 2 25 45
In [20]: df = pd.DataFrame({'A':[1,1,2,2],'B':[1,2,1,2],'values':np.arange(10,30,5)})
In [21]: df
Out[21]:
A B values
0 1 1 10
1 1 2 15
2 2 1 20
3 2 2 25
In [22]: df['sum_values_A'] = df.groupby('A')['values'].transform(np.sum)
In [23]: df
Out[23]:
A B values sum_values_A
0 1 1 10 25
1 1 2 15 25
2 2 1 20 45
3 2 2 25 45
I found a way using join:
In [101]:
aggregated = df.groupby('A').sum()['values']
aggregated.name = 'sum_values_A'
df.join(aggregated,on='A')
Out[101]:
A B values sum_values_A
0 1 1 10 25
1 1 2 15 25
2 2 1 20 45
3 2 2 25 45
Anyone has a simpler way to do it?
This is not so direct, but I find the use of map to create new columns from another column very intuitive, and it can be applied to many other cases:
gb = df.groupby('A').sum()['values']

def getvalue(x):
    return gb[x]

df['sum'] = df['A'].map(getvalue)
df
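As a side note, map also accepts a Series directly (lookups go through the Series' index), so the helper function can be dropped. A one-line sketch:
# Map the per-group sums (a Series indexed by the values of A) onto column A.
df['sum_values_A'] = df['A'].map(df.groupby('A')['values'].sum())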
In [15]: def sum_col(df, col, new_col):
....: df[new_col] = df[col].sum()
....: return df
In [16]: df.groupby("A").apply(sum_col, 'values', 'sum_values_A')
Out[16]:
A B values sum_values_A
0 1 1 10 25
1 1 2 15 25
2 2 1 20 45
3 2 2 25 45

summing up certain rows in a panda dataframe

I have a pandas dataframe with 1000 rows and 10 columns. I am looking to aggregate rows 100-1000 and replace them with just one row, where the index value is '>100' and the column values are the sums over rows 100-1000 in each column. Any ideas on a simple way of doing this? Thanks in advance.
Say I have the below
a b c
0 1 10 100
1 2 20 100
2 3 60 100
3 5 80 100
and I want it replaced with
a b c
0 1 10 100
1 2 20 100
>1 8 140 200
You could use .loc (the older .ix accessor is removed in modern pandas), but it shows a SettingWithCopyWarning:
ind = 1
mask = df.index > ind
df1 = df[~mask]
df1.loc['>1', :] = df[mask].sum()
In [69]: df1
Out[69]:
a b c
0 1 10 100
1 2 20 100
>1 8 140 200
To set it without the warning you could do it with pd.concat. Maybe not elegant due to the two transposes, but it works:
ind = 1
mask = df.index > ind
df1 = pd.concat([df[~mask].T, df[mask].sum()], axis=1).T
df1.index = df1.index.tolist()[:-1] + ['>{}'.format(ind)]
In [36]: df1
Out[36]:
a b c
0 1 10 100
1 2 20 100
>1 8 140 200
Some demonstrations:
In [37]: df.index > ind
Out[37]: array([False, False, True, True], dtype=bool)
In [38]: df[mask].sum()
Out[38]:
a 8
b 140
c 200
dtype: int64
In [40]: pd.concat([df[~mask].T, df[mask].sum()], axis=1).T
Out[40]:
a b c
0 1 10 100
1 2 20 100
0 8 140 200
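Applied to the original 1000-row frame, the same pattern collapses every row past index 100 into one labelled row. A minimal sketch, assuming a default RangeIndex and random stand-in data:
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.rand(1000, 10))   # stand-in for the real data
ind = 100
mask = df.index > ind                         # rows to collapse
df1 = pd.concat([df[~mask], df[mask].sum().to_frame('>{}'.format(ind)).T])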

Pandas assign value of one column based on another

Given the following data frame:
import pandas as pd
df = pd.DataFrame(
    {'A': [10, 20, 30, 40, 50, 60],
     'B': [1, 2, 1, 4, 5, 4]})
df
A B
0 10 1
1 20 2
2 30 1
3 40 4
4 50 5
5 60 4
I would like a new column 'C' whose values equal those in 'A' where the corresponding value in 'B' is less than 3, and 0 otherwise.
The desired result is as follows:
A B C
0 10 1 10
1 20 2 20
2 30 1 30
3 40 4 0
4 50 5 0
5 60 4 0
Thanks in advance!
Use np.where:
import numpy as np

df['C'] = np.where(df['B'] < 3, df['A'], 0)
>>> df
A B C
0 10 1 10
1 20 2 20
2 30 1 30
3 40 4 0
4 50 5 0
5 60 4 0
Here you can use the pandas where method directly on the column:
In [3]:
df['C'] = df['A'].where(df['B'] < 3, 0)
df
Out[3]:
A B C
0 10 1 10
1 20 2 20
2 30 1 30
3 40 4 0
4 50 5 0
5 60 4 0
Timings
In [4]:
%timeit df['A'].where(df['B'] < 3,0)
%timeit np.where(df['B'] < 3, df['A'], 0)
1000 loops, best of 3: 1.4 ms per loop
1000 loops, best of 3: 407 µs per loop
np.where is faster here, but pandas' where does more checking and has more options, so it depends on the use case.
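A third equivalent worth knowing is plain boolean indexing with .loc: initialize C with the fallback value, then overwrite the rows where the condition holds. A short sketch:
# .loc aligns the right-hand side on the index, so only the masked rows change.
df['C'] = 0
df.loc[df['B'] < 3, 'C'] = df['A']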
