Creating a pandas data frame from a list of dictionaries - python-3.x

I have the following data:
sentences = [{'mary': 'N', 'jane': 'N', 'can': 'M', 'see': 'V', 'will': 'N'},
             {'spot': 'N', 'will': 'M', 'see': 'V', 'mary': 'N'},
             {'will': 'M', 'jane': 'N', 'spot': 'V', 'mary': 'N'},
             {'mary': 'N', 'will': 'M', 'pat': 'V', 'spot': 'N'}]
I want to create a data frame where each key (from the pairs above) is a row index label and each value (from above) is a column name. Each cell in the data frame should hold the count of how many times that key was paired with that value.
The expected result should be:
df = pd.DataFrame([(4, 0, 0),
                   (2, 0, 0),
                   (0, 1, 0),
                   (0, 0, 2),
                   (1, 3, 0),
                   (2, 0, 1),
                   (0, 0, 1)],
                  index=['mary', 'jane', 'can', 'see', 'will', 'spot', 'pat'],
                  columns=('N', 'M', 'V'))

Build the DataFrame from sentences, then use value_counts per column via DataFrame.apply, replace missing values, convert to integers, and finally transpose with DataFrame.T:
df = pd.DataFrame(sentences)
df = df.apply(pd.value_counts).fillna(0).astype(int).T
print (df)
M N V
mary 0 4 0
jane 0 2 0
can 1 0 0
see 0 0 2
will 3 1 0
spot 0 2 1
pat 0 0 1
Or use DataFrame.stack with SeriesGroupBy.value_counts and Series.unstack:
df = df.stack().groupby(level=1).value_counts().unstack(fill_value=0)
print (df)
M N V
can 1 0 0
jane 0 2 0
mary 0 4 0
pat 0 0 1
see 0 0 2
spot 0 2 1
will 3 1 0
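pd.crosstab can also build the table in one call (a sketch added for comparison, assuming the sentences list defined above):
s = pd.DataFrame(sentences).stack()  # MultiIndex (row, word) -> tag
print(pd.crosstab(s.index.get_level_values(1), s))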

Or in one chain, transposing first so the words form the index:
pd.DataFrame(sentences).T.stack().groupby(level=0).value_counts().unstack().fillna(0)
M N V
can 1.0 0.0 0.0
jane 0.0 2.0 0.0
mary 0.0 4.0 0.0
pat 0.0 0.0 1.0
see 0.0 0.0 2.0
spot 0.0 2.0 1.0
will 3.0 1.0 0.0
Cast to int if needed:
pd.DataFrame(sentences).T.stack().groupby(level=0).value_counts().unstack().fillna(0).astype(int)
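For reference, a complete end-to-end sketch of the first approach (assuming only the sentences list above; note that newer pandas versions prefer Series.value_counts over the top-level pd.value_counts used here):
import pandas as pd

df = pd.DataFrame(sentences)  # one column per word, NaN where a word is absent
counts = df.apply(pd.value_counts).fillna(0).astype(int).T
counts = counts.reindex(columns=['N', 'M', 'V'])  # match the expected column order
print(counts)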

Related

Creating columns in a pandas dataframe based on a column value in another dataframe

I have two pandas dataframes:
import pandas as pd
import numpy as np
import datetime
data = {'group': ["A", "A", "B", "B"],
        'val': ["AA", "AB", "B1", "B2"],
        'cal1': [4, 5, 7, 6],
        'cal2': [10, 100, 100, 10]
        }
df1 = pd.DataFrame(data)
df1
group val cal1 cal2
0 A AA 4 10
1 A AB 5 100
2 B B1 7 100
3 B B2 6 10
data = {'group': ["A", "A", "A", "B", "B", "B", "B", "B", "B", "B"],
        'flag': [1, 0, 0, 1, 0, 0, 0, 1, 0, 0],
        'var1': [1, 2, 3, 7, 8, 9, 10, 15, 20, 30]
        }
# Create DataFrame
df2 = pd.DataFrame(data)
df2
group flag var1
0 A 1 1
1 A 0 2
2 A 0 3
3 B 1 7
4 B 0 8
5 B 0 9
6 B 0 10
7 B 1 15
8 B 0 20
9 B 0 30
Step 1: Create columns in df2 (with suffix "_new") based on the unique "val" values in df1, like below:
unique_val = df1['val'].unique().tolist()
new_cols = [t + '_new' for t in unique_val]
for i in new_cols:
    df2[i] = 0
df2
group flag var1 AA_new AB_new B1_new B2_new
0 A 1 1 0 0 0 0
1 A 0 2 0 0 0 0
2 A 0 3 0 0 0 0
3 B 1 7 0 0 0 0
4 B 0 8 0 0 0 0
5 B 0 9 0 0 0 0
6 B 0 10 0 0 0 0
7 B 1 15 0 0 0 0
8 B 0 20 0 0 0 0
9 B 0 30 0 0 0 0
Step 2: for rows where flag = 1, AA_new should be var1 (from df2) multiplied by the 'cal1' value and the 'cal2' value from df1 for group "A" and val "AA"; similarly, AB_new should be var1 (from df2) multiplied by 'cal1' and 'cal2' from df1 for group "A" and val "AB". For example, in row 0 (group A, flag 1, var1 = 1): AA_new = 1 * 4 * 10 = 40 and AB_new = 1 * 5 * 100 = 500.
My expected output should look like below:
group flag var1 AA_new AB_new B1_new B2_new
0 A 1 1 40.0 500.0 0.0 0.0
1 A 0 2 0.0 0.0 0.0 0.0
2 A 0 3 0.0 0.0 0.0 0.0
3 B 1 7 0.0 0.0 4900.0 420.0
4 B 0 8 0.0 0.0 0.0 0.0
5 B 0 9 0.0 0.0 0.0 0.0
6 B 0 10 0.0 0.0 0.0 0.0
7 B 1 15 0.0 0.0 10500.0 900.0
8 B 0 20 0.0 0.0 0.0 0.0
9 B 0 30 0.0 0.0 0.0 0.0
The solution below, based on another Stack Overflow question, works partially:
df2.assign(**df1.assign(mul_cal=df1['cal1'].mul(df1['cal2']))
                .pivot_table(columns='val',
                             values='mul_cal',
                             index=['group', df2.index])
                .add_suffix('_new')
                .groupby(level=0)
                .apply(lambda x: x.bfill().ffill())
                .reset_index(level='group', drop=True)
                .fillna(0)
                .mul(df2['var1'], axis=0)
                .where(df2['flag'].eq(1), 0)
           )
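A fully vectorized sketch of the same idea (an alternative added here, not part of the original answer: it builds one multiplier per (group, val) pair, spreads it across df2 by group, and uses the 0/1 flag column directly as a multiplier):
weights = (df1.assign(mul=df1['cal1'] * df1['cal2'])
              .pivot(index='group', columns='val', values='mul')
              .add_suffix('_new'))
new = weights.reindex(df2['group'])  # one row of multipliers per df2 row
new.index = df2.index                # align positionally with df2
new = new.mul(df2['var1'] * df2['flag'], axis=0).fillna(0)
df2 = pd.concat([df2, new], axis=1)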
Flexible Columns
If you want this to work when more rows are added to df1, you can do this (note it relies on the cal3 column created in the Hard Code section below):
combinations = df1.groupby(['group', 'val'])['cal3'].sum().reset_index()
for index_, row_ in combinations.iterrows():
    for index, row in df2.iterrows():
        if row['flag'] == 1:
            if row['group'] == row_['group']:
                df2.loc[index, row_['val'] + '_new'] = row['var1'] * df1[(df1['group'] == row_['group']) & (df1['val'] == row_['val'])]['cal3'].values[0]
Hard Code
You can iterate over the dataframe and update specific columns in each iteration; you can do something like this (but you need to add the new column to df1 first).
df1['cal3'] = df1['cal1'] * df1['cal2']
for index, row in df2.iterrows():
    if row['flag'] == 1:
        if row['group'] == 'A':
            df2.loc[index, 'AA_new'] = row['var1'] * df1[(df1['group'] == 'A') & (df1['val'] == 'AA')]['cal3'].values[0]
            df2.loc[index, 'AB_new'] = row['var1'] * df1[(df1['group'] == 'A') & (df1['val'] == 'AB')]['cal3'].values[0]
        elif row['group'] == 'B':
            df2.loc[index, 'B1_new'] = row['var1'] * df1[(df1['group'] == 'B') & (df1['val'] == 'B1')]['cal3'].values[0]
            df2.loc[index, 'B2_new'] = row['var1'] * df1[(df1['group'] == 'B') & (df1['val'] == 'B2')]['cal3'].values[0]

Add Column based on information from other dataframe pandas

I am looking for an answer to a question that I would otherwise have solved with for loops.
I have two pandas Dataframes:
ind_1 ind_2 ind_3
prod_id
A = a 1 0 0
a 0 1 0
b 0 1 0
c 0 0 1
a 0 0 1
a b c
B = ind_1 0.1 0.2 0.3
ind_2 0.4 0.5 0.6
ind_3 0.7 0.8 0.9
I am looking for a way to solve the following problem with pandas:
I want to map the entries of dataframe B via the index and column names and create a new column in dataframe A, so the result will look like this:
ind_1 ind_2 ind_3 y
prod_id
A = a 1 0 0 0.1
a 0 1 0 0.4
b 0 1 0 0.5
c 0 0 1 0.9
a 0 0 1 0.7
Is there a way to solve this problem without a for loop?
Thank you in advance!
Use DataFrame.stack to get a MultiIndex Series from each DataFrame, then keep only the 1 values with a callable, filter the b values with Index.isin, remove the first level of the MultiIndex, and finally add the new column - it is aligned by the index values of A:
a = A.T.stack().loc[lambda x: x == 1]
b = B.stack()
b = b[b.index.isin(a.index)].reset_index(level=0, drop=True)
A['y'] = b
print (A)
ind_1 ind_2 ind_3 y
prod_id
a 1 0 0 0.1
b 0 1 0 0.5
c 0 0 1 0.9
Or use DataFrame.join with DataFrame.query for filtering, but the processing is a bit more complicated:
a = A.stack()
b = B.stack()
s = (a.to_frame('a')
      .rename_axis((None, None))
      .join(b.swaplevel(1, 0)
             .rename('b'))
      .query("a == 1")
      .reset_index(level=1, drop=True))
A['y'] = s['b']
print (A)
ind_1 ind_2 ind_3 y
prod_id
a 1 0 0 0.1
b 0 1 0 0.5
c 0 0 1 0.9
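Since each row of A is one-hot over the indicator columns, a plain lookup also works (a minimal sketch added for comparison; it assumes exactly one 1 per row and keeps duplicate prod_id rows intact):
ind = A.idxmax(axis=1)  # name of the active indicator column per row
A['y'] = [B.at[i, p] for i, p in zip(ind, A.index)]
print(A)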

How to check numbers after decimal point?

How do I check whether a number has any digits after the decimal point?
import pandas as pd
df = pd.DataFrame({'num':[1,2,3.5,4,5.8]})
df:
num
0 1.0
1 2.0
2 3.5
3 4.0
4 5.8
After check:
num check_point
0 1.0 0
1 2.0 0
2 3.5 1
3 4.0 0
4 5.8 1
Use numpy.modf to get the fractional part, then compare for inequality with ne and cast to integer:
import numpy as np

df['check_point'] = np.modf(df['num'])[0].ne(0).astype(int)
Or use numpy.where:
df['check_point'] = np.where(np.modf(df['num'])[0] == 0, 0, 1)
Another idea is to test whether the floats are whole numbers with float.is_integer:
df['check_point'] = np.where(df['num'].apply(lambda x: x.is_integer()), 0, 1)
Or:
df['check_point'] = np.where(df['num'].sub(df['num'].astype(int)).astype(bool), 1, 0)
print (df)
num check_point
0 1.0 0
1 2.0 0
2 3.5 1
3 4.0 0
4 5.8 1
Detail:
print (np.modf(df['num']))
(0 0.0
1 0.0
2 0.5
3 0.0
4 0.8
Name: num, dtype: float64, 0 1.0
1 2.0
2 3.0
3 4.0
4 5.0
Name: num, dtype: float64)
Check the difference between the number and its rounded version to determine whether it is an integer.
df['check_point'] = df.num.sub(df.num.round()).ne(0).astype(int)
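One more minimal variant (added here, assuming the same df): the fractional part is simply the number modulo 1.
df['check_point'] = df['num'].mod(1).ne(0).astype(int)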

Python/Pandas Subtract Only if Value is not 0

I'm starting with data that looks something like this, but with a lot more rows:
Location Sample a b c d e f g h i
1 w 14.6 0 0 0 0 0 0 0 16.8
2 x 0 13.6 0 0 0 0 0 0 16.5
3 y 0 0 15.5 0 0 0 0 0 16.9
4 z 0 0 0 0 14.3 0 0 0 15.7
...
The data is indexed by the first two columns. I need to subtract the values in column i from each of the values in columns a - h, adding a new column to the right of the data frame for each original column. However, if the original value is zero, I want it to stay zero instead of having i subtracted. For example, if my code worked I would have the following columns added to the data frame on the right:
Location Sample ... a2 b2 c2 d2 e2 f2 g2 h2
1 w ... -2.2 0 0 0 0 0 0 0
2 x ... 0 -2.9 0 0 0 0 0 0
3 y ... 0 0 -1.4 0 0 0 0 0
4 z ... 0 0 0 0 -1.4 0 0 0
...
I'm trying to use where in pandas to subtract the value in column i only if the value in the current column is not zero, using the following code:
import pandas as pd
normalizer = 'i'
columns = list(df.columns.values)
for column in columns:
    if column == normalizer: continue
    newcol = column + "2"
    df[newcol] = df.where(df[column] == 0,
                          df[column] - df[normalizer], axis=0)
I'm using a for loop because the number of columns will not always be the same, and the column that is being subtracted will have a different name using different data sets.
I'm getting this error: "ValueError: Wrong number of items passed 9, placement implies 1".
I think the subtraction is causing the issue, but I can't figure out how to change it to make it work. Any assistance would be greatly appreciated.
Thanks in advance.
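The error comes from calling where on the whole DataFrame: df.where(...) returns a frame with all 9 columns, which cannot be assigned to a single new column. A minimal per-column sketch of the intended logic (assuming, as the answers below do, that Location and Sample are ordinary columns):
normalizer = 'i'
for column in df.columns.drop(['Location', 'Sample', normalizer]):
    # Series.where keeps values where the condition holds (zeros stay zero)
    # and substitutes the subtraction elsewhere.
    df[column + '2'] = df[column].where(df[column] == 0,
                                        df[column] - df[normalizer])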
Method 1 (pretty fast: roughly 3 times faster than method 2)
1. Select the relevant columns.
2. Do the subtraction.
3. Do elementwise multiplication with a 0/1 matrix constructed before the subtraction. Each element in (subdf > 0) is 0 if it was originally 0 and 1 otherwise.
ith_col = df["i"]
subdf = df.iloc[:, 2:-1] # a - h columns
df_temp = subdf.sub(ith_col, axis=0).multiply(subdf > 0).add(0)
df_temp.columns = ['a2', 'b2', 'c2', 'd2', 'e2', 'f2', 'g2', 'h2'] # rename columns
df_desired = pd.concat([df, df_temp], axis=1)
Note that in this method the zeros come out negative (-0.0). Thus, we have an extra add(0) at the end. Yes, a float zero can be negative. :P
Method 2 (more readable)
1. Find the greater-than-0 part with a condition.
2. Select the relevant columns.
3. Subtract.
4. Fill in 0.
ith_col = df["i"]
df[df > 0].iloc[:,2:-1].sub(ith_col, axis=0).fillna(0)
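To attach method 2's result as the new a2 - h2 columns rather than overwrite the originals (a small usage sketch; it reuses subdf from method 1 so the comparison only touches the numeric columns):
df_temp = subdf[subdf > 0].sub(ith_col, axis=0).fillna(0)
df = df.join(df_temp.add_suffix('2'))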
The second method is pretty similar to #Wen's answer. Credits to him :P
Speed comparison of two methods (tested on Python 3 and pandas 0.20)
%timeit subdf.sub(ith_col, axis=0).multiply(subdf > 0).add(0)
688 µs ± 30.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit df[df > 0].iloc[:,2:-1].sub(ith_col, axis=0).fillna(0)
2.97 ms ± 248 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Reference:
DataFrame.multiply performs elementwise multiplication with another data frame.
Using mask + fillna
df.iloc[:, 2:-1] = (df.iloc[:, 2:-1]
                    .mask(df.iloc[:, 2:-1] == 0)
                    .sub(df['i'], axis=0)
                    .fillna(0))
df
Out[116]:
Location Sample a b c d e f g h i
0 1 w -2.2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 16.8
1 2 x 0.0 -2.9 0.0 0.0 0.0 0.0 0.0 0.0 16.5
2 3 y 0.0 0.0 -1.4 0.0 0.0 0.0 0.0 0.0 16.9
3 4 z 0.0 0.0 0.0 0.0 -1.4 0.0 0.0 0.0 15.7
Update
normalizer = ['i', 'Location', 'Sample']
cols = ~df.columns.isin(normalizer)
df.loc[:, cols] = (df.loc[:, cols]
                   .mask(df.loc[:, cols] == 0)
                   .sub(df['i'], axis=0)
                   .fillna(0))
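Wrapped as a small reusable helper (a sketch, not part of the original answer; it assumes the excluded columns are the only non-numeric ones):
def subtract_unless_zero(df, norm_col, exclude):
    """Subtract df[norm_col] from every other column, keeping zeros as zeros."""
    cols = df.columns.difference(list(exclude) + [norm_col])
    out = df.copy()
    out[cols] = df[cols].mask(df[cols] == 0).sub(df[norm_col], axis=0).fillna(0)
    return out

df = subtract_unless_zero(df, 'i', ['Location', 'Sample'])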

Cannot assign row in pandas.Dataframe

I am trying to calculate the mean of the rows of a DataFrame that have the same value in a specified column col. However, I'm stuck on assigning a row of the pandas DataFrame.
Here's my code:
def code(data, col):
    """ Finds average value of all rows that have identical col values from column col.
    Returns new pandas.DataFrame with the data.
    """
    values = pd.unique(data[col])
    rows = len(values)
    res = pd.DataFrame(np.zeros(shape=(rows, len(data.columns))), columns=data.columns)
    for i, v in enumerate(values):
        e = data[data[col] == v].mean().to_frame().transpose()
        res[i:i+1] = e
    return res
The problem is that the code only works for the first row and puts NaN values in the next rows. I have checked the value of e and confirmed it is good, so there is a problem with the assignment res[i:i+1] = e. I have also tried res.iloc[i] = e, but I get "ValueError: Incompatible indexer with Series". Is there an alternate way to do this? It seems very straightforward and I'm baffled why it doesn't work...
E.g:
wdata
Out[78]:
Die Subsite Algorithm Vt1 It1 Ignd
0 1 0 0 0.0 -2.320000e-07 -4.862400e-08
1 1 0 0 0.1 -1.000000e-04 1.000000e-04
2 1 0 0 0.2 -1.000000e-03 1.000000e-03
3 1 0 0 0.3 -1.000000e-02 1.000000e-02
4 1 1 1 0.0 3.554000e-07 -2.012000e-07
5 1 2 2 0.0 5.353000e-08 -1.684000e-07
6 1 3 3 0.0 9.369400e-08 -2.121400e-08
7 1 4 4 0.0 3.286200e-08 -2.093600e-08
8 1 5 5 0.0 8.978600e-08 -3.262000e-07
9 1 6 6 0.0 3.624800e-08 -2.507600e-08
10 1 7 7 0.0 2.957000e-08 -1.993200e-08
11 1 8 8 0.0 7.732600e-08 -3.773200e-08
12 1 9 9 0.0 9.300000e-08 -3.521200e-08
13 1 10 10 0.0 8.468000e-09 -6.990000e-09
14 1 11 11 0.0 1.434200e-11 -1.200000e-11
15 2 0 0 0.0 8.118000e-11 -5.254000e-11
16 2 1 1 0.0 9.322000e-11 -1.359200e-10
17 2 2 2 0.0 1.944000e-10 -2.409400e-10
18 2 3 3 0.0 7.756000e-11 -8.556000e-11
19 2 4 4 0.0 1.260000e-11 -8.618000e-12
20 2 5 5 0.0 7.122000e-12 -1.402000e-13
21 2 6 6 0.0 6.224000e-11 -2.760000e-11
22 2 7 7 0.0 1.133400e-08 -6.566000e-09
23 2 8 8 0.0 6.600000e-13 -1.808000e-11
24 2 9 9 0.0 6.861000e-08 -4.063400e-08
25 2 10 10 0.0 2.743800e-10 -1.336000e-10
Expected output:
Die Subsite Algorithm Vt1 It1 Ignd
0 1 4.4 4.4 0.04 -0.00074 0.00074
0 2 5.5 5.5 0 6.792247e-09 -4.023330e-09
Instead, what i get is:
Die Subsite Algorithm Vt1 It1 Ignd
0 1 4.4 4.4 0.04 -0.00074 0.00074
0 NaN NaN NaN NaN NaN NaN
For example, this code results in:
In[81]: wdata[wdata['Die'] == 2].mean().to_frame().transpose()
Out[81]:
Die Subsite Algorithm Vt1 It1 Ignd
0 2 5.5 5.5 0 6.792247e-09 -4.023330e-09
This works for me:
def code(data, col):
    """ Finds average value of all rows that have identical col values from column col.
    Returns new pandas.DataFrame with the data.
    """
    values = pd.unique(data[col])
    rows = len(values)
    res = pd.DataFrame(columns=data.columns)
    for i, v in enumerate(values):
        e = data[data[col] == v].mean()
        res.loc[i, :] = e
    return res
col = 'Die'
print (code(data, col))
Die Subsite Algorithm Vt1 It1 Ignd
0 1 4.4 4.4 0.04 -0.000739957 0.000739939
1 2 5 5 0 7.34067e-09 -4.35482e-09
but groupby with mean aggregation gives the same output:
print (data.groupby(col, as_index=False).mean())
Die Subsite Algorithm Vt1 It1 Ignd
0 1 4.4 4.4 0.04 -7.399575e-04 7.399392e-04
1 2 5.0 5.0 0.00 7.340669e-09 -4.354818e-09
A few minutes after I posted the question, I solved it by adding .values to e.
e = data[data[col] == v].mean().to_frame().transpose().values
However it turns out that what I wanted to do is already done by Pandas. Thanks MaxU!
df.groupby(col).mean()
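The NaNs in the original attempt come from label alignment: res[i:i+1] = e aligns e (whose index is always [0]) with the slice's row labels, so only row 0 ever matches. A minimal illustration with a hypothetical two-row frame:
import pandas as pd

res = pd.DataFrame({'x': [0.0, 0.0]})
e = pd.DataFrame({'x': [9.9]})  # index is [0]
res[1:2] = e                    # aligns on labels: row 1 has no label 0, so it gets NaN
print(res)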
