I have two DataFrames that share some columns and differ in one data column. The DataFrames are added to a list with append.
How do I reorganise the list into a new DataFrame with the data arranged in columns?
The two DataFrames are built as below and appended to the list:
import pandas as pd
a = []
r1 = {'date': ['2003-01-31', '2003-01-31'], 'name': ['mod', 'dom'], 'fib': [2, 3]}
df1 = pd.DataFrame(r1, columns=['date', 'name', 'fib'])
r2 = {'date': ['2003-01-31', '2003-01-31'], 'name': ['dom', 'mod'], 'bif': [5, 7]}
df2 = pd.DataFrame(r2, columns=['date', 'name', 'bif'])
a.append(df1)
a.append(df2)
a
Then I concatenate the list a into a new DataFrame:
z = pd.concat(map(pd.DataFrame, a))
z
How do I reorganize z so that it has only two rows?
The output I expect is:
r3 = {'date': ['2003-01-31', '2003-01-31'], 'name': ['mod', 'dom'], 'fib': [2, 3], 'bif': [7, 5]}
pd.DataFrame(r3)
For z, I would do:
z = pd.concat([i.set_index(['date', 'name']) for i in a], axis=1).reset_index()
print(z)
date name fib bif
0 2003-01-31 dom 3 5
1 2003-01-31 mod 2 7
Try using pd.merge
df1.merge(df2, on=['name', 'date'])
Results:
date name fib bif
0 2003-01-31 mod 2 7
1 2003-01-31 dom 3 5
Related
In my DataFrame, I have a column containing lists of values. For example, I have columns A, B, C and an output column. Column A holds a value of 12, column B holds 30, and column C holds a list of values like [0.01, 1.234, 2.31]. When I try to find the mean of the list values, I get "'list' object has no attribute 'mean'". How do I convert every list in the column to its mean?
You can transform the column which contains the lists to another DataFrame and calculate the mean.
import pandas as pd
df = ... # Original df
pd.DataFrame(df['column_with_lists'].values.tolist()).mean(axis=1)
This results in a pandas Series that looks like the following:
0 mean_of_list_row_0
1 mean_of_list_row_1
. .
. .
. .
n mean_of_list_row_n
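As a minimal, self-contained sketch of the same idea (the column name column_with_lists is from the answer above; the sample data is made up for illustration):
import pandas as pd

df = pd.DataFrame({'A': [12, 30],
                   'column_with_lists': [[0.01, 1.234, 2.31], [1.0, 2.0]]})

# Expand each list into its own row of a temporary DataFrame; shorter
# lists are padded with NaN, which mean() skips by default.
means = pd.DataFrame(df['column_with_lists'].values.tolist()).mean(axis=1)
print(means)  # 0    1.184667
              # 1    1.500000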
You can use apply(np.mean) on the column with the lists in it to get the mean. For example:
Build a dataframe:
import numpy as np
import pandas as pd
df = pd.DataFrame([[2,4],[4,6]])
df[3] = [[5,7],[8,9,10]]
print(df)
0 1 3
0 2 4 [5, 7]
1 4 6 [8, 9, 10]
Use apply(np.mean)
print(df[3].apply(np.mean))
0    6.0
1    9.0
Name: 3, dtype: float64
If you want to convert that column into the mean of the lists:
df[3] = df[3].apply(np.mean)
print(df)
   0  1    3
0  2  4  6.0
1  4  6  9.0
I have several dataframes indexed more or less by the same MultiIndex (a few values may be missing in each dataframe, but the total number of rows exceeds 70K and fewer than 10 values are ever missing). I want to attach/merge/concatenate a given dataframe (with the same indexation) to all of them. I tried doing this with a for iteration over a tuple, as in the example here. However, in the end, my data frames do not merge. Here is a simple example where this happens. Why do they not merge?
import numpy as np
import pandas as pd

df1 = pd.DataFrame(np.arange(12).reshape(4, 3), index=["A", "B", "C", "D"], columns=["1st", "2nd", "3rd"])
df2 = df1 + 2
df3 = df1 - 2
for df in (df1, df2):
    df = pd.merge(df, df3, left_index=True, right_index=True, how="inner")
df1, df2
What is your expected result?
In the for loop, df is the loop variable and also the result on the left-hand side of the assignment statement. Here is the same loop with print statements to provide additional information. I think you are over-writing intermediate results.
for df in (df1, df2):
    print(df)
    print('-----')
    df = pd.merge(df, df3, left_index=True, right_index=True, how="inner")
    print(df)
    print('==========', end='\n\n')
print(df)
You could combine df1, df2 and df3 like this.
print(pd.concat([df1, df2, df3], axis=1))
1st 2nd 3rd 1st 2nd 3rd 1st 2nd 3rd
A 0 1 2 2 3 4 -2 -1 0
B 3 4 5 5 6 7 1 2 3
C 6 7 8 8 9 10 4 5 6
D 9 10 11 11 12 13 7 8 9
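If the goal really is to replace df1 and df2 with the merged frames, one sketch is to rebind those names explicitly instead of assigning to the loop variable:
# Assigning to df inside the loop only rebinds the loop variable;
# to change df1 and df2, bind the merge results to those names directly.
df1, df2 = (pd.merge(d, df3, left_index=True, right_index=True, how="inner")
            for d in (df1, df2))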
UPDATE
Here is an idiomatic way to import and concatenate several CSV files, possibly in multiple directories. In short: read each file into a separate data frame; add each data frame to a list; concatenate once at the end.
Reference: https://pandas.pydata.org/docs/user_guide/cookbook.html#reading-multiple-files-to-create-a-single-dataframe
import pandas as pd
from pathlib import Path
df = list()
for filename in Path.cwd().rglob('*.csv'):
    with open(filename, 'rt') as handle:
        t = pd.read_csv(handle)
        df.append(t)
        print(filename.name, t.shape)
df = pd.concat(df)
print('\nfinal: ', df.shape)
penny.csv (62, 8)
penny-2020-06-24.csv (144, 9)
...etc
final: (474, 20)
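Assuming a reasonably recent pandas, the same list-then-concat pattern can be written as a single expression; read_csv accepts Path objects directly, so the explicit open() is optional:
import pandas as pd
from pathlib import Path

# One read per file, one concatenation at the end.
df = pd.concat(map(pd.read_csv, Path.cwd().rglob('*.csv')))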
Is there an efficient way to write dictionaries of different dimensions to Excel using pandas?
Example:
import pandas as pd
mylist1 = [7, 8, 'woo']
mylist2 = [[1, 2, 3], [4, 5, 6], ['foo', 'boo', 'doo']]
d = dict(y=mylist1, x=mylist2)
df = pd.DataFrame.from_dict(d, orient='index').transpose().fillna('')
writer = pd.ExcelWriter('output.xlsx', engine='xlsxwriter')
df.to_excel(writer)
writer.close()
The current result keeps each list from x in a single cell; the desired result spreads the list values across separate columns (screenshots omitted; see the expected output in the answers below).
Please note that my database is much bigger than this simple example. So a generic answer would be appreciated.
You can fix your dataframe first before exporting to excel:
df = pd.DataFrame.from_dict(d, orient='index').transpose()
df = pd.concat([df["y"], pd.DataFrame(df["x"].tolist(), columns=list("x" * len(df["x"])))], axis=1)
Or do it upstream:
df = pd.DataFrame([[a, *b] for a, b in zip(mylist1, mylist2)], columns=list("yxxx"))
Both yield the same result:
y x x x
0 7 1 2 3
1 8 4 5 6
2 woo foo boo doo
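Once the frame has the desired shape, writing it to Excel works as in the question; a sketch using ExcelWriter as a context manager, assuming a recent pandas (the writer is closed automatically on exit):
import pandas as pd

# df is the reshaped frame from one of the snippets above.
with pd.ExcelWriter('output.xlsx', engine='xlsxwriter') as writer:
    df.to_excel(writer, index=False)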
First get the appropriate format, then save to Excel:
df = df.join(df.x.apply(pd.Series)).drop('x', axis=1)
df.columns = list('yxxx')
df
y x x x
0 7 1 2 3
1 8 4 5 6
2 woo foo boo doo
For dynamic column names:
df.columns = ['y'] + list('x' * (len(df.columns) - 1))
I have a pandas dataframe as below:
import pandas as pd
import numpy as np
import datetime
# Initialise data of lists.
data = {'A': [1, 1, 1, 1, 2, 2, 2, 2],
        'B': [2, 3, 1, 5, 7, 7, 1, 6]}
# Create DataFrame
df = pd.DataFrame(data)
df
I want to sort 'B' within each group of 'A'.
Expected Output:
A B
0 1 1
1 1 2
2 1 3
3 1 5
4 2 1
5 2 6
6 2 7
7 2 7
You can sort a dataframe using the sort_values command. This command will sort your dataframe with priority on A and then B as requested.
df.sort_values(by=['A', 'B'])
Docs
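If the groups in 'A' are not already in the desired order and you only want to sort 'B' inside each group, a hedged sketch using groupby (it preserves the relative order of the 'A' groups):
# Sort 'B' within each group of 'A' without reordering the groups
# themselves; group_keys=False avoids adding a group level to the index.
df = df.groupby('A', group_keys=False).apply(lambda g: g.sort_values('B'))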
I have a dataframe like this:
A B C
0 1 0.749065 This
1 2 0.301084 is
2 3 0.463468 a
3 4 0.643961 random
4 1 0.866521 string
5 2 0.120737 !
Calling
In [10]: print df.groupby("A")["B"].sum()
will return
A
1 1.615586
2 0.421821
3 0.463468
4 0.643961
Now I would like to do "the same" for column "C". Because that column contains strings, sum() doesn't work (although you might think that it would concatenate the strings). What I would really like to see is a list or set of the strings for each group, i.e.
A
1 {This, string}
2 {is, !}
3 {a}
4 {random}
I have been trying to find ways to do this.
Series.unique() (http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.unique.html) doesn't work, although
df.groupby("A")["B"]
is a
pandas.core.groupby.SeriesGroupBy object
so I was hoping any Series method would work. Any ideas?
In [4]: df = read_csv(StringIO(data),sep='\s+')
In [5]: df
Out[5]:
A B C
0 1 0.749065 This
1 2 0.301084 is
2 3 0.463468 a
3 4 0.643961 random
4 1 0.866521 string
5 2 0.120737 !
In [6]: df.dtypes
Out[6]:
A int64
B float64
C object
dtype: object
When you apply your own function, there is no automatic exclusion of non-numeric columns. This is slower, though, than applying .sum() to the groupby:
In [8]: df.groupby('A').apply(lambda x: x.sum())
Out[8]:
A B C
A
1 2 1.615586 Thisstring
2 4 0.421821 is!
3 3 0.463468 a
4 4 0.643961 random
For strings, sum by default concatenates:
In [9]: df.groupby('A')['C'].apply(lambda x: x.sum())
Out[9]:
A
1 Thisstring
2 is!
3 a
4 random
dtype: object
You can do pretty much what you want
In [11]: df.groupby('A')['C'].apply(lambda x: "{%s}" % ', '.join(x))
Out[11]:
A
1 {This, string}
2 {is, !}
3 {a}
4 {random}
dtype: object
Doing this on a whole frame, one group at a time. The key is to return a Series:
def f(x):
    return Series(dict(A=x['A'].sum(),
                       B=x['B'].sum(),
                       C="{%s}" % ', '.join(x['C'])))
In [14]: df.groupby('A').apply(f)
Out[14]:
A B C
A
1 2 1.615586 {This, string}
2 4 0.421821 {is, !}
3 3 0.463468 {a}
4 4 0.643961 {random}
You can use the apply method to apply an arbitrary function to the grouped data. So if you want a set, apply set. If you want a list, apply list.
>>> d
A B
0 1 This
1 2 is
2 3 a
3 4 random
4 1 string
5 2 !
>>> d.groupby('A')['B'].apply(list)
A
1 [This, string]
2 [is, !]
3 [a]
4 [random]
dtype: object
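Likewise, applying set gives one set per group, in the {This, string} style sketched in the question (element order within each set is not guaranteed):
>>> d.groupby('A')['B'].apply(set)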
If you want something else, just write a function that does what you want and then apply that.
You may be able to use the aggregate (or agg) function to concatenate the values. (Untested code)
df.groupby('A')['B'].agg(lambda col: ''.join(col))
You could try this:
df.groupby('A').agg({'B':'sum','C':'-'.join})
Named aggregations with pandas >= 0.25.0
Since pandas version 0.25.0 we have named aggregations where we can groupby, aggregate and at the same time assign new names to our columns. This way we won't get the MultiIndex columns, and the column names make more sense given the data they contain:
aggregate and get a list of strings
grp = df.groupby('A').agg(B_sum=('B', 'sum'),
                          C=('C', list)).reset_index()
print(grp)
A B_sum C
0 1 1.615586 [This, string]
1 2 0.421821 [is, !]
2 3 0.463468 [a]
3 4 0.643961 [random]
aggregate and join the strings
grp = df.groupby('A').agg(B_sum=('B', 'sum'),
                          C=('C', ', '.join)).reset_index()
print(grp)
A B_sum C
0 1 1.615586 This, string
1 2 0.421821 is, !
2 3 0.463468 a
3 4 0.643961 random
A simple solution would be:
>>> df.groupby(['A', 'B']).C.unique().reset_index()
If you'd like to overwrite column B in the dataframe, this should work:
df = df.groupby('A', as_index=False).agg(lambda x: '\n'.join(x))
Following @Erfan's good answer: most of the time, in an analysis of aggregated values, you want the unique possible combinations of the existing character values:
unique_chars = lambda x: ', '.join(x.unique())
(df
 .groupby(['A'])
 .agg({'C': unique_chars}))