I have many dataframes that I want to combine.
I only need one "level_0" column.
pd.concat([df_NB_E, df_LDA_E, df_DT_E, df_RF_E], axis=1)
It seems that level_0 could be your index, right?
Say you have:
>>> level0 = ['ALL','AWA','REM','S1','S2','SWS']
>>> df1 = pd.DataFrame(data={'level_0':level0, 'col1':np.random.randint(0,9,6)})
>>> df2 = pd.DataFrame(data={'level_0':level0, 'col2':np.random.randint(0,9,6)})
>>> df3 = pd.DataFrame(data={'level_0':level0, 'col3':np.random.randint(0,9,6)})
>>> df1
col1 level_0
0 5 ALL
1 8 AWA
2 5 REM
3 3 S1
4 8 S2
5 4 SWS
>>> df2
col2 level_0
0 4 ALL
1 1 AWA
2 3 REM
3 2 S1
4 5 S2
5 1 SWS
>>> df3
col3 level_0
0 1 ALL
1 3 AWA
2 0 REM
3 4 S1
4 2 S2
5 3 SWS
>>> pd.concat([df1,df2,df3], axis=1)
col1 level_0 col2 level_0 col3 level_0
0 5 ALL 4 ALL 1 ALL
1 8 AWA 1 AWA 3 AWA
2 5 REM 3 REM 0 REM
3 3 S1 2 S1 4 S1
4 8 S2 5 S2 2 S2
5 4 SWS 1 SWS 3 SWS
You can set level_0 as your index, then concatenate:
>>> pd.concat([df1.set_index('level_0'), df2.set_index('level_0'), df3.set_index('level_0')], axis=1)
col1 col2 col3
level_0
ALL 5 4 1
AWA 8 1 3
REM 5 3 0
S1 3 2 4
S2 8 5 2
SWS 4 1 3
Or, if it's not needed as an index, you can remove it before the concat:
>>> pd.concat([df1.drop('level_0', axis=1), df2.drop('level_0', axis=1), df3.drop('level_0', axis=1)], axis=1)
col1 col2 col3
0 5 4 1
1 8 1 3
2 5 3 0
3 3 2 4
4 8 5 2
5 4 1 3
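If all the frames share the key column, another option is to merge them on that key instead of aligning on the index; the key then appears only once. A minimal sketch with toy data shaped like the example above:

```python
import numpy as np
import pandas as pd
from functools import reduce

level0 = ['ALL', 'AWA', 'REM', 'S1', 'S2', 'SWS']
df1 = pd.DataFrame({'level_0': level0, 'col1': np.arange(6)})
df2 = pd.DataFrame({'level_0': level0, 'col2': np.arange(6, 12)})
df3 = pd.DataFrame({'level_0': level0, 'col3': np.arange(12, 18)})

# merge pairwise on the shared key; 'level_0' survives as a single column
merged = reduce(lambda left, right: left.merge(right, on='level_0'),
                [df1, df2, df3])
print(merged.columns.tolist())  # ['level_0', 'col1', 'col2', 'col3']
```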
With this command I was able to delete all the columns named "level_0" (dropping by label removes every column carrying that name):
df.drop('level_0', axis=1, inplace=True)
df
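If the concatenation has already produced several identically named columns and you want to keep just the first occurrence of each label, a boolean mask over `columns.duplicated()` works. A sketch on toy data:

```python
import pandas as pd

level0 = ['ALL', 'AWA']
df1 = pd.DataFrame({'level_0': level0, 'col1': [1, 2]})
df2 = pd.DataFrame({'level_0': level0, 'col2': [3, 4]})
df = pd.concat([df1, df2], axis=1)   # columns: level_0, col1, level_0, col2

# keep only the first occurrence of each column label
deduped = df.loc[:, ~df.columns.duplicated()]
print(deduped.columns.tolist())  # ['level_0', 'col1', 'col2']
```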
I have the following data frame:
Col1 Col2 Col3 Type
0 1 2 3 1
1 4 5 6 1
2 7 8 9 2
and I would like to have a shuffled output like :
Col3 Col1 Col2 Type
0 3 1 2 1
1 6 4 5 1
2 9 7 8 2
How to achieve this?
Use DataFrame.sample with axis=1:
df = df.sample(frac=1, axis=1)
If need last column not changed position:
a = df.columns[:-1].to_numpy()
np.random.shuffle(a)
print (a)
['Col3' 'Col1' 'Col2']
df = df[np.append(a, ['Type'])]
print (df)
Col3 Col1 Col2 Type
0 3 1 2 1
1 6 4 5 1
2 9 7 8 2
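For reproducibility, the same shuffle-all-but-the-last-column idea can be written with a seeded NumPy generator (a sketch reusing the example's column names):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Col1': [1, 4, 7], 'Col2': [2, 5, 8],
                   'Col3': [3, 6, 9], 'Type': [1, 1, 2]})

rng = np.random.default_rng(0)               # seeded: repeatable shuffles
shuffled = rng.permutation(df.columns[:-1])  # shuffle everything but 'Type'
df = df[list(shuffled) + ['Type']]           # reattach 'Type' at the end
```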
I have a dataframe with one column that contains, for each row, the name of another column satisfying certain conditions.
For example, if the columns of the dataframe are Index, Col1, Col2, Col3 and Col_Name, then Col_Name holds either Col1, Col2 or Col3 for each row.
Now, in a new column, say Col_New, I want for each row the value from the column named in Col_Name: if in the 5th row Col_Name says Col1, then Col_New should hold the value of Col1 in the 5th row.
I am sorry I cannot post the code I am working on, hence this hypothetical example.
Obliged for any help, thanks.
IIUC you could use:
df['Col_New'] = df.reset_index().apply(lambda x: df.at[x['index'], x['Col_Name']], axis=1)
Example:
cols = ['Col1', 'Col2', 'Col3']
df = pd.DataFrame(np.random.rand(10, 3), columns=cols)
df['Col_Name'] = np.random.choice(cols, 10)
print(df)
Col1 Col2 Col3 Col_Name
0 0.833988 0.939254 0.256450 Col2
1 0.675909 0.609494 0.641944 Col3
2 0.877474 0.971299 0.218273 Col3
3 0.201189 0.265742 0.800580 Col2
4 0.397945 0.135153 0.941313 Col2
5 0.666252 0.697983 0.164768 Col2
6 0.863377 0.839421 0.601316 Col2
7 0.138975 0.731359 0.379258 Col3
8 0.412148 0.541033 0.197861 Col2
9 0.980040 0.506752 0.823274 Col3
df['Col_New'] = df.reset_index().apply(lambda x: df.at[x['index'], x['Col_Name']], axis=1)
[out]
Col1 Col2 Col3 Col_Name Col_New
0 0.833988 0.939254 0.256450 Col2 0.939254
1 0.675909 0.609494 0.641944 Col3 0.641944
2 0.877474 0.971299 0.218273 Col3 0.218273
3 0.201189 0.265742 0.800580 Col2 0.265742
4 0.397945 0.135153 0.941313 Col2 0.135153
5 0.666252 0.697983 0.164768 Col2 0.697983
6 0.863377 0.839421 0.601316 Col2 0.839421
7 0.138975 0.731359 0.379258 Col3 0.379258
8 0.412148 0.541033 0.197861 Col2 0.541033
9 0.980040 0.506752 0.823274 Col3 0.823274
Example 2 (based on integer col references)
cols = [1, 2, 3]
np.random.seed(0)
df = pd.DataFrame(np.random.rand(10, 3), columns=cols)
df[13] = np.random.choice(cols, 10)
print(df)
1 2 3 13
0 0.548814 0.715189 0.602763 3
1 0.544883 0.423655 0.645894 3
2 0.437587 0.891773 0.963663 1
3 0.383442 0.791725 0.528895 3
4 0.568045 0.925597 0.071036 1
5 0.087129 0.020218 0.832620 1
6 0.778157 0.870012 0.978618 1
7 0.799159 0.461479 0.780529 2
8 0.118274 0.639921 0.143353 2
9 0.944669 0.521848 0.414662 3
Instead use:
df['Col_New'] = df.reset_index().apply(lambda x: df.at[int(x['index']), int(x[13])], axis=1)
1 2 3 13 Col_New
0 0.548814 0.715189 0.602763 3 0.602763
1 0.544883 0.423655 0.645894 3 0.645894
2 0.437587 0.891773 0.963663 1 0.437587
3 0.383442 0.791725 0.528895 3 0.528895
4 0.568045 0.925597 0.071036 1 0.568045
5 0.087129 0.020218 0.832620 1 0.087129
6 0.778157 0.870012 0.978618 1 0.778157
7 0.799159 0.461479 0.780529 2 0.461479
8 0.118274 0.639921 0.143353 2 0.639921
9 0.944669 0.521848 0.414662 3 0.414662
Using the example DataFrame from Chris A.
You could do it like this:
cols = ['Col1', 'Col2', 'Col3']
df = pd.DataFrame(np.random.rand(10, 3), columns=cols)
df['Col_Name'] = np.random.choice(cols, 10)
print(df)
df['Col_New'] = [df.loc[df.index[i], j] for i, j in enumerate(df.Col_Name)]
print(df)
Pandas has DataFrame.lookup for exactly this. It seems to need the same type in the value columns and in the lookup column, so it is possible to convert both to strings:
np.random.seed(123)
cols = [1, 2, 3]
df = pd.DataFrame(np.random.randint(10, size=(5, 3)), columns=cols).rename(columns=str)
df['Col_Name'] = np.random.choice(cols, 5)
df['Col_New'] = df.lookup(df.index, df['Col_Name'].astype(str))
print(df)
1 2 3 Col_Name Col_New
0 2 2 6 3 6
1 1 3 9 2 3
2 6 1 0 1 6
3 1 9 0 1 1
4 0 9 3 1 0
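Note that DataFrame.lookup was deprecated in pandas 1.2 and removed in 2.0. A NumPy-based replacement (a sketch on toy data of the same shape) indexes the underlying array with row positions paired with the column position named in each row:

```python
import numpy as np
import pandas as pd

cols = ['Col1', 'Col2', 'Col3']
df = pd.DataFrame([[1, 2, 3], [4, 5, 6], [7, 8, 9]], columns=cols)
df['Col_Name'] = ['Col2', 'Col3', 'Col1']

# map each row's Col_Name to a column position, then fancy-index the array
idx = pd.Index(cols).get_indexer(df['Col_Name'])
df['Col_New'] = df[cols].to_numpy()[np.arange(len(df)), idx]
print(df['Col_New'].tolist())  # [2, 6, 7]
```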
I am trying to understand how to perform arithmetic operations on a dataframe in python.
import pandas as pd
import numpy as np
df = pd.DataFrame({'col1':[2,38,7,5],'col2':[1,3,2,4]})
print(df.sum())
This is what I'm getting (in terms of the output), but I want to have more control over which sum I am getting.
col1 52
col2 10
dtype: int64
Just wondering how I would add individual elements in the dataframe together.
Your question is not very clear, but I will try to cover the likely scenarios.
Input:
df
col1 col2
0 2 1
1 38 3
2 7 2
3 5 4
If you want the sum of columns,
df.sum(axis = 0)
Output:
col1 52
col2 10
dtype: int64
If you want the sum of rows,
df.sum(axis = 1)
0 3
1 41
2 9
3 9
dtype: int64
If you want to add a list of numbers into a column,
num = [1, 2, 3, 4]
df['col1'] = df['col1'] + num
df
Output:
col1 col2
0 3 1
1 40 3
2 10 2
3 9 4
If you want to add a list of numbers into a row,
num = [1, 2]
df.loc[0] = df.loc[0] + num
df
Output:
col1 col2
0 3 3
1 38 3
2 7 2
3 5 4
If you want to add a single number to a column,
df['col1'] = df['col1'] + 2
df
Output:
col1 col2
0 4 1
1 40 3
2 9 2
3 7 4
If you want to add a single number to a row,
df.loc[0] = df.loc[0] + 2
df
Output:
col1 col2
0 4 3
1 38 3
2 7 2
3 5 4
If you want to add a number to any number(an element of row i and column j),
df.iloc[1,1] = df.iloc[1,1] + 5
df
Output:
col1 col2
0 2 1
1 38 8
2 7 2
3 5 4
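Beyond scalars and lists, arithmetic between two DataFrames aligns on both row and column labels; `add` with `fill_value` controls what happens when a label exists in only one frame. A small sketch:

```python
import pandas as pd

a = pd.DataFrame({'col1': [2, 38], 'col2': [1, 3]})
b = pd.DataFrame({'col1': [1, 1], 'col3': [5, 5]})

# plain `a + b` would leave col2 and col3 as NaN (no overlap);
# fill_value=0 treats the missing side as zero instead
summed = a.add(b, fill_value=0)
print(summed)
```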
I am working with two data frames that I created from an Excel file. One data frame contains values that are separated by commas, that is,
df1 df2
----------- ------------
0 LFTEG42 X,Y,Z
1 JOCOROW 1,2
2 TLR_U01 I
3 PR_UDG5 O,M
df1 and df2 are my column names. My intention is to merge the two data frames and generate the following output:
desired result
----------
0 LFTEG42X
1 LFTEG42Y
2 LFTEG42Z
3 JOCOROW1
4 JOCOROW2
5 TLR_U01I
6 .....
n PR_UDG5M
This is the code that I used but I ended up with the following result:
input_file = pd.ExcelFile \
('C:\\Users\\devel\\Desktop_12\\Testing\\latest_Calculation' + str(datetime.now()).split(' ')[0] + '.xlsx')
# convert the worksheets to dataframes
df1 = pd.read_excel(input_file, index_col=None, na_values=['NA'], parse_cols="H",
sheetname="Analysis")
df2 = pd.read_excel(input_file, index_col=None, na_values=['NA'], parse_cols="I",
sheetname="Analysis")
data_frames_merged = df1.append(df2, ignore_index=True)
current result
--------------
NaN XYZ
NaN 1,2
NaN I
... ...
PR_UDG5 NaN
Questions
Why did I end up receiving a NaN (not a number) value?
How can I achieve my desired result of merging these two data frames with the comma values?
Let me break down the steps:
df=pd.concat([df1,df2],axis=1)
df.df2=df.df2.str.split(',')
df=df.set_index('df1').df2.apply(pd.Series).stack().reset_index().drop('level_1',1).rename(columns={0:'df2'})
df['New']=df.df1+df.df2
df
Out[34]:
df1 df2 New
0 LFTEG42 X LFTEG42X
1 LFTEG42 Y LFTEG42Y
2 LFTEG42 Z LFTEG42Z
3 JOCOROW 1 JOCOROW1
4 JOCOROW 2 JOCOROW2
5 TLR_U01 I TLR_U01I
6 PR_UDG5 O PR_UDG5O
7 PR_UDG5 M PR_UDG5M
Data Input :
df1
Out[36]:
df1
0 LFTEG42
1 JOCOROW
2 TLR_U01
3 PR_UDG5
df2
Out[37]:
df2
0 X,Y,Z
1 1,2
2 I
3 O,M
Dirty one-liner
new_df = pd.concat([df1['df1'], df2['df2'].str.split(',', expand = True).stack()\
.reset_index(1,drop = True)], axis = 1).sum(1)
0 LFTEG42X
0 LFTEG42Y
0 LFTEG42Z
1 JOCOROW1
1 JOCOROW2
2 TLR_U01I
3 PR_UDG5O
3 PR_UDG5M
Also, similar to @Vaishali's answer, except using melt:
df = pd.concat([df1,df2['df2'].str.split(',',expand=True)],axis=1).melt(id_vars='df1').dropna().drop('variable',axis=1).sum(axis=1)
0 LFTEG42X
1 JOCOROW1
2 TLR_U01I
3 PR_UDG5O
4 LFTEG42Y
5 JOCOROW2
7 PR_UDG5M
8 LFTEG42Z
Setup
df1 = pd.DataFrame(dict(A='LFTEG42 JOCOROW TLR_U01 PR_UDG5'.split()))
df2 = pd.DataFrame(dict(A='X,Y,Z 1,2 I O,M'.split()))
Getting creative
df1.A.repeat(df2.A.str.count(',') + 1) + ','.join(df2.A).split(',')
0 LFTEG42X
0 LFTEG42Y
0 LFTEG42Z
1 JOCOROW1
1 JOCOROW2
2 TLR_U01I
3 PR_UDG5O
3 PR_UDG5M
dtype: object
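Since pandas 0.25, the split-and-flatten step in the answers above can also be written with `explode`, which turns each list element into its own row. A sketch on the same data:

```python
import pandas as pd

df1 = pd.DataFrame({'df1': ['LFTEG42', 'JOCOROW', 'TLR_U01', 'PR_UDG5']})
df2 = pd.DataFrame({'df2': ['X,Y,Z', '1,2', 'I', 'O,M']})

df = pd.concat([df1, df2], axis=1)
df['df2'] = df['df2'].str.split(',')   # lists: ['X','Y','Z'], ['1','2'], ...
df = df.explode('df2')                 # one row per list element
result = (df['df1'] + df['df2']).reset_index(drop=True)
print(result.tolist())
```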
Given the following data frame:
import pandas as pd
df = pd.DataFrame({'COL1': ['A', 'A','A','A','B','B'],
'COL2' : ['AA','AA','BB','BB','BB','BB'],
'COL3' : [2,3,4,5,4,2],
'COL4' : [0,1,2,3,4,2]})
df
COL1 COL2 COL3 COL4
0 A AA 2 0
1 A AA 3 1
2 A BB 4 2
3 A BB 5 3
4 B BB 4 4
5 B BB 2 2
I would like, as efficiently as possible (i.e. via groupby and lambda x or better), to find the median of columns 3 and 4 for each distinct group of columns 1 and 2.
The desired result is as follows:
COL1 COL2 COL3 COL4 MEDIAN
0 A AA 2 0 1.5
1 A AA 3 1 1.5
2 A BB 4 2 3.5
3 A BB 5 3 3.5
4 B BB 4 4 3
5 B BB 2 2 3
Thanks in advance!
You already had the idea -- groupby COL1 and COL2 and calculate median.
m = df.groupby(['COL1', 'COL2'])[['COL3','COL4']].apply(np.median)
m.name = 'MEDIAN'
print(df.join(m, on=['COL1', 'COL2']))
COL1 COL2 COL3 COL4 MEDIAN
0 A AA 2 0 1.5
1 A AA 3 1 1.5
2 A BB 4 2 3.5
3 A BB 5 3 3.5
4 B BB 4 4 3.0
5 B BB 2 2 3.0
df.groupby(['COL1', 'COL2']).median()[['COL3','COL4']]
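Note the last answer returns a separate median per column for each group, not the single combined MEDIAN from the desired output. To compute the combined median over COL3 and COL4 together and broadcast it back onto every row, one option (a sketch on the question's data) is to aggregate and merge:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'COL1': ['A', 'A', 'A', 'A', 'B', 'B'],
                   'COL2': ['AA', 'AA', 'BB', 'BB', 'BB', 'BB'],
                   'COL3': [2, 3, 4, 5, 4, 2],
                   'COL4': [0, 1, 2, 3, 4, 2]})

# one median over the flattened COL3/COL4 values of each (COL1, COL2) group
m = (df.groupby(['COL1', 'COL2'])[['COL3', 'COL4']]
       .apply(lambda g: np.median(g.to_numpy()))
       .rename('MEDIAN')
       .reset_index())
out = df.merge(m, on=['COL1', 'COL2'])
print(out['MEDIAN'].tolist())  # [1.5, 1.5, 3.5, 3.5, 3.0, 3.0]
```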