Given the following data frame:
import pandas as pd
import numpy as np
df1=pd.DataFrame({'A':['a','b','c','d'],
'B':['d',np.nan,'c','f']})
df1
A B
0 a d
1 b NaN
2 c c
3 d f
I'd like to insert blank rows before each row.
The desired result is:
A B
0 NaN NaN
1 a d
2 NaN NaN
3 b NaN
4 NaN NaN
5 c c
6 NaN NaN
7 d f
In reality, I have many rows.
Thanks in advance!
I think you could change your index as @bananafish did and then use reindex:
# put the original rows at odd positions 1, 3, 5, ...
df1.index = range(1, 2*len(df1)+1, 2)
# reindex over 0..2*len(df1)-1; the missing even positions become all-NaN rows
df2 = df1.reindex(index=range(2*len(df1)))
In [29]: df2
Out[29]:
A B
0 NaN NaN
1 a d
2 NaN NaN
3 b NaN
4 NaN NaN
5 c c
6 NaN NaN
7 d f
Use numpy and pd.DataFrame
def pir(df):
    # build an all-NaN block with the same shape as df
    nans = np.where(np.empty_like(df.values), np.nan, np.nan)
    # stack the NaN block next to the data, then reshape so each
    # NaN row lands directly before its corresponding data row
    data = np.hstack([nans, df.values]).reshape(-1, df.shape[1])
    return pd.DataFrame(data, columns=df.columns)
pir(df1)
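The np.where(np.empty_like(...)) line is just a trick to build an all-NaN block with the same shape as the frame; if you find it cryptic, np.full does the same thing more directly (a small sketch of the same idea, not the original author's code):
def pir_alt(df):
    # np.full builds the all-NaN block directly
    nans = np.full(df.shape, np.nan, dtype=object)
    # interleave as above: each NaN row lands before its data row
    data = np.hstack([nans, df.values]).reshape(-1, df.shape[1])
    return pd.DataFrame(data, columns=df.columns)

pir_alt(df1)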
Testing and Comparison
Code
def banana(df):
    df1 = df.set_index(np.arange(1, 2*len(df)+1, 2))
    df2 = pd.DataFrame(index=range(0, 2*len(df1), 2), columns=df1.columns)
    return pd.concat([df1, df2]).sort_index()

def anton(df):
    df = df.set_index(np.arange(1, 2*len(df)+1, 2))
    return df.reindex(index=range(2*len(df)))

def pir(df):
    nans = np.where(np.empty_like(df.values), np.nan, np.nan)
    data = np.hstack([nans, df.values]).reshape(-1, df.shape[1])
    return pd.DataFrame(data, columns=df.columns)
Results
pd.concat([f(df1) for f in [banana, anton, pir]],
          axis=1, keys=['banana', 'anton', 'pir'])
Timing
A bit roundabout but this works:
df1.index = range(1, 2*len(df1)+1, 2)
df2 = pd.DataFrame(index=range(0, 2*len(df1), 2), columns=df1.columns)
df3 = pd.concat([df1, df2]).sort_index()
Hello, I just want to group the elements by id and show each string in a separate column.
Original dataframe:
id|elements|
1|a
1|b
1|c
1|d
2|a
2|b
2|b
3|a
3|a
3|b
3|c
3|c
3|c
Desired output:
id|column1|column2|column3|column4|column5|column6|
1 |a|b|c|d| | |
2 |a|b|b| | | |
3 |a|a|b|c|c|c|
Any ideas? Thank you very much in advance
Given your original data frame, you can simply do:
df.groupby('id').apply(lambda x: x['element'].to_list()).apply(pd.Series)
Output:
0 1 2 3 4 5
id
1 a b c d NaN NaN
2 a b b NaN NaN NaN
3 a a b c c c
If you do not want id to be the index, use .reset_index().
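For example, a quick sketch that builds the frame from the data in the question, using a slight variation of the expression above (the resulting column names 0-5 are just pandas defaults):
import pandas as pd

df = pd.DataFrame({'id': [1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 3, 3],
                   'element': ['a', 'b', 'c', 'd', 'a', 'b', 'b',
                               'a', 'a', 'b', 'c', 'c', 'c']})
out = df.groupby('id')['element'].apply(list).apply(pd.Series).reset_index()
print(out)
#    id  0  1  2    3    4    5
# 0   1  a  b  c    d  NaN  NaN
# 1   2  a  b  b  NaN  NaN  NaN
# 2   3  a  a  b    c    c    c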
Try this
import pandas as pd
import numpy as np
F = {'id': [1,1,1,1,2,2,2,3,3,3,3,3], 'element': ['a','b','c','d','a','b','b','a','a','b','c','c']}
df = pd.DataFrame(data = F)
# collect the elements for each id into a list
df2 = df.set_index('id').stack().groupby(level=[0, 1]).apply(list).unstack()
# expand each id's list into its own set of columns
df3 = pd.DataFrame(df2["element"].to_list(), columns=['element1', 'element2', 'element3', 'element4', 'element5'])
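As an aside, the same layout can also be reached with groupby().cumcount() plus pivot, which skips the intermediate lists (a sketch assuming the df defined above; the pos helper column and the 'column' prefix are just illustrative names):
out = (df.assign(pos=df.groupby('id').cumcount() + 1)
         .pivot(index='id', columns='pos', values='element')
         .add_prefix('column'))
print(out)
# pos column1 column2 column3 column4 column5
# id
# 1         a       b       c       d     NaN
# 2         a       b       b     NaN     NaN
# 3         a       a       b       c       c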
I have a pandas dataframe as below:
import pandas as pd
import numpy as np
df = pd.DataFrame({'ORDER': ["A", "A"], 'col1': [np.nan, np.nan], 'col2': [np.nan, 5]})
df
ORDER col1 col2
0 A NaN NaN
1 A NaN 5.0
I want to create a column 'new' as the sum of col1 and col2, ignoring NaN when only one of the columns is NaN. If both columns have NaN values, it should return NaN, as below.
I tried the code below and it works fine. Is there any way to achieve the same with just one line of code?
df['new'] = df[['col1', 'col2']].sum(axis = 1)
df['new'] = np.where(pd.isnull(df['col1']) & pd.isnull(df['col2']), np.nan, df['new'])
df
ORDER col1 col2 new
0 A NaN NaN NaN
1 A NaN 5.0 5.0
Use sum with min_count=1, which returns NaN when a row has fewer than one non-NA value:
df['new'] = df[['col1','col2']].sum(axis=1,min_count=1)
Out[78]:
0 NaN
1 5.0
dtype: float64
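Assigning it back reproduces the desired frame from the question:
df['new'] = df[['col1', 'col2']].sum(axis=1, min_count=1)
print(df)
#   ORDER  col1  col2  new
# 0     A   NaN   NaN  NaN
# 1     A   NaN   5.0  5.0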
Use the add method on the two columns, which takes a fill_value argument that replaces NaN when only one of the operands is missing (if both are NaN, the result stays NaN):
df['col1'].add(df['col2'], fill_value=0)
0 NaN
1 5.0
dtype: float64
Is this ok?
df['new'] = df[['col1', 'col2']].sum(axis = 1).replace(0,np.nan)
I am working with a pandas DataFrame in which some of the columns have no entries. I want to move all of these empty columns to the end, and I managed to do it (see the code below), but I noticed that afterwards the remaining columns were also sorted alphabetically by column name in descending order. Can I prevent this from happening?
Input dataframe:
,colA,colB,colC,colD,colF
rowA,X,nan,nan,X,nan
rowB,nan,X,nan,nan,X
rowC,X,nan,nan,X,X
rowD,X,nan,nan,nan,nan
rowE,nan,X,nan,nan,X
Code:
import pandas as pd
df = pd.read_csv(r'q1.csv', dtype='str', index_col=0, na_values='nan')
ind = df.notnull().astype('int').any().sort_values(ascending=False).index
out = df.loc[:, ind]
out.to_csv(r'out.csv', na_rep='nan')
Output dataframe:
,colF,colD,colB,colA,colC
rowA,nan,X,nan,X,nan
rowB,X,nan,X,nan,nan
rowC,X,X,nan,X,nan
rowD,nan,nan,nan,X,nan
rowE,X,nan,X,nan,nan
Essentially, I want to keep order as it is for all other columns.
Thanks.
If I understand correctly, you may try this.
# True marks columns that are entirely NaN; False sorts before True, and
# mergesort is stable, so the non-empty columns keep their original order
m = df.isna().all().sort_values(kind='mergesort')
df_new = df[m.index]
Out[243]:
colA colB colD colF colC
rowA X NaN X NaN NaN
rowB NaN X NaN X NaN
rowC X NaN X X NaN
rowD X NaN NaN NaN NaN
rowE NaN X NaN X NaN
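The same stability argument works with plain sorted as a key-based one-liner (a sketch, not part of the original answer):
# sorted() is stable too: empty columns (key True) go last, the rest keep their order
df_new = df[sorted(df.columns, key=lambda c: df[c].isna().all())]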
I need to completely delete rows in a DataFrame that have a "None" value in all the columns. I am using the following code:
df.dropna(axis=0, how='all', inplace=True)
This does not make any difference to the DataFrame; the rows with "None" values are still there.
How can I achieve this?
Those "None" values are probably strings, so use replace first:
df = df.replace('None', np.nan).dropna(how='all')
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'a': ['None', 'a', 'None'],
    'b': ['None', 'g', 'None'],
    'c': ['None', 'v', 'b'],
})
print (df)
a b c
0 None None None
1 a g v
2 None None b
df1 = df.replace('None', np.nan).dropna(how='all')
print (df1)
a b c
1 a g v
2 NaN NaN b
Or test for the value "None" with not-equal (ne) and DataFrame.any:
df1 = df[df.ne('None').any(axis=1)]
print (df1)
a b c
1 a g v
2 None None b
You should be dropping along axis 1. Use the how keyword to drop columns with any or all NaN values. Check the docs.
import pandas as pd
import numpy as np
df = pd.DataFrame({'a':[1,2,3], 'b':[-1, 0, np.nan], 'c':[np.nan, np.nan, np.nan]})
df
a b c
0 1 -1.0 NaN
1 2 0.0 NaN
2 3 NaN NaN
df.dropna(axis=1, how='any')
a
0 1
1 2
2 3
df.dropna(axis=1, how='all')
a b
0 1 -1.0
1 2 0.0
2 3 NaN
I have two dataframes. df1 is an empty DataFrame and df2 has some data, as shown. A few columns are common to both dfs. I want to append df2's column data into df1's columns. df3 is the expected result.
I have referred to "Python + Pandas + dataframe: couldn't append one dataframe to another", but it is not working. It gives the following error:
ValueError: Plan shapes are not aligned
df1:
Empty DataFrame
Columns: [a, b, c, d, e]
Index: []
df2:
c e
0 11 55
1 22 66
df3 (expected output):
   a  b   c  d   e
0        11     55
1        22     66
I tried with append but am not getting the desired result.
import pandas as pd
l1 = ['a', 'b', 'c', 'd', 'e']
l2 = []
df1 = pd.DataFrame(l2, columns=l1)
l3 = ['c', 'e']
l4 = [[11, 55],
[22, 66]]
df2 = pd.DataFrame(l4, columns=l3)
print("concat","\n",pd.concat([df1,df2])) # columns will be inplace
print("merge Nan","\n",pd.merge(df2, df1,how='left', on=l3)) # columns occurence is not preserved
#### Output ####
#concat
a b c d e
0 NaN NaN 11 NaN 55
1 NaN NaN 22 NaN 66
#merge
c e a b d
0 11 55 NaN NaN NaN
1 22 66 NaN NaN NaN
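If you want the merge result back in df1's original column order, reindexing the columns afterwards should do it (a small follow-up sketch reusing l1 and l3 from above):
merged = pd.merge(df2, df1, how='left', on=l3).reindex(columns=l1)
print(merged)
#     a    b   c    d   e
# 0 NaN  NaN  11  NaN  55
# 1 NaN  NaN  22  NaN  66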
Append seems to work for me. Does this not do what you want?
df1 = pd.DataFrame(columns=['a', 'b', 'c'])
print("df1: ")
print(df1)
df2 = pd.DataFrame(columns=['a', 'c'], data=[[0, 1], [2, 3]])
print("df2:")
print(df2)
print("df1.append(df2):")
print(df1.append(df2, ignore_index=True, sort=False))
Output:
df1:
Empty DataFrame
Columns: [a, b, c]
Index: []
df2:
a c
0 0 1
1 2 3
df1.append(df2):
a b c
0 0 NaN 1
1 2 NaN 3
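Note that DataFrame.append was deprecated in pandas 1.4 and removed in 2.0, so on current versions the equivalent call is pd.concat:
# same result as df1.append(df2, ignore_index=True, sort=False) on pandas >= 2.0
print(pd.concat([df1, df2], ignore_index=True, sort=False))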
Have you tried pd.concat?
pd.concat([df1,df2])
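With the df1 and df2 from the question this already gives the expected column layout (see the concat output in the earlier answer); ignore_index=True just produces a clean 0..n-1 index:
df3 = pd.concat([df1, df2], ignore_index=True)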