merging varying number of rows and columns by multiple conditions in python - python-3.x

Updated problem: why does it not merge a_date, a_par, a_cons, a_ment and a_le? These are appended as columns without values, even though they have values in the original dataset.
Here is what the dataset looks like:
connector type q_text a_text var1 var2
1 1111 1 aa None xx ps
2 9999 2 None tt jjjj pppp
3 1111 2 None uu None oo
4 9999 1 bb None yy Rt
5 9999 1 cc None zz tR
Goal: what the dataset should look like
connector q_text a_text var1 var1.1 var2 var2.1
1 1111 aa uu xx None ps oo
2 9999 bb tt yy jjjj Rt pppp
3 9999 cc tt zz jjjj tR pppp
Logic: the column type has either value 1 or 2; multiple rows have value 1, but only one row (with the same value in connector) has value 2.
Here are the main merging rules:
Merge every row of type=1 with its corresponding (connector) type=2 row.
Since multiple rows of type=1 have the same connector value, I don't want to merge only one row of type=1 but all of them, each with the single type=2 row.
Since some columns (e.g. a_text) follow left-join logic, values can be overridden without adding an extra column.
Since var2 values cannot be merged by left-join, because they are non-exclusionary with regard to the rows' connector value, I want to have extra columns (var1.1, var2.1) for those values (pppp, jjjj).
In summary (keeping in mind that I only speak of rows that have the same connector value): if q_text is None, I first want to replace the values in a_text with the a_text value (see tt and uu in the table above) of the corresponding row (same connector value), and secondly want to append some other values (var1 and var2) of that very same corresponding row as new columns.
Also, there are rows with a unique connector value that is not going to be matched. I want to keep those rows, though.
I only want to "drop" the type=2 rows that get merged with their corresponding type=1 row(s). In other words: I don't want to keep the rows of type=2 that have a match and get merged into their corresponding (connector) type=1 rows. I want to keep all other rows, though.
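For illustration, a minimal sketch of this drop rule (my own sketch, assuming the frame is called df as in the snippets below):
# connectors that have at least one type=1 partner
matched = df.loc[df['type'] == 1, 'connector'].unique()
# keep every type=1 row; keep a type=2 row only if its connector never appears among the type=1 rows
df = df[(df['type'] == 1) | (~df['connector'].isin(matched))]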
Solution: the answer by @victor__von__doom here,
merging varying number of rows by multiple conditions in python,
was given when I originally wanted to keep all of the type=2 columns (values).
Code I used (merges Perso, q_text and a_text):
df.loc[df['type'] == 2, 'a_date'] = df['q_date']
df.loc[df['type'] == 2, 'a_par'] = df['par']
df.loc[df['type'] == 2, 'a_cons'] = df['cons']
df.loc[df['type'] == 2, 'a_ment'] = df['pret']
df.loc[df['type'] == 2, 'a_le'] = df['q_le']
my_cols = ['Perso', 'q_text', 'a_text', 'a_le', 'q_le', 'q_date', 'par', 'cons', 'pret', 'a_date', 'a_par', 'a_cons', 'a_ment']
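# within each connector group, pull the later type=2 row's values back onto its type=1 rows (bfill after sorting by type)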
df[my_cols] = df.sort_values(['connector','type']).groupby('connector')[my_cols].transform(lambda x: x.bfill())
df.dropna(subset=['a_text', 'Perso'],inplace=True)
df.reset_index(drop=True,inplace=True)
Data: This is a representation of the core dataset. Unfortunately I cannot share the actual data due to privacy laws.
Columns: Perso | ID | per | q_le | a_le | pret | par | form | q_date | name | IO_ID | part | area | q_text | a_text | country | cons | dig | connector | type
Row 1: J Ws | 1-1/4/2001-11-12/1 | 1999-2009 | None | 4325 | 'Mi, h', 'd' | Cew | Thre | 2001-11-12 | None | 345 | rede | s — H | None | wr ede | Terd e | e r | 2001-11-12.1.g9 | 999999999 | 2
Row 2: S ts | 9-3/6/2003-10-14/1 | 1994-2004 | None | 23 | 'sd, h' | d-g | Thre | 2003-10-14 | None | 34555 | The | l? I | None | Tre | Thr ede | re | 2001-04-16.1.a9 | 333333333 | 2
Row 3: On d | 6-1/6/2005-09-03/1 | 1992-2006 | None | 434 | 'uu h' | d-g | Thre | 2005-09-03 | None | 7313 | Thde | l? I | None | T e | Th rede | dre | 2001-08-07.1.e4 | 111111111 | 2
Row 4: None | 3-4/4/2000-07-07/1 | 1992-2006 | 1223 | None | 'uu h' | dfs | Thre | 2000-07-07 | Th r | 7413 | Thde | Tddde | Thd de | None | Thre de | 2001-07-06.1.j3 | 111111111 | 1
Row 5: None | 2-1/6/2001-11-12/1 | 1999-2009 | 1444 | None | 'Mi, h', 'd' | d-g | Thre | 2001-11-12 | T rj | 7431 | Thde | l? I | Th dde | None | Thr ede | 2001-11-12.1.s7 | 999999999 | 1
Row 6: None | 1-6/4/2007-11-01/1 | 1993-2010 | 2353 | None | None | d-g | Thre | 2007-11-01 | Thrj | 444 | Thed | l. I | Tgg gg | None | Thre de | we e | 2001-06-11.1.g9 | 654982984 | 1

EDIT v2 with additional columns
This version ensures the values in the additional columns are not impacted.
c = ['connector','type','q_text','a_text','var1','var2','cumsum','country','others']
d = [[1111, 1, 'aa', None, 'xx', 'ps', 0, 'US', 'other values'],
[9999, 2, None, 'tt', 'jjjj', 'pppp', 0, 'UK', 'no values'],
[1111, 2, None, 'uu', None, 'oo', 1, 'US', 'some values'],
[9999, 1, 'bb', None, 'yy', 'Rt', 1, 'UK', 'more values'],
[9999, 1, 'cc', None, 'zz', 'tR', 2, 'UK', 'less values']]
import pandas as pd
pd.set_option('display.max_columns', None)
df = pd.DataFrame(d,columns=c)
print (df)
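# copy the type=2 rows' var1/var2 into new columns so those values survive as separate columns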
df.loc[df['type'] == 2, 'var1.1'] = df['var1']
df.loc[df['type'] == 2, 'var2.1'] = df['var2']
my_cols = ['q_text','a_text','var1','var2','var1.1','var2.1']
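# within each connector group, backfill the type=2 values onto the earlier type=1 rows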
df[my_cols] = df.sort_values(['connector','type']).groupby('connector')[my_cols].transform(lambda x: x.bfill())
df.dropna(subset=['q_text'],inplace=True)
df.reset_index(drop=True,inplace=True)
print (df)
Original DataFrame:
connector type q_text a_text var1 var2 cumsum country others
0 1111 1 aa None xx ps 0 US other values
1 9999 2 None tt jjjj pppp 0 UK no values
2 1111 2 None uu None oo 1 US some values
3 9999 1 bb None yy Rt 1 UK more values
4 9999 1 cc None zz tR 2 UK less values
Updated DataFrame
connector type q_text a_text var1 var2 cumsum country others var1.1 var2.1
0 1111 1 aa uu xx ps 0 US other values None oo
1 9999 1 bb tt yy Rt 1 UK more values jjjj pppp
2 9999 1 cc tt zz tR 2 UK less values jjjj pppp

Related

iteratively merging varying number of rows

Earlier discussion, with help from @Joe Ferndz, here:
merging varying number of rows and columns by multiple conditions in python
What the dataset looks like:
connector type q_text a_text var1
1 1111 1 aaaa None xxxx
2 9999 2 None tttt jjjj
3 1111 2 None uuuu None
4 9999 1 bbbb None yyyy
5 9999 1 cccc None zzzz
Logic: merge every row with type = 1 into its corresponding (same value in connector) type = 2 row. Code that does this:
df.loc[df['type'] == 2, 'var1.1'] = df['var1']
my_cols = ['q_text','a_text','var1']
df[my_cols] = df.sort_values(['connector','type']).groupby('connector')[my_cols].transform(lambda x: x.bfill())
df.dropna(subset=['q_text'], inplace=True)
df.reset_index(drop=True,inplace=True)
What the dataset then looks like:
connector q_text a_text var1 var1.1
1 1111 aaaa uuuu xxxx None
2 9999 bbbb tttt yyyy jjjj
3 9999 cccc None zzzz zzzz
Problem: multiple rows have type = 1 but only one row has type = 2 (same connector value); therefore I need to merge the type = 2 row potentially multiple times.
Question: why does it merge only one row?
What the dataset should look like (compare the row 3 values and you will see what I mean):
connector q_text a_text var1 var1.1
1 1111 aaaa uuuu xxxx None
2 9999 bbbb tttt yyyy jjjj
3 9999 cccc tttt zzzz jjjj
a_text follows left-join logic, so values can be overridden without adding an extra column. In contrast, var1 values are non-exclusionary with regard to the rows' connector value, which is why I want an extra column (var1.1) for those values (jjjj). There are rows with a unique connector value that will never be merged, but I want to keep those.
You want to merge rows with type = 1 into rows having type = 2, but the code/logic you showed doesn't use the pandas.merge method, which will actually do what you want.
First, segregate the rows with type = 1 and type = 2 into two different dataframes, df1 and df2. Then simply merge these two dataframes on the connector values. It will automatically map multiple rows having type = 1 in df1 to the single row having type = 2 in df2 (with the same connector value). Also, since you want to keep rows with a unique connector value that will never be merged, use the how='outer' param to perform an outer merge and keep all values.
After the merge, select the columns you finally want and rename them accordingly:
df1 = df.loc[df.type == 1].copy()
df2 = df.loc[df.type == 2].copy()
merged_df = pd.merge(df1, df2, on='connector', how='outer')
merged_df = merged_df.loc[:,['connector','q_text_x','a_text_y','var1_x','var1_y']]
merged_df.rename(columns={'q_text_x':'q_text','a_text_y':'a_text','var1_x':'var1','var1_y':'var1.1'}, inplace=True)
>>> merged_df
connector q_text a_text var1 var1.1
0 1111 aaaa uuuu xxxx None
1 9999 bbbb tttt yyyy jjjj
2 9999 cccc tttt zzzz jjjj

merging varying number of rows by multiple conditions in python

Problem: merging varying number of rows by multiple conditions
Here is a stylized example of what the dataset looks like:
"index" "connector" "type" "q_text" "a_text" "varx" ...
1 1111 1 aa NA xx
2 9999 2 NA tt NA
3 1111 2 NA uu NA
4 9999 1 bb NA yy
5 9999 1 cc NA zz
Goal: what the dataset should look like
"index" "connector" "type" "type.1" "q_text" "q_text.1" "a_text" "a_text.1 " "varx" "varx.1" ...
1 1111 1 2 aa NA NA uu xx NA
2 9999 1 2 bb NA NA tt yy NA
3 9999 1 2 cc NA NA tt zz NA
Logic: Column "type" has either value 1 or 2 while multiple rows have value 1 but only one row (with the same value in "connector") has value 2
If
same values in "connector"
then
merge
rows of "type"=2 with rows of "type"=1
but
(because multiple rows of "type"=1 have the same value in "connector")
duplicate
the corresponding rows of type=2
and
merge
all of the other rows that also have the same value in "connector" and are of "type"=1
My results: Not all are merged because multiple rows of "type"=1 are associated with UNIQUE rows of "type"=2
Most similar questions are answered using SQL, which I cannot use here.
df2 = df.copy()
df.index.astype(str)
df2.index.astype(str)
pd.merge(df,df2, how='left', on='connector',right_index=True, left_index=True)
df3 = pd.merge(df.set_index('connector'),df2.set_index('connector'), right_index=True, left_index=True).reset_index()
dfNew = df.merge(df2, how='left', left_on=['connector'], right_on = ['connector'])
Can I achieve my goal with groupby()?
Solution by @victor__von__doom
if __name__ == '__main__':
    df = df.groupby('connector', sort=True).apply(lambda c: list(zip(*c.values[:,2:].tolist()))).reset_index(name='merged')
    df[['here', 'are', 'all', 'columns', 'except', 'for', 'the', 'connector', 'column']] = pd.DataFrame(df.merged.tolist())
    df = df.drop(['merged'], axis=1)
First off, it is really messy to just keep concatenating new columns onto your original DataFrame when rows are merged, especially when the number of columns is very large. Furthermore, if you end up merging 3 rows for 1 connector value and 4 rows for another (for example), the only way to include all values is to make empty columns for some rows, which is never a good idea. Instead, I've made it so that the merged rows get combined into tuples, which can then be parsed efficiently while keeping the size of your DataFrame manageable:
import numpy as np
import pandas as pd
if __name__ == '__main__':
    data = np.array([[1, 2, 3, 4, 5], [1111, 9999, 1111, 9999, 9999],
                     [1, 2, 2, 1, 1], ['aa', 'NA', 'NA', 'bb', 'cc'],
                     ['NA', 'tt', 'uu', 'NA', 'NA'],
                     ['xx', 'NA', 'NA', 'yy', 'zz']])
    df = pd.DataFrame(data.T, columns=["index", "connector",
                                       "type", "q_text", "a_text", "varx"])
df = df.groupby("connector", sort=True).apply(lambda c: list(zip(*c.values[:,2:].tolist()))).reset_index(name='merged')
df[["type", "q_text", "a_text", "varx"]] = pd.DataFrame(df.merged.tolist())
df = df.drop(['merged'], axis=1)
The final DataFrame looks like:
connector type q_text a_text varx ...
0 1111 (1, 2) (aa, NA) (NA, uu) (xx, NA) ...
1 9999 (2, 1, 1) (NA, bb, cc) (tt, NA, NA) (NA, yy, zz) ...
Which is much more compact and readable.
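If you later need one row per original record again, a possible way to unpack those tuple columns (a sketch, not from the original answer; it assumes pandas >= 1.3 for multi-column explode, and equal-length tuples per row, which the zip above guarantees):
df.explode(['type', 'q_text', 'a_text', 'varx'], ignore_index=True)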

Replace a special character with a new line within same row in pandas [duplicate]

I have a pandas dataframe in which one column of text strings contains comma-separated values. I want to split each CSV field and create a new row per entry (assume that CSV are clean and need only be split on ','). For example, a should become b:
In [7]: a
Out[7]:
var1 var2
0 a,b,c 1
1 d,e,f 2
In [8]: b
Out[8]:
var1 var2
0 a 1
1 b 1
2 c 1
3 d 2
4 e 2
5 f 2
So far, I have tried various simple functions, but the .apply method seems to only accept one row as return value when it is used on an axis, and I can't get .transform to work. Any suggestions would be much appreciated!
Example data:
from pandas import DataFrame
import numpy as np
a = DataFrame([{'var1': 'a,b,c', 'var2': 1},
{'var1': 'd,e,f', 'var2': 2}])
b = DataFrame([{'var1': 'a', 'var2': 1},
{'var1': 'b', 'var2': 1},
{'var1': 'c', 'var2': 1},
{'var1': 'd', 'var2': 2},
{'var1': 'e', 'var2': 2},
{'var1': 'f', 'var2': 2}])
I know this won't work because we lose DataFrame meta-data by going through numpy, but it should give you a sense of what I tried to do:
def fun(row):
    letters = row['var1']
    letters = letters.split(',')
    out = np.array([row] * len(letters))
    out['var1'] = letters
a['idx'] = range(a.shape[0])
z = a.groupby('idx')
z.transform(fun)
UPDATE 3: it makes more sense to use Series.explode() / DataFrame.explode() methods (implemented in Pandas 0.25.0 and extended in Pandas 1.3.0 to support multi-column explode) as is shown in the usage example:
for a single column:
In [1]: df = pd.DataFrame({'A': [[0, 1, 2], 'foo', [], [3, 4]],
...: 'B': 1,
...: 'C': [['a', 'b', 'c'], np.nan, [], ['d', 'e']]})
In [2]: df
Out[2]:
A B C
0 [0, 1, 2] 1 [a, b, c]
1 foo 1 NaN
2 [] 1 []
3 [3, 4] 1 [d, e]
In [3]: df.explode('A')
Out[3]:
A B C
0 0 1 [a, b, c]
0 1 1 [a, b, c]
0 2 1 [a, b, c]
1 foo 1 NaN
2 NaN 1 []
3 3 1 [d, e]
3 4 1 [d, e]
for multiple columns (for Pandas 1.3.0+):
In [4]: df.explode(['A', 'C'])
Out[4]:
A B C
0 0 1 a
0 1 1 b
0 2 1 c
1 foo 1 NaN
2 NaN 1 NaN
3 3 1 d
3 4 1 e
UPDATE 2: more generic vectorized function, which will work for multiple normal and multiple list columns
def explode(df, lst_cols, fill_value='', preserve_index=False):
    # make sure `lst_cols` is list-alike
    if (lst_cols is not None
            and len(lst_cols) > 0
            and not isinstance(lst_cols, (list, tuple, np.ndarray, pd.Series))):
        lst_cols = [lst_cols]
    # all columns except `lst_cols`
    idx_cols = df.columns.difference(lst_cols)
    # calculate lengths of lists
    lens = df[lst_cols[0]].str.len()
    # preserve original index values
    idx = np.repeat(df.index.values, lens)
    # create "exploded" DF
    res = (pd.DataFrame({
                col: np.repeat(df[col].values, lens)
                for col in idx_cols},
                index=idx)
             .assign(**{col: np.concatenate(df.loc[lens > 0, col].values)
                        for col in lst_cols}))
    # append those rows that have empty lists
    if (lens == 0).any():
        # at least one list in cells is empty
        res = (res.append(df.loc[lens == 0, idx_cols], sort=False)
                  .fillna(fill_value))
    # revert the original index order
    res = res.sort_index()
    # reset index if requested
    if not preserve_index:
        res = res.reset_index(drop=True)
    return res
Demo:
Multiple list columns - all list columns must have the same # of elements in each row:
In [134]: df
Out[134]:
aaa myid num text
0 10 1 [1, 2, 3] [aa, bb, cc]
1 11 2 [] []
2 12 3 [1, 2] [cc, dd]
3 13 4 [] []
In [135]: explode(df, ['num','text'], fill_value='')
Out[135]:
aaa myid num text
0 10 1 1 aa
1 10 1 2 bb
2 10 1 3 cc
3 11 2
4 12 3 1 cc
5 12 3 2 dd
6 13 4
preserving original index values:
In [136]: explode(df, ['num','text'], fill_value='', preserve_index=True)
Out[136]:
aaa myid num text
0 10 1 1 aa
0 10 1 2 bb
0 10 1 3 cc
1 11 2
2 12 3 1 cc
2 12 3 2 dd
3 13 4
Setup:
df = pd.DataFrame({
'aaa': {0: 10, 1: 11, 2: 12, 3: 13},
'myid': {0: 1, 1: 2, 2: 3, 3: 4},
'num': {0: [1, 2, 3], 1: [], 2: [1, 2], 3: []},
'text': {0: ['aa', 'bb', 'cc'], 1: [], 2: ['cc', 'dd'], 3: []}
})
CSV column:
In [46]: df
Out[46]:
var1 var2 var3
0 a,b,c 1 XX
1 d,e,f,x,y 2 ZZ
In [47]: explode(df.assign(var1=df.var1.str.split(',')), 'var1')
Out[47]:
var1 var2 var3
0 a 1 XX
1 b 1 XX
2 c 1 XX
3 d 2 ZZ
4 e 2 ZZ
5 f 2 ZZ
6 x 2 ZZ
7 y 2 ZZ
using this little trick we can convert CSV-like column to list column:
In [48]: df.assign(var1=df.var1.str.split(','))
Out[48]:
var1 var2 var3
0 [a, b, c] 1 XX
1 [d, e, f, x, y] 2 ZZ
UPDATE: generic vectorized approach (will work also for multiple columns):
Original DF:
In [177]: df
Out[177]:
var1 var2 var3
0 a,b,c 1 XX
1 d,e,f,x,y 2 ZZ
Solution:
first let's convert CSV strings to lists:
In [178]: lst_col = 'var1'
In [179]: x = df.assign(**{lst_col:df[lst_col].str.split(',')})
In [180]: x
Out[180]:
var1 var2 var3
0 [a, b, c] 1 XX
1 [d, e, f, x, y] 2 ZZ
Now we can do this:
In [181]: pd.DataFrame({
...: col:np.repeat(x[col].values, x[lst_col].str.len())
...: for col in x.columns.difference([lst_col])
...: }).assign(**{lst_col:np.concatenate(x[lst_col].values)})[x.columns.tolist()]
...:
Out[181]:
var1 var2 var3
0 a 1 XX
1 b 1 XX
2 c 1 XX
3 d 2 ZZ
4 e 2 ZZ
5 f 2 ZZ
6 x 2 ZZ
7 y 2 ZZ
OLD answer:
Inspired by @AFinkelstein's solution, I wanted to make it a bit more generalized so it could be applied to a DF with more than two columns, and as fast (well, almost as fast) as AFinkelstein's solution:
In [2]: df = pd.DataFrame(
...: [{'var1': 'a,b,c', 'var2': 1, 'var3': 'XX'},
...: {'var1': 'd,e,f,x,y', 'var2': 2, 'var3': 'ZZ'}]
...: )
In [3]: df
Out[3]:
var1 var2 var3
0 a,b,c 1 XX
1 d,e,f,x,y 2 ZZ
In [4]: (df.set_index(df.columns.drop('var1',1).tolist())
...: .var1.str.split(',', expand=True)
...: .stack()
...: .reset_index()
...: .rename(columns={0:'var1'})
...: .loc[:, df.columns]
...: )
Out[4]:
var1 var2 var3
0 a 1 XX
1 b 1 XX
2 c 1 XX
3 d 2 ZZ
4 e 2 ZZ
5 f 2 ZZ
6 x 2 ZZ
7 y 2 ZZ
After painful experimentation to find something faster than the accepted answer, I got this to work. It ran around 100x faster on the dataset I tried it on.
If someone knows a way to make this more elegant, by all means please modify my code. I couldn't find a way that works without setting the other columns you want to keep as the index and then resetting the index and re-naming the columns, but I'd imagine there's something else that works.
b = DataFrame(a.var1.str.split(',').tolist(), index=a.var2).stack()
b = b.reset_index()[[0, 'var2']] # var1 variable is currently labeled 0
b.columns = ['var1', 'var2'] # renaming var1
Pandas >= 0.25
Series and DataFrame methods define a .explode() method that explodes lists into separate rows. See the docs section on Exploding a list-like column.
Since you have a list of comma separated strings, split the string on comma to get a list of elements, then call explode on that column.
df = pd.DataFrame({'var1': ['a,b,c', 'd,e,f'], 'var2': [1, 2]})
df
var1 var2
0 a,b,c 1
1 d,e,f 2
df.assign(var1=df['var1'].str.split(',')).explode('var1')
var1 var2
0 a 1
0 b 1
0 c 1
1 d 2
1 e 2
1 f 2
Note that explode only works on a single column (for now). To explode multiple columns at once, see below.
NaNs and empty lists get the treatment they deserve without you having to jump through hoops to get it right.
df = pd.DataFrame({'var1': ['d,e,f', '', np.nan], 'var2': [1, 2, 3]})
df
var1 var2
0 d,e,f 1
1 2
2 NaN 3
df['var1'].str.split(',')
0 [d, e, f]
1 []
2 NaN
df.assign(var1=df['var1'].str.split(',')).explode('var1')
var1 var2
0 d 1
0 e 1
0 f 1
1 2 # empty list entry becomes empty string after exploding
2 NaN 3 # NaN left un-touched
This is a serious advantage over ravel/repeat -based solutions (which ignore empty lists completely, and choke on NaNs).
Exploding Multiple Columns
pandas 1.3 update
df.explode works on multiple columns starting from pandas 1.3:
df = pd.DataFrame({'var1': ['a,b,c', 'd,e,f'],
'var2': ['i,j,k', 'l,m,n'],
'var3': [1, 2]})
df
var1 var2 var3
0 a,b,c i,j,k 1
1 d,e,f l,m,n 2
(df.set_index(['var3'])
.apply(lambda col: col.str.split(','))
.explode(['var1', 'var2'])
.reset_index()
.reindex(df.columns, axis=1))
var1 var2 var3
0 a i 1
1 b j 1
2 c k 1
3 d l 2
4 e m 2
5 f n 2
On older versions, you would move the explode column inside the apply which is a lot less performant:
(df.set_index(['var3'])
.apply(lambda col: col.str.split(',').explode())
.reset_index()
.reindex(df.columns, axis=1))
The idea is to set as the index, all the columns that should NOT be exploded, then explode the remaining columns via apply. This works well when the lists are equally sized.
How about something like this:
In [55]: pd.concat([Series(row['var2'], row['var1'].split(','))
for _, row in a.iterrows()]).reset_index()
Out[55]:
index 0
0 a 1
1 b 1
2 c 1
3 d 2
4 e 2
5 f 2
Then you just have to rename the columns
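For example, a possible renaming step (assuming the result of the concat above was stored in a variable b; the labels index and 0 are what reset_index() produces here):
b = b.rename(columns={'index': 'var1', 0: 'var2'})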
Here's a function I wrote for this common task. It's more efficient than the Series/stack methods. Column order and names are retained.
def tidy_split(df, column, sep='|', keep=False):
    """
    Split the values of a column and expand so the new DataFrame has one split
    value per row. Filters rows where the column is missing.

    Params
    ------
    df : pandas.DataFrame
        dataframe with the column to split and expand
    column : str
        the column to split and expand
    sep : str
        the string used to split the column's values
    keep : bool
        whether to retain the presplit value as its own row

    Returns
    -------
    pandas.DataFrame
        Returns a dataframe with the same columns as `df`.
    """
    indexes = list()
    new_values = list()
    df = df.dropna(subset=[column])
    for i, presplit in enumerate(df[column].astype(str)):
        values = presplit.split(sep)
        if keep and len(values) > 1:
            indexes.append(i)
            new_values.append(presplit)
        for value in values:
            indexes.append(i)
            new_values.append(value)
    new_df = df.iloc[indexes, :].copy()
    new_df[column] = new_values
    return new_df
With this function, the original question is as simple as:
tidy_split(a, 'var1', sep=',')
Similar question as: pandas: How do I split text in a column into multiple rows?
You could do:
>> a=pd.DataFrame({"var1":"a,b,c d,e,f".split(),"var2":[1,2]})
>> s = a.var1.str.split(",").apply(pd.Series, 1).stack()
>> s.index = s.index.droplevel(-1)
>> del a['var1']
>> a.join(s)
var2 var1
0 1 a
0 1 b
0 1 c
1 2 d
1 2 e
1 2 f
It is possible to split and explode the dataframe without changing its structure.
Split and expand data of specific columns
Input:
var1 var2
0 a,b,c 1
1 d,e,f 2
# Split the column into lists, then explode each list element into its own row
df['var1'] = df['var1'].str.split(',')
df = df.explode('var1')
Out:
var1 var2
0 a 1
0 b 1
0 c 1
1 d 2
1 e 2
1 f 2
Edit-1
Split and Expand of rows for Multiple columns
Filename RGB RGB_type
0 A [[0, 1650, 6, 39], [0, 1691, 1, 59], [50, 1402... [r, g, b]
1 B [[0, 1423, 16, 38], [0, 1445, 16, 46], [0, 141... [r, g, b]
Re-indexing based on the reference column and aligning the column value information with stack:
df = df.reindex(df.index.repeat(df['RGB_type'].apply(len)))
df = df.groupby('Filename').apply(lambda x:x.apply(lambda y: pd.Series(y.iloc[0])))
df.reset_index(drop=True).ffill()
Out:
Filename RGB_type Top 1 colour Top 1 frequency Top 2 colour Top 2 frequency
Filename
A 0 A r 0 1650 6 39
1 A g 0 1691 1 59
2 A b 50 1402 49 187
B 0 B r 0 1423 16 38
1 B g 0 1445 16 46
2 B b 0 1419 16 39
TL;DR
import pandas as pd
import numpy as np
def explode_str(df, col, sep):
    s = df[col]
    i = np.arange(len(s)).repeat(s.str.count(sep) + 1)
    return df.iloc[i].assign(**{col: sep.join(s).split(sep)})

def explode_list(df, col):
    s = df[col]
    i = np.arange(len(s)).repeat(s.str.len())
    return df.iloc[i].assign(**{col: np.concatenate(s)})
Demonstration
explode_str(a, 'var1', ',')
var1 var2
0 a 1
0 b 1
0 c 1
1 d 2
1 e 2
1 f 2
Let's create a new dataframe d that has lists
d = a.assign(var1=lambda d: d.var1.str.split(','))
explode_list(d, 'var1')
var1 var2
0 a 1
0 b 1
0 c 1
1 d 2
1 e 2
1 f 2
General Comments
I'll use np.arange with repeat to produce dataframe index positions that I can use with iloc.
FAQ
Why don't I use loc?
Because the index may not be unique and using loc will return every row that matches a queried index.
Why don't you use the values attribute and slice that?
When calling values, if the entirety of the dataframe is in one cohesive "block", Pandas will return a view of the array that is the "block". Otherwise Pandas will have to cobble together a new array. When cobbling, that array must be of a uniform dtype. Often that means returning an array with dtype that is object. By using iloc instead of slicing the values attribute, I alleviate myself from having to deal with that.
Why do you use assign?
When I use assign using the same column name that I'm exploding, I overwrite the existing column and maintain its position in the dataframe.
Why are the index values repeated?
By virtue of using iloc on repeated positions, the resulting index shows the same repeated pattern. One repeat for each element in the list or string.
This can be reset with reset_index(drop=True)
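For example (a small illustration of that reset, using the explode_str call from above):
explode_str(a, 'var1', ',').reset_index(drop=True)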
For Strings
I don't want to have to split the strings prematurely. So instead I count the occurrences of the sep argument assuming that if I were to split, the length of the resulting list would be one more than the number of separators.
I then use that sep to join the strings then split.
def explode_str(df, col, sep):
    s = df[col]
    i = np.arange(len(s)).repeat(s.str.count(sep) + 1)
    return df.iloc[i].assign(**{col: sep.join(s).split(sep)})
For Lists
Similar to the string case, except I don't need to count occurrences of sep because it's already split.
I use Numpy's concatenate to jam the lists together.
import pandas as pd
import numpy as np
def explode_list(df, col):
    s = df[col]
    i = np.arange(len(s)).repeat(s.str.len())
    return df.iloc[i].assign(**{col: np.concatenate(s)})
I came up with a solution for dataframes with arbitrary numbers of columns (while still only separating one column's entries at a time).
def splitDataFrameList(df, target_column, separator):
    ''' df = dataframe to split,
    target_column = the column containing the values to split
    separator = the symbol used to perform the split
    returns: a dataframe with each entry for the target column separated, with each element moved into a new row.
    The values in the other columns are duplicated across the newly divided rows.
    '''
    def splitListToRows(row, row_accumulator, target_column, separator):
        split_row = row[target_column].split(separator)
        for s in split_row:
            new_row = row.to_dict()
            new_row[target_column] = s
            row_accumulator.append(new_row)
    new_rows = []
    df.apply(splitListToRows, axis=1, args=(new_rows, target_column, separator))
    new_df = pandas.DataFrame(new_rows)
    return new_df
Here is a fairly straightforward approach that uses the split method from the pandas str accessor and then uses NumPy to flatten each row into a single array.
The corresponding values are retrieved by repeating the non-split column the correct number of times with np.repeat.
var1 = df.var1.str.split(',', expand=True).values.ravel()
var2 = np.repeat(df.var2.values, len(var1) // len(df))
pd.DataFrame({'var1': var1,
'var2': var2})
var1 var2
0 a 1
1 b 1
2 c 1
3 d 2
4 e 2
5 f 2
I have been struggling with out-of-memory issues using various ways to explode my lists, so I prepared some benchmarks to help me decide which answers to upvote. I tested five scenarios with varying proportions of the list length to the number of lists. Sharing the results below:
Time: (less is better)
Peak memory usage: (less is better)
Conclusions:
@MaxU's answer (update 2), codename concatenate, offers the best speed in almost every case, while keeping the peak memory usage low,
see @DMulligan's answer (codename stack) if you need to process lots of rows with relatively small lists and can afford increased peak memory,
the accepted @Chang's answer works well for data frames that have a few rows but very large lists.
Full details (functions and benchmarking code) are in this GitHub gist. Please note that the benchmark problem was simplified and did not include splitting of strings into the list - which most solutions performed in a similar fashion.
One-liner using split(___, expand=True) and the level and name arguments to reset_index():
>>> b = a.var1.str.split(',', expand=True).set_index(a.var2).stack().reset_index(level=0, name='var1')
>>> b
var2 var1
0 1 a
1 1 b
2 1 c
0 2 d
1 2 e
2 2 f
If you need b to look exactly like in the question, you can additionally do:
>>> b = b.reset_index(drop=True)[['var1', 'var2']]
>>> b
var1 var2
0 a 1
1 b 1
2 c 1
3 d 2
4 e 2
5 f 2
Based on the excellent @DMulligan's solution, here is a generic vectorized (no loops) function which splits a column of a dataframe into multiple rows and merges it back into the original dataframe. It also uses the great generic change_column_order function from this answer.
def change_column_order(df, col_name, index):
    cols = df.columns.tolist()
    cols.remove(col_name)
    cols.insert(index, col_name)
    return df[cols]

def split_df(dataframe, col_name, sep):
    orig_col_index = dataframe.columns.tolist().index(col_name)
    orig_index_name = dataframe.index.name
    orig_columns = dataframe.columns
    dataframe = dataframe.reset_index()  # we need a natural 0-based index for proper merge
    index_col_name = (set(dataframe.columns) - set(orig_columns)).pop()
    df_split = pd.DataFrame(
        pd.DataFrame(dataframe[col_name].str.split(sep).tolist())
        .stack().reset_index(level=1, drop=1), columns=[col_name])
    df = dataframe.drop(col_name, axis=1)
    df = pd.merge(df, df_split, left_index=True, right_index=True, how='inner')
    df = df.set_index(index_col_name)
    df.index.name = orig_index_name
    # merge adds the column to the last place, so we need to move it back
    return change_column_order(df, col_name, orig_col_index)
Example:
df = pd.DataFrame([['a:b', 1, 4], ['c:d', 2, 5], ['e:f:g:h', 3, 6]],
columns=['Name', 'A', 'B'], index=[10, 12, 13])
df
Name A B
10 a:b 1 4
12 c:d 2 5
13 e:f:g:h 3 6
split_df(df, 'Name', ':')
Name A B
10 a 1 4
10 b 1 4
12 c 2 5
12 d 2 5
13 e 3 6
13 f 3 6
13 g 3 6
13 h 3 6
Note that it preserves the original index and order of the columns. It also works with dataframes which have non-sequential index.
The string function split can take an optional boolean argument 'expand'.
Here is a solution using this argument:
(a.var1
.str.split(",",expand=True)
.set_index(a.var2)
.stack()
.reset_index(level=1, drop=True)
.reset_index()
.rename(columns={0:"var1"}))
I do appreciate the answer of "Chang She", really, but the iterrows() function takes a long time on a large dataset. I faced that issue and came to this.
# First, reset_index to make the index a column
a = a.reset_index().rename(columns={'index':'duplicated_idx'})
# Get a longer series with exploded cells to rows
series = pd.DataFrame(a['var1'].str.split(',')
                      .tolist(), index=a.duplicated_idx).stack()
# New df from series and merge with the old one
b = series.reset_index([0, 'duplicated_idx'])
b = b.rename(columns={0:'var1'})
# Optional & Advanced: In case, there are other columns apart from var1 & var2
b.merge(
a[a.columns.difference(['var1'])],
on='duplicated_idx')
# Optional: Delete the "duplicated_index"'s column, and reorder columns
b = b[a.columns.difference(['duplicated_idx'])]
One-liner using assign and explode:
col1 col2
0 a,b,c 1
1 d,e,f 2
df.assign(col1 = df.col1.str.split(',')).explode('col1', ignore_index=True)
Output:
col1 col2
0 a 1
1 b 1
2 c 1
3 d 2
4 e 2
5 f 2
Just used jiln's excellent answer from above, but needed to expand to split multiple columns. Thought I would share.
def splitDataFrameList(df, target_columns, separator):
    ''' df = dataframe to split,
    target_columns = the columns containing the values to split
    separator = the symbol used to perform the split
    returns: a dataframe with each entry for the target columns separated, with each element moved into a new row.
    The values in the other columns are duplicated across the newly divided rows.
    '''
    def splitListToRows(row, row_accumulator, target_columns, separator):
        split_rows = []
        for target_column in target_columns:
            split_rows.append(row[target_column].split(separator))
        # Separate for multiple columns
        for i in range(len(split_rows[0])):
            new_row = row.to_dict()
            for j in range(len(split_rows)):
                new_row[target_columns[j]] = split_rows[j][i]
            row_accumulator.append(new_row)
    new_rows = []
    df.apply(splitListToRows, axis=1, args=(new_rows, target_columns, separator))
    new_df = pd.DataFrame(new_rows)
    return new_df
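Hypothetical usage (the frame below is invented for illustration; note that each listed column must split into the same number of parts per row):
df = pd.DataFrame({'var1': ['a,b,c', 'd,e,f'], 'var2': ['i,j,k', 'l,m,n'], 'var3': [1, 2]})
splitDataFrameList(df, ['var1', 'var2'], ',')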
upgraded MaxU's answer with MultiIndex support
def explode(df, lst_cols, fill_value='', preserve_index=False):
    """
    usage:
        In [134]: df
        Out[134]:
        aaa myid num text
        0 10 1 [1, 2, 3] [aa, bb, cc]
        1 11 2 [] []
        2 12 3 [1, 2] [cc, dd]
        3 13 4 [] []
        In [135]: explode(df, ['num','text'], fill_value='')
        Out[135]:
        aaa myid num text
        0 10 1 1 aa
        1 10 1 2 bb
        2 10 1 3 cc
        3 11 2
        4 12 3 1 cc
        5 12 3 2 dd
        6 13 4
    """
    # make sure `lst_cols` is list-alike
    if (lst_cols is not None
            and len(lst_cols) > 0
            and not isinstance(lst_cols, (list, tuple, np.ndarray, pd.Series))):
        lst_cols = [lst_cols]
    # all columns except `lst_cols`
    idx_cols = df.columns.difference(lst_cols)
    # calculate lengths of lists
    lens = df[lst_cols[0]].str.len()
    # preserve original index values
    idx = np.repeat(df.index.values, lens)
    # create "exploded" DF
    res = (pd.DataFrame({
                col: np.repeat(df[col].values, lens)
                for col in idx_cols},
                index=idx)
             .assign(**{col: np.concatenate(df.loc[lens > 0, col].values)
                        for col in lst_cols}))
    # append those rows that have empty lists
    if (lens == 0).any():
        # at least one list in cells is empty
        res = (res.append(df.loc[lens == 0, idx_cols], sort=False)
                  .fillna(fill_value))
    # revert the original index order
    res = res.sort_index()
    # reset index if requested
    if not preserve_index:
        res = res.reset_index(drop=True)
    # if original index is MultiIndex build the dataframe from the multiindex
    if isinstance(df.index, pd.MultiIndex):
        res = res.reindex(
            index=pd.MultiIndex.from_tuples(
                res.index,
                names=['number', 'color']
            )
        )
    return res
My version of the solution to add to this collection! :-)
# Original problem
from pandas import DataFrame
import numpy as np
a = DataFrame([{'var1': 'a,b,c', 'var2': 1},
{'var1': 'd,e,f', 'var2': 2}])
b = DataFrame([{'var1': 'a', 'var2': 1},
{'var1': 'b', 'var2': 1},
{'var1': 'c', 'var2': 1},
{'var1': 'd', 'var2': 2},
{'var1': 'e', 'var2': 2},
{'var1': 'f', 'var2': 2}])
### My solution
import pandas as pd
import functools
def expand_on_cols(df, fuse_cols, delim=","):
    def expand_on_col(df, fuse_col):
        col_order = df.columns
        df_expanded = pd.DataFrame(
            df.set_index([x for x in df.columns if x != fuse_col])[fuse_col]
            .apply(lambda x: x.split(delim))
            .explode()
        ).reset_index()
        return df_expanded[col_order]
    all_expanded = functools.reduce(expand_on_col, fuse_cols, df)
    return all_expanded
assert(b.equals(expand_on_cols(a, ["var1"], delim=",")))
I have come up with the following solution to this problem:
def iter_var1(d):
    for _, row in d.iterrows():
        for v in row["var1"].split(","):
            yield (v, row["var2"])

new_a = DataFrame.from_records([i for i in iter_var1(a)],
                               columns=["var1", "var2"])
Another solution that uses the Python copy package:
import copy
new_observations = list()
def pandas_explode(df, column_to_explode):
    new_observations = list()
    for row in df.to_dict(orient='records'):
        explode_values = row[column_to_explode]
        del row[column_to_explode]
        if type(explode_values) is list or type(explode_values) is tuple:
            for explode_value in explode_values:
                new_observation = copy.deepcopy(row)
                new_observation[column_to_explode] = explode_value
                new_observations.append(new_observation)
        else:
            new_observation = copy.deepcopy(row)
            new_observation[column_to_explode] = explode_values
            new_observations.append(new_observation)
    return_df = pd.DataFrame(new_observations)
    return return_df
df = pandas_explode(df, column_name)
There are a lot of answers here, but I'm surprised no one has mentioned the built-in pandas explode function. Check out the link below:
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.explode.html#pandas.DataFrame.explode
For some reason I was unable to access that function, so I used the below code:
import pandas_explode
pandas_explode.patch()
df_zlp_people_cnt3 = df_zlp_people_cnt2.explode('people')
Above is a sample of my data. As you can see, the people column had a series of people, and I was trying to explode it. The code I have given works for list-type data, so try to get your comma-separated text data into list format. Also, since my code uses built-in functions, it is much faster than custom/apply functions.
Note: You may need to install pandas_explode with pip.
I had a similar problem; my solution was converting the dataframe to a list of dictionaries first and then doing the transformation. Here is the function:
import re
import pandas as pd
def separate_row(df, column_name):
    ls = []
    for row_dict in df.to_dict('records'):
        for word in re.split(',', row_dict[column_name]):
            row = row_dict.copy()
            row[column_name] = word
            ls.append(row)
    return pd.DataFrame(ls)
Example:
>>> from pandas import DataFrame
>>> import numpy as np
>>> a = DataFrame([{'var1': 'a,b,c', 'var2': 1},
{'var1': 'd,e,f', 'var2': 2}])
>>> a
var1 var2
0 a,b,c 1
1 d,e,f 2
>>> separate_row(a, "var1")
var1 var2
0 a 1
1 b 1
2 c 1
3 d 2
4 e 2
5 f 2
You can also change the function a bit to support separating list-type rows.
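A possible variant for list-type cells (my own sketch, not from the original answer; it assumes the column already holds lists rather than delimited strings):
def separate_list_row(df, column_name):
    ls = []
    for row_dict in df.to_dict('records'):
        # iterate the list cell directly instead of splitting a string
        for item in row_dict[column_name]:
            row = row_dict.copy()
            row[column_name] = item
            ls.append(row)
    return pd.DataFrame(ls)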
Adding a few bits and pieces from all the solutions on this page, I was able to get something like this (for someone who needs to use it right away).
Parameters to the function are df (the input dataframe) and key (the column that has the delimiter-separated string). Just replace with your delimiter if it is different from the semicolon ";".
def split_df_rows_for_semicolon_separated_key(key, df):
    df = df.set_index(df.columns.drop(key, 1).tolist())[key].str.split(';', expand=True).stack().reset_index().rename(columns={0: key}).loc[:, df.columns]
    df = df[df[key] != '']
    return df
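Hypothetical usage (note the argument order is key first, then the dataframe; 'tags' is an invented column name holding strings like 'a;b;c'):
df = split_df_rows_for_semicolon_separated_key('tags', df)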
Try:
vals = np.array(a.var1.str.split(",").values.tolist())
var = np.repeat(a.var2, vals.shape[1])
out = pd.DataFrame(np.column_stack((vals.ravel(), var)), columns=a.columns)
display(out)
var1 var2
0 a 1
1 b 1
2 c 1
3 d 2
4 e 2
5 f 2
In recent versions of pandas you can use split followed by explode:
a.assign(var1=a['var1'].str.split(',')).explode('var1')
var1 var2
0 a 1
0 b 1
0 c 1
1 d 2
1 e 2
1 f 2
A short and simple way to change the format of the column using .apply() so that it can be used by .explode():
import pandas as pd
from io import StringIO
file = StringIO(""" var1 var2
0 a,b,c 1
1 d,e,f 2""")
df = pd.read_csv(file, sep=r'\s\s+')
df['var1'] = df['var1'].apply(lambda x : str(x).split(','))
df.explode('var1')
Output:
var1 var2
0 a 1
0 b 1
0 c 1
1 d 2
1 e 2
1 f 2

Flatten dataframe cell of comma separated list (not list type) in pandas [duplicate]

I have a pandas dataframe in which one column of text strings contains comma-separated values. I want to split each CSV field and create a new row per entry (assume that CSV are clean and need only be split on ','). For example, a should become b:
In [7]: a
Out[7]:
var1 var2
0 a,b,c 1
1 d,e,f 2
In [8]: b
Out[8]:
var1 var2
0 a 1
1 b 1
2 c 1
3 d 2
4 e 2
5 f 2
So far, I have tried various simple functions, but the .apply method seems to only accept one row as return value when it is used on an axis, and I can't get .transform to work. Any suggestions would be much appreciated!
Example data:
from pandas import DataFrame
import numpy as np
a = DataFrame([{'var1': 'a,b,c', 'var2': 1},
{'var1': 'd,e,f', 'var2': 2}])
b = DataFrame([{'var1': 'a', 'var2': 1},
{'var1': 'b', 'var2': 1},
{'var1': 'c', 'var2': 1},
{'var1': 'd', 'var2': 2},
{'var1': 'e', 'var2': 2},
{'var1': 'f', 'var2': 2}])
I know this won't work because we lose DataFrame meta-data by going through numpy, but it should give you a sense of what I tried to do:
def fun(row):
letters = row['var1']
letters = letters.split(',')
out = np.array([row] * len(letters))
out['var1'] = letters
a['idx'] = range(a.shape[0])
z = a.groupby('idx')
z.transform(fun)
UPDATE 3: it makes more sense to use Series.explode() / DataFrame.explode() methods (implemented in Pandas 0.25.0 and extended in Pandas 1.3.0 to support multi-column explode) as is shown in the usage example:
for a single column:
In [1]: df = pd.DataFrame({'A': [[0, 1, 2], 'foo', [], [3, 4]],
...: 'B': 1,
...: 'C': [['a', 'b', 'c'], np.nan, [], ['d', 'e']]})
In [2]: df
Out[2]:
A B C
0 [0, 1, 2] 1 [a, b, c]
1 foo 1 NaN
2 [] 1 []
3 [3, 4] 1 [d, e]
In [3]: df.explode('A')
Out[3]:
A B C
0 0 1 [a, b, c]
0 1 1 [a, b, c]
0 2 1 [a, b, c]
1 foo 1 NaN
2 NaN 1 []
3 3 1 [d, e]
3 4 1 [d, e]
for multiple columns (for Pandas 1.3.0+):
In [4]: df.explode(['A', 'C'])
Out[4]:
A B C
0 0 1 a
0 1 1 b
0 2 1 c
1 foo 1 NaN
2 NaN 1 NaN
3 3 1 d
3 4 1 e
UPDATE 2: more generic vectorized function, which will work for multiple normal and multiple list columns
def explode(df, lst_cols, fill_value='', preserve_index=False):
# make sure `lst_cols` is list-alike
if (lst_cols is not None
and len(lst_cols) > 0
and not isinstance(lst_cols, (list, tuple, np.ndarray, pd.Series))):
lst_cols = [lst_cols]
# all columns except `lst_cols`
idx_cols = df.columns.difference(lst_cols)
# calculate lengths of lists
lens = df[lst_cols[0]].str.len()
# preserve original index values
idx = np.repeat(df.index.values, lens)
# create "exploded" DF
res = (pd.DataFrame({
col:np.repeat(df[col].values, lens)
for col in idx_cols},
index=idx)
.assign(**{col:np.concatenate(df.loc[lens>0, col].values)
for col in lst_cols}))
# append those rows that have empty lists
if (lens == 0).any():
# at least one list in cells is empty
res = (res.append(df.loc[lens==0, idx_cols], sort=False)
.fillna(fill_value))
# revert the original index order
res = res.sort_index()
# reset index if requested
if not preserve_index:
res = res.reset_index(drop=True)
return res
Demo:
Multiple list columns - all list columns must have the same # of elements in each row:
In [134]: df
Out[134]:
aaa myid num text
0 10 1 [1, 2, 3] [aa, bb, cc]
1 11 2 [] []
2 12 3 [1, 2] [cc, dd]
3 13 4 [] []
In [135]: explode(df, ['num','text'], fill_value='')
Out[135]:
aaa myid num text
0 10 1 1 aa
1 10 1 2 bb
2 10 1 3 cc
3 11 2
4 12 3 1 cc
5 12 3 2 dd
6 13 4
preserving original index values:
In [136]: explode(df, ['num','text'], fill_value='', preserve_index=True)
Out[136]:
aaa myid num text
0 10 1 1 aa
0 10 1 2 bb
0 10 1 3 cc
1 11 2
2 12 3 1 cc
2 12 3 2 dd
3 13 4
Setup:
df = pd.DataFrame({
'aaa': {0: 10, 1: 11, 2: 12, 3: 13},
'myid': {0: 1, 1: 2, 2: 3, 3: 4},
'num': {0: [1, 2, 3], 1: [], 2: [1, 2], 3: []},
'text': {0: ['aa', 'bb', 'cc'], 1: [], 2: ['cc', 'dd'], 3: []}
})
CSV column:
In [46]: df
Out[46]:
var1 var2 var3
0 a,b,c 1 XX
1 d,e,f,x,y 2 ZZ
In [47]: explode(df.assign(var1=df.var1.str.split(',')), 'var1')
Out[47]:
var1 var2 var3
0 a 1 XX
1 b 1 XX
2 c 1 XX
3 d 2 ZZ
4 e 2 ZZ
5 f 2 ZZ
6 x 2 ZZ
7 y 2 ZZ
using this little trick we can convert CSV-like column to list column:
In [48]: df.assign(var1=df.var1.str.split(','))
Out[48]:
var1 var2 var3
0 [a, b, c] 1 XX
1 [d, e, f, x, y] 2 ZZ
UPDATE: generic vectorized approach (will work also for multiple columns):
Original DF:
In [177]: df
Out[177]:
var1 var2 var3
0 a,b,c 1 XX
1 d,e,f,x,y 2 ZZ
Solution:
first let's convert CSV strings to lists:
In [178]: lst_col = 'var1'
In [179]: x = df.assign(**{lst_col:df[lst_col].str.split(',')})
In [180]: x
Out[180]:
var1 var2 var3
0 [a, b, c] 1 XX
1 [d, e, f, x, y] 2 ZZ
Now we can do this:
In [181]: pd.DataFrame({
...: col:np.repeat(x[col].values, x[lst_col].str.len())
...: for col in x.columns.difference([lst_col])
...: }).assign(**{lst_col:np.concatenate(x[lst_col].values)})[x.columns.tolist()]
...:
Out[181]:
var1 var2 var3
0 a 1 XX
1 b 1 XX
2 c 1 XX
3 d 2 ZZ
4 e 2 ZZ
5 f 2 ZZ
6 x 2 ZZ
7 y 2 ZZ
OLD answer:
Inspired by #AFinkelstein solution, i wanted to make it bit more generalized which could be applied to DF with more than two columns and as fast, well almost, as fast as AFinkelstein's solution):
In [2]: df = pd.DataFrame(
...: [{'var1': 'a,b,c', 'var2': 1, 'var3': 'XX'},
...: {'var1': 'd,e,f,x,y', 'var2': 2, 'var3': 'ZZ'}]
...: )
In [3]: df
Out[3]:
var1 var2 var3
0 a,b,c 1 XX
1 d,e,f,x,y 2 ZZ
In [4]: (df.set_index(df.columns.drop('var1',1).tolist())
...: .var1.str.split(',', expand=True)
...: .stack()
...: .reset_index()
...: .rename(columns={0:'var1'})
...: .loc[:, df.columns]
...: )
Out[4]:
var1 var2 var3
0 a 1 XX
1 b 1 XX
2 c 1 XX
3 d 2 ZZ
4 e 2 ZZ
5 f 2 ZZ
6 x 2 ZZ
7 y 2 ZZ
After painful experimentation to find something faster than the accepted answer, I got this to work. It ran around 100x faster on the dataset I tried it on.
If someone knows a way to make this more elegant, by all means please modify my code. I couldn't find a way that works without setting the other columns you want to keep as the index and then resetting the index and re-naming the columns, but I'd imagine there's something else that works.
b = DataFrame(a.var1.str.split(',').tolist(), index=a.var2).stack()
b = b.reset_index()[[0, 'var2']] # var1 variable is currently labeled 0
b.columns = ['var1', 'var2'] # renaming var1
Pandas >= 0.25
Series and DataFrame methods define a .explode() method that explodes lists into separate rows. See the docs section on Exploding a list-like column.
Since you have a list of comma separated strings, split the string on comma to get a list of elements, then call explode on that column.
df = pd.DataFrame({'var1': ['a,b,c', 'd,e,f'], 'var2': [1, 2]})
df
var1 var2
0 a,b,c 1
1 d,e,f 2
df.assign(var1=df['var1'].str.split(',')).explode('var1')
var1 var2
0 a 1
0 b 1
0 c 1
1 d 2
1 e 2
1 f 2
Note that explode only works on a single column (for now). To explode multiple columns at once, see below.
NaNs and empty lists get the treatment they deserve without you having to jump through hoops to get it right.
df = pd.DataFrame({'var1': ['d,e,f', '', np.nan], 'var2': [1, 2, 3]})
df
var1 var2
0 d,e,f 1
1 2
2 NaN 3
df['var1'].str.split(',')
0 [d, e, f]
1 []
2 NaN
df.assign(var1=df['var1'].str.split(',')).explode('var1')
var1 var2
0 d 1
0 e 1
0 f 1
1 2 # empty list entry becomes empty string after exploding
2 NaN 3 # NaN left un-touched
This is a serious advantage over ravel/repeat -based solutions (which ignore empty lists completely, and choke on NaNs).
Exploding Multiple Columns
pandas 1.3 update
df.explode works on multiple columns starting from pandas 1.3:
df = pd.DataFrame({'var1': ['a,b,c', 'd,e,f'],
'var2': ['i,j,k', 'l,m,n'],
'var3': [1, 2]})
df
var1 var2 var3
0 a,b,c i,j,k 1
1 d,e,f l,m,n 2
(df.set_index(['var3'])
.apply(lambda col: col.str.split(','))
.explode(['var1', 'var2'])
.reset_index()
.reindex(df.columns, axis=1))
var1 var2 var3
0 a i 1
1 b j 1
2 c k 1
3 d l 2
4 e m 2
5 f n 2
On older versions, you would move the explode column inside the apply which is a lot less performant:
(df.set_index(['var3'])
.apply(lambda col: col.str.split(',').explode())
.reset_index()
.reindex(df.columns, axis=1))
The idea is to set as the index, all the columns that should NOT be exploded, then explode the remaining columns via apply. This works well when the lists are equally sized.
How about something like this:
In [55]: pd.concat([Series(row['var2'], row['var1'].split(','))
for _, row in a.iterrows()]).reset_index()
Out[55]:
index 0
0 a 1
1 b 1
2 c 1
3 d 2
4 e 2
5 f 2
Then you just have to rename the columns
Here's a function I wrote for this common task. It's more efficient than the Series/stack methods. Column order and names are retained.
def tidy_split(df, column, sep='|', keep=False):
"""
Split the values of a column and expand so the new DataFrame has one split
value per row. Filters rows where the column is missing.
Params
------
df : pandas.DataFrame
dataframe with the column to split and expand
column : str
the column to split and expand
sep : str
the string used to split the column's values
keep : bool
whether to retain the presplit value as it's own row
Returns
-------
pandas.DataFrame
Returns a dataframe with the same columns as `df`.
"""
indexes = list()
new_values = list()
df = df.dropna(subset=[column])
for i, presplit in enumerate(df[column].astype(str)):
values = presplit.split(sep)
if keep and len(values) > 1:
indexes.append(i)
new_values.append(presplit)
for value in values:
indexes.append(i)
new_values.append(value)
new_df = df.iloc[indexes, :].copy()
new_df[column] = new_values
return new_df
With this function, the original question is as simple as:
tidy_split(a, 'var1', sep=',')
Similar question as: pandas: How do I split text in a column into multiple rows?
You could do:
>> a=pd.DataFrame({"var1":"a,b,c d,e,f".split(),"var2":[1,2]})
>> s = a.var1.str.split(",").apply(pd.Series, 1).stack()
>> s.index = s.index.droplevel(-1)
>> del a['var1']
>> a.join(s)
var2 var1
0 1 a
0 1 b
0 1 c
1 2 d
1 2 e
1 2 f
There is a possibility to split and explode the dataframe without changing the structure of dataframe
Split and expand data of specific columns
Input:
var1 var2
0 a,b,c 1
1 d,e,f 2
#Get the indexes which are repetative with the split
df['var1'] = df['var1'].str.split(',')
df = df.explode('var1')
Out:
var1 var2
0 a 1
0 b 1
0 c 1
1 d 2
1 e 2
1 f 2
Edit-1
Split and Expand of rows for Multiple columns
Filename RGB RGB_type
0 A [[0, 1650, 6, 39], [0, 1691, 1, 59], [50, 1402... [r, g, b]
1 B [[0, 1423, 16, 38], [0, 1445, 16, 46], [0, 141... [r, g, b]
Re indexing based on the reference column and aligning the column value information with stack
df = df.reindex(df.index.repeat(df['RGB_type'].apply(len)))
df = df.groupby('Filename').apply(lambda x:x.apply(lambda y: pd.Series(y.iloc[0])))
df.reset_index(drop=True).ffill()
Out:
Filename RGB_type Top 1 colour Top 1 frequency Top 2 colour Top 2 frequency
Filename
A 0 A r 0 1650 6 39
1 A g 0 1691 1 59
2 A b 50 1402 49 187
B 0 B r 0 1423 16 38
1 B g 0 1445 16 46
2 B b 0 1419 16 39
TL;DR
import pandas as pd
import numpy as np
def explode_str(df, col, sep):
s = df[col]
i = np.arange(len(s)).repeat(s.str.count(sep) + 1)
return df.iloc[i].assign(**{col: sep.join(s).split(sep)})
def explode_list(df, col):
s = df[col]
i = np.arange(len(s)).repeat(s.str.len())
return df.iloc[i].assign(**{col: np.concatenate(s)})
Demonstration
explode_str(a, 'var1', ',')
var1 var2
0 a 1
0 b 1
0 c 1
1 d 2
1 e 2
1 f 2
Let's create a new dataframe d that has lists
d = a.assign(var1=lambda d: d.var1.str.split(','))
explode_list(d, 'var1')
var1 var2
0 a 1
0 b 1
0 c 1
1 d 2
1 e 2
1 f 2
General Comments
I'll use np.arange with repeat to produce dataframe index positions that I can use with iloc.
FAQ
Why don't I use loc?
Because the index may not be unique and using loc will return every row that matches a queried index.
Why don't you use the values attribute and slice that?
When calling values, if the entirety of the the dataframe is in one cohesive "block", Pandas will return a view of the array that is the "block". Otherwise Pandas will have to cobble together a new array. When cobbling, that array must be of a uniform dtype. Often that means returning an array with dtype that is object. By using iloc instead of slicing the values attribute, I alleviate myself from having to deal with that.
Why do you use assign?
When I use assign using the same column name that I'm exploding, I overwrite the existing column and maintain its position in the dataframe.
Why are the index values repeat?
By virtue of using iloc on repeated positions, the resulting index shows the same repeated pattern. One repeat for each element the list or string.
This can be reset with reset_index(drop=True)
For Strings
I don't want to have to split the strings prematurely. So instead I count the occurrences of the sep argument assuming that if I were to split, the length of the resulting list would be one more than the number of separators.
I then use that sep to join the strings then split.
def explode_str(df, col, sep):
s = df[col]
i = np.arange(len(s)).repeat(s.str.count(sep) + 1)
return df.iloc[i].assign(**{col: sep.join(s).split(sep)})
For Lists
Similar as for strings except I don't need to count occurrences of sep because its already split.
I use Numpy's concatenate to jam the lists together.
import pandas as pd
import numpy as np
def explode_list(df, col):
s = df[col]
i = np.arange(len(s)).repeat(s.str.len())
return df.iloc[i].assign(**{col: np.concatenate(s)})
I came up with a solution for dataframes with arbitrary numbers of columns (while still only separating one column's entries at a time).
def splitDataFrameList(df,target_column,separator):
''' df = dataframe to split,
target_column = the column containing the values to split
separator = the symbol used to perform the split
returns: a dataframe with each entry for the target column separated, with each element moved into a new row.
The values in the other columns are duplicated across the newly divided rows.
'''
def splitListToRows(row,row_accumulator,target_column,separator):
split_row = row[target_column].split(separator)
for s in split_row:
new_row = row.to_dict()
new_row[target_column] = s
row_accumulator.append(new_row)
new_rows = []
df.apply(splitListToRows,axis=1,args = (new_rows,target_column,separator))
new_df = pandas.DataFrame(new_rows)
return new_df
Here is a fairly straightforward message that uses the split method from pandas str accessor and then uses NumPy to flatten each row into a single array.
The corresponding values are retrieved by repeating the non-split column the correct number of times with np.repeat.
var1 = df.var1.str.split(',', expand=True).values.ravel()
var2 = np.repeat(df.var2.values, len(var1) / len(df))
pd.DataFrame({'var1': var1,
'var2': var2})
var1 var2
0 a 1
1 b 1
2 c 1
3 d 2
4 e 2
5 f 2
I have been struggling with out-of-memory experience using various way to explode my lists so I prepared some benchmarks to help me decide which answers to upvote. I tested five scenarios with varying proportions of the list length to the number of lists. Sharing the results below:
Time: (less is better, click to view large version)
Peak memory usage: (less is better)
Conclusions:
#MaxU's answer (update 2), codename concatenate offers the best speed in almost every case, while keeping the peek memory usage low,
see #DMulligan's answer (codename stack) if you need to process lots of rows with relatively small lists and can afford increased peak memory,
the accepted #Chang's answer works well for data frames that have a few rows but very large lists.
Full details (functions and benchmarking code) are in this GitHub gist. Please note that the benchmark problem was simplified and did not include splitting of strings into the list - which most solutions performed in a similar fashion.
One-liner using split(___, expand=True) and the level and name arguments to reset_index():
>>> b = a.var1.str.split(',', expand=True).set_index(a.var2).stack().reset_index(level=0, name='var1')
>>> b
var2 var1
0 1 a
1 1 b
2 1 c
0 2 d
1 2 e
2 2 f
If you need b to look exactly like in the question, you can additionally do:
>>> b = b.reset_index(drop=True)[['var1', 'var2']]
>>> b
var1 var2
0 a 1
1 b 1
2 c 1
3 d 2
4 e 2
5 f 2
Based on the excellent #DMulligan's solution, here is a generic vectorized (no loops) function which splits a column of a dataframe into multiple rows, and merges it back to the original dataframe. It also uses a great generic change_column_order function from this answer.
def change_column_order(df, col_name, index):
cols = df.columns.tolist()
cols.remove(col_name)
cols.insert(index, col_name)
return df[cols]
def split_df(dataframe, col_name, sep):
orig_col_index = dataframe.columns.tolist().index(col_name)
orig_index_name = dataframe.index.name
orig_columns = dataframe.columns
dataframe = dataframe.reset_index() # we need a natural 0-based index for proper merge
index_col_name = (set(dataframe.columns) - set(orig_columns)).pop()
df_split = pd.DataFrame(
pd.DataFrame(dataframe[col_name].str.split(sep).tolist())
.stack().reset_index(level=1, drop=1), columns=[col_name])
df = dataframe.drop(col_name, axis=1)
df = pd.merge(df, df_split, left_index=True, right_index=True, how='inner')
df = df.set_index(index_col_name)
df.index.name = orig_index_name
# merge adds the column to the last place, so we need to move it back
return change_column_order(df, col_name, orig_col_index)
Example:
df = pd.DataFrame([['a:b', 1, 4], ['c:d', 2, 5], ['e:f:g:h', 3, 6]],
columns=['Name', 'A', 'B'], index=[10, 12, 13])
df
Name A B
10 a:b 1 4
12 c:d 2 5
13 e:f:g:h 3 6
split_df(df, 'Name', ':')
Name A B
10 a 1 4
10 b 1 4
12 c 2 5
12 d 2 5
13 e 3 6
13 f 3 6
13 g 3 6
13 h 3 6
Note that it preserves the original index and order of the columns. It also works with dataframes which have non-sequential index.
The string function split can take an option boolean argument 'expand'.
Here is a solution using this argument:
(a.var1
.str.split(",",expand=True)
.set_index(a.var2)
.stack()
.reset_index(level=1, drop=True)
.reset_index()
.rename(columns={0:"var1"}))
I do appreciate the answer of "Chang She", really, but the iterrows() function takes long time on large dataset. I faced that issue and I came to this.
# First, reset_index to make the index a column
a = a.reset_index().rename(columns={'index':'duplicated_idx'})
# Get a longer series with exploded cells to rows
series = pd.DataFrame(a['var1'].str.split('/')
.tolist(), index=a.duplicated_idx).stack()
# New df from series and merge with the old one
b = series.reset_index([0, 'duplicated_idx'])
b = b.rename(columns={0:'var1'})
# Optional & Advanced: In case, there are other columns apart from var1 & var2
b.merge(
a[a.columns.difference(['var1'])],
on='duplicated_idx')
# Optional: Delete the "duplicated_index"'s column, and reorder columns
b = b[a.columns.difference(['duplicated_idx'])]
One-liner using assign and explode. Given an input frame like:
col1 col2
0 a,b,c 1
1 d,e,f 2
df.assign(col1 = df.col1.str.split(',')).explode('col1', ignore_index=True)
Output:
col1 col2
0 a 1
1 b 1
2 c 1
3 d 2
4 e 2
5 f 2
Just used jiln's excellent answer from above, but needed to expand it to split multiple columns. Thought I would share.
import pandas as pd

def splitDataFrameList(df, target_columns, separator):
    ''' df = dataframe to split,
    target_columns = list of the columns containing the values to split
    separator = the symbol used to perform the split
    returns: a dataframe with each entry for the target columns separated, with each element moved into a new row.
    The values in the other columns are duplicated across the newly divided rows.
    '''
    def splitListToRows(row, row_accumulator, target_columns, separator):
        split_rows = []
        for target_column in target_columns:
            split_rows.append(row[target_column].split(separator))
        # build one new row per split element, copying the i-th piece of each target column
        for i in range(len(split_rows[0])):
            new_row = row.to_dict()
            for j in range(len(split_rows)):
                new_row[target_columns[j]] = split_rows[j][i]
            row_accumulator.append(new_row)
    new_rows = []
    df.apply(splitListToRows, axis=1, args=(new_rows, target_columns, separator))
    new_df = pd.DataFrame(new_rows)
    return new_df
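For instance, a quick call might look like this (hypothetical column names; both target columns are assumed to split into the same number of parts per row):
import pandas as pd

df = pd.DataFrame({'name': ['a|b', 'c|d'],
                   'code': ['1|2', '3|4'],
                   'group': ['x', 'y']})

print(splitDataFrameList(df, ['name', 'code'], '|'))
#   name code group
# 0    a    1     x
# 1    b    2     x
# 2    c    3     y
# 3    d    4     y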
Upgraded MaxU's answer with MultiIndex support:
import numpy as np
import pandas as pd

def explode(df, lst_cols, fill_value='', preserve_index=False):
    """
    usage:
        In [134]: df
        Out[134]:
           aaa  myid        num          text
        0   10     1  [1, 2, 3]  [aa, bb, cc]
        1   11     2         []            []
        2   12     3     [1, 2]      [cc, dd]
        3   13     4         []            []

        In [135]: explode(df, ['num','text'], fill_value='')
        Out[135]:
           aaa  myid num text
        0   10     1   1   aa
        1   10     1   2   bb
        2   10     1   3   cc
        3   11     2
        4   12     3   1   cc
        5   12     3   2   dd
        6   13     4
    """
    # make sure `lst_cols` is list-alike
    if (lst_cols is not None
            and len(lst_cols) > 0
            and not isinstance(lst_cols, (list, tuple, np.ndarray, pd.Series))):
        lst_cols = [lst_cols]
    # all columns except `lst_cols`
    idx_cols = df.columns.difference(lst_cols)
    # calculate lengths of lists
    lens = df[lst_cols[0]].str.len()
    # preserve original index values
    idx = np.repeat(df.index.values, lens)
    res = (pd.DataFrame({
                col: np.repeat(df[col].values, lens)
                for col in idx_cols},
                index=idx)
             .assign(**{col: np.concatenate(df.loc[lens > 0, col].values)
                        for col in lst_cols}))
    # bring back those rows that have empty lists
    if (lens == 0).any():
        # at least one list in the cells is empty
        res = (pd.concat([res, df.loc[lens == 0, idx_cols]], sort=False)
                 .fillna(fill_value))
    # revert the original index order
    res = res.sort_index()
    # if the original index is a MultiIndex, rebuild it from the tuple values
    # before (optionally) resetting it
    if isinstance(df.index, pd.MultiIndex):
        res.index = pd.MultiIndex.from_tuples(res.index, names=df.index.names)
    # reset index if requested
    if not preserve_index:
        res = res.reset_index(drop=True)
    return res
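A small illustration of the MultiIndex path (hypothetical data; preserve_index=True keeps the original two-level index):
import pandas as pd

idx = pd.MultiIndex.from_tuples([(1, 'red'), (2, 'blue')],
                                names=['number', 'color'])
df = pd.DataFrame({'aaa': [10, 11],
                   'num': [[1, 2], [3]],
                   'text': [['aa', 'bb'], ['cc']]}, index=idx)

# each list element becomes its own row; the (number, color) index values are repeated
print(explode(df, ['num', 'text'], preserve_index=True))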
My version of the solution to add to this collection! :-)
# Original problem
from pandas import DataFrame
import numpy as np
a = DataFrame([{'var1': 'a,b,c', 'var2': 1},
{'var1': 'd,e,f', 'var2': 2}])
b = DataFrame([{'var1': 'a', 'var2': 1},
{'var1': 'b', 'var2': 1},
{'var1': 'c', 'var2': 1},
{'var1': 'd', 'var2': 2},
{'var1': 'e', 'var2': 2},
{'var1': 'f', 'var2': 2}])
### My solution
import pandas as pd
import functools
def expand_on_cols(df, fuse_cols, delim=","):
    def expand_on_col(df, fuse_col):
        col_order = df.columns
        df_expanded = pd.DataFrame(
            df.set_index([x for x in df.columns if x != fuse_col])[fuse_col]
            .apply(lambda x: x.split(delim))
            .explode()
        ).reset_index()
        return df_expanded[col_order]
    all_expanded = functools.reduce(expand_on_col, fuse_cols, df)
    return all_expanded
assert(b.equals(expand_on_cols(a, ["var1"], delim=",")))
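Since the columns are expanded one after another via reduce, passing two delimited columns yields a cross product per original row; a small hypothetical illustration:
import pandas as pd

df2 = pd.DataFrame([{'tags': 'x,y', 'ids': '1,2', 'key': 'k1'}])

print(expand_on_cols(df2, ['tags', 'ids']))
#   tags ids key
# 0    x   1  k1
# 1    x   2  k1
# 2    y   1  k1
# 3    y   2  k1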
I have come up with the following solution to this problem:
def iter_var1(d):
    # yield one (value, var2) pair per comma-separated element of var1
    for _, row in d.iterrows():
        for v in row["var1"].split(","):
            yield (v, row["var2"])

new_a = DataFrame.from_records([i for i in iter_var1(a)],
                               columns=["var1", "var2"])
Another solution that uses the Python copy package:
import copy
import pandas as pd

def pandas_explode(df, column_to_explode):
    new_observations = list()
    for row in df.to_dict(orient='records'):
        explode_values = row[column_to_explode]
        del row[column_to_explode]
        if type(explode_values) is list or type(explode_values) is tuple:
            for explode_value in explode_values:
                new_observation = copy.deepcopy(row)
                new_observation[column_to_explode] = explode_value
                new_observations.append(new_observation)
        else:
            new_observation = copy.deepcopy(row)
            new_observation[column_to_explode] = explode_values
            new_observations.append(new_observation)
    return_df = pd.DataFrame(new_observations)
    return return_df

df = pandas_explode(df, column_name)
There are a lot of answers here, but I'm surprised no one has mentioned the built-in pandas explode function. Check out the link below:
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.explode.html#pandas.DataFrame.explode
For some reason I was unable to access that function, so I used the code below:
import pandas_explode
pandas_explode.patch()
df_zlp_people_cnt3 = df_zlp_people_cnt2.explode('people')
In my data the people column held a list of people, and I was trying to explode it. This works for list-type data, so try to get your comma-separated text into list format first. Also, since it uses built-in functions, it is much faster than custom/apply functions.
Note: You may need to install pandas_explode with pip.
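If your pandas version is 0.25 or newer, the built-in method should work directly without the pandas_explode package; a minimal sketch for the question's comma-separated data (column names taken from the original example):
import pandas as pd

a = pd.DataFrame({'var1': ['a,b,c', 'd,e,f'], 'var2': [1, 2]})

# split the text into lists, then let the built-in explode create one row per element
out = a.assign(var1=a['var1'].str.split(',')).explode('var1').reset_index(drop=True)
print(out)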
I had a similar problem; my solution was converting the dataframe to a list of dictionaries first and then doing the transformation. Here is the function:
import re
import pandas as pd

def separate_row(df, column_name):
    ls = []
    for row_dict in df.to_dict('records'):
        for word in re.split(',', row_dict[column_name]):
            row = row_dict.copy()
            row[column_name] = word
            ls.append(row)
    return pd.DataFrame(ls)
Example:
>>> from pandas import DataFrame
>>> import numpy as np
>>> a = DataFrame([{'var1': 'a,b,c', 'var2': 1},
{'var1': 'd,e,f', 'var2': 2}])
>>> a
var1 var2
0 a,b,c 1
1 d,e,f 2
>>> separate_row(a, "var1")
var1 var2
0 a 1
1 b 1
2 c 1
3 d 2
4 e 2
5 f 2
You can also change the function a bit to support separating list-type rows; see the sketch below.
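Here is one possible adaptation (an untested sketch; the helper name separate_row_listlike is hypothetical) for cells that already hold lists or tuples instead of delimited strings:
import pandas as pd

def separate_row_listlike(df, column_name):
    # same idea as above, but the cell is assumed to already contain a list/tuple
    ls = []
    for row_dict in df.to_dict('records'):
        values = row_dict[column_name]
        if not isinstance(values, (list, tuple)):
            values = [values]
        for item in values:
            row = row_dict.copy()
            row[column_name] = item
            ls.append(row)
    return pd.DataFrame(ls)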
After combining bits and pieces from all the solutions on this page, I was able to get something like this (for someone who needs to use it right away).
The parameters to the function are df (the input dataframe) and key (the column that has the delimiter-separated string). Just replace the delimiter if yours is different from the semicolon ";".
def split_df_rows_for_semicolon_separated_key(key, df):
    df = (df.set_index(df.columns.drop(key).tolist())[key]
            .str.split(';', expand=True)
            .stack()
            .reset_index()
            .rename(columns={0: key})
            .loc[:, df.columns])
    df = df[df[key] != '']
    return df
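A quick sanity check with hypothetical column names:
import pandas as pd

df = pd.DataFrame({'codes': ['a;b;c', 'd'], 'other': [1, 2]})
print(split_df_rows_for_semicolon_separated_key('codes', df))
#   codes  other
# 0     a      1
# 1     b      1
# 2     c      1
# 3     d      2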
Try the following NumPy-based approach (note that it assumes every row of var1 splits into the same number of elements):
vals = np.array(a.var1.str.split(",").values.tolist())
var = np.repeat(a.var2, vals.shape[1])
out = pd.DataFrame(np.column_stack((vals.ravel(), var)), columns=a.columns)
display(out)
  var1 var2
0    a    1
1    b    1
2    c    1
3    d    2
4    e    2
5    f    2
In recent versions of pandas you can use split followed by explode:
a.assign(var1=a['var1'].str.split(',')).explode('var1')
Output:
var1 var2
0 a 1
0 b 1
0 c 1
1 d 2
1 e 2
1 f 2
A short and simple way to change the format of the column using .apply() so that it can be used by .explode():
import pandas as pd
from io import StringIO
file = StringIO(""" var1 var2
0 a,b,c 1
1 d,e,f 2""")
df = pd.read_csv(file, sep=r'\s\s+')
df['var1'] = df['var1'].apply(lambda x : str(x).split(','))
df.explode('var1')
Output:
var1 var2
0 a 1
0 b 1
0 c 1
1 d 2
1 e 2
1 f 2

Getting all rows where the entry in column 'C' is larger than the preceding entry in column 'C'

How can I select all rows of a data frame where a condition on a column is met, when that condition involves the relationship between every two consecutive entries of that column? To give a specific example, let's say I have a DataFrame:
>>>df = pd.DataFrame({'A': [ 1, 2, 3, 4],
'B':['spam', 'ham', 'egg', 'foo'],
'C':[4, 5, 3, 4]})
>>> df
A B C
0 1 spam 4
1 2 ham 5
2 3 egg 3
3 4 foo 4
>>>df2 = df[ return every row of df where C[i] > C[i-1] ]
>>> df2
A B C
1 2 ham 5
3 4 foo 4
There is plenty of great information about slicing and indexing in the pandas docs and here, but this is a bit more complicated, I think. I could also be going about it wrong. What I'm looking for are the rows of data where the value stored in C is no longer monotonically declining.
Any help is appreciated!
Use boolean indexing, comparing with the shifted column values:
print (df[df['C'] > df['C'].shift()])
A B C
1 2 ham 5
3 4 foo 4
Detail:
print (df['C'] > df['C'].shift())
0 False
1 True
2 False
3 True
Name: C, dtype: bool
Equivalently, to get the rows where the values are no longer monotonically declining, compare the diff of the column:
print (df[df['C'].diff() > 0])
A B C
1 2 ham 5
3 4 foo 4

Resources