combine rows of single column over multiple .csv files in pandas - python-3.x

I have a bunch of .csv files with the same column headers and data types in the columns.
c1 c2 c3
1 5 words
2 6 words
3 7 words
4 8 words
Is there a way to combine all the text in c3 within each .csv file and then combine the results into one csv?
I combined them this way
from glob import iglob
import pandas as pd

path = r'C:\\Users\\...\**\*.csv'
all_rec = iglob(path, recursive=True)
dataframes = (pd.read_csv(f) for f in all_rec)
big_dataframe = pd.concat(dataframes, ignore_index=True)
I'm not sure how to combine the text rows first and then bring them together.

There are many ways to do it. One way:
from glob import iglob
import pandas as pd

path = r'C:\\Users\\...\**\*.csv'
all_rec = iglob(path, recursive=True)
# Extract only the c3 column from each file
dataframes = {f: pd.read_csv(f, usecols=['c3']) for f in all_rec}
# Group by file, then join the text rows of each dataframe into one string
big_dataframe = pd.concat(dataframes).groupby(level=0)['c3'] \
    .apply(lambda x: ' '.join(x.tolist())).reset_index(drop=True)
Output:
>>> big_dataframe
0 words words words words
1 words2 words2 words2 words2
2 words3 words3 words3 words3
Name: c3, dtype: object
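If you then want that combined column written back out as a single csv, which is what the question asks for, a minimal sketch (the output file name combined.csv is an assumption):
# write the combined Series to one csv file (hypothetical output name)
big_dataframe.to_frame(name='c3').to_csv('combined.csv', index=False)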

Related

How to split a pandas column into multiple columns [duplicate]

I load some machine learning data from a CSV file. The first 2 columns are observations and the remaining columns are features.
Currently, I do the following:
data = pandas.read_csv('mydata.csv')
which gives something like:
data = pandas.DataFrame(np.random.rand(10,5), columns = list('abcde'))
I'd like to slice this dataframe in two dataframes: one containing the columns a and b and one containing the columns c, d and e.
It is not possible to write something like
observations = data[:'c']
features = data['c':]
I'm not sure what the best method is. Do I need a pd.Panel?
By the way, I find dataframe indexing pretty inconsistent: data['a'] is permitted, but data[0] is not. On the other hand, data['a':] is not permitted but data[0:] is.
Is there a practical reason for this? This is really confusing if columns are indexed by Int, given that data[0] != data[0:1]
2017 Answer - pandas 0.20: .ix is deprecated. Use .loc
See the deprecation in the docs
.loc uses label based indexing to select both rows and columns. The labels being the values of the index or the columns. Slicing with .loc includes the last element.
Let's assume we have a DataFrame with the following columns:
foo, bar, quz, ant, cat, sat, dat.
# selects all rows and all columns beginning at 'foo' up to and including 'sat'
df.loc[:, 'foo':'sat']
# foo bar quz ant cat sat
.loc accepts the same slice notation that Python lists do, for both rows and columns. Slice notation is start:stop:step
# slice from 'foo' to 'cat' by every 2nd column
df.loc[:, 'foo':'cat':2]
# foo quz cat
# slice from the beginning to 'bar'
df.loc[:, :'bar']
# foo bar
# slice from 'quz' to the end by 3
df.loc[:, 'quz'::3]
# quz sat
# attempt from 'sat' to 'bar'
df.loc[:, 'sat':'bar']
# no columns returned
# slice from 'sat' to 'bar'
df.loc[:, 'sat':'bar':-1]
# sat cat ant quz bar
# slice notation is syntactic sugar for the slice function
# slice from 'quz' to the end by 2 with slice function
df.loc[:, slice('quz',None, 2)]
# quz cat dat
# select specific columns with a list
# select columns foo, bar and dat
df.loc[:, ['foo','bar','dat']]
# foo bar dat
You can slice by rows and columns. For instance, if you have 5 rows with labels v, w, x, y, z
# slice from 'w' to 'y' and 'foo' to 'ant' by 3
df.loc['w':'y', 'foo':'ant':3]
# foo ant
# w
# x
# y
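For purely positional slicing, .iloc follows the usual Python slice rules instead, so the stop position is excluded; a quick sketch on the same example columns:
# positions 0, 1 and 2 -> foo, bar, quz (stop position 3 is excluded)
df.iloc[:, 0:3]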
Note: .ix has been deprecated since Pandas v0.20. You should instead use .loc or .iloc, as appropriate.
The DataFrame.ix index is what you want to be accessing. It's a little confusing (I agree that Pandas indexing is perplexing at times!), but the following seems to do what you want:
>>> df = DataFrame(np.random.rand(4,5), columns = list('abcde'))
>>> df.ix[:,'b':]
b c d e
0 0.418762 0.042369 0.869203 0.972314
1 0.991058 0.510228 0.594784 0.534366
2 0.407472 0.259811 0.396664 0.894202
3 0.726168 0.139531 0.324932 0.906575
where .ix[row slice, column slice] is what is being interpreted. More on Pandas indexing here: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-advanced
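For reference, the exact split asked for in the question can be written with .loc in current pandas; a minimal sketch, assuming the example frame with columns a to e:
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(4, 5), columns=list('abcde'))
observations = df.loc[:, :'b']   # columns a and b
features = df.loc[:, 'c':]       # columns c, d and e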
Let's use the titanic dataset from the seaborn package as an example
# Load dataset (pip install seaborn)
>> import seaborn as sns  # seaborn.apionly has been removed in newer seaborn versions
>> titanic = sns.load_dataset('titanic')
using the column names
>> titanic.loc[:,['sex','age','fare']]
using the column indices
>> titanic.iloc[:,[2,3,6]]
using .ix (deprecated; for pandas versions older than 0.20)
>> titanic.ix[:,['sex','age','fare']]
or
>> titanic.ix[:,[2,3,6]]
using the reindex method
>> titanic.reindex(columns=['sex','age','fare'])
Also, given a DataFrame
data
as in your example, if you would like to extract columns a and d only (i.e. the 1st and the 4th column), the iloc method of the pandas DataFrame is what you need and can be used very effectively. All you need to know are the indices of the columns you would like to extract. For example:
>>> data.iloc[:,[0,3]]
will give you
a d
0 0.883283 0.100975
1 0.614313 0.221731
2 0.438963 0.224361
3 0.466078 0.703347
4 0.955285 0.114033
5 0.268443 0.416996
6 0.613241 0.327548
7 0.370784 0.359159
8 0.692708 0.659410
9 0.806624 0.875476
You can slice along the columns of a DataFrame by referring to the names of each column in a list, like so:
data = pandas.DataFrame(np.random.rand(10,5), columns = list('abcde'))
data_ab = data[list('ab')]
data_cde = data[list('cde')]
And if you came here looking to slice two ranges of columns and combine them together (like me) you can do something like
op = df[list(df.columns[0:899]) + list(df.columns[3593:])]
print(op)
This will create a new dataframe with the first 899 columns (positions 0 to 898) and all columns from position 3593 onward (assuming you have some 4000 columns in your data set).
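A possible alternative, not from the original answer, is NumPy's r_ helper, which builds the combined positional index for .iloc; a sketch assuming the same two ranges:
import numpy as np
# combine positions 0-898 with 3593 onward into one integer index for .iloc
op = df.iloc[:, np.r_[0:899, 3593:len(df.columns)]]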
Here's how you could use different methods to do selective column slicing, including label-based, index-based and range-based column slicing.
In [37]: import pandas as pd
In [38]: import numpy as np
In [43]: df = pd.DataFrame(np.random.rand(4,7), columns = list('abcdefg'))
In [44]: df
Out[44]:
a b c d e f g
0 0.409038 0.745497 0.890767 0.945890 0.014655 0.458070 0.786633
1 0.570642 0.181552 0.794599 0.036340 0.907011 0.655237 0.735268
2 0.568440 0.501638 0.186635 0.441445 0.703312 0.187447 0.604305
3 0.679125 0.642817 0.697628 0.391686 0.698381 0.936899 0.101806
In [45]: df.loc[:, ["a", "b", "c"]] ## label based selective column slicing
Out[45]:
a b c
0 0.409038 0.745497 0.890767
1 0.570642 0.181552 0.794599
2 0.568440 0.501638 0.186635
3 0.679125 0.642817 0.697628
In [46]: df.loc[:, "a":"c"] ## label based column ranges slicing
Out[46]:
a b c
0 0.409038 0.745497 0.890767
1 0.570642 0.181552 0.794599
2 0.568440 0.501638 0.186635
3 0.679125 0.642817 0.697628
In [47]: df.iloc[:, 0:3] ## index based column ranges slicing
Out[47]:
a b c
0 0.409038 0.745497 0.890767
1 0.570642 0.181552 0.794599
2 0.568440 0.501638 0.186635
3 0.679125 0.642817 0.697628
### with 2 different column ranges, index based slicing:
In [49]: df[df.columns[0:1].tolist() + df.columns[1:3].tolist()]
Out[49]:
a b c
0 0.409038 0.745497 0.890767
1 0.570642 0.181552 0.794599
2 0.568440 0.501638 0.186635
3 0.679125 0.642817 0.697628
Another way to get a subset of columns from your DataFrame, assuming you want all the rows, would be to do:
data[['a','b']] and data[['c','d','e']]
If you want to use numerical column indexes you can do:
data[data.columns[:2]] and data[data.columns[2:]]
These two are equivalent:
>>> print(df2.loc[140:160,['Relevance','Title']])
>>> print(df2.ix[140:160,[3,7]])
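Since .ix is removed in modern pandas, the positional form of the same selection would use .iloc; a sketch assuming a default integer index (note that .iloc excludes the stop, so the row slice ends at 161):
>>> print(df2.iloc[140:161, [3, 7]])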
If the data frame looks like this:
group name count
fruit apple 90
fruit banana 150
fruit orange 130
vegetable broccoli 80
vegetable kale 70
vegetable lettuce 125
and the desired output looks like
group name count
0 fruit apple 90
1 fruit banana 150
2 fruit orange 130
you can use the logical operator np.logical_not:
df[np.logical_not(df['group'] == 'vegetable')]
More about these functions:
https://docs.scipy.org/doc/numpy-1.13.0/reference/routines.logic.html
Other logical operators:
logical_and(x1, x2, /[, out, where, ...]): Compute the truth value of x1 AND x2 element-wise.
logical_or(x1, x2, /[, out, where, ...]): Compute the truth value of x1 OR x2 element-wise.
logical_not(x, /[, out, where, ...]): Compute the truth value of NOT x element-wise.
logical_xor(x1, x2, /[, out, where, ...]): Compute the truth value of x1 XOR x2 element-wise.
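For example, combining two conditions on the sample frame above with np.logical_and; a sketch using the group and count columns from the example:
import numpy as np
# rows that are fruit and have a count above 100 -> banana and orange
df[np.logical_and(df['group'] == 'fruit', df['count'] > 100)]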
You can use the truncate method:
df = pd.DataFrame(np.random.rand(10, 5), columns = list('abcde'))
df_ab = df.truncate(before='a', after='b', axis=1)
df_cde = df.truncate(before='c', axis=1)

Pandas: new column using data from multiple other files

I would like to add a new column in a pandas dataframe df, filled with data that are in multiple other files.
Say my df is like this:
Sample Pos
A 5602
A 3069483
B 51948
C 231
And I have three files A_depth-file.txt, B_depth-file.txt, C_depth-file.txt like this (showing A_depth-file.txt):
Pos Depth
1 31
2 33
3 31
... ...
5602 52
... ...
3069483 40
The desired output df would have a new column Depth as follows:
Sample Pos Depth
A 5602 52
A 3069483 40
B 51948 32
C 231 47
I have a method that works but it takes about 20 minutes to fill a df with 712 lines, searching files of ~4 million lines (=positions). Would anyone know a better/faster way to do this?
The code I am using now is:
import pandas as pd
from io import StringIO

with open("mydf.txt") as f:
    next(f)
    List = []
    for line in f:
        df = pd.read_fwf(StringIO(line), header=None)
        df.rename(columns = {df.columns[1]: "Pos"}, inplace=True)
        f2basename = df.iloc[:, 0].values[0]
        f2 = f2basename + "_depth-file.txt"
        df2 = pd.read_csv(f2, sep='\t')
        df = pd.merge(df, df2, on="Pos", how="left")
        List.append(df)
df = pd.concat(List, sort=False)
with open("mydf.txt") as f: to open the file to which I wish to add data
next(f) to pass the header
List=[] to create a new empty array called List
for line in f: to go over mydf.txt line by line and reading them with df = pd.read_fwf(StringIO(line), header=None)
df.rename(columns = {df.columns[1]: "Pos"}, inplace=True) to rename lost header name for Pos column, used later when merging line with associated file f2
f2basename = df.iloc[:, 0].values[0] getting basename of associated file f2 based on 1st column of mydf.txt
f2 = f2basename + "_depth-file.txt"to get full associated file f2 name
df2 = pd.read_csv(f2, sep='\t') to read file f2
df = pd.merge(df, df2, on="Pos", how="left")to merge the two files on column Pos, essentially adding Depth column to mydf.txt
List.append(df)adding modified line to the array List
df = pd.concat(List, sort=False) to concatenate elements of the List array into a dataframe df
Additional NOTES
In reality, I may need to search not only three files but several hundreds.
I didn't test the execution time, but it should be faster if you read your 'mydf.txt' file into a dataframe too, using read_csv, and then use groupby and groupby.apply.
If you know in advance that you have 3 samples and 3 relative files storing the depth, you can make a dictionary to read and store the three respective dataframes in advance and use them when needed.
df = pd.read_csv('mydf.txt', sep='\s+')
files = {basename : pd.read_csv(basename + "_depth-file.txt", sep='\s+') for basename in ['A', 'B', 'C']}
res = df.groupby('Sample').apply(lambda x : pd.merge(x, files[x.name], on="Pos", how="left"))
The final res would look like:
Sample Pos Depth
Sample
A 0 A 5602 52.0
1 A 3069483 40.0
B 0 B 51948 NaN
C 0 C 231 NaN
There are NaN values because I am using the sample provided and I don't have files for B and C (I used a copy of A), so values are missing. Provided that your files contain a 'Depth' for each 'Pos' you should not get any NaN.
To get rid of the multiindex made by groupby you can do:
res.reset_index(drop=True, inplace=True)
and res becomes:
Sample Pos Depth
0 A 5602 52.0
1 A 3069483 40.0
2 B 51948 NaN
3 C 231 NaN
EDIT after comments
Since you have a lot of files, you can use the following solution: same idea, but it does not require reading all the files in advance. Each file will be read when needed.
def merging_depth(x):
    td = pd.read_csv(x.name + "_depth-file.txt", sep='\s+')
    return pd.merge(x, td, on="Pos", how="left")

res = df.groupby('Sample').apply(merging_depth)
The result is the same.

Detect end of file (cols & rows) in dataframe and delete all extra

BACKGROUND: I have a large excel file converted to .csv. Need to be able to detect the end of the file/dataframe and delete all rows and columns after that. The data has the following format (approx 100 cols and 200 rows):
>>>>>Spec. No Text ..... LastColumnName UnnamedN1 UnnammedN2 UnnamedN3 UnnamedN4 .....
0 (some text) (some text) 0
1 (some text) (some text) 2
2 (some text) (some text)
3
4
5
(1) The # of columns and the column titles change with every file. However it is certain that the last column with a name + following 2 columns are the bounds on the column axis even if they contain no values (in this case UnnamedN4 and N5 would be deleted).
(2) Delete all empty rows after the last one with (some text) in Spec No + Text (in this case rows 3, 4, 5 would be deleted). This will always have some text in the last necessary/relevant row.
ALREADY TRIED: Looked at several posts on SO but couldn't find a complete solution for the problem.
EXPECTED: New dataframe that looks like -
>>>>>Spec. No Text ..... LastColumnName UnnamedN1 UnnammedN2
0 (some text) (some text) 0
1 (some text) (some text) 2
2 (some text) (some text)
Python novice, seeking some help and guidance.
Use dropna:
df = df.dropna(how='all')
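If the trailing unnamed columns are also completely empty, the same idea works along the column axis; a minimal sketch:
# drop columns that are entirely NaN
df = df.dropna(how='all', axis=1)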
I have been able to solve this for my particular case of data. It is an inelegant and roundabout way of solving it, but it addresses my problem.
Posting in case the code can help/inspire others for their own projects.
(EoF - End of File)
# Determining the title of the last relevant column from df
x = df.columns[len(df.columns)-1]
# Getting the index of the last relevant column
df.columns.get_loc(x)
# Deleting all columns after relevant column + 2 (my requirement)
# Returns dataframe with reduced # of columns
df1 = df.drop(list(df)[df.columns.get_loc(x)+3 : len(df.columns)], axis=1)
df_cols = len(df.columns)
cols_1 = [1]
cols_2 = [9]
df_1 = df[df.columns[cols_1]] # dataframe with column 1
df_2 = df[df.columns[cols_2]] # dataframe with column 9
df_3 = df_1.join([df_2]) # dataframe with columns 1 and 9
df_4 = df[df.columns[25:df_cols]] # dataframe with columns 25 (my requirement) to EoF
df_5 = df_3.join([df_4]) # dataframe with columns 1 and 9 joined to columns 25 to EoF
df_5
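For the row side of the problem (dropping everything after the last row that still has text), one possible sketch; the column names 'Spec. No' and 'Text' are assumed from the sample in the question:
# index of the last row with any text in the first two columns (column names assumed)
last_valid = df1[['Spec. No', 'Text']].dropna(how='all').index.max()
df1 = df1.loc[:last_valid]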

Merge data frames based on column with different rows

I have multiple csv files that I read into individual data frames based on their name in the directory, like so
# ask user for path
path = input('Enter the path for the csv files: ')
os.chdir(path)
# loop over filenames and read into individual dataframes
for fname in os.listdir(path):
    if fname.endswith('Demo.csv'):
        demoRaw = pd.read_csv(fname, encoding = 'utf-8')
    if fname.endswith('Key2.csv'):
        keyRaw = pd.read_csv(fname, encoding = 'utf-8')
Then I filter to only keep certain columns
# filter to keep desired columns only
demo = demoRaw.filter(['Key', 'Sex', 'Race', 'Age'], axis=1)
key = keyRaw.filter(['Key', 'Key', 'Age'], axis=1)
Then I create a list of the above dataframes and use reduce to merge them on Key
# create list of data frames for combined sheet
dfs = [demo, key]
# merge the list of data frames on the Key
combined = reduce(lambda left,right: pd.merge(left,right,on='Key'), dfs)
Then I drop the auto generated column, create an Excel writer and write to a csv
# drop the auto generated index column
combined.set_index('RecordKey', inplace=True)
# create a Pandas Excel writer using XlsxWriter as the engine.
writer = pd.ExcelWriter('final.xlsx', engine='xlsxwriter')
# write to csv
combined.to_excel(writer, sheet_name='Combined')
meds.to_excel(writer, sheet_name='Meds')
# Close the Pandas Excel writer and output the Excel file.
writer.save()
The problem is some files have keys that aren't in others. For example
Demo file
Key Sex Race Age
1 M W 52
2 F B 25
3 M L 78
Key file
Key Key2 Age
1 7325 52
2 4783 25
3 1367 78
4 9435 21
5 7247 65
Right now, it will only include rows if there is a matching key in each (in other words it just leaves out the rows with keys not in the other files). How can I combine all rows from all files, even if keys don't match? So the end result will look like this
Key Sex Race Age Key2 Age
1 M W 52 7325 52
2 F B 25 4783 25
3 M L 78 1367 78
4 9435 21
5 7247 65
I don't care if the empty cells are blanks, NaN, #N/A, etc. Just as long as I can identify them.
Replace
combined = reduce(lambda left,right: pd.merge(left,right,on='Key'), dfs)
with:
combined = pd.merge(demo, key, how='outer', on='Key')
You have to specify how='outer' to join on the full set of keys from both Demo and Key.
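If you want to keep the reduce pattern so it still works for more than two frames, a sketch of the same idea:
from functools import reduce
# outer merge keeps rows whose Key appears in only some of the frames
combined = reduce(lambda left, right: pd.merge(left, right, on='Key', how='outer'), dfs)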

Appending values to a column in a loop

I have various files containing data. I want to extract one specific column from each file and create a new dataframe with one column containing all the extracted data.
So for example I have 3 files:
A B C
1 2 3
4 5 6
A B C
7 8 9
8 7 6
A B C
5 4 3
2 1 0
The new dataframe should only contain the values from column C:
C
3
6
9
6
3
0
So the column of the first file should be copied to the new dataframe, and the column from the second file should be appended to it.
My code looks like this so far:
import pandas as pd
import glob

for filename in glob.glob('*.dat'):
    df = pd.read_csv(filename, delimiter="\t", header=6)
    df1 = df["Bias"]
    print(df)
Now df1 is overwritten in each loop step. Would it be a good idea to create a temporary dataframe in each loop step and then copy the data to the new dataframe?
Any input is appreciated!
Use a list comprehension, or a for loop with append, to build a list of DataFrames; if you need only some columns, add the usecols parameter. Finally, concat them all together into one big DataFrame:
dfs = [pd.read_csv(f, delimiter="\t", header=6, usecols=['C']) for f in glob.glob('*.dat')]
Or:
dfs = []
for filename in glob.glob('*.dat'):
    df = pd.read_csv(filename, delimiter="\t", header=6, usecols=['C'])
    # if you need all columns
    # df = pd.read_csv(filename, delimiter="\t", header=6)
    dfs.append(df)
df = pd.concat(dfs, ignore_index=True)
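For the real .dat files in the question (tab separated, header on line 7, wanted column named Bias per the question's own code), the same pattern would presumably look like this sketch:
import glob
import pandas as pd

# 'Bias' is the column name from the question's code; adjust if your files differ
dfs = [pd.read_csv(f, delimiter="\t", header=6, usecols=['Bias']) for f in glob.glob('*.dat')]
big = pd.concat(dfs, ignore_index=True)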
