I'm trying to read a tab-delimited CSV file that's saved as a .DAT file.
I need to skip everything until I get to the data portion under the Date header.
So 05-29-2012 and everything on that row and the rows below it. I found plenty of documentation on how to skip the first few lines, but I don't know how many lines that may be. From the
"Data file created" line to the meat of the data, one file may have more lines of text than another. It could
be 3 rows, or it could be 10.
I have thousands of these files I'm trying to extract the data from and plot. It would be easy in Excel to just cut and paste, but I'm going for efficiency here.
This is the code I'm using. I see the data perfectly, but only if I already know how many lines to skip. There will be blank lines, and I get how to bypass those, but any extra lines of text add lines I can't bypass.
import pandas as pd
import csv
myfile = ('E:\\TTF Data Backup\\1X ARRAY MOD #2.dat')
df = pd.read_csv(myfile, skiprows=3 , delimiter='\t')
print(df.head(20))
Try this code:
import pandas as pd

myfile = 'E:\\TTF Data Backup\\1X ARRAY MOD #2.dat'
skipcnt = 0
with open(myfile) as f:  # auto closes after loop
    for row in f:
        skipcnt += 1
        if "Tension" in row and "Elong" in row:  # top of the header
            break
skipcnt += 3  # skip the remaining header lines
df = pd.read_csv(myfile, skiprows=skipcnt, delimiter='\t')
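As a variant (my own sketch, not part of the answer above): since the file is already open when the header is found, you could let pandas continue from the same handle and avoid scanning the file twice. pd.read_csv accepts an open file object; header=None is used because the header lines have already been consumed.

import pandas as pd

myfile = 'E:\\TTF Data Backup\\1X ARRAY MOD #2.dat'
with open(myfile) as f:
    for row in f:
        if "Tension" in row and "Elong" in row:  # top of the header
            break
    for _ in range(3):
        next(f)  # discard the remaining header lines
    df = pd.read_csv(f, delimiter='\t', header=None)  # parses from the current position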
I am a beginner in Python. I have a CSV file that has about 3M rows, and I want to split it into several CSV files based on a list of indexes.
For example, I have the list of the indexes where I need to do the splitting, like:
index = [0, 1000, 5000, ... etc]
I want to split the CSV file based on these indexes, so the first csv file has the data from row 0 to row 1000, the second CSV file has the data from row 1001 to row 5000, and so on.
I tried to use something like the code below, but it doesn't work; I think I have a problem in the loop.
with open('input_0.csv', 'r') as f:
    csvfile = f.readlines()

filename = 1
for i in range(0, len(csvfile)):
    with open(str(filename) + '.csv', 'w+') as f:
        if filename > 1:
            f.write(csvfile[0])  # header again
        f.writelines(csvfile[index[i]:1+index[i]])
    filename += 1
Thanks in advance
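For what it's worth, here is a minimal sketch of one way the split could work, pairing consecutive boundaries with zip (the index list and input_0.csv come from the question; appending the final row count as a closing boundary is my assumption):

with open('input_0.csv', 'r') as f:
    csvfile = f.readlines()

header = csvfile[0]
index = [0, 1000, 5000]         # the question's boundaries (truncated here)
index.append(len(csvfile) - 1)  # close the last chunk at the end of the file

# Each (start, end) pair delimits one output file: rows start+1 .. end inclusive.
for filename, (start, end) in enumerate(zip(index, index[1:]), start=1):
    with open(str(filename) + '.csv', 'w') as out:
        out.write(header)  # repeat the header in every file
        out.writelines(csvfile[start + 1:end + 1])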
I want to extract numbers only from lines in a txt file that have a certain keyword, add them up per line, then compare the totals and print the highest and the lowest. How should I go about this?
I want to print the highest and the lowest valid total numbers.
I managed to extract the lines with the "VALID" keyword in them, but now I want to get the numbers from these lines, add up the numbers on each line, compare these totals with the other lines that have the same keyword, and print the highest and the lowest valid totals.
My code so far:
# get a file object reference to the file
file = open("shelfs.txt", "r")
# read the content of the file into a string
data = file.read()
# close the file
file.close()
# get the number of occurrences of each substring in the string
totalshelfs = data.count("SHELF")
totalvalid = data.count("VALID")  # note: this also matches the "VALID" inside "INVALID"
totalinvalid = data.count("INVALID")
print('Number of total shelfs :', totalshelfs)
print('Number of valid books :', totalvalid)
print('Number of invalid books :', totalinvalid)
The txt file:
HEADER|
SHELF|2200019605568|
BOOK|20200120000000|4810.1|20210402|VALID|
SHELF|1591024987400|
BOOK|20200215000000|29310.0|20210401|VALID|
SHELF|1300001188124|
BOOK|20200229000000|11519.0|20210401|VALID|
SHELF|1300001188124|
BOOK|20200329001234|115.0|20210331|INVALID|
SHELF|1300001188124|
BOOK|2020032904567|1144.0|20210401|INVALID|
FOOTER|
What you need is to use the pandas library.
https://pandas.pydata.org/
You can read a csv file like this:
data = pd.read_csv('shelfs.txt', sep='|')
It returns a DataFrame object that makes it easy to select or sort your data. It will use the first row as the header; then you can select a specific column like a dictionary:
header = data['HEADER']
header is a Series object.
To select rows you can do:
shelfs = data.loc[data['HEADER'] == 'SHELF']
to select only the rows where the first column is 'SHELF'.
I'm just not sure how pandas will handle the fact that you only have 1 header field but rows of 2 to 5 columns.
Maybe you should first try to create one header per column in your csv, and add separators to make each row the same size.
Edit (no external libraries or changes to the txt file):
# Split into rows (data is the string from file.read() above)
data = data.split('\n')
# Split each row into columns
data = [d.split('|') for d in data]
# Pad short rows so every row has the same number of cells
n_cols = max(len(d) for d in data)
for i in range(len(data)):
    while len(data[i]) < n_cols:
        data[i].append('')
# Keep only the VALID rows
valid_rows = [d for d in data if d[4] == 'VALID']
# Sum the three numeric fields of each row, then take the min and max
valid_sum = [float(d[1]) + float(d[2]) + float(d[3]) for d in valid_rows]
valid_minimum = min(valid_sum)
valid_maximum = max(valid_sum)
It's maybe not exactly what you want to do, but it solves part of your problem. I didn't test the code.
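For comparison, a compact sketch of the same idea using only the csv module with '|' as the delimiter (my own variant, not from the answer above; shelfs.txt is the file name from the question):

import csv

with open("shelfs.txt", newline='') as f:
    reader = csv.reader(f, delimiter='|')
    # BOOK rows look like: BOOK|20200120000000|4810.1|20210402|VALID|
    sums = [float(row[1]) + float(row[2]) + float(row[3])
            for row in reader
            if len(row) > 4 and row[4] == 'VALID']

print('Highest valid total :', max(sums))
print('Lowest valid total  :', min(sums))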
I am trying to read a CSV file that has four parts on the same page, distinguished by some empty rows in the middle of the spreadsheet. I want to somehow ask pandas to stop reading the rest of the file as soon as it finds the empty row.
Edit: I need to elaborate on the problem. I have a CSV file that has 4 different sections separated by 3-4 empty rows. I need to extract each of these sections, or at least the first one. In other words, I want read_csv to stop when it finds the first empty row (after skipping the rows with details about the file, of course).
from io import BytesIO
from urllib.request import urlopen
from zipfile import ZipFile
import pandas as pd

url = urlopen("https://mba.tuck.dartmouth.edu/pages/faculty/ken.french/ftp/30_Industry_Portfolios_CSV.zip")
zipfile = ZipFile(BytesIO(url.read()))
data = pd.read_csv(zipfile.open('30_Industry_Portfolios.CSV'),
                   header=0, index_col=0,
                   skiprows=11, parse_dates=True)
You could use a generator. Suppose the csv module is generating rows. (We might use yield from sheet, except that we'll change the loop in a moment.)
import csv
import pandas as pd

def get_rows(csv_fspec, skip_rows=12):
    with open(csv_fspec) as fin:
        sheet = csv.reader(fin)
        for _ in range(skip_rows):
            next(sheet)  # discard initial rows
        for row in sheet:
            yield row

df = pd.DataFrame(get_rows(my_csv))
Now you want to ignore rows after encountering some condition, perhaps once the initial column is empty. OK, that's simple enough; just change the loop body:
for row in sheet:
    if row[0]:
        yield row
    else:
        break  # ignore the rest of the input file
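Assembled end to end, the approach might look like this (my own assembly; the file name and skiprows value come from the question's snippet, the stop condition here is a fully blank row, since csv.reader yields an empty list for a blank line, and treating the first kept row as the header is my assumption):

import csv
import pandas as pd

def first_section(csv_fspec, skip_rows=11):
    with open(csv_fspec, newline='') as fin:
        sheet = csv.reader(fin)
        for _ in range(skip_rows):
            next(sheet)  # discard the preamble rows
        for row in sheet:
            if not row:  # csv.reader yields [] for a blank line
                break    # first section is over
            yield row

rows = list(first_section('30_Industry_Portfolios.CSV'))
df = pd.DataFrame(rows[1:], columns=rows[0])  # first kept row as the header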
Trying to sum a column in a csv file that has a header row at the top. I'm trying to use this for loop, but it just returns zero. Any thoughts?
import csv

CSVFile = open('Data103.csv')
CSVReader = csv.reader(CSVFile)  # you don't pass a file name directly to csv.reader
CSVDataList = list(CSVReader)    # stores the csv file as a list of lists
print(CSVDataList[0][16])

total = 0
for row in CSVReader:
    if CSVReader.line_num == 1:
        continue
        total += int(row[16])
print(total)
Here is what a sample of the data looks like in the txt:
Value,Value,Value, "15,500.05", 00.00, 00.00
So the items are delimited by commas, except that values which need escaping are wrapped in double quotes. It's a pretty standard file with a header row and about 1k lines of data across 18 columns.
You might want to use Pandas.
import pandas as pd
df = pd.read_csv('/path/to/file.csv')
column_sum = df['column_name'].sum()
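Since the column contains quoted values like "15,500.05", you may also want read_csv's thousands parameter so those parse as numbers rather than strings (the column name below is a placeholder):

import pandas as pd

df = pd.read_csv('Data103.csv', thousands=',')  # "15,500.05" -> 15500.05
column_sum = df['column_name'].sum()            # 'column_name' is a placeholder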
It seems that you've over-indented the line that does the sum. It should be like this:
for row in CSVReader:
    if CSVReader.line_num == 1:
        continue
    total += int(row[16])
Otherwise you'll only sum the values for the first row, which is exactly the one you want to skip.
EDIT:
Since you said the previous change doesn't work, I'd suggest working with the excellent Python lib called rows.
With the following CSV (fruits.csv):
id,name,amount
1,apple,3
2,banana,6
3,pineapple,2
4,lemon,5
You can access columns directly by their name instead of their index:
import rows

data = rows.import_from_csv('fruits.csv')
for fruit_data in data:
    print(fruit_data.name, fruit_data.amount)
# output:
# apple 3
# banana 6
# pineapple 2
# lemon 5
NEW EDIT:
After you've provided the data, I believe in your case you could do something like:
import rows

data = rows.import_from_csv('Data103.csv')
print(data.field_names[16])  # prints the field name

total = 0
for row in data:
    value = row.<column_name>       # placeholder: substitute the actual field name
    value = value.replace(',', '')  # remove the thousands separators
    total += float(value)
print(total)
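If you'd rather stay with the standard library, the same by-name access works with csv.DictReader (my own variant, not from the answer above; the column name is again a placeholder):

import csv

total = 0
with open('Data103.csv', newline='') as f:
    reader = csv.DictReader(f)  # uses the header row for the field names
    for row in reader:
        value = row['column_name'].replace(',', '')  # strip the thousands separators
        total += float(value)
print(total)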
I have the following problem:
I want to convert a tab-delimited text file to a csv file. The text file is the SentiWS dictionary, which I want to use for a sentiment analysis ( https://github.com/MechLabEngineering/Tatort-Analyzer-ME/tree/master/SentiWS_v1.8c ).
The code I used to do this is the following:
import csv

txt_file = r"SentiWS_v1.8c_Positive.txt"
csv_file = r"NewProcessedDoc.csv"
in_txt = csv.reader(open(txt_file, "r"), delimiter='\t')
out_csv = csv.writer(open(csv_file, 'w'))
out_csv.writerows(in_txt)
This code writes everything into a single column, but I need the data to be in three columns, as laid out in the original file. There is also a blank line under each data row and I don't know why.
I want the data to be in this form:
Column1 Column2 Column3
Word    Data    Words
Word    Data    Words
instead of
Column1
Word,Data,Words
Word,Data,Words
Can anyone help me?
import pandas
This will read the tab-delimited text file into a dataframe:
dataframe = pandas.read_csv("SentiWS_v1.8c_Positive.txt", delimiter="\t")
Then write the dataframe out as a CSV:
dataframe.to_csv("NewProcessedDoc.csv", encoding='utf-8', index=False)
Try this:
import csv

txt_file = r"SentiWS_v1.8c_Positive.txt"
csv_file = r"NewProcessedDoc.csv"

with open(txt_file, "r") as in_text:
    in_reader = csv.reader(in_text, delimiter='\t')
    with open(csv_file, "w", newline='') as out_csv:
        out_writer = csv.writer(out_csv)  # newline='' belongs to open(), not csv.writer
        for row in in_reader:
            out_writer.writerow(row)
There is also a blank line under each data row and I don't know why.
You're probably using a file created or edited in a Windows-based text editor. According to the Python 3 csv module docs:
If newline='' is not specified, newlines embedded inside quoted fields will not be interpreted correctly, and on platforms that use \r\n line endings on write an extra \r will be added. It should always be safe to specify newline='', since the csv module does its own (universal) newline handling.
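In other words, the blank lines come from opening the output file without newline='': the csv module writes \r\n row terminators itself, and on Windows the text layer translates the \n into \r\n again, producing \r\r\n, which most programs display as an extra blank line. A minimal before/after sketch:

import csv

# Produces a blank line between rows on Windows:
with open("out_bad.csv", "w") as f:
    csv.writer(f).writerow(["Word", "Data", "Words"])

# Correct: let the csv module handle the line endings itself.
with open("out_good.csv", "w", newline='') as f:
    csv.writer(f).writerow(["Word", "Data", "Words"])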