Extra blank lines in csv file when redirecting stdout - python-3.x

When the output of this program is redirected into a csv file, there is a blank line between the two lines of output.
The other solution I found on S.O. involved changing the way the output file is opened, but since I'm using stream redirection that doesn't seem to apply.
If someone could kindly suggest how I might eliminate these extra blank lines, it would be greatly appreciated. Also, please note that I am new to Python and coding in general.
import csv
import os
import sys

def output_data(aggregrate_objects):
    header = []
    results = []
    for obj in aggregrate_objects:  # generate header
        header.append(obj.name)
    for obj in aggregrate_objects:  # generate results
        results.append(obj.get_result())
    writer = csv.writer(sys.stdout, dialect='excel', lineterminator=os.linesep)
    writer.writerow(header)
    writer.writerow(results)
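A likely cause, assuming this runs on Windows: sys.stdout is a text-mode stream, so every '\n' it writes is translated to '\r\n'. With lineterminator=os.linesep (which is '\r\n' on Windows), each row therefore ends in '\r\r\n', and the stray '\r' shows up as a blank line in the redirected file. A minimal sketch of one way around it is to write a plain '\n' and let the stream do the translation:

import csv
import sys

def output_data(aggregrate_objects):
    header = [obj.name for obj in aggregrate_objects]
    results = [obj.get_result() for obj in aggregrate_objects]
    # Write '\n' only; the text-mode stdout converts it to the platform line ending.
    writer = csv.writer(sys.stdout, dialect='excel', lineterminator='\n')
    writer.writerow(header)
    writer.writerow(results)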

Related

Eliminate footer and header information from multiple text files (why can't I eliminate the last line as easily as I can eliminate the first lines?)

I have been trying all day.
# successfully writes the data from line 17 and next lines
# to new (temp) file named and saved in the os
import os
import glob

files = glob.glob('/Users/path/Documents/test/*.txt')
for myspec in files:
    temp_filename = 'foo.temp.txt'
    with open(myspec) as f:
        for n in range(17):
            f.readline()
        with open(temp_filename, 'w') as w:
            w.writelines(f)
    os.remove(myspec)
    os.rename(temp_filename, myspec)
    # delete original file and rename the temp file so it replaces the original file
print("done")
The above works and it works well! I love it. I am very happy.
But this below does NOT work (same files, I am preprocessing files) :
# trying unsuccessfully to remove the last line which is line
# 2048 in all files and save again like above
import os
import glob

files = glob.glob('/Users/path/Documents/test/*.txt')
for myspec in files:
    temp_filename = 'foo.temp.txt'
    with open(myspec) as f:
        for n in range(-1):
            f.readline()
        with open(temp_filename, 'w') as w:
            w.writelines(f)
    os.remove(myspec)
    os.rename(temp_filename, myspec)
    # delete original file and rename the temp file so it replaces the original file
print("done")
This does not work. It doesn't give an error; it prints "done", but it does not change the files. I have tried range(-1) all the way up to range(-7), thinking maybe there were blank lines at the end I could not see. This is the only difference between the two blocks of code. If anyone could help, that would be great.
To summarize, I permanently got rid of the headers, but I still have a one-line footer I cannot get rid of permanently.
Thank you so much for any help. I need to write permanently edited files, because I have a ton of code that expects 2- or 3-column files without the header/footer junk, and the junk and file types vary widely; once the junk is gone permanently, the file types can be guessed correctly. I really do not want to rewrite that code right now: it is complicated, it took me months to get working correctly, and the files are not read until inside a function, where many of them are displayed in multiple drop-downs. I have been at this all day and have tried other methods; I would like to make the method above work, popping off the last line and writing the result back to a permanent file, but it doesn't like the -1. Right now the footer is one specific line (line 2048 after the header is removed), so just removing line 2048 would be fine too. It is the last line of the files, which are a batch of TSV files that are CCD readouts. Thanks in advance!
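For reference, range(-1) is an empty range, so the loop body never runs and the file is copied unchanged. A minimal sketch of one way to drop the last line, assuming the files are small enough to read into memory (paths and filenames follow the question):

import glob
import os

files = glob.glob('/Users/path/Documents/test/*.txt')
for myspec in files:
    temp_filename = 'foo.temp.txt'
    with open(myspec) as f:
        lines = f.readlines()        # read the whole file into a list of lines
    with open(temp_filename, 'w') as w:
        w.writelines(lines[:-1])     # write everything except the last line
    os.remove(myspec)
    os.rename(temp_filename, myspec)
print("done")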

Read .csv that contains commas

I have a .csv file that contains multiple columns with texts in it. These texts contain commas, which makes things messy when I try to read the file into Python.
When I tried:
import pandas as pd
directory = 'some directory'
dataset = pd.read_csv(directory)
I got the following error:
ParserError: Error tokenizing data. C error: Expected 3 fields in line 42, saw 5
After doing some research, I found the clevercsv package.
So, I ran:
import clevercsv as csv
dataset = csv.read_csv(directory)
Running this, I got the error:
UnicodeDecodeError: 'charmap' codec can't decode byte 0x8f in position 4359705: character maps to <undefined>
To overcome this, I tried:
dataset = csv.read_csv(directory, encoding="utf8")
However, 10 hours later my computer was still working on reading it. So I expect that something went wrong there.
Furthermore, when I open the file in Excel, the cells are split correctly. So what I tried was to save the .csv file as a .xlsx and then save it back as a .csv from Python with an uncommon delimiter ('~'). However, when I save the .csv file as a .xlsx file, the file gets smaller, which suggests that only part of the file is saved, and that is not what I want.
Lastly, I have tried the solutions given here and here. But neither seem to work for my problem.
Given that Excel reads in the file without problems, I do expect that it should be possible to read it into Python as well. Who can help me with this?
UPDATE:
When using dataset = pd.read_csv(directory, sep=',', error_bad_lines=False) I manage to read in the .csv, but many lines are skipped. Is there a better way to do this?
pandas should work: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html
Have you tried something like dataset = pd.read_csv(directory, sep=',', header=None)?
Regards
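Since Excel parses the file correctly, the problem is most likely commas inside quoted fields plus a non-UTF-8 encoding (the 0x8f byte in the UnicodeDecodeError points away from UTF-8). A sketch of settings worth trying; the encoding value here is a guess and may need to be adjusted for the actual file:

import pandas as pd

directory = 'some directory'  # path to the .csv file, as in the question
dataset = pd.read_csv(
    directory,
    sep=',',
    quotechar='"',       # keep commas inside quoted fields from splitting columns
    encoding='latin-1',  # guess; 'cp1252' or the file's real encoding may be needed
    engine='python',     # slower but more tolerant parser
)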

Index Error - For Line in File: line.split into [1] and [2]

This might be a silly question, but maybe someone can still help me out. Inside my code I'm trying to use a text file to grab some login data and loop over it. The code which is not working is the following.
Accs.txt file looks like:
User1:Passwort1
User2:Passwort2
User3:Passwort3
code.py looks like:
file = open('Accs.txt', 'r')
for acc in file:
    Mail = acc.split(':')[0]
    Passwort = acc.split(':')[1]
    print(Mail)
    print(Passwort)
After the second account on the list is read, I get an index error. I guess there is some logic behind how this works that I don't get. Could anybody help me out?
I ran the same code and it worked fine.
If there are any extra blank lines in your text file, that index out of range exception can be thrown.
Here is a workaround to handle blank lines (source: "python: how to check if a line is an empty line"):
for acc in file:
    if acc.strip():
        lineSplit = acc.split(':')
        Mail = lineSplit[0]
        Passwort = lineSplit[1]
        print(Mail)
        print(Passwort)
Also, it is more efficient to call split() once and store the result in a variable, then access the parts by index later (done in the code above as well).
You should go
for line in file:
    line = line.split(":")
    mail = line[0]
    password = line[1]  # "pass" is a reserved word in Python, so use a different name
You can also read everything in at once with file.read().splitlines(). Sorry for the layout, I'm using my phone :)
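If the file ends with a blank line or a password itself contains ':', a slightly more defensive sketch (variable names follow the question, but this is not the original poster's code):

with open('Accs.txt', 'r') as f:
    for acc in f:
        acc = acc.strip()
        if not acc:
            continue  # skip blank lines, which otherwise cause the IndexError
        mail, _, passwort = acc.partition(':')  # split only on the first ':'
        print(mail)
        print(passwort)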

Why is Python 3.6.3 csv.read() adding data out of sequence

--Noob here--
I have a table of data, as displayed below, which is stored in a file. It is processed by the code snippet below. All looks good until the last entry in the table is reached, and then the code suddenly starts appending earlier data read from the input file. I have looked for extraneous characters in the input data but found nothing that alerts me.
import csv

with open('barometer.txt', 'r') as pressure:
    formatted_file = csv.reader(pressure, delimiter=',')
    for line in formatted_file:
        print(line)
The resulting output of the above code snippet is:
I suspect a spurious character at the end of the input file but cannot seem to locate it. Any insight is greatly appreciated.
Mel Blanchard
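One way to hunt for a stray character or an odd line ending at the end of the input file (a diagnostic sketch, not part of the original post) is to print the raw representation of the last few lines:

# Show the exact bytes of the last few lines so a stray '\r', a missing final
# newline, or other invisible characters become visible.
with open('barometer.txt', 'rb') as f:
    for raw_line in f.read().splitlines(keepends=True)[-3:]:
        print(repr(raw_line))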

read login data from text file into dictionary error

Using the answer on Stack Overflow shown on this link: https://stackoverflow.com/a/4804039, I have attempted to read in the file contents into a dictionary. There is an error that I cannot seem to fix.
Code
def login():
    print("====Login====")
    userinfo = {}
    with open("userinfo.txt", "r") as f:
        for line in f:
            (key, val) = line.split()
            userinfo[key] = val
    print(userinfo)
File Contents
{'user1': 'pass'}
{'user2': 'foo'}
{'user3': 'boo'}
Error:
(key,val)=line.split()
ValueError: not enough values to unpack (expected 2, got 0)
I have a question to which I would very much appreciate a twofold answer:
1. What is the best and most efficient way to read the file contents, as shown, into a dictionary, noting that they have already been stored in dictionary format?
2. Is there a way to WRITE the dictionary out so that this "reading" becomes easier? My code for writing to the userinfo.txt file in the first place is shown below.
Write code
with open("userinfo.txt","a",newline="")as fo:
writer=csv.writer(fo)
writer.writerow([{username:password}])
Could any answers please attempt the following:
1. Provide a solution to the error using the original code.
2. Suggest the best method to do the same thing (simplest for teaching purposes). Note that I do not wish to use pickle, json or anything other than very basic file handling (so only reading from a text file or the csv reader/writer tools). For instance, would it be best to read the file contents into a list and then convert the list into a dictionary, or is there another way?
3. Is there a method of writing a dictionary to a text file using the csv writer or other basic text file handling, so that reading the file contents back into a dictionary can be done more effectively on the other end?
Update:
Blank line removed, and the code works but produces the erroneous output:
{"{"Vjr':": "'open123'}", "{'mvj':": "'mvv123'}"}
I think I need to understand the split and strip methods and how to use them in this context to produce the desired result (reading the contents into the dictionary userinfo).
Well, let's start with the basics first. The error message:
ValueError: not enough values to unpack (expected 2, got 0)
means a line was empty, so do you have a blank line in the file?
Yes, there are other options for saving your dictionary out and bringing it back, but first you should understand this, and it may work just fine for you. :-) The split() is acting on the string you read from the file, and by default it splits on whitespace, so that is what you are seeing. You could format your text file like 'username:pass' instead and then use split(':').
File Contents
user1:pass
user2:foo
user3:boo
Code
def login():
    print("====Login====")
    userinfo = {}
    with open("userinfo.txt", "r") as f:
        for line in f:
            (key, val) = line.split(':')
            userinfo[key] = val.strip()
    print(userinfo)

if __name__ == '__main__':
    login()
This simple format may be best if you want to be able to edit the text file by hand, and I like to keep it simple as possible. ;-)
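For the writing side of the question, one sketch that produces the user:pass format the code above reads back is to use csv.writer with ':' as the delimiter (the function name here is illustrative, not from the original post):

import csv

def save_user(username, password):
    # Append one "username:password" line that login() above can read back.
    with open("userinfo.txt", "a", newline="") as fo:
        writer = csv.writer(fo, delimiter=':')
        writer.writerow([username, password])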
