How to create csv file for each line in a text file? - python-3.x

I have a text file price.txt that contains the following rows:
open
high
low
close
I need to create a separate csv file for each row in the text file and name the csv files price1.csv, price2.csv, and so on.
I tried the following code
with open('price.txt') as infile, open('outfile.csv','w') as outfile:
    for line in infile:
        outfile.write(line.replace(' ', ','))
I am getting only one csv file that has the following rows
open
high
low
close
How can I create a csv file for each row?

Here is the code to produce a different file (price1.csv, price2.csv, etc.) for every line, with the whitespace-to-comma substitution from your example, commented:
### start a counter for the filename number
i = 0
with open('price.txt') as infile:
    ### loop over the rows of the input file
    for line in infile:
        ### add 1 to the counter
        i += 1
        ### create the output filename for the row
        newfile_name = "price" + str(i) + ".csv"
        ### write the modified row into the new file
        with open(newfile_name, 'w') as outfile:
            outfile.write(line.replace(' ', ','))

Related

How do I convert multiple multiline txt files to excel - ensuring each file is its own column, then each line of text is its own row? Python3

Using openpyxl and Path I aim to:
Create multiple multiline .txt files,
then insert .txt content into a .xlsx file ensuring file 1 is in column 1 and each line has its own row.
I thought to create a nested list and then loop through it to insert the text. I cannot figure out how to ensure that all of the nested list's strings are displayed. This is what I have so far; it nearly does what I want, but it just repeats the first line of text.
from pathlib import Path
import openpyxl

listOfText = []
wb = openpyxl.Workbook()  # Create a new workbook to insert the text files
sheet = wb.active
for txtFile in range(5):  # create 5 text files
    createTextFile = Path('textFile' + str(txtFile) + '.txt')
    createTextFile.write_text(f'''Hello, this is a multiple line text file.
My Name is x.
This is text file {txtFile}.''')
    readTxtFile = open(createTextFile)
    listOfText.append(readTxtFile.readlines())  # nest the list from each text file into a parent list
    textFileList = len(listOfText[txtFile])  # get the number of lines of text from the file. They are all 3 as made above
# Each column displays text from each text file
for row in range(1, txtFile + 1):
    for col in range(1, textFileList + 1):
        sheet.cell(row=row, column=col).value = listOfText[txtFile][0]
wb.save('importedTextFiles.xlsx')
The output is 4 columns/4 rows. All of which say the same 'Hello, this is a multiple line text file.'
Appreciate any help with this!
The problem is in the for loop while writing. Change the line sheet.cell(row=row, column=col).value = listOfText[txtFile][0] to sheet.cell(row=col, column=row).value = listOfText[row-1][col-1] and it will work.
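To see why the index swap works, here is the same double loop with a plain dict standing in for the openpyxl sheet (a sketch; the line contents are made up, and (row, column) tuple keys mimic sheet.cell):

```python
# Five files of three lines each, like the example above (contents made up).
listOfText = [[f'file {f}, line {l}' for l in range(3)] for f in range(5)]

cells = {}  # (row, column) -> value, standing in for the worksheet
num_files = len(listOfText)
num_lines = len(listOfText[0])
for row in range(1, num_files + 1):      # 'row' walks the files...
    for col in range(1, num_lines + 1):  # ...'col' walks the lines
        # the swap: file `row-1` fills *column* `row`, one line per sheet row
        cells[(col, row)] = listOfText[row - 1][col - 1]
```

So file 0 fills column 1 downward, file 1 fills column 2, and so on, which is the layout the question asked for.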

How to swap the contents of two TXT files in Python?

I have two (*.txt) files and I want to replace file1's text with file2's and file2's text with file1's.
Here is my code:
f1 = open("20-file-1.txt", "r+")
f2 = open("20-file-2.txt", "r+")
f1_all_lines_list = f1.readlines()
f2_all_lines_list = f2.readlines()
f1.truncate(0)
f2.truncate(0)
f1.write(''.join(f2_all_lines_list))
f2.write(''.join(f1_all_lines_list))
f1.close()
f2.close()
Everything works well, but each time I run the code some space is added before the first line, and after several runs both txt files grow in size and my IDE gets stuck.
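The growing files and the leading garbage are most likely because truncate(0) shrinks the file but does not move the file position, which sits at the end after readlines(); the next write then pads the gap (with NUL bytes on most systems). Seeking back to 0 before truncating avoids it. A sketch with inline sample files (the file names are made up):

```python
from pathlib import Path

# Hypothetical sample files for the demo.
Path('swap-1.txt').write_text('first file\n')
Path('swap-2.txt').write_text('second file\n')

with open('swap-1.txt', 'r+') as f1, open('swap-2.txt', 'r+') as f2:
    text1, text2 = f1.read(), f2.read()
    for f, new_text in ((f1, text2), (f2, text1)):
        f.seek(0)      # rewind: truncate() does not move the position
        f.truncate(0)  # now the file is empty and writes start at offset 0
        f.write(new_text)
```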

Convert and concatenate data from two columns of a csv file

I have a csv file which contains data in two columns, as follows:
40500 38921
43782 32768
55136 49651
63451 60669
50550 36700
61651 34321
and so on...
I want to convert each value into its hex equivalent, then concatenate them, and write them into a column in another csv file.
For example: hex(40500) = 9E34, and hex(38921) = 9809.
So, in output csv file, element A1 would be 9E349809
So, I am expecting column A in the output csv file to be:
9E349809
AB068000
D760C1F3
F7DBECFD
C5768F5C
F0D38611
I referred to sample code that concatenates two columns, but am struggling with converting the values to hex and then concatenating them. Here is the code:
import csv

inputFile = 'input.csv'
outputFile = 'output.csv'
with open(inputFile) as f:
    reader = csv.reader(f)
    with open(outputFile, 'w') as g:
        writer = csv.writer(g)
        for row in reader:
            new_row = [''.join([row[0], row[1]])] + row[2:]
            writer.writerow(new_row)
How can I convert the data in each column to its hex equivalent, then concatenate the results and write them to another file?
You could do this in 4 steps:
1. Read the lines from the input csv file.
2. Use formatting options to get the hex value of each number.
3. Concatenate the strings to get your result.
4. Write to the new csv file.
Sample Code:
with open(outputFile, 'w') as outfile:
    with open(inputFile, 'r') as infile:
        for line in infile:  # Iterate through each line
            left, right = int(line.split()[0]), int(line.split()[1])  # split into left and right blocks
            newstr = '{:x}'.format(left) + '{:x}'.format(right)  # hex values without the '0x' prefix
            outfile.write(newstr + '\n')  # write to output file, one result per row
print('Conversion completed')
print('Closing outputfile')
Sample Output:
For line = '40500 38921', newstr is '9e349809'.
ParvBanks' solution is good (clear and functional); I would simplify it a little:
with open(inputFile, 'r') as infile, open(outputFile, 'w+') as outfile:
    for line in infile:
        outfile.write("".join(["{:x}".format(int(v)) for v in line.split()]) + "\n")
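One caveat with '{:x}' in both versions: it drops leading zeros, so a value like 2441 (0x989) comes out only three digits wide and the concatenated string becomes ambiguous. If each block should be exactly four hex digits, as in the expected output above, zero-padded uppercase formatting does it. A sketch on the sample values (the last pair is made up to show the padding):

```python
lines = ['40500 38921', '43782 32768', '2441 32768']  # last pair is made up

out = []
for line in lines:
    a, b = (int(v) for v in line.split())
    # '{:04X}' pads each block to four digits and matches the
    # uppercase expected output in the question
    out.append('{:04X}{:04X}'.format(a, b))
```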

python csv format all rows to one line

I have a csv file and I would like to get all the rows into one column. I have tried importing it into MS Excel and formatting it with Notepad++, but with each try it treats a piece of data as a new row.
How can I format the file with Python's csv module so that it removes the string "BRAS" and corrects the format? Each row is enclosed in quotes " and the delimiter is a pipe |.
Update:
"aa|bb|cc|dd|
ee|ff"
"ba|bc|bd|be|
bf"
"ca|cb|cd|
ce|cf"
The above is supposed to be 3 rows; however, my editors see it as 5 or 6 rows, and so forth.
import csv
import fileinput

with open('ventoya.csv') as f, open('ventoya2.csv', 'w') as w:
    for line in f:
        if 'BRAS' not in line:
            w.write(line)
N.B. I get a unicode error when trying this in Python:
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x8f in position 18: character maps to <undefined>
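That traceback means the file was opened with the platform's default codec (cp1252 on Windows), which has no character for byte 0x8f. Passing an explicit encoding to open() usually fixes it, and errors='replace' keeps the read from failing on stray bytes. A sketch (the file content here is made up to reproduce the error):

```python
from pathlib import Path

# A file containing a byte that cp1252 cannot decode (made-up content).
Path('ventoya.csv').write_bytes(b'"aa|bb\x8fcc"\n')

# An explicit encoding avoids the platform-dependent default; with
# errors='replace', undecodable bytes become U+FFFD instead of raising.
with open('ventoya.csv', encoding='utf-8', errors='replace') as f:
    text = f.read()
```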
This is a quick hack for small input files (the content is read to memory).
#!python2
fnameIn = 'ventoya.csv'
fnameOut = 'ventoya2.csv'

with open(fnameIn) as fin, open(fnameOut, 'w') as fout:
    data = fin.read()               # content of the input file
    data = data.replace('\n', '')   # make it one line
    data = data.replace('""', '|')  # split char instead of doubled ""
    data = data.replace('"', '')    # remove the first and last "
    print data
    for x in data.split('|'):       # split by bar
        fout.write(x + '\n')        # write to separate lines
Or if the goal is only to fix the extra (unwanted) newline to form a single-column CSV file, the file can be fixed first, and then read through the csv module:
#!python2
import csv

fnameIn = 'ventoya.csv'
fnameFixed = 'ventoyaFixed.csv'
fnameOut = 'ventoya2.csv'

# Fix the input file.
with open(fnameIn) as fin, open(fnameFixed, 'w') as fout:
    data = fin.read()                  # content of the file
    data = data.replace('\n', '')      # remove the newlines
    data = data.replace('""', '"\n"')  # add the newlines back between the cells
    fout.write(data)

# It is an overkill, but now the fixed file can be read using
# the csv module.
with open(fnameFixed, 'rb') as fin, open(fnameOut, 'wb') as fout:
    reader = csv.reader(fin)
    writer = csv.writer(fout)
    for row in reader:
        writer.writerow(row)
You don't even need code to solve this:
1: Just open the file in Notepad++.
2: On the first line, select from the | symbol through to the next line.
3: Go to Replace and replace the selected text with |.
The search mode can be Normal or Extended :)
Well, since the line breaks are consistent, you could go in and do find/replace as suggested, but you could also do a quick conversion with your python script:
import csv
import fileinput

linecount = 0
with open('ventoya.csv') as f, open('ventoya2.csv', 'w') as w:
    for line in f:
        line = line.rstrip()
        # remove unwanted breaks by concatenating pairs of rows
        if linecount % 2 == 0:
            line1 = line
        else:
            full_line = line1 + line
            # remove spaces from front of 2nd half of line
            full_line = full_line.replace(' ', '')
            # if you want comma delimiters, uncomment next line:
            # full_line = full_line.replace('|', ',')
            if 'BRAS' not in full_line:
                w.write(full_line + '\n')
        linecount += 1
This works for me with the test data, and you can change the delimiters while writing to the file if you want. The nice thing about doing it with code is: 1. you can do it with code (always fun), and 2. you can remove the line breaks and filter the content written to the file at the same time.
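It is also worth noting that the csv module can parse this layout directly: each record is one quoted field, and a quoted field may legally contain newlines, so csv.reader reassembles the broken rows by itself. A sketch on the sample data from the question:

```python
import csv
import io

# The three sample records from the question, embedded newlines and all.
data = '"aa|bb|cc|dd|\nee|ff"\n"ba|bc|bd|be|\nbf"\n"ca|cb|cd|\nce|cf"\n'

records = []
for row in csv.reader(io.StringIO(data)):
    # each parsed row is a single quoted field; drop the embedded
    # newlines and split on the pipe delimiter
    records.append(row[0].replace('\n', '').split('|'))
```

From here, filtering out 'BRAS' lines or rewriting with different delimiters is a one-liner per record.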

Fixing broken rows from txt file using python

I am a complete novice at programming. I am trying to parse and format 'broken' rows (rogue LFs in the file instead of the \r\n Windows format) in a txt file. Using Python 3.4 and reading these types of posts, I have managed to read the source files and create a file containing only the broken rows, with all the LFs removed so it is one long line. Now I need to read that line and count the delimiters, which are in this format '<|>', and after the 36th one add a newline, then continue counting the next 36 and add a newline, and so on. I have tried a few different things but have got stuck, as I am not sure whether I need to .tell() and then .seek() to insert the \n. Any suggestions as to how to insert the newline char after the 36th delimiter?
my_count = 36  # define the number of delimiters to count
LineNumber = 1  # define line counter
FileName = 'Broken_Registrations.txt'  # variable to define filename
target = open('Target.txt', 'w', encoding='utf-8')  # open a file to write fixed lines
with open(FileName, encoding="utf8") as file:
    for line in file:  # open file read
        cnt = line.count('<|>')  # count delimiters
        if cnt == mycount:  # count until mycount then
            target.write(line).append("\n")  # write line and append new line char
print('DONE!')  # let me know when you finished
target.close()  # close the file opened outside of the with
OK, I managed it; it was simple all along. There is probably a much more efficient way to do it, but this worked for me:
#import pdb
#pdb.set_trace()
my_count = 36
LineNumber = 1  # define line counter
FileName = 'Broken_Registrations.txt'  # variable to define filename
target = open('Target.txt', 'w', encoding='utf-8')  # open a file to write fixed lines
with open(FileName, encoding="utf8") as file:
    for line in file:  # open file read
        cnt = line.count('<|>')  # count delimiters
        if cnt == my_count:  # count until my_count, then
            line = line.rstrip()  # remove whitespace
            target.write(line + "\n")  # write line and append newline char
print('DONE!')  # let me know when you finished
target.close()  # close the file opened outside of the with
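For the original goal, splitting one long joined line after every 36th delimiter, there is no need for .tell()/.seek(): splitting on the delimiter and rejoining fixed-size groups of fields does it in memory. A sketch (it assumes 36 delimiters means 37 fields per record; adjust if the records end with a trailing delimiter):

```python
def split_records(joined, delim='<|>', n_delims=36):
    """Break one long line into records of n_delims delimiters each."""
    fields = joined.split(delim)
    per_record = n_delims + 1  # n delimiters separate n + 1 fields
    return [delim.join(fields[i:i + per_record])
            for i in range(0, len(fields), per_record)]

# A toy example with 2 delimiters (3 fields) per record:
rows = split_records('a<|>b<|>c<|>d<|>e<|>f', n_delims=2)
```

Each returned record can then be written out followed by '\n', which restores the broken rows.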
