Replacing and writing a file python [duplicate] - python-3.x
I want to loop over the contents of a text file and do a search and replace on some lines and write the result back to the file. I could first load the whole file in memory and then write it back, but that probably is not the best way to do it.
What is the best way to do this, within the following code?
f = open(file)
for line in f:
    if 'foo' in line:  # note: str has no .contains() method; use the `in` operator
        newline = line.replace('foo', 'bar')
        # how to write this newline back to the file?
The shortest way would probably be to use the fileinput module. For example, the following adds line numbers to a file, in-place:
import fileinput

for line in fileinput.input("test.txt", inplace=True):
    print('{} {}'.format(fileinput.filelineno(), line), end='')  # for Python 3
    # print "%d: %s" % (fileinput.filelineno(), line),           # for Python 2
What happens here is:
The original file is moved to a backup file
The standard output is redirected to the original file within the loop
Thus any print statements write back into the original file
fileinput has more bells and whistles. For example, it can be used to automatically operate on all files in sys.argv[1:], without your having to iterate over them explicitly. Starting with Python 3.2 it also provides a convenient context manager for use in a with statement.
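A minimal, self-contained sketch of that context-manager form (the demo file name and contents here are assumptions, not from the question):

```python
import fileinput
import os
import tempfile

# Set up a throwaway file to edit in place (name/contents are hypothetical).
path = os.path.join(tempfile.mkdtemp(), "demo.txt")
with open(path, "w") as f:
    f.write("foo one\nfoo two\n")

# Since Python 3.2, fileinput.input() can be used as a context manager,
# so its internal file handles are closed automatically.
with fileinput.input(path, inplace=True) as f:
    for line in f:
        print(line.replace("foo", "bar"), end="")

with open(path) as f:
    result = f.read()
```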
While fileinput is great for throwaway scripts, I would be wary of using it in real code because admittedly it's not very readable or familiar. In real (production) code it's worthwhile to spend just a few more lines of code to make the process explicit and thus make the code readable.
There are two options:
The file is not overly large, and you can just read it wholly into memory; then close the file, reopen it in write mode, and write the modified contents back.
The file is too large to be stored in memory; you can move it over to a temporary file and open that, reading it line by line, writing back into the original file. Note that this requires twice the storage.
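The first option might look like the following sketch (the function name and demo file are mine, not from the answer):

```python
import os
import tempfile

def replace_in_place(path, old, new):
    # Option 1: the file fits in memory -- read it wholly, then rewrite it.
    with open(path) as f:
        contents = f.read()
    with open(path, "w") as f:  # mode 'w' truncates the file before writing
        f.write(contents.replace(old, new))

# Demo on a throwaway file.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")
with open(path, "w") as f:
    f.write("foo and foo\n")
replace_in_place(path, "foo", "bar")
with open(path) as f:
    result = f.read()
```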
I guess something like this should do it. It basically writes the content to a new file and replaces the old file with the new file:
from tempfile import mkstemp
from shutil import move, copymode
from os import fdopen, remove

def replace(file_path, pattern, subst):
    # Create temp file
    fh, abs_path = mkstemp()
    with fdopen(fh, 'w') as new_file:
        with open(file_path) as old_file:
            for line in old_file:
                new_file.write(line.replace(pattern, subst))
    # Copy the file permissions from the old file to the new file
    copymode(file_path, abs_path)
    # Remove original file
    remove(file_path)
    # Move new file
    move(abs_path, file_path)
Here's another example that was tested, and will match search & replace patterns:
import fileinput
import sys

def replaceAll(file, searchExp, replaceExp):
    for line in fileinput.input(file, inplace=1):
        if searchExp in line:
            line = line.replace(searchExp, replaceExp)
        sys.stdout.write(line)
Example use:
replaceAll("/fooBar.txt", "Hello World!", "Goodbye World.")
(Note that replaceAll does plain str.replace substitution, so the search and replace expressions are taken as literal text, not regex patterns.)
This should work (in-place editing):
import fileinput

# Iterates over a list of files, and
# redirects STDOUT to the file in question
for line in fileinput.input(files, inplace=1):
    print(line.replace("foo", "bar"), end='')
Based on the answer by Thomas Watnedal.
However, this does not answer the line-by-line part of the original question exactly. The function can still replace on a line-by-line basis.
This implementation replaces the file contents without using temporary files; as a consequence, file permissions remain unchanged.
Also, re.sub instead of replace allows regex replacement instead of plain-text replacement only.
Reading the file as a single string instead of line by line allows for multiline matching and replacement.
import re

def replace(file, pattern, subst):
    # Read contents from file as a single string
    file_handle = open(file, 'r')
    file_string = file_handle.read()
    file_handle.close()

    # Use the re module to allow for (multiline) regex replacement
    file_string = re.sub(pattern, subst, file_string)

    # Write contents back to the file.
    # Using mode 'w' truncates the file.
    file_handle = open(file, 'w')
    file_handle.write(file_string)
    file_handle.close()
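A quick illustration of the multiline benefit, using a hypothetical two-line pattern that a line-by-line loop could never match:

```python
import re

# Hypothetical file contents, read as one string.
file_string = "start\nfoo\nbar\nend\n"

# The pattern spans a line boundary, which only works on the whole string.
file_string = re.sub(r"foo\nbar", "baz", file_string)
```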
As lassevk suggests, write out the new file as you go; here is some example code:
fin = open("a.txt")
fout = open("b.txt", "wt")

for line in fin:
    fout.write(line.replace('foo', 'bar'))

fin.close()
fout.close()
If you want a generic function that replaces any text with some other text, this is likely the best way to go, particularly if you're a fan of regexes:
import re

def replace(filePath, text, subs, flags=0):
    with open(filePath, "r+") as file:
        fileContents = file.read()
        textPattern = re.compile(re.escape(text), flags)
        fileContents = textPattern.sub(subs, fileContents)
        file.seek(0)
        file.truncate()
        file.write(fileContents)
A more Pythonic way would be to use context managers, as in the code below (note that fdopen wraps the descriptor returned by mkstemp, so it does not leak):
from tempfile import mkstemp
from shutil import move
from os import fdopen, remove

def replace(source_file_path, pattern, substring):
    fh, target_file_path = mkstemp()
    with fdopen(fh, 'w') as target_file:
        with open(source_file_path, 'r') as source_file:
            for line in source_file:
                target_file.write(line.replace(pattern, substring))
    remove(source_file_path)
    move(target_file_path, source_file_path)
fileinput is quite straightforward, as mentioned in previous answers:
import fileinput

def replace_in_file(file_path, search_text, new_text):
    with fileinput.input(file_path, inplace=True) as file:
        for line in file:
            new_line = line.replace(search_text, new_text)
            print(new_line, end='')
Explanation:
fileinput can accept multiple files, but I prefer to close each file as soon as it has been processed, so I passed a single file_path to the with statement.
The print call does not print anything to the console when inplace=True, because STDOUT is redirected to the original file.
end='' in the print call eliminates the intermediate blank lines.
You can use it as follows:
file_path = '/path/to/my/file'
replace_in_file(file_path, 'old-text', 'new-text')
Create a new file, copy lines from the old to the new, and do the replacing before you write the lines to the new file.
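That approach might look like the following sketch (the function name, temp-file naming scheme, and demo file are my own, not the answerer's):

```python
import os
import tempfile

def replace_via_new_file(path, old, new):
    # Copy lines to a new file, replacing as we go, then swap it in.
    tmp_path = path + ".tmp"  # hypothetical naming scheme
    with open(path) as src, open(tmp_path, "w") as dst:
        for line in src:
            dst.write(line.replace(old, new))
    os.replace(tmp_path, path)  # swaps the new file over the original

# Demo on a throwaway file.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")
with open(path, "w") as f:
    f.write("foo here\n")
replace_via_new_file(path, "foo", "bar")
with open(path) as f:
    result = f.read()
```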
Expanding on Kiran's answer, which I agree is more succinct and Pythonic, this adds codecs to support the reading and writing of UTF-8 (and closes the descriptor returned by mkstemp, since codecs.open reopens the file by path):
import codecs
from tempfile import mkstemp
from shutil import move
from os import close, remove

def replace(source_file_path, pattern, substring):
    fh, target_file_path = mkstemp()
    close(fh)  # mkstemp returns an open descriptor; close it before reopening by path
    with codecs.open(target_file_path, 'w', 'utf-8') as target_file:
        with codecs.open(source_file_path, 'r', 'utf-8') as source_file:
            for line in source_file:
                target_file.write(line.replace(pattern, substring))
    remove(source_file_path)
    move(target_file_path, source_file_path)
Using hamishmcn's answer as a template, I was able to search for lines in a file that matched my regex and replace them with an empty string.
import re

fin = open("in.txt", 'r')    # input file
fout = open("out.txt", 'w')  # output file

p = re.compile(r'[-][0-9]*[.][0-9]*[,]|[-][0-9]*[,]')  # pattern
for line in fin:
    newline = p.sub('', line)  # replace matching strings with empty string
    print(newline)
    fout.write(newline)

fin.close()
fout.close()
If you remove the indentation as shown below, it will search and replace across multiple lines.
See below for an example.
from tempfile import mkstemp
from shutil import move
from os import close, remove

def replace(file, pattern, subst):
    # Create temp file
    fh, abs_path = mkstemp()
    print(fh, abs_path)
    new_file = open(abs_path, 'w')
    old_file = open(file)
    for line in old_file:
        new_file.write(line.replace(pattern, subst))
    # Close temp file
    new_file.close()
    close(fh)
    old_file.close()
    # Remove original file
    remove(file)
    # Move new file
    move(abs_path, file)
Related
Find string and replace line
I already could get a lot of my code together (although it is not a long code). However, I am struggling to achieve the "replace the whole line and not only the search term" part. Is there a symbol you can place to do that? Like * or %, etc.

import glob

for files in glob.glob("./prtr/*p*"):
    with open(files, 'r') as file:
        filedata = file.read()
    filedata = filedata.replace('TCPOPTS', 'TCPOPTS = 80\n')
    with open(files, 'w') as file:
        file.write(filedata)

It works so far that "TCPOPTS" is replaced with "TCPOPTS = 80" and a line break is added. But it does not delete the rest of that line; it just moves it to the next line, which is of course correct given the code. So, as mentioned, all I need now is to have it replace not just the search term but the whole line containing that search term. Any advice is highly appreciated :)

Kind regards

Edit:
Before: TCPOPTS = 90
Afterwards: TCPOPTS = 80 = 90
Expected: TCPOPTS = 80
I recently solved a very similar task in the following way:

# Scan file
with open(filePath, 'r') as file:
    fileContent = file.readlines()

# Find the line where the modification should be done
for lineIndex in range(len(fileContent)):
    if 'TCPOPTS' in fileContent[lineIndex]:
        fileContent[lineIndex] = 'TCPOPTS = 80\n'
        with open(filePath, 'w') as tableFile:
            tableFile.writelines(fileContent)
        break

The benefit of doing it this way is that the file is not rewritten if your keyword is not found.
Try using str.startswith.

Ex:

import glob

for files in glob.glob("./prtr/*p*"):
    res = []
    with open(files) as infile:
        for line in infile:                 # Iterate each line
            if line.startswith("TCPOPTS"):  # Check if TCPOPTS starts the line
                res.append("TCPOPTS = 80\n")
            else:
                res.append(line)
    with open(files, "w") as outfile:       # Write back to file
        for line in res:
            outfile.write(line)
You can use re.sub (after importing re) to match the whole line and use back-references to preserve selected portions of the match.

Change:

filedata = filedata.replace('TCPOPTS', 'TCPOPTS = 80\n')

to:

filedata = re.sub(r'^(?P<header>TCPOPTS\s*=\s*).*', r'\g<header>80', filedata, flags=re.MULTILINE)
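To illustrate on a sample string (the sample content is assumed, mirroring the question's Before/Expected case):

```python
import re

filedata = "TCPOPTS = 90\nOTHER = 1\n"  # hypothetical file contents
filedata = re.sub(
    r'^(?P<header>TCPOPTS\s*=\s*).*',  # match the whole line, capturing the prefix
    r'\g<header>80',                   # the back-reference keeps "TCPOPTS = "
    filedata,
    flags=re.MULTILINE,
)
```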
Python - Spyder 3 - Open a list of .csv files and remove all double quotes in every file
I've read everything I can find and tried about 20 examples from SO and Google, and nothing seems to work. This should be very simple, but I cannot get it to work. I just want to point to a folder and replace every double quote in every file in the folder. That is it. (And I don't know Python well at all, hence my issues.) I have no doubt that some of the scripts I've tried to retask must work, but my lack of Python skill is getting in the way. This is as close as I've gotten, and I get errors. If I don't get errors, it seems to do nothing. Thanks.

import glob
import csv

mypath = glob.glob('\\C:\\csv\\*.csv')
for fname in mypath:
    with open(mypath, "r") as infile, open("output.csv", "w") as outfile:
        reader = csv.reader(infile)
        writer = csv.writer(outfile)
        for row in reader:
            writer.writerow(item.replace('"', '') for item in row)
You don't need to use csv-specific file opening and writing; I think that makes it more complex. How about this instead:

import os

mypath = r'\path\to\folder'
for file in os.listdir(mypath):  # This will loop through every file in the folder
    if '.csv' in file:  # Check if it's a csv file
        fpath = os.path.join(mypath, file)
        fpath_out = fpath + '_output'  # Create an output file with a similar name to the input file
        with open(fpath) as infile:
            lines = infile.readlines()  # Read all lines
        with open(fpath_out, 'w') as outfile:
            for line in lines:  # One line at a time
                outfile.write(line.replace('"', ''))  # Remove each " and write the line

Let me know if this works, and respond with any error messages you may have.
I found the solution to this based on the original answer provided by Jeff. It was actually smart quotes (u'\u201d', to be exact), not straight quotes. That is why I could get nothing to work. That is a great way to spend like two days; now if you'll excuse me I have to go jump off the roof. But for posterity, here is what I used that worked. (And note: there is the left-curving smart quote as well; that is u'\u201c'.)

import os

mypath = 'C:\\csv\\'
myoutputpath = 'C:\\csv\\output\\'
for file in os.listdir(mypath):  # This will loop through every file in the folder
    if '.csv' in file:  # Check if it's a csv file
        fpath = os.path.join(mypath, file)
        fpath_out = os.path.join(myoutputpath, file)  # + '_output'
        with open(fpath) as infile:
            lines = infile.readlines()  # Read all lines
        with open(fpath_out, 'w') as outfile:
            for line in lines:  # One line at a time
                outfile.write(line.replace(u'\u201d', ''))  # Remove each smart quote and write the line
Python 3 split('\n')
How do I split a text string according to an explicit newline ('\n')?

Unfortunately, instead of a properly formatted csv file, I am dealing with a long string of text with "\n" where the newline would be (example format: "A0,B0\nA1,B1\nA2,B2\nA3,B3\n ..."). I thought a simple bad_csv_list = text.split('\n') would give me a list of the two-valued cells (example split: ['A0,B0', 'A1,B1', 'A2,B2', 'A3,B3', ...]). Instead I end up with one cell, and "\n" gets converted to "\\n". I tried copy-pasting a section of the string and using split('\n') and it works as I had hoped.

The print statement for the file object tells me the following:

<_io.TextIOWrapper name='stats.csv' mode='r' encoding='cp1252'>

...so I suspect the problem is with the cp1252 encoding? Of note, though: Notepad++ says the file I am working with is "UTF-8 without BOM"... I've looked in the docs and around SO and tried importing io and codecs and prepending the open statement and declaring encoding='utf8', but I am at a loss and I don't really grok text encoding. Maybe there is a better solution?

from sys import argv
# import io, codecs

filename = argv[1]
file_object = open(filename, 'r')
# file_object = io.open(filename, 'r', encoding='utf8')
# file_object = codecs.open(filename, 'r', encoding='utf8')
file_contents = file_object.read()
file_list = file_contents.split('\n')

print("1.) Here's the name of the file: {}".format(filename))
print("2.) Here's the file object info: {}".format(file_object))
print("3.) Here's all the file's contents:\n{}".format(file_contents))
print("4.) Here's a list of the file contents:\n{}".format(file_list))

Any help would be greatly appreciated, thank you.
If it helps to explain what I am dealing with, here's the contents of the stats.csv file: Albuquerque,749\nAnaheim,371\nAnchorage,828\nArlington,503\nAtlanta,1379\nAurora,425\nAustin,408\nBakersfield,542\nBaltimore,1405\nBoston,835\nBuffalo,1288\nCharlotte-Mecklenburg,647\nCincinnati,974\nCleveland,1383\nColorado Springs,455\nCorpus Christi,658\nDallas,675\nDenver,615\nDetroit,2122\nEl Paso,423\nFort Wayne,362\nFort Worth,587\nFresno,543\nGreensboro,563\nHenderson,168\nHouston,992\nIndianapolis,1185\nJacksonville,617\nJersey City,734\nKansas City,1263\nLas Vegas,784\nLexington,352\nLincoln,397\nLong Beach,575\nLos Angeles,481\nLouisville Metro,598\nMemphis,1750\nMesa,399\nMiami,1172\nMilwaukee,1294\nMinneapolis,992\nMobile,522\nNashville,1216\nNew Orleans,815\nNew York,639\nNewark,1154\nOakland,1993\nOklahoma City,919\nOmaha,594\nPhiladelphia,1160\nPhoenix,636\nPittsburgh,752\nPlano,130\nPortland,517\nRaleigh,423\nRiverside,443\nSacramento,738\nSan Antonio,503\nSan Diego,413\nSan Francisco,704\nSan Jose,363\nSanta Ana,401\nSeattle,597\nSt. Louis,1776\nSt. 
Paul,722\nStockton,1548\nTampa,616\nToledo,1171\nTucson,724\nTulsa,990\nVirginia Beach,169\nWashington,1177\nWichita,742 And the result from the split('\n'): ['Albuquerque,749\\nAnaheim,371\\nAnchorage,828\\nArlington,503\\nAtlanta,1379\\nAurora,425\\nAustin,408\\nBakersfield,542\\nBaltimore,1405\\nBoston,835\\nBuffalo,1288\\nCharlotte-Mecklenburg,647\\nCincinnati,974\\nCleveland,1383\\nColorado Springs,455\\nCorpus Christi,658\\nDallas,675\\nDenver,615\\nDetroit,2122\\nEl Paso,423\\nFort Wayne,362\\nFort Worth,587\\nFresno,543\\nGreensboro,563\\nHenderson,168\\nHouston,992\\nIndianapolis,1185\\nJacksonville,617\\nJersey City,734\\nKansas City,1263\\nLas Vegas,784\\nLexington,352\\nLincoln,397\\nLong Beach,575\\nLos Angeles,481\\nLouisville Metro,598\\nMemphis,1750\\nMesa,399\\nMiami,1172\\nMilwaukee,1294\\nMinneapolis,992\\nMobile,522\\nNashville,1216\\nNew Orleans,815\\nNew York,639\\nNewark,1154\\nOakland,1993\\nOklahoma City,919\\nOmaha,594\\nPhiladelphia,1160\\nPhoenix,636\\nPittsburgh,752\\nPlano,130\\nPortland,517\\nRaleigh,423\\nRiverside,443\\nSacramento,738\\nSan Antonio,503\\nSan Diego,413\\nSan Francisco,704\\nSan Jose,363\\nSanta Ana,401\\nSeattle,597\\nSt. Louis,1776\\nSt. Paul,722\\nStockton,1548\\nTampa,616\\nToledo,1171\\nTucson,724\\nTulsa,990\\nVirginia Beach,169\\nWashington,1177\\nWichita,742'] Why does it ADD a \ ?
dOh!!! ROYAL FACE PALM! I just wrote all this out and then realized that all I needed to do was put an escape slash before the \n: file_list = file_contents.split('\\n') I'm gonna post this anyways so y'all can have a chuckle ^_^
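A tiny demonstration of why the extra backslash is needed when the file holds a literal backslash followed by 'n' rather than a real newline character:

```python
# What the file actually holds: backslash + 'n', not the newline character.
raw = "A0,B0\\nA1,B1\\nA2,B2"

# split('\n') would find no real newlines here, so escape the backslash instead.
cells = raw.split('\\n')
```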
Merging multiple text files into one and related problems
I'm using Windows 7 and Python 3.4. I have several multi-line text files (all in Persian) and I want to merge them into one under one condition: each line of the output file must contain the whole text of each input file. That means if there are nine text files, the output text file must have only nine lines, each line containing the text of a single file. I wrote this:

import os

os.chdir('C:\Dir')
with open('test.txt', 'w', encoding='UTF8') as OutFile:
    with open('news01.txt', 'r', encoding='UTF8') as InFile:
        while True:
            _Line = InFile.readline()
            if len(_Line) == 0:
                break
            else:
                _LineString = str(_Line)
                OutFile.write(_LineString)

It worked for that one file, but it looks like it takes more than one line in the output file, and the output file also contains disturbing characters like &amp;, &nbsp; and things like that. But the source files don't contain any of them. Also, I've got some other texts: news02.txt, news03.txt, news04.txt ... news09.txt. Considering all these:

How can I correct my code so that it reads all files one after another, putting each on only one line?
How can I clean these unfamiliar and strange characters or prevent them from appearing in my final text?
Here is an example that will do the merging portion of your question:

def merge_file(infile, outfile, separator=""):
    print(separator.join(line.strip("\n") for line in infile), file=outfile)

def merge_files(paths, outpath, separator=""):
    with open(outpath, 'w') as outfile:
        for path in paths:
            with open(path) as infile:
                merge_file(infile, outfile, separator)

Example use:

merge_files(["C:\file1.txt", "C:\file2.txt"], "C:\output.txt")

Note this makes the rather large assumption that the contents of infile can fit into memory. Reasonable for most text files, but possibly quite unreasonable otherwise. If your text files will be very large, you can use this alternate merge_file implementation:

def merge_file(infile, outfile, separator=""):
    for line in infile:
        outfile.write(line.strip("\n") + separator)
    outfile.write("\n")

It's slower, but shouldn't run into memory problems.
Answering question 1: You were right about the UTF-8 part. You probably want to create a function which takes multiple files as a tuple of files/strings of file directories or *args. Then, read all input files, and replace all "\n" (newlines) with a delimiter (default ""). out_file can be in in_files, but this makes the assumption that the contents of the files can be loaded into memory. Also, out_file can be a file object, and in_files can be file objects.

def write_from_files(out_file, in_files, delimiter="", dir="C:\Dir"):
    import _io
    import os
    import html.parser  # See part 2 of the answer
    os.chdir(dir)
    output = []
    for file in in_files:
        file_ = file
        if not isinstance(file_, _io.TextIOWrapper):
            file_ = open(file_, "r", -1, "UTF-8")  # If it isn't a file object, make it one
        file_.seek(0, 0)
        output.append(file_.read().replace("\n", delimiter))  # Replace all newlines with the delimiter
        file_.close()  # Close file to prevent IO errors
    if not isinstance(out_file, _io.TextIOWrapper):
        out_file = open(out_file, "w", -1, "UTF-8")
    joined = html.parser.HTMLParser().unescape("\n".join(output))
    out_file.write(joined)
    out_file.close()
    return joined  # Do not have to return

Answering question 2: I think you may have copied from a webpage. This does not happen to me. The &amp; and &nbsp; are the HTML entities for & and a non-breaking space. You may need to replace them with their corresponding characters. I would use html.parser. As you see above, it turns HTML escape sequences into Unicode literals. E.g.:

>>> html.parser.HTMLParser().unescape("Alpha &lt; &beta;")
'Alpha < β'

This will not work in Python 2.x, as the module was renamed in 3.x. Instead, replace the incorrect lines with:

import HTMLParser
HTMLParser.HTMLParser().unescape("\n".join(output))
python3 opening files and reading lines
Can you explain what is going on in this code? I don't seem to understand how you can open the file and read it line by line instead of all of the sentences at the same time in a for loop. Thanks.

Let's say I have these sentences in a document file:

cat:dog:mice
cat1:dog1:mice1
cat2:dog2:mice2
cat3:dog3:mice3

Here is the code:

from sys import argv

filename = input("Please enter the name of a file: ")
f = open(filename, 'r')
d1ct = dict()
print("Number of times each animal visited each station:")
print("Animal Id             Station 1         Station 2")
for line in f:
    if '\n' == line[-1]:
        line = line[:-1]
    (AnimalId, Timestamp, StationId,) = line.split(':')
    key = (AnimalId, StationId,)
    if key not in d1ct:
        d1ct[key] = 0
    d1ct[key] += 1
The magic is at:

for line in f:
    if '\n' == line[-1]:
        line = line[:-1]

Python file objects are special in that they can be iterated over in a for loop. On each iteration, it retrieves the next line of the file. Because it includes the last character in the line, which could be a newline, it's often useful to check and remove the last character.
As Moshe wrote, open file objects can be iterated. Only, they are not of the file type in Python 3.x (as they were in Python 2.x). If the file object is opened in text mode, then the unit of iteration is one text line including the \n.

You can use line = line.rstrip() to remove the \n plus the trailing whitespace. If you want to read the content of the file at once (into a multiline string), you can use content = f.read().

There is a minor bug in the code: the open file should always be closed. I mean you should use f.close() after the for loop. Or you can wrap the open in the newer with construct that will close the file for you; I suggest getting used to the latter approach.
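A minimal sketch of that with-based version of the question's loop (the demo file name and its contents are assumptions):

```python
import os
import tempfile

# Build a throwaway data file in the question's AnimalId:Timestamp:StationId format.
path = os.path.join(tempfile.mkdtemp(), "visits.txt")
with open(path, "w") as f:
    f.write("cat:dog:mice\ncat:dog2:mice\n")

counts = {}
with open(path) as f:  # the file is closed automatically when the block ends
    for line in f:
        animal_id, _timestamp, station_id = line.rstrip().split(':')
        key = (animal_id, station_id)
        counts[key] = counts.get(key, 0) + 1
```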