How to skip an encoding exception raised during enumerate in Python 3? - python-3.x

I wrote a script to preprocess a large CSV for importing into a database:
with open(sys.argv[1], encoding='utf-16') as _f:
    for i, line in enumerate(_f):
        try:
            .... some stuff with line ...
        except Exception as e:
            ...
But at some point it raises an exception inside enumerate:
...
File "/Users/elajah/PycharmProjects/untitled1/importer.py", line 94, in main
for i, line in enumerate(_f):
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/codecs.py", line 319, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/encodings/utf_16.py", line 69, in _buffer_decode
return self.decoder(input, self.errors, final)
UnicodeDecodeError: 'utf-16-le' codec can't decode byte 0x00 in position 0: truncated data
...
How can I skip broken lines in the file without interrupting the script flow?

You can pass the parameter errors="ignore" to open, to tell Python that you don't care about encoding errors when reading from the file (keep the encoding argument from your original call):
with open(sys.argv[1], encoding='utf-16', errors="ignore") as _f:
This may behave oddly, however, since it just skips the invalid bytes, not the whole line the invalid bytes showed up on.
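A quick interactive illustration of that distinction (the two-character UTF-16-LE sample with a stray trailing byte is made up, not data from the question):
>>> data = b'\xff\xfea\x00b\x00\x00'  # BOM, 'a', 'b', then one stray trailing byte
>>> data.decode('utf-16', errors='ignore')
'ab'
>>> data.decode('utf-16')
Traceback (most recent call last):
  ...
UnicodeDecodeError: 'utf-16-le' codec can't decode byte 0x00 in position 6: truncated data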
If the behavior you need is to ignore the whole line when anything goes wrong with the decoding, you might be better off reading the file in binary mode and attempting the decode yourself in the try/except block inside the loop:
with open(sys.argv[1], 'rb') as _f:
    for i, line_bytes in enumerate(_f):
        try:
            line = line_bytes.decode('utf-16')
            # do some stuff with line ...
        except UnicodeDecodeError:
            pass
A final idea is to fix whatever is wrong with your file's data so you don't get decoding errors when reading it. But who knows how easy that is. If you're getting the file from somewhere else, out of your control, there may not be any practical way to fix it ahead of time.

You ignore an exception by catching it and then doing nothing:
try:
    .... some stuff with line ...
except UnicodeDecodeError as e:
    pass
Whether that is really what you want depends on the situation. You can find the name of the exception to catch in the last line of the stack trace:
UnicodeDecodeError: 'utf-16-le' codec can't decode byte 0x00 in position 0: truncated data

Related

How to get Python to tolerate UTF-8 encoding errors

I have a set of UTF-8 texts I have scraped from web pages. I am trying to extract keywords from these files like so:
import os
import json
from rake_nltk import Rake

rake_nltk_var = Rake()
directory = 'files'
results = {}
for filename in os.scandir(directory):
    if filename.is_file():
        with open("files/" + filename.name, encoding="utf-8", mode='r') as infile:
            text = infile.read()
        rake_nltk_var.extract_keywords_from_text(text)
        keyword_extracted = rake_nltk_var.get_ranked_phrases()
        results[filename.name] = keyword_extracted
with open("extracted-keywords.json", "w") as outfile:
    json.dump(results, outfile)
One of the files I've managed to process so far is throwing the following error on read:
Traceback (most recent call last):
  File "extract-keywords.py", line 11, in <module>
    text = infile.read()
  File "c:\python36\lib\codecs.py", line 321, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x92 in position 66: invalid start byte
0x92 is a right single quotation mark, but the 66th character of the file is a "u", so I don't know where this error is coming from. Regardless, is there some way to make the codec tolerate such encoding errors? For example, Perl simply substitutes a question mark for any character it can't decode. Is there some way to get Python to do the same? I have a lot of files and can't afford to stop and debug every encoding error they might contain.
I have a set of UTF-8 texts I have scraped from web pages
If they can't be read with the script you've shown, then these are not actually UTF-8 encoded files.
We would have to know about the code that wrote the files in the first place to say what the correct way to decode them is. However, the ’ character is byte 0x92 in code page 1252, so try using that encoding instead, i.e.:
with open("files/" + filename.name, encoding="cp1252") as infile:
    text = infile.read()
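A quick interactive check (not part of the original answer) confirms the mapping:
>>> b'\x92'.decode('cp1252')
'’'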
Ignoring decoding errors corrupts the data, so it's best to use the correct decoder when possible; try that first! However, about this part of the question:
Regardless, is there some way to make the codec tolerate such encoding errors? For example, Perl simply substitutes a question mark for any character it can't decode. Is there some way to get Python to do the same?
Yes, you can specify errors="replace":
>>> with open("/tmp/f.txt", "w", encoding="cp1252") as f:
...     f.write('this is a right quote: \N{RIGHT SINGLE QUOTATION MARK}')
...
>>> with open("/tmp/f.txt", encoding="cp1252") as f:
...     print(f.read())  # using correct encoding
...
this is a right quote: ’
>>> with open("/tmp/f.txt", encoding="utf-8", errors="replace") as f:
...     print(f.read())  # using incorrect encoding and replacing errors
...
this is a right quote: �

Error message: UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 3131: invalid start byte

I am a newbie in programming and have a question:
I am trying to edit some .vtt files, removing certain substrings from the text while keeping the file structure. To do this, I copied the .vtt files into the folder and changed their extension to .txt. Now I run this simple code:
import os

file_index = 0
all_text = []
path = "/Users/username/Documents/programming/IMS/Translate/files/"
new_path = "/Users/username/Documents/programming/IMS/Translate/new_files/"
for filename in os.listdir(path):
    if os.path.isfile(filename):  # check if there is a file in the directory
        with open(os.path.join(path, filename), 'r') as file:  # open in read-only mode
            for line in file.read().split("\n"):  # read lines and split
                line = " ".join(line.split())
                start_index = line.find("[")  # index of the first character of the substring to remove, or -1
                last_index = start_index + 11  # index one past the last character to remove
                if start_index != -1:
                    line = line[:start_index] + line[last_index:]  # keep the characters before and after the removed substring
                    all_text.append(line)
                else:
                    line = line[:]
                    all_text.append(line)
I get this error message:
> File "srt-files-strip.py", line 11, in <module>
> for line in file.read().split("\n"): #read lines and split File "/usr/local/Cellar/python#3.8/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/codecs.py", line 322, in decode
> (result, consumed) = self._buffer_decode(data, self.errors, final) UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position
> 3131: invalid start byte
I have searched through different forums and changed to encoding="utf16", but to no avail. The strange thing is that it worked earlier. Then I wrote a program to rename my files automatically, and after that it threw this error. I have cleared all the files in the folder and copied the original ones in again, but I can't get it to work. I would really appreciate your help, as I have really no idea where to look. Thanks
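One way to pin down which byte is tripping the decoder is to read the raw bytes and decode them yourself, then inspect the exception's start attribute. A minimal sketch, not from the thread, reusing the path variable and os import from the code above:
for filename in os.listdir(path):
    full = os.path.join(path, filename)
    if os.path.isfile(full):
        with open(full, 'rb') as f:  # binary mode: no decoding yet
            raw = f.read()
        try:
            raw.decode('utf-8')
        except UnicodeDecodeError as e:
            # e.start is the offset of the first undecodable byte
            print(filename, hex(raw[e.start]), 'at offset', e.start)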

UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte while reading a text file

I am training a word2vec model using about 700 text files as my corpus. But when I start reading the files after the preprocessing step, I get the error mentioned in the title. The code is as follows:
class MyCorpus(object):
    def __iter__(self):
        for i in ceo_path:  # ceo_path contains the absolute paths of all text files
            file = open(i, 'r', encoding='utf-8')
            text = file.read()
            ###########
            ###########  text preprocessing steps
            ###########
            yield final_text  # returns preprocessed text

sentences = MyCorpus()
logging.basicConfig(format="%(levelname)s - %(asctime)s: %(message)s", datefmt='%H:%M:%S', level=logging.INFO)

# training the model
cores = multiprocessing.cpu_count()
w2v_model = Word2Vec(min_count=5,
                     iter=30,
                     window=3,
                     size=200,
                     sample=6e-5,
                     alpha=0.025,
                     min_alpha=0.0001,
                     negative=20,
                     workers=cores - 1,
                     sg=1)
w2v_model.build_vocab(sentences)
w2v_model.train(sentences, total_examples=w2v_model.corpus_count, epochs=30, report_delay=1)
w2v_model.save('ceo1.model')
The error that I am getting is:
Traceback (most recent call last):
  File "C:/Users/name/PycharmProjects/prac2/hbs_word2vec.py", line 131, in <module>
    w2v_model.build_vocab(sentences)
  File "C:\Users\name\PycharmProjects\prac1\venv\lib\site-packages\gensim\models\base_any2vec.py", line 921, in build_vocab
    total_words, corpus_count = self.vocabulary.scan_vocab(
  File "C:\Users\name\PycharmProjects\prac1\venv\lib\site-packages\gensim\models\word2vec.py", line 1403, in scan_vocab
    total_words, corpus_count = self._scan_vocab(sentences, progress_per, trim_rule)
  File "C:\Users\name\PycharmProjects\prac1\venv\lib\site-packages\gensim\models\word2vec.py", line 1372, in _scan_vocab
    for sentence_no, sentence in enumerate(sentences):
  File "C:/Users/name/PycharmProjects/prac2/hbs_word2vec.py", line 65, in __iter__
    text = file.read()
  File "C:\Users\name\AppData\Local\Programs\Python\Python38-32\lib\codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte
I am not able to understand the error, as I am new to this. I was not getting the error when reading the text files before, when I wasn't using the iter function and sending the data in chunks as I am doing currently.
It looks like one of your files doesn't have proper utf-8-encoded text.
(Your Word2Vec-related code probably isn't necessary for hitting the error at all. You could probably trigger the same error with just: sentences_list = list(MyCorpus()).)
To find which file, two different possibilities might be:
Change your MyCorpus class so that it prints the path of each file before it tries to read it.
Add a try: ... except UnicodeDecodeError: ... statement around the read, and when the exception is caught, print the offending filename, as sketched below.
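A minimal sketch of that second approach (it assumes the ceo_path list from the question; the message wording is illustrative):
class MyCorpus(object):
    def __iter__(self):
        for i in ceo_path:
            try:
                with open(i, 'r', encoding='utf-8') as file:
                    text = file.read()
            except UnicodeDecodeError:
                print('could not decode:', i)  # note the offending file
                continue  # skip it and move on
            yield text.split()  # stand-in for the real preprocessing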
Once you know the file involved, you may want to fix the file, or change the code to be able to handle the files you have.
Maybe the files aren't really in utf-8 encoding, in which case you'd specify a different encoding.
Maybe just one or a few have problems, and it'd be OK to just print their names for later investigation and skip them. (You could use the exception-handling approach above to do that.)
Maybe those that aren't utf-8 are always in some other platform-specific encoding, so when utf-8 fails, you could try a second encoding, as sketched after this list.
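The fallback could look like this; the helper name read_with_fallback and the choice of cp1252 as the second encoding are assumptions, not something established by the thread:
def read_with_fallback(path):
    with open(path, 'rb') as f:
        raw = f.read()
    try:
        return raw.decode('utf-8')
    except UnicodeDecodeError:
        return raw.decode('cp1252')  # assumed platform-specific encoding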
Separately: when you solve the encoding issue, your iterable MyCorpus is not yet returning what the Word2Vec class expects.
It doesn't want full text plain strings. It needs those texts to already be broken up into individual word-tokens.
(Often, simply performing a .split() on a string is close enough to real tokenization to try as a starting point, but projects usually use some more sophisticated punctuation-aware tokenization.)
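For instance, assuming final_text is a plain string, the smallest change in the iterator would be:
yield final_text.split()  # a list of word-tokens instead of one long string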

Why does file.tell() affect encoding?

Calling tell() while reading a GBK-encoded file of mine causes the next call to readline() to raise a UnicodeDecodeError. However, if I don't call tell(), it doesn't raise this error.
C:\tmp>hexdump badtell.txt
000000: 61 20 6B 0D 0A D2 BB B0-E3 a k......
C:\tmp>type test.py
with open(r'c:\tmp\badtell.txt', "r", encoding='gbk') as f:
    while True:
        pos = f.tell()
        line = f.readline()
        if not line: break
        print(line)
C:\tmp>python test.py
a k
Traceback (most recent call last):
  File "test.py", line 4, in <module>
    line = f.readline()
UnicodeDecodeError: 'gbk' codec can't decode byte 0xd2 in position 0: incomplete multibyte sequence
When I remove the f.tell() statement, it decodes successfully. Why?
I tried Python 3.4/3.5 x64 on Win7/Win10; it is the same on all of them.
Anyone have any idea? Should I report a bug?
I have a big text file, and I really want to get file position ranges of this big text. Is there a workaround?
OK, there is a workaround; it works so far:
with open(r'c:\tmp\badtell.txt', "rb") as f:
    while True:
        pos = f.tell()
        line = f.readline()
        if not line: break
        line = line.decode("gbk").strip('\n')
        print(line)
I submitted an issue yesterday here: http://bugs.python.org/issue26990
Still no response yet.
I just replicated this on Python 3.4 x64 on Linux. Looking at the docs for TextIOBase, I don't see anything that says tell() causes problems with reading a file, so maybe it is indeed a bug.
b'\xd2'.decode('gbk')
gives an error like the one that you saw, but in your file that byte is followed by the byte 0xBB, and
b'\xd2\xbb'.decode('gbk')
gives a value equal to '\u4e00', not an error.
I found a workaround that works for the data in your original question, but not for other data, as you've since found. I wish I knew why! I called seek() after every tell(), with the value that tell() returned:
pos = f.tell()
f.seek(pos)
line = f.readline()
An alternative to f.seek(f.tell()) is to use the SEEK_CUR mode of seek() to give the position. With an offset of 0, this does the same as the above code: it moves to the current position and returns it. Note that the io module must be imported for the SEEK_CUR constant:
import io

pos = f.seek(0, io.SEEK_CUR)
line = f.readline()

Python 3, UnicodeEncodeError with decode set to ignore

This code makes an HTTP call to a Solr index:
query_uri = prop.solr_base_uri + "?q=" + query + "&wt=json&indent=true"
with urllib.request.urlopen(query_uri) as response:
    data = response.read()  # data is bytes
    data_str = data.decode('utf-8', 'ignore')
    print(data_str)
The print statement throws:
UnicodeEncodeError: 'charmap' codec can't encode character '\u2715' in position 149273: character maps to <undefined>
I thought decode('utf-8', 'ignore') was supposed to ignore non-UTF-8 bytes and leave them out of the result? How is it that I get a UnicodeEncodeError in the print statement? How do I handle characters that can't be encoded? Thanks!
The error is caused by print, not by the decode: print has to encode the string for the output stream, and the stream's codec (here Windows' legacy 'charmap' code page) can't represent the '\u2715' character.
The recommended approach is to set PYTHONIOENCODING=UTF-8 in your environment, or to encode each string before printing:
print(data_str.encode("utf-8"))
For file writing, set the encoding for the file when you open it:
file = open("/temp/test.txt", "w", encoding="UTF-8")
file.write('\u2715')
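On Python 3.7+, another option (not mentioned in the answer above) is to reconfigure stdout from inside the script. A minimal sketch:
import sys

# Re-wrap stdout as UTF-8; backslashreplace escapes anything the
# terminal still cannot display instead of raising an error.
sys.stdout.reconfigure(encoding="utf-8", errors="backslashreplace")
print('\u2715')  # now prints the character (or an escape) safely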
