Remove non UTF-8 characters from string in python

I am attempting to read in tweets and write these tweets to a file. However, I am getting UnicodeEncodeErrors when I try to write some of these tweets to a file. Is there a way to remove these non utf-8 characters so I can write out the rest of the tweet?
For example, a problem tweet may look like this:
Camera? 🎥
This is the code I am using:
with open("Tweets.txt",'w') as f:
for user_tws in twitter.get_user_timeline(screen_name='camera',
count = 200):
try:
f.write(user_tws["text"] + '\n')
except UnicodeEncodeError:
print("skipped: " + user_tws["text"])
mod_tw = user_tws["text"]
mod_tw=mod_tw.encode('utf-8','replace').decode('utf-8')
print(mod_tw)
f.write(mod_tw)
The error is this:
UnicodeEncodeError: 'charmap' codec can't encode character '\U0001f3a5' in position 56: character maps to <undefined>

You are not writing a UTF-8 encoded file; add the encoding parameter to the open function:
with open("Tweets.txt",'w', encoding='utf8') as f:
...
Have fun 🎥
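For completeness, here is a self-contained sketch of the fixed write loop; plain strings stand in for the tweet texts, since the twitter client from the question isn't reproduced here. With the file opened as UTF-8, the try/except fallback becomes unnecessary, because UTF-8 can encode every character, emoji included:

tweets = ["Camera? \U0001f3a5", "plain ascii tweet"]  # stand-ins for user_tws["text"]
with open("Tweets.txt", "w", encoding="utf8") as f:
    for text in tweets:
        f.write(text + "\n")  # no UnicodeEncodeError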

Related

How to get python to tolerate UTF-8 encoding errors

I have a set of UTF-8 texts I have scraped from web pages. I am trying to extract keywords from these files like so:
import os
import json
from rake_nltk import Rake

rake_nltk_var = Rake()
directory = 'files'
results = {}
for filename in os.scandir(directory):
    if filename.is_file():
        with open("files/" + filename.name, encoding="utf-8", mode='r') as infile:
            text = infile.read()
            rake_nltk_var.extract_keywords_from_text(text)
            keyword_extracted = rake_nltk_var.get_ranked_phrases()
            results[filename.name] = keyword_extracted
with open("extracted-keywords.json", "w") as outfile:
    json.dump(results, outfile)
One of the files I've managed to process so far is throwing the following error on read:
Traceback (most recent call last):
  File "extract-keywords.py", line 11, in <module>
    text = infile.read()
  File "c:\python36\lib\codecs.py", line 321, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x92 in position 66: invalid start byte
0x92 is a right single quotation mark, but the 66th char of the file is a "u", so IDK where this error is coming from. Regardless, is there some way to make the codec tolerate such encoding errors? For example, Perl simply substitutes a question mark for any character it can't decode. Is there some way to get Python to do the same? I have a lot of files and can't afford to stop and debug every encoding error they might contain.
I have a set of UTF-8 texts I have scraped from web pages
If they can't be read with the script you've shown, then these are not actually UTF-8 encoded files.
We would have to know which code wrote the files in the first place to tell the correct way to decode. However, the ’ character is byte 0x92 in code page 1252, so try using that encoding instead, i.e.:
with open("files/" + filename.name, encoding="cp1252") as infile:
text = infile.read()
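You can sanity-check that byte mapping in the REPL (a quick check, not part of the original answer):

>>> b"\x92".decode("cp1252")  # byte 0x92 is the right single quotation mark
'’'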
Ignoring decoding errors corrupts the data, so it's best to use the correct decoder when possible; try that first! However, about this part of the question:
Regardless, is there some way to make the codec tolerate such encoding errors? For example, Perl simply substitutes a question mark for any character it can't decode. Is there some way to get Python to do the same?
Yes, you can specify errors="replace":
>>> with open("/tmp/f.txt", "w", encoding="cp1252") as f:
... f.write('this is a right quote: \N{RIGHT SINGLE QUOTATION MARK}')
...
>>> with open("/tmp/f.txt", encoding="cp1252") as f:
... print(f.read()) # using correct encoding
...
this is a right quote: ’
>>> with open("/tmp/f.txt", encoding="utf-8", errors="replace") as f:
... print(f.read()) # using incorrect encoding and replacing errors
this is a right quote: �
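The same errors argument also works at the bytes level, if you have already read raw bytes and want to decode them leniently (my example, same idea as above):

>>> b"\x92".decode("utf-8", errors="replace")
'�'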

What to do if using utf-8 while writing in a file results in replacement of some characters?

Writing some raw text + HTML data into a file with these lines:
ready_file = open('example.txt', 'w', encoding='utf-8')
ready_file.write(raw_data_html)
ready_file.close()
If I'm using encoding, it results in some characters, such as ' and ", being replaced with ”, “, вЂ™ and so on.
If I'm not using encoding, it results in this type of error:
UnicodeEncodeError: 'charmap' codec can't encode character 'X' in position Y: character maps to <undefined>
If I'm writing only text data without the HTML part and not using encoding, as in the example below:
ready_file = open('example.txt', 'w')
ready_file.write(raw_data)
ready_file.close()
then it's fine and no ' or " characters get replaced with ” and so on.
How do I avoid this error or avoid my characters being replaced with god knows what?
UPD: figured it out! I had ” instead of " in my initial files.
Thanks for the answers!
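For the record, that kind of garbage is classic mojibake: curly quotes written as UTF-8 and then viewed with a legacy single-byte encoding. A minimal sketch, assuming a cp1251 viewer (which matches the вЂ™ fragment above):

# '’' (U+2019) encodes to three UTF-8 bytes: E2 80 99.
# Read back byte-by-byte as cp1251, they display as 'вЂ™'.
print("’".encode("utf-8").decode("cp1251"))  # вЂ™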

Read .txt with emoji characters in python

I try to read a chat history with smilies in it, but I get the following error:
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 38: character maps to <undefined>
My code looks like this:
file_name = "chat_file.txt"
chat = open(file_name)
chatText = chat.read()  # read data
chat.close()
print(chatText)
I am pretty certain that it's because of elements like: ❤
How can I implement the correct Transformation Format // what is the correct file encoding so python can read these elements?
Never open text files without specifying their encoding.
Also, use with blocks; they automatically call .close() so you don't have to.
file_name = "chat_file.txt"
with open(file_name, encoding="utf8") as chat:
    chat_text = chat.read()
print(chat_text)
ISO-8859-1 is a legacy encoding, which means it cannot represent emoji. For emoji, the text file has to be Unicode, and the most common encoding for Unicode is UTF-8.
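To see why a legacy 8-bit codec can't hold emoji, here is the mirror-image failure (a quick demonstration, not part of the original answer):

>>> "❤".encode("iso-8859-1")
Traceback (most recent call last):
  ...
UnicodeEncodeError: 'latin-1' codec can't encode character '\u2764' in position 0: ordinal not in range(256)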

'charmap' codec can't encode characters in position XX

I have a simple script that is attempting to extract multiple JSON objects from a single file and store them as a list:
import json

URL = r"C:\Users\Kenneth\Youtube_comment_parser\Testing.txt"

with open(URL, 'r', encoding="utf-8") as handle:
    json_data = [json.loads(line) for line in handle]
print(json_data)  # Can't .encode() because it's a list
Even after specifying utf-8 encoding, I'm still running into a codec error. If possible, I would also like to change this object into a dictionary, but this is as far as I've got.
The exact error reads:
UnicodeEncodeError: 'charmap' codec can't encode characters in position
394-395: character maps to <undefined>
Thanks in advance.
I was able to solve this issue by removing one Unicode character that was producing "<undefined>", the string '\ufeff', and then the rest was able to display nicely. This required me to iterate over the keys in the list of dictionaries and replace as necessary.
import json

URL = r"C:\Users\Kenneth\Youtube_comment_parser\Testing.txt"

json1_file = open(URL, encoding='utf-8')
json1_str = json1_file.read()
json1_str = [d.strip() for d in json1_str.splitlines()]
json1_data = [json.loads(i) for i in json1_str]
json1_data = [{key: value.replace(u'\ufeff', '') for
               key, value in json1_data[index].items()} for
              index in range(len(json1_data))]
print(json1_data[1]['text'].encode('utf-8'))
Still not sure why I have to open with utf-8 and then encode again with my print statement, but it produced the string nicely.
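Two side notes that may resolve the remaining mystery. First, '\ufeff' at the start of a file is the UTF-8 byte order mark (BOM); opening the file with the utf-8-sig codec strips it automatically, so the per-value .replace() pass becomes unnecessary. A sketch, reusing URL from the snippet above:

with open(URL, encoding="utf-8-sig") as json1_file:  # strips a leading BOM
    json1_str = json1_file.read()

Second, the final .encode('utf-8') only appears to fix printing because print then receives bytes and shows their repr, which sidesteps the console's 'charmap' codec; the characters themselves were never the problem once the file was decoded.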

Python 3, UnicodeEncodeError with decode set to ignore

This code makes an HTTP call to a Solr index.
import urllib.request

query_uri = prop.solr_base_uri + "?q=" + query + "&wt=json&indent=true"
with urllib.request.urlopen(query_uri) as response:
    data = response.read()
    # data is bytes
    data_str = data.decode('utf-8', 'ignore')
    print(data_str)
The print statement throws:
UnicodeEncodeError: 'charmap' codec can't encode character '\u2715' in position 149273: character maps to <undefined>
I thought the decode('utf-8', 'ignore') was supposed to ignore non-UTF-8 bytes and leave them out of the result? How is it that I get a UnicodeEncodeError in the print statement? How do I handle characters that can't be encoded? Thanks!
The error is raised by print, not by the decode: sys.stdout encodes output with your console's encoding (the Windows 'charmap' codec here), which has no mapping for '\u2715'. The decode('utf-8', 'ignore') already did its job on the incoming bytes.
The recommended approach is to set PYTHONIOENCODING=UTF-8 in your environment, or to encode each string before printing:
print(data_str.encode("utf-8"))
For file writing, set the encoding for the file when you open it:
file = open("/temp/test.txt", "w", encoding="UTF-8")
file.write('\u2715')
file.close()
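On Python 3.7 and later there is also sys.stdout.reconfigure, which switches the stream's encoding in-process (an alternative I'm adding, not from the original answer):

import sys

sys.stdout.reconfigure(encoding="utf-8")  # Python 3.7+
print('\u2715')  # no more UnicodeEncodeError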
