How to convert ISO 8859-1 text to plain letters using Python - python-3.x

I'm trying to clean my SQLite database using Python. First I loaded it using this code:
import sqlite3, pandas as pd
con = sqlite3.connect("DATABASE.db")
df = pd.read_sql_query("SELECT TITLE from DOCUMENT", con)
That gave me the dirty strings; for example, from "Conciliaci\363n" I want to get "Conciliacion". I used this code:
df['TITLE'] = df['TITLE'].apply(lambda x: x.decode('iso-8859-1').encode('utf8'))
I got b'' in blank cells, and I still got 'Conciliaci\\363n'. So maybe I'm doing it wrong. How can I solve this problem? Thanks in advance.

It's unclear exactly what your data contains, but if your string holds a literal backslash followed by digits, like this:
>>> s = r"Conciliaci\363n" # A raw string to make a literal escape code
>>> s
'Conciliaci\\363n' # the debug display (repr) shows an escaped backslash
>>> print(s)
Conciliaci\363n # printing shows the raw content, backslash and all
Then this will decode it correctly:
>>> s.encode('ascii').decode('unicode-escape') # convert to byte string, then decode
'Conciliación'
If you want to lose the accent mark as your question shows, then decomposing the Unicode string, converting to ASCII ignoring errors, then converting back to a Unicode string will do it:
>>> s2 = s.encode('ascii').decode('unicode-escape')
>>> s2
'Conciliación'
>>> import unicodedata as ud
>>> ud.normalize('NFD',s2) # Make Unicode decomposed form
'Conciliación' # The ó is now an ASCII 'o' and a combining accent
>>> ud.normalize('NFD',s2).encode('ascii',errors='ignore').decode('ascii')
'Conciliacion' # accent isn't ASCII, so is removed
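Putting it together for the DataFrame in the question, a minimal sketch (it reuses df and the TITLE column from the question and assumes the stored values are plain ASCII containing literal backslash escapes; rows that already hold real accented characters would need extra handling):
import unicodedata as ud

def clean_title(value):
    # Decode literal escape sequences such as \363, then drop the accents.
    if not isinstance(value, str) or not value:
        return value  # leave empty/missing cells alone instead of producing b''
    decoded = value.encode('ascii').decode('unicode-escape')
    return ud.normalize('NFD', decoded).encode('ascii', errors='ignore').decode('ascii')

df['TITLE'] = df['TITLE'].apply(clean_title)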

Related

How to customize unidecode?

I'm using the unidecode module to replace non-ASCII characters. However, there are some characters, for example Greek letters and symbols like Å, that I want to preserve. How can I achieve this?
For example,
from unidecode import unidecode
test_str = 'α, Å ©'
unidecode(test_str)
gives the output a, A (c), while what I want is α, Å (c).
Run unidecode on each character individually, and keep a whitelist set of characters that bypass unidecode.
>>> import string
>>> import unidecode
>>> whitelist = set(string.printable + 'αÅ')
>>> test_str = 'α, Å ©'
>>> ''.join(ch if ch in whitelist else unidecode.unidecode(ch) for ch in test_str)
'α, Å (c)'
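If you need this in more than one place, the same idea fits in a small helper (selective_unidecode is a hypothetical name, not part of the unidecode package):
import string
import unidecode

def selective_unidecode(text, keep='αÅ'):
    # Transliterate every character except those in the whitelist.
    whitelist = set(string.printable) | set(keep)
    return ''.join(ch if ch in whitelist else unidecode.unidecode(ch) for ch in text)

print(selective_unidecode('α, Å ©'))  # α, Å (c)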

How to remove both number and text inside parentheses using regex in python?

In the following text, I want to remove everything inside the parentheses, including numbers and strings. I used the following syntax but got 22701 instead of 2270. What would be a way to get only 2270 using re.sub? Thanks
import regex as re
import numpy as np
import pandas as pd
text = "2270 (1st xyz)"
text_new = re.sub(r"[a-zA-Z()\s]","",text)
text_new
Does the text always follow the same pattern? Try:
import re
text = "2270 (1st xyz)"
text_new = re.sub(r"\s\([^)]*\)","",text)
print(text_new)
Output:
2270
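Since the question already imports pandas, the same substitution can be applied to a whole column with Series.str.replace; a sketch with a made-up DataFrame:
import pandas as pd

df = pd.DataFrame({"price": ["2270 (1st xyz)", "315 (2nd abc)"]})
# regex=True makes str.replace treat the pattern as a regular expression
df["price"] = df["price"].str.replace(r"\s\([^)]*\)", "", regex=True)
print(df["price"].tolist())  # ['2270', '315']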
Simply use the regex pattern \(.*?\):
import re
text = "2270 (1st xyz)"
text_new = re.sub("\(.*?\)", "", text)
print(text_new)
Output:
2270
Explanation of the pattern \(.*?\):
The \ before each parenthesis tells re to treat the parenthesis as a literal character, since parentheses are special characters in re by default.
The . matches any character except the newline character.
The * matches zero or more occurrences of the preceding pattern.
The ? tells re to match as little text as possible, making the match non-greedy.
Note the trailing space in the output. To remove it, simply add it to the pattern:
import re
text = "2270 (1st xyz)"
text_new = re.sub(" \(.*?\)", "", text)
print(text_new)
Output:
2270
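To see why the ? matters, compare the greedy and non-greedy forms on a string with two parenthesised groups (a made-up example, not from the question):
import re

text = "2270 (1st xyz) plus 315 (2nd abc)"
print(re.sub(r"\(.*\)", "", text))   # greedy: removes from the first ( to the last ), leaving '2270 '
print(re.sub(r"\(.*?\)", "", text))  # non-greedy: removes each group separately, leaving '2270  plus 315 '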

Why are some emojis not converted back into their representation?

I am working on an emoji detection module. For some emojis I am observing weird behavior: after encoding them to UTF-8 and decoding again, they are not shown in their original form. I need their exact colored representation to be sent in the API response instead of a Unicode-escaped string. Any leads?
In [1]: x = "example1: 🤭 and example2: 😁 and example3: 🥺"
In [2]: x.encode('utf8')
Out[2]: b'example1: \xf0\x9f\xa4\xad and example2: \xf0\x9f\x98\x81 and example3: \xf0\x9f\xa5\xba'
In [3]: x.encode('utf8').decode('utf8')
Out[3]: 'example1: \U0001f92d and example2: 😁 and example3: \U0001f97a'
In [4]: print( x.encode('utf8').decode('utf8') )
example1: 🤭 and example2: 😁 and example3: 🥺
Update 1:
This example should make it clearer. Here, two emojis are rendered when I send the Unicode escape string, but the third failed to convert to the exact emoji. What should I do in such a case?
'\U0001f92d' == '🤭' is True. It is an escape code, but it is still the same character: two ways of displaying/entering it. The former is the repr() of the string; printing calls str(). Example:
>>> s = '🤭'
>>> print(repr(s))
'\U0001f92d'
>>> print(str(s))
🤭
>>> s
'\U0001f92d'
>>> print(s)
🤭
When Python generates the repr() it uses an escape code representation if it thinks the display can't handle the character. The content of the string is still the same...the Unicode code point.
It's a debugging feature. For example, is the whitespace made of spaces or tabs? The repr() of the string makes it clear by showing \t as an escape code.
>>> s = 'a\tb'
>>> print(s)
a b
>>> s
'a\tb'
As to why an escape code is used for one emoji and not another, it depends on the version of Unicode supported by the version of Python used.
The Python build used here supports Unicode 9.0, and one of your emoji isn't defined at that version level:
>>> import unicodedata as ud
>>> ud.unidata_version
'9.0.0'
>>> ud.name('😁')
'GRINNING FACE WITH SMILING EYES'
>>> ud.name('🤭')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: no such name
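As for sending the emoji in an API response: the escapes usually come from repr() or from JSON serialisation with ASCII escaping enabled, not from the string itself. A sketch with the standard json module (assuming the response body is built with json.dumps):
import json

payload = {"text": "example1: 🤭 and example2: 😁"}
print(json.dumps(payload))                      # non-ASCII escaped as \ud83e\udd2d ... by default
print(json.dumps(payload, ensure_ascii=False))  # keeps the raw UTF-8 emoji in the response body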

Python: How to Remove a Range of Characters \x91\x87\xf0\x9f\x91\x87 from a File

I have this file with some lines that contain some unicode literals like:
"b'Who\xe2\x80\x99s he?\n\nA fan rushed the field to join the Cubs\xe2\x80\x99 celebration after Jake Arrieta\xe2\x80\x99s no-hitter."
I want to remove those \xe2\x80\x99-like characters.
I can remove them if I declare a string that contains these characters but my solutions don't work when reading from a CSV file. I used pandas to read the file.
SOLUTIONS TRIED
1. Regex
2. Decoding and encoding
3. Lambda
Regex Solution
line = "b'Who\xe2\x80\x99s he?\n\nA fan rushed the field to join the Cubs\xe2\x80\x99 celebration after Jake Arrieta\xe2\x80\x99s no-hitter."
code = (re.sub(r'[^\x00-\x7f]',r'', line))
print (code)
LAMBDA SOLUTION
stripped = lambda s: "".join(i for i in s if 31 < ord(i) < 127)
code2 = stripped(line)
print(code2)
ENCODING SOLUTION
code3 = (line.encode('ascii', 'ignore')).decode("utf-8")
print(code3)
HOW FILE WAS READ
df = pandas.read_csv('file.csv', encoding="utf-8")
for index, row in df.iterrows():
    print(stripped(row['text']))
    print(re.sub(r'[^\x00-\x7f]', r'', row['text']))
    print(row['text'].encode('ascii', 'ignore').decode("utf-8"))
SUGGESTED METHOD
df = pandas.read_csv('file.csv', encoding="utf-8")
for index, row in df.iterrows():
    en = row['text'].encode()
    print(type(en))
    newline = en.decode('utf-8')
    print(type(newline))
    print(repr(newline))
    print(newline.encode('ascii', 'ignore'))
    print(newline.encode('ascii', 'replace'))
Your string is valid UTF-8, so it can be decoded directly to a Python string. You can then encode it to ASCII with str.encode(), which can ignore non-ASCII characters with errors='ignore' (or replace them with errors='replace'):
line_raw = b'Who\xe2\x80\x99s he?'
line = line_raw.decode('utf-8')
print(repr(line))
print(line.encode('ascii', 'ignore'))
print(line.encode('ascii', 'replace'))
'Who’s he?'
b'Whos he?'
b'Who?s he?'
To come back to your original question, your third method was correct; the operations were just in the wrong order (this assumes line holds the raw bytes, not an already-decoded string):
code3 = line.decode("utf-8").encode('ascii', 'ignore')
print(code3)
To finally provide a working pandas example, here you go:
import pandas
df = pandas.read_csv('test.csv', encoding="utf-8")
for index, row in df.iterrows():
    print(row['text'].encode('ascii', 'ignore'))
There is no need to do decode('utf-8'), because pandas does that for you.
Finally, if you have a Python string that contains non-ASCII characters, you can just strip them by doing
text = row['text'].encode('ascii', 'ignore').decode('ascii')
This converts the text to ascii bytes, strips all the characters that cannot be represented as ascii, and then converts back to text.
You should look up the difference between Python 3 strings and bytes; that should clear things up for you, I hope.
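A minimal illustration of that str/bytes difference (not part of the original answer):
text = 'Who’s he?'            # str: a sequence of Unicode code points
data = text.encode('utf-8')   # bytes: the UTF-8 encoding of that text
print(type(text), len(text))  # <class 'str'> 9
print(type(data), len(data))  # <class 'bytes'> 11 -- the ’ takes three bytes in UTF-8
print(data.decode('utf-8') == text)  # True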

Getting a ValueError: invalid literal for int() with base 10: '56,990'

So I am trying to scrape a website containing the price of a laptop. However, the price is a string, and for comparison purposes I need to convert it to an int. But when I do, I get a ValueError: invalid literal for int() with base 10: '56,990'
Below is the code:
from bs4 import BeautifulSoup
import requests
r = requests.get("https://www.flipkart.com/apple-macbook-air-core-i5-5th-gen-8-gb-128-gb-ssd-mac-os-sierra-mqd32hn-a-a1466/p/itmevcpqqhf6azn3?pid=COMEVCPQBXBDFJ8C&srno=s_1_1&otracker=search&lid=LSTCOMEVCPQBXBDFJ8C5XWYJP&fm=SEARCH&iid=2899998f-8606-4b81-a303-46fd62a7882b.COMEVCPQBXBDFJ8C.SEARCH&qH=9e3635d7234e9051")
data = r.text
soup = BeautifulSoup(data,"lxml")
data = soup.find('div', {"class": "_1vC4OE _37U4_g"})
cost = data.text[1:].strip()
print(int(cost))
PS: I used text[1:] to remove the currency character.
I get the error on the last line. Basically I need to get the int value of the cost.
The value has a comma in it, so you need to replace the comma with an empty string before converting it to an integer.
print(int(cost.replace(',','')))
Python does not understand "," group separators in integers, so you'll need to remove them. Try:
cost = data.text[1:].strip().translate(str.maketrans('', '', ','))
Rather than invent a new solution for every character you don't want (strip() function for whitespace, [1:] index for the currency, something else for the digit separator) consider a single solution to gather what you do want:
>>> import re
>>> text = "\u20B956,990\n"
>>> cost = re.sub(r"\D", "", text)
>>> print(int(cost))
56990
The re.sub() replaces anything that isn't a digit with nothing.
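Wrapped up as a small helper (parse_price is a hypothetical name; it assumes whole-number prices, so any decimal part would be folded into the digits):
import re

def parse_price(raw):
    # Keep only the digits: '₹56,990\n' -> 56990
    return int(re.sub(r"\D", "", raw))

print(parse_price("\u20B956,990\n"))  # 56990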
