I've got two very important, user-entered information columns in my data frame. They are mostly cleaned up except for one issue: the spelling and the way names are written differ. For example, I have several entries for one name: "red rocks canyon", "redrcks", "redrock canyon", "red rocks canyons". This data set is too large for me to clean up manually (2 million entries). Are there any strategies to clean these features up with code?
I would look into doing phonetic string matching here. The basic idea behind this approach is to obtain a phonetic encoding for each entered string, and then group spelling variations by their encoding. Then, you could choose the most frequent variation in each group to be the "correct" spelling.
There are several different variations on phonetic encoding, and a great package in Python for trying some of them out is jellyfish. Here is an example of how to use it with the Soundex encoding:
import jellyfish
import pandas as pd
data = pd.DataFrame({
    "name": [
        "red rocks canyon",
        "redrcks",
        "redrock canyon",
        "red rocks canyons",
        "bosque",
        "bosque escoces",
        "bosque escocs",
        "borland",
        "borlange"
    ]
})
data["soundex"] = data.name.apply(lambda x: jellyfish.soundex(x))
print(data.groupby("soundex").agg({"name": lambda x: ", ".join(x)}))
This prints:
name
soundex
B200 bosque
B222 bosque escoces, bosque escocs
B645 borland, borlange
R362 red rocks canyon, redrcks, redrock canyon, red...
This definitely won't be perfect and you'll have to be careful as it might group things too aggressively, but I hope it gives you something to try!
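If you want to take the last step and pick the most frequent raw spelling in each group as the "correct" one, here is a minimal sketch using the data frame from above (the name_clean column is just an assumed name for the result):

counts = data.groupby(["soundex", "name"]).size().reset_index(name="n")

# Keep the most frequent spelling per soundex group as the canonical form
canonical = counts.sort_values("n", ascending=False).drop_duplicates("soundex")
mapping = dict(zip(canonical["soundex"], canonical["name"]))

# Map every row to its group's canonical spelling
data["name_clean"] = data["soundex"].map(mapping)

Everything stays vectorized in pandas, so this should remain reasonably fast even on 2 million rows.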
I'm trying to process text data (Twitter tweets) with PySpark. Emojis and special characters are being read correctly, but "\n" and "&" appear to be escaped and Spark does not recognize them (probably others too). One example tweet in my Spark DataFrame looks like this:
"Hello everyone\n\nHow is it going? 😉 Take care & enjoy"
I would like Spark to read them correctly. The files are stored as parquet and I'm reading them like this:
tweets = spark.read.format('parquet')\
    .option('header', 'True')\
    .option('encoding', 'utf-8')\
    .load(path)
Below are some sample input data, which I took from the original JSONL files (I stored the data as parquet later).
"full_text": "RT #OurWarOnCancer: Where is our FEDERAL vaccination
education campaign for HPV?! Where is our FEDERAL #lungcancer
screening program?! (and\u2026"
"full_text": "\u2b55\ufe0f#HPV is the most important cause of
#CervicalCancer But it doesn't just cause cervical cancer (see the figure\ud83d\udc47) \n\u2b55\ufe0fThat means they can be PREVENTED"
Reading directly from the JSONL files results in the same recognition problem.
tweets = spark.read\
    .option('encoding', 'utf-8')\
    .json(path)
How can I get Spark to recognize them correctly? Thank you in advance.
The code below might help solve your problem.
Input taken:
"Hello everyone\n\nHow is it going? 😉 Take care & enjoy"
"full_text": "RT #OurWarOnCancer: Where is our FEDERAL vaccination education campaign for HPV?! Where is our FEDERAL #lungcancer screening program?! (and\u2026 &"
"full_text": "\u2b55\ufe0f#HPV is the most important cause of #CervicalCancer But it doesn't just cause cervical cancer (see the figure\ud83d\udc47) \n\u2b55\ufe0fThat means they can be PREVENTED #theNCI #NCIprevention #AmericanCancer #cancereu #uicc #IARCWHO #EuropeanCancer #KanserSavascisi #AUTF_DEKANLIK #OncoAlert"
Code to solve the problem:
from pyspark.sql.functions import regexp_replace

df = spark.read.csv("file:///home/sathya/Desktop/stackoverflo/raw-data/input.tweet")
df1 = df.withColumn("cleandata", regexp_replace('_c0', '&|\\\\n', ''))
df1.select("cleandata").show(truncate=False)
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|cleandata |
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|Hello everyoneHow is it going? 😉 Take care & enjoy |
|"full_text": "RT #OurWarOnCancer: Where is our FEDERAL vaccination education campaign for HPV?! Where is our FEDERAL #lungcancer screening program?! (and\u2026 &" |
|"full_text": "\u2b55\ufe0f#HPV is the most important cause of #CervicalCancer But it doesn't just cause cervical cancer (see the figure\ud83d\udc47) \u2b55\ufe0fThat means they can be PREVENTED #theNCI #NCIprevention #AmericanCancer #cancereu #uicc #IARCWHO #EuropeanCancer #KanserSavascisi #AUTF_DEKANLIK #OncoAlert"|
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
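If you would rather keep reading the data from parquet instead of re-reading it as CSV, the same idea should work directly on the text column. This is only a sketch: the column name full_text is an assumption based on your samples, and the pattern only strips the literal \n sequences.

from pyspark.sql.functions import regexp_replace

# Assumes the parquet data has a column named "full_text"
tweets = spark.read.parquet(path)
tweets_clean = tweets.withColumn(
    "full_text",
    regexp_replace("full_text", "\\\\n", " ")  # replace literal backslash-n with a space
)
tweets_clean.select("full_text").show(truncate=False)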
I'm going through Wes McKinney's Python for Data Analysis, 2nd Edition, and in Chapter 2 he has several examples based on merging three .dat files about movie reviews.
I can get two of the three data files to work (users and reviews), but the third one (movie titles) I cannot get to work, and I can't figure out what to do.
Here's the code:
import pandas as pd

mnames = ['movie_id', 'title', 'genres']
movies = pd.read_table('movies.dat', sep='::', header=None, engine='python', names=mnames)
print(movies[:5])
And here is what the problem looks like: the separator doesn't seem to be parsed correctly, so the columns don't line up. I've tried recreating the file and comparing it to the other two files, which are working, but they look exactly the same.
Here's a sample of the data:
1::Toy Story (1995)::Animation|Children's|Comedy
2::Jumanji (1995)::Adventure|Children's|Fantasy
3::Grumpier Old Men (1995)::Comedy|Romance
4::Waiting to Exhale (1995)::Comedy|Drama
5::Father of the Bride Part II (1995)::Comedy
6::Heat (1995)::Action|Crime|Thriller
7::Sabrina (1995)::Comedy|Romance
8::Tom and Huck (1995)::Adventure|Children's
9::Sudden Death (1995)::Action
10::GoldenEye (1995)::Action|Adventure|Thriller
11::American President, The (1995)::Comedy|Drama|Romance
12::Dracula: Dead and Loving It (1995)::Comedy|Horror
13::Balto (1995)::Animation|Children's
14::Nixon (1995)::Drama
I'd like to be able to read this file properly so I can join it to the other two example files and keep learning Pandas :)
Try adding encoding='UTF-16' to pd.read_table().
(Sorry, not enough reputation to add a comment.)
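Applied to your snippet, that would look something like this (assuming the file really is UTF-16 encoded):

import pandas as pd

mnames = ['movie_id', 'title', 'genres']
# encoding='UTF-16' is the suggested fix; swap in another encoding if this doesn't help
movies = pd.read_table('movies.dat', sep='::', header=None, engine='python',
                       names=mnames, encoding='UTF-16')
print(movies[:5])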
I have a dictionary of famous people's names, sorted by their initials. I want to convert these names into their respective Wikipedia page titles. These are the same for the first three names in this example, but Alexander Bell gets correctly converted to Alexander Graham Bell after running this code.
The algorithm works, although it took about an hour to do all the 'AA' names, and I am hoping for it to do this all the way up to 'ZZ'.
Is there any optimisation I can do on this? For example, I saw something about batch requests, but I am not sure if it applies to my algorithm.
Or is there a more efficient method that I could use to get this same information?
Thanks.
import wikipedia
PeopleDictionary = {'AA':['Amy Adams', 'Aaron Allston'], 'AB':['Alia Bhatt', 'Alexander Bell']}
for key, val in PeopleDictionary.items():
    for val in range(len(PeopleDictionary[key])):
        Name_URL_All = wikipedia.search(PeopleDictionary[key][val])
        if Name_URL_All:
            Name_URL = Name_URL_All[0]
            PeopleDictionary[key][val] = Name_URL
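For concreteness, here is a hedged sketch of one possible optimization: since most of the time is spent waiting on network requests, the lookups can run concurrently with a thread pool. The worker count is arbitrary and this is untested against the Wikipedia API's rate limits.

import wikipedia
from concurrent.futures import ThreadPoolExecutor

def lookup(name):
    # Return the top Wikipedia search result, or the original name if nothing comes back
    results = wikipedia.search(name)
    return results[0] if results else name

PeopleDictionary = {'AA': ['Amy Adams', 'Aaron Allston'], 'AB': ['Alia Bhatt', 'Alexander Bell']}

with ThreadPoolExecutor(max_workers=8) as pool:
    for key, names in PeopleDictionary.items():
        # pool.map preserves order, so results line up with the original names
        PeopleDictionary[key] = list(pool.map(lookup, names))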
As you all know, a person's name is normally at the top of their resume, so I did NER (named entity recognition) tagging on CVs using the spaCy library, and then I extracted the first PERSON tag (hoping it would be a human name). Sometimes this works fine for me, but sometimes it gives me things that are not names (because spaCy doesn't even recognize some names with any NER tag), so it returns whatever else it recognized as a PERSON, which may be something like 'Curriculum Vitae'. Obviously I don't want that.
Here is the code I was talking about:
import spacy
import docx2txt
nlp = spacy.load('en_default')
my_text = docx2txt.process("/home/waqar/CV data/Adnan.docx")
doc_2 = nlp(my_text)
for ent in doc_2.ents:
    if ent.label_ == "PERSON":
        print('{}'.format(ent))
        break
Is there any way I can add names to spaCy's NER for the PERSON tag, so that it will be able to recognize human names written in CVs?
I think my logic is fine, but I am missing something.
I would be very thankful for any help, as I am a student and a beginner in Python.
Output
Abdul Ahad Ghous
But sometimes it gives me output like the following, because NER recognizes it as a PERSON and doesn't give any tag at all to the human name in this CV:
Curriculum Vitae
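For concreteness, here is a hedged sketch of one way known names can be added as PERSON patterns. It assumes a recent spaCy version with the EntityRuler component and the en_core_web_sm model, which differ from the en_default model loaded above; the pattern name is just an example from the question.

import spacy

nlp = spacy.load("en_core_web_sm")  # model name is an assumption; use whichever model you have

# Add an EntityRuler before the statistical NER so its patterns take precedence
ruler = nlp.add_pipe("entity_ruler", before="ner")
ruler.add_patterns([
    {"label": "PERSON", "pattern": "Abdul Ahad Ghous"},  # example name
])

doc = nlp("Curriculum Vitae\nAbdul Ahad Ghous\nSoftware Engineer")
for ent in doc.ents:
    if ent.label_ == "PERSON":
        print(ent.text)
        break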
Stack Overflow implemented its "Related Questions" feature by taking the title of the current question being asked and removing from it the 10,000 most common English words according to Google. The remaining words are then submitted as a fulltext search to find related questions.
How do I get such a list of the most common English words? Or most common words in other languages? Is this something I can just get off the Google website?
A word frequency list is what you want. You can also make your own, or customize one for use within a particular domain, and it is a nice way to become familiar with some good libraries. Start with some text such as that discussed in this question, then try out some variants of this back-of-the-envelope script:
from collections import defaultdict
import os
import string

from nltk.stem.porter import PorterStemmer

ps = PorterStemmer()
word_count = defaultdict(int)
source_directory = '/some/dir/full/of/text'

for root, dirs, files in os.walk(source_directory):
    for item in files:
        current_text = os.path.join(root, item)
        with open(current_text, 'r') as f:
            words = f.read().split()
        for word in words:
            # Lowercase, strip surrounding punctuation, and stem each word before counting
            entry = ps.stem(word.strip(string.punctuation).lower())
            word_count[entry] += 1

results = [[word_count[i], i] for i in word_count]
print(sorted(results))
Run on a couple of downloaded books, this gives the following for the most common words:
[2955, 'that'], [4201, 'in'], [4658, 'to'], [4689, 'a'], [6441, 'and'], [6705, 'of'], [14508, 'the']]
See what happens when you filter out the most common x, y, or z words from your queries, or leave them out of your text search index entirely. You might also get some interesting results if you include real-world data: for example, "community" and "wiki" are not likely to be common words on a generic list, but on SO that obviously wouldn't be the case, and you might want to exclude them.
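As a rough illustration of how such a list might then be used (a sketch reusing word_count, ps, and string from the script above, with an arbitrary cutoff), you could treat the top N stems as stopwords and strip them from a title before searching:

# Treat the N most frequent stems as stopwords (N = 10000 mirrors the approach in the question)
N = 10000
top = sorted(word_count.items(), key=lambda kv: kv[1], reverse=True)[:N]
stopwords = {word for word, count in top}

def query_terms(title):
    # Keep only words whose stems are not among the most common ones
    return [w for w in title.split()
            if ps.stem(w.strip(string.punctuation).lower()) not in stopwords]

print(query_terms("How do I get a list of the most common English words?"))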