Cleaning up text in an inconsistent format in Python - python-3.x

I'm trying to learn textual analysis in Python by analyzing Congressional testimony transcripts. I've been able to extract the text from the PDF transcripts, but the text is very dirty. I'm not sure how to clean it up. I'm trying to convert this:
`["HOUSE COMMITTEE ON BANKING, FINANCE AND URBAN AFFAIRSHENRY S. REUSS, Wisconsin, ChairmanTHOMAS'L.
ASHLEY, OhioWILLIAM S. MOORHEAD, PennsylvaniaFERNAND J. ST GERMAIN, Rhode Island", "HENRY B.
GONZALEZ, TexasPAUL NELSON, Clerk and Staff DirectorMICHAEL P. FLAHERTY, General CounselMERCER L.
JACKSON, Minority Staff Director(H)']`
to a list of names, states, and titles (e.g., "Chairman"), like this:
`[["HENRY S. REUSS", "Wisconsin", "Chairman"],
["THOMAS'L. ASHLEY", "Ohio"],
["WILLIAM S. MOORHEAD", "Pennsylvania"],
["FERNAND J. ST GERMAIN", "Rhode Island"],
["HENRY B. GONZALEZ", "Texas"],
["PAUL NELSON", "Clerk and Staff Director"],
["MICHAEL P. FLAHERTY", "General Counsel"],
["MERCER L. JACKSON", "Minority Staff Director(H)"]]`
Note that some individuals have a state and title associated with their name, while some have just a state or title associated with their name. Is this type of conversion doable? Any recommendations on how to approach this? Any help will be greatly appreciated. Cheers!
-Mike
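One possible approach, sketched below: since each new all-caps name starts immediately after the lowercase tail of the previous member's state or title, a regex can insert separators at those boundaries. This is only a sketch under that assumption (`parse_roster` is a made-up name); the all-caps committee header glued to the first name would still need to be stripped separately.

```python
import re

def parse_roster(chunk):
    # Assumption: each new ALL-CAPS name begins right after a lowercase
    # letter (the end of the previous state or title), so we can insert
    # a separator at every lowercase -> run-of-capitals boundary.
    split = re.sub(r"(?<=[a-z])(?=[A-Z]{2,})", "\n", chunk)
    rows = []
    for line in split.splitlines():
        # Fields are comma-separated: NAME, State[, Title] or NAME, Title.
        parts = [p.strip() for p in line.split(",") if p.strip()]
        if parts:
            rows.append(parts)
    return rows

sample = ("HENRY B. GONZALEZ, TexasPAUL NELSON, Clerk and Staff Director"
          "MICHAEL P. FLAHERTY, General Counsel")
print(parse_roster(sample))
```

A list of the 50 state names would let you decide whether the second field is a state or a title, and the header ("HOUSE COMMITTEE ON ... AFFAIRS") has to be removed before parsing since it is all caps too.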

Related

python spaCy custom NER – how to prepare multi-word entities?

I'm preparing a custom Named Entity Recognition model using a blank spaCy model. I use only one entity: Brand (we can call it 'ORG', as in Organisation). I have short texts with ORGs and have prepared data like this (but I can change the format):
train_data = [
    ('First text in string with Name I want', {'entities': [(START, END, 'ORG')]}),
    ('Second text with Name and Name2', {'entities': [(START, END, 'ORG'), (START2, END2, 'ORG')]})
]
START and END are the start and end indexes of the brand name in the text, of course.
This is working well, but...
The problem I have is how to prepare entities for brands made of two (or more) words.
Let's say the brand name is the full name of a company. How should I prepare the entity?
Consider the tuple itself for a single text:
text = 'Third text with Brand Name'
company = 'Brand Name'
Can I treat company as one word?
('Third text with Brand Name', {"entities": [(16, 26, 'ORG')]})
Or as two separate brands, 'Brand' and 'Name'? (That will not be useful in my case when using the model later.)
('Third text with Brand Name', {"entities": [(16, 21, 'ORG'), (22, 26, 'ORG')]})
Or should I use a different labeling format, e.g. BIO?
So 'Brand' would be B-ORG and 'Name' would be I-ORG?
If so, can I prepare it like this for spaCy:
('Third text with Brand Name', {"entities": [(16, 21, 'B-ORG'), (22, 26, 'I-ORG')]})
or should I change the format of train_data, because I also need the 'O' from BIO?
How? Like this?
('Third text with Brand Name', {"entities": ["O", "O", "O", "B-ORG", "I-ORG"]})
The question is about the format of train_data for 'Third text with Brand Name' - how to label the entity. Once I have the format, I can handle the code. :)
The same question applies to entities of three or more words. :)
You can just provide the start and end offsets for the whole entity. You describe this as "treating it as one word", but the character offsets don't have any direct relation to tokenization - they won't affect tokenizer output.
You will get an error if the start and end of your entity don't match token boundaries, but it doesn't matter if the entity is one token or many.
I recommend you take a look at the training data section in the spaCy docs. Your specific question isn't answered explicitly, but that's only because multi-token entities don't require special treatment; the examples there include multi-token entities.
Regarding BIO tagging, for details on how to use it with spaCy you can see the docs for spacy convert.
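To make the offset point concrete, here is a minimal sketch (plain Python, no spaCy import needed): the entity span simply covers the whole multi-word name, and computing the offsets with str.find keeps them aligned with the text.

```python
text = "Third text with Brand Name"
company = "Brand Name"

# The span covers the whole multi-word name; multi-token entities
# need no special treatment in this format.
start = text.find(company)
end = start + len(company)
train_example = (text, {"entities": [(start, end, "ORG")]})

assert text[start:end] == company
print(train_example)  # ('Third text with Brand Name', {'entities': [(16, 26, 'ORG')]})
```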

Sentence segmentation with trailing whitespace in Stanza (Stanford CoreNLP)

Using the Stanza library for sentence segmentation:
import stanza
stanza.download('en')
snlp = stanza.Pipeline(lang="en",processors='tokenize')
doc = snlp(text)
doc_sents = [sentence.text for sentence in doc.sentences]
Output:
["Arthur's Magazine (1844–1846) was an American literary periodical published in Philadelphia in the 19th century.",
'Edited by T.S. Arthur, it featured work by Edgar A. Poe, J.H. Ingraham, Sarah Josepha Hale, Thomas G. Spear, and others.',
'In May 1846 it was merged into "Godey\'s Lady\'s Book".',
"First for Women is a woman's magazine published by Bauer Media Group in the USA.",
'The magazine was started in 1989.',
'It is based in Englewood Cliffs, New Jersey.',
'In 2011 the circulation of the magazine was 1,310,696 copies.']
But then I'm losing the trailing whitespace. Is there a way to keep it, similar to spaCy's
[sent.text_with_ws for sent in doc.sents]
Any workaround will help as well. I need to keep the original whitespace to handle indexes that were originally computed on the full text:
["Arthur's Magazine (1844–1846) was an American literary periodical published in Philadelphia in the 19th century. ",
'Edited by T.S. Arthur, it featured work by Edgar A. Poe, J.H. Ingraham, Sarah Josepha Hale, Thomas G. Spear, and others. ',
'In May 1846 it was merged into "Godey\'s Lady\'s Book". ',
"First for Women is a woman's magazine published by Bauer Media Group in the USA. ",
'The magazine was started in 1989. ',
'It is based in Englewood Cliffs, New Jersey. ',
'In 2011 the circulation of the magazine was 1,310,696 copies.']
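One possible workaround, sketched below and independent of Stanza's API: since the returned sentences are verbatim substrings of the input, you can re-locate each one in the original text and extend it up to the start of the next sentence, which preserves the original trailing whitespace. (`with_trailing_ws` is a made-up helper name.)

```python
def with_trailing_ws(text, sentences):
    # Locate each sentence in the original text (in order), then let
    # each returned slice run up to the start of the next sentence so
    # the original trailing whitespace survives.
    starts, cursor = [], 0
    for s in sentences:
        cursor = text.index(s, cursor)
        starts.append(cursor)
        cursor += len(s)
    ends = starts[1:] + [len(text)]
    return [text[a:b] for a, b in zip(starts, ends)]

text = "First sentence.  Second one. Third."
sents = ["First sentence.", "Second one.", "Third."]
print(with_trailing_ws(text, sents))
# -> ['First sentence.  ', 'Second one. ', 'Third.']
```

You would feed it `doc_sents` from the snippet above together with the original `text`.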

Removing punctuation except for apostrophes AND intra-word dashes in R

I know how to separately remove punctuation and keep apostrophes:
gsub( "[^[:alnum:]']", " ", db$text )
or how to keep intra-word dashes with the tm package:
removePunctuation(db$text, preserve_intra_word_dashes = TRUE)
but I cannot find a way to do both at the same time. For example if my original sentence is:
"Interested in energy/the environment/etc.? Congrats to our new e-board! Ben, Nathan, Jenny, and Adam, y'all are sure to lead the club in a great direction next year! #obama #swag"
I would like it to be:
"Interested in energy the environment etc Congrats to our new e-board Ben Nathan Jenny and Adam y'all are sure to lead the club in a great direction next year obama swag"
Of course, there will be extra white spaces, but I can remove them later.
I will be grateful for your help.
Use character classes
gsub("[^[:alnum:]['-]", " ", db$text)
## "Interested in energy the environment etc Congrats to our new e-board Ben Nathan Jenny and Adam y'all are sure to lead the club in a great direction next year obama swag"
I like David Arenberg's answer. If you need another way, you could try:
library(qdap)
text <- "Interested in energy/the environment/etc.? Congrats to our new e-board! Ben, Nathan, Jenny, and Adam, y'all are sure to lead the club in a great direction next year! #obama #swag"
gsub("/", " ",strip(text, char.keep=c("-","/"), apostrophe.remove=F,lower.case=F))
#[1] "Interested in energy the environment etc Congrats to our new e-board Ben Nathan Jenny and Adam y'all are sure to lead the club in a great direction next year obama swag"
or
library(gsubfn)
clean(gsubfn("[[:punct:]]", function(x) ifelse(x=="'","'",ifelse(x=="-","-"," ")),text))
#[1] "Interested in energy the environment etc Congrats to our new e-board Ben Nathan Jenny and Adam y'all are sure to lead the club in a great direction next year obama swag"
clean is from qdap; it is used to remove escaped characters and extra spaces.
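For comparison, the same idea translates directly to Python's re module; a sketch (the `clean_text` helper is mine, and note it keeps all hyphens, not only intra-word ones):

```python
import re

def clean_text(text):
    # Keep letters, digits, apostrophes, hyphens, and spaces; turn
    # everything else into a space, then collapse the whitespace.
    # Caveat: this keeps standalone hyphens too, not just intra-word ones.
    kept = re.sub(r"[^A-Za-z0-9' -]", " ", text)
    return re.sub(r"\s+", " ", kept).strip()

text = ("Interested in energy/the environment/etc.? Congrats to our new "
        "e-board! Ben, Nathan, Jenny, and Adam, y'all are sure to lead the "
        "club in a great direction next year! #obama #swag")
print(clean_text(text))
```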

weighted RDF predicate (owl:ObjectProperty)

In RDF, a statement is represented with S, P, and O; in OWL, owl:ObjectProperty represents the predicate.
(S) (P) (O)
I like dog
<owl:Class rdf:about="Person" />
<owl:NamedIndividual rdf:about="I">
<rdf:type rdf:resource="Person"/>
<like rdf:resource="Dog"/>
</owl:NamedIndividual>
<owl:Class rdf:about="Pet" />
<owl:NamedIndividual rdf:about="Dog">
<rdf:type rdf:resource="Pet"/>
</owl:NamedIndividual>
<owl:ObjectProperty rdf:about="like">
<rdfs:domain>
<owl:Restriction>
<owl:onProperty rdf:resource="like"/>
<owl:someValuesFrom rdf:resource="Person"/>
</owl:Restriction>
</rdfs:domain>
<rdfs:range>
<owl:Restriction>
<owl:onProperty rdf:resource="like"/>
<owl:someValuesFrom rdf:resource="Pet"/>
</owl:Restriction>
</rdfs:range>
</owl:ObjectProperty>
But what about describing "the degree" to which I like dogs?
How can I give a property or value to a predicate?
One solution I got is to extend one (S,P,O) statement to 3 statements.
For example,
(S) (P) (O)
Person isSrcOf LikeRelation
Pet isTargetOf LikeRelation
LikeRelation hasValue [0~100]
It should work, but obviously it makes the ontology three times bigger :(
I appreciate any suggestion!
I wouldn't use RDF reification, not in this case and hardly in any case. RDF reification just makes things more complicated. As you noted, it will inflate your ontology; beyond that, it will also make your ontology very difficult to apply OWL reasoning to.
I've dealt with the same scenario you've presented, and most of the time I've ended up with the following design.
(S) (P) [ (P) (O) (P) (O)]
I like [ 'what I like' Dog , 'how much I like it' 'a lot']
Class: LikeLevel //it represents class of things a person likes with a degree factor.
ObjectProperty: likeObject
Domain: LikeLevel
Range: Pet //(or Thing)
DataProperty: likeScale
Domain: LikeLevel
Range: xsd:float //(or an enumeration class, e.g. 'nothing', 'a bit', 'very much', ...)
ObjectProperty: like
Domain: Person
Range: LikeLevel
If you want to represent some instance data with this model (in RDF/Turtle syntax):
:I :like [ a :LikeLevel;
:likeObject :dogs;
:likeScale 5.7] .
In this case I'm creating a blank node for the LikeLevel object, but you could create a named (ground) object as well, since sometimes you might want or need to avoid bNodes. In that case:
:I :like :a0001 .
:a0001 a :LikeLevel;
:likeObject :dogs;
:likeScale 5.7.
This design can be considered a light form of reification; the main difference from RDF reification is that it keeps the design in the user's model.
Your suggestion is a valid one; it is called reification and is the standard way of representing properties inherent to a relationship between two items in an ontology or RDF graph, where statements are made in a pairwise manner between items - it is a limitation of the data model itself that makes reification necessary sometimes.
If you're worried that reification will inflate your ontology, you could try the following alternatives, but they are generally less desirable and come with their own problems:
Create specific properties, such as somewhatLikes, doesntLike, loves; this may be suitable if you have a limited set of alternatives, and don't mind creating the extra properties. This becomes tedious and cumbersome (and I'd go so far as to suggest incorrect) if you intend to encode the 'degree of likeness' with an integer (or any wide range of alternatives) - following this approach, you'd have properties like likes0, likes1, ..., likes99, likes100. This method would also preclude querying, for example, all dogs that a person likes within a range of degree values, which is possible in SPARQL with the reification approach you've specified, but not with this approach.
Attach the likesDogs property to the Person instance, if the assertion can be made against the person onto all types/instances of Dog, and not individual instances. This will, of course, be dependent of what you're trying to capture here; if it's the latter, then this also won't be appropriate.
Good luck!
I think #msalvadores gets it wrong.
Let's forget about the dogs and likes. What we are really doing here is:
a x b
axb y c
axb z d
where axb is the identifier of the a x b statement; a, b, c, and d are subjects or objects, and x, y, z are predicates. What we need is to bind the a, x, b resources to the axb statement somehow.
This is how reification does it:
axb subject a
axb predicate x
axb object b
which I think is very easy to understand.
Let's check what msalvadores does:
:I :like [ a :LikeLevel;
:likeObject :dogs;
:likeScale 5.7] .
We can easily translate this into axb terms:
a x w
w type AxbSpecificObjectWrapper
w object b
w y c
which is just mimicking reification with lower-quality tools and more effort (you need a wrapper class and have to define an object property). The a x w statement doesn't make sense to me: I like a like level, whose object is dogs?
But what about describing "the degree" to which I like dogs?
There are two ways to do this, as far as I can tell with my very limited RDF knowledge.
1.) use reification
stmt_1
a LikeStatement
subject I
predicate like
object dogs
how_much "very much"
2.) instantiate a predicate class
I like_1 dogs
like_1
a Like
how_much "very much"
It depends on your taste and your actual vocab which one you choose.
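The two options can be written down as plain triple lists to compare their shape (a sketch; the identifiers stmt_1 and like_1 follow the examples above):

```python
# Option 1: RDF reification - four triples identify the statement,
# plus one carrying the degree.
reified = [
    ("stmt_1", "rdf:type", ":LikeStatement"),
    ("stmt_1", "rdf:subject", ":I"),
    ("stmt_1", "rdf:predicate", ":like"),
    ("stmt_1", "rdf:object", ":dogs"),
    ("stmt_1", ":how_much", "very much"),
]

# Option 2: instantiate the predicate - the relation itself gets an
# identifier that further properties can hang off.
instance = [
    (":I", ":like_1", ":dogs"),
    (":like_1", "rdf:type", ":Like"),
    (":like_1", ":how_much", "very much"),
]

print(len(reified), len(instance))  # option 2 needs fewer triples
```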
How can I give a property or value to a predicate?
I don't think you understand the difference between a predicate and a statement. A great example about it is available here: Simple example of reification in RDF
Tolkien wrote Lord of the rings
Wikipedia said that
The statement here:
that: [Tolkien, wrote, LotR]
If we are making statements about the statement, we write something like this:
[Wikipedia, said, that]
If we are making statements about the predicate then we write something like this:
[Wikipedia, said, wrote]
I think there is a big difference. Reification is about making statements about statements, not about predicates...
A sentence from Jena's documentation just caught my eye:
...OWL Full allows ... state the following .... construction:
<owl:Class rdf:ID="DigitalCamera">
<rdf:type owl:ObjectProperty />
</owl:Class>
..
Does OWL Full really allow an ObjectProperty to be a Class as well?
If an ObjectProperty could be a Class, and could have individuals, then I could describe a statement with
S_individual P_individual O_individual
and I could have further properties on P_individual. Is that right?
Or am I missing some point?
Since the following RDF is valid, a corresponding OWL should be achievable.
<rdf:RDF
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:j.0="http://somewhere/" >
<rdf:Description rdf:about="http://somewhere/Dog_my_dog">
<j.0:name>Lucky</j.0:name>
</rdf:Description>
<rdf:Description rdf:about="http://somewhere/like_dog">
<j.0:degree>80</j.0:degree>
</rdf:Description>
<rdf:Description rdf:about="http://somewhere/Cat_my_cat">
<j.0:name>Catty</j.0:name>
</rdf:Description>
<rdf:Description rdf:about="http://somewhere/like_cat">
<j.0:degree>86</j.0:degree>
</rdf:Description>
<rdf:Description rdf:about="http://somewhere/Person_I">
<j.0:name>Bob</j.0:name>
<j.0:like_dog rdf:resource="http://somewhere/Dog_my_dog"/>
<j.0:like_cat rdf:resource="http://somewhere/Cat_my_cat"/>
</rdf:Description>
</rdf:RDF>

Parse usable Street Address, City, State, Zip from a string [closed]

Problem: I have an address field from an Access database which has been converted to SQL Server 2005. The field contains the entire address in a single string. I need to parse the address's individual sections into their appropriate fields in a normalized table. I need to do this for approximately 4,000 records, and it needs to be repeatable.
Assumptions:
Assume a US address (for now)
Assume the input string will sometimes contain an addressee (the person being addressed) and/or a second street address (e.g., Suite B)
States may be abbreviated
The zip code could be the standard 5 digits or zip+4
There are typos in some instances
UPDATE: In response to the questions posed: standards were not universally followed; I need to store the individual values, not just geocode them; and "errors" means typos (corrected above).
Sample Data:
A. P. Croll & Son 2299 Lewes-Georgetown Hwy, Georgetown, DE 19947
11522 Shawnee Road, Greenwood DE 19950
144 Kings Highway, S.W. Dover, DE 19901
Intergrated Const. Services 2 Penns Way Suite 405 New Castle, DE 19720
Humes Realty 33 Bridle Ridge Court, Lewes, DE 19958
Nichols Excavation 2742 Pulaski Hwy Newark, DE 19711
2284 Bryn Zion Road, Smyrna, DE 19904
VEI Dover Crossroads, LLC 1500 Serpentine Road, Suite 100 Baltimore MD 21
580 North Dupont Highway Dover, DE 19901
P.O. Box 778 Dover, DE 19903
I've done a lot of work on this kind of parsing. Because there are errors, you won't get 100% accuracy, but there are a few things you can do to get most of the way there, and then do a visual BS test. Here's the general way to go about it. It's not written as code, because it would be pretty academic; there's no weirdness, just lots of string handling.
(Now that you've posted some sample data, I've made some minor changes)
Work backward. Start from the zip code, which will be near the end and in one of two known formats: XXXXX or XXXXX-XXXX. If this doesn't appear, you can assume you're in the city/state portion, below.
The next thing before the zip is going to be the state, either in two-letter format or spelled out as words. You know what these will be, too -- there are only 50 of them. Also, you could soundex the words to help compensate for spelling errors.
Before that is the city, and it's probably on the same line as the state. You could use a zip-code database to check the city and state based on the zip, or at least use it as a BS detector.
The street address will generally be one or two lines. The second line will generally be the suite number if there is one, but it could also be a PO box.
It's going to be near-impossible to detect a name on the first or second line, though if it's not prefixed with a number (or if it's prefixed with an "attn:" or "attention to:"), that can give you a hint as to whether it's a name or an address line.
I hope this helps somewhat.
I think outsourcing the problem is the best bet: send it to the Google (or Yahoo) geocoder. The geocoder returns not only the lat/long (which aren't of interest here), but also a rich parsing of the address, with fields filled in that you didn't send (including ZIP+4 and county).
For example, parsing "1600 Amphitheatre Parkway, Mountain View, CA" yields
{
"name": "1600 Amphitheatre Parkway, Mountain View, CA, USA",
"Status": {
"code": 200,
"request": "geocode"
},
"Placemark": [
{
"address": "1600 Amphitheatre Pkwy, Mountain View, CA 94043, USA",
"AddressDetails": {
"Country": {
"CountryNameCode": "US",
"AdministrativeArea": {
"AdministrativeAreaName": "CA",
"SubAdministrativeArea": {
"SubAdministrativeAreaName": "Santa Clara",
"Locality": {
"LocalityName": "Mountain View",
"Thoroughfare": {
"ThoroughfareName": "1600 Amphitheatre Pkwy"
},
"PostalCode": {
"PostalCodeNumber": "94043"
}
}
}
}
},
"Accuracy": 8
},
"Point": {
"coordinates": [-122.083739, 37.423021, 0]
}
}
]
}
Now that's parseable!
The original poster has likely long since moved on, but I took a stab at porting the Perl Geo::StreetAddress::US module used by geocoder.us to C#, dumped it on CodePlex, and think that people stumbling across this question in the future may find it useful:
US Address Parser
On the project's home page, I try to talk about its (very real) limitations. Since it is not backed by the USPS database of valid street addresses, parsing can be ambiguous and it can neither confirm nor deny the validity of a given address. It can only try to pull data out of the string.
It's meant for the case when you need to get a set of data mostly in the right fields, or want to provide a shortcut to data entry (letting users paste an address into a textbox rather than tabbing among multiple fields). It is not meant for verifying the deliverability of an address.
It doesn't attempt to parse out anything above the street line, but one could probably diddle with the regex to get something reasonably close--I'd probably just break it off at the house number.
I've done this in the past.
Either do it manually (build a nice GUI that helps the user do it quickly), or have it automated, check against a recent address database (you have to buy that), and handle errors manually.
Manual handling will take about 10 seconds each, meaning you can do 3600/10 = 360 per hour, so 4,000 should take approximately 11-12 hours. This will give you a high rate of accuracy.
For automation, you need a recent US address database and to tweak your rules against it. I suggest not getting fancy with the regex (hard to maintain long-term; too many exceptions). Go for a 90% match against the database and do the rest manually.
Do get a copy of Postal Addressing Standards (USPS) at http://pe.usps.gov/cpim/ftp/pubs/Pub28/pub28.pdf and notice it is 130+ pages long. Regexes to implement that would be nuts.
For international addresses, all bets are off; US-based workers would not be able to validate them.
Alternatively, use a data service. I have, however, no recommendations.
Furthermore: when you do send the stuff out in the mail (that's what it's for, right?), make sure you put "address correction requested" on the envelope (in the right place) and update the database. (We made a simple GUI for the front-desk person, the one who actually sorts through the mail, to do that.)
Finally, when you have scrubbed data, look for duplicates.
After the advice here, I have devised the following function in VB, which creates passable, though not always perfect, usable data (if a company name and a suite line are given, it combines the suite and city). Please feel free to comment/refactor/yell at me for breaking one of my own rules, etc.:
Public Function parseAddress(ByVal input As String) As Collection
    input = input.Replace(",", "")
    input = input.Replace("  ", " ") ' collapse double spaces
    Dim splitString() As String = Split(input)
    Dim streetMarker() As String = New String() {"street", "st", "st.", "avenue", "ave", "ave.", "blvd", "blvd.", "highway", "hwy", "hwy.", "box", "road", "rd", "rd.", "lane", "ln", "ln.", "circle", "circ", "circ.", "court", "ct", "ct."}
    Dim address1 As String
    Dim address2 As String = ""
    Dim city As String
    Dim state As String
    Dim zip As String
    Dim streetMarkerIndex As Integer
    zip = splitString(splitString.Length - 1).ToString()
    state = splitString(splitString.Length - 2).ToString()
    streetMarkerIndex = getLastIndexOf(splitString, streetMarker) + 1
    Dim sb As New StringBuilder
    For counter As Integer = streetMarkerIndex To splitString.Length - 3
        sb.Append(splitString(counter) + " ")
    Next counter
    city = RTrim(sb.ToString())
    Dim addressIndex As Integer = 0
    For counter As Integer = 0 To streetMarkerIndex
        If IsNumeric(splitString(counter)) _
                Or splitString(counter).ToString.ToLower = "po" _
                Or splitString(counter).ToString().ToLower().Replace(".", "") = "po" Then
            addressIndex = counter
            Exit For
        End If
    Next counter
    sb = New StringBuilder
    For counter As Integer = addressIndex To streetMarkerIndex - 1
        sb.Append(splitString(counter) + " ")
    Next counter
    address1 = RTrim(sb.ToString())
    sb = New StringBuilder
    If addressIndex = 0 Then
        If splitString(splitString.Length - 2).ToString() <> splitString(streetMarkerIndex + 1) Then
            For counter As Integer = streetMarkerIndex To splitString.Length - 2
                sb.Append(splitString(counter) + " ")
            Next counter
        End If
    Else
        For counter As Integer = 0 To addressIndex - 1
            sb.Append(splitString(counter) + " ")
        Next counter
    End If
    address2 = RTrim(sb.ToString())
    Dim output As New Collection
    output.Add(address1, "Address1")
    output.Add(address2, "Address2")
    output.Add(city, "City")
    output.Add(state, "State")
    output.Add(zip, "Zip")
    Return output
End Function

Private Function getLastIndexOf(ByVal sArray As String(), ByVal checkArray As String()) As Integer
    Dim sourceIndex As Integer = 0
    Dim outputIndex As Integer = 0
    For Each item As String In checkArray
        For Each source As String In sArray
            If source.ToLower = item.ToLower Then
                outputIndex = sourceIndex
                If item.ToLower = "box" Then
                    outputIndex = outputIndex + 1
                End If
            End If
            sourceIndex = sourceIndex + 1
        Next
        sourceIndex = 0
    Next
    Return outputIndex
End Function
Passing the parseAddress function "A. P. Croll & Son 2299 Lewes-Georgetown Hwy, Georgetown, DE 19947" returns:
2299 Lewes-Georgetown Hwy
A. P. Croll & Son
Georgetown
DE
19947
I've been working in the address-processing domain for about 5 years now, and there really is no silver bullet. The correct solution is going to depend on the value of the data. If it's not very valuable, throw it through a parser as the other answers suggest. If it's even somewhat valuable, you'll definitely need to have a human evaluate/correct all the results of the parser. If you're looking for a fully automated, repeatable solution, you probably want to talk to an address-correction vendor like Group1 or Trillium.
SmartyStreets has a new feature that extracts addresses from arbitrary input strings. (Note: I don't work at SmartyStreets.)
It successfully extracted all addresses from the sample input given in the question above. (By the way, only 9 of those 10 addresses are valid.)
Here's some of the output (shown as a screenshot in the original post). And here's the CSV-formatted output of that same request:
ID,Start,End,Segment,Verified,Candidate,Firm,FirstLine,SecondLine,LastLine,City,State,ZIPCode,County,DpvFootnotes,DeliveryPointBarcode,Active,Vacant,CMRA,MatchCode,Latitude,Longitude,Precision,RDI,RecordType,BuildingDefaultIndicator,CongressionalDistrict,Footnotes
1,32,79,"2299 Lewes-Georgetown Hwy, Georgetown, DE 19947",N,,,,,,,,,,,,,,,,,,,,,,
2,81,119,"11522 Shawnee Road, Greenwood DE 19950",Y,0,,11522 Shawnee Rd,,Greenwood DE 19950-5209,Greenwood,DE,19950,Sussex,AABB,199505209226,Y,N,N,Y,38.82865,-75.54907,Zip9,Residential,S,,AL,N#
3,121,160,"144 Kings Highway, S.W. Dover, DE 19901",Y,0,,144 Kings Hwy,,Dover DE 19901-7308,Dover,DE,19901,Kent,AABB,199017308444,Y,N,N,Y,39.16081,-75.52377,Zip9,Commercial,S,,AL,L#
4,190,232,"2 Penns Way Suite 405 New Castle, DE 19720",Y,0,,2 Penns Way Ste 405,,New Castle DE 19720-2407,New Castle,DE,19720,New Castle,AABB,197202407053,Y,N,N,Y,39.68332,-75.61043,Zip9,Commercial,H,,AL,N#
5,247,285,"33 Bridle Ridge Court, Lewes, DE 19958",Y,0,,33 Bridle Ridge Cir,,Lewes DE 19958-8961,Lewes,DE,19958,Sussex,AABB,199588961338,Y,N,N,Y,38.72749,-75.17055,Zip7,Residential,S,,AL,L#
6,306,339,"2742 Pulaski Hwy Newark, DE 19711",Y,0,,2742 Pulaski Hwy,,Newark DE 19702-3911,Newark,DE,19702,New Castle,AABB,197023911421,Y,N,N,Y,39.60328,-75.75869,Zip9,Commercial,S,,AL,A#
7,341,378,"2284 Bryn Zion Road, Smyrna, DE 19904",Y,0,,2284 Bryn Zion Rd,,Smyrna DE 19977-3895,Smyrna,DE,19977,Kent,AABB,199773895840,Y,N,N,Y,39.23937,-75.64065,Zip7,Residential,S,,AL,A#N#
8,406,450,"1500 Serpentine Road, Suite 100 Baltimore MD",Y,0,,1500 Serpentine Rd Ste 100,,Baltimore MD 21209-2034,Baltimore,MD,21209,Baltimore,AABB,212092034250,Y,N,N,Y,39.38194,-76.65856,Zip9,Commercial,H,,03,N#
9,455,495,"580 North Dupont Highway Dover, DE 19901",Y,0,,580 N DuPont Hwy,,Dover DE 19901-3961,Dover,DE,19901,Kent,AABB,199013961803,Y,N,N,Y,39.17576,-75.5241,Zip9,Commercial,S,,AL,N#
10,497,525,"P.O. Box 778 Dover, DE 19903",Y,0,,PO Box 778,,Dover DE 19903-0778,Dover,DE,19903,Kent,AABB,199030778781,Y,N,N,Y,39.20946,-75.57012,Zip5,Residential,P,,AL,
I was the developer who originally wrote the service. The algorithm we implemented is a bit different from any specific answer here, but each extracted address is verified against the address lookup API, so you can be sure whether it's valid. Each verified result is guaranteed, but we know the other results won't be perfect because, as has been made abundantly clear in this thread, addresses are unpredictable, sometimes even for humans.
This won't solve your problem, but if you only needed lat/long data for these addresses, the Google Maps API will parse non-formatted addresses pretty well.
Good suggestion. Alternatively, you can execute a cURL request for each address to Google Maps and it will return the properly formatted address. From that, you can regex to your heart's content.
+1 on James A. Rosen's suggested solution, as it has worked well for me; however, for completists, this site is a fascinating read and the best attempt I've seen at documenting addresses worldwide: http://www.columbia.edu/kermit/postal.html
Are there any standards at all in the way the addresses are recorded? For example:
Are there always commas or new-lines separating street1 from street2 from city from state from zip?
Are address types (road, street, boulevard, etc.) always spelled out? Always abbreviated? Some of each?
Define "error".
My general answer is a series of regular expressions, though the complexity depends on the answers. If there is no consistency at all, you may only be able to achieve partial success with a regex (i.e., filtering out the zip code and state) and will have to do the rest by hand (or at least go through the rest very carefully to make sure you spot the errors).
Another request for sample data.
As has been mentioned, I would work backward from the zip.
Once you have a zip, I would query a zip database, store the results, and remove them and the zip from the string.
That will leave you with the address mess. MOST (all?) addresses will start with a number, so find the first occurrence of a number in the remaining string and grab everything from it to the (new) end of the string. That will be your address. Anything to the left of that number is likely an addressee.
You should now have the city, state, and zip stored in a table, plus possibly two strings: addressee and address. For the address, check for the existence of "Suite", "Apt.", etc., and split it into two values (address lines 1 and 2).
For the addressee, I would punt and grab the last word of the string as the last name, putting the rest into the first-name field. If you don't want to do that, you'll need to check for a salutation (Mr., Ms., Dr., etc.) at the start and make some assumptions based on the number of spaces as to how the name is made up.
I don't think there's any way you can parse with 100% accuracy.
Try www.address-parser.com. We use their web service, which you can test online.
Based on the sample data:
I would start at the end of the string. Parse a zip code (either format), reading from the end to the first space. If no zip code is found, error out.
Trim the end for spaces and special characters (commas).
Then move on to the state, again using the space as the delimiter. Maybe use a lookup list to validate two-letter state codes and full state names. If no valid state is found, error out.
Trim spaces and commas from the end again.
City gets tricky. I would actually use a comma here, at the risk of getting too much data in the city. Look for the comma, or the beginning of the line.
If you still have characters left in the string, shove all of them into an address field.
This isn't perfect, but it should be a pretty good starting point.
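The steps above can be sketched as follows (a rough illustration; `parse_backward` and its regexes are mine, and real data still needs the error and manual fallbacks mentioned):

```python
import re

def parse_backward(addr):
    # 1) Zip at the very end: XXXXX or XXXXX-XXXX; no zip -> error path.
    m = re.search(r"(\d{5}(?:-\d{4})?)\s*$", addr)
    if not m:
        return None
    zip_code = m.group(1)
    rest = addr[:m.start()].rstrip(" ,.")
    # 2) State: assume a two-letter code just before the zip
    #    (a lookup list would also catch spelled-out state names).
    m = re.search(r"\b([A-Z]{2})$", rest)
    if not m:
        return None
    state = m.group(1)
    rest = rest[:m.start()].rstrip(" ,")
    # 3) City: everything after the last comma; the rest is the street.
    if "," in rest:
        street, city = rest.rsplit(",", 1)
    else:
        street, city = "", rest
    return {"street": street.strip(), "city": city.strip(),
            "state": state, "zip": zip_code}

print(parse_backward("11522 Shawnee Road, Greenwood DE 19950"))
```

As the answer says, this is a starting point: the sample rows without a street/city comma, or with PO boxes and addressees, fall through to the manual path.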
If it's human entered data, then you'll spend too much time trying to code around the exceptions.
Try:
Regular expression to extract the zip code
Zip code lookup (via appropriate government DB) to get the correct address
Get an intern to manually verify the new data matches the old
This won't solve your problem, but if you only needed lat/long data for these addresses, the Google Maps API will parse non-formatted addresses pretty well.
RecogniContact is a Windows COM object that parses US and European addresses. You can try it at
http://www.loquisoft.com/index.php?page=8
You might want to check out http://jgeocoder.sourceforge.net/parser.html - it worked like a charm for me.
This type of problem is hard to solve because of underlying ambiguities in the data.
Here is a Perl-based solution that defines a recursive-descent grammar based on regular expressions to parse many valid combinations of street addresses: http://search.cpan.org/~kimryan/Lingua-EN-AddressParse-1.20/lib/Lingua/EN/AddressParse.pm . This includes sub-properties within an address, such as:
12 1st Avenue N Suite # 2 Somewhere CA 12345 USA
It is similar to http://search.cpan.org/~timb/Geo-StreetAddress-US-1.03/US.pm mentioned above, but it also works for addresses outside the USA, such as in the UK, Australia, and Canada.
Here is the output for one of your sample addresses. Note that the name section would need to be removed first from "A. P. Croll & Son 2299 Lewes-Georgetown Hwy, Georgetown, DE 19947" to reduce it to "2299 Lewes-Georgetown Hwy, Georgetown, DE 19947". This is easily achieved by removing all data up to the first number found in the string.
Non matching part ''
Error '0'
Error descriptions ''
Case all '2299 Lewes-Georgetown Hwy Georgetown DE 19947'
COMPONENTS ''
country ''
po_box_type ''
post_box ''
post_code '19947'
pre_cursor ''
property_identifier '2299'
property_name ''
road_box ''
street 'Lewes-Georgetown'
street_direction ''
street_type 'Hwy'
sub_property_identifier ''
subcountry 'DE'
suburb 'Georgetown'
Since there is a chance of errors in the words, think about using SOUNDEX combined with the LCS algorithm to compare strings; this will help a lot!
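As a sketch of the LCS half of that suggestion (`lcs_len` and `similarity` are my names; a Soundex pass would complement this for phonetic typos):

```python
def lcs_len(a, b):
    # Classic dynamic-programming longest-common-subsequence length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            if ca == cb:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def similarity(a, b):
    # Ratio in [0, 1]; stays high under single-character typos.
    return 2 * lcs_len(a, b) / (len(a) + len(b))

print(similarity("Pennsylvania", "Pensylvania"))  # high despite the typo
```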
Using the Google API:
$d=str_replace(" ", "+", $address_url);
$completeurl ="http://maps.googleapis.com/maps/api/geocode/xml?address=".$d."&sensor=true";
$phpobject = simplexml_load_file($completeurl);
print_r($phpobject);
For Ruby or Rails developers there is a nice gem available called street_address.
I have been using it on one of my projects and it does the work I need.
The only issue I had: whenever an address was in the format P. O. Box 1410 Durham, NC 27702, it returned nil, so I had to replace "P. O. Box" with '' before it was able to parse.
There are data services that, given a zip code, will give you the list of street names in that zip code.
Use a regex to extract the zip or city/state - find the correct one, or if there's an error, grab both.
Pull the list of streets from a data source, correct the city and state, and then the street address. Once you get a valid address line 1, city, state, and zip, you can then make assumptions about address lines 2 and 3.
I don't know how feasible this would be, but I haven't seen it mentioned, so I thought I would go ahead and suggest it:
If you are strictly in the US, get a huge database of all zip codes, states, cities, and streets. Now look for these in your addresses. You can validate what you find by testing whether, say, the city you found exists in the state you found, or by checking whether the street you found exists in the city you found. If not, chances are "John" isn't part of "John's Street" but is the name of the addressee... Basically, get as much information as you can and check your addresses against it.
An extreme example would be to get a list of ALL the addresses in the USA and then find which one most closely matches each of your addresses...
There is a JavaScript port of the Perl Geo::StreetAddress::US package: https://github.com/hassansin/parse-address . It's regex-based and works fairly well.
