Bing web search API advanced query - Azure

I have a question about the Bing Web Search API:
I want to use a filter in my query for:
articles + documents
articles only
documents only
Articles are HTML pages.
Documents are PDF, DOC, DOCX, ... with a /lfy/documents/ path.
I have tried the ext parameter, but it does not seem to work with ext:html.
I am also trying to filter by path (contains "/documents" for documents, does not contain "/documents" for web pages):
1/ Filter including site, with a spelling mistake in the query
q=site:www.msa.fr/lfy/ Barèmes des cotisations sur sallaires
-> returns results :-)
The altered query is: "alteredQuery": "barèmes des cotisations sur salaires"
"sallaires" becomes "salaires": OK!
2/ Filter including site and excluding a subsite, with a spelling mistake
q=site:www.msa.fr/lfy/ NOT site:www.msa.fr/lfy/documents Barèmes des cotisations sur sallaires
-> returns no results
Adding NOT site: breaks the spelling correction.
3/ Filter including site and excluding a subsite, without a spelling mistake
q=site:www.msa.fr/lfy/ NOT site:www.msa.fr/lfy/documents Barèmes des cotisations sur salaires
-> returns results
So "NOT site:xxx" itself works, but it disables the spelling correction.
Note: I use
responseFilter=Webpages
mkt=fr-fr
safesearch=Moderate
Does anyone have a solution?
Thanks for your help.
Florian.

You need to use "-" instead of "NOT". The following should work:
q=Barèmes des cotisations sur sallaires site:www.msa.fr/lfy/
q=Barèmes des cotisations sur sallaires site:www.msa.fr/lfy/ -site:www.msa.fr/lfy/documents
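For reference, here is a minimal Python sketch of that corrected call against the Bing Web Search v7 REST endpoint, using the same parameters noted in the question (the endpoint URL and the Ocp-Apim-Subscription-Key header are the standard v7 conventions; YOUR_KEY is a placeholder):

import requests

# Corrected query: "-site:" excludes the documents subtree
params = {
    "q": "Barèmes des cotisations sur salaires site:www.msa.fr/lfy/ -site:www.msa.fr/lfy/documents",
    "responseFilter": "Webpages",
    "mkt": "fr-fr",
    "safeSearch": "Moderate",
}
headers = {"Ocp-Apim-Subscription-Key": "YOUR_KEY"}
resp = requests.get("https://api.bing.microsoft.com/v7.0/search", params=params, headers=headers)
for page in resp.json().get("webPages", {}).get("value", []):
    print(page["url"])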


How do I add to strings in a list? (scraping links off Amazon)

I'm trying to scrape links off of Amazon and ran into a tiny problem.
I want to prepend a particular string to every string in a list.
links = soup.find_all("a", attrs= {"class":"a-link-normal a-text-normal"})
urls = [item.get('href') for item in links]
print(urls)
The output is:
['/Redragon-KUMARA-Backlit-Mechanical-Keyboard/dp/B016MAK38U?dchild=1', '/MOTOSPEED-Professional-Mechanical-Keyboard-Illuminated/dp/B07P9J5CBJ?dchild=1', '/Mechanical-Keyboard-Keyboards-Detachable-Programmable/dp/B07QS6MG8B?dchild=1', '/Element-Mechanical-81-Replaceable-Rainbow/dp/B07DYPJWD4?dchild=1', '/DURGOD-Mechanical-Keyboard-Connector-Typists/dp/B07B8DS585?dchild=1']
Now how do I add a = https://amazon.com.au in front of every string in the list output?
I want the output to be as follows:
['https://amazon.com.au/Redragon-KUMARA-Backlit-Mechanical-Keyboard/dp/B016MAK38U?dchild=1', 'https://amazon.com.au/MOTOSPEED-Professional-Mechanical-Keyboard-Illuminated/dp/B07P9J5CBJ?dchild=1', 'https://amazon.com.au/Mechanical-Keyboard-Keyboards-Detachable-Programmable/dp/B07QS6MG8B?dchild=1', 'https://amazon.com.au/Element-Mechanical-81-Replaceable-Rainbow/dp/B07DYPJWD4?dchild=1', 'https://amazon.com.au/DURGOD-Mechanical-Keyboard-Connector-Typists/dp/B07B8DS585?dchild=1']
I'm a newbie (1.5 months into Python), so apologies in advance if I'm not clear enough.
Thank you!
This is essentially a duplicate of Appending the same string to a list of strings in Python.
In short (note that you want to iterate over urls, the list of href strings, not links, which holds the tag objects):
urls = ['https://amazon.com.au' + url for url in urls]
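A slightly more robust variant (a sketch, not from the original answer) uses urllib.parse.urljoin, which also leaves any already-absolute hrefs untouched:

from urllib.parse import urljoin

base = "https://amazon.com.au"
urls = [urljoin(base, item.get("href")) for item in links]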

Nodejs encode string ISO8859-1

I've been looking for an answer for a few days.
I am receiving data via POST from a form containing special characters (accents),
e.g.: Está é uma sala de teste
What the Node app receives: Est%E1+%E9+uma+sala+de+teste
What is the correct way in Node.js to decode this string before saving it to my database?
I did it one way, but I'm sure it's not the right way to
decode a string with accents.
My apologies if this is a duplicate, but I wasn't able to resolve the issue from what I found.
Thanks in advance.
You can use a module like iconv-urlencode for that:
const conv = require('iconv-urlencode');
let input = 'Est%E1+%E9+uma+sala+de+teste';
console.log( conv.decode(input, 'latin-1') );
// Está é uma sala de teste

vim text replacement: :%s/é/&&eacute/g foreign language

I have text with multiple French characters, and I'm trying to replace all the occurrences with their HTML entities.
Example: été should become &eacute;t&eacute;
Right now I'm getting either an error or weird text:
:%s/é/&&eacute;/g => this gives me ééeacute;tééeacute;
:%s/é/&eacute;/g => this gives me éeacute;téeacute;
:%s/é//é/g => this gives me an error: Trailing characters
In the replacement part, & is special: it represents the whole match:
J'ai mangé du paté de campagne.
:s/é/&eacute;/g
J'ai mangéeacute; du patéeacute; de campagne.
Escape it to obtain the desired &:
:s/é/\&eacute;/g
J'ai mang&eacute; du pat&eacute; de campagne.
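Applied to the original example, :%s/é/\&eacute;/g turns été into &eacute;t&eacute;, as desired.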

Exporting results

I'm sure this is an issue anyone who uses Stata for publications or reports has run into:
How do you conveniently export your output to something that can be parsed by a scripting language or Excel?
There are a few ado files that do this for specific commands. For example:
findit tabout
findit outreg2
But what about exporting the output of the table command? Or the results of an anova?
I would love to hear about how Stata users address this problem for either specific commands or in general.
After experimenting with this for a while, I've found a solution that works for me.
There are a variety of ADOs that handle exporting specific functions. I've made use of outreg2 for regressions and tabout for summary statistics.
For simpler commands, it's easy to write your own programs that save results automatically to plain text in a standard format. Here are a few I wrote. Note that these both display results (to be saved to a log file) and export them to text files; if you wanted to just save to text, you could get rid of the di's and qui the sum, tab, etc. commands. They rely on mat2txt, available from SSC (ssc install mat2txt):
cap program drop sumout
program define sumout
di ""
di ""
di "Summary of `1'"
di ""
sum `1', d
qui matrix X = (r(mean), r(sd), r(p50), r(min), r(max))
qui matrix colnames X = mean sd median min max
qui mat2txt, matrix(X) saving("`2'") replace
end
cap program drop tab2_chi_out
program define tab2_chi_out
di ""
di ""
di "Tabulation of `1' and `2'"
di ""
tab `1' `2', chi2
qui matrix X = (r(p), r(chi2))
qui matrix colnames X = chi2p chi2
qui mat2txt, matrix(X) saving("`3'") replace
end
cap program drop oneway_out
program define oneway_out
di ""
di ""
di "Oneway anova with dv = `1' and iv = `2'"
di ""
oneway `1' `2'
qui matrix X = (r(F), r(df_r), r(df_m), Ftail(r(df_m), r(df_r), r(F)))
qui matrix colnames X = anova_between_groups_F within_groups_df between_groups_df P
qui mat2txt, matrix(X) saving("`3'") replace
end
cap program drop anova_out
program define anova_out
di ""
di ""
di "Anova command: anova `1'"
di ""
anova `1'
qui matrix X = (e(F), e(df_r), e(df_m), Ftail(e(df_m), e(df_r), e(F)), e(r2_a))
qui matrix colnames X = anova_between_groups_F within_groups_df between_groups_df P RsquaredAdj
qui mat2txt, matrix(X) saving("`2'") replace
end
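For example, the programs above might be called like this (a sketch using Stata's bundled auto dataset; the output file names are placeholders):

sysuse auto, clear
sumout price price_summary.txt
tab2_chi_out foreign rep78 foreign_rep78.txt
oneway_out price rep78 price_by_rep78.txt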
The question is then how to get the output into Excel and format it. I found that the best way to import the text output files from Stata into Excel is to concatenate them into one big text file and then import that single file using the Import Text File... feature in Excel.
I concatenate the files by placing this Ruby code in the output folder and then running it from my do-file with qui shell cd path/to/output/folder/ && ruby table.rb:
output = ""
Dir.new(".").entries.each do |file|
next if file =~/\A\./ || file == "table.rb" || file == "out.txt"
if file =~ /.*xml/
system "rm #{file}"
next
end
contents = File.open(file, "rb").read
output << "\n\n#{file}\n\n" << contents
end
File.open("out.txt", 'w') {|f| f.write(output)}
Once I import out.txt into its own sheet in Excel, I use a bunch of Excel's built-in functions to pull the data together into nice, pretty tables.
I use a combination of vlookup, offset, match, iferror, and hidden columns with cell numbers and filenames to do this. Each source .txt file's name is included in out.txt just above that file's contents, which lets you look up a file's contents using these functions and then reference specific cells using vlookup and offset.
This Excel business is actually the most complicated part of this system and there's really no good way to explain it without showing you the file, though hopefully you can get enough of an idea to figure it out for yourself. If not, feel free to contact me through http://maxmasnick.com and I can get you more info.
I have found that the estout package is the most developed and has good documentation.
This is an old question and a lot has happened since it was posted.
Stata now has several built-in commands and functions that allow anyone to
export customized output fairly easily:
putexcel
putexcel with advanced syntax
putdocx
putpdf
There are also equivalent Mata functions / classes, which offer greater flexibility:
_docx*()
Pdf*()
xl()
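For instance, a minimal putexcel sketch (the file name is a placeholder):

sysuse auto, clear
summarize price
putexcel set results.xlsx, replace
putexcel A1 = "Mean price"
putexcel B1 = r(mean)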
From my experience, there aren't any 100% general solutions. Community-contributed commands such as estout are now mature enough to handle most basic operations. That said, if you have something that deviates even slightly from the template, you will have to program it yourself.
Most tutorials throw in several packages, when it would indeed be nice to have only one that exports everything, which is what Max suggests above with his interesting method.
I personally use tabout for summary statistics and frequencies, estout for regression output, and am trying out mkcorr for correlation matrices.
It's been a while, but I believe you can issue a log command to capture the output.
log using c:\data\anova_analysis.log, text
[commands]
log close
I use estpost (part of the estout package) to tabulate results from non-estimation commands. You can then store them and export easily.
Here's an example:
estpost corr varA varB varC varD, matrix
est store corrs
esttab corrs using corrs.rtf, replace
You can then add options to change formatting, etc.
You can use asdoc, which is available from SSC. To install it:
ssc install asdoc
asdoc works well with almost all Stata commands. Specifically, it produces publication-quality tables for:
summarize command - to report summary statistics
cor or pwcorr command - to report correlations
tabstat - for flexible tables of descriptive statistics
tabulate - for one-way, two-way, three-way tabulations
regress - for detailed, nested, and wide regression tables
table - flexible tables
and many more. You can explore more about asdoc here:
https://fintechprofessor.com/2018/01/31/asdoc/
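A quick example (a sketch; Myfile.doc is a placeholder file name):

sysuse auto, clear
asdoc sum price mpg weight, save(Myfile.doc) replace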

Parse usable Street Address, City, State, Zip from a string [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 8 years ago.
Problem: I have an address field from an Access database which has been converted to SQL Server 2005. This field has everything all in one field. I need to parse out the address's individual sections into their appropriate fields in a normalized table. I need to do this for approximately 4,000 records, and it needs to be repeatable.
Assumptions:
Assume an address in the US (for now)
assume that the input string will sometimes contain an addressee (the person being addressed) and/or a second street address (i.e. Suite B)
states may be abbreviated
zip code could be standard 5 digits or zip+4
there are typos in some instances
UPDATE: In response to the questions posed: standards were not universally followed; I need to store the individual values, not just geocode them; and "errors" means typos (corrected above).
Sample Data:
A. P. Croll & Son 2299 Lewes-Georgetown Hwy, Georgetown, DE 19947
11522 Shawnee Road, Greenwood DE 19950
144 Kings Highway, S.W. Dover, DE 19901
Intergrated Const. Services 2 Penns Way Suite 405 New Castle, DE 19720
Humes Realty 33 Bridle Ridge Court, Lewes, DE 19958
Nichols Excavation 2742 Pulaski Hwy Newark, DE 19711
2284 Bryn Zion Road, Smyrna, DE 19904
VEI Dover Crossroads, LLC 1500 Serpentine Road, Suite 100 Baltimore MD 21
580 North Dupont Highway Dover, DE 19901
P.O. Box 778 Dover, DE 19903
I've done a lot of work on this kind of parsing. Because there are errors, you won't get 100% accuracy, but there are a few things you can do to get most of the way there, and then do a visual BS test. Here's the general way to go about it. It's not code, because it would be pretty academic to write out; there's no weirdness, just lots of string handling.
(Now that you've posted some sample data, I've made some minor changes)
Work backward. Start from the zip code, which will be near the end, and in one of two known formats: XXXXX or XXXXX-XXXX. If this doesn't appear, you can assume you're in the city, state portion, below.
The next thing, before the zip, is going to be the state, and it'll be either in two-letter format or spelled out as words. You know what these will be, too; there are only 50 of them. Also, you could soundex the words to help compensate for spelling errors.
Before that is the city, and it's probably on the same line as the state. You could use a zip-code database to check the city and state based on the zip, or at least use it as a BS detector.
The street address will generally be one or two lines. The second line will generally be the suite number if there is one, but it could also be a PO box.
It's going to be near-impossible to detect a name on the first or second line, though if a line is not prefixed with a number (or if it's prefixed with "attn:" or "attention to:"), that could give you a hint as to whether it's a name or an address line.
I hope this helps somewhat.
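To make the backward pass concrete, here is a minimal Python sketch of the zip and state steps (an illustration, not the answerer's code; the state set is a deliberately incomplete placeholder):

import re

STATES = {"DE", "MD", "CA"}  # illustrative subset; use all 50 states + DC in practice

def parse_backward(addr):
    # Zip code at the end: XXXXX or XXXXX-XXXX
    m = re.search(r"(\d{5}(?:-\d{4})?)\s*$", addr)
    zip_code = m.group(1) if m else None
    rest = addr[:m.start()].rstrip(" ,") if m else addr
    # State just before the zip (two-letter form only, in this sketch)
    m = re.search(r"\b([A-Z]{2})\s*$", rest)
    state = m.group(1) if m and m.group(1) in STATES else None
    if state:
        rest = rest[:m.start()].rstrip(" ,.")
    return zip_code, state, rest  # rest still holds addressee + street + city

print(parse_backward("Humes Realty 33 Bridle Ridge Court, Lewes, DE 19958"))
# -> ('19958', 'DE', 'Humes Realty 33 Bridle Ridge Court, Lewes')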
I think outsourcing the problem is the best bet: send it to the Google (or Yahoo) geocoder. The geocoder returns not only the lat/long (which aren't of interest here), but also a rich parsing of the address, with fields filled in that you didn't send (including ZIP+4 and county).
For example, parsing "1600 Amphitheatre Parkway, Mountain View, CA" yields
{
"name": "1600 Amphitheatre Parkway, Mountain View, CA, USA",
"Status": {
"code": 200,
"request": "geocode"
},
"Placemark": [
{
"address": "1600 Amphitheatre Pkwy, Mountain View, CA 94043, USA",
"AddressDetails": {
"Country": {
"CountryNameCode": "US",
"AdministrativeArea": {
"AdministrativeAreaName": "CA",
"SubAdministrativeArea": {
"SubAdministrativeAreaName": "Santa Clara",
"Locality": {
"LocalityName": "Mountain View",
"Thoroughfare": {
"ThoroughfareName": "1600 Amphitheatre Pkwy"
},
"PostalCode": {
"PostalCodeNumber": "94043"
}
}
}
}
},
"Accuracy": 8
},
"Point": {
"coordinates": [-122.083739, 37.423021, 0]
}
}
]
}
Now that's parseable!
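For instance, pulling the parsed fields out of a response shaped like the one above (a sketch; response is assumed to be the already-decoded JSON):

details = response["Placemark"][0]["AddressDetails"]
area = details["Country"]["AdministrativeArea"]
locality = area["SubAdministrativeArea"]["Locality"]

state = area["AdministrativeAreaName"]                 # 'CA'
city = locality["LocalityName"]                        # 'Mountain View'
street = locality["Thoroughfare"]["ThoroughfareName"]  # '1600 Amphitheatre Pkwy'
zip_code = locality["PostalCode"]["PostalCodeNumber"]  # '94043'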
The original poster has likely long since moved on, but I took a stab at porting the Perl Geo::StreetAddress::US module used by geocoder.us to C#, dumped it on CodePlex, and think that people stumbling across this question in the future may find it useful:
US Address Parser
On the project's home page, I try to talk about its (very real) limitations. Since it is not backed by the USPS database of valid street addresses, parsing can be ambiguous and it can neither confirm nor deny the validity of a given address. It can only try to pull data out of the string.
It's meant for the case when you need to get a set of data mostly in the right fields, or want to provide a shortcut to data entry (letting users paste an address into a textbox rather than tabbing among multiple fields). It is not meant for verifying the deliverability of an address.
It doesn't attempt to parse out anything above the street line, but one could probably diddle with the regex to get something reasonably close; I'd probably just break it off at the house number.
I've done this in the past.
Either do it manually (build a nice GUI that helps the user do it quickly), or have it automated, checking against a recent address database (you have to buy that) and handling errors manually.
Manual handling will take about 10 seconds each, meaning you can do 3600/10 = 360 per hour, so 4000 should take you approximately 11-12 hours. This will give you a high rate of accuracy.
For automation, you need a recent US address database, and tweak your rules against that. I suggest not going fancy on the regex (hard to maintain long-term, so many exceptions). Go for 90% match against the database, do the rest manually.
Do get a copy of Postal Addressing Standards (USPS) at http://pe.usps.gov/cpim/ftp/pubs/Pub28/pub28.pdf and notice it is 130+ pages long. Regexes to implement that would be nuts.
For international addresses, all bets are off. US-based workers would not be able to validate.
Alternatively, use a data service. I have, however, no recommendations.
Furthermore: when you do send the stuff out in the mail (that's what it's for, right?), make sure you put "address correction requested" on the envelope (in the right place) and update the database. (We made a simple GUI for the front-desk person to do that; the front-desk person is the one who actually sorts through the mail.)
Finally, when you have scrubbed data, look for duplicates.
After the advice here, I have devised the following function in VB which creates passable, though not always perfect, usable data (if a company name and a suite line are given, it combines the suite and the city). Please feel free to comment/refactor/yell at me for breaking one of my own rules, etc.:
Public Function parseAddress(ByVal input As String) As Collection
input = input.Replace(",", "")
input = input.Replace(" ", " ")
Dim splitString() As String = Split(input)
Dim streetMarker() As String = New String() {"street", "st", "st.", "avenue", "ave", "ave.", "blvd", "blvd.", "highway", "hwy", "hwy.", "box", "road", "rd", "rd.", "lane", "ln", "ln.", "circle", "circ", "circ.", "court", "ct", "ct."}
Dim address1 As String
Dim address2 As String = ""
Dim city As String
Dim state As String
Dim zip As String
Dim streetMarkerIndex As Integer
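' Work backward: assume the last token is the zip and the next-to-last is the state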
zip = splitString(splitString.Length - 1).ToString()
state = splitString(splitString.Length - 2).ToString()
streetMarkerIndex = getLastIndexOf(splitString, streetMarker) + 1
Dim sb As New StringBuilder
For counter As Integer = streetMarkerIndex To splitString.Length - 3
sb.Append(splitString(counter) + " ")
Next counter
city = RTrim(sb.ToString())
Dim addressIndex As Integer = 0
For counter As Integer = 0 To streetMarkerIndex
If IsNumeric(splitString(counter)) _
Or splitString(counter).ToString.ToLower = "po" _
Or splitString(counter).ToString().ToLower().Replace(".", "") = "po" Then
addressIndex = counter
Exit For
End If
Next counter
sb = New StringBuilder
For counter As Integer = addressIndex To streetMarkerIndex - 1
sb.Append(splitString(counter) + " ")
Next counter
address1 = RTrim(sb.ToString())
sb = New StringBuilder
If addressIndex = 0 Then
If splitString(splitString.Length - 2).ToString() <> splitString(streetMarkerIndex + 1) Then
For counter As Integer = streetMarkerIndex To splitString.Length - 2
sb.Append(splitString(counter) + " ")
Next counter
End If
Else
For counter As Integer = 0 To addressIndex - 1
sb.Append(splitString(counter) + " ")
Next counter
End If
address2 = RTrim(sb.ToString())
Dim output As New Collection
output.Add(address1, "Address1")
output.Add(address2, "Address2")
output.Add(city, "City")
output.Add(state, "State")
output.Add(zip, "Zip")
Return output
End Function
Private Function getLastIndexOf(ByVal sArray As String(), ByVal checkArray As String()) As Integer
Dim sourceIndex As Integer = 0
Dim outputIndex As Integer = 0
For Each item As String In checkArray
For Each source As String In sArray
If source.ToLower = item.ToLower Then
outputIndex = sourceIndex
If item.ToLower = "box" Then
outputIndex = outputIndex + 1
End If
End If
sourceIndex = sourceIndex + 1
Next
sourceIndex = 0
Next
Return outputIndex
End Function
Passing the parseAddress function "A. P. Croll & Son 2299 Lewes-Georgetown Hwy, Georgetown, DE 19947" returns:
2299 Lewes-Georgetown Hwy
A. P. Croll & Son
Georgetown
DE
19947
I've been working in the address processing domain for about 5 years now, and there really is no silver bullet. The correct solution is going to depend on the value of the data. If it's not very valuable, throw it through a parser as the other answers suggest. If it's even somewhat valuable, you'll definitely need a human to evaluate/correct all the results of the parser. If you're looking for a fully automated, repeatable solution, you probably want to talk to an address correction vendor like Group1 or Trillium.
SmartyStreets has a new feature that extracts addresses from arbitrary input strings. (Note: I don't work at SmartyStreets.)
It successfully extracted all addresses from the sample input given in the question above. (By the way, only 9 of those 10 addresses are valid.)
Here's the CSV-formatted output of that request:
ID,Start,End,Segment,Verified,Candidate,Firm,FirstLine,SecondLine,LastLine,City,State,ZIPCode,County,DpvFootnotes,DeliveryPointBarcode,Active,Vacant,CMRA,MatchCode,Latitude,Longitude,Precision,RDI,RecordType,BuildingDefaultIndicator,CongressionalDistrict,Footnotes
1,32,79,"2299 Lewes-Georgetown Hwy, Georgetown, DE 19947",N,,,,,,,,,,,,,,,,,,,,,,
2,81,119,"11522 Shawnee Road, Greenwood DE 19950",Y,0,,11522 Shawnee Rd,,Greenwood DE 19950-5209,Greenwood,DE,19950,Sussex,AABB,199505209226,Y,N,N,Y,38.82865,-75.54907,Zip9,Residential,S,,AL,N#
3,121,160,"144 Kings Highway, S.W. Dover, DE 19901",Y,0,,144 Kings Hwy,,Dover DE 19901-7308,Dover,DE,19901,Kent,AABB,199017308444,Y,N,N,Y,39.16081,-75.52377,Zip9,Commercial,S,,AL,L#
4,190,232,"2 Penns Way Suite 405 New Castle, DE 19720",Y,0,,2 Penns Way Ste 405,,New Castle DE 19720-2407,New Castle,DE,19720,New Castle,AABB,197202407053,Y,N,N,Y,39.68332,-75.61043,Zip9,Commercial,H,,AL,N#
5,247,285,"33 Bridle Ridge Court, Lewes, DE 19958",Y,0,,33 Bridle Ridge Cir,,Lewes DE 19958-8961,Lewes,DE,19958,Sussex,AABB,199588961338,Y,N,N,Y,38.72749,-75.17055,Zip7,Residential,S,,AL,L#
6,306,339,"2742 Pulaski Hwy Newark, DE 19711",Y,0,,2742 Pulaski Hwy,,Newark DE 19702-3911,Newark,DE,19702,New Castle,AABB,197023911421,Y,N,N,Y,39.60328,-75.75869,Zip9,Commercial,S,,AL,A#
7,341,378,"2284 Bryn Zion Road, Smyrna, DE 19904",Y,0,,2284 Bryn Zion Rd,,Smyrna DE 19977-3895,Smyrna,DE,19977,Kent,AABB,199773895840,Y,N,N,Y,39.23937,-75.64065,Zip7,Residential,S,,AL,A#N#
8,406,450,"1500 Serpentine Road, Suite 100 Baltimore MD",Y,0,,1500 Serpentine Rd Ste 100,,Baltimore MD 21209-2034,Baltimore,MD,21209,Baltimore,AABB,212092034250,Y,N,N,Y,39.38194,-76.65856,Zip9,Commercial,H,,03,N#
9,455,495,"580 North Dupont Highway Dover, DE 19901",Y,0,,580 N DuPont Hwy,,Dover DE 19901-3961,Dover,DE,19901,Kent,AABB,199013961803,Y,N,N,Y,39.17576,-75.5241,Zip9,Commercial,S,,AL,N#
10,497,525,"P.O. Box 778 Dover, DE 19903",Y,0,,PO Box 778,,Dover DE 19903-0778,Dover,DE,19903,Kent,AABB,199030778781,Y,N,N,Y,39.20946,-75.57012,Zip5,Residential,P,,AL,
I was the developer who originally wrote the service. The algorithm we implemented is a bit different from any specific answers here, but each extracted address is verified against the address lookup API, so you can be sure if it's valid or not. Each verified result is guaranteed, but we know the other results won't be perfect because, as has been made abundantly clear in this thread, addresses are unpredictable, even for humans sometimes.
This won't solve your problem, but if you only needed lat/long data for these addresses, the Google Maps API will parse non-formatted addresses pretty well.
Good suggestion; alternatively, you can execute a cURL request to Google Maps for each address and it will return the properly formatted address. From that, you can regex to your heart's content.
+1 on James A. Rosen's suggested solution, as it has worked well for me. However, for completists, this site is a fascinating read and the best attempt I've seen at documenting addresses worldwide: http://www.columbia.edu/kermit/postal.html
Are there any standards at all in the way that the addresses are recorded? For example:
Are there always commas or new-lines separating street1 from street2 from city from state from zip?
Are address types (road, street, boulevard, etc.) always spelled out? Always abbreviated? Some of each?
Define "error".
My general answer is a series of regular expressions, though the complexity of this depends on the answers. And if there is no consistency at all, then you may only be able to achieve partial success with a regex (i.e. filtering out the zip code and state) and will have to do the rest by hand (or at least go through the rest very carefully to make sure you spot the errors).
Another request for sample data.
As has been mentioned, I would work backwards from the zip.
Once you have a zip, I would query a zip database, store the results, and remove them and the zip from the string.
That will leave you with the address mess. Most (all?) addresses will start with a number, so find the first occurrence of a number in the remaining string and grab everything from it to the (new) end of the string. That will be your address. Anything to the left of that number is likely an addressee.
You should now have the city, state, and zip stored in a table, and possibly two strings, addressee and address. For the address, check for the existence of "Suite" or "Apt.", etc., and split that into two values (address lines 1 & 2).
For the addressee, I would punt and grab the last word of that string as the last name and put the rest into the first-name field. If you don't want to do that, you'll need to check for a salutation (Mr., Ms., Dr., etc.) at the start and make some assumptions based on the number of spaces as to how the name is made up.
I don't think there's any way you can parse with 100% accuracy.
Try www.address-parser.com. We use their web service, which you can test online.
Based on the sample data:
I would start at the end of the string. Parse a zip code (either format), reading from the end to the first space. If no zip code is found, flag an error.
Then trim spaces and special characters (commas) from the end.
Then move on to the state; again, use the space as the delimiter. Maybe use a lookup list to validate two-letter state codes and full state names. If no valid state is found, flag an error.
Trim spaces and commas from the end again.
City gets tricky. I would actually use a comma here, at the risk of getting too much data in the city field. Look for the comma, or the beginning of the line.
If you still have characters left in the string, shove all of that into the address field.
This isn't perfect, but it should be a pretty good starting point.
If it's human-entered data, then you'll spend too much time trying to code around the exceptions.
Try:
Regular expression to extract the zip code
Zip code lookup (via appropriate government DB) to get the correct address
Get an intern to manually verify the new data matches the old
This won't solve your problem, but if you only needed lat/long data for these addresses, the Google Maps API will parse non-formatted addresses pretty well.
RecogniContact is a Windows COM object that parses US and European addresses. You can try it right at
http://www.loquisoft.com/index.php?page=8
You might want to check this out: http://jgeocoder.sourceforge.net/parser.html
It worked like a charm for me.
This type of problem is hard to solve because of underlying ambiguities in the data.
Here is a Perl-based solution that defines a recursive-descent grammar tree based on regular expressions to parse many valid combinations of street addresses: http://search.cpan.org/~kimryan/Lingua-EN-AddressParse-1.20/lib/Lingua/EN/AddressParse.pm . This includes sub-properties within an address such as:
12 1st Avenue N Suite # 2 Somewhere CA 12345 USA
It is similar to http://search.cpan.org/~timb/Geo-StreetAddress-US-1.03/US.pm mentioned above, but it also works for addresses outside the USA, such as the UK, Australia, and Canada.
Here is the output for one of your sample addresses. Note that the name section would need to be removed first from "A. P. Croll & Son 2299 Lewes-Georgetown Hwy, Georgetown, DE 19947" to reduce it to "2299 Lewes-Georgetown Hwy, Georgetown, DE 19947". This is easily achieved by removing all data up to the first number found in the string.
Non matching part ''
Error '0'
Error descriptions ''
Case all '2299 Lewes-Georgetown Hwy Georgetown DE 19947'
COMPONENTS ''
country ''
po_box_type ''
post_box ''
post_code '19947'
pre_cursor ''
property_identifier '2299'
property_name ''
road_box ''
street 'Lewes-Georgetown'
street_direction ''
street_type 'Hwy'
sub_property_identifier ''
subcountry 'DE'
suburb 'Georgetown'
Since there is a chance of typos in the words, consider using SOUNDEX combined with the LCS (longest common subsequence) algorithm to compare strings; it will help a lot!
Using the Google Geocoding API (PHP):
// $address_url holds the free-form address string to be parsed
$d = str_replace(" ", "+", $address_url); // urlencode() would be more robust here
$completeurl = "http://maps.googleapis.com/maps/api/geocode/xml?address=".$d."&sensor=true";
$phpobject = simplexml_load_file($completeurl);
print_r($phpobject);
For Ruby or Rails developers there is a nice gem available called street_address.
I have been using it on one of my projects and it does the work I need.
The only issue I had: whenever an address was in the format P. O. Box 1410 Durham, NC 27702, it returned nil, so I had to replace "P. O. Box" with '' before it was able to parse the rest.
There are data services that, given a zip code, will give you the list of street names in that zip code.
Use a regex to extract the zip or the city/state; find the correct one, or if there's an error, grab both.
Pull the list of streets from a data source, correct the city and state, and then correct the street address. Once you have a valid address line 1, city, state, and zip, you can then make assumptions about address lines 2 and 3.
I don't know how feasible this would be, but I haven't seen it mentioned, so I thought I would go ahead and suggest it:
If you are strictly in the US, get a huge database of all zip codes, states, cities, and streets. Now look for these in your addresses. You can validate what you find by testing whether, say, the city you found exists in the state you found, or by checking whether the street you found exists in the city you found. If not, chances are "John" isn't from a John's Street, but is the name of the addressee... Basically, get the most information you can and check your addresses against it.
An extreme example would be to get A LIST OF ALL THE ADDRESSES IN THE US OF A and then find which one has the most relevant match to each of your addresses...
There is a JavaScript port of the Perl Geo::StreetAddress::US package: https://github.com/hassansin/parse-address . It's regex-based and works fairly well.
