Organisation Details Recognition - nlp

I am trying to write a company details parser that can split text like the following into its constituent parts:
THALES LAND AND JOINT SYSTEMS
Total Signature Management
Wookey Hole Road
Wells
Somerset
BA5 1AA
Tel: +44(0)1749 682384
Fax: +44 (0)1749 682235
The problem I am having is: how can I tell that "Total Signature Management" is not actually part of the address? Normally, a company will display its name ("THALES LAND AND JOINT SYSTEMS") and line 2 would normally be the first part of the address.
In the case above, the company name is followed by a non-address part; is there any way to tell the difference?
Thanks

You could calculate the probability of Address<->Description based on the occurring words. In this example it's quite obvious: the "road" line is much more likely to be part of an address than the "management" line.
This should work nicely if the non-address part only appears after the company name. If the non-address parts can appear anywhere in the text, it becomes nearly impossible to separate them without further information.
Maybe you want to take a look at a similar question I asked yesterday.
Edit: You could create a statistical model based on previously categorised address parts (the ones you are sure are addresses ;) ).
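A minimal sketch of that word-scoring idea in Python, assuming a hand-picked keyword list and a UK postcode pattern; a real version would learn these weights from your previously categorised address parts rather than hard-coding them:

    import re

    # illustrative keyword weights; in practice derive these from labelled address lines
    ADDRESS_HINTS = {"road": 2, "street": 2, "lane": 2, "avenue": 2,
                     "tel": 2, "fax": 2, "phone": 2}
    POSTCODE_RE = re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b", re.I)

    def address_score(line):
        # sum keyword weights; a postcode match is a strong address signal
        score = sum(ADDRESS_HINTS.get(tok, 0) for tok in re.findall(r"[a-z]+", line.lower()))
        if POSTCODE_RE.search(line):
            score += 3
        return score

    for line in ["THALES LAND AND JOINT SYSTEMS", "Total Signature Management",
                 "Wookey Hole Road", "Wells", "Somerset", "BA5 1AA",
                 "Tel: +44(0)1749 682384"]:
        print(address_score(line), line)

Note that bare town and county lines ("Wells", "Somerset") score zero here, which is exactly why a statistical model built from known address parts beats a fixed keyword list.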

Related

How to efficiently extract street from a string

I need to extract the street from an apartment description. I don't necessarily need it with the number (not every listing has it anyway), but it would be appreciated.
MY 'SOLUTIONS'
1. Use regular expressions. But after reading many descriptions I realised that people often omit characteristic markers, such as writing the 'street' equivalent before the actual street name, etc.
2. Create a list containing possible street names. Because I know which city the apartments will be in, I can download every street name from this city, load it and then try to match any word from the description against the streets stored in the list (probably some kind of hash table). But descriptions can be quite long and streets can consist of a few words, so I would probably need to use this solution. Also, the address in the description can have 1-2 characters mistyped. (A rough sketch of this approach appears at the end of this question.)
3. Hand this work over to a third-party map engine. Cut a few words from the string, send them to some map engine (e.g. Google Maps); if it doesn't make sense to it, try the next few words, etc.
Is there any better solution?
I feel like I have to assume things like the address always being typed correctly, etc.; otherwise the complexity will be unacceptably high.
EDIT:
Programming language - I asked this as a generic question, not specific to any language, but I'm using Python.
String format - there is no exact string format, because everyone writes the description as they want. Usually the location is somewhere near the beginning, but that's not always the case.
Make them use a different format - I can't, because I'm scraping this data from different sites.
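To make solution 2 concrete, here is the kind of rough sketch I have in mind; the street list and threshold are placeholders, and difflib's SequenceMatcher stands in for a proper Levenshtein implementation:

    from difflib import SequenceMatcher

    # hypothetical street list for the target city
    STREETS = ["High Street", "Station Road", "Church Lane"]

    def find_street(description, streets=STREETS, threshold=0.85):
        tokens = description.split()
        best_ratio, best_street = 0.0, None
        for street in streets:
            n = len(street.split())                      # streets can span several words
            for i in range(len(tokens) - n + 1):
                candidate = " ".join(tokens[i:i + n])
                ratio = SequenceMatcher(None, candidate.lower(), street.lower()).ratio()
                if ratio > best_ratio:
                    best_ratio, best_street = ratio, street
        return best_street if best_ratio >= threshold else None

    print(find_street("Sunny flat on Statoin Road close to the park"))  # -> "Station Road" despite the typo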

Is it possible to have a top level domain that is 2 character and not country code?

As mentioned in the title, I would like to know whether a top-level domain can be exactly 2 characters and not be a country code.
Technically? Of course. Are there any? Well, if you're picky there's at least one that used to be a country code but no longer is (.SU). Will you ever be able to register a top-level domain that is not a country code? Almost certainly not.

SSIS Split String address

I have a column which is made up of addresses as shown below.
Address
1 Reid Street, Manchester, M1 2DF
12 Borough Road, London, E12,2FH
15 Jones Street, Newcastle, Tyne & Wear, NE1 3DN
etc .. etc....
I want to split this into different columns to import into my SQL database. I have been trying to use FINDSTRING to separate by the comma, but am having trouble when some addresses have more "sections" than others. Any ideas what's the best way to go about this?
Many thanks
This is a requirements specification problem, not an implementation problem. The more you can afford to assume about the format of the addresses, the more detailed parsing you will be able to do; the other side of the same coin is that the less you assume about the structure of the address, the fewer incorrect parses you will be blamed for.
It is crucial to determine whether you will only need to process UK postal addresses, or whether worldwide addresses may occur.
Based on your examples, certain parts of the address seem to be always present, but please check this resource to determine whether they are really required in all UK postal addresses.
If you find a match between the depth of parsing that you need, and the assumptions that you can safely make, you should be able to keep parsing by comma indexes (FINDSTRING); determine some components starting from the left, and some starting from the right of the string; and keep all that remains as an unparsed body.
It may also well turn out that your current task is mission impossible, especially in connection with international postal addresses. This is why most websites and other data collectors require users to enter postal addresses in an already parsed form.
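To make that concrete, here is a minimal sketch of the "work from both ends, keep the rest unparsed" approach, written in Python for readability; the same logic could be ported to an SSIS Script Component in C# or VB. The postcode pattern and the output column names are simplifying assumptions:

    import re

    # loose UK postcode shape, anchored to the end of the part
    POSTCODE_RE = re.compile(r"[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}$", re.I)

    def parse_address(raw):
        parts = [p.strip() for p in raw.split(",") if p.strip()]
        result = {"BuildingAddress": None, "PostTown": None, "Postcode": None, "Unparsed": ""}
        if parts and POSTCODE_RE.match(parts[-1]):
            result["Postcode"] = parts.pop().upper()      # rightmost component
        if parts:
            result["BuildingAddress"] = parts.pop(0)      # leftmost component
        if parts:
            result["PostTown"] = parts.pop(0)             # next from the left
        result["Unparsed"] = ", ".join(parts)             # whatever remains (e.g. the county)
        return result

    for row in ["1 Reid Street, Manchester, M1 2DF",
                "15 Jones Street, Newcastle, Tyne & Wear, NE1 3DN"]:
        print(parse_address(row))

Anything that doesn't satisfy a rule, such as the county in the second example or a malformed postcode like "E12,2FH", ends up in the unparsed remainder for a human to review.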
Excellent points raised by Hanika. Some of your parsing will depend on what your target destination looks like. As an ignorant yank, based on Hanika's link, I'd think your output would look something like
Addressee
Organisation
BuildingName
BuildingAddress
Locality
PostTown
Postcode
BasicsMet (boolean indicating whether the minimum criteria for a good address have been met)
In the US, just because an address could not be properly CASSed doesn't mean it couldn't be delivered; for example, my grandparents-in-law live in a small enough town that specifying their name and city is sufficient for delivery, as the local postal officials know who they are. For bulk mailings, though, their address would not qualify for the bulk mailing rate and would default to first-class mailing. I assume a similar scenario exists for UK mail.
The general idea is for each row flowing through, you'll want to do your best to parse the data out into those buckets. The optimal solution for getting it "right" is to change the data entry method to validate and capture data into those discrete buckets. Since optimal never happens, it becomes your task to sort through the dross to find your gold.
Whilst you can write some fantastic expressions with FINDSTRING, I'd advise against it in this case as maintenance alone will drive you mad. Instead, add a Script Transformation and build your parsing logic in .NET (VB or C#). There will then be a cycle of running data through your transformation and having someone eyeball the results. If you find a new scenario, you go back and adjust your business rules. It's ugly, it's iterative and it's prone to producing results that a human wouldn't have produced.
Alternatives to rolling your own address standardisation logic
Buy it. Eventually your business needs will outpace your ability to cope with constantly changing business rules. There are plenty of vendors out there, but I'm only familiar with US-based ones.
Upgrade to SQL Server 2012 to use DQS (Data Quality Services). You'll probably still need to buy a product to build out your knowledge base, but you could offload the business-rule-making task to a domain expert ("Hey you, you make peanuts an hour. Make sure all the addresses coming out of this look like addresses" was how they covered this at the beginning of one of my jobs).

Heuristic to predict Name or Company

Problem
We are receiving strings and they may represent either a company name or a person's name. We need a heuristic to determine this.
Initial thoughts
Use an XML doc with either a <Commercial>String</Commercial> or <Personal>String</Personal> node and score matching strings +1 (sorry, don't know how to format XML on SO)
Can't just check for proper nouns, e.g. "Bob's Company" is a company whereas "Bob Compton" is a name
Need to return a confidence level in some format. I can't think of how to do it as a percentage; all I can think to do is, if it finds a match, use an integer
Possible commercial markers (all will be converted to lower case): co, co., inc, inc., etc. (and verbose versions of each)
I can get an English name list online
Question
Has anyone run into this kind of domain problem before? What methods did you use? Any flashy way of solving this?
Thank You.
I haven't done this before, but some other thoughts:
Check for non-proper nouns (e.g. "and", "the", "piping"). In fact, if you have an English dictionary and a names list, any word that is not a name could be a good pointer to a company name.
A big problem is that some companies are just named after a person(s). "Fred Meyer", "J.C. Penney", and "Lockheed Martin" are examples of companies that look just like human names. There's likely no really good way around this (probably nothing easy anyway). If you can categorize first and last names, a double last name or last name only might be a good reason to lower the certainty.
I would agree with your integer idea. Unless you can do some very broad and very thorough testing, your percentages would probably be meaningless. I would probably run all the tests (returning name, company, or unknown) and compare the results, adding up an integer based on consistency in results.
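To illustrate the integer-score idea, here is a toy sketch; the keyword, dictionary and name lists are tiny stand-ins for the real lists you would load:

    # positive score -> company, negative -> person, zero -> unknown
    COMPANY_KEYWORDS = {"co", "inc", "ltd", "llc", "corp", "company", "corporation"}
    COMMON_WORDS = {"and", "the", "of", "piping", "services", "systems"}   # dictionary words that are not names
    FIRST_NAMES = {"bob", "fred", "martin", "tom"}                         # from an English name list

    def classify(raw):
        tokens = [t.strip(".,").lower() for t in raw.split()]
        score = 0
        score += 2 * sum(t in COMPANY_KEYWORDS for t in tokens)
        score += sum(t in COMMON_WORDS and t not in FIRST_NAMES for t in tokens)
        if "'s" in raw.lower():
            score += 1                                    # possessive, e.g. "Bob's Company"
        if len(tokens) == 2 and tokens[0] in FIRST_NAMES and not any(t in COMPANY_KEYWORDS for t in tokens):
            score -= 2                                    # looks like First Last
        label = "company" if score > 0 else "person" if score < 0 else "unknown"
        return label, score

    for s in ["Bob's Company", "Bob Compton", "Fred Meyer Inc."]:
        print(s, "->", classify(s))

As noted above, names like "Fred Meyer" without the "Inc." would still fool it, so the score is a confidence hint rather than a verdict.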
Can you compare to a database of known company names?
E.g. in the UK: http://wck2.companieshouse.gov.uk
Of course, this doesn't help if it's actually someone's name, but there's a company with the same name.
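A minimal sketch of that lookup, with light normalisation (punctuation and legal suffixes stripped) so near-variants still match; the sample set here stands in for a real Companies House extract:

    import re

    LEGAL_SUFFIXES = {"ltd", "limited", "plc", "inc", "llc", "co", "corp", "corporation"}
    KNOWN_COMPANIES = {"fred meyer", "lockheed martin"}   # pre-normalised names from the registry

    def normalise(name):
        tokens = re.findall(r"[a-z0-9&]+", name.lower())
        while tokens and tokens[-1] in LEGAL_SUFFIXES:    # drop trailing "Ltd", "Inc.", etc.
            tokens.pop()
        return " ".join(tokens)

    def is_known_company(name):
        return normalise(name) in KNOWN_COMPANIES

    print(is_known_company("Fred Meyer, Inc."))   # True
    print(is_known_company("Bob Compton"))        # False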

Accurate algorithm for normalizing taxonomy terms?

I'm developing a shopping comparison website, and the project is in a very advanced stage. We index 50 million products daily using merchant feeds from various affiliate networks. Most of the problems I had is already solved, including the majority of the performance bottlenecks.
What is my problem: first of all, we are using Apache Solr with Drupal, BUT this problem IS NOT specific to Drupal or Solr; if you have no knowledge of them, it doesn't matter.
We receive product feeds from over 2000 different merchants, and those feeds are a mess. They have no specific pattern; each merchant sends the feeds the way they want. We have already solved many problems regarding this, but one remains: normalizing the taxonomy terms for the faceted browsing functionality.
Suppose that I have a "Narrow by Brands" browsing facet on my website. Now suppose that 100 merchants offer products from Microsoft. Now comes the problem. Some merchants put in the "Brands" column of the data feed "Microsoft", others "Microsoft, Inc.", others "Microsoft Corporation" others "Products from Microsoft", etc... there is no specific pattern between merchants and worst, some individual merchants are so sloppy that they have different strings for the same brand IN THE SAME DATA FEED.
We do not want all those different brands appearing in the navigation. We have a manual solution to the problem where we manually map the imported brands to the "good" brands table ("Microsoft Corporation" -> "Microsoft", "Products from Microsoft" -> "Microsoft", etc..). We have something like 10,000 brands in the database and this is doable. The problem is when it comes with bigger things like "Authors". When we import books into the system, there are over 800,000 authors and we have the same problem and this is not doable by hand mapping. The problem is the same: "Tom Mike Apostol", "Tom M. Apostol", "Apostol, Tom M.", etc...
Does anybody know a good way to automatically solve this problem with an acceptable degree of accuracy (85%-95% accuracy)?
Thank you for the help!
An idea that comes to my mind, although it's just a loose thought (a rough sketch appears at the end of this answer):
Convert names to initials (in your example: TMA). Treat '-' as a space, so e.g. Antoine de Saint-Exupéry would be ADSE. The problem here is how to treat ','; although, since its common usage is to put the surname before the forename, just swapping positions should work (so A,TM would become TM,A; get rid of the comma - TMA).
Filter the authors in the database by those initials.
For each initial, if you have the whole name (Tom, Apostol), check whether it matches; otherwise (M.) consider it a match automatically.
If you want some tolerance, you can compare names with Levenshtein distance and tolerate some differences (here you have an Oracle implementation).
Names that match you treat as the same author. To find the whole name, for each initial (T, M, A) you look up your filtered authors (after step 2) and try to find one with not just an initial (M.) but a whole name (Mike); if you can't find one, use the initial. Therefore, each of the examples you gave would be converted to the same value, which would be the full name (Tom Mike Apostol).
Things that are worth thinking about:
Include mappings for name synonyms (most likely a few hundred records at most), like Thomas <-> Tom.
For this to work it is crucial to have valid initials (no M instead of N, etc.).
edit: I coded such a thing some time ago, when I had to identify a person by their signature, ignoring scanning problems. People sometimes sign as Name S. Surname, or N.S., or just Name Surname (which is another thing you should maybe consider in the solution, to allow the algorithm to ignore the second name, although in your situation it would be rather rare to omit someone's second name, I guess).
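A rough Python rendering of the approach above; the comma and hyphen handling and the similarity threshold are assumptions, and difflib's SequenceMatcher stands in for a proper Levenshtein distance:

    from difflib import SequenceMatcher

    def name_parts(raw):
        # "Apostol, Tom M." -> ["Tom", "M", "Apostol"]; hyphens are treated as spaces
        raw = raw.replace("-", " ")
        if "," in raw:
            surname, rest = raw.split(",", 1)
            raw = rest + " " + surname
        return [p.strip(".") for p in raw.split() if p.strip(".")]

    def initials(raw):
        return "".join(p[0].upper() for p in name_parts(raw))

    def same_author(a, b, tolerance=0.8):
        if initials(a) != initials(b):
            return False
        for x, y in zip(name_parts(a), name_parts(b)):
            if len(x) == 1 or len(y) == 1:
                continue                                  # a bare initial matches automatically
            if SequenceMatcher(None, x.lower(), y.lower()).ratio() < tolerance:
                return False
        return True

    variants = ["Tom Mike Apostol", "Tom M. Apostol", "Apostol, Tom M."]
    print(all(same_author(variants[0], v) for v in variants))   # True

Picking the variant with the most spelled-out parts as the canonical full name then covers the last step.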