Comparing two different sentences that have the same meaning in Excel

I have two cells.
Cell 1 contains this value --> Portfolio Rule Failure (Justification Required): Style Sector Structure: 0.93% for MUNI - SENIOR LIVING breaks the 0.00% maximum failure limit. Style Min Security Rating: NR breaks the BBB- minimum failure limit.
Cell 2 contains this value --> Hard Rule Failure (Requires Portfolio Rule Justification to override): Sector Max Weight % - Style failed: MUNI - SENIOR LIVING: 0.93% Min None Max 0% Min Security Rating - Style failed: Worse Than BBB-: 0.93% Min None Max 0%
If you read them, both convey the same meaning. If I compare the two in Excel, it will say they are different, but they actually mean the same thing even though the wording differs. Is there a way, in Excel or some data analysis tool, to say that both are the same?
One option would be to replace the similar pattern words in one of the columns with the other's, but I have thousands of records like this, so updating them manually is close to impossible.
Please advise.

Here's an approach you might try: get the complete inventory of all possible messages or message patterns into a dedicated worksheet, identify the duplicates there and assign each one a standard definition, then use VLOOKUP to grab that standard.
Essentially you build a dictionary that interprets the messages once, and then refer to it as needed.
You may need to parse the original message into logical pieces (sketched below), like:
Message type, e.g. hard failure, warning, etc.
Attribute that triggered the message, e.g. MUNI - SENIOR LIVING
Reason, e.g. failure limit exceeded
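If you ever take this outside Excel, the same idea is easy to prototype in a script. Below is a minimal Python sketch of the "parse into pieces, then compare the pieces" step; the regular expressions are guesses based only on the two example messages above (and only cover the sector-limit rules, not the rating rules), so they would have to be generalized from your full message inventory.

import re

def sector_rules(message):
    # Reduce a message to (attribute, observed %, limit %) triples so that
    # differently worded messages about the same breach compare equal.
    rules = set()
    # Style 1: "0.93% for MUNI - SENIOR LIVING breaks the 0.00% maximum failure limit"
    for obs, attr, limit in re.findall(r"([\d.]+)% for (.+?) breaks the ([\d.]+)% maximum", message):
        rules.add((attr.strip().upper(), float(obs), float(limit)))
    # Style 2: "... failed: MUNI - SENIOR LIVING: 0.93% Min None Max 0%"
    for attr, obs, limit in re.findall(r"failed:\s*(.+?):\s*([\d.]+)% Min \S+ Max ([\d.]+)%", message):
        rules.add((attr.strip().upper(), float(obs), float(limit)))
    return rules

cell1 = "Portfolio Rule Failure (Justification Required): Style Sector Structure: 0.93% for MUNI - SENIOR LIVING breaks the 0.00% maximum failure limit."
cell2 = "Hard Rule Failure (Requires Portfolio Rule Justification to override): Sector Max Weight % - Style failed: MUNI - SENIOR LIVING: 0.93% Min None Max 0%"

print(sector_rules(cell1) == sector_rules(cell2))   # True: same sector, same value, same limit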

Related

Alternative to SUMPRODUCT

Does anyone know a more efficient way to write this? I keep getting "not enough system resources" errors from Excel because I have about 30 slightly varied versions of this formula in my spreadsheet.
=SUMPRODUCT((Data!B2:B1000="Human Resources")*(Data!E2:E1000<>"Resolved")*(Data!E2:E1000<>"Closed")*(Data!E2:E1000<>"Cancelled"))
I have looked into COUNTIFS but I can't seem to get it to work.
COUNTIFS should do the job, like this
=COUNTIFS(B:B,"Human Resources",E:E,"<>Resolved",E:E,"<>Closed",E:E,"<>Cancelled")
I think what you are after is something like this (note that the spacing between the ranges is exaggerated for emphasis; remove it for actual use):
=SUMPRODUCT((Data!B2:B1000="Human Resources") * (Data!E2:E1000<>"Closed") , (Data!B2:B1000="Human Resources") * (Data!E2:E1000<>"Cancelled" ) , (Data!B2:B1000="Human Resources") * (Data!E2:E1000<>"Resolved" ))
Test data (column B | column E):
Human Resources | Resolved
Human Resources | Closed
Human Resources | Cancelled
Human Resources | Open
Human Resources | Yellow
Human Resources | Duck
Human Resources | Rock
Human Resources | Resolved
Human Resources | Closed
Human Resources | Cancelled
Parks and Rec | 3
Expected: 4
Result: 4
I tried the same formula, changing the references to point to a larger local set with a random sample of 1000 rows, and there was no noticeable slowdown or warning. It is by no means benchmarked or optimized, but the intent is pretty clear.
Larger data sets might make your machine run out of memory or other resources; if you suspect that is the case, partition the data set and do a chunk at a time (perhaps start by splitting it in half and attempting 500 rows at once, i.e. one SUMPRODUCT over rows 2:500 plus another over rows 501:1000), summing the chunk results once all chunks are done.
One last note, for posterity: SUMPRODUCT behaviour is fairly version dependent. If you run into problems, you can change it to a strictly SUM formula with just a bit more work.

NLP - Improving Running Time and Recall of Fuzzy string matching

I have a working algorithm, but the running time is horrible. I knew from the start that it would be slow, but not this slow. For just 200,000 records, the program runs for more than an hour.
Basically what I am doing is:
for each searchfield in search fields
    for each sample in samples
        do a q-gram matching
        if there are matches then return it
        else
            split the searchfield into uniwords
            for each sample in samples
                split the sample into uniwords
                for each uniword in the sample
                    if the uniword is a known abbreviation
                        then search the dictionary for its full word or other known abbr
                    else do a jaro-winkler matching
                average the distances of all the uniwords
                if the average is above threshold then mark it as a match and break
            end for
            if there is a match, make a comment that it matched one of the samples partially
        end else
    end for
Yes, this code is very loop-happy. I am using brute force because recall is very important. So I am wondering how I can make it faster, since I will be running it not only on 200,000 records but on millions, and the client's computers are not high-end (a Pentium 4 or Dual-Core with 1-2 GB of RAM; the machine I test this program on is a Dual-Core with 4 GB of RAM). I came across TF/IDF but I do not know whether it will be sufficient. And I wonder how Google can make searches real time.
Thanks in advance!
Edit:
This program is a data filter. From 200,000 dummy records (the actual data is about 12M), I must filter out the data that is irrelevant to the samples (500 dummy samples; I still do not know how large the actual sample set will be).
With the given dummy data and samples, the running time was about an hour, but after tinkering here and there I have cut it to 10-15 minutes. I did this by grouping the fields and samples that begin with the same character (discounting special and non-meaningful words, e.g. the, a, an) and matching fields only against samples with the same first character. I know there is a problem there: what if a field is misspelled at the first character? But I think the number of those is negligible, and the samples are spelled correctly since they are always maintained.
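For what it's worth, the grouping described here looks roughly like the Python sketch below; the stop-word list, q=3 and the Jaccard score over q-grams are assumptions standing in for whatever q-gram measure is already in use.

from collections import defaultdict

STOP_WORDS = {"the", "a", "an"}          # the "non-meaningful" words mentioned above
Q = 3                                    # q-gram size

def first_meaningful_char(text):
    for word in text.lower().split():
        if word not in STOP_WORDS:
            return word[0]
    return ""

def qgrams(text, q=Q):
    text = text.lower()
    return {text[i:i + q] for i in range(max(len(text) - q + 1, 1))}

def qgram_score(a, b):
    ga, gb = qgrams(a), qgrams(b)
    return len(ga & gb) / len(ga | gb)   # Jaccard similarity over q-gram sets

def block_samples(samples):
    # Group the samples once by the first character of their first meaningful word.
    blocks = defaultdict(list)
    for s in samples:
        blocks[first_meaningful_char(s)].append(s)
    return blocks

def best_match(field, blocks, threshold=0.5):
    # Only compare the field against samples in its own block.
    candidates = blocks.get(first_meaningful_char(field), [])
    scored = [(qgram_score(field, s), s) for s in candidates if s]
    good = [pair for pair in scored if pair[0] >= threshold]
    return max(good) if good else None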
What is your programming language? I guess q=2 or 3 is sufficient. I would also suggest moving from unigrams to higher-order n-grams.

Find UK PostCodes closest to other UK Post Codes by matching the Post Code String

Here is a question that has kept me awake for a number of days now. The only conclusion I have come to so far is that Red Bull does not usually help coders.
I have a scenario in my application where I have a couple of jobs (1 to 50). The job has an address and I have the following properties of an address: Postcode, Latitude, and Longitude.
I have a table of workers too, and they also have addresses. When jobs or workers are created through the screens, I use Google Maps queries to make sure the provided postcode is valid and is in the UK, so all the addresses are verified.
I am using a scheduler control to display workers on the y-axis and a timeline on the x-axis. Every job has a date and can only move vertically on the scheduler, on the job's date. The user selects a number of jobs and they are displayed in a basket next to the scheduler. The user can then drag and drop jobs onto workers. All of this is manual, so it works.
My task is to automate this so that the user does not do much beyond verifying and allotting the jobs.
Every worker has a property called WillingMaximumDistanceTravel which is an integer representing miles, the worker is willing to travel for a job.
Now here is the headache: I have over 1500 workers. I have a utility function that uses Newtonsoft's JsonConvert to deserialize a response stream from Google Maps. I need to feed it Postcodes A and B.
I also plan to introduce a new table to the DB to store the distances found, as Postcode A, Postcode B, and Distance. That way, if I find myself comparing the same postcodes again, I can just retrieve the result from the DB instead, and slowly but surely I would no longer need to bother Google at all, as this table would become comprehensive.
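The cache table described here might look roughly like this (sketched in Python with SQLite just to show the shape; get_drive_distance_from_google() is a placeholder for the existing Json.NET/Google Maps call, and treating A-to-B and B-to-A as the same distance is a simplifying assumption):

import sqlite3

conn = sqlite3.connect("distances.db")
conn.execute("""CREATE TABLE IF NOT EXISTS postcode_distance (
                    postcode_a TEXT, postcode_b TEXT, miles REAL,
                    PRIMARY KEY (postcode_a, postcode_b))""")

def get_drive_distance_from_google(a, b):
    raise NotImplementedError("placeholder for the existing Google Maps call")

def driving_miles(a, b):
    # Store each pair once, in canonical order, so (A, B) and (B, A) share a row.
    a, b = sorted((a.strip().upper(), b.strip().upper()))
    row = conn.execute(
        "SELECT miles FROM postcode_distance WHERE postcode_a = ? AND postcode_b = ?",
        (a, b)).fetchone()
    if row:
        return row[0]                                # cache hit: no Google call
    miles = get_drive_distance_from_google(a, b)     # cache miss: ask Google once
    conn.execute("INSERT INTO postcode_distance VALUES (?, ?, ?)", (a, b, miles))
    conn.commit()
    return miles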
I cannot use the simple Haversine formula, as a crow-flies path is not my requirement here. The pain is that it takes a lot of time to calculate: some workers can travel over 10 miles while others vary from 15 to 80, and I have to take the first job from the list and run it against every applicable worker on the system! I was wondering: the UK postcode has a pattern to it. If we sort a list of UK postcodes, can we roughly estimate from the alphanumeric pattern where we will hit a 100-mile mark, a 200-mile mark, and so on?
If anyone is interested in the code, please drop a line and I will paste it.
(I work for Google, but I'm not speaking on behalf of Google. I have nothing to do with the maps API.)
I suspect this isn't a great situation for using the Google Maps API, simply because you're pushing so much data through. You really don't want to make that many requests, even if you could do so under the directions limits.
When I tackled something similar in a previous job, we bought into a locally-hosted maps API - but even that wasn't fast enough for this sort of work. We ended up precomputing the time to travel from the centroid of each postcode "area" (probably the wrong name for it, but the first part of the postcode followed by the first digit of the remainder, e.g. "SW1W 9" for "SW1W 9TQ") to every other area, storing the result in a giant table. I think we only did it for postcodes which were within 100 miles or something similar, to cut down on the amount of preprocessing.
Even then, a simple DB wasn't quite as fast as we wanted - so we stored the results in a giant file, with a single byte per source/destination pair. (We had a fixed sequence of source postcodes and target postcodes, so we didn't need to specify those.) At that point, computing a travel time consisted of:
Work out postcode areas (substring work)
Find the index of each postcode area within the sequence
Check if we'd loaded that part of the file (we lazy loaded for startup speed)
Load the row if necessary, and just access it otherwise
The bytes were on a sliding scale of accuracy, so for the first 60 minutes it was on a per-minute basis, then each extra value meant an extra 2 minutes, then 5 etc. (Those aren't the exact values, but it was something like that.)
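To make that concrete, here is a rough Python sketch of the lookup; the area extraction, the index tables, the lazy row loader and the decode scale are all illustrative (as noted above, the real scale values were different):

def postcode_area(postcode):
    # "SW1W 9TQ" -> "SW1W 9": the first part plus the first digit of the remainder.
    outward, inward = postcode.upper().split()
    return f"{outward} {inward[0]}"

area_index = {}   # area string -> position in the fixed source/destination sequence
rows = {}         # row index -> bytes object, lazily loaded from the big file

def load_row(src_index):
    raise NotImplementedError("hypothetical lazy loader over the precomputed file")

def decode_minutes(b):
    # Sliding-scale accuracy: per-minute up to 60, two minutes per step after that
    # (illustrative breakpoints only).
    return b if b <= 60 else 60 + (b - 60) * 2

def travel_minutes(src_postcode, dst_postcode):
    src = area_index[postcode_area(src_postcode)]
    dst = area_index[postcode_area(dst_postcode)]
    if src not in rows:
        rows[src] = load_row(src)          # lazy load for startup speed
    return decode_minutes(rows[src][dst])  # one byte per source/destination pair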
When you've worked out "good candidates" you can ask an on-site API or the Google Maps API for more accurate directions for your exact postcodes, of course.
You want to look for a spatial index or a space-filling curve. A spatial index reduces the 2D problem to a 1D problem by recursively subdividing the surface into smaller tiles; it is basically a reordering of the tiles. You can subdivide the surface either with a numeric index or with a string over a 4-character alphabet. The latter can be useful to you because it lets you query the string with all the string operations hidden in the database engine. You want to look for Nick's spatial index quadtree Hilbert curve blog.
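To make the 4-character-string idea concrete: a quadtree key appends one of four characters per level of subdivision, so nearby points share a key prefix and "points near X" becomes a LIKE 'prefix%' query the database engine can handle. A small Python sketch (a true Hilbert curve has better locality; this is just the simplest variant):

def quadtree_key(lat, lng, depth=10):
    # Encode a point as a string over a 4-character alphabet, one character per level.
    lat_lo, lat_hi = -90.0, 90.0
    lng_lo, lng_hi = -180.0, 180.0
    key = []
    for _ in range(depth):
        lat_mid = (lat_lo + lat_hi) / 2
        lng_mid = (lng_lo + lng_hi) / 2
        quadrant = 0
        if lat >= lat_mid:
            quadrant += 2
            lat_lo = lat_mid
        else:
            lat_hi = lat_mid
        if lng >= lng_mid:
            quadrant += 1
            lng_lo = lng_mid
        else:
            lng_hi = lng_mid
        key.append("abcd"[quadrant])
    return "".join(key)

# Nearby points share a longer prefix than distant ones (coordinates are approximate).
print(quadtree_key(51.50, -0.14))   # central London
print(quadtree_key(51.50, -0.12))   # also central London: long shared prefix
print(quadtree_key(53.48, -2.24))   # Manchester: the prefix diverges much earlier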

Evaluating the "Value" Attribute

I'm attempting to use the OpenAmplify API to evaluate the content of a URI. The point is to draw out the topics that are truly relevant to the article. Unfortunately, the topical analysis I'm getting back is:
Huge, and
Varied
Neither quality is terribly useful for what I'm trying to do because the signal to noise ratio is being heavily skewed towards noise. I'm analyzing web content, so there is a certain amount (perhaps a large amount) of irrelevant content (ads, etc.) involved. I get that.
Nonetheless, many of the topics being returned are either useless (utterly nonsensical, not even words), irrelevant (as in, where did that come from?) or too granular to provide any meaning or insight. I can probably filter out most of this noise using the value, um, value that is returned for each domain, subdomain, topic, et al., but I don't really know what it means.
Certainly I understand that the value is a measure of "the prominence of the word in the text," but the number itself appears entirely arbitrary, in a way that prevents me from saying something like "ignore any terms with a value less than 50" and having it carry any real meaning.
Are there any range criteria that I can use to help me understand how to use a topic's value score as a filtering threshold? Alternatively, is there another field that I should be using for this sort of filtration?
Thanks for your help.
From other channels, I've learned that the value attribute can't be evaluated the way I was hoping. It means different things for different signals, and none of them is defined in a way that is meaningful for this kind of requirement.

Algorithms for splitting personal names in parts

I'm looking for references on separating a name such as "John A. Doe" into parts: first=John, middle=A., last=Doe. In Mexico we have paternal and maternal surnames and first and second given names, which can be written in different permutations, so the problem is quite complex.
Since it depends on the data, we are working with matching software that calculates a score for every word so we can make decisions (it is based on a big database). The input data is not clean; it is imported from government web pages and is human-filtered, so it can contain junk that has to be recognized as well. Any suggestions?
[Edit]
Examples:
name:
Javier Abdul Córdoba Gándara
common permutations (i.e. how it may appear in government data referring to the same person):
Córdoba Gándara Javier Abdul
Javier A. Córdoba Gándara
Javier Abdul Córdoba G.
paternal: Córdoba
maternal: Gándara
first given: Javier
second given: Abdul
name: María de la Luz Sánchez Martínez
paternal: Sánchez
maternal: Martínez
first given: María de la Luz
name: Paloma Viridiana Alin Arias Medina
paternal: Arias
maternal: Medina
first given: Paloma
second given: Viridiana Alin
As I said, the meaning of each word depends on the score. There is no way to know that Viridiana and Alin are given names except from the score.
We have a very strong database (80 million records or so), so we can make good use of the scoring system. I am designing an algorithm that uses it, but I am looking for other references.
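Since the database already yields word scores, one common pattern is to score every plausible assignment of tokens to slots and keep the best one. A toy Python sketch, with made-up frequency tables standing in for the real scoring service:

from itertools import permutations

# Stand-ins for scores from the big database: how often a token occurs
# as a given name versus as a surname (entirely made-up numbers).
given_name_freq = {"JAVIER": 900, "ABDUL": 200, "MARÍA": 950, "VIRIDIANA": 150, "ALIN": 80}
surname_freq    = {"CÓRDOBA": 700, "GÁNDARA": 400, "ARIAS": 650, "MEDINA": 800}

def slot_score(token, slot):
    table = surname_freq if slot in ("paternal", "maternal") else given_name_freq
    return table.get(token.upper(), 1)    # unseen tokens get a small floor score

def best_assignment(tokens):
    # Try each way of picking (paternal, maternal) from the tokens; the remaining
    # tokens are treated as given names. Keep the highest-scoring assignment.
    # (Position in the original string would be the natural tie-breaker.)
    best = None
    for paternal, maternal in permutations(tokens, 2):
        given = [t for t in tokens if t not in (paternal, maternal)]
        score = (slot_score(paternal, "paternal") + slot_score(maternal, "maternal")
                 + sum(slot_score(t, "given") for t in given))
        if best is None or score > best[0]:
            best = (score, paternal, maternal, given)
    return best

print(best_assignment(["Córdoba", "Gándara", "Javier", "Abdul"]))
# -> (2200, 'Córdoba', 'Gándara', ['Javier', 'Abdul']) with these toy numbers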
Unfortunately, and having done quite a bit of this work myself, your ideal algorithm will be very data specific, and you will need to work this out for your particular situation.
Of the total time and effort to develop this algorithm, I'd say the time will be split roughly as follows:
10% for general string manipulation
30% for the specific nature of the data (Mexican name formats, data input quirks)
60% to cater for data quality, or lack of it
And I believe that's quite generous towards the general string manipulation. Of course it depends whether you need quality results for all records or only the 'clean' records, etc.; if you are able to ignore the 'difficult' records it becomes a lot simpler.
Some general tips (a short sketch follows this list):
If they are not required, remove non-alphanumeric / non-whitespace characters
Split on spaces
Use hyphens and punctuation to identify surnames or family names
Initials (which are generally single letters) are not surnames; i.e. they must be first / middle names
Determine the level of confidence that you have programmatically identified each name (and test this thoroughly). You may find there are subsets of data that contain similar patterns that need to be catered for individually (they may come from different sources etc.)
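A minimal Python sketch of the first few tips (the cleaning and initial detection are the easy part; the confidence scoring and slot assignment are where the data-specific 90% goes):

import re

def clean(raw):
    # Keep letters, digits, whitespace, hyphens and periods; drop everything else.
    return re.sub(r"[^\w\s.\-]", " ", raw).strip()

def tokenize(raw):
    tokens = clean(raw).split()
    initials = [t for t in tokens if len(t.rstrip(".")) == 1]  # single letters: never surnames
    words = [t for t in tokens if len(t.rstrip(".")) > 1]
    return words, initials

words, initials = tokenize("Javier A. Córdoba Gándara")
print(words)      # ['Javier', 'Córdoba', 'Gándara']
print(initials)   # ['A.']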
You may need to add some natural language processing or machine learning to check. The problem of identifying author names (e.g. in scientific papers) is difficult, as they can be reported with differing orders, degrees of abbreviation, elisions, etc. If your database is dirty you will end up with ambiguity whatever you do.
