Get search word hits (number of occurrences) per document in Lucene search

Can anyone suggest the best way to get the number of occurrences (hits) of a word per document in Lucene?

Lucene uses a field-based, rather than document-based, index.
In order to get term counts per document:
Iterate over documents using IndexReader.document() and isDeleted().
In document d, iterate over fields using Document.getFields().
For each field f, get its terms using IndexReader.getTermFreqVector(d, f).
Go over the term vector and sum the frequencies per term.
The sum of the term frequencies per field will give you the document's term frequency vector (see the sketch below).
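A minimal sketch of those steps against the pre-4.0 Lucene API this answer refers to (IndexReader.isDeleted/document and getTermFreqVector); it assumes term vectors were enabled on the fields when the index was built:

import java.util.HashMap;
import java.util.Map;

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Fieldable;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.TermFreqVector;

public class PerDocumentTermCounts {

    // Sum term frequencies over all fields of one document.
    public static Map<String, Integer> termCounts(IndexReader reader, int docId)
            throws Exception {
        Map<String, Integer> counts = new HashMap<String, Integer>();
        Document doc = reader.document(docId);
        for (Fieldable field : doc.getFields()) {
            TermFreqVector tfv = reader.getTermFreqVector(docId, field.name());
            if (tfv == null) {
                continue; // this field was indexed without term vectors
            }
            String[] terms = tfv.getTerms();
            int[] freqs = tfv.getTermFrequencies();
            for (int i = 0; i < terms.length; i++) {
                Integer prev = counts.get(terms[i]);
                counts.put(terms[i], (prev == null ? 0 : prev) + freqs[i]);
            }
        }
        return counts;
    }

    // Walk every live (non-deleted) document in the index and print its term counts.
    public static void dumpAll(IndexReader reader) throws Exception {
        for (int docId = 0; docId < reader.maxDoc(); docId++) {
            if (reader.isDeleted(docId)) {
                continue;
            }
            System.out.println("doc " + docId + ": " + termCounts(reader, docId));
        }
    }
}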

SpanTermQuery.getSpans will give you an enumeration of docs and the positions where the term appears. The docs are sorted, so you can just count the number of times each doc appears, ignoring the position info.
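A hedged sketch of that approach against the pre-4.0 Spans API (the field and word arguments are placeholders):

import java.util.LinkedHashMap;
import java.util.Map;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.spans.SpanTermQuery;
import org.apache.lucene.search.spans.Spans;

public class SpanHitCounts {

    // Returns docId -> number of occurrences of the word in that document.
    public static Map<Integer, Integer> countPerDoc(IndexReader reader,
                                                    String field, String word)
            throws Exception {
        SpanTermQuery query = new SpanTermQuery(new Term(field, word));
        Spans spans = query.getSpans(reader);
        Map<Integer, Integer> hitsPerDoc = new LinkedHashMap<Integer, Integer>();
        while (spans.next()) {
            int doc = spans.doc(); // docs come back in order
            Integer prev = hitsPerDoc.get(doc);
            hitsPerDoc.put(doc, (prev == null ? 0 : prev) + 1); // ignore the position info
        }
        return hitsPerDoc;
    }
}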

Related

Quanteda: Removing documents with low occurrence of word x

When reading on methods of textual analysis, some eliminate documents with "10% lowest density score", that is, documents that are relatively long compared to the occurrence of a certain keyword. How can I achieve a similar result in quanteda?
I've created a corpus using a query of the words "refugee" and "asylum seeker". Now I would like to remove all documents where the count frequency of refugee|asylum_seeker is below 3. However, I imagine it is also possible to use the relative frequency if document length is to be taken into account.
Could someone help me? The solution in my head looks like this, however I don't know how to implement it.
For count frequency: Add counts of occurrences of refugee|asylum_seeker per document and remove documents with an added count below 3.
For relative frequency: Inspect the overall average relative frequency of both words refugee and asylum_seeker, to then calculate the per row relative frequencies of the features and apply a function to remove all documents with a relative frequency of both features below X.
Create a dfm from your tokenised corpus, using dfmat <- dfm(your_tokens).
Then keep only the documents that meet your count threshold, dropping the rest. Note that dfm_remove() works on features, not documents, so use dfm_subset():
dfm_subset(dfmat,
  rowSums(dfm_select(dfmat, c("refugee", "asylum_seeker"))) >= 3)

Document similarity - Odd one out

Let's say I have "n" documents on a specific topic, each giving certain details. I want to find the documents that are not similar to the majority of the documents. As vague as this might seem, I know how to find the cosine similarity between 2 documents. But let's say I "know" I have 10 documents that are similar to each other, and I introduce an 11th document; I need a way to judge how similar this document is to those 10 collectively, and not just to every individual document.
I am working with scikit-learn, so an answer or technique with its reference will help!
Represent each document as a bag of words and use tf-idf weights to represent each word in a particular document. Then compute the cosine similarity of your target document with all n documents. Sum all the similarity values and then normalize (divide the final sim value by n). That should give you a reasonable similarity between the n documents collectively and your target document.
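The question mentions scikit-learn, but the sum-then-normalize step itself is library-agnostic. Here is a minimal sketch (in Java, purely for illustration), assuming each document has already been turned into a tf-idf vector of the same length:

public class CollectiveSimilarity {

    // Standard cosine similarity between two equal-length vectors.
    static double cosine(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    // Average cosine similarity of the target against all n collection documents.
    static double collectiveSimilarity(double[] target, double[][] collection) {
        double sum = 0;
        for (double[] doc : collection) {
            sum += cosine(target, doc);
        }
        return sum / collection.length; // normalize by n
    }
}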
You can also consider mutual information (sklearn.metrics.mutual_info_score) or KL-divergence to measure the similarity/difference between two documents. Note that if you want to use them, you need to represent each document as a probability distribution. To compute the probability of a term in a document, you can simply use the following formula:
Probability(w) = TF(w) / TTF(w)
where
TF(w) = term frequency of word w in document d
TTF(w) = total term frequency of word w (the sum of its tf across all documents)
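For example, if w appears 4 times in your target document and 20 times across all documents combined, then Probability(w) = 4 / 20 = 0.2 under this scheme.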
I believe any one of these will give you a reasonable idea of the similarity/dissimilarity between the n documents and your target document.

Best way to rank sentences based on similarity from a set of Documents

I want to know the best way to rank sentences based on similarity from a set of documents.
For example, let's say:
1. There are 5 documents.
2. Each document contains many sentences.
3. Let's take Document 1 as primary, i.e. the output will contain sentences from this document.
4. The output should be a list of sentences ranked such that the FIRST-ranked sentence is the most similar sentence across all 5 documents, then the 2nd, then the 3rd...
Thanks in advance.
I'll cover the basics of textual document matching...
Most document similarity measures work on a word basis, rather than on sentence structure. The first step is usually stemming: words are reduced to their root form, so that different forms of similar words, e.g. "swimming" and "swims", match.
Additionally, you may wish to filter the words you match to avoid noise. In particular, you may wish to ignore occurrences of "the" and "a". In fact, there are a lot of conjunctions and pronouns that you may wish to omit, so usually you will have a long list of such words - this is called a "stop list".
Furthermore, there may be bad words you wish to avoid matching, such as swear words or racial slurs. So you may have another exclusion list with such words in it, a "bad list".
So now you can count similar words in documents. The question becomes how to measure total document similarity. You need to create a score function that takes the matching words as input and gives a value of "similarity". Such a function should give a high value if the same word appears multiple times in both documents. Additionally, matches are usually weighted by overall word frequency, so that matches on uncommon words carry more statistical weight.
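As a very rough illustration of the above (stop-word filtering plus frequency-weighted word matching; stemming and the "bad list" are left out, and both the tiny stop list and the weighting are placeholders, not Lucene's actual formula):

import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ToySimilarity {

    // A tiny example stop list; a real one would be much longer.
    private static final Set<String> STOP_WORDS =
            new HashSet<>(Arrays.asList("the", "a", "an", "and", "of", "to"));

    // Tokenize, drop stop words, and count the remaining terms.
    static Map<String, Integer> termCounts(String text) {
        Map<String, Integer> counts = new HashMap<>();
        for (String token : text.toLowerCase().split("\\W+")) {
            if (token.isEmpty() || STOP_WORDS.contains(token)) continue;
            counts.merge(token, 1, Integer::sum);
        }
        return counts;
    }

    // Higher score = more similar; words that are rare in the whole collection count more.
    static double score(Map<String, Integer> d1, Map<String, Integer> d2,
                        Map<String, Integer> collectionFrequency) {
        double score = 0.0;
        for (Map.Entry<String, Integer> e : d1.entrySet()) {
            Integer other = d2.get(e.getKey());
            if (other == null) continue; // word does not appear in both documents
            int cf = collectionFrequency.getOrDefault(e.getKey(), 1);
            score += (e.getValue() * other) / (double) cf; // down-weight common words
        }
        return score;
    }
}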
Apache Lucene is an open-source search engine written in Java that provides practical detail about these steps. For example, here is the information about how they weight query similarity:
http://lucene.apache.org/java/2_9_0/api/all/org/apache/lucene/search/Similarity.html
Lucene combines Boolean model (BM) of Information Retrieval with Vector Space Model (VSM) of Information Retrieval - documents "approved" by BM are scored by VSM.
All of this is really just about matching words in documents. You did ask specifically about matching sentences, but for most people's purposes matching words is more useful, since you can have a huge variety of sentence structures that mean the same thing. Most of the useful similarity information is in the words themselves. I've talked about document matching, but for your purposes, a sentence is just a very small document.
Now, as an aside, if you don't care about the actual nouns and verbs in the sentence and only care about grammar composition, you need a different approach...
First you need a link grammar parser to interpret the language and build a data structure (usually a tree) that represents the sentence. Then you have to perform inexact graph matching. This is a hard problem, but there are algorithms to do this on trees in polynomial time.
As a starting point you can compute the Soundex code for each word and then compare documents based on their Soundex code frequencies.
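One way to act on that suggestion, sketched here with Apache Commons Codec's Soundex class (an assumed extra dependency); each document becomes a frequency map of Soundex codes that you can then compare:

import java.util.HashMap;
import java.util.Map;

import org.apache.commons.codec.language.Soundex;

public class SoundexProfile {

    // Map each word to its Soundex code and count the codes per document.
    public static Map<String, Integer> profile(String text) {
        Soundex soundex = new Soundex();
        Map<String, Integer> frequencies = new HashMap<>();
        for (String word : text.split("\\W+")) {
            if (word.isEmpty()) continue;
            String code = soundex.encode(word); // e.g. "swimming" -> "S552"
            frequencies.merge(code, 1, Integer::sum);
        }
        return frequencies;
    }
}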
Tim's overview is very nice. I'd just like to add that for your specific use case, you might want to treat the sentences from Doc 1 as documents themselves, and compare their similarity to each of the four remaining documents. This might give you a quick aggregate similarity measure per sentence without forcing you to go down the route of syntax parsing etc.
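A hedged sketch of that suggestion: treat each sentence of Document 1 as a tiny document, give it an aggregate similarity score against the remaining documents, and sort. The scoreAgainst() helper is only a placeholder (simple word overlap here); swap in the toy scorer above, or cosine over tf-idf vectors, as you prefer:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class SentenceRanker {

    // Rank Document 1's sentences by their average similarity to the other documents.
    public static List<String> rank(List<String> doc1Sentences, List<String> otherDocuments) {
        final Map<String, Double> scores = new HashMap<>();
        for (String sentence : doc1Sentences) {
            double sum = 0.0;
            for (String document : otherDocuments) {
                sum += scoreAgainst(sentence, document);
            }
            scores.put(sentence, sum / otherDocuments.size()); // aggregate similarity
        }
        List<String> ranked = new ArrayList<>(doc1Sentences);
        ranked.sort((a, b) -> Double.compare(scores.get(b), scores.get(a))); // best first
        return ranked;
    }

    // Placeholder similarity: fraction of the sentence's words that occur in the document.
    private static double scoreAgainst(String sentence, String document) {
        Set<String> documentWords =
                new HashSet<>(Arrays.asList(document.toLowerCase().split("\\W+")));
        String[] words = sentence.toLowerCase().split("\\W+");
        int hits = 0;
        for (String w : words) {
            if (documentWords.contains(w)) hits++;
        }
        return words.length == 0 ? 0.0 : hits / (double) words.length;
    }
}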

How to search phrase queries in inverted index structure?

If we want to search for a query like "t1 t2 t3" (t1, t2, t3 must appear in sequence) in an inverted index structure, which way should we do it?
1. First we search for the term "t1" and find all documents that contain "t1", then do the same for "t2" and then "t3". Then we find the documents in which the positions of "t1", "t2" and "t3" are next to each other.
2. First we search for the term "t1" and find all documents that contain "t1"; then, in all the documents we found, we search for "t2"; and next, in the result of that, we find the documents that contain "t3".
I have a full inverted index. I want to know which of the ways above is more efficient, (1) or (2)?
Thanks a lot.
As the Wikipedia entry explains well:
There are two main variants of inverted indexes: A record level inverted index (or inverted file index or just inverted file) contains a list of references to documents for each word. A word level inverted index (or full inverted index or inverted list) additionally contains the positions of each word within a document. The latter form offers more functionality (like phrase searches), but needs more time and space to be created.
Since you don't tell us which variant you have, we can't really answer your question precisely, but thinking about each possibility will help.
Opening and searching documents is typically a costly operation, unless your documents are unusually small, so you want to minimize that -- and option (2) doesn't really minimize it. If you have an inverted list (word level), with option (1) you won't even need to open any document; if you only have an inverted file (record level), you'll inevitably need to open and scan documents (since you otherwise lack the information to confirm word adjacency) -- but at least with option (1) you minimize the number of documents you have to open and scan (only those in the intersection of the lists of documents containing each word).
So, in either case, option (1) is more promising (unless your documents are peculiarly small).
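To make option (1) on a word-level index concrete, here is a hedged sketch; the index layout used (term -> docId -> sorted positions) is an assumption for illustration only, not how any particular engine stores its postings:

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class PhraseSearch {

    // term -> docId -> sorted positions of that term within the document
    private final Map<String, Map<Integer, List<Integer>>> index;

    public PhraseSearch(Map<String, Map<Integer, List<Integer>>> index) {
        this.index = index;
    }

    public Set<Integer> phraseDocs(String... terms) {
        // 1. Intersect the document lists (cheap; no document is opened).
        Set<Integer> candidates = new TreeSet<>(postings(terms[0]).keySet());
        for (int i = 1; i < terms.length; i++) {
            candidates.retainAll(postings(terms[i]).keySet());
        }
        // 2. Keep only documents where the terms occur at consecutive positions.
        Set<Integer> result = new TreeSet<>();
        for (int doc : candidates) {
            for (int start : postings(terms[0]).get(doc)) {
                boolean adjacent = true;
                for (int i = 1; i < terms.length; i++) {
                    if (!postings(terms[i]).get(doc).contains(start + i)) {
                        adjacent = false;
                        break;
                    }
                }
                if (adjacent) {
                    result.add(doc);
                    break;
                }
            }
        }
        return result;
    }

    private Map<Integer, List<Integer>> postings(String term) {
        return index.getOrDefault(term, Collections.emptyMap());
    }
}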

Lucene number extracting

I have this number extracting problem.
I want to get all matches that don't have a certain number in them, e.g.: 125501874, 125001873.
Every number that has 55 in the third and fourth positions is not to be considered.
The first digit ranges from 0 to 9 and the second from 1 to 9, so the real range is [01-99] (we cannot have 00 as the first two digits).
With Lucene I wanted to add NOT field:[01-99]55*
But it doesn't seem to work. Is there an easy way to find ??55* and disregard it in a Search("NOT field:[01-99]55*")?
Thank you Lucene guru
Lucene can do this very efficiently if one creates an "index-only" field with only the third and fourth digits in it. The complete value can be "stored" (or stored and indexed if other queries use the whole number) in the original field.
Update: A followup comment asked, "Is [there] a way to create a temporary index on only the second digit?"
Using a ParallelReader "vertically partitions" the fields of an index. One partition could hold the current index, with its fields, while the other is a temporary index with the new field, possibly stored in a RAMDirectory.
Assuming the number is "stored" in the original index, iterate over each document in the original index, retrieve the stored field, parse out the key digits, and add a Document to the temporary index with the new field. As the ParallelReader documentation states, it is imperative that the document numbers match in both indexes.
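A hedged sketch of that procedure against the pre-4.0 Lucene API; the field names "number" and "digits34" are illustrative assumptions. It builds a temporary RAMDirectory index holding only the third-and-fourth-digit field, keeping document numbers aligned with the original index, then reads both through one ParallelReader:

import org.apache.lucene.analysis.KeywordAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.ParallelReader;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

public class DigitsParallelIndex {

    public static IndexReader buildCombinedReader(Directory originalDir) throws Exception {
        IndexReader original = IndexReader.open(originalDir);
        Directory tempDir = new RAMDirectory();
        IndexWriter writer = new IndexWriter(tempDir, new KeywordAnalyzer(),
                true, IndexWriter.MaxFieldLength.UNLIMITED);

        for (int i = 0; i < original.maxDoc(); i++) {
            Document extra = new Document();
            if (!original.isDeleted(i)) {
                String number = original.document(i).get("number"); // the stored whole number
                if (number != null && number.length() >= 4) {
                    // index-only field holding the third and fourth digits
                    extra.add(new Field("digits34", number.substring(2, 4),
                            Field.Store.NO, Field.Index.NOT_ANALYZED));
                }
            }
            // Add a document for every slot (even deleted ones) so that
            // document numbers stay aligned, as ParallelReader requires.
            writer.addDocument(extra);
        }
        writer.optimize();
        writer.close();

        ParallelReader combined = new ParallelReader();
        combined.add(original);                  // the original fields
        combined.add(IndexReader.open(tempDir)); // plus the new digits34 field
        return combined;
    }
}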
Thank you erickson, your solution is probably the best, using ParallelReader, if only I could use temporary indexes; because we cache the search queries, we will need those later.
But like you said before, better to start with an index on the relevant digits straightaway.
I have another solution.
NOT field:0?55*
NOT field:1?55*
...
NOT field:9?55*
It is efficient enough for the search I'm doing and it bypasses the first-character wildcard limitation. I wouldn't use this if there were more digits to check or if they were farther from the start.
I'm now testing this on a million rows and it's pretty efficient for our needs.
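The same workaround, expressed programmatically rather than in query syntax (a sketch against the pre-4.0 Lucene API; "field" is a placeholder name): match everything, then exclude each d?55* pattern with a MUST_NOT clause. None of the wildcards are in the leading position, so the usual leading-wildcard restriction does not apply.

import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.WildcardQuery;

public class ExcludeDigits55 {

    public static BooleanQuery build() {
        BooleanQuery query = new BooleanQuery();
        // A purely negative query matches nothing, so anchor it with a MUST clause.
        query.add(new MatchAllDocsQuery(), BooleanClause.Occur.MUST);
        for (int firstDigit = 0; firstDigit <= 9; firstDigit++) {
            // e.g. 0?55*, 1?55*, ..., 9?55*
            query.add(new WildcardQuery(new Term("field", firstDigit + "?55*")),
                    BooleanClause.Occur.MUST_NOT);
        }
        return query;
    }
}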
