Most efficient string search method

Say I have millions of lines of unique strings spread across hundreds of text files (the "dataset"). Now I want to check to see if any of those text files contain any of 2 million unique strings that are listed in another text file ("tofind"). What would be the most efficient way to go about this? Some extra application-specific info:
must be case sensitive
the string to find would match the found string in full (ie, it is NOT a substring)
each text file in the "dataset" contains approx 700K lines and is 50MB, though some can be several hundred MB.
again, the strings in both the "dataset" and "tofind" are unique. Indexing won't help.
There is no need to be able to search live (ie, as someone starts typing). I just want to output any matches to a text file with the match and the file it was found in.
I have 32GB of RAM and an i7 3930K
My options include using simple command line/batch "findstr", etc, or possibly writing a search program in vbscript or c# (Java or Python if necessary, but I'm not as familiar with them). What would be the most efficient solution for this particular application?

If you have enough memory to load all the strings from tofind into memory, you could build a map whose keys are string lengths and whose values are the sets of tofind strings of that length. Load all the strings from tofind into this structure based on their length: a string of 5 characters goes into the set stored under key 5, a string of 10 characters into the set under key 10 (you can refine this further by also grouping on the first character, but I'll keep the idea as simple as possible here).
Then you can load the other strings and search for their occurrences. A string having a length of 10 would be searched in your pair having 10 as key, for instance.
If the size of the data set is just too large, then you can do the same by loading a batch of strings at a time and then purge the structure and rebuild it with the next batch.
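A minimal Python sketch of this idea, assuming one string per line; the names tofind.txt, dataset/ and matches.txt are placeholders:
```python
from collections import defaultdict
from pathlib import Path

# Group the tofind strings by length: {length: set of strings of that length}.
tofind_by_len = defaultdict(set)
with open("tofind.txt", encoding="utf-8") as f:
    for line in f:
        s = line.rstrip("\n")
        tofind_by_len[len(s)].add(s)

# Scan every dataset file line by line and write out exact (case-sensitive) matches.
with open("matches.txt", "w", encoding="utf-8") as out:
    for path in Path("dataset").glob("*.txt"):
        with open(path, encoding="utf-8") as f:
            for line in f:
                s = line.rstrip("\n")
                if s in tofind_by_len.get(len(s), ()):   # set membership is O(1) on average
                    out.write(f"{s}\t{path}\n")
```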

Since you do not have to do this in real time, it gives you a lot of latitude in
designing a search process. I have not thought this through very carefully but it seems to
me that you could do this in a couple of steps:
Step 1
Eliminate those strings from the dataset that you know do not match any of the strings
in the tofind string list. A Bloom Filter is a very effective way to accomplish this.
It has a zero false negative rate, that is, if there isn't a hit on the bloom filter
then none of the strings match and the string can be eliminated.
The strings that hit on the Bloom Filter then need to be verified to ensure you did not
get a false positive. Bloom Filters are prone to false positives. However if you pay
close attention to selecting good hashing functions and allocate a large enough filter, the
false positive rate can be quite low.
For each string where there is a hit on the Bloom Filter, save that string and the position in
the string where the hit was made. This information is passed to Step 2.
Step 2
Verify the strings that hit on the Bloom Filter. Now you need to check each candidate against the
2 million tofind strings using an efficient exact string matching structure. A Trie seems like a good
candidate for this: load the tofind strings into the Trie and then search it starting at the position
found by the Bloom Filter. At this point you will either have a hit, in which case
a match was found, or a miss, in which case the Bloom Filter reported a false positive.
Note: This process assumes that Step 1 can eliminate a significant number of strings from
the dataset. If you expect that most dataset strings will contain a match in tofind then
it might not be worth the effort.
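A rough Python sketch of the two steps, with the caveats that it matches whole lines (as the original question requires) and verifies candidates against a plain set of the tofind strings rather than a Trie; the filter sizing, hash count and file name are illustrative choices:
```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter; k hash functions are derived from SHA-256 with different salts."""
    def __init__(self, size_bits, num_hashes):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8 + 1)

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        # False positives are possible, false negatives are not.
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

# Step 1: build the filter (plus an exact set for verification) from the tofind strings.
tofind = {line.rstrip("\n") for line in open("tofind.txt", encoding="utf-8")}
bloom = BloomFilter(size_bits=10 * len(tofind), num_hashes=7)   # ~10 bits per entry
for s in tofind:
    bloom.add(s)

# Step 2: only lines that hit the filter are verified exactly.
def is_match(line):
    return bloom.might_contain(line) and line in tofind
```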

Related

Finding the most similar string among a set of millions of strings

Let's say I have a dictionary (word list) of millions upon millions of words. Given a query word, I want to find the word from that huge list that is most similar.
So let's say my query is elepant, then the result would most likely be elephant.
If my word is fentist, the result will probably be dentist.
Of course assuming both elephant and dentist are present in my initial word list.
What kind of index, data structure or algorithm can I use for this so that the query is fast? Hopefully complexity of O(log N).
What I have: The most naive thing to do is to create a "distance function" (which computes the "distance" between two words, in terms of how different they are) and then in O(n) compare the query with every word in the list, and return the one with the closest distance. But I wouldn't use this because it's slow.
The problem you're describing is a Nearest Neighbor Search (NNS). There are two main methods of solving NNS problems: exact and approximate.
If you need an exact solution, I would recommend a metric tree, such as the M-tree, the MVP-tree, and the BK-tree. These trees take advantage of the triangle inequality to speed up search.
If you're willing to accept an approximate solution, there are much faster algorithms. The current state of the art for approximate methods is Hierarchical Navigable Small World (HNSW). The Non-Metric Space Library (nmslib) provides an efficient implementation of HNSW as well as several other approximate NNS methods.
(You can compute the Levenshtein distance with Hirschberg's algorithm)
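For the exact route, a BK-tree is small enough to sketch. This is a hedged Python illustration using a plain dynamic-programming Levenshtein distance (not Hirschberg's algorithm) and a tiny hard-coded word list:
```python
def levenshtein(a, b):
    # Plain dynamic-programming edit distance (same metric Hirschberg's algorithm computes).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

class BKTree:
    """BK-tree over a metric, here the Levenshtein distance."""
    def __init__(self, words):
        it = iter(words)
        self.root = (next(it), {})              # node = (word, {distance: child node})
        for w in it:
            self._insert(w)

    def _insert(self, word):
        node = self.root
        while True:
            d = levenshtein(word, node[0])
            if d == 0:
                return                          # already present
            if d in node[1]:
                node = node[1][d]
            else:
                node[1][d] = (word, {})
                return

    def search(self, query, max_dist):
        # The triangle inequality lets us prune any child whose edge label
        # lies outside [d - max_dist, d + max_dist].
        results, stack = [], [self.root]
        while stack:
            word, children = stack.pop()
            d = levenshtein(query, word)
            if d <= max_dist:
                results.append((d, word))
            for edge, child in children.items():
                if d - max_dist <= edge <= d + max_dist:
                    stack.append(child)
        return sorted(results)

tree = BKTree(["elephant", "dentist", "element", "relevant"])
print(tree.search("elepant", max_dist=2))       # closest matches first, e.g. (1, 'elephant')
```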
I made a similar algorithm some time ago.
The idea is to have an array char[255] of characters,
where each value is a list of word hashes (word ids) of the words that contain that character.
When you are searching for 'dele....':
search(d) will return an empty list
search(e) will find everything with the character e, including elephant (twice, as it has two 'e's)
search(l) will bring you a new list, and you need to combine this list with the results from the previous step
...
At the end of the input you will have a list;
then you can group by wordHash and order descending by count.
Another interesting thing: if your input is missing one or more characters, you will just get an empty list in the middle of the search, and it will not affect the idea.
My initial algorithm was without ordering, and I stored, for every character, the wordId, the line number and the character position.
My main problem was that I wanted to search
with ee to find 'elephant'
with eleant to find 'elephant'
with antph to find 'elephant'
Every word was actually a line from a file, so it was often very long,
and the number of files and lines was big.
I wanted a quick search for directories with more than 1 GB of text files,
so even storing them in memory was a problem. For this idea you need three parts:
a function to fill your cache
a function to find by character from the input
a function to filter and maybe order the results (I didn't use ordering; I tried to fill my cache in the same order as I read the files, so that lines containing the input came back in that same order)
I hope it makes sense.
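A rough Python sketch of the cache and search described above; the word list is a stand-in for lines read from files:
```python
from collections import defaultdict, Counter

# Fill the cache: for every character, the list of word ids that contain it
# (ids are appended once per occurrence, so repeats like the two e's count twice).
words = ["elephant", "dentist", "banana"]        # stand-in for lines read from files
index = defaultdict(list)
for word_id, word in enumerate(words):
    for ch in word:
        index[ch].append(word_id)

def search(query):
    # Combine the per-character lists, then group by word id and order by count, descending.
    hits = Counter()
    for ch in query:
        hits.update(index.get(ch, []))           # a missing character just contributes nothing
    return [words[i] for i, _ in hits.most_common()]

print(search("eleant"))                          # 'elephant' comes out on top
```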

Important algorithm involving random access to a string?

I am implementing a different string representation where accessing a string in a non-sequential manner is very costly. To avoid this I am trying to implement position caches or character blocks, so that one can jump to certain locations and scan from there.
In order to do so, I need a list of algorithms where scanning a string from right to left or random access of its characters is required, so I have a set of test cases to do some actual benchmarking and to create a model I can use to find a local/global optimum for my efforts.
Basically I know of:
String.charAt
String.lastIndexOf
String.endsWith
One scenario where one needs right to left access of strings is extracting the file extension and the file name (item) of paths.
For random access I can find no algorithm at all, unless one has prefix tables and accesses the string at scattered positions to check for matches longer than the prefix strings.
Does anyone know of other algorithms where either right-to-left or random access to string characters is required?
[Update]
The hash code of a String is calculated using every character, accessed from left to right, while the running value is kept in a local variable. So this is not a case for random access.
The MD5 and CRC algorithms also process the complete string. So I have found no random-access examples at all.
One interesting algorithm is Boyer-Moore searching, which involves both skipping forward by a variable number of characters and comparing backwards. If those two operations are not O(1), then KMP searching becomes more attractive, but BM searching is much faster for long search patterns (except in rare cases where the search pattern contains lots of repetitions of its own prefix). For example, BM shines for patterns which must be matched at word-boundaries.
BM can be implemented for certain variable-length encodings. In particular, it works fine with UTF-8 because misaligned false positives are impossible. With a larger class of variable-length encodings, you might still be able to implement a variant of BM which allows forward skips.
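To make that access pattern concrete, here is a hedged Python sketch of the Horspool simplification of Boyer-Moore (only the bad-character rule); note the right-to-left comparisons and the variable forward skips:
```python
def horspool_find(text, pattern):
    """Boyer-Moore-Horspool: compare the pattern right-to-left, then skip forward
    by an amount determined by the text character aligned with the pattern's end."""
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    if m > n:
        return -1
    # Bad-character table: how far we may shift when that character sits under pattern[-1].
    shift = {ch: m - idx - 1 for idx, ch in enumerate(pattern[:-1])}
    i = 0
    while i <= n - m:
        j = m - 1
        while j >= 0 and text[i + j] == pattern[j]:   # backward (right-to-left) comparison
            j -= 1
        if j < 0:
            return i
        i += shift.get(text[i + m - 1], m)            # variable forward skip
    return -1

print(horspool_find("word-boundary matching shines here", "shines"))   # 23
```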
There are a number of algorithms which require the ability to reset the string pointer to a previously encountered point; one example is word-wrapping an input to a specific line length. Those won't be impeded by your encoding provided your API allows for saving a copy of an iterator.

inverted index sets - querying key prefixes

I'm using Redis in order to build an inverted index system for words and the documents that contain those words.
The setup is really simple: Redis Sets where the key of the Set is i:word and the members of the Set are the ids of the documents that contain this word.
Let's say I have 2 sets: i:example and i:result.
The query "example result" will intersect i:example and i:result and return all the ids that appear in both sets.
But what I'm looking for is a way to perform (in an efficient manner) a query like "ex res". The result set should contain at least all the ids from the query "example result".
Solutions that I thought of:
Create prefix sets of size 2: p:ex contains {"example", "expertise", "ex"...}. The lookup running time will not be a problem - O(1) to get the set and O(n) to check all elements in the set for words that start with the prefix (where n = set.size()) - but I worry about the added size price.
Using SCAN: but I'm not sure about the running time - a query like SCAN 0 MATCH ex* will take O(n), where n is the number of keys in the db? I know Redis is fast, but it's probably not an optimized solution for a query like "ex machi cont".
The usual way to go about this is the first approach you had mentioned, but usually you'd go with segments that are 3+ chars long. Note that you'll need to have a set for each segment, e.g. i:exa, i:exam, i:examp, i:exampl and of course i:example.
This will naturally take up space in your database (hence the suggestion to start at 3 rather than 2 characters). A possible tweak is to keep in the i:len(3) sets only references to i:len(4+) sets instead of document ids. This will require more read operations but will yield significant savings in terms of RAM.
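A rough sketch of that prefix-set approach using the redis-py client; the key names, the 3-character cutoff and the sample data are illustrative assumptions:
```python
import redis

r = redis.Redis(decode_responses=True)

def index_word(word, doc_id, min_seg=3):
    # Inverted index set i:<word> -> doc ids, plus prefix sets p:<prefix> -> full words.
    r.sadd(f"i:{word}", doc_id)
    for end in range(min_seg, len(word) + 1):
        r.sadd(f"p:{word[:end]}", word)

def docs_for_prefix(prefix):
    # Expand the prefix to the full words it covers, then union their doc-id sets.
    words = r.smembers(f"p:{prefix}")
    return r.sunion([f"i:{w}" for w in words]) if words else set()

index_word("example", 1)
index_word("result", 1)
print(docs_for_prefix("exa") & docs_for_prefix("res"))   # {'1'}
```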
You should explore v2.8.9's addition of lexicographical ranges for Sorted Sets. By calling ZRANGEBYLEX you can get ranges of members (e.g. all the words that start with ex). While this could be useful in this context by itself, consider that you can also use your Sorted Set's members creatively to encode a word and its document reference. This can help you get over the "loss" of the score (since all scores need to be the same for lexicographical ordering to work). For example, assuming the words "beg" and "bed" in docs 1 and 2:
ZADD index 0 "beg:1" 0 "bed:2"
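The same idea written with the redis-py client, as a hedged sketch; the extra begin:3 member and the "[be" / "(bf" range are just for illustration:
```python
import redis

r = redis.Redis(decode_responses=True)

# All members share score 0 so the sorted set is ordered purely lexicographically.
r.zadd("index", {"beg:1": 0, "bed:2": 0, "begin:3": 0})

# Everything starting with "be": "[be" is inclusive, "(bf" excludes the next prefix.
for member in r.zrangebylex("index", "[be", "(bf"):
    word, doc_id = member.rsplit(":", 1)
    print(word, doc_id)                # bed 2, beg 1, begin 3 (lexical order)
```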
Lastly, here's a little something to think about too - adding suffix searching (e.g., everything that ends with "ample"): https://redislabs.com/blog/how-to-use-redis-at-least-x1000-more-efficiently

How are Strings stored in a VBA Dictionary structure?

As I am currently playing with huge number of strings (have a look at another question: VBA memory size of Arrays and Arraylist) I used a scripting dictionary just for the feature of the keyed access that it has.
Everything was looking fine except that it was somehow slow in loading the strings and that it uses a lot of memory. For an example of 100,000 strings of 128 characters in length, the Task Manager showed approximately 295 MB at the end of the sub, and after setting Dictionary = Nothing a mere 12 MB remained in Excel. Even considering the internal Unicode conversion of strings, 128 * 2 * 100,000 gives only 25.6 MB! Can someone explain this big difference?
Here is all the info I could find on the Scripting.Dictionary:
According to Eric Lippert, who wrote the Scripting.Dictionary, "the actual implementation of the generic dictionary is an extensible-hashing-with-chaining algorithm that re-hashes when the table gets too full." (It is clear from the context that he is referring to the Scripting.Dictionary) Wikipedia's article on Hash Tables is a pretty good introduction to the concepts involved. (Here is a search of Eric's blog for the Scripting.Dictionary, he occasionally mentions it)
Basically, you can think of a Hash Table as a large array in memory. Instead of storing your strings directly by an index, you must provide a key (usually a string). The key gets "hashed", that is, a consistent set of algorithmic steps is applied to the key to crunch it down into a number between 0 and the current max index of the Hash Table. That number is used as the index at which to store your string in the hash table. Since the same set of steps is applied each time the key is hashed, it results in the same index each time, meaning that if you are looking up a string by its key, there is no need to search through the array as you normally would.
The hash function (which is what converts a key to an index into the table) is designed to be as random as possible, but every once in a while two keys can crunch down to the same index - this is called a collision. This is handled by "chaining" the strings together in a linked list (or possibly a more searchable structure). So suppose you tried to look a string up in the Hash Table with a key. The key is hashed, and you get an index. Looking in the array at that index, it could be an empty slot if no string with that key was ever added, or it could be a linked list that contains one or more strings whose keys mapped to that index in the array.
The entire reason for going through the details above is to point out that a Hash Table must be larger than the number of things it will store to make it efficient (with some exceptions, see Perfect Hash Function). So much of the overhead you would see in a Hash Table are the empty parts of the array that have to be there to make the hash table efficient.
Additionally, resizing the Hash Table is an expensive operation because all the existing strings have to be rehashed to new locations, so when the load factor of the Hash Table exceeds the predefined threshold and it gets resized, it might be doubled in size to avoid having to do so again soon.
The implementation of the structure that holds the chain of strings at each array position can also have a large impact on the overhead.
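To make the chaining and resizing mechanics concrete, here is a toy Python sketch; it is not how the Scripting.Dictionary is actually implemented, just an illustration of the general scheme:
```python
class ChainedHashTable:
    """Toy hash table with chaining, just to illustrate the ideas above."""
    def __init__(self, capacity=8, max_load=0.75):
        self.buckets = [[] for _ in range(capacity)]   # empty slots are pure overhead
        self.count = 0
        self.max_load = max_load

    def _index(self, key):
        return hash(key) % len(self.buckets)           # "hash" the key down to an index

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]        # collisions chain in this list
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)
                return
        bucket.append((key, value))
        self.count += 1
        if self.count / len(self.buckets) > self.max_load:
            self._resize()                             # expensive: every key is rehashed

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        raise KeyError(key)

    def _resize(self):
        old = self.buckets
        self.buckets = [[] for _ in range(2 * len(old))]   # double to delay the next resize
        for bucket in old:
            for k, v in bucket:
                self.buckets[self._index(k)].append((k, v))
```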
If I find anything else out, I'll add it here...

Fastest way to determine if a string contains a character

I have a string which consists of unicode characters. The same character can occur only once.
The length of the string is between 1 and ~50.
What is the fastest way to check if a particular character is in the string or not?
Iterating over the string is not a good choice, is it? Is there any efficient algorithm for this purpose?
My first idea was to keep the characters in the string alphabetically sorted. It could be searched quickly, but the sorting and the comparison of Unicode characters are not so trivial (using the right collation) and it has a big cost, probably bigger than iterating over the whole string.
Maybe some hashing? Maybe the iteration is the fastest way?
Any idea?
If there's no preprocessing, the simplest and fastest way is to iterate through the characters.
If there's preprocessing, the previous approach might still be the best, or you could try a small hashtable that stores whether the string contains a given character. Storing the hash will take extra space, but could be better for the memory cache (with low hash collisions, and assuming you don't have to access the actual string). Make sure you measure the performance.
I have a feeling you're trying to over-engineer a really simple task. Have you verified that this is a bottleneck in your application?
A linear search through the string is O(n) with each operation being very simple. Sorting the string is O(n log n) with more complicated operations. It's pretty clear that the linear search will be faster in all cases.
If the characters are stored in UTF-8 or UTF-16 encoding then there's a possibility that you'll need to search for more than one contiguous element. There are ways to speed that up, such as Boyer-Moore or Knuth-Morris-Pratt. It's unclear whether there would be an actual speedup with such short search strings.
Is it a repeated operation on the same string or a one-time task? If it is a one-time task, then you can't do better than going through the string; after all, you have to look at every character: O(n).
If it is a repeated operation, then you can do some preprocessing of the strings to make the subsequent operations faster. The most space-efficient and fastest approach would be to build Bloom filters for the characters in each string. Once built (which is fast too), you can tell that a character is not present in O(1), and only do a binary search of the sorted string if the Bloom filter says yes.
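A small Python sketch of the preprocessing idea, using a plain set of the string's characters instead of a Bloom filter (for ~50 characters a set already gives O(1) average lookups); the names are illustrative:
```python
# One-time check: a plain linear scan is hard to beat for a string of ~50 characters.
def contains_char(s, ch):
    return ch in s                     # O(n) scan

# Repeated checks against the same string: precompute the set of its characters once,
# then every lookup is an O(1) average-case hash lookup.
class CharMembership:
    def __init__(self, s):
        self.chars = frozenset(s)      # one-off O(n) preprocessing

    def __contains__(self, ch):
        return ch in self.chars

m = CharMembership("abcdefg")
print("c" in m, "z" in m)              # True False
```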
