I have multiple choice menus in my program like so:
Which option do you want? (choose one)
f: First option
s: Second option
t: Third option
The user then presses f, s or t to make their choice. For this example, I picked the letters manually, and it should be obvious how.
But in some cases there are conflicts: Suppose I had a Fourth option - I can't use f. Sensible choices include F, h and others, depending on UX philosophy.
Is there an algorithm that will, given a list of strings, generate a unique mnemonic letter to identify each string? By "mnemonic", I mean that the option should suggest the letter (as in my example), so that it is easy to remember which is which (as opposed to just mapping everything to a, b, c or x, y, z).
As I noted above, there are multiple ways of doing this, depending on what you prefer: Capitalized letters, letters within the first word, letters of secondary unique words, etc. For this question, I don't really care about these, so feel free to use your own rules - so long as the algorithm produces reasonably user-friendly results.
The baseline algorithm I've seen used is the following:
Pick the first letter (as in your example). Keep track of which letters you pick.
When the chosen letter is already taken, pick the next letter of the option's name, if available.
If there are no more letters, then pick one lexicographically (the first free letter from the alphabet)
If there are no more letters, don't pick anything, the option won't be addressable. This makes sense with menus that are also clickable.
Of course, there are tweaks you can apply (a sketch of the baseline, with these tweaks folded in, follows the list below):
does your terminal/OS convention distinguish between upper case and lower case? You can use that after step 2 and before step 3 (if there are no more letters left to use, use an upper case letter).
can you use and detect alt, ctrl, win?
are there pre-assigned shortcuts you need to maintain (e.g. s to save)? Assign them before step one.
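For concreteness, here is a minimal Python sketch of that baseline; the function and parameter names are my own, pre-assigned shortcuts are modelled simply as reserved letters, and the upper-case tweak is folded in between the option's own letters and the alphabet fallback:

```python
import string

def assign_mnemonics(options, reserved=()):
    """Baseline sketch: try each letter of the option in order (lower case,
    then upper case), then fall back to the first free letter of the
    alphabet; if nothing is free, the option gets no shortcut."""
    taken = set(reserved)              # pre-assigned shortcuts, e.g. 's' for save
    result = {}
    for option in options:
        letters = [c for c in option if c.isalpha()]
        candidates = [c.lower() for c in letters] + [c.upper() for c in letters]
        candidates += list(string.ascii_lowercase)   # lexicographic fallback
        pick = next((c for c in candidates if c not in taken), None)
        if pick is not None:
            taken.add(pick)
            result[option] = pick      # unassigned options stay non-addressable
    return result

print(assign_mnemonics(["First", "Second", "Third", "Fourth"]))
# {'First': 'f', 'Second': 's', 'Third': 't', 'Fourth': 'o'}
```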
I'm trying to check whether some strings from a list appear in a given text. But the given text can have some typos. For example, let's take this:
text: The brownw focx and the cat are in th eforest.
and my list is: [brown fox, forest, cat]
What I actually do is split my text into multiple groups of one and two words, like so:
[The, brownw, focx, and, the, cat, are, in, th, eforest, The brownw, brownw focx, focx and, and the, the cat, cat are, are in, in th, th eforest]
Then I iterate over each group of words and use the Levenshtein algorithm to check how closely the two strings match. If they match by more than 90%, I consider them the same.
This approach however is very time consuming and I wonder if I can find an alternative to this.
Instead of using the full Levenshtein distance (which is slow to compute), you could do a couple of sanity checks beforehand, to try and exclude candidates which are obviously wrong:
word length: the will never match brown fox, as it is far too short. Count the word length, and exclude all candidates that are more than a few letters shorter or longer.
letters: just check which letters are in the word. For example, the does not contain a single letter from fox, so you can rule it out straight away. With short words this might not make a big difference in performance, but for longer words it will. Additional optimisation: look for rare characters (x, q, w) first, or simply ignore common ones (e, t, s), which are more likely to be present anyway.
Heuristics such as these will of course not give you the right answer, but they can help to filter out those that are definitely not going to match. Then you only need to perform the more expensive full check on a much smaller number of candidate words.
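As a rough illustration, here is a Python sketch of these pre-checks; difflib's SequenceMatcher stands in for a Levenshtein library, and the length threshold and rare-letter set are arbitrary choices:

```python
from difflib import SequenceMatcher   # stands in for a Levenshtein library

RARE_LETTERS = set("jkqwxz")

def could_match(candidate, target, max_len_diff=3):
    """Cheap pre-checks that rule out obviously wrong candidates."""
    # length check: far too short or far too long can never reach 90%
    if abs(len(candidate) - len(target)) > max_len_diff:
        return False
    # letter check: rare letters of the target should appear in the candidate
    rare = set(target) & RARE_LETTERS
    if rare and not rare & set(candidate):
        return False
    return True

def is_similar(candidate, target, threshold=0.9):
    if not could_match(candidate, target):
        return False                   # skip the expensive comparison
    return SequenceMatcher(None, candidate, target).ratio() >= threshold

print(is_similar("brownw focx", "brown fox"))   # True
print(is_similar("the", "brown fox"))           # False: rejected by the length check
print(is_similar("the cat", "brown fox"))       # False: no rare letters (w, x) in common
```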
A group of amusing students write essays exclusively by plagiarising portions of the complete works of William Shakespeare. At one end of the scale, an essay might consist exclusively of a verbatim copy of a soliloquy... at the other, one might see work so novel that - while using a common alphabet - no two adjacent characters in the essay were used adjacently by Will.
Essays need to be graded. A score of 1 is assigned to any essay which can be found (character-by-character identical) in the plain-text of the complete works. A score of 2 is assigned to any work that can be successfully constructed from no fewer than two distinct (character-by-character identical) passages in the complete works, and so on... up to the limit - for an essay with N characters - which scores N if, and only if, no two adjacent characters in the essay were also placed adjacently in the complete works.
The challenge is to implement a program which can efficiently (and accurately) score essays. While any (practicable) data-structure to represent the complete works is acceptable - the essays are presented as ASCII strings.
Having considered this teasing question for a while, I came to the conclusion that it is much harder than it sounds. The naive solution, for an essay of length N, involves 2**(N-1) traversals of the complete works - which is far too inefficient to be practical.
While, obviously, I'm interested in suggested solutions - I'd also appreciate pointers to any literature that deals with this, or any similar, problem.
CLARIFICATIONS
Perhaps some examples (ranging over much shorter strings) will help clarify the 'score' for 'essays'?
Assume Shakespeare's complete works are abridged to:
"The quick brown fox jumps over the lazy dog."
Essays scoring 1 include "own fox jump" and "The quick brow". The essay "jogging" scores 6 (despite being short) because it can't be represented in fewer than 6 segments of the complete works... It can be segmented into six strings that are all substrings of the complete works as follows: "[j][og][g][i][n][g]". N.B. Establishing scores for this short example is trivial compared to the original problem - because, in this example "complete works" - there is very little repetition.
Hopefully, this example segmentation helps clarify the ~2**(N-1) segmentation hypotheses. Each of the (N-1) gaps between the N characters in the essay may either be a gap between segments, or not... resulting in ~2**(N-1) possible segmentations, each of which needs substring searches of the complete works to test it.
An (N)DFA would be a wonderful solution - if it were practical. I can see how to construct something that solves 'substring matching' this way - but not scoring. The state space for scoring seems, on the surface at least, far too large (for any substantial complete works of Shakespeare). I'd welcome any explanation that undermines my assumption that the (N)DFA would be too large to be practical to compute/store.
A general approach for plagiarism detection is to append the student's text to the source text separated by a character not occurring in either and then to build either a suffix tree or suffix array. This will allow you to find in linear time large substrings of the student's text which also appear in the source text.
I find it difficult to be more specific because I do not understand your explanation of the score - the method above would be good for finding the longest stretch in the students work which is an exact quote, but I don't understand your N - is it the number of distinct sections of source text needed to construct the student's text?
If so, there may be a dynamic programming approach. At step k, we work out the least number of distinct sections of source text needed to construct first k characters of the student's text. Using a suffix array built just from the source text or otherwise, we find the longest match between the source text and characters x..k of the student's text, where x is of course as small as possible. Then the least number of sections of source text needed to construct the first k characters of student text is the least needed to construct 1..x-1 (which we have already worked out) plus 1. By running this process for k=1..the length of the student text we find the least number of sections of source text needed to reconstruct the whole of it.
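If I have understood the score correctly, a small Python sketch of this dynamic programming idea might look like the following; plain substring search stands in for the suffix array, so it is far from the linear-time version, but it shows the recurrence:

```python
def min_sections(source, essay):
    """best[k] is the least number of source sections needed to build the
    first k characters of the essay (plain `in` stands in for the suffix array)."""
    n = len(essay)
    INF = float("inf")
    best = [0] + [INF] * n
    for k in range(1, n + 1):
        # try every segment essay[x:k] that occurs verbatim in the source
        for x in range(k - 1, -1, -1):
            if essay[x:k] not in source:
                break                  # longer segments ending at k cannot occur either
            best[k] = min(best[k], best[x] + 1)
    return best[n]

works = "The quick brown fox jumps over the lazy dog."
print(min_sections(works, "own fox jump"))   # 1
print(min_sections(works, "jogging"))        # 6, e.g. [j][og][g][i][n][g]
```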
(Or you could just search StackOverflow for the student's text, on the grounds that students never do anything these days except post their question on StackOverflow :-)).
I claim that repeatedly moving along the target string from left to right, using a suffix array or tree to find the longest match at any time, will find the smallest number of different strings from the source text that produce the target string. I originally found this by looking for a dynamic programming recursion but, as pointed out by Evgeny Kluev, this is actually a greedy algorithm, so let's try and prove this with a typical greedy algorithm proof.
Suppose not. Then there is a solution better than the one you get by going for the longest match every time you run off the end of the current match. Compare the two proposed solutions from left to right and look for the first time when the non-greedy solution differs from the greedy solution. If there are multiple non-greedy solutions that do better than the greedy solution I am going to demand that we consider the one that differs from the greedy solution at the last possible instant.
If the non-greedy solution is going to do better than the greedy solution, and there isn't a non-greedy solution that does better and differs later, then the non-greedy solution must find that, in return for breaking off its first match earlier than the greedy solution, it can carry on its next match for longer than the greedy solution. If it can't, it might somehow do better than the greedy solution, but not in this section, which means there is a better non-greedy solution which sticks with the greedy solution until the end of our non-greedy solution's second matching section, which is against our requirement that we want the non-greedy better solution that sticks with the greedy one as long as possible.
So we have to assume that, in return for breaking off the first match early, the non-greedy solution gets to carry on its second match longer. But this doesn't work, because, when the greedy solution finally has to finish using its first match, it can jump on to the same section of matching text that the non-greedy solution is using, just entering that section later than the non-greedy solution did, but carrying on for at least as long as the non-greedy solution.
So there is no non-greedy solution that does better than the greedy solution and the greedy solution is optimal.
Have you considered using N-Grams to solve this problem?
http://en.wikipedia.org/wiki/N-gram
First read the complete works of Shakespeare and build a trie. Then process the string left to right. We can greedily take the longest substring that matches one in the data because we want the minimum number of strings, so there is no factor of 2^N. The second part is dirt cheap O(N).
The depth of the trie is limited by the available space. With a gigabyte of RAM you could reasonably expect to exhaustively cover Shakespearean English strings of length at least 5 or 6. I would require that the leaf nodes be unique (which also gives a rule for constructing the trie) and keep a pointer to their place in the actual works, so you have access to the continuation.
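A rough Python sketch of this approach, using a plain dict-based trie and the toy "complete works" from the clarification (small enough that the depth limit and the leaf-pointer trick can be ignored):

```python
def build_trie(works, depth):
    """Trie of every substring of the works up to `depth` characters long."""
    root = {}
    for i in range(len(works)):
        node = root
        for ch in works[i:i + depth]:
            node = node.setdefault(ch, {})
    return root

def score(trie, essay):
    """Greedy left-to-right scan: always take the longest match the trie allows."""
    segments, i = 0, 0
    while i < len(essay):
        node, j = trie, i
        while j < len(essay) and essay[j] in node:
            node = node[essay[j]]
            j += 1
        if j == i:
            raise ValueError(f"{essay[i]!r} never occurs in the works")
        segments += 1
        i = j                              # jump past the matched segment
    return segments

works = "The quick brown fox jumps over the lazy dog."
trie = build_trie(works, depth=len(works))   # toy corpus, so no real depth limit needed
print(score(trie, "own fox jump"))    # 1
print(score(trie, "The quick brow"))  # 1
print(score(trie, "jogging"))         # 6
```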
This feels like a problem of partially matching a very large regular expression.
If so, it can be solved by a very large non-deterministic finite automaton (NDFA), or, put more broadly, by a graph representing, for every character in the works of Shakespeare, all the possible next characters.
If necessary for efficiency reasons, the NDFA is guaranteed to be convertible to a DFA. But that construction can give rise to 2^n states; maybe this is what you were alluding to?
This aspect of the complexity does not really worry me. The NDFA will have M + C states: one state for each character of the works, plus C start states (C = 26*2 + #punctuation) connected to the M states so the algorithm can (re)start when there are 0 matched characters. The question is whether the corresponding DFA would have O(2^M) states and, if so, whether it is necessary to build that DFA; theoretically it is not. However, consider that in this construction each state has one and only one transition to exactly one other state (the state corresponding to the next character in that work). We would expect each start state to be connected to M/C states on average, but in the worst case M, meaning the NDFA will have to track at most M simultaneous states. That's a large number, but not an impossibly large one for computers these days.
The score would be derived by initializing it to 1 and then incrementing it every time a non-accepting state is reached (i.e. whenever the simulation has to restart because no state can consume the next character).
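A small Python sketch of that simulation, where each live state is simply a position in the works, and the score is incremented exactly when no state can consume the next character of the essay:

```python
def ndfa_score(works, essay):
    """Simulate the NDFA: states are positions in the works whose character
    matched the current segment of the essay; restart and bump the score
    whenever no state can advance."""
    # start states: every position in the works holding the first essay character
    states = {i for i, ch in enumerate(works) if ch == essay[0]}
    if not states:
        raise ValueError(f"{essay[0]!r} never occurs in the works")
    score = 1
    for ch in essay[1:]:
        advanced = {p + 1 for p in states
                    if p + 1 < len(works) and works[p + 1] == ch}
        if advanced:
            states = advanced
        else:
            # no state survives: restart (new segment), increment the score
            score += 1
            states = {i for i, c in enumerate(works) if c == ch}
            if not states:
                raise ValueError(f"{ch!r} never occurs in the works")
    return score

works = "The quick brown fox jumps over the lazy dog."
print(ndfa_score(works, "The quick brow"))   # 1
print(ndfa_score(works, "jogging"))          # 6
```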
It's true that one of the approaches to string searching is building a DFA. In fact, for the majority of string search algorithms, it looks like a small modification (increment a counter on failure to match, keep going on success) can serve as a general strategy.
Usage case: I'm writing a domain specific language (DSL) for a regex-like but way more powerful Lispy string processing system focused on conditional replacements (like simulation of language evolution for conlangers/linguists) rather than matching as regexes do. As usual I wrote down the specs before actually writing down the code.
However, due to a somewhat stupid but hard to fix mistake, I ended up with a system only capable of doing stuff one char at a time. Thus, a rewrite rule might be (in pseudocode) change 'a' to 'e' when last char is 's' and next char is 'd'. Chars can also be deleted: delete 'a' when ....
Since the interpreter for the DSL is a bit spaghetti-ish (not in the sense of being unstructured, but in the sense that 1. I haven't figured out OO for my implementation language, Chicken Scheme, and 2. there's no IDE, so I must remember 20+ variable names and use Emacs), I don't want to touch it, but rather "unsugar" string replacements into conditional char replacements.
The trivial example: change "ab" to "cd" unconditionally rewrites to change 'a' to 'c' when followed by 'b'; change 'b' to 'd' when preceded by 'a'. However, when there are conditions, things become very ugly very quickly. Is there some easy recursive way to do the rewriting, or is this nearly impossible in the rewriting phase, meaning I should probably fix my DSL interpreter instead? (Note: my DSL has ways to get the n-th letter before and after the current char.)
The problem is that since we are going through the data character-at-a-time, when a condition is applied to a multi-character string, that condition has to be expressed in different ways for every position. For instance, "abc" followed by "x" combines in a straightforward way into a condition on each of a, b and c, but the condition has to change shape for each one: the x is actually three positions away from a, but only two from b. This is bad because it causes a proliferation of conditions, which all get wastefully evaluated.
I'd solve this by adding the concept of frames into the interpreter. A frame is established at the current character position, and then holds that position somehow, allowing frame-relative addressing of the characters.
I can think of a few ways of introducing this position fixing. One would be to introduce variable binding into the interpreter. It could support a pair of instructions bind symbol and unbind n, where we would be using a gensym for the symbol.
When generating the code for an operation on a string like "abc", we would generate an instruction like bind #:g0025, which would fix the position of the a, and then the compiler will analyze the conditions applied to the string, and re-phrase them in terms that are relative to #:g0025. After the processing of "abc", we would emit unbind 1 to drop the most recently bound variable.
We could also bind variables to the Boolean values of conditions.
As an example with the named frames, suppose we have
Replace "abc" with "ijk" when preceded by "x" and followed by "yz".
This goes to something like:
bind #:frame
bind #:cond0 to when #:frame[-1] is "x" and #:frame[3] is "y" and #:frame[4] is "z"
replace "a" with "i" when #:cond0 ; these insns move the char position
replace "b" with "j" when #:cond0
replace "c" with "k" when #:cond0
unbind 2
So the difficulty has been translated to one of compiling the condition into frame-relative addressing. The #:frame[3] is derived from the length of the "abc" pattern, which is available to the translator all at once. That is information not available in the target language, which doesn't have "abc" all at once.
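To make the translation step concrete, here is an illustrative Python sketch that unsugars an equal-length multi-char rule into the hypothetical bind/replace/unbind instructions used in the example above (deletions and the actual Scheme interpreter are out of scope):

```python
from itertools import count

_counter = count()

def gensym(prefix):
    """Hypothetical gensym, mirroring the #:g0025-style symbols above."""
    return f"#:{prefix}{next(_counter):04d}"

def unsugar(pattern, replacement, preceded_by="", followed_by=""):
    """Unsugar an equal-length multi-char rule into per-char instructions
    with frame-relative conditions.  The bind/replace/unbind forms are the
    hypothetical target instructions from the example above, not a real API."""
    assert len(pattern) == len(replacement), "deletions not handled in this sketch"
    frame, cond = gensym("frame"), gensym("cond")
    tests = []
    # characters before the pattern sit at negative frame offsets
    for offset, ch in enumerate(reversed(preceded_by), start=1):
        tests.append(f"{frame}[{-offset}] is {ch!r}")
    # characters after the pattern start at offset len(pattern)
    for offset, ch in enumerate(followed_by, start=len(pattern)):
        tests.append(f"{frame}[{offset}] is {ch!r}")
    instructions = [f"bind {frame}",
                    f"bind {cond} to when " + " and ".join(tests)]
    for old, new in zip(pattern, replacement):
        instructions.append(f"replace {old!r} with {new!r} when {cond}")
    instructions.append("unbind 2")
    return instructions

for insn in unsugar("abc", "ijk", preceded_by="x", followed_by="yz"):
    print(insn)
```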
The system almost certainly needs some way to try different matches at the same location. If there is no "abc" at the current position, another rule which replaces "foo" with something has to be tried at the same position. Perhaps when the conditions fail, the instruction doesn't advance the character position. So in our example above, that would work: all instructions share the same condition, so in the case of a match the position moves by three positions, otherwise it doesn't. Still, in spite of that, there may be a requirement to have multiple edits with different conditions at the same spot. The scope of my answer isn't to design the whole thing, though.
I am looking for an algorithm for string processing. I have searched for one but couldn't find an algorithm that meets my requirements. I will explain what the algorithm should do with an example.
There are two sets of word sets defined as shown below:
**Main_Words**: swimming, driving, playing
**Words_in_front**: I am, I enjoy, I love, I am going to go
The program will search through a huge set of words; as soon as it finds a word that is defined in Main_Words, it will check the words in front of that word to see whether they match any of the phrases defined in Words_in_front.
i.e. if the program encounters the word "swimming", it has to check whether the words in front of it are one of these: I am, I enjoy, I love, I am going to go.
Are there any algorithms that can do this?
A straightforward way to do this would be to just do a linear scan through the text, always keeping track of the last N+1 words (or characters) you see, where N is the number of words (or characters) in the longest phrase contained in your words_in_front collection. When you have a "main word", you can just check whether the sequence of N words/characters before it ends with any of the prefixes you have.
This would be a bit faster if you transformed your words_in_front set into a nicer data structure, such as a hashmap (perhaps keyed by the last letter of the phrase) or a prefix/suffix tree of some sort, so you wouldn't have to do an .endsWith over every single member of the set of prefixes each time you have a matching "main word". As stated in another answer, there is much room for optimization and a few other possible implementations, but this is a start.
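A small Python sketch of this sliding-window scan; the sets and the text are just the examples from the question, and the leading space in the endswith check is there to respect word boundaries:

```python
from collections import deque

main_words = {"swimming", "driving", "playing"}
words_in_front = {"i am", "i enjoy", "i love", "i am going to go"}

# longest prefix phrase, measured in words
N = max(len(phrase.split()) for phrase in words_in_front)

def find_matches(text):
    """Linear scan keeping only the last N words; when a main word appears,
    check whether any of the prefix phrases ends the window."""
    window = deque(maxlen=N)                 # the N words seen before the current one
    hits = []
    for word in text.lower().split():
        if word in main_words:
            before = " " + " ".join(window)  # leading space guards word boundaries
            for phrase in words_in_front:
                if before.endswith(" " + phrase):
                    hits.append((phrase, word))
        window.append(word)
    return hits

print(find_matches("I am going to go swimming and then I love playing chess"))
# [('i am going to go', 'swimming'), ('i love', 'playing')]
```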
Create a map/dictionary/hash/associative array (whatever is available in your language) whose keys are the Main_Words, with the Words_in_front phrases kept in a linked list attached to the entry pointed to by each key. Whenever you encounter a word matching a key, go to the table and see whether any of the phrases in the attached list match what you have in front.
That's the basic idea, it can be optimized for both speed and space.
You should be able to build a regular expression along these lines:
I (am|enjoy|love|am going to go) (swimming|driving|playing)
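If the sets are not hard-coded, the expression can be built from them, for example (a Python sketch; longer phrases are placed first in the alternation so "I am going to go" is not shadowed by "I am"):

```python
import re

main_words = ["swimming", "driving", "playing"]
words_in_front = ["I am", "I enjoy", "I love", "I am going to go"]

# one alternation per set, longest prefix phrases first
front = "|".join(sorted(map(re.escape, words_in_front), key=len, reverse=True))
main = "|".join(map(re.escape, main_words))
pattern = re.compile(rf"\b(?:{front})\s+(?:{main})\b", re.IGNORECASE)

print(pattern.findall("I am going to go swimming, and I love playing chess."))
# ['I am going to go swimming', 'I love playing']
```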
I want to map strings (words) to numbers such that the more similar two strings are, the nearer their mapped values should be. The positional combination of the letters should also affect the mapping: the mapping function should depend on the letters, their positions (so that, for example, pit and tip map to different values) and the number of letters.
Well, let me give some examples: starter, stater, stapler, startler, tstarter. These words all have the form "(*optional)sta(*opt)*er", where * denotes some variable part, which in our case is either 't' or 'l' (i.e. in the case of starter and staler). They should all be mapped individually, without reference to each other, such that their values do not differ by much; later on, when creating groups, I can assign an appropriate range of numbers to differentiate the groups.
So while mapping these strings, their values should come out similar. There are many words, so comparing each one against every other would be complex; instead I want to map each word independently to a numeric value, put similar strings (which will have similar values) into a group, and then later find these patterns by other means.
So, for now, I need to look for existing methods of mapping such that similar strings (I guess I need to clarify the term 'similar' for my context) get similar values, and these values should differ from those of dissimilar strings. Again, I emphasize that the number of strings is huge, so comparing each with every other is practically impossible (or computationally expensive and far too slow). So what I think I need is to devise an algorithm (taking help from existing ones) for mapping each word (string) on its own.
Have I made myself clear? Please give me some ideas to start with, or some terms to search for and research.
I think I need some type of "bad" hash function to hash strings and then put them into buckets according to that hash value. At least give me some ideas or algorithm names.
Seems like it would be best to use a known algorithm like Levenshtein distance.
This search on StackOverflow reveals this question about finding-groups-of-similar-strings-in-a-large-set-of-strings, which links to this article describing a SimHash, which sounds exactly like what you want.
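For a feel of how such a scheme behaves, here is a minimal SimHash sketch over character bigrams; bigrams make adjacency, and therefore letter position, matter, so pit and tip get different fingerprints, while the 64-bit size and the use of MD5 are arbitrary choices:

```python
import hashlib

def simhash(word, bits=64):
    """Minimal SimHash over character bigrams: similar words share most
    bigrams and therefore agree on most fingerprint bits."""
    padded = f"^{word}$"                         # mark word boundaries
    grams = [padded[i:i + 2] for i in range(len(padded) - 1)]
    counts = [0] * bits
    for gram in grams:
        h = int(hashlib.md5(gram.encode()).hexdigest(), 16)
        for b in range(bits):
            counts[b] += 1 if (h >> b) & 1 else -1
    return sum(1 << b for b in range(bits) if counts[b] > 0)

def hamming(a, b):
    """Distance between fingerprints: fewer differing bits means more similar."""
    return bin(a ^ b).count("1")

print(hamming(simhash("starter"), simhash("stater")))    # typically a small distance
print(hamming(simhash("starter"), simhash("elephant")))  # typically much larger
```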