Letter substitutions termination - string

Given:
A character string S of length l containing only characters from 'a' to 'z'
A set of ordered substitution rules R (in the form X->Y) where X, Y are single letters from 'a' to 'z' (e.g. 'a'->'e' could be a valid rule, but 'ce'->'abc' would never be a valid rule)
When a rule r in R is applied to S, every letter of S equal to the left side of r is replaced by the letter on the right side of r. If applying r causes any replacement in S, r is called a triggered rule.
Flowchart (Algorithm):
(1) Apply all rules in R, one after another (following the order of the rules in R), on S.
(2) While there exists any triggered rule during (1): repeat (1)
(3) Terminate
The question is: is there any way to determine whether, for a given string S and set R, the algorithm will terminate or run forever?
Example 1 (manually executed):
S = 'abcdef' R = { 'a'->'b' , 'b' -> 'c' }
(the order is implied by the order of appearance, from left to right, of each rule)
After running the algorithm on S and R:
(1.1): 'abcdef' --> 'bbcdef' --> 'cccdef'
(2.1): repeat (1) because there were 2 replacements during (1.1)
(1.2): 'cccdef'
(2.2): continue to (3) because there was no replacement during (1.2)
(3): terminate the algorithm
=> The algorithm terminates with the given S and R.
Example 2:
S = 'abcdef' R = { 'a'->'b' , 'b' -> 'a' }
(the order is implied by the order of appearance, from left to right, of each rule)
After running the algorithm on S and R:
(1.1): 'abcdef' --> 'bbcdef' --> 'abcdef'
(2.1): repeat (1) because there were 2 replacements during (1.1)
(1.2): 'abcdef' --> 'bbcdef' --> 'abcdef'
(2.2): repeat (1) because there were 2 replacements during (1.2)
(1.3): ...... and so on, just like (1.1), forever....
The step (3) (terminate) is never reached.
=> The algorithm won't terminate with the given S and R.
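For concreteness, here is a direct Python simulation of the flowchart (a sketch of mine, assuming an identity rule like 'a'->'a' never counts as triggered); it reproduces Example 1:

def run_once(S, R):
    # one pass of step (1): apply every rule, in order, to the whole string
    triggered = False
    for x, y in R:
        if x in S and x != y:   # assumption: an identity rule x->x triggers nothing
            S = S.replace(x, y)
            triggered = True
    return S, triggered

S, R = 'abcdef', [('a', 'b'), ('b', 'c')]
while True:
    S, triggered = run_once(S, R)
    if not triggered:
        break
print(S)  # 'cccdef', so the algorithm terminates, matching Example 1

With Example 2's rules this while loop never exits, which is exactly the behaviour the question asks us to predict.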
I worked on this and found no efficient algorithm for the question
"does the algorithm halt?".
The first idea that came to my mind was to "find a cycle" of letters
appearing in triggered rules, but the number of rules may be too
large for this idea to be practical.
The second was to propose a "threshold" for the number of repeats:
if the threshold is exceeded, we conclude the algorithm will not
terminate.
The "threshold" could be chosen arbitrarily (as long as it is large
enough), but this approach is not really compelling.
I am wondering whether there is an upper bound for the "threshold"
which ensures that we always get the right answer.
I came up with threshold = 26, where 26 is the number of letters
from 'a' to 'z', but I can't prove whether that is true (or not).
(I hope it would be something like the Bellman-Ford algorithm, which detects a negative cycle in a fixed number of steps.)
How about you? Please help me find the answer (this is not
homework).
Thank you for reading.

One simple way to think about solving this is to consider a string of length 1 and see if the problem can loop for any given starting letter. Since the string's length is never changing, and applying a rule applies to each character in S independently, it suffices to consider just a string of length 1.
Now, build a state diagram with 26 states, one for each letter of the alphabet. For your state transitions, consider this process:
Apply the transitions from R one at a time, in order, until you reach the end of R. If, starting from a particular state (letter), no rule ever moves you to a different letter during the pass, that letter is a terminating state: a character that reaches it never changes again. Otherwise, after applying the entire sequence of R, you will end up at some letter. This will be your new state.
Note that all state transitions are deterministic because we apply the entire sequence of R, not just the individual transitions. If we applied the individual transitions, we might get confused, because we might have a -> b, b->a, a->c. When looking at the individual operations, we might think there are two possible transitions from a (either to b or to c), but really, considering the entire sequence, we see definitively that a transitions to c.
You will be done creating your state diagram after computing the next state of each starting letter. Creating the entire state diagram in this manner requires 26 * |R| operations. If following the transitions from some letter of S revisits a state before reaching a terminating one, then the algorithm fails to halt on S; otherwise it halts.
Alternatively, you can simply run the algorithm and declare non-termination after 26 iterations through the entire sequence R: by the pigeonhole principle, if a letter has not settled after 26 passes, it must have revisited a state.
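A minimal Python sketch of this state-diagram check (my own code, not the answerer's; a letter counts as terminating only if no rule fires on it during a whole pass):

def halts(S, R):
    def full_pass(c):
        # apply the whole ordered rule list R to a single letter c
        fired = False
        for x, y in R:
            if c == x and x != y:
                c, fired = y, True
        return c, fired

    # deterministic next state (and whether any rule fired) for all 26 letters
    table = {c: full_pass(c) for c in 'abcdefghijklmnopqrstuvwxyz'}

    for c in set(S):
        seen = set()
        while table[c][1]:        # some rule still fires on this letter
            if c in seen:         # revisited a state: this character loops forever
                return False
            seen.add(c)
            c = table[c][0]
    return True

print(halts('abcdef', [('a', 'b'), ('b', 'c')]))  # True  (Example 1)
print(halts('abcdef', [('a', 'b'), ('b', 'a')]))  # False (Example 2)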

AQL to validate path to node

We're required to have some AQL that validates a specific path to an entity. The current solution performs very poorly, due to needing to scan whole collections.
e.g. here we have 3 entity 'types': a, b, c (though they are all in a single collection) and specific edge collections between them and we want to establish whether or not there is a connection between _key "123" and _key "234" that goes exactly through a -> b -> c.
FOR a IN entities FILTER a._key == "123"
  FOR b IN 1..1 OUTBOUND a edges_a_to_b
    FOR c IN 1..1 INBOUND b edges_c_to_b
      FILTER c._key == "234"
      ...
This can fan out very quickly!
We have another solution, where we use SHORTEST_PATH and specify the appropriate DIRECTION and edge collections, which is much faster (>100 times). But we worry that this approach does not quite satisfy our general case: the order of the edges is not enforced, and we may have to go through the same edge collection more than once, which we cannot do with that syntax.
Is there another way, possibly involving paths in the traversal?
Thanks!
Dan.
If I understand correctly, you always know the exact path that is required between your two vertices.
So to take your example a -> b -> c, a valid result will have:
path.vertices == [a, b, c]
So we can use this path to filter on it, which only works if you use a single traversal step instead of multiple ones.
So what we try to do is the following pattern:
FOR c, e, path IN <pathlength> <direction> <start> <edge-collections>
  FILTER path.vertices[0] == a // This needs to be formulated correctly
  FILTER path.vertices[1] == b // This needs to be formulated correctly
  FILTER path.vertices[2] == c // This needs to be formulated correctly
  LIMIT 1 // We only need exactly one path, so LIMIT 1 is enough
[...]
So with this hint it is possible to write the query in the following way:
FOR a IN entities
  FILTER a._key == "123"
  FOR c, e, path IN 2 OUTBOUND a edges_a_to_b, INBOUND edges_b_to_c
    FILTER path.vertices[1] == /* whatever identifies b e.g. vertices[1].type == "b" */
    FILTER path.vertices[2]._key == "234"
    LIMIT 1 /* This will stop as soon as the first match is found, so very important! */
    /* [...] */
This will allow the optimizer to apply the filter conditions as early as possible, and will (almost) use the same algorithm as the shortest path implementation.
The trick is to use one traversal instead of multiples to save internal overhead and allow for better optimization.
Also take into account that it might be better to search in the opposite direction:
e.g. instead of a -> b -> c check for c <- b <- a which might be faster.
This depends on the number of edges per node.
I assume a doctor has many surgeries, but a single patient most likely has only a small number of surgeries, so it is better to start at the patient and check backwards instead of starting at the doctor and checking forwards.
Please let me know if this helps already; otherwise we can talk about more details and see if we can find some further optimizations.
Disclaimer: I am part of the Core-Dev team at ArangoDB

Data Structure for Subsequence Queries

In a program I need to efficiently answer queries of the following form:
Given a set of strings A and a query string q return all s ∈ A such that q is a subsequence of s
For example, given A = {"abcdef", "aaaaaa", "ddca"} and q = "acd" exactly "abcdef" should be returned.
The following is what I have considered so far:
For each possible character, make a sorted list of all string/locations where it appears. To query, interleave the lists of the involved characters, and scan through them looking for matches within string boundaries.
This would probably be more efficient for words instead of characters, since the limited number of different characters will make the return lists very dense.
For each n-prefix q might have, store the list of all matching strings. n might realistically be close to 3. For query strings longer than that we brute force the initial list.
This might speed things up a bit, but one could easily imagine some n-subsequences being present close to all strings in A, which means worst case is the same as just brute forcing the entire set.
Do you know of any data structures, algorithms or preprocessing tricks which might be helpful for performing the above task efficiently for large A? (My strings s will be around 100 characters.)
Update: Some people have suggested using LCS to check if q is a subsequence of s. I just want to remind everyone that this can be done using a simple function such as:
def isSub(q, s):
    i, j = 0, 0
    while i != len(q) and j != len(s):
        if q[i] == s[j]:
            i += 1
            j += 1
        else:
            j += 1
    return i == len(q)
Update 2: I've been asked to give more details on the nature of q, A and its elements. While I'd prefer something that works as generally as possible, I assume A will have length around 10^6 and will need to support insertion. The elements s will be shorter with an average length of 64. The queries q will only be 1 to 20 characters and be used for a live search, so the query "ab" will be sent just before the query "abc". Again, I'd much prefer the solution to use the above as little as possible.
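Since "ab" arrives just before "abc", and any s containing "abc" as a subsequence also contains "ab", a previous result set can be reused. A minimal sketch of this incremental filtering, building on the isSub function above (the class and its names are mine):

class LiveSearch:
    def __init__(self, A):
        self.A = list(A)
        self.last_q = ''
        self.last_result = list(A)

    def query(self, q):
        # if q extends the previous query, any match for q was already
        # a match for the previous query, so the old result set suffices
        candidates = self.last_result if q.startswith(self.last_q) else self.A
        result = [s for s in candidates if isSub(q, s)]
        self.last_q, self.last_result = q, result
        return result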
Update 3: It has occurred to me that a data structure with O(n^{1-epsilon}) lookups would allow you to solve OVP / disprove the SETH conjecture. That is probably the reason for our suffering. The only options are then to disprove the conjecture, use approximation, or take advantage of the dataset. I imagine quadlets and tries would do the last in different settings.
It could be done by building an automaton. You can start with an NFA (nondeterministic finite automaton, which is like a nondeterministic directed graph) which allows edges labeled with an epsilon character, meaning that during processing you can jump from one node to another without consuming any character. I'll try to reduce your A. Let's say your A is:
A = {'ab', 'bc'}
If you build an NFA for the string ab you should get something like this:
+--(1)--+
e | a| |e
(S)--+--(2)--+--(F)
| b| |
+--(3)--+
The above drawing is not the best-looking automaton, but there are a few points to consider:
S state is the starting state and F is the ending state.
If you are at F state it means your string qualifies as a subsequence.
The rule for propagating within an automaton is that you can consume e (epsilon) to jump forward; therefore you can be at more than one state at each point in time. This is called the e-closure.
Now, given b, starting at state S I can jump one epsilon, reach 2, consume b and reach 3. Now, given the end of the string, I consume epsilon and reach F; thus b qualifies as a subsequence of ab. So does a or ab, as you can try yourself using the above automaton.
The good thing about NFAs is that they have one start state and one final state, and two NFAs can easily be connected using epsilons. There are various algorithms that can help you convert an NFA to a DFA. A DFA is a directed graph which can follow a precise path given a character; in particular, it is always in exactly one state at any point in time. (For any NFA, there is a corresponding DFA whose states correspond to sets of states in the NFA.)
So, for A = {'ab', 'bc'}, we would need to build an NFA for ab, then an NFA for bc, then join the two NFAs and build the DFA of the entire big NFA.
EDIT
The NFA for subsequences of abc would be a?b?c?, so you can build your NFA as:
Now, consider the input acd. To query whether ab is a subsequence of {'abc', 'acd'}, you can use this NFA: (a?b?c?)|(a?c?d?). Once you have the NFA you can convert it to a DFA, where each state will record whether the input is a subsequence of abc or acd or maybe both.
I used link below to make NFA graphic from regular expression:
http://hackingoff.com/images/re2nfa/2013-08-04_21-56-03_-0700-nfa.svg
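To experiment with this idea without writing an automaton library, Python's regex engine can stand in for the NFA (a rough sketch of mine, assuming lowercase letters only; a real DFA construction would be more efficient):

import re

A = ['abcdef', 'aaaaaa', 'ddca']
# the subsequence NFA of a string s corresponds to the regex s0?s1?...sn?
patterns = [re.compile(''.join(ch + '?' for ch in s)) for s in A]

def query(q):
    return [s for s, p in zip(A, patterns) if p.fullmatch(q)]

print(query('acd'))  # ['abcdef']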
EDIT 2
You're right! That is the case if you have 10,000 unique characters in A. By unique I mean A is something like {'abc', 'def'}, i.e. the intersection of the elements of A is the empty set. Then your DFA would have the worst-case number of states, i.e. 2^10000. But I'm not sure when that would be possible, given that there can never be 10,000 unique characters. Even if you have 10,000 characters in A there will still be repetitions, and that might reduce the number of states a lot, since e-closures might eventually merge. I cannot really estimate how much it might reduce it. But even with 10 million states, you would consume less than 10 MB to construct the DFA. You can even use the NFA and find e-closures at run time, but that would add to the run-time complexity. You can search for papers on how large regexes are converted to DFAs.
EDIT 3
For the regex (a?b?c?)|(e?d?a?)|(a?b?m?)
If you convert the above NFA to a DFA you get:
It actually has a lot fewer states than the NFA.
Reference:
http://hackingoff.com/compilers/regular-expression-to-nfa-dfa
EDIT 4
After fiddling with that website more, I found that the worst case would be something like A = {'aaaa', 'bbbbb', 'cccc', ...}. But even in this case the DFA has fewer states than the NFA.
Tests
There have been four main proposals in this thread:
Shivam Kalra suggested creating an automaton based on all the strings in A. This approach has seen some treatment in the literature, normally under the name "Directed Acyclic Subsequence Graph" (DASG).
J Random Hacker suggested extending my 'prefix list' idea to all 'n choose 3' triplets in the query string, and merging them all using a heap.
In the note "Efficient Subsequence Search in Databases" Rohit Jain, Mukesh K. Mohania and Sunil Prabhakar suggest using a Trie structure with some optimizations and recursively search the tree for the query. They also have a suggestion similar to the triplet idea.
Finally there is the 'naive' approach, which wanghq suggested optimizing by storing an index for each element of A.
To get a better idea of what's worth putting continued effort into, I have implemented the above four approaches in Python and benchmarked them on two sets of data. The implementations could all be made a couple of orders of magnitude faster with a well-done implementation in C or Java, and I haven't included the optimizations suggested for the 'trie' and 'naive' versions.
Test 1
A consists of random paths from my filesystem. q are 100 random [a-z] strings of average length 7. As the alphabet is large (and Python is slow) I was only able to use duplets for method 3.
Construction times in seconds as a function of A size:
Query times in seconds as a function of A size:
Test 2
A consists of randomly sampled [a-b] strings of length 20. q are 100 random [a-b] strings of average length 7. As the alphabet is small we can use quadlets for method 3.
Construction times in seconds as a function of A size:
Query times in seconds as a function of A size:
Conclusions
The double logarithmic plot is a bit hard to read, but from the data we can draw the following conclusions:
Automatons are very fast at querying (constant time), however they are impossible to create and store for |A| >= 256. It might be possible that a closer analysis could yield a better time/memory balance, or some tricks applicable for the remaining methods.
The dup-/trip-/quadlet method is about twice as fast as my trie implementation and four times as fast as the 'naive' implementation. I used only a linear amount of lists for the merge, instead of n^3 as suggested by j_random_hacker. It might be possible to tune the method better, but in general it was disappointing.
My trie implementation consistently does better than the naive approach by around a factor of two. By incorporating more preprocessing (like "where are the next 'c's in this subtree") or perhaps merging it with the triplet method, it seems like today's winner.
If you can accept an order of magnitude less performance, the naive method does comparatively fine for very little cost.
As you point out, it might be that all strings in A contain q as a subsequence, in which case you can't hope to do better than O(|A|). (That said, you might still be able to do better than the time taken to run LCS on (q, A[i]) for each string i in A, but I won't focus on that here.)
TTBOMK there are no magic, fast ways to answer this question (in the way that suffix trees are the magic, fast way to answer the corresponding question involving substrings instead of subsequences). Nevertheless if you expect the set of answers for most queries to be small on average then it's worth looking at ways to speed up these queries (the ones yielding small-size answers).
I suggest filtering based on a generalisation of your heuristic (2): if some database sequence A[i] contains q as a subsequence, then it must also contain every subsequence of q. (The reverse direction is not true unfortunately!) So for some small k, e.g. 3 as you suggest, you can preprocess by building an array of lists telling you, for every length-k string s, the list of database sequences containing s as a subsequence. I.e. c[s] will contain a list of the ID numbers of database sequences containing s as a subsequence. Keep each list in numeric order to enable fast intersections later.
Now the basic idea (which we'll improve in a moment) for each query q is: Find all k-sized subsequences of q, look up each in the array of lists c[], and intersect these lists to find the set of sequences in A that might possibly contain q as a subsequence. Then for each possible sequence A[i] in this (hopefully small) intersection, perform an O(n^2) LCS calculation with q to see whether it really does contain q.
A few observations:
The intersection of 2 sorted lists of size m and n can be found in O(m+n) time. To find the intersection of r lists, perform r-1 pairwise intersections in any order. Since taking intersections can only produce sets that are smaller or of the same size, time can be saved by intersecting the smallest pair of lists first, then the next smallest pair (this will necessarily include the result of the first operation), and so on. In particular: sort lists in increasing size order, then always intersect the next list with the "current" intersection.
It is actually faster to find the intersection a different way, by adding the first element (sequence number) of each of the r lists into a heap data structure, then repeatedly pulling out the minimum value and replenishing the heap with the next value from the list that the most recent minimum came from. This will produce a list of sequence numbers in nondecreasing order; any value that appears fewer than r times in a row can be discarded, since it cannot be a member of all r sets. (A sketch of this heap-based intersection follows this list.)
If a k-string s has only a few sequences in c[s], then it is in some sense discriminating. For most datasets, not all k-strings will be equally discriminating, and this can be used to our advantage. After preprocessing, consider throwing away all lists having more than some fixed number (or some fixed fraction of the total) of sequences, for 3 reasons:
They take a lot of space to store
They take a lot of time to intersect during query processing
Intersecting them will usually not shrink the overall intersection much
It is not necessary to consider every k-subsequence of q. Although this will produce the smallest intersection, it involves merging (|q| choose k) lists, and it might well be possible to produce an intersection that is nearly as small using just a fraction of these k-subsequences. E.g. you could limit yourself to trying all (or a few) k-substrings of q. As a further filter, consider just those k-subsequences whose sequence lists in c[s] are below some value. (Note: if your threshold is the same for every query, you might as well delete all such lists from the database instead, since this will have the same effect, and saves space.)
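Here is a minimal sketch (my own code, not j_random_hacker's) of the heap-based intersection of r sorted ID lists described in the second observation above:

import heapq

def intersect_sorted_lists(lists):
    # each input list is sorted and duplicate-free (a sorted set of sequence IDs)
    r = len(lists)
    heap = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]
    if len(heap) < r:
        return []                  # an empty input list forces an empty intersection
    heapq.heapify(heap)
    result, current, count = [], None, 0
    while heap:
        value, i, j = heapq.heappop(heap)
        if value == current:
            count += 1
        else:
            current, count = value, 1
        if count == r:             # value appeared in all r lists
            result.append(value)
        if j + 1 < len(lists[i]):  # replenish from the list the minimum came from
            heapq.heappush(heap, (lists[i][j + 1], i, j + 1))
    return result

print(intersect_sorted_lists([[1, 3, 5, 7], [3, 4, 5], [3, 5, 9]]))  # [3, 5]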
One thought:
if q tends to be short, maybe reducing A and q to sets will help?
So for the example, reduce to {(a,b,c,d,e,f), (a), (a,c,d)}. Looking up possible candidates for any q should be faster than the original problem (that's a guess, actually; I'm not sure exactly how. Maybe sort them and "group" similar ones in Bloom filters?), then use brute force to weed out false positives.
If the strings in A are lengthy, you could make the characters unique based on their occurrence, so that would be {(a1,b1,c1,d1,e1,f1),(a1,a2,a3,a4,a5,a6),(a1,c1,d1,d2)}. This is fine, because if you search for "ddca" you only want to match the second d to a second d. The size of your alphabet would go up (bad for Bloom or bitmap style operations) and would be different every time you get new A's, but the number of false positives would go down.
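A small sketch of the occurrence-numbering idea as I read it (my own code); the numbered-set test is a necessary condition (it checks character counts) but not a sufficient one (it ignores order), so the brute-force check still runs last:

from collections import Counter

def numbered_set(s):
    out, seen = set(), Counter()
    for ch in s:
        seen[ch] += 1
        out.add((ch, seen[ch]))   # 'ddca' -> {('d',1), ('d',2), ('c',1), ('a',1)}
    return out

A = ['abcdef', 'aaaaaa', 'ddca']
index = [(s, numbered_set(s)) for s in A]

def candidates(q):
    qs = numbered_set(q)
    return [s for s, ns in index if qs <= ns]

print(candidates('acd'))  # ['abcdef', 'ddca']; 'ddca' is a false positive to weed out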
First let me make sure my understanding/abstraction is correct. The following two requirements should be met:
if A is a subsequence of B, then all characters in A should appear in B.
for those characters in B, their positions should be in an ascending order.
Note that, a char in A might appear more than once in B.
To solve 1), a map/set can be used. The key is the character in string B, and the value doesn't matter.
To solve 2), we need to maintain the position of each character. Since a character might appear more than once, the position should be a collection.
So the structure is like:
Map<Character, List<Integer>>
e.g.
abcdefab
a: [0, 6]
b: [1, 7]
c: [2]
d: [3]
e: [4]
f: [5]
Once we have the structure, how do we know if the characters are in the right order as they are in string A? If A is acd, we should check the a at position 0 (but not 6), the c at position 2 and the d at position 3.
The strategy here is to choose the position that's after and close to the previous chosen position. TreeSet is a good candidate for this operation.
public E higher(E e)
Returns the least element in this set strictly greater than the given element, or null if there is no such element.
The runtime complexity is O(s * (n1 + n2) * log(m)).
s: number of strings in the set
n1: number of chars in a string (B)
n2: number of chars in the query string (A)
m: number of duplicates of a character in a string (B), e.g. there are 5 a's.
Below is the implementation with some test data.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeSet;

public class SubsequenceStr {

    public static void main(String[] args) {
        String[] testSet = new String[] {
                "abcdefgh", //right one
                "adcefgh", //has all chars, but not the right order
                "bcdefh", //missing one char
                "", //empty
                "acdh", //exact match
                "acd",
                "acdehacdeh"
        };
        List<String> subseqenceStrs = subsequenceStrs(testSet, "acdh");
        for (String str : subseqenceStrs) {
            System.out.println(str);
        }
        //duplicates in query
        subseqenceStrs = subsequenceStrs(testSet, "aa");
        for (String str : subseqenceStrs) {
            System.out.println(str);
        }
        subseqenceStrs = subsequenceStrs(testSet, "aaa");
        for (String str : subseqenceStrs) {
            System.out.println(str);
        }
    }

    public static List<String> subsequenceStrs(String[] strSet, String q) {
        System.out.println("find strings whose subsequence string is " + q);
        List<String> results = new ArrayList<String>();
        for (String str : strSet) {
            char[] chars = str.toCharArray();
            Map<Character, TreeSet<Integer>> charPositions = new HashMap<Character, TreeSet<Integer>>();
            for (int i = 0; i < chars.length; i++) {
                TreeSet<Integer> positions = charPositions.get(chars[i]);
                if (positions == null) {
                    positions = new TreeSet<Integer>();
                    charPositions.put(chars[i], positions);
                }
                positions.add(i);
            }
            char[] qChars = q.toCharArray();
            int lowestPosition = -1;
            boolean isSubsequence = false;
            for (int i = 0; i < qChars.length; i++) {
                TreeSet<Integer> positions = charPositions.get(qChars[i]);
                if (positions == null || positions.size() == 0) {
                    break;
                } else {
                    Integer position = positions.higher(lowestPosition);
                    if (position == null) {
                        break;
                    } else {
                        lowestPosition = position;
                        if (i == qChars.length - 1) {
                            isSubsequence = true;
                        }
                    }
                }
            }
            if (isSubsequence) {
                results.add(str);
            }
        }
        return results;
    }
}
Output:
find strings whose subsequence string is acdh
abcdefgh
acdh
acdehacdeh
find strings whose subsequence string is aa
acdehacdeh
find strings whose subsequence string is aaa
As always, I might be totally wrong :)
You might want to have a look at the book Algorithms on Strings, Trees and Sequences by Dan Gusfield. As it turns out, part of it is available on the internet. You might also want to read Gusfield's Introduction to Suffix Trees. As it turns out, this book covers many approaches for your kind of question. It is considered one of the standard publications in this field.
Get a fast longest common subsequence (LCS) algorithm implementation. Actually it suffices to determine the length of the LCS. Notice that Gusfield's book has very good algorithms and also points to more sources for such algorithms.
Return all s ∈ A with length(LCS(s,q)) == length(q)
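For completeness, a direct (if slow) rendering of this criterion as a sketch, with a simple dynamic-programming LCS length (my own code; as the question's update notes, the linear isSub check is cheaper in practice):

def lcs_length(s, t):
    # classic O(len(s) * len(t)) dynamic programme, kept to one row at a time
    prev = [0] * (len(t) + 1)
    for ch in s:
        cur = [0]
        for j, tj in enumerate(t, 1):
            cur.append(prev[j - 1] + 1 if ch == tj else max(prev[j], cur[-1]))
        prev = cur
    return prev[-1]

def query_by_lcs(A, q):
    return [s for s in A if lcs_length(s, q) == len(q)]

print(query_by_lcs(['abcdef', 'aaaaaa', 'ddca'], 'acd'))  # ['abcdef']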

whats another way to write python3 zip [closed]

I've been working on code that reads lines in a file and then organizes them. However, I got stuck at one point and my friend told me what I could use. The code works, but it seems that I don't know what he is doing at lines 7 and 8 FROM THE BOTTOM. I used #### so you guys know which lines they are.
So, essentially, how can you rewrite those 2 lines of code, and why do they work? I seem to not understand dictionaries.
from sys import argv
filename = input("Please enter the name of a file: ")
file_in = open(filename, "r")
print("Number of times each animal visited each station:")
print("Animal Id Station 1 Station 2")
animaldictionary = dict()
for line in file_in:
    if '\n' == line[-1]:
        line = line[:-1]
    (a, b, c) = line.split(':')
    ac = (a, c)
    if ac not in animaldictionary:
        animaldictionary[ac] = 0
    animaldictionary[ac] += 1
alla = []
for key, value in animaldictionary:
    if key not in alla:
        alla.append(key)
print("alla:", alla)
allc = []
for key, value in animaldictionary:
    if value not in allc:
        allc.append(value)
print("allc", allc)
for a in sorted(alla):
    print('%9s' % a, end=' ' * 13)
    for c in sorted(allc):
        ac = (a, c)
        valc = 0
        if ac in animaldictionary:
            valc = animaldictionary[ac]
        print('%4d' % valc, end=' ' * 19)
    print()
print("=" * 60)
print("Animals that visited both stations at least 3 times: ")
for a in sorted(alla):
    x = 'false'
    for c in sorted(allc):
        ac = (a, c)
        count = 0
        if ac in animaldictionary:
            count = animaldictionary[ac]
        if count >= 3:
            x = 'true'
    if x is 'true':
        print('%6s' % a, end=' ')
print("")
print("=" * 60)
print("Average of the number visits in each month for each station:")
#(alla, allc) =
#for s in zip(*animaldictionary.keys()):
#    (alla,allc).append(s)
#print(alla, allc)
(alla, allc,) = (set(s) for s in zip(*animaldictionary.keys())) ##### how else can you write this
##### how else can you rewrite the next code
print('\n'.join(['\t'.join((c, str(sum(animaldictionary.get(ac, 0) for a in alla for ac in ((a, c,),)) // 12))) for c in sorted(allc)]))
print("=" * 60)
print("Month with the maximum number of visits for each station:")
print("Station Month Number")
print("1")
print("2")
The two lines you indicated are indeed rather confusing. I'll try to explain them as best I can, and suggest alternative implementations.
The first one computes values for alla and allc:
(alla,allc,) = (set(s) for s in zip(*animaldictionary.keys()))
This is nearly equivalent to the loops you've already written above to build your alla and allc lists. You can skip it completely if you want. However, let's unpack what it's doing, so you can actually understand it.
The innermost part is animaldictionary.keys(). This returns an iterable object that contains all the keys of your dictionary. Since the keys in animaldictionary are two-valued tuples, that's what you'll get from the iterable. It's actually not necessary to call keys when dealing with a dictionary in most cases, since operations on the keys view are usually identical to doing the same operation on the dictionary directly.
Moving on, the keys view gets wrapped up by a call to the zip function using zip(*keys). There are two things happening here. First, the * syntax unpacks the iterable from above into separate arguments. So if animaldictionary's keys were ("a1", "c1"), ("a2", "c2"), ("a3", "c3") this would call zip with those three tuples as separate arguments. Now, what zip does is turn several iterable arguments into a single iterable, yielding a tuple with the first value from each, then a tuple with the second value from each, and so on. So zip(("a1", "c1"), ("a2", "c2"), ("a3", "c3")) would return an iterator yielding ("a1", "a2", "a3") followed by ("c1", "c2", "c3").
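For example, with made-up keys:

pairs = [("a1", "c1"), ("a2", "c2"), ("a3", "c3")]
alla, allc = (set(s) for s in zip(*pairs))
# alla == {"a1", "a2", "a3"} and allc == {"c1", "c2", "c3"}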
The next part is a generator expression that passes each value from the zip expression into the set constructor. This serves to eliminate any duplicates. set instances can also be useful in other ways (e.g. finding intersections) but that's not needed here.
Finally, the two sets of a and c values get assigned to variables alla and allc. They replace the lists you already had with those names (and the same contents!).
You've already got an alternative to this, where you calculate alla and allc as lists. Using sets may be slightly more efficient, but it probably doesn't matter too much for small amounts of data. Another, more clear, way to do it would be:
alla = set()
allc = set()
for key in animaldict: # note, iterating over a dict yields the keys!
    a, c = key # unpack the tuple key
    alla.add(a)
    allc.add(c)
The second line you were asking about does some averaging and combines the results into a giant string which it prints out. It is really bad programming style to cram so much into one line. And in fact, it does some needless stuff which makes it even more confusing. Here it is, with a couple of line breaks added to make it all fit on the screen at once.
print('\n'.join(['\t'.join((c,str(sum(animaldictionary.get(ac,0)
for a in alla for ac in ((a,c,),))//12)
)) for c in sorted(allc)]))
The innermost piece of this is for ac in ((a,c,),). This is silly, since it's a loop over a 1-element tuple. It's a way of renaming the tuple (a,c) to ac, but it is very confusing and unnecessary.
If we replace the one use of ac with the tuple explicitly written out, the new innermost piece is animaldictionary.get((a,c),0). This is a special way of writing animaldictionary[(a, c)], but without running the risk of a KeyError being raised if (a, c) is not in the dictionary. Instead, the default value of 0 (passed in to get) will be returned for non-existent keys.
That get call is wrapped up in this: (getcall for a in alla). This is a generator expression that gets all the values from the dictionary with a given c value in the key (with a default of zero if the value is not present).
The next step is taking the average of the values in the previous generator expression: sum(genexp)//12. This is pretty straightforward, though you should note that using // for division always rounds down to the next integer. If you want a more precise floating point value, use just /.
The next part is a call to '\t'.join, with an argument that is a single (c, avg) tuple. This is an awkward construction that could be more clearly written as c+"\t"+str(avg) or "{}\t{}".format(c, avg). All of these result in a string containing the c value, a tab character and the string form of the average calculated above.
The next step is a list comprehension, [joinedstr for c in sorted(allc)] (where joinedstr is the join call in the previous step). Using a list comprehension here is a bit odd, since there's no need for a list (a generator expression would do just as well).
Finally, the list comprehension is joined with newlines and printed: print("\n".join(listcomp)). This is straightforward.
Anyway, this whole mess can be rewritten in a much clearer way, by using a few variables and printing each line separately in a loop:
for c in sorted(allc):
    total_values = sum(animaldictionary.get((a,c),0) for a in alla)
    average = total_values // 12
    print("{}\t{}".format(c, average))
To finish, I have some general suggestions.
First, your data structure may not be optimal for the uses you are making of your data. Rather than having animaldict be a dictionary with (a,c) keys, it might make more sense to have a nested structure, where you index each level separately. That is, animaldict[a][c]. It might even make sense to have a second dictionary containing the same values indexed in the reverse order (e.g. one is indexed [a][c] while another is indexed [c][a]). With this approach you might not need the alla and allc lists for iterating (you'd just loop over the contents of the main dictionary directly).
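For instance, the nested layout could be built from the existing dictionary like this (a sketch; the variable names are mine):

nested = {}
for (a, c), count in animaldictionary.items():
    nested.setdefault(a, {})[c] = count

# iteration then needs no separate alla/allc lists:
for a, stations in sorted(nested.items()):
    for c, count in sorted(stations.items()):
        print(a, c, count)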
My second suggestion is about code style. Many of your variables are named poorly, either because their names don't have any meaning (e.g. c) or where the names imply a meaning that is incorrect. The most glaring issue is your key and value variables, which in fact unpack two pieces of the key (AKA a and c). In other situations you can get keys and values together, but only when you are iterating over a dictionary's items() view rather than on the dictionary directly.

Ukkonen's suffix tree algorithm in plain English

I feel a bit thick at this point. I've spent days trying to fully wrap my head around suffix tree construction, but because I don't have a mathematical background, many of the explanations elude me as they start to make excessive use of mathematical symbology. The closest to a good explanation that I've found is Fast String Searching With Suffix Trees, but he glosses over various points and some aspects of the algorithm remain unclear.
A step-by-step explanation of this algorithm here on Stack Overflow would be invaluable for many others besides me, I'm sure.
For reference, here's Ukkonen's paper on the algorithm: http://www.cs.helsinki.fi/u/ukkonen/SuffixT1withFigs.pdf
My basic understanding, so far:
I need to iterate through each prefix P of a given string T
I need to iterate through each suffix S in prefix P and add that to tree
To add suffix S to the tree, I need to iterate through each character in S, with the iterations consisting of either walking down an existing branch that starts with the same set of characters C in S and potentially splitting an edge into descendent nodes when I reach a differing character in the suffix, OR if there was no matching edge to walk down. When no matching edge is found to walk down for C, a new leaf edge is created for C.
The basic algorithm appears to be O(n^2), as is pointed out in most explanations, as we need to step through all of the prefixes, then we need to step through each of the suffixes for each prefix. Ukkonen's algorithm is apparently unique because of the suffix pointer technique he uses, though I think that is what I'm having trouble understanding.
I'm also having trouble understanding:
exactly when and how the "active point" is assigned, used and changed
what is going on with the canonization aspect of the algorithm
Why the implementations I've seen need to "fix" bounding variables that they are using
Here is the completed C# source code. It not only works correctly, but supports automatic canonization and renders a nicer-looking text graph of the output. Source code and sample output are at:
https://gist.github.com/2373868
Update 2017-11-04
After many years I've found a new use for suffix trees, and have implemented the algorithm in JavaScript. Gist is below. It should be bug-free. Dump it into a js file, npm install chalk from the same location, and then run with node.js to see some colourful output. There's a stripped down version in the same Gist, without any of the debugging code.
https://gist.github.com/axefrog/c347bf0f5e0723cbd09b1aaed6ec6fc6
The following is an attempt to describe the Ukkonen algorithm by first showing what it does when the string is simple (i.e. does not contain any repeated characters), and then extending it to the full algorithm.
First, a few preliminary statements.
What we are building, is basically like a search trie. So there
is a root node, edges going out of it leading to new nodes, and
further edges going out of those, and so forth
But: Unlike in a search trie, the edge labels are not single
characters. Instead, each edge is labeled using a pair of integers
[from,to]. These are pointers into the text. In this sense, each
edge carries a string label of arbitrary length, but takes only O(1)
space (two pointers).
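To make that representation concrete, here is one possible shape of such a node/edge record in Python (the field names are mine, not part of the original answer):

class Node:
    def __init__(self, start, end):
        self.start = start        # index into the text where the incoming edge's label begins
        self.end = end            # index where it ends; None stands for the moving end '#'
        self.children = {}        # first character of an outgoing edge's label -> child Node
        self.suffix_link = None   # filled in once the algorithm introduces suffix links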
Basic principle
I would like to first demonstrate how to create the suffix tree of a
particularly simple string, a string with no repeated characters:
abc
The algorithm works in steps, from left to right. There is one step for every character of the string. Each step might involve more than one individual operation, but we will see (see the final observations at the end) that the total number of operations is O(n).
So, we start from the left, and first insert only the single character
a by creating an edge from the root node (on the left) to a leaf,
and labeling it as [0,#], which means the edge represents the
substring starting at position 0 and ending at the current end. I
use the symbol # to mean the current end, which is at position 1
(right after a).
So we have an initial tree, which looks like this:
And what it means is this:
Now we progress to position 2 (right after b). Our goal at each step
is to insert all suffixes up to the current position. We do this
by
expanding the existing a-edge to ab
inserting one new edge for b
In our representation this looks like
And what it means is:
We observe two things:
The edge representation for ab is the same as it used to be
in the initial tree: [0,#]. Its meaning has automatically changed
because we updated the current position # from 1 to 2.
Each edge consumes O(1) space, because it consists of only two
pointers into the text, regardless of how many characters it
represents.
Next we increment the position again and update the tree by appending
a c to every existing edge and inserting one new edge for the new
suffix c.
In our representation this looks like
And what it means is:
We observe:
The tree is the correct suffix tree up to the current position
after each step
There are as many steps as there are characters in the text
The amount of work in each step is O(1), because all existing edges
are updated automatically by incrementing #, and inserting the
one new edge for the final character can be done in O(1)
time. Hence for a string of length n, only O(n) time is required.
First extension: Simple repetitions
Of course this works so nicely only because our string does not
contain any repetitions. We now look at a more realistic string:
abcabxabcd
It starts with abc as in the previous example, then ab is repeated
and followed by x, and then abc is repeated followed by d.
Steps 1 through 3: After the first 3 steps we have the tree from the previous example:
Step 4: We move # to position 4. This implicitly updates all existing
edges to this:
and we need to insert the final suffix of the current step, a, at
the root.
Before we do this, we introduce two more variables (in addition to
#), which of course have been there all the time but we haven't used
them so far:
The active point, which is a triple
(active_node,active_edge,active_length)
The remainder, which is an integer indicating how many new suffixes
we need to insert
The exact meaning of these two will become clear soon, but for now
let's just say:
In the simple abc example, the active point was always
(root,'\0x',0), i.e. active_node was the root node, active_edge was specified as the null character '\0x', and active_length was zero. The effect of this was that the one new edge that
we inserted in every step was inserted at the root node as a
freshly created edge. We will see soon why a triple is necessary to
represent this information.
The remainder was always set to 1 at the beginning of each
step. The meaning of this was that the number of suffixes we had to
actively insert at the end of each step was 1 (always just the
final character).
Now this is going to change. When we insert the current final
character a at the root, we notice that there is already an outgoing
edge starting with a, specifically: abca. Here is what we do in
such a case:
We do not insert a fresh edge [4,#] at the root node. Instead we
simply notice that the suffix a is already in our
tree. It ends in the middle of a longer edge, but we are not
bothered by that. We just leave things the way they are.
We set the active point to (root,'a',1). That means the active
point is now somewhere in the middle of outgoing edge of the root node that starts with a, specifically, after position 1 on that edge. We
notice that the edge is specified simply by its first
character a. That suffices because there can be only one edge
starting with any particular character (confirm that this is true after reading through the entire description).
We also increment remainder, so at the beginning of the next step
it will be 2.
Observation: When the final suffix we need to insert is found to
exist in the tree already, the tree itself is not changed at all (we only update the active point and remainder). The tree
is then not an accurate representation of the suffix tree up to the
current position any more, but it contains all suffixes (because the final
suffix a is contained implicitly). Hence, apart from updating the
variables (which are all of fixed length, so this is O(1)), there was
no work done in this step.
Step 5: We update the current position # to 5. This
automatically updates the tree to this:
And because remainder is 2, we need to insert two final
suffixes of the current position: ab and b. This is basically because:
The a suffix from the previous step has never been properly
inserted. So it has remained, and since we have progressed one
step, it has now grown from a to ab.
And we need to insert the new final edge b.
In practice this means that we go to the active point (which points to
behind the a on what is now the abcab edge), and insert the
current final character b. But: Again, it turns out that b is
also already present on that same edge.
So, again, we do not change the tree. We simply:
Update the active point to (root,'a',2) (same node and edge
as before, but now we point to behind the b)
Increment the remainder to 3 because we still have not properly
inserted the final edge from the previous step, and we don't insert
the current final edge either.
To be clear: We had to insert ab and b in the current step, but
because ab was already found, we updated the active point and did
not even attempt to insert b. Why? Because if ab is in the tree,
every suffix of it (including b) must be in the tree,
too. Perhaps only implicitly, but it must be there, because of the
way we have built the tree so far.
We proceed to step 6 by incrementing #. The tree is
automatically updated to:
Because remainder is 3, we have to insert abx, bx and
x. The active point tells us where ab ends, so we only need to
jump there and insert the x. Indeed, x is not there yet, so we
split the abcabx edge and insert an internal node:
The edge representations are still pointers into the text, so
splitting and inserting an internal node can be done in O(1) time.
So we have dealt with abx and decrement remainder to 2. Now we
need to insert the next remaining suffix, bx. But before we do that
we need to update the active point. The rule for this, after splitting
and inserting an edge, will be called Rule 1 below, and it applies whenever the
active_node is root (we will learn rule 3 for other cases further
below). Here is rule 1:
After an insertion from root,
active_node remains root
active_edge is set to the first character of the new suffix we
need to insert, i.e. b
active_length is reduced by 1
Hence, the new active-point triple (root,'b',1) indicates that the
next insert has to be made at the bcabx edge, behind 1 character,
i.e. behind b. We can identify the insertion point in O(1) time and
check whether x is already present or not. If it was present, we
would end the current step and leave everything the way it is. But x
is not present, so we insert it by splitting the edge:
Again, this took O(1) time and we update remainder to 1 and the
active point to (root,'x',0) as rule 1 states.
But there is one more thing we need to do. We'll call this Rule 2:
If we split an edge and insert a new node, and if that is not the
first node created during the current step, we connect the previously
inserted node and the new node through a special pointer, a suffix
link. We will later see why that is useful. Here is what we get, the
suffix link is represented as a dotted edge:
We still need to insert the final suffix of the current step,
x. Since the active_length component of the active point has fallen
to 0, the final insert is made at the root directly. Since there is no
outgoing edge at the root node starting with x, we insert a new
edge:
As we can see, in the current step all remaining inserts were made.
We proceed to step 7 by setting #=7, which automatically appends the next character,
a, to all leaf edges, as always. Then we attempt to insert the new final
character to the active point (the root), and find that it is there
already. So we end the current step without inserting anything and
update the active point to (root,'a',1).
In step 8, #=8, we append b, and as seen before, this only
means we update the active point to (root,'a',2) and increment remainder without doing
anything else, because b is already present. However, we notice (in O(1) time) that the active point
is now at the end of an edge. We reflect this by re-setting it to
(node1,'\0x',0). Here, I use node1 to refer to the
internal node the ab edge ends at.
Then, in step #=9, we need to insert 'c' and this will help us to
understand the final trick:
Second extension: Using suffix links
As always, the # update appends c automatically to the leaf edges
and we go to the active point to see if we can insert 'c'. It turns
out 'c' exists already at that edge, so we set the active point to
(node1,'c',1), increment remainder and do nothing else.
Now in step #=10, remainder is 4, and so we first need to insert
abcd (which remains from 3 steps ago) by inserting d at the active
point.
Attempting to insert d at the active point causes an edge split in
O(1) time:
The active_node, from which the split was initiated, is marked in
red above. Here is the final rule, Rule 3:
After splitting an edge from an active_node that is not the root
node, we follow the suffix link going out of that node, if there is
any, and reset the active_node to the node it points to. If there is
no suffix link, we set the active_node to the root. active_edge
and active_length remain unchanged.
So the active point is now (node2,'c',1), and node2 is marked in
red below:
Since the insertion of abcd is complete, we decrement remainder to
3 and consider the next remaining suffix of the current step,
bcd. Rule 3 has set the active point to just the right node and edge
so inserting bcd can be done by simply inserting its final character
d at the active point.
Doing this causes another edge split, and because of rule 2, we
must create a suffix link from the previously inserted node to the new
one:
We observe: Suffix links enable us to reset the active point so we
can make the next remaining insert at O(1) effort. Look at the
graph above to confirm that indeed node at label ab is linked to
the node at b (its suffix), and the node at abc is linked to
bc.
The current step is not finished yet. remainder is now 2, and we
need to follow rule 3 to reset the active point again. Since the
current active_node (red above) has no suffix link, we reset to
root. The active point is now (root,'c',1).
Hence the next insert occurs at the one outgoing edge of the root node
whose label starts with c: cabxabcd, behind the first character,
i.e. behind c. This causes another split:
And since this involves the creation of a new internal node, we follow
rule 2 and set a new suffix link from the previously created internal
node:
(I am using Graphviz Dot for these little
graphs. The new suffix link caused dot to re-arrange the existing
edges, so check carefully to confirm that the only thing that was
inserted above is a new suffix link.)
With this, remainder can be set to 1 and since the active_node is
root, we use rule 1 to update the active point to (root,'d',0). This
means the final insert of the current step is to insert a single d
at root:
That was the final step and we are done. There are a number of final
observations, though:
In each step we move # forward by 1 position. This automatically
updates all leaf nodes in O(1) time.
But it does not deal with a) any suffixes remaining from previous
steps, and b) with the one final character of the current step.
remainder tells us how many additional inserts we need to
make. These inserts correspond one-to-one to the final suffixes of
the string that ends at the current position #. We consider one
after the other and make the insert. Important: Each insert is
done in O(1) time since the active point tells us exactly where to
go, and we need to add only one single character at the active
point. Why? Because the other characters are contained implicitly
(otherwise the active point would not be where it is).
After each such insert, we decrement remainder and follow the
suffix link if there is any. If not we go to root (rule 3). If we
are at root already, we modify the active point using rule 1. In
any case, it takes only O(1) time.
If, during one of these inserts, we find that the character we want
to insert is already there, we don't do anything and end the
current step, even if remainder>0. The reason is that any
inserts that remain will be suffixes of the one we just tried to
make. Hence they are all implicit in the current tree. The fact
that remainder>0 makes sure we deal with the remaining suffixes
later.
What if at the end of the algorithm remainder>0? This will be the
case whenever the end of the text is a substring that occurred
somewhere before. In that case we must append one extra character
at the end of the string that has not occurred before. In the
literature, usually the dollar sign $ is used as a symbol for
that. Why does that matter? --> If later we use the completed suffix tree to search for suffixes, we must accept matches only if they end at a leaf. Otherwise we would get a lot of spurious matches, because there are many strings implicitly contained in the tree that are not actual suffixes of the main string. Forcing remainder to be 0 at the end is essentially a way to ensure that all suffixes end at a leaf node. However, if we want to use the tree to search for general substrings, not only suffixes of the main string, this final step is indeed not required, as suggested by the OP's comment below.
So what is the complexity of the entire algorithm? If the text is n
characters in length, there are obviously n steps (or n+1 if we add
the dollar sign). In each step we either do nothing (other than
updating the variables), or we make remainder inserts, each taking O(1)
time. Since remainder indicates how many times we have done nothing
in previous steps, and is decremented for every insert that we make
now, the total number of times we do something is exactly n (or
n+1). Hence, the total complexity is O(n).
However, there is one small thing that I did not properly explain:
It can happen that we follow a suffix link, update the active
point, and then find that its active_length component does not
work well with the new active_node. For example, consider a situation
like this:
(The dashed lines indicate the rest of the tree. The dotted line is a
suffix link.)
Now let the active point be (red,'d',3), so it points to the place
behind the f on the defg edge. Now assume we made the necessary
updates and now follow the suffix link to update the active point
according to rule 3. The new active point is (green,'d',3). However,
the d-edge going out of the green node is de, so it has only 2
characters. In order to find the correct active point, we obviously
need to follow that edge to the blue node and reset to (blue,'f',1).
In a particularly bad case, the active_length could be as large as
remainder, which can be as large as n. And it might very well happen
that to find the correct active point, we need not only jump over one
internal node, but perhaps many, up to n in the worst case. Does that
mean the algorithm has a hidden O(n^2) complexity, because
in each step remainder is generally O(n), and the post-adjustments
to the active node after following a suffix link could be O(n), too?
No. The reason is that if indeed we have to adjust the active point
(e.g. from green to blue as above), that brings us to a new node that
has its own suffix link, and active_length will be reduced. As
we follow the chain of suffix links to make the remaining inserts, active_length can only
decrease, and the number of active-point adjustments we can make on
the way can't be larger than active_length at any given time. Since
active_length can never be larger than remainder, and remainder
is O(n) not only in every single step, but the total sum of increments
ever made to remainder over the course of the entire process is
O(n) too, the number of active point adjustments is also bounded by
O(n).
I tried to implement the suffix tree with the approach given in jogojapan's answer, but it didn't work for some cases due to the wording used for the rules. Moreover, I noticed that nobody seems to have managed to implement an absolutely correct suffix tree using this approach. Below I will write an "overview" of jogojapan's answer, with some modifications to the rules. I will also describe the case when we forget to create important suffix links.
Additional variables used
active point - a triple (active_node; active_edge; active_length), showing from where we must start inserting a new suffix.
remainder - shows the number of suffixes we must add explicitly. For instance, if our word is 'abcaabca', and remainder = 3, it means we must process 3 last suffixes: bca, ca and a.
Let's use the concept of an internal node: all the nodes except the root and the leaves are internal nodes.
Observation 1
When the final suffix we need to insert is found to exist in the tree already, the tree itself is not changed at all (we only update the active point and remainder).
Observation 2
If at some point active_length is greater than or equal to the length of the current edge (edge_length), we move our active point down until edge_length is strictly greater than active_length.
Now, let's redefine the rules:
Rule 1
If after an insertion from the active node = root, the active length is greater than 0, then:
active node is not changed
active length is decremented
active edge is shifted right (to the first character of the next suffix we must insert)
Rule 2
If we create a new internal node OR make an insertion from an internal node, and this is not the first SUCH internal node at the current step, then we link the previous SUCH node with THIS one through a suffix link.
This definition of Rule 2 is different from jogojapan's, as here we take into account not only the newly created internal nodes, but also the internal nodes from which we make an insertion.
Rule 3
After an insert from the active node which is not the root node, we must follow the suffix link and set the active node to the node it points to. If there is no suffix link, set the active node to the root node. Either way, the active edge and active length stay unchanged.
In this definition of Rule 3 we also consider the inserts of leaf nodes (not only split-nodes).
And finally, Observation 3:
When the symbol we want to add to the tree is already on the edge, we, according to Observation 1, update only the active point and remainder, leaving the tree unchanged. BUT if there is an internal node marked as needing a suffix link, we must connect that node with our current active node through a suffix link.
Let's look at the example of a suffix tree for cdddcdc if we add a suffix link in such case and if we don't:
If we DON'T connect the nodes through a suffix link:
before adding the last letter c:
after adding the last letter c:
If we DO connect the nodes through a suffix link:
before adding the last letter c:
after adding the last letter c:
Seems like there is no significant difference: in the second case there are two more suffix links. But these suffix links are correct, and one of them - from the blue node to the red one - is very important for our approach with the active point. The problem is that if we don't put a suffix link here, later, when we add some new letters to the tree, we might omit adding some nodes to the tree due to Rule 3, because, according to it, if there's no suffix link, then we must put the active_node to the root.
When we were adding the last letter to the tree, the red node had already existed before we made an insert from the blue node (the edge labeled 'c'). As there was an insert from the blue node, we mark it as needing a suffix link. Then, relying on the active point approach, the active node was set to the red node. But we don't make an insert from the red node, as the letter 'c' is already on the edge. Does it mean that the blue node must be left without a suffix link? No, we must connect the blue node with the red one through a suffix link. Why is it correct? Because the active point approach guarantees that we get to the right place, i.e., to the next place where we must process an insert of a shorter suffix.
Finally, here are my implementations of the Suffix Tree:
Java
C++
Hope that this "overview" combined with jogojapan's detailed answer will help somebody implement their own Suffix Tree.
Apologies if my answer seems redundant, but I implemented Ukkonen's algorithm recently, and found myself struggling with it for days; I had to read through multiple papers on the subject to understand the why and how of some core aspects of the algorithm.
I found the 'rules' approach of previous answers unhelpful for understanding the underlying reasons, so I've written everything below focusing solely on the pragmatics. If you've struggled with following other explanations, just like I did, perhaps my supplemental explanation will make it 'click' for you.
I published my C# implementation here: https://github.com/baratgabor/SuffixTree
Please note that I'm not an expert on this subject, so the following sections may contain inaccuracies (or worse). If you encounter any, feel free to edit.
Prerequisites
The starting point of the following explanation assumes you're familiar with the content and use of suffix trees, and the characteristics of Ukkonen's algorithm, e.g. how you're extending the suffix tree character by character, from start to end. Basically, I assume you've read some of the other explanations already.
(However, I did have to add some basic narrative for the flow, so the beginning might indeed feel redundant.)
The most interesting part is the explanation on the difference between using suffix links and rescanning from the root. This is what gave me a lot of bugs and headaches in my implementation.
Open-ended leaf nodes and their limitations
I'm sure you already know that the most fundamental 'trick' is to realize we can just leave the end of the suffixes 'open', i.e. referencing the current length of the string instead of setting the end to a static value. This way when we add additional characters, those characters will be implicitly added to all suffix labels, without having to visit and update all of them.
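As a sketch of this trick (my own illustration, not code from the repository above): all leaf edges can share a single mutable end index, so appending one character extends every leaf label at once, in O(1).
// All leaves share one mutable end index.
class OpenEnd {
    int value = 0;                       // current length of the string
}

class LeafEdge {
    final int start;                     // fixed start index into the text
    final OpenEnd end;                   // shared "current end of string" marker
    LeafEdge(int start, OpenEnd end) { this.start = start; this.end = end; }
    int length() { return end.value - start; }
}
// Per appended character i, a single update grows all leaves: end.value = i + 1;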
But this open ending of suffixes – for obvious reasons – works only for nodes that represent the end of the string, i.e. the leaf nodes in the tree structure. The branching operations we execute on the tree (the addition of new branch nodes and leaf nodes) won't propagate automatically everywhere they need to.
It's probably elementary, and wouldn't require mention, that repeated substrings don't appear explicitly in the tree, since the tree already contains these by virtue of them being repetitions; however, when the repetitive substring ends by encountering a non-repeating character, we need to create a branching at that point to represent the divergence from that point onwards.
For example in case of the string 'ABCXABCY' (see below), a branching to X and Y needs to be added to three different suffixes, ABC, BC and C; otherwise it wouldn't be a valid suffix tree, and we couldn't find all substrings of the string by matching characters from the root downwards.
Once again, to emphasize – any operation we execute on a suffix in the tree needs to be reflected by its consecutive suffixes as well (e.g. ABC > BC > C), otherwise they simply cease to be valid suffixes.
But even if we accept that we have to do these manual updates, how do we know how many suffixes need to be updated? Since, when we add the repeated character A (and the rest of the repeated characters in succession), we have no idea yet when/where we need to split the suffix into two branches. The need to split is ascertained only when we encounter the first non-repeating character, in this case Y (instead of the X that already exists in the tree).
What we can do is to match the longest repeated string we can, and count how many of its suffixes we need to update later. This is what 'remainder' stands for.
The concept of 'remainder' and 'rescanning'
The variable remainder tells us how many repeated characters we added implicitly, without branching; i.e. how many suffixes we need to visit to repeat the branching operation once we find the first character that we cannot match. This essentially equals how many characters 'deep' we are in the tree from its root.
So, staying with the previous example of the string ABCXABCY, we match the repeated ABC part 'implicitly', incrementing remainder each time, which results in remainder of 3. Then we encounter the non-repeating character 'Y'. Here we split the previously added ABCX into ABC->X and ABC->Y. Then we decrement remainder from 3 to 2, because we already took care of the ABC branching. Now we repeat the operation by matching the last 2 characters – BC – from the root to reach the point where we need to split, and we split BCX too into BC->X and BC->Y. Again, we decrement remainder to 1, and repeat the operation; until the remainder is 0. Lastly, we need to add the current character (Y) itself to the root as well.
This operation, following the consecutive suffixes from the root simply to reach the point where we need to do an operation is what's called 'rescanning' in Ukkonen's algorithm, and typically this is the most expensive part of the algorithm. Imagine a longer string where you need to 'rescan' long substrings, across many dozens of nodes (we'll discuss this later), potentially thousands of times.
As a solution, we introduce what we call 'suffix links'.
The concept of 'suffix links'
Suffix links basically point to the positions we'd normally have to 'rescan' to, so instead of the expensive rescan operation we can simply jump to the linked position, do our work, jump to the next linked position, and repeat – until there are no more positions to update.
Of course one big question is how to add these links. The existing answer is that we can add the links when we insert new branch nodes, utilizing the fact that, in each extension of the tree, the branch nodes are naturally created one after another in the exact order we'd need to link them together. Though, we have to link from the last created branch node (the longest suffix) to the previously created one, so we need to cache the last one we create, link that to the next one we create, and cache the newly created one.
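In code, this caching is typically just a few lines. A sketch (lastCreatedNode is a hypothetical field that is reset to null at the start of each extension step):
// Called whenever a new branch node is created during one extension step:
// link the previously created node to this one, then remember this one.
void addSuffixLink(Node node) {
    if (lastCreatedNode != null) {
        lastCreatedNode.suffixLink = node;
    }
    lastCreatedNode = node;
}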
One consequence is that we actually often don't have suffix links to follow, because the given branch node was just created. In these cases we have to still fall back to the aforementioned 'rescanning' from root. This is why, after an insertion, you're instructed to either use the suffix link, or jump to root.
(Or alternatively, if you're storing parent pointers in the nodes, you can try to follow the parents, check if they have a link, and use that. I found that this is very rarely mentioned, but the suffix link usage is not set in stone. There are multiple possible approaches, and if you understand the underlying mechanism you can implement one that fits your needs best.)
The concept of 'active point'
So far we discussed multiple efficient tools for building the tree, and vaguely referred to traversing over multiple edges and nodes, but haven't yet explored the corresponding consequences and complexities.
The previously explained concept of 'remainder' is useful for keeping track where we are in the tree, but we have to realize it doesn't store enough information.
Firstly, we always reside on a specific edge of a node, so we need to store the edge information. We shall call this 'active edge'.
Secondly, even after adding the edge information, we still have no way to identify a position that is farther down in the tree, and not directly connected to the root node. So we need to store the node as well. Let's call this 'active node'.
Lastly, we can notice that the 'remainder' is inadequate to identify a position on an edge that is not directly connected to root, because 'remainder' is the length of the entire route; and we probably don't want to bother with remembering and subtracting the length of the previous edges. So we need a representation that is essentially the remainder on the current edge. This is what we call 'active length'.
This leads to what we call 'active point' – a package of three variables that contain all the information we need to maintain about our position in the tree:
Active Point = (Active Node, Active Edge, Active Length)
You can observe on the following image how the matched route of ABCABD consists of 2 characters on the edge AB (from root), plus 4 characters on the edge CABDABCABD (from node 4) – resulting in a 'remainder' of 6 characters. So, our current position can be identified as Active Node 4, Active Edge C, Active Length 4.
Another important role of the 'active point' is that it provides an abstraction layer for our algorithm, meaning that parts of our algorithm can do their work on the 'active point', irrespective of whether that active point is in the root or anywhere else. This makes it easy to implement the use of suffix links in our algorithm in a clean and straightforward way.
Differences of rescanning vs using suffix links
Now, the tricky part, something that – in my experience – can cause plenty of bugs and headaches, and is poorly explained in most sources, is the difference in processing the suffix link cases vs the rescan cases.
Consider the following example of the string 'AAAABAAAABAAC':
You can observe above how the 'remainder' of 7 corresponds to the total sum of characters from root, while 'active length' of 4 corresponds to the sum of matched characters from the active edge of the active node.
Now, after executing a branching operation at the active point, our active node might or might not contain a suffix link.
If a suffix link is present: We only need to process the 'active length' portion. The 'remainder' is irrelevant, because the node where we jump to via the suffix link already encodes the correct 'remainder' implicitly, simply by virtue of being in the tree where it is.
If a suffix link is NOT present: We need to 'rescan' from zero/root, which means processing the whole suffix from the beginning. To this end we have to use the whole 'remainder' as the basis of rescanning.
Example comparison of processing with and without a suffix link
Consider what happens at the next step of the example above. Let's compare how to achieve the same result – i.e. moving to the next suffix to process – with and without a suffix link.
Using 'suffix link'
Notice that if we use a suffix link, we are automatically 'at the right place'; although this is often not strictly true, because the 'active length' can be 'incompatible' with the new position.
In the case above, since the 'active length' is 4, we're working with the suffix 'ABAA', starting at the linked Node 4. But after finding the edge that corresponds to the first character of the suffix ('A'), we notice that our 'active length' overflows this edge by 3 characters. So we jump over the full edge, to the next node, and decrement 'active length' by the characters we consumed with the jump.
Then, after we found the next edge 'B', corresponding to the decremented suffix 'BAA', we finally note that the edge length is larger than the remaining 'active length' of 3, which means we found the right place.
Please note that it seems this operation is usually not referred to as 'rescanning', even though to me it seems it's the direct equivalent of rescanning, just with a shortened length and a non-root starting point.
Using 'rescan'
Notice that if we use a traditional 'rescan' operation (here pretending we didn't have a suffix link), we start at the top of the tree, at root, and we have to work our way down again to the right place, following along the entire length of the current suffix.
The length of this suffix is the 'remainder' we discussed before. We have to consume the entirety of this remainder, until it reaches zero. This might (and often does) include jumping through multiple nodes, at each jump decreasing the remainder by the length of the edge we jumped through. Then finally, we reach an edge that is longer than our remaining 'remainder'; here we set the active edge to the given edge, set 'active length' to remaining 'remainder', and we're done.
Note, however, that the actual 'remainder' variable needs to be preserved, and only decremented after each node insertion. So what I described above assumed using a separate variable initialized to 'remainder'.
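Putting that into code, a rescan might look like the following sketch (illustrative names, not anyone's actual implementation; 'position' is the index of the character currently being added, active_edge is an index into the word, and conventions differ on whether the just-added character is counted, i.e. some implementations start from remainder - 1 here):
void rescanFromRoot(int position) {
    int left = remainder;                 // working copy; 'remainder' itself is preserved
    int from = position - remainder + 1;  // index of the suffix's first character
    Node node = root;
    Edge edge = node.edge(word.charAt(from));
    while (edge != null && left >= edge.length()) {
        left -= edge.length();            // jump over the whole edge ("skip/count")
        from += edge.length();
        node = edge.target();
        edge = node.edge(word.charAt(from));
    }
    activeNode = node;                    // the remaining part fits on this edge
    activeEdge = from;
    activeLength = left;
}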
Notes on suffix links & rescans
1) Notice that both methods lead to the same result. Suffix link jumping is, however, significantly faster in most cases; that's the whole rationale behind suffix links.
2) The actual algorithmic implementations don't need to differ. As I mentioned above, even in the case of using the suffix link, the 'active length' is often not compatible with the linked position, since that branch of the tree might contain additional branching. So essentially you just have to use 'active length' instead of 'remainder', and execute the same rescanning logic until you find an edge that is shorter than your remaining suffix length.
3) One important remark pertaining to performance is that there is no need to check each and every character during rescanning. Due to the way a valid suffix tree is built, we can safely assume that the characters match. So you're mostly counting the lengths, and the only need for character equivalence checking arises when we jump to a new edge, since edges are identified by their first character (which is always unique in the context of a given node). This means that 'rescanning' logic is different than full string matching logic (i.e. searching for a substring in the tree).
4) The original suffix linking described here is just one of the possible approaches. For example, NJ Larsson et al. name this approach Node-Oriented Top-Down, and compare it to Node-Oriented Bottom-Up and two Edge-Oriented varieties. The different approaches have different typical and worst-case performances, requirements, limitations, etc., but it generally seems that Edge-Oriented approaches are an overall improvement over the original.
@jogojapan, you gave an awesome explanation and visualisation. But as @makagonov mentioned, it's missing some rules regarding setting suffix links. It's nicely visible when going step by step on http://brenden.github.io/ukkonen-animation/ through the word 'aabaaabb'. When you go from step 10 to step 11, there is no suffix link from node 5 to node 2, but the active point suddenly moves there.
@makagonov, since I live in the Java world I also tried to follow your implementation to grasp the suffix tree building workflow, but it was hard for me because of:
combining edges with nodes
using index pointers instead of references
break statements;
continue statements;
So I ended up with the following implementation in Java, which I hope reflects all steps in a clearer way and will reduce learning time for other Java people:
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
public class ST {
public class Node {
private final int id;
private final Map<Character, Edge> edges;
private Node slink;
public Node(final int id) {
this.id = id;
this.edges = new HashMap<>();
}
public void setSlink(final Node slink) {
this.slink = slink;
}
public Map<Character, Edge> getEdges() {
return this.edges;
}
public Node getSlink() {
return this.slink;
}
public String toString(final String word) {
return new StringBuilder()
.append("{")
.append("\"id\"")
.append(":")
.append(this.id)
.append(",")
.append("\"slink\"")
.append(":")
.append(this.slink != null ? this.slink.id : null)
.append(",")
.append("\"edges\"")
.append(":")
.append(edgesToString(word))
.append("}")
.toString();
}
private StringBuilder edgesToString(final String word) {
final StringBuilder edgesStringBuilder = new StringBuilder();
edgesStringBuilder.append("{");
for(final Map.Entry<Character, Edge> entry : this.edges.entrySet()) {
edgesStringBuilder.append("\"")
.append(entry.getKey())
.append("\"")
.append(":")
.append(entry.getValue().toString(word))
.append(",");
}
if(!this.edges.isEmpty()) {
edgesStringBuilder.deleteCharAt(edgesStringBuilder.length() - 1);
}
edgesStringBuilder.append("}");
return edgesStringBuilder;
}
public boolean contains(final String word, final String suffix) {
return !suffix.isEmpty()
&& this.edges.containsKey(suffix.charAt(0))
&& this.edges.get(suffix.charAt(0)).contains(word, suffix);
}
}
public class Edge {
private final int from;
private final int to;
private final Node next;
public Edge(final int from, final int to, final Node next) {
this.from = from;
this.to = to;
this.next = next;
}
public int getFrom() {
return this.from;
}
public int getTo() {
return this.to;
}
public Node getNext() {
return this.next;
}
public int getLength() {
return this.to - this.from;
}
public String toString(final String word) {
return new StringBuilder()
.append("{")
.append("\"content\"")
.append(":")
.append("\"")
.append(word.substring(this.from, this.to))
.append("\"")
.append(",")
.append("\"next\"")
.append(":")
.append(this.next != null ? this.next.toString(word) : null)
.append("}")
.toString();
}
public boolean contains(final String word, final String suffix) {
if(this.next == null) {
return word.substring(this.from, this.to).equals(suffix);
}
return suffix.startsWith(word.substring(this.from,
this.to)) && this.next.contains(word, suffix.substring(this.to - this.from));
}
}
public class ActivePoint {
private final Node activeNode;
private final Character activeEdgeFirstCharacter;
private final int activeLength;
public ActivePoint(final Node activeNode,
final Character activeEdgeFirstCharacter,
final int activeLength) {
this.activeNode = activeNode;
this.activeEdgeFirstCharacter = activeEdgeFirstCharacter;
this.activeLength = activeLength;
}
private Edge getActiveEdge() {
return this.activeNode.getEdges().get(this.activeEdgeFirstCharacter);
}
public boolean pointsToActiveNode() {
return this.activeLength == 0;
}
public boolean activeNodeIs(final Node node) {
return this.activeNode == node;
}
public boolean activeNodeHasEdgeStartingWith(final char character) {
return this.activeNode.getEdges().containsKey(character);
}
public boolean activeNodeHasSlink() {
return this.activeNode.getSlink() != null;
}
public boolean pointsToOnActiveEdge(final String word, final char character) {
return word.charAt(this.getActiveEdge().getFrom() + this.activeLength) == character;
}
public boolean pointsToTheEndOfActiveEdge() {
return this.getActiveEdge().getLength() == this.activeLength;
}
public boolean pointsAfterTheEndOfActiveEdge() {
return this.getActiveEdge().getLength() < this.activeLength;
}
public ActivePoint moveToEdgeStartingWithAndByOne(final char character) {
return new ActivePoint(this.activeNode, character, 1);
}
public ActivePoint moveToNextNodeOfActiveEdge() {
return new ActivePoint(this.getActiveEdge().getNext(), null, 0);
}
public ActivePoint moveToSlink() {
return new ActivePoint(this.activeNode.getSlink(),
this.activeEdgeFirstCharacter,
this.activeLength);
}
public ActivePoint moveTo(final Node node) {
return new ActivePoint(node, this.activeEdgeFirstCharacter, this.activeLength);
}
public ActivePoint moveByOneCharacter() {
return new ActivePoint(this.activeNode,
this.activeEdgeFirstCharacter,
this.activeLength + 1);
}
public ActivePoint moveToEdgeStartingWithAndByActiveLengthMinusOne(final Node node,
final char character) {
return new ActivePoint(node, character, this.activeLength - 1);
}
public ActivePoint moveToNextNodeOfActiveEdge(final String word, final int index) {
return new ActivePoint(this.getActiveEdge().getNext(),
word.charAt(index - this.activeLength + this.getActiveEdge().getLength()),
this.activeLength - this.getActiveEdge().getLength());
}
public void addEdgeToActiveNode(final char character, final Edge edge) {
this.activeNode.getEdges().put(character, edge);
}
public void splitActiveEdge(final String word,
final Node nodeToAdd,
final int index,
final char character) {
final Edge activeEdgeToSplit = this.getActiveEdge();
final Edge splittedEdge = new Edge(activeEdgeToSplit.getFrom(),
activeEdgeToSplit.getFrom() + this.activeLength,
nodeToAdd);
nodeToAdd.getEdges().put(word.charAt(activeEdgeToSplit.getFrom() + this.activeLength),
new Edge(activeEdgeToSplit.getFrom() + this.activeLength,
activeEdgeToSplit.getTo(),
activeEdgeToSplit.getNext()));
nodeToAdd.getEdges().put(character, new Edge(index, word.length(), null));
this.activeNode.getEdges().put(this.activeEdgeFirstCharacter, splittedEdge);
}
public Node setSlinkTo(final Node previouslyAddedNodeOrAddedEdgeNode,
final Node node) {
if(previouslyAddedNodeOrAddedEdgeNode != null) {
previouslyAddedNodeOrAddedEdgeNode.setSlink(node);
}
return node;
}
public Node setSlinkToActiveNode(final Node previouslyAddedNodeOrAddedEdgeNode) {
return setSlinkTo(previouslyAddedNodeOrAddedEdgeNode, this.activeNode);
}
}
private static int idGenerator;
private final String word;
private final Node root;
private ActivePoint activePoint;
private int remainder;
public ST(final String word) {
this.word = word;
this.root = new Node(idGenerator++);
this.activePoint = new ActivePoint(this.root, null, 0);
this.remainder = 0;
build();
}
private void build() {
for(int i = 0; i < this.word.length(); i++) {
add(i, this.word.charAt(i));
}
}
private void add(final int index, final char character) {
this.remainder++;
boolean characterFoundInTheTree = false;
Node previouslyAddedNodeOrAddedEdgeNode = null;
while(!characterFoundInTheTree && this.remainder > 0) {
if(this.activePoint.pointsToActiveNode()) {
if(this.activePoint.activeNodeHasEdgeStartingWith(character)) {
activeNodeHasEdgeStartingWithCharacter(character, previouslyAddedNodeOrAddedEdgeNode);
characterFoundInTheTree = true;
}
else {
if(this.activePoint.activeNodeIs(this.root)) {
rootNodeHasNotEdgeStartingWithCharacter(index, character);
}
else {
previouslyAddedNodeOrAddedEdgeNode = internalNodeHasNotEdgeStartingWithCharacter(index,
character, previouslyAddedNodeOrAddedEdgeNode);
}
}
}
else {
if(this.activePoint.pointsToOnActiveEdge(this.word, character)) {
activeEdgeHasCharacter();
characterFoundInTheTree = true;
}
else {
if(this.activePoint.activeNodeIs(this.root)) {
previouslyAddedNodeOrAddedEdgeNode = edgeFromRootNodeHasNotCharacter(index,
character,
previouslyAddedNodeOrAddedEdgeNode);
}
else {
previouslyAddedNodeOrAddedEdgeNode = edgeFromInternalNodeHasNotCharacter(index,
character,
previouslyAddedNodeOrAddedEdgeNode);
}
}
}
}
}
private void activeNodeHasEdgeStartingWithCharacter(final char character,
final Node previouslyAddedNodeOrAddedEdgeNode) {
this.activePoint.setSlinkToActiveNode(previouslyAddedNodeOrAddedEdgeNode);
this.activePoint = this.activePoint.moveToEdgeStartingWithAndByOne(character);
if(this.activePoint.pointsToTheEndOfActiveEdge()) {
this.activePoint = this.activePoint.moveToNextNodeOfActiveEdge();
}
}
private void rootNodeHasNotEdgeStartingWithCharacter(final int index, final char character) {
this.activePoint.addEdgeToActiveNode(character, new Edge(index, this.word.length(), null));
this.activePoint = this.activePoint.moveTo(this.root);
this.remainder--;
assert this.remainder == 0;
}
private Node internalNodeHasNotEdgeStartingWithCharacter(final int index,
final char character,
Node previouslyAddedNodeOrAddedEdgeNode) {
this.activePoint.addEdgeToActiveNode(character, new Edge(index, this.word.length(), null));
previouslyAddedNodeOrAddedEdgeNode = this.activePoint.setSlinkToActiveNode(previouslyAddedNodeOrAddedEdgeNode);
if(this.activePoint.activeNodeHasSlink()) {
this.activePoint = this.activePoint.moveToSlink();
}
else {
this.activePoint = this.activePoint.moveTo(this.root);
}
this.remainder--;
return previouslyAddedNodeOrAddedEdgeNode;
}
private void activeEdgeHasCharacter() {
this.activePoint = this.activePoint.moveByOneCharacter();
if(this.activePoint.pointsToTheEndOfActiveEdge()) {
this.activePoint = this.activePoint.moveToNextNodeOfActiveEdge();
}
}
private Node edgeFromRootNodeHasNotCharacter(final int index,
final char character,
Node previouslyAddedNodeOrAddedEdgeNode) {
final Node newNode = new Node(idGenerator++);
this.activePoint.splitActiveEdge(this.word, newNode, index, character);
previouslyAddedNodeOrAddedEdgeNode = this.activePoint.setSlinkTo(previouslyAddedNodeOrAddedEdgeNode, newNode);
this.activePoint = this.activePoint.moveToEdgeStartingWithAndByActiveLengthMinusOne(this.root,
this.word.charAt(index - this.remainder + 2));
this.activePoint = walkDown(index);
this.remainder--;
return previouslyAddedNodeOrAddedEdgeNode;
}
private Node edgeFromInternalNodeHasNotCharacter(final int index,
final char character,
Node previouslyAddedNodeOrAddedEdgeNode) {
final Node newNode = new Node(idGenerator++);
this.activePoint.splitActiveEdge(this.word, newNode, index, character);
previouslyAddedNodeOrAddedEdgeNode = this.activePoint.setSlinkTo(previouslyAddedNodeOrAddedEdgeNode, newNode);
if(this.activePoint.activeNodeHasSlink()) {
this.activePoint = this.activePoint.moveToSlink();
}
else {
this.activePoint = this.activePoint.moveTo(this.root);
}
this.activePoint = walkDown(index);
this.remainder--;
return previouslyAddedNodeOrAddedEdgeNode;
}
private ActivePoint walkDown(final int index) {
while(!this.activePoint.pointsToActiveNode()
&& (this.activePoint.pointsToTheEndOfActiveEdge() || this.activePoint.pointsAfterTheEndOfActiveEdge())) {
if(this.activePoint.pointsAfterTheEndOfActiveEdge()) {
this.activePoint = this.activePoint.moveToNextNodeOfActiveEdge(this.word, index);
}
else {
this.activePoint = this.activePoint.moveToNextNodeOfActiveEdge();
}
}
return this.activePoint;
}
public String toString(final String word) {
return this.root.toString(word);
}
public boolean contains(final String suffix) {
return this.root.contains(this.word, suffix);
}
public static void main(final String[] args) {
final String[] words = {
"abcabcabc$",
"abc$",
"abcabxabcd$",
"abcabxabda$",
"abcabxad$",
"aabaaabb$",
"aababcabcd$",
"ababcabcd$",
"abccba$",
"mississipi$",
"abacabadabacabae$",
"abcabcd$",
"00132220$"
};
Arrays.stream(words).forEach(word -> {
System.out.println("Building suffix tree for word: " + word);
final ST suffixTree = new ST(word);
System.out.println("Suffix tree: " + suffixTree.toString(word));
for(int i = 0; i < word.length() - 1; i++) {
assert suffixTree.contains(word.substring(i)) : word.substring(i);
}
});
}
}
Thanks for the well explained tutorial by @jogojapan; I implemented the algorithm in Python.
A couple of minor problems mentioned by @jogojapan turned out to be more sophisticated than I had expected, and need to be treated very carefully. It cost me several days to get my implementation robust enough (I suppose). Problems and solutions are listed below:
End with Remainder > 0: It turns out this situation can also happen during the unfolding step, not just at the end of the entire algorithm. When that happens, we can leave the remainder, actnode, actedge, and actlength unchanged, end the current unfolding step, and start another step by either continuing to fold or unfold, depending on whether the next char in the original string is on the current path or not.
Leap Over Nodes: When we follow a suffix link and update the active point, we may find that its active_length component does not work well with the new active_node. We have to move forward to the right place to split, or to insert a leaf. This process might not be that straightforward, because during the moving the actlength and actedge keep changing all the way; when you have to move back to the root node, the actedge and actlength could be wrong because of those moves. We need additional variable(s) to keep that information.
The other two problems have somehow been pointed out by @makagonov:
Split Could Degenerate: When trying to split an edge, sometimes you'll find the split operation is right on a node. In that case we only need to add a new leaf to that node, and take it as a standard edge split operation, which means that the suffix links, if there are any, should be maintained correspondingly.
Hidden Suffix Links: There is another special case which is incurred by problem 1 and problem 2. Sometimes we need to hop over several nodes to the right point for the split; we might surpass the right point if we move by comparing the remainder string and the path labels. In that case the suffix link will be neglected unintentionally, if there should be one. This can be avoided by remembering the right point when moving forward. The suffix link should be maintained if the split node already exists, or even if problem 1 happens during an unfolding step.
Finally, my implementation in Python is as follows:
Python
Tips: It includes a naive tree printing function in the code above, which is very important while debugging. It saved me a lot of time and is convenient for locating special cases.
My intuition is as follows:
After k iterations of the main loop you have constructed a suffix tree which contains all suffixes of the complete string that start in the first k characters.
At the start, this means the suffix tree contains a single root node that represents the entire string (this is the only suffix that starts at 0).
After len(string) iterations you have a suffix tree that contains all suffixes.
During the loop the key is the active point. My guess is that this represents the deepest point in the suffix tree that corresponds to a proper suffix of the first k characters of the string. (I think proper means that the suffix cannot be the entire string.)
For example, suppose you have seen characters 'abcabc'. The active point would represent the point in the tree corresponding to the suffix 'abc'.
The active point is represented by (origin,first,last).
This means that you are currently at the point in the tree that you get to by starting at node origin and then feeding in the characters in string[first:last].
When you add a new character you look to see whether the active point is still in the existing tree. If it is then you are done.
Otherwise you need to add a new node to the suffix tree at the active point, fallback to the next shortest match, and check again.
Note 1:
The suffix pointers give a link to the next shortest match for each node.
Note 2:
When you add a new node and fallback you add a new suffix pointer for the new node.
The destination for this suffix pointer will be the node at the shortened active point.
This node will either already exist, or be created on the next iteration of this fallback loop.
Note 3: The canonization part simply saves time in checking the active point.
For example, suppose you always used origin=0, and just changed first and last.
To check the active point you would have to follow the suffix tree each time along all the intermediate nodes.
It makes sense to cache the result of following this path by recording just the distance from the last node.
Health warning: I also found this algorithm particularly hard to understand so please realise that this intuition is likely to be incorrect in all important details...
Hi, I have tried to implement the above explained implementation in Ruby; please check it out.
It seems to work fine.
The only difference in the implementation is that I have tried to use the edge object instead of just using symbols.
It's also present at https://gist.github.com/suchitpuri/9304856
require 'pry'
class Edge
  attr_accessor :data, :edges, :suffix_link

  def initialize data
    @data = data
    @edges = []
    @suffix_link = nil
  end

  def find_edge element
    self.edges.each do |edge|
      return edge if edge.data.start_with? element
    end
    return nil
  end
end

class SuffixTrees
  attr_accessor :root, :active_point, :remainder, :pending_prefixes, :last_split_edge

  def initialize
    @root = Edge.new nil
    @active_point = { active_node: @root, active_edge: nil, active_length: 0 }
    @pending_prefixes = []
    @last_split_edge = nil
    @remainder = 1
  end

  def build string
    string.split("").each_with_index do |element, index|
      add_to_edges @root, element
      update_pending_prefix element
      add_pending_elements_to_tree element
      active_length = @active_point[:active_length]
      # if @active_point[:active_edge] && @active_point[:active_edge].data && @active_point[:active_edge].data[0..active_length - 1] == @active_point[:active_edge].data[active_length..@active_point[:active_edge].data.length - 1]
      #   @active_point[:active_edge].data = @active_point[:active_edge].data[0..active_length - 1]
      #   @active_point[:active_edge].edges << Edge.new(@active_point[:active_edge].data)
      # end
      if @active_point[:active_edge] && @active_point[:active_edge].data && @active_point[:active_edge].data.length == @active_point[:active_length]
        @active_point[:active_node] = @active_point[:active_edge]
        @active_point[:active_edge] = @active_point[:active_node].find_edge(element[0])
        @active_point[:active_length] = 0
      end
    end
  end

  def add_pending_elements_to_tree element
    # binding.pry
    if @active_point[:active_node].find_edge(element[0]) != nil
      @active_point[:active_length] = @active_point[:active_length] + 1
      @active_point[:active_edge] = @active_point[:active_node].find_edge(element[0]) if @active_point[:active_edge] == nil
      @remainder = @remainder + 1
      return
    end

    @pending_prefixes.each_with_index do |pending_prefix, index|
      # binding.pry
      if @active_point[:active_edge] == nil and @active_point[:active_node].find_edge(element[0]) == nil
        @active_point[:active_node].edges << Edge.new(element)
      else
        @active_point[:active_edge] = @active_point[:active_node].find_edge(element[0]) if @active_point[:active_edge] == nil
        data = @active_point[:active_edge].data
        data = data.split("")
        location = @active_point[:active_length]
        # binding.pry
        if data[0..location].join == pending_prefix or @active_point[:active_node].find_edge(element) != nil
        else # tree split
          split_edge data, index, element
        end
      end
    end
  end

  def update_pending_prefix element
    if @active_point[:active_edge] == nil
      @pending_prefixes = [element]
      return
    end
    @pending_prefixes = []
    data = @active_point[:active_edge].data
    @remainder.times do |ctr|
      @pending_prefixes << data[-(ctr + 1)..data.length - 1]
    end
    @pending_prefixes.reverse!
  end

  def split_edge data, index, element
    location = @active_point[:active_length]
    old_edges = []
    internal_node = (@active_point[:active_edge].edges != nil)
    if internal_node
      old_edges = @active_point[:active_edge].edges
      @active_point[:active_edge].edges = []
    end
    @active_point[:active_edge].data = data[0..location - 1].join
    @active_point[:active_edge].edges << Edge.new(data[location..data.size].join)
    if internal_node
      @active_point[:active_edge].edges << Edge.new(element)
    else
      @active_point[:active_edge].edges << Edge.new(data.last)
    end
    if internal_node
      @active_point[:active_edge].edges[0].edges = old_edges
    end
    # set up the suffix link
    if @last_split_edge != nil and @last_split_edge.data.end_with? @active_point[:active_edge].data
      @last_split_edge.suffix_link = @active_point[:active_edge]
    end
    @last_split_edge = @active_point[:active_edge]
    update_active_point index
  end

  def update_active_point index
    if @active_point[:active_node] == @root
      @active_point[:active_length] = @active_point[:active_length] - 1
      @remainder = @remainder - 1
      @active_point[:active_edge] = @active_point[:active_node].find_edge(@pending_prefixes.first[index + 1])
    else
      if @active_point[:active_node].suffix_link != nil
        @active_point[:active_node] = @active_point[:active_node].suffix_link
      else
        @active_point[:active_node] = @root
      end
      @active_point[:active_edge] = @active_point[:active_node].find_edge(@active_point[:active_edge].data[0])
      @remainder = @remainder - 1
    end
  end

  def add_to_edges root, element
    return if root == nil
    root.data = root.data + element if root.data and root.edges.size == 0
    root.edges.each do |edge|
      add_to_edges edge, element
    end
  end
end

suffix_tree = SuffixTrees.new
suffix_tree.build("abcabxabcd")
binding.pry

Regular expression for strings with even number of a's and odd no of b's

I'm having a problem in solving the following:
It's an assignment; I solved it, but my solution seems to be too long and vague. Can anybody help me, please?
Regular expression for the strings with an even number of a's and an odd number of b's, where the character set = {a, b}.
One way to do this is to pass it through two regular expressions making sure they both match (assuming you want to use regular expressions at all, see below for an alternative):
^b*(ab*ab*)*$
^a*ba*(ba*ba*)*$
Anything else (and, in fact, even that) is most likely just an attempt to be clever, one that's generally a massive failure.
The first regular expression ensures there is an even number of a's, with b's anywhere in the mix (before, after and in between).
The second is similar but ensures that there's an odd number of b's by virtue of the starting a*ba*.
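For instance, in Java (my own illustration; matches() in java.util.regex already anchors to the whole string, so the ^ and $ can be dropped):
import java.util.regex.Pattern;

public class EvenAOddB {
    private static final Pattern EVEN_A = Pattern.compile("b*(ab*ab*)*");
    private static final Pattern ODD_B  = Pattern.compile("a*ba*(ba*ba*)*");

    static boolean isValid(String s) {
        // the string qualifies only if BOTH patterns match it entirely
        return EVEN_A.matcher(s).matches() && ODD_B.matcher(s).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValid("aab"));  // true: two a's, one b
        System.out.println(isValid("ab"));   // false: one a only
    }
}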
A far better way to do it is to ignore regular expressions altogether and simply run through the string as follows:
def isValid(s):
    evenA = True    # True while the number of a's seen so far is even
    oddB = False    # True while the number of b's seen so far is odd
    for c in s:
        if c == 'a':
            evenA = not evenA
        elif c == 'b':
            oddB = not oddB
        else:
            return False
    return evenA and oddB
Though regular expressions are a wonderful tool, they're not suited for everything and they become far less useful as their readability and maintainability degrades.
For what it's worth, a single-regex answer is:
(aa|bb|(ab|ba)(aa|bb)*(ba|ab))*(b|(ab|ba)(bb|aa)*a)
but, if I caught anyone on my team actually using a monstrosity like that, they'd be sent back to do it again.
This comes from a paper by one Greg Bacon. See here for the actual inner workings.
Even-Even = (aa+bb+(ab+ba)(aa+bb)*(ab+ba))*
(Even-Even has an even number of a's and an even number of b's)
Even a's and odd b's = Even-Even b Even-Even
This should work.
This regular expression takes all strings with an even number of a's and an even number of b's:
r1 = ((ab+ba)(aa+bb)*(ab+ba)+(aa+bb))*
Now, to get the regular expression for even a's and odd b's:
r2 = (b+a(aa+bb)*(ab+ba))((ab+ba)(aa+bb)*(ab+ba)+(aa+bb))*
(bb)*a(aa)*ab(bb)*
ab(bb)* a(aa)*
b(aa)*(bb)*
...
There can be many such regular expressions. Do you have any other condition, like "starting with a" or something of the kind (other than odd b's and even a's)?
For an even number of a's and an even number of b's, we have the regex:
E = { (ab + ba) (aa+bb)* (ab+ba) }*
For an even number of a's and an odd number of b's, all we need to do is add an extra b to the above expression E.
The required regex will be:
E = { ((ab + ba) (aa+bb)* (ab+ba))* b ((ab + ba) (aa+bb)* (ab+ba))* }
I would do as follows:
regex even matches the symbol a, then a sequence of b's, then the symbol a again, then another sequence of b's, such that there is an even number of b's:
even -> (a (bb)* a (bb)* | a b (bb)* a b (bb)*)
regex odd does the same with an odd total number of b's:
odd -> (a b (bb)* a (bb)* | a (bb)* a b (bb)*)
A string with an even number of a's and an odd number of b's either:
starts with an odd number of b's, and is followed by an even number of odd patterns amongst even patterns;
or starts with an even number of b's, and is followed by an odd number of odd patterns amongst even patterns.
Note that even has no effect on the evenness/oddness of the number of a's and b's in the string.
regex ->
(
b (bb)* even* (odd even* odd)* even*
|
(bb)* even* odd even* (odd even* odd)* even*
)
Of course one can replace every occurence of even and odd in the final regex to get a single regex.
It is easy to see that a string satisfying this regex will indeed have an even number of a's (as symbol a occurs only in the even and odd subregexes, and each of these uses exactly two a's) and an odd number of b's (first case: 1 b + an even number of b's + an even number of odd; second case: an even number of b's + an odd number of odd).
A string with an even number of a's and an odd number of b's will satisfy this regex as it starts with zero or more b's, then is followed by [one a, zero or more b's, one more a and zero or more b's], zero or more times.
A high-level piece of advice: construct a deterministic finite automaton for the language---very easy, encode the parity of the number of a's and b's in the states, with q0 encoding an even number of a's and an even number of b's, and transition accordingly---and then convert the DFA into a regular expression (either by using well-known algorithms for this or "from scratch").
The idea here is to exploit the well-understood equivalence between the DFA (an algorithmic description of regular languages) and the regular expressions (an algebraic description of regular languages).
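For illustration, here is a sketch of that DFA in Java (my own; the four states pack the two parities into two bits):
// State encodes (parity of a's, parity of b's): bit 1 = odd a's, bit 0 = odd b's.
// Start state 0 = (even, even); accepting state 1 = (even a's, odd b's).
static boolean accepts(String s) {
    int state = 0;
    for (char c : s.toCharArray()) {
        if (c == 'a')      state ^= 2;   // flip the a-parity bit
        else if (c == 'b') state ^= 1;   // flip the b-parity bit
        else return false;               // input not over {a, b}
    }
    return state == 1;
}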
The regular expression is given below:
(aa|bb)*((ab|ba)(aa|bb)*(ab|ba)(aa|bb)*b)*
The structured way to do it is to make a transition diagram and build the regular expression from it.
The regex in this case will be
(a((b(aa)*b)*a+b(aa)*ab)+b((a(bb)*a)*b+a(bb)*ba))b(a(bb)*a)*
It looks complicated but it covers all possible cases that may arise.
The answer is (aa+ab+ba+bb)* b (aa+ab+ba+bb)*
(bb)* b (aa)* + (aa)* b (bb)*
This is the answer which handles all kinds of strings with odd b's and even a's.
If it is an even number of a's followed by an odd number of b's:
(aa)*b(bb)* should work
If it is in any order:
(aa)*b(bb)* + b(bb)*(aa)* should work
All strings that have an even number of a's and an odd number of b's:
(((aa+bb)*b(aa+bb)*) + (A + ((a+b)b(a+b))*))*
Here A stands for the null string; A can be neglected.
If there is any error, please point it out.
