Scala string manipulation

I have the following Scala code :
val res = for {
  i <- 0 to 3
  j <- 0 to 3
  if (similarity(z(i), z(j)) < threshold) && (i <= j)
} yield z(j)
Here z is an Array[String] and similarity(z(i), z(j)) calculates the similarity between two strings.
The idea is that similarity is calculated between the 1st string and all the other strings, then between the 2nd string and all the strings after it, then for the 3rd string, and so on.
My requirement is that if the 1st string matches the 3rd, 4th and 8th strings, then
all these 3 strings shouldn't participate in further loops; the loop should jump to the 2nd string, then the 5th string, the 6th string, and so on.
I am stuck at this step and don't know how to proceed further.

I am presuming that your intent is to keep the first String of two similar Strings (e.g. if the 1st String is too similar to the 3rd, 4th, and 8th Strings, keep only the 1st String out of these similar strings).
I have a couple of ways to do this. They both work, in a sense, in reverse: for each String, if it is too similar to any later String, that String is filtered out (not the later Strings). If you first reverse the input data before applying this process, you get the desired outcome (although in the first solution below the resulting list is itself reversed, so just reverse it again if order is important):
1st way (likely easier to understand):
def filterStrings(z: Array[String]) = {
  val revz = z.reverse
  val filtered = for {
    i <- revz.indices
    if !revz.drop(i + 1).exists(zz => similarity(zz, revz(i)) < threshold)
  } yield revz(i)
  filtered.reverse // re-reverses output if order is important
}
The 'drop' call is to ensure that each String is only checked against later Strings.
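To make the behaviour concrete, here is a hypothetical usage. The stub similarity (absolute length difference) and the threshold are placeholders - the question defines neither - and note that this answer's convention is that a value below threshold means "too similar":
def similarity(a: String, b: String): Int = (a.length - b.length).abs // stub: smaller = more alike
val threshold = 2

filterStrings(Array("abcd", "xyz", "abce", "lmnopq"))
// Vector(abcd, lmnopq): the first of the mutually similar "abcd"/"xyz"/"abce" is kept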
2nd option (fully functional, but harder to follow):
val filtered = z.reverse.foldLeft((List.empty[String], z.reverse)) { case ((acc, zt), zz) =>
  (if (zt.tail.exists(tt => similarity(tt, zz) < threshold)) acc else zz :: acc, zt.tail)
}._1
I'll try to explain what is going on here (in case you - or any readers - aren't used to following folds):
This uses a fold over the reversed input data, starting from an empty List (to accumulate results) and the (reverse of the) remaining input data (to compare against - I labeled it zt for "z-tail").
The fold then cycles through the data, checking each entry against the tail of the remaining data (so it doesn't get compared to itself or any earlier entry).
If there is a match, just the existing accumulator (labelled acc) will be allowed through, otherwise, add the current entry (zz) to the accumulator. This updated accumulator is paired with the tail of the "remaining" Strings (zt.tail), to ensure a reducing set to compare against.
Finally, we end up with a pair of lists: the required remaining Strings, and an empty list (no Strings left to compare against), so we take the first of these as our result.

If I understand correctly, you want to loop through the elements of the array, comparing each element to later elements, and removing ones that are too similar as you go.
You can't (easily) do this within a simple loop. You'd need to keep track of which items had been filtered out, which would require another array of booleans, which you update and test against as you go. It's not a bad approach and is efficient, but it's not pretty or functional.
So you need to use a recursive function, and this kind of thing is best done using an immutable data structure, so let's stick to List.
def removeSimilar(xs: List[String]): List[String] = xs match {
  case Nil => Nil
  case y :: ys => y :: removeSimilar(ys filter { x => similarity(y, x) < threshold })
}
It's a simple-recursive function. Not much to explain: if xs is empty, it returns the empty list, else it adds the head of the list to the function applied to the filtered tail.
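A hypothetical usage follows. Note that as written, the filter keeps the tail elements whose similarity to y is below threshold, i.e. it treats similarity as a score where higher means more alike; the stub metric (shared characters) and the threshold below are illustrative placeholders under that assumption:
def similarity(a: String, b: String): Int = (a.toSet intersect b.toSet).size // stub score: higher = more alike
val threshold = 2 // two or more shared characters counts as "too similar"

removeSimilar(List("abcd", "xyz", "abce", "lmnopq"))
// List(abcd, xyz, lmnopq): "abce" shares a, b, c with "abcd" and is removed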

Related

Function seems to work in constant space, but that constant space is too much

I have the following function, which counts the number of differences between two strings:
distance1 :: String -> String -> Int
distance1 list1 list2 = length . filter (uncurry (/=)) $ zip list1 list2
It works just fine and can handle lists of any size in constant space.
I was also playing around with a - let's say - low-level, recursion-based, not-so-good implementation of this function, and had the following:
distance2 :: String -> String -> Int
distance2 list1 list2 = distanceHelper 0 0
  where
    distanceHelper index result
      | index == length list1 = result
      | otherwise = distanceHelper (index + 1) (result + diff)
      where
        char1 = list1 !! index
        char2 = list2 !! index
        diff = if char1 /= char2 then 1 else 0
I know accessing a linked list by index is terrible, but here I'm not worried about time, only about space. Since it is tail recursive, I expect it to run in constant space for lists of any size.
The following is the program used to test:
main :: IO ()
main = print $ distance2 list1 list2
  where
    list1 = replicate count 'A'
    list2 = replicate count 'B'
    count = 100000000
If I run the one with distance1, then for any size (e.g. 100000000000000000) it will run for a very long time, but it will eat about 3-4 MB and do the job anyway.
If I run the test with distance2 (just with 100000000), it will immediately eat a lot of memory (about 1 GB), but then it stops consuming more and finishes the job. So it gives the impression of also running in constant space, but that constant is far too large.
I would like to understand why exactly the second version takes so much memory.
Note: just in case, I tried the second version with bang patterns, i.e. declaring the inner function as distanceHelper !index !result, but that didn't help.
I know accessing a linked list by index is terrible, but here I'm not worried about time, only about space. Since it is tail recursive, I expect it to run in constant space for lists of any size.
No, that's precisely the issue here.
If a list is generated with replicate count 'A', it can be generated lazily. If we access the first element, discard it, then the second one, discard it, and so on, the computation can be performed in constant space, since elements can be garbage collected quickly after they are discarded. This requires the consumer to be something like
consume [] = ...
consume (x:xs) = .... (consume xs) -- x was used and then discarded
If we instead use !! to access the list, the compiler can no longer discard the list elements. After all, we could later on request with !! an element we used a long time ago. Hence, the full list of count elements must be stored in memory.
Now, a very smart compiler might perform a static analysis and prove that the indices used in !! are strictly increasing, and we can indeed discard/garbage collect the prefix of the list. Most compilers are not that smart, though.
Further, length is also used here:
distanceHelper index result
| index == length list1 = result
...
length list1 will work in constant space if it can consume list1, i.e. if list1 is no longer used afterwards. This is not the case, so that will force the full list to be generated and kept in memory, using count cells. Yet another reason why we should avoid length and !!.
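As an aside, the same single-pass discipline can be expressed in Scala (this thread's main language) with Iterators, which are consumed as they are traversed and so never retain the whole sequence - a sketch:
def distance(a: Iterator[Char], b: Iterator[Char]): Int =
  a.zip(b).count { case (x, y) => x != y }

// each pair is examined once and discarded, so this runs in constant space
distance(Iterator.fill(100000000)('A'), Iterator.fill(100000000)('B'))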
To stress the point above:
let list = replicate count 'A'
in length list
should be constant space, while
let list = replicate count 'A'
in length list + length list
can not be (barring very smart optimizations), since we can not consume list for the first length call -- we need it for the second call later on.
Even more subtly,
let list () = replicate count 'A'
in length (list ()) + length (list ())
will work in constant space, since the result of function calls is not cached. Above, we generate (and consume) the list twice, and this can be done in constant space.
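Scala readers can reproduce the retention effect with LazyList, which also memoizes its elements (a sketch; the space claims are the expected behaviour under Scala 2.13's LazyList, not measurements):
def once(n: Int): Int =
  LazyList.fill(n)('A').length // nothing else references the head, so the traversed prefix can be GC'd

def twice(n: Int): Int = {
  val xs = LazyList.fill(n)('A')
  xs.length + xs.length // xs pins the head: the first traversal materialises all n memoized cells
}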

Transform a Binary Number into a Decimal without Recursion [HASKELL]

I haven't found a way to solve this. I have a list of integers, where each element of the list is a binary digit (0 or 1), and I need to design a function which transforms this list of integers into the proper decimal number.
Example:
Input: [0,1,0]
Output: 2
But there is a specific condition: it is necessary to use a list comprehension, and you can't use recursion.
The problem is that I need to know the position of each digit to apply the transform, and I can't track the position inside the list comprehension.
Thank you
The problem is that I need to know the position of each digit to apply the transform, and I can't track the position inside the list comprehension.
You can: by using zip with a range, you generate 2-tuples that carry the index, like:
[(idx, val) | (idx, val) <- zip [0..] bin]
This will produce a list of 2-tuples: the first element containing the index, and the second the element of the data at that position.
So if bin = [0,1,0], then the above list comprehension will result in:
Prelude> [(idx, val) | (idx, val) <- zip [0..] bin]
[(0,0),(1,1),(2,0)]
Since this seems to be the "core problem", I propose that you aim to solve the rest of the problem with the above strategy, or ask a question (edit this one, or ask a new one) if you encounter other problems.
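(For illustration only, since the exercise asks for Haskell: the same position-pairing idea carried through in Scala, the main language of this thread. Reversing first makes each index the bit's power of two.)
def binToDec(bits: List[Int]): Int =
  (for ((bit, idx) <- bits.reverse.zipWithIndex) yield bit << idx).sum

binToDec(List(0, 1, 0)) // 2
binToDec(List(1, 0, 0)) // 4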

Sort list of string based on length

I have a list of strings
List("cbda","xyz","jlki","badce")
I want to sort the strings in such a way that the odd length strings are sorted in descending order and even length strings are sorted in ascending order
List("abcd","zyx","ijkl","edcba")
Now I have implemented this by iterating over each element separately, finding its length, and sorting it accordingly; finally I store the results in a separate list. I was hoping to know if there is a more efficient or shorter way to do this in Scala (like the list comprehensions we have in Python)?
You can do it with sortWith and map:
list.map(s => if (s.length % 2 == 0) s.sortWith(_ < _) else s.sortWith(_ > _))
I'm not sure what you refer to in Python, so details could help if the examples below don't match your expectations.
A first one makes you go through the list twice:
List("cbda","xyz","jlki","badce").map(_.sorted).map {
case even if even.length % 2 == 0 => even
case odd => odd.reverse
}
Or a second one, which goes through the elements of odd length twice:
List("cbda","xyz","jlki","badce").map {
case even if even.length % 2 == 0 => even.sorted
case odd => odd.sorted.reverse
}
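For reference, either version produces the expected output from the question:
scala> List("cbda","xyz","jlki","badce").map {
     |   case even if even.length % 2 == 0 => even.sorted
     |   case odd => odd.sorted.reverse
     | }
res0: List[String] = List(abcd, zyx, ijkl, edcba)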

How to find maximum overlap between two strings in Scala?

Suppose I have two strings, s and t. I need to write a function f that finds the longest prefix of t which is also a suffix of s. For example:
s = "abcxyz", t = "xyz123", f(s, t) = "xyz"
s = "abcxxx", t = "xx1234", f(s, t) = "xx"
How would you write it in Scala?
This first solution is easily the most concise, and it's also more efficient than a recursive version, as it uses a lazily evaluated iteration:
s.tails.find(t.startsWith).get
Now, there has been some discussion regarding whether tails would end up copying the whole string over and over. In that case you could use toList on s, then mkString the result:
s.toList.tails.find(t.startsWith(_: List[Char])).get.mkString
For some reason the type annotation is required to get it to compile. I've not actually tried seeing which one is faster.
UPDATE - OPTIMIZATION
As som-snytt pointed out, t cannot start with any string that is longer than it, and therefore we could make the following optimization:
s.drop(s.length - t.length).tails.find(t.startsWith).get
Efficient, this is not, but it is a neat (IMO) one-liner.
val s = "abcxyz"
val t ="xyz123"
(s.tails.toSet intersect t.inits.toSet).maxBy(_.size)
//res8: String = xyz
(take all the suffixes of s that are also prefixes of t, and pick the longest)
If we only need to find the common overlapping part, then we can recursively take the tail of the first string (which should overlap with the beginning of the second string) until the remaining part is exactly what the second string begins with. This also covers the case when the strings have no overlap, because then the empty string is returned.
scala> def findOverlap(s:String, t:String):String = {
if (s == t.take(s.size)) s else findOverlap (s.tail, t)
}
findOverlap: (s: String, t: String)String
scala> findOverlap("abcxyz", "xyz123")
res3: String = xyz
scala> findOverlap("one","two")
res1: String = ""
UPDATE: It was pointed out that tail might not be implemented in the most efficient way (i.e. it creates a new String each time it is called). If that becomes an issue, then using substring(1) instead of tail (or converting both Strings to Lists, where tail/head have O(1) complexity) might give better performance. And by the same token, we can replace t.take(s.size) with t.substring(0, s.size).
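A hypothetical index-based variant in that spirit (a sketch; the name is illustrative - String.regionMatches compares the regions in place, so the only new String allocated is the final result):
def findOverlapNoCopy(s: String, t: String): String = {
  def loop(i: Int): String =
    if (t.regionMatches(0, s, i, s.length - i)) s.substring(i) // the empty suffix always matches, so this terminates
    else loop(i + 1)
  loop(0)
}

findOverlapNoCopy("abcxyz", "xyz123") // "xyz"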

Data Structure for Subsequence Queries

In a program I need to efficiently answer queries of the following form:
Given a set of strings A and a query string q return all s ∈ A such that q is a subsequence of s
For example, given A = {"abcdef", "aaaaaa", "ddca"} and q = "acd" exactly "abcdef" should be returned.
The following is what I have considered so far:
For each possible character, make a sorted list of all strings/locations where it appears. For querying, interleave the lists of the involved characters and scan through them, looking for matches within string boundaries.
This would probably be more efficient for words instead of characters, since the limited number of different characters will make the return lists very dense.
For each n-prefix q might have, store the list of all matching strings. n might realistically be close to 3. For query strings longer than that we brute force the initial list.
This might speed things up a bit, but one could easily imagine some n-subsequences being present close to all strings in A, which means worst case is the same as just brute forcing the entire set.
Do you know of any data structures, algorithms or preprocessing tricks which might be helpful for performing the above task efficiently for large A? (My strings s will be around 100 characters.)
Update: Some people have suggested using LCS to check if q is a subsequence of s. I just want to remind that this can be done using a simple function such as:
def isSub(q, s):
    i, j = 0, 0
    while i != len(q) and j != len(s):
        if q[i] == s[j]:
            i += 1
            j += 1
        else:
            j += 1
    return i == len(q)
Update 2: I've been asked to give more details on the nature of q, A and its elements. While I'd prefer something that works as generally as possible, I assume A will have length around 10^6 and will need to support insertion. The elements s will be shorter with an average length of 64. The queries q will only be 1 to 20 characters and be used for a live search, so the query "ab" will be sent just before the query "abc". Again, I'd much prefer the solution to use the above as little as possible.
Update 3: It has occurred to me that a data structure with O(n^{1-epsilon}) lookups would allow you to solve OVP / disprove the SETH conjecture. That is probably the reason for our suffering. The only options are then to disprove the conjecture, use approximation, or take advantage of the dataset. I imagine quadlets and tries would do the last in different settings.
It could be done by building an automaton. You can start with an NFA (nondeterministic finite automaton, which is like a nondeterministic directed graph) that allows edges labeled with an epsilon character, meaning that during processing you can jump from one node to another without consuming any character. I'll try to reduce your A. Let's say your A is:
A = {'ab', 'bc'}
If you build the NFA for the string ab you should get something like this:
+--(1)--+
e | a| |e
(S)--+--(2)--+--(F)
| b| |
+--(3)--+
The drawing above is not the best-looking automaton, but there are a few points to consider:
State S is the starting state and F is the final (accepting) state.
If you can be at state F, your string qualifies as a subsequence.
The rule of propagation within an automaton is that you can consume e (epsilon) to jump forward; therefore you can be in more than one state at each point in time. This is called the e-closure.
Now, given b and starting at state S, I can jump one epsilon, reach 2, then consume b and reach 3. Given the end of the string, I consume epsilon and reach F; thus b qualifies as a subsequence of ab. So do a and ab, as you can verify yourself using the automaton above.
The good thing about NFAs is that each has one start state and one final state, so two NFAs can easily be connected using epsilons. There are various algorithms that can help you convert an NFA to a DFA. A DFA is a directed graph which follows a precise path given a character -- in particular, it is always in exactly one state at any point in time. (For any NFA, there is a corresponding DFA whose states correspond to sets of states in the NFA.)
So, for A = {'ab', 'bc'}, we would need to build the NFA for ab, then the NFA for bc, then join the two NFAs and build the DFA of the entire big NFA.
EDIT
The NFA for "subsequence of abc" is a?b?c?, so you can build your NFA directly from that regular expression (diagram omitted).
Now consider the other input, acd. To query whether ab is a subsequence of {'abc', 'acd'}, you can use this NFA: (a?b?c?)|(a?c?d?). Once you have the NFA you can convert it to a DFA, where each state will record whether the input is a subsequence of abc, of acd, or of both.
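(A quick way to sanity-check the a?b?c? observation with an ordinary regex engine - a sketch in Scala, whose Regex#matches is available from 2.13:)
val subseqOfAbc = "a?b?c?".r
subseqOfAbc.matches("b")  // true
subseqOfAbc.matches("ac") // true
subseqOfAbc.matches("ca") // false: right characters, wrong order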
I used the link below to make the NFA graphic from a regular expression:
http://hackingoff.com/images/re2nfa/2013-08-04_21-56-03_-0700-nfa.svg
EDIT 2
You're right! In case you have 10,000 unique characters in A - by unique I mean A is something like {'abc', 'def'}, i.e. the intersection of each element of A is the empty set - then your DFA would be worst case in terms of states, i.e. 2^10000. But I'm not sure when that would be possible, given that there can never be 10,000 unique characters. Even if you have 10,000 characters in A there will still be repetitions, which might reduce the number of states a lot, since e-closures might eventually merge. I cannot really estimate how much it might reduce. But even with 10 million states, you will only consume less than 10 MB worth of space to construct the DFA. You can even use the NFA and find e-closures at run-time, but that would add to the run-time complexity. You can search different papers on how large regexes are converted to DFAs.
EDIT 3
For the regex (a?b?c?)|(e?d?a?)|(a?b?m?), if you convert the corresponding NFA to a DFA (diagram omitted), the DFA actually has far fewer states than the NFA.
Reference:
http://hackingoff.com/compilers/regular-expression-to-nfa-dfa
EDIT 4
After fiddling with that website more, I found that the worst case would be something like A = {'aaaa', 'bbbbb', 'cccc', ...}. But even in this case the DFA has fewer states than the NFA.
Tests
There have been four main proposals in this thread:
Shivam Kalra suggested creating an automaton based on all the strings in A. This approach has been tried slightly in the literature, normally under the name "Directed Acyclic Subsequence Graph" (DASG).
J Random Hacker suggested extending my 'prefix list' idea to all 'n choose 3' triplets in the query string, and merging them all using a heap.
In the note "Efficient Subsequence Search in Databases", Rohit Jain, Mukesh K. Mohania and Sunil Prabhakar suggest using a Trie structure with some optimizations and recursively searching the tree for the query. They also have a suggestion similar to the triplet idea.
Finally there is the 'naive' approach, which wanghq suggested optimizing by storing an index for each element of A.
To get a better idea of what's worth putting continued effort into, I have implemented the above four approaches in Python and benchmarked them on two sets of data. The implementations could all be made a couple of orders of magnitude faster with a well-done implementation in C or Java, and I haven't included the optimizations suggested for the 'trie' and 'naive' versions.
Test 1
A consists of random paths from my filesystem. The queries q are 100 random [a-z] strings of average length 7. As the alphabet is large (and Python is slow) I was only able to use duplets for method 3.
Construction times in seconds as a function of A size: (plot omitted)
Query times in seconds as a function of A size: (plot omitted)
Test 2
A consists of randomly sampled [a-b] strings of length 20. The queries q are 100 random [a-b] strings of average length 7. As the alphabet is small we can use quadlets for method 3.
Construction times in seconds as a function of A size: (plot omitted)
Query times in seconds as a function of A size: (plot omitted)
Conclusions
The double logarithmic plot is a bit hard to read, but from the data we can draw the following conclusions:
Automatons are very fast at querying (constant time), however they are impossible to create and store for |A| >= 256. It might be possible that a closer analysis could yield a better time/memory balance, or some tricks applicable for the remaining methods.
The dup-/trip-/quadlet method is about twice as fast as my trie implementation and four times as fast as the 'naive' implementation. I used only a linear number of lists in the merge, instead of the n^3 suggested by j_random_hacker. The method could possibly be tuned better, but in general it was disappointing.
My trie implementation consistently does better than the naive approach by around a factor of two. By incorporating more preprocessing (like "where are the next 'c's in this subtree") or perhaps merging it with the triplet method, it seems like today's winner.
If you can live with an order of magnitude less performance, the naive method does comparatively fine for very little cost.
As you point out, it might be that all strings in A contain q as a subsequence, in which case you can't hope to do better than O(|A|). (That said, you might still be able to do better than the time taken to run LCS on (q, A[i]) for each string i in A, but I won't focus on that here.)
TTBOMK there are no magic, fast ways to answer this question (in the way that suffix trees are the magic, fast way to answer the corresponding question involving substrings instead of subsequences). Nevertheless if you expect the set of answers for most queries to be small on average then it's worth looking at ways to speed up these queries (the ones yielding small-size answers).
I suggest filtering based on a generalisation of your heuristic (2): if some database sequence A[i] contains q as a subsequence, then it must also contain every subsequence of q. (The reverse direction is not true unfortunately!) So for some small k, e.g. 3 as you suggest, you can preprocess by building an array of lists telling you, for every length-k string s, the list of database sequences containing s as a subsequence. I.e. c[s] will contain a list of the ID numbers of database sequences containing s as a subsequence. Keep each list in numeric order to enable fast intersections later.
Now the basic idea (which we'll improve in a moment) for each query q is: find all k-sized subsequences of q, look up each in the array of lists c[], and intersect these lists to find the set of sequences in A that might possibly contain q as a subsequence. Then for each possible sequence A[i] in this (hopefully small) intersection, perform an O(n^2) LCS calculation with q to see whether it really does contain q. (A code sketch combining these steps appears after the observations below.)
A few observations:
The intersection of 2 sorted lists of size m and n can be found in O(m+n) time. To find the intersection of r lists, perform r-1 pairwise intersections in any order. Since taking intersections can only produce sets that are smaller or of the same size, time can be saved by intersecting the smallest pair of lists first, then the next smallest pair (this will necessarily include the result of the first operation), and so on. In particular: sort lists in increasing size order, then always intersect the next list with the "current" intersection.
It is actually faster to find the intersection a different way, by adding the first element (sequence number) of each of the r lists into a heap data structure, then repeatedly pulling out the minimum value and replenishing the heap with the next value from the list that the most recent minimum came from. This will produce a list of sequence numbers in nondecreasing order; any value that appears fewer than r times in a row can be discarded, since it cannot be a member of all r sets.
If a k-string s has only a few sequences in c[s], then it is in some sense discriminating. For most datasets, not all k-strings will be equally discriminating, and this can be used to our advantage. After preprocessing, consider throwing away all lists having more than some fixed number (or some fixed fraction of the total) of sequences, for 3 reasons:
They take a lot of space to store
They take a lot of time to intersect during query processing
Intersecting them will usually not shrink the overall intersection much
It is not necessary to consider every k-subsequence of q. Although this will produce the smallest intersection, it involves merging (|q| choose k) lists, and it might well be possible to produce an intersection that is nearly as small using just a fraction of these k-subsequences. E.g. you could limit yourself to trying all (or a few) k-substrings of q. As a further filter, consider just those k-subsequences whose sequence lists in c[s] are below some value. (Note: if your threshold is the same for every query, you might as well delete all such lists from the database instead, since this will have the same effect, and saves space.)
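Putting the observations together, here is a minimal sketch in Scala (assumptions: k fixed at 3, string ids appended in increasing order so posting lists stay sorted, and the question's isSub used for final verification; all names are illustrative, not from the answer):
import scala.collection.mutable

// all length-k subsequences of s (exponential in general, fine for small k)
def kSubseqs(s: String, k: Int): Set[String] =
  if (k == 0) Set("")
  else if (s.isEmpty) Set.empty
  else kSubseqs(s.tail, k - 1).map(s.head.toString + _) ++ kSubseqs(s.tail, k)

// c[s] from the answer: for each k-subsequence, the sorted ids of strings containing it
def buildIndex(a: Vector[String], k: Int): Map[String, Vector[Int]] = {
  val idx = mutable.Map.empty[String, Vector[Int]].withDefaultValue(Vector.empty)
  for ((s, id) <- a.zipWithIndex; sub <- kSubseqs(s, k))
    idx(sub) = idx(sub) :+ id // ids arrive in increasing order, so lists stay sorted
  idx.toMap.withDefaultValue(Vector.empty)
}

// O(m + n) merge-style intersection of two sorted id lists (first observation above)
def intersectSorted(xs: Vector[Int], ys: Vector[Int]): Vector[Int] = {
  val out = Vector.newBuilder[Int]
  var i = 0; var j = 0
  while (i < xs.length && j < ys.length) {
    if (xs(i) < ys(j)) i += 1
    else if (ys(j) < xs(i)) j += 1
    else { out += xs(i); i += 1; j += 1 }
  }
  out.result()
}

def isSub(q: String, s: String): Boolean = { // the question's subsequence check
  var i = 0; var j = 0
  while (i < q.length && j < s.length) { if (q(i) == s(j)) i += 1; j += 1 }
  i == q.length
}

def query(a: Vector[String], idx: Map[String, Vector[Int]], k: Int, q: String): Vector[String] = {
  val candidates =
    if (q.length < k) a.indices.toVector // query too short to index: brute force
    else kSubseqs(q, k).toVector.map(idx)
      .sortBy(_.length) // intersect the smallest lists first, as suggested above
      .reduce(intersectSorted)
  candidates.map(a).filter(s => isSub(q, s))
}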
One thought:
If q tends to be short, maybe reducing A and q to sets will help?
So for the example, derive { (a,b,c,d,e,f), (a), (a,c,d) }. Looking up possible candidates for any q should be faster than the original problem (that's a guess actually; I'm not sure how exactly - maybe sort them and "group" similar ones in Bloom filters?), then use brute force to weed out the false positives.
If the A strings are lengthy, you could make the characters unique based on their occurrence, so that would be {(a1,b1,c1,d1,e1,f1),(a1,a2,a3,a4,a5,a6),(a1,c1,d1,d2)}. This is fine, because if you search for "ddca" you only want to match the second d to a second d. The size of your alphabet would go up (bad for Bloom- or bitmap-style operations) and would be different every time you get new A's, but the number of false positives would go down.
First let me make sure my understanding/abstraction is correct. The following two requirements should be met:
if A is a subsequence of B, then all characters in A should appear in B.
for those characters in B, their positions should be in an ascending order.
Note that, a char in A might appear more than once in B.
To solve 1), a map/set can be used. The key is the character in string B, and the value doesn't matter.
To solve 2), we need to maintain the position of each character. Since a character might appear more than once, the position should be a collection.
So the structure is like:
Map<Character, List<Integer>)
e.g.
abcdefab
a: [0, 6]
b: [1, 7]
c: [2]
d: [3]
e: [4]
f: [5]
Once we have the structure, how do we know whether the characters appear in the same order as they do in string A? If A is acd, we should check the a at position 0 (but not 6), c at position 2 and d at position 3.
The strategy here is to choose the position that's after and close to the previous chosen position. TreeSet is a good candidate for this operation.
public E higher(E e)
Returns the least element in this set strictly greater than the given element, or null if there is no such element.
The runtime complexity is O(s * (n1 + n2) * log(m)).
s: number of strings in the set
n1: number of chars in string (B)
n2: number of chars in query string (A)
m: number of duplicates in string (B), e.g. there are 5 a's.
Below is the implementation with some test data.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeSet;
public class SubsequenceStr {

    public static void main(String[] args) {
        String[] testSet = new String[] {
            "abcdefgh", // right one
            "adcefgh",  // has all chars, but not the right order
            "bcdefh",   // missing one char
            "",         // empty
            "acdh",     // exact match
            "acd",
            "acdehacdeh"
        };
        List<String> subseqenceStrs = subsequenceStrs(testSet, "acdh");
        for (String str : subseqenceStrs) {
            System.out.println(str);
        }
        // duplicates in query
        subseqenceStrs = subsequenceStrs(testSet, "aa");
        for (String str : subseqenceStrs) {
            System.out.println(str);
        }
        subseqenceStrs = subsequenceStrs(testSet, "aaa");
        for (String str : subseqenceStrs) {
            System.out.println(str);
        }
    }

    public static List<String> subsequenceStrs(String[] strSet, String q) {
        System.out.println("find strings whose subsequence string is " + q);
        List<String> results = new ArrayList<String>();
        for (String str : strSet) {
            char[] chars = str.toCharArray();
            Map<Character, TreeSet<Integer>> charPositions = new HashMap<Character, TreeSet<Integer>>();
            for (int i = 0; i < chars.length; i++) {
                TreeSet<Integer> positions = charPositions.get(chars[i]);
                if (positions == null) {
                    positions = new TreeSet<Integer>();
                    charPositions.put(chars[i], positions);
                }
                positions.add(i);
            }
            char[] qChars = q.toCharArray();
            int lowestPosition = -1;
            boolean isSubsequence = false;
            for (int i = 0; i < qChars.length; i++) {
                TreeSet<Integer> positions = charPositions.get(qChars[i]);
                if (positions == null || positions.size() == 0) {
                    break;
                } else {
                    Integer position = positions.higher(lowestPosition);
                    if (position == null) {
                        break;
                    } else {
                        lowestPosition = position;
                        if (i == qChars.length - 1) {
                            isSubsequence = true;
                        }
                    }
                }
            }
            if (isSubsequence) {
                results.add(str);
            }
        }
        return results;
    }
}
Output:
find strings whose subsequence string is acdh
abcdefgh
acdh
acdehacdeh
find strings whose subsequence string is aa
acdehacdeh
find strings whose subsequence string is aaa
As always, I might be totally wrong :)
You might want to have a look at the book Algorithms on Strings, Trees, and Sequences by Dan Gusfield. As it turns out, part of it is available on the internet. You might also want to read Gusfield's Introduction to Suffix Trees. As it turns out this book covers many approaches for your kind of question; it is considered one of the standard publications in this field.
Get a fast longest common subsequence (LCS) algorithm implementation. Actually it suffices to determine the length of the LCS. Notice that Gusfield's book has very good algorithms and also points to more sources for such algorithms.
Return all s ∈ A with length(LCS(s,q)) == length(q)
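For concreteness, a standard quadratic LCS-length DP and the resulting filter, in Scala (a sketch; substitute any faster LCS implementation):
def lcsLength(a: String, b: String): Int = {
  val dp = Array.ofDim[Int](a.length + 1, b.length + 1)
  for (i <- 1 to a.length; j <- 1 to b.length)
    dp(i)(j) =
      if (a(i - 1) == b(j - 1)) dp(i - 1)(j - 1) + 1
      else math.max(dp(i - 1)(j), dp(i)(j - 1))
  dp(a.length)(b.length)
}

// q is a subsequence of s exactly when the LCS keeps all of q
def matches(a: Seq[String], q: String): Seq[String] =
  a.filter(s => lcsLength(s, q) == q.length)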
