It seems that many difficult string problems can be solved using either suffix tries (trees) or dynamic programming.
But I am not sure which approach is best to use and when.
Additionally, which approach is better to master and have in your arsenal for job interviews? I assume it would be the one a programmer would use more frequently in everyday tasks.
This is more a question of which algorithmic technique is more useful to master, as the one most frequently used on the job, than a simple comparison of asymptotic complexity.
Think of a problem requiring the lexicographically nth substring of a given string: a suffix array is just what you need, and it is easy to learn the bare essentials for solving most problems involving suffix arrays.
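For illustration, here is a naive Python sketch of that idea. It sorts the suffixes directly, so it is roughly O(n² log n) and only suitable for small strings, not the linear-time constructions you would use under contest constraints:

    def nth_substring(s, k):
        """Return the lexicographically k-th (1-based) distinct substring of s."""
        n = len(s)
        sa = sorted(range(n), key=lambda i: s[i:])  # naive suffix array
        prev = ""
        for i in sa:
            suf = s[i:]
            # Prefixes of suf no longer than its common prefix with the
            # previous suffix were already counted; longer ones are new.
            lcp = 0
            while lcp < min(len(prev), len(suf)) and prev[lcp] == suf[lcp]:
                lcp += 1
            if k <= len(suf) - lcp:
                return suf[:lcp + k]
            k -= len(suf) - lcp
            prev = suf
        return None  # fewer than k distinct substrings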
On the other hand, DP is an algorithmic technique: master it and you will be able to solve a HUGE number of problems, not only string problems.
For an interview, though, I would take DP any day. A DP problem lets interviewers make it knotty, almost impossible to solve without DP within the given constraints, while the solution amounts to giving them a basic recursion and showing how DP speeds it up. A suffix-array-only problem would mean they are assessing you on a single data structure (easy once learned) rather than a more general technique that requires mastery.
PS: I had put off learning DP until recently, when I got fed up trying to solve problems that require DP using advanced data structures instead, and would invariably fail. Case in point: UVA 1394 is a simple problem now that I know how to solve it using DP, but I instead went on to study segment trees and achieved O(n lg n), whereas DP gives O(n). So, final advice: if you haven't studied DP, drop everything else and go for it.
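(For the curious: as far as I recall, UVA 1394 boils down to a Josephus-style elimination problem, and the O(n) DP is just the classic recurrence. A minimal sketch, assuming a 0-indexed answer and elimination of every k-th person:)

    def josephus(n, k):
        # J(1) = 0; J(m) = (J(m-1) + k) % m -- each added person shifts
        # the survivor's index by the step size k, modulo the circle size.
        pos = 0
        for size in range(2, n + 1):
            pos = (pos + k) % size
        return pos  # 0-indexed position of the survivor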
Honestly, for job interviews, no suffix tree is needed; that's too difficult and beyond the scope. DP, however, is widely used in interviews at famous companies like Google and Facebook.
Suffix trees are also more limited than DP: they are usually used for string-related problems, whereas DP applies to many different areas.
I am trying to solve a pattern mining problem for strings, and I think that suffix trees or arrays might be a good option for it.
I will quickly outline the problem:
I have a set of strings of different lengths (the quotation marks just mark repetitions, for the explanation):
C"BECB"ECCECCECEEB"BECB"FCCECCECCECCECCFCBFCBFCC
DCBBDCDDCCECBCEC"BECB""BECB"BECB"BECB"BECB"EDB"BECB""BECB"BEC
etc.
I now would like to find repeated patterns within each string, and repeated patterns that are common between the strings. A repeated pattern in string (1) would be "BECB". Moreover, the pattern "BECB" is also repeated in string (2). Of course, there are several criteria that need to be decided on, for example the minimum number of repetitions or the minimum number of symbols in a pattern.
From the book by Dan Gusfield "Algorithms on Strings, Trees and Sequences" I know that it is possible to find such repeats (e.g. maximal pairs, maximal repetitive structures etc.) using certain algorithms and a suffix tree data structure. This comes in handy as I would like to use probabilistic suffix trees to also calculate some predictions on these sequences. (But this is not the topic of this post.)
Unfortunately I can't find any implementations of these algorithms. Hence, I am wondering if I am even on the right path using suffix trees to solve the mentioned problem. It seems very strange to me that for such a well described problem no packages are available (in R or Python for example).
Are there any alternatives (with existing packages) that solve my problem?
Or do you know any implementation of algorithms for suffix trees?
Here is an implementation in C++ that follows the approach of Dan Gusfield's book: https://cp-algorithms.com/string/suffix-tree-ukkonen.html
You could rewrite it in Python. Such algorithms are quite specialised for high-performance applications, so it's quite normal that they don't appear in any standard library; nonetheless, they are well known, so you can usually find good implementations on the net.
Both suffix trees and suffix arrays are good data structures to help in solving the kinds of problems you want to solve.
Building a (multi-string) suffix tree in Python is probably not a good idea: it involves a lot of operations on individual characters, and the resulting data structure consumes a lot of memory unless you spend a lot of code avoiding that.
Building a suffix array in Python is more approachable, and the resulting data structure (probably just an array of integers) is reasonably compact.
It doesn't seem too difficult to find suffix array code in Python on the web, and there's a nice explanation here: https://louisabraham.github.io/articles/suffix-arrays
It would be more difficult to find one that already supports multiple strings, so you would have to decide how you want to handle that. In any case, the prefix doubling algorithm is easy to implement if you leverage the standard built-in sort(), and in Python that will probably produce the fastest result; a sketch follows.
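A compact sketch of prefix doubling using the built-in sort (O(n log² n)); for multiple strings, one common trick is to concatenate them with unique separator characters that do not occur in the alphabet:

    def suffix_array(s):
        """Prefix doubling: sort suffixes by their first 2*j characters each round."""
        n = len(s)
        if n == 0:
            return []
        rank = [ord(c) for c in s]
        sa = list(range(n))
        j = 1
        while True:
            # Key = (rank of first j chars, rank of next j chars) from last round.
            key = lambda i: (rank[i], rank[i + j] if i + j < n else -1)
            sa.sort(key=key)
            new_rank = [0] * n
            for a, b in zip(sa, sa[1:]):
                new_rank[b] = new_rank[a] + (key(a) < key(b))
            rank = new_rank
            if rank[sa[-1]] == n - 1:  # all ranks distinct: fully sorted
                return sa
            j *= 2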
I understand that dynamic programming is a divide-and-conquer-style technique that uses recursion to solve a problem, but I am having difficulty understanding how dynamic programming manages to escape local minima.
In dynamic programming approaches, you evaluate all possible cases, just like a brute-force algorithm, but reuse solutions to subproblems that have already been computed. That's why you always achieve the global solution to your problem.
The way DP works is that you build a bigger optimal solution from the optimal solutions to smaller problems. You don't really have local minima that way, so you can't talk about escaping them. You're always at the optimal point of whatever (sub)problem you're at - any time you get a solution to some (sub)problem, it will be optimal. That's completely different from having a complete (but possibly non-optimal) solution that you're trying to successively improve until you reach optimality.
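A small illustration of that point (minimum-coin change, a classic DP): every table entry is the optimal answer to its own subproblem and is built only from already-optimal smaller entries, so there is never a suboptimal intermediate state to escape from.

    def min_coins(coins, amount):
        INF = float("inf")
        dp = [0] + [INF] * amount  # dp[a] = fewest coins summing to a
        for a in range(1, amount + 1):
            for c in coins:
                if c <= a and dp[a - c] + 1 < dp[a]:
                    dp[a] = dp[a - c] + 1  # optimal by construction
        return dp[amount] if dp[amount] < INF else -1  # -1: unreachable

    # A greedy (locally best) choice would pick 12 + 1 + 1 + 1 + 1 = 5 coins;
    # the DP table yields the global optimum 5 + 5 + 5 + 1 = 4 coins.
    print(min_coins([1, 5, 12], 16))  # -> 4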
I only know that branch and bound can REDUCE the work needed to obtain a solution, but that only helps for problems that have a solution-space tree.
Dynamic Programming
Dynamic programming (usually referred to as DP) is a very powerful technique for solving a particular class of problems. It demands a very elegant formulation of the approach; the thinking is simple and the coding part is very easy. The idea is simple: if you have solved a problem with a given input, save the result for future reference, so as to avoid solving the same problem again. In short: 'Remember your past'.
If the given problem can be broken up into smaller subproblems, and these smaller subproblems can in turn be divided into still smaller ones, and in this process you observe some overlapping subproblems, then that's a big hint for DP. Also, the optimal solutions to the subproblems should contribute to the optimal solution of the given problem (referred to as the Optimal Substructure Property).
There are two ways of doing this.
1.) Top-Down: Start solving the given problem by breaking it down. If you see that a subproblem has already been solved, just return the saved answer; if it has not, solve it and save the answer. This is usually easy to think of and very intuitive. It is referred to as Memoization.
2.) Bottom-Up: Analyze the problem, see the order in which the subproblems are solved, and start solving from the trivial subproblem up towards the given problem. In this process, it is guaranteed that the subproblems are solved before the problem itself. This is referred to as Dynamic Programming. Both styles are sketched below on the same toy recurrence.
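A minimal sketch of both styles on Fibonacci:

    import functools

    # 1) Top-Down (memoization): recurse on the problem, caching sub-answers.
    @functools.lru_cache(maxsize=None)
    def fib_td(n):
        return n if n < 2 else fib_td(n - 1) + fib_td(n - 2)

    # 2) Bottom-Up (tabulation): start from the trivial subproblems and build up.
    def fib_bu(n):
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a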
Branch And Bound
A branch-and-bound algorithm consists of a systematic enumeration of candidate solutions by means of state space search: the set of candidate solutions is thought of as forming a rooted tree with the full set at the root.
The algorithm explores branches of this tree, which represent subsets of the solution set. Before enumerating the candidate solutions of a branch, the branch is checked against upper and lower estimated bounds on the optimal solution, and is discarded if it cannot produce a better solution than the best one found so far by the algorithm.
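To make the enumerate-and-prune idea concrete, here is a small hypothetical sketch for the 0/1 knapsack problem, using a greedy fractional bound to discard branches (an illustration of the technique, not code from the lecture notes linked below; assumes positive values and weights):

    def knapsack_bb(items, capacity):
        """Branch and bound for 0/1 knapsack; items is a list of (value, weight)."""
        # Sort by value density so the greedy fractional bound is valid.
        items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
        best = 0

        def bound(i, value, room):
            # Optimistic estimate: fill the remaining room greedily,
            # allowing a fraction of the last item.
            for v, w in items[i:]:
                if w <= room:
                    value, room = value + v, room - w
                else:
                    return value + v * room / w
            return value

        def branch(i, value, room):
            nonlocal best
            best = max(best, value)
            # Prune: discard this subtree if even its optimistic bound
            # cannot beat the best complete solution found so far.
            if i == len(items) or bound(i, value, room) <= best:
                return
            v, w = items[i]
            if w <= room:
                branch(i + 1, value + v, room - w)  # take item i
            branch(i + 1, value, room)              # skip item i

        branch(0, 0, capacity)
        return best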
For more information about branch and bound, please refer to this link:
http://www.cs.umsl.edu/~sanjiv/classes/cs5130/lectures/bb.pdf
Dynamic programming requires a recursive structure (a.k.a. optimal substructure in CLRS). That is, at a given state, one can characterize the optimal decision based on partial solutions.
Branch and bound is more general and is used to solve more difficult problems via implicit enumeration of the solution space.
Today I read that there is a piece of software called WinCalibra (scroll a bit down on the page) which can take a text file with properties as input.
This program can then optimize the input properties based on the output values of your algorithm. See this paper or the user documentation for more information (see the link above; sadly, the doc is a zipped exe).
Do you know of other software that can do the same and runs under Linux? (Preferably open source.)
EDIT: Since I need this for a Java application: should I invest my research time in Java libraries like GAUL or Watchmaker? The problem is that I don't want to roll my own solution, nor do I have time to do so. Do you have pointers to out-of-the-box applications like Calibra? (Internet searches weren't successful; I only found libraries.)
I decided to give away the bounty (otherwise no one would have benefited), although I didn't find a satisfactory solution :-( (an out-of-the-box application).
Some kind of probability-selected random walk (Metropolis-algorithm-like) is a possibility in this instance, perhaps with simulated annealing to improve the final selection; though the timing parameters you've supplied are not ideal for getting a really great result this way.
It works like this:
You start at some point. Use your existing data to pick one that looks promising (like the highest value you've got). Set o to the output value at this point.
You propose a randomly selected step in the input space and assign the output value there to n.
Accept the step (that is, update the working position) if (1) n > o, or (2) the new value is lower but a random number on [0,1) is less than f(n/o), for some monotonically increasing f() with range and domain on [0,1).
Repeat steps 2 and 3 as long as you can afford, collecting statistics at each step.
Finally compute the result. In your case an average of all points is probably sufficient.
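A minimal Python sketch of the walk just described (the objective function f_obj, starting point, and step size are placeholders; it assumes positive output values so that n/o makes sense, and uses the f(x) = exp(-(1/x)) suggested below, which gives an acceptance probability of exp(-o/n)):

    import math
    import random

    def metropolis_walk(f_obj, start, step, n_steps):
        """Random walk with probabilistic acceptance; returns visited points."""
        x = list(start)
        o = f_obj(x)
        history = [(list(x), o)]
        for _ in range(n_steps):
            # Propose a uniformly random step in the input space.
            cand = [xi + random.uniform(-step, step) for xi in x]
            n = f_obj(cand)
            # Accept if better, or with probability f(n/o) = exp(-o/n) if worse
            # (guard against division by zero for degenerate outputs).
            if n > o or random.random() < math.exp(-o / max(n, 1e-12)):
                x, o = cand, n
            history.append((list(x), o))
        return history

For the simulated-annealing variant described next, you would additionally shrink step as the iterations progress.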
Important caveat: this approach has trouble if the space has many local maxima with deep dips between them, unless the step size is big enough to get past the dips; but big steps make the whole thing slow to converge. To fix this you do two things:
Do simulated annealing (start with a large step size and gradually reduce it, thus allowing the walker to move between local maxima early on, but trapping it in one region later to accumulate precise results).
Use several (many, if you can afford it) independent walkers, so that they can get trapped in different local maxima. The more you use, and the bigger the difference in output values, the more likely you are to find the best maximum.
This is not necessary if you know that you only have one, big, broad, nicely behaved local extreme.
Finally, the selection of f(). You can just use f(x) = x, but you'll get optimal convergence if you use f(x) = exp(-(1/x)).
Again, you don't have enough time for a great many steps (though if you have multiple computers, you can run separate instances to get the multiple-walkers effect, which will help), so you might be better off with some kind of deterministic approach. But that is not a subject I know enough about to offer any advice on.
There is a lot of genetic-algorithm-based software that can do exactly that; I wrote a PhD thesis about it a decade or two ago.
A Google search for genetic algorithms Linux shows a load of starting points.
Intrigued by the question, I did a bit of poking around, trying to get a better understanding of the nature of CALIBRA, its standing in academic circles, and the existence of similar software or projects in the open-source and Linux world.
Please be kind (and, please, edit directly or suggest edits) regarding the likely instances where my assertions are incomplete, inexact, or even flat-out incorrect. While I work in related fields, I'm by no means an Operational Research (OR) authority!
The [algorithm] parameter tuning problem is relatively well defined, typically framed as a solution-search problem: the combination of all possible parameter values constitutes a solution space, and the aim of the tuning logic is to "navigate" [portions of] this space in search of an optimal (or locally optimal) set of parameters.
The optimality of a given solution is measured in various ways, and such metrics help direct the search. In the case of the parameter tuning problem, the quality of a given solution is measured, directly or through a function, from the output of the algorithm [i.e. the algorithm being tuned, not the algorithm of the tuning logic!].
Framed as a search problem, the discipline of algorithm parameter tuning doesn't differ significantly from other solution-search problems where the solution space is defined by something other than the parameters of a given algorithm. But because it works on algorithms which are themselves solutions of sorts, this discipline is sometimes referred to as metaheuristics or metasearch. (A metaheuristic approach can be applied to various algorithms.)
Certainly there are many features specific to the parameter tuning problem compared with other optimization applications, but with regard to the solution searching per se, the approaches and problems are generally the same.
Indeed, while well defined, the search problem is generally still broadly unsolved, and is the object of active research in very many different directions, for many different domains. Various approaches offer mixed success depending on the specific conditions and requirements of the domain, and this vibrant and diverse mix of academic research and practical applications is a common trait to Metaheuristics and to Optimization at large.
So... back to CALIBRA...
By its own authors' admission, CALIBRA has several limitations:
Limit of 5 parameters, maximum
Requirement of a range of values for [some of ?] the parameters
Works better when the parameters are relatively independent (but... wait, when that is the case, isn't the whole search problem much easier ;-) )
CALIBRA is based on a combination of approaches that are repeated in sequence: a mix of guided search and local optimization.
The paper where CALIBRA was presented is dated 2006. Since then, there's been relatively few references to this paper and to CALIBRA at large. Its two authors have since published several other papers in various disciplines related to Operational Research (OR).
This may be indicative that CALIBRA hasn't been perceived as a breakthrough.
The state of the art in that area ("parameter tuning", "algorithm configuration") is the SPOT package in R. You can connect external fitness functions written in a language of your choice. It is really powerful.
I am working on adapters for e.g. C++ and Java that simplify the experimental setup, which requires some getting used to in SPOT. The project goes under the name InPUT, and a first version of the tuning part will be up soon.
Does anyone know of a good reference for canonical CS problems?
I'm thinking of things like "the sorting problem", "the bin packing problem", "the traveling salesman problem", and so on.
edit: websites preferred
You can probably find the best coverage in an algorithms textbook like Introduction to Algorithms. Though I've never read that particular book, it's renowned for being thorough and would probably contain most of the problems you're likely to encounter.
"Computers and Intractability: A guide to the theory of NP-Completeness" by Garey and Johnson is a great reference for this sort of thing, although the "solved" problems (in P) are obviously not given much attention in the book.
I'm not aware of any good on-line resources, but Karp's seminal paper Reducibility among Combinatorial Problems (1972) on reductions and complexity is probably the "canonical" reference for Hard Problems.
Have you looked at Wikipedia's Category:Computational problems and Category:NP Complete Problems pages? It's probably not complete, but they look like good starting points. Wikipedia seems to do pretty well in CS topics.
I don't think you'll find the answers to all those problems in only one book. I've never seen any decent, comprehensive website on algorithms, so I'd recommend you stick to books. That said, you can always get introductory material from the canonical algorithm texts; there are three I usually recommend: CLRS; Manber; and Aho, Hopcroft and Ullman (the last is a bit out of date in some key topics, but it's so formal and well-written that it's a must-read). All of them contain important combinatorial problems that are, in some sense, canonical problems in computer science.

After learning some fundamentals of graph theory, you'll be able to move on to network flows and linear programming. These comprise a set of techniques that will ultimately solve most problems you'll encounter (linear programming with the variables restricted to integer values is NP-hard). Network flows deals with problems defined on graphs (with weighted/capacitated edges) and has very interesting applications in fields that seemingly have no relationship to graph theory whatsoever. THE textbook on this is Ahuja, Magnanti and Orlin's.

Linear programming is a kind of superset of network flows, and deals with optimizing a linear function of variables subject to restrictions in the form of a linear system of equations. A book that emphasizes the relationship to network flows is Bazaraa's. Then you can move on to integer programming, a very valuable tool that provides many natural techniques for modelling problems like bin packing, task scheduling, the knapsack problem, and so on. A good reference would be L. Wolsey's book.
You definitely want to look at NIST's Dictionary of Algorithms and Data Structures. It's got the traveling salesman problem, the Byzantine generals problem, the dining philosophers' problem, the knapsack problem (= your "bin packing problem", I think), the cutting stock problem, the eight queens problem, the knight's tour problem, the busy beaver problem, the halting problem, etc. etc.
It doesn't have the firing squad synchronization problem (I'm surprised about that omission) or the Jeep problem (more logistics than computer science).
Interestingly enough, there's a blog post on codinghorror.com that talks about some of these in puzzle form. (I can't remember whether I've read Smullyan's book cited in the post, but he is a good compiler of puzzles and philosophical musings. Martin Gardner, Douglas Hofstadter, and H. E. Dudeney are others.)
Also maybe check out the Stony Brook Algorithm Repository.
(Or look up "combinatorial problems" on Google, search for "problem" in Wolfram MathWorld, or look at Hilbert's problems; but in all of these, many of the entries are more pure mathematics than computer science.)
#rcreswick: those sound like good references but fall a bit shy of what I'm thinking of. (However, for all I know, they may be the best there is.)
I'm not going to mark anything as accepted, in the hope that people might find a better reference.
Meanwhile, I'm going to list a few problems here; feel free to add more:
The sorting problem: find an ordering of a set that is monotonic in a given way.
The bin packing problem: partition a set into a minimum number of subsets where each subset is "smaller" than some limit.
The traveling salesman problem: find a Hamiltonian cycle in a weighted graph with the minimum total weight.
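To make that last definition concrete, here is a tiny brute-force sketch (exponential in the number of cities, illustration only):

    import itertools

    def tsp_brute_force(dist):
        """dist[i][j] = weight of edge (i, j); returns the cheapest tour cost."""
        n = len(dist)
        best = None
        # Fix city 0 as the start and try every ordering of the rest.
        for perm in itertools.permutations(range(1, n)):
            tour = (0,) + perm + (0,)
            cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
            if best is None or cost < best:
                best = cost
        return best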