I understand the basics of this search, but the beta cut-off part is confusing me: when beta <= the value computed by alphabeta, I can either return beta, break, or continue the loop.
'return beta' doesn't seem to work properly at all; it returns the wrong player's move for a different state of the board (further into the search tree).
'break' seems to work correctly, and it is very fast, but it seems a bit TOO fast.
'continue' is a lot slower than 'break' but seems more correct... I'm guessing this is the right way, but the pseudocode examples on Google all use 'break', and because it is pseudocode I'm not sure what they actually mean by 'break'.
Just for the fun of it I'm going to guess that you're talking about Minimax with Alpha-Beta cutoff, where
ALPHA-BETA cutoff is a method for reducing the number of nodes explored in the Minimax strategy. For the nodes it explores it computes, in addition to the score, an alpha value and a beta value.
Here is a page that describes this method and also provides a link to a C program that implements it. Hopefully something here helps you with your problem; if I'm totally off with my guess, please give more detail in your question.
function MINIMAX(N) is
begin
    if N is a leaf then
        return the estimated score of this leaf
    else
        Let N1, N2, .., Nm be the successors of N;
        if N is a Min node then
            return min{MINIMAX(N1), .., MINIMAX(Nm)}
        else
            return max{MINIMAX(N1), .., MINIMAX(Nm)}
end MINIMAX;
Beta cutoffs occur when the branch you are currently searching is better for your opponent than one you've already searched. It was once explained to me as follows:
Suppose you are fighting your enemy, and you consider a number of your choices.
After fully searching the best possible outcome of your first choice (throwing a punch), you determine the result is that your opponent will eventually poke you in the eye. We'll call this beta... the best your opponent can do so far. Obviously, you would like to find a result that does better.
Now we consider your next option (running away in disgrace). When exploring your opponent's first possible reply, we find that the best possible outcome is that you are shot in the back with a gun. This is where a beta cutoff is triggered... we stop searching the rest of your opponent's moves and return beta, because we really don't care if, in searching his other replies, you find he can also nuke you... you would already opt for the poke in the eye from the previous option.
Now specifically, what this means is that your program should return beta... if that doesn't work, you should compare your code against an alpha-beta implementation from elsewhere.
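To make that concrete, here is a minimal fail-hard alpha-beta sketch in Python over a toy tree (the list encoding of the tree is made up for illustration; a real engine would expand board states instead):

import math

def alphabeta(node, alpha, beta, maximizing):
    # Toy encoding: an int is a leaf score, a list is an internal node.
    if isinstance(node, int):
        return node
    if maximizing:
        for child in node:
            value = alphabeta(child, alpha, beta, False)
            if value >= beta:
                return beta      # beta cutoff: Min already has a better option elsewhere
            alpha = max(alpha, value)
        return alpha
    else:
        for child in node:
            value = alphabeta(child, alpha, beta, True)
            if value <= alpha:
                return alpha     # alpha cutoff, the symmetric case
            beta = min(beta, value)
        return beta

# Max to move at the root; the minimax value of this tree is 3.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, -math.inf, math.inf, True))  # prints 3

Returning beta at the cutoff and the 'break' in most pseudocode come to the same thing: 'break' exits the loop and the function then returns the best value found, which the parent treats identically. 'continue', at best, spends extra time on siblings that can no longer change the decision.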
I am working on the following problem and having a hell of a time at the moment.
We are playing the Guessing Game. The game will work as follows:
I pick a number between 1 and n.
You guess a number.
If you guess the right number, you win the game.
If you guess the wrong number, then I will tell you whether the number I picked is higher or lower, and you will continue guessing.
Every time you guess a wrong number x, you will pay x dollars. If you run out of money, you lose the game.
Given a particular n, return the minimum amount of money you need to guarantee a win regardless of what number I pick.
So, what do I know? Clearly this is a dynamic programming problem. I have two choices: break things up recursively, or do things bottom-up. Bottom-up seems like the better choice to me (though technically the max recursion depth would only be 100, as we are guaranteed n <= 100). The question then is: what do the sub-problems look like?
Well, I think we could start by thinking about subarrays (though possibly we need subsequences here): what is the worst case under each possible sub-division? That is:
[1,2,3,4,5,6]
[[1],[2],[3],[4],[5],[6]] -> 21
[[1,2],[3,4],[5,6]] -> 9
[[1,2,3],[4,5,6]] -> 7
...
but I don't think I quite have the idea yet. So, to be succinct, since this post is getting long: how are we breaking this up? What is the sub-problem we are trying to solve here?
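For reference, one standard way to frame the sub-problem (a sketch, not the only possible framing): let cost(lo, hi) be the minimum money that guarantees a win when the hidden number is known to lie in [lo, hi]. Guessing x costs x if wrong, and the adversary then sends you to the worse of the two remaining intervals. In Python (the function name is illustrative):

from functools import lru_cache

def guarantee_cost(n: int) -> int:
    # cost(lo, hi): minimum money that guarantees a win when the hidden
    # number is known to lie in [lo, hi].
    @lru_cache(maxsize=None)
    def cost(lo: int, hi: int) -> int:
        if lo >= hi:                      # zero or one candidate left: nothing to pay
            return 0
        # Guess x: pay x if wrong, then the adversary sends us to the
        # worse of the two remaining intervals.
        return min(x + max(cost(lo, x - 1), cost(x + 1, hi))
                   for x in range(lo, hi + 1))
    return cost(1, n)

print(guarantee_cost(6))  # prints 8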
First, I have Hands = Set[Tuple[str,str]] to represent the suit and rank of each card (e.g. Hands = {("Diamonds", "4"), ("Clubs", "J"), ...}). Then I have to check whether Hands contains a straight flush combination (all 5 cards have the same suit, in sequence). I tried using a for loop to check whether all the cards have the same suit, but the problem is that I can't slice an element inside a set. After that I am stumped. Is there a way to return a boolean that indicates whether the variable Hands is a straight flush?
Here is the code I have been working on:
from typing import Set, Tuple

Hands = Set[Tuple[str, str]]
h = {("Diamonds", "Q"), ("Diamonds", "J"), ("Diamonds", "K"), ("Diamonds", "A"), ("Diamonds", "2")}

def is_sflush(h: Hands) -> bool:
    for i in h:
        if h[i][0] == h[i+1][0]:  # This is where I am wrong and need help here
This sounds like a H/W problem, so not to give away the farm...
you have 2 checks to figure out: same suit and sequential. Do them separately.
For the "same suit", I recommend making a set of the suits from the cards (not the ranks), which you can do from a set comprehension. What will the size of that set tell you?
The sequential part is a bit more work. :) You might need an extra data structure that has the correct sequencing or position of the cards as something to compare against. Several strategies could work.
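To make the hints concrete, here is one possible sketch (assumptions: a 5-card hand, and an Ace-high-only rank order; both are illustrative choices, not the only ones):

from typing import Set, Tuple

Hands = Set[Tuple[str, str]]

# Assumed rank sequence, treating Ace as high only for simplicity.
RANK_ORDER = ["2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K", "A"]

def is_straight_flush(hand: Hands) -> bool:
    # Same suit: a set comprehension over the suits collapses duplicates,
    # so a flush leaves exactly one element.
    suits = {suit for suit, rank in hand}
    if len(suits) != 1:
        return False
    # Sequential: map each rank to its position and check for a run of 5.
    positions = sorted(RANK_ORDER.index(rank) for suit, rank in hand)
    return positions == list(range(positions[0], positions[0] + 5))

print(is_straight_flush({("Diamonds", str(r)) for r in (4, 5, 6, 7, 8)}))  # True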
There's an algorithm currently driving me crazy.
I've seen quite a few variations of it, so I'll just try to explain the easiest one I can think about.
Let's say I have a project P:
Project P is made up of 4 sub-projects.
I can solve each of those 4 in two separate ways ("modes"), and each mode has a specific cost and a specific time requirement:
For example (making it up):
Sub-project:    1         2         ...   n
Mode A (T/C):   Ta1/Ca1   Ta2/Ca2   ...
Mode B (T/C):   Tb1/Cb1   Tb2/Cb2   ...
Basically I have to find the combination of those modes which has the lowest cost. And that's kind of easy; the problem is that the combination's total time has to be lower than a specific given time.
In order to find the lowest-cost combination I can easily write something like:
for i = 1 to n
    aa[i] = min(aa[i-1], ba[i-1]) + value(a[i])
    bb[i] = min(bb[i-1], ab[i-1]) + value(b[i])
    ba[i] = min(bb[i-1], ab[i-1]) + value(b[i])
    ab[i] = min(aa[i-1], ba[i-1]) + value(a[i])
Now something like this is really easy and returns the correct value every time; the lowest value at the last iteration is going to be the correct one.
The problem is: if min selects the modality that takes the least time, in the end I'll have the fastest procedure no matter the cost.
If min selects the lowest cost, I'll have the cheapest project no matter the amount of time taken to realize it.
However, I need to take both into consideration: I can do it easily with a recursive function in O(2^n), but I can't seem to find a solution with dynamic programming.
Can anyone help me?
If there are really just four projects, you should go with the exponential-time solution. There are only 16 different cases, and the code will be short and easy to verify!
Anyway, I'm pretty sure the problem you describe is the knapsack problem, which is NP-hard. So there will be no exact sub-exponential solution unless P=NP. However, depending on what "n" actually is (is it 4 in your case? or the values of the time and cost?), there may be a pseudo-polynomial-time solution. The Wikipedia article contains descriptions of these.
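To illustrate the pseudo-polynomial route: if the times are integers and the total time budget is not too large, you can index DP states by time used. A sketch (the function name and the example numbers are made up):

INF = float("inf")

def min_cost_within(modes, time_budget):
    # modes[i] lists the (time, cost) options for sub-project i;
    # dp[t] = minimum cost to finish the sub-projects processed so far
    # using total time exactly t (INF where unreachable).
    dp = [INF] * (time_budget + 1)
    dp[0] = 0
    for options in modes:
        ndp = [INF] * (time_budget + 1)
        for t in range(time_budget + 1):
            if dp[t] == INF:
                continue
            for time_used, cost in options:
                if t + time_used <= time_budget:
                    ndp[t + time_used] = min(ndp[t + time_used], dp[t] + cost)
        dp = ndp
    return min(dp)  # INF means no combination fits the budget

# Made-up example: two sub-projects, each with a fast/expensive mode A
# and a slow/cheap mode B, under a total time budget of 5.
print(min_cost_within([[(1, 10), (3, 4)], [(2, 8), (4, 3)]], 5))  # prints 12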
This is a problem that appeared in today's Pacific NW Region Programming Contest, and no one solved it. It is problem B, and the complete problem set is here: http://www.acmicpc-pacnw.org/icpc-statements-2011.zip. There is a well-known O(n^2) algorithm for the LCS of two strings using dynamic programming, but when these strings are extended to rings I have no idea...
P.S. Note that it is subsequence rather than substring, so the elements do not need to be adjacent to each other.
P.P.S. It might not be O(n^2) but O(n^2 lg n), or anything that can give the result in 5 seconds on a common computer.
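For reference, the well-known O(n^2) dynamic program the question mentions, as a Python sketch:

def lcs_length(a: str, b: str) -> int:
    # Classic O(len(a) * len(b)) dynamic program, kept to two rows:
    # prev[j] is the LCS length of the current prefix of a and b[:j].
    prev = [0] * (len(b) + 1)
    for ch in a:
        curr = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if ch == b[j - 1]:
                curr[j] = prev[j - 1] + 1
            else:
                curr[j] = max(prev[j], curr[j - 1])
        prev = curr
    return prev[len(b)]

print(lcs_length("ABCBDAB", "BDCABA"))  # prints 4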
Searching the web, this appears to be covered by section 4.3 of the paper "Incremental String Comparison", by Landau, Myers, and Schmidt at cost O(ne) < O(n^2), where I think e is the edit distance. This paper also references a previous paper by Maes giving cost O(mn log m) with more general edit costs - "On a cyclic string to string correcting problem". Expecting a contestant to reproduce either of these papers seems pretty demanding to me - but as far as I can see the question does ask for the longest common subsequence on cyclic strings.
You can double the first and second string and then use the ordinary method, and later wrap the positions around.
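Purely as a slow reference for checking faster code, the doubling idea can be spelled out in brute-force form (assuming CLCS means the maximum ordinary LCS over all rotations of both strings, and reusing lcs_length from the sketch above):

def clcs_bruteforce(a: str, b: str) -> int:
    # O(n^4) reference: enumerate every rotation of both strings via
    # doubling, then take the best ordinary LCS. Only useful for
    # checking faster implementations on small inputs.
    da, db = a + a, b + b
    return max(lcs_length(da[i:i + len(a)], db[j:j + len(b)])
               for i in range(len(a))
               for j in range(len(b)))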
It is a good idea to "double" the strings and apply the standard dynamic programming algorithm. The problem with it is that to get the optimal cyclic LCS one then has to "start the algorithm from multiple initial conditions". Just one initial condition (e.g. setting all Lij variables to 0 at the boundaries) will not do in general. In practice it turns out that the number of initial states needed is O(N) (they span a diagonal), so one gets back to an O(N^3) algorithm.
However, the approach does have some virtue, as it can be used to design efficient O(N^2) heuristics (not exact but near-exact) for CLCS.
I do not know if a true O(N^2) algorithm exists, and I would be very interested if someone knows one.
The CLCS problem has quite interesting "periodicity" properties: the length of the CLCS of p-times repeated strings is p times the CLCS of the original strings. This can be proved by adopting a geometric view of the problem.
Also, there are some additional known results about the problem: it can be shown that if Lc(N) denotes the averaged value of the CLCS length of two random strings of length N, then |Lc(N) - CN| is O(sqrt(N)), where C is the Chvatal-Sankoff constant. For the averaged length L(N) of the standard LCS, the only rate result of which I know says that |L(N) - CN| is O(sqrt(N log N)). There could be a nice way to compare Lc(N) with L(N), but I don't know it.
Another question: it is clear that the CLCS length is not superadditive, contrary to the LCS length. By this I mean it is not true that CLCS(X1X2, Y1Y2) is always at least CLCS(X1, Y1) + CLCS(X2, Y2) (it is very easy to find counterexamples with a computer).
But it seems possible that the averaged length Lc(N) is superadditive, i.e. Lc(N1+N2) is greater than Lc(N1) + Lc(N2), though if there is a proof I don't know it.
One modest interest in this question is that the values Lc(N)/N for the first few values of N would then provide good bounds on the Chvatal-Sankoff constant (much better than L(N)/N).
As a followup to mcdowella's answer, I'd like to point out that the O(n^2 lg n) solution presented in Maes' paper is the intended solution to the contest problem (check http://www.acmicpc-pacnw.org/ProblemSet/2011/solutions.zip). The O(ne) solution in Landau et al's paper does NOT apply to this problem, as that paper is targeted at edit distance, not LCS. In particular, the solution to cyclic edit distance only applies if the edit operations (add, delete, replace) all have unit (1, 1, 1) cost. LCS, on the other hand, is equivalent to edit distance with (add, delete, replace) costs (1, 1, 2); concretely, with those costs the distance satisfies d(A, B) = |A| + |B| - 2*LCS(A, B). These are not equivalent to each other; for example, consider the input strings "ABC" and "CXY" (for the acyclic case; you can construct cyclic counterexamples similarly). The LCS of the two strings is "C", but the minimum unit-cost edit is to replace each character in turn.
At 110 lines but no complex data structures, Maes' solution falls towards the upper end of what is reasonable to implement in a contest setting. Even if Landau et al's solution could be adapted to handle cyclic LCS, the complexity of the data structure makes it infeasible in a contest setting.
Last but not least, I'd like to point out that an O(n^2) solution DOES exist for CLCS, described here: http://arxiv.org/abs/1208.0396. At 60 lines, with no complex data structures and only 2 arrays, this solution is quite reasonable to implement in a contest setting. Arriving at the solution might be a different matter, though.
I am aware that languages like Prolog allow you to write things like the following:
mortal(X) :- man(X). % All men are mortal
man(socrates). % Socrates is a man
?- mortal(socrates). % Is Socrates mortal?
yes
What I want is something like this, but backwards. Suppose I have this:
mortal(X) :- man(X).
man(socrates).
man(plato).
man(aristotle).
I then ask it to give me a random X for which mortal(X) is true (thus it should give me one of 'socrates', 'plato', or 'aristotle' according to some random seed).
My questions are:
Does this sort of reverse inference have a name?
Are there any languages or libraries that support it?
EDIT
As somebody below pointed out, you can simply ask mortal(X) and it will return all X, from which you can simply pick a random one from the list. What if, however, that list were very large, perhaps in the billions? Obviously in that case it wouldn't do to generate every possible result before picking one.
To see how this would be a practical problem, imagine a simple grammar that generates a random sentence of the form "adjective1 noun1 adverb transitive_verb adjective2 noun2". If the lists of adjectives, nouns, verbs, etc. are very large, you can see how the combinatorial explosion is a problem. If each list had 1000 words, you'd have 1000^6 possible sentences.
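For this particular unconstrained grammar, no search is needed at all: each slot can be sampled independently, without enumerating the 1000^6 combinations (a sketch; the word lists are placeholders). The hard case, and the point of the question, is when the slots constrain each other, as in a Prolog program:

import random

adjectives = ["quick", "lazy"]          # placeholder lists; imagine 1000 entries each
nouns = ["fox", "dog"]
adverbs = ["quietly"]
verbs = ["chases"]

def random_sentence():
    # Uniform over all adjective1-noun1-adverb-verb-adjective2-noun2
    # combinations, without ever materializing them.
    return " ".join([random.choice(adjectives), random.choice(nouns),
                     random.choice(adverbs), random.choice(verbs),
                     random.choice(adjectives), random.choice(nouns)])

print(random_sentence())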
Instead of Prolog's standard depth-first search, a randomized depth-first search strategy could easily be implemented. All that is required is to randomize the program flow at choice points, so that every time a disjunction is reached, a random branch of the search tree (= Prolog program) is selected instead of the first.
Note, though, that this approach does not guarantee that all solutions will be equally probable. To guarantee that, you would need to know in advance how many solutions each branch will generate, in order to weight the randomization accordingly.
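The same idea, sketched in Python rather than Prolog (the node encoding and helper names are made up for illustration):

import random

def randomized_dfs(node, children, is_goal):
    # Depth-first search that shuffles every choice point, so repeated
    # runs can return different solutions (not necessarily uniformly).
    if is_goal(node):
        return node
    branches = list(children(node))
    random.shuffle(branches)
    for child in branches:
        result = randomized_dfs(child, children, is_goal)
        if result is not None:
            return result
    return None

# Toy example mirroring the mortal/man program above:
men = ["socrates", "plato", "aristotle"]
print(randomized_dfs("root",
                     lambda n: men if n == "root" else [],
                     lambda n: n in men))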
I've never used Prolog or anything similar, but judging by what Wikipedia says on the subject, asking
?- mortal(X).
should list everything for which mortal is true. After that, just pick one of the results.
So to answer your questions,
I'd go with "a query with a variable in it"
From what I can tell, Prolog itself should support it quite fine.
I don't think that you can calculate the nth solution directly, but you can calculate the first n solutions (with n randomly picked) and take the last one. Of course this would be problematic if n = 10^(big_number)...
You could also do something like
mortal(X) :- man(X).
man(X) :- random(1, 4, ID), man(ID, X).
man(1, socrates).
man(2, plato).
man(3, aristotle).
but the problem is that if not every man were mortal, for example if only 1 out of 1000000 were, you would have to search a lot. It would be like searching for solutions to an equation by trying random numbers until you find one.
You could develop some sort of heuristic to find a solution close to the number, but that may affect the randomness (negatively).
I suspect that there is no way to do it more efficiently: you either have to calculate the set of solutions and pick one, or keep picking members of a superset of the solutions until you find one that is a solution. But don't take my word for it xd