How to solve this Breadth-First Search question

So my professor in college gave us a quiz using Breadth-First Search, and I answered this question as follows:
S -> {A,B} -> {C,D,E} -> {G2}
Since we found the goal, we stop, so the answer is (d) (otherwise).
However, the professor gave the answer as (b).
Can anyone explain why, and how to solve this type of question?

Ah, I just saw that you ended at G2 as opposed to G1. (At first I thought the problem was that you wanted to return the answer as a list of lists.)
I would agree with you that the answer is (D). G2 is discovered as a neighbor of D, before G1 is discovered as a neighbor of E.
I don't see how one would arrive at G1 first, unless doing something weird like sorting on insertion into the queue.
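For reference, here is a minimal Python sketch of BFS with a plain FIFO queue; the graph below is a hypothetical adjacency list consistent with the expansion S -> {A,B} -> {C,D,E}, since the quiz diagram isn't reproduced here. It shows that which goal is reached first depends purely on the order in which nodes enter the queue:

from collections import deque

def bfs(graph, start, goals):
    # Return the first goal dequeued by a plain FIFO breadth-first search.
    queue = deque([start])
    visited = {start}
    while queue:
        node = queue.popleft()
        if node in goals:
            return node
        for neighbour in graph.get(node, []):  # expand neighbours in the listed order
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    return None

# Hypothetical edges matching the expansion in the question.
graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["E"], "D": ["G2"], "E": ["G1"]}
print(bfs(graph, "S", {"G1", "G2"}))  # -> 'G2', because D is expanded before E

Unless ties are broken by sorting or by some stated node ordering, a FIFO queue reaches G2 first here, which supports answer (d).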

Related

Reversing a list, evaluation order

I am reading page 69 of Haskell School of Expression and I am not sure that I got the evaluation of rev [1,2,3,4] right.
Hudak does not explain the evaluation (rewriting) order for reverse in detail in his book.
Could someone please either confirm that my guess (shown in the attached picture) is correct, or point out what I got wrong? I believe it is correct, but I am not 100% sure, which is why I am asking.
So the question is:
when I evaluate one step of reverse, then after the evaluation (i.e. rewriting) the result should be surrounded by parentheses, right?
If I understand correctly, this unlucky placement of parentheses is the reason for the poor (read: quadratic) time complexity of reverse. In this example, 6 steps in total are spent on list appending in order to reverse a 4-element list.
Yes, nested, left-associative calls to append (which in Haskell goes by the names (++) and (<>)) generate poor performance on singly-linked lists.
There are several solutions to this problem, since it's been known about for 30 or 40 years, at least. I believe the library version of reverse uses an accumulator to achieve linear complexity rather than quadratic, but it's still not something you want to call frequently on lists.
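As a rough illustration of the accumulator trick, here is a sketch in Python with nested pairs standing in for cons cells (the discussion is about Haskell's singly-linked lists): the naive version rebuilds a growing list on every step, while the accumulating version does one cons per element.

# Cons lists modelled as nested pairs (head, tail), with None as the empty list.

def append(xs, ys):
    # Naive append: O(length of xs), copies every cell of xs.
    return ys if xs is None else (xs[0], append(xs[1], ys))

def reverse_naive(xs):
    # Quadratic reverse: each step appends a single element to the end of a growing list.
    return None if xs is None else append(reverse_naive(xs[1]), (xs[0], None))

def reverse_acc(xs, acc=None):
    # Linear reverse: cons each head onto an accumulator, O(1) work per element.
    return acc if xs is None else reverse_acc(xs[1], (xs[0], acc))

xs = (1, (2, (3, (4, None))))  # the list [1,2,3,4]
assert reverse_naive(xs) == reverse_acc(xs) == (4, (3, (2, (1, None))))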

Confusion regarding the optimal solution in rod cutting

I am talking about the famous rod cutting problem in CLRS.
Two optimal equations are given:
1: r[n] = max(p_n, r_1 + r_{n-1}, ..., r_{n-1} + r_1);
2: r[n] = max(p_1 + r_{n-1}, p_2 + r_{n-2}, ..., p_n + r_0);
I have been confused for a while regarding why the 2nd equation is correct.
Suppose p_k + r_{n-k} is the max value; is it possible that there exists an r_k such that
r_k + r_{n-k} > p_k + r_{n-k}?
In such a case, the above 2nd equation is not correct.
Any help?
I don't know how to answer it properly. I had the same confusion, so I searched around and finally got here. I don't see any answers, so I think either we are too dumb or nobody has understood the solution properly. However, I came across this link: confusion about rod cutting algorithm - dynamic programming. The second explanation there makes some sense to me.
What he says is: for any possible solution, the max will always contain some p_i that is given in the price array, so we can include it in the solution directly. In recurrence 1, r[n] = max(p_n, r_1 + r_{n-1}, ..., r_{n-1} + r_1), the solution might lie in r_2 + r_{n-2}, but in the 2nd recurrence that same solution is still covered, e.g. by p_1 + r_{n-1} or p_2 + r_{n-2}, because the first piece of any optimal cut is some whole piece with price p_i. Let me know if you find a clearer answer.
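As a sanity check on the second recurrence, here is a small bottom-up DP sketch in Python (the prices are just example values, not taken from the question):

def rod_cutting(prices):
    # Bottom-up DP for r[n] = max over 1 <= i <= n of (p_i + r[n - i]), with r[0] = 0.
    n = len(prices)            # prices[i - 1] is p_i, the price of a piece of length i
    r = [0] * (n + 1)
    for length in range(1, n + 1):
        r[length] = max(prices[i - 1] + r[length - i] for i in range(1, length + 1))
    return r[n]

print(rod_cutting([1, 5, 8, 9, 10, 17, 17, 20]))  # -> 22, from cutting the rod into 2 + 6

Every cut pattern is reachable this way: whatever the optimal solution is, its first piece has some length i, contributing p_i, and the rest is an optimal solution for length n - i, contributing r[n - i].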

A reverse inference engine (find a random X for which foo(X) is true)

I am aware that languages like Prolog allow you to write things like the following:
mortal(X) :- man(X). % All men are mortal
man(socrates). % Socrates is a man
?- mortal(socrates). % Is Socrates mortal?
yes
What I want is something like this, but backwards. Suppose I have this:
mortal(X) :- man(X).
man(socrates).
man(plato).
man(aristotle).
I then ask it to give me a random X for which mortal(X) is true (thus it should give me one of 'socrates', 'plato', or 'aristotle' according to some random seed).
My questions are:
Does this sort of reverse inference have a name?
Are there any languages or libraries that support it?
EDIT
As somebody below pointed out, you can simply ask mortal(X) and it will return all X, from which you can pick a random one. What if, however, that list were very large, perhaps in the billions? Obviously, in that case it wouldn't do to generate every possible result before picking one.
To see how this would be a practical problem, imagine a simple grammar that generated a random sentence of the form "adjective1 noun1 adverb transitive_verb adjective2 noun2". If the lists of adjectives, nouns, verbs, etc. are very large, you can see how the combinatorial explosion is a problem. If each list had 1000 words, you'd have 1000^6 possible sentences.
Instead of Prolog's depth-first search, a randomized depth-first search strategy could easily be implemented. All that is required is to randomize the program flow at choice points, so that every time a disjunction is reached a random branch of the search tree (= Prolog program) is selected instead of the first one.
Note, though, that this approach does not guarantee that all solutions will be equally probable. To guarantee that, you would need to know in advance how many solutions each branch generates, so that the randomization can be weighted accordingly. A rough sketch of the idea, applied to the sentence grammar from the question, follows.
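In Python, with made-up word lists (in the real problem each could hold ~1000 entries):

import random

adjectives = ["red", "lazy", "ancient"]
nouns = ["dog", "philosopher", "rock"]
adverbs = ["quietly", "boldly"]
verbs = ["greets", "follows"]

def random_sentence():
    # Resolve each choice point by picking one branch at random, instead of
    # enumerating all adjective x noun x adverb x verb x adjective x noun combinations.
    return " ".join([
        random.choice(adjectives), random.choice(nouns),
        random.choice(adverbs), random.choice(verbs),
        random.choice(adjectives), random.choice(nouns),
    ])

print(random_sentence())

Here every slot is independent, so uniform choices at each choice point do give a uniform sentence; with branches that yield different numbers of solutions, the weighting caveat above applies.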
I've never used Prolog or anything similar, but judging by what Wikipedia says on the subject, asking
?- mortal(X).
should list everything for which mortal is true. After that, just pick one of the results.
So to answer your questions,
I'd go with "a query with a variable in it"
From what I can tell, Prolog itself should support it quite fine.
I don't think you can calculate the nth solution directly, but you can compute the first n solutions (with n picked at random) and take the last one. Of course, this would be problematic if n = 10^(big_number)...
You could also do something like
mortal(X) :- man(X).
% random(1, 4, ID) binds ID to 1, 2 or 3 (the upper bound is exclusive in SWI-Prolog)
man(X) :- random(1, 4, ID), man(ID, X).
man(1, socrates).
man(2, plato).
man(3, aristotle).
but the problem is that if not every man were mortal, for example if only 1 out of 1,000,000 were mortal, you would have to search a lot. It would be like searching for solutions to an equation by trying random numbers until you find one.
You could develop some sort of heuristic to find a solution close to the random number, but that may negatively affect the randomness.
I suspect that there is no way to do it more efficiently: you either have to calculate the set of solutions and pick one, or keep picking members of the superset of all solutions until you hit a solution. But don't take my word for it.
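For what it's worth, if the solutions can be produced one by one (e.g. streamed out of a backtracking search) rather than stored as a list, reservoir sampling picks one uniformly at random in a single pass; it still visits every solution, it only avoids holding them all in memory. A minimal Python sketch, assuming a hypothetical stream of solutions:

import random

def pick_uniform(solutions):
    # Reservoir sampling with k = 1: keep the i-th streamed solution with probability 1/i,
    # which gives every solution an equal chance without materializing the whole list.
    chosen = None
    for i, solution in enumerate(solutions, start=1):
        if random.randrange(i) == 0:
            chosen = solution
    return chosen

print(pick_uniform(iter(["socrates", "plato", "aristotle"])))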

A math modelling interview question

What's the best way to (relative) grade a class (of 50 students) on a test (with 7 questions)?
They did not want the traditional percentile-intervals answer, but a more CS-ey one.
It's a pretty open-ended question; they asked us to assume the following framework:
Input
[m_1,...,m_50], where each m_i is a 7-vector for marks scored in the 7 questions for each of the 50 students.
[c_1,...,c_7], where each c_i is a vector of 'concepts' tested by question i. The c_i's need not be disjoint. We can assume an importance ordering amongst the elements of union(c_i).
Simplistic approach: Assuming that all concepts have the same value, I would just sum everything up: one point for each concept everywhere.
Holistic approach: It could be that a question with more concepts is significantly harder than a question with fewer (and worth more than the sum of its concepts), because concepts "interact" with each other. To account for this, I would assign a value of (N over C) to each question, where N is the size of that question's concept vector and C is the total number of concepts, and then sum it all up.
True holistic approach: If concepts are repeated in different questions, then we should "tone down" their influence. However, I'm not sure how to accomplish this. Maybe we should divide each (N over C) value by the number of repetitions of each concept involved; a rough sketch of this is given below.
I ignored the importance ordering of concepts, because I don't know how to put a value on that.
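Here is that sketch in Python, reading "(N over C)" as the ratio N/C and damping repeated concepts by their repetition count; the data is made up and much smaller than 50 students x 7 questions:

from collections import Counter

def grade(marks, concepts):
    # Each question's weight is the sum, over its concepts, of 1 / (number of questions
    # repeating that concept), divided by the total number of distinct concepts.
    all_concepts = set().union(*concepts)
    repetitions = Counter(c for question in concepts for c in question)
    weights = [
        sum(1.0 / repetitions[c] for c in question) / len(all_concepts)
        for question in concepts
    ]
    return [sum(w * m for w, m in zip(weights, m_i)) for m_i in marks]

concepts = [{"recursion", "trees"}, {"trees"}]  # c_1, c_2
marks = [[1, 0], [1, 1], [0, 1]]                # one marks vector per student
print(grade(marks, concepts))                   # -> [0.75, 1.0, 0.25]

With no repeated concepts this reduces to the N/C weighting of the holistic approach; the importance ordering of concepts is still ignored, as in the proposals above.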

Alpha-Beta cutoff

I understand the basics of this search; however, the beta cut-off part is confusing me. When the value computed by alphabeta reaches beta (i.e. value >= beta), I can either return beta, break, or continue the loop.
return beta doesn't seem to work properly at all; it returns the wrong player's move for a different state of the board (further into the search tree).
break seems to work correctly; it is very fast, but it seems a bit TOO fast.
continue is a lot slower than break, but it seems more correct... I'm guessing this is the right way, but the pseudocode I find on Google all uses 'break', and because it is pseudocode I'm not sure what exactly they mean by 'break'.
Just for the fun of it I'm going to guess that you're talking about Minimax with Alpha-Beta cutoff, where
ALPHA-BETA cutoff is a method for reducing the number of nodes explored in the Minimax strategy. For the nodes it explores it computes, in addition to the score, an alpha value and a beta value.
Here is a page that describes this method and also provides a link to a C program that implements it. Hopefully something here helps you with your problem; if I'm totally off with my guess, please give more detail in your question.
function MINIMAX(N) is
begin
    if N is a leaf then
        return the estimated score of this leaf
    else
        Let N1, N2, .., Nm be the successors of N;
        if N is a Min node then
            return min{MINIMAX(N1), .., MINIMAX(Nm)}
        else
            return max{MINIMAX(N1), .., MINIMAX(Nm)}
end MINIMAX;
Beta cutoffs occur when the branch you are currently searching is better for your opponent than one you've already searched. It was once explained to me as follows:
Suppose you are fighting with your enemy, and you are considering a number of choices.
After fully searching the best possible outcome of your first choice (throwing a punch), you determine that the result is your opponent eventually poking you in the eye. We'll call this beta... the best your opponent can do so far. Obviously, you would like to find a result that does better.
Now we consider your next option (running away in disgrace). When exploring your opponent's first possible reply, we find that the best possible outcome is that you are shot in the back with a gun. This is where a beta cutoff is triggered... we stop searching the rest of your opponent's moves and return beta, because we really don't care if, in searching his other replies, you find that he can also nuke you... you would already opt for the poke in the eye from the previous option.
Now, specifically, what this means is that your program should return beta... if that doesn't work, you should compare your implementation against a reference alpha-beta search algorithm.
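Not the questioner's code, but a minimal fail-hard alpha-beta sketch in Python over a made-up game tree, to show what the pseudocode's 'break' amounts to: once value >= beta, the remaining children are skipped and beta is returned, which is exactly 'break out of the loop and then return beta'. A plain 'continue' explores the pruned children as well, so it returns the same value, only much more slowly.

def alphabeta(node, alpha, beta, maximizing):
    # Fail-hard alpha-beta. A node is either a number (a leaf score from the maximizing
    # player's point of view) or a list of child nodes.
    if not isinstance(node, list):
        return node
    if maximizing:
        value = alpha
        for child in node:
            value = max(value, alphabeta(child, value, beta, False))
            if value >= beta:   # beta cutoff: the opponent will never let play reach here
                return beta     # equivalent to 'break' followed by returning beta
        return value
    else:
        value = beta
        for child in node:
            value = min(value, alphabeta(child, alpha, value, True))
            if value <= alpha:  # alpha cutoff, the symmetric case
                return alpha
        return value

tree = [[3, 5], [2, [9, 1]], [4, 6]]  # made-up tree; leaves are scores for the maximizer
print(alphabeta(tree, float("-inf"), float("inf"), True))  # -> 4

If 'return beta' gives wrong moves in your program, one common culprit is how the best move (as opposed to the score) is tracked at the root, rather than the cutoff itself.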
