I was wondering if someone could help me. I have done a lot of work on the 8-piece puzzle program, and now I want to extend it, but I am struggling.
So here is the board.
The idea is to get a piece, for instance P1, which currently resides at a1, to end up at b1. There can be up to 5 other pieces on the board, and each piece can only occupy one space and move one space at a time.
What I want to be able to do is calculate the moves needed to get, for example, P1 at position b1 to position a3 (or any finishing position, to be honest), given that it can only move one space at a time. No two pieces can occupy the same space, and no pieces can cross over to the other side if there is a piece in the t zone. How can I code it so that when I input something like
?- pmove([at(a1,p3),at(b1,p1),at(b3,p2)],
[at(a1,p1),at(b1,p3),at(b3,p2)],M).
which inputs the starting state of all the positions and where they should all end up, it will output something along the lines of:
M = [move(p3,a1,t),move(p1,b1,b2),move(p3,t,b1),
move(p1,b2,t),move(p1,t,a1)]
Showing all of the moves it took to get the piece from start to finish, as well as the moves the other pieces had to take to get to their positions. I believe it would be best to use breadth-first search for this, but I am not at all sure where to go from there.
This is an excellent problem and working through it will be an excellent way to improve at Prolog! So I commend your choice of problem.
This answer is a sketch, not a complete solution, and I hope that is sufficient for your needs.
What you want to do next is to write a predicate that expresses a single valid move from one state to another, something that looks like this:
singlemove(StartBoardState, Move, NextBoardState) :- ...
So we're going to regard lists of at(Place, Piece) as a board state, and structures like move(Piece,From,To) as a move; the former will unify with StartBoardState and NextBoardState and the latter with Move.
The rules of your game should be encoded in singlemove/3. You could think of it as the relation between a given board state, each valid move from that state, and the resulting board state.
I think once you have this, at least one inefficient way to solve your problem will become apparent to you, using a brute-force search. Once you have that working (slowly, possibly only for two-move games), you can begin to see how to improve the performance by making the search more intelligent.
How to Implement singlemove/3
From the question, the rules of this game are:
...it can only move 1 space at a time. No two pieces can occupy the same space, and no pieces can cross over to the other side if there is a piece in the t zone.
So first, let's state some facts about the board. What are the spaces called?
boardspace(b1).
boardspace(b2).
boardspace(b3).
boardspace(t).
boardspace(a1).
boardspace(a2).
boardspace(a3).
Now we need some positional information.
:- op(300, xfx, left_of).
:- op(300, xfx, right_of).
b1 left_of b2.
b2 left_of b3.
a1 left_of a2.
a2 left_of a3.
row(b1, upper).
row(b2, upper).
row(b3, upper).
row(a1, lower).
row(a2, lower).
row(a3, lower).
X right_of Y :- Y left_of X.
adjacent(X, Y) :- X left_of Y.
adjacent(X, Y) :- X right_of Y.
adjacent(t, X) :- boardspace(X), X \= t.
adjacent(X, t) :- boardspace(X), X \= t.
I'm not sure yet if I'm going to need all of this, but this seems like a plausible start. Now let's address the rules.
There are thus four rules to the game:
Only one piece may move per turn.
A piece can only move one space per turn.
No two pieces can occupy the same space.
No pieces can "cross over to the other side" if there is a piece in the t-zone.
I feel like #1 is handled adequately by having a predicate singlemove/3 at all. We call it once, we get one move.
For #2, we can construct the list of nearby spaces based on what is adjacent to the piece right now. Assuming p1 is to move and I have a board as defined above, member(at(StartLocation, p1), Board), adjacent(StartLocation, EndLocation) will unify EndLocation with the places that p1 can move to. Let's try it:
?- Board = [at(a1,p3),at(b1,p1),at(b3,p2)],
member(at(StartLocation, p1), Board),
adjacent(StartLocation, EndLocation).
Board = [at(a1, p3), at(b1, p1), at(b3, p2)],
StartLocation = b1,
EndLocation = b2 ;
Board = [at(a1, p3), at(b1, p1), at(b3, p2)],
StartLocation = b1,
EndLocation = t ;
false.
So this seems correct; the adjacent locations to b1 are b2 and t.
Now let's codify the next rule, no two pieces can occupy the same space.
unoccupied(Board, Location) :-
\+ member(at(Location, _), Board).
Now we can combine these two things into a good start at singlemove/3:
singlemove(Board,
move(Piece,StartLocation,EndLocation),
[at(EndLocation,Piece)|RemainingLocations]) :-
select(at(StartLocation,Piece), Board, RemainingLocations),
adjacent(StartLocation, EndLocation),
unoccupied(Board, EndLocation).
Let's try it:
?- Board = [at(a1,p3),at(a2,p1),at(a3,p2)],
singlemove(Board, Move, NextBoard).
Board = [at(a1, p3), at(a2, p1), at(a3, p2)],
Move = move(p3, a1, t),
NextBoard = [at(t, p3), at(a2, p1), at(a3, p2)] ;
Board = [at(a1, p3), at(a2, p1), at(a3, p2)],
Move = move(p1, a2, t),
NextBoard = [at(t, p1), at(a1, p3), at(a3, p2)] ;
Board = [at(a1, p3), at(a2, p1), at(a3, p2)],
Move = move(p2, a3, t),
NextBoard = [at(t, p2), at(a1, p3), at(a2, p1)] ;
false.
So what's interesting about this? I'm using select/3 to chop the list into candidates and remainders. I'm building the results in the head of the clause. But otherwise, I'm really just taking your rules, translating them into Prolog, and listing them. So you can see you just need one more predicate to implement your fourth rule and it will go right after unoccupied/2, to further filter out invalid moves.
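For instance (and this is only a sketch of one possible reading of that rule, using a hypothetical predicate name crossing_allowed/3 and assuming that "crossing over to the other side" means moving between the lower a-row and the upper b-row), you might write something like:

% Hypothetical rule #4: a move between the two rows is only allowed
% while the t zone is empty. Moves into or out of t itself, and moves
% within a single row, are not restricted by this rule.
crossing_allowed(_Board, t, _To).
crossing_allowed(_Board, _From, t).
crossing_allowed(_Board, From, To) :-
    row(From, Row),
    row(To, Row).
crossing_allowed(Board, From, To) :-
    row(From, FromRow),
    row(To, ToRow),
    FromRow \= ToRow,
    unoccupied(Board, t).

You would then add crossing_allowed(Board, StartLocation, EndLocation) as the last goal of singlemove/3, right after unoccupied/2. Note that with the adjacency facts above the two rows are only connected through t, so the last clause may never fire; whether you need it depends on how the real board connects the two sides.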
Hopefully, you get the gist of the process. My data model may be too much, but then again, you may find you need more of it than it seemed at first. And the search strategy is weak. But this is the underpinning of the overall solution, the base case. The inductive case will be interesting—I again suggest you try it with the built-in depth-first strategy and see how horrible it is before resorting to BFS. You will probably want to use trace/0 and see when you get stuck in a trap and how you can circumvent it with better explicit reasoning. But I think this is a good start and I hope it's helpful to you.
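To give you a flavour of what the inductive case might look like, here is a minimal sketch (the names same_state/2, reach/3 and solve/3 are mine, and it assumes the singlemove/3 above; it is not a true breadth-first search but iterative deepening, which also returns a shortest move list first, though it will loop forever if no solution exists):

% Two boards describe the same state if they contain the same at/2
% terms, regardless of order.
same_state(S1, S2) :-
    msort(S1, Sorted),
    msort(S2, Sorted).

% reach(Board, Goal, Moves): Moves is a list of single moves that
% transforms Board into Goal.
reach(Board, Goal, []) :-
    same_state(Board, Goal).
reach(Board, Goal, [Move|Moves]) :-
    singlemove(Board, Move, NextBoard),
    reach(NextBoard, Goal, Moves).

% Enumerating candidate move lists by increasing length keeps Prolog's
% depth-first search from diving down an endless branch and makes it
% find a shortest solution first.
solve(Start, Goal, Moves) :-
    length(Moves, _),
    reach(Start, Goal, Moves).

A query like solve([at(a1,p3),at(b1,p1),at(b3,p2)], [at(a1,p1),at(b1,p3),at(b3,p2)], M). should then produce a move list much like the one in the question, though possibly not the identical one, and only slowly, which is where a smarter search (a real BFS with a visited set) comes in.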
I am dealing with a problem which is a variant of a subset-sum problem, and I am hoping that the additional constraint could make it easier to solve than the classical subset-sum problem. I have searched for a problem with this constraint but I have been unable to find a good example with an appropriate algorithm either on StackOverflow or through googling elsewhere.
The problem:
Assume you have two lists of positive numbers A1, A2, A3, ... and B1, B2, B3, ... with the same number of elements N. There are two target sums, Sa and Sb. The problem is to find a single index set Q such that |sum(A{Q}) - Sa| <= epsilon and |sum(B{Q}) - Sb| <= epsilon. So, if Q is {1, 5, 7}, then A1 + A5 + A7 - Sa <= epsilon and B1 + B5 + B7 - Sb <= epsilon. Epsilon is an arbitrarily small positive constant.
Now, I could solve this as two completely separate subset sum problems, but removing the simultaneity constraint results in the possibility of erroneous solutions (where Qa != Qb). I also suspect that the additional constraint should make this problem easier than the two NP complete problems. I would like to solve an instance with 18+ elements in both lists of numbers, and most subset-sum algorithms have a long run time with this number of elements. I have investigated the pseudo-polynomial run time dynamic programming algorithm, but this has the problems that a) the speed relies on a short bit-depth of the list of numbers (which does not necessarily apply to my instance) and b) it does not take into account the simultaneity constraint.
Any advice on how to use the simultaneity constraint to reduce the run time? Is there a dynamic programming approach I could use to take into account this constraint?
If I understand your description of the problem correctly (I'm confused about why you have the distance symbols around "sum (A{Q}) - Sa" and "sum (B{Q}) - Sb"; it doesn't seem to fit the rest of the explanation), then it is NP-hard.
You can see this by making a reduction from Subset sum (SUB) to Simultaneous subset sum (SIMSUB).
If you have a SUB problem consisting of a set X = {x1,x2,...,xn} and a target called t, and you have an algorithm that solves SIMSUB when given two sets A = {a1,a2,...,an} and B = {b1,b2,...,bn}, two integers Sa and Sb, and a value for epsilon, then we can solve SUB like this:
Let A = X and let B be a list of n zeros. Set Sa = t, Sb = 0 and epsilon = 0. You can now run the SIMSUB algorithm on this instance and get the solution to your SUB problem.
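To make the construction concrete, here is a small made-up instance: the SUB instance X = {3, 5, 8} with target t = 11 becomes the SIMSUB instance A = {3, 5, 8}, B = {0, 0, 0}, Sa = 11, Sb = 0, epsilon = 0. The index set Q = {1, 3} solves the SIMSUB instance (3 + 8 = 11 and 0 + 0 = 0), and it is exactly a solution of the original SUB instance.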
This shows that SIMSUB is at least as hard as SUB and is therefore NP-hard.
I am reading the book Introduction to the Theory of Computation and got stuck on this example.
Convert a DFA to an equivalent regular expression by first converting it to a GNFA (generalized nondeterministic finite automaton) and then converting the GNFA to a regular expression.
Here is the example:
[figure from the book: the DFA to be converted]
I should use this recursively to arrive at the fourth state:
[figure from the book: the GNFA conversion stages]
Unfortunately, I cannot understand what is going on from (b) to (c). I only understand that we are trying to get rid of state 2, but how do we arrive at (c) from (b)?
Thank you very much!
This can be quite tricky at first, but I suggest you check definition 1.64 and the function CONVERT(G) for more clarity. As a brief explanation, using the function for each possible pair of neighbouring states:
First, from (a) to (b), add a new start state and a new accept state;
Afterwards you need to calculate each new label after qrip is removed, in this case state 1;
So, from the start state to q2, concatenating epsilon and a gives just the label a;
The same goes from the start state to q3, resulting only in b;
Now, from q2 to q2 going through qrip, you have label a to qrip and label a to get back, so together with the existing self-loop b you get (aa U b);
The same goes for q3 to q3 through qrip, resulting in bb; notice that there is no self-loop on q3, so there is no union;
Now, from q2 to q3 through qrip, you only need to concatenate a and b, resulting in the label ab;
Lastly, the other way around: from q3 to q2 going through qrip, concatenate b and a, resulting in ba, but this time take the union with the existing label between q3 and q2;
Now choose a new qrip and proceed to do the same algorithm again.
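All of the steps above are instances of the same label rule used by CONVERT (this is the standard state-elimination formula; do check it against the book's definition): when qrip is removed, the new label from a state qi to a state qj is

(R(qi, qrip)) (R(qrip, qrip))* (R(qrip, qj)) U (R(qi, qj))

that is, "enter qrip, loop on qrip any number of times, leave qrip", unioned with whatever label already ran directly from qi to qj. In the example above qrip has no self-loop, so the starred part disappears, and where there is no pre-existing direct label the union is simply dropped (as in the bb case).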
Hope the explanation was clear enough, but as said before refer to the algorithm in the book for a better and more detailed explanation.
The two popular methods for converting a given DFA to its regular expression are:
Arden’s Method
State Elimination Method
Arden’s Theorem states that:
If P and Q are two regular expressions over ∑, and P does not contain the empty string ε, then the equation R = Q + RP has the unique solution R = QP*.
To use Arden’s Theorem, the following conditions must be satisfied:
The transition diagram must not have any ε-transitions.
There must be only a single initial state.
Step-01:
Form an equation for each state considering the transitions which come towards that state.
Add ε to the equation of the initial state.
Step-02:
Solve the equations and bring the equation of the final state into the form R = Q + RP.
If P does not contain the empty string ε, then
R = Q + RP has the unique solution R = QP*, which is the required regular expression.
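As a tiny made-up example of the method (not taken from any particular textbook): take an automaton with start state q1 and final state q2, a transition from q1 to q2 on a, and a self-loop on q2 on b. The state equations are

q1 = ε
q2 = q1.a + q2.b

Substituting q1 gives q2 = a + q2.b, which has the form R = Q + RP with Q = a and P = b, so by Arden's Theorem q2 = a.b* = ab*, and that is the regular expression for the automaton.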
I have some code in GNU MathProg for an energy model:
s.t.EBa1_RateOfFuelProduction1{r in REGION, l in TIMESLICE, f in FUEL, t in TECHNOLOGY, m in MODE_OF_OPERATION, y in YEAR: OutputActivityRatio[r,t,f,m,y] <> 0}:
RateOfActivity[r,l,t,m,y]*OutputActivityRatio[r,t,f,m,y] = RateOfProductionByTechnologyByMode[r,l,t,m,f,y];
s.t.EBa4_RateOfFuelUse1{r in REGION, l in TIMESLICE, f in FUEL, t in TECHNOLOGY, m in MODE_OF_OPERATION, y in YEAR: InputActivityRatio[r,t,f,m,y]<>0}:
RateOfActivity[r,l,t,m,y]*InputActivityRatio[r,t,f,m,y] = RateOfUseByTechnologyByMode[r,l,t,m,f,y];
I want to put these two constraints into one, and I am thinking of inserting two conditional expressions (if). The first if would apply to the technologies (t) and fuels (f) where OutputActivityRatio <> 0, and the second one would then, for the same technology (t), check the fuels (f) again to see whether InputActivityRatio <> 0.
Like this:
s.t.RateOfProduction{r in REGION, l in TIMESLICE, f in FUEL, t in TECHNOLOGY, m in MODE_OF_OPERATION, y in YEAR: OutputActivityRatio[r,t,f,m,y] <>0}:
RateOfActivity[r,l,t,m,y]*OutputActivityRatio[r,t,f,m,y] = RateOfProductionByTechnologyByMode[r,l,t,m,f,y]
If InputActivityRatio[r,t,ff,m,y]<>0 then
RateOfActivity[r,l,t,m,y]*InputActivityRatio[r,t,f,m,y] = RateOfUseByTechnologyByMode[r,l,t,m,f,y]
else 0
else 0 ;
My question is: is it possible to have two ifs in series (nested ifs), with an equation between them as well? How can I write something like that?
Thank you very much!
As described in your other question (regarding nested if-then-else in MathProg), there are no if-then-else statements in MathProg. The workaround with conditional for-loops is also not a solution for your problem, since you can only use them in pre- or post-processing of your data (you can't use them in your constraints!).
But there are still ways to merge your constraints. I think something like the following would work, if your condition is that either the input ratio or the output ratio is 0.
s.t.RateOfProduction{r in REGION, l in TIMESLICE, f in FUEL, t in TECHNOLOGY, m in MODE_OF_OPERATION, y in YEAR}:
(RateOfActivity[r,l,t,m,y]*OutputActivityRatio[r,t,f,m,y])
+ (RateOfActivity[r,l,t,m,y]*InputActivityRatio[r,t,f,m,y])
= RateOfProductionByTechnologyByMode[r,l,t,m,f,y];
Here, one of the two products on the left-hand side would be zero.
Since I don't know which parts are variables and which are parameters, this solution could also fail (for example, it could be problematic if there is input and output at the same time and the rest of the model doesn't contain the right bounds for that).
For a university project I'm trying to model the Chinese game of Go (http://en.wikipedia.org/wiki/Go_%28game%29) in Alloy (I'm using version 4.2).
I managed to write the base structure. Go is played on a 9 x 9 board, but I'm using a smaller 3 x 3 board so that it checks faster.
The board is made of crosses which can either be empty or occupied by black or white stones.
abstract sig Colour {}
one sig White, Black, Empty extends Colour {}
abstract sig Cross {
Status: one Colour,
near: some Cross,
group: lone Group
}
one sig C11, C12, C13,
C21, C22, C23,
C31, C32, C33 extends Cross {}
sig Group {
stones : some Cross,
freedom : some Cross
}
pred closeStones {
near=
C11->C12 + C11->C21 +
C12->C11 + C12->C13 + C12->C22 +
C13->C12 + C13->C23 +
C21->C22 + C21->C11 + C21->C31 +
C22->C21 + C22->C23 + C22->C12 + C22->C32 +
C23->C22 + C23->C13 + C23->C33 +
C31->C32 + C31->C21 +
C32->C31 + C32->C33 + C32->C22 +
C33->C32 + C33->C23
}
fact stones2 {
all g : Group |
all c : Cross |
(c.group=g) iff c in g.stones
}
fact noGroup{
all c : Cross | (c.Status=Empty) iff c.group=none
}
fact groupNearStones {
all disj c,d : Cross |
((d in c.near) and c.Status=d.Status)
iff
d.group=c.group
}
The problem is: following the rules of Go, every stone must be considered part of a group. A group is made of all the adjacent stones of the same colour.
My fact "groupNearStones" should be sufficient to describe that condition, but this way I can't get groups made of more than 3 stones.
I've tried rewriting it in different ways, but either the analyzer says it found "0 variables" or it groups up all the stones with the same status, regardless of whether they're near each other or not.
If you could give me any insight I would be grateful, since I've been racking my brain over this seemingly simple matter for days.
Ask yourself two questions.
First: in Go, what constitutes a group? You say yourself: it is a set of adjacent stones with the same color. Note that it is not the case that every stone in the group must be adjacent to every other; it suffices for every stone to be adjacent to some other stone in the group.
So from a formal point of view: given a stone S, the set of stones in the same group as S is the set of stones reachable from S through the relation same_color_and_adjacent, i.e. the reflexive transitive closure S.*same_color_and_adjacent.
Second: what constitutes being the same color and adjacent? I think you can define this easily, with what you have.
On a side issue; you may find it easier to scale the model to arbitrary sizes of boards if you reify the notion of rows and columns.
I hope this helps.
[Addendum:] Apparently it doesn't help enough. I'll try to be a bit more explicit, but I want the full solution to come from you and not from me.
Note that the point of defining a relation like same_color_and_adjacent is not to eliminate the formulation of facts or predicates in your model, but to make them easier to write and to write correctly. It's not magic.
Consider first a reformulation of your fact groupNearStones in terms of a single relation that holds for pairs of stones which are adjacent and have the same color. The relation can be defined by modifying your declaration for Cross:
abstract sig Cross {
Status: one Colour,
near: some Cross,
group: lone Group,
near_and_similar : set Cross
}{
near_and_similar = near & { c : Cross | c.@Status = Status }
}
Now your existing fact can be written as:
fact groupNearStones2 {
all disj c,d : Cross |
d in c.near_and_similar
iff
d.group=c.group
}
Actually, I would write both versions of groupNearStones as predicates, not facts. That would allow you to check that the new formulation is really equivalent to the old one by running a check like:
pred GNS_equal_GNS2 {
groupNearStones iff groupNearStones2
}
(I have not run such a check; I'm being a little lazy.)
Now, let us consider the problems you mention:
You never get groups containing more than three stones. Actually, given the formulation of groupNearStones, I'm surprised you get groups with more than two. Consider what groupNearStones says: any two stones in a group are adjacent and have the same color. Draw a board on a piece of paper and draw a group of five stones. Now ask whether such a group satisfies the fact groupNearStones. Say the group is C11, C12, C13, C21, C22. What does groupNearStones say about the pair C21, C13?
Do you see the problem? Are the relations near and 'close enough to be in the same group' really the same? If they are not the same, are they related?
Hint: think about transitive closure.
You never get groups containing a single stone.
How surprising is this, given that groupNearStones says that c.group = d.group only if c and d are disjoint? If you never get single-stone groups, then every stone that should be a single-stone group is not classed as being in any group at all, since such a stone must not satisfy the expression s.group = s.group.
Do you see the problem?
Hint: think about reflexive transitive closure.
In a model I started to sketch in Alloy the other day, I get the following message when I attempt to find an instance of a particular predicate:
Translation capacity exceeded.
In this scope, universe contains 34 atoms
and relations of arity 12 cannot be represented.
Visit http://alloy.mit.edu/ for advice on refactoring.
Any suggestions of where on the site alloy.mit.edu to look? I'm not finding anything with an obvious label like "Refactoring models that exceed translation capacity".
That's the essential question.
[Postscript: the cause of my problem appears to have been a bad initial formulation of the quantified variable declarations I was using in a predicate; the problem went away once I got the syntax of the declarations right. The full details are not instructive enough to be worth keeping on record, so I'm dropping the original description of the specifics. The short version is: to elicit the instantiation of a particular concrete example, I initially wrote a predicate of the form
pred m {
one t1 : table,
r1, r2, r3 : row,
c1, c2 : column,
c11, c21 : headingcell,
c12, c22, c13, c23 : datacell | {
... // description of the example here
}
}
The one scopes all twelve variables, and is [I am told on good authority] translated internally into a set comprehension defined by a relation of arity 12. What I wanted to say was something more like the following, which does not raise the translation-capacity issue:
pred m {
some t1 : table |
some disj r1, r2, r3 : row |
some disj c1, c2 : column |
some disj c11, c21 : headingcell |
some disj c12, c22, c13, c23 : datacell | {
...
}
}
So: one way to fix some models which elicit the translation-capacity error message is to clean up the quantification of variables.
The basic question, however, retains its interest: when a model elicits the translation-capacity error message and the quantifiers are already clean and correct, is there a document to read?]
The kind of refactoring needed in this case is unlikely to be a simple syntactic one. Rather, here it means restructuring the model so that it doesn't use a relation of such high arity. In your example above, I can't really see which relation has arity 12. If you post (or send me) a self-contained model, I can look at it, identify the problematic relation, and maybe even suggest how to avoid it.