Keywords "some" and "one" before higher-order quantification - Alloy

I have an Alloy specification representing model transformation rules. In the specification, I use higher-order quantification to specify rule matching. One strange thing is that the Analyzer behaves differently with "some" and "one", and I cannot understand why.
For example, in the pred rule_enter[trans:Trans] (see line 240), I use two higher-order quantifications to encode the matching of the left- and right-hand sides of a graph transformation rule.
*********************EXAMPLE**************************************
some e_check0: Acheck & trans.darrows,
     e_TP0: ATP & (trans.source.arrows - trans.darrows),
     e_PF10: APF1 & trans.darrows,
     e_TR0: ATR & (trans.source.arrows - trans.darrows),
     e_F1R0: AF1R & trans.darrows |
  let n_P0 = e_check0.src, n_T0 = e_TP0.src, n_R0 = e_TR0.trg, n_F10 = e_PF10.trg |
    (n_P0 = e_check0.trg and n_P0 = e_TP0.trg and n_P0 = e_PF10.src and
     n_T0 = e_TR0.src and n_F10 = e_F1R0.src and n_R0 = e_F1R0.trg and
     n_F10 in NF1 & trans.dnodes and
     n_P0 in NP & (trans.source.nodes - trans.dnodes) and
     n_T0 in NT & (trans.source.nodes - trans.dnodes) and
     n_R0 in NR & (trans.source.nodes - trans.dnodes))

some e_crit0: Acrit & trans.aarrows,
     e_TP0: ATP & (trans.source.arrows - trans.darrows),
     e_PF20: APF2 & trans.aarrows,
     e_TR0: ATR & (trans.source.arrows - trans.darrows),
     e_F2R0: AF2R & trans.aarrows |
  let n_P0 = e_crit0.src, n_T0 = e_TP0.src, n_R0 = e_TR0.trg, n_F20 = e_PF20.trg |
    (n_P0 = e_crit0.trg and n_P0 = e_TP0.trg and n_P0 = e_PF20.src and
     n_T0 = e_TR0.src and n_F20 = e_F2R0.src and n_R0 = e_F2R0.trg and
     n_F20 in NF2 & trans.anodes and
     n_P0 in NP & (trans.source.nodes - trans.dnodes) and
     n_T0 in NT & (trans.source.nodes - trans.dnodes) and
     n_R0 in NR & (trans.source.nodes - trans.dnodes))
Here I use the keyword "some", and the Analyzer can handle it with a scope of 10.
But if I use the keyword "one", the Analyzer reports the following error with a scope of 5:
*********************EXAMPLE**************************************
Executing "Check check$1 for 5 but exactly 1 Trans, exactly 2 Graph, exactly 1 Rule"
Solver=minisat(jni) Bitwidth=0 MaxSeq=0 SkolemDepth=1 Symmetry=20
Generating CNF...
.
Translation capacity exceeded.
In this scope, universe contains 89 atoms
and relations of arity 5 cannot be represented.
Visit http://alloy.mit.edu/ for advice on refactoring.
MY QUESTION is: why do the two quantifications perform so differently?

one in Alloy is encoded using set comprehension and the cardinality operator, e.g.,
one s: S | p[s]
is transformed to
#{s: S | p[s]} = 1
Set comprehension cannot be skolemized, so when the quantifier in question is higher-order, Alloy simply gives up.

Higher-order quantifications are in general not allowed in Alloy. However, some existential quantifications (i.e., some) can be converted into solvable constraints through a process known as skolemization, which I believe does not apply to uniqueness quantifications (i.e., one). The process is briefly explained here for a (first-order) Alloy example.
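To give the first-order intuition: skolemization replaces an existentially quantified variable by a fresh witness that the solver must find. For example,
some x: S | p[x]
is satisfiable exactly when
$x in S and p[$x]
is, where $x is a fresh skolem constant (under an enclosing all y it would become a fresh function of y). No analogous rewriting exists for one, since uniqueness additionally asserts that no other value satisfies p, which a single witness cannot express.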
I wasn't able to process your example (sorry), but I would guess that yours is one such case.

I do not have a concrete answer for your example, but in general it is more complex to encode one than some. Let's assume you have a set S that can contain at most the elements a, b, c.
Alloy translates the problem to a SAT problem.
You can encode S in the SAT problem with 3 Boolean variables xa, xb, xc, where xa = TRUE (resp. FALSE) means that a is in S (resp. not in S).
The statement some S can now easily be encoded as the formula
xa \/ xb \/ xc
(with \/ as logical or).
On the other hand, for one you additionally need to encode that if one of the variables xa, xb, xc is true, the others are false, e.g.
xa \/ xb \/ xc
xa => not( xb \/ xc )
xb => not( xa \/ xc )
xc => not( xa \/ xb )
In conjunctive normal form (CNF, which is what the SAT solver takes as input) you have the clauses:
xa \/ xb \/ xc
-xa \/ -xb
-xa \/ -xc
-xb \/ -xa
-xb \/ -xc
-xc \/ -xa
-xc \/ -xb
There are techniques to optimize this (for instance, the duplicated clauses above collapse, leaving just the three pairwise clauses -xa \/ -xb, -xa \/ -xc and -xb \/ -xc), but you can see that one needs more clauses to be encoded than some.
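To make the difference concrete, here is a small Haskell sketch (hypothetical helper names, not Alloy's actual translator) that generates the CNF clauses for both encodings over a list of SAT variables:

import Data.List (tails)

type Lit    = Int   -- a positive literal is a variable, a negative one its negation
type Clause = [Lit]

-- "some": at least one element is in S -- a single clause.
someClauses :: [Lit] -> [Clause]
someClauses vars = [vars]

-- "one": at least one, plus one pairwise "at most one" clause per pair.
oneClauses :: [Lit] -> [Clause]
oneClauses vars = someClauses vars
               ++ [ [-x, -y] | (x:ys) <- tails vars, y <- ys ]

main :: IO ()
main = do
  print (length (someClauses [1..10]))  -- 1
  print (length (oneClauses  [1..10]))  -- 46 = 1 + (10 choose 2)

The clause count for one grows quadratically in the number of potential elements, while some always stays a single clause.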


Nested "where" or "let in" CodeStyle in Haskell [closed]

I've run into a code-style issue in Haskell with big functions (I'm continuing to learn Haskell by writing a toy language).
I have a necessarily big function (see the example below) with two sub-functions (nested where) in it, and there is no reason to place those sub-functions in module scope.
What do Haskell code-style best practices suggest for solving this problem of "ungraceful and clumsy code"?
The function (notes follow the code):
-- We can delete (in DCE) any useless opers.
-- Useful opers are only those which (directly or transitively)
-- determine the result of a GlobalUse oper.
addGlobalUsageStop :: [Var] -> IR -> IR
addGlobalUsageStop guses iR = set iOpers (ios ++ ios') $ set opN opn' iR
  where
    ios    = _iOpers iR
    gdefs  = _gDefs iR :: M.Map Int Var
    opn    = _opN iR
    guses' = nub $ filter isRegGlobal guses
    ogs    = catMaybes $ map (concatIOperWithGDef gdefs) $ reverse ios
      where
        concatIOperWithGDef gdefs' (i, o) = case gdefs' M.!? i of
          Nothing -> Nothing
          Just gd -> Just (o, gd)
    nops = newGUses ogs guses'
      where
        newGUses [] _  = []
        newGUses _  [] = []
        newGUses ((Oper _ d _ _, g):os) guses =
          if g `elem` guses
            then Oper GlobalUse g (RVar d) None : newGUses os (filter (g /=) guses)
            else newGUses os guses
    ios' = zip [opn..] nops
    opn' = opn + length ios'
Notices:
If you want to know why I even wrote such a big function, the answer is: because this is one big (and needed only once) piece of functionality in the compiler: for each "returning variable" we should find the last operation which defines it (actually the corresponding virtual register), and extend our IR with the constructed opers.
I've seen some similar questions (Haskell nested where clause and "let ... in" syntax), but they are about "how do I write correct code?", while my question is "is this code style-correct, and if it isn't, what should I do?".
And there is no reason to place those sub-functions in module scope
Think about it the other way around: is there any reason to put the sub-function in the local scope? Then do it. This could be because

- it needs to access locally bound variables; in that case it must be local, or else you need extra parameters; or
- it does something very obvious and only relevant to the specific use case; this could be a one-line definition of some operation that you don't care to think up a properly descriptive name for, or it could be a go helper that does basically the whole work of the enclosing function.

If neither of these applies, and you can give the local function a descriptive name (as you've already done), then put it in module scope. Add a type signature, which makes it clearer yet. Don't export it from the module.
Putting the function in module scope also removes the need to rename variables as you did with gdefs'; such renaming is one of the more common causes of bugs in Haskell code.
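For instance, applying that advice to concatIOperWithGDef from the question (a sketch; the Oper and Var types and the M.Map import are assumed from the question's code):

-- At module scope, with a type signature, and not exported.
concatIOperWithGDef :: M.Map Int Var -> (Int, Oper) -> Maybe (Oper, Var)
concatIOperWithGDef gdefs (i, o) = case gdefs M.!? i of
  Nothing -> Nothing
  Just gd -> Just (o, gd)

Note that no gdefs' rename is needed any more, because there is no outer gdefs to shadow.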
The question is a good one, but the example code isn't a great example. For me, the correct fix in this particular case is not to talk about how to stylishly nest wheres; it's to talk about how to use library functions and language features to simplify the code enough that you don't need where in the first place. In particular, list comprehensions get you very far here. Here's how I would write those two definitions:
import Data.Containers.ListUtils (nubOrdOn)

... where
  ogs  = [(o, gd) | (i, o) <- reverse ios, Just gd <- [gdefs M.!? i]]
  nops = nubOrdOn fun
    [ Oper GlobalUse g (RVar d) None
    | (Oper _ d _ _, g) <- ogs
    , g `elem` guses'
    ]
  -- this seems useful enough to put at the global scope;
  -- it may even already exist there
  fun (Oper _ g _ _) = g
Since ogs isn't mentioned anywhere else in the code, you might consider inlining it:
-- delete the definition of ogs
nops = nubOrdOn fun
  [ Oper GlobalUse g (RVar d) None
  | (i, Oper _ d _ _) <- reverse ios
  , Just g <- [gdefs M.!? i]
  , g `elem` guses'
  ]

Overloading Functions with Pattern Matching?

Hello fellow Haskell fans!
All my questions are about the -- OVERLOADED(?) FUNCTION -- part; I included the rest for completeness.
I was wondering whether it makes sense to use pattern matching to overload my function order as I did in my example below.
I was also wondering whether the first clause, with the call checkBalance balance, in the first version of the order function always gets executed (because I didn't specify a pattern for it) or never (because all the patterns of Food are covered in the clauses below).
Thanks in advance from a beginner :)
-- TYPE DECLARATIONS --
data Spice = Regular | Medium | Hot
data Base  = Noodles | Rice
data Meat  = Duck | Chicken | Pork
data Sauce = Tomato | Meatballs | Carbonara
data Food  = Indian Spice | Pasta Sauce | Chinese Base Meat
data DeliveryOption = Pickup | Delivery
data DeliveryTime   = Immediate | Later
type CreditBalance  = Int
data Order = O Food DeliveryOption CreditBalance
data OrderStatus = Success | Pending | Declined

-- OVERLOADED(?) FUNCTION --
order :: (Order, CreditBalance) -> OrderStatus
order (O {}, balance)
  | not (checkBalance balance) = Declined
  | ...
order (O Indian {} _ _, _)
  | ...
order (O Pasta {} _ _, _)
  | ...
order (O Chinese {} _ _, _)
  | ...

-- ANOTHER FUNCTION --
checkBalance :: CreditBalance -> Bool
checkBalance balance
  | balance > 100 = True
  | otherwise     = False
I can't see anything really wrong with that function definition.
Function clauses are tried in order, so the first clause, with checkBalance, will always be tried first, then its next guard, and so on; if none of the guards of the first clause matches, the next clause (O Indian {} _ _) will be tried.
If the guards of the first clause were exhaustive, then the other clauses below would not be reachable, which would mean something is wrong, but it's hard to say more without more details.
-- OVERLOADED(?) FUNCTION --
order :: (Order, CreditBalance) -> OrderStatus
order (O {}, balance)
  | not (checkBalance balance) = Declined
  | ...
The above pattern matches every case, so if its guards were exhaustive, anything below it would never have the opportunity to match.
The reason is that Order has only one constructor, namely O, and O {} matches all possible arguments of the O constructor. The other member of the tuple is a plain Int, which always matches.
Since patterns are matched from top to bottom and the first clause whose pattern and guards succeed is chosen, the ordering of the definitions in the code is important. If you put the broadest possible pattern at the top with exhaustive guards, the more specific ones below will never have the opportunity to match.
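A minimal standalone sketch of that fall-through behaviour (a hypothetical function, not from the question):

classify :: Int -> String
classify n
  | n > 100 = "big"   -- this clause is tried first for every argument
classify 0 = "zero"   -- reached only when the guard above fails
classify _ = "small"

classify 200 yields "big", classify 0 yields "zero", and classify 5 yields "small": when no guard of a clause succeeds, matching falls through to the next clause, which is exactly why the checkBalance clause does not shadow the clauses below it unless its guards are exhaustive.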
As for overloading functions: I can think of ways one could (ab)use pattern matching to imitate function overloading as in OOP, but then you would also need to (ab)use data declarations and the whole type system to bend them to such an idea, and this would just make things worse.

How do I model-check a module that depends on an unbound variable?

I'm going through Specifying Systems right now and I'm a little puzzled about how I'd model-check the following module:
---------------------------- MODULE BoundedFIFO ----------------------------
EXTENDS Naturals, Sequences
VARIABLES in, out
CONSTANT Message, N

ASSUME (N \in Nat) /\ (N > 0)

Inner(q) == INSTANCE InnerFIFO

BNext(q) == /\ Inner(q)!Next
            /\ Inner(q)!BufRcv => (Len(q) < N)

Spec == \EE q : Inner(q)!Init /\ [][BNext(q)]_<<in, out, q>>
=============================================================================
I see that both the Init and BNext formulas are operators, parameterized by q. How would I supply this to the model checker?
You can't: \EE q : F(q) is an unbounded (temporal) existential quantification, which TLC cannot handle. Many of the specs in Specifying Systems can't be model-checked. More recent guides, such as the TLA+ Hyperbook or Learn TLA+, are more careful to keep all of their specs checkable.
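If you just want to run TLC, a common workaround (a sketch, not taken from the book) is to write a checking module in which q is a concrete variable instead of being hidden by \EE; since \EE only hides q, any behaviour of this concrete spec also satisfies the original Spec:

---------------------- MODULE BoundedFIFOCheck ----------------------
EXTENDS Naturals, Sequences
VARIABLES in, out, q          \* q is now visible to TLC
CONSTANT Message, N
Inner == INSTANCE InnerFIFO
BNext == /\ Inner!Next
         /\ Inner!BufRcv => (Len(q) < N)
Spec == Inner!Init /\ [][BNext]_<<in, out, q>>
=====================================================================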

Prolog String manipulation, replacing part of string

What I am trying to do:
?- string_manipulation(1\2\3,Z).
Z = 1/2/3.
?- string_manipulation(s/t/a/c/k,Z).
Z = s\t\a\c\k.
What I have tried so far:
sign(/,\).
string_manipulation(Forward,Back):-
sign(Forward,_\),
; sign(/,Back).
I will be honest with you: I know this code is rubbish. I am kind of lost with this one. I just started learning Prolog, watched some videos and read some documentation, but could not find anything similar on the internet at first glance. Maybe someone could point me in a direction where I could learn the string manipulation needed for this one.
From the post title and the predicate name (the so-called functor), it seems you're looking for something like DCGs; but as an exercise in the manipulation of structured terms and operators, here is a solution for your problem:
string_manipulation(Xs, Ys) :-
    member((Xo, Yo), [(/, \), (\, /)]),
    Xs =.. [Xo, H, Xt],
    Ys =.. [Yo, T, Yt],
    string_manipulation(H, T),
    string_manipulation(Xt, Yt).
string_manipulation(S, S) :-
    atomic(S).
In SWI-Prolog, we need this preliminary declaration:
?- op(400,yfx,\).
true.
since by default
?- current_op(X,Y,/).
X = 400,
Y = yfx.
and
?- current_op(X,Y,\).
X = 200,
Y = fy.
Declaring the same precedence and associativity helps to keep things clearer (indeed, with the default fy declaration, an expression such as 1\2\3 would not even parse, since \ would then be a prefix operator).
Edit
The valuable suggestion by @mat:

string_manipulation(Xs, Ys) :-
    op_replacement(Xo, Yo),
    Xs =.. [Xo, H, Xt],
    ...

and

op_replacement(/, \).
op_replacement(\, /).
Looks like you want to replace an atom within an atom by another atom. But you would need to place quotes around the arguments, for example '1\2\3' instead of 1\2\3; otherwise the argument is not an atom but a term.
If your Prolog system has atom_split/3, you can bootstrap atom_replace/4 from it. atom_split/3 is part of Prolog Commons, and you need a bidirectional version of it. Namely, you can then define:
atom_replace(Source, Old, New, Target) :-
    atom_split(Source, Old, List),
    atom_split(Target, New, List).
Here are some example runs. Don't worry about the double backslashes; they are just needed to input an atom that contains a backslash. The second example, using write/1, shows that the escaping backslash does not become part of the atom:
Jekejeke Prolog 3, Runtime Library 1.3.6
?- atom_replace('1\\2\\3', '\\', '/', X).
X = '1/2/3'
?- atom_replace('s/t/a/c/k', '/', '\\', X), write(X), nl.
s\t\a\c\k
X = 's\\t\\a\\c\\k'
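In SWI-Prolog, which has no atom_split/3, atomic_list_concat/3 can play the same role, since it both splits an atom (when the list is unbound) and joins one (when the list is bound); a sketch under that assumption:

% atomic_list_concat(List, Sep, Atom) splits Atom on Sep when List
% is unbound, and joins List with Sep when List is bound.
atom_replace(Source, Old, New, Target) :-
    atomic_list_concat(List, Old, Source),
    atomic_list_concat(List, New, Target).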

G-machine, (non-)strict contexts - why case expressions need special treatment

I'm currently reading Implementing Functional Languages: a Tutorial by SPJ, and the (sub)chapter I'll be referring to in this question is 3.8.7 (page 136).
The first remark there is that a reader following the tutorial has not yet implemented C scheme compilation (that is, compilation of expressions appearing in non-strict contexts) of ECase expressions.
The solution proposed is to transform a Core program so that ECase expressions simply never appear in non-strict contexts. Specifically, each such occurrence creates a new supercombinator with exactly one variable whose body corresponds to the original ECase expression, and the occurrence itself is replaced with a call to that supercombinator.
Below I present a (slightly modified) example of such a transformation from [1]:
t a b = Pack{2,1} ;
f x = Pack{2,2} (case t x 7 6 of
                   <1> -> 1;
                   <2> -> 2) Pack{1,0} ;
main = f 3

== transformed into ==>

t a b = Pack{2,1} ;
f x = Pack{2,2} ($Case1 (t x 7 6)) Pack{1,0} ;
$Case1 x = case x of
             <1> -> 1;
             <2> -> 2 ;
main = f 3
I implemented this solution and it works like a charm; that is, the output is Pack{2,2} 2 Pack{1,0}.
However, what I don't understand is: why all that trouble? I hope it's not just me, but my first thought for solving the problem was simply to implement compilation of ECase expressions in the C scheme. And I did this by mimicking the rule for compilation in the E scheme (page 134 in [1], but I present that rule here for completeness): so I used
E[[case e of alts]] p = E[[e]] p ++ [Casejump D[[alts]] p]
and wrote
C[[case e of alts]] p = C[[e]] p ++ [Eval] ++ [Casejump D[[alts]] p]
I added [Eval] because Casejump needs an argument on top of the stack in weak head normal form (WHNF), and the C scheme doesn't guarantee that, as opposed to the E scheme.
But then the output changes to the enigmatic Pack{2,2} 2 6.
The same applies when I use the same rule as for the E scheme, i.e.
C[[case e of alts]] p = E[[e]] p ++ [Casejump D[[alts]] p]
So I guess that my "obvious" solution is inherently wrong, and I can see that from the outputs. But I'm having trouble stating a formal argument as to why this approach was bound to fail.
Can someone provide me with such an argument/proof, or some intuition as to why the naive approach doesn't work?
The purpose of the C scheme is to not perform any computation, but to delay everything until an EVAL happens (which may or may not ever happen). What are you doing in your proposed code generation for case? You're calling EVAL! And the whole purpose of C is to not call EVAL on anything, so you've now evaluated something prematurely.
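A rough analogy in Haskell (hypothetical code, not the G-machine itself): an argument position is a non-strict context, and building a thunk for a case expression there is safe even when its scrutinee diverges, whereas forcing it at construction time is not:

constTrue :: a -> Bool
constTrue _ = True

lazyOk, strictBad :: Bool
lazyOk    = constTrue (case (undefined :: Maybe Int) of { Just _ -> 1; Nothing -> 0 })
strictBad = (case (undefined :: Maybe Int) of { Just _ -> 1; Nothing -> 0 }) `seq` constTrue ()

lazyOk evaluates to True because the case is never demanded; strictBad crashes, because seq, like the inserted [Eval], forces the scrutinee prematurely.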
The only way you could generate code directly for case in the C scheme would be to add some new instruction to perform the case analysis once it's evaluated.
But we (Thomas Johnsson and I) decided it was simpler to just lift out such expressions. The exact historical details are lost in time though. :)
