I have a problem with using the method closure() for relations. Could someone explain how transitive closure works in KodKod?
Let's take for example:
Relation r1 = Relation.nary("r1",4);
Relation r2 = Relation.binary("r2");
Relation i = Relation.unary("i");
Relation j = Relation.unary("j");
Formula f = r1.in(r2.product(i).product(j));
and I want to know how to say: a variable k, one of j, is not in the transitive closure of the relation r1.
The arity of relation r1 in your example is 4, and transitive closure can only be applied to binary relations.
Assuming r1 is binary, something like k.in(r1.closure()).not(), where k is any expression that evaluates to a binary relation, should work.
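For concreteness, here is a minimal Kodkod sketch of that constraint. It assumes r1 is re-declared binary, and it reads "k one of j" as an existential quantification; both are assumptions on my part:

import kodkod.ast.*;

Relation r1 = Relation.binary("r1"); // closure requires a binary relation
Relation j  = Relation.unary("j");
Variable k  = Variable.unary("k");

// some k: j | k->k not in ^r1
Formula f = k.product(k).in(r1.closure()).not().forSome(k.oneOf(j));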
I'm trying to understand how recursive sets operate internally, by comparing them with similar features and concepts in other functional programming languages.
The wiki describes them; to follow it I need to know about the Y combinator and fixed points, which the wiki also covers briefly.
Now I'm starting to apply this in Haskell.
Haskell
This is easy, but I want to know what happens behind the scenes.
*Main> let x = y; y = 10 in x
10
When you write a = f b in a lazy functional language like Haskell or Nix, the meaning is stronger than just assignment. a and f b will be the same thing. This is usually called a binding.
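A tiny Haskell illustration of that point (a standard example, nothing from the question): because a binding is an equation rather than a sequential assignment, a name may even appear on both sides of its own definition.

-- 'ones' is defined in terms of itself; laziness makes this well-defined.
ones :: [Int]
ones = 1 : ones

main :: IO ()
main = print (take 5 ones) -- prints [1,1,1,1,1]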
I'll focus on a Nix example, because you're asking about recursive sets specifically.
A simple attribute set
Let's look at the initialization of an attribute set first. When the Nix interpreter is asked to evaluate this file
{ a = 1 + 1; b = true; }
it parses it and returns a data structure like this
{ a = <thunk 1>; b = <thunk 2>; }
where a thunk is a reference to the relevant syntax tree node and a reference to the "environment", which behaves like a dictionary from identifiers to their values, although implemented more efficiently.
Perhaps the reason we're evaluating this file is that you ran nix-build, which will not just ask for the value of a file, but will also traverse an attribute set when it sees one. So nix-build will ask for the value of a, which will be computed from its thunk. When the computation is complete, the memory that held the thunk is overwritten with the actual value: type = tInt, value.integer = 2.
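You can observe that laziness directly. A small sketch (the throw message is mine): evaluating only a never forces the thunk for b.

let attrs = { a = 1 + 1; b = builtins.throw "never forced"; };
in attrs.a  # evaluates to 2; the thunk for b is never computed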
A recursive attribute set
Nix has a special syntax that combines the functionality of the attribute set construction syntax ({ }) and the let-binding syntax. This avoids some repetition when you're constructing attribute sets with some shared values.
For example
let b = 1 + 1;
in { b = b; a = b + 5; }
can be expressed as
rec { b = 1 + 1; a = b + 5; }
Evaluation works in a similar manner.
At first the evaluator returns a representation of the attribute set with all thunks, but this time the thunks reference a new environment that includes all the attributes, on top of the existing lexical scope.
Note that all these representations can be constructed while performing a minimal amount of work.
nix-build traverses attrsets in alphabetic order, so it will evaluate a first. Its thunk references the b + 5 syntax node and an environment with b in it. Evaluating this requires evaluating the b syntax node (an ExprVar), which looks b up in the environment, where we find the 1 + 1 thunk, which is replaced by a tInt of 2 as before.
As you can see, this process of creating thunks but only evaluating them when needed is indeed lazy and allows us to have various language constructs with their own scoping rules.
Haskell implementations usually follow a similar pattern, but they may compile the code rather than interpret a syntax tree, and they can resolve all variable references to constant memory offsets ahead of time. Nix tries to do this to some degree, but it must be able to fall back on strings because of the inadvisable with keyword, which makes the scope dynamic.
I've guessed a few things myself.
In an eagerly evaluated language, I must declare a variable before using it, so the order of declaration is straightforward:
int x = 10;
int y = x;
Notes specific to the Nix language:
The wiki doesn't compare this concept with Haskell, though its let ... in is compared with Haskell.
lexical scope: all variables are lexically scoped.
mutual recursion: https://en.wikipedia.org/wiki/Let_expression#Mutually_recursive_let_expression
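As a quick sketch (the names are mine), mutual recursion works in a Nix rec set precisely because the bindings are equations over thunks rather than ordered assignments:

rec {
  isEven = n: if n == 0 then true else isOdd (n - 1);
  isOdd = n: if n == 0 then false else isEven (n - 1);
  result = isEven 10; # true
}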
Say I have a struct:
mutable struct DataHolder
    data1::Vector{Float64}
    data2::Vector{Float64}
    function DataHolder()
        emp = Float64[]
        new(emp, emp)
    end
end
d = DataHolder()
When I try to push a value to only one field of struct d by doing:
push!(d.data1, 1.0)
the value is pushed not only to d.data1 but also to d.data2. Indeed, the REPL says
julia> d
DataHolder([1.0], [1.0])
How can I push a value to only one field of the struct?
Your problem is not in push!, but rather in the inner constructor of DataHolder. Specifically:
emp = Float64[]
new(emp, emp)
This code pattern means that both fields of the new DataHolder point to the same array in memory. So if you mutate one of them (say, via push!), you also mutate the other.
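You can see the aliasing without any structs at all; a minimal REPL sketch:

julia> emp = Float64[]; a = emp; b = emp;

julia> push!(a, 1.0); b # b sees the push made through a
1-element Vector{Float64}:
 1.0

julia> a === b # true: one array, two names
true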
You could instead replace those two lines with:
new(Float64[], Float64[])
to get the behaviour you want.
More generally, although it is not forbidden, your use of the inner constructor is a bit odd. Typically, inner constructors should have a method signature corresponding exactly to the fields of your struct, and the inner constructor itself is typically only used to provide a set of universal tests that any new DataHolder should undergo.
Personally I would re-write your code as follows:
mutable struct DataHolder
    data1::Vector{Float64}
    data2::Vector{Float64}
    function DataHolder(data1::Vector{Float64}, data2::Vector{Float64})
        # Any tests on data1 and data2 go here
        new(data1, data2)
    end
end
DataHolder() = DataHolder(Float64[], Float64[])
If you don't need to do any universal tests on DataHolder, then delete the inner constructor entirely.
Final food for thought: does DataHolder really need to be mutable? If you only want to be able to mutate the arrays held in data1 and data2, then DataHolder itself does not need to be mutable, because those arrays are already mutable. You only need DataHolder to be mutable if you plan to rebind those fields entirely, e.g. an operation of the form dh.data1 = [2.0].
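Here is a sketch of that distinction (ImmutableHolder is a made-up name):

struct ImmutableHolder
    data1::Vector{Float64}
end

h = ImmutableHolder(Float64[])
push!(h.data1, 1.0) # fine: mutates the array, not the struct
# h.data1 = [2.0]   # error: cannot assign a field of an immutable struct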
UPDATE AFTER COMMENT:
Personally I don't see anything wrong with DataHolder() = DataHolder(Float64[], ..., Float64[]). It's just one line of code and you never have to think about it again. Alternatively, you could do:
DataHolder() = DataHolder([ Float64[] for n = 1:10 ]...)
which just splats a vector of empty vectors into the constructor arguments.
@ColinTBowers has answered your question. Here are a very simple implementation and a more general one:
struct DataHolder # add mutable if you really need it
    data1::Vector{Float64}
    data2::Vector{Float64}
end
DataHolder() = DataHolder([], [])
Perhaps you'd like to allow other types than Float64 (because why wouldn't you?!):
struct DataHolder{T}
    data1::Vector{T}
    data2::Vector{T}
end
DataHolder{T}() where {T} = DataHolder{T}([], [])
DataHolder() = DataHolder{Float64}() # now `Float64` is the default type.
Now you can do this:
julia> DataHolder{Rational}()
DataHolder{Rational}(Rational[], Rational[])
julia> DataHolder()
DataHolder{Float64}(Float64[], Float64[])
I'm puzzled as to how arguments are passed into a cppFunction when we use Rcpp. In particular, I wonder if someone can explain the result of the following code.
library(Rcpp)
cppFunction("void test(double &x, NumericVector y) {
x = 2016;
y[0] = 2016;
}")
a = 1L
b = 1L
c = 1
d = 1
test(a,b)
test(c,d)
cat(a,b,c,d) #this prints "1 1 1 2016"
As stated elsewhere, Rcpp establishes convenient classes around R's SEXP objects.
For the first parameter, the double type has no corresponding SEXP object, because within R there is no such thing as a scalar. Thus, new memory is allocated for the value, and the & reference points at that copy rather than at R's own data. Hence, the scope of the modification is limited to the function and the result is never written back. As a result, the first argument still prints as 1 in both test cases.
For the second parameter, what matters is whether the object classes match. In the first call, the object supplied is of type integer, due to the L appended to the literal, which conflicts with the NumericVector type the C++ function expects; Rcpp therefore creates an intermediary numeric copy of the correct type to receive the value, and the modification is lost. In the second call the L is dropped, so the object is instantiated as numeric, no intermediary copy is needed, and the modification propagates back to R.
e.g.
a = 1L
class(a)
# "integer"
a = 1
class(a)
# "numeric"
Are these two equivalent:
r: A -> B
r: A set -> set B
That is, is set the default multiplicity?
If yes, then I will quibble with the definition of the arrow operator in the Software Abstractions book. The book says on page 55:
The arrow product (or just product) p->q of two relations p and q is
the relation you get by taking every combination of a tuple from p and
a tuple from q and concatenating them.
I interpret that definition to mean the only valid instance for p->q is one that has every possible combination of tuples from p with tuples from q. But that's not right (I think); any instance containing mappings between p and q is valid. For example, page 56 has this example:
Name = {(N0), (N1)}
Addr = {(D0), (D1)}
The book says this is a valid relation for Name->Addr
{(N0, D0), (N0, D1), (N1, D0), (N1, D1)}
But that's not the only valid relation, right? For example, this is a valid relation:
{(N0, D0), (N1, D1)}
Is that right?
The declaration r : A->B means r is a subset of A->B. The expression A->B has just one value, which is the cross product of A and B. The declaration results in a set of possible values for r, which would include both the example given in the book that you cite, and the example that you ask about.
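A small Alloy sketch to confirm this (the model is mine, not from the book):

sig Addr {}
sig Name { r : set Addr } // a field r : Name -> Addr, default multiplicity

// Ask for an instance where r is NOT the full product; the analyzer
// finds one, confirming that any subset satisfies the declaration.
run { r != Name -> Addr } for exactly 2 Name, exactly 2 Addr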
Sometimes, when I would like to use recursion in Alloy, I find I can get by with transitive closure, or sequences.
For example, in a model of context-free grammars:
abstract sig Symbol {}
sig NT, T extends Symbol {}

// A grammar is a tuple (V, Sigma, Rules, Start)
one sig Grammar {
    V : set NT,
    Sigma : set T,
    Rules : set Prod,
    Start : one V
}

// A production rule has a left-hand side
// and a right-hand side
sig Prod {
    lhs : NT,
    rhs : seq Symbol
}

fact tidy_world {
    // Don't fill the model with irrelevancies
    Prod in Grammar.Rules
    Symbol in (Grammar.V + Grammar.Sigma)
}
One possible definition of reachable non-terminals would be "the start symbol, and any non-terminal appearing on the right-hand side of a rule for a reachable symbol." A straightforward translation would be
// A non-terminal 'refers' to non-terminals
// in the right-hand sides of its rules
pred refers[n1, n2 : NT] {
    let r = (Grammar.Rules & lhs.n1) |
        n2 in r.rhs.elems
}
pred reachable[n : NT] {
    n in Grammar.Start
    or some n2 : NT
        | (reachable[n2] and refers[n2, n])
}
Predictably, this blows the stack. But if we simply take the transitive closure of Grammar.Start under the refers relation (or, strictly speaking, a relation formed from the refers predicate), we can define reachability:
// A non-terminal is 'reachable' if it's the
// start symbol or if it is referred to by
// (rules for) a reachable symbol.
pred Reachable[n : NT] {
    n in Grammar.Start.*(
        {n1, n2 : NT | refers[n1, n2]}
    )
}

pred some_unreachable {
    some n : (NT - Grammar.Start)
        | not Reachable[n]
}

run some_unreachable for 4
In principle, the definition of productive symbols is similarly recursive: a symbol is productive iff it is a terminal symbol, or it has at least one rule in which every symbol in the right-hand side is productive. The simple-minded way to write this is
pred productive[s : Symbol] {
    s in T
    or some p : (lhs.s) |
        all r : (p.rhs.elems) | productive[r]
}
Like the straightforward definition of reachability, this blows the stack. But I have not yet found a relation I can define on symbols which will give me, via transitive closure, the result I want. Have I found a case where transitive closure cannot substitute for recursion? Or have I just not thought hard enough to find the right relation?
There is an obvious, if laborious, hack:
pred p0[s : Symbol] { s in T }

pred p1[s : Symbol] {
    p0[s]
    or some p : (lhs.s)
        | all e : p.rhs.elems
            | p0[e]
}

pred p2[s : Symbol] {
    p1[s]
    or some p : (lhs.s)
        | all e : p.rhs.elems
            | p1[e]
}

pred p3[s : Symbol] {
    p2[s]
    or some p : (lhs.s)
        | all e : p.rhs.elems
            | p2[e]
}

... etc ...

pred productive[n : NT] {
    p5[n]
}
This works, as long as one doesn't forget to add enough predicates to cover the longest possible chain of non-terminal references whenever one raises the scope.
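One sanity check that suggests itself (a sketch; it assumes a p6 written in the same pattern as p1 through p5): if the chain has converged within the scope, the next approximation adds nothing, and the analyzer can verify that.

// If this finds no counterexample at the chosen scope, p5 has reached
// the fixed point there and productive[] is safe at that scope.
check { all s : Symbol | p6[s] iff p5[s] } for 4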
Concretely, I seem to have several questions; answers to any of them would be welcome:
1. Is there a way to define the set of productive non-terminals in Alloy without resorting to the p0, p1, p2, ... hack?
2. If one does have to resort to the hack, is there a better way to define it?
3. As a theoretical question: is it possible to characterize the set of recursive predicates that can be defined using transitive closure, or sequences of atoms, instead of recursion?
Have I found a case where transitive closure cannot substitute for recursion?
Yes, that is the case. More precisely, you have found a recursive relation that cannot be expressed with first-order transitive closure (which is what Alloy supports).
Is there a way to define the set of productive non-terminals in Alloy without resorting to the p0, p1, p2, ... hack? If one does have to resort to the hack, is there a better way to define it?
There is no way to do this in Alloy. However, there might be a way to do it in Alloy*, which supports higher-order quantification. (The idea would be to describe the set of productive elements with a closure over the relation of "productiveness"; this would use second-order quantification over the set of productive symbols, constraining that set to be minimal. A similar idea is described in "A.1.9 Axiomatizing Transitive Closure" in the Alloy book.)
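A rough sketch of that idea (my own rendering; it relies on Alloy*'s higher-order quantification and is not valid standard Alloy):

// A set s is 'closed' if it contains the terminals and is closed under
// productions whose entire right-hand side already lies in s.
pred closed[s : set Symbol] {
    T in s
    all p : Prod | p.rhs.elems in s implies p.lhs in s
}

// The productive symbols are those in every closed set, i.e. the least
// fixed point; the quantifier over sets is the higher-order part.
fun productive : set Symbol {
    { x : Symbol | all s : set Symbol | closed[s] implies x in s }
}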
As a theoretical question: is it possible to characterize the set of recursive predicates that can be defined using transitive closure, or sequences of atoms, instead of recursion?
This is an interesting question. The Wikipedia article mentions the relative expressiveness of second-order logic when transitive closure and fixed-point operators are added (the latter being able to express forms of recursion).