Isabelle: locale interpretation about record fails in proof - locale

Using the Algebra library, I encountered the following problem. In a proof I wanted to interpret the additive structure of a ring as a group. Here is a sample code:
theory aaa
imports "~~/src/HOL/Algebra/Ring"
begin
lemma assumes "ring R"
shows "True"
proof-
interpret ring R by fact
interpret additive: comm_group "⦇carrier = carrier R, mult = add R, one = zero R⦈" by(unfold_locales)
But I can't access the facts from the group locale. Typing
thm additive.m_assoc
gives the message "Undefined fact". However, it works when I define the additive structure with the monoid.make command:
interpret additivee: comm_group "monoid.make (carrier R) (add R) (zero R)" sorry
thm additivee.m_assoc
It also works if I try to do the same for the multiplicative structure, or if I remove
interpret ring R by fact
Any ideas about what's going on?

The commands interpretation and interpret only register those facts from locales that are not already in scope from previous interpretations. The ring locale is a sub-locale of comm_group with the prefix add and precisely the parameter instantiation you are giving in the first interpretation. Since all these facts are already available (albeit under a different name), interpret does not add them once more. In the interpretation additivee, the instantiation of the parameters is different, so the facts from the locale are added.
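For instance (a sketch relying on the add prefix described above), inside the original proof you should already be able to refer to the inherited group facts under that prefix:
interpret ring R by fact
thm add.m_assoc  (* the comm_group fact, registered via the sublocale with prefix add *)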


Is there a fast way of going from a symbol to a function call in Julia? [duplicate]

This question already has an answer here:
Julia: invoke a function by a given string
(1 answer)
Closed 6 years ago.
I know that you can call functions using their name as follows
f = x -> println(x)
y = :f
eval(:($y("hi")))
but this is slow since it uses eval. Is it possible to do this in a different way? I know it's easy to go the other direction by just doing symbol(f).
What are you trying to accomplish? Needing to eval a symbol sounds like a solution in search of a problem. In particular, you can just pass around the original function, thereby avoiding issues with needing to track the scope of f (or, since f is just an ordinary variable in your example, the possibility that it would get reassigned), and with fewer characters to type:
f = x -> println(x)
g = f
g("hi")
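If you really do need to go from a symbol to the function it names, you can look the function up in the module that defines it instead of eval'ing a call expression (a sketch of the approach from the linked duplicate; it assumes f is bound in Main):
f = x -> println(x)
y = :f
g = getfield(Main, y)   # looks up the binding named by the symbol, no eval of a call
g("hi")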
I know it's easy to go the other direction by just doing symbol(f).
This is misleading, since it's not actually going to give you back f (that transform would be non-unique); it instead gives you the string representation for the function (which might happen to be f, sometimes). It is simply equivalent to calling Symbol(string(f)), a combination common enough to be useful for other purposes.
Actually, I have found a use for the above scenario. I am working on a simple form compiler allowing for the convenient definition of variational problems as encountered in e.g. finite element analysis.
I am relying on the Julia parser to do an initial analysis of the syntax. The equations entered are valid Julia syntax, but will trigger errors on execution because some of the symbols or methods are not available at the point of the problem definition.
So what I do is roughly this:
I have a type that can hold my problem description:
struct Cmd; f; a; b; end
I have defined a macro so that I have access to the problem description AST. I traverse this expression and create a Cmd object from its elements (this is not completely unlike the strategy behind the @mat macro in MATLAB.jl):
macro m(xp)
    # for :(a^b), xp.args == [:^, :a, :b]
    c = Cmd(xp.args[1], xp.args[3], xp.args[2])
    :($c)
end
At a later step, I run the Cmd. Evaluation of the symbols happens only at this stage (yes, I need to be careful of the evaluation context):
function run(c::Cmd)
    xp = Expr(:call, c.f, c.a, c.b)
    eval(xp)
end
Usage example:
c = @m a^b
...
a, b = 2, 3
run(c)
which returns 9. So in short, the question is relevant in at least some meta-programming scenarios. In my case I have to admit I couldn't care less about performance as all of this is mere preprocessing and syntactic sugar.

Does Unbound always need to be in a `FreshM` monad?

I'm working on a project based on some existing code that uses the unbound library.
The code uses unsafeUnbind a bunch, which is causing me problems.
I've tried using freshen, but I get the following error:
error "fresh encountered bound name!
Please report this as a bug."
I'm wondering:
Is the library intended to be used entirely within a FreshM monad? Or are there ways to do things like lambda application without being in Fresh?
What kinds of values can I give to freshen, in order to avoid the errors they list?
If I end up using unsafeUnbind, under what conditions is it safe to use?
Is the library intended to be used entirely within a FreshM monad? Or are there ways to do things like lambda application without being in Fresh?
In most situations you will want to operate within a Fresh or an LFresh monad.
What kinds of values can I give to freshen, in order to avoid the errors they list?
So I think the reason you're getting the error is that you're passing a term to freshen rather than a pattern. In Unbound, patterns are a generalization of names: a single Name E is a pattern consisting of one variable which stands for Es, but (p1, p2) or [p] are also patterns, comprised of a pair of patterns p1 and p2 or a list of patterns p, respectively. This lets you define terms that bind two variables at the same time, for example. Other, more exotic type constructors include Embed t and Rebind p1 p2: the former makes a pattern that embeds a term inside of a pattern, while the latter is similar to (p1, p2) except that the names within p1 scope over p2 (for example, if p2 has Embed-ed terms in it, p1 will scope over those terms). This is really powerful because it lets you define things like Scheme's let* form, or telescopes like in dependently typed languages. (See the paper for details.)
Now, finally, the type constructor Bind p t is what brings a pattern and a term together: a term Bind p t means that the names in p are bound in Bind p t and scope over t. So an (untyped) lambda term might be constructed with data Expr = Lam (Bind Var Expr) | App Expr Expr | V Var where type Var = Name Expr.
So back to freshen. You should only call freshen on patterns, so calling it on something of type Bind p t is incorrect (and, I suspect, the source of the error message you're seeing). You should call it on just the p, and then apply the resulting permutation to the term t to carry out the renaming that freshen constructs.
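To make that concrete, here is a minimal sketch using unbound-generics (the same API shape as unbound; the names betaStep and example are illustrative): instead of calling freshen on a Bind, you open the binder with unbind inside a Fresh monad, which freshens the bound name for you, and that is all you need for lambda application:
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE MultiParamTypeClasses #-}
import GHC.Generics (Generic)
import Unbound.Generics.LocallyNameless

type Var = Name Expr

data Expr = V Var | Lam (Bind Var Expr) | App Expr Expr
  deriving (Show, Generic)

instance Alpha Expr
instance Subst Expr Expr where
  isvar (V x) = Just (SubstName x)
  isvar _     = Nothing

-- Beta-reduce an outermost redex. unbind freshens the bound name,
-- which is why the computation lives in a Fresh monad.
betaStep :: Fresh m => Expr -> m (Maybe Expr)
betaStep (App (Lam b) arg) = do
  (x, body) <- unbind b
  return (Just (subst x arg body))
betaStep _ = return Nothing

-- (\x. x) y  reduces to  y
example :: Maybe Expr
example = runFreshM (betaStep (App (Lam (bind x (V x))) (V y)))
  where x = s2n "x"
        y = s2n "y"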
If I end up using unsafeUnbind, under what conditions is it safe to use?
The place where I've used it is when I need to temporarily sneak under a binder and perform some operation that I know for sure does not do anything to the names. An example might be collecting some source position annotations from a term, or replacing some global constant by a closed term. It is also safe if you can guarantee that the term you're working with has already been renamed, so that any names you unsafeUnbind are going to be unique already.
Hope this helps.
PS: I maintain unbound-generics which is a clone of Unbound, but using GHC.Generics instead of RepLib.

Difference between (Facts and Predicates) && (Single and Determ)

I just wonder, what is the difference between the "facts" and "predicates" sections in Prolog?
And what is the difference between the "single" and "determ" keywords?
A fact in Prolog plays the role of a row in a database table Table(Column1, Column2, ...): facts take the form fact(Arg1, Arg2) and yield {true, false} values ONLY for the specific constants mentioned inside the parentheses.
So a fact is a compound term, a predicate whose arguments are individual constants, not variables.
example
father(fathername,childname).
Rules are also predicates; they take the form
rule_type1(+In_Args, ?Out_Args) :- body.
rule_type2(+In_Args) :- body.    % succeeds or fails (true/false)
rule_type3 :- body.
They are used to derive data from facts, or from the logic rules that make up the body, via queries.
example
max(X,Y,Z) :- X>=Y -> Z=X ; Z=Y .
?- max(3,5,Z). /* gives us */ Z = 5
In Visual Prolog, facts can be declared with several optional keywords:
Facts declared with the keyword determ.
The keyword determ specifies that the facts database can contain at most one instance of a fact (database predicate) fact_N(...) declared with this keyword. So if you assert one such fact and then try to assert a second, the Visual Prolog engine generates a runtime error (1041: Assert to a fact declared as determ, but fact already exists).
For example:
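A minimal sketch (the fact name counter and the helper setCounter are hypothetical):
FACTS
   determ counter(INTEGER)
PREDICATES
   setCounter(INTEGER)
CLAUSES
   setCounter(N) :-
       retractall(counter(_)),   % remove any existing instance first
       assert(counter(N)).       % asserting a second instance without the
                                 % retract would raise runtime error 1041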
Facts declared with the keyword single.
The keyword single before a fact_N declaration determines that one and only one instance of the fact must always exist:
Since a single fact must already exist when the program calls its goal, single facts must be initialized in a clauses section in the program's source code.
For example:
FACTS
   single singleFact(STRING, STRING)
CLAUSES
   singleFact("","").
Just to point out the obvious: the "facts section" is for facts; facts are predicates that are always true and are used to describe properties.
Single and determ are "fact modes", used optionally in a fact declaration. single means the fact always has exactly one value; determ means the fact can have zero or one value.

Is using util/ordering exactly the same as axiomatizing a total order in the usual way?

The util/ordering module contains a comment at the top of the file noting that the module parameter is constrained to have exactly the bound permitted by the scope for that signature.
I have read a few times (here for instance) that it is an optimization that allows the generation of a nice symmetry-breaking predicate, which I can grasp. (BTW, with respect to the said post, am I right to infer that the exactly keyword in the module parameter specification is here to enforce this exact bound explicitly (while it was implicit in pre-4.x Alloy versions)?)
However, the comment also contains a part that does not seem to refer to optimization but really to an issue that has a semantic flavour:
* Technical comment:
* An important constraint: elem must contain all atoms permitted by the scope.
* This is to let the analyzer optimize the analysis by setting all fields of each
* instantiation of Ord to predefined values: e.g. by setting 'last' to the highest
* atom of elem and by setting 'next' to {<T0,T1>,<T1,T2>,...<Tn-1,Tn>}, where n is
* the scope of elem. Without this constraint, it might not be true that Ord.last is
* a subset of elem, and that the domain and range of Ord.next lie inside elem.
So, I do not understand this, in particular the last sentence about Ord.last and Ord.next. Suppose I model a totally-ordered signature S in the classical way (i.e. specifying a total, reflexive, antisymmetric, transitive relation in S -> S, all of which is expressible in plain first-order logic) and that I take care to specify an exact bound for S: will it be equivalent to stating open util/ordering[S] (ignoring efficiency and confusing atom-naming issues)?
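For concreteness, here is a sketch of that classical encoding (the relation name lte is made up), next to the util/ordering version:
sig S { lte: set S }
fact TotalOrder {
    all x: S | x in x.lte                                            -- reflexive
    all x, y: S | (y in x.lte and x in y.lte) implies x = y          -- antisymmetric
    all x, y, z: S | (y in x.lte and z in y.lte) implies z in x.lte  -- transitive
    all x, y: S | y in x.lte or x in y.lte                           -- total
}
run {} for exactly 4 S
versus simply:
open util/ordering[S]
sig S {}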
Sorry for the slow response to this. This isn't very clear, is it? All it means is that because of the symmetry breaking, the values of last, prev and next are hardwired. If that were done, and independently elem were to be bound to a set that is smaller than the set of all possible atoms for elem, then you'd have strange violations of the declarations such as Ord.last not being in the set elem. So there's nothing to understand beyond: (1) that the exactly keyword forces elem to contain all the atoms in the given scope, and (2) the ordering relation is hardwired so that the atoms appear in the "natural" order.

Why do a lot of programming languages put the type *after* the variable name?

I just came across this question in the Go FAQ, and it reminded me of something that's been bugging me for a while. Unfortunately, I don't really see what the answer is getting at.
It seems like almost every non C-like language puts the type after the variable name, like so:
var : int
Just out of sheer curiosity, why is this? Are there advantages to choosing one or the other?
There is a parsing issue, as Keith Randall says, but it isn't what he describes. The "not knowing whether it is a declaration or an expression" simply doesn't matter - you don't care whether it's an expression or a declaration until you've parsed the whole thing anyway, at which point the ambiguity is resolved.
Using a context-free parser, it doesn't matter in the slightest whether the type comes before or after the variable name. What matters is that you don't need to look up user-defined type names to understand the type specification - you don't need to have understood everything that came before in order to understand the current token.
Pascal syntax is context-free - if not completely, at least WRT this issue. The fact that the variable name comes first is less important than details such as the colon separator and the syntax of type descriptions.
C syntax is context-sensitive. In order for the parser to determine where a type description ends and which token is the variable name, it needs to have already interpreted everything that came before so that it can determine whether a given identifier token is the variable name or just another token contributing to the type description.
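A tiny illustration (the identifiers are made up): in the statement A * B; the tokens cannot be classified without knowing whether A currently names a type:
typedef int A;

void as_declaration(void) {
    A *B;           /* A names a type here, so this declares B */
}

void as_expression(int A, int B) {
    (void)(A * B);  /* A is a parameter here, so this multiplies A by B */
}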
Because C syntax is context-sensitive, it is very difficult (if not impossible) to parse using traditional parser-generator tools such as yacc/bison, whereas Pascal syntax is easy to parse using the same tools. That said, there are parser generators now that can cope with C and even C++ syntax. Although it's not properly documented or in a 1.? release etc, my personal favorite is Kelbt, which uses backtracking LR and supports semantic "undo" - basically undoing additions to the symbol table when speculative parses turn out to be wrong.
In practice, C and C++ parsers are usually hand-written, mixing recursive descent and precedence parsing. I assume the same applies to Java and C#.
Incidentally, similar issues with context sensitivity in C++ parsing have created a lot of nasties. The "Alternative Function Syntax" for C++0x is working around a similar issue by moving a type specification to the end and placing it after a separator - very much like the Pascal colon for function return types. It doesn't get rid of the context sensitivity, but adopting that Pascal-like convention does make it a bit more manageable.
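To make the "Alternative Function Syntax" concrete, here is a small C++11 sketch (the function name is made up); the return type moves behind the parameter list, after a separator:
#include <functional>

// classic form:  std::function<int(int)> make_adder(int n);
// trailing form: the return type comes after "->", following the parameters
auto make_adder(int n) -> std::function<int(int)> {
    return [n](int x) { return x + n; };
}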
The 'most other' languages you speak of are those that are more declarative. They aim to let you program more along the lines you think in (assuming you aren't boxed into imperative thinking).
Type-last reads as 'create a variable called NAME of type TYPE'.
This is of course the opposite of saying 'create a TYPE called NAME', but when you think about it, what the value is for is more important than its type; the type is merely a programmatic constraint on the data.
If the name of the variable starts at column 0, it's easier to find the name of the variable.
Compare
QHash<QString, QPair<int, QString> > hash;
and
hash : QHash<QString, QPair<int, QString> >;
Now imagine how much more readable your typical C++ header could be.
In formal language theory and type theory, it's almost always written as var: type. For instance, in the typed lambda calculus you'll see proofs containing statements such as:
x : A ⊢ y : B
-------------
\x.y : A -> B
I don't think it really matters, but I think there are two justifications: one is that "x : A" is read "x is of type A", the other is that a type is like a set (e.g. int is the set of integers), and the notation is related to "x ∈ A".
Some of this stuff pre-dates the modern languages you're thinking of.
An increasing trend is to not state the type at all, or to optionally state the type. This could be a dynamically typed language where there really is no type on the variable, or it could be a statically typed language which infers the type from the context.
If the type is sometimes given and sometimes inferred, then it's easier to read if the optional bit comes afterwards.
There are also trends related to whether a language regards itself as coming from the C school or the functional school or whatever, but these are a waste of time. The languages which improve on their predecessors and are worth learning are the ones that are willing to accept input from all different schools based on merit, not be picky about a feature's heritage.
"Those who cannot remember the past are condemned to repeat it."
Putting the type before the variable started innocuously enough with Fortran and Algol, but it got really ugly in C, where some type modifiers are applied before the variable, others after. That's why in C you have such beauties as
int (*p)[10];
or
void (*signal(int x, void (*f)(int)))(int);
together with a utility (cdecl) whose purpose is to decrypt such gibberish.
In Pascal, the type comes after the variable, so the first example becomes
p: pointer to array[10] of int
Contrast with
q: array[10] of pointer to int
which, in C, is
int *q[10];
In C, you need parentheses to distinguish this from int (*p)[10]. Parentheses are not required in Pascal, where only the order matters.
The signal function would be
signal: function(x: int, f: function(int) to void) to (function(int) to void)
Still a mouthful, but at least within the realm of human comprehension.
In fairness, the problem isn't that C puts the type before the name, but that it perversely insists on putting some bits and pieces before, and others after, the name.
But if you try to put everything before the name, the order is still unintuitive:
int [10] a // an int, ahem, ten of them, called a
int [10]* a // an int, no wait, ten, actually a pointer thereto, called a
So, the answer is: A sensibly designed programming language puts the variables before the types because the result is more readable for humans.
I'm not sure, but I think it's got to do with the "name vs. noun" concept.
Essentially, if you put the type first (such as "int varname"), you're declaring an "integer named 'varname'"; that is, you're giving an instance of a type a name. However, if you put the name first, and then the type (such as "varname : int"), you're saying "this is 'varname'; it's an integer". In the first case, you're giving an instance of something a name; in the second, you're defining a noun and stating that it's an instance of something.
It's a bit like if you were defining a table as a piece of furniture; saying "this is furniture and I call it 'table'" (type first) is different from saying "a table is a kind of furniture" (type last).
It's just how the language was designed. Visual Basic has always been this way.
Most (if not all) curly brace languages put the type first. This is more intuitive to me, as the same position also specifies the return type of a method. So the inputs go into the parentheses, and the output goes out the back of the method name.
I always thought the way C does it was slightly peculiar: instead of constructing types, the user has to declare them implicitly. It's not just before/after the variable name; in general, you may need to embed the variable name among the type attributes (or, in some usage, to embed an empty space where the name would be if you were actually declaring one).
As a weak form of pattern-matching, it is intelligible to some extent, but it doesn't seem to provide any particular advantages, either. And trying to write (or read) a function pointer type can easily take you beyond the point of ready intelligibility. So overall this aspect of C is a disadvantage, and I'm happy to see that Go has left it behind.
Putting the type first helps in parsing. For instance, in C, if you declared variables like
x int;
When you have parsed just the x, you don't know yet whether you're in a declaration or an expression. In contrast, with
int x;
When you parse the int, you know you're in a declaration (types always start a declaration of some sort).
Given progress in parsing languages, this slight help isn't terribly useful nowadays.
Fortran puts the type first:
REAL*4 I,J,K
INTEGER*4 A,B,C
And yes, there's a (very feeble) joke there for those familiar with Fortran: under Fortran's implicit typing rules, undeclared names starting with I through N are INTEGER, so declaring I, J and K as REAL*4 is exactly backwards.
There is room to argue that this is easier than C, which puts the type information around the name when the type is complex enough (pointers to functions, for example).
What about dynamically (cheers @wcoenen) typed languages? You just use the variable.
