I just wonder: what is the difference between the "facts" and "predicates" sections in Prolog?
And what is the difference between the "single" and "determ" keywords?
A fact in Prolog is an instance of a predicate, much like a row in a database table Table(Column1, Column2, ...). A fact takes the form Fact(Arg1, Arg2) and yields a {true, false} value ONLY for the specific constants mentioned inside the parentheses.
So a fact is a compound term: a predicate whose arguments are not variables but individual constants.
Example:
father(fathername,childname).
Rules also define predicates; they take one of the following forms:
rule_type1(+In_Args,?Out_Args) :- body .
rule_type2(+In_Args) :- body . % (true,false)
rule_type3 :- body .
Rules are used to derive data from facts, or from the logic encoded in the rule's body, by means of queries.
Example:
max(X, Y, Z) :- ( X >= Y -> Z = X ; Z = Y ).
?- max(3,5,Z). /* gives us */ Z = 5
In Visual Prolog, facts can be declared with several optional keywords:
Facts declared with the keyword determ.
The keyword determ specifies that the facts database can contain at most one instance of a fact (database predicate) fact_N(...) declared with this keyword. So if you assert one such fact and then try to assert a second, the Visual Prolog engine generates a runtime error (1041: Assert to a fact declared as determ, but fact already exists).
For example (a minimal sketch; the fact name and the goal are illustrative, not taken from the Visual Prolog documentation):
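FACTS
determ counter(INTEGER)
GOAL
assert(counter(1)),   % succeeds: no counter fact exists yet
assert(counter(2)).   % runtime error 1041: counter(1) already exists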
Facts declared with the keyword single.
The keyword single before a fact declaration fact_N determines that exactly one instance of the fact must always exist:
Since a single fact must already exist when the program calls its goal, single facts must be initialized in a clauses section of the program's source code.
For example:
FACTS
single singleFact(STRING, STRING)
CLAUSES
singleFact("","").
Just to point out the obvious: the "facts" section is for facts; facts are predicates that are always true and are used to describe properties.
single and determ are "fact modes", used optionally in a fact declaration: single means the fact always has exactly one value; determ means the fact can have zero or one value.
When people talk about F# they sometimes mention the term top-level;
what does top-level mean?
For example, in previous SO Q&A:
Error FS0037 sometimes, very confusing
Defining Modules VS.NET vs F# Interactive
What the difference between a namespace and a module in F#?
AutoOpen attribute in F#
F# and MEF: Exporting Functions
How to execute this F# function
The term also appears regularly in comments, but I did not reference those Q&A.
The Wikipedia article on scope touches on this, but has no specifics for F#.
The F# 3.x spec only states:
11.2.1.1 Arity Conformance for Functions and Values
The parentheses indicate a top-level function, which might be a
first-class computed expression that computes to a function value,
rather than a compile-time function value.
13.1 Custom Attributes
For example, the STAThread attribute should be placed immediately
before a top-level “do” statement.
14.1.8 Name Resolution for Type Variables
It is initially empty for any member or any other top-level construct
that contains expressions and types.
I suspect the term has different meanings in different contexts:
Scope, F# interactive, shadowing.
If you could also explain the origins in F#'s predecessor languages (ML, CAML, OCaml), it would be appreciated.
Lastly, I don't plan to mark an answer as accepted for a few days, to avoid hasty answers.
I think the term top-level has different meanings in different contexts.
Generally speaking, I'd use it whenever you have some structure that allows nesting to refer to the one position at the top that is not nested inside anything else.
For example, if you said "top-level parentheses" in an expression, it would refer to the outer-most pair of parentheses:
((1 + 2) * (3 * (8)))
^                   ^
When talking about functions and value bindings (and scope) in F#, it refers to the function that is not nested inside another function. So functions inside modules are top-level:
module Foo =
    let topLevel n =
        let nested a = a * 10
        10 + nested n
Here, nested is nested inside topLevel.
In F#, functions and values defined using let can appear inside modules or inside classes, which complicates things a bit - I'd say only those inside modules are top-level, but that's probably just because they are public by default.
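For instance (the names are mine, and the "top-level" reading is my interpretation of the convention rather than anything the spec says):

module M =
    let topLevelValue = 1          // bound directly inside a module: top-level

type C() =
    let classLocal = 2             // let-bound inside a class: not top-level, private to C
    member this.Value = classLocal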
The do keyword works similarly - you can nest it (although almost nobody does that), and so the top-level do that allows the STAThread attribute is the one that is not nested inside another do or let:
module Foo =
    [<STAThread>]
    do
        printfn "Hello!"
But it is not allowed on a do nested inside another expression:
do
    [<STAThread>]
    do
        printfn "Hello!"
    printfn "This is odd notation, I know..."
I understand that the concept of a literal applies whenever you represent a fixed value in source code, exactly as it is meant to be interpreted, as opposed to a variable or a constant, which are names for many values of a class or for a single one of them, respectively.
But literals are also contrasted with expressions. I thought that was because expressions can incorporate variables, but even expressions like 1+2 are not literals (see the first answer to What does the word "literal" mean?).
So, when I define a variable this way:
var=1+2
1+2 is not a literal even though it is not a name and evaluates to a single value. I could then guess that it is because it doesn't represent the target value directly; in other words, a literal represents a value "exactly as it is".
But then how is it possible that a function like this one is a literal (as pointed out in the same linked answer)?
(x) => x*x
Only anonymous functions can be literals, because they are not bound to an identifier.
So (x) => x*x is a literal because it is an anonymous function, or function literal,
but a
void my_func()
{#something}
is not a literal, because it is bound to an identifier.
Read these:
https://en.wikipedia.org/wiki/Literal_(computer_programming)
https://en.wikipedia.org/wiki/Anonymous_function
Expressions can be divided into two general types: atomic expressions and composite expressions.
Composite expressions can be divided by operator, and so on; atomic expressions can be divided into variables, constants, and literals. I guess different authors might use other categories or boundaries here, so it might not be universal. But I will argue why this categorization might make sense.
It's fairly obvious why strings or numbers are literals, or why a sum isn't. A function call can be considered composite, as it operates on subexpressions - its parameters. A function definition does not operate on subexpressions. Only when the so defined function is called, that call passes parameters into the function. In a compiled language, the anonymous function will likely be replaced by a target address where the corresponding code is located - that memory location is obviously not dependent on any subexpression.
@rdRahul's answer references this Wikipedia article, which says that object literals, such as {"cat", "dog"}, can be considered literals. This can be easily argued by pointing out that the object which is the value of the expression is just one opaque thing, such as a pointer to the memory location of the object.
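To summarize the distinction, here is a small sketch in TypeScript (the names are illustrative):

const n = 42;                        // 42 is a literal: the value is written as-is
const sum = 1 + 2;                   // composite expression built from the literals 1 and 2
const square = (x: number) => x * x; // the right-hand side is a function literal (anonymous)
function named(x: number) { return x * x; } // a definition bound to an identifier, not a literal
const pair = ["cat", "dog"];         // an array literal containing two string literals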
The util/ordering module contains a comment at the top of the file stating that the module parameter is constrained to contain exactly the atoms permitted by the scope for that signature.
I have read a few times (here for instance) that it is an optimization that allows the generation of a nice symmetry-breaking predicate, which I can grasp. (BTW, with respect to the said post, am I right to infer that the exactly keyword in the module parameter specification is there to enforce this exact bound explicitly (while it was implicit in pre-4.x Alloy versions)?)
However, the comment also contains a part that does not seem to refer to optimization but really to an issue that has a semantic flavour:
* Technical comment:
* An important constraint: elem must contain all atoms permitted by the scope.
* This is to let the analyzer optimize the analysis by setting all fields of each
* instantiation of Ord to predefined values: e.g. by setting 'last' to the highest
* atom of elem and by setting 'next' to {<T0,T1>,<T1,T2>,...<Tn-1,Tn>}, where n is
* the scope of elem. Without this constraint, it might not be true that Ord.last is
* a subset of elem, and that the domain and range of Ord.next lie inside elem.
So, I do not understand this, in particular the last sentence about Ord.last and Ord.next... Suppose I model a totally-ordered signature S in the classical way (i.e. specifying a total, reflexive, antisymmetric, transitive relation in S -> S, all this being possible using plain first-order logic) and that I take care to specify an exact bound for S: will it be equivalent to stating open util/ordering[S] (ignoring efficiency and confusing atom-naming issues)?
Sorry for the slow response to this. This isn't very clear, is it? All it means is that because of the symmetry breaking, the values of last, prev and next are hardwired. If that were done, and independently elem were to be bound to a set that is smaller than the set of all possible atoms for elem, then you'd have strange violations of the declarations such as Ord.last not being in the set elem. So there's nothing to understand beyond: (1) that the exactly keyword forces elem to contain all the atoms in the given scope, and (2) the ordering relation is hardwired so that the atoms appear in the "natural" order.
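For concreteness, here is a minimal sketch of the two styles the question compares (the signature and field names are illustrative):

open util/ordering[S]
sig S {}
-- the scope of S is implicitly exact, because ordering's
-- module parameter is declared with "exactly":
run {} for 4 S

versus axiomatizing the order yourself with an exact bound:

sig S { le: set S }
fact {
  all a: S | a in a.le                                         -- reflexive
  all a, b: S | (a in b.le and b in a.le) implies a = b        -- antisymmetric
  all a, b, c: S | (b in a.le and c in b.le) implies c in a.le -- transitive
  all a, b: S | b in a.le or a in b.le                         -- total
}
run {} for exactly 4 S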
Using the Algebra library, I encountered the following problem. In a proof I wanted to interpret the additive structure of a ring as a group. Here is a sample code:
theory aaa
imports "~~/src/HOL/Algebra/Ring"
begin
lemma assumes "ring R"
shows "True"
proof-
interpret ring R by fact
interpret additive: comm_group "⦇carrier = carrier R, mult = add R, one = zero R⦈" by(unfold_locales)
But I can't access the facts from the group locale. Typing
thm additive.m_assoc
gives the message "Undefined fact". However, it works when I define the additive structure using monoid.make:
interpret additivee: comm_group "monoid.make (carrier R) (add R) (zero R)" sorry
thm additivee.m_assoc
It also works if I try to do the same for the multiplicative structure, or if I remove
interpret ring R by fact
Any ideas about what's going on?
The commands interpretation and interpret only register those facts from locales that are not already in scope from previous interpretations. The ring locale is a sub-locale of comm_group with the prefix add and precisely the parameter instantiation you are giving in the first interpretation. Since all these facts are already available (albeit under a different name), interpret does not add them once more. In the interpretation additivee, the instantiation of the parameters is different, so the facts from the locale are added.
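(So, in the example above, the additive facts should already be accessible under that prefix. If the prefix is indeed add, then after interpret ring R by fact, something like

thm add.m_assoc

should print the fact that additive.m_assoc would otherwise have named.)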
I have an interesting question, but I'm not sure exactly how to phrase it...
Consider the lambda calculus. For a given lambda expression, there are several possible reduction orders. But some of these don't terminate, while others do.
In the lambda calculus, it turns out that there is one particular reduction order which is guaranteed to always terminate with an irreducible solution if one actually exists. It's called Normal Order.
I've written a simple logic solver. But the trouble is, the order in which it processes the constraints seems to have a huge effect on whether it finds any solutions or not. Basically, I'm wondering whether something like a normal order exists for my logic programming language. (Or whether it's impossible for a mere machine to deterministically solve this problem.)
So that's what I'm after. Presumably the answer critically depends on exactly what the "simple logic solver" is. So I will attempt to briefly describe it.
My program is closely based on the system of combinators in chapter 9 of The Fun of Programming (Jeremy Gibbons & Oege de Moor). The language has the following structure:
The input to the solver is a single predicate. Predicates may involve variables. The output from the solver is zero or more solutions. A solution is a set of variable assignments which make the predicate become true.
Variables hold expressions. An expression is an integer, a variable name, or a tuple of subexpressions.
There is an equality predicate, which compares expressions (not predicates) for equality. It is satisfied if substituting every (bound) variable with its value makes the two expressions identical. (In particular, every variable equals itself, bound or not.) This predicate is solved using unification.
There are also operators for AND and OR, which work in the obvious way. There is no NOT operator.
There is an "exists" operator, which essentially creates local variables.
The facility to define named predicates enables recursive looping. (A rough sketch of this overall structure appears after this list.)
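For reference, here is a rough Haskell sketch of the shape of such a solver (the actual combinators in the book differ; the names and representation here are my own guesses):

import Control.Monad (foldM)

-- Expressions: integers, variable names, or tuples of subexpressions.
data Term = I Int | V String | T [Term] deriving (Eq, Show)

-- A solution is a set of variable assignments.
type Subst = [(String, Term)]

-- Follow a chain of variable bindings to its end.
walk :: Subst -> Term -> Term
walk s (V x) = maybe (V x) (walk s) (lookup x s)
walk _ t     = t

-- The equality predicate is solved by unification (no occurs check here).
unify :: Term -> Term -> Subst -> Maybe Subst
unify a b s = case (walk s a, walk s b) of
  (t1, t2) | t1 == t2 -> Just s              -- identical terms (incl. the same variable)
  (V x, t)            -> Just ((x, t) : s)   -- bind an unbound variable
  (t, V x)            -> Just ((x, t) : s)
  (T xs, T ys) | length xs == length ys
                      -> foldM (\s' (x, y) -> unify x y s') s (zip xs ys)
  _                   -> Nothing

-- A predicate turns one substitution into a (possibly infinite) list
-- of extended substitutions, one per solution.
type Pred = Subst -> [Subst]

(===) :: Term -> Term -> Pred
(a === b) s = maybe [] (: []) (unify a b s)

-- (the "exists" operator and named predicates are omitted here)
andP, orP :: Pred -> Pred -> Pred
andP p q s = concatMap q (p s)  -- each solution of p is refined by q
orP  p q s = p s ++ q s         -- depth-first: if p diverges, q is never tried

The last two comments are the crux of the question: with (++) and concatMap, the order of the operands decides whether an infinite branch starves a finite one.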
One of the "interesting things" about logic programming is that once you write a named predicate, it typically works forwards and backwards (and sometimes even sideways). Canonical example: a predicate to concatenate two lists can also be used to split a list into all possible pairs.
But sometimes running a predicate backwards results in an infinite search, unless you rearrange the order of the terms (e.g., swap the LHS and RHS of an AND or an OR somewhere). I'm wondering whether there's some automated way to detect the best order in which to run the predicates, to ensure prompt termination in all cases where the solution set is actually finite.
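For concreteness, here is the canonical concatenation predicate in Prolog syntax (my language is different, but the behaviour is analogous):

append([], Ys, Ys).
append([X|Xs], Ys, [X|Zs]) :- append(Xs, Ys, Zs).

?- append([1,2], [3,4], Zs).   % forwards: Zs = [1,2,3,4]
?- append(Xs, Ys, [1,2,3]).    % backwards: enumerates every split of [1,2,3]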
Any suggestions?
Relevant paper, I think: http://www.cs.technion.ac.il/~shaulm/papers/abstracts/Ledeniov-1998-DCS.html
Also take a look at this: http://en.wikipedia.org/wiki/Constraint_logic_programming#Bottom-up_evaluation