Hi everyone. I am working on a question about the minimum and got confused while writing the proof steps.
The proof step I wrote:
Here, can the indicator I(θ < X1, ..., Xn < θ + 1) be defined as g(x|θ)? I looked through https://online.stat.psu.edu/stat415/lesson/24/24.2 and the definition does not make clear whether g(x|θ) must be a function of θ alone, or whether any function that involves θ is good enough.
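For concreteness, the factorization I have in mind (this assumes an iid Uniform(θ, θ+1) sample, which is my reading of the problem) is

f(x_1, \dots, x_n \mid \theta)
  = \prod_{i=1}^{n} \mathbf{1}(\theta < x_i < \theta + 1)
  = \underbrace{\mathbf{1}(\theta < x_{(1)}) \, \mathbf{1}(x_{(n)} < \theta + 1)}_{g(x \mid \theta)} \cdot \underbrace{1}_{h(x)},

so the candidate g(x|θ) depends on the data only through (x_(1), x_(n)) and on θ.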
Thank you!
I am stuck on a seemingly simple Bellman equation. The equation to be solved is self-referential and does not depend on any other functions. I wonder if there is a uniqueness proof for this class of Bellman equations.
The specific equation is the following:
F(z) = -((n-1)/R) F(rz + 1 - r) + (n/R) F((rz + 1 - r)(1 - (1-t)/n))
where n, R > 1 and 0 < r, t < 1
See also the attached image
F(z) = 0 is clearly a solution. Because only the function itself shows up on either side of the equation, and nothing else, it makes sense that 0 should be the only solution. But I cannot prove uniqueness using standard arguments such as contraction mapping. I wonder if you folks could offer some insights? Thank you.
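For what it's worth, here is how far the standard contraction argument gets me (assuming F ranges over functions bounded on a domain that both maps z -> rz + 1 - r and z -> (rz + 1 - r)(1 - (1-t)/n) send into itself). Writing the right-hand side as an operator T,

(T F)(z) = -\frac{n-1}{R} F(rz + 1 - r) + \frac{n}{R} F\Big( (rz + 1 - r)\big(1 - \tfrac{1-t}{n}\big) \Big),

the crude sup-norm bound

\| T F \|_\infty \le \frac{(n-1) + n}{R} \, \| F \|_\infty = \frac{2n - 1}{R} \, \| F \|_\infty

only makes T a contraction (and hence F = 0 the unique bounded solution) when R > 2n - 1, so the standard argument does not cover the general case.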
I'm working with linear problems over the rationals in Z3. To use Z3, I go through SBV.
An example of a problem I pose is:
import Data.SBV
solution1 = do
  x <- sRational "x"
  w <- sRational "w"
  constrain $ x .< w
  constrain $ x + 2*w .>= 0 .|| x .== 1
My question is:
Are these kinds of problems decidable?
I couldn't find a list of decidable theories or a way to tell if a theory is decidable.
The closest I found is this. The theory of the reals is decidable, but is it the same for the rationals? Intuition tells me that it is, but I have not found information that allows me to be sure.
Thanks in advance
SBV models rationals using the standard "two integers" idea; that is, it represents the numerator and the denominator separately as integers. This means that if you add two symbolic rationals, you'll have a non-linear term over the integers. So, in theory, the problem will be in the semi-decidable fragment. That is, even if you restrict your multiplications to concrete scalars, addition of symbolic rationals will give rise to non-linear terms over integers.
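Concretely, with symbolic numerators p1, p2 and denominators q1, q2, addition goes through the usual cross multiplication,

\frac{p_1}{q_1} + \frac{p_2}{q_2} \;=\; \frac{p_1 q_2 + p_2 q_1}{q_1 q_2},

and each of p_1 q_2, p_2 q_1, and q_1 q_2 is a product of two symbolic integers, i.e. a non-linear term.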
Having said that, I've had good luck using rationals: z3 was able to decide most problems of interest without much difficulty. If it proves to be an issue, you should switch to the SReal type (i.e., algebraic reals), for which z3 has a decision procedure. But of course, the models you get can now include algebraic reals, such as the square root of 2 (i.e., roots of polynomials with integer coefficients).
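For instance, a minimal sketch of the same constraints over SReal, following the example from the question (the name solution2 is just illustrative):

import Data.SBV

solution2 :: IO SatResult
solution2 = sat $ do
  x <- sReal "x"
  w <- sReal "w"
  constrain $ x .< w
  constrain $ x + 2*w .>= 0 .|| x .== 1
  pure sTrue  -- nothing more to assert; just satisfy the constraints above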
Side note: If your problem allows for delta-sat (i.e., satisfiability with perturbations), you should look into dReal (http://dreal.github.io), which SBV also supports as a backend solver. But perhaps that's not what you had in mind.
Theoretical note
Strictly speaking, linear arithmetic over rationals is decidable; see Section 3 of https://www.cs.ox.ac.uk/people/james.worrell/lecture15-2015.pdf for a proof. However, SMT solvers do not support rationals out-of-the-box, and SBV (as I mentioned above) uses two symbolic integers to represent rationals. So, adding two rationals will give rise to multiplication of two symbolic integers, taking you out of the decidable fragment. Of course, in practice, the solvers are quite adept at coming up with solutions even in the presence of non-linear terms; it's just that you're not always guaranteed an answer. So, a stricter answer to your question is: while linear arithmetic over rationals is decidable, the translation used by SBV puts the problem into the non-linear integer arithmetic domain, and hence decidability is not guaranteed. In any case, SMTLib does not come with a theory of rationals, so you're kind of out of luck when it comes to first-class support for them.
I guess a rational solution will exist iff an integer solution exists to a suitably scaled collection of constraints. For example, x = 1/2 (= 5/10), w = 3/5 (= 6/10) is a solution to your example problem. Scaling your problem by 10, we have the equivalent constraint set:
10*x < 10*w
(10*x + 20*w >= 0) || (10*x == 10)
Writing x'=10*x and w'=10*w, this means that x'=5, w'=6 is an integer solution to:
x' < w'
(x' + 2*w' >= 0) || (x' == 10)
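For concreteness, a hedged SBV sketch of that scaled problem over symbolic integers (pure linear integer arithmetic; the names scaled, x', and w' are made up) might look like:

import Data.SBV

scaled :: IO SatResult
scaled = sat $ do
  x' <- sInteger "x'"   -- stands for 10*x
  w' <- sInteger "w'"   -- stands for 10*w
  constrain $ x' .< w'
  constrain $ x' + 2*w' .>= 0 .|| x' .== 10
  pure sTrue  -- nothing more to assert; just satisfy the constraints above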
Presburger famously showed that first-order logic plus integers and addition is decidable. (Multiplication by a constant is also allowed, since it can be expanded to an addition -- e.g. 3*x is x+x+x.)
I guess the only trick left is to show that it's possible to choose what scaling to use without having solved the problem yet. Nothing obvious occurs to me off the top of my head, but it seems reasonable that this should be doable. For example, perhaps if you take the product of all the nonzero numerators and denominators in your constraint set, you can show that the set of rationals with that product as their denominator is indistinguishable from the full set of rationals. (If so, you could look through the proof to see if it still works with a smaller denominator.)
I'm not a z3 expert, so I can't talk about how this translates to whether that tool specifically is suitable, but it seems likely to me that it is possible to create a suitable tool.
Control.Category.Constrained is a very interesting project that provides a class for cartesian closed categories, Curry.
Yet I do not see why we think of all cartesian closed categories as admitting curry and uncurry (Hom(X * Y, Z) ≅ Hom(X, Z^Y) in category-theoretic terms). Wikipedia says that this property holds only for locally small cartesian closed categories. Under this post many people suggest that Hask itself is not locally small (on the other hand, everyone says that Hask is not a cartesian closed category at all, which I reckon to be pure and uninteresting formalism).
This post on Math.SE talks about assuming that all categories are locally small. But it is written from a mathematical point of view, where we discuss properties. I would like to know why we decided to concentrate on curry and uncurry as Curry’s methods. Is it because pretty much everyone who knows Haskell also knows these functions? Or is there any other reason?
I would like to know why we decided to concentrate on curry and uncurry as Curry’s methods. Is it because pretty much everyone who knows Haskell also knows these functions?
As the library author I can answer that with confidence, and the answer is yes: it is because curry and uncurry are a well-established part of the Haskell vernacular. constrained-categories was never intended to radically change Haskell and/or make it more mathematically solid in some sense, but rather to subtly generalise the existing class hierarchies – mostly to allow defining functors etc. that couldn't be given Prelude.Functor instances.
Whether Curry could be formalised in terms of local smallness I frankly don't know. I'm also not sure whether that and other “maths foundations” aspects can even be meaningfully discussed in the context of a Haskell library.
Somewhat off-topic rant ahead: It's just a fact that Haskell is a non-total language, and yes, that means just about any axiom can be thwarted by some undefined attack. But I also don't really see that as a problem. Many people seem to think of Haskell as a sort of uncanny valley: too restrictive for use in real-world applications, yet nothing can be proved properly. I see it exactly the other way around: Haskell has a sufficiently powerful type system to be able to express the mathematical ideas that are useful for real-world applications, without getting its value semantics caught up too deep in the underlying foundations to be practical to actually use in the real world. (I.e., you don't constantly spend weeks proving some “obviously it's true that...” theorem. I'm looking at you, Coq...) Instead of writing 100% rigorous proofs, we narrow down the types as best as possible and then use QuickCheck to see whether something typically works as the maths would demand.
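As a concrete example of the kind of property I mean, here is a sketch using nothing but Prelude's curry/uncurry and QuickCheck (the property names are made up, and applyFun needs a reasonably recent QuickCheck):

import Test.QuickCheck

-- Pointwise check, on one monomorphic instance, that Prelude's curry and
-- uncurry witness the round trips of Hom(X * Y, Z) ≅ Hom(X, Z^Y) in Hask.
prop_curry :: Fun (Int, Int) Int -> Int -> Int -> Bool
prop_curry f x y = curry (applyFun f) x y == applyFun f (x, y)

prop_uncurry :: Fun (Int, Int) Int -> Int -> Int -> Bool
prop_uncurry f x y = uncurry (curry (applyFun f)) (x, y) == applyFun f (x, y)

main :: IO ()
main = quickCheck prop_curry >> quickCheck prop_uncurry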
Don't get me wrong, I think formalising the foundations is important too and dependently-typed total languages are great, but all that is somewhat missing the point of where Haskell's potential really lies. At least it's not where I aim my Haskell development, including constrained-categories. If somebody who's deeper into the pure maths wants to chime in, I'm delighted to hear about it.
Because Prolog uses chronological backtracking (from the Prolog Wikipedia page) even after an answer is found (in this example, where there can only be one solution), would this justify saying that Prolog uses eager evaluation?
mother_child(trude, sally).
father_child(tom, sally).
father_child(tom, erica).
father_child(mike, tom).
sibling(X, Y) :- parent_child(Z, X), parent_child(Z, Y).
parent_child(X, Y) :- father_child(X, Y).
parent_child(X, Y) :- mother_child(X, Y).
With the following output:
?- sibling(sally, erica).
true ;
false.
To summarize the discussion with @WillNess below: yes, Prolog is strict. However, Prolog's execution model and semantics are substantially different from those of the languages that are usually labelled strict or non-strict. For more about this, see below.
I'm not sure the question really applies to Prolog, because it doesn't really have the kind of implicit evaluation ordering that other languages have. Where this really comes into play is in a language like Haskell, where you might have an expression like:
f (g x) (h y)
In a strict language like ML, there is a defined evaluation order: g x will be evaluated, then h y, and f (g x) (h y) last. In a language like Haskell, g x and h y will only be evaluated as required ("non-strict" is more accurate than "lazy"). But in Prolog,
f(g(X), h(Y))
does not have the same meaning, because it isn't using a function notation. The query would be broken down into three parts, g(X, A), h(Y, B), and f(A,B,C), and those constituents can be placed in any order. The evaluation strategy is strict in the sense that what comes earlier in a sequence will be evaluated before what comes next, but it is non-strict in the sense that there is no requirement that variables be instantiated to ground terms before evaluation can proceed. Unification is perfectly content to complete without having given you values for every variable. I am bringing this up because you have to break down a complex, nested expression in another language into several expressions in Prolog.
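To make the Haskell side of that comparison concrete, here is a minimal, purely illustrative example (the names f and g are made up):

-- f never looks at its second argument, so g is never evaluated:
f :: Integer -> Integer -> Integer
f x _ = x + 1

g :: Integer
g = error "never forced"

main :: IO ()
main = print (f 41 g)   -- prints 42; a strict language would evaluate g first and crash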
Backtracking has nothing to do with it, as far as I can tell. I don't think backtracking to the nearest choice point and resuming from there precludes a non-strict evaluation method, it just happens that Prolog's is strict.
That Prolog pauses after giving each of the several correct answers to a problem has nothing to do with laziness; it is a part of its user interaction protocol. Each answer is calculated eagerly.
Sometimes there will be only one answer but Prolog doesn't know that in advance, so it waits for us to press ; to continue search, in hopes of finding another solution. Sometimes it is able to deduce it in advance and will just stop right away, but only sometimes.
update:
Prolog does no evaluation on its own. All terms are unevaluated, as if "quoted" in Lisp.
Prolog will unfold your predicate definitions as written and is perfectly happy to keep your data structures full of unevaluated uninstantiated holes, if so entailed by your predicate definitions.
Haskell itself does not need any values; the user does, when requesting output.
Similarly, Prolog produces solutions one-by-one, as per the user requests.
Prolog can even be seen as lazier than Haskell, where all arithmetic is strict, i.e. immediate, whereas in Prolog you have to explicitly request arithmetic evaluation, with is/2.
So perhaps the question is ill-posed. Prolog's operational model is just too different. There are no "results" nor "functions", for one thing; but viewed from another angle, everything is a result, and predicates are "multi"-functions.
As it stands, the question is not correct in what it states. Chronological backtracking does not mean that Prolog will necessarily backtrack "in an example where there can be only one solution".
Consider this:
foo(a, 1).
foo(b, 2).
foo(c, 3).
?- foo(b, X).
X = 2.
?- foo(X, 2).
X = b.
So this is an example that does have only one solution and Prolog recognizes that, and does not attempt to backtrack. There are cases in which you can implement a solution to a problem in a way that Prolog will not recognize that there is only one logical solution, but this is due to the implementation and is not inherent to Prolog's execution model.
You should read up on Prolog's execution model. From the Wikipedia article which you seem to cite, "Operationally, Prolog's execution strategy can be thought of as a generalization of function calls in other languages, one difference being that multiple clause heads can match a given call. In that case, [emphasis mine] the system creates a choice-point, unifies the goal with the clause head of the first alternative, and continues with the goals of that first alternative." Read Sterling and Shapiro's "The Art of Prolog" for a far more complete discussion of the subject.
From Wikipedia I got:
In eager evaluation, an expression is evaluated as soon as it is bound to a variable.
Then I think there are two levels: at the user level (our predicates), Prolog is not eager.
But it is eager at the 'system' level, because variables are implemented as efficiently as possible.
Indeed, attributed variables are implemented to be lazy, and are rather 'orthogonal' to 'logic' Prolog variables.
I do not really know if this is scientifically proven, but I've read in a book (it was a relatively modern AI book by Peter Norvig) that second-order logic programming could be more expressive than existing first-order languages.
The question is: Is it formally/symbolically proven that higher-order predicate logics exceed first-order predicate logics in expressive power? Or do they just bring modularity/convenience/maintainability to your knowledge bases?
Additionally: if there is some firm direction in which I could go seeking more expressive power than I have (I mean precisely the descriptive potential of the symbols I write in a given semantics/syntax), then I would be glad to hear about almost anything :)
Thank you.
Second order logic is more powerful and expressive than first order logic. Second order logic allows one to quantify over relations in addition to variables; thus it is possible, using a single sentence of second order logic, to express something that would require an infinite number of first order logic sentences. The relationship is similar to that between FOL and propositional logic.
As an example, consider the SOL statement:
∀R ∃x ∃y (x R y)
This states that for any relation R there are x and y such that x R y holds. In order to express this in FOL, one would need a statement for each relation R in the language, which clearly could be infinite.
For a more interesting example, one could look at the proof that the transitive closure of a relation is not expressible in FOL. I can post it if you want to see it; but for the sake of succinctness I will omit it unless someone wants it.
Edit: You may also be interested in Descriptive Complexity -- essentially, it ties together the notions of complexity and expressibility: if you can fully state a problem in a certain fragment of logic, then you know it is contained within the corresponding complexity class. For example, if a problem can be stated in Existential Second Order Logic, then it's in NP; if it can be stated in First Order Logic + a Least Fixed Point operator (over ordered structures), then it's in P. If you can show that every statement of existential second order logic can be translated to FOL(LFP), then you've proven P=NP. (Well, you've proven NP ⊆ P, but since the other containment is already known, you've proven equality.)
You may want to look into dependent type theories.