Uniqueness of the solution to a Bellman equation - dynamic-programming

I am stuck on a seemingly simple Bellman equation. The function to be solved is self-referential and does not depend on any other functions. I wonder if there is a uniqueness proof for this class of Bellman equations.
The specific equation is the following:
F(z) = -((n-1)/R) F(rz + 1 - r) + (n/R) F((rz + 1 - r)(1 - (1-t)/n))
where n, R > 1 and 0 < r, t < 1.
F(z) = 0 is clearly a solution, because only the function itself shows up on either side of the equation and nothing else. It makes sense that 0 would be the only solution, but I cannot prove uniqueness using standard arguments such as contraction mapping. I wonder if you folks could offer some insights? Thank you.
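For reference, here is how far the obvious estimate gets me (a sketch, assuming F ranges over bounded functions on an interval that both maps z ↦ rz + 1 - r and z ↦ (rz + 1 - r)(1 - (1-t)/n) keep invariant). Writing the right-hand side as an operator T and setting a = rz + 1 - r, b = a(1 - (1-t)/n):

|T F(z) - T G(z)| \le ((n-1)/R) |F(a) - G(a)| + (n/R) |F(b) - G(b)| \le ((2n-1)/R) \sup_w |F(w) - G(w)|

So T is a sup-norm contraction only when (2n-1)/R < 1, i.e. R > 2n - 1, which the stated conditions n, R > 1 do not guarantee. This is exactly where the standard argument stops working for me.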


Why is it fair to think of just locally small cartesian closed categories in Haskell for the Curry class?

Control.Category.Constrained is a very interesting project that provides a class for cartesian closed categories, called Curry.
Yet I do not see why we may think of all cartesian closed categories as supporting curry and uncurry (Hom(X × Y, Z) ≅ Hom(X, Z^Y) in category-theoretic terms). Wikipedia says that this property holds only for locally small cartesian closed categories. Under this post many people suggest that Hask itself is not locally small (on the other hand, everyone says that Hask is not a cartesian closed category at all, which I reckon to be pure and uninteresting formalism).
This post on Math.SE speaks of assuming all categories are locally small. But it is written from a mathematical point of view, where we discuss properties. I would like to know why we decided to concentrate on curry and uncurry as Curry's methods. Is it because pretty much everyone who knows Haskell also knows these functions? Or is there some other reason?
I would like to know why we decided to concentrate on curry and uncurry as Curry’s methods. Is it because pretty much everyone who knows Haskell also knows these functions?
As the library author I can answer that with confidence, and the answer is yes: it is because curry and uncurry are a well-established part of the Haskell vernacular. constrained-categories was never intended to radically change Haskell and/or make it more mathematically solid in some sense, but rather to subtly generalise the existing class hierarchies – mostly to allow defining functors etc. that couldn't be given Prelude.Functor instances.
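To give a rough idea of what that generalisation looks like, here is a deliberately simplified sketch with made-up names; it is not the actual class hierarchy from constrained-categories, which carries extra object constraints, but it shows the shape curry and uncurry take once the category becomes a parameter:

{-# LANGUAGE TypeFamilies #-}
import Prelude hiding (id, (.), curry, uncurry)
import Control.Category

-- Hypothetical, stripped-down classes for illustration only.
class Category k => Cartesian k where
  type Product k a b

class Cartesian k => CurryLike k where
  type Exponential k a b
  curryK   :: k (Product k a b) c -> k a (Exponential k b c)
  uncurryK :: k a (Exponential k b c) -> k (Product k a b) c

-- For plain Haskell functions this collapses to the Prelude definitions.
instance Cartesian (->) where
  type Product (->) a b = (a, b)

instance CurryLike (->) where
  type Exponential (->) a b = a -> b
  curryK f   = \a b -> f (a, b)
  uncurryK f = \(a, b) -> f a b

The only point is that when the category is (->), curryK and uncurryK have exactly the types of the Prelude's curry and uncurry, which is why those names are the natural ones to generalise.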
Whether Curry could be formalised in terms of local smallness I frankly don't know. I'm also not sure whether that and other "maths foundations" aspects can even be meaningfully discussed in the context of a Haskell library.

Somewhat off-topic rant ahead: it's just a fact that Haskell is a non-total language, and yes, that means just about any axiom can be thwarted by some undefined attack. But I also don't really see that as a problem. Many people seem to think of Haskell as a sort of uncanny valley: too restrictive for use in real-world applications, yet nothing can be proved properly. I see it exactly the other way around: Haskell has a sufficiently powerful type system to be able to express the mathematical ideas that are useful for real-world applications, without getting its value semantics caught up too deep in the underlying foundations to be practical to actually use in the real world. (I.e., you don't constantly spend weeks proving some "obviously it's true that..." theorem. I'm looking at you, Coq...) Instead of writing 100% rigorous proofs, we narrow down the types as best as possible and then use QuickCheck to see whether something typically works as the maths would demand.
Don't get me wrong, I think formalising the foundations is important too and dependently-typed total languages are great, but all that is somewhat missing the point of where Haskell's potential really lies. At least it's not where I aim my Haskell development, including constrained-categories. If somebody who's deeper into the pure maths wants to chime in, I'm delighted to hear about it.

Haskell: use or uses in Getter

In Control.Lens we have Getter, which can access nested structure. Getter has use and uses, but it's not clear to me how they work. So it'd be great if someone could provide some simple examples where use or uses is utilised.
Why do I need to know this? Because I'm reading some implementation in Haskell and use and uses were used in it. In particular it says:
inRange <- uses fsCurrentCoinRangeUpperBound (coinIndex <=)
If the above code is just comparing two values with (<=), then why do we need uses at all there?
As I tried to make clear in my answer to your other question over at Use cases of makePrisms with examples, there is a lot of requisite knowledge before you can understand this.
First up, you have to understand Lens quite well. Judging from your other question, you're just beginning with them. This is great! They're amazingly cool and it's excellent to tackle such things.
However, I'd offer a big note of caution here: one of the dangers of Haskell is that it's so powerful, and can be so expressive and terse, that it seems easy to try to skip stuff.
If you skipped understanding algebraic data types very well, for example, you can easily read code and think you have an understanding of it when you don't at all. This can then lead to compounded confusion, and you'll feel like you don't understand any of it, which actually might be true, but that feeling is not a good feeling to have when learning Haskell.
I don't want you to feel like that.
So I encourage you to learn Lens, but if you don't have the requisite knowledge for Lens, then I encourage you to go get that first. It's not too hard to understand this stuff to a degree, but the way Lens is written is neither trivial nor easy to approach for programmers who aren't at least familiar with simple types, parameterized types, algebraic data types, typeclasses, and the Functor typeclass; and to really understand it, you need to understand several instances of Functor.
As well, if you're trying to understand use and uses, which only make sense when dealing with State values, then I'd suggest it's almost impossible to understand what's happening without knowing what State is, as well as what Lens does and is.
use and uses take a lens and look into the current state inside a State computation. So, to a degree, you really need to also understand what do syntax is doing, and therefore the Monad typeclass to a degree, as well as how State / MonadState works from that perspective.
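To make that concrete, here is a tiny, self-contained sketch; the record and field names are made up and not from the code you are reading:

{-# LANGUAGE TemplateHaskell #-}
import Control.Lens
import Control.Monad.State

data Wallet = Wallet
  { _balance    :: Int
  , _upperBound :: Int
  } deriving Show
makeLenses ''Wallet

-- `use l` reads the part of the current state that the lens l focuses on;
-- `uses l f` additionally applies f to it, which is exactly the shape of
-- the line quoted in the question.
canSpend :: Int -> State Wallet Bool
canSpend x = do
  b       <- use balance              -- b :: Int
  inRange <- uses upperBound (x <=)   -- inRange :: Bool
  return (x <= b && inRange)

main :: IO ()
main = print (evalState (canSpend 5) (Wallet 10 7))   -- prints True

So the line you quoted, uses fsCurrentCoinRangeUpperBound (coinIndex <=), means "fetch the field that lens focuses on from the current state and test whether coinIndex is at most that value"; without uses you would have to get the whole state and then view the field yourself.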
If any of these preliminaries are skipped, you'll be confused.
I hope this helps! And I wish you well.

Wrapped floating point type with built-in epsilon

I'm working on some geometrical calculations that will require me to compare coordinates based on Doubles. I usually deal with the floating point inaccuracies in this situation by including some artificial epsilon. This is common and there is lots of information available on this topic.
http://floating-point-gui.de/errors/comparison/
http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm
My thought is to wrap Double in a newtype and implement Eq and Ord using epsilon. This seems like such an obvious concept that either it's already been done and must be in a library on Hackage, or there is something obviously wrong with the concept that I haven't thought of yet. So my questions: does anyone know of an existing module that contains a similar type (I did a quick search and did not see anything)? Or is this a bogus idea? Thanks.
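For concreteness, the sort of thing I have in mind is roughly this (the epsilon is hard-wired here purely for illustration; I realise an Eq defined this way is not transitive, which may well be a candidate for the "something obviously wrong"):

newtype ApproxDouble = ApproxDouble Double
  deriving Show

epsilon :: Double
epsilon = 1e-9

-- Equality up to epsilon; note this relation is not transitive.
instance Eq ApproxDouble where
  ApproxDouble a == ApproxDouble b = abs (a - b) < epsilon

-- Ordering that treats "within epsilon" as EQ.
instance Ord ApproxDouble where
  compare x@(ApproxDouble a) y@(ApproxDouble b)
    | x == y    = EQ
    | a < b     = LT
    | otherwise = GT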
It's not a bogus idea. One approach is to create types that let you write floating point expressions which, in order to be evaluated, require a piece of configuration data, i.e. the value of epsilon. This would work much like the Reader monad.
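A rough sketch of that idea (my own illustration, not taken from any particular library): a comparison is not a Bool outright but a computation that still needs the epsilon, threaded Reader-style.

import Control.Monad.Reader

type Epsilon = Double

-- The comparison is a Reader computation awaiting the epsilon.
approxEq :: Double -> Double -> Reader Epsilon Bool
approxEq a b = do
  eps <- ask
  return (abs (a - b) < eps)

example :: Bool
example = runReader (approxEq 0.1 (0.3 - 0.2)) 1e-9
-- True, even though 0.3 - 0.2 /= 0.1 exactly for Doubles

The approach linked below (and the reflection package) effectively moves that configuration to the type level, so you can still hand out ordinary Eq and Ord instances that depend on it.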
A nice approach to this problem is given in:
http://okmij.org/ftp/Haskell/types.html#Prepose
and an efficient implementation for GHC may be found on Hackage in the reflection package.

Would error messages improve if we could disable features?

Inspired by this reddit discussion, I'm wondering if there are any technical obstacles in the way of a "Dr Scheme"-style approach to assisting beginners when it comes to comprehending error messages. A case in point is the notorious
Prelude> 1 "doesn't"
<interactive>:3:1:
No instance for (Num ([Char] -> t0))
arising from the literal `1'
Possible fix: add an instance declaration for (Num ([Char] -> t0))
In the expression: 1
In the expression: 1 "doesn't"
In an equation for `it': it = 1 "doesn't"
Suppose we were to make the local assumption that the programmer will declare no new instances. Indeed, one can get a long way in Haskell interacting with Prelude classes only via deriving, so this assumption is not unrealistic for beginners. Under such an assumption, the above error message would be beside the point. Could we improve upon it? We might still need to figure out how much to say about the type of 1, but we could surely be more direct about the problem.
Are there other opportunities to reframe error messages on the basis of realistic simplifying assumptions? Note, I'm asking a question about changing the text of error messages based on a model of the programmer's experience: I am not locally considering any changes to which code is considered erroneous (e.g., by assuming that beginners will use overloaded things at specialised types, eliminating ambiguity).
A follow-up thought: does the text of the existing error messages contain enough information to support such transformations, assuming a "configuration file" modelling the student? That is, could an enterprising hacker implement an intelligibility-enhancing postprocessor for ghci without troubling the busy people at HQ?
I'm not convinced that ignoring certain features for the sake of error messages would be helpful.
For one thing, are the errors encountered by beginners really that different from anyone else? Just because I know how to interpret a "no Num instance for this absurd thing which is clearly not a numeric type" error doesn't mean I wouldn't appreciate GHC more clearly pointing out what bone-headed thing I've done to provoke such a message.
I would suggest that the circumstances in which a beginner or an expert would be writing instances, or any other kind of "advanced" code, differ very little. The beginner, however, will be coding under those circumstances far less often, with an inclusive lower bound of "never".
If one wishes to insulate beginners from arcane corners of the language, then make sure they don't encounter them at all. For instance (ha, ha), there's an argument to be made for avoiding type classes entirely when first introducing the language.
I can't think of any cases where a more beginner-friendly error message, so long as it doesn't throw away information, would not also be more pleasant for everyone else as well.
Suppose we were to make the local assumption that the programmer will declare no new instances.
Suppose we instead make the assumption that the programmer will declare no orphan instances where the class and/or type are defined by the Haskell Report. This assumption is most assuredly realistic for beginners and, in fact, seems reasonable as a general rule.
Adding such an instance because of a compiler error is plausible in very few situations. With the glaring and galling exception of the anonymous writer monad (unless that has been remedied since I last checked), orphan instances for a standard class and type seem exceedingly unlikely; one or the other should only arise when a library has unexpectedly overlooked a few instances, or is old and dusty enough to predate the prevalence of stuff like Applicative or Foldable. In any case, anyone who does want such an instance can safely be assumed to know what they're doing well enough to interpret GHC's error message, whatever it may be.
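For concreteness, this is roughly the orphan instance that GHC's "possible fix" in the question is gesturing at; both Num and (->) live in base/the Report, so writing it is precisely what the assumption rules out (and it is almost never a good idea):

instance Num b => Num (a -> b) where
  fromInteger = const . fromInteger
  f + g       = \x -> f x + g x
  f * g       = \x -> f x * g x
  negate f    = negate . f
  abs f       = abs . f
  signum f    = signum . f

-- With this orphan in scope, `1 "doesn't"` suddenly type-checks: the literal
-- elaborates to `const (fromInteger 1)`, so the expression just evaluates to 1.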
But enough about that particular example.
Are there other opportunities to reframe error messages on the basis of realistic simplifying assumptions?
Very likely, but we're also very far from exhausting the space of realistic complicating assumptions!
The point of the above digression on instances is that, by using information already available to the compiler and heuristics that examine a larger context (e.g., where certain things are defined, to distinguish the above example from non-orphan instances), error messages could be improved in general.
A follow-up thought: does the text of the existing error messages contain enough information to support such transformations, assuming a "configuration file" modelling the student?
I really can't see this working very well except in the simplest and most egregious cases, your example being a notable one. It could rephrase existing flavors of error, but beginners would benefit most from having more detailed/explicit information that the existing message may not contain.
Consider two other error messages that are chronically vexing to beginners: complaints about matching "rigid type variables", and complaints about "infinite types". Rephrasing might improve those somewhat, but short of including a tutorial on polymorphism in each error message it's not going to much help a beginner fix their code, unless you can interpret the relevant expression and type well enough to be specific about the cause.
Together with inspecting the source code, a "translator" might have some legs, but I suspect modifying GHC directly will quickly become the easiest route. Generating error messages would be an interesting extensibility point, but I don't think this can be done with GHC's existing plugin infrastructure.
The greatest obstacle to any of this, I expect, is not implementation. Rather, it's the decision of which heuristics would improve error messages rather than accomplishing the opposite. Without a base of objective data, this is something of an error-prone (ha, ha) guessing game. It would be very interesting to assemble a large survey of errors encountered by (primarily intermediate or novice) Haskell programmers and, most importantly, what actual change they made to resolve the issue.
I believe that Helium, a compiler for a subset of Haskell, is an attempt to do this. It has been a long time since I tried it, but IIRC it produced much better error messages than GHC.

A Question About the Expressive Power of Higher-Order Logical Reasoning Formalisms

I do not really know if this is scientifically proven, but I've read in a book (a relatively modern AI book by Peter Norvig) that second-order logic programming could be more expressive than existing first-order languages.
The question is: is it formally proven that higher-order predicate logics exceed first-order predicate logic in expressive power? Or do they just bring modularity/convenience/maintainability to your knowledge bases?
Additionally: if there is some firm direction in which I could go seeking more expressive power than I have (I mean exactly the descriptive potential of the symbols I write in a given semantics/syntax), then I would be glad to hear about almost anything :)
Thank you.
Second order logic is more powerful and expressive than first order logic. Second order logic allows one to quantify over relations in addition to variables; thus it is possible, using a single sentence of second order logic, to express something that would require an infinite number of first order logic sentences. The relationship is similar to that between FOL and propositional logic.
As an example, consider the SOL statement:
\forall R \exists x \exists y (x R y)
This states that for any relation R there are x and y such that x R y holds. In order to express this in FOL, one would need a statement for each relation R in the language, which clearly could be infinite.
For a more interesting example, one could look at the proof that the transitive closure of a relation is not expressible in FOL. I can post it if you want to see it; but for the sake of succinctness I will omit it unless someone wants it.
Edit: You may also be interested in Descriptive Complexity -- essentially, it ties together the notions of complexity and expressibility: if you can fully state a problem in a certain fragment of logic, then you know it is contained within the corresponding complexity class. For example, if a problem can be stated in existential second-order logic, then it's in NP; if it can be stated in first-order logic plus a least fixed point operator (FO(LFP)), then it's in P. If you can show that every statement of existential second-order logic can be translated to FO(LFP), then you've proven P = NP. (Well, you've proven NP ⊆ P, but since the other containment is already known, you've proven equality.)
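For a concrete taste of the existential second-order direction: graph 3-colourability, a textbook NP problem, is captured by a single existential second-order sentence, where E is the edge relation and R, G, B are the quantified colour predicates:

\exists R \exists G \exists B \forall x [ (R(x) \lor G(x) \lor B(x)) \land \forall y ( E(x,y) \rightarrow \lnot((R(x) \land R(y)) \lor (G(x) \land G(y)) \lor (B(x) \land B(y))) ) ]

A graph satisfies this sentence exactly when it is 3-colourable, which is Fagin's theorem in miniature: one NP property, one existential second-order sentence.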
You may want to look into dependent type theories.
