Bartosz Milewski, in the Terminal Object section of Chapter 5, underlines the following:
Notice that in this example the uniqueness condition is crucial, because there are other sets (actually, all of them, except for the empty set) that have incoming morphisms from every set. For instance, there is a Boolean-valued function (a predicate) defined for every type:
yes :: a -> Bool
yes _ = True
But Bool is not a terminal object. There is at least one more Bool-valued function from every type:
no :: a -> Bool
no _ = False
Insisting on uniqueness gives us just the right precision to narrow down the definition of the terminal object to just one type.
And all this comes after the Initial Object section, where no similar observation is made.
Therefore I have the feeling that I might be missing a key difference between initial and terminal objects.
From my understanding, uniqueness is crucial for initial objects too, because I can observe that there are other sets (actually, all of them) that have outgoing morphisms to every set (except the empty set), but in general there would be more than one such morphism (except for the ones going to ()).
So my question is: is there any difference between initial and terminal objects, beside the direction of the arrows that connect them to the other objects, that Bartosz is probably trying to underline with the quoted text?
I know that questions like "What does the author mean?" can be a bit ill-posed, as we are not in his mind, but I guess someone who knows category theory can at least offer some plausible hypotheses.
that have outgoing morphisms to every set (except the empty set)
That's the key part. The requirement is that there be outgoing morphisms to every set, full stop. The existence of one set where this breaks down is enough to undermine it. That's why, indeed, the initial object in Set is already well-defined without even requiring uniqueness: the empty set is the only set that has an outgoing arrow to the empty set.
Meanwhile, every non-empty set has incoming arrows from truly every set, including the empty one, but only for one-element sets is this arrow unique.
Here's the direct quote from the book: "The initial object is the object that has one and only one morphism going to any object in the category." Notice the only one part.
BTW, I was also careful not to say "any other object," because there also is a unique morphism from the initial object to itself: it's always the identity.
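To see the uniqueness concretely, here is a minimal sketch in Haskell (the name unit is mine, for illustration): any total function into the one-element type () can only return (), so there is exactly one arrow from each type into the terminal object.
-- The unique arrow into the terminal object ():
-- any total function a -> () must return (), so this is the
-- only possible definition (up to extensional equality).
unit :: a -> ()
unit _ = ()
Contrast this with Bool above: yes and no are two distinct arrows a -> Bool, which is exactly what disqualifies Bool from being terminal.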
In Java, with minor exceptions, null is a value of every type. Is there a corresponding value like that in Haskell?
Short answer: Yes
As in any sensible Turing-complete language, infinite loops can be given any type:
loop :: a
loop = loop
This (or one of its cousins, such as undefined or error) is occasionally useful as a temporary placeholder for as-yet-unimplemented functionality, or as a signal to readers that we are in a dead branch for reasons that are too tedious to explain to the compiler. But it is generally not used at all analogously to the way null is typically used in Java code.
Normally, to signal the lack of a value when that's a sensible thing to do, one instead uses
Nothing :: Maybe a
which, while it can't be any type at all, can be the lack of any type at all.
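As a quick illustration (safeDiv is my own example, not from the original answer): returning Maybe lets the type itself say that a result may be absent, and forces callers to handle that case.
-- Absence is encoded in the type: division by zero yields Nothing.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)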
Technically yes, as Daniel Wagner's answer states.
However, I would argue that "a value that can be used for every type" and "a value like Java's null" are actually very different requirements. Haskell does not have the latter. I think this is a good thing (as does Tony Hoare, who famously called his invention of null references a billion-dollar mistake).
Java-like null has no properties except that you can check whether a given reference is equal to it. Anything else you ask of it will blow up at runtime.
Haskell undefined (or error "my bad", or let x = x in x, or fromJust Nothing, or any of the infinite ways of getting at it) has no properties at all. Anything you ask of it will blow up at runtime, including whether any given value is equal to it.
This is a crucial distinction, because it makes undefined near-useless as a "missing" value. It's not possible to do the equivalent of if (thing == null) { do_stuff_without_thing(); } else { do_stuff_with(thing); } using undefined in place of null in Haskell. The only code that can safely handle a possibly-undefined value is code that never inspects that value at all, and so you can only safely pass undefined to other code when you know that it won't be used in any way [1].
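A hedged sketch of why this can't work (mirroring the Java snippet above; the name doStuff is mine): the comparison itself forces undefined, so the "null check" crashes before either branch is taken.
-- Attempting the Java idiom with undefined in place of null:
doStuff :: Int -> String
doStuff thing =
  if thing == undefined        -- forces undefined: throws at runtime
    then "do stuff without thing"
    else "do stuff with " ++ show thing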
Since we can't do "null pointer checks", in Haskell code we almost always use some type T (for arguments, variables, and return types) when we mean there will be a value of type T, and we use Maybe T [2] when we mean that there may or may not be a value of type T.
So Haskellers use Nothing roughly where Java programmers would use null, but Nothing is in practice very different from Haskell's version of a value that is of every type. Nothing can't be used on every type, only "Maybe types", but there is a "Maybe version" of every type. The type distinction between T and Maybe T means that it's clear from the type whether you can omit a value, when you need to handle the possible absence of a value [3], etc. In Java you're relying on the documentation being correct (and present) to get that knowledge.
[1] Laziness does mean that the "won't be inspected at all" situation can come up a lot more than it would in a strict language like Java, so sub-expressions that may or may not be the bottom value are not that uncommon. But even their use is very different from Java's idioms around values that might be null.
[2] Maybe is a data type with the definition data Maybe a = Nothing | Just a, where the Nothing constructor carries no other information and the Just constructor just stores a single value of type a. So for a given type T, Maybe T adds an additional "might not be present" feature and nothing else to the base type T.
[3] And the Haskell version of handling possible absence is usually using combinators like maybe or fromMaybe, or pattern matching, all of which have the advantage over if (thing == null) that the compiler is aware of which part of the code is handling a missing value and which is handling the value.
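For concreteness, here is a small sketch of the three styles mentioned in [3] (the function names are mine); each one makes the missing-value branch visible to the compiler.
import Data.Maybe (fromMaybe)

-- Pattern matching: both cases are spelled out explicitly.
describe :: Maybe Int -> String
describe Nothing  = "no value"
describe (Just x) = "got " ++ show x

-- fromMaybe supplies a fallback for the Nothing case.
withDefault :: Maybe Int -> Int
withDefault = fromMaybe 0

-- maybe takes both branches at once: a default and a function.
render :: Maybe Int -> String
render = maybe "empty" show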
Short answer: No
It wouldn't be very type-safe to have it. Maybe you can add more information to your question about what you are trying to accomplish.
Edit: Daniel Wagner is right. An infinite loop can be of every type.
Short answer: Yes. But also no.
While it's true that an infinite loop, a.k.a. undefined (the two are identical in the denotational semantics), inhabits every type, it is usually sufficient to reason about programs as if these values didn't exist, as argued in the popular paper Fast and Loose Reasoning is Morally Correct.
Bottom inhabits every type in Haskell. It can be written explicitly as undefined in GHC.
I disagree with almost every other answer to this question.
loop :: a
loop = loop
does not define a value of any type. It does not even define a value.
loop :: a
is a promise to return a value of type a.
loop = loop
is an endless loop, so the promise is broken. Since loop never returns at all, it follows that it never returns a value of type a. So no, even technically, there is no null value in Haskell.
The closest thing to null is to use Maybe. With Maybe you have Nothing, and this is used in many contexts. It is also much more explicit.
A similar argument can be used for undefined. When you use undefined in a non-strict setting, you just have a thunk that will throw an error as soon as it is evaluated. But it will never give you a value of the promised type.
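A small illustration of that thunk behavior (my example, not the answerer's): as long as the undefined part is never forced, the program runs fine; forcing it throws.
-- The undefined half of the pair is never needed by snd:
pair :: (Int, Int)
pair = (undefined, 42)

main :: IO ()
main = do
  print (snd pair)  -- prints 42
  print (fst pair)  -- throws: Prelude.undefined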
Haskell has a bottom value because it is unavoidable. Due to the halting problem, you cannot in general prove that a function will actually return at all, so it is always possible to break promises. Just because someone promises to give you $100, it does not mean that you will actually get it. He can always say "I didn't specify when you would get the money" or just refuse to keep the promise. The promise doesn't even prove that he has the money or that he would be able to provide it when asked.
An example from the Apple world:
Objective-C had a null value, called nil. The newer Swift language switched to an Optional type, where Optional<T> can be abbreviated to T?. It behaves just like Haskell's Maybe monad. Why did they do this? Maybe because of Tony Hoare's apology. Maybe because Haskell was one of Swift's role models.
When the ordering module is invoked on a set, all these functions suddenly become available on the set: first, last, next, prev, etc.
first returns the first atom.
last returns the last atom.
first.next returns the second atom.
But wait!
A set is, by definition, unordered. So how can you order a set?
Consider this set of colors:
abstract sig Color {}
one sig red extends Color {}
one sig yellow extends Color {}
one sig green extends Color {}
What does it mean to order that set of colors? Suppose we call the ordering module:
open util/ordering[Color]
What does first return? What does last return? What does first.next return?
Let’s have Alloy generate some instances:
run {}
Here are a few of the instances that are generated:
Instance #1
first returns: yellow
last returns: green
first.next returns: red
Instance #2
first returns: yellow
last returns: red
first.next returns: green
Instance #3
first returns: green
last returns: yellow
first.next returns: red
Notice that the ordering is different with each instance.
Now, let’s order a plain signature:
open util/ordering[Time]
sig Time {}
run {}
Only one instance is generated:
Instance #1
first returns: Time0
last returns: Time2
first.next returns: Time1
No more instances!
Lessons Learned
For a set created by enumerating its atoms, the ordering module may order the set in any way, and a different ordering can appear in each instance.
For a set created by a signature, the ordering module orders the set this way: Blah0, Blah1, Blah2, …, where “Blah” is the signature name.
I think the real lesson, however, is that the functions provided in the ordering module (first, last, next, etc.) make it appear that the set is ordered. But that is an illusion; it is just a view placed on top of the set. The set, in fact, has no ordering.
Do I have a correct understanding? Anything you would add to this?
The reason behind this difference comes down to two words: symmetry breaking.
In short, Alloy wants to avoid returning isomorphic instances (instances which are the same up to label renaming).
Think of it this way:
When you analyze a model consisting solely of the signature one Time{}, the analyzer will return a single instance composed of the atom Time$0. Labeling this atom Time$1, Time$2, or CoolAtom won't change the fact that the instance is composed of a single atom of type Time.
When analyzing the model declaring Color as being either red, yellow or green, the analyzer will return 3 instances, each consisting of a different color.
Why, you ask? Because those atoms are semantically different, in the sense that they do not have the same type.
Note that you have not, in any way, created a set by enumerating its atoms. You have defined the set of colors as being composed of several sets (red, yellow, green), which all happen to have an arity of one.
Your final understanding, however, is correct, in the sense that using ordering doesn't alter the essence of the signature it is used on (defining a set of atoms), but rather provides functions used to define an ordering on the elements typed by that signature.
When you invoke the ordering module on some set S, you are simply adding an ordering to every instance, as if you had explicitly included a homogeneous relation on the set S. As Loic notes, the key reason this is better than doing it explicitly is that you get symmetry breaking for free, and thus better performance (and more convenient atom numbering). And of course, you don't need to axiomatize the ordering yourself.
I would like to know a bit more about the bounding this error relates to.
I have an Alloy model for which I create an instance manually (by writing it down in XML).
This instance is readable and the A4Solution can be correctly displayed.
But when I try to evaluate an expression in this instance using the eval() function, I receive this error message, even though the field name and type of the exprvar retrieved from the model are exactly the same as those in the instance.
I would like to know what this bounding consists of. What properties are taken into consideration to tell that one element of the instance is bound to one element of the model?
Is the hidden ID appearing somewhere in the XML taken into consideration?
I would like to know what this bounding consists of.
It means that every free variable (i.e., relation in Kodkod) has to be given a bound before asking the solver to solve the formula.
What properties are taken into consideration to tell that one element of the instance is bound to one element of the model?
The instance XML file contains an exact value (a tuple set) for each and every sig and field from the model; those tuple sets are exactly the bounds that should be used when evaluating expressions. I'm not sure how exactly you are trying to use the Alloy API, but recreating the bounds from the XML file should (mostly) be handled by the API behind the scenes.
Judging by your last comment to my previous answer, your problem might be related to this post.
In a nutshell, if you are trying to evaluate AST expression objects obtained from one "alloy world" (CompModule) against a solution that was entirely recreated from an XML file, you will get exactly that error. For example, if you have two PrimSig objects, s1 and s2, and they both have the same name (e.g., both were obtained by parsing sig S {...}), they are not "Java equals" (s1.equals(s2) returns false); the same holds for the underlying Kodkod relations/variables. So if you then try to evaluate s1 in a context which has a bound only for s2, you'll get an error saying that s1 is not bound.
Providing the set of Signatures defined in the metamodel while reading the A4Solution solved my problem.
Thus, changing:
solution = A4SolutionReader.read(null, new XMLNode(f));
to
solution = A4SolutionReader.read(mm.getAllReachableSigs(), new XMLNode(f));
solved this illegal bounding issue.
I'm having trouble using Haskell's type system elegantly. I'm sure my problem is a common one, but I don't know how to describe it except in terms specific to my program.
The concepts I'm trying to represent are:
datapoints, each of which takes one of several forms, e.g. (id, number of cases, number of controls), (id, number of cases, population)
sets of datapoints and aggregate information: (set of id's, total cases, total controls), with functions for adding / removing points (so for each variety of point, there's a corresponding variety of set)
I could have a class of point types and define each variety of point as its own type. Alternatively, I could have one point type and a different data constructor for each variety. Similarly for the sets of points.
I have at least one concern with each approach:
With type classes: Avoiding function name collision will be annoying. For example, both types of points could use a function to extract "number of cases", but the type class can't require this function because some other point type might not have cases.
Without type classes: I'd rather not export the data constructors from, say, the Point module (providing other, safer functions to create a new value instead). Without the data constructors, I won't be able to determine which variety a given Point value is.
What design might help minimize these (and other) problems?
To expand a bit on sclv's answer, there is an extended family of closely related concepts that amount to providing some means of deconstructing a value: catamorphisms, which are generalized folds; Church encoding, which represents data by its operations, and is often equivalent to partially applying a catamorphism to the value it deconstructs; CPS transforms, where a Church encoding resembles a reified pattern match that takes separate continuations for each case; representing data as a collection of operations that use it, usually known as object-oriented programming; and so on.
In your case, what you seem to want is an abstract type, i.e. one that doesn't export its internal representation, but not a completely sealed one, i.e. one that leaves the representation open to functions in the module that defines it. This is the same pattern followed by things like Data.Map.Map. You probably don't want to go the type class route, since it sounds like you need to work with a variety of data points, rather than with an arbitrary choice of a single type of data point.
Most likely, some combination of "smart constructors" to create values and a variety of deconstruction functions (as described above) exported from the module is the best starting point. Going from there, I expect most of the remaining details will have an obvious approach.
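To make that concrete, here is a hedged sketch (the type, field choices, and names are my own illustration, not from the question): the module exports the type and a validating smart constructor, but hides the data constructor.
module DataPoint
  ( DataPoint       -- type is exported, constructor is not
  , mkCaseControl   -- smart constructor
  , pointId
  , caseCount
  ) where

-- id, number of cases, number of controls
data DataPoint = CaseControl Int Int Int

-- Smart constructor: refuses negative counts, so invalid
-- values never enter the rest of the program.
mkCaseControl :: Int -> Int -> Int -> Maybe DataPoint
mkCaseControl pid nCases nControls
  | nCases >= 0 && nControls >= 0 = Just (CaseControl pid nCases nControls)
  | otherwise                     = Nothing

pointId :: DataPoint -> Int
pointId (CaseControl pid _ _) = pid

caseCount :: DataPoint -> Int
caseCount (CaseControl _ nCases _) = nCases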
With the latter solution (no type classes), you can export a catamorphism on the type rather than the constructors:
data MyData = PointData Double Double | ControlData Double Double Double | SomeOtherData String Double
-- One function per constructor; the catamorphism picks the right one.
foldMyData :: (Double -> Double -> r)
           -> (Double -> Double -> Double -> r)
           -> (String -> Double -> r)
           -> MyData -> r
foldMyData pf cf sf d = case d of
  PointData x y     -> pf x y
  ControlData x y z -> cf x y z
  SomeOtherData s x -> sf s x
That way you have a way to pull your data apart into whatever you want (including just ignoring the values and passing functions that merely report which constructor was used) without providing a general way to construct your data.
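For example (a hedged sketch of a caller; the function describe is my own name):
-- Consume MyData without ever seeing its constructors:
describe :: MyData -> String
describe = foldMyData
  (\x y   -> "point at " ++ show (x, y))
  (\x y z -> "control " ++ show (x, y, z))
  (\s _   -> "other: " ++ s)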
I find the type-class-based approach better as long as you are not going to mix different kinds of data points in a single data structure.
The name collision problem you mentioned can be solved by creating a separate type class for each distinct field, like this:
class WithCases p where
  cases :: p -> NumberOfCases
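As a hedged sketch of how that plays out (the NumberOfCases alias and the record type are my own illustration): each point type gets its own instance, and any code that only needs cases can stay polymorphic.
type NumberOfCases = Int

data CaseControlPoint = CaseControlPoint
  { ccId       :: Int
  , ccCases    :: NumberOfCases
  , ccControls :: Int
  }

instance WithCases CaseControlPoint where
  cases = ccCases

-- Works for any point type with a WithCases instance:
totalCases :: WithCases p => [p] -> NumberOfCases
totalCases = sum . map cases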
I cannot distinguish the two terms object and instance.
How about you?
Every object is an instance of a type. The word instance is used instead of object when describing member scope (i.e., static vs. instance). The word object is usually used instead of instance when referring to the target of a reference (e.g., a variable or field).
For many (probably most) purposes, there's no real difference. You can get into some fringe areas, such as C, which defines essentially anything (other than a function) that has an address as an "object" -- but of course, nothing in C is an "instance" the way that term is typically used (i.e., an instance of a class). Then again, it's all a matter of definitions -- as long as you and whomever you're talking with agree on what the words mean, it doesn't matter a whole lot whether you call something an "instance" or an "object" or a "blarfle".
Object is the more general term. Anything is an object; it may not be an instance of a particular type. For most common purposes, the terms can be interchanged, especially in the context of a program.