Range restriction of n-ary relation in Alloy

I have the following signatures (where C is another signature, not shown here):
abstract sig B {}
sig B1 extends B {}
sig B2 extends B {}
sig A {
  rel: B -> C
}
How do I restrict the B part of rel to be of type B1? I tried the fact rel :> (B1 -> C), but I get a type error.
Thanks.

I would express it either as a signature fact:
sig A {
  rel: B -> C
}{
  rel.C in B1
}
or as a standalone fact:
fact {
  rel[A].C in B1
}

You can only restrict the domain (i.e. the left-most set) or the range (i.e. the right-most set) of a relation. Restriction does not constrain a relation, but builds a new relation out of an existing one.
D <: Rel creates a new relation where the domain of Rel is restricted to D.
Rel :> R creates a new relation where the range of Rel is restricted to R.
To constrain the domain of rel in your example, you would usually use the constraints given by Loïc. Theoretically you could also use restriction for this (again in a signature fact), but that is less idiomatic:
(B1 <: rel) = rel

Related

MultiParamTypeClasses - Why is this type variable ambiguous?

Suppose I define a multi-parameter type class:
{-# LANGUAGE MultiParamTypeClasses, AllowAmbiguousTypes, FlexibleContexts, FlexibleInstances #-}

class Table a b c where
  decrement :: a -> a
  evalutate :: a -> b -> c
Then I define a function that uses decrement, for simplicity:
d = decrement
When I try to load this in GHCi (version 8.6.3), I get:
• Could not deduce (Table a b0 c0)
    arising from a use of ‘decrement’
  from the context: Table a b c
    bound by the type signature for:
             d :: forall a b c. Table a b c => a -> a
    at Thing.hs:13:1-28
  The type variables ‘b0’, ‘c0’ are ambiguous
  Relevant bindings include d :: a -> a (bound at Thing.hs:14:1)
  These potential instance exist:
    instance Table (DummyTable a b) a b
This is confusing to me because the type of d is exactly the type of decrement, which is denoted in the class declaration.
I thought of the following workaround:
data Table a b = Table (a -> b) ((Table a b) -> (Table a b))
But this seems notationally inconvenient, and I also just wanted to know why I was getting this error message in the first place.
The problem is that, since decrement only mentions the type a, there is no way to figure out which types b and c should be, even at the point where the function is called (which would otherwise resolve the polymorphism to specific types) - therefore, GHC is unable to decide which instance to use.
For example, suppose you have two instances of Table: Table Int String Bool and Table Int Bool Float. You call your function d in a context where it is supposed to map an Int to another Int - the problem is that this matches both instances (a is Int for both).
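To see the ambiguity concretely, here is a minimal sketch (the two instances are made up, mirroring the example above):
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

class Table a b c where
  decrement :: a -> a
  evalutate :: a -> b -> c

-- Two made-up instances that agree on a but differ in b and c.
instance Table Int String Bool where
  decrement = subtract 1
  evalutate _ _ = True

instance Table Int Bool Float where
  decrement = subtract 1
  evalutate _ _ = 0

-- A use such as decrement (3 :: Int) is rejected: the wanted constraint
-- is Table Int b0 c0, both instances match it, and nothing determines
-- b0 and c0.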
Notice how, if you make your function equal to evalutate:
d = evalutate
then the compiler accepts it. This is because, since evalutate mentions all three type parameters a, b, and c, the context at the call site allows for unambiguous instance resolution - the compiler just checks which types a, b, and c are at the place where the function is called.
This is, of course, not usually a problem for single-parameter type classes - only one type to resolve; it is when we deal with multiple parameters that things get complicated...
One common solution is to use functional dependencies - make b and c depend on a:
class Table a b c | a -> b c where
  decrement :: a -> a
  evalutate :: a -> b -> c
This tells the compiler that, for every instance of Table for a given type a, there will be one, and only one, instance (b and c will be uniquely determined by a); so it will know that there won't be any ambiguities and accept your d = decrement happily.
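Here is a sketch of the whole thing compiling with the functional dependency in place (DummyTable is a made-up type, standing in for the one hinted at in the error message):
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

data DummyTable a b = DummyTable a b

class Table a b c | a -> b c where
  decrement :: a -> a
  evalutate :: a -> b -> c

instance Table (DummyTable a b) a b where
  decrement = id
  evalutate (DummyTable k v) _ = v

-- Accepted: b and c are uniquely determined by a, so the constraint
-- is no longer ambiguous.
d :: Table a b c => a -> a
d = decrement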

What is the difference between type class dependence in Haskell and subtyping in OOP?

We often use type class dependence to emulate a subtyping relationship.
For example, when we want to express the subtyping relationship between Animal, Reptile and Aves in OOP:
abstract class Animal {
    abstract Animal move();
    abstract Animal hunt();
    abstract Animal sleep();
}

abstract class Reptile extends Animal {
    abstract Reptile crawl();
}

abstract class Aves extends Animal {
    abstract Aves fly();
}
we can translate each abstract class above into a type class in Haskell:
class Animal a where
  move :: a -> a
  hunt :: a -> a
  sleep :: a -> a

class Animal a => Reptile a where
  crawl :: a -> a

class Animal a => Aves a where
  fly :: a -> a
And even when we want a heterogeneous list, we have ExistentialQuantification.
So I'm wondering: why do we still say that Haskell doesn't have subtyping? Is there something subtyping can do that type classes cannot? What is the relationship and difference between them?
A typeclass with one parameter is a class of types, which you can think of as a set of types. If Sub is a subclass (sub-typeclass) of Super, then the set of types implementing Sub is a subset of (or equal to) the set of types implementing Super. All Monads are Applicatives, and all Applicatives are Functors.
Everything you can do with subclassing, you can do with existentially quantified, typeclass-constrained types in Haskell. This is because they’re essentially the same thing: in a typical OOP language, every object with virtual methods includes a vtable pointer, which is the same as the “dictionary” pointer that’s stored in an existentially quantified value with a typeclass constraint. Vtables are existentials! When someone gives you a superclass reference, you don’t know whether it’s an instance of the superclass or a subclass, you only know that it has a certain interface (either from the class or from an OOP “interface”).
In fact you can do more with Haskell's generalised existentials. An example I like is packing an action returning a value of some type a together with a variable where the result will be written once the action completes; the action returns a value of the same type as the variable holds, but this is hidden from the outside:
data Request = forall a. Request (IO a) (MVar a)
Because Request hides the type a, you can store multiple requests of different types in the same container. Because a is completely opaque, the only thing that a caller can do with a Request is run the action (synchronously or asynchronously) and write the result into the MVar. It’s hard to use it wrong!
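To illustrate storing requests of different hidden types in one container, here is a minimal sketch (the serve worker and the example values are mine, not part of the answer):
{-# LANGUAGE ExistentialQuantification #-}
import Control.Concurrent.MVar

data Request = forall a. Request (IO a) (MVar a)

-- All a consumer can do is run the action and put the result in the MVar.
serve :: Request -> IO ()
serve (Request action result) = action >>= putMVar result

example :: IO ()
example = do
  intVar <- newEmptyMVar :: IO (MVar Int)
  strVar <- newEmptyMVar :: IO (MVar String)
  -- Requests with different hidden types live in the same list.
  mapM_ serve [Request (pure (2 + 2)) intVar, Request (pure "hello") strVar]
  print =<< takeMVar intVar
  putStrLn =<< takeMVar strVar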
The difference is that in OOP languages you can typically:
- implicitly upcast, i.e. use a subclass reference where a superclass reference is expected; in Haskell this must be done explicitly (e.g. by packing the value in an existential);
- attempt to downcast, which is not possible in Haskell unless you add an extra Typeable constraint that stores the runtime type information.
Typeclasses can model more things than OOP interfaces and subclassing, however, for a few reasons. For one thing, since they’re constraints on types, not objects, you can have constants associated with a type, such as mempty in the Monoid typeclass:
class Semigroup m where
  (<>) :: m -> m -> m

class (Semigroup m) => Monoid m where
  mempty :: m
In OOP languages there's typically no notion of a "static interface" that would let you express this. The "concepts" feature in C++ (standardized in C++20) is the nearest equivalent.
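To make the point about mempty concrete, here is a small sketch (pad and the example values are my own, not from the answer); mempty is chosen purely by the type the context demands, with no receiver object to dispatch on:
import Data.Monoid (Sum (..))

-- Pad a list to length n with the monoid's identity element.
pad :: Monoid m => Int -> [m] -> [m]
pad n xs = take n (xs ++ repeat mempty)

example :: ([String], [Sum Int])
example = (pad 3 ["a"], pad 3 [Sum 1, Sum 2])
-- ==> (["a", "", ""], [Sum 1, Sum 2, Sum 0])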
The other thing is that subtyping and interfaces are predicated on a single type, whereas you can have a typeclass with multiple parameters, which denotes a set of tuples of types. You can think of this as a relation. For example, the set of pairs of types where one can be coerced to the other:
class Coercible a b where
  coerce :: a -> b
With functional dependencies, you can inform the compiler of various properties of this relation:
class Ref ref m | ref -> m where
  new :: a -> m (ref a)
  get :: ref a -> m a
  put :: ref a -> a -> m ()

instance Ref IORef IO where
  new = newIORef
  get = readIORef
  put = writeIORef
Here the compiler knows that the relation is single-valued, or a function: each value of the “input” (ref) maps to exactly one value of the “output” (m). In other words, if the ref parameter of a Ref constraint is determined to be IORef, then the m parameter must be IO—you cannot have this functional dependency and also a separate instance mapping IORef to a different monad, like instance Ref IORef DifferentIO. This type of functional relation between types can also be expressed with associated types or the more modern type families (which are usually clearer, in my opinion).
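For comparison, here is a sketch of the same single-valued relation written with an associated type family (the names Ref', RefM, newRef and so on are invented for this example, not a real library API):
{-# LANGUAGE TypeFamilies #-}
import Data.IORef

class Ref' ref where
  type RefM ref :: * -> *          -- the monad determined by ref
  newRef :: a -> RefM ref (ref a)
  getRef :: ref a -> RefM ref a
  putRef :: ref a -> a -> RefM ref ()

instance Ref' IORef where
  type RefM IORef = IO
  newRef = newIORef
  getRef = readIORef
  putRef = writeIORef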
Of course, it’s not idiomatic to translate an OOP subclass hierarchy directly to Haskell using the “existential typeclass antipattern”, which is usually overkill. There’s often a far simpler translation, such as ADTs/GADTs/records/functions—roughly this corresponds to the OOP advice of “prefer composition over inheritance”.
Most of the time, where you would write a class in OOP, in Haskell you shouldn't reach for a typeclass, but rather for a module. A module that exports a type and some functions operating on it is essentially the same thing as the public interface of a class when it comes to encapsulation and code organisation. For dynamic behaviour, the best solution typically isn't type-based dispatch; instead, just use a higher-order function. It is functional programming, after all. :)
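As a rough illustration of that advice (the Logger record and its functions are invented for this sketch): where an OOP design might reach for a logging interface with several implementations, in Haskell you can often just pass a record of functions, or a function itself, as an ordinary value:
-- A value-level "interface": just a record of functions.
data Logger = Logger
  { logInfo  :: String -> IO ()
  , logError :: String -> IO ()
  }

consoleLogger :: Logger
consoleLogger = Logger
  { logInfo  = putStrLn . ("info: " ++)
  , logError = putStrLn . ("error: " ++)
  }

-- Code that needs logging takes a Logger as an argument instead of
-- being constrained by a typeclass.
processOrder :: Logger -> Int -> IO ()
processOrder logger orderId = logInfo logger ("processing order " ++ show orderId)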

Why don't impredicative types allow for heterogeneous lists?

A friend of mine posed a seemingly innocuous Scala language question last week that I didn't have a good answer to: whether there's an easy way to declare a collection of things belonging to some common typeclass. Of course there's no first-class notion of "typeclass" in Scala, so we have to think of this in terms of traits and context bounds (i.e. implicits).
Concretely, given some trait T[_] representing a typeclass, and types A, B and C, with corresponding implicits in scope T[A], T[B] and T[C], we want to declare something like a List[T[a] forAll { type a }], into which we can throw instances of A, B and C with impunity. This of course doesn't exist in Scala; a question last year discusses this in more depth.
The natural follow-up question is "how does Haskell do it?" Well, GHC in particular has a type system extension called impredicative polymorphism, described in the "Boxy Types" paper. In brief, given a typeclass T one can legally construct a list [forall a. T a => a]. Given a declaration of this form, the compiler does some dictionary-passing magic that lets us retain the typeclass instances corresponding to the types of each value in the list at runtime.
Thing is, "dictionary-passing magic" sounds a lot like "vtables." In an object-oriented language like Scala, subtyping is a much more simple, natural mechanism than the "Boxy Types" approach. If our A, B and C all extend trait T, then we can simply declare List[T] and be happy. Likewise, as Miles notes in a comment below, if they all extend traits T1, T2 and T3 then I can use List[T1 with T2 with T3] as an equivalent to the impredicative Haskell [forall a. (T1 a, T2 a, T3 a) => a].
However, the main, well-known disadvantage with subtyping compared to typeclasses is tight coupling: my A, B and C types have to have their T behavior baked in. Let's assume this is a major dealbreaker, and I can't use subtyping. So the middle ground in Scala is pimps^H^H^H^H^Himplicit conversions: given some A => T, B => T and C => T in implicit scope, I can again quite happily populate a List[T] with my A, B and C values...
... Until we want List[T1 with T2 with T3]. At that point, even if we have implicit conversions A => T1, A => T2 and A => T3, we can't put an A into the list. We could restructure our implicit conversions to literally provide A => T1 with T2 with T3, but I've never seen anybody do that before, and it seems like yet another form of tight coupling.
Okay, so my question finally is, I suppose, a combination of a couple questions that were previously asked here: "why avoid subtyping?" and "advantages of subtyping over typeclasses" ... is there some unifying theory that says impredicative polymorphism and subtype polymorphism are one and the same? Are implicit conversions somehow the secret love-child of the two? And can somebody articulate a good, clean pattern for expressing multiple bounds (as in the last example above) in Scala?
You're confusing impredicative types with existential types. Impredicative types allow you to put polymorphic values in a data structure, not arbitrary concrete ones. In other words, [forall a. Num a => a] means that you have a list where each element works as any numeric type, so you can't put e.g. Int and Double in a list of type [forall a. Num a => a], but you can put something like 0 :: Num a => a in it. Impredicative types are not what you want here.
What you want is existential types, i.e. [exists a. Num a => a] (not real Haskell syntax), which says that each element is some unknown numeric type. To write this in Haskell, however, we need to introduce a wrapper data type:
data SomeNumber = forall a. Num a => SomeNumber a
Note the change from exists to forall. That's because we're describing the constructor. We can put any numeric type in, but then the type system "forgets" which type it was. Once we take it back out (by pattern matching), all we know is that it's some numeric type. What's happening under the hood, is that the SomeNumber type contains a hidden field which stores the type class dictionary (aka. vtable/implicit), which is why we need the wrapper type.
Now we can use the type [SomeNumber] for a list of arbitrary numbers, but we need to wrap each number on the way in, e.g. [SomeNumber (3.14 :: Double), SomeNumber (42 :: Int)]. The correct dictionary for each type is looked up and stored in the hidden field automatically at the point where we wrap each number.
The combination of existential types and type classes is in some ways similar to subtyping, since the main difference between type classes and interfaces is that with type classes the vtable travels separately from the objects, while existential types package objects and vtables back together again.
However, unlike with traditional subtyping, you're not forced to pair them one to one, so we can write things like this, which packages one vtable with two values of the same type:
data TwoNumbers = forall a. Num a => TwoNumbers a a
f :: TwoNumbers -> TwoNumbers
f (TwoNumbers x y) = TwoNumbers (x+y) (x*y)
list1 = map f [TwoNumbers (42 :: Int) 7, TwoNumbers (3.14 :: Double) 9]
-- ==> [TwoNumbers (49 :: Int) 294, TwoNumbers (12.14 :: Double) 28.26]
or even fancier things. Once we pattern match on the wrapper, we're back in the land of type classes. Although we don't know which type x and y are, we know that they're the same, and we have the correct dictionary available to perform numeric operations on them.
Everything above works similarly with multiple type classes. The compiler will simply generate hidden fields in the wrapper type for each vtable and bring them all into scope when we pattern match.
data SomeBoundedNumber = forall a. (Bounded a, Num a) => SBN a
g :: SomeBoundedNumber -> SomeBoundedNumber
g (SBN n) = SBN (maxBound - n)
list2 = map g [SBN (42 :: Int32), SBN (42 :: Int64)]
-- ==> [SBN (2147483605 :: Int32), SBN (9223372036854775765 :: Int64)]
As I'm very much a beginner when it comes to Scala, I'm not sure I can help with the final part of your question, but I hope this has at least cleared up some of the confusion and given you some ideas on how to proceed.
#hammar's answer is perfectly right. Here is the Scala way of doing it. For the example I'll take Show as the type class and the values i and s to pack in a list:
// The type class
trait Show[A] {
  def show(a : A) : String
}

// Syntactic sugar for Show
implicit final class ShowOps[A](val self : A)(implicit A : Show[A]) {
  def show = A.show(self)
}

implicit val intShow = new Show[Int] {
  def show(i : Int) = "Show of int " + i.toString
}

implicit val stringShow = new Show[String] {
  def show(s : String) = "Show of String " + s
}

val i : Int = 5
val s : String = "abc"
What we want is to be able to run the following code:
val list = List(i, s)
for (e <- list) yield e.show
Building the list is easy, but the list won't "remember" the exact type of each of its elements. Instead it will upcast each element to a common supertype T. The most precise common supertype of String and Int being Any, the type of the list is List[Any].
The problem is: what to forget and what to remember? We want to forget the exact type of the elements BUT we want to remember that they are all instances of Show. The following class does exactly that
abstract class Ex[TC[_]] {
  type t
  val value : t
  implicit val instance : TC[t]
}

implicit def ex[TC[_], A](a : A)(implicit A : TC[A]) = new Ex[TC] {
  type t = A
  val value = a
  val instance = A
}
This is an encoding of the existential:
val ex_i : Ex[Show] = ex[Show, Int](i)
val ex_s : Ex[Show] = ex[Show, String](s)
It packs a value together with the corresponding type class instance.
Finally, we can add a Show instance for Ex[Show]:
implicit val exShow = new Show[Ex[Show]] {
  def show(e : Ex[Show]) : String = {
    import e._
    e.value.show
  }
}
The import e._ is required to bring the instance into scope. Thanks to the magic of implicits:
val list = List[Ex[Show]](i , s)
for (e <- list) yield e.show
which is very close to the expected code.

Dependent Types: How is the dependent pair type analogous to a disjoint union?

I've been studying dependent types and I understand the following:
Why universal quantification is represented as a dependent function type. ∀(x:A).B(x) means “for all x of type A there is a value of type B(x)”. Hence it's represented as a function which when given any value x of type A returns a value of type B(x).
Why existential quantification is represented as a dependent pair type. ∃(x:A).B(x) means “there exists an x of type A for which there is a value of type B(x)”. Hence it's represented as a pair whose first element is a particular value x of type A and whose second element is a value of type B(x).
Aside: It's also interesting to note that universal quantification is always used with material implication while existential quantification is always used with logical conjunction.
Anyway, the Wikipedia article on dependent types states that:
The opposite of the dependent type is the dependent pair type, dependent sum type or sigma-type. It is analogous to the coproduct or disjoint union.
How is it that a pair type (which is normally a product type) is analogous to a disjoint union (which is a sum type)? This has always confused me.
In addition, how is the dependent function type analogous to the product type?
The confusion arises from using similar terminology for the structure of a Σ type and for what its values look like.
A value of Σ(x:A) B(x) is a pair (a,b) where a∈A and b∈B(a). The type of the second element depends on the value of the first one.
If we look at the structure of Σ(x:A) B(x), it's a disjoint union (coproduct) of B(x) for all possible x∈A.
If B(x) is constant (independent of x) then Σ(x:A) B will be just |A| copies of B, that is A⨯B (a product of 2 types).
If we look at the structure of Π(x:A) B(x), it's a product of B(x) for all possible x∈A. Its values can be viewed as |A|-tuples where the a-th component is of type B(a).
If B(x) is constant (independent of x) then Π(x:A) B will be just A→B - functions from A to B, that is Bᴬ (B to the power A) in set-theoretic notation - the product of |A| copies of B.
So Σ(x∈A) B(x) is a |A|-ary coproduct indexed by the elements of A, while Π(x∈A) B(x) is a |A|-ary product indexed by the elements of A.
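A small worked example (my own) with the two-element index type Bool makes the counting concrete:
\Sigma(x{:}\mathrm{Bool})\,B(x) \;\cong\; B(\mathrm{True}) + B(\mathrm{False})
\qquad
\Pi(x{:}\mathrm{Bool})\,B(x) \;\cong\; B(\mathrm{True}) \times B(\mathrm{False})
Taking B constant recovers the non-dependent cases: Σ(x:Bool) B ≅ B + B ≅ Bool⨯B, and Π(x:Bool) B ≅ B⨯B ≅ B^Bool.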
A dependent pair type is formed from a type and a function from values of that type to types. Its values are pairs of a value s of the first type and a value of the type obtained by applying that function to s.
data Sg (S : Set) (T : S -> Set) : Set where
  Ex : (s : S) -> T s -> Sg S T
We can recapture sum types by showing how Either is canonically expressed as a sigma type: it's just Sg Bool (choice a b) where
choice : Set -> Set -> Bool -> Set
choice l r True = l
choice l r False = r
is the canonical eliminator of booleans.
eitherIsSg : {a b : Set} -> Either a b -> Sg Bool (choice a b)
eitherIsSg (Left a) = Ex True a
eitherIsSg (Right b) = Ex False b

sgIsEither : {a b : Set} -> Sg Bool (choice a b) -> Either a b
sgIsEither (Ex True a) = Left a
sgIsEither (Ex False b) = Right b
Building on Petr Pudlák’s answer, another angle to see this in a purely non-dependent fashion is to notice that the type Either a a is isomorphic to the type (Bool, a). Although the latter is, at first glance, a product, it makes sense to say it’s a sum type, as it is the sum of two instances of a.
I have to do this example with Either a a instead of Either a b, because for the latter to be expressed as a product, we need – well – dependent types.
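A minimal Haskell sketch of that isomorphism (the function names are mine):
-- Either a a carries one bit of information (which side) plus an a,
-- which is exactly what the pair (Bool, a) carries.
toPair :: Either a a -> (Bool, a)
toPair (Left  x) = (False, x)
toPair (Right x) = (True,  x)

fromPair :: (Bool, a) -> Either a a
fromPair (False, x) = Left  x
fromPair (True,  x) = Right x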
Good question. The name could originate from Martin-Löf who used the term "Cartesian product of a family of sets" for the pi type. See the following notes, for example:
http://www.cs.cmu.edu/afs/cs/Web/People/crary/819-f09/Martin-Lof80.pdf
The point is that while a pi type is in principle akin to an exponential, you can always see an exponential as an n-ary product where n is the exponent. More concretely, the non-dependent function type A -> B can be seen as an exponential type B^A or as an |A|-fold product Pi_{a in A} B = B x B x ... x B (|A| times). A dependent product is in this sense a potentially infinite product Pi_{a in A} B(a) = B(a_1) x B(a_2) x ... (one factor for every a_i in A).
The reasoning for dependent sum could be similar, as you can see a product as an n-ary sum where n is one of the factors of the product.
This is probably redundant with the other answers at this point, but here is the core of the issue:
How is it that a pair type (which is normally a product type) is analogous to a disjoint union (which is a sum type)? This has always confused me.
But what is a product but a sum of equal numbers? e.g. 4 × 3 = 3 + 3 + 3 + 3.
The same relationship holds for types, or sets, or similar things. In fact, the nonnegative integers are just the decategorification of finite sets. The definitions of addition and multiplication on numbers are chosen so that the cardinality of a disjoint union of sets is the sum of the cardinalities of the sets, and the cardinality of a product of sets is equal to the product of the cardinalities of the sets. In fact, if you substitute "set" with "herd of sheep", this is probably how arithmetic was invented.
First, see what a co-product is.
A coproduct of a family of objects B_i is an object A together with arrows B_i -> A (the injections) such that, for every object X and every family of arrows B_i -> X, there is a unique arrow A -> X making the corresponding triangles commute.
You can view this as a Haskell data type A whose constructors each take a single argument of type B_i (these constructors are the arrows B_i -> A). It is then clear that, given arrows B_i -> X for every i, you can supply an arrow A -> X: pattern-match on the constructor and apply the corresponding arrow to the wrapped B_i to get an X.
The important connection to sigma types is that the index i in B_i can be of any type, not just a type of natural numbers.
The important difference from the answers above is that it does not have to have a B_i for every value i of that type: once you've defined B_i ∀ i, you have a total function.
The difference between Π and Σ, as may be seen from Petr Pudlak's answer, is that for Σ some of the values B_i in the tuple may be missing - for some i there may be no corresponding B_i.
The other clear difference between Π and Σ is that Π characterizes a product of B_i by providing i-th projection from the product Π to each B_i (this is what the function i -> B_i means), but Σ provides the arrows the other way around - it provides the i-th injection from B_i into Σ.

Is there a way to implement constraints in Haskell's type classes?

Is there some way (any way) to implement constraints in type classes?
As an example of what I'm talking about, suppose I want to implement a Group as a type class. So a type would be a group if there are three functions:
class Group a where
  product  :: a -> a -> a
  inverse  :: a -> a
  identity :: a
But these cannot be arbitrary functions; they must be related by some constraints. For example:
product a identity = a
product a (inverse a) = identity
inverse identity = identity
etc...
Is there a way to enforce this kind of constraint in the definition of the class so that any instance would automatically inherit it? As an example, suppose I'd like to implement the C2 group, defined by:
data C2 = E | C

instance Group C2 where
  identity = E
  inverse C = C
These two definitions uniquely determine C2 (the constraints above determine all the remaining operations - in fact, C2 is the only possible group with two elements because of the constraints). Is there a way to make this work?
Is there a way to enforce this kind of constraint?
No. Lots of people have been asking for it, including the illustrious Tony Hoare, but nothing appears on the horizon yet.
This problem would be an excellent topic of discussion for the Haskell Prime group. If anyone has floated a good proposal, it is probably to be found there.
P.S. This is an important problem!
In some cases you can specify the properties using QuickCheck. This is not exactly enforcement, but it lets you provide generic tests that all instances should pass. For instance with Eq you might say:
prop_EqNeq x y = (x == y) == not (x /= y)
Of course it is still up to the instance author to call this test.
Doing this for the monad laws would be interesting.
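For instance, the group laws from the question could be written as QuickCheck properties along these lines (a sketch; it assumes the Group class above and an Arbitrary instance for whichever type you test):
import Prelude hiding (product)
import Test.QuickCheck

-- The Group class as declared in the question.
class Group a where
  product  :: a -> a -> a
  inverse  :: a -> a
  identity :: a

-- The laws as testable properties; instance authors still have to
-- remember to actually run them, e.g.
--   quickCheck (prop_associative :: C2 -> C2 -> C2 -> Bool)
prop_rightIdentity :: (Group a, Eq a) => a -> Bool
prop_rightIdentity a = product a identity == a

prop_rightInverse :: (Group a, Eq a) => a -> Bool
prop_rightInverse a = product a (inverse a) == identity

prop_associative :: (Group a, Eq a) => a -> a -> a -> Bool
prop_associative a b c = product a (product b c) == product (product a b) c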
Type classes can contain definitions as well as declarations. Example:
{-# LANGUAGE FlexibleInstances, UndecidableInstances #-}  -- needed for the catch-all instance below

class Equality a where
  (?=), (!=) :: a -> a -> Bool
  a ?= b = not (a != b)
  a != b = not (a ?= b)

instance Eq a => Equality a where
  (?=) = (==)

test = (1 != 2)
You can also specify special constraints (let's call them laws) in plain Haskell, but it's not guaranteed that the compiler will use them. A common example is the monad laws.
