I am supposed to write a Haskell program but I don't really know where to start. I would be really grateful if you could point me to some resources to read or explain the question to me. I am sure this is something totally amateurish, but I really need a starting point.
data DFA q o = DFA (q -> o -> q) q [q]
data NFA q o = NFA (q -> o -> [q]) [q] [q]
-- I really really don't understand the declarations here
-- I can guess that q is somewhat related to Q and o to E, but don't get what it really means
data Q = Q0 | Q1 | Q2
  deriving (Eq, Enum, Bounded)
data E = A | B
-- what does n1 do ??
n1 :: NFA Q E
n1 = NFA d [Q0] [Q2] -- i see [Q0] refers to set of initial states and [Q2] refers to final states :)
  where
    d Q0 A = [Q0]
    d Q0 B = [Q0, Q1]
    d Q1 _ = [Q2]
    d Q2 _ = []
-- the following functions are for me to write
starDFA :: Eq q => DFA q o -> [o] -> Bool
--for the above function, what are the arguments the function takes in ?
--how can we relate q with Q and [o] with [E] ??
Any explanations or references to proper starting points will be really helpful to me.
Sorry to ask such a dumb question, but I really don't know where to start :)
Thanks
Learn You a Haskell for Great Good! is probably the best Haskell tutorial at the moment. I recommend reading through the whole thing, but if you are in a hurry, I'll point out some of the more relevant sections for this assignment.
The main part in the code you are given are the data declarations, so once you're familiar with the basics (chapter 2 and the first section of chapter 3), a good place to dive in is the Algebraic data types intro, Type variables and Type parameters.
The above should be enough for deciphering the data declarations and understanding the relationship between q vs Q and o vs E.
Now to implement the actual function, you need to be familiar with how deterministic finite automata work and then know enough Haskell to write the actual implementation. Chapter 4 and chapter 5 are the most relevant chapters for this in the tutorial and you might also find the section on standard library list functions useful.
Once you get to this point, if you are stuck with the implementation, you can post another question with the code you have written so far.
In Haskell we have three ways to define a new type, using three distinct keywords: type, newtype, and data.
In your example it's the data keyword that is in use, so let's focus a bit more on it.
It's best to start with the easiest declaration from your code:
data E = A | B
Here we have defined a new type E which can take only two values: A or B.
A type like this is what we call a sum type.
How can we use it?
Mostly with pattern matching.
useE :: E -> String
useE A = "This is A"
useE B = "This is B"
Now, a more complex data declaration from your code.
data Q = Q0 | Q1 | Q2 deriving (Eq, Enum, Bounded)
Again, as said previously, we have a sum type which defines a new type Q taking three values: Q0, Q1 or Q2. But this time there is a deriving clause, which tells the compiler that this new type implements the methods (functions) derived (inherited) from the Eq, Enum and Bounded classes.
What does that mean?
Let's take a look at a function.
Imagine you want to associate a number with each value of Q; how can we do that?
enumQ :: Q -> Int
enumQ x = fromEnum x
If you want more insight into this particular functionality provided by the deriving clause, read the resources which have been indicated and try :info Enum in GHCi. Note that the previous type (E) could also derive the same classes. As these types are fully described as the sum of an enumerable set of values (discriminated by |), we better understand why we call them sum types.
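For instance, the derived instances already give you useful operations for free (a small illustration, assuming the Q type above is in scope; the names allStates and isStart are just examples):
allStates :: [Q]
allStates = [minBound .. maxBound]   -- enumerates Q0, Q1 and Q2, thanks to Enum and Bounded

isStart :: Q -> Bool
isStart q = q == Q0                  -- equality comes from the derived Eq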
Finally, the most difficult data declarations:
data DFA q o = DFA (q -> o -> q) q [q]
data NFA q o = NFA (q -> o -> [q]) [q] [q]
In fact they are almost the same data definition, so I will go through the first one and leave the analysis of the second one to you as an exercise.
data DFA q o = DFA (q -> o -> q) q [q]
This time we must talk about type constructors and data constructors.
On the left-hand side of the equality is the type constructor: it introduces the name of the new type together with the type parameters it requires.
On the right-hand side of the equality is the data constructor: it is used to build values of this new type, and its fields show the reader explicitly how such a value is assembled from existing types.
Now, keeping in mind that the following are types:
[x] ::: the polymorphic list type; for example, [Int] is a list of Int
x ::: a basic type, one of the existing ones (Int, Char, String, ...)
x -> y ::: the type of a function taking a value of type x and producing a value of type y
x -> y -> z ::: the type of a function taking a value of type x and a value of type y and producing a value of type z; equivalently, by currying, a function taking an x and producing a function of type y -> z. A function that takes or returns another function is what we call a higher-order function.
Put in this context, our declaration introduces a type constructor DFA with two type parameters, q and o, and a data constructor DFA that builds a value of this new type as the product of a transition function of type q -> o -> q, a start state of type q and a list of states of type [q]. This is why we call it a product type.
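To make the product-type reading concrete, here is a sketch of building a DFA value by hand (it reuses Q and E from your code; the name d1 and the transition function are made up purely for illustration):
d1 :: DFA Q E
d1 = DFA step Q0 [Q2]
  where
    step Q0 A = Q1
    step Q0 B = Q0
    step Q1 _ = Q2
    step Q2 _ = Q2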
That should now be enough for you to infer by yourself the answer to your question: what does n1 do?
Good luck.
From the little I understand about Haskell type declarations, the initial statements about DFA and NFA are saying something like (looking at NFA, for example):
(Left hand side:) NFA is a type that utilizes two types (q and o) in its construction.
(Right hand side:) an instance of NFA will be called NFA, and be composed of three parameters:
(1) "(q -> o -> [q])" , meaning a function that takes two parameters, one of type q and one of type o, and returns a list of q's, ([q])
(2) "[q]" , meaning one list of values of type q
(3) "[q]" , another list of values of type q
n1 seems like an instance construction of NFA, and we see
n1 = NFA d [Q0] [Q2]
So we can infer that:
(1) d is a function that takes two parameters, a 'q' and an 'o' and returns a list of q's
(2) [Q0] is a list of q's, and
(3) [Q2] is a list of q's.
And, indeed, the definition of d follows: d takes two parameters, a 'Q' and an 'E' and returns a list of Q's (which we know can be either Q0, Q1, or Q2) or an empty list.
I hope that helps a little and/or perhaps someone could clarify and correct my vague understanding as well.
When I asked this question, one of the answers, now deleted, was suggesting that the type Either corresponds to XOR, rather than OR, in the Curry-Howard correspondence, because it cannot be Left and Right at the same time.
Where is the truth?
If you have a value of type P and a value of type Q (that is, you have both a proof of P and a proof of Q), then you are still able to provide a value of type Either P Q.
Consider
x :: P
y :: Q
...
z :: Either P Q
z = Left x -- Another possible proof would be `Right y`
While Either does not have a specific case that explicitly represents this situation (unlike These), it does not do anything to exclude it (as in exclusive OR).
This third case where both have proofs is a bit different than the other two cases where only one has a proof, which reflects the fact that "not excluding" something is a bit different than "including" something in intuitionistic logic, since Either does not provide a particular witness for this fact. However Either is not an XOR in the way that XOR would typically work since, as I said, it does not exclude the case where both parts have proofs. What Daniel Wagner proposes in this answer, on the other hand, is much closer to an XOR.
Either is kind of like an exclusive OR in terms of what its possible witnesses are. On the other hand, it is like an inclusive OR when you consider whether or not you can actually create a witness in four possible scenarios: having a proof of P and a refutation of Q, having a proof of Q and a refutation of P, having a proof of both or having a refutation of both.[1] While you can construct a value of type Either P Q when you have a proof of both P and Q (similar to an inclusive OR), you cannot distinguish this situation from the situation where only P has a proof or only Q has a proof using only a value of type Either P Q (kind of similar to an exclusive OR). Daniel Wagner's solution, on the other hand, is similar to exclusive OR on both construction and deconstruction.
It is also worth mentioning that These more explicitly represents the possibility of both having proofs. These is similar to inclusive OR on both construction and deconstruction. However, it's also worth noting that there is nothing preventing you from using an "incorrect" constructor when you have a proof of both P and Q. You could extend These to be even more representative of an inclusive OR in this regard with a bit of additional complexity:
import Data.Void (Void)

data IOR a b
  = OnlyFirst a (Not b)
  | OnlySecond (Not a) b
  | Both a b

type Not a = a -> Void
The potential "wrong constructor" issue of These (and the lack of a "both" witness in Either) doesn't really matter if you are only interested in a proof irrelevant logical system (meaning that there is no way to distinguish between any two proofs of the same proposition), but it might matter in cases where you want more computational relevance in the logic.[2]
In the practical situation of writing computer programs that are actually meant to be executed, computational relevance is often extremely important. Even though 0 and 23 are both proofs that the Int type is inhabited, we certainly like to distinguish between the two values in programs, in general!
Regarding "construction" and "destruction"
Essentially, I just mean "creating values of a type" by construction and "pattern matching" by destruction (sometimes people use the words "introduction" and "elimination" here, particularly in the context of logic).
In the case of Daniel Wagner's solutions:
Construction: When you construct a value of type Xor A B, you must provide a proof of exactly one of A or B and a refutation of the other one. This is similar to exclusive or. It is not possible to construct a value of this unless you have a refutation of either A or B and a proof of the other one. A particularly significant fact is that you cannot construct a value of this type if you have a proof of both A and B and you don't have a refutation of either of them (unlike inclusive OR).
Destruction: When you pattern match on a value of type Xor A B, you always have a proof of one of the types and a refutation of the other. It will never give you a proof of both of them. This follows from its definition.
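The type being described looks roughly like this (a sketch; it matches the Xor definition given in another answer further down, and the name Xor is just a label):
import Data.Void (Void)

type Not a = a -> Void

-- Exactly one side holds, and we also carry a refutation of the other side.
type Xor a b = Either (a, Not b) (Not a, b)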
In the case of IOR:
Construction: When you create a value of type IOR A B, you must do exactly one of the following: (1) provide only a proof of A and a refutation of B, (2) provide only a proof of B and a refutation of A, (3) provide a proof of both A and B. This is like inclusive OR. These three possibilities correspond exactly to each of the three constructors of IOR, with no overlap. Note that, unlike the situation with These, you cannot use the "incorrect constructor" in the case where you have a proof of both A and B: the only way to make a value of type IOR A B in this case is to use Both (since you would otherwise need to provide a refutation of either A or B).
Destruction: Since the three possible situations where you have a proof of at least one of A and B are exactly represented by IOR, with a separate constructor for each (and no overlap between the constructors), you will always know exactly which of A and B are true and which is false (if applicable) by pattern matching on it.
Pattern matching on IOR
Pattern matching on IOR works exactly like pattern matching on any other algebraic datatype. Here is an example:
x :: IOR Char Int
x = Both 'c' 3
y :: IOR Char Void
y = OnlyFirst 'a' (\v -> v)
f :: Not p -> IOR p Int
f np = OnlySecond np 7
z :: IOR Void Int
z = f notVoid
g :: IOR p Int -> Int
g w =
  case w of
    OnlyFirst  p q -> -1
    OnlySecond p q -> q
    Both       p q -> q
-- We can show that the proposition represented by "Void" is indeed false:
notVoid :: Not Void
notVoid = \v -> v
Then a sample GHCi session, with the above code loaded:
ghci> g x
3
ghci> g z
7
[1] This gets a bit more complex when you consider that some statements are undecidable and therefore you cannot construct a proof or a refutation for them.
[2] Homotopy type theory would be one example of a proof relevant system, but this is reaching the limit of my knowledge as of now.
The confusion stems from the Boolean truth-table exposition of logic. In particular, when both arguments are True, OR is True, whereas XOR is False. Logically it means that to prove OR it's enough to provide the proof of one of the arguments; but it's okay if the other one is True as well--we just don't care.
In Curry-Howard interpretation, if somebody gives you an element of Either a b, and you were able to extract the value of a from it, you still know nothing about b. It could be inhabited or not.
On the other hand, to prove XOR, you not only need the proof of one argument, you must also provide the proof of the falsehood of the other argument.
So, with Curry-Howard interpretation, if somebody gives you an element of Xor a b and you were able to extract the value of a from it, you would conclude that b is uninhabited (that is, isomorphic to Void). Conversely, if you were able to extract the value of b, then you'd know that a was uninhabited.
The proof of falsehood of a is a function a->Void. Such a function would be able to produce a value of Void, given a value of a, which is clearly impossible. So there can be no values of a. (There is only one function that returns Void, and that's the identity on Void.)
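To make that concrete, here is a small sketch using Data.Void from base (the name explosion is mine):
import Data.Void (Void, absurd)

-- A refutation of `a` together with a proof of `a` lets you derive anything:
explosion :: (a -> Void) -> a -> b
explosion notA a = absurd (notA a)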
Perhaps try replacing “proof” in the Curry-Howard isomorphism with “evidence”.
Below I will use italics for propositions and proofs (which I will also call evidence), the mathematical side of the isomorphism, and I will use code for types and values.
The question is: suppose I know the type for [values corresponding to] evidence that P is true (I will call this type P), and I know the type for evidence that Q is true (I call this type Q), then what is the type for evidence of the proposition R = P OR Q?
Well there are two ways to prove R: we can prove P, or we can prove Q. We could prove both but that would be more work than necessary.
Now ask what the type should be? It is the type for things which are either evidence of P or evidence of Q. I.e. values which are either things of type P or things of type Q. The type Either P Q contains precisely those values.
What if you have evidence of P AND Q? Well this is just a value of type (P, Q), and we can write a simple function:
f :: (p,q) -> Either p q
f (a,b) = Left a
And this gives us a way to prove P OR Q if we can prove P AND Q. Therefore Either cannot correspond to xor.
What is the type for P XOR Q?
At this point I will say that negations are a bit annoying in this sort of constructive logic.
Let’s convert the question to things we understand, and a simpler thing we don’t:
P XOR Q = (P AND (NOT Q)) OR (Q AND (NOT P))
Ask now: what is the type for evidence of NOT P?
I don’t have an intuitive explanation for why this is the simplest type but if NOT P were true then evidence of P being true would be a contradiction, which we say as proving FALSE, the unprovable thing (aka BOTTOM or BOT). That is, NOT P may be written in simpler terms as: P IMPLIES FALSE. The type for FALSE is called Void (in haskell). It is a type which no values inhabit because there are no proofs of it. Therefore if you could construct a value of that type you would have problems. IMPLIES corresponds to functions and so the type corresponding to NOT P is P -> Void.
We put this with what we know and get the following equivalence in the language of propositions:
P XOR Q = (P AND (NOT Q)) OR (Q AND (NOT P)) = (P AND (Q IMPLIES FALSE)) OR ((P IMPLIES FALSE) AND Q)
The type is then:
type Xor p q = Either (p, q -> Void) (p -> Void, q)
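As a quick sanity check of this definition, a value of Xor p q always yields a plain Either p q, and pattern matching additionally hands you the refutation of the other side (a small sketch, assuming the Xor alias above is in scope; weaken is just an illustrative name):
weaken :: Xor p q -> Either p q
weaken (Left  (p, _notQ)) = Left p
weaken (Right (_notP, q)) = Right q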
Compiling without Continuations describes a way to extend ANF System F with join points. GHC itself has join points in Core (an intermediate representation) rather than exposing join points directly in the surface language (Haskell). Out of curiosity, I started trying to write a language that simply extends System F with join points. That is, the join points are user facing. However, there's something about the typing rules in the paper that I don't understand. Here are the parts that I do understand:
There are two environments, one for ordinary values/functions and one that only has join points.
The rationale for Δ being ε in several of the rules. In the expression let x:σ = u in ..., u cannot reference any join points (VBIND) because join points cannot return to arbitrary locations.
The strange typing rule for JBIND. The paper does a good job explaining this.
Here's what I don't get. The paper introduces a notation that I will call the "overhead arrow", but the paper itself does not explicitly give it a name or explain it. Visually, it looks like an arrow pointing to the right, and it goes above an expression. Roughly, this seems to indicate a "tail context" (the paper does use this term). In the paper, these overhead arrows can be applied to terms, types, data constructors, and even environments. They can be nested as well. Here's the main difficulty I'm having. There are several rules with premises that include type environments under an overhead arrow. JUMP, CASE, RVBIND, and RJBIND all include premises with such a type environment (Figure 2 in the paper). However, none of the typing rules have a conclusion where the type environment is under an overhead arrow. So, I cannot see how JUMP, CASE, etc. can ever be used, since their premises cannot be derived by any of the other rules.
That's the question, but if anyone has any supplementary material that provides more context on the overhead arrow convention, or is aware of an implementation of the System-F-with-join-points type system (other than in GHC's IR), that would be helpful too.
In this paper, x⃗ means “A sequence of x, separated by appropriate delimiters”.
A few examples:
If x is a variable, λx⃗. e is an abbreviation for λx1. λx2. … λxn. e. In other words, many nested 1-argument lambdas, or a many-argument lambda.
If σ and τ are types, σ⃗ → τ is an abbreviation for σ1 → σ2 → … → σn → τ. In other words, a function type with many parameter types.
If a is a type variable and σ is a type, ∀a⃗. σ is an abbreviation for ∀a1. ∀a2. … ∀an. σ. In other words, many nested polymorphic functions, or a polymorphic function with many type parameters.
In Figure 1 of the paper, the syntax of a jump expression is defined as:
e, u, v ⩴ … | jump j ϕ⃗ e⃗ τ
If this declaration were translated into a Haskell data type, it might look like this:
data Term
  -- | A jump expression has a label that it jumps to, a list of type argument
  -- applications, a list of term argument applications, and the return type
  -- of the overall `jump`-expression.
  = Jump LabelVar [Type] [Term] Type
  | ... -- Other syntactic forms.
That is, a data constructor that takes a label variable j, a sequence of type arguments ϕ⃗, a sequence of term arguments e⃗, and a return type τ.
“Zipping” things together:
Sometimes, multiple uses of the overhead arrow place an implicit constraint that their sequences have the same length. One place that this occurs is with substitutions.
{ϕ⃗/a⃗} means "replace a1 with ϕ1, replace a2 with ϕ2, …, replace an with ϕn", implicitly asserting that both a⃗ and ϕ⃗ have the same length, n.
Worked example: the JUMP rule:
The JUMP rule is interesting because it provides several uses of sequencing, and even a sequence of premises. Here’s the rule again:
(j : ∀a⃗. σ⃗ → ∀r. r) ∈ Δ
(Γ; ε ⊢ u : σ{ϕ⃗/a⃗})   (the overhead arrow spans this entire premise)
Γ; Δ ⊢ jump j ϕ⃗ u⃗ τ : τ
The first premise should be fairly straightforward, now: lookup j in the label context Δ, and check that the type of j starts with a bunch of ∀s, followed by a bunch of function types, ending with a ∀r. r.
The second “premise” is actually a sequence of premises. What is it looping over? So far, the sequences we have in scope are ϕ⃗, σ⃗, a⃗, and u⃗.
ϕ⃗ and a⃗ are used in a nested sequence, so probably not those two.
On the other hand, u⃗ and σ⃗ seem quite plausible if you consider what they mean.
σ⃗ is the list of argument types expected by the label j, and u⃗ is the list of argument terms provided to the label j, and it makes sense that you might want to iterate over argument types and argument terms together.
So this “premise” actually means something like this:
for each pair of σ and u:
  Γ; ε ⊢ u : σ{ϕ⃗/a⃗}
Pseudo-Haskell implementation
Finally, here’s a somewhat-complete code sample illustrating what this typing rule might look like in an actual implementation. x⃗ is implemented as a list of x values, and some monad M is used to signal failure when a premise is not satisfied.
data LabelVar
data Type
  = ...
data Term
  = Jump LabelVar [Type] [Term] Type
  | ...
typecheck :: TermContext -> LabelContext -> Term -> M Type
typecheck gamma delta (Jump j phis us tau) = do
  -- Look up `j` in the label context. If it's not there, throw an error.
  typeOfJ <- lookupLabel j delta
  -- Check that the type of `j` has the right shape: a bunch of `forall`s,
  -- followed by a bunch of function types, ending with `forall r. r`. If it
  -- has the correct shape, split it into a list of `a`s, a list of `sigma`s
  -- and the return type, `forall r. r`.
  (as, sigmas, ret) <- splitLabelType typeOfJ
  -- The substitution {phi/a}: exactZip is a helper function that "zips" two
  -- sequences together. If the sequences have the same length, it produces a
  -- list of pairs of corresponding elements. If not, it raises an error.
  subst <- exactZip as phis
  -- One premise for each (argument, expected type) pair.
  pairs <- exactZip us sigmas
  let checkArg (u, sigma) = do
        -- Type-check the argument `u` in a context without any tail calls,
        -- and check that its type has the expected form.
        sigma' <- typecheck gamma emptyLabelContext u
        assert (applySubst subst sigma == sigma')
  mapM_ checkArg pairs
  -- After all the premises have been satisfied, the type of the `jump`
  -- expression is just its return type.
  return tau

-- Other syntactic forms
typecheck gamma delta u = ...
-- Auxiliary definitions
type M = ...
instance Monad M
lookupLabel :: LabelVar -> LabelContext -> M Type
splitLabelType :: Type -> M ([TypeVar], [Type], Type)
exactZip :: [a] -> [b] -> M [(a, b)]
applySubst :: [(TypeVar, Type)] -> Type -> Type
assert :: Bool -> M ()  -- fails in M if the condition does not hold
As far as I know SPJ's style of notation, and this does align with what I see in the paper, it simply means "0 or more". E.g. you can replace \overrightarrow{a} with a_1, …, a_n, where n >= 0.
It may be "1 or more" in some cases, but it shouldn't be hard to figure out which one of the two is meant.
Imagine a language which doesn't allow multiple value constructors for a data type. Instead of writing
data Color = White | Black | Blue
we would have
data White = White
data Black = Black
data Blue = Blue
type Color = White :|: Black :|: Blue
where :|: (here it's not | to avoid confusion with sum types) is a built-in type union operator. Pattern matching would work in the same way
show :: Color -> String
show White = "white"
show Black = "black"
show Blue = "blue"
As you can see, in contrast to coproducts it results in a flat structure so you don't have to deal with injections. And, unlike sum types, it allows you to freely combine types, resulting in greater flexibility and granularity:
type ColorsStartingWithB = Black :|: Blue
I believe it wouldn't be a problem to construct recursive data types as well
data Nil = Nil
data Cons a = Cons a (List a)
type List a = Cons a :|: Nil
I know union types are present in TypeScript and probably other languages, but why did the Haskell committee choose ADTs over them?
Haskell's sum type is very similar to your :|:.
The difference between the two is that the Haskell sum type | is a tagged union, while your "sum type" :|: is untagged.
Tagged means every instance is unique - you can distinguish Int | Int from Int (actually, this holds for any a):
data EitherIntInt = Left Int | Right Int
In this case: Either Int Int carries more information than Int because there can be a Left and Right Int.
In your :|:, you cannot distinguish those two:
type EitherIntInt = Int :|: Int
How do you know if it was a left or right Int?
See the comments for an extended discussion of the section below.
Tagged unions have another advantage: The compiler can verify whether you as the programmer handled all cases, which is implementation-dependent for general untagged unions. Did you handle all cases in Int :|: Int? Either this is isomorphic to Int by definition or the compiler has to decide which Int (left or right) to choose, which is impossible if they are indistinguishable.
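As a quick illustration of the exhaustiveness point, GHC (with -Wincomplete-patterns) can warn about the missing case in a sketch like this (fromLeftInt is just an example name):
fromLeftInt :: Either Int Int -> Int
fromLeftInt (Left n) = n
-- warning: pattern match is non-exhaustive; the `Right` case is not handled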
Consider another example:
type (Integral a, Num b) => IntegralOrNum a b = a :|: b -- untagged
data (Integral a, Num b) => IntegralOrNum a b = Either a b -- tagged
What is 5 :: IntegralOrNum Int Double in the untagged union? It is both an instance of Integral and Num, so we can't decide for sure and have to rely on implementation details. On the other hand, the tagged union knows exactly what 5 should be because it is branded with either Left or Right.
As for naming: The disjoint union in Haskell is a union type. ADTs are only a means of implementing these.
I will try to expand the categorical argument mentioned by @BenjaminHodgson.
Haskell can be seen as the category Hask, in which objects are types and morphisms are functions between types (disregarding bottom).
We can define a product in Hask as a tuple - categorically speaking it meets the definition of the product:
A product of a and b is the type c equipped with projections p and q such that p :: c -> a and q :: c -> b and for any other candidate c' equipped with p' and q' there exists a morphism m :: c' -> c such that we can write p' as p . m and q' as q . m.
Read up on this in Bartosz' Category Theory for Programmers for further information.
Now for every category, there exists the opposite category, which has the same morphisms but reverses all the arrows. The coproduct is thus:
The coproduct c of a and b is the type c equipped with injections i :: a -> c and j :: b -> c such that for all other candidates c' with i' and j' there exists a morphism m :: c -> c' such that i' = m . i and j' = m . j.
Let's see how the tagged and untagged union perform given this definition:
The untagged union of a and b is the type a :|: b such that:
i :: a -> a :|: b is defined as i a = a and
j :: b -> a :|: b is defined as j b = b
However, we know that a :|: a is isomorphic to a. Based on that observation we can define a second candidate for the coproduct, a :|: a :|: b, which is equipped with the exact same morphisms. Therefore, there is no single best candidate, since the morphism m between a :|: a :|: b and a :|: b is id. id is a bijection, which implies that m is invertible and can "convert" types either way. A visual representation of that argument (replace p with i and q with j).
Restricting ourselves to Either, you can verify that it is a valid coproduct with:
i = Left and
j = Right
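To spell out that verification (a small sketch; the name m mirrors the definition above): either builds exactly the mediating morphism required by the coproduct definition.
-- Injections: Left and Right play the roles of i and j.
-- For any other candidate c with i' :: a -> c and j' :: b -> c,
-- `either i' j'` is the mediating morphism:
--   i' = either i' j' . Left   and   j' = either i' j' . Right
m :: (a -> c) -> (b -> c) -> (Either a b -> c)
m = either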
This shows that the categorical dual of the product type is the disjoint union, not the set-based union.
The set union is part of the disjoint union, because we can define it as follows:
data Left a = Left a
data Right b = Right b
type DisjUnion a b = Left a :|: Right b
Because we have shown above that the set union is not a valid candidate for the coproduct of two types, we would lose many "free" properties (which follow from parametricity, as leftaroundabout mentioned) by not choosing the disjoint union in the category Hask (because there would be no coproduct).
This is an idea I've thought a lot about myself: a language with “first-class type algebra”. Pretty sure we could do about everything this way that we do in Haskell. Certainly if these disjunctions were, like Haskell alternatives, tagged unions; then you could directly rewrite any ADT to use them. In fact GHC can do this for you: if you derive a Generic instance, a variant type will be represented by a :+: construct, which is in essence just Either.
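A small sketch of that GHC.Generics point (the representation can be inspected with :kind! in GHCi):
{-# LANGUAGE DeriveGeneric #-}
import GHC.Generics (Generic, Rep)

data Color = White | Black | Blue
  deriving (Generic)

-- In GHCi:   :kind! Rep Color
-- reduces to a representation built from (:+:), i.e. a nested, Either-like
-- sum of the three constructors.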
I'm not so sure if untagged unions would also do. As long as you require the types participating in a sum to be discernibly different, the explicit tagging should in principle not be necessary. The language would then need a convenient way to match on types at runtime. Sounds a lot like what dynamic languages do – obviously comes with quite some overhead though.
The biggest problem would be that if the types on both sides of :|: must be unequal then you lose parametricity, which is one of Haskell's nicest traits.
Given that you mention TypeScript, it is instructive to have a look at what its docs have to say about its union types. The example there starts from a function...
function padLeft(value: string, padding: any) { //etc.
... that has a flaw:
The problem with padLeft is that its padding parameter is typed as any. That means that we can call it with an argument that’s neither a number nor a string
One plausible solution is then suggested, and rejected:
In traditional object-oriented code, we might abstract over the two types by creating a hierarchy of types. While this is much more explicit, it’s also a little bit overkill.
Rather, the handbook suggests...
Instead of any, we can use a union type for the padding parameter:
function padLeft(value: string, padding: string | number) { // etc.
Crucially, the concept of union type is then described in this way:
A union type describes a value that can be one of several types.
A string | number value in TypeScript can be either of string type or of number type, as string and number are subtypes of string | number (cf. Alexis King's comment to the question). An Either String Int value in Haskell, however, is neither of String type nor of Int type -- its only, monomorphic, type is Either String Int. Further implications of that difference show up in the remainder of the discussion:
If we have a value that has a union type, we can only access members that are common to all types in the union.
In a roughly analogous Haskell scenario, if we have, say, an Either Double Int, we cannot apply (2*) directly on it, even though both Double and Int have instances of Num. Rather, something like bimap is necessary.
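For instance (a small sketch using Data.Bifunctor from base; doubleEither is just an example name):
import Data.Bifunctor (bimap)

-- Double whichever number is inside, be it the Double or the Int.
doubleEither :: Either Double Int -> Either Double Int
doubleEither = bimap (2*) (2*)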
What happens when we need to know specifically whether we have a Fish? [...] we’ll need to use a type assertion:
let pet = getSmallPet();
if ((<Fish>pet).swim) {
    (<Fish>pet).swim();
}
else {
    (<Bird>pet).fly();
}
This sort of downcasting/runtime type checking is at odds with how the Haskell type system ordinarily works, even though it can be implemented using the very same type system (also cf. leftaroundabout's answer). In contrast, there is nothing to figure out at runtime about the type of an Either Fish Bird: the case analysis happens at value level, and there is no need to deal with anything failing and producing Nothing (or worse, null) due to runtime type mismatches.
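For comparison, here is a rough Haskell counterpart of the TypeScript snippet above (Fish, Bird and their operations are made-up stand-ins):
data Fish = Fish
data Bird = Bird

swim :: Fish -> String
swim _ = "swimming"

fly :: Bird -> String
fly _ = "flying"

move :: Either Fish Bird -> String
move (Left fish)  = swim fish
move (Right bird) = fly bird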
I often run into the term "adjoin" when trying to understand some concepts. Those things are too abstract for me to understand, as I'm an expert in neither field theory nor category theory.
The simplest case I found is the Monoid (Maybe a) instance, which often doesn't behave as I would expect with regard to Nothing.
From Wikipedia we can learn that by "adjoining" an element to a semigroup we can get a different Monoid instance. I don't understand the sentence, but the equations given suggest it's exactly what I need (and is not the default for some reason):
Any semigroup S may be turned into a monoid simply by adjoining an element e not in S and defining e • s = s = s • e for all s ∈ S.
Doe "adjoining" mean the same as "adding" at least in this case?
Are there other simple examples of this concept?
What would be simplest possible instance of something that is "left-adjoint"?
Sometimes "adjoining" means "adding something new", as in the semigroup-related sentence you quote. E.g. someone might say that using Maybe a means adding/adjoining a new element Nothing to the type a. Personally, I would only use "adding" for this, though.
That use of "adjoining" has nothing to do with adjoints in the categorical sense, which are a trickier concept.
Roughly put, assume you have a function type of the form
F a -> b
where F is some mapping from types to types (more precisely, a functor). Sometimes, you can express a type isomorphic to the above one, having the form
a -> G b
where "magically" the function F on the left side moved to the right side, changing into G.
The canonical example is currying: Let e.g.
F T = (T, Int)
G T = Int -> T
Then we have
(F a) -> b
-- definition of F
= (a, Int) -> b
-- currying
=~ a -> (Int -> b)
-- definition of G
= a -> G b
In such a case, we write F -| G, which is read as "F is left adjoint to G".
Every time you can "nicely move" an operation on the input type to the other side of the arrow, changing it into another operation on the output type, you have an adjoint. (Technically, "nicely" means we have a natural isomorphism.)
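In Haskell terms, the currying example is just this pair of mutually inverse functions (a small sketch, with F a = (a, Int) and G b = Int -> b as above; the names are mine):
leftToRight :: ((a, Int) -> b) -> (a -> (Int -> b))
leftToRight f = \a i -> f (a, i)    -- essentially `curry`

rightToLeft :: (a -> (Int -> b)) -> ((a, Int) -> b)
rightToLeft g = \(a, i) -> g a i    -- essentially `uncurry`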
In Haskell, you can use unsafeCoerce to override the type system. How to do the same in F#?
For example, to implement the Y-combinator.
I'd like to offer a different solution, based on embedding the untyped lambda calculus in a typed functional language. The idea is to create a data type that allows us to convert between types α and α → α, which subsequently allows us to escape the restrictions of the type system. I'm not very familiar with F#, so I'll give my answer in Haskell, but I believe it could be adapted easily (perhaps the only complication could be F#'s strictness).
-- | Roughly represents a morphism between @a@ and @a -> a@.
-- Therefore we can embed an arbitrary closed λ-term into @Any a@. Any time we
-- need to create a λ-abstraction, we just nest into one @Any@ constructor.
--
-- The type parameter allows us to embed ordinary values into the type and
-- retrieve results of computations.
data Any a = Any (Any a -> a)
Note that the type parameter isn't significant for combining terms. It just allows us to embed values into our representation and extract them later. All terms of a particular type Any a can be combined freely without restrictions.
-- | Embed a value into a λ-term. If viewed as a function, it ignores its
-- input and produces the value.
embed :: a -> Any a
embed = Any . const
-- | Extract a value from a λ-term, assuming it's a valid value (otherwise it'd
-- loop forever).
extract :: Any a -> a
extract x@(Any x') = x' x
With this data type we can use it to represent arbitrary untyped lambda terms. If we want to interpret a value of Any a as a function, we just unwrap its constructor.
First let's define function application:
-- | Applies a term to another term.
($$) :: Any a -> Any a -> Any a
(Any x) $$ y = embed $ x y
And λ abstraction:
-- | Represents a lambda abstraction
l :: (Any a -> Any a) -> Any a
l x = Any $ extract . x
Now we have everything we need for creating complex λ-terms. Our definitions mimic the classical λ-term syntax; all we do is use l to construct λ-abstractions.
Let's define the Y combinator:
-- λf.(λx.f(xx))(λx.f(xx))
y :: Any a
y = l (\f -> let t = l (\x -> f $$ (x $$ x))
             in t $$ t)
And we can use it to implement Haskell's classical fix. First we'll need to be able to embed a function of a -> a into Any a:
embed2 :: (a -> a) -> Any a
embed2 f = Any (f . extract)
Now it's straightforward to define
fix :: (a -> a) -> a
fix f = extract (y $$ embed2 f)
and subsequently a recursively defined function:
fact :: Int -> Int
fact = fix f
  where
    f _ 0 = 1
    f r n = n * r (n - 1)
Note that in the above text there is no recursive function. The only recursion is in the Any data type, which allows us to define y (which is also defined non-recursively).
In Haskell, unsafeCoerce has the type a -> b and is generally used to assert to the compiler that the thing being coerced actually has the destination type and it's just that the type-checker doesn't know it.
Another, less common use, is to reinterpret a pattern of bits as another type. For example an unboxed Double# could be reinterpreted as an unboxed Int64#. You have to be sure about the underlying representations for this to be safe.
In F#, the first application can be achieved with box |> unbox as John Palmer said in a comment on the question. If possible use explicit type arguments to make sure that you don't accidentally have the wrong coercion inferred, e.g. box<'a> |> unbox<'b> where 'a and 'b are type variables or concrete types that are already in scope in your code.
For the second application, look at the BitConverter class for specific conversions of bit-patterns. In theory you could also do something like interfacing with unmanaged code to achieve this, but that seems very heavyweight.
These techniques won't work for implementing the Y combinator because the cast is only valid if the runtime objects actually do have the target type, but with the Y combinator you actually need to call the same function again but with a different type. For this you need the kinds of encoding tricks mentioned in the question John Palmer linked to.