In this image it says the output should be A' + B, but to me it seems like it should be A' + AB. If I'm wrong, could someone explain to me how A' + B is the correct answer? The reason I ask is that in the second group of ones, you get ABC + ABC'. Since A and B are both common in the last two groups, shouldn't they both be included? Thanks in advance.
Karnaugh Map Simplification Example
The key bit you are missing is that A' + B is equal to A' + AB.
Because OR distributes over AND in Boolean algebra (X + YZ = (X + Y)(X + Z)), we can rewrite
A' + AB
as the product
(A' + A)(A' + B)
and since the first factor A' + A is always 1 (the complement law), we are left with only the second factor.
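If you want to check the identity mechanically, here is a quick brute-force sketch in Haskell (the name check is just an ad-hoc choice), encoding A' as not a and enumerating all four assignments:

-- A' + B and A' + AB agree on every assignment, so check evaluates to True
check :: Bool
check = and [ (not a || b) == (not a || (a && b))
            | a <- [False, True], b <- [False, True] ]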
I am going through my practice exam for Programming Language Foundations using Agda, and it has the following question:
You are given the following Agda declaration:
data Even : N → Set where
  ezero : Even 0
  esuc  : {n : N} → Even n → Even (2+ n)
Assume that the standard library of natural numbers has been imported. Answer the following questions:
a) What is the type of ezero?
b) Are there any terms of type Even 1?
c) How many terms are of type Even 2? List them.
d) Describe one potential problem that might occur if we change the return type of esuc to be Even (n+2) instead of Even (2+n).
We're not provided a solution manual. The questions seem pretty basic, but I am not sure about any of these. I think the answers to the first three are:
a) Set
b) No terms of type Even 1
c) One term of type Even 2
d) don't know
Answers to these questions along with a brief explanation would be highly appreciated. Thanks
What is the type of ezero?
The type of the data constructor ezero can be read from the data declaration: ezero : Even 0 states that it has type Even 0.
Are there any terms of type Even 1?
No, there aren't any. This can be seen by a case distinction: if there were such a term, it would have to start with one of the two constructors. And because these have specific return indices, those indices would have to unify with 1.
ezero would enforce 1 = 0
esuc would mean that there is an n such that 1 = 2+ n
Both of these situations are impossible.
How many terms are of type Even 2? List them
There is exactly one: esuc ezero. With reasoning similar to the one in the previous question, we can prove that ezero and esuc (esuc p) (for some p) won't do.
Describe one potential problem that might occur if we change the return type of esuc to be Even (n+2) instead of Even (2+n).
Consider the proof plus-Even : {m n : N} → Even m → Even n → Even (m + n). Because _+_ is defined by induction on its first argument, you won't be able to immediately apply esuc in the step case. You are going to need to use rewriting to reorganize the type of the goal from Even ((m + 2) + n) (or Even (m + (n + 2)), depending on which argument you perform induction on) to Even ((m + n) + 2) beforehand.
I often run into the term "adjoin" when trying to understand some concepts. Those things are too abstract for me to understand, as I'm an expert in neither the field nor category theory.
The simplest case I found is the Monoid (Maybe a) instance, which often doesn't behave as I would expect with regard to Nothing.
From Wikipedia we can learn that by "adjoining" an element to a semigroup we can get a different Monoid instance. I don't understand the sentence but the equations given suggest it's exactly what I need (and is not default for some reason):
Any semigroup S may be turned into a monoid simply by adjoining an element e not in S and defining e • s = s = s • e for all s ∈ S.
Doe "adjoining" mean the same as "adding" at least in this case?
Are there other simple examples of this concept?
What would be the simplest possible instance of something that is "left adjoint"?
Sometimes "adjoining" means "adding something new", as in the semigroup-related sentence you quote. E.g. someone might say that using Maybe a means adding/adjoining a new element Nothing to the type a. Personally, I would only use "adding" for this, though.
This has nothing to do with adjoints in the categorical sense, which are a tricky concept.
Roughly put, assume you have a functional type of the form
F a -> b
where F is some mapping from types to types (more precisely, a functor). Sometimes, you can express a type isomorphic to the one above, having the form
a -> G b
where "magically" the function F on the left side moved to the right side, changing into G.
The canonical example is currying: Let e.g.
F T = (T, Int)
G T = Int -> T
Then we have
(F a) -> b
-- definition of F
= (a, Int) -> b
-- currying
=~ a -> (Int -> b)
-- definition of G
= a -> G b
In such a case, we write F -| G, which is read as "F is left adjoint to G".
Every time you can "nicely move" an operation on the input type to the other side of the arrow, changing it into another operation on the output type, you have an adjunction. (Technically, "nicely" means that we have a natural isomorphism.)
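In Haskell, the two directions of this particular isomorphism are just curry and uncurry (a minimal sketch; the names leftAdjunct and rightAdjunct are conventional, not from the answer above):

-- F a = (a, Int), G b = Int -> b; the natural isomorphism
-- between (F a -> b) and (a -> G b), in both directions:
leftAdjunct :: ((a, Int) -> b) -> (a -> (Int -> b))
leftAdjunct = curry

rightAdjunct :: (a -> (Int -> b)) -> ((a, Int) -> b)
rightAdjunct = uncurry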
I've been studying dependent types and I understand the following:
Why universal quantification is represented as a dependent function type. ∀(x:A).B(x) means “for all x of type A there is a value of type B(x)”. Hence it's represented as a function which when given any value x of type A returns a value of type B(x).
Why existential quantification is represented as a dependent pair type. ∃(x:A).B(x) means “there exists an x of type A for which there is a value of type B(x)”. Hence it's represented as a pair whose first element is a particular value x of type A and whose second element is a value of type B(x).
Aside: It's also interesting to note that universal quantification is always used with material implication while existential quantification is always used with logical conjunction.
Anyway, the Wikipedia article on dependent types states that:
The opposite of the dependent type is the dependent pair type, dependent sum type or sigma-type. It is analogous to the coproduct or disjoint union.
How is it that a pair type (which is normally a product type) is analogous to a disjoint union (which is a sum type)? This has always confused me.
In addition, how is the dependent function type analogous to the product type?
The confusion arises from using similar terminology for the structure of a Σ type and for what its values look like.
A value of Σ(x:A) B(x) is a pair (a,b) where a∈A and b∈B(a). The type of the second element depends on the value of the first one.
If we look at the structure of Σ(x:A) B(x), it's a disjoint union (coproduct) of B(x) for all possible x∈A.
If B(x) is constant (independent of x) then Σ(x:A) B will be just |A| copies of B, that is A⨯B (a product of 2 types).
If we look at the structure of Π(x:A) B(x), it's a product of B(x) for all possible x∈A. Its values could be viewed as |A|-tuples where a-th component is of type B(a).
If B(x) is constant (independent of x), then Π(x:A) B will be just A→B, functions from A to B, that is Bᴬ (B to the power A) in set-theory notation: the product of |A| copies of B.
So Σ(x∈A) B(x) is a |A|-ary coproduct indexed by the elements of A, while Π(x∈A) B(x) is a |A|-ary product indexed by the elements of A.
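To make the counting concrete, here is a Haskell sketch for a two-element index type (the names are made up; b1 and b2 stand for the two components B(I1) and B(I2) of the family):

-- The index type, playing the role of A = {I1, I2}.
data Idx = I1 | I2

-- Σ(i:Idx) B(i): pick an index, supply a value of that component; a 2-ary coproduct.
data Sigma b1 b2 = AtI1 b1 | AtI2 b2

-- Π(i:Idx) B(i): supply a value for every index at once; a 2-ary product.
data Pi b1 b2 = Pi b1 b2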
A dependent pair type is formed from a type and a function from values of that type to types. A value of the dependent pair type is a pair of a value s of the first type and a value of the type obtained by applying the function to s.
data Sg (S : Set) (T : S -> Set) : Set where
  Ex : (s : S) -> T s -> Sg S T
We can recapture sum types by showing how Either is canonically expressed as a sigma type: it's just Sg Bool (choice a b) where
choice : a -> a -> Bool -> a
choice l r True = l
choice l r False = r
is the canonical eliminator of booleans.
eitherIsSg : {a b : Set} -> Either a b -> Sg Bool (choice a b)
eitherIsSg (Left a) = Ex True a
eitherIsSg (Right b) = Ex False b
sgIsEither : {a b : Set} -> Sg Bool (choice a b) -> Either a b
sgIsEither (Ex True a) = Left a
sgIsEither (Ex False b) = Right b
Building on Petr Pudlák’s answer, another angle to see this in a purely non-dependent fashion is to notice that the type Either a a is isomorphic to the type (Bool, a). Although the latter is, at first glance, a product, it makes sense to say it’s a sum type, as it is the sum of two instances of a.
I have to do this example with Either a a instead of Either a b, because for the latter to be expressed as a product, we need – well – dependent types.
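The isomorphism can be written out directly (a small sketch):

toPair :: Either a a -> (Bool, a)
toPair (Left x)  = (True,  x)
toPair (Right x) = (False, x)

fromPair :: (Bool, a) -> Either a a
fromPair (True,  x) = Left x
fromPair (False, x) = Right x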
Good question. The name could originate from Martin-Löf who used the term "Cartesian product of a family of sets" for the pi type. See the following notes, for example:
http://www.cs.cmu.edu/afs/cs/Web/People/crary/819-f09/Martin-Lof80.pdf
The point is that while a pi type is in principle akin to an exponential, you can always see an exponential as an n-ary product where n is the exponent. More concretely, the non-dependent function type A -> B can be seen as an exponential type B^A or as an |A|-ary product Pi_{a in A} B = B x B x B x ... x B (|A| times). A dependent product is in this sense a potentially infinite product Pi_{a in A} B(a) = B(a_1) x B(a_2) x ... x B(a_n) (one factor for every a_i in A).
The reasoning for dependent sum could be similar, as you can see a product as an n-ary sum where n is one of the factors of the product.
This is probably redundant with the other answers at this point, but here is the core of the issue:
How is it that a pair type (which is normally a product type) is analogous to a disjoint union (which is a sum type)? This has always confused me.
But what is a product but a sum of equal numbers? e.g. 4 × 3 = 3 + 3 + 3 + 3.
The same relationship holds for types, or sets, or similar things. In fact, the nonnegative integers are just the decategorification of finite sets. The definitions of addition and multiplication on numbers are chosen so that the cardinality of a disjoint union of sets is the sum of the cardinalities of the sets, and the cardinality of a product of sets is equal to the product of the cardinalities of the sets. In fact, if you substitute "set" with "herd of sheep", this is probably how arithmetic was invented.
First, see what a co-product is.
A co-product of a family of objects B_i is an object A together with arrows B_i -> A such that, for every object X and every family of arrows B_i -> X, there is a unique arrow A -> X making the corresponding triangles commute.
You can view this as a Haskell data type A, with the arrows B_i -> A being a bunch of constructors, each with a single argument of type B_i. It is clear then that for every family of arrows B_i -> X it is possible to supply an arrow A -> X: through pattern-matching you apply the appropriate arrow to the B_i inside to get an X.
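In the binary Haskell case, Either b1 b2 plays the role of A, Left and Right are the arrows B_i -> A, and the Prelude's either builds the mediating arrow (a sketch; mediate is just a name for emphasis):

-- Given the family f :: b1 -> x, g :: b2 -> x, `either f g` is the
-- unique arrow A -> X; the triangles commute:
--   either f g . Left  == f
--   either f g . Right == g
mediate :: (b1 -> x) -> (b2 -> x) -> (Either b1 b2 -> x)
mediate = either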
The important connection to sigma types is that the index i in B_i can be of any type, not just a type of natural numbers.
The important difference from the answers above is that there does not have to be a B_i for every value i of that type: once you've defined B_i for all i, you have a total function.
The difference between Π and Σ, as may be seen from Petr Pudlák's answer, is that for Σ some of the values B_i in the tuple may be missing: for some i there may be no corresponding B_i.
The other clear difference between Π and Σ is that Π characterizes a product of B_i by providing the i-th projection from the product Π to each B_i (this is what the function i -> B_i means), while Σ provides the arrows the other way around: it provides the i-th injection from each B_i into Σ.
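In the binary Haskell case (a sketch), the two directions are visible on the standard pair and Either types:

-- Π side: the product comes with a projection to each component.
proj1 :: (b1, b2) -> b1
proj1 = fst

proj2 :: (b1, b2) -> b2
proj2 = snd

-- Σ side: the sum comes with an injection from each component.
inj1 :: b1 -> Either b1 b2
inj1 = Left

inj2 :: b2 -> Either b1 b2
inj2 = Right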
First of all, this is an assignment, so I don't want a complete solution :)
I'm going to calculate the value of a hand in the card game Blackjack.
The rules: every Ace counts as either 1 or 11.
Suppose my hand is (Ace, 5); my hand is now worth 16. The next card is a 6, so my hand is now (Ace, 5, 6), worth 22, but the Ace that I already counted as 11 must now change to 1, so my hand is worth 12.
My Hand datatype is defined recursively by
data Hand = Empty | Add Card Hand
so calculating a hand with fixed card values is done by
valueOfHand Empty = 0
valueOfHand (Add c h) = cardValue c + valueOfHand h
What's the pattern to change the values that appeared before?
I'm not sure if your class has already covered the list monad, but I think that's the most natural way to solve this. So instead of having cardValue return a simple value, it should return a non-deterministic value that lists all the possible values that the card might have, i.e.
cardValue :: Card -> [Int]
cardValue Ace = [1, 11]
cardValue Two = [2]
...
valueOfHand will then have two parts: one that computes a list of all possible hand values and another that selects the best, legal hand.
Let me know if this is enough for you to solve it or if you need more hints.
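In case it helps, the shape this might take is something like the following sketch (possibleValues and bestValue are made-up names, and the traversal assumes the recursive Hand type from the question):

-- All totals the hand could be worth, one per choice of Ace values.
possibleValues :: Hand -> [Int]
possibleValues Empty     = [0]
possibleValues (Add c h) = [v + rest | v <- cardValue c, rest <- possibleValues h]

-- Pick the best total that doesn't bust, if there is one.
bestValue :: Hand -> Maybe Int
bestValue h = case filter (<= 21) (possibleValues h) of
  []     -> Nothing
  legals -> Just (maximum legals)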
If, as you indicate in the comments, Aces may only have one value per hand (so a hand of three Aces is 3 or 33), then it makes sense to define your valueOfHand :: Hand -> Integer function in a way that first totals up non-Ace cards and then handles the Aces.
I would expect such a function would be based around something like this:
valueOfHand Empty = 0
valueOfHand h = valueOfAces (filter (\c -> c == Ace) h) (filter (\c -> c /= Ace) h)
For some function valueOfAces :: Hand -> Hand -> Integer.
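One possible shape for it, under the all-Aces-share-one-value rule (a sketch; handToList is a hypothetical helper, needed because filter works on lists rather than on the recursive Hand type):

-- Flatten the recursive Hand so list functions like filter and length apply.
handToList :: Hand -> [Card]
handToList Empty     = []
handToList (Add c h) = c : handToList h

-- Count every Ace as 11 if that stays at or under 21, otherwise as 1.
valueOfAces :: Hand -> Hand -> Integer
valueOfAces aces rest =
  let base = valueOfHand rest
      n    = fromIntegral (length (handToList aces))
  in if base + 11 * n <= 21 then base + 11 * n else base + n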
The function instance for ArrowLoop contains
loop :: ((b,d) -> (c,d)) -> (b -> c)
loop f b = let (c,d) = f (b,d) in c
First, I have a problem with the signature: how can we possibly get b -> c from (b,d) -> (c,d)? I mean, the c in the resulting tuple may depend on both elements of the input; how is it possible to "cut off" the influence of d?
Second, I don't get how the let works here. Doesn't (c,d) = f (b,d) contain a cyclic definition for d? Where does d come from? To be honest, I'm surprised this is valid syntax, as it looks like we would kind of redefine d.
I mean, in mathematics this would make some kind of sense: e.g. f could be a complex function, but I would provide only the real part b, and I would need to choose the imaginary part d in such a way that it doesn't change when I evaluate f (b,d), which would make it some kind of fixed point. But if this analogy holds, the let expression must somehow "search" for that fixed point for d (and there could be more than one). Which looks close to magic to me. Or am I overcomplicating things?
This works the same way the standard definition of fix works:
fix f = let x = f x in x
i.e., it's finding a fixed point in the exact same way fix does: recursively.
For instance, as a trivial example, consider loop (\((),xs) -> (xs, 1:xs)) (). This is just like fix (\xs -> 1:xs); we ignore our input, and use the d output (here xs) as our main output. The extra element in the tuple that loop has is just to contain the input parameter and output value, since arrows can't do currying. Consider how you'd define a factorial function with fix — you'd end up using currying, but when using arrows you'd use the extra parameter and output that loop gives you.
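For instance, the factorial the answer alludes to might be written like this (a sketch, not from the original post; it assumes import Control.Arrow (loop)):

import Control.Arrow (loop)

-- The d component carries the function being defined; laziness lets the
-- arrow consume rec while simultaneously producing it.
factorial :: Integer -> Integer
factorial = loop (\(n, rec) ->
  (rec n, \i -> if i <= 1 then 1 else i * rec (i - 1)))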
Basically, loop ties a knot, giving an arrow access to an auxiliary output of itself, just like fix ties a knot, giving a function access to its own output as an input.
"Search for the fixed point" is exactly what this does. This is Haskell's laziness in action. See more at Wikipedia.