Continual signal switching in arrowized FRP - haskell

I've been playing around with Arrowized FRP libraries in Haskell (Yampa, in particular), but I can't quite figure out how to do "continual" switching. By that I mean that a signal passes through a signal function (sf below) which is itself a signal (as drawn in the upper half of the image).
Since I don't know ahead of time what the parameters of the switch will be, I can't see how to reduce this to a simpler, binary switch.
How then should one do it, if it's possible at all? I'd prefer Yampa code, but am happy with any Arrowized FRP code. I haven't tried other libraries (e.g. Sodium or Reactive Banana) to know whether I'd have the same confusion in those cases, but I'm curious about them too.
EDIT
To make this clearer and more concrete, I've labeled the image; possible types for the labels are:
in: Either Int (Int -> Int)
1: (Int -> Int) -> (Either Int (Int -> Int) -> (Int -> Int))
sf could be:
(Either Int (Int -> Int) -> (Int -> Int)) -> Either Int (Int -> Int) -> (Int -> Int)
(e.g., app). But that's only if the part labeled with a question mark represents an input into sf. If it represents a more complex switch, the type would be
(Either Int (Int -> Int) -> (Int -> Int)) -> (Int -> Int)
instead.
2 and out are pretty much irrelevant.
The idea is that I want the circuit to behave as if sf were app, with the signal labeled f representing the function that is applied to in, and with in itself being the source of both the arguments to the fs and the fs themselves. I want to get a circuit that can process inputs and change its behavior (the signal functions that constitute it) dynamically based on those inputs.
On the one hand, it seems to me like sf can't in fact be app, since in this case we don't have an ArrowApply; but on the other hand I imagine that same behavior can be achieved with some form of sophisticated switching.

You're asking for an arrow that's output by an arrow to be used as an arrow.
That's what app from ArrowApply is for.
If you want to use that in some looped construct like your diagram, you might need ArrowLoop, but actually the do notation allows you to be fairly flexible with all this stuff anyway.
There's quite a lengthy explanation of app in this answer but I'll copy the main relevant bit:
What does app exactly do? Its type doesn't even have a (->) in it. It lets you use the output of an arrow as an arrow. Let's look at the type.
app :: ArrowApply m => m (m b c, b) c
I prefer to use m to a because m feels more like a computation and a feels like a value. Some people like to use a type operator (infix type constructor), so you get
app :: ArrowApply (~>) => (b ~> c, b) ~> c
We think of b ~> c as an arrow, and we think of an arrow as a thing which takes bs, does something and gives cs. So this means app is an arrow that takes an arrow and a value, and can produce the value that the first arrow would have produced on that input.
It doesn't have -> in the type signature because when programming with arrows, we can turn any function into an arrow using arr :: Arrow (~>) => (b -> c) -> b ~> c, but you can't turn every arrow into a function, thus (b ~> c, b) ~> c is usable where (b ~> c, b) -> c or (b -> c, b) ~> c would not be.
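For the ordinary function instance (->), app is simply uncurried application, which makes this concrete. A minimal runnable sketch:
import Control.Arrow (app)

-- For (->), app (f, x) = f x: the arrow in the first component is
-- applied to the value in the second.
example :: Int
example = app ((+ 1), 41)  -- 42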

I still think it's a case of ArrowLoop!
You have
in :: Arr () A
sf :: Arr (A -> B, A) B
one :: Arr B (A -> B)
two :: Arr B C
sf is just arr (uncurry ($)).
Then you have sf >>> (one &&& two) :: Arr (A -> B, A) (A -> B, C) and you can use loop (or rather loop with arr swap judiciously placed) to get an Arr A C.
Will that give you what you want?
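For what it's worth, here is a minimal sketch of that wiring in Yampa, using loopPre (loop with a one-sample delay on the fed-back value, which keeps the feedback well-defined). The concrete choices of one and two, and the initial function (+ 1), are purely illustrative:
{-# LANGUAGE Arrows #-}
import FRP.Yampa

type A = Int
type B = Int
type C = String

-- sf = arr (uncurry ($)); one b = (+ b); two = show (all hypothetical).
circuit :: SF A C
circuit = loopPre (+ 1) $ proc (a, f) -> do
  let b = f a            -- apply the currently fed-back function
  returnA -< (show b,    -- two: the visible output
              (+ b))     -- one: next step's function, fed back with a delay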

Related

Usefulness of "function arrows associate to the right"?

Reading http://www.seas.upenn.edu/~cis194/spring13/lectures/04-higher-order.html, it states:
In particular, note that function arrows associate to the right, that is, W -> X -> Y -> Z is equivalent to W -> (X -> (Y -> Z)). We can always add or remove parentheses around the rightmost top-level arrow in a type.
Function arrows associate to the right, but since function application associates to the left, what is the usefulness of this information? I feel I'm not understanding something, because to me it seems a meaningless point that function arrows associate to the right. If function application always associates to the left, is that the only associativity I should be concerned with?
Function arrows associate to the right but [...] what is the usefulness of this information?
If you see a type signature like, for example, f :: String -> Int -> Bool, you need to know the associativity of the function arrow to understand what the type of f really is:
if the arrow associates to the left, then the type means (String -> Int) -> Bool, that is, f takes a function as argument and returns a boolean.
if the arrow associates to the right, then the type means String -> (Int -> Bool), that is, f takes a string as argument and returns a function.
That's a big difference, and if you want to use f, you need to know which one it is. Since the function arrow associates to the right, you know that it has to be the second option: f takes a string and returns a function.
Function arrows associate to the right [...] function application associates to the left
These two choices work well together. For example, we can call the f from above as f "answer" 42 which really means (f "answer") 42. So we are passing the string "answer" to f which returns a function. And then we're passing the number 42 to that function, which returns a boolean. In effect, we're almost using f as a function with two arguments.
This is the standard way of writing functions with two (or more) arguments in Haskell, so it is a very common use case. Because of the associativity of function application and of the function arrow, we can write this common use case without parentheses.
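As a small runnable sketch of that parse (the function body is made up for illustration):
f :: String -> Int -> Bool
f s n = length s == n

check :: Bool
check = f "answer" 42   -- parses as (f "answer") 42

partial :: Int -> Bool
partial = f "answer"    -- partial application: a function is returned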
When defining a two-argument curried function, we usually write something like this:
f :: a -> b -> c
f x y = ...
If the arrow did not associate to the right, the above type would instead have to be spelled out as a -> (b -> c). So the usefulness of ->'s associativity is that it saves us from writing too many parentheses when declaring function types.
If an operator # is 'right associative', it means this:
a # b # c # d = a # (b # (c # d))
... for any number of arguments. It behaves like foldr
This means that:
a -> b -> c -> d = a -> (b -> (c -> d))
Note: a -> (b -> (c -> d)) =/= ((a -> b) -> c) -> d ! This is very important.
What this tells us is that, say, foldr:
λ> :t foldr
foldr :: (a -> b -> b) -> b -> [a] -> b
Takes a function of type (a -> b -> b), and then returns... a function that takes a b, and then returns... a function that takes a [a], and then returns... a b. This means that we can apply functions like this
f a b c
because
f a b c = ((f a) b) c
and f returns a new function each time it is given an argument, until all the arguments have been supplied.
Essentially, this isn't very useful in itself, but it is important information for when we want to read function types and apply functions.
However, in functions like (++), associativity matters. If (++) were left associative, it would be very slow, so it's right associative.
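A sketch of why the nesting matters for (++): left-nesting re-copies earlier results, while right-nesting visits each element once.
-- xs ++ ys copies all of xs, so left-nesting copies the first list
-- again for every further append; right-nesting copies each list once.
leftNested, rightNested :: [Int]
leftNested  = ([1 .. 1000] ++ [1 .. 1000]) ++ [1 .. 1000]  -- quadratic-ish
rightNested = [1 .. 1000] ++ ([1 .. 1000] ++ [1 .. 1000])  -- linear
Since (++) is declared infixr 5, the unparenthesized xs ++ ys ++ zs already means the fast, right-nested version.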
The early functional language Lisp suffered from excessively nested parentheses (which make code (or even text (if you don't mind considering a broader context)) difficult to read). Over time, functional language designers opted to make functional code easy to read and write for experienced users, even at the cost of confusing newcomers with less uniform rules.
In functional code, function type declarations like (String -> Int) -> Bool are much rarer than ones like String -> (Int -> Bool), because functions that return functions are a trademark of the functional style. Associating arrows to the right therefore reduces the number of parentheses needed on average. For function application it is vice versa.
The main purpose is convenience, because partial function application goes from left to right.
Every time you partially apply a function to a set of values, the remaining type has to be valid.
You can think of arrow types as a queue of types, where the queue itself is a type. During partial function application, you dequeue as many types from the queue as the number of arguments, yielding whatever remains of the queue. The resulting queue is still a valid type.
This is why types associate to the right. If types associate to the left, it will behave like a stack, and you won't be able to partially apply it the same way without leaving "holes" or undefined domains. For instance, say you have the following function:
foo :: a -> b -> c -> d
If Haskell types were left-associative, then passing a single parameter to foo would yield the following invalid type:
((? -> b) -> c) -> d
You will then be forced to circumvent it by adding parentheses, which could hamper readability.
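A short sketch of the "queue" view (the function itself is made up):
foo :: Int -> (Bool -> (Char -> String))  -- fully parenthesized form
foo n b c = show n ++ show b ++ [c]

step1 :: Bool -> Char -> String
step1 = foo 1        -- Int dequeued; what remains is still a valid type

step2 :: Char -> String
step2 = foo 1 True   -- Int and Bool dequeued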

What is indexed monad?

What is an indexed monad, and what is the motivation for this monad?
I have read that it helps to keep track of side effects, but the type signature and documentation don't lead me anywhere.
What would be an example of how it can help to keep track of side effects (or any other valid example)?
As ever, the terminology people use is not entirely consistent. There's a variety of inspired-by-monads-but-strictly-speaking-isn't-quite notions. The term "indexed monad" is one of a number (including "monadish" and "parameterised monad" (Atkey's name for them)) of terms used to characterize one such notion. (Another such notion, if you're interested, is Katsumata's "parametric effect monad", indexed by a monoid, where return is indexed neutrally and bind accumulates in its index.)
First of all, let's check kinds.
IxMonad (m :: state -> state -> * -> *)
That is, the type of a "computation" (or "action", if you prefer, but I'll stick with "computation"), looks like
m before after value
where before, after :: state and value :: *. The idea is to capture the means to interact safely with an external system that has some predictable notion of state. A computation's type tells you what the state must be before it runs, what the state will be after it runs and (like with regular monads over *) what type of values the computation produces.
The usual bits and pieces are *-wise like a monad and state-wise like playing dominoes.
ireturn :: a -> m i i a -- returning a pure value preserves state
ibind :: m i j a ->      -- we can go from i to j and get an a, thence
         (a -> m j k b)  -- we can go from j to k and get a b, therefore
      -> m i k b         -- we can indeed go from i to k and get a b
The notion of "Kleisli arrow" (function which yields computation) thus generated is
a -> m i j b -- values a in, b out; state transition i to j
and we get a composition
icomp :: IxMonad m => (b -> m j k c) -> (a -> m i j b) -> a -> m i k c
icomp f g = \ a -> ibind (g a) f
and, as ever, the laws exactly ensure that ireturn and icomp give us a category
ireturn `icomp` g = g
f `icomp` ireturn = f
(f `icomp` g) `icomp` h = f `icomp` (g `icomp` h)
or, in comedy fake C/Java/whatever,
g(); skip = g()
skip; f() = f()
{h(); g()}; f() = h(); {g(); f()}
Why bother? To model "rules" of interaction. For example, you can't eject a dvd if there isn't one in the drive, and you can't put a dvd into the drive if there's one already in it. So
data DVDDrive :: Bool -> Bool -> * -> * where  -- Bool is "drive full?"
  DReturn :: a -> DVDDrive i i a
  DInsert :: DVD ->                  -- you have a DVD
             DVDDrive True k a ->    -- you know how to continue full
             DVDDrive False k a      -- so you can insert from empty
  DEject  :: (DVD ->                 -- once you receive a DVD
              DVDDrive False k a) -> -- you know how to continue empty
             DVDDrive True k a       -- so you can eject when full
instance IxMonad DVDDrive where  -- put these methods where they need to go
  ireturn = DReturn              -- so this goes somewhere else
  ibind (DReturn a) k     = k a
  ibind (DInsert dvd j) k = DInsert dvd (ibind j k)
  ibind (DEject j) k      = DEject $ \ dvd -> ibind (j dvd) k
With this in place, we can define the "primitive" commands
dInsert :: DVD -> DVDDrive False True ()
dInsert dvd = DInsert dvd $ DReturn ()
dEject :: DVDDrive True False DVD
dEject = DEject $ \ dvd -> DReturn dvd
from which others are assembled with ireturn and ibind. Now, I can write (borrowing do-notation)
discSwap :: DVD -> DVDDrive True True DVD
discSwap dvd = do dvd' <- dEject; dInsert dvd; ireturn dvd'
but not the physically impossible
discSwap :: DVD -> DVDDrive True True DVD
discSwap dvd = do dInsert dvd; dEject -- ouch!
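As an aside, here is a sketch of how such computations can be run once built, assuming the DVDDrive GADT above, a placeholder DVD type, and an indexed singleton for the drive contents (all illustrative). The indices make the ill-stated cases unmatchable, so the interpreter is total:
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

newtype DVD = DVD String

data Disc :: Bool -> * where
  Empty :: Disc False
  Full  :: DVD -> Disc True

runDrive :: DVDDrive i j a -> Disc i -> (a, Disc j)
runDrive (DReturn a)   s        = (a, s)
runDrive (DInsert d k) Empty    = runDrive k (Full d)
runDrive (DEject k)    (Full d) = runDrive (k d) Empty
-- e.g. runDrive dEject (Full (DVD "X")) evaluates to (DVD "X", Empty)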
Alternatively, one can define one's primitive commands directly
data DVDCommand :: Bool -> Bool -> * -> * where
  InsertC :: DVD -> DVDCommand False True ()
  EjectC  :: DVDCommand True False DVD
and then instantiate the generic template
data CommandIxMonad :: (state -> state -> * -> *) ->
                       state -> state -> * -> * where
  CReturn :: a -> CommandIxMonad c i i a
  (:?)    :: c i j a -> (a -> CommandIxMonad c j k b) ->
             CommandIxMonad c i k b

instance IxMonad (CommandIxMonad c) where
  ireturn = CReturn
  ibind (CReturn a) k = k a
  ibind (c :? j) k    = c :? \ a -> ibind (j a) k
In effect, we've said what the primitive Kleisli arrows are (what one "domino" is), then built a suitable notion of "computation sequence" over them.
Note that for every indexed monad m, the "no change diagonal" m i i is a monad, but in general, m i j is not. Moreover, values are not indexed but computations are indexed, so an indexed monad is not just the usual idea of monad instantiated for some other category.
Now, look again at the type of a Kleisli arrow
a -> m i j b
We know we must be in state i to start, and we predict that any continuation will start from state j. We know a lot about this system! This isn't a risky operation! When we put the dvd in the drive, it goes in! The dvd drive doesn't get any say in what the state is after each command.
But that's not true in general, when interacting with the world. Sometimes you might need to give away some control and let the world do what it likes. For example, if you are a server, you might offer your client a choice, and your session state will depend on what they choose. The server's "offer choice" operation does not determine the resulting state, but the server should be able to carry on anyway. It's not a "primitive command" in the above sense, so indexed monads are not such a good tool to model the unpredictable scenario.
What's a better tool?
type f :-> g = forall state. f state -> g state

class MonadIx (m :: (state -> *) -> (state -> *)) where
  returnIx   :: x :-> m x
  flipBindIx :: (a :-> m b) -> (m a :-> m b)  -- tidier than bindIx
Scary biscuits? Not really, for two reasons. One, it looks rather more like what a monad is, because it is a monad, but over (state -> *) rather than *. Two, if you look at the type of a Kleisli arrow,
a :-> m b = forall state. a state -> m b state
you get the type of computations with a precondition a and postcondition b, just like in Good Old Hoare Logic. Assertions in program logics have taken under half a century to cross the Curry-Howard correspondence and become Haskell types. The type of returnIx says "you can achieve any postcondition which holds, just by doing nothing", which is the Hoare Logic rule for "skip". The corresponding composition is the Hoare Logic rule for ";".
Let's finish by looking at the type of bindIx, putting all the quantifiers in.
bindIx :: forall i. m a i -> (forall j. a j -> m b j) -> m b i
These foralls have opposite polarity. We choose initial state i, and a computation which can start at i, with postcondition a. The world chooses any intermediate state j it likes, but it must give us the evidence that postcondition b holds, and from any such state, we can carry on to make b hold. So, in sequence, we can achieve condition b from state i. By releasing our grip on the "after" states, we can model unpredictable computations.
Both IxMonad and MonadIx are useful. Both model validity of interactive computations with respect to changing state, predictable and unpredictable, respectively. Predictability is valuable when you can get it, but unpredictability is sometimes a fact of life. Hopefully, then, this answer gives some indication of what indexed monads are, predicting both when they start to be useful and when they stop.
There are at least three ways to define an indexed monad that I know.
I'll refer to these options as indexed monads à la X, where X ranges over the computer scientists Bob Atkey, Conor McBride and Dominic Orchard, as that is how I tend to think of them. Parts of these constructions have a much longer, more illustrious history and nicer interpretations through category theory, but I first learned of them associated with these names, and I'm trying to keep this answer from getting too esoteric.
Atkey
Bob Atkey's style of indexed monad is to work with 2 extra parameters to deal with the index of the monad.
With that you get the definitions folks have tossed around in other answers:
class IMonad m where
  ireturn :: a -> m i i a
  ibind   :: m i j a -> (a -> m j k b) -> m i k b
We can also define indexed comonads à la Atkey as well. I actually get a lot of mileage out of those in the lens codebase.
McBride
The next form of indexed monad is Conor McBride's definition from his paper "Kleisli Arrows of Outrageous Fortune". He instead uses a single parameter for the index. This makes the indexed monad definition have a rather clever shape.
If we define a natural transformation using parametricity as follows
type a ~> b = forall i. a i -> b i
then we can write down McBride's definition as
class IMonad m where
  ireturn :: a ~> m a
  ibind   :: (a ~> m b) -> (m a ~> m b)
This feels quite different from Atkey's, but it feels more like a normal Monad: instead of building a monad on (m :: * -> *), we build it on (m :: (k -> *) -> (k -> *)).
Interestingly you can actually recover Atkey's style of indexed monad from McBride's by using a clever data type, which McBride in his inimitable style chooses to say you should read as "at key".
data (:=) a i j where
  V :: a -> (a := i) i
Now you can work out that
ireturn :: IMonad m => (a := j) ~> m (a := j)
which expands to
ireturn :: IMonad m => (a := j) i -> m (a := j) i
can only be invoked when j = i, and then a careful reading of ibind can get you back the same as Atkey's ibind. You need to pass around these (:=) data structures, but they recover the power of the Atkey presentation.
On the other hand, the Atkey presentation isn't strong enough to recover all uses of McBride's version. Power has been strictly gained.
Another nice thing is that McBride's indexed monad is clearly a monad, it is just a monad on a different functor category. It works over endofunctors on the category of functors from (k -> *) to (k -> *) rather than the category of functors from * to *.
A fun exercise is figuring out how to do the McBride-to-Atkey conversion for indexed comonads. I personally use a data type 'At' for the "at key" construction in McBride's paper. I actually walked up to Bob Atkey at ICFP 2013 and mentioned that I'd turned him inside out and made him into a "Coat". He seemed visibly disturbed. The line played out better in my head. =)
Orchard
Finally, a third far-less-commonly-referenced claimant to the name of "indexed monad" is due to Dominic Orchard, where he instead uses a type level monoid to smash together indices. Rather than go through the details of the construction, I'll simply link to this talk:
https://github.com/dorchard/effect-monad/blob/master/docs/ixmonad-fita14.pdf
As a simple scenario, assume you have a state monad. The state type is a large, complex one, yet all these states can be partitioned into two sets: red and blue states. Some operations in this monad make sense only if the current state is a blue state. Among these, some will keep the state blue (blueToBlue), while others will make the state red (blueToRed). In a regular monad, we could write
blueToRed :: State S ()
blueToBlue :: State S ()
foo :: State S ()
foo = do blueToRed
         blueToBlue
triggering a runtime error, since the second action expects a blue state. We would like to prevent this statically. An indexed monad fulfills this goal:
data Red
data Blue
-- assume a new indexed State monad
blueToRed :: State S Blue Red ()
blueToBlue :: State S Blue Blue ()
foo :: State S ?? ?? ()
foo = blueToRed `ibind` \_ ->
      blueToBlue -- type error
A type error is triggered because the second index of blueToRed (Red) differs from the first index of blueToBlue (Blue).
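A minimal runnable sketch of this, with a hypothetical phantom-indexed state monad (the type IxState, the choice S = Int, and the two actions are all made up for illustration):
newtype IxState s i j a = IxState { runIxState :: s -> (s, a) }

ireturn :: a -> IxState s i i a
ireturn a = IxState $ \s -> (s, a)

ibind :: IxState s i j a -> (a -> IxState s j k b) -> IxState s i k b
ibind (IxState m) f = IxState $ \s ->
  let (s', a) = m s in runIxState (f a) s'

data Red
data Blue
type S = Int

blueToRed :: IxState S Blue Red ()
blueToRed = IxState $ \s -> (s + 1, ())

blueToBlue :: IxState S Blue Blue ()
blueToBlue = IxState $ \s -> (s * 2, ())

ok :: IxState S Blue Red ()
ok = blueToBlue `ibind` \_ -> blueToRed

-- bad = blueToRed `ibind` \_ -> blueToBlue  -- rejected: Red /= Blue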
As another example, with indexed monads you can allow a state monad to change the type for its state, e.g. you could have
data State old new a = State (old -> (new, a))
You could use the above to build a state which is a statically-typed heterogeneous stack. Operations would have type
push :: a -> State old (a,old) ()
pop :: State (a,new) new a
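A sketch of those two operations in terms of the State type just given, with a comment tracing how the state type changes:
push :: a -> State old (a, old) ()
push a = State $ \old -> ((a, old), ())

pop :: State (a, new) new a
pop = State $ \(a, new) -> (new, a)

-- Starting from (), "push 1, push 'x', pop" takes the state type through
-- () -> (Int, ()) -> (Char, (Int, ())) -> (Int, ()), returning 'x'.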
As another example, suppose you want a restricted IO monad which does not allow file access. You could use e.g.
openFile :: IO any FilesAccessed ()
newIORef :: a -> IO any any (IORef a)
-- no operation of type :: IO any NoAccess _
In this way, an action having type IO ... NoAccess () is statically guaranteed to be file-access-free, whereas an action of type IO ... FilesAccessed () can access files. Having an indexed monad means you don't have to build a separate type for the restricted IO, which would require duplicating every non-file-related function in both IO types.
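A sketch of how such a wrapper could look, with phantom access indices (every name here is illustrative, not a real library API; the file "access" is a stand-in effect):
import Data.IORef (IORef, newIORef)

data NoAccess
data FilesAccessed

newtype RIO i j a = RIO { runRIO :: IO a }

rreturn :: a -> RIO i i a
rreturn = RIO . return

rbind :: RIO i j a -> (a -> RIO j k b) -> RIO i k b
rbind (RIO m) f = RIO (m >>= runRIO . f)

openFile' :: FilePath -> RIO any FilesAccessed ()
openFile' p = RIO (readFile p >> return ())  -- stand-in for real file access

newIORef' :: a -> RIO any any (IORef a)
newIORef' = RIO . newIORef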
An indexed monad isn't a specific monad like, for example, the state monad but a sort of generalization of the monad concept with extra type parameters.
Whereas a "standard" monadic value has the type Monad m => m a, a value in an indexed monad would be IndexedMonad m => m i j a, where i and j are index types: i is the type of the index at the beginning of the monadic computation and j at the end. In a way, you can think of i as a sort of input type and j as the output type.
Using State as an example, a stateful computation State s a maintains a state of type s throughout the computation and returns a result of type a. An indexed version, IndexedState i j a, is a stateful computation where the state can change to a different type during the computation: the initial state has the type i, and the state at the end of the computation has the type j.
Using an indexed monad over a normal monad is rarely necessary but it can be used in some cases to encode stricter static guarantees.
It may be helpful to look at how indexing is used in dependently typed languages (e.g. Agda). This can explain how indexing helps in general; the experience then translates to monads.
Indexing makes it possible to establish relationships between particular instances of types. You can then reason about some values to establish whether that relationship holds.
For example (in Agda) you can specify that some natural numbers are related by _<_, and the type tells which numbers they are. Then you can require that some function be given a witness that m < n, because only then does the function work correctly; without such a witness, the program will not compile.
As another example, given enough perseverance and compiler support in your chosen language, you could encode in a type the assumption that a certain list is sorted.
Indexed monads make it possible to encode some of what dependent type systems do, in order to manage side effects more precisely.

Is it valid to lift positive forall quantifiers to the outside?

This question came up in discussion on #haskell.
Is it always correct to lift a deeply nested forall to the top, if its occurrence is positive?
E.g:
((forall a. P(a)) -> S) -> T
(where P, S, T are to be understood as metavariables) to
forall a. (P(a) -> S) -> T
(which we would normally write just as (P(a) -> S) -> T)
I know that you're certainly allowed to collect foralls from some positive positions, such as to the right of the last -> and so on.
This is valid in classical logic, so it's not an absurd idea, but in general it's invalid in intuitionistic logic. However, my informal game-theoretic intuition of quantifiers, in which each type variable is "chosen by the caller" or "chosen by the callee", suggests that there are really only two choices, so you can lift all the "chosen by the caller" options to the top. Unless the interleaving of the moves in the game matters?
Assume
foo :: ((forall a. P a) -> S) -> T
and let for the sake of this discussion S = (P Int, P Char). A possible type-correct call could then be:
foo (\x :: (forall a. P a) -> (x,x))
Now, assume
bar :: forall a. (P a -> S) -> T
where S is as above. It is now hard to invoke bar! Let's try to call it on a = Int:
bar (\x :: P Int -> (x, something))
Now we need a something :: P Char, which cannot simply be derived from x. The same happens if a = Char. If a is something other than Int or Char, it would be even worse.
You mentioned intuitionistic logic. You might see that in that logic the type of foo is stronger than the one of bar. As a stronger hypothesis, the type of foo can therefore be applied to more cases in proofs. So, it shouldn't be a surprise to find that, as a term, foo is applicable in more contexts! :)
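To render the comparison in code, here is a sketch with P a = [a] and S = ([Int], [Char]) (these instantiations are chosen purely for illustration):
{-# LANGUAGE RankNTypes #-}

type S = ([Int], [Char])

foo :: ((forall a. [a]) -> S) -> Int
foo k = length (fst (k [])) + length (snd (k []))

useFoo :: Int
useFoo = foo (\x -> (x, x))  -- x :: forall a. [a], used at Int and at Char

-- bar :: forall a. ([a] -> S) -> Int
-- Picking a = Int, the callback receives only a [Int]; there is no way
-- to conjure the [Char] half of S, so bar is far harder to call.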

Mapping over Either's Left

Somewhere in my app I receive an Either ParserError MyParseResult from Parsec. Downstream, this result undergoes further parsing using other libraries. During that second phase of parsing, some kind of error may also occur, which I would like to pass along as a Left String; but for that I need to convert the Parsec error to a String too. To achieve that I need a function which will allow me to map over a Left with show.
The mapping function I'm thinking of looks something like this:
mapLeft :: (a -> b) -> Either a c -> Either b c
mapLeft f (Left x) = Left $ f x
mapLeft _ x = x
But I was quite surprised not to find anything matching on hackage db. So now I'm having doubts whether I'm using a correct approach to my problem.
Why isn't there such a function in standard lib? What is wrong with my approach?
We have such a function in the standard libraries,
Control.Arrow.left :: a b c -> a (Either b d) (Either c d)
is the generalisation to arbitrary Arrows. Substitute (->) for a and apply it infix, to get the specialisation
left :: (b -> c) -> Either b d -> Either c d
There is nothing wrong with your approach in principle, it's a sensible way to handle the situation.
Another option is to use Bifunctor instance of Either. Then you have
first :: (a -> b) -> Either a c -> Either b c
(Bifunctor can likewise be used to map over the first component of (a,b).)
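A quick runnable comparison of the two, just demonstrating the standard functions:
import Control.Arrow (left)
import Data.Bifunctor (first)

main :: IO ()
main = do
  print (left  show (Left 404  :: Either Int Char))  -- Left "404"
  print (first show (Left 404  :: Either Int Char))  -- Left "404"
  print (first show (Right 'x' :: Either Int Char))  -- Right 'x'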
This can be done easily with lens:
import Control.Lens
over _Left (+1) $ Left 10 => Left 11
over _Left (+1) $ Right 10 => Right 10
over _Right (+1) $ Right 10 => Right 11
Another simple option is mapLeft in Data.Either.Combinators:
mapLeft :: (a -> c) -> Either a b -> Either c b

Type class definition with functions depending on an additional type

Still new to Haskell, I have hit a wall with the following:
I am trying to define some type classes to generalize a bunch of functions that use gaussian elimination to solve linear systems of equations.
Given a linear system
M x = k
the type a of the elements m(i,j) ∈ M can be different from the type b of x and k. To be able to solve the system, a should be an instance of Num, and b should have an addition operator with b and multiplication/division operators with a, like in the following:
class MixedRing b where
  (.+.) :: b -> b -> b
  (.*.) :: (Num a) => b -> a -> b
  (./.) :: (Num a) => b -> a -> b
Now, even in the most trivial implementation of these operators, I get "Could not deduce (a ~ Int); a is a rigid type variable" errors. (Let's forget about ./., which requires Fractional.)
data Wrap = W { get :: Int }

instance MixedRing Wrap where
  (.+.) w1 w2 = W $ (get w1) + (get w2)
  (.*.) w s   = W $ (get w) * s
I have read several tutorials on type classes but I can find no pointer to what actually goes wrong.
Let us have a look at the type of the implementation that you would have to provide for (.*.) to make Wrap an instance of MixedRing. Substituting Wrap for b in the type of the method yields
(.*.) :: Num a => Wrap -> a -> Wrap
As Wrap is isomorphic to Int and to not have to think about wrapping and unwrapping with Wrap and get, let us reduce our goal to finding an implementation of
(.*.) :: Num a => Int -> a -> Int
(You see that this doesn't make the challenge any easier or harder, don't you?)
Now, observe that such an implementation will need to be able to operate on all types a that happen to be in the type class Num. (This is what a type variable in such a type denotes: universal quantification.) Note: this is not the same as (actually, it's the opposite of) saying that your implementation can itself choose which a to operate on; yet that is what you seem to suggest in your question: that your implementation should be allowed to pick Int as a choice for a.
Now, as you want to implement this particular (.*.) in terms of the (*) for values of type Int, we need something of the form
n .*. s = n * f s
with
f :: Num a => a -> Int
I cannot think of a function that converts from an arbitrary Num type a to Int in a meaningful way. I'd therefore say that there is no meaningful way to make Int (and, hence, Wrap) an instance of MixedRing; that is, not such that the instance behaves as you would probably expect it to.
How about something like:
class (Num a) => MixedRing a b where
  (.+.) :: b -> b -> b
  (.*.) :: b -> a -> b
  (./.) :: b -> a -> b
You'll need the MultiParamTypeClasses extension.
By the way, it seems to me that the mathematical structure you're trying to model is really a module, not a ring. With the type variables given above, one says that b is an a-module.
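A sketch of that suggestion with Wrap as an Int-module (the class and instance names are illustrative; in practice a functional dependency b -> a also helps inference, since (.+.) alone does not mention a):
{-# LANGUAGE MultiParamTypeClasses #-}

class Num a => MixedModule a b where
  (.+.) :: b -> b -> b
  (.*.) :: b -> a -> b

newtype Wrap = W { get :: Int }

instance MixedModule Int Wrap where
  W x .+. W y = W (x + y)
  W x .*. s   = W (x * s)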
Your implementation is not polymorphic enough.
The rule is: if you write a in the class definition, you can't use a concrete type in the instance, because the instance must conform to the class, and the class promised to accept any a that is in Num.
To put it differently: it is exactly the class variable that must be instantiated with a concrete type in an instance definition.
Have you tried:
data Wrap a = W { get :: a }
Note that once Wrap a is an instance, you can still use it with functions that accept only Wrap Int.
