Typesafe StablePtrs

I have spent a lot of time encoding invariants in my data types, and now I am working on exposing my library to C via the FFI. Rather than marshal data structures across the language barrier, I simply use opaque pointers so that C can build up an AST; upon eval, Haskell only needs to marshal a string over to C.
Here is some code to make this more illuminating.
-- excerpt from Query.hs
data Sz = Selection | Reduction deriving Show

-- Column datatype
data Column (a :: Sz) where
  Column  :: String -> Column Selection
  BinExpr :: BinOp -> Column a -> Column b -> Column (OpSz a b)
  AggExpr :: AggOp -> Column Selection -> Column Reduction

type family OpSz (a :: Sz) (b :: Sz) where
  OpSz Selection Selection = Selection
  OpSz Selection Reduction = Selection
  OpSz Reduction Selection = Selection
  OpSz Reduction Reduction = Reduction

data Query (a :: Sz) where
  ... etc
-- excerpt from Export.hs
foreign export ccall "selection"
  select :: StablePtr [Column a] -> StablePtr (Query b) -> IO (StablePtr (Query Selection))

foreign export ccall
  add :: StablePtr (Column a) -> StablePtr (Column b) -> IO (StablePtr (Column (OpSz a b)))

foreign export ccall
  mul :: StablePtr (Column a) -> StablePtr (Column b) -> IO (StablePtr (Column (OpSz a b)))

foreign export ccall
  eval :: StablePtr (Query Selection) -> IO CString
As far as I can tell, however, this throws type safety out the window: whatever C hands off to Haskell is simply assumed to have the declared type, completely negating the reason I wrote the DSL in Haskell in the first place. Is there some way I can keep the benefits of StablePtrs and retain type safety? The last thing I want is to re-implement the invariants in C.

The C counterpart of StablePtr a is a typedef for void *, so type safety is lost at the FFI boundary.
The problem is that there are infinitely many possibilities for a :: *, and hence for StablePtr a. Encoding these types in C, which has a limited type system (no parametric types!), cannot be done without resorting to very unidiomatic C types (see below).
In your specific case a, b :: Sz, so there are only finitely many cases, and an FFI tool could in principle encode them all. Still, this causes a combinatorial explosion of cases:
typedef struct HsStablePtr_Selection_ { void *p; } HsStablePtr_Selection;
typedef struct HsStablePtr_Reduction_ { void *p; } HsStablePtr_Reduction;
HsStablePtr_Selection
add_Selection_Selection(HsStablePtr_Selection a, HsStablePtr_Selection b);
HsStablePtr_Selection
add_Selection_Reduction(HsStablePtr_Selection a, HsStablePtr_Reduction b);
HsStablePtr_Selection
add_Reduction_Selection(HsStablePtr_Reduction a, HsStablePtr_Selection b);
HsStablePtr_Reduction
add_Reduction_Reduction(HsStablePtr_Reduction a, HsStablePtr_Reduction b);
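On the Haskell side, these monomorphised prototypes would correspond to one export per Sz combination, each a monomorphic instantiation of the polymorphic add (a sketch; the names simply mirror the C declarations above):
foreign export ccall
  add_Selection_Selection :: StablePtr (Column Selection)
                          -> StablePtr (Column Selection)
                          -> IO (StablePtr (Column Selection))

foreign export ccall
  add_Selection_Reduction :: StablePtr (Column Selection)
                          -> StablePtr (Column Reduction)
                          -> IO (StablePtr (Column Selection))

-- ... and likewise for the Reduction/Selection and Reduction/Reduction cases.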
In C11 one could reduce this mess using type-generic expressions, which can add the "right" type casts without combinatorial explosion. Still, no one has written an FFI tool exploiting that. For instance:
void *add_void(void *x, void *y);
#define add(x,y) \
    _Generic((x), \
        HsStablePtr_Selection: _Generic((y), \
            HsStablePtr_Selection: (HsStablePtr_Selection) add_void(x,y), \
            HsStablePtr_Reduction: (HsStablePtr_Selection) add_void(x,y) \
        ), \
        HsStablePtr_Reduction: _Generic((y), \
            HsStablePtr_Selection: (HsStablePtr_Selection) add_void(x,y), \
            HsStablePtr_Reduction: (HsStablePtr_Reduction) add_void(x,y) \
        ) \
    )
(The casts above are from pointer to struct, so they don't work and we should use struct literals instead, but let's ignore that.)
In C++ we would have richer types to exploit, but the FFI is meant to use C as a common lingua franca for binding to other languages.
A possible encoding of Haskell (monomorphic!) parametric types could theoretically be achieved by exploiting the only type constructors C has: pointers, arrays, function pointers, const, volatile, and so on.
For instance, the stable pointer to type T = Either Char (Int, Bool) could be represented as follows:
typedef struct HsBool_   { void *p; } HsBool;
typedef struct HsInt_    { void *p; } HsInt;
typedef struct HsChar_   { void *p; } HsChar;
typedef struct HsEither_ HsEither;   // incomplete type
typedef struct HsPair_   HsPair;     // incomplete type
typedef void (**T)(HsEither x1, HsChar x2,
                   void (**)(HsPair x3, HsInt x4, HsBool x5));
Of course, from the C point of view, the type T is a blatant lie! A value of type T would actually be a void * pointing to some Haskell-side representation of type StablePtr T, and surely not a pointer-to-pointer to a C function. Still, passing T around would preserve type safety.
Note that the above can only be called an "encoding" in a very weak sense: it is an injective mapping from monomorphic Haskell types to C types that totally disregards the semantics of the C types. It is only done to ensure that, if such stable pointers are passed back to Haskell, there is some type checking on the C side.
I used C incomplete types so that these functions can never be called from C, and pointers-to-pointers since (IIRC) pointers to functions cannot be safely cast to void *.
Note that such a sophisticated encoding could be used in C, but could be hard to integrate with other languages. For instance, Java and Haskell could be made to interact using JNI + FFI, but I'm not sure the JNI part can cope with such a complex encoding. Perhaps, void * is more practical, albeit unsafe.
Safely encoding polymorphic functions, GADTs, type classes ... is left for future work :-P
TL;DR: the FFI could try harder to encode static types to C, but this is tricky and there is no large demand for that at this moment. Maybe in the future this could change.

Related

Practical applications of Rank 2 polymorphism?

I'm covering polymorphism and I'm trying to see the practical uses of such a feature.
My basic understanding of Rank 2 is:
type MyType = ∀ a. a -> a
subFunction :: a -> a
subFunction el = el
mainFunction :: MyType -> Int
mainFunction func = func 3
I understand that this allows the user to use a polymorphic function (subFunction) inside mainFunction and strictly specify its output (Int). This seems very similar to GADTs:
data Example a where
  ExampleInt  :: Int -> Example Int
  ExampleBool :: Bool -> Example Bool
1) Given the above, is my understanding of Rank 2 polymorphism correct?
2) What are the general situations where Rank 2 polymorphism can be used, as opposed to GADTs, for example?
If you pass a polymorphic function as an argument to a Rank2-polymorphic function, you're essentially passing not just one function but a whole family of functions – one for each type that fulfils the constraints.
Typically, those forall quantifiers come with a class constraint. For example, I might wish to do number arithmetic with two different types simultaneously (for comparing precision or whatever).
data FloatCompare = FloatCompare
  { singlePrecision :: Float
  , doublePrecision :: Double
  }
Now I might want to modify those numbers through some maths operation. Something like
modifyFloat :: (Num -> Num) -> FloatCompare -> FloatCompare
But Num is not a type, only a type class. I could of course pass a function that would modify any particular number type, but I couldn't use that to modify both a Float and a Double value, at least not without some ugly (and possibly lossy) converting back and forth.
Solution: Rank-2 polymorphism!
modifyFloat :: (∀ n . Num n => n -> n) -> FloatCompare -> FloatCompare
modifyFloat f (FloatCompare single double) =
  FloatCompare (f single) (f double)
The best single example of how this is useful in practice is probably lenses. A lens is a “smart accessor function” to a field in some larger data structure. It allows you to access fields, update them, gather results... while at the same time composing in a very simple way. How it works: Rank2-polymorphism; every lens is polymorphic, with the different instantiations corresponding to the “getter” / “setter” aspects, respectively.
The go-to example of an application of rank-2 types is runST, as Benjamin Hodgson mentioned in the comments. This is a rather good example, and there is a variety of other examples using the same trick: branding to maintain abstract data type invariants across multiple types, avoiding confusion of differentials in ad, a region-based version of ST, and so on.
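To make the runST example concrete, here is a minimal usage sketch built on Control.Monad.ST and Data.STRef:
import Control.Monad.ST (runST)
import Data.STRef (modifySTRef', newSTRef, readSTRef)

-- runST :: (forall s. ST s a) -> a
-- The rank-2 quantification over s prevents any STRef s from escaping,
-- so the local mutation stays invisible from the outside.
sumST :: [Int] -> Int
sumST xs = runST $ do
  ref <- newSTRef 0
  mapM_ (\x -> modifySTRef' ref (+ x)) xs
  readSTRef ref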
But I'd actually like to talk about how Haskell programmers are implicitly using rank-2 types all the time. Every type class whose methods have universally quantified types desugars to a dictionary with a field with a rank-2 type. In practice, this is virtually always a higher-kinded type class* like Functor or Monad. I'll use a simplified version of Alternative as an example. The class declaration is:
class Alternative f where
  empty :: f a
  (<|>) :: f a -> f a -> f a
The dictionary representing this class would be:
data AlternativeDict f = AlternativeDict
  { empty :: forall a. f a
  , (<|>) :: forall a. f a -> f a -> f a }
Sometimes such an encoding is nice as it allows one to use different "instances" for the same type, perhaps only locally. For example, Maybe has two obvious instances of Alternative depending on whether Just a <|> Just b is Just a or Just b. Languages without type classes, such as Scala, do indeed use this encoding.
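As a sketch of that local-instance idea (the field names altDEmpty and altDChoice are made up, since record fields cannot actually be operators):
{-# LANGUAGE RankNTypes #-}

data AltDict f = AltDict
  { altDEmpty  :: forall a. f a
  , altDChoice :: forall a. f a -> f a -> f a }

leftBiased, rightBiased :: AltDict Maybe
leftBiased  = AltDict Nothing (\x y -> maybe y Just x)  -- Just a <|> Just b = Just a
rightBiased = AltDict Nothing (\x y -> maybe x Just y)  -- Just a <|> Just b = Just b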
To connect to leftaroundabout's reference to lenses, you can view the hierarchy there as a hierarchy of type classes and the lens combinators as simply tools for explicitly building the relevant type class dictionaries. Of course, the reason it isn't actually a hierarchy of type classes is that we usually will have multiple "instances" for the same type. E.g. _head and _head . _tail are both "instances" of Traversal' s a.
* A higher-kinded type class doesn't necessarily lead to this, and it can also happen for a type class over types of kind *. For example:
-- Higher-kinded but doesn't require universal quantification.
class Sum c where
  sum :: c Int -> Int

-- Not higher-kinded but does require universal quantification.
class Length l where
  length :: [a] -> l
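Example instances for these classes (a sketch; it assumes Prelude's sum and length are hidden so the method names don't clash):
instance Sum [] where
  sum = Prelude.sum        -- c Int -> Int is monomorphic in the element type

instance Length Int where
  length = Prelude.length  -- this dictionary must store a forall a. [a] -> Int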
If you are using modules in Haskell, you are already using Rank-2 types. Theoretically speaking, modules are records whose fields may have polymorphic types, i.e. rank-2 records.
For example, the Foo module below in Haskell ...
module Foo (id) where

id :: forall a. a -> a
id x = x

import qualified Foo

main = do
  putStrLn (Foo.id "hello")
  return ()
... can actually be thought of as a record, as follows:
data FooType = FooType
  { id :: forall a. a -> a
  }

foo :: FooType
foo = FooType
  { id = \x -> x
  }
P.S. (unrelated to this question): from a language-design perspective, if you are going to support a module system, then you might as well support higher-rank types (i.e. allow arbitrary quantification of type variables at any level) to reduce duplication of effort (type checking a module is then almost the same as type checking a record with higher-rank types).

Restrictions of unboxed types

I wonder why unboxed types in Haskell have these restrictions:
You cannot define a newtype for an unboxed type:
newtype Vec = Vec (# Float#, Float# #)
but you can define a type synonym:
type Vec = (# Float#, Float# #)
Type families can't return an unboxed type:
type family Unbox (a :: *) :: # where
  Unbox Int    = Int#
  Unbox Word   = Word#
  Unbox Float  = Float#
  Unbox Double = Double#
  Unbox Char   = Char#
Are there some fundamental reasons behind this, or is it just that nobody has asked for these features?
Parametric polymorphism in Haskell relies on the fact that all values of types t :: * are uniformly represented as a pointer to a runtime object. Thus, the same machine code works for all instantiations of polymorphic values.
Contrast polymorphic functions in Rust or C++. For example, the identity function there still has a type analogous to forall a. a -> a, but since values of different a types may have different sizes, the compilers have to generate different code for each instantiation. This also means that we can't pass polymorphic functions around in runtime boxes:
data Id = Id (forall a. a -> a)
since such a function would have to work correctly for objects of arbitrary size. Allowing this feature would require some additional infrastructure; for example, a runtime forall a. a -> a function could take extra implicit arguments that carry information about the size and the constructors/destructors of a values.
Now, the problem with newtype Vec = Vec (# Float#, Float# #) is that even though Vec has kind *, runtime code that expects values of some t :: * can't handle it. It's a stack-allocated pair of floats, not a pointer to a Haskell object, and passing it to code expecting Haskell objects would result in segfaults or errors.
In general (# a, b #) isn't necessarily pointer-sized, so we can't copy it into pointer-sized data fields.
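Compare a boxed version, which is unproblematic precisely because its values are uniformly represented as pointers (a sketch; VecBoxed is a made-up name):
-- Boxed field: VecBoxed values are ordinary pointer-represented objects,
-- so they can instantiate any type variable of kind * (e.g. [VecBoxed] is fine).
newtype VecBoxed = VecBoxed (Float, Float)

vecs :: [VecBoxed]
vecs = [VecBoxed (1, 2), VecBoxed (3, 4)]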
Type families returning # types are disallowed for related reasons. Consider the following:
type family Foo (a :: *) :: # where
  Foo Int = Int#
  Foo a   = (# Int#, Int# #)

data Box = forall (a :: *). Box (Foo a)
Our Box is not representable at runtime, since Foo a has different sizes for different a-s. Generally, polymorphism over # would require generating different code for different instantiations, as in Rust, but this interacts badly with regular parametric polymorphism and makes runtime representation of polymorphic values difficult, so GHC doesn't bother with any of this.
(Not saying though that a usable implementation couldn't possibly be devised)
A newtype would allow one to define class instances
instance C Vec where ...
which cannot be defined for unboxed tuples. Type synonyms, by contrast, offer no such functionality.
Also, Vec would not be a boxed type. This means that you could no longer instantiate arbitrary type variables with Vec, unless their kind allows it; for instance, [Vec] should be disallowed. The compiler would have to keep track of "regular" newtypes and "unboxed" newtypes in some way. I think the only benefit would be allowing the data constructor Vec to wrap unboxed values at compile time (since it is removed at runtime), and this would probably not be useful enough to justify the necessary changes to the type-inference engine, I guess.

What does GADT offer that cannot be done with OOP and generics?

Are GADTs in functional languages equivalent to traditional OOP + generics, or is there a scenario with correctness constraints that are easily enforced by GADTs but hard or impossible to achieve in Java or C#?
For example, this "well-typed interpreter" Haskell program:
data Expr a where
  N      :: Int -> Expr Int
  Suc    :: Expr Int -> Expr Int
  IsZero :: Expr Int -> Expr Bool
  Or     :: Expr Bool -> Expr Bool -> Expr Bool

eval :: Expr a -> a
eval (N n)      = n
eval (Suc e)    = 1 + eval e
eval (IsZero e) = 0 == eval e
eval (Or a b)   = eval a || eval b
can be written equivalently in Java using generics and appropriate implementation of each subclass, though much more verbose:
interface Expr<T> {
    T eval();
}

class N implements Expr<Integer> {
    private Integer n;
    public N(Integer m) {
        n = m;
    }
    @Override public Integer eval() {
        return n;
    }
}

class Suc implements Expr<Integer> {
    private Expr<Integer> prev;
    public Suc(Expr<Integer> aprev) {
        prev = aprev;
    }
    @Override public Integer eval() {
        return 1 + prev.eval();
    }
}
/** And so on ... */
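For reference, the Haskell interpreter above runs like this (a small usage sketch):
-- True || False = True
example :: Bool
example = eval (Or (IsZero (N 0)) (IsZero (Suc (N 0))))
-- Ill-typed terms such as Or (N 1) (N 2) are rejected at compile time.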
OOP classes are open, GADTs are closed (like plain ADTs).
Here, "open" means you can always add more subclasses later, hence the compiler can not assume to have access to all the subclasses of a given class. (There are a few exceptions, e.g. Java's final which however prevents any subclassing, and Scala's sealed classes). Instead, ADTs are "closed" in the sense you can not add further constructors later on, and the compiler knows that (and can exploit it to check e.g. exhaustiveness). For more information, see the "expression problem".
Consider the following code:
data A a where
  A1 :: Char -> A Char
  A2 :: Int  -> A Int

data B b where
  B1 :: Char   -> B Char
  B2 :: String -> B String

foo :: A t -> B t -> Char
foo (A1 x) (B1 y) = max x y
The above code relies on Char being the only type t for which one can produce both an A t and a B t. GADTs, being closed, can ensure that. If we try to mimic this using OOP classes, we fail:
class A1 extends A<Char> ...
class A2 extends A<Int> ...
class B1 extends B<Char> ...
class B2 extends B<String> ...
<T> Char foo(A<T> a, B<T> b) {
    // ??
}
Here I think we cannot implement the same thing without resorting to unsafe type operations like type casts. (Moreover, in Java these don't even consider the parameter T, because of type erasure.) We might think of adding some generic method to A or B to allow this, but that would force us to implement said method for Int and/or String as well.
In this specific case, one might simply resort to a non-generic function:
Char foo(A<Char> a, B<Char> b) // ...
or, equivalently, to adding a non-generic method to those classes.
However, the set of types shared between A and B might be larger than the singleton Char. Worse, classes are open, so the set can grow as soon as someone adds a new subclass.
Also, even if you have a variable of type A<Char>, you still do not know whether it is an A1 or not, and because of that you cannot access A1's fields except via a type cast. The type cast here would be safe only because the programmer knows there is no other subclass of A<Char>. In the general case this might be false, e.g.
data A a where
  A1 :: Char -> A Char
  A2 :: t -> t -> A t
Here A<Char> must be a superclass of both A1 and A2<Char>.
@gsg asks in a comment about equality witnesses. Consider
data Teq a b where
  Teq :: Teq t t

foo :: Teq a b -> a -> b
foo Teq x = x

trans :: Teq a b -> Teq b c -> Teq a c
trans Teq Teq = Teq
This can be translated as
interface Teq<A,B> {
    public B foo(A x);
    public <C> Teq<A,C> trans(Teq<B,C> x);
}

class Teq1<A> implements Teq<A,A> {
    public A foo(A x) { return x; }
    public <C> Teq<A,C> trans(Teq<A,C> x) { return x; }
}
The code above declares an interface for all the type pairs A,B, which is then implemented only in the case A=B (implements Teq<A,A>) by the class Teq1.
The interface requires a conversion function foo from A to B, and a "transitivity proof" trans which, given this of type Teq<A,B> and an x of type Teq<B,C>, can produce an object of type Teq<A,C>. This is the Java analogue of the Haskell code using GADTs right above.
The class cannot be safely implemented when A /= B, as far as I can see: it would require either returning nulls or cheating with non-termination.
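Back on the Haskell side, a small usage sketch (useWitness is a made-up name) shows why the witness makes the coercion safe:
-- foo matches on the witness, which teaches the compiler a ~ Char,
-- so the result can be used wherever a Char is expected.
useWitness :: Teq a Char -> a -> String
useWitness w x = [foo w x]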
Generics do not provide type-equality constraints. Without them you need to rely on downcasts, i.e., you lose type safety. Moreover, certain forms of dispatch – in particular, the visitor pattern – cannot be implemented both safely and with the proper interface using generics resembling GADTs. See this paper, investigating the very question:
Generalized Algebraic Data Types and Object-Oriented Programming
Andrew Kennedy, Claudio Russo. OOPSLA 2005.

How do phantom types work with newtype?

My understanding of newtypes is that they are compiled out by GHC. However, this can't be the whole story, because phantom types can hold information.
From here:
you can wrap [a type] in a newtype and it'll be considered distinct to the type-checker, but identical at runtime. You can then use all sorts of deep trickery like phantom or recursive types without worrying about GHC shuffling buckets of bytes for no reason.
For example, imagine a newtype representing arithmetic modulo q:
newtype Zq q = Zq Int

class Modulus q where
  getModulus :: q -> Int

-- the explicit forall is needed for ScopedTypeVariables
addZq :: forall q. (Modulus q) => Zq q -> Zq q -> Zq q
addZq (Zq a) (Zq b) = Zq $ (a + b) `mod` getModulus (undefined :: q)
addZq can't be compiled down to
addZq :: Int -> Int -> Int
so in what sense is the newtype compiled out, and where does the phantom type information get stored?
The thing to keep in mind is that you don't "compile down" to Haskell; you compile down to some other, more explicit language -- in GHC's case, the next well-known step down is core. And although you can't compile addZq down to something of type Int -> Int -> Int in Core, you can compile it down to something whose type you might write as Modulus q => Int -> Int -> Int. In this more explicit language, => has a different meaning than in Haskell; in this language, c => t is the type of a function which takes evidence (in this case, a class dictionary) for the claim c and produces something of type t. So Modulus q => Int -> Int -> Int is roughly the same as (q -> Int) -> Int -> Int -> Int, and addZq certainly can be given that type, even in Haskell.
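To see this concretely, here is a hypothetical hand-written analogue of that translation (ModulusDict and addZqCore are made-up names, not what GHC literally produces):
{-# LANGUAGE ScopedTypeVariables #-}

-- The class constraint becomes an explicit dictionary argument.
newtype ModulusDict q = ModulusDict { dictGetModulus :: q -> Int }

addZqCore :: forall q. ModulusDict q -> Int -> Int -> Int
addZqCore dict a b = (a + b) `mod` dictGetModulus dict (undefined :: q)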
When newtypes are said to be compiled out by GHC, this refers only to runtime behavior. Specifically, they are guaranteed not to be slower than the same code without the newtype. But they still carry their type information throughout the compilation process, including when used as phantom types.
In other words, a newtype is compile-time information only, and that's good enough for phantom types to work.

What are lenses used/useful for?

I can't seem to find any explanation of what lenses are used for in practical examples. This short paragraph from the Hackage page is the closest I've found:
This module provides a convenient way to access and update the elements of a structure. It is very similar to Data.Accessors, but a bit more generic and has fewer dependencies. I particularly like how cleanly it handles nested structures in state monads.
So, what are they used for? What benefits and disadvantages do they have over other methods? Why are they needed?
They offer a clean abstraction over data updates, and are never really "needed." They just let you reason about a problem in a different way.
In imperative programming languages like C, you have the familiar concept of some collection of values (let's call them "structs") and ways to label each value in the collection (the labels are typically called "fields"). This leads to a definition like this:
typedef struct {   /* defining a new struct type */
    float x;       /* field */
    float y;       /* field */
} Vec2;

typedef struct {
    Vec2 col1;     /* nested structs */
    Vec2 col2;
} Mat2;
You can then create values of this newly defined type like so:
Vec2 vec = { 2.0f, 3.0f };
/* Reading the components of vec */
float foo = vec.x;
/* Writing to the components of vec */
vec.y = foo;
Mat2 mat = { vec, vec };
/* Changing a nested field in the matrix */
mat.col2.x = 4.0f;
Similarly in Haskell, we have data types:
data Vec2 = Vec2
  { vecX :: Float
  , vecY :: Float
  }

data Mat2 = Mat2
  { matCol1 :: Vec2
  , matCol2 :: Vec2
  }
This data type is then used like this:
let vec = Vec2 2 3
    -- Reading the components of vec
    foo = vecX vec
    -- Creating a new vector with some component changed.
    vec2 = vec { vecY = foo }
    mat = Mat2 vec2 vec2
However, in Haskell, there's no easy way of changing nested fields in a data structure. This is because you need to re-create all of the wrapping objects around the value that you are changing, because Haskell values are immutable. If you have a matrix like the above in Haskell, and want to change the upper right cell in the matrix, you have to write this:
mat2 = mat { matCol2 = (matCol2 mat) { vecX = 4 } }
It works, but it looks clumsy. So what someone came up with is basically this: if you group two things together – the "getter" of a value (like vecX and matCol2 above) and a corresponding function that, given the data structure the getter belongs to, can create a new data structure with that value changed – you are able to do a lot of neat stuff. For example:
data Data = Data { member :: Int }

-- The "getter" of the member variable
getMember :: Data -> Int
getMember d = member d

-- The "setter", or more accurately "updater", of the member variable
setMember :: Data -> Int -> Data
setMember d m = d { member = m }

memberLens :: (Data -> Int, Data -> Int -> Data)
memberLens = (getMember, setMember)
There are many ways of implementing lenses; for this text, let's say that a lens is like the above:
type Lens a b = (a -> b, a -> b -> a)
I.e. it is the combination of a getter and a setter for some type a which has a field of type b, so memberLens above would be a Lens Data Int. What does this let us do?
Well, let's first make two simple functions that extract the getters and setters from a lens:
getL :: Lens a b -> a -> b
getL (getter, setter) = getter
setL :: Lens a b -> a -> b -> a
setL (getter, setter) = setter
Now, we can start abstracting over stuff. Let's take the situation above again, that we want to modify a value "two stories deep." We add a data structure with another lens:
data Foo = Foo { subData :: Data }
subDataLens :: Lens Foo Data
subDataLens = (subData, \ f s -> f { subData = s }) -- short lens definition
Now, let's add a function that composes two lenses:
(#) :: Lens a b -> Lens b c -> Lens a c
(#) (getter1, setter1) (getter2, setter2) =
    (getter2 . getter1, combinedSetter)
  where
    combinedSetter a x =
      let oldInner = getter1 a
          newInner = setter2 oldInner x
      in  setter1 a newInner
The code was written quickly, but I think it's clear what it does: the getters are simply composed; you get the inner data value, and then you read its field. The setter, when it is supposed to alter some value a with a new inner field value x, first retrieves the old inner data structure, sets its inner field, and then updates the outer data structure with the new inner data structure.
Now, let's make a function that simply increments the value of a lens:
increment :: Lens a Int -> a -> a
increment l a = setL l a (getL l a + 1)
If we have this code, it becomes clear what it does:
d = Data 3
print $ increment memberLens d -- Prints "Data 4", the inner field is updated.
Now, because we can compose lenses, we can also do this:
f = Foo (Data 5)
print $ increment (subDataLens#memberLens) f
-- Prints "Foo (Data 6)", the innermost field is updated.
What all of the lens packages do is essentially to wrap this concept of lenses – the grouping of a "getter" and a "setter" – into a neat package that makes them easy to use. In a particular lens implementation, one would be able to write:
with (Foo (Data 5)) $ do
subDataLens . memberLens $= 7
So, you get very close to the C version of the code; it becomes very easy to modify nested values in a tree of data structures.
Lenses are nothing more than this: an easy way of modifying parts of some data. Because it becomes so much easier to reason about certain concepts because of them, they see a wide use in situations where you have huge sets of data structures that have to interact with one another in various ways.
For the pros and cons of lenses, see a recent question here on SO.
Lenses provide convenient ways to edit data structures, in a uniform, compositional way.
Many programs are built around the following operations:
viewing a component of a (possibly nested) data structure
updating fields of (possibly nested) data structures
Lenses provide language support for viewing and editing structures in a way that ensures your edits are consistent; that edits can be composed easily; and that the same code can be used for viewing parts of a structure, as for updating the parts of the structure.
Lenses thus make it easy to write programs from views onto structures; and from structures back on to views (and editors) for those structures. They clean up a lot of the mess of record accessors and setters.
Pierce et al. popularized lenses, e.g. in their Quotient Lenses paper, and implementations for Haskell are now widely used (e.g. fclabels and data-accessors).
For concrete use cases, consider:
graphical user interfaces, where a user is editing information in a structured way
parsers and pretty printers
compilers
synchronizing updates to data structures
databases and schemas
and many other situations where you have a data-structure model of the world and an editable view onto that data.
As an additional note it is often overlooked that lenses implement a very generic notion of "field access and update". Lenses can be written for all kinds of things, including function-like objects. It requires a bit of abstract thinking to appreciate this, so let me show you an example of the power of lenses:
at :: (Eq a) => a -> Lens (a -> b) b
Using at you can actually access and manipulate functions with multiple arguments depending on earlier arguments. Just keep in mind that Lens is a category. This is a very useful idiom for locally adjusting functions or other things.
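With the simple pair representation from the first answer (type Lens a b = (a -> b, a -> b -> a)), at could be implemented like this (a sketch):
-- View a function at a point; update it by overriding that point.
at :: Eq a => a -> Lens (a -> b) b
at x = (\f -> f x, \f y x' -> if x' == x then y else f x')

-- getL (at 3) (+1)                    ~> 4
-- getL (at 3) (setL (at 3) (+1) 42)   ~> 42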
You can also access data by properties or alternate representations:
polar :: (Floating a, RealFloat a) => Lens (Complex a) (a, a)
mag :: (RealFloat a) => Lens (Complex a) a
You can go further writing lenses to access individual bands of a Fourier-transformed signal and a lot more.
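Under the same pair representation, those two lenses could be sketched as follows (polarL and magL are renamed so they don't clash with Data.Complex's own polar):
import Data.Complex (Complex, magnitude, mkPolar, phase, polar)

polarL :: RealFloat a => Lens (Complex a) (a, a)
polarL = (polar, \_ (m, t) -> mkPolar m t)

-- Updating the magnitude keeps the phase and replaces the modulus.
magL :: RealFloat a => Lens (Complex a) a
magL = (magnitude, \z m -> mkPolar m (phase z))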
