How to properly constrain `arbitrary` UUID-Generation? - haskell

I'm trying to create Arbitrary instances for some of my types to be used in QuickCheck property testing. I need randomly generated UUIDs, with the constraint that all-zero (nil) UUIDs are disallowed - that is, 00000000-0000-0000-0000-000000000000. Therefore, I set up the following generator:
nonzeroIdGen :: Gen UUID.UUID
nonzeroIdGen = arbitrary `suchThat` (not . UUID.null)
Which I use in an Arbitrary instance as follows:
instance Arbitrary E.EventId where
  arbitrary = do
    maybeEid <- E.mkEventId <$> nonzeroIdGen
    return $ fromJust maybeEid
In general this is unsafe code, but for testing, with supposedly guaranteed non-nil UUIDs, I thought the fromJust was acceptable.
mkEventId is defined as
mkEventId :: UUID.UUID -> Maybe EventId
mkEventId uid = EventId <$> validateId uid
with EventId a new type-wrapper around UUID.UUID, and
validateId :: UUID.UUID -> Maybe UUID.UUID
validateId uuid = if UUID.null uuid then Nothing else Just uuid
To my surprise, I get failing tests because of all-zero UUIDs generated by the above code. A trace in mkEventId shows the following:
00000001-0000-0001-0000-000000000001
Just (EventId {getEventId = 00000001-0000-0001-0000-000000000001})
00000000-0000-0000-0000-000000000000
Nothing
Create valid Events. FAILED [1]
The first generated ID is fine, the second one is all-zero, despite my nonzeroIdGen generator from above. What am I missing?

I generally find that in cases like this, using newtypes to define instances of Arbitrary composes better. Here's one I made for valid UUID values:
newtype NonNilUUID = NonNilUUID { getNonNilUUID :: UUID } deriving (Eq, Show)
instance Arbitrary NonNilUUID where
  arbitrary = NonNilUUID <$> arbitrary `suchThat` (/= nil)
You can then compose other Arbitrary instances from this one, like I do here with a Reservation data type:
newtype ValidReservation =
  ValidReservation { getValidReservation :: Reservation } deriving (Eq, Show)

instance Arbitrary ValidReservation where
  arbitrary = do
    (NonNilUUID rid) <- arbitrary
    (FutureTime d) <- arbitrary
    n <- arbitrary
    e <- arbitrary
    (QuantityWithinCapacity q) <- arbitrary
    return $ ValidReservation $ Reservation rid d n e q
Notice the pattern match (NonNilUUID rid) <- arbitrary, which extracts rid as a plain UUID value.
You may notice that I've also created a ValidReservation newtype for my Reservation data type. I consistently do this to avoid orphan instances, and to avoid polluting my domain model with a QuickCheck dependency. (I have nothing against QuickCheck, but test-specific capabilities don't belong in the 'production' code.)
All the code shown here is available in context on GitHub.
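Applied to the EventId type from the question, the same idea might look like this (a sketch assuming the QuickCheck package, plus the NonNilUUID instance above and the mkEventId smart constructor from the question; the ValidEventId name is mine):

```haskell
import Data.Maybe (fromJust)
import Test.QuickCheck

newtype ValidEventId = ValidEventId { getValidEventId :: EventId }
  deriving (Eq, Show)

instance Arbitrary ValidEventId where
  arbitrary = do
    NonNilUUID uuid <- arbitrary
    -- fromJust is safe here: NonNilUUID guarantees the UUID is non-nil,
    -- so mkEventId cannot return Nothing.
    return (ValidEventId (fromJust (mkEventId uuid)))
```

Properties then take a ValidEventId argument and unwrap it, so the validity constraint lives in exactly one place.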


Haskell Data.Unique

I need to rename variables for an application where I am unifying terms, and the way I have done it in the past has been to use (gensym)-like features, and replace the name of the var with the gensym-ed name, usually mangled to a string. I also need to compare the names of variables.
so (rename (logvar1 "foo")) would return (logvar1 "#:23322") - or whatever, and I can get hold of the "#:23322" and check it against any other logical vars in the terms.
In Haskell, I am not sure how to do that. I have found Data.Unique, whose newUnique returns an IO Unique.
Unique is an instance of Eq, but of course IO Unique isn't.
What is the best way to write this function
m = newUnique
n = newUnique
test :: (Monad m, Eq a) => m a -> m a -> m Bool
test u1 u2 = do
  v1 <- u1
  v2 <- u2
  if v1 == v2 then return True else return False
I would like True or False back, not IO True or IO False.
Of course, you can't always get what you want.....
I'm grateful for any help. I understand why the IO is there; I'm just not sure how best to deal with it in the rest of my program, where I just want to compare.
The idea of this library is that in the part of your program where you need to generate unique labels, you use IO. Later, when you only need to compare unique labels, no IO is needed:
main :: IO ()
main = do
  x <- newUnique
  y <- newUnique
  print $ same x y
same :: Eq a => a -> a -> Bool
same = (==)
I've defined an alias for (==) just to make it clear that there is no IO involved in comparisons, only in label creation.
You could do the same thing using State instead of IO, and implementing your own gensym with that. The advantage of that approach would be that you could get more transparent gensyms, e.g. giving them a name and a unique number instead of just an opaque unique identifier.
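As a sketch of that State-based approach (the names fresh and Gensym are my own, not from any library; only the transformers package that ships with GHC is assumed):

```haskell
import Control.Monad.Trans.State (State, evalState, state)

-- A transparent gensym: each fresh name carries its base string and a counter.
type Gensym = State Int

fresh :: String -> Gensym String
fresh base = state $ \n -> (base ++ "#" ++ show n, n + 1)

example :: (String, String, Bool)
example = evalState go 0
  where
    go = do
      a <- fresh "foo"
      b <- fresh "foo"
      return (a, b, a == b)  -- comparison is pure; only generation is monadic
```

Here example evaluates to ("foo#0", "foo#1", False): the two generated names are distinct, and comparing them needs no monad at all.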
I don't think your test signature makes sense. You shouldn't compare monadic values – that would be comparing generators of unique values, not unique values themselves.
You do need a monad for generating the unique keys, because a non-monadic generator could not know about other ones that have already been generated; however, doing this in IO is a bit of overkill, and a dedicated special monad would be better.
prim-uniq offers such generators: they can be used either in IO, like Data.Unique.newUnique, or locally in the ST monad.
type VarId s = Uniq s

data Expr s = LogVar (VarId s)
            | ...
  deriving (Eq)

rename :: (PrimMonad m, s ~ PrimState m)
       => Expr s -> m (Expr s)
rename (LogVar _) = LogVar <$> getUniq
rename ... = ...

How to derive the type for Haskell record fields?

Coming from OOP, this seems like alien code to me.
I don't understand why the type of runIdentity is a function, runIdentity :: Identity a -> a. I specified it to be runIdentity :: a.
newtype Identity a = Identity {runIdentity :: a} deriving Show

instance Monad Identity where
  return = Identity
  Identity x >>= k = k x

instance Functor Identity where
  fmap f (Identity x) = Identity (f x)

instance Applicative Identity where
  pure = Identity
  Identity f <*> Identity v = Identity (f v)

wrapNsucc :: Integer -> Identity Integer
wrapNsucc = Identity . succ
Calling runIdentity :
runIdentity $ wrapNsucc 5 -- gives 6 as output
You're right that runIdentity is but a simple field of type a. But the type of runIdentity is Identity a -> a, since runIdentity is a function to extract that field out of an Identity a. You can't get the runIdentity out of a value without supplying which value to get it from, after all.
Edit:
To expand a little on that OOP-analogy in the comments, think of a class
class Identity<T> {
  public T runIdentity;
}
This is the Identity monad, loosely translated to OOP code. The template argument T basically is your a; as such, runIdentity is of type T. To get that T from your object, you'd probably do something like
Identity<int> foo = new Identity<int>();
int x = foo.runIdentity;
You see runIdentity as something of type T, but it's not really. You can't just do
int x = runIdentity; // Nope!
because - where to get the runIdentity from? Instead, think of this like doing
Identity<int> foo = new Identity<int>();
int x = runIdentity(foo);
This shows what actually happens when you're calling a member; you have a function (your runIdentity) and supply it an object to use - IIRC this is what Python does with def func(self). So instead of being plainly of type T, runIdentity is actually taking an Identity<T> as argument to return a T.
Thus, it's of type Identity a -> a.
Another way to see this is that record syntax in Haskell is basically just syntactic sugar over algebraic datatypes, i.e. records don't truly exist in Haskell, only algebraic datatypes do, with perhaps some additional syntactic niceties. Hence there isn't a notion of members the same way that classes have in a lot of OO languages.
data MyRecord = MyRecord { myInt :: Int, myString :: String }
really is just
data MyRecord = MyRecord Int String
with the additional functions
myInt :: MyRecord -> Int
myInt (MyRecord x _) = x
myString :: MyRecord -> String
myString (MyRecord _ y) = y
automatically defined.
The only things that you could not do by yourself with normal algebraic datatypes that record syntax gives you are a nice way of making a copy of MyRecord that only has a subset of fields changed and a nice way of naming certain patterns.
copyWithNewInt :: Int -> MyRecord -> MyRecord
copyWithNewInt x r = r { myInt = x }
-- Same thing as myInt, just written differently
extractInt :: MyRecord -> Int
extractInt (MyRecord { myInt = x }) = x
Because this is just syntactic sugar over ordinary algebraic datatypes, you could always fall back to doing things the usual way.
-- This is a more verbose but also valid way of doing things
copyWithNewInt :: Int -> MyRecord -> MyRecord
copyWithNewInt x (MyRecord _ oldString) = MyRecord x oldString
Incidentally this is why some otherwise ridiculous-seeming constraints exist (the most prominent is that you can't have another type defined with record syntax with myInt again, otherwise you're creating two functions in the same scope with the same name, which Haskell does not allow).
Therefore
newtype Identity a = Identity {runIdentity :: a} deriving Show
is equivalent (minus convenient update syntax which doesn't really matter when you have only one field) to
newtype Identity a = Identity a deriving Show
runIdentity :: Identity a -> a
runIdentity (Identity x) = x
Using record syntax just compresses all that into a single line (and perhaps gives more insight into why runIdentity is named that, i.e. as a verb, rather than as a noun).
newtype Identity a = Identity {runIdentity :: a} deriving Show
Using the record syntax here, you're really creating two things called runIdentity.
One is the field of the constructor Identity. You can use that with record pattern syntax, as in case i of Identity { runIdentity = x } -> x, matching a value i :: Identity a to extract the field's contents into a local variable x. You can also use record construction or update syntax, as in Identity { runIdentity = "foo" } or i { runIdentity = "bar" }.
In all of those cases runIdentity isn't really a standalone thing in its own right. You're using it only as part of a larger syntactic construct, to say which field of Identity you're accessing. The "slot" of Identify a referred to with the help of the field runIdentity does indeed store things of type a. But this runIdentity field is not a value of type a. It's not even a value at all really, since it needs to have these extra properties (that values do not have) about referring to a particular "slot" in a data type. Values are standalone things, that exist and make sense on their own. Fields are not; field contents are, which is why we use types to classify fields, but fields themselves are not values.1 Values can be placed in data structures, returned from functions, etc. There's no way to define a value that you can place in a data structure, get back out, and then use with record pattern, construction, or update syntax.
The other thing named runIdentity defined with the record match syntax is an ordinary function. Functions are values; you can pass them to other functions, put them in data structures, etc. The intent is to give you a helper for getting the value of a field in a value of type Identity a. But because you have to specify which Identity a value you want to get the value of the runIdentity field from, you have to pass an Identity a into the function. So the runIdentity function is a value of type Identity a -> a, as distinct from the runIdentity field which is a non-value described by type a.
A simple way to see this distinction is to add a definition like myRunIdentity = runIdentity to your file. That definition declares that myRunIdentity is equal to runIdentity, but you can only define values like that. And sure enough myRunIdentity will be a function of type Identity a -> a, that you can apply to things of type Identity a to get an a value. But it won't be usable with record syntax as the field. The field runIdentity didn't "come along with" the value runIdentity in that definition.
This question might have been prompted by typing :t runIdentity into ghci, asking it to show you the type. It would have answered runIdentity :: Identity a -> a. The reason is that the :t syntax works on values2. You can type any expression at all there, and it will give you the type of the value that would result. So :t runIdentity is seeing the runIdentity value (the function), not the runIdentity field.
As a final note, I've been banging on about how the field runIdentity :: a and the function runIdentity :: Identity -> a are two separate things. I did so because I thought cleanly separating the two would help people confused by why there can be two different answers to "what is the type of runIdentity". But it's also a perfectly valid interpretation to say that runIdentity is a single thing, and it's simply the case that when you use a field as a first-class value it behaves as a function. And that is how people often talk about fields. So please don't be confused if other sources insist that there is only one thing; these are simply two different ways of looking at the same language concepts.
1 A perspective on lenses, if you've heard of them, is that they are ordinary values that can be used to give us all of the semantics we need from "fields", without any special-purpose syntax. So a hypothetical language could theoretically not provide any syntax for field access at all, just giving us lenses when we declare a new data type, and we'd be able to make do.
But Haskell record syntax fields aren't lenses; used as values they're only "getter" functions, which is why there's dedicated pattern match, construction, and update syntax for using the fields in ways beyond what is possible with ordinary values.
2 Well, more properly it works on expressions, since it's type-checking the code, not running the code and then looking at the value to see what type it is (that wouldn't work anyway, since runtime Haskell values don't have any type information in the GHC system). But you can blur the lines and call values and expressions the same kind of thing; fields are quite different.

Uniqueness and other restrictions for Arbitrary in QuickCheck

I'm trying to write a modified Arbitrary instance for my data type, where (in my case) a subcomponent has type [String]. I would ideally like to build uniqueness into the instance itself, so that I don't need ==> preconditions for every test I write.
Here's my data type:
data Foo = Vars [String]
and the trivial Arbitrary instance:
instance Arbitrary Foo where
  arbitrary = Vars <$> (:[]) <$> choose ('A','z')
This instance is strange, I know. In the past, I've had difficulty when QuickCheck combinatorially explodes, so I'd like to keep these values small. Another request: how can I make an instance where the generated strings are under 4 characters, for instance?
All of this fundamentally requires (boolean) predicates to augment Arbitrary instances. Is this possible?
Definitely you want the instance to produce only values that match the intention of the data type. If you want all the variables to be distinct, the Arbitrary instance must reflect this. (Another question is whether in this case it wouldn't make more sense to define Vars as a set, like newtype Vars = Vars (Set String).)
I'd suggest to check for duplicates using Set or Hashtable, as nub has O(n^2) complexity, which might slow down your test considerably for larger inputs. For example:
import Control.Applicative
import Data.List (nub)
import qualified Data.Set as Set
import Test.QuickCheck
newtype Foo = Vars [String]
-- | Checks if a given list has no duplicates in _O(n log n)_.
hasNoDups :: (Ord a) => [a] -> Bool
hasNoDups = loop Set.empty
  where
    loop _ [] = True
    loop s (x:xs) | s' <- Set.insert x s, Set.size s' > Set.size s
                  = loop s' xs
                  | otherwise
                  = False
-- | Always worth testing that we wrote `hasNoDups` properly.
prop_hasNoDups :: [Int] -> Property
prop_hasNoDups xs = hasNoDups xs === (nub xs == xs)
Your instance then needs to create a list of lists, and each list should be randomized. So instead of (: []), which creates just a singleton list (and just one level), you need to call listOf twice:
instance Arbitrary Foo where
  arbitrary = Vars <$> (listOf . listOf $ choose ('A','z'))
                       `suchThat` hasNoDups
Also notice that choose ('A', 'z') allows all characters between A and z, which includes several non-letter characters ([, \, ], ^, _, `). My guess is that you rather want something like
oneof [choose ('A','Z'), choose ('a','z')]
If you really want, you could also make hasNoDups O(n) using hash tables in the ST monad.
Concerning limiting the size: you could always have your own parametrized functions that produce different Gen Foo, but I'd say in most cases it's not necessary. Gen has its own internal size parameter, which is increased throughout the tests (see this answer), so different sizes (as generated using listOf) of lists are covered.
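That said, to address the "under 4 characters" request concretely, a variant of the instance above could cap the name length explicitly (a sketch; vectorOf, choose, and oneof are standard QuickCheck combinators, and hasNoDups is the helper defined earlier):

```haskell
instance Arbitrary Foo where
  arbitrary = Vars <$> listOf shortName `suchThat` hasNoDups
    where
      -- Each variable name is one to three plain ASCII letters.
      shortName = do
        n <- choose (1, 3)
        vectorOf n (oneof [choose ('A', 'Z'), choose ('a', 'z')])
```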
But I'd suggest you implement shrink, as this will give you much nicer counter-examples. For example, if we define a (wrong) test that tries to verify that no Vars value contains 'a' in any of its variables:
prop_Foo_hasNoDups :: Foo -> Property
prop_Foo_hasNoDups (Vars xs) = all (notElem 'a') xs === True
we'll get ugly counter-examples such as
Vars ["RhdaJytDWKm","FHHhrqbI","JVPKGTqNCN","awa","DABsOGNRYz","Wshubp","Iab","pl"]
But adding
shrink (Vars xs) = map Vars $ shrink xs
to Arbitrary Foo shrinks the counter-example down to just
Vars ["a"]
suchThat :: Gen a -> (a -> Bool) -> Gen a is a way to embed Boolean predicates in a Gen. See the haddocks for more info.
Here's how you would make the instance unique:
instance Arbitrary Foo where
  arbitrary = Vars <$> (:[]) <$> (:[]) <$> choose ('A','z')
                `suchThat` isUnique
    where
      isUnique x = nub x == x

What's the difference between makeLenses and makeFields?

Pretty self-explanatory. I know that makeClassy should create typeclasses, but I see no difference between the two.
PS. Bonus points for explaining the default behaviour of both.
Note: This answer is based on lens 4.4 or newer. There were some changes to the TH in that version, so I don't know how much of it applies to older versions of lens.
Organization of the lens TH functions
The lens TH functions are all based on one function, makeLensesWith (also named makeFieldOptics inside lens). This function takes a LensRules argument, which describes exactly what is generated and how.
So to compare makeLenses and makeFields, we only need to compare the LensRules that they use. You can find them by looking at the source:
makeLenses:

lensRules :: LensRules
lensRules = LensRules
  { _simpleLenses    = False
  , _generateSigs    = True
  , _generateClasses = False
  , _allowIsos       = True
  , _classyLenses    = const Nothing
  , _fieldToDef      = \_ n ->
      case nameBase n of
        '_':x:xs -> [TopName (mkName (toLower x:xs))]
        _        -> []
  }
makeFields:

defaultFieldRules :: LensRules
defaultFieldRules = LensRules
  { _simpleLenses    = True
  , _generateSigs    = True
  , _generateClasses = True  -- classes will still be skipped if they already exist
  , _allowIsos       = False -- generating Isos would hinder field class reuse
  , _classyLenses    = const Nothing
  , _fieldToDef      = camelCaseNamer
  }
What do these mean?
Now we know that the differences are in the simpleLenses, generateClasses, allowIsos and fieldToDef options. But what do those options actually mean?
makeFields will never generate type-changing optics. This is controlled by the simpleLenses = True option. That option doesn't have haddocks in the current version of lens. However, lens HEAD added documentation for it:
-- | Generate "simple" optics even when type-changing optics are possible.
-- (e.g. 'Lens'' instead of 'Lens')
So makeFields will never generate type-changing optics, while makeLenses will if possible.
makeFields will generate classes for the fields. So for each field foo, we have a class:
class HasFoo t where
  foo :: Lens' t <Type of foo field>
This is controlled by the generateClasses option.
makeFields will never generate Isos, even if that would be possible (controlled by the allowIsos option, which doesn't seem to be exported from Control.Lens.TH).
While makeLenses simply generates a top-level lens for each field that starts with an underscore (lowercasing the first letter after the underscore), makeFields will instead generate instances for the HasFoo classes. It also uses a different naming scheme, explained in a comment in the source code:
-- | Field rules for fields in the form @prefixFieldname@ or @_prefixFieldname@.
-- If you want all fields to be lensed, then there is no reason to use an @_@ before the prefix.
-- If any of the record fields leads with an @_@ then it is assumed a field without an @_@ should not have a lens created.
camelCaseFields :: LensRules
camelCaseFields = defaultFieldRules
So makeFields also expects that all fields are not just prefixed with an underscore, but also include the data type name as a prefix (as in data Foo = Foo { _fooBar :: Int, _fooBaz :: Bool }). If you want to generate lenses for all fields, you can leave out the underscore.
This is all controlled by the _fieldToDef (exported as lensField by Control.Lens.TH).
As you can see, the Control.Lens.TH module is very flexible. Using makeLensesWith, you can create your very own LensRules if you need a pattern not covered by the standard functions.
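As a small illustration of that flexibility, a custom naming scheme might be sketched like this (assuming lens's mappingNamer field namer; the Point type and the xL/yL names are just examples of mine):

```haskell
{-# LANGUAGE TemplateHaskell #-}
import Control.Lens

data Point = Point { x :: Int, y :: Int }

-- Generate lenses named xL and yL from the plain field names,
-- instead of requiring an underscore prefix.
makeLensesWith (lensRules & lensField .~ mappingNamer (\n -> [n ++ "L"])) ''Point
```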
Disclaimer: this is based on experimenting with the working code; it gave me enough information to proceed with my project, but I'd still prefer a better-documented answer.
data Stuff = Stuff
  { _foo      :: Int
  , _FooBar   :: Int
  , _stuffBaz :: Int
  }
makeLenses
Will create foo as a lens accessor to Stuff
Will create fooBar (changing the capitalized name to lowercase).
makeFields
Will create baz and a class HasBaz; it will make Stuff an instance of that class.
Normal
makeLenses creates a single top-level optic for each field in the type. It looks for fields that start with an underscore (_) and it creates an optic that is as general as possible for that field.
If your type has one constructor and one field you'll get an Iso.
If your type has one constructor and multiple fields you'll get many Lens.
If your type has multiple constructors you'll get many Traversal.
Classy
makeClassy creates a single class containing all the optics for your type. This version is used to make it easy to embed your type in another larger type achieving a kind of subtyping. Lens and Traversal optics will be created according to the rules above (Iso is excluded because it hinders the subtyping behavior.)
In addition to one method in the class per field you'll get an extra method that makes it easy to derive instances of this class for other types. All of the other methods have default instances in terms of the top-level method.
data T = MkT { _field1 :: Int, _field2 :: Char }

class HasT a where
  t :: Lens' a T

  field1 :: Lens' a Int
  field2 :: Lens' a Char
  field1 = t . field1
  field2 = t . field2

instance HasT T where
  t = id
  field1 f (MkT x y) = fmap (\x' -> MkT x' y) (f x)
  field2 f (MkT x y) = fmap (\y' -> MkT x y') (f y)

data U = MkU { _subt :: T, _field3 :: Bool }

instance HasT U where
  t f (MkU x y) = fmap (\x' -> MkU x' y) (f x)
  -- field1 and field2 automatically defined
This has the additional benefit that it is easy to export/import all the lenses for a given type. import Module (HasT(..))
Fields
makeFields creates a single class per field which is intended to be reused between all types that have a field with the given name. This is more of a solution to record field names not being able to be shared between types.
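A minimal sketch of that sharing (the types and field names here are mine, assuming the lens package):

```haskell
{-# LANGUAGE TemplateHaskell, FlexibleInstances, FunctionalDependencies #-}
import Control.Lens

data Person  = Person  { _personName  :: String }
data Company = Company { _companyName :: String }

makeFields ''Person   -- generates class HasName and instance HasName Person
makeFields ''Company  -- reuses HasName, adds instance HasName Company

-- The same 'name' lens now works on both types:
-- view name (Person "alice")  and  view name (Company "acme")
```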

How do I handle the Maybe result of at in Control.Lens.Indexed without a Monoid instance

I recently discovered the lens package on Hackage and have been trying to make use of it now in a small test project that might turn into a MUD/MUSH server one very distant day if I keep working on it.
Here is a minimized version of my code illustrating the problem I am facing right now with the at lenses used to access Key/Value containers (Data.Map.Strict in my case)
{-# LANGUAGE OverloadedStrings, GeneralizedNewtypeDeriving, TemplateHaskell #-}
module World where
import Control.Applicative ((<$>),(<*>), pure)
import Control.Lens
import Data.Map.Strict (Map)
import qualified Data.Map.Strict as DM
import Data.Maybe
import Data.UUID
import Data.Text (Text)
import qualified Data.Text as T
import System.Random (Random, randomIO)
newtype RoomId = RoomId UUID deriving (Eq, Ord, Show, Read, Random)
newtype PlayerId = PlayerId UUID deriving (Eq, Ord, Show, Read, Random)
data Room =
  Room { _roomId          :: RoomId
       , _roomName        :: Text
       , _roomDescription :: Text
       , _roomPlayers     :: [PlayerId]
       } deriving (Eq, Ord, Show, Read)
makeLenses ''Room

data Player =
  Player { _playerId          :: PlayerId
         , _playerDisplayName :: Text
         , _playerLocation    :: RoomId
         } deriving (Eq, Ord, Show, Read)
makeLenses ''Player

data World =
  World { _worldRooms   :: Map RoomId Room
        , _worldPlayers :: Map PlayerId Player
        } deriving (Eq, Ord, Show, Read)
makeLenses ''World
mkWorld :: IO World
mkWorld = do
  r1 <- Room <$> randomIO <*> pure "The Singularity" <*> pure "You are standing in the only place in the whole world" <*> pure []
  p1 <- Player <$> randomIO <*> pure "testplayer1" <*> (pure $ r1^.roomId)
  let rooms   = at (r1^.roomId) ?~ (set roomPlayers [p1^.playerId] r1) $ DM.empty
      players = at (p1^.playerId) ?~ p1 $ DM.empty
  return $ World rooms players

viewPlayerLocation :: World -> PlayerId -> RoomId
viewPlayerLocation world playerId =
  view (worldPlayers.at playerId.traverse.playerLocation) world
Since rooms, players and similar objects are referenced all over the code I store them in my World state type as maps of Ids (newtyped UUIDs) to their data objects.
To retrieve those with lenses I need to somehow handle the Maybe returned by the at lens (which is Nothing in case the key is not in the map). In my last line I tried to do this via traverse, which does typecheck as long as the final result is an instance of Monoid, but this is not generally the case. Right here it is not, because playerLocation returns a RoomId, which has no Monoid instance.
No instance for (Data.Monoid.Monoid RoomId)
  arising from a use of `traverse'
Possible fix:
  add an instance declaration for (Data.Monoid.Monoid RoomId)
In the first argument of `(.)', namely `traverse'
In the second argument of `(.)', namely `traverse . playerLocation'
In the second argument of `(.)', namely
  `at playerId . traverse . playerLocation'
Since the Monoid is required by traverse only because traverse generalizes to containers of sizes other than one, I was wondering if there is a better way to handle this, one that does not require semantically nonsensical Monoid instances on all the types possibly contained in one of the objects I want to store in the map.
Or maybe I misunderstood the issue here completely and I need to use a completely different bit of the rather large lens package?
If you have a Traversal and you want to get a Maybe for the first element, you can just use headOf instead of view, i.e.
viewPlayerLocation :: World -> PlayerId -> Maybe RoomId
viewPlayerLocation world playerId =
  headOf (worldPlayers.at playerId.traverse.playerLocation) world
The infix version of headOf is called ^?. You can also use toListOf to get a list of all elements, and other functions depending on what you want to do. See the Control.Lens.Fold documentation.
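Using the infix form on the types from the question, this might read (a sketch; for maps, ix playerId behaves like at playerId . traverse):

```haskell
viewPlayerLocation :: World -> PlayerId -> Maybe RoomId
viewPlayerLocation world playerId =
  world ^? worldPlayers . ix playerId . playerLocation
```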
A quick heuristic for which module to look for your functions in:
A Getter is a read-only view of exactly one value
A Lens is a read-write view of exactly one value
A Traversal is a read-write view of zero-or-more values
A Fold is a read-only view of zero-or-more values
A Setter is a write-only (well, modify-only) view of zero-or-more values (possibly uncountably many values, in fact)
An Iso is, well, an isomorphism -- a Lens that can go in either direction
Presumably you know when you're using an Indexed function, so you can look in the corresponding Indexed module
Think about what you're trying to do and what the most general module to put it in would be. :-) In this case you have a Traversal, but you're only trying to view, not modify, so the function you want is in .Fold. If you also had the guarantee that it was referring to exactly one value, it would be in .Getter.
Short answer: the lens package is not magic.
Without telling me what the error or default is, you want to make:
viewPlayerLocation :: World -> PlayerId -> RoomId
You know two things, that
To retrieve those with lenses I need to handle the Maybe returned by the at lens
and
traverse which does typecheck as long as the final result is an instance of Monoid
With a Monoid you get mempty :: Monoid m => m as the default when the lookup fails.
What can fail: the PlayerId might not be in _worldPlayers, and the _playerLocation might not be in _worldRooms.
So what should your code do if a lookup fails? Is this "impossible"? If so, then use fromMaybe (error "impossible") :: Maybe a -> a to crash.
If it possible for the lookup to fail then is there a sane default? Perhaps return Maybe RoomId and let the caller decide?
There is ^?! which frees you from calling fromMaybe.
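If a missing key really is impossible, the partial infix variant keeps the original signature (a sketch using the types from the question; ^?! crashes at runtime when the fold is empty):

```haskell
viewPlayerLocation :: World -> PlayerId -> RoomId
viewPlayerLocation world playerId =
  world ^?! worldPlayers . at playerId . traverse . playerLocation
```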
