Why are set operators not provided by Data.Vector in Haskell? - haskell

My application involves heavy array operations (e.g. O(1) indexing), thus Data.Vector and Data.Vector.Unboxed are preferred to Data.List.
It also involves many set operations (e.g. intersectBy), which however, are not provided by the Data.Vector.
Each of these functions can be implemented in 3-4 lines, much as in Data.List.
Is there any reason they are not implemented for Data.Vector? I can only speculate: maybe set operations on Data.Vector are discouraged for performance reasons, i.e. intersectBy would first produce the intersection through a list comprehension and then convert the list into a Data.Vector?

I assume it's missing because intersection of unsorted, immutable arrays must have a worst-case run time of Ω(n*m) without using additional space, and Data.Vector is optimized for performance. If you want, you can write that function yourself, though:
import qualified Data.Vector as V
intersect :: Eq a => V.Vector a -> V.Vector a -> V.Vector a
intersect x = V.filter (`V.elem` x)
Or by using a temporary set data structure to achieve an expected O(n + m) complexity:
import qualified Data.Vector as V
import qualified Data.HashSet as HS
import Data.Hashable (Hashable)

intersect :: (Hashable a, Eq a) => V.Vector a -> V.Vector a -> V.Vector a
intersect x = V.filter (`HS.member` set)
  where set = HS.fromList $ V.toList x
If you can afford the extra memory usage, you can use some kind of aggregate type for your data, for example an array for fast random access plus a hash trie like Data.HashSet for fast membership checks, keeping both containers up to date. That way you can reduce the asymptotic complexity of intersection to something like O(min(n, m)).
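A rough sketch of that aggregate idea (my own; the IndexedBag name and fields are made up for illustration):

import qualified Data.Vector as V
import qualified Data.HashSet as HS
import Data.Hashable (Hashable)

-- Keep a vector (O(1) indexing) and a hash set (O(1) expected membership)
-- in sync, so intersection only has to walk the smaller side.
data IndexedBag a = IndexedBag
  { elems   :: V.Vector a
  , members :: HS.HashSet a
  }

empty :: IndexedBag a
empty = IndexedBag V.empty HS.empty

insert :: (Eq a, Hashable a) => a -> IndexedBag a -> IndexedBag a
insert x b@(IndexedBag v s)
  | x `HS.member` s = b                                  -- already present
  | otherwise       = IndexedBag (V.snoc v x) (HS.insert x s)

intersect :: (Eq a, Hashable a) => IndexedBag a -> IndexedBag a -> V.Vector a
intersect x y
  | V.length (elems x) <= V.length (elems y) = V.filter (`HS.member` members y) (elems x)
  | otherwise                                = V.filter (`HS.member` members x) (elems y)

Note that V.snoc is O(n), so a structure like this favours read-heavy workloads over frequent insertion.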

Efficient way to do n-ary branch / tabulated functions?

I'm trying to get some basic information on the performance characteristics of branches in SBV.
Let's suppose I have an SInt16 and a very sparse lookup table Map Int16 a. I can implement the lookup with nested ite:
sCase :: (Mergeable a) => SInt16 -> a -> Map Int16 a -> a
sCase x def = go . toList
  where
    go [] = def
    go ((k,v):kvs) = ite (x .== literal k) v (go kvs)
However, this means the generated tree will be very deep.
Does that matter?
If yes, is it better to instead generate a balanced tree of branches, effectively mirroring the Map's structure? Or is there some other scheme that would give even better performance?
If there are fewer than 256 entries in the map, would it change anything to "compress" it so that sCase works on an SInt8 and a Map Int8 a?
Is there some built-in SBV combinator for this use case that works better than iterated ite?
EDIT: It turns out that it matters a lot what my a is, so let me add some more detail to that. I am currently using sCase to branch in a stateful computation modeled as an RWS r w s a, with the following instances:
instance forall a. Mergeable a => Mergeable (Identity a) where
    symbolicMerge force cond thn els =
        Identity $ symbolicMerge force cond (runIdentity thn) (runIdentity els)

instance (Mergeable s, Mergeable w, Mergeable a, forall a. Mergeable a => Mergeable (m a)) => Mergeable (RWST r w s m a) where
    symbolicMerge force cond thn els = Lazy.RWST $
        symbolicMerge force cond (runRWST thn) (runRWST els)
So stripping away all the newtypes, I'd like to branch into something of type r -> s -> (a, s, w) s.t. Mergeable s, Mergeable w and Mergeable a.
Symbolic look-ups are expensive
Symbolic array lookup will be expensive regardless of what data structure you use. It boils down to the fact that there's no information available to the symbolic execution engine to cut down on the state space, so it ends up doing more or less what you coded yourself.
SMTLib Arrays
However, the best solution in these cases is to actually use SMT's support for arrays: http://smtlib.cs.uiowa.edu/theories-ArraysEx.shtml
SMTLib arrays are different from what you'd consider an array in a regular programming language: they do not have bounds. In that sense, an SMTLib array is more of a map from inputs to outputs, spanning the entire domain. (i.e., they are equivalent to functions.) But SMT solvers have custom theories to deal with arrays and thus they can handle problems involving arrays much more efficiently. (On the down-side, there's no notion of index-out-of-bounds or somehow controlling the range of elements you can access. You can code those up yourself on top of the abstraction, leaving it up to you to decide how you want to handle such invalid accesses.)
If you are interested in learning more about how SMT solvers deal with arrays, the classic reference is: http://theory.stanford.edu/~arbrad/papers/arrays.pdf
Arrays in SBV
SBV supports arrays, through the SymArray class: https://hackage.haskell.org/package/sbv-8.7/docs/Data-SBV.html#t:SymArray
The SFunArray type actually does not use SMTLib arrays. This was designed to support solvers that didn't understand Arrays, such as ABC: https://hackage.haskell.org/package/sbv-8.7/docs/Data-SBV.html#t:SFunArray
The SArray type fully supports SMTLib arrays: https://hackage.haskell.org/package/sbv-8.7/docs/Data-SBV.html#t:SArray
There are some differences between these types, and the above links describe them. However, for most purposes, you can use them interchangeably.
Converting a Haskell map to an SBV array
Going back to your original question, I'd be tempted to use an SArray to model such a look up. I'd code it as:
{-# LANGUAGE ScopedTypeVariables #-}
import Data.SBV
import qualified Data.Map as M
import Data.Int
-- Fill an SBV array from a map
mapToSArray :: (SymArray array, SymVal a, SymVal b) => M.Map a (SBV b) -> array a b -> array a b
mapToSArray m a = foldl (\arr (k, v) -> writeArray arr (literal k) v) a (M.toList m)
And use it as:
g :: Symbolic SBool
g = do let def = 0
       -- get a symbolic array, initialized with def
       arr <- newArray "myArray" (Just def)
       let m :: M.Map Int16 SInt16
           m = M.fromList [(5, 2), (10, 5)]
       -- Fill the array from the map
       let arr' :: SArray Int16 Int16 = mapToSArray m arr
       -- A simple problem:
       idx1 <- free "idx1"
       idx2 <- free "idx2"
       pure $ 2 * readArray arr' idx1 + 1 .== readArray arr' idx2
When I run this, I get:
*Main> sat g
Satisfiable. Model:
idx1 = 5 :: Int16
idx2 = 10 :: Int16
You can run it as satWith z3{verbose=True} g to see the SMTLib output it generates; the costly lookups are simply delegated to the backend solver.
Efficiency
The question of whether this will be "efficient" really depends on how many elements your map has that you're constructing the array from. The larger the number of elements and the trickier the constraints, the less efficient it will be. In particular, if you ever write to an index that is symbolic, I'd expect slow-downs in solving time. If they're all constants, it should be relatively performant. As is usual in symbolic programming, it's really hard to predict any performance without seeing the actual problem and experimenting with it.
Arrays in the query context
The function newArray works in the symbolic context. If you're in a query context, instead use freshArray: https://hackage.haskell.org/package/sbv-8.7/docs/Data-SBV-Control.html#v:freshArray
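A minimal sketch of that (my own; I'm assuming freshArray mirrors newArray in taking a name plus an optional initial value, so check the linked docs for the exact signature in your sbv version):

import Data.SBV
import Data.SBV.Control
import Data.Int

q :: Symbolic ()
q = query $ do
      -- allocate the array inside the query, initialized with 0
      arr <- freshArray "qArray" (Just 0) :: Query (SArray Int16 Int16)
      idx <- freshVar "idx"               :: Query SInt16
      constrain $ readArray arr idx .== 0
      _ <- checkSat                        -- run the whole thing with e.g. runSMT q
      pure ()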

Do newtypes incur no cost even when you cannot pattern-match on them?

Context
Most Haskell tutorials I know (e.g. LYAH) introduce newtypes as a cost-free idiom that allows enforcing more type safety. For instance, this code will type-check:
type Speed = Double
type Length = Double
computeTime :: Speed -> Length -> Double
computeTime v l = l / v
but this won't:
newtype Speed = Speed { getSpeed :: Double }
newtype Length = Length { getLength :: Double }
-- wrong!
computeTime :: Speed -> Length -> Double
computeTime v l = l / v
and this will:
-- right
computeTime :: Speed -> Length -> Double
computeTime (Speed v) (Length l) = l / v
In this particular example, the compiler knows that Speed is just a Double, so the pattern-matching is moot and will not generate any executable code.
Question
Are newtypes still cost-free when they appear as arguments of parametric types? For instance, consider a list of newtypes:
computeTimes :: [Speed] -> Length -> [Double]
computeTimes vs l = map (\v -> getSpeed v / l) vs
I could also pattern-match on speed in the lambda:
computeTimes' :: [Speed] -> Length -> [Double]
computeTimes' vs l = map (\(Speed v) -> v / l) vs
In either case, for some reason, I feel that real work is getting done! I start to feel even more uncomfortable when the newtype is buried within a deep tree of nested parametric datatypes, e.g. Map Speed [Set Speed]; in this situation, it may be difficult or impossible to pattern-match on the newtype, and one would have to resort to accessors like getSpeed.
TL;DR
Will the use of a newtype never ever incur a cost, even when the newtype appears as a (possibly deeply-buried) argument of another parametric type?
On their own, newtypes are cost-free: applying their constructor or pattern matching on them has zero cost.
When used as a parameter of other types, e.g. [T], the representation of [T] is precisely the same as that of [T'] if T is a newtype for T'. So, there's no loss in performance.
However, there are two main caveats I can see.
newtypes and instances
First, newtype is frequently used to introduce new instances of type classes. Clearly, when these are user-defined, there's no guarantee that they have the same cost as the original instances. E.g., when using
newtype Op a = Op a deriving Eq

instance Ord a => Ord (Op a) where
    compare (Op x) (Op y) = compare y x
comparing two Op Int will cost slightly more than comparing Int, since the arguments need to be swapped. (I am neglecting optimizations here, which might make this cost free when they trigger.)
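As a small illustration (my own addition, in the spirit of Data.Ord.Down): the flipped instance gives a descending sort, at the cost of the argument swap in every comparison.

import Data.List (sort)

-- relies on the Op newtype and its reversed Ord instance defined above
descendingSort :: Ord a => [a] -> [a]
descendingSort = map unOp . sort . map Op
  where unOp (Op x) = x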
newtypes used as type arguments
The second point is more subtle. Consider the following two implementations of the identity [Int] -> [Int]
id1, id2 :: [Int] -> [Int]
id1 xs = xs
id2 xs = map (\x->x) xs
The first one has constant cost. The second has a linear cost (assuming no optimization triggers). A smart programmer should prefer the first implementation, which is also simpler to write.
Suppose now we introduce newtypes on the argument type, only:
id1, id2 :: [Op Int] -> [Int]
id1 xs = xs -- error!
id2 xs = map (\(Op x)->x) xs
We can no longer use the constant cost implementation because of a type error. The linear cost implementation still works, and is the only option.
Now, this is quite bad. The input representation for [Op Int] is exactly, bit by bit, the same for [Int]. Yet, the type system forbids us to perform the identity in an efficient way!
To overcome this issue, safe coercions were introduced in Haskell.
id3 :: [Op Int] -> [Int]
id3 = coerce
The magic coerce function, under certain hypotheses, removes or inserts newtypes as needed to make the types match, even inside other types, as for [Op Int] above. Further, it is a zero-cost function.
Note that coerce works only under certain conditions (the compiler checks for them). One of these is that the newtype constructor must be visible: if a module does not export Op :: a -> Op a you cannot coerce Op Int to Int or vice versa. Indeed, if a module exports the type but not the constructor, it would be wrong for coerce to make the constructor accessible anyway. This keeps the "smart constructors" idiom safe: modules can still enforce complex invariants through opaque types.
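For instance, a module along these lines (my own illustration) keeps the constructor private, so clients can use the accessors but cannot coerce:

module Opaque (Op, getOp, mkOp) where  -- the Op constructor itself is NOT exported

newtype Op a = Op a

-- a "smart constructor": invariants could be enforced here before wrapping
mkOp :: a -> Op a
mkOp = Op

getOp :: Op a -> a
getOp (Op x) = x

In a module that imports Opaque, coerce :: [Op Int] -> [Int] is rejected by the compiler, because the required Coercible instance is only usable where the constructor is in scope.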
It doesn't matter how deeply buried a newtype is in a stack of (fully) parametric types. At runtime, the values v :: Speed and w :: Double are completely indistinguishable – the wrapper is erased by the compiler, so even v is really just a pointer to a single 64-bit floating-point number in memory. Whether that pointer is stored in a list or tree or whatever doesn't make a difference either. getSpeed is a no-op and will not appear at runtime in any way at all.
So what do I mean by “fully parametric”? The thing is, newtypes can obviously make a difference at compile time, via the type system. In particular, they can guide instance resolution, so a newtype that invokes a different class method may certainly have worse (or, just as easily, better!) performance than the wrapped type. For example,
class Integral n => Fibonacci n where
  fib :: n -> Integer

instance Fibonacci Int where
  fib = (fibs !!)
   where fibs = [ if i<2 then 1
                  else fib (i-2) + fib (i-1)
                | i<-[0::Int ..] ]
this implementation is pretty slow, because it uses a lazy list (and performs lookups in it over and over again) for memoisation. On the other hand,
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
import qualified Data.Vector as Arr

-- | A number between 0 and 753
newtype SmallInt = SmallInt { getSmallInt :: Int }
  deriving (Eq, Ord, Enum, Num, Real, Integral) -- lifted from Int, to satisfy the Integral superclass of Fibonacci

instance Fibonacci SmallInt where
  fib = (fibs Arr.!) . getSmallInt
   where fibs = Arr.generate 754 $
           \i -> if i<2 then 1
                 else fib (SmallInt $ i-2) + fib (SmallInt $ i-1)
This fib is much faster because, thanks to the input being limited to a small range, it is feasible to strictly allocate all of the results and store them in a fast O(1) lookup array, with no need for the spine-laziness.
This again applies regardless of what structure you store the numbers in. But the different performance only comes about because different method instantiations are called – at runtime this simply means completely different functions are run.
Now, a fully parametric type constructor must be able to store values of any type. In particular, it cannot impose any class restrictions on the contained data, and hence cannot call any class methods either. Therefore this kind of performance difference cannot happen if you're just dealing with generic [a] lists or Map Int a maps. It can, however, occur when you're dealing with GADTs. In this case, even the actual memory layout might be completely different, for instance with
{-# LANGUAGE GADTs #-}
import qualified Data.Vector as Arr
import qualified Data.Vector.Unboxed as UArr
data Array a where
  BoxedArray :: Arr.Vector a -> Array a
  UnboxArray :: UArr.Unbox a => UArr.Vector a -> Array a
might allow you to store Double values more efficiently than Speed values, because the former can be stored in a cache-optimised unboxed array. This is only possible because the UnboxArray constructor is not fully parametric.
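As an illustrative consumer of this GADT (my own addition, not from the original answer):

-- Indexing works for either constructor; matching on UnboxArray brings the
-- Unbox dictionary back into scope, which is what permits the compact layout.
index :: Array a -> Int -> a
index (BoxedArray v) i = v Arr.!  i
index (UnboxArray v) i = v UArr.! i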

Haskell vector C++ push_back analogue

I've discovered that Haskell's Data.Vector.* modules lack an analogue of C++ std::vector::push_back. There is grow/unsafeGrow, but they seem to have O(n) complexity.
Is there a way to grow vectors in O(1) amortized time for an element?
No, there really is no such facility in Data.Vector. It isn't too difficult to implement this from scratch using MutableArray, like Data.Vector.Mutable does (see my implementation below), but there are some significant drawbacks. In particular, all of its operations end up happening inside some state context, usually ST or IO. This has the downsides that
Any code that manipulates such a data structure ends up having to be monadic
The compiler is much less likely to be able to optimize. For example, libraries like vector use something really clever called fusion to optimize away intermediate allocations. This sort of thing is not possible in a state context.
Parallelism is going to be a lot tougher: in ST I can't even have two threads and in IO I will have race conditions all over the place. The nasty bit here is that any sharing is going to have to happen in IO.
As if all this wasn't enough, garbage collection also performs better inside pure code.
What do I do then?
It isn't particularly often that you have a need for exactly this behaviour - usually you are better off using an immutable data structure (thereby avoiding all of the aforementioned problems) which does something similar. Limiting ourselves to containers that come with GHC, some alternatives include:
if you are almost always just using push_back, maybe you just want a stack (a plain old [a]).
if you anticipate doing more push_back than lookups, Data.Sequence gives you O(1) appending to either end and O(log n) lookup (see the sketch after this list).
if you are interested in a lot of operations, especially map-like ones, Data.IntMap is pretty optimized. Even if the theoretical cost of those operations is O(log n), you will need a pretty big IntMap to start feeling that cost.
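For the Data.Sequence option, the push_back analogue is just (|>); a minimal sketch (my own, not part of the original answer):

import qualified Data.Sequence as Seq
import Data.Sequence (Seq, (|>))

-- append at the right end in O(1), like push_back
pushBack :: Seq a -> a -> Seq a
pushBack = (|>)

-- positional lookup in O(log n)
at :: Seq a -> Int -> Maybe a
at s i = Seq.lookup i s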
Making something like C++ vector
Of course, if one doesn't care about the restrictions mentioned initially, there is no reason not to have a C++ like vector. Just for fun, I went ahead and implemented this from scratch (needs packages data-default and primitive).
The reason this code is probably not already in some library is that it goes against much of the spirit of Haskell (I do this with the intent of conforming to a C++ style vector).
The only operation that actually makes a new vector is newVector - everything else "modifies" an existing vector. Since pushBack doesn't return a new GrowVector, it has to modify the existing one (including its length and/or capacity), so length and capacity have to be "pointers". In turn, that means that even getting the length is a monadic operation.
While this isn't unboxed, it would not be too difficult to replicate vector's data family approach - it is just tedious.¹
With that said:
module GrowVector (
GrowVector, newEmpty, size, read, write, pushBack, popBack
) where
import Data.Primitive.Array
import Data.Primitive.MutVar
import Data.Default
import Control.Monad
import Control.Monad.Primitive (PrimState, PrimMonad)
import Prelude hiding (length, read)
data GrowVector s a = GrowVector
  { underlying :: MutVar s (MutableArray s a) -- ^ underlying array
  , length     :: MutVar s Int                -- ^ perceived length of vector
  , capacity   :: MutVar s Int                -- ^ actual capacity
  }

type GrowVectorIO = GrowVector (PrimState IO)

-- | Make a new empty vector with the given capacity. O(n)
newEmpty :: (Default a, PrimMonad m) => Int -> m (GrowVector (PrimState m) a)
newEmpty cap = do
  arr <- newArray cap def
  GrowVector <$> newMutVar arr <*> newMutVar 0 <*> newMutVar cap

-- | Read an element in the vector (unchecked). O(1)
read :: PrimMonad m => GrowVector (PrimState m) a -> Int -> m a
g `read` i = do arr <- readMutVar (underlying g); arr `readArray` i

-- | Find the size of the vector. O(1)
size :: PrimMonad m => GrowVector (PrimState m) a -> m Int
size g = readMutVar (length g)

-- | Double the vector capacity. O(n)
resize :: (Default a, PrimMonad m) => GrowVector (PrimState m) a -> m ()
resize g = do
  curCap <- readMutVar (capacity g)          -- read current capacity
  curArr <- readMutVar (underlying g)        -- read current array
  curLen <- readMutVar (length g)            -- read current length
  let newCap = max 1 (2 * curCap)            -- double it (at least 1)
  newArr <- newArray newCap def              -- allocate a new, bigger array
  copyMutableArray newArr 0 curArr 0 curLen  -- copy the old elements over
  underlying g `writeMutVar` newArr          -- use the new array in the vector
  capacity g `writeMutVar` newCap            -- update the capacity in the vector

-- | Write an element to the array (unchecked). O(1)
write :: PrimMonad m => GrowVector (PrimState m) a -> Int -> a -> m ()
write g i x = do arr <- readMutVar (underlying g); writeArray arr i x

-- | Pop an element off the back of the vector, mutating it (unchecked). O(1)
popBack :: PrimMonad m => GrowVector (PrimState m) a -> m a
popBack g = do
  s <- size g
  x <- g `read` (s - 1)
  length g `modifyMutVar'` subtract 1
  pure x

-- | Push an element onto the back. Amortized O(1)
pushBack :: (Default a, PrimMonad m) => GrowVector (PrimState m) a -> a -> m ()
pushBack g x = do
  s <- readMutVar (length g)    -- read current size
  c <- readMutVar (capacity g)  -- read current capacity
  when (s == c) (resize g)      -- if the array is full, double its capacity
  write g s x                   -- write to the first free slot
  length g `modifyMutVar'` (+1) -- increase the length
Current semantics of grow
I think the github issue does a pretty good job of explaining the semantics:
I think the intended semantics are that it may do a realloc, but not guaranteed to, and all the current implementations do the simpler copying semantics because for on heap allocations the cost should be roughly the same.
Basically you should use grow when you want a new mutable vector of an increased size, starting with the elements of the old vector (and no longer care about the old vector). This is quite useful - for example one could implement GrowVector using MVector and grow.
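As a rough sketch of that idea (my own, in IO for simplicity; the Grow name and its fields are made up): an MVector plus a fill counter, doubled via grow whenever it is full.

import qualified Data.Vector.Mutable as MV
import Data.IORef

data Grow a = Grow { bufRef :: IORef (MV.IOVector a), usedRef :: IORef Int }

newGrow :: Int -> IO (Grow a)
newGrow cap = Grow <$> (MV.new (max 1 cap) >>= newIORef) <*> newIORef 0

push :: Grow a -> a -> IO ()
push g x = do
  v <- readIORef (bufRef g)
  n <- readIORef (usedRef g)
  v' <- if n == MV.length v
          then do w <- MV.grow v (MV.length v)  -- grow by the current length, i.e. double
                  writeIORef (bufRef g) w
                  pure w
          else pure v
  MV.write v' n x
  writeIORef (usedRef g) (n + 1)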
¹ The approach is that for every new type of unboxed vector you want to have, you make a data instance that "expands" your type into a fixed number of unboxed arrays (or other unboxed vectors). This is the point of data families: to allow different instantiations of a type to have totally different runtime representations, and to be extensible (you can add your own data instance if you want).
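A schematic sketch of that idea (my own, not vector's actual definitions): each data instance chooses its own runtime representation.

{-# LANGUAGE TypeFamilies #-}
import qualified Data.Vector.Unboxed as U

data family Packed a
newtype instance Packed Double = PackedDouble (U.Vector Double)
data    instance Packed (a, b) = PackedPair !(Packed a) !(Packed b)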

What is "a Haskell way" to transpose a graph?

Suppose I have a tree represented as a list of parents and I want to reverse the edges, obtaining a list of children for each node. For this tree - http://i.stack.imgur.com/uapqT.png - the transformation would look like:
[0,0,0,1,1,2,5,4,4] -> [[2,1],[4,3],[5],[],[8,7],[6],[],[],[]]
It's not limited to graph transposing, however. I have a few other problems that I would solve in an imperative language in the following way: traverse a source data array and non-sequentially update a resulting array as I learn something about it.
Essentially, my question is "what is Haskell's idiomatic way to solve things like this?". As I understand it, I can do it in an imperative way by means of mutable vectors, but isn't there some purely functional method? If not, how would I properly use mutables?
Finally, I need it to work fast, that is O(n) complexity, and non-standard packages are not an option for me.
It's worth considering the pure functions in Data.Vector or Data.Array that internally use mutation in order to be more efficient (the accum-s in both libraries, plus the unfolds and construct-s in vector).
The accum-s are great when we don't care about intermediate states of an array during construction. They're nicely applicable for transposing graphs, although we have to provide a range for the node keys:
{-# LANGUAGE TupleSections #-}
import qualified Data.Array as A

type Graph = [(Int, [Int])]

transpose :: (Int, Int) -> Graph -> Graph
transpose range g =
  A.assocs $ A.accumArray (flip (:)) [] range (do {(i, ns) <- g; map (,i) ns})
Here we first unroll the graph into an adjacency list, but with swapped pairs of indices, and then accumulate them into an array. It's roughly as fast as a standard imperative loop over a mutable array, and it's more convenient than the ST monad.
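For instance (a small check of my own, not from the original answer):
*Main> transpose (0, 2) [(0, [1, 2]), (1, [2])]
[(0,[]),(1,[0]),(2,[1,0])]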
Alternatively, we can use IntMap, likely alongside the State monad, and port our imperative algorithms as they are; the performance will be satisfactory for most purposes.
Fortunately IntMap provides a lot of higher-order functions, so we're not (always) forced to program in an imperative style with it. There's an analogue for accum, for instance:
{-# LANGUAGE TupleSections #-}
import qualified Data.IntMap.Strict as IM

transpose :: Graph -> Graph
transpose g =
  IM.assocs $ IM.fromListWith (++) (do {(i, ns) <- g; (i,[]) : map (,[i]) ns})
A purely functional way would be to use a map to store the information, producing an O(n log n) algorithm:
import qualified Data.IntMap as IM
import Data.Maybe (fromMaybe)

childrenMap :: [Int] -> IM.IntMap [Int]
childrenMap xs = foldr addChild IM.empty $ zip xs [0..]
  where
    addChild :: (Int, Int) -> IM.IntMap [Int] -> IM.IntMap [Int]
    addChild (parent, child) = IM.alter (Just . (child :) . fromMaybe []) parent
You could also use an imperative solution and keep things pure using the ST monad, which is obviously O(n), but the imperative code somewhat obscures the main idea:
import Control.Monad (forM_)
import Data.Array
import Data.Array.MArray
import Data.Array.ST
childrenST :: [Int] -> [[Int]]
childrenST xs = elems $ runSTArray $ do
    let l = length xs
    arr <- newArray (0, l - 1) []
    let add (parent, child) =
            writeArray arr parent . (child :) =<< readArray arr parent
    forM_ (zip xs [0..]) add
    return arr
One drawback of this approach is that if an index is out of bounds, it just fails.
Another is that you traverse the list twice. However, if you used arrays instead of lists everywhere, this wouldn't matter.

Set-Like data structure that maintains insertion Order?

The properties I'm looking for are
initially maintains insertion order
traversing in the insertion order
and, of course, maintaining that each element is unique
But there are cases where it's okay to disregard insertion order, such as...
retrieving a difference between two different sets
performing a union of the two sets, eliminating any duplicates
Java's LinkedHashSet seems to be exactly what I'm after, except for the fact it's not written in Haskell.
current & initial solution
The easiest (and a relatively inefficient) solution is to implement it as a list and transform it into a set when I need to, but I believe there is likely a better way.
other ideas
My first idea was to implement it as a Data.Set of a newtype of (Int, a), where it would be ordered by the first tuple index, with the second component (a) being the actual value. I quickly realised this wasn't going to work because the set would allow duplicates of the type a, which would defeat the whole purpose of using a set.
simultaneously maintaining a list and a set? (nope)
Another idea I had was to have an abstract data type that maintains both a list and a set representation of the data, which doesn't sound too efficient either.
recap
Are there any decent implementations of such a data structure in Haskell? I've seen Data.List.Ordered but it seems to just add set operations to lists, which sounds terribly inefficient as well (but is likely what I'll settle for if I can't find a solution). Another solution suggested here was to implement it via a finger tree, but I would prefer not to reimplement it if it's already a solved problem.
You can certainly use Data.Set with what is isomorphic to (Int, a), but wrapped in a newtype with a different Eq instance:
newtype Entry a = Entry { unEntry :: (Int, a) } deriving (Show)

instance Eq a => Eq (Entry a) where
    (Entry (_, a)) == (Entry (_, b)) = a == b

instance Ord a => Ord (Entry a) where
    compare (Entry (_, a)) (Entry (_, b)) = compare a b
But this won't quite solve all your problems if you want automatic incrementing of your index, so you could make a wrapper around (Set (Entry a), Int):
newtype IndexedSet a = IndexedSet (Set (Entry a), Int) deriving (Eq, Show)
But this does mean that you'll have to re-implement Data.Set to respect this relationship:
import qualified Data.Set as S
import Data.Set (Set)
import Data.Ord (comparing)
import Data.List (sortBy)

-- declarations from above...

null :: IndexedSet a -> Bool
null (IndexedSet (set, _)) = S.null set

-- | If you re-index on deletions then size will just be the associated index
size :: IndexedSet a -> Int
size (IndexedSet (set, _)) = S.size set

-- Remember that (0, a) == (n, a) for all n
member :: Ord a => a -> IndexedSet a -> Bool
member a (IndexedSet (set, _)) = S.member (Entry (0, a)) set

empty :: IndexedSet a
empty = IndexedSet (S.empty, 0)

-- | This function is critical, you have to make sure to increment the index
-- Might also want to consider making it strict in the i field for performance
insert :: Ord a => a -> IndexedSet a -> IndexedSet a
insert a (IndexedSet (set, i)) = IndexedSet (S.insert (Entry (i, a)) set, i + 1)

-- | Simply remove the `Entry` wrapper, sort by the indices, then strip those off
toList :: IndexedSet a -> [a]
toList (IndexedSet (set, _))
  = map snd
  $ sortBy (comparing fst)
  $ map unEntry
  $ S.toList set
But this is fairly trivial in most cases and you can add functionality as you need it. The only thing you'll really need to worry about is what to do on deletions. Do you re-index everything, or are you just concerned about order? If you're just concerned about order, then it's simple (and size can be left sub-optimal by actually calculating the size of the underlying Set), but if you re-index then you can get the size in O(1) time. These decisions should be made based on the problem you're trying to solve.
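For instance, a minimal delete that preserves ordering but does not re-index could look like this (my own sketch, relying on the index-ignoring Eq/Ord instances above):

delete :: Ord a => a -> IndexedSet a -> IndexedSet a
delete a (IndexedSet (set, i)) = IndexedSet (S.delete (Entry (0, a)) set, i)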
I would prefer to not reimplement it if it's already a solved problem.
This approach is definitely a re-implementation. But it isn't complicated in most cases, could be pretty easily turned into a nice little library to upload to Hackage, and retains a lot of the benefits of sets without much bookkeeping.
