The Data.Word module provides types like Word8, Word16, etc.
Is there a way to implement my own Word type, such as Word4 (efficiently)?
The SBV package has an example of this: Data.SBV.Examples.Misc.Word4.
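SBV aside, a minimal sketch of a Word4 can be built by wrapping Word8 and masking to the low nibble; the names Word4 and toWord4 here are illustrative, not from any library:

```haskell
import Data.Bits ((.&.))
import Data.Word (Word8)

-- A 4-bit word stored in a Word8, keeping only the low nibble.
newtype Word4 = Word4 Word8
  deriving (Eq, Ord, Show)

-- Smart constructor: mask to 4 bits so values stay in 0..15.
toWord4 :: Word8 -> Word4
toWord4 w = Word4 (w .&. 0x0F)

-- Arithmetic wraps modulo 16 by re-masking after each operation.
instance Num Word4 where
  Word4 a + Word4 b = toWord4 (a + b)
  Word4 a * Word4 b = toWord4 (a * b)
  Word4 a - Word4 b = toWord4 (a - b)
  abs = id
  signum (Word4 0) = Word4 0
  signum _         = Word4 1
  fromInteger = toWord4 . fromInteger

main :: IO ()
main = print (toWord4 9 + toWord4 9)  -- 18 mod 16 = 2
```

Storage is still one byte, so this is about as efficient as Haskell allows without bit-packing several Word4s together.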
I am working on writing a prototype programming language in Haskell with polymorphic variants using the singletons library. I have a basic type of types that looks like this:
import Data.Singletons.TH
import Data.Singletons
import GHC.Natural
import Data.Singletons.TypeLits
$(singletons [d|
  data MyType
    = PredT
    | ProcT [MyType]
    | IntT
    | FloatT
    | StringT
    | FuncT MyType MyType
    | VariantT Natural [MyType]
    | UnionT [MyType]
  |])
The Natural argument in VariantT is used to identify a particular variant, and it is important for this to actually be Natural (and not something like a Nat defined as an algebraic data type) for efficiency reasons.
The issue is, with this definition, I get:
Couldn't match expected type ‘Natural’
with actual type ‘Demote Natural’
In my experience with the singletons library, I usually get errors like this (to my understanding, anyway) when trying to use a type as a singleton for which SingKind is not supported, e.g. Char, so I am at a loss as to why this is not working.
I have tried Demote Natural, Nat, as well as different imports (thinking maybe I'm not using the right "Nat" or "Natural" that singletons works with), all giving me similar errors. What's the issue here? Do I have to manually write the definitions that singletons generates for types where Demote a != a, or is there something I'm missing?
Apparently this is a still unfixed issue. If I understand correctly, currently the singletons TH script works by reusing the same type as both the promoted and demoted type, but Nat completely breaks this model. The long-term solution is to wait for GHC to merge Nat and Natural. Meanwhile, you will have to manually duplicate or generalize your type, or roll your own Nat.
https://github.com/goldfirere/singletons/issues/478
As a short-term fix, it seems possible to extend the singletons TH script to do something like that automatically. That would be a nice contribution for people who use singletons extensively.
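For illustration, here is a hand-rolled Nat with a manually written singleton, one of the workarounds mentioned above. It sidesteps the Demote mismatch because the promoted and demoted types coincide, at the efficiency cost the question notes; all names here are illustrative:

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}

-- A unary Nat: the same type serves as both the term-level
-- (demoted) and type-level (promoted) representation.
data Nat = Z | S Nat deriving Show

-- A hand-written singleton for Nat, indexed by the promoted value.
data SNat (n :: Nat) where
  SZ :: SNat 'Z
  SS :: SNat n -> SNat ('S n)

-- Demote a singleton back to the term level.
fromSNat :: SNat n -> Nat
fromSNat SZ     = Z
fromSNat (SS n) = S (fromSNat n)

main :: IO ()
main = print (fromSNat (SS (SS SZ)))
```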
Haskell provides a standard typeclass 'Alternative' that effectively provides the <|> operator for any type that is also an Applicative.
As I understand it, Alternative is considered a monoid on Applicatives. However, the <|> operator seems to make complete sense for many types that aren't applicative functors as well, and it doesn't need any specific dependency on the Applicative typeclass to work properly.
Is there a reason why Alternative needs to be a subclass of Applicative, and if so is there a standard typeclass to define similar functionality on non-applicative types?
I think Alt from the semigroupoids package comes closest to being a 'standard' typeclass. https://hackage.haskell.org/package/semigroupoids-5.0.0.1/docs/Data-Functor-Alt.html#t:Alt
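To illustrate the idea, here is a minimal Alt-style class requiring only Functor, roughly mirroring the shape of Data.Functor.Alt (whose operator is <!>); this is a sketch, not the semigroupoids implementation:

```haskell
-- A choice operator that only demands Functor, not Applicative.
class Functor f => Alt f where
  (<!>) :: f a -> f a -> f a

-- Left-biased choice, like <|> for Maybe.
instance Alt Maybe where
  Nothing <!> y = y
  x       <!> _ = x

-- Concatenation, like <|> for lists.
instance Alt [] where
  (<!>) = (++)

main :: IO ()
main = do
  print (Nothing <!> Just 3)
  print ([1, 2] <!> [3])
```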
Almost all of the examples I've seen of the State monad have been wrapped inside a newtype.
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import Control.Monad.State
import Control.Applicative

data Bazzar = Bazzar
  { valueOne :: Int
  , valueTwo :: Int
  }

newtype BazState a = BazState { unBazify :: State Bazzar a }
  deriving (Functor, Applicative, Monad, MonadState Bazzar)
Are there any reasons why I shouldn't just make a type alias?
type BazState a = State Bazzar a
I realize the purpose of newtype is to differentiate between two different uses of the same underlying data structure: reimplementing type classes for existing types, differentiating your use of the type from its normal behavior, or implementing additional typeclasses for that type.
If you're not doing any of the stuff mentioned above, isn't using newtype in this case just needless indirection?
Other than being able to define instances for a newtype, you can use it as a "closed constructor" API for your library. That way you export a single type without any constructors, along with functions that act as primitives and combinators so that users of your library can't construct invalid values of your type. It also means that if you're careful enough, you can change the underlying structure without breaking the outward facing API. A great example of this comes from Neil Mitchell, who said in a recent post about modifying the Shake build system to use the Continuation monad:
The cool thing about Haskell is that I've been able to completely replace the underlying Shake Action monad from StateT/IO, to ReaderT/IO, to ReaderT/ContT/IO, without ever breaking any users of Shake. Haskell allows me to produce effective and flexible abstractions.
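As a sketch of that closed-constructor idea (with hypothetical names), the module below would export BazState, bumpOne, and runBaz but not the BazState constructor, so the underlying monad could later change without breaking callers:

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

-- In a real library the export list would be
-- (BazState, bumpOne, runBaz) -- note: no constructor exported.
module Main (main) where

import Control.Monad.State

data Bazzar = Bazzar { valueOne :: Int, valueTwo :: Int }
  deriving Show

-- The abstract type; its representation is an implementation detail.
newtype BazState a = BazState { unBazify :: State Bazzar a }
  deriving (Functor, Applicative, Monad, MonadState Bazzar)

-- Exported primitive: callers never touch the State internals.
bumpOne :: BazState ()
bumpOne = modify (\b -> b { valueOne = valueOne b + 1 })

-- Exported runner: the only way out of the abstraction.
runBaz :: BazState a -> Bazzar -> Bazzar
runBaz m = execState (unBazify m)

main :: IO ()
main = print (runBaz (bumpOne >> bumpOne) (Bazzar 0 0))
```

With the type alias instead, every user could call State primitives like runState directly, and any later change of representation would break them.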
As Nikita Volkov mentioned in his question Data.Text vs String, I also wondered why I have to deal with the different string implementations type String = [Char] and Data.Text in Haskell. In my code I use the pack and unpack functions really often.
My question: Is there a way to have an automatic conversion between both string types so that I can avoid writing pack and unpack so often?
In other programming languages like Python or JavaScript there is, for example, an automatic conversion between integers and floats when it is needed. Can I achieve something like this in Haskell too? I know that the mentioned languages are weakly typed, but I have heard that C++ has a similar feature.
Note: I already know the language extension {-# LANGUAGE OverloadedStrings #-}. But as I understand it, this language extension applies only to string literals written as "...". I want an automatic conversion for strings that I get from other functions or that I have as arguments in function definitions.
Extended question: the question Haskell. Text or Bytestring also covers the difference between Data.Text and Data.ByteString. Is there a way to have an automatic conversion between the three string types String, Data.Text, and Data.ByteString?
No.
Haskell doesn't have implicit coercions for technical, philosophical, and almost religious reasons.
Converting between these representations isn't free, and most people don't like the idea of hidden and potentially expensive computations lurking around. Additionally, since String is a lazy list, coercing one to a Text value might not terminate.
We can convert literals to Text automatically with OverloadedStrings, which desugars a string literal "foo" to fromString "foo"; fromString for Text just calls pack.
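For example (a small sketch):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Text (Text)
import qualified Data.Text as T

-- Each literal below desugars to fromString "...", which for
-- Text is just T.pack, so no explicit pack calls are needed.
greeting :: Text
greeting = "hello, " <> "world"

main :: IO ()
main = print (T.length greeting)
```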
A better question might be why you're converting so much. Why do you need to unpack Text values so often? If you're constantly changing them to Strings, that defeats the purpose a bit.
Almost Yes: Data.String.Conversions
Haskell libraries make use of different types, so there are many situations in which there is no choice but to heavily use conversion, distasteful as it is - rewriting libraries doesn't count as a real choice.
I see two concrete problems, either of which is potentially a significant obstacle to Haskell adoption:
coding ends up requiring specific implementation knowledge of the libraries you want to use, which is a big issue for a high-level language;
performance on simple tasks is bad, which is a big issue for a generalist language.
Abstracting from the specific types
In my experience, the first problem is the time spent guessing the package name holding the right function for plumbing between libraries that basically operate on the same data.
For that problem there is a really handy solution: the Data.String.Conversions module (from the string-conversions package), provided you are comfortable with UTF-8 as your default encoding.
This package provides a single cs conversion function between a number of different types.
String
Data.ByteString.ByteString
Data.ByteString.Lazy.ByteString
Data.Text.Text
Data.Text.Lazy.Text
So you just import Data.String.Conversions, and use cs which will infer the right version of the conversion function according to input and output types.
Example:
import Data.Aeson (decode)
import Data.Text (Text)
import Data.ByteString.Lazy (ByteString)
import Data.String.Conversions (cs)
decodeTextStoredJson' :: Text -> Maybe MyStructure
decodeTextStoredJson' x = decode (cs x)
NB: In GHCi you generally do not have a context that determines the target type, so you direct the conversion by explicitly stating the type of the result, as with read:
let z = cs x :: ByteString
Performance and the cry for a "true" solution
I am not aware of any true solution as of yet, but we can already guess the direction:
it is legitimate to require conversion because the data does not change ;
best performance is achieved by not converting data from one type to another for administrative purposes ;
coercion is evil - coercitive, even.
So the direction must be to make these types not different, i.e. to reconcile them under (or over) an archetype from which they would all derive, allowing composition of functions using different derivations without the need to convert.
Note: I absolutely cannot evaluate the feasibility or potential drawbacks of this idea. There may be some very sound stoppers.
I have a data type defined in another library. I would like to hook into that datatype with a lens generated by the Control.Lens library.
Do I need to newtype my type in my code or is it considered safe to lens an already defined data type?
You don't need a newtype. There are actually many packages on hackage that define lenses for already existing types (for example, xml-lens or even lens itself).
The problem with defining instances is that there is no way to hide them. If you define lenses, you can just hide them when importing, like any other function:
import Module.Lens hiding (someGeneratedLens, ...)
This is not possible with instances (See https://stackoverflow.com/a/8731340/2494803 for reasons). Lenses are also not required to be globally unique, unlike instances.
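As a sketch of what such a lens looks like without any newtype: here Person stands in for a type from another library, and view/set are minimal stand-ins for the lens-package functions so the example is self-contained:

```haskell
{-# LANGUAGE RankNTypes #-}

import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity (..))

-- A plain van Laarhoven lens; no instances involved, so it can be
-- hidden on import like any other function.
type Lens s a = forall f. Functor f => (a -> f a) -> s -> f s

-- Stand-in for a type defined in another library.
data Person = Person { _name :: String, _age :: Int } deriving Show

-- A hand-written lens for a field of that external type.
age :: Lens Person Int
age f p = fmap (\a -> p { _age = a }) (f (_age p))

-- Minimal view/set so this runs without the lens package.
view :: Lens s a -> s -> a
view l = getConst . l Const

set :: Lens s a -> a -> s -> s
set l a = runIdentity . l (Identity . const a)

main :: IO ()
main = do
  print (view age (Person "Ada" 36))
  print (set age 37 (Person "Ada" 36))
```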