Let's take Haskell as an example, as it gets the closest to what I'm about to describe of the languages I know.
A type, Int for example, can be viewed as the set of all possible values (of that type).
Why is it that we only get to work with very specific sets (Int, Double, etc.) and not with all their subsets in the type system?
I would love a language where we can define arbitrary types like Int greater than 5. Are there examples of such languages? If not, why not?
You are looking for Dependent types. Idris, Agda and Coq are well known in this category.
You can actually mostly define that in Haskell because it's basically an Int plus some semantics. For example, you have to decide what you're going to do with subtractions that go beneath the threshold, like what (-) 6 7 gives. (A common convention with Peano arithmetic is to give 0 -- so in this case it would return 6, the least value of the system.) You also need to choose whether you're going to error on a fromInteger 3 or else whether you're going to store, say, newtype IntGT5 = IntGT5 (Maybe Int) instead of newtype IntGT5 = IntGT5 Int. You have to write all of the typeclass instances yourself, and once you've done that you'll have an appropriate type.
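A minimal sketch of that plan (the names mkIntGT5 and subGT5 are illustrative, and clamping is just one of the conventions discussed above):
newtype IntGT5 = IntGT5 Int deriving (Eq, Ord, Show)

-- The smart constructor is the only safe way to build a value.
mkIntGT5 :: Int -> Maybe IntGT5
mkIntGT5 n
  | n > 5     = Just (IntGT5 n)
  | otherwise = Nothing

-- Subtraction under the clamping convention: anything that would fall
-- below the threshold is pinned to 6, the least value of the type, so
-- subGT5 (IntGT5 6) (IntGT5 7) gives IntGT5 6.
subGT5 :: IntGT5 -> IntGT5 -> IntGT5
subGT5 (IntGT5 a) (IntGT5 b) = IntGT5 (max 6 (a - b))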
If you've got an abiding interest in this problem, two things to pay attention to are liquid types and subtyping. Let me tell you a little about the latter.
Alan Kay invented OOP to be something different than what it is (he wanted every program to be written as a network of communicating computers), but what it turned out to be in the modern world is basically a way to do complex subtyping. "Duck typing", for example, is about creating an "intersection type" of a bunch of really general types (like "things with a waddle method" and "things with a quack method") which other types are subtypes of. So subtyping is a very natural fit for OOP.
The thesis I've linked you to points out another interesting thing about subtyping: you can use subtypes to erase the distinction between type and value. Of course, making types into values does not need subtyping; it is e.g. already the case in the Python language, where you can call type(x) to get an object which represents the type of the object. But the interesting thing is that as subtypes go, you can just define a type 3 :: Int (3 of kind Int) which is the type of all Ints which are equal to 3. It is in essence a unit/void type which is a subtype of a bigger class. By blurring the distinction between 3 of kind Int and 3 of type Int you get every value being a type as well. You could then do something JSON-like with this approach, where {x: Int, y: 3 :: Int} is a type of objects containing two properties x and y where x is any Int and y is the integer 3 :: Int.
The Ada language supports ranged numeric types. I can't really comment intelligently on this language, but I've been led to understand by folks who know it that the range checks are enforced by runtime checks inserted by the compiler.
Relational database theory also has concepts that tie to this. The set of values that an attribute can take is its domain, which is a function of its data type and the constraints on it. Actual RDBMSs often do not implement these concepts in their full generality, though.
Dependent types are a much more general system that can encode this sort of constraint, and many others. My capsule explanation of dependent types is this: they are a way of designing a programming language that contains proofs as first-class values. A proof is a value that, by virtue of its existence and the rules that it obeys, guarantees the truth of a proposition (a.k.a. "sentence," "statement," etc.).
So under dependent types, you can have a type x > 5 whose values are proofs that x is larger than five. If you have an actual value of this type, then it really is true that x is larger than five, and thus the code where this proof is in scope may safely proceed under that assumption. You can package up the number and the proof together using a dependent sum type that we can notate as ∃x : Int. x > 5, whose values are pairs such that:
The first element is an integer x
The second element is a proof of x > 5
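Here is a minimal sketch of how such a proof object and pair can be encoded in Haskell (assuming the DataKinds, GADTs, and KindSignatures extensions; Nat, LT, SNat, and GT5 are illustrative names, not a standard library):
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

-- A value of type LT m n is a proof that m < n.
data LT (m :: Nat) (n :: Nat) where
  LTZ :: LT 'Z ('S n)               -- zero is less than any successor
  LTS :: LT m n -> LT ('S m) ('S n) -- if m < n, then m+1 < n+1

-- A singleton: the unique runtime witness of a type-level Nat.
data SNat (n :: Nat) where
  SZ :: SNat 'Z
  SS :: SNat n -> SNat ('S n)

type Five = 'S ('S ('S ('S ('S 'Z))))

-- The dependent sum ∃x. x > 5: a number packaged with a proof that 5 < x.
data GT5 where
  MkGT5 :: SNat n -> LT Five n -> GT5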
Related
Many articles about Haskell say that it allows some checks to be made at compile time instead of run time. So, I want to implement the simplest check possible - allow a function to be called only on integers greater than zero. How can I do it?
module Positive (toPositive, getPositive, Positive) where
newtype Positive = Positive { unPositive :: Int }
toPositive :: Int -> Maybe Positive
toPositive n = if (n <= 0) then Nothing else Just (Positive n)
-- We can't export unPositive, because unPositive can be used
-- to update the field. Trivially renaming it to getPositive
-- ensures that getPositive can only be used to access the field
getPositive :: Positive -> Int
getPositive = unPositive
The above module doesn't export the constructor, so the only way to build a value of type Positive is to supply toPositive with a positive integer, which you can then unwrap using getPositive to access the actual value.
You can then write a function that only accepts positive integers using:
positiveInputsOnly :: Positive -> ...
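A hypothetical caller then validates exactly once, at the boundary (describe is an illustrative name):
-- Validate once at the boundary, then use the value freely.
describe :: Int -> String
describe n = case toPositive n of
  Nothing -> "not a positive number"
  Just p  -> "positive: " ++ show (getPositive p)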
Haskell can perform some checks at compile time that other languages perform at runtime. Your question seems to imply you are hoping for arbitrary checks to be lifted to compile time, which isn't possible without a large potential for proof obligations (which could mean you, the programmer, would need to prove the property is true for all uses).
In the below, I don't feel like I'm saying anything more than what pigworker touched on while mentioning the very cool sounding Inch tool. Hopefully the additional words on each topic will clarify some of the solution space for you.
What People Mean (when speaking of Haskell's static guarantees)
Typically when I hear people talk about the static guarantees provided by Haskell, they are talking about Hindley-Milner style static type checking. This means one type cannot be confused with another - any such misuse is caught at compile time (e.g., let x = "5" in x + 1 is invalid). Obviously, this only scratches the surface, and we can discuss some more aspects of static checking in Haskell.
Smart Constructors: Check once at runtime, ensure safety via types
Gabriel's solution is to have a type, Positive, that can only be positive. Building positive values still requires a check at runtime but once you have a positive there are no checks required by consuming functions - the static (compile time) type checking can be leveraged from here.
This is a good solution for many, many problems. I recommended the same thing when discussing golden numbers. Nevertheless, I don't think this is what you are fishing for.
Exact Representations
dflemstr commented that you can use a type, Word, which is unable to represent negative numbers (a slightly different issue than representing positives). In this manner you really don't need to use a guarded constructor (as above) because there is no inhabitant of the type that violates your invariant.
A more common example of using proper representations is non-empty lists. If you want a type that can never be empty then you could just make a non-empty list type:
data NonEmptyList a = Single a | Cons a (NonEmptyList a)
This is in contrast to the traditional list definition using Nil instead of Single a.
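The payoff is that consuming functions become total, with no runtime checks; for example, a head function (neHead is an illustrative name):
-- Total by construction: there is no empty case to fail on.
neHead :: NonEmptyList a -> a
neHead (Single a) = a
neHead (Cons a _) = a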
Going back to the positive example, you could use a form of Peano numbers (the base case is One, not Zero, so the type contains exactly the positives):
data Positive = One | S Positive
Or use GADTs to build unsigned binary numbers (and you can add Num and other instances, allowing functions like +):
{-# LANGUAGE GADTs #-}

data Zero
data NonZero

data Binary a where
  I :: Binary a -> Binary NonZero   -- a leading 1 bit: the number is certainly nonzero
  O :: Binary a -> Binary a         -- a leading 0 bit: zeroness is unchanged
  Z :: Binary Zero                  -- the terminal digit 0
  N :: Binary NonZero               -- the terminal digit 1

instance Show (Binary a) where
  show (I x) = "1" ++ show x
  show (O x) = "0" ++ show x
  show Z     = "0"
  show N     = "1"
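As a hypothetical consumer (toInt is an illustrative name; it assumes the most significant bit is the outermost constructor, matching the Show instance above):
toInt :: Binary a -> Int
toInt = go 0
  where
    -- fold over the bits, most significant first
    go :: Int -> Binary b -> Int
    go acc (I x) = go (acc * 2 + 1) x
    go acc (O x) = go (acc * 2) x
    go acc Z     = acc * 2
    go acc N     = acc * 2 + 1

-- e.g. toInt (I (O N)) == 5; and since every Binary NonZero contains
-- at least one 1 bit, the index really does rule out zero.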
External Proofs
While not part of the Haskell universe, it is possible to generate Haskell using alternate systems (such as Coq) that allow richer properties to be stated and proven. In this manner the Haskell code can simply omit checks like x > 0 but the fact that x will always be greater than 0 will be a static guarantee (again: the safety is not due to Haskell).
From what pigworker said, I would classify Inch in this category. Haskell has not grown sufficiently to perform your desired tasks, but tools to generate Haskell (in this case, very thin layers over Haskell) continue to make progress.
Research on More Descriptive Static Properties
The research community that works with Haskell is wonderful. While too immature for general use, people have developed tools to do things like statically check function partiality and contracts. If you look around you'll find it's a rich field.
I would be failing in my duty as his supervisor if I failed to plug Adam Gundry's Inch preprocessor, which manages integer constraints for Haskell.
Smart constructors and abstraction barriers are all very well, but they push too much testing to run time and don't allow for the possibility that you might actually know what you're doing in a way that checks out statically, with no need for Maybe padding. (A pedant writes. The author of another answer appears to suggest that 0 is positive, which some might consider contentious. Of course, the truth is that we have uses for a variety of lower bounds, 0 and 1 both occurring often. We also have some use for upper bounds.)
In the tradition of Xi's DML, Adam's preprocessor adds an extra layer of precision on top of what Haskell natively offers but the resulting code erases to Haskell as is. It would be great if what he's done could be better integrated with GHC, in coordination with the work on type level natural numbers that Iavor Diatchki has been doing. We're keen to figure out what's possible.
To return to the general point, Haskell is currently not sufficiently dependently typed to allow the construction of subtypes by comprehension (e.g., elements of Integer greater than 0), but you can often refactor the types to a more indexed version which admits static constraint. Currently, the singleton type construction is the cleanest of the available unpleasant ways to achieve this. You'd need a kind of "static" integers, then inhabitants of kind Integer -> * capture properties of particular integers such as "having a dynamic representation" (that's the singleton construction, giving each static thing a unique dynamic counterpart) but also more specific things like "being positive".
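For a flavour of the indexed refactoring, here is a minimal sketch (assuming DataKinds, GADTs, and KindSignatures; Vec and vhead are illustrative names) in which the comprehension "non-empty lists" becomes an index on the type instead:
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

-- A list indexed by its length.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- "Non-empty" is now a type, so this is total with no runtime check:
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x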
Inch represents an imagining of what it would be like if you didn't need to bother with the singleton construction in order to work with some reasonably well behaved subsets of the integers. Dependently typed programming is often possible in Haskell, but is currently more complicated than necessary. The appropriate sentiment toward this situation is embarrassment, and I for one feel it most keenly.
I know that this was answered a long time ago and I already provided an answer of my own, but I wanted to draw attention to a new solution that became available in the interim: Liquid Haskell, which you can read an introduction to here.
In this case, you can specify that a given value must be positive by writing:
{-@ myValue :: {v: Int | v > 0} @-}
myValue = 5
Similarly, you can specify that a function f requires only positive arguments like this:
{-@ f :: {v: Int | v > 0} -> Int @-}
Liquid Haskell will verify at compile-time that the given constraints are satisfied.
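Putting the pieces together, a hypothetical annotated module looks like this (the names are illustrative; the refinement annotation sits alongside the ordinary Haskell signature):
{-@ f :: {v: Int | v > 0} -> Int @-}
f :: Int -> Int
f x = x

good :: Int
good = f 3        -- accepted: 3 > 0 is provable

-- bad :: Int
-- bad = f 0      -- this call would be rejected by the Liquid Haskell checker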
This (or actually, the similar desire for a type of natural numbers, including 0) is a common complaint about Haskell's numeric class hierarchy, which makes it impossible to provide a really clean solution to this.
Why? Look at the definition of Num:
class (Eq a, Show a) => Num a where
  (+) :: a -> a -> a
  (*) :: a -> a -> a
  (-) :: a -> a -> a
  negate :: a -> a
  abs :: a -> a
  signum :: a -> a
  fromInteger :: Integer -> a
Unless you revert to using error (which is a bad practice), there is no way you can provide definitions for (-), negate and fromInteger.
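To make the obstruction concrete, here is a sketch of an attempted instance for a hypothetical Nat wrapper (the name and error messages are illustrative); the three flagged methods are forced to call error:
newtype Nat = Nat Integer deriving (Eq, Show)

instance Num Nat where
  Nat a + Nat b  = Nat (a + b)
  Nat a * Nat b  = Nat (a * b)
  abs            = id
  signum (Nat a) = Nat (signum a)
  Nat a - Nat b                        -- no total definition is possible
    | a >= b     = Nat (a - b)
    | otherwise  = error "Nat: negative difference"
  negate = error "Nat: negate"         -- likewise impossible
  fromInteger n                        -- negative literals must fail
    | n >= 0     = Nat n
    | otherwise  = error "Nat: negative literal"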
Type-level natural numbers are planned for GHC 7.6.1: https://ghc.haskell.org/trac/ghc/ticket/4385
Using this feature it's trivial to write a "natural number" type, and it gives performance you could never achieve with, e.g., a manually written Peano number type.
I'm new to F# and Haskell and am implementing a project in order to determine which language I would prefer to devote more time to.
I have a numerous situations where I expect a given numerical type to have given dimensions based on parameters given to a top-level function (ie, at runtime). For example, in this F# snippet, I have
type DataStreamItem = LinearAlgebra.Vector<float32>
type Ball =
    {R : float32;
     X : DataStreamItem}
and I expect all instances of type DataStreamItem to have D dimensions.
My question is in the interests of algorithm development and debugging, since such shape-mismatch bugs can be a headache to pin down but should be a non-issue when the algorithm is up and running:
Is there a way, in either F# or Haskell, to constrain DataStreamItem and / or Ball to have dimensions of D? Or do I need to resort to pattern matching on every calculation?
If the latter is the case, are there any good, light-weight paradigms to catch such constraint violations as soon as they occur (and that can be removed when performance is critical)?
Edit:
To clarify the sense in which D is constrained:
D is defined such that if you expressed the algorithm of the function main(DataStream) as a computation graph, all of the intermediate calculations would depend on the dimension of D for the execution of main(DataStream). The simplest example I can think of would be a dot-product of M with DataStreamItem: the dimension of DataStream would determine the creation of dimension parameters of M
Another Edit:
A week later, I found the following blog post outlining precisely what I was looking for in dependent types in Haskell:
https://blog.jle.im/entry/practical-dependent-types-in-haskell-1.html
And Another:
This Reddit thread contains some discussion of dependent types in Haskell and links to the quite interesting dissertation proposal of R. Eisenberg.
Neither Haskell's nor F#'s type system is rich enough to (directly) express statements of the sort "N nested instances of a recursive type T, where N is between 2 and 6" or "a string of characters exactly 6 long". Not in those exact terms, at least.
I mean, sure, you can always express such a 6-long string type as type String6 = String6 of char*char*char*char*char*char or some variant of the sort (which technically should be enough for your particular example with vectors, unless you're not telling us the whole example), but you can't say something like type String6 = s:string{s.Length=6} and, more importantly, you can't define functions of the form concat: String<n> -> String<m> -> String<n+m>, where n and m represent string lengths.
But you're not the first person asking this question. This research direction does exist, and is called "dependent types", and I can express the gist of it most generally as "having higher-order, more powerful operations on types" (as opposed to just union and intersection, as we have in ML languages) - notice how in the example above I parametrize the type String with a number, not another type, and then do arithmetic on that number.
The most prominent language prototypes (that I know of) in this direction are Agda, Idris, F*, and Coq (not really the full deal AFAIK). Check them out, but beware: this is kind of the edge of tomorrow, and I wouldn't advise starting a big project based on those languages.
(edit: apparently you can do certain tricks in Haskell to simulate dependent types, but it's not very convenient, and you have to enable UndecidableInstances)
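For instance, the concat example above can be approximated with GHC's type-level naturals; this is only a sketch, with StringN and concatN as illustrative names:
{-# LANGUAGE DataKinds, KindSignatures, TypeOperators #-}

import GHC.TypeLits

-- A string tagged with its length at the type level.
newtype StringN (n :: Nat) = StringN String

-- The lengths really do add in the result type.
concatN :: StringN n -> StringN m -> StringN (n + m)
concatN (StringN a) (StringN b) = StringN (a ++ b)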
Alternatively, you could go with a weaker solution of doing the checks at runtime. The general gist is: wrap your vector types in a plain wrapper, don't allow direct construction of it, but provide constructor functions instead, and make those constructor functions ensure the desired property (i.e. length). Something like:
type Stream4 = private Stream4 of DataStreamItem
    with
    static member create (item: DataStreamItem) =
        if item.Length = 4 then Some (Stream4 item)
        else None
        // Alternatively:
        // if item.Length <> 4 then failwith "Expected a 4-long vector."
        // Stream4 item
Here is a fuller explanation of the approach from Scott Wlaschin: constrained strings.
So if I understood correctly, you're actually not doing any type-level arithmetic, you just have a “length tag” that's shared in a chain of function calls.
This has long been possible to do in Haskell; one way that I consider quite elegant is to annotate your arrays with a standard fixed-length type of the desired length:
import qualified Data.Vector.Unboxed as VU

newtype FixVect v s = FixVect { getFixVect :: VU.Vector s }
To ensure the correct length, you only provide (polymorphic) smart constructors that construct from the fixed-length type – perfectly safe, though the actual dimension number is nowhere mentioned!
class VectorSpace v => FiniteDimensional v where
  asFixVect :: v -> FixVect v (Scalar v)

instance FiniteDimensional Float where
  asFixVect s = FixVect $ VU.singleton s

instance (FiniteDimensional a, FiniteDimensional b, Scalar a ~ Scalar b) => FiniteDimensional (a,b) where
  asFixVect (a,b) = case (asFixVect a, asFixVect b) of
    (FixVect av, FixVect bv) -> FixVect $ av <> bv
This construction from unboxed tuples is really inefficient; however, this doesn't mean you can't write efficient programs with this paradigm - if the dimension always stays constant, you only need to wrap and unwrap once, and can do all the critical operations through safe yet runtime-unchecked zips, folds and LA combinations.
Regardless, this approach isn't really widely used. Perhaps the single constant dimension is in fact too limiting for most relevant operations, and if you need to unwrap to tuples often it's way too inefficient. Another approach that is taking off these days is to actually tag the vectors with type-level numbers. Such numbers have become available in a usable form with the introduction of data kinds in GHC-7.4. Up until now, they're still rather unwieldy and not fit for proper arithmetic, but the upcoming 8.0 will greatly improve many aspects of this dependently-typed programming in Haskell.
A library that offers efficient length-indexed arrays is linear.
In short, I am looking for a theorem prover whose underlying logic supports a multiple subtyping / subclassing mechanism. (I tried to use Isabelle, but it does not seem to provide first-class support for subtyping; see this.)
I would like to define a couple of types among which some are subclasses / subtypes of others. Furthermore, each type might be subtype of more than one type. For example:
Type A
Type B
Type C
Type E
Type F
C is subtype of A
C is also subtype of B
E and F are subtypes of B
PS:
I am editing this question again to be more specific (because of complaints about it being off-topic!): I am looking for a theorem prover / proof assistant in which I can define the above structure in a straightforward manner (not with workarounds, as kindly described in some respectable answers here). If I take the types as classes, then it seems the above subtypings could easily be formulated in C++! So I am looking for a formal system / tool in which I can define such a subtyping structure and reason about it.
Many thanks
PVS has traditionally emphasized "predicate subtyping" a lot, but the system is a bit old-fashioned these days and has fallen behind the other big players that are more active: Coq, Isabelle/HOL, Agda, other HOLs, ACL2.
You did not make your application clear. I reckon that any of the big systems could be applied to the problem, one way or the other. Formalization is a matter of phrasing your problem in a suitable way within the given logical environment. Logics are not programming languages, but they have the real power of mathematics. Thus, with some experience in a particular logic, you will be able to do great and amazing things that you did not expect at first sight.
When choosing your system, lists of particular low-level features are not so relevant. It is more important that you like the general style and culture of the system before you make a commitment. You can compare it to learning a foreign language. Before you spend months or years studying it, do you collect features of its grammar? I don't think so.
You included the 'isabelle' tag, and according to the Wikipedia article on subtyping, Isabelle provides one form of subtyping, "coercive subtyping", though as explained by Andreas Lochbihler, Isabelle doesn't really have subtyping like what you're wanting (and others want, too).
However, you're talking in vague generalities, so I can easily provide a contrived example that meets the requirements of your 5 types. And though it is contrived, it's not meaningless, as I explain below.
(*The 5 types.*)
datatype tA = tA_con int rat real
type_synonym tB = "(nat * int)"
type_synonym tC = int
type_synonym tE = rat
type_synonym tF = real
(*The small amount of code required to automatically coerce from tC to tB.*)
fun coerce_C_to_B :: "tC => tB" where "coerce_C_to_B i = (0, i)"
declare [[coercion coerce_C_to_B]]
(*I can use type tC anywhere I can use type tB.*)
term "(2::tC)::tB"
value "(2::tC)::tB"
In the above example, it can be seen that types tC, tE, and tF lend themselves naturally, and easily, to being coerced to types tA or tB.
This coercion of types is done quite a bit in Isabelle. For example, the type nat is used to define int, and int is used to define rat. Consequently, nat is automatically coerced to int, though int isn't to rat.
Wrap I (you haven't been using canonical HOL):
In your previous question examples, you've been using typedecl to introduce new types, and that doesn't generally reflect how people define new types.
Types defined with typedecl are nearly always foundational and axiomatized, such as with ind, in Nat.thy.
See here: isabelle.in.tum.de/repos/isabelle/file/8f4a332500e4/src/HOL/Nat.thy#l21
The keyword datatype_new is one of the primary, automagical ways to define new types in Isabelle/HOL.
Part of the power of datatype_new (and datatype) is its use to define recursive types, and its use with fun, for example with pattern matching.
In comparison to other proof assistants, I assume that the new abilities of datatype_new are not trivial. For example, a distinguishing feature between types and ZFC sets has been that ZFC sets can be nested arbitrarily deep. Now, with datatype_new, a type of countable or finite sets can be defined that can be nested arbitrarily deep.
You can use standard types, such as tuples, lists, and records to define new types, which can then be used with coercions, as shown in my example above.
Wrap II (but, yes, that would be nice):
I could have continued with the list above, but I separate from that list two other keywords to define new types, typedef and quotient_type.
I separate these two because, now, we enter into the realm of your complaint, that the logic of Isabelle/HOL doesn't make it easy, many times, to define a type/subtype relationship.
Knowing nothing much, I do know now that I should only use typedef as a last resort. It's actually used quite a bit in the HOL sources, but then the developers have to do a lot of work to make a type defined with it easy to use, such as with fset:
http://isabelle.in.tum.de/repos/isabelle/file/8f4a332500e4/src/HOL/Library/FSet.thy
Wrap III (however, none are perfect in this imperfect world):
You listed the 3 proof assistants that probably have the largest market share, Coq, Isabelle, and Agda.
With proof assistants, we define our priorities, do our research, and then pick one; but it's like with programming languages: we're not going to get everything with any of them.
For myself, mathematical syntax and structured proofs are very important. Isabelle seems to be sufficiently powerful, so I choose it. It's not a perfect world, for sure.
Wrap IV (Haskell, Isabelle, and type classes):
Isabelle, in fact, does have a very powerful form of subclassing, "type classes".
Well, it is powerful, but it is also limited in that you can only use one type variable when defining a type class.
If you look at Groups.thy, you'll see the introduction of class after class after class, to create a hierarchy of classes.
isabelle.in.tum.de/repos/isabelle/file/8f4a332500e4/src/HOL/Groups.thy
You also included the 'haskell' tag. The functional programming attributes of Isabelle/HOL, with its datatype and type classes, help tie the use of Isabelle/HOL to the use of Haskell, as demonstrated by the ability of the Isabelle code generator to produce Haskell code.
There are ways to achieve that in Agda.
Group the functions related to one "type" into the fields of a record
Construct instances of such a record for the types you want
Pass that record along into proofs that require them
For example:
record Monoid (A : Set) : Set where
  constructor monoid
  field
    z     : A
    m+    : A -> A -> A
    xz    : (x : A) -> m+ x z == x
    zx    : (x : A) -> m+ z x == x
    assoc : (x : A) -> (y : A) -> (z : A) -> m+ (m+ x y) z == m+ x (m+ y z)

open Monoid public
Now list-is-monoid = monoid Nil (++) lemma-append-nil lemma-nil-append lemma-append-assoc instantiates (proves) that a List is a Monoid (given proofs that Nil is a neutral element, and a proof of associativity).
What it says in the title. If I write a type signature, is it possible to algorithmically generate an expression which has that type signature?
It seems plausible that it might be possible to do this. We already know that if the type is a special-case of a library function's type signature, Hoogle can find that function algorithmically. On the other hand, many simple problems relating to general expressions are actually unsolvable (e.g., it is impossible to know if two functions do the same thing), so it's hardly implausible that this is one of them.
It's probably bad form to ask several questions all at once, but I'd like to know:
Can it be done?
If so, how?
If not, are there any restricted situations where it becomes possible?
It's quite possible for two distinct expressions to have the same type signature. Can you compute all of them? Or even some of them?
Does anybody have working code which does this stuff for real?
Djinn does this for a restricted subset of Haskell types, corresponding to a first-order logic. It can't manage recursive types or types that require recursion to implement, though; so, for instance, it can't write a term of type (a -> a) -> a (the type of fix), which corresponds to the proposition "if a implies a, then a", which is clearly false; you can use it to prove anything. Indeed, this is why fix gives rise to ⊥.
If you do allow fix, then writing a program to give a term of any type is trivial; the program would simply print fix id for every type.
Djinn is mostly a toy, but it can do some fun things, like deriving the correct Monad instances for Reader and Cont given the types of return and (>>=). You can try it out by installing the djinn package, or using lambdabot, which integrates it as the #djinn command.
Oleg at okmij.org has an implementation of this. There is a short introduction here but the literate Haskell source contains the details and the description of the process. (I'm not sure how this corresponds to Djinn in power, but it is another example.)
There are cases where there is no unique function:
fst', snd' :: (a, a) -> a
fst' (a,_) = a
snd' (_,b) = b
Not only this; there are cases where there are an infinite number of functions:
list0, list1, list2 :: [a] -> a
list0 l = l !! 0
list1 l = l !! 1
list2 l = l !! 2
-- etc.
-- Or
mkList0, mkList1, mkList2 :: a -> [a]
mkList0 _ = []
mkList1 a = [a]
mkList2 a = [a,a]
-- etc.
(If you only want total functions, then consider [a] as restricted to infinite lists for list0, list1 etc, i.e. data List a = Cons a (List a))
In fact, if you have recursive types, any types involving these correspond to an infinite number of functions. However, at least in the case above, there is a countable number of functions, so it is possible to create an (infinite) list containing all of them. But, I think the type [a] -> [a] corresponds to an uncountably infinite number of functions (again restrict [a] to infinite lists) so you can't even enumerate them all!
(Summary: there are types that correspond to a finite, countably infinite and uncountably infinite number of functions.)
This is impossible in general (and for languages like Haskell, which does not even have the strong normalization property), and only possible in some (very) special cases (and for more restricted languages), such as when the codomain type has only one constructor (for example, a function f :: forall a. a -> () can be determined uniquely). In order to reduce the set of possible definitions for a given signature to a singleton set with just one definition, you need to give more restrictions (in the form of additional properties; it is still difficult to imagine how this can be helpful without giving an example of use).
From the (n-)categorical point of view, types correspond to objects, terms correspond to arrows (constructors also correspond to arrows), and function definitions correspond to 2-arrows. The question is analogous to asking whether one can construct a 2-category with the required properties by specifying only a set of objects. It's impossible, since you need either an explicit construction for arrows and 2-arrows (i.e., writing terms and definitions), or a deductive system which allows one to deduce the necessary structure using a certain set of properties (which still need to be defined explicitly).
There is also an interesting question: given an ADT (i.e., a subcategory of Hask), is it possible to automatically derive instances for Typeable, Data (yes, using SYB), Traversable, Foldable, Functor, Pointed, Applicative, Monad, etc.? In this case, we have the necessary signatures as well as additional properties (for example, the monad laws, although these properties cannot be expressed in Haskell, but they can be expressed in a language with dependent types). There are some interesting constructions:
http://ulissesaraujo.wordpress.com/2007/12/19/catamorphisms-in-haskell
which shows what can be done for the list ADT.
The question is actually rather deep and I'm not sure of the answer, if you're asking about the full glory of Haskell types including type families, GADT's, etc.
What you're asking is whether a program can automatically prove that an arbitrary type is inhabited (contains a value) by exhibiting such a value. A principle called the Curry-Howard Correspondence says that types can be interpreted as mathematical propositions, and the type is inhabited if the proposition is constructively provable. So you're asking if there is a program that can prove a certain class of propositions to be theorems. In a language like Agda, the type system is powerful enough to express arbitrary mathematical propositions, and proving arbitrary ones is undecidable by Gödel's incompleteness theorem. On the other hand, if you drop down to (say) pure Hindley-Milner, you get a much weaker and (I think) decidable system. With Haskell 98, I'm not sure, because type classes are supposed to be equivalent in power to GADTs.
With GADT's, I don't know if it's decidable or not, though maybe some more knowledgeable folks here would know right away. For example it might be possible to encode the halting problem for a given Turing machine as a GADT, so there is a value of that type iff the machine halts. In that case, inhabitability is clearly undecidable. But, maybe such an encoding isn't quite possible, even with type families. I'm not currently fluent enough in this subject for it to be obvious to me either way, though as I said, maybe someone else here knows the answer.
(Update:) Oh a much simpler interpretation of your question occurs to me: you may be asking if every Haskell type is inhabited. The answer is obviously not. Consider the polymorphic type
a -> b
There is no function with that signature (not counting something like unsafeCoerce, which makes the type system inconsistent).
I am familiar with imperative languages and their features, so I wonder: how can I convert one data type to another?
ex:
in C++:
static_cast<data_type>(value)
in C:
(data_type) value
Use an explicit coercion function.
For example, fromIntegral will convert any Integral type (Int, Integer, Word*, etc.) into any numeric type.
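A common concrete case (average is just an illustrative name):
-- length returns an Int, which must be converted before it can be
-- mixed with Double arithmetic.
average :: [Double] -> Double
average xs = sum xs / fromIntegral (length xs)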
You can use Hoogle to find the actual function that suits your need by its type.
Haskell's type system is very different and quite a bit smarter than C's, C++'s, or Java's. To understand why you are not going to get the answer you expect, it will help to compare the two.
For C and friends, the type is a way of describing the layout of data in memory. The compiler makes a few sanity checks, trying to ensure that memory is not corrupted, but in the end it's all bytes and you can call them whatever you want. This is even more true of pointers, which are always laid out the same in memory but can reference anything or (frighteningly) nothing.
In Haskell, types are a language that you use to communicate with the compiler. As a programmer, you have no control over how the compiler represents data, and because Haskell is lazy, a lot of data in your program may be no more than a promise to produce a value on demand (called a thunk in GHC and Hugs). While a C compiler can be instructed to treat data differently, there is no equivalent way to tell a Haskell compiler to treat one type like another in general.
As mentioned in other answers, there are some types where there are obvious ways to convert one type to another. Any of the numerical types like Double or Rational (in general, any instance of the Num class) can be made from an Integer, but we need to use a function specifically designed to make this happen. In a sense this is not a 'cast' but an actual function, in the same way that \x -> x > 0 is a function for converting numbers into booleans.
I'll make one last guess as to why you might be asking a question like this. When I was just starting with Haskell, I wrote a lot of functions like:
area :: Double -> Double -> Double -- find the area of a rectangle
area x y = x * y
I would then find myself dumping fromIntegral calls all over the place to get my data in the correct type for the function. Having come from a C background, I wrote all my functions with monomorphic types. The trick to not needing to cast from one type to another is to write functions that work with different types. Haskell type classes are a huge shift for OOP programmers, so they often get ignored the first couple of tries, but they are what make the otherwise very strict Haskell type system usable. If you can relax your type signatures (e.g. area :: Num a => a -> a -> a) you will find yourself wishing for that cast function much less often.
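For instance, the relaxed version from that parenthetical needs no conversions at its call sites:
-- Works for Int, Integer, Double, Rational, ... with no casting.
area :: Num a => a -> a -> a
area x y = x * y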
There are many different functions that convert different data types.
The examples would be:
fromIntegral - to convert between say Int and Double
pack / unpack - to convert between ByteString and String
read - to convert from String to Int