Haskell library like SymPy? [closed]

I need to manipulate expressions like 1 + sqrt(3) and do basic arithmetic like addition, subtraction, and division. I'd like the result to be in some sort of canonical form so that it can be used as a key in a map. Turning 1 + sqrt(3) into a float is not feasible due to roundoff problems.
I used SymPy for this task in Python. Is there an equivalent native library for Haskell?

Please check out the numbers package. If all you need is to store exact numbers like "1 + √3", you may want to use Data.Number.CReal instead of symbolic arithmetic. It stores the expression and can compute it to an arbitrary number of digits when needed.
Prelude Data.Number.CReal> let cx = 1 + sqrt (3 :: CReal)
Prelude Data.Number.CReal> showCReal 400 cx
"2.7320508075688772935274463415058723669428052538103806280558069794519330169088000370811461867572485756756261414154067030299699450949989524788116555120943736485280932319023055820679748201010846749232650153123432669033228866506722546689218379712270471316603678615880190499865373798593894676503475065760507566183481296061009476021871903250831458295239598329977898245082887144638329173472241639845878553977"
There is also a Data.Number.Symbolic module in the package but the description says "It's mainly useful for debugging".

It seems you are looking for a Computer Algebra System (CAS) in Haskell. In spite of the many references to algebraic objects in the names of Haskell packages/modules, I've never heard of a general-purpose, well-maintained CAS in Haskell (like SymPy or Sage in Python).
However, in the list of computer algebra systems on Wikipedia I found a reference to
DoCon. The Algebraic Domain Constructor
It uses a non-standard license, but I dare say it is still open source (though with renaming and attribution requirements). As of July 2010, docon-2.11 still builds with GHC 6.12.1 and runs its demos/tests (I only had to insert a LANGUAGE FlexibleContexts pragma in one file of the demo).
DoCon is well documented (the manual runs to 362 pages). The manual is packed inside the zip with the sources, so I have put it online separately for convenience:
DoCon 2.11 Manual.ps
Please look through it to check whether it suits your needs.

Check out the cyclotomic package, which implements exact arithmetic on the cyclotomic numbers. These include, among others, the square roots of all rational numbers (hence in particular 1 + sqrt(3)), and the key operations (like equality) are decidable.
They do not provide an Ord instance (for the same reason the complex numbers do not), but one can implement a non-semantic instance if all one needs is to use them as keys in a lookup table. You may want to contact the author about how to do this correctly, as there may be some invariants that are not obvious (e.g. one may need to be careful about zeros in the coeffs map).
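For a flavor of what that enables, here is a minimal sketch (assuming the package's Data.Complex.Cyclotomic module and its sqrtInteger function):

import Data.Complex.Cyclotomic (Cyclotomic, sqrtInteger)

-- 1 + sqrt(3), stored exactly rather than as a rounded float
x :: Cyclotomic
x = 1 + sqrtInteger 3

-- equality is decidable, so this is exactly True, with no roundoff:
-- (1 + sqrt 3)^2 == 4 + 2*sqrt 3
check :: Bool
check = x * x == 4 + 2 * sqrtInteger 3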

Related

How do users of a pure functional language (Haskell) design their code? What alternative to UML do they use? [closed]

Given that Haskell doesn't have the notion of objects (even though it has the keyword class), how do Haskell developers design their code? How do they represent interactions/entities? Do they have an alternative to UML, or a class equivalent? (Or do they even care?)
I don't know of any formal(ized) notation for drawing or describing architecture.
Nevertheless, you can structure your application.
The most basic way is using the expressiveness of the type system:
wrapping simple types in newtypes, or using type synonyms, to distinguish between, say, an Int and an Age value, e.g. newtype Age = Age Int
composing more complex types as products of existing ones, e.g. data Person = Person { name :: String, age :: Age }
or by using sum types, e.g. data Employee = Chef Person | Waiter Person
All these data types, which can be made much more elaborate, can be accessed/modified* via record syntax or lenses; a small sketch follows below. I tend to think of the data types as the skeleton of the application I write, and I use the compiler to stay true to the ideas I had when starting, never subverting the types by using unsafeXX functions.
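To make that concrete, here is a minimal sketch of those building blocks together, using record update syntax rather than lenses (all names are illustrative):

newtype Age = Age Int

data Person = Person { name :: String, age :: Age }

data Employee = Chef Person | Waiter Person

-- record update syntax builds a new Person; the old value is untouched
birthday :: Person -> Person
birthday p = p { age = incr (age p) }
  where incr (Age n) = Age (n + 1)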
Lack of OO/Encapsulation
I have never missed it; having functions attached to objects is not necessary (utterly wrong in my opinion, but that is not up for discussion here).
A basic form of encapsulation can be done via newtype and modules with their exports.
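For example, a module can export a type but not its constructor, so values can only be built through a validating "smart constructor" (a hypothetical sketch; all names invented):

module Email (Email, mkEmail, emailText) where

-- the raw constructor MkEmail stays private to this module
newtype Email = MkEmail String

-- the only way to obtain an Email from outside is this checked function
mkEmail :: String -> Maybe Email
mkEmail s
  | '@' `elem` s = Just (MkEmail s)
  | otherwise    = Nothing

emailText :: Email -> String
emailText (MkEmail s) = s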
Polymorphism
A lot can already be done with typeclasses, which, if you have not yet worked with them, are somewhat like interfaces (but with less typing); there is a lot more you can accomplish with advanced features like type families.
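As a minimal sketch of a typeclass playing the role of an interface (reusing the Employee type from above; the class name is invented):

class Describable a where
  describe :: a -> String

instance Describable Employee where
  describe (Chef p)   = "chef "   ++ name p
  describe (Waiter p) = "waiter " ++ name p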
Conclusion
So if I wanted to draw the structure/architecture of my program, I would start with my data types as boxes and use arrows as functions between them, and maybe use "special" boxes for container types.
*: technically one usually doesn't modify anything, but creates new values from old ones, as (almost) everything is immutable.

Why classes implicitly derive from only the Object Class? [closed]

I do not have any argument against the need for a single universal class. However, why not have two universal classes, say an Object and an AntiObject class?
In nature and in science we find the concept of duality: energy and dark energy; male and female; plus and minus; multiplication and division; electrons and protons; integration and differentiation; and likewise in set theory. There are so many examples of dualism that it is a philosophy in itself. In programming itself we see anti-patterns, which help us perform work in contrast to how we use design patterns.
We call it object-oriented programming. Is that a limiting factor, or is there something fundamental I am missing in understanding the formation of programming languages?
Edit:
I am not sure, but the usefulness of this duality concept may lie in creating garbage collectors that create AntiObjects which combine with free or loose Objects to destroy themselves, thereby releasing memory. Or maybe AntiObjects could work alongside Objects to create a self-modifying programming language that lets us write safe self-modifying code, do evolutionary computing using genetic programming, or hide code to prevent reverse engineering.
I've moved this question to the Computer Science site of Stack Exchange, as it is considered off-topic here. Please use that site if you want to comment on or answer this question.
The inheritance hierarchy is commonly (as it is in C#) a tree with a single root, for a number of reasons, which all seem to lead back to one big one:
If there were multiple roots, there wouldn't be a way to specify "any type of object" (aside from something like C++'s void *, which would be hideous as it tosses away any notion of "type").
Even the idea of "any type of object" loses some usefulness, as you can no longer guarantee anything about the objects you'll be accepting. How do you say "all objects have properties a, b, and c" in such a way as to let programs actually use them? You'd need an interface that all of them implement...and then, that interface becomes the root type.
GC'able languages would be useless if they couldn't collect every type of object they manage. Oops, there goes that "any type of object" again!
All-around, it's simpler to have one type be the root of the hierarchy. It lets you make contracts/guarantees/etc that apply to every object in the system, and makes fewer demands on code that wants to be able to deal with objects in a universal manner.
C++ gets away with having multiple root types because (1) C++ allows multiple inheritance, so objects can bridge the gaps between inheritance trees; (2) it has templates (which are far, far more able than generics to take any type of object); (3) it can discard and sidestep any notion of "type" altogether via means like void *; and (4) it doesn't offer to manage and collect your objects for you.
C# didn't want all the complexity of multiple inheritance and templates, and it wanted garbage collection.
In nature, if a family has two children who are totally different and opposite from each other, they still have common parents.
All the examples you have given fall under a common category: male and female come under the Homo sapiens category; plus and minus come under the operator category.
In OOP there are likewise two kinds of types, reference types and value types, but they both still come under object.
What you are suggesting is also fine. But suppose for a second we accept it: there would still have to be a Super_Class containing your Object and AntiObject classes. It has to stop somewhere, and in OOP, Object is the class where it stops.

Seeking programming language which meets these few requirements [closed]

Seeking programming language. Must have the following qualities (in order of ascending length of feature in characters):
Compiled
Namespaces
Garbage collection
Omits OOP features!
Fixed number of types
Available on Mac OS X
First-class functions
Dynamic typing preferred
Closures (lexical scoping)
Can interface with C libraries (ncurses, etc)
Availability on linux a plus but not necessary
--
To give a little more context, I want to be able to use it to write command-line utilities for linux/BSD/Mac, which may or may not use existing C libraries (such as ncurses, etc).
Update for clarification:
Namespaces: I want to avoid having to name my function string_strip when I could create a new namespace called string and define in it a function named strip.
Omits OOP Features: There's definitely a difference between a language having a feature and me not using it, versus the language intentionally omitting it. If I wanted to use Go but without touching anything OOP-related, I couldn't use most of the standard library.
Fixed number of types: Why would a language without OOP give you the option of creating a custom "type"? What does "type" even mean without OOP? It would probably just be used for composition of types, i.e. a Person = struct { Name, Age }, whereas you could do this with a Hash or Map just fine.
Dynamic typing preferred: Type inference is fine, I guess...
I'm not sure what you mean by namespaces, but aren't you describing Scheme?
Well, I'll try to put forth some languages that fit almost every single requirement:
Haskell (which is statically typed)
specifically the GHC distribution - it's compiled (or can emit LLVM code)
it uses modules, which are kind of like namespaces (see the sketch after this list)
it's garbage collected, and it is not an OO language
I don't particularly understand 'fixed number of types', as Haskell gives you types, but you can create more, and Haskell supports algebraic types and pattern matching
it's available on Windows, Mac, and Linux
it has first class functions and closures (functional language after all)
and it can interface with C libraries.
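For example, addressing the namespace requirement above, a Haskell module plays exactly the role the questioner sketched for string.strip (module and function names invented for illustration):

-- file MyString.hs: the module acts as the namespace
module MyString (strip) where

import Data.Char (isSpace)
import Data.List (dropWhileEnd)

strip :: String -> String
strip = dropWhile isSpace . dropWhileEnd isSpace

-- elsewhere, imported qualified so the call reads like a namespaced function:
--   import qualified MyString
--   MyString.strip "  hello  "   ==>  "hello"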
Erlang
it has a bytecode compiler, and if you're on an Intel x86-family CPU, there is a native compiler called HiPE.
Dynamically typed
Not an OO language, it's near-functional
Has 8 primitives and 2 compound types - if you want a collection you're building a list or tuple of them
Is garbage collected
Has (immutable) closures
Has first class functions
Windows, Mac, Linux supported
Has modules which act as namespaces
C bindings - Erlang has port drivers and natively implemented functions (NIFs).
Check out Racket (based on Scheme).
It has an FFI. I've created FFI bindings for SQLite and ODBC with it, and I've found the FFI to be useful and convenient.
"Namespaces" is ambiguous to me. Racket has a module system, and it also has what it calls namespaces, which are first-class top-level environment objects.
It does not have "a fixed number of types". I don't understand that requirement at all.

How is Lexical Scoping implemented? [closed]

A couple of years ago I started writing an interpreter for a little Domain Specific Language which included programmer-defined functions.
At first I implemented variable scope using a simple stack of symbol-tables. But now I want to move to proper lexical scoping (with the option of closures). Can anyone explain the data-structure and algorithm behind lexical scope?
To get correct lexical scoping and closures in an interpreter, all you need to do is follow these rules:
In your interpreter, variables are always looked up in an environment table passed in by the caller or kept as a variable, not some global env-stack. The signature of your eval operation is like eval(expression, env) => value.
When interpreted code calls a function, the environment is NOT passed to that function. The signature of your function application operation is like apply(function, arguments) => value.
When an interpreted function is called, the environment its body is evaluated in is the environment in which the function definition was made, and has nothing whatsoever to do with the caller. So if you have a local function, then it is a closure, that is, a data structure containing fields {function definition, env-at-definition-time}.
To expand on that last bit in Python-ish syntax:
def f():
    x = 1
    return lambda y: x + y
should be executed as if it were
def f():
    x = 1
    return make_closure(<AST for "lambda y: x + y">, {"x": x})
where the second dict argument may simply be the current env rather than a data structure constructed at that moment. (On the other hand, retaining the entire env rather than just the closed-over variables can mean programs have surprising memory leaks, because closures hold onto things they don't need. This is worth fixing in any 'practical' language implementation, but not when you are just experimenting with language semantics.)
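Translated into a minimal Haskell sketch (every type and name here is invented for illustration, not taken from any library), the three rules look like this:

import Data.Map (Map)
import qualified Data.Map as Map

data Expr
  = Var String
  | Lit Int
  | Add Expr Expr
  | Lam String Expr          -- \param -> body
  | App Expr Expr

type Env = Map String Value

data Value
  = IntV Int
  | Closure String Expr Env  -- {function definition, env-at-definition-time}

-- rule 1: eval always takes the environment explicitly
eval :: Expr -> Env -> Value
eval (Var x)   env = maybe (error ("unbound: " ++ x)) id (Map.lookup x env)
eval (Lit n)   _   = IntV n
eval (Add a b) env = case (eval a env, eval b env) of
                       (IntV x, IntV y) -> IntV (x + y)
                       _                -> error "Add expects integers"
-- rule 3: a lambda captures the environment where it is DEFINED ...
eval (Lam x b) env = Closure x b env
-- rule 2: ... and its body later runs in that captured environment extended
-- with the argument; the caller's environment is not passed to the function
eval (App f a) env =
  case eval f env of
    Closure x body defEnv -> eval body (Map.insert x (eval a env) defEnv)
    _                     -> error "cannot apply a non-function"

-- (\x -> \y -> x + y) 1 2  evaluates to  IntV 3
example :: Value
example = eval (App (App (Lam "x" (Lam "y" (Add (Var "x") (Var "y")))) (Lit 1)) (Lit 2)) Map.empty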
There are many different ways to implement lexical scoping. Here are some of my favorites:
If you don't need super-fast performance, use a purely functional data structure to implement your symbol tables, and represent a nested function by a pair containing a pointer to the code and a pointer to the symbol table.
If you need native-code speeds, my favorite technique is described in Making a Fast Curry by Simon Marlow and Simon Peyton Jones.
If you need native-code speeds, but curried functions are not that important, consider closure-passing style.
Read The implementation of Lua 5.0 for instance.
There is no single right way to do this. The important thing is to clearly state the semantics that you are looking to provide, and then the data structures and algorithms will follow.
Stroustrup implemented this in the first C++ compiler simply with one symbol table per scope, and a chaining rule that follows scopes outwards until a definition is found. How this works exactly depends on your precise semantics; make sure you nail those down first.
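That chaining rule is easy to model: keep a stack of per-scope tables and search it innermost-out. A hypothetical sketch (in Haskell for consistency with the rest of this page; names invented):

import Data.Map (Map)
import qualified Data.Map as Map

type Scope      = Map String Int  -- one symbol table per scope
type ScopeChain = [Scope]         -- innermost scope first

-- follow scopes outwards until a definition is found
lookupSym :: String -> ScopeChain -> Maybe Int
lookupSym _  []          = Nothing
lookupSym nm (s : outer) =
  case Map.lookup nm s of
    Just v  -> Just v
    Nothing -> lookupSym nm outer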
Knuth in The Art of Computer Programming, Vol 1, gives an algorithm for a Cobol symbol table whereby scoping is done via links.

Finite questions

Are there a finite number of questions that can be asked about a specific language (and/or topic)? For example, given that T-SQL has only so many commands, can there be only a limited number of non-repetitive questions? If so, could you use that to determine sizing for a site like Stack Overflow, and to determine the probability of a new question being a repeat of a prior one? If there is a finite number, how would you determine/calculate it? For instance, T-SQL has x commands, and each one can have a set of relevant questions (syntax, example of use, etc.), so could the number of questions = x times potential questions times some relevant variation? Or something like that?
No, since, theoretically, programs can be of arbitrary length, and this site is not just about language commands, but about programs developed with those languages.
I'm pretty sure Turing says no, and if you don't believe him then Gödel might have something to say about it.
A Stack Overflow question is expressed as a finite-length sequence of bytes. One could in principle read the question body as an integer, expressed lowest digit first, in base 256 (or larger, if you wish to think of it as Unicode). This is a bijection between questions and whole numbers. Therefore the set of all possible Stack Overflow questions has a countably infinite cardinality (how do I typeset \aleph_0 on SO?).
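A sketch of that encoding (a hypothetical illustration in Haskell; it assigns distinct whole numbers to distinct bodies, modulo the degenerate case of trailing NUL characters, which is all the countability argument needs):

import Data.Char (ord)

-- one digit per character, lowest digit first; base 0x110000 is large
-- enough to cover every Unicode code point ("or larger", as above)
encode :: String -> Integer
encode = foldr (\c acc -> fromIntegral (ord c) + 0x110000 * acc) 0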
