Are there any more original, more functional Haskell web frameworks? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 9 years ago.
I looked at Haskell web frameworks like Snap and Yesod. Most seem to implement an MVC-ish approach reminiscent of web frameworks like Ruby on Rails. Yes, MVC can be achieved with FP, but IMHO it doesn't show the great advantages of an FP approach. As HTTP is a stateless protocol, I would have hoped that there might be a Haskell framework that takes a more original, more purely functional approach. Are there any?

I'm not sure which features in FP you would like a framework to utilize, but I think Yesod uses some features to great benefit. (Happstack does as well, but I'm just not as familiar with it.)
Type-safe URLs eliminate an entire class of typo-generated bugs, plus automatically deal with input validation.
Proper typing practically eliminates XSS attacks.
Depending on the scope of data you're dealing with, using either STM or MVars for your storage needs makes it easy to avoid race conditions and deadlocks in multi-threaded applications.
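To make the STM point concrete, here is a minimal, framework-independent sketch (the visit counter is made up for illustration): concurrent handlers bumping a shared TVar cannot race or deadlock, because STM retries conflicting transactions for you.

```haskell
import Control.Concurrent.STM

-- A shared visit counter; in a real app this would live in your
-- application state and be bumped from request handlers.
newCounter :: IO (TVar Int)
newCounter = newTVarIO 0

bumpCount :: TVar Int -> IO Int
bumpCount counter = atomically $ do
  n <- readTVar counter
  writeTVar counter (n + 1)
  return (n + 1)

main :: IO ()
main = do
  counter <- newCounter
  n <- bumpCount counter
  putStrLn ("Visits so far: " ++ show n)
```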
I'm sure there's a lot more that I'm not thinking of, but I hope that makes the point. But perhaps what you're looking for is something like a continuation-based framework. I personally think they're a bad idea (I'm a believer in REST), but I suppose it might seem more "functional."

It depends on what you're trying to achieve. If by stateless you actually mean stateless, I use the templating framework Hakyll to generate static pages. It has an interesting structure for dealing with dependencies and file updates.
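For a sense of what that structure looks like, here is a minimal Hakyll site definition (the file patterns are made up): each rule declares how a set of source files becomes output, and Hakyll tracks the dependencies and rebuilds only what changed.

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Hakyll

main :: IO ()
main = hakyll $ do
  -- Copy images through unchanged.
  match "images/*" $ do
    route   idRoute
    compile copyFileCompiler

  -- Render Markdown posts to HTML via pandoc.
  match "posts/*.md" $ do
    route   (setExtension "html")
    compile pandocCompiler
```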

I think WardB was asking not about fancy types, but rather about the denotative/stateless semantics of FP, in contrast to imperative/nondenotative (and often nondeterministic) semantics of things like IO and STM.
It's this denotative aspect that supports precise & tractable reasoning.
The term "functional" has been stretched to embrace imperative/nondenotative programming as well, which often leads to confusion.
Peter Landin recommended replacing "functional" with "denotative" to help clear up exactly this sort of confusion.
I don't know of any denotative (nonimperative) Haskell web frameworks.
Getting there would probably require breaking some long-standing imperative mental habits.
Which is to say: interesting work!

I was looking for the same thing but haven't really found it. Specifically, I was looking for a continuations approach that renders traditional HTTP sessions unnecessary. These sorts of frameworks are quite popular in Scheme. The closest thing I found was Chris Eidhof's answer to the Arc Challenge:
https://gist.github.com/260052
It's a sketchy prototype and probably many man-months away from being something that you could use for serious work. If my Haskell skills were better, I might be tempted to try and develop it.
I believe WASH also took this approach, but that project seems to have died. I'm pinning my hopes on mysnapsession, as I believe it is also looking at a continuation-based session facility, which would be exciting because I have been very impressed with the quality of the rest of Snap and the momentum behind it.
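To show the idea itself (not the API of WASH, mysnapsession, or any other real framework), here is a console analogy using the plain continuation transformer: each step's continuation is "whatever happens after the user's next input", so no explicit session state needs to be threaded around.

```haskell
import Control.Monad.Trans.Cont (ContT (..), runContT)

-- Ask a question; the continuation k receives the user's answer.
askLine :: String -> ContT r IO String
askLine question = ContT $ \k -> putStrLn question >> getLine >>= k

-- A two-step "session": the greeting step only runs once the
-- name step's continuation is resumed with an answer.
greet :: ContT r IO ()
greet = do
  name <- askLine "Your name?"
  ContT $ \k -> putStrLn ("Hello, " ++ name) >> k ()

main :: IO ()
main = runContT greet return
```

A continuation-based web framework does essentially the same thing, except that suspended continuations are stored server-side and resumed when the next HTTP request arrives.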

Related

What are the main benefits of using Haskell for web development? [closed]

I'm learning Haskell for great good.
I'm pretty into OOP and the various type systems. I used Java to develop web apps (Java EE, Spring, Hibernate, Struts 1.x), and now I regularly use Python (Pylons, Django, SQLAlchemy, PyMongo) and JavaScript. I've seen a huge improvement in my personal productivity: the lightweight approach, duck typing, awesome iterators, functions as first-class citizens, simple syntax and configuration, and fast tools like pip and distribute (and much more) have helped me a lot.
But the main reason for my productivity boost is the Python language itself.
What are the main benefits of using Haskell for web development?
For example, how can its type inference really improve my web app? So far, I've noticed that when you decorate a function with its type signature you are adding a lot of semantics to your program. I expect all this effort to come back in some way, to save many lines of code and to make them sound. I really like the sharp distinction between types and data, and I'm starting to understand how they work, but I want something back :P
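For instance, here is a made-up, framework-independent sketch of what I mean by a signature adding semantics: the newtype and the type signature alone rule out a whole class of mistakes at compile time (UserId and lookupUser are invented for the example).

```haskell
-- A user is identified by a UserId, never by a bare Int or String.
newtype UserId = UserId Int

lookupUser :: UserId -> Maybe String
lookupUser (UserId 1) = Just "alice"
lookupUser _          = Nothing

main :: IO ()
main = do
  print (lookupUser (UserId 1))
  -- print (lookupUser "1")  -- rejected by the compiler: a String is not a UserId
```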
Don't get me wrong, I've just started studying Haskell, so Maybe I'm missing some awesomeness, but I really want to understand its paradigm and when it's worth using.
Most web applications aim to be stateless and to handle concurrency well. It's also rather important to scale (for Google SEO reasons and for user experience).
Haskell handles these problems rather well (although, IMHO, in a more academic and perhaps less "human"-intuitive way).
That being said, given the sheer lack of people doing web app development in Haskell (compared to, say, node.js), and the fact that traditional web app development has been focused on an OOP mindset, it might be rather difficult.
I had some issues trying to use it as you can see in my questions below:
How do I do automatic data serialization of data objects?
Handling incremental Data Modeling Changes in Functional Programming

Fascinated by FP but still think imperative, how do I think functional? [closed]

Like most people, I started with, and still write, a lot of imperative code (mostly Java, Ruby, JavaScript).
I've never been a big fan of OO, either because I never understood it properly or because I just don't think in OO terms.
I got my first glimpse of FP via JavaScript: passing functions around, closures, etc. I have been in love with FP since then.
Recently, I've developed an interest in Clojure (and maybe Scala), and someday I might even have a go at Haskell. I like what I see in the functional approach, but how do I think functionally? I've been doing imperative work for the past 3-4 years, and my brain tends to think imperatively when approaching a problem.
How can I unlearn the imperative style (do I need to?) and think more functionally?
No. Don't unlearn the imperative style, as you'll still need it. Even many well-done FP libraries look somewhat imperative under the covers. It's better to think in terms of adding FP to your list of tools and techniques rather than going all-in on one technique or the other.
Now, as to how you go about learning the FP style? Toy projects, or a utility project you write for your own use. Or hell, write yourself yet another blog, but with a twist! The key thing, as was stated above, is practice. When doing so, avoid shared state. Strive for purity (that is, avoid side effects) in as many places as you can. Use closures, pattern matching (if the language supports it), and lambda functions. Think in terms of functions being first-class data types in your program: take them as input and return them as output. The list goes on, but you'll see the same concepts repeated time and again if you're watching for them.
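As a tiny, self-contained illustration of a few of those habits (pure functions, pattern matching, and functions passed around as ordinary values), here is a Haskell sketch; the function names are invented for the example.

```haskell
import Data.Char (toUpper)

-- Pattern matching instead of an if/else chain.
describe :: [a] -> String
describe []  = "empty"
describe [_] = "one element"
describe _   = "many elements"

-- A higher-order function: it takes a function and returns a function.
twice :: (a -> a) -> a -> a
twice f = f . f

main :: IO ()
main = do
  putStrLn (describe [1, 2, 3 :: Int])
  putStrLn (twice (map toUpper) "fp")   -- prints "FP"
```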
If you feel you need a gentle nudge in this regard, use a tool (language) that encourages the FP style such as F# or Scala.
If you feel you need more of a "tough love" kind of help, reach for Haskell :)
Otherwise use the tool of your choice and just keep the FP concepts (above) in mind. If you decide to go this route, there's a pretty good book called Real-World Functional Programming that uses F# and C# to illustrate techniques that apply in both a "mostly" FP language (F#) and a "mostly" OO language (C#).
Go for Haskell. Because it's pure, you are forced to think functionally. In F# or Scala you can still write very imperative code. I strongly recommend Graham Hutton's book Programming in Haskell.

Tips for grokking declarative programming languages? [closed]

Question
As stated, have you any tips to help grok / understand / get-your-head-around declarative programming languages?
Or is it simply a case of having to immerse yourself in the language and its syntax until it seeps in, until you get that golden moment where you Get It? This isn't really an option, as I can no longer lock myself in a room for days on end, poring over half a dozen different books on the subject matter (responsibilities being what they are and all).
So, any tips or tricks that helped you when you tackled declarative languages, any insights to pass on?
P.S. I'll personally upvote the first answer that says "Shut up and put in the work".
Background
I was 13 years old when I first started writing code (BASIC, on my sister's Oric-1).
Since then I've worked with many new concepts and many different languages, taking it all in my stride and getting the upper hand quickly enough. Object orientation? Not a bother. Event-driven paradigms? Smoke me a kipper, I'll be back for breakfast.
OWL, MFC, ActiveX, VB3, 4, 5 & 6, VB.NET, Pascal, Delphi, C, C++ & C#. None have stood in my way, at least not for very long.
However recently my perfect score has taken a bit of a battering.
A couple of weeks ago I threw myself into Xaml, and folks, I’m more sinking than swimming.
I think my main problem is that it's declarative. All my other programming skills are procedural. I've hit this block before with MSBuild: I can copy examples of how to get MSBuild things working, but I would be lost putting something together from scratch.
Back to Xaml: currently I'm going insane trying to wire triggers to properties and get the effects I need.
I may post my specific Xaml question here soon enough. For now I’m asking this general “declarative programming” question.
P.S. No, I'm not actually this cocky. Yes, I stumbled like hell the first time I hit OO and the first time I had to write an event-driven UI (VB3 on Windows 3.11).
Edit
It's starting to sink in; the tenacity that got me this far in this field is paying off. It just takes so much fracking time!
. . . I think I'm getting too old for this stuff . . . :)
I had to teach XSL (or XSLT, as you wish) a bunch at the beginning of the century :), and it's a different world, really. That, however, is the basis for the paradigm-shift: you have to realize that declarative languages really are different. The most important advice I have is to keep studying other people's solutions, put the work in, and really try to stop thinking in FLOW. The worst thing is that, in XSL, there is an "if" and an "else," but usually there's another way to do things.
Unlike learning OO, in XSL (or any declarative language, I suppose) you will not manage to do what you're trying to do unless you do it declaratively.
So the answer is in part, "shut up and do the work" as you suggest, but the more important point is to realize that a lot of the work is getting your head around the paradigm shift. So the real answer is, "keep your eyes peeled for the paradigm shift." You have to stop thinking in flow and start thinking in terms of rules that can fire in any order... if they're done right, it doesn't matter when they fire. When you are finally thinking in rules instead of WHEN stuff happens, you're beginning to grok the shift.
Find some examples, with explanations of the "why", from someone who really knows the language. It's learning the patterns and idioms that makes a difference.
I suspect you're trying to do imperative things in declarative land, which means you think in terms of steps. Write the dataflow down in terms of required inputs + stateless function of those inputs and see if that helps.
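A small illustration of that "required inputs plus stateless function" framing, sketched in Haskell (the functions and data are made up): instead of describing steps that update an accumulator, you declare what the result is in terms of its inputs.

```haskell
import Data.Char (isAlpha)

-- The word count of a text *is* the length of its list of words.
wordCount :: String -> Int
wordCount = length . words

-- The valid identifiers *are* the non-empty, purely alphabetic tokens.
validIdents :: [String] -> [String]
validIdents = filter (\s -> not (null s) && all isAlpha s)

main :: IO ()
main = do
  print (wordCount "think in rules not in steps")
  print (validIdents ["foo", "bar42", "", "baz"])
```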
Try a functional or functionalesque language like ML or Scheme.
I don't know what your specific problems with Xaml are (and I haven't used it myself), but I've found that when using XML-based technologies like XSLT, a little Lisp or Scheme experience can go a long way. You might want to look at playing with the excellent Scheme system available free from http://www.plt-scheme.org.
I can see where this may be blowing your mind. All those languages you list are indeed quite similar (procedural).
Once you get this down, I highly encourage you to learn a functional language as well. You may also find it tough going, but learning it will help your general coding skills greatly. You'll have a whole new bag of tricks (even in procedural languages), and you will never be afraid of recursion again.
Consider your favorite “programmer ignorance” pet peeve. The first code snippet is obviously procedural. In the second snippet you make a declarative statement that for the percentage to be valid it has to be between 0 and 100.
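The referenced snippets are not reproduced here, but the contrast can be sketched in Haskell (a hedged reconstruction, not the original code): procedurally you re-check the range at every use; declaratively you state once, in a smart constructor, that a valid percentage lies between 0 and 100, so invalid values simply cannot be built.

```haskell
newtype Percentage = Percentage Int
  deriving Show

-- The declarative statement lives in exactly one place.
mkPercentage :: Int -> Maybe Percentage
mkPercentage n
  | n >= 0 && n <= 100 = Just (Percentage n)
  | otherwise          = Nothing

main :: IO ()
main = do
  print (mkPercentage 42)   -- Just (Percentage 42)
  print (mkPercentage 120)  -- Nothing
```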
So I'd guess you'll have no trouble grokking declarative programming languages as long as you work on it hard enough... there is no royal road to geometry.
Like Binary Worrier, I had a long history with things like C, C++, MFC, etc., and have been coming up to speed on XAML, WPF, and C#. I had a side trip through HTML, JavaScript, and XSLT, which I think helped a great deal in preparing me for XAML.
The basic idea behind XAML is fairly straightforward: it's all about what you show, not what you do. The hard part with XAML is that there are just a ton of implementation details to learn, and you wind up learning them all at the same time in order to get much of anything done.
I could probably be more helpful if the question was more specific.
"Programming is about giving a computer a sequence of instructions."
Most programmers react with equanimity to this statement. It's almost like... "duh?"
But the belief in this statement is what causes people to have trouble understanding other programming paradigms. It's not true, and hasn't been for a very long time. To arrive at a better understanding of programming, many may benefit from thinking on why this statement is false.
Even if you programmed in pure assembly, modern processors would rearrange your instructions, perform branch prediction, and attempt to execute multiple potentially codependent instructions at the same time. In this way they think in terms of logical dependencies, not sequences. The sequence metaphor is the false notion that an instruction logically depends on everything that preceded it. If this were true, the best way to reason about programs would be to examine the control flow. But it is not true.
It's not just declarative programming that doesn't fit with this metaphor, but also parallel and asynchronous programming.
I find the easiest way to "grok" a language is simply to start using it exclusively for all your coding. With a completely new language I would say for me the learning curve is approximately 2 weeks of coding about 4-5 hours a day. After that point it suddenly "clicks" and you can start relying less on manuals and docs.
I took a class in college (Programming Languages). It pretty much felt like I was repeatedly slamming my head against a brick wall, but about 3/4 of the way through the class, I realized the wall wasn't there anymore; I had been beating my head against nothing for a few weeks. It was a pretty surreal feeling.
I think any other way won't have the same charm. Read Gödel, Escher, Bach; listen to a lot of Emerson, Lake & Palmer and Kaikhosru Sorabji; smoke some ganja; and put in the time.

Languages that free you from clarifying your ideas [closed]

OK, so is there a programming language that frees you from clarifying your ideas?
I couldn't help asking. But if there is one, what would you say comes closest today?
You mean, a programming language that lets you program without explaining what you want the program to do?
No, how would that work? The compiler needs to be told what program to compile.
Digging out an old, somewhat appropriate quote from Charles Babbage:
"On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question."
The compiler can't read your mind. The only way it can create a program to do what you want is to tell it what it is you want.
Of course, there are languages that free you from having to specify things that are irrelevant to your overall problem and are only relevant to the underlying implementation. (An obvious example is that most modern languages free you from having to worry about pointers or many other low-level concerns. Many languages also give you ways to iterate over sequences without having to write a manual for-loop.) But you still have to "clarify your ideas"; you still have to specify what your program should do. The best a language can do is free you from clarifying the things that are not relevant to your ideas.
That shouldn't be the role of a language, in my view. Instead, the language should help you to clarify your ideas, and let you express those clarified ideas in as intuitive a way as possible.
You could see it from two angles:
High-level languages like Prolog free you from having to express every messy detail of your algorithm. You just sketch the high-level picture, and Prolog fills in the details (e.g., how to search for and deduce the answers to your questions).
On the other side of the spectrum, low-level languages like C free you from having to express your ideas in an abstract way. You can just give a sequence of very concrete, detailed procedural steps (although you can optionally introduce abstractions if you want to).
So both extremes free you from certain aspects of expressing and clarifying your ideas.
I don't think so, but there are a few that prevent you from clearly expressing those ideas - I nominate BCPL.
For various problem domains, there are languages that free you from having to type a lot of stuff beyond what's necessary to clarify your ideas. But every language fails in some situations, and for some people. Not everybody is comfortable expressing their ideas as an object-oriented design (say, C# or Java), as functions and closures (Scheme), as logical derivations (Prolog -- there are some problems for which it fits!), or as declarative statements of the desired result (XSLT, CSS, various DSLs, with varying success), yet each of these is the right answer in certain contexts, and most of them overlap to some extent. Indeed, few modern languages are purely oriented toward a single paradigm.
But some languages favour other things over expressiveness: such as having efficient implementations (C), or being easy to learn (say, Python or its scripting kindred).
I hope there isn't a language that frees you from clarifying your ideas. It should be the responsibility of all programmers to do that themselves, not to pass it off to some other person or programming construct.
All good points. I was thinking more along the lines of scripting languages, where you can type away in the debugger until it does what you want (I've done that a time or two for some sysadmin WMI scripts).
Yes, Whitespace.

Does a new or emerging programming language really need practical libraries? [closed]

Using Factor or Forth, Arc, or whatever as an example (note: Factor is a bad example because it has a large set of practical libraries): let's say you are considering using a programming language. Does having a large set of practical libraries matter? If your language is well designed, then it would be easy to create a 'string' library or a 'date' library. Maybe even a web framework?
I mention this, because when a language emerges, it seems that someone brings up 'practical libraries'.
It definitely matters - nine times out of ten, I'm going to pick a language with good, well-supported libraries that help accomplish what I want to do.
This isn't to say I wouldn't enjoy writing my own libraries (quite the contrary), but for a production project where quick, accurate results are important, that wouldn't be a practical option by any stretch of the imagination.
Java and .NET have spoiled people with the abundance of classes in the framework or in additional high-quality libraries (quite often free and open source as well). The same goes for Ruby and Python. It'd be hard to adopt a new language without such libraries, as your productivity will suffer tremendously from having to reimplement every single feature you need.
Unless it's a breakthrough language that introduces something radical like intentional programming (I tell the computer what I want it to do and it infers the proper code to do it)... Why, you have one of those? :-)
Practical libraries are important. I don't get paid to write a framework, I get paid to add value to a business. If I told a client I had to charge them to write a string data type, I'd probably lose my job / contract.
Just look at the majority of questions here on SO to see how important tools are to people.
That said, if you find a new language that fits your task really well and you have the time and inclination to write your own tools then by all means dive in. That's one way that new languages get nice libraries, after all.
A large library makes a language much more productive to use. A really great language without support for parsing XML, crypto, a web framework, a UI framework, etc. takes a lot more time to produce working code with. For learning purposes, a language without a large library is fine, but for practical purposes, it is going to cost time and money to use such a language. Imagine if every time you wanted to load an image you had to write code to parse the .jpg headers. What if you had to hand-write your XML parser rather than feeding the data to a pre-built one? You'd likely screw it up and spend a lot of time debugging. If the goal of the project is to create a new tool, writing support code is not a great use of your time.
I think that the big reason for Python's popularity is its huge standard library. Same goes with Java and PHP. In fact, I'd probably say that the selection of libraries is more important than the language itself.
If your top priority is to create a finished product with the least amount of time and effort possible, then yes, having available libraries matters. (If your goal is fun, or learning for the sake of learning, then writing your own libraries can be a good experience.)
A good library is mature enough that lots of people have used it and weeded out most of the bugs. It doesn't matter how great your programming language is or how easy it is to write a library from scratch. There's no replacement for having your code tested over time.
A lot of libraries are not interesting or fun to write, and reimplementing them again is not going to revolutionize the world in any way. There is only so much you can do with a date library or a string library whatever. It either works or it doesn't. Many libraries are simply an implementation of a standard or of some de facto standard behavior, and someone just has to slog through the necessary work to get it right. The less of this you have to do yourself, the better.
Any brand new language that can take advantage of existing libraries starts off way ahead, in my opinion. Clojure for example, though a very new language, also has access to all the libraries of Java. This is arguably one big reason it's doing so well at the moment. Effort is put toward novel things rather than re-inventing wheels.
You think string libraries are easy to write? Go have a look at Unicode, UTF-8, UTF-16, legacy codepages, byte order issues and whatnot.
You think date/time libraries are easy to write? Go have a look at leap seconds, week numbering schemes and whatnot.
Having these kinds of things thought out for you, implemented once and implemented properly, saves more time and headaches than you think.
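As a small reminder of why date handling is genuinely hard, here is a sketch using Haskell's standard time package: the library, not your code, has to know that 2000 was a leap year but 1900 was not.

```haskell
import Data.Time.Calendar (fromGregorianValid)

main :: IO ()
main = do
  print (fromGregorianValid 2000 2 29)  -- Just 2000-02-29
  print (fromGregorianValid 1900 2 29)  -- Nothing: 1900 was not a leap year
```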
It's a tempting idea, and it certainly works if you're Paul Graham or Chuck Moore.
It might work if your domain is very, very bounded and you're not going to get a requirement thrown at you that's outside that domain, something as simple as a client asking for an "import from Excel" feature. On the other hand, Paul Graham used Lisp to write a web shop system, which is a very broad domain of requirements; I'd be interested to know how he handled something like a PDF export. Would he have given the PDF spec and a Lisp manual to some intern on summer vacation from MIT, or would he have gone down to the C libraries?
It might work if your domain is driven by logic or by enduring natural principles, something like an astronomy simulation. If the requirements are human requirements, they'll be full of contradictions and special cases (string and date libraries fall into that category, by the way), and there is no abstraction or language feature that entirely cuts through that; you'll have to slog through the special cases whether you're writing in Haskell or PHP.
It might work where optimisation is very very important (EDIT: and where you're smart enough to optimise it yourself) - you have a stripped down system where you know every layer of the stack because you've implemented it yourself with a particular goal in mind.
I associate the whole cluster of ideas with grad students: they're in the top 1% in programming skills and general smartness; they're working in a very narrow domain; they may not have the best equipment so they're trying to strip things down and optimise in depth; and they don't have the learning vs getting work done dilemma of working programmers.

Resources