How can I use functional programming in the real world? [closed] - haskell

Functional languages are good because they avoid bugs by eliminating state, but also because they can be easily parallelized automatically for you, without you having to worry about the thread count.
As a Win32 developer though, can I use Haskell for some DLL files of my application? And if I do, is there a real advantage that I would get automatically? If so, what gives me this advantage? The compiler?
Does F# parallelize functions you write across multiple cores and CPUs automatically for you? Would you ever see the thread count in Task Manager increase?
How can I start using Haskell in a practical way, and will I really see some benefits if I do?

It seems like the book Real World Haskell is just what you're looking for. You can read it free online.

F# does not contain any magic pixie dust that will pass functions off to different CPUs or machines. F#/Haskell and other functional programming languages make it easier for you to write functions that can be processed independently of the thread or CPU they were created on.
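To make that concrete: Haskell's parallelism isn't automatic either, but because the functions are pure you can ask for it with a one-line annotation. A minimal sketch using GHC and the parallel package (the expensive function is just a stand-in for your own pure work):

import Control.Parallel.Strategies (parMap, rdeepseq)

-- stand-in for some expensive, pure computation
expensive :: Integer -> Integer
expensive n = sum [1 .. n]

main :: IO ()
main = print (sum (parMap rdeepseq expensive [1000000, 2000000, 3000000]))

-- build with:  ghc -O2 -threaded Par.hs
-- run with:    ./Par +RTS -N
-- the runtime spreads the work over the available cores, and you will
-- see the extra threads in Task Manager

You still say where the work is split, but purity means you never have to worry about the pieces stepping on each other.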
I don't feel quite right posting a link here to a podcast I participate in, but in the Herding Code episode where we talked with Matt Podwysocki we asked the same question, and he gave some interesting answers. There are also a lot of good links relating to functional programming in that episode. I found one link titled "Why Functional Programming Matters". That may provide some answers for you.

This might also be interesting:
"Real World Functional Programming"
Examples are in F# and C#, but the theory is fairly generic.
From what I've read (pre-release) it is definitely interesting, but so far I think it is making me want to stick more and more with C#, using libraries like Parallel Extensions.

You didn't mention it, but I'm assuming that you're using C++. One potentially easy way to get into functional programming is via C++/CLI to F#. C++/CLI contains "magic pixie dust" (called IJW: It Just Works) to allow you to call into and out of managed code. With this, calling F# code is almost as simple as it is from C#.
I've used this in one program (FreeSWITCH), which is written entirely in C/C++. With a single managed C++/CLI file (compiled with the /clr switch), it magically transitions into managed code, and from there I can load my F# plugins and execute them. To make things even easier for deployment, F# can statically link all its dependencies, so you don't need to deploy the F# runtime files. One other thing that makes CLR code attractive is that you can pass managed code (delegates) to C code, and the runtime automatically makes a thunk for you.
If you decide to go the Haskell way, the feature you'll be looking for is FFI: Foreign Function Interface. However, I don't think it'll give you the same level of integration as C++/CLI with F#.
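For a rough idea of what the Haskell side of such a DLL could look like, here is a minimal FFI-export sketch (the module name and the fib function are invented for the example):

{-# LANGUAGE ForeignFunctionInterface #-}
module Fib where

import Foreign.C.Types (CInt)

-- an ordinary pure Haskell function
fib :: Int -> Int
fib n = go n 0 1
  where
    go 0 a _ = a
    go k a b = go (k - 1) b (a + b)

-- export it under a plain C name so the DLL's callers can use it
foreign export ccall fib_c :: CInt -> CInt

fib_c :: CInt -> CInt
fib_c = fromIntegral . fib . fromIntegral

GHC generates a stub header for the exported symbol; the C/Win32 side just has to call hs_init before using the library and hs_exit when it is done.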

I'm currently learning Haskell myself. When you start out learning it, it doesn't seem very intriguing, because the learning experience is nothing like learning a language like C#.
It's a whole new world, but I noticed I could write very complex expressions in just a few lines of code. When I looked back at the code, it was much more concise; it was small and tight.
I'm absolutely loving it! You can indeed write real-world programs that will be smaller, easier to maintain, and able to handle much more complexity than most other languages allow. I vote for you to learn it!

Since you mention Win32 and DLLs, I presume you're working with unmanaged code. In that case, GHC will work very well for you. Late last year I wrote a DDE server under Windows using FFI to talk to the Microsoft DDE libraries, and, surprisingly, it was an extremely pleasant experience (especially given that I'm a Unix guy).
Haskell's FFI is powerful (even supporting, e.g., callbacks into Haskell functions from C or other libraries), and having Haskell's type checking when writing C-level code is like a dream come true.
That last point is one of the major advantages of Haskell: the type system is amazing. That said, it's like any powerful tool; it needs time and effort to make good use of it.
So yes, it is possible to start out writing small bits of code in Haskell that link into the rest of your code (though you may find it easier to start with small Haskell programs that link to your other code), and it's well worth spending a fair amount of time learning about this and using it wherever you can. You may end up like me, planning a fairly major project tightly integrated with Windows code (in my case, a sophisticated Excel add-in) in Haskell.
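On the callback point above: the FFI's "wrapper" imports turn a Haskell function into a C function pointer that the foreign library can call back into. A small sketch (register_callback is an invented name standing in for whatever entry point the real library exposes):

{-# LANGUAGE ForeignFunctionInterface #-}
module Callbacks where

import Foreign.C.Types (CInt)
import Foreign.Ptr (FunPtr, freeHaskellFunPtr)

-- turn a Haskell function into a C function pointer
foreign import ccall "wrapper"
  mkCallback :: (CInt -> IO ()) -> IO (FunPtr (CInt -> IO ()))

-- stand-in for a library routine that takes a callback;
-- "register_callback" is invented, not a real Win32/DDE entry point
foreign import ccall "register_callback"
  registerCallback :: FunPtr (CInt -> IO ()) -> IO ()

main :: IO ()
main = do
  cb <- mkCallback (\n -> putStrLn ("called back with " ++ show n))
  registerCallback cb       -- the C side can now call straight into Haskell
  freeHaskellFunPtr cb      -- free the pointer once the library is done with it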

Related

Writing a programming language [closed]

Disclaimer: Yes, I know this will take 3 years, at least.
I am planning to write a new interpreted programming language. I have a quite solid idea of what I want in terms of dynamism, syntax, object model, and so on.
Now that I have the idea, I have a few questions before I start:
Should I begin by writing the full specification and then implement it, or develop both side by side?
I'm still torn between C and C++. C++ would allow for a cleaner design and faster development, while C would (maybe) ensure portability to more platforms (microprocessors?). Performance is a must.
Should I try to get people interested in the project before the first working prototype so they can cooperate (the end product will be under a liberal license anyway), or keep working alone until I have something that runs?
How modular should it be? I am sure that I won't immediately start working on a bytecode interpreter but on something easier to implement (though slower) first, so modularity is a must in order to be able to extend it later, but I guess overdoing it will hamper performance and clarity. (A sketch of what such a simpler first cut could look like follows below.)
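For what it's worth, the "easier to implement but slower" first cut usually means a tree-walking evaluator over the AST rather than a bytecode VM. A minimal sketch of the idea, written in Haskell here purely for compactness (the Expr type and its cases are invented):

-- a tiny expression language and a direct, tree-walking evaluator
data Expr
  = Lit Int
  | Add Expr Expr
  | Mul Expr Expr
  deriving Show

eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b

main :: IO ()
main = print (eval (Add (Lit 2) (Mul (Lit 3) (Lit 4))))   -- prints 14

Once the semantics are settled, the evaluator can be swapped for a bytecode compiler plus VM without touching the front end, which is exactly where the modularity pays off.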
The answers to your questions depend largely on why you're doing this - the primary reason. Are you trying to create the next Ruby, or is this a learning exercise?
Specification: If this is a personal project, this is not as important. PHP gets a bad rap for having been developed "on the fly," yet many people use it every day. A more complete spec will probably help get people involved if/when you want help.
If you want cross-platform and performance, C is the way to go.
If you want people to join in, prove something first. Write a killer-cool application with your language and blog/talk about why your language is different/special/better.
Modularity of what, the language itself or the compiler? If you want to extend the language, a good spec will help (see #1.) The compiler should be designed with all the best practices in mind, which should help make it extensible.
I hear the Dragon Book is good for learning to develop compilers.
Your specification will be broken unless you write it hand-in-hand with the implementation.
If you think C++ would give you cleaner design and faster development, you should probably use it.
You will have difficulty getting anyone interested in a project unless there is something that runs and demonstrates what is unique about your language.
If you think your language will ever require a byte-code interpreter (and you do say "Performance is a must") you should investigate the capabilities of existing byte-code interpreters before you finalize your language design.
I think you have set yourself too many goals. You say "performance is a must" but in a comment reply you say your goal is "to learn a lot about language design" and that it is "pretty unlikely" that you'll use it in a real project. New programming languages are created to solve problems; more precisely, they're created to help people express solutions to problems in better ways. Designing a language without using it seriously, intensely, continually is like writing software without any test cases: you're likely to wind up with something unusable.
If you want to try your hand at language design, then find a problem---one that you care about---that existing languages won't let you solve the way you want. Then do whatever you can to get a working implementation and start writing and running programs using it. You don't need a hand-crafted JIT compiler with a runtime written in highly bummed assembly code. If you target the JVM or .NET, you get a very high-performance GC, scalable threading system, libraries, and lots of other good stuff for free, even if it interferes with that awesome idea you had for ______.
On the other hand, if you just want to make something run fast, don't try to design a language at the same time. Just find one that you like, learn about implementation strategies, and see if you can do better.

Is it worthwhile to learn assembly language? [closed]

Is it still worthwhile to learn ASM?
I know a little of it, but I haven't really used it or learned it properly because everything I learn to do in assembler I can do in 1/10th the time with some language like C or C++. So, should I really learn and use ASM? Will it do me any good professionally? Will it increase my resourcefulness? In short, would it make me a better programmer?
Note: I am talking about low-level assembly like FASM or NASM and not something like HLA (High-Level Assembler).
I learned from Kip Irvine's book. If you ignore the (fair) criticisms of his (irrelevant) libraries, I can recommend it as a good introduction to the language itself -- although for the really interesting stuff you have to hunt out obsessives on the net.
I think it's useful to understand what happens at the lower levels. As you research assembler you will learn about CPU pipelining, branch prediction, cache alignment, SIMD, instruction reordering and so on. Knowledge of these will help you write better high-level code.
Furthermore, the conventional wisdom is to not try to hand-optimise assembly most of the time but let the compiler worry about it. When you see some examples of the twisted things that compilers generate, you will better understand why the conventional wisdom holds.
Example: LFSRs run fast with the rotate-with-carry instruction; for specific cases like this, it's just as easy to write the assembler version as it is to discover whether or not the compiler is smart enough to figure it out. Sometimes you just know something that the compiler doesn't.
It also increases your understanding of security issues -- write-xor-execute, stack overruns, etc.
Some concurrency issues only become apparent when you are aware of what is happening at the per-instruction level.
It can be useful sometimes when debugging if you don't have the complete source code.
There's the curiosity value. How are virtual functions implemented anyway? Ever try to write DirectX or COM programs in assembler? How do large structures get returned - does the calling function offer space for them, or vice versa?
Then there are special assembly languages for graphics hardware, although shader languages went high-level a few years ago; anything which lets you think about a problem a different way is good.
I find it interesting that so many people jump to say that yes, you need/should learn assembly. To me the question is how much assembly do you need to know? I don't think you have to know assembly like a programming language; that is, I don't believe that everyone should be able to write a program in assembly, but on the other hand, being able to read it and understand what it actually means (which might require more knowledge of the architecture than of the assembler) is enough.
I certainly cannot write assembly (i.e. write any non-trivial piece of code in assembly), but I can read it, and that, together with knowledge of the actual hardware architecture and the calling conventions being used, is enough to analyze performance and identify which piece of C++ code was the source of that assembly.
Yes - the primary reason for C and C++ developers to learn assembly is that it helps them understand what's going on under the hood of C and C++ code. It's not that you will actually write code in assembly, but you will be able to look at a code disassembly to assess its efficiency, and you will understand how different C and C++ features work much better.
It's worthwhile to learn lots of different languages, from lots of different paradigms. Learning Java, C++, C#, and Python doesn't count, since they are all instances of the same paradigm.
As assembly is at the root (well, close to the root) of all languages, I for one say that it is worthwhile to learn assembly.
Then again, it's worthwhile to learn a functional programming language, logic programming, scripting languages, math-based languages. You only have so much time, so you do have to pick and choose.
Knowing ASM is also useful when debugging, as sometimes all you have is "ASM dump of the error".
It depends on which programming level you wish to reach.
If you need to work with debuggers, then YES.
If you need to know how compilers work, then YES.
Any assembler/debugger is CPU dependent, so there is a lot of work; just look at the x86 family to see how big and old it is.
Do you have any use for it in what you plan to do? Is it going to aid you in any way in what you currently do or plan to do? Those are the two questions you should ask yourself; the answer to those is the answer to your question.
In a more general sense, yes, I'd say it is in my opinion well worth learning asm (something like x86 or ARM); how well it serves you depends on what you're programming and how you're debugging it.

Guidelines for creating a programming-language enjoyable to write programs in? [closed]

I'm currently working on the topic of programming languages and interpreter design. I have already created several programming languages but couldn't reach my goal so far:
Create a programming language which focuses on giving the programmer a good feeling when writing code in it. It should just be fun and/or interesting and in no case annoying to write something in it.
I get this feeling when writing code in Python. I sometimes get the opposite with PHP and in rare cases when having to reinvent some wheel in C++.
So I've tried to figure out some syntactical features to make programming in my new language fun, but I just can't find any.
Which concrete features, maybe mainly in terms of syntax, do/could make programming in a language fun?
Examples:
I find it enjoyable to program in Ruby because of its use of code blocks.
It would be nice if you could include exactly one example in your answer
Those features do not have to already exist in any language!
I'm doing this because I have experienced extreme rises in (my own) productivity when programming in languages I love (because of particular features).
You mentioned Ruby in your question. AFAIK, Ruby is the only programming language for which Joy is an actual, stated, explicit design goal. (In fact, it is the only design goal.)
The reason that Yukihiro Matsumoto was able to design Ruby this way is that he already knew and used tons of programming languages before he started designing Ruby, and learned tons more in order to design it. (Interestingly, he didn't know Python, and has said that he probably wouldn't have created Ruby if he had.)
Here's just a tiny fraction of the languages that matz has either used himself, or looked at for inspiration (or in some cases for inspiration what not to do):
CLU
Sather
Lisp
Scheme
Smalltalk
Perl
Python
Haskell
Scala
PHP
C
C++
Java
C#
Objective-C
Erlang
And I believe that this is one way that good programming languages can be designed (what Larry Wall calls postmodernist language design): Throw away everything that didn't work in the past, take everything that worked and combine that tastefully.
Of course, this requires that you actually know all those languages from which you want to "steal" and in particular, it requires that you know lots of very different languages with different paradigms, different concepts and different "feels", otherwise the idea pool from which you steal is rather small and inbred.
Consistency.
It's the feeling that you already know something when you use an API or feature you've never used before. It also makes you more productive, as you don't have to learn something new for the sake of it.
I think this is also one of the Ruby 'likes', in that if you follow the naming convention, things start to 'just work' without bindings and glue and suchlike.
For example, using the STL in C++, many of the algorithms are the same for all containers - even strings. That makes it nice to use... except for those parts that do not follow the same API (e.g. vector of bool); there the difference is more noticeable.
Two things to keep in mind are orthogonality and the principle of least surprise.
A programming language should make it easy to write correct programs and difficult (if not impossible) to write incorrect programs. For instance, in Java
long x = 2000000000 + 2000000000;
overflows, while
long x = 2000000000L + 2000000000;
doesn't. Is this obvious? I don't think so. Does anyone ever want something to overflow? I don't think so.
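For contrast, a language can design this particular surprise away: Haskell's default Integer type is arbitrary precision, so the analogous code simply gives the right answer (a small sketch; note that GHC's fixed-size Int is 64-bit on most modern platforms, so the second value only wraps where Int is 32-bit):

x :: Integer
x = 2000000000 + 2000000000   -- 4000000000; Integer never overflows

y :: Int
y = 2000000000 + 2000000000   -- fixed-size Int: would wrap silently if Int were 32 bits

main :: IO ()
main = print (x, y)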
Hilarity.
http://lolcode.com/
Follow common practices (like using + for addition, & for bitwise/logical and)
Group logically similar code in namespaces
Have an extensive string processing library
Incorporate debugging facilities
For a cross-platform language, try to minimize platform differences as much as possible
A language feature that appears simple and easy to learn surprises and delights the programmer with its unexpected power. I nominate Haskell type classes :-)
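One small taste of that unexpected power is return-type polymorphism: the same call picks a different implementation depending on the type the caller asks for, something most OO interfaces cannot express. A tiny example using only standard Prelude functions:

main :: IO ()
main = do
  print (read "42"   :: Int)      -- read picks the Int parser
  print (read "42.0" :: Double)   -- the same call picks the Double parser
  print (minBound    :: Char)     -- even a "constant" can depend on the requested type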

What programming concept/technique has boosted your productivity? [closed]

I've been programming for several years now and since then I have learned several concepts and techniques that have made me a better programmer (i.e. OOP, MVC, regex, hashing, etc.). I also feel that by being able to learn several languages (BASIC, Pascal, C/C++, Lisp, Prolog, Python) I have widened my horizons in a very positive way. But for some time now I've felt like I'm not learning any new good "trick". Can you suggest some interesting concept/technique/trick that could get me back into the learning flow?
A good paradigm shift always allows you to see things differently and become a better developer. I would suggest you read up on functional programming and maybe learn a functional language like Haskell or Scheme.
YAGNI (You Ain't Gonna Need It) and DTSTTCPW (Do The Simplest Thing That Could Possibly Work)
It's easy to spend a lot of time thinking about edge cases, and find that you've implemented something that's completely useless. I believe that a far better approach is to knock out a simple prototype, and then poke and prod it until you understand the domain well enough to create production code.
Recognize, however, that your prototype is going to evolve into production code whether you like it or not. So write it with that in mind.
Learning how to use your IDE and tools. This to me resulted in a far greater productivity increase.
For examples:
learning how to use a source level debugger
using tools like purify/boundschecker
fxcop
etc. I realize I am dating myself, but those were big steps. There are many more.
Any time you can change the way you think about a problem, or solve a problem without having to undo previous work, it is a HUGE gain. Process, tools, etc. can all help with that. Don't limit yourself to finding silver-bullet techniques for productivity gains.
Watching productive people work and getting them to tell you what they are doing and why is also invaluable.
If I'm honest, using and learning a great framework like .NET has really increased my productivity.
I'm often amazed what people are willing to reinvent due to their ignorance that the very same function already exists in the framework.
Agile, and especially Test-Driven Development. Best thing to happen to software development since the invention of Object-Oriented Design.
Concerning coding, I'd say design patterns and architecture patterns are always nice to look at and can help you write cleaner/better code.
For methodology I would advise Agile development, which is great. There are numerous techniques and methods (I'm personally a fan of extreme programming) and plenty of reading that can keep you busy and improve your general approach.
Finally, I'd say learn new languages, like Ruby.
Design patterns
SCRUM process
DiSC assessment (and understanding of how it applies to collaborative s/w development)
StackOverflow.com (of course!)
Google
... other stuff too, I'm sure
Design Patterns. Learning how to break dependence upon implementation and inheritance, and depending on interfaces (contracts) instead changed the way I think about programming.
Debugging. Once I figured out how to actually step through the code and go line-by-line, examining the underlying state, it revolutionized how I troubleshoot code.
Practice, practice, practice: I didn't realize how important it is to keep working on my skills outside of work until a relatively short time ago. Mistakes and solutions I make at home make me a better programmer at work, and vice versa. Learning should never stop if you want to be good at something, and programming isn't an exception.
If I had to pick just one, I'd say Test-Driven Design, aka TDD: write unit tests (and check that they fail) before you incrementally add features.
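As a minimal sketch of that rhythm in Haskell, using the HUnit package (fizzbuzz is an invented example; the point is that the tests exist, and fail, before the implementation does):

import Test.HUnit

-- step 1: the tests, written (and watched failing) first
tests :: Test
tests = TestList
  [ TestCase (assertEqual "3 is Fizz"      "Fizz"     (fizzbuzz 3))
  , TestCase (assertEqual "15 is FizzBuzz" "FizzBuzz" (fizzbuzz 15))
  ]

-- step 2: the simplest implementation that makes the tests pass
fizzbuzz :: Int -> String
fizzbuzz n
  | n `mod` 15 == 0 = "FizzBuzz"
  | n `mod` 3  == 0 = "Fizz"
  | n `mod` 5  == 0 = "Buzz"
  | otherwise       = show n

main :: IO ()
main = runTestTT tests >> return ()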
Try to learn to see things from the user's standpoint.
For example:
learn how to write meaningful error messages
learn how to produce usable applications
learn some basic speed-optimization techniques
Remember that the user sees your application, not your code.
VIM Quick Reference Card. After I started using advanced Vim features (macros, plugins) I stopped doing any repetitive actions manually while coding.
Apart from that, Scrum, and working at night when no one interrupts you, gave me the highest benefit.
If you want to expand your experience into web programming, you should try and get a good handle on the HTTP Request/Response paradigm. This will make creating web apps much easier on you because you understand the underlying framework.
http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol
I would look at some of the newer languages that combine OO and functional elements, like C# or Scala.
Learning Smalltalk has helped me become more productive. It is an easy language to learn and things can be built extremely quickly. For a stunning productivity aid check out Seaside, it's a framework for building web applications. Moreover, if you have only been used to curly brace languages Smalltalk will also make you smile!
I was helped by the following paradigms in this order:
1) bottom-up programming
2) top-down programming (C, Pascal)
3) object-oriented programming (Smalltalk, Java)
4) functional programming (lisp, Mathematica)
with some logic programming thrown in (prolog).
NHibernate, hands down. The fact that I don't need to write database functionality for my business objects is very useful and time-saving.
High-level understanding - creating good abstractions with proper dependencies - is what pays off in the long term. For example, the Law of Demeter is an important guideline. I also recommend reading Eric Evans's Domain-Driven Design.
Code generators. They're the best thing in software engineering.
Would you like to write all your projects in asm? Nope, let's generate it from C++. Or from something sat above the JVM which diligently generates the necessary machine code.
Duplicating the same source code all over the place, but stuck with a language that insists on the line noise? Use macros.
Want to use lambdas in a language that doesn't have them? Work out how to fake the anonymous name and variable scoping required then generate the boilerplate.
None of the readily available languages quite fit your pattern of thought, desired syntax or even semantics? Write a compiler for a new one.
Better languages are nice. Better design patterns are nice. Emacs is awesome. But compilers are where all the power lies in our field. I suspect the only reason they aren't mentioned in any of the other answers is that we can't imagine programming without any.
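And in the small, a code generator doesn't have to be a compiler; even a throwaway script that stamps out boilerplate counts. A toy sketch in Haskell (the Record struct and its fields are invented for the example):

-- emit C accessor functions for a list of (type, field) pairs
fields :: [(String, String)]
fields = [("int", "age"), ("double", "weight")]

accessor :: (String, String) -> String
accessor (ty, name) =
  ty ++ " get_" ++ name ++ "(const Record *r) { return r->" ++ name ++ "; }"

main :: IO ()
main = mapM_ (putStrLn . accessor) fields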
Copy/paste technique

What are the primary differences between Haskell and F#? [closed]

I've searched on the Internet for comparisons between F# and Haskell but haven't found anything really definitive. What are the primary differences and why would I want to choose one over the other?
Haskell is a "pure" functional language, whereas F# has aspects of both imperative/OO and functional languages. Haskell also has lazy evaluation, which is fairly rare amongst functional languages.
What do these things mean? A pure functional language means there are no side effects (no changes to shared state when a function is called): you are guaranteed that if you call f(x), nothing else happens besides returning a value from the function - no console output, no database writes, no changes to global or static variables. And although Haskell can have non-pure functions (through monads), this must be made explicit in the declaration.
Pure functional languages and 'no side effect' programming have gained popularity recently because they lend themselves well to multi-core concurrency: with no shared state it is much harder to get things wrong than with myriad locks and semaphores.
Lazy evaluation is where a function is NOT evaluated until it is absolutely required, meaning that many operations can be avoided when they aren't necessary. Think of it as a basic C# if clause such as this:
if (IsSomethingTrue() && AnotherThingTrue())
{
    DoSomething();
}
If IsSomethingTrue() is false then the AnotherThingTrue() method is never evaluated.
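Short-circuiting like that is really just a special case. In Haskell everything is evaluated this way, which among other things lets you define values that are conceptually infinite and only pay for the parts you actually look at. A tiny example using only the standard Prelude:

evens :: [Integer]
evens = filter even [1 ..]      -- conceptually infinite; nothing is computed yet

main :: IO ()
main = print (take 5 evens)     -- [2,4,6,8,10]: only the first five are ever produced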
While Haskell is an amazing language, the major benefit of F# (for the time being) is that it sits on top of the CLR. This lends itself to polyglot programming. One day, you may write your web UI in ASP.NET MVC, your business logic in C#, your core algorithms in F# and your unit tests in IronRuby... all within the .NET Framework.
Listen to the Software Engineering radio with Simon Peyton Jones for more info on Haskell: Episode 108: Simon Peyton Jones on Functional Programming and Haskell
Big differences:
Platform
Object orientation
Laziness
The similarities are more important than the differences. Basically, you should use F# if you are on .NET already, Haskell otherwise. Also, OO and laziness mean that F# is closer to what you (probably) already know, so it is probably easier to learn.
Platform : Haskell has its own runtime, F# uses .NET. I don't know what the performance difference is, although I suspect the average code is about the same before optimisation. F# has the advantage if you need the .NET libraries.
Object orientation : F# has OO, and is very careful to make sure that .NET classes are easy to use even if your code isn't OO. Haskell has type classes which let you do something like OO, in a weird sort of way. They are like Ruby mixins crossed with Common Lisp generic functions. They're a little like Java/C# interfaces.
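If it helps to see the interface analogy spelled out, here is roughly what it looks like (Shape, Circle and Square are invented for the example):

class Shape a where
  area :: a -> Double

newtype Circle = Circle Double   -- radius
data    Square = Square Double   -- side length

instance Shape Circle where
  area (Circle r) = pi * r * r

instance Shape Square where
  area (Square s) = s * s

main :: IO ()
main = mapM_ print [area (Circle 1.0), area (Square 2.0)]

The difference from a C#/Java interface is that the instance is declared separately from the type, so you can make someone else's type an instance of your class after the fact.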
Laziness : Haskell is lazy, F# is not. Laziness enables some nice tricks and makes some things that look slow actually execute fast. But I find it a lot harder to guess how fast my code will run. Both languages let you use the other model, you just have to be explicit about it in your code.
Minor differences:
Syntax : Haskell has slightly nicer syntax in my opinion. It's a little more terse and regular, and I like declaring types on a separate line. YMMV.
Tools : F# has excellent Visual Studio integration, if you like that sort of thing. Haskell also has an older Visual Studio plugin, but I don't think it ever got out of beta. Haskell has a simple emacs mode, and you can probably use OCaml's tuareg-mode to edit F#.
Side effects : Both languages make it pretty obvious when you are mutating variables. But Haskell's compiler also forces you to mark side effects whenever you use them. The practical difference is that you have to be a lot more aware of when you use libraries with side effects as well.
F# is part of the ML family of languages and is very close to OCaml. You may want to read this discussion on the differences between Haskell and OCaml.
A major difference, which is probably a result of the purity but which I see mentioned less often, is the pervasive use of monads. As is frequently pointed out, monads can be built in most any language, but life changes greatly when they are used pervasively throughout the libraries, and you use them yourself.
Monads provide something seen in a much more limited way in other languages: abstraction of flow control. They're incredibly useful and elegant ways of doing all sorts of things, and a year of Haskell has entirely changed the way I program, in the same way that moving from imperative to OO programming many years ago changed it, or, much later, using higher-order functions did.
Unfortunately, there's no way in a space like this to provide enough understanding to let you see what the difference is. In fact, no amount of writing will do it; you simply have to spend enough time learning and writing code to gain a real understanding.
As well, F# sometimes may become slightly less functional or more awkward (from the functional programming point of view) when you interface with the .NET platform/libraries, as the libraries were obviously designed from an OO point of view.
So you might consider your decision this way: are you looking to try out one of these languages in order to get a quick, relatively small increment of improvement, or are you willing to put in more time and get less immediate benefit for something bigger in the long term. (Or, at least, if you don't get something bigger, the easy ability to switch to the other quickly?) If the former, F# is your choice, if the latter, Haskell.
A couple of other unrelated points:
Haskell has slightly nicer syntax, which is no surprise, since the designers of Haskell knew ML quite well. However, F#'s 'light' syntax goes a long way toward improving ML syntax, so there's not a huge gap there.
In terms of platforms, F# is of course .NET; how well that will work on Mono I don't know. GHC compiles to machine code with its own runtime, working well under both Windows and Unix, which compares to .NET in much the same way that, say, C++ does. This can be an advantage in some circumstances, especially in terms of speed and lower-level machine access. (I had no problem writing a DDE server in Haskell/GHC, for example; I don't think you could do that in any .NET language, and regardless, MS certainly doesn't want you doing that.)
Well, for one, I'd say a main advantage is that F# compiles against the .NET platform, which makes it easy to deploy on Windows. I've seen examples which explained using F# combined with ASP.NET to build web applications ;-)
On the other hand, Haskell has been around for waaaaay longer, so I think the group of people who are real experts on that language is a lot bigger.
For F# I've only seen one real implementation so far, which is the Singularity proof of concept OS. I've seen more real world implementations of Haskell.
