What exactly is Intentional Programming?

On my reading spree, I stumbled upon something called Intentional Programming.
I understood it somewhat, but not fully. If anyone can explain it in better detail, please do. Is it being used in any real application?

You got me started on this one...
Looks like C. Simonyi wanted to step up to the next level of abstraction from high-level languages: reduce the dependency of customers on developers to make every change.. in code (cryptic for people not in development).
So he invented a new product called IP, which has a WYSIWYG-type GUI editor to create a domain-specific model. (i.e. IP has a GUI to create the building blocks for your app.. LISP allowed you to create the meta/building blocks too, but not in a way that domain experts could easily do it.)
Like the models in UML, the promise is that you can auto-generate the corresponding source code at the "push of a button". So the domain experts can tweak the model in the future and press the Bake button to deliver the next version of the app.
It seems to utilise DSLs, with the added benefit that multiple user-created DSLs can talk to each other via a built-in IP mechanism... which means the finance model and the sales model can interact and reuse blocks as needed. As with DSLs, you get the benefit of code that conveys developer intent rather than appeasing implementation-language constraints.
The idea being to give greater control to the BAs and domain experts who actually know what's needed...
Update:
Real-world use looks like 'not yet'.. although Simonyi believes 'absolutely in the long term'.
Short story: MS squished IP in favor of the .NET Framework; Simonyi left MS and formed his own company, 'Intentional Software'.. with the agreement that he could use the IP ideas but would have to rewrite his working prototype from the ground up.. (that should slow him down). It's still a work in progress, I think.. and being written in C# (to boot)
Sources:
"Anything You Can Do, I Can Do Meta" by Scott Rosenberg, MIT Technology Review (2007)
To think till yesterday.. I didn't know a thing about this. Investigative reporter signing off. Going back to day job :)

It's the opposite of what happens when I come home at 2am after a pub crawl and fire up the laptop "just to check my email real quick, hon."
Then, the next day, when I peel open one eye and find my way to the bathroom at the crack of noon, I start brushing my teeth and realize, toothpaste dribbling out of my mouth, that last night I made 4 SVN commits, closed 3 bugs, and figured out how to solve the starvation problem on our distributed locking protocol. And I have no idea how the hell any of it works, anymore.
Or maybe it's what workmad3 said.

It appears to be a method of programming that allows the programmer to expand what is actually in the language to more closely follow their original intent, rather than forcing the programmer's intent into the constrained syntax of the language.
The article explicitly mentions LISP as a language that supports this, so I'd suggest you read up on this great language :) LISP macros are exactly what the article describes, allowing you to extend the language indefinitely to cover almost anything you would care to express. (A fairly common outcome of large LISP systems is that you end up with a domain-specific language that is very good for writing specific applications, i.e. writing a word processor ends up producing a word-processor-specific language.)
For your last part: yes, LISP (and thus Intentional Programming) is used in some projects. Paul Graham is a great proponent of LISP, and other examples include the original Crash Bandicoot (a game-object creation system was written in LISP for it, including a LISP compiler targeting the PlayStation).

I have a slightly different understanding of Intentional Programming (as a more general term, not just what Charles Simonyi is doing). It is closely linked to fluent interfaces and can be achieved, with varying degrees of difficulty, in modern object-oriented languages.
Some of these concepts come from Domain-Driven Design (in fact the term "fluent interface" has been popularised by Eric Evans, the author of "the" blue book - Domain-Driven Design: Tackling Complexity in the Heart of Software).
The aim is to make business layer code readable by a non-programmer (i.e. a business person). This can be achieved by class and method names that explicitly state the intent of the operation. In my opinion, being explicit and being intentional produces highly readable and maintainable code.
Consider the two examples below, which achieve the same thing - creating an order for a customer with a 10% discount and adding a couple of products to it.
//C#, Normal version
Customer customer = CustomerService.Get(23);
Order order = new Order();
//What is 0.1? Need to look at Discount property to understand
order.Discount = 0.1;
order.Customer = customer;
//What's 34?
Product product = ProductService.Get(34);
//Do we really care that Order stores OrderLines?
order.OrderLines.Add(new OrderLine(product, 1));
Product product2 = ProductService.Get(54);
order.OrderLines.Add(new OrderLine(product2, 2)); //What's 2?
order.Submit();
//C#, Fluent version
//byId is a named parameter; it states that this method looks up the customer by Id
ICustomerForOrderCreation customer =
    CustomerService.GetCustomerForOrderCreation(byId: 23);
//Explicit method to create a discount order, with an explicit percentage
Order order = customer.CreateDiscountOrder(10.Percent())
    .WithProduct(ProductService.Get(byId: 34))
    .WithProduct(ProductService.Get(byId: 54))
    .WithQuantity(2); //Explicit quantity
order.Submit();
By changing your programming style slightly, you are able to communicate your intent more clearly and reduce the amount of having to look at code elsewhere to understand what's going on.
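For reference, here is a minimal sketch of the supporting plumbing the fluent version assumes. The Percentage type, the implementations of Percent() and the With* methods, and the "quantity applies to the last product added" convention are my illustrative assumptions, not part of any framework:
//Illustrative plumbing for the fluent example above
public struct Percentage
{
    public decimal Fraction { get; }
    public Percentage(decimal fraction) { Fraction = fraction; }
}
public static class PercentExtensions
{
    //Enables writing 10.Percent() instead of the opaque 0.1
    public static Percentage Percent(this int value)
    {
        return new Percentage(value / 100m);
    }
}
public interface ICustomerForOrderCreation
{
    Order CreateDiscountOrder(Percentage discount);
}
//Assumes Order is declared partial elsewhere in the domain model
public partial class Order
{
    private OrderLine lastLine;
    //Each With* method returns the order itself, so calls can chain
    public Order WithProduct(Product product)
    {
        lastLine = new OrderLine(product, 1);
        OrderLines.Add(lastLine);
        return this;
    }
    //By convention, applies to the most recently added product
    public Order WithQuantity(int quantity)
    {
        lastLine.Quantity = quantity;
        return this;
    }
}
The key design point is that every builder method returns the object being built, which is what makes the chained, sentence-like call sites possible.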

Seems to me like yet another fad of software engineering. We've seen thousands of them already: metaprogramming, generative programming, visual programming, and so on. For a short time they get very fashionable, people use them everywhere, and then they invariably go back to the old ways of creating software.
Why? Frederick Brooks answered this question over 20 years ago: there is no silver bullet to kill the werewolf...

Intentional Programming is encoding your intent, or goals. Thus it is goal-oriented programming, or planning. Step up to management.

It's where you intend to program, you don't just accidentally do it. ;)


Is non-programmatic software development feasible? [closed]

I'm currently faced with a very unusual design problem, and hope that a developer wiser than myself might be able to offer some insight.
Background
Without being too specific, I've been hired by a non-profit organisation to assist with the redevelopment of their legacy, but very valuable (in terms of social value) software. The development team is unlike any I've encountered in my time as a software developer, and comprises a small number of developers and a larger group of non-programming domain experts. What makes the arrangement unusual is that the domain experts (let's call them content creators) use custom tooling, some of which is based around a Prolog expert-system engine, to develop web-based software components/forms.
The Problem
The system uses a very awkward postback model to perform logical operations server-side and return new forms/results. It is slow and prone to failure. Simple things, like creating HTML forms using the existing tooling, are much more arduous than they should be. As the demand for a more interactive and performant experience grows, the software developers are finding increasingly that they have to circumvent the expert system/visual tooling used by the content creators and write new components by hand in JavaScript. The content creators feel increasingly that their hands are tied, as they are now unable to contribute new components.
Design approach: Traditional/Typical
I have been advocating for the complete abandonment of the previous model and the adoption of a typical software development process. As mentioned earlier, the project has naturally evolved towards this as the non-programmatic development tooling has become incapable of meeting the needs of the business.
The content creators have a very valuable contribution to make however, and I would like to see them focusing on formally specifying the expected behaviour of the software with tools like Cucumber, instead of being involved in implementation.
Design approach: Non-programmatic
My co-worker, who I respect a great deal and suspect is far more knowledgeable than me, feels that the existing process is fine and that we just need to build better tooling. I can't help but feel, however, that there is something fundamentally flawed with this approach. I have yet to find one instance, either historical or contemporary, where this model of software development has been successful. COBOL was developed with the philosophy of allowing business people/domain experts to write applications without the need for a programmer, and to my mind all this did was create a new kind of programmer - the COBOL programmer. If it were possible to develop effective systems allowing non-programmers to create non-trivial applications, surely the demand for programmers would be much lower? The only frameworks that I am aware of that roughly fit this model are SAP's Smart Forms and Microsoft's Dynamics AX - both of which are very domain-specific ERP systems.
DSLs, Templating Languages
Something of a compromise between the two concepts would be to implement some kind of DSL as a templating language. I'm not even sure that this would be successful, however, as all of the content creators, with one exception, are completely non-technical.
I've also considered building a custom IDE based on Visual Studio or NetBeans with graphical/toolbox-style tooling.
Thoughts?
Is non-programmatic development a fool's errand? Will it always result in something unsatisfactory, requiring hands-on development from a programmer?
Many thanks if you've taken the time to read this, and I'd certainly appreciate any feedback.
You say:
Something of a compromise between the two concepts would be to implement some kind of DSL as a templating language. I'm not even sure that this would be successful however, as all of the content creators, with one exception, are completely non-technical.
Honestly, this sounds like exactly the approach I would use. Even "non-technical" users can become proficient (enough) in a simple DSL or templating language to get useful work done.
For example, I do a lot of work with scientific modeling software. Many modelers, while being much more at home with the science than with any form of engineering, have been forced to learn one or more programming languages in order to express their ideas in a way they can use. Heck, as far as I know, Fortran is still a required course in order to get a Meteorology degree, since all the major weather models currently in use are written in Fortran.
As a result, there is a certain community of "scientific programming" which is mostly filled with domain experts with relatively little formal software engineering training, expertise, or even interest. These people are more at home with languages/platforms like Matlab, R, and even Visual Basic (since they can use it to script applications like Excel and ESRI ArcMap). Recently, I've seen Python gaining ground in this space as well, mainly I think because it's relatively easy to learn.
I guess my point is that I see strong parallels between this field and your example. If your domain experts are capable of thinking rigorously about their problems (and this may not be the case, but your question is open-ended enough that it might be) then they are definitely capable of expressing their ideas in an appropriate domain-specific language.
I would start by discussing with the content creators some ideas about how they would like to express their decisions and choices. My guess would be that they would be happy to write "code" (even if you don't have to call it code) to describe what they want. Give them a "debugger" (a tool to interactively explore the consequences of their "code" changes) and some nice "IDE" support application, and I think you'll have a very workable solution.
Think of spreadsheets.
Spreadsheets are a simple system that allows non-technical users to make use of a computer's calculation abilities. In doing so, they have opened up computers to solve a great number of tasks which would normally have required custom software. So, yes, non-programmatic software development is possible.
On the other hand, look at spreadsheets. Despite their calculational abilities you really would not want as a programmer to have to develop software with them. In the end, many of the techniques that make programming languages better for programmers make them worse for the general population. The ability to define a function, for example, makes a programmer's life much easier, but I think would confuse most others.
Additionally, past a certain point of complexity, trying to use a spreadsheet is a real pain. The spreadsheet works well within the realm for which it was designed; once you stray too far outside that, it's just not workable. And again, it's the very tools programmers use to deal with complexity that will prevent a system from being both widely usable and sufficiently powerful.
I think that for any given problem area, you could develop a system that allows the experts to specify a solution. It will be much harder to develop that system than to solve the problem in the first place. However, if you repeatedly have similar problems which the experts can develop solutions for, then it might be worthwhile.
I think development by non-developers is doomed to failure. It's difficult enough when developers try it. What's the going failure rate? 50% or higher?
My advice would be to either buy the closest commercial product you can find or hire somebody to help you develop a custom solution with your non-developer maintenance characteristics in mind.
Being a developer means keeping a million details in mind at once and caring about details like version control, deployment, testing, etc. Most people who don't care about those things quickly tire of the complexity.
By all means involve the domain experts. But don't saddle them with development and maintenance as well.
You could be putting your organization at risk with a poorly done solution. If it's important, do it right.
I don't believe any extensive non-programmer solution is going to work. Programming is more than language, it's knowing how to do things reasonably. Something designed to be non-programmer friendly will still almost certainly contain all the pitfalls a programmer knows to avoid even if it's expressed in English or a GUI.
I think what's needed in a case like this is to have the content creators worry about making content and an actual programmer translate that into reasonable computer code.
I have worked with two ERP systems that were meant for non-programmers and in both cases you could make just about every mistake in the book with them.
... Simple things, like creating HTML forms using the existing tooling, are much more arduous than they should be ...
More arduous for whom? You're taking a development model that works (however badly) for the non-programming content creators, and because something is arduous for someone you propose to replace that with a model where the content creators are left out in the cold entirely? Sounds crazy to me.
If your content creators can learn custom tooling built around a Prolog rules engine, then they have shown they can learn enough formalism to contribute to the project. If you think other aspects of the development need to be changed, I see only two honorable choices:
Implement the existing formalism ("custom tooling") using the new technology that you think will make things better in other ways. The content creators contribute exactly as they do now.
Design and implement a domain-specific language that handles the impedance mismatch between what your content creators know and can do and the way you and other developers think the work should be done.
Your scenario is a classic case where a domain-specific language is appropriate. But language design is not easy, especially when combined with serious usability questions. If you are lucky you will be able to hire someone to help you who is expert in both language design and usability. But if you are a nonprofit, you probably don't have the budget. In this case one possibility is to look for help from another nonprofit - a nearby university, if you have one.
I'd advise you to read this article before attempting to scrap the whole system. I look at it this way: what changed to prompt the redevelopment? Your domain experts haven't forgotten how to use the original system, so you already have some competent "COBOL programmers" for your domain. From your description, it sounds like mostly the performance requirements have changed, along with a greater need for web forms.
Therefore, the desired solution isn't to change the responsibilities of the domain experts, it's to increase the performance and make it easier to create web forms. You have the advantage of an existing code base showing exactly what your domain experts are capable of. It would be a real shame not to use it.
I realize Prolog isn't exactly the hottest language around, but there are faster and slower implementations. Some implementations are designed mostly for programmer interactivity and are dynamically interpreted. Some implementations create optimized compiled native code. There are also complex logical programming techniques like memoization that can be used to enhance performance, but probably no one learns them in school anymore. A flow where content creators focus on creating new content and developers focus on optimization could be very workable. Also, Prolog is ideally suited for the model layer, but not so much for the view layer. Moving more of your view layer to a different technology could really pay off.
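As an aside, memoization itself is only a few lines in most languages (in Prolog implementations the idea usually goes by the name "tabling"). A minimal C# sketch, with a hypothetical Memoize helper and ExpensiveRuleLookup function, just to show the shape of it:
//Cache the results of a pure but expensive function, so repeated
//queries with the same input return instantly. Illustrative only.
using System;
using System.Collections.Generic;
public static class Memo
{
    public static Func<TIn, TOut> Memoize<TIn, TOut>(Func<TIn, TOut> f)
    {
        var cache = new Dictionary<TIn, TOut>();
        return input =>
        {
            if (!cache.TryGetValue(input, out var result))
            {
                result = f(input);     //compute once...
                cache[input] = result; //...reuse on every later call
            }
            return result;
        };
    }
}
//Usage: var classify = Memo.Memoize<string, string>(ExpensiveRuleLookup);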
In general though, 2 thoughts:
You cannot reduce life to algorithms. Everything we know (philosophically, scientifically) and experience demonstrates this. (Sorry, Dr. Minsky).
That said, a domain-specific tool that allows non-programmers to express a finite language is definitely doable, as several people have given examples. Another example of this type of system is Mathematica, and especially Simulink, both used very successfully over a range of applications. However, the failure of expert systems, fuzzy logic, and Japan's Fifth Generation computer project of the '80s to take off demonstrates the difficulty of doing this.
LabVIEW is a very successful non-programmatic programming environment.
What an interesting problem.
I would have to ultimately agree with you, and disagree with your colleague.
The philosophy and approach of Domain Driven Development/Design exists exactly for your purposes, in that it puts paramount importance on the specific knowledge of the experienced domain experts, and on communicating that knowledge to talented software developers.
See, in your issue, there are two distinct things. The domain, and the software. The domain should be understood and specified first and foremost without software development in mind.
And then the transformation to software happens through communication between the domain experts and the programmers.
I think trying to build "programming" tools for domain experts is a waste of time.
In Domain Driven Development your domain experts will continue to be important, and you'll end up with better software.
In your colleague's approach you're trying to replace programmers with tools... maybe in the future - like, Star Trek future - that will be possible, but today I don't think so.
I am currently struggling with a similar problem in trying to enable healthcare providers to write rules for workflows, which isn't easy because they aren't programmers. You're a programmer not because you went to programming school -- you're a programmer because you think like a programmer. Fortunately, most hospitals have some anesthesiologist or biomedical engineer who thinks like a programmer and can manage to program. The key is to give the non-programmers-who-think-like-programmers a language that they can learn and master.
In my case, I want doctors to be able to formulate simple rules, such as: "If a patient's temperature gets too low, send their doctor a text message". Of course it's more complicated than that because the definition of "too low" depends on the age of the patient, a patient may have many doctors, and so on, but a smart doctor will be able to figure out those rules. The real issue is that the temperature sensor will often fall off the patient and read ambient temperature, meaning that the rule is useless unless you can figure out how to determine that the thermometer is actually reading the patient's temperature.
The big problem I have is that, while it's relatively easy to create a DSL so that doctors can express IF [temperature] [less-than] [lookup-table [age]] THEN [send-text-message], it's much harder to create one that can express all the different heuristics you might need to try before coming up with the right way to make sure the reading is valid.
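To make that concrete, here is roughly what the "easy half" looks like in C#. All the names and threshold values below are made up for illustration, not clinical guidance:
//IF [temperature] [less-than] [lookup-table [age]] THEN [send-text-message]
using System;
public record Patient(int AgeYears, double TemperatureC);
public static class LowTemperatureRule
{
    //Age-banded "too low" thresholds - the [lookup-table [age]] part
    private static readonly (int MaxAge, double MinTempC)[] Table =
    {
        (1, 36.5),
        (12, 36.2),
        (999, 36.0),
    };
    public static bool IsTooLow(Patient p)
    {
        foreach (var (maxAge, minTemp) in Table)
            if (p.AgeYears <= maxAge)
                return p.TemperatureC < minTemp;
        return false;
    }
    public static void Check(Patient p, Action<string> sendTextMessage)
    {
        if (IsTooLow(p))
            sendTextMessage($"Temperature {p.TemperatureC:F1} is below threshold");
    }
}
Note what is missing: nothing in that code can tell whether the sensor is still attached to the patient, and that heuristic part is exactly what resists being captured in a simple grammar.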
In your case, you may want to consider how VB became popular: It has a form drawing tool that anybody can easily use to draw forms and set properties on form items. Since not everything can be specified by form properties and data binding, there's a code-behind mode that lets you do complex logic. But to make the tool accessible to beginning programmers, the language is BASIC, so users didn't have to learn about pointers, memory allocation, or data structures.
While you probably wouldn't want to give your users VB, you might consider a hybrid approach. You would have one "language" (it could be graphical, like VB's form designer or LabVIEW) where inexperienced users can easily do the simple stuff, and another language to enable expert users to do the complicated stuff.
I had this as a comment previously but I figured it deserved more merit.
There are definitely a number of successful 'non-programmatic' tools around; offhand I can think of LabVIEW, VPL and graph-based shaders (edit: I just noticed this link has far more than just shaders on it), which are prevalent in 3D applications.
Having said that, I don't know of any which are suited to web based dev (which appears to be your case).
I doubt very much that the investment in developing such tools would be worth it (unless maybe you could move to selling it as a product as well).
I agree with you - non-technical people will not be able to program anything non-trivial.
Some products try to create what's basically a really simple programming language. The problem is that programming is an aptitude as well as a skill. It takes a certain kind of mindset to think in the sort of logic used by computers, and that just can't be abstracted away by any programming language (at least not without making assumptions that it can't safely make).
I've seen this in action with business people trying to construct workflows in MS Dynamics CRM. Even though the product was clearly intended to allow them to do it without a programmer they just couldn't figure out how to make a loop or an if-else condition work, no matter how "friendly" the UI tried to make it. I watched in amazement as they struggled with something that seemed completely self-obvious to me. After a full day (!) of this they managed to produce a couple of very basic workflows that worked in some cases, but didn't handle edge cases like missing values or invalid data. It was basically a complete waste of time.
Granted, Dynamics CRM isn't exactly the epitome of user-friendliness, but I saw enough to convince me that this is, indeed, a fool's errand.
Now, if your users are not programmers, but still technical people they might be able to learn programming, but that's another story - they've really become "new programmers" rather than "non-programmers" then.
This is a pretty philosophical topic and difficult to answer for every case, but in general...
Is non-programmatic development a fool's errand?
Outside of a very narrow scope, yes. Major software vendors have invested billions over the years in creating various packages to try to let non-technical users create & define workflows and processes with limited success. Your best bet is to take advantage of what has been done in that space rather than trying to re-invent it.
Edit:
SharePoint, InfoPath and some SAP products are the examples I'm talking about. As I said above, "a very narrow scope". It's possible to let non-programmers create workflows, complex forms and some domain processes, but that's it. Anything more general-purpose is simply trying to make non-programmers into programmers by giving them very crude tools.
Non-programmatic software development IS feasible, as long as you are realistic about what non-developers can reasonably achieve - it's all about a compromise between capability and ease of use.
The key is to break the requirements down cleanly into things that the domain experts need to be able to do, and spend time implementing those features in a foolproof way - the classic mistake is to try to let the system do too much.
For example, suppose you want domain experts to be able to create a form with a masked text input:
Most developers will look at that requirement and create a fancy control which accepts some sort of regular expression and lets the domain expert do anything.
This is the classic developer way of looking at things; however, it's likely that your domain expert does not understand regular expressions, and the developer has missed the point of the requirement, which was for the domain expert to be able to create this form.
A better solution might have been to create a control that can be configured to mask either email addresses or telephone numbers.
Yes, this control is far less capable than the first one, and yes, the domain experts have to ask developers to extend it when they want to mask car registration numbers, but the domain experts are able to use it.
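A sketch of what that "less capable" control might look like; the MaskType names and the patterns are placeholders I've invented. The point is that the regular expressions live with the developer, while the domain expert only ever chooses a named mask:
using System;
using System.Text.RegularExpressions;
//The domain expert picks a named mask; only developers ever see or
//extend the regular expressions hiding behind each name.
public enum MaskType { Email, Telephone }
public class MaskedTextInput
{
    private readonly Regex pattern;
    public MaskedTextInput(MaskType mask)
    {
        switch (mask)
        {
            case MaskType.Email:
                pattern = new Regex(@"^[^@\s]+@[^@\s]+\.[^@\s]+$");
                break;
            case MaskType.Telephone:
                pattern = new Regex(@"^\+?[0-9 ()-]{6,20}$");
                break;
            default:
                throw new ArgumentOutOfRangeException(nameof(mask));
        }
    }
    public bool IsValid(string input)
    {
        return pattern.IsMatch(input);
    }
}
Supporting car registration numbers later means adding one enum value and one pattern - a developer task, but a small one, which mirrors the "ask developers to extend it" step above.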
It seems that the problem is organizational in nature and cannot be solved by technical means alone.
The root is that the content creators are completely non-technical, yet have to perform the inherently technical tasks of designing forms and writing Prolog rules. Various designers and DSLs can alleviate their problems, but never solve them.
Either reorganize the system and processes so that the knowledge carriers actually enter knowledge - nothing else; or train the content creators to perform the necessary development with existing tools or maybe DSLs.
Non-programmatic development can save them from low-level chores, but striving to set up a system once and let users expand it indefinitely and unrestrictedly is certainly a fool's errand.
Computer games companies operate like this all the time, so far as I can tell: a few programmers and a lot of content creators who need to be able to control logic as well (like level designers).
It's also probably a healthy discipline to be able to separate your code from the data and rules driving it if you can.
I'm therefore with your colleague, but of course the specifics may not make this general solution appropriate!

Right Language for the Job

Using the right language for the job is key - that's a comment I've read on SO, and I also believe it's the right thing to do. Because of this, we ended up using different languages for different parts of the project - Perl, VBA (Excel macros), C#, etc. We currently have three to four languages in use inside the project. Using the right language for the job has made it immensely easier to automate things, but of late people are complaining that any new person who has to take over the project will have to learn that many different languages to get started. It is also difficult to find such a person. Please note that at most one to two people work on the project at any given time. I would like to know if the method we are following is right, or whether we should converge on a single language and try to use it for every job, even though another language might be better suited for some of them. Your experience in this area would also help.
Languages used and their purpose:
Perl - processing large text files (log files)
C# with Silverlight - web-based reporting
LabVIEW - automation
Excel macros - processing data in Excel sheets, generating graphs and exporting to PowerPoint
I'd say you are doing it right. Almost all the projects I've ever worked on have used multiple languages, without any problems. I think you may be overestimating the difficulty people have picking up new languages, particularly if they all use the same paradigm. If your project were using Haskell, Smalltalk, C++ and assembler, you might have difficulties, I must admit :-)
Using a variety of languages vs. maintainability is simply another design decision with cost-vs-benefit trade-offs that, like any design decision, needs to be considered carefully. While I like using the "absolute best" tool for the task (whatever "absolute best" means), I wouldn't necessarily use it without considering other factors such as:
do we have sufficient skill and experience to implement successfully
will we be able to find the necessary resources to maintain it
do we already use the language/tech in our system
does the increase in complexity to the overall system (e.g. integration issues, the impact on build automation) outweigh the benefits of using the "absolute best" language
is there another language that we already use and have experience in that is usable in lieu of the "absolute best" language
I worked on a system with around a dozen engineers that used C++, Java, SQL, TCL, C, shell scripts, and just a touch of Perl. I was proud that we used the "best language" where it made sense, but, in one case, using the "best language" (TCL) was a mistake - not because it was TCL - but rather because we failed to observe the full costs-vs-benefits of the choice:
we had only 1 engineer deeply familiar with TCL - the original engineer who refused to use anything but TCL for a particular target component - and then that engineer left the project
this target component was the only part of the system to use TCL and it was small relative to the other components in the system
the component could also have been implemented in another language we already used and had plenty of experience in (C or C++), with some extra effort
the component appeared deceptively simple, but in reality had subtle corner cases that bit us in production (not something we could have known then, but always something to consider as a possibility later)
we had to implement special changes to the nightly build because the TCL compiler (yes, we compiled the TCL code into an executable) wouldn't work unless it could first throw its logo up on the screen - a screen which wasn't available during a cron-initiated automated nightly build. (We resorted to using xvfb to satisfy it.)
when bugs turned up, we had a difficult time finding engineers who wanted to work on it
when problems with this component cropped up only after sustained load in the field, we lacked the experience and deep understanding of the TCL execution engine to easily debug it in the field
finally, the maintenance and sustainment team, which is a much smaller team with fewer resources than the main development team, had one more language that they needed training and experience in just to support this relatively small component
Although there were a number of things we could have done to head off some of the issues we hit down the road (e.g. putting more emphasis on getting TCL experience earlier, running better tests to detect issues earlier, etc.), my point is that the overall cost-vs-benefit didn't justify using TCL for that single component. Again, it was not a mistake to use TCL because it was TCL (TCL is a fine language); rather, it was a mistake because we failed to give full consideration to the costs-vs-benefits.
As a Software Engineer it's your job to learn new languages if you need to. I would say you should go with the right tool for the job.
It's like sweeping the floor with an octopus. Yeah, it gets the job done... kind of... but it's probably not the best tool for the job. You're better off using a mop.
Another option is to create positions geared towards working in specific languages. So you can have a C# developer, a Perl developer, and a VBA developer who will only work with that language. It's a bit more overhead, but it is a workable solution.
Any modern software project of any scope -- even if it's a one-person job -- requires more than one language. For instance, a web project usually requires Javascript, a backend language, and a DB query language (though any of these might be created by the backend language). That said, there's a threshold that is easily reached, and then it would be very hard to find new developers to take over projects. What's the limit? Three languages? Four? Let's say: five is too many, but one would be too few for any reasonably complex project.
Using the right language for the right job is definitely appropriate - I am mainly a web programmer, and I need to know server-side programming (Rails, PHP and others), SQL, JavaScript, jQuery, HTML and CSS (not strictly programming languages, but complex things I need to know) - it would be impossible for me to do all of that in a single language or technology.
If you have a team of smart developers, picking up new languages will not be a problem for them. In fact they probably will be eager to do so. Just make sure they are given adequate time (and mentoring) to learn the new language.
Sure, there will be a learning curve to implementing production code if there is a new language to learn, but you will have a stronger team member for it.
If you have developers in your teams who strongly resist learning new languages, unless there is a very good reason (e.g. they are justifiably adamant that they are being asked to use a different language when it is not appropriate to do so), then they are not the sort of developers I'd want in my teams.
And don't bother trying to hire people who know all the languages you use. Hire smart programmers (who probably know at least one language you use) - they should pick up the other languages just fine.
I would be of the opinion that if a programmer on my team wanted to introduce a second (or third) language into a project, there had better be a VERY VERY good reason to do so; as a project manager I would need to be convinced that the benefits of doing so more than offset the problems. And it would take a lot of convincing.
Splitting a project up into multiple languages makes it very expensive to hire the right person(s) to take over that project when it needs maintenance. For small and medium shops it could be a huge obstacle.
Edit: I am not talking about using JavaScript and C# on the same project; I am talking about using C# for most of the code, F# for a few parts, and then VB or C++ for others - there would need to be a compelling reason.
KISS: "Keep it simple, stupid" is a good axiom to follow in most cases.
EDIT: I am not completely opposed to adding languages, but the burden of proof is on the person who wants to do it. KISS (to me) applies not only to getting the project/product done and shipped, but also to the lifetime maintenance and support requirements. Lots of languages come and go, and programmers are attracted to new languages like moths to a light. Most projects I have worked on, I still oversee and/or support 5 or 10 years later - the last thing I want to see is some long-forgotten and/or orphaned language responsible for some key part of an application I need to support.
My experience using C++ and Lua was that I wrote more glue than actual operational code and for dubious benefit.
I'd start by saying the issue is whether you are using the right paradigm for the job.
Suppose you know how to do object-oriented programming in C#. I don't think the leap to Java is all that great. Although you'd have to familiarize yourself with libraries and syntax, the idea is pretty similar.
If you have procedural parts to your project, such as parsing files and various data transformations, your Perl/Excel Macros seem pretty similar.
But to address your issues, I'd say above all, you'll need clarity in code. Since your staff are using several languages, they won't be familiar with all languages to an equal degree. So make sure of this:
1) Syntactic sugar is explained in comments. Sugar is pretty language-specific and may not be obvious to a new reader. For instance, in VBA I seem to remember there are default properties, so that TextBox1 = "Hello" is what you'd write instead of TextBox1.Text = "Hello". This can be confusing. Also, things like LINQ statements can have un-obvious meanings. So make sure people have comments to read.
2) Where two components from different languages have to work together, document in excruciatingly specific detail how that happens. For instance, I once had to write a C component to be called from Excel VBA. There are quite a few steps and potential errors in doing this, especially as far as compiler flags go. Make sure both sides know how their side of the interaction occurs.
Finally, as far as hiring people goes, I think you have to find people who aren't married to a specific language. To put it vaguely, hire an intelligent person who sees business issues, not code. He'll learn the lingo soon enough.

Company standards: C#.NET vs VB.NET vs. whatever.NET

Just a question that came up from time to time at my old job when we were considering fleshing out our development staff with additional bodies: does it really matter, if you are a .NET development house, whether your developers all code in one language vs. another?
I probably started out like a lot of the 4 million other folks out there with Visual Basic way back when, and then migrated to VB.NET. Another developer we had at the time came from a C background and migrated over to C#. Basically he was able to code very quickly in his native language and I was able to do so in mine, and since our projects did not really overlap there was no issue, until our boss basically said we need to switch to C#... for no other reason than standardization.
So I guess the 'subjective' part of the question is: is it better to sacrifice productivity for consistency? Now I should qualify this by saying we were a SMALL shop, fewer than 5 developers, and given how most of our project plans were done on cocktail napkins it's not like we were going for Six Sigma anytime soon, so it was not as if 'standards' were a hard and fast rule.
Thanks.
C# is definitely becoming the preferred avenue for Microsoft, and it would make sense to choose a single language for all developers to use, simply based on the fact that if developers move between tasks, having the projects in a uniform language (not to mention uniform coding conventions) will make maintenance on those projects much easier for everyone involved.
If it helps, I recently wrote an article on my blog looking back at my decision to standardize our development shop on VB.NET over C# and how well my various assumptions have held up and things hindsight has shown me.
Here's a link: "A Manager’s Retrospective on the C# versus VB.NET decision"
As to whether you should standardize the whole shop on one language. I'd say there are definite benefits. The most obvious one is that switching back and forth when you work on different projects is going to incur some mental overhead to re-acclimate yourself each time. Granted, it is going to be less if you have to do it frequently, but to justify the extra complexity there would have to be some offsetting benefit.
Ultimately my advice is to standardize on one or the other based on the following criteria:
(1) If you have a large VB code-base there is a good argument for standardizing on VB.NET.
(2) If everything else is equal, I'd lean towards C# for a number of reasons mentioned in the above article.
(3) Carefully take into account your current team's preferences and the general consensus among developers in your market that you would try to recruit. Language choice is a big deal for morale and hiring.
I think it makes sense to pick one and get everyone on it. The sacrifice in productivity will be brief, competent developers will migrate quickly.
Some advantages:
your last VB developer and the team get downsized...uh oh.
one of the developers quits and you have to take over their code
etc.
I'd say go for whatever works. VB.NET and C# use the same underlying framework, so you should be able to read each other's code, and with a little googling you should be able to modify it too.
I'd say go for productivity.. ultimately, is the customer going to care whether it's written in VB.NET or C#? But having said that, you do have to think about maintainability.
So I guess the 'subjective' part of the question is, is it better to sacrifice productivity for consistency?
Of course not. What good is consistency by itself? Ultimately the goal of consistency is productivity. With that in mind, I would suggest that, long term, having everyone code in one language would be the best way to go... in general, over the long haul, a lot more time is spent modifying and maintaining code than writing it, and I would think that having people spend the ramp-up time once would be better than having them switch back and forth all the time.
our boss basically said we need to switch to C#... for no other reason than standardization. (...) So I guess the 'subjective' part of the question is, is it better to sacrifice productivity for consistency?
Of course not. The only question is what value this 'consistency' has in terms of future productivity, because productivity is all that matters. 'Consistency' by itself is not a business value. It may only help you to achieve more productivity in the future.
Several factors must be balanced:
(+) Tomorrow people will have to maintain only one language, the most popular .Net language: C#. (See Note 1)
(-) Today you must learn C#, which is a cost for your company, unless you do it for free on your weekends. Of course this is a (-) in terms of cost; learning C# is good in itself, it just has a cost.
(-) Today after learning C#, you or someone else must switch one project from Visual Basic .Net to C#. That has a cost, depending on the LOC count of the project. Also, errors might be introduced, so you need to rerun your unit tests, go over all the failures and work them out. Also, you need to repeat some integration testing, to guarantee that everything works well.
You can't tell what the outcome of the balance is from this information alone. But you don't have to, anyway, since your boss presumably has better information and has already taken the decision. All that you need to do is actually do it.
Note 1: However, this reasoning is dangerously close to "the world would be a better place if VB.NET didn't exist", which may be too strong a statement. VB.NET, or a mix of both languages, might be a good option in some scenarios.
We're still a mixed-language group where I work; however, mixed with a purpose. Nearly all hardware-related projects are done in C++ (not assembly, I know :() and nearly all Windows applications are done in C#.
Considering C#'s popularity and ease, I'd say it's in a company's interest to move to it for when they take on new, greener employees.
How do you see yourselves as a shop?
VB.NET is strong in markets that lean directly on Office, and maybe client apps too, while C# is particularly strong in server-side and larger applications.
Maybe it is not about following your choice or his, but about making a choice that aligns with the nature of the shop.
It is important to stick to a single language choice, if possible. Obviously, if you are writing unmanaged C++, or have some other similar scenario, there would be notable exceptions. Many people have hit on the efficiency of homogeneous language choices once you have ramped up. Most of the developers I know will run into an efficiency issue when switching languages, even if they know both inside and out.
After coding in C# for a couple of months, if I have to go back to VB.NET to maintain something, there is a good day where I have to consciously think about things that are normally automatic. They are simple things, like using vs. Imports, type conversion, etc. It isn't that you forget how it works, but that your mind wants to do things the way you have been doing them for an extended period. A lot like the old parlor trick of pushing your arms out against a door frame for a minute and then resting them at your sides: they want to go back up, because that is the signal your brain is used to sending.
Another very important takeaway is that this efficiency factor matters in code reviews as well. If you are working in VB.NET and have to review C#, you are going to hit the same stumbling blocks. Sure, you understand the code, but your rate and accuracy in analyzing the code will not be the same as they would be if it were in the language you are currently using.
There are some additional minor points. Some tooling is geared towards one language over another. For example, ReSharper works for both VB.NET and C#, but it provides a larger set of refactorings if you are using C#. Also, if someone in your organization maintains a company coding standard, changes to those standards are half as complex if only a single language needs to be addressed.
I think this also goes along with coding style standards. Is there really a reason why a development team chooses to go with 3 spaces for tabs rather than 4? Not really; there is no technical or business advantage to this decision other than uniformity, and making everyone's code look the same so that when I switch between files my brain doesn't have to adjust to new formatting.
The same thing applies here with uniformity between C# and VB.NET. There is no technical or business advantage to taking one over the other (NOTE: I'm sure if you looked hard enough you could find something really obscure). But having everyone's code uniform helps when moving from project to project, so people don't have to switch gears from thinking in C# to thinking in VB.NET.
One of the advantages of using the .NET platform for development is that you have the ability to develop in the language that is best able to express your solution to the problem at hand.
An arbitrary rule IMO should not prevent you from using F# or a DLR language when those languages would be a more natural fit than an imperative language like C# or VB.Net.
It's a harder decision to make with VB.NET vs. C# because the languages are so similar, but if you need to do a lot of work with XML then VB.NET's XML literals may help make the code clearer; conversely, C#'s terse syntax and support for anonymous methods may make your intent clearer.
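As a small, made-up illustration of the C# side (the Invoice type and the data are invented for the example):
using System;
using System.Linq;
public class TerseSyntaxDemo
{
    private record Invoice(string Number, decimal Amount, DateTime DueDate);
    public static void Main()
    {
        var invoices = new[]
        {
            new Invoice("A-1", 100m, DateTime.Today.AddDays(-3)),
            new Invoice("A-2", 250m, DateTime.Today.AddDays(7)),
        };
        //Lambdas keep the filter-and-project pipeline to two short lines
        var overdue = invoices.Where(i => i.DueDate < DateTime.Today)
                              .Select(i => $"{i.Number}: {i.Amount:C}");
        foreach (var line in overdue)
            Console.WriteLine(line);
    }
}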

What became of 'The last one'? [closed]

From Wikipedia:
The Last One was a unique software program in 1981 which took input from a user and generated a program in BASIC which could then be run. It is an example of a program generator.
The software was not a programming language, since unlike most programming languages, programs were generated by the user selecting options from menus that would form the basis of the generated code. This was done in a logical sequence that would eventually cause a program to be generated in BASIC. At any time, the user could elect to view a flow chart showing the current progress of the program's design.
But Wikipedia didn't say what became of this program. How popular/unpopular was it, and how many people use it? How and when did it meet its demise, or is it still available?
More information available here.
Here's the current story AFAICT: this article mentions that the consulting firm they formed way back then to put TLO into play was named DJ 'AI' Systems and is now tloconsultants.com (tlo == The Last One). Cha-ching :-)
My guess (after a 2-min site scan) is that they grew their business by continually expanding what appear to be business-oriented expert-system "modules" that the generated code ran against (and which perhaps even assisted in or guided some of the code generation, most likely for the code that targeted its own routines), and then incorporated the knowledge of how to use the new modules back into TLO. Very impressive, especially for 1981, and with an engine that knew when it didn't know enough -- ScHrIaTp! I wish my manager had 1/10th that functionality.
And you gotta love that it took five minutes to generate 100 bug-free lines of BASIC code.
I'm curious as to whether they ever "closed the loop" (my term), because I didn't see it mentioned (I didn't read it fully, due to that dang corporate job and its fake-time-based insanity) whether they actually reached the point where TLO's own representation could be manipulated within it in order to generate the next version of TLO itself. The name "The Last One" suggests to me that David James fully understood the meaning of manifesting a piece of software capable of presenting its own representation to the user (== programmer) for modification, with the end purpose being to generate its own subsequent version.
All such self-repping-and-editing programs (live processes are, IMO, far more difficult, while also being tantalizingly more interesting) are actually, from my perspective, equivalent in the sense that they are all 'functions that transform functions that transform functions' (how about 'FtTFtTF's -- appropriately absurd and lovely, IMO :-)
Trying to wrap one's head around how to implement such a beautiful piece of software in the face of its myriad possibilities is the kind of programming puzzle that brings home why MDD is both the current brightest idea and, simultaneously, rarely used in real-world projects. Your brain had better be firing on ALL cylinders to go treading that path. How long has it taken Simonyi and his billions?
I am also curious as to whether there are infinite variations of FtTFtTFs or just lots and lots and lots of them.
Enjoy!
"Lasting Peace and Happiness for all Human Beings!"
Well, I found a blog article by a person who did a major interview with the creator of "The Last One". At the time of the writing of the article (2007), he was still working with one of the creators of "The Last One". You can probably ask him what became of it.
TRUE STORY!!!
I was the director when TLO first came to America from England. The company spent so much time trying to find the right marketing avenue that the bubble passed them by. We all did 180 seminars with crowds of 50 to 100 each, in as many days. There was Scot Norton, Gil Savage, Rodger David and Richard Housand, and me, Michael Bartolucci. We had an exclusive for the US, which I cry about every time I think about it. We decided to write an accounts receivable package and give it away with the program. Then in a week it changed to general ledger, then AP, and so on. Had we taken one idea (AR) and run with it, I think we could have had our dream come true. It was a viable program. We took a voice generator that was presented at the 1981 Computer Conference and teamed up with them. I wrote a BASIC program in front of 50 press members (mostly from Europe). It was error-free and took about 20 minutes: it created a simple database and would add, change, and delete members of the database from a central menu. We did this on the third day of the conference in Houston, TX. When our marketing failed, so did the company. I understand the original company took it into receivership and decided not to pursue it further.
That was my second job in as many years. I continued on for another 38 more very successful years in the computer field.
The next step of the evolution was 4GL languages and CASE tools. After that, we got UML and, today, MDD.
All of those come with a greater or lesser amount of tool support to generate code from some abstract "input". All of them more or less failed for the general case, since the general case isn't abstract enough to map to some formal and simple input.
Today, MDD is a solution for highly repetitive tasks and other programming tasks that can be easily abstracted. Think "copy data out of XML" (highly abstract, good tool support) vs. "calculating the gravity field of a black hole" (very specific, no reuse, little tool support).
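The XML case really is that abstract: a few lines of C# with LINQ to XML do the whole job (the element and attribute names here are invented for the example):
using System;
using System.Linq;
using System.Xml.Linq;
public class XmlCopyDemo
{
    public static void Main()
    {
        //"Copy data out of XML": the platform does nearly all the work
        var doc = XDocument.Parse(
            "<orders><order id='1' total='9.99'/><order id='2' total='20.00'/></orders>");
        var rows = doc.Descendants("order")
                      .Select(o => (Id: (int)o.Attribute("id"),
                                    Total: (decimal)o.Attribute("total")));
        foreach (var (id, total) in rows)
            Console.WriteLine($"order {id}: {total}");
    }
}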
[EDIT] As for the history of "The Last One": probably no one adopted it. Code generators have always been a bit neglected. My guess is that this is because of the many pitfalls: if you need a million lines of code that all look the same, then a code generator is really cool. But you never need that. You need a million lines of code that are somewhat similar, where "somewhat" is often different from line to line.
But if no one here can answer what happened to the old program, I suggest asking this question on the respective Wikipedia discussion page (see "discussion" at the top of the wiki page). The people who wrote the article might know.
The Last One (TLO) was written by a bloke called David James, who was funded by "Scotty" Banbury, at the time a businessman whose main interest was a company called "Hilltop Tyres", based near Axminster in Devon.
It started life as a simple program generator on 6502-based machines, particularly the Commodore PET and the Apple II. After a while, David dropped out and Scotty morphed into the principal author. He recoded the product as a meta-generator, creating a new language which could, in theory, be retargeted at other languages. He spent a lot of time on C as a target, but I don't know if he got that going, as I lost contact with Scotty and the product in the early 'nineties.
These language generators were popular at that time, another being Sycero/DB, which could generate both Clipper/dBase code and quite clean ANSI C.
When first put on the market, both TLO and Sycero were useful tools for the bottom end of the market and their output was used even by quite large companies. The problem was that they generally used canned modules and simple substitution to create the target programs, although I think Scotty was experimenting with something that looked a bit like a bi-directional parser, able to translate BASIC into TLO as well as the other way around.

Do you design/sketch/draw a development solution first and then develop it? If so how? [closed]

I work a lot with decision makers looking to use technology better in their businesses. I have found that a picture is worth a thousand words, and prototyping a system in a diagram of some sort always lends a lot to a discussion. I have used Visio, UML (somewhat), mind maps, flow charts and mocked-up WinForms to start the vision for these sponsors and to ensure that everyone is on the same page. I always seem to be looking for a common process that can be used to knit the business vision to the development process, so that we all end up at the same end: "something functional that solves a problem".
I am looking for suggestions or Cliff notes on how to approach the design process such that it works for applications that may only take a week to develop, but can also be used to encompass larger projects as well.
I know that this delves into the area of UML, but I have found that I have a hard time finding a guide to using the various diagram types appropriately, let alone helping business users understand the diagrams and relate to them.
What do you use to capture a vision of a system/application to then present to the sponsors of a project? (all before you write a single line of code)...
Paper or whiteboard!
For the lone developer, I'd recommend paper. At least at first; eventually you may want to formalize it with UML, but I don't think it's necessary.
For a group of developers that work together (physically), I'd recommend a whiteboard. That way it's visible to everyone, and everyone can improve and contribute. Again, you may want to formalize at this point, but I still don't think it's necessary.
When I first started doing OOP (or designing an algorithm), I'd do it all in my head while coding. But after doing some reasonably complex projects, I definitely see the benefit of drawing out the interactions between the different objects in a system.
I do projects by myself, so I use lots of index cards for designing classes and paper for their interactions. The point is that it needs to be easy to change. I've played with Dia, a diagram editor, for doing UML, but making changes is too difficult. I want to be able to make quick changes, so I can figure out what works best.
It's worth mentioning that TDD and doing "spike"[1] projects can help when designing a system, too.
[1] From Extreme Programming Adventures in C#, page 8: "Spike" is an Extreme Programming term meaning "experiment." We use the word because we think of a spike as a quick, almost brute-force experiment aimed at learning just one thing. Think of driving a big nail through a board.
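To make the idea concrete, here is a minimal sketch of what a spike might look like, in C# (the language of the book quoted above). The scenario and names are invented for illustration: a throwaway console program written to answer exactly one question before any real design depends on the answer.

    // Hypothetical spike: learn just one thing -- how DateTime.TryParse
    // treats a handful of ambiguous inputs -- before committing the real
    // application to a parsing strategy. Written to be thrown away.
    using System;
    using System.Globalization;

    class DateParsingSpike
    {
        static void Main()
        {
            string[] samples = { "01/02/2003", "2003-02-01", "1 Feb 03", "not a date" };

            foreach (var s in samples)
            {
                bool ok = DateTime.TryParse(s, CultureInfo.InvariantCulture,
                                            DateTimeStyles.None, out DateTime result);
                Console.WriteLine(ok
                    ? $"'{s}' -> {result:yyyy-MM-dd}"
                    : $"'{s}' -> failed to parse");
            }
        }
    }

The output itself is the deliverable: once the question is answered, the spike can be deleted and the real design can proceed on firmer ground.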
For small or very bounded tasks, I think developers almost universally agree that any sort of diagram is an unnecessary step.
However, when dealing with a larger, more complicated system, especially when two or more processes have to interact or complex business logic is needed, Process Activity Diagrams can be extremely useful. We use fairly pure agile methods in development and find these are almost the only type of diagram we use. It is amazing how much you can optimize a high-level design just by having all the big pieces in front of you and connecting them with flow lines. I can't stress enough how important it is to tailor the diagram to your problem, not the other way around; so while the link gives a good starting point, simply add whatever makes sense to represent your system or problem.
As for storage, a whiteboard can be great for brainstorming while the idea is still being refined, but I'd argue that electronic storage and a wiki are better once the idea is taking fairly final shape (OmniGraffle is the king of diagramming, if you are lucky enough to be able to use a Mac at work). Having an area where you dump all these diagrams can be extremely helpful for someone new to get a quick grasp of an overall piece of the system without having to dig through code. Also, because activity diagrams represent larger logic blocks, there is not the issue of always having to keep them up to date. When you make a large change to a system, then yes, but hopefully you are using the existing diagram to plan the change anyway.
Read up on Kruchten's 4+1 Views.
Here's a way you can proceed.
Collect use cases into a use case diagram. This will show actors and use cases. The diagram doesn't start with a lot of details.
Prioritize the use cases to focus on high value use cases.
Write narratives. If you want, you can draw Activity diagrams.
The above is completely non-technical. UML imposes a few rules on the shapes to be used, but other than that, you can depict things in end-user terminology.
You can do noun analysis, drawing Class diagrams to illustrate the entities and relationships (a small sketch of this step appears after this list). At first, these will be in user terminology. No geeky technical content.
You can expand the activity diagrams or add sequence diagrams to show the processing model. This will start with end-user, non-technical depictions of processing.
You can iterate through the class and activity diagrams to move from analysis to design. At some point, you will have moved out of analysis and into engineering mode. Users may not want to see all of these pictures.
You can draw component diagrams for the development view and deployment diagrams for the physical view. These will also iterate as your conception of the system expands and refines.
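As a rough illustration of the noun-analysis step mentioned above, here is a hypothetical C# sketch. The narrative, entities, and member names are all invented: nouns from an imagined user story ("a customer places an order for one or more products") become candidate classes, still in user terminology, and the verb becomes a method.

    // Hypothetical noun analysis: nouns from the narrative become classes,
    // still in user terminology; the verb "places" becomes a method.
    using System;
    using System.Collections.Generic;

    class Customer
    {
        public string Name { get; set; } = "";
        public List<Order> Orders { get; } = new List<Order>();

        public Order PlaceOrder(IEnumerable<Product> products)
        {
            var order = new Order(this, products);
            Orders.Add(order);
            return order;
        }
    }

    class Order
    {
        public Customer Customer { get; }
        public List<Product> Products { get; }

        public Order(Customer customer, IEnumerable<Product> products)
        {
            Customer = customer;
            Products = new List<Product>(products);
        }
    }

    class Product
    {
        public string Description { get; set; } = "";
    }

    class Demo
    {
        static void Main()
        {
            var alice = new Customer { Name = "Alice" };
            var order = alice.PlaceOrder(new[] { new Product { Description = "Widget" } });
            Console.WriteLine($"{alice.Name} has {alice.Orders.Count} order(s), " +
                              $"the first with {order.Products.Count} product(s).");
        }
    }

At this stage the classes deliberately carry no persistence or UI concerns; they only record the vocabulary the users themselves use, which is what keeps the early diagrams non-technical.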
When designing an application (I mainly create web applications, but this does apply to others), I typically create user stories to determine exactly what the end user really needs. These form the typical "business requirements".
After the user stories are nailed down, I create flow charts to lay out the paths that people will take when using the app.
After that step (which sometimes goes through an approval process), I create interface sketches (pen/pencil and graph paper) and begin laying out the databases. This and the next step are usually the most time-consuming.
The next step is taking the sketches and turning them into cleaned-up wireframes. I use OmniGraffle for this step -- it's light years ahead of Visio.
After this, you may want to do typical UML diagrams, or other object layouts / functionality organization, but the business people aren't going to care so much about that kind of detail :)
When I'm putting together a design, I'm concerned with conveying the ideas cleanly and easily to the audience. That audience is made up of (typically) different folks with different backgrounds. What I don't want to do is get into "teaching mode" for a particular design model. If I have to spend considerable time telling my customer what the arrow with the solid head means and how it is different from the one that is hollow or what a square means versus a circle, I'm not making progress - at least not the progress I want to.
If it is reasonably informal, I'll sketch it out on a whiteboard or on some paper - blocks and simple arrows at most. The goal of the rough design at this stage is to make sure we're on the same page. It will vary by customer, though.
If it is more formal, I might pull out a UML tool and put together some diagrams, but mostly my customers don't write software and are probably only marginally interested in the innards. We keep it at the "bubble and line" level and might put together some bulleted lists where clarification is needed. My customers typically don't want to see class diagrams or anything like that.
If we need to show some GUI interaction, I'll throw together some simple window prototypes in Visual Studio - it is quick and easy. I've found that the customer can relate to that pretty easily.
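For flavor, here is a minimal sketch of the kind of quick, throwaway window prototype described above, in C# WinForms. The form, controls, and canned behavior are all invented for illustration; a real prototype would mirror whatever interaction the customer needs to see.

    // Hypothetical throwaway prototype: a single form with just enough
    // controls to walk a customer through a proposed "search" interaction.
    // Nothing is wired to real logic -- it exists only to be looked at.
    using System;
    using System.Windows.Forms;

    class PrototypeForm : Form
    {
        PrototypeForm()
        {
            Text = "Order Search (prototype)";
            var searchBox = new TextBox { Left = 10, Top = 10, Width = 200 };
            var searchButton = new Button { Text = "Search", Left = 220, Top = 8 };
            var results = new ListBox { Left = 10, Top = 40, Width = 290, Height = 200 };

            // Canned results stand in for the real query.
            searchButton.Click += (s, e) =>
            {
                results.Items.Clear();
                results.Items.Add($"3 fake orders matching '{searchBox.Text}'");
            };

            Controls.AddRange(new Control[] { searchBox, searchButton, results });
        }

        [STAThread]
        static void Main() => Application.Run(new PrototypeForm());
    }

The point is the turnaround time: a form like this takes minutes, which is exactly what makes it safe to throw away once the conversation moves on.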
In a nutshell, I produce simple drawings (in some format) that can communicate the design to the interested parties and stakeholders. I make sure I know what I want it to do and, more importantly, what THEY NEED it to do, and talk to that. It typically doesn't get into the weeds, because folks get lost there and I don't find it time well spent to diagram everything to the nth degree. Ultimately, if the customer and I (and all other parties concerned) are on the same page after talking through the design, I'm a happy guy.
I'm an Agile guy, so I tend not to put a lot of time into diagramming. There are certainly times when sketching something on a whiteboard or a napkin will help ensure that you understand a particular problem or requirement, but nothing really beats getting working software in front of a customer so they can see how it works. If you are in a situation where your customers would accept frequent iterations and demos over up-front design, I say go for it. It's even better if they are okay with getting early feedback in the form of passing unit or integration tests (something like Fit works well here).
I generally dislike prototypes, because far too often the prototype becomes the final product. I had the misfortune of working on a project which was basically extending a commercial offering which turned out to be a "proof of concept" that got packaged and sold. The project was canceled after over 1000 defects were logged against the core application (not counting any enhancements or customizations we were currently working on).
I've tried using UML and found that unless the person looking at the diagrams understands UML, they are of little help. Screen mock-ups are generally not a bad idea, but they only show the side of the application which directly affects the user, so you don't get much mileage for anything that isn't presentation. Oddly enough, tools like the Workflow designer in Visual Studio produce pretty clear diagrams that are easy for non-developers to understand, so it makes a good tool for generating core application workflow, if your application is complex enough to benefit from it.
Still, of all the approaches I've used over the years, nothing beats a user getting their hands on something to let you know how well you understand the problem.
I suggest reading Joel's articles on "Painless Functional Specifications". Part 1 is titled "Why Bother?".
We use Mockup Screens at work ("Quick and Easy Screen Prototypes"). It's easy to alter screens, and the sketches make it clear that this is only a design.
The mockups are then included in a Word document containing the spec.
From Conceptual Blockbusting: A Guide To Better Ideas by James L. Adams:
"Intellectual blocks result in an inefficient choice of mental tactics or a shortage of intellectual ammunition. . . . 1. Solving the problem using an incorrect language (verbal, mathematical, visual) -- as in trying to solve a problem mathematically when it can more easily be accomplished visually" (pg. 71, 3rd Edition)
Needless to say, if you choose to use diagrams to capture ideas that may be better captured with mathematics, that's equally bad. The trick, of course, is to find the right language to express both the problem and the solution. And, of course, it may be appropriate to use more than one language to express them.
My point is that you're assuming that diagrams are the best way to go. I'm not sure that I would be willing to make that assumption. You may get a better answer (and the customer may be happier with the result) via some other method of framing requirements and proposed designs.
By the way, Conceptual Blockbusting is highly recommended reading.
The UML advice works well if you're working on a large, risk-averse project with a lot of stakeholders and lots of contributors. Even on those projects, it really helps to develop a prototype to show to the decision makers. Usually walking them through the UI and a typical user story is quite sufficient. That said, beware that focusing decision makers on the UI will tend to make them neglect significant backend issues such as validation, business rules, and data integrity. They will tend to write these off as "technical" issues rather than business decisions.
If, on the other hand, you're working on an Agile project where it's possible to make code changes quickly (and roll back mistakes quickly), you may be able to build an evolutionary prototype with all the works. Your application's architecture should be supple and flexible enough to support quick change (e.g. the naked objects design pattern or Rails-style MVC). It helps to have a development culture that encourages experimentation and acknowledges that BDUF (Big Design Up Front) is no predictor of working, successful software.
4+1 views are good only for technical people. And only if they are interested enough. Remember those last dozen times you struggled to discuss use-case diagrams with the customer?
The only thing I found that works with everybody is in fact showing them screens of your application. You said yourself: a picture is worth a thousand words.
Curiously, there are two approaches that worked for me:
Present users with a complete user manual (before development even starts), OR
Use mockups that don't look at all like the finished app: discuss the main screens of your app first; when satisfied, proceed to discussing the mockups, one scenario at a time.
For option (1) you can use whatever you want, it doesn't really matter.
For option (2) it's completely fine to start with pen and paper, but you are soon better off using a specialized mockup tool (as soon as you need to edit, maintain, or organize your mockups).
I ended up writing my own mockup tool back in 2005; it became pretty popular: MockupScreens
And here is the most complete list of mockup tools I know of. Many of those are free: http://c2.com/cgi/wiki?GuiPrototypingTools

Resources