Letter O considered harmful? [closed] - history

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
Back in the day, the FORTRAN standards committee reviewed a technical proposal called "Letter O considered harmful". I used to be able to find a link to the text of this proposal on the net, but it seems to have disappeared since the last time I looked for it -- the link disappeared off the relevant Wikipedia page and the only Google hits for the term are references back to Wikipedia. Does anyone happen to know a good repository of information about FORTRAN so that I could track it down, or even better, have a link to the proposal itself?

You are indeed correct!
Yes, there was such a proposal (entitled "Letter 'O' Considered Harmful")
in the official set of documents supplied to voting members at the November 1976 meeting of X3J3 that was held at Brookhaven National Laboratory. (At this same meeting, the committee chose "Fortran 77", with six lower-case letters, as the name for this revision of the language.)
I am able to verify this because I was not only the host for this meeting but also the actual author of this anonymous "proposal". As such, I enlisted the typist (my boss's secretary, Bette) to type up this phony "proposal" in the proper format and slip it into the official distribution provided at the meeting place (Conference Room B of Berkner Hall).
Loren Meissner was so amused by it that he wrote a little item in a Fortran publication for which he was editor. Walt Brainerd also mentioned it in his publication. I had sworn both of them to secrecy regarding my little prank, so those articles did not identify me. (Sorry, I don't recall the names of these two publications.)
The lists of pro and con arguments (as was typical of X3J3 proposals in those days) included:
Restoring the number of Fortran characters to 48 (by omitting 'O' to counterbalance the addition of the colon ':')
Solving ambiguities caused by nested DO loops.
Eliminating problems with (deprecated) Hollerith specifications in FORMAT statements.
Preventing misuse of GO TO statements.
while the "con" list contained only one objection (with a disclaimer):
This proposal may invalidate some existing FORTRAN programs, but most of these are probably "standard-conforming" anyway.

I think this is the guy to ask: Bruce A. Martin. He seems* to be the one who originally posted it on Wikipedia, and he puts himself as working at Brookhaven (where the article was circulated) at the same time.
The citation he gives on Wikipedia for the article is:
X3J3 post-meeting distribution for meeting held at Brookhaven National Laboratory in November 1976.
(* the user page for the user that posted it links to the website as being their material)

Mentioned on Wikipedia, referred to as a joke / folklore. Doesn't surprise me TBH.


What is Information Architecture and Information Design? How can I practise that? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
I am learning User Experience. People say Information Architecture is very important to UX. I am studying IA online, but I wonder how I can practise it. Please help.
Thanks.
I'm an IA.
Information Architecture is a discipline that can be considered part of the "User Experience" domain. For a good look at this, view "Elements of User Experience" (pdf) by Jesse James Garrett to see where it fits.
Suggested starting point on Information Architecture: read the 'Polar Bear Book': "Information Architecture for the Web and Beyond, 4th ed", Rosenfeld, Morville and Arango. This is the classic text on the field. Its definition of what IAs do (connect users with content through context) is an old one, but the simplest explanation. (For example, today I've been working on a site section which changes in response to how far down a timeline a user is; it's using the timeline context to get the user the right content.)
Boxes and Arrows is a site that has focused a lot on the IA discipline, though it's changed a bit in recent years (as has the IA landscape). It's worth your time to take a look.
The Information Architecture Summit (in Lyon, France for 2018; in the US, most likely, in 2019) is a great place to meet information architects, understand the community, and get some starting points. The people who defined the domain are often there, and easy to talk to.
One way to think about what an IA does for a page/site/app/etc. is that the IA designs the information hierarchy so the user can find the important things, so that all things are as findable as they can be, and so the user understands where she is and what she can do at all times in a site.
An Interaction Designer designs user flow through a series of interactions (not necessarily pages)
A UX Designer manages the overall experience of the site, designs it to be coherent and satisfying, while hitting business goals
A Visual designer adds visual flair, while serving the usability, coherence, brand and overall experience of the site.
Information Design is a completely different discipline from Information architecture.
A couple of well-known Information Design leaders are Edward Tufte and Stephen Few. They are concerned with users' ability to understand your presented dataset.
They do this with good design heuristics ("maximize the data/ink ratio") and critique ("don't use chartjunk"), as well as great basic advice ("if you don't have a story to tell, you don't have anything to present". "Steal with abandon. There's no reason to re-solve a problem that has been solved for 100 years").
They also appear on Jesse James Garrett's chart on the Elements of UX.
Hope this gives you some useful tips for where to begin.

Garble every third line of code, using Vim, before sending to a prospective client? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I have a specific problem. I want to sell one of my models, programmed in the R programming language. I want to show the prospective client that there is a lot of advanced work in the code, and there is a lot of it, representing about 700 hours of R&D (around 2000 lines of R code). So, I want to send him the code. To impress him.
However I obviously don't want to disclose the full workings of the code, so I was thinking of garbling every third or fourth line, so that it cannot simply be OCR'd and replicated. I don't want to go down the NDA route, nor is the client adept at programming (wouldn't be able to replicate it himself - though could hire a programmer I guess). I'd also probably garble completely one or two key functions.
How would I do this in Vi / Vim?
Is there any other way of solving my issue that someone could suggest?
Yes I know I could show him the output of the program as the sales pitch, which I have already done, but we're haggling on pricing so a bit of "blinding with science" through a code listing, to see how much work is involved, won't hurt. It is my experience that many non-programmers have no idea of how much work can go into a piece of software.
You should move/repost your question to programmers.stackexchange.com, it seems more fitting.
However, I think that if you are trying to sell something to someone non-technical, showing the code won't work. 2000 lines of code is not that much, and he/she won't be able to gauge its value by reading some incomprehensible symbols.
Rather, you should show the added value of your code for his/her business. So get a set of data (potentially from your customer); extracting and showing relevant information from that set should be more impressive. I should add that the price of your model depends not only on the work involved, but also on the potential benefits for your customer.
A piece of code only solves a problem; you could sell it for anywhere from $10 to $100 million depending on the problem solved.
I fully back the points brought up by Xavier T., but if you still think you need to show some representation of the full body of code to make an impression about the amount of your work, I'd either:
Create a printout with a very small font just to get the structure across (e.g. :set printfont=Courier_h4); obviously, this only works for paper copies, not PDFs.
Obfuscate by replacing all alphabetic letters with x, as in :%substitute/\a/x/g
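If a scripted approach outside Vim is also acceptable, the asker's every-third-line garbling can be sketched in a few lines. This is a minimal illustration only: the function name is made up, and the "replace letters with x" rule simply mirrors the :%substitute/\a/x/g suggestion above, applied selectively.

```python
import re

def garble(text: str, every: int = 3) -> str:
    """Replace alphabetic characters with 'x' on every `every`-th line,
    leaving the remaining lines readable."""
    out = []
    for i, line in enumerate(text.splitlines(), start=1):
        if i % every == 0:
            # Same effect as Vim's :substitute/\a/x/g on this line only
            line = re.sub(r"[A-Za-z]", "x", line)
        out.append(line)
    return "\n".join(out)
```

Digits and punctuation survive, so the overall shape of the code listing is preserved while the garbled lines become unusable.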

Is there a search engine that will give a direct answer? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more.
Closed 7 years ago.
I've been wondering about this for a while and I can't see why Google haven't tried it yet - or maybe they have and I just don't know about it.
Is there a search engine that you can type a question into which will give you a single answer rather than a list of results which you then have to trawl through yourself to find what you want to know?
For example, this is how I would design the system:
User’s input: “Where do you go to get your eyes tested?”
System output: “Opticians. Certainty: 95%”
This would be calculated as follows:
The input is parsed from natural language into a simple search string, probably something like “eye testing” in this case. The term “Where do you go” would also be interpreted by the system and used when comparing results.
The search string would be fed into a search engine.
The system would then compare the contents of the results to find matching words or phrases taking note of what the question is asking (i.e. what, where, who, how etc.)
Once a suitable answer is determined, the system displays it to the user along with a measure of how sure it is that the answer is correct.
Due to the dispersed nature of the Internet, a correct answer is likely to appear multiple times, especially for simple questions. For this particular example, it wouldn’t be too hard for the system to recognise that this word keeps cropping up in the results and that it is almost certainly the answer being searched for.
For more complicated questions, a lower certainty would be shown, and possibly multiple results with different levels of certainty. The user would also be offered the chance to see the sources which the system calculated the results from.
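The voting heuristic in the steps above can be sketched as a toy script. Everything here is illustrative (the function name, the stop-word list, and the "certainty" formula are assumptions, not part of any real engine): it counts how often each non-query word recurs across result snippets and treats the most frequent one as the candidate answer.

```python
import re
from collections import Counter

# Words from the query itself (and filler) that should not count as answers.
STOP = {"the", "a", "to", "you", "your", "do", "go", "where", "get",
        "eyes", "tested"}

def best_answer(snippets):
    """Return (candidate_word, certainty) by voting across result snippets.
    Certainty is the candidate's share of all counted words."""
    votes = Counter()
    for snippet in snippets:
        for word in re.findall(r"[a-z']+", snippet.lower()):
            if word not in STOP:
                votes[word] += 1
    if not votes:
        return None, 0.0
    word, count = votes.most_common(1)[0]
    return word, count / sum(votes.values())
```

For the eye-test example, feeding in a handful of snippets that each mention "opticians" makes that word dominate the count, which is exactly the "this word keeps cropping up" effect described above.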
The point of this system is that it simplifies searching. Many times when we use a search engine, we’re just looking for something really simple or trivial. Returning a long list of results doesn’t seem like the most efficient way of answering the question, even though the answer is almost certainly hidden away in those results.
Just take a look at the Google results for the above question to see my point:
http://www.google.co.uk/webhp?sourceid=chrome-instant&ie=UTF-8&ion=1&nord=1#sclient=psy&hl=en&safe=off&nord=1&site=webhp&source=hp&q=Where%20do%20you%20go%20to%20get%20your%20eyes%20tested%3F&aq=&aqi=&aql=&oq=&pbx=1&fp=72566eb257565894&fp=72566eb257565894&ion=1
The results given don't immediately answer the question - they need to be searched through by the user before the answer they really want is found. Search engines are great directories. They're really good for giving you more information about a subject, or telling you where to find a service, but they're not so good at answering direct questions.
There are many aspects that would have to be considered when creating the system – for example a website’s accuracy would have to be taken into account when calculating results.
Although the system should work well for simple questions, it may be quite a task to make it work for more complicated ones. For example, common misconceptions would need to be handled as a special case. If the system finds evidence that the user’s question has a common misconception as an answer, it should either point this out when providing the answer, or even simply disregard the most common answer in favour of the one provided by the website that points out that it is a common misconception. This would all have to be weighed up by comparing the accuracy and quality of conflicting sources.
It's an interesting question and would involve a lot of research, but surely it would be worth the time and effort? It wouldn't always be right, but it would make simple queries a lot quicker for the user.
Such a system is called an automatic Question Answering (QA) system, or a Natural Language search engine. It is not to be confused with a social Question Answering service, where answers are produced by humans. QA is a well studied area, as evidenced by almost a decade of TREC QA track publications, but it is one of the more difficult tasks in the field of natural language processing (NLP) because it requires a wide range of intelligence (parsing, search, information extraction, coreference, inference). This may explain why there are relatively few freely available online systems today, most of which are more like demos. Several include:
AnswerBus
START - MIT
QuALiM - Microsoft
TextMap - ISI
askEd!
Wolfram Alpha
Major search engines have shown interest in question answering technology. In an interview on Jun 1, 2011, Eric Schmidt said Google’s new strategy for search is to provide answers, not just links. "'We can literally compute the right answer,' said Schmidt, referencing advances in artificial intelligence technology" (source).
Matthew Goltzbach, head of products for Google Enterprise has stated that "Question answering is the future of enterprise search." Yahoo has also forecasted that the future of search involves users getting real-time answers instead of links. These big players are incrementally introducing QA technology as a supplement to other kinds of search results, as seen in Google's "short answers".
While IBM's Jeopardy-playing Watson has done much to popularize machines answering questions (or answers), many real-world challenges remain in the general form of question answering.
See also the related question on open source QA frameworks.
Update:
2013/03/14: Google and Bing search execs discuss how search is evolving to conversational question answering (AllThingsD)
Wolfram Alpha
http://www.wolframalpha.com/
Wolfram Alpha (styled Wolfram|Alpha) is an answer engine developed by Wolfram Research. It is an online service that answers factual queries directly by computing the answer from structured data, rather than providing a list of documents or web pages that might contain the answer as a search engine would. It was announced in March 2009 by Stephen Wolfram, and was released to the public on May 15, 2009. It was voted the greatest computer innovation of 2009 by Popular Science.
http://en.wikipedia.org/wiki/Wolfram_Alpha
Have you tried wolframalpha?
Have a look at this: http://www.wolframalpha.com/input/?i=who+is+the+president+of+brasil%3F
Ask Jeeves, now Ask.com, used to do this. As for why nobody does this anymore, except Wolfram:
Question Answering (QA) is far from a solved problem.
There exist strong question answering systems, but they require full parsing of both the question and the data and therefore require tremendous amounts of computing power and storage, even compared to Google scale, to get any coverage.
Most web data is too noisy to handle; you first have to detect if it's in a language you support (or translate it, as some researchers have done; search for "cross-lingual question answering"), then try to detect noise, then parse. You lose more coverage.
The internet changes at lightning pace. You lose even more coverage.
Users have gotten accustomed to keyword search, so that's much more economical.
Powerset, acquired by Microsoft, is also trying to do question answering. They call their product a "natural language search engine" where you can type in a question such as "Which US State has the highest income tax?" and search on the question instead of using keywords.

What became of 'The last one'? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 6 years ago.
From Wikipedia:
The Last One was a unique software program in 1981 which took input from a user and generated a program in BASIC which could then be run. It is an example of a program generator.
The software was not a programming language, since unlike most programming languages, programs were generated by the user selecting options from menus that would form the basis of the generated code. This was done in a logical sequence that would eventually cause a program to be generated in BASIC. At any time, the user could elect to view a flow chart showing the current progress of the program's design.
But Wikipedia didn't say what became of this program. How popular/unpopular was it, and how many people use it? How and when did it meet its demise, or is it still available?
More information available here.
Here's the current story AFAICT: this article mentions that the consulting firm they formed way back then to put TLO into play was named DJ `AI' Systems and is now tloconsultants.com (tlo == The Last One). Cha-ching :-)
My guess (after a 2min site scan) is that they grew their business by continually expanding what appears to be business-oriented expert system "modules" that the generated code ran against (and also perhaps even assisted in or guided some of the code generation, most likely for the code that targeted its own routines) and then incorporate the knowledge of how to use the new modules back into TLO. Very impressive, especially for 1981 and with the engine that knew when it didn't know enough -- ScHrIaTp! I wish my manager had 1/10th that functionality.
And you gotta love that it took five minutes to generate 100 bug-free lines of BASIC code.
I'm curious as to whether they ever "closed the loop" (my term) because I didn't see it mentioned (as I didn't fully read it due to that dang corporate job and its fake-time-based insanity) as to whether they actually reached the point where its own representation was manipulated within it in order to generate the next version of TLO itself. The name "The Last One" suggests to me that David James fully understood the meaning of manifesting a piece of software capable of presenting its own representation to the user (== programmer) for modification with the end purpose being to generate its own subsequent version.
All such self-repping-and-editing programs (live processes are IMO far more difficult while being also tantalizingly more interesting) are actually, from my perspective, equivalent in the sense that they are all 'functions that transform functions that transform functions' (how about 'FtTFtTF's -- appropriately absurd and lovely, IMO :-)
Trying to wrap one's head around how to implement such a beautiful piece of software in the face of its myriad possibilities is the kind of programming puzzle that brings home why MDD is both the current brightest idea while simultaneously being rarely used in real-world projects. Your brain better be firing on ALL cylinders to go treading that path. How long has it taken Simonyi and his billions?
I am also curious as to whether there are infinite variations of FtTFtTFs or just lots and lots of lots of them.
Enjoy!
"Lasting Peace and Happiness for all Human Beings!"
Well, I found a blog article by a person who did a major interview with the creator of "The Last One". At the time of the writing of the article (2007), he was still working with one of the creators of "The Last One". You can probably ask him what became of it.
TRUE STORY!!!
I was the director when TLO first came to America from England. The company spent so much time trying to find the right marketing avenue that the bubble passed them by. We did 180 seminars with crowds of 50 to 100 each, in as many days. There was Scot Norton, Gil Savage, Rodger David, Richard Housand, and me, Michael Bartolucci. We had an exclusive for the US, which I cry about every time I think about it. We decided to write an accounts receivable package and give it away with the program. Then within a week it changed to General Ledger, then AP, and so on. Had we taken one idea (AR) and run with it, I think we could have had our dream come true. It was a viable program. We took a voice generator that was presented at the 1981 Computer Conference and teamed up with them. I wrote a BASIC program while in front of 50 press members (mostly from Europe). It was error-free, took about 20 minutes, and created a simple database that would add, change, and delete members of the database from a central menu. We did this on the third day of the conference in Houston, TX. When our marketing failed, so did the company. I understand the original company took it into receivership and decided not to pursue it further.
That was my second job in as many years. I continued on for another 38 more very successful years in the computer field.
The next step of evolution were 4GL languages and CASE tools. After that, we have UML and today, MDD.
All of those come with a greater or lesser amount of tool support to generate code from some abstract "input". All of them more or less failed for the general case, since the general case isn't abstract enough to map to some formal and simple input.
Today, MDD is a solution for highly repetitive tasks and other programming tasks that can be easily abstracted. Think "copy data out of XML" (highly abstract, good tool support) vs. "calculating the gravity field of a black hole" (very specific, no reuse, little tool support).
[EDIT] As for the history of "The Last One", probably no one adopted it. Code generators always were a bit neglected. My guess is that this is because of the many pitfalls: If you need a million lines of code that look all the same, then a code generator is really cool. But you never need that. You need a million lines of code that are somewhat similar, where "somewhat" is often different from line to line.
But if no one here can answer what happened to the old program, I suggest to ask this question on the respective Wikipedia discussion page (see "discussion" at the top of the wiki page). People who wrote the article might know.
The Last One (TLO) was written by a bloke called David James, who was funded by "Scotty" Banbury, at the time a businessman whose main interest was a company called "Hilltop Tyres", based near Axminster in Devon.
It started life as a simple program generator on 6502 based machines, particularly the Commodore Pet and the Apple II. After a while, David dropped out and Scotty morphed into the principal author. He recoded the product as a meta generator, creating a new language which could, in theory, be retargeted at other languages. He spent a lot of time on C as a target but I don't know if he got that going, as I lost contact with Scotty and the product in the early 'nineties.
These language generators were popular at that time, another being Sycero/DB which could generate both Clipper/DBase code and quite clean ANSI C.
When first put on the market, both TLO and Sycero were useful tools for the bottom end of the market and their output was used even by quite large companies. The problem was that they generally used canned modules and simple substitution to create the target programs, although I think Scotty was experimenting with something that looked a bit like a bi-directional parser, able to translate BASIC into TLO as well as the other way around.

World's First Computer Programming _Language_? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
OK -- a bit of an undefined question (is the pattern of plugs in an Eniac plugboard a language ??) but contenders include:
Konrad Zuse's Plankalkül (1940s): never implemented (generally accepted as the first).
Whatever Ada Lovelace (1840s) programmed in (not Ada): if she is the first programmer, as everyone says, she must have used the first programming language, no? Again, probably never implemented, but did Babbage have anything that could be called a language?
Turing's description of his Turing machine (1936 paper): in the paper he actually writes programs and simulates their execution mathematically; that makes it as good as (and earlier than) Plankalkül in my book.
Autocode for the Manchester Mark 1 computer (1952): compiled, high level, beats Fortran to the punch (?). Mr Turing again (!).
Fortran (early 1950s): beats out Lisp by a couple of years and undoubtedly passes the sniff test. But was it earlier than Mark 1 Autocode??
The PBS series Connections made the argument that the holes punched in tiles to control the patterns created on looms (circa 1700s??) were the first programming "language".
These were followed by player piano scrolls: Codes, on paper, which are read by, and control the operation of a machine. That's a programming language, isn't it?
DNA -- or does it have to involve silicon computers? ;-)
Since Ada Lovelace is widely regarded as the first programmer, I'd investigate what she called the set of symbols she was using.
Update: You can read the notation that Lovelace used in her Notes on Sketch of The Analytical Engine Invented by Charles Babbage By L. F. MENABREA. Lovelace was the translator, but her notes describing the programming of the Analytical Engine ended up being about four times longer than the original publication.
I think we need to agree on a definition of "programming language" to answer this question in any useful way. Is directly manipulating machine code a programming language?
Konrad Zuse's Plankalkül (1940s) - never implemented
There was actually an implementation of the language published by Rojas et al. somewhere around the year 2000.
DNA -- or does it have to involve silicon computers? ;-)
Well, if you go down that road then the correct answer has to be RNA which existed before DNA. But then, do we have a Blind Programmer? ;-)
In the beginning there was Ada Lovelace. Then Bill said 'Let there be C#', and there was light!!
Assuming a definition of "programming language" as "a textual notation used to describe/control the intended behavior of a digital computer", I think there's only one possible answer: raw (numerical) machine code.
Many of the other answers (e.g. recipes for cooking) are clever, but aren't about programming per se, but about description/control in a different context or more general sense.
I would say that the first programming language actually used was the machine language of the first stored program computer, which I believe was Baby: http://www.computer50.org/
The language the analytical engine would have used was its own machine code, entered via punch cards indicating the operation to be performed and the columns (effectively registers) to perform it to. See these notes for some details.
Programming, at least in the imperative sense, comes down to combinations of sequence, alternation, and repetition. One might consider recipe authors as programmers, and therefore very early ones. Think about a recipe: it contains sequence (slice this, then chop that, then heat so and so...), alternation (if you want it moist then bake for 40 minutes, else if you want it "cakey" bake for 55 minutes), and repetition (while the dough is not stiff, knead it; repeat stirring until the batter is smooth). Recipes go back thousands of years.
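The recipe analogy above maps directly onto the three constructs. A toy sketch (entirely illustrative; the step names and limits are made up):

```python
def bake(moist: bool, stiff: bool) -> str:
    """Render a 'recipe program' as a list of steps."""
    # Sequence: slice, then chop, then heat
    steps = ["slice", "chop", "heat"]
    # Alternation: bake time depends on the desired texture
    steps.append("bake 40 min" if moist else "bake 55 min")
    # Repetition: knead until the dough is stiff (capped at 3 rounds)
    kneads = 0
    while not stiff and kneads < 3:
        steps.append("knead")
        kneads += 1
        stiff = kneads >= 3
    return ", ".join(steps)
```

Each control structure appears exactly once, which is the point of the analogy: the recipe author was already composing sequence, alternation, and repetition long before electronic computers existed.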
